From xen-devel-bounces@lists.xenproject.org Sat Jun 01 00:53:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 00:53:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.733919.1140181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDCzl-00004t-8U; Sat, 01 Jun 2024 00:53:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 733919.1140181; Sat, 01 Jun 2024 00:53:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDCzl-0008WS-5U; Sat, 01 Jun 2024 00:53:17 +0000
Received: by outflank-mailman (input) for mailman id 733919;
 Sat, 01 Jun 2024 00:53:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDCzj-0008WI-8J; Sat, 01 Jun 2024 00:53:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDCzj-000468-4I; Sat, 01 Jun 2024 00:53:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDCzi-0000Rn-KJ; Sat, 01 Jun 2024 00:53:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDCzi-0006Vu-Js; Sat, 01 Jun 2024 00:53:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+cc/+fXbOxKUBlvM6VCQ2SQDyR6eFg2HdcEB7qfhV0I=; b=yjXA5fx9y1s3AI/68su5ecW/st
	EbeYOYfBy/ONYrSh3uXPBqYDy5+EbqvR4FmCrXHbNwn3INgqONmmlBVLZds1ReJfDiSDFr21041ub
	AKpXYOaf7TrsEwpe0iWIa6jxGl8o0glx1Vf/A0ZxLxZXSE0NyrNBynndhVmbA1qjswA4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186213: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=03147e6837ff045dbc328be876b9600f7040c771
X-Osstest-Versions-That:
    xen=1250c73c1ae2eec0308b4efe3e345127e9dbdb2b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 00:53:14 +0000

flight 186213 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186213/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  03147e6837ff045dbc328be876b9600f7040c771
baseline version:
 xen                  1250c73c1ae2eec0308b4efe3e345127e9dbdb2b

Last test of basis   186204  2024-05-31 01:02:04 Z    0 days
Testing same since   186213  2024-05-31 22:02:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1250c73c1a..03147e6837  03147e6837ff045dbc328be876b9600f7040c771 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 01:48:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 01:48:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.733929.1140190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDDql-0004vK-7b; Sat, 01 Jun 2024 01:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 733929.1140190; Sat, 01 Jun 2024 01:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDDql-0004vD-52; Sat, 01 Jun 2024 01:48:03 +0000
Received: by outflank-mailman (input) for mailman id 733929;
 Sat, 01 Jun 2024 01:48:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDDqk-0004v7-HY
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 01:48:02 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id faa58236-1fb8-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 03:48:01 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-420180b5838so17618195e9.2
 for <xen-devel@lists.xenproject.org>; Fri, 31 May 2024 18:48:01 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4213411b25dsm16047185e9.40.2024.05.31.18.47.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 31 May 2024 18:48:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: faa58236-1fb8-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717206480; x=1717811280; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gRVJdamzZHnCkHLExHkOawpaAPsoz9HUdRwUAkbcyzQ=;
        b=Q8TWH9tnssvdRhF9pyrR2VzqAGzU8ysAIQg1QfuftyJc2quB16sN6XAe0fLZNQb/Y3
         TQTGoL2SwSsghj/fm1HNbkiimSRvbc8jMVY+9Flt34Nbt0Aoxl6CkipA4+0OhCA3iUyl
         yakPk/LuxgTjKrN9Id9eU99OeHe80d92CUqtY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717206480; x=1717811280;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gRVJdamzZHnCkHLExHkOawpaAPsoz9HUdRwUAkbcyzQ=;
        b=G77XnWzuPOhQf5F99vQLnn9DIt7JNEVhZvuLZ0xjOMal1DsAQObmYLtO05o1CNfxqm
         f1rS8j/yoQuDdx83iF58wHIQg1k3vrVJz67+gdjQK3Ya56ySGKrnUFbPpquZLr6v946h
         Cyp/hX3/mm9sfgOrub1Bel6McEEJXEEQRnyyWfUzSfUS7I/zjO8ceEfxLOSXguM9+dFs
         TOCXWRMsIMvc1ji4Q/HkuP00U+3c47jFqT7JsOsUNpZAK49crgxAM4qoqcns3uQX9luq
         jsgNbkAbkMjsXqA08XYIda7pf2in9XoSrtMox5vHmKQSWN/XhbfKOCyG8jzz8d7CSKYv
         rb2w==
X-Forwarded-Encrypted: i=1; AJvYcCUCdlqmJ5Ss0Fqj7UFqcb+E2qbeKME6jvrCd1SVV21hUGJbRtfJcsrrFp8aPZ6c7sDyXZEJa7tqIIOeKcAhK/bHti1T+plRmSUWilKlzl8=
X-Gm-Message-State: AOJu0YyP3FWyCYr2ckQVv5FOLpux09glwSN4Fp/fEtm5b/nB6XhOGv2Q
	pwNsS/26lFrWranmZAitmxjFjqRXoO7g6bfZILZslbeixO3EfSPzax2CKI2VqcU=
X-Google-Smtp-Source: AGHT+IHewf+TznFq+XhfNN1WsUvfeGqqeZUkOWjM7HLUVLd06daAQHeq6RUe1RBorj6puLkRT8l2mA==
X-Received: by 2002:a05:600c:34d2:b0:421:b39:9ea0 with SMTP id 5b1f17b1804b1-4212e05e489mr29077445e9.16.1717206480551;
        Fri, 31 May 2024 18:48:00 -0700 (PDT)
Message-ID: <b5a24f69-855d-49e8-b13b-7c9b2ee199f4@citrix.com>
Date: Sat, 1 Jun 2024 02:47:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 07/13] x86/bitops: Improve arch_ffs() in the general
 case
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 "consulting @ bugseng . com" <consulting@bugseng.com>,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Federico Serafini <federico.serafini@bugseng.com>,
 Nicola Vetrini <nicola.vetrini@bugseng.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240524200338.1232391-1-andrew.cooper3@citrix.com>
 <20240524200338.1232391-8-andrew.cooper3@citrix.com>
 <1660a2a7-cea4-4e6f-9286-0c134c34b6fb@suse.com>
 <57a47c76-c484-4309-8a87-a51f79dd48b6@suse.com>
 <b0838a62-1e6a-473a-a757-97091c84e164@citrix.com>
 <df7bb467-c778-43fb-bd04-f81f6e3dfd01@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <df7bb467-c778-43fb-bd04-f81f6e3dfd01@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 28/05/2024 2:12 pm, Jan Beulich wrote:
> On 28.05.2024 14:30, Andrew Cooper wrote:
>> On 27/05/2024 2:37 pm, Jan Beulich wrote:
>>> On 27.05.2024 15:27, Jan Beulich wrote:
>>>> On 24.05.2024 22:03, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/include/asm/bitops.h
>>>>> +++ b/xen/arch/x86/include/asm/bitops.h
>>>>> @@ -432,12 +432,28 @@ static inline int ffsl(unsigned long x)
>>>>>  
>>>>>  static always_inline unsigned int arch_ffs(unsigned int x)
>>>>>  {
>>>>> -    int r;
>>>>> +    unsigned int r;
>>>>> +
>>>>> +    if ( __builtin_constant_p(x > 0) && x > 0 )
>>>>> +    {
>>>>> +        /* Safe, when the compiler knows that x is nonzero. */
>>>>> +        asm ( "bsf %[val], %[res]"
>>>>> +              : [res] "=r" (r)
>>>>> +              : [val] "rm" (x) );
>>>>> +    }
>>>> In patch 11 relevant things are all in a single patch, making it easier
>>>> to spot that this is dead code: The sole caller already has a
>>>> __builtin_constant_p(), hence I don't see how the one here could ever
>>>> return true. With that the respective part of the description is then
>>>> questionable, too, I'm afraid: Where did you observe any actual effect
>>>> from this? Or if you did - what am I missing?
>>> Hmm, thinking about it: I suppose that's why you have
>>> __builtin_constant_p(x > 0), not __builtin_constant_p(x). I have to admit
>>> I'm (positively) surprised that the former may return true when the latter
>>> doesn't.
>> So was I, but this recommendation came straight from the GCC mailing
>> list.  And it really does work, even back in obsolete versions of GCC.
>>
>> __builtin_constant_p() operates on an expression not a value, and is
>> documented as such.
> Of course.
>
>>>  Nevertheless I'm inclined to think this deserves a brief comment.
>> There is a comment, and it's even visible in the snippet.
> The comment is about the asm(); it is neither placed to clearly relate
> to __builtin_constant_p(), nor is it saying anything about this specific
> property of it. You said you were equally surprised; don't you think
> that when both of us are surprised, a specific (even if brief) comment
> is warranted?

Spell it out for me like I'm an idiot.

Because I'm looking at the patch I submitted, and at your request for "a
brief comment", and I still have no idea what you think is wrong at the
moment.

I'm also not inclined to write a comment saying "go and read the GCC
manual more carefully".

>
>>> As an aside, to better match the comment inside the if()'s body, how about
>>>
>>>     if ( __builtin_constant_p(!!x) && x )
>>>
>>> ? That also may make a little more clear that this isn't just a style
>>> choice, but actually needed for the intended purpose.
>> I am not changing the logic.
>>
>> Apart from anything else, your suggestion is trivially buggy.  I care
>> about whether the RHS collapses to a constant, and the only way of doing
>> that correctly is asking the compiler about the *exact* expression. 
>> Asking about some other expression which you hope - but do not know -
>> that the compiler will treat equivalently is bogus.  It would be
>> strictly better to only take the else clause, than to have both halves
>> emitted.
>>
>> This is the form I've tested extensively.  It's also the clearest form
>> IMO.  You can experiment with alternative forms when we're not staring
>> down code freeze of 4.19.
> "Clearest form" is almost always a matter of taste. To me, comparing
> unsigned values with > or < against 0 is generally at least suspicious.
> Using != is typically better (again: imo), and simply omitting the != 0
> then is shorter with no difference in effect. Except in peculiar cases
> like this one, where indeed it took me some time to figure why the
> comparison operator may not be omitted.
>
> All that said: I'm not going to insist on any change; the R-b previously
> offered still stands. I would highly appreciate though if the (further)
> comment asked for could be added.
>
> What I definitely dislike here is you - not for the first time - turning
> down remarks because a change of yours is late.

Actually it's not to do with the release.  I'd reject it at any point
because it's an unreasonable request to make; to me, or to anyone else.

It would be a matter of taste (on which, again, you have a singular view),
if it weren't for the fact that what you actually said was:

"I don't like it, and you should discard all the careful analysis you
did because here's a form I prefer, that I haven't tested concerning a
behaviour I didn't even realise until this email."

and even if it wasn't a buggy suggestion to begin with, it's still toxic
maintainer feedback.


Frankly, I'd have more time to review other people's patches if I weren't
wasting all of my time on premium grade manure like this, while trying
to help Oleksii who's had it far worse this release trying to clean up
droppings of maintainers-past.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 02:30:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 02:30:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.733957.1140236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDEVF-000183-Iv; Sat, 01 Jun 2024 02:29:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 733957.1140236; Sat, 01 Jun 2024 02:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDEVF-00017w-GA; Sat, 01 Jun 2024 02:29:53 +0000
Received: by outflank-mailman (input) for mailman id 733957;
 Sat, 01 Jun 2024 02:29:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDEVD-00017m-TW; Sat, 01 Jun 2024 02:29:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDEVD-00010f-Pm; Sat, 01 Jun 2024 02:29:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDEVD-0002lE-EW; Sat, 01 Jun 2024 02:29:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDEVD-00032p-EA; Sat, 01 Jun 2024 02:29:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iiy4iSgzprx0grxPlFMw5DRIOjXv0GmgzwIWFQr49Y8=; b=R8hMiOjFOSa/2zqjj1gSQqhM4h
	tLu/Jc19DY44kqkDV3nMski1AAi0Gybm/dM/v+BFjvcxniLK+smrp2fno4TPZoIFOIjmnADa3xxsH
	ADKll4BuBBiL0sCINl5/v/b2ALDNixZui8SXK2U4+6k6a3MFsBpmUgmZZf99DYJIBaEk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186212-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186212: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d8ec19857b095b39d114ae299713bd8ea6c1e66a
X-Osstest-Versions-That:
    linux=4a4be1ad3a6efea16c56615f31117590fd881358
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 02:29:51 +0000

flight 186212 linux-linus real [real]
flight 186215 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186212/
http://logs.test-lab.xenproject.org/osstest/logs/186215/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   8 xen-boot            fail pass in 186215-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186215 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186215 never pass
 test-armhf-armhf-xl           8 xen-boot                     fail  like 186196
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186196
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 186196
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186196
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186202
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186202
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186202
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186202
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186202
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d8ec19857b095b39d114ae299713bd8ea6c1e66a
baseline version:
 linux                4a4be1ad3a6efea16c56615f31117590fd881358

Last test of basis   186202  2024-05-30 22:43:12 Z    1 days
Testing same since   186212  2024-05-31 19:10:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Lobakin <aleksander.lobakin@intel.com>
  Alexander Maltsev <keltar.gw@gmail.com>
  Alexander Mikhalitsyn <aleksandr.mikhalitsyn@canonical.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrii Nakryiko <andrii@kernel.org>
  Arun Ramadoss <arun.ramadoss@microchip.com>
  Carolina Jubran <cjubran@nvidia.com>
  Christian Brauner <brauner@kernel.org>
  Daniel Borkmann <daniel@iogearbox.net>
  Dave Ertman <david.m.ertman@intel.com>
  David S. Miller <davem@davemloft.net>
  Edward Adam Davis <eadavis@qq.com>
  Emil Renner Berthing <emil.renner.berthing@canonical.com>
  Eric Dumazet <edumazet@google.com>
  Eric Garver <eric@garver.life>
  Florian Westphal <fw@strlen.de>
  Friedrich Vock <friedrich.vock@gmx.de>
  Gal Pressman <gal@nvidia.com>
  Geliang Tang <tanggeliang@kylinos.cn>
  Hariprasad Kelam <hkelam@marvell.com>
  Hengqi Chen <hengqi.chen@gmail.com>
  Horatiu Vultur <horatiu.vultur@microchip.com>
  Hui Wang <hui.wang@canonical.com>
  Ido Schimmel <idosch@nvidia.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jerry Ray <jerry.ray@microchip.com>
  Jiri Olsa <jolsa@kernel.org>
  John Fastabend <john.fastabend@gmail.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Karim Ben Houcine <karim.benhoucine@landisgyr.com>
  Kory Maincent <kory.maincent@bootlin.com>
  Krishneil Singh <krishneil.k.singh@intel.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Maher Sanalla <msanalla@nvidia.com>
  Mathieu Othacehe <othacehe@gnu.org>
  Matt Jan <zoo868e@gmail.com>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  MD Danish Anwar <danishanwar@ti.com>
  Minda Chen <minda.chen@starfivetech.com>
  Naama Meir <naamax.meir@linux.intel.com>
  Neal Cardwell <ncardwell@google.com>
  Nikolay Aleksandrov <razor@blackwall.org>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Parthiban Veerasooran <Parthiban.Veerasooran@microchip.com>
  Paul Greenwalt <paul.greenwalt@intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com>
  Rahul Rameshbabu <rrameshbabu@nvidia.com>
  Rob Herring (Arm) <robh@kernel.org>
  Robert Thomas <rob.thomas@ibm.com>
  Roded Zats <rzats@paloaltonetworks.com>
  Shahab Vahedi <shahab@synopsys.com>
  Shay Agroskin <shayagr@amazon.com>
  Stanislav Fomichev <sdf@google.com>
  Stéphane Graber <stgraber@stgraber.org>
  syzbot+ec941d6e24f633a59172@syzkaller.appspotmail.com
  Tariq Toukan <tariqt@nvidia.com>
  Thadeu Lima de Souza Cascardo <cascardo@igalia.com>
  Thinh Tran <thinhtr@linux.ibm.com>
  Thorsten Blum <thorsten.blum@toblux.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tristram Ha <tristram.ha@microchip.com>
  Vitaly Lifshits <vitaly.lifshits@intel.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Willem de Bruijn <willemb@google.com>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  Xu Kuohai <xukuohai@huaweicloud.com>
  Yue Haibing <yuehaibing@huawei.com>
  Zhang Rui <rui.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   4a4be1ad3a6e..d8ec19857b09  d8ec19857b095b39d114ae299713bd8ea6c1e66a -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 05:55:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 05:55:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.733992.1140247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDHhn-00089D-OO; Sat, 01 Jun 2024 05:55:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 733992.1140247; Sat, 01 Jun 2024 05:55:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDHhn-000896-LL; Sat, 01 Jun 2024 05:55:03 +0000
Received: by outflank-mailman (input) for mailman id 733992;
 Sat, 01 Jun 2024 05:55:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDHhm-00088w-SV; Sat, 01 Jun 2024 05:55:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDHhm-0004ns-Pf; Sat, 01 Jun 2024 05:55:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDHhm-00066A-Fy; Sat, 01 Jun 2024 05:55:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDHhm-0002ys-FX; Sat, 01 Jun 2024 05:55:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5snuIUGAjBAnU8LAoggwfky4NxIZSpe0JDEPMhLsXSA=; b=hUszpEq9uLYviE4P/WCGYdx7wA
	tL9N6AV3nBuwY6fUlPblTYdyTNGUsM5+svYOHh4j1lZ9bQJ60MIgwD+XQ5zDgr4oHu2e1+Dm296O0
	bUVwWCsvooatDU5IEWPI3/cPQe0QVQ1tiT0SSchwFmgUR8s18pwnmdtI42ozVLOQtOIU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186216-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186216: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5f7606c048f7cca1a4301b321af70791c1d22378
X-Osstest-Versions-That:
    xen=03147e6837ff045dbc328be876b9600f7040c771
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 05:55:02 +0000

flight 186216 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186216/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5f7606c048f7cca1a4301b321af70791c1d22378
baseline version:
 xen                  03147e6837ff045dbc328be876b9600f7040c771

Last test of basis   186213  2024-05-31 22:02:09 Z    0 days
Testing same since   186216  2024-06-01 02:00:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   03147e6837..5f7606c048  5f7606c048f7cca1a4301b321af70791c1d22378 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 07:51:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 07:51:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734030.1140256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDJWK-0005Ea-Pp; Sat, 01 Jun 2024 07:51:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734030.1140256; Sat, 01 Jun 2024 07:51:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDJWK-0005ES-Mr; Sat, 01 Jun 2024 07:51:20 +0000
Received: by outflank-mailman (input) for mailman id 734030;
 Sat, 01 Jun 2024 07:51:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDJWJ-0005EK-4a
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 07:51:19 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b9bceb8a-1feb-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 09:51:18 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 1AB014EE0745;
 Sat,  1 Jun 2024 09:51:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9bceb8a-1feb-11ef-90a1-e314d9c70b13
MIME-Version: 1.0
Date: Sat, 01 Jun 2024 09:51:16 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Xen-devel
 <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>, Oleksii Kurochko
 <oleksii.kurochko@gmail.com>, Shawn Anastasio
 <sanastasio@raptorengineering.com>, "consulting @ bugseng . com"
 <consulting@bugseng.com>, Simone Ballarin <simone.ballarin@bugseng.com>,
 Federico Serafini <federico.serafini@bugseng.com>
Subject: Re: [PATCH v2 06/13] xen/bitops: Implement ffs() in common logic
In-Reply-To: <f7ea72c3-45ef-43cb-ab57-b375a4fbc683@citrix.com>
References: <20240524200338.1232391-1-andrew.cooper3@citrix.com>
 <20240524200338.1232391-7-andrew.cooper3@citrix.com>
 <alpine.DEB.2.22.394.2405301809170.2557291@ubuntu-linux-20-04-desktop>
 <7b974b36b89c216379b86170af9de451@bugseng.com>
 <2917e122-51d8-4caf-ba70-52da70d1342a@citrix.com>
 <f7ea72c3-45ef-43cb-ab57-b375a4fbc683@citrix.com>
Message-ID: <59dc805e3401a47668a162f4f35adba7@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: multipart/mixed;
 boundary="=_def483bbfa127804df0ab484746c7f42"

--=_def483bbfa127804df0ab484746c7f42
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=UTF-8;
 format=flowed

On 2024-05-31 10:48, Andrew Cooper wrote:
> On 31/05/2024 9:34 am, Andrew Cooper wrote:
>> On 31/05/2024 7:56 am, Nicola Vetrini wrote:
>>> On 2024-05-31 03:14, Stefano Stabellini wrote:
>>>> On Fri, 24 May 2024, Andrew Cooper wrote:
>>>>> Perform constant-folding unconditionally, rather than having it
>>>>> implemented inconsistently between architectures.
>>>>> 
>>>>> Confirm the expected behaviour with compile time and boot time 
>>>>> tests.
>>>>> 
>>>>> For non-constant inputs, use arch_ffs() if provided but fall back 
>>>>> to
>>>>> generic_ffsl() if not.  In particular, RISC-V doesn't have a 
>>>>> builtin
>>>>> that
>>>>> works in all configurations.
>>>>> 
>>>>> For x86, rename ffs() to arch_ffs() and adjust the prototype.
>>>>> 
>>>>> For PPC, __builtin_ctz() is 1/3 of the size of the transform to
>>>>> generic_fls().  Drop the definition entirely.  ARM too benefits in
>>>>> the general case by using __builtin_ctz(), but less dramatically
>>>>> because it is using optimised asm().
>>>>> 
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> This patch made me realize that we should add __builtin_ctz,
>>>> __builtin_constant_p and always_inline to
>>>> docs/misra/C-language-toolchain.rst as they don't seem to be 
>>>> currently
>>>> documented and they are not part of the C standard
>>>> 
>>>> Patch welcome :-)
>>>> 
>>> I can send a patch for the builtins.
>> That's very kind of you.
>> 
>> In total by the end of this series, we've got __builtin_constant_p() 
>> (definitely used elsewhere already), and 
>> __builtin_{ffs,ctz,clz}{,l}() 
>> (3x primitives, 2x input types).
>> 
>> If we're going for a list of the primitive operations, lets add
>> __builtin_popcnt{,l}() too right away, because if it weren't for 4.19
>> code freeze, I'd have cleaned up the hweight() helpers too.
> 
> Oh, and it's worth noting that __builtin_{ctz,clz}{,l}() have explicit
> UB if given an input of 0.  (Sadly, even on architectures where the
> underlying instruction emitted is safe with a 0 input. [0])
> 
> This is why every patch in the series using them checks for nonzero 
> input.
> 
> UBSAN (with an adequate compiler) will instrument this, and Xen has
> __ubsan_handle_invalid_builtin() to diagnose these.
> 
> ~Andrew
> 
> [0] It turns out that Clang has a 2-argument form of the builtin with
> the second being the "value forwarded" in case the first is 0.  I've 
> not
> investigated whether GCC has the same.

Hmm, maybe then it's best if builtins are listed in a separate section 
in that file, for ease of browsing. Xen also uses (conditionally) 
__builtin_mem*, __builtin_str* and others, so if all nonstandard 
intrinsics should be listed (as opposed to the ones in some way relevant 
for MISRA violations, which was the original scope of the document), 
then a subset of the attached list would be needed. There are a handful 
only used in ppc, and since the document only covers x86 and arm, those 
should be ignored for the time being.

Anyway, I'll send an RFC next week to decide the best route.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)
--=_def483bbfa127804df0ab484746c7f42
Content-Transfer-Encoding: base64
Content-Type: text/plain;
 name=builtins.txt
Content-Disposition: attachment;
 filename=builtins.txt;
 size=724

MHwkIGdpdCBncmVwIC1FIC1vICJfX2J1aWx0aW5bYS16X10rIiAtLSB4ZW4gfCBjdXQgLWQnOicg
LWYyIHwgc29ydCAtdQpfX2J1aWx0aW5fYnN3YXAKX19idWlsdGluX2NsegpfX2J1aWx0aW5fY2x6
bGwKX19idWlsdGluX2NvbnN0YW50X3AKX19idWlsdGluX2N0egpfX2J1aWx0aW5fY3R6bGwKX19i
dWlsdGluX2V4cGVjdApfX2J1aWx0aW5fZnJhbWVfYWRkcmVzcwpfX2J1aWx0aW5faGFzX2F0dHJp
YnV0ZQpfX2J1aWx0aW5fbWVtY2hyCl9fYnVpbHRpbl9tZW1jbXAKX19idWlsdGluX21lbWNweQpf
X2J1aWx0aW5fbWVtbW92ZQpfX2J1aWx0aW5fbWVtc2V0Cl9fYnVpbHRpbl9vZmZzZXRvZgpfX2J1
aWx0aW5fcG9wY291bnQKX19idWlsdGluX3BvcGNvdW50bGwKX19idWlsdGluX3ByZWZldGNoCl9f
YnVpbHRpbl9yZXR1cm5fYWRkcmVzcwpfX2J1aWx0aW5fc3RyY2FzZWNtcApfX2J1aWx0aW5fc3Ry
Y2hyCl9fYnVpbHRpbl9zdHJjbXAKX19idWlsdGluX3N0cmxlbgpfX2J1aWx0aW5fc3RybmNhc2Vj
bXAKX19idWlsdGluX3N0cm5jbXAKX19idWlsdGluX3N0cnJjaHIKX19idWlsdGluX3N0cnN0cgpf
X2J1aWx0aW5fdHJhcApfX2J1aWx0aW5fdHlwZXNfY29tcGF0aWJsZV9wCl9fYnVpbHRpbl91bnJl
YWNoYWJsZQpfX2J1aWx0aW5fdmFfYXJnCl9fYnVpbHRpbl92YV9jb3B5Cl9fYnVpbHRpbl92YV9l
bmQKX19idWlsdGluX3ZhX2xpc3QKX19idWlsdGluX3ZhX3N0YXJ0Cg==
--=_def483bbfa127804df0ab484746c7f42--


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 08:06:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 08:06:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734046.1140266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDJke-0007Y8-1h; Sat, 01 Jun 2024 08:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734046.1140266; Sat, 01 Jun 2024 08:06:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDJkd-0007Y1-VO; Sat, 01 Jun 2024 08:06:07 +0000
Received: by outflank-mailman (input) for mailman id 734046;
 Sat, 01 Jun 2024 08:06:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDJkd-0007Xr-37; Sat, 01 Jun 2024 08:06:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDJkd-0007VH-0f; Sat, 01 Jun 2024 08:06:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDJkc-0001cs-NP; Sat, 01 Jun 2024 08:06:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDJkc-0003iv-Mz; Sat, 01 Jun 2024 08:06:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5Vmaa6P2EQZyCfqspKlpr90vNXQY005QH1jbiuhLMJk=; b=JGAJ1odO8PrrTliewuX1O5Jfbn
	OVV3LKZfqIewRwwzSY0vWKT9ir2Xb9I4AOBxTSlBCa2MiqxhITBFCe8RCVOZTEqn0CWAH0XxQogPg
	ne2Sgk45J0vvUJ53+MXygNdkQobCMKvGAI+Mt/doKlKX9EACsKJ1/8pYI162BrPsM7Is=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186219-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186219: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7339bfeffa3fa30b18dce86409c0112039bacec5
X-Osstest-Versions-That:
    ovmf=3b36aa96de1d5f7a4660bec5c0cbad2616183dd6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 08:06:06 +0000

flight 186219 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186219/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7339bfeffa3fa30b18dce86409c0112039bacec5
baseline version:
 ovmf                 3b36aa96de1d5f7a4660bec5c0cbad2616183dd6

Last test of basis   186211  2024-05-31 16:14:27 Z    0 days
Testing same since   186219  2024-06-01 06:14:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3b36aa96de..7339bfeffa  7339bfeffa3fa30b18dce86409c0112039bacec5 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 08:22:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 08:22:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734069.1140291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDK0X-0001us-MR; Sat, 01 Jun 2024 08:22:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734069.1140291; Sat, 01 Jun 2024 08:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDK0X-0001uX-JQ; Sat, 01 Jun 2024 08:22:33 +0000
Received: by outflank-mailman (input) for mailman id 734069;
 Sat, 01 Jun 2024 08:22:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDK0V-0001tp-W5; Sat, 01 Jun 2024 08:22:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDK0V-0007mS-UQ; Sat, 01 Jun 2024 08:22:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDK0V-00023H-Fo; Sat, 01 Jun 2024 08:22:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDK0V-0006bL-FN; Sat, 01 Jun 2024 08:22:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l2vdwoSt6Zr5Yt4y3uKIVGEX7JRK9c5KU5TwNbG1DxM=; b=DGY+KHn65D4O6NgBYd4URk9KQb
	1TH2jOZAXEbLy/18CvFrsXad9KI9Lw0O7DesHJpQBtyV4GBsCqyO9uYAaqW2fbJcpUegEAWXuWnMG
	35CSnMMwSY2irjwhFcfPskOPF/c8YpAvIwUvxu2iNMNo9VHx2g7AM4WCewHSUozNrSVE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186214-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186214: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=03147e6837ff045dbc328be876b9600f7040c771
X-Osstest-Versions-That:
    xen=1250c73c1ae2eec0308b4efe3e345127e9dbdb2b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 08:22:31 +0000

flight 186214 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186214/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186208
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186208
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186208
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186208
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186208
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186208
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  03147e6837ff045dbc328be876b9600f7040c771
baseline version:
 xen                  1250c73c1ae2eec0308b4efe3e345127e9dbdb2b

Last test of basis   186208  2024-05-31 05:23:30 Z    1 days
Testing same since   186214  2024-06-01 01:08:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1250c73c1a..03147e6837  03147e6837ff045dbc328be876b9600f7040c771 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 10:17:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 10:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734114.1140343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnS-0006zT-RF; Sat, 01 Jun 2024 10:17:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734114.1140343; Sat, 01 Jun 2024 10:17:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnS-0006wa-Mm; Sat, 01 Jun 2024 10:17:10 +0000
Received: by outflank-mailman (input) for mailman id 734114;
 Sat, 01 Jun 2024 10:17:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDLnR-0006UI-Ca
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 10:17:09 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 19088332-2000-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 12:17:06 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 4DBA14EE074B;
 Sat,  1 Jun 2024 12:17:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19088332-2000-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [XEN PATCH 4/5] automation/eclair_analysis: address remaining violations of MISRA C Rule 20.12
Date: Sat,  1 Jun 2024 12:16:55 +0200
Message-Id: <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717236930.git.nicola.vetrini@bugseng.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The DEFINE macro in asm-offsets.c (for all architectures) still generates
violations despite the file(s) being excluded from compliance, because its
expansion sometimes refers to entities in non-excluded files.
These corner cases are deviated by the configuration.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index cf62a874d928..f29db9e08248 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
 -config=MC3R1.R20.12,macros+={deliberate, "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
 -doc_end
 
+-doc_begin="The macro DEFINE is defined and used in excluded files asm-offsets.c.
+This may still cause violations if entities outside these files are referred to
+in the expansion."
+-config=MC3R1.R20.12,macros+={deliberate, "name(DEFINE)&&loc(file(asm_offsets))"}
+-doc_end
+
 #
 # Series 21.
 #
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 01 10:17:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 10:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734111.1140307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnM-000631-3f; Sat, 01 Jun 2024 10:17:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734111.1140307; Sat, 01 Jun 2024 10:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnL-00062R-TL; Sat, 01 Jun 2024 10:17:03 +0000
Received: by outflank-mailman (input) for mailman id 734111;
 Sat, 01 Jun 2024 10:17:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDLnL-0005zz-5C
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 10:17:03 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1687f838-2000-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 12:17:02 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id DF2F84EE0749;
 Sat,  1 Jun 2024 12:17:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1687f838-2000-11ef-90a1-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH 1/5] xen/domain: deviate violation of MISRA C Rule 20.12
Date: Sat,  1 Jun 2024 12:16:52 +0200
Message-Id: <843540164f7e8f910226e1ded05e153cb04c519d.1717236930.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717236930.git.nicola.vetrini@bugseng.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.12 states: "A macro parameter used as an operand to
the # or ## operators, which is itself subject to further macro replacement,
shall only be used as an operand to these operators".

In this case, in builds where CONFIG_DEBUG_LOCK_PROFILE=y, the domain_lock
macro is used both as a regular macro argument and as an operand for
stringification in the expansion of the macro spin_lock_init_prof.
A SAF-x-safe deviation is introduced to justify this.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 docs/misra/safe.json | 8 ++++++++
 xen/common/domain.c  | 1 +
 2 files changed, 9 insertions(+)

diff --git a/docs/misra/safe.json b/docs/misra/safe.json
index 9b13bcf71706..c213e0a0be3b 100644
--- a/docs/misra/safe.json
+++ b/docs/misra/safe.json
@@ -52,6 +52,14 @@
         },
         {
             "id": "SAF-6-safe",
+            "analyser": {
+                "eclair": "MC3R1.R20.12"
+            },
+            "name": "MC3R1.R20.12: use of a macro argument that deliberately violates the Rule",
+            "text": "A macro parameter that is itself a macro is intentionally used within the macro both as a regular parameter and for text replacement."
+        },
+        {
+            "id": "SAF-7-safe",
             "analyser": {},
             "name": "Sentinel",
             "text": "Next ID to be used"
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 67cadb7c3f4f..2c7168093734 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -632,6 +632,7 @@ struct domain *domain_create(domid_t domid,
 
     atomic_set(&d->refcnt, 1);
     RCU_READ_LOCK_INIT(&d->rcu_lock);
+    /* SAF-6-safe Rule 20.12 expansion of macro domain_lock in debug builds */
     rspin_lock_init_prof(d, domain_lock);
     rspin_lock_init_prof(d, page_alloc_lock);
     spin_lock_init(&d->hypercall_deadlock_mutex);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 01 10:17:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 10:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734113.1140327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnR-0006Ym-Jf; Sat, 01 Jun 2024 10:17:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734113.1140327; Sat, 01 Jun 2024 10:17:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnR-0006Xx-EV; Sat, 01 Jun 2024 10:17:09 +0000
Received: by outflank-mailman (input) for mailman id 734113;
 Sat, 01 Jun 2024 10:17:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDLnQ-0006UI-CG
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 10:17:08 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 17728f2e-2000-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 12:17:05 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 54FFC4EE0748;
 Sat,  1 Jun 2024 12:17:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17728f2e-2000-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 2/5] x86/domain: deviate violation of MISRA C Rule 20.12
Date: Sat,  1 Jun 2024 12:16:53 +0200
Message-Id: <a8fe5f64e46e8980e1740583d59b95f88270f426.1717236930.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717236930.git.nicola.vetrini@bugseng.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.12 states: "A macro parameter used as an operand to
the # or ## operators, which is itself subject to further macro replacement,
shall only be used as an operand to these operators".

In this case, in builds where CONFIG_COMPAT=y, the fpu_ctxt macro is used
both as a regular macro argument and as an operand for stringification in
the expansion of CHECK_FIELD_.
This is deviated using a SAF-x-safe comment.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 xen/arch/x86/domain.c | 1 +
 xen/arch/x86/domctl.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 536542841ef5..ccadfe0c9e70 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1084,6 +1084,7 @@ void arch_domain_creation_finished(struct domain *d)
 #ifdef CONFIG_COMPAT
 #define xen_vcpu_guest_context vcpu_guest_context
 #define fpu_ctxt fpu_ctxt.x
+/* SAF-6-safe Rule 20.12 expansion of macro fpu_ctxt with CONFIG_COMPAT */
 CHECK_FIELD_(struct, vcpu_guest_context, fpu_ctxt);
 #undef fpu_ctxt
 #undef xen_vcpu_guest_context
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9a72d57333e9..335aedf46d03 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1326,6 +1326,7 @@ long arch_do_domctl(
 #ifdef CONFIG_COMPAT
 #define xen_vcpu_guest_context vcpu_guest_context
 #define fpu_ctxt fpu_ctxt.x
+/* SAF-6-safe Rule 20.12 expansion of macro fpu_ctxt with CONFIG_COMPAT */
 CHECK_FIELD_(struct, vcpu_guest_context, fpu_ctxt);
 #undef fpu_ctxt
 #undef xen_vcpu_guest_context
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 01 10:17:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 10:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734112.1140323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnR-0006VJ-Av; Sat, 01 Jun 2024 10:17:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734112.1140323; Sat, 01 Jun 2024 10:17:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnR-0006VA-6X; Sat, 01 Jun 2024 10:17:09 +0000
Received: by outflank-mailman (input) for mailman id 734112;
 Sat, 01 Jun 2024 10:17:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDLnQ-0006UI-3u
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 10:17:08 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 184c05ed-2000-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 12:17:05 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id D7E144EE074A;
 Sat,  1 Jun 2024 12:17:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 184c05ed-2000-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 3/5] x86: deviate violation of MISRA C Rule 20.12
Date: Sat,  1 Jun 2024 12:16:54 +0200
Message-Id: <475daa82f5be77644b1f32ecd3f6e66ccd9ac904.1717236930.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717236930.git.nicola.vetrini@bugseng.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.12 states: "A macro parameter used as an operand to
the # or ## operators, which is itself subject to further macro replacement,
shall only be used as an operand to these operators".

When the second parameter of GET_SET_SHARED is itself a macro and is used both
as a regular argument and for token pasting, the rule is deliberately violated.
A SAF-x-safe comment is used to deviate the usage.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 xen/arch/x86/include/asm/shared.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/include/asm/shared.h b/xen/arch/x86/include/asm/shared.h
index 60b67fa4b427..c26d4b2b3f0f 100644
--- a/xen/arch/x86/include/asm/shared.h
+++ b/xen/arch/x86/include/asm/shared.h
@@ -76,6 +76,7 @@ static inline void arch_set_##field(struct vcpu *v,         \
 
 GET_SET_SHARED(unsigned long, max_pfn)
 GET_SET_SHARED(xen_pfn_t, pfn_to_mfn_frame_list_list)
+/* SAF-6-safe Rule 20.12: expansion of macro nmi_reason */
 GET_SET_SHARED(unsigned long, nmi_reason)
 
 GET_SET_VCPU(unsigned long, cr2)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 01 10:17:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 10:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734115.1140347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnT-00072d-3o; Sat, 01 Jun 2024 10:17:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734115.1140347; Sat, 01 Jun 2024 10:17:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnT-000724-0F; Sat, 01 Jun 2024 10:17:11 +0000
Received: by outflank-mailman (input) for mailman id 734115;
 Sat, 01 Jun 2024 10:17:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDLnS-0006UI-Cd
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 10:17:10 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1a50d486-2000-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 12:17:08 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 85C6F4EE074C;
 Sat,  1 Jun 2024 12:17:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a50d486-2000-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH 5/5] xen: fix MISRA regressions on rule 20.9 and 20.12
Date: Sat,  1 Jun 2024 12:16:56 +0200
Message-Id: <7d454066eb24e0515ff5b37864ed7a7ef5215dc5.1717236930.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717236930.git.nicola.vetrini@bugseng.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of rearrangements")
introduced new violations on previously clean rules 20.9 and 20.12.

The first is introduced because CONFIG_CC_IS_CLANG in xen/self-tests.h is not
defined in the configuration under analysis. Using "defined()" instead avoids
relying on the preprocessor's behaviour upon encountering an undefined identifier
and addresses the violation.

The violation of Rule 20.12 is due to "val" being used in macro RUNTIME_CHECK
both as an ordinary argument and as an operand to the # (stringification) operator.

No functional change.

Fixes: ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of rearrangements")
Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 xen/include/xen/self-tests.h                     | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index f29db9e08248..e2653f77eb2c 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -473,7 +473,7 @@ deliberate."
 -doc_begin="Uses of a macro parameter for ordinary expansion and as an operand
 to the # or ## operators within the following macros are deliberate, to provide
 useful diagnostic messages to the user."
--config=MC3R1.R20.12,macros+={deliberate, "name(ASSERT||BUILD_BUG_ON||BUILD_BUG_ON_ZERO)"}
+-config=MC3R1.R20.12,macros+={deliberate, "name(ASSERT||BUILD_BUG_ON||BUILD_BUG_ON_ZERO||RUNTIME_CHECK)"}
 -doc_end
 
 -doc_begin="The helper macro GENERATE_CASE may use a macro parameter for ordinary
diff --git a/xen/include/xen/self-tests.h b/xen/include/xen/self-tests.h
index 8410db7aaaae..42a4cc4d17fe 100644
--- a/xen/include/xen/self-tests.h
+++ b/xen/include/xen/self-tests.h
@@ -16,7 +16,7 @@
  * Clang < 8 can't fold constants through static inlines, causing this to
  * fail.  Simply skip it for incredibly old compilers.
  */
-#if !CONFIG_CC_IS_CLANG || CONFIG_CLANG_VERSION >= 80000
+#if !defined(CONFIG_CC_IS_CLANG) || CONFIG_CLANG_VERSION >= 80000
 #define COMPILE_CHECK(fn, val, res)                                     \
     do {                                                                \
         typeof(fn(val)) real = fn(val);                                 \
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 01 10:17:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 10:17:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734110.1140303 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnL-000612-PY; Sat, 01 Jun 2024 10:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734110.1140303; Sat, 01 Jun 2024 10:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDLnL-00060v-ME; Sat, 01 Jun 2024 10:17:03 +0000
Received: by outflank-mailman (input) for mailman id 734110;
 Sat, 01 Jun 2024 10:17:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDLnK-0005zz-MF
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 10:17:02 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15c9403e-2000-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 12:17:01 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id ED9494EE0745;
 Sat,  1 Jun 2024 12:16:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15c9403e-2000-11ef-90a1-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [XEN PATCH 0/5] address violations of MISRA C rules
Date: Sat,  1 Jun 2024 12:16:51 +0200
Message-Id: <cover.1717236930.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Patches 1 to 4 address violations of MISRA C Rule 20.12 by deviating certain
uses of some macros, while the last patch addresses some regressions introduced
by the latest bitops series.

Nicola Vetrini (5):
  xen/domain: deviate violation of MISRA C Rule 20.12
  x86/domain: deviate violation of MISRA C Rule 20.12
  x86: deviate violation of MISRA C Rule 20.12
  automation/eclair_analysis: address remaining violations of MISRA C
    Rule 20.12
  xen: fix MISRA regressions on rule 20.9 and 20.12

 automation/eclair_analysis/ECLAIR/deviations.ecl | 8 +++++++-
 docs/misra/safe.json                             | 8 ++++++++
 xen/arch/x86/domain.c                            | 1 +
 xen/arch/x86/domctl.c                            | 1 +
 xen/arch/x86/include/asm/shared.h                | 1 +
 xen/common/domain.c                              | 1 +
 xen/include/xen/self-tests.h                     | 2 +-
 7 files changed, 20 insertions(+), 2 deletions(-)

-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 12:14:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 12:14:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734164.1140362 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDNcV-0005gy-WE; Sat, 01 Jun 2024 12:14:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734164.1140362; Sat, 01 Jun 2024 12:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDNcV-0005gr-Tg; Sat, 01 Jun 2024 12:13:59 +0000
Received: by outflank-mailman (input) for mailman id 734164;
 Sat, 01 Jun 2024 12:13:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDNcU-0005ge-Jm; Sat, 01 Jun 2024 12:13:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDNcU-0003FW-64; Sat, 01 Jun 2024 12:13:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDNcT-00030i-QF; Sat, 01 Jun 2024 12:13:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDNcT-0004mc-Pq; Sat, 01 Jun 2024 12:13:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fRq/gMa3Tp7ivM1frl91yEE0o1LdfkBwehn0JCqdopE=; b=mp04OG4aseS2J6wExF/ou/EQBs
	dcva1ydGbg9jRjumYkxtn9w56L+8z79A7zP1duSLZlgKWeExB0vgRq8hlsg3MJUh9l4Oy/+a/atbc
	er4NRAwfcO145yRCNs8XDa7MX/R18cb+wTLb1rutLG1UaH9KJCDHmsqQqqznU7X5/x2o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186217-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186217: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:host-install(5):broken:allowable
    linux-linus:test-armhf-armhf-xl-arndale:host-ping-check-xen:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cc8ed4d0a8486c7472cd72ec3c19957e509dc68c
X-Osstest-Versions-That:
    linux=d8ec19857b095b39d114ae299713bd8ea6c1e66a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 12:13:57 +0000

flight 186217 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186217/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-rtds        <job status>                 broken
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186212

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      5 host-install(5)        broken REGR. vs. 186212

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale  10 host-ping-check-xen     fail blocked in 186212
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186212
 test-armhf-armhf-xl           8 xen-boot                     fail  like 186212
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186212
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186212
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186212
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186212
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cc8ed4d0a8486c7472cd72ec3c19957e509dc68c
baseline version:
 linux                d8ec19857b095b39d114ae299713bd8ea6c1e66a

Last test of basis   186212  2024-05-31 19:10:39 Z    0 days
Testing same since   186217  2024-06-01 02:33:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Adrián Larumbe <adrian.larumbe@collabora.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Alina Yu <alina_yu@richtek.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Barry Song <baohua@kernel.org>
  Boris Brezillon <boris.brezillon@collabora.com>
  Breno Leitao <leitao@debian.org>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Coly Li <colyli@suse.de>
  Damien Le Moal <dlemoal@kernel.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Danilo Krummrich <dakr@redhat.com>
  Dave Airlie <airlied@redhat.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dongsheng Yang <dongsheng.yang@easystack.cn>
  Douglas Anderson <dianders@chromium.org>
  Fedor Pchelkin <pchelkin@ispras.ru>
  Felix Kuehling <felix.kuehling@amd.com>
  Gerald Loacker <gerald.loacker@wolfvision.net>
  Gnattu OC <gnattuoc@me.com>
  Guenter Roeck <linux@roeck-us.net>
  Hannes Reinecke <hare@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hawking Zhang <Hawking.Zhang@amd.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  hexue <xue01.he@samsung.com>
  Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
  hmtheboy154 <buingoc67@gmail.com>
  Imre Deak <imre.deak@intel.com>
  Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
  Jani Nikula <jani.nikula@intel.com>
  Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
  Jassi Brar <jassisinghbrar@gmail.com>
  Javier Carrasco <javier.carrasco.cruz@gmail.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jesse Zhang <Jesse.Zhang@amd.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Jian Ye <jian.ye@intel.com>
  Jim Wylder <jwylder@google.com>
  John Harrison <John.C.Harrison@Intel.com>
  José Roberto de Souza <jose.souza@intel.com>
  Julia Filipchuk <julia.filipchuk@intel.com>
  Kanchan Joshi <joshi.k@samsung.com>
  Keith Busch <kbusch@kernel.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kundan Kumar <kundan.kumar@samsung.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lukas Bulwahn <lukas.bulwahn@redhat.com>
  Luke D. Jones <luke@ljones.dev>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Matthew Auld <matthew.auld@intel.com>
  Matthew Brost <matthew.brost@intel.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Mike Snitzer <snitzer@kernel.org>
  Mingzhe Zou <mingzhe.zou@easystack.cn>
  Mohamed Ahmed <mohamedahmedegypt2001@gmail.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nilay Shroff <nilay@linux.ibm.com>
  Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
  Nirmoy Das <nirmoy.das@intel.com>
  Nícolas F. R. A. Prado <nfraprado@collabora.com>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Colberg <peter.colberg@intel.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rajneesh Bhardwaj <rajneesh.bhardwaj@amd.com>
  Robin Murphy <robin.murphy@arm.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sergey Matyukevich <sergey.matyukevich@syntacore.com>
  Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Tanmay Shah <tanmay.shah@amd.com>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thierry Reding <treding@nvidia.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Thomas Zimmermann <tzimmermann@suse.de>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Val Packett <val@packett.cool>
  Vidya Srinivas <vidya.srinivas@intel.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Wachowski, Karol <karol.wachowski@intel.com>
  Waiman Long <longman@redhat.com>
  Witold Sadowski <wsadowski@marvell.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     broken  
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-rtds broken
broken-step test-armhf-armhf-xl-rtds host-install(5)

Not pushing.

(No revision log; it would be 3467 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 12:48:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 12:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734184.1140373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDO9G-000187-Jh; Sat, 01 Jun 2024 12:47:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734184.1140373; Sat, 01 Jun 2024 12:47:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDO9G-000180-GQ; Sat, 01 Jun 2024 12:47:50 +0000
Received: by outflank-mailman (input) for mailman id 734184;
 Sat, 01 Jun 2024 12:47:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDO9F-00017u-3i
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 12:47:49 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 263eeeda-2015-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 14:47:48 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-4211a86f124so26860405e9.0
 for <xen-devel@lists.xenproject.org>; Sat, 01 Jun 2024 05:47:48 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42135fea176sm17201475e9.24.2024.06.01.05.47.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Jun 2024 05:47:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 263eeeda-2015-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717246067; x=1717850867; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=riCkD32/UuiItOB7MtcAWeONqLdUqZ8WmonNhxbn7V0=;
        b=utE3M44+B8Arn4vS+1TGG7Wvrwj90DJWtHXWSlkRqONsBvqKp3X7yS26NnjDJ5BMR9
         wrZseHPmxChTV3SDFqo5zNFFIDH5CJksGLFJkEUawvSSupS6AEKhjmx/5IVxoKlG91lK
         RYMgBS2nPX2aR95xiAwSp5BxwtS8xRujw72p8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717246067; x=1717850867;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=riCkD32/UuiItOB7MtcAWeONqLdUqZ8WmonNhxbn7V0=;
        b=nSWMCgwOltSzPsN7u/qW9sV+aQoStLF46LE0BO48unOgFMhxt8CXu9EdM3pQpQO701
         Dy0+m93GbiYYSrS8yZbTIN6MkKWxDkDp2YU8q6j4AQ1Eo8SCwc3OcfpYIQ8XzC4uKGFB
         5pOdQmyV8ku7pkTyXBjxkyRAJ0s2Wwnk5grtzMeJycj5AgZgBIeAchDW/57y5B2GZXGW
         jdgWXsRNSa+5LokkJviiBr0EiCor+Py6WzynSuXrQhc4qzcSIS3soiDGDW9foM4ItBk1
         5uYbCO8N06+N9UxcQxHL2BGz++g1nl1Y834QjstDe8b+soXLg9g4MJcCsXOGPWsIAcPg
         Q/fQ==
X-Forwarded-Encrypted: i=1; AJvYcCVhApnM2azLHHoMW83RB9+WxJwq8JExEwfUWbGpvxaTsMVrzu7v6xS5jYiwX561rgltHA6n3cJB8SO6Imd9d8BUKvd+rIlsG1ERFfi19Lc=
X-Gm-Message-State: AOJu0Yx4Q4GY5l9w83H7Mitkh+hoSLyuJNHS3dcTNIcZgJ6GiS+RK95d
	WcJq9qj/jZif6L0bKZ6J8xcZ1iaZYqcPisG+pDqGzA39fe+cBL1+65zeR8KeYvM=
X-Google-Smtp-Source: AGHT+IHWVz++Y2tyJTLooWZCXPDJEC/zDsjrz34/gY0uAaU94Mb/TYe9CHNJN0VM6MohGQoVFnc4Bw==
X-Received: by 2002:a05:600c:4fc9:b0:41b:f30a:41f1 with SMTP id 5b1f17b1804b1-4212e04422bmr38129845e9.7.1717246067396;
        Sat, 01 Jun 2024 05:47:47 -0700 (PDT)
Message-ID: <6ea1507d-25dc-4b3c-8c00-3b7b271e69a0@citrix.com>
Date: Sat, 1 Jun 2024 13:47:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 5/5] xen: fix MISRA regressions on rule 20.9 and 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <7d454066eb24e0515ff5b37864ed7a7ef5215dc5.1717236930.git.nicola.vetrini@bugseng.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <7d454066eb24e0515ff5b37864ed7a7ef5215dc5.1717236930.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 01/06/2024 11:16 am, Nicola Vetrini wrote:
> ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of rearrangements")
> introduced new violations on previously clean rules 20.9 and 20.12.
>
> The first is introduced because CONFIG_CC_IS_CLANG in xen/self-tests.h is not
> defined in the configuration under analysis. Using "defined()" instead avoids
> relying on the preprocessor's behaviour upon encountering an undefined identifier
> and addresses the violation.
>
> The violation of Rule 20.12 is due to "val" being used both as an ordinary argument
> in macro RUNTIME_CHECK, and as an operand of the stringification operator.
>
> No functional change.
>
> Fixes: ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of rearrangements")
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Thank you for this patch.  I'd seen that I'd broken something.  (Entirely
my fault - I've done a lot of testing in Gitlab for the series, but
never manually ran the Eclair jobs.  I'll try to remember better next time.)

One question though. 
https://gitlab.com/xen-project/xen/-/jobs/6994213979 says:

Failure: 1 regressions found for clean guidelines
  service MC3R1.R20.9: (required) All identifiers used in the
controlling expression of `#if' or `#elif' preprocessing directives
shall be #define'd before evaluation:
   violation: 1

While there is a report for 20.12, it's not clean yet (so the first
sentence wants adjusting), and RUNTIME_CHECK doesn't show up newly in it.

So, while I agree that RUNTIME_CHECK() should be included in the 20.12
exclusions, why is the current Gitlab run not reporting it?

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 12:57:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 12:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734188.1140383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDOIU-0002p5-Ew; Sat, 01 Jun 2024 12:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734188.1140383; Sat, 01 Jun 2024 12:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDOIU-0002oy-Bt; Sat, 01 Jun 2024 12:57:22 +0000
Received: by outflank-mailman (input) for mailman id 734188;
 Sat, 01 Jun 2024 12:57:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDOIS-0002os-Fk
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 12:57:20 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ac67435-2016-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 14:57:19 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-35dcff36522so1764465f8f.1
 for <xen-devel@lists.xenproject.org>; Sat, 01 Jun 2024 05:57:19 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd062ea66sm4036270f8f.78.2024.06.01.05.57.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Jun 2024 05:57:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ac67435-2016-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717246639; x=1717851439; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gn0v72b84YUFLUpJV7MN79rcnCZDo6kmr+8d9kolpOw=;
        b=U5eG1ho+N/Ld/7OD+VCoIDbTbEhOcp3xxcWjNVkPN8hXrk/eO3vl+1M8+ojaEaO9Q9
         RDZKk10Rr4bsKzstO8fLZ12FuhH0tYTEMG/WfnB2aElJB+A+yhJ2qtYO3jcsfle9fem9
         aWer8Pt3zuZkcGdeNyu8KpGrtn+z7zDbWLOws=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717246639; x=1717851439;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gn0v72b84YUFLUpJV7MN79rcnCZDo6kmr+8d9kolpOw=;
        b=o6kMxDsuUpqITW6/ASfP5bpINRk6hriEBkl5yCsuwpu38Tr/V7Hb7kbAocWIDihokd
         fXY8Lx8oq7J9G+pASFZq528LB+HCHPkA/IA+vb0vEz0TcwvD1UTwm/dIe6f01Vvd3n0V
         QcXLC+nHUxeALPRrXlQrlMPpBpZshtJwA4j+qnBpUdEwKamGfckbA3Ih7SPYo9ULmAi8
         lWp+ZtBneNbLdRJnJEyOb85m45SwnbaFWieKPjCLSsr4qYRRukZk7iIwCy9WvzCIY7Uz
         YXhp0yHOl6p4ulhyPdflqEVYY2rC0aih5MEDMwNrv271MOsM6MsJMdiJfbv5m3lIx+l5
         eBJA==
X-Forwarded-Encrypted: i=1; AJvYcCViahFzAXULMBF6+yvCA81a61UGsaqEu4G1mh2NM1Lny3LgrNz2WcV5Q8I2IL34OyMDKIDDC7ttG6J1TvUGXf6iyB5hx5SGul8ngM5uO8A=
X-Gm-Message-State: AOJu0Yw4cKEeI8F5fzxY1FPzRfLZ7dJ+29qxhcOkNyIpsVnrJUb+eYCV
	hKU28vs6mAjU4kKj80ZuJeYbr+u+xHos8Zten1iWm+Dcd7sj9jzypaXcmVXj3T8=
X-Google-Smtp-Source: AGHT+IH2m4LBzl8MpHtYWHzuQTCmBMvBFxIw58vb8yUhZhiqTOF9un0a0MEYd5vOZmZGBfUeY9etxw==
X-Received: by 2002:adf:e247:0:b0:352:e4d5:5e12 with SMTP id ffacd0b85a97d-35e0f270a94mr3334413f8f.20.1717246638748;
        Sat, 01 Jun 2024 05:57:18 -0700 (PDT)
Message-ID: <3d0f7dee-072e-4787-b27a-e277ad4d91ae@citrix.com>
Date: Sat, 1 Jun 2024 13:57:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 12/13] xen/bitops: Clean up ffs64()/fls64() definitions
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 "consulting @ bugseng . com" <consulting@bugseng.com>,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Federico Serafini <federico.serafini@bugseng.com>,
 Nicola Vetrini <nicola.vetrini@bugseng.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240524200338.1232391-1-andrew.cooper3@citrix.com>
 <20240524200338.1232391-13-andrew.cooper3@citrix.com>
 <1cf28a31-976f-45dc-8dfb-ceecdc60cac7@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <1cf28a31-976f-45dc-8dfb-ceecdc60cac7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27/05/2024 2:44 pm, Jan Beulich wrote:
>> --- a/xen/include/xen/bitops.h
>> +++ b/xen/include/xen/bitops.h
>> @@ -60,6 +60,14 @@ static always_inline __pure unsigned int ffsl(unsigned long x)
>>  #endif
>>  }
>>  
>> +static always_inline __pure unsigned int ffs64(uint64_t x)
>> +{
>> +    if ( BITS_PER_LONG == 64 )
> In principle >= 64 would be okay here, and hence I'd prefer if we used that
> less strict form. Yet I'm not going to insist.

Sorry - I'd meant to include this, but I've just found it still local to
my dev branch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 12:58:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 12:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734193.1140392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDOJJ-0003NQ-Q4; Sat, 01 Jun 2024 12:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734193.1140392; Sat, 01 Jun 2024 12:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDOJJ-0003NJ-NZ; Sat, 01 Jun 2024 12:58:13 +0000
Received: by outflank-mailman (input) for mailman id 734193;
 Sat, 01 Jun 2024 12:58:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDOJJ-0003ND-DZ
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 12:58:13 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9a1d2f42-2016-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 14:58:12 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 6CC624EE0737;
 Sat,  1 Jun 2024 14:58:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a1d2f42-2016-11ef-90a1-e314d9c70b13
MIME-Version: 1.0
Date: Sat, 01 Jun 2024 14:58:11 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, Doug
 Goldstein <cardoe@cardoe.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH 5/5] xen: fix MISRA regressions on rule 20.9 and 20.12
In-Reply-To: <6ea1507d-25dc-4b3c-8c00-3b7b271e69a0@citrix.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <7d454066eb24e0515ff5b37864ed7a7ef5215dc5.1717236930.git.nicola.vetrini@bugseng.com>
 <6ea1507d-25dc-4b3c-8c00-3b7b271e69a0@citrix.com>
Message-ID: <00424ba7b8e418c497ccee25167320e1@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=UTF-8;
 format=flowed
Content-Transfer-Encoding: 8bit

On 2024-06-01 14:47, Andrew Cooper wrote:
> On 01/06/2024 11:16 am, Nicola Vetrini wrote:
>> ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of 
>> rearrangements")
>> introduced new violations on previously clean rules 20.9 and 20.12.
>> 
>> The first is introduced because CONFIG_CC_IS_CLANG in xen/self-tests.h 
>> is not
>> defined in the configuration under analysis. Using "defined()" instead 
>> avoids
>> relying on the preprocessor's behaviour upon encountering an 
>> undefined identifier
>> and addresses the violation.
>> 
>> The violation of Rule 20.12 is due to "val" being used both as an 
>> ordinary argument
>> in macro RUNTIME_CHECK, and as an operand of the stringification operator.
>> 
>> No functional change.
>> 
>> Fixes: ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead 
>> of rearrangements")
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> 
> Thank you for this patch.  I'd seen that I'd broken something.
> (Entirely
> my fault - I've done a lot of testing in Gitlab for the series, but
> never manually ran the Eclair jobs.  I'll try to remember better next 
> time.)
> 
> One question though. 
> https://gitlab.com/xen-project/xen/-/jobs/6994213979 says:
> 
> Failure: 1 regressions found for clean guidelines
>   service MC3R1.R20.9: (required) All identifiers used in the
> controlling expression of `#if' or `#elif' preprocessing directives
> shall be #define'd before evaluation:
>    violation: 1
> 
> While there is a report for 20.12, it's not clean yet (so the first
> sentence wants adjusting), and RUNTIME_CHECK doesn't show up newly in 
> it.
> 
> So, while I agree that RUNTIME_CHECK() should be included in the 20.12
> exclusions, why is the current Gitlab run not reporting it?
> 
> ~Andrew

Rule 20.12 wasn't clean on x86, but it was on arm, so in the arm64
run it is reported as such:

https://gitlab.com/xen-project/xen/-/jobs/6994213980

With the other patches in the series the rule should be clean on both
architectures, so this imbalance should disappear rather shortly.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 13:08:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 13:08:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734199.1140402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDOTX-0005Bg-Nz; Sat, 01 Jun 2024 13:08:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734199.1140402; Sat, 01 Jun 2024 13:08:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDOTX-0005BZ-LC; Sat, 01 Jun 2024 13:08:47 +0000
Received: by outflank-mailman (input) for mailman id 734199;
 Sat, 01 Jun 2024 13:08:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDOTW-0005BR-IE
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 13:08:46 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 139c2ce7-2018-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 15:08:45 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-42122ac2f38so16407335e9.1
 for <xen-devel@lists.xenproject.org>; Sat, 01 Jun 2024 06:08:45 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd04c0f47sm4067039f8f.8.2024.06.01.06.08.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Jun 2024 06:08:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 139c2ce7-2018-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717247325; x=1717852125; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=MbozNucyW+sx+4W72eymgLOLCc+L5Y5BOCx+JClK1/A=;
        b=YBUuLXZJ9EHKtwOu2Q6/GD16Kic3ZdZepsnLfm3DiEyU2lbpLx9eQBss/KGrmvCj/N
         nrMhlryT8EtJCZ8GUAh3NCUZ8772yQoZg4chJ8kEhieD5WTuugHOU6WzCUGdAUM90UzM
         LXbK0mhBoaFRPkgA3RVk6nJFh6D4KcPcN5rJU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717247325; x=1717852125;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MbozNucyW+sx+4W72eymgLOLCc+L5Y5BOCx+JClK1/A=;
        b=ggOiW4eOnft1uBFyOO29XWw2Mat9SUFOhvQI3PTibVqJjszJmB+z5y/vXyqg0SdBFr
         wUWMU5Hx36zY5PBF+99WkmchOOCffTPzn6hN2qp69dY3v1Xc4pTkuqo0LUjEXq6d2bax
         8HnbbPIT4jPxQDbC0GsDZ5nRLxQL+ylVYn90irKHxAqkmsamc6WRaCXXiEY+C9WT/KLb
         sZFxvO64rmH2qRRagA6E1TK7zN49xamvBOIkQSaObJznIe7fmtSjqDJDMiVwYazMdo0S
         0EjWbC0yBpJfKsJIP3GLdiTSA3+JELosLn5rJgVhyOmWLcmj/8XmBC7NZZk6B2WPfPIr
         UZdA==
X-Gm-Message-State: AOJu0Yx62GIn3inmL11YVjOG3uCsmOyGMmfTETOamPehrsR2VdWwabnu
	wcxxc/3h2+xamGknqCIQ7gDN1dWrv5fGKZad+R1dynQao38th43CSARkbNRxdTY=
X-Google-Smtp-Source: AGHT+IELjqaCsm8xhYnWtORL4243UV5PuzTF38x8wbyzxCpPw3UD5z1gohLLHJLni609/TDOII3cbQ==
X-Received: by 2002:a05:600c:4f4d:b0:418:9d4a:1ba5 with SMTP id 5b1f17b1804b1-421280dfabbmr64967945e9.6.1717247324717;
        Sat, 01 Jun 2024 06:08:44 -0700 (PDT)
Message-ID: <02138ee8-7a30-40b0-823f-af451fb4f060@citrix.com>
Date: Sat, 1 Jun 2024 14:08:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 5/5] xen: fix MISRA regressions on rule 20.9 and 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <7d454066eb24e0515ff5b37864ed7a7ef5215dc5.1717236930.git.nicola.vetrini@bugseng.com>
 <6ea1507d-25dc-4b3c-8c00-3b7b271e69a0@citrix.com>
 <00424ba7b8e418c497ccee25167320e1@bugseng.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <00424ba7b8e418c497ccee25167320e1@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 01/06/2024 1:58 pm, Nicola Vetrini wrote:
> On 2024-06-01 14:47, Andrew Cooper wrote:
>> On 01/06/2024 11:16 am, Nicola Vetrini wrote:
>>> ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of
>>> rearrangements")
>>> introduced new violations on previously clean rules 20.9 and 20.12.
>>>
>>> The first is introduced because CONFIG_CC_IS_CLANG in
>>> xen/self-tests.h is not
>>> defined in the configuration under analysis. Using "defined()"
>>> instead avoids
>>> relying on the preprocessor's behaviour upon encountering an
>>> undefined identifier
>>> and addresses the violation.
>>>
>>> The violation of Rule 20.12 is due to "val" being used both as an
>>> ordinary argument
>>> in macro RUNTIME_CHECK, and as an operand of the stringification operator.
>>>
>>> No functional change.
>>>
>>> Fixes: ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure
>>> ahead of rearrangements")
>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>>
>> Thank you for this patch.  I'd seen that I'd broken something.  (Entirely
>> my fault - I've done a lot of testing in Gitlab for the series, but
>> never manually ran the Eclair jobs.  I'll try to remember better next
>> time.)
>>
>> One question though. 
>> https://gitlab.com/xen-project/xen/-/jobs/6994213979 says:
>>
>> Failure: 1 regressions found for clean guidelines
>>   service MC3R1.R20.9: (required) All identifiers used in the
>> controlling expression of `#if' or `#elif' preprocessing directives
>> shall be #define'd before evaluation:
>>    violation: 1
>>
>> While there is a report for 20.12, it's not clean yet (so the first
>> sentence wants adjusting), and RUNTIME_CHECK doesn't show up newly in
>> it.
>>
>> So, while I agree that RUNTIME_CHECK() should be included in the 20.12
>> exclusions, why is current Gitlab Run not reporting it?
>>
>> ~Andrew
>
> Rule 20.12 wasn't clean on x86, but it was on arm, so in the
> arm64 run it is reported as such
>
> https://gitlab.com/xen-project/xen/-/jobs/6994213980
>
> with the other patches in the series the rule should be clean on both
> architectures, so this imbalance should disappear rather shortly.
>

Thanks - I'd just found the ARM report and was replying - I'll rebase
onto this thread.

I still don't understand why the violation doesn't show up on the x86
build.  Specifically, why it's missing from here:
https://saas.eclairit.com:3787/fs/var/local/eclair/xen-project.ecdf/xen-project/xen/ECLAIR_normal/staging/X86_64/6994213979/prev/PROJECT.ecd;/by_service/MC3R1.R20.12.html

From the ARM report, one is xen/lib/generic-ffsl.c:61 and checking with
the x86 build log, we are building generic-ffsl.c too (which is expected).

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 13:53:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 13:53:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734207.1140413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDP9w-0003j5-UB; Sat, 01 Jun 2024 13:52:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734207.1140413; Sat, 01 Jun 2024 13:52:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDP9w-0003iy-RB; Sat, 01 Jun 2024 13:52:36 +0000
Received: by outflank-mailman (input) for mailman id 734207;
 Sat, 01 Jun 2024 13:52:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDP9v-0003is-LT
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 13:52:35 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 315d9c86-201e-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 15:52:32 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id A92D34EE0737;
 Sat,  1 Jun 2024 15:52:31 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 315d9c86-201e-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Sat, 01 Jun 2024 15:52:31 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, Doug
 Goldstein <cardoe@cardoe.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH 5/5] xen: fix MISRA regressions on rule 20.9 and 20.12
In-Reply-To: <02138ee8-7a30-40b0-823f-af451fb4f060@citrix.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <7d454066eb24e0515ff5b37864ed7a7ef5215dc5.1717236930.git.nicola.vetrini@bugseng.com>
 <6ea1507d-25dc-4b3c-8c00-3b7b271e69a0@citrix.com>
 <00424ba7b8e418c497ccee25167320e1@bugseng.com>
 <02138ee8-7a30-40b0-823f-af451fb4f060@citrix.com>
Message-ID: <00e1530e288a5957e87936e0b010a8cd@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=UTF-8;
 format=flowed
Content-Transfer-Encoding: 8bit

On 2024-06-01 15:08, Andrew Cooper wrote:
> On 01/06/2024 1:58 pm, Nicola Vetrini wrote:
>> On 2024-06-01 14:47, Andrew Cooper wrote:
>>> On 01/06/2024 11:16 am, Nicola Vetrini wrote:
>>>> ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of
>>>> rearrangements")
>>>> introduced new violations on previously clean rules 20.9 and 20.12.
>>>> 
>>>> The first is introduced because CONFIG_CC_IS_CLANG in
>>>> xen/self-tests.h is not
>>>> defined in the configuration under analysis. Using "defined()"
>>>> instead avoids
>>>> relying on the preprocessor's behaviour upon encountering an
>>>> undefined identifier
>>>> and addresses the violation.
>>>> 
>>>> The violation of Rule 20.12 is due to "val" being used both as an
>>>> ordinary argument
>>>> in macro RUNTIME_CHECK, and as an operand of the stringification operator.
>>>> 
>>>> No functional change.
>>>> 
>>>> Fixes: ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure
>>>> ahead of rearrangements")
>>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>>> 
>>> Thank you for this patch.
>>> (Entirely
>>> my fault - I've done a lot of testing in Gitlab for the series, but
>>> never manually ran the Eclair jobs.  I'll try to remember better next
>>> time.)
>>> 
>>> One question though. 
>>> https://gitlab.com/xen-project/xen/-/jobs/6994213979 says:
>>> 
>>> Failure: 1 regressions found for clean guidelines
>>>   service MC3R1.R20.9: (required) All identifiers used in the
>>> controlling expression of `#if' or `#elif' preprocessing directives
>>> shall be #define'd before evaluation:
>>>    violation: 1
>>> 
>>> While there is a report for 20.12, it's not clean yet (so the first
>>> sentence wants adjusting), and RUNTIME_CHECK doesn't show up newly in
>>> it.
>>> 
>>> So, while I agree that RUNTIME_CHECK() should be included in the 
>>> 20.12
>>> exclusions, why is current Gitlab Run not reporting it?
>>> 
>>> ~Andrew
>> 
>> Rule 20.12 wasn't clean on x86, but it was on arm, so in the
>> arm64 run it is reported as such
>> 
>> https://gitlab.com/xen-project/xen/-/jobs/6994213980
>> 
>> with the other patches in the series the rule should be clean on both
>> architectures, so this imbalance should disappear rather shortly.
>> 
> 
> Thanks - I'd just found the ARM report and was replying - I'll rebase
> onto this thread.
> 
> I still don't understand why the violation doesn't show up on the x86
> build.  Specifically, why it's missing from here:
> https://saas.eclairit.com:3787/fs/var/local/eclair/xen-project.ecdf/xen-project/xen/ECLAIR_normal/staging/X86_64/6994213979/prev/PROJECT.ecd;/by_service/MC3R1.R20.12.html
> 

Note the "prev" here in the URL: it means you're looking at the
analysis run from before your series was merged (on 03147e6837ff045db).

This is the latest run for x86 on staging:

https://saas.eclairit.com:3787/fs/var/local/eclair/xen-project.ecdf/xen-project/xen/ECLAIR_normal/staging/X86_64/6994213979/PROJECT.ecd;/by_service/MC3R1.R20.12.html

> From the ARM report, one is xen/lib/generic-ffsl.c:61 and checking with
> the x86 build log, we are building generic-ffsl.c too (which is
> expected).
> 
> ~Andrew

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 14:02:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 14:02:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734213.1140422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDPJ4-0005WS-Mx; Sat, 01 Jun 2024 14:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734213.1140422; Sat, 01 Jun 2024 14:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDPJ4-0005WL-K6; Sat, 01 Jun 2024 14:02:02 +0000
Received: by outflank-mailman (input) for mailman id 734213;
 Sat, 01 Jun 2024 14:02:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDPJ2-0005WF-Qz
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 14:02:00 +0000
Received: from mail-qv1-xf2a.google.com (mail-qv1-xf2a.google.com
 [2607:f8b0:4864:20::f2a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 830a756d-201f-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 16:01:59 +0200 (CEST)
Received: by mail-qv1-xf2a.google.com with SMTP id
 6a1803df08f44-6ad74e5afeaso18607546d6.0
 for <xen-devel@lists.xenproject.org>; Sat, 01 Jun 2024 07:01:59 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6ae4a7462f1sm14811306d6.33.2024.06.01.07.01.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Jun 2024 07:01:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 830a756d-201f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717250518; x=1717855318; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=SuGF2XzFM+x7P8iYkP6eccS0bdWkdBlZETCYtzm4ego=;
        b=t92Z7A7jhctxK5EzvMclXXS5Ddwn+TS+bPLauSiJ/zXk0a9vzgPefAKTa3eHriUycs
         unZmwImyigkc9ZVYCgetwA+JzWLukpCT3MIX/X5uwLvWOb4uro7XezXucP4dIjbsaMtR
         S2cMyzuMY7yKuOW+Ye2u8zUfJhD4oyaHsyPo4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717250518; x=1717855318;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=SuGF2XzFM+x7P8iYkP6eccS0bdWkdBlZETCYtzm4ego=;
        b=MsW+8WJMpGtKJ3zpUEuGW2iisBvZyS2kkhOk03wre8QA1fAQMiNScSb68/fRZs4qx+
         cpl9IDcgw9tIz14rq61U/Vdl+BTF2iYGe2IOa41uiRbRLk73vwTDQ+j9348ZFpdu9Qk0
         A9A5PlHsWVjJfXfwuzelTO9WF8qeBevwhG52jIDGJ2DYeNFEyoh3b7g48DVURM+/66uj
         XO+F26PcF3R4holJEKIsCvbkBdri9ppsOy2AaePXJ0Xvpf7tiEKROGnHSa7BecX6Wf5W
         mVU4fXB/d10IARFgMSTWBCNNRJUfU0FKS+6tnv6o5hYkOgrHtIX32WU0glJtycbq1txL
         O8qg==
X-Gm-Message-State: AOJu0Yw84ZTCsr0llEa5SkpayA6Fd3iuqoZwCXf+MAy9xZVvxSWNJl4F
	tRNlr8H49OOvhDZfK8t/jFyO7NOwmDHEonfg7aTYq+ZXYp1iPuERvKR8s3NDYA4=
X-Google-Smtp-Source: AGHT+IH9hS1n35ry0qDr4aqte2rhIp2CS/ikf1Vi6BaXT5EMbXSaEbWBFcBbxvLRYeee+XpCJ6us5Q==
X-Received: by 2002:a05:6214:5f02:b0:6af:565c:d16 with SMTP id 6a1803df08f44-6af565c1ab5mr6753936d6.21.1717250517550;
        Sat, 01 Jun 2024 07:01:57 -0700 (PDT)
Message-ID: <06445c42-1e4f-4830-bed6-b16cfd5c3a9a@citrix.com>
Date: Sat, 1 Jun 2024 15:01:53 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 5/5] xen: fix MISRA regressions on rule 20.9 and 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <7d454066eb24e0515ff5b37864ed7a7ef5215dc5.1717236930.git.nicola.vetrini@bugseng.com>
 <6ea1507d-25dc-4b3c-8c00-3b7b271e69a0@citrix.com>
 <00424ba7b8e418c497ccee25167320e1@bugseng.com>
 <02138ee8-7a30-40b0-823f-af451fb4f060@citrix.com>
 <00e1530e288a5957e87936e0b010a8cd@bugseng.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <00e1530e288a5957e87936e0b010a8cd@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 01/06/2024 2:52 pm, Nicola Vetrini wrote:
> On 2024-06-01 15:08, Andrew Cooper wrote:
>> On 01/06/2024 1:58 pm, Nicola Vetrini wrote:
>>> On 2024-06-01 14:47, Andrew Cooper wrote:
>>>> On 01/06/2024 11:16 am, Nicola Vetrini wrote:
>>>>> ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure ahead of
>>>>> rearrangements")
>>>>> introduced new violations on previously clean rules 20.9 and 20.12.
>>>>>
>>>>> The first is introduced because CONFIG_CC_IS_CLANG in
>>>>> xen/self-tests.h is not
>>>>> defined in the configuration under analysis. Using "defined()"
>>>>> instead avoids
>>>>> relying on the preprocessor's behaviour upon encountering an
>>>>> undefined identifier
>>>>> and addresses the violation.
>>>>>
>>>>> The violation of Rule 20.12 is due to "val" being used both as an
>>>>> ordinary argument
>>>>> in macro RUNTIME_CHECK, and as an operand of the stringification operator.
>>>>>
>>>>> No functional change.
>>>>>
>>>>> Fixes: ea59e7d780d9 ("xen/bitops: Cleanup and new infrastructure
>>>>> ahead of rearrangements")
>>>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>>>>
>>>> Thank you for this patch.  I'd seen that I'd broken something.
>>>> (Entirely
>>>> my fault - I've done a lot of testing in Gitlab for the series, but
>>>> never manually ran the Eclair jobs.  I'll try to remember better next
>>>> time.)
>>>>
>>>> One question though. 
>>>> https://gitlab.com/xen-project/xen/-/jobs/6994213979 says:
>>>>
>>>> Failure: 1 regressions found for clean guidelines
>>>>   service MC3R1.R20.9: (required) All identifiers used in the
>>>> controlling expression of `#if' or `#elif' preprocessing directives
>>>> shall be #define'd before evaluation:
>>>>    violation: 1
>>>>
>>>> While there is a report for 20.12, it's not clean yet (so the first
>>>> sentence wants adjusting), and RUNTIME_CHECK doesn't show up newly in
>>>> it.
>>>>
>>>> So, while I agree that RUNTIME_CHECK() should be included in the 20.12
>>>> exclusions, why is current Gitlab Run not reporting it?
>>>>
>>>> ~Andrew
>>>
>>> Rule 20.12 wasn't clean on x86, but it was on arm, so in the
>>> arm64 run it is reported as such
>>>
>>> https://gitlab.com/xen-project/xen/-/jobs/6994213980
>>>
>>> with the other patches in the series the rule should be clean on both
>>> architectures, so this imbalance should disappear rather shortly.
>>>
>>
>> Thanks - I'd just found the ARM report and was replying - I'll rebase
>> onto this thread.
>>
>> I still don't understand why the violation doesn't show up on the x86
>> build.  Specifically, why it's missing from here:
>> https://saas.eclairit.com:3787/fs/var/local/eclair/xen-project.ecdf/xen-project/xen/ECLAIR_normal/staging/X86_64/6994213979/prev/PROJECT.ecd;/by_service/MC3R1.R20.12.html
>>
>>
>
> Note the "prev" here in the URL: this means you're looking at the
> analysis run before your series was merged (on 03147e6837ff045db)
>
> this is the latest run for x86 on staging:
>
> https://saas.eclairit.com:3787/fs/var/local/eclair/xen-project.ecdf/xen-project/xen/ECLAIR_normal/staging/X86_64/6994213979/PROJECT.ecd;/by_service/MC3R1.R20.12.html

Oh - thank you for explaining.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 14:38:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 14:38:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734241.1140433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDPre-0001a3-Dj; Sat, 01 Jun 2024 14:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734241.1140433; Sat, 01 Jun 2024 14:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDPre-0001Zw-B9; Sat, 01 Jun 2024 14:37:46 +0000
Received: by outflank-mailman (input) for mailman id 734241;
 Sat, 01 Jun 2024 14:37:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDPrd-0001Zq-Lm
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 14:37:45 +0000
Received: from mail-qt1-x831.google.com (mail-qt1-x831.google.com
 [2607:f8b0:4864:20::831])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8140ba75-2024-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 16:37:43 +0200 (CEST)
Received: by mail-qt1-x831.google.com with SMTP id
 d75a77b69052e-43fecdecd32so15480691cf.1
 for <xen-devel@lists.xenproject.org>; Sat, 01 Jun 2024 07:37:43 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-43ff23a3c75sm19528691cf.14.2024.06.01.07.37.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Jun 2024 07:37:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8140ba75-2024-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717252662; x=1717857462; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=9dijQdS5Avup1F+5bLxZGXhPRRYBOFud66ULN15XSaA=;
        b=P+4hiJ08tmO+wkSeSQ546Qd1sUqoeiJgMWOpf4KBTFMy2dLjeHm+id+qpq57kEMEOQ
         MbbQPseavv5FqNq8uWeCCg/kfhpvwuDCaEH9D524zW3CGgXANWvom8TrorvN6SyWb95Z
         BCcNY4oLA9Nj2oLvUZI+Y2OloNr6nQuVCHkno=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717252662; x=1717857462;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=9dijQdS5Avup1F+5bLxZGXhPRRYBOFud66ULN15XSaA=;
        b=bTII3yZoZkybxdsWwoKv7RCoKgcP6qenk1wjADXx5Xm0codkGVrOtf/5Nq4VGefLhI
         djlQWXkccnMiURDANTbt4hHWqWLTqbUpWrVOkP/9OSSwy1mxefg7LcCc7p+IQz2dBY/x
         1UzxAY+dcKUkS9BzFGMTeDFnDa/hoVrsPe0JeIB5kg7uTCpfKUGcvvbkhLydnQ7jpplm
         ccH0kagXnN66k7qiZo88QV1V3/Zw7M2qX/UOZP3mtgn+mDv1wM3VskNYFVQA8msfkSVZ
         UeRUo42fJ1ZQv/3Bm3wDcHx19L7YqqL6vNj21rwaSds0eI3q2IC1oNHy8XNfKIWcf62+
         uGXg==
X-Forwarded-Encrypted: i=1; AJvYcCXnksqlV012KmBhTkXpwNE40fERunDEs4VQMgBQiyjF+A7kRFYrFk+JnrKbP/szwvpbP2LVi66pRofJ5WVkioJ0JCtL2LyqJAoWn8OKQ8w=
X-Gm-Message-State: AOJu0YxQNosOQrtlk9JVscEyZGAhPRcGyhlr84Gl9gYeCMd9hvUlWyT5
	CDbypY5Z/uDhSyfe/sN84FEgDHQA7ItP2cFZBGwXHK3H37ZZgSkFh81TLF3i8gk=
X-Google-Smtp-Source: AGHT+IHNoVZQ7YXsFYwkMbX2zP4vu0osrgiVvsYVb4keGEM9B9uGv5MWxejsbeGguC0M9VnKe1yhoA==
X-Received: by 2002:a05:622a:20f:b0:43f:eecd:c81c with SMTP id d75a77b69052e-43ff523782emr49796521cf.2.1717252662559;
        Sat, 01 Jun 2024 07:37:42 -0700 (PDT)
Message-ID: <b4267610-464b-479a-b886-12489c5e5a9b@citrix.com>
Date: Sat, 1 Jun 2024 15:37:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 0/5] address violations of MISRA C rules
To: Nicola Vetrini <nicola.vetrini@bugseng.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <cover.1717236930.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 01/06/2024 11:16 am, Nicola Vetrini wrote:
> Patches 1 to 4 address violations of MISRA C Rule 20.12 by deviating certain
> uses of some macros, while the last patch addresses some regressions introduced
> by the latest bitops series
>
> Nicola Vetrini (5):
>   xen/domain: deviate violation of MISRA C Rule 20.12
>   x86/domain: deviate violation of MISRA C Rule 20.12
>   x86: deviate violation of MISRA C Rule 20.12
>   automation/eclair_analysis: address remaining violations of MISRA C
>     Rule 20.12
>   xen: fix MISRA regressions on rule 20.9 and 20.12

I've committed patch 5 because it fixes a blocking failure in Gitlab CI
from content already accepted for Xen 4.19.

The others look fine to me, but you'll need to negotiate with Oleksii
(CC'd) to get them in, at this point in the release.

Given that this series makes x86 clean to Rule 20.12, shouldn't there be
a final patch making it blocking, to bring x86 in line with ARM?

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 15:05:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 15:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734249.1140446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDQIH-0005n1-GX; Sat, 01 Jun 2024 15:05:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734249.1140446; Sat, 01 Jun 2024 15:05:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDQIH-0005mu-E4; Sat, 01 Jun 2024 15:05:17 +0000
Received: by outflank-mailman (input) for mailman id 734249;
 Sat, 01 Jun 2024 15:05:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDQIG-0005mk-02; Sat, 01 Jun 2024 15:05:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDQIF-0006Dc-Rs; Sat, 01 Jun 2024 15:05:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDQIF-0000Tv-I9; Sat, 01 Jun 2024 15:05:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDQIF-0005X9-Hm; Sat, 01 Jun 2024 15:05:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KKTs91S3x94fxYoNc6/Odbb7P8VHz2bvjMgifk72YsE=; b=ExbLqRrfcVxCapJ5s0MFonM9/E
	C1llgaQC0KkEq+LYiALl1Hqxs46GiVJ3CbJpmt9chnJoTUoNqQh4W/iXHKCChSxv+TinIfN0uBFBs
	uPz3n8TbyHHErK/+zlTaPIQfjuzhmGoFqEa8WBqNkN6gknMA1VAR6DnhC3AmRAeBKxg0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186218-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186218: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-amd64-amd64-libvirt-pair:guest-migrate/dst_host/src_host/debian.repeat:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=5fa180bc7756102c3af5fcbeb4c61109c4d0e829
X-Osstest-Versions-That:
    libvirt=6f293f1fadc08660ba470d2cd3a91fde58cef617
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 15:05:15 +0000

flight 186218 libvirt real [real]
flight 186222 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186218/
http://logs.test-lab.xenproject.org/osstest/logs/186222/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-pair 28 guest-migrate/dst_host/src_host/debian.repeat fail pass in 186222-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186206
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              5fa180bc7756102c3af5fcbeb4c61109c4d0e829
baseline version:
 libvirt              6f293f1fadc08660ba470d2cd3a91fde58cef617

Last test of basis   186206  2024-05-31 04:22:35 Z    1 days
Testing same since   186218  2024-06-01 04:20:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   6f293f1fad..5fa180bc77  5fa180bc7756102c3af5fcbeb4c61109c4d0e829 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 17:20:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 17:20:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734278.1140456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDSOZ-0004O0-T5; Sat, 01 Jun 2024 17:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734278.1140456; Sat, 01 Jun 2024 17:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDSOZ-0004Nt-QP; Sat, 01 Jun 2024 17:19:55 +0000
Received: by outflank-mailman (input) for mailman id 734278;
 Sat, 01 Jun 2024 17:19:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDSOZ-0004Nn-H8
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 17:19:55 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29225393-203b-11ef-90a1-e314d9c70b13;
 Sat, 01 Jun 2024 19:19:54 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 52E054EE0737;
 Sat,  1 Jun 2024 19:19:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29225393-203b-11ef-90a1-e314d9c70b13
MIME-Version: 1.0
Date: Sat, 01 Jun 2024 19:19:53 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Oleksii
 Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [XEN PATCH 0/5] address violations of MISRA C rules
In-Reply-To: <b4267610-464b-479a-b886-12489c5e5a9b@citrix.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <b4267610-464b-479a-b886-12489c5e5a9b@citrix.com>
Message-ID: <fa17cb21b7bba2dabf49bba5138c1cf2@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-01 16:37, Andrew Cooper wrote:
> On 01/06/2024 11:16 am, Nicola Vetrini wrote:
>> Patches 1 to 4 address violations of MISRA C Rule 20.12 by deviating 
>> certain
>> uses of some macros, while the last patch addresses some regressions 
>> introduced
>> by the latest bitops series
>> 
>> Nicola Vetrini (5):
>>   xen/domain: deviate violation of MISRA C Rule 20.12
>>   x86/domain: deviate violation of MISRA C Rule 20.12
>>   x86: deviate violation of MISRA C Rule 20.12
>>   automation/eclair_analysis: address remaining violations of MISRA C
>>     Rule 20.12
>>   xen: fix MISRA regressions on rule 20.9 and 20.12
> 
> I've committed patch 5 because it fixes a blocking failure in Gitlab CI
> from content already accepted for Xen 4.19.
> 

Thanks

> The others look fine to me, but you'll need to negotiate with Oleksii
> (CC'd) to get them in, at this point in the release.
> 

Well, having one more clean rule does look better for Xen, and the 
changes to the codebase are harmless enough, but ultimately given the 
closeness with the deadline I didn't really see a need to.

> Given that this series makes x86 clean to Rule 20.12, shouldn't there 
> be
> a final patch making it blocking, to bring x86 in line with ARM?
> 

Good point, I should have done that in patch 4. I'll do a follow-up 
patch.

> ~Andrew

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 17:23:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 17:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734283.1140466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDSRZ-0005ob-BD; Sat, 01 Jun 2024 17:23:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734283.1140466; Sat, 01 Jun 2024 17:23:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDSRZ-0005oU-8h; Sat, 01 Jun 2024 17:23:01 +0000
Received: by outflank-mailman (input) for mailman id 734283;
 Sat, 01 Jun 2024 17:23:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDSRY-0005oO-Gg
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 17:23:00 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9714b85b-203b-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 19:22:58 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-42122ac2f38so17158225e9.1
 for <xen-devel@lists.xenproject.org>; Sat, 01 Jun 2024 10:22:58 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4213860126fsm11921705e9.40.2024.06.01.10.22.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Jun 2024 10:22:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9714b85b-203b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717262578; x=1717867378; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=FDGrYeLcgdKuAo3gBXrytu7NThrt2CtnDzy2HWVr+co=;
        b=h7dT8M3IpXS8vRyG5to139reCSTau+gc0KYSoabmS/iu0H4P/iwuqYl2DlbXD0hVKb
         9zu5qLSPcwqneKijVSD5L3TO15dWqxS9WtbGtTzc1Xp8iNWmj3+i3RlG9nxfi60HWlcr
         O2AVFDWUyc64HVA7/GPeQeRr9lfMK1RdI4C64=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717262578; x=1717867378;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FDGrYeLcgdKuAo3gBXrytu7NThrt2CtnDzy2HWVr+co=;
        b=WFx7Q0crllnCUNeT+nW6EBM1NSrBQQkIJfEyISRaYFlXdbtO18O+8+Y02lpUW0dUFS
         o6zNt990l1rWBqfMb1eUBZSb+3tRvyzgjlHrpVoWfEXNUFCt9fwWdVvpkfjeKEfRxR8/
         ymRFbJnXIIeYiEvOJvzXq0ihzxcS+qVbbk7H8utY9vglPavbg2W+vB+MPjAmW79XdrXu
         fY6hCj7e7b7CcYDiTh4QCOh/jumo/vHLvid+VxpYrXCrSdKGmH2RJXagrH1+DsocBqb4
         l+f33AAkpfTtZpAtgzVbsNqNQ0RVgZuf1fv55kvkXoxWsgVDAr1R10TnTuEF9qGF8rj1
         MGeA==
X-Gm-Message-State: AOJu0YyRM1wTedCQRC7FWnJRurKhd0/r6UtIkfddxO+b7wKDKYvHjpPX
	/TcgTj+LtuvuF7q+CrQ2S6vT6dPEbkG7iozzdvEOjCnWuV+cDJQ+OyaOCWehVNc=
X-Google-Smtp-Source: AGHT+IHQ2DBhH0YpG8WfT/1QfBnC/7xVwSck+BbvxXoPTpWgFcWmVZzFDhN3EKi04t7oZkALzKHG0w==
X-Received: by 2002:a05:600c:19d4:b0:41a:4623:7ee9 with SMTP id 5b1f17b1804b1-421280f023bmr77038375e9.10.1717262577680;
        Sat, 01 Jun 2024 10:22:57 -0700 (PDT)
Message-ID: <1665f895-48c2-494b-891e-d50b8d78b49c@citrix.com>
Date: Sat, 1 Jun 2024 18:22:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 0/5] address violations of MISRA C rules
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <b4267610-464b-479a-b886-12489c5e5a9b@citrix.com>
 <fa17cb21b7bba2dabf49bba5138c1cf2@bugseng.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <fa17cb21b7bba2dabf49bba5138c1cf2@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 01/06/2024 6:19 pm, Nicola Vetrini wrote:
> On 2024-06-01 16:37, Andrew Cooper wrote:
>> On 01/06/2024 11:16 am, Nicola Vetrini wrote:
>>> Patches 1 to 4 address violations of MISRA C Rule 20.12 by deviating
>>> certain
>>> uses of some macros, while the last patch addresses some regressions
>>> introduced
>>> by the latest bitops series
>>>
>>> Nicola Vetrini (5):
>>>   xen/domain: deviate violation of MISRA C Rule 20.12
>>>   x86/domain: deviate violation of MISRA C Rule 20.12
>>>   x86: deviate violation of MISRA C Rule 20.12
>>>   automation/eclair_analysis: address remaining violations of MISRA C
>>>     Rule 20.12
>>>   xen: fix MISRA regressions on rule 20.9 and 20.12
>>
>> I've committed patch 5 because it fixes a blocking failure in Gitlab CI
>> from content already accepted for Xen 4.19.
>>
>
> Thanks
>
>> The others look fine to me, but you'll need to negotiate with Oleksii
>> (CC'd) to get them in, at this point in the release.
>>
>
> Well, having one more clean rule does look better for Xen, and the
> changes to the codebase are harmless enough, but ultimately given the
> closeness with the deadline I didn't really see a need to.
>
>> Given that this series makes x86 clean to Rule 20.12, shouldn't there be
>> a final patch making it blocking, to bring x86 in line with ARM?
>>
>
> Good point, I should have done that in patch 4. I'll do a follow-up
> patch.

FWIW, given how simple this series is, I'm +1 for including it in 4.19,
even at this point.  It is definitely nicer to have the disposition of
Rule 20.12 the same between ARM and x86.

Still - it's very much Oleksii's call.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 17:52:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 17:52:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734290.1140476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDSuK-0001Kp-Md; Sat, 01 Jun 2024 17:52:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734290.1140476; Sat, 01 Jun 2024 17:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDSuK-0001Ki-Js; Sat, 01 Jun 2024 17:52:44 +0000
Received: by outflank-mailman (input) for mailman id 734290;
 Sat, 01 Jun 2024 17:52:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDSuJ-0001KY-HV; Sat, 01 Jun 2024 17:52:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDSuJ-0000vi-BB; Sat, 01 Jun 2024 17:52:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDSuJ-0006un-2l; Sat, 01 Jun 2024 17:52:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDSuJ-0006X5-2K; Sat, 01 Jun 2024 17:52:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7ldrgy9muXIRZX5FBkMdmb4o+u455XqBT7/yDAN4+xk=; b=yW2wX09kw08N7G4GwZBzwM0Ddz
	EcWBgiUlE1XGczTwXHJMYFwlzGvMsNpDxAjzDRP4R8doUchin0EfL6LYwUj6fh8tPuPgLakb0WUUu
	OPcvL90fY2rpkNXh5eoscPGEF2VcGxFgZPOJVszzw61oLxtrO5Y6uR0z6HHgGo1/1X7c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186223-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186223: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=5f7606c048f7cca1a4301b321af70791c1d22378
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 17:52:43 +0000

flight 186223 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186223/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  5f7606c048f7cca1a4301b321af70791c1d22378

Last test of basis   186216  2024-06-01 02:00:22 Z    0 days
Testing same since   186223  2024-06-01 15:00:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5f7606c048..c2d5e63c73  c2d5e63c7380c7cb435d00211b512c53accb528e -> smoke


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 18:51:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 18:51:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734304.1140487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDToT-0000Qi-QX; Sat, 01 Jun 2024 18:50:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734304.1140487; Sat, 01 Jun 2024 18:50:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDToT-0000Qb-MX; Sat, 01 Jun 2024 18:50:45 +0000
Received: by outflank-mailman (input) for mailman id 734304;
 Sat, 01 Jun 2024 18:50:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OKEA=ND=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sDToR-0000QV-RJ
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 18:50:43 +0000
Received: from mail-qt1-x836.google.com (mail-qt1-x836.google.com
 [2607:f8b0:4864:20::836])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d7df33ec-2047-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 20:50:41 +0200 (CEST)
Received: by mail-qt1-x836.google.com with SMTP id
 d75a77b69052e-43fc2ad049aso17966661cf.3
 for <xen-devel@lists.xenproject.org>; Sat, 01 Jun 2024 11:50:41 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-43ff23a1774sm21231491cf.11.2024.06.01.11.50.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 01 Jun 2024 11:50:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7df33ec-2047-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717267839; x=1717872639; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iQSShu5GxLPHIpCDN9m0CmfiigJxy5okrqlnR5CvZiA=;
        b=dy/euO0rG78NNECVBCJ94vgYwftZOrkhAfMO2oP0mlfVjyvH7PuK/fuk4rB8LmhonM
         x9oe3a8IWi7+ZoU+DsbUVwdMZDT9ne1xMl8MpTZM85OUIZUvNXk3F58/8iazLAuWoBIr
         t37F6Fcan/88N6Jtk+lcgyuallt2+wd2BU1oY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717267839; x=1717872639;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=iQSShu5GxLPHIpCDN9m0CmfiigJxy5okrqlnR5CvZiA=;
        b=h0NacXeYXIv8OdP92q8KuK9kCXfbmtmzLlGgZlVEOJcTwyc6hb6SlLWUwCD3e60/U2
         cifZOUBa5BKn0VP4IJWSq7C3jAILTO6ICJO+lHnwevTHNtIYtgVWtviZ7qQS7FUDrvUa
         MTFxW2vqzOStSWmZvMsRX/ElBix8ID5bqepozjhz12dBPiLB9McMyb3RH7AAMw1C7IC1
         wHiVK/og/5ZnbcP3YR//elkejeOW49tpr+9grlk6n3aHbxnHNlK/4sv0XfRuaH3h0wig
         7azTphKjwqu1TkoaeldA5nvsLM5H1mjG9wvUzcSkLcwhVQuDE20Ish1BStYSWAKdYW8t
         dxoA==
X-Gm-Message-State: AOJu0Yy87GkD4UShh97upXnA5beuve8iGSZ442Q93BbtMHnQFV3SL/nF
	fq1I648Fld7+q4C2pCYIYTGzjqJkJZikUQeUTyzHijwoj1VXc4th8WxOhu6oYfttEmTt17sqs56
	x
X-Google-Smtp-Source: AGHT+IEBCtpYJlMNcECJhXNfmhU1deFeKmww44th3qp4Zu2R21F4tCx8fTKcF4G/5ClGyp5HqLZgzg==
X-Received: by 2002:ac8:5f46:0:b0:43a:eac7:cfff with SMTP id d75a77b69052e-43ff54ff734mr52452491cf.41.1717267839151;
        Sat, 01 Jun 2024 11:50:39 -0700 (PDT)
Message-ID: <3bb4e3fa-376b-4641-824d-61864b4e1e8e@citrix.com>
Date: Sat, 1 Jun 2024 19:50:36 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-GB
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: vcpumask_to_pcpumask() case study
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

One of the follow-on items I had from the bitops clean-up is this:

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 648d6dd475ba..9c3a017606ed 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3425,7 +3425,7 @@ static int vcpumask_to_pcpumask(
             unsigned int cpu;
 
             vcpu_id = ffsl(vmask) - 1;
-            vmask &= ~(1UL << vcpu_id);
+            vmask &= vmask - 1;
             vcpu_id += vcpu_bias;
             if ( (vcpu_id >= d->max_vcpus) )
                 return 0;

which yields the following improvement:

  add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-34 (-34)
  Function                                     old     new   delta
  vcpumask_to_pcpumask                         519     485     -34


While I (the programmer) can reason that the two expressions are
equivalent, the compiler can't, so we really are saving a SHL to
re-calculate (1 << vcpu_id) and swapping it for a LEA -1(vmask) which
happens to be hoisted above the ffsl() call.

However, the majority of the savings came from the fact that the old
code used to hold 1 in %r15 (across the entire function!) so it could
calculate (1 << vcpu_id) on each loop iteration.  With %r15 now free for
other use, we have one fewer thing spilt to the stack.

Anyway - while this goes to show that while/ffs logic should be extra
careful to use x &= x - 1 for its loop-condition logic, that's not all.
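As a standalone illustration (not the Xen code itself; my_ffsl() is a portable stand-in for Xen's ffsl(), using the GCC/clang builtin), the two ways of clearing the lowest set bit can be compared like this:

```c
#include <assert.h>

/* Stand-in for Xen's ffsl(): 1-based index of the lowest set bit, 0 if none. */
static int my_ffsl(unsigned long x)
{
    return x ? __builtin_ffsl((long)x) : 0;
}

/* Old style: recompute (1UL << bit) to clear the bit just found.
 * The shift depends on the ffsl() result, so it sits on the critical path. */
static int next_bit_shift(unsigned long *mask)
{
    int bit = my_ffsl(*mask) - 1;
    *mask &= ~(1UL << bit);
    return bit;
}

/* New style: x & (x - 1) clears the lowest set bit directly,
 * independent of the ffsl() result, so it can be computed early. */
static int next_bit_blsr(unsigned long *mask)
{
    int bit = my_ffsl(*mask) - 1;
    *mask &= *mask - 1;
    return bit;
}
```

Both produce the same sequence of bit indices for the same starting mask; the second form is what lets the compiler drop the per-iteration SHL (and, on x86 with BMI1, becomes a single BLSR).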

The rest of this function is crazy.  We're reading a guest-word at a
time in order to make a d->max_vcpus sized bitmap (with a reasonable
amount of opencoded logic to reassemble the fragments back into a vcpu
number).

PVH dom0 can reach here, and because it's not pv32, will be deemed to
have a guest word size of 64.  Also, word-at-a-time access for any HVM
guest is an insane overhead in terms of the pagewalks behind the
copy_from_hvm() infrastructure.

Instead, we should calculate the size of the bitmap at
DIV_ROUND_UP(d->max_vcpus, 8), copy the bitmap in one whole go, and then
just have a single for_each_set_bit() looping over it.  Importantly this
avoids needing to know or care about the guest word size.
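A minimal sketch of that proposed shape (plain C, with illustrative stand-ins for DIV_ROUND_UP, the bulk guest copy, and for_each_set_bit(); none of the names below are the actual Xen API):

```c
#include <stddef.h>
#include <string.h>

#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* Illustrative stand-in: walk a guest-provided vcpu bitmap with one bulk
 * copy, independent of the guest word size.  Returns the number of set
 * bits below max_vcpus. */
static int walk_vcpu_mask(const unsigned char *guest_buf,
                          unsigned int max_vcpus)
{
    unsigned char bitmap[DIV_ROUND_UP(8192, 8)]; /* PV max vCPUs is 8k */
    size_t bytes = DIV_ROUND_UP(max_vcpus, 8);
    int count = 0;

    /* One whole-buffer copy, replacing the per-guest-word copy loop
     * (in Xen this would be a single copy_from_guest-style access). */
    memcpy(bitmap, guest_buf, bytes);

    /* Stand-in for for_each_set_bit() over the copied bitmap. */
    for ( unsigned int v = 0; v < max_vcpus; v++ )
        if ( bitmap[v / 8] & (1u << (v % 8)) )
            count++;    /* here each vcpu v would be mapped to its pcpu */

    return count;
}
```

The point of the sketch is structural: one sized copy up front, then a plain bit iteration, with no reassembly of vcpu numbers from guest-word fragments.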

This removes 4 of the 6 hiding lfences; the pair for calculating
is_native to start with, and then one of the two pairs behind the use of
is_native to select the type of the access.

The only complication is this:  Right now, PV max vCPUs is 8k, so we
could even get away with this being an on-stack bitmap.  However, the
vast majority of PV guests are small, and a 64-bit bitmap would do fine.

But, given this is just one example, wouldn't it be better if we just
unconditionally had a marshalling buffer for hypercalls?  There's the
XLAT area, but it doesn't exist for PV64, and we've got various other
pieces of scratch per-cpu data.

If not, having a 128/256-bit bitmap on the stack will still be good
enough in practice, and still ok at amortising the PVH dom0 costs.

Thoughts?

~Andrew



From xen-devel-bounces@lists.xenproject.org Sat Jun 01 19:13:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 19:13:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734314.1140497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDUAk-0003J0-FK; Sat, 01 Jun 2024 19:13:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734314.1140497; Sat, 01 Jun 2024 19:13:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDUAk-0003It-C9; Sat, 01 Jun 2024 19:13:46 +0000
Received: by outflank-mailman (input) for mailman id 734314;
 Sat, 01 Jun 2024 19:13:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mmUY=ND=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sDUAj-0003In-CT
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 19:13:45 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0fa732cb-204b-11ef-b4bb-af5377834399;
 Sat, 01 Jun 2024 21:13:43 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 5F9D84EE0737;
 Sat,  1 Jun 2024 21:13:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fa732cb-204b-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH] automation/eclair_analysis: add more clean MISRA guidelines
Date: Sat,  1 Jun 2024 21:13:38 +0200
Message-Id: <3af20044d2906a6f873aac1b6dd2b41c5b9e0507.1717269049.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they are added
to the list of clean guidelines.

Some guidelines listed in the additional clean section for ARM are also
clean on x86, so they can be removed from there.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
+Cc Oleksii for an opinion on the inclusion for 4.19

This is a follow-up to series
https://lore.kernel.org/xen-devel/cover.1717236930.git.nicola.vetrini@bugseng.com/
and depends on it (otherwise the gitlab MISRA analysis would fail on
violations of Rule 20.12).
If it is decided that the dependent series should go in for 4.19, then
my suggestion is to include this as well, to extend the gating to more
guidelines.
---
 automation/eclair_analysis/ECLAIR/tagging.ecl | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
index a354ff322e03..b829655ca0bc 100644
--- a/automation/eclair_analysis/ECLAIR/tagging.ecl
+++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
@@ -60,6 +60,7 @@ MC3R1.R11.7||
 MC3R1.R11.9||
 MC3R1.R12.5||
 MC3R1.R14.1||
+MC3R1.R14.4||
 MC3R1.R16.7||
 MC3R1.R17.1||
 MC3R1.R17.3||
@@ -73,6 +74,7 @@ MC3R1.R20.4||
 MC3R1.R20.6||
 MC3R1.R20.9||
 MC3R1.R20.11||
+MC3R1.R20.12||
 MC3R1.R20.13||
 MC3R1.R20.14||
 MC3R1.R21.3||
@@ -105,7 +107,7 @@ if(string_equal(target,"x86_64"),
 )
 
 if(string_equal(target,"arm64"),
-    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
+    service_selector({"additional_clean_guidelines","MC3R1.R16.6||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.3"})
 )
 
 -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 21:25:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 21:25:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734340.1140544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDWDQ-0001f0-VJ; Sat, 01 Jun 2024 21:24:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734340.1140544; Sat, 01 Jun 2024 21:24:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDWDQ-0001et-QZ; Sat, 01 Jun 2024 21:24:40 +0000
Received: by outflank-mailman (input) for mailman id 734340;
 Sat, 01 Jun 2024 21:24:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDWDO-0001ei-RR; Sat, 01 Jun 2024 21:24:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDWDO-0004gE-Pp; Sat, 01 Jun 2024 21:24:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDWDO-0003X9-CF; Sat, 01 Jun 2024 21:24:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDWDO-0001Io-Bx; Sat, 01 Jun 2024 21:24:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zGLnPll++JUj6EyaJeVUzvtUysd2qH4JZw3H2PPBa9M=; b=lyOScCGOsmqct5cI5I9WiaByhN
	Kv/fxlZixdknmhDNsMgLcanLRyfPJ7XMTxF7StMEd/5czC1htPWjjAspZKSGlawmtUTyViVVk8shH
	khds76EWIbhk9rO6Lg2CNrcqu6N8TngifgRrxdMc8E9NgQM0QWb03K8NxL1a8QhxrCtM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186220-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186220: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-examine:reboot:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5f7606c048f7cca1a4301b321af70791c1d22378
X-Osstest-Versions-That:
    xen=03147e6837ff045dbc328be876b9600f7040c771
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 21:24:38 +0000

flight 186220 xen-unstable real [real]
flight 186224 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186220/
http://logs.test-lab.xenproject.org/osstest/logs/186224/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186224-retest
 test-armhf-armhf-examine      8 reboot              fail pass in 186224-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186214
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186214
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186214
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186214
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186214
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186214
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5f7606c048f7cca1a4301b321af70791c1d22378
baseline version:
 xen                  03147e6837ff045dbc328be876b9600f7040c771

Last test of basis   186214  2024-06-01 01:08:41 Z    0 days
Testing same since   186220  2024-06-01 08:26:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Oleksii Kurochko <oleksii.kurochko@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   03147e6837..5f7606c048  5f7606c048f7cca1a4301b321af70791c1d22378 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 01 23:27:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 23:27:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734358.1140553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDY7p-0006hy-B6; Sat, 01 Jun 2024 23:27:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734358.1140553; Sat, 01 Jun 2024 23:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDY7p-0006hr-7J; Sat, 01 Jun 2024 23:27:01 +0000
Received: by outflank-mailman (input) for mailman id 734358;
 Sat, 01 Jun 2024 23:26:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YPgr=ND=treblig.org=linux@srs-se1.protection.inumbo.net>)
 id 1sDY73-0006gd-Hc
 for xen-devel@lists.xenproject.org; Sat, 01 Jun 2024 23:26:13 +0000
Received: from mx.treblig.org (mx.treblig.org [2a00:1098:5b::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 543cfa5e-206e-11ef-90a1-e314d9c70b13;
 Sun, 02 Jun 2024 01:26:12 +0200 (CEST)
Received: from localhost ([127.0.0.1] helo=dalek.home.treblig.org)
 by mx.treblig.org with esmtp (Exim 4.96)
 (envelope-from <linux@treblig.org>) id 1sDY6v-003lo2-2B;
 Sat, 01 Jun 2024 23:26:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 543cfa5e-206e-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=treblig.org
	; s=bytemarkmx; h=MIME-Version:Message-ID:Date:Subject:From:Content-Type:From
	:Subject; bh=y3toXmjpE9FC/9nFEs8AsST4kK6tsu5EB+9tVoO4zzI=; b=Y2zUpNiB/hULCH7b
	PXwi63YHwj1Acizrz4gbxnX5hrKfgMbb8s4PFq+HSb8ewOzVTPwYwIo0t/XILqn8m+qA6dG3WNMis
	AeuSTyYsJbMXeMy9OA3whhJUgCtC7bwRtlBbQePV/k/eUjZDyn60ftXbTGoMsEeIg6XW3bOZUJuiv
	u4ejGw/Cb/2V3xoYjYUcD0blOwmNR4Sth7N6JFkfwOMeQocujlODvEzNbkd0APlz9q7vwvdd8gXDL
	nfuj0jZmnD/HO/aY4QY/h4ST+cvZFCJWt3DeYlW671WqjjLiKLpBDrd2sLcZ/QuMszUjQ49rCy902
	ZssT6wp4lyFN/gEjsg==;
From: linux@treblig.org
To: oleksandr_andrushchenko@epam.com,
	perex@perex.cz,
	tiwai@suse.com
Cc: xen-devel@lists.xenproject.org,
	linux-sound@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	"Dr. David Alan Gilbert" <linux@treblig.org>
Subject: [PATCH] ALSA: xen-front: remove unused struct 'alsa_sndif_hw_param'
Date: Sun,  2 Jun 2024 00:26:04 +0100
Message-ID: <20240601232604.198662-1-linux@treblig.org>
X-Mailer: git-send-email 2.45.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: "Dr. David Alan Gilbert" <linux@treblig.org>

'alsa_sndif_hw_param' has been unused since the original
commit 1cee559351a7 ("ALSA: xen-front: Implement ALSA virtual sound
driver").

Remove it.

Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>
---
 sound/xen/xen_snd_front_alsa.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/sound/xen/xen_snd_front_alsa.c b/sound/xen/xen_snd_front_alsa.c
index 31b5dc0f34d2..b229eb6f7057 100644
--- a/sound/xen/xen_snd_front_alsa.c
+++ b/sound/xen/xen_snd_front_alsa.c
@@ -69,11 +69,6 @@ struct alsa_sndif_sample_format {
 	snd_pcm_format_t alsa;
 };
 
-struct alsa_sndif_hw_param {
-	u8 sndif;
-	snd_pcm_hw_param_t alsa;
-};
-
 static const struct alsa_sndif_sample_format ALSA_SNDIF_FORMATS[] = {
 	{
 		.sndif = XENSND_PCM_FORMAT_U8,
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 01 23:47:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 Jun 2024 23:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734365.1140562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDYRG-00015J-SV; Sat, 01 Jun 2024 23:47:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734365.1140562; Sat, 01 Jun 2024 23:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDYRG-00015C-Pk; Sat, 01 Jun 2024 23:47:06 +0000
Received: by outflank-mailman (input) for mailman id 734365;
 Sat, 01 Jun 2024 23:47:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDYRG-000152-AD; Sat, 01 Jun 2024 23:47:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDYRG-0007BO-2j; Sat, 01 Jun 2024 23:47:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDYRF-0000Hu-Nk; Sat, 01 Jun 2024 23:47:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDYRF-0006N9-N7; Sat, 01 Jun 2024 23:47:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pfFyE5IvTtKF7m13R7zaebf6Ie6JC1tdeCAUkbTaXlQ=; b=Bgjc3yAZQrsbYIggeh0ZkE00LG
	iCgaEIgUm7ajbTBQ4tmjIhzLbhaoyDEMXgoMzGhn81d61SVSyNA9QXAtGkc9RbietXEDVjlirzvyv
	25qf9AXLGtVbYHBgDLQIXFq8Fu626qEUIpB60NDWBlCFhUdVlTn++gRjT0Lav9W33JF0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186221-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186221: FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-rtds:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-rtds:host-install(5):broken:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:host-ping-check-xen:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:host-ping-check-xen:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cc8ed4d0a8486c7472cd72ec3c19957e509dc68c
X-Osstest-Versions-That:
    linux=d8ec19857b095b39d114ae299713bd8ea6c1e66a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 Jun 2024 23:47:05 +0000

flight 186221 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186221/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-rtds        <job status>                 broken  in 186217

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     5 host-install(5) broken in 186217 pass in 186221
 test-armhf-armhf-examine      8 reboot           fail in 186217 pass in 186221
 test-armhf-armhf-xl-credit1   8 xen-boot                   fail pass in 186217

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2  10 host-ping-check-xen     fail blocked in 186212
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186212
 test-armhf-armhf-xl-arndale 10 host-ping-check-xen fail in 186217 blocked in 186212
 test-armhf-armhf-xl           8 xen-boot            fail in 186217 like 186212
 test-armhf-armhf-xl-credit2   8 xen-boot            fail in 186217 like 186212
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail in 186217 like 186212
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186217 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186217 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186212
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cc8ed4d0a8486c7472cd72ec3c19957e509dc68c
baseline version:
 linux                d8ec19857b095b39d114ae299713bd8ea6c1e66a

Last test of basis   186212  2024-05-31 19:10:39 Z    1 days
Testing same since   186217  2024-06-01 02:33:17 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Adrián Larumbe <adrian.larumbe@collabora.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Alina Yu <alina_yu@richtek.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Barry Song <baohua@kernel.org>
  Boris Brezillon <boris.brezillon@collabora.com>
  Breno Leitao <leitao@debian.org>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Coly Li <colyli@suse.de>
  Damien Le Moal <dlemoal@kernel.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Danilo Krummrich <dakr@redhat.com>
  Dave Airlie <airlied@redhat.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dongsheng Yang <dongsheng.yang@easystack.cn>
  Douglas Anderson <dianders@chromium.org>
  Fedor Pchelkin <pchelkin@ispras.ru>
  Felix Kuehling <felix.kuehling@amd.com>
  Gerald Loacker <gerald.loacker@wolfvision.net>
  Gnattu OC <gnattuoc@me.com>
  Guenter Roeck <linux@roeck-us.net>
  Hannes Reinecke <hare@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hawking Zhang <Hawking.Zhang@amd.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  hexue <xue01.he@samsung.com>
  Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
  hmtheboy154 <buingoc67@gmail.com>
  Imre Deak <imre.deak@intel.com>
  Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
  Jani Nikula <jani.nikula@intel.com>
  Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
  Jassi Brar <jassisinghbrar@gmail.com>
  Javier Carrasco <javier.carrasco.cruz@gmail.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jesse Zhang <Jesse.Zhang@amd.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Jian Ye <jian.ye@intel.com>
  Jim Wylder <jwylder@google.com>
  John Harrison <John.C.Harrison@Intel.com>
  José Roberto de Souza <jose.souza@intel.com>
  Julia Filipchuk <julia.filipchuk@intel.com>
  Kanchan Joshi <joshi.k@samsung.com>
  Keith Busch <kbusch@kernel.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kundan Kumar <kundan.kumar@samsung.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lukas Bulwahn <lukas.bulwahn@redhat.com>
  Luke D. Jones <luke@ljones.dev>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Matthew Auld <matthew.auld@intel.com>
  Matthew Brost <matthew.brost@intel.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Mike Snitzer <snitzer@kernel.org>
  Mingzhe Zou <mingzhe.zou@easystack.cn>
  Mohamed Ahmed <mohamedahmedegypt2001@gmail.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nilay Shroff <nilay@linux.ibm.com>
  Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
  Nirmoy Das <nirmoy.das@intel.com>
  Nícolas F. R. A. Prado <nfraprado@collabora.com>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Colberg <peter.colberg@intel.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rajneesh Bhardwaj <rajneesh.bhardwaj@amd.com>
  Robin Murphy <robin.murphy@arm.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sergey Matyukevich <sergey.matyukevich@syntacore.com>
  Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Tanmay Shah <tanmay.shah@amd.com>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thierry Reding <treding@nvidia.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Thomas Zimmermann <tzimmermann@suse.de>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Val Packett <val@packett.cool>
  Vidya Srinivas <vidya.srinivas@intel.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Wachowski, Karol <karol.wachowski@intel.com>
  Waiman Long <longman@redhat.com>
  Witold Sadowski <wsadowski@marvell.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-rtds broken

Not pushing.

(No revision log; it would be 3467 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 02 04:34:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 04:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734394.1140573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDcuy-0001ol-Us; Sun, 02 Jun 2024 04:34:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734394.1140573; Sun, 02 Jun 2024 04:34:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDcuy-0001oa-PU; Sun, 02 Jun 2024 04:34:04 +0000
Received: by outflank-mailman (input) for mailman id 734394;
 Sun, 02 Jun 2024 04:34:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDcux-0001oQ-EO; Sun, 02 Jun 2024 04:34:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDcux-0005fG-73; Sun, 02 Jun 2024 04:34:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDcuw-0004Ge-7c; Sun, 02 Jun 2024 04:34:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDcuv-0000QF-UC; Sun, 02 Jun 2024 04:34:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BKv9qKg5g6VVTDz4NSGRQ9qIIuqk7Rvq7UM1DZhFoms=; b=5GdUn/Y2RPEOdOuVWaP03VaKRn
	yQGRwlxafgBOeTIV2m7BFVm5/p9E1HjTf5d2GbFuv+hqf/JZdMVXItHSGsJLjCp5+6rhfqWJyE0bM
	x2Iw2M/VwEe7a0xNr+a2BqxjUKvTCXCvDA4GrZei4w3iHzFSrvBAaM1sdtHX4k/2eFc8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186227-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186227: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=de2330450ff71df5a609fe48e2153eb4854d9359
X-Osstest-Versions-That:
    ovmf=7339bfeffa3fa30b18dce86409c0112039bacec5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Jun 2024 04:34:01 +0000

flight 186227 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186227/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 de2330450ff71df5a609fe48e2153eb4854d9359
baseline version:
 ovmf                 7339bfeffa3fa30b18dce86409c0112039bacec5

Last test of basis   186219  2024-06-01 06:14:48 Z    0 days
Testing same since   186227  2024-06-02 02:11:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Du Lin <du.lin@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7339bfeffa..de2330450f  de2330450ff71df5a609fe48e2153eb4854d9359 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Jun 02 06:55:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 06:55:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734408.1140583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDf7F-00016W-PO; Sun, 02 Jun 2024 06:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734408.1140583; Sun, 02 Jun 2024 06:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDf7F-00016P-MC; Sun, 02 Jun 2024 06:54:53 +0000
Received: by outflank-mailman (input) for mailman id 734408;
 Sun, 02 Jun 2024 06:54:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDf7E-00016F-2F; Sun, 02 Jun 2024 06:54:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDf7D-0008Vs-Ru; Sun, 02 Jun 2024 06:54:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDf7D-0007xN-E8; Sun, 02 Jun 2024 06:54:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDf7D-0002O5-Di; Sun, 02 Jun 2024 06:54:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7DCNdUMYESg9USNjTUMBZAJMrvbCM05+4d1BBA/oru4=; b=bk/9TPE+I1LYDsaB3W8Q3YRoNv
	aSOWX9m9Ffp7CrsiugWIaTlk9YpEFI41cJRnSPLdVzJpjLTaQtpzfy8uHxXA9JKO9Nemd0MSnWtYl
	QosnUxPPEpNXYmBQK3Jm7nHxyu+T9mKAa+JEuC7kCIeH1KOEjAVMNxWI1GTG1AkQWX9s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186225-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186225: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=5f7606c048f7cca1a4301b321af70791c1d22378
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Jun 2024 06:54:51 +0000

flight 186225 xen-unstable real [real]
flight 186228 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186225/
http://logs.test-lab.xenproject.org/osstest/logs/186228/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail pass in 186228-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186228 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186228 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186220
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186220
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186220
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186220
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186220
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186220
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  5f7606c048f7cca1a4301b321af70791c1d22378

Last test of basis   186220  2024-06-01 08:26:08 Z    0 days
Testing same since   186225  2024-06-01 21:38:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5f7606c048..c2d5e63c73  c2d5e63c7380c7cb435d00211b512c53accb528e -> master


From xen-devel-bounces@lists.xenproject.org Sun Jun 02 11:29:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 11:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734459.1140597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDjOP-0006AU-G6; Sun, 02 Jun 2024 11:28:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734459.1140597; Sun, 02 Jun 2024 11:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDjOP-0006AN-D8; Sun, 02 Jun 2024 11:28:53 +0000
Received: by outflank-mailman (input) for mailman id 734459;
 Sun, 02 Jun 2024 11:28:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDjON-0006AD-Vo; Sun, 02 Jun 2024 11:28:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDjON-00059k-KA; Sun, 02 Jun 2024 11:28:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDjON-0001K0-81; Sun, 02 Jun 2024 11:28:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDjON-0000RM-7W; Sun, 02 Jun 2024 11:28:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hJVcrryqJIX/J9A4L1qEMv4C8imojOmDyugNFR95Yfk=; b=ffEecPlkYwKqoBkyYasCvu4Tpy
	6PLgxpR7comRvqy+Q9I5Tt94jzNTHsIXs8f4fI3JT3nqkwjyXVlQaSt7QMGh9aIlmG9OGECLyQ/h2
	atsjRTHh2/+HPPvMYxWteH6k70liqUzhBhgufMT9c2c46IYPoHIC22uByHjmgcR+Gx6c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186226-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186226: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:host-ping-check-xen:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=89be4025b0db42db830d72d532437248774cba49
X-Osstest-Versions-That:
    linux=d8ec19857b095b39d114ae299713bd8ea6c1e66a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Jun 2024 11:28:51 +0000

flight 186226 linux-linus real [real]
flight 186230 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186226/
http://logs.test-lab.xenproject.org/osstest/logs/186230/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-raw   12 debian-di-install fail in 186230 REGR. vs. 186212

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-multivcpu 10 host-ping-check-xen fail pass in 186230-retest
 test-armhf-armhf-xl-raw       8 xen-boot            fail pass in 186230-retest
 test-armhf-armhf-examine      8 reboot              fail pass in 186230-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186212
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186230 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186230 never pass
 test-armhf-armhf-xl           8 xen-boot                     fail  like 186212
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186212
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                89be4025b0db42db830d72d532437248774cba49
baseline version:
 linux                d8ec19857b095b39d114ae299713bd8ea6c1e66a

Last test of basis   186212  2024-05-31 19:10:39 Z    1 days
Failing since        186217  2024-06-01 02:33:17 Z    1 days    3 attempts
Testing same since   186226  2024-06-02 00:13:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Darrick J. Wong" <djwong@kernel.org>
  "Rob Herring (Arm)" <robh@kernel.org>
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Adrián Larumbe <adrian.larumbe@collabora.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Alina Yu <alina_yu@richtek.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Barry Song <baohua@kernel.org>
  Boris Brezillon <boris.brezillon@collabora.com>
  Breno Leitao <leitao@debian.org>
  Chandan Babu R <chandanbabu@kernel.org>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen-Yu Tsai <wenst@chromium.org>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Coly Li <colyli@suse.de>
  Damien Le Moal <dlemoal@kernel.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Danilo Krummrich <dakr@redhat.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Disha Goel <disgoel@linux.ibm.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dongsheng Yang <dongsheng.yang@easystack.cn>
  Douglas Anderson <dianders@chromium.org>
  Fedor Pchelkin <pchelkin@ispras.ru>
  Felix Kuehling <felix.kuehling@amd.com>
  Gerald Loacker <gerald.loacker@wolfvision.net>
  Gnattu OC <gnattuoc@me.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hannes Reinecke <hare@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hawking Zhang <Hawking.Zhang@amd.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  hexue <xue01.he@samsung.com>
  Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
  hmtheboy154 <buingoc67@gmail.com>
  Imre Deak <imre.deak@intel.com>
  Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
  Jani Nikula <jani.nikula@intel.com>
  Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
  Jassi Brar <jassisinghbrar@gmail.com>
  Javier Carrasco <javier.carrasco.cruz@gmail.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jesse Zhang <Jesse.Zhang@amd.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Jian Ye <jian.ye@intel.com>
  Jim Wylder <jwylder@google.com>
  John Garry <john.g.garry@oracle.com>
  John Harrison <John.C.Harrison@Intel.com>
  José Roberto de Souza <jose.souza@intel.com>
  Julia Filipchuk <julia.filipchuk@intel.com>
  Kanchan Joshi <joshi.k@samsung.com>
  Keith Busch <kbusch@kernel.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kundan Kumar <kundan.kumar@samsung.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lukas Bulwahn <lukas.bulwahn@redhat.com>
  Luke D. Jones <luke@ljones.dev>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Matthew Auld <matthew.auld@intel.com>
  Matthew Brost <matthew.brost@intel.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Matthias Maennich <maennich@google.com>
  Mickaël Salaün <mic@digikod.net>
  Miguel Ojeda <ojeda@kernel.org>
  Mike Snitzer <snitzer@kernel.org>
  Mingzhe Zou <mingzhe.zou@easystack.cn>
  Mohamed Ahmed <mohamedahmedegypt2001@gmail.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nilay Shroff <nilay@linux.ibm.com>
  Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
  Nirmoy Das <nirmoy.das@intel.com>
  Nícolas F. R. A. Prado <nfraprado@collabora.com>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Colberg <peter.colberg@intel.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rajneesh Bhardwaj <rajneesh.bhardwaj@amd.com>
  Ritesh Harjani (IBM) <ritesh.list@gmail.com>
  Rob Herring (Arm) <robh@kernel.org>
  Robin Murphy <robin.murphy@arm.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sergey Matyukevich <sergey.matyukevich@syntacore.com>
  Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Tanmay Shah <tanmay.shah@amd.com>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thierry Reding <treding@nvidia.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Thomas Zimmermann <tzimmermann@suse.de>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Val Packett <val@packett.cool>
  Vidya Srinivas <vidya.srinivas@intel.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Wachowski, Karol <karol.wachowski@intel.com>
  Waiman Long <longman@redhat.com>
  Witold Sadowski <wsadowski@marvell.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4320 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 02 13:02:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 13:02:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734478.1140608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDkqQ-0000Cn-CM; Sun, 02 Jun 2024 13:01:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734478.1140608; Sun, 02 Jun 2024 13:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDkqQ-0000Cg-8L; Sun, 02 Jun 2024 13:01:54 +0000
Received: by outflank-mailman (input) for mailman id 734478;
 Sun, 02 Jun 2024 13:01:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bftg=NE=samsung.com=nj.shetty@srs-se1.protection.inumbo.net>)
 id 1sDkqN-0000Ca-HR
 for xen-devel@lists.xenproject.org; Sun, 02 Jun 2024 13:01:52 +0000
Received: from mailout4.samsung.com (mailout4.samsung.com [203.254.224.34])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4222c35b-20e0-11ef-b4bb-af5377834399;
 Sun, 02 Jun 2024 15:01:46 +0200 (CEST)
Received: from epcas5p1.samsung.com (unknown [182.195.41.39])
 by mailout4.samsung.com (KnoxPortal) with ESMTP id
 20240602130141epoutp04ef44592810056b03551e42c3bcf5e24b~VMYn1wsGl2490224902epoutp04M
 for <xen-devel@lists.xenproject.org>; Sun,  2 Jun 2024 13:01:41 +0000 (GMT)
Received: from epsnrtp1.localdomain (unknown [182.195.42.162]) by
 epcas5p2.samsung.com (KnoxPortal) with ESMTP id
 20240602130140epcas5p25c2d695eda36cf366a0a4a4a01c4d72e~VMYnNGZ7q1672216722epcas5p2-;
 Sun,  2 Jun 2024 13:01:40 +0000 (GMT)
Received: from epsmgec5p1new.samsung.com (unknown [182.195.38.183]) by
 epsnrtp1.localdomain (Postfix) with ESMTP id 4VscSM49VPz4x9Ps; Sun,  2 Jun
 2024 13:01:39 +0000 (GMT)
Received: from epcas5p1.samsung.com ( [182.195.41.39]) by
 epsmgec5p1new.samsung.com (Symantec Messaging Gateway) with SMTP id
 1B.21.08853.33D6C566; Sun,  2 Jun 2024 22:01:39 +0900 (KST)
Received: from epsmtrp1.samsung.com (unknown [182.195.40.13]) by
 epcas5p1.samsung.com (KnoxPortal) with ESMTPA id
 20240531140203epcas5p1fed913b6729b684e0916dfd62787be13~Ul6w9HJG72010620106epcas5p1W;
 Fri, 31 May 2024 14:02:03 +0000 (GMT)
Received: from epsmgmcp1.samsung.com (unknown [182.195.42.82]) by
 epsmtrp1.samsung.com (KnoxPortal) with ESMTP id
 20240531140203epsmtrp1d3e14a45850cff935f0c63202c392ca7~Ul6w8M3W40787407874epsmtrp1N;
 Fri, 31 May 2024 14:02:03 +0000 (GMT)
Received: from epsmtip1.samsung.com ( [182.195.34.30]) by
 epsmgmcp1.samsung.com (Symantec Messaging Gateway) with SMTP id
 35.7D.18846.B58D9566; Fri, 31 May 2024 23:02:03 +0900 (KST)
Received: from nj.shetty?samsung.com (unknown [107.99.41.245]) by
 epsmtip1.samsung.com (KnoxPortal) with ESMTPA id
 20240531140201epsmtip127bef841828dca583bedc8f707b5bfe0~Ul6uqPXcn2218222182epsmtip19;
 Fri, 31 May 2024 14:02:01 +0000 (GMT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4222c35b-20e0-11ef-b4bb-af5377834399
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout4.samsung.com 20240602130141epoutp04ef44592810056b03551e42c3bcf5e24b~VMYn1wsGl2490224902epoutp04M
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
	s=mail20170921; t=1717333301;
	bh=rHCZCFCbAOCe+0obDysUSNu4wORZPi6NKPSAyIbFDII=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=SKRqhHmfBSPkcAIqZvms8GLmHDo2PwGcOTDGLPbSti0I/AVPzEX3DbCyTMq5p0VfY
	 G3MlwRC4bK5x8QDJZbX4sDElfRe+jHDX+V3f+0TZ0rGxg9GMBwd4Pau0mHmFtxPILu
	 hnmwyKfX+W6O1XXAzG1BwU2eozdlCtGVS0illt5w=
X-AuditID: b6c32a44-d67ff70000002295-79-665c6d33bd5b
Date: Fri, 31 May 2024 13:55:01 +0000
From: Nitesh Shetty <nj.shetty@samsung.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, "Martin K. Petersen"
	<martin.petersen@oracle.com>, Richard Weinberger <richard@nod.at>, Anton
	Ivanov <anton.ivanov@cambridgegreys.com>, Johannes Berg
	<johannes@sipsolutions.net>, Josef Bacik <josef@toxicpanda.com>, Ilya
	Dryomov <idryomov@gmail.com>, Dongsheng Yang <dongsheng.yang@easystack.cn>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	linux-um@lists.infradead.org, linux-block@vger.kernel.org,
	nbd@other.debian.org, ceph-devel@vger.kernel.org,
	xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org, Bart Van Assche
	<bvanassche@acm.org>
Subject: Re: [PATCH 13/14] block: remove unused queue limits API
Message-ID: <20240531135501.c3yes3jbg7dl3g5w@nj.shetty@samsung.com>
MIME-Version: 1.0
In-Reply-To: <20240531074837.1648501-14-hch@lst.de>
X-Brightmail-Tracker: H4sIAAAAAAAAA01Tf1RTZRg+371328UaXSYcPyA8eD1WQsAWY1xIFI8cuxzMFvpHPw+N7bIR
	+9U2JLEQTKoxCQjiwIIpVCJoioAIEnCCgwocwYJE6ICa48gPicEy6EirwYbH/573ed/nfb/n
	e78PR3lzbD88VW1gdGqJkmRvwJq7t78YEq56L4VfXBhAPSy7iFFnxwvYVKntH5SyjX6NUKWW
	YUDVnu1BqDuWL9jUQv0iRq3cE1DtY8GUaaSFTeXX7aNqrjkQ6vHkjwhV3PoQUGWD37GopaZi
	JNaLHhpOoB/Unkbp2eJCQOdeeoZuNY9z6KEb6XTj90fpbtswRreNZrPpUctH9MLkGEb/9vdn
	KF2Q34zRhY0XWbS9YbP4uXfSdigYiYzRBTJqqUaWqpbHkAkHkvYkRYj4ghBBFBVJBqolKiaG
	jNsnDtmbqnRaJAMPSZTpTkos0evJsJ07dJp0AxOo0OgNMSSjlSm1Qm2oXqLSp6vloWrGEC3g
	81+JcBZ+kKa4XDoFtEbWx01/5nGyQQeWBzxwSAjhlyvL7DywAecRbQA2LPZwXMEigPMDd8GT
	4NJVK7Iu+aOxAnElWgHsnx3CXIEdwKqiCbBahRHb4LlyCysP4DibCIb9/+GrtDdBwsmZG2td
	UeIuBud+6ljrupGIhY9Kbq1pucQeONE6jbqwF+wtt2KrfTyIcDifd3hVCwkbDvN/nuK4ThQH
	/6276ja0Ec5ca3LzfnC64HM3zoC1JWfYLvFxAM0jZuBK7IK5fQVrw1BCAevn7G4+AH7Tdx5x
	8Z4w//G6fS5ssazjrfDchVNsF/aFt5Zy3JiG1ok+9xWdB3B2YBgtBJvNTxkyPzXPhaOh0XaM
	ZXYaRQl/WOPAXXA7vHAl7BRg1QFfRqtXyRlphFagZjKerFmqUTWAtXcfFNcCbp90hHYBBAdd
	AOIo6c39KuvdFB5XJjmcyeg0Sbp0JaPvAhHOBRWhfj5SjfPjqA1JAmEUXygSiYRR4SIBuYk7
	m1sp4xFyiYFJYxgto1vXIbiHXzbiFfns7jSrDakcXzkoDYqt3zL1a8eRm1s4C6I7jClr28jr
	XP83tZ1hh2bzOMZXl+vGgnrt3sKZxF3enUm+Ro71jUd6n5qdwX9JM+9Z7NU+g4tG0w9bPzkx
	ai17O96abD16OqJNMcWoxidr24ubxcrMhIrfddoMqV6crJtJmV8uqRT7H7Dw9zvOvHAzsQo+
	32Lq2XSlyDjQzSn7BXn/2GAv6Xm/IUv4rcCRfV22d8kUudumO1jd451TwXopueStFc/5+M7+
	/ZdfSwF4dahg4cPpHMeRsZL4cB56XK6hh5GeyHL5g1hWP2uhShVwn/fpbVWDyaPmut0+Fn2y
	3X/y5aBEEtMrJIIgVKeX/A8paUBugAQAAA==
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFlrOIsWRmVeSWpSXmKPExsWy7bCSnG70jcg0g1vHTS3ezNjIYrH6bj+b
	xbQPP5ktPtycxGQxbd4VRouVq48yWdyf185m8XHDJxaLPw8NLfbe0rbovr6DzaJ3lY/F8uP/
	mCx+P13LZDF55xtGixnnF7NafN8ymclB0OPyFW+P5yuXMXu8njyB0aN1K7fHzll32T0uny31
	2Lyk3uPwhyssHrtvNrB53JxX6PHx6S0Wj6vfmpk9+nu3sXhM2LyR1ePzJrkA/igum5TUnMyy
	1CJ9uwSujD/7bQt2M1V8enaMrYGxi6mLkZNDQsBE4tHmOUA2F4eQwHZGiSnbN7FDJCQllv09
	wgxhC0us/PecHaLoI6PEnJPnWUESLAKqEmtmzgOyOTjYBLQlTv/nAAmLCChJPH11lhGknlng
	GYtE39JLjCAJYQEHia9TroHZvALOEvd2vgRbICQQJdE96SQTRFxQ4uTMJywgNrOAmcS8zQ+Z
	QeYzC0hLLP/HAWJyChhLvO+qnMAoMAtJwywkDbMQGhYwMq9iFE0tKM5Nz00uMNQrTswtLs1L
	10vOz93ECI5LraAdjMvW/9U7xMjEwXiIUYKDWUmE91d6RJoQb0piZVVqUX58UWlOavEhRmkO
	FiVxXuWczhQhgfTEktTs1NSC1CKYLBMHp1QDU92Ng1vsGPLZ9/Q3P6mtF/45JfwVp+XRWxsW
	apxjUN5zSTn9x97PQjtt5hWJVNTdbBAvnpmak8O3T19A1IH3/3uhr9PzeZwuyp+6p5068b/2
	Hb7Ddlpxl3Z8eJcsea0gdc5h+U3zNBmru1/vKYx7nXojdu2t/6/CL3/+I/p8qtLl3847l/5I
	+hpaaCMbIiBwtjV3+znPnq+mtwT3cV5/6Nhav/KTjWQiry5nqdvE889vHWljnrvFWmF3widZ
	Jh5PsaeyPenOihs/rz4wo6X+69O0LY+uO5zK4zue1bpstc3vov3FFnpScyVmdtadnPWge7ag
	6hZ7nlV37kiItZ5dy7z6pSATKw+vXcq5aeV3NZVYijMSDbWYi4oTAW59SSg6AwAA
X-CMS-MailID: 20240531140203epcas5p1fed913b6729b684e0916dfd62787be13
X-Msg-Generator: CA
Content-Type: multipart/mixed;
	boundary="----4kURd6qQA_jt7Hem8gQqXiu1-FVD8qSeML2_RvQot6TDP4ZY=_48cac_"
X-Sendblock-Type: REQ_APPROVE
CMS-TYPE: 105P
DLP-Filter: Pass
X-CFilter-Loop: Reflected
X-CMS-RootMailID: 20240531140203epcas5p1fed913b6729b684e0916dfd62787be13
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-14-hch@lst.de>
	<CGME20240531140203epcas5p1fed913b6729b684e0916dfd62787be13@epcas5p1.samsung.com>

------4kURd6qQA_jt7Hem8gQqXiu1-FVD8qSeML2_RvQot6TDP4ZY=_48cac_
Content-Type: text/plain; charset="utf-8"; format="flowed"
Content-Disposition: inline

On 31/05/24 09:48AM, Christoph Hellwig wrote:
>Remove all APIs that are unused now that sd and sr have been converted
>to the atomic queue limits API.
>
>Signed-off-by: Christoph Hellwig <hch@lst.de>
>Reviewed-by: Bart Van Assche <bvanassche@acm.org>
>---
Reviewed-by: Nitesh Shetty <nj.shetty@samsung.com>

------4kURd6qQA_jt7Hem8gQqXiu1-FVD8qSeML2_RvQot6TDP4ZY=_48cac_
Content-Type: text/plain; charset="utf-8"


------4kURd6qQA_jt7Hem8gQqXiu1-FVD8qSeML2_RvQot6TDP4ZY=_48cac_--


From xen-devel-bounces@lists.xenproject.org Sun Jun 02 14:32:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 14:32:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734500.1140617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDmFx-0002dY-Ge; Sun, 02 Jun 2024 14:32:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734500.1140617; Sun, 02 Jun 2024 14:32:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDmFx-0002dR-D2; Sun, 02 Jun 2024 14:32:21 +0000
Received: by outflank-mailman (input) for mailman id 734500;
 Sun, 02 Jun 2024 14:32:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDmFw-0002dH-Fw; Sun, 02 Jun 2024 14:32:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDmFw-0008JK-8B; Sun, 02 Jun 2024 14:32:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDmFv-0000tm-U3; Sun, 02 Jun 2024 14:32:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDmFv-0006VZ-Td; Sun, 02 Jun 2024 14:32:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=w2YZPY8yEw+sY1TZ1XubWyZVX2c01EoUzEDt0oUJFrE=; b=lJxYKBZcGwjycWp12d7/kW1twl
	NH+YjUM6ZSFnDUAdIwRP8LferU78HskESCF62uO6Xwom7byQThfOt35mM0wNEDI5WlrMCTr8zfzcf
	zzAIjRZY+sE1ljJbKUJGbXIvAOnY4yvKjDocqQmt1YDZxXHRbKKl+0tMybrz0gUko0JM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186229-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186229: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Jun 2024 14:32:19 +0000

flight 186229 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186229/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-vhd  8 xen-boot         fail in 186225 pass in 186229
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186225

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186225
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186225
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186225
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186225
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186225
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186225
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186229  2024-06-02 06:57:13 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 18:23:22 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186231-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186231: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=83814698cf48ce3aadc5d88a3f577f04482ff92a
X-Osstest-Versions-That:
    linux=d8ec19857b095b39d114ae299713bd8ea6c1e66a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 Jun 2024 18:23:01 +0000

flight 186231 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186231/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186212

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186212
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186212
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186212
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186212
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                83814698cf48ce3aadc5d88a3f577f04482ff92a
baseline version:
 linux                d8ec19857b095b39d114ae299713bd8ea6c1e66a

Last test of basis   186212  2024-05-31 19:10:39 Z    1 days
Failing since        186217  2024-06-01 02:33:17 Z    1 days    4 attempts
Testing same since   186231  2024-06-02 11:33:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Darrick J. Wong" <djwong@kernel.org>
  "Rob Herring (Arm)" <robh@kernel.org>
  Abhinav Kumar <quic_abhinavk@quicinc.com>
  Adrián Larumbe <adrian.larumbe@collabora.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Alina Yu <alina_yu@richtek.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Barry Song <baohua@kernel.org>
  Boris Brezillon <boris.brezillon@collabora.com>
  Breno Leitao <leitao@debian.org>
  Chandan Babu R <chandanbabu@kernel.org>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen-Yu Tsai <wenst@chromium.org>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Coly Li <colyli@suse.de>
  Damien Le Moal <dlemoal@kernel.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Danilo Krummrich <dakr@redhat.com>
  Darrick J. Wong <djwong@kernel.org>
  Dave Airlie <airlied@redhat.com>
  Disha Goel <disgoel@linux.ibm.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dongsheng Yang <dongsheng.yang@easystack.cn>
  Douglas Anderson <dianders@chromium.org>
  Fedor Pchelkin <pchelkin@ispras.ru>
  Felix Kuehling <felix.kuehling@amd.com>
  Gerald Loacker <gerald.loacker@wolfvision.net>
  Gnattu OC <gnattuoc@me.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hannes Reinecke <hare@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hawking Zhang <Hawking.Zhang@amd.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  hexue <xue01.he@samsung.com>
  Himal Prasad Ghimiray <himal.prasad.ghimiray@intel.com>
  hmtheboy154 <buingoc67@gmail.com>
  Imre Deak <imre.deak@intel.com>
  Jacek Lawrynowicz <jacek.lawrynowicz@linux.intel.com>
  Jani Nikula <jani.nikula@intel.com>
  Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
  Jassi Brar <jassisinghbrar@gmail.com>
  Javier Carrasco <javier.carrasco.cruz@gmail.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jesse Zhang <Jesse.Zhang@amd.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Jian Ye <jian.ye@intel.com>
  Jim Wylder <jwylder@google.com>
  John Garry <john.g.garry@oracle.com>
  John Harrison <John.C.Harrison@Intel.com>
  José Roberto de Souza <jose.souza@intel.com>
  Julia Filipchuk <julia.filipchuk@intel.com>
  Kanchan Joshi <joshi.k@samsung.com>
  Keith Busch <kbusch@kernel.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kundan Kumar <kundan.kumar@samsung.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lukas Bulwahn <lukas.bulwahn@redhat.com>
  Luke D. Jones <luke@ljones.dev>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Matthew Auld <matthew.auld@intel.com>
  Matthew Brost <matthew.brost@intel.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Matthias Maennich <maennich@google.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Mickaël Salaün <mic@digikod.net>
  Miguel Ojeda <ojeda@kernel.org>
  Mike Snitzer <snitzer@kernel.org>
  Mingzhe Zou <mingzhe.zou@easystack.cn>
  Mohamed Ahmed <mohamedahmedegypt2001@gmail.com>
  Nam Cao <namcao@linutronix.de>
  Nathan Lynch <nathanl@linux.ibm.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nilay Shroff <nilay@linux.ibm.com>
  Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
  Nirmoy Das <nirmoy.das@intel.com>
  Nícolas F. R. A. Prado <nfraprado@collabora.com>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paul E. McKenney <paulmck@kernel.org>
  Peter Colberg <peter.colberg@intel.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Puranjay Mohan <puranjay@kernel.org>
  Rajneesh Bhardwaj <rajneesh.bhardwaj@amd.com>
  Ritesh Harjani (IBM) <ritesh.list@gmail.com>
  Rob Herring (Arm) <robh@kernel.org>
  Robin Murphy <robin.murphy@arm.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Samuel Holland <samuel.holland@sifive.com>
  Sergey Matyukevich <sergey.matyukevich@syntacore.com>
  Shinichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tanmay Shah <tanmay.shah@amd.com>
  Tejun Heo <tj@kernel.org>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thierry Reding <treding@nvidia.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Thomas Zimmermann <tzimmermann@suse.de>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Val Packett <val@packett.cool>
  Vidya Srinivas <vidya.srinivas@intel.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Wachowski, Karol <karol.wachowski@intel.com>
  Waiman Long <longman@redhat.com>
  Witold Sadowski <wsadowski@marvell.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   d8ec19857b09..83814698cf48  83814698cf48ce3aadc5d88a3f577f04482ff92a -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 20:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734553.1140637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRS-0007nH-Ds; Sun, 02 Jun 2024 20:04:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734553.1140637; Sun, 02 Jun 2024 20:04:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRS-0007nA-Aa; Sun, 02 Jun 2024 20:04:34 +0000
Received: by outflank-mailman (input) for mailman id 734553;
 Sun, 02 Jun 2024 20:04:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nKxc=NE=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sDrRR-0007mz-GC
 for xen-devel@lists.xenproject.org; Sun, 02 Jun 2024 20:04:33 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 532de246-211b-11ef-90a1-e314d9c70b13;
 Sun, 02 Jun 2024 22:04:32 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-421338c4c3bso15295695e9.1
 for <xen-devel@lists.xenproject.org>; Sun, 02 Jun 2024 13:04:31 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd064bb21sm6879280f8f.102.2024.06.02.13.04.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 02 Jun 2024 13:04:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 532de246-211b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717358671; x=1717963471; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3FD00BnVK+nXxJxFCCyFQGMqQcfaH7dFaqvgbk7uc4I=;
        b=lsH2Pg/ZN9IG/ZAzq/QO7nIw5zRAa9YiXyWiYgiORaaSeKLc1jkHvRv+dyyQpAk7X0
         +lxWQPGHizRwq5e2UCOSV4e7m9kgTmSIZZZSwa8gooQ1HRXU0GUJp6/ieCSnweIsMKxd
         S3DaLPupwsIsgVFkfxMKzEY251K59hLs2AlmnmwOi4zyUDPPbonWvnAlefDCDlG2XuDt
         +sk7c5kY+1l7AKFXm8R2fXJeap9BKIluYys65yZ37FVeZuYZTWcStz9zfhttRxrq+jZ5
         MsdnyZ3uR/AdqXUyir5L3CEZVK7Lgl2OV9JC1g6qHa/BZahTIgBkkEIm8q88AOau1hw1
         r/nA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717358671; x=1717963471;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=3FD00BnVK+nXxJxFCCyFQGMqQcfaH7dFaqvgbk7uc4I=;
        b=AEi60aHrF5Wjg1WLSBBitqOh+NjFsHu8eGHStzdmObMZq+RBFSHHXTlu1S1R/ewhLz
         5c0QTjnUWWwyFIHW8dicGIzN5zR/hGNgo1W6+0RPyqfh2rjJWo5G1sXeoOOhu7Z3OEV1
         1KzNkLHlhGL+jM+7rCIHD8+jR7momc2jpQtvz5p3c3kdlwv1ZNBdre9uY3yMQiodoUUA
         DttFiR99WSM7tDCJ4VBo4NbxdVshXogEDh0VnwYC3SvAWRgsbuF112PbgnaZi2DiNb1Y
         z6+hgaR9zcJp/P6u6I1RaPAod2CJyNYPU25fRz85zQgrdc7xiys1resB1DwLwv13ZTpj
         hA9g==
X-Gm-Message-State: AOJu0YwP/58BJepgF4vVdOgcZjUkPNFOqM4UFMi0Pfu8YRIkuTMNOreQ
	sDfszuh1MShruGKdpzRw1sq1iZLvisUKRjolY2+F0otnxJFx9uVeIt5yJA==
X-Google-Smtp-Source: AGHT+IGBccqpO9X861mA56aWUoHAjiXNV+TMR8tMrmieqJ7TIp5D3QKv1HvzGKWbPVpznVrJUrkDUA==
X-Received: by 2002:a05:600c:511f:b0:41e:3272:6476 with SMTP id 5b1f17b1804b1-4212e049d8dmr67108965e9.10.1717358670709;
        Sun, 02 Jun 2024 13:04:30 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH for-4.19? v5 01/10] tools/ocaml: Fix mixed tabs/spaces
Date: Sun,  2 Jun 2024 20:04:14 +0000
Message-Id: <7f1eb25ef6fd2ce56a8275211051d6a4a1211a6a.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

No functional change.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index c6da9bb091..e86c455802 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -138,12 +138,12 @@ static void domain_handle_of_uuid_string(xen_domain_handle_t h,
  * integers in the Ocaml ABI for more idiomatic handling.
  */
 static value c_bitmap_to_ocaml_list
-             /* ! */
-             /*
+	     /* ! */
+	     /*
 	      * All calls to this function must be in a form suitable
 	      * for xenctrl_abi_check.  The parsing there is ad-hoc.
 	      */
-             (unsigned int bitmap)
+	         (unsigned int bitmap)
 {
 	CAMLparam0();
 	CAMLlocal2(list, tmp);
@@ -180,8 +180,8 @@ static value c_bitmap_to_ocaml_list
 }

 static unsigned int ocaml_list_to_c_bitmap(value l)
-             /* ! */
-             /*
+	     /* ! */
+	     /*
 	      * All calls to this function must be in a form suitable
 	      * for xenctrl_abi_check.  The parsing there is ad-hoc.
 	      */
@@ -259,7 +259,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		/* Quick & dirty check for ABI changes. */
 		BUILD_BUG_ON(sizeof(cfg) != 68);

-        /* Mnemonics for the named fields inside xen_x86_arch_domainconfig */
+		/* Mnemonics for the named fields inside xen_x86_arch_domainconfig */
 #define VAL_EMUL_FLAGS          Field(arch_domconfig, 0)
 #define VAL_MISC_FLAGS          Field(arch_domconfig, 1)

@@ -351,7 +351,7 @@ static value dom_op(value xch_val, value domid,
 	caml_enter_blocking_section();
 	result = fn(xch, c_domid);
 	caml_leave_blocking_section();
-        if (result)
+	if (result)
 		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
@@ -383,7 +383,7 @@ CAMLprim value stub_xc_domain_resume_fast(value xch_val, value domid)
 	caml_enter_blocking_section();
 	result = xc_domain_resume(xch, c_domid, 1);
 	caml_leave_blocking_section();
-        if (result)
+	if (result)
 		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
@@ -426,7 +426,7 @@ static value alloc_domaininfo(xc_domaininfo_t * info)
 	Store_field(result, 13, Val_int(info->max_vcpu_id));
 	Store_field(result, 14, caml_copy_int32(info->ssidref));

-        tmp = caml_alloc_small(16, 0);
+	tmp = caml_alloc_small(16, 0);
 	for (i = 0; i < 16; i++) {
 		Field(tmp, i) = Val_int(info->handle[i]);
 	}
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 20:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734559.1140697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRY-0000mQ-4X; Sun, 02 Jun 2024 20:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734559.1140697; Sun, 02 Jun 2024 20:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRX-0000l3-V3; Sun, 02 Jun 2024 20:04:39 +0000
Received: by outflank-mailman (input) for mailman id 734559;
 Sun, 02 Jun 2024 20:04:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nKxc=NE=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sDrRW-0007mz-F1
 for xen-devel@lists.xenproject.org; Sun, 02 Jun 2024 20:04:38 +0000
Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com
 [2a00:1450:4864:20::435])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 567688e2-211b-11ef-90a1-e314d9c70b13;
 Sun, 02 Jun 2024 22:04:37 +0200 (CEST)
Received: by mail-wr1-x435.google.com with SMTP id
 ffacd0b85a97d-354cd8da8b9so3476656f8f.0
 for <xen-devel@lists.xenproject.org>; Sun, 02 Jun 2024 13:04:37 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd064bb21sm6879280f8f.102.2024.06.02.13.04.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 02 Jun 2024 13:04:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 567688e2-211b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717358676; x=1717963476; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DGLeqc9nd3kadS5/DKrJoeONfRKHhkFkkBxhvVJOirc=;
        b=GhgmK7CuZN2hw4/lDt7H0jQ6aoSKY0BpaAMv4FRuNgRGe2ttx246ENfqe9ZGj5xNB0
         SDZUwlzrczAmaJFMKDvy3jQz0Yln4CorK7E931bIxdmNXTkN2cRW2tnX0Ty6Y9KSyrXJ
         XEeNaB9eDTUVuwNRd7pTG3UjcT5987KimH+Hv02IRK79qYJBqEA/1U/lXDhzrwoyTMHB
         R8npw6eflvzSQKjdhF/x1nYnNGopNN4wIZUzjf4k/Kt4j7gvupvhMwszOtMubSDjdjW+
         EtFW9QuIxmTOTFKWXy5dSGDCl+vI94s8EOUGtafdCfi97Xch780LydQz46xouIDu7YP2
         y9NQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717358676; x=1717963476;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=DGLeqc9nd3kadS5/DKrJoeONfRKHhkFkkBxhvVJOirc=;
        b=XHx0cz9t4ueZXR7TOEoXbW4IAaFjBnw0DhlAbZKdrapucJeS3bjbEsqX0wNKfBwnxp
         8YsS7QnLSWceLu6Jh71kf/8mTGzydbFZxBwPvyiFY4trNZb9s0Z0H7EKqLKJj26I5FWc
         JykLR79751OGJa88xbOUnucM6+7DvdB+5DQjHaex1Z4lmmeehkCUdBTscNVAFymbEygE
         kV4nsdE/tr2i6E5JsPNWeIC6Kt6yGtA53Byv4d7ZTuuVN6AMkY6p426pj5BCznErZWEB
         Jq11r5Bz7/soEpdxDzXHKcKFAImyK2nscLGQ63zAfFsdyQUCVdbQ1ujrJOsC3KlWYvDq
         X+jA==
X-Gm-Message-State: AOJu0YwPLmZl1CpPq0H5Oz/L9kMp3agn/kR+pxOMkO56g7Ko3/7GIEur
	vsHDVLm/z5i5RujvR/o8mRID6KOwlMmJjcRKoZJBdEu5xSjv/SyyTcu/RQf8
X-Google-Smtp-Source: AGHT+IEORE2b4SaQdkTR0pXCC6XV8zO8DY7VkZoXdbf4u7ErtCbWMsWhiSLCOTi1fq/GRpWayS+IXQ==
X-Received: by 2002:a5d:4f8f:0:b0:356:50e7:e948 with SMTP id ffacd0b85a97d-35e0f32e295mr5789389f8f.67.1717358675755;
        Sun, 02 Jun 2024 13:04:35 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH for-4.19? v5 06/10] x86/altp2m: Introduce accessor functions for safer array indexing
Date: Sun,  2 Jun 2024 20:04:19 +0000
Message-Id: <e2e5f7a3c9a0ac6d65a6f942b0ea54f0f0b104f3.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

This patch introduces a set of accessor functions for altp2m array operations
and refactors the direct array accesses to use them.

This approach aims to improve code clarity and to future-proof the codebase
against potential speculative execution attacks.
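
For reference, the hardening pattern the accessors rely on can be sketched in
plain C. The `index_nospec()` helper below is a simplified, illustrative
stand-in for Xen's `array_index_nospec()` (the real macro clamps the index
branchlessly rather than with a ternary), and `struct domain_sketch` is a
reduced, hypothetical stand-in for the relevant per-domain state:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_EPTP 512

/*
 * Simplified stand-in for Xen's array_index_nospec(): return the index
 * when it is in range, 0 otherwise, so that even a mispredicted bounds
 * check cannot steer a speculative out-of-bounds load.  Xen computes
 * this with a branchless mask; the ternary here is only illustrative.
 */
static inline unsigned int index_nospec(unsigned int idx, unsigned int size)
{
    return idx < size ? idx : 0;
}

/* Reduced sketch of the per-domain altp2m state. */
struct domain_sketch {
    uint64_t altp2m_eptp[MAX_EPTP];
};

/*
 * Accessor in the style of altp2m_get_eptp(): every array access goes
 * through the clamped index instead of open-coding d->altp2m_eptp[idx],
 * so the speculation barrier cannot be forgotten at a call site.
 */
static inline uint64_t sketch_get_eptp(const struct domain_sketch *d,
                                       unsigned int idx)
{
    return d->altp2m_eptp[index_nospec(idx, MAX_EPTP)];
}
```

Callers are still expected to range-check `idx` first; the clamp is a defence
against that check being bypassed speculatively, not a substitute for it.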

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |  4 +--
 xen/arch/x86/include/asm/altp2m.h | 32 +++++++++++++++++
 xen/arch/x86/include/asm/p2m.h    |  7 ++--
 xen/arch/x86/mm/altp2m.c          | 60 +++++++++++++------------------
 xen/arch/x86/mm/hap/hap.c         |  8 ++---
 xen/arch/x86/mm/mem_access.c      | 17 ++++-----
 xen/arch/x86/mm/mem_sharing.c     |  2 +-
 xen/arch/x86/mm/p2m-ept.c         | 14 ++++----
 xen/arch/x86/mm/p2m.c             | 16 ++++-----
 9 files changed, 91 insertions(+), 69 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f16faa6a61..a420d452b3 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4887,10 +4887,10 @@ bool asmlinkage vmx_vmenter_helper(const struct cpu_user_regs *regs)

             for ( i = 0; i < MAX_ALTP2M; ++i )
             {
-                if ( currd->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+                if ( altp2m_get_eptp(currd, i) == mfn_x(INVALID_MFN) )
                     continue;

-                ept = &currd->arch.altp2m_p2m[i]->ept;
+                ept = &altp2m_get_p2m(currd, i)->ept;
                 if ( cpumask_test_cpu(cpu, ept->invalidate) )
                 {
                     cpumask_clear_cpu(cpu, ept->invalidate);
diff --git a/xen/arch/x86/include/asm/altp2m.h b/xen/arch/x86/include/asm/altp2m.h
index e5e59cbd68..2f064c61a2 100644
--- a/xen/arch/x86/include/asm/altp2m.h
+++ b/xen/arch/x86/include/asm/altp2m.h
@@ -19,6 +19,38 @@ static inline bool altp2m_active(const struct domain *d)
     return d->arch.altp2m_active;
 }

+static inline struct p2m_domain *altp2m_get_p2m(const struct domain* d,
+                                                unsigned int idx)
+{
+    return d->arch.altp2m_p2m[array_index_nospec(idx, MAX_ALTP2M)];
+}
+
+static inline uint64_t altp2m_get_eptp(const struct domain* d,
+                                       unsigned int idx)
+{
+    return d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)];
+}
+
+static inline void altp2m_set_eptp(const struct domain* d,
+                                   unsigned int idx,
+                                   uint64_t eptp)
+{
+    d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] = eptp;
+}
+
+static inline uint64_t altp2m_get_visible_eptp(const struct domain* d,
+                                               unsigned int idx)
+{
+    return d->arch.altp2m_visible_eptp[array_index_nospec(idx, MAX_EPTP)];
+}
+
+static inline void altp2m_set_visible_eptp(const struct domain* d,
+                                           unsigned int idx,
+                                           uint64_t eptp)
+{
+    d->arch.altp2m_visible_eptp[array_index_nospec(idx, MAX_EPTP)] = eptp;
+}
+
 /* Alternate p2m VCPU */
 void altp2m_vcpu_initialise(struct vcpu *v);
 void altp2m_vcpu_destroy(struct vcpu *v);
diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index c1478ffc36..e6f7764f9f 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -18,6 +18,7 @@
 #include <xen/mem_access.h>
 #include <asm/mem_sharing.h>
 #include <asm/page.h>    /* for pagetable_t */
+#include <asm/altp2m.h>

 /* Debugging and auditing of the P2M code? */
 #if !defined(NDEBUG) && defined(CONFIG_HVM)
@@ -888,13 +889,14 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)

     BUG_ON(index >= MAX_ALTP2M);

-    return v->domain->arch.altp2m_p2m[index];
+    return altp2m_get_p2m(v->domain, index);
 }

 /* set current alternate p2m table */
 static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
 {
     struct p2m_domain *orig;
+    struct p2m_domain *ap2m;

     BUG_ON(idx >= MAX_ALTP2M);

@@ -906,7 +908,8 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
     atomic_dec(&orig->active_vcpus);

     vcpu_altp2m(v).p2midx = idx;
-    atomic_inc(&v->domain->arch.altp2m_p2m[idx]->active_vcpus);
+    ap2m = altp2m_get_p2m(v->domain, idx);
+    atomic_inc(&ap2m->active_vcpus);

     return true;
 }
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 6fe1e9ed6b..7fb1738376 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -205,7 +205,7 @@ bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)

     altp2m_list_lock(d);

-    if ( d->arch.altp2m_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( altp2m_get_eptp(d, idx) != mfn_x(INVALID_MFN) )
     {
         if ( p2m_set_altp2m(v, idx) )
             altp2m_vcpu_update_p2m(v);
@@ -307,7 +307,7 @@ static void p2m_reset_altp2m(struct domain *d, unsigned int idx,
     struct p2m_domain *p2m;

     ASSERT(idx < MAX_ALTP2M);
-    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    p2m = altp2m_get_p2m(d, idx);

     p2m_lock(p2m);

@@ -335,8 +335,8 @@ void p2m_flush_altp2m(struct domain *d)
     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
         p2m_reset_altp2m(d, i, ALTP2M_DEACTIVATE);
-        d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
-        d->arch.altp2m_visible_eptp[i] = mfn_x(INVALID_MFN);
+        altp2m_set_eptp(d, i, mfn_x(INVALID_MFN));
+        altp2m_set_visible_eptp(d, i, mfn_x(INVALID_MFN));
     }

     altp2m_list_unlock(d);
@@ -350,7 +350,7 @@ static int p2m_activate_altp2m(struct domain *d, unsigned int idx,

     ASSERT(idx < MAX_ALTP2M);

-    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    p2m = altp2m_get_p2m(d, idx);
     hostp2m = p2m_get_hostp2m(d);

     p2m_lock(p2m);
@@ -393,8 +393,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)

     altp2m_list_lock(d);

-    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
-         mfn_x(INVALID_MFN) )
+    if ( altp2m_get_eptp(d, idx) == mfn_x(INVALID_MFN) )
         rc = p2m_activate_altp2m(d, idx, hostp2m->default_access);

     altp2m_list_unlock(d);
@@ -417,7 +416,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx,

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+        if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             continue;

         rc = p2m_activate_altp2m(d, i, a);
@@ -447,18 +446,15 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
     rc = -EBUSY;
     altp2m_list_lock(d);

-    if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] !=
-         mfn_x(INVALID_MFN) )
+    if ( altp2m_get_eptp(d, idx) != mfn_x(INVALID_MFN) )
     {
-        p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+        p2m = altp2m_get_p2m(d, idx);

         if ( !_atomic_read(p2m->active_vcpus) )
         {
             p2m_reset_altp2m(d, idx, ALTP2M_DEACTIVATE);
-            d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] =
-                mfn_x(INVALID_MFN);
-            d->arch.altp2m_visible_eptp[array_index_nospec(idx, MAX_EPTP)] =
-                mfn_x(INVALID_MFN);
+            altp2m_set_eptp(d, idx, mfn_x(INVALID_MFN));
+            altp2m_set_visible_eptp(d, idx, mfn_x(INVALID_MFN));
             rc = 0;
         }
     }
@@ -485,7 +481,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
     rc = -EINVAL;
     altp2m_list_lock(d);

-    if ( d->arch.altp2m_visible_eptp[idx] != mfn_x(INVALID_MFN) )
+    if ( altp2m_get_visible_eptp(d, idx) != mfn_x(INVALID_MFN) )
     {
         for_each_vcpu( d, v )
             if ( p2m_set_altp2m(v, idx) )
@@ -510,13 +506,12 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     mfn_t mfn;
     int rc = -EINVAL;

-    if ( idx >=  min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-         d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
-         mfn_x(INVALID_MFN) )
+    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+         altp2m_get_eptp(d, idx) == mfn_x(INVALID_MFN) )
         return rc;

     hp2m = p2m_get_hostp2m(d);
-    ap2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    ap2m = altp2m_get_p2m(d, idx);

     p2m_lock(hp2m);
     p2m_lock(ap2m);
@@ -577,10 +572,10 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
         p2m_type_t t;
         p2m_access_t a;

-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+        if ( altp2m_get_eptp(d, i) == mfn_x(INVALID_MFN) )
             continue;

-        p2m = d->arch.altp2m_p2m[i];
+        p2m = altp2m_get_p2m(d, i);

         /* Check for a dropped page that may impact this altp2m */
         if ( mfn_eq(mfn, INVALID_MFN) &&
@@ -598,7 +593,7 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                 for ( i = 0; i < MAX_ALTP2M; i++ )
                 {
                     if ( i == last_reset_idx ||
-                         d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+                         altp2m_get_eptp(d, i) == mfn_x(INVALID_MFN) )
                         continue;

                     p2m_reset_altp2m(d, i, ALTP2M_RESET);
@@ -660,11 +655,10 @@ int p2m_set_suppress_ve_multi(struct domain *d,
     if ( sve->view > 0 )
     {
         if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-             d->arch.altp2m_eptp[array_index_nospec(sve->view, MAX_EPTP)] ==
-             mfn_x(INVALID_MFN) )
+             altp2m_get_eptp(d, sve->view) == mfn_x(INVALID_MFN) )
             return -EINVAL;

-        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, sve->view);
+        p2m = ap2m = altp2m_get_p2m(d, sve->view);
     }

     p2m_lock(host_p2m);
@@ -728,11 +722,10 @@ int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve,
     if ( altp2m_idx > 0 )
     {
         if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-             d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
-             mfn_x(INVALID_MFN) )
+             altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

-        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        p2m = ap2m = altp2m_get_p2m(d, altp2m_idx);
     }
     else
         p2m = host_p2m;
@@ -766,15 +759,12 @@ int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx,
      * min(MAX_ALTP2M, MAX_EPTP).
      */
     if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-         d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
-         mfn_x(INVALID_MFN) )
+         altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
         rc = -EINVAL;
     else if ( visible )
-        d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] =
-            d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)];
+        altp2m_set_visible_eptp(d, altp2m_idx, altp2m_get_eptp(d, altp2m_idx));
     else
-        d->arch.altp2m_visible_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] =
-            mfn_x(INVALID_MFN);
+        altp2m_set_visible_eptp(d, altp2m_idx, mfn_x(INVALID_MFN));

     altp2m_list_unlock(d);

diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d2011fde24..8fc8348152 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -511,13 +511,13 @@ int hap_enable(struct domain *d, u32 mode)

         for ( i = 0; i < MAX_EPTP; i++ )
         {
-            d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
-            d->arch.altp2m_visible_eptp[i] = mfn_x(INVALID_MFN);
+            altp2m_set_eptp(d, i, mfn_x(INVALID_MFN));
+            altp2m_set_visible_eptp(d, i, mfn_x(INVALID_MFN));
         }

         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
+            rv = p2m_alloc_table(altp2m_get_p2m(d, i));
             if ( rv != 0 )
                goto out;
         }
@@ -592,7 +592,7 @@ void hap_teardown(struct domain *d, bool *preempted)

         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
+            p2m_teardown(altp2m_get_p2m(d, i), false, preempted);
             if ( preempted && *preempted )
                 return;
         }
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 60a0cce68a..43f3a8c2aa 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -348,11 +348,10 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-             d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
-             mfn_x(INVALID_MFN) )
+             altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

-        ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        ap2m = altp2m_get_p2m(d, altp2m_idx);
     }

     if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
@@ -404,11 +403,10 @@ long p2m_set_mem_access_multi(struct domain *d,
     if ( altp2m_idx )
     {
         if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-             d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
-             mfn_x(INVALID_MFN) )
+             altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

-        ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        ap2m = altp2m_get_p2m(d, altp2m_idx);
     }

     p2m_lock(p2m);
@@ -467,11 +465,10 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access,
     else if ( altp2m_idx ) /* altp2m view 0 is treated as the hostp2m */
     {
         if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-             d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
-             mfn_x(INVALID_MFN) )
+             altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

-        p2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        p2m = altp2m_get_p2m(d, altp2m_idx);
     }

     return _p2m_get_mem_access(p2m, gfn, access);
@@ -488,7 +485,7 @@ void arch_p2m_set_access_required(struct domain *d, bool access_required)
         unsigned int i;
         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            struct p2m_domain *p2m = d->arch.altp2m_p2m[i];
+            struct p2m_domain *p2m = altp2m_get_p2m(d, i);

             if ( p2m )
                 p2m->access_required = access_required;
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index da28266ef0..21ac361111 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -914,7 +914,7 @@ static int nominate_page(struct domain *d, gfn_t gfn,

         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            ap2m = d->arch.altp2m_p2m[i];
+            ap2m = altp2m_get_p2m(d, i);
             if ( !ap2m )
                 continue;

diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f83610cb8c..ed4252822e 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1297,10 +1297,10 @@ static void ept_set_ad_sync(struct domain *d, bool value)
         {
             struct p2m_domain *p2m;

-            if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+            if ( altp2m_get_eptp(d, i) == mfn_x(INVALID_MFN) )
                 continue;

-            p2m = d->arch.altp2m_p2m[i];
+            p2m = altp2m_get_p2m(d, i);

             p2m_lock(p2m);
             p2m->ept.ad = value;
@@ -1500,15 +1500,15 @@ void setup_ept_dump(void)

 void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
 {
-    struct p2m_domain *p2m = array_access_nospec(d->arch.altp2m_p2m, i);
+    struct p2m_domain *p2m = altp2m_get_p2m(d, i);
     struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
     struct ept_data *ept;

     p2m->ept.ad = hostp2m->ept.ad;
     ept = &p2m->ept;
     ept->mfn = pagetable_get_pfn(p2m_get_pagetable(p2m));
-    d->arch.altp2m_eptp[array_index_nospec(i, MAX_EPTP)] = ept->eptp;
-    d->arch.altp2m_visible_eptp[array_index_nospec(i, MAX_EPTP)] = ept->eptp;
+    altp2m_set_eptp(d, i, ept->eptp);
+    altp2m_set_visible_eptp(d, i, ept->eptp);
 }

 unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)
@@ -1521,10 +1521,10 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)

     for ( i = 0; i < MAX_ALTP2M; i++ )
     {
-        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
+        if ( altp2m_get_eptp(d, i) == mfn_x(INVALID_MFN) )
             continue;

-        p2m = d->arch.altp2m_p2m[i];
+        p2m = altp2m_get_p2m(d, i);
         ept = &p2m->ept;

         if ( eptp == ept->eptp )
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e7e327d6a6..30a44441ba 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -107,9 +107,9 @@ void p2m_change_entry_type_global(struct domain *d,

         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
-                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
+                struct p2m_domain *altp2m = altp2m_get_p2m(d, i);

                 p2m_lock(altp2m);
                 change_entry_type_global(altp2m, ot, nt);
@@ -142,9 +142,9 @@ void p2m_memory_type_changed(struct domain *d)

         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
-                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
+                struct p2m_domain *altp2m = altp2m_get_p2m(d, i);

                 p2m_lock(altp2m);
                 _memory_type_changed(altp2m);
@@ -915,9 +915,9 @@ void p2m_change_type_range(struct domain *d,

         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
-                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
+                struct p2m_domain *altp2m = altp2m_get_p2m(d, i);

                 p2m_lock(altp2m);
                 change_type_range(altp2m, start, end, ot, nt);
@@ -988,9 +988,9 @@ int p2m_finish_type_change(struct domain *d,

         for ( i = 0; i < MAX_ALTP2M; i++ )
         {
-            if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
+            if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
-                struct p2m_domain *altp2m = d->arch.altp2m_p2m[i];
+                struct p2m_domain *altp2m = altp2m_get_p2m(d, i);

                 p2m_lock(altp2m);
                 rc = finish_type_change(altp2m, first_gfn, max_nr);
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH for-4.19? v5 02/10] tools/ocaml: Add missing ocaml bindings for altp2m_opts
Date: Sun,  2 Jun 2024 20:04:15 +0000
Message-Id: <2fcb6972bfaa59ec1184929cceeab70da709dfc0.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

The OCaml bindings were not extended when the altp2m_opts field was added
to the domain creation interface. Add the missing field to the bindings.

Fixes: 0291089f6ea8 ("xen: enable altp2m at create domain domctl")

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      | 1 +
 tools/ocaml/libs/xc/xenctrl.mli     | 1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c | 9 ++++++---
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 55923857ec..2690f9a923 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -85,6 +85,7 @@ type domctl_create_config =
     max_grant_frames: int;
     max_maptrack_frames: int;
     max_grant_version: int;
+    altp2m_opts: int32;
     vmtrace_buf_kb: int32;
     cpupool_id: int32;
     arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 9b4b45db3a..febbe1f6ae 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -77,6 +77,7 @@ type domctl_create_config = {
   max_grant_frames: int;
   max_maptrack_frames: int;
   max_grant_version: int;
+  altp2m_opts: int32;
   vmtrace_buf_kb: int32;
   cpupool_id: int32;
   arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index e86c455802..a529080129 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -210,9 +210,10 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #define VAL_MAX_GRANT_FRAMES    Field(config, 6)
 #define VAL_MAX_MAPTRACK_FRAMES Field(config, 7)
 #define VAL_MAX_GRANT_VERSION   Field(config, 8)
-#define VAL_VMTRACE_BUF_KB      Field(config, 9)
-#define VAL_CPUPOOL_ID          Field(config, 10)
-#define VAL_ARCH                Field(config, 11)
+#define VAL_ALTP2M_OPTS         Field(config, 9)
+#define VAL_VMTRACE_BUF_KB      Field(config, 10)
+#define VAL_CPUPOOL_ID          Field(config, 11)
+#define VAL_ARCH                Field(config, 12)

 	uint32_t domid = Int_val(wanted_domid);
 	uint64_t vmtrace_size = Int32_val(VAL_VMTRACE_BUF_KB);
@@ -230,6 +231,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		.max_maptrack_frames = Int_val(VAL_MAX_MAPTRACK_FRAMES),
 		.grant_opts =
 		    XEN_DOMCTL_GRANT_version(Int_val(VAL_MAX_GRANT_VERSION)),
+		.altp2m_opts = Int32_val(VAL_ALTP2M_OPTS),
 		.vmtrace_size = vmtrace_size,
 		.cpupool_id = Int32_val(VAL_CPUPOOL_ID),
 	};
@@ -288,6 +290,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #undef VAL_ARCH
 #undef VAL_CPUPOOL_ID
 #undef VAL_VMTRACE_BUF_KB
+#undef VAL_ALTP2M_OPTS
 #undef VAL_MAX_GRANT_VERSION
 #undef VAL_MAX_MAPTRACK_FRAMES
 #undef VAL_MAX_GRANT_FRAMES
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.19? v5 08/10] tools/libxl: Activate the altp2m_count feature
Date: Sun,  2 Jun 2024 20:04:21 +0000
Message-Id: <9f0f897ea677aa80da2465f1a713d409950d33dd.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

This commit activates the previously introduced altp2m_count parameter,
establishing the connection between libxl and Xen.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/libs/light/libxl_create.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 11d2f282f5..5ad552c4ec 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -656,6 +656,10 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
             .grant_opts = XEN_DOMCTL_GRANT_version(b_info->max_grant_version),
+            .altp2m = {
+                .opts = 0, /* .opts will be set below */
+                .nr = b_info->altp2m_count,
+            },
             .vmtrace_size = ROUNDUP(b_info->vmtrace_buf_kb << 10, XC_PAGE_SHIFT),
             .cpupool_id = info->poolid,
         };
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH for-4.19? v5 10/10] tools/ocaml: Add altp2m_count parameter
Date: Sun,  2 Jun 2024 20:04:23 +0000
Message-Id: <3c92a64c23cca81e88c94bf7b85e5e3633fd88f0.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Allow developers using the OCaml bindings to set the altp2m_count parameter.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      |  1 +
 tools/ocaml/libs/xc/xenctrl.mli     |  1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c | 19 +++++++++++++++----
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 2690f9a923..a3e50ac394 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -86,6 +86,7 @@ type domctl_create_config =
     max_maptrack_frames: int;
     max_grant_version: int;
     altp2m_opts: int32;
+    altp2m_count: int32;
     vmtrace_buf_kb: int32;
     cpupool_id: int32;
     arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index febbe1f6ae..b97021d3d2 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -78,6 +78,7 @@ type domctl_create_config = {
   max_maptrack_frames: int;
   max_grant_version: int;
   altp2m_opts: int32;
+  altp2m_count: int32;
   vmtrace_buf_kb: int32;
   cpupool_id: int32;
   arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index e6c977521f..78ae4967e6 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -211,13 +211,22 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #define VAL_MAX_MAPTRACK_FRAMES Field(config, 7)
 #define VAL_MAX_GRANT_VERSION   Field(config, 8)
 #define VAL_ALTP2M_OPTS         Field(config, 9)
-#define VAL_VMTRACE_BUF_KB      Field(config, 10)
-#define VAL_CPUPOOL_ID          Field(config, 11)
-#define VAL_ARCH                Field(config, 12)
+#define VAL_ALTP2M_COUNT        Field(config, 10)
+#define VAL_VMTRACE_BUF_KB      Field(config, 11)
+#define VAL_CPUPOOL_ID          Field(config, 12)
+#define VAL_ARCH                Field(config, 13)

 	uint32_t domid = Int_val(wanted_domid);
+	uint32_t altp2m_opts = Int32_val(VAL_ALTP2M_OPTS);
+	uint32_t altp2m_nr = Int32_val(VAL_ALTP2M_COUNT);
 	uint64_t vmtrace_size = Int32_val(VAL_VMTRACE_BUF_KB);

+	if ( altp2m_opts != (uint16_t)altp2m_opts )
+		caml_invalid_argument("altp2m_opts");
+
+	if ( altp2m_nr != (uint16_t)altp2m_nr )
+		caml_invalid_argument("altp2m_count");
+
 	vmtrace_size = ROUNDUP(vmtrace_size << 10, XC_PAGE_SHIFT);
 	if ( vmtrace_size != (uint32_t)vmtrace_size )
 		caml_invalid_argument("vmtrace_buf_kb");
@@ -232,7 +241,8 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		.grant_opts =
 		    XEN_DOMCTL_GRANT_version(Int_val(VAL_MAX_GRANT_VERSION)),
 		.altp2m = {
-			.opts = Int32_val(VAL_ALTP2M_OPTS),
+			.opts = altp2m_opts,
+			.nr = altp2m_nr,
 		},
 		.vmtrace_size = vmtrace_size,
 		.cpupool_id = Int32_val(VAL_CPUPOOL_ID),
@@ -292,6 +302,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #undef VAL_ARCH
 #undef VAL_CPUPOOL_ID
 #undef VAL_VMTRACE_BUF_KB
+#undef VAL_ALTP2M_COUNT
 #undef VAL_ALTP2M_OPTS
 #undef VAL_MAX_GRANT_VERSION
 #undef VAL_MAX_MAPTRACK_FRAMES
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH for-4.19? v5 05/10] docs/man: Add altp2m_count parameter to the xl.cfg manual
Date: Sun,  2 Jun 2024 20:04:18 +0000
Message-Id: <e34f5632f2304b18c9729a749b4d0c406b042faa.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Update the xl.cfg manual page to include detailed information about the
altp2m_count configuration parameter.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 docs/man/xl.cfg.5.pod.in | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index ac3f88fd57..ff03b43884 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2039,6 +2039,20 @@ a single guest HVM domain. B<This option is deprecated, use the option
 B<Note>: While the option "altp2mhvm" is deprecated, legacy applications for
 x86 systems will continue to work using it.

+=item B<altp2m_count=NUMBER>
+
+Specifies the maximum number of alternate-p2m views available to the guest.
+This setting is crucial in domain introspection scenarios that require
+multiple physical-to-machine (p2m) memory mappings to be established
+simultaneously.
+
+Enabling multiple p2m views may increase memory usage. It is advisable to
+review and adjust the B<shadow_memory> setting as necessary to accommodate
+the additional memory requirements.
+
+B<Note>: This option is ignored if B<altp2m> is disabled. The default value
+is 10.
+
 =item B<nestedhvm=BOOLEAN>

 Enable or disables guest access to hardware virtualisation features,
--
2.34.1
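For illustration, a hypothetical xl.cfg fragment combining the new option with the related settings mentioned in the patch (the altp2m mode and the concrete numbers below are purely illustrative and should be chosen per use case):

```
# Enable alternate-p2m support and raise the per-domain view limit.
altp2m = "external"
altp2m_count = 32       # default is 10 when altp2m is enabled

# Extra p2m views consume additional memory; enlarge the pool accordingly.
shadow_memory = 256     # in MiB; value here is purely illustrative
```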



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 20:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734561.1140715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRa-0001JS-1a; Sun, 02 Jun 2024 20:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734561.1140715; Sun, 02 Jun 2024 20:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRZ-0001Ij-SQ; Sun, 02 Jun 2024 20:04:41 +0000
Received: by outflank-mailman (input) for mailman id 734561;
 Sun, 02 Jun 2024 20:04:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nKxc=NE=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sDrRY-0007mz-9Z
 for xen-devel@lists.xenproject.org; Sun, 02 Jun 2024 20:04:40 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 580ba425-211b-11ef-90a1-e314d9c70b13;
 Sun, 02 Jun 2024 22:04:39 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-42133f8432aso10412765e9.3
 for <xen-devel@lists.xenproject.org>; Sun, 02 Jun 2024 13:04:39 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd064bb21sm6879280f8f.102.2024.06.02.13.04.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 02 Jun 2024 13:04:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 580ba425-211b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717358679; x=1717963479; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4tWBEUL49TAySW16gIvLPk/vey32SSkMkPdHxsZn7CQ=;
        b=d536lnBfkYCeDZ8tKQ1RKRoa+gTQ9riLbc7ml3J2qmS7e66F97Nt8z3CE4O2ykWR5N
         VpWTfgwGWMJnXKIgqpvWfVNoqLUVVrglBaCMQ035MvDL/zFcf3VBRGw2AdI11rVVHLRf
         WDd3R9U/DYCBi+rlNrxfTxnUPcptKq65DenK8ZGzI+aKesRm0SLe4v5Qv1J7Wa0KuWPx
         Ysx+5SSQtWRokxg3Fx13Gb489x4tv8Dsx3kGzHviLcPjCPdUG/FgATsdAhhWXfcEeVC7
         SPia4M3+RHxFDt5eSlPuxnEJ3+aYDPpGwmpnGCMvL3BBrfcqFHwula+DLblMZZIfM/Vm
         aVQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717358679; x=1717963479;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4tWBEUL49TAySW16gIvLPk/vey32SSkMkPdHxsZn7CQ=;
        b=HAldCsgSOweObLxXKHiVOi1YpYtnlPQ6iXz4vkXyn1wTBD6TPC51muSgDlcs2otzfj
         BJjtcZNed/l4QZYZwxZFE70wgPc3EPRRwTdKuUl3Ub9uenccWm9VKvbJrek7gygvrR5U
         BETnixlO5WRWBciA2qRyMe9W7BXM44OlnTHpthzuAoXIBQ+NWTjqDG4AnwOv/kiOVAIb
         afjvExObJoDUqUiWI3+k35KO2X3uX9eWHwk0BlcpzafOzyjI37e/Pp28ivFLmXcdVSHg
         Vn0MAKdiNfwYAbjajwC545ZEBlI9jv2GcK+dbTgUhJgxN5yuLJn+GdjDrWeWQjNHrDiM
         3R2Q==
X-Gm-Message-State: AOJu0YyI1evj8U3RDa3mR0TOVUGu6o/Mw/0vWRlSnFWV9pCKIzH+gVIq
	8OmxNEca7pVIrHzE+uAYXDCjKBqRyQGJ90bmeolpu7/WJ0ieznmVsgPY4g==
X-Google-Smtp-Source: AGHT+IEcZugb1HchUGNH0n8OmUpHLBaLxbPi8w5G8EFFJnI6zPxZpMm0Djp1QyiCnXUS+IGb/ZHrtA==
X-Received: by 2002:a05:600c:1c19:b0:421:2adb:dd4c with SMTP id 5b1f17b1804b1-4212e07009amr63464015e9.22.1717358678880;
        Sun, 02 Jun 2024 13:04:38 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH for-4.19? v5 09/10] xen/x86: Disallow creating domains with altp2m enabled and altp2m.nr == 0
Date: Sun,  2 Jun 2024 20:04:22 +0000
Message-Id: <d6fd97b66b5f1a974e317c9d3f72fb139b39118f.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Now that libxl finally sends the altp2m.nr value, we can remove the
previously introduced temporary workaround.

Creating a domain with altp2m enabled while setting altp2m.nr == 0 doesn't
make sense and is probably not what the user wants.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 xen/arch/x86/domain.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 011cffb07e..52bfeafe3f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -747,8 +747,9 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)

         if ( !config->altp2m.nr )
         {
-            /* Fix the value to the legacy default */
-            config->altp2m.nr = 10;
+            dprintk(XENLOG_INFO,
+                    "altp2m must be requested with altp2m.nr > 0\n");
+            return -EINVAL;
         }

         if ( config->altp2m.nr > MAX_NR_ALTP2M )
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 Jun 2024 20:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734560.1140703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRY-0000wv-Q9; Sun, 02 Jun 2024 20:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734560.1140703; Sun, 02 Jun 2024 20:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDrRY-0000wE-KC; Sun, 02 Jun 2024 20:04:40 +0000
Received: by outflank-mailman (input) for mailman id 734560;
 Sun, 02 Jun 2024 20:04:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nKxc=NE=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sDrRX-0007mz-Ue
 for xen-devel@lists.xenproject.org; Sun, 02 Jun 2024 20:04:40 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5727d16e-211b-11ef-90a1-e314d9c70b13;
 Sun, 02 Jun 2024 22:04:38 +0200 (CEST)
Received: by mail-wr1-x42a.google.com with SMTP id
 ffacd0b85a97d-35dcd34a69bso2430744f8f.3
 for <xen-devel@lists.xenproject.org>; Sun, 02 Jun 2024 13:04:38 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd064bb21sm6879280f8f.102.2024.06.02.13.04.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 02 Jun 2024 13:04:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5727d16e-211b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717358677; x=1717963477; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=IVTFox30KwHLP/OtWq0mLQ/t5mCblNfzOg7TaHqw2bw=;
        b=emz1xUD/5PnrdKZBt+GsbD23012FWwlFwlUFBfTsYIT0m43F5+s4yTGhjKLZLr9psC
         UyzN8JZyciT/rEX0EOgxLABrSXQgliPtIWe4sV/nRPOdIbn93dUAsmHZ6X0e//ugUa31
         fZRs/4X2rJxUOY6M6sl93LaLgKS5W0fD5bxDBsHDyaVbuwGq41NK128wcL4jmYbFoDsC
         UeYdKss8zASwmSsxEcK8OPKq9ePUApe8ed7pjixoaPiwpuAZoigEJ1wxEULkFlQlkqHQ
         e2a+J75Y7zw83rmqSVZGx88fsdVC5jj1lvgk3Zcs4MflOVbl9+pTozIouIwvvdQ69oda
         e2ag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717358677; x=1717963477;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=IVTFox30KwHLP/OtWq0mLQ/t5mCblNfzOg7TaHqw2bw=;
        b=KFO6R2wQuBtCvveC9mwgOOWaaFkZqh8+PIRbgivi09686ZWgEhVafWKmb9HheoWtOi
         qMLb5R18K0eJ+xwGZNdNMbbn5e20mChz1kQb8lc4oR4rHmLdnMRw6F5iEtrog2XZ8jF6
         ar9T/iFDSMrRw2wsfXKENEIhAupF7gyHGdwU7CX+znCjm9G/yn2wt5PHjlZFSJ9qO1no
         FxPCNUQm2uGE12JCHr2bBFyk8qMlQ2jsAJOBm1MwS+qD1Hj3iCd6uJeMtZW7EfvU9bSl
         x84HEuTHxDCQn5s44bc0kDD6Qh/1SCEpHRQ5gtaLh+vg8qqQ4DdOsPKFi9YsU3nHxp6Q
         VXAA==
X-Gm-Message-State: AOJu0Yz3qFx2vch5aRno918jLH8Z6T6KBz9nsPjEwmlqyIUdty+s6W/S
	QMSIvGMDvW1RDNDZ0GK9pwURXDWQMnudOy/cLMlQYvyHSn3agwY9Cp75DXTr
X-Google-Smtp-Source: AGHT+IFfWW1mzFYkEfS5aJQ0OAMKzDVsGxqM2GlbwhRTwd4ipwqab+yoixf+R+8smKTOTCLHm5k4JA==
X-Received: by 2002:a5d:6e5e:0:b0:354:faf4:fa87 with SMTP id ffacd0b85a97d-35e0f2590e5mr4052405f8f.3.1717358677173;
        Sun, 02 Jun 2024 13:04:37 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m views configurable for x86
Date: Sun,  2 Jun 2024 20:04:20 +0000
Message-Id: <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

This commit introduces the ability to configure the maximum number of altp2m
views for a domain at creation time. Previously, the limit was hardcoded to a
maximum of 10. This change allows for greater flexibility in environments
that require more or fewer altp2m views.

The maximum configurable limit for nr_altp2m on x86 is now MAX_NR_ALTP2M
(which currently equals MAX_EPTP, i.e. 512). This cap is linked to the
architectural limit of the EPTP-switching VMFUNC, which supports up to 512
entries. While there is no inherent need to limit nr_altp2m in scenarios that
don't use VMFUNC, decoupling these components would require substantial code
changes.

xen_domctl_createdomain::altp2m is extended with a new field, `nr`, which
configures this limit for a domain. Additionally, the previous altp2m.opts
field has been narrowed from uint32_t to uint16_t so that the two fields
together occupy as little space as possible.

The altp2m_get_p2m() function is modified to respect the new nr_altp2m value.
Accessor functions that operate on EPT arrays are left unmodified, since
these arrays always have a fixed size of MAX_EPTP.

A dummy hvm_altp2m_supported() function is introduced for non-HVM builds, so
that they continue to compile.

Additional sanitization is introduced in the x86 arch_sanitise_domain_config()
to force the altp2m.nr value to 10 if it was previously set to 0. This
behavior is only temporary and is removed in the following commit (which
disallows creating a domain with altp2m enabled and nr_altp2m == 0).

The reason for this temporary workaround is to retain the legacy behavior
until the feature is fully activated in libxl.

Also, the Arm arch_sanitise_domain_config() is extended to reject requests
for a non-zero number of altp2m views.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 xen/arch/arm/domain.c              |  2 +-
 xen/arch/x86/domain.c              | 40 ++++++++++++++++++----
 xen/arch/x86/hvm/hvm.c             |  8 ++++-
 xen/arch/x86/hvm/vmx/vmx.c         |  2 +-
 xen/arch/x86/include/asm/altp2m.h  |  2 +-
 xen/arch/x86/include/asm/domain.h  |  9 ++---
 xen/arch/x86/include/asm/hvm/hvm.h |  5 +++
 xen/arch/x86/include/asm/p2m.h     |  4 +--
 xen/arch/x86/mm/altp2m.c           | 54 ++++++++++++++++++++----------
 xen/arch/x86/mm/hap/hap.c          |  8 ++---
 xen/arch/x86/mm/mem_access.c       |  8 ++---
 xen/arch/x86/mm/mem_sharing.c      |  2 +-
 xen/arch/x86/mm/p2m-ept.c          |  4 +--
 xen/arch/x86/mm/p2m.c              |  8 ++---
 xen/common/domain.c                |  1 +
 xen/include/public/domctl.h        |  5 ++-
 xen/include/xen/sched.h            |  2 ++
 17 files changed, 113 insertions(+), 51 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5234b627d0..e5785d2d96 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -688,7 +688,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( config->altp2m.opts )
+    if ( config->altp2m.opts || config->altp2m.nr )
     {
         dprintk(XENLOG_INFO, "Altp2m not supported\n");
         return -EINVAL;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index bb5ba8fc1e..011cffb07e 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -724,16 +724,42 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( altp2m_mode && nested_virt )
+    if ( altp2m_mode )
     {
-        dprintk(XENLOG_INFO,
-                "Nested virt and altp2m are not supported together\n");
-        return -EINVAL;
-    }
+        if ( nested_virt )
+        {
+            dprintk(XENLOG_INFO,
+                    "Nested virt and altp2m are not supported together\n");
+            return -EINVAL;
+        }
+
+        if ( !hap )
+        {
+            dprintk(XENLOG_INFO, "altp2m is only supported with HAP\n");
+            return -EINVAL;
+        }
+
+        if ( !hvm_altp2m_supported() )
+        {
+            dprintk(XENLOG_INFO, "altp2m is not supported\n");
+            return -EINVAL;
+        }
+
+        if ( !config->altp2m.nr )
+        {
+            /* Fix the value to the legacy default */
+            config->altp2m.nr = 10;
+        }

-    if ( altp2m_mode && !hap )
+        if ( config->altp2m.nr > MAX_NR_ALTP2M )
+        {
+            dprintk(XENLOG_INFO, "altp2m.nr must be <= %lu\n", MAX_NR_ALTP2M);
+            return -EINVAL;
+        }
+    }
+    else if ( config->altp2m.nr )
     {
-        dprintk(XENLOG_INFO, "altp2m is only supported with HAP\n");
+        dprintk(XENLOG_INFO, "altp2m.nr must be zero when altp2m is off\n");
         return -EINVAL;
     }

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a66ebaaceb..3d0357a0f8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4657,6 +4657,12 @@ static int do_altp2m_op(
         goto out;
     }

+    if ( d->nr_altp2m == 0 )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+
     if ( (rc = xsm_hvm_altp2mhvm_op(XSM_OTHER, d, mode, a.cmd)) )
         goto out;

@@ -5245,7 +5251,7 @@ void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
     if ( !hvm_is_singlestep_supported() )
         return;

-    if ( p2midx >= MAX_ALTP2M )
+    if ( p2midx >= v->domain->nr_altp2m )
         return;

     v->arch.hvm.single_step = true;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index a420d452b3..9292a2c8d8 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4885,7 +4885,7 @@ bool asmlinkage vmx_vmenter_helper(const struct cpu_user_regs *regs)
         {
             unsigned int i;

-            for ( i = 0; i < MAX_ALTP2M; ++i )
+            for ( i = 0; i < currd->nr_altp2m; ++i )
             {
                 if ( altp2m_get_eptp(currd, i) == mfn_x(INVALID_MFN) )
                     continue;
diff --git a/xen/arch/x86/include/asm/altp2m.h b/xen/arch/x86/include/asm/altp2m.h
index 2f064c61a2..a4cc3d3ffc 100644
--- a/xen/arch/x86/include/asm/altp2m.h
+++ b/xen/arch/x86/include/asm/altp2m.h
@@ -22,7 +22,7 @@ static inline bool altp2m_active(const struct domain *d)
 static inline struct p2m_domain *altp2m_get_p2m(const struct domain* d,
                                                 unsigned int idx)
 {
-    return d->arch.altp2m_p2m[array_index_nospec(idx, MAX_ALTP2M)];
+    return d->arch.altp2m_p2m[array_index_nospec(idx, d->nr_altp2m)];
 }

 static inline uint64_t altp2m_get_eptp(const struct domain* d,
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index f5daeb182b..855e844bed 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -258,11 +258,12 @@ struct paging_vcpu {
     struct shadow_vcpu shadow;
 };

-#define MAX_NESTEDP2M 10
+#define MAX_EPTP        (PAGE_SIZE / sizeof(uint64_t))
+#define MAX_NR_ALTP2M   MAX_EPTP
+#define MAX_NESTEDP2M   10

-#define MAX_ALTP2M      10 /* arbitrary */
 #define INVALID_ALTP2M  0xffff
-#define MAX_EPTP        (PAGE_SIZE / sizeof(uint64_t))
+
 struct p2m_domain;
 struct time_scale {
     int shift;
@@ -353,7 +354,7 @@ struct arch_domain

     /* altp2m: allow multiple copies of host p2m */
     bool altp2m_active;
-    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
+    struct p2m_domain **altp2m_p2m;
     mm_lock_t altp2m_list_lock;
     uint64_t *altp2m_eptp;
     uint64_t *altp2m_visible_eptp;
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 1c01e22c8e..277648dd18 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -828,6 +828,11 @@ static inline bool hvm_hap_supported(void)
     return false;
 }

+static inline bool hvm_altp2m_supported(void)
+{
+    return false;
+}
+
 static inline bool hvm_nested_virt_supported(void)
 {
     return false;
diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index e6f7764f9f..a1094fc7b3 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -887,7 +887,7 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
     if ( index == INVALID_ALTP2M )
         return NULL;

-    BUG_ON(index >= MAX_ALTP2M);
+    BUG_ON(index >= v->domain->nr_altp2m);

     return altp2m_get_p2m(v->domain, index);
 }
@@ -898,7 +898,7 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
     struct p2m_domain *orig;
     struct p2m_domain *ap2m;

-    BUG_ON(idx >= MAX_ALTP2M);
+    BUG_ON(idx >= v->domain->nr_altp2m);

     if ( idx == vcpu_altp2m(v).p2midx )
         return false;
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 7fb1738376..d47277e5e5 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -15,6 +15,11 @@
 void
 altp2m_vcpu_initialise(struct vcpu *v)
 {
+    struct domain *d = v->domain;
+
+    if ( d->nr_altp2m == 0 )
+        return;
+
     if ( v != current )
         vcpu_pause(v);

@@ -30,8 +35,12 @@ altp2m_vcpu_initialise(struct vcpu *v)
 void
 altp2m_vcpu_destroy(struct vcpu *v)
 {
+    struct domain *d = v->domain;
     struct p2m_domain *p2m;

+    if ( d->nr_altp2m == 0 )
+        return;
+
     if ( v != current )
         vcpu_pause(v);

@@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
     struct p2m_domain *hostp2m = p2m_get_hostp2m(d);

     mm_lock_init(&d->arch.altp2m_list_lock);
-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
+
+    if ( !d->arch.altp2m_p2m )
+        return -ENOMEM;
+
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
         if ( p2m == NULL )
@@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m;

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    if ( !d->arch.altp2m_p2m )
+        return;
+
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         if ( !d->arch.altp2m_p2m[i] )
             continue;
@@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
         d->arch.altp2m_p2m[i] = NULL;
         p2m_free_one(p2m);
     }
+
+    XFREE(d->arch.altp2m_p2m);
 }

 int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
@@ -200,7 +219,7 @@ bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
     struct domain *d = v->domain;
     bool rc = false;

-    if ( idx >= MAX_ALTP2M )
+    if ( idx >= d->nr_altp2m )
         return rc;

     altp2m_list_lock(d);
@@ -306,7 +325,7 @@ static void p2m_reset_altp2m(struct domain *d, unsigned int idx,
 {
     struct p2m_domain *p2m;

-    ASSERT(idx < MAX_ALTP2M);
+    ASSERT(idx < d->nr_altp2m);
     p2m = altp2m_get_p2m(d, idx);

     p2m_lock(p2m);
@@ -332,7 +351,7 @@ void p2m_flush_altp2m(struct domain *d)

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         p2m_reset_altp2m(d, i, ALTP2M_DEACTIVATE);
         altp2m_set_eptp(d, i, mfn_x(INVALID_MFN));
@@ -348,7 +367,7 @@ static int p2m_activate_altp2m(struct domain *d, unsigned int idx,
     struct p2m_domain *hostp2m, *p2m;
     int rc;

-    ASSERT(idx < MAX_ALTP2M);
+    ASSERT(idx < d->nr_altp2m);

     p2m = altp2m_get_p2m(d, idx);
     hostp2m = p2m_get_hostp2m(d);
@@ -388,7 +407,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
     int rc = -EINVAL;
     struct p2m_domain *hostp2m = p2m_get_hostp2m(d);

-    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+    if ( idx >= d->nr_altp2m )
         return rc;

     altp2m_list_lock(d);
@@ -414,7 +433,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx,

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             continue;
@@ -436,7 +455,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
     struct p2m_domain *p2m;
     int rc = -EBUSY;

-    if ( !idx || idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+    if ( !idx || idx >= d->nr_altp2m )
         return rc;

     rc = domain_pause_except_self(d);
@@ -471,7 +490,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
     struct vcpu *v;
     int rc = -EINVAL;

-    if ( idx >= MAX_ALTP2M )
+    if ( idx >= d->nr_altp2m )
         return rc;

     rc = domain_pause_except_self(d);
@@ -506,8 +525,7 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     mfn_t mfn;
     int rc = -EINVAL;

-    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
-         altp2m_get_eptp(d, idx) == mfn_x(INVALID_MFN) )
+    if ( idx >= d->nr_altp2m || altp2m_get_eptp(d, idx) == mfn_x(INVALID_MFN) )
         return rc;

     hp2m = p2m_get_hostp2m(d);
@@ -567,7 +585,7 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         p2m_type_t t;
         p2m_access_t a;
@@ -590,7 +608,7 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
             else
             {
                 /* At least 2 altp2m's impacted, so reset everything */
-                for ( i = 0; i < MAX_ALTP2M; i++ )
+                for ( i = 0; i < d->nr_altp2m; i++ )
                 {
                     if ( i == last_reset_idx ||
                          altp2m_get_eptp(d, i) == mfn_x(INVALID_MFN) )
@@ -654,7 +672,7 @@ int p2m_set_suppress_ve_multi(struct domain *d,

     if ( sve->view > 0 )
     {
-        if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( sve->view >= d->nr_altp2m ||
              altp2m_get_eptp(d, sve->view) == mfn_x(INVALID_MFN) )
             return -EINVAL;

@@ -721,7 +739,7 @@ int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve,

     if ( altp2m_idx > 0 )
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

@@ -756,9 +774,9 @@ int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx,

     /*
      * Eptp index is correlated with altp2m index and should not exceed
-     * min(MAX_ALTP2M, MAX_EPTP).
+     * d->nr_altp2m.
      */
-    if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+    if ( altp2m_idx >= d->nr_altp2m ||
          altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
         rc = -EINVAL;
     else if ( visible )
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index 8fc8348152..8b23792a0d 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -515,7 +515,7 @@ int hap_enable(struct domain *d, u32 mode)
             altp2m_set_visible_eptp(d, i, mfn_x(INVALID_MFN));
         }

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             rv = p2m_alloc_table(altp2m_get_p2m(d, i));
             if ( rv != 0 )
@@ -538,8 +538,8 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;

     if ( hvm_altp2m_supported() )
-        for ( i = 0; i < MAX_ALTP2M; i++ )
-            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
+        for ( i = 0; i < d->nr_altp2m; i++ )
+            p2m_teardown(altp2m_get_p2m(d, i), true, NULL);

     /* Destroy nestedp2m's first */
     for (i = 0; i < MAX_NESTEDP2M; i++) {
@@ -590,7 +590,7 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             p2m_teardown(altp2m_get_p2m(d, i), false, preempted);
             if ( preempted && *preempted )
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 43f3a8c2aa..669a0d0a54 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -347,7 +347,7 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     /* altp2m view 0 is treated as the hostp2m */
     if ( altp2m_idx )
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

@@ -402,7 +402,7 @@ long p2m_set_mem_access_multi(struct domain *d,
     /* altp2m view 0 is treated as the hostp2m */
     if ( altp2m_idx )
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

@@ -464,7 +464,7 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access,
     }
     else if ( altp2m_idx ) /* altp2m view 0 is treated as the hostp2m */
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              altp2m_get_eptp(d, altp2m_idx) == mfn_x(INVALID_MFN) )
             return -EINVAL;

@@ -483,7 +483,7 @@ void arch_p2m_set_access_required(struct domain *d, bool access_required)
     if ( altp2m_active(d) )
     {
         unsigned int i;
-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             struct p2m_domain *p2m = altp2m_get_p2m(d, i);

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 21ac361111..2139d13009 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -912,7 +912,7 @@ static int nominate_page(struct domain *d, gfn_t gfn,

         altp2m_list_lock(d);

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             ap2m = altp2m_get_p2m(d, i);
             if ( !ap2m )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index ed4252822e..f90c82f89b 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1293,7 +1293,7 @@ static void ept_set_ad_sync(struct domain *d, bool value)
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             struct p2m_domain *p2m;

@@ -1519,7 +1519,7 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         if ( altp2m_get_eptp(d, i) == mfn_x(INVALID_MFN) )
             continue;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 30a44441ba..380b7ece9c 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -105,7 +105,7 @@ void p2m_change_entry_type_global(struct domain *d,
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
@@ -140,7 +140,7 @@ void p2m_memory_type_changed(struct domain *d)
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
@@ -913,7 +913,7 @@ void p2m_change_type_range(struct domain *d,
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
@@ -986,7 +986,7 @@ int p2m_finish_type_change(struct domain *d,
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( altp2m_get_eptp(d, i) != mfn_x(INVALID_MFN) )
             {
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 67cadb7c3f..776442cec0 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -610,6 +610,7 @@ struct domain *domain_create(domid_t domid,
     if ( config )
     {
         d->options = config->flags;
+        d->nr_altp2m = config->altp2m.nr;
         d->vmtrace_size = config->vmtrace_size;
     }

diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index dea399aa8e..056bbc82a2 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -103,7 +103,10 @@ struct xen_domctl_createdomain {
 /* Altp2m mode signaling uses bits [0, 1]. */
 #define XEN_DOMCTL_ALTP2M_mode_mask  (0x3U)
 #define XEN_DOMCTL_ALTP2M_mode(m)    ((m) & XEN_DOMCTL_ALTP2M_mode_mask)
-        uint32_t opts;
+        uint16_t opts;
+
+        /* Number of altp2ms to allocate. */
+        uint16_t nr;
     } altp2m;

     /* Per-vCPU buffer size in bytes.  0 to disable. */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2dcd1d1a4f..7119f3c44f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -610,6 +610,8 @@ struct domain
         unsigned int guest_request_sync          : 1;
     } monitor;

+    unsigned int nr_altp2m;    /* Number of altp2m tables */
+
     unsigned int vmtrace_size; /* Buffer size in bytes, or 0 to disable. */

 #ifdef CONFIG_ARGO
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
From: Petr Beneš <w1benny@gmail.com>
X-Google-Original-From: Petr Beneš <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: Petr Beneš <w1benny@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Roger Pau Monné <roger.pau@citrix.com>
Subject: [PATCH for-4.19? v5 03/10] xen: Refactor altp2m options into a structured format
Date: Sun,  2 Jun 2024 20:04:16 +0000
Message-Id: <5dc1d0375206bd982b91f4db4bd237769a889f48.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Encapsulate the altp2m options within a struct. This change is preparatory
and lays the groundwork for introducing an additional parameter in a
subsequent commit.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/libs/light/libxl_create.c     | 6 +++---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
 xen/arch/arm/domain.c               | 2 +-
 xen/arch/x86/domain.c               | 4 ++--
 xen/arch/x86/hvm/hvm.c              | 2 +-
 xen/include/public/domctl.h         | 4 +++-
 6 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index edeadd57ef..569e3d21ed 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -680,17 +680,17 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         LOG(DETAIL, "altp2m: %s", libxl_altp2m_mode_to_string(b_info->altp2m));
         switch(b_info->altp2m) {
         case LIBXL_ALTP2M_MODE_MIXED:
-            create.altp2m_opts |=
+            create.altp2m.opts |=
                 XEN_DOMCTL_ALTP2M_mode(XEN_DOMCTL_ALTP2M_mixed);
             break;

         case LIBXL_ALTP2M_MODE_EXTERNAL:
-            create.altp2m_opts |=
+            create.altp2m.opts |=
                 XEN_DOMCTL_ALTP2M_mode(XEN_DOMCTL_ALTP2M_external);
             break;

         case LIBXL_ALTP2M_MODE_LIMITED:
-            create.altp2m_opts |=
+            create.altp2m.opts |=
                 XEN_DOMCTL_ALTP2M_mode(XEN_DOMCTL_ALTP2M_limited);
             break;

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index a529080129..e6c977521f 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -231,7 +231,9 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		.max_maptrack_frames = Int_val(VAL_MAX_MAPTRACK_FRAMES),
 		.grant_opts =
 		    XEN_DOMCTL_GRANT_version(Int_val(VAL_MAX_GRANT_VERSION)),
-		.altp2m_opts = Int32_val(VAL_ALTP2M_OPTS),
+		.altp2m = {
+			.opts = Int32_val(VAL_ALTP2M_OPTS),
+		},
 		.vmtrace_size = vmtrace_size,
 		.cpupool_id = Int32_val(VAL_CPUPOOL_ID),
 	};
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8bde2f730d..5234b627d0 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -688,7 +688,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( config->altp2m_opts )
+    if ( config->altp2m.opts )
     {
         dprintk(XENLOG_INFO, "Altp2m not supported\n");
         return -EINVAL;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 536542841e..bb5ba8fc1e 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -637,7 +637,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     bool hap = config->flags & XEN_DOMCTL_CDF_hap;
     bool nested_virt = config->flags & XEN_DOMCTL_CDF_nested_virt;
     unsigned int max_vcpus;
-    unsigned int altp2m_mode = MASK_EXTR(config->altp2m_opts,
+    unsigned int altp2m_mode = MASK_EXTR(config->altp2m.opts,
                                          XEN_DOMCTL_ALTP2M_mode_mask);

     if ( hvm ? !hvm_enabled : !IS_ENABLED(CONFIG_PV) )
@@ -717,7 +717,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( config->altp2m_opts & ~XEN_DOMCTL_ALTP2M_mode_mask )
+    if ( config->altp2m.opts & ~XEN_DOMCTL_ALTP2M_mode_mask )
     {
         dprintk(XENLOG_INFO, "Invalid altp2m options selected: %#x\n",
                 config->flags);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8334ab1711..a66ebaaceb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -659,7 +659,7 @@ int hvm_domain_initialise(struct domain *d,
     d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;

     /* Set altp2m based on domctl flags. */
-    switch ( MASK_EXTR(config->altp2m_opts, XEN_DOMCTL_ALTP2M_mode_mask) )
+    switch ( MASK_EXTR(config->altp2m.opts, XEN_DOMCTL_ALTP2M_mode_mask) )
     {
     case XEN_DOMCTL_ALTP2M_mixed:
         d->arch.hvm.params[HVM_PARAM_ALTP2M] = XEN_ALTP2M_mixed;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 2a49fe46ce..dea399aa8e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -86,6 +86,7 @@ struct xen_domctl_createdomain {

     uint32_t grant_opts;

+    struct {
 /*
  * Enable altp2m mixed mode.
  *
@@ -102,7 +103,8 @@ struct xen_domctl_createdomain {
 /* Altp2m mode signaling uses bits [0, 1]. */
 #define XEN_DOMCTL_ALTP2M_mode_mask  (0x3U)
 #define XEN_DOMCTL_ALTP2M_mode(m)    ((m) & XEN_DOMCTL_ALTP2M_mode_mask)
-    uint32_t altp2m_opts;
+        uint32_t opts;
+    } altp2m;

     /* Per-vCPU buffer size in bytes.  0 to disable. */
     uint32_t vmtrace_size;
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:44 2024
From: Petr Beneš <w1benny@gmail.com>
X-Google-Original-From: Petr Beneš <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: Petr Beneš <w1benny@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.19? v5 04/10] tools/xl: Add altp2m_count parameter
Date: Sun,  2 Jun 2024 20:04:17 +0000
Message-Id: <a5e6aaeb477d42d9f566a38d3eaa099d41400c05.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
References: <cover.1717356829.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Introduce a new altp2m_count parameter to control the maximum number of altp2m
views a domain can use. By default, if altp2m_count is unspecified and altp2m
is enabled, the value is set to 10, reflecting the legacy behavior.

This change is preparatory; it establishes the groundwork for the feature but
does not activate it.
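For illustration, a guest configuration using the new parameter alongside the
existing altp2m mode setting might look as follows (the values shown are
arbitrary examples, not recommendations):

```
# xl.cfg fragment (illustrative values)
type = "hvm"
altp2m = "external"    # enable altp2m in external mode
altp2m_count = 4       # allow up to 4 altp2m views for this domain
```

Omitting altp2m_count while altp2m is enabled keeps the legacy default of 10.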

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/golang/xenlight/helpers.gen.go | 2 ++
 tools/golang/xenlight/types.gen.go   | 1 +
 tools/include/libxl.h                | 8 ++++++++
 tools/libs/light/libxl_create.c      | 9 +++++++++
 tools/libs/light/libxl_types.idl     | 1 +
 tools/xl/xl_parse.c                  | 9 +++++++++
 6 files changed, 30 insertions(+)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index fe5110474d..0449c55f31 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1159,6 +1159,7 @@ if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
 x.Altp2M = Altp2MMode(xc.altp2m)
+x.Altp2MCount = uint32(xc.altp2m_count)
 x.VmtraceBufKb = int(xc.vmtrace_buf_kb)
 if err := x.Vpmu.fromC(&xc.vpmu);err != nil {
 return fmt.Errorf("converting field Vpmu: %v", err)
@@ -1676,6 +1677,7 @@ if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
 xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
+xc.altp2m_count = C.uint32_t(x.Altp2MCount)
 xc.vmtrace_buf_kb = C.int(x.VmtraceBufKb)
 if err := x.Vpmu.toC(&xc.vpmu); err != nil {
 return fmt.Errorf("converting field Vpmu: %v", err)
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index c9e45b306f..54607758d3 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -603,6 +603,7 @@ ArchX86 struct {
 MsrRelaxed Defbool
 }
 Altp2M Altp2MMode
+Altp2MCount uint32
 VmtraceBufKb int
 Vpmu Defbool
 }
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index f5c7167742..bfa06caad2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1250,6 +1250,14 @@ typedef struct libxl__ctx libxl_ctx;
  */
 #define LIBXL_HAVE_ALTP2M 1

+/*
+ * LIBXL_HAVE_ALTP2M_COUNT
+ * If this is defined, then libxl supports setting the maximum number of
+ * alternate p2m tables.
+ */
+#define LIBXL_HAVE_ALTP2M_COUNT 1
+#define LIBXL_ALTP2M_COUNT_DEFAULT (~(uint32_t)0)
+
 /*
  * LIBXL_HAVE_REMUS
  * If this is defined, then libxl supports remus.
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 569e3d21ed..11d2f282f5 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -482,6 +482,15 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         return -ERROR_INVAL;
     }

+    if (b_info->altp2m_count == LIBXL_ALTP2M_COUNT_DEFAULT) {
+        if ((libxl_defbool_val(b_info->u.hvm.altp2m) ||
+            b_info->altp2m != LIBXL_ALTP2M_MODE_DISABLED))
+            /* Reflect the default legacy count */
+            b_info->altp2m_count = 10;
+        else
+            b_info->altp2m_count = 0;
+    }
+
     /* Assume that providing a bootloader user implies enabling restrict. */
     libxl_defbool_setdefault(&b_info->bootloader_restrict,
                              !!b_info->bootloader_user);
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 4e65e6fda5..2963c5e250 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -729,6 +729,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # Alternate p2m is not bound to any architecture or guest type, as it is
     # supported by x86 HVM and ARM support is planned.
     ("altp2m", libxl_altp2m_mode),
+    ("altp2m_count", uint32, {'init_val': 'LIBXL_ALTP2M_COUNT_DEFAULT'}),

     # Size of preallocated vmtrace trace buffers (in KBYTES).
     # Use zero value to disable this feature.
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index e3a4800f6e..a82b8fe6e4 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2063,6 +2063,15 @@ void parse_config_data(const char *config_source,
         }
     }

+    if (!xlu_cfg_get_long(config, "altp2m_count", &l, 1)) {
+        if (l != (uint16_t)l) {
+            fprintf(stderr, "ERROR: invalid value %ld for \"altp2m_count\"\n", l);
+            exit (1);
+        }
+
+        b_info->altp2m_count = l;
+    }
+
     if (!xlu_cfg_get_long(config, "vmtrace_buf_kb", &l, 1) && l) {
         b_info->vmtrace_buf_kb = l;
     }
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 02 20:04:45 2024
From: Petr Beneš <w1benny@gmail.com>
X-Google-Original-From: Petr Beneš <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: Petr Beneš <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH for-4.19? v5 00/10] x86: Make MAX_ALTP2M configurable
Date: Sun,  2 Jun 2024 20:04:13 +0000
Message-Id: <cover.1717356829.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

This series introduces the ability to configure the maximum number of altp2m
tables during domain creation. Previously, the limit was hardcoded to a
maximum of 10. This change allows for greater flexibility in environments that
require more or fewer altp2m views.

This enhancement is particularly relevant for users leveraging Xen's features
for virtual machine introspection.

Changes since v4:
- Rebased on top of staging (applying Roger's changes).
- Fix mixed tabs/spaces in xenctrl_stubs.c.
- Add missing OCaml bindings for altp2m_opts.
- Move altp2m_opts into an unnamed structure. (This is preparation for the
  next patch, which will introduce the `nr` field.)
- altp2m.opts is then shortened to uint16_t, and a new uint16_t field
  altp2m.nr is added. libxl verifies that this value does not exceed the
  maximum uint16_t value.

  This puts a hard limit of 65535 on the number of altp2ms, which is enough,
  at least for the time being. Also, altp2m.opts currently uses only 2 bits,
  so I believe this change is justified.
- Introduction of accessor functions for altp2m arrays and refactoring the code
  to use them.
- Added a check to arm/arch_sanitise_domain_config() to disallow creating
  domains with altp2m.nr != 0.
- Added dummy hvm_altp2m_supported() to avoid build errors when CONFIG_HVM is
  disabled.
- Finally, expose altp2m_count to OCaml bindings (and verify both altp2m_opts
  and altp2m_count fit uint16_t).
- I also removed Christian Lindig from the Acked-by, since I think this change
  is significant enough to require a re-review.
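To illustrate the accessor-function idea from the list above: instead of indexing the altp2m arrays directly at every call site, lookups go through a helper that checks against the per-domain limit. This is a minimal standalone sketch - the struct layout, names, and limit here are illustrative, not the actual Xen code.

```c
#include <stddef.h>

#define MAX_ALTP2M 10 /* illustrative stand-in for the old hardcoded limit */

struct domain {
    unsigned int nr_altp2m;       /* number of views configured at creation */
    void *altp2m_p2m[MAX_ALTP2M]; /* stand-in for the per-view p2m pointers */
};

/*
 * Bounds-checked accessor replacing raw d->altp2m_p2m[idx] indexing:
 * out-of-range indices yield NULL instead of reading past the array.
 */
static void *altp2m_get(const struct domain *d, unsigned int idx)
{
    return idx < d->nr_altp2m ? d->altp2m_p2m[idx] : NULL;
}
```

Centralizing the bounds check in one helper also gives a single place to reintroduce array_index_nospec() if speculative hardening is needed.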

Changes since v3:
- Rebased on top of staging (some functions were moved to altp2m.c).
- Re-added the array_index_nospec() where it was removed.

Changes since v2:
- Changed max_altp2m to nr_altp2m.
- Moved arch-dependent check from xen/common/domain.c to xen/arch/x86/domain.c.
- Replaced occurrences of min(d->nr_altp2m, MAX_EPTP) with just d->nr_altp2m.
- Replaced array_index_nospec(altp2m_idx, ...) with just altp2m_idx.
- Shortened long lines.
- Removed unnecessary comments in altp2m_vcpu_initialise/destroy.
- Moved nr_altp2m field after max_ fields in xen_domctl_createdomain.
- Removed the commit that adjusted the initial allocation of pages from 256
  to 1024. This means that, after these patches, nr_altp2m will technically
  be capped at (256 - 1 - vcpus - MAX_NESTEDP2M) instead of MAX_EPTP (512).
  Future work will be needed to fix this.

Petr Beneš (10):
  tools/ocaml: Fix mixed tabs/spaces
  tools/ocaml: Add missing ocaml bindings for altp2m_opts
  xen: Refactor altp2m options into a structured format
  tools/xl: Add altp2m_count parameter
  docs/man: Add altp2m_count parameter to the xl.cfg manual
  x86/altp2m: Introduce accessor functions for safer array indexing
  xen: Make the maximum number of altp2m views configurable for x86
  tools/libxl: Activate the altp2m_count feature
  xen/x86: Disallow creating domains with altp2m enabled and altp2m.nr
    == 0
  tools/ocaml: Add altp2m_count parameter

 docs/man/xl.cfg.5.pod.in             |  14 ++++
 tools/golang/xenlight/helpers.gen.go |   2 +
 tools/golang/xenlight/types.gen.go   |   1 +
 tools/include/libxl.h                |   8 ++
 tools/libs/light/libxl_create.c      |  19 ++++-
 tools/libs/light/libxl_types.idl     |   1 +
 tools/ocaml/libs/xc/xenctrl.ml       |   2 +
 tools/ocaml/libs/xc/xenctrl.mli      |   2 +
 tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---
 tools/xl/xl_parse.c                  |   9 +++
 xen/arch/arm/domain.c                |   2 +-
 xen/arch/x86/domain.c                |  45 ++++++++---
 xen/arch/x86/hvm/hvm.c               |  10 ++-
 xen/arch/x86/hvm/vmx/vmx.c           |   6 +-
 xen/arch/x86/include/asm/altp2m.h    |  32 ++++++++
 xen/arch/x86/include/asm/domain.h    |   9 ++-
 xen/arch/x86/include/asm/hvm/hvm.h   |   5 ++
 xen/arch/x86/include/asm/p2m.h       |  11 ++-
 xen/arch/x86/mm/altp2m.c             | 110 ++++++++++++++-------------
 xen/arch/x86/mm/hap/hap.c            |  16 ++--
 xen/arch/x86/mm/mem_access.c         |  25 +++---
 xen/arch/x86/mm/mem_sharing.c        |   4 +-
 xen/arch/x86/mm/p2m-ept.c            |  18 ++---
 xen/arch/x86/mm/p2m.c                |  24 +++---
 xen/common/domain.c                  |   1 +
 xen/include/public/domctl.h          |   7 +-
 xen/include/xen/sched.h              |   2 +
 27 files changed, 290 insertions(+), 135 deletions(-)

--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 02:05:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 02:05:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734661.1140746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDx47-0008Tr-Cp; Mon, 03 Jun 2024 02:04:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734661.1140746; Mon, 03 Jun 2024 02:04:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sDx47-0008Tk-AH; Mon, 03 Jun 2024 02:04:51 +0000
Received: by outflank-mailman (input) for mailman id 734661;
 Mon, 03 Jun 2024 02:04:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDx45-0008TZ-LO; Mon, 03 Jun 2024 02:04:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDx45-00040S-7A; Mon, 03 Jun 2024 02:04:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sDx44-00048S-Qw; Mon, 03 Jun 2024 02:04:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sDx44-0002DF-QP; Mon, 03 Jun 2024 02:04:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pfcIGyuqxdpW8hJzFLaKj9jwNMPKVHYt8b34KO6iXSQ=; b=Q1/ESiOFjGKwE6QZNVvTCrH3Ro
	SbD+Iu9/tX/C+Qv2FheNienDYBSgatczBOPVebcWZp4XdMBEes3a++zPFOusz1OHC6oVFKInzx9LN
	/kDi4lc2+oh5jQEmqb9YxH7mrryj9P7VaauvVBCtUKehFZb5uGlMjiWqzWTyqpT5F2zM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186232-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186232: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:host-ping-check-xen:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a693b9c95abd4947c2d06e05733de5d470ab6586
X-Osstest-Versions-That:
    linux=83814698cf48ce3aadc5d88a3f577f04482ff92a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Jun 2024 02:04:48 +0000

flight 186232 linux-linus real [real]
flight 186233 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186232/
http://logs.test-lab.xenproject.org/osstest/logs/186233/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt     10 host-ping-check-xen      fail REGR. vs. 186231
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186231

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186231
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186231
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a693b9c95abd4947c2d06e05733de5d470ab6586
baseline version:
 linux                83814698cf48ce3aadc5d88a3f577f04482ff92a

Last test of basis   186231  2024-06-02 11:33:31 Z    0 days
Testing same since   186232  2024-06-02 18:43:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Dave Hansen <dave.hansen@linux.intel.com>
  Ingo Molnar <mingo@kernel.org>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jörn Heusipp <osmanx@heusipp.de>
  Kees Cook <kees@kernel.org>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marco Patalano <mpatalan@redhat.com>
  Peter Schneider <pschneider1968@googlemail.com>
  Phil Auld <pauld@redhat.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tim Teichmann <teichmanntim@outlook.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 300 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 05:45:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 05:45:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734648.1140756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE0VF-0000wu-Mh; Mon, 03 Jun 2024 05:45:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734648.1140756; Mon, 03 Jun 2024 05:45:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE0VF-0000wn-JV; Mon, 03 Jun 2024 05:45:05 +0000
Received: by outflank-mailman (input) for mailman id 734648;
 Mon, 03 Jun 2024 00:37:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/YIx=NF=quicinc.com=quic_jjohnson@srs-se1.protection.inumbo.net>)
 id 1sDvhi-0007dJ-S5
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 00:37:38 +0000
Received: from mx0b-0031df01.pphosted.com (mx0b-0031df01.pphosted.com
 [205.220.180.131]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77e5d704-2141-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 02:37:35 +0200 (CEST)
Received: from pps.filterd (m0279871.ppops.net [127.0.0.1])
 by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 452NgCVZ029983;
 Mon, 3 Jun 2024 00:37:31 GMT
Received: from nalasppmta05.qualcomm.com (Global_NAT1.qualcomm.com
 [129.46.96.20])
 by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3yfw7djnw2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 03 Jun 2024 00:37:31 +0000 (GMT)
Received: from nalasex01a.na.qualcomm.com (nalasex01a.na.qualcomm.com
 [10.47.209.196])
 by NALASPPMTA05.qualcomm.com (8.17.1.5/8.17.1.5) with ESMTPS id 4530bTpn029167
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 3 Jun 2024 00:37:29 GMT
Received: from [169.254.0.1] (10.49.16.6) by nalasex01a.na.qualcomm.com
 (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Sun, 2 Jun 2024
 17:37:29 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77e5d704-2141-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=
	cc:content-transfer-encoding:content-type:date:from:message-id
	:mime-version:subject:to; s=qcppdkim1; bh=gI23069akMvv2fLPXYYcSC
	TFV09EJupDQM5/X6eZXMI=; b=GtQviHh4QfXIpYC/XJTYu3uGqqDBP2fbEExWgT
	T9OP7YIdlxKV3H63+gq3RccikmQWrjzkIxczGEhHvuIB3H8w1Hx52RNKKY4+es26
	Qpj8ALOvEtDMuzd4XWsRUbozIjobUIpr4Y0CkJKf3wIm1TEgCs/5dJq8sLRXHp2/
	EoBzIDzBapY7YUY/8vy7MxW6fzk9vIFOfCecK0Cgzbg9LX/ShYMJm1NGk5f50Ser
	NFA/g2PTMT/UWrqGdO19rPElQBuhye/PB4qMZ/58zP6J51KgaYFbPqUhqLIe6gki
	0y/cugFGr6UGwoc5+Pz/814gdZWv9LKPAMl8oqBxXUgfZP3Q==
From: Jeff Johnson <quic_jjohnson@quicinc.com>
Date: Sun, 2 Jun 2024 17:37:28 -0700
Subject: [PATCH] xen/blkback: add missing MODULE_DESCRIPTION() macro
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-ID: <20240602-md-block-xen-blkback-v1-1-6ff5b58bdee1@quicinc.com>
X-B4-Tracking: v=1; b=H4sIAEcQXWYC/x2MQQrCMBAAv1L27EIMwaJfEQ+bZGuXtqlsqgRK/
 +7qbeYws0NlFa5w63ZQ/kiVtZicTx2kkcqTUbI5eOeDuziPS8Y4r2nCxsVoimTscgzXkH3f0wC
 WvpQHaf/t/WEeqTJGpZLG32yW8m64UN1Y4Ti+leS1yYUAAAA=
To: =?utf-8?q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Jens Axboe
	<axboe@kernel.dk>
CC: <xen-devel@lists.xenproject.org>, <linux-block@vger.kernel.org>,
        <linux-kernel@vger.kernel.org>, <kernel-janitors@vger.kernel.org>,
        "Jeff
 Johnson" <quic_jjohnson@quicinc.com>
X-Mailer: b4 0.13.0
X-Originating-IP: [10.49.16.6]
X-ClientProxiedBy: nalasex01b.na.qualcomm.com (10.47.209.197) To
 nalasex01a.na.qualcomm.com (10.47.209.196)
X-QCInternal: smtphost
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085
X-Proofpoint-ORIG-GUID: gIsLlGKjs2kiMEpeW4_9gb3LGjvCXwcn
X-Proofpoint-GUID: gIsLlGKjs2kiMEpeW4_9gb3LGjvCXwcn
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.650,FMLib:17.12.28.16
 definitions=2024-06-02_15,2024-05-30_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0
 mlxlogscore=999 impostorscore=0 malwarescore=0 phishscore=0 adultscore=0
 clxscore=1011 bulkscore=0 priorityscore=1501 lowpriorityscore=0
 spamscore=0 mlxscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406030003

make allmodconfig && make W=1 C=1 reports:
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/block/xen-blkback/xen-blkback.o

Add the missing invocation of the MODULE_DESCRIPTION() macro.

Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
---
 drivers/block/xen-blkback/blkback.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/drivers/block/xen-blkback/blkback.c b/drivers/block/xen-blkback/blkback.c
index 944576d582fb..838064593f62 100644
--- a/drivers/block/xen-blkback/blkback.c
+++ b/drivers/block/xen-blkback/blkback.c
@@ -1563,5 +1563,6 @@ static void __exit xen_blkif_fini(void)
 
 module_exit(xen_blkif_fini);
 
+MODULE_DESCRIPTION("Virtual block device back-end driver");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("xen-backend:vbd");

---
base-commit: a693b9c95abd4947c2d06e05733de5d470ab6586
change-id: 20240602-md-block-xen-blkback-0db494d277af



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 05:58:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 05:58:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734690.1140767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE0iK-0002kQ-W3; Mon, 03 Jun 2024 05:58:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734690.1140767; Mon, 03 Jun 2024 05:58:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE0iK-0002kJ-Sb; Mon, 03 Jun 2024 05:58:36 +0000
Received: by outflank-mailman (input) for mailman id 734690;
 Mon, 03 Jun 2024 05:58:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=29W0=NF=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sE0iJ-0002kD-HF
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 05:58:35 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4f1dd34d-216e-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 07:58:33 +0200 (CEST)
Received: by mail-wm1-x332.google.com with SMTP id
 5b1f17b1804b1-42134bb9735so15126915e9.1
 for <xen-devel@lists.xenproject.org>; Sun, 02 Jun 2024 22:58:33 -0700 (PDT)
Received: from ?IPV6:2003:ca:b724:4976:f1a7:a03d:19f7:6554?
 (p200300cab7244976f1a7a03d19f76554.dip0.t-ipconnect.de.
 [2003:ca:b724:4976:f1a7:a03d:19f7:6554])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4213ab7ca26sm38200405e9.25.2024.06.02.22.58.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 02 Jun 2024 22:58:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f1dd34d-216e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717394312; x=1717999112; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=IPvT6jy0s4j9O9+2tmG66NrXggrPlQni7x5Sszi+yxI=;
        b=FwXlLfXypTNdvFWyiH1QUQ8IOEnE2p3BtojFuCzEAF5IaaNh5pKw8ufvi2kdr/JoZ7
         KHdm2RiZEeUu5k6PbnC/YjaoMRm5JEKnb6/Jsau6mDEZ3RkL0VJnifrmvDcBwg3cEmOX
         rRyLoV8ioq13cc0Pc6kGZgcNHInltFF+5iWMeeCZnTz8CRwkdIJgLFO1reg2uFIuP67v
         3DwTOyYQK37eacSAaj/qe5N8G7xnPmGXq/Peiygdtxi7QcHfZuzlFTkz0OXTb+r63ME+
         AqNeD0bEer7WaeJb/kwGgIiOUdu8uPuKK07wObpZ3gKwW3gqFV3cm9A0E50/1cHDTgEQ
         V3aQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717394312; x=1717999112;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=IPvT6jy0s4j9O9+2tmG66NrXggrPlQni7x5Sszi+yxI=;
        b=SvLkRHX3L8Xpiq83UOojdEt75y8uSIgz7qT1lkgI7oQ0Xyl9nDMpxkkUwBP/8jVZrh
         YwTaS+muRfwheFEP566eNvtmXfWslTJxyeoEV/CaO2ncqXcMljdrhxCf8Xn7PAqTMXGW
         G9hQxapTe/EbDUy5t2Voh9IOA52S7xnY70q7+uf1vkN/146J9yBX0LU9hDrbNsYjCyJ+
         6kdT7idw1z7O2OTeu++pTfIBFfvdzbUB0pWddGz/ZowuSNW0sjSheSNGh4bUjsTJaCzK
         vSYTS3vx8uxcygs5jHmC2a59BxtQuxNo/5bxN55SJCtu1FToTqsCqRQXYFhI2dhRUBCz
         X5hw==
X-Forwarded-Encrypted: i=1; AJvYcCX6tufW9Nbjy9aVUwDDwne1wh6+E/CKJKfN/Nn3OmbcT8tBz1G6ZnBMWaFsg8nQycUaUcsdHytfUkbyxjylu5gBezL9Hk3HctMcd5Czm7U=
X-Gm-Message-State: AOJu0Yyzc1HCYkza27lZy4emiUvHDqE4DOQcboAd7jLg3fiMifsHXoZN
	xuqkpIqxJH7df/xk7dK+xzSnO8D916LXUP6Zv5lnlUx6oNF8fVfLrVctCv7gyQ==
X-Google-Smtp-Source: AGHT+IFKWAamOY0F3twXzaGHTrkpI3MeQIoceCYLoX6cqL9weHF+ij89NL0mb/YCViqq55bGudNjhA==
X-Received: by 2002:a05:600c:19c9:b0:41b:e84d:67a3 with SMTP id 5b1f17b1804b1-4212e0065e0mr66128555e9.0.1717394312362;
        Sun, 02 Jun 2024 22:58:32 -0700 (PDT)
Message-ID: <90c40d6a-d648-46bb-9cb0-df11ac165bd7@suse.com>
Date: Mon, 3 Jun 2024 07:58:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 4/5] automation/eclair_analysis: address remaining
 violations of MISRA C Rule 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 01.06.2024 12:16, Nicola Vetrini wrote:
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
>  -config=MC3R1.R20.12,macros+={deliberate, "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
>  -doc_end
>  
> +-doc_begin="The macro DEFINE is defined and used in excluded files asm-offsets.c.
> +This may still cause violations if entities outside these files are referred to
> +in the expansion."
> +-config=MC3R1.R20.12,macros+={deliberate, "name(DEFINE)&&loc(file(asm_offsets))"}
> +-doc_end

Can you give an example of such a reference? Nothing _in_ asm-offsets.c
should be referenced, I'd think. Only stuff in asm-offsets.h as _generated
from_ asm-offsets.c will, of course, be.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 06:24:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 06:24:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734703.1140777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE17S-0006y7-1e; Mon, 03 Jun 2024 06:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734703.1140777; Mon, 03 Jun 2024 06:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE17R-0006y0-Td; Mon, 03 Jun 2024 06:24:33 +0000
Received: by outflank-mailman (input) for mailman id 734703;
 Mon, 03 Jun 2024 06:24:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=29W0=NF=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sE17Q-0006xu-Uv
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 06:24:32 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ef6505c7-2171-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 08:24:30 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id
 ffacd0b85a97d-35dce6102f4so2647492f8f.3
 for <xen-devel@lists.xenproject.org>; Sun, 02 Jun 2024 23:24:30 -0700 (PDT)
Received: from ?IPV6:2003:ca:b724:4976:f1a7:a03d:19f7:6554?
 (p200300cab7244976f1a7a03d19f76554.dip0.t-ipconnect.de.
 [2003:ca:b724:4976:f1a7:a03d:19f7:6554])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd04d9424sm7811871f8f.50.2024.06.02.23.24.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 02 Jun 2024 23:24:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef6505c7-2171-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717395870; x=1718000670; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=UPCewO+Mv8mYfHHvjMIO9EcN+6/nh2RkPWUQC6NHmWM=;
        b=DUC4HSLS3tFtwqMsIqp/AiYWHDPSdd4M6XPhNI//fpCW5MUn4jPbfxAwP2h8VIXXRa
         EkLoTlzY8BYmQnyW8CGITE13VnoYsx507TjDiDS86EKmYEfJ96C78vrt4G1esss20itb
         0UoTxdyWSUs/L/0+GneluvKZwikugST/tMdInYZugbC2WzZu9CazG9O2/ab7EPo5U5kT
         erbZNfgH05iouyDMoD1SSZk9CUeALjBS3DnFlApmX4zMb6vV6SNmtQvTLd8nUskcQqXJ
         m1PCJVIrsgWJ/09mo9fgsmpcIzddftklPJm01BNzkogBuazZWo1xcrtkJAlAYIk+uK+5
         Ou8w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717395870; x=1718000670;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UPCewO+Mv8mYfHHvjMIO9EcN+6/nh2RkPWUQC6NHmWM=;
        b=CeIhoaQEKrldCfUAhXq3LxpJ8W8rSOu0S95qhvPLtA2bU7poCZrTFja4VMXznZX05f
         GdYbb66/mGpHSYcYj5hJM8mD5zPwRcpTFP4NF91KL42uf9frd0sa6O+V+gB0EahEr07p
         46EWw8Pe0d6iBM9ap1tCMqEmxF2Lq3YWkIpwk2A83stsczamdRQlHFo/+RlWcGa7HXBb
         iMkamKTNR8iSpmw05jJnOWuUiCjHds7sOofgjrO1Qfs57YKrzyGqlxVl7k2M4WUya6E0
         dK0ehK4FC8i9hAWCOxi0IBxMBEqJ87om7fl2nwzbOm/1qrq0J80gPZLX07ocXxPutS46
         V3cQ==
X-Forwarded-Encrypted: i=1; AJvYcCXBiVSrA88fYLkUvOlWK16feb5Ah+5IsrOQOnoPckX+lQKgRLQx+bMcJWc7FHd3JJjNSTsOcSlWomyUivrLN6Lat7SiGn4Z22qrYGIcaVw=
X-Gm-Message-State: AOJu0YxuHFrKEKY84TlYwa7MkiJ+yxGMFzTnpyd6cXn0L0x6WGm1eg5B
	yNrwyl/INUHxwfuSugXtVvmHizaFkc/u2tUDaOChbOkHBSd9TckRhST6M9aICA==
X-Google-Smtp-Source: AGHT+IGEENRs/QW8OHRKuenWcOHDRtRZ3tJ7N+35VKMMd3kBVejaA8dt6iGzF/eW9IO6NISsRRAPyA==
X-Received: by 2002:a5d:4905:0:b0:343:efb7:8748 with SMTP id ffacd0b85a97d-35e0f32f33fmr6438654f8f.66.1717395869798;
        Sun, 02 Jun 2024 23:24:29 -0700 (PDT)
Message-ID: <7face8c3-174b-4ba3-9e08-1622d2db98ba@suse.com>
Date: Mon, 3 Jun 2024 08:24:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 07/13] x86/bitops: Improve arch_ffs() in the general
 case
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 "consulting @ bugseng . com" <consulting@bugseng.com>,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Federico Serafini <federico.serafini@bugseng.com>,
 Nicola Vetrini <nicola.vetrini@bugseng.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240524200338.1232391-1-andrew.cooper3@citrix.com>
 <20240524200338.1232391-8-andrew.cooper3@citrix.com>
 <1660a2a7-cea4-4e6f-9286-0c134c34b6fb@suse.com>
 <57a47c76-c484-4309-8a87-a51f79dd48b6@suse.com>
 <b0838a62-1e6a-473a-a757-97091c84e164@citrix.com>
 <df7bb467-c778-43fb-bd04-f81f6e3dfd01@suse.com>
 <b5a24f69-855d-49e8-b13b-7c9b2ee199f4@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b5a24f69-855d-49e8-b13b-7c9b2ee199f4@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 01.06.2024 03:47, Andrew Cooper wrote:
> On 28/05/2024 2:12 pm, Jan Beulich wrote:
>> On 28.05.2024 14:30, Andrew Cooper wrote:
>>> On 27/05/2024 2:37 pm, Jan Beulich wrote:
>>>> On 27.05.2024 15:27, Jan Beulich wrote:
>>>>> On 24.05.2024 22:03, Andrew Cooper wrote:
>>>>>> --- a/xen/arch/x86/include/asm/bitops.h
>>>>>> +++ b/xen/arch/x86/include/asm/bitops.h
>>>>>> @@ -432,12 +432,28 @@ static inline int ffsl(unsigned long x)
>>>>>>  
>>>>>>  static always_inline unsigned int arch_ffs(unsigned int x)
>>>>>>  {
>>>>>> -    int r;
>>>>>> +    unsigned int r;
>>>>>> +
>>>>>> +    if ( __builtin_constant_p(x > 0) && x > 0 )
>>>>>> +    {
>>>>>> +        /* Safe, when the compiler knows that x is nonzero. */
>>>>>> +        asm ( "bsf %[val], %[res]"
>>>>>> +              : [res] "=r" (r)
>>>>>> +              : [val] "rm" (x) );
>>>>>> +    }
>>>>> In patch 11 relevant things are all in a single patch, making it easier
>>>>> to spot that this is dead code: The sole caller already has a
>>>>> __builtin_constant_p(), hence I don't see how the one here could ever
>>>>> return true. With that the respective part of the description is then
>>>>> questionable, too, I'm afraid: Where did you observe any actual effect
>>>>> from this? Or if you did - what am I missing?
>>>> Hmm, thinking about it: I suppose that's why you have
>>>> __builtin_constant_p(x > 0), not __builtin_constant_p(x). I have to admit
>>>> I'm (positively) surprised that the former may return true when the latter
>>>> doesn't.
>>> So was I, but this recommendation came straight from the GCC mailing
>>> list.  And it really does work, even back in obsolete versions of GCC.
>>>
>>> __builtin_constant_p() operates on an expression not a value, and is
>>> documented as such.
>> Of course.
>>
>>>>  Nevertheless I'm inclined to think this deserves a brief comment.
>>> There is a comment, and it's even visible in the snippet.
>> The comment is about the asm(); it is neither placed to clearly relate
>> to __builtin_constant_p(), nor is it saying anything about this specific
>> property of it. You said you were equally surprised; don't you think
>> that when both of us are surprised, a specific (even if brief) comment
>> is warranted?
> 
> Spell it out for me like I'm an idiot.
> 
> Because I'm looking at the patch I submitted, and at your request for "a
> brief comment", and I still have no idea what you think is wrong at the
> moment.
> 
> I'm also not inclined to write a comment saying "go and read the GCC
> manual more carefully".
> 
>>
>>>> As an aside, to better match the comment inside the if()'s body, how about
>>>>
>>>>     if ( __builtin_constant_p(!!x) && x )
>>>>
>>>> ? That also may make a little more clear that this isn't just a style
>>>> choice, but actually needed for the intended purpose.
>>> I am not changing the logic.
>>>
>>> Apart from anything else, your suggestion is trivially buggy.  I care
>>> about whether the RHS collapses to a constant, and the only way of doing
>>> that correctly is asking the compiler about the *exact* expression. 
>>> Asking about some other expression which you hope - but do not know -
>>> that the compiler will treat equivalently is bogus.  It would be
>>> strictly better to only take the else clause, than to have both halves
>>> emitted.
>>>
>>> This is the form I've tested extensively.  It's also the clearest form
>>> IMO.  You can experiment with alternative forms when we're not staring
>>> down code freeze of 4.19.
>> "Clearest form" is almost always a matter of taste. To me, comparing
>> unsigned values with > or < against 0 is generally at least suspicious.
>> Using != is typically better (again: imo), and simply omitting the != 0
>> then is shorter with no difference in effect. Except in peculiar cases
>> like this one, where indeed it took me some time to figure why the
>> comparison operator may not be omitted.
>>
>> All that said: I'm not going to insist on any change; the R-b previously
>> offered still stands. I would highly appreciate though if the (further)
>> comment asked for could be added.
>>
>> What I definitely dislike here is you - not for the first time - turning
>> down remarks because a change of yours is late.
> 
> Actually it's not to do with the release.  I'd reject it at any point
> because it's an unreasonable request to make; to me, or to anyone else.
> 
> It would be a matter of taste (which again you have a singular view on),
> if it wasn't for the fact that what you actually said was:
> 
> "I don't like it, and you should discard all the careful analysis you
> did because here's a form I prefer, that I haven't tested concerning a
> behaviour I didn't even realise until this email."

Just to clarify: long before this reply of yours I understood and admitted
my mistake. A clearer / better placed comment (see further up) might have
avoided that. Still - thanks for extending the comment in what you have
committed.

> and even if it wasn't a buggy suggestion to begin with, it's still toxic
> maintainer feedback.

What's toxic about making a mistake? What's toxic about disliking "x > 0"
for unsigned quantities? As you say, it's a matter of taste to a fair
degree. Yet there are ample cases where taste as in "make it as clear as
possible to every reader" is used to ask me or others to change style. I
don't see why I shouldn't be permitted to at least make a similar remark,
even if then it's turned down (for good or bad reasons).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 06:39:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 06:39:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734709.1140788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE1LY-0000Og-9h; Mon, 03 Jun 2024 06:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734709.1140788; Mon, 03 Jun 2024 06:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE1LY-0000OZ-4N; Mon, 03 Jun 2024 06:39:08 +0000
Received: by outflank-mailman (input) for mailman id 734709;
 Mon, 03 Jun 2024 06:39:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=29W0=NF=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sE1LW-0000OP-2T
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 06:39:06 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f83d07f6-2173-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 08:39:04 +0200 (CEST)
Received: by mail-wm1-x32b.google.com with SMTP id
 5b1f17b1804b1-4214053918aso1007565e9.2
 for <xen-devel@lists.xenproject.org>; Sun, 02 Jun 2024 23:39:04 -0700 (PDT)
Received: from ?IPV6:2003:ca:b724:4976:f1a7:a03d:19f7:6554?
 (p200300cab7244976f1a7a03d19f76554.dip0.t-ipconnect.de.
 [2003:ca:b724:4976:f1a7:a03d:19f7:6554])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42133227f8asm87334765e9.19.2024.06.02.23.39.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 02 Jun 2024 23:39:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f83d07f6-2173-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717396744; x=1718001544; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=+gwXNTmVLAa7vhmk1oIbqvl3xm4vuSNEUNLAMuUQKg0=;
        b=ggTGG4yyt3Ceo6eRSGyuYyrvhB57yfe5+ZWJ/ZW4nxJ8adMZT5JjRySlUcdFsfuvUS
         /wq9iOltQadKa6J4XGqYSmhFGDtxG37GgiEYhUIz0I8DuB8qjzbehNLWRYdb0xnQxtCK
         pbZfrxGh14MQFao1Ijld1x6Z5Omx+4XLPExmHl6BDMZoBYCgf+dyBvZM9cgI8mIWroNQ
         c3i0SuC3ODBLlLq6yu36prWc9Cl2KAk1OQ7neNhw2l2jLnh5cvyPXN48B+HoTI/fCBxB
         BSrpdXQs2gFDQhI68k2A5wb40Z6AQw1pbYBjm0quqTzk6eGZgEAKIUjqEM5+VKb31CTz
         FlMA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717396744; x=1718001544;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+gwXNTmVLAa7vhmk1oIbqvl3xm4vuSNEUNLAMuUQKg0=;
        b=Vb4BRh8lRKAr0RlCjDo+ltExa/nfSi8zwOZ3bEF5wnR3xeaOMBuR1aJT90+zE/NVRl
         2+6rU0+3/GRbbrxL1ssQUi9/qi4NOuoijfTBCd+OHkb3vQsf2VjU6eqx44xy3jqIwRQK
         B2mIHExOkmVFDNJ+gUzRH8Eg6Jf9/Vu/wwlRUaSlrNQvGnE3fariAnwS4eopTwwCcLOo
         r4QcRyJ1z/g5VBlvYifKbrB+hf3Af+LXZxAyFZ757oTfA+wPZc419lkrzBqUcz6X4nB/
         H6AwBmvPEyDhUQioomYwZbszctxbCZzir0qYu5M4l4PrBw9lxnEAJ46xOrgfHkhKTAai
         2N8A==
X-Forwarded-Encrypted: i=1; AJvYcCUYSJ1coR3xbkxhKYxYfzqlEbnLsOwBJHsL3okhx0hZNz06046vdx+u41Lq82PQHpbwdVWKP1dkIohn56akCAe3HoE44CaKH93eIqSlLlY=
X-Gm-Message-State: AOJu0YxWdxtcdeedIkv8K+eKEAG3O1Pro3Ii385YfGrC5shWATbsv+p5
	FQicfD6NkTawnIufCXDLIt9/oj/EkAY8iFfr+0ZKo3YJuWgJsvzttPoUOlVxKQ==
X-Google-Smtp-Source: AGHT+IErtBKJd+0Ba8/0fEpydosf1FEzPcjnkuwyuntfZLcwowk40vtqCu3DuSeGV2O6Cr8TnxqJXQ==
X-Received: by 2002:a05:600c:1d85:b0:421:2efe:5aa8 with SMTP id 5b1f17b1804b1-4212efe5bebmr63862185e9.18.1717396744231;
        Sun, 02 Jun 2024 23:39:04 -0700 (PDT)
Message-ID: <7e96b887-8fd3-4ecc-a23c-98a46ea1aa8c@suse.com>
Date: Mon, 3 Jun 2024 08:39:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 1/5] xen/domain: deviate violation of MISRA C Rule
 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <843540164f7e8f910226e1ded05e153cb04c519d.1717236930.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <843540164f7e8f910226e1ded05e153cb04c519d.1717236930.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 01.06.2024 12:16, Nicola Vetrini wrote:
> MISRA C Rule 20.12 states: "A macro parameter used as an operand to
> the # or ## operators, which is itself subject to further macro replacement,
> shall only be used as an operand to these operators".
> 
> In this case, in builds where CONFIG_DEBUG_LOCK_PROFILE=y, the
> domain_lock macro is used both as a regular macro argument and as an
> operand for stringification in the expansion of macro spin_lock_init_prof.

Then shouldn't the marker be on the definition of spin_lock_init_prof(),
rather than ...

> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -632,6 +632,7 @@ struct domain *domain_create(domid_t domid,
>  
>      atomic_set(&d->refcnt, 1);
>      RCU_READ_LOCK_INIT(&d->rcu_lock);
> +    /* SAF-6-safe Rule 20.12 expansion of macro domain_lock in debug builds */
>      rspin_lock_init_prof(d, domain_lock);
>      rspin_lock_init_prof(d, page_alloc_lock);
>      spin_lock_init(&d->hypercall_deadlock_mutex);

... actually just one of the two uses here (and presumably several more
elsewhere)?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 07:13:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 07:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734720.1140798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE1si-000667-QG; Mon, 03 Jun 2024 07:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734720.1140798; Mon, 03 Jun 2024 07:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE1si-000660-M4; Mon, 03 Jun 2024 07:13:24 +0000
Received: by outflank-mailman (input) for mailman id 734720;
 Mon, 03 Jun 2024 07:13:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3ZA=NF=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sE1sh-00065u-MD
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 07:13:23 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c2e425aa-2178-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 09:13:22 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id BB5854EE0737;
 Mon,  3 Jun 2024 09:13:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2e425aa-2178-11ef-90a1-e314d9c70b13
MIME-Version: 1.0
Date: Mon, 03 Jun 2024 09:13:21 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 4/5] automation/eclair_analysis: address remaining
 violations of MISRA C Rule 20.12
In-Reply-To: <90c40d6a-d648-46bb-9cb0-df11ac165bd7@suse.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
 <90c40d6a-d648-46bb-9cb0-df11ac165bd7@suse.com>
Message-ID: <085aabe9953d53e634d5cf75fecdb8b7@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-03 07:58, Jan Beulich wrote:
> On 01.06.2024 12:16, Nicola Vetrini wrote:
>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> @@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
>>  -config=MC3R1.R20.12,macros+={deliberate, 
>> "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
>>  -doc_end
>> 
>> +-doc_begin="The macro DEFINE is defined and used in excluded files 
>> asm-offsets.c.
>> +This may still cause violations if entities outside these files are 
>> referred to
>> +in the expansion."
>> +-config=MC3R1.R20.12,macros+={deliberate, 
>> "name(DEFINE)&&loc(file(asm_offsets))"}
>> +-doc_end
> 
> Can you give an example of such a reference? Nothing _in_ asm-offsets.c
> should be referenced, I'd think. Only stuff in asm-offsets.h as 
> _generated
> from_ asm-offsets.c will, of course, be.
> 

Perhaps I could have expressed that more clearly. What I meant is that 
some of the arguments to DEFINE are not defined in asm-offsets.c; they 
therefore end up in the violation report, even though they are not 
actually relevant, because the macro DEFINE itself is what we want to 
exclude.

See, for instance, VCPU_TRAP_{NMI,MCE} at the link below: they are 
defined in asm/domain.h and used as arguments to DEFINE inside 
asm-offsets.c.

https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/676/PROJECT.ecd;/by_service/MC3R1.R20.12.html

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 08:42:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 08:42:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734746.1140807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3H4-000176-16; Mon, 03 Jun 2024 08:42:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734746.1140807; Mon, 03 Jun 2024 08:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3H3-00016z-TO; Mon, 03 Jun 2024 08:42:37 +0000
Received: by outflank-mailman (input) for mailman id 734746;
 Mon, 03 Jun 2024 08:42:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE3H2-00016p-0S; Mon, 03 Jun 2024 08:42:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE3H1-0004Ag-Pv; Mon, 03 Jun 2024 08:42:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE3H1-00045l-EY; Mon, 03 Jun 2024 08:42:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sE3H1-0003I9-E5; Mon, 03 Jun 2024 08:42:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gy09EF9HNyXwouW4pEUHAdDsWoeqoDuwdmzpvH27t5Y=; b=eD+GyVM5CTJ4zMRg+qPNBBF53X
	uAnhnqg5JOGem2N1DpSK5vThUuL7tmeFMrNKWlRHmAsn+XrCbkCbVVTTeyCreTnCaf0JVeAQSnXfs
	CAEq2zb0hnxAY8Kzlus+e3SYszDs0B8q7DmWaZuxbrfiE2hoCcDlje86OaKeICqp8JfI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186234-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186234: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Jun 2024 08:42:35 +0000

flight 186234 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186234/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail in 186229 pass in 186234
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186229

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186229
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186229
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186229
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186229
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186229
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186229
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186234  2024-06-03 01:51:55 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 09:01:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 09:01:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734755.1140817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3ZJ-00042n-GI; Mon, 03 Jun 2024 09:01:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734755.1140817; Mon, 03 Jun 2024 09:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3ZJ-00042g-D9; Mon, 03 Jun 2024 09:01:29 +0000
Received: by outflank-mailman (input) for mailman id 734755;
 Mon, 03 Jun 2024 09:01:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i/E6=NF=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sE3ZH-00042Z-Tv
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 09:01:27 +0000
Received: from mail-ot1-x336.google.com (mail-ot1-x336.google.com
 [2607:f8b0:4864:20::336])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id db7f52b3-2187-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 11:01:26 +0200 (CEST)
Received: by mail-ot1-x336.google.com with SMTP id
 46e09a7af769-6f8d0a7b18bso1457449a34.1
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 02:01:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db7f52b3-2187-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1717405285; x=1718010085; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0+H7pObS/b5siSWbttXRNCagqw5g9fdwQAB6Z8V9ywg=;
        b=am7Yez/D9gpJSiwO3yafn7dC7san4bXiObKatAfY+xJih1RXPJxDMEdVcmGLzjQpeM
         9bU4bRYUlSKJKtlWFvnf/sfYIxOqIhRXfOO8tzl/wi+nziK/EwqB7vchiEhZulT2Gt8R
         Jx08cMGwiNgWaRiIjEI5QYM4uZLqMNHxfsCZoBWXWEV2M3AYYHqv8boLxb2+3HH1VaWb
         1GwPZy4MybNxF9RLNYZc0nTiKrHJ1+GzrM5g0/8Fge1A/DjOspWpTODqKgENF4MAiBhV
         StPSsQxvjssG/VGGPwzvEapsz4sbpBw2ghZ7QPW4gthJ9tPbt1q2mswjOER6LRQ3E+69
         wBWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717405285; x=1718010085;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0+H7pObS/b5siSWbttXRNCagqw5g9fdwQAB6Z8V9ywg=;
        b=ETFWyj4tvbHmpxB3Vrt6f6d8oE26Hz6DiHHTLMxzut+WtofeazXQsnt58wcaevXFF3
         Da/DJF83Z4RDo1WSWud07a9o+z+zGZUqw4DOR5+Q8qt/6a3Ey0mWj6wIt3VQ9TLD0WHQ
         pgQWb0i/OH5rGI5flBFuPrmB9U7kHMocqWC4W9F6j77+dAQzA4LB7GZ0E9htR85nohkZ
         d3Pt+mBHSi7PQhPTX8gjVQZ4IJo0Lz8525CucNIIn4diWwt0+6FIKVrnjBg8b5x5M+bx
         00rFBM8xPEKY7hMrX7I8v2XDZuE6QIqVbJXbvAwR/4XaBm73q/cCFhMyx7HnWL5MS/ug
         Kd+Q==
X-Gm-Message-State: AOJu0YwaYuEptdzIAmcD7QnRxMLSti0W3N21anA/WkdYW++8Yr32z30/
	XlnL5IctfLNgMebz3ffOIkONtSJJt0ZDKnu64ZBkI9TeVedZGswB69wA9gXK/yyXUALAYKC2VV9
	D0B0BSjcFZFHjPrOc4ipDE5gn2MpI+ZmMfmefLg==
X-Google-Smtp-Source: AGHT+IF3X5wjr+H1fPRUyv06pZ9YgHDM+CDvAlHe7jjJhdtWj+vSW4djk1iFEBo8y1Xl6v1loBQLToMia9MDy/276WA=
X-Received: by 2002:a05:6870:4154:b0:24f:d90b:fcd3 with SMTP id
 586e51a60fabf-2508b19810dmr4066776fac.25.1717405285154; Mon, 03 Jun 2024
 02:01:25 -0700 (PDT)
MIME-Version: 1.0
References: <20240529072559.2486986-1-jens.wiklander@linaro.org>
 <20240529072559.2486986-8-jens.wiklander@linaro.org> <C52D6A7C-1136-4BF1-9060-600157F641F5@arm.com>
In-Reply-To: <C52D6A7C-1136-4BF1-9060-600157F641F5@arm.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Mon, 3 Jun 2024 11:01:12 +0200
Message-ID: <CAHUa44GRNQV4X61YPZTxO+tkkwJS9hoqQ07U9vP1k6n1zUt9rQ@mail.gmail.com>
Subject: Re: [XEN PATCH v5 7/7] xen/arm: ffa: support notification
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	"patches@linaro.org" <patches@linaro.org>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Michal Orzel <michal.orzel@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi Bertrand,

On Fri, May 31, 2024 at 4:28 PM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
> Hi Jens,
>
> > On 29 May 2024, at 09:25, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> >
> > Add support for FF-A notifications, currently limited to an SP (Secure
> > Partition) sending an asynchronous notification to a guest.
> >
> > Guests and Xen itself are made aware of pending notifications with an
> > interrupt. The interrupt handler triggers a tasklet to retrieve the
> > notifications using the FF-A ABI and deliver them to their destinations.
> >
> > Update ffa_partinfo_domain_init() to return error code like
> > ffa_notif_domain_init().
> >
> > Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> > ---
> > v4->v5:
> > - Move the freeing of d->arch.tee to the new TEE mediator free_domain_ctx
> >  callback to make the context accessible during rcu_lock_domain_by_id()
> >  from a tasklet
> > - Initialize interrupt handlers for secondary CPUs from the new TEE mediator
> >  init_interrupt() callback
> > - Restore the ffa_probe() from v3, that is, remove the
> >  presmp_initcall(ffa_init) approach and use ffa_probe() as usual now that we
> >  have the init_interrupt callback.
> > - A tasklet is added to handle the Schedule Receiver interrupt. The tasklet
> >  finds each relevant domain with rcu_lock_domain_by_id() which now is enough
> >  to guarantee that the FF-A context can be accessed.
> > - The notification interrupt handler only schedules the notification
> >  tasklet mentioned above
> >
> > v3->v4:
> > - Add another note on FF-A limitations
> > - Clear secure_pending in ffa_handle_notification_get() if both SP and SPM
> >  bitmaps are retrieved
> > - ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id FF-A
> >  uses for Xen itself
> > - Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id() in
> >  notif_irq_handler() with a call to rcu_lock_live_remote_domain_by_id() via
> >  ffa_rcu_lock_domain_by_vm_id()
> > - Remove spinlock in struct ffa_ctx_notif and use atomic functions as needed
> >  to access and update the secure_pending field
> > - In notif_irq_handler(), look for the first online CPU instead of assuming
> >  that the first CPU is online
> > - Initialize FF-A via presmp_initcall() before the other CPUs are online,
> >  use register_cpu_notifier() to install the interrupt handler
> >  notif_irq_handler()
> > - Update commit message to reflect recent updates
> >
> > v2->v3:
> > - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
> >  FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
> > - Register the Xen SRI handler on each CPU using on_selected_cpus() and
> >  setup_irq()
> > - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
> >  doesn't conflict with static SGI handlers
> >
> > v1->v2:
> > - Addressing review comments
> > - Change ffa_handle_notification_{bind,unbind,set}() to take struct
> >  cpu_user_regs *regs as argument.
> > - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
> >  an error code.
> > - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
> > ---
> > xen/arch/arm/tee/Makefile       |   1 +
> > xen/arch/arm/tee/ffa.c          |  72 +++++-
> > xen/arch/arm/tee/ffa_notif.c    | 409 ++++++++++++++++++++++++++++++++
> > xen/arch/arm/tee/ffa_partinfo.c |   9 +-
> > xen/arch/arm/tee/ffa_private.h  |  56 ++++-
> > xen/arch/arm/tee/tee.c          |   2 +-
> > xen/include/public/arch-arm.h   |  14 ++
> > 7 files changed, 548 insertions(+), 15 deletions(-)
> > create mode 100644 xen/arch/arm/tee/ffa_notif.c
> >
[...]
> >
> > @@ -517,8 +567,10 @@ err_rxtx_destroy:
> > static const struct tee_mediator_ops ffa_ops =
> > {
> >     .probe = ffa_probe,
> > +    .init_interrupt = ffa_notif_init_interrupt,
>
> With the previous change on init secondary, I would suggest having a
> ffa_init_secondary here that actually calls ffa_notif_init_interrupt.

Yes, that makes sense. I'll update.

>
> >     .domain_init = ffa_domain_init,
> >     .domain_teardown = ffa_domain_teardown,
> > +    .free_domain_ctx = ffa_free_domain_ctx,
> >     .relinquish_resources = ffa_relinquish_resources,
> >     .handle_call = ffa_handle_call,
> > };
> > diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
> > new file mode 100644
> > index 000000000000..e8e8b62590b3
> > --- /dev/null
> > +++ b/xen/arch/arm/tee/ffa_notif.c
[...]
> > +static void notif_vm_pend_intr(uint16_t vm_id)
> > +{
> > +    struct ffa_ctx *ctx;
> > +    struct domain *d;
> > +    struct vcpu *v;
> > +
> > +    /*
> > +     * vm_id == 0 means a notification pending for Xen itself, but
> > +     * we don't support that yet.
> > +     */
> > +    if ( !vm_id )
> > +        return;
> > +
> > +    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
> > +    if ( !d )
> > +        return;
> > +
> > +    ctx = d->arch.tee;
> > +    if ( !ctx )
> > +        goto out_unlock;
>
> In both previous cases you are silently ignoring an interrupt
> due to an internal error.
> Is this something that we should trace? Maybe just debug?
>
> Could you add a comment to explain why this could happen
> (when possible) or not, and the possible side effects?
>
> The second one would be a notification for a domain without
> FF-A enabled, which is OK, but I am a bit more wondering about
> the first one.

The SPMC must be out of sync in both cases. I've been looking for a
window where that can happen, but I can't find any. The SPMC is called
with FFA_NOTIFICATION_BITMAP_DESTROY during domain teardown, so it
shouldn't try to deliver any notifications after that.
In the second case, the domain ID might have been reused for a domain
without FF-A enabled, but the SPMC should have known that already.
I'll add comments and prints, since these errors indicate a bug
somewhere.

>
> > +
> > +    /*
> > +     * arch.tee is freed from complete_domain_destroy() so the RCU lock
> > +     * guarantees that the data structure isn't freed while we're accessing
> > +     * it.
> > +     */
> > +    ACCESS_ONCE(ctx->notif.secure_pending) = true;
> > +
> > +    /*
> > +     * Since we're only delivering global notifications, always
> > +     * deliver to the first online vCPU. It doesn't matter
> > +     * which we choose, as long as it's available.
> > +     */
> > +    for_each_vcpu(d, v)
> > +    {
> > +        if ( is_vcpu_online(v) )
> > +        {
> > +            vgic_inject_irq(d, v, GUEST_FFA_NOTIF_PEND_INTR_ID,
> > +                            true);
> > +            break;
> > +        }
> > +    }
> > +    if ( !v )
> > +        printk(XENLOG_ERR "ffa: can't inject NPI, all vCPUs offline\n");
> > +
> > +out_unlock:
> > +    rcu_unlock_domain(d);
> > +}

Thanks,
Jens


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 09:02:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 09:02:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734760.1140827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3ae-0004ad-Qz; Mon, 03 Jun 2024 09:02:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734760.1140827; Mon, 03 Jun 2024 09:02:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3ae-0004aW-O7; Mon, 03 Jun 2024 09:02:52 +0000
Received: by outflank-mailman (input) for mailman id 734760;
 Mon, 03 Jun 2024 09:02:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mvE+=NF=epam.com=prvs=288473a65a=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sE3ad-0004aM-Th
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 09:02:51 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0c6d2e70-2188-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 11:02:49 +0200 (CEST)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 4534IhaH011508;
 Mon, 3 Jun 2024 09:02:39 GMT
Received: from eur02-am0-obe.outbound.protection.outlook.com
 (mail-am0eur02lp2237.outbound.protection.outlook.com [104.47.11.237])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3yfw1aukg7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 03 Jun 2024 09:02:39 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by PA4PR03MB7150.eurprd03.prod.outlook.com (2603:10a6:102:f1::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.16; Mon, 3 Jun
 2024 09:02:35 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7611.013; Mon, 3 Jun 2024
 09:02:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c6d2e70-2188-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RFwoM2KAG/4EOR02PdIIlpS0mlyOw43405iTrZVT2Ma1sF8ySahcP5R90FHWTVEMl7QHkFJlDCgLfzZL4LGSzUNvEBGmGo/8glWXgoXYEwB/ZhK4b6QtZkKaQYb5nIJ4VxoJElAmPYjLspFaV9fUpWBBHj+YlXSwMOJ8DB4yI+A9g2TLOBifqmJ4qVnIGNpvgKvvBqflNWfl+ahvEuqhplrPBCHgK4ta8xZkyQ7GNT/SoP8jEd7mwv7s95tYC/p82x+L9soTcHwcDnICKmI9yoRjkU+R3L55OTienyqQb92Fp4d9qr0lzKeXd9TOWOE+y23MFZSWCPuYwTKmX5iznw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=tr1UsH59RERn1t6O2Gk4PbDW3lgQNTG2cS6treYQbdE=;
 b=XB+exh92lYsV6YT7adGkiYc2U6YQoHCsFCYmr+VZMFMFEl/vwmVuooGcieaR67BHYNx5hN+JspVdbCVJKTETWweT8B9lYXNK0ij9Xp7rGuj6LR71h8FFzF5mSEuLJTIetKaRyEv8kCDVYlqUb/Zroo3utNZnJ7muTjpqWKbOjYLjBjfO6+qcEDr9FRpGZZPFJ0iRz+O3C44SclcOqM2J6L2xNVJAZemwPMZwOjhRdpL7o5VQcedIbj1G3m5RJEBwMOXbGNYgxyVYmxBG4hJFibycG2SCmglyFl6llsdBstivLwnsbHYlUbP9WCDB82ljaVt0k3H+yMyVUjrr6n7G+A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tr1UsH59RERn1t6O2Gk4PbDW3lgQNTG2cS6treYQbdE=;
 b=c+sZUGkiCbT+yWrqYRVFYcAx53fSHkw5y7zhgZyxBXz0SZOpWs4F+rp4LiBtWFwgf+0InbGxqnfg3Y8zIYCzapC1eGXe8uUe6s1Ps7AwBY2HLeWuWmuQxJSWHwrcA0HgNwztgaR8v8+8nrOGuEJJvPFMp7w7+egdXtKee9ez6cs0tHEIBRpWrb7ULfXiWjhFwKp3vs51sVi3UMQjdzQC+FMZBeNhmrS8+plbDmfXfNGZqGMB+P0BKZHWdlCxxJCT3aF6clFFdDEAwxYvpDey5I+rNV6OXHaQb5AhiAk79WY0CpmX84VwA/kGPFyA4ttIYbu23iDIy0LxFIhTNEC8Lw==
Message-ID: <f7ef0b65-4c16-4590-9122-99d61bddc544@epam.com>
Date: Mon, 3 Jun 2024 12:02:32 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 08/15] x86/vpmu: guard vmx/svm calls with
 cpu_has_{vmx,svm}
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
        xen-devel@lists.xenproject.org
References: <cover.1715761386.git.Sergiy_Kibrik@epam.com>
 <fbd17194026a7e61bac2198e3b468d498f45d064.1715761386.git.Sergiy_Kibrik@epam.com>
 <7d780eff-a64f-40dd-a377-2af05bbd2eee@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <7d780eff-a64f-40dd-a377-2af05bbd2eee@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA2P291CA0032.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:1f::21) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|PA4PR03MB7150:EE_
X-MS-Office365-Filtering-Correlation-Id: c85eea43-fcf4-40c1-f3ac-08dc83abe8bd
X-LD-Processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|366007|1800799015|376005;
X-Microsoft-Antispam-Message-Info: 
	=?utf-8?B?cUJCT09weU83bThZdHlwbkpreGIrUGdQS2NyRWFTdkxnVFVYYjRvV3Rra1Ju?=
 =?utf-8?B?MWdFN2FZQkFTbzRBcWpkSGtTbGZuZ2xQb0xOQnI1dkxsdHhra0VJcWJ2KzdY?=
 =?utf-8?B?bFNuRDQ3dGd3M0doS3RzdStqMUsyNzNkZEMxMXJjcW1neENQSXFycW1lbVNW?=
 =?utf-8?B?SlJuU1J3MDEyS0QyUy9pYThrSWd3M0dNdnFUb1gyeENicUh5NzRjL1ZHRnpn?=
 =?utf-8?B?ejZsclZ6LzhyVHFoTWszdHRMbW8rQmVRSUVCNEhEcXRzRFphZ3lybUlVbzR4?=
 =?utf-8?B?RUNvUW5rMWxzazgrbGh0Z2RxZFgxRnRsSm5YK3Avd3MyS3NWNGVQYXgwb0l2?=
 =?utf-8?B?Rmhnd2N5elpXSHpEdU9WdVFlNFVvRTBMbHMza0JqbmRYZzFqeHYxOFNiditJ?=
 =?utf-8?B?WkM4ZDRpN3lrYVlxV056cFgwWGcySnZpYWUzajVQY1YyNU1iRjFiamlsblRN?=
 =?utf-8?B?TFNFYVlqbE1NVERSVTRlUFZ0WDdKYm04OCtHcDAxeml5MVAvQmtyZUV2ZnZu?=
 =?utf-8?B?SGoxM1BVMlpXQWtCNFEvQ3FHcC9HL3VrQkh5YWJwdXNCK1k4TmFLV05lN3gv?=
 =?utf-8?B?WHRoY25HQndiaDNIVnVnTG1GU2ZIVVVPNnRLMDB4WUxNR0dEV1o4dngyMTRO?=
 =?utf-8?B?Q3ROZ3VkMXpOL1NUQXk0STlEeGZYTzBNSzJYc2JDaGhxUXR1b3VJaWlmUnhR?=
 =?utf-8?B?NTF4ODBWdWxiZjRnTjR3TjRhenhqd2YzWnBPSzFqQjI1bVpNSXFvZkhnK0Zx?=
 =?utf-8?B?QTFwMHp5V0dmc2RtYy9qMUZoVElzLzZreVNBRnRWWVp3VGgyejVBd1F4V2t6?=
 =?utf-8?B?TitkZGNOL05xSnVnY1c1NnlZVkxuVDZYQUJiTkVOdjE1Ly9xbkhUcFlmaWpq?=
 =?utf-8?B?dGJNTU85dGJSdXJsc2FzL3ZiTE1RV0xHU3dPN3pLbXpyQ0JCMzhGbXExOXlR?=
 =?utf-8?B?VXZyUEFXZk9lbEtkVEpUVGtORHU5RER3blBkd20vMjJCbFM5bzBCMzl5UE1w?=
 =?utf-8?B?UlM4VUR3Tyt6NVJiLzZrMTJTalJGM2tUVWtqcUlKYTVKWXc5Z1EwVmRNUHdF?=
 =?utf-8?B?VHkzb1V3K20xaVRwUS9TWU14NEtzbkZwZ2JNWUsyNENScytIVTdiU3hSM1ht?=
 =?utf-8?B?U1Z5UklPdytTdHN4VDU4UHJPSTVOWllEMXpad3Myc0twUmNXQ3ROZm9oL01O?=
 =?utf-8?B?M3hXSHBiUFdQVFB5T0graURJdHoxbDkyelNQcGFobnVLaEhyZWpkTlFDMlFZ?=
 =?utf-8?B?YitYSmp4bDFDMWJZSVlTMlc5K2NTM1JFaU9hTmN5a3BFUDRUMlFHMU1pRnBt?=
 =?utf-8?B?Z0JJMGN2TWVsenhTMjFJVkhDaWI5UzNNNHU5M3J6R0lyNDM4dVp3TnVlUFZN?=
 =?utf-8?B?SVpTNEFtN0RLUW1xN0xZWThpdkhHU0c5N1R3Tyt6dko1TlczcW9QWHpXemNu?=
 =?utf-8?B?RkJCNytlb09xMFU4Yk9YR3h6T3NMWnZIM0t0T0p0bENLMWRHU2hlVmZJNG0v?=
 =?utf-8?B?Q2hpYXFRazVCMktaL2Y3MnE3Z0hNcVU4d0ltU3pxbzljQXpvSTYxZjliREwx?=
 =?utf-8?B?anJuZ0w0cjJ1N3JjVFhmcW81OE9lY0NJUWt4WkdYRHZqYVZwbVdNWUt2UjhX?=
 =?utf-8?B?bDVQVVVFdDhmZUNXK2pNNFBLMStHaUlkT0hZdW1rYnFUanRwQ0lBV3MwbHZT?=
 =?utf-8?B?SkpZL0J1dmVXS29QMmVHdkJwQ1czdUNoT3lSSkJhMzVvR1A5Z01hVHpRPT0=?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR03MB9192.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(366007)(1800799015)(376005);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?utf-8?B?R1RZaVV6MExnNWVpc2VHdDhqL0VxbGs1ZW5ybmE0bHp2elFBcmtGcHZqUVlT?=
 =?utf-8?B?Mks5VElrU1JrZnNlbTZzR2drZVpMMzdlM0xBSlZSVGVaTlZsQTYxdkxPdkRD?=
 =?utf-8?B?bkc3QnN6K0lZL0NkOHRZMmhMM0J2UmlmR3NPWFlpeTlBVnk4Y0JlWmk4WU5X?=
 =?utf-8?B?UTBOS3RTMTVQMGZTSVE4TWEyVE9hWUhqV2czaE0weWZyM3hGYlFSSFhyMHhT?=
 =?utf-8?B?TTQ5M2k2bjQxQjEyM3BmRzVrQVVVUkZFTW5FbmFNUlRoeVlBdGtRWVIremx1?=
 =?utf-8?B?TzRPUWg2Q0g3S2gxVEVnSDFmNHZQbTI4djNhb3NZbFlhSnRxLzg1cDE1N2th?=
 =?utf-8?B?b3dZemhucGowQmpqUlZPK3FBclZKc3ZXc1ZDZi9ZamRFdUNsRHBEeWxnb0xl?=
 =?utf-8?B?TVJuUENmUHhNQ0k0Wk14T1Y1K2dYZW9tREtOaXRKMHJtNkYwMVp2V24vZ3l5?=
 =?utf-8?B?MXNJcGVCUEtNWGYyZFAxTks1WCtXTFk1aUU3L0ZCdFFNMDRjS25VK0tLYi9t?=
 =?utf-8?B?RUNCaGFSYUhvQXFubG1BaE9uM3NZeCsrYzRZYlVzV2dKWG5NZVkyQWhxZlpq?=
 =?utf-8?B?WWxTejI3azNzMTVSak84NWY1bEM5ZXh3YXJraW1qcjB5TjlHaytWVWcrc3BF?=
 =?utf-8?B?QTV1MERpR0NadjB6NUhqZ0dSSGR1SHF6ajlSdTdHczFuOWtCTE5CYVJnSW94?=
 =?utf-8?B?NUFoQnZkbks2bC9tY050TkQzNlE4dHhFVjZ4L0J5RDNMTjZCQWtoNnladkhs?=
 =?utf-8?B?LzlXOHlVOGxSS2hVanN3Y2VsZUIxeXF2UW9tZjVpb0ltYnVBelhMazV2Z2Yw?=
 =?utf-8?B?NUlSYWxPeGo5QzJ1S0NBaWhvN0E3azNldGRuNU1ONjFPRlF2MlNIRmVTaWMy?=
 =?utf-8?B?b0V5aHBxRzF3b3hyQXVLY3QxSzhRYWd2RjlKSGtQYjFZaE4zMWdWVk5WeVFN?=
 =?utf-8?B?RWRGZDlrZU14ZjFyamduWW5uZ2tLejlBVnRURTFmZ0UySjBYS0d5eWo0U2p3?=
 =?utf-8?B?S25GUWsyVGExekYrZjdVVzRjSFBRVVF6VXA1aVJjbk1lc1pDdWY3S3R2dU1N?=
 =?utf-8?B?Q3RhR2FWeGcwLzRrWGp6Q2EzT2NzK2JMd2tiR1Ntc3oxOW5GS212aFNUaStE?=
 =?utf-8?B?WmtabzVjeXQzSGZtY25pOUFRbUxmWGV5SnVCMEZqd1c1cUJvV1plcUQyeGJS?=
 =?utf-8?B?TnNvUHpkV0VYQ0FieW03cStVRWhQMWxzZllNSFpZUVlTcTlSa0tqdDAxMUdK?=
 =?utf-8?B?TTdzZmNES3U2RTJhV3RKZWJjeW9peEZYRzhML29EUHA1SmNNMEdsWnN4azhp?=
 =?utf-8?B?bHF0VXpYSlRWMzhLdXhTc0RUTDVtR3BBRTQvMGpqYVJFSFNLQ0FBS1g1bVlL?=
 =?utf-8?B?TkpJL2NqWnZvNjN1Z3E5R3hqbTJmY0lycHhhZUxpeEpnekd0Tmh0VlhzMjd0?=
 =?utf-8?B?V1d3am53Q3QrWFZ2NjJhODh1c1VGVkNidFhSNEduaFlJcmNGdkY4YnFvUk91?=
 =?utf-8?B?MlZqeDlNcG1HWGN1UWtDVlZrSXJJZUJtZWpFT1VVaGlWUHhPL01IQzAxOCtZ?=
 =?utf-8?B?Snk1MU5iNUhyT0ZGeHZrMmFwSFk2Rms2QXMrUjhibFpKVE1aemE4YzM2Znpo?=
 =?utf-8?B?WEZSR0F0Ri9RSGFoY2thUUprbE9wc1U2RUNZY0tkM3I4Y1V5K3dQYWR4Nk41?=
 =?utf-8?B?RUpETEJuSDRndHRBN01sZC9zMU5yeVdXWS9WZFdHTm4vVU9jTCtsNnJjb0I4?=
 =?utf-8?B?ZVJMOHFoSTN1S1h2RnJiM3NyT3BPLzAwengzMHhGVEk2aUg1ZjBGOWQzam5t?=
 =?utf-8?B?WFZ4WDRSKzBBb0wxSEhjWWFVVFRiRUxvV0hTTkQyS2ZrNjVkTk5jTDBkclFY?=
 =?utf-8?B?eHQwdS9nMUk3WnJ4dHZET3A4bFphb0xsTmtzeEtXbCtRQlQ0ZUhmM2N2bmM2?=
 =?utf-8?B?TmJ1ZDhvb0RXK1ZJSWRQL01kdDRUMXRva1RLVHAzWElZL1hjZ1IvK1J6SVRw?=
 =?utf-8?B?K0dwVzFzM3NrM2Q2M0tMWldTSG5iYUl5M1hPT2k1dlQwTE9HSHY2cSs0WkE1?=
 =?utf-8?B?N0tYaEUrcWlhU0RJR2VNdW9wY2ZlMXB0M0dKYzJMUDkvd1lZVm15Nys5L1lq?=
 =?utf-8?Q?jyjiFo5mgnh6GYW/rhh6mo6yu?=
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c85eea43-fcf4-40c1-f3ac-08dc83abe8bd
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 Jun 2024 09:02:34.8924
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LW5fUS2n+AvFTuw2TqDISOwpqha016JR7KP2eKvRHwoFUnrsMqokFdrsq4pFOwVsLUTH9/HiI0O6/+79yEmCnw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR03MB7150
X-Proofpoint-ORIG-GUID: FESrUTUyzQsXyE8DI0jhjsaoeZI_K1Zk
X-Proofpoint-GUID: FESrUTUyzQsXyE8DI0jhjsaoeZI_K1Zk
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.650,FMLib:17.12.28.16
 definitions=2024-06-03_05,2024-05-30_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0
 priorityscore=1501 spamscore=0 malwarescore=0 mlxscore=0 clxscore=1015
 adultscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 bulkscore=0 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406030075

16.05.24 14:23, Jan Beulich:
> On 15.05.2024 11:14, Sergiy Kibrik wrote:
>> --- a/xen/arch/x86/cpu/vpmu_amd.c
>> +++ b/xen/arch/x86/cpu/vpmu_amd.c
>> @@ -290,7 +290,7 @@ static int cf_check amd_vpmu_save(struct vcpu *v,  bool to_guest)
>>       context_save(v);
>>   
>>       if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
>> -         is_msr_bitmap_on(vpmu) )
>> +         is_msr_bitmap_on(vpmu) && cpu_has_svm )
>>           amd_vpmu_unset_msr_bitmap(v);
> 
> Assuming the change in the earlier patch can actually be established to be
> safe, along the lines of an earlier comment from Stefano the addition may
> want to move earlier in the overall conditionals (here and below). In fact
> I wonder whether it wouldn't be neater to have
> 
> #define is_svm_vcpu(v) (cpu_has_svm && is_hvm_vcpu(v))
> 
> at the top of the file, and then use that throughout to replace is_hvm_vcpu().
> Same on the Intel side then, obviously.
> 

sure, will do

>> @@ -288,7 +288,7 @@ static int cf_check core2_vpmu_save(struct vcpu *v, bool to_guest)
>>   
>>       /* Unset PMU MSR bitmap to trap lazy load. */
>>       if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
>> -         cpu_has_vmx_msr_bitmap )
>> +         cpu_has_vmx && cpu_has_vmx_msr_bitmap )
> 
> Aren't you elsewhere adding IS_ENABLED() to cpu_has_vmx_*, rendering this (and
> similar changes further down) redundant?
> 

indeed, I can move this change later in the series & reuse checks 
provided by cpu_has_vmx_msr_bitmap

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 09:12:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 09:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734770.1140837 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3jx-0006Yj-Q8; Mon, 03 Jun 2024 09:12:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734770.1140837; Mon, 03 Jun 2024 09:12:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE3jx-0006Yc-Md; Mon, 03 Jun 2024 09:12:29 +0000
Received: by outflank-mailman (input) for mailman id 734770;
 Mon, 03 Jun 2024 09:12:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sE3jw-0006YR-S3
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 09:12:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sE3jw-0004fk-FM; Mon, 03 Jun 2024 09:12:28 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.0.211])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sE3jw-0002ct-6B; Mon, 03 Jun 2024 09:12:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=SGNt6sgQWcRp+LOkKoBj2bt1R/frYQwbiZGXnrYXUXc=; b=JuaG2HBJAuEq3pXy2Gqkn2Psnk
	urT0SEqcXtmWfxXoDTnAW4Vk0VjNjwdu/IeNndER5v0vsAHGzYQMxHuJvblKijpDgpnN69NI92k0o
	oDeCWIrgeEALxVN8GYnC+ukp/S9poAMUYNGWI94f/PbYS36RS/IrYbX7qFONZdNt7ccc=;
Message-ID: <39045a8f-ea18-4264-b540-66645751d27d@xen.org>
Date: Mon, 3 Jun 2024 10:12:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v5 7/7] xen/arm: ffa: support notification
Content-Language: en-GB
To: Jens Wiklander <jens.wiklander@linaro.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 "patches@linaro.org" <patches@linaro.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <michal.orzel@amd.com>
References: <20240529072559.2486986-1-jens.wiklander@linaro.org>
 <20240529072559.2486986-8-jens.wiklander@linaro.org>
 <C52D6A7C-1136-4BF1-9060-600157F641F5@arm.com>
 <CAHUa44GRNQV4X61YPZTxO+tkkwJS9hoqQ07U9vP1k6n1zUt9rQ@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CAHUa44GRNQV4X61YPZTxO+tkkwJS9hoqQ07U9vP1k6n1zUt9rQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Jens,

On 03/06/2024 10:01, Jens Wiklander wrote:
> On Fri, May 31, 2024 at 4:28 PM Bertrand Marquis
> <Bertrand.Marquis@arm.com> wrote:
>>
>> Hi Jens,
>>
>>> On 29 May 2024, at 09:25, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>>>
>>> Add support for FF-A notifications, currently limited to an SP (Secure
>>> Partition) sending an asynchronous notification to a guest.
>>>
>>> Guests and Xen itself are made aware of pending notifications with an
>>> interrupt. The interrupt handler triggers a tasklet to retrieve the
>>> notifications using the FF-A ABI and deliver them to their destinations.
>>>
>>> Update ffa_partinfo_domain_init() to return error code like
>>> ffa_notif_domain_init().
>>>
>>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>>> ---
>>> v4->v5:
>>> - Move the freeing of d->arch.tee to the new TEE mediator free_domain_ctx
>>>   callback to make the context accessible during rcu_lock_domain_by_id()
>>>   from a tasklet
>>> - Initialize interrupt handlers for secondary CPUs from the new TEE mediator
>>>   init_interrupt() callback
>>> - Restore the ffa_probe() from v3, that is, remove the
>>>   presmp_initcall(ffa_init) approach and use ffa_probe() as usual now that we
>>>   have the init_interrupt callback.
>>> - A tasklet is added to handle the Schedule Receiver interrupt. The tasklet
>>>   finds each relevant domain with rcu_lock_domain_by_id() which now is enough
>>>   to guarantee that the FF-A context can be accessed.
>>> - The notification interrupt handler only schedules the notification
>>>   tasklet mentioned above
>>>
>>> v3->v4:
>>> - Add another note on FF-A limitations
>>> - Clear secure_pending in ffa_handle_notification_get() if both SP and SPM
>>>   bitmaps are retrieved
>>> - ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id FF-A
>>>   uses for Xen itself
>>> - Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id() in
>>>   notif_irq_handler() with a call to rcu_lock_live_remote_domain_by_id() via
>>>   ffa_rcu_lock_domain_by_vm_id()
>>> - Remove spinlock in struct ffa_ctx_notif and use atomic functions as needed
>>>   to access and update the secure_pending field
>>> - In notif_irq_handler(), look for the first online CPU instead of assuming
>>>   that the first CPU is online
>>> - Initialize FF-A via presmp_initcall() before the other CPUs are online,
>>>   use register_cpu_notifier() to install the interrupt handler
>>>   notif_irq_handler()
>>> - Update commit message to reflect recent updates
>>>
>>> v2->v3:
>>> - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
>>>   FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
>>> - Register the Xen SRI handler on each CPU using on_selected_cpus() and
>>>   setup_irq()
>>> - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
>>>   doesn't conflict with static SGI handlers
>>>
>>> v1->v2:
>>> - Addressing review comments
>>> - Change ffa_handle_notification_{bind,unbind,set}() to take struct
>>>   cpu_user_regs *regs as argument.
>>> - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
>>>   an error code.
>>> - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
>>> ---
>>> xen/arch/arm/tee/Makefile       |   1 +
>>> xen/arch/arm/tee/ffa.c          |  72 +++++-
>>> xen/arch/arm/tee/ffa_notif.c    | 409 ++++++++++++++++++++++++++++++++
>>> xen/arch/arm/tee/ffa_partinfo.c |   9 +-
>>> xen/arch/arm/tee/ffa_private.h  |  56 ++++-
>>> xen/arch/arm/tee/tee.c          |   2 +-
>>> xen/include/public/arch-arm.h   |  14 ++
>>> 7 files changed, 548 insertions(+), 15 deletions(-)
>>> create mode 100644 xen/arch/arm/tee/ffa_notif.c
>>>
> [...]
>>>
>>> @@ -517,8 +567,10 @@ err_rxtx_destroy:
>>> static const struct tee_mediator_ops ffa_ops =
>>> {
>>>      .probe = ffa_probe,
>>> +    .init_interrupt = ffa_notif_init_interrupt,
>>
>> With the previous change on init secondary i would suggest to
>> have a ffa_init_secondary here actually calling the ffa_notif_init_interrupt.
> 
> Yes, that makes sense. I'll update.
> 
>>
>>>      .domain_init = ffa_domain_init,
>>>      .domain_teardown = ffa_domain_teardown,
>>> +    .free_domain_ctx = ffa_free_domain_ctx,
>>>      .relinquish_resources = ffa_relinquish_resources,
>>>      .handle_call = ffa_handle_call,
>>> };
>>> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
>>> new file mode 100644
>>> index 000000000000..e8e8b62590b3
>>> --- /dev/null
>>> +++ b/xen/arch/arm/tee/ffa_notif.c
> [...]
>>> +static void notif_vm_pend_intr(uint16_t vm_id)
>>> +{
>>> +    struct ffa_ctx *ctx;
>>> +    struct domain *d;
>>> +    struct vcpu *v;
>>> +
>>> +    /*
>>> +     * vm_id == 0 means a notifications pending for Xen itself, but
>>> +     * we don't support that yet.
>>> +     */
>>> +    if ( !vm_id )
>>> +        return;
>>> +
>>> +    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
>>> +    if ( !d )
>>> +        return;
>>> +
>>> +    ctx = d->arch.tee;
>>> +    if ( !ctx )
>>> +        goto out_unlock;
>>
>> In both previous cases you are silently ignoring an interrupt
>> due to an internal error.
>> Is this something that we should trace ? maybe just debug ?
>>
>> Could you add a comment to explain why this could happen
>> (when possible) or not and the possible side effects ?
>>
>> The second one would be a notification for a domain without
>> FF-A enabled which is ok but i am a bit more wondering on
>> the first one.
> 
> The SPMC must be out of sync in both cases. I've been looking for a
> window where that can happen, but I can't find any. SPMC is called
> with FFA_NOTIFICATION_BITMAP_DESTROY during domain teardown so the
> SPMC shouldn't try to deliver any notifications after that.

I don't think I agree with the conclusion. I believe this can also 
happen in normal operation.

For example, the SPMC could have triggered the interrupt before 
FFA_NOTIFICATION_BITMAP_DESTROY but Xen didn't handle the interrupt (or 
run the tasklet) until later.

This could be at a time when the domain has been fully destroyed or 
even when...

> In the second case, the domain ID might have been reused for a domain
> without FF-A enabled, but the SPMC should have known that already.

... a new domain has been created, although the latter is rather unlikely.

So what if the new domain has FFA enabled? Is there any potential 
security issue?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 09:35:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 09:35:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734781.1140846 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE45g-0001T9-IF; Mon, 03 Jun 2024 09:34:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734781.1140846; Mon, 03 Jun 2024 09:34:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE45g-0001T2-FI; Mon, 03 Jun 2024 09:34:56 +0000
Received: by outflank-mailman (input) for mailman id 734781;
 Mon, 03 Jun 2024 09:34:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mvE+=NF=epam.com=prvs=288473a65a=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sE45f-0001Sp-Ea
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 09:34:55 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 81e6ae09-218c-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 11:34:44 +0200 (CEST)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 4537hIj6011139;
 Mon, 3 Jun 2024 09:34:33 GMT
Received: from eur03-am7-obe.outbound.protection.outlook.com
 (mail-am7eur03lp2233.outbound.protection.outlook.com [104.47.51.233])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3yfvxwutcx-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 03 Jun 2024 09:34:33 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by DU0PR03MB9866.eurprd03.prod.outlook.com (2603:10a6:10:408::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.7; Mon, 3 Jun
 2024 09:34:30 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7611.013; Mon, 3 Jun 2024
 09:34:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 81e6ae09-218c-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FcKHikcGCiDKXgMF0bQhXpfmsMfh6OXgkgFczSQWsYsWcBx1Of+xByKhGXU9/K+aVdMkuPPZWamvGcKuB3C0irCEPiEJYQ9EFqxUpCeZzu2903i3+rVMVRMCa4OjtQX6dU6hxmBBUZ5v/OXwXVeEEsPrR1jtYHsYb6ay/SbRmE7yrjhWcXDvZbWk/WnAv2YcGIZxEuFct35VQBfCeS14Rl4SE6x27P0n3N+GfjOwDQYAja9PjyT8vSi9gaYY5EBJwidI1pd8z/9DlyJwQEOH0oQbntvwhEX2uetdCuJKB6XbMnTEsrxS+94DWRUOlB8iP4gw4vDdkrXQApHWvUdElw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=dsD35ko3OK5ISRiqI5BC8VgYyyqAyo0tO2nCpLwDCqc=;
 b=btpwjIpD2NeQCnABoVXPPUMC34+Lxl5PonzcytXkrVruJonU4JQK0yi/6ebN3sINqwFnX549miLIKZvc3ACgcFsLAphaFgG4LxoB07O815sFEX4ef+DtUwHr7t6GYy/FxD8rmC7vB9CFPp/XmxtsE59Uxg9G7X79BINlO0h7c/VS2MJq89qONDqlQTnhl9b8DuLvdMlpq5nXOG3euPoZvEX2H9CGy9V4SzjLp0OCCOli96jkk+PhR7x/0uN2VeICOWyGBN6SwF2frrOOqm/WTrF/69fpmsaW05VhtuV53hRt+Qo0+uG/8uyVV42OmnbaQCtuFXBTROXv4djN/QhCAw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dsD35ko3OK5ISRiqI5BC8VgYyyqAyo0tO2nCpLwDCqc=;
 b=faClFRzpb3orsStjA8MLQ5mZNLf1aQxgaY4uHUwKD+hWdqNGwwa/tiAqcFN3SQmDb+bksinffUwcIUZDesBQRITvkVdljZjU9P71sl5UhAzdRyMKRIYTQHNwBs1KnpU8pjvwC/Zb5cxewzJRV25+Hl6uRkkF9cOFy0F2hJpTz0r7qlW9JYhuLQSG8cfAoKc2EOshKiIWsRtr8mxEU25dZ4aQspHXBHBUG95Fto4V8DRSe1R4NCKmWG430kthsn83wLu3CJkdtD25yxbjWTcFEytPyLD5Sm5pFcBtcJCbqUJvL/y+pedFGrgRDK82RZV3LVQZrZGDp+eoy4D1r6iQZw==
Message-ID: <ce7a5b2d-ed76-46c0-a86f-67a5aa4aa485@epam.com>
Date: Mon, 3 Jun 2024 12:34:26 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 14/15] iommu/vt-d: guard vmx_pi_hooks_* calls with
 cpu_has_vmx
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
        xen-devel@lists.xenproject.org
References: <cover.1715761386.git.Sergiy_Kibrik@epam.com>
 <73072e5b2ec40ad28d4bcfb9bb0870f3838bb726.1715761386.git.Sergiy_Kibrik@epam.com>
 <c43a554e-4b21-4a3b-92f4-60633f61f67e@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <c43a554e-4b21-4a3b-92f4-60633f61f67e@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA2P291CA0028.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:1f::23) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|DU0PR03MB9866:EE_
X-MS-Office365-Filtering-Correlation-Id: baa4a87a-eeed-404b-ca92-08dc83b05e3c
X-LD-Processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|1800799015|376005|366007;
X-Microsoft-Antispam-Message-Info: 
	=?utf-8?B?N3NqWHJvNHlwdTJpaS8veFVlVDRMM2RIY1V5Y2V2RTlubmFFdTg4RGZOc0gv?=
 =?utf-8?B?aW5tU1gxcXUwUmZmQmhldEtBRjluL2FiM2RHMnp4Uk0xNk55UGZWVkRxeWtp?=
 =?utf-8?B?TExmaDBlTlJTbEYyOEJISG56MHpvc290RGJEZEE3NnB1V0FHaUpHQ1BvVmc2?=
 =?utf-8?B?bXV0NUx5QnV2NTluSUVqNHpPWXh5U3Z0MkkydElUZklaTlczRmRqVFIyTlEy?=
 =?utf-8?B?SHZ5MjZ0UThqV0NJVmxrMHAvUGNma05jWXovbDI1bkVIdG1sWEs0T1o1SjlM?=
 =?utf-8?B?dFRxZjFvT3hhTmFDdHZnNnA1V1pQSjNaYmJxbXJxdzFXdG9iMTQ3Snc3UTJp?=
 =?utf-8?B?YkcxMitvM3FQYTRicy9WT3RRTFVOdFdGQ3VKWEpIYTN1VUdHU1IwSjgrcVVt?=
 =?utf-8?B?VGJrS3ZCeGovV0hCaXBJYXNOYkZPd2cvWEpUNEpJMFF0STRnbzNHNmpTTmt5?=
 =?utf-8?B?alVsUlhDQTRLdGo4b05ZakdqUFdKd2ZTWmJva3oxb3czS3NKbjJuK3FtVkU4?=
 =?utf-8?B?VEdDTXgrS3VqWmlzU3ZLK0lDR2VDbk1wVnBsVmc0ajFoc1VKSlJYdnNKUEdT?=
 =?utf-8?B?eXJaWXNScUlXNElZd0NXNkZwb2h6MTh5YVVKMjZnTHNUNXpmUDdVdU10TEJr?=
 =?utf-8?B?SGtMSnBEeTM5NFZFRDhXOWl3ZUJRVHAwU2ZrL05EK2VIdzE3YVFKZE9NR0FY?=
 =?utf-8?B?d3R2MGxhN3hyclM5bUkvczlGS0xZWUJoS1NPS3NRZ04yYmFrQXNFYjFEM01n?=
 =?utf-8?B?REVXYW9tekFicFdpaUs5dHYySlJJRGgzRXBSN3Q3TGpzWjdSODJkQmNHNER1?=
 =?utf-8?B?U0pWendJU042WHVsdnRDVE1sczkreEI5SGlyTEo0a0xKcHFrTXVPSGQ2MS9t?=
 =?utf-8?B?OGgvNlhhcUZvaGpQVnNuZHRyTGJRaXdlTE5DQ0tLTGptL1pzcUJ5V3RIRDRr?=
 =?utf-8?B?ZmZabldlSk1ka2lpOFJNVDZ6SndZOEJqMElKODM0eDI2bU95RzZXbTZ2YU5K?=
 =?utf-8?B?TmZ6eXZzbkc4TVVPampCL1o2bmhZKzhiaWZoR0dXM3VrRi9NbVl1L1BCUlNh?=
 =?utf-8?B?Zk0wUGNERHJucGhzY0FkMFlabTlMWGE1WVBKNG53emdvdGNLYVNWTnpjbU5R?=
 =?utf-8?B?dnFFZnhDMmJWdUFscmZXZ1ViSUJJVlMxVk1tL3Njd29LSHdXZ3JmMUhPUHBi?=

16.05.24 15:15, Jan Beulich:
> On 15.05.2024 11:26, Sergiy Kibrik wrote:
>> VMX posted interrupts support can now be excluded from the x86 build along
>> with the VMX code itself, but we may still want to keep the possibility to
>> use the VT-d IOMMU driver in non-HVM setups.
>> So we guard vmx_pi_hooks_{assign/deassign} with some checks for such a case.
> 
> But both functions already have a stub each. Isn't it merely a matter of
> changing when those stubs come into play?
> 

Indeed, we've got stubs already provided, so this becomes a nice one-line
change.

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 09:50:55 2024
MIME-Version: 1.0
References: <20240529072559.2486986-1-jens.wiklander@linaro.org>
 <20240529072559.2486986-8-jens.wiklander@linaro.org> <C52D6A7C-1136-4BF1-9060-600157F641F5@arm.com>
 <CAHUa44GRNQV4X61YPZTxO+tkkwJS9hoqQ07U9vP1k6n1zUt9rQ@mail.gmail.com> <39045a8f-ea18-4264-b540-66645751d27d@xen.org>
In-Reply-To: <39045a8f-ea18-4264-b540-66645751d27d@xen.org>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Mon, 3 Jun 2024 11:50:25 +0200
Message-ID: <CAHUa44Hrm7p9MyTwsp+XU+EAMPXb+bi0a7P8sbhsvz2Tobozow@mail.gmail.com>
Subject: Re: [XEN PATCH v5 7/7] xen/arm: ffa: support notification
To: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Xen-devel <xen-devel@lists.xenproject.org>, 
	"patches@linaro.org" <patches@linaro.org>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Michal Orzel <michal.orzel@amd.com>
Content-Type: text/plain; charset="UTF-8"

Hi Julien,

On Mon, Jun 3, 2024 at 11:12 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Jens,
>
> On 03/06/2024 10:01, Jens Wiklander wrote:
> > On Fri, May 31, 2024 at 4:28 PM Bertrand Marquis
> > <Bertrand.Marquis@arm.com> wrote:
> >>
> >> Hi Jens,
> >>
> >>> On 29 May 2024, at 09:25, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> >>>
> >>> Add support for FF-A notifications, currently limited to an SP (Secure
> >>> Partition) sending an asynchronous notification to a guest.
> >>>
> >>> Guests and Xen itself are made aware of pending notifications with an
> >>> interrupt. The interrupt handler triggers a tasklet to retrieve the
> >>> notifications using the FF-A ABI and deliver them to their destinations.
> >>>
> >>> Update ffa_partinfo_domain_init() to return error code like
> >>> ffa_notif_domain_init().
> >>>
> >>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> >>> ---
> >>> v4->v5:
> >>> - Move the freeing of d->arch.tee to the new TEE mediator free_domain_ctx
> >>>   callback to make the context accessible during rcu_lock_domain_by_id()
> >>>   from a tasklet
> >>> - Initialize interrupt handlers for secondary CPUs from the new TEE mediator
> >>>   init_interrupt() callback
> >>> - Restore the ffa_probe() from v3, that is, remove the
> >>>   presmp_initcall(ffa_init) approach and use ffa_probe() as usual now that we
> >>>   have the init_interrupt callback.
> >>> - A tasklet is added to handle the Schedule Receiver interrupt. The tasklet
> >>>   finds each relevant domain with rcu_lock_domain_by_id() which now is enough
> >>>   to guarantee that the FF-A context can be accessed.
> >>> - The notification interrupt handler only schedules the notification
> >>>   tasklet mentioned above
> >>>
> >>> v3->v4:
> >>> - Add another note on FF-A limitations
> >>> - Clear secure_pending in ffa_handle_notification_get() if both SP and SPM
> >>>   bitmaps are retrieved
> >>> - ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id FF-A
> >>>   uses for Xen itself
> >>> - Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id() in
> >>>   notif_irq_handler() with a call to rcu_lock_live_remote_domain_by_id() via
> >>>   ffa_rcu_lock_domain_by_vm_id()
> >>> - Remove spinlock in struct ffa_ctx_notif and use atomic functions as needed
> >>>   to access and update the secure_pending field
> >>> - In notif_irq_handler(), look for the first online CPU instead of assuming
> >>>   that the first CPU is online
> >>> - Initialize FF-A via presmp_initcall() before the other CPUs are online,
> >>>   use register_cpu_notifier() to install the interrupt handler
> >>>   notif_irq_handler()
> >>> - Update commit message to reflect recent updates
> >>>
> >>> v2->v3:
> >>> - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
> >>>   FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
> >>> - Register the Xen SRI handler on each CPU using on_selected_cpus() and
> >>>   setup_irq()
> >>> - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
> >>>   doesn't conflict with static SGI handlers
> >>>
> >>> v1->v2:
> >>> - Addressing review comments
> >>> - Change ffa_handle_notification_{bind,unbind,set}() to take struct
> >>>   cpu_user_regs *regs as argument.
> >>> - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
> >>>   an error code.
> >>> - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
> >>> ---
> >>> xen/arch/arm/tee/Makefile       |   1 +
> >>> xen/arch/arm/tee/ffa.c          |  72 +++++-
> >>> xen/arch/arm/tee/ffa_notif.c    | 409 ++++++++++++++++++++++++++++++++
> >>> xen/arch/arm/tee/ffa_partinfo.c |   9 +-
> >>> xen/arch/arm/tee/ffa_private.h  |  56 ++++-
> >>> xen/arch/arm/tee/tee.c          |   2 +-
> >>> xen/include/public/arch-arm.h   |  14 ++
> >>> 7 files changed, 548 insertions(+), 15 deletions(-)
> >>> create mode 100644 xen/arch/arm/tee/ffa_notif.c
> >>>
> > [...]
> >>>
> >>> @@ -517,8 +567,10 @@ err_rxtx_destroy:
> >>> static const struct tee_mediator_ops ffa_ops =
> >>> {
> >>>      .probe = ffa_probe,
> >>> +    .init_interrupt = ffa_notif_init_interrupt,
> >>
> >> With the previous change on init secondary i would suggest to
> >> have a ffa_init_secondary here actually calling the ffa_notif_init_interrupt.
> >
> > Yes, that makes sense. I'll update.
> >
> >>
> >>>      .domain_init = ffa_domain_init,
> >>>      .domain_teardown = ffa_domain_teardown,
> >>> +    .free_domain_ctx = ffa_free_domain_ctx,
> >>>      .relinquish_resources = ffa_relinquish_resources,
> >>>      .handle_call = ffa_handle_call,
> >>> };
> >>> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
> >>> new file mode 100644
> >>> index 000000000000..e8e8b62590b3
> >>> --- /dev/null
> >>> +++ b/xen/arch/arm/tee/ffa_notif.c
> > [...]
> >>> +static void notif_vm_pend_intr(uint16_t vm_id)
> >>> +{
> >>> +    struct ffa_ctx *ctx;
> >>> +    struct domain *d;
> >>> +    struct vcpu *v;
> >>> +
> >>> +    /*
> >>> +     * vm_id == 0 means a notification pending for Xen itself, but
> >>> +     * we don't support that yet.
> >>> +     */
> >>> +    if ( !vm_id )
> >>> +        return;
> >>> +
> >>> +    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
> >>> +    if ( !d )
> >>> +        return;
> >>> +
> >>> +    ctx = d->arch.tee;
> >>> +    if ( !ctx )
> >>> +        goto out_unlock;
> >>
> >> In both previous cases you are silently ignoring an interrupt
> >> due to an internal error.
> >> Is this something that we should trace ? maybe just debug ?
> >>
> >> Could you add a comment to explain why this could happen
> >> (when possible) or not and the possible side effects ?
> >>
> >> The second one would be a notification for a domain without
> >> FF-A enabled which is ok but i am a bit more wondering on
> >> the first one.
> >
> > The SPMC must be out of sync in both cases. I've been looking for a
> > window where that can happen, but I can't find any. SPMC is called
> > with FFA_NOTIFICATION_BITMAP_DESTROY during domain teardown so the
> > SPMC shouldn't try to deliver any notifications after that.
>
> I don't think I agree with the conclusion. I believe this can also
> happen in normal operation.
>
> For example, the SPMC could have triggered the interrupt before
> FFA_NOTIFICATION_BITMAP_DESTROY but Xen didn't handle the interrupt (or
> run the tasklet) until later.

You're right, there is a window. Delayed handling is OK since
FFA_NOTIFICATION_INFO_GET_64 is invoked from the tasklet, but a race
remains if the tasklet is suspended or another core destroys the
domain before the tasklet has called ffa_rcu_lock_domain_by_vm_id().
So far it's harmless, and I guess we can afford a print.

>
> This could be at the time where the domain has been fully destroyed or
> even when...
>
> > In the second case, the domain ID might have been reused for a domain
> > without FF-A enabled, but the SPMC should have known that already.
>
> ... a new domain has been created. Although the latter is rather unlikely.
>
> So what if the new domain has FFA enabled? Is there any potential
> security issue?

In this case, we'll inject an NPI in the guest, but when it invokes
FFA_NOTIFICATION_GET it will get accurate information from the SPMC.
The worst case is a spurious NPI. This shouldn't be a security issue.

Thanks,
Jens


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:06:04 2024
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Subject: [XEN PATCH v3 00/16] x86: make cpu virtualization support configurable
Date: Mon,  3 Jun 2024 14:05:28 +0300
Message-Id: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0

This is yet another attempt to make CPU virtualization technology support
in Xen configurable.
Currently, irrespective of the target platform, both the AMD-V and Intel VT-x
drivers are built.
The series adds three new Kconfig controls, ALTP2M, SVM and VMX, that can be
used to switch to a finer-grained configuration for a given platform and to
reduce dead code.

The code separation is done using the new config guards.

The major changes in this series, compared to v2, are the introduction and
adoption of the using_vmx & using_svm macros, and turning
arch_vcpu_ioreq_completion() into an optional callback. The latter is more of
an RFC-style change, a perspective on how things might look.
More specific changes are provided in the per-patch changelogs.

v2 series here:
https://lore.kernel.org/xen-devel/cover.1715761386.git.Sergiy_Kibrik@epam.com/

 -Sergiy

Sergiy Kibrik (9):
  x86/altp2m: check if altp2m active when giving away p2midx
  x86/p2m: guard altp2m routines
  x86: introduce CONFIG_ALTP2M Kconfig option
  x86: introduce using_{svm,vmx} macros
  x86/nestedhvm: switch to using_{svm,vmx} check
  x86/vmx: guard access to cpu_has_vmx_* in common code
  x86/vpmu: guard calls to vmx/svm functions
  ioreq: make arch_vcpu_ioreq_completion() an optional callback
  x86/vmx: replace CONFIG_HVM with CONFIG_VMX in vmx.h

Xenia Ragiadakou (7):
  x86: introduce AMD-V and Intel VT-x Kconfig options
  x86/hvm: guard AMD-V and Intel VT-x hvm_function_table initializers
  x86/p2m: guard EPT functions with using_vmx macro
  x86/traps: guard vmx specific functions with using_vmx macro
  x86/domain: guard svm specific functions with using_svm macro
  x86/oprofile: guard svm specific symbols with CONFIG_SVM
  x86/hvm: make AMD-V and Intel VT-x support configurable

 xen/arch/arm/ioreq.c                    |  6 -----
 xen/arch/x86/Kconfig                    | 31 +++++++++++++++++++++++++
 xen/arch/x86/cpu/vpmu_amd.c             |  9 +++----
 xen/arch/x86/cpu/vpmu_intel.c           | 16 +++++++------
 xen/arch/x86/domain.c                   |  8 +++----
 xen/arch/x86/hvm/Makefile               |  4 ++--
 xen/arch/x86/hvm/hvm.c                  |  6 ++---
 xen/arch/x86/hvm/ioreq.c                | 23 ------------------
 xen/arch/x86/hvm/nestedhvm.c            |  4 ++--
 xen/arch/x86/hvm/viridian/viridian.c    |  4 ++--
 xen/arch/x86/hvm/vmx/vmx.c              | 16 +++++++++++++
 xen/arch/x86/include/asm/altp2m.h       | 15 +++++++-----
 xen/arch/x86/include/asm/hvm/hvm.h      |  5 +++-
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 12 +++++-----
 xen/arch/x86/include/asm/hvm/vmx/vmx.h  |  2 +-
 xen/arch/x86/include/asm/p2m.h          | 17 +++++++++++++-
 xen/arch/x86/mm/Makefile                |  5 ++--
 xen/arch/x86/mm/hap/Makefile            |  2 +-
 xen/arch/x86/mm/p2m-basic.c             | 13 ++++++-----
 xen/arch/x86/mm/p2m-ept.c               |  2 +-
 xen/arch/x86/oprofile/op_model_athlon.c |  2 +-
 xen/arch/x86/traps.c                    | 13 ++++-------
 xen/common/ioreq.c                      |  5 +++-
 xen/include/xen/ioreq.h                 |  2 +-
 24 files changed, 132 insertions(+), 90 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:07:49 2024
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v3 01/16] x86: introduce AMD-V and Intel VT-x Kconfig options
Date: Mon,  3 Jun 2024 14:07:35 +0300
Message-Id: <72439ab1749b4bdca3c74a7d2af0254d23626797.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0

From: Xenia Ragiadakou <burzalodowa@gmail.com>

Introduce two new Kconfig options, SVM and VMX, to allow code
specific to each virtualization technology to be separated and, when not
required, stripped.
CONFIG_SVM will be used to enable virtual machine extensions on platforms that
implement AMD Virtualization Technology (AMD-V).
CONFIG_VMX will be used to enable virtual machine extensions on platforms that
implement Intel Virtualization Technology (Intel VT-x).

Both features depend on HVM support.

Since, at this point, disabling either of them would cause Xen to not compile,
the options are enabled by default when HVM is and are not selectable by the
user.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - tag added
changes in v2:
 - simplify kconfig expression to def_bool HVM
 - keep file list in Makefile in alphabetical order
changes in v1:
 - change kconfig option name AMD_SVM/INTEL_VMX -> SVM/VMX
---
 xen/arch/x86/Kconfig         | 6 ++++++
 xen/arch/x86/hvm/Makefile    | 4 ++--
 xen/arch/x86/mm/Makefile     | 3 ++-
 xen/arch/x86/mm/hap/Makefile | 2 +-
 4 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 7e03e4bc55..8c9f8431f0 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -122,6 +122,12 @@ config HVM
 
	  If unsure, say Y.
 
+config SVM
+	def_bool HVM
+
+config VMX
+	def_bool HVM
+
 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"
 	depends on HAS_AS_CET_SS
diff --git a/xen/arch/x86/hvm/Makefile b/xen/arch/x86/hvm/Makefile
index 3464191544..8434badc64 100644
--- a/xen/arch/x86/hvm/Makefile
+++ b/xen/arch/x86/hvm/Makefile
@@ -1,5 +1,5 @@
-obj-y += svm/
-obj-y += vmx/
+obj-$(CONFIG_SVM) += svm/
+obj-$(CONFIG_VMX) += vmx/
 obj-y += viridian/
 
 obj-y += asid.o
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 0803ac9297..0128ca7ab6 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -10,6 +10,7 @@ obj-$(CONFIG_MEM_SHARING) += mem_sharing.o
 obj-$(CONFIG_HVM) += nested.o
 obj-$(CONFIG_HVM) += p2m.o
 obj-y += p2m-basic.o
-obj-$(CONFIG_HVM) += p2m-ept.o p2m-pod.o p2m-pt.o
+obj-$(CONFIG_VMX) += p2m-ept.o
+obj-$(CONFIG_HVM) += p2m-pod.o p2m-pt.o
 obj-y += paging.o
 obj-y += physmap.o
diff --git a/xen/arch/x86/mm/hap/Makefile b/xen/arch/x86/mm/hap/Makefile
index 8ef54b1faa..98c8a87819 100644
--- a/xen/arch/x86/mm/hap/Makefile
+++ b/xen/arch/x86/mm/hap/Makefile
@@ -3,4 +3,4 @@ obj-y += guest_walk_2.o
 obj-y += guest_walk_3.o
 obj-y += guest_walk_4.o
 obj-y += nested_hap.o
-obj-y += nested_ept.o
+obj-$(CONFIG_VMX) += nested_ept.o
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:09:55 2024
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Tamas K Lengyel <tamas@tklengyel.com>
Subject: [XEN PATCH v3 02/16] x86/altp2m: check if altp2m active when giving away p2midx
Date: Mon,  3 Jun 2024 14:09:42 +0300
Message-Id: <9306d31184b8e714c3a10ccc6a2b2c6a80777ddb.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 C92EAAC0-2199-11EF-B728-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

Explicitly check whether altp2m is on for the domain when getting the
altp2m index.

The purpose of that is to later be able to disable altp2m support and
exclude its code from the build completely, when not supported by the
target platform (as of now it's implemented for Intel EPT only).
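The guarded-accessor idea can be sketched outside of Xen as a minimal, self-contained C program; the struct layouts and field names below are illustrative stand-ins, not Xen's real definitions:

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-ins for Xen's structures -- hypothetical, for illustration. */
struct domain { bool altp2m_on; };
struct altp2m_vcpu { uint16_t p2midx; };
struct vcpu {
    struct domain *domain;
    struct altp2m_vcpu avcpu;   /* plays the role of vcpu_altp2m(v) */
};

static inline bool altp2m_active(const struct domain *d)
{
    return d->altp2m_on;
}

/*
 * The patch's idea: hand out the per-vCPU altp2m index only when the
 * domain actually has altp2m switched on; otherwise fall back to the
 * default view (index 0).
 */
static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
{
    return altp2m_active(v->domain) ? v->avcpu.p2midx : 0;
}
```

With altp2m inactive the caller always sees the default view (index 0), so code that consumes the index needs no separate enabled/disabled branch.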

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - move altp2m_active() check inside altp2m_vcpu_idx()
 - drop changes to monitor.c
 - changed patch description
changes in v2:
 - patch description changed, removed VMX mentioning
 - guard by altp2m_active() instead of hvm_altp2m_supported()
---
 xen/arch/x86/include/asm/altp2m.h | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/include/asm/altp2m.h b/xen/arch/x86/include/asm=
/altp2m.h
index e5e59cbd68..2d36c5aa9b 100644
--- a/xen/arch/x86/include/asm/altp2m.h
+++ b/xen/arch/x86/include/asm/altp2m.h
@@ -26,10 +26,6 @@ void altp2m_vcpu_destroy(struct vcpu *v);
 int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn);
 void altp2m_vcpu_disable_ve(struct vcpu *v);
=20
-static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
-{
-    return vcpu_altp2m(v).p2midx;
-}
 #else
=20
 static inline bool altp2m_active(const struct domain *d)
@@ -38,9 +34,13 @@ static inline bool altp2m_active(const struct domain *d)
 }
=20
 /* Only declaration is needed. DCE will optimise it out when linking. */
-uint16_t altp2m_vcpu_idx(const struct vcpu *v);
 void altp2m_vcpu_disable_ve(struct vcpu *v);
=20
 #endif
=20
+static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
+{
+    return altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0;
+}
+
 #endif /* __ASM_X86_ALTP2M_H */
--=20
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:12:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:12:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734816.1140897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5bc-00018a-8v; Mon, 03 Jun 2024 11:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734816.1140897; Mon, 03 Jun 2024 11:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5bc-00018T-4d; Mon, 03 Jun 2024 11:12:00 +0000
Received: by outflank-mailman (input) for mailman id 734816;
 Mon, 03 Jun 2024 11:11:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5ba-00016a-Hs
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:11:58 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 170103e2-219a-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:11:57 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id E9B4A315F7;
 Mon,  3 Jun 2024 07:11:55 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id E1FF9315F6;
 Mon,  3 Jun 2024 07:11:55 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 0CA89315F5;
 Mon,  3 Jun 2024 07:11:52 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 170103e2-219a-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=md4/Z52UulaNXfs7DDM6MqdCK
	uKZsNgFQ+L4Xd1y2gc=; b=xQ5QuPr/nIZwTMBYH0J0WoqHEmpCXlUZni/x4ZfM5
	JtszSZP01QQEGfglBenuK9WZ4Gbrj07+g/3f0jagYh7ojJAExxJlnd3pOBa6mk0I
	dN8kib1imh0nWiGQ462buSFsyUpUw2TV3J0yO9tyD+U9YobK/5m8OH62OcukD9LA
	3U=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Tamas K Lengyel <tamas@tklengyel.com>
Subject: [XEN PATCH v3 03/16] x86/p2m: guard altp2m routines
Date: Mon,  3 Jun 2024 14:11:49 +0300
Message-Id: <acb98c1c52613555a59cd27aad853a24caef0e19.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 14E00040-219A-11EF-B614-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

Initialize and bring down altp2m only when it is supported by the platform,
e.g. VMX. Also guard p2m_altp2m_propagate_change().
The purpose of that is the possibility to disable altp2m support and
exclude its code from the build completely, when it's not supported by the
target platform.
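The conditional-init pattern used here can be sketched in plain C; the names below are hypothetical stand-ins for hvm_altp2m_supported() and p2m_init_altp2m(), not Xen's real signatures:

```c
#include <stdbool.h>

/* Hypothetical stand-ins, for illustration only. */
static bool altp2m_supported;
static int init_calls;

static int p2m_init_altp2m_stub(void)
{
    ++init_calls;   /* count how often the real init would have run */
    return 0;
}

/*
 * The pattern from the patch: call the sub-init only when the platform
 * supports the feature, and treat "nothing to do" as success (0), so the
 * caller's error handling stays unchanged.
 */
static int p2m_init_sketch(void)
{
    return altp2m_supported ? p2m_init_altp2m_stub() : 0;
}
```

Teardown mirrors this: the guarded p2m_teardown_altp2m() is simply skipped when the feature was never initialised.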

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - put hvm_altp2m_supported() first
 - rewrite changes to p2m_init() with less code
 - add tag
---
 xen/arch/x86/mm/p2m-basic.c | 9 +++++----
 xen/arch/x86/mm/p2m-ept.c   | 2 +-
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 25d27a0a9f..08007a687c 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -128,7 +128,7 @@ int p2m_init(struct domain *d)
         return rc;
     }
=20
-    rc = p2m_init_altp2m(d);
+    rc = hvm_altp2m_supported() ? p2m_init_altp2m(d) : 0;
     if ( rc )
     {
         p2m_teardown_hostp2m(d);
@@ -197,11 +197,12 @@ void p2m_final_teardown(struct domain *d)
 {
     if ( is_hvm_domain(d) )
     {
+        if ( hvm_altp2m_supported() )
+            p2m_teardown_altp2m(d);
         /*
-         * We must tear down both of them unconditionally because
-         * we initialise them unconditionally.
+         * We must tear down nestedp2m unconditionally because
+         * we initialise it unconditionally.
          */
-        p2m_teardown_altp2m(d);
         p2m_teardown_nestedp2m(d);
     }
=20
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f83610cb8c..c261ba02db 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -986,7 +986,7 @@ out:
     if ( is_epte_present(&old_entry) )
         ept_free_entry(p2m, &old_entry, target);
=20
-    if ( entry_written && p2m_is_hostp2m(p2m) )
+    if ( hvm_altp2m_supported() && entry_written && p2m_is_hostp2m(p2m) )
     {
         ret = p2m_altp2m_propagate_change(d, _gfn(gfn), mfn, order, p2mt, p2ma);
         if ( !rc )
--=20
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:14:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:14:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734821.1140906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5df-0001nc-JL; Mon, 03 Jun 2024 11:14:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734821.1140906; Mon, 03 Jun 2024 11:14:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5df-0001nT-GL; Mon, 03 Jun 2024 11:14:07 +0000
Received: by outflank-mailman (input) for mailman id 734821;
 Mon, 03 Jun 2024 11:14:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5de-0001nH-1h
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:14:06 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6235b457-219a-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 13:14:03 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 49F4F31605;
 Mon,  3 Jun 2024 07:14:02 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 42AD331604;
 Mon,  3 Jun 2024 07:14:02 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 485B631602;
 Mon,  3 Jun 2024 07:13:59 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6235b457-219a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=Z5+rlrFJma8dHA+2EwttZvn1p
	HBH3Y8OD11pFDMz7uM=; b=ZwfXE+GKfiW/cqvlAy9rG4L2Sp7zddb9G/HLsxVgH
	MQU1KWc2I54tdUx8TPwdw52cLIsnByBmEmpRvecl+kD0/w9fYvZVCQieN0ADNqV/
	AYsEoDXmGDzptKxY74OYX3SEkuCRePDu1svT45awUhdTsubIoBjIMBJ9AvpOcc5T
	yY=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Tamas K Lengyel <tamas@tklengyel.com>
Subject: [XEN PATCH v3 04/16] x86: introduce CONFIG_ALTP2M Kconfig option
Date: Mon,  3 Jun 2024 14:13:55 +0300
Message-Id: <035f63f2b92b963f2585064fa21e09e73157f9d3.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 601F0EDE-219A-11EF-A03A-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

Add a new option to make altp2m code inclusion optional.
Currently altp2m is implemented for Intel EPT only, so the option is
dependent on VMX. Also the prompt itself depends on EXPERT=y, so that the
option is available for fine-tuning, if one wants to play around with it.

Use this option instead of the more generic CONFIG_HVM option.
That implies the possibility to build HVM code without altp2m support,
hence we need to declare altp2m routines for HVM code to compile
successfully (altp2m_vcpu_initialise(), altp2m_vcpu_destroy(),
altp2m_vcpu_enable_ve()).

Also guard altp2m routines, so that they can be disabled completely in the
build -- when the target platform does not actually support altp2m
(AMD-V & ARM as of now).
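The declaration-plus-inline-stub arrangement this patch relies on can be mimicked with an ordinary preprocessor switch; `SKETCH_ALTP2M` below is a stand-in for the generated CONFIG_ALTP2M and the function is a toy, not Xen's real prototype:

```c
#include <errno.h>

/* Stand-in for the Kconfig-generated symbol; 0 mimics ALTP2M=n. */
#ifndef SKETCH_ALTP2M
#define SKETCH_ALTP2M 0
#endif

#if SKETCH_ALTP2M
/* ALTP2M=y: the real lookup would live in a separately compiled object. */
static inline int altp2m_get_effective_entry_sketch(void)
{
    return 0;
}
#else
/* ALTP2M=n: callers should never get here; report "not supported". */
static inline int altp2m_get_effective_entry_sketch(void)
{
    return -EOPNOTSUPP;
}
#endif
```

Flipping SKETCH_ALTP2M to 1 selects the "real" variant; with 0, callers get -EOPNOTSUPP and, in Xen proper, a debug build would additionally trip ASSERT_UNREACHABLE().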

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
---
changes in v3:
 - added help text
 - use conditional prompt depending on EXPERT=y
 - corrected & extended patch description
 - put a blank line before #ifdef CONFIG_ALTP2M
 - squashed in a separate patch for guarding altp2m code with the
   CONFIG_ALTP2M option
changes in v2:
 - use separate CONFIG_ALTP2M option instead of CONFIG_VMX
---
 xen/arch/x86/Kconfig               | 11 +++++++++++
 xen/arch/x86/include/asm/altp2m.h  |  5 ++++-
 xen/arch/x86/include/asm/hvm/hvm.h |  2 +-
 xen/arch/x86/include/asm/p2m.h     | 17 ++++++++++++++++-
 xen/arch/x86/mm/Makefile           |  2 +-
 5 files changed, 33 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 8c9f8431f0..4a35c43dc5 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -358,6 +358,17 @@ config REQUIRE_NX
 	  was unavailable. However, if enabled, Xen will no longer boot on
 	  any CPU which is lacking NX support.
=20
+config ALTP2M
+	bool "Alternate P2M support" if EXPERT
+	default y
+	depends on VMX
+	help
+	  Alternate-p2m allows a guest to manage multiple p2m guest physical
+	  "memory views" (as opposed to a single p2m).
+	  Useful for memory introspection.
+
+	  If unsure, stay with defaults.
+
 endmenu
=20
 source "common/Kconfig"
diff --git a/xen/arch/x86/include/asm/altp2m.h b/xen/arch/x86/include/asm=
/altp2m.h
index 2d36c5aa9b..effbef51eb 100644
--- a/xen/arch/x86/include/asm/altp2m.h
+++ b/xen/arch/x86/include/asm/altp2m.h
@@ -7,7 +7,7 @@
 #ifndef __ASM_X86_ALTP2M_H
 #define __ASM_X86_ALTP2M_H
=20
-#ifdef CONFIG_HVM
+#ifdef CONFIG_ALTP2M
=20
 #include <xen/types.h>
 #include <xen/sched.h>         /* for struct vcpu, struct domain */
@@ -34,6 +34,9 @@ static inline bool altp2m_active(const struct domain *d)
 }
=20
 /* Only declaration is needed. DCE will optimise it out when linking. */
+void altp2m_vcpu_initialise(struct vcpu *v);
+void altp2m_vcpu_destroy(struct vcpu *v);
+int altp2m_vcpu_enable_ve(struct vcpu *v, gfn_t gfn);
 void altp2m_vcpu_disable_ve(struct vcpu *v);
=20
 #endif
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/as=
m/hvm/hvm.h
index 1c01e22c8e..2ebea1a92c 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -670,7 +670,7 @@ static inline bool hvm_hap_supported(void)
 /* returns true if hardware supports alternate p2m's */
 static inline bool hvm_altp2m_supported(void)
 {
-    return hvm_funcs.caps.altp2m;
+    return IS_ENABLED(CONFIG_ALTP2M) && hvm_funcs.caps.altp2m;
 }
=20
 /* Returns true if we have the minimum hardware requirements for nested virt */
diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2=
m.h
index c1478ffc36..b247aa4c7d 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -577,10 +577,10 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
         return _gfn(mfn_x(mfn));
 }
=20
-#ifdef CONFIG_HVM
 #define AP2MGET_prepopulate true
 #define AP2MGET_query false
=20
+#ifdef CONFIG_ALTP2M
 /*
  * Looks up altp2m entry. If the entry is not found it looks up the entry in
  * hostp2m.
@@ -589,6 +589,15 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
 int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
                                p2m_type_t *t, p2m_access_t *a,
                                bool prepopulate);
+#else
+static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
+                                             gfn_t gfn, mfn_t *mfn,
+                                             p2m_type_t *t, p2m_access_t *a,
+                                             bool prepopulate)
+{
+    ASSERT_UNREACHABLE();
+    return -EOPNOTSUPP;
+}
 #endif
=20
 /* Init the datastructures for later use by the p2m code */
@@ -914,8 +923,14 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
 /* Switch alternate p2m for a single vcpu */
 bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
=20
+#ifdef CONFIG_ALTP2M
 /* Check to see if vcpu should be switched to a different p2m. */
 void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
+#else
+static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+}
+#endif
=20
 /* Flush all the alternate p2m's for a domain */
 void p2m_flush_altp2m(struct domain *d);
diff --git a/xen/arch/x86/mm/Makefile b/xen/arch/x86/mm/Makefile
index 0128ca7ab6..d7d57b8190 100644
--- a/xen/arch/x86/mm/Makefile
+++ b/xen/arch/x86/mm/Makefile
@@ -1,7 +1,7 @@
 obj-y += shadow/
 obj-$(CONFIG_HVM) += hap/
 
-obj-$(CONFIG_HVM) += altp2m.o
+obj-$(CONFIG_ALTP2M) += altp2m.o
 obj-$(CONFIG_HVM) += guest_walk_2.o guest_walk_3.o guest_walk_4.o
 obj-$(CONFIG_SHADOW_PAGING) += guest_walk_4.o
 obj-$(CONFIG_MEM_ACCESS) += mem_access.o
--=20
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:16:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:16:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734828.1140917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5fg-0002Ub-4g; Mon, 03 Jun 2024 11:16:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734828.1140917; Mon, 03 Jun 2024 11:16:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5fg-0002UU-1b; Mon, 03 Jun 2024 11:16:12 +0000
Received: by outflank-mailman (input) for mailman id 734828;
 Mon, 03 Jun 2024 11:16:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5ff-0002UO-1e
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:16:11 +0000
Received: from pb-smtp1.pobox.com (pb-smtp1.pobox.com [64.147.108.70])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acc89b00-219a-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:16:09 +0200 (CEST)
Received: from pb-smtp1.pobox.com (unknown [127.0.0.1])
 by pb-smtp1.pobox.com (Postfix) with ESMTP id BC50E31C4E;
 Mon,  3 Jun 2024 07:16:07 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp1.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp1.pobox.com (Postfix) with ESMTP id AB0ED31C4D;
 Mon,  3 Jun 2024 07:16:07 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp1.pobox.com (Postfix) with ESMTPSA id 23C6931C4C;
 Mon,  3 Jun 2024 07:16:05 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acc89b00-219a-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=qSv2cX6hBGxFZiQm0DYgbFSKi
	8pVVU7oKD2KrMQ9HqQ=; b=eCO7kCNamh1bXY4xOx7svsIvAQxFMbf/PIFORQXms
	vxt3LZ83ia6tX7rNfQciujDMhLwchyZYDPqZ7UDegZ5u5adkTvYwDDMACqD1BT4f
	fR87uRkkpc6SOKO8guuBeCMYYo9/xxxEgCAlxO0ZL822fOgggnGdMFZ9Ktj1cN/K
	OA=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Subject: [XEN PATCH v3 05/16] x86: introduce using_{svm,vmx} macros
Date: Mon,  3 Jun 2024 14:16:02 +0300
Message-Id: <9860c4b497038abda71084ea3bce698eab3b277c.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 ABBD3118-219A-11EF-91F9-B84BEB2EC81B-90055647!pb-smtp1.pobox.com
Content-Transfer-Encoding: quoted-printable

As we now have SVM/VMX config options for enabling/disabling these features
completely in the build, we need some build-time checks to ensure that
vmx/svm code can be used and things compile. The macros cpu_has_{svm,vmx}
used to be doing such checks at runtime, however they do not check whether
SVM/VMX support is enabled in the build.

Also cpu_has_{svm,vmx} can potentially be called from a non-{VMX,SVM} build
yet running on a {VMX,SVM}-enabled CPU, so it would correctly indicate that
VMX/SVM is indeed supported by the CPU, even though the code to drive it
can't be used.

The new macros using_{vmx,svm} indicate that both the CPU _and_ the build
provide support for the corresponding technology, while cpu_has_{vmx,svm}
still remains for informational runtime purposes, just as their naming
suggests.
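That combination can be sketched in a self-contained way; the `_sketch` names below are stand-ins for CONFIG_VMX/CONFIG_SVM and cpu_has_{vmx,svm}:

```c
#include <stdbool.h>

/* Build-time switches (stand-ins for IS_ENABLED(CONFIG_VMX/CONFIG_SVM)). */
#define SKETCH_CFG_VMX 1
#define SKETCH_CFG_SVM 0

/* Runtime CPU capability bits (stand-ins for cpu_has_{vmx,svm}). */
static bool cpu_has_vmx_sketch = true;
static bool cpu_has_svm_sketch = true;

/*
 * The patch's combination: a feature is usable only when the build
 * includes its code AND the CPU advertises the capability.
 */
#define using_vmx_sketch (SKETCH_CFG_VMX && cpu_has_vmx_sketch)
#define using_svm_sketch (SKETCH_CFG_SVM && cpu_has_svm_sketch)
```

Because the build-time constant appears first, the whole expression folds to 0 in a VMX=n (or SVM=n) build, so code guarded by it can be eliminated at link time.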

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
---
Here I've followed Jan's suggestion on not touching cpu_has_{vmx,svm} but
adding separate macros for solving build problems, and then using these
where required.
---
changes in v3:
 - introduce separate macros instead of modifying behaviour of
   cpu_has_{vmx,svm}
---
 xen/arch/x86/include/asm/hvm/hvm.h | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/as=
m/hvm/hvm.h
index 2ebea1a92c..778b93df5c 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -26,6 +26,9 @@ extern bool opt_hvm_fep;
 #define opt_hvm_fep 0
 #endif
=20
+#define using_vmx ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx )
+#define using_svm ( IS_ENABLED(CONFIG_SVM) && cpu_has_svm )
+
 /* Interrupt acknowledgement sources. */
 enum hvm_intsrc {
     hvm_intsrc_none,
--=20
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:18:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:18:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734833.1140927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5hh-00033s-GW; Mon, 03 Jun 2024 11:18:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734833.1140927; Mon, 03 Jun 2024 11:18:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5hh-00033l-Co; Mon, 03 Jun 2024 11:18:17 +0000
Received: by outflank-mailman (input) for mailman id 734833;
 Mon, 03 Jun 2024 11:18:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5hg-00033f-LU
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:18:16 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f846ebcf-219a-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:18:15 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 4FDC8316C9;
 Mon,  3 Jun 2024 07:18:14 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 48B78316C8;
 Mon,  3 Jun 2024 07:18:14 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 648BD316C7;
 Mon,  3 Jun 2024 07:18:11 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f846ebcf-219a-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=EZVyjSsSWDLxpBQraUn0Nt2fs
	8KnFVfS9druGNOmUVE=; b=pbSAKgtpWtY8TrguN6fgy7NhvG92n8MDpX6mO1PPp
	rIXC07gSRsPhMVciOQJ0Ib6qxXVD7jS8SaHHpq5gKJkOWI5aAZM2iNNu6IKgTNom
	GiJ+DqpP/87ux7lXrKbFsforoA887fpCUpj9/ohY3IshwMzMzW1qE3mlJrQ3Ortu
	rc=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Subject: [XEN PATCH v3 06/16] x86/nestedhvm: switch to using_{svm,vmx} check
Date: Mon,  3 Jun 2024 14:18:07 +0300
Message-Id: <63ba1d4e043315693957093688670d36ffa65d28.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 F664D676-219A-11EF-A6B0-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

Use using_{svm,vmx} instead of cpu_has_{svm,vmx}, which not only checks
whether the CPU supports the corresponding virtualization technology, but
also whether it is supported by the build configuration.

This fixes the build when VMX=n or SVM=n, because then the
start_nested_{svm,vmx} routine(s) are not available.
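The resulting dispatch can be illustrated with a self-contained sketch (again with hypothetical stand-ins); using a plain `if` on a build-time constant keeps both branches visible to the compiler while dead-code elimination drops the disabled one:

```c
/* Stand-ins for the combined build/runtime gates using_{vmx,svm}. */
#define SKETCH_USING_VMX 1
#define SKETCH_USING_SVM 0

/* 0 = none, 1 = vmx, 2 = svm -- which nested backend got started. */
static int nested_backend;

static void start_nested_vmx_sketch(void) { nested_backend = 1; }
static void start_nested_svm_sketch(void) { nested_backend = 2; }

/*
 * Dispatch as in nestedhvm_setup(): pick at most one backend, and only
 * one whose code is actually present in the build.
 */
static void nestedhvm_setup_sketch(void)
{
    if ( SKETCH_USING_VMX )
        start_nested_vmx_sketch();
    else if ( SKETCH_USING_SVM )
        start_nested_svm_sketch();
}
```

With SKETCH_USING_SVM fixed at 0, the SVM branch is unreachable and its call can be discarded, which is exactly what lets the build succeed when start_nested_svm() is not compiled in.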

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 xen/arch/x86/hvm/nestedhvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/nestedhvm.c b/xen/arch/x86/hvm/nestedhvm.c
index 451c4da6d4..008dddf801 100644
--- a/xen/arch/x86/hvm/nestedhvm.c
+++ b/xen/arch/x86/hvm/nestedhvm.c
@@ -155,9 +155,9 @@ static int __init cf_check nestedhvm_setup(void)
      * done, so that if (for example) HAP is disabled, nested virt is
      * disabled as well.
      */
-    if ( cpu_has_vmx )
+    if ( using_vmx )
         start_nested_vmx(&hvm_funcs);
-    else if ( cpu_has_svm )
+    else if ( using_svm )
         start_nested_svm(&hvm_funcs);
=20
     return 0;
--=20
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:20:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:20:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734837.1140936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5jk-0004p3-QZ; Mon, 03 Jun 2024 11:20:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734837.1140936; Mon, 03 Jun 2024 11:20:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5jk-0004ow-O0; Mon, 03 Jun 2024 11:20:24 +0000
Received: by outflank-mailman (input) for mailman id 734837;
 Mon, 03 Jun 2024 11:20:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5jj-0004on-Lq
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:20:23 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43d6d3b1-219b-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:20:22 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 3430E3179A;
 Mon,  3 Jun 2024 07:20:21 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 2CF4B31799;
 Mon,  3 Jun 2024 07:20:21 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 8A66531798;
 Mon,  3 Jun 2024 07:20:17 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43d6d3b1-219b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=n/Gri7cpn2cravDMKsNg30rBQ
	5Jg6W+aMAIEGe32Evo=; b=dCF9a04QKrLuNFjVNwZyYfAhQz0S4I/KbajsoeJh3
	dRsxghoqoirdKVKmb3iHcUMk/laThMhELQBToZFd3PAlcQCA5LDC49GDx++MbNXr
	5rxZmYK/OV+VCAlaRiPh61rwmid4GTfoLoPZ+5SQbtA8xjkCvvcC4NuXzQPQCDQM
	5c=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v3 07/16] x86/hvm: guard AMD-V and Intel VT-x hvm_function_table initializers
Date: Mon,  3 Jun 2024 14:20:14 +0300
Message-Id: <25d4ade03f22ae4eb260af3eae5f48528f2e3ca8.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 4196CF28-219B-11EF-BE08-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

From: Xenia Ragiadakou <burzalodowa@gmail.com>

Since start_svm() is AMD-V specific and start_vmx() is Intel VT-x specific,
either can be excluded from the build entirely via the CONFIG_SVM or
CONFIG_VMX options respectively. Hence we have to check explicitly whether
they are available, using the dedicated using_{svm,vmx} macros.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
changes in v3:
 - use using_{svm,vmx} macro instead of IS_ENABLED(CONFIG_{SVM,VMX})
 - updated description
---
 xen/arch/x86/hvm/hvm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8334ab1711..7b8679bcd8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -155,9 +155,9 @@ static int __init cf_check hvm_enable(void)
 {
     const struct hvm_function_table *fns = NULL;
 
-    if ( cpu_has_vmx )
+    if ( using_vmx )
         fns = start_vmx();
-    else if ( cpu_has_svm )
+    else if ( using_svm )
         fns = start_svm();
 
     if ( fns == NULL )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:22:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:22:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734844.1140946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5ln-0005dl-5h; Mon, 03 Jun 2024 11:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734844.1140946; Mon, 03 Jun 2024 11:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5ln-0005de-3A; Mon, 03 Jun 2024 11:22:31 +0000
Received: by outflank-mailman (input) for mailman id 734844;
 Mon, 03 Jun 2024 11:22:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5ll-0005dC-8v
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:22:29 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ef92e04-219b-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:22:28 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 56746317B7;
 Mon,  3 Jun 2024 07:22:27 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 4EE87317B6;
 Mon,  3 Jun 2024 07:22:27 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 20D17317B5;
 Mon,  3 Jun 2024 07:22:23 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ef92e04-219b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=zTRJcrxkkTEClYsuRZF33f4dO
	2xkoYPQYVqoFLYitDA=; b=OhEv95fhpMwMEbss3+HIzhjyAfx9PiUxSJglyEpM+
	6D5t5J+swdquc1xTU6Y00z23wh0hxZpAEa87x5eZNVIOtAtzww0XJjXZyW+4tZaI
	6sRSdOgdRU9GwNM7tB9XIU+m29ZAIbtJLp1KqcTuvg3On2s4U381gi+xQZhXyqo+
	FE=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v3 08/16] x86/p2m: guard EPT functions with using_vmx macro
Date: Mon,  3 Jun 2024 14:22:21 +0300
Message-Id: <52c64ffd589f289fda271422fee1e957f94aac6e.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 8D074668-219B-11EF-92C9-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

From: Xenia Ragiadakou <burzalodowa@gmail.com>

Replace the cpu_has_vmx check with using_vmx, so that the presence of the
functions ept_p2m_init() and ept_p2m_uninit() in the build is checked as
well. Since the Intel EPT implementation currently depends on the
CONFIG_VMX option, these functions are unavailable when VMX is off.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
changes in v3:
 - using_vmx instead of IS_ENABLED(CONFIG_VMX)
 - updated description
---
 xen/arch/x86/mm/p2m-basic.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/mm/p2m-basic.c b/xen/arch/x86/mm/p2m-basic.c
index 08007a687c..442284fb40 100644
--- a/xen/arch/x86/mm/p2m-basic.c
+++ b/xen/arch/x86/mm/p2m-basic.c
@@ -40,7 +40,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
     p2m_pod_init(p2m);
     p2m_nestedp2m_init(p2m);
 
-    if ( hap_enabled(d) && cpu_has_vmx )
+    if ( hap_enabled(d) && using_vmx )
         ret = ept_p2m_init(p2m);
     else
         p2m_pt_init(p2m);
@@ -72,7 +72,7 @@ struct p2m_domain *p2m_init_one(struct domain *d)
 void p2m_free_one(struct p2m_domain *p2m)
 {
     p2m_free_logdirty(p2m);
-    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
+    if ( hap_enabled(p2m->domain) && using_vmx )
         ept_p2m_uninit(p2m);
     free_cpumask_var(p2m->dirty_cpumask);
     xfree(p2m);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:24:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734851.1140958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5np-0006BJ-IE; Mon, 03 Jun 2024 11:24:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734851.1140958; Mon, 03 Jun 2024 11:24:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5np-0006BC-Dy; Mon, 03 Jun 2024 11:24:37 +0000
Received: by outflank-mailman (input) for mailman id 734851;
 Mon, 03 Jun 2024 11:24:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5nn-0006B5-VS
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:24:36 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id da47167b-219b-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:24:34 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 4FC05317C7;
 Mon,  3 Jun 2024 07:24:33 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 48509317C6;
 Mon,  3 Jun 2024 07:24:33 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 60861317C5;
 Mon,  3 Jun 2024 07:24:30 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da47167b-219b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=sEkjfdgPA5uPcRwW5X9CnC8Lf
	bf4U4pmRrO5A9YryUE=; b=LhQL90P2fpJ9a4IStB+H/oyPE2fx62KY9i5DwDiMg
	Z+W52+FFuMDE+/mmn/HYaCFbzwuwdinLyF1On10RA/n8ShUR2ePaVl5pWjQYXVKB
	+gspc13lb/uy8RWM0LSBNNVLKDRjm/BXsUnK0DLuXxb8ikWHMYl3Ynj+0jradqkX
	Fo=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v3 09/16] x86/traps: guard vmx specific functions with using_vmx macro
Date: Mon,  3 Jun 2024 14:24:27 +0300
Message-Id: <63045d707485c818af5eafa45752e60405ecf887.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 D84958B4-219B-11EF-B94D-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

From: Xenia Ragiadakou <burzalodowa@gmail.com>

Replace the cpu_has_vmx check with using_vmx, so that not only VMX support
in the CPU is checked, but also the presence of the functions
vmx_vmcs_enter() and vmx_vmcs_exit() in the build.

Also, since using_vmx checks CONFIG_VMX, which in turn depends on
CONFIG_HVM, we can drop the #ifdef CONFIG_HVM lines around it.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
changes in v3:
 - using_vmx instead of IS_ENABLED(CONFIG_VMX)
 - updated description
---
 xen/arch/x86/traps.c | 8 ++------
 1 file changed, 2 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 9906e874d5..a81f3cf57c 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -676,7 +676,6 @@ void vcpu_show_execution_state(struct vcpu *v)
 
     vcpu_pause(v); /* acceptably dangerous */
 
-#ifdef CONFIG_HVM
     /*
      * For VMX special care is needed: Reading some of the register state will
      * require VMCS accesses. Engaging foreign VMCSes involves acquiring of a
@@ -684,12 +683,11 @@ void vcpu_show_execution_state(struct vcpu *v)
      * region. Despite this being a layering violation, engage the VMCS right
      * here. This then also avoids doing so several times in close succession.
      */
-    if ( cpu_has_vmx && is_hvm_vcpu(v) )
+    if ( using_vmx && is_hvm_vcpu(v) )
     {
         ASSERT(!in_irq());
         vmx_vmcs_enter(v);
     }
-#endif
 
     /* Prevent interleaving of output. */
     flags = console_lock_recursive_irqsave();
@@ -714,10 +712,8 @@ void vcpu_show_execution_state(struct vcpu *v)
         console_unlock_recursive_irqrestore(flags);
     }
 
-#ifdef CONFIG_HVM
-    if ( cpu_has_vmx && is_hvm_vcpu(v) )
+    if ( using_vmx && is_hvm_vcpu(v) )
         vmx_vmcs_exit(v);
-#endif
 
     vcpu_unpause(v);
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:26:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:26:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734856.1140967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5po-0006j8-Sx; Mon, 03 Jun 2024 11:26:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734856.1140967; Mon, 03 Jun 2024 11:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5po-0006j1-Q4; Mon, 03 Jun 2024 11:26:40 +0000
Received: by outflank-mailman (input) for mailman id 734856;
 Mon, 03 Jun 2024 11:26:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5po-0006it-Ay
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:26:40 +0000
Received: from pb-smtp2.pobox.com (pb-smtp2.pobox.com [64.147.108.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 23a22ae2-219c-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:26:38 +0200 (CEST)
Received: from pb-smtp2.pobox.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 8F8752A8E7;
 Mon,  3 Jun 2024 07:26:36 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp2.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 8748B2A8E6;
 Mon,  3 Jun 2024 07:26:36 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp2.pobox.com (Postfix) with ESMTPSA id A7C7F2A8E5;
 Mon,  3 Jun 2024 07:26:35 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23a22ae2-219c-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=C9i/Ss0viC+/vS6vrwKhCdgvC
	fQtFhYi+LVOUd2WOmY=; b=F827fVDOG7PTj68DkUpBmH/hc3C5ZKUSls6qCAvlV
	5zMwJ35Zcgfn39w5fuxXIteq/W7dscCVEfunGa7ZWDR5GJbMEWNYxVsoeQgVwZYi
	MsiK/LWrlDAEcJwXnAhBF2byjltBHb8QykidMNzP3NnASUbnqybM9PiUJL7VczeK
	Rc=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v3 10/16] x86/domain: guard svm specific functions with using_svm macro
Date: Mon,  3 Jun 2024 14:26:33 +0300
Message-Id: <e03693d1daa386a31e09794b0167d282df5a8bfe.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 22F79E52-219C-11EF-8278-6488940A682E-90055647!pb-smtp2.pobox.com
Content-Transfer-Encoding: quoted-printable

From: Xenia Ragiadakou <burzalodowa@gmail.com>

Replace the cpu_has_svm check with using_svm, so that not only SVM support
in the CPU is checked, but also the presence of the functions
svm_load_segs() and svm_load_segs_prefetch() in the build.

Since using_svm checks CONFIG_SVM, which in turn depends on CONFIG_HVM,
the explicit CONFIG_HVM guards can be dropped.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
changes in v3:
 - using_svm instead of IS_ENABLED(CONFIG_SVM)
 - updated description
---
 xen/arch/x86/domain.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 536542841e..a2f19f8b46 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1731,11 +1731,9 @@ static void load_segments(struct vcpu *n)
         if ( !(n->arch.flags & TF_kernel_mode) )
             SWAP(gsb, gss);
 
-#ifdef CONFIG_HVM
-        if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 )
+        if ( using_svm && (uregs->fs | uregs->gs) <= 3 )
             fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
                                        n->arch.pv.fs_base, gsb, gss);
-#endif
     }
 
     if ( !fs_gs_done )
@@ -2048,9 +2046,9 @@ static void __context_switch(void)
 
     write_ptbase(n);
 
-#if defined(CONFIG_PV) && defined(CONFIG_HVM)
+#if defined(CONFIG_PV)
     /* Prefetch the VMCB if we expect to use it later in the context switch */
-    if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
+    if ( using_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
         svm_load_segs_prefetch();
 #endif
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:28:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:28:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734862.1140976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5rq-0007Hu-7j; Mon, 03 Jun 2024 11:28:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734862.1140976; Mon, 03 Jun 2024 11:28:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5rq-0007Hn-4b; Mon, 03 Jun 2024 11:28:46 +0000
Received: by outflank-mailman (input) for mailman id 734862;
 Mon, 03 Jun 2024 11:28:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5rp-0007Hh-F2
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:28:45 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6f0f3d40-219c-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:28:44 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 305C4317E8;
 Mon,  3 Jun 2024 07:28:43 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 29BF7317E7;
 Mon,  3 Jun 2024 07:28:43 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 0BB53317E6;
 Mon,  3 Jun 2024 07:28:39 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f0f3d40-219c-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=/ajn3Y23Ptep8PMnKlNl2X5Wf
	quTjK0FGfT5lFyKeSU=; b=hPCH2kf9dP1RWVFD/0rd/7nEKY6u6LPWdt1rwCXQL
	mnHW4Xy7S1YSlJXBpQENZBrkHb6btgLOBg5F4GNIirccVERA9UcSZUq0XjRUvdQA
	WbHb9IxW+rwIY95fN4I5rFAjlAUbmZH7zBnLGa1/gZqQCC+HoPHoZojpNHcrl5QX
	XE=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v3 11/16] x86/oprofile: guard svm specific symbols with CONFIG_SVM
Date: Mon,  3 Jun 2024 14:28:36 +0300
Message-Id: <cdd7b4fe1e738007e37f5bae99ab8fc39bf85ba7.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 6D175BF8-219C-11EF-9E81-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

From: Xenia Ragiadakou <burzalodowa@gmail.com>

The symbol svm_stgi_label is AMD-V specific, so guard its usage in common
code with CONFIG_SVM.

Since CONFIG_SVM depends on CONFIG_HVM, checking CONFIG_SVM alone is
sufficient. Also, use #ifdef instead of #if.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/oprofile/op_model_athlon.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/oprofile/op_model_athlon.c b/xen/arch/x86/oprofile/op_model_athlon.c
index 69fd3fcc86..a9c7b87d67 100644
--- a/xen/arch/x86/oprofile/op_model_athlon.c
+++ b/xen/arch/x86/oprofile/op_model_athlon.c
@@ -320,7 +320,7 @@ static int cf_check athlon_check_ctrs(
 	struct vcpu *v = current;
 	unsigned int const nr_ctrs = model->num_counters;
 
-#if CONFIG_HVM
+#ifdef CONFIG_SVM
 	struct cpu_user_regs *guest_regs = guest_cpu_user_regs();
 
 	if (!guest_mode(regs) &&
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:31:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:31:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734869.1140987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5tv-0000rd-N0; Mon, 03 Jun 2024 11:30:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734869.1140987; Mon, 03 Jun 2024 11:30:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5tv-0000rW-KM; Mon, 03 Jun 2024 11:30:55 +0000
Received: by outflank-mailman (input) for mailman id 734869;
 Mon, 03 Jun 2024 11:30:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5tu-0000qE-1E
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:30:54 +0000
Received: from pb-smtp20.pobox.com (pb-smtp20.pobox.com [173.228.157.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bac16c16-219c-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 13:30:51 +0200 (CEST)
Received: from pb-smtp20.pobox.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id BBBF3273FA;
 Mon,  3 Jun 2024 07:30:49 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp20.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id B3434273F9;
 Mon,  3 Jun 2024 07:30:49 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp20.pobox.com (Postfix) with ESMTPSA id C8FBC273F8;
 Mon,  3 Jun 2024 07:30:46 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bac16c16-219c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=tFMcVJrjymE8LFOeCd5ZgK9a6
	R2CYRHptQF3Am22SyM=; b=DVOuHKfuf0jbFTvrk+Z2anRLLaObxi+C5W7V+h8gv
	zNbmqz7gugEO8GIumxGLcfqVDrzkisKVsSppqm4R1EY7O3k8D0sA1jNXvvXbKyU+
	h2vNbKO7t+PyeEcOuldhUpDh/dr675JR9/E7Z6Z7lj7NC/SnsddtuOZmrfL/Keq8
	KA=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Paul Durrant <paul@xen.org>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [XEN PATCH v3 12/16] x86/vmx: guard access to cpu_has_vmx_* in common code
Date: Mon,  3 Jun 2024 14:30:43 +0300
Message-Id: <1645c0d4a5aae7b53cfb166ac10235e12ae4dbb1.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 B8A7CE9A-219C-11EF-A069-ACC938F0AE34-90055647!pb-smtp20.pobox.com
Content-Transfer-Encoding: quoted-printable

There are several places in common code, outside of arch/x86/hvm/vmx, where
cpu_has_vmx_* macros are accessed without first checking whether VMX is
supported. These macros rely on global variables defined in VMX code, so
when VMX support is disabled in the build, accesses to those variables turn
into build failures.

To avoid these failures, perform a build-time check before accessing the
global variables, so that dead code elimination (DCE) removes the
references to them.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - using_vmx instead of cpu_has_vmx
 - clarify description on why this change needed
changes in v2:
 - do not touch SVM code and macros
 - drop vmx_ctrl_has_feature()
 - guard cpu_has_vmx_* macros in common code instead
changes in v1:
 - introduced helper routine vmx_ctrl_has_feature() and used it for all
   cpu_has_vmx_* macros
---
 xen/arch/x86/hvm/hvm.c                  |  2 +-
 xen/arch/x86/hvm/viridian/viridian.c    |  4 ++--
 xen/arch/x86/include/asm/hvm/vmx/vmcs.h | 12 ++++++------
 xen/arch/x86/traps.c                    |  5 +++--
 4 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7b8679bcd8..af45c5ed8c 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5197,7 +5197,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
     {
         case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON:
         case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF:
-            if ( !cpu_has_monitor_trap_flag )
+            if ( !using_vmx || !cpu_has_monitor_trap_flag )
                 return -EOPNOTSUPP;
             break;
         default:
diff --git a/xen/arch/x86/hvm/viridian/viridian.c b/xen/arch/x86/hvm/viri=
dian/viridian.c
index 0496c52ed5..59e7e7955a 100644
--- a/xen/arch/x86/hvm/viridian/viridian.c
+++ b/xen/arch/x86/hvm/viridian/viridian.c
@@ -196,7 +196,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint=
32_t leaf,
         res->a =3D CPUID4A_RELAX_TIMER_INT;
         if ( viridian_feature_mask(d) & HVMPV_hcall_remote_tlb_flush )
             res->a |=3D CPUID4A_HCALL_REMOTE_TLB_FLUSH;
-        if ( !cpu_has_vmx_apic_reg_virt )
+        if ( !using_vmx || !cpu_has_vmx_apic_reg_virt )
             res->a |=3D CPUID4A_MSR_BASED_APIC;
         if ( viridian_feature_mask(d) & HVMPV_hcall_ipi )
             res->a |=3D CPUID4A_SYNTHETIC_CLUSTER_IPI;
@@ -236,7 +236,7 @@ void cpuid_viridian_leaves(const struct vcpu *v, uint=
32_t leaf,
=20
     case 6:
         /* Detected and in use hardware features. */
-        if ( cpu_has_vmx_virtualize_apic_accesses )
+        if ( using_vmx && cpu_has_vmx_virtualize_apic_accesses )
             res->a |=3D CPUID6A_APIC_OVERLAY;
         if ( cpu_has_vmx_msr_bitmap || (read_efer() & EFER_SVME) )
             res->a |=3D CPUID6A_MSR_BITMAPS;
diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h b/xen/arch/x86/inclu=
de/asm/hvm/vmx/vmcs.h
index 58140af691..713088b8d3 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmcs.h
@@ -306,7 +306,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_vnmi \
     (vmx_pin_based_exec_control & PIN_BASED_VIRTUAL_NMIS)
 #define cpu_has_vmx_msr_bitmap \
-    (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BITMAP)
+    (using_vmx && vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_MSR_BI=
TMAP)
 #define cpu_has_vmx_secondary_exec_control \
     (vmx_cpu_based_exec_control & CPU_BASED_ACTIVATE_SECONDARY_CONTROLS)
 #define cpu_has_vmx_tertiary_exec_control \
@@ -316,7 +316,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_dt_exiting \
     (vmx_secondary_exec_control & SECONDARY_EXEC_DESCRIPTOR_TABLE_EXITIN=
G)
 #define cpu_has_vmx_rdtscp \
-    (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_RDTSCP)
+    (using_vmx && vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_RDT=
SCP)
 #define cpu_has_vmx_vpid \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VPID)
 #define cpu_has_monitor_trap_flag \
@@ -333,7 +333,7 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_ple \
     (vmx_secondary_exec_control & SECONDARY_EXEC_PAUSE_LOOP_EXITING)
 #define cpu_has_vmx_invpcid \
-    (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_INVPCID)
+    (using_vmx && vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_INV=
PCID)
 #define cpu_has_vmx_apic_reg_virt \
     (vmx_secondary_exec_control & SECONDARY_EXEC_APIC_REGISTER_VIRT)
 #define cpu_has_vmx_virtual_intr_delivery \
@@ -347,14 +347,14 @@ extern u64 vmx_ept_vpid_cap;
 #define cpu_has_vmx_vmfunc \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VM_FUNCTIONS)
 #define cpu_has_vmx_virt_exceptions \
-    (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIRT_EXCEPTIONS)
+    (using_vmx && vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_VIR=
T_EXCEPTIONS)
 #define cpu_has_vmx_pml \
     (vmx_secondary_exec_control & SECONDARY_EXEC_ENABLE_PML)
 #define cpu_has_vmx_mpx \
-    ((vmx_vmexit_control & VM_EXIT_CLEAR_BNDCFGS) && \
+    (using_vmx && (vmx_vmexit_control & VM_EXIT_CLEAR_BNDCFGS) && \
      (vmx_vmentry_control & VM_ENTRY_LOAD_BNDCFGS))
 #define cpu_has_vmx_xsaves \
-    (vmx_secondary_exec_control & SECONDARY_EXEC_XSAVES)
+    (using_vmx && vmx_secondary_exec_control & SECONDARY_EXEC_XSAVES)
 #define cpu_has_vmx_tsc_scaling \
     (vmx_secondary_exec_control & SECONDARY_EXEC_TSC_SCALING)
 #define cpu_has_vmx_bus_lock_detection \
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index a81f3cf57c..c2f29fc9a4 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1130,7 +1130,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
         if ( !is_hvm_domain(d) || subleaf != 0 )
             break;
 
-        if ( cpu_has_vmx_apic_reg_virt )
+        if ( using_vmx && cpu_has_vmx_apic_reg_virt )
             res->a |= XEN_HVM_CPUID_APIC_ACCESS_VIRT;
 
         /*
@@ -1139,7 +1139,8 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
          * and wrmsr in the guest will run without VMEXITs (see
          * vmx_vlapic_msr_changed()).
          */
-        if ( cpu_has_vmx_virtualize_x2apic_mode &&
+        if ( using_vmx &&
+             cpu_has_vmx_virtualize_x2apic_mode &&
              cpu_has_vmx_apic_reg_virt &&
              cpu_has_vmx_virtual_intr_delivery )
            res->a |= XEN_HVM_CPUID_X2APIC_VIRT;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:33:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:33:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734878.1140997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5vv-0001tz-2M; Mon, 03 Jun 2024 11:32:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734878.1140997; Mon, 03 Jun 2024 11:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5vu-0001ts-VJ; Mon, 03 Jun 2024 11:32:58 +0000
Received: by outflank-mailman (input) for mailman id 734878;
 Mon, 03 Jun 2024 11:32:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5vt-0001tR-Je
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:32:57 +0000
Received: from pb-smtp2.pobox.com (pb-smtp2.pobox.com [64.147.108.71])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 03e42810-219d-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 13:32:54 +0200 (CEST)
Received: from pb-smtp2.pobox.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 9F1E42A952;
 Mon,  3 Jun 2024 07:32:52 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp2.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 954432A951;
 Mon,  3 Jun 2024 07:32:52 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp2.pobox.com (Postfix) with ESMTPSA id C5AB22A950;
 Mon,  3 Jun 2024 07:32:51 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03e42810-219d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=qFt1GhHtKwmXz/WKqzqf/F4up
	h4obWVv/1gPxRugSAk=; b=eb/w/s2vBAl1xjuLGLhjiOsOKxouCwL4tCmyPuNr4
	s1tVy5bcyujmGfgxPFTy1dwn0MDaMpN+BwVG4rD+PXRwZB1oJU9AqHZ9qN9CpRq0
	sNwmfOpNyAp3UM4gOoNlXy7GX/cm1nhomnFHH+rwaG28E/bf732DDkGHyNDLJz6M
	bs=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Subject: [XEN PATCH v3 13/16] x86/vpmu: guard calls to vmx/svm functions
Date: Mon,  3 Jun 2024 14:32:49 +0300
Message-Id: <b7f68e09ccc54782410d65173e490f477364a5f0.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 03271692-219D-11EF-AFD9-6488940A682E-90055647!pb-smtp2.pobox.com
Content-Transfer-Encoding: quoted-printable

If VMX/SVM is disabled in the build, we may still want to have vPMU
drivers for PV guests. Yet in that case, before using VMX/SVM features
and functions we have to explicitly check whether they're available in
the build. For this purpose (and also to keep the conditionals simple)
two helpers are introduced, is_{vmx,svm}_vcpu(v), which check both the
HVM and VMX/SVM conditions at the same time; they replace the
is_hvm_vcpu(v) macro where needed.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
changes in v3:
 - introduced macro is_{vmx,svm}_vcpu(v)
 - changed description
 - reordered patch, do not modify conditionals w/ cpu_has_vmx_msr_bitmap
   check
---
 xen/arch/x86/cpu/vpmu_amd.c   |  9 +++++----
 xen/arch/x86/cpu/vpmu_intel.c | 16 +++++++++-------
 2 files changed, 14 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/cpu/vpmu_amd.c b/xen/arch/x86/cpu/vpmu_amd.c
index 97e6315bd9..217501ecd3 100644
--- a/xen/arch/x86/cpu/vpmu_amd.c
+++ b/xen/arch/x86/cpu/vpmu_amd.c
@@ -27,6 +27,7 @@
 #define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
 #define set_guest_mode(msr) ((msr) |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
 #define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH - 1))))
+#define is_svm_vcpu(v) (using_svm && is_hvm_vcpu(v))
 
 static unsigned int __read_mostly num_counters;
 static const u32 __read_mostly *counters;
@@ -289,7 +290,7 @@ static int cf_check amd_vpmu_save(struct vcpu *v,  bool to_guest)
 
     context_save(v);
 
-    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_hvm_vcpu(v) &&
+    if ( !vpmu_is_set(vpmu, VPMU_RUNNING) && is_svm_vcpu(v) &&
          is_msr_bitmap_on(vpmu) )
         amd_vpmu_unset_msr_bitmap(v);
 
@@ -363,7 +364,7 @@ static int cf_check amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
             return 0;
         vpmu_set(vpmu, VPMU_RUNNING);
 
-        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
+        if ( is_svm_vcpu(v) && is_msr_bitmap_on(vpmu) )
              amd_vpmu_set_msr_bitmap(v);
     }
 
@@ -372,7 +373,7 @@ static int cf_check amd_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
         (is_pmu_enabled(msr_content) == 0) && vpmu_is_set(vpmu, VPMU_RUNNING) )
     {
         vpmu_reset(vpmu, VPMU_RUNNING);
-        if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
+        if ( is_svm_vcpu(v) && is_msr_bitmap_on(vpmu) )
             amd_vpmu_unset_msr_bitmap(v);
         release_pmu_ownership(PMU_OWNER_HVM);
     }
@@ -415,7 +416,7 @@ static void cf_check amd_vpmu_destroy(struct vcpu *v)
 {
     struct vpmu_struct *vpmu = vcpu_vpmu(v);
 
-    if ( is_hvm_vcpu(v) && is_msr_bitmap_on(vpmu) )
+    if ( is_svm_vcpu(v) && is_msr_bitmap_on(vpmu) )
         amd_vpmu_unset_msr_bitmap(v);
 
     xfree(vpmu->context);
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.=
c
index cd414165df..f95a9b058d 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -54,6 +54,8 @@
 #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFC=
TR0))
 static bool __read_mostly full_width_write;
=20
+#define is_vmx_vcpu(v) ( using_vmx && is_hvm_vcpu(v) )
+
 /*
  * MSR_CORE_PERF_FIXED_CTR_CTRL contains the configuration of all fixed
  * counters. 4 bits for every counter.
@@ -269,7 +271,7 @@ static inline void __core2_vpmu_save(struct vcpu *v)
     if ( !is_hvm_vcpu(v) )
         rdmsrl(MSR_CORE_PERF_GLOBAL_STATUS, core2_vpmu_cxt->global_statu=
s);
     /* Save MSR to private context to make it fork-friendly */
-    else if ( mem_sharing_enabled(v->domain) )
+    else if ( is_vmx_vcpu(v) && mem_sharing_enabled(v->domain) )
         vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                            &core2_vpmu_cxt->global_ctrl);
 }
@@ -333,7 +335,7 @@ static inline void __core2_vpmu_load(struct vcpu *v)
         wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, core2_vpmu_cxt->global_ctrl);
     }
     /* Restore MSR from context when used with a fork */
-    else if ( mem_sharing_is_fork(v->domain) )
+    else if ( is_vmx_vcpu(v) && mem_sharing_is_fork(v->domain) )
         vmx_write_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                             core2_vpmu_cxt->global_ctrl);
 }
@@ -442,7 +444,7 @@ static int cf_check core2_vpmu_alloc_resource(struct =
vcpu *v)
     if ( !acquire_pmu_ownership(PMU_OWNER_HVM) )
         return 0;
=20
-    if ( is_hvm_vcpu(v) )
+    if ( is_vmx_vcpu(v) )
     {
         if ( vmx_add_host_load_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, 0) )
             goto out_err;
@@ -584,7 +586,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int =
msr, uint64_t msr_content)
         if ( msr_content & fixed_ctrl_mask )
             return -EINVAL;
=20
-        if ( is_hvm_vcpu(v) )
+        if ( is_vmx_vcpu(v) )
             vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                                &core2_vpmu_cxt->global_ctrl);
         else
@@ -653,7 +655,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int =
msr, uint64_t msr_content)
             if ( blocked )
                 return -EINVAL;
=20
-            if ( is_hvm_vcpu(v) )
+            if ( is_vmx_vcpu(v) )
                 vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL,
                                    &core2_vpmu_cxt->global_ctrl);
             else
@@ -672,7 +674,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int =
msr, uint64_t msr_content)
         wrmsrl(msr, msr_content);
     else
     {
-        if ( is_hvm_vcpu(v) )
+        if ( is_vmx_vcpu(v) )
             vmx_write_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, msr_conten=
t);
         else
             wrmsrl(MSR_CORE_PERF_GLOBAL_CTRL, msr_content);
@@ -706,7 +708,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int =
msr, uint64_t *msr_content)
             *msr_content =3D core2_vpmu_cxt->global_status;
             break;
         case MSR_CORE_PERF_GLOBAL_CTRL:
-            if ( is_hvm_vcpu(v) )
+            if ( is_vmx_vcpu(v) )
                 vmx_read_guest_msr(v, MSR_CORE_PERF_GLOBAL_CTRL, msr_con=
tent);
             else
                 rdmsrl(MSR_CORE_PERF_GLOBAL_CTRL, *msr_content);
--=20
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:35:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:35:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734883.1141007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5xr-0002RJ-DN; Mon, 03 Jun 2024 11:34:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734883.1141007; Mon, 03 Jun 2024 11:34:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5xr-0002RC-9T; Mon, 03 Jun 2024 11:34:59 +0000
Received: by outflank-mailman (input) for mailman id 734883;
 Mon, 03 Jun 2024 11:34:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5xq-0002Qn-61
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:34:58 +0000
Received: from pb-smtp2.pobox.com (pb-smtp2.pobox.com [64.147.108.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d5a9d82-219d-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:34:56 +0200 (CEST)
Received: from pb-smtp2.pobox.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 412FF2A963;
 Mon,  3 Jun 2024 07:34:56 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp2.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 37BC12A962;
 Mon,  3 Jun 2024 07:34:56 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp2.pobox.com (Postfix) with ESMTPSA id 330182A961;
 Mon,  3 Jun 2024 07:34:54 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d5a9d82-219d-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=HkS7PEf8kJUA2RxHxXS98LEBr
	MEt1ID9/ioAN+1BsYg=; b=QqwbSLPTK4nhdLFUwp4v1x2GmmVp2h+6EoIzXwYYj
	G1q9DTzCDsn1Aj9XyY1fig+lx566ZbI3r3Xp2NODp6S9yY6s36UK9WJCogtXezyT
	pccqmwcnn7aDZbvaayqnU7Vb3EY1XawYZ5GHitOi62MyBIsLyixtP6n8cCRqLbYG
	Zs=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Subject: [XEN PATCH v3 14/16] ioreq: make arch_vcpu_ioreq_completion() an optional callback
Date: Mon,  3 Jun 2024 14:34:53 +0300
Message-Id: <a0f9c5ef8554d63e149afd0a413a27385c889faa.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 4CB4C53E-219D-11EF-A092-6488940A682E-90055647!pb-smtp2.pobox.com
Content-Transfer-Encoding: quoted-printable

In most cases the arch_vcpu_ioreq_completion() routine is just an empty
stub, except when handling VIO_realmode_completion, which only happens
for HVM domains running on a VT-x machine. When VT-x is disabled in the
build configuration, both the x86 and Arm versions of the routine become
empty stubs.
To dispose of these useless stubs, make the call to the arch-specific
ioreq completion handler optional, invoking it only when one is present,
and drop the Arm and generic x86 handlers. Actual handling of
VIO_realmode_completion can then be done by the VMX code.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 xen/arch/arm/ioreq.c       |  6 ------
 xen/arch/x86/hvm/ioreq.c   | 23 -----------------------
 xen/arch/x86/hvm/vmx/vmx.c | 16 ++++++++++++++++
 xen/common/ioreq.c         |  5 ++++-
 xen/include/xen/ioreq.h    |  2 +-
 5 files changed, 21 insertions(+), 31 deletions(-)

diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
index 5df755b48b..2e829d2e7f 100644
--- a/xen/arch/arm/ioreq.c
+++ b/xen/arch/arm/ioreq.c
@@ -135,12 +135,6 @@ bool arch_ioreq_complete_mmio(void)
     return false;
 }
 
-bool arch_vcpu_ioreq_completion(enum vio_completion completion)
-{
-    ASSERT_UNREACHABLE();
-    return true;
-}
-
 /*
  * The "legacy" mechanism of mapping magic pages for the IOREQ servers
  * is x86 specific, so the following hooks don't need to be implemented on Arm:
diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
index 4eb7a70182..088650e007 100644
--- a/xen/arch/x86/hvm/ioreq.c
+++ b/xen/arch/x86/hvm/ioreq.c
@@ -29,29 +29,6 @@ bool arch_ioreq_complete_mmio(void)
     return handle_mmio();
 }
 
-bool arch_vcpu_ioreq_completion(enum vio_completion completion)
-{
-    switch ( completion )
-    {
-    case VIO_realmode_completion:
-    {
-        struct hvm_emulate_ctxt ctxt;
-
-        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
-        vmx_realmode_emulate_one(&ctxt);
-        hvm_emulate_writeback(&ctxt);
-
-        break;
-    }
-
-    default:
-        ASSERT_UNREACHABLE();
-        break;
-    }
-
-    return true;
-}
-
 static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
 {
     struct domain *d = s->target;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f16faa6a61..7187d1819c 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -10,6 +10,7 @@
 #include <xen/param.h>
 #include <xen/trace.h>
 #include <xen/sched.h>
+#include <xen/ioreq.h>
 #include <xen/irq.h>
 #include <xen/softirq.h>
 #include <xen/domain_page.h>
@@ -2749,6 +2750,20 @@ static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
     vmx_vmcs_exit(v);
 }
 
+bool realmode_vcpu_ioreq_completion(enum vio_completion completion)
+{
+    struct hvm_emulate_ctxt ctxt;
+
+    if ( completion != VIO_realmode_completion )
+        ASSERT_UNREACHABLE();
+
+    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
+    vmx_realmode_emulate_one(&ctxt);
+    hvm_emulate_writeback(&ctxt);
+
+    return true;
+}
+
 static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
     .name                 = "VMX",
     .cpu_up_prepare       = vmx_cpu_up_prepare,
@@ -3070,6 +3085,7 @@ const struct hvm_function_table * __init start_vmx(void)
     lbr_tsx_fixup_check();
     ler_to_fixup_check();
 
+    arch_vcpu_ioreq_completion = realmode_vcpu_ioreq_completion;
     return &vmx_function_table;
 }
 
diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
index 1257a3d972..94fde97ece 100644
--- a/xen/common/ioreq.c
+++ b/xen/common/ioreq.c
@@ -33,6 +33,8 @@
 #include <public/hvm/ioreq.h>
 #include <public/hvm/params.h>
 
+bool (*arch_vcpu_ioreq_completion)(enum vio_completion completion) = NULL;
+
 void ioreq_request_mapcache_invalidate(const struct domain *d)
 {
     struct vcpu *v =3D current;
@@ -244,7 +246,8 @@ bool vcpu_ioreq_handle_completion(struct vcpu *v)
         break;
=20
     default:
-        res = arch_vcpu_ioreq_completion(completion);
+        if ( arch_vcpu_ioreq_completion )
+            res = arch_vcpu_ioreq_completion(completion);
         break;
     }
 
diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
index cd399adf17..880214ec41 100644
--- a/xen/include/xen/ioreq.h
+++ b/xen/include/xen/ioreq.h
@@ -111,7 +111,7 @@ void ioreq_domain_init(struct domain *d);
 int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
 
 bool arch_ioreq_complete_mmio(void);
-bool arch_vcpu_ioreq_completion(enum vio_completion completion);
+extern bool (*arch_vcpu_ioreq_completion)(enum vio_completion completion);
 int arch_ioreq_server_map_pages(struct ioreq_server *s);
 void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
 void arch_ioreq_server_enable(struct ioreq_server *s);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:37:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:37:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734889.1141017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5zw-000310-OK; Mon, 03 Jun 2024 11:37:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734889.1141017; Mon, 03 Jun 2024 11:37:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE5zw-00030t-LO; Mon, 03 Jun 2024 11:37:08 +0000
Received: by outflank-mailman (input) for mailman id 734889;
 Mon, 03 Jun 2024 11:37:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE5zv-00030l-Mv
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:37:07 +0000
Received: from pb-smtp20.pobox.com (pb-smtp20.pobox.com [173.228.157.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 99d37f8a-219d-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 13:37:05 +0200 (CEST)
Received: from pb-smtp20.pobox.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id 17FD02757B;
 Mon,  3 Jun 2024 07:37:04 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp20.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id F09AD2757A;
 Mon,  3 Jun 2024 07:37:03 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp20.pobox.com (Postfix) with ESMTPSA id 44D8E27575;
 Mon,  3 Jun 2024 07:36:59 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99d37f8a-219d-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=7Gq+t0OokvjjzhsPQkQvhI7mb
	2EvEEH35F6GhZjfg/Q=; b=Z6WeutNbTMOXpf6MofM8KMl6ERm9hr1ZeKo1TPN5E
	rzk2K8HuLoSmf1yFjs4qaKhpp4XbHSBhnZseVu40dRX3foipQPm1BhyJtKRfaIlG
	9vDwEMGAhQ57JZdo/ZEHm/1wjBSnB/ikyqNFJCSMCHm0t6wa87dO3iZ+qRDYGo/q
	xo=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>
Subject: [XEN PATCH v3 15/16] x86/vmx: replace CONFIG_HVM with CONFIG_VMX in vmx.h
Date: Mon,  3 Jun 2024 14:36:56 +0300
Message-Id: <9a1d4a9af373ff7164c20b9774eea5249af60b01.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 9740F190-219D-11EF-B366-ACC938F0AE34-90055647!pb-smtp20.pobox.com
Content-Transfer-Encoding: quoted-printable

Now that we have a separate config option for VMX, which itself depends
on CONFIG_HVM, use it to provide the vmx_pi_hooks_{assign,deassign}
stubs for the case when VMX is disabled while HVM is enabled.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - use CONFIG_VMX instead of CONFIG_HVM to provide stubs, instead of guarding
   calls to vmx_pi_hooks_{assign,deassign} in iommu/vt-d code
---
 xen/arch/x86/include/asm/hvm/vmx/vmx.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/hvm/vmx/vmx.h b/xen/arch/x86/includ=
e/asm/hvm/vmx/vmx.h
index 1489dd05c2..025bec2321 100644
--- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
+++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
@@ -599,7 +599,7 @@ void vmx_pi_desc_fixup(unsigned int cpu);
=20
 void vmx_sync_exit_bitmap(struct vcpu *v);
=20
-#ifdef CONFIG_HVM
+#ifdef CONFIG_VMX
 void vmx_pi_hooks_assign(struct domain *d);
 void vmx_pi_hooks_deassign(struct domain *d);
 #else
--=20
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:39:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:39:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734896.1141026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE61z-0003bb-6a; Mon, 03 Jun 2024 11:39:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734896.1141026; Mon, 03 Jun 2024 11:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE61z-0003bU-3x; Mon, 03 Jun 2024 11:39:15 +0000
Received: by outflank-mailman (input) for mailman id 734896;
 Mon, 03 Jun 2024 11:39:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G0SM=NF=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sE61y-0003bO-40
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:39:14 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e4ecf6b3-219d-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 13:39:11 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 126C7318B8;
 Mon,  3 Jun 2024 07:39:10 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 0ACD3318B7;
 Mon,  3 Jun 2024 07:39:10 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 25949318B6;
 Mon,  3 Jun 2024 07:39:06 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4ecf6b3-219d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:in-reply-to:references:mime-version
	:content-transfer-encoding; s=sasl; bh=JHgRlQ2mOAVaV36q/DwoXTgbm
	MjsvlXARfcG6GpvdPs=; b=rNgvOyf/nEl3Sg/AqUYANDNaMjpNMCYxrg86dVmWz
	7D5eeIVOvdRNPxFY3L6gQhaqKXZW6JMp/5r8ytL1HtL1TVEtm9PFFxeie8itz5j4
	CrJmrD43xLGM4A4BilgybCoJHzDFlrJeGZesI5Cad86aI/Dz3M2Qpz6nf7D/flHq
	m8=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v3 16/16] x86/hvm: make AMD-V and Intel VT-x support configurable
Date: Mon,  3 Jun 2024 14:39:03 +0300
Message-Id: <794fc2bf6cedddb9ea2ee0265e750e198d34eee9.1717410850.git.Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
MIME-Version: 1.0
X-Pobox-Relay-ID:
 E2E04B50-219D-11EF-AEB4-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

From: Xenia Ragiadakou <burzalodowa@gmail.com>

Provide the user with configuration control over the cpu virtualization support
in Xen by making SVM and VMX options user selectable.

To preserve the current default behavior, both options depend on HVM and
default to value of HVM.

To prevent users from unknowingly disabling virtualization support, make the
controls user selectable only if EXPERT is enabled.

No functional change intended.

Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
changes in v3:
 - only tags added
changes in v2:
 - remove dependency of build options IOMMU/AMD_IOMMU on VMX/SVM options
---
 xen/arch/x86/Kconfig | 18 ++++++++++++++++--
 1 file changed, 16 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 4a35c43dc5..dbee7c2efb 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -123,10 +123,24 @@ config HVM
 	  If unsure, say Y.
 
 config SVM
-	def_bool HVM
+	bool "AMD-V" if EXPERT
+	depends on HVM
+	default HVM
+	help
+	  Enables virtual machine extensions on platforms that implement the
+	  AMD Virtualization Technology (AMD-V).
+	  If your system includes a processor with AMD-V support, say Y.
+	  If in doubt, say Y.
 
 config VMX
-	def_bool HVM
+	bool "Intel VT-x" if EXPERT
+	depends on HVM
+	default HVM
+	help
+	  Enables virtual machine extensions on platforms that implement the
+	  Intel Virtualization Technology (Intel VT-x).
+	  If your system includes a processor with Intel VT-x support, say Y.
+	  If in doubt, say Y.
 
 config XEN_SHSTK
 	bool "Supervisor Shadow Stacks"
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 11:49:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 11:49:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734906.1141037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE6BW-0006KE-2t; Mon, 03 Jun 2024 11:49:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734906.1141037; Mon, 03 Jun 2024 11:49:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE6BW-0006K7-0F; Mon, 03 Jun 2024 11:49:06 +0000
Received: by outflank-mailman (input) for mailman id 734906;
 Mon, 03 Jun 2024 11:49:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YA0q=NF=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1sE6BV-0006K1-07
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 11:49:05 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4594fb0a-219f-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 13:49:02 +0200 (CEST)
Received: by mail-wr1-x433.google.com with SMTP id
 ffacd0b85a97d-35dc0472b7eso3590727f8f.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 04:49:02 -0700 (PDT)
Received: from [192.168.69.100] ([176.176.177.241])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd04ca9bbsm8651496f8f.31.2024.06.03.04.49.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 04:49:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4594fb0a-219f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1717415342; x=1718020142; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=YwQ/Pdf4Y3EmNBto/siemVD2L186u3Gcst2+RqrCCHU=;
        b=rGtYI2F0eoBR2FW8Hia03AA2mIU5SC1HBBo6o7UJPn8RtepFrVMTuP9kms00XeBPSn
         DXkNLL/GIfpF/36I1gFN/avPVENI1CuVU3T6kFxY/XtRD/ON1Y3BhUsMkFsyaBN8+iaj
         TTEE98VEgZHb1PjtUnCow79Trzz3eUElpiwRzm+UgAOPBDVTmu2kMwBjSMjYCKkBdvDU
         8Z22fc4AFXtbjcQmkj6vlnEAV8y1l8MmTe3zevJ5aChXWpJEXtSCR7q8FixD9+cjapk2
         SBQvmH6kbNT7aXzgX06c4o8nKFFXXDXst+8URsiFpAN/kT8wsBzYXTGTMw6dES6jJOk5
         w+5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717415342; x=1718020142;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=YwQ/Pdf4Y3EmNBto/siemVD2L186u3Gcst2+RqrCCHU=;
        b=PCdMgo9/Hb62rg6Pl++b1FGMr+5MSiqBZshOvu38BzPWgEooyYqUd3Ll0aO051dPsv
         UeUAvrRlBRXrOWvlPVzqtti8Xk5Q5Go5m9pvA+7Jvoc8vpj6DrdJAKFAYRXQz0hFBPXc
         EGqnP2S+9ag94vnLmEtQ+yDqOWQLNF22zg8CTmdrZ7NMkJFb4wJd2HXYS1dgnM+2lqLv
         ZG+bDqVV5wSTgp8XDp2teMctVY7HyfR79dU7jCMDdgLrJLMVU+lYiHpMZd0tXkS/rQFI
         ra1SrTOHU+4fAgOH8QnXYIsyDkzT4Ej9DtyaT3S6G1AsPVJamj753lJp9JGYKPYBY9Vt
         wr9g==
X-Forwarded-Encrypted: i=1; AJvYcCUo1LrNmtc/i4hl0/0YKnAV206TVdBt1gKaBFS48PTpVgZA4eH6l9f9NX9/73EijoHE+aMSRWFJkhi2xR8xv5USJZLZut7ivloSvxee+wo=
X-Gm-Message-State: AOJu0Yyd//pngB1XOuYMHdjLBDGEIjEhUhy1soPFflPzPHid2Uc5w8u8
	jvFM3k+rTKjms7nNL6KQGpPk/9xml5kGikBErzPtEttEvzoEITqRrVyWCsioO/s=
X-Google-Smtp-Source: AGHT+IEFOw08szT4GXwub1/2CwBP2HURKztzGBZQBQKhd0OivMVk9XKNPsjeqx+sYrMmKpqVgkG8sQ==
X-Received: by 2002:adf:f852:0:b0:355:25d:a5b0 with SMTP id ffacd0b85a97d-35e0f273304mr6089349f8f.15.1717415341805;
        Mon, 03 Jun 2024 04:49:01 -0700 (PDT)
Message-ID: <ba6a62b7-46a6-4e2a-a4c0-ee42a5e63fbb@linaro.org>
Date: Mon, 3 Jun 2024 13:48:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 0/7] hw/xen: Simplify legacy backends handling
To: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>,
 Paul Durrant <paul@xen.org>
Cc: Anthony PERARD <anthony@xenproject.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 "Michael S. Tsirkin" <mst@redhat.com>, Eduardo Habkost
 <eduardo@habkost.net>, Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Gerd Hoffmann <kraxel@redhat.com>
References: <20240510104908.76908-1-philmd@linaro.org>
Content-Language: en-US
From: =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>
In-Reply-To: <20240510104908.76908-1-philmd@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 10/5/24 12:49, Philippe Mathieu-Daudé wrote:
> Respin of Paolo's Xen patches from
> https://lore.kernel.org/qemu-devel/20240509170044.190795-1-pbonzini@redhat.com/
> rebased on one of my cleanup branches making backend
> structures const. Treat xenfb as other backends.
> 
> Paolo Bonzini (2):
>    hw/xen: initialize legacy backends from xen_bus_init()
>    hw/xen: register legacy backends via xen_backend_init
> 
> Philippe Mathieu-Daudé (5):
>    hw/xen: Remove declarations left over in 'xen-legacy-backend.h'
>    hw/xen: Constify XenLegacyDevice::XenDevOps
>    hw/xen: Constify xenstore_be::XenDevOps
>    hw/xen: Make XenDevOps structures const
>    hw/xen: Register framebuffer backend via xen_backend_init()

Thanks Paul for the review; unfortunately Paolo missed this and
merged v1 as a single commit 88f5ed7017 ("xen: register legacy
backends via xen_backend_init") :(

Regards,

Phil.



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 12:46:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 12:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734915.1141046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE74b-0000b6-4L; Mon, 03 Jun 2024 12:46:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734915.1141046; Mon, 03 Jun 2024 12:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE74b-0000az-0t; Mon, 03 Jun 2024 12:46:01 +0000
Received: by outflank-mailman (input) for mailman id 734915;
 Mon, 03 Jun 2024 12:45:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE74Z-0000ap-MN; Mon, 03 Jun 2024 12:45:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE74Z-0000Io-7w; Mon, 03 Jun 2024 12:45:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE74Y-0001o0-Su; Mon, 03 Jun 2024 12:45:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sE74Y-0000ek-SL; Mon, 03 Jun 2024 12:45:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GlU9WxshEKo2hawnlTwM3EJSJ8RwY6YMQDFj2+zEvUg=; b=sH4Hx7QTjL4VsjI2ptdD+yckcl
	syLp+k1NpiJDh0ruYHZwtdMGu/cGI5dDDfHeGSpgnlPabIv8VfsoTV+lG9wC9PBkq9+tgpPaZn12Y
	Uu6Qbk9R/VqWt7N9XBpu+/VzEoBn+YyIucIs3JBaNcqhGZ6oNeEI8JuJaxGqM9y+Cbew=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186236-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186236: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b0930e3f4e6de1ce1c480bca687b44875e071f74
X-Osstest-Versions-That:
    ovmf=de2330450ff71df5a609fe48e2153eb4854d9359
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Jun 2024 12:45:58 +0000

flight 186236 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186236/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b0930e3f4e6de1ce1c480bca687b44875e071f74
baseline version:
 ovmf                 de2330450ff71df5a609fe48e2153eb4854d9359

Last test of basis   186227  2024-06-02 02:11:22 Z    1 days
Testing same since   186236  2024-06-03 11:14:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nhi Pham <nhi@os.amperecomputing.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   de2330450f..b0930e3f4e  b0930e3f4e6de1ce1c480bca687b44875e071f74 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 13:20:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 13:20:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734923.1141056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE7bz-0005xb-Lk; Mon, 03 Jun 2024 13:20:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734923.1141056; Mon, 03 Jun 2024 13:20:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE7bz-0005xU-JC; Mon, 03 Jun 2024 13:20:31 +0000
Received: by outflank-mailman (input) for mailman id 734923;
 Mon, 03 Jun 2024 13:20:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE7by-0005xK-99; Mon, 03 Jun 2024 13:20:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE7by-0000sM-1Y; Mon, 03 Jun 2024 13:20:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sE7bx-0002YG-NF; Mon, 03 Jun 2024 13:20:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sE7bx-0000XQ-Mm; Mon, 03 Jun 2024 13:20:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=P1LSWn6UnorZANq5lghR0r+yDD7WjcwD6Qz/liZYiHE=; b=orUNCfdvSe8tKWmAN6WesIl3rS
	qrKsY6w0OSfnSK/tkFnWpaE/DDWWWKTB28QdPF7Y64GnMtE/Qf1AcnTfLDDKumyxWH5UgnlqIp16t
	+gDQWnZqmv6bz20SlTBsyXRq+Q/UyV/Jo2ottTcRwnvJ7Kspj6kzamXmtj7PBM2W/s6A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186235-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186235: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c3f38fa61af77b49866b006939479069cd451173
X-Osstest-Versions-That:
    linux=83814698cf48ce3aadc5d88a3f577f04482ff92a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Jun 2024 13:20:29 +0000

flight 186235 linux-linus real [real]
flight 186237 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186235/
http://logs.test-lab.xenproject.org/osstest/logs/186237/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 186231

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-examine      8 reboot              fail pass in 186237-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186231
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186231
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186231
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c3f38fa61af77b49866b006939479069cd451173
baseline version:
 linux                83814698cf48ce3aadc5d88a3f577f04482ff92a

Last test of basis   186231  2024-06-02 11:33:31 Z    1 days
Failing since        186232  2024-06-02 18:43:35 Z    0 days    2 attempts
Testing same since   186235  2024-06-03 02:08:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Dave Hansen <dave.hansen@linux.intel.com>
  Ingo Molnar <mingo@kernel.org>
  Jason Nader <dev@kayoway.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jörn Heusipp <osmanx@heusipp.de>
  Kees Cook <kees@kernel.org>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marco Patalano <mpatalan@redhat.com>
  Niklas Cassel <cassel@kernel.org>
  Peter Schneider <pschneider1968@googlemail.com>
  Phil Auld <pauld@redhat.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tim Teichmann <teichmanntim@outlook.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 481 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 13:21:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 13:21:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734930.1141066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE7cr-0006Rt-VH; Mon, 03 Jun 2024 13:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734930.1141066; Mon, 03 Jun 2024 13:21:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE7cr-0006Rm-Sp; Mon, 03 Jun 2024 13:21:25 +0000
Received: by outflank-mailman (input) for mailman id 734930;
 Mon, 03 Jun 2024 13:21:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3ZA=NF=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sE7cr-0006R7-2k
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 13:21:25 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c2a8f35-21ac-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 15:21:23 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 8B42E4EE0737;
 Mon,  3 Jun 2024 15:21:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c2a8f35-21ac-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Mon, 03 Jun 2024 15:21:22 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 1/5] xen/domain: deviate violation of MISRA C Rule
 20.12
In-Reply-To: <7e96b887-8fd3-4ecc-a23c-98a46ea1aa8c@suse.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <843540164f7e8f910226e1ded05e153cb04c519d.1717236930.git.nicola.vetrini@bugseng.com>
 <7e96b887-8fd3-4ecc-a23c-98a46ea1aa8c@suse.com>
Message-ID: <91e5a73aaa1abdaa7922774022843932@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-03 08:39, Jan Beulich wrote:
> On 01.06.2024 12:16, Nicola Vetrini wrote:
>> MISRA C Rule 20.12 states: "A macro parameter used as an operand to
>> the # or ## operators, which is itself subject to further macro 
>> replacement,
>> shall only be used as an operand to these operators".
>> 
>> In this case, in builds where CONFIG_DEBUG_LOCK_PROFILE=y, the domain_lock
>> macro is used both as a regular macro argument and as an operand for
>> stringification in the expansion of the macro spin_lock_init_prof.
> 
> Then shouldn't the marker be on the definition of spin_lock_init_prof(),
> rather than ...
> 
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -632,6 +632,7 @@ struct domain *domain_create(domid_t domid,
>> 
>>      atomic_set(&d->refcnt, 1);
>>      RCU_READ_LOCK_INIT(&d->rcu_lock);
>> +    /* SAF-6-safe Rule 20.12 expansion of macro domain_lock in debug 
>> builds */
>>      rspin_lock_init_prof(d, domain_lock);
>>      rspin_lock_init_prof(d, page_alloc_lock);
>>      spin_lock_init(&d->hypercall_deadlock_mutex);
> 
> ... actually just one of the two uses here (and presumably several more
> elsewhere)?
> 
> Jan

Actually, it seems that this violation went away with some refactoring, 
so this patch is no longer needed except for the addition to safe.json; 
it can be folded into the next one.
I'll make the adjustment.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 15:00:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 15:00:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734944.1141077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9A6-0000Az-0Z; Mon, 03 Jun 2024 14:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734944.1141077; Mon, 03 Jun 2024 14:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9A5-0000Ai-TO; Mon, 03 Jun 2024 14:59:49 +0000
Received: by outflank-mailman (input) for mailman id 734944;
 Mon, 03 Jun 2024 14:59:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LFL5=NF=cloud.com=matthew.barnes@srs-se1.protection.inumbo.net>)
 id 1sE9A4-0000Ab-Lt
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 14:59:48 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e84896b9-21b9-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 16:59:42 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a66e9eac48fso2267366b.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 07:59:42 -0700 (PDT)
Received: from EMEAENGAAD91498.citrite.net ([217.156.233.157])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a685b935b5csm464114866b.206.2024.06.03.07.59.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 Jun 2024 07:59:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e84896b9-21b9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1717426781; x=1718031581; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=+JA14AOgEuKBKYAkFvlW6fpt/3ZUuvQg25+31+429CY=;
        b=k+xzKYs7zcOulCMfgoyL4KLhfELcAUzob6htf+kKUOi11Af0NzF7zQqOEBK5rH6uyV
         ZetTAwjowCwP/4G9bTjmZUVDZg4/Mjo1gN2vaJjzze9dXFBLsAp41Oug5G2iEjj5igux
         HJy/dzEqTjOrtcSQLZ+xhbgD2CvpE1LsGcBLQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717426781; x=1718031581;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=+JA14AOgEuKBKYAkFvlW6fpt/3ZUuvQg25+31+429CY=;
        b=BSZBLSnDLgIZAvLBK9WwHwl/nRisD1c3Xt2pvrIJ2pmN093EPVBTdt7+Y1D4kA0onz
         sKIzDRRp0yv4xkSEMhW9dqF3wVw4CfhS2hxGX2cYVismylSAKZVS8g0/Xk6Z/+atjfZy
         Ea/ggUr6EEiRF1nPUIv4FwolhprjPaj/TKh3oUBztgfjuDgWf3ZXVXpPtHAHCrz9OuUS
         rntNSGkfOMMnF5lW3EzbpiespXlGKbvbd7EgeIgU9Vkd2mV8mz7DrNKKMVgblAtYBjsr
         72KKg7gRQbM+9P7o9aGOjTfqstsWF1ILxX7X/LPIQEYB7gSOjjBKL6kI537EeBZv1fft
         Rd8w==
X-Gm-Message-State: AOJu0YwK5n9Nsl1D59+FNKQ2AjfoxxIor1H2jZVm3gXHSJtWgL1mfYSr
	VgPJilD1pzFIE2i5xpO1FC/Z62+ivG5eSkhLCgAbgGkcBe81LSmH7+rJH3IXNS6xMAgCeRcpuPL
	l
X-Google-Smtp-Source: AGHT+IGIdwQtkxRsIrywJihfXR582ob4fbTWFwhOGhA4eNRNhIzbkUvc081KuhaSxyV2ONFN7D9plA==
X-Received: by 2002:a17:906:56ca:b0:a68:f43f:6f31 with SMTP id a640c23a62f3a-a68f43f708fmr240161266b.64.1717426781323;
        Mon, 03 Jun 2024 07:59:41 -0700 (PDT)
From: Matthew Barnes <matthew.barnes@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Matthew Barnes <matthew.barnes@cloud.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [XEN PATCH] tools/misc: xen-hvmcrash: Inject #DF instead of overwriting RIP
Date: Mon,  3 Jun 2024 15:59:18 +0100
Message-Id: <27f4397093d92b53f89d625d682bd4b7145b65d8.1717426439.git.matthew.barnes@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xen-hvmcrash would previously save the domain's CPU context records,
overwrite the instruction pointer with a bogus value, and then restore
the records, crashing the domain just enough to cause the guest OS to
produce a memory dump.

This approach proved unreliable when tested on a guest running
Windows 10 x64, with some executions having no effect at all.

Another approach would be to trigger NMIs. This also proved unreliable
when tested on Linux (Ubuntu 22.04), since Linux ignores NMIs unless it
is configured to handle them.

Injecting a double fault abort into all vCPUs proved more reliable at
crashing and invoking memory dumps on both Windows and Linux domains.

This patch modifies the xen-hvmcrash tool to inject #DF to all vCPUs
belonging to the specified domain, instead of overwriting RIP.

Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
---
 tools/misc/xen-hvmcrash.c | 77 +++++++--------------------------------
 1 file changed, 13 insertions(+), 64 deletions(-)

diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
index 1d058fa40a47..8ef1beb388f8 100644
--- a/tools/misc/xen-hvmcrash.c
+++ b/tools/misc/xen-hvmcrash.c
@@ -38,22 +38,21 @@
 #include <sys/stat.h>
 #include <arpa/inet.h>
 
+#define XC_WANT_COMPAT_DEVICEMODEL_API
 #include <xenctrl.h>
 #include <xen/xen.h>
 #include <xen/domctl.h>
 #include <xen/hvm/save.h>
 
+#define X86_ABORT_DF 8
+
 int
 main(int argc, char **argv)
 {
     int domid;
     xc_interface *xch;
     xc_domaininfo_t dominfo;
-    int ret;
-    uint32_t len;
-    uint8_t *buf;
-    uint32_t off;
-    struct hvm_save_descriptor *descriptor;
+    int vcpu_id, ret;
 
     if (argc != 2 || !argv[1] || (domid = atoi(argv[1])) < 0) {
         fprintf(stderr, "usage: %s <domid>\n", argv[0]);
@@ -77,66 +76,16 @@ main(int argc, char **argv)
         exit(1);
     }
 
-    ret = xc_domain_pause(xch, domid);
-    if (ret < 0) {
-        perror("xc_domain_pause");
-        exit(-1);
-    }
-
-    /*
-     * Calling with zero buffer length should return the buffer length
-     * required.
-     */
-    ret = xc_domain_hvm_getcontext(xch, domid, 0, 0);
-    if (ret < 0) {
-        perror("xc_domain_hvm_getcontext");
-        exit(1);
-    }
-    
-    len = ret;
-    buf = malloc(len);
-    if (buf == NULL) {
-        perror("malloc");
-        exit(1);
-    }
-
-    ret = xc_domain_hvm_getcontext(xch, domid, buf, len);
-    if (ret < 0) {
-        perror("xc_domain_hvm_getcontext");
-        exit(1);
-    }
-
-    off = 0;
-
-    while (off < len) {
-        descriptor = (struct hvm_save_descriptor *)(buf + off);
-
-        off += sizeof (struct hvm_save_descriptor);
-
-        if (descriptor->typecode == HVM_SAVE_CODE(CPU)) {
-            HVM_SAVE_TYPE(CPU) *cpu;
-
-            /* Overwrite EIP/RIP with some recognisable but bogus value */
-            cpu = (HVM_SAVE_TYPE(CPU) *)(buf + off);
-            printf("CPU[%d]: RIP = %" PRIx64 "\n", descriptor->instance, cpu->rip);
-            cpu->rip = 0xf001;
-        } else if (descriptor->typecode == HVM_SAVE_CODE(END)) {
-            break;
+    for (vcpu_id = 0; vcpu_id <= dominfo.max_vcpu_id; vcpu_id++) {
+        printf("Injecting #DF to vcpu ID #%d...\n", vcpu_id);
+        ret = xc_hvm_inject_trap(xch, domid, vcpu_id,
+                                X86_ABORT_DF,
+                                XEN_DMOP_EVENT_hw_exc, 0,
+                                NULL, NULL);
+        if (ret < 0) {
+            fprintf(stderr, "Could not inject #DF to vcpu ID #%d\n", vcpu_id);
+            perror("xc_hvm_inject_trap");
         }
-
-        off += descriptor->length;
-    }
-
-    ret = xc_domain_hvm_setcontext(xch, domid, buf, len);
-    if (ret < 0) {
-        perror("xc_domain_hvm_setcontext");
-        exit(1);
-    }
-
-    ret = xc_domain_unpause(xch, domid);
-    if (ret < 0) {
-        perror("xc_domain_unpause");
-        exit(1);
     }
 
     return 0;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 15:18:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 15:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734956.1141109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SH-0003OL-Dx; Mon, 03 Jun 2024 15:18:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734956.1141109; Mon, 03 Jun 2024 15:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SH-0003N1-84; Mon, 03 Jun 2024 15:18:37 +0000
Received: by outflank-mailman (input) for mailman id 734956;
 Mon, 03 Jun 2024 15:18:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3DZA=NF=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sE9SG-00030f-Jv
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 15:18:36 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8b0febaf-21bc-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 17:18:35 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-423-PXlMk71_P76Ne_v4g56O5A-1; Mon, 03 Jun 2024 11:18:30 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C21E6800CAC;
 Mon,  3 Jun 2024 15:18:29 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.239])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 2FC58200C7E4;
 Mon,  3 Jun 2024 15:18:29 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 3947B1801F2C; Mon,  3 Jun 2024 17:18:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b0febaf-21bc-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717427913;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=iOjw8WHuKXRxil9Kra4lnYZ4J/Am36P/OUiaBQSEgzM=;
	b=RZor1nEUcfmVcmlwWgxeR7adys+ZH4GwivoHx7qCeeBIoZb/byKhDD301HJjhcA7iWe8vW
	zLSSTbgb4tBhoL1Cb0e2ykF44Y3pd0/M9oQuzU+WJ02yGzHq5k/KZqP6OZHGuOe9sbSzLm
	nyWUZuNeHaluN1g+OiJrcBbKpcNkUEU=
X-MC-Unique: PXlMk71_P76Ne_v4g56O5A-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Anthony PERARD <anthony@xenproject.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 3/3] ui+display: rename is_placeholder -> surface_is_placeholder
Date: Mon,  3 Jun 2024 17:18:25 +0200
Message-ID: <20240603151825.188353-4-kraxel@redhat.com>
In-Reply-To: <20240603151825.188353-1-kraxel@redhat.com>
References: <20240603151825.188353-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.6

No functional change.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/surface.h | 2 +-
 ui/console.c         | 2 +-
 ui/sdl2-2d.c         | 2 +-
 ui/sdl2-gl.c         | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/ui/surface.h b/include/ui/surface.h
index 96f9b1611c1c..60416a451901 100644
--- a/include/ui/surface.h
+++ b/include/ui/surface.h
@@ -50,7 +50,7 @@ static inline int surface_is_borrowed(DisplaySurface *surface)
     return !(surface->flags & QEMU_ALLOCATED_FLAG);
 }
 
-static inline int is_placeholder(DisplaySurface *surface)
+static inline int surface_is_placeholder(DisplaySurface *surface)
 {
     return surface->flags & QEMU_PLACEHOLDER_FLAG;
 }
diff --git a/ui/console.c b/ui/console.c
index d7967ddb0d1a..3bd2adcc33c3 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -1510,7 +1510,7 @@ void qemu_console_resize(QemuConsole *s, int width, int height)
     assert(QEMU_IS_GRAPHIC_CONSOLE(s));
 
     if ((s->scanout.kind != SCANOUT_SURFACE ||
-         (surface && !surface_is_borrowed(surface) && !is_placeholder(surface))) &&
+         (surface && !surface_is_borrowed(surface) && !surface_is_placeholder(surface))) &&
         qemu_console_get_width(s, -1) == width &&
         qemu_console_get_height(s, -1) == height) {
         return;
diff --git a/ui/sdl2-2d.c b/ui/sdl2-2d.c
index 06468cd493ea..73052383c2e0 100644
--- a/ui/sdl2-2d.c
+++ b/ui/sdl2-2d.c
@@ -72,7 +72,7 @@ void sdl2_2d_switch(DisplayChangeListener *dcl,
         scon->texture = NULL;
     }
 
-    if (is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
+    if (surface_is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
         sdl2_window_destroy(scon);
         return;
     }
diff --git a/ui/sdl2-gl.c b/ui/sdl2-gl.c
index 28d796607c08..91b7ee2419b7 100644
--- a/ui/sdl2-gl.c
+++ b/ui/sdl2-gl.c
@@ -89,7 +89,7 @@ void sdl2_gl_switch(DisplayChangeListener *dcl,
 
     scon->surface = new_surface;
 
-    if (is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
+    if (surface_is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
         qemu_gl_fini_shader(scon->gls);
         scon->gls = NULL;
         sdl2_window_destroy(scon);
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 15:18:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 15:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734953.1141087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SF-0002zm-Id; Mon, 03 Jun 2024 15:18:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734953.1141087; Mon, 03 Jun 2024 15:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SF-0002zf-F5; Mon, 03 Jun 2024 15:18:35 +0000
Received: by outflank-mailman (input) for mailman id 734953;
 Mon, 03 Jun 2024 15:18:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3DZA=NF=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sE9SE-0002zU-7Z
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 15:18:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 88f5dadc-21bc-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 17:18:31 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-269-YU-I1tivMFSQruzegKHrDA-1; Mon, 03 Jun 2024 11:18:28 -0400
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.rdu2.redhat.com
 [10.11.54.2])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 4AB06101A525;
 Mon,  3 Jun 2024 15:18:28 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.239])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id DC5C140AD3DE;
 Mon,  3 Jun 2024 15:18:26 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id F27C61800985; Mon,  3 Jun 2024 17:18:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88f5dadc-21bc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717427910;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=2Ot+g6unzuYazgp1aGNy1K4xrxxsPWswKLjbLi14qio=;
	b=eBXPZp4WgbMAWCj+MS9aGG4ZI/n0O22+4x/jKd4u1p+Vj/KcuTGhMeVD2pBPAdUCULIgvK
	hJreZ7huPSoM3dxp7k+hp3GziSgWysiK6LBDrkrSedBJKAiQ8V1N+bt+/SnpPuP8wKIWcW
	6/18aLkTlTN8Y+H0Kq4p3AGgXyCcj5w=
X-MC-Unique: YU-I1tivMFSQruzegKHrDA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Anthony PERARD <anthony@xenproject.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 0/3] stdvga: fix screen blanking
Date: Mon,  3 Jun 2024 17:18:22 +0200
Message-ID: <20240603151825.188353-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.2



Gerd Hoffmann (3):
  stdvga: fix screen blanking
  ui+display: rename is_buffer_shared() -> surface_is_borrowed()
  ui+display: rename is_placeholder -> surface_is_placeholder

 include/ui/surface.h    |  4 ++--
 hw/display/qxl-render.c |  2 +-
 hw/display/vga.c        | 14 ++++++++++----
 hw/display/xenfb.c      |  4 ++--
 ui/console.c            |  2 +-
 ui/sdl2-2d.c            |  2 +-
 ui/sdl2-gl.c            |  2 +-
 7 files changed, 18 insertions(+), 12 deletions(-)

-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 15:18:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 15:18:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734954.1141096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SG-0003E5-Q2; Mon, 03 Jun 2024 15:18:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734954.1141096; Mon, 03 Jun 2024 15:18:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SG-0003Dy-MV; Mon, 03 Jun 2024 15:18:36 +0000
Received: by outflank-mailman (input) for mailman id 734954;
 Mon, 03 Jun 2024 15:18:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3DZA=NF=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sE9SE-0002zU-U7
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 15:18:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 89ebe9ba-21bc-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 17:18:33 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-126-Iy7WdosbNt6_HMy2-StTTQ-1; Mon, 03 Jun 2024 11:18:28 -0400
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.rdu2.redhat.com
 [10.11.54.3])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id B2E04185AD2C;
 Mon,  3 Jun 2024 15:18:27 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.239])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id DC73F1050176;
 Mon,  3 Jun 2024 15:18:26 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 0C05718009AD; Mon,  3 Jun 2024 17:18:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89ebe9ba-21bc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717427912;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vkSiq4a30KX/BtKlzALhqQstMUl79qOxtGFoh3xybvE=;
	b=ZaHkqAfY0prOxd9IsvYxn3cjrCzTyehr8LM8V/cUr8u9y8nC/gDT5zr1ezo0VbF63ReQG+
	CBYhNOd6hphyBpYhd13PoMbmI+ht2ARPhFcgQ3GGJWR00thyODBbGPvDYo+LDKiipCYas2
	ADA1VUyxfjUmHRSDEFClWjtIl46k7HI=
X-MC-Unique: Iy7WdosbNt6_HMy2-StTTQ-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Anthony PERARD <anthony@xenproject.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	qemu-stable@nongnu.org
Subject: [PATCH v2 1/3] stdvga: fix screen blanking
Date: Mon,  3 Jun 2024 17:18:23 +0200
Message-ID: <20240603151825.188353-2-kraxel@redhat.com>
In-Reply-To: <20240603151825.188353-1-kraxel@redhat.com>
References: <20240603151825.188353-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.3

In case the display surface uses a shared buffer (i.e. it uses vga vram
directly instead of a shadow), un-share the buffer before clearing it.

This avoids vga memory corruption, which in turn fixes unblanking not
working properly with X11.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2067
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/display/vga.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/display/vga.c b/hw/display/vga.c
index 30facc6c8e33..474b6b14c327 100644
--- a/hw/display/vga.c
+++ b/hw/display/vga.c
@@ -1762,6 +1762,12 @@ static void vga_draw_blank(VGACommonState *s, int full_update)
     if (s->last_scr_width <= 0 || s->last_scr_height <= 0)
         return;
 
+    if (is_buffer_shared(surface)) {
+        /* unshare buffer, otherwise the blanking corrupts vga vram */
+        surface = qemu_create_displaysurface(s->last_scr_width, s->last_scr_height);
+        dpy_gfx_replace_surface(s->con, surface);
+    }
+
     w = s->last_scr_width * surface_bytes_per_pixel(surface);
     d = surface_data(surface);
     for(i = 0; i < s->last_scr_height; i++) {
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 15:18:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 15:18:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734955.1141102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SH-0003Fp-2r; Mon, 03 Jun 2024 15:18:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734955.1141102; Mon, 03 Jun 2024 15:18:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sE9SG-0003FU-Te; Mon, 03 Jun 2024 15:18:36 +0000
Received: by outflank-mailman (input) for mailman id 734955;
 Mon, 03 Jun 2024 15:18:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3DZA=NF=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sE9SF-00030f-Uq
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 15:18:35 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ad6c8c0-21bc-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 17:18:35 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-391-On8TU32lNrWFK8D0XWJUxg-1; Mon, 03 Jun 2024 11:18:30 -0400
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.rdu2.redhat.com
 [10.11.54.8])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id EDB9D85A58C;
 Mon,  3 Jun 2024 15:18:29 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.239])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9BC29C15C05;
 Mon,  3 Jun 2024 15:18:28 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 2456C1801F28; Mon,  3 Jun 2024 17:18:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ad6c8c0-21bc-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717427913;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=i8kzX9DrYOhcqIvJMNJmWCuMKiFj8etOtZj3EnRy0I8=;
	b=KXPNlJd0dPNGAyLTTPnQJdbWxIZg0IVS5BACVvB7Et5xtwkDB2JLk6+aAlgNVJ0AKgps2W
	6vI0SrYd6fqGqggHovJbyZIRWiifwasDBsbJb7ofLw5Q0S+cf17vKW1qdIJr59HHui23jd
	DfoOgNvzkTdervHZe+fij4wUFD2QLvw=
X-MC-Unique: On8TU32lNrWFK8D0XWJUxg-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Anthony PERARD <anthony@xenproject.org>,
	Gerd Hoffmann <kraxel@redhat.com>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@gmail.com>
Subject: [PATCH v2 2/3] ui+display: rename is_buffer_shared() -> surface_is_borrowed()
Date: Mon,  3 Jun 2024 17:18:24 +0200
Message-ID: <20240603151825.188353-3-kraxel@redhat.com>
In-Reply-To: <20240603151825.188353-1-kraxel@redhat.com>
References: <20240603151825.188353-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.8

No functional change.

Suggested-by: Marc-André Lureau <marcandre.lureau@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/surface.h    |  2 +-
 hw/display/qxl-render.c |  2 +-
 hw/display/vga.c        | 10 +++++-----
 hw/display/xenfb.c      |  4 ++--
 ui/console.c            |  2 +-
 5 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/include/ui/surface.h b/include/ui/surface.h
index 4244e0ca4a32..96f9b1611c1c 100644
--- a/include/ui/surface.h
+++ b/include/ui/surface.h
@@ -45,7 +45,7 @@ void qemu_displaysurface_win32_set_handle(DisplaySurface *surface,
 DisplaySurface *qemu_create_displaysurface(int width, int height);
 void qemu_free_displaysurface(DisplaySurface *surface);
 
-static inline int is_buffer_shared(DisplaySurface *surface)
+static inline int surface_is_borrowed(DisplaySurface *surface)
 {
     return !(surface->flags & QEMU_ALLOCATED_FLAG);
 }
diff --git a/hw/display/qxl-render.c b/hw/display/qxl-render.c
index ec99ec887a6e..bfdf2c59bdbe 100644
--- a/hw/display/qxl-render.c
+++ b/hw/display/qxl-render.c
@@ -31,7 +31,7 @@ static void qxl_blit(PCIQXLDevice *qxl, QXLRect *rect)
     uint8_t *src;
     int len, i;
 
-    if (is_buffer_shared(surface)) {
+    if (surface_is_borrowed(surface)) {
         return;
     }
     trace_qxl_render_blit(qxl->guest_primary.qxl_stride,
diff --git a/hw/display/vga.c b/hw/display/vga.c
index 474b6b14c327..bd800a683e45 100644
--- a/hw/display/vga.c
+++ b/hw/display/vga.c
@@ -1620,7 +1620,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
         height != s->last_height ||
         s->last_depth != depth ||
         s->last_byteswap != byteswap ||
-        share_surface != is_buffer_shared(surface)) {
+        share_surface != surface_is_borrowed(surface)) {
         /* display parameters changed -> need new display surface */
         s->last_scr_width = disp_width;
         s->last_scr_height = height;
@@ -1635,7 +1635,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
         full_update = 1;
     }
     if (surface_data(surface) != s->vram_ptr + (s->params.start_addr * 4)
-        && is_buffer_shared(surface)) {
+        && surface_is_borrowed(surface)) {
         /* base address changed (page flip) -> shared display surfaces
          * must be updated with the new base address */
         full_update = 1;
@@ -1655,7 +1655,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
 
     vga_draw_line = vga_draw_line_table[v];
 
-    if (!is_buffer_shared(surface) && s->cursor_invalidate) {
+    if (!surface_is_borrowed(surface) && s->cursor_invalidate) {
         s->cursor_invalidate(s);
     }
 
@@ -1707,7 +1707,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
         if (update) {
             if (y_start < 0)
                 y_start = y;
-            if (!(is_buffer_shared(surface))) {
+            if (!(surface_is_borrowed(surface))) {
                 uint8_t *p;
                 p = vga_draw_line(s, d, addr, width, hpel);
                 if (p) {
@@ -1762,7 +1762,7 @@ static void vga_draw_blank(VGACommonState *s, int full_update)
     if (s->last_scr_width <= 0 || s->last_scr_height <= 0)
         return;
 
-    if (is_buffer_shared(surface)) {
+    if (surface_is_borrowed(surface)) {
         /* unshare buffer, otherwise the blanking corrupts vga vram */
         surface = qemu_create_displaysurface(s->last_scr_width, s->last_scr_height);
         dpy_gfx_replace_surface(s->con, surface);
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 27536bfce0cb..a9bc5a08cdc2 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -637,7 +637,7 @@ static void xenfb_guest_copy(struct XenFB *xenfb, int x, int y, int w, int h)
     int linesize = surface_stride(surface);
     uint8_t *data = surface_data(surface);
 
-    if (!is_buffer_shared(surface)) {
+    if (!surface_is_borrowed(surface)) {
         switch (xenfb->depth) {
         case 8:
             if (bpp == 16) {
@@ -755,7 +755,7 @@ static void xenfb_update(void *opaque)
         xen_pv_printf(&xenfb->c.xendev, 1,
                       "update: resizing: %dx%d @ %d bpp%s\n",
                       xenfb->width, xenfb->height, xenfb->depth,
-                      is_buffer_shared(surface) ? " (shared)" : "");
+                      surface_is_borrowed(surface) ? " (borrowed)" : "");
         xenfb->up_fullscreen = 1;
     }
 
diff --git a/ui/console.c b/ui/console.c
index 1b2cd0c7365d..d7967ddb0d1a 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -1510,7 +1510,7 @@ void qemu_console_resize(QemuConsole *s, int width, int height)
     assert(QEMU_IS_GRAPHIC_CONSOLE(s));
 
     if ((s->scanout.kind != SCANOUT_SURFACE ||
-         (surface && !is_buffer_shared(surface) && !is_placeholder(surface))) &&
+         (surface && !surface_is_borrowed(surface) && !is_placeholder(surface))) &&
         qemu_console_get_width(s, -1) == width &&
         qemu_console_get_height(s, -1) == height) {
         return;
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 15:57:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 15:57:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734978.1141127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEA46-0002PR-9Z; Mon, 03 Jun 2024 15:57:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734978.1141127; Mon, 03 Jun 2024 15:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEA46-0002PK-6J; Mon, 03 Jun 2024 15:57:42 +0000
Received: by outflank-mailman (input) for mailman id 734978;
 Mon, 03 Jun 2024 15:57:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YA0q=NF=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1sEA44-0002PE-Nl
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 15:57:40 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 004e8db4-21c2-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 17:57:38 +0200 (CEST)
Received: by mail-wr1-x432.google.com with SMTP id
 ffacd0b85a97d-35dbfe31905so58347f8f.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 08:57:38 -0700 (PDT)
Received: from [192.168.69.100] ([176.176.177.241])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd0667366sm8975604f8f.111.2024.06.03.08.57.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 08:57:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 004e8db4-21c2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1717430258; x=1718035058; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=pfqXv9O156k/qVlRPORBHveRYZBimdALutURsiXGVic=;
        b=DMhi96OyNNRWjBA/KtxI5GauPu9nIsfAtZq5xAQq7gblKWfEP5k7xOjbdHYl/8hYeR
         gkZj9ez+j3YkqOCU5c1FM/fLwJujzB5IUBBPwE1bfsS3SDr3KFU+GrZ4qHuruRKFVrDR
         38X5KGWarITmqGd+pUBpFT3n6fbBTVFMTdVlnYyLaSAVksJ0Spa8i3sMjRO2nsHr1vnL
         q678NdxpWFH55JJnXglQQ2FCvLqydR7KE5CtuH3HWzA5VpNWTQEBFzgJ+JOEq+qF51Ho
         /uU+6i4bkjaxC8ukGxELPUn5muOHxurpSXbcJmQzhtNevmMpbt+E51vRXSXmk1puqiF/
         1HlQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717430258; x=1718035058;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=pfqXv9O156k/qVlRPORBHveRYZBimdALutURsiXGVic=;
        b=M18t7kj0ZbEEwVpsyfqY72kvkfuLWYp0tu+J6jTVXDfU1EHHgFT50uMaywcRiJxUHe
         sTu4K585gb2aA/EKLgMe1Ql+4OtgYSQze3RnWuxmODEK5JsoE1hAB3Y4DCSii2AIkowj
         WxIT05O5ucRVTqN9QNQ9usK663kGAOBNdG/jDI/kBfMY1fD11l/KAV4r2h/j4Pq/XxAK
         9WjaBmT7od+ne1tk1yjmad6mNlj1rKzOYknl91ZykWJfqqDV0Dk306e/nxWR9OOL+cx7
         P7VNpmEqYreobPnIGQJOuHhI6m0oO3jiVGCJo015vyxzQG/SxM9JO5KEthIF2Pz6FKxn
         z/jQ==
X-Forwarded-Encrypted: i=1; AJvYcCXVumatO+kSSG9MiasrDebL53sPdfsNh59Cy/gr2So+4uyzyYBS3pFWOTYw80tBUFIh0V3blzVOa8OTM2t2YRlmD3ilah0vEO9U/oPpUfE=
X-Gm-Message-State: AOJu0YzfRZ4x3VfXbImia63BcYWUc3rghlpCEnLkOW9XV5uPc9IQYcVt
	9DwhX5OLbS7BaDxQsJCIVYRDWNuXAADPdnV+daT04WuXhBkp/AGbwsLiWMI9GS8=
X-Google-Smtp-Source: AGHT+IG0AwImAZQCWL5EcwF+xN/h0jceBRg8jiE8riT+hg2DO7rDBxMnf5lD3Tad7RQSC7AjUj7+Fw==
X-Received: by 2002:a5d:5272:0:b0:354:de21:2145 with SMTP id ffacd0b85a97d-35e0f271297mr7022282f8f.22.1717430258024;
        Mon, 03 Jun 2024 08:57:38 -0700 (PDT)
Message-ID: <abf1c6da-71fe-483d-8edb-0ebfef14dbcc@linaro.org>
Date: Mon, 3 Jun 2024 17:57:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v8 1/8] xen: mapcache: Make MCACHE_BUCKET_SHIFT runtime
 configurable
To: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>, qemu-devel@nongnu.org
Cc: sstabellini@kernel.org, jgross@suse.com,
 "Edgar E. Iglesias" <edgar.iglesias@amd.com>,
 Anthony PERARD <anthony@xenproject.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <20240529140739.1387692-1-edgar.iglesias@gmail.com>
 <20240529140739.1387692-2-edgar.iglesias@gmail.com>
Content-Language: en-US
From: =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>
In-Reply-To: <20240529140739.1387692-2-edgar.iglesias@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 29/5/24 16:07, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Make MCACHE_BUCKET_SHIFT runtime configurable per cache instance.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
>   hw/xen/xen-mapcache.c | 54 ++++++++++++++++++++++++++-----------------
>   1 file changed, 33 insertions(+), 21 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Mon Jun 03 18:39:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 18:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734990.1141136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sECaj-000536-8c; Mon, 03 Jun 2024 18:39:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734990.1141136; Mon, 03 Jun 2024 18:39:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sECaj-00052z-5P; Mon, 03 Jun 2024 18:39:33 +0000
Received: by outflank-mailman (input) for mailman id 734990;
 Mon, 03 Jun 2024 18:39:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8Wgl=NF=digikod.net=mic@srs-se1.protection.inumbo.net>)
 id 1sECai-00052t-1J
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 18:39:32 +0000
Received: from smtp-bc0e.mail.infomaniak.ch (smtp-bc0e.mail.infomaniak.ch
 [45.157.188.14]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ca8f14b-21d8-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 20:39:30 +0200 (CEST)
Received: from smtp-3-0000.mail.infomaniak.ch (smtp-3-0000.mail.infomaniak.ch
 [10.4.36.107])
 by smtp-3-3000.mail.infomaniak.ch (Postfix) with ESMTPS id 4VtMvh6qFGzbqD;
 Mon,  3 Jun 2024 20:39:28 +0200 (CEST)
Received: from unknown by smtp-3-0000.mail.infomaniak.ch (Postfix) with ESMTPA
 id 4VtMvf5T4yz1Y3; Mon,  3 Jun 2024 20:39:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ca8f14b-21d8-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=digikod.net;
	s=20191114; t=1717439968;
	bh=gtIaPUDSXyeON8s2CCg1DgIhDlLJpeFMkqvOS8MasVM=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=xgOLEnz6SJcIhFjVsyAJkws+cB0hDF1vBl5UhdAnMFZ3IwIv9+mfJG3z/z022GaLC
	 hucCNjddaf0O63psa6tLWOQ3L5xKcN9U7KIay9a2JpyiZ7tPESeMWs+1ppcAXahNNL
	 fGKNj5z+dog4VQDnEUOYMq6wUCsWbim9yiLiW2VQ=
Date: Mon, 3 Jun 2024 20:39:24 +0200
From: =?utf-8?Q?Micka=C3=ABl_Sala=C3=BCn?= <mic@digikod.net>
To: Sean Christopherson <seanjc@google.com>
Cc: Nicolas Saenz Julienne <nsaenz@amazon.com>, 
	Borislav Petkov <bp@alien8.de>, Dave Hansen <dave.hansen@linux.intel.com>, 
	"H . Peter Anvin" <hpa@zytor.com>, Ingo Molnar <mingo@redhat.com>, 
	Kees Cook <keescook@chromium.org>, Paolo Bonzini <pbonzini@redhat.com>, 
	Thomas Gleixner <tglx@linutronix.de>, Vitaly Kuznetsov <vkuznets@redhat.com>, 
	Wanpeng Li <wanpengli@tencent.com>, Rick P Edgecombe <rick.p.edgecombe@intel.com>, 
	Alexander Graf <graf@amazon.com>, Angelina Vu <angelinavu@linux.microsoft.com>, 
	Anna Trikalinou <atrikalinou@microsoft.com>, Chao Peng <chao.p.peng@linux.intel.com>, 
	Forrest Yuan Yu <yuanyu@google.com>, James Gowans <jgowans@amazon.com>, 
	James Morris <jamorris@linux.microsoft.com>, John Andersen <john.s.andersen@intel.com>, 
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, Marian Rotariu <marian.c.rotariu@gmail.com>, 
	Mihai =?utf-8?B?RG9uyJt1?= <mdontu@bitdefender.com>, =?utf-8?B?TmljdciZb3IgQ8OuyJt1?= <nicu.citu@icloud.com>, 
	Thara Gopinath <tgopinath@microsoft.com>, Trilok Soni <quic_tsoni@quicinc.com>, 
	Wei Liu <wei.liu@kernel.org>, Will Deacon <will@kernel.org>, 
	Yu Zhang <yu.c.zhang@linux.intel.com>, =?utf-8?Q?=C8=98tefan_=C8=98icleru?= <ssicleru@bitdefender.com>, 
	dev@lists.cloudhypervisor.org, kvm@vger.kernel.org, linux-hardening@vger.kernel.org, 
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-security-module@vger.kernel.org, qemu-devel@nongnu.org, virtualization@lists.linux-foundation.org, 
	x86@kernel.org, xen-devel@lists.xenproject.org
Subject: Re: [RFC PATCH v3 3/5] KVM: x86: Add notifications for Heki policy
 configuration and violation
Message-ID: <20240603.ahNg8waif6Fu@digikod.net>
References: <20240503131910.307630-1-mic@digikod.net>
 <20240503131910.307630-4-mic@digikod.net>
 <ZjTuqV-AxQQRWwUW@google.com>
 <20240506.ohwe7eewu0oB@digikod.net>
 <ZjmFPZd5q_hEBdBz@google.com>
 <20240507.ieghomae0UoC@digikod.net>
 <ZjpTxt-Bxia3bRwB@google.com>
 <D15VQ97L5M8J.1TDNQE6KLW6JO@amazon.com>
 <20240514.mai3Ahdoo2qu@digikod.net>
 <ZkUb2IWj4Z9FziCb@google.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZkUb2IWj4Z9FziCb@google.com>
X-Infomaniak-Routing: alpha

On Wed, May 15, 2024 at 01:32:24PM -0700, Sean Christopherson wrote:
> On Tue, May 14, 2024, Mickaël Salaün wrote:
> > On Fri, May 10, 2024 at 10:07:00AM +0000, Nicolas Saenz Julienne wrote:
> > > Development happens
> > > https://github.com/vianpl/{linux,qemu,kvm-unit-tests} and the vsm-next
> > > branch, but I'd advice against looking into it until we add some order
> > > to the rework. Regardless, feel free to get in touch.
> > 
> > Thanks for the update.
> > 
> > Could we schedule a PUCK meeting to synchronize and help each other?
> > What about June 12?
> 
> June 12th works on my end.

Can you please send an invite?

 Mickaël


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 18:52:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 18:52:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.734997.1141147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sECn5-0007gK-6D; Mon, 03 Jun 2024 18:52:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 734997.1141147; Mon, 03 Jun 2024 18:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sECn5-0007gD-2e; Mon, 03 Jun 2024 18:52:19 +0000
Received: by outflank-mailman (input) for mailman id 734997;
 Mon, 03 Jun 2024 18:52:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=29W0=NF=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sECn4-0007g7-95
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 18:52:18 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65cf6e0a-21da-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 20:52:16 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id
 ffacd0b85a97d-35dcff36522so199547f8f.1
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 11:52:16 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35e56669a23sm4708045f8f.83.2024.06.03.11.52.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 11:52:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65cf6e0a-21da-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717440736; x=1718045536; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=SKxUQ0E03ZXKt081hO6/G9Tb33MrR59K+v3qCZgkfQg=;
        b=DFTXzedy5t+c57B+6Hw7+HB8F00iIFwCrXH9H4rMCoaGvGFGd0r5ujsegpEZALnbWL
         XXvCNdnM5YxnOvsOged++QKyPr0kYHh4UsFnP2BUDnn326yjtc5lrjSddPZ86LdqnyiE
         +j5Zcgxbkozqt/ScIWYP7hp8AYd5zwlBGpux6weKxOir61YcfJShYNA9Z0n3DUWGRNK2
         J2+ZcnwEZ7Yg7I9myxVc38scEZRVqbUFiPHNXdCljKyWfRnqR4d1hq+G4HfidQsNX3Hw
         5I2TqXxqs4W8I1RrQ1CoaBNiYDnWQb200akadE3UsUOnHpWuxGq0f7atWvVh2EkSHgNq
         CmHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717440736; x=1718045536;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=SKxUQ0E03ZXKt081hO6/G9Tb33MrR59K+v3qCZgkfQg=;
        b=LAYIheGK1Pqsc38af4i0Adm9scAldiSO0vXRUwI29bQXTc5ax/+gK8dk7QiTQgHAw0
         ubRuDVpr8fLi6RrjV7GKAKQajsD0pVn/VVTzLWMks0o9F6MvF+P9zKVyWBZz5BHe3r6d
         DpRjAnYdZvodoLi+VN0xr/+5fDtixwkvMwObzZpQmIeryDkjNpyJ7uNSRO/1tEIZHgF8
         Yaw+yzRo7vyjjtQkXIL/s6y9R6lX+wEWs1kHP9WhTgCP7tSidJFafqqg+1C5/DJV5k2Z
         ya4ytBd6lLT8AxyBtmmo009BLp8itdDpMvpUH8w7WT/qrwMoK0uxBGoFYqQ9Fd7rhiep
         O5uA==
X-Forwarded-Encrypted: i=1; AJvYcCXFjQ7n2ztUOkhE+TFaetnsdy7yXyKF/b4ipYfWB+5vF1xLbLbRIwu0OMyHALhxrz67NHfGgCOWhZsDBtSQuIaC3HYtPh9Cc+tmeWSaDys=
X-Gm-Message-State: AOJu0YxnuVW2M/iaYSljYvBs9Dn19d57xByOLczLErdqCHMM1AEm1nsj
	z6aL9FAL5r+1JkMflmGE77wh2uraNPt6zttTHhzPicZa1sXR3xfekue2HHbTgQ==
X-Google-Smtp-Source: AGHT+IEJZW9KdfEAUoyC9as1ECWnb173KMCJQarn7R6gGTk4tQSBvZl9zKEArGXcRGcXKs+7p8FzZg==
X-Received: by 2002:a05:6000:52:b0:354:e746:7515 with SMTP id ffacd0b85a97d-35e0f286676mr7506538f8f.34.1717440736173;
        Mon, 03 Jun 2024 11:52:16 -0700 (PDT)
Message-ID: <cb14826d-3c5c-45b8-aaea-30cfa85a450f@suse.com>
Date: Mon, 3 Jun 2024 20:52:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 4/5] automation/eclair_analysis: address remaining
 violations of MISRA C Rule 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
 <90c40d6a-d648-46bb-9cb0-df11ac165bd7@suse.com>
 <085aabe9953d53e634d5cf75fecdb8b7@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <085aabe9953d53e634d5cf75fecdb8b7@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 09:13, Nicola Vetrini wrote:
> On 2024-06-03 07:58, Jan Beulich wrote:
>> On 01.06.2024 12:16, Nicola Vetrini wrote:
>>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>> @@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
>>>  -config=MC3R1.R20.12,macros+={deliberate, 
>>> "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
>>>  -doc_end
>>>
>>> +-doc_begin="The macro DEFINE is defined and used in excluded files 
>>> asm-offsets.c.
>>> +This may still cause violations if entities outside these files are 
>>> referred to
>>> +in the expansion."
>>> +-config=MC3R1.R20.12,macros+={deliberate, 
>>> "name(DEFINE)&&loc(file(asm_offsets))"}
>>> +-doc_end
>>
>> Can you give an example of such a reference? Nothing _in_ asm-offsets.c
>> should be referenced, I'd think. Only stuff in asm-offsets.h as 
>> _generated
>> from_ asm-offsets.c will, of course, be.
>>
> 
> Perhaps I could have expressed that more clearly. What I meant is that 
> some arguments to DEFINE are not part of asm-offsets.c, so they end up 
> in the violation report even though they are not actually relevant, 
> because the macro DEFINE itself is what we want to exclude.
> 
> See for instance at the link below VCPU_TRAP_{NMI,MCE}, which are 
> defined in asm/domain.h and used as arguments to DEFINE inside 
> asm-offsets.c.
> 
> https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/676/PROJECT.ecd;/by_service/MC3R1.R20.12.html

I'm afraid I still don't understand: the file is supposed to be
excluded from scanning, so why does it even show up in that report?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 19:12:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 19:12:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735003.1141156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sED6t-0002BN-OM; Mon, 03 Jun 2024 19:12:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735003.1141156; Mon, 03 Jun 2024 19:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sED6t-0002BG-LP; Mon, 03 Jun 2024 19:12:47 +0000
Received: by outflank-mailman (input) for mailman id 735003;
 Mon, 03 Jun 2024 19:12:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3ZA=NF=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sED6s-0002BA-Qn
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 19:12:46 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41a3641c-21dd-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 21:12:45 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 17E6E4EE0738;
 Mon,  3 Jun 2024 21:12:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41a3641c-21dd-11ef-90a1-e314d9c70b13
MIME-Version: 1.0
Date: Mon, 03 Jun 2024 21:12:44 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 4/5] automation/eclair_analysis: address remaining
 violations of MISRA C Rule 20.12
In-Reply-To: <cb14826d-3c5c-45b8-aaea-30cfa85a450f@suse.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
 <90c40d6a-d648-46bb-9cb0-df11ac165bd7@suse.com>
 <085aabe9953d53e634d5cf75fecdb8b7@bugseng.com>
 <cb14826d-3c5c-45b8-aaea-30cfa85a450f@suse.com>
Message-ID: <017a3e69ef784eb919a96a06b0fcf0dc@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-03 20:52, Jan Beulich wrote:
> On 03.06.2024 09:13, Nicola Vetrini wrote:
>> On 2024-06-03 07:58, Jan Beulich wrote:
>>> On 01.06.2024 12:16, Nicola Vetrini wrote:
>>>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>> @@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
>>>>  -config=MC3R1.R20.12,macros+={deliberate,
>>>> "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
>>>>  -doc_end
>>>> 
>>>> +-doc_begin="The macro DEFINE is defined and used in excluded files
>>>> asm-offsets.c.
>>>> +This may still cause violations if entities outside these files are
>>>> referred to
>>>> +in the expansion."
>>>> +-config=MC3R1.R20.12,macros+={deliberate,
>>>> "name(DEFINE)&&loc(file(asm_offsets))"}
>>>> +-doc_end
>>> 
>>> Can you give an example of such a reference? Nothing _in_ 
>>> asm-offsets.c
>>> should be referenced, I'd think. Only stuff in asm-offsets.h as
>>> _generated
>>> from_ asm-offsets.c will, of course, be.
>>> 
>> 
>> Perhaps I could have expressed that more clearly. What I meant is that
>> there are some arguments to DEFINE that are not part of asm-offsets.c,
>> therefore they end up in the violation report but are not actually
>> relevant, because the macro DEFINE is what we want to exclude.
>> 
>> See for instance at the link below VCPU_TRAP_{NMI,MCE}, which are
>> defined in asm/domain.h and used as arguments to DEFINE inside
>> asm-offsets.c.
>> 
>> https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/676/PROJECT.ecd;/by_service/MC3R1.R20.12.html
> 
> I'm afraid I still don't understand: since the file is supposed to be
> excluded from scanning, why does it even show up in that report?
> 
> Jan

The report is made up of several source code locations. Three of them
are within asm-offsets.c, which is excluded from compliance but still
analyzed; one, however, references a macro defined in another file
(e.g., VCPU_TRAP_NMI from asm/domain.h). So in this case excluding
asm-offsets.c is not enough to suppress the report.

The configuration is telling the tool that the macro DEFINE from 
asm-offsets is meant to be deviated, which is what is needed in this 
case.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 20:30:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 20:30:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735021.1141166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEEK6-00047k-HW; Mon, 03 Jun 2024 20:30:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735021.1141166; Mon, 03 Jun 2024 20:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEEK6-00047d-F3; Mon, 03 Jun 2024 20:30:30 +0000
Received: by outflank-mailman (input) for mailman id 735021;
 Mon, 03 Jun 2024 20:30:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEEK5-00047T-68; Mon, 03 Jun 2024 20:30:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEEK4-0000RI-Qk; Mon, 03 Jun 2024 20:30:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEEK4-0007ja-J4; Mon, 03 Jun 2024 20:30:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEEK4-0001NX-IZ; Mon, 03 Jun 2024 20:30:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y1YJZ0D0mBSqnmVOwSZDuzR7NnG6IPIqSPeIkfbI/n0=; b=Z9Im/QWjJo4FadunvGDNvhPLx7
	bMq6Z13/KcJeLeWShEIUuZ6e3j4mPuVx8LJHus++MHCMKfjf6CrRM7En311r8oCtJoUKF1nwpr4Lq
	yoCky8EbcSbhwJD2GD5OvtafzsKEKIDCi7napaBVLBaf94MPUDlL1BehZKc7MXh6ly0Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186238-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186238: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-raw:debian-di-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c3f38fa61af77b49866b006939479069cd451173
X-Osstest-Versions-That:
    linux=83814698cf48ce3aadc5d88a3f577f04482ff92a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 Jun 2024 20:30:28 +0000

flight 186238 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186238/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-examine      8 reboot           fail in 186235 pass in 186238
 test-armhf-armhf-xl-credit1   8 xen-boot         fail in 186235 pass in 186238
 test-armhf-armhf-xl-multivcpu  8 xen-boot                  fail pass in 186235
 test-armhf-armhf-xl-raw      12 debian-di-install          fail pass in 186235

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186235 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186235 never pass
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 186235 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 186235 never pass
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186235 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186235 never pass
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186231
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186231
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186231
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186231
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186231
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c3f38fa61af77b49866b006939479069cd451173
baseline version:
 linux                83814698cf48ce3aadc5d88a3f577f04482ff92a

Last test of basis   186231  2024-06-02 11:33:31 Z    1 days
Failing since        186232  2024-06-02 18:43:35 Z    1 days    3 attempts
Testing same since   186235  2024-06-03 02:08:22 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Dave Hansen <dave.hansen@linux.intel.com>
  Ingo Molnar <mingo@kernel.org>
  Jason Nader <dev@kayoway.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jörn Heusipp <osmanx@heusipp.de>
  Kees Cook <kees@kernel.org>
  Kees Cook <keescook@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marco Patalano <mpatalan@redhat.com>
  Niklas Cassel <cassel@kernel.org>
  Peter Schneider <pschneider1968@googlemail.com>
  Phil Auld <pauld@redhat.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tim Teichmann <teichmanntim@outlook.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   83814698cf48..c3f38fa61af7  c3f38fa61af77b49866b006939479069cd451173 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 21:19:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 21:19:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735033.1141176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEF5b-0000vB-Ug; Mon, 03 Jun 2024 21:19:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735033.1141176; Mon, 03 Jun 2024 21:19:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEF5b-0000v4-SD; Mon, 03 Jun 2024 21:19:35 +0000
Received: by outflank-mailman (input) for mailman id 735033;
 Mon, 03 Jun 2024 21:19:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=29W0=NF=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sEF5b-0000uy-2V
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 21:19:35 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f69242c2-21ee-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 23:19:29 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-35dce6102f4so291637f8f.3
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 14:19:29 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd066d317sm9754719f8f.113.2024.06.03.14.19.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 14:19:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f69242c2-21ee-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717449569; x=1718054369; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=6/fMCteIGurMwvJFr+zKaGcV1XwSTll/UXfxnP8XqWw=;
        b=CLbSgSvYveyMRFA+dpjM2Kq1GUYq7IEfEZHUiet7j64nLgQzE4JtObhcmckpmPRM0I
         jBfTr5WPERSibKW6zj8RsOe/9kbpI7ztwgcj+f9QejjkHxy6sCQlNkIP7tHxyvPFgWCV
         aeA4bmV1H9QfPRm+MPmvk0t68xYEm+mcySHtvWea0nt3/o4afY0sAud4wIbpHLcBP+KT
         ZSojJWZoa1oYnW5BV3F+Ms8XgVytLIkT7u0SIO4GPzL16yA5u+IB96BFpgRiGKQGeEPn
         Ejx3oHMaWW2h6Yyl0pVMzTW1EBUpcai8fr5CblDGeoxG/84B/je8GqAvCjOkZrJZDav4
         WkXA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717449569; x=1718054369;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=6/fMCteIGurMwvJFr+zKaGcV1XwSTll/UXfxnP8XqWw=;
        b=ch51W5Ja+2XrHrHmZEzJWesrWWq/2qB5Iuxdh/RUJSvAxS8tzMVxrrbde+XwVSdhwQ
         SMHtBKML9qsVmaWmV8E8GLB4+3JMNvjgau7no48Ip5B3IOCavyAQmhHZDy9gMrI80VMs
         xJnhUi78+pVXMCIpCElRqRhwpEXvfL5q4koV0qw+AkZbylp8efnHEEqVmu51DH3pbWL8
         RJT7MXP1/zIg2qX+rVDeSjKC3b0DUDP86aF2NOjJeHuiywWBTCnaYytRl/A+iWi3mePq
         RD9sOSfRD5TRktyY4eKwy3cXv08g0JO1ss+rdJhXlCj+x6Fh9uj2JQ/UMvM8sPfgiAWi
         qr6Q==
X-Forwarded-Encrypted: i=1; AJvYcCXIls/R6LAzBiGEnpmLFmsFyzo4CENHLYd7MJMX+f/XtHPrar7kVNn0HdzzTNgY9IIw0u7H7dqRnfskwhjzxh7gLdsUj/BoOpSjObxSEUs=
X-Gm-Message-State: AOJu0YwgmbnBP1t3ctVod4OrkjZw2e6Oub38izevCq+yjRHMdaEkiFbw
	7pZrI3RICo1q+ZoL7eZfUriOklbkYFhMDSUWQE/6Vehou5YZWgyKi0h449JlWg==
X-Google-Smtp-Source: AGHT+IEDqgWFVAEMzctxEZUAg27CKh0pRkwNNNqZwsIU0wE798z7Rt/yvl/9SoZB7+b3jtG+U2qXGA==
X-Received: by 2002:a5d:53c4:0:b0:355:75f:2876 with SMTP id ffacd0b85a97d-35e0f25a3c2mr6685235f8f.5.1717449569006;
        Mon, 03 Jun 2024 14:19:29 -0700 (PDT)
Message-ID: <c5951643-5172-4aa1-9833-1a7a0eebb540@suse.com>
Date: Mon, 3 Jun 2024 23:19:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: vcpumask_to_pcpumask() case study
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <3bb4e3fa-376b-4641-824d-61864b4e1e8e@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3bb4e3fa-376b-4641-824d-61864b4e1e8e@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 01.06.2024 20:50, Andrew Cooper wrote:
> One of the followon items I had from the bitops clean-up is this:
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 648d6dd475ba..9c3a017606ed 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -3425,7 +3425,7 @@ static int vcpumask_to_pcpumask(
>              unsigned int cpu;
>  
>              vcpu_id = ffsl(vmask) - 1;
> -            vmask &= ~(1UL << vcpu_id);
> +            vmask &= vmask - 1;
>              vcpu_id += vcpu_bias;
>              if ( (vcpu_id >= d->max_vcpus) )
>                  return 0;
> 
> which yields the following improvement:
> 
>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-34 (-34)
>   Function                                     old     new   delta
>   vcpumask_to_pcpumask                         519     485     -34

Nice. At the risk of getting flamed again for raising dumb questions:
Considering that elsewhere "trickery" like the &= mask - 1 here was
deemed not nice to have (at least wanting to be hidden behind a
suitably named macro; see e.g. ISOLATE_LSB()), wouldn't __clear_bit()
be suitable here too, and less at risk of being considered "trickery"?
But yes, that would eliminate the benefit of making the bit clearing
independent of the ffsl() result. And personally I'm fine anyway with
the form as suggested.

> While I (the programmer) can reason the two expressions are equivalent,
> the compiler can't,

Why is it you think it can't? There's no further knowledge that you
as a human need to rely on for this, afaics. If ffsl() uses the
built-in (as it now does), the compiler has full insight into what's
going on. It's just that compiler engineers may not deem it worth the
effort to carry out such a special-case optimization.

> so we really are saving a SHL to re-calculate (1 <<
> vcpu_id) and swapping it for a LEA -1(vmask) which happens to be hoisted
> above the ffsl() call.
> 
> However, the majority of the savings came from the fact that the old
> code used to hold 1 in %r15 (across the entire function!) so it could
> calculate (1 << vcpu_id) on each loop iteration.  With %r15 now free for
> other use, we have one fewer thing spilt to the stack.
> 
> Anyway - while it goes to show that while/ffs logic should be extra
> careful to use x &= x - 1 for its loop-condition logic, that's not all.
> 
> The rest of this function is crazy.  We're reading a guest-word at a
> time in order to make a d->max_vcpus sized bitmap (with a reasonable
> amount of opencoded logic to reassemble the fragments back into a vcpu
> number).
> 
> PVH dom0 can reach here, and because it's not pv32, will be deemed to
> have a guest word size of 64.  Also, word-at-time for any HVM guest is
> an insane overhead in terms of the pagewalks behind the copy_from_hvm()
> infrastructure.
> 
> Instead, we should calculate the size of the bitmap at
> DIV_ROUND_UP(d->max_vcpus, 8), copy the bitmap in one whole go, and then
> just have a single for_each_set_bit() looping over it.  Importantly this
> avoids needing to know or care about the guest word size.
> 
> This removes 4 of the 6 hidden lfences; the pair for calculating
> is_native to start with, and then one of the two pairs behind the use of
> is_native to select the type of the access.
> 
> The only complication is this:  Right now, PV max vCPUs is 8k, so we
> could even get away with this being an on-stack bitmap.  However, the
> vast majority of PV guests are small and a 64-bit bitmap would do fine.
> 
> But, given this is just one example, wouldn't it be better if we just
> unconditionally had a marshalling buffer for hypercalls?  There's the
> XLAT area but it doesn't exist for PV64, and we've got various other pieces
> of scratch per-cpu data.
> 
> If not, having a 128/256-bit bitmap on the stack will still be good
> enough in practice, and still ok at amortising the PVH dom0 costs.
> 
> Thoughts?

Well, yes, the last of what you suggest might certainly be worthwhile
as a minimal improvement. The only slight concern with increasing the
granularity is that it'll be increasingly less likely that a 2nd or
further iteration of the loop would actually be exercised in routine
testing.

A marshalling buffer might also make sense to have, as long as we have
more than just one or two places wanting to use it. Since you say "just
one example", likely you have further uses in mind. In most other
places we use input data more directly, though, so right away I can't
come up with possible further use cases myself.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 21:24:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 21:24:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735040.1141187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEFAY-0002Qv-JU; Mon, 03 Jun 2024 21:24:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735040.1141187; Mon, 03 Jun 2024 21:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEFAY-0002Qo-Gv; Mon, 03 Jun 2024 21:24:42 +0000
Received: by outflank-mailman (input) for mailman id 735040;
 Mon, 03 Jun 2024 21:24:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=29W0=NF=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sEFAX-0002Qi-0j
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 21:24:41 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id af1394e8-21ef-11ef-90a1-e314d9c70b13;
 Mon, 03 Jun 2024 23:24:39 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id
 5b1f17b1804b1-42121d28664so42256635e9.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 14:24:39 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4213847aad7sm85430695e9.43.2024.06.03.14.24.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 14:24:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af1394e8-21ef-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717449878; x=1718054678; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=1h0IsyhmP1Kky0DXeowcrBtQ0fOwQqtGQ3MVVjWF9SI=;
        b=Xu+kej17jGU56u5sizA3up31cjrfhdCCdV0bVwak/Ss5A1YB3kwBU4MegJ05UDeVGj
         NptL66K6IRz49fl2SRwJAUH01KKEZpDx2hhGokvafCMK4wj2+k7TemZ4cTL7302YKIOu
         2vLQMCBppIRCVMGpxYtPBcrY/OBPK7FB4afN6oSbQwRV/+D2Whi/dtv9ZODYbtOvs4Pl
         wwFKnIqfkVqKEkOrIXkQI1y3rXC6Ixy9juHIQYzBPF76DEN6BADtQ/jMikRhXdGlEgKk
         +kyDPiHX4AQfn3CFJiVMXGKZkZjcE+OrWHIAm89rCoyLngsVs8HK0aNVIcIAStDq+yuX
         Q0BQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717449878; x=1718054678;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=1h0IsyhmP1Kky0DXeowcrBtQ0fOwQqtGQ3MVVjWF9SI=;
        b=wVZ/d4IG6cbTs4e8b5Cu8ecTvDkeaGe+i+GonyyITwrEd6uSXZE6wEm5I4d9cdmRKS
         qBWDgmX3+c0pE3AiKwKtSzaWg2o33egcrGywivFkRJqLvES9qX+CO6GvD2xA6Luef6X7
         jMnFY8KGiWum64aFV4w+c0bWwXoHxk8CkMk1UYyXjM3RDtUEgivnr9UMNsPF8RjvMZOQ
         pcR2BwnSXwQuLLCexrXMJ2kIzj/NcIPDgRq8+y2c51vPp3E6fFeKWm1DxxVjXeVQFiTl
         JS7SGLM2zYYVvhQZRFgv7z+CczVbaVq76984zB6L1/pwUkc/iyVUOCaoilW2NQeMm4JG
         pwxA==
X-Forwarded-Encrypted: i=1; AJvYcCU3Lia/b6L5nRU1Urj2VsMXVSHDtoG0IFXJfZ4WAP6d8GFZDBQ22l4prYyAVoby0oDpIDffBnY5mPqe3JxeJqBhgN69jB9wV/FDQNI3p/c=
X-Gm-Message-State: AOJu0YzkuZxZw1gEyJV4dNn6HRQA3gvzmfUYKQS84/3TG5mbmT78m2KK
	7oUPnx+W1CPe9rcULUEjSrC8r4E7smkVYk/03lRit95toKeK4Vw2ngJzuNtAdA==
X-Google-Smtp-Source: AGHT+IF6hFRjvDjjsXNRQMaHDJEjO9hwb152W+UWDr8fxLAENLlh/E3eTBikQ0kl5oR1H1xcRzZNrw==
X-Received: by 2002:a05:600c:35d3:b0:421:3979:8c56 with SMTP id 5b1f17b1804b1-42139798e83mr39680165e9.40.1717449878506;
        Mon, 03 Jun 2024 14:24:38 -0700 (PDT)
Message-ID: <a27b1218-7530-43c7-8695-16057b712f89@suse.com>
Date: Mon, 3 Jun 2024 23:24:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 4/5] automation/eclair_analysis: address remaining
 violations of MISRA C Rule 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
 <90c40d6a-d648-46bb-9cb0-df11ac165bd7@suse.com>
 <085aabe9953d53e634d5cf75fecdb8b7@bugseng.com>
 <cb14826d-3c5c-45b8-aaea-30cfa85a450f@suse.com>
 <017a3e69ef784eb919a96a06b0fcf0dc@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <017a3e69ef784eb919a96a06b0fcf0dc@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 21:12, Nicola Vetrini wrote:
> On 2024-06-03 20:52, Jan Beulich wrote:
>> On 03.06.2024 09:13, Nicola Vetrini wrote:
>>> On 2024-06-03 07:58, Jan Beulich wrote:
>>>> On 01.06.2024 12:16, Nicola Vetrini wrote:
>>>>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>>> @@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
>>>>>  -config=MC3R1.R20.12,macros+={deliberate,
>>>>> "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
>>>>>  -doc_end
>>>>>
>>>>> +-doc_begin="The macro DEFINE is defined and used in excluded files
>>>>> asm-offsets.c.
>>>>> +This may still cause violations if entities outside these files are
>>>>> referred to
>>>>> +in the expansion."
>>>>> +-config=MC3R1.R20.12,macros+={deliberate,
>>>>> "name(DEFINE)&&loc(file(asm_offsets))"}
>>>>> +-doc_end
>>>>
>>>> Can you give an example of such a reference? Nothing _in_ 
>>>> asm-offsets.c
>>>> should be referenced, I'd think. Only stuff in asm-offsets.h as
>>>> _generated
>>>> from_ asm-offsets.c will, of course, be.
>>>>
>>>
>>> Perhaps I could have expressed that more clearly. What I meant is that
>>> there are some arguments to DEFINE that are not part of asm-offsets.c,
>>> therefore they end up in the violation report, but are not actually
>>> relevant, because the macro DEFINE is actually what we want to 
>>> exclude.
>>>
>>> See for instance at the link below VCPU_TRAP_{NMI,MCE}, which are
>>> defined in asm/domain.h and used as arguments to DEFINE inside
>>> asm-offsets.c.
>>>
>>> https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/676/PROJECT.ecd;/by_service/MC3R1.R20.12.html
>>
>> I'm afraid I still don't understand: The file being supposed to be
>> excluded from scanning, why does it even show up in that report?
> 
> The report is made up of several source code locations. Three of them 
> are within asm-offsets.c, which is excluded from compliance but still 
> analyzed, but one references a macro definition in another file (e.g., 
> VCPU_TRAP_NMI from asm/domain.h). So in this case the exclusion of 
> asm-offsets.c is not enough for the report not to be shown.

But the (would-be-)violation is in asm-offsets.c. The other locations
pointed at merely provide context. Shouldn't it then be enough to exclude
the file where the violation itself is, for the report not to be shown?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 03 21:37:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 Jun 2024 21:37:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735047.1141196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEFN9-0004Co-NX; Mon, 03 Jun 2024 21:37:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735047.1141196; Mon, 03 Jun 2024 21:37:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEFN9-0004Ch-L6; Mon, 03 Jun 2024 21:37:43 +0000
Received: by outflank-mailman (input) for mailman id 735047;
 Mon, 03 Jun 2024 21:37:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3ZA=NF=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sEFN7-0004Cb-Sd
 for xen-devel@lists.xenproject.org; Mon, 03 Jun 2024 21:37:41 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 80330586-21f1-11ef-b4bb-af5377834399;
 Mon, 03 Jun 2024 23:37:39 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 0ACA64EE0739;
 Mon,  3 Jun 2024 23:37:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80330586-21f1-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Mon, 03 Jun 2024 23:37:39 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 4/5] automation/eclair_analysis: address remaining
 violations of MISRA C Rule 20.12
In-Reply-To: <a27b1218-7530-43c7-8695-16057b712f89@suse.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
 <90c40d6a-d648-46bb-9cb0-df11ac165bd7@suse.com>
 <085aabe9953d53e634d5cf75fecdb8b7@bugseng.com>
 <cb14826d-3c5c-45b8-aaea-30cfa85a450f@suse.com>
 <017a3e69ef784eb919a96a06b0fcf0dc@bugseng.com>
 <a27b1218-7530-43c7-8695-16057b712f89@suse.com>
Message-ID: <9d89ab3fe34597a9e721e6fea102f728@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-03 23:24, Jan Beulich wrote:
> On 03.06.2024 21:12, Nicola Vetrini wrote:
>> On 2024-06-03 20:52, Jan Beulich wrote:
>>> On 03.06.2024 09:13, Nicola Vetrini wrote:
>>>> On 2024-06-03 07:58, Jan Beulich wrote:
>>>>> On 01.06.2024 12:16, Nicola Vetrini wrote:
>>>>>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>>>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>>>> @@ -483,6 +483,12 @@ leads to a violation of the Rule are 
>>>>>> deviated."
>>>>>>  -config=MC3R1.R20.12,macros+={deliberate,
>>>>>> "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
>>>>>>  -doc_end
>>>>>> 
>>>>>> +-doc_begin="The macro DEFINE is defined and used in excluded 
>>>>>> files
>>>>>> asm-offsets.c.
>>>>>> +This may still cause violations if entities outside these files 
>>>>>> are
>>>>>> referred to
>>>>>> +in the expansion."
>>>>>> +-config=MC3R1.R20.12,macros+={deliberate,
>>>>>> "name(DEFINE)&&loc(file(asm_offsets))"}
>>>>>> +-doc_end
>>>>> 
>>>>> Can you give an example of such a reference? Nothing _in_
>>>>> asm-offsets.c
>>>>> should be referenced, I'd think. Only stuff in asm-offsets.h as
>>>>> _generated
>>>>> from_ asm-offsets.c will, of course, be.
>>>>> 
>>>> 
>>>> Perhaps I could have expressed that more clearly. What I meant is 
>>>> that
>>>> there are some arguments to DEFINE that are not part of 
>>>> asm-offsets.c,
>>>> therefore they end up in the violation report, but are not actually
>>>> relevant, because the macro DEFINE is actually what we want to
>>>> exclude.
>>>> 
>>>> See for instance at the link below VCPU_TRAP_{NMI,MCE}, which are
>>>> defined in asm/domain.h and used as arguments to DEFINE inside
>>>> asm-offsets.c.
>>>> 
>>>> https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/676/PROJECT.ecd;/by_service/MC3R1.R20.12.html
>>> 
>>> I'm afraid I still don't understand: The file being supposed to be
>>> excluded from scanning, why does it even show up in that report?
>> 
>> The report is made up of several source code locations. Three of them
>> are within asm-offsets.c, which is excluded from compliance but still
>> analyzed, but one references a macro definition in another file (e.g.,
>> VCPU_TRAP_NMI from asm/domain.h). So in this case the exclusion of
>> asm-offsets.c is not enough for the report not to be shown.
> 
> But the (would-be-)violation is in asm-offsets.c. The other locations
> pointed at merely provide context. Shouldn't it then be enough to exclude
> the file where the violation itself is, for the report not to be shown?
> 

In general that may not be the safest choice, and it can limit the 
granularity of the configuration (not in this specific case, but the 
mechanism is general).
It's a design aspect of the tool that a report is shown unless all of 
its locations are meant to be suppressed.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 00:30:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 00:30:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735062.1141207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEI3p-0006sB-96; Tue, 04 Jun 2024 00:29:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735062.1141207; Tue, 04 Jun 2024 00:29:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEI3p-0006s4-5c; Tue, 04 Jun 2024 00:29:57 +0000
Received: by outflank-mailman (input) for mailman id 735062;
 Tue, 04 Jun 2024 00:29:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QWSe=NG=flex--seanjc.bounces.google.com=3_l9eZgYKCXEhTPcYRVddVaT.RdbmTc-STkTaaXhih.mTcegdYTRi.dgV@srs-se1.protection.inumbo.net>)
 id 1sEI3n-0006ry-2M
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 00:29:55 +0000
Received: from mail-pl1-x649.google.com (mail-pl1-x649.google.com
 [2607:f8b0:4864:20::649])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8ec18a40-2209-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 02:29:52 +0200 (CEST)
Received: by mail-pl1-x649.google.com with SMTP id
 d9443c01a7336-1f6582eca2bso20755445ad.1
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 17:29:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ec18a40-2209-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20230601; t=1717460991; x=1718065791; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id
         :reply-to;
        bh=QxIMQXFvRFWSOBKMLO/Pb11VLx9FgWxOAKpcjqg1q6M=;
        b=dJ2ShQMh3hc4yiE61PsP9IkkrlHw5+NxTDZLMlmh5xRqfR5mzP93zwUzZJh5NYQkRd
         V7/shSpH1IIrjwrCbhOeuNLgeMCRbkM2jeqC3fSF1aF0yE+HYtFJ0jUmLwz6eHDOkt1a
         PAXwlHlYThQiZ+IqQmBU4pOTrkkYL8IMiBcAt3eMQiJGTZ/2KW6pCpGQo+p0rZDBUSWF
         AnCuYGyBFZkgKI7+V6BcBKeZFa/Wrr/KYfiVlZfHbRvZLo7KJXK8YBEWO0EOYmbLtNqA
         jVWod18rVtis0gUF/IPyxTipnGVZpta0icYWnPj5o61u56VWJPp1nFot4D7duDwfGwUO
         JnHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717460991; x=1718065791;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=QxIMQXFvRFWSOBKMLO/Pb11VLx9FgWxOAKpcjqg1q6M=;
        b=UZM8I30WOvoMdjG66mJ8xmb7JhxMvtbBQs9gm7glpqzSH6DUeHr2X8FUaz6Uj3M5eD
         brpkrg2uwIVRO1Kf2GJ9OcZ+ET6v34TBTllc39LD8igbIaDyznoxZ35TgBeMm8rOaU8o
         CeL1NtYLd/a9BVukzK11DC1OO0NIYV7T337kv0HPH1eliPiKuYdTdIjIkZZMj8K1HM3I
         6q6JJbM9PWhScWHBwwhD3lkzX+ryAXbW3dnAU8pXyn9PXbg51BtauM68SmNgzImjO5X2
         Vq9Du3YRaJ8dUGU5KfM+xi6U+eVoP28+IsZw7Fei9nrx/4IEWQqp6PW0XKIOZ+KColbE
         5Pcw==
X-Forwarded-Encrypted: i=1; AJvYcCW7QfN9ifOXJ21PRbINn2hvOiayQ9qZIrQOPTIpa75b9SWBoAMj3PcR1gKz71Hq6ZhS0npimUfHA9ya5qdr6yVCUDIJqHHETxQyIjW+4Nc=
X-Gm-Message-State: AOJu0YyhTNtZv8ZBRBy2dPlX8sJikIgMeitVuK/Pm8kJQH5yovkxjK3R
	bDEJWCW0c40CnkZ0UcC/RbZx2iIGtuJbeOb6cDe1Csi69YBoUB/1MFVdq4iJaykpm1dkZbwSaAF
	Y7g==
X-Google-Smtp-Source: AGHT+IENLGEaWbs37HwWar6l9tfsoGWg6m42nFsyoTQfIezFlKVeE9IraziAYhpEU9jmdwNYoPJqg3Pzh8c=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a17:902:f7c1:b0:1f4:620b:6a47 with SMTP id
 d9443c01a7336-1f6370524bemr2945395ad.4.1717460990723; Mon, 03 Jun 2024
 17:29:50 -0700 (PDT)
Date: Mon, 3 Jun 2024 17:29:49 -0700
In-Reply-To: <20240514.OoPohLaejai6@digikod.net>
Mime-Version: 1.0
References: <20240503131910.307630-1-mic@digikod.net> <20240503131910.307630-4-mic@digikod.net>
 <ZjTuqV-AxQQRWwUW@google.com> <20240506.ohwe7eewu0oB@digikod.net>
 <ZjmFPZd5q_hEBdBz@google.com> <20240507.ieghomae0UoC@digikod.net>
 <ZjpTxt-Bxia3bRwB@google.com> <20240514.OoPohLaejai6@digikod.net>
Message-ID: <Zl5f_T7Nb-Fk8Y1o@google.com>
Subject: Re: [RFC PATCH v3 3/5] KVM: x86: Add notifications for Heki policy
 configuration and violation
From: Sean Christopherson <seanjc@google.com>
To: "=?utf-8?Q?Micka=C3=ABl_Sala=C3=BCn?=" <mic@digikod.net>
Cc: Nicolas Saenz Julienne <nsaenz@amazon.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>, 
	Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, 
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
	Rick P Edgecombe <rick.p.edgecombe@intel.com>, Alexander Graf <graf@amazon.com>, 
	Angelina Vu <angelinavu@linux.microsoft.com>, 
	Anna Trikalinou <atrikalinou@microsoft.com>, Chao Peng <chao.p.peng@linux.intel.com>, 
	Forrest Yuan Yu <yuanyu@google.com>, James Gowans <jgowans@amazon.com>, 
	James Morris <jamorris@linux.microsoft.com>, John Andersen <john.s.andersen@intel.com>, 
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, Marian Rotariu <marian.c.rotariu@gmail.com>, 
	"Mihai =?utf-8?B?RG9uyJt1?=" <mdontu@bitdefender.com>, 
	"=?utf-8?B?TmljdciZb3IgQ8OuyJt1?=" <nicu.citu@icloud.com>, Thara Gopinath <tgopinath@microsoft.com>, 
	Trilok Soni <quic_tsoni@quicinc.com>, Wei Liu <wei.liu@kernel.org>, 
	Will Deacon <will@kernel.org>, Yu Zhang <yu.c.zhang@linux.intel.com>, 
	"=?utf-8?Q?=C8=98tefan_=C8=98icleru?=" <ssicleru@bitdefender.com>, dev@lists.cloudhypervisor.org, 
	kvm@vger.kernel.org, linux-hardening@vger.kernel.org, 
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-security-module@vger.kernel.org, qemu-devel@nongnu.org, 
	virtualization@lists.linux-foundation.org, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable

On Tue, May 14, 2024, Mickaël Salaün wrote:
> On Tue, May 07, 2024 at 09:16:06AM -0700, Sean Christopherson wrote:
> > On Tue, May 07, 2024, Mickaël Salaün wrote:
> > > If yes, that would indeed require a *lot* of work for something we're not
> > > sure will be accepted later on.
> > 
> > Yes and no.  The AWS folks are pursuing VSM support in KVM+QEMU, and SVSM support
> > is trending toward the paired VM+vCPU model.  IMO, it's entirely feasible to
> > design KVM support such that much of the development load can be shared between
> > the projects.  And having 2+ use cases for a feature (set) makes it _much_ more
> > likely that the feature(s) will be accepted.
> > 
> > And similar to what Paolo said regarding HEKI not having a complete story, I
> > don't see a clear line of sight for landing host-defined policy enforcement, as
> > there are many open, non-trivial questions that need answers. I.e. upstreaming
> > HEKI in its current form is also far from a done deal, and isn't guaranteed to
> > be substantially less work when all is said and done.
> 
> I'm not sure I understand why "Heki not having a complete story".  The
> goal is the same as the current kernel self-protection mechanisms.

HEKI doesn't have a complete story for how it's going to play nice with kexec(),
emulated RESET, etc.  The kernel's existing self-protection mechanisms Just Work
because the protections are automatically disabled/lost on such transitions.
There are obviously significant drawbacks to that behavior, but they are accepted
drawbacks, i.e. solving those problems isn't in scope (yet) for the kernel.  And
the "failure" mode is also loss of hardening, not an unusable guest.

In other words, the kernel's hardening is firmly best effort at this time,
whereas HEKI likely needs to be much more than "best effort" in order to justify
the extra complexity.  And that means having answers to the various interoperability
questions.


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 00:30:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 00:30:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735063.1141217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEI4B-000840-GD; Tue, 04 Jun 2024 00:30:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735063.1141217; Tue, 04 Jun 2024 00:30:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEI4B-00083r-DC; Tue, 04 Jun 2024 00:30:19 +0000
Received: by outflank-mailman (input) for mailman id 735063;
 Tue, 04 Jun 2024 00:30:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NYU3=NG=flex--seanjc.bounces.google.com=3FmBeZgYKCYk5rn0wpt11tyr.p1zAr0-qr8ryyv565.Ar0241wrp6.14t@srs-se1.protection.inumbo.net>)
 id 1sEI4A-00083L-NN
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 00:30:18 +0000
Received: from mail-pj1-x104a.google.com (mail-pj1-x104a.google.com
 [2607:f8b0:4864:20::104a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9cd7693a-2209-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 02:30:17 +0200 (CEST)
Received: by mail-pj1-x104a.google.com with SMTP id
 98e67ed59e1d1-2c1aa8d19bbso4423833a91.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 17:30:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cd7693a-2209-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=google.com; s=20230601; t=1717461015; x=1718065815; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:from:to:cc:subject:date:message-id
         :reply-to;
        bh=qxZAvQthUOLtrgNKYUD6oiL7rAnsVeqwoSoZowsNkW0=;
        b=XbnRFghZGS9a30bs+Rhr+LA1njrF8MA97M+/ylVNlv68ooScT8PdgdatMy4gY9c7tl
         yuc8z1QZKAajVzbLtvcsFIdaPVWZsu5/3C1X1li1rszkCONmeaIpViE31S0WFAffF4oV
         9QBtN0YYPZU2e101IOdlOtlbsoinAgUXn6WvYmGZ2AAavmCKqp5Si1Uz3jbSmMsVepoQ
         sx8/tUj2xSPIlBIgoCTUvWeD5o74BhitXq1YvMZYi4um04RZGtyvXZBTap691iUl9OZa
         XyQVHuocHZAih5Tfy/gpM5vwLTAmUKgGDZPlim9Oy6Pa19ugiEAR98/PP8DUxA0UpcCX
         kuNA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717461015; x=1718065815;
        h=content-transfer-encoding:cc:to:from:subject:message-id:references
         :mime-version:in-reply-to:date:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=qxZAvQthUOLtrgNKYUD6oiL7rAnsVeqwoSoZowsNkW0=;
        b=R9sxluoWU/a1GRXWZhGM2FLx43pcZ/5g3dmnEI7OTkHVwJ3phLoijt9isYgsPG/Giw
         72CtfNXpRSvSLs/CrYyOQtxkgrJ4Gnyp9AdNq0uALh6iLLoaWKRXk6ngF92/kYap2diq
         WqGOm/LvWc33SrKYN3+sgguF135ZJwJlQ99fw8pJhst+ayYphukczOBVJoQ5OfPb7SGX
         X/DSWWmUDzQMAcwsLYK5/bSyRN0vjA1PM0TEvOl8aB9LJHzgab6cBkuUj/ZhEIVU9QFU
         8iKQmiYmYL0TiWv2I9w/ebtYnhG3dsSPzopI7cRPVahaX4oEkeXhY4sGxEj9+kv4c6kS
         N20g==
X-Forwarded-Encrypted: i=1; AJvYcCX3FmoRlDMleb8460asuKwfzGtP7E1WSYNKuBRTMLuw7WA9wSOEDGs6WdbXHEFTfwRYJm4HYDFh3G8Xbq5wacZS6pPqehcXPQIg+t1/iFw=
X-Gm-Message-State: AOJu0Ywty59X6Lu+MtZC9DlxgIZCIz5VkLpi7QYzt3OwBmEa1OmfkUKr
	N7K5REjuYmXQSo+ZOZCE9D398DKmAU9UFBYPtu2oLVoHk5sSxbN65h1ADfYk+sNUJR2PF1M7RuP
	vcA==
X-Google-Smtp-Source: AGHT+IHQGaBTeBmAnXtOHPKtCDExst6sxM2uUbRh6POZbhCe3IxNIvO17m1N9WlbQ/aSMCylqTV+FPzEOtI=
X-Received: from zagreus.c.googlers.com ([fda3:e722:ac3:cc00:7f:e700:c0a8:5c37])
 (user=seanjc job=sendgmr) by 2002:a17:90a:2c86:b0:2c1:aa8e:13cb with SMTP id
 98e67ed59e1d1-2c1dc5e0ab2mr29758a91.9.1717461014195; Mon, 03 Jun 2024
 17:30:14 -0700 (PDT)
Date: Mon, 3 Jun 2024 17:30:12 -0700
In-Reply-To: <20240603.ahNg8waif6Fu@digikod.net>
Mime-Version: 1.0
References: <20240503131910.307630-4-mic@digikod.net> <ZjTuqV-AxQQRWwUW@google.com>
 <20240506.ohwe7eewu0oB@digikod.net> <ZjmFPZd5q_hEBdBz@google.com>
 <20240507.ieghomae0UoC@digikod.net> <ZjpTxt-Bxia3bRwB@google.com>
 <D15VQ97L5M8J.1TDNQE6KLW6JO@amazon.com> <20240514.mai3Ahdoo2qu@digikod.net>
 <ZkUb2IWj4Z9FziCb@google.com> <20240603.ahNg8waif6Fu@digikod.net>
Message-ID: <Zl5gFMJp3nECJVW-@google.com>
Subject: Re: [RFC PATCH v3 3/5] KVM: x86: Add notifications for Heki policy
 configuration and violation
From: Sean Christopherson <seanjc@google.com>
To: "=?utf-8?Q?Micka=C3=ABl_Sala=C3=BCn?=" <mic@digikod.net>
Cc: Nicolas Saenz Julienne <nsaenz@amazon.com>, Borislav Petkov <bp@alien8.de>, 
	Dave Hansen <dave.hansen@linux.intel.com>, "H . Peter Anvin" <hpa@zytor.com>, 
	Ingo Molnar <mingo@redhat.com>, Kees Cook <keescook@chromium.org>, 
	Paolo Bonzini <pbonzini@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, 
	Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, 
	Rick P Edgecombe <rick.p.edgecombe@intel.com>, Alexander Graf <graf@amazon.com>, 
	Angelina Vu <angelinavu@linux.microsoft.com>, 
	Anna Trikalinou <atrikalinou@microsoft.com>, Chao Peng <chao.p.peng@linux.intel.com>, 
	Forrest Yuan Yu <yuanyu@google.com>, James Gowans <jgowans@amazon.com>, 
	James Morris <jamorris@linux.microsoft.com>, John Andersen <john.s.andersen@intel.com>, 
	"Madhavan T . Venkataraman" <madvenka@linux.microsoft.com>, Marian Rotariu <marian.c.rotariu@gmail.com>, 
	"Mihai =?utf-8?B?RG9uyJt1?=" <mdontu@bitdefender.com>, 
	"=?utf-8?B?TmljdciZb3IgQ8OuyJt1?=" <nicu.citu@icloud.com>, Thara Gopinath <tgopinath@microsoft.com>, 
	Trilok Soni <quic_tsoni@quicinc.com>, Wei Liu <wei.liu@kernel.org>, 
	Will Deacon <will@kernel.org>, Yu Zhang <yu.c.zhang@linux.intel.com>, 
	"=?utf-8?Q?=C8=98tefan_=C8=98icleru?=" <ssicleru@bitdefender.com>, dev@lists.cloudhypervisor.org, 
	kvm@vger.kernel.org, linux-hardening@vger.kernel.org, 
	linux-hyperv@vger.kernel.org, linux-kernel@vger.kernel.org, 
	linux-security-module@vger.kernel.org, qemu-devel@nongnu.org, 
	virtualization@lists.linux-foundation.org, x86@kernel.org, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

On Mon, Jun 03, 2024, Mickaël Salaün wrote:
> On Wed, May 15, 2024 at 01:32:24PM -0700, Sean Christopherson wrote:
> > On Tue, May 14, 2024, Mickaël Salaün wrote:
> > > On Fri, May 10, 2024 at 10:07:00AM +0000, Nicolas Saenz Julienne wrote:
> > > > Development happens
> > > > https://github.com/vianpl/{linux,qemu,kvm-unit-tests} and the vsm-next
> > > > branch, but I'd advice against looking into it until we add some order
> > > > to the rework. Regardless, feel free to get in touch.
> > > 
> > > Thanks for the update.
> > > 
> > > Could we schedule a PUCK meeting to synchronize and help each other?
> > > What about June 12?
> > 
> > June 12th works on my end.
> 
> Can you please send an invite?

I think this is all the info?

Time:  6am PDT
Video: https://meet.google.com/vdb-aeqo-knk
Phone: https://tel.meet/vdb-aeqo-knk?pin=3003112178656

Calendar: https://calendar.google.com/calendar/u/0?cid=Y182MWE1YjFmNjQ0NzM5YmY1YmVkN2U1ZWE1ZmMzNjY5Y2UzMmEyNTQ0YzVkYjFjN2M4OTE3MDJjYTUwOTBjN2Q1QGdyb3VwLmNhbGVuZGFyLmdvb2dsZS5jb20
Drive:    https://drive.google.com/drive/folders/1aTqCrvTsQI9T4qLhhLs_l986SngGlhPH?resourcekey=0-FDy0ykM3RerZedI8R-zj4A&usp=drive_link


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 02:04:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 02:04:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735083.1141227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEJWy-0001UX-UM; Tue, 04 Jun 2024 02:04:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735083.1141227; Tue, 04 Jun 2024 02:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEJWy-0001UQ-Qk; Tue, 04 Jun 2024 02:04:08 +0000
Received: by outflank-mailman (input) for mailman id 735083;
 Tue, 04 Jun 2024 02:04:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEJWx-0001U0-Uu; Tue, 04 Jun 2024 02:04:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEJWx-0006NY-QN; Tue, 04 Jun 2024 02:04:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEJWx-0002WI-76; Tue, 04 Jun 2024 02:04:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEJWx-0008CD-6i; Tue, 04 Jun 2024 02:04:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jsSQEokWRd7+U0rFOgE0pH2kqhhs0yHAgzRH6fyz1Xg=; b=GqBblMQ7iDbNOTYngWJWSbAnLa
	pLgmIcHiYx1J9jmxMTS1RNX+3yyRns1xfbkpkLcMiSUaNwBG1quW+232fx9aDCLlqLir4a0k19vBa
	MvjybO94HiY8cNLnT+GtbUYzOvwIKf2gqfHH0xRhXSuDOI6TTcyNxF/sE2yFbTGnW5tA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186240-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186240: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=27b044605cd5f6b33a3d231576003850b3fe305b
X-Osstest-Versions-That:
    ovmf=b0930e3f4e6de1ce1c480bca687b44875e071f74
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 02:04:07 +0000

flight 186240 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186240/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 27b044605cd5f6b33a3d231576003850b3fe305b
baseline version:
 ovmf                 b0930e3f4e6de1ce1c480bca687b44875e071f74

Last test of basis   186236  2024-06-03 11:14:35 Z    0 days
Testing same since   186240  2024-06-03 20:43:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b0930e3f4e..27b044605c  27b044605cd5f6b33a3d231576003850b3fe305b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 03:05:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 03:05:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735095.1141237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEKTn-0000Bv-9o; Tue, 04 Jun 2024 03:04:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735095.1141237; Tue, 04 Jun 2024 03:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEKTn-0000Bo-6x; Tue, 04 Jun 2024 03:04:55 +0000
Received: by outflank-mailman (input) for mailman id 735095;
 Tue, 04 Jun 2024 03:04:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAXK=NG=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sEKTl-0000Bi-Af
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 03:04:53 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20601.outbound.protection.outlook.com
 [2a01:111:f403:2418::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3415191d-221f-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 05:04:49 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by CYXPR12MB9280.namprd12.prod.outlook.com (2603:10b6:930:e4::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.27; Tue, 4 Jun
 2024 03:04:45 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.021; Tue, 4 Jun 2024
 03:04:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3415191d-221f-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KPjA7XXM3abUb4JZsv7EVbwbt5JTIEC7xXV95MI5bVETc+0amrgyBRAbrf7coVl+gBCkzd001xaVxal7ploZM1goM+mkQocodgkzElbiH0LBHUS0mpnPH5oVBilK0JZlcS4SIVMSlwZb1UXOEVQVx/qBTFRBRxzVysu4TVhlLuIgW0KQFdSBJiiSbjElqMZnJwinn3gLHQFMjUptdnZo4o2iKivCJTmIc+xMAW2GWu1Idg77tFEPtBPOn9KhQvCFOo8GroHvLkz4G+w0CAROcoPsZNn20dz3ceDVv5pjacuz5ZCR25pTovuyNA406+TyAQAYznhp2mOx2OLJ3dSvxQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AO3g070u72H6qs28dyooBvgbonPvKK/u0YKvdal2ZEs=;
 b=DtoD2Nk6ejBTpYSOAWZoVODWFSJ2SA6Hfw9ti2pcXgFLt2Xdh6lql9l4g9o9iOPqviCw6DLkIgYzjbh/byOdk8cL19DsVqI28mcfITNqxQKuNYGzXRs9OmI98Ro55eAqhcjII0fP/YHdg7pgBdcD5NdZGXWEbHj8AfHjxlAA1BkcfvJRyGFhktF9zDU4hzleFVCzW3ARTQ72M43NJg+MuuDzBjOf9YbLRGlOS8D1CClKO5metKiT35pL/z2eYcJ6p2pc14DCRwu85yRo8tDx6EY19Beb74lCMyw+qNEWUWRdDd3H4sFYuRBjXdrODOzaI8VEEeSZiXUECdq/MTojoA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AO3g070u72H6qs28dyooBvgbonPvKK/u0YKvdal2ZEs=;
 b=Mtxmm6gV+uTdaylVBtwJxZjRYsEjJMk3MUHKZkIW7dunaGk4sAYEr550NNMa7G03nptjZm76fCYU1LtRWGzk79JPiMVbp433Gh9PBIBAfu3iXA+sY1LnCqKJqP8w8PloxJRMPvQBtN8AX/j73G0/LcjzykouxTT5AQqzGOCh1lU=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHap3bgiER4vYjwvk2+R5oTa8V63LGZ5EYAgAHVa4D//4e1AIAAis+A//+FmgCAEsTugP//viAAABEhvwD//4GYgIAAoyuA//+0FICAAgSCAP//yCWAgAeLLwA=
Date: Tue, 4 Jun 2024 03:04:45 +0000
Message-ID:
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <20240516095235.64128-6-Jiqian.Chen@amd.com>
 <9652011f-3f24-43f8-b91e-88bd3982a4c4@suse.com>
 <BL1PR12MB5849EB5EE20B1A6C647F5717E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b10e68e2-3279-471d-a089-c40934050737@suse.com>
 <BL1PR12MB58491A32C32C33545AC71AB7E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4b311c82-b252-413a-bb64-0a36aa97680a@suse.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
In-Reply-To: <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.017)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|CYXPR12MB9280:EE_
x-ms-office365-filtering-correlation-id: 08cc546e-14fe-46b1-5ef5-08dc84431666
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230031|366007|7416005|376005|1800799015|38070700009;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(366007)(7416005)(376005)(1800799015)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <A500CE4B85DA7342849AF3682BA13361@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 08cc546e-14fe-46b1-5ef5-08dc84431666
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jun 2024 03:04:45.3173
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Rr3xJuWVj6EJCw6XwLz28qEj70Gn567q5ivaREp0ZkdmhE6mppMt8iPsAUQUZUTzN8jSB/rdraQEanmWHh5EVQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CYXPR12MB9280

On 2024/5/30 23:51, Jan Beulich wrote:
> On 30.05.2024 13:19, Chen, Jiqian wrote:
>> On 2024/5/29 20:22, Jan Beulich wrote:
>>> On 29.05.2024 13:13, Chen, Jiqian wrote:
>>>> On 2024/5/29 15:10, Jan Beulich wrote:
>>>>> On 29.05.2024 08:56, Chen, Jiqian wrote:
>>>>>> On 2024/5/29 14:31, Jan Beulich wrote:
>>>>>>> On 29.05.2024 04:41, Chen, Jiqian wrote:
>>>>>>>> But I found in function init_irq_data:
>>>>>>>>     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
>>>>>>>>     {
>>>>>>>>         int rc;
>>>>>>>>
>>>>>>>>         desc = irq_to_desc(irq);
>>>>>>>>         desc->irq = irq;
>>>>>>>>
>>>>>>>>         rc = init_one_irq_desc(desc);
>>>>>>>>         if ( rc )
>>>>>>>>             return rc;
>>>>>>>>     }
>>>>>>>> Does it mean that when irq < nr_irqs_gsi, the gsi and irq is a 1:1 mapping?
>>>>>>>
>>>>>>> No, as explained before. I also don't see how you would derive that from the code above.
>>>>>> Because here set desc->irq = irq, and it seems there is no other place to change this desc->irq, so, gsi 1 is considered to irq 1.
>>>>>
>>>>> What are you taking this from? The loop bound isn't nr_gsis, and the iteration
>>>>> variable isn't in GSI space either; it's in IRQ numbering space. In this loop
>>>>> we're merely leveraging that every GSI has a corresponding IRQ;
>>>>> there are no assumptions made about the mapping between the two. Afaics at least.
>>>>>
>>>>>>> "nr_irqs_gsi" describes what its name says: The number of
>>>>>>> IRQs mapping to a (_some_) GSI. That's to tell them from the non-GSI (i.e.
>>>>>>> mainly MSI) ones. There's no implication whatsoever on the IRQ <-> GSI
>>>>>>> mapping.
>>>>>>>
>>>>>>>> What's more, when using PHYSDEVOP_setup_gsi, it calls mp_register_gsi,
>>>>>>>> and in mp_register_gsi, it uses " desc = irq_to_desc(gsi); " to get irq_desc directly.
>>>>>>>
>>>>>>> Which may be wrong, while that wrong-ness may not have hit anyone in
>>>>>>> practice (for reasons that would need working out).
>>>>>>>
>>>>>>>> Combining above, can we consider "gsi == irq" when irq < nr_irqs_gsi ?
>>>>>>>
>>>>>>> Again - no.
>>>>>> Since you are certain that they are not equal, could you tell me where show they are not equal or where build their mappings,
>>>>>> so that I can know how to do a conversion gsi from irq.
>>>>>
>>>>> I did point you at the ACPI Interrupt Source Override structure before.
>>>>> We're parsing those in acpi_parse_int_src_ovr(), to give you a place to
>>>>> start going from.
>>>> Oh! I think I know.
>>>> If I want to transform gsi to irq, I need to do below:
>>>> 	int irq, entry, ioapic, pin;
>>>>
>>>> 	ioapic = mp_find_ioapic(gsi);
>>>> 	pin = gsi - mp_ioapic_routing[ioapic].gsi_base;
>>>> 	entry = find_irq_entry(ioapic, pin, mp_INT);
>>>> 	irq = pin_2_irq(entry, ioapic, pin);
>>>>
>>>> Am I right?
>>>
>>> This looks plausible, yes.
>> I dump all mpc_config_intsrc of array mp_irqs, it shows:
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 0 dstapic 33 dstirq 2
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 15 srcbus 0 srcbusirq 9 dstapic 33 dstirq 9
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 1 dstapic 33 dstirq 1
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 3 dstapic 33 dstirq 3
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 4 dstapic 33 dstirq 4
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 5 dstapic 33 dstirq 5
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 6 dstapic 33 dstirq 6
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 7 dstapic 33 dstirq 7
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 8 dstapic 33 dstirq 8
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 10 dstapic 33 dstirq 10
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 11 dstapic 33 dstirq 11
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 12 dstapic 33 dstirq 12
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 13 dstapic 33 dstirq 13
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 14 dstapic 33 dstirq 14
>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 15 dstapic 33 dstirq 15
>>
>> It seems only Legacy irq and gsi[0:15] has a mapping in mp_irqs.
>> Other gsi can be considered 1:1 mapping with irq? Or are there other places reflect the mapping between irq and gsi?
> 
> It may be uncommon to have overrides for higher GSIs, but I don't think ACPI
> disallows that.
Do you suggest me to add overrides for higher GSIs into array mp_irqs?

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 04:55:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 04:55:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735105.1141246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEMCI-0004Tv-RR; Tue, 04 Jun 2024 04:54:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735105.1141246; Tue, 04 Jun 2024 04:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEMCI-0004To-Ol; Tue, 04 Jun 2024 04:54:58 +0000
Received: by outflank-mailman (input) for mailman id 735105;
 Tue, 04 Jun 2024 04:54:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEMCH-0004Tc-4J; Tue, 04 Jun 2024 04:54:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEMCG-000131-Kz; Tue, 04 Jun 2024 04:54:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEMCG-0001ch-7e; Tue, 04 Jun 2024 04:54:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEMCG-0006hH-79; Tue, 04 Jun 2024 04:54:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LeGBU45T0mGsZ+E32+KT+TWbP1MHoKFPmCeh+JhMxzg=; b=V6whweuJ/tn/rsW/fRLGBsU1Ea
	JpAq28lN03yUlBjjWpxllb8VXyrrjI7G/VIIpFs6v+2fwrMyMbvie13V8TtwQKmQSlfFatwpFsZKB
	fQYySLR8rPE97fRRP8wEXTKzgV23tkHkfqiASyP04YeTnCYDuCYqKkderhol4kynwHKY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186239-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186239: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f06ce441457d4abc4d76be7acba26868a2d02b1c
X-Osstest-Versions-That:
    linux=c3f38fa61af77b49866b006939479069cd451173
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 04:54:56 +0000

flight 186239 linux-linus real [real]
flight 186242 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186239/
http://logs.test-lab.xenproject.org/osstest/logs/186242/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 186238

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 186242-retest
 test-armhf-armhf-xl-arndale   8 xen-boot            fail pass in 186242-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186242 like 186238
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186242 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186242 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186242 never pass
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186238
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f06ce441457d4abc4d76be7acba26868a2d02b1c
baseline version:
 linux                c3f38fa61af77b49866b006939479069cd451173

Last test of basis   186238  2024-06-03 13:41:54 Z    0 days
Testing same since   186239  2024-06-03 20:43:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Huacai Chen <chenhuacai@loongson.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Tiezhu Yang <yangtiezhu@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f06ce441457d4abc4d76be7acba26868a2d02b1c
Merge: c3f38fa61af7 eb36e520f4f1
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jun 3 09:27:45 2024 -0700

    Merge tag 'loongarch-fixes-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson
    
    Pull LoongArch fixes from Huacai Chen:
     "Some bootloader interface fixes, a dts fix, and a trivial cleanup"
    
    * tag 'loongarch-fixes-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
      LoongArch: Fix GMAC's phy-mode definitions in dts
      LoongArch: Override higher address bits in JUMP_VIRT_ADDR
      LoongArch: Fix entry point in kernel image header
      LoongArch: Add all CPUs enabled by fdt to NUMA node 0
      LoongArch: Fix built-in DTB detection
      LoongArch: Remove CONFIG_ACPI_TABLE_UPGRADE in platform_init()

commit eb36e520f4f1b690fd776f15cbac452f82ff7bfa
Author: Huacai Chen <chenhuacai@loongson.cn>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix GMAC's phy-mode definitions in dts
    
    The GMAC of Loongson chips cannot insert the correct 1.5-2ns delay. So
    we need the PHY to insert internal delays for both transmit and receive
    data lines from/to the PHY device. Fix this by changing the "phy-mode"
    from "rgmii" to "rgmii-id" in dts.
    
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 1098efd299ffe9c8af818425338c7f6c4f930a98
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Override higher address bits in JUMP_VIRT_ADDR
    
    In JUMP_VIRT_ADDR we perform an OR operation on an address value
    taken directly from pcaddi.
    
    This only works if we are currently running from direct 1:1-mapped
    addresses, or if the firmware's DMW is configured exactly the same
    as the kernel's. Still, we should not rely on such an assumption.
    
    Fix this by overriding the higher bits of the address that comes
    from pcaddi, so we can get rid of the OR operation.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit beb2800074c15362cf9f6c7301120910046d6556
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix entry point in kernel image header
    
    Currently the kernel entry point in head.S is in the DMW address
    range; firmware is instructed to jump to this address after loading
    the kernel image.
    
    However, the kernel should not make any assumptions about the
    firmware's DMW setting, so the entry point should be a physical
    address that falls into the direct translation region.
    
    Fix this by converting the entry address to a physical address and
    amending the entry calculation logic in libstub accordingly.
    
    BTW, use ABSOLUTE() to calculate variables to make Clang/LLVM happy.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 3de9c42d02a79a5e09bbee7a4421ddc00cfd5c6d
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Add all CPUs enabled by fdt to NUMA node 0
    
    A NUMA-enabled kernel on an FDT-based machine fails to boot because
    all CPUs are in NUMA_NO_NODE and the mm subsystem won't accept that.
    
    Fix this by adding them to the default NUMA node during the FDT
    parsing phase, and by moving numa_add_cpu(0) to a later point.
    
    Cc: stable@vger.kernel.org
    Fixes: 88d4d957edc7 ("LoongArch: Add FDT booting support from efi system table")
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit b56f67a6c748bb009f313f91651c8020d2338d63
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix built-in DTB detection
    
    fdt_check_header(__dtb_start) will always succeed because the
    kernel provides a dummy dtb, and by coincidence __dtb_start clashes
    with the entry of this dummy dtb. The consequence is that the fdt
    passed from firmware will never be taken.
    
    Fix this by trying to utilise __dtb_start only when
    CONFIG_BUILTIN_DTB is enabled.
    
    Cc: stable@vger.kernel.org
    Fixes: 7b937cc243e5 ("of: Create of_root if no dtb provided by firmware")
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 6c3ca6654a74dd396bc477839ba8d9792eced441
Author: Tiezhu Yang <yangtiezhu@loongson.cn>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Remove CONFIG_ACPI_TABLE_UPGRADE in platform_init()
    
    Both acpi_table_upgrade() and acpi_boot_table_init() are defined as
    empty functions under !CONFIG_ACPI_TABLE_UPGRADE and !CONFIG_ACPI in
    include/linux/acpi.h, so there are no implicit declaration errors
    with various configs.
    
      #ifdef CONFIG_ACPI_TABLE_UPGRADE
      void acpi_table_upgrade(void);
      #else
      static inline void acpi_table_upgrade(void) { }
      #endif
    
      #ifdef        CONFIG_ACPI
      ...
      void acpi_boot_table_init (void);
      ...
      #else /* !CONFIG_ACPI */
      ...
      static inline void acpi_boot_table_init(void)
      {
      }
      ...
      #endif        /* !CONFIG_ACPI */
    
    As Huacai suggested, the CONFIG_ACPI_TABLE_UPGRADE guard is ugly and
    not necessary here, so just remove it. At the same time, keep
    CONFIG_ACPI to prevent potential build errors in the future and to
    signal that the code is ACPI-specific. For the same reason, we also
    put acpi_table_upgrade() under CONFIG_ACPI.
    
    Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 05:55:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 05:55:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735116.1141257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEN8c-0003Ef-7h; Tue, 04 Jun 2024 05:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735116.1141257; Tue, 04 Jun 2024 05:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEN8c-0003EY-51; Tue, 04 Jun 2024 05:55:14 +0000
Received: by outflank-mailman (input) for mailman id 735116;
 Tue, 04 Jun 2024 05:55:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N7N6=NG=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sEN8b-0003E9-NX
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 05:55:13 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fdbb2591-2236-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 07:55:05 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id
 ffacd0b85a97d-35dcd34a69bso3716745f8f.3
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 22:55:05 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd064b030sm10522642f8f.105.2024.06.03.22.55.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 22:55:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdbb2591-2236-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717480505; x=1718085305; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Uvm5TxqdIcmAMsaNZAzSbRAr65S7lP15jAckEQQhEyI=;
        b=eVxkEKfl1xM+OZxVO9Z7Z//CUUXNhKckYas+C+3k6dcx1C3SjwsvdX2ahLs5PE8DIq
         jLDKdn9IAsYV6Q+A1k8qaEgGoyAmjtCNdkWRd39vG4mORNRf6iFJ3eugKWpiVRVvQpmQ
         PlXc3mwaJNZHG4XJ5moRnV8Y6YAPdxA0Kz2nxAO/MVtkHLBH/gZnxwaybDOyswiN8zVh
         TjBu6VBP4SX8drCuDX2t6MyQ5PmVJhtMiCb/f8eFQ3uut0nKqvEDibWGzr4LDI7gDu2s
         tfJFFRGhUDlZEd6gjZ+QRsyTSNSnyuOgxkfrT3rv83hee2EdevRzgMfOWKYUoLyxxsF6
         sbow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717480505; x=1718085305;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Uvm5TxqdIcmAMsaNZAzSbRAr65S7lP15jAckEQQhEyI=;
        b=qwXhM7oGjwKaL1kvYeUPaL8MDNchEuLWmU/zewuzdVUToEEWqe/TCpL+YMYKtv99SK
         jupNNG9Z5jFVG74kvgLsKmWLvcKexxwBI7pKqYb84wpySzcNo3XrDt1zoaANt8NdhHC3
         jtUDVthM2ONu5x09SAbcWbMpZKgxits0DBSsBuiJhg0Vjuxk/WElgvTqCxDrBtb10lNT
         2+ypVIFnc8pykQ9i9aMgczMeB55MeJ+bSNkxlXeY1GlJnnotS/Vol+/jKcwmYOHLtnZg
         ofJwXS+85LLUNJw4ZEj0/heBDGPNRI34cHxK89buvSigADhc+8/9n2iv5M2d4h1kPILq
         fQCw==
X-Forwarded-Encrypted: i=1; AJvYcCUNO63qq6Zq/TDW4f26wdL1iUfAtHXvP5LDYDCTggnLFD6/7qLNyKY5Zfohfv4qlReBkb8401ndwn9YLxO25E7/3p3SaycMBnFm7NwQ2oo=
X-Gm-Message-State: AOJu0Yz1b8oPzV3Xb1tKnLCANFUiPtMGLdc4hwl7WBEKRdQPP1pmdYHd
	e54nzU+fWM7SxQJYOJJyFZem1biZdKC2Roz4OzVUjJbtty99ACwNsJennxHLuQ==
X-Google-Smtp-Source: AGHT+IFO62aKzblkgG28uyE0TQEQ/FSluqZHTcKqt3JQ3ijno53Hp0Z077girYCU+wbKfPuU3cgmHg==
X-Received: by 2002:a5d:4005:0:b0:354:f1de:33eb with SMTP id ffacd0b85a97d-35e0f2836b3mr7291068f8f.26.1717480504692;
        Mon, 03 Jun 2024 22:55:04 -0700 (PDT)
Message-ID: <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
Date: Tue, 4 Jun 2024 07:55:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <9652011f-3f24-43f8-b91e-88bd3982a4c4@suse.com>
 <BL1PR12MB5849EB5EE20B1A6C647F5717E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b10e68e2-3279-471d-a089-c40934050737@suse.com>
 <BL1PR12MB58491A32C32C33545AC71AB7E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4b311c82-b252-413a-bb64-0a36aa97680a@suse.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 05:04, Chen, Jiqian wrote:
> On 2024/5/30 23:51, Jan Beulich wrote:
>> On 30.05.2024 13:19, Chen, Jiqian wrote:
>>> On 2024/5/29 20:22, Jan Beulich wrote:
>>>> On 29.05.2024 13:13, Chen, Jiqian wrote:
>>>>> On 2024/5/29 15:10, Jan Beulich wrote:
>>>>>> On 29.05.2024 08:56, Chen, Jiqian wrote:
>>>>>>> On 2024/5/29 14:31, Jan Beulich wrote:
>>>>>>>> On 29.05.2024 04:41, Chen, Jiqian wrote:
>>>>>>>>> But I found in function init_irq_data:
>>>>>>>>>     for ( irq = 0; irq < nr_irqs_gsi; irq++ )
>>>>>>>>>     {
>>>>>>>>>         int rc;
>>>>>>>>>
>>>>>>>>>         desc = irq_to_desc(irq);
>>>>>>>>>         desc->irq = irq;
>>>>>>>>>
>>>>>>>>>         rc = init_one_irq_desc(desc);
>>>>>>>>>         if ( rc )
>>>>>>>>>             return rc;
>>>>>>>>>     }
>>>>>>>>> Does it mean that, when irq < nr_irqs_gsi, the GSI and IRQ are a 1:1 mapping?
>>>>>>>>
>>>>>>>> No, as explained before. I also don't see how you would derive that from the code above.
>>>>>>> Because desc->irq is set to irq here, and there seems to be no other place that changes desc->irq, GSI 1 is taken to be IRQ 1.
>>>>>>
>>>>>> What are you taking this from? The loop bound isn't nr_gsis, and the iteration
>>>>>> variable isn't in GSI space either; it's in IRQ numbering space. In this loop
>>>>>> we're merely leveraging that every GSI has a corresponding IRQ;
>>>>>> there are no assumptions made about the mapping between the two. Afaics at least.
>>>>>>
>>>>>>>> "nr_irqs_gsi" describes what its name says: The number of
>>>>>>>> IRQs mapping to a (_some_) GSI. That's to tell them from the non-GSI (i.e.
>>>>>>>> mainly MSI) ones. There's no implication whatsoever on the IRQ <-> GSI
>>>>>>>> mapping.
>>>>>>>>
>>>>>>>>> What's more, when using PHYSDEVOP_setup_gsi, it calls mp_register_gsi,
>>>>>>>>> and in mp_register_gsi, it uses " desc = irq_to_desc(gsi); " to get irq_desc directly.
>>>>>>>>
>>>>>>>> Which may be wrong, while that wrong-ness may not have hit anyone in
>>>>>>>> practice (for reasons that would need working out).
>>>>>>>>
>>>>>>>>> Combining the above, can we consider "gsi == irq" when irq < nr_irqs_gsi?
>>>>>>>>
>>>>>>>> Again - no.
>>>>>>> Since you are certain that they are not equal, could you tell me where it is shown that they are not equal, or where their mappings are built,
>>>>>>> so that I know how to convert between a GSI and an IRQ.
>>>>>>
>>>>>> I did point you at the ACPI Interrupt Source Override structure before.
>>>>>> We're parsing those in acpi_parse_int_src_ovr(), to give you a place to
>>>>>> start going from.
>>>>> Oh! I think I know.
>>>>> If I want to transform a GSI to an IRQ, I need to do the following:
>>>>> 	int irq, entry, ioapic, pin;
>>>>>
>>>>> 	ioapic = mp_find_ioapic(gsi);
>>>>> 	pin = gsi - mp_ioapic_routing[ioapic].gsi_base;
>>>>> 	entry = find_irq_entry(ioapic, pin, mp_INT);
>>>>> 	irq = pin_2_irq(entry, ioapic, pin);
>>>>>
>>>>> Am I right?
>>>>
>>>> This looks plausible, yes.
>>> I dump all mpc_config_intsrc of array mp_irqs, it shows:
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 0 dstapic 33 dstirq 2
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 15 srcbus 0 srcbusirq 9 dstapic 33 dstirq 9
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 1 dstapic 33 dstirq 1
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 3 dstapic 33 dstirq 3
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 4 dstapic 33 dstirq 4
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 5 dstapic 33 dstirq 5
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 6 dstapic 33 dstirq 6
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 7 dstapic 33 dstirq 7
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 8 dstapic 33 dstirq 8
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 10 dstapic 33 dstirq 10
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 11 dstapic 33 dstirq 11
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 12 dstapic 33 dstirq 12
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 13 dstapic 33 dstirq 13
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 14 dstapic 33 dstirq 14
>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 15 dstapic 33 dstirq 15
>>>
>>> It seems only the legacy IRQs, GSIs [0:15], have a mapping in mp_irqs.
>>> Can other GSIs be considered 1:1 mapped to IRQs? Or are there other places that reflect the mapping between IRQ and GSI?
>>
>> It may be uncommon to have overrides for higher GSIs, but I don't think ACPI
>> disallows that.
> Are you suggesting that I add overrides for higher GSIs to the mp_irqs array?

Why "add"? That's what mp_override_legacy_irq() already does, isn't it?
Assuming of course any are surfaced at all by ACPI.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 05:58:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 05:58:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735123.1141267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENBc-0003nO-LP; Tue, 04 Jun 2024 05:58:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735123.1141267; Tue, 04 Jun 2024 05:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENBc-0003nH-Ig; Tue, 04 Jun 2024 05:58:20 +0000
Received: by outflank-mailman (input) for mailman id 735123;
 Tue, 04 Jun 2024 05:58:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N7N6=NG=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sENBb-0003n7-Ey
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 05:58:19 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 707153f5-2237-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 07:58:18 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id
 ffacd0b85a97d-35dc9cef36dso4304426f8f.3
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 22:58:18 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd064998bsm10468832f8f.91.2024.06.03.22.58.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 22:58:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 707153f5-2237-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717480697; x=1718085497; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=uH+BGHloHPFoY0kqG8mJDWzQPeJf/eNSm7qnnCIt5L0=;
        b=deY51aWo9stAgkIeGbpfogXb5IbOwNpNzmeEiumAFORd19lm4R7gYIu5sx5cT+oFLG
         TcwL79dtW3+zLW2xcP+oYmdoZC+BznRPug8b78bSXM6PMFBvng5z0LOvxtVK34IFiJy3
         iR9yHd+9p35+BPjGTzZrzFbqFU2yz86pG8phP+1IAiH1n3yRGMZQdVeOaGpalApUMUGf
         HtApS8YcLMdjjPgEt6fZ5YKZlpli71mFZTMCnHQtX5ZBKmseUFzbNQRN/EpHPAq99kx7
         zyod3yMztnIm0OvRL3YS5LHKFIWltMOf1W4C6dmTASzxghmmxZNDW8FcTF78cbyiiT6q
         LqVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717480697; x=1718085497;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=uH+BGHloHPFoY0kqG8mJDWzQPeJf/eNSm7qnnCIt5L0=;
        b=CtbCdpW0R3u2CseOWJJ9vEXpR689PXuQvZDa95/0QeGdIM1czCTOQkRdjogCKuddK6
         DxornH5/yNtVTsc5YgHRXPNdtYJ8NMThmb2JHaVFCqbfuKNWACBcTEeFW2swcfvC0+Wh
         dGVaUlgkDK1/3Lomcnu996VZBztZX9NJtnmCJcDbLMxMf2c/cMdMzw+zwyJgFH87jysf
         9qO2to0u4hKR4zCv+vkZz6sZ2nU7GKk/edgRgOZO8/SRF0Zm57tkwTW+Ow0cvE8cBPLa
         sL12HEHq2qrAzjpkJmo2v33yZDxwaKGrudgCYBKexVPx+rSYaQ+hEIE4pohfJvHBhu6K
         e4zQ==
X-Forwarded-Encrypted: i=1; AJvYcCXXm88JsfzSkSu+4hzgUYu0RWhu0XQVGcwh3oepVknRDuareNbmWIzAMxua75hy6gmu1c2Sf1tJh6GUKUM+YB/ZIYYCKYn8dHX59Ed26Tw=
X-Gm-Message-State: AOJu0YxGUXSxERoHBz1RWPcSIN9A1Q1Ys6fK3e2ZP5ENBH+/Lulq2zGH
	G7dfbolC7M6uDWJXQ7rIqfZ7CCQL1mSZkdYjmk3akhn1R1H99mLhWZwweKYKaw==
X-Google-Smtp-Source: AGHT+IFz8CdByCZJp+SLd3/pBsKmAP5L+rIMGT7Jq9wJR9wz5gbJ5llKKpYxALXtibxKgjcMmHjcjg==
X-Received: by 2002:adf:f504:0:b0:354:e021:51ce with SMTP id ffacd0b85a97d-35e0f25c3b6mr7207594f8f.12.1717480697205;
        Mon, 03 Jun 2024 22:58:17 -0700 (PDT)
Message-ID: <917fd5f3-9b0f-467c-bc70-3b22a569afe6@suse.com>
Date: Tue, 4 Jun 2024 07:58:16 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 2/5] x86/domain: deviate violation of MISRA C Rule
 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <a8fe5f64e46e8980e1740583d59b95f88270f426.1717236930.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a8fe5f64e46e8980e1740583d59b95f88270f426.1717236930.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 01.06.2024 12:16, Nicola Vetrini wrote:
> MISRA C Rule 20.12 states: "A macro parameter used as an operand to
> the # or ## operators, which is itself subject to further macro replacement,
> shall only be used as an operand to these operators".
> 
> In this case, in builds where CONFIG_COMPAT=y, the fpu_ctxt macro is
> used both as a regular macro argument and as an operand for
> stringification in the expansion of CHECK_FIELD_.
> This is deviated using a SAF-x-safe comment.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

At least for the time being; plans that Andrew vaguely described may
render this unnecessary down the road.

Jan

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1084,6 +1084,7 @@ void arch_domain_creation_finished(struct domain *d)
>  #ifdef CONFIG_COMPAT
>  #define xen_vcpu_guest_context vcpu_guest_context
>  #define fpu_ctxt fpu_ctxt.x
> +/* SAF-6-safe Rule 20.12 expansion of macro fpu_ctxt with CONFIG_COMPAT */
>  CHECK_FIELD_(struct, vcpu_guest_context, fpu_ctxt);
>  #undef fpu_ctxt
>  #undef xen_vcpu_guest_context
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 9a72d57333e9..335aedf46d03 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -1326,6 +1326,7 @@ long arch_do_domctl(
>  #ifdef CONFIG_COMPAT
>  #define xen_vcpu_guest_context vcpu_guest_context
>  #define fpu_ctxt fpu_ctxt.x
> +/* SAF-6-safe Rule 20.12 expansion of macro fpu_ctxt with CONFIG_COMPAT */
>  CHECK_FIELD_(struct, vcpu_guest_context, fpu_ctxt);
>  #undef fpu_ctxt
>  #undef xen_vcpu_guest_context



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:01:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 06:01:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735129.1141277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENET-0005Xj-1v; Tue, 04 Jun 2024 06:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735129.1141277; Tue, 04 Jun 2024 06:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENES-0005Xc-VP; Tue, 04 Jun 2024 06:01:16 +0000
Received: by outflank-mailman (input) for mailman id 735129;
 Tue, 04 Jun 2024 06:01:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAXK=NG=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sENER-0005XW-Il
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 06:01:15 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20601.outbound.protection.outlook.com
 [2a01:111:f403:2418::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d9b6c707-2237-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 08:01:14 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by BL3PR12MB6379.namprd12.prod.outlook.com (2603:10b6:208:3b2::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.24; Tue, 4 Jun
 2024 06:01:10 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.021; Tue, 4 Jun 2024
 06:01:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9b6c707-2237-11ef-90a1-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FCtYGaCCxX2y/KlIVDgPP3nFBoZkDExsGY9j1VQXmckNXwRscsw9YBBuBI/9asSjZMghscTkM8nqNF66SMMrNvLXYjdN5YqyjBjPSDSlWwVJo6auP4ZmavANhtp/l5ySwcHaCj3DSqTX22Gx7wAOJHiS8f+GYLfUMYVfc+B9qrXdbFMSj5uxWxmsMcZ2B880mYXjdWW8fwmNlCHZTIvjCpP2bVVbyz7+QL2aPy+FOc6RBqBynb+T2itS3AM04ZxSzWzQW5/G8uS+T+OsCpoz/QyuyEnqdOilYw8+fPOsRfDtca7oPDWDDK9xPPKBKpbj1+AAuJU/B6Uqb7t0IeJ7bw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=FDmZTLYOkYHDsuZO7ybeLbK5lH8L8SZk4dMIvFUrAX0=;
 b=RAfwjm4SB7EUJswhmA0ziTr2j2+VwixkKpyGKkX+NsNmE+nb7bKjPrfLpPlVP83IxGyuLrxEMDTEfcJp7UbIErzDMpbfDYVBpahpjFi0vrHn/8sP0hQibbhhd8vJDyboe+ffz40wVMH5UgITrEeQjeAxivUKlpHAeJzQU/+mlf8J/XVReZ5R5skAeigfTz8CENm2FE57JG7e/uSPReHnZJh622LHTPy46FioCTZuHDO1KaZ2VwZ7fLkfJbGkLAGifGa5AiXdIFTIS6kt/aNU+CM3VHRczMt8f923TW3AGgZEurPBeGj7AItCYdTZA92wP6NpfsJuJ7z1BJdA2F2byg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FDmZTLYOkYHDsuZO7ybeLbK5lH8L8SZk4dMIvFUrAX0=;
 b=MAFz80EsdMd9HqZ6sjwe+UK1lXGuNxgTnfSJDU1WCONHUm9QAnMPHJ1au2SHtUizgkTIAnrZtu/Jnv22jSiSG6u1CHXsBLeD4/YEROcrbyXaednUbuMM/X/ENyFGQbUzYuRA9NEQrqAMQQ8H2HIWIa3HPSKGGqX0U2m21FglQ2A=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHap3bgiER4vYjwvk2+R5oTa8V63LGZ5EYAgAHVa4D//4e1AIAAis+A//+FmgCAEsTugP//viAAABEhvwD//4GYgIAAoyuA//+0FICAAgSCAP//yCWAgAeLLwD//6n2gAAQ1XyA
Date: Tue, 4 Jun 2024 06:01:10 +0000
Message-ID:
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <BL1PR12MB5849EB5EE20B1A6C647F5717E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b10e68e2-3279-471d-a089-c40934050737@suse.com>
 <BL1PR12MB58491A32C32C33545AC71AB7E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4b311c82-b252-413a-bb64-0a36aa97680a@suse.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
In-Reply-To: <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.017)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|BL3PR12MB6379:EE_
x-ms-office365-filtering-correlation-id: fdf23327-1636-47ec-3925-08dc845bbb62
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <84C3064BA40C8F41B07FC952CA9BA547@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fdf23327-1636-47ec-3925-08dc845bbb62
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jun 2024 06:01:10.0222
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: fGmRBu7Rdx29U/Evy2T1dEJh62/qESEpgEQOD7UwdnFNUQzuTwsGBFPeiwUNh5eCb2DkRIf0MRVCzqnKI5+A1g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL3PR12MB6379

T24gMjAyNC82LzQgMTM6NTUsIEphbiBCZXVsaWNoIHdyb3RlOg0KPiBPbiAwNC4wNi4yMDI0IDA1
OjA0LCBDaGVuLCBKaXFpYW4gd3JvdGU6DQo+PiBPbiAyMDI0LzUvMzAgMjM6NTEsIEphbiBCZXVs
aWNoIHdyb3RlOg0KPj4+IE9uIDMwLjA1LjIwMjQgMTM6MTksIENoZW4sIEppcWlhbiB3cm90ZToN
Cj4+Pj4gT24gMjAyNC81LzI5IDIwOjIyLCBKYW4gQmV1bGljaCB3cm90ZToNCj4+Pj4+IE9uIDI5
LjA1LjIwMjQgMTM6MTMsIENoZW4sIEppcWlhbiB3cm90ZToNCj4+Pj4+PiBPbiAyMDI0LzUvMjkg
MTU6MTAsIEphbiBCZXVsaWNoIHdyb3RlOg0KPj4+Pj4+PiBPbiAyOS4wNS4yMDI0IDA4OjU2LCBD
aGVuLCBKaXFpYW4gd3JvdGU6DQo+Pj4+Pj4+PiBPbiAyMDI0LzUvMjkgMTQ6MzEsIEphbiBCZXVs
aWNoIHdyb3RlOg0KPj4+Pj4+Pj4+IE9uIDI5LjA1LjIwMjQgMDQ6NDEsIENoZW4sIEppcWlhbiB3
cm90ZToNCj4+Pj4+Pj4+Pj4gQnV0IEkgZm91bmQgaW4gZnVuY3Rpb24gaW5pdF9pcnFfZGF0YToN
Cj4+Pj4+Pj4+Pj4gICAgIGZvciAoIGlycSA9IDA7IGlycSA8IG5yX2lycXNfZ3NpOyBpcnErKyAp
DQo+Pj4+Pj4+Pj4+ICAgICB7DQo+Pj4+Pj4+Pj4+ICAgICAgICAgaW50IHJjOw0KPj4+Pj4+Pj4+
Pg0KPj4+Pj4+Pj4+PiAgICAgICAgIGRlc2MgPSBpcnFfdG9fZGVzYyhpcnEpOw0KPj4+Pj4+Pj4+
PiAgICAgICAgIGRlc2MtPmlycSA9IGlycTsNCj4+Pj4+Pj4+Pj4NCj4+Pj4+Pj4+Pj4gICAgICAg
ICByYyA9IGluaXRfb25lX2lycV9kZXNjKGRlc2MpOw0KPj4+Pj4+Pj4+PiAgICAgICAgIGlmICgg
cmMgKQ0KPj4+Pj4+Pj4+PiAgICAgICAgICAgICByZXR1cm4gcmM7DQo+Pj4+Pj4+Pj4+ICAgICB9
DQo+Pj4+Pj4+Pj4+IERvZXMgaXQgbWVhbiB0aGF0IHdoZW4gaXJxIDwgbnJfaXJxc19nc2ksIHRo
ZSBnc2kgYW5kIGlycSBpcyBhIDE6MSBtYXBwaW5nPw0KPj4+Pj4+Pj4+DQo+Pj4+Pj4+Pj4gTm8s
IGFzIGV4cGxhaW5lZCBiZWZvcmUuIEkgYWxzbyBkb24ndCBzZWUgaG93IHlvdSB3b3VsZCBkZXJp
dmUgdGhhdCBmcm9tIHRoZSBjb2RlIGFib3ZlLg0KPj4+Pj4+Pj4gQmVjYXVzZSBoZXJlIHNldCBk
ZXNjLT5pcnEgPSBpcnEsIGFuZCBpdCBzZWVtcyB0aGVyZSBpcyBubyBvdGhlciBwbGFjZSB0byBj
aGFuZ2UgdGhpcyBkZXNjLT5pcnEsIHNvLCBnc2kgMSBpcyBjb25zaWRlcmVkIHRvIGlycSAxLg0K
Pj4+Pj4+Pg0KPj4+Pj4+PiBXaGF0IGFyZSB5b3UgdGFraW5nIHRoaXMgZnJvbT8gVGhlIGxvb3Ag
Ym91bmQgaXNuJ3QgbnJfZ3NpcywgYW5kIHRoZSBpdGVyYXRpb24NCj4+Pj4+Pj4gdmFyaWFibGUg
aXNuJ3QgaW4gR1NJIHNwYWNlIGVpdGhlcjsgaXQncyBpbiBJUlEgbnVtYmVyaW5nIHNwYWNlLiBJ
biB0aGlzIGxvb3ANCj4+Pj4+Pj4gd2UncmUgbWVyZWx5IGxldmVyYWdpbmcgdGhhdCBldmVyeSBH
U0kgaGFzIGEgY29ycmVzcG9uZGluZyBJUlE7DQo+Pj4+Pj4+IHRoZXJlIGFyZSBubyBhc3N1bXB0
aW9ucyBtYWRlIGFib3V0IHRoZSBtYXBwaW5nIGJldHdlZW4gdGhlIHR3by4gQWZhaWNzIGF0IGxl
YXN0Lg0KPj4+Pj4+Pg0KPj4+Pj4+Pj4+ICJucl9pcnFzX2dzaSIgZGVzY3JpYmVzIHdoYXQgaXRz
IG5hbWUgc2F5czogVGhlIG51bWJlciBvZg0KPj4+Pj4+Pj4+IElSUXMgbWFwcGluZyB0byBhIChf
c29tZV8pIEdTSS4gVGhhdCdzIHRvIHRlbGwgdGhlbSBmcm9tIHRoZSBub24tR1NJIChpLmUuDQo+
Pj4+Pj4+Pj4gbWFpbmx5IE1TSSkgb25lcy4gVGhlcmUncyBubyBpbXBsaWNhdGlvbiB3aGF0c29l
dmVyIG9uIHRoZSBJUlEgPC0+IEdTSQ0KPj4+Pj4+Pj4+IG1hcHBpbmcuDQo+Pj4+Pj4+Pj4NCj4+
Pj4+Pj4+Pj4gV2hhdCdzIG1vcmUsIHdoZW4gdXNpbmcgUEhZU0RFVk9QX3NldHVwX2dzaSwgaXQg
Y2FsbHMgbXBfcmVnaXN0ZXJfZ3NpLA0KPj4+Pj4+Pj4+PiBhbmQgaW4gbXBfcmVnaXN0ZXJfZ3Np
LCBpdCB1c2VzICIgZGVzYyA9IGlycV90b19kZXNjKGdzaSk7ICIgdG8gZ2V0IGlycV9kZXNjIGRp
cmVjdGx5Lg0KPj4+Pj4+Pj4+DQo+Pj4+Pj4+Pj4gV2hpY2ggbWF5IGJlIHdyb25nLCB3aGlsZSB0
aGF0IHdyb25nLW5lc3MgbWF5IG5vdCBoYXZlIGhpdCBhbnlvbmUgaW4NCj4+Pj4+Pj4+PiBwcmFj
dGljZSAoZm9yIHJlYXNvbnMgdGhhdCB3b3VsZCBuZWVkIHdvcmtpbmcgb3V0KS4NCj4+Pj4+Pj4+
Pg0KPj4+Pj4+Pj4+PiBDb21iaW5pbmcgYWJvdmUsIGNhbiB3ZSBjb25zaWRlciAiZ3NpID09IGly
cSIgd2hlbiBpcnEgPCBucl9pcnFzX2dzaSA/DQo+Pj4+Pj4+Pj4NCj4+Pj4+Pj4+PiBBZ2FpbiAt
IG5vLg0KPj4+Pj4+Pj4gU2luY2UgeW91IGFyZSBjZXJ0YWluIHRoYXQgdGhleSBhcmUgbm90IGVx
dWFsLCBjb3VsZCB5b3UgdGVsbCBtZSB3aGVyZSBzaG93IHRoZXkgYXJlIG5vdCBlcXVhbCBvciB3
aGVyZSBidWlsZCB0aGVpciBtYXBwaW5ncywNCj4+Pj4+Pj4+IHNvIHRoYXQgSSBjYW4ga25vdyBo
b3cgdG8gZG8gYSBjb252ZXJzaW9uIGdzaSBmcm9tIGlycS4NCj4+Pj4+Pj4NCj4+Pj4+Pj4gSSBk
aWQgcG9pbnQgeW91IGF0IHRoZSBBQ1BJIEludGVycnVwdCBTb3VyY2UgT3ZlcnJpZGUgc3RydWN0
dXJlIGJlZm9yZS4NCj4+Pj4+Pj4gV2UncmUgcGFyc2luZyB0aG9zZSBpbiBhY3BpX3BhcnNlX2lu
dF9zcmNfb3ZyKCksIHRvIGdpdmUgeW91IGEgcGxhY2UgdG8NCj4+Pj4+Pj4gc3RhcnQgZ29pbmcg
ZnJvbS4NCj4+Pj4+PiBPaCEgSSB0aGluayBJIGtub3cuDQo+Pj4+Pj4gSWYgSSB3YW50IHRvIHRy
YW5zZm9ybSBnc2kgdG8gaXJxLCBJIG5lZWQgdG8gZG8gYmVsb3c6DQo+Pj4+Pj4gCWludCBpcnEs
IGVudHJ5LCBpb2FwaWMsIHBpbjsNCj4+Pj4+Pg0KPj4+Pj4+IAlpb2FwaWMgPSBtcF9maW5kX2lv
YXBpYyhnc2kpOw0KPj4+Pj4+IAlwaW4gPSBnc2kgLSBtcF9pb2FwaWNfcm91dGluZ1tpb2FwaWNd
LmdzaV9iYXNlOw0KPj4+Pj4+IAllbnRyeSA9IGZpbmRfaXJxX2VudHJ5KGlvYXBpYywgcGluLCBt
cF9JTlQpOw0KPj4+Pj4+IAlpcnEgPSBwaW5fMl9pcnEoZW50cnksIGlvYXBpYywgcGluKTsNCj4+
Pj4+Pg0KPj4+Pj4+IEFtIEkgcmlnaHQ/DQo+Pj4+Pg0KPj4+Pj4gVGhpcyBsb29rcyBwbGF1c2li
bGUsIHllcy4NCj4+Pj4gSSBkdW1wIGFsbCBtcGNfY29uZmlnX2ludHNyYyBvZiBhcnJheSBtcF9p
cnFzLCBpdCBzaG93czoNCj4+Pj4gKFhFTikgZmluZF9pcnFfZW50cnkgdHlwZSAzIGlycXR5cGUg
MCBpcnFmbGFnIDAgc3JjYnVzIDAgc3JjYnVzaXJxIDAgZHN0YXBpYyAzMyBkc3RpcnEgMg0KPj4+
PiAoWEVOKSBmaW5kX2lycV9lbnRyeSB0eXBlIDMgaXJxdHlwZSAwIGlycWZsYWcgMTUgc3JjYnVz
IDAgc3JjYnVzaXJxIDkgZHN0YXBpYyAzMyBkc3RpcnEgOQ0KPj4+PiAoWEVOKSBmaW5kX2lycV9l
bnRyeSB0eXBlIDMgaXJxdHlwZSAwIGlycWZsYWcgMCBzcmNidXMgMCBzcmNidXNpcnEgMSBkc3Rh
cGljIDMzIGRzdGlycSAxDQo+Pj4+IChYRU4pIGZpbmRfaXJxX2VudHJ5IHR5cGUgMyBpcnF0eXBl
IDAgaXJxZmxhZyAwIHNyY2J1cyAwIHNyY2J1c2lycSAzIGRzdGFwaWMgMzMgZHN0aXJxIDMNCj4+
Pj4gKFhFTikgZmluZF9pcnFfZW50cnkgdHlwZSAzIGlycXR5cGUgMCBpcnFmbGFnIDAgc3JjYnVz
IDAgc3JjYnVzaXJxIDQgZHN0YXBpYyAzMyBkc3RpcnEgNA0KPj4+PiAoWEVOKSBmaW5kX2lycV9l
bnRyeSB0eXBlIDMgaXJxdHlwZSAwIGlycWZsYWcgMCBzcmNidXMgMCBzcmNidXNpcnEgNSBkc3Rh
cGljIDMzIGRzdGlycSA1DQo+Pj4+IChYRU4pIGZpbmRfaXJxX2VudHJ5IHR5cGUgMyBpcnF0eXBl
IDAgaXJxZmxhZyAwIHNyY2J1cyAwIHNyY2J1c2lycSA2IGRzdGFwaWMgMzMgZHN0aXJxIDYNCj4+
Pj4gKFhFTikgZmluZF9pcnFfZW50cnkgdHlwZSAzIGlycXR5cGUgMCBpcnFmbGFnIDAgc3JjYnVz
IDAgc3JjYnVzaXJxIDcgZHN0YXBpYyAzMyBkc3RpcnEgNw0KPj4+PiAoWEVOKSBmaW5kX2lycV9l
bnRyeSB0eXBlIDMgaXJxdHlwZSAwIGlycWZsYWcgMCBzcmNidXMgMCBzcmNidXNpcnEgOCBkc3Rh
cGljIDMzIGRzdGlycSA4DQo+Pj4+IChYRU4pIGZpbmRfaXJxX2VudHJ5IHR5cGUgMyBpcnF0eXBl
IDAgaXJxZmxhZyAwIHNyY2J1cyAwIHNyY2J1c2lycSAxMCBkc3RhcGljIDMzIGRzdGlycSAxMA0K
Pj4+PiAoWEVOKSBmaW5kX2lycV9lbnRyeSB0eXBlIDMgaXJxdHlwZSAwIGlycWZsYWcgMCBzcmNi
dXMgMCBzcmNidXNpcnEgMTEgZHN0YXBpYyAzMyBkc3RpcnEgMTENCj4+Pj4gKFhFTikgZmluZF9p
cnFfZW50cnkgdHlwZSAzIGlycXR5cGUgMCBpcnFmbGFnIDAgc3JjYnVzIDAgc3JjYnVzaXJxIDEy
IGRzdGFwaWMgMzMgZHN0aXJxIDEyDQo+Pj4+IChYRU4pIGZpbmRfaXJxX2VudHJ5IHR5cGUgMyBp
cnF0eXBlIDAgaXJxZmxhZyAwIHNyY2J1cyAwIHNyY2J1c2lycSAxMyBkc3RhcGljIDMzIGRzdGly
cSAxMw0KPj4+PiAoWEVOKSBmaW5kX2lycV9lbnRyeSB0eXBlIDMgaXJxdHlwZSAwIGlycWZsYWcg
MCBzcmNidXMgMCBzcmNidXNpcnEgMTQgZHN0YXBpYyAzMyBkc3RpcnEgMTQNCj4+Pj4gKFhFTikg
ZmluZF9pcnFfZW50cnkgdHlwZSAzIGlycXR5cGUgMCBpcnFmbGFnIDAgc3JjYnVzIDAgc3JjYnVz
aXJxIDE1IGRzdGFwaWMgMzMgZHN0aXJxIDE1DQo+Pj4+DQo+Pj4+IEl0IHNlZW1zIG9ubHkgTGVn
YWN5IGlycSBhbmQgZ3NpWzA6MTVdIGhhcyBhIG1hcHBpbmcgaW4gbXBfaXJxcy4NCj4+Pj4gT3Ro
ZXIgZ3NpIGNhbiBiZSBjb25zaWRlcmVkIDE6MSBtYXBwaW5nIHdpdGggaXJxPyBPciBhcmUgdGhl
cmUgb3RoZXIgcGxhY2VzIHJlZmxlY3QgdGhlIG1hcHBpbmcgYmV0d2VlbiBpcnEgYW5kIGdzaT8N
Cj4+Pg0KPj4+IEl0IG1heSBiZSB1bmNvbW1vbiB0byBoYXZlIG92ZXJyaWRlcyBmb3IgaGlnaGVy
IEdTSXMsIGJ1dCBJIGRvbid0IHRoaW5rIEFDUEkNCj4+PiBkaXNhbGxvd3MgdGhhdC4NCj4+IERv
IHlvdSBzdWdnZXN0IG1lIHRvIGFkZCBvdmVycmlkZXMgZm9yIGhpZ2hlciBHU0lzIGludG8gYXJy
YXkgbXBfaXJxcz8NCj4gDQo+IFdoeSAiYWRkIj8gVGhhdCdzIHdoYXQgbXBfb3ZlcnJpZGVfbGVn
YWN5X2lycSgpIGFscmVhZHkgZG9lcywgaXNuJ3QgaXQ/DQpOby4gbXBfb3ZlcnJpZGVfbGVnYWN5
X2lycSBvbmx5IG92ZXJyaWRlcyBmb3IgZ3NpIDwgMTYsIGJ1dCBub3QgZm9yIGdzaSA+PSAxNihJ
IGR1bXAgYWxsIG1hcHBpbmdzIGZyb20gYXJyYXkgbXBfaXJxcykuDQpJbiBteSBlbnZpcm9ubWVu
dCwgZ3NpIG9mIG15IGRHUFUgaXMgMjQuDQpTbywgaG93IGRvIEkgcHJvY2VzcyBmb3IgZ3NpID49
IDE2Pw0KDQo+IEFzc3VtaW5nIG9mIGNvdXJzZSBhbnkgYXJlIHN1cmZhY2VkIGF0IGFsbCBieSBB
Q1BJLg0KPiANCj4gSmFuDQoNCi0tIA0KQmVzdCByZWdhcmRzLA0KSmlxaWFuIENoZW4uDQo=


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:08:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 06:08:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735137.1141287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENLK-0006Dx-RS; Tue, 04 Jun 2024 06:08:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735137.1141287; Tue, 04 Jun 2024 06:08:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENLK-0006Dq-OM; Tue, 04 Jun 2024 06:08:22 +0000
Received: by outflank-mailman (input) for mailman id 735137;
 Tue, 04 Jun 2024 06:08:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N7N6=NG=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sENLJ-0006Dk-8h
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 06:08:21 +0000
Received: from mail-wr1-x430.google.com (mail-wr1-x430.google.com
 [2a00:1450:4864:20::430])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d775fb64-2238-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 08:08:20 +0200 (CEST)
Received: by mail-wr1-x430.google.com with SMTP id
 ffacd0b85a97d-35a264cb831so4711444f8f.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 23:08:20 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd0667366sm10454887f8f.111.2024.06.03.23.08.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 23:08:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d775fb64-2238-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717481299; x=1718086099; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=4BqYI80v6FfEZxjPMgqKjjNEHa67mKdhCDPCzBtylZ0=;
        b=Dk/Wsa0y2vSnWgMvxFwXxm/UC1dl+R9SjI8HCVMZbt1zpmBbbTfybTTuqqRtGufsRm
         OTOGpUHxnWZA0Evy2MpILwvfQPEkkNMv4hw83gaF0tR5MfdkL/Ns3GicC3N9Htb+wzHR
         pP04vUjRAT+d0FPbRuUgO3PDYmrO0ltcLtw9xfnNkMKnaPnThl5VoCVlwKtGFgNWtxBE
         FzwkZz5/9X4ZcQRQ8+a4h8CSg20O4UNuf9OPf/8M+48RvPLSU6v1BaDz7Q8WDB5Eyrln
         CQBslhM2u4EQ6ppac+eJA3i9ACxfaJgKfh02JsK80cUdR4KeOabBhn05UQu2OOo/RWQ1
         PZEA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717481299; x=1718086099;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=4BqYI80v6FfEZxjPMgqKjjNEHa67mKdhCDPCzBtylZ0=;
        b=uj/4V8YPZPM5Akf+9tpgD6Tctwm4gxJBvHa9n00Xhjy9cIkcPrZSeBKwpfcJsyaPAy
         O9Hrd1gfwtjfREUntsEENz2mJSZ7xtJUqCmvM2ZRiEDWG2yl95Sfrc7ESMrJkyj4vV3b
         V8IQSNyblH+kXN44k42ttHXAyxg5uEv24f8SjdfpcdpK5vR6PoNZZdeC9M/P4QViPRwD
         tKUbI8v4gBEFf8kfqNJ4pA45nys8aYbl3vgbI843+kGzOKxUIUvK7mOTkK4vB+CF/wu6
         GpShDM6t8P2qhx6nRG0JA/Tj5tnY5TO+pZc1SCuBJDpfGfzw2BeQEGo4s5MpfbNFOToh
         nhXw==
X-Forwarded-Encrypted: i=1; AJvYcCV0jYt4ZXKRj4ZUmB9+cjfHzQWEgF9OI2sPdElwQ/zp8kHdJK6KJlDbIV7U1LyNu4ZeBgUHxxGbDIDc56Kyhewh+KmXJTkJVudMAUnAhUE=
X-Gm-Message-State: AOJu0YxXKOzEuJvlVqN8Rh1ww3wazydQuJ2z9HyYS+wneOD3r4EPglEk
	JZW9rOatBtbBzBcj0QnDxxlLNbOson9yXJxKVFx9XwT+WVKy90ow7vvBPnNLNw==
X-Google-Smtp-Source: AGHT+IEEiR49CbbboKVfYlW0/iIfY7EqBK9fjkYBdVzxHRykLjUZ7Gj15z4Udn1Z2iD8ePD2YNt2dw==
X-Received: by 2002:a05:6000:b82:b0:35e:7d1d:24b5 with SMTP id ffacd0b85a97d-35e7d1d2542mr831056f8f.64.1717481299505;
        Mon, 03 Jun 2024 23:08:19 -0700 (PDT)
Message-ID: <02262bd1-4d2f-413f-bc03-58c7181be216@suse.com>
Date: Tue, 4 Jun 2024 08:08:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 3/5] x86: deviate violation of MISRA C Rule 20.12
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <475daa82f5be77644b1f32ecd3f6e66ccd9ac904.1717236930.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <475daa82f5be77644b1f32ecd3f6e66ccd9ac904.1717236930.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 01.06.2024 12:16, Nicola Vetrini wrote:
> --- a/xen/arch/x86/include/asm/shared.h
> +++ b/xen/arch/x86/include/asm/shared.h
> @@ -76,6 +76,7 @@ static inline void arch_set_##field(struct vcpu *v,         \
>  
>  GET_SET_SHARED(unsigned long, max_pfn)
>  GET_SET_SHARED(xen_pfn_t, pfn_to_mfn_frame_list_list)
> +/* SAF-6-safe Rule 20.12: expansion of macro nmi_reason */
>  GET_SET_SHARED(unsigned long, nmi_reason)

Before we go this route, were alternatives at least considered? Plus
didn't we special-case function-like macros already, when used in
situations where only object-like macros would be expanded anyway?

As to alternatives: nmi_reason() is used in exactly one place.
Dropping the #define and expanding the one use instead would be an
option. I further wonder whether moving the #define-s past the
piece of code you actually modify would also be an option (i.e. the
tool then no longer complaining).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:12:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 06:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735142.1141297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENPQ-00084X-BR; Tue, 04 Jun 2024 06:12:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735142.1141297; Tue, 04 Jun 2024 06:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENPQ-00084Q-82; Tue, 04 Jun 2024 06:12:36 +0000
Received: by outflank-mailman (input) for mailman id 735142;
 Tue, 04 Jun 2024 06:12:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N7N6=NG=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sENPP-00084K-1j
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 06:12:35 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6eba331c-2239-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 08:12:34 +0200 (CEST)
Received: by mail-wm1-x32c.google.com with SMTP id
 5b1f17b1804b1-4212a3e82b6so24523945e9.0
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 23:12:34 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4214beb5c9asm8050775e9.7.2024.06.03.23.12.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 23:12:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6eba331c-2239-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717481553; x=1718086353; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Ncrw/z44aexW3t62vaMFiYSgAkBMt/KPWfW7heWoKpo=;
        b=ATsG1BE3vdVJaF31YL5joSnNQcQutt8+TMTXXihaoSbhFa0TlHWBCuohYzzSULuaL4
         Y3N6fXrNb2XUhw8mVMdrRjoWb4CssdW8o/ifszr7mZASz+CCijw68I2YtDzkn02QIVrI
         lWtoef9fkQ2GlO9qiDz1CncHzRIc29UtY+eP1Re8oYbkOyO8RbTXy1O4lw/qZ9HlaYv0
         hMiEcqsn+h2agwlHjmUdhU2Jj0f9vciN6XW4mKAqSVJp2uyXVQwOGXry+BUVwE3/pVKf
         ucio6gLns3N1nF6jkRpG9sthcpmOJpxvt+jGUANR0N9n3LyRmF6nP9lD+DRl7fqWxgWo
         VFPA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717481553; x=1718086353;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Ncrw/z44aexW3t62vaMFiYSgAkBMt/KPWfW7heWoKpo=;
        b=SM3faTY+vlJ+TvriyGwLmwsFu88xEoQ6Wpr1s/KyT4YGi/2W1rPs+h6zJ+xT6PkaWn
         hbus5zep2+PlYE/tvkxscOrAaPFFDJCs2v/OVYt5PRa4f/irn/fpolURRFzHm4aMcal5
         5OghsXlzXsTyY1Z7/nHvfB+CzNrlSn7ljTskVNdZqLnkIERhyQy/9K3ml0WYM+++DoZ0
         OT/TgIZelmj4P1/cw7Grhtn23Ay4koYWnZx2IHdguVyq9q8xadS7KJKJkH5Z4pfAMS7Z
         bpsO+zE8iUNJHV+MIfvvZVVfpnfApDbMCIW1C1n05j0ypL6vwxMJLmWUa1aIfGe7TDpq
         HK5g==
X-Forwarded-Encrypted: i=1; AJvYcCUjZqMl0OEN5jVtzCV/K47Y4kOuPQuu6WoHV2hSpdpKMBeXsFUU+L6+sSXqNSoIZN8RyIkn0MZifimo5Kaw+FUgaOVY6piZn4+XEV2r0lc=
X-Gm-Message-State: AOJu0YzptYJHxJI08GVdwJNNlHkL3yuM9kc9ZZEpF231jgjaIYX3kX5o
	1PfnINJ08xyN5tm+yai65qUalk/C4ooMLbu6Rlh/+yIdURJ9te0NP9QMNne5qA==
X-Google-Smtp-Source: AGHT+IEUz8RRft9lHuNQNdxArfMv8M/pCNU6QFvZ52JopR2JROFVIbxR0sY8xtpFmipYfaaxj9OWGw==
X-Received: by 2002:a05:600c:1d25:b0:421:3880:d7d5 with SMTP id 5b1f17b1804b1-4214513b8f1mr12248475e9.18.1717481553372;
        Mon, 03 Jun 2024 23:12:33 -0700 (PDT)
Message-ID: <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
Date: Tue, 4 Jun 2024 08:12:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <b10e68e2-3279-471d-a089-c40934050737@suse.com>
 <BL1PR12MB58491A32C32C33545AC71AB7E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4b311c82-b252-413a-bb64-0a36aa97680a@suse.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 08:01, Chen, Jiqian wrote:
> On 2024/6/4 13:55, Jan Beulich wrote:
>> On 04.06.2024 05:04, Chen, Jiqian wrote:
>>> On 2024/5/30 23:51, Jan Beulich wrote:
>>>> On 30.05.2024 13:19, Chen, Jiqian wrote:
>>>>> I dump all mpc_config_intsrc of array mp_irqs, it shows:
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 0 dstapic 33 dstirq 2
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 15 srcbus 0 srcbusirq 9 dstapic 33 dstirq 9
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 1 dstapic 33 dstirq 1
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 3 dstapic 33 dstirq 3
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 4 dstapic 33 dstirq 4
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 5 dstapic 33 dstirq 5
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 6 dstapic 33 dstirq 6
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 7 dstapic 33 dstirq 7
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 8 dstapic 33 dstirq 8
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 10 dstapic 33 dstirq 10
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 11 dstapic 33 dstirq 11
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 12 dstapic 33 dstirq 12
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 13 dstapic 33 dstirq 13
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 14 dstapic 33 dstirq 14
>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 15 dstapic 33 dstirq 15
>>>>>
>>>>> It seems only legacy IRQs, i.e. GSIs [0:15], have a mapping in mp_irqs.
>>>>> Can other GSIs be considered 1:1 mapped to IRQs? Or are there other places that reflect the mapping between IRQ and GSI?
>>>>
>>>> It may be uncommon to have overrides for higher GSIs, but I don't think ACPI
>>>> disallows that.
>>> Are you suggesting I add overrides for higher GSIs to the mp_irqs array?
>>
>> Why "add"? That's what mp_override_legacy_irq() already does, isn't it?
> No. mp_override_legacy_irq() only adds overrides for gsi < 16, not for gsi >= 16 (I dumped all mappings from array mp_irqs).

I assume you mean you observe so ...

> In my environment, gsi of my dGPU is 24.

... on one specific system? The function is invoked from
acpi_parse_int_src_ovr(), and I can't spot any restriction to
IRQs less than 16 there.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:19:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 06:19:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735148.1141307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENVn-0000EN-VE; Tue, 04 Jun 2024 06:19:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735148.1141307; Tue, 04 Jun 2024 06:19:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENVn-0000EG-SN; Tue, 04 Jun 2024 06:19:11 +0000
Received: by outflank-mailman (input) for mailman id 735148;
 Tue, 04 Jun 2024 06:19:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N7N6=NG=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sENVn-0000EA-9J
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 06:19:11 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a17bba3-223a-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 08:19:08 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-4213870aafdso21192515e9.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 23:19:08 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd04c0f1fsm10602145f8f.15.2024.06.03.23.19.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 23:19:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a17bba3-223a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717481948; x=1718086748; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=QWhBIDjFB2cJed97y5VSa5+LQ85Jjx4F0A+xGMtaYGk=;
        b=V88azQRygEC2xNWy5GymbSwacV7X3Skwt0bfM+K2FKjMmAi6I7pWpcFc642k6aFSOA
         j1jnnnf+21ZBPe1dv81DjUNWTasRmqwM16JCjkojK2qDM5fNBI27JlyvpwY3tYigWYjs
         JPNc684plu/8KTi96Qxh0WD2CXfzwJhsW50xpJ5ivtwdaqf1x6zrJ8ZXpt7q0YbBD4yf
         RSPGBPtA0PudblNSTSgauXFQXZm9Op3HlafuoO/Nwj9OCjMmpNd12p40ixjkxN2PCc6u
         RZtySDCwvq1wDIboTiwsDu27SWNBt6ynhHTgxqn8ZUdupNyOnukv1jgWG/lpeEUP6gwL
         GLLw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717481948; x=1718086748;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=QWhBIDjFB2cJed97y5VSa5+LQ85Jjx4F0A+xGMtaYGk=;
        b=oVhqbHk2/3oLQ31nV0KbVDxOvZJSR9Y+24sGkmIKUYt936G4u2TusDe3NrD75G0sfW
         uGxheTCB2PyheqCX44/zbBr1flVjxd3DYh/mUWdzUnj2+3MW4jtACVk1gxF7y1MqBh4H
         lASnpS41GTbTZl/otGZAITgoXROyE/WyMxef49AYRW6Yf8WgDVZL2RU0EkMmuGZIZf8Y
         mEkvvKFntensLDZrgHO6CtB0mChI9+C3r/gmgexLYvm5d2eIP97BGZeosObfmTQYgzj4
         8mUa9YIxjb5o9ws/rXn3zeJTcr5vBiVXC3yiXVYDzGQMYXelMozJpbdr56ROQnv0dRwP
         NRmA==
X-Forwarded-Encrypted: i=1; AJvYcCW/hM/SwWVAiG5/dhuoj3xLE7yMOr1dkPchBfkTLZcrWKzsvPPgezVmb1lOdpSbzh2bszOMWwWnbJ/QIgJ5UjZoiLdTWoaprp5veHiJyck=
X-Gm-Message-State: AOJu0YyGik5fJ1+2wz+NOFGi1CvMU+0tALrJycI6rDOvSwoIjUZ13aFj
	lSDxPcBrJU0au7X/J6VLVEWqNC+HbzuQRPYeb5airZaQ8sKxZIAKxWEq1lsDOQ==
X-Google-Smtp-Source: AGHT+IFcoCc4w7IWJu1AGN60jlTdVqCaAg3IvX5Onv3zpGad1/Jf/2EqGUJKk7kzLc97n8FmnDhFFw==
X-Received: by 2002:a05:600c:4683:b0:421:4786:eb0c with SMTP id 5b1f17b1804b1-4214786f0e0mr15858305e9.33.1717481948276;
        Mon, 03 Jun 2024 23:19:08 -0700 (PDT)
Message-ID: <d70988d0-88fe-4bc3-a68d-d72981f78d9d@suse.com>
Date: Tue, 4 Jun 2024 08:19:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <b10e68e2-3279-471d-a089-c40934050737@suse.com>
 <BL1PR12MB58491A32C32C33545AC71AB7E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4b311c82-b252-413a-bb64-0a36aa97680a@suse.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 08:01, Chen, Jiqian wrote:
> So, how do I proceed for gsi >= 16?

Oh, and to answer this as well: Isn't it as simple as treating
as 1:1 mapped any (valid) GSI you can't find in mp_irqs[]? You
only care about the mapping, not e.g. polarity or trigger mode
here, iirc.

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:25:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 06:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735153.1141317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENcH-0002EG-Kg; Tue, 04 Jun 2024 06:25:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735153.1141317; Tue, 04 Jun 2024 06:25:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENcH-0002E9-I2; Tue, 04 Jun 2024 06:25:53 +0000
Received: by outflank-mailman (input) for mailman id 735153;
 Tue, 04 Jun 2024 06:25:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N7N6=NG=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sENcG-0002E3-5N
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 06:25:52 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 49b65bc0-223b-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 08:25:50 +0200 (CEST)
Received: by mail-wr1-x433.google.com with SMTP id
 ffacd0b85a97d-35dc0472b7eso4446749f8f.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 23:25:50 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd066ff77sm10526121f8f.117.2024.06.03.23.25.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 23:25:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49b65bc0-223b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717482350; x=1718087150; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=ye2oNNCCTk3NquB+RcNnnO9FibV1EaflNrAJcs5QPqk=;
        b=UVCYxuC9xvOROp6VP7ZsrBds98Z0OrfJsmyqRBle9OLVASptjLYrR+z8ByaAudU2Og
         P3JYsykPVIoqaBsCsj/0uN2XZW4y3aWwUM9kUTQMJ+M7BY9xLOQ3GtMgzBftnqugWiVj
         FMbYdMO2dMLMOWXXAONF36f5b9RXlxPz2b7kWu+ABIdVCLdQE/4MVfqt4pCsVR8zHo6y
         Cfu5mGSqGQl9NKBE3OFe2PgWba56pMgkQXRIYS+FnF/MoruC3BEY1CiADYs/JWNm3KeI
         VqSfrd+//xj9UNc3tBQF7EH+vYClHG/Yk7kcqFaDdW+7xeAGHfTKqAtdOsfu/VMGeLaK
         vocA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717482350; x=1718087150;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=ye2oNNCCTk3NquB+RcNnnO9FibV1EaflNrAJcs5QPqk=;
        b=Rkr5HNLsuqiyp/6LnDQEj3m5t968u9kP/P3kZ44AL+LRkZPIiA9D0/0dgKc/ub4APz
         adYn17oeE15cawwD/LLmFxcDO+tgW7FRHz+PHOYUPCMi51yEwhpaMpwoMEjTLjq6HZWH
         5YzZhomQ4FgxlBTAzkGJfdp2cEL2mk6PZ0E9T/eUaTBqyvpyEIfzdz14JyRXAm9jSufZ
         jUeFQDDRONSUdLFwK5by06GegkrZCNqP846QgoKAna26ahdiX2O5B3RWfamKORv0byEz
         x2UgS20hIhNxjLVqQhY615xhsnBZNQ1wyPs89pJOFN6Cjd0lIXAbKqAVXlvwvxDwTOm5
         7rNA==
X-Forwarded-Encrypted: i=1; AJvYcCX8A3YltJ9zBPsCnAnv0M5xrplvfiOmLVhWI2dmpoXMRvmdKdD4KiY2DFlegdPgqOhQb3ISn5Vh0F2SXmRBnDkMby3eosrdck3O3CYIayA=
X-Gm-Message-State: AOJu0YyqpL42vh2J2azNQU6yqei4woinQwH3pZQsBEvZMeVWgP6yyM09
	HT3uaR0RmEWXC/UgR8WJJtDULVTBw39W6Ux+R8shA1lgqXm/iawHdfWXp8UeFg==
X-Google-Smtp-Source: AGHT+IGKcqzt3ZuVw8gG3upQPLpnvnbqrRvH9ZWvLFd/Z0Cumf+r4++TYGV7LmRfHRoznplFBVj+bA==
X-Received: by 2002:a5d:47ad:0:b0:355:1e8:4512 with SMTP id ffacd0b85a97d-35e0f31888cmr9518895f8f.43.1717482350230;
        Mon, 03 Jun 2024 23:25:50 -0700 (PDT)
Message-ID: <e612d581-9f5f-4776-bfa0-9cc174973ee2@suse.com>
Date: Tue, 4 Jun 2024 08:25:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v12 2/8] xen: introduce generic non-atomic test_*bit()
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
 <526d2a5a76f03aa0e3cc7ee3192b1c87834f0e9e.1717008161.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <526d2a5a76f03aa0e3cc7ee3192b1c87834f0e9e.1717008161.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 29.05.2024 21:55, Oleksii Kurochko wrote:
> The following generic functions were introduced:
> * test_bit
> * generic__test_and_set_bit
> * generic__test_and_clear_bit
> * generic__test_and_change_bit
> 
> These functions and macros can be useful for architectures
> that don't have corresponding arch-specific instructions.
> 
> Also, the patch introduces the following generics which are
> used by the functions mentioned above:
> * BITOP_BITS_PER_WORD
> * BITOP_MASK
> * BITOP_WORD
> * BITOP_TYPE
> 
> The following approach was chosen for generic*() and arch*() bit
> operation functions:
> If the bit operation function that is going to be generic starts
> with the prefix "__", then the corresponding generic/arch function
> will also contain the "__" prefix. For example:
>  * test_bit() will be defined using arch_test_bit() and
>    generic_test_bit().
>  * __test_and_set_bit() will be defined using
>    arch__test_and_set_bit() and generic__test_and_set_bit().
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Albeit I think you could have gone further ...

> @@ -307,8 +295,7 @@ static inline int variable_test_bit(int nr, const volatile void *addr)
>      return oldbit;
>  }
>  
> -#define test_bit(nr, addr) ({                           \
> -    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
> +#define arch_test_bit(nr, addr) ({                      \
>      __builtin_constant_p(nr) ?                          \
>          constant_test_bit(nr, addr) :                   \
>          variable_test_bit(nr, addr);                    \

... here, as constant_test_bit() is functionally the same as
generic_test_bit(), afaict. But that can well be cleaned up
subsequently, in order not to further delay this work of yours.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:27:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 06:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735160.1141327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENdx-0002nY-Vz; Tue, 04 Jun 2024 06:27:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735160.1141327; Tue, 04 Jun 2024 06:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sENdx-0002nR-S1; Tue, 04 Jun 2024 06:27:37 +0000
Received: by outflank-mailman (input) for mailman id 735160;
 Tue, 04 Jun 2024 06:27:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tQmQ=NG=redhat.com=mlureau@srs-se1.protection.inumbo.net>)
 id 1sENdw-0002nJ-EW
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 06:27:36 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 877b4719-223b-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 08:27:35 +0200 (CEST)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-516-jOWUnELcMXG84PjIF3E6Hw-1; Tue, 04 Jun 2024 02:27:33 -0400
Received: by mail-ed1-f69.google.com with SMTP id
 4fb4d7f45d1cf-57a33a589b3so1383512a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 23:27:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 877b4719-223b-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717482454;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0JVtdIWNto+KqDFiBlQzdZJhx14N17tSKBoZOMoyk4A=;
	b=jLv4t5rjJhvJaDdRYp/A9ePW1a7CpxMaYBH+bDF8L/Qa6e/xblTSvoa0pXecbyOagKJrUJ
	6Fh+x7NR2n/DvmG5tWj2hqgsVh0RIGOqR4oWXZ8Pv8CQpquNjLESIeFSjcDBAO15/xbW2l
	o1Ve7dmCMBpZJz9BjNSNQd45c2gRcDs=
X-MC-Unique: jOWUnELcMXG84PjIF3E6Hw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717482451; x=1718087251;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0JVtdIWNto+KqDFiBlQzdZJhx14N17tSKBoZOMoyk4A=;
        b=HtIgBk2Kdwb/9RTPui19PjWFfRY2z7fc9B4HPCVZ5t8B1HJZM9e60fO6KkmBxIS0D/
         jVRCEUn+/FLBIzIt9H5g1STS6crjSXDeqfU/JFmBTziudtoAmasaLDnRz1gasnDI8DTy
         /Qbyiyoyak3eQYjrtBS+pf0691VsfbxkbhMOJAv07CCV1u61mUZtsbp7U8hL/WmL7ded
         C17hAEQEC62/AR0UKYyvHN8/PhaBAmdSXAGJlQkHly56sT2z/8FFuTvIUo/wZ7XidsiO
         Lc3MkwTT7zXRxak6UImINpvxVKtribxi7E5WUGoj73n/zZNVJgQg5m14RErOXAY+0Kzi
         ub0g==
X-Forwarded-Encrypted: i=1; AJvYcCVTjRgNK+HSYKl9KtZQnV4aIWyJnhTRLJfz3+qrtJ0GRw/kt4p011vtI6L2AXPU9SfkPUepPZ8tW1zN2uPHGheQhqMPzPFbwy1PE1Lpx6E=
X-Gm-Message-State: AOJu0Yw3zilfJCoZCQWBlrJ/2k2bqPZVHq1A3578q72qAMs6MLXSvW0v
	HU0aXnE509UoFzPt8RA5hhMp0pnNIjGM+lAsb8+mNqL10InQC1xjzqQodAVm8xQBwSaefI9p9wl
	WD7XPnXCkafYRlhYlS0Uac/2C+b+sL+pDt8848hJ9F9GxHi8kyIUmsSWMPwmEIiDcselogt/zu3
	9j2M6Qrh+6jwzWB5p9KSfLzOkKA7GIKd/y0ztTLhOrVMJP4EE=
X-Received: by 2002:a50:9548:0:b0:57a:2ccf:ed2f with SMTP id 4fb4d7f45d1cf-57a363b4972mr7705408a12.3.1717482450705;
        Mon, 03 Jun 2024 23:27:30 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IGgCCSiWmq1QljOi/YHDSSxZ/u0tvY5jfAekfJZ3pRnQnD3+sRh5XAJAPUGxLID5iQcee+lK0g7D2RiA9UbizE=
X-Received: by 2002:a50:9548:0:b0:57a:2ccf:ed2f with SMTP id
 4fb4d7f45d1cf-57a363b4972mr7705397a12.3.1717482450320; Mon, 03 Jun 2024
 23:27:30 -0700 (PDT)
MIME-Version: 1.0
References: <20240603151825.188353-1-kraxel@redhat.com> <20240603151825.188353-2-kraxel@redhat.com>
In-Reply-To: <20240603151825.188353-2-kraxel@redhat.com>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>
Date: Tue, 4 Jun 2024 10:27:18 +0400
Message-ID: <CAMxuvawqf-0dKPsZP2UTcDWPWQ+8FKbZ=S4KX02hQO1qeeGVMA@mail.gmail.com>
Subject: Re: [PATCH v2 1/3] stdvga: fix screen blanking
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu-devel@nongnu.org, Anthony PERARD <anthony@xenproject.org>, 
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org, 
	Stefano Stabellini <sstabellini@kernel.org>, qemu-stable@nongnu.org
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi

On Mon, Jun 3, 2024 at 7:18 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> In case the display surface uses a shared buffer (i.e. uses vga vram
> directly instead of a shadow) go unshare the buffer before clearing it.
>
> This avoids vga memory corruption, which in turn fixes unblanking not
> working properly with X11.
>
> Cc: qemu-stable@nongnu.org
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2067
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>  hw/display/vga.c | 6 ++++++
>  1 file changed, 6 insertions(+)
>
> diff --git a/hw/display/vga.c b/hw/display/vga.c
> index 30facc6c8e33..474b6b14c327 100644
> --- a/hw/display/vga.c
> +++ b/hw/display/vga.c
> @@ -1762,6 +1762,12 @@ static void vga_draw_blank(VGACommonState *s, int full_update)
>      if (s->last_scr_width <= 0 || s->last_scr_height <= 0)
>          return;
>
> +    if (is_buffer_shared(surface)) {

Perhaps the rename suggested in the following patch should instead be
surface_is_allocated()? That would match the actual flag check, though
callers would then have to negate the result. Wdyt?

> +        /* unshare buffer, otherwise the blanking corrupts vga vram */
> +        surface = qemu_create_displaysurface(s->last_scr_width, s->last_scr_height);
> +        dpy_gfx_replace_surface(s->con, surface);

Ok, this looks safer than calling "resize".

thanks

> +    }
> +
>      w = s->last_scr_width * surface_bytes_per_pixel(surface);
>      d = surface_data(surface);
>      for(i = 0; i < s->last_scr_height; i++) {
> --
> 2.45.1
>



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:33:11 2024
Message-ID: <1dde6a74-9bca-45f8-90a1-1e2459148a74@suse.com>
Date: Tue, 4 Jun 2024 08:33:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 4/4] x86/ucode: Utilize ucode_force and remove
 opt_ucode_allow_same
To: Fouad Hilly <fouad.hilly@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240528152943.3915760-1-fouad.hilly@cloud.com>
 <20240528152943.3915760-5-fouad.hilly@cloud.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240528152943.3915760-5-fouad.hilly@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 28.05.2024 17:29, Fouad Hilly wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -2648,7 +2648,7 @@ performance.
>     Alternatively, selecting `tsx=1` will re-enable TSX at the users own risk.
>  
>  ### ucode
> -> `= List of [ <integer> | scan=<bool>, nmi=<bool>, allow-same=<bool> ]`
> +> `= List of [ <integer> | scan=<bool>, nmi=<bool> ]`
>  
>      Applicability: x86
>      Default: `nmi`
> @@ -2680,11 +2680,6 @@ precedence over `scan`.
>  stop_machine context. In NMI handler, even NMIs are blocked, which is
>  considered safer. The default value is `true`.
>  
> -'allow-same' alters the default acceptance policy for new microcode to permit
> -trying to reload the same version.  Many CPUs will actually reload microcode
> -of the same version, and this allows for easy testing of the late microcode
> -loading path.
> -
>  ### unrestricted_guest (Intel)
>  > `= <boolean>`

Removal of a command line (sub-)option nowadays wants to be accompanied
by a CHANGELOG.md entry.
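
For illustration only, such an entry might look like the fragment below.
The section heading and wording are assumptions; the actual text is up to
the patch author and Xen's CHANGELOG.md conventions:

```md
### Removed
 - On x86, the `ucode=allow-same` command line sub-option.
```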

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:33:38 2024
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Date: Tue, 4 Jun 2024 06:33:26 +0000
Message-ID:
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <BL1PR12MB58491A32C32C33545AC71AB7E7EE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4b311c82-b252-413a-bb64-0a36aa97680a@suse.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
In-Reply-To: <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On 2024/6/4 14:12, Jan Beulich wrote:
> On 04.06.2024 08:01, Chen, Jiqian wrote:
>> On 2024/6/4 13:55, Jan Beulich wrote:
>>> On 04.06.2024 05:04, Chen, Jiqian wrote:
>>>> On 2024/5/30 23:51, Jan Beulich wrote:
>>>>> On 30.05.2024 13:19, Chen, Jiqian wrote:
>>>>>> I dump all mpc_config_intsrc of array mp_irqs, it shows:
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 0 dstapic 33 dstirq 2
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 15 srcbus 0 srcbusirq 9 dstapic 33 dstirq 9
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 1 dstapic 33 dstirq 1
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 3 dstapic 33 dstirq 3
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 4 dstapic 33 dstirq 4
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 5 dstapic 33 dstirq 5
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 6 dstapic 33 dstirq 6
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 7 dstapic 33 dstirq 7
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 8 dstapic 33 dstirq 8
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 10 dstapic 33 dstirq 10
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 11 dstapic 33 dstirq 11
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 12 dstapic 33 dstirq 12
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 13 dstapic 33 dstirq 13
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 14 dstapic 33 dstirq 14
>>>>>> (XEN) find_irq_entry type 3 irqtype 0 irqflag 0 srcbus 0 srcbusirq 15 dstapic 33 dstirq 15
>>>>>>
>>>>>> It seems only Legacy irq and gsi[0:15] has a mapping in mp_irqs.
>>>>>> Other gsi can be considered 1:1 mapping with irq? Or are there other places reflect the mapping between irq and gsi?
>>>>>
>>>>> It may be uncommon to have overrides for higher GSIs, but I don't think ACPI
>>>>> disallows that.
>>>> Do you suggest me to add overrides for higher GSIs into array mp_irqs?
>>>
>>> Why "add"? That's what mp_override_legacy_irq() already does, isn't it?
>> No. mp_override_legacy_irq only overrides for gsi < 16, but not for gsi >= 16(I dump all mappings from array mp_irqs).
>
> I assume you mean you observe so ...
No, after starting xen pvh dom0, I dump all entries from mp_irqs.

>
>> In my environment, gsi of my dGPU is 24.
>
> ... on one specific system? The function is invoked from
> acpi_parse_int_src_ovr(), and I can't spot any restriction to
> IRQs less than 16 there.
I didn't see any restriction too, but from the dump results, there are only 16 entries, see previous email.

>
> Jan

--
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:36:31 2024
Message-ID: <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
Date: Tue, 4 Jun 2024 08:36:24 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <4b311c82-b252-413a-bb64-0a36aa97680a@suse.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 08:33, Chen, Jiqian wrote:
> On 2024/6/4 14:12, Jan Beulich wrote:
>> On 04.06.2024 08:01, Chen, Jiqian wrote:
>>> On 2024/6/4 13:55, Jan Beulich wrote:
>>>> On 04.06.2024 05:04, Chen, Jiqian wrote:
>>>>> On 2024/5/30 23:51, Jan Beulich wrote:
>>>>>> On 30.05.2024 13:19, Chen, Jiqian wrote:
>>>>>>> It seems only Legacy irq and gsi[0:15] has a mapping in mp_irqs.
>>>>>>> Other gsi can be considered 1:1 mapping with irq? Or are there other places reflect the mapping between irq and gsi?
>>>>>>
>>>>>> It may be uncommon to have overrides for higher GSIs, but I don't think ACPI
>>>>>> disallows that.
>>>>> Do you suggest me to add overrides for higher GSIs into array mp_irqs?
>>>>
>>>> Why "add"? That's what mp_override_legacy_irq() already does, isn't it?
>>> No. mp_override_legacy_irq only overrides for gsi < 16, but not for gsi >= 16(I dump all mappings from array mp_irqs).
>>
>> I assume you mean you observe so ...
> No, after starting xen pvh dom0, I dump all entries from mp_irqs.

IOW really your answer is "yes" ...

>>> In my environment, gsi of my dGPU is 24.
>>
>> ... on one specific system?

... to this question I raised. Whatever you dump on any number of
systems, there's always the chance that there's another system
where things are different.

>> The function is invoked from
>> acpi_parse_int_src_ovr(), and I can't spot any restriction to
>> IRQs less than 16 there.
> I didn't see any restriction too, but from the dump results, there are only 16 entries, see previous email. 

Hence why I tried to point out that going from observations on a
particular system isn't enough.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:38:59 2024
X-Inumbo-ID: 1b11b2cf-223d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717483131; x=1718087931; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Vcb6mdZUN4++Vz54iHGsZAG3B9yDliTGLd0WfbnrCcU=;
        b=e8eGB0ELPxLuXjP4qWBlEKjvKXDiDhfjX/64x4FG8RZ0Bx/TygRItEus8ME38mFrFo
         5UTo0E4szZxf74DQKSLiYcaHl7oQbIzz44FmXmkoV+gmg46OEjjlhVb0WNHQj/2xA5QQ
         wo9D0inrEjjfTZZUN4ZrstCDaH9ATnU77KV3lpTmPpgp6V58PrqiJvCTWopOcHGyDvwr
         R+9N6zTYuPitEuR0IEJBdMNdDse+UAtYVZkz6Ss8eXmw0neZsZ9vaaJ17Wc/HVnSSKFt
         CTbocV90BG20n6jBMuLjUWp0LWu3jt+Lay5pTvOAJfGqAps85ZTSmu8K8B1zOo14mdqr
         q37Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717483131; x=1718087931;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Vcb6mdZUN4++Vz54iHGsZAG3B9yDliTGLd0WfbnrCcU=;
        b=uMphA/9IQspt65HviuCy8Q0bjZOhiGy3T5dCj8dC+BdzxN5dlU6ETLCu3bHFsst6HM
         vZWXcO3FWJw7CL7jBSQ51aNGpRB9eoubE37AmbJjVKyajjkarKDRWPKuApe+TvtRoOXt
         1oPM7DKmPQukHva2cO+2m5QpqwOd8gh/XsLApGtNKoIBgOxw/AKuDxzk+06nbqx8alCD
         dlyMxZ+YWrG/MfSn23SCmGPnP5gGnk4u11/NgbznTc5xylaxKgxo1tpErg2h5mmdc7T1
         0QPEvlGgFlo357VArfJNoKr1zpAxuoqKIPGTYlO41Fk5tuEoMGuD+PKXIkuV/ChJcGzv
         m2ag==
X-Forwarded-Encrypted: i=1; AJvYcCWYdjG70r+yxx8i0NsSW9AHVEqMFA3XGJkocFSSBrkmDSIjjyPWJ4jNS12bCFWbsFp8hOZpf78ElqAlEOym2BLicsEwiusrZbCaeBruvOw=
X-Gm-Message-State: AOJu0YwGTyEOTlmRx1/yBWxznQj6LccI/pIobSNOpj6rkZmeJTnCcSUR
	yBSbFF7WZtJjcneBDDDNiK2DpeFvx7gWcAOddrqCVo5sLHw0femgXN+NubdwLQ==
X-Google-Smtp-Source: AGHT+IHVHD4SHYJOO8JZnijobM0cIWA28UoQAnUGn4YZWlpKthhK3xz0Gt/SrKQIVqY3+5Lkcp+Kzw==
X-Received: by 2002:a05:600c:5012:b0:41b:9e4f:d2b2 with SMTP id 5b1f17b1804b1-4212e044b0dmr94150615e9.2.1717483131060;
        Mon, 03 Jun 2024 23:38:51 -0700 (PDT)
Message-ID: <2c7bc1df-74da-420a-9cc8-d64f0bf212cf@suse.com>
Date: Tue, 4 Jun 2024 08:38:49 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 03/10] xen: Refactor altp2m options into a
 structured format
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <5dc1d0375206bd982b91f4db4bd237769a889f48.1717356829.git.w1benny@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5dc1d0375206bd982b91f4db4bd237769a889f48.1717356829.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 02.06.2024 22:04, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> Encapsulate the altp2m options within a struct. This change is preparatory
> and lays the groundwork for introducing an additional parameter in a
> subsequent commit.
> 
> Signed-off-by: Petr Beneš <w1benny@gmail.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor




From xen-devel-bounces@lists.xenproject.org Tue Jun 04 06:50:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 06:50:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735198.1141376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEO0R-00025X-QQ; Tue, 04 Jun 2024 06:50:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735198.1141376; Tue, 04 Jun 2024 06:50:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEO0R-00025Q-No; Tue, 04 Jun 2024 06:50:51 +0000
Received: by outflank-mailman (input) for mailman id 735198;
 Tue, 04 Jun 2024 06:50:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N7N6=NG=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sEO0Q-00025K-PK
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 06:50:50 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c6408ca1-223e-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 08:50:48 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id
 ffacd0b85a97d-35dceef429bso2227057f8f.1
 for <xen-devel@lists.xenproject.org>; Mon, 03 Jun 2024 23:50:48 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35e574748ffsm5599123f8f.87.2024.06.03.23.50.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 Jun 2024 23:50:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6408ca1-223e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717483848; x=1718088648; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=YJEBEdzh7uV2SLNEdg7mSfjPAreMvRs5W4pE8Flw53o=;
        b=R2YiBt4c4gSFnMXHg8IHa3Pdj9ZR5VqF3kE6N8xL6dPPD83Ftdz9fcE7PAZb73GInP
         65Mmn4uG/+V6MCv262ruqn3Qr3o5uVDLte5FDwRCoiKg6eu8QFt++hBVGoHM9cEnrCcn
         nQgBXGLJFVESTp9+r76Kc1OpKGRSdCeHUSpyvz1F1Y1H1O1PzTQDcYErYtw2K67w021j
         sJ++wXZJihd6AndJYVwQwTUt0womdomkjd2i6b43kPAPdQzM6RhkqscJM731+puYoO4t
         VSCK2WaiXNjs5KNyBuZSZuF5lnw+VHL60pPopo2j/8XdYM+mcwZUWIaYLKQgomfroAXi
         37SQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717483848; x=1718088648;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=YJEBEdzh7uV2SLNEdg7mSfjPAreMvRs5W4pE8Flw53o=;
        b=alNWB7AIC2XhvUmAek376WWHCqU0EkRCeWywjeYNzaDeN2y+6DFUvjMpYZIrP7AU+o
         175A7YerXLqcER6FbPFvCxuHxSoC9zAIKZkRKoUKRN3tkMFlLXTVmkpIqLjrpmFCsxoF
         HWlZ66tTmlo9p+ndZy88puC6MKSWtu4R7R0Ho6+FG+DAXinV3/ZrhycCPvLt2umJ0Jvh
         CWODh3F60X4LYXjZvrvy9q2Fdh3kRlGiPHc27Ue1uxpVH1DBEUk5hpS9v2i2P2YI/86Z
         h58tTY0J8modYUiqs/4uyP7vtU3jZwMjM/3fqhm0LAyhP6Xh2GAu/9rKEMnFa5ZkBZ7o
         ahfA==
X-Forwarded-Encrypted: i=1; AJvYcCUOD52oZE1FXxLJR9zijUSQD3Kozfmay6e8fNtDjgiuavf5d+Gwd6gNQndI10bIsNPC+OqdV3LgM7pMppjSGXUwwTFHVgbrwZc2lfjyM0M=
X-Gm-Message-State: AOJu0YyhUfqzmP5tfz0Ejs/8+cnf4dKNDwHzXBI+2HXUBjyd80olFlg7
	oPWcbzKgDdHtty48el3WzO+JV8JOtkSUI9eURHCbAZhFq2ohMFIk3NX0GlAEBQ==
X-Google-Smtp-Source: AGHT+IGFMGVzvhz+CTx28oIJXojI3YRxX0wIxdOs3wf20LphGJRCqtJjABNNi9gVAws2YNTkyts8LQ==
X-Received: by 2002:a05:6000:1361:b0:35e:5076:e8ce with SMTP id ffacd0b85a97d-35e7c51a31dmr1271641f8f.2.1717483847679;
        Mon, 03 Jun 2024 23:50:47 -0700 (PDT)
Message-ID: <e5a54405-d686-4ac9-a4c3-5b76e4da2bb4@suse.com>
Date: Tue, 4 Jun 2024 08:50:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 06/10] x86/altp2m: Introduce accessor
 functions for safer array indexing
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <e2e5f7a3c9a0ac6d65a6f942b0ea54f0f0b104f3.1717356829.git.w1benny@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e2e5f7a3c9a0ac6d65a6f942b0ea54f0f0b104f3.1717356829.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 02.06.2024 22:04, Petr Beneš wrote:
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -4887,10 +4887,10 @@ bool asmlinkage vmx_vmenter_helper(const struct cpu_user_regs *regs)
> 
>              for ( i = 0; i < MAX_ALTP2M; ++i )
>              {
> -                if ( currd->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
> +                if ( altp2m_get_eptp(currd, i) == mfn_x(INVALID_MFN) )
>                      continue;
> 
> -                ept = &currd->arch.altp2m_p2m[i]->ept;
> +                ept = &altp2m_get_p2m(currd, i)->ept;
>                  if ( cpumask_test_cpu(cpu, ept->invalidate) )
>                  {
>                      cpumask_clear_cpu(cpu, ept->invalidate);

I'm not convinced we want to add the extra overhead in cases like
this one, where we shouldn't need it. I'd like to hear other
maintainers' views.

> --- a/xen/arch/x86/include/asm/altp2m.h
> +++ b/xen/arch/x86/include/asm/altp2m.h
> @@ -19,6 +19,38 @@ static inline bool altp2m_active(const struct domain *d)
>      return d->arch.altp2m_active;
>  }
> 
> +static inline struct p2m_domain *altp2m_get_p2m(const struct domain* d,

Nit: Style (misplaced *).

> +                                                unsigned int idx)
> +{
> +    return d->arch.altp2m_p2m[array_index_nospec(idx, MAX_ALTP2M)];
> +}
> +
> +static inline uint64_t altp2m_get_eptp(const struct domain* d,

Same here. And more further down.

Further: at this point it isn't yet necessary to switch away from
array_access_nospec(). Doing so right away is probably okay, but then it
needs justifying in the description.

Jan
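[Editorial note: for readers following along, the accessors under review clamp the index with array_index_nospec() before it is used to index the per-domain arrays, so a mispredicted bounds check cannot select an out-of-range element under speculation. Below is a self-contained sketch of the idea using the generic masking trick; the names, the array size, and the accessor signature are hypothetical, and Xen's real macro may use arch-specific code. It assumes arithmetic right shift of negative values, as on the compilers/targets Xen supports.]

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

#define MAX_ALTP2M 10   /* illustrative size; made configurable by the series */

/*
 * Stand-in for an array_index_nospec()-style helper: yield 'idx' when
 * idx < size and 0 otherwise, computed without a data-dependent branch.
 * The mask is all-ones iff both idx and (size - idx - 1) have a clear
 * top bit, i.e. iff idx < size.
 */
static unsigned long index_nospec(unsigned long idx, unsigned long size)
{
    unsigned long mask =
        ~(long)(idx | (size - idx - 1)) >> (sizeof(long) * CHAR_BIT - 1);

    return idx & mask;   /* idx if idx < size, 0 otherwise */
}

struct p2m_domain { uint64_t eptp; };

static struct p2m_domain p2m_table[MAX_ALTP2M];

/* Accessor in the shape the patch introduces (hypothetical signature). */
static struct p2m_domain *altp2m_get_p2m(unsigned int idx)
{
    return &p2m_table[index_nospec(idx, MAX_ALTP2M)];
}
```

The cost being debated above is the extra masking on every access, even on paths (such as the vmenter loop) where the index is a trusted loop counter.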


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 08:19:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 08:19:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735212.1141387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPNZ-0007RE-Oz; Tue, 04 Jun 2024 08:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735212.1141387; Tue, 04 Jun 2024 08:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPNZ-0007R7-Ls; Tue, 04 Jun 2024 08:18:49 +0000
Received: by outflank-mailman (input) for mailman id 735212;
 Tue, 04 Jun 2024 08:18:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KAXK=NG=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sEPNY-0007Qq-FR
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 08:18:48 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on2061a.outbound.protection.outlook.com
 [2a01:111:f403:2414::61a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0fc77823-224b-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 10:18:46 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by CY8PR12MB8216.namprd12.prod.outlook.com (2603:10b6:930:78::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7611.21; Tue, 4 Jun
 2024 08:18:42 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.021; Tue, 4 Jun 2024
 08:18:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fc77823-224b-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OOD6JKWydb2a5m5LGu6ACy+P8PJDVnkuLhdHB3JvFI4hdxXnRfLn/T7uv7GrUP4NsFYh9r05rmWI8O9cHdZfBmXu+eKPy0J36TI/6mG1TOJZiLwmfo4RKHXo89iujcKo975M80E/UXg9tNS75O/bsJ5Js578rLKRlNpqZ+3Y5tNr6qLvzNrVLj+RjxdPSYcmvBSP/aHgWKtzQI21bi7/mDvfsWtGFa3Va+35WWbs0bEWP8dE9X0wNMt/lhGgLiOnGWviGp4H72otCAkLoGQqcDb8WzE3fpoUg1cl9GHuvGiE+t5AhMo7oKhUyfn8N1CDRKklGymYBluvTXo651zW2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=++Lg/aGmLc3Bq6sVEngyKnRWXQfleRO5rke1pZtkoG0=;
 b=UlXEe/TC1dmXj6XX2bRM+XXM4I7zeBrkxHGxZvfDjuhr9xIWwCpFV3RWk7W1m4ryADOQDKJLD9lC8a3YOEryu67S6CYI4I++OBjEk5m9AOupg67djq/dCCO7nHsO4nzsCo975Y+Qtva0m3+bQSjXJv5k1L68ZBMy3kl0PrE70MvW++G2s7Av3iQLA1oXuUiM3EFSza9y1qRWaA7rpDtpnvOO5o0Sn9uiAuEMKBm78oZ77DOBiS47vuJuQjo9ZoB/jbkatkAeYjFU5O2pOIKSDR2/c9TXT42c65MaUZNtx0Y/vvhT7+TFJgtQBTzezcwEHpvMAt6EqgNdhJ4yQ2VEoQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=++Lg/aGmLc3Bq6sVEngyKnRWXQfleRO5rke1pZtkoG0=;
 b=Q6uRfs+OS+TY3lDpComQVv/nqF42DYvMX1/QtUrw8Bvrh78D1YAhz28KazhNcLNgq+wuxgRKh0D3xJvLQNM96AJGdMOIoXIqeQ+36GHa4bJNlUUoDVpG6EA7zJ6cMPD4OEVfGZckzRrn0TbwUFoyqE6AbOiwgG4y2GDibIP9vMc=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHap3bgiER4vYjwvk2+R5oTa8V63LGZ5EYAgAHVa4D//4e1AIAAis+A//+FmgCAEsTugP//viAAABEhvwD//4GYgIAAoyuA//+0FICAAgSCAP//yCWAgAeLLwD//6n2gAAQ1XyAABA5GQAADz4pgAAu2vkAAEmxvAA=
Date: Tue, 4 Jun 2024 08:18:42 +0000
Message-ID:
 <BL1PR12MB58490E8F1F26532B0FDFFFD6E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <BL1PR12MB5849333D416160492A7475E2E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
In-Reply-To: <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.017)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|CY8PR12MB8216:EE_
x-ms-office365-filtering-correlation-id: b824f69a-c252-4ba2-bf5f-08dc846ef23d
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <F92A21E268EC7F49ACD1C7E638655663@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b824f69a-c252-4ba2-bf5f-08dc846ef23d
X-MS-Exchange-CrossTenant-originalarrivaltime: 04 Jun 2024 08:18:42.4788
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: FbFWmr5nHOC8sB9XZ3uoKniqUPtUt7eQNbiNQyWpi8YFIvNT/S1tCSoVCG5ViSMFpB+aj3eV3/sA08xfPPGlsA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB8216

On 2024/6/4 14:36, Jan Beulich wrote:
> On 04.06.2024 08:33, Chen, Jiqian wrote:
>> On 2024/6/4 14:12, Jan Beulich wrote:
>>> On 04.06.2024 08:01, Chen, Jiqian wrote:
>>>> On 2024/6/4 13:55, Jan Beulich wrote:
>>>>> On 04.06.2024 05:04, Chen, Jiqian wrote:
>>>>>> On 2024/5/30 23:51, Jan Beulich wrote:
>>>>>>> On 30.05.2024 13:19, Chen, Jiqian wrote:
>>>>>>>> It seems only Legacy irq and gsi[0:15] has a mapping in mp_irqs.
>>>>>>>> Other gsi can be considered 1:1 mapping with irq? Or are there other places reflect the mapping between irq and gsi?
>>>>>>>
>>>>>>> It may be uncommon to have overrides for higher GSIs, but I don't think ACPI
>>>>>>> disallows that.
>>>>>> Do you suggest me to add overrides for higher GSIs into array mp_irqs?
>>>>>
>>>>> Why "add"? That's what mp_override_legacy_irq() already does, isn't it?
>>>> No. mp_override_legacy_irq only overrides for gsi < 16, but not for gsi >= 16 (I dumped all mappings from array mp_irqs).
>>>
>>> I assume you mean you observe so ...
>> No, after starting xen pvh dom0, I dumped all entries from mp_irqs.
> 
> IOW really your answer is "yes" ...
> 
>>>> In my environment, gsi of my dGPU is 24.
>>>
>>> ... on one specific system?
> 
> ... to this question I raised. Whatever you dump on any number of
> systems, there's always the chance that there's another system
> where things are different.
> 
>>> The function is invoked from
>>> acpi_parse_int_src_ovr(), and I can't spot any restriction to
>>> IRQs less than 16 there.
>> I didn't see any restriction either, but from the dump results there are only 16 entries; see previous email.
> 
> Hence why I tried to point out that going from observations on a
> particular system isn't enough.
Anyway, I agree with you that I need to get the mapping from mp_irqs.
I tried to get more debug information from my environment. I attach it here; maybe you can find some problems.
acpi_parse_madt_ioapic_entries
	acpi_table_parse_madt(ACPI_MADT_TYPE_INTERRUPT_OVERRIDE, acpi_parse_int_src_ovr, MAX_IRQ_SOURCES);
		acpi_parse_int_src_ovr
			mp_override_legacy_irq
				only processes two entries, irq 0 gsi 2 and irq 9 gsi 9
There are only two entries whose type is ACPI_MADT_TYPE_INTERRUPT_OVERRIDE in the MADT table. Is that normal?
And
acpi_parse_madt_ioapic_entries
	mp_config_acpi_legacy_irqs
		processes the other GSIs (< 16), so that the total number of mp_irqs is 16.

> 
> Jan

-- 
Best regards,
Jiqian Chen.

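[Editorial note: the message above traces how only two ACPI_MADT_TYPE_INTERRUPT_OVERRIDE entries are found while walking the MADT. As a rough illustration of such a subtable walk, here is a self-contained sketch; the entry layout, names, and handler interface are hypothetical and this is not the actual acpi_table_parse_madt() implementation, only the shape of it.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical, simplified model of MADT subtables: each entry starts
 * with a type byte and a length byte, packed back to back (as in the
 * real MADT, though real entries carry more fields).
 */
enum {
    MADT_TYPE_LOCAL_APIC         = 0,
    MADT_TYPE_IO_APIC            = 1,
    MADT_TYPE_INTERRUPT_OVERRIDE = 2,
};

struct madt_entry_hdr {
    uint8_t type;
    uint8_t length;
};

/* Walk the packed subtable area, calling 'handler' (may be NULL) for
 * each entry of the wanted type; returns the number of matches and
 * stops on a malformed entry. */
static int parse_madt(const uint8_t *buf, size_t len, uint8_t want,
                      void (*handler)(const struct madt_entry_hdr *))
{
    size_t off = 0;
    int count = 0;

    while ( off + sizeof(struct madt_entry_hdr) <= len )
    {
        const struct madt_entry_hdr *h =
            (const struct madt_entry_hdr *)(buf + off);

        if ( h->length < sizeof(*h) || off + h->length > len )
            break;   /* malformed entry: stop walking */
        if ( h->type == want )
        {
            if ( handler )
                handler(h);
            ++count;
        }
        off += h->length;
    }
    return count;
}

/* A toy subtable area with exactly two override entries, matching the
 * counts reported in the message above. */
static const uint8_t sample_madt[] = {
    0, 4, 0, 0,   /* local APIC */
    2, 4, 0, 0,   /* interrupt source override, e.g. irq 0 -> gsi 2 */
    2, 4, 0, 0,   /* interrupt source override, e.g. irq 9 -> gsi 9 */
    1, 4, 0, 0,   /* I/O APIC */
};
```

A firmware table with only two override entries is thus entirely plausible; the walk reports however many entries the firmware chose to emit, for any GSI range.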

From xen-devel-bounces@lists.xenproject.org Tue Jun 04 08:43:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 08:43:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735221.1141396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPl4-00039k-Jd; Tue, 04 Jun 2024 08:43:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735221.1141396; Tue, 04 Jun 2024 08:43:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPl4-00039d-HA; Tue, 04 Jun 2024 08:43:06 +0000
Received: by outflank-mailman (input) for mailman id 735221;
 Tue, 04 Jun 2024 08:43:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dlb7=NG=cloud.com=christian.lindig@srs-se1.protection.inumbo.net>)
 id 1sEPl3-00039X-BY
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 08:43:05 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74c0d093-224e-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 10:43:03 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-57a7dc13aabso810088a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 04 Jun 2024 01:43:03 -0700 (PDT)
Received: from smtpclient.apple ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57a31be52b5sm7016308a12.49.2024.06.04.01.43.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 04 Jun 2024 01:43:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74c0d093-224e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1717490583; x=1718095383; darn=lists.xenproject.org;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=L9Tets7h3NsZMAsapgrVGAo8127mGFlmmAtlnR/KiUg=;
        b=Qbeza96NwH7nhGpY22h/4ntkJzkiMQEvpSh46eybWfVS8RBoRaSnzzNUljRqmOS4RL
         usqKAF83V6j2L+eNHKDfk+w0zOa18y3pXTe5wA9pvhs/kcAxepf5anWz22cipMyOs9l/
         u8fIadXWknoS0fVxwf6hS9o/rzvUlCxxqUdAE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717490583; x=1718095383;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=L9Tets7h3NsZMAsapgrVGAo8127mGFlmmAtlnR/KiUg=;
        b=lJ0CgWc2z6BMoLlLvvIsiyP6o33k91F7UOa2za2B6V6C45napaROm04DgHkcjiy9uI
         pUdbfrVF0A9k3wX2zX29mfwr4M6DEegA0TlRG5EEHwVNKb21z7riSoiRIW1Sp0ohm6oa
         pZolacFuRN2dUg4lWoI3U5H3xdBZHIHEsAZFUg9N6hT2Nl28paGDS9f9ldmJxwiW4L7M
         0Pke+0TIFnTy4OdA6sNNQLR2yTI0NSxHflQ22gnKdoN1qDAYxPvMV4Qi3S7uEq7IQDZk
         n+yzC3uUKZwYiHr01RTDPFCyVbLArn/J+2+TvJP62Y5tVhm1vhlzT00Q9Rzl5qwIQ8Pd
         U+kQ==
X-Gm-Message-State: AOJu0YzrfCxbhoCwe6He2a48vWDYykewpz4OGDes6xdDm4DjMFbIRBaw
	XqXew5iT+ODXlttMC8H+y8P+nd8c3mMIxMYhRnjK775nkBbDhOkrbCCd67Ea8+w=
X-Google-Smtp-Source: AGHT+IG7vFWFEePmBQtNy0TcRKWm61hZ2HhdXLsuZ+EBew5aK50/X3qWBsJ1Mj9PSSO1JvbKGooMiA==
X-Received: by 2002:a50:bb6f:0:b0:57a:4ff8:2f11 with SMTP id 4fb4d7f45d1cf-57a4ff82f64mr5424060a12.5.1717490582831;
        Tue, 04 Jun 2024 01:43:02 -0700 (PDT)
Content-Type: text/plain;
	charset=utf-8
Mime-Version: 1.0 (Mac OS X Mail 16.0 \(3774.500.171.1.1\))
Subject: Re: [PATCH for-4.19? v5 00/10] x86: Make MAX_ALTP2M configurable
From: Christian Lindig <christian.lindig@cloud.com>
In-Reply-To: <cover.1717356829.git.w1benny@gmail.com>
Date: Tue, 4 Jun 2024 09:42:51 +0100
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Christian Lindig <christian.lindig@citrix.com>,
 David Scott <dave@recoil.org>,
 Anthony PERARD <anthony@xenproject.org>,
 Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Nick Rosbrook <rosbrookn@gmail.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <2CDBC752-7870-4C61-A027-69FBF5854AF4@cloud.com>
References: <cover.1717356829.git.w1benny@gmail.com>
To: =?utf-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
X-Mailer: Apple Mail (2.3774.500.171.1.1)



> On 2 Jun 2024, at 21:04, Petr Beneš <w1benny@gmail.com> wrote:
> 
> tools/ocaml/libs/xc/xenctrl.ml       |   2 +
> tools/ocaml/libs/xc/xenctrl.mli      |   2 +
> tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---

Acked-by: Christian Lindig <christian.lindig@cloud.com>



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 08:46:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 08:46:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735226.1141406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPoa-0003ho-2V; Tue, 04 Jun 2024 08:46:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735226.1141406; Tue, 04 Jun 2024 08:46:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPoZ-0003hh-W2; Tue, 04 Jun 2024 08:46:43 +0000
Received: by outflank-mailman (input) for mailman id 735226;
 Tue, 04 Jun 2024 08:46:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WX0/=NG=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sEPoZ-0003hb-G1
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 08:46:43 +0000
Received: from pb-smtp21.pobox.com (pb-smtp21.pobox.com [173.228.157.53])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f5d8c29b-224e-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 10:46:41 +0200 (CEST)
Received: from pb-smtp21.pobox.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 348B8196BB;
 Tue,  4 Jun 2024 04:46:39 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp21.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp21.pobox.com (Postfix) with ESMTP id 2D4EC196BA;
 Tue,  4 Jun 2024 04:46:39 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp21.pobox.com (Postfix) with ESMTPSA id 0865C196B8;
 Tue,  4 Jun 2024 04:46:36 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5d8c29b-224e-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:mime-version:content-transfer-encoding;
	 s=sasl; bh=P9ViT+kByA/nS2oeEkrB3i3OCLuzrwsXL0AtZYq0SDs=; b=sztl
	ItHCErinkS2tgsfI1DY82krVXhIK4EVp/o4zHxKiRTeATPmi+1jqZkclsaXXRo0i
	rkpmOH7MudPwDrMe3I/bAdD2gmB4hlhdjUyEJkyXDZFtzULP/5wpbgoMIdijt5e7
	ppZClWS2zmMoS44Vy8mJiBz3pZIh79xC4otr7xY=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [XEN PATCH] x86/cpufreq: clean up stale powernow_cpufreq_init()
Date: Tue,  4 Jun 2024 11:46:29 +0300
Message-Id: <20240604084629.2418430-1-Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Pobox-Relay-ID:
 F38A66C8-224E-11EF-83AA-8F8B087618E4-90055647!pb-smtp21.pobox.com
Content-Transfer-Encoding: quoted-printable

Remove a useless declaration. The routine itself was removed long ago by
the following commit:

   222013114 x86: Fix RevF detection in powernow.c

No functional change.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 xen/include/acpi/cpufreq/processor_perf.h | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/include/acpi/cpufreq/processor_perf.h b/xen/include/acpi/cpufreq/processor_perf.h
index 5f48aceadb..301104e16f 100644
--- a/xen/include/acpi/cpufreq/processor_perf.h
+++ b/xen/include/acpi/cpufreq/processor_perf.h
@@ -7,7 +7,6 @@
 
 #define XEN_PX_INIT 0x80000000U
 
-int powernow_cpufreq_init(void);
 unsigned int powernow_register_driver(void);
 unsigned int get_measured_perf(unsigned int cpu, unsigned int flag);
 void cpufreq_residency_update(unsigned int cpu, uint8_t state);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 08:54:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 08:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735232.1141416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPwR-0005Yh-0w; Tue, 04 Jun 2024 08:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735232.1141416; Tue, 04 Jun 2024 08:54:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEPwQ-0005Ya-Ts; Tue, 04 Jun 2024 08:54:50 +0000
Received: by outflank-mailman (input) for mailman id 735232;
 Tue, 04 Jun 2024 08:54:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEPwP-0005YQ-Il; Tue, 04 Jun 2024 08:54:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEPwP-000652-4q; Tue, 04 Jun 2024 08:54:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEPwO-0003AM-KB; Tue, 04 Jun 2024 08:54:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEPwO-000819-Jh; Tue, 04 Jun 2024 08:54:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=czRMgMAurMn7rTujBsy/iYy29uwvI5R4Qw2xPbscYBo=; b=qaewrmhkzV0j9+6keB1ut8dhS2
	bSUs0OGIU1vQQFQoq0BX82cU+eho7S3rexh48fV3ChB/GTR1oahhNDICcTcv4WTBUeSYflm64QO5f
	gx9qXGisuj4OuJwPFQUmshp73GoI+GjzXSwbbdVzLB8uKfQtZ0xssdtYIKL4jyG0qBJg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186241-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186241: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 08:54:48 +0000

flight 186241 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186241/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186234
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186234
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186234
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186234
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186234
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186234
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186241  2024-06-04 01:53:39 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 09:09:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 09:09:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735241.1141427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEQAv-0007dS-8I; Tue, 04 Jun 2024 09:09:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735241.1141427; Tue, 04 Jun 2024 09:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEQAv-0007dL-5X; Tue, 04 Jun 2024 09:09:49 +0000
Received: by outflank-mailman (input) for mailman id 735241;
 Tue, 04 Jun 2024 09:09:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+LoY=NG=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sEQAt-0007dF-Tf
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 09:09:48 +0000
Received: from mail-ed1-x531.google.com (mail-ed1-x531.google.com
 [2a00:1450:4864:20::531])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3035958f-2252-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 11:09:46 +0200 (CEST)
Received: by mail-ed1-x531.google.com with SMTP id
 4fb4d7f45d1cf-57a2f032007so5263836a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 04 Jun 2024 02:09:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3035958f-2252-11ef-90a1-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1717492186; x=1718096986; darn=lists.xenproject.org;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=WCTEXu+M717i4W/ttVLdaRuQa6mebIqSuzX8/KvGncE=;
        b=hr+5EtFGTzBlpQMrTsF+ZVjYk6BOJIVC+EAbczrDIjj/m2tRIDSFL1uROc/lnWqKJP
         eguC+dHMMDJelNFfZXTQ0cWGR3xpP7hQOn4+V1VASRNldGLTk3sF6e/rPfUW923BXoiy
         tuIm4Cgp9IBeaHT4BIKuPUH2H1zxm0pONVf14=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717492186; x=1718096986;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=WCTEXu+M717i4W/ttVLdaRuQa6mebIqSuzX8/KvGncE=;
        b=FQb900CE3p7ZkMwzaCEkTq+OTOK2qTi1dAcReoigDJ56yKnQFf2HmA7FsnEzjU+cMG
         ylBPPsgb23uw372il8x6MGtBu+Ium/yuTzEUIZiZxQi9e371XhDmohtEdeufZ8RTFpEH
         ZKdNSxAAOYJWQDRBEjIcF3opGo4dbEZOkZ3SOeGgcen+E6O3McCoziyZCkUYEoezJg3M
         UhzkZxuGwqAInpDocAYXMpvBDRTR3VUrAzxewj3TIBpbhpcwfjBfVd1sgw7/5bYg67j2
         aCsWOsGaosGh1i9nzlRw2i3Kk7qmVZbxEWP+IZL5Sg3VYy0zbkWWrrOyJMdQeNXfUKlL
         hKbg==
X-Gm-Message-State: AOJu0YwOI9hA31vk/D+MN02zSnNCtnGKtZzILB9SRRx+whzurZ8USr6V
	FuNgV1Xz8Qp/6OePVxamt8ImpIdNp5ymVx3k8hpCu+2GHx1Zmca+wxjt5kPTA5RnMRGXQWaISf0
	+xkwRj5eNC7NOe5GGnWEA8KYOrEBEwPmOiDj+hZD15NJKrsfg/X8=
X-Google-Smtp-Source: AGHT+IFB+GEAE02O6tSCkAd5IDs5mAi/iQnQR9I3hAymaIC+Qvs7C1XNMn3NhvfFaHRt4Yy1q53/hQmMKeKOyTWKI0Q=
X-Received: by 2002:a50:8717:0:b0:574:eace:b7bd with SMTP id
 4fb4d7f45d1cf-57a36372cdemr8788700a12.11.1717492185460; Tue, 04 Jun 2024
 02:09:45 -0700 (PDT)
MIME-Version: 1.0
References: <CAO-mL=y+q2iUw-OHkHO96FSg1jfm8aQV-dFsMg4R0VS4+maOXg@mail.gmail.com>
In-Reply-To: <CAO-mL=y+q2iUw-OHkHO96FSg1jfm8aQV-dFsMg4R0VS4+maOXg@mail.gmail.com>
From: Kelly Choi <kelly.choi@cloud.com>
Date: Tue, 4 Jun 2024 10:09:09 +0100
Message-ID: <CAO-mL=wWSkzVEp7cVEdVafnOr+ZhneBfnvgVO2+yGigovFb_rw@mail.gmail.com>
Subject: [ANNOUNCE] Join us TODAY! Free virtual Xen Summit 2024
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org, 
	xen-announce@lists.xenproject.org
Cc: committers@xenproject.org
Content-Type: multipart/alternative; boundary="0000000000009e942a061a0ccf36"

--0000000000009e942a061a0ccf36
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi all,

Don't forget to join us today using Jitsi for Xen Summit!

Many thanks,
Kelly Choi

Community Manager
Xen Project


On Fri, May 31, 2024 at 11:15 AM Kelly Choi <kelly.choi@cloud.com> wrote:

> Hello Xen Community,
>
> Join us virtually at Xen Summit (4-6th June), for free using Jitsi!
>
> We want to encourage as many of you as possible to participate in the Xen
> Summit. Just because you are not physically attending doesn't mean you
> can't get involved.
> *(Please note, there will be no professional AV or audio equipment, and it
> is a community effort to enable others to get involved.) *
>
> On the day, please find the *schedule of talks listed here.*
> <https://events.linuxfoundation.org/xen-project-summit/program/schedule/>
> I've also included some FAQs at the bottom of this email.
>
> *Instructions:*
>
>    - Our team will help host the main presentation talk sessions
>    using Jitsi
>    - Either I or a volunteer attendee will help host each design
>    session using Jitsi
>    - Note that all times listed are in the Lisbon timezone
>    - All sessions on Tuesday 4th June will be streamed using:
>       - *SESSION PRESENTATIONS LINK <https://meet.jit.si/XenSummit24Main>*
>    - All sessions on Wednesday 5th June and Thursday 6th June will use
>    the following links:
>       - *DESIGN SESSION A <https://meet.jit.si/XenDesignSessionsA>*
>       (Liberdade Room)
>       - *DESIGN SESSION B <https://meet.jit.si/XenDesignSessionsB>*
>       (Augusta Room)
>       - Please check the schedule for the time and room of each session
>
>
>    - Thursday design sessions will be finalized on the schedule by the
>    end of day on Wednesday
>    - The same links will be used throughout talks and sessions
>    - (Optional) Join our Xen Summit matrix channel for updates on the
>    day: https://matrix.to/#/#xen-project-summit:matrix.org
>
> *Some ground rules to follow:*
>
>    - Enter your full name on Jitsi so everyone knows who you are
>    - Please mute yourself upon joining
>    - Turning on cameras is optional, but we encourage doing this for
>    design sessions
>    - Do *not* shout out your questions during session presentations;
>    instead, ask them in the chat and we will do our best to ask on
>    your behalf
>    - During design sessions, we encourage you to unmute and participate
>    freely
>    - If multiple people wish to speak, please use the 'raise hand'
>    function on Jitsi or chat
>    - Should there be a need, moderators will have permission to remove
>    anyone who is disruptive in sessions on Jitsi
>    - If you face issues on the day, please let us know via Matrix - we
>    will do our best to help, but please note this is a community effort
>
> *Jitsi links:*
>
> Session presentation link:
>
> https://meet.jit.si/XenSummit24Main
>
> Design Session A (Liberdade Room) link:
> https://meet.jit.si/XenDesignSessionsA
>
> Design Session B (Augusta Room) link:
> https://meet.jit.si/XenDesignSessionsB
>
> See meeting dial-in numbers:
> https://meet.jit.si/static/dialInInfo.html?room=XenSummit24Main
>
> If also dialing in through a room phone, join without connecting to audio:
> https://meet.jit.si/XenSummit24Main#config.startSilent=true
>
> See meeting dial-in numbers:
> https://meet.jit.si/static/dialInInfo.html?room=XenDesignSessionsA
>
> If also dialing in through a room phone, join without connecting to audio:
> https://meet.jit.si/XenDesignSessionsA#config.startSilent=true
>
> See meeting dial-in numbers:
> https://meet.jit.si/static/dialInInfo.html?room=XenDesignSessionsB
>
> If also dialing in through a room phone, join without connecting to audio:
> https://meet.jit.si/XenDesignSessionsB#config.startSilent=true
>
> *Schedule Example:* Tuesday 4th & Wednesday 5th June 2024
> (Lisbon timezone)
>
> 09:00
> Welcome & Opening Remarks - Kelly Choi, Community Manager, Cloud Software
> Group, XenServer
>
> 09:10
> Xen Project 2024 Weather Report - Kelly Choi, XenServer, Cloud Software
> Group
> LIBERDADE I
>
> *Schedule Example:* Wednesday 5th & Thursday 6th June 2024
> (Lisbon timezone)
>
> LIBERDADE I
>
> 09:10
> Challenges and Status of Enabling TrenchBoot in Xen Hypervisor - Michał
> Żygowski & Piotr Król, 3mdeb
>
> 13:45
> The future of Xen Project physical events
> <https://design-sessions.xenproject.org/uid/discussion/disc_OlJce1uK3uI0OjS2cL55/view>
>
> 14:35
> IOMMU paravirtualization and Xen IOMMU subsystem rework
> <https://design-sessions.xenproject.org/uid/discussion/disc_PEgBNIXMyEkdJoysE27O/view>
>
> AUGUSTA I
>
> 09:10
> Using Xenalyze for Performance Analysis - George Dunlap, XenServer
>
> 13:45
> Downstream working group
> <https://design-sessions.xenproject.org/uid/discussion/disc_z78Lt2EIZt2qxaS3FSlQ/view>
>
> 14:35
> Xen Safety Requirements upstreaming
> <https://design-sessions.xenproject.org/uid/discussion/disc_01JE9EOi9zxxAU8lfB6Z/view>
>
>
>
> *How to filter the schedule:*
>
>
> *FAQs:*
>
>    - *My company would like to sponsor the next Xen Summit, how do I get
>    involved?*
>       - *Please email community.manager@xenproject.org with your
>       interest*
>    - *Are sessions recorded?*
>       - *Yes, all talks are recorded and will be available on YouTube
>       after the event. Design sessions on Day 3 are not recorded. *
>    - *Can I write about my experience at Xen Summit?*
>       - *Yes! We encourage the community to spread the word through
>       social media. Please tag us on X or LinkedIn so that we can
>       reshare.*
>    - *Can we see the presenter's slides?*
>       - *Some presenters may have uploaded their slides in advance. If
>       available, you can view these by clicking on each session in the
>       screenshot above.*
>
> We look forward to seeing you there, both virtually and physically!
>
> Many thanks,
> Kelly Choi
>
> Community Manager
> Xen Project
>

ly:Arial,sans-serif;background-color:transparent;font-variant-numeric:norma=
l;font-variant-east-asian:normal;font-variant-alternates:normal;vertical-al=
ign:baseline"><br></span></p><p dir=3D"ltr" style=3D"line-height:1.656;marg=
in-top:0pt;margin-bottom:0pt"><br></p><p dir=3D"ltr" style=3D"line-height:1=
.656;margin-top:0pt;margin-bottom:0pt"><span style=3D"font-size:11pt;font-f=
amily:Arial,sans-serif;background-color:transparent;font-variant-numeric:no=
rmal;font-variant-east-asian:normal;font-variant-alternates:normal;vertical=
-align:baseline">13:45=C2=A0</span></p><p dir=3D"ltr" style=3D"line-height:=
1.656;margin-top:0pt;margin-bottom:0pt"><a href=3D"https://design-sessions.=
xenproject.org/uid/discussion/disc_z78Lt2EIZt2qxaS3FSlQ/view" style=3D"text=
-decoration-line:none" target=3D"_blank"><span style=3D"font-size:11pt;font=
-family:Arial,sans-serif;font-variant-numeric:normal;font-variant-east-asia=
n:normal;font-variant-alternates:normal;text-decoration-line:underline;vert=
ical-align:baseline">Downstream working group</span></a></p><p dir=3D"ltr" =
style=3D"line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style=3D=
"font-size:11pt;font-family:Arial,sans-serif;background-color:transparent;f=
ont-variant-numeric:normal;font-variant-east-asian:normal;font-variant-alte=
rnates:normal;vertical-align:baseline">AUGUSTA I</span></p><br><p dir=3D"lt=
r" style=3D"line-height:1.656;margin-top:0pt;margin-bottom:0pt"><span style=
=3D"font-size:11pt;font-family:Arial,sans-serif;background-color:transparen=
t;font-variant-numeric:normal;font-variant-east-asian:normal;font-variant-a=
lternates:normal;vertical-align:baseline">14:35=C2=A0</span></p><p dir=3D"l=
tr" style=3D"line-height:1.656;margin-top:0pt;margin-bottom:0pt"><a href=3D=
"https://design-sessions.xenproject.org/uid/discussion/disc_01JE9EOi9zxxAU8=
lfB6Z/view" style=3D"text-decoration-line:none" target=3D"_blank"><span sty=
le=3D"font-size:11pt;font-family:Arial,sans-serif;font-variant-numeric:norm=
al;font-variant-east-asian:normal;font-variant-alternates:normal;text-decor=
ation-line:underline;vertical-align:baseline">Xen Safety Requirements upstr=
eaming</span></a></p><p dir=3D"ltr" style=3D"line-height:1.656;margin-top:0=
pt;margin-bottom:0pt"><span style=3D"font-size:11pt;font-family:Arial,sans-=
serif;background-color:transparent;font-variant-numeric:normal;font-variant=
-east-asian:normal;font-variant-alternates:normal;vertical-align:baseline">=
AUGUSTA I</span></p><br><br></td></tr></tbody></table></div></span></div><d=
iv><span id=3D"m_-1299433891387361475m_-3807238046455862930m_30172008697483=
70048m_2927081184909999839m_-8370998895675947915gmail-docs-internal-guid-7c=
024368-7fff-f910-ae50-70b83521ee30"><div><br></div><b><u>How to filter the =
schedule:</u></b><br><span style=3D"border:none;display:inline-block;overfl=
ow:hidden;width:624px;height:411px"><img src=3D"https://lh7-us.googleuserco=
ntent.com/nJqdiQUkRhper7xaNfEg5kdR7Jsn_7Mb7Muwgz_IaImd4-ZOK0D5o9_jypDXHX2yb=
CsevBnzCISatQZxOIu5eEkBWWsYjYpXNCTw3khSi5qySJoq7hzZ5HPsoLxAJOQ5KjEzJXfvgVnL=
c4s_53AqXuE" width=3D"624" height=3D"411" style=3D"outline: 0px; margin-lef=
t: 0px; margin-top: 0px;"></span></span><br></div><div><div dir=3D"ltr" cla=
ss=3D"gmail_signature"><div dir=3D"ltr"><div><br></div><div><b><u>FAQs:</u>=
</b></div><div><ul><li><b>My company would like to sponsor the next Xen Sum=
mit, how do I get involved?</b></li><ul><li><i>Please email=C2=A0</i><a cla=
ss=3D"gmail_plusreply" id=3D"m_-1299433891387361475m_8251240063996215973plu=
sReplyChip-1" href=3D"mailto:community.manager@xenproject.org" target=3D"_b=
lank">@communitymanager</a><i>=C2=A0with your interest=C2=A0</i></li></ul><=
li><b>Are sessions recorded?</b></li><ul><li><i>Yes, all talks are recorded=
 and will be available=C2=A0on YouTube after the event. Design sessions on =
Day 3 are not recorded.=C2=A0</i></li></ul><li><b>Can I write about my expe=
rience at Xen Summit?</b></li><ul><li><i>Yes! We encourage the community to=
 spread the word through social media. Please tag us on X or LinkedIn so th=
at we can reshare.=C2=A0</i></li></ul><li><i><b>Can we see the presenter&#3=
9;s slides?</b><br></i></li><ul><li><i>Some presenters may have uploaded th=
eir slides in advance. If available, you can view these by clicking on each=
 session in the screenshot above.=C2=A0</i></li></ul></ul></div><div>We loo=
k forward to seeing you there, both virtually and physically!</div><div><br=
></div></div></div></div><div><div dir=3D"ltr" class=3D"gmail_signature"><d=
iv dir=3D"ltr"><div>Many thanks,</div><div>Kelly Choi</div><div><br></div><=
div><div style=3D"color:rgb(136,136,136)">Community Manager</div><div style=
=3D"color:rgb(136,136,136)">Xen Project=C2=A0<br></div></div></div></div></=
div></div>
</blockquote></div></div>

--0000000000009e942a061a0ccf36--


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 09:34:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 09:34:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735297.1141465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEQYd-0005f4-1Y; Tue, 04 Jun 2024 09:34:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735297.1141465; Tue, 04 Jun 2024 09:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEQYc-0005ex-Ud; Tue, 04 Jun 2024 09:34:18 +0000
Received: by outflank-mailman (input) for mailman id 735297;
 Tue, 04 Jun 2024 09:34:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WX0/=NG=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sEQYb-0005er-H8
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 09:34:17 +0000
Received: from pb-smtp20.pobox.com (pb-smtp20.pobox.com [173.228.157.52])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a98104e-2255-11ef-b4bb-af5377834399;
 Tue, 04 Jun 2024 11:34:14 +0200 (CEST)
Received: from pb-smtp20.pobox.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id 997412F5DB;
 Tue,  4 Jun 2024 05:34:12 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp20.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id 91B252F5DA;
 Tue,  4 Jun 2024 05:34:12 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.75])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp20.pobox.com (Postfix) with ESMTPSA id ABF432F5D9;
 Tue,  4 Jun 2024 05:34:09 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a98104e-2255-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:mime-version:content-transfer-encoding;
	 s=sasl; bh=xPhBlxMah8ylw2Vp341Ssc/9KhHi6LZUJO+fI51M1Xw=; b=ubsG
	NxbTQMv4tRUxnxFLHSBGco9H1CB7ZrM0tD/ZtvLZJp86kAbat3WbDYz+U80ExSXO
	RO/67DxLMAXQnN+nsMOFComgB9hWgBqDPxRW47kczV5zPm/4C3XkhI9wJ6zTGQlP
	jbfYJZT5ukmJQi+rtd2n2SAlxDEf58CoSYY7VyI=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jason Andryuk <jason.andryuk@amd.com>
Subject: [XEN PATCH v1] x86/cpufreq: separate powernow/hwp cpufreq code
Date: Tue,  4 Jun 2024 12:34:06 +0300
Message-Id: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Pobox-Relay-ID:
 98762248-2255-11EF-A541-ACC938F0AE34-90055647!pb-smtp20.pobox.com
Content-Transfer-Encoding: quoted-printable

Build the AMD Architectural P-state driver when CONFIG_AMD is on, and the
Intel Hardware P-States driver when CONFIG_INTEL is on, respectively,
allowing for a platform-specific build.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
CC: Jason Andryuk <jason.andryuk@amd.com>
---
 xen/arch/x86/acpi/cpufreq/Makefile  |  4 ++--
 xen/arch/x86/acpi/cpufreq/cpufreq.c |  2 +-
 xen/include/acpi/cpufreq/cpufreq.h  | 32 +++++++++++++++++++++++++++++
 3 files changed, 35 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/acpi/cpufreq/Makefile b/xen/arch/x86/acpi/cpufr=
eq/Makefile
index db83aa6b14..527ff20f5a 100644
--- a/xen/arch/x86/acpi/cpufreq/Makefile
+++ b/xen/arch/x86/acpi/cpufreq/Makefile
@@ -1,3 +1,3 @@
 obj-y += cpufreq.o
-obj-y += hwp.o
-obj-y += powernow.o
+obj-$(CONFIG_INTEL) += hwp.o
+obj-$(CONFIG_AMD) += powernow.o
diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpuf=
req/cpufreq.c
index a341f2f020..a89f3ed03a 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -657,7 +657,7 @@ static int __init cf_check cpufreq_driver_init(void)
 
         case X86_VENDOR_AMD:
         case X86_VENDOR_HYGON:
-            ret = powernow_register_driver();
+            ret = IS_ENABLED(CONFIG_AMD) ? powernow_register_driver() : -ENODEV;
             break;
         }
     }
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufre=
q/cpufreq.h
index 443427153b..bc0c9a2b9f 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -252,6 +252,7 @@ void cpufreq_dbs_timer_resume(void);
 
 void intel_feature_detect(struct cpufreq_policy *policy);
 
+#ifdef CONFIG_INTEL
 int hwp_cmdline_parse(const char *s, const char *e);
 int hwp_register_driver(void);
 bool hwp_active(void);
@@ -260,4 +261,35 @@ int get_hwp_para(unsigned int cpu,
 int set_hwp_para(struct cpufreq_policy *policy,
                  struct xen_set_cppc_para *set_cppc);
 
+#else
+
+static inline int hwp_cmdline_parse(const char *s, const char *e)
+{
+    return -EINVAL;
+}
+
+static inline int hwp_register_driver(void)
+{
+    return -ENODEV;
+}
+
+static inline bool hwp_active(void)
+{
+    return false;
+}
+
+static inline int get_hwp_para(unsigned int cpu,
+                               struct xen_cppc_para *cppc_para)
+{
+    return -EINVAL;
+}
+
+static inline int set_hwp_para(struct cpufreq_policy *policy,
+                               struct xen_set_cppc_para *set_cppc)
+{
+    return -EINVAL;
+}
+
+#endif /* CONFIG_INTEL */
+
 #endif /* __XEN_CPUFREQ_PM_H__ */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 09:57:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 09:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735305.1141474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEQuw-0000sw-RO; Tue, 04 Jun 2024 09:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735305.1141474; Tue, 04 Jun 2024 09:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEQuw-0000sp-OO; Tue, 04 Jun 2024 09:57:22 +0000
Received: by outflank-mailman (input) for mailman id 735305;
 Tue, 04 Jun 2024 09:57:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEQuv-0000sf-9G; Tue, 04 Jun 2024 09:57:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEQuu-000775-Vd; Tue, 04 Jun 2024 09:57:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEQuu-0004tW-MZ; Tue, 04 Jun 2024 09:57:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEQuu-0004mO-M4; Tue, 04 Jun 2024 09:57:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Px8pXpGm8mDdUado2zE0WIR7K138b89DrHS6LciIy9U=; b=S0eT2LIat8bKHSBo7n0kFQ8AC5
	ljNZFwvQZxHkmbsZHyz+euVD20219JkpAqc1/ZWfY8ydUdLOW4QER7OxvBaZ5lwJZLAERKmNvRQI8
	hl/PQyRncMWITJVTW/bOlbiNeebp5f8h0gfYsKbnwbys02YB1WWBgsPBf1+SksGbjb68=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186245-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186245: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=077760fec40c2e5fcae274cc609d97aee12e5d56
X-Osstest-Versions-That:
    ovmf=27b044605cd5f6b33a3d231576003850b3fe305b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 09:57:20 +0000

flight 186245 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186245/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 077760fec40c2e5fcae274cc609d97aee12e5d56
baseline version:
 ovmf                 27b044605cd5f6b33a3d231576003850b3fe305b

Last test of basis   186240  2024-06-03 20:43:34 Z    0 days
Testing same since   186245  2024-06-04 07:42:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dun Tan <dun.tan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   27b044605c..077760fec4  077760fec40c2e5fcae274cc609d97aee12e5d56 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 10:43:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 10:43:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735319.1141485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sERdT-0008Nb-71; Tue, 04 Jun 2024 10:43:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735319.1141485; Tue, 04 Jun 2024 10:43:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sERdT-0008NU-30; Tue, 04 Jun 2024 10:43:23 +0000
Received: by outflank-mailman (input) for mailman id 735319;
 Tue, 04 Jun 2024 10:43:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sERdS-0008NK-K2; Tue, 04 Jun 2024 10:43:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sERdS-0007zE-8e; Tue, 04 Jun 2024 10:43:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sERdR-0005zy-V5; Tue, 04 Jun 2024 10:43:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sERdR-0002CA-Ua; Tue, 04 Jun 2024 10:43:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WIWT94CafAQzmNLb4iPZ3BWN9N4qMkEoy4cjKVPO2aA=; b=bGlmYxHHcAPZMpRKtRPmDNBf/7
	FCui2b7WvjCBiLmA6hxkqOBLiHzTfpvxxXfkJgepCDkLADYn2/DOkEBbUbGNaO13696j4VX+4+IzZ
	5GzgG/gmYMqnacNtpt0yvrsDdfEIfU++gImk6BOelckytmZWBXDEZme2V/NOPi5HYtEA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186243-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186243: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=83bed4367e76e9003479a8d7bd5cbee080d80017
X-Osstest-Versions-That:
    libvirt=5fa180bc7756102c3af5fcbeb4c61109c4d0e829
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 10:43:21 +0000

flight 186243 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186243/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186218
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              83bed4367e76e9003479a8d7bd5cbee080d80017
baseline version:
 libvirt              5fa180bc7756102c3af5fcbeb4c61109c4d0e829

Last test of basis   186218  2024-06-01 04:20:30 Z    3 days
Testing same since   186243  2024-06-04 04:20:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Sergey A." <sw@atrus.ru>
  Andrea Bolognani <abologna@redhat.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Jiri Denemark <jdenemar@redhat.com>
  Sergey A <sw@atrus.ru>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   5fa180bc77..83bed4367e  83bed4367e76e9003479a8d7bd5cbee080d80017 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 10:50:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 10:50:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735327.1141495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sERkO-0001cg-Sd; Tue, 04 Jun 2024 10:50:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735327.1141495; Tue, 04 Jun 2024 10:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sERkO-0001cZ-Q7; Tue, 04 Jun 2024 10:50:32 +0000
Received: by outflank-mailman (input) for mailman id 735327;
 Tue, 04 Jun 2024 10:50:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sERkN-0001cT-CF
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 10:50:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sERkM-00088Q-RG; Tue, 04 Jun 2024 10:50:30 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sERkM-0000gn-Io; Tue, 04 Jun 2024 10:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=z6vpzpsfxfMyMDwp7y9YIOP9EZv+qTKBIiUzP933kQ8=; b=xO2R9AL+pLfrSeiUMHW2AW66S0
	+RDWG0tO4XHhA5Bj5CLilqkidi0A0khhAbYQXUTq7iOgmH06rIHkxI8Tl2fN6BpxXHjJQjD9lus3t
	asqFpt8T4hw3IfRltmom2MQcGV+lJ+G4N1v3sQ1WYOKJHLzk/qrFClSqgm/glKC1VFes=;
Message-ID: <9e62b5d9-9c80-4f7c-9cc6-3b863f0c90ad@xen.org>
Date: Tue, 4 Jun 2024 11:50:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH v2] arm: dom0less: add TEE support
Content-Language: en-GB
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>
References: <20240531174915.1679443-1-volodymyr_babchuk@epam.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240531174915.1679443-1-volodymyr_babchuk@epam.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Volodymyr,

On 31/05/2024 18:49, Volodymyr Babchuk wrote:
> Extend the TEE mediator interface with two functions:
> 
>   - tee_get_type_from_dts() returns TEE type based on input string
>   - tee_make_dtb_node() creates a DTB entry for the selected
>     TEE mediator
> 
> Use those new functions to parse "xen,tee" DTS property for dom0less
> guests and enable appropriate TEE mediator.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> 
> ---
> 
> This is still RFC because I am not happy with how I decide if
> tee_make_dtb_node() needs to be called.
> 
> TEE type is stored in d_cfg, but d_cfg is not passed to
> construct_domU()->prepare_dtb_domU(). So right now I am relying on
> the fact that every TEE mediator initializes d->arch.tee.
> 
> Also, I am sorry about the previous, completely botched version of this
> patch. I really messed it up, including the absence of the [RFC] tag :(

That's fine. We have all sent a botched patch at least once :). Some 
comments below on the series.

> 
> ---
>   docs/misc/arm/device-tree/booting.txt | 17 +++++++++++++
>   xen/arch/arm/dom0less-build.c         | 19 +++++++++++++++
>   xen/arch/arm/include/asm/tee/tee.h    | 13 ++++++++++
>   xen/arch/arm/tee/ffa.c                |  8 ++++++
>   xen/arch/arm/tee/optee.c              | 35 +++++++++++++++++++++++++++
>   xen/arch/arm/tee/tee.c                | 21 ++++++++++++++++
>   6 files changed, 113 insertions(+)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index bbd955e9c2..711a6080b5 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -231,6 +231,23 @@ with the following properties:
>       In the future other possible property values might be added to
>       enable only selected interfaces.
>   
> +- xen,tee
> +
> +    A string property that describes what TEE mediator should be enabled
> +    for the domain. Possible property values are:
> +
> +    - "none" (or missing property value)
> +    No TEE will be available in the VM.
> +
> +    - "OP-TEE"
> +    VM will have access to the OP-TEE using classic OP-TEE SMC interface.
> +
> +    - "FF-A"
> +    VM will have access to a TEE using generic FF-A interface.

I understand why you chose those names, but it also means we are using 
different names in XL and dom0less. I would prefer they were the same.

> +
> +    In the future other TEE mediators may be added, extending possible
> +    values for this property.
> +
>   - xen,domain-p2m-mem-mb
>   
>       Optional. A 32-bit integer specifying the amount of megabytes of RAM
> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
> index fb63ec6fd1..13fdd44eef 100644
> --- a/xen/arch/arm/dom0less-build.c
> +++ b/xen/arch/arm/dom0less-build.c
> @@ -15,6 +15,7 @@
>   #include <asm/domain_build.h>
>   #include <asm/static-memory.h>
>   #include <asm/static-shmem.h>
> +#include <asm/tee/tee.h>
>   
>   bool __init is_dom0less_mode(void)
>   {
> @@ -650,6 +651,10 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
>       if ( ret )
>           goto err;
>   
> +    /* We are making assumption that every mediator sets d->arch.tee */
> +    if ( d->arch.tee )

I think the assumption is OK. I would consider moving this check into 
each TEE callback. IOW, call tee_make_dtb_node() unconditionally.

> +        tee_make_dtb_node(kinfo->fdt);

AFAICT, tee_make_dtb_node() can return an error. So please check the 
return value.

> +
>       /*
>        * domain_handle_dtb_bootmodule has to be called before the rest of
>        * the device tree is generated because it depends on the value of
> @@ -871,6 +876,7 @@ void __init create_domUs(void)
>           unsigned int flags = 0U;
>           uint32_t val;
>           int rc;
> +        const char *tee_name;
>   
>           if ( !dt_device_is_compatible(node, "xen,domain") )
>               continue;
> @@ -881,6 +887,19 @@ void __init create_domUs(void)
>           if ( dt_find_property(node, "xen,static-mem", NULL) )
>               flags |= CDF_staticmem;
>   
> +        tee_name = dt_get_property(node, "xen,tee", NULL);

In the previous version, you used dt_property_read_property_string() 
which contained some sanity checks. Can you explain why you switched to 
dt_get_property()?

> +        if ( tee_name )
> +        {
> +            rc = tee_get_type_from_dts(tee_name);
> +            if ( rc < 0) > +                panic("Can't enable requested TEE for domain: %d\n", 
rc);> +            d_cfg.arch.tee_type = rc;
> +        }
> +        else
> +        {

NIT: The parentheses are not necessary.

> +            d_cfg.arch.tee_type = XEN_DOMCTL_CONFIG_TEE_NONE;
> +        }
> +
>           if ( dt_property_read_bool(node, "direct-map") )
>           {
>               if ( !(flags & CDF_staticmem) )

Cheers,

---
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 11:07:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 11:07:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735333.1141504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sES0Y-0003m9-74; Tue, 04 Jun 2024 11:07:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735333.1141504; Tue, 04 Jun 2024 11:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sES0Y-0003m2-4V; Tue, 04 Jun 2024 11:07:14 +0000
Received: by outflank-mailman (input) for mailman id 735333;
 Tue, 04 Jun 2024 11:07:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sES0W-0003ld-Tk
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 11:07:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sES0W-00005g-DX; Tue, 04 Jun 2024 11:07:12 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sES0W-0001gQ-5o; Tue, 04 Jun 2024 11:07:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=2W+G2fBIShc8NgWCIfF5YQxJyIolvkSOJjpdntoxqpE=; b=J0lVTuB5kowSOxEKEVDDV0oAG+
	vfAThKhqgig4Gn9H8ykih9ztKJ/H+SpzfgJHaQVdq6aTZ6g7n9UqEaaAoPtE16DIQUAVQqWV7zcS1
	gKEDvWUt/PZOBp1v3Oku+vd4eV2yRknZENvWeUJTYcs2ANFU3sCz80OCLiSYeLY92xJg=;
Message-ID: <cc51da4b-d024-4923-95a4-18e11b150f90@xen.org>
Date: Tue, 4 Jun 2024 12:07:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 14/16] ioreq: make arch_vcpu_ioreq_completion() an
 optional callback
Content-Language: en-GB
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <a0f9c5ef8554d63e149afd0a413a27385c889faa.1717410850.git.Sergiy_Kibrik@epam.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <a0f9c5ef8554d63e149afd0a413a27385c889faa.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Sergiy,

On 03/06/2024 12:34, Sergiy Kibrik wrote:
> In most cases the arch_vcpu_ioreq_completion() routine is just an empty stub,
> except when handling VIO_realmode_completion, which only happens on HVM
> domains running on a VT-x machine. When VT-x is disabled in the build
> configuration, both the x86 & arm versions of the routine become empty stubs.
> To dispose of these useless stubs we can make an optional call to the
> arch-specific ioreq completion handler, if it's present, and drop the arm and
> generic x86 handlers. Actual handling of VIO_realmode_completion can then be
> done by the VMX code.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> ---
>   xen/arch/arm/ioreq.c       |  6 ------
>   xen/arch/x86/hvm/ioreq.c   | 23 -----------------------
>   xen/arch/x86/hvm/vmx/vmx.c | 16 ++++++++++++++++
>   xen/common/ioreq.c         |  5 ++++-
>   xen/include/xen/ioreq.h    |  2 +-
>   5 files changed, 21 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/arm/ioreq.c b/xen/arch/arm/ioreq.c
> index 5df755b48b..2e829d2e7f 100644
> --- a/xen/arch/arm/ioreq.c
> +++ b/xen/arch/arm/ioreq.c
> @@ -135,12 +135,6 @@ bool arch_ioreq_complete_mmio(void)
>       return false;
>   }
>   
> -bool arch_vcpu_ioreq_completion(enum vio_completion completion)
> -{
> -    ASSERT_UNREACHABLE();
> -    return true;
> -}
> -
>   /*
>    * The "legacy" mechanism of mapping magic pages for the IOREQ servers
>    * is x86 specific, so the following hooks don't need to be implemented on Arm:
> diff --git a/xen/arch/x86/hvm/ioreq.c b/xen/arch/x86/hvm/ioreq.c
> index 4eb7a70182..088650e007 100644
> --- a/xen/arch/x86/hvm/ioreq.c
> +++ b/xen/arch/x86/hvm/ioreq.c
> @@ -29,29 +29,6 @@ bool arch_ioreq_complete_mmio(void)
>       return handle_mmio();
>   }
>   
> -bool arch_vcpu_ioreq_completion(enum vio_completion completion)
> -{
> -    switch ( completion )
> -    {
> -    case VIO_realmode_completion:
> -    {
> -        struct hvm_emulate_ctxt ctxt;
> -
> -        hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> -        vmx_realmode_emulate_one(&ctxt);
> -        hvm_emulate_writeback(&ctxt);
> -
> -        break;
> -    }
> -
> -    default:
> -        ASSERT_UNREACHABLE();
> -        break;
> -    }
> -
> -    return true;
> -}
> -
>   static gfn_t hvm_alloc_legacy_ioreq_gfn(struct ioreq_server *s)
>   {
>       struct domain *d = s->target;
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index f16faa6a61..7187d1819c 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -10,6 +10,7 @@
>   #include <xen/param.h>
>   #include <xen/trace.h>
>   #include <xen/sched.h>
> +#include <xen/ioreq.h>
>   #include <xen/irq.h>
>   #include <xen/softirq.h>
>   #include <xen/domain_page.h>
> @@ -2749,6 +2750,20 @@ static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>       vmx_vmcs_exit(v);
>   }
>   
> +bool realmode_vcpu_ioreq_completion(enum vio_completion completion)

No one seems to call this function outside of vmx.c. So can it be 'static'?

> +{
> +    struct hvm_emulate_ctxt ctxt;
> +
> +    if ( completion != VIO_realmode_completion )
> +        ASSERT_UNREACHABLE();
> +
> +    hvm_emulate_init_once(&ctxt, NULL, guest_cpu_user_regs());
> +    vmx_realmode_emulate_one(&ctxt);
> +    hvm_emulate_writeback(&ctxt);
> +
> +    return true;
> +}
> +
>   static struct hvm_function_table __initdata_cf_clobber vmx_function_table = {
>       .name                 = "VMX",
>       .cpu_up_prepare       = vmx_cpu_up_prepare,
> @@ -3070,6 +3085,7 @@ const struct hvm_function_table * __init start_vmx(void)
>       lbr_tsx_fixup_check();
>       ler_to_fixup_check();
>   
> +    arch_vcpu_ioreq_completion = realmode_vcpu_ioreq_completion;
>       return &vmx_function_table;
>   }
>   
> diff --git a/xen/common/ioreq.c b/xen/common/ioreq.c
> index 1257a3d972..94fde97ece 100644
> --- a/xen/common/ioreq.c
> +++ b/xen/common/ioreq.c
> @@ -33,6 +33,8 @@
>   #include <public/hvm/ioreq.h>
>   #include <public/hvm/params.h>
>   
> +bool (*arch_vcpu_ioreq_completion)(enum vio_completion completion) = NULL;

I don't think this should be allowed to be modified after boot. So I 
would add __ro_after_init.

> +
>   void ioreq_request_mapcache_invalidate(const struct domain *d)
>   {
>       struct vcpu *v = current;
> @@ -244,7 +246,8 @@ bool vcpu_ioreq_handle_completion(struct vcpu *v)
>           break;
>   
>       default:
> -        res = arch_vcpu_ioreq_completion(completion);
> +        if ( arch_vcpu_ioreq_completion )
> +            res = arch_vcpu_ioreq_completion(completion);

I think this wants an:

else {
   ASSERT_UNREACHABLE();
}

So this matches the existing code. But I am not fully convinced that this 
is the right approach. arch_vcpu_ioreq_completion is not meant to change 
after boot (or even at compile time for Arm).

Reading the previous thread, I think something like below would work:

static bool arch_vcpu_ioreq_completion(enum vio_completion completion)
{
#ifdef CONFIG_VMX
    /* Existing code */
#else
    ASSERT_UNREACHABLE();
    return true;
#endif
}

If we want to avoid stub, then I think it would be better to use

#ifdef CONFIG_VMX
static arch_vcpu_ioreq...
{
}
#endif /* CONFIG_VMX */

Then in the x86 header:

#ifdef CONFIG_VMX
static arch_vcpu_ioreq..();
#define arch_vcpu_ioreq...
#endif

And then in common/ioreq.c

#ifdef arch_vcpu_ioreq
res = arch_vcpu_ioreq(...)
#else
ASSERT_UNREACHABLE();
#endif

>           break;
>       }
>   
> diff --git a/xen/include/xen/ioreq.h b/xen/include/xen/ioreq.h
> index cd399adf17..880214ec41 100644
> --- a/xen/include/xen/ioreq.h
> +++ b/xen/include/xen/ioreq.h
> @@ -111,7 +111,7 @@ void ioreq_domain_init(struct domain *d);
>   int ioreq_server_dm_op(struct xen_dm_op *op, struct domain *d, bool *const_op);
>   
>   bool arch_ioreq_complete_mmio(void);
> -bool arch_vcpu_ioreq_completion(enum vio_completion completion);
> +extern bool (*arch_vcpu_ioreq_completion)(enum vio_completion completion);
>   int arch_ioreq_server_map_pages(struct ioreq_server *s);
>   void arch_ioreq_server_unmap_pages(struct ioreq_server *s);
>   void arch_ioreq_server_enable(struct ioreq_server *s);

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 11:14:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 11:14:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735342.1141515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sES76-0005j6-0F; Tue, 04 Jun 2024 11:14:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735342.1141515; Tue, 04 Jun 2024 11:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sES75-0005iz-TZ; Tue, 04 Jun 2024 11:13:59 +0000
Received: by outflank-mailman (input) for mailman id 735342;
 Tue, 04 Jun 2024 11:13:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sES74-0005iq-4A
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 11:13:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sES72-0000Cp-Ev; Tue, 04 Jun 2024 11:13:56 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sES72-00027S-6y; Tue, 04 Jun 2024 11:13:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=pcYrkOLcEwC1UqIzg9fwWXkbE6I3gycHivNE520qcPM=; b=qAWidke601ieLszqZcuU0HHcqN
	Km2y/70JJ4gi0Fok89QQPfQwBqS2iqj3b71hnXi/Z36QUg58cCB/Cvcy3HH5I2xcdq9g8/5QSK3rh
	W7M+dvw4vbwfZgRJmHcEzlKPgs41UfTbD8P332WA6B2WfsJm7IeitPbn3KY10urGsTds=;
Message-ID: <3e8c5cad-d5b2-48f6-8db4-ea714a1166d7@xen.org>
Date: Tue, 4 Jun 2024 12:13:53 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 03/10] xen: Refactor altp2m options into a
 structured format
Content-Language: en-GB
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
References: <cover.1717356829.git.w1benny@gmail.com>
 <5dc1d0375206bd982b91f4db4bd237769a889f48.1717356829.git.w1benny@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <5dc1d0375206bd982b91f4db4bd237769a889f48.1717356829.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 02/06/2024 21:04, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> Encapsulate the altp2m options within a struct. This change is preparatory
> and sets the groundwork for introducing additional parameter in subsequent
> commit.
> 
> Signed-off-by: Petr Beneš <w1benny@gmail.com>
> ---
>   tools/libs/light/libxl_create.c     | 6 +++---
>   tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
>   xen/arch/arm/domain.c               | 2 +-

For the small change in Arm:

Acked-by: Julien Grall <jgrall@amazon.com> # arm

>   xen/arch/x86/domain.c               | 4 ++--
>   xen/arch/x86/hvm/hvm.c              | 2 +-
>   xen/include/public/domctl.h         | 4 +++-
>   6 files changed, 13 insertions(+), 9 deletions(-)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 11:20:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 11:20:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735349.1141524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sESDC-0007J1-KZ; Tue, 04 Jun 2024 11:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735349.1141524; Tue, 04 Jun 2024 11:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sESDC-0007Iu-I2; Tue, 04 Jun 2024 11:20:18 +0000
Received: by outflank-mailman (input) for mailman id 735349;
 Tue, 04 Jun 2024 11:20:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sESDB-0007Io-KU
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 11:20:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sESDA-0000Jr-SB; Tue, 04 Jun 2024 11:20:16 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sESDA-0002HI-Jc; Tue, 04 Jun 2024 11:20:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=NdVOdUXAH+pXckRWyHpVGO7EJZKUCsxPDmNeIaJEdJY=; b=YuG0HqvDpZsbO0JiJmoRAHRDqs
	f5dfRbR6DKjfrLF0SpSf8/He/JI+l4EuKd1JD4Pefcsghic0gkevP06WX+MtRjs2KTSnsi9ZvZiYT
	Cxv+PbyWBzvWVz7X6enaYemM3aNx7xvjFeQ/yDlvKj0WhJjmLHCmOkWsXE+sABvQuUeA=;
Message-ID: <91a15518-8211-457c-b716-226c2f89d278@xen.org>
Date: Tue, 4 Jun 2024 12:20:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
Content-Language: en-GB
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
References: <cover.1717356829.git.w1benny@gmail.com>
 <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Petr,

On 02/06/2024 21:04, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> x86: Make the maximum number of altp2m views configurable
> 
> This commit introduces the ability to configure the maximum number of altp2m
> views for the domain during its creation. Previously, the limits were hardcoded
> to a maximum of 10. This change allows for greater flexibility in environments
> that require more or fewer altp2m views.
> 
> The maximum configurable limit for nr_altp2m on x86 is now set to
> MAX_NR_ALTP2M (which currently holds the MAX_EPTP value - 512). This cap is
> linked to the architectural limit of the EPTP-switching VMFUNC, which supports
> up to 512 entries. Despite there being no inherent need for limiting nr_altp2m
> in scenarios not utilizing VMFUNC, decoupling these components would necessitate
> substantial code changes.
> 
> xen_domctl_createdomain::altp2m is extended for a new field `nr`, that will
> configure this limit for a domain. Additionally, previous altp2m.opts value
> has been reduced from uint32_t to uint16_t so that both of these fields occupy
> as little space as possible.
> 
> altp2m_get_p2m() function is modified to respect the new nr_altp2m value.
> Accessor functions that operate on EPT arrays are unmodified, since these
> arrays always have fixed size of MAX_EPTP.
> 
> A dummy hvm_altp2m_supported() function is introduced for non-HVM builds, so
> that the compilation won't fail for them.
> 
> Additional sanitization is introduced in the x86/arch_sanitise_domain_config
> to fix the altp2m.nr value to 10 if it is previously set to 0. This behavior
> is only temporary and immediately removed in the upcoming commit (which will
> disallow creating a domain with enabled altp2m with zero nr_altp2m).
> 
> The reason for this temporary workaround is to retain the legacy behavior
> until the feature is fully activated in libxl.
> 
> Also, arm/arch_sanitise_domain_config is extended to not allow requesting
> non-zero altp2ms.
> 
> Signed-off-by: Petr Beneš <w1benny@gmail.com>

For the small change in Arm:

Acked-by: Julien Grall <jgrall@amazon.com> # arm

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 11:24:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 11:24:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735356.1141534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sESHX-00088i-4S; Tue, 04 Jun 2024 11:24:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735356.1141534; Tue, 04 Jun 2024 11:24:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sESHX-00088b-23; Tue, 04 Jun 2024 11:24:47 +0000
Received: by outflank-mailman (input) for mailman id 735356;
 Tue, 04 Jun 2024 11:24:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sESHV-00088V-UG
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 11:24:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sESHV-0000Oz-HA; Tue, 04 Jun 2024 11:24:45 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sESHV-0002aY-BO; Tue, 04 Jun 2024 11:24:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=CO/kMwF14NhyroNvPVZsWVqvlcwsDPDKmRHNENryLHc=; b=oZezYFwpbRs1PpyP02y7GewLds
	d9TiVj/3vvnwycXm6npi/W8hbaIrZVLHhGcZfmPPecfBOlcaRx+wVz8oU0sBQI2t++0eUS+TECk6z
	SjINEGNOnZDTbCjsQB4vs4iaBW0yvsAt4mp+b8ayV2jb5ln/2IhAlGrgRmfNg1MKJ5uE=;
Message-ID: <ad94bed4-42a1-4c59-afc1-a542c9a406ea@xen.org>
Date: Tue, 4 Jun 2024 12:24:43 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v5 7/7] xen/arm: ffa: support notification
Content-Language: en-GB
To: Jens Wiklander <jens.wiklander@linaro.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 "patches@linaro.org" <patches@linaro.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <michal.orzel@amd.com>
References: <20240529072559.2486986-1-jens.wiklander@linaro.org>
 <20240529072559.2486986-8-jens.wiklander@linaro.org>
 <C52D6A7C-1136-4BF1-9060-600157F641F5@arm.com>
 <CAHUa44GRNQV4X61YPZTxO+tkkwJS9hoqQ07U9vP1k6n1zUt9rQ@mail.gmail.com>
 <39045a8f-ea18-4264-b540-66645751d27d@xen.org>
 <CAHUa44Hrm7p9MyTwsp+XU+EAMPXb+bi0a7P8sbhsvz2Tobozow@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CAHUa44Hrm7p9MyTwsp+XU+EAMPXb+bi0a7P8sbhsvz2Tobozow@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 03/06/2024 10:50, Jens Wiklander wrote:
> Hi Julien,

Hi Jens,


> On Mon, Jun 3, 2024 at 11:12 AM Julien Grall <julien@xen.org> wrote:
>>
>> Hi Jens,
>>
>> On 03/06/2024 10:01, Jens Wiklander wrote:
>>> On Fri, May 31, 2024 at 4:28 PM Bertrand Marquis
>>> <Bertrand.Marquis@arm.com> wrote:
>>>>
>>>> Hi Jens,
>>>>
>>>>> On 29 May 2024, at 09:25, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>>>>>
>>>>> Add support for FF-A notifications, currently limited to an SP (Secure
>>>>> Partition) sending an asynchronous notification to a guest.
>>>>>
>>>>> Guests and Xen itself are made aware of pending notifications with an
>>>>> interrupt. The interrupt handler triggers a tasklet to retrieve the
>>>>> notifications using the FF-A ABI and deliver them to their destinations.
>>>>>
>>>>> Update ffa_partinfo_domain_init() to return error code like
>>>>> ffa_notif_domain_init().
>>>>>
>>>>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>>>>> ---
>>>>> v4->v5:
>>>>> - Move the freeing of d->arch.tee to the new TEE mediator free_domain_ctx
>>>>>    callback to make the context accessible during rcu_lock_domain_by_id()
>>>>>    from a tasklet
>>>>> - Initialize interrupt handlers for secondary CPUs from the new TEE mediator
>>>>>    init_interrupt() callback
>>>>> - Restore the ffa_probe() from v3, that is, remove the
>>>>>    presmp_initcall(ffa_init) approach and use ffa_probe() as usual now that we
>>>>>    have the init_interrupt callback.
>>>>> - A tasklet is added to handle the Schedule Receiver interrupt. The tasklet
>>>>>    finds each relevant domain with rcu_lock_domain_by_id() which now is enough
>>>>>    to guarantee that the FF-A context can be accessed.
>>>>> - The notification interrupt handler only schedules the notification
>>>>>    tasklet mentioned above
>>>>>
>>>>> v3->v4:
>>>>> - Add another note on FF-A limitations
>>>>> - Clear secure_pending in ffa_handle_notification_get() if both SP and SPM
>>>>>    bitmaps are retrieved
>>>>> - ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id FF-A
>>>>>    uses for Xen itself
>>>>> - Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id() in
>>>>>    notif_irq_handler() with a call to rcu_lock_live_remote_domain_by_id() via
>>>>>    ffa_rcu_lock_domain_by_vm_id()
>>>>> - Remove spinlock in struct ffa_ctx_notif and use atomic functions as needed
>>>>>    to access and update the secure_pending field
>>>>> - In notif_irq_handler(), look for the first online CPU instead of assuming
>>>>>    that the first CPU is online
>>>>> - Initialize FF-A via presmp_initcall() before the other CPUs are online,
>>>>>    use register_cpu_notifier() to install the interrupt handler
>>>>>    notif_irq_handler()
>>>>> - Update commit message to reflect recent updates
>>>>>
>>>>> v2->v3:
>>>>> - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
>>>>>    FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
>>>>> - Register the Xen SRI handler on each CPU using on_selected_cpus() and
>>>>>    setup_irq()
>>>>> - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
>>>>>    doesn't conflict with static SGI handlers
>>>>>
>>>>> v1->v2:
>>>>> - Addressing review comments
>>>>> - Change ffa_handle_notification_{bind,unbind,set}() to take struct
>>>>>    cpu_user_regs *regs as argument.
>>>>> - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
>>>>>    an error code.
>>>>> - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
>>>>> ---
>>>>> xen/arch/arm/tee/Makefile       |   1 +
>>>>> xen/arch/arm/tee/ffa.c          |  72 +++++-
>>>>> xen/arch/arm/tee/ffa_notif.c    | 409 ++++++++++++++++++++++++++++++++
>>>>> xen/arch/arm/tee/ffa_partinfo.c |   9 +-
>>>>> xen/arch/arm/tee/ffa_private.h  |  56 ++++-
>>>>> xen/arch/arm/tee/tee.c          |   2 +-
>>>>> xen/include/public/arch-arm.h   |  14 ++
>>>>> 7 files changed, 548 insertions(+), 15 deletions(-)
>>>>> create mode 100644 xen/arch/arm/tee/ffa_notif.c
>>>>>
>>> [...]
>>>>>
>>>>> @@ -517,8 +567,10 @@ err_rxtx_destroy:
>>>>> static const struct tee_mediator_ops ffa_ops =
>>>>> {
>>>>>       .probe = ffa_probe,
>>>>> +    .init_interrupt = ffa_notif_init_interrupt,
>>>>
>>>> With the previous change on init secondary, I would suggest having
>>>> an ffa_init_secondary here that actually calls ffa_notif_init_interrupt().
>>>
>>> Yes, that makes sense. I'll update.
>>>
>>>>
>>>>>       .domain_init = ffa_domain_init,
>>>>>       .domain_teardown = ffa_domain_teardown,
>>>>> +    .free_domain_ctx = ffa_free_domain_ctx,
>>>>>       .relinquish_resources = ffa_relinquish_resources,
>>>>>       .handle_call = ffa_handle_call,
>>>>> };
>>>>> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
>>>>> new file mode 100644
>>>>> index 000000000000..e8e8b62590b3
>>>>> --- /dev/null
>>>>> +++ b/xen/arch/arm/tee/ffa_notif.c
>>> [...]
>>>>> +static void notif_vm_pend_intr(uint16_t vm_id)
>>>>> +{
>>>>> +    struct ffa_ctx *ctx;
>>>>> +    struct domain *d;
>>>>> +    struct vcpu *v;
>>>>> +
>>>>> +    /*
>>>>> +     * vm_id == 0 means notifications pending for Xen itself, but
>>>>> +     * we don't support that yet.
>>>>> +     */
>>>>> +    if ( !vm_id )
>>>>> +        return;
>>>>> +
>>>>> +    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
>>>>> +    if ( !d )
>>>>> +        return;
>>>>> +
>>>>> +    ctx = d->arch.tee;
>>>>> +    if ( !ctx )
>>>>> +        goto out_unlock;
>>>>
>>>> In both previous cases you are silently ignoring an interrupt
>>>> due to an internal error.
>>>> Is this something that we should trace? Maybe just debug?
>>>>
>>>> Could you add a comment to explain why this could happen
>>>> (when possible) or not and the possible side effects ?
>>>>
>>>> The second one would be a notification for a domain without
>>>> FF-A enabled, which is OK, but I am wondering a bit more about
>>>> the first one.
>>>
>>> The SPMC must be out of sync in both cases. I've been looking for a
>>> window where that can happen, but I can't find any. SPMC is called
>>> with FFA_NOTIFICATION_BITMAP_DESTROY during domain teardown so the
>>> SPMC shouldn't try to deliver any notifications after that.
>>
>> I don't think I agree with the conclusion. I believe this can also
>> happen in normal operation.
>>
>> For example, the SPMC could have triggered the interrupt before
>> FFA_NOTIFICATION_BITMAP_DESTROY, but Xen didn't handle the interrupt (or
>> run the tasklet) until later.
> 
> You're right, there is a window. Delayed handling is OK since
> FFA_NOTIFICATION_INFO_GET_64 is invoked from the tasklet, but there is
> a window if the tasklet is suspended or another core destroys the
> domain before the tasklet has called ffa_rcu_lock_domain_by_vm_id().
> So far it's harmless and I guess we can afford a print.

I think it would confuse the user more than anything else, because this 
is an expected race. If we wanted to print a message, then I would argue 
it should be in the case where...

> 
>>
>> This could be at the time where the domain has been fully destroyed or
>> even when...
>>
>>> In the second case, the domain ID might have been reused for a domain
>>> without FF-A enabled, but the SPMC should have known that already.
>>
>> ... a new domain has been created. Although, the latter is rather unlikely.
>>
>> So what if the new domain has FFA enabled? Is there any potential
>> security issue?
> 
> In this case, we'll inject an NPI in the guest, but when it invokes
> FFA_NOTIFICATION_GET it will get accurate information from the SPMC.
> The worst case is a spurious NPI. This shouldn't be a security issue.

... we inject the interrupt to the "wrong" domain. But I also understand 
that it would be difficult for Xen to detect it.

So I would say no print should be needed. Bertrand, what do you think?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 11:29:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Message-ID: <eb18d4d8-ede2-4529-92f1-3ba8a5beea72@xen.org>
Date: Tue, 4 Jun 2024 12:29:47 +0100
Subject: Re: [PATCH v4.2] xen/p2m: put reference for level 2 superpage
To: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org
Cc: Penny Zheng <Penny.Zheng@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240528125603.2467640-1-luca.fancellu@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240528125603.2467640-1-luca.fancellu@arm.com>

Hi Luca,

On 28/05/2024 13:56, Luca Fancellu wrote:
> From: Penny Zheng <Penny.Zheng@arm.com>
> 
> We are doing foreign memory mapping for static shared memory, and
> there is a good chance that it could be superpage-mapped.
> But today, p2m_put_l3_page() cannot handle superpages.
> 
> This commit implements a new function, p2m_put_l2_superpage(), to
> handle level 2 superpages, specifically to put the extra references
> taken on foreign superpages.
> 
> Modify relinquish_p2m_mapping() as well to take preemption into
> account when we have a level-2 foreign mapping.
> 
> Level 1 superpages are currently not handled: Xen is not preemptible,
> so handling them needs more work, since freeing the memory behind such
> a large mapping could turn into a very long operation.
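[Editorial note: for illustration, the per-page reference dropping this commit message describes (512 level-3 pages behind one level-2 superpage) can be sketched as below; the names are simplified stand-ins, not Xen's actual p2m helpers.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * A level-2 superpage covers 512 level-3 (4 KiB) pages, and each of
 * them needs its reference dropped individually, which is why the
 * walk must account for preemption when the range is large.
 */

#define LPAE_ENTRIES 512U   /* 4 KiB pages per 2 MiB level-2 superpage */

static unsigned long refs_put;

/* Stand-in for p2m_put_l3_page(): drop one page reference. */
static void put_l3_page(uint64_t mfn)
{
    (void)mfn;
    refs_put++;
}

/* Put references on every 4 KiB page backing a 2 MiB mapping. */
static unsigned long put_l2_superpage(uint64_t base_mfn)
{
    unsigned long before = refs_put;

    for ( unsigned int i = 0; i < LPAE_ENTRIES; i++ )
        put_l3_page(base_mfn + i);

    return refs_put - before;
}
```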
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 11:39:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Message-ID: <11c999212a75ea0f043e90128d5321b41a79c305.camel@gmail.com>
Subject: Re: [XEN PATCH] automation/eclair_analysis: add more clean MISRA
 guidelines
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Nicola Vetrini <nicola.vetrini@bugseng.com>,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com, consulting@bugseng.com, Simone Ballarin
	<simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>
Date: Tue, 04 Jun 2024 13:39:37 +0200
In-Reply-To: <3af20044d2906a6f873aac1b6dd2b41c5b9e0507.1717269049.git.nicola.vetrini@bugseng.com>
References: <3af20044d2906a6f873aac1b6dd2b41c5b9e0507.1717269049.git.nicola.vetrini@bugseng.com>

On Sat, 2024-06-01 at 21:13 +0200, Nicola Vetrini wrote:
> Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they are
> added to the list of clean guidelines.
> 
> Some guidelines listed in the additional clean section for ARM are
> also clean on x86, so they can be removed from there.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> ---
> +Cc Oleksii for an opinion on the inclusion for 4.19
> 
> This is a follow-up to series
> https://lore.kernel.org/xen-devel/cover.1717236930.git.nicola.vetrini@bugseng.com/
> and depends on it (otherwise the gitlab MISRA analysis would fail on
> violations of Rule 20.12).
> If it is decided that the dependent series should go in for 4.19, then
> my suggestion is to include this as well, to gate on more guidelines.
> ---
I just want to clarify whether I understand you correctly. Do you mean
that if the current patch were merged without the dependent series, the
gitlab MISRA analysis would fail? If so, then I am not sure we have the
option to take this patch without the dependent patch series.

~ Oleksii
>  automation/eclair_analysis/ECLAIR/tagging.ecl | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl
> b/automation/eclair_analysis/ECLAIR/tagging.ecl
> index a354ff322e03..b829655ca0bc 100644
> --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
> +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
> @@ -60,6 +60,7 @@ MC3R1.R11.7||
>  MC3R1.R11.9||
>  MC3R1.R12.5||
>  MC3R1.R14.1||
> +MC3R1.R14.4||
>  MC3R1.R16.7||
>  MC3R1.R17.1||
>  MC3R1.R17.3||
> @@ -73,6 +74,7 @@ MC3R1.R20.4||
>  MC3R1.R20.6||
>  MC3R1.R20.9||
>  MC3R1.R20.11||
> +MC3R1.R20.12||
>  MC3R1.R20.13||
>  MC3R1.R20.14||
>  MC3R1.R21.3||
> @@ -105,7 +107,7 @@ if(string_equal(target,"x86_64"),
>  )
> 
>  if(string_equal(target,"arm64"),
> -    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
> +    service_selector({"additional_clean_guidelines","MC3R1.R16.6||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.3"})
>  )
> 
>  -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}



From xen-devel-bounces@lists.xenproject.org Tue Jun 04 11:57:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Bertrand Marquis <bertrand.marquis@arm.com>,
        Michal Orzel <michal.orzel@amd.com>
Subject: Re: [RFC PATCH v2] arm: dom0less: add TEE support
Date: Tue, 4 Jun 2024 11:56:29 +0000
Message-ID: <87tti8x6oj.fsf@epam.com>
References: <20240531174915.1679443-1-volodymyr_babchuk@epam.com>
 <9e62b5d9-9c80-4f7c-9cc6-3b863f0c90ad@xen.org>
In-Reply-To: <9e62b5d9-9c80-4f7c-9cc6-3b863f0c90ad@xen.org>

Hi Julien,

Julien Grall <julien@xen.org> writes:

> Hi Volodymyr,
>
> On 31/05/2024 18:49, Volodymyr Babchuk wrote:
>> Extend the TEE mediator interface with two functions:
>>   - tee_get_type_from_dts() returns TEE type based on input string
>>   - tee_make_dtb_node() creates a DTB entry for the selected
>>     TEE mediator
>> Use those new functions to parse "xen,tee" DTS property for dom0less
>> guests and enable appropriate TEE mediator.
[...]

>> +
>> +    A string property that describes what TEE mediator should be enabled
>> +    for the domain. Possible property values are:
>> +
>> +    - "none" (or missing property value)
>> +    No TEE will be available in the VM.
>> +
>> +    - "OP-TEE"
>> +    VM will have access to the OP-TEE using classic OP-TEE SMC interface.
>> +
>> +    - "FF-A"
>> +    VM will have access to a TEE using generic FF-A interface.
>
> I understand why you chose those names, but it also means we are using
> different names in XL and Dom0less. I would rather prefer if they are
> the same.
>

Well, my first idea was to introduce an additional "const char *dts_name"
for the TEE mediator description. But it seems redundant. I can rename the
existing mediators so their names correspond to the names used by libxl.

>> +
>> +    In the future other TEE mediators may be added, extending possible
>> +    values for this property.
>> +
>>   - xen,domain-p2m-mem-mb
>>         Optional. A 32-bit integer specifying the amount of
>> megabytes of RAM
>> diff --git a/xen/arch/arm/dom0less-build.c b/xen/arch/arm/dom0less-build.c
>> index fb63ec6fd1..13fdd44eef 100644
>> --- a/xen/arch/arm/dom0less-build.c
>> +++ b/xen/arch/arm/dom0less-build.c
>> @@ -15,6 +15,7 @@
>>   #include <asm/domain_build.h>
>>   #include <asm/static-memory.h>
>>   #include <asm/static-shmem.h>
>> +#include <asm/tee/tee.h>
>>  
>>   bool __init is_dom0less_mode(void)
>>   {
>> @@ -650,6 +651,10 @@ static int __init prepare_dtb_domU(struct domain *d, struct kernel_info *kinfo)
>>       if ( ret )
>>           goto err;
>> +    /* We are making the assumption that every mediator sets d->arch.tee */
>> +    if ( d->arch.tee )
>
> I think the assumption is OK. I would consider moving this check into
> each TEE callback. IOW call tee_make_dtb_node() unconditionally.
>

Ah, okay, makes sense.

>> +        tee_make_dtb_node(kinfo->fdt);
>
> AFAICT, tee_make_dtb_node() can return an error. So please check the
> return value.
>

Yes, you are right.

>> +
>>       /*
>>        * domain_handle_dtb_bootmodule has to be called before the rest of
>>        * the device tree is generated because it depends on the value of
>> @@ -871,6 +876,7 @@ void __init create_domUs(void)
>>           unsigned int flags = 0U;
>>           uint32_t val;
>>           int rc;
>> +        const char *tee_name;
>>             if ( !dt_device_is_compatible(node, "xen,domain") )
>>               continue;
>> @@ -881,6 +887,19 @@ void __init create_domUs(void)
>>           if ( dt_find_property(node, "xen,static-mem", NULL) )
>>               flags |= CDF_staticmem;
>>   +        tee_name = dt_get_property(node, "xen,tee", NULL);
>
> In the previous version, you used dt_property_read_string()
> which contained some sanity checks. Can you explain why you switched to
> dt_get_property()?

Because I was confused by dt_property_read_string() return values.

I took a fresh look at it and now I understand that I need to test for
-EINVAL to determine that the property is not present, in which case I
should use a default value. All other error codes should cause a panic.
I'll rework this in the next version.
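Roughly, the handling described above separates into "absent property,
use default" versus "malformed property, panic". The sketch below models
only that decision; the error-code macros and the helper name are
stand-ins, and the real code would call Xen's dt_property_read_string()
and panic() instead of returning NULL:

```c
#include <assert.h>
#include <string.h>

#define EINVAL_SKETCH 22  /* stand-in for -EINVAL: property not present */
#define EILSEQ_SKETCH 84  /* stand-in for "property exists but is malformed" */

/* Decision logic from the message above, fed with the return code of a
 * (hypothetical) string-read accessor: 0 means the property was read,
 * -EINVAL means it simply is not there (fall back to a default), and
 * any other error is fatal. NULL represents the panic path here. */
static const char *pick_tee_name(int rc, const char *read_value)
{
    if ( rc == 0 )
        return read_value;        /* property present and well-formed */
    if ( rc == -EINVAL_SKETCH )
        return "none";            /* property absent: use the default */
    return NULL;                  /* real code would panic() here */
}
```

Keeping the rc interpretation in one place like this makes it harder to
accidentally treat a malformed property as merely absent.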

-- 
WBR, Volodymyr


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 12:01:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 12:01:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735391.1141574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sESqo-0007ha-PH; Tue, 04 Jun 2024 12:01:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735391.1141574; Tue, 04 Jun 2024 12:01:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sESqo-0007hT-Mh; Tue, 04 Jun 2024 12:01:14 +0000
Received: by outflank-mailman (input) for mailman id 735391;
 Tue, 04 Jun 2024 12:01:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=w0gD=NG=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sESqn-0007aT-RB
 for xen-devel@lists.xenproject.org; Tue, 04 Jun 2024 12:01:13 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2368c2cf-226a-11ef-90a1-e314d9c70b13;
 Tue, 04 Jun 2024 14:01:13 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 616554EE073C;
 Tue,  4 Jun 2024 14:01:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2368c2cf-226a-11ef-90a1-e314d9c70b13
MIME-Version: 1.0
Date: Tue, 04 Jun 2024 14:01:12 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: "Oleksii K." <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, Doug
 Goldstein <cardoe@cardoe.com>
Subject: Re: [XEN PATCH] automation/eclair_analysis: add more clean MISRA
 guidelines
In-Reply-To: <11c999212a75ea0f043e90128d5321b41a79c305.camel@gmail.com>
References: <3af20044d2906a6f873aac1b6dd2b41c5b9e0507.1717269049.git.nicola.vetrini@bugseng.com>
 <11c999212a75ea0f043e90128d5321b41a79c305.camel@gmail.com>
Message-ID: <06615fc65a59dbe950bc462030a54906@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=UTF-8;
 format=flowed
Content-Transfer-Encoding: 8bit

On 2024-06-04 13:39, Oleksii K. wrote:
> On Sat, 2024-06-01 at 21:13 +0200, Nicola Vetrini wrote:
>> Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they are
>> added
>> to the list of clean guidelines.
>> 
>> Some guidelines listed in the additional clean section for ARM are
>> also
>> clean on x86, so they can be removed from there.
>> 
>> No functional change.
>> 
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>> ---
>> +Cc Oleksii for an opinion on the inclusion for 4.19
>> 
>> This is a follow-up to series
>> https://lore.kernel.org/xen-devel/cover.1717236930.git.nicola.vetrini@bugseng.com/
>> and depends on it (otherwise the gitlab MISRA analysis would fail on
>> violations of Rule 20.12).
>> If it is decided that the dependent series should go in for 4.19, then
>> my suggestion is to include this one as well, to extend the gating to
>> more guidelines.
>> ---
> I just want to clarify whether I understand you correctly. Do you mean
> that if the current patch were merged without the dependent series, the
> gitlab MISRA analysis would fail? IIUC, then I am not sure that we have
> an option to take this patch without the dependent patch series.
> 

Exactly, that's why I specified the dependency. This patch should have 
been part of the series, but I forgot to include it.

> ~ Oleksii
>>  automation/eclair_analysis/ECLAIR/tagging.ecl | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>> 
>> diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl
>> b/automation/eclair_analysis/ECLAIR/tagging.ecl
>> index a354ff322e03..b829655ca0bc 100644
>> --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
>> +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
>> @@ -60,6 +60,7 @@ MC3R1.R11.7||
>>  MC3R1.R11.9||
>>  MC3R1.R12.5||
>>  MC3R1.R14.1||
>> +MC3R1.R14.4||
>>  MC3R1.R16.7||
>>  MC3R1.R17.1||
>>  MC3R1.R17.3||
>> @@ -73,6 +74,7 @@ MC3R1.R20.4||
>>  MC3R1.R20.6||
>>  MC3R1.R20.9||
>>  MC3R1.R20.11||
>> +MC3R1.R20.12||
>>  MC3R1.R20.13||
>>  MC3R1.R20.14||
>>  MC3R1.R21.3||
>> @@ -105,7 +107,7 @@ if(string_equal(target,"x86_64"),
>>  )
>>  
>>  if(string_equal(target,"arm64"),
>> -   service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
>> +   service_selector({"additional_clean_guidelines","MC3R1.R16.6||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.3"})
>>  )
>>  
>> -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 14:08:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 14:08:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735413.1141584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEUpI-0005xo-Gv; Tue, 04 Jun 2024 14:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735413.1141584; Tue, 04 Jun 2024 14:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEUpI-0005xh-EK; Tue, 04 Jun 2024 14:07:48 +0000
Received: by outflank-mailman (input) for mailman id 735413;
 Tue, 04 Jun 2024 14:07:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEUpG-0005xX-QL; Tue, 04 Jun 2024 14:07:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEUpG-0003kZ-Iv; Tue, 04 Jun 2024 14:07:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEUpG-0002bE-BO; Tue, 04 Jun 2024 14:07:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEUpG-0007ym-Ak; Tue, 04 Jun 2024 14:07:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SoeixEKdB1bbqCBHTqbjw3elTZZdgDLGaeYu8OOIXkY=; b=NjM63LvIIgYSOiCfBbJqM6Tghm
	9WuxG53K/CNxGOIzTS/9qteTZhZUhP2k/Aknhq84yxV304QFvMssj82/P7eKykm5SwOsqcvHGs4Gu
	v139nvOVwXjT/z2sJHQQ7fM40wjZKHZZx879WwBfIZ+VBZHV6NAQmzUyj+kia2W0TfWo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186246-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186246: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=839bd179735284592ba8f0879d2cbf07e0cb585a
X-Osstest-Versions-That:
    ovmf=077760fec40c2e5fcae274cc609d97aee12e5d56
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 14:07:46 +0000

flight 186246 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186246/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 839bd179735284592ba8f0879d2cbf07e0cb585a
baseline version:
 ovmf                 077760fec40c2e5fcae274cc609d97aee12e5d56

Last test of basis   186245  2024-06-04 07:42:53 Z    0 days
Testing same since   186246  2024-06-04 12:44:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dun Tan <dun.tan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   077760fec4..839bd17973  839bd179735284592ba8f0879d2cbf07e0cb585a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 15:35:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 15:35:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735430.1141595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEWCN-0007o7-M8; Tue, 04 Jun 2024 15:35:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735430.1141595; Tue, 04 Jun 2024 15:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEWCN-0007o0-JF; Tue, 04 Jun 2024 15:35:43 +0000
Received: by outflank-mailman (input) for mailman id 735430;
 Tue, 04 Jun 2024 15:35:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEWCM-0007nn-Jr; Tue, 04 Jun 2024 15:35:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEWCM-0005Mo-Bs; Tue, 04 Jun 2024 15:35:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEWCM-0004cF-00; Tue, 04 Jun 2024 15:35:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEWCL-0004Bk-Vp; Tue, 04 Jun 2024 15:35:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GpeqENfND0O7o4IctpZGYcJO/TphcJuUuxJJ4UuCuVI=; b=zZuEcAmX9nrv5H11WuTihe8+Ai
	2n3hpSJ1DOKkTNstVEOMTHwSw1qbeW0ervYsSFYpG/k1khzkO5qRE+c3W8henfgQ6tdgvgM4SX3rR
	VjB/3FRBnCNGe512UnaDoymaObRyqmWhVqX/TYOyv70uPN58DTVcibH2stZDUmURwpVg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186244-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186244: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2ab79514109578fc4b6df90633d500cf281eb689
X-Osstest-Versions-That:
    linux=c3f38fa61af77b49866b006939479069cd451173
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 15:35:41 +0000

flight 186244 linux-linus real [real]
flight 186247 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186244/
http://logs.test-lab.xenproject.org/osstest/logs/186247/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 186238

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-qcow2     8 xen-boot            fail pass in 186247-retest
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186247-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-qcow2   14 migrate-support-check fail in 186247 never pass
 test-armhf-armhf-xl-qcow2 15 saverestore-support-check fail in 186247 never pass
 test-armhf-armhf-xl-credit1   8 xen-boot                     fail  like 186235
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186238
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2ab79514109578fc4b6df90633d500cf281eb689
baseline version:
 linux                c3f38fa61af77b49866b006939479069cd451173

Last test of basis   186238  2024-06-03 13:41:54 Z    1 days
Failing since        186239  2024-06-03 20:43:26 Z    0 days    2 attempts
Testing same since   186244  2024-06-04 04:58:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dave Jiang <dave.jiang@intel.com>
  Huacai Chen <chenhuacai@loongson.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Li Zhijian <lizhijian@fujitsu.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Tiezhu Yang <yangtiezhu@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2ab79514109578fc4b6df90633d500cf281eb689
Merge: f06ce441457d 49ba7b515c4c
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jun 3 14:42:41 2024 -0700

    Merge tag 'cxl-fixes-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl
    
    Pull cxl fixes from Dave Jiang:
    
     - Compile fix for cxl-test from missing linux/vmalloc.h
    
     - Fix for memregion leaks in devm_cxl_add_region()
    
    * tag 'cxl-fixes-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl:
      cxl/region: Fix memregion leaks in devm_cxl_add_region()
      cxl/test: Add missing vmalloc.h for tools/testing/cxl/test/mem.c

commit f06ce441457d4abc4d76be7acba26868a2d02b1c
Merge: c3f38fa61af7 eb36e520f4f1
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jun 3 09:27:45 2024 -0700

    Merge tag 'loongarch-fixes-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson
    
    Pull LoongArch fixes from Huacai Chen:
     "Some bootloader interface fixes, a dts fix, and a trivial cleanup"
    
    * tag 'loongarch-fixes-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
      LoongArch: Fix GMAC's phy-mode definitions in dts
      LoongArch: Override higher address bits in JUMP_VIRT_ADDR
      LoongArch: Fix entry point in kernel image header
      LoongArch: Add all CPUs enabled by fdt to NUMA node 0
      LoongArch: Fix built-in DTB detection
      LoongArch: Remove CONFIG_ACPI_TABLE_UPGRADE in platform_init()

commit eb36e520f4f1b690fd776f15cbac452f82ff7bfa
Author: Huacai Chen <chenhuacai@loongson.cn>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix GMAC's phy-mode definitions in dts
    
    The GMAC of Loongson chips cannot insert the required 1.5-2ns delay, so
    we need the PHY to insert internal delays on both the transmit and
    receive data lines. Fix this by changing "phy-mode" from "rgmii" to
    "rgmii-id" in the dts.
    
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 1098efd299ffe9c8af818425338c7f6c4f930a98
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Override higher address bits in JUMP_VIRT_ADDR
    
    In JUMP_VIRT_ADDR we perform an OR operation on an address value taken
    directly from pcaddi.

    This only works if we are currently running from direct 1:1 mapped
    addresses, or if the firmware's DMW is configured exactly the same as
    the kernel's. Still, we should not rely on such an assumption.

    Fix this by overriding the higher bits of the address that comes from
    pcaddi, so we can get rid of the OR operation.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit beb2800074c15362cf9f6c7301120910046d6556
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix entry point in kernel image header
    
    Currently the kernel entry point in head.S is in the DMW address range;
    firmware is instructed to jump to this address after loading the kernel
    image.

    However, the kernel should not make any assumptions about the
    firmware's DMW setting, so the entry point should be a physical address
    that falls into the direct translation region.

    Fix this by converting the entry address to a physical one and amending
    the entry calculation logic in libstub accordingly.

    Also, use ABSOLUTE() when calculating variables, to make Clang/LLVM
    happy.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 3de9c42d02a79a5e09bbee7a4421ddc00cfd5c6d
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Add all CPUs enabled by fdt to NUMA node 0
    
    A NUMA-enabled kernel on an FDT-based machine fails to boot because all
    CPUs end up in NUMA_NO_NODE, which the mm subsystem won't accept.

    Fix this by adding them to the default NUMA node during the FDT parsing
    phase and moving numa_add_cpu(0) to a later point.
    
    Cc: stable@vger.kernel.org
    Fixes: 88d4d957edc7 ("LoongArch: Add FDT booting support from efi system table")
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit b56f67a6c748bb009f313f91651c8020d2338d63
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix built-in DTB detection
    
    fdt_check_header(__dtb_start) always succeeds because the kernel
    provides a dummy dtb and, by coincidence, __dtb_start clashes with the
    entry of this dummy dtb. The consequence is that the fdt passed from
    firmware will never be taken.

    Fix this by trying to use __dtb_start only when CONFIG_BUILTIN_DTB is
    enabled.
    
    Cc: stable@vger.kernel.org
    Fixes: 7b937cc243e5 ("of: Create of_root if no dtb provided by firmware")
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
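
    The fallback ordering this commit establishes can be sketched as
    follows. This is an illustrative model only, not the actual LoongArch
    code: choose_fdt(), builtin_dtb and the CONFIG_BUILTIN_DTB macro here
    are hypothetical stand-ins for the real kernel symbols.

    ```c
    #include <assert.h>
    #include <stddef.h>

    /* Only consider the built-in blob (__dtb_start in the kernel) when
     * CONFIG_BUILTIN_DTB is enabled, so a firmware-provided fdt is never
     * shadowed by the dummy dtb.  All names are stand-ins for this sketch. */

    /* #define CONFIG_BUILTIN_DTB */      /* compiled out in this sketch */

    static char builtin_dtb[16];          /* stands in for __dtb_start */

    const void *choose_fdt(const void *fw_fdt)
    {
        if (fw_fdt)                       /* firmware passed a blob: take it */
            return fw_fdt;
    #ifdef CONFIG_BUILTIN_DTB
        return builtin_dtb;               /* fall back only when configured in */
    #else
        return NULL;                      /* no usable device tree */
    #endif
    }

    int main(void)
    {
        /* A firmware fdt (here modelled by any non-NULL blob) must win... */
        assert(choose_fdt(builtin_dtb) == builtin_dtb);
        /* ...and without CONFIG_BUILTIN_DTB there is no silent fallback. */
        assert(choose_fdt(NULL) == NULL);
        return 0;
    }
    ```

    Before the fix, the equivalent of choose_fdt(NULL) would have returned
    the dummy blob, which then passed fdt_check_header() by accident.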

commit 6c3ca6654a74dd396bc477839ba8d9792eced441
Author: Tiezhu Yang <yangtiezhu@loongson.cn>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Remove CONFIG_ACPI_TABLE_UPGRADE in platform_init()
    
    Both acpi_table_upgrade() and acpi_boot_table_init() are defined as
    empty functions under !CONFIG_ACPI_TABLE_UPGRADE and !CONFIG_ACPI in
    include/linux/acpi.h, so there are no implicit declaration errors with
    various configs.
    
      #ifdef CONFIG_ACPI_TABLE_UPGRADE
      void acpi_table_upgrade(void);
      #else
      static inline void acpi_table_upgrade(void) { }
      #endif
    
      #ifdef        CONFIG_ACPI
      ...
      void acpi_boot_table_init (void);
      ...
      #else /* !CONFIG_ACPI */
      ...
      static inline void acpi_boot_table_init(void)
      {
      }
      ...
      #endif        /* !CONFIG_ACPI */
    
    As Huacai suggested, CONFIG_ACPI_TABLE_UPGRADE is ugly and not
    necessary here, so just remove it. At the same time, keep CONFIG_ACPI
    to prevent potential build errors in the future and to signal that the
    code is ACPI-specific. For the same reason, we also put
    acpi_table_upgrade() under CONFIG_ACPI.
    
    Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>
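
    The inline-stub pattern the message quotes is what makes the removal
    safe. A minimal, self-contained illustration follows; the
    CONFIG_DEMO_FEATURE switch and function names are made up for this
    sketch, not the real kernel macros.

    ```c
    #include <assert.h>

    /* When a feature is compiled out, the header supplies an empty static
     * inline stub, so call sites need no #ifdef of their own.  All names
     * here are hypothetical. */

    /* #define CONFIG_DEMO_FEATURE */     /* feature compiled out here */

    static int feature_ran;

    #ifdef CONFIG_DEMO_FEATURE
    static void feature_init(void) { feature_ran = 1; }
    #else
    static inline void feature_init(void) { }   /* empty stub: safe to call */
    #endif

    static void platform_init(void)
    {
        feature_init();                   /* no #ifdef needed at the call site */
    }

    int main(void)
    {
        platform_init();
        assert(feature_ran == 0);         /* the stub was a no-op */
        return 0;
    }
    ```

    This is why dropping the #ifdef CONFIG_ACPI_TABLE_UPGRADE guard around
    the call site cannot break any configuration.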

commit 49ba7b515c4c0719b866d16f068e62d16a8a3dd1
Author: Li Zhijian <lizhijian@fujitsu.com>
Date:   Tue May 7 13:34:21 2024 +0800

    cxl/region: Fix memregion leaks in devm_cxl_add_region()
    
    Move the mode verification into __create_region(), before the
    memregion is allocated, to avoid memregion leaks.
    
    Fixes: 6e099264185d ("cxl/region: Add volatile region creation support")
    Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
    Reviewed-by: Dan Williams <dan.j.williams@intel.com>
    Link: https://lore.kernel.org/r/20240507053421.456439-1-lizhijian@fujitsu.com
    Signed-off-by: Dave Jiang <dave.jiang@intel.com>
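
    The leak class this commit fixes is the classic allocate-then-validate
    ordering. A generic sketch of both orderings, using hypothetical
    mode_valid()/alloc_id() helpers rather than the real cxl functions:

    ```c
    #include <assert.h>

    /* Validate the requested mode *before* allocating the tracking
     * resource, so a rejected request cannot leak it.  The names and the
     * ids_live counter are stand-ins for the real cxl/memregion code. */

    enum mode { MODE_RAM, MODE_PMEM, MODE_BAD };

    static int ids_live;                  /* outstanding allocations */

    static int mode_valid(enum mode m) { return m == MODE_RAM || m == MODE_PMEM; }
    static void alloc_id(void) { ids_live++; }

    /* Buggy ordering: allocate first, reject later -> the id leaks. */
    int create_region_leaky(enum mode m)
    {
        alloc_id();
        if (!mode_valid(m))
            return -1;                    /* error path never releases the id */
        return 0;
    }

    /* Fixed ordering: reject invalid modes before touching the allocator. */
    int create_region_fixed(enum mode m)
    {
        if (!mode_valid(m))
            return -1;                    /* nothing allocated, nothing leaked */
        alloc_id();
        return 0;
    }

    int main(void)
    {
        ids_live = 0;
        assert(create_region_leaky(MODE_BAD) == -1 && ids_live == 1);  /* leak */
        ids_live = 0;
        assert(create_region_fixed(MODE_BAD) == -1 && ids_live == 0);  /* fixed */
        return 0;
    }
    ```

    Hoisting the check ahead of the allocation, as the commit does, is the
    simpler alternative to adding a release call on every error path.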

commit d55510527153d17a3af8cc2df69c04f95ae1350d
Author: Dave Jiang <dave.jiang@intel.com>
Date:   Tue May 28 15:55:51 2024 -0700

    cxl/test: Add missing vmalloc.h for tools/testing/cxl/test/mem.c
    
    tools/testing/cxl/test/mem.c uses vmalloc() and vfree() but does not
    include linux/vmalloc.h. Kernel v6.10 made changes such that the
    currently included headers no longer pull in vmalloc.h, and therefore
    mem.c no longer compiles. Add linux/vmalloc.h to fix the compile
    failure.
    
      CC [M]  tools/testing/cxl/test/mem.o
    tools/testing/cxl/test/mem.c: In function ‘label_area_release’:
    tools/testing/cxl/test/mem.c:1428:9: error: implicit declaration of function ‘vfree’; did you mean ‘kvfree’? [-Werror=implicit-function-declaration]
     1428 |         vfree(lsa);
          |         ^~~~~
          |         kvfree
    tools/testing/cxl/test/mem.c: In function ‘cxl_mock_mem_probe’:
    tools/testing/cxl/test/mem.c:1466:22: error: implicit declaration of function ‘vmalloc’; did you mean ‘kmalloc’? [-Werror=implicit-function-declaration]
     1466 |         mdata->lsa = vmalloc(LSA_SIZE);
          |                      ^~~~~~~
          |                      kmalloc
    
    Fixes: 7d3eb23c4ccf ("tools/testing/cxl: Introduce a mock memory device + driver")
    Reviewed-by: Dan Williams <dan.j.williams@intel.com>
    Reviewed-by: Alison Schofield <alison.schofield@intel.com>
    Link: https://lore.kernel.org/r/20240528225551.1025977-1-dave.jiang@intel.com
    Signed-off-by: Dave Jiang <dave.jiang@intel.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 17:03:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 17:03:52 +0000
Message-ID: <d35c45b2-6b93-4eea-a037-e4aa2284245d@suse.com>
Date: Tue, 4 Jun 2024 19:03:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 14/16] ioreq: make arch_vcpu_ioreq_completion() an
 optional callback
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <a0f9c5ef8554d63e149afd0a413a27385c889faa.1717410850.git.Sergiy_Kibrik@epam.com>
 <cc51da4b-d024-4923-95a4-18e11b150f90@xen.org>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cc51da4b-d024-4923-95a4-18e11b150f90@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 13:07, Julien Grall wrote:
> On 03/06/2024 12:34, Sergiy Kibrik wrote:
>> @@ -2749,6 +2750,20 @@ static void cf_check vmx_set_reg(struct vcpu *v, unsigned int reg, uint64_t val)
>>       vmx_vmcs_exit(v);
>>   }
>>   
>> +bool realmode_vcpu_ioreq_completion(enum vio_completion completion)
> 
> No one seems to call this function outside of vmx.c. So can it be 'static'?

Plus it absolutely needs to be cf_check. If it is to stay, which
it looks like it isn't, as per further comments from Julien.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 17:18:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 17:18:01 +0000
Message-ID: <46b884e2-cbec-46f0-9070-7013307a310f@suse.com>
Date: Tue, 4 Jun 2024 19:17:49 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <70c86c74-3ed6-4b22-9ba6-3f927f81bcd0@suse.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
 <BL1PR12MB58490E8F1F26532B0FDFFFD6E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58490E8F1F26532B0FDFFFD6E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 10:18, Chen, Jiqian wrote:
> I tried to get more debug information from my environment. I attach it here; maybe you can find some problems.
> acpi_parse_madt_ioapic_entries
> 	acpi_table_parse_madt(ACPI_MADT_TYPE_INTERRUPT_OVERRIDE, acpi_parse_int_src_ovr, MAX_IRQ_SOURCES);
> 		acpi_parse_int_src_ovr
> 			mp_override_legacy_irq
> 				only process two entries, irq 0 gsi 2 and irq 9 gsi 9
> There are only two entries whose type is ACPI_MADT_TYPE_INTERRUPT_OVERRIDE in MADT table. Is it normal?

Yes, that's what you'd typically see (or just one such entry).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 20:59:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 20:59:47 +0000
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186248-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186248: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-armhf:xen-build:fail:regression
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2ab79514109578fc4b6df90633d500cf281eb689
X-Osstest-Versions-That:
    linux=c3f38fa61af77b49866b006939479069cd451173
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 20:59:25 +0000

flight 186248 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186248/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186238

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-qcow2     1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-raw       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186238
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2ab79514109578fc4b6df90633d500cf281eb689
baseline version:
 linux                c3f38fa61af77b49866b006939479069cd451173

Last test of basis   186238  2024-06-03 13:41:54 Z    1 days
Failing since        186239  2024-06-03 20:43:26 Z    1 days    3 attempts
Testing same since   186244  2024-06-04 04:58:12 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dave Jiang <dave.jiang@intel.com>
  Huacai Chen <chenhuacai@loongson.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Li Zhijian <lizhijian@fujitsu.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Tiezhu Yang <yangtiezhu@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 blocked 
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2ab79514109578fc4b6df90633d500cf281eb689
Merge: f06ce441457d 49ba7b515c4c
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jun 3 14:42:41 2024 -0700

    Merge tag 'cxl-fixes-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl
    
    Pull cxl fixes from Dave Jiang:
    
     - Compile fix for cxl-test from missing linux/vmalloc.h
    
     - Fix for memregion leaks in devm_cxl_add_region()
    
    * tag 'cxl-fixes-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/cxl/cxl:
      cxl/region: Fix memregion leaks in devm_cxl_add_region()
      cxl/test: Add missing vmalloc.h for tools/testing/cxl/test/mem.c

commit f06ce441457d4abc4d76be7acba26868a2d02b1c
Merge: c3f38fa61af7 eb36e520f4f1
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jun 3 09:27:45 2024 -0700

    Merge tag 'loongarch-fixes-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson
    
    Pull LoongArch fixes from Huacai Chen:
     "Some bootloader interface fixes, a dts fix, and a trivial cleanup"
    
    * tag 'loongarch-fixes-6.10-1' of git://git.kernel.org/pub/scm/linux/kernel/git/chenhuacai/linux-loongson:
      LoongArch: Fix GMAC's phy-mode definitions in dts
      LoongArch: Override higher address bits in JUMP_VIRT_ADDR
      LoongArch: Fix entry point in kernel image header
      LoongArch: Add all CPUs enabled by fdt to NUMA node 0
      LoongArch: Fix built-in DTB detection
      LoongArch: Remove CONFIG_ACPI_TABLE_UPGRADE in platform_init()

commit eb36e520f4f1b690fd776f15cbac452f82ff7bfa
Author: Huacai Chen <chenhuacai@loongson.cn>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix GMAC's phy-mode definitions in dts
    
    The GMAC of Loongson chips cannot insert the required 1.5-2ns
    delay, so we need the PHY to insert internal delays on both the
    transmit and receive data lines from/to the PHY device. Fix this
    by changing the "phy-mode" from "rgmii" to "rgmii-id" in the dts.
    
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 1098efd299ffe9c8af818425338c7f6c4f930a98
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Override higher address bits in JUMP_VIRT_ADDR
    
    In JUMP_VIRT_ADDR we perform an OR operation directly on the
    address value obtained from pcaddi.
    
    This only works if we are currently running from direct 1:1 mapped
    addresses, or if the firmware's DMW is configured exactly the same
    as the kernel's. Either way, we should not rely on such an
    assumption.
    
    Fix this by overriding the higher bits of the address that comes
    from pcaddi, which lets us get rid of the OR operation.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit beb2800074c15362cf9f6c7301120910046d6556
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix entry point in kernel image header
    
    Currently the kernel entry point in head.S lies in the DMW address
    range, and the firmware is instructed to jump to this address after
    loading the kernel image.
    
    However, the kernel should not make any assumptions about the
    firmware's DMW settings, so the entry point should be a physical
    address that falls into the direct translation region.
    
    Fix this by converting the entry address to a physical address and
    amending the entry calculation logic in libstub accordingly.
    
    Also use ABSOLUTE() when calculating variables to keep Clang/LLVM
    happy.
    
    Cc: stable@vger.kernel.org
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 3de9c42d02a79a5e09bbee7a4421ddc00cfd5c6d
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Add all CPUs enabled by fdt to NUMA node 0
    
    A NUMA-enabled kernel on an FDT-based machine fails to boot because
    all CPUs are left in NUMA_NO_NODE, which the mm subsystem will not
    accept.
    
    Fix this by adding them to the default NUMA node during the FDT
    parsing phase and moving numa_add_cpu(0) to a later point.
    
    Cc: stable@vger.kernel.org
    Fixes: 88d4d957edc7 ("LoongArch: Add FDT booting support from efi system table")
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit b56f67a6c748bb009f313f91651c8020d2338d63
Author: Jiaxun Yang <jiaxun.yang@flygoat.com>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Fix built-in DTB detection
    
    fdt_check_header(__dtb_start) always succeeds because the kernel
    provides a dummy dtb and, by coincidence, __dtb_start coincides
    with the start of this dummy dtb. As a consequence, an fdt passed
    in from the firmware is never used.
    
    Fix this by consulting __dtb_start only when CONFIG_BUILTIN_DTB is
    enabled.
    
    Cc: stable@vger.kernel.org
    Fixes: 7b937cc243e5 ("of: Create of_root if no dtb provided by firmware")
    Signed-off-by: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 6c3ca6654a74dd396bc477839ba8d9792eced441
Author: Tiezhu Yang <yangtiezhu@loongson.cn>
Date:   Mon Jun 3 15:45:53 2024 +0800

    LoongArch: Remove CONFIG_ACPI_TABLE_UPGRADE in platform_init()
    
    Both acpi_table_upgrade() and acpi_boot_table_init() are defined as
    empty functions under !CONFIG_ACPI_TABLE_UPGRADE and !CONFIG_ACPI
    in include/linux/acpi.h, so there are no implicit-declaration
    errors with the various configs.
    
      #ifdef CONFIG_ACPI_TABLE_UPGRADE
      void acpi_table_upgrade(void);
      #else
      static inline void acpi_table_upgrade(void) { }
      #endif
    
      #ifdef        CONFIG_ACPI
      ...
      void acpi_boot_table_init (void);
      ...
      #else /* !CONFIG_ACPI */
      ...
      static inline void acpi_boot_table_init(void)
      {
      }
      ...
      #endif        /* !CONFIG_ACPI */
    
    As Huacai suggested, the CONFIG_ACPI_TABLE_UPGRADE guard is ugly
    and unnecessary here, so just remove it. At the same time, keep the
    CONFIG_ACPI guard to prevent potential build errors in the future
    and to signal that the code is ACPI-specific. For the same reason,
    also put acpi_table_upgrade() under CONFIG_ACPI.
    
    Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn>
    Signed-off-by: Huacai Chen <chenhuacai@loongson.cn>

commit 49ba7b515c4c0719b866d16f068e62d16a8a3dd1
Author: Li Zhijian <lizhijian@fujitsu.com>
Date:   Tue May 7 13:34:21 2024 +0800

    cxl/region: Fix memregion leaks in devm_cxl_add_region()
    
    Move the mode verification into __create_region(), before the
    memregion is allocated, to avoid leaking the memregion.
    
    Fixes: 6e099264185d ("cxl/region: Add volatile region creation support")
    Signed-off-by: Li Zhijian <lizhijian@fujitsu.com>
    Reviewed-by: Dan Williams <dan.j.williams@intel.com>
    Link: https://lore.kernel.org/r/20240507053421.456439-1-lizhijian@fujitsu.com
    Signed-off-by: Dave Jiang <dave.jiang@intel.com>

commit d55510527153d17a3af8cc2df69c04f95ae1350d
Author: Dave Jiang <dave.jiang@intel.com>
Date:   Tue May 28 15:55:51 2024 -0700

    cxl/test: Add missing vmalloc.h for tools/testing/cxl/test/mem.c
    
    tools/testing/cxl/test/mem.c uses vmalloc() and vfree() but does
    not include linux/vmalloc.h. Kernel v6.10 made changes after which
    the headers it does include no longer pull in vmalloc.h, so mem.c
    no longer compiles. Add linux/vmalloc.h to fix the compile
    failure.
    
      CC [M]  tools/testing/cxl/test/mem.o
    tools/testing/cxl/test/mem.c: In function ‘label_area_release’:
    tools/testing/cxl/test/mem.c:1428:9: error: implicit declaration of function ‘vfree’; did you mean ‘kvfree’? [-Werror=implicit-function-declaration]
     1428 |         vfree(lsa);
          |         ^~~~~
          |         kvfree
    tools/testing/cxl/test/mem.c: In function ‘cxl_mock_mem_probe’:
    tools/testing/cxl/test/mem.c:1466:22: error: implicit declaration of function ‘vmalloc’; did you mean ‘kmalloc’? [-Werror=implicit-function-declaration]
     1466 |         mdata->lsa = vmalloc(LSA_SIZE);
          |                      ^~~~~~~
          |                      kmalloc
    
    Fixes: 7d3eb23c4ccf ("tools/testing/cxl: Introduce a mock memory device + driver")
    Reviewed-by: Dan Williams <dan.j.williams@intel.com>
    Reviewed-by: Alison Schofield <alison.schofield@intel.com>
    Link: https://lore.kernel.org/r/20240528225551.1025977-1-dave.jiang@intel.com
    Signed-off-by: Dave Jiang <dave.jiang@intel.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 04 21:17:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 Jun 2024 21:17:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735493.1141635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEbWd-0005e5-N3; Tue, 04 Jun 2024 21:16:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735493.1141635; Tue, 04 Jun 2024 21:16:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEbWd-0005dy-J5; Tue, 04 Jun 2024 21:16:59 +0000
Received: by outflank-mailman (input) for mailman id 735493;
 Tue, 04 Jun 2024 21:16:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEbWd-0005do-0T; Tue, 04 Jun 2024 21:16:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEbWc-0005Kn-KV; Tue, 04 Jun 2024 21:16:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEbWc-0007a9-8M; Tue, 04 Jun 2024 21:16:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEbWc-0000Yl-7q; Tue, 04 Jun 2024 21:16:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YcP0lxSvTmyzp2pFJw9KHbs6QDt4t6SH2yElWrJAazg=; b=YbwPlBhNYq2ZODO9qUS7mGby2n
	rEqnoetoJoJ01B16UiW/sqcUazMzzAl8UmuKHnytlyJNMYK6EihYa04Efdi7iz9I0oQ9IRl8rfhr/
	c51fW/2CjYPNj13E28PGEUHs4oGW1UsKDyWdPEVEotH1p09XJHBdlidzFn+/2oO1Qsp0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186249-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186249: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=de4cc40b8c1d9044df82e077e72ef6e192ea12e2
X-Osstest-Versions-That:
    ovmf=839bd179735284592ba8f0879d2cbf07e0cb585a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 Jun 2024 21:16:58 +0000

flight 186249 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186249/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 de4cc40b8c1d9044df82e077e72ef6e192ea12e2
baseline version:
 ovmf                 839bd179735284592ba8f0879d2cbf07e0cb585a

Last test of basis   186246  2024-06-04 12:44:40 Z    0 days
Testing same since   186249  2024-06-04 19:12:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   839bd17973..de4cc40b8c  de4cc40b8c1d9044df82e077e72ef6e192ea12e2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 01:20:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 01:20:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735519.1141644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEfJs-0006Ff-Li; Wed, 05 Jun 2024 01:20:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735519.1141644; Wed, 05 Jun 2024 01:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEfJs-0006FV-It; Wed, 05 Jun 2024 01:20:04 +0000
Received: by outflank-mailman (input) for mailman id 735519;
 Wed, 05 Jun 2024 01:20:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEfJr-00061L-HI; Wed, 05 Jun 2024 01:20:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEfJr-0001Cw-9T; Wed, 05 Jun 2024 01:20:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEfJq-0008WP-Ut; Wed, 05 Jun 2024 01:20:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEfJq-0006dL-UL; Wed, 05 Jun 2024 01:20:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kV7Kn9f6N1uwpp/wBQtD0+6WlbujHD8NvetGVo6mELc=; b=poU4mUW1e1GADUCuZiuiEnH/ZY
	xNd+32uYTSa6A8DvFd9obSWA2vtx5lW2LlpGC37+WuKcsirMqjRcnVhmXXeKsOolIZiuxu3gKmZXD
	USzGnGfRgIUTVF/EHNquoP8aIyHDKlZV4YQuY7yWb/+rpJkvAQZHxozZahAUO4fE0CL4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186251-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186251: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=7772e339bdbbaad81e84086eec0f8577e54e0f28
X-Osstest-Versions-That:
    ovmf=de4cc40b8c1d9044df82e077e72ef6e192ea12e2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Jun 2024 01:20:02 +0000

flight 186251 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186251/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7772e339bdbbaad81e84086eec0f8577e54e0f28
baseline version:
 ovmf                 de4cc40b8c1d9044df82e077e72ef6e192ea12e2

Last test of basis   186249  2024-06-04 19:12:56 Z    0 days
Testing same since   186251  2024-06-04 23:14:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chao Li <lichao@loongson.cn>
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   de4cc40b8c..7772e339bd  7772e339bdbbaad81e84086eec0f8577e54e0f28 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 03:55:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 03:55:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735534.1141654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEhk2-000669-2w; Wed, 05 Jun 2024 03:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735534.1141654; Wed, 05 Jun 2024 03:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEhk1-000662-WC; Wed, 05 Jun 2024 03:55:14 +0000
Received: by outflank-mailman (input) for mailman id 735534;
 Wed, 05 Jun 2024 03:55:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEhk0-00065s-TN; Wed, 05 Jun 2024 03:55:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEhk0-0004WV-Ju; Wed, 05 Jun 2024 03:55:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEhk0-0005He-9J; Wed, 05 Jun 2024 03:55:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEhk0-0007ot-8v; Wed, 05 Jun 2024 03:55:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qwh/SkjKzhl+2kZ3NkZNbuTI6YaBhv2TU49z4hO7okk=; b=wbxFnlx0SeQqjO2paLt1iiP1H4
	UZl11ZRvgvEVIuQsV8j9fHJy4SceDGlL4fkICVwug6h2jmrpgKSiMnRKjoU7JyeGMhu3iryzjNdys
	fnnmA2Eq8UpMJsImULuLO9xjSLYNT4+Az+6eIO5xyyVa7EPrajcjyYhgGjuJHow5eqAU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186250-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186250: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=32f88d65f01bf6f45476d7edbe675e44fb9e1d58
X-Osstest-Versions-That:
    linux=c3f38fa61af77b49866b006939479069cd451173
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Jun 2024 03:55:12 +0000

flight 186250 linux-linus real [real]
flight 186253 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186250/
http://logs.test-lab.xenproject.org/osstest/logs/186253/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot            fail pass in 186253-retest
 test-armhf-armhf-xl-arndale   8 xen-boot            fail pass in 186253-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186253 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186253 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186253 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186253 never pass
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186238
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186238
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186238
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186238
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                32f88d65f01bf6f45476d7edbe675e44fb9e1d58
baseline version:
 linux                c3f38fa61af77b49866b006939479069cd451173

Last test of basis   186238  2024-06-03 13:41:54 Z    1 days
Failing since        186239  2024-06-03 20:43:26 Z    1 days    4 attempts
Testing same since   186250  2024-06-04 21:12:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dave Jiang <dave.jiang@intel.com>
  Huacai Chen <chenhuacai@loongson.cn>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  John Hubbard <jhubbard@nvidia.com>
  Li Zhijian <lizhijian@fujitsu.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Brown <broonie@kernel.org>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Shuah Khan <skhan@linuxfoundation.org>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Tiezhu Yang <yangtiezhu@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   c3f38fa61af7..32f88d65f01b  32f88d65f01bf6f45476d7edbe675e44fb9e1d58 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 05:43:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 05:43:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735545.1141666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEjR1-0002Bs-Ne; Wed, 05 Jun 2024 05:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735545.1141666; Wed, 05 Jun 2024 05:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEjR1-0002Bl-Ih; Wed, 05 Jun 2024 05:43:43 +0000
Received: by outflank-mailman (input) for mailman id 735545;
 Wed, 05 Jun 2024 05:43:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEjR0-0002Bb-1F; Wed, 05 Jun 2024 05:43:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEjQz-0006vy-TB; Wed, 05 Jun 2024 05:43:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEjQz-0002Dz-MP; Wed, 05 Jun 2024 05:43:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEjQz-0001Gq-Lw; Wed, 05 Jun 2024 05:43:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OoQlAvB/DhlR4NJTxwvWX8ipfBD0LXeF3W9RwwbvNYQ=; b=HCTj+BxgE9+YNrNqnKnqQQNiUb
	FNVAR7Sb/iVfSvvBBDoriKaPZwJ2P4/JmjPkBpeF4nwun/r3N1amenz3qZk8aFRe98x1AtyOwa5qW
	lOT4nGr3wfJaEILNdfYh9nkGnVHQPEKWjEl09PEVXKqtDv7xHowxctnC8VslQBk1Jxis=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186254-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186254: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e2e09d8512898709a3d076fdd36c8abee2734027
X-Osstest-Versions-That:
    ovmf=7772e339bdbbaad81e84086eec0f8577e54e0f28
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Jun 2024 05:43:41 +0000

flight 186254 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186254/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e2e09d8512898709a3d076fdd36c8abee2734027
baseline version:
 ovmf                 7772e339bdbbaad81e84086eec0f8577e54e0f28

Last test of basis   186251  2024-06-04 23:14:39 Z    0 days
Testing same since   186254  2024-06-05 03:41:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron <105021049+apop5@users.noreply.github.com>
  Aaron Pop <aaronpop@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images



Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7772e339bd..e2e09d8512  e2e09d8512898709a3d076fdd36c8abee2734027 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 07:04:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 07:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735563.1141674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEkhE-0002u3-6z; Wed, 05 Jun 2024 07:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735563.1141674; Wed, 05 Jun 2024 07:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEkhE-0002tw-4S; Wed, 05 Jun 2024 07:04:32 +0000
Received: by outflank-mailman (input) for mailman id 735563;
 Wed, 05 Jun 2024 07:04:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rKp1=NH=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sEkhD-0002tq-Io
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 07:04:31 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20600.outbound.protection.outlook.com
 [2a01:111:f403:2415::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d8a52bc8-2309-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 09:04:27 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by PH7PR12MB7889.namprd12.prod.outlook.com (2603:10b6:510:27f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.27; Wed, 5 Jun
 2024 07:04:25 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.021; Wed, 5 Jun 2024
 07:04:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8a52bc8-2309-11ef-b4bb-af5377834399
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fu7b7tl2h7dyysQNC99E+j+9mSwNg2MYy7Xk6wiuH6s=;
 b=lPGIkPoyWpmEAm2l39UxCnKsKmsBg/6eMFLDEh+7JYqtVrKzx2K1NzgAXbz/Wv3/WL8Q5bSTrm3Ie1PSDni+TJgNdMUFqKBwieBz+AEVigEbwkCRFdMSrgJxrWbq4KFTLRFUpdVBc0XifCyWgxMHTzKJFMzWt2B52hG5ZV0PCZ8=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHap3bgiER4vYjwvk2+R5oTa8V63LGZ5EYAgAHVa4D//4e1AIAAis+A//+FmgCAEsTugP//viAAABEhvwD//4GYgIAAoyuA//+0FICAAgSCAP//yCWAgAeLLwD//6n2gAAQ1XyAABA5GQAADz4pgAAu2vkAAEmxvAAAkQD4gAD05PGA
Date: Wed, 5 Jun 2024 07:04:25 +0000
Message-ID:
 <BL1PR12MB5849C1D40FCF9861BFE7B208E7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <BL1PR12MB584922B0352AA2F4A359FD66E7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
 <BL1PR12MB58490E8F1F26532B0FDFFFD6E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <46b884e2-cbec-46f0-9070-7013307a310f@suse.com>
In-Reply-To: <46b884e2-cbec-46f0-9070-7013307a310f@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.017)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|PH7PR12MB7889:EE_
x-ms-office365-filtering-correlation-id: 7ee4a8bf-f293-425f-1c37-08dc852dbbe0
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230031|376005|1800799015|7416005|366007|38070700009;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(376005)(1800799015)(7416005)(366007)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <747383FDB446BA4EBF98B7D12131A215@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7ee4a8bf-f293-425f-1c37-08dc852dbbe0
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jun 2024 07:04:25.1634
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 6h7WtNurZzoL9c4mmw9MoJqEVLSpewmOmW1t5Rt9+oeKrRDkP+kvig/sUGSEf2DEPbynPw7sa6DP/trX11Iukg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7889

On 2024/6/5 01:17, Jan Beulich wrote:
> On 04.06.2024 10:18, Chen, Jiqian wrote:
>> I tried to get more debug information from my environment. And I attach them here, maybe you can find some problems.
>> acpi_parse_madt_ioapic_entries
>> 	acpi_table_parse_madt(ACPI_MADT_TYPE_INTERRUPT_OVERRIDE, acpi_parse_int_src_ovr, MAX_IRQ_SOURCES);
>> 		acpi_parse_int_src_ovr
>> 			mp_override_legacy_irq
>> 				only process two entries, irq 0 gsi 2 and irq 9 gsi 9
>> There are only two entries whose type is ACPI_MADT_TYPE_INTERRUPT_OVERRIDE in MADT table. Is it normal?
> 
> Yes, that's what you'd typically see (or just one such entry).
Ok, let me conclude that acpi_parse_int_src_ovr get two entries from MADT table and add them into mp_irqs. They are [irq, gsi][0, 2] and [irq, gsi][9, 9].
Then in the following function mp_config_acpi_legacy_irqs initializes the 1:1 mapping of irq and gsi [0~15 except 2 and 9], and add them into mp_irqs.
But for high GSIs(>= 16), no mapping processing.
Right?

Is it that the Xen hypervisor lacks some handling of high GSIs?
For now, if hypervisor gets a high GSIs, it can't be transformed to irq, because there is no mapping between them.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 07:36:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 07:36:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735573.1141685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElC4-0006qj-Fz; Wed, 05 Jun 2024 07:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735573.1141685; Wed, 05 Jun 2024 07:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElC4-0006qc-D8; Wed, 05 Jun 2024 07:36:24 +0000
Received: by outflank-mailman (input) for mailman id 735573;
 Wed, 05 Jun 2024 07:36:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oDqB=NH=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sElC3-0006qW-Pt
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 07:36:23 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d7f0367-230e-11ef-90a2-e314d9c70b13;
 Wed, 05 Jun 2024 09:36:22 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73])
 by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-582-g3I8oBvNM4KXcyD4UNR39Q-1; Wed,
 05 Jun 2024 03:36:13 -0400
Received: from smtp.corp.redhat.com (int-mx10.intmail.prod.int.rdu2.redhat.com
 [10.11.54.10])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 3D8F53C025B1;
 Wed,  5 Jun 2024 07:36:13 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.217])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C2BB6492BF6;
 Wed,  5 Jun 2024 07:36:12 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 97652180098E; Wed,  5 Jun 2024 09:36:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d7f0367-230e-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717572980;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=si07Y0bLjjYsqVO37UOQw+PHM2IEPt+3tkAbNv1A9dY=;
	b=h00a7rdkb9o3f8X7RXfkiPV0mmoBVguAncQ13JgfbhABwjPS07TKCN4C3xS2ySc2uXGcNs
	v4RTAEdnO95K5EvvJrPHoFIP62MwlIAOgPQZEQM2mxnFHt5GqEoXqbfVp5jvGo8J5jxl0J
	tV5VtUmvGzLXbhOj+nj/e2dSdSvkdHs=
X-MC-Unique: g3I8oBvNM4KXcyD4UNR39Q-1
Date: Wed, 5 Jun 2024 09:36:11 +0200
From: Gerd Hoffmann <kraxel@redhat.com>
To: =?utf-8?Q?Marc-Andr=C3=A9?= Lureau <marcandre.lureau@redhat.com>
Cc: qemu-devel@nongnu.org, Anthony PERARD <anthony@xenproject.org>, 
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org, 
	Stefano Stabellini <sstabellini@kernel.org>, qemu-stable@nongnu.org
Subject: Re: [PATCH v2 1/3] stdvga: fix screen blanking
Message-ID: <tmtartaqh2ac4azfq4cgwh22uuc4pnrnxjpcpky24xzjrkwb5c@ung7cyha4ppa>
References: <20240603151825.188353-1-kraxel@redhat.com>
 <20240603151825.188353-2-kraxel@redhat.com>
 <CAMxuvawqf-0dKPsZP2UTcDWPWQ+8FKbZ=S4KX02hQO1qeeGVMA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAMxuvawqf-0dKPsZP2UTcDWPWQ+8FKbZ=S4KX02hQO1qeeGVMA@mail.gmail.com>
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.10

On Tue, Jun 04, 2024 at 10:27:18AM GMT, Marc-André Lureau wrote:
> Hi
> 
> > +    if (is_buffer_shared(surface)) {
> 
> Perhaps the suggestion to rename the function (in the following patch)
> should instead be surface_is_allocated() ? that would match the actual
> flag check. But callers would have to ! the result. Wdyt?

surface_is_shadow() ?  Comes closer to the typical naming in computer
graphics.

take care,
  Gerd



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 07:55:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 07:55:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735580.1141695 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElUw-0001Ir-33; Wed, 05 Jun 2024 07:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735580.1141695; Wed, 05 Jun 2024 07:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElUv-0001Ik-VM; Wed, 05 Jun 2024 07:55:53 +0000
Received: by outflank-mailman (input) for mailman id 735580;
 Wed, 05 Jun 2024 07:55:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TbIy=NH=epam.com=prvs=28861ff5d1=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sElUu-0001Ie-JS
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 07:55:52 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 04d0861a-2311-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 09:55:48 +0200 (CEST)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 4557rMdQ005517;
 Wed, 5 Jun 2024 07:55:34 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2110.outbound.protection.outlook.com [104.47.18.110])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3yjm1b009x-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 05 Jun 2024 07:55:34 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by GVXPR03MB10577.eurprd03.prod.outlook.com (2603:10a6:150:14d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.16; Wed, 5 Jun
 2024 07:55:30 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Wed, 5 Jun 2024
 07:55:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04d0861a-2311-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=N130QbYABJePCtyJtX0p8Z9GVMN8j5/cyYS7L3iDL+9FnEbnGYzvgj/EID9SST5Dai3KQMyA9Gbmi8PFAI0QDLM1u8+3Etfuqr5wf/JVvqfbQoVo2QDn2tmWuvPXz5FaytwSllKee9DyP4oBBPOGMjMcVO7Nuevv6vPwm0tm99wnKC3fDfl/RKtFouuE4+AWyQFjYqqiXWCKJRqf1bCXyjWloWITie64oLZ1q69feVScvQvg8bGbosdN/WawSKgE7gsY95Ewd6Waed95jxZhDdMQ0dFjwjrClLTHLsIwwoGG1PdIrK17AmkCCn0I/UVV2ddRWtya044jf9Gok3xPuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=rs645B1pau1J19eF6GkovdydEBouVGJHTXTuW79/Qj4=;
 b=JUO0qudGSKtVuAKCbFzyx5i+PkAJtL5UAgKmzgJ+aCZ7YqIt3EgWcO7SI504sWpupvuCql/QRQ3F0t03+KFKy/Elz8FPt5dPM9I1hA9TDM/h7UVFLnRy4cpYgfG+dFZo5bpZCd+2+6hF1X5GyrUTy2/LpbTR+6owKZKjHh3R0QpPhBnnNNKtiqfWieOEHphvRWhHB1yF9B2+v/Dj+aENOM84DZlwIz6Ej6eJtYXggv3oRIWtysVgtbWHY+MYW+QH+hYtXRfxd2krZ5x+RLuDmTdXcqXUoIVK/+SJOeViDoeailD9/Sy78a00BTgvUko3M71MyZbS8jKWTiGvJoJS5w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rs645B1pau1J19eF6GkovdydEBouVGJHTXTuW79/Qj4=;
 b=OmVAneRhXuKfNGdM/7dShR6FQUoGf7X6laPDcaVNY9RkPIFImsDtmJsdhuethQrNX3hz6L/TTnV3kH+TZPnYlALle6uNJ+xjALheHIT/mn7GnPMfADjZ6mvqMEn3pwfcpP1OcIZxEcqhK7ASKBZGkG4i08bthwPN0Zpap1A1eXGepAv2cUUM1Qfpvimqq9u+g7orwP8e37+SexlTLXuz/G4N9Yt21tF9zvRaFcppJZg96rXxak2N7SJ/WXnNv5xJLU6COpBPr2Exw5JHrneqHgmpJ25kyx+VfD836wb/WWyfV7b2oaIKWdoZVYMMixukjDeJiS8YAs9hJrm2Cgqz/g==
Message-ID: <055cf2c9-dd70-4eb4-9d84-270cf30e385a@epam.com>
Date: Wed, 5 Jun 2024 10:55:27 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 14/16] ioreq: make arch_vcpu_ioreq_completion() an
 optional callback
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        Bertrand Marquis <bertrand.marquis@arm.com>,
        Michal Orzel <michal.orzel@amd.com>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Jan Beulich <jbeulich@suse.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Xenia Ragiadakou <xenia.ragiadakou@amd.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <a0f9c5ef8554d63e149afd0a413a27385c889faa.1717410850.git.Sergiy_Kibrik@epam.com>
 <cc51da4b-d024-4923-95a4-18e11b150f90@xen.org>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <cc51da4b-d024-4923-95a4-18e11b150f90@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: WA2P291CA0032.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:1f::21) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|GVXPR03MB10577:EE_
X-MS-Office365-Filtering-Correlation-Id: 7632df28-9e1c-4d78-7c97-08dc8534de74
X-LD-Processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|376005|366007|7416005|1800799015;
X-Microsoft-Antispam-Message-Info: 
	=?utf-8?B?bGwxMVNoRmlEM2RQem04ZzRISUNmbGhESHdSVm1MQjFDT0w3TXVscU42UjYz?=
 =?utf-8?B?cStleGhPUUtBWFNzbjN4YWxENzBNTDJMUm9ZWDVFc2JVYlJUb3UwVEJIQTRi?=
 =?utf-8?B?WjdVdFpFbGNiTHBwRTRmSlRBRWk5bEdGWnlwVThZM2tZQ0ZQOGV5UUVqcTZ4?=
 =?utf-8?B?bHJWdlBhTm9aRlZiVTRtcUEvSTdNQlpVSzhSdnFtSWoycE9vai9MNXlLUEd5?=
 =?utf-8?B?U2hlZTBWU2lyU1dHKzhUSFZVaGc4LzFRcm96aEh4R2JxUWJBNk5TK3hveVo0?=
 =?utf-8?B?YnBqU3JtYWlYcHZDQmJ2RkduQ3ZkOEdtR0pSaVRTZkRFbkFCUGp1RDUzWTIw?=
 =?utf-8?B?K3hlUDVYcjZ0MUsyRzZXU0dQNGt3eG1iOW96QnlOcWIrdERVb2pCeklEQlJw?=
 =?utf-8?B?UXRqalBSbXBaUjdob3kvYWx5eURDRndpQkkveG9HSS9VM1I5SXlQMUk2azBU?=
 =?utf-8?B?VU1aRW1xYzcxc2REa0lUb0RtcFR6blZEUDk4VllvWWJJamgzOTAwcHBiYXhw?=
 =?utf-8?B?dWtEUjBQdTZRM1l0dUUxQUtoeEM3UzVvQjMyOVdiNy9UUHpUdStoTGtRaWQ1?=
 =?utf-8?B?WmNNZDdEQzJYVlJqTXhIMkg3Q2NmYitaSG9yQVhSdjQwK1M3SGNWWmVDczg2?=
 =?utf-8?B?eWIvRmR5NEJVUFNNRG56bVhjb2dxOWpKQW1HZVFnUGxJT0NiR2d0QmM4cUdC?=
 =?utf-8?B?ZjZ0OUhONm5LU2swbGFqdElFMVlpRUVIdVBmeExYMWhrQmRNVlpKYnlLWDU2?=
 =?utf-8?B?dlR4UGRzRnh6a2FvWHVyYlFvME5Dek9LWHdtcldYeFZ0Y05BdHZ3amNScXJu?=
 =?utf-8?B?TGtQa1lsQlk3WEwveTQxQ2hXNGIzbzJvUnpHVXpWRTcveFV6WFFENTdyaXVz?=
 =?utf-8?B?ZnhteFh3YmRKZzk2QS9Xb0FKeGNNR3p0SHhITENYYytOKzByVEF5ckh2dnBN?=
 =?utf-8?B?QThwVWthSGcrbzI1UjJDVDZXZ25yckpSdGNjcjR5dUJEcXRPTzVXLytRQXhK?=
 =?utf-8?B?RmdFWFVyck1lY1VXeUcrTjhPdWtDeDZBcVFOc1QxS09lL1NZN0pHTk15K2RO?=
 =?utf-8?B?REpHRWJNbldzcENoeUxicmtnVXBWQ1oyb0hsaHU2MFJtcWMwZWQxU016Znhm?=
 =?utf-8?B?ZEt2WVJHeFBJT0pyVUdzWE51Y2VVU2RJbGU2TjIySXFUYWZpbUd0TXQ1cUcz?=
 =?utf-8?B?blZCTnpENnk2ZkpkcjJMTGg1NzJXdmg4YWZ0ZmNRQmpYcU44ZklnNG1icGtD?=
 =?utf-8?B?NldiWDRGdElhWjA1dVhiaDFGMmo5UVplVmkwS2FpaGUxTFJMTkZxbWx3OTd2?=
 =?utf-8?B?TFFoN2JIUlR2dHNoUVhEbXlsQmtuSklvRkQ5VFR4VGNNUm1hQlkxdXJuWkcw?=
 =?utf-8?B?U25UcVhkTThZaThUaElWb0ZrVmU3ci9oM0VZYXIxd0RTb2FsQWlZQmF0TUFI?=
 =?utf-8?B?MlZqSTBlVk1xRjREQ0F1bndlNENDNXJBejl3RlV3cEQyb0VHZTlYUW9XKzRs?=
 =?utf-8?B?d1Q5S2dpNkl1a1hDR0xCNk5XQTlFYS9kcElzWHR0WWhuY2RBd2dHMGgwTTdT?=
 =?utf-8?B?Mm1sQTZhbytkbjR4ODEyVlhaY0VockdOUy9WSXBhbFcyVDM2eDRCdFVLdFRU?=
 =?utf-8?B?ZjJWV1ZaM0tKT2JQaVBQVVZIS0FJZEt2NUMzcTJ1eURCb2lFb29MUC9URFRV?=
 =?utf-8?B?dGFsa0QrZEVXNEtUaEZGRElXR2dranNtcC9teWt0VDh1ZDM5ZTAwVDlnPT0=?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR03MB9192.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(376005)(366007)(7416005)(1800799015);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?utf-8?B?NTVUaVJVQUtnUmU0MnZSSWkzc3lJTHhCNjhJTS9NeGQ5V3o3VFNFY3VzN3o2?=
 =?utf-8?B?My9wZW9ZK2FNVkFKWmc5dUZvb0NxamE3cmFOZmxxdEVlVlQ1YTBiekVXVkpy?=
 =?utf-8?B?ZEtiK1dKc01kSGxId0pTbmo0Y1ZDQlJvdW14MkFpVEc0Z0UwZStwQ1VZUlFL?=
 =?utf-8?B?dzVBTWUzM3RGcEFLOFNSQkxVYkR1R2YyWlZjOEpIS1lRSnZQbng1aVJCMmRk?=
 =?utf-8?B?eW5IMC9wVmNQb0ZRL01haUFDSks5dXF1K0t5cVRUOCs3M01sS3VlQmRsUUJX?=
 =?utf-8?B?RmFlVUVvRzdMTjVZTzM4QXB2cnZUaUFPVUVqSDBCenNvRUkwS2lIL0xDU2pj?=
 =?utf-8?B?YW1oVjVYbXZvOTlhZkxFYWE1NVVKdEZ4UnQzTzN5eVdwNzFVcmU4TlBBeHEx?=
 =?utf-8?B?Y3RSazdXb2o4YmRqaFBRZlh4dEFCVmg4aHI4L21mMGxCb3V4TkYzcUI0U1pQ?=
 =?utf-8?B?c0NYRHN0UjFUb0Q5bHowOWNyMWRuS1NRM1E5SHp3b0NrSWdsdVowTGFLR2pa?=
 =?utf-8?B?eno1NG1FREIyOU1TdXlLSXlyTWdqWW42SDBYQVpTZnJQaWk0ZUNJNGFzNVpi?=
 =?utf-8?B?aGxCTnhLd0JPeVR5TmpJSlR5K0RoSGxsdFNadUE0dmVsdnNJanJSTW9rWjlX?=
 =?utf-8?B?QVpZenBPNTVHQmdKdjYyUUpwV01oU0VkVlhPZXk1Z2wzQTM3RlhwWU5NYmlC?=
 =?utf-8?B?VE5pcjk4aU03VnAwRWpPR0s3blV4Z29TR3hiMERmRGVTMnlieDlkREhZMkZ4?=
 =?utf-8?B?clJIV0c2NXBMMjZiUzdZMkNhQzhOb0psOXZxQ1BoejA2bmhJRGZDRzJLcjV0?=
 =?utf-8?B?MlVPSnpEVEpISU45U3RIWWpvNHRJMEtkbjZyYkQzbnMza2FtVnB5VDlpZ092?=
 =?utf-8?B?NTZlSXZHQytjR1dpNmNVV2JhemEwS3NXVGlBVEM3VnRLeUdTM1NOVkdhalZn?=
 =?utf-8?B?VEFMR2FLYzUyU1N1cXpJVWZibU96TGpzelZ5c3p1b3VUME8wR3lVUkJ3cTFp?=
 =?utf-8?B?VVF5b21yQ0VDUVA2WDV5MlFhdCtkQzJ5WVJLUHYrZHl6dDhCaEVzT2Nuek0r?=
 =?utf-8?B?THJMYTdyT2lZZkRuNWNmZW00N0puL040R3B0Nk43Yis2aUJMUFZhWFliMXNS?=
 =?utf-8?B?MmxqNGJjRi9hNUpMQkJsM0gwSGFwZEZWOWhHeDN3RldTMThLN0xURVlRcXFr?=
 =?utf-8?B?ZzNBd3JaZkpXM1lQaDIzazFBOXd4K0doSWlBUXl3N1lxQ1l0aC96dml1aGJL?=
 =?utf-8?B?V2tBZXplRGJhOWt0UFNlVEVkNTVkQ2NRNnZZeGZqQkRudFd4cjNzamhqQTVN?=
 =?utf-8?B?ZDVWNlpidjI1by84Y0NoZmpiZzhWSmVXZXo4djBacXB0V0NLSGV0S3lua204?=
 =?utf-8?B?SzRhNkN5RXlpYnMrc2RWZVlCQlhXaXZIVlk0b0hUdnpaNUxUS3daWnRiK0lv?=
 =?utf-8?B?ZUdteTZoR0tQZHA1VlRXL0FTYWkyQlhMMEpVd0xCMDJSTi93alBlQW5zWGQx?=
 =?utf-8?B?b2hrSEgyejFGNzNjdUtndEhZSzBTTXdSdFd1UkNoWldrMnFJUHlRMDU5c2hO?=
 =?utf-8?B?b09LczkxRmVSbWx0UkkvNk9VNGJ6Q1ZZQWNDZVZDeVRuVWtQYkN2U0dsMjJr?=
 =?utf-8?B?dTRHTW93Ykp5UTJhMExIMFV4MTVyTnI1ZzFEYlJDd0pUc3ZDRU11NkdWZm5i?=
 =?utf-8?B?cFRyVzdXV0dVKysrZ1YrcEFpWE5BZ04xU1hoTEhpZzI0eTRjQ0lScUdZYzZB?=
 =?utf-8?B?MWI1MlhJaVNRTWE0MmVLQ0R3aDlCUk1DSFZRMjNxYjZTSFl0bmdCZ24vdjlx?=
 =?utf-8?B?MkxHemlWbHo2SlFMWjVGVE5CNHNlMEVuOWZobnhXeWtnNU9kWUdDd214UHh1?=
 =?utf-8?B?YjZuN3kvY1Nid3hnNkJ3V1gyRlBjbDBSK2tqaW1sQzhzbDJyYzgrMDloOEcy?=
 =?utf-8?B?cmt5VjBOUkR0Q3VKd2w1R3dvNEwvY2NzcFFBVW9hM0EzZ3lybk8zRDFvaE13?=
 =?utf-8?B?U003L1l4WkpKTFZWZjJncWZMa1pCSlFZdDVGRTM0L2Y1RzJKWnR6Mm9IRG5q?=
 =?utf-8?B?RDB0VEtXL1ZJTXV4M0VPRERteTBoaVZ3LzJhRENmb2pPZGs5T1BXZkdaTmkv?=
 =?utf-8?Q?MGCs7GUsegUAmUPO8Xi23HTRK?=
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7632df28-9e1c-4d78-7c97-08dc8534de74
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jun 2024 07:55:29.8538
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 023q8uG8fM8tMDuJnOQ5p5KhbsLr50sqPmq0JLOEILD7KOjMgUY9fr/r1Wiia0z/57jMmK7r5F2ofTpoYdQV4A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GVXPR03MB10577
X-Proofpoint-ORIG-GUID: dEKU8e4LmIBdikq6im8M274oOytt-8P2
X-Proofpoint-GUID: dEKU8e4LmIBdikq6im8M274oOytt-8P2
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-04_11,2024-06-05_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=999
 malwarescore=0 suspectscore=0 spamscore=0 impostorscore=0 clxscore=1011
 priorityscore=1501 mlxscore=0 bulkscore=0 lowpriorityscore=0 adultscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406050057

hi Julien,

04.06.24 14:07, Julien Grall:
> So this matches the existing code. But I am not fully convinced that this
> is the right approach. arch_vcpu_ioreq_completion is not meant to change
> across boot (or even at compile time for Arm).
> 
> Reading the previous thread, I think something like below would work:
> 
> static arch_vcpu_ioreq_completion(...)
> {
> #ifdef CONFIG_VMX
> /* Existing code */
> #else
> ASSERT_UNREACHABLE();
> return true;
> #endif
> }
> 
> If we want to avoid stub, then I think it would be better to use
> 
> #ifdef CONFIG_VMX
> static  arch_vcpu_ioreq...
> {
> }
> #endif /* CONFIG_VMX */
> 
> Then in the x86 header.
> 
> #ifdef CONFIG_VMX
> static arch_vcpu_ioreq..();
> #define arch_vcpu_ioreq...
> #endif
> 
> And then in common/ioreq.c
> 
> #ifdef arch_vcpu_ioreq
> res = arch_vcpu_ioreq(...)
> #else
> ASSERT_UNREACHABLE();
> #endif

Thank you for taking a look and for the suggestion. I agree that this is
all a compile-time configuration matter, not a runtime one.

I'll rewrite the patch and we'll see how it looks.

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 08:19:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 08:19:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735592.1141704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElrP-00052W-3Q; Wed, 05 Jun 2024 08:19:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735592.1141704; Wed, 05 Jun 2024 08:19:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElrP-00052P-0w; Wed, 05 Jun 2024 08:19:07 +0000
Received: by outflank-mailman (input) for mailman id 735592;
 Wed, 05 Jun 2024 08:19:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sElrN-00052J-NR
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 08:19:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sElrN-0001uP-1H; Wed, 05 Jun 2024 08:19:05 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sElrM-0002Ts-Nj; Wed, 05 Jun 2024 08:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=MDIY5unLBtzxabn8imTAvrRC/Hn4le7laRl0/9ir/Y8=; b=TqoG1zig/zIOs5HBPjVJIzn+Sz
	NShcZ3v1niR8sMasQUGZ9W9bX8rAwUdW9FVs6iKNExFvJ5Wo9LcXRV7RC+4VfEaoXrhQg3vNGm9yD
	PTTEzrMx/1d6lYEA77/OfQRRqh5+mxjls4qLL5H1vvgtkDnRWEPDbS7SY9hqeLts3SAs=;
Message-ID: <a4a3987f-f22b-4e14-b38f-51335c821bad@xen.org>
Date: Wed, 5 Jun 2024 09:19:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH v2] arm: dom0less: add TEE support
Content-Language: en-GB
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>
References: <20240531174915.1679443-1-volodymyr_babchuk@epam.com>
 <9e62b5d9-9c80-4f7c-9cc6-3b863f0c90ad@xen.org> <87tti8x6oj.fsf@epam.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <87tti8x6oj.fsf@epam.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Volodymyr,

On 04/06/2024 12:56, Volodymyr Babchuk wrote:
> 
> Hi Julien,
> 
> Julien Grall <julien@xen.org> writes:
> 
>> Hi Volodymyr,
>>
>> On 31/05/2024 18:49, Volodymyr Babchuk wrote:
>>> Extend TEE mediator interface with two functions :
>>>    - tee_get_type_from_dts() returns TEE type based on input string
>>>    - tee_make_dtb_node() creates a DTB entry for the selected
>>>      TEE mediator
>>> Use those new functions to parse "xen,tee" DTS property for dom0less
>>> guests and enable appropriate TEE mediator.
> [...]
> 
>>> +
>>> +    A string property that describes what TEE mediator should be enabled
>>> +    for the domain. Possible property values are:
>>> +
>>> +    - "none" (or missing property value)
>>> +    No TEE will be available in the VM.
>>> +
>>> +    - "OP-TEE"
>>> +    VM will have access to the OP-TEE using classic OP-TEE SMC interface.
>>> +
>>> +    - "FF-A"
>>> +    VM will have access to a TEE using generic FF-A interface.
>>
>> I understand why you chose those name, but it also means we are using
>> different name in XL and Dom0less. I would rather prefer if they are
>> the same.
>>
> 
> Well, my first idea was to introduce an additional "const char *dts_name"
> field for the TEE mediator description, but that seems redundant. I can rename
> the existing mediators so their names correspond to the names used by libxl.

The field name is only used for printing. So I think it would be ok to 
rename the values.

It would also be good to update the comment on top of the definition of 
the field "name" so it is clear the name will be used by dom0less.

Cheers,

-- 
Julien Grall
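To make the binding under discussion concrete, a dom0less domain selecting a mediator via "xen,tee" might look like the fragment below. The property values shown are those from the RFC patch quoted above (the thread proposes aligning them with xl's names, so they may change); the surrounding node layout follows Xen's dom0less conventions, with required properties such as "cpus" and the kernel module omitted for brevity:

```dts
/* Sketch of a dom0less domU node using the proposed "xen,tee" property. */
chosen {
    domU1 {
        compatible = "xen,domain";
        memory = <0x0 0x20000>;     /* other required properties omitted */
        xen,tee = "OP-TEE";         /* or "FF-A"; "none"/absent = no TEE */
    };
};
```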


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 08:21:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 08:21:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735598.1141714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElu7-0006d3-Fk; Wed, 05 Jun 2024 08:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735598.1141714; Wed, 05 Jun 2024 08:21:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sElu7-0006cw-DG; Wed, 05 Jun 2024 08:21:55 +0000
Received: by outflank-mailman (input) for mailman id 735598;
 Wed, 05 Jun 2024 08:21:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sElu6-0006cm-ER; Wed, 05 Jun 2024 08:21:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sElu6-0001wS-AW; Wed, 05 Jun 2024 08:21:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sElu5-0006Ey-VP; Wed, 05 Jun 2024 08:21:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sElu5-0007JE-VB; Wed, 05 Jun 2024 08:21:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jvdeXQoSqdAD0XlVE678jN/jJrCkoCyBneyfUQZK3eM=; b=NFWtgSfRhBu3cDGzkyeO1UvzhI
	WRj3EJnP7n15mdiPMxkGLdXzUnOLpLcoTDkhS2Daf/u+sN+7aps10XhhUVIQ0vyS5EJ7qYdkpSSrd
	ym7AXzFj6yBfk1yaVuxhOMSH8a48hULiYKN1o3QsRlAUFpmdOFgdwHu0gyuTHFhmEPNs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186252-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186252: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl:host-ping-check-xen:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Jun 2024 08:21:53 +0000

flight 186252 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186252/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl          10 host-ping-check-xen        fail pass in 186241

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186241 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186241 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186241
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186241
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186241
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186241
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186241
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186241
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186252  2024-06-05 01:53:35 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 08:55:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 08:55:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735608.1141724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEmQI-0002ft-0K; Wed, 05 Jun 2024 08:55:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735608.1141724; Wed, 05 Jun 2024 08:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEmQH-0002fm-Tk; Wed, 05 Jun 2024 08:55:09 +0000
Received: by outflank-mailman (input) for mailman id 735608;
 Wed, 05 Jun 2024 08:55:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sEmQG-0002fg-TV
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 08:55:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sEmQG-0002Vm-Bn; Wed, 05 Jun 2024 08:55:08 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sEmQG-0004pS-42; Wed, 05 Jun 2024 08:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=7oWxHxgxOg5m6+msIJmhdgnSMo8PIyqw4Xd5D6cnUNA=; b=Ovx8r5+/Al4K3KcVimY2oTQkgF
	FbPwryPmE6E/0V7NPfelPZIT8IUU93FmcspIkxtgq2O8Nos97fmJR9PNn01RmxdR8uvmZ41QdZZ/h
	2clJdI62LQ45cSrgyiZOLYj5hALuV/bK1j48JW2EF0SWr1qQ5NzrHDavhYokVmTx/O6U=;
Message-ID: <68d27e3a-5715-47d8-8ae4-a36ed9ad71cd@xen.org>
Date: Wed, 5 Jun 2024 09:55:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/1] xen/arm: smmuv3: Mark more init-only functions
 with __init
Content-Language: en-GB
To: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, bertrand.marquis@arm.com, rahul.singh@arm.com,
 michal.orzel@amd.com, Volodymyr_Babchuk@epam.com, edgar.iglesias@amd.com
References: <20240522132829.1278625-1-edgar.iglesias@gmail.com>
 <20240522132829.1278625-2-edgar.iglesias@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240522132829.1278625-2-edgar.iglesias@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Edgar,

On 22/05/2024 14:28, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Move more functions that are only called at init to
> the .init.text section.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

This also needs a Reviewed-by from either Bertrand or Rahul.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 09:00:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 09:00:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735613.1141734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEmUy-0003HE-HN; Wed, 05 Jun 2024 09:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735613.1141734; Wed, 05 Jun 2024 09:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEmUy-0003H7-Ej; Wed, 05 Jun 2024 09:00:00 +0000
Received: by outflank-mailman (input) for mailman id 735613;
 Wed, 05 Jun 2024 08:59:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sEmUx-0003H1-AU
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 08:59:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sEmUx-0002d3-0K; Wed, 05 Jun 2024 08:59:59 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sEmUw-00054M-Qb; Wed, 05 Jun 2024 08:59:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=ssTzv1/QZF99OCEaq78ROZ1+aT4W6/KPfeuLbk6bRMY=; b=pSzS786pS3wdfAY0VximTe7Scl
	91UP+oAyAaLZ5sdHsivKPb672n7x1VObVSn+1fuC1Mn7Z4pzcdVUsjfXDYNpNRNSLEfmsta3RQhbN
	2zlQUPzVvekOtq6YzUGWKLJCe9vW8E9zdcnt6CzGJb9InqzzaOTjtCDAgQm07LCcW0TU=;
Message-ID: <17a4df22-f45b-418e-8a64-834a33978190@xen.org>
Date: Wed, 5 Jun 2024 09:59:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 0/1] xen/arm: smmuv3: Mark more init-only functions
 with __init
Content-Language: en-GB
To: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, bertrand.marquis@arm.com, rahul.singh@arm.com,
 michal.orzel@amd.com, Volodymyr_Babchuk@epam.com, edgar.iglesias@amd.com
References: <20240522132829.1278625-1-edgar.iglesias@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240522132829.1278625-1-edgar.iglesias@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Edgar,

On 22/05/2024 14:28, Edgar E. Iglesias wrote:
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> I was scanning for code that we could potentially move from the
> .text section into .init.text and found a few candidates.
> 
> I'm not sure if this makes sense; perhaps we don't want to mark
> these functions for other reasons, but my scripts found this chain
> of SMMUv3 init functions as only reachable from .init.text code.
> Perhaps it's a little late in the release cycle to consider this...

The risk of the change is limited and the SMMUv3 driver is still in tech 
preview. So I would say it would be fine as long as it is fully reviewed 
by the code freeze date (14th June).

CCing the release manager (Oleksii) to check if he is happy with 
committing this patch in 4.19. If not, I can queue it in a personal 
branch until the tree is re-opened.

But note this still needs a review from either Bertrand or Rahul.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 09:55:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 09:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735629.1141745 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnMU-0002Xv-Ew; Wed, 05 Jun 2024 09:55:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735629.1141745; Wed, 05 Jun 2024 09:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnMU-0002Xo-C2; Wed, 05 Jun 2024 09:55:18 +0000
Received: by outflank-mailman (input) for mailman id 735629;
 Wed, 05 Jun 2024 09:55:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=617u=NH=arm.com=Rahul.Singh@srs-se1.protection.inumbo.net>)
 id 1sEnMS-0002Xi-Ok
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 09:55:17 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 (mail-am6eur05on20615.outbound.protection.outlook.com
 [2a01:111:f403:2612::615])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b44199c6-2321-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 11:55:14 +0200 (CEST)
Received: from AS9PR01CA0041.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:542::15) by DBBPR08MB6139.eurprd08.prod.outlook.com
 (2603:10a6:10:200::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.31; Wed, 5 Jun
 2024 09:55:10 +0000
Received: from AMS0EPF000001A0.eurprd05.prod.outlook.com
 (2603:10a6:20b:542:cafe::7) by AS9PR01CA0041.outlook.office365.com
 (2603:10a6:20b:542::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.30 via Frontend
 Transport; Wed, 5 Jun 2024 09:55:10 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AMS0EPF000001A0.mail.protection.outlook.com (10.167.16.230) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7633.15
 via Frontend Transport; Wed, 5 Jun 2024 09:55:09 +0000
Received: ("Tessian outbound 9d4318e1cabb:v327");
 Wed, 05 Jun 2024 09:55:09 +0000
Received: from 03e1b6e6744c.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4518FE04-ED1E-4BB4-AEC2-6FBE330D3764.1; 
 Wed, 05 Jun 2024 09:54:58 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 03e1b6e6744c.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 05 Jun 2024 09:54:58 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com (2603:10a6:20b:404::24)
 by GV1PR08MB10502.eurprd08.prod.outlook.com (2603:10a6:150:16b::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.31; Wed, 5 Jun
 2024 09:54:55 +0000
Received: from AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a62b:3664:9e7e:6fb2]) by AS8PR08MB7158.eurprd08.prod.outlook.com
 ([fe80::a62b:3664:9e7e:6fb2%4]) with mapi id 15.20.7611.016; Wed, 5 Jun 2024
 09:54:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b44199c6-2321-11ef-b4bb-af5377834399
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=XXW3xvB/pdDKyk0LYT43ROnA4N6fZQ32WMKGqicm6D8o4VabrtaQgDyjydWYZcLNOvoRCAhL3toT9nWaebWJeljYicnMr+yzDZfpqQMauKhSwy7NjkA4mT0gbSn4YnnouPJS9W5l/MpyP/91DuLdJoXcdZgNG7FisHhlEXvZ/g8Y++WIZxtmIcv3fqcbpywvxNqugYpnPh4kudzg3y42InLOPfq2PkkxKkiiK1G097ZzIntWPyxmrsdmC+li+YygITLTKe6I6dEQZWEmznXwhtgsYYoQ847MWMtvvM4Lsr8jkMvuSwukVNWkQx6UIKj+kwy3/zTKEgrl7JzmfdFTKg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sC0Qj0SZW9VKJMcfM7iGLToXbq92PwYe14S/eMEI/8k=;
 b=L8Nwu6QUMuEt7ZOlE0KZygQmYxavJJ8J5+A7l7LoBsRur9c2C992fvJN5Ae+tTDitdYzpmEtIq6bvHZOOCWfCp99LYhJcT40XxZqChQN5PlzQE22g8YpMqGfiUEJub1cH4iPzUt40nFzQxvhK+KGkAA3/6ld/RBs8BmuUtt9PZaCpfY+pi6iluYoxdqq+fBv7yUaxE8oxiBqpN5HP55WdEOS8eDWY6gNSVOSqIcrxA/nRLq6XNwDVLWcgBvsNYp//Iyp74wzChUIFEHqlMXyyf5qGVZhbj5P1ZTg+MUXBJrUgSkbfhl/i5f+r5OQ6RXnOSzClcXp7gnqFESKbENprA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sC0Qj0SZW9VKJMcfM7iGLToXbq92PwYe14S/eMEI/8k=;
 b=rtkol1hNPDucgBK0iLmEsUKYzmr8LOxZslvFycGQHhULNmYhUCIzMQDF67aCP0Mdp79f/MEsPcCTwgz8SlSnEk7vqZKcietPBO9jXy+tVD1pgkPL1sdG6qdyOQ7kaCQrW3YnoKx779RyRQckubf+lnp12LVpt/vdY9e/RwO1vT4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 434fb910ece51337
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H1gLjwOI7AD35o3CEe+2aRUGXeTy00OH1jtBwE83b9xnLUqNj6m2Eg6huEY+GHVd7xf2g21B4b60RATl1On6Hpzn5aIgGBbmiuh+P4i8zpLtWSFn/b7lGU7qSTYD5+CoR1Pawh4klDrv/G11OmnrH8lxVlLdQzEv2U9HpbdpzgcMMrA0aYBCb3za0P4cCpLiicyCG5SHMjw3SjrUPJ28hbstQONWHWdh4w9bpojncV7t9cIcZwESvEuAPt+mdc69kjBMbwaDkpLnBAgmIwKtbccp3uKxC/umWi14B2T+9nAwX43poK5sTR7bYd6kKonAaCoDwgK01Wru8dtf2cmgYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sC0Qj0SZW9VKJMcfM7iGLToXbq92PwYe14S/eMEI/8k=;
 b=B/bDI6rRtn+yBjmwAN5jeaS6IpEgFI2cO2KPVTBMZJE7otZmXJ1+UMGSNrO3pgbaG3lhSMXs1Q5yQOSk5qykwvRPRzCDNjwA7VYPMXyhQEWBnaMQUIqWk961g8/bnUJVt2bMxxmTmVe+1ZdUMDzgehuihHxnWEp4vtXMChMZr1+H7nB5wyJiy8R2qcfYOhxp70479Vy4Ryb/CDu9dzZNiLdH6qkh3+iv5qZd5G6kOKWqQUV38raQJPWfqzXM8MQFR5MdJXrQ8OaCRCrtcqvQo/6AhdvpWfAyCOB2+ENIisfs4FtMpmV3rDcBgUxCfO52j5VUHT52kqcgjMUTSRFqkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sC0Qj0SZW9VKJMcfM7iGLToXbq92PwYe14S/eMEI/8k=;
 b=rtkol1hNPDucgBK0iLmEsUKYzmr8LOxZslvFycGQHhULNmYhUCIzMQDF67aCP0Mdp79f/MEsPcCTwgz8SlSnEk7vqZKcietPBO9jXy+tVD1pgkPL1sdG6qdyOQ7kaCQrW3YnoKx779RyRQckubf+lnp12LVpt/vdY9e/RwO1vT4=
From: Rahul Singh <Rahul.Singh@arm.com>
To: "Edgar E. Iglesias" <edgar.iglesias@gmail.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, "michal.orzel@amd.com" <michal.orzel@amd.com>,
	"Volodymyr_Babchuk@epam.com" <Volodymyr_Babchuk@epam.com>,
	"edgar.iglesias@amd.com" <edgar.iglesias@amd.com>
Subject: Re: [PATCH v1 1/1] xen/arm: smmuv3: Mark more init-only functions
 with __init
Thread-Topic: [PATCH v1 1/1] xen/arm: smmuv3: Mark more init-only functions
 with __init
Thread-Index: AQHarTfONOUU2GkpA0uG/NCVMyHc+7G5AkKA
Date: Wed, 5 Jun 2024 09:54:55 +0000
Message-ID: <7AA016AF-03B0-4321-B0B9-FBDD60B24557@arm.com>
References: <20240522132829.1278625-1-edgar.iglesias@gmail.com>
 <20240522132829.1278625-2-edgar.iglesias@gmail.com>
In-Reply-To: <20240522132829.1278625-2-edgar.iglesias@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AS8PR08MB7158:EE_|GV1PR08MB10502:EE_|AMS0EPF000001A0:EE_|DBBPR08MB6139:EE_
X-MS-Office365-Filtering-Correlation-Id: a0b4f80f-47eb-44b8-1710-08dc85459610
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230031|1800799015|366007|376005|38070700009;
X-Microsoft-Antispam-Message-Info-Original:
 =?utf-8?B?QWg3ODJkcldFZXpnOHB5S0lwbnNEbVVPQXhLVkVESU9lc1VYK2J1T3c0cktR?=
 =?utf-8?B?dTZkL1RUTWo1eDdZc2RmeFFRdUdVNWNiMlBHSHFGNSthMTlIMTJjTllnYTgw?=
 =?utf-8?B?MHA0KzZDYndNM1V3WGpiRTlJcFZScHJENldCaERaajdYT1NLaVFDblk3ZC9w?=
 =?utf-8?B?M1BOeU5VRnNHVmpOdU5yc2x6QWdTR3RHaU96dmVSdjRBc0pkNzBZSkc2cFk1?=
 =?utf-8?B?Y1BhcmxLTGxka2wzcGs0TyticDdOZXdGblNMR1JHQlQyM0tsTE9hRVVDclFL?=
 =?utf-8?B?OGFmTUtsYU9reW50ekpKaFVnQyswbWlkOTVGZmdIamdjeGJQMHdUWWhJYmow?=
 =?utf-8?B?VEhzRXF0amVEamJvVy9PbVUwSnJyVTFYNGQ1N1VwQktuWk0wVHFOeTBhTlBp?=
 =?utf-8?B?andxRTBqOTN1UStJRnRvNmRhNk9sSDcxZWtXVVpDTERqc1JwWEoyY0ZPRHFh?=
 =?utf-8?B?d0RCcktyVDZwYnNOeUhLTVhrSDNwZm5vU09YYW5FRGU1ZnZ0WlAzdWdJNU5R?=
 =?utf-8?B?UXBDeldLMXVyWVlyYU5UU1FYVmZwaTF3Ynd1WFJ5ZGh6dUhMK29IdzdIK2cv?=
 =?utf-8?B?alVDRnlQUkJJblJtY3libXo1NVZFT0hpWTFmMnp2bzBPMERTbUN4WWhhSzIy?=
 =?utf-8?B?OXQ2ekJvek5JN21jNkdERTlNWE5ocDZBd29VWmdXUnYwSkZQOENId3ZCOTRy?=
 =?utf-8?B?cmI0bGkzMmRFVERQUFcvbER0UE44N2swMEFOK2R2V2FCU2haeW0yU3c5RlU1?=
 =?utf-8?B?dGhWM2xQTkJ1Ui9FQXVteHpsVXRqZjduZG5Hd0tSY2pNUTZmSitBSlBqVjgv?=
 =?utf-8?B?NmZYVCs5SDFoc2NLdTdJNEQwY2NCbGN5N2xVY1lEQVpEUU1OQmpzZUhLS0VG?=
 =?utf-8?B?TG5DRFQ1RHE5Z0o5ZkhjckwyU2lqOGFabHE2b0g0MWZTaFVZcEVRVVZLTGg3?=
 =?utf-8?B?UmZpZjcraWNSb25PSkNGd1RpbGY0RmZrSXpRMlhRUkNQYStKLzd1RENFUzNw?=
 =?utf-8?B?aG9qV0V1TXppb0k3ekt5YVNKamNFVnpIcnBaUHFDN0Z1UTl3WWIvRS9OVHRJ?=
 =?utf-8?B?MmIyenRKZHBIQWNaZDNkbS8yMUxITHRTcmlWRkgza0dxeUl0MEsybGNzS3NS?=
 =?utf-8?B?KytEbUN3WWdBbFloK1pPMWtGclpSMGIyK0tsRENXVCsydHFjK2tmM0dsMFBZ?=
 =?utf-8?B?UnpwQXkvaFhPemgxc09uamRJSzYxZWxzMnU5WHFDT3dpQm5TSkZkMTdhOEpO?=
 =?utf-8?B?Rm40eEhWM0xLRFUwTEhNMnhRL0RQRWZlV3g3Q1JoT2V2SU9nR0JEUHZnY2ps?=
 =?utf-8?B?Mmx0MHA5Z3h2cEpuSkh3aml1cHlRQUZ0Y1BBb0d5YU9EWTJRbnVsalBtaTRP?=
 =?utf-8?B?N1RLbTlPYnUyNG83YjMvM2FONWF0SEp4aFNUaHF4Y3RSR3BKc2VUWHRidU0w?=
 =?utf-8?B?ODFkV0lId0lteVd6V2RzWWJGRUo1ZU9CN3NkUU12RjlzY1pEZ3E4WFhDTTlO?=
 =?utf-8?B?TXpJTjJuWUwwUSs2bDh4VG1yeENJTE5HRHdEOVhWc1JhWmVIdVlyVDJZM3FV?=
 =?utf-8?B?a2QzYVd4NU1NZGFDSGV6YjExT2FxckRScFdVUUwvbStGdWFXLzM2K0Jva1Ri?=
 =?utf-8?B?U3pGdUlsN0UzRmZnaVg0ZlpYRTVVbkpwQ1lrZGpLNXZwcmVPSUhTQjVBU1l2?=
 =?utf-8?B?UmJkSDBURytraEZ5Ni9FeFM1ZE50ald1LzlLOWpmZmpzK2trOXlmWUpTa0xl?=
 =?utf-8?B?SGd0Zm8wODBOWnFNajVsZVlDaXlpNkYva0d4ak5WWFVGN0p3Sm5teVNKMlk5?=
 =?utf-8?B?ejVhcm5EekNhSms5dUw3Zz09?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR08MB7158.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(1800799015)(366007)(376005)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D4F2A46F17717744AC29B68CB5F19183@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV1PR08MB10502
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AMS0EPF000001A0.eurprd05.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5a41e625-0a99-4c30-cfd5-08dc85458dd6
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jun 2024 09:55:09.6173
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a0b4f80f-47eb-44b8-1710-08dc85459610
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AMS0EPF000001A0.eurprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6139

Hi Edgar,

> On 22 May 2024, at 2:28 PM, Edgar E. Iglesias <edgar.iglesias@gmail.com> wrote:
> 
> From: "Edgar E. Iglesias" <edgar.iglesias@amd.com>
> 
> Move more functions that are only called at init to
> the .init.text section.
> 
> Signed-off-by: Edgar E. Iglesias <edgar.iglesias@amd.com>

Acked-by: Rahul Singh <rahul.singh@arm.com>
Tested-by: Rahul Singh <rahul.singh@arm.com>

Regards,
Rahul


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 10:08:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 10:08:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735636.1141755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnYq-0004S7-Li; Wed, 05 Jun 2024 10:08:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735636.1141755; Wed, 05 Jun 2024 10:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnYq-0004S0-Hw; Wed, 05 Jun 2024 10:08:04 +0000
Received: by outflank-mailman (input) for mailman id 735636;
 Wed, 05 Jun 2024 10:08:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19Cw=NH=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sEnYp-0004Rj-2a
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 10:08:03 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7de7d43c-2323-11ef-90a2-e314d9c70b13;
 Wed, 05 Jun 2024 12:08:01 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57a3d21299aso6017086a12.2
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 03:08:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7de7d43c-2323-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1717582081; x=1718186881; darn=lists.xenproject.org;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=ZfR2N7jU6Gmhe+jCMYZymmj8uBxt4MhcgI5sJhH2SG8=;
        b=D9hhw7lSKZKRMrL1EdzVGUvs1fiY6Xytl2KKitS6P819ntqgClig0mDV/9xKw6Fa0R
         viKC8go17wSyuGpDKR0bmVJxJxCskqgqlbIkhVdy3lYRt25FzA3vnCHNsWGOsowqodHl
         ccWjTKgGkJzj0dl85EL7h7c6c1CsZAh7dAJB4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717582081; x=1718186881;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=ZfR2N7jU6Gmhe+jCMYZymmj8uBxt4MhcgI5sJhH2SG8=;
        b=HBE6zcT39rfpOo/X2k8nIQ+U7eRAtlkb8dq6nBAL6fHCNP1+188c0TMSLRtrRk/E9C
         PID+ifd/Sru5o0sJT1XCogTshaTTwFNA10/VLUXgecNUaE5fPMpC38yQlU+rrLrYTgce
         zRQaWVTGCUkFiBONxbM6mRcCxx/8PfOgrtI5VD+J27u0kVTBr6WZJALNEVBb4JfFKTmS
         PresxSiWUB1qZNnuUWcHm1jUzxNDKzRUZVuJBwK8gbtxv/ffZLCta0n1wSUkplRTeEDG
         GcN9d86pai9FCwDm8u3dmGWuS/xhpxC7t0HHlw6/zPUB4X3inGqW21B872genamuyT+E
         f4ww==
X-Gm-Message-State: AOJu0YwbuSzaEfvNLroQzmsWUQ/DBpc/LA0wEkBNF1wPYqD7tL2s8U2U
	NXhHD56CMWOgCtEk9+OROAdO2Bi3u3gF3NyW+MNPLzY9RPx4YMdHREHjfAv077O/rQ9exxRR7Mr
	uwEdi5tIvUhaFhwZo5bKlYlSfyRzb2A7BBK741WCYIe2c4E7OVioFWg==
X-Google-Smtp-Source: AGHT+IGwwMjUQJQk/si9KlMJzkl/2O/IbwC30qKf0YSyY7CtF4WTsWozfCkfl4cH++ymAzsO3W/LvZuagtbL90ecO2A=
X-Received: by 2002:a50:aa93:0:b0:57a:212a:a21b with SMTP id
 4fb4d7f45d1cf-57a8b68797fmr1339345a12.7.1717582080668; Wed, 05 Jun 2024
 03:08:00 -0700 (PDT)
MIME-Version: 1.0
From: Kelly Choi <kelly.choi@cloud.com>
Date: Wed, 5 Jun 2024 11:07:24 +0100
Message-ID: <CAO-mL=xPmpwW9m-aT9hx9S7wm4iJf-C8C3UxGgrg8ewSQCyOHg@mail.gmail.com>
Subject: Join us for day 2 - Free virtual Xen Summit 2024
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org, 
	xen-announce@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000caa5d1061a21bd72"

--000000000000caa5d1061a21bd72
Content-Type: text/plain; charset="UTF-8"

Hi everyone,

Please find the *schedule of talks listed here.*
<https://events.linuxfoundation.org/xen-project-summit/program/schedule/>

   - All sessions on Wednesday 5th June and Thursday 6th June will use the
   following links:
      - *DESIGN SESSION A <https://meet.jit.si/XenDesignSessionsA>* (Liberdade
      Room)
      - *DESIGN SESSION B <https://meet.jit.si/XenDesignSessionsB>* (Augusta
      Room)
      - Please check the schedule for the time and the session room each talk
      takes place in


   - Thursday design sessions will be finalized on the schedule by the end
   of day on Wednesday
   - The same links will be used throughout talks and sessions
   - (Optional) Join our Xen Summit matrix channel for updates on the day:
   https://matrix.to/#/#xen-project-summit:matrix.org

*Some ground rules to follow:*

   - Enter your full name on Jitsi so everyone knows who you are
   - Please mute yourself upon joining
   - Turning on cameras is optional, but we encourage doing this for design
   sessions
   - Do *not* shout out your questions during session presentations;
   instead, ask them via the chat function and we will do our best to ask
   them on your behalf
   - During design sessions, we encourage you to unmute and participate
   freely
   - If multiple people wish to speak, please use the 'raise hand' function
   on Jitsi or chat
   - Should there be a need, moderators will have permission to remove
   anyone who is disruptive in sessions on Jitsi
   - If you face issues on the day, please let us know via Matrix - we will
   do our best to help, but please note this is a community effort

Many thanks,
Kelly Choi

Community Manager
Xen Project

--000000000000caa5d1061a21bd72--


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 10:20:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 10:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735684.1141793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnkn-0000yy-8X; Wed, 05 Jun 2024 10:20:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735684.1141793; Wed, 05 Jun 2024 10:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnkn-0000yr-5d; Wed, 05 Jun 2024 10:20:25 +0000
Received: by outflank-mailman (input) for mailman id 735684;
 Wed, 05 Jun 2024 10:20:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3tuO=NH=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sEna9-0004Rp-3i
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 10:09:25 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae947686-2323-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 12:09:23 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2e73359b900so22039741fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 03:09:23 -0700 (PDT)
Received: from [172.20.145.98] ([62.28.225.65])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42158135ea8sm15159045e9.37.2024.06.05.03.09.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 Jun 2024 03:09:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae947686-2323-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717582163; x=1718186963; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=g5UPimmpxwpsZkIT4ao8qllIcnVD0IaMRYQnvqfZOS8=;
        b=U3ickssd4pQ++aGdG3s+47I26cfA/bxC3RPhuUqpCKsWGO4GFZhWg8SAU+WDGgvBYG
         km1hYsp5i1EJn8chZeNvJpoOU0PRghkQc3x8f19SaXRs8ssUU+5AMkglg1PoYE4x1c6d
         cKeMeXfZ0dTBpa8uanFcsWZ5m+kysdyqdYPH263MnAnvvdAhqV/QI+hMZt6/kX+SgT2H
         K9rgM5zUxqjH/X34+t9RxxWku15XEyaIjUsqXPrL0JIiFh66ouE1GsNnaCVx7B1C6ozW
         8zLZAatRaOOVohD59jpPc//Z5JOGNkeOxziQctJO6z+9QxIgQfT8g40PX4WsmXGNcnK6
         3Oag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717582163; x=1718186963;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=g5UPimmpxwpsZkIT4ao8qllIcnVD0IaMRYQnvqfZOS8=;
        b=oTXzyosRzoA+wWwlnUQg6MHGWhF9uqv7+bF2PqVjx6T5MaNhEH65oqMtT0wxXue4g3
         gelTmXkC63lkZkk70u7JZge5HTCw2KoKV0/84RpciLKigSexTo4VyfIYN79/0XIvjtaD
         JqsGJ6F04c77fJqF0ZI7GwSt6rDNDw0An9Vrki1s8M9WYZcydOq6oFuE1+l0AEzsl/9q
         mMG7zf8gV00Y6O61YXu46jwclnw0Rm8A4k+omHiKDi1ZOIlZ6sM2k5VjlIRhqndeW/Kr
         187NiExfH4QvxwG9bpvb+7CmFNmI5oI10PVYFUslTiYkqqdHsIcYMZSUxvZnrQd9fti5
         TGXg==
X-Forwarded-Encrypted: i=1; AJvYcCWOHWJet82EF3n/d0M7jK6PSATFqrgUs3B8ZuBQHORtTvfsOTz2HJQsTvD/u8mLam4BSnbVtJ5iI9tZtwljcgeQiWE+EDbl6o1ak/qlKfY=
X-Gm-Message-State: AOJu0Yxn2SzmnLmOs66bAkT/9gPxDdqJ+rGCyvbAqojo0GifZoV0OOz7
	0S57o2rK+rbO1L9G5HK3O2zSWfhlt+uobGbY7b5FPGB+tzRC5wYKzqKNyMxmHw==
X-Google-Smtp-Source: AGHT+IF2I24LdiAwrUT/tnr0839W27jnocjwEL5t/u0Vhvhc5Ve1GcUrrcNKmx90wJpwC/NsZ1Q4vg==
X-Received: by 2002:a2e:2c06:0:b0:2ea:bc8d:3a43 with SMTP id 38308e7fff4ca-2eac7aace3bmr10438941fa.43.1717582162641;
        Wed, 05 Jun 2024 03:09:22 -0700 (PDT)
Message-ID: <6d2e49bf-7be2-48f1-8075-dc0626015c17@suse.com>
Date: Wed, 5 Jun 2024 12:09:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <7cdff236-bb7d-4dad-9a83-47faaa6dc15f@suse.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
 <BL1PR12MB58490E8F1F26532B0FDFFFD6E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <46b884e2-cbec-46f0-9070-7013307a310f@suse.com>
 <BL1PR12MB5849C1D40FCF9861BFE7B208E7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB5849C1D40FCF9861BFE7B208E7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 05.06.2024 09:04, Chen, Jiqian wrote:
> On 2024/6/5 01:17, Jan Beulich wrote:
>> On 04.06.2024 10:18, Chen, Jiqian wrote:
>>> I tried to get more debug information from my environment and attach it here; maybe you can find some problems.
>>> acpi_parse_madt_ioapic_entries
>>> 	acpi_table_parse_madt(ACPI_MADT_TYPE_INTERRUPT_OVERRIDE, acpi_parse_int_src_ovr, MAX_IRQ_SOURCES);
>>> 		acpi_parse_int_src_ovr
>>> 			mp_override_legacy_irq
>>> 				only process two entries, irq 0 gsi 2 and irq 9 gsi 9
>>> There are only two entries whose type is ACPI_MADT_TYPE_INTERRUPT_OVERRIDE in MADT table. Is it normal?
>>
>> Yes, that's what you'd typically see (or just one such entry).
> Ok, let me conclude: acpi_parse_int_src_ovr gets two entries from the MADT table and adds them into mp_irqs. They are [irq, gsi] = [0, 2] and [irq, gsi] = [9, 9].
> Then the following function, mp_config_acpi_legacy_irqs, initializes the 1:1 mapping of irq and gsi [0~15, except 2 and 9], and adds them into mp_irqs.
> But for high GSIs (>= 16), there is no mapping processing.
> Right?

On that specific system of yours - yes. In the general case high GSIs
may have entries, too.

> Is it that the Xen hypervisor lacks some handling of high GSIs?

I don't think so. Unless you can point out something?

> For now, if the hypervisor gets a high GSI, it can't be translated to an irq, because there is no mapping between them.

No, in the absence of a source override (note the word "override") the
default identity mapping applies.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 10:22:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 10:22:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735691.1141802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnmW-0001bt-JS; Wed, 05 Jun 2024 10:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735691.1141802; Wed, 05 Jun 2024 10:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEnmW-0001bm-Gu; Wed, 05 Jun 2024 10:22:12 +0000
Received: by outflank-mailman (input) for mailman id 735691;
 Wed, 05 Jun 2024 10:22:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rKp1=NH=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sEnmV-0001be-3F
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 10:22:11 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f403:2417::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 76d98a4a-2325-11ef-90a2-e314d9c70b13;
 Wed, 05 Jun 2024 12:22:09 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by SA1PR12MB8597.namprd12.prod.outlook.com (2603:10b6:806:251::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.27; Wed, 5 Jun
 2024 10:22:06 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.021; Wed, 5 Jun 2024
 10:22:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76d98a4a-2325-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nVVq9EUDADTMBa0IcD4BFrXJkVTRg1Wg1WL09Zdxazyf2Rq+R0PHpPonyBqdRwhfijpY5dh1DtR7gm6gRbU8/MJi5k05/Gzlz8grOr6QR1aunIQbaGFGgVXdxw20HNFGXkQIVhHDDzUi8GpvRMjirAiZjU1qacnPWXArnmDj1RNaXTlAJz4yXQsqv270rjSDAq8nivl/J31iL5SLqQmRNm8h7npf/p3ntaqv7KjR6NnR8Bu54NFHe+79pi16yhZSI2sPL/hCLcVecRq8PNMn67zD92ar9X34fAXIsBu210y4/xPI/E7+SARjMrotZbqXMHhSxUxdiur+t7dH/+tINQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fDhwJdRIQqsVW8o307fkgrhnlbwWH65fIFvsBZ0/BYA=;
 b=Bwx3bsa4KqwmI3eP1NHdziLi7R6sbu03V/KWlHLRirWNvxDBYfMdLLWbVVwj3qnzo0O85f12/jeXwvPG0YAN2BfWhsdJxeLYJLa+QKgY4/zUUtwbSSAIEoBIFE8UbliLftQFfyl9u20tVmTUR6WYiDWBGT6xCTsAL4IFCGX+hw9Bzo/L5/oYR8DMwApXk/9LN4J4JQEk/MK+k32nU4vqNAnr8/jUY6KWl57iElym3qIaCLj1FZs7pXnS8dQkn9DdBeeqjL+lCnj6JpoNVVOVX35HOr2tA80KTTw3L2zHTaBXIoNDukrEhScidQAjR5cqFEePJL4D9MM3uqVzLES45w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fDhwJdRIQqsVW8o307fkgrhnlbwWH65fIFvsBZ0/BYA=;
 b=kRBnioSYRDpvkiydnXxYQBD9nCPk6KLt5Wgw6wob4Rj9lakZv1SyUuIzb3S0Q8FsSSj9Vtp3Btc4Iav0uN3J20yATvPLB3Wl9AYRpwsiO+qcdZuUyE153sO+iRx6rtKAl/Jh6ru9uivkwWyHiIjmbvCfaEKvpYZF8yKRrXPVSg0=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHap3bgiER4vYjwvk2+R5oTa8V63LGZ5EYAgAHVa4D//4e1AIAAis+A//+FmgCAEsTugP//viAAABEhvwD//4GYgIAAoyuA//+0FICAAgSCAP//yCWAgAeLLwD//6n2gAAQ1XyAABA5GQAADz4pgAAu2vkAAEmxvAAAkQD4gAD05PGAAfOTNwAD1iZlAA==
Date: Wed, 5 Jun 2024 10:22:05 +0000
Message-ID:
 <BL1PR12MB5849932D0F3D280E4B8574DCE7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <BL1PR12MB58493D3365CC451F36DB554FE7F22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
 <BL1PR12MB58490E8F1F26532B0FDFFFD6E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <46b884e2-cbec-46f0-9070-7013307a310f@suse.com>
 <BL1PR12MB5849C1D40FCF9861BFE7B208E7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
 <6d2e49bf-7be2-48f1-8075-dc0626015c17@suse.com>
In-Reply-To: <6d2e49bf-7be2-48f1-8075-dc0626015c17@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.017)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|SA1PR12MB8597:EE_
x-ms-office365-filtering-correlation-id: 0b7b4479-f7a9-43f9-92c6-08dc85495971
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230031|7416005|376005|1800799015|366007|38070700009;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?cDRqUHV6R0M2dmFCZytpbFFXK3FjWmtxOW1qaldGaE5ZZHA4QVE4REdvL0No?=
 =?utf-8?B?ejFLb2NiVFljMmJVQ2lqd3E0NHlmZXM4RUZjdFNXUkErN1FJdVB0VWdzald2?=
 =?utf-8?B?NlpBZHo4c2lWa2J2S0hGU2RpdEpPZzdOWGpZUDFwNzlrbnBEMnBHb2lESDZw?=
 =?utf-8?B?a3E4UEI1Zlh5TjJLT3UrdDFETFRlNk1IVzAzTGU2SWNEaVRXMFVFVStESTZE?=
 =?utf-8?B?M21GTXRSMCtONG1EV002U05jc0F6ZGVDOFRrRWRFaUlidEVLcHJ3ZkV6QXc4?=
 =?utf-8?B?TytDZTRvejhxYU1wMFJMY3FyYXlQTVVRT0FoVFB4SnBDYW9jTnpNaFJnUVZi?=
 =?utf-8?B?RjBML08xR3MxeU00WG93UHJrWXgvM2EzdGJiZ0xkL1NrUmF3dTVNQUlnN2dn?=
 =?utf-8?B?bDl1Mi9BWGlQRlczRzdMMWs2WVRuaVEvV09SdTlTQkhLc0JHYlN5ZW9yNUNs?=
 =?utf-8?B?b2twanAwTlcwdmxzdi9BNlVnaStUcGczcVZtRHh6eTNWQWRlRndoMUl3a2tG?=
 =?utf-8?B?MlRxdGVCUTczNTBGMDJCWGNUQnI0bURqNmdHdHg1UkUwZlR5RXh4c3hoMUI1?=
 =?utf-8?B?UHZVZTZuemxXYmFjWVY2S01Idjc3Mis2bkZFSkNBdlRsWVFsdCs2V09Wc1hJ?=
 =?utf-8?B?NnR3YjBmSHJvOUU4d3I0RDhpTStQUWJ1Ritsa01oazAwRGNkbCtmZnV6VHZZ?=
 =?utf-8?B?ckk2c0RMVGRzbVVlRUFWdEwzM1V2M2YxenhYWlE4L0FDNkYvRitXTGZqMm1h?=
 =?utf-8?B?M2tmcU5yeURrcFIwMkFLckJDVUovQm00UnlIYjVSUFhCSmorSDFNVHJRWmN0?=
 =?utf-8?B?TXIyTVMwVEQ3L1BqbDRHVEdWb2VxQmV3SDNGSlNtQ3I2RkdESmkyb0hTYVZo?=
 =?utf-8?B?REFlZkZiaDlINTNVT3RyQUJMVmFzcDVxT0R3cUM0czMydjV6dUYzKzBkT1B2?=
 =?utf-8?B?WXpNNi9zcXhzTS9IN0hFaldaU1NFUGU0TWphM3V2TEtPUFFncHlYSTNvVGta?=
 =?utf-8?B?MmsrbFFjMHJuajVsYkhaVG9qVlNyNUQ1ZHZCM3E0YWhLRHArN3A3VUZRMWEx?=
 =?utf-8?B?WnI5bTZwZ1NQRmxGN2h6OEl5bDNlVko3d3dpN2NVOTNsRHl1c1h3R09LK0Z5?=
 =?utf-8?B?OVJPREFuUXI2RzFCODJLTGdSbjBRNDFqRkNJUnBQQ0kzNlhSYUg2QXRsOWsv?=
 =?utf-8?B?ZlVJcGx2Z0xZOW9ENlF2aytNd2lkSDY0YnZ4NmVxTzZoSCs5cXo5ekpRbDFs?=
 =?utf-8?B?RnN5bGZ6bFB6RGJlbWhxWjBhcGt5eVFnYjB4TjhjTTFjVmRTendTdit4WjMy?=
 =?utf-8?B?RE9idlR0SEtOQ2RIbldpTWRvN28xNHV1Y3RGUEx2ZkxoOXpwN3JGbVJJMmQz?=
 =?utf-8?B?QTRKOWxFTThCYVZtOWh4bkoxN0hJMkJreGVsMUpFa0Z6WFVMZWQwQnlpM2Uz?=
 =?utf-8?B?YTlHUWxQTkxjMWQ0TVd5QXNFa3BObndMSjgwQThUQ2hrK21ycFhlcDFXWXlH?=
 =?utf-8?B?U2h1YWxKT3E2a2tGcWRYRGNoaHRIekJ6MmxDY3h1Q2F0M3VLWjB1MFR3OEwz?=
 =?utf-8?B?eitQa1FucEFIdGQ3bjd6VGFCV1VzdmtMRDFKWEJwVjVHTjE3UFFwM0ZsWXpM?=
 =?utf-8?B?cFNmdGR1MUhDcitOZ21oejF2dXFqMWhOUS9zcGJzeHpQOThnVnZDQTRtNmhr?=
 =?utf-8?B?Uzc4bkVmTEM3c0tFa1FCay80V2J1cndML0NPaThkSDBsTFgwa0FZc2dDVTlE?=
 =?utf-8?B?U0JDYW9sRzlNNmR3T2I4WmJGZm9GWGNqd1RDZlFTV254cUJwVno4V1NkUjYw?=
 =?utf-8?B?dC8vdmlUV0d1WnlYbGxHWU4zRllKUnRybzRKMmhidHRDZGhDWEpwcEVmZ25z?=
 =?utf-8?B?Y0VxSlJKMXhZbW1qOU0yOHcyK2xuQStlS1ZTV1RJdGFHWUNJRHl0QjFBQyt2?=
 =?utf-8?B?VEVRNEpsR0R4TXpjUHlFSmNoRzJhSk5NcTdyYUIyNFB5S1Z0NlZHK25aNitN?=
 =?utf-8?B?NUQwdU1XSWtya3RlM0c3ZkhQUjY1WUl3cDA2RXZNdU9RcjV2SVE2NVM5WDZk?=
 =?utf-8?B?R2dLbXpTTUg1UWptcCthZlEvbGwrZUhrQnZJYXdmMDBHZ3I1SjNWYW5zMmts?=
 =?utf-8?Q?ALUc=3D?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-originalarrivaltime: 05 Jun 2024 10:22:05.8904
 (UTC)

On 2024/6/5 18:09, Jan Beulich wrote:
> On 05.06.2024 09:04, Chen, Jiqian wrote:
>> On 2024/6/5 01:17, Jan Beulich wrote:
>>> On 04.06.2024 10:18, Chen, Jiqian wrote:
>>>> I tried to get more debug information from my environment. And I attach them here, maybe you can find some problems.
>>>> acpi_parse_madt_ioapic_entries
>>>> 	acpi_table_parse_madt(ACPI_MADT_TYPE_INTERRUPT_OVERRIDE, acpi_parse_int_src_ovr, MAX_IRQ_SOURCES);
>>>> 		acpi_parse_int_src_ovr
>>>> 			mp_override_legacy_irq
>>>> 				only process two entries, irq 0 gsi 2 and irq 9 gsi 9
>>>> There are only two entries whose type is ACPI_MADT_TYPE_INTERRUPT_OVERRIDE in MADT table. Is it normal?
>>>
>>> Yes, that's what you'd typically see (or just one such entry).
>> Ok, let me conclude that acpi_parse_int_src_ovr get two entries from MADT table and add them into mp_irqs. They are [irq, gsi][0, 2] and [irq, gsi][9, 9].
>> Then in the following function mp_config_acpi_legacy_irqs initializes the 1:1 mapping of irq and gsi [0~15 except 2 and 9], and add them into mp_irqs.
>> But for high GSIs(>= 16), no mapping processing.
>> Right?
> 
> On that specific system of yours - yes. In the general case high GSIs
> may have entries, too.
> 
>> Is it that the Xen hypervisor lacks some handling of high GSIs?
> 
> I don't think so. Unless you can point out something?
Ok, so the implementation is still to get mapping from mp_irqs, I will change in next version.
Thank you.

> 
>> For now, if hypervisor gets a high GSIs, it can't be transformed to irq, because there is no mapping between them.
> 
> No, in the absence of a source override (note the word "override") the
> default identity mapping applies.
What is identity mapping? Like the mp_config_acpi_legacy_irqs does?

> 
> Jan

-- 
Best regards,
Jiqian Chen.
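[Archive note] The default identity mapping discussed in the message above can be illustrated with a short, self-contained sketch. This is not Xen's actual implementation; the `mp_irqs` table and `gsi_to_irq()` helper here are simplified stand-ins. The idea: an explicit MADT interrupt source override entry wins, and in its absence a GSI maps to the IRQ of the same number, for high GSIs (>= 16) as well.

```c
#include <stdint.h>
#include <stddef.h>

struct irq_override {
    uint8_t  irq;   /* legacy ISA IRQ */
    uint32_t gsi;   /* Global System Interrupt it is routed to */
};

/* The two override entries seen on the system under discussion. */
static const struct irq_override mp_irqs[] = {
    { .irq = 0, .gsi = 2 },
    { .irq = 9, .gsi = 9 },
};

static int gsi_to_irq(uint32_t gsi)
{
    size_t i;

    /* An explicit MADT source override entry wins... */
    for ( i = 0; i < sizeof(mp_irqs) / sizeof(mp_irqs[0]); i++ )
        if ( mp_irqs[i].gsi == gsi )
            return mp_irqs[i].irq;

    /* ...otherwise the default identity mapping applies, high GSIs included. */
    return (int)gsi;
}
```

So on this system GSI 2 resolves to IRQ 0 via the override, while GSI 9 and any high GSI such as 24 fall through to the identity mapping.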


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 10:45:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 10:45:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Jens Wiklander <jens.wiklander@linaro.org>, Xen-devel
	<xen-devel@lists.xenproject.org>, "patches@linaro.org" <patches@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [XEN PATCH v5 7/7] xen/arm: ffa: support notification
Thread-Topic: [XEN PATCH v5 7/7] xen/arm: ffa: support notification
Thread-Index:
 AQHasZmLIIIAu+VRvk26HUXX7ib+FLGxammAgARbjQCAAAMkAIAACp2AgAGsrYCAAYc/AA==
Date: Wed, 5 Jun 2024 10:45:12 +0000
Message-ID: <D56844C2-D602-4109-BF9D-6FCD59B532EC@arm.com>
References: <20240529072559.2486986-1-jens.wiklander@linaro.org>
 <20240529072559.2486986-8-jens.wiklander@linaro.org>
 <C52D6A7C-1136-4BF1-9060-600157F641F5@arm.com>
 <CAHUa44GRNQV4X61YPZTxO+tkkwJS9hoqQ07U9vP1k6n1zUt9rQ@mail.gmail.com>
 <39045a8f-ea18-4264-b540-66645751d27d@xen.org>
 <CAHUa44Hrm7p9MyTwsp+XU+EAMPXb+bi0a7P8sbhsvz2Tobozow@mail.gmail.com>
 <ad94bed4-42a1-4c59-afc1-a542c9a406ea@xen.org>
In-Reply-To: <ad94bed4-42a1-4c59-afc1-a542c9a406ea@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 Jun 2024 10:45:22.2299
 (UTC)

Hi,

> On 4 Jun 2024, at 12:24, Julien Grall <julien@xen.org> wrote:
> 
> 
> 
> On 03/06/2024 10:50, Jens Wiklander wrote:
>> Hi Julien,
> 
> Hi Jens,
> 
> 
>> On Mon, Jun 3, 2024 at 11:12 AM Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Jens,
>>> 
>>> On 03/06/2024 10:01, Jens Wiklander wrote:
>>>> On Fri, May 31, 2024 at 4:28 PM Bertrand Marquis
>>>> <Bertrand.Marquis@arm.com> wrote:
>>>>> 
>>>>> Hi Jens,
>>>>> 
>>>>>> On 29 May 2024, at 09:25, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>>>>>> 
>>>>>> Add support for FF-A notifications, currently limited to an SP (Secure
>>>>>> Partition) sending an asynchronous notification to a guest.
>>>>>> 
>>>>>> Guests and Xen itself are made aware of pending notifications with an
>>>>>> interrupt. The interrupt handler triggers a tasklet to retrieve the
>>>>>> notifications using the FF-A ABI and deliver them to their destinations.
>>>>>> 
>>>>>> Update ffa_partinfo_domain_init() to return error code like
>>>>>> ffa_notif_domain_init().
>>>>>> 
>>>>>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
>>>>>> ---
>>>>>> v4->v5:
>>>>>> - Move the freeing of d->arch.tee to the new TEE mediator free_domain_ctx
>>>>>>   callback to make the context accessible during rcu_lock_domain_by_id()
>>>>>>   from a tasklet
>>>>>> - Initialize interrupt handlers for secondary CPUs from the new TEE mediator
>>>>>>   init_interrupt() callback
>>>>>> - Restore the ffa_probe() from v3, that is, remove the
>>>>>>   presmp_initcall(ffa_init) approach and use ffa_probe() as usual now that we
>>>>>>   have the init_interrupt callback.
>>>>>> - A tasklet is added to handle the Schedule Receiver interrupt. The tasklet
>>>>>>   finds each relevant domain with rcu_lock_domain_by_id() which now is enough
>>>>>>   to guarantee that the FF-A context can be accessed.
>>>>>> - The notification interrupt handler only schedules the notification
>>>>>>   tasklet mentioned above
>>>>>> 
>>>>>> v3->v4:
>>>>>> - Add another note on FF-A limitations
>>>>>> - Clear secure_pending in ffa_handle_notification_get() if both SP and SPM
>>>>>>   bitmaps are retrieved
>>>>>> - ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id FF-A
>>>>>>   uses for Xen itself
>>>>>> - Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id() in
>>>>>>   notif_irq_handler() with a call to rcu_lock_live_remote_domain_by_id() via
>>>>>>   ffa_rcu_lock_domain_by_vm_id()
>>>>>> - Remove spinlock in struct ffa_ctx_notif and use atomic functions as needed
>>>>>>   to access and update the secure_pending field
>>>>>> - In notif_irq_handler(), look for the first online CPU instead of assuming
>>>>>>   that the first CPU is online
>>>>>> - Initialize FF-A via presmp_initcall() before the other CPUs are online,
>>>>>>   use register_cpu_notifier() to install the interrupt handler
>>>>>>   notif_irq_handler()
>>>>>> - Update commit message to reflect recent updates
>>>>>> 
>>>>>> v2->v3:
>>>>>> - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
>>>>>>   FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
>>>>>> - Register the Xen SRI handler on each CPU using on_selected_cpus() and
>>>>>>   setup_irq()
>>>>>> - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
>>>>>>   doesn't conflict with static SGI handlers
>>>>>> 
>>>>>> v1->v2:
>>>>>> - Addressing review comments
>>>>>> - Change ffa_handle_notification_{bind,unbind,set}() to take struct
>>>>>>   cpu_user_regs *regs as argument.
>>>>>> - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
>>>>>>   an error code.
>>>>>> - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
>>>>>> ---
>>>>>> xen/arch/arm/tee/Makefile       |   1 +
>>>>>> xen/arch/arm/tee/ffa.c          |  72 +++++-
>>>>>> xen/arch/arm/tee/ffa_notif.c    | 409 +++++++++++++++++++++++++++++++++
>>>>>> xen/arch/arm/tee/ffa_partinfo.c |   9 +-
>>>>>> xen/arch/arm/tee/ffa_private.h  |  56 ++++-
>>>>>> xen/arch/arm/tee/tee.c          |   2 +-
>>>>>> xen/include/public/arch-arm.h   |  14 ++
>>>>>> 7 files changed, 548 insertions(+), 15 deletions(-)
>>>>>> create mode 100644 xen/arch/arm/tee/ffa_notif.c
>>>>>> 
>>>> [...]
>>>>>> 
>>>>>> @@ -517,8 +567,10 @@ err_rxtx_destroy:
>>>>>> static const struct tee_mediator_ops ffa_ops =
>>>>>> {
>>>>>>      .probe = ffa_probe,
>>>>>> +    .init_interrupt = ffa_notif_init_interrupt,
>>>>> 
>>>>> With the previous change on init secondary i would suggest to
>>>>> have a ffa_init_secondary here actually calling the ffa_notif_init_interrupt.
>>>> 
>>>> Yes, that makes sense. I'll update.
>>>> 
>>>>> 
>>>>>>      .domain_init = ffa_domain_init,
>>>>>>      .domain_teardown = ffa_domain_teardown,
>>>>>> +    .free_domain_ctx = ffa_free_domain_ctx,
>>>>>>      .relinquish_resources = ffa_relinquish_resources,
>>>>>>      .handle_call = ffa_handle_call,
>>>>>> };
>>>>>> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
>>>>>> new file mode 100644
>>>>>> index 000000000000..e8e8b62590b3
>>>>>> --- /dev/null
>>>>>> +++ b/xen/arch/arm/tee/ffa_notif.c
>>>> [...]
>>>>>> +static void notif_vm_pend_intr(uint16_t vm_id)
>>>>>> +{
>>>>>> +    struct ffa_ctx *ctx;
>>>>>> +    struct domain *d;
>>>>>> +    struct vcpu *v;
>>>>>> +
>>>>>> +    /*
>>>>>> +     * vm_id == 0 means a notifications pending for Xen itself, but
>>>>>> +     * we don't support that yet.
>>>>>> +     */
>>>>>> +    if ( !vm_id )
>>>>>> +        return;
>>>>>> +
>>>>>> +    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
>>>>>> +    if ( !d )
>>>>>> +        return;
>>>>>> +
>>>>>> +    ctx = d->arch.tee;
>>>>>> +    if ( !ctx )
>>>>>> +        goto out_unlock;
>>>>> 
>>>>> In both previous cases you are silently ignoring an interrupt
>>>>> due to an internal error.
>>>>> Is this something that we should trace ? maybe just debug ?
>>>>> 
>>>>> Could you add a comment to explain why this could happen
>>>>> (when possible) or not and the possible side effects ?
>>>>> 
>>>>> The second one would be a notification for a domain without
>>>>> FF-A enabled which is ok but i am a bit more wondering on
>>>>> the first one.
>>>> 
>>>> The SPMC must be out of sync in both cases. I've been looking for a
>>>> window where that can happen, but I can't find any. SPMC is called
>>>> with FFA_NOTIFICATION_BITMAP_DESTROY during domain teardown so the
>>>> SPMC shouldn't try to deliver any notifications after that.
>>> 
>>> I don't think I agree with the conclusion. I believe, this can also
>>> happen in normal operation.
>>> 
>>> For example, the SPMC could have trigger the interrupt before
>>> FFA_NOTIFICATION_BITMAP_DESTROY but Xen didn't handle the interrupt (or
>>> run the tasklet) until later.
>> You're right, there is a window. Delayed handling is OK since
>> FFA_NOTIFICATION_INFO_GET_64 is invoked from the tasklet, but there is
>> a window if the tasklet is suspended or another core destroys the
>> domain before the tasklet has called ffa_rcu_lock_domain_by_vm_id().
>> So far it's harmless and I guess we can afford a print.
> 
> I think it would confuse more the user than anything else because this is an expected race. If we wanted to print a message, then I would argue it should be in the case where...
> 
>>> 
>>> This could be at the time where the domain has been fully destroyed or
>>> even when...
>>> 
>>>> In the second case, the domain ID might have been reused for a domain
>>>> without FF-A enabled, but the SPMC should have known that already.
>>> 
>>> ... a new domain has been created. Although, the latter is rather unlikely.
>>> 
>>> So what if the new domain has FFA enabled? Is there any potential
>>> security issue?
>> In this case, we'll inject an NPI in the guest, but when it invokes
>> FFA_NOTIFICATION_GET it will get accurate information from the SPMC.
>> The worst case is a spurious NPI. This shouldn't be a security issue.
> 
> ... we inject the interrupt to the "wrong" domain. But I also understand that it would be difficult for Xen to detect it.
> 
> So I would say no print should be needed. Bertrand, what do you think?

Yes i agree that it could confuse the user.
I would just put comments to explain the situations where it could occur in the code.

Cheers
Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 10:53:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 10:53:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735710.1141822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEoGx-0007Q9-9G; Wed, 05 Jun 2024 10:53:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735710.1141822; Wed, 05 Jun 2024 10:53:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEoGx-0007Q2-6l; Wed, 05 Jun 2024 10:53:39 +0000
Received: by outflank-mailman (input) for mailman id 735710;
 Wed, 05 Jun 2024 10:53:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sEoGw-0007Pw-HA
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 10:53:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sEoGw-0004ly-38; Wed, 05 Jun 2024 10:53:38 +0000
Received: from [62.28.225.65] (helo=[172.20.145.71])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sEoGv-0002mi-RN; Wed, 05 Jun 2024 10:53:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=5qzugHp7KExUdLWJSFm7YS2nF/mvOqvBE96BNngndcU=; b=k8TriZiBEGKsfBeDwZbBspjltz
	T5Qbt3GY5h0CMzwPOCY6eokgC+g3ZFDXlygXz2u9zqfOyctc1uzBD9kQ/C0WE85YaVxn944PvRFy1
	XZopvSBiWgOb0tNsq4MdS46oM+uoYx+oSKDX1l60wmxecwx5ewn4raB8/fNPv757uJy8=;
Message-ID: <8a372ac1-2ccb-40b6-89d9-c7bcbf95669b@xen.org>
Date: Wed, 5 Jun 2024 11:53:35 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v5 4/7] xen/arm: allow dynamically assigned SGI
 handlers
Content-Language: en-GB
To: Jens Wiklander <jens.wiklander@linaro.org>, xen-devel@lists.xenproject.org
Cc: patches@linaro.org, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240529072559.2486986-1-jens.wiklander@linaro.org>
 <20240529072559.2486986-5-jens.wiklander@linaro.org>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240529072559.2486986-5-jens.wiklander@linaro.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Jens,

On 29/05/2024 08:25, Jens Wiklander wrote:
> Updates so request_irq() can be used with a dynamically assigned SGI irq
> as input. This prepares for a later patch where an FF-A schedule
> receiver interrupt handler is installed for an SGI generated by the
> secure world.
> 
>  From the Arm Base System Architecture v1.0C [1]:
> "The system shall implement at least eight Non-secure SGIs, assigned to
> interrupt IDs 0-7."
> 
> gic_route_irq_to_xen() doesn't call gic_set_irq_type() for SGIs since
> they are always edge-triggered.
> 
> gic_interrupt() is updated to route the dynamically assigned SGIs to
> do_IRQ() instead of do_sgi(). The latter still handles the statically
> assigned SGI handlers like for instance GIC_SGI_CALL_FUNCTION.
> 
> [1] https://developer.arm.com/documentation/den0094/
> 
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 11:12:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 11:12:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735717.1141832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEoYz-0002nw-Lo; Wed, 05 Jun 2024 11:12:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735717.1141832; Wed, 05 Jun 2024 11:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEoYz-0002np-JG; Wed, 05 Jun 2024 11:12:17 +0000
Received: by outflank-mailman (input) for mailman id 735717;
 Wed, 05 Jun 2024 11:12:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEoYy-0002nf-96; Wed, 05 Jun 2024 11:12:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEoYy-00055d-74; Wed, 05 Jun 2024 11:12:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEoYx-0001eC-Sn; Wed, 05 Jun 2024 11:12:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEoYx-0006B6-SN; Wed, 05 Jun 2024 11:12:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4pyXuEtLTnvEiK3MAdfPEc+kj7ugbA7S6NhD+iE8W5E=; b=q8Cc5wBfyMxvr++NseIZEW8Q5f
	LysZx8Yiys1MYmyaOdqi3M4tUsGe2pvEz/WX4pLsKViuSiEdF/sQVpps3LNh8WojtxY8RdCWL4tP2
	s+2/Lq8FWhmXwKlcTTSzwzhFJrrvUhvIufrk0HESDJX8yL7TetwDew341fRCxVU9EmNk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186255-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186255: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=4381b83d991b51a07ba5b6d3f56e6c0a8910a38d
X-Osstest-Versions-That:
    libvirt=83bed4367e76e9003479a8d7bd5cbee080d80017
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Jun 2024 11:12:15 +0000

flight 186255 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186255/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186243
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              4381b83d991b51a07ba5b6d3f56e6c0a8910a38d
baseline version:
 libvirt              83bed4367e76e9003479a8d7bd5cbee080d80017

Last test of basis   186243  2024-06-04 04:20:32 Z    1 days
Testing same since   186255  2024-06-05 04:20:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   83bed4367e..4381b83d99  4381b83d991b51a07ba5b6d3f56e6c0a8910a38d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 12:09:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 12:09:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735745.1141842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEpRl-0002HH-0s; Wed, 05 Jun 2024 12:08:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735745.1141842; Wed, 05 Jun 2024 12:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEpRk-0002HA-U4; Wed, 05 Jun 2024 12:08:52 +0000
Received: by outflank-mailman (input) for mailman id 735745;
 Wed, 05 Jun 2024 12:08:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OW4N=NH=redhat.com=mlureau@srs-se1.protection.inumbo.net>)
 id 1sEpRj-0002Gz-Sq
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 12:08:51 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5dd2ca5d-2334-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 14:08:50 +0200 (CEST)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-678-h_UGk8sUMvS-YfTWapbYEg-1; Wed, 05 Jun 2024 08:08:47 -0400
Received: by mail-ed1-f69.google.com with SMTP id
 4fb4d7f45d1cf-57a306c4b1eso3237982a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 05:08:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5dd2ca5d-2334-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717589328;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2v9Z4zn5H16gf2IEP0RcvQSdt1WbgFyv76Fd0JybRQE=;
	b=NIsDfowt3iB6zNABTta6Cjs9pcnu/470jqWtXTPCs9BM7KZeXNlllBCNDxgjATgY0f6qbH
	empyx4v5qJD++bM60CzyV5jFmqBtPn1UtN6uEagXbv9HWqUFdftb0/D4EzdOJ4PCHVf2Eg
	EcCUzheeL7l7FvS/soiPjdcBflZ1dvg=
X-MC-Unique: h_UGk8sUMvS-YfTWapbYEg-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717589326; x=1718194126;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2v9Z4zn5H16gf2IEP0RcvQSdt1WbgFyv76Fd0JybRQE=;
        b=iumGmIedipODdjRCamjY0ynhfPh1cGpcy4ME/z9NAjsiRNVu3jHlpAZK+gFnqPsh+L
         VnfXMhbO/bp9iKv8XvUohsBvgXsPZaOeTf6p4smQAfTCPUlGLB4+iIxWG4a2auZUjaqo
         fHiKmNEByxEAfSTIQGes6vMr7onxzWDavaXOmZITXXNCsT03vW2NiCqwNMLhb42sQmSP
         deZ/Z9Vib2TYCsT2RDHVmn9lAB/vmSYt7II8dHy9Z5Db9Gk45qtON/SAqpr+AwcAwoI2
         bMTtU+y/KJETDrJY//vSpqJ2TbGVxF9QNJDVHvWbL0iV0mUkBqoRy1xkALGG6sYHQBty
         xGsA==
X-Forwarded-Encrypted: i=1; AJvYcCVo7hW4ApSSleupNfS3VZbCyet2lhhD8Kjg4oITPtAHN0q0gtgl2F3eIHuy4Wh7cavM3tHN/DM9YKgkBErTyRzWIbvLssE/4UrFmwqONNc=
X-Gm-Message-State: AOJu0YxGOYKoZoUYCC8ESjuugWzB90vH/7eVHHD9bdQPD31rDvdtlN7I
	Z7mqFoqDmalziT0kqO3c9Zbi+dU64z8iH05Ja92MZ1lEJHAc3Eq34573NRkUExeIOwoCJfbBonX
	uKlbzUXeWwSbNK6wAHxieeUcyBFx+PKOFvpKQ6KIzknR1esMesDeIKKEV+9C/hKI0elgdFDB2ZF
	RQ+zm590tixwrnfZv1P0OelwkjUMGDbLlf6AnLoHw=
X-Received: by 2002:a50:ccc6:0:b0:57a:27e5:2a8a with SMTP id 4fb4d7f45d1cf-57a8b67c31fmr1805739a12.8.1717589326254;
        Wed, 05 Jun 2024 05:08:46 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IE+JxuJ520EzT+4QtY4uXVm+JfZMvYvj6q9jh0fTRQVgJJ52LosO9DfGMFok6kJpKaWnybVCz6cTTo87EWHuIc=
X-Received: by 2002:a50:ccc6:0:b0:57a:27e5:2a8a with SMTP id
 4fb4d7f45d1cf-57a8b67c31fmr1805731a12.8.1717589325928; Wed, 05 Jun 2024
 05:08:45 -0700 (PDT)
MIME-Version: 1.0
References: <20240603151825.188353-1-kraxel@redhat.com> <20240603151825.188353-2-kraxel@redhat.com>
 <CAMxuvawqf-0dKPsZP2UTcDWPWQ+8FKbZ=S4KX02hQO1qeeGVMA@mail.gmail.com> <tmtartaqh2ac4azfq4cgwh22uuc4pnrnxjpcpky24xzjrkwb5c@ung7cyha4ppa>
In-Reply-To: <tmtartaqh2ac4azfq4cgwh22uuc4pnrnxjpcpky24xzjrkwb5c@ung7cyha4ppa>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>
Date: Wed, 5 Jun 2024 16:08:33 +0400
Message-ID: <CAMxuvay6qMGCSc7eWzs0Nu7x=VOyG6D56Jb9sNe+azh80GFe1Q@mail.gmail.com>
Subject: Re: [PATCH v2 1/3] stdvga: fix screen blanking
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu-devel@nongnu.org, Anthony PERARD <anthony@xenproject.org>, 
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org, 
	Stefano Stabellini <sstabellini@kernel.org>, qemu-stable@nongnu.org
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi

On Wed, Jun 5, 2024 at 11:36 AM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
> On Tue, Jun 04, 2024 at 10:27:18AM GMT, Marc-André Lureau wrote:
> > Hi
> >
> > > +    if (is_buffer_shared(surface)) {
> >
> > Perhaps the suggestion to rename the function (in the following patch)
> > should instead be surface_is_allocated() ? that would match the actual
> > flag check. But callers would have to ! the result. Wdyt?
>
> surface_is_shadow() ?  Comes closer to the typical naming in computer
> graphics.

If the underlying flag is renamed too, that's ok to me.



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 13:15:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 13:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735756.1141867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTj-0003Tl-7P; Wed, 05 Jun 2024 13:14:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735756.1141867; Wed, 05 Jun 2024 13:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTj-0003TW-4C; Wed, 05 Jun 2024 13:14:59 +0000
Received: by outflank-mailman (input) for mailman id 735756;
 Wed, 05 Jun 2024 13:14:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oDqB=NH=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sEqTh-0003E5-Vj
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 13:14:57 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9a5f9c15-233d-11ef-90a2-e314d9c70b13;
 Wed, 05 Jun 2024 15:14:57 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73])
 by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-460-tXfqlkJlNb2AyTUOjfHNLw-1; Wed,
 05 Jun 2024 09:14:50 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 756BE1C05122;
 Wed,  5 Jun 2024 13:14:50 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.217])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id F38F937D1;
 Wed,  5 Jun 2024 13:14:48 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 713EA1800DDB; Wed,  5 Jun 2024 15:14:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a5f9c15-233d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717593296;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TVLB8maweI2YBdW9cqwpRepDTGxsDFjVZsXbEwlXhJ4=;
	b=Ca3GCqN4wYHGdjddioYoIwOtsQQwBpOF9kLw/z6RiQUamn9nSIWjyp/tIubW7DUzffUXsU
	liNUrSqAa1uhC0hxYV2b5UFb57PQH1+KvddvUtrPXJjHwM+Ru5DeedI4Omtb1qEbKckScQ
	MiMlz52Jf2ShsSCeLnDjBSgoCeGr3a0=
X-MC-Unique: tXfqlkJlNb2AyTUOjfHNLw-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Anthony PERARD <anthony@xenproject.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@gmail.com>
Subject: [PATCH v3 3/3] ui+display: rename is_buffer_shared() -> surface_is_allocated()
Date: Wed,  5 Jun 2024 15:14:43 +0200
Message-ID: <20240605131444.797896-4-kraxel@redhat.com>
In-Reply-To: <20240605131444.797896-1-kraxel@redhat.com>
References: <20240605131444.797896-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.1

Boolean return value is reversed, to align with QEMU_ALLOCATED_FLAG, so
all callers must be adapted.  Also rename share_surface variable in
vga_draw_graphic() to reduce confusion.

No functional change.

Suggested-by: Marc-André Lureau <marcandre.lureau@gmail.com>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/surface.h    |  4 ++--
 hw/display/qxl-render.c |  2 +-
 hw/display/vga.c        | 20 ++++++++++----------
 hw/display/xenfb.c      |  5 +++--
 ui/console.c            |  3 ++-
 5 files changed, 18 insertions(+), 16 deletions(-)

diff --git a/include/ui/surface.h b/include/ui/surface.h
index 273bb4769a02..345b19169d2e 100644
--- a/include/ui/surface.h
+++ b/include/ui/surface.h
@@ -45,9 +45,9 @@ void qemu_displaysurface_win32_set_handle(DisplaySurface *surface,
 DisplaySurface *qemu_create_displaysurface(int width, int height);
 void qemu_free_displaysurface(DisplaySurface *surface);
 
-static inline int is_buffer_shared(DisplaySurface *surface)
+static inline int surface_is_allocated(DisplaySurface *surface)
 {
-    return !(surface->flags & QEMU_ALLOCATED_FLAG);
+    return surface->flags & QEMU_ALLOCATED_FLAG;
 }
 
 static inline int surface_is_placeholder(DisplaySurface *surface)
diff --git a/hw/display/qxl-render.c b/hw/display/qxl-render.c
index ec99ec887a6e..8daae72c8d04 100644
--- a/hw/display/qxl-render.c
+++ b/hw/display/qxl-render.c
@@ -31,7 +31,7 @@ static void qxl_blit(PCIQXLDevice *qxl, QXLRect *rect)
     uint8_t *src;
     int len, i;
 
-    if (is_buffer_shared(surface)) {
+    if (!surface_is_allocated(surface)) {
         return;
     }
     trace_qxl_render_blit(qxl->guest_primary.qxl_stride,
diff --git a/hw/display/vga.c b/hw/display/vga.c
index 474b6b14c327..0ed933596584 100644
--- a/hw/display/vga.c
+++ b/hw/display/vga.c
@@ -1487,7 +1487,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
     uint8_t *d;
     uint32_t v, addr1, addr;
     vga_draw_line_func *vga_draw_line = NULL;
-    bool share_surface, force_shadow = false;
+    bool allocate_surface, force_shadow = false;
     pixman_format_code_t format;
 #if HOST_BIG_ENDIAN
     bool byteswap = !s->big_endian_fb;
@@ -1609,10 +1609,10 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
      */
     format = qemu_default_pixman_format(depth, !byteswap);
     if (format) {
-        share_surface = dpy_gfx_check_format(s->con, format)
-            && !s->force_shadow && !force_shadow;
+        allocate_surface = !dpy_gfx_check_format(s->con, format)
+            || s->force_shadow || force_shadow;
     } else {
-        share_surface = false;
+        allocate_surface = true;
     }
 
     if (s->params.line_offset != s->last_line_offset ||
@@ -1620,7 +1620,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
         height != s->last_height ||
         s->last_depth != depth ||
         s->last_byteswap != byteswap ||
-        share_surface != is_buffer_shared(surface)) {
+        allocate_surface != surface_is_allocated(surface)) {
         /* display parameters changed -> need new display surface */
         s->last_scr_width = disp_width;
         s->last_scr_height = height;
@@ -1635,14 +1635,14 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
         full_update = 1;
     }
     if (surface_data(surface) != s->vram_ptr + (s->params.start_addr * 4)
-        && is_buffer_shared(surface)) {
+        && !surface_is_allocated(surface)) {
         /* base address changed (page flip) -> shared display surfaces
          * must be updated with the new base address */
         full_update = 1;
     }
 
     if (full_update) {
-        if (share_surface) {
+        if (!allocate_surface) {
             surface = qemu_create_displaysurface_from(disp_width,
                     height, format, s->params.line_offset,
                     s->vram_ptr + (s->params.start_addr * 4));
@@ -1655,7 +1655,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
 
     vga_draw_line = vga_draw_line_table[v];
 
-    if (!is_buffer_shared(surface) && s->cursor_invalidate) {
+    if (surface_is_allocated(surface) && s->cursor_invalidate) {
         s->cursor_invalidate(s);
     }
 
@@ -1707,7 +1707,7 @@ static void vga_draw_graphic(VGACommonState *s, int full_update)
         if (update) {
             if (y_start < 0)
                 y_start = y;
-            if (!(is_buffer_shared(surface))) {
+            if (surface_is_allocated(surface)) {
                 uint8_t *p;
                 p = vga_draw_line(s, d, addr, width, hpel);
                 if (p) {
@@ -1762,7 +1762,7 @@ static void vga_draw_blank(VGACommonState *s, int full_update)
     if (s->last_scr_width <= 0 || s->last_scr_height <= 0)
         return;
 
-    if (is_buffer_shared(surface)) {
+    if (!surface_is_allocated(surface)) {
         /* unshare buffer, otherwise the blanking corrupts vga vram */
         surface = qemu_create_displaysurface(s->last_scr_width, s->last_scr_height);
         dpy_gfx_replace_surface(s->con, surface);
diff --git a/hw/display/xenfb.c b/hw/display/xenfb.c
index 27536bfce0cb..06f84ed39d64 100644
--- a/hw/display/xenfb.c
+++ b/hw/display/xenfb.c
@@ -637,7 +637,7 @@ static void xenfb_guest_copy(struct XenFB *xenfb, int x, int y, int w, int h)
     int linesize = surface_stride(surface);
     uint8_t *data = surface_data(surface);
 
-    if (!is_buffer_shared(surface)) {
+    if (surface_is_allocated(surface)) {
         switch (xenfb->depth) {
         case 8:
             if (bpp == 16) {
@@ -755,7 +755,8 @@ static void xenfb_update(void *opaque)
         xen_pv_printf(&xenfb->c.xendev, 1,
                       "update: resizing: %dx%d @ %d bpp%s\n",
                       xenfb->width, xenfb->height, xenfb->depth,
-                      is_buffer_shared(surface) ? " (shared)" : "");
+                      surface_is_allocated(surface)
+                      ? " (allocated)" : " (borrowed)");
         xenfb->up_fullscreen = 1;
     }
 
diff --git a/ui/console.c b/ui/console.c
index c2173fc0b1e5..1a7eb7fe8e8c 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -1510,7 +1510,8 @@ void qemu_console_resize(QemuConsole *s, int width, int height)
     assert(QEMU_IS_GRAPHIC_CONSOLE(s));
 
     if ((s->scanout.kind != SCANOUT_SURFACE ||
-         (surface && !is_buffer_shared(surface) && !surface_is_placeholder(surface))) &&
+         (surface && surface_is_allocated(surface) &&
+          !surface_is_placeholder(surface))) &&
         qemu_console_get_width(s, -1) == width &&
         qemu_console_get_height(s, -1) == height) {
         return;
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 13:15:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 13:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735757.1141883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTm-0003wR-Dd; Wed, 05 Jun 2024 13:15:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735757.1141883; Wed, 05 Jun 2024 13:15:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTm-0003wK-AK; Wed, 05 Jun 2024 13:15:02 +0000
Received: by outflank-mailman (input) for mailman id 735757;
 Wed, 05 Jun 2024 13:15:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oDqB=NH=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sEqTl-0003De-By
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 13:15:01 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a716cea-233d-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 15:14:59 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73])
 by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-97-vsGxsQoUPgaw-ZwqovnG_w-1; Wed,
 05 Jun 2024 09:14:50 -0400
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.rdu2.redhat.com
 [10.11.54.6])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id CB7CB29ABA03;
 Wed,  5 Jun 2024 13:14:49 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.217])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 01A992166AF7;
 Wed,  5 Jun 2024 13:14:48 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 58C141800DD4; Wed,  5 Jun 2024 15:14:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a716cea-233d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717593295;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZzhPwfoH1SErdzaNq4zX0FAoHyesUw7dSQsAWnH09DA=;
	b=enCejUn8OEUZUGWIzELPlQlH/M/hYzpbb+qxF3h31YUJ4vNEW9ED1pLfjGFvkJTC4oxH37
	uR1u8gGA6RzouRJNdmZbPt3ouPBL87nh7nZbQdaRPHVIwbPRVoY2w1uJN9MNogDlmwjNfY
	CUQuJn1oMs/cQYn/aMe27C+VEJiZVFM=
X-MC-Unique: vsGxsQoUPgaw-ZwqovnG_w-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH v3 2/3] ui+display: rename is_placeholder() -> surface_is_placeholder()
Date: Wed,  5 Jun 2024 15:14:42 +0200
Message-ID: <20240605131444.797896-3-kraxel@redhat.com>
In-Reply-To: <20240605131444.797896-1-kraxel@redhat.com>
References: <20240605131444.797896-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.6

No functional change.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/ui/surface.h | 2 +-
 ui/console.c         | 2 +-
 ui/sdl2-2d.c         | 2 +-
 ui/sdl2-gl.c         | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/include/ui/surface.h b/include/ui/surface.h
index 4244e0ca4a32..273bb4769a02 100644
--- a/include/ui/surface.h
+++ b/include/ui/surface.h
@@ -50,7 +50,7 @@ static inline int is_buffer_shared(DisplaySurface *surface)
     return !(surface->flags & QEMU_ALLOCATED_FLAG);
 }
 
-static inline int is_placeholder(DisplaySurface *surface)
+static inline int surface_is_placeholder(DisplaySurface *surface)
 {
     return surface->flags & QEMU_PLACEHOLDER_FLAG;
 }
diff --git a/ui/console.c b/ui/console.c
index 1b2cd0c7365d..c2173fc0b1e5 100644
--- a/ui/console.c
+++ b/ui/console.c
@@ -1510,7 +1510,7 @@ void qemu_console_resize(QemuConsole *s, int width, int height)
     assert(QEMU_IS_GRAPHIC_CONSOLE(s));
 
     if ((s->scanout.kind != SCANOUT_SURFACE ||
-         (surface && !is_buffer_shared(surface) && !is_placeholder(surface))) &&
+         (surface && !is_buffer_shared(surface) && !surface_is_placeholder(surface))) &&
         qemu_console_get_width(s, -1) == width &&
         qemu_console_get_height(s, -1) == height) {
         return;
diff --git a/ui/sdl2-2d.c b/ui/sdl2-2d.c
index 06468cd493ea..73052383c2e0 100644
--- a/ui/sdl2-2d.c
+++ b/ui/sdl2-2d.c
@@ -72,7 +72,7 @@ void sdl2_2d_switch(DisplayChangeListener *dcl,
         scon->texture = NULL;
     }
 
-    if (is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
+    if (surface_is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
         sdl2_window_destroy(scon);
         return;
     }
diff --git a/ui/sdl2-gl.c b/ui/sdl2-gl.c
index 28d796607c08..91b7ee2419b7 100644
--- a/ui/sdl2-gl.c
+++ b/ui/sdl2-gl.c
@@ -89,7 +89,7 @@ void sdl2_gl_switch(DisplayChangeListener *dcl,
 
     scon->surface = new_surface;
 
-    if (is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
+    if (surface_is_placeholder(new_surface) && qemu_console_get_index(dcl->con)) {
         qemu_gl_fini_shader(scon->gls);
         scon->gls = NULL;
         sdl2_window_destroy(scon);
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 13:15:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 13:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735754.1141853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTg-0003Dr-Po; Wed, 05 Jun 2024 13:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735754.1141853; Wed, 05 Jun 2024 13:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTg-0003Dk-MY; Wed, 05 Jun 2024 13:14:56 +0000
Received: by outflank-mailman (input) for mailman id 735754;
 Wed, 05 Jun 2024 13:14:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oDqB=NH=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sEqTg-0003De-3B
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 13:14:56 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9878d3eb-233d-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 15:14:54 +0200 (CEST)
Received: from mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com
 (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-372-rbWF64UtPluLoJfyEMRmOA-1; Wed,
 05 Jun 2024 09:14:49 -0400
Received: from mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com
 (mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.40])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mx-prod-mc-01.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS
 id C7CBD1955DAB; Wed,  5 Jun 2024 13:14:47 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.217])
 by mx-prod-int-04.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS
 id 8FC2C1956055; Wed,  5 Jun 2024 13:14:46 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 437C71800DD3; Wed,  5 Jun 2024 15:14:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9878d3eb-233d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717593292;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vkSiq4a30KX/BtKlzALhqQstMUl79qOxtGFoh3xybvE=;
	b=P6TkaPomyj8h0djr4ki93gTaFdYTrLsGK/8usJWTxEZeQ0DRTsq330lDEns8QZyF8XPM/K
	mbIgmOGJQyuaKTMAzkR8OQL7OhgSBHqWbG2Hv3q3hwfSz68ZvgwuqztDnbN+jhASwdyoAb
	0DjO3XdmVaG+7X9Ze9UgH7+3sT4wduI=
X-MC-Unique: rbWF64UtPluLoJfyEMRmOA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Anthony PERARD <anthony@xenproject.org>,
	qemu-stable@nongnu.org
Subject: [PATCH v3 1/3] stdvga: fix screen blanking
Date: Wed,  5 Jun 2024 15:14:41 +0200
Message-ID: <20240605131444.797896-2-kraxel@redhat.com>
In-Reply-To: <20240605131444.797896-1-kraxel@redhat.com>
References: <20240605131444.797896-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.0 on 10.30.177.40

If the display surface uses a shared buffer (i.e. it uses vga vram
directly instead of a shadow), unshare the buffer before clearing it.

This avoids vga memory corruption, which in turn fixes unblanking not
working properly with X11.

Cc: qemu-stable@nongnu.org
Resolves: https://gitlab.com/qemu-project/qemu/-/issues/2067
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 hw/display/vga.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/hw/display/vga.c b/hw/display/vga.c
index 30facc6c8e33..474b6b14c327 100644
--- a/hw/display/vga.c
+++ b/hw/display/vga.c
@@ -1762,6 +1762,12 @@ static void vga_draw_blank(VGACommonState *s, int full_update)
     if (s->last_scr_width <= 0 || s->last_scr_height <= 0)
         return;
 
+    if (is_buffer_shared(surface)) {
+        /* unshare buffer, otherwise the blanking corrupts vga vram */
+        surface = qemu_create_displaysurface(s->last_scr_width, s->last_scr_height);
+        dpy_gfx_replace_surface(s->con, surface);
+    }
+
     w = s->last_scr_width * surface_bytes_per_pixel(surface);
     d = surface_data(surface);
     for(i = 0; i < s->last_scr_height; i++) {
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 13:15:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 13:15:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735755.1141863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTj-0003S8-0q; Wed, 05 Jun 2024 13:14:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735755.1141863; Wed, 05 Jun 2024 13:14:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqTi-0003S1-U5; Wed, 05 Jun 2024 13:14:58 +0000
Received: by outflank-mailman (input) for mailman id 735755;
 Wed, 05 Jun 2024 13:14:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oDqB=NH=redhat.com=kraxel@srs-se1.protection.inumbo.net>)
 id 1sEqTh-0003E5-OZ
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 13:14:57 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9881ad5b-233d-11ef-90a2-e314d9c70b13;
 Wed, 05 Jun 2024 15:14:55 +0200 (CEST)
Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com
 (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-257-tkqqa6s6PxClgnS5JezhrA-1; Wed,
 05 Jun 2024 09:14:49 -0400
Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com
 (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS
 id D88511955D7D; Wed,  5 Jun 2024 13:14:47 +0000 (UTC)
Received: from sirius.home.kraxel.org (unknown [10.39.192.217])
 by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS
 id 9315F1956086; Wed,  5 Jun 2024 13:14:46 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 364391800DCF; Wed,  5 Jun 2024 15:14:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9881ad5b-233d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717593292;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=NpWaLJawczwJqHAf2KqSeR/1UYWX96+8lZ1AEzU9pjg=;
	b=JVoP7A62CMDJU8MXpilr+bbm5fdIYQgIPZ3EwoxEGRQnsMvzNmGD3/KA5/GlIkCoUoB2ye
	L3q+2MJyQ2h5yuJg0pYTJygmweQ+GShQhKSWh5ag9rBfuDEuCo5aeB5Arpz8QuvftrbzhE
	HSx5MmSrppWIRRE6NYTzEWXgZp8+H1I=
X-MC-Unique: tkqqa6s6PxClgnS5JezhrA-1
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	=?UTF-8?q?Marc-Andr=C3=A9=20Lureau?= <marcandre.lureau@redhat.com>,
	Paul Durrant <paul@xen.org>,
	xen-devel@lists.xenproject.org,
	Gerd Hoffmann <kraxel@redhat.com>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH v3 0/3] stdvga: fix screen blanking
Date: Wed,  5 Jun 2024 15:14:40 +0200
Message-ID: <20240605131444.797896-1-kraxel@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15



Gerd Hoffmann (3):
  stdvga: fix screen blanking
  ui+display: rename is_placeholder() -> surface_is_placeholder()
  ui+display: rename is_buffer_shared() -> surface_is_allocated()

 include/ui/surface.h    |  6 +++---
 hw/display/qxl-render.c |  2 +-
 hw/display/vga.c        | 24 +++++++++++++++---------
 hw/display/xenfb.c      |  5 +++--
 ui/console.c            |  3 ++-
 ui/sdl2-2d.c            |  2 +-
 ui/sdl2-gl.c            |  2 +-
 7 files changed, 26 insertions(+), 18 deletions(-)

-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 13:27:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 13:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735777.1141893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqfi-0006uy-Jj; Wed, 05 Jun 2024 13:27:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735777.1141893; Wed, 05 Jun 2024 13:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEqfi-0006ur-Fc; Wed, 05 Jun 2024 13:27:22 +0000
Received: by outflank-mailman (input) for mailman id 735777;
 Wed, 05 Jun 2024 13:27:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OW4N=NH=redhat.com=mlureau@srs-se1.protection.inumbo.net>)
 id 1sEqfh-0006ul-5U
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 13:27:21 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53e288df-233f-11ef-90a2-e314d9c70b13;
 Wed, 05 Jun 2024 15:27:19 +0200 (CEST)
Received: from mail-ed1-f69.google.com (mail-ed1-f69.google.com
 [209.85.208.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-393-G0jeKpUlOxC9KhJDWL7tqA-1; Wed, 05 Jun 2024 09:27:14 -0400
Received: by mail-ed1-f69.google.com with SMTP id
 4fb4d7f45d1cf-57a9e2e43c0so457976a12.1
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 06:27:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53e288df-233f-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717594036;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=f9e4KwmmjiJ5xmaAO0+LStJyYZP1//KhEcep2QFkcTI=;
	b=YRhepYGQ89MVJb4rYm1ilpmmqU0u1RVcHcYRz45Z9q2RrwoIFDQRnf3P2StBpwyncmGdmH
	9HzoTwU4OyX4sUhnM5M+tzIKq51kbx8TJ/eG2NKPlfnFOj/GgCC9PC1a4gqxsYPiFxMQCw
	WqKYooRC7VDevw6TLteJEsGPkYu2Oe4=
X-MC-Unique: G0jeKpUlOxC9KhJDWL7tqA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717594031; x=1718198831;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=f9e4KwmmjiJ5xmaAO0+LStJyYZP1//KhEcep2QFkcTI=;
        b=OmQ7d7ynYpVdow0fT5PTrceM/9MptJALvf0b2bwBkQBBdssUeV0WtDAlzfXK/8hHaN
         GuHkNKjVrhPZ0rzVO/B+ZbBd1PJVJkmQkfLTxz2t8egVm/P4wLXHNbl4XV+c3zxecita
         YAE9Ba+OIv/wNVMH8fNIVIolksjEe05mYuUpvJ3U1ycXSb9ItVjqsIzXfzzsRk92uIO+
         IlogfUw9Qs892gphoPvPA6vhFL4IvvY2DPankVfb0RcpAgRlAXdB7F+El/ng45Qtq3n5
         qCnRu+uZ/piGXGB2CJTiXNJvIXjpctqfksoX8MQRk6bNKImwdMr+315cV16agk7/LDNV
         E3ZQ==
X-Forwarded-Encrypted: i=1; AJvYcCW47EGNwcUBR0XjvBWiHspkoIvpM6rmEEFmBHIz9koBrl574GH3ShIl+PaosoyTQuTEvdFzlKCIgJhewQJy8P/ftNkYZu0fPCKCBahr328=
X-Gm-Message-State: AOJu0YxrCcPnSmbZExinAG7jRkddnP4yjN5IJdyMY5pTH5qi6UotJVja
	BP+Qu6GRgvLS4UvXrDToNIPGNvTsWfdIAU+EacA4r03a0F4oQxqh9DSKmCoPhqu9ceEvNpf1q8d
	m45FmzcaU6qTnBCrTgNkP6pQiSTOJPSML8fSxdEAMEM593ULV/qfI8m+gpGyphz/365TYFkmI5U
	Js2GHwO7WV8VsWmBsSAtdzw2JGnJrwRaDQVvFXzAA=
X-Received: by 2002:a50:9e0f:0:b0:579:d34c:396a with SMTP id 4fb4d7f45d1cf-57a7a6c18bdmr4149077a12.11.1717594031345;
        Wed, 05 Jun 2024 06:27:11 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IHrqbKaA8JYmGJu9SWv44omNvgzVPehMk6+vUutUBatoaFuw/tX7cJUj872ybAZaprPmRtrqSdff3f+TbTs/Ww=
X-Received: by 2002:a50:9e0f:0:b0:579:d34c:396a with SMTP id
 4fb4d7f45d1cf-57a7a6c18bdmr4149067a12.11.1717594030978; Wed, 05 Jun 2024
 06:27:10 -0700 (PDT)
MIME-Version: 1.0
References: <20240605131444.797896-1-kraxel@redhat.com>
In-Reply-To: <20240605131444.797896-1-kraxel@redhat.com>
From: =?UTF-8?B?TWFyYy1BbmRyw6kgTHVyZWF1?= <marcandre.lureau@redhat.com>
Date: Wed, 5 Jun 2024 17:26:58 +0400
Message-ID: <CAMxuvawet2HKobd7RjQ3dG5bW17zuMTNMj_Zmoc-m==iizB8xQ@mail.gmail.com>
Subject: Re: [PATCH v3 0/3] stdvga: fix screen blanking
To: Gerd Hoffmann <kraxel@redhat.com>
Cc: qemu-devel@nongnu.org, Stefano Stabellini <sstabellini@kernel.org>, 
	Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org, 
	Anthony PERARD <anthony@xenproject.org>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Jun 5, 2024 at 5:14 PM Gerd Hoffmann <kraxel@redhat.com> wrote:
>
>
>
> Gerd Hoffmann (3):
>   stdvga: fix screen blanking
>   ui+display: rename is_placeholder() -> surface_is_placeholder()
>   ui+display: rename is_buffer_shared() -> surface_is_allocated()
>
>  include/ui/surface.h    |  6 +++---
>  hw/display/qxl-render.c |  2 +-
>  hw/display/vga.c        | 24 +++++++++++++++---------
>  hw/display/xenfb.c      |  5 +++--
>  ui/console.c            |  3 ++-
>  ui/sdl2-2d.c            |  2 +-
>  ui/sdl2-gl.c            |  2 +-
>  7 files changed, 26 insertions(+), 18 deletions(-)
>

for the series:
Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 15:04:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 15:04:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735788.1141902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEsBW-0004gH-GX; Wed, 05 Jun 2024 15:04:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735788.1141902; Wed, 05 Jun 2024 15:04:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEsBW-0004gA-Dv; Wed, 05 Jun 2024 15:04:18 +0000
Received: by outflank-mailman (input) for mailman id 735788;
 Wed, 05 Jun 2024 15:04:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEsBV-0004g0-Jw; Wed, 05 Jun 2024 15:04:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEsBV-00018I-Hx; Wed, 05 Jun 2024 15:04:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sEsBV-0006nR-8E; Wed, 05 Jun 2024 15:04:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sEsBV-0000Ta-7h; Wed, 05 Jun 2024 15:04:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1602y+2XO99ixOcngECfS9OVdP+1o0b+JwbIw/lbzig=; b=IAb749DktGDGsNLMRwj5PkKsaL
	xVIp3PqmuBwiOCM9dU7NUN+IaomwL/mS4ymBTCnX0n+ApiT4VcrtkFvoc7Cddfyi367etM8dVZwuP
	IBl33XY8Xr42eGP2oVjvXH4gRLXLlUQEMZhTYRiasYvH190R2ymQ5qSmwzpBYYwCQ8gI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186256-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186256: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=10cd8b45ce36152996bcb1520ba36107a8cdc63f
X-Osstest-Versions-That:
    ovmf=e2e09d8512898709a3d076fdd36c8abee2734027
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Jun 2024 15:04:17 +0000

flight 186256 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186256/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 10cd8b45ce36152996bcb1520ba36107a8cdc63f
baseline version:
 ovmf                 e2e09d8512898709a3d076fdd36c8abee2734027

Last test of basis   186254  2024-06-05 03:41:14 Z    0 days
Testing same since   186256  2024-06-05 12:44:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Neo Hsueh <Hong-Chih.Hsueh@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e2e09d8512..10cd8b45ce  10cd8b45ce36152996bcb1520ba36107a8cdc63f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 15:38:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 15:38:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735796.1141914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEsiD-0000Uf-3d; Wed, 05 Jun 2024 15:38:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735796.1141914; Wed, 05 Jun 2024 15:38:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEsiC-0000UY-VH; Wed, 05 Jun 2024 15:38:04 +0000
Received: by outflank-mailman (input) for mailman id 735796;
 Wed, 05 Jun 2024 15:38:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EJva=NH=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sEsiB-0000UQ-UB
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 15:38:03 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97a7c53a-2351-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 17:38:01 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-354f14bd80cso1845935f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 08:38:01 -0700 (PDT)
Received: from [172.20.145.106] ([62.28.225.65])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35dd04cac9asm15009131f8f.34.2024.06.05.08.37.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Jun 2024 08:38:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97a7c53a-2351-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717601881; x=1718206681; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Ym7yS0328xIlnOfuHV3e2nWXc5hhw5wVpnMaW2renGQ=;
        b=iiYUCycdCQexTx+BAAGu6PvMoKNTm0nV4BwUsevjN1owuVtpsSZX/0vH/LfnKQsH/I
         Kc9Q3T9N+dSw9iety3WLMM5H4wdt3CsUxIWxrG12AVdXP9/V9OACEQ1Y70sHXxeAaN1Q
         +RqQghFHhhuYp1t5nX0uMOTITz7jidJ+6e8RTcVRbSbCv7RWk9+LPCaNTgNcW25u0CO4
         EsOKcgSds7Og0+EezlBPpMr7qnRv8YjAy8ecRMsT9BzdleuNIU+isTu4eubHKWxXZYi0
         j0RmydlbnoVOmZDGhX50vxJcG1/z+KEfZDnmzefxt6FPrOUcQYQLwzhbNn+Ap7UwTeyH
         1JXw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717601881; x=1718206681;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Ym7yS0328xIlnOfuHV3e2nWXc5hhw5wVpnMaW2renGQ=;
        b=TWdnW+jV2WmnqZdsKRD0Gm75SO0CFG0nMj7XRBZH8i9MODdoCGNT3CKkEZY8x+YHNj
         Cbhspdoli6BteFJdOc8QxjLwvNNSIYrHGOlZ9iQlN+tvm28GFEbYGQJL8zB99C5ehUZS
         3c+jfqqQ50uRp7o3l5+WNgeJS+OEQcnsY5FehG+bH8W3oiRpWANIk03OpFivSCmfLXGu
         Qhrs32jFr3OZFFNv2lDAxVcGsmI9EuIpv1K8p4PsMT2q2XflBf3ufCWhz/FTDlErpTPz
         /IxuCcrl+ImGGkgimaL4Efy4AvgI8EocuMfq97z86VUkU3VYLSF3etT4uATcsZEr/3zH
         Z0Bg==
X-Gm-Message-State: AOJu0Yzq3cdt7J9akOBx8FhcgS812UlS/EGJyy6tefo82iMzYnqO+AlO
	O1AFr+r96/ep9ioqoGpZzT9aO+GBirXtanCEmLAOqTRueKSa/mvj
X-Google-Smtp-Source: AGHT+IG4P9lvgxwKbf2DlQZplABLJgH4D6fK78+E5+vYen2ih/l99muF0YbMMHS3mybqPWBdZ/GBlg==
X-Received: by 2002:a05:6000:400f:b0:354:c934:efa0 with SMTP id ffacd0b85a97d-35e8ef86d40mr2870062f8f.48.1717601880760;
        Wed, 05 Jun 2024 08:38:00 -0700 (PDT)
Message-ID: <d9252ecebbbc21f6498876ed845c32eea863cc80.camel@gmail.com>
Subject: Re: [XEN PATCH] automation/eclair_analysis: add more clean MISRA
 guidelines
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
	michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
	consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, Doug
 Goldstein <cardoe@cardoe.com>
Date: Wed, 05 Jun 2024 17:37:59 +0200
In-Reply-To: <06615fc65a59dbe950bc462030a54906@bugseng.com>
References: 
	<3af20044d2906a6f873aac1b6dd2b41c5b9e0507.1717269049.git.nicola.vetrini@bugseng.com>
	 <11c999212a75ea0f043e90128d5321b41a79c305.camel@gmail.com>
	 <06615fc65a59dbe950bc462030a54906@bugseng.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Tue, 2024-06-04 at 14:01 +0200, Nicola Vetrini wrote:
> On 2024-06-04 13:39, Oleksii K. wrote:
> > On Sat, 2024-06-01 at 21:13 +0200, Nicola Vetrini wrote:
> > > Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they
> > > are
> > > added
> > > to the list of clean guidelines.
> > >=20
> > > Some guidelines listed in the additional clean section for ARM
> > > are
> > > also
> > > clean on x86, so they can be removed from there.
> > >=20
> > > No functional change.
> > >=20
> > > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> > > ---
> > > +Cc Oleksii for an opinion on the inclusion for 4.19
> > >=20
> > > This is a follow-up to series
> > > https://lore.kernel.org/xen-devel/cover.1717236930.git.nicola.vetrini=
@bugseng.com/
> > > and depends on it (otherwise the gitlab MISRA analysis would fail
> > > on
> > > violations of Rule 20.12).
> > > If it is decided that the dependent series should go in for 4.19,
> > > then
> > > my suggestion is to include this as well, to gate on more
> > > guidelines.
> > > ---
> > I just want to clarify whether I understand you correctly. Do you
> > mean
> > that if the current patch is merged without the dependent series,
> > the gitlab MISRA analysis would fail? IIUC, then I am not sure that
> > we have
> > an option to take this patch without the dependent patch series.
> >=20
>=20
> Exactly, that's why I specified the dependency. This patch should
> have=20
> been part of the series, but I forgot to include it.
I am okay with considering these patches for Xen 4.19, but only if the
dependencies are resolved properly and the necessary Acks are
received.

~ Oleksii
>=20
> > ~ Oleksii
> > > =C2=A0automation/eclair_analysis/ECLAIR/tagging.ecl | 4 +++-
> > > =C2=A01 file changed, 3 insertions(+), 1 deletion(-)
> > >=20
> > > diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl
> > > b/automation/eclair_analysis/ECLAIR/tagging.ecl
> > > index a354ff322e03..b829655ca0bc 100644
> > > --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
> > > +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
> > > @@ -60,6 +60,7 @@ MC3R1.R11.7||
> > > =C2=A0MC3R1.R11.9||
> > > =C2=A0MC3R1.R12.5||
> > > =C2=A0MC3R1.R14.1||
> > > +MC3R1.R14.4||
> > > =C2=A0MC3R1.R16.7||
> > > =C2=A0MC3R1.R17.1||
> > > =C2=A0MC3R1.R17.3||
> > > @@ -73,6 +74,7 @@ MC3R1.R20.4||
> > > =C2=A0MC3R1.R20.6||
> > > =C2=A0MC3R1.R20.9||
> > > =C2=A0MC3R1.R20.11||
> > > +MC3R1.R20.12||
> > > =C2=A0MC3R1.R20.13||
> > > =C2=A0MC3R1.R20.14||
> > > =C2=A0MC3R1.R21.3||
> > > @@ -105,7 +107,7 @@ if(string_equal(target,"x86_64"),
> > > =C2=A0)
> > > =C2=A0
> > > =C2=A0if(string_equal(target,"arm64"),
> > > -=C2=A0=C2=A0=C2=A0
> > > service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3
> > > R1.R
> > > 16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.
> > > 3||M
> > > C3R1.R8.6||MC3R1.R9.3"})
> > > +=C2=A0=C2=A0=C2=A0
> > > service_selector({"additional_clean_guidelines","MC3R1.R16.6||MC3
> > > R1.R
> > > 2.1||MC3R1.R5.3||MC3R1.R7.3"})
> > > =C2=A0)
> > > =C2=A0
> > > =C2=A0-
> > > reports+=3D{clean:added,"service(clean_guidelines_common||additiona
> > > l_cl
> > > ean_guidelines)"}
>=20



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 15:39:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 15:39:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735801.1141922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEsjL-000106-BJ; Wed, 05 Jun 2024 15:39:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735801.1141922; Wed, 05 Jun 2024 15:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEsjL-0000zz-88; Wed, 05 Jun 2024 15:39:15 +0000
Received: by outflank-mailman (input) for mailman id 735801;
 Wed, 05 Jun 2024 15:39:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kNKw=NH=cloud.com=alex.brett@srs-se1.protection.inumbo.net>)
 id 1sEsjJ-0000zn-VP
 for xen-devel@lists.xen.org; Wed, 05 Jun 2024 15:39:14 +0000
Received: from mail-pj1-x102a.google.com (mail-pj1-x102a.google.com
 [2607:f8b0:4864:20::102a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c0e3cd60-2351-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 17:39:11 +0200 (CEST)
Received: by mail-pj1-x102a.google.com with SMTP id
 98e67ed59e1d1-2c1ab9e17f6so30417a91.1
 for <xen-devel@lists.xen.org>; Wed, 05 Jun 2024 08:39:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0e3cd60-2351-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717601950; x=1718206750; darn=lists.xen.org;
        h=to:subject:message-id:date:from:reply-to:mime-version:from:to:cc
         :subject:date:message-id:reply-to;
        bh=i0yzbyUBTrQd441rxXQik5Gk9in4UvsE2GoeY89+O74=;
        b=I1TXGOLkeuX1x74m+wQqkW6D5HVLQgRnuwkPepqtMLKnZa7Q/lP0Ix0qOstohdEzNZ
         ddbwdnJoLXWIsf99Wa9NKzeosMcr9eiYwAiKDzmeMfWqabQBSK+yvBUgP/O31lk51/o9
         q9jwL26ys6n2ZpNd0wp4N5UDs+jYsdJmQ8pgg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717601950; x=1718206750;
        h=to:subject:message-id:date:from:reply-to:mime-version
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=i0yzbyUBTrQd441rxXQik5Gk9in4UvsE2GoeY89+O74=;
        b=kOu+ZSgcMFvgps2sslvZFsNucUL+7Ylqeb5yLCqWutayrwhKQNBztqK/qGEL57cuU+
         aBufl6UK+vr9GnV/otP98nQ2iKKKPIGG6PaPY5KhUm2HsXJ6th5y96f0T54+7m4NlVOO
         Y0mx8/UDNq0h5vah5tdABbpb4h1Y4yTsWr4MwxEJAcmS6lntxdtpwPdtukAwgxw/B5SJ
         VIdovdQpNwtMEg1Qig0vwMnbmcLNToT7z66b0ZM+23ZDzE2MApFUj8vgPEu7iWzr8KrS
         x/op5BpEKoKYTbItY/+VPVA9FvdkwTbwE0PtcT6X8F7mNkJW+EI/YpF00u5hWWGdi/Ne
         MKqg==
X-Gm-Message-State: AOJu0YyguHLOhaFXFpGL5Qxh31Y8qnVeM+e0xFTSkva8QoUBgnzNQ6Y/
	GvgbvhOtuT7SczHh899G68aLrwiu72w5iAYuLXSMJQGfUsU8S/x8DJh/Udqm4qAosSJ0n7CPLTF
	Rh+8c5jNFxjacHPxlOGYPgpbUVqF83zvGvyrwXIHrI7hGYY5KXK8=
X-Google-Smtp-Source: AGHT+IEPqjLHp+jMjuC12pzbuFH6S5VNLrz/m+oYp6ZsLpAfynXVLNBFE4dv7l1cyBnQHOrYTyn1s3vlp/T0GSF0iAU=
X-Received: by 2002:a17:90a:a418:b0:2bf:9981:e0ae with SMTP id
 98e67ed59e1d1-2c27db5a46emr3167255a91.39.1717601949433; Wed, 05 Jun 2024
 08:39:09 -0700 (PDT)
MIME-Version: 1.0
Reply-To: alex.brett@cloud.com
From: Alex Brett <alex.brett@citrix.com>
Date: Wed, 5 Jun 2024 16:39:01 +0100
Message-ID: <CAMC3yeFCJX+C3ELHSbAKTJD-24NTxXk0kUg9VHjbKQXv41W=jg@mail.gmail.com>
Subject: Design session notes: guest unaware migration
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Content-Type: multipart/alternative; boundary="0000000000000fe83b061a265efc"

--0000000000000fe83b061a265efc
Content-Type: text/plain; charset="UTF-8"

Notes from the design session on guest unaware migration:

Migration is often needed for e.g. host maintenance. When this is done, we
(XenServer) see two classes of issues with guests:
  - Guest kernel crashes (relatively rare). Often detectable by the
toolstack and thus reported to the admin; distros generally take patches
quickly.

  - Guest userspace issues (more common).
    Primarily seen around networking - e.g. iptables rules get cleaned up
but not re-injected. This can break e.g. Kubernetes networking.
Some other examples involve clustered services (though it is not clear
whether this is the guest being aware of the migration or just a result of
the downtime).
Generally impossible for the toolstack to detect, so the admin is normally
unaware until users/monitoring complain.

It was also mentioned that NetBSD has issues with live migration around
suspend of the network interface.

Possible solutions
1. Do the migration in a way that the guest is entirely unaware of it
Amazon produced a proposal for this non-cooperative migration:
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob_plain;f=docs/designs/non-cooperative-migration.md;hb=HEAD
There are believed to be some older patch series on this.

Some notes from VM forking work that might be relevant:
  Some state was not saved as part of a regular VM save, so resuming the
VM didn't work in some cases - this state will likely need to be saved
when doing non-cooperative migration.
  Dumping / restoring qemu state worked for Windows, but Linux needed a
save, fork, restore, so there appears to be some sort of dependency there.

There is an issue around domids - in the proposal these are randomised, but
that still means certain destinations aren't possible (in Amazon's case
they just find a compatible target, but this is not necessarily an option
in server virt scenarios where the admin specifies where they want the VM
migrated to).
The domid is a 15-bit integer, so if you have < 32k VMs you could allocate
them centrally across a pool of servers.
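
As a rough illustration of the pool-wide allocation idea (a hypothetical
sketch only - `PoolDomidAllocator` is not an existing toolstack interface),
each host would draw guest domids from one shared 15-bit space so a
migrated VM can keep its domid on any destination in the pool:

```python
# Hypothetical sketch of central domid allocation across a pool.
# Domids are 15-bit (0..32767); domid 0 is dom0 on every host, so the
# shared allocator hands out guest domids from the remaining range.

DOMID_MAX = 1 << 15  # 32768 possible domids

class PoolDomidAllocator:
    """Allocates pool-unique domids so a VM keeps its domid across migration."""

    def __init__(self):
        self.in_use = {0}   # domid 0 reserved for dom0 on each host
        self.next_hint = 1  # next candidate to try

    def alloc(self):
        if len(self.in_use) >= DOMID_MAX:
            raise RuntimeError("pool domid space exhausted")
        d = self.next_hint
        while d in self.in_use:          # skip domids still in use anywhere
            d = (d + 1) % DOMID_MAX
        self.in_use.add(d)
        self.next_hint = (d + 1) % DOMID_MAX
        return d

    def free(self, domid):
        self.in_use.discard(domid)
```

The allocator would have to live in (or be synchronised across) the pool's
management layer rather than in any single host's toolstack.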

Could use non-cooperative migration where possible, but not expect it to
work everywhere (e.g. within a pool, but not cross-pool in a XenServer
example).

Alternative idea from Alejandro - could VMs be made to always think they
have a fixed domid (e.g. 1), with dom0 knowing the actual one and
e.g. xenstore translating?
  Suggestion to talk to Juergen, he may have thoughts on this.
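
A minimal sketch of the translation idea (hypothetical - the real
xenstore path layout and the dom0 side are more involved than this):
dom0 would rewrite the fixed guest-visible domid in xenstore paths to
the real per-host one, and back again on replies.

```python
# Hypothetical sketch: the guest always believes it is domid 1; dom0
# translates guest-visible xenstore paths to the real domid and back.

FIXED_GUEST_DOMID = 1

def to_real_path(path, real_domid):
    """Rewrite the guest's fixed-domid path to the real per-host path."""
    prefix = f"/local/domain/{FIXED_GUEST_DOMID}/"
    if path.startswith(prefix):
        return f"/local/domain/{real_domid}/" + path[len(prefix):]
    return path

def to_guest_path(path, real_domid):
    """Rewrite a real path back to what the guest expects to see."""
    prefix = f"/local/domain/{real_domid}/"
    if path.startswith(prefix):
        return f"/local/domain/{FIXED_GUEST_DOMID}/" + path[len(prefix):]
    return path
```

Domids embedded in xenstore *values* (not just paths), grant references,
and event channels would all need the same treatment, which is where the
"too much of a hack?" question below comes from.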

Could we use a UUID instead of domid in the protocols?
  A UUID is a large string/value that would appear in lots of xenstore
messages - could that cause problems?
  Does a VM need to know its domid (e.g. for giving to other guests to set
up grants), or could it be hidden?
  Is this too much of a hack?

If the guest is unaware, we still need to make sure the gratuitous ARP gets
sent after migration.
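
For reference, a gratuitous ARP announces the VM's MAC/IP binding so
switches and peers update their tables after the move. A sketch of what
such a frame contains (pure construction for illustration - actually
sending it needs a privileged raw socket, which is omitted here):

```python
import struct

def build_gratuitous_arp(mac, ip):
    """Build an Ethernet frame carrying a gratuitous ARP request:
    sender and target IP are both the VM's IP, destination is broadcast."""
    mac_b = bytes.fromhex(mac.replace(":", ""))
    ip_b = bytes(int(o) for o in ip.split("."))
    # Ethernet header: broadcast dst, VM src, EtherType 0x0806 (ARP)
    eth = b"\xff" * 6 + mac_b + struct.pack("!H", 0x0806)
    # ARP header: htype=Ethernet, ptype=IPv4, hlen=6, plen=4, op=1 (request)
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)
    arp += mac_b + ip_b            # sender MAC / sender IP
    arp += b"\x00" * 6 + ip_b      # target MAC unset, target IP == sender IP
    return eth + arp

frame = build_gratuitous_arp("00:16:3e:12:34:56", "10.0.0.5")
```

In the non-cooperative case this would have to be injected from the
backend/toolstack side, since the guest does not know it moved.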

There are other use cases for non-cooperative migration, which would
require not having anything custom in the VM.


2. Can we modify netfront so we don't generate the events (link down /
interface removed - not clear which?) across a migration, so that userspace
isn't aware even if the kernel is?
  This likely needs some code inspection to understand what's actually
happening here and what improvements are possible.

--0000000000000fe83b061a265efc--


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 20:27:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 20:27:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735823.1141933 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sExDo-0000W1-FH; Wed, 05 Jun 2024 20:27:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735823.1141933; Wed, 05 Jun 2024 20:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sExDo-0000Vu-CD; Wed, 05 Jun 2024 20:27:00 +0000
Received: by outflank-mailman (input) for mailman id 735823;
 Wed, 05 Jun 2024 20:26:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SHzc=NH=raptorengineering.com=sanastasio@srs-se1.protection.inumbo.net>)
 id 1sExDm-0000Vo-Nh
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 20:26:58 +0000
Received: from raptorengineering.com (mail.raptorengineering.com
 [23.155.224.40]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f23a187c-2379-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 22:26:54 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mail.rptsys.com (Postfix) with ESMTP id CEA7E82869FE;
 Wed,  5 Jun 2024 15:26:52 -0500 (CDT)
Received: from mail.rptsys.com ([127.0.0.1])
 by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 1iaxEAGURnUV; Wed,  5 Jun 2024 15:26:52 -0500 (CDT)
Received: from localhost (localhost [127.0.0.1])
 by mail.rptsys.com (Postfix) with ESMTP id EE18E8285ACC;
 Wed,  5 Jun 2024 15:26:51 -0500 (CDT)
Received: from mail.rptsys.com ([127.0.0.1])
 by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id j0uVPfe7oJ36; Wed,  5 Jun 2024 15:26:51 -0500 (CDT)
Received: from [10.11.0.2] (5.edge.rptsys.com [23.155.224.38])
 by mail.rptsys.com (Postfix) with ESMTPSA id F2C6E8285948;
 Wed,  5 Jun 2024 15:26:50 -0500 (CDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f23a187c-2379-11ef-b4bb-af5377834399
DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com EE18E8285ACC
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD;
	t=1717619212; bh=UHQKShSB8KrAmc93V/GRsQC9WMG9iNp7QrHeYLXfRm0=;
	h=Message-ID:Date:MIME-Version:To:From;
	b=aIJudotdqWy+hRpyOqWU4J2p5HmHpx00iIVWAgJb/nkSQtcVo/p6TegLKIinCMpBp
	 GwXR0ohU9XlVLNONJNqQqDzdhveiyN1MAwx8jhjxkduJ1gJkJr50dysOMEo47EY0iE
	 Q9eB/JwLuB0HeXCPrPZeq2Pvn8G4u+FFkolSGUdU=
X-Virus-Scanned: amavisd-new at rptsys.com
Message-ID: <72dff69f-2b20-4778-8a20-0f26408dc0dc@raptorengineering.com>
Date: Wed, 5 Jun 2024 15:26:50 -0500
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] arch/irq: Make irq_ack_none() mandatory
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240530184027.44609-1-andrew.cooper3@citrix.com>
 <20240530184027.44609-2-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Shawn Anastasio <sanastasio@raptorengineering.com>
In-Reply-To: <20240530184027.44609-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 5/30/24 1:40 PM, Andrew Cooper wrote:
> Any non-stub implementation of these is going to have to do something here.
> 
> irq_end_none() is more complicated and has arch-specific interactions with
> irq_ack_none(), so make it optional.
> 
> For PPC, introduce a stub irq_ack_none().
> 
> For ARM and x86, export the existing {ack,end}_none() helpers, gaining an irq_
> prefix for consistency with everything else in no_irq_type.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

For the PPC parts:

Acked-by: Shawn Anastasio <sanastasio@raptorengineering.com>

Thanks,
Shawn


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 20:28:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 20:28:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735827.1141942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sExEr-000111-NB; Wed, 05 Jun 2024 20:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735827.1141942; Wed, 05 Jun 2024 20:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sExEr-00010u-KO; Wed, 05 Jun 2024 20:28:05 +0000
Received: by outflank-mailman (input) for mailman id 735827;
 Wed, 05 Jun 2024 20:28:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SHzc=NH=raptorengineering.com=sanastasio@srs-se1.protection.inumbo.net>)
 id 1sExEq-00010k-45
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 20:28:04 +0000
Received: from raptorengineering.com (mail.raptorengineering.com
 [23.155.224.40]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ad934c6-237a-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 22:28:02 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mail.rptsys.com (Postfix) with ESMTP id 4F7AA8285ACC;
 Wed,  5 Jun 2024 15:28:01 -0500 (CDT)
Received: from mail.rptsys.com ([127.0.0.1])
 by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id La4VdzBDQAWB; Wed,  5 Jun 2024 15:28:00 -0500 (CDT)
Received: from localhost (localhost [127.0.0.1])
 by mail.rptsys.com (Postfix) with ESMTP id BF0F482869FE;
 Wed,  5 Jun 2024 15:28:00 -0500 (CDT)
Received: from mail.rptsys.com ([127.0.0.1])
 by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id MobF6J5eMZOq; Wed,  5 Jun 2024 15:28:00 -0500 (CDT)
Received: from [10.11.0.2] (5.edge.rptsys.com [23.155.224.38])
 by mail.rptsys.com (Postfix) with ESMTPSA id 4F11C8285ACC;
 Wed,  5 Jun 2024 15:28:00 -0500 (CDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ad934c6-237a-11ef-b4bb-af5377834399
DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com BF0F482869FE
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD;
	t=1717619280; bh=nT0TSE96TWclU2WO/Nun9iRIbaynReohYzDwTveB5w8=;
	h=Message-ID:Date:MIME-Version:To:From;
	b=OTotUUfEX1113DzgA844keWo/YBUmGCy76/cgIB5LJHEo5onFmzkNLS1ELu0jtxkx
	 gy01assCrVHjis4HXv84J4jAZtqlUH+qN1LYzFj/63XTD2n7YNaTZAUZrbnk91tCEW
	 Y+ArBspS8YYVL/AvqDoI2vnetQ3dYfF99VCLG6J4=
X-Virus-Scanned: amavisd-new at rptsys.com
Message-ID: <c4482613-35b5-48df-9154-0caab5c47b6d@raptorengineering.com>
Date: Wed, 5 Jun 2024 15:27:59 -0500
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] arch/irq: Centralise no_irq_type
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240530184027.44609-1-andrew.cooper3@citrix.com>
 <20240530184027.44609-3-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Shawn Anastasio <sanastasio@raptorengineering.com>
In-Reply-To: <20240530184027.44609-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 5/30/24 1:40 PM, Andrew Cooper wrote:
> Having no_irq_type defined per arch, but using common callbacks is a mess, and
> particularly hard to bootstrap a new architecture with.
> 
> Now that the ack()/end() hooks have been exported suitably, move the
> definition of no_irq_type into common/irq.c, and into .rodata for good
> measure.
> 
> No functional change, but a whole lot less tangled.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Shawn Anastasio <sanastasio@raptorengineering.com>

Thanks,
Shawn


From xen-devel-bounces@lists.xenproject.org Wed Jun 05 22:23:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 22:23:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735811.1141953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEz1p-0006rd-9d; Wed, 05 Jun 2024 22:22:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735811.1141953; Wed, 05 Jun 2024 22:22:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sEz1p-0006rW-6d; Wed, 05 Jun 2024 22:22:45 +0000
Received: by outflank-mailman (input) for mailman id 735811;
 Wed, 05 Jun 2024 16:55:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BUZb=NH=gmail.com=milandjokic1995@srs-se1.protection.inumbo.net>)
 id 1sEtuq-0003Hu-CP
 for xen-devel@lists.xenproject.org; Wed, 05 Jun 2024 16:55:12 +0000
Received: from mail-wm1-x32c.google.com (mail-wm1-x32c.google.com
 [2a00:1450:4864:20::32c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5db0cae7-235c-11ef-b4bb-af5377834399;
 Wed, 05 Jun 2024 18:55:09 +0200 (CEST)
Received: by mail-wm1-x32c.google.com with SMTP id
 5b1f17b1804b1-42155143cb0so682755e9.2
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 09:55:08 -0700 (PDT)
Received: from Xen-host.domain.local ([89.216.37.146])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42158111052sm28995205e9.16.2024.06.05.09.55.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 Jun 2024 09:55:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5db0cae7-235c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717606508; x=1718211308; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=J0v+4euvI1VeN0E+AOVIoBR9XyQW2czoSpmZlVUU7I4=;
        b=V11AdJB9L6Fca3r/rkBDcDqgOyK9iGYMyJy2y2k1ACUYkICkF/ivh6q/LyUV+mg9iO
         u/BYqzIjWBMHeL040tuSekCi89YILIOZ281NwRY2z8MW3SDLg6kI4cNZ4X5wl4lOV3FZ
         nE99Qe+JWPuAQ3x6J0ibWy7xJh5Xiv8RWoxgGUZKCCYMOCEwsINivnwOx9ZjyCQHLVG8
         OxrE75B/ZX6YPQIvLSwrBeL7qz29vqR0SuBizhX8Q6xTFgR4htPtS0GurVL7L2iE/mTI
         hYe+ezOR54wj+AXaw5cyeqBKNq2X948InkHhhlKIvT8v9EPvdnHGohYp1FkKTRJVtRIP
         QniQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717606508; x=1718211308;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=J0v+4euvI1VeN0E+AOVIoBR9XyQW2czoSpmZlVUU7I4=;
        b=liyckusy2vnbryl1sG6WQRVfN8fZhDkpfdtTKEtzzJlQo4FbTWdnLKwr+nrDS2+BJz
         2IHEiN11x44c3RQt56xjSwNljYMSk+5l7I0w1svn1SNIVXhVXrL9U3Oy9tnBwejfWRTE
         5NnJCmkptsYyQVJm6vGPynMqiqCDQfAo6oqcM/otBh7h2vKu+WAJkij8G+uoomKbFL1Z
         FVmEODtQMKOFqrjP8rHeIjqhN/AtmfwOHCsGv5tVY1k5mU6pRB55Hv5hIIVDsin928x/
         unpVh7CfbjWiO2NQyLREovGziYf275ngAnbrmjotOrb0FK5rtaJHrbO9Ey4H9jBn3hWI
         HRqQ==
X-Gm-Message-State: AOJu0YwvibZDE9PKmbUzloNFLsRAgx07PUvmogaU9vhaTsxeO6UZjbvA
	1ylChcPtsrSwfSwY0MkZ4JHiL+Nont1wpk2owlxLVxVQoJtgY+dLkIDSlxG7JQI=
X-Google-Smtp-Source: AGHT+IEf3/VgppoF9v6tYs0yYUiDzHeJp+jWAmWYjTK/bjPSTQaVQtitSyOk5A5iQ4LVUAKwULUmhg==
X-Received: by 2002:a05:600c:4f8e:b0:421:59b0:f364 with SMTP id 5b1f17b1804b1-42159b0f616mr13874815e9.40.1717606507830;
        Wed, 05 Jun 2024 09:55:07 -0700 (PDT)
From: milandjokic1995@gmail.com
To: xen-devel@lists.xenproject.org
Cc: milan.djokic@rt-rk.com,
	Nikola Jelic <nikola.jelic@rt-rk.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/riscv: PE/COFF image header for RISC-V target
Date: Wed,  5 Jun 2024 18:54:43 +0200
Message-Id: <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Nikola Jelic <nikola.jelic@rt-rk.com>

Extend the RISC-V Xen image with a PE/COFF header so that Xen can be
booted by popular bootloaders such as U-Boot. The image header is
included only when CONFIG_RISCV_EFI is enabled, so both a plain ELF
image and an image with the PE/COFF header can be generated as build
artifacts.

Tested on both QEMU and the StarFive VisionFive 2 with an
OpenSBI->U-Boot->Xen->dom0 boot chain.

Signed-off-by: Nikola Jelic <nikola.jelic@rt-rk.com>
---
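Note for reviewers: the 64-byte header emitted by head.S can be
sanity-checked against a built image with a small parser. The sketch
below is a hypothetical helper, not part of the patch; the flags/version
field interpretation follows the emission order in head.S (flags in the
res1 slot, version in the low 32 bits of the res2 slot).

```python
import struct

# struct riscv_image_header, all fields little-endian:
# code0, code1 (u32); text_offset, image_size, res1..res3, magic (u64);
# res4, res5 (u32) -- 64 bytes total.
HEADER_STRUCT = struct.Struct("<2I6Q2I")

RISCV_IMAGE_MAGIC = b"RISCV\x00\x00\x00"
RISCV_IMAGE_MAGIC2 = b"RSC\x05"

def parse_image_header(blob):
    """Validate the magics and return the interesting header fields."""
    (code0, code1, text_offset, image_size,
     flags, version, res3, magic, magic2, res5) = HEADER_STRUCT.unpack_from(blob, 0)
    if magic.to_bytes(8, "little") != RISCV_IMAGE_MAGIC:
        raise ValueError("bad RISC-V image magic")
    if magic2.to_bytes(4, "little") != RISCV_IMAGE_MAGIC2:
        raise ValueError("bad RISC-V image magic2")
    return {
        "text_offset": text_offset,
        "image_size": image_size,
        # RISCV_HEADER_VERSION packs major<<16 | minor.
        "version": ((version >> 16) & 0xffff, version & 0xffff),
    }
```

Running it against the first 64 bytes of an image built with
CONFIG_RISCV_EFI should report text_offset 0x200000 on RV64 and header
version (0, 2).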
 xen/arch/riscv/Kconfig             |  9 +++++
 xen/arch/riscv/include/asm/image.h | 62 ++++++++++++++++++++++++++++++
 xen/arch/riscv/riscv64/head.S      | 33 +++++++++++++++-
 3 files changed, 103 insertions(+), 1 deletion(-)
 create mode 100644 xen/arch/riscv/include/asm/image.h

diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
index f382b36f6c..59bf5aa2a6 100644
--- a/xen/arch/riscv/Kconfig
+++ b/xen/arch/riscv/Kconfig
@@ -9,6 +9,15 @@ config ARCH_DEFCONFIG
 	string
 	default "arch/riscv/configs/tiny64_defconfig"
 
+config RISCV_EFI
+	bool "UEFI boot service support"
+	depends on RISCV_64
+	default n
+	help
+	  This option provides support for boot services through
+	  UEFI firmware. A UEFI stub is provided to allow Xen to
+	  be booted as an EFI application.
+
 menu "Architecture Features"
 
 source "arch/Kconfig"
diff --git a/xen/arch/riscv/include/asm/image.h b/xen/arch/riscv/include/asm/image.h
new file mode 100644
index 0000000000..b379246290
--- /dev/null
+++ b/xen/arch/riscv/include/asm/image.h
@@ -0,0 +1,62 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+
+
+#ifndef _ASM_RISCV_IMAGE_H
+#define _ASM_RISCV_IMAGE_H
+
+#define RISCV_IMAGE_MAGIC	"RISCV\0\0\0"
+#define RISCV_IMAGE_MAGIC2	"RSC\x05"
+
+#define RISCV_IMAGE_FLAG_BE_SHIFT	0
+#define RISCV_IMAGE_FLAG_BE_MASK	0x1
+
+#define RISCV_IMAGE_FLAG_LE		0
+#define RISCV_IMAGE_FLAG_BE		1
+
+#ifdef CONFIG_CPU_BIG_ENDIAN
+#error conversion of header fields to LE not yet implemented
+#else
+#define __HEAD_FLAG_BE		RISCV_IMAGE_FLAG_LE
+#endif
+
+#define __HEAD_FLAG(field)	(__HEAD_FLAG_##field << \
+				RISCV_IMAGE_FLAG_##field##_SHIFT)
+
+#define __HEAD_FLAGS		(__HEAD_FLAG(BE))
+
+#define RISCV_HEADER_VERSION_MAJOR 0
+#define RISCV_HEADER_VERSION_MINOR 2
+
+#define RISCV_HEADER_VERSION (RISCV_HEADER_VERSION_MAJOR << 16 | \
+			      RISCV_HEADER_VERSION_MINOR)
+
+#ifndef __ASSEMBLY__
+/*
+ * struct riscv_image_header - RISC-V Xen image header
+ *
+ * @code0:		Executable code
+ * @code1:		Executable code
+ * @text_offset:	Image load offset
+ * @image_size:		Effective image size
+ * @res1:		Reserved
+ * @res2:		Reserved
+ * @res3:		Reserved
+ * @magic:		Magic number
+ * @res4:		Reserved
+ * @res5:		Reserved (will be used for the PE/COFF offset)
+ */
+
+struct riscv_image_header {
+	u32 code0;
+	u32 code1;
+	u64 text_offset;
+	u64 image_size;
+	u64 res1;
+	u64 res2;
+	u64 res3;
+	u64 magic;
+	u32 res4;
+	u32 res5;
+};
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_RISCV_IMAGE_H */
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 3261e9fce8..0edd35b20f 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,14 +1,40 @@
 #include <asm/asm.h>
 #include <asm/riscv_encoding.h>
+#include <asm/image.h>
 
         .section .text.header, "ax", %progbits
 
         /*
          * OpenSBI pass to start():
          *   a0 -> hart_id ( bootcpu_id )
-         *   a1 -> dtb_base 
+         *   a1 -> dtb_base
          */
 FUNC(start)
+#ifdef CONFIG_RISCV_EFI
+        j xen_start
+
+        /* ----------- Header -------------- */
+        .word 0
+        .balign 8
+#if __riscv_xlen == 64
+        /* Image load offset (2MB) from start of RAM */
+        .dword 0x200000
+#else
+        /* Image load offset (4MB) from start of RAM */
+        .dword 0x400000
+#endif
+        /* Effective size of Xen image */
+        .dword _end - _start
+        .dword __HEAD_FLAGS
+        .word RISCV_HEADER_VERSION
+        .word 0
+        .dword 0
+        .ascii RISCV_IMAGE_MAGIC
+        .balign 4
+        .ascii RISCV_IMAGE_MAGIC2
+
+FUNC(xen_start)
+#endif
         /* Mask all interrupts */
         csrw    CSR_SIE, zero
 
@@ -60,6 +86,11 @@ FUNC(start)
         mv      a1, s1
 
         tail    start_xen
+
+#ifdef CONFIG_RISCV_EFI
+END(xen_start)
+#endif
+
 END(start)
 
         .section .text, "ax", %progbits
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 05 23:47:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 Jun 2024 23:47:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735847.1141962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF0L3-0007xl-Br; Wed, 05 Jun 2024 23:46:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735847.1141962; Wed, 05 Jun 2024 23:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF0L3-0007xe-97; Wed, 05 Jun 2024 23:46:41 +0000
Received: by outflank-mailman (input) for mailman id 735847;
 Wed, 05 Jun 2024 23:46:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0L1-0007xS-CX; Wed, 05 Jun 2024 23:46:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0L1-0003Ed-5k; Wed, 05 Jun 2024 23:46:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0L0-0005Rl-RH; Wed, 05 Jun 2024 23:46:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0L0-0007td-Qo; Wed, 05 Jun 2024 23:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fvI/gnIUg4kQo6fCPJujr8ZZPBDZZEHnfbp2+crXVQY=; b=NaqLAi3O1ubiJ1cIcKIj/90K79
	Ta6cy2pMgHGYpQT3qtZbkan0QBiMvgDPvJkhLm0wqeUCIgFrfzKxH0K0RGjkwfwn4Nk7s7LE80/us
	1/F6XoIVZ7/cRpz8PA6Dq2hvu4Anm69g5e4uWMrOpo+de8+lHPlPvCzdfGk/dqyIOYss=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186257-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186257: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:host-ping-check-xen:fail:allowable
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=71d7b52cc33bc3b6697cce8a0a5ac9032f372e47
X-Osstest-Versions-That:
    linux=32f88d65f01bf6f45476d7edbe675e44fb9e1d58
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 Jun 2024 23:46:38 +0000

flight 186257 linux-linus real [real]
flight 186258 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186257/
http://logs.test-lab.xenproject.org/osstest/logs/186258/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-qcow2     8 xen-boot            fail pass in 186258-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     10 host-ping-check-xen      fail REGR. vs. 186250

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-qcow2   14 migrate-support-check fail in 186258 never pass
 test-armhf-armhf-xl-qcow2 15 saverestore-support-check fail in 186258 never pass
 test-armhf-armhf-xl           8 xen-boot                     fail  like 186250
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186250
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186250
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186250
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186250
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186250
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186250
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186250
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                71d7b52cc33bc3b6697cce8a0a5ac9032f372e47
baseline version:
 linux                32f88d65f01bf6f45476d7edbe675e44fb9e1d58

Last test of basis   186250  2024-06-04 21:12:01 Z    1 days
Testing same since   186257  2024-06-05 16:10:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anup Patel <anup@brainfault.org>
  Anup Patel <apatel@ventanamicro.com>
  Conor Dooley <conor.dooley@microchip.com>
  Fuad Tabba <tabba@google.com>
  Isaku Yamahata <isaku.yamahata@intel.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marc Zyngier <maz@kernel.org>
  Nikunj A Dadhania <nikunj@amd.com>
  Oliver Upton <oliver.upton@linux.dev>
  Paolo Bonzini <pbonzini@redhat.com>
  Quan Zhou <zhouquan@iscas.ac.cn>
  Ravi Bangoria <ravi.bangoria@amd.com>
  Rob Herring (Arm) <robh@kernel.org>
  Sean Christopherson <seanjc@google.com>
  Tao Su <tao1.su@linux.intel.com>
  Yong-Xuan Wang <yongxuan.wang@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   32f88d65f01b..71d7b52cc33b  71d7b52cc33bc3b6697cce8a0a5ac9032f372e47 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 00:19:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 00:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735859.1141973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF0qP-00040R-A8; Thu, 06 Jun 2024 00:19:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735859.1141973; Thu, 06 Jun 2024 00:19:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF0qP-00040K-7V; Thu, 06 Jun 2024 00:19:05 +0000
Received: by outflank-mailman (input) for mailman id 735859;
 Thu, 06 Jun 2024 00:19:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0qN-00040A-G9; Thu, 06 Jun 2024 00:19:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0qN-0004SE-Dl; Thu, 06 Jun 2024 00:19:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0qN-0006D6-24; Thu, 06 Jun 2024 00:19:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sF0qN-0008SJ-1e; Thu, 06 Jun 2024 00:19:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wSqcDAMk6I3+V57lptTSwCLgHyBLCa0YB0XZqp2dzF8=; b=PStgLMYgaxCInVG8oFI+tCh1Px
	WWKRZY7bdudMqrpPJbfq0uztfGghQSAq518BLobbT5oCmccIU/Hb1pQtGnggsBTTrJjz3CPyJmv5a
	zWufTROGkt70r4PBxx7m4eAB5a7Q0bz4acXPhJ5ge2Xfgm6GaG3nU5eQ5UyD7g2WQ+x4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186259-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186259: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=10ab1c67c489942b787b784a48d46623b442cfd1
X-Osstest-Versions-That:
    ovmf=10cd8b45ce36152996bcb1520ba36107a8cdc63f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 00:19:03 +0000

flight 186259 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186259/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 10ab1c67c489942b787b784a48d46623b442cfd1
baseline version:
 ovmf                 10cd8b45ce36152996bcb1520ba36107a8cdc63f

Last test of basis   186256  2024-06-05 12:44:34 Z    0 days
Testing same since   186259  2024-06-05 22:11:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chao Li <lichao@loongson.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   10cd8b45ce..10ab1c67c4  10ab1c67c489942b787b784a48d46623b442cfd1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 05:09:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 05:09:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735881.1141983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF5NI-0001XJ-Uj; Thu, 06 Jun 2024 05:09:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735881.1141983; Thu, 06 Jun 2024 05:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF5NI-0001XC-Qm; Thu, 06 Jun 2024 05:09:20 +0000
Received: by outflank-mailman (input) for mailman id 735881;
 Thu, 06 Jun 2024 05:09:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF5NH-0001X2-GS; Thu, 06 Jun 2024 05:09:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF5NH-0001DE-D7; Thu, 06 Jun 2024 05:09:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF5NH-0002g7-53; Thu, 06 Jun 2024 05:09:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sF5NH-0000k3-4e; Thu, 06 Jun 2024 05:09:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7+6v1S2w02tNByKPLNpsWdT9RO9iTdTpZRmFDsZAY9c=; b=N/W5H0zbiTCN/Agw1LGRVQQKhF
	trMCvmsWho4vfTX03LbTArZN5AApWNTkvUIpRYpPH/EMRc37gr38VCCisPkBxyFr5sl6XKpsmef3/
	9gW5TvHFxljDTUJVdCsoviEePIa8Re7uXg6elR6vHrhDQF1tTuoL4OqXVE/0Fw7Oxtik=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186262-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186262: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b45aff0dc9cb87f316eb17a11e5d4438175d9cca
X-Osstest-Versions-That:
    ovmf=10ab1c67c489942b787b784a48d46623b442cfd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 05:09:19 +0000

flight 186262 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186262/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b45aff0dc9cb87f316eb17a11e5d4438175d9cca
baseline version:
 ovmf                 10ab1c67c489942b787b784a48d46623b442cfd1

Last test of basis   186259  2024-06-05 22:11:07 Z    0 days
Testing same since   186262  2024-06-06 02:42:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   10ab1c67c4..b45aff0dc9  b45aff0dc9cb87f316eb17a11e5d4438175d9cca -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 05:48:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 05:48:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735891.1141993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF5ye-0006DQ-RI; Thu, 06 Jun 2024 05:47:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735891.1141993; Thu, 06 Jun 2024 05:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF5ye-0006DJ-Na; Thu, 06 Jun 2024 05:47:56 +0000
Received: by outflank-mailman (input) for mailman id 735891;
 Thu, 06 Jun 2024 05:47:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OoW9=NI=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sF5yc-0006DD-V0
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 05:47:54 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 502ca8ec-23c8-11ef-b4bb-af5377834399;
 Thu, 06 Jun 2024 07:47:52 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 3DD591F8A3;
 Thu,  6 Jun 2024 05:47:51 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 8637513A7B;
 Thu,  6 Jun 2024 05:47:50 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id RHutHIZNYWbSMgAAD6G6ig
 (envelope-from <jgross@suse.com>); Thu, 06 Jun 2024 05:47:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 502ca8ec-23c8-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1717652871; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=HJD9NQU4GT2PqKcUt8lgFTWuNUHKLP/7bYpY6Dky8iA=;
	b=Pud4WLzeP7Kt3B/B5ROkKS9x86OXynbB+r8fq6Aq77WBJKnoBHsS+GGLJvkTOh+CMgdU+0
	y+yLR1koUfKl1bQ8LnJDsrVFWxWVY2ZuI6QrLg5kXxLSUtKaFN0ZVKabBTC8P3uEe47V0D
	Z9hpqp2nYuPxnChCO6krWmNYfkBoecQ=
Authentication-Results: smtp-out2.suse.de;
	dkim=pass header.d=suse.com header.s=susede1 header.b=Pud4WLze
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1717652871; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=HJD9NQU4GT2PqKcUt8lgFTWuNUHKLP/7bYpY6Dky8iA=;
	b=Pud4WLzeP7Kt3B/B5ROkKS9x86OXynbB+r8fq6Aq77WBJKnoBHsS+GGLJvkTOh+CMgdU+0
	y+yLR1koUfKl1bQ8LnJDsrVFWxWVY2ZuI6QrLg5kXxLSUtKaFN0ZVKabBTC8P3uEe47V0D
	Z9hpqp2nYuPxnChCO6krWmNYfkBoecQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] MAINTAINERS: add an entry for tools/9pfsd
Date: Thu,  6 Jun 2024 07:47:44 +0200
Message-Id: <20240606054745.23555-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Spam-Level: 
X-Spamd-Result: default: False [-0.28 / 50.00];
	MID_CONTAINS_FROM(1.00)[];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	R_MISSING_CHARSET(0.50)[];
	BAYES_HAM(-0.27)[74.15%];
	R_DKIM_ALLOW(-0.20)[suse.com:s=susede1];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	MX_GOOD(-0.01)[];
	ARC_NA(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	DKIM_SIGNED(0.00)[suse.com:s=susede1];
	MIME_TRACE(0.00)[0:+];
	RCVD_TLS_ALL(0.00)[];
	DKIM_TRACE(0.00)[suse.com:+];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	TO_DN_SOME(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	RCPT_COUNT_SEVEN(0.00)[8];
	DNSWL_BLOCKED(0.00)[2a07:de40:b281:104:10:150:64:97:from,2a07:de40:b281:106:10:150:64:167:received];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:dkim,suse.com:email]
X-Spam-Flag: NO
X-Spam-Score: -0.28
X-Spamd-Bar: /
X-Rspamd-Queue-Id: 3DD591F8A3
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Action: no action

Add me as the maintainer for the tools/9pfsd directory.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 076cf1e141..28fb35582b 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -206,6 +206,12 @@ Maintainers List (try to look for most precise areas first)
 
 		-----------------------------------
 
+9PFSD
+M:	Juergen Gross <jgross@suse.com>
+M:	Anthony PERARD <anthony.perard@citrix.com>
+S:	Supported
+F:	tools/9pfsd
+
 ACPI
 M:	Jan Beulich <jbeulich@suse.com>
 S:	Supported
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 06 05:48:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 05:48:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735895.1142003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF5zF-0006gU-1l; Thu, 06 Jun 2024 05:48:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735895.1142003; Thu, 06 Jun 2024 05:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF5zE-0006gN-V5; Thu, 06 Jun 2024 05:48:32 +0000
Received: by outflank-mailman (input) for mailman id 735895;
 Thu, 06 Jun 2024 05:48:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OoW9=NI=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sF5zD-0006DD-B7
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 05:48:31 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66b2d32b-23c8-11ef-b4bb-af5377834399;
 Thu, 06 Jun 2024 07:48:29 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4D46D21954;
 Thu,  6 Jun 2024 05:48:29 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id AF75A13A7B;
 Thu,  6 Jun 2024 05:48:28 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id EGSgJ6xNYWb5MgAAD6G6ig
 (envelope-from <jgross@suse.com>); Thu, 06 Jun 2024 05:48:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66b2d32b-23c8-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1717652909; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kvMczZcb48JjtaatB5WOn8l5TupQJgv8S3WnQ++h5YU=;
	b=G8D1bG0m60lW3zAuQ0f6qT4KruSxjjO/DWJoVCM5uB8e2JfQxpzetq8ae8mJtEBkeiq68p
	+GTrC+FmVF0Eez5a8VrLVYR/d5QPVbnwBtordrt1HUrk2FYbK6Uv6E/LIwb87mFIeULvf+
	OyGMyaijHtZwrAddQvfsSaJHSxlbFPE=
Authentication-Results: smtp-out1.suse.de;
	dkim=pass header.d=suse.com header.s=susede1 header.b=G8D1bG0m
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1717652909; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kvMczZcb48JjtaatB5WOn8l5TupQJgv8S3WnQ++h5YU=;
	b=G8D1bG0m60lW3zAuQ0f6qT4KruSxjjO/DWJoVCM5uB8e2JfQxpzetq8ae8mJtEBkeiq68p
	+GTrC+FmVF0Eez5a8VrLVYR/d5QPVbnwBtordrt1HUrk2FYbK6Uv6E/LIwb87mFIeULvf+
	OyGMyaijHtZwrAddQvfsSaJHSxlbFPE=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] MAINTAINERS: add me as scheduler maintainer
Date: Thu,  6 Jun 2024 07:47:45 +0200
Message-Id: <20240606054745.23555-2-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
In-Reply-To: <20240606054745.23555-1-jgross@suse.com>
References: <20240606054745.23555-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Spam-Level: 
X-Spamd-Result: default: False [-0.09 / 50.00];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	MID_CONTAINS_FROM(1.00)[];
	R_MISSING_CHARSET(0.50)[];
	R_DKIM_ALLOW(-0.20)[suse.com:s=susede1];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	BAYES_HAM(-0.08)[63.53%];
	MX_GOOD(-0.01)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	MIME_TRACE(0.00)[0:+];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	RCPT_COUNT_SEVEN(0.00)[7];
	ARC_NA(0.00)[];
	DNSWL_BLOCKED(0.00)[2a07:de40:b281:106:10:150:64:167:received,2a07:de40:b281:104:10:150:64:97:from];
	DKIM_SIGNED(0.00)[suse.com:s=susede1];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	TO_DN_SOME(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:dkim,suse.com:email];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DKIM_TRACE(0.00)[suse.com:+]
X-Spam-Flag: NO
X-Spam-Score: -0.09
X-Spamd-Bar: /
X-Rspamd-Queue-Id: 4D46D21954
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Action: no action

I've been active in the scheduling code for many years now. Add
me as a maintainer.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 MAINTAINERS | 1 +
 1 file changed, 1 insertion(+)

diff --git a/MAINTAINERS b/MAINTAINERS
index 6ba7d2765f..cc40c0be9d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -490,6 +490,7 @@ F:	xen/common/sched/rt.c
 SCHEDULING
 M:	George Dunlap <george.dunlap@citrix.com>
 M:	Dario Faggioli <dfaggioli@suse.com>
+M:	Juergen Gross <jgross@suse.com>
 S:	Supported
 F:	xen/common/sched/
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Thu Jun 06 05:49:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 05:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735899.1142014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF60I-0007Rk-Cf; Thu, 06 Jun 2024 05:49:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735899.1142014; Thu, 06 Jun 2024 05:49:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF60I-0007Rd-7q; Thu, 06 Jun 2024 05:49:38 +0000
Received: by outflank-mailman (input) for mailman id 735899;
 Thu, 06 Jun 2024 05:49:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OoW9=NI=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sF60H-0007RX-8j
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 05:49:37 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8e4b8130-23c8-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 07:49:36 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id
 5b1f17b1804b1-4213b94b7e7so2689995e9.0
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 22:49:36 -0700 (PDT)
Received: from [172.31.5.161] ([62.48.185.238])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c2cd2d3sm9224805e9.41.2024.06.05.22.49.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 Jun 2024 22:49:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e4b8130-23c8-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717652975; x=1718257775; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=j0ye9TSuckwM/Bjmu9gBLt92YyAeVfRItWoKaOi3lNA=;
        b=H/gMIE6Nd2JELbe1f9rk0vADbmgZap0F7mgDOkIZOp0W3vhYq/zvXVkL+iXef1W8Wh
         zkHRcCWxvzr44P5IDhwHp4Ezb5lHHt5+EB30cwTmyx5geUL19tdkrpF0STtbGv21UdhT
         E/v6b/SMX0LEDylaq80Lt363x7/0LWu7sfCygoJ/nQb5NwiToVmvQXfBaL78LWkPhWVr
         nVeg5jQVj12uG9UQc17aZPJHQDO7PSaEAIV658E4xzTi1CBYp6tWGaYyoNplM1nnoCYo
         JMdwGrbc/NZyNDTEPe/67xmBhZshmww6UbEwk2CkgDFP0y6XqzIOUrWbzJ6lYplTXDLU
         MH1w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717652975; x=1718257775;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=j0ye9TSuckwM/Bjmu9gBLt92YyAeVfRItWoKaOi3lNA=;
        b=OaiOCmDoLcby+w6XvybYbqZu0LKcxzqINBc3EWV4yBEial8Eyds6WpNDad9bIna9JO
         yVjSAGDlzr42yhPQoOsN09p9Pve5mGZhIaUIGVVHue1F+fUHtq49ssc5/xegD+PHKUlX
         wTs5FtRWojlVk4At0OnT/rrYoEU49jT2YNUR7GGvsnIDWXIhUalqyYeyWT+8XiEDgMck
         YWzIQkyElE4rVDmDiNZ2/cngG1D0P8wFOpVIPyp+z25XUg6LOk0ONxjqLjS8sCbgkR7b
         nNYRwHZNyRwpHLa/FnG+CrGpcpomtnHSWjvP7pCMqch7Px2DE+D3NUT0+ojVMqdaTb2q
         Vjmw==
X-Gm-Message-State: AOJu0Yys3tNzeCjqx08MMKmKJFmTYDAPsXWpx0LKqMdaUS0a7ulhtbsS
	P/75k5IPP6TzlLu82bGcQKlDdi4KIp5fl1ufXB4+97BMeeBzI2XR2Q3HSM+JdaTBHI7tTTY/bQh
	e
X-Google-Smtp-Source: AGHT+IFwoqrBssLvw5bS/mpCJLSxJ4haSHtC/eQ5LcSg4PGrltqXP/0AQ/I/46synaOY2Dd8Xzqimw==
X-Received: by 2002:a05:600c:35c2:b0:421:4b0a:5006 with SMTP id 5b1f17b1804b1-4215acd81e0mr13002445e9.7.1717652975404;
        Wed, 05 Jun 2024 22:49:35 -0700 (PDT)
Message-ID: <710109de-7d8c-4749-9603-70f4540e7825@suse.com>
Date: Thu, 6 Jun 2024 07:49:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] MAINTAINERS: add an entry for tools/9pfsd
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <20240606054745.23555-1-jgross@suse.com>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <20240606054745.23555-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 06.06.24 07:47, Juergen Gross wrote:
> Add me as the maintainer for the tools/9pfsd directory.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   MAINTAINERS | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 076cf1e141..28fb35582b 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -206,6 +206,12 @@ Maintainers List (try to look for most precise areas first)
>   
>   		-----------------------------------
>   
> +9PFSD
> +M:	Juergen Gross <jgross@suse.com>
> +M:	Anthony PERARD <anthony.perard@citrix.com>
> +S:	Supported
> +F:	tools/9pfsd
> +
>   ACPI
>   M:	Jan Beulich <jbeulich@suse.com>
>   S:	Supported

Please ignore, this was sent already long ago.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 06:14:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 06:14:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735907.1142023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF6OS-00037W-82; Thu, 06 Jun 2024 06:14:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735907.1142023; Thu, 06 Jun 2024 06:14:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF6OS-00037P-4l; Thu, 06 Jun 2024 06:14:36 +0000
Received: by outflank-mailman (input) for mailman id 735907;
 Thu, 06 Jun 2024 06:14:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sF6OQ-00037J-Oc
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 06:14:34 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a297257-23cc-11ef-b4bb-af5377834399;
 Thu, 06 Jun 2024 08:14:32 +0200 (CEST)
Received: by mail-wr1-x432.google.com with SMTP id
 ffacd0b85a97d-35dcff36522so614717f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 23:14:32 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35ef5e96c15sm644603f8f.75.2024.06.05.23.14.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 Jun 2024 23:14:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a297257-23cc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717654472; x=1718259272; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=kRKrKhUC+jacDQ6vyLo09CgzUBl80TIrI9om95rMHjY=;
        b=W6WLq8xEqYum0C3domXE8AVrlrChQZaN7Tvfl9GoJko83Br+8swFYU7oIv61LU39yJ
         b4oKZsRphlK6+BI56YMBQ684HOid6L4dMHsYi8ee4dEmlkaBJ7LSasKnUgrtAFnk8GO3
         vf34Br10zWmXZR1U58HVs7z/mZy+GwMvhF/TKYjQ2A3OOAtIVjdkThzYYaRwqtgBAiTv
         lXjcv18yQxHBzqGc4l5C19hta/+yrzC8ZbuVnzkl5XYbBe2MUzwX/Sif6Uj1h8jaFNtx
         wJiXyM9UXpUGVYpr8URdkYNgIMx4YMH88+PWEnigx5gzIf1P4nf1gxvqvzwzXjgYvkXN
         ctjA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717654472; x=1718259272;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=kRKrKhUC+jacDQ6vyLo09CgzUBl80TIrI9om95rMHjY=;
        b=vMJWh/3dtuMyWnMKC7HOGOSCEGfDocX4oVfcnHDQuTPir3zXu7cY5OmwBD38vHtSTJ
         LFrpnqt6PfgQ8u0CkxpqXqnvcmBECvDZood3KPJgwHXhp5VdxsZdhMo5dQABaBHErGqo
         tnqPSANqZ076CWQtFScXFblVCIFPTygGaZhZQZ/xY92JElCRRnfPIQ+RaGg/BthWLFad
         a2xbapck+5IpUmlhYUP6xLVHHyji3Ndpxl7ntNYptBeRFxyDQlD0PTtwey9oYlBqSEWz
         aZ8PYagL6wXRjh4DkQO1ViM8L98cmOO4OaT3FAnsxvWynqz36bPxwNDwekfwHfeGyKwe
         uuKA==
X-Forwarded-Encrypted: i=1; AJvYcCXg/Y4lvIvIol7K0wH+41+hERr4Ov3I1YTz04osnYaxd+WRU9ZSYP3A0soWp+KgmEe3qCwUzuLEgchnBqe5cdd2SCglktdC2lcRJ6beV6Q=
X-Gm-Message-State: AOJu0Yy0C5otPZKgWixqR0zWLvB2HeYHUtC+s7P7Pn4fwRABH3Q6a2/G
	O3DGretbcYBKfCLwva1La9MS75ROYu7DRgdkmWyTaJU9atWg0pZehuS+nX67+w==
X-Google-Smtp-Source: AGHT+IEI0uoyaFiPKkVpQ7TkAvRlOxIZZCvxldIDqD8mPU9mkQzgsjMUmWfk+rR1Q4J+of6VyaMtaQ==
X-Received: by 2002:a5d:4b0a:0:b0:354:e746:7515 with SMTP id ffacd0b85a97d-35e8ef15cd1mr3061975f8f.34.1717654471751;
        Wed, 05 Jun 2024 23:14:31 -0700 (PDT)
Message-ID: <053fe7c5-4f91-450c-b00d-24ae231e2e8c@suse.com>
Date: Thu, 6 Jun 2024 08:14:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v8 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240516095235.64128-1-Jiqian.Chen@amd.com>
 <fbaf7086-85d8-4433-91d9-ef8f74512685@suse.com>
 <BL1PR12MB58494B521CB40BAEA30CB412E7F32@BL1PR12MB5849.namprd12.prod.outlook.com>
 <677e564e-4702-4a37-83df-8d47135b62ff@suse.com>
 <BL1PR12MB58494C3B7032B8BEFECF057DE7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <4a421aa5-b4c5-43f3-85cb-68c2021f13dd@suse.com>
 <BL1PR12MB58492BA224EBCE98549A0349E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f125e2e3-b579-410f-b6ab-93d008bf9a9e@suse.com>
 <BL1PR12MB58494B2DD0CD75CCDF1F5CA1E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <67960b60-3108-4920-8bf1-68a00e117569@suse.com>
 <BL1PR12MB58490E8F1F26532B0FDFFFD6E7F82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <46b884e2-cbec-46f0-9070-7013307a310f@suse.com>
 <BL1PR12MB5849C1D40FCF9861BFE7B208E7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
 <6d2e49bf-7be2-48f1-8075-dc0626015c17@suse.com>
 <BL1PR12MB5849932D0F3D280E4B8574DCE7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB5849932D0F3D280E4B8574DCE7F92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 05.06.2024 12:22, Chen, Jiqian wrote:
> On 2024/6/5 18:09, Jan Beulich wrote:
>> On 05.06.2024 09:04, Chen, Jiqian wrote:
>>> For now, if hypervisor gets a high GSIs, it can't be transformed to irq, because there is no mapping between them.
>>
>> No, in the absence of a source override (note the word "override") the
>> default identity mapping applies.
> What is identity mapping?

GSI == IRQ

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 06:53:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 06:53:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735916.1142034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF6zb-0000BT-4A; Thu, 06 Jun 2024 06:52:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735916.1142034; Thu, 06 Jun 2024 06:52:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF6za-0000BM-VU; Thu, 06 Jun 2024 06:52:58 +0000
Received: by outflank-mailman (input) for mailman id 735916;
 Thu, 06 Jun 2024 06:52:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sF6zZ-0000BG-Qj
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 06:52:57 +0000
Received: from mail-wr1-x42b.google.com (mail-wr1-x42b.google.com
 [2a00:1450:4864:20::42b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6772c8b1-23d1-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 08:52:56 +0200 (CEST)
Received: by mail-wr1-x42b.google.com with SMTP id
 ffacd0b85a97d-35dc36b107fso640762f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 05 Jun 2024 23:52:56 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35ef5e98b0fsm716632f8f.72.2024.06.05.23.52.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 Jun 2024 23:52:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6772c8b1-23d1-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717656776; x=1718261576; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=+MVxC/nZqC5UrbUucVWosnkii0meRc5NzeEzYI0F+Bs=;
        b=bO0ul5fNGLIQS9OlX6mAlnRz+mKvhkMs+SZUBpMP4kM0m3Ts/dAM8yUYg+ioFWgDhW
         smlEdGAE663Ck5CgOKqgSU2wacm4o5BbLrAvFZ0rzzP8io8g4Y6Ugqd/u6l531pehWdz
         fQqOqRs1x6ULCYKbO/Eq0dID2U2BcuVXlxVLLydaYBpDc98ixN/sJZzKFqQ81ac4dS+d
         yu8Ve3eg+Fonk6hT1Vih8BpZVF967dop+mNaWioKAYEfGbgo5+RVXpLhB6RSCpaNAyhe
         00pELBRHMmtKN2Zehl98OoGjHTGex8ec2yZgl6cRFpF4DunUBZeYMxqYiiaisdwNG7j+
         euPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717656776; x=1718261576;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=+MVxC/nZqC5UrbUucVWosnkii0meRc5NzeEzYI0F+Bs=;
        b=hNMDidUn71M1TL07VOzfUFv05caL/+Bnxe2WQ5JoM9f6kou3A87Rv6i2BKCZCgCpRO
         t22x85xdwKAxN2a/BuO3LGP0tw/cnj1qVwItG/uhCcmHXfv16NuNODKNSp5gNo6AnUVA
         iLY6nq2zfFdh8BBuLqgTlYTjJl4zq93fLrvm4ns29peDS/Am+iTKxFREjNNhyhtOlgqR
         QoW2EkueuXKXsMNPtZvVcjktKn4iMs1mLwdwL+NlxBoitABpjTxZtD6JkQdKVfPxtsvz
         ByrNfuUVExVNgcapvyP+RD0A5rqNmwdR6zu3ULhDR8pRlZn3lqo4EIpEVio0dL5OS134
         E6Lw==
X-Forwarded-Encrypted: i=1; AJvYcCWzrwGsmBYPBziR51/BAvxjyZ/YOwLTxC0ktSw2ilNe5qdu8gYRAX92P3ZR9W4s7sJvcFB5yebAOZRiIEv3Lw8AO/v6JevRALH1zHkJYpM=
X-Gm-Message-State: AOJu0YwffQx/IUNEs1eDd+R2yWL17AE/qJSI3cueveGyIH2jCpjY+xFj
	PhUto+3zfsH2Am1aiSJg9Od3mjkb21OVAQRil8R/5o3Venfg9lRvBilVkwh2Dw==
X-Google-Smtp-Source: AGHT+IH/3T92HokCXPs3pX2e6M4X9PKLybkMVdIZSCucz/4AXa4FWyJho66RCZWTtaZSB/7dCoHxAA==
X-Received: by 2002:a05:6000:b82:b0:356:4cfa:b4b9 with SMTP id ffacd0b85a97d-35e84058666mr3683902f8f.2.1717656775681;
        Wed, 05 Jun 2024 23:52:55 -0700 (PDT)
Message-ID: <abe801a1-dba7-487b-91c6-3a97cb8523ca@suse.com>
Date: Thu, 6 Jun 2024 08:52:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/riscv: PE/COFF image header for RISC-V target
To: milandjokic1995@gmail.com
Cc: milan.djokic@rt-rk.com, Nikola Jelic <nikola.jelic@rt-rk.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 05.06.2024 18:54, milandjokic1995@gmail.com wrote:
> --- a/xen/arch/riscv/Kconfig
> +++ b/xen/arch/riscv/Kconfig
> @@ -9,6 +9,15 @@ config ARCH_DEFCONFIG
>  	string
>  	default "arch/riscv/configs/tiny64_defconfig"
>  
> +config RISCV_EFI
> +	bool "UEFI boot service support"
> +	depends on RISCV_64
> +	default n
> +	help
> +	  This option provides support for boot services through
> +	  UEFI firmware. A UEFI stub is provided to allow Xen to
> +	  be booted as an EFI application.

Hmm, all of this promises quite a bit more than you actually add. If
there are future plans, please clarify them in the description. Otherwise
please consider adjusting the name, prompt, and help text to cover just
what is actually done.
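
As a purely illustrative sketch of such a narrowing (the option name and
wording below are invented here, not existing Xen symbols), a prompt
describing only the header that the patch actually adds might read:

```kconfig
config RISCV_IMAGE_HEADER
	bool "Linux-compatible image header"
	depends on RISCV_64
	help
	  Prepend a Linux/EFI-style image header to the Xen binary, so
	  that boot loaders which parse such a header can load Xen. This
	  does not by itself provide UEFI boot services support.
```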

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/image.h

This is a pretty generic name for something pretty specific.

> @@ -0,0 +1,62 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +
> +
> +#ifndef _ASM_RISCV_IMAGE_H
> +#define _ASM_RISCV_IMAGE_H
> +
> +#define RISCV_IMAGE_MAGIC	"RISCV\0\0\0"
> +#define RISCV_IMAGE_MAGIC2	"RSC\x05"
> +
> +#define RISCV_IMAGE_FLAG_BE_SHIFT	0
> +#define RISCV_IMAGE_FLAG_BE_MASK	0x1
> +
> +#define RISCV_IMAGE_FLAG_LE		0
> +#define RISCV_IMAGE_FLAG_BE		1
> +
> +#ifdef CONFIG_CPU_BIG_ENDIAN

I don't think we have such a Kconfig control.

> +#error conversion of header fields to LE not yet implemented
> +#else
> +#define __HEAD_FLAG_BE		RISCV_IMAGE_FLAG_LE
> +#endif
> +
> +#define __HEAD_FLAG(field)	(__HEAD_FLAG_##field << \
> +				RISCV_IMAGE_FLAG_##field##_SHIFT)
> +
> +#define __HEAD_FLAGS		(__HEAD_FLAG(BE))
> +
> +#define RISCV_HEADER_VERSION_MAJOR 0
> +#define RISCV_HEADER_VERSION_MINOR 2
> +
> +#define RISCV_HEADER_VERSION (RISCV_HEADER_VERSION_MAJOR << 16 | \
> +			      RISCV_HEADER_VERSION_MINOR)

Nit: The indentation of this 2nd line wants adjusting such that the two
RISCV_ end up properly aligned.

> +#ifndef __ASSEMBLY__
> +/*
> + * struct riscv_image_header - riscv xen image header
> + *
> + * @code0:		Executable code
> + * @code1:		Executable code
> + * @text_offset:	Image load offset
> + * @image_size:		Effective Image size
> + * @reserved:		reserved
> + * @reserved:		reserved
> + * @reserved:		reserved
> + * @magic:		Magic number
> + * @reserved:		reserved
> + * @reserved:		reserved (will be used for PE COFF offset)
> + */
> +
> +struct riscv_image_header {
> +	u32 code0;
> +	u32 code1;
> +	u64 text_offset;
> +	u64 image_size;
> +	u64 res1;
> +	u64 res2;
> +	u64 res3;
> +	u64 magic;
> +	u32 res4;
> +	u32 res5;

Please, no new uses of u32 / u64 anymore. We're in the process of fully
moving to uint<N>_t.

> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,14 +1,40 @@
>  #include <asm/asm.h>
>  #include <asm/riscv_encoding.h>
> +#include <asm/image.h>
>  
>          .section .text.header, "ax", %progbits
>  
>          /*
>           * OpenSBI pass to start():
>           *   a0 -> hart_id ( bootcpu_id )
> -         *   a1 -> dtb_base 
> +         *   a1 -> dtb_base
>           */
>  FUNC(start)
> +#ifdef CONFIG_RISCV_EFI
> +        j xen_start

Comparing with what Arm does, shouldn't this similarly resolve to
the MZ pattern in the binary? In which case it likely needs to be
an entirely different insn, if a suitable one even exists on RISC-V.
Otherwise the lack of MZ would clearly need explaining in the
description.

> +        /* -----------  Header -------------- */
> +	.word 0

Nit: Please use consistent indentation - either always tabs or
always blanks (matching what existing code uses).

> +	.balign 8
> +#if __riscv_xlen == 64

Wouldn't this better be CONFIG_RISCV_64? We do have #if-s like
this, but in different contexts. Even there I wonder about the
mix - Cc-ing Oleksii to possibly comment (you probably should
have Cc-ed him anyway).

> +	/* Image load offset(2MB) from start of RAM */
> +	.dword 0x200000
> +#else
> +	/* Image load offset(4MB) from start of RAM */
> +	.dword 0x400000
> +#endif
> +	/* Effective size of xen image */
> +	.dword _end - _start
> +	.dword __HEAD_FLAGS
> +	.word RISCV_HEADER_VERSION
> +	.word 0
> +	.dword 0
> +	.ascii RISCV_IMAGE_MAGIC
> +	.balign 4
> +	.ascii RISCV_IMAGE_MAGIC2

There's only one "magic" in the struct further up.

This also isn't quite enough for a PE/COFF image, see again Arm
code. If the other header parts aren't needed, that too would
want mentioning / explaining in the description.

> +FUNC(xen_start)
> +#endif
>          /* Mask all interrupts */
>          csrw    CSR_SIE, zero
>  
> @@ -60,6 +86,11 @@ FUNC(start)
>          mv      a1, s1
>  
>          tail    start_xen
> +
> +#ifdef CONFIG_RISCV_EFI
> +END(xen_start)
> +#endif
> +
>  END(start)

I'm not convinced it is a good idea to have two functions nested
within one another, ELF-annotation-wise.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:01:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:01:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735924.1142043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF77w-000236-Vt; Thu, 06 Jun 2024 07:01:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735924.1142043; Thu, 06 Jun 2024 07:01:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF77w-00022z-TG; Thu, 06 Jun 2024 07:01:36 +0000
Received: by outflank-mailman (input) for mailman id 735924;
 Thu, 06 Jun 2024 07:01:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sF77v-00022t-NW
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 07:01:35 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9bced227-23d2-11ef-b4bb-af5377834399;
 Thu, 06 Jun 2024 09:01:33 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-42155143cb0so8927405e9.2
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 00:01:33 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42158110dfesm46190115e9.19.2024.06.06.00.01.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 Jun 2024 00:01:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bced227-23d2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717657293; x=1718262093; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=kcHwOpjlqmnRFryITdq4y5cZLPA7is7PXKp9r4JTpW4=;
        b=axkLH3jjNha/nAEXGjweFUQRUjuz0y2zsUtxzPQS/+GtlLGT9/oVQAi5Bs9qRRWlzj
         ChuBUfJG2KgSHxA37Y2fxi7PfrXy3FAUivRXaMfB+D0C53d29LoZqb8JnJz7xWVkNjZd
         7rXf/Qn92kYOeXuG75fmxgWJfMs7sFawqNfZCbRU6oWIplCsne6ajnpEsKvrDnsDo1mO
         LHtuS3P1xU2sVdezB/d7mZoSf44Omey5+JB+uxY7Asyqs07QyS/RbKTk7rg7/mbcUb2v
         wgnS4aAG2haP1PLrzChEnoeviaeZJcSKxRBsLR5uv7UaPtjHZoS6UU96Yo7ej6l3GlDn
         F5JQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717657293; x=1718262093;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=kcHwOpjlqmnRFryITdq4y5cZLPA7is7PXKp9r4JTpW4=;
        b=fzBfk7bTCBs6cLJwDE047lxRobl0yQnPa/Sv1HbbRIFFizqheez+R0LRqDTOGaV3l8
         9+c7Dx8gPPrymVCaRETO5zaxjSYYns7GHhWWXcamxEkmUGoCzscdTliEQV4L07GH30vs
         vlXGdxSvXNRuin8HEFfbFmbyQGNhihY6VhA23ydEKESDVG38ex76nDRZ00gYFPXThgbf
         xpOsRFUvawUAOBXEA2whVwrbTkuqM4BZ6j/TIjjyRhs60lbxeN22p5bdAItx8oh32RU+
         MZAZJLALlM9EP991mt2uri1LNleJDvDJgUwsxp1zDHwbW8NVyc+mjk1cfO/IiXT97nxt
         +wlw==
X-Forwarded-Encrypted: i=1; AJvYcCUING0JC7gquF5WJQQOQocHo81S/mSbUPWY9gJWfx1Ee/9hvpr9GutKErcJdPJ9cRvuz71iM8aFIxT2wynv7UyiaQMExD1NepWOV+bIpCg=
X-Gm-Message-State: AOJu0YylyjC8TZeKaSJM9NIWk6PT2FND59kyABRZpjytp8tE9a4YnIJR
	DwikSIEfBB4l6xoxeX7o29MgNZtEcWtCyUBVRUBxjQFl+YX19IYgTJcFMhKwgQP+QznUeTw8Mdi
	cpA==
X-Google-Smtp-Source: AGHT+IFUcmy+QDcGzDE8wqGF2nBatdk2jNpe1Scm26jSMDOfqU6a1yYXXHpwbMJYQJBqWjyo1LVEKw==
X-Received: by 2002:a05:600c:1f90:b0:419:f911:680a with SMTP id 5b1f17b1804b1-421562c33a1mr45341425e9.1.1717657293137;
        Thu, 06 Jun 2024 00:01:33 -0700 (PDT)
Message-ID: <efd0ad53-b052-4e71-aeb6-394367199218@suse.com>
Date: Thu, 6 Jun 2024 09:01:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] x86/cpufreq: clean up stale powernow_cpufreq_init()
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: sstabellini@kernel.org, xen-devel@lists.xenproject.org
References: <20240604084629.2418430-1-Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240604084629.2418430-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 10:46, Sergiy Kibrik wrote:
> Remove useless declaration. The routine itself was removed by following
> commit long time ago:
> 
>    222013114 x86: Fix RevF detection in powernow.c
> 
> No functional change.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

This actually addresses a Misra rule 8.6 violation, afaict. That would
be good to mention in the description (happy to add a sentence when
[eventually] committing).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:08:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:08:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735929.1142052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7Ei-0002g7-L4; Thu, 06 Jun 2024 07:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735929.1142052; Thu, 06 Jun 2024 07:08:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7Ei-0002g0-I4; Thu, 06 Jun 2024 07:08:36 +0000
Received: by outflank-mailman (input) for mailman id 735929;
 Thu, 06 Jun 2024 07:08:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sF7Eg-0002fu-Im
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 07:08:34 +0000
Received: from mail-wr1-x431.google.com (mail-wr1-x431.google.com
 [2a00:1450:4864:20::431])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 95f1af14-23d3-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 09:08:33 +0200 (CEST)
Received: by mail-wr1-x431.google.com with SMTP id
 ffacd0b85a97d-35dceef429bso238277f8f.1
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 00:08:33 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35ef5d66cf6sm748524f8f.49.2024.06.06.00.08.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 Jun 2024 00:08:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95f1af14-23d3-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717657713; x=1718262513; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=LVAZpPXiWZYzmFX1DPoHmlYfKiRk3H4WVr2/Kknry/8=;
        b=JbFu6Yl3lnNQFwBPGnmwQm6wc7hLkQJMbrYLJ8hyZIgb+H6WON6kfT69vf1dGzyKzi
         ki+ZdlyGb3gj4xhVaJWW8Yyut/pAHKbbE8X8Rx37YpMS3tpQC6VBHJCZMwImpSUoACxT
         02+2ZUxvtWd6WtMJFTMv7ADROV221uphAFnKe4Dht/3fUeyYQ9dZiw2LuA2fy36jCZlq
         gytJHGPbOSEUBvbDDe7gEc51rANneD2qS7t+NQBEKqA6qaWNBz6L+NVPMLsXFjfZdoAy
         13tGnMd9zBHqo4BiK0xsBLMpr8keD5odjacEtmNealLGidRCHd13iczXG1idkDZdaAfx
         ToBQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717657713; x=1718262513;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=LVAZpPXiWZYzmFX1DPoHmlYfKiRk3H4WVr2/Kknry/8=;
        b=oDjcBzthhXnBXigWgf2mE8dBga6a7BSvGbk6VJd1fR6N5aOaOl3a68u9l6QGHHJAGB
         YFvyuUqUIWKtUdeqT4TETVP/WZpDGAoIYh0st8lSv4twzf+HXox32IMapjSjvWyqU13r
         nIwFHyLLGFZqgo6WVLL5UXW5pjAGvR7AP8u0VHYydRqkh/z323CU03KrcehgFocfjJ4y
         O2Gl4Ea5YmR5sNx0byLC/VG4NO5WUOkrIRBJdfvwRcZr3VhXKsO08UB+C3Bs/ijChmkg
         JO9RpXJytHKsHAdIMpFAb+tHI5NAATVVOs9ZuRqiDoXEkkzql8dfFgt5itcdFbYSkBMx
         e/6w==
X-Forwarded-Encrypted: i=1; AJvYcCW6PLbYMPh5xKlzoBGUGpiJsOcCQ7S6rZQCc7d4LLlQfq8rmydZOiG8vysNVZfRjs8kI4kOlrsL5XlSHTnj/1nEJe0cBL5gDNdzlgq9XAs=
X-Gm-Message-State: AOJu0Yy05TOm6QXopXMYQBDl3nNo8Td9o2HwFVyTS6WtHRb8YoFia8/V
	c7hSHONoFLMCN2f0IW9oQgKfKbB5RnPs0Pa3k7g3nQqyv+MrLKWTTXobHIBa4Q==
X-Google-Smtp-Source: AGHT+IHaBt9PvzUYaH1zWNTBbrPH/BWP5OInr8lF+bZUwhn40ujKdll7eF8RWOaxRCGGU4YW+J8eXA==
X-Received: by 2002:adf:ffd1:0:b0:34d:b0bf:f1b5 with SMTP id ffacd0b85a97d-35ef0ed0a22mr956674f8f.35.1717657712735;
        Thu, 06 Jun 2024 00:08:32 -0700 (PDT)
Message-ID: <5cb13d1a-1452-4542-b50d-23e6a9d9d3ef@suse.com>
Date: Thu, 6 Jun 2024 09:08:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/cpufreq: separate powernow/hwp cpufreq code
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jason Andryuk <jason.andryuk@amd.com>, xen-devel@lists.xenproject.org
References: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 04.06.2024 11:34, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
> @@ -657,7 +657,7 @@ static int __init cf_check cpufreq_driver_init(void)
>  
>          case X86_VENDOR_AMD:
>          case X86_VENDOR_HYGON:
> -            ret = powernow_register_driver();
> +            ret = IS_ENABLED(CONFIG_AMD) ? powernow_register_driver() : -ENODEV;
>              break;
>          }

What about the Intel-specific code immediately above here? Dealing
with that as well would likely permit reducing ...

> --- a/xen/include/acpi/cpufreq/cpufreq.h
> +++ b/xen/include/acpi/cpufreq/cpufreq.h
> @@ -252,6 +252,7 @@ void cpufreq_dbs_timer_resume(void);
>  
>  void intel_feature_detect(struct cpufreq_policy *policy);
>  
> +#ifdef CONFIG_INTEL
>  int hwp_cmdline_parse(const char *s, const char *e);
>  int hwp_register_driver(void);
>  bool hwp_active(void);
> @@ -260,4 +261,35 @@ int get_hwp_para(unsigned int cpu,
>  int set_hwp_para(struct cpufreq_policy *policy,
>                   struct xen_set_cppc_para *set_cppc);
>  
> +#else
> +
> +static inline int hwp_cmdline_parse(const char *s, const char *e)
> +{
> +    return -EINVAL;
> +}
> +
> +static inline int hwp_register_driver(void)
> +{
> +    return -ENODEV;
> +}
> +
> +static inline bool hwp_active(void)
> +{
> +    return false;
> +}
> +
> +static inline int get_hwp_para(unsigned int cpu,
> +                               struct xen_cppc_para *cppc_para)
> +{
> +    return -EINVAL;
> +}
> +
> +static inline int set_hwp_para(struct cpufreq_policy *policy,
> +                               struct xen_set_cppc_para *set_cppc)
> +{
> +    return -EINVAL;
> +}
> +
> +#endif /* CONFIG_INTEL */

... the number of stubs you're adding here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:24:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:24:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735936.1142062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7Tu-0005kx-V8; Thu, 06 Jun 2024 07:24:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735936.1142062; Thu, 06 Jun 2024 07:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7Tu-0005kq-SZ; Thu, 06 Jun 2024 07:24:18 +0000
Received: by outflank-mailman (input) for mailman id 735936;
 Thu, 06 Jun 2024 07:24:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sF7Tu-0005kk-59
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 07:24:18 +0000
Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com
 [2a00:1450:4864:20::331])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c796d472-23d5-11ef-b4bb-af5377834399;
 Thu, 06 Jun 2024 09:24:15 +0200 (CEST)
Received: by mail-wm1-x331.google.com with SMTP id
 5b1f17b1804b1-4210aa00c94so7145435e9.1
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 00:24:15 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215814e91csm44959065e9.41.2024.06.06.00.24.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 Jun 2024 00:24:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c796d472-23d5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717658655; x=1718263455; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=x5v86fs4bitDtehls6LSVwpbnQIgG5eQPtWXwbkfs6Q=;
        b=RNu/hK7LuZp0dhPITcHOE/VrvMiMEe9hDusLyUABW/ORZ7MX/Q4EP+uSLf/PR0sVfO
         0nsms0+sfSKmRu4ME255sVqx6WTneZAsz6vZDduJdwpBtcFVOdwq7in8SA7twnyDxkKd
         j8AdooMkqrD5S/ZQOeUnQH3qQDqwzUg/jAZQuG8MtdN2lxuK1DOvXLVqP8tOqolrLupM
         pqLKsJAyJzBchm13TUsLrm0c2ufVM+lj67EvLhSXxW5RMICh1VUVIbnX0qc/GTpk8p1b
         x/WqxShjHvV6VZ/t7MtiBU+6N47/rAUiq+vfYmsDE3NEFQkhd4qh8eSYyQLbSpL/5Tm1
         7YpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717658655; x=1718263455;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=x5v86fs4bitDtehls6LSVwpbnQIgG5eQPtWXwbkfs6Q=;
        b=IIXxM0a2l5WayKaBlz7s1f+hzL71ELt4NwHHVAFcP2U/oixHG2BIq7klsysuKrWtNR
         p3q6q8AKJPZdJV4NghC5JxpbEcpeKCsxjHYg7Y09DnY8DAAgqTbQYCRiex6y6kNIIrqs
         8Z+RGHeGPsrYMj42Z5V24dZnKf0OnbMe6PSyTMRAmqitADUv9eZ4li56GM7ry+SM1gte
         KKpm1UMn0QciyMAHsc6i2V+gVejkYRwfv7cHqTfbpdCHHv0zNTxTkLBvp0lyCfdX6oca
         I1X9WsCfz1sI7IauvxejaBnH0fUYFvpjNl9IUUxXpUNKrlepeHP8ZOx6HVDaJ4A7jTVN
         MZng==
X-Forwarded-Encrypted: i=1; AJvYcCXBQoXSebN2iQpkSdcNlKrgYcLg+gzXfoGiYSwHqc8CwkVRo9Opa4gUs9ON3uf/akuKXQNe4z9cbYerM5EKiK01kjTm/UnKJ8wouqNsqoA=
X-Gm-Message-State: AOJu0YzCva9J5+sc4IxGLA+ONN+TDHebXydIOno8DueROR8W4XIGxsHt
	mqpWczjUqJV6z2YVmghS+NhhNNdOgKeeayr45yqovG/OEeTnYYwUt1wpx/6xtw==
X-Google-Smtp-Source: AGHT+IHwj53cWwJUWAIxsLU+T3YHIjl6xmq4uqPrxPfmEF/xgLpbKhSf9L2WOCAkZFCVLt5vuAT8mw==
X-Received: by 2002:a05:600c:4690:b0:421:22c4:db60 with SMTP id 5b1f17b1804b1-421562f22bbmr38912985e9.23.1717658655072;
        Thu, 06 Jun 2024 00:24:15 -0700 (PDT)
Message-ID: <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com>
Date: Thu, 6 Jun 2024 09:24:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 02.06.2024 22:04, Petr Beneš wrote:
> @@ -5245,7 +5251,7 @@ void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
>      if ( !hvm_is_singlestep_supported() )
>          return;
> 
> -    if ( p2midx >= MAX_ALTP2M )
> +    if ( p2midx >= v->domain->nr_altp2m )
>          return;

As (iirc) indicated before, just like you don't add a d local variable
here or ...

> --- a/xen/arch/x86/include/asm/p2m.h
> +++ b/xen/arch/x86/include/asm/p2m.h
> @@ -887,7 +887,7 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
>      if ( index == INVALID_ALTP2M )
>          return NULL;
> 
> -    BUG_ON(index >= MAX_ALTP2M);
> +    BUG_ON(index >= v->domain->nr_altp2m);
> 
>      return altp2m_get_p2m(v->domain, index);
>  }
> @@ -898,7 +898,7 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
>      struct p2m_domain *orig;
>      struct p2m_domain *ap2m;
> 
> -    BUG_ON(idx >= MAX_ALTP2M);
> +    BUG_ON(idx >= v->domain->nr_altp2m);
> 
>      if ( idx == vcpu_altp2m(v).p2midx )
>          return false;

... in either of these, I see little reason to have such ...

> --- a/xen/arch/x86/mm/altp2m.c
> +++ b/xen/arch/x86/mm/altp2m.c
> @@ -15,6 +15,11 @@
>  void
>  altp2m_vcpu_initialise(struct vcpu *v)
>  {
> +    struct domain *d = v->domain;
> +
> +    if ( d->nr_altp2m == 0 )
> +        return;
> +
>      if ( v != current )
>          vcpu_pause(v);
> 
> @@ -30,8 +35,12 @@ altp2m_vcpu_initialise(struct vcpu *v)
>  void
>  altp2m_vcpu_destroy(struct vcpu *v)
>  {
> +    struct domain *d = v->domain;
>      struct p2m_domain *p2m;
> 
> +    if ( d->nr_altp2m == 0 )
> +        return;
> +
>      if ( v != current )
>          vcpu_pause(v);

... in both of these.

> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> 
>      mm_lock_init(&d->arch.altp2m_list_lock);
> -    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
> +
> +    if ( !d->arch.altp2m_p2m )
> +        return -ENOMEM;

This isn't really needed, is it? Both ...

> +    for ( i = 0; i < d->nr_altp2m; i++ )

... this and ...

>      {
>          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
>          if ( p2m == NULL )
> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
>      unsigned int i;
>      struct p2m_domain *p2m;
> 
> -    for ( i = 0; i < MAX_ALTP2M; i++ )
> +    if ( !d->arch.altp2m_p2m )
> +        return;
> +
> +    for ( i = 0; i < d->nr_altp2m; i++ )
>      {
>          if ( !d->arch.altp2m_p2m[i] )
>              continue;
> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
>          d->arch.altp2m_p2m[i] = NULL;
>          p2m_free_one(p2m);
>      }
> +
> +    XFREE(d->arch.altp2m_p2m);
>  }

... this ought to be fine without?

> @@ -538,8 +538,8 @@ void hap_final_teardown(struct domain *d)
>      unsigned int i;
> 
>      if ( hvm_altp2m_supported() )
> -        for ( i = 0; i < MAX_ALTP2M; i++ )
> -            p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);
> +        for ( i = 0; i < d->nr_altp2m; i++ )
> +            p2m_teardown(altp2m_get_p2m(d, i), true, NULL);

Shouldn't the switch to altp2m_get_p2m() be part of the respective
earlier patch?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:30:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:30:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735941.1142072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7Zp-0007Wt-Ie; Thu, 06 Jun 2024 07:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735941.1142072; Thu, 06 Jun 2024 07:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7Zp-0007Wm-G3; Thu, 06 Jun 2024 07:30:25 +0000
Received: by outflank-mailman (input) for mailman id 735941;
 Thu, 06 Jun 2024 07:30:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QPv7=NI=epam.com=prvs=28876dbbc5=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sF7Zn-0007Wd-Vg
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 07:30:24 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a1a91063-23d6-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 09:30:22 +0200 (CEST)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45608dIR014527;
 Thu, 6 Jun 2024 07:30:16 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2169.outbound.protection.outlook.com [104.47.17.169])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3yjtc5j51u-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 06 Jun 2024 07:30:16 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by DB9PR03MB7796.eurprd03.prod.outlook.com (2603:10a6:10:2c1::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.13; Thu, 6 Jun
 2024 07:30:11 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Thu, 6 Jun 2024
 07:30:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1a91063-23d6-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Lkq6tBEzDV/YLpVWMcsAZdR15IoNq6y9yJcqLlJaysVRZ7AwcCDQRvBS0YoXIMQ4o6AQQNIaVqdiCIQ/v98F0W7yi1tjJnCeEpM96gCXHJ2xhD21EHBNNWKv6bO6F5TfLLzTtI9rjWFYp+8IrV3xOUkL6p3q7U+H7i3NBlAYfLkUfMKJIvv4EzvWJnHKCSyhWLlcQHFzOxNR83KcUminA1NAhs3L1sbQjXjv6r9mq1U3QSuxqaxnOrMfHWd97SuD2aPCzTTuH5pIFWgeutqiEJxEaHJh98Vj4Ym0VqAPzyRtepnv7tprBQvnUY2tVq+srGxlq5+SBQjNFohFwt508Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zUtPU55NXqE4SI1MEQTwSG3nl6/RUMeNXZ23bFy+jt8=;
 b=kg6x2t3Ycj7n15zFSZGIg6wYxinQ0Wj9j2Z7hO+o3XefjeXvOpHhUGIT7texdXR/AgH6epW8mcPPWqkPFoBglK5E4nw2/ynicf7G1VCT1mGmxy89o+L55d/AddKX7hRJoebmhwWrn1TsNCOmjxgm2s+nU7S1SmBn6Djj452wWI+r5PXxk7lvVxKO/UvBcrWTrviclnb6QmLbY4U30Jo3viOLWbzDjSmrQoDx0O4Q8Bw3KY5kfMT3CIUzI37JUKNOcW6WaNf1kRnt+3BNx1g5cPYNE6BTpKpOxEKgWvz8Bu1ZlvG4KQlCdSfOOu4A7ySpFbcAr8ZYc4m1fDklnPGjrA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zUtPU55NXqE4SI1MEQTwSG3nl6/RUMeNXZ23bFy+jt8=;
 b=PtNZGrQGnChzaXpa9+mMjXDVne9Om3PTvEB/g5Z1dWpLzqu0G1/n7dLE/6dM5aW1niBxCEYJry8qTaXoEwHVJYkFsoIVPgfuYfaDkK78I/vkn2lL9bL3zXW2Jqg2fwCZ3iuNgxCF/fDpMzBK6YGPxOBysLtT4Ckpl6gvW8Wus/hdhRieAOlVn1ibVjqTuWRvaZnWAboXxxziN6ii08UCnYe7gLpuk6s41l0RabY71/DZCw4W+5bCAzz48xJ5aOacAJTkE7rIW0Pv8YV1NnNLwuxlSUV37q+jn2ktNDAy/+7Jmwm04QvtOibYBSkivLZHDGK3yqivcvVkSEQhEgEGlA==
Message-ID: <c66966da-bbe3-432e-8a2f-809bf434db39@epam.com>
Date: Thu, 6 Jun 2024 10:30:09 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/cpufreq: separate powernow/hwp cpufreq code
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Jason Andryuk <jason.andryuk@amd.com>, xen-devel@lists.xenproject.org
References: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
 <5cb13d1a-1452-4542-b50d-23e6a9d9d3ef@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <5cb13d1a-1452-4542-b50d-23e6a9d9d3ef@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA2P291CA0042.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:1f::14) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|DB9PR03MB7796:EE_
X-MS-Office365-Filtering-Correlation-Id: 27c2ed06-0232-4a22-1980-08dc85fa7fb8
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|1800799015|366007|376005;
X-Microsoft-Antispam-Message-Info: 
	=?utf-8?B?clEvcm41Y0hwV20waUFXcTVheTNUVHJrOWhuR2JaL2xReWVmbDBIL3J3UEVG?=
 =?utf-8?B?amxMT2o5OGdJNUduUi9uTEc2V2tPK1dDbGdsb1ZsaG1kL0hEZzRkQkZTY05P?=
 =?utf-8?B?RTA1UmlxcUZDeDBUT01qNTZpVmhGRFdTc3YySkpjQUJKdCsrL2plQnRjd2Rw?=
 =?utf-8?B?RXFRUEhPTkxJVkxHUSt0V2Qra202cmZ1OC9ReUVvd0paRnFkdC9xdzhhdlRr?=
 =?utf-8?B?Y1NqQ2ZiaERGaEhZN3NCcEw2RDFTVUhCOEhKdWlwNTFUb3plQjNOUno2MUxu?=
 =?utf-8?B?ZE9Jc3J3R0txM2VleHE1cnN4OXNFUXluZmtpRjkwb1BoNGZ4Wk5SN1IwTDdu?=
 =?utf-8?B?aTFVNk5KWEhwUDRjY0pzM0lvNTRIYlBRb1YraFltM2RxWCtYZ1ZXT1hROE83?=
 =?utf-8?B?MUs0TlRvTnFIemlUTE1mYkxaa3dVQUNIZDFCZVlBT1JXa0FEUzlia3V5QWR2?=
 =?utf-8?B?UXZJWVhJeHovSFBPWXovTXVkTHNzSFFVcWZ3OU5WNHhYUzdaMldmNkZlVjNy?=
 =?utf-8?B?YkpaNERSZXJ1TmRGT2R6ZDRiKytBYzBsRHNmaWx2ajJxSjRsMHZ3amhYM2FE?=
 =?utf-8?B?TjRJS0NWb2dBdmhybWdNQUtRZ2hGbVVGa3FxR0EvV1FDNHB4dzRMQmRnaXlY?=
 =?utf-8?B?MlZjYWpsZCtHalZkb2NLaGVPMjQvVGxIU0s2N0R2RHN3TzJMTFJrUnRJNU5z?=
 =?utf-8?B?azB4RWM5SFlSSmh1T0xUWTZ5blJUUnJkZjAva1FoVnAzZk5SYkZ5NTc3dHFv?=
 =?utf-8?B?cUJlejcxZmVjZmxBN25wT3Q5SjZ3MmI2SmFhd3ozazBzYWNKSUNWa3dyOE1u?=
 =?utf-8?B?bEJxeHJqSkFOMkZXeU5JcC9LeCtrYWFNNDBTaXl2aFRMOGxqVUpUL1FJTzFU?=
 =?utf-8?B?ZVA2MnAvT2NIZEoySWlUVlB3ZzVXRFdocXVWVnZyMHZuWWJwNitwWmREeEZo?=
 =?utf-8?B?YTE1eGFaNWkzOTlPY3AwMWhwTzE5dGVzNFR6YXp5OWZwWXppSm9TODVmNTc3?=
 =?utf-8?B?VFh3UTJWYURTR3IwbDhldDJ5WWJKdTQ5N09Rb1I5Q3BYVlJiN25GVjhqSWhr?=
 =?utf-8?B?aXJWbUhPM2VlS3JWR2Vpd1k1dzFIR0ZVV0hWZFUyMkxZcFZ0NHcrTTNEYUhE?=
 =?utf-8?B?Qm5lR003eDBpYUhiaXJEams0SkpVbjM2aytOQWY1UDA1RXdqa2UyaVJVelJq?=
 =?utf-8?B?eTYwZmU4ekMzY2oyTTFlbzN3MjVocEdWeWtwQW1pZGEwcmZ6KyszWXVvSnlB?=
 =?utf-8?B?eUZtQ3E2RGFHTGVTdi9uV05OSHZkOU1XR1ZEVUtOamsvSTMzaXlLUTBoQ0Qx?=
 =?utf-8?B?bm9tc1VmdUh2VnBaVGhTRXRwN2pveXpuRWJLdlNwZnJmUE14SDBIOE9vNStj?=
 =?utf-8?B?aGNJQlFnS0xSVjl5K2w2WDJuNTJEalJBY2NybWF0YlpyV1U4RE85R3E3c05N?=
 =?utf-8?B?bnI4SE1nRVhLN2VpVmJGbUhhUVgvV1FyeHQ2M1ZOVUlEZ1ozV0w4SC9DUHJv?=
 =?utf-8?B?RzRlRVg0bzVPQTk0TElML09xVEpSRVF5SmtEQVVkT0FTY2RtS201Q0loVy8y?=
 =?utf-8?B?Qm10K3lVcXUzdFFpN3JFbnhiZm9Jc21XZzRMelRnM3pmbHFIcVZGaERXMG15?=
 =?utf-8?B?NFduTGx2ZThGNnp0WklVbDFXSkNpT3lrbEd1NGxES1M1Z0toSDdiSnJGSWd1?=
 =?utf-8?B?WGJOZjcrMUJEeUlsc3VLNUJ4MWhmeDRpcndUY252VEx5Q1dMUGZ6dDhlamtM?=
 =?utf-8?Q?kL52LtLTDIxZa4HApfc2/vBR+pQKwybN8Q5+ase?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR03MB9192.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(1800799015)(366007)(376005);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?utf-8?B?WWVMSHJQZFFoT2tFUjVVU25WcmFOaWtVTUY1c2xMTXFSTSs5dHhLQ1VjZjlz?=
 =?utf-8?B?a1c2WVVnZFBObnlLS3Q1ZEpaektTSThOQmFWd0dPdUd4T0tUOERaMlpaZkIr?=
 =?utf-8?B?bXlzanlzZCsvTVYya3V5K2ZreTBUOCtVVDZ6cVVrbDZBUjZHSlhLcThRZWZR?=
 =?utf-8?B?VVM1bWF3cTZ6SVMrS2hMNFVFNE0yb2M5Z0NSb3hHRUpLdlJ5TDRtK1JUbnJw?=
 =?utf-8?B?VVh0TWlkWFJzUjd4NDFSdFI2b281Q2Y5cGhWeXU2UlVBUUQybENiZmFqc1VO?=
 =?utf-8?B?WThCaGduQ3FlS0JTVVhtcDM3RHBKbkxBRFF5b0ZSRklNcHlNQmZWMTd6Ky8x?=
 =?utf-8?B?NEF4cW01V2hPc3Rjc0MrUW1PUXk0ZHp0YnlXa0o2Yy9zQTdXNEFWTzZ5R2t4?=
 =?utf-8?B?WTkrbmlJeVZJMGNjKy93MFlyM0RJdG11TWJOdGYxeGR6Y1NMNGJoRkcyMDBP?=
 =?utf-8?B?M3h2ZkRxMldhREx4S3k4VHFWUWNzWkNxL0lKUTdnZ0FOcXk0ajBqWEJzL3Yr?=
 =?utf-8?B?VzB1OHVZc2czdlRIYnlaWFoxZjl6c252UUEvckxxNmF4b3NPZDJRczVJeWMx?=
 =?utf-8?B?K1ZYVU1jczN1WnJpY2NWcXpBV09ScWJkdHNUTDA4cVpLb3crTFJxcmVwa1ZL?=
 =?utf-8?B?czlBamJDRGl2RktFaEhQditOMDlHZUVhS0NrSk5IVjkrZFZISXArTXhxbWFT?=
 =?utf-8?B?K0Z6TEpIeERtZ01hdDhUQ0NwSVVWWGR6VE9Zc0JzN3MxN2FhNDdrRzgyMkdG?=
 =?utf-8?B?dVdLWWk3TlJ5YXhzclJnaFdKdUo4VGp2akJ5VDVNODJVcmNHRnBQUXMxMEo3?=
 =?utf-8?B?amYyL0FPUkJTZDZ0b1k1STNWWkUvdXBlcjBRTVRXd1JEQmJpdjZDSG4rZXYz?=
 =?utf-8?B?ejVucGZWMHlhdmVXSzhMTkhhbGJBamJmNDlLTVdmRGw4cVEzbzR4UEZ4MERH?=
 =?utf-8?B?bnppZ0xaemVoTlU3dkpza0dvVXZTVWErRVpZQUJ5VVN5Zm1OK2ZDSTh3UitZ?=
 =?utf-8?B?U2FYdVdHWDRzNUZTbkM5N29vN1dpTUVSYWMyNTQzU0poRTJ3cDVzNFhSZ1V4?=
 =?utf-8?B?eFFML1RId2tFOWZleTZGK0ZMNkhUTDJFeXRHaGJrNTVzYzBxMm9ZMVF0SGJ2?=
 =?utf-8?B?ZkNHUVpoZ0V5U2EzTlpnS3NoU25ENVF5UmJPRGtpTmY4aUwvRFlMbUYydWda?=
 =?utf-8?B?VVhrclQ0bkRYc1ZEdXp1ODFsRjB3cC9GV0p2VGltakk4UCtCeHdkNTRsUXZ2?=
 =?utf-8?B?UXBzNDB1cFExWWhaQ0c0dTUzVFN5dUY3NTNTSW8rMEZpMkxWalJHbWJzZHpv?=
 =?utf-8?B?ejh6MUdGZnZGU25PQXBqQnBSQ250L1FBYkIrWUI4alNXU3lTSkpXd1d0SGJS?=
 =?utf-8?B?WnlDYkZrNDhwcHU4WlR4dVBrNzY0cDhDSGhhejhWbGtncEk0THpocytNWXpp?=
 =?utf-8?B?M2Zpb3lSNE42MWdvUS9oMnZwWWw4ZzRHM2Z5V0NNdVlWR25zZkk0bWE1MUtL?=
 =?utf-8?B?c01NOGRKTkkvN3V2a3RBVGY2ZE5UM3RidStTcEQyd1lVakxleDVrMEQ4SHND?=
 =?utf-8?B?bGtXOXByZTg4UGZuL0Q2NGtnY1FnMWdPMm5uMHdWK2FCc3ZTUVltWGJYZHJ2?=
 =?utf-8?B?blhnM3dqV25SeGRIVDIyMWplV290N3F4UzRZbzNJRXkvNU5lcXRWdWdNRHZk?=
 =?utf-8?B?NkMzZmRNY1RVZlY5eWswU242QStRSDNDemJqb1gxTW1xNVh4cXl1aWtmRTVr?=
 =?utf-8?B?dUpHL1ZsT29iWHhiQVlUeHM2ejFzb0xlY0UvdDlaOVpWY2NtT01qYmVJeWww?=
 =?utf-8?B?aWRQN3ZmT3Y0UnM3WmdFRVdHMVhUYmVucElQU3kvWTZEOHJYa0JrOGZtM2Ey?=
 =?utf-8?B?VHB5V2FUUEpXM3dtc1RBQzZoQW5HMDg5b3ZTV3JKalNDbndtTk1ySkdxQUNW?=
 =?utf-8?B?dXNSVzhlT2lUK0dRM2ZzOGV6SlhPNDByYnQ4SjdoekwvL3BMZlhYOHZ3Qlpo?=
 =?utf-8?B?bTNsaWdXdlhQR29CM2xtMGM5ZXBoVWIyNGViWER0WmxOeUFTeW9yWE9RVFdn?=
 =?utf-8?B?c2ZmbVNYNDhLRmVmclBJTGJkNDZmRDhTUEdVcHFmWW44aXFkQ0F5SkJnL3p6?=
 =?utf-8?Q?gsynea555df8JnLxEaAL21kcv?=
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 27c2ed06-0232-4a22-1980-08dc85fa7fb8
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jun 2024 07:30:11.1851
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9xKcuzUKt+m+xPmzjPsoB+y4W0sx6LLVFF1qOqmc0mxg4MMm9rCBL9b3/vqfrDLiqIX6ustM2ZQ/nsO2HBAEXw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR03MB7796
X-Proofpoint-GUID: ESA80f-pxfkORFiDthoLL_Rncyw71zqE
X-Proofpoint-ORIG-GUID: ESA80f-pxfkORFiDthoLL_Rncyw71zqE
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-06_01,2024-06-06_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 spamscore=0 mlxlogscore=994
 mlxscore=0 phishscore=0 malwarescore=0 clxscore=1011 adultscore=0
 bulkscore=0 lowpriorityscore=0 priorityscore=1501 impostorscore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406060053

06.06.24 10:08, Jan Beulich:
> On 04.06.2024 11:34, Sergiy Kibrik wrote:
>> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
>> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
>> @@ -657,7 +657,7 @@ static int __init cf_check cpufreq_driver_init(void)
>>   
>>           case X86_VENDOR_AMD:
>>           case X86_VENDOR_HYGON:
>> -            ret = powernow_register_driver();
>> +            ret = IS_ENABLED(CONFIG_AMD) ? powernow_register_driver() : -ENODEV;
>>               break;
>>           }
> 
> What about the Intel-specific code immediately up from here?
> Dealing with that as well may likely permit to reduce ...
>

you mean to guard a call to hwp_register_driver() the same way as for 
powernow_register_driver(), and save one stub?

   -Sergiy



From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:45:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:45:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735952.1142099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7oC-0001Ys-At; Thu, 06 Jun 2024 07:45:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735952.1142099; Thu, 06 Jun 2024 07:45:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7oC-0001Yl-6p; Thu, 06 Jun 2024 07:45:16 +0000
Received: by outflank-mailman (input) for mailman id 735952;
 Thu, 06 Jun 2024 07:45:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sDjy=NI=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sF7oA-0001AU-L2
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 07:45:14 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b5244c91-23d8-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 09:45:13 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2e6f33150bcso6262491fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 00:45:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5244c91-23d8-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1717659912; x=1718264712; darn=lists.xenproject.org;
        h=cc:to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=QnBvdIAJVyhTAREsYk4qao8v/d3hMUjmboeptL4WYQE=;
        b=aAzGPmsrtEO7YVQKxIJDLYJRCFFfCd7lw5hSKEbyY8So0WO1WrAVhqXF/FEzw95gXt
         tIih7x7aiBM8Cu7WdMdRQPEkcpV1kCFqgv6/IumGVUnZkCwOCp6PxRyGgwdcBXfs3hoN
         9chGaSQGBeFvToXIUk8lH+kjlepwwxzDf7c+s=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717659912; x=1718264712;
        h=cc:to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=QnBvdIAJVyhTAREsYk4qao8v/d3hMUjmboeptL4WYQE=;
        b=QujsjvjYL2C8syPnlypTaf2shKb3L8D6ogRBX0MMYBbwS+7GQJLX04dpGwRsKab5JA
         O3NVuTXv2P0SpEEIMyY3BUeXFa+kwKLQtQyNoonnfWtg62sHRQU5Z+mHMSioI3LgeIgx
         yn4UYjJU147IYP0Asi0nR3cHt3FcbnT5N4v6P+jbhX74zRx59L52VHK65etnisH3Vhb/
         65hxOTHVYF+XlPkPFWSGp2OESzVEmj7n4wYD6ldl3dCp89ZelLyBFNR2z9BUvAPFinhL
         XQroJNqCfDsGewdWl9YDsINmwzYSpVskVPIoE95uQMKrkkX7OdHgPE7Wdp8iYZo5rgxS
         ZHvQ==
X-Gm-Message-State: AOJu0YzorA+xWxbdU1hqXGxxN7K9LECJQFr1a0BtrKTTKMd7WtctSqbx
	JFTF1MhIMfNmmR+1iFv56qc3UG4tYuXcJKkzjenMMep+I/m/MTkSiuAv1H7mkKNKbRmoJE3g4+G
	UqB0DD3ws9WE/ytZhIy8VNasiCOrBL5IEV+RmIaAXOmHfIog9JJORtA==
X-Google-Smtp-Source: AGHT+IHemNqLVUcAMWRh60pK+cqN3cSCwjd2eUCpFKPmdqGhrftjP4ZIOKxjb07Dzuc7gqJiHAstroU+sdEbDNhIJHU=
X-Received: by 2002:a2e:90d6:0:b0:2e9:881b:5f02 with SMTP id
 38308e7fff4ca-2eac7aa4f5bmr26131051fa.53.1717659912284; Thu, 06 Jun 2024
 00:45:12 -0700 (PDT)
MIME-Version: 1.0
From: Kelly Choi <kelly.choi@cloud.com>
Date: Thu, 6 Jun 2024 08:45:00 +0100
Message-ID: <CAO-mL=wF=k_8m90cg3VJR0yutAqqEU3VQmavWaKB6G=PVCTF9A@mail.gmail.com>
Subject: Join us for day 3 - Free virtual Xen Summit 2024
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org, 
	xen-announce@lists.xenproject.org
Cc: advisory-board@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000eadab2061a33dc31"

--000000000000eadab2061a33dc31
Content-Type: text/plain; charset="UTF-8"

Hi everyone,

Please find the *schedule of talks listed here.*
<https://events.linuxfoundation.org/xen-project-summit/program/schedule/>

   - All sessions on Thursday 6th June will use the following links:
      - *DESIGN SESSION A <https://meet.jit.si/XenDesignSessionsA> *(Liberdade
      Room)
      - *DESIGN SESSION B <https://meet.jit.si/XenDesignSessionsB> *(Augusta
      Room)
      - Please look out on the schedule for the time and which session room
      it takes place in


   - The same links will be used throughout talks and sessions
   - (Optional) Join our Xen Summit matrix channel for updates on the day:
   https://matrix.to/#/#xen-project-summit:matrix.org

*Some ground rules to follow:*

   - Enter your full name on Jitsi so everyone knows who you are
   - Please mute yourself upon joining
   - Turning on cameras is optional, but we encourage doing this for design
   sessions
   - Do *not* shout out your questions during session presentations;
   instead, ask them in the chat and we will do our best to ask them on
   your behalf
   - During design sessions, we encourage you to unmute and participate
   freely
   - If multiple people wish to speak, please use the 'raise hand' function
   on Jitsi or chat
   - Should there be a need, moderators will have permission to remove
   anyone who is disruptive in sessions on Jitsi
   - If you face issues on the day, please let us know via Matrix - we will
   do our best to help, but please note this is a community effort

Many thanks,
Kelly Choi

Community Manager
Xen Project

--000000000000eadab2061a33dc31
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi everyone,<div><br></div><div>Please find the=C2=A0<a hr=
ef=3D"https://events.linuxfoundation.org/xen-project-summit/program/schedul=
e/" target=3D"_blank"><b>schedule=C2=A0of talks listed here.</b></a><br></d=
iv><div><ul><li style=3D"margin-left:15px">All sessions on Thursday 6th Jun=
e will use the following links:<br></li><ul><li style=3D"margin-left:15px">=
<b><a href=3D"https://meet.jit.si/XenDesignSessionsA" target=3D"_blank">DES=
IGN SESSION A</a>=C2=A0</b>(Liberdade Room)=C2=A0</li><li style=3D"margin-l=
eft:15px"><b><a href=3D"https://meet.jit.si/XenDesignSessionsB" target=3D"_=
blank">DESIGN SESSION=C2=A0B</a>=C2=A0</b>(Augusta Room)</li><li style=3D"m=
argin-left:15px">Please look out on the schedule for the time and which ses=
sion room it takes place in</li></ul></ul><ul><li style=3D"margin-left:15px=
">The same links will be used throughout talks and sessions</li><li style=
=3D"margin-left:15px">(Optional) Join our Xen Summit matrix channel for upd=
ates on the day:=C2=A0<a href=3D"https://matrix.to/#/%23xen-project-summit:=
matrix.org" target=3D"_blank">https://matrix.to/#/#xen-project-summit:matri=
x.org</a>=C2=A0</li></ul><div><div dir=3D"ltr" class=3D"gmail_signature"><d=
iv dir=3D"ltr"><div><div><b><u>Some ground rules to follow:</u></b></div><d=
iv><ul><li style=3D"margin-left:15px">Enter your full name on Jitsi so ever=
yone knows who you are</li><li style=3D"margin-left:15px">Please mute yours=
elf upon joining=C2=A0</li><li style=3D"margin-left:15px">Turning on camera=
s is optional, but we encourage doing this for design sessions</li><li styl=
e=3D"margin-left:15px">Do=C2=A0<u>not</u>=C2=A0shout out your questions dur=
ing session presentations, instead ask these on the chat function and we wi=
ll do our best to ask on behalf of you</li><li style=3D"margin-left:15px">D=
uring design sessions, we encourage you to unmute and participate freely</l=
i><li style=3D"margin-left:15px">If multiple people wish to speak, please u=
se the &#39;raise hand&#39; function on=C2=A0Jitsi or chat</li><li style=3D=
"margin-left:15px">Should there be a need, moderators will have permission =
to remove anyone who is disruptive in sessions on=C2=A0Jitsi</li><li style=
=3D"margin-left:15px">If you face issues on the day, please let us know via=
 Matrix - we will do our best to help, but please note this is a community =
effort=C2=A0</li></ul></div></div></div></div></div></div><div><div dir=3D"=
ltr" class=3D"gmail_signature" data-smartmail=3D"gmail_signature"><div dir=
=3D"ltr"><div>Many thanks,</div><div>Kelly Choi</div><div><br></div><div><d=
iv style=3D"color:rgb(136,136,136)">Community Manager</div><div style=3D"co=
lor:rgb(136,136,136)">Xen Project=C2=A0<br></div></div></div></div></div></=
div>

--000000000000eadab2061a33dc31--


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:49:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:49:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.735988.1142124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7s3-0003Pi-4o; Thu, 06 Jun 2024 07:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 735988.1142124; Thu, 06 Jun 2024 07:49:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7s3-0003PZ-0J; Thu, 06 Jun 2024 07:49:15 +0000
Received: by outflank-mailman (input) for mailman id 735988;
 Thu, 06 Jun 2024 07:49:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF7s1-0003PN-W8; Thu, 06 Jun 2024 07:49:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF7s1-00049G-Tr; Thu, 06 Jun 2024 07:49:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF7s1-0007yf-IT; Thu, 06 Jun 2024 07:49:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sF7s1-0005b8-I1; Thu, 06 Jun 2024 07:49:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gGqvg+JVUu7P/ZBL9ffUqs5PJyYKRN7g0j4uiPrUyvk=; b=zIW7vzhYwLqwcmZlM6JyshuWkI
	2GXsXdPDZIef91U6xV9CH7XpQeISVhUeRnS7XF4ebu1krnkhCtDGj36vrPeh9apcy/2law7O1DAd6
	EAD10x1G22O70Ye9gwsSL8IUlYri2HznGFwy1+oElYVOyCL/UTSY2YcjHatqtlY+rkd4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186264-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186264: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=65b0d08786888284cd1bb705c58f53a65ae443b0
X-Osstest-Versions-That:
    ovmf=b45aff0dc9cb87f316eb17a11e5d4438175d9cca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 07:49:13 +0000

flight 186264 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186264/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 65b0d08786888284cd1bb705c58f53a65ae443b0
baseline version:
 ovmf                 b45aff0dc9cb87f316eb17a11e5d4438175d9cca

Last test of basis   186262  2024-06-06 02:42:55 Z    0 days
Testing same since   186264  2024-06-06 06:12:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b45aff0dc9..65b0d08786  65b0d08786888284cd1bb705c58f53a65ae443b0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:54:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:54:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736009.1142133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7xC-0005cK-Mm; Thu, 06 Jun 2024 07:54:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736009.1142133; Thu, 06 Jun 2024 07:54:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF7xC-0005cD-K6; Thu, 06 Jun 2024 07:54:34 +0000
Received: by outflank-mailman (input) for mailman id 736009;
 Thu, 06 Jun 2024 07:54:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sF7xB-0005c7-5b
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 07:54:33 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 018c18f0-23da-11ef-b4bb-af5377834399;
 Thu, 06 Jun 2024 09:54:30 +0200 (CEST)
Received: by mail-wm1-x32b.google.com with SMTP id
 5b1f17b1804b1-421572bb0f0so8238635e9.0
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 00:54:30 -0700 (PDT)
Received: from [172.20.145.98] ([62.28.225.65])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42158102af1sm46518985e9.16.2024.06.06.00.54.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 Jun 2024 00:54:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 018c18f0-23da-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717660470; x=1718265270; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=eipaC/9NVWX7FB1JguRZeH6kHfbXdyOVG6aQyY4pq5o=;
        b=XzddSY7iUzO/SA97cZKLb8zotruO9zwrHW0qrHYwUkcAYLyi6mxcrRoKVYyzkFp/YT
         ownv+7Vn62qk8BQD5xXyhiMzoRdQJX50Rq6UUTbOae1UjkECJQ9e/Wfr9vU/4HmF/ZI0
         SixnKbhqDoPxcqws2VrJ3291TvuVo0SJA4OIwlriifwN1V2e74hY8V3SKXU9ef4GcFI6
         qluqLbILdLMVMgLNG8DK7JtVqCN5UeKPXEucd8v6HSRqKf1EtiN1yZNCuSsw7lGuXsBc
         Wln7b/VRnZ35yO5on4RoCDczxF4fSIimOmeOERFiWyvMvX9GbQq527RS4LjHnRGexR2v
         /M9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717660470; x=1718265270;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=eipaC/9NVWX7FB1JguRZeH6kHfbXdyOVG6aQyY4pq5o=;
        b=jkkbnm6Foib/YvD+0ezYgqlC6PRwjWQF562D9qJEm93q0Bc6zn4uhoEoZkPlC3006/
         HDISlmo1TzH4W0kIS/BbkR06iWiJr+FyE9LGxGgE+alVEeivbKScfzbuo1/Q6xkKDAdx
         jTasnLbHcHQHFqk8FrJtBjtLUbyhjvFzXTtGi7w7V8wfc/Ldr5QbG/nFB7x7AVq3t/Vu
         pLCos/F2FrR5KOZwRlEbeVLwpNbr9+32Xs51Q17rSbP8ocB+H0d+SMHbIROyoIqDVeZo
         gwcr/S7ON9AVyHMSu6hOQYPgYFdJu88J6txGycf7ISRlH0U2r3wiaW4TolG9lMEDsGzb
         3tyw==
X-Forwarded-Encrypted: i=1; AJvYcCXYc3r8KMtO0RJP/gqYm5yODnV8dxV5oGaJjktTxuy+KGUpdp2Ai5WXvTCGiVry7ZJGh3hcqrwE842wY7fEi+o1ZBXGTmpnPPQZ/Ojn9iQ=
X-Gm-Message-State: AOJu0Yyyx7o4tuyBvOSQvtqaKv68IifQihJOJx7xxj9xJQBK3Ls75zQ7
	mKl5zU86hi0zqNPLG5s87NpGWgnXlStuU+NPTCLln+itzeVHGqYek/w6UAGcfA==
X-Google-Smtp-Source: AGHT+IERrNaCXf1nm3lAwF6/xMkmyEt6Ap8vg+0uiBhXpsF9mjXbMXs52Ulk9St77o86Ja+NrLXQJw==
X-Received: by 2002:a05:600c:3108:b0:41f:e87b:45c2 with SMTP id 5b1f17b1804b1-4215624b7fbmr41410505e9.0.1717660470334;
        Thu, 06 Jun 2024 00:54:30 -0700 (PDT)
Message-ID: <ab57f7f3-ac54-4b41-950a-1f7bee4293ab@suse.com>
Date: Thu, 6 Jun 2024 09:54:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/cpufreq: separate powernow/hwp cpufreq code
To: Sergiy Kibrik <sergiy_kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jason Andryuk <jason.andryuk@amd.com>, xen-devel@lists.xenproject.org
References: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
 <5cb13d1a-1452-4542-b50d-23e6a9d9d3ef@suse.com>
 <c66966da-bbe3-432e-8a2f-809bf434db39@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c66966da-bbe3-432e-8a2f-809bf434db39@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 06.06.2024 09:30, Sergiy Kibrik wrote:
> 06.06.24 10:08, Jan Beulich:
>> On 04.06.2024 11:34, Sergiy Kibrik wrote:
>>> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>> @@ -657,7 +657,7 @@ static int __init cf_check cpufreq_driver_init(void)
>>>   
>>>           case X86_VENDOR_AMD:
>>>           case X86_VENDOR_HYGON:
>>> -            ret = powernow_register_driver();
>>> +            ret = IS_ENABLED(CONFIG_AMD) ? powernow_register_driver() : -ENODEV;
>>>               break;
>>>           }
>>
>> What about the Intel-specific code immediately up from here?
>> Dealing with that as well may likely permit to reduce ...
> 
> you mean to guard a call to hwp_register_driver() the same way as for 
> powernow_register_driver(), and save one stub?

Yes, and perhaps more. Maybe more stubs can be avoided? And
acpi_cpufreq_driver doesn't need registering either, and hence
would presumably be left unreferenced when !INTEL?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 07:57:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 07:57:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736014.1142143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF80O-0006C0-40; Thu, 06 Jun 2024 07:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736014.1142143; Thu, 06 Jun 2024 07:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF80O-0006Bt-1J; Thu, 06 Jun 2024 07:57:52 +0000
Received: by outflank-mailman (input) for mailman id 736014;
 Thu, 06 Jun 2024 07:57:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sF80M-0006Bn-8D
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 07:57:50 +0000
Received: from mail-wr1-x42f.google.com (mail-wr1-x42f.google.com
 [2a00:1450:4864:20::42f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7796abe9-23da-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 09:57:48 +0200 (CEST)
Received: by mail-wr1-x42f.google.com with SMTP id
 ffacd0b85a97d-35dce610207so767939f8f.2
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 00:57:48 -0700 (PDT)
Received: from [172.20.145.98] ([62.28.225.65])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35ef5d29e63sm847640f8f.17.2024.06.06.00.57.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 Jun 2024 00:57:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7796abe9-23da-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717660668; x=1718265468; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=LiSjfKscAIgkbIh+5/HtmWJ/93xp6nvqOK25vHNy3og=;
        b=gaD1BMrHABOOFCVqT1ZMm2QS9qS9HDemc/fQgo+6yGLynKcJOot/36SQPEoO8I5U9s
         9gOS/lIPldXql8v21nnxejgFBFNKiaHlEONEVxdd/vvLhc3ouXDKqMIirZrlHUhJaA7S
         tAU2L+reCm+dDdPIl8BYSBtQI8RtQ/kKG5ol7+hnkn9osw18iN2FQ+O79nNyk5GD3jde
         uL1zdX2hj7NcGsaL+PhTfvDbRs7fCgUR9upDuHno05O4rP/VHoZUEhKmu2r4KMgn0dgV
         cQFwjscIIbcY65qMNcC0+L3gk8pQjvhgn43nnC7NTIWtN2OLALcg8SjVsNUc/1uXzjPc
         w03g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717660668; x=1718265468;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=LiSjfKscAIgkbIh+5/HtmWJ/93xp6nvqOK25vHNy3og=;
        b=ADABd+xsBgsoGKEdlFY7Q3QPGIWetGxs2nuCYujX62cUF+ehdHmqayF7g0Xiap+lao
         bB11XTXwskgoGUhUBSHeORcLJk88g1EHq2nI5RxutIpg4Z7PZ7s55Tg21mt8THdngSXr
         f+I7sz3JDiR/lnZ/ywuturF30wzw/WwWEEK1o0mxJeB83yp00S1E3s7lnV8ssWN7QE4t
         O+ckLWBNMtlB9/JJmT2usjfN6thmmZVOCbn7OFjN4T8ehqBg9ZnmgdCgVgkdeMo/mkgh
         IxWn+QE+zigiQhb4H5HiSwoXm9FfZ01C9W9m6zaXUV2JJrbOjRIKE9j+fI9WLTiY4AVE
         eKHA==
X-Forwarded-Encrypted: i=1; AJvYcCVgEQznhgHATDgVL3jJkaI8REFfWU5R5giBluWtI9UJntu5cC2hCTd6TID3Ygomk9PkKih/aqHhXdD0UGuJ8LK2Eqnk4G/Qz13DHC2/ZFg=
X-Gm-Message-State: AOJu0YxEWu37QqPxFawQiPWEfdmBVKiSCJm7Bzj4OHaeK4jM+w4/hUkS
	s1AA89b8ScYuL5Vl6R2/tYjtvNwqzTPIRO5BaxkRR+pXovEGkdFJkur+lz+3Oz84Y6JCNXXGmCY
	=
X-Google-Smtp-Source: AGHT+IEc0Jq3aAP8S7dwx87ffsQIZv9tJumymC80qkr4UerYfp5GLyBFA74vfMUPbnDHujvAJpuF8w==
X-Received: by 2002:adf:f601:0:b0:354:f489:faf with SMTP id ffacd0b85a97d-35e8406dd8emr4514038f8f.1.1717660668369;
        Thu, 06 Jun 2024 00:57:48 -0700 (PDT)
Message-ID: <8936b5ef-1ef7-4606-9f19-c75287aa88fa@suse.com>
Date: Thu, 6 Jun 2024 09:57:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 09/10] xen/x86: Disallow creating domains
 with altp2m enabled and altp2m.nr == 0
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <d6fd97b66b5f1a974e317c9d3f72fb139b39118f.1717356829.git.w1benny@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d6fd97b66b5f1a974e317c9d3f72fb139b39118f.1717356829.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 02.06.2024 22:04, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> Since libxl finally sends the altp2m.nr value, we can remove the previously
> introduced temporary workaround.
> 
> Creating a domain with altp2m enabled while setting altp2m.nr == 0 doesn't
> make sense, and it's probably not what the user wants.

Yet: Do we need separate count and flag anymore? Can't "nr != 0"
take the place of the flag being true?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 08:37:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 08:37:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736026.1142153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF8cC-0005Tt-1M; Thu, 06 Jun 2024 08:36:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736026.1142153; Thu, 06 Jun 2024 08:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF8cB-0005Tm-V8; Thu, 06 Jun 2024 08:36:55 +0000
Received: by outflank-mailman (input) for mailman id 736026;
 Thu, 06 Jun 2024 08:36:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PrXz=NI=gmail.com=xadimgnik@srs-se1.protection.inumbo.net>)
 id 1sF8cA-0005Tg-IF
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 08:36:54 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ecc18d80-23df-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 10:36:53 +0200 (CEST)
Received: by mail-wm1-x332.google.com with SMTP id
 5b1f17b1804b1-4210aa012e5so8856205e9.0
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 01:36:53 -0700 (PDT)
Received: from [192.168.0.151] (54-240-197-239.amazon.com. [54.240.197.239])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c2c690dsm13781485e9.34.2024.06.06.01.36.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 Jun 2024 01:36:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecc18d80-23df-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717663012; x=1718267812; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:organization:content-language
         :references:cc:to:subject:reply-to:user-agent:mime-version:date
         :message-id:from:from:to:cc:subject:date:message-id:reply-to;
        bh=rzOlTpGXOFDC46rae4mb64yBb7uEDZ0xP/1eKZGmBUw=;
        b=FrxzL4GhLm10bhjOnz/7cvkbWOyzrN5DmuBsLeJaI+Nrp6ZuNPHt076egcBXVrTSZX
         duQif5UbtlfxuUdpt/2M3UCbqZ42CxrN3qIDpGEIeY3MVyzMJSKJ5m3AAwAlMWuCQ8Rn
         /o0Xft/wgaqDtI0IzQXl1oybgrtfLeDXh0Hjb9SVyG+OIaOsoBCBziMO6TUWMAeiB+oN
         jaubWJJ8QWSi4nBHjRUAw/hcw9sOO/664jXJFwYL5+8wsNKx3itF7eyAUt+ah4GAMyq5
         jAk3KCyNKLrd8X91iCaxrnz305Sr6lRZyRmAOuFIcqch3+U6wIin8QKxvXX2PRxQtInv
         n2gA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717663012; x=1718267812;
        h=content-transfer-encoding:in-reply-to:organization:content-language
         :references:cc:to:subject:reply-to:user-agent:mime-version:date
         :message-id:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=rzOlTpGXOFDC46rae4mb64yBb7uEDZ0xP/1eKZGmBUw=;
        b=kcJe2BjnUdR/NwxZd6EMpOxRIVesGRnFJ/MCH68rhMkjdSHiXQMDRgP6aMonviT8ms
         53aejD0ada75rQxWeHFlNxC7tSIgjl8g59/pQ1xcMfO9EJMksg/jxy+H14hM6A7FT97S
         H6KK7gXl1gj7GAdTKV8SF7oSyYDfKD+8NV4GdLuCsGi9ub29qJ+LYC8toTtsqdG0v0X5
         OAf6PtAKKNag+uK1n5ypfh+E8QHXNKzDeOWMVwMMU8g27Wr7r3H3PmTFaX008UT4oJsy
         zmtgJzs9kS5pb9lJCo+TFOckDPoiFf7hLUVc5O2s1pnoryI7no6lJgAOrRrNeLkS7KRA
         XN1A==
X-Forwarded-Encrypted: i=1; AJvYcCVEzk9pQIue3A9pm5FSB++wWDqE6PGbGgdmYaSwgK6ZI1bCzzJ+HVfiYs5/nZAGG/xAHe2ZdKVx0p3wNQ+wsTJ+u63J2JFmP7zfEM/JeKE=
X-Gm-Message-State: AOJu0YzJ/PXblyi3if1PW55bpUw8eKiPGD+GeVBx/0luNJGXxrO64ad5
	oq0wYpVeZh9zmTRGEJS8iYS5F3wjtNf8XTH0J1WxYxGVtOuocy+P
X-Google-Smtp-Source: AGHT+IGzT54QNK7IhOhsW9oQdhMM/tkMLq7skxG0VwQqxfW7PvYNgJZbnLqz09Ajr4xdIhZI5tb7+w==
X-Received: by 2002:a05:600c:524c:b0:421:4f34:3ada with SMTP id 5b1f17b1804b1-4215632dd1bmr43362575e9.32.1717663012050;
        Thu, 06 Jun 2024 01:36:52 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Message-ID: <434fea2b-c7e9-42b3-bc1c-27ef811d0027@xen.org>
Date: Thu, 6 Jun 2024 09:36:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Reply-To: paul@xen.org
Subject: Re: [PATCH v3 3/3] ui+display: rename is_buffer_shared() ->
 surface_is_allocated()
To: Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Marc-Andr=C3=A9_Lureau?= <marcandre.lureau@redhat.com>,
 xen-devel@lists.xenproject.org, Anthony PERARD <anthony@xenproject.org>,
 =?UTF-8?Q?Marc-Andr=C3=A9_Lureau?= <marcandre.lureau@gmail.com>
References: <20240605131444.797896-1-kraxel@redhat.com>
 <20240605131444.797896-4-kraxel@redhat.com>
Content-Language: en-US
Organization: Xen Project
In-Reply-To: <20240605131444.797896-4-kraxel@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 05/06/2024 14:14, Gerd Hoffmann wrote:
> The boolean return value is reversed, to align with QEMU_ALLOCATED_FLAG, so
> all callers must be adapted.  Also rename the share_surface variable in
> vga_draw_graphic() to reduce confusion.
> 
> No functional change.
> 
> Suggested-by: Marc-André Lureau <marcandre.lureau@gmail.com>
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>   include/ui/surface.h    |  4 ++--
>   hw/display/qxl-render.c |  2 +-
>   hw/display/vga.c        | 20 ++++++++++----------
>   hw/display/xenfb.c      |  5 +++--
>   ui/console.c            |  3 ++-
>   5 files changed, 18 insertions(+), 16 deletions(-)
> 

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu Jun 06 08:39:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 08:39:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736030.1142164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF8eW-00062L-EU; Thu, 06 Jun 2024 08:39:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736030.1142164; Thu, 06 Jun 2024 08:39:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF8eW-00062E-Bc; Thu, 06 Jun 2024 08:39:20 +0000
Received: by outflank-mailman (input) for mailman id 736030;
 Thu, 06 Jun 2024 08:39:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dWw=NI=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sF8eV-000628-S7
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 08:39:19 +0000
Received: from pb-smtp20.pobox.com (pb-smtp20.pobox.com [173.228.157.52])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 42654c64-23e0-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 10:39:18 +0200 (CEST)
Received: from pb-smtp20.pobox.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id E126920F91;
 Thu,  6 Jun 2024 04:39:15 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp20.sea.icgroup.com (unknown [127.0.0.1])
 by pb-smtp20.pobox.com (Postfix) with ESMTP id D949320F90;
 Thu,  6 Jun 2024 04:39:15 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.99])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp20.pobox.com (Postfix) with ESMTPSA id 5727920F8F;
 Thu,  6 Jun 2024 04:39:12 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42654c64-23e0-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:mime-version:content-transfer-encoding;
	 s=sasl; bh=fqBvSAEw8XtwtZ3SAZ6vOTdrFxbzk12l4TwoY0xOxOM=; b=FxlF
	uaY3zTutb/V1OGR2JSDGrs4syzjvq/YlZWWmf4ssrwfiD50WKcIL2V0sX+jjVepS
	TN7yavkR9i9gKZ82RPpaWNjZ05JnapQOngiRQa9dDDwfNEaCNNO4qMMqvA2HIbQh
	fLisDcAoqzRf0bWOP0jU/6BYQo08PDLbHS6mtHI=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v1] x86/intel: optional build of PSR support
Date: Thu,  6 Jun 2024 11:39:08 +0300
Message-Id: <20240606083908.2510396-1-Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Pobox-Relay-ID:
 3FEA45F0-23E0-11EF-BBD9-ACC938F0AE34-90055647!pb-smtp20.pobox.com
Content-Transfer-Encoding: 8bit

The Platform Shared Resource (PSR) feature is available only on certain
Intel CPUs, hence it can be put under the CONFIG_INTEL build option.

When !INTEL, the PSR-related sysctls XEN_SYSCTL_psr_cmt_op &
XEN_SYSCTL_psr_alloc are disabled as well.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 xen/arch/x86/Makefile          |  2 +-
 xen/arch/x86/domctl.c          |  2 ++
 xen/arch/x86/include/asm/psr.h | 15 +++++++++++++++
 xen/arch/x86/sysctl.c          |  4 ++++
 4 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d902fb7acc..02218d32c5 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -57,7 +57,7 @@ obj-y += pci.o
 obj-y += percpu.o
 obj-y += physdev.o
 obj-$(CONFIG_COMPAT) += x86_64/physdev.o
-obj-y += psr.o
+obj-$(CONFIG_INTEL) += psr.o
 obj-y += setup.o
 obj-y += shutdown.o
 obj-y += smp.o
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9a72d57333..cccf71f745 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1160,6 +1160,7 @@ long arch_do_domctl(
         break;
     }
 
+#ifdef CONFIG_INTEL
    case XEN_DOMCTL_psr_cmt_op:
        if ( !psr_cmt_enabled() )
        {
@@ -1262,6 +1263,7 @@ long arch_do_domctl(
        }
 
        break;
+#endif
 
    case XEN_DOMCTL_get_cpu_policy:
        /* Process the CPUID leaves. */
diff --git a/xen/arch/x86/include/asm/psr.h b/xen/arch/x86/include/asm/psr.h
index 51df78794c..14d5d33970 100644
--- a/xen/arch/x86/include/asm/psr.h
+++ b/xen/arch/x86/include/asm/psr.h
@@ -72,6 +72,8 @@ static inline bool psr_cmt_enabled(void)
     return !!psr_cmt;
 }
 
+#ifdef CONFIG_INTEL
+
 int psr_alloc_rmid(struct domain *d);
 void psr_free_rmid(struct domain *d);
 void psr_ctxt_switch_to(struct domain *d);
@@ -86,6 +88,19 @@ int psr_set_val(struct domain *d, unsigned int socket,
 void psr_domain_init(struct domain *d);
 void psr_domain_free(struct domain *d);
 
+#else
+
+static inline void psr_ctxt_switch_to(struct domain *d)
+{
+}
+static inline void psr_domain_init(struct domain *d)
+{
+}
+static inline void psr_domain_free(struct domain *d)
+{
+}
+#endif /* CONFIG_INTEL */
+
 #endif /* __ASM_PSR_H__ */
 
 /*
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index 1d40d82c5a..947c954b42 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -32,6 +32,7 @@
 #include <asm/psr.h>
 #include <asm/cpu-policy.h>
 
+#ifdef CONFIG_INTEL
 struct l3_cache_info {
     int ret;
     unsigned long size;
@@ -46,6 +47,7 @@ static void cf_check l3_cache_get(void *arg)
     if ( !l3_info->ret )
         l3_info->size = info.size / 1024; /* in KB unit */
 }
+#endif
 
 static long cf_check smt_up_down_helper(void *data)
 {
@@ -169,6 +171,7 @@ long arch_do_sysctl(
     }
     break;
 
+#ifdef CONFIG_INTEL
    case XEN_SYSCTL_psr_cmt_op:
        if ( !psr_cmt_enabled() )
            return -ENODEV;
@@ -286,6 +289,7 @@ long arch_do_sysctl(
        }
        break;
    }
+#endif
 
    case XEN_SYSCTL_get_cpu_levelling_caps:
        sysctl->u.cpu_levelling_caps.caps = levelling_caps;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 06 09:03:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 09:03:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736040.1142173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF91M-0002xq-C7; Thu, 06 Jun 2024 09:02:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736040.1142173; Thu, 06 Jun 2024 09:02:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sF91M-0002xj-9a; Thu, 06 Jun 2024 09:02:56 +0000
Received: by outflank-mailman (input) for mailman id 736040;
 Thu, 06 Jun 2024 09:02:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF91K-0002xZ-Ui; Thu, 06 Jun 2024 09:02:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF91K-0005xk-Rm; Thu, 06 Jun 2024 09:02:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sF91K-0001LE-G0; Thu, 06 Jun 2024 09:02:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sF91K-0004TV-FR; Thu, 06 Jun 2024 09:02:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mjo+oXVyttPlzg80rMZfVxQYGn9xLU5wJgieUbz1Nck=; b=GAA4GatPeUmET8JNGNVrI8l9fM
	CqoRT8Btg+JyeQ3iKeqJMHqtju3RPj5SrHBDDiHsYxAO/ucAzLW4urVar8ZRuo4ChWNvqNqhjXrrm
	eHurpTFyPk9NZKb8QBVkVQXFgeBYsMeZRC9ZKmsJyLn3KFBnj3kmRZRN3suQI/xM/unw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186260-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186260: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2df0193e62cf887f373995fb8a91068562784adc
X-Osstest-Versions-That:
    linux=71d7b52cc33bc3b6697cce8a0a5ac9032f372e47
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 09:02:54 +0000

flight 186260 linux-linus real [real]
flight 186265 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186260/
http://logs.test-lab.xenproject.org/osstest/logs/186265/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-multivcpu  8 xen-boot           fail pass in 186265-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186265 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186265 never pass
 test-armhf-armhf-xl           8 xen-boot                     fail  like 186257
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186257
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186257
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186257
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186257
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186257
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186257
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186257
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2df0193e62cf887f373995fb8a91068562784adc
baseline version:
 linux                71d7b52cc33bc3b6697cce8a0a5ac9032f372e47

Last test of basis   186257  2024-06-05 16:10:40 Z    0 days
Testing same since   186260  2024-06-06 00:12:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ananth Narayan <Ananth.Narayan@amd.com>
  Andi Shyti <andi.shyti@kernel.org>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Ard Biesheuvel <ardb@kernel.org>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Dan Williams <dan.j.williams@intel.com>
  David Sterba <dsterba@suse.com>
  Dhananjay Ugwekar <Dhananjay.Ugwekar@amd.com>
  Filipe Manana <fdmanana@suse.com>
  Gautham R. Shenoy <gautham.shenoy@amd.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mario Limonciello <mario.limonciello@amd.com>
  Peter Jung <ptr1337@cachyos.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shuah Khan <skhan@linuxfoundation.org>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Thomas Weißschuh <linux@weissschuh.net>
  Tony Luck <tony.luck@intel.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   71d7b52cc33b..2df0193e62cf  2df0193e62cf887f373995fb8a91068562784adc -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 10:56:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 10:56:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736060.1142183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFAmb-000150-12; Thu, 06 Jun 2024 10:55:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736060.1142183; Thu, 06 Jun 2024 10:55:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFAma-00014t-Uj; Thu, 06 Jun 2024 10:55:48 +0000
Received: by outflank-mailman (input) for mailman id 736060;
 Thu, 06 Jun 2024 10:55:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFAmZ-00014j-OM; Thu, 06 Jun 2024 10:55:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFAmZ-0007vs-Jv; Thu, 06 Jun 2024 10:55:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFAmZ-0004F2-6q; Thu, 06 Jun 2024 10:55:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFAmZ-00063X-6S; Thu, 06 Jun 2024 10:55:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=O0pGJTWZoUtWfGdG5jKKPb17QTmYlK/7yrRW7WQhKvI=; b=h7i59l+6fshYeuTE1fNyaCSzNX
	BsRaNF8/dUCeoMdR3rwckV1/oYkGRtPDkLx4GgweYVGINxjhuTPtUdfpAI7V2RnkAIeSoTGO9MHDL
	hWnJl4w1TSNWsi4rKRoEROOlvwus0OLiwFzkzAM4JRyLp747dBUbgwev4uC9lieYgliE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186261-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186261: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl:host-ping-check-xen:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl:xen-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 10:55:47 +0000

flight 186261 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186261/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl       10 host-ping-check-xen fail in 186252 pass in 186241
 test-armhf-armhf-xl           7 xen-install                fail pass in 186252
 test-armhf-armhf-xl-credit2   8 xen-boot                   fail pass in 186252

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186241 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186241 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186252 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186252 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186252
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186252
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186252
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186252
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186252
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186252
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186261  2024-06-06 01:53:48 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 06 11:05:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 11:05:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736071.1142194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFAvQ-0002xR-VU; Thu, 06 Jun 2024 11:04:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736071.1142194; Thu, 06 Jun 2024 11:04:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFAvQ-0002xK-RP; Thu, 06 Jun 2024 11:04:56 +0000
Received: by outflank-mailman (input) for mailman id 736071;
 Thu, 06 Jun 2024 11:04:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dWw=NI=darkstar.site=sakib@srs-se1.protection.inumbo.net>)
 id 1sFAvP-0002xD-JN
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 11:04:55 +0000
Received: from pb-smtp2.pobox.com (pb-smtp2.pobox.com [64.147.108.71])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9965dd70-23f4-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 13:04:53 +0200 (CEST)
Received: from pb-smtp2.pobox.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 15E1024BBC;
 Thu,  6 Jun 2024 07:04:52 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp2.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 0E34924BBB;
 Thu,  6 Jun 2024 07:04:52 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [185.130.54.99])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp2.pobox.com (Postfix) with ESMTPSA id D080D24BBA;
 Thu,  6 Jun 2024 07:04:50 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9965dd70-23f4-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:mime-version:content-transfer-encoding;
	 s=sasl; bh=j4XJ7kK9wSYGSQ2+xY8WxVZQ/YBI9b9Yv5L00MvO5ts=; b=jmem
	UtFC/hA1NXCzrtBAxnof7n+EhbwGUnfgM6bUkuVWhWQ45/r1aLi07D5NCKUN3n2d
	k2dHsuaPwbkzj43LgSZq2EicztgiUb5+WQgSvzER7RzJfB9grQ7Kl2phKHwmZgAN
	0/xDdAbTuPGnL/5plrdsH/CsxvOmBMG3Q2JGW5E=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v1] x86/intel: optional build of TSX support
Date: Thu,  6 Jun 2024 14:04:48 +0300
Message-Id: <20240606110448.2540261-1-Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Pobox-Relay-ID:
 9876CF90-23F4-11EF-BAB5-6488940A682E-90055647!pb-smtp2.pobox.com
Content-Transfer-Encoding: quoted-printable

Transactional Synchronization Extensions (TSX) are available only on
certain Intel CPUs, hence the support code can be put under the
CONFIG_INTEL build option.

TSX support, even when present in the CPU, may need to be disabled via
command-line options, by microcode, or through spec-ctrl, depending on a
set of specific conditions. To make sure nothing gets accidentally broken
at runtime, all modifications of the global TSX configuration variables
are guarded by #ifdefs, while the variables themselves are redefined to
constants, so that they cannot mistakenly be written to.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 xen/arch/x86/Makefile                | 2 +-
 xen/arch/x86/include/asm/processor.h | 8 ++++++++
 xen/arch/x86/spec_ctrl.c             | 4 ++++
 3 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/Makefile b/xen/arch/x86/Makefile
index d902fb7acc..286c003ec3 100644
--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -67,7 +67,7 @@ obj-y += srat.o
 obj-y += string.o
 obj-y += time.o
 obj-y += traps.o
-obj-y += tsx.o
+obj-$(CONFIG_INTEL) += tsx.o
 obj-y += usercopy.o
 obj-y += x86_emulate.o
 obj-$(CONFIG_TBOOT) += tboot.o
diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
index c26ef9090c..8b12627ab0 100644
--- a/xen/arch/x86/include/asm/processor.h
+++ b/xen/arch/x86/include/asm/processor.h
@@ -503,9 +503,17 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
     return fam;
 }
 
+#ifdef CONFIG_INTEL
 extern int8_t opt_tsx;
 extern bool rtm_disabled;
 void tsx_init(void);
+#else
+#define opt_tsx      0     /* explicitly indicate TSX is off */
+#define rtm_disabled false /* RTM was not force-disabled */
+static inline void tsx_init(void)
+{
+}
+#endif
 
 void update_mcu_opt_ctrl(void);
 void set_in_mcu_opt_ctrl(uint32_t mask, uint32_t val);
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 40f6ae0170..6b3631e375 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -116,8 +116,10 @@ static int __init cf_check parse_spec_ctrl(const char *s)
             if ( opt_pv_l1tf_domu < 0 )
                 opt_pv_l1tf_domu = 0;
 
+#ifdef CONFIG_INTEL
             if ( opt_tsx == -1 )
                 opt_tsx = -3;
+#endif
 
         disable_common:
             opt_rsb_pv = false;
@@ -2264,6 +2266,7 @@ void __init init_speculation_mitigations(void)
      * plausibly value TSX higher than Hyperthreading...), disable TSX to
      * mitigate TAA.
      */
+#ifdef CONFIG_INTEL
     if ( opt_tsx == -1 && cpu_has_bug_taa && cpu_has_tsx_ctrl &&
          ((hw_smt_enabled && opt_smt) ||
           !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
@@ -2271,6 +2274,7 @@ void __init init_speculation_mitigations(void)
         opt_tsx = 0;
         tsx_init();
     }
+#endif
 
     /*
      * On some SRBDS-affected hardware, it may be safe to relax srb-lock by
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 06 11:12:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 11:12:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736077.1142203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFB2Z-0004lZ-KB; Thu, 06 Jun 2024 11:12:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736077.1142203; Thu, 06 Jun 2024 11:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFB2Z-0004lS-Hn; Thu, 06 Jun 2024 11:12:19 +0000
Received: by outflank-mailman (input) for mailman id 736077;
 Thu, 06 Jun 2024 11:12:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFB2Z-0004lI-6i; Thu, 06 Jun 2024 11:12:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFB2Z-0008Gx-3D; Thu, 06 Jun 2024 11:12:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFB2Y-0004dU-O5; Thu, 06 Jun 2024 11:12:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFB2Y-0002YC-NZ; Thu, 06 Jun 2024 11:12:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cIzigwMwGtB//tvFGkmzIV3EMyDAstp4wsBhy3npHVI=; b=kFr6FW/wfhjgPhjXucDLtsKPE0
	NnrQv4F3fQls4nuX5vO3xSIbT1QFt343h6OhSe0WCoa/YxrzDldCjjfTgaU8D8v9SzrQVOPD/HTaB
	TxVsX40NryLo0ImWVBvMtJq9wC1axKEq5UqpIYiIBIqZoiykE4BYCdyBnl7H8R15V9pA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186266: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=90cb1ec33225a070e9fea1d94c72ff590bd38731
X-Osstest-Versions-That:
    ovmf=65b0d08786888284cd1bb705c58f53a65ae443b0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 11:12:18 +0000

flight 186266 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186266/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 90cb1ec33225a070e9fea1d94c72ff590bd38731
baseline version:
 ovmf                 65b0d08786888284cd1bb705c58f53a65ae443b0

Last test of basis   186264  2024-06-06 06:12:55 Z    0 days
Testing same since   186266  2024-06-06 09:12:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   65b0d08786..90cb1ec332  90cb1ec33225a070e9fea1d94c72ff590bd38731 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 12:28:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 12:28:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736109.1142214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFCDi-0005f4-E3; Thu, 06 Jun 2024 12:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736109.1142214; Thu, 06 Jun 2024 12:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFCDi-0005ex-Af; Thu, 06 Jun 2024 12:27:54 +0000
Received: by outflank-mailman (input) for mailman id 736109;
 Thu, 06 Jun 2024 12:27:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFCDg-0005en-Nq; Thu, 06 Jun 2024 12:27:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFCDg-0001GS-Ke; Thu, 06 Jun 2024 12:27:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFCDg-0006PX-9f; Thu, 06 Jun 2024 12:27:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFCDg-0008HK-9O; Thu, 06 Jun 2024 12:27:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5TQQO5pylxSPmTdMxkSE/TrArB6Lk2lOCqAzDXDxvTI=; b=LZXQjSGqK+GovO4tClTknTYnMC
	BXsOMZcaQHZXMO2Hhl/BNJCdLk58xX+uFWYsstgndf7Tu3bd0M2OfwDxTpSu/VKdL/Qw2Zy+zT48I
	4UUeZrxUfe32hoC7WnvWimhUOthvPyzqWHHWpoaz/zMIGWFJjn/IxitWlbVsgJpkxCQo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186263-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186263: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=9d0c8618db599c407d47a8a6af881708608cdcd9
X-Osstest-Versions-That:
    libvirt=4381b83d991b51a07ba5b6d3f56e6c0a8910a38d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 12:27:52 +0000

flight 186263 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186263/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186255
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              9d0c8618db599c407d47a8a6af881708608cdcd9
baseline version:
 libvirt              4381b83d991b51a07ba5b6d3f56e6c0a8910a38d

Last test of basis   186255  2024-06-05 04:20:30 Z    1 days
Testing same since   186263  2024-06-06 04:20:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   4381b83d99..9d0c8618db  9d0c8618db599c407d47a8a6af881708608cdcd9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 16:00:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 16:00:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736138.1142224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFFX5-0004WR-5a; Thu, 06 Jun 2024 16:00:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736138.1142224; Thu, 06 Jun 2024 16:00:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFFX5-0004WK-2G; Thu, 06 Jun 2024 16:00:07 +0000
Received: by outflank-mailman (input) for mailman id 736138;
 Thu, 06 Jun 2024 16:00:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tbpc=NI=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFFX3-0004RJ-N0
 for xen-devel@lists.xenproject.org; Thu, 06 Jun 2024 16:00:05 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d66b4794-241d-11ef-90a2-e314d9c70b13;
 Thu, 06 Jun 2024 18:00:04 +0200 (CEST)
Received: by mail-wm1-x332.google.com with SMTP id
 5b1f17b1804b1-4215fc19abfso5959455e9.3
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 09:00:04 -0700 (PDT)
Received: from [172.20.145.98] ([62.28.225.65])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35ef5e983e3sm1887078f8f.83.2024.06.06.09.00.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 Jun 2024 09:00:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d66b4794-241d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717689604; x=1718294404; darn=lists.xenproject.org;
        h=content-transfer-encoding:subject:from:cc:to:content-language
         :user-agent:mime-version:date:message-id:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/dEI8GtrXaOrV70aXcdMBloGENMJVqUl/hbJDplseHA=;
        b=QiFcAxKVk490rcCJ0DiQqxUxV+dYLCfeUVY12+V99O4COs+J+CaAzIpKkdjwGUnU+u
         QU1Pkm8z3eX//L7t+ANXChYLcv7LM8XzV5w2CECOra4+hOfyqVPiiYsMFo66OfBqGCsR
         k3s8cNtpeO+YAcKdYaVdQvU0mw/ZVCRHWkTgy6uCPYFE/WlEzZUgJRi2YHkA22GoiB2V
         0a4XXPlk4KrnxzNBJMvFwlVEsu983EtuofRZY0PAC8OjopwH3gwKV+s4nuNX000lLfUH
         umVViFw8aqtHBtpIz7/P/BCLnEtCheUuXO1bBHOc71+ymPnCgZ0/LhiLWf6HWwIcBhMg
         Tc9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717689604; x=1718294404;
        h=content-transfer-encoding:subject:from:cc:to:content-language
         :user-agent:mime-version:date:message-id:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=/dEI8GtrXaOrV70aXcdMBloGENMJVqUl/hbJDplseHA=;
        b=IhFcfhTNe/WIewLBwiwTGQGVI8J8Gv2Os+CcWH5uyuqyfWuYUXeyHCfMllKdWojikq
         7IQ6BsrSngJuiMv/BP6rtywnzfa7q1Wa5mYnggxTXqfVctixaWsN5bKbD+MzBHIia/e4
         kOfgAy1kA9oM/AS4nmrtYMs5pHT4Fe1jYbUQjqurr1WwaEteNAZOVR3lairZ2nU5BA5/
         EI48KgxaBLbOE+mSTkpi/iII8AmKQgPbpACklBHr+DlsvNBY5HI1upvJirItUkVasM/3
         5g+Q0pxLeMExwp+1n8ESmVkCiNdibH0LfTWQl053JMTNjVTMs6UvQkhrfqCkZI8HSyOM
         4spw==
X-Gm-Message-State: AOJu0YxgATest5Q5QS4EaRR6aTXU0V8E63YfsRD4kD2cJW6X8XA9HjyV
	rFK+m2gjDt1oRSiJ+H0eyera5qYG3SrdvWhPwAYpjD35GxK0URmu9WxUeitxag==
X-Google-Smtp-Source: AGHT+IGL1nwvzI0W6hB8Fp/Bhu78C0jW9piL7o8qR48b840k8AfP6Cteu5TlR1K/ho0kWpo1CJs+Sw==
X-Received: by 2002:adf:ec0f:0:b0:354:de8f:daa0 with SMTP id ffacd0b85a97d-35efedcbd63mr26654f8f.53.1717689603652;
        Thu, 06 Jun 2024 09:00:03 -0700 (PDT)
Message-ID: <bd6bd37c-3fb5-4353-a760-5c4465bf7582@suse.com>
Date: Thu, 6 Jun 2024 18:00:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [for-4.19] x86 emulator session outcome
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Oleksii,

a decision of the session that just finished was to deprecate support
for XeonPhi in 4.19, with the firm plan to remove support in 4.20.
This will want putting in the release notes.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 06 17:00:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 Jun 2024 17:00:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736145.1142234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFGSw-0002Fx-60; Thu, 06 Jun 2024 16:59:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736145.1142234; Thu, 06 Jun 2024 16:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFGSw-0002Fq-35; Thu, 06 Jun 2024 16:59:54 +0000
Received: by outflank-mailman (input) for mailman id 736145;
 Thu, 06 Jun 2024 16:59:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFGSu-0002Fg-A4; Thu, 06 Jun 2024 16:59:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFGSu-0006oU-7u; Thu, 06 Jun 2024 16:59:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFGSt-00040K-Uc; Thu, 06 Jun 2024 16:59:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFGSt-0004js-U4; Thu, 06 Jun 2024 16:59:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KQKaVYNd9ZrOEeTkGeZCquNWxXDqEpSoufDVkJRQkSQ=; b=M+6J4R2GvTYIh2AdkJ0hbRxdzR
	hhHbxCmLx8cteZ+hl+AN599FbQ/bZdP5uvSxNLm/2ogrPaYsdnGb7Y+TOZV6d0AoKJX090SFE+UPk
	OuKB/CuTLCHjM/I8JbeW/FH3Pa17EYEnITLijU67KowXPK+Tb8+768bsX2fJ0NOxomPM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186267: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=71606314f80500ff0849f66553fad0da11bf4beb
X-Osstest-Versions-That:
    ovmf=90cb1ec33225a070e9fea1d94c72ff590bd38731
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 Jun 2024 16:59:51 +0000

flight 186267 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186267/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 71606314f80500ff0849f66553fad0da11bf4beb
baseline version:
 ovmf                 90cb1ec33225a070e9fea1d94c72ff590bd38731

Last test of basis   186266  2024-06-06 09:12:51 Z    0 days
Testing same since   186267  2024-06-06 15:14:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wenxing Hou <wenxing.hou@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   90cb1ec332..71606314f8  71606314f80500ff0849f66553fad0da11bf4beb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 00:42:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 00:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736184.1142244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFNgi-0001OQ-Un; Fri, 07 Jun 2024 00:42:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736184.1142244; Fri, 07 Jun 2024 00:42:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFNgi-0001OI-Pu; Fri, 07 Jun 2024 00:42:36 +0000
Received: by outflank-mailman (input) for mailman id 736184;
 Fri, 07 Jun 2024 00:42:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFNgh-0001O8-OA; Fri, 07 Jun 2024 00:42:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFNgh-0007Xo-KC; Fri, 07 Jun 2024 00:42:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFNgh-0001Rm-6n; Fri, 07 Jun 2024 00:42:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFNgh-00046A-6K; Fri, 07 Jun 2024 00:42:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U+WpC/eRlvGk9h2zrlJQeh9xOB1uqQ+n79rnIfQtiTM=; b=f1hKGV2dAJpkBCgizz72S1fidv
	IvweWKwHZMnz6FJ41EyAPTeECXknXCeeTD/EtbitpMVcXMDWPBqwNyxu5otEktwaHm6h/xNhB4tm9
	BxqP6EnRYaDfrgv+Gf+AHBag162CP32kIaZWanle4NkzqnPlYulw29KgArOmqjCWFoeU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186268: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:host-ping-check-xen:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d30d0e49da71de8df10bf3ff1b3de880653af562
X-Osstest-Versions-That:
    linux=2df0193e62cf887f373995fb8a91068562784adc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 00:42:35 +0000

flight 186268 linux-linus real [real]
flight 186269 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186268/
http://logs.test-lab.xenproject.org/osstest/logs/186269/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 186269-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     10 host-ping-check-xen      fail REGR. vs. 186260

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186269 like 186260
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186269 never pass
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186260
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186260
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186260
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186260
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186260
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186260
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d30d0e49da71de8df10bf3ff1b3de880653af562
baseline version:
 linux                2df0193e62cf887f373995fb8a91068562784adc

Last test of basis   186260  2024-06-06 00:12:08 Z    1 days
Testing same since   186268  2024-06-06 17:10:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aditya Kumar Singh <quic_adisi@quicinc.com>
  Aleksandr Mishin <amishin@t-argos.ru>
  Alexei Starovoitov <ast@kernel.org>
  Alexis Lothoré <alexis.lothore@bootlin.com>
  Andrii Nakryiko <andrii@kernel.org>
  Antonio Quartulli <a@unstable.cc>
  Ard Biesheuvel <ardb@kernel.org>
  Ayala Beker <ayala.beker@intel.com>
  Baochen Qiang <quic_bqiang@quicinc.com>
  Benjamin Berg <benjamin.berg@intel.com>
  Bitterblue Smith <rtl8821cerfe2@gmail.com>
  Breno Leitao <leitao@debian.org>
  Carl Huang <quic_cjhuang@quicinc.com>
  Chandan Kumar Rout <chandanx.rout@intel.com>
  Chris Maness <christopher.maness@gmail.com>
  Cong Wang <cong.wang@bytedance.com>
  Dan Cross <crossd@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  David S. Miller <davem@davemloft.net>
  DelphineCCChiu <delphine_cc_chiu@wiwynn.com>
  Dmitry Antipov <dmantipov@yandex.ru>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Safonov <0x7f454c46@gmail.com>
  Duoming Zhou <duoming@zju.edu.cn>
  Emmanuel Grumbach <emmanuel.grumbach@intel.com>
  Eric Dumazet <edumazet@google.com>
  Frank Wunderlich <frank-w@public-files.de>
  Geliang Tang <geliang@kernel.org>
  Guilherme G. Piccoli <gpiccoli@igalia.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Hangyu Hua <hbh25y@gmail.com>
  Heng Qi <hengqi@linux.alibaba.com>
  Huacai Chen <chenhuacai@loongson.cn>
  Ilan Peer <ilan.peer@intel.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Jason Xing <kernelxing@tencent.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jiri Olsa <jolsa@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Kalle Valo <kvalo@kernel.org>
  Kalle Valo <quic_kvalo@quicinc.com>
  Karol Kolacinski <karol.kolacinski@intel.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Lars Kellogg-Stedman <lars@oddbit.com>
  Larysa Zaremba <larysa.zaremba@intel.com>
  Lin Ma <linma@zju.edu.cn>
  Lingbo Kong <quic_lingbok@quicinc.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luca Weiss <luca.weiss@fairphone.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Matthias Stocker <mstocker@barracuda.com>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  Michael S. Tsirkin <mst@redhat.com>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Mordechay Goodstein <mordechay.goodstein@intel.com>
  Moshe Shemesh <moshe@nvidia.com>
  Naama Meir <naamax.meir@linux.intel.com>
  Nathan Chancellor <nathan@kernel.org>
  Nicolas Escande <nico.escande@gmail.com>
  Paolo Abeni <pabeni@redhat.com>
  Paul Greenwalt <paul.greenwalt@intel.com>
  Peter Geis <pgwipeout@gmail.com>
  Ping-Ke Shih <pkshih@realtek.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com>
  Remi Pommarel <repk@triplefau.lt>
  Richard Cochran <richardcochran@gmail.com>
  Sasha Neftin <sasha.neftin@intel.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Sergey Ryazanov <ryazanov.s.a@gmail.com>
  Shahar S Matityahu <shahar.s.matityahu@intel.com>
  Shaul Triebitz <shaul.triebitz@intel.com>
  Shay Drory <shayd@nvidia.com>
  Simon Horman <horms@kernel.org>
  Su Hui <suhui@nfschina.com>
  Subbaraya Sundeep <sbhatta@marvell.com>
  Taehee Yoo <ap420073@gmail.com>
  Tariq Toukan <tariqt@nvidia.com>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thorsten Blum <thorsten.blum@toblux.com>
  Tristram Ha <tristram.ha@microchip.com>
  Vadim Fedorenko <vadfed@meta.com>
  Vinicius Costa Gomes <vinicius.gomes@intel.com>
  Wen Gu <guwen@linux.alibaba.com>
  Yedidya Benshimol <yedidya.ben.shimol@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   2df0193e62cf..d30d0e49da71  d30d0e49da71de8df10bf3ff1b3de880653af562 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 03:15:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 03:15:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736200.1142254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFQ4b-0008AD-AJ; Fri, 07 Jun 2024 03:15:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736200.1142254; Fri, 07 Jun 2024 03:15:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFQ4b-0008A6-7D; Fri, 07 Jun 2024 03:15:25 +0000
Received: by outflank-mailman (input) for mailman id 736200;
 Fri, 07 Jun 2024 03:15:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DKnY=NJ=gmail.com=mhkelley58@srs-se1.protection.inumbo.net>)
 id 1sFQ4a-00089x-7p
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 03:15:24 +0000
Received: from mail-pl1-x62d.google.com (mail-pl1-x62d.google.com
 [2607:f8b0:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 131d08fe-247c-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 05:14:39 +0200 (CEST)
Received: by mail-pl1-x62d.google.com with SMTP id
 d9443c01a7336-1f480624d0dso16069085ad.1
 for <xen-devel@lists.xenproject.org>; Thu, 06 Jun 2024 20:14:39 -0700 (PDT)
Received: from localhost.localdomain (c-67-161-114-176.hsd1.wa.comcast.net.
 [67.161.114.176]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f6bd7f7ec1sm23027405ad.271.2024.06.06.20.14.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 Jun 2024 20:14:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 131d08fe-247c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717730078; x=1718334878; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:reply-to:message-id:date
         :subject:to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=S6sq1v/HYVth5hgL6aHLmT9s4dMG30DKkez74RK4Byw=;
        b=PWxn4Z+pKTxIhSiebQ1DqW9mRhmews7Dx4SUqPtkLQvldlNO/MxJgHWzVdfVyJNREt
         RSbGG9eavUSuxErbZgrqNQWoYjSKgqeXvAh9hr6eIx/6pbyk2sq3ezbOxVVwdS+FY5KS
         c0H7zDWVeLbK1LIezeCWNAMLQfBsUQW9vYct2DuJhveBS/OmnFUcOK7jjL8Yiie/O0kP
         COPBw2jgTIXJ3c3G6mYb3ZxjqBxroXD7Jq+RDf4E5PwMH5sVLGnp+sbEe9B1ewuO1Y+K
         NiBHke2NMr/jYUJp7DuomdSGYBRkuo23iNBhLwsd9E/V/lX2QCMH20/ownYXVTd4xf5K
         d7cw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717730078; x=1718334878;
        h=content-transfer-encoding:mime-version:reply-to:message-id:date
         :subject:to:from:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=S6sq1v/HYVth5hgL6aHLmT9s4dMG30DKkez74RK4Byw=;
        b=ntcr00o1ujoE1A4/2wk6Ki4vobMCPtPo7h3mreqDKXcqx0tL/9gaIMGK5LtVfJzJta
         90mxnrIz0R0icIOLpWzLtNGaz24ZnOiWHT4FtCxbr13sQlBW66wdeRuZdiOoIxLW7TXv
         Vr4ozG4nwSg/iTCZibkh3jctnq/WOGiaQKhKI+Gs7I2aglxnyU1h5rt47tDlXDS7HPdp
         Wnk3VmGDx70Q6jsJN/q7hUPpPCwKKcGcu9EH/yvTp50wWOTItg2KjCU29rkx4kjyCN7V
         UrQSz4vXk1Xqj7mn/X9CrL0ZilLN6hvRatOd1K9rScjSxSE765+GJV0wF1YaOfUZBiEX
         q8ZA==
X-Forwarded-Encrypted: i=1; AJvYcCWrQ4AfnyT07SUBX+Q9kSeXsbqcTm0oGESD3/ScijOqTUqyntfMFLD5Xg5MfhK5yCXG6O9yfuy6taHK2l/nGPPW5KavwMNGlA0rSb5Fp/w=
X-Gm-Message-State: AOJu0YwpGj9Ke6EtTHTXQXlcvlD6Ej27+k3aBLqj3Spf8w16k2+Z4lfP
	y/5HTi68X5DYQyv1Q5AKKCbey2pjC76tcHEdi4K/h9Ux74kvZ71r
X-Google-Smtp-Source: AGHT+IGVLbOop05kbtCd0vA2QzX1cxZyTlSfbXjA3G457nSL5A9bMH4Wmsybo0Hjsa4Y5lzdKheiWA==
X-Received: by 2002:a17:902:ea0b:b0:1f6:92f1:b01c with SMTP id d9443c01a7336-1f6d03b9604mr16907795ad.69.1717730077750;
        Thu, 06 Jun 2024 20:14:37 -0700 (PDT)
From: mhkelley58@gmail.com
X-Google-Original-From: mhklinux@outlook.com
To: robin.murphy@arm.com,
	joro@8bytes.org,
	will@kernel.org,
	jgross@suse.com,
	sstabellini@kernel.org,
	oleksandr_tyshchenko@epam.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	petr@tesarici.cz,
	iommu@lists.linux.dev,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Date: Thu,  6 Jun 2024 20:14:21 -0700
Message-Id: <20240607031421.182589-1-mhklinux@outlook.com>
X-Mailer: git-send-email 2.25.1
Reply-To: mhklinux@outlook.com
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Michael Kelley <mhklinux@outlook.com>

With CONFIG_SWIOTLB_DYNAMIC enabled, each round-trip map/unmap pair
in the swiotlb results in 6 calls to swiotlb_find_pool(). In multiple
places, the pool is found and used in one function, and then must be
found again in the next function that is called, because only the
tlb_addr is passed as an argument. These are the six call sites:

dma_direct_map_page:
1. swiotlb_map->swiotlb_tbl_map_single->swiotlb_bounce

dma_direct_unmap_page:
2. dma_direct_sync_single_for_cpu->is_swiotlb_buffer
3. dma_direct_sync_single_for_cpu->swiotlb_sync_single_for_cpu->
	swiotlb_bounce
4. is_swiotlb_buffer
5. swiotlb_tbl_unmap_single->swiotlb_del_transient
6. swiotlb_tbl_unmap_single->swiotlb_release_slots

Reduce the number of calls by finding the pool at a higher level, and
passing it as an argument instead of searching again. A key change is
for is_swiotlb_buffer() to return a pool pointer instead of a boolean,
and then pass this pool pointer to subsequent swiotlb functions.
With these changes, a round-trip map/unmap pair requires only 2 calls
to swiotlb_find_pool():

dma_direct_unmap_page:
1. dma_direct_sync_single_for_cpu->is_swiotlb_buffer
2. is_swiotlb_buffer

These changes come from noticing the inefficiencies in a code review,
not from performance measurements. With CONFIG_SWIOTLB_DYNAMIC,
swiotlb_find_pool() is not trivial, and it uses an RCU read lock,
so avoiding the redundant calls helps performance in a hot path.
When CONFIG_SWIOTLB_DYNAMIC is *not* set, the code size reduction
is minimal and the perf benefits are likely negligible, but no
harm is done.

No functional change is intended.

Signed-off-by: Michael Kelley <mhklinux@outlook.com>
---
This patch makes many of the core swiotlb APIs take an additional
argument in order to avoid duplicating calls to
swiotlb_find_pool(). The current code seems rather wasteful in
making 6 calls per round-trip, but I'm happy to accept others'
judgment as to whether eliminating the waste is worth the
additional code complexity.

 drivers/iommu/dma-iommu.c | 27 ++++++++++++++------
 drivers/xen/swiotlb-xen.c | 25 +++++++++++-------
 include/linux/swiotlb.h   | 54 +++++++++++++++++++++------------------
 kernel/dma/direct.c       | 12 ++++++---
 kernel/dma/direct.h       | 18 ++++++++-----
 kernel/dma/swiotlb.c      | 43 ++++++++++++++++---------------
 6 files changed, 106 insertions(+), 73 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index f731e4b2a417..ab6bc37ecf90 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -1073,6 +1073,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
 	phys_addr_t phys;
+	struct io_tlb_pool *pool;
 
 	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
 		return;
@@ -1081,21 +1082,25 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(dev, phys))
-		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
+	pool = is_swiotlb_buffer(dev, phys);
+	if (pool)
+		swiotlb_sync_single_for_cpu(dev, phys, size, dir, pool);
 }
 
 static void iommu_dma_sync_single_for_device(struct device *dev,
 		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
 {
 	phys_addr_t phys;
+	struct io_tlb_pool *pool;
 
 	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(dev, phys))
-		swiotlb_sync_single_for_device(dev, phys, size, dir);
+
+	pool = is_swiotlb_buffer(dev, phys);
+	if (pool)
+		swiotlb_sync_single_for_device(dev, phys, size, dir, pool);
 
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_device(phys, size, dir);
@@ -1189,8 +1194,12 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
 		arch_sync_dma_for_device(phys, size, dir);
 
 	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
-		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
+	if (iova == DMA_MAPPING_ERROR) {
+		struct io_tlb_pool *pool = is_swiotlb_buffer(dev, phys);
+
+		if (pool)
+			swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs, pool);
+	}
 	return iova;
 }
 
@@ -1199,6 +1208,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 {
 	struct iommu_domain *domain = iommu_get_dma_domain(dev);
 	phys_addr_t phys;
+	struct io_tlb_pool *pool;
 
 	phys = iommu_iova_to_phys(domain, dma_handle);
 	if (WARN_ON(!phys))
@@ -1209,8 +1219,9 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
 
 	__iommu_dma_unmap(dev, dma_handle, size);
 
-	if (unlikely(is_swiotlb_buffer(dev, phys)))
-		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
+	pool = is_swiotlb_buffer(dev, phys);
+	if (unlikely(pool))
+		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs, pool);
 }
 
 /*
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 6579ae3f6dac..7af8c8466e1d 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -88,7 +88,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 	return 0;
 }
 
-static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
+static struct io_tlb_pool *is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 {
 	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
 	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
 		return is_swiotlb_buffer(dev, paddr);
-	return 0;
+	return NULL;
 }
 
 #ifdef CONFIG_X86
@@ -228,7 +228,8 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 */
 	if (unlikely(!dma_capable(dev, dev_addr, size, true))) {
 		swiotlb_tbl_unmap_single(dev, map, size, dir,
-				attrs | DMA_ATTR_SKIP_CPU_SYNC);
+				attrs | DMA_ATTR_SKIP_CPU_SYNC,
+				swiotlb_find_pool(dev, map));
 		return DMA_MAPPING_ERROR;
 	}
 
@@ -254,6 +255,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	phys_addr_t paddr = xen_dma_to_phys(hwdev, dev_addr);
+	struct io_tlb_pool *pool;
 
 	BUG_ON(dir == DMA_NONE);
 
@@ -265,8 +267,9 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 	}
 
 	/* NOTE: We use dev_addr here, not paddr! */
-	if (is_xen_swiotlb_buffer(hwdev, dev_addr))
-		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs);
+	pool = is_xen_swiotlb_buffer(hwdev, dev_addr);
+	if (pool)
+		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs, pool);
 }
 
 static void
@@ -274,6 +277,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
 	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
+	struct io_tlb_pool *pool;
 
 	if (!dev_is_dma_coherent(dev)) {
 		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
@@ -282,8 +286,9 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 			xen_dma_sync_for_cpu(dev, dma_addr, size, dir);
 	}
 
-	if (is_xen_swiotlb_buffer(dev, dma_addr))
-		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
+	pool = is_xen_swiotlb_buffer(dev, dma_addr);
+	if (pool)
+		swiotlb_sync_single_for_cpu(dev, paddr, size, dir, pool);
 }
 
 static void
@@ -291,9 +296,11 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
 	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
+	struct io_tlb_pool *pool;
 
-	if (is_xen_swiotlb_buffer(dev, dma_addr))
-		swiotlb_sync_single_for_device(dev, paddr, size, dir);
+	pool = is_xen_swiotlb_buffer(dev, dma_addr);
+	if (pool)
+		swiotlb_sync_single_for_device(dev, paddr, size, dir, pool);
 
 	if (!dev_is_dma_coherent(dev)) {
 		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 14bc10c1bb23..ce8651949123 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -42,24 +42,6 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
 	int (*remap)(void *tlb, unsigned long nslabs));
 extern void __init swiotlb_update_mem_attributes(void);
 
-phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
-		size_t mapping_size,
-		unsigned int alloc_aligned_mask, enum dma_data_direction dir,
-		unsigned long attrs);
-
-extern void swiotlb_tbl_unmap_single(struct device *hwdev,
-				     phys_addr_t tlb_addr,
-				     size_t mapping_size,
-				     enum dma_data_direction dir,
-				     unsigned long attrs);
-
-void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
-		size_t size, enum dma_data_direction dir);
-void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
-		size_t size, enum dma_data_direction dir);
-dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
-		size_t size, enum dma_data_direction dir, unsigned long attrs);
-
 #ifdef CONFIG_SWIOTLB
 
 /**
@@ -168,12 +150,12 @@ static inline struct io_tlb_pool *swiotlb_find_pool(struct device *dev,
  * * %true if @paddr points into a bounce buffer
  * * %false otherwise
  */
-static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
+static inline struct io_tlb_pool *is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	if (!mem)
-		return false;
+		return NULL;
 
 #ifdef CONFIG_SWIOTLB_DYNAMIC
 	/*
@@ -187,10 +169,13 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	 * This barrier pairs with smp_mb() in swiotlb_find_slots().
 	 */
 	smp_rmb();
-	return READ_ONCE(dev->dma_uses_io_tlb) &&
-		swiotlb_find_pool(dev, paddr);
+	if (READ_ONCE(dev->dma_uses_io_tlb))
+		return swiotlb_find_pool(dev, paddr);
+	return NULL;
 #else
-	return paddr >= mem->defpool.start && paddr < mem->defpool.end;
+	if (paddr >= mem->defpool.start && paddr < mem->defpool.end)
+		return &mem->defpool;
+	return NULL;
 #endif
 }
 
@@ -201,6 +186,25 @@ static inline bool is_swiotlb_force_bounce(struct device *dev)
 	return mem && mem->force_bounce;
 }
 
+phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
+		size_t mapping_size,
+		unsigned int alloc_aligned_mask, enum dma_data_direction dir,
+		unsigned long attrs);
+
+extern void swiotlb_tbl_unmap_single(struct device *hwdev,
+				     phys_addr_t tlb_addr,
+				     size_t mapping_size,
+				     enum dma_data_direction dir,
+				     unsigned long attrs,
+				     struct io_tlb_pool *pool);
+
+void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool);
+void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
+		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool);
+dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
+		size_t size, enum dma_data_direction dir, unsigned long attrs);
+
 void swiotlb_init(bool addressing_limited, unsigned int flags);
 void __init swiotlb_exit(void);
 void swiotlb_dev_init(struct device *dev);
@@ -219,9 +223,9 @@ static inline void swiotlb_dev_init(struct device *dev)
 {
 }
 
-static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
+static inline struct io_tlb_pool *is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	return false;
+	return NULL;
 }
 static inline bool is_swiotlb_force_bounce(struct device *dev)
 {
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 4d543b1e9d57..50689afb0ffd 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -399,14 +399,16 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 		struct scatterlist *sgl, int nents, enum dma_data_direction dir)
 {
 	struct scatterlist *sg;
+	struct io_tlb_pool *pool;
 	int i;
 
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(dev, paddr)))
+		pool = is_swiotlb_buffer(dev, paddr);
+		if (unlikely(pool))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
-						       dir);
+						       dir, pool);
 
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_device(paddr, sg->length,
@@ -422,6 +424,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		struct scatterlist *sgl, int nents, enum dma_data_direction dir)
 {
 	struct scatterlist *sg;
+	struct io_tlb_pool *pool;
 	int i;
 
 	for_each_sg(sgl, sg, nents, i) {
@@ -430,9 +433,10 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(dev, paddr)))
+		pool = is_swiotlb_buffer(dev, paddr);
+		if (unlikely(pool))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
-						    dir);
+						    dir, pool);
 
 		if (dir == DMA_FROM_DEVICE)
 			arch_dma_mark_clean(paddr, sg->length);
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 18d346118fe8..72aa65558e07 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -57,9 +57,11 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 		dma_addr_t addr, size_t size, enum dma_data_direction dir)
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
+	struct io_tlb_pool *pool;
 
-	if (unlikely(is_swiotlb_buffer(dev, paddr)))
-		swiotlb_sync_single_for_device(dev, paddr, size, dir);
+	pool = is_swiotlb_buffer(dev, paddr);
+	if (unlikely(pool))
+		swiotlb_sync_single_for_device(dev, paddr, size, dir, pool);
 
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_device(paddr, size, dir);
@@ -69,14 +71,16 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		dma_addr_t addr, size_t size, enum dma_data_direction dir)
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
+	struct io_tlb_pool *pool;
 
 	if (!dev_is_dma_coherent(dev)) {
 		arch_sync_dma_for_cpu(paddr, size, dir);
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(dev, paddr)))
-		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
+	pool = is_swiotlb_buffer(dev, paddr);
+	if (unlikely(pool))
+		swiotlb_sync_single_for_cpu(dev, paddr, size, dir, pool);
 
 	if (dir == DMA_FROM_DEVICE)
 		arch_dma_mark_clean(paddr, size);
@@ -117,12 +121,14 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
 	phys_addr_t phys = dma_to_phys(dev, addr);
+	struct io_tlb_pool *pool;
 
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(dev, phys)))
+	pool = is_swiotlb_buffer(dev, phys);
+	if (unlikely(pool))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir,
-					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
+					 attrs | DMA_ATTR_SKIP_CPU_SYNC, pool);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index fe1ccb53596f..59b3e333651d 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -855,9 +855,8 @@ static unsigned int swiotlb_align_offset(struct device *dev,
  * Bounce: copy the swiotlb buffer from or back to the original dma location
  */
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
-			   enum dma_data_direction dir)
+			   enum dma_data_direction dir, struct io_tlb_pool *mem)
 {
-	struct io_tlb_pool *mem = swiotlb_find_pool(dev, tlb_addr);
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -1435,13 +1434,13 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * hardware behavior.  Use of swiotlb is supposed to be transparent,
 	 * i.e. swiotlb must not corrupt memory by clobbering unwritten bytes.
 	 */
-	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
+	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE, pool);
 	return tlb_addr;
 }
 
-static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
+static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr,
+				  struct io_tlb_pool *mem)
 {
-	struct io_tlb_pool *mem = swiotlb_find_pool(dev, tlb_addr);
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(dev, 0, tlb_addr);
 	int index, nslots, aindex;
@@ -1505,11 +1504,9 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
  *
  * Return: %true if @tlb_addr belonged to a transient pool that was released.
  */
-static bool swiotlb_del_transient(struct device *dev, phys_addr_t tlb_addr)
+static bool swiotlb_del_transient(struct device *dev, phys_addr_t tlb_addr,
+				  struct io_tlb_pool *pool)
 {
-	struct io_tlb_pool *pool;
-
-	pool = swiotlb_find_pool(dev, tlb_addr);
 	if (!pool->transient)
 		return false;
 
@@ -1522,7 +1519,8 @@ static bool swiotlb_del_transient(struct device *dev, phys_addr_t tlb_addr)
 #else  /* !CONFIG_SWIOTLB_DYNAMIC */
 
 static inline bool swiotlb_del_transient(struct device *dev,
-					 phys_addr_t tlb_addr)
+					 phys_addr_t tlb_addr,
+					 struct io_tlb_pool *pool)
 {
 	return false;
 }
@@ -1534,34 +1532,34 @@ static inline bool swiotlb_del_transient(struct device *dev,
  */
 void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+			      unsigned long attrs, struct io_tlb_pool *pool)
 {
 	/*
 	 * First, sync the memory before unmapping the entry
 	 */
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE, pool);
 
-	if (swiotlb_del_transient(dev, tlb_addr))
+	if (swiotlb_del_transient(dev, tlb_addr, pool))
 		return;
-	swiotlb_release_slots(dev, tlb_addr);
+	swiotlb_release_slots(dev, tlb_addr, pool);
 }
 
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
-		size_t size, enum dma_data_direction dir)
+		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool)
 {
 	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
-		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
+		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE, pool);
 	else
 		BUG_ON(dir != DMA_FROM_DEVICE);
 }
 
 void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
-		size_t size, enum dma_data_direction dir)
+		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool)
 {
 	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
-		swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE);
+		swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE, pool);
 	else
 		BUG_ON(dir != DMA_TO_DEVICE);
 }
@@ -1586,7 +1584,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
 	dma_addr = phys_to_dma_unencrypted(dev, swiotlb_addr);
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
 		swiotlb_tbl_unmap_single(dev, swiotlb_addr, size, dir,
-			attrs | DMA_ATTR_SKIP_CPU_SYNC);
+			attrs | DMA_ATTR_SKIP_CPU_SYNC,
+			swiotlb_find_pool(dev, swiotlb_addr));
 		dev_WARN_ONCE(dev, 1,
 			"swiotlb addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
 			&dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
@@ -1774,11 +1773,13 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
 bool swiotlb_free(struct device *dev, struct page *page, size_t size)
 {
 	phys_addr_t tlb_addr = page_to_phys(page);
+	struct io_tlb_pool *pool;
 
-	if (!is_swiotlb_buffer(dev, tlb_addr))
+	pool = is_swiotlb_buffer(dev, tlb_addr);
+	if (!pool)
 		return false;
 
-	swiotlb_release_slots(dev, tlb_addr);
+	swiotlb_release_slots(dev, tlb_addr, pool);
 
 	return true;
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 03:29:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 03:29:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736206.1142263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFQIP-0001UN-GN; Fri, 07 Jun 2024 03:29:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736206.1142263; Fri, 07 Jun 2024 03:29:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFQIP-0001UG-Di; Fri, 07 Jun 2024 03:29:41 +0000
Received: by outflank-mailman (input) for mailman id 736206;
 Fri, 07 Jun 2024 03:29:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFQIO-0001U6-5k; Fri, 07 Jun 2024 03:29:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFQIN-0001Ts-WF; Fri, 07 Jun 2024 03:29:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFQIN-0006H6-KI; Fri, 07 Jun 2024 03:29:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFQIN-0001RX-Jk; Fri, 07 Jun 2024 03:29:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=G8c9vK0jbNdGQ8dWgj7uRZ1WcPhDTimREtCS/tcp1dg=; b=OCjzn8dz2pTQ8Xw5tsSR7bEYeM
	HESjmyfd4w3NTu/aFwh6w6yoHCIu7okFTdBUeQ/5K0hI+IAAkWt1VfUXqMvZ49YmMkJ9QLUq9ttW4
	pkV2EHrcE9OyKMsyAviH6VVUtdpCvPkWACMKjF2ZTRd+bMAFtbJWLlU6cHW36bWtzfnQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186271-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186271: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f9c2f2fa0fd92f94d6c20292f37d5302762cad66
X-Osstest-Versions-That:
    ovmf=71606314f80500ff0849f66553fad0da11bf4beb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 03:29:39 +0000

flight 186271 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186271/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f9c2f2fa0fd92f94d6c20292f37d5302762cad66
baseline version:
 ovmf                 71606314f80500ff0849f66553fad0da11bf4beb

Last test of basis   186267  2024-06-06 15:14:32 Z    0 days
Testing same since   186271  2024-06-07 01:41:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   71606314f8..f9c2f2fa0f  f9c2f2fa0fd92f94d6c20292f37d5302762cad66 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 06:04:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 06:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736220.1142273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFShy-0002Gc-CX; Fri, 07 Jun 2024 06:04:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736220.1142273; Fri, 07 Jun 2024 06:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFShy-0002GV-A8; Fri, 07 Jun 2024 06:04:14 +0000
Received: by outflank-mailman (input) for mailman id 736220;
 Fri, 07 Jun 2024 06:04:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFShw-0002GI-MD; Fri, 07 Jun 2024 06:04:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFShw-0005B6-KD; Fri, 07 Jun 2024 06:04:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFShw-0006Rk-4B; Fri, 07 Jun 2024 06:04:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFShw-0000G6-3X; Fri, 07 Jun 2024 06:04:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IXjSIVB3ttvQDVEyDjCmTY530/6JOiCr3ZxZqM7bEjk=; b=0G3FI+h8KS808ok8DK4leje7cO
	If/dx5Me32ZFKpXd1uj8v0dLI0R5M65OGnKqDR+PSHWPhGJ13IEJSDL5Ro7TYIMCpoVnuYzFw2rOm
	X0Q7xtQ9xeJAUJrv5wjPkJCpOIy5YTcac2zOZhNmut8twqeEDbfaao2HKdOhbup8m5TM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186273: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=80b59ff8320d1bd134bf689fe9c0ddf4e0473b88
X-Osstest-Versions-That:
    ovmf=f9c2f2fa0fd92f94d6c20292f37d5302762cad66
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 06:04:12 +0000

flight 186273 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186273/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 80b59ff8320d1bd134bf689fe9c0ddf4e0473b88
baseline version:
 ovmf                 f9c2f2fa0fd92f94d6c20292f37d5302762cad66

Last test of basis   186271  2024-06-07 01:41:10 Z    0 days
Testing same since   186273  2024-06-07 04:13:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Oliver Steffen <osteffen@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f9c2f2fa0f..80b59ff832  80b59ff8320d1bd134bf689fe9c0ddf4e0473b88 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:01:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:01:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736231.1142283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFTbS-0000YM-K8; Fri, 07 Jun 2024 07:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736231.1142283; Fri, 07 Jun 2024 07:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFTbS-0000YF-Hc; Fri, 07 Jun 2024 07:01:34 +0000
Received: by outflank-mailman (input) for mailman id 736231;
 Fri, 07 Jun 2024 07:01:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFTbQ-0000Y3-P1
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:01:32 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3383466-249b-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 09:01:28 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57a44c2ce80so1962046a12.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 00:01:28 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57aae0db916sm2269777a12.35.2024.06.07.00.01.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 00:01:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3383466-249b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717743688; x=1718348488; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=2TnzugukcPNWIZEkDdBk97PVqyuzruiroUpyxHvT3XE=;
        b=ZQVG2eS5U2vpVuOK4GB8ln2hJODWvQkaU3SuaTFMj3D6uJEzKCu5iiOvFDvW4mkh/4
         6VXpDuzyKyTuw5RaGuHdK8oZVZ4PktaT8b3TIGgD4RO1uh++RZM5XZXOKo7GDR+/jNI9
         Ha21FqnxjjuwFMcIse7dDjvEV5mv7XY1eZr/cQ+OViURvufk21xMlzMUwCIBZedWMlZs
         ACSxtvZutEVPvdOsjWWXszSN2hRzAL1Z3yfqj/3WOQXGBgm6nx32TyMjWdNzyLMmpQuL
         1fBNWfDlOCddywF7CmQHZIAsOAgGgEYqFZDLpzQRVBvaV+qH/EPlytz89hVwGmCTq0qL
         ngQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717743688; x=1718348488;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=2TnzugukcPNWIZEkDdBk97PVqyuzruiroUpyxHvT3XE=;
        b=vpFTH2+fmWBYXw+zbRiuAhK2ARxsdUm4vaLgXdIg4HG7sADDy6bcwa8AzdCyMbL/Eg
         ceDBxKNINXKK2mENBvmN4/7nFCk4enoIgqOM489Na3qwM23/xqdVS37XgP9fO0SmUp2i
         p4XxLUG3O6TtNaI3xR8FbXJUucqXsQVaWVRCc2dxFj/tddTkbUAEAIBGW+ZGLxdYYWH0
         OvSbpGjdd3nxUgtvR8hfGKjUfmZ3oRMeF6L3jsNVGZGcmRCHg1jcqJNWSKdF1HUespmi
         qMrO++2mV/Ydhoh6e8fv2tkSNITSvmkofMg1yljeyLnwV73BJ8qey8jXbnjw3wsYAJQi
         KxlA==
X-Forwarded-Encrypted: i=1; AJvYcCUV7WD6mRkKiLLz7TfWh+TiwnnsSxIT3S01EYEvZZtVKKkkZyxNL+qTglDmyJl7ONf7yQ8rDJUAYxRIcgsd2xYi7kuhqqdKqZugNCHXpEU=
X-Gm-Message-State: AOJu0Yy8gVn9Z71BtVaW5VmsxiuwoWz1jITUlIbDiCR5IQfzbR+xNGP3
	c33L68epOoAfuoyKwC5L+IXH2dXMF3EAD9nrNW/VSjIQXkB52F7dCbtzd+XfBA==
X-Google-Smtp-Source: AGHT+IGOt5/Cq1f6ttx1qaeWdzW59G9LMskWyRJa7pSs+IyWCbHGsugGZg17FtDfvRySXGsBPwXzKQ==
X-Received: by 2002:a50:8ad7:0:b0:57a:76d4:d890 with SMTP id 4fb4d7f45d1cf-57c509a64d3mr750227a12.41.1717743687996;
        Fri, 07 Jun 2024 00:01:27 -0700 (PDT)
Message-ID: <d2ce1c48-fd95-47f9-b821-8e01d5006e8e@suse.com>
Date: Fri, 7 Jun 2024 09:01:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 22.05.2024 17:39, Marek Marczykowski-Górecki wrote:
> --- a/xen/arch/x86/include/asm/mm.h
> +++ b/xen/arch/x86/include/asm/mm.h
> @@ -522,9 +522,34 @@ extern struct rangeset *mmio_ro_ranges;
>  void memguard_guard_stack(void *p);
>  void memguard_unguard_stack(void *p);
>  
> +/*
> + * Add more precise r/o marking for a MMIO page. Range specified here
> + * will still be R/O, but the rest of the page (not marked as R/O via another
> + * call) will have writes passed through.
> + * The start address and the size must be aligned to MMIO_RO_SUBPAGE_GRAN.
> + *
> + * This API cannot be used for overlapping ranges, nor for pages already added
> + * to mmio_ro_ranges separately.
> + *
> + * Since there is currently no subpage_mmio_ro_remove(), relevant device should
> + * not be hot-unplugged.

Yet there are no guarantees whatsoever. I think we should refuse
hot-unplug attempts (not just here, but also e.g. for an EHCI
controller that we use the debug feature of), but doing so would
likely require coordination with Dom0. Nothing to be done right
here, of course.

> + * Return values:
> + *  - negative: error
> + *  - 0: success
> + */
> +#define MMIO_RO_SUBPAGE_GRAN 8
> +int subpage_mmio_ro_add(paddr_t start, size_t size);
> +#ifdef CONFIG_HVM
> +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
> +#endif

I'd suggest to omit the #ifdef here. The declaration alone doesn't
hurt, and the #ifdef harms readability (if only a bit).

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -150,6 +150,17 @@ bool __read_mostly machine_to_phys_mapping_valid;
>  
>  struct rangeset *__read_mostly mmio_ro_ranges;
>  
> +/* Handling sub-page read-only MMIO regions */
> +struct subpage_ro_range {
> +    struct list_head list;
> +    mfn_t mfn;
> +    void __iomem *mapped;
> +    DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
> +};
> +
> +static LIST_HEAD(subpage_ro_ranges);

With modifications all happening from __init code, this likely wants
to be LIST_HEAD_RO_AFTER_INIT() (which would need introducing, to
parallel LIST_HEAD_READ_MOSTLY()).

> +int __init subpage_mmio_ro_add(
> +    paddr_t start,
> +    size_t size)
> +{
> +    mfn_t mfn_start = maddr_to_mfn(start);
> +    paddr_t end = start + size - 1;
> +    mfn_t mfn_end = maddr_to_mfn(end);
> +    unsigned int offset_end = 0;
> +    int rc;
> +    bool subpage_start, subpage_end;
> +
> +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> +        size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);

I'm puzzled: You first check suitable alignment and then adjust size
to have suitable granularity. Either it is a mistake to call the
function with a bad size, or it is not. If it's a mistake, the
release build alternative to the assertion would be to return an
error. If it's not a mistake, the assertion ought to go away.

If the assertion is to stay, then I'll further question why the
other one doesn't also have release build safety fallback logic.

> +    if ( !size )
> +        return 0;
> +
> +    if ( mfn_eq(mfn_start, mfn_end) )
> +    {
> +        /* Both starting and ending parts handled at once */
> +        subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> +        subpage_end = false;
> +    }
> +    else
> +    {
> +        subpage_start = PAGE_OFFSET(start);
> +        subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
> +    }

Since you calculate "end" before adjusting "size", the logic here
depends on there being the assertion further up.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:25:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:25:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736297.1142317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFTyI-0003oh-Nl; Fri, 07 Jun 2024 07:25:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736297.1142317; Fri, 07 Jun 2024 07:25:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFTyI-0003oa-LE; Fri, 07 Jun 2024 07:25:10 +0000
Received: by outflank-mailman (input) for mailman id 736297;
 Fri, 07 Jun 2024 07:25:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFTyH-0003oU-7F
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:25:09 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1098eec4-249f-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 09:25:07 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-4215ac379fdso17145215e9.3
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 00:25:07 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4216b398fd8sm9591025e9.23.2024.06.07.00.25.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 00:25:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1098eec4-249f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717745106; x=1718349906; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=XVrIoBIptPdTqxav+phPDj10YQpzRTktNuWTKABvlOU=;
        b=LeQFUuWjoSHEMSyoDi2aVFVyYdng6IfJ6Aw5kOypBsMOyH952kAcCt6Qh1mQFJ4jXr
         Pk4btFMiWetzxt38TPja3Gsm9HD+3INg+jv20Iogkfb3lV3leQONbz1VV0RmGMfPmv2/
         cCqvealup139A3j7SPE+FeLIeSLUTFvhkaW6GAhGuJef87QAVJqNuk+TpC6PGHd9gPAd
         2eCgNuH3Z3+xrxMsfTv6NCBZXE1L7UB0xoZ69DUTvEFlqgZaujKec2QHTdEzznsDGl0C
         lps7ywJ8nnD3JZbU5rVzL7oF50+s0g5DqltRBgMnocPPzP2pPsKHrFXOlob9iy4tsWjN
         nmOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717745106; x=1718349906;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=XVrIoBIptPdTqxav+phPDj10YQpzRTktNuWTKABvlOU=;
        b=sZ3swtP29PJ4lgKKJQ2iJXAe5p+K8KHpspPPbb2TuZquObUJNg4dWvy61wesVXl1AV
         ZV4WiC2Uovc20guwssLIKMQ6rQDaloMQKPlIw461qP+zK+k2XfHoRBEWcfWBrXfqYzWn
         xmv9Eei24grHhlGpQpsTgRhNpqFvwVbRuIFPsbQBLQmzaGGYGuvv0RnGQ0diI04NZ03/
         spK01C3Pr9Xauk7BxvQiLGkmkraYSxidNYhKfSfVTWOif1TF2szNq0POsUC4ZdgHOP2U
         praI38OOTY7ff1VFyQ7t3cOx0elf7FPcaJxbAY1jzPBMM8j6OvZK6I/TtYHcBUFW6fVL
         fd3Q==
X-Forwarded-Encrypted: i=1; AJvYcCXchgzJO3hTcsYSBXeEJ+ieGUtO93qEo1kXs1EW8U/spCFzhKutcNgKur9UQSCxFh7BwAscUAvumZnYYSwoMHibgLn/e/mwm3+7JDaYCes=
X-Gm-Message-State: AOJu0YzfK7Jwro8pZ8rmMdDzT+wt/FmDIN7ijH18OqPL78ZN+1Dp+Rmx
	LCFoxoPwnVDIlUbqxUKdBiAjnHgvnveICyxHc6mkcuEglBEkIJHYwRCDbmsULQ==
X-Google-Smtp-Source: AGHT+IFac/sceiIusDUvpOuRVCoxkNuzRoQAOooXFeVtDpx+Tdm7VI5MK3Gb7z+SEazebT9QSaE5ag==
X-Received: by 2002:a05:600c:4f86:b0:420:1125:dd79 with SMTP id 5b1f17b1804b1-42164a2eed8mr18927465e9.31.1717745106462;
        Fri, 07 Jun 2024 00:25:06 -0700 (PDT)
Message-ID: <7493c91c-070b-4828-a7f9-15d618d49ce5@suse.com>
Date: Fri, 7 Jun 2024 09:25:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 02/16] x86/altp2m: check if altp2m active when
 giving away p2midx
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <9306d31184b8e714c3a10ccc6a2b2c6a80777ddb.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9306d31184b8e714c3a10ccc6a2b2c6a80777ddb.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:09, Sergiy Kibrik wrote:
> @@ -38,9 +34,13 @@ static inline bool altp2m_active(const struct domain *d)
>  }
>  
>  /* Only declaration is needed. DCE will optimise it out when linking. */
> -uint16_t altp2m_vcpu_idx(const struct vcpu *v);
>  void altp2m_vcpu_disable_ve(struct vcpu *v);
>  
>  #endif
>  
> +static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
> +{
> +    return altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0;
> +}

While perhaps okay this way as a first step, my general expectation
would be that with ALTP2M=n there also wouldn't be any p2midx field
in the respective struct. Which in turn will mean that this code
would need re-doing again, and perhaps again splitting between an
inline one and a decl-only one. With that I wonder whether that split
wouldn't better be retained right away.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:33:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736303.1142327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFU5r-0005Ui-C7; Fri, 07 Jun 2024 07:32:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736303.1142327; Fri, 07 Jun 2024 07:32:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFU5r-0005Ub-9A; Fri, 07 Jun 2024 07:32:59 +0000
Received: by outflank-mailman (input) for mailman id 736303;
 Fri, 07 Jun 2024 07:32:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFU5q-0005UF-6q
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:32:58 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2880a403-24a0-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 09:32:56 +0200 (CEST)
Received: by mail-wr1-x434.google.com with SMTP id
 ffacd0b85a97d-35e4d6f7c5cso1744579f8f.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 00:32:56 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35efa97b15dsm2334341f8f.81.2024.06.07.00.32.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 00:32:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2880a403-24a0-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717745576; x=1718350376; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=jn3+J2hdR5Zt8nxW6//+WadLEYLKgqpbjzuBkN2w4Nw=;
        b=c0gyQeZdwae40IByf+PlgtH9sLaTHrnu4/hVW6Rxmx0xWNtIO9D+CpEnr2awU6NAEA
         B6z7wsa/0rHlSRv3oMDgYPq+xCRlWWWWgub8+85Uq5r8zHl0BM/Yg+ZHmJqW2kZoVLSv
         gFudpy1orUROiEKMAl6XJiWh+z6zxvN+dJ52HM/8zAXNzX61QRcO0FI1cDHLb+QZGa8S
         gCU26Nr1zfViGK0zvRo+UhP9G+zbRGd0E+Z1QgEgr61pIfkp3FrR9R8fkPgTKruIdzDI
         n7ABOF+Tt9mIrISXtvv9fDH13EAO2xrYYRmaljioe6WWO9QGAtAwdrXbN1WyfWBVc4ld
         rMXg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717745576; x=1718350376;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=jn3+J2hdR5Zt8nxW6//+WadLEYLKgqpbjzuBkN2w4Nw=;
        b=jCQ7klAYX/ZnTDF3adruxiZQ8uL+Tuy2NDf+tqBYdFi/aZxsOXwGYaFrCpjcUP16kR
         h4E4yHWX3Rl5x1njJN7+1hBpQNh0PymtEu8PQkZoFC9/wobQAkyyMww7uvtZ39vZnofm
         30ENSWymd1f1TvoBgM2GF2WzsQVnTMk6cOCynSOScQFb8MwLIYjksT38rcss9NIt5A4C
         pgcdAWI7YAH3f8hbe9irlFrMWGC+iS0IBYUJRmycJ5RpPe7xYdaGgN8XTOlxn4dJ7MdM
         zyVRnTTmefRGwRIWNU7UgBAZUiN8B3uLnAd8//cjkX51NGbqJiJ7gm049mpkKt5ljh9a
         g7QQ==
X-Forwarded-Encrypted: i=1; AJvYcCViiHlZjxVTLY+LRTgc8ts9TTCiCVXVukzylDnVHOKvE891WHRTt/7M/vDqoP4GRui3S+ShemRSzZAoYZmIfI5JyTVGYYzBqqTo1h2PHVQ=
X-Gm-Message-State: AOJu0YyhE/CSZ/ho/N+UlH6gAh/zKaatApOTCgihRVoQIR40dFRRK7ow
	bJgy+rX2AiHILp1sxux7HKa3Ysf0ZL6b+wwBGUp8MTD6PYjpxEgWfESoTMRkBA==
X-Google-Smtp-Source: AGHT+IGgdxa9bxzRdsBn9yD3bv+gx4uaZl/dnd8CZ6toQjuyK4IwQIoShp2NxNFXWOjEcOHSgWggmg==
X-Received: by 2002:a05:6000:4594:b0:35d:bfe3:1817 with SMTP id ffacd0b85a97d-35efed3f93amr1083501f8f.21.1717745575915;
        Fri, 07 Jun 2024 00:32:55 -0700 (PDT)
Message-ID: <ade6ba3c-5a53-463c-97fd-34f6ec7dacaf@suse.com>
Date: Fri, 7 Jun 2024 09:32:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 03/16] x86/p2m: guard altp2m routines
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <acb98c1c52613555a59cd27aad853a24caef0e19.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <acb98c1c52613555a59cd27aad853a24caef0e19.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:11, Sergiy Kibrik wrote:
> Initialize and bring down altp2m only when it is supported by the platform,
> e.g. VMX. Also guard p2m_altp2m_propagate_change().
> The purpose of that is the possibility to disable altp2m support and exclude its
> code from the build completely, when it's not supported by the target platform.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Acked-by: Jan Beulich <jbeulich@suse.com>

I question though whether Stefano's R-b was valid to retain with ...

> ---
> changes in v3:
>  - put hvm_altp2m_supported() first
>  - rewrite changes to p2m_init() with less code

... this.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:39:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:39:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736308.1142338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUBg-00066c-15; Fri, 07 Jun 2024 07:39:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736308.1142338; Fri, 07 Jun 2024 07:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUBf-00066V-TF; Fri, 07 Jun 2024 07:38:59 +0000
Received: by outflank-mailman (input) for mailman id 736308;
 Fri, 07 Jun 2024 07:38:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UIad=NJ=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sFUBe-000666-L4
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:38:58 +0000
Received: from mail-ot1-x332.google.com (mail-ot1-x332.google.com
 [2607:f8b0:4864:20::332])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fef7b0d0-24a0-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 09:38:57 +0200 (CEST)
Received: by mail-ot1-x332.google.com with SMTP id
 46e09a7af769-6f95be3d4c4so95926a34.3
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 00:38:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fef7b0d0-24a0-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1717745936; x=1718350736; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lsiNEyc4kBw0pFyZ3blWgvfjWbD3I7zepT775YZ1EEE=;
        b=AgtUmA1VvwMj90VKV5XzNjng5xfk9CxA61jAkUOjP7R2Z7+3UKDC4m3yuhJ+s9N6TI
         b+iFpu7jJBcSycyKs2Z6UojSht21GtQmam43ZBjLngymv/z/YEuIedEUIWbLRmauPsGW
         9ejlbrX7sqR6dCWjfK3M+SChWie4tk2NTo/E/Qm/n0yotVazpPKNW64vdoW4s0LBn6OY
         y6618ZghFvN20tVPx9BP4n5B2BsZ8miO2XncOqW4Z0i9YzgqOTZaBjEHG6JLJ6Kdn4hS
         VrD5/78RenmXaJxxlrEEl9+Ly6r4mp7SRlTpoV/ADR2qkl8SC7gKanf+DeoC3sij0o2c
         oBrg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717745936; x=1718350736;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lsiNEyc4kBw0pFyZ3blWgvfjWbD3I7zepT775YZ1EEE=;
        b=G8ClMQmGqKAxTZPKPlp/wVIyJBdq1TkoXYiV2MJ5nlkxs268khbILEANsw5SNQEuap
         U5gxm3BJBDnNofcYC2G90e58slobicvuJQQjpQSqEjZ+z5xLcLZvpiZ9gvITE2lIZcZg
         kw+kmd0t9VYxjg99RF7dUyMCFuq2XjthcBZ/EBerDqmVNdNnnnfnXoq17oFlCTzMKqDU
         m9w6rqcSMK+NsHGIHKcbCjh7KSJto6NRV8+gL3NmyI5LYQ+vv5l4WXHPACyR32yjjkl2
         JjGVcbTHm94lVRpEmX6rvppvrsKNo+NRGModttzGb100bNOOXjgW3PrWpjMs+FfV4LUD
         sVag==
X-Forwarded-Encrypted: i=1; AJvYcCWKugq9ShS2flVjan9/cP60/YHLyS/HDB20NR+IJ4WoaKwh4vEk2hkzXXSKq+8jdjSpnIC9JMPl2pNi2SMaOWAs5AOhZpVMZxfIzsoA/N0=
X-Gm-Message-State: AOJu0YxhS0XIgafAiJHBZbW/KPeZlr56qGYG+7WrkCHKVNV4lQzLuGbo
	GsmYqJXlFa6y1i5LiujscssbWrsqzKxlPHz0W9SAGAA84CTNLhepK/tNJNC6rRQp3MHIJK6XrBB
	sYNED6bEYAtVhiSIZeBgCXox84IpfJnCEFGVV0exS27RzaGt1xq0=
X-Google-Smtp-Source: AGHT+IHuebJdmBNnwhEQYKmWSkqKL0GtkCPRmAyUZLrSnL2n/yvLkfSedSstjMhbtI75z2F3TF3olfIGAZhTqGQWB34=
X-Received: by 2002:a05:6830:3d17:b0:6f0:e793:1325 with SMTP id
 46e09a7af769-6f9571e5715mr1733503a34.2.1717745935551; Fri, 07 Jun 2024
 00:38:55 -0700 (PDT)
MIME-Version: 1.0
References: <20240529072559.2486986-1-jens.wiklander@linaro.org>
 <20240529072559.2486986-8-jens.wiklander@linaro.org> <C52D6A7C-1136-4BF1-9060-600157F641F5@arm.com>
 <CAHUa44GRNQV4X61YPZTxO+tkkwJS9hoqQ07U9vP1k6n1zUt9rQ@mail.gmail.com>
 <39045a8f-ea18-4264-b540-66645751d27d@xen.org> <CAHUa44Hrm7p9MyTwsp+XU+EAMPXb+bi0a7P8sbhsvz2Tobozow@mail.gmail.com>
 <ad94bed4-42a1-4c59-afc1-a542c9a406ea@xen.org> <D56844C2-D602-4109-BF9D-6FCD59B532EC@arm.com>
In-Reply-To: <D56844C2-D602-4109-BF9D-6FCD59B532EC@arm.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Fri, 7 Jun 2024 09:38:43 +0200
Message-ID: <CAHUa44E0hZCJ9+eL160H3kv0QVwFDt5wvWLRYfDj9V+a7TcGmg@mail.gmail.com>
Subject: Re: [XEN PATCH v5 7/7] xen/arm: ffa: support notification
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Julien Grall <julien@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>, 
	"patches@linaro.org" <patches@linaro.org>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, 
	Stefano Stabellini <sstabellini@kernel.org>, Michal Orzel <michal.orzel@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hi,

On Wed, Jun 5, 2024 at 12:45 PM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
> Hi,
>
> > On 4 Jun 2024, at 12:24, Julien Grall <julien@xen.org> wrote:
> >
> >
> >
> > On 03/06/2024 10:50, Jens Wiklander wrote:
> >> Hi Julien,
> >
> > Hi Jens,
> >
> >
> >> On Mon, Jun 3, 2024 at 11:12 AM Julien Grall <julien@xen.org> wrote:
> >>>
> >>> Hi Jens,
> >>>
> >>> On 03/06/2024 10:01, Jens Wiklander wrote:
> >>>> On Fri, May 31, 2024 at 4:28 PM Bertrand Marquis
> >>>> <Bertrand.Marquis@arm.com> wrote:
> >>>>>
> >>>>> Hi Jens,
> >>>>>
> >>>>>> On 29 May 2024, at 09:25, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> >>>>>>
> >>>>>> Add support for FF-A notifications, currently limited to an SP (Secure
> >>>>>> Partition) sending an asynchronous notification to a guest.
> >>>>>>
> >>>>>> Guests and Xen itself are made aware of pending notifications with an
> >>>>>> interrupt. The interrupt handler triggers a tasklet to retrieve the
> >>>>>> notifications using the FF-A ABI and deliver them to their destinations.
> >>>>>>
> >>>>>> Update ffa_partinfo_domain_init() to return error code like
> >>>>>> ffa_notif_domain_init().
> >>>>>>
> >>>>>> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
> >>>>>> ---
> >>>>>> v4->v5:
> >>>>>> - Move the freeing of d->arch.tee to the new TEE mediator free_domain_ctx
> >>>>>>   callback to make the context accessible during rcu_lock_domain_by_id()
> >>>>>>   from a tasklet
> >>>>>> - Initialize interrupt handlers for secondary CPUs from the new TEE mediator
> >>>>>>   init_interrupt() callback
> >>>>>> - Restore the ffa_probe() from v3, that is, remove the
> >>>>>>   presmp_initcall(ffa_init) approach and use ffa_probe() as usual now that we
> >>>>>>   have the init_interrupt callback.
> >>>>>> - A tasklet is added to handle the Schedule Receiver interrupt. The tasklet
> >>>>>>   finds each relevant domain with rcu_lock_domain_by_id() which now is enough
> >>>>>>   to guarantee that the FF-A context can be accessed.
> >>>>>> - The notification interrupt handler only schedules the notification
> >>>>>>   tasklet mentioned above
> >>>>>>
> >>>>>> v3->v4:
> >>>>>> - Add another note on FF-A limitations
> >>>>>> - Clear secure_pending in ffa_handle_notification_get() if both SP and SPM
> >>>>>>   bitmaps are retrieved
> >>>>>> - ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id FF-A
> >>>>>>   uses for Xen itself
> >>>>>> - Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id() in
> >>>>>>   notif_irq_handler() with a call to rcu_lock_live_remote_domain_by_id() via
> >>>>>>   ffa_rcu_lock_domain_by_vm_id()
> >>>>>> - Remove spinlock in struct ffa_ctx_notif and use atomic functions as needed
> >>>>>>   to access and update the secure_pending field
> >>>>>> - In notif_irq_handler(), look for the first online CPU instead of assuming
> >>>>>>   that the first CPU is online
> >>>>>> - Initialize FF-A via presmp_initcall() before the other CPUs are online,
> >>>>>>   use register_cpu_notifier() to install the interrupt handler
> >>>>>>   notif_irq_handler()
> >>>>>> - Update commit message to reflect recent updates
> >>>>>>
> >>>>>> v2->v3:
> >>>>>> - Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
> >>>>>>   FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
> >>>>>> - Register the Xen SRI handler on each CPU using on_selected_cpus() and
> >>>>>>   setup_irq()
> >>>>>> - Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
> >>>>>>   doesn't conflict with static SGI handlers
> >>>>>>
> >>>>>> v1->v2:
> >>>>>> - Addressing review comments
> >>>>>> - Change ffa_handle_notification_{bind,unbind,set}() to take struct
> >>>>>>   cpu_user_regs *regs as argument.
> >>>>>> - Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
> >>>>>>   an error code.
> >>>>>> - Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.
> >>>>>> ---
> >>>>>> xen/arch/arm/tee/Makefile       |   1 +
> >>>>>> xen/arch/arm/tee/ffa.c          |  72 +++++-
> >>>>>> xen/arch/arm/tee/ffa_notif.c    | 409 ++++++++++++++++++++++++++++++++
> >>>>>> xen/arch/arm/tee/ffa_partinfo.c |   9 +-
> >>>>>> xen/arch/arm/tee/ffa_private.h  |  56 ++++-
> >>>>>> xen/arch/arm/tee/tee.c          |   2 +-
> >>>>>> xen/include/public/arch-arm.h   |  14 ++
> >>>>>> 7 files changed, 548 insertions(+), 15 deletions(-)
> >>>>>> create mode 100644 xen/arch/arm/tee/ffa_notif.c
> >>>>>>
> >>>> [...]
> >>>>>>
> >>>>>> @@ -517,8 +567,10 @@ err_rxtx_destroy:
> >>>>>> static const struct tee_mediator_ops ffa_ops =
> >>>>>> {
> >>>>>>      .probe = ffa_probe,
> >>>>>> +    .init_interrupt = ffa_notif_init_interrupt,
> >>>>>
> >>>>> With the previous change on init secondary i would suggest to
> >>>>> have a ffa_init_secondary here actually calling the ffa_notif_init_interrupt.
> >>>>
> >>>> Yes, that makes sense. I'll update.
> >>>>
> >>>>>
> >>>>>>      .domain_init = ffa_domain_init,
> >>>>>>      .domain_teardown = ffa_domain_teardown,
> >>>>>> +    .free_domain_ctx = ffa_free_domain_ctx,
> >>>>>>      .relinquish_resources = ffa_relinquish_resources,
> >>>>>>      .handle_call = ffa_handle_call,
> >>>>>> };
> >>>>>> diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
> >>>>>> new file mode 100644
> >>>>>> index 000000000000..e8e8b62590b3
> >>>>>> --- /dev/null
> >>>>>> +++ b/xen/arch/arm/tee/ffa_notif.c
> >>>> [...]
> >>>>>> +static void notif_vm_pend_intr(uint16_t vm_id)
> >>>>>> +{
> >>>>>> +    struct ffa_ctx *ctx;
> >>>>>> +    struct domain *d;
> >>>>>> +    struct vcpu *v;
> >>>>>> +
> >>>>>> +    /*
> >>>>>> +     * vm_id == 0 means a notifications pending for Xen itself, but
> >>>>>> +     * we don't support that yet.
> >>>>>> +     */
> >>>>>> +    if ( !vm_id )
> >>>>>> +        return;
> >>>>>> +
> >>>>>> +    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
> >>>>>> +    if ( !d )
> >>>>>> +        return;
> >>>>>> +
> >>>>>> +    ctx = d->arch.tee;
> >>>>>> +    if ( !ctx )
> >>>>>> +        goto out_unlock;
> >>>>>
> >>>>> In both previous cases you are silently ignoring an interrupt
> >>>>> due to an internal error.
> >>>>> Is this something that we should trace ? maybe just debug ?
> >>>>>
> >>>>> Could you add a comment to explain why this could happen
> >>>>> (when possible) or not and the possible side effects ?
> >>>>>
> >>>>> The second one would be a notification for a domain without
> >>>>> FF-A enabled which is ok but i am a bit more wondering on
> >>>>> the first one.
> >>>>
> >>>> The SPMC must be out of sync in both cases. I've been looking for a
> >>>> window where that can happen, but I can't find any. SPMC is called
> >>>> with FFA_NOTIFICATION_BITMAP_DESTROY during domain teardown so the
> >>>> SPMC shouldn't try to deliver any notifications after that.
> >>>
> >>> I don't think I agree with the conclusion. I believe, this can also
> >>> happen in normal operation.
> >>>
> >>> For example, the SPMC could have trigger the interrupt before
> >>> FFA_NOTIFICATION_BITMAP_DESTROY but Xen didn't handle the interrupt (or
> >>> run the tasklet) until later.
> >> You're right, there is a window. Delayed handling is OK since
> >> FFA_NOTIFICATION_INFO_GET_64 is invoked from the tasklet, but there is
> >> a window if the tasklet is suspended or another core destroys the
> >> domain before the tasklet has called ffa_rcu_lock_domain_by_vm_id().
> >> So far it's harmless and I guess we can afford a print.
> >
> > I think it would confuse more the user than anything else because this is an expected race. If we wanted to print a message, then I would argue it should be in the case where...
> >
> >>>
> >>> This could be at the time where the domain has been fully destroyed or
> >>> even when...
> >>>
> >>>> In the second case, the domain ID might have been reused for a domain
> >>>> without FF-A enabled, but the SPMC should have known that already.
> >>>
> >>> ... a new domain has been created. Although, the latter is rather unlikely.
> >>>
> >>> So what if the new domain has FFA enabled? Is there any potential
> >>> security issue?
> >> In this case, we'll inject an NPI in the guest, but when it invokes
> >> FFA_NOTIFICATION_GET it will get accurate information from the SPMC.
> >> The worst case is a spurious NPI. This shouldn't be a security issue.
> >
> > ... we inject the interrupt to the "wrong" domain. But I also understand that it would be difficult for Xen to detect it.
> >
> > So I would say no print should be needed. Bertrand, what do you think?
>
> Yes i agree that it could confuse the user.
> I would just put comments to explain the situations where it could occur in the code.

Thanks, I'll add comments for the next version.

Cheers,
Jens


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:47:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:47:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736316.1142347 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUK3-00081j-UY; Fri, 07 Jun 2024 07:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736316.1142347; Fri, 07 Jun 2024 07:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUK3-00081c-Rm; Fri, 07 Jun 2024 07:47:39 +0000
Received: by outflank-mailman (input) for mailman id 736316;
 Fri, 07 Jun 2024 07:47:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFUK2-00081W-MK
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:47:38 +0000
Received: from mail-wm1-x330.google.com (mail-wm1-x330.google.com
 [2a00:1450:4864:20::330])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34f76dd8-24a2-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 09:47:36 +0200 (CEST)
Received: by mail-wm1-x330.google.com with SMTP id
 5b1f17b1804b1-4214f803606so17587045e9.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 00:47:36 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35ef5d4a827sm3396101f8f.36.2024.06.07.00.47.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 00:47:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34f76dd8-24a2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717746456; x=1718351256; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=ja/7CEOTFK4lCt0KPlJsNG1ZbNffzZMI9b9F+6ytaGE=;
        b=gFbvklnmDsSxmWldXQLz0Uzd6VKemBqD84jTuASomfMtRHmTk82FErG3PYkweZ2mPy
         T8Yu0jIelmeVGnRzyCmYM7CErWM+s8CftwXUr830zy4ZHO8+dQPME5h46I9RWmfIFxTy
         qt6q0ln96pWnMBGv1GsvHq2Cbgca28we8X9iUoka8toTpOejyV0V7yGHrmj2Yi2F6Us2
         v+ja/J+h17kRvlaCQiCA9+H/E4z+slrC5Dpc5p7B4Tys6mZGYyJ2DtCCzbg3Q2+FmZgJ
         t2f7HBZjRO8tBBFnjY20sUaINNOJreiFgmnXfhHrmH/+POEN0k/6wdPEQUaQVXMWtNK/
         322A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717746456; x=1718351256;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=ja/7CEOTFK4lCt0KPlJsNG1ZbNffzZMI9b9F+6ytaGE=;
        b=OCye79as/ELfoTKs2WoSGwBTJnzbCj21E23qPEbjkFISqkwEbiL+Lm0ZQaW9RnsMmx
         ygt+t6KrwTOqrofKobFEgct2PkKPssavAD0A99dIFPw8HHh2EhzbkmuDAcBolhsDDAbw
         mS0ytpPoIV6JBBm6hQHzZyRduhuBNsYLvREJWRGxEGbK/lXkwLgUPS0BFI4/ZXCkHBPr
         sp/LsGKN8A/L0ixcngzVv4Zs6xkZANtkadhG1Yz4j7TvPDQu1ipEMz/9wY9wyNpEOXnC
         zXZhgs4Ad7TF+e6On/lLCxupfVubNUkUCygf24ekanKcxAmmZTqXHZEfvehLmUQ4naoc
         40IA==
X-Forwarded-Encrypted: i=1; AJvYcCWtmtRMR5tTUNnDQSMZoGMXJxXmvh6HVq8X5aP4fYkftjzYm4xoY8P6scYQq8+c286wW1gg3DJzh07Pbieoei/ZeflsebL5rX694149MHE=
X-Gm-Message-State: AOJu0YxLHb9ZelGb+BERknB02EtZQur8VmzPr41IdKcjr+CQ7W0g5qML
	6tp9DqSqXh7D2hW6CjImkyNp2jHm+MlRkD+etlYb0KYXvJ08L1aAbC/LU5dQ3A==
X-Google-Smtp-Source: AGHT+IGXD33tpnMPZMecPt1YDBAcomyWhXDfHKmyD9lUIQDU/XZvqTMfG5lXuRwWL144dZtMoUloCQ==
X-Received: by 2002:a05:600c:47d7:b0:421:5288:63df with SMTP id 5b1f17b1804b1-421649fe377mr14552265e9.11.1717746455904;
        Fri, 07 Jun 2024 00:47:35 -0700 (PDT)
Message-ID: <856e3517-4aef-4e18-85b1-174ebf5c358f@suse.com>
Date: Fri, 7 Jun 2024 09:47:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 04/16] x86: introduce CONFIG_ALTP2M Kconfig option
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <035f63f2b92b963f2585064fa21e09e73157f9d3.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <035f63f2b92b963f2585064fa21e09e73157f9d3.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:13, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/include/asm/p2m.h
> +++ b/xen/arch/x86/include/asm/p2m.h
> @@ -577,10 +577,10 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>          return _gfn(mfn_x(mfn));
>  }
>  
> -#ifdef CONFIG_HVM
>  #define AP2MGET_prepopulate true
>  #define AP2MGET_query false
>  
> +#ifdef CONFIG_ALTP2M
>  /*
>   * Looks up altp2m entry. If the entry is not found it looks up the entry in
>   * hostp2m.

In principle this #ifdef shouldn't need moving. It's just that the
three use sites need taking care of a little differently. E.g. ...

> @@ -589,6 +589,15 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>  int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
>                                 p2m_type_t *t, p2m_access_t *a,
>                                 bool prepopulate);
> +#else
> +static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
> +                                             gfn_t gfn, mfn_t *mfn,
> +                                             p2m_type_t *t, p2m_access_t *a,
> +                                             bool prepopulate)
> +{
> +    ASSERT_UNREACHABLE();
> +    return -EOPNOTSUPP;
> +}

static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
                                             gfn_t gfn, mfn_t *mfn,
                                             p2m_type_t *t, p2m_access_t *a)
{
    ASSERT_UNREACHABLE();
    return -EOPNOTSUPP;
}
#define altp2m_get_effective_entry(ap2m, gfn, mfn, t, a, prepopulate) \
        altp2m_get_effective_entry(ap2m, gfn, mfn, t, a)

Misra doesn't like such shadowing, so the inline function may want
naming slightly differently, e.g. _ap2m_get_effective_entry().

> @@ -914,8 +923,14 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
>  /* Switch alternate p2m for a single vcpu */
>  bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx);
>  
> +#ifdef CONFIG_ALTP2M
>  /* Check to see if vcpu should be switched to a different p2m. */
>  void p2m_altp2m_check(struct vcpu *v, uint16_t idx);
> +#else
> +static inline void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
> +{
> +}

Btw, no need to put the braces on separate lines for such a stub.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:50:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:50:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736321.1142357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUMe-0001HM-AJ; Fri, 07 Jun 2024 07:50:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736321.1142357; Fri, 07 Jun 2024 07:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUMe-0001HF-7j; Fri, 07 Jun 2024 07:50:20 +0000
Received: by outflank-mailman (input) for mailman id 736321;
 Fri, 07 Jun 2024 07:50:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFUMd-0001H9-4q
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:50:19 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 95416989-24a2-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 09:50:18 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2e6f2534e41so19241521fa.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 00:50:18 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b059332073sm7806866d6.139.2024.06.07.00.50.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 00:50:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95416989-24a2-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1717746617; x=1718351417; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:content-language:references
         :cc:to:from:subject:user-agent:mime-version:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=DrDl9r2juB5/Cc4BIkYP0HSQSjdb3MfmhRmurpYfPjs=;
        b=OxSCSF/zbhG05ewOsJG6My+MgQA1xB5jorl/rnCfnfZpbkolBDBQXcs22NYOUa19L6
         i/ps8ScAr2r6EFkZgSaIAUuZNLUPzJ496afl8XsLRZJNzpBwgs2d8QnnmtZ8WcGnTglm
         WPssN6hBEGcLxW8KZ+R8r72kH23sWbSHUhFJ0F8XcZQD8IUHjTHQwsU09ob5isVK6Ba8
         2FyNjhm68EHpVjt+sIsebcrhCJG1+a8pKwmIy3+m/RcoGPvuI3FAclugKYRnCI5zT36m
         v8DdTGjANjEdOs1SzBFDrkyutaw4DNiByHk2IcX+nzghVMyAvZtYfcEphv8GREhjXlxe
         Jz8g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717746617; x=1718351417;
        h=content-transfer-encoding:in-reply-to:content-language:references
         :cc:to:from:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=DrDl9r2juB5/Cc4BIkYP0HSQSjdb3MfmhRmurpYfPjs=;
        b=HBTj6Z+V24ir1+mhISWw14WYG0WsF4H1nRD3nZJhSkQJkQHmI8q2lV9kXIBkl++PN/
         qZRs5+ZgOcQLsPULZiuFdoVNJWvIYvP+TGratgeWxYyfol4bI8ZmDjmWQ6m/pRBKTIYY
         GUL9Kd+wL62UBVabsbERadiG9W4kKRZV/QLKwh5XjNEQ6t4KCrFgsdWujUPpJwxt0nwZ
         oq7NAnddpBAPnbq64io6oeeqkDdDNGyxxaH3ZzbHv4XRuKAtpbIc8Ju9fZj19DuBtGLW
         rbPTCTccrcsFMXIavkdsGtsmOFsLdCVOJKuseMErB6Jewj7amCDRYQQkNY9bg8biC7er
         ugqw==
X-Forwarded-Encrypted: i=1; AJvYcCV4qFzequZHVGdyHro2s1pPSmKqBcceKP2jB9bE9/XtRZTakJCsGRTntgv1aj7rRo1X7+Ffulz55rd23UtC5zf04LR2KPazuiGh7Hf5BU8=
X-Gm-Message-State: AOJu0Yxp0jjGwsinfzK/hCx98s/qeWLWBzE/6Hon8Yl4MC7l6R9u/yIi
	TNFe2/VW7MiB/Y+aWNKll2GsAsnOOv9WW9aqZsSNw21UQU0IZmH+U+Ulh3Lr1A==
X-Google-Smtp-Source: AGHT+IGJALksAjdu5Qmye3Qx7jEI0fbwBAYxVozMgj8/0+4ww2eivWlgKuZJc05pemeYGqHvZWmlqw==
X-Received: by 2002:a19:9118:0:b0:520:9775:5d4b with SMTP id 2adb3069b0e04-52bb9f5d9cemr1123511e87.13.1717746617456;
        Fri, 07 Jun 2024 00:50:17 -0700 (PDT)
Message-ID: <c72ef6c8-6e5a-4533-a049-7636f6387e4b@suse.com>
Date: Fri, 7 Jun 2024 09:50:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 02/16] x86/altp2m: check if altp2m active when
 giving away p2midx
From: Jan Beulich <jbeulich@suse.com>
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <9306d31184b8e714c3a10ccc6a2b2c6a80777ddb.1717410850.git.Sergiy_Kibrik@epam.com>
 <7493c91c-070b-4828-a7f9-15d618d49ce5@suse.com>
Content-Language: en-US
In-Reply-To: <7493c91c-070b-4828-a7f9-15d618d49ce5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 09:25, Jan Beulich wrote:
> On 03.06.2024 13:09, Sergiy Kibrik wrote:
>> @@ -38,9 +34,13 @@ static inline bool altp2m_active(const struct domain *d)
>>  }
>>  
>>  /* Only declaration is needed. DCE will optimise it out when linking. */
>> -uint16_t altp2m_vcpu_idx(const struct vcpu *v);
>>  void altp2m_vcpu_disable_ve(struct vcpu *v);
>>  
>>  #endif
>>  
>> +static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>> +{
>> +    return altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0;
>> +}
> 
> While perhaps okay this way as a first step,

Hmm, or maybe not. 0 is a valid index, and hence could be misleading
at call sites.

Jan

> my general expectation
> would be that with ALTP2M=n there also wouldn't be any p2midx field
> in the respective struct. Which in turn will mean that this code
> would need re-doing again, and perhaps again splitting between an
> inline one and a decl-only one. With that I wonder whether that split
> wouldn't better be retained right away.
> 
> Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:51:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:51:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736327.1142367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNp-0001wV-Jz; Fri, 07 Jun 2024 07:51:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736327.1142367; Fri, 07 Jun 2024 07:51:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNp-0001wO-HB; Fri, 07 Jun 2024 07:51:33 +0000
Received: by outflank-mailman (input) for mailman id 736327;
 Fri, 07 Jun 2024 07:51:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUNo-0001uI-97
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:51:32 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20626.outbound.protection.outlook.com
 [2a01:111:f403:2009::626])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c032946e-24a2-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 09:51:31 +0200 (CEST)
Received: from BN1PR13CA0023.namprd13.prod.outlook.com (2603:10b6:408:e2::28)
 by SA3PR12MB7860.namprd12.prod.outlook.com (2603:10b6:806:307::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7611.22; Fri, 7 Jun
 2024 07:51:27 +0000
Received: from BN3PEPF0000B36F.namprd21.prod.outlook.com
 (2603:10b6:408:e2:cafe::41) by BN1PR13CA0023.outlook.office365.com
 (2603:10b6:408:e2::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.13 via Frontend
 Transport; Fri, 7 Jun 2024 07:51:26 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN3PEPF0000B36F.mail.protection.outlook.com (10.167.243.166) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.0 via Frontend Transport; Fri, 7 Jun 2024 07:51:26 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 02:51:23 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c032946e-24a2-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MMX4/znEupsgzpoFMvMV/WPDPPQ2UBL/cFEjzdhWkcQgDnodewZpgAMW7MwKFi4mVi6gngIRXT2w65wHM+Y6991C3HZf6yLwUAwOBllp3WAmPAcHpKKIrBBUpk80KG0+qvpYjYtCGQqxwPiLnlfO8sKpNpXVQSbZsGpo2Coc8kszFWUrpfIJCB/2NMOz2oqERuX+/LiOGgQXgaeQ+dat1X4JpoeeBquaccQGp2OySIlbE3Ti39P9lkZBzc9GS6IV/DObff/kVl5syNsmbPrwxhKZy5nJ6kE5OHDQcBUdkwGHWlUhgVPUMl880sn5LuLPDHx8NfDhs7RwswVkF7ITgw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=P0rNKlh4YHIIL4TR9dCsbVJChfs3QK+5ViLp0qK0wcg=;
 b=ODdt8svolw8jb5WBoXe/Re5p7BfK6hdvSNhQ6ARJFbj/I9y/OE5Stci9WtHlvDG5pWH979p4WE0p90YSPru7oy+nMNJ2pt302CMElJR5kRDFJmKitYDCDzdj7BnBJ3igfXh5/nbbiRLUjtrXg4XVS0avD4PrfuOuonnA7ePiyk5SD3XO5clXyU4dnL7Psg1gKGloKtz/4SsK7P7OUzvzCAzNL726uvC3cUJFChTHwnaJNWzNeWlY4wUUr6hObR4hG5OSstyq+Fe1JQIZzjBKPvmV3NDKl9yYR7lC4zQyhLvtyjrENSOfCFL86Zv2Xfa6OW7yYCrVgSO9ct1F7m37Iw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=P0rNKlh4YHIIL4TR9dCsbVJChfs3QK+5ViLp0qK0wcg=;
 b=ETVl+EvCzNjfMzQj7tvapRQ5dhS98bL/5SZyeixjbRfDTmK03/XjFUQK1oWUFV1XwIlIc1QvpNhEo+WiiIKr53EI2lQFDUR7KMyLVDeN9PQIInCnOee8RK5h/9Q+NGTVj7I6dzrILBl6BYgKhM41xAKyBfJ8yfs85cZ87M9FhRA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: Juergen Gross <jgross@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, "Rafael J .
 Wysocki" <rafael@kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, <linux-pci@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-acpi@vger.kernel.org>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>
Subject: [RFC KERNEL PATCH v8 0/3] Support device passthrough when dom0 is PVH on Xen
Date: Fri, 7 Jun 2024 15:51:06 +0800
Message-ID: <20240607075109.126277-1-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN3PEPF0000B36F:EE_|SA3PR12MB7860:EE_
X-MS-Office365-Filtering-Correlation-Id: 79878ae9-8039-442d-e60f-08dc86c6a289
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|82310400017|376005|36860700004|1800799015;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?8TvU9vdS00BOx6p+248e3Htxc6+U902JaIcSj8sjITpBSuRtoykDkGympfnI?=
 =?us-ascii?Q?z0Het+eipdV1h8LBKXI9b8DQFnKncZmTnmfv2GcceDP5ij/z+GF2UaWzk+s+?=
 =?us-ascii?Q?iIVTygvekYcCuRSSUppszxPX9MJm7kNiC3sa8ZDw9ZZVDhJ1OPCLa9XYV0Dy?=
 =?us-ascii?Q?V0rSFJvLddl0xaX0kodKE2ysPZYSYNkchYFbjZfl5eeBfyjmAwFljNka32mg?=
 =?us-ascii?Q?RcBHtVxpvu32UNGkDbnOgCp9y0l9QocIElbhkCeowCNKb3P3ZbCeXeqNGp4j?=
 =?us-ascii?Q?vElMXDnVo5vF33B0uvtu5cn/e8zWHKn7RNBHnjZzY6yB76EPofNFYWMPNJ+F?=
 =?us-ascii?Q?32NF/dx4NI2LeoexDaFl8YPZwtcDOZmF8CpEJlSO2hLs/mmVOcC6UZO6JGFT?=
 =?us-ascii?Q?iKrlsxm0H0TkyYf6IVtP52wwGRVjva63wJk77zHENpXc9dasFa9kw3E7fRN0?=
 =?us-ascii?Q?AFtddfa/jlf4e0m+9tTysGt56oS8Rgt0KYKKoBuzYCuUeoHKk2PAv1zyyF5L?=
 =?us-ascii?Q?DlZduqlFUCECSfs5vWq5Gq0LLVC5+iDSsADUS5EBvAd/Wrm+hRetNjc55qYW?=
 =?us-ascii?Q?+efKKcau1e/iBKwGDeN/9VSTIpAGz2dtAQC+CsNh1y/O+Qqup6DfdRewLnC8?=
 =?us-ascii?Q?tDviS4nkx2p4GUPQtXZT4VuGi35OfhRM79XvWC/rZZMLs/oRZ4JMND/Q8joQ?=
 =?us-ascii?Q?ZTRxAuQ51eXLD9kvy8NUNHbCA9NHBRlH6KsMaunYdu5yLedJY2hS1Wt+vdkA?=
 =?us-ascii?Q?DeuURXVehJrMt9Xn0JmOYbGZsA7iPn79m3uTP3BTObS/ChrOPLMaALllvimr?=
 =?us-ascii?Q?d1q987htENKt2f4x3jf4+vHLUnj2FeIiFhUqqvCJlh4Uzft1B39J+65lOtGK?=
 =?us-ascii?Q?CpgmL4aYazzGDb0CYwyAw3Fu59/t7mL+NP+DCj6Ysq817pplDHhWI6LAAV2e?=
 =?us-ascii?Q?lzFFFpZSAjhyRlK3B4iWalQYEocgiRUAnAAuL4Pa4xI7nkii+JYhVNUhqMBa?=
 =?us-ascii?Q?KzKaRKef2rHa9bQdqqs2mhts2KtS9H4xBb0JNK6BsKGvyj8gYf74vC19CJ87?=
 =?us-ascii?Q?2vn+hdAfkQjyYcAcIBky9StbobyHBEb7RBoSVM+STBlOZR1bRJNW4/pLr+Ls?=
 =?us-ascii?Q?8ATb/Zn/TcUKEqHmAM2hskXTUKtPnQ3uGXbNbeFyx717NIZ/tOFYCdHQaD+E?=
 =?us-ascii?Q?n85VQ4Lrna0oqWB8kKx/Pq+5Sk5UxEKFrH4y4MU8y7gah3m7oOojrl2ZISvL?=
 =?us-ascii?Q?JClbPFa7vEs5NjaKTJFMuPZ6NCxig6NLl1IgfV0KeMnPrFdSXrF63M9zw+p0?=
 =?us-ascii?Q?brXzCbPk78pn5rc8p+BkV6g6SuwTpKE+8LwJBxUchkuz/qA4K6Cqq0SDt+KI?=
 =?us-ascii?Q?5rPKVs5J1K5cCTVkntQB8bU5TOyrBxbSO45iy0zuoVMZNXDprUU6NmGz/VMz?=
 =?us-ascii?Q?WyaGZWo66aI=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(82310400017)(376005)(36860700004)(1800799015);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 07:51:26.7579
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 79878ae9-8039-442d-e60f-08dc86c6a289
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN3PEPF0000B36F.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB7860

Hi All,
This is the v8 series to support device passthrough on Xen when dom0 is PVH.
v7->v8 changes:
* patch#1: This is patch#1 of v6; it was reverted from the staging branch due to API changes on
           the Xen side. Add pci_device_state_reset_type_t to distinguish the reset types.
* patch#2: This is patch#1 of v7. Use CONFIG_XEN_ACPI instead of CONFIG_ACPI to guard the code.
* patch#3: This is patch#2 of v7. In privcmd_ioctl_gsi_from_dev, return -EINVAL when
           CONFIG_XEN_ACPI is not configured. Use PCI_BUS_NUM, PCI_SLOT and PCI_FUNC instead of
           open-coding them.


Best regards,
Jiqian Chen



v6->v7 changes:
* the first patch of v6 was already merged into the linux_next branch.
* patch#1: This is patch#2 of v6. Move the implementation of xen_acpi_get_gsi_info to
           drivers/xen/acpi.c; that makes it more convenient for the subsequent patch to
           obtain the gsi.
* patch#2: This is patch#3 of v6. Add a new field "gsi" to struct pcistub_device and set it
           when pcistub initializes the device. Then, when userspace wants to get the gsi by
           passing an sbdf, we can return that gsi.


v5->v6 change:
* patch#3: changed to add a new syscall to translate an irq to a gsi, instead of adding a new gsi sysfs entry.


v4->v5 changes:
* patch#1: Add Reviewed-by Stefano
* patch#2: Add Reviewed-by Stefano
* patch#3: No changes


v3->v4 changes:
* patch#1: change the comment of PHYSDEVOP_pci_device_state_reset; use a new function
           pcistub_reset_device_state to wrap __pci_reset_function_locked and xen_reset_device_state,
           and call pcistub_reset_device_state in pci_stub.c
* patch#2: remove map_pirq from xen_pvh_passthrough_gsi


v2->v3 changes:
* patch#1: add a condition so that xen_reset_device_state is only done for non-PV domains in
           pcistub_init_device.
* patch#2: abandon the previous implementation that called unmask_irq; instead, set up the gsi
           and map the pirq for the passthrough device in pcistub_init_device.
* patch#3: abandon the previous implementation that added a new syscall to get the gsi from an
           irq; instead, add a new sysfs entry for the gsi, so userspace can get the gsi number
           from sysfs.


Below is the description from the v2 cover letter:
This series of patches is the v2 of the implementation of passthrough when dom0 is PVH on Xen.
We sent v1 upstream before, but it had many problems and we got lots of suggestions.
I will introduce all the issues these patches try to fix and the differences between v1 and v2.

Issues we encountered:
1. pci_stub failed to write a BAR for a passthrough device.
Problem: when we run "sudo xl pci-assignable-add <sbdf>" to assign a device, pci_stub calls
pcistub_init_device() -> pci_restore_state() -> pci_restore_config_space() ->
pci_restore_config_space_range() -> pci_restore_config_dword() -> pci_write_config_dword().
The PCI config write is intercepted and reaches bar_write() in Xen, but bar->enabled was set
earlier, so the write is not allowed; then, when QEMU configures the passthrough device in
xen_pt_realize(), it gets invalid BAR values.

Reason: we don't tell vPCI that the device has been reset, so the state currently cached in
pdev->vpci is out of date and differs from the real device state.

Solution: the first kernel patch (xen/pci: Add xen_reset_device_state function) and the first
Xen patch (xen/vpci: Clear all vpci status of device) add a new hypercall to reset the state
stored in vPCI when the state of the real device has changed. Thanks to Roger for the
suggestion for this v2; it is different from v1
(https://lore.kernel.org/xen-devel/20230312075455.450187-3-ray.huang@amd.com/), which simply
allowed domU to write the PCI BAR and did not comply with the design principles of vPCI.

2. Failed to do PHYSDEVOP_map_pirq when dom0 is PVH.
Problem: an HVM domU does PHYSDEVOP_map_pirq for a passthrough device using the gsi; see
xen_pt_realize->xc_physdev_map_pirq and pci_add_dm_done->xc_physdev_map_pirq.
xc_physdev_map_pirq then calls into Xen, but in hvm_physdev_op() PHYSDEVOP_map_pirq is not
allowed.

Reason: in hvm_physdev_op(), the variable "currd" is the PVH dom0, and PVH has no
X86_EMU_USE_PIRQ flag, so the has_pirq check fails.

Solution: I think we need to allow PHYSDEVOP_map_pirq when "currd" is dom0 (at present a PVH
dom0). The second Xen patch (x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0) allows a PVH dom0
to do PHYSDEVOP_map_pirq. This v2 patch is better than v1, which simply removed the has_pirq
check (xen https://lore.kernel.org/xen-devel/20230312075455.450187-4-ray.huang@amd.com/).

3. The gsi of a passthrough device does not get unmasked.
 3.1 failed to check the permission of the pirq
 3.2 the gsi of the passthrough device was not registered in PVH dom0

Problem:
3.1 the callback function pci_add_dm_done() is called when QEMU configures a passthrough
device for a domU. It calls xc_domain_irq_permission()->pirq_access_permitted() to check
whether the gsi has a corresponding mapping in dom0, but it didn't, so the check failed. See
XEN_DOMCTL_irq_permission->pirq_access_permitted; "current" is the PVH dom0 and the returned
irq is 0.
3.2 it's possible for a gsi (iow: vIO-APIC pin) to never get registered on PVH dom0, because
PVH devices use MSI(-X) interrupts. However, the IO-APIC pin must be configured for it to be
mappable into a domU.

Reason: after searching the code, I found that "map_pirq" and "register_gsi" are done in
vioapic_write_redirent->vioapic_hwdom_map_gsi when the gsi (aka the IO-APIC pin) is unmasked
in PVH dom0. So both problems come down to the gsi of a passthrough device never being
unmasked.

Solution: the second kernel patch (xen/pvh: Unmask irq for passthrough device in PVH dom0)
calls unmask_irq() when we assign a device for passthrough, so that passthrough devices get a
gsi mapping on PVH dom0 and the gsi can be registered. This v2 patch is different from v1
(kernel https://lore.kernel.org/xen-devel/20230312120157.452859-5-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-5-ray.huang@amd.com/);
v1 performed "map_pirq" and "register_gsi" on all PCI devices on PVH dom0, which is
unnecessary and may cause multiple registrations.

4. Failed to map a pirq for the gsi.
Problem: QEMU calls xc_physdev_map_pirq() to map a passthrough device's gsi to a pirq in
xen_pt_realize(), but it fails.

Reason: per the implementation of xc_physdev_map_pirq(), it needs a gsi instead of an irq, but
QEMU passes an irq and treats it as a gsi; the value is read from
/sys/bus/pci/devices/xxxx:xx:xx.x/irq in xen_host_pci_device_get(). In fact the gsi number is
not equal to the irq. On PVH dom0, when an irq is allocated for a gsi in
acpi_register_gsi_ioapic(), the allocation is dynamic: first to request, first served. If you
debug the kernel code (see __irq_alloc_descs), you will find that irq numbers are allocated in
ascending order, but the requesting gsi numbers are not ordered; gsi 38 may come before
gsi 28, which gives gsi 38 a smaller irq number than gsi 28, and then gsi != irq.

Solution: we can record the relation between gsi and irq, and translate when userspace (QEMU)
wants the gsi. The third kernel patch (xen/privcmd: Add new syscall to get gsi from irq)
records all the relations in acpi_register_gsi_xen_pvh() when dom0 initializes the PCI
devices, and provides a syscall for userspace to get the gsi from an irq. The third Xen patch
(tools: Add new function to get gsi from irq) adds a new function xc_physdev_gsi_from_irq()
that calls the new syscall added on the kernel side. Userspace can then use that function to
get the gsi, and xc_physdev_map_pirq() succeeds. This v2 patch is the same as v1
(kernel https://lore.kernel.org/xen-devel/20230312120157.452859-6-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-6-ray.huang@amd.com/).

About the v2 QEMU patch: it only changes an included header file; the rest is similar to v1
(qemu https://lore.kernel.org/xen-devel/20230312092244.451465-19-ray.huang@amd.com/): just
call xc_physdev_gsi_from_irq() to get the gsi from the irq.

Jiqian Chen (3):
  xen/pci: Add xen_reset_device_function_state
  xen/pvh: Setup gsi for passthrough device
  xen/privcmd: Add new syscall to get gsi from dev

 arch/x86/xen/enlighten_pvh.c       | 23 +++++++++
 drivers/acpi/pci_irq.c             |  2 +-
 drivers/xen/acpi.c                 | 50 +++++++++++++++++++
 drivers/xen/pci.c                  | 25 ++++++++++
 drivers/xen/privcmd.c              | 28 +++++++++++
 drivers/xen/xen-pciback/pci_stub.c | 77 +++++++++++++++++++++++++++---
 include/linux/acpi.h               |  1 +
 include/uapi/xen/privcmd.h         |  7 +++
 include/xen/acpi.h                 | 19 ++++++++
 include/xen/interface/physdev.h    |  7 +++
 include/xen/pci.h                  |  6 +++
 11 files changed, 238 insertions(+), 7 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:51:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:51:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736328.1142379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNs-0002Bj-SP; Fri, 07 Jun 2024 07:51:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736328.1142379; Fri, 07 Jun 2024 07:51:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNs-0002Bc-Nl; Fri, 07 Jun 2024 07:51:36 +0000
Received: by outflank-mailman (input) for mailman id 736328;
 Fri, 07 Jun 2024 07:51:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUNr-0001uI-Cu
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:51:35 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f403:2417::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c24028a6-24a2-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 09:51:34 +0200 (CEST)
Received: from SN6PR16CA0067.namprd16.prod.outlook.com (2603:10b6:805:ca::44)
 by LV8PR12MB9155.namprd12.prod.outlook.com (2603:10b6:408:183::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.34; Fri, 7 Jun
 2024 07:51:31 +0000
Received: from SA2PEPF0000150A.namprd04.prod.outlook.com
 (2603:10b6:805:ca:cafe::75) by SN6PR16CA0067.outlook.office365.com
 (2603:10b6:805:ca::44) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.21 via Frontend
 Transport; Fri, 7 Jun 2024 07:51:30 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SA2PEPF0000150A.mail.protection.outlook.com (10.167.242.42) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7633.15 via Frontend Transport; Fri, 7 Jun 2024 07:51:30 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 02:51:26 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c24028a6-24a2-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lDgmRs+DwoNK+hFVWn3gAycVUYaZkxSzX1eW9OEl7MfrbQJdzM5AirVDkogE9oZHn7JOi2pwiGHhjCmGgU4xs3LkI5nvefUBbaOMRQIKRxMspBjKlK10BT7KvY4qgD1POWWYswjxkv0i7YcrkvqaWXtZpMOkplBJYwOUCKwoy1Aei3ilKLXo1Nx1k1YgVxz7cTLTSW5A7e22pXu44xMc397yhRQprSgA4pF1me1TBNoNWOxDFl1vY/p15yM6uq1mbU7/I5V/yzzivbwI8qiCLQovAhrzfN7mHEeKfFuTdTZVBOp/QjaRqGurfO12aNOw2d9KFoC+klpY8es3AhDu8w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=jMVm+w4rQYTQQvMoTBNiExOv9MjyQJmMuO42lSrvZo4=;
 b=crg7bY9F6lE/OZi42rQWup718r0jR3X+G2RWYNVAALk8fXQzgwyU508Vnq33n8eejNdmh5jmyvkMe63NaGZi7GF/ERcCVPZLFyGZyRUafqbaVYPet0XJJ8p6KhghNyhJLRUn7rYrZxF5cIqvVk+ql7RpxByNZklDgo2FENpw7z5kywCuDNa8SZ6+QlDxJ6TjSKIxA5sUe567VtCC7bB7AKMYqEYU+AkcC8ZyPTEWyIiZyDU91+1Y+dvrTce9LhfJo55gizxPz/m9ANIh04uOgWozJeTeppZf3U4Erzhmjrh7YN+ZKaz+QUKXb3dT9EoqCMgwZAFtWJpPEoBpRFcbvQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jMVm+w4rQYTQQvMoTBNiExOv9MjyQJmMuO42lSrvZo4=;
 b=aqhcsn74iopy32W1816q1grzVC8InoYsflSL8g48tKOxA77hkSUxAE2l1fcdYd9jtFoLCUmKl44oYpmGdNnDfU53exhyZh7SqO8Cm4Roj9aiVf4de8+kmBtT+4chWuSclNdmDFrExhC/cCCn8iHmNerJRCepLe+WVNSm4TZ/uxk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: Juergen Gross <jgross@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, "Rafael J .
 Wysocki" <rafael@kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, <linux-pci@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-acpi@vger.kernel.org>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [RFC KERNEL PATCH v8 1/3] xen/pci: Add xen_reset_device_function_state
Date: Fri, 7 Jun 2024 15:51:07 +0800
Message-ID: <20240607075109.126277-2-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607075109.126277-1-Jiqian.Chen@amd.com>
References: <20240607075109.126277-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SA2PEPF0000150A:EE_|LV8PR12MB9155:EE_
X-MS-Office365-Filtering-Correlation-Id: 928a9cb5-6ea4-4008-bf46-08dc86c6a4e1
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|1800799015|82310400017|376005|36860700004;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?aDjIB9rY12E5p5u8juaALLAd34EcSmifUvcr68eOpQVKMmmiKwoSxNLjDsIb?=
 =?us-ascii?Q?u/QBvcl+fEDlb06aqQl4lfDXZjZ9JM3tHbOD/CCyvMCp4YrIoDoAGPW2SUAH?=
 =?us-ascii?Q?fQLRhZf9iLIqbWgnthHBUaY0JJr/dOOpdaEq89p9ie7hIoR7/GG3I946Mgfz?=
 =?us-ascii?Q?6rZk1XaA1HY5fLAl5HA/vqxDB/1lCTvre4/BVBHGQHobpFqotzQftuBe7AQt?=
 =?us-ascii?Q?OWYL+boxKovPF9mjtApGNens/9JbGVT9/tfIg5I8Jmocv1sBTRe7GOWNtED1?=
 =?us-ascii?Q?fe5bc80GGYiW6f22tmiVHVAMpxpwEfVSTptXf+xVJv/po3pvAdiWpgS2qCt9?=
 =?us-ascii?Q?fnqNd14i9HmrJxKQK3huKuWRBE8rr1cwulPhQYpmZq4nsIWYpGlbiHe8Eryx?=
 =?us-ascii?Q?NllzLoQ++CxV3FbmNgcQXLy3uG/4IZ5VjoCoqh8aNIEXTrejDbPgp4ww9lHw?=
 =?us-ascii?Q?mj7sdFIjk2CkGAkWq9rFZ7hqMNkC9Em1AUvKJwu+TdFcJLohGkOcZ+xNslCa?=
 =?us-ascii?Q?hIQtzmHHqnHemuA5aQ7MMXP38nnQtSS6ZfBqD7ZMANyk4263eZQAG/0i9DZv?=
 =?us-ascii?Q?2ouOisTNZJCkQ4uQkAu+vuDS/2ftOzdALxa0KcUqZx6H0U7WBirpm1TnE32y?=
 =?us-ascii?Q?sAGsj698pattta0MpUUzgjZuuLAgynya7Nob+cleKzJJ0EeP4auwsy1R38qE?=
 =?us-ascii?Q?0NVHyxB8KLTJgy06PBl1snOC34THdt4+5nXbkYYeGtrZrWysTHV9DPI+55n5?=
 =?us-ascii?Q?mRZnuwDtwECofsH9F+BPeRtF3z8PSKMtqZfcwJjCKblEQr6QQWLqPwikg02e?=
 =?us-ascii?Q?uKsYgVzeePA1pQXqq54kCtRsBu6ZudRuOiGkgxiom6+EQCZRdOV6b+2mtkxR?=
 =?us-ascii?Q?qAVsNCQzXktyAH32L2WUjo4ajHDDt2TNycaxJqKgjEX9xXxdjxpFtMq56p1o?=
 =?us-ascii?Q?+ApDlCFH3TcV7ZwqDVLf7xo5brRK95UYG/MqYj3gdyDNmwsFhYf21bJipeIU?=
 =?us-ascii?Q?uYnOs2xvD0gNEZDJo2g3INUbzC2QoMZF/gkmA1aVoCSJbk6e6IV+ZVIHVLP3?=
 =?us-ascii?Q?WWndboiRZ2Zv7TYXjjF8kQh3P8iPYfq7fp2poNZZ1umhOVLK8zRIBzOGK41o?=
 =?us-ascii?Q?zxhiTF5jX4lyjHwe2H/ZtfZ7NoQ6vapxyyEUmvmWrhpefOImuXoTna0bddcH?=
 =?us-ascii?Q?rY2th1iZGHwd5LZk8c+y97imJxhMPxuoRl1Kq9onQVd8rXVXF9zXDYNZSqVp?=
 =?us-ascii?Q?qeBISLhFhHJf5bUYKsO+xiC9E2yQidxpoesApHmPu5MzV/qbPLMSc2F33e4y?=
 =?us-ascii?Q?ZDKuKnFSHDoD70GyDfi82EMqJvLH6ty3wTpSaORarwIUESs3doqGA6f6lQzS?=
 =?us-ascii?Q?VTJuk9THDunQ5f2nReJd6WkgAE0ctvoDpWS5oGjQ72cL6AMjXQ=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(1800799015)(82310400017)(376005)(36860700004);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 07:51:30.6734
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 928a9cb5-6ea4-4008-bf46-08dc86c6a4e1
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SA2PEPF0000150A.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV8PR12MB9155

When a device has been reset on the dom0 side, the vpci on the
Xen side doesn't get notified, so the cached state in vpci goes
stale with respect to the real device state.
To solve that problem, add a new function to clear all vpci
device state when a device is reset on the dom0 side.

Call that function in pcistub_init_device: when using
"pci-assignable-add" to assign a passthrough device in Xen, it
resets the passthrough device, the vpci state goes out of date,
and the device then fails to restore its BAR state.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
RFC: this needs to wait for the corresponding first patch on the Xen side to be merged.
---
 drivers/xen/pci.c                  | 25 +++++++++++++++++++++++++
 drivers/xen/xen-pciback/pci_stub.c | 18 +++++++++++++++---
 include/xen/interface/physdev.h    |  7 +++++++
 include/xen/pci.h                  |  6 ++++++
 4 files changed, 53 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/pci.c b/drivers/xen/pci.c
index 72d4e3f193af..57093e395982 100644
--- a/drivers/xen/pci.c
+++ b/drivers/xen/pci.c
@@ -177,6 +177,31 @@ static int xen_remove_device(struct device *dev)
 	return r;
 }
 
+enum pci_device_state_reset_type {
+	DEVICE_RESET_FLR,
+	DEVICE_RESET_COLD,
+	DEVICE_RESET_WARM,
+	DEVICE_RESET_HOT,
+};
+
+struct pci_device_state_reset {
+	struct physdev_pci_device dev;
+	enum pci_device_state_reset_type reset_type;
+};
+
+int xen_reset_device_function_state(const struct pci_dev *dev)
+{
+	struct pci_device_state_reset device = {
+		.dev.seg = pci_domain_nr(dev->bus),
+		.dev.bus = dev->bus->number,
+		.dev.devfn = dev->devfn,
+		.reset_type = DEVICE_RESET_FLR,
+	};
+
+	return HYPERVISOR_physdev_op(PHYSDEVOP_pci_device_state_reset, &device);
+}
+EXPORT_SYMBOL_GPL(xen_reset_device_function_state);
+
 static int xen_pci_notifier(struct notifier_block *nb,
 			    unsigned long action, void *data)
 {
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index e34b623e4b41..73062e531c34 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -89,6 +89,16 @@ static struct pcistub_device *pcistub_device_alloc(struct pci_dev *dev)
 	return psdev;
 }
 
+static int pcistub_reset_device_state(struct pci_dev *dev)
+{
+	__pci_reset_function_locked(dev);
+
+	if (!xen_pv_domain())
+		return xen_reset_device_function_state(dev);
+	else
+		return 0;
+}
+
 /* Don't call this directly as it's called by pcistub_device_put */
 static void pcistub_device_release(struct kref *kref)
 {
@@ -107,7 +117,7 @@ static void pcistub_device_release(struct kref *kref)
 	/* Call the reset function which does not take lock as this
 	 * is called from "unbind" which takes a device_lock mutex.
 	 */
-	__pci_reset_function_locked(dev);
+	pcistub_reset_device_state(dev);
 	if (dev_data &&
 	    pci_load_and_free_saved_state(dev, &dev_data->pci_saved_state))
 		dev_info(&dev->dev, "Could not reload PCI state\n");
@@ -284,7 +294,7 @@ void pcistub_put_pci_dev(struct pci_dev *dev)
 	 * (so it's ready for the next domain)
 	 */
 	device_lock_assert(&dev->dev);
-	__pci_reset_function_locked(dev);
+	pcistub_reset_device_state(dev);
 
 	dev_data = pci_get_drvdata(dev);
 	ret = pci_load_saved_state(dev, dev_data->pci_saved_state);
@@ -420,7 +430,9 @@ static int pcistub_init_device(struct pci_dev *dev)
 		dev_err(&dev->dev, "Could not store PCI conf saved state!\n");
 	else {
 		dev_dbg(&dev->dev, "resetting (FLR, D3, etc) the device\n");
-		__pci_reset_function_locked(dev);
+		err = pcistub_reset_device_state(dev);
+		if (err)
+			goto config_release;
 		pci_restore_state(dev);
 	}
 	/* Now disable the device (this also ensures some private device
diff --git a/include/xen/interface/physdev.h b/include/xen/interface/physdev.h
index a237af867873..b50646c993dd 100644
--- a/include/xen/interface/physdev.h
+++ b/include/xen/interface/physdev.h
@@ -256,6 +256,13 @@ struct physdev_pci_device_add {
  */
 #define PHYSDEVOP_prepare_msix          30
 #define PHYSDEVOP_release_msix          31
+/*
+ * Notify the hypervisor that a PCI device has been reset, so that any
+ * internally cached state is regenerated.  Should be called after any
+ * device reset performed by the hardware domain.
+ */
+#define PHYSDEVOP_pci_device_state_reset 32
+
 struct physdev_pci_device {
     /* IN */
     uint16_t seg;
diff --git a/include/xen/pci.h b/include/xen/pci.h
index b8337cf85fd1..7941809ab729 100644
--- a/include/xen/pci.h
+++ b/include/xen/pci.h
@@ -4,10 +4,16 @@
 #define __XEN_PCI_H__
 
 #if defined(CONFIG_XEN_DOM0)
+int xen_reset_device_function_state(const struct pci_dev *dev);
 int xen_find_device_domain_owner(struct pci_dev *dev);
 int xen_register_device_domain_owner(struct pci_dev *dev, uint16_t domain);
 int xen_unregister_device_domain_owner(struct pci_dev *dev);
 #else
+static inline int xen_reset_device_function_state(const struct pci_dev *dev)
+{
+	return -1;
+}
+
 static inline int xen_find_device_domain_owner(struct pci_dev *dev)
 {
 	return -1;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:51:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:51:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736329.1142388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNv-0002S4-68; Fri, 07 Jun 2024 07:51:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736329.1142388; Fri, 07 Jun 2024 07:51:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNv-0002Rx-2X; Fri, 07 Jun 2024 07:51:39 +0000
Received: by outflank-mailman (input) for mailman id 736329;
 Fri, 07 Jun 2024 07:51:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUNt-0001uI-TU
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:51:37 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20621.outbound.protection.outlook.com
 [2a01:111:f403:2418::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c3dfbff1-24a2-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 09:51:37 +0200 (CEST)
Received: from SN7PR04CA0238.namprd04.prod.outlook.com (2603:10b6:806:127::33)
 by CH2PR12MB4071.namprd12.prod.outlook.com (2603:10b6:610:7b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.34; Fri, 7 Jun
 2024 07:51:33 +0000
Received: from SA2PEPF00001505.namprd04.prod.outlook.com
 (2603:10b6:806:127:cafe::17) by SN7PR04CA0238.outlook.office365.com
 (2603:10b6:806:127::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.21 via Frontend
 Transport; Fri, 7 Jun 2024 07:51:33 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SA2PEPF00001505.mail.protection.outlook.com (10.167.242.37) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7633.15 via Frontend Transport; Fri, 7 Jun 2024 07:51:33 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 02:51:29 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3dfbff1-24a2-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DcJz2Dog3W2VKtI3+JLMf0WVGO2P5FoqVMd7r70ar/oPM0RoNovRgOPT8aPUCPtSllYlPM6N4AdRvdQFykR+HttX4BVXX9o/jr5Fhv3t7iCxcrG/lGclbxJZW3UADdwRtzRgoxalISSuU/GqMjqdBMNs4Ccnj4/6jbs1Zj4Q1ASREibWGRYI6qPKGsHR5JnbuwUR3NB+ZPEGjLBv1TYqSYNyX6eqtGJzNuFb7tpiydjM79s9GSh61HOnd8bMOPYPmMFjAbCdDWMe1bOEdPWt/JQm9ak7iL6uqzWa1uVu8F6gkC8uwep6ejARnjGcVtX28woQ7DdqNyxo/BUkJov10g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=En8bxbSZDHQoAf0vgERQjVcF+6/gOfNjPiz97+LyFn0=;
 b=QMZRgzSJRzV0O8Nctm5F8C5REZLM3CMdcvN/lS9oEcyS3Xjsqxt8gwr+SDZPSJJ0h3+g9H/s4yZcWywpXvnR+RG10pdRy5TkqE9iUeusHGrMN91z4C556w1Oh6ymsIRMQtAod21VW8UQb/IhIn7tIRTDFBvWrAFNcsc/e6OvXgdlhhcKS90WjtADDc0Ya17kHldZtHcDwirgizvGCvVJd6XVTBGbGuogSs5L2iVot7wERTCKAANR/T2VzqnpE4Ugf2eTFlBRdGDWrjDEC4gfAQZEJ6eLgONkgC6Z3B/3yUJiUM249jNGZ48pUXC53n22bxCv8kzQTLz/5hTvqdigZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=En8bxbSZDHQoAf0vgERQjVcF+6/gOfNjPiz97+LyFn0=;
 b=p//Eo88w5UJ2aGK82DXEc1uLu1F5hYVa8roDeHWDb9QsEgEFA0aXMwVZeLZ5qg07V1k97LkAVuamlHdapm+8cRLyjgkHE08br0AZnpamtF/6P3v/KH62j45/QYGGrzWifmvgh4U+ekHU+i96MPEPWo0C5dcFREBEHoh1Low+/Kc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: Juergen Gross <jgross@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, "Rafael J .
 Wysocki" <rafael@kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, <linux-pci@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-acpi@vger.kernel.org>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [RFC KERNEL PATCH v8 2/3] xen/pvh: Setup gsi for passthrough device
Date: Fri, 7 Jun 2024 15:51:08 +0800
Message-ID: <20240607075109.126277-3-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607075109.126277-1-Jiqian.Chen@amd.com>
References: <20240607075109.126277-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SA2PEPF00001505:EE_|CH2PR12MB4071:EE_
X-MS-Office365-Filtering-Correlation-Id: 72998cb6-13e8-46cc-407d-08dc86c6a66e
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|376005|1800799015|36860700004|82310400017;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?04lJ6K7HjkkORc1ZkIo7kjNT5naMCWjWNN1iCyffm3en1aEMaEdKB9GxQlMt?=
 =?us-ascii?Q?46fVljwKopwSnLvepyA22xQuV3FTnD9UUlEf3MW22XUkGvWVtdvBbVJwPIsf?=
 =?us-ascii?Q?UyMegBXs1BjgXVbKHtE+Di6H8pcruwFZYLhh1XMsPaSmLIZ5URt7guxxzVxd?=
 =?us-ascii?Q?NgWxK84GsLRtGFoAE87MhtUdmMuqoHTVbDVzUc/v+tAc/oIRDq++XjuBP5nv?=
 =?us-ascii?Q?nVtjFw01Xux1Cf7DA9tykItI8qG/XGTI2wOY4JgIObKinMZPjkyxYtJYY8w2?=
 =?us-ascii?Q?eXD8aKOxpH5ev1y64fOro7c1HWBUJJ057yvuVPCq9O35qDpEE8/r4wM/6Esb?=
 =?us-ascii?Q?GWnQX173sUH2ctH3WNdgNJ7fSXWslO8PQcpLkoxwC3GvZVMBLbUJ8IYvHnmV?=
 =?us-ascii?Q?RBtEZ4YeWw+lVZkKFBGZHBwdRezaJkAqDwxSbDRlK/WzDBLeMECGT6pt2amB?=
 =?us-ascii?Q?qCK7hgAZnBLN8e1+4v8lBhqmFsvumNlZmAmvGoeeFIG/rxiSS5kRjra38Nut?=
 =?us-ascii?Q?OAKeZhl55yj9OJp7us2tJqoD3cnwEt5B8tY0/SPOhn/wsmLUeCMaxXZk5it9?=
 =?us-ascii?Q?WDGxq5nGiYcgRzIVjaAGXw1fFuEeVU69Q9+X+nTTJXaO19YmEN/qBRIkzvSU?=
 =?us-ascii?Q?CGvP6ke1apZvH5jotCVcyG5hSIpFRKY8jSa60lzvR9+DwAZ0jGNFRW8p8jSq?=
 =?us-ascii?Q?YoSQ+HIA763rhKbc8k1HptztgQ/t9t5KMhBJj4RJDQ0fAwu9/BXWCijSAISL?=
 =?us-ascii?Q?z92OfedmH7EKV28rdnY0QJev9PYj85QZgDhp3qbv8d/AE7G0pLR53IabEqJp?=
 =?us-ascii?Q?7jDNhwd2aObru2Zsf9I76iAbWABWGG4+zi0httZrpXYmaTliVdWS6HQvthjQ?=
 =?us-ascii?Q?o3qyZy+wiRpunWfe1/XGm5nruLGQEoHSdr/SgejdHrwBQmo+J7pL+gEpro9L?=
 =?us-ascii?Q?A2yKsoxc9GEroEbhwvw79zs3EMw/QuPEsG3J0y/N7PZKfPVNNyT1JWpAk+s9?=
 =?us-ascii?Q?N8OuXuQrTS9IWNzpbZa4eDc9NKOUniCvd8y8WWaWJ1O7nD+x71ZNpWs7obiK?=
 =?us-ascii?Q?AKZr5WA7izx7Sj1ajYiOLy2JQwxnFKA+U+HcdXk+y2Zy6mEGBlsZatvgnqCw?=
 =?us-ascii?Q?9/nCjSpy2W8OYpLorLJByrPTJON3LgviYEyF73maWq63qNUQPLXiW0lm8MY9?=
 =?us-ascii?Q?UAw6viP593yYexNtJcf2uHn19GGyR1wMFfCze56LXGRmfwgFVwr5etjy3j+1?=
 =?us-ascii?Q?oNUMNYNaCCNO1WqmSjXN5mSwsSV4fLyjZN/NBru5InCRtwMiICMYN1SupG/I?=
 =?us-ascii?Q?qmXWvUjQUjqzy5oK+N2gbnW4vRvcshN8G9JFaRZwNoRzjsN2BeeWqhr+r9zG?=
 =?us-ascii?Q?BiBEDHae2S3w5zQ96cHdbt/VWt5knELPM8a+/PwJC1xDGYqlHQ=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(376005)(1800799015)(36860700004)(82310400017);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 07:51:33.2876
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 72998cb6-13e8-46cc-407d-08dc86c6a66e
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SA2PEPF00001505.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4071

In PVH dom0, GSIs don't get registered, but the GSI of a
passthrough device must be configured for it to be mappable
into a domU.

When assigning a device for passthrough, proactively set up
the GSI of the device during that process.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
RFC: this needs to wait for the corresponding third patch on the Xen side to be merged.
---
 arch/x86/xen/enlighten_pvh.c       | 23 ++++++++++++++
 drivers/acpi/pci_irq.c             |  2 +-
 drivers/xen/acpi.c                 | 50 ++++++++++++++++++++++++++++++
 drivers/xen/xen-pciback/pci_stub.c | 21 +++++++++++++
 include/linux/acpi.h               |  1 +
 include/xen/acpi.h                 | 10 ++++++
 6 files changed, 106 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/enlighten_pvh.c b/arch/x86/xen/enlighten_pvh.c
index 27a2a02ef8fb..6caadf9c00ab 100644
--- a/arch/x86/xen/enlighten_pvh.c
+++ b/arch/x86/xen/enlighten_pvh.c
@@ -4,6 +4,7 @@
 #include <linux/mm.h>
 
 #include <xen/hvc-console.h>
+#include <xen/acpi.h>
 
 #include <asm/bootparam.h>
 #include <asm/io_apic.h>
@@ -27,6 +28,28 @@
 bool __ro_after_init xen_pvh;
 EXPORT_SYMBOL_GPL(xen_pvh);
 
+#ifdef CONFIG_XEN_DOM0
+int xen_pvh_setup_gsi(int gsi, int trigger, int polarity)
+{
+	int ret;
+	struct physdev_setup_gsi setup_gsi;
+
+	setup_gsi.gsi = gsi;
+	setup_gsi.triggering = (trigger == ACPI_EDGE_SENSITIVE ? 0 : 1);
+	setup_gsi.polarity = (polarity == ACPI_ACTIVE_HIGH ? 0 : 1);
+
+	ret = HYPERVISOR_physdev_op(PHYSDEVOP_setup_gsi, &setup_gsi);
+	if (ret == -EEXIST) {
+		xen_raw_printk("GSI %d is already set up\n", gsi);
+		ret = 0;
+	} else if (ret)
+		xen_raw_printk("Failed to set up GSI %d!\n", gsi);
+
+	return ret;
+}
+EXPORT_SYMBOL_GPL(xen_pvh_setup_gsi);
+#endif
+
 void __init xen_pvh_init(struct boot_params *boot_params)
 {
 	u32 msr;
diff --git a/drivers/acpi/pci_irq.c b/drivers/acpi/pci_irq.c
index ff30ceca2203..630fe0a34bc6 100644
--- a/drivers/acpi/pci_irq.c
+++ b/drivers/acpi/pci_irq.c
@@ -288,7 +288,7 @@ static int acpi_reroute_boot_interrupt(struct pci_dev *dev,
 }
 #endif /* CONFIG_X86_IO_APIC */
 
-static struct acpi_prt_entry *acpi_pci_irq_lookup(struct pci_dev *dev, int pin)
+struct acpi_prt_entry *acpi_pci_irq_lookup(struct pci_dev *dev, int pin)
 {
 	struct acpi_prt_entry *entry = NULL;
 	struct pci_dev *bridge;
diff --git a/drivers/xen/acpi.c b/drivers/xen/acpi.c
index 6893c79fd2a1..9e2096524fbc 100644
--- a/drivers/xen/acpi.c
+++ b/drivers/xen/acpi.c
@@ -30,6 +30,7 @@
  * IN THE SOFTWARE.
  */
 
+#include <linux/pci.h>
 #include <xen/acpi.h>
 #include <xen/interface/platform.h>
 #include <asm/xen/hypercall.h>
@@ -75,3 +76,52 @@ int xen_acpi_notify_hypervisor_extended_sleep(u8 sleep_state,
 	return xen_acpi_notify_hypervisor_state(sleep_state, val_a,
 						val_b, true);
 }
+
+struct acpi_prt_entry {
+	struct acpi_pci_id      id;
+	u8                      pin;
+	acpi_handle             link;
+	u32                     index;
+};
+
+int xen_acpi_get_gsi_info(struct pci_dev *dev,
+						  int *gsi_out,
+						  int *trigger_out,
+						  int *polarity_out)
+{
+	int gsi;
+	u8 pin;
+	struct acpi_prt_entry *entry;
+	int trigger = ACPI_LEVEL_SENSITIVE;
+	int polarity = acpi_irq_model == ACPI_IRQ_MODEL_GIC ?
+				      ACPI_ACTIVE_HIGH : ACPI_ACTIVE_LOW;
+
+	if (!dev || !gsi_out || !trigger_out || !polarity_out)
+		return -EINVAL;
+
+	pin = dev->pin;
+	if (!pin)
+		return -EINVAL;
+
+	entry = acpi_pci_irq_lookup(dev, pin);
+	if (entry) {
+		if (entry->link)
+			gsi = acpi_pci_link_allocate_irq(entry->link,
+							 entry->index,
+							 &trigger, &polarity,
+							 NULL);
+		else
+			gsi = entry->index;
+	} else
+		gsi = -1;
+
+	if (gsi < 0)
+		return -EINVAL;
+
+	*gsi_out = gsi;
+	*trigger_out = trigger;
+	*polarity_out = polarity;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(xen_acpi_get_gsi_info);
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index 73062e531c34..6b22e45188f5 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -21,6 +21,9 @@
 #include <xen/events.h>
 #include <xen/pci.h>
 #include <xen/xen.h>
+#ifdef CONFIG_XEN_ACPI
+#include <xen/acpi.h>
+#endif
 #include <asm/xen/hypervisor.h>
 #include <xen/interface/physdev.h>
 #include "pciback.h"
@@ -367,6 +370,9 @@ static int pcistub_match(struct pci_dev *dev)
 static int pcistub_init_device(struct pci_dev *dev)
 {
 	struct xen_pcibk_dev_data *dev_data;
+#ifdef CONFIG_XEN_ACPI
+	int gsi, trigger, polarity;
+#endif
 	int err = 0;
 
 	dev_dbg(&dev->dev, "initializing...\n");
@@ -435,6 +441,21 @@ static int pcistub_init_device(struct pci_dev *dev)
 			goto config_release;
 		pci_restore_state(dev);
 	}
+
+#ifdef CONFIG_XEN_ACPI
+	err = xen_acpi_get_gsi_info(dev, &gsi, &trigger, &polarity);
+	if (err) {
+		dev_err(&dev->dev, "Failed to get GSI info!\n");
+		goto config_release;
+	}
+
+	if (xen_initial_domain() && xen_pvh_domain()) {
+		err = xen_pvh_setup_gsi(gsi, trigger, polarity);
+		if (err)
+			goto config_release;
+	}
+#endif
+
 	/* Now disable the device (this also ensures some private device
 	 * data is setup before we export)
 	 */
diff --git a/include/linux/acpi.h b/include/linux/acpi.h
index 34829f2c517a..f8690b02bba4 100644
--- a/include/linux/acpi.h
+++ b/include/linux/acpi.h
@@ -361,6 +361,7 @@ void acpi_unregister_gsi (u32 gsi);
 
 struct pci_dev;
 
+struct acpi_prt_entry *acpi_pci_irq_lookup(struct pci_dev *dev, int pin);
 int acpi_pci_irq_enable (struct pci_dev *dev);
 void acpi_penalize_isa_irq(int irq, int active);
 bool acpi_isa_irq_available(int irq);
diff --git a/include/xen/acpi.h b/include/xen/acpi.h
index b1e11863144d..9b50027113f3 100644
--- a/include/xen/acpi.h
+++ b/include/xen/acpi.h
@@ -67,10 +67,20 @@ static inline void xen_acpi_sleep_register(void)
 		acpi_suspend_lowlevel = xen_acpi_suspend_lowlevel;
 	}
 }
+int xen_pvh_setup_gsi(int gsi, int trigger, int polarity);
 #else
 static inline void xen_acpi_sleep_register(void)
 {
 }
+
+static inline int xen_pvh_setup_gsi(int gsi, int trigger, int polarity)
+{
+	return -1;
+}
 #endif
 
+int xen_acpi_get_gsi_info(struct pci_dev *dev,
+						  int *gsi_out,
+						  int *trigger_out,
+						  int *polarity_out);
 #endif	/* _XEN_ACPI_H */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:51:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736330.1142397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNz-0002l8-Co; Fri, 07 Jun 2024 07:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736330.1142397; Fri, 07 Jun 2024 07:51:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUNz-0002ks-9f; Fri, 07 Jun 2024 07:51:43 +0000
Received: by outflank-mailman (input) for mailman id 736330;
 Fri, 07 Jun 2024 07:51:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUNx-0001uI-CQ
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:51:41 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20614.outbound.protection.outlook.com
 [2a01:111:f403:2009::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c5f2a6bb-24a2-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 09:51:40 +0200 (CEST)
Received: from SN6PR16CA0054.namprd16.prod.outlook.com (2603:10b6:805:ca::31)
 by MW4PR12MB6976.namprd12.prod.outlook.com (2603:10b6:303:20a::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.34; Fri, 7 Jun
 2024 07:51:37 +0000
Received: from SA2PEPF0000150A.namprd04.prod.outlook.com
 (2603:10b6:805:ca:cafe::89) by SN6PR16CA0054.outlook.office365.com
 (2603:10b6:805:ca::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.21 via Frontend
 Transport; Fri, 7 Jun 2024 07:51:36 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SA2PEPF0000150A.mail.protection.outlook.com (10.167.242.42) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7633.15 via Frontend Transport; Fri, 7 Jun 2024 07:51:36 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 02:51:33 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5f2a6bb-24a2-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PN8D4JTyF3+mf9JWWnT7I+aTgDsZQwjahjPtaYkaOeUx16pB0ixzCh4lTCD4X0tCUuO+0jc2TV/PwSc+D4wyKjZfjdaf2E5egV/met1TVuMzx1sHN2m7YpyfjiJZdOInOXiXtbv9gnKNIvWzWnhehxqHkoloOk5ov/pCZgH76UFXH1jo+rKJYJ9tqKZvI6Qs6PSUwqRGQgpBtURuAxUmwEj64arK7EFbE9EvXAYNTmKghJkwT7sDZokAqEIZeH8h0GQBYF+VT5OUMio8Z/bJNyXjwMSL3OcTROhcuvJ02umD6IzVTrAQbHFRwMY3u9xlIhV60FyOTjY7wC6dZFSH1A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qtwVEw1eqAZTxz5ZOltSmokorN9ksIbbFVkKSMPl8Yw=;
 b=FQK7f7aYZmxCjaxp5beBd9BOyzXmN/z7nbInaHAybKZraYOpQUQ6vAa8/S4Yhe7C77sgkBPuORfubel8QPXruS3CHzsQe6Hk3o+MKReOWc4iFB+Keo/ZGMEyI5MciuhIvX2nWwyfmzbU9a8Nj8GG4g/k+j80eamDQsaFc/TE2ZmK3+dghF8szPQfEnGCXwuHz7GGfzFAI+RnbtpEwKxwGKIrtGaGFvC5DG5pybh1TnOZ8mDXcn9ypTziOoDiIMhmH+QVaPtCgsLqxgM6fIkmksWztaMI1kBE8ECNVm2Tt6jBNEZnLTIPdkCgXh4n6BueMbN+0Vna5p4j4Odp5KZPrw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=suse.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qtwVEw1eqAZTxz5ZOltSmokorN9ksIbbFVkKSMPl8Yw=;
 b=2G07Qca5v/z8aLxsjbak/bX9ggG0nDq3oW2L2nTngx9L37jEjFZ5WTKatNlYuP78km67+V7ws4zi1BsKtoGCkhS6JzXnGgafDUIgIhDpvizr7L8V14hH23KB4DQpuqmGAG+YOPFG2vMeZH1Wj5VC7IMze/MHus1zUyBxHjYcpb4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: Juergen Gross <jgross@suse.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Bjorn Helgaas <bhelgaas@google.com>, "Rafael J .
 Wysocki" <rafael@kernel.org>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, <linux-pci@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <linux-acpi@vger.kernel.org>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [RFC KERNEL PATCH v8 3/3] xen/privcmd: Add new syscall to get gsi from dev
Date: Fri, 7 Jun 2024 15:51:09 +0800
Message-ID: <20240607075109.126277-4-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607075109.126277-1-Jiqian.Chen@amd.com>
References: <20240607075109.126277-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

In PVH dom0, Linux uses its normal local interrupt machinery: the
IRQ for a GSI is allocated dynamically, on a first-requested,
first-served basis. IRQ numbers are handed out in ascending order,
but the GSIs being requested need not arrive in ascending order
(GSI 38 may be requested before GSI 28), so an IRQ number is in
general not equal to its GSI number. When a device is passed
through, QEMU uses the device's GSI number to set up the pirq
mapping, but it obtains that number from
/sys/bus/pci/devices/<sbdf>/irq, which holds the IRQ, not the GSI,
so the mapping fails.
Furthermore, current Linux code offers userspace no way to obtain
the GSI.

To address this, record the GSI of each pcistub device when
initializing pcistub, and add a new ioctl to privcmd so that
userspace can query the GSI when needed.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
---
RFC: this needs review, and it depends on the previous patch of this series being merged first.
---
 drivers/xen/privcmd.c              | 28 ++++++++++++++++++++++
 drivers/xen/xen-pciback/pci_stub.c | 38 +++++++++++++++++++++++++++---
 include/uapi/xen/privcmd.h         |  7 ++++++
 include/xen/acpi.h                 |  9 +++++++
 4 files changed, 79 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 67dfa4778864..5809b3168f25 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -45,6 +45,9 @@
 #include <xen/page.h>
 #include <xen/xen-ops.h>
 #include <xen/balloon.h>
+#ifdef CONFIG_XEN_ACPI
+#include <xen/acpi.h>
+#endif
 
 #include "privcmd.h"
 
@@ -842,6 +845,27 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
 	return rc;
 }
 
+static long privcmd_ioctl_gsi_from_dev(struct file *file, void __user *udata)
+{
+#ifdef CONFIG_XEN_ACPI
+	struct privcmd_gsi_from_dev kdata;
+
+	if (copy_from_user(&kdata, udata, sizeof(kdata)))
+		return -EFAULT;
+
+	kdata.gsi = pcistub_get_gsi_from_sbdf(kdata.sbdf);
+	if (kdata.gsi == -1)
+		return -EINVAL;
+
+	if (copy_to_user(udata, &kdata, sizeof(kdata)))
+		return -EFAULT;
+
+	return 0;
+#else
+	return -EINVAL;
+#endif
+}
+
 #ifdef CONFIG_XEN_PRIVCMD_EVENTFD
 /* Irqfd support */
 static struct workqueue_struct *irqfd_cleanup_wq;
@@ -1529,6 +1553,10 @@ static long privcmd_ioctl(struct file *file,
 		ret = privcmd_ioctl_ioeventfd(file, udata);
 		break;
 
+	case IOCTL_PRIVCMD_GSI_FROM_DEV:
+		ret = privcmd_ioctl_gsi_from_dev(file, udata);
+		break;
+
 	default:
 		break;
 	}
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index 6b22e45188f5..9d791d7a8098 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -56,6 +56,9 @@ struct pcistub_device {
 
 	struct pci_dev *dev;
 	struct xen_pcibk_device *pdev;/* non-NULL if struct pci_dev is in use */
+#ifdef CONFIG_XEN_ACPI
+	int gsi;
+#endif
 };
 
 /* Access to pcistub_devices & seized_devices lists and the initialize_devices
@@ -88,6 +91,9 @@ static struct pcistub_device *pcistub_device_alloc(struct pci_dev *dev)
 
 	kref_init(&psdev->kref);
 	spin_lock_init(&psdev->lock);
+#ifdef CONFIG_XEN_ACPI
+	psdev->gsi = -1;
+#endif
 
 	return psdev;
 }
@@ -220,6 +226,25 @@ static struct pci_dev *pcistub_device_get_pci_dev(struct xen_pcibk_device *pdev,
 	return pci_dev;
 }
 
+#ifdef CONFIG_XEN_ACPI
+int pcistub_get_gsi_from_sbdf(unsigned int sbdf)
+{
+	struct pcistub_device *psdev;
+	int domain = (sbdf >> 16) & 0xffff;
+	int bus = PCI_BUS_NUM(sbdf);
+	int slot = PCI_SLOT(sbdf);
+	int func = PCI_FUNC(sbdf);
+
+	psdev = pcistub_device_find(domain, bus, slot, func);
+
+	if (!psdev)
+		return -1;
+
+	return psdev->gsi;
+}
+EXPORT_SYMBOL_GPL(pcistub_get_gsi_from_sbdf);
+#endif
+
 struct pci_dev *pcistub_get_pci_dev_by_slot(struct xen_pcibk_device *pdev,
 					    int domain, int bus,
 					    int slot, int func)
@@ -367,14 +392,20 @@ static int pcistub_match(struct pci_dev *dev)
 	return found;
 }
 
-static int pcistub_init_device(struct pci_dev *dev)
+static int pcistub_init_device(struct pcistub_device *psdev)
 {
 	struct xen_pcibk_dev_data *dev_data;
+	struct pci_dev *dev;
 #ifdef CONFIG_XEN_ACPI
 	int gsi, trigger, polarity;
 #endif
 	int err = 0;
 
+	if (!psdev)
+		return -EINVAL;
+
+	dev = psdev->dev;
+
 	dev_dbg(&dev->dev, "initializing...\n");
 
 	/* The PCI backend is not intended to be a module (or to work with
@@ -448,6 +479,7 @@ static int pcistub_init_device(struct pci_dev *dev)
 		dev_err(&dev->dev, "Fail to get gsi info!\n");
 		goto config_release;
 	}
+	psdev->gsi = gsi;
 
 	if (xen_initial_domain() && xen_pvh_domain()) {
 		err = xen_pvh_setup_gsi(gsi, trigger, polarity);
@@ -495,7 +527,7 @@ static int __init pcistub_init_devices_late(void)
 
 		spin_unlock_irqrestore(&pcistub_devices_lock, flags);
 
-		err = pcistub_init_device(psdev->dev);
+		err = pcistub_init_device(psdev);
 		if (err) {
 			dev_err(&psdev->dev->dev,
 				"error %d initializing device\n", err);
@@ -565,7 +597,7 @@ static int pcistub_seize(struct pci_dev *dev,
 		spin_unlock_irqrestore(&pcistub_devices_lock, flags);
 
 		/* don't want irqs disabled when calling pcistub_init_device */
-		err = pcistub_init_device(psdev->dev);
+		err = pcistub_init_device(psdev);
 
 		spin_lock_irqsave(&pcistub_devices_lock, flags);
 
diff --git a/include/uapi/xen/privcmd.h b/include/uapi/xen/privcmd.h
index 8b8c5d1420fe..220e7670a113 100644
--- a/include/uapi/xen/privcmd.h
+++ b/include/uapi/xen/privcmd.h
@@ -126,6 +126,11 @@ struct privcmd_ioeventfd {
 	__u8 pad[2];
 };
 
+struct privcmd_gsi_from_dev {
+	__u32 sbdf;
+	int gsi;
+};
+
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
@@ -157,5 +162,7 @@ struct privcmd_ioeventfd {
 	_IOW('P', 8, struct privcmd_irqfd)
 #define IOCTL_PRIVCMD_IOEVENTFD					\
 	_IOW('P', 9, struct privcmd_ioeventfd)
+#define IOCTL_PRIVCMD_GSI_FROM_DEV				\
+	_IOC(_IOC_NONE, 'P', 10, sizeof(struct privcmd_gsi_from_dev))
 
 #endif /* __LINUX_PUBLIC_PRIVCMD_H__ */
diff --git a/include/xen/acpi.h b/include/xen/acpi.h
index 9b50027113f3..d6315fd559a9 100644
--- a/include/xen/acpi.h
+++ b/include/xen/acpi.h
@@ -83,4 +83,13 @@ int xen_acpi_get_gsi_info(struct pci_dev *dev,
 						  int *gsi_out,
 						  int *trigger_out,
 						  int *polarity_out);
+
+#ifdef CONFIG_XEN_PCI_STUB
+int pcistub_get_gsi_from_sbdf(unsigned int sbdf);
+#else
+static inline int pcistub_get_gsi_from_sbdf(unsigned int sbdf)
+{
+	return -1;
+}
+#endif
 #endif	/* _XEN_ACPI_H */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 07:54:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 07:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736349.1142407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUQw-0004AU-R3; Fri, 07 Jun 2024 07:54:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736349.1142407; Fri, 07 Jun 2024 07:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUQw-0004AN-OZ; Fri, 07 Jun 2024 07:54:46 +0000
Received: by outflank-mailman (input) for mailman id 736349;
 Fri, 07 Jun 2024 07:54:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFUQv-0004AH-Pe
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 07:54:45 +0000
Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com
 [2a00:1450:4864:20::333])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 33a0a160-24a3-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 09:54:43 +0200 (CEST)
Received: by mail-wm1-x333.google.com with SMTP id
 5b1f17b1804b1-4215fc19abfso12634355e9.3
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 00:54:43 -0700 (PDT)
Received: from [172.31.7.231] ([62.28.210.62])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4214bf49109sm75516875e9.1.2024.06.07.00.54.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 00:54:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 33a0a160-24a3-11ef-b4bb-af5377834399
Message-ID: <30228bc4-fc42-45fc-915e-cf66be05314c@suse.com>
Date: Fri, 7 Jun 2024 09:54:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 05/16] x86: introduce using_{svm,vmx} macros
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <9860c4b497038abda71084ea3bce698eab3b277c.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9860c4b497038abda71084ea3bce698eab3b277c.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:16, Sergiy Kibrik wrote:
> As we now have SVM/VMX config options for enabling/disabling these features
> completely in the build, we need some build-time checks to ensure that vmx/svm
> code can be used and things compile. Macros cpu_has_{svm,vmx} used to be doing
> such checks at runtime, however they do not check if SVM/VMX support is
> enabled in the build.
> 
> Also cpu_has_{svm,vmx} can potentially be called from non-{VMX,SVM} build
> yet running on {VMX,SVM}-enabled CPU, so would correctly indicate that VMX/SVM
> is indeed supported by CPU, but code to drive it can't be used.
> 
> New macros using_{vmx,svm} indicate that both the CPU _and_ the build provide
> corresponding technology support, while cpu_has_{vmx,svm} still remains for
> informational runtime purpose, just as their naming suggests.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> CC: Jan Beulich <jbeulich@suse.com>
> ---
> Here I've followed Jan's suggestion on not touching cpu_has_{vmx,svm} but
> adding separate macros for solving build problems, and then using these
> where required.

As an isolated change this then may want expressing via Suggested-by:.
However, I question whether these wouldn't better be introduced
together with their (first) uses (and then perhaps no such tag).

> --- a/xen/arch/x86/include/asm/hvm/hvm.h
> +++ b/xen/arch/x86/include/asm/hvm/hvm.h
> @@ -26,6 +26,9 @@ extern bool opt_hvm_fep;
>  #define opt_hvm_fep 0
>  #endif
>  
> +#define using_vmx ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx )
> +#define using_svm ( IS_ENABLED(CONFIG_SVM) && cpu_has_svm )

Nit: Stray blanks immediately next to the parentheses.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:00:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736365.1142418 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUW9-00069v-Km; Fri, 07 Jun 2024 08:00:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736365.1142418; Fri, 07 Jun 2024 08:00:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUW9-00069o-HS; Fri, 07 Jun 2024 08:00:09 +0000
Received: by outflank-mailman (input) for mailman id 736365;
 Fri, 07 Jun 2024 08:00:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFUW8-00069i-3L
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:00:08 +0000
Received: from mail-lf1-x12b.google.com (mail-lf1-x12b.google.com
 [2a00:1450:4864:20::12b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f41c7c4d-24a3-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 10:00:06 +0200 (CEST)
Received: by mail-lf1-x12b.google.com with SMTP id
 2adb3069b0e04-52b9dda4906so2599756e87.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 01:00:06 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-421580fe3bfsm80158245e9.8.2024.06.07.01.00.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 01:00:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f41c7c4d-24a3-11ef-90a2-e314d9c70b13
Message-ID: <5f2f3407-7374-45eb-bd06-b7b47ce31fef@suse.com>
Date: Fri, 7 Jun 2024 10:00:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 06/16] x86/nestedhvm: switch to using_{svm,vmx}
 check
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <63ba1d4e043315693957093688670d36ffa65d28.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <63ba1d4e043315693957093688670d36ffa65d28.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:18, Sergiy Kibrik wrote:
> Use using_{svm,vmx} instead of cpu_has_{svm,vmx}; the former not only checks
> whether the CPU supports the corresponding virtualization technology, but
> also whether it is enabled in the build configuration.
> 
> This fixes the build when VMX=n or SVM=n, as the start_nested_{svm,vmx}
> routine(s) are then not available.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
preferably, as said there, with the introduction of those macros
moved here, and title/description adjusted accordingly.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:01:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:01:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736370.1142428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUXc-0006rY-Uh; Fri, 07 Jun 2024 08:01:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736370.1142428; Fri, 07 Jun 2024 08:01:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUXc-0006rR-Rj; Fri, 07 Jun 2024 08:01:40 +0000
Received: by outflank-mailman (input) for mailman id 736370;
 Fri, 07 Jun 2024 08:01:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=m5uW=NJ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sFUXb-0006ps-5z
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:01:39 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ab0743c-24a4-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 10:01:38 +0200 (CEST)
Received: by mail-lf1-x136.google.com with SMTP id
 2adb3069b0e04-52bc1acb9f0so76386e87.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 01:01:38 -0700 (PDT)
Received: from [172.31.7.231] ([62.48.184.126])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42158148f43sm78902655e9.33.2024.06.07.01.01.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 01:01:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ab0743c-24a4-11ef-90a2-e314d9c70b13
Message-ID: <9255e072-f86a-4c82-90dc-9c41d11326fc@suse.com>
Date: Fri, 7 Jun 2024 10:01:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 07/16] x86/hvm: guard AMD-V and Intel VT-x
 hvm_function_table initializers
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <25d4ade03f22ae4eb260af3eae5f48528f2e3ca8.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <25d4ade03f22ae4eb260af3eae5f48528f2e3ca8.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:20, Sergiy Kibrik wrote:
> From: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Since start_svm() is AMD-V specific while start_vmx() is Intel VT-x specific,
> any of them can potentially be excluded from build completely with CONFIG_SVM
> or CONFIG_VMX options respectively, hence we have to explicitly check if
> they're available and use specific macros using_{svm,vmx} for that.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
yet again I think this could sensibly be folded with the earlier
two patches.

Jan



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:12:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736378.1142438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhd-0001FT-Ty; Fri, 07 Jun 2024 08:12:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736378.1142438; Fri, 07 Jun 2024 08:12:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhd-0001Es-Pp; Fri, 07 Jun 2024 08:12:01 +0000
Received: by outflank-mailman (input) for mailman id 736378;
 Fri, 07 Jun 2024 08:12:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUhc-0001Em-Br
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:12:00 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20601.outbound.protection.outlook.com
 [2a01:111:f403:2416::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9aefbb61-24a5-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 10:11:57 +0200 (CEST)
Received: from MW4PR04CA0086.namprd04.prod.outlook.com (2603:10b6:303:6b::31)
 by DS7PR12MB8082.namprd12.prod.outlook.com (2603:10b6:8:e6::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.34; Fri, 7 Jun
 2024 08:11:53 +0000
Received: from CO1PEPF000044F9.namprd21.prod.outlook.com
 (2603:10b6:303:6b:cafe::8e) by MW4PR04CA0086.outlook.office365.com
 (2603:10b6:303:6b::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.20 via Frontend
 Transport; Fri, 7 Jun 2024 08:11:52 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1PEPF000044F9.mail.protection.outlook.com (10.167.241.199) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.0 via Frontend Transport; Fri, 7 Jun 2024 08:11:52 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 03:11:47 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9aefbb61-24a5-11ef-b4bb-af5377834399
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>
Subject: [XEN PATCH v9 0/5] Support device passthrough when dom0 is PVH on Xen
Date: Fri, 7 Jun 2024 16:11:22 +0800
Message-ID: <20240607081127.126593-1-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1PEPF000044F9:EE_|DS7PR12MB8082:EE_
X-MS-Office365-Filtering-Correlation-Id: d3ee6df7-8fda-4ebd-98df-08dc86c97ced
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|36860700004|376005|7416005|1800799015|82310400017;
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(36860700004)(376005)(7416005)(1800799015)(82310400017);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 08:11:52.0543
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d3ee6df7-8fda-4ebd-98df-08dc86c97ced
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1PEPF000044F9.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB8082

Hi All,
This is v9 of the series to support device passthrough when dom0 is PVH on Xen.
v8->v9 changes:
* patch#1: Move pcidevs_unlock below write_lock, and remove "ASSERT(pcidevs_locked());" from vpci_reset_device_state;
           Add pci_device_state_reset_type to distinguish the reset types.
* patch#2: Add a comment above PHYSDEVOP_map_pirq to describe why this hypercall is needed.
           Change "!is_pv_domain(d)" to "is_hvm_domain(d)", and "map.domid == DOMID_SELF" to "d == current->domain".
* patch#3: Remove the check of PHYSDEVOP_setup_gsi, since the same check already exists below.
* patch#5: Change the commit message to better describe why we need this new hypercall.
           Add a comment above "if ( is_pv_domain(current->domain) || has_pirq(current->domain) )" to explain why this check is needed.
           Add gsi_2_irq to translate a gsi to an irq, instead of assuming gsi == irq.
           Add explicit padding to struct xen_domctl_gsi_permission.


Best regards,
Jiqian Chen



v7->v8 changes:
* patch#2: Add the domid check (domid == DOMID_SELF) to prevent a self-map when the guest doesn't use pirq.
           That check was missed in the previous version.
* patch#4: Due to changes in how the kernel obtains the gsi, add a new function to get the gsi by
           passing in the sbdf of the pci device.
* patch#5: Remove the parameter "is_gsi"; when a gsi exists, use a new function pci_device_set_gsi in
           pci_add_dm_done to do the map_pirq and grant permission, which makes the code logic more intuitive.


v6->v7 changes:
* patch#4: Due to changes in how the kernel obtains the gsi, add a new function to get the gsi from
           the irq, instead of from the gsi sysfs.
* patch#5: Fix a variable usage issue (rc -> r).


v5->v6 changes:
* patch#1: Add Reviewed-by from Stefano and Stewart. Rebase the code and change the old functions
           vpci_remove_device and vpci_add_handlers to vpci_deassign_device and vpci_assign_device.
* patch#2: Add Reviewed-by from Stefano.
* patch#3: Remove the unnecessary "ASSERT(!has_pirq(currd));".
* patch#4: Fix some coding style issues under the tools directory.
* patch#5: Modify some variable names and code logic to make the code easier to understand: use gsi
           by default and stay compatible with older kernel versions that continue to use irq.


v4->v5 changes:
* patch#1: Add the pci_lock-wrapped function vpci_reset_device_state.
* patch#2: Move the check for the self map_pirq into physdev.c, change it to check whether the caller
           has the PIRQ flag, and just break for PHYSDEVOP_(un)map_pirq in hvm_physdev_op.
* patch#3: Return -EOPNOTSUPP instead, and use ASSERT(!has_pirq(currd));.
* patch#4: Was patch#5 in v4; patch#5 of v5 depends on it. Also add errno handling and
           the Reviewed-by from Stefano.
* patch#5: Was patch#4 in v4. New implementation adding the hypercall XEN_DOMCTL_gsi_permission to
           grant gsi permissions.


v3->v4 changes:
* patch#1: Change the comment of PHYSDEVOP_pci_device_state_reset; move the printing after pcidevs_unlock.
* patch#2: Add a check to prevent a PVH self-map.
* patch#3: New patch; the addition of PHYSDEVOP_setup_gsi for PVH is split out as a separate patch.
* patch#4: New patch to solve the map_pirq problem of PVH dom0: use the gsi to grant irq permission in
           XEN_DOMCTL_irq_permission.
* patch#5: To be compatible with previous kernel versions, still use the irq when there is no gsi sysfs.
v4 link:
https://lore.kernel.org/xen-devel/20240105070920.350113-1-Jiqian.Chen@amd.com/T/#t

v2->v3 changes:
* patch#1: Move the content out of pci_reset_device_state and delete pci_reset_device_state; add the
           xsm_resource_setup_pci check for PHYSDEVOP_pci_device_state_reset; add a description for
           PHYSDEVOP_pci_device_state_reset.
* patch#2: Due to changes in the implementation of the second kernel-side patch (it now does setup_gsi
           and map_pirq when assigning a device for passthrough), add PHYSDEVOP_setup_gsi for PVH dom0;
           we need to support self-mapping.
* patch#3: Due to changes in the implementation of the second kernel-side patch (it adds a new sysfs
           entry for the gsi instead of a new syscall), read the gsi number from the gsi sysfs.
v3 link:
https://lore.kernel.org/xen-devel/20231210164009.1551147-1-Jiqian.Chen@amd.com/T/#t

v2 link:
https://lore.kernel.org/xen-devel/20231124104136.3263722-1-Jiqian.Chen@amd.com/T/#t
Below is the description of the v2 cover letter:
This series of patches is the v2 of the implementation of passthrough when dom0 is PVH on Xen.
We sent v1 upstream before, but it had many problems and we received lots of suggestions.
I will introduce all the issues these patches try to fix and the differences between v1 and v2.

Issues we encountered:
1. pci_stub failed to write the BAR for a passthrough device.
Problem: when we run "sudo xl pci-assignable-add <sbdf>" to assign a device, pci_stub calls
pcistub_init_device() -> pci_restore_state() -> pci_restore_config_space() ->
pci_restore_config_space_range() -> pci_restore_config_dword() -> pci_write_config_dword(). The pci
config write traps into bar_write() in Xen, but bar->enabled was set earlier, so the write is not
allowed; then, when Qemu configures the passthrough device in xen_pt_realize(), it gets invalid BAR
values.

Reason: we don't tell vPCI that the device has been reset, so the state cached in pdev->vpci is out
of date and differs from the real device state.

Solution: the first kernel patch (xen/pci: Add xen_reset_device_state function) and the first Xen
patch (xen/vpci: Clear all vpci status of device) add a new hypercall to reset the state stored in
vPCI when the state of the real device has changed.
Thanks to Roger for the suggestion for this v2; it is different from
v1 (https://lore.kernel.org/xen-devel/20230312075455.450187-3-ray.huang@amd.com/), which simply
allowed domU to write the pci BAR and did not comply with the design principles of vPCI.

2. failed to do PHYSDEVOP_map_pirq when dom0 is PVH
Problem: an HVM domU does PHYSDEVOP_map_pirq for a passthrough device by using the gsi; see
xen_pt_realize->xc_physdev_map_pirq and pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq
then calls into Xen, but in hvm_physdev_op(), PHYSDEVOP_map_pirq is not allowed.

Reason: in hvm_physdev_op(), the variable "currd" is the PVH dom0, and PVH has no X86_EMU_USE_PIRQ
flag, so it fails the has_pirq check.

Solution: I think we need to allow PHYSDEVOP_map_pirq when "currd" is dom0 (at present dom0 is PVH).
The second Xen patch (x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0) allows PVH dom0 to do
PHYSDEVOP_map_pirq. This v2 patch is better than v1, which simply removed the has_pirq check
(xen https://lore.kernel.org/xen-devel/20230312075455.450187-4-ray.huang@amd.com/).

3. the gsi of a passthrough device is not unmasked
 3.1 failed to check the permission of the pirq
 3.2 the gsi of the passthrough device was not registered in PVH dom0

Problem:
3.1 the callback function pci_add_dm_done() is called when qemu configures a passthrough device for
a domU. It calls xc_domain_irq_permission() -> pirq_access_permitted() to check whether the gsi has a
corresponding mapping in dom0, but it doesn't, so the check fails. See
XEN_DOMCTL_irq_permission->pirq_access_permitted: "current" is the PVH dom0 and the returned irq is 0.
3.2 it's possible for a gsi (iow: vIO-APIC pin) to never get registered on PVH dom0, because the
devices of PVH dom0 use MSI(-X) interrupts. However, the IO-APIC pin must be configured for it to be
mappable into a domU.

Reason: after searching the code, I found that "map_pirq" and "register_gsi" are done in
vioapic_write_redirent->vioapic_hwdom_map_gsi when the gsi (aka the ioapic pin) is unmasked in PVH
dom0. So both problems boil down to the gsi of the passthrough device not being unmasked.

Solution: the second kernel patch (xen/pvh: Unmask irq for passthrough device in PVH dom0) calls
unmask_irq() when we assign a device for passthrough, so that passthrough devices get the gsi mapping
on PVH dom0 and the gsi can be registered. This v2 patch is different from
v1 (kernel https://lore.kernel.org/xen-devel/20230312120157.452859-5-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-5-ray.huang@amd.com/):
v1 performed "map_pirq" and "register_gsi" for all pci devices on PVH dom0, which is unnecessary and
may cause multiple registrations.

4. failed to map a pirq for a gsi
Problem: qemu calls xc_physdev_map_pirq() to map a passthrough device's gsi to a pirq in
xen_pt_realize(), but it fails.

Reason: per its implementation, xc_physdev_map_pirq() needs the gsi rather than the irq, but qemu
passes the irq and treats it as the gsi; the irq is read from /sys/bus/pci/devices/xxxx:xx:xx.x/irq
in xen_host_pci_device_get(). But the gsi number is not necessarily equal to the irq. On PVH dom0,
when an irq is allocated for a gsi in acpi_register_gsi_ioapic(), the allocation is dynamic and
follows a first-come, first-served principle. If you debug the kernel code (see __irq_alloc_descs),
you will find that irq numbers are allocated in increasing order, while gsi registration order is
not: gsi 38 may be registered before gsi 28, so gsi 38 gets a smaller irq number than gsi 28, and
hence gsi != irq.

Solution: we can record the relation between the gsi and the irq, and then, when userspace (qemu)
wants the gsi, do a translation. The third kernel patch (xen/privcmd: Add new syscall to get gsi
from irq) records all the relations in acpi_register_gsi_xen_pvh() when dom0 initializes the pci
devices, and provides a syscall for userspace to get the gsi from the irq. The third Xen patch
(tools: Add new function to get gsi from irq) adds a new function xc_physdev_gsi_from_irq() that
calls the new syscall added on the kernel side. Userspace can then use that function to get the gsi,
and xc_physdev_map_pirq() will succeed. This v2 patch is the same as
v1 (kernel https://lore.kernel.org/xen-devel/20230312120157.452859-6-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-6-ray.huang@amd.com/).

As for the v2 qemu patch, it only changes an included header file; the rest is similar to
v1 (qemu https://lore.kernel.org/xen-devel/20230312092244.451465-19-ray.huang@amd.com/), which just
calls xc_physdev_gsi_from_irq() to get the gsi from the irq.


Jiqian Chen (5):
  xen/vpci: Clear all vpci status of device
  x86/pvh: Allow (un)map_pirq when dom0 is PVH
  x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
  tools: Add new function to get gsi from dev
  domctl: Add XEN_DOMCTL_gsi_permission to grant gsi

 tools/include/xen-sys/Linux/privcmd.h |  7 +++
 tools/include/xencall.h               |  2 +
 tools/include/xenctrl.h               |  7 +++
 tools/libs/call/core.c                |  5 +++
 tools/libs/call/libxencall.map        |  2 +
 tools/libs/call/linux.c               | 15 +++++++
 tools/libs/call/private.h             |  9 ++++
 tools/libs/ctrl/xc_domain.c           | 15 +++++++
 tools/libs/ctrl/xc_physdev.c          |  4 ++
 tools/libs/light/libxl_pci.c          | 63 +++++++++++++++++++++++++++
 xen/arch/x86/domctl.c                 | 38 ++++++++++++++++
 xen/arch/x86/hvm/hypercall.c          |  8 ++++
 xen/arch/x86/include/asm/io_apic.h    |  2 +
 xen/arch/x86/io_apic.c                | 21 +++++++++
 xen/arch/x86/mpparse.c                |  3 +-
 xen/arch/x86/physdev.c                | 24 ++++++++++
 xen/drivers/pci/physdev.c             | 43 ++++++++++++++++++
 xen/drivers/vpci/vpci.c               |  9 ++++
 xen/include/public/domctl.h           | 10 +++++
 xen/include/public/physdev.h          |  7 +++
 xen/include/xen/pci.h                 | 16 +++++++
 xen/include/xen/vpci.h                |  6 +++
 xen/xsm/flask/hooks.c                 |  1 +
 23 files changed, 315 insertions(+), 2 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:12:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736380.1142458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhj-0001nx-Fi; Fri, 07 Jun 2024 08:12:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736380.1142458; Fri, 07 Jun 2024 08:12:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhj-0001nn-Bg; Fri, 07 Jun 2024 08:12:07 +0000
Received: by outflank-mailman (input) for mailman id 736380;
 Fri, 07 Jun 2024 08:12:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUhi-0001Em-Bn
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:12:06 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20620.outbound.protection.outlook.com
 [2a01:111:f400:7e88::620])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f70c644-24a5-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 10:12:04 +0200 (CEST)
Received: from MW4PR04CA0072.namprd04.prod.outlook.com (2603:10b6:303:6b::17)
 by IA1PR12MB8408.namprd12.prod.outlook.com (2603:10b6:208:3db::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.35; Fri, 7 Jun
 2024 08:11:59 +0000
Received: from CO1PEPF000044F9.namprd21.prod.outlook.com
 (2603:10b6:303:6b:cafe::c6) by MW4PR04CA0072.outlook.office365.com
 (2603:10b6:303:6b::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.16 via Frontend
 Transport; Fri, 7 Jun 2024 08:11:59 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1PEPF000044F9.mail.protection.outlook.com (10.167.241.199) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.0 via Frontend Transport; Fri, 7 Jun 2024 08:11:59 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 03:11:55 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f70c644-24a5-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bOHfxV66U/ZBaYSt0CgDDxeC9Jodb/G08TioPOvCG1ekA/qg7AaAUf3HQfXSmPiJ7rTcoZ+iPSjFRoEs8oU0lsA2wfmKaLSS7/Orl7mU5Qmzku8aarRgOGlTRTI20kZkuWurbSLHVyQ5ovOUeU3ios5EsageE+ayC5dqHvJs0Eee57BFCxpGe1+ngpWCLfRlxjATMEFRRaTBABHpAcU5SX+vN63PRcwvdCa6f/dGjJMzAiNmy85RBe2kLKow4d4Tr5qj5b04Yxd1lMR4NtqmHDxUFGIsJ1nJ7lUkhwZtve9B7Uy2zD+GDWcNNsigvibOSULHp5q8DzuN27xzdCGb+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=46lGlqrGZ1pXbqPXjWR29h7ukmWWAI3m9/j58I4jIsk=;
 b=MbQa0nEaW6iQva8TbT1wsM8SBRMnNSAV81Gi7u3iBdr1/WwuUb/MOLFiSuxcslLtoqUPbJ4bncVEF0XAMos1myxz21mnKU7QvHjH00PO0CtK3UEy5WJv7ZkMQuZHy5vIQ6lWmcS0n1iD6IeUJ4rfQHLp4ZQ3GCNuCtkiJOhQ009+dfDyxdV0jpAxfs8QGPLy2W1pyvPVXMzZm6imD8yoQtsbj7d/HDeMtHAUiCa/PUTi66Vrkbd7Ny7wlJr7/W0+KsI9LO5/DB2NXaZOqfTrnTiwml4mZOOSkaP7Sm0L4xr0e9I8Ehq3TH1lj8uqoYw//wlMDG22bIky0rc2ZXT4TQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=46lGlqrGZ1pXbqPXjWR29h7ukmWWAI3m9/j58I4jIsk=;
 b=tyc5IM07WsQKI/JzwcW/FFJzbBpepySzmneiqeNH3FztBw4PlSJPZLJAkowjmndyutcIxZbdNHdzidACvuioNrb8zG5PBG3/anSbiA1cxRZLWY262XEej7KBfVj7/10CZVT7lw7fi5ro+3LbWnJZin0/O2gxgTPGgtD+ga6CxII=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Date: Fri, 7 Jun 2024 16:11:24 +0800
Message-ID: <20240607081127.126593-3-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607081127.126593-1-Jiqian.Chen@amd.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1PEPF000044F9:EE_|IA1PR12MB8408:EE_
X-MS-Office365-Filtering-Correlation-Id: eff896dd-3367-42be-b41a-08dc86c98157
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|1800799015|7416005|82310400017|376005|36860700004;
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(1800799015)(7416005)(82310400017)(376005)(36860700004);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 08:11:59.4606
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: eff896dd-3367-42be-b41a-08dc86c98157
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1PEPF000044F9.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB8408

When running Xen with a PVH dom0 and an HVM domU, the HVM domain maps
a pirq for a passthrough device by using the gsi, see the qemu code
xen_pt_realize->xc_physdev_map_pirq and the libxl code
pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls
into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq is not allowed
because currd is the PVH dom0, which has no X86_EMU_USE_PIRQ flag,
so it fails the has_pirq check.

So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
PHYSDEVOP_unmap_pirq so that the failure path can unmap the pirq.
Also add a new check to prevent a self-map when the subject domain
has no PIRQ flag.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/hypercall.c |  6 ++++++
 xen/arch/x86/physdev.c       | 24 ++++++++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 0fab670a4871..fa5d50a0dd22 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -71,8 +71,14 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     switch ( cmd )
     {
+    /*
+     * Only permitted for the management of other domains.
+     * Further restrictions are enforced in do_physdev_op.
+     */
     case PHYSDEVOP_map_pirq:
     case PHYSDEVOP_unmap_pirq:
+        break;
+
     case PHYSDEVOP_eoi:
     case PHYSDEVOP_irq_status_query:
     case PHYSDEVOP_get_free_pirq:
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 7efa17cf4c1e..61999882f836 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -305,11 +305,23 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_map_pirq: {
         physdev_map_pirq_t map;
         struct msi_info msi;
+        struct domain *d;
 
         ret = -EFAULT;
         if ( copy_from_guest(&map, arg, 1) != 0 )
             break;
 
+        d = rcu_lock_domain_by_any_id(map.domid);
+        if ( d == NULL )
+            return -ESRCH;
+        /* Prevent self-map when domain has no X86_EMU_USE_PIRQ flag */
+        if ( is_hvm_domain(d) && !has_pirq(d) && d == current->domain )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+        rcu_unlock_domain(d);
+
         switch ( map.type )
         {
         case MAP_PIRQ_TYPE_MSI_SEG:
@@ -343,11 +355,23 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case PHYSDEVOP_unmap_pirq: {
         struct physdev_unmap_pirq unmap;
+        struct domain *d;
 
         ret = -EFAULT;
         if ( copy_from_guest(&unmap, arg, 1) != 0 )
             break;
 
+        d = rcu_lock_domain_by_any_id(unmap.domid);
+        if ( d == NULL )
+            return -ESRCH;
+        /* Prevent self-unmap when domain has no X86_EMU_USE_PIRQ flag */
+        if ( is_hvm_domain(d) && !has_pirq(d) && d == current->domain )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+        rcu_unlock_domain(d);
+
         ret = physdev_unmap_pirq(unmap.domid, unmap.pirq);
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:12:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736379.1142448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhf-0001Ui-8j; Fri, 07 Jun 2024 08:12:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736379.1142448; Fri, 07 Jun 2024 08:12:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhf-0001Ub-4V; Fri, 07 Jun 2024 08:12:03 +0000
Received: by outflank-mailman (input) for mailman id 736379;
 Fri, 07 Jun 2024 08:12:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUhe-0001Hx-BA
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:12:02 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f403:2417::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9df7b388-24a5-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 10:12:01 +0200 (CEST)
Received: from MW4PR03CA0015.namprd03.prod.outlook.com (2603:10b6:303:8f::20)
 by SA0PR12MB4351.namprd12.prod.outlook.com (2603:10b6:806:71::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.34; Fri, 7 Jun
 2024 08:11:56 +0000
Received: from CO1PEPF000044FD.namprd21.prod.outlook.com
 (2603:10b6:303:8f:cafe::50) by MW4PR03CA0015.outlook.office365.com
 (2603:10b6:303:8f::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.18 via Frontend
 Transport; Fri, 7 Jun 2024 08:11:56 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1PEPF000044FD.mail.protection.outlook.com (10.167.241.203) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.0 via Frontend Transport; Fri, 7 Jun 2024 08:11:56 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 03:11:51 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9df7b388-24a5-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TtXWo/YhoncSCRsiQgsMZWsj/zRZmJJYFCNiSNYWly/h2y0N6W8Vw5zqlpKeSbbAk/Nq70Cm2tKRfw6L6PkwDXznWewUQwGodHD3Io43/6nfpEQTPXOfcGCCWkSRpRX/lLKK+NoE3u5O8IqAlZ9uCemmshR6h6cBJndCQxL79DC+bV/saiwWyN14s05TEh9BrgVUp8Nr68keVnuu1mkkwukSLX3ohIaQxrHHSNXy/P+UDxaNAan/0l3gWxX2kmzkyhBIsS44Om7vaE5C5oJK1JIdqND9wUuChxzSlEu2k07Ko/gUrNrZWUNhi39Wtw5UjYNFUYuUASlkl12l7qQptw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=RSoTXYGPxu2fXUgCiTtfeb3ST/hmymGeL62Y34ZHHYQ=;
 b=kAjcxGBQEAeBHfPDzIareAv/DoHkSX+RFri4IbCDJ5X+63RwnNHPJFCeinePN2aC03Cav+4FiKD4eaTA+r2z5dRE6TBQWZWe113oFYfNQ+Z7uKaWoVq2zTE58y4mvDJqrtLmk/1pS3RF+Azwq6WzTCg3E/+KxRpe7hif9ELWWSllcbthRxX5FIqGqG8DJcggcYoShzYqNBqFIoFv7b7KsPxLucqyup6U/LsF0dPorrhQ3o3QW2ZEHluCUFUuWQfIxwJtVYdvtAO7aUrSWv17fnVrL8F7z/DX0HH0TPEilwMEKGiCUX/LF3RJTq5Ma/37ccdr3FEEHX/1szKz6TzKAQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RSoTXYGPxu2fXUgCiTtfeb3ST/hmymGeL62Y34ZHHYQ=;
 b=zCtGm6XQowhcNZyi33HeMJMrDchWdQoaKF4E8+8k4xz+uXBEavbcOIj7SpCTLKoH0v4BGH7bH9Q0cIzmzGCvJ8nPfozz2rKtBzEnKHLR4jyMPsCg3//2qkEFZFPmdfKlB8izh6f+DlEeI54vej3BI5wG01RWBYBNFUzNMA60PYA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>, Stewart Hildebrand <stewart.hildebrand@amd.com>
Subject: [XEN PATCH v9 1/5] xen/vpci: Clear all vpci status of device
Date: Fri, 7 Jun 2024 16:11:23 +0800
Message-ID: <20240607081127.126593-2-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607081127.126593-1-Jiqian.Chen@amd.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1PEPF000044FD:EE_|SA0PR12MB4351:EE_
X-MS-Office365-Filtering-Correlation-Id: 4d6b74c1-20d2-4b0c-7e29-08dc86c97f55
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|36860700004|7416005|376005|1800799015|82310400017;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?l+WuoomI7aw8xVoFDmdZPh2LUXFIIKa+njQmG1YQIGmdSAgntO2dbi2rubyw?=
 =?us-ascii?Q?iFJNV9jRZQPZs9H5zxpl5cB6Y2sPMZTN8w4sDCRqx9+MOaKaBelPk7SmOY3o?=
 =?us-ascii?Q?a8STSzofc7nJzm5iqeKfbeZC2WpCCTrhRM7BgD7/AqHU3u7G70n0I0MMegm1?=
 =?us-ascii?Q?b4iM2cjaMpte/f8XIzGSmWRWw4m16KI9YH6ApT47xodKOA3iARRNbruP+g6r?=
 =?us-ascii?Q?1ZeODxBv/NJ4Ox39GVyEYzFMjiylKmPIxTqI1I1dJeuf8I+w67dIhBPFOWZH?=
 =?us-ascii?Q?wu1Aq6SHIuhSVdIsimSQ2ZKJikFSQSa+Dava8ZCcKM+8KoQYo2Gg+BUe0sh9?=
 =?us-ascii?Q?erFobe9ij/sgrC+ZhU/eevCEM29JEc0n85XqHdeSSbA8WasFQgIloU/Pypti?=
 =?us-ascii?Q?Dlr1aoYo0rmRCFiDFNxhVcCGQQQh1ZA38mn8kT3cHAH07LrE9CImAyB3LpyH?=
 =?us-ascii?Q?HlkyfuXwwxw3FBVr/epcBZkhrifFO4HGy8GCLafLQN5G7Es2ELnpO44VDt8D?=
 =?us-ascii?Q?Son6sr0AOQMxGvOqg/07aB8xhnKtImmZYr/khuhJZyQGIvnEdMXgFJmw4itP?=
 =?us-ascii?Q?UfNt7tMvXCYBV9WpBwKhrlE/hybrJYrUI7scFtStg/Hg2KM4ovM0ggDkereC?=
 =?us-ascii?Q?J1oqzWOZlev9pUlslbj7y/pwmWfq6pkkqpfcb1tVvMIKKELxUWIlpc9FOD5y?=
 =?us-ascii?Q?dR4drT5UnzHxCYo5ETbsBS3cJcmDDoWre1V4o5vg/5pFASsc7zv1QZzTVHAQ?=
 =?us-ascii?Q?4kJ7xM7dkukmr+BSuFBt4KvuK91oB41+uwl/TSzi3GOM7U+YlSRbXFI9y6Rm?=
 =?us-ascii?Q?OKmWrJOLM1c1ik4fM7p6/SsXyGmhO2NwoYQ9qJsdAK0mRXC6o+NG4XDLNH+s?=
 =?us-ascii?Q?tDiTUr7bhno4Yuf02aaAcyeXEYaK6awY47ZFdP2pgZQcXLJ8qwLY2sbwpFmY?=
 =?us-ascii?Q?eatMEZbkqAp5UIysxHHCjJJm5+Ld6vyGsAAkKRmmDk42jgkw7AAa7tGyK7LM?=
 =?us-ascii?Q?U8axvHXxrD/1/RRQQMOBNqrX+RJcefr9eMlLnmBIJqgk5CFJktTObIl9/JT4?=
 =?us-ascii?Q?6gzsPY9xELa3U8CZGbqfn7nFfy9cmiMG2441cjLB5ZD1e7GF2UWeQ23KRfOo?=
 =?us-ascii?Q?oM0L5BF9nEQgJnfMReyOYnzzDNicPFCealucMP9RJsg9TaqGfhD7gxV1Eaie?=
 =?us-ascii?Q?TP6NiWmt2Z+1YTRrUJVUEvxOKLW/CS8pcSUgJfYAnfLJBacC6ZeyfdbnLyYd?=
 =?us-ascii?Q?zgQvNxFpz0TyMBOCv1izvMA8bgEh5rpJMqD24PfjcjM1cpMwANmYvDzsqZR0?=
 =?us-ascii?Q?fVOXoNr+pHQwmlQH53TXNSCeuaJvXSM8BfNn4GBr/H92iSTOX/QE8wdIhFfA?=
 =?us-ascii?Q?9mFd7RSOJ2j15cNFZCKNTM6RdxSCptqM11c90B+MzUSnAC2Grw=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(36860700004)(7416005)(376005)(1800799015)(82310400017);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 08:11:56.1063
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4d6b74c1-20d2-4b0c-7e29-08dc86c97f55
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1PEPF000044FD.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4351

When a device has been reset on the dom0 side, the vpci on
the Xen side doesn't get notified, so the cached state in
vpci is out of date compared with the real device state.
To solve that problem, add a new hypercall to clear all vpci
device state. When the device is reset on the dom0 side,
dom0 can call this hypercall to notify vpci.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Reviewed-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/hypercall.c |  1 +
 xen/drivers/pci/physdev.c    | 43 ++++++++++++++++++++++++++++++++++++
 xen/drivers/vpci/vpci.c      |  9 ++++++++
 xen/include/public/physdev.h |  7 ++++++
 xen/include/xen/pci.h        | 16 ++++++++++++++
 xen/include/xen/vpci.h       |  6 +++++
 6 files changed, 82 insertions(+)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 7fb3136f0c7c..0fab670a4871 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -83,6 +83,7 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_pci_mmcfg_reserved:
     case PHYSDEVOP_pci_device_add:
     case PHYSDEVOP_pci_device_remove:
+    case PHYSDEVOP_pci_device_state_reset:
     case PHYSDEVOP_dbgp_op:
         if ( !is_hardware_domain(currd) )
             return -ENOSYS;
diff --git a/xen/drivers/pci/physdev.c b/xen/drivers/pci/physdev.c
index 42db3e6d133c..1cce508a73b1 100644
--- a/xen/drivers/pci/physdev.c
+++ b/xen/drivers/pci/physdev.c
@@ -2,11 +2,17 @@
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
+#include <xen/vpci.h>
 
 #ifndef COMPAT
 typedef long ret_t;
 #endif
 
+static const struct pci_device_state_reset_method
+                    pci_device_state_reset_methods[] = {
+    [ DEVICE_RESET_FLR ].reset_fn = vpci_reset_device_state,
+};
+
 ret_t pci_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret;
@@ -67,6 +73,43 @@ ret_t pci_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case PHYSDEVOP_pci_device_state_reset: {
+        struct pci_device_state_reset dev_reset;
+        struct physdev_pci_device *dev;
+        struct pci_dev *pdev;
+        pci_sbdf_t sbdf;
+
+        if ( !is_pci_passthrough_enabled() )
+            return -EOPNOTSUPP;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(&dev_reset, arg, 1) != 0 )
+            break;
+        dev = &dev_reset.dev;
+        sbdf = PCI_SBDF(dev->seg, dev->bus, dev->devfn);
+
+        ret = xsm_resource_setup_pci(XSM_PRIV, sbdf.sbdf);
+        if ( ret )
+            break;
+
+        pcidevs_lock();
+        pdev = pci_get_pdev(NULL, sbdf);
+        if ( !pdev )
+        {
+            pcidevs_unlock();
+            ret = -ENODEV;
+            break;
+        }
+
+        write_lock(&pdev->domain->pci_lock);
+        pcidevs_unlock();
+        ret = pci_device_state_reset_methods[dev_reset.reset_type].reset_fn(pdev);
+        write_unlock(&pdev->domain->pci_lock);
+        if ( ret )
+            printk(XENLOG_ERR "%pp: failed to reset vPCI device state\n", &sbdf);
+        break;
+    }
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 1e6aa5d799b9..ff67c2550ccb 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -172,6 +172,15 @@ int vpci_assign_device(struct pci_dev *pdev)
 
     return rc;
 }
+
+int vpci_reset_device_state(struct pci_dev *pdev)
+{
+    ASSERT(rw_is_write_locked(&pdev->domain->pci_lock));
+
+    vpci_deassign_device(pdev);
+    return vpci_assign_device(pdev);
+}
+
 #endif /* __XEN__ */
 
 static int vpci_register_cmp(const struct vpci_register *r1,
diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index f0c0d4727c0b..a71da5892e5f 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -296,6 +296,13 @@ DEFINE_XEN_GUEST_HANDLE(physdev_pci_device_add_t);
  */
 #define PHYSDEVOP_prepare_msix          30
 #define PHYSDEVOP_release_msix          31
+/*
+ * Notify the hypervisor that a PCI device has been reset, so that any
+ * internally cached state is regenerated.  Should be called after any
+ * device reset performed by the hardware domain.
+ */
+#define PHYSDEVOP_pci_device_state_reset 32
+
 struct physdev_pci_device {
     /* IN */
     uint16_t seg;
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 63e49f0117e9..376981f9da98 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -156,6 +156,22 @@ struct pci_dev {
     struct vpci *vpci;
 };
 
+struct pci_device_state_reset_method {
+    int (*reset_fn)(struct pci_dev *pdev);
+};
+
+enum pci_device_state_reset_type {
+    DEVICE_RESET_FLR,
+    DEVICE_RESET_COLD,
+    DEVICE_RESET_WARM,
+    DEVICE_RESET_HOT,
+};
+
+struct pci_device_state_reset {
+    struct physdev_pci_device dev;
+    enum pci_device_state_reset_type reset_type;
+};
+
 #define for_each_pdev(domain, pdev) \
     list_for_each_entry(pdev, &(domain)->pdev_list, domain_list)
 
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index da8d0f41e6f4..b230fd374de5 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -38,6 +38,7 @@ int __must_check vpci_assign_device(struct pci_dev *pdev);
 
 /* Remove all handlers and free vpci related structures. */
 void vpci_deassign_device(struct pci_dev *pdev);
+int __must_check vpci_reset_device_state(struct pci_dev *pdev);
 
 /* Add/remove a register handler. */
 int __must_check vpci_add_register_mask(struct vpci *vpci,
@@ -282,6 +283,11 @@ static inline int vpci_assign_device(struct pci_dev *pdev)
 
 static inline void vpci_deassign_device(struct pci_dev *pdev) { }
 
+static inline int __must_check vpci_reset_device_state(struct pci_dev *pdev)
+{
+    return 0;
+}
+
 static inline void vpci_dump_msi(void) { }
 
 static inline uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg,
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:12:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:12:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736381.1142468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhq-0002Ay-O0; Fri, 07 Jun 2024 08:12:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736381.1142468; Fri, 07 Jun 2024 08:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhq-0002Aq-KU; Fri, 07 Jun 2024 08:12:14 +0000
Received: by outflank-mailman (input) for mailman id 736381;
 Fri, 07 Jun 2024 08:12:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUho-0001Hx-TM
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:12:12 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20608.outbound.protection.outlook.com
 [2a01:111:f400:7e88::608])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a3f98c95-24a5-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 10:12:12 +0200 (CEST)
Received: from SJ0PR05CA0009.namprd05.prod.outlook.com (2603:10b6:a03:33b::14)
 by PH7PR12MB7210.namprd12.prod.outlook.com (2603:10b6:510:205::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.17; Fri, 7 Jun
 2024 08:12:08 +0000
Received: from CO1PEPF000044F8.namprd21.prod.outlook.com
 (2603:10b6:a03:33b:cafe::9b) by SJ0PR05CA0009.outlook.office365.com
 (2603:10b6:a03:33b::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.12 via Frontend
 Transport; Fri, 7 Jun 2024 08:12:08 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1PEPF000044F8.mail.protection.outlook.com (10.167.241.198) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.0 via Frontend Transport; Fri, 7 Jun 2024 08:12:07 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 03:11:59 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3f98c95-24a5-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SQcdtiN7rUK6mSpewwdeSJlYpoNEXzHlbbXRQzb7XCGBZqLqg7GH6NBTmFa3nBImgVXizzx0mp33JqtbOY5TVEN9s7WVz+Lr/+Pq4o+EAegsGs8Zhw1zemYXpPeiwG7sOz7u+dHzwMqir0Yvvo7zIWpELE9jl9OhdLga5B6VF0N6r3yyE4rdaHRE3Gi3XvKXeZ+HvVRYxL0xXsayRpEzrNromD52LPuaRqbY/SZ1KV7n4UwW3bQ2LssBDoGojcKARrGiCnfizEZJdIlYXzW0D0SXrUXKx/LMSK3rYgkA2HFSAGAOQm1R6HESgzhcPHGqbG6FyO31Rerh3EFsRe4GPQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=qcD0xWxQTGxAvBu0zi4mdkVlNKBayZWA9asqiHxXLZ0=;
 b=NdDIdTi6WfnTckXMO/obfujg9++1F5704rj8CgtGKjf7gq435AD99/2jwvYPPUIafhFar+cIgiwDut6yZUIGCs/9tRearg65OaLw1f/lCWPoHVf4MVwGhRWUS/sBQvgMKKaxNz5NwBwHhuXQPURxI91TJbNmE1v/F+aFNAOh8rA+9pCerw44tv6Ohf19C9HSVoWgc3aoVChnbSwpE/kOHVMVjWBjzPj5Qk9Yij+fJe/Y1KBtknqtCCSVkKMXoZIMVB7qQMbwP0e7wyTLf6FdTVwNH8Rm0mRyBJkMBmbYZwN1Fz7/bcXh8Qm+JvWa8vX7iEn2q/bUCZzDL/mgaS1MHw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qcD0xWxQTGxAvBu0zi4mdkVlNKBayZWA9asqiHxXLZ0=;
 b=eI7lBBQxffiY2IXj79JnlOtOK0meQci6Z6yjJzc5C22Azuw2P99ILoQFiAfZ2dSobQZTo0Lx2v9S5TP10zU8KhfZNN3Mn0OKlNEgc7xYvpRd6j5AK5J0m1vBqPiZqoWWXSCvFEjy+yIxH7upaMwDhn8GaOBCC3MGZj+0tIi+9Kk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v9 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Date: Fri, 7 Jun 2024 16:11:25 +0800
Message-ID: <20240607081127.126593-4-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607081127.126593-1-Jiqian.Chen@amd.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1PEPF000044F8:EE_|PH7PR12MB7210:EE_
X-MS-Office365-Filtering-Correlation-Id: 3a3a5448-453d-400a-2811-08dc86c9865b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|36860700004|376005|82310400017|7416005|1800799015;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?5F7rF2rvSgyBwSLNicxaP1ev/9UrT9aEeafFtVpNnKv7+38TiOOiWa2201ZI?=
 =?us-ascii?Q?7lXz2H2mBGgZzQFBx1x+jK2Zr4kMM1dgh5ie/PiLyCaft9dKN8uFGT7mDtrH?=
 =?us-ascii?Q?TifcCjh5XJYCGdwQSxWsmVhF5boqcK8ouoBF0SOQ1VgUh5o3qJ3bWKG6mQh9?=
 =?us-ascii?Q?GIdhpSs512IiNLaW+xC+xl7k+3LVoYeTGojVJtRvzKG7p3lj94u4gkC3jKtV?=
 =?us-ascii?Q?OdKb5kfafwXLt8wSaTRLsKHVlsQGt3+EKTVpquHCiqSOnURuLoRdMDawfTnb?=
 =?us-ascii?Q?7a9PIsorJHV5JUB14viFGQRBBHdAxKQ/36z3JXKTAv3YBJaqIVbiCd8aln48?=
 =?us-ascii?Q?48L+Y1YMrtRkz6PG4u/WnpkMCvDvVc0bh8nT2IuoYcejwzTP4upVvJX3lc7B?=
 =?us-ascii?Q?2vg3vbNWGkrxO8Z+Ke8NYO+qENIkxL8sl83K1p/yHnknbB73lhDIQ+v4Fjty?=
 =?us-ascii?Q?cDoXx1imZEj+AAsDi+dB90DNu21x7vT572VzAYElS5DmapumG4J6AZX91/oc?=
 =?us-ascii?Q?aXRebu8hOxrumsbjwyhmBgOBlWDwlTZQ4FdQCENI10F8xHvOjV+6MJl23tuo?=
 =?us-ascii?Q?+xh7UQnusofq5JIgkAPIwzopsXdyH4ZiL6+hBKSBWhiVolIiLaLC18PjGtkH?=
 =?us-ascii?Q?niTx5Mg/8sKxUH96pYEnxvOBcvAaWrhyNCGuGkAKbYErFayNCALq/qGDbUgq?=
 =?us-ascii?Q?ERPl4OB2nX1Cv5b1ibTbsdDjjR2QiiD9zXj2Tau2WrlIPDXwdwKokVBgAERg?=
 =?us-ascii?Q?tDFOGB7fiOkE5Q2/Z6sFqIPtIvRtunBQokynnRp3PydN1xUu1SycMDTIN4l4?=
 =?us-ascii?Q?RZU/aa6PE6QlI1OMXwHgeRcwULiZcjg0WSKDEgZMRnZCgiER3NB5zwF0hzjx?=
 =?us-ascii?Q?s/cVyMovByIDnenNkriQQ8UPOha+lLYGoqwLe+dLTgzrjtWmEUE1ijSJ3thT?=
 =?us-ascii?Q?3Aac2PKtqCmKqfAb2qMXRSO8Gkgs+gy5FnmXSq2Vgu6q7F+5h3IvmMLbUqcX?=
 =?us-ascii?Q?mMB55ACaeaFUQMzVJAJY7YOU7lk7gk5aLXHgY3Tm5RQXTeOvkaTlrNJgeukF?=
 =?us-ascii?Q?LZ0MEoGKWQVbU3h8I3rovaqnH5DiMO5wxDbH7gThRgXRC+EbfvBqMXDPSRD5?=
 =?us-ascii?Q?oGl+A15/FEC0+szmT3EJVRid9eZK6+00b47hJ7iC8xjQUmLySs7MzuL2t44h?=
 =?us-ascii?Q?hd/OBmbHfz8tc9IMHc/4wFa6nUkGZHy9tkIR6EFfsYrgDzTeW8i4I9YLPiRZ?=
 =?us-ascii?Q?96g41ZOjdk90IrBjXj98qUhPZt7OZwwEqwWSkk2drnaeQRxG9O3LXNuTVDpI?=
 =?us-ascii?Q?IKcNg5u5Dg2BTHKSQHFPgFxlSeJ5UfS5wqINTZNsq+BPqMHnDOWJ+oBziZfz?=
 =?us-ascii?Q?jcDIPcs=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(36860700004)(376005)(82310400017)(7416005)(1800799015);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 08:12:07.8718
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3a3a5448-453d-400a-2811-08dc86c9865b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1PEPF000044F8.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB7210

On PVH dom0, GSIs don't get registered, but the GSI of a
passthrough device must be configured for it to be able to
be mapped into an HVM domU.
On the Linux kernel side, PHYSDEVOP_setup_gsi is called for
passthrough devices to register their GSIs when dom0 is PVH.
So, allow PHYSDEVOP_setup_gsi for the above purpose.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
---
The Linux kernel code that will call this hypercall is at:
https://lore.kernel.org/lkml/20240607075109.126277-3-Jiqian.Chen@amd.com/T/#u
---
 xen/arch/x86/hvm/hypercall.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index fa5d50a0dd22..164f4eefa043 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -86,6 +86,7 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -ENOSYS;
         break;
 
+    case PHYSDEVOP_setup_gsi:
     case PHYSDEVOP_pci_mmcfg_reserved:
     case PHYSDEVOP_pci_device_add:
     case PHYSDEVOP_pci_device_remove:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:12:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736382.1142477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhs-0002Ri-7d; Fri, 07 Jun 2024 08:12:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736382.1142477; Fri, 07 Jun 2024 08:12:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhs-0002RX-4A; Fri, 07 Jun 2024 08:12:16 +0000
Received: by outflank-mailman (input) for mailman id 736382;
 Fri, 07 Jun 2024 08:12:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUhr-0001Em-B7
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:12:15 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on20624.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a4a854c5-24a5-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 10:12:13 +0200 (CEST)
Received: from SJ0PR05CA0012.namprd05.prod.outlook.com (2603:10b6:a03:33b::17)
 by SA1PR12MB6993.namprd12.prod.outlook.com (2603:10b6:806:24c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 7 Jun
 2024 08:12:09 +0000
Received: from CO1PEPF000044F8.namprd21.prod.outlook.com
 (2603:10b6:a03:33b:cafe::8f) by SJ0PR05CA0012.outlook.office365.com
 (2603:10b6:a03:33b::17) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.14 via Frontend
 Transport; Fri, 7 Jun 2024 08:12:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1PEPF000044F8.mail.protection.outlook.com (10.167.241.198) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.0 via Frontend Transport; Fri, 7 Jun 2024 08:12:09 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 03:12:02 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4a854c5-24a5-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=biTfxaOagWATFbVkVPDhuUnrVzwblk1bCtHwOfzanszi0FrFkesIJrHhUcan3zLxcLzJip+DnZEB8sWGvR5tSp6q5u5TWOiKQZTy6C5MKomzrsCbhhl4mclbXySAwBzlJvQ37ZQLwGQB3izxEpJbDoZFFUslo+SIrNuE5jq+N8BNme6muqvCn+85OG/jlbXrkrOwiJ3ycRkuSOITDVvTB23lQMSC5/Lpdz70WN8//Hg5Hzukg8N0JMtOWq8KsyVpjI3fMK85Ct2HdQSpEySCROT6h4xIQtyV5Pvv9/NU3nQSHQGdkq13UllsNDU+zE9ssqtDfLN+38qoSpgaZ3Cvng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=0fwxiaHPU3gH65mb06pygxWzyZ3jBWCUum2KJFb89q8=;
 b=XnTogwIqwP2b/fjBKTruDNL4jUJkysZmw/hzENtS1Er5HXHeJLWrWs2OTYzsFXz1UPVv5Zyq0if9ADcSHwNZfGshJIgaxWbkjklmifzt932QGg042iYG+5DhcE+tgxDolTJEQY4tr60y/ALSdgAPPwvXD2a76v/DDj/GrR+OwNMehUxsaFv3DODgIG3hIMXnaGJoXInGLwapSp5MGujINKAr3FjmfldZpSn8kZ2zXvbO2zeuZNWfJ8pXnpHKSRYcmk4sS1cvf+y/oAJpJT5gkGIuWD4Zyik0U8h7WpGqJO1WwQc5F6xTFtU33hPlETpmaD+yWWtcSzibOHlr36/+lQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0fwxiaHPU3gH65mb06pygxWzyZ3jBWCUum2KJFb89q8=;
 b=ziX6Zn3uSO+vlc6n+90i6MjLnsh1mfvJsChKW863b+Bqvwj7BiHq6xq5MvKAMxGZDuLuCOMRkNETvSZgF54pik10gpk4EyLNKoJNGRLhgz6eeuPHV1Zhh3XczH/yFHvWdRVMe7Sd38jjMvuys8+6iAT+fyi+HN0wZ/4pRmqlesk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [RFC XEN PATCH v9 4/5] tools: Add new function to get gsi from dev
Date: Fri, 7 Jun 2024 16:11:26 +0800
Message-ID: <20240607081127.126593-5-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607081127.126593-1-Jiqian.Chen@amd.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1PEPF000044F8:EE_|SA1PR12MB6993:EE_
X-MS-Office365-Filtering-Correlation-Id: 782befe2-48d6-4505-8203-08dc86c98717
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|82310400017|1800799015|376005|7416005|36860700004;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?Dohy5EqQoVV+7Xm7KnJSp4Wxvs9wgiLPymZD+lEHWKXbuSgzFv7dhalYppzW?=
 =?us-ascii?Q?asHYz0nY8DOqsxmINW9wRHoOl80RIE0YVhCbBR+W0OVCEXLXn2mAdPFM9ooE?=
 =?us-ascii?Q?nPNP+TL4DTR8qjgRT0FYTIj1XlYqWWdSkL9hcIXYk5VCgcWbQrAeoNr25UFk?=
 =?us-ascii?Q?7IvVXHNY70jyatzT2o8CImKmAYZbqnzUy4cG1UjkNm033xc/fdtXzyTG68cd?=
 =?us-ascii?Q?wgXHfCUqtBQDLksRxVal/3nUk3tQPePu58DaZnUVIsYJ+I2jYN0rgQ8ZaQNT?=
 =?us-ascii?Q?2E1mIFaEgm8dIUVyVRX+CW0GztqDNjqVLZT6ybI/ucw+mns9aIq/2rfijZi3?=
 =?us-ascii?Q?vT9SNx/o5Jkj+dwMTFBLY9OWUTcmed8tsb5snPHPLXpxg9KSg7/JvU2ICaBS?=
 =?us-ascii?Q?+Fni4Y6NIQpvXRCKqy3YEfdWitsITsoJOfbOP7Im1+dUArrVEeFrBhkEJVVA?=
 =?us-ascii?Q?Kt/3rv8/V20sx6mKNpojNn0mquQXCtUj2iOfFBFQILkGcKzMdlfONrp3F+xx?=
 =?us-ascii?Q?J0h6/ibwUeabS7TQesBIiXpkRG4SoP3/U2b5N/vpfL3FuThT761ZwgrThS1W?=
 =?us-ascii?Q?rp6Ano6vi3sWyW8bz5lIoMwC644EstvkIUz/E7aDRLZMu7pg+LSbtthye88z?=
 =?us-ascii?Q?Eqp0SYf/TNS6cA2d/HYXGXNlU7TqcusxMeaCgQNEMAqW0aU6X9wtjy/5XQ8S?=
 =?us-ascii?Q?BqPE6/p9kUeTpRepEUbmCDqB6uBCZcyQi5/5fRhMLpJd4cQJeri5CXjglLi2?=
 =?us-ascii?Q?6918dG3Aa0uIV+9mwyuNhn16gjv5lTb1R5eTLaEVeuN4Juli/z2/qr2rCszY?=
 =?us-ascii?Q?+Fl3179P3iaG6YjccxcKHHk0KOkVNx/gqwh30mAMUIGFOAq70IesmrppBO6H?=
 =?us-ascii?Q?b0fBbE73kMEE78p3p3iFCQBm9yqO5fQj+noyUruDNo3d4jigId4GIUfV08A6?=
 =?us-ascii?Q?xK4omLXsPFZg7nqwOv4rv2mbWaA0tLzjKUL+lzKXHxJi4wXrcpXqRXTGfQ/0?=
 =?us-ascii?Q?sxIALOSrw2Ft6IQ2LOr7GLN+BPijz0q2YOzq9MQo1r+fwL/W5NsTGfnS5fA8?=
 =?us-ascii?Q?wF0UqtVBvNcVNMmlTKS32NxwrjBjJD0zMpnDBAnug+Kkg4YYjN/w1yV4hyFX?=
 =?us-ascii?Q?oQQpRyFIYwmq00leRhDeND1qhOPDHxXY7ggZvRAUAhLpKawRxJAPHcG5yt9g?=
 =?us-ascii?Q?UZ5UfSHeTGghuYjzO9SjJxfwppNrATSYc6vW+H2aQvDH7EQy9ARhFNNV3NGH?=
 =?us-ascii?Q?ufwloIkg45MKPJQRBFXaJ90g78rfVtM+0d3zBPoLnTp4bBtAtbR8ECDxURw7?=
 =?us-ascii?Q?52bPZUghJnC5wwg1gMkQVA4VXcc1FmlKanb1zGxSZVjPJR3T3upwVrvWtTTh?=
 =?us-ascii?Q?UPoU5nVvN/e8ABY2+ub8BvsG5yEGuJssUtEuvAmo6BgYy1e62w=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(82310400017)(1800799015)(376005)(7416005)(36860700004);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 08:12:09.1062
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 782befe2-48d6-4505-8203-08dc86c98717
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1PEPF000044F8.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB6993

In PVH dom0, Linux allocates an IRQ for a GSI dynamically, on a
first-come, first-served basis. IRQ numbers are handed out in ascending
order, but the order in which GSIs are requested is not ascending (GSI
38 may be requested before GSI 28), so an IRQ number is not necessarily
equal to its GSI number. When a device is passed through, QEMU uses
what it takes to be the GSI number to set up the pirq mapping (see
xen_pt_realize->xc_physdev_map_pirq), but that number is read from the
file /sys/bus/pci/devices/<sbdf>/irq and is actually the IRQ number, so
the mapping fails. And in the current code, there is no way for
userspace to obtain the GSI of a device.

To address this, add a new function to get the GSI of a device, and
call it before xc_physdev_(un)map_pirq.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Chen Jiqian <Jiqian.Chen@amd.com>
---
RFC: this needs review, and must wait for the corresponding third patch on the Linux kernel side to be merged.
---
 tools/include/xen-sys/Linux/privcmd.h |  7 +++++++
 tools/include/xencall.h               |  2 ++
 tools/include/xenctrl.h               |  2 ++
 tools/libs/call/core.c                |  5 +++++
 tools/libs/call/libxencall.map        |  2 ++
 tools/libs/call/linux.c               | 15 +++++++++++++++
 tools/libs/call/private.h             |  9 +++++++++
 tools/libs/ctrl/xc_physdev.c          |  4 ++++
 tools/libs/light/libxl_pci.c          | 23 +++++++++++++++++++++++
 9 files changed, 69 insertions(+)

diff --git a/tools/include/xen-sys/Linux/privcmd.h b/tools/include/xen-sys/Linux/privcmd.h
index bc60e8fd55eb..977f1a058797 100644
--- a/tools/include/xen-sys/Linux/privcmd.h
+++ b/tools/include/xen-sys/Linux/privcmd.h
@@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
 	__u64 addr;
 } privcmd_mmap_resource_t;
 
+typedef struct privcmd_gsi_from_dev {
+	__u32 sbdf;
+	int gsi;
+} privcmd_gsi_from_dev_t;
+
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
@@ -114,6 +119,8 @@ typedef struct privcmd_mmap_resource {
 	_IOC(_IOC_NONE, 'P', 6, sizeof(domid_t))
 #define IOCTL_PRIVCMD_MMAP_RESOURCE				\
 	_IOC(_IOC_NONE, 'P', 7, sizeof(privcmd_mmap_resource_t))
+#define IOCTL_PRIVCMD_GSI_FROM_DEV				\
+	_IOC(_IOC_NONE, 'P', 10, sizeof(privcmd_gsi_from_dev_t))
 #define IOCTL_PRIVCMD_UNIMPLEMENTED				\
 	_IOC(_IOC_NONE, 'P', 0xFF, 0)
 
diff --git a/tools/include/xencall.h b/tools/include/xencall.h
index fc95ed0fe58e..750aab070323 100644
--- a/tools/include/xencall.h
+++ b/tools/include/xencall.h
@@ -113,6 +113,8 @@ int xencall5(xencall_handle *xcall, unsigned int op,
              uint64_t arg1, uint64_t arg2, uint64_t arg3,
              uint64_t arg4, uint64_t arg5);
 
+int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf);
+
 /* Variant(s) of the above, as needed, returning "long" instead of "int". */
 long xencall2L(xencall_handle *xcall, unsigned int op,
                uint64_t arg1, uint64_t arg2);
diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 9ceca0cffc2f..a0381f74d24b 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1641,6 +1641,8 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
                           uint32_t domid,
                           int pirq);
 
+int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf);
+
 /*
  *  LOGGING AND ERROR REPORTING
  */
diff --git a/tools/libs/call/core.c b/tools/libs/call/core.c
index 02c4f8e1aefa..6dae50c9a6ba 100644
--- a/tools/libs/call/core.c
+++ b/tools/libs/call/core.c
@@ -173,6 +173,11 @@ int xencall5(xencall_handle *xcall, unsigned int op,
     return osdep_hypercall(xcall, &call);
 }
 
+int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf)
+{
+    return osdep_oscall(xcall, sbdf);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/call/libxencall.map b/tools/libs/call/libxencall.map
index d18a3174e9dc..b92a0b5dc12c 100644
--- a/tools/libs/call/libxencall.map
+++ b/tools/libs/call/libxencall.map
@@ -10,6 +10,8 @@ VERS_1.0 {
 		xencall4;
 		xencall5;
 
+		xen_oscall_gsi_from_dev;
+
 		xencall_alloc_buffer;
 		xencall_free_buffer;
 		xencall_alloc_buffer_pages;
diff --git a/tools/libs/call/linux.c b/tools/libs/call/linux.c
index 6d588e6bea8f..92c740e176f2 100644
--- a/tools/libs/call/linux.c
+++ b/tools/libs/call/linux.c
@@ -85,6 +85,21 @@ long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
     return ioctl(xcall->fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
 }
 
+int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
+{
+    privcmd_gsi_from_dev_t dev_gsi = {
+        .sbdf = sbdf,
+        .gsi = -1,
+    };
+
+    if (ioctl(xcall->fd, IOCTL_PRIVCMD_GSI_FROM_DEV, &dev_gsi)) {
+        PERROR("failed to get gsi from dev");
+        return -1;
+    }
+
+    return dev_gsi.gsi;
+}
+
 static void *alloc_pages_bufdev(xencall_handle *xcall, size_t npages)
 {
     void *p;
diff --git a/tools/libs/call/private.h b/tools/libs/call/private.h
index 9c3aa432efe2..cd6eb5a3e66f 100644
--- a/tools/libs/call/private.h
+++ b/tools/libs/call/private.h
@@ -57,6 +57,15 @@ int osdep_xencall_close(xencall_handle *xcall);
 
 long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
 
+#if defined(__linux__)
+int osdep_oscall(xencall_handle *xcall, unsigned int sbdf);
+#else
+static inline int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
+{
+    return -1;
+}
+#endif
+
 void *osdep_alloc_pages(xencall_handle *xcall, size_t nr_pages);
 void osdep_free_pages(xencall_handle *xcall, void *p, size_t nr_pages);
 
diff --git a/tools/libs/ctrl/xc_physdev.c b/tools/libs/ctrl/xc_physdev.c
index 460a8e779ce8..c1458f3a38b5 100644
--- a/tools/libs/ctrl/xc_physdev.c
+++ b/tools/libs/ctrl/xc_physdev.c
@@ -111,3 +111,7 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
     return rc;
 }
 
+int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf)
+{
+    return xen_oscall_gsi_from_dev(xch->xcall, sbdf);
+}
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 96cb4da0794e..7e44d4c3ae2b 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
 #endif
 }
 
+#define PCI_DEVID(bus, devfn)\
+            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
+
+#define PCI_SBDF(seg, bus, devfn) \
+            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
+
 static void pci_add_dm_done(libxl__egc *egc,
                             pci_add_state *pas,
                             int rc)
@@ -1418,6 +1424,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     unsigned long long start, end, flags, size;
     int irq, i;
     int r;
+    uint32_t sbdf;
     uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
     uint32_t domainid = domid;
     bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
@@ -1486,6 +1493,13 @@ static void pci_add_dm_done(libxl__egc *egc,
         goto out_no_irq;
     }
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
+        sbdf = PCI_SBDF(pci->domain, pci->bus,
+                        (PCI_DEVFN(pci->dev, pci->func)));
+        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
+        /* if fail, keep using irq; if success, r is gsi, use gsi */
+        if (r != -1) {
+            irq = r;
+        }
         r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
         if (r < 0) {
             LOGED(ERROR, domainid, "xc_physdev_map_pirq irq=%d (error=%d)",
@@ -2172,8 +2186,10 @@ static void pci_remove_detached(libxl__egc *egc,
     int  irq = 0, i, stubdomid = 0;
     const char *sysfs_path;
     FILE *f;
+    uint32_t sbdf;
     uint32_t domainid = prs->domid;
     bool isstubdom;
+    int r;
 
     /* Convenience aliases */
     libxl_device_pci *const pci = &prs->pci;
@@ -2239,6 +2255,13 @@ skip_bar:
     }
 
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
+        sbdf = PCI_SBDF(pci->domain, pci->bus,
+                        (PCI_DEVFN(pci->dev, pci->func)));
+        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
+        /* if fail, keep using irq; if success, r is gsi, use gsi */
+        if (r != -1) {
+            irq = r;
+        }
         rc = xc_physdev_unmap_pirq(ctx->xch, domid, irq);
         if (rc < 0) {
             /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 08:12:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 08:12:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736385.1142488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhv-0002sG-GV; Fri, 07 Jun 2024 08:12:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736385.1142488; Fri, 07 Jun 2024 08:12:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFUhv-0002rV-Cj; Fri, 07 Jun 2024 08:12:19 +0000
Received: by outflank-mailman (input) for mailman id 736385;
 Fri, 07 Jun 2024 08:12:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Avvd=NJ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sFUhu-0001Em-2N
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 08:12:18 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20626.outbound.protection.outlook.com
 [2a01:111:f403:2414::626])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a6aa1a0c-24a5-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 10:12:15 +0200 (CEST)
Received: from SJ0PR05CA0006.namprd05.prod.outlook.com (2603:10b6:a03:33b::11)
 by SJ2PR12MB8689.namprd12.prod.outlook.com (2603:10b6:a03:53d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.33; Fri, 7 Jun
 2024 08:12:12 +0000
Received: from CO1PEPF000044F8.namprd21.prod.outlook.com
 (2603:10b6:a03:33b:cafe::bd) by SJ0PR05CA0006.outlook.office365.com
 (2603:10b6:a03:33b::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.12 via Frontend
 Transport; Fri, 7 Jun 2024 08:12:12 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CO1PEPF000044F8.mail.protection.outlook.com (10.167.241.198) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.0 via Frontend Transport; Fri, 7 Jun 2024 08:12:12 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.35; Fri, 7 Jun
 2024 03:12:06 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6aa1a0c-24a5-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ks6oLTWhdNjumqJVkJt+3cQQ/5MvirhbfRntUsLM0HK3DWHfY4ns8J9MzUnlRzhUBYpxqakKdIKCB7rhQbc1xCreBXlVqKJHxAX/UFIxOx/4TXWtBnFFHnftnkddBTVxDydkU5pZA2a+xcbpAmmTTSoHU3Juz1j2ZaOWdnQ4TOkWIxM7lSHEy6hkqSmYfVc4R4hZhuWo5LkEa8vRgP/MDqvnMSBFfAM3H+YnJkjqKW5Ut9JIHnBlrosmYszVIHA8mwBburNqNJtTOvCXRZkMg+bvT94r+S8kL5d1IZyO0pNRisq3rzgD/2M6QulS5WNTCXZOGVJ5ohoTrpSOaANW0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=oIL2jjEjx9U91HHLhsRLBwmWlPr2GNH9RAR8j3dN22k=;
 b=aljpSylcYQ9Y3H0a3OO5FRBT7OCtqq/G6y+LybXDR9kgknFiriVbBKdGTrmdP27UEIAVVl02mgyeLTb7Jo9apcS7gYPBxcSeNsaDymHtoj54C5lF31xT6kAEBuQv/X1MgaqkHa4WMScDtDna25b21TGOI4YUv6eaj/7K+Fl4cSq6m72SbJJ2cc3JusEVV+ipmI4M508f2tH7iqCpCcwbJ4ARMALPvsXqWrfdp3u0FjIr6Bh7X5IR25ZfuK2g0ncIpcox1xkSjH6iHZFdP4c7b1fzamikZltgbu6gc3ye2Lx0eRzIDWNHt3hneO++aGSavZkebYs1Lr9MaiuDD0y1Tg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oIL2jjEjx9U91HHLhsRLBwmWlPr2GNH9RAR8j3dN22k=;
 b=BSIZLkbc5G/wuGhZ+RO3WHrmEDGIrdoIFgemmZ6IvHCC6Og+o24YpBV+ZoXz9EMmMDdgU46QWRjSecQ5AsVmjQMbFL4oPANc+LIaRT4EXTwddGjvqPxfqkJ3HzjXsqe4tVwIrcFZ9vfBM48veWH6d7px2w91Q4bixO6l+B/BcmM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to grant gsi
Date: Fri, 7 Jun 2024 16:11:27 +0800
Message-ID: <20240607081127.126593-6-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240607081127.126593-1-Jiqian.Chen@amd.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CO1PEPF000044F8:EE_|SJ2PR12MB8689:EE_
X-MS-Office365-Filtering-Correlation-Id: 6dd9d50d-fc0a-43ec-93c4-08dc86c988ef
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|1800799015|376005|7416005|36860700004|82310400017;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?JXI+JMpP0iQskjsFuRdwycAmUbZk4N1xXpD+ScvTAkww+AwqFCyW4D0fpePs?=
 =?us-ascii?Q?RAdSxM6MS2IQrqQN04VdaUElg3PheuuDMZ94BUa4f17yzh1TWRTcedoDCWbh?=
 =?us-ascii?Q?teqeB00Qi2P7fR12UHOktlpe8vV5/nx7ZOa5hJCuWVHOufwYzZfWcPpOI/W/?=
 =?us-ascii?Q?f9c3s/yIDqgphit46utufsAzwYMGd9MjT7Vf6a07SHFEJKjn/s2nO7B1ryK4?=
 =?us-ascii?Q?s/aRp5dMGJNeM9zD2ekRnWBpS9xx75VB6//RtGIjiacvJQJnjESFU8v9pjx+?=
 =?us-ascii?Q?X/33DJM8oYU+D8u1pCvmwOW/QVIMEJAxooeh0PjORwzLxy1wjqI2eQ1ZPHL0?=
 =?us-ascii?Q?yK+Z3TYBP7APsbBP5VbqL1aDz6THkvd7T+P7HGpULZqhtnwHpBiD0FF6rQEs?=
 =?us-ascii?Q?9PA9dkz7yCaCytI0aZQGMwHWf3t1U1cZQB+LKwOCZC96ibF1llt8A8LMher+?=
 =?us-ascii?Q?KR3JSB82FcPBI5+cYjSZ6HxDcZYqdqBAx+WErteqk4DNMRm9LxEVEXUUHkR/?=
 =?us-ascii?Q?9NZAaoApENyjQI/+eov6uyMMjl4cVkf/0kDz11FhYYGPmUL7V/J6BCKwNMIK?=
 =?us-ascii?Q?tIQgPdkpDs050pF9yFkxc9U9YXxlQ6DuuqmLgXt0xtJQhdivVF5Io6yPlD7I?=
 =?us-ascii?Q?4H1gb6NZBMgyLkB7rhlDk+xam0bKPzlrwSym6s/4OPNX/Kexn7Dbjamkt0e1?=
 =?us-ascii?Q?DJ1q0U9kyktcFQpnq3LPUamVmvXVNc11PUUb6dJuuIKOc0noqq71h65h65L9?=
 =?us-ascii?Q?PM3VNPKpQZA/nxfCkAQi2w46VNb+nIP+Y4N1kSq/K/TaMY6QNHmBFqgra7Zl?=
 =?us-ascii?Q?wcg82mNCGUaRZiCVJ02GVSe85FufteBAGplL7j3ROzdBJB6wsW6vAohyAXOx?=
 =?us-ascii?Q?TLioMMuHiT5Th87cPK1WQNxf/3tJl8qpM/NKK85gniCnBsQI2Yj/BCKO5+N1?=
 =?us-ascii?Q?5niP1dsmWsHSI7QYma7RXariJSFwMW6q3wwE0tI6tr4qe4JxLuZ+b2NCiVTs?=
 =?us-ascii?Q?0JCqSCM+yxFAbLA/8rGtX8qpv62Rr0oxx1lGD84++2YB/X3XIkEFBX0cybBE?=
 =?us-ascii?Q?qwU4yIZAfkQK18UJK+5TVVB2+4NLqcvCbgQrYCxDl5HU42Ofc518UqTVxjW9?=
 =?us-ascii?Q?bLb98k7EE1L3kEtVLHnmu9FGb0SId7/K7OctRphqHBYKAlekgtbfc0Hygemn?=
 =?us-ascii?Q?q4pF738sHInGxt5/QUayGWMV3iE+K4TuseqRvmi5FtUQsVOFKIJqTdgONxtn?=
 =?us-ascii?Q?LUq93Wv4r1GBP+6KP9Bx1mgMhtTMeE7CNL8PCCd3xuj4xVLQJcFhezx73DXh?=
 =?us-ascii?Q?obwaztLkylerIQNdUbNJ78roC8e2w3uekC4izuTBCNlTp0KH9IcbXZtBmL2Q?=
 =?us-ascii?Q?ZbWdEipoGGXd7eFMvbGM2VOn7y63sKousYsKv6OfI49X9J6dRA=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(1800799015)(376005)(7416005)(36860700004)(82310400017);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 08:12:12.2156
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6dd9d50d-fc0a-43ec-93c4-08dc86c988ef
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CO1PEPF000044F8.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8689

Some types of domain, such as PVH, do not have PIRQs and do not issue
PHYSDEVOP_map_pirq for each GSI. When a device is passed through to a
guest on a PVH dom0, the call path
pci_add_dm_done->XEN_DOMCTL_irq_permission fails in domain_pirq_to_irq,
because on the Xen side PVH has no mapping between GSIs, PIRQs and
IRQs.

What's more, the current XEN_DOMCTL_irq_permission hypercall takes a
PIRQ and grants access to the corresponding IRQ, which is unsuitable
for a dom0 without the PIRQ flag: device passthrough starts from a GSI
and needs to grant the guest access to the corresponding IRQ. So, add
a new hypercall to grant GSI permission for the case where dom0 is
neither PV nor has the PIRQ flag.

Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
---
RFC: this needs review, and must wait for the corresponding third patch on the Linux kernel side to be merged.
---
 tools/include/xenctrl.h            |  5 +++
 tools/libs/ctrl/xc_domain.c        | 15 +++++++
 tools/libs/light/libxl_pci.c       | 72 +++++++++++++++++++++++-------
 xen/arch/x86/domctl.c              | 38 ++++++++++++++++
 xen/arch/x86/include/asm/io_apic.h |  2 +
 xen/arch/x86/io_apic.c             | 21 +++++++++
 xen/arch/x86/mpparse.c             |  3 +-
 xen/include/public/domctl.h        | 10 +++++
 xen/xsm/flask/hooks.c              |  1 +
 9 files changed, 149 insertions(+), 18 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index a0381f74d24b..f3feb6848e25 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1382,6 +1382,11 @@ int xc_domain_irq_permission(xc_interface *xch,
                              uint32_t pirq,
                              bool allow_access);
 
+int xc_domain_gsi_permission(xc_interface *xch,
+                             uint32_t domid,
+                             uint32_t gsi,
+                             bool allow_access);
+
 int xc_domain_iomem_permission(xc_interface *xch,
                                uint32_t domid,
                                unsigned long first_mfn,
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index f2d9d14b4d9f..8540e84fda93 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1394,6 +1394,21 @@ int xc_domain_irq_permission(xc_interface *xch,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_gsi_permission(xc_interface *xch,
+                             uint32_t domid,
+                             uint32_t gsi,
+                             bool allow_access)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_gsi_permission,
+        .domain = domid,
+        .u.gsi_permission.gsi = gsi,
+        .u.gsi_permission.allow_access = allow_access,
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
 int xc_domain_iomem_permission(xc_interface *xch,
                                uint32_t domid,
                                unsigned long first_mfn,
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 7e44d4c3ae2b..b8ec37d8d7e3 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1412,6 +1412,37 @@ static bool pci_supp_legacy_irq(void)
 #define PCI_SBDF(seg, bus, devfn) \
             ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
 
+static int pci_device_set_gsi(libxl_ctx *ctx,
+                              libxl_domid domid,
+                              libxl_device_pci *pci,
+                              bool map,
+                              int *gsi_back)
+{
+    int r, gsi, pirq;
+    uint32_t sbdf;
+
+    sbdf = PCI_SBDF(pci->domain, pci->bus, (PCI_DEVFN(pci->dev, pci->func)));
+    r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
+    *gsi_back = r;
+    if (r < 0)
+        return r;
+
+    gsi = r;
+    pirq = r;
+    if (map)
+        r = xc_physdev_map_pirq(ctx->xch, domid, gsi, &pirq);
+    else
+        r = xc_physdev_unmap_pirq(ctx->xch, domid, pirq);
+    if (r)
+        return r;
+
+    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
+    if (r && errno == EOPNOTSUPP)
+        r = xc_domain_irq_permission(ctx->xch, domid, pirq, map);
+
+    return r;
+}
+
 static void pci_add_dm_done(libxl__egc *egc,
                             pci_add_state *pas,
                             int rc)
@@ -1424,10 +1455,10 @@ static void pci_add_dm_done(libxl__egc *egc,
     unsigned long long start, end, flags, size;
     int irq, i;
     int r;
-    uint32_t sbdf;
     uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
     uint32_t domainid = domid;
     bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
+    int gsi;
 
     /* Convenience aliases */
     bool starting = pas->starting;
@@ -1485,6 +1516,19 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
     if (!pci_supp_legacy_irq())
         goto out_no_irq;
+
+    r = pci_device_set_gsi(ctx, domid, pci, 1, &gsi);
+    if (gsi >= 0) {
+        if (r < 0) {
+            rc = ERROR_FAIL;
+            LOGED(ERROR, domainid,
+                  "pci_device_set_gsi gsi=%d (error=%d)", gsi, errno);
+            goto out;
+        } else {
+            goto process_permissive;
+        }
+    }
+    /* if gsi < 0, keep using irq */
     sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
                                 pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
@@ -1493,13 +1537,6 @@ static void pci_add_dm_done(libxl__egc *egc,
         goto out_no_irq;
     }
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
-        sbdf = PCI_SBDF(pci->domain, pci->bus,
-                        (PCI_DEVFN(pci->dev, pci->func)));
-        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
-        /* if fail, keep using irq; if success, r is gsi, use gsi */
-        if (r != -1) {
-            irq = r;
-        }
         r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
         if (r < 0) {
             LOGED(ERROR, domainid, "xc_physdev_map_pirq irq=%d (error=%d)",
@@ -1519,6 +1556,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     }
     fclose(f);
 
+process_permissive:
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
@@ -2186,10 +2224,10 @@ static void pci_remove_detached(libxl__egc *egc,
     int  irq = 0, i, stubdomid = 0;
     const char *sysfs_path;
     FILE *f;
-    uint32_t sbdf;
     uint32_t domainid = prs->domid;
     bool isstubdom;
     int r;
+    int gsi;
 
     /* Convenience aliases */
     libxl_device_pci *const pci = &prs->pci;
@@ -2245,6 +2283,15 @@ skip_bar:
     if (!pci_supp_legacy_irq())
         goto skip_legacy_irq;
 
+    r = pci_device_set_gsi(ctx, domid, pci, 0, &gsi);
+    if (gsi >= 0) {
+        if (r < 0) {
+            LOGED(ERROR, domainid,
+                  "pci_device_set_gsi gsi=%d (error=%d)", gsi, errno);
+        }
+        goto skip_legacy_irq;
+    }
+    /* if gsi < 0, keep using irq */
     sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
                            pci->bus, pci->dev, pci->func);
 
@@ -2255,13 +2302,6 @@ skip_bar:
     }
 
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
-        sbdf = PCI_SBDF(pci->domain, pci->bus,
-                        (PCI_DEVFN(pci->dev, pci->func)));
-        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
-        /* if fail, keep using irq; if success, r is gsi, use gsi */
-        if (r != -1) {
-            irq = r;
-        }
         rc = xc_physdev_unmap_pirq(ctx->xch, domid, irq);
         if (rc < 0) {
             /*
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9a72d57333e9..c69b4566ac4f 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -36,6 +36,7 @@
 #include <asm/xstate.h>
 #include <asm/psr.h>
 #include <asm/cpu-policy.h>
+#include <asm/io_apic.h>
 
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
@@ -237,6 +238,43 @@ long arch_do_domctl(
         break;
     }
 
+    case XEN_DOMCTL_gsi_permission:
+    {
+        unsigned int gsi = domctl->u.gsi_permission.gsi;
+        int irq = gsi_2_irq(gsi);
+        bool allow = domctl->u.gsi_permission.allow_access;
+
+        /*
+         * If current domain is PV or it has PIRQ flag, it has a mapping
+         * of gsi, pirq and irq, so it should use XEN_DOMCTL_irq_permission
+         * to grant irq permission.
+         */
+        if ( is_pv_domain(current->domain) || has_pirq(current->domain) )
+        {
+            ret = -EOPNOTSUPP;
+            break;
+        }
+
+        if ( gsi >= nr_irqs_gsi || irq < 0 )
+        {
+            ret = -EINVAL;
+            break;
+        }
+
+        if ( !irq_access_permitted(current->domain, irq) ||
+             xsm_irq_permission(XSM_HOOK, d, irq, allow) )
+        {
+            ret = -EPERM;
+            break;
+        }
+
+        if ( allow )
+            ret = irq_permit_access(d, irq);
+        else
+            ret = irq_deny_access(d, irq);
+        break;
+    }
+
     case XEN_DOMCTL_getpageframeinfo3:
     {
         unsigned int num = domctl->u.getpageframeinfo3.num;
diff --git a/xen/arch/x86/include/asm/io_apic.h b/xen/arch/x86/include/asm/io_apic.h
index 78268ea8f666..7e86d8337758 100644
--- a/xen/arch/x86/include/asm/io_apic.h
+++ b/xen/arch/x86/include/asm/io_apic.h
@@ -213,5 +213,7 @@ unsigned highest_gsi(void);
 
 int ioapic_guest_read( unsigned long physbase, unsigned int reg, u32 *pval);
 int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val);
+int mp_find_ioapic(int gsi);
+int gsi_2_irq(int gsi);
 
 #endif
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index b48a64246548..d03bcdef4d19 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -955,6 +955,27 @@ static int pin_2_irq(int idx, int apic, int pin)
     return irq;
 }
 
+int gsi_2_irq(int gsi)
+{
+    int entry, ioapic, pin;
+
+    ioapic = mp_find_ioapic(gsi);
+    if ( ioapic < 0 )
+        return -1;
+
+    pin = gsi - io_apic_gsi_base(ioapic);
+
+    entry = find_irq_entry(ioapic, pin, mp_INT);
+    /*
+     * If there is no override mapping for irq and gsi in mp_irqs,
+     * then the default identity mapping applies.
+     */
+    if ( entry < 0 )
+        return gsi;
+
+    return pin_2_irq(entry, ioapic, pin);
+}
+
 static inline int IO_APIC_irq_trigger(int irq)
 {
     int apic, idx, pin;
diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449c6..c95da0de5770 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -841,8 +841,7 @@ static struct mp_ioapic_routing {
 } mp_ioapic_routing[MAX_IO_APICS];
 
 
-static int mp_find_ioapic (
-	int			gsi)
+int mp_find_ioapic(int gsi)
 {
 	unsigned int		i;
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 2a49fe46ce25..f933af8722f4 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -465,6 +465,14 @@ struct xen_domctl_irq_permission {
 };
 
 
+/* XEN_DOMCTL_gsi_permission */
+struct xen_domctl_gsi_permission {
+    uint32_t gsi;
+    uint8_t allow_access;    /* flag to specify enable/disable of x86 gsi access */
+    uint8_t pad[3];
+};
+
+
 /* XEN_DOMCTL_iomem_permission */
 struct xen_domctl_iomem_permission {
     uint64_aligned_t first_mfn;/* first page (physical page number) in range */
@@ -1306,6 +1314,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_get_paging_mempool_size       85
 #define XEN_DOMCTL_set_paging_mempool_size       86
 #define XEN_DOMCTL_dt_overlay                    87
+#define XEN_DOMCTL_gsi_permission                88
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1328,6 +1337,7 @@ struct xen_domctl {
         struct xen_domctl_setdomainhandle   setdomainhandle;
         struct xen_domctl_setdebugging      setdebugging;
         struct xen_domctl_irq_permission    irq_permission;
+        struct xen_domctl_gsi_permission    gsi_permission;
         struct xen_domctl_iomem_permission  iomem_permission;
         struct xen_domctl_ioport_permission ioport_permission;
         struct xen_domctl_hypercall_init    hypercall_init;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 5e88c71b8e22..a5b134c91101 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -685,6 +685,7 @@ static int cf_check flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_shadow_op:
     case XEN_DOMCTL_ioport_permission:
     case XEN_DOMCTL_ioport_mapping:
+    case XEN_DOMCTL_gsi_permission:
 #endif
 #ifdef CONFIG_HAS_PASSTHROUGH
     /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 09:10:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 09:10:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736428.1142508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVbo-00072q-VZ; Fri, 07 Jun 2024 09:10:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736428.1142508; Fri, 07 Jun 2024 09:10:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVbo-00072J-SW; Fri, 07 Jun 2024 09:10:04 +0000
Received: by outflank-mailman (input) for mailman id 736428;
 Fri, 07 Jun 2024 09:10:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NiIp=NJ=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sFVbn-0006BI-N6
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 09:10:03 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b51881a3-24ad-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 11:09:56 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-629-FnEQzJIXPkGraVrUGETiZw-1; Fri, 07 Jun 2024 05:09:47 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id F2F3680B5C7;
 Fri,  7 Jun 2024 09:09:45 +0000 (UTC)
Received: from t14s.fritz.box (unknown [10.39.194.94])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6D69F37E5;
 Fri,  7 Jun 2024 09:09:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b51881a3-24ad-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717751395;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=QzQrIOtS4DHvHPJ4+HHeRZGer7eituw1+x4QYHDabqc=;
	b=XqqJf/iFUUg+4jjekoOLnI0ZIzgLyta3BRrfvQQMtV+umQpgcMJJspR+mwPP58xqLPOpED
	Ni18FsDCECejZ+MEgIXOSUjsizDBKfZLtfw3K/VE5cM7sNKCvoTgpfcy5lDO70U7J/JVni
	6izv9q0obGBHxrQa6PlPQq5tkONUG7c=
X-MC-Unique: FnEQzJIXPkGraVrUGETiZw-1
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	kasan-dev@googlegroups.com,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	=?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: [PATCH v1 0/3] mm/memory_hotplug: use PageOffline() instead of PageReserved() for !ZONE_DEVICE
Date: Fri,  7 Jun 2024 11:09:35 +0200
Message-ID: <20240607090939.89524-1-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.1

This can be considered a long-overdue follow-up to some parts of [1].
The patches are based on [2], but they are not strictly required -- they
just make it clearer why we can use adjust_managed_page_count() for
memory hotplug without going into details about highmem.

We stop initializing pages with PageReserved() in memory hotplug code --
except when dealing with ZONE_DEVICE for now. Instead, we use
PageOffline(): all pages are initialized to PageOffline() when onlining a
memory section, and only the ones actually getting exposed to the
system/page allocator will get PageOffline cleared.

This way, we enlighten memory hotplug more about PageOffline() pages and
can clean up some hacks we have in the virtio-mem code.
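
The page lifecycle described above can be sketched with a small
userspace toy model (not kernel code -- `struct fake_page`,
`online_section()` and `expose_to_allocator()` are invented names, and
the real kernel tracks PageOffline() as a page type, not a plain flag
bit): onlining a section marks every page offline with refcount 1, and
only pages actually handed to the allocator get the flag cleared.

```c
#include <assert.h>
#include <stddef.h>

/* Toy flag; the real PageOffline() lives in page->page_type. */
enum { PG_OFFLINE = 1u << 0 };

struct fake_page {
	unsigned int flags;
	int refcount;
};

/* Onlining a memory section: all pages start PageOffline(), refcount 1. */
static void online_section(struct fake_page *pages, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		pages[i].flags = PG_OFFLINE;
		pages[i].refcount = 1;
	}
}

/*
 * Exposing a page to the page allocator clears PG_OFFLINE; pages a
 * driver keeps back (e.g. ballooned pages) stay PageOffline().
 */
static void expose_to_allocator(struct fake_page *page)
{
	page->flags &= ~PG_OFFLINE;
	page->refcount = 0;	/* buddy-owned pages have refcount 0 */
}
```

Pages never passed to expose_to_allocator() remain offline, which is
exactly what lets drivers like virtio-mem keep parts of a block back.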

What about ZONE_DEVICE? PageOffline() would be wrong there, but we might
just stop using PageReserved() for such pages later by checking
is_zone_device_page() at suitable places. That will be a separate patch
set / proposal.

This primarily affects virtio-mem, HV-balloon and XEN balloon. I only
briefly tested with virtio-mem, which benefits most from these cleanups.

[1] https://lore.kernel.org/all/20191024120938.11237-1-david@redhat.com/
[2] https://lkml.kernel.org/r/20240607083711.62833-1-david@redhat.com

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Mike Rapoport <rppt@kernel.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: "K. Y. Srinivasan" <kys@microsoft.com>
Cc: Haiyang Zhang <haiyangz@microsoft.com>
Cc: Wei Liu <wei.liu@kernel.org>
Cc: Dexuan Cui <decui@microsoft.com>
Cc: "Michael S. Tsirkin" <mst@redhat.com>
Cc: Jason Wang <jasowang@redhat.com>
Cc: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Cc: "Eugenio Pérez" <eperezma@redhat.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Marco Elver <elver@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>

David Hildenbrand (3):
  mm: pass meminit_context to __free_pages_core()
  mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with
    PageOffline() instead of PageReserved()
  mm/memory_hotplug: skip adjust_managed_page_count() for PageOffline()
    pages when offlining

 drivers/hv/hv_balloon.c        |  5 ++--
 drivers/virtio/virtio_mem.c    | 29 +++++++++---------
 drivers/xen/balloon.c          |  9 ++++--
 include/linux/memory_hotplug.h |  4 +--
 include/linux/page-flags.h     | 20 +++++++------
 mm/internal.h                  |  3 +-
 mm/kmsan/init.c                |  2 +-
 mm/memory_hotplug.c            | 31 +++++++++----------
 mm/mm_init.c                   | 14 ++++++---
 mm/page_alloc.c                | 55 +++++++++++++++++++++++++++-------
 10 files changed, 108 insertions(+), 64 deletions(-)


base-commit: 19b8422c5bd56fb5e7085995801c6543a98bda1f
prerequisite-patch-id: ca280eafd2732d7912e0c5249dc0df9ecbef19ca
prerequisite-patch-id: 8f43ebc81fdf7b9b665b57614e9e569535094758
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 09:10:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 09:10:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736427.1142498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVbl-0006CE-O4; Fri, 07 Jun 2024 09:10:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736427.1142498; Fri, 07 Jun 2024 09:10:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVbl-0006Bi-KM; Fri, 07 Jun 2024 09:10:01 +0000
Received: by outflank-mailman (input) for mailman id 736427;
 Fri, 07 Jun 2024 09:10:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NiIp=NJ=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sFVbk-0006Bc-2v
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 09:10:00 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b62463dc-24ad-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 11:09:58 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73])
 by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-94-siZVc8J6MdebVKtoCSUwSQ-1; Fri,
 07 Jun 2024 05:09:51 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id C59D31C05190;
 Fri,  7 Jun 2024 09:09:50 +0000 (UTC)
Received: from t14s.fritz.box (unknown [10.39.194.94])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 44DB337E7;
 Fri,  7 Jun 2024 09:09:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b62463dc-24ad-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717751397;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kiFnKpy9jAQsfmTIBaaW+Ib0u1X/H+DkqsqDykpaDbc=;
	b=GNxqk8ux8t6eT3Pmk4WOQGs8g/mCXLwUO6W/xF9HOi1t8Yba2PDpiWqkULXq4K6vOtXxkM
	dTXBocFW4U8EbRfAenebeuBR230RxJHIs7+o6AVHe+1+wlMhDBVyFnYlEkF9mWCtTUgj5H
	atTIYRAsw3bPgh95K3XImwmmdPBEVrY=
X-MC-Unique: siZVc8J6MdebVKtoCSUwSQ-1
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	kasan-dev@googlegroups.com,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	=?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
Date: Fri,  7 Jun 2024 11:09:36 +0200
Message-ID: <20240607090939.89524-2-david@redhat.com>
In-Reply-To: <20240607090939.89524-1-david@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.1

In preparation for further changes, let's teach __free_pages_core()
about the differences in memory hotplug handling.

Move the memory hotplug specific handling from generic_online_page() to
__free_pages_core(), use adjust_managed_page_count() on the memory
hotplug path, and spell out why memory freed via memblock
cannot currently use adjust_managed_page_count().
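
The accounting split this introduces can be illustrated with a small
userspace model (a sketch only -- the globals and `free_pages_core()`
below are simplified stand-ins, not the kernel implementation): the
hotplug path goes through adjust_managed_page_count(), which updates
both counters, while the early/memblock path only bumps managed pages
because memblock accounted totalram ahead of time.

```c
#include <assert.h>

enum meminit_context { MEMINIT_EARLY, MEMINIT_HOTPLUG };

/* Stand-ins for zone->managed_pages and _totalram_pages. */
static long managed_pages;
static long totalram_pages;

/* adjust_managed_page_count() updates both counters. */
static void adjust_managed_page_count(long nr)
{
	managed_pages += nr;
	totalram_pages += nr;
}

/*
 * Model of the split in __free_pages_core(): hotplugged memory adjusts
 * totalram here; memory freed early via memblock had totalram adjusted
 * ahead of time, so only the managed count changes.
 */
static void free_pages_core(unsigned int order, enum meminit_context ctx)
{
	long nr = 1L << order;

	if (ctx == MEMINIT_HOTPLUG)
		adjust_managed_page_count(nr);
	else
		managed_pages += nr;
}
```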

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 mm/internal.h       |  3 ++-
 mm/kmsan/init.c     |  2 +-
 mm/memory_hotplug.c |  9 +--------
 mm/mm_init.c        |  4 ++--
 mm/page_alloc.c     | 17 +++++++++++++++--
 5 files changed, 21 insertions(+), 14 deletions(-)

diff --git a/mm/internal.h b/mm/internal.h
index 12e95fdf61e90..3fdee779205ab 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
 				    int mt);
 extern void memblock_free_pages(struct page *page, unsigned long pfn,
 					unsigned int order);
-extern void __free_pages_core(struct page *page, unsigned int order);
+extern void __free_pages_core(struct page *page, unsigned int order,
+		enum meminit_context);
 
 /*
  * This will have no effect, other than possibly generating a warning, if the
diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
index 3ac3b8921d36f..ca79636f858e5 100644
--- a/mm/kmsan/init.c
+++ b/mm/kmsan/init.c
@@ -172,7 +172,7 @@ static void do_collection(void)
 		shadow = smallstack_pop(&collect);
 		origin = smallstack_pop(&collect);
 		kmsan_setup_meta(page, shadow, origin, collect.order);
-		__free_pages_core(page, collect.order);
+		__free_pages_core(page, collect.order, MEMINIT_EARLY);
 	}
 }
 
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 171ad975c7cfd..27e3be75edcf7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -630,14 +630,7 @@ EXPORT_SYMBOL_GPL(restore_online_page_callback);
 
 void generic_online_page(struct page *page, unsigned int order)
 {
-	/*
-	 * Freeing the page with debug_pagealloc enabled will try to unmap it,
-	 * so we should map it first. This is better than introducing a special
-	 * case in page freeing fast path.
-	 */
-	debug_pagealloc_map_pages(page, 1 << order);
-	__free_pages_core(page, order);
-	totalram_pages_add(1UL << order);
+	__free_pages_core(page, order, MEMINIT_HOTPLUG);
 }
 EXPORT_SYMBOL_GPL(generic_online_page);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index 019193b0d8703..feb5b6e8c8875 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1938,7 +1938,7 @@ static void __init deferred_free_range(unsigned long pfn,
 	for (i = 0; i < nr_pages; i++, page++, pfn++) {
 		if (pageblock_aligned(pfn))
 			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
-		__free_pages_core(page, 0);
+		__free_pages_core(page, 0, MEMINIT_EARLY);
 	}
 }
 
@@ -2513,7 +2513,7 @@ void __init memblock_free_pages(struct page *page, unsigned long pfn,
 		}
 	}
 
-	__free_pages_core(page, order);
+	__free_pages_core(page, order, MEMINIT_EARLY);
 }
 
 DEFINE_STATIC_KEY_MAYBE(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, init_on_alloc);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 2224965ada468..e0c8a8354be36 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1214,7 +1214,8 @@ static void __free_pages_ok(struct page *page, unsigned int order,
 	__count_vm_events(PGFREE, 1 << order);
 }
 
-void __free_pages_core(struct page *page, unsigned int order)
+void __free_pages_core(struct page *page, unsigned int order,
+		enum meminit_context context)
 {
 	unsigned int nr_pages = 1 << order;
 	struct page *p = page;
@@ -1234,7 +1235,19 @@ void __free_pages_core(struct page *page, unsigned int order)
 	__ClearPageReserved(p);
 	set_page_count(p, 0);
 
-	atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&
+	    unlikely(context == MEMINIT_HOTPLUG)) {
+		/*
+		 * Freeing the page with debug_pagealloc enabled will try to
+		 * unmap it; some archs don't like double-unmappings, so
+		 * map it first.
+		 */
+		debug_pagealloc_map_pages(page, nr_pages);
+		adjust_managed_page_count(page, nr_pages);
+	} else {
+		/* memblock adjusts totalram_pages() ahead of time. */
+		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
+	}
 
 	if (page_contains_unaccepted(page, order)) {
 		if (order == MAX_PAGE_ORDER && __free_unaccepted(page))
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 09:10:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 09:10:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736429.1142518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVbs-0007ZO-6j; Fri, 07 Jun 2024 09:10:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736429.1142518; Fri, 07 Jun 2024 09:10:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVbs-0007ZD-34; Fri, 07 Jun 2024 09:10:08 +0000
Received: by outflank-mailman (input) for mailman id 736429;
 Fri, 07 Jun 2024 09:10:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NiIp=NJ=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sFVbq-0006Bc-3b
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 09:10:06 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b9f949be-24ad-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 11:10:05 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mx-ext.redhat.com [66.187.233.73])
 by relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-606-eUzDhlzTMDqdmYuPAblQDA-1; Fri,
 07 Jun 2024 05:09:56 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 964D43C01221;
 Fri,  7 Jun 2024 09:09:55 +0000 (UTC)
Received: from t14s.fritz.box (unknown [10.39.194.94])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 30BEC37E5;
 Fri,  7 Jun 2024 09:09:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9f949be-24ad-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717751403;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=mc4++sRx3ukGDekZxB1N0n7JzUxaxxhg3TlGVFiWWaQ=;
	b=i3l+1xbWhuleJwBKwwSrDqP43KsQnfifvtyAnYuVufCcL0d0NFBBHXPiLH0MHGUKkDxvfj
	WWrtixlSiV3Fj/7e8bSJYMfx5LkRmpSLW0vEc4HIqqBnUlpQeOlJpHndGG82/ltRA9tDGN
	XsuZBbOBpONWjoqPTYqFHPyaYQlzwsg=
X-MC-Unique: eUzDhlzTMDqdmYuPAblQDA-1
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	kasan-dev@googlegroups.com,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	=?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of !ZONE_DEVICE with PageOffline() instead of PageReserved()
Date: Fri,  7 Jun 2024 11:09:37 +0200
Message-ID: <20240607090939.89524-3-david@redhat.com>
In-Reply-To: <20240607090939.89524-1-david@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.1

We currently initialize the memmap such that PG_reserved is set and the
refcount of the page is 1. In virtio-mem code, we have to manually clear
that PG_reserved flag to make memory offlining with partially hotplugged
memory blocks possible: has_unmovable_pages() would otherwise bail out on
such pages.

We want to avoid PG_reserved where possible and move to typed pages
instead. Further, we want to enlighten memory offlining code more about
PG_offline: offline pages in an online memory section. One example is
handling managed page count adjustments in a cleaner way during memory
offlining.

So let's initialize the pages with PG_offline instead of PG_reserved.
generic_online_page()->__free_pages_core() will now clear that flag before
handing that memory to the buddy.
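
The resulting flag lifecycle can be sketched as a userspace toy model
(illustrative only -- `init_hotplug_page()` and `hand_to_buddy()` are
invented names condensing memmap_init_range() and __free_pages_core()):
hotplugged ZONE_DEVICE pages keep PG_reserved for now, everything else
starts PageOffline() and has the flag cleared on the way to the buddy.

```c
#include <assert.h>

enum zone_type { ZONE_NORMAL, ZONE_DEVICE };
enum { PG_OFFLINE = 1u << 0, PG_RESERVED = 1u << 1 };

struct fake_page {
	unsigned int flags;
};

/*
 * Hotplug memmap init after this patch: ZONE_DEVICE pages still get
 * PG_reserved; all other hotplugged pages start PageOffline().
 */
static void init_hotplug_page(struct fake_page *p, enum zone_type zone)
{
	p->flags = (zone == ZONE_DEVICE) ? PG_RESERVED : PG_OFFLINE;
}

/*
 * generic_online_page()->__free_pages_core() clears PageOffline()
 * before handing the page over to the buddy allocator.
 */
static void hand_to_buddy(struct fake_page *p)
{
	p->flags &= ~PG_OFFLINE;
}
```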

Note that the page refcount is still 1 and would forbid offlining of such
memory except when special care is taken during GOING_OFFLINE, as
currently only implemented by virtio-mem.

With this change, we can now get non-PageReserved() pages in the XEN
balloon list. From what I can tell, that can already happen via
decrease_reservation(), so that should be fine.

HV-balloon should not really observe a change: partial online memory
blocks still cannot get surprise-offlined, because the refcount of these
PageOffline() pages is 1.

Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
hotplugged pages are now PageOffline() instead of PageReserved() before
they are handed over to the buddy.

We'll leave the ZONE_DEVICE case alone for now.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/hv/hv_balloon.c     |  5 ++---
 drivers/virtio/virtio_mem.c | 18 ++++++++++++------
 drivers/xen/balloon.c       |  9 +++++++--
 include/linux/page-flags.h  | 12 +++++-------
 mm/memory_hotplug.c         | 16 ++++++++++------
 mm/mm_init.c                | 10 ++++++++--
 mm/page_alloc.c             | 32 +++++++++++++++++++++++---------
 7 files changed, 67 insertions(+), 35 deletions(-)

diff --git a/drivers/hv/hv_balloon.c b/drivers/hv/hv_balloon.c
index e000fa3b9f978..c1be38edd8361 100644
--- a/drivers/hv/hv_balloon.c
+++ b/drivers/hv/hv_balloon.c
@@ -693,9 +693,8 @@ static void hv_page_online_one(struct hv_hotadd_state *has, struct page *pg)
 		if (!PageOffline(pg))
 			__SetPageOffline(pg);
 		return;
-	}
-	if (PageOffline(pg))
-		__ClearPageOffline(pg);
+	} else if (!PageOffline(pg))
+		return;
 
 	/* This frame is currently backed; online the page. */
 	generic_online_page(pg, 0);
diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index a3857bacc8446..b90df29621c81 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -1146,12 +1146,16 @@ static void virtio_mem_set_fake_offline(unsigned long pfn,
 	for (; nr_pages--; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
-		__SetPageOffline(page);
-		if (!onlined) {
+		if (!onlined)
+			/*
+			 * Pages that have not been onlined yet were initialized
+			 * to PageOffline(). Remember that we have to route them
+			 * through generic_online_page().
+			 */
 			SetPageDirty(page);
-			/* FIXME: remove after cleanups */
-			ClearPageReserved(page);
-		}
+		else
+			__SetPageOffline(page);
+		VM_WARN_ON_ONCE(!PageOffline(page));
 	}
 	page_offline_end();
 }
@@ -1166,9 +1170,11 @@ static void virtio_mem_clear_fake_offline(unsigned long pfn,
 	for (; nr_pages--; pfn++) {
 		struct page *page = pfn_to_page(pfn);
 
-		__ClearPageOffline(page);
 		if (!onlined)
+			/* generic_online_page() will clear PageOffline(). */
 			ClearPageDirty(page);
+		else
+			__ClearPageOffline(page);
 	}
 }
 
diff --git a/drivers/xen/balloon.c b/drivers/xen/balloon.c
index aaf2514fcfa46..528395133b4f8 100644
--- a/drivers/xen/balloon.c
+++ b/drivers/xen/balloon.c
@@ -146,7 +146,8 @@ static DECLARE_WAIT_QUEUE_HEAD(balloon_wq);
 /* balloon_append: add the given page to the balloon. */
 static void balloon_append(struct page *page)
 {
-	__SetPageOffline(page);
+	if (!PageOffline(page))
+		__SetPageOffline(page);
 
 	/* Lowmem is re-populated first, so highmem pages go at list tail. */
 	if (PageHighMem(page)) {
@@ -412,7 +413,11 @@ static enum bp_state increase_reservation(unsigned long nr_pages)
 
 		xenmem_reservation_va_mapping_update(1, &page, &frame_list[i]);
 
-		/* Relinquish the page back to the allocator. */
+		/*
+		 * Relinquish the page back to the allocator. Note that
+		 * some pages, including ones added via xen_online_page(), might
+		 * not be marked reserved; free_reserved_page() will handle that.
+		 */
 		free_reserved_page(page);
 	}
 
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index f04fea86324d9..e0362ce7fc109 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -30,16 +30,11 @@
  * - Pages falling into physical memory gaps - not IORESOURCE_SYSRAM. Trying
  *   to read/write these pages might end badly. Don't touch!
  * - The zero page(s)
- * - Pages not added to the page allocator when onlining a section because
- *   they were excluded via the online_page_callback() or because they are
- *   PG_hwpoison.
  * - Pages allocated in the context of kexec/kdump (loaded kernel image,
  *   control pages, vmcoreinfo)
  * - MMIO/DMA pages. Some architectures don't allow to ioremap pages that are
  *   not marked PG_reserved (as they might be in use by somebody else who does
  *   not respect the caching strategy).
- * - Pages part of an offline section (struct pages of offline sections should
- *   not be trusted as they will be initialized when first onlined).
  * - MCA pages on ia64
  * - Pages holding CPU notes for POWER Firmware Assisted Dump
  * - Device memory (e.g. PMEM, DAX, HMM)
@@ -1021,6 +1016,10 @@ PAGE_TYPE_OPS(Buddy, buddy, buddy)
  * The content of these pages is effectively stale. Such pages should not
  * be touched (read/write/dump/save) except by their owner.
  *
+ * When a memory block gets onlined, all pages are initialized with a
+ * refcount of 1 and PageOffline(). generic_online_page() will
+ * take care of clearing PageOffline().
+ *
  * If a driver wants to allow to offline unmovable PageOffline() pages without
  * putting them back to the buddy, it can do so via the memory notifier by
  * decrementing the reference count in MEM_GOING_OFFLINE and incrementing the
@@ -1028,8 +1027,7 @@ PAGE_TYPE_OPS(Buddy, buddy, buddy)
  * pages (now with a reference count of zero) are treated like free pages,
  * allowing the containing memory block to get offlined. A driver that
  * relies on this feature is aware that re-onlining the memory block will
- * require to re-set the pages PageOffline() and not giving them to the
- * buddy via online_page_callback_t.
+ * require not giving them to the buddy via generic_online_page().
  *
  * There are drivers that mark a page PageOffline() and expect there won't be
  * any further access to page content. PFN walkers that read content of random
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 27e3be75edcf7..0254059efcbe1 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -734,7 +734,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
 /*
  * Associate the pfn range with the given zone, initializing the memmaps
  * and resizing the pgdat/zone data to span the added pages. After this
- * call, all affected pages are PG_reserved.
+ * call, all affected pages are PageOffline().
  *
  * All aligned pageblocks are initialized to the specified migratetype
  * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
@@ -1100,8 +1100,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 
 	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
 
-	for (i = 0; i < nr_pages; i++)
-		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
+	for (i = 0; i < nr_pages; i++) {
+		struct page *page = pfn_to_page(pfn + i);
+
+		__ClearPageOffline(page);
+		SetPageVmemmapSelfHosted(page);
+	}
 
 	/*
 	 * It might be that the vmemmap_pages fully span sections. If that is
@@ -1959,9 +1963,9 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 	 * Don't allow to offline memory blocks that contain holes.
 	 * Consequently, memory blocks with holes can never get onlined
 	 * via the hotplug path - online_pages() - as hotplugged memory has
-	 * no holes. This way, we e.g., don't have to worry about marking
-	 * memory holes PG_reserved, don't need pfn_valid() checks, and can
-	 * avoid using walk_system_ram_range() later.
+	 * no holes. This way, we don't have to worry about memory holes,
+	 * don't need pfn_valid() checks, and can avoid using
+	 * walk_system_ram_range() later.
 	 */
 	walk_system_ram_range(start_pfn, nr_pages, &system_ram_pages,
 			      count_system_ram_pages_cb);
diff --git a/mm/mm_init.c b/mm/mm_init.c
index feb5b6e8c8875..c066c1c474837 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -892,8 +892,14 @@ void __meminit memmap_init_range(unsigned long size, int nid, unsigned long zone
 
 		page = pfn_to_page(pfn);
 		__init_single_page(page, pfn, zone, nid);
-		if (context == MEMINIT_HOTPLUG)
-			__SetPageReserved(page);
+		if (context == MEMINIT_HOTPLUG) {
+#ifdef CONFIG_ZONE_DEVICE
+			if (zone == ZONE_DEVICE)
+				__SetPageReserved(page);
+			else
+#endif
+				__SetPageOffline(page);
+		}
 
 		/*
 		 * Usually, we want to mark the pageblock MIGRATE_MOVABLE,
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e0c8a8354be36..039bc52cc9091 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1225,18 +1225,23 @@ void __free_pages_core(struct page *page, unsigned int order,
 	 * When initializing the memmap, __init_single_page() sets the refcount
 	 * of all pages to 1 ("allocated"/"not free"). We have to set the
 	 * refcount of all involved pages to 0.
+	 *
+	 * Note that hotplugged memory pages are initialized to PageOffline().
+	 * Pages freed from memblock might be marked as reserved.
 	 */
-	prefetchw(p);
-	for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
-		prefetchw(p + 1);
-		__ClearPageReserved(p);
-		set_page_count(p, 0);
-	}
-	__ClearPageReserved(p);
-	set_page_count(p, 0);
-
 	if (IS_ENABLED(CONFIG_MEMORY_HOTPLUG) &&
 	    unlikely(context == MEMINIT_HOTPLUG)) {
+		prefetchw(p);
+		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
+			prefetchw(p + 1);
+			VM_WARN_ON_ONCE(PageReserved(p));
+			__ClearPageOffline(p);
+			set_page_count(p, 0);
+		}
+		VM_WARN_ON_ONCE(PageReserved(p));
+		__ClearPageOffline(p);
+		set_page_count(p, 0);
+
 		/*
 		 * Freeing the page with debug_pagealloc enabled will try to
 		 * unmap it; some archs don't like double-unmappings, so
@@ -1245,6 +1250,15 @@ void __free_pages_core(struct page *page, unsigned int order,
 		debug_pagealloc_map_pages(page, nr_pages);
 		adjust_managed_page_count(page, nr_pages);
 	} else {
+		prefetchw(p);
+		for (loop = 0; loop < (nr_pages - 1); loop++, p++) {
+			prefetchw(p + 1);
+			__ClearPageReserved(p);
+			set_page_count(p, 0);
+		}
+		__ClearPageReserved(p);
+		set_page_count(p, 0);
+
 		/* memblock adjusts totalram_pages() ahead of time. */
 		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
 	}
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 09:10:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 09:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736431.1142527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVc4-00081Y-FC; Fri, 07 Jun 2024 09:10:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736431.1142527; Fri, 07 Jun 2024 09:10:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVc4-00081R-C3; Fri, 07 Jun 2024 09:10:20 +0000
Received: by outflank-mailman (input) for mailman id 736431;
 Fri, 07 Jun 2024 09:10:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NiIp=NJ=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sFVc3-0006Bc-P9
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 09:10:19 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c2252ea2-24ad-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 11:10:18 +0200 (CEST)
Received: from mimecast-mx02.redhat.com (mimecast-mx02.redhat.com
 [66.187.233.88]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-592-yH10sX6lNoa9rUALrYBFDg-1; Fri, 07 Jun 2024 05:10:01 -0400
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.rdu2.redhat.com
 [10.11.54.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mimecast-mx02.redhat.com (Postfix) with ESMTPS id 16217811E81;
 Fri,  7 Jun 2024 09:10:00 +0000 (UTC)
Received: from t14s.fritz.box (unknown [10.39.194.94])
 by smtp.corp.redhat.com (Postfix) with ESMTP id CF51437E7;
 Fri,  7 Jun 2024 09:09:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2252ea2-24ad-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1717751417;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=BkvZtA0DZCTst0gmiY3p4tzfSSSAg0RWHnN+gN89loE=;
	b=GKS4Exv49hOzsBLMaPqOha/mm31v0EwICUqWSPgUeIO9FvzjrdpskTrBZeDrcPtsfe1Rbk
	BSfeVuNbfB0vJgeg6zAA6QLPbzSeXHIKiy4kFQJZobdZEK91aRfP1Q1X7v5r4coAptWyet
	hRCjqaDSgsmF4ZlhH4nzU2ubmxDkq9Q=
X-MC-Unique: yH10sX6lNoa9rUALrYBFDg-1
From: David Hildenbrand <david@redhat.com>
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	kasan-dev@googlegroups.com,
	David Hildenbrand <david@redhat.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	Oscar Salvador <osalvador@suse.de>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>,
	Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	=?UTF-8?q?Eugenio=20P=C3=A9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>,
	Dmitry Vyukov <dvyukov@google.com>
Subject: [PATCH v1 3/3] mm/memory_hotplug: skip adjust_managed_page_count() for PageOffline() pages when offlining
Date: Fri,  7 Jun 2024 11:09:38 +0200
Message-ID: <20240607090939.89524-4-david@redhat.com>
In-Reply-To: <20240607090939.89524-1-david@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Scanned-By: MIMEDefang 3.4.1 on 10.11.54.1

We currently have a hack in place for virtio-mem to handle memory
offlining with PageOffline pages for which we already adjusted the
managed page count.

Let's enlighten memory offlining code so we can get rid of that hack,
and document the situation.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
 drivers/virtio/virtio_mem.c    | 11 ++---------
 include/linux/memory_hotplug.h |  4 ++--
 include/linux/page-flags.h     |  8 ++++++--
 mm/memory_hotplug.c            |  6 +++---
 mm/page_alloc.c                | 12 ++++++++++--
 5 files changed, 23 insertions(+), 18 deletions(-)
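The net effect of the accounting change can be sketched as a tiny
userspace toy model (the names toy_page, managed_pages and the toy_*
functions are simplified stand-ins invented for this sketch, not the
real kernel structures or functions):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for a struct page with its PageOffline() state. */
struct toy_page {
    bool offline;
};

/* Stand-in for the zone's managed page counter. */
static long managed_pages;

/*
 * Mirrors the new __offline_isolated_pages() contract: return only the
 * number of managed (non-PageOffline()) pages in the range, i.e. the
 * number the caller must subtract from the managed page counter.
 */
static unsigned long toy_offline_isolated_pages(struct toy_page *pages,
                                                unsigned long nr_pages)
{
    unsigned long already_offline = 0, i;

    for (i = 0; i < nr_pages; i++)
        if (pages[i].offline)
            already_offline++;
    return nr_pages - already_offline;
}

/*
 * Mirrors offline_pages(): adjust the managed counter by the returned
 * value instead of by nr_pages, so pages a driver already unaccounted
 * (fake-offlined) are not subtracted a second time.
 */
static void toy_offline_pages(struct toy_page *pages, unsigned long nr_pages)
{
    unsigned long managed = toy_offline_isolated_pages(pages, nr_pages);

    managed_pages -= (long)managed;
}
```

With the old behaviour (subtracting nr_pages unconditionally), a driver
that had fake-offlined some pages had to bump the counter back up first —
the hack this patch removes.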

diff --git a/drivers/virtio/virtio_mem.c b/drivers/virtio/virtio_mem.c
index b90df29621c81..b0b8714415783 100644
--- a/drivers/virtio/virtio_mem.c
+++ b/drivers/virtio/virtio_mem.c
@@ -1269,12 +1269,6 @@ static void virtio_mem_fake_offline_going_offline(unsigned long pfn,
 	struct page *page;
 	unsigned long i;
 
-	/*
-	 * Drop our reference to the pages so the memory can get offlined
-	 * and add the unplugged pages to the managed page counters (so
-	 * offlining code can correctly subtract them again).
-	 */
-	adjust_managed_page_count(pfn_to_page(pfn), nr_pages);
 	/* Drop our reference to the pages so the memory can get offlined. */
 	for (i = 0; i < nr_pages; i++) {
 		page = pfn_to_page(pfn + i);
@@ -1293,10 +1287,9 @@ static void virtio_mem_fake_offline_cancel_offline(unsigned long pfn,
 	unsigned long i;
 
 	/*
-	 * Get the reference we dropped when going offline and subtract the
-	 * unplugged pages from the managed page counters.
+	 * Get the reference again that we dropped via page_ref_dec_and_test()
+	 * when going offline.
 	 */
-	adjust_managed_page_count(pfn_to_page(pfn), -nr_pages);
 	for (i = 0; i < nr_pages; i++)
 		page_ref_inc(pfn_to_page(pfn + i));
 }
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7a9ff464608d7..ebe876930e782 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -175,8 +175,8 @@ extern int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
 extern void mhp_deinit_memmap_on_memory(unsigned long pfn, unsigned long nr_pages);
 extern int online_pages(unsigned long pfn, unsigned long nr_pages,
 			struct zone *zone, struct memory_group *group);
-extern void __offline_isolated_pages(unsigned long start_pfn,
-				     unsigned long end_pfn);
+extern unsigned long __offline_isolated_pages(unsigned long start_pfn,
+		unsigned long end_pfn);
 
 typedef void (*online_page_callback_t)(struct page *page, unsigned int order);
 
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index e0362ce7fc109..0876aca0833e7 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -1024,11 +1024,15 @@ PAGE_TYPE_OPS(Buddy, buddy, buddy)
  * putting them back to the buddy, it can do so via the memory notifier by
  * decrementing the reference count in MEM_GOING_OFFLINE and incrementing the
  * reference count in MEM_CANCEL_OFFLINE. When offlining, the PageOffline()
- * pages (now with a reference count of zero) are treated like free pages,
- * allowing the containing memory block to get offlined. A driver that
+ * pages (now with a reference count of zero) are treated like free (unmanaged)
+ * pages, allowing the containing memory block to get offlined. A driver that
  * relies on this feature is aware that re-onlining the memory block will
  * require not giving them to the buddy via generic_online_page().
  *
+ * Memory offlining code will not adjust the managed page count for any
+ * PageOffline() pages, treating them like they were never exposed to the
+ * buddy using generic_online_page().
+ *
  * There are drivers that mark a page PageOffline() and expect there won't be
  * any further access to page content. PFN walkers that read content of random
  * pages should check PageOffline() and synchronize with such drivers using
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0254059efcbe1..965707a02556f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1941,7 +1941,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 			struct zone *zone, struct memory_group *group)
 {
 	const unsigned long end_pfn = start_pfn + nr_pages;
-	unsigned long pfn, system_ram_pages = 0;
+	unsigned long pfn, managed_pages, system_ram_pages = 0;
 	const int node = zone_to_nid(zone);
 	unsigned long flags;
 	struct memory_notify arg;
@@ -2062,7 +2062,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 	} while (ret);
 
 	/* Mark all sections offline and remove free pages from the buddy. */
-	__offline_isolated_pages(start_pfn, end_pfn);
+	managed_pages = __offline_isolated_pages(start_pfn, end_pfn);
 	pr_debug("Offlined Pages %ld\n", nr_pages);
 
 	/*
@@ -2078,7 +2078,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
 	zone_pcp_enable(zone);
 
 	/* removal success */
-	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
+	adjust_managed_page_count(pfn_to_page(start_pfn), -managed_pages);
 	adjust_present_page_count(pfn_to_page(start_pfn), group, -nr_pages);
 
 	/* reinitialise watermarks and update pcp limits */
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 039bc52cc9091..809bc4a816e85 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -6745,14 +6745,19 @@ void zone_pcp_reset(struct zone *zone)
 /*
  * All pages in the range must be in a single zone, must not contain holes,
  * must span full sections, and must be isolated before calling this function.
+ *
+ * Returns the number of managed (non-PageOffline()) pages in the range: the
+ * number of pages for which memory offlining code must adjust managed page
+ * counters using adjust_managed_page_count().
  */
-void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
+unsigned long __offline_isolated_pages(unsigned long start_pfn,
+		unsigned long end_pfn)
 {
+	unsigned long already_offline = 0, flags;
 	unsigned long pfn = start_pfn;
 	struct page *page;
 	struct zone *zone;
 	unsigned int order;
-	unsigned long flags;
 
 	offline_mem_sections(pfn, end_pfn);
 	zone = page_zone(pfn_to_page(pfn));
@@ -6774,6 +6779,7 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 		if (PageOffline(page)) {
 			BUG_ON(page_count(page));
 			BUG_ON(PageBuddy(page));
+			already_offline++;
 			pfn++;
 			continue;
 		}
@@ -6786,6 +6792,8 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 		pfn += (1 << order);
 	}
 	spin_unlock_irqrestore(&zone->lock, flags);
+
+	return end_pfn - start_pfn - already_offline;
 }
 #endif
 
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 09:14:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 09:14:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736451.1142539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVgV-0001Ps-6K; Fri, 07 Jun 2024 09:14:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736451.1142539; Fri, 07 Jun 2024 09:14:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFVgV-0001Pl-1c; Fri, 07 Jun 2024 09:14:55 +0000
Received: by outflank-mailman (input) for mailman id 736451;
 Fri, 07 Jun 2024 09:14:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ERR9=NJ=epam.com=prvs=288857fa54=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sFVgT-0001Pf-I9
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 09:14:53 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64e2fd0d-24ae-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 11:14:52 +0200 (CEST)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45769RDY013605;
 Fri, 7 Jun 2024 09:14:37 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2040.outbound.protection.outlook.com [104.47.13.40])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3ykvp6rjun-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 07 Jun 2024 09:14:37 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by DU0PR03MB8981.eurprd03.prod.outlook.com (2603:10a6:10:477::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.15; Fri, 7 Jun
 2024 09:14:33 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Fri, 7 Jun 2024
 09:14:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64e2fd0d-24ae-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mYOnVkWUUHaqMSpkk69QKjRiMTUJhdfh1v+Ow5eGCEt74p5tEsSa/pOIXq/57Ek3mTyEI85Yx8MARnDogUZ+8cpYy3/sBsCoPZpwW06jQJua7JTysGMNuW5W7us9Jd6JlYTq/gZZspztpPdbuXJwhjdtvSxzlU8S1vmb1rHR2EYYZfnFVxQaTBI2Jr/ce4P7YTupyz3s8KjCttFwk0+ikGvhqyJtD/yQNLg0ok5oErWqeHkILnKuPgrWG3Ad5WBDIZxrorHADZP/wh8ldp9UmTqFSmyQywGEQk6qRcVnAvb10fLwFPmXp77fLLx0GAKw6W3BAdrvgz1TRTYDcfVYsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GiKVYOgXgaZpYoJuUo4D3kT4lGfZJ/ey2VihoOkDpHU=;
 b=F79Kl81dmgOYVf+6fzuohp7kBQuUdwdrmhsZH+5DCuFr+c1qnzbQXMTgpn6Hp2RUXVYzd9kzJ1Q+IURBsTWQ+cd71oW7Ol9IcNZxLbeg3ZVPOge1mYP+7W386qOqGFRRJjHc4KH3J5gkk3u6V4qd+hByx6/4FtBWznzfe9nD+CYj5HH0yjMTWYCTz15M/c++X1HDMJX2ctjjy07cMPy+b23aRQ76VlvGLTdX/B43ulK9PGCaReYvFtBPSi2p2pSBoPbn2Omv5WvQXJ5drT7YNIp8P14qqfWa6ohcF3cNz4NAtjFhfX8mOeTUtU5M5Gcun0jO2n3NwigRgm7Fdyxotg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
Message-ID: <647b086a-04b0-42be-a7b8-a266c4f4e64b@epam.com>
Date: Fri, 7 Jun 2024 12:14:31 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/cpufreq: separate powernow/hwp cpufreq code
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Jason Andryuk <jason.andryuk@amd.com>, xen-devel@lists.xenproject.org
References: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
 <5cb13d1a-1452-4542-b50d-23e6a9d9d3ef@suse.com>
 <c66966da-bbe3-432e-8a2f-809bf434db39@epam.com>
 <ab57f7f3-ac54-4b41-950a-1f7bee4293ab@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <ab57f7f3-ac54-4b41-950a-1f7bee4293ab@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA2P291CA0044.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:1f::20) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|DU0PR03MB8981:EE_
X-MS-Office365-Filtering-Correlation-Id: 844d433e-8692-484b-6c4e-08dc86d23e9a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 844d433e-8692-484b-6c4e-08dc86d23e9a
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 09:14:33.2767
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SAFM+Qu4/pO/5oPATkvA08aBT4okO2iFTB9dU2PuVY5JR5ObflMaBzbeFZY9LwEYsCqE7WEQ7oCRsNZccLtFfQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR03MB8981
X-Proofpoint-GUID: f23x7LXJJyCBkg_1Dwu6Ur0MIHtBW4Fw
X-Proofpoint-ORIG-GUID: f23x7LXJJyCBkg_1Dwu6Ur0MIHtBW4Fw
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-07_04,2024-06-06_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0
 malwarescore=0 clxscore=1015 spamscore=0 impostorscore=0 bulkscore=0
 adultscore=0 phishscore=0 mlxlogscore=999 mlxscore=0 suspectscore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406070066

06.06.24 10:54, Jan Beulich:
> On 06.06.2024 09:30, Sergiy Kibrik wrote:
>> 06.06.24 10:08, Jan Beulich:
>>> On 04.06.2024 11:34, Sergiy Kibrik wrote:
>>>> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>>> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>>> @@ -657,7 +657,7 @@ static int __init cf_check cpufreq_driver_init(void)
>>>>    
>>>>            case X86_VENDOR_AMD:
>>>>            case X86_VENDOR_HYGON:
>>>> -            ret = powernow_register_driver();
>>>> +            ret = IS_ENABLED(CONFIG_AMD) ? powernow_register_driver() : -ENODEV;
>>>>                break;
>>>>            }
>>>
>>> What about the Intel-specific code immediately up from here?
>>> Dealing with that as well may likely permit to reduce ...
>>
>> you mean to guard a call to hwp_register_driver() the same way as for
>> powernow_register_driver(), and save one stub?
> 
> Yes, and perhaps more. Maybe more stubs can be avoided? And
> acpi_cpufreq_driver doesn't need registering either, and hence
> would presumably be left unreferenced when !INTEL?
> 

{get,set}_hwp_para() can be avoided, as they're called just once 
and can be guarded by IS_ENABLED(CONFIG_INTEL).
The same goes for hwp_cmdline_parse().
As for hwp_active(), it's used many times by generic cpufreq code 
and even outside of cpufreq, so it probably has to either be a stub, or 
be moved out of hwp.c and become something like this:

  bool hwp_active(void)
  {
     return IS_ENABLED(CONFIG_INTEL) && hwp_in_use;
  }

Though I'm not sure such a move would be any better than a stub.
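For reference, the preprocessor trick that lets IS_ENABLED() appear in a
plain C expression like the above can be demonstrated stand-alone. This
is a compilable toy: CONFIG_INTEL is hard-coded here for the demo, and
the macros are a simplified copy of the well-known kconfig.h trick, not
the actual Xen/Linux headers:

```c
#include <assert.h>
#include <stdbool.h>

#define CONFIG_INTEL 1            /* pretend INTEL support is compiled in */

/*
 * Simplified IS_ENABLED(): expands to 1 when the option macro is
 * defined as 1, and to 0 otherwise. The double indirection in
 * __is_defined() forces the option macro to expand before pasting.
 */
#define __ARG_PLACEHOLDER_1 0,
#define __take_second_arg(_ignored, val, ...) val
#define IS_ENABLED(option) __is_defined(option)
#define __is_defined(x) ___is_defined(x)
#define ___is_defined(val) ____is_defined(__ARG_PLACEHOLDER_##val)
#define ____is_defined(arg1_or_junk) __take_second_arg(arg1_or_junk 1, 0)

static bool hwp_in_use = true;    /* stand-in for the real flag in hwp.c */

/* The shape proposed above: usable from generic code without a stub. */
bool hwp_active(void)
{
    return IS_ENABLED(CONFIG_INTEL) && hwp_in_use;
}
```

If CONFIG_INTEL were simply not defined, the paste would produce the
undefined token __ARG_PLACEHOLDER_CONFIG_INTEL, IS_ENABLED() would
evaluate to 0, and the compiler would fold hwp_active() to
"return false" — no stub or #ifdef needed at the call sites.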

acpi_cpufreq_driver, i.e. most of the code in cpufreq.c, can 
probably be separated out into acpi.c and put under CONFIG_INTEL as well. 
What do you think of this?

  -Sergiy


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 09:41:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 09:41:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736466.1142547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFW5w-0007OT-Ct; Fri, 07 Jun 2024 09:41:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736466.1142547; Fri, 07 Jun 2024 09:41:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFW5w-0007OM-AA; Fri, 07 Jun 2024 09:41:12 +0000
Received: by outflank-mailman (input) for mailman id 736466;
 Fri, 07 Jun 2024 09:41:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ERR9=NJ=epam.com=prvs=288857fa54=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sFW5u-0007OG-7q
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 09:41:10 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f006694-24b2-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 11:41:05 +0200 (CEST)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 4579KJqL031755;
 Fri, 7 Jun 2024 09:41:00 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2108.outbound.protection.outlook.com [104.47.18.108])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3ykyfwg2pe-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 07 Jun 2024 09:41:00 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by AS4PR03MB8254.eurprd03.prod.outlook.com (2603:10a6:20b:4fd::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.26; Fri, 7 Jun
 2024 09:40:57 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Fri, 7 Jun 2024
 09:40:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f006694-24b2-11ef-90a2-e314d9c70b13
Message-ID: <971ed412-c016-4e95-b691-2e6795637c61@epam.com>
Date: Fri, 7 Jun 2024 12:40:55 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 02/16] x86/altp2m: check if altp2m active when
 giving away p2midx
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
        Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <9306d31184b8e714c3a10ccc6a2b2c6a80777ddb.1717410850.git.Sergiy_Kibrik@epam.com>
 <7493c91c-070b-4828-a7f9-15d618d49ce5@suse.com>
 <c72ef6c8-6e5a-4533-a049-7636f6387e4b@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <c72ef6c8-6e5a-4533-a049-7636f6387e4b@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA2P291CA0046.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:1f::15) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|AS4PR03MB8254:EE_
X-MS-Office365-Filtering-Correlation-Id: 830b2d32-6cd8-4de2-adb0-08dc86d5eeb9
X-LD-Processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|376005|366007|1800799015;
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 830b2d32-6cd8-4de2-adb0-08dc86d5eeb9
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Jun 2024 09:40:57.2271
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: P2rUWKXlWhl+P0iIEr6eyzSTGpbg5KZMpIDgRxVTmgncMs9c85wTPgNVwMMJ6/lzozzde55uiOEt0SpRrOmoMA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR03MB8254
X-Proofpoint-GUID: PKBOr8KXeGoeAHSinQdc0mhaoa3kX7oU
X-Proofpoint-ORIG-GUID: PKBOr8KXeGoeAHSinQdc0mhaoa3kX7oU
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-07_04,2024-06-06_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1015 spamscore=0
 impostorscore=0 suspectscore=0 adultscore=0 lowpriorityscore=0 bulkscore=0
 phishscore=0 malwarescore=0 mlxlogscore=903 mlxscore=0 priorityscore=1501
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2405170001
 definitions=main-2406070069

07.06.24 10:50, Jan Beulich:
> On 07.06.2024 09:25, Jan Beulich wrote:
>> On 03.06.2024 13:09, Sergiy Kibrik wrote:
>>> @@ -38,9 +34,13 @@ static inline bool altp2m_active(const struct domain *d)
>>>   }
>>>   
>>>   /* Only declaration is needed. DCE will optimise it out when linking. */
>>> -uint16_t altp2m_vcpu_idx(const struct vcpu *v);
>>>   void altp2m_vcpu_disable_ve(struct vcpu *v);
>>>   
>>>   #endif
>>>   
>>> +static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>>> +{
>>> +    return altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0;
>>> +}
>>
>> While perhaps okay this way as a first step,
> 
> Hmm, or maybe not. 0 is a valid index, and hence could be misleading
> at call sites.

I'm returning index 0 here because the implementations of
p2m_get_mem_access() for x86 & ARM expect it to be 0 when altp2m is not
active or not implemented.
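Sergiy's point, and Jan's objection to it, can be made concrete with a minimal standalone sketch. The struct layouts below are simplified stand-ins for Xen's real `struct domain`/`struct vcpu` (the real definitions differ); only the return-0 behaviour of the patch hunk is modelled:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's structures, not the real layouts. */
struct domain {
    bool altp2m_active;
};

struct vcpu {
    struct domain *domain;
    uint16_t p2midx;
};

static inline bool altp2m_active(const struct domain *d)
{
    return d->altp2m_active;
}

/*
 * Mirrors the patch hunk: when altp2m is inactive, index 0 (the host
 * p2m) is handed back. Jan's concern: 0 is also a valid altp2m view
 * index, so a caller cannot tell "altp2m inactive" from "view 0".
 */
static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
{
    return altp2m_active(v->domain) ? v->p2midx : 0;
}
```

The last case below is exactly the ambiguity Jan points out: an active vCPU on view 0 and an inactive one are indistinguishable to the caller.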

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 10:05:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 10:05:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736472.1142558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFWT2-00035T-99; Fri, 07 Jun 2024 10:05:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736472.1142558; Fri, 07 Jun 2024 10:05:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFWT2-00035M-5m; Fri, 07 Jun 2024 10:05:04 +0000
Received: by outflank-mailman (input) for mailman id 736472;
 Fri, 07 Jun 2024 10:05:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vKbl=NJ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sFWT1-00035G-CO
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 10:05:03 +0000
Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com
 [2a00:1450:4864:20::333])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 675769f3-24b5-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 12:05:01 +0200 (CEST)
Received: by mail-wm1-x333.google.com with SMTP id
 5b1f17b1804b1-42166eb31b2so4684275e9.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 03:05:01 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c2cd2d3sm47711145e9.41.2024.06.07.03.04.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 07 Jun 2024 03:04:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 675769f3-24b5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717754700; x=1718359500; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=cC7s1N5JwzVGpOSE8kERMNzt+zX1aSVzHWVpT78zhhc=;
        b=SX8B+ShbqXzIcmHG114pqic9U5nE6i5TUyroJNeKToa7pVPuIQq8Spfbn+6dO1PTIE
         jjDRKJpMO1hEht4j00OKWnM6ZM+TA4HikLx2qbZGkwvLhCRiUwsD7k2UJPYDd0+MiaZm
         D/20HkvnR0GMlCOWyVdjaHLzl1NKD25NcaZCg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717754700; x=1718359500;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=cC7s1N5JwzVGpOSE8kERMNzt+zX1aSVzHWVpT78zhhc=;
        b=Klp2l6QvFrijOh8afllieBTcraJ5APFcpzmxue7RHXuizikmHC0oyeZgS2aWArXewi
         +ZLGs7eNAuab/+huQ6sIYWg/F3ATXh9EUrAr+Pq40bPfgE/6jpa5DauQhdYPeBzIgtDf
         iKks10GzrzWiRn64f6IgjovLPyenQsueOXZa8WoYor8SKK9lj9bNQ5gfHcGViZqM4eY3
         tRF+nOnsp7HilyVX0KigcBva3o37QaNlAK+QuY9ykBZK++41Db1RxgHwNqVjLMuZsnYa
         0MI3yP3Qj4l3Te5ctIGKeG8yr1iG9MYOo904FBue3T8ywjXrIca5Lh9hFSWIpv0LWdtV
         DSHQ==
X-Gm-Message-State: AOJu0Yxf1UrrYMqG5MHDzoscClBjfTl6yrqJFO9NFqtKImqdG3425GZe
	WnYZpvQ0qQvKEjXbuGtsf7/yokhYnklJN9sZYMC0KqBloG7uh7K3o8omlqa4tnOvmqpa2U9+nXV
	cFc8=
X-Google-Smtp-Source: AGHT+IGGSfkS678KPq++R8Uvx6ts2+CcsegeQJNkfwyPpnpjFUuKnNCW8ZTijz0urPRNrSLJy6Ivmg==
X-Received: by 2002:a05:600c:3d93:b0:41c:290e:7e6b with SMTP id 5b1f17b1804b1-421649f4a2amr17692325e9.13.1717754699717;
        Fri, 07 Jun 2024 03:04:59 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH for-4.19] x86/pvh: declare PVH dom0 supported with caveats
Date: Fri,  7 Jun 2024 12:03:20 +0200
Message-ID: <20240607100320.11723-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

PVH dom0 is functionally very similar to PVH domU except for the domain
builder and the added set of hypercalls available to it.

The main concern with declaring it "Supported" is the lack of some features
when compared to classic PV dom0, hence switch its status to supported with
caveats.  List the known missing features; there might be more features missing
or not working as expected beyond the ones listed.

Note there's some (limited) PVH dom0 testing on both osstest and gitlab.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Hopefully this will attract more testing and resources to PVH dom0, in order
to finish the missing features.
---
 CHANGELOG.md |  1 +
 SUPPORT.md   | 15 ++++++++++++++-
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 201478aa1c0e..1778419cae64 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - HVM PIRQs are disabled by default.
    - Reduce IOMMU setup time for hardware domain.
    - Allow HVM/PVH domains to map foreign pages.
+   - Declare PVH dom0 supported with caveats.
  - xl/libxl configures vkb=[] for HVM domains with priority over vkb_device.
  - Increase the maximum number of CPUs Xen can be built for from 4095 to
    16383.
diff --git a/SUPPORT.md b/SUPPORT.md
index d5d60c62ec11..711aacf34662 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -161,7 +161,20 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM).
 Dom0 support requires an IOMMU (Intel VT-d / AMD IOMMU).
 
     Status, domU: Supported
-    Status, dom0: Experimental
+    Status, dom0: Supported, with caveats
+
+PVH dom0 hasn't received the same test coverage as PV dom0, so it can exhibit
+unexpected behavior or issues on some hardware.
+
+At least the following features are missing on a PVH dom0:
+
+  * PCI SR-IOV and Resizable BARs.
+
+  * Native NMI forwarding (nmi=dom0 command line option).
+
+  * MCE handling.
+
+  * PCI Passthrough to any kind of domUs.
 
 ### ARM
 
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 11:18:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 11:18:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736484.1142568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFXbb-0006FI-G1; Fri, 07 Jun 2024 11:17:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736484.1142568; Fri, 07 Jun 2024 11:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFXbb-0006FB-Cp; Fri, 07 Jun 2024 11:17:59 +0000
Received: by outflank-mailman (input) for mailman id 736484;
 Fri, 07 Jun 2024 11:17:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFXba-0006F1-H7; Fri, 07 Jun 2024 11:17:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFXba-0002tt-E7; Fri, 07 Jun 2024 11:17:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFXba-0007hy-2R; Fri, 07 Jun 2024 11:17:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFXba-0004yi-23; Fri, 07 Jun 2024 11:17:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FnwrUGnqjRRpwsB99ElphbeaByCg288o1Ru2Ssg2wQw=; b=1+BHQFE8eX+nBhBsths7SOwsTY
	gZVMx54d5vwmPpHa2gogkP1uxNUV/1fjihazkhaUVOGAn4BcSBq6rRZjdGfAL/3HGjVvBRyHu0kAX
	As5GX+nZPJR7F7EYZV64+T4r8in4cLbJSOAa3NE9zAkuyzXV5tSpHCd1R4GsiSmXWGa0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186276-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186276: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=665b223d57369d8b28dcdc81352428adfe435ff4
X-Osstest-Versions-That:
    ovmf=80b59ff8320d1bd134bf689fe9c0ddf4e0473b88
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 11:17:58 +0000

flight 186276 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186276/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 665b223d57369d8b28dcdc81352428adfe435ff4
baseline version:
 ovmf                 80b59ff8320d1bd134bf689fe9c0ddf4e0473b88

Last test of basis   186273  2024-06-07 04:13:04 Z    0 days
Testing same since   186276  2024-06-07 09:11:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  HoraceX Lien <horacex.lien@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   80b59ff832..665b223d57  665b223d57369d8b28dcdc81352428adfe435ff4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 11:33:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 11:33:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736493.1142579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFXq6-0000pu-P2; Fri, 07 Jun 2024 11:32:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736493.1142579; Fri, 07 Jun 2024 11:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFXq6-0000pn-Kk; Fri, 07 Jun 2024 11:32:58 +0000
Received: by outflank-mailman (input) for mailman id 736493;
 Fri, 07 Jun 2024 11:32:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7b/=NJ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sFXq4-0000ph-Qk
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 11:32:56 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aeba0236-24c1-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 13:32:55 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id
 5b1f17b1804b1-42121d27861so19268845e9.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 04:32:55 -0700 (PDT)
Received: from [10.125.142.153] ([213.58.148.145])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c1aa3e4sm50229515e9.15.2024.06.07.04.32.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 04:32:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aeba0236-24c1-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717759974; x=1718364774; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=d7ALIXUX4D2JlQfr8lNEzQQhDWTvCPNvN9Sf2uWWe5c=;
        b=HRoktrzWzLGF75KrRY/qdu9PZXKVfv+TKSWW5AG23wHcQ6ujAKOTAKabPS2jqupujB
         FBGfb36gBWX2S2ax3Vr6+982eS7+NNGF2mCvIyXtvyKXp5f9lbRUyMNMd+EiN9TRUAt6
         00zCvwvscY7Am3XKn7ck9ZBP+ElM0Z7TczI2M=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717759974; x=1718364774;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=d7ALIXUX4D2JlQfr8lNEzQQhDWTvCPNvN9Sf2uWWe5c=;
        b=dFhL6KsxGbW42M8tqDme3EIY/hOT1Mo8Wr5zl8ZYJmpgDEhlz0BT8Xb5tXoGQCjYq4
         6aolAecxxofF4PWr0k2X+msAcAD1WwsOsq6mU1v+JiH4DOpmVH7NRhSMmFQZwkCNH7at
         OPGHfycXOPwVRfrGESeDfvayZckmBo2DPuQQfnTS2sPMVux4S92b6MNyPkrbvvesFOaO
         q7k2ofTwlr7G9ZSI+hAtECChXfnh1ScxLUx+A+/zGtOzGDPYIFk4Zpkb5jgejRGs4jph
         TkoOz54x5LwrVy+fP/PTcV5oO0vsL/+hlMVF6fsHNB/SjhfrB3F3fOPop82MS1VBIsxl
         RnXw==
X-Forwarded-Encrypted: i=1; AJvYcCVrdP0PKn4s/oFaRu23E4ZwYioxnY/e9HX40/T1MoC5Zo2VFax1s2u2HNU2I9EumCAke93PH6t1tdx5syaVsBcr9MjJYT/Upodt7hqR/e4=
X-Gm-Message-State: AOJu0Yyaho8sT0oOqNbeDSrYaVLuGU0fKBFbM1Hrnjnw/AO57C4AMz26
	1Oa6NPKXMcRxgOs3eMs82cyS2M3uY2omtWAXSW7so/TSGBkXj5wH52UijxTtVwU=
X-Google-Smtp-Source: AGHT+IEo9f1X1IhAeVA/9PIlNnzuHZAUxxiGoMdeBStP3497CUyVAxWUhgQpET2tHpEIrbgzuuvPow==
X-Received: by 2002:a05:600c:4907:b0:41b:e406:5ae6 with SMTP id 5b1f17b1804b1-42164a02f54mr19017285e9.9.1717759974596;
        Fri, 07 Jun 2024 04:32:54 -0700 (PDT)
Message-ID: <3065f97e-3836-48cd-a83c-616aebd86c45@citrix.com>
Date: Fri, 7 Jun 2024 12:32:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] x86/pvh: declare PVH dom0 supported with caveats
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Community Manager <community.manager@xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <20240607100320.11723-1-roger.pau@citrix.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20240607100320.11723-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 07/06/2024 11:03 am, Roger Pau Monne wrote:
> PVH dom0 is functionally very similar to PVH domU except for the domain
> builder and the added set of hypercalls available to it.
>
> The main concern with declaring it "Supported" is the lack of some features
> when compared to classic PV dom0, hence switch its status to supported with
> caveats.  List the known missing features; there might be more features missing
> or not working as expected beyond the ones listed.
>
> Note there's some (limited) PVH dom0 testing on both osstest and gitlab.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Hopefully this will attract more testing and resources to PVH dom0, in order
> to finish the missing features.

As agreed in the XenSummit session on the topic.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
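
As background for the status change being acked above: a PVH dom0 is selected at boot via Xen's `dom0=pvh` command-line option. A sketch of a GRUB2 menu entry follows; the file paths, kernel version, and `dom0_mem` sizing are illustrative placeholders, not values taken from this thread:

```shell
# Illustrative GRUB2 menu entry booting Xen with a PVH dom0.
# Paths, kernel version and dom0_mem value are placeholders.
menuentry 'Xen (PVH dom0)' {
    multiboot2 /boot/xen.gz dom0=pvh dom0_mem=4G,max:4G
    module2 /boot/vmlinuz-6.x root=/dev/sda1 ro console=hvc0
    module2 /boot/initrd.img-6.x
}
```

Note that the caveats listed in the patch (no `nmi=dom0` forwarding, no SR-IOV, no PCI passthrough to domUs, no MCE handling) still apply to such a setup.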


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 11:45:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 11:45:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736499.1142588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFY2H-0002fH-Pr; Fri, 07 Jun 2024 11:45:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736499.1142588; Fri, 07 Jun 2024 11:45:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFY2H-0002fA-Mx; Fri, 07 Jun 2024 11:45:33 +0000
Received: by outflank-mailman (input) for mailman id 736499;
 Fri, 07 Jun 2024 11:45:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g52z=NJ=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sFY2G-0002f4-Tv
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 11:45:32 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 71c0aab2-24c3-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 13:45:32 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a68c2915d99so230975266b.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 04:45:31 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6c8072ac67sm234593666b.209.2024.06.07.04.45.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 07 Jun 2024 04:45:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71c0aab2-24c3-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717760731; x=1718365531; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=L2QMWXb+K0jMpYEYjzTtIRH3nM9D+A3J4L3icldjYIA=;
        b=M01Udk16Mqc1R71mLQbAfKMfpsdyvJUZrOdmYKSGQRYPldW5Hm8y3jIDXiFqHAdxIK
         HBF10aacmEjU0ZMVFEDQxwvM1yQSDKEHebpanNoM62L0f3rkSomu+C5ANmyq12oFjiDl
         qdB3eWDo+yILL1aRrYiNj06DNgqF9/oSmbLEtzSMkxqMo9unUPfUCC6hpwWgw8tExWME
         Y9xx7VaINe4txFwajU2ssaTp+H9AJarGakSEwdoXy2bBmTythFczBvnKlY8LnQpymzrO
         mUOVNJnJiNI6KIjtipbGCP+mus9+xUS6j3/Ol+cfFiH4WO8ZVoeLgCJhVNyBanUThI9e
         bptg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717760731; x=1718365531;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=L2QMWXb+K0jMpYEYjzTtIRH3nM9D+A3J4L3icldjYIA=;
        b=le7xue5CVD6wv3GqZbCEXh5W2PmThhU+ik11s7xqyWTKN33EBdOm5TOKoZHY8e2Q/h
         VNvjL9Er079/2rvU4ziDJdmDL4bYhbITQ/xdqO/2vGzjL5K1ddumQurTUm41/Tf38mIb
         wJasQmsvS/IF4Q2vRe3ySeiQrLwQv5mBc4uUQbx2R7f9l77lcz+wMjfiMxpQp1bI2+13
         Gg5fhrrFhlxavxTjge/7C7JUqrTXR8y2xRHbI9X8z6+dT+w2dSr3+ERX7iqV0mOkmHtv
         0CgCDV3om/EOzTPxH2BPW5z8p56kDUg5f76U/OLE9vLPV2rhL85PMQFXw/5QDs/iX201
         5BXw==
X-Gm-Message-State: AOJu0Yw/Il7CpEHnxQQidgGsYwytJfsvzfz1E4fTEsktxFvyKTPhqPTh
	TYfsDC5pbwOLQmBxzLHE1HD3shDMCZpPGWKwj2FupFfqs8LbK/vuj3FbVg==
X-Google-Smtp-Source: AGHT+IFUHcy28JQ2gC5K+m+0Vhhu8lBiYREoIM4MLQ4xnIs2WUtLV04QowUYNtDpq0Pdov7p+/pYQg==
X-Received: by 2002:a17:906:278d:b0:a63:4e95:5639 with SMTP id a640c23a62f3a-a6cdb2f93bemr141921766b.47.1717760731029;
        Fri, 07 Jun 2024 04:45:31 -0700 (PDT)
Message-ID: <09ce709d1a3bc918d91942310ef6958f93138cf3.camel@gmail.com>
Subject: Re: [for-4.19] x86 emulator session outcome
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Date: Fri, 07 Jun 2024 13:45:30 +0200
In-Reply-To: <bd6bd37c-3fb5-4353-a760-5c4465bf7582@suse.com>
References: <bd6bd37c-3fb5-4353-a760-5c4465bf7582@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Thu, 2024-06-06 at 18:00 +0200, Jan Beulich wrote:
> Oleksii,
> 
> a decision of the session just finished was to deprecate support
> for XeonPhi in 4.19, with the firm plan to remove support in 4.20.
> This will want putting in the release notes.

Thanks for notifying me.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 11:56:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 11:56:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736504.1142597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFYCd-0004Uy-N5; Fri, 07 Jun 2024 11:56:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736504.1142597; Fri, 07 Jun 2024 11:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFYCd-0004Ur-Ka; Fri, 07 Jun 2024 11:56:15 +0000
Received: by outflank-mailman (input) for mailman id 736504;
 Fri, 07 Jun 2024 11:56:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYCc-0004Uh-9V; Fri, 07 Jun 2024 11:56:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYCc-0003Xf-4p; Fri, 07 Jun 2024 11:56:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYCb-0000HH-QW; Fri, 07 Jun 2024 11:56:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYCb-0000dS-Q7; Fri, 07 Jun 2024 11:56:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=W0wzthCptylYP9/CCM32+J+DWX9x8s+NB/aD8WLun2I=; b=T6iRm9HvKXlbSOh8GVr0m833qa
	bHiiMnQ3cxAKJ59hSwzSlzOsT3/oKtXuJMqbo+4ZyMDihHH6hs+gn6QoVApWuPgg2Qy59xSeqGlVb
	AZz3FiVFOOGqCMZ/UmRX7Xs8TDZoUQQai3uhOHL9zzQ/cEP60Sxis84eqb8GI+0cJaPg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186270-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186270: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
X-Osstest-Versions-That:
    linux=d30d0e49da71de8df10bf3ff1b3de880653af562
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 11:56:13 +0000

flight 186270 linux-linus real [real]
flight 186275 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186270/
http://logs.test-lab.xenproject.org/osstest/logs/186275/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-qcow2     8 xen-boot                 fail REGR. vs. 186268

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   8 xen-boot            fail pass in 186275-retest
 test-armhf-armhf-xl-credit1   8 xen-boot            fail pass in 186275-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186268
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186275 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186275 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186275 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186275 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186268
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a92980606e3585d72d510a03b59906e96755b8a
baseline version:
 linux                d30d0e49da71de8df10bf3ff1b3de880653af562

Last test of basis   186268  2024-06-06 17:10:36 Z    0 days
Testing same since   186270  2024-06-07 01:11:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bjorn Helgaas <bhelgaas@google.com>
  Chanwoo Lee <cw9316.lee@samsung.com>
  Dan Williams <dan.j.williams@intel.com>
  Deming Wang <wangdeming@inspur.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Justin Stitt <justinstitt@google.com>
  Kalle Valo <kvalo@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Wilck <martin.wilck@suse.com>
  Nathan Chancellor <nathan@kernel.org>
  Nilesh Javali <njavali@marvell.com>
  Peter Schneider <pschneider1968@googlemail.com>
  Saurav Kashyap <skashyap@marvell.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 309 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 12:07:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 12:07:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736517.1142608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFYNk-0006Sw-Rg; Fri, 07 Jun 2024 12:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736517.1142608; Fri, 07 Jun 2024 12:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFYNk-0006Sp-Or; Fri, 07 Jun 2024 12:07:44 +0000
Received: by outflank-mailman (input) for mailman id 736517;
 Fri, 07 Jun 2024 12:07:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYNj-0006Sf-Bp; Fri, 07 Jun 2024 12:07:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYNj-0003ke-64; Fri, 07 Jun 2024 12:07:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYNi-0000Yl-R2; Fri, 07 Jun 2024 12:07:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFYNi-0000Zl-Qa; Fri, 07 Jun 2024 12:07:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LvKhrJtMEuPg6GUcwWspaOBLbVSCTOmNrTOU3gJvf/4=; b=1KW1ay/n7gI+XADfb7J5vue94C
	jXeF9OcRfTdBEW7E306KBk1r6RWWBuVkz86THeP63QqRgu2PQAAEr3sFlEzrg0dxKh1HjWg+fGf8w
	0s1F2ZaVeU2X43VpGU25oIYkpH1OGJTbrqTkNe6GdajhoqDsIyZ2plJLsBwvCvTbf6cc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186272-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186272: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl:host-ping-check-xen:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 12:07:42 +0000

flight 186272 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186272/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl       10 host-ping-check-xen fail in 186252 pass in 186241
 test-armhf-armhf-xl           8 xen-boot                   fail pass in 186252

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186241 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186241 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186261
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186261
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186261
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186261
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186261
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186261
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186272  2024-06-07 01:51:55 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 12:36:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 12:36:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736526.1142618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFYpD-0002Rs-Rr; Fri, 07 Jun 2024 12:36:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736526.1142618; Fri, 07 Jun 2024 12:36:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFYpD-0002Rl-NA; Fri, 07 Jun 2024 12:36:07 +0000
Received: by outflank-mailman (input) for mailman id 736526;
 Fri, 07 Jun 2024 12:36:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7b/=NJ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sFYpB-0002RJ-W8
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 12:36:06 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7dd65613-24ca-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 14:35:58 +0200 (CEST)
Received: by mail-lf1-x12f.google.com with SMTP id
 2adb3069b0e04-5295eb47b48so2606734e87.1
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 05:35:58 -0700 (PDT)
Received: from [10.125.142.153] ([195.23.45.50])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35ef5d297b3sm3955459f8f.11.2024.06.07.05.35.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 05:35:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dd65613-24ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717763758; x=1718368558; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=qyFVA/YJxpXuaxVqjWyhVVSmoMFJCUIPYQc7kLaBNYc=;
        b=Yf5NehunkjXQPiAC0EGDMQWlaYevzMudPUHZdocV+SJCbII75Bjt4//JMXUEfj9wuI
         0IWrI4/GFLO2UPh8I2rzoFvq/KWIIpEZj0kk519mSsqSQQkZQfiBYFWkiGRPEOcLQ+hb
         nhF7fBaKa1aTNVGZ0IWvJVynxlkPqrjZSHYMU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717763758; x=1718368558;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qyFVA/YJxpXuaxVqjWyhVVSmoMFJCUIPYQc7kLaBNYc=;
        b=s0AHnY/q6ydzsHuAmFMK9r3ijqv+k1WFTQ22VuxT6xKQazZwe0rWxXLqXBo4zz/R7R
         xILm2AoPYUIgoQWi8SIbq6fngwmpgd2Y6FRMsyrn3q1e8JMgQyQNiuA7uUluhy4yyVTW
         210r8wnDLsB+lHnfMMcnAcj0JHuIvEROTkStR04VK9wOkF4yIiSgBAdJSzY4DmLr3Vo8
         MXk3p/r3gKB0lwW9lyAbzbVvlIa4EVwOEGVXXvi6QVxtP+wT3E7pIniOl3gQ9yBG5DOP
         W8B2/+szIfHVsUK1qb7wdtBhWzgWBydib3er/ZAy5phUooqE6XiueJLNWMmgcGbzQG2V
         Vx/A==
X-Forwarded-Encrypted: i=1; AJvYcCUhixif5lLRkfoRVmjnRt6R+b5FfQEtIcWGev56dkMjH62sf8mz3QNNnbfVeUMU2CsiLX7TYQk6Yu5DYH7i6ZQmbybpmcUT/EfUlUB8Z8U=
X-Gm-Message-State: AOJu0YydHrCgcV3uv2ncgLlhfpSxqq974f/jkvDSlz9flrl/52VbvgEx
	n8KZI7cJ1FmDBEoq9hFhRxkaRDQvjkTkexXB0dV/JcmQshyz//pxt4rNo1fhfUU=
X-Google-Smtp-Source: AGHT+IHjTbSMmt+YrVEJqQHm1gjLG8K/75BBhPpOkGnkohWMtWDh20IR41tGNmQR7HAHcyP4rnTWcQ==
X-Received: by 2002:a05:6512:202f:b0:52b:7c3a:423e with SMTP id 2adb3069b0e04-52bb9f7c802mr1360734e87.20.1717763757849;
        Fri, 07 Jun 2024 05:35:57 -0700 (PDT)
Message-ID: <1745d84b-59b7-4f90-a0a8-5d459b83b0bc@citrix.com>
Date: Fri, 7 Jun 2024 13:35:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: vcpumask_to_pcpumask() case study
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <3bb4e3fa-376b-4641-824d-61864b4e1e8e@citrix.com>
 <c5951643-5172-4aa1-9833-1a7a0eebb540@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <c5951643-5172-4aa1-9833-1a7a0eebb540@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 03/06/2024 10:19 pm, Jan Beulich wrote:
> On 01.06.2024 20:50, Andrew Cooper wrote:
>> One of the followon items I had from the bitops clean-up is this:
>>
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index 648d6dd475ba..9c3a017606ed 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -3425,7 +3425,7 @@ static int vcpumask_to_pcpumask(
>>              unsigned int cpu;
>>  
>>              vcpu_id = ffsl(vmask) - 1;
>> -            vmask &= ~(1UL << vcpu_id);
>> +            vmask &= vmask - 1;
>>              vcpu_id += vcpu_bias;
>>              if ( (vcpu_id >= d->max_vcpus) )
>>                  return 0;
>>
>> which yields the following improvement:
>>
>>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-34 (-34)
>>   Function                                     old     new   delta
>>   vcpumask_to_pcpumask                         519     485     -34
> Nice. At the risk of getting flamed again for raising dumb questions:
> Considering that elsewhere "trickery" like the &= mask - 1 here were
> deemed not nice to have (at least wanting to be hidden behind a
> suitably named macro; see e.g. ISOLATE_LSB()), wouldn't __clear_bit()
> be suitable here too, and less at risk of being considered "trickery"?

__clear_bit() is even worse, because it forces the bitmap to be spilled
to memory.  It hopefully won't once I've given the test/set helpers the
same treatment as ffs/fls.

I'm not really a fan of the bithack.  Like the others, it's completely
unintuitive unless you know it.

However, the improvements speak for themselves and in this one case, the
best chance of people recognising it is in full; hiding any part behind
a macro (of any name) obscures things further.

> But yes, that would eliminate the benefit of making the bit clearing
> independent of the ffsl() result. And personally I'm fine anyway with
> the form as suggested.
>
>> While I (the programmer) can reason the two expressions are equivalent,
>> the compiler can't,
> Why is it you think it can't? There's no further knowledge that you
> as a human need to rely on for this, afaics. If ffsl() uses the
> built-in (as it now does), the compiler has full insight into what's
> going on. It's just that compiler engineers may not deem it worth the
> effort to carry out such a special-case optimization.

On x86, it's a block of asm, not the builtin.

But even with the builtin, the fact that the compiler transforms a
common expression into efficient assembly doesn't mean it performs any
semantic reasoning about the result.

https://godbolt.org/z/eah1374a1

I can't find any way to get any compiler to reason about the bit in
order to avoid the shift (or rotate, for Clang).

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 13:06:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 13:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736533.1142627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFZIR-0006vr-3V; Fri, 07 Jun 2024 13:06:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736533.1142627; Fri, 07 Jun 2024 13:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFZIR-0006vk-0G; Fri, 07 Jun 2024 13:06:19 +0000
Received: by outflank-mailman (input) for mailman id 736533;
 Fri, 07 Jun 2024 13:06:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFZIP-0006va-TO; Fri, 07 Jun 2024 13:06:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFZIP-0004nz-Np; Fri, 07 Jun 2024 13:06:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFZIP-00021H-EK; Fri, 07 Jun 2024 13:06:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFZIP-0008VA-Do; Fri, 07 Jun 2024 13:06:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XL99sTPbaHp1MaAFrhmZOMI7Gj5nX48dSUDR4n5DjqI=; b=M0zOO3e4xxpcqo2P/Yc2r4JKZQ
	kG6bhmFdxygt+TkHYOScm0uXCiDlzp2jeIcUiGXAzTJcBjieCUuHD0mEpl1mOhdX6EiZkJ8v7Uqox
	MgGX17QKD8Qdudb2r7IYC/IMW0HnNA1J7itNMgznta8FnglEzT+em3Z+K33YfxcXCJvI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186277-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186277: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8c826be35c736f4b718c8f927853aa957a1973d8
X-Osstest-Versions-That:
    ovmf=665b223d57369d8b28dcdc81352428adfe435ff4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 13:06:17 +0000

flight 186277 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186277/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8c826be35c736f4b718c8f927853aa957a1973d8
baseline version:
 ovmf                 665b223d57369d8b28dcdc81352428adfe435ff4

Last test of basis   186276  2024-06-07 09:11:05 Z    0 days
Testing same since   186277  2024-06-07 11:42:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   665b223d57..8c826be35c  8c826be35c736f4b718c8f927853aa957a1973d8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 13:21:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 13:21:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736543.1142637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFZXJ-0001NN-Gj; Fri, 07 Jun 2024 13:21:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736543.1142637; Fri, 07 Jun 2024 13:21:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFZXJ-0001NG-E8; Fri, 07 Jun 2024 13:21:41 +0000
Received: by outflank-mailman (input) for mailman id 736543;
 Fri, 07 Jun 2024 13:21:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vKbl=NJ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sFZXI-0001NA-8B
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 13:21:40 +0000
Received: from mail-wm1-x332.google.com (mail-wm1-x332.google.com
 [2a00:1450:4864:20::332])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df239a99-24d0-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 15:21:39 +0200 (CEST)
Received: by mail-wm1-x332.google.com with SMTP id
 5b1f17b1804b1-42163fa630aso10637925e9.3
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 06:21:39 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c19e719sm53730915e9.3.2024.06.07.06.21.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 07 Jun 2024 06:21:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df239a99-24d0-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717766498; x=1718371298; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=Cdt4iwBEHchdqlJtYkU1ouTy8CVAw8S3ea4y8s3IURA=;
        b=JzazxFXjKaMvw1V1TsG9mXiEl+9zi8pjOLAOJoNI1Gv+mlJsWjwjjl4plOZUHGurjN
         2bl2wQIVnKZZwAhAuu3ovNNjJ2g4nIkPi6pwUQ1cjsUBW/6JspFs3EMAHuUbJ+XBnPKh
         6q56WqlP7yoxH9WN+F+6TIDMaia5fTKrTAZVE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717766498; x=1718371298;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Cdt4iwBEHchdqlJtYkU1ouTy8CVAw8S3ea4y8s3IURA=;
        b=HbaRl1kR7ZYOyyLoSUi/QFSv/whwHWeypupwgEuhfzjth5FnDG4YKg3qc9aZflnkx/
         4kYhv7RSPsl9g0q7097/3Jgix1EKSihocpEW3jmFu/REphr1IRufjVAc6nNtWw6MYw3V
         7bbKaUPD79yiUaCccHjcIHhW7XX64MXqUlsoFqOUoepPLstpWFIY6+qUaro4R8COnYUG
         Yjt+ksSKVvyzHRzeFUmO/KM3dy+7dbmXArRmc9ve/N+hxbqV4TzDMT+hmtPZA7SucmQo
         U1XPFosW4FJhqOsb0GM11j9a6omuQU2jdUDqGZDjZ1B6LFCMX1tIm4O5V5AGDJAyOe9L
         cf3w==
X-Gm-Message-State: AOJu0Ywbo/+L6612UsoRZsP+2EABftQZaHltjeVrfRbFFZqo+IrtTP3o
	Xy64toAdjrfQK545ZlSWo6lUMTbOe0enWNruQfsNcZVxU1Um/G6743+KeBkc6N0qHwpSfwAIIvq
	jvBY=
X-Google-Smtp-Source: AGHT+IGx/oYqPNx6QbddJP2SeSr+K7cPQUspVWa6dIrqoJ2X0/u/2YxRxxGJr9ssMPXV231yStca2A==
X-Received: by 2002:a05:600c:4ec7:b0:421:73e6:faee with SMTP id 5b1f17b1804b1-42173e6ff15mr8170035e9.34.1717766497993;
        Fri, 07 Jun 2024 06:21:37 -0700 (PDT)
Date: Fri, 7 Jun 2024 15:21:36 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-4.19] x86/pvh: declare PVH dom0 supported with caveats
Message-ID: <ZmMJYLhXVmLRKyz_@macbook>
References: <20240607100320.11723-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240607100320.11723-1-roger.pau@citrix.com>

On Fri, Jun 07, 2024 at 12:03:20PM +0200, Roger Pau Monne wrote:
> PVH dom0 is functionally very similar to PVH domU except for the domain
> builder and the added set of hypercalls available to it.
> 
> The main concern with declaring it "Supported" is the lack of some features
> when compared to classic PV dom0, hence switch its status to supported with
> caveats.  List the known missing features; there may be more features missing
> or not working as expected beyond the ones listed.
> 
> Note there's some (limited) PVH dom0 testing on both osstest and gitlab.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Hopefully this will attract more testing and resources to PVH dom0 in order
> to try to finish the missing features.
> ---
>  CHANGELOG.md |  1 +
>  SUPPORT.md   | 15 ++++++++++++++-

Bah, forgot to remove the boot warning message, will send v2.

Sorry, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 14:34:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 14:34:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736551.1142648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFafm-0001dx-HU; Fri, 07 Jun 2024 14:34:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736551.1142648; Fri, 07 Jun 2024 14:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFafm-0001dq-Ei; Fri, 07 Jun 2024 14:34:30 +0000
Received: by outflank-mailman (input) for mailman id 736551;
 Fri, 07 Jun 2024 14:34:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ppZm=NJ=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sFafl-0001dk-JK
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 14:34:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0ad8c6c6-24db-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 16:34:27 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 9CC5F4EE073E;
 Fri,  7 Jun 2024 16:34:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ad8c6c6-24db-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Fri, 07 Jun 2024 16:34:26 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 3/5] x86: deviate violation of MISRA C Rule 20.12
In-Reply-To: <02262bd1-4d2f-413f-bc03-58c7181be216@suse.com>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com>
 <475daa82f5be77644b1f32ecd3f6e66ccd9ac904.1717236930.git.nicola.vetrini@bugseng.com>
 <02262bd1-4d2f-413f-bc03-58c7181be216@suse.com>
Message-ID: <317aa87b389a8eb7de86853c7fc45829@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-04 08:08, Jan Beulich wrote:
> On 01.06.2024 12:16, Nicola Vetrini wrote:
>> --- a/xen/arch/x86/include/asm/shared.h
>> +++ b/xen/arch/x86/include/asm/shared.h
>> @@ -76,6 +76,7 @@ static inline void arch_set_##field(struct vcpu *v,  
>>        \
>> 
>>  GET_SET_SHARED(unsigned long, max_pfn)
>>  GET_SET_SHARED(xen_pfn_t, pfn_to_mfn_frame_list_list)
>> +/* SAF-6-safe Rule 20.12: expansion of macro nmi_reason */
>>  GET_SET_SHARED(unsigned long, nmi_reason)
> 
> Before we go this route, were alternatives at least considered? Plus
> didn't we special-case function-like macros already, when used in
> situations where only object-like macros would be expanded anyway?
> 

It may be that this is already deviated, in which case the patch can be 
dropped: I'll recheck.

In that case, thanks for pointing this out.

> As to alternatives: nmi_reason() is used in exactly one place.
> Dropping the #define and expanding the one use instead would be an
> option. I further wonder whether moving the #define-s past the
> piece of code you actually modify would also be an option (i.e. the
> tool then no longer complaining).
> 
> Jan

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 15:07:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 15:07:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736559.1142657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFbBe-0005V0-W6; Fri, 07 Jun 2024 15:07:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736559.1142657; Fri, 07 Jun 2024 15:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFbBe-0005Ut-TF; Fri, 07 Jun 2024 15:07:26 +0000
Received: by outflank-mailman (input) for mailman id 736559;
 Fri, 07 Jun 2024 15:07:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbBd-0005Uj-Ih; Fri, 07 Jun 2024 15:07:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbBd-00073M-Ff; Fri, 07 Jun 2024 15:07:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbBd-0007zW-7S; Fri, 07 Jun 2024 15:07:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbBd-0001q5-6Z; Fri, 07 Jun 2024 15:07:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dgoCTF0hUTEfSPsihG99iirP/WCnGjFRarQEdTFlGBI=; b=oP71ZFWUgXe+kivRHnhXhUYkBx
	qGC34o3XiCyJnoaMd53db7DmNZDZnPWmIsl5lN30/1/1o+7ioq87Er+XxVNr6EKzQt+Dv0l371opC
	MqYT2QtBQ3UbQJe5Gcxf7pkOBp59PYorMZzeWbLzQWQdgpsYmx0os+PhJD/dwqmMNeD8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186274: tolerable FAIL - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=f8ec3f9c2f8ddb3ea4ae89f1849897ef23633d83
X-Osstest-Versions-That:
    libvirt=9d0c8618db599c407d47a8a6af881708608cdcd9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 15:07:25 +0000

flight 186274 libvirt real [real]
flight 186279 libvirt real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186274/
http://logs.test-lab.xenproject.org/osstest/logs/186279/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail pass in 186279-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186279 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186279 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186263
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              f8ec3f9c2f8ddb3ea4ae89f1849897ef23633d83
baseline version:
 libvirt              9d0c8618db599c407d47a8a6af881708608cdcd9

Last test of basis   186263  2024-06-06 04:20:34 Z    1 days
Testing same since   186274  2024-06-07 04:18:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Michal Privoznik <mprivozn@redhat.com>
  김인수 <simmon@nplob.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   9d0c8618db..f8ec3f9c2f  f8ec3f9c2f8ddb3ea4ae89f1849897ef23633d83 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 15:34:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 15:34:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736567.1142667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFbbx-0000kI-3J; Fri, 07 Jun 2024 15:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736567.1142667; Fri, 07 Jun 2024 15:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFbbx-0000kB-0n; Fri, 07 Jun 2024 15:34:37 +0000
Received: by outflank-mailman (input) for mailman id 736567;
 Fri, 07 Jun 2024 15:34:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbbv-0000jz-IC; Fri, 07 Jun 2024 15:34:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbbv-0007Vr-G0; Fri, 07 Jun 2024 15:34:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbbv-0000Ua-3n; Fri, 07 Jun 2024 15:34:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFbbv-0000pQ-3F; Fri, 07 Jun 2024 15:34:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x3KbdluITuhFmCZk4cBzmY0POdm8Dc0mhwytOqHk3zA=; b=bUUt/E3w9XESkDREfYdoT6orJi
	bwGwhiKqvv3yO9I8/sBgMoz1AG23uqNE7kcNV7YJPS8Z5YxzFvsHuBiuG+g86cgY+sICW7uKfFd2N
	EaZpp04UJrbtqTe1Rp7BNrDy8/cgUgHrd39gbpzjrTq8MrOxt1oEmXVM2RE6Ut8Lzs5Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186280-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186280: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=948f23417010309a5557d46195eae258f6105025
X-Osstest-Versions-That:
    ovmf=8c826be35c736f4b718c8f927853aa957a1973d8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 15:34:35 +0000

flight 186280 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186280/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 948f23417010309a5557d46195eae258f6105025
baseline version:
 ovmf                 8c826be35c736f4b718c8f927853aa957a1973d8

Last test of basis   186277  2024-06-07 11:42:56 Z    0 days
Testing same since   186280  2024-06-07 13:43:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sebastian Witt <sebastian.witt@siemens.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8c826be35c..948f234170  948f23417010309a5557d46195eae258f6105025 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 17:44:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 17:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736579.1142677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFddO-00072u-RS; Fri, 07 Jun 2024 17:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736579.1142677; Fri, 07 Jun 2024 17:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFddO-00072n-Oa; Fri, 07 Jun 2024 17:44:14 +0000
Received: by outflank-mailman (input) for mailman id 736579;
 Fri, 07 Jun 2024 17:44:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFddN-00072d-A3; Fri, 07 Jun 2024 17:44:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFddN-0001yp-6p; Fri, 07 Jun 2024 17:44:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFddN-00044Z-0K; Fri, 07 Jun 2024 17:44:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFddM-0001or-W9; Fri, 07 Jun 2024 17:44:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SRg/UGPEf7mpO1QfsthQRhJCD9xmn7O0S23mMoZvZe8=; b=USP81n2ue7TlmodlUT1NXE1qAF
	l4wP9SqfhVBEGGXIHWmbqZmZJnOXwcWh+sfXkH1EdmSAxTVNM7mhUgXLVBam+93eHRXGG583gEof1
	nX8qfXKrUKWgfjfe2GseVGg2pNFHJK+2BoUqk66y4UPaS9uEpQzOQx7NTpqsCnsHR7wc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186281-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186281: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c36414b131dfd0a1ca51f10f87a18955bc110ff2
X-Osstest-Versions-That:
    ovmf=948f23417010309a5557d46195eae258f6105025
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 17:44:12 +0000

flight 186281 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186281/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c36414b131dfd0a1ca51f10f87a18955bc110ff2
baseline version:
 ovmf                 948f23417010309a5557d46195eae258f6105025

Last test of basis   186280  2024-06-07 13:43:06 Z    0 days
Testing same since   186281  2024-06-07 15:44:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nhi Pham <nhi@os.amperecomputing.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   948f234170..c36414b131  c36414b131dfd0a1ca51f10f87a18955bc110ff2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 18:00:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 18:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736588.1142688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFdsv-0001Hv-6y; Fri, 07 Jun 2024 18:00:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736588.1142688; Fri, 07 Jun 2024 18:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFdsv-0001Ho-3n; Fri, 07 Jun 2024 18:00:17 +0000
Received: by outflank-mailman (input) for mailman id 736588;
 Fri, 07 Jun 2024 18:00:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFdst-0001He-O8; Fri, 07 Jun 2024 18:00:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFdst-0002Kb-Ki; Fri, 07 Jun 2024 18:00:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFdst-0004R8-87; Fri, 07 Jun 2024 18:00:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFdst-0006ea-7m; Fri, 07 Jun 2024 18:00:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3QxXFuFY/DLZaSPt5Z86DJTNIz6ZNqZZpzv+q/WiKIM=; b=SQ+REXH/zw06m1KBPjOWK3r7Gl
	LGNhgG0URyvRdLwiM5Xd8dkWegdSn277gDESTX4kqEDyv1aUbdGUToEUMZTy1Yhq53p8USPxpK6xo
	xA6+MJ4SwAFA2IOexiYS6migDSwi/bgBkRVeA6EDQa/63LGkx8FkmJtEqaUvD2o4sq8I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186278-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186278: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
X-Osstest-Versions-That:
    linux=d30d0e49da71de8df10bf3ff1b3de880653af562
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 18:00:15 +0000

flight 186278 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186278/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-qcow2     8 xen-boot                 fail REGR. vs. 186268

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1   8 xen-boot         fail in 186270 pass in 186278
 test-armhf-armhf-xl-credit2   8 xen-boot         fail in 186270 pass in 186278
 test-armhf-armhf-xl-raw       8 xen-boot                   fail pass in 186270

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186268
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186270 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186270 never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186270 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186270 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186268
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186268
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186268
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a92980606e3585d72d510a03b59906e96755b8a
baseline version:
 linux                d30d0e49da71de8df10bf3ff1b3de880653af562

Last test of basis   186268  2024-06-06 17:10:36 Z    1 days
Testing same since   186270  2024-06-07 01:11:58 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bjorn Helgaas <bhelgaas@google.com>
  Chanwoo Lee <cw9316.lee@samsung.com>
  Dan Williams <dan.j.williams@intel.com>
  Deming Wang <wangdeming@inspur.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Justin Stitt <justinstitt@google.com>
  Kalle Valo <kvalo@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Wilck <martin.wilck@suse.com>
  Nathan Chancellor <nathan@kernel.org>
  Nilesh Javali <njavali@marvell.com>
  Peter Schneider <pschneider1968@googlemail.com>
  Saurav Kashyap <skashyap@marvell.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 309 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 18:41:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 18:41:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736599.1142697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFeWM-0005x2-AU; Fri, 07 Jun 2024 18:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736599.1142697; Fri, 07 Jun 2024 18:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFeWM-0005wv-7j; Fri, 07 Jun 2024 18:41:02 +0000
Received: by outflank-mailman (input) for mailman id 736599;
 Fri, 07 Jun 2024 18:41:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NiIp=NJ=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sFeWL-0005wo-8H
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 18:41:01 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ae2715c-24fd-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 20:40:59 +0200 (CEST)
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-683-7sORR8OLOdGNQtveIeT-dQ-1; Fri, 07 Jun 2024 14:40:56 -0400
Received: by mail-wr1-f69.google.com with SMTP id
 ffacd0b85a97d-35e0f445846so1643870f8f.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 11:40:55 -0700 (PDT)
Received: from ?IPV6:2003:cb:c71a:2200:31c4:4d18:1bdd:fb7a?
 (p200300cbc71a220031c44d181bddfb7a.dip0.t-ipconnect.de.
 [2003:cb:c71a:2200:31c4:4d18:1bdd:fb7a])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42158148f43sm93769755e9.33.2024.06.07.11.40.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 11:40:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ae2715c-24fd-11ef-90a2-e314d9c70b13
X-MC-Unique: 7sORR8OLOdGNQtveIeT-dQ-1
X-Received: by 2002:a05:600c:3b22:b0:421:1f68:f80c with SMTP id 5b1f17b1804b1-42164a3274cmr33024155e9.25.1717785654966;
        Fri, 07 Jun 2024 11:40:54 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IHjxAAobdln9o8Z+ZSwK+8DP+joAlNWvnty2OQv0sD2CWIRw1H0eU3Abu8j4Jf1XLd5TYqa1A==
X-Received: by 2002:a05:600c:3b22:b0:421:1f68:f80c with SMTP id 5b1f17b1804b1-42164a3274cmr33023775e9.25.1717785654411;
        Fri, 07 Jun 2024 11:40:54 -0700 (PDT)
Message-ID: <b72e6efd-fb0a-459c-b1a0-88a98e5b19e2@redhat.com>
Date: Fri, 7 Jun 2024 20:40:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
To: linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 kasan-dev@googlegroups.com, Andrew Morton <akpm@linux-foundation.org>,
 Mike Rapoport <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
In-Reply-To: <20240607090939.89524-2-david@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 07.06.24 11:09, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences of memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>   mm/internal.h       |  3 ++-
>   mm/kmsan/init.c     |  2 +-
>   mm/memory_hotplug.c |  9 +--------
>   mm/mm_init.c        |  4 ++--
>   mm/page_alloc.c     | 17 +++++++++++++++--
>   5 files changed, 21 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 12e95fdf61e90..3fdee779205ab 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
>   				    int mt);
>   extern void memblock_free_pages(struct page *page, unsigned long pfn,
>   					unsigned int order);
> -extern void __free_pages_core(struct page *page, unsigned int order);
> +extern void __free_pages_core(struct page *page, unsigned int order,
> +		enum meminit_context);
>   
>   /*
>    * This will have no effect, other than possibly generating a warning, if the
> diff --git a/mm/kmsan/init.c b/mm/kmsan/init.c
> index 3ac3b8921d36f..ca79636f858e5 100644
> --- a/mm/kmsan/init.c
> +++ b/mm/kmsan/init.c
> @@ -172,7 +172,7 @@ static void do_collection(void)
>   		shadow = smallstack_pop(&collect);
>   		origin = smallstack_pop(&collect);
>   		kmsan_setup_meta(page, shadow, origin, collect.order);
> -		__free_pages_core(page, collect.order);
> +		__free_pages_core(page, collect.order, MEMINIT_EARLY);
>   	}
>   }
>   
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 171ad975c7cfd..27e3be75edcf7 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -630,14 +630,7 @@ EXPORT_SYMBOL_GPL(restore_online_page_callback);
>   
>   void generic_online_page(struct page *page, unsigned int order)
>   {
> -	/*
> -	 * Freeing the page with debug_pagealloc enabled will try to unmap it,
> -	 * so we should map it first. This is better than introducing a special
> -	 * case in page freeing fast path.
> -	 */
> -	debug_pagealloc_map_pages(page, 1 << order);
> -	__free_pages_core(page, order);
> -	totalram_pages_add(1UL << order);
> +	__free_pages_core(page, order, MEMINIT_HOTPLUG);
>   }
>   EXPORT_SYMBOL_GPL(generic_online_page);
>   
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 019193b0d8703..feb5b6e8c8875 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1938,7 +1938,7 @@ static void __init deferred_free_range(unsigned long pfn,
>   	for (i = 0; i < nr_pages; i++, page++, pfn++) {
>   		if (pageblock_aligned(pfn))
>   			set_pageblock_migratetype(page, MIGRATE_MOVABLE);
> -		__free_pages_core(page, 0);
> +		__free_pages_core(page, 0, MEMINIT_EARLY);
>   	}
>   }

The build bot just reminded me that I missed another case in this function
(under CONFIG_DEFERRED_STRUCT_PAGE_INIT):

diff --git a/mm/mm_init.c b/mm/mm_init.c
index feb5b6e8c8875..5a0752261a795 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1928,7 +1928,7 @@ static void __init deferred_free_range(unsigned long pfn,
         if (nr_pages == MAX_ORDER_NR_PAGES && IS_MAX_ORDER_ALIGNED(pfn)) {
                 for (i = 0; i < nr_pages; i += pageblock_nr_pages)
                         set_pageblock_migratetype(page + i, MIGRATE_MOVABLE);
-               __free_pages_core(page, MAX_PAGE_ORDER);
+               __free_pages_core(page, MAX_PAGE_ORDER, MEMINIT_EARLY);
                 return;
         }
  

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri Jun 07 19:47:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 19:47:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736606.1142707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfY6-00051C-Uv; Fri, 07 Jun 2024 19:46:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736606.1142707; Fri, 07 Jun 2024 19:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfY6-000515-SK; Fri, 07 Jun 2024 19:46:54 +0000
Received: by outflank-mailman (input) for mailman id 736606;
 Fri, 07 Jun 2024 19:46:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jWQ4=NJ=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sFfY4-00050z-JA
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 19:46:52 +0000
Received: from wfhigh8-smtp.messagingengine.com
 (wfhigh8-smtp.messagingengine.com [64.147.123.159])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id abfc6e96-2506-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 21:46:48 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailfhigh.west.internal (Postfix) with ESMTP id 822EA18000C7
 for <xen-devel@lists.xenproject.org>; Fri,  7 Jun 2024 15:46:44 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute1.internal (MEProxy); Fri, 07 Jun 2024 15:46:44 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA for
 <xen-devel@lists.xenproject.org>; Fri, 7 Jun 2024 15:46:43 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abfc6e96-2506-11ef-b4bb-af5377834399
Feedback-ID: i1568416f:Fastmail
Date: Fri, 7 Jun 2024 21:46:40 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Segment truncation in multi-segment PCI handling?
Message-ID: <ZmNjoeFAwWz8xhfM@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="6/witsI/qtE4GSuI"
Content-Disposition: inline


--6/witsI/qtE4GSuI
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 7 Jun 2024 21:46:40 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Segment truncation in multi-segment PCI handling?

Hi,

I've got a new system, and it has two PCI segments:

    0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
    0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
    ...
    10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
    10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
    10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)

But it looks like Xen doesn't handle it correctly:

    (XEN) 0000:e0:06.0: unknown type 0
    (XEN) 0000:e0:06.2: unknown type 0
    (XEN) 0000:e1:00.0: unknown type 0
    ...
    (XEN) ==== PCI devices ====
    (XEN) ==== segment 0000 ====
    (XEN) 0000:e1:00.0 - NULL - node -1
    (XEN) 0000:e0:06.2 - NULL - node -1
    (XEN) 0000:e0:06.0 - NULL - node -1
    (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
    (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
    ...

This isn't exactly surprising, since pci_sbdf_t.seg is uint16_t, so
0x10000 doesn't fit. The OSDev wiki says PCI Express can have 65536 PCI
Segment Groups, each with 256 bus segments.

Fortunately, I don't need this to work: if I disable VMD in the
firmware, I get a single segment and everything works fine.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--6/witsI/qtE4GSuI
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZjY6EACgkQ24/THMrX
1yzd+gf/ciPyUvmC5sgFOS5N/9kfn0L70p49IdOSD0y2B/KkWHZPvl6xHU3UIojG
fiKpjskPxvAg30Fs7lmMwS7X6NaKBPcNw1TAWvZHUS77uZF305M+pSr0QramJeAs
YWjKFzPGczzw83EjDCg8bq9ZSV9xVMnrcuBg1HPEZASyVW/wQI7UMzPkQbS1MX0k
5dNE06006EOZjI5o7EgUpGL3kW1IBb8kOLiVTlOoiZtsLq1BwB/JxkerqKREgbTd
jmFzf2GN3OQyG6uwij94CXbnoOLTSB962trUsWML4san8WWgeBudQYT7Ab9xZsfM
VqduKRzkJu/ZQ1w907dcG8+j0nPLTw==
=9mrS
-----END PGP SIGNATURE-----

--6/witsI/qtE4GSuI--


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 19:52:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 19:52:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736611.1142717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfdd-0006i0-Ix; Fri, 07 Jun 2024 19:52:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736611.1142717; Fri, 07 Jun 2024 19:52:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfdd-0006ht-G6; Fri, 07 Jun 2024 19:52:37 +0000
Received: by outflank-mailman (input) for mailman id 736611;
 Fri, 07 Jun 2024 19:52:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G7b/=NJ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sFfdc-0006hn-Gd
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 19:52:36 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c025b62-2507-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 21:52:35 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-42138eadf64so23561375e9.3
 for <xen-devel@lists.xenproject.org>; Fri, 07 Jun 2024 12:52:35 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c19e567sm61990025e9.1.2024.06.07.12.52.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 Jun 2024 12:52:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c025b62-2507-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1717789954; x=1718394754; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=OZR5Z4FZDaVyZaOFYSyVepY1mSOtms4IyKreQI/Tdt8=;
        b=aBdnN0GGrO95x3MxRC1hP4U3yd45XPB2IZ16V9RSKJm73CnmDcin13Owhwk1STt1EX
         7bjKdaAgjor9bzg4/7pNoCTCjevN0qFgbumHdF/EoB2rSgBggF2BWkds/JM4VgjQdJbb
         TLk9achZp3aongVb64kHWopkJ4mfHtNa8CzUs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717789954; x=1718394754;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=OZR5Z4FZDaVyZaOFYSyVepY1mSOtms4IyKreQI/Tdt8=;
        b=sG2GKNAlBUCS9HVgsYmO4XR61wufN4t6fwp3twsAXfeKE/1YIgJtkaTGVUgzVBrsLS
         b5+shoQBHZY8IBl5fJ82Y9RHnEdxdPVfn7Samh3//KoOtd1KxDAgRVf/ZjmJ7Uhemlpb
         bD01SMqBbaB0J3FEgHoDo7v2E9FL4AQa5+jpmQoLkiWMrXQTFII31i+G+xLGQMxcVOGu
         P7GMZwqwUky6SLS0JxmxHJNUlWdhQTXUTH0h516NSyrCM+3vF1P9ZHGTDVRH/a5qRcOJ
         2DpA8e7L/ER7sZiniTZ0KZqXYgQFEJ0xZCsAPJ/u1kA7Q7PvOOvETjwbCbScmREWZqCO
         qOTA==
X-Forwarded-Encrypted: i=1; AJvYcCX4lYlHDp3bV/DwaI8sX9KzRRMn+V+gNk6pcSa4cIeHXb3wcVWwhhWPs2rPsAoCKB7b8Dfccifr1hNxUL7+lnDI5eK7Z5NjOi3hiDaS6nQ=
X-Gm-Message-State: AOJu0YzYn+dDYCbgGPEiBtsq5MVNd7C2SLuByf6R+QI2T2lhIGoagcOV
	D1OHUffxpPnCoWR84j7rqYsiRM+hrHQnG9uTDOGWhDmMr8XxHybHudLxHjLmJjkR9k39G3QSr7m
	VTVA=
X-Google-Smtp-Source: AGHT+IEwD7O0SxodzlQfbKwVENZVJGDlzOwZspVwNUJ2KfEtadHXcc+CtnM40Z6RJkP/Wc1JggRYwA==
X-Received: by 2002:a05:600c:474d:b0:41f:e2c5:6618 with SMTP id 5b1f17b1804b1-42164a32877mr29529755e9.32.1717789954184;
        Fri, 07 Jun 2024 12:52:34 -0700 (PDT)
Message-ID: <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
Date: Fri, 7 Jun 2024 20:52:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Segment truncation in multi-segment PCI handling?
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <ZmNjoeFAwWz8xhfM@mail-itl>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <ZmNjoeFAwWz8xhfM@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 07/06/2024 8:46 pm, Marek Marczykowski-Górecki wrote:
> Hi,
>
> I've got a new system, and it has two PCI segments:
>
>     0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
>     0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
>     ...
>     10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
>     10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
>     10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
>
> But it looks like Xen doesn't handle it correctly:
>
>     (XEN) 0000:e0:06.0: unknown type 0
>     (XEN) 0000:e0:06.2: unknown type 0
>     (XEN) 0000:e1:00.0: unknown type 0
>     ...
>     (XEN) ==== PCI devices ====
>     (XEN) ==== segment 0000 ====
>     (XEN) 0000:e1:00.0 - NULL - node -1 
>     (XEN) 0000:e0:06.2 - NULL - node -1 
>     (XEN) 0000:e0:06.0 - NULL - node -1 
>     (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
>     (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
>     ...
>
> This isn't exactly surprising, since pci_sbdf_t.seg is uint16_t, so
> 0x10000 doesn't fit. OSDev wiki says PCI Express can have 65536 PCI
> Segment Groups, each with 256 bus segments.
>
> Fortunately, I don't need this to work: if I disable VMD in the
> firmware, I get a single segment and everything works fine.
>

This is a known issue.  Work is being done, albeit slowly.

0x10000 is indeed not a spec-compliant PCI segment.  It's something
model-specific that the Linux VMD driver is doing.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 20:13:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 20:13:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736622.1142750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxr-0001rH-41; Fri, 07 Jun 2024 20:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736622.1142750; Fri, 07 Jun 2024 20:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxq-0001p4-S5; Fri, 07 Jun 2024 20:13:30 +0000
Received: by outflank-mailman (input) for mailman id 736622;
 Fri, 07 Jun 2024 20:13:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ppZm=NJ=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sFfxp-0001UQ-Cq
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 20:13:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67727495-250a-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 22:13:28 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 6A9F24EE0745;
 Fri,  7 Jun 2024 22:13:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67727495-250a-11ef-90a2-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH for-4.19 v2 3/3] automation/eclair_analysis: add more clean MISRA guidelines
Date: Fri,  7 Jun 2024 22:13:18 +0200
Message-Id: <42645b41cf9d2d8b5ef72f0b171989711edb00a1.1717790683.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717790683.git.nicola.vetrini@bugseng.com>
References: <cover.1717790683.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they are added
to the list of clean guidelines.

Some guidelines listed in the additional clean section for ARM are also
clean on x86, so they can be removed from there.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/tagging.ecl | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
index a354ff322e03..b829655ca0bc 100644
--- a/automation/eclair_analysis/ECLAIR/tagging.ecl
+++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
@@ -60,6 +60,7 @@ MC3R1.R11.7||
 MC3R1.R11.9||
 MC3R1.R12.5||
 MC3R1.R14.1||
+MC3R1.R14.4||
 MC3R1.R16.7||
 MC3R1.R17.1||
 MC3R1.R17.3||
@@ -73,6 +74,7 @@ MC3R1.R20.4||
 MC3R1.R20.6||
 MC3R1.R20.9||
 MC3R1.R20.11||
+MC3R1.R20.12||
 MC3R1.R20.13||
 MC3R1.R20.14||
 MC3R1.R21.3||
@@ -105,7 +107,7 @@ if(string_equal(target,"x86_64"),
 )

 if(string_equal(target,"arm64"),
-    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
+    service_selector({"additional_clean_guidelines","MC3R1.R16.6||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.3"})
 )

 -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}
--
2.34.1


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 20:13:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 20:13:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736619.1142728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxp-0001Ud-8E; Fri, 07 Jun 2024 20:13:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736619.1142728; Fri, 07 Jun 2024 20:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxp-0001UW-5R; Fri, 07 Jun 2024 20:13:29 +0000
Received: by outflank-mailman (input) for mailman id 736619;
 Fri, 07 Jun 2024 20:13:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ppZm=NJ=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sFfxo-0001UQ-DP
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 20:13:28 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6698be71-250a-11ef-90a2-e314d9c70b13;
 Fri, 07 Jun 2024 22:13:27 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 00CB64EE0742;
 Fri,  7 Jun 2024 22:13:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6698be71-250a-11ef-90a2-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH for-4.19 v2 2/3] automation/eclair_analysis: address remaining violations of MISRA C Rule 20.12
Date: Fri,  7 Jun 2024 22:13:17 +0200
Message-Id: <4ea119f84e075ebcdfe2669527826c269a454d0e.1717790683.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717790683.git.nicola.vetrini@bugseng.com>
References: <cover.1717790683.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The DEFINE macro in asm-offsets.c (for all architectures) still generates
violations despite the file(s) being excluded from compliance, because its
expansion sometimes refers to entities in non-excluded files. These corner
cases are deviated by the configuration.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661d1..e2653f77eb2c 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
 -config=MC3R1.R20.12,macros+={deliberate, "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
 -doc_end

+-doc_begin="The macro DEFINE is defined and used in the excluded asm-offsets.c files.
+This may still cause violations if entities outside these files are referred to
+in the expansion."
+-config=MC3R1.R20.12,macros+={deliberate, "name(DEFINE)&&loc(file(asm_offsets))"}
+-doc_end
+
 #
 # Series 21.
 #
--
2.34.1


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 20:13:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 20:13:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736620.1142738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxq-0001jB-HC; Fri, 07 Jun 2024 20:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736620.1142738; Fri, 07 Jun 2024 20:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxq-0001j3-EA; Fri, 07 Jun 2024 20:13:30 +0000
Received: by outflank-mailman (input) for mailman id 736620;
 Fri, 07 Jun 2024 20:13:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ppZm=NJ=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sFfxp-0001UP-72
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 20:13:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64df7cf0-250a-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 22:13:25 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 0A1AA4EE073E;
 Fri,  7 Jun 2024 22:13:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64df7cf0-250a-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH for-4.19 v2 0/3] address remaining violations of Rule 20.12
Date: Fri,  7 Jun 2024 22:13:15 +0200
Message-Id: <cover.1717790683.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series addresses the remaining violations of Rule 20.12 and
consequently marks the rule (and others) as clean to enable gating.

Patches 1 and 2 are retained from the previous version, while patch 3
was sent standalone and is now merged into this series, as it depends
on the previous two patches.

Nicola Vetrini (3):
  x86/domain: deviate violation of MISRA C Rule 20.12
  automation/eclair_analysis: address remaining violations of MISRA C
    Rule 20.12
  automation/eclair_analysis: add more clean MISRA guidelines

 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
 automation/eclair_analysis/ECLAIR/tagging.ecl    | 4 +++-
 docs/misra/safe.json                             | 8 ++++++++
 xen/arch/x86/domain.c                            | 1 +
 xen/arch/x86/domctl.c                            | 1 +
 5 files changed, 19 insertions(+), 1 deletion(-)

--
2.34.1


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 20:13:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 20:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736621.1142743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxq-0001lf-Ol; Fri, 07 Jun 2024 20:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736621.1142743; Fri, 07 Jun 2024 20:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFfxq-0001kO-Jo; Fri, 07 Jun 2024 20:13:30 +0000
Received: by outflank-mailman (input) for mailman id 736621;
 Fri, 07 Jun 2024 20:13:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ppZm=NJ=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sFfxp-0001UP-Du
 for xen-devel@lists.xenproject.org; Fri, 07 Jun 2024 20:13:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65bb4055-250a-11ef-b4bb-af5377834399;
 Fri, 07 Jun 2024 22:13:26 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 59D804EE0743;
 Fri,  7 Jun 2024 22:13:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65bb4055-250a-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH for-4.19 v2 1/3] x86/domain: deviate violation of MISRA C Rule 20.12
Date: Fri,  7 Jun 2024 22:13:16 +0200
Message-Id: <73f582e7b42e44980a79022d2f2937a33e7b37b7.1717790683.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1717790683.git.nicola.vetrini@bugseng.com>
References: <cover.1717790683.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.12 states: "A macro parameter used as an operand to
the # or ## operators, which is itself subject to further macro replacement,
shall only be used as an operand to these operators".

In builds where CONFIG_COMPAT=y, the fpu_ctxt macro is used both as a
regular macro argument and as an operand for stringification in the
expansion of CHECK_FIELD_. This is deviated using a SAF-x-safe comment.
No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 docs/misra/safe.json  | 8 ++++++++
 xen/arch/x86/domain.c | 1 +
 xen/arch/x86/domctl.c | 1 +
 3 files changed, 10 insertions(+)

diff --git a/docs/misra/safe.json b/docs/misra/safe.json
index 9b13bcf71706..c213e0a0be3b 100644
--- a/docs/misra/safe.json
+++ b/docs/misra/safe.json
@@ -52,6 +52,14 @@
         },
         {
             "id": "SAF-6-safe",
+            "analyser": {
+                "eclair": "MC3R1.R20.12"
+            },
+            "name": "MC3R1.R20.12: use of a macro argument that deliberately violates the Rule",
+            "text": "A macro parameter that is itself a macro is intentionally used within the macro both as a regular parameter and for text replacement."
+        },
+        {
+            "id": "SAF-7-safe",
             "analyser": {},
             "name": "Sentinel",
             "text": "Next ID to be used"
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 536542841ef5..ccadfe0c9e70 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1084,6 +1084,7 @@ void arch_domain_creation_finished(struct domain *d)
 #ifdef CONFIG_COMPAT
 #define xen_vcpu_guest_context vcpu_guest_context
 #define fpu_ctxt fpu_ctxt.x
+/* SAF-6-safe Rule 20.12 expansion of macro fpu_ctxt with CONFIG_COMPAT */
 CHECK_FIELD_(struct, vcpu_guest_context, fpu_ctxt);
 #undef fpu_ctxt
 #undef xen_vcpu_guest_context
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9a72d57333e9..335aedf46d03 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1326,6 +1326,7 @@ long arch_do_domctl(
 #ifdef CONFIG_COMPAT
 #define xen_vcpu_guest_context vcpu_guest_context
 #define fpu_ctxt fpu_ctxt.x
+/* SAF-6-safe Rule 20.12 expansion of macro fpu_ctxt with CONFIG_COMPAT */
 CHECK_FIELD_(struct, vcpu_guest_context, fpu_ctxt);
 #undef fpu_ctxt
 #undef xen_vcpu_guest_context
--
2.34.1


From xen-devel-bounces@lists.xenproject.org Fri Jun 07 20:59:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 Jun 2024 20:59:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736647.1142767 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFgfi-0000LF-Ar; Fri, 07 Jun 2024 20:58:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736647.1142767; Fri, 07 Jun 2024 20:58:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFgfi-0000L8-8O; Fri, 07 Jun 2024 20:58:50 +0000
Received: by outflank-mailman (input) for mailman id 736647;
 Fri, 07 Jun 2024 20:58:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFgfh-0000Ky-0R; Fri, 07 Jun 2024 20:58:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFgfg-0005Yg-Vf; Fri, 07 Jun 2024 20:58:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFgfg-0002u6-Mi; Fri, 07 Jun 2024 20:58:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFgfg-0004wA-MC; Fri, 07 Jun 2024 20:58:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uU17lVaMVsdSGO/703rEv3cElSHdVauBJy+PUzEoF54=; b=bYy5AfnHKcGAFS5O7V7NfeVKii
	XtibnX/hYf89ghcJA7DJwlHaFpP6JO0kEOd8dXmgc9YzArnaWUGkCzoERYywfBjdr2S31gPxdPadF
	1WRxXu176xsoBYxejCG/vMPuJP4dE0PXGrSQugDys7DiOj4Zzx4Dk0C1sjooHmzZLhnM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186283-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186283: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ab069d580111ccc64d6b0c9697b7c5fd6e1507ce
X-Osstest-Versions-That:
    ovmf=c36414b131dfd0a1ca51f10f87a18955bc110ff2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 Jun 2024 20:58:48 +0000

flight 186283 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186283/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ab069d580111ccc64d6b0c9697b7c5fd6e1507ce
baseline version:
 ovmf                 c36414b131dfd0a1ca51f10f87a18955bc110ff2

Last test of basis   186281  2024-06-07 15:44:37 Z    0 days
Testing same since   186283  2024-06-07 18:13:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c36414b131..ab069d5801  ab069d580111ccc64d6b0c9697b7c5fd6e1507ce -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 08 00:33:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Jun 2024 00:33:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736662.1142778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFk0j-0008Ei-QZ; Sat, 08 Jun 2024 00:32:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736662.1142778; Sat, 08 Jun 2024 00:32:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFk0j-0008Eb-NC; Sat, 08 Jun 2024 00:32:45 +0000
Received: by outflank-mailman (input) for mailman id 736662;
 Sat, 08 Jun 2024 00:32:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFk0j-0008ER-2z; Sat, 08 Jun 2024 00:32:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFk0i-0001qG-Vj; Sat, 08 Jun 2024 00:32:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFk0i-0000W7-Jm; Sat, 08 Jun 2024 00:32:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFk0i-0005SK-JM; Sat, 08 Jun 2024 00:32:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0gcYkEdyj/kk4/tWVpu/FYSDJgSt3ExJb6urZlElM1w=; b=CQnAAXHZ0e6hUwbDg09NzhTqmY
	R/gUUs/v+J2wt+uTMSVv6DO67o9ejQmqDqm3l1STKHoJwfR+6YdtDohRr7ASJdUJ1+2Sc2W3NWXgC
	1U5wqozLth3ISofdGtO0zwe5J4oFPaJnzQ/LP2qa4JpTI6f5+jrTY3Qt/NZTKh8qNKdw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186282-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186282: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
X-Osstest-Versions-That:
    linux=d30d0e49da71de8df10bf3ff1b3de880653af562
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Jun 2024 00:32:44 +0000

flight 186282 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186282/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-raw       8 xen-boot         fail in 186278 pass in 186282
 test-armhf-armhf-xl-qcow2     8 xen-boot         fail in 186278 pass in 186282
 test-armhf-armhf-xl-credit1   8 xen-boot                   fail pass in 186278

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186268
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail in 186278 like 186268
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186278 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186278 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186268
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186268
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8a92980606e3585d72d510a03b59906e96755b8a
baseline version:
 linux                d30d0e49da71de8df10bf3ff1b3de880653af562

Last test of basis   186268  2024-06-06 17:10:36 Z    1 days
Testing same since   186270  2024-06-07 01:11:58 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bjorn Helgaas <bhelgaas@google.com>
  Chanwoo Lee <cw9316.lee@samsung.com>
  Dan Williams <dan.j.williams@intel.com>
  Deming Wang <wangdeming@inspur.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Justin Stitt <justinstitt@google.com>
  Kalle Valo <kvalo@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Wilck <martin.wilck@suse.com>
  Nathan Chancellor <nathan@kernel.org>
  Nilesh Javali <njavali@marvell.com>
  Peter Schneider <pschneider1968@googlemail.com>
  Saurav Kashyap <skashyap@marvell.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   d30d0e49da71..8a92980606e3  8a92980606e3585d72d510a03b59906e96755b8a -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat Jun 08 09:16:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Jun 2024 09:16:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736696.1142787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFsAl-0003OG-Cv; Sat, 08 Jun 2024 09:15:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736696.1142787; Sat, 08 Jun 2024 09:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFsAl-0003O9-9u; Sat, 08 Jun 2024 09:15:39 +0000
Received: by outflank-mailman (input) for mailman id 736696;
 Sat, 08 Jun 2024 09:15:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFsAj-0003Nz-NF; Sat, 08 Jun 2024 09:15:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFsAj-0003GC-KF; Sat, 08 Jun 2024 09:15:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFsAj-0003y7-AX; Sat, 08 Jun 2024 09:15:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFsAj-0001in-A4; Sat, 08 Jun 2024 09:15:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BVzSpyu2zNZOXlmaU2aKUFYcDn8s5BJTsO0ZH9oa4q4=; b=FWJc5BuAe+NF3LKATU/uiQfMKZ
	AdkHHnLid74aZnQKLU5pAjaCJVrMvaB939sD+pqKnytZMS7wNF9xRc3vFQJn9f/xz6vvRzB4LBDWZ
	zC3RTCJ74IR84UuTKxpkp/QWJGn4FRDntMIkQn4vDZMaNP2FrONmWyIFm9YkZ38ThrOw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186284-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186284: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=dc772f8237f9b0c9ea3f34d0dc4a57d1f6a5070d
X-Osstest-Versions-That:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Jun 2024 09:15:37 +0000

flight 186284 linux-linus real [real]
flight 186287 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186284/
http://logs.test-lab.xenproject.org/osstest/logs/186287/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 186282
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 186282
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186282

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186270
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186282
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                dc772f8237f9b0c9ea3f34d0dc4a57d1f6a5070d
baseline version:
 linux                8a92980606e3585d72d510a03b59906e96755b8a

Last test of basis   186282  2024-06-07 18:13:19 Z    0 days
Testing same since   186284  2024-06-08 00:43:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Potapenko <glider@google.com>
  Andreas Hindborg <a.hindborg@samsung.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Barry Song <baohua@kernel.org>
  Barry Song <v-songbaohua@oppo.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Björn Töpel <bjorn@rivosinc.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Borislav Petkov <bp@alien8.de>
  Brian Johannesmeyer <bjohannesmeyer@gmail.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chengming Zhou <chengming.zhou@linux.dev>
  Chris Bainbridge <chris.bainbridge@gmail.com>
  Chris Li <chrisl@kernel.org>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Chunguang Xu <chunguang.xu@shopee.com>
  Cong Wang <cong.wang@bytedance.com>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  David Sterba <dsterba@suse.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dr. David Alan Gilbert <linux@treblig.org>
  Filipe Manana <fdmanana@suse.com>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Hagar Hemdan <hagarhem@amazon.com>
  Hans de Goede <hdegoede@redhat.com>
  Heiko Carstens <hca@linux.ibm.com>
  Huisong Li <lihuisong@huawei.com>
  Ian Forbes <ian.forbes@broadcom.com>
  Jean-Christophe Guillain <jean-christophe@guillain.net>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Joerg Roedel <jroedel@suse.de>
  Johannes Weiner <hannes@cmpxchg.org>
  Keith Busch <kbusch@kernel.org>
  Kun(llfl) <llfl@linux.alibaba.com>
  Lance Yang <ioworker0@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liviu Dudau <liviu.dudau@arm.com>
  Lu Baolu <baolu.lu@linux.intel.com>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Hocko <mhocko@suse.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Mikhail Zaslonko <zaslonko@linux.ibm.com>
  Muhammad Usama Anjum <usama.anjum@collabora.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Niklas Cassel <cassel@kernel.org>
  Omar Sandoval <osandov@fb.com>
  Oscar Salvador <osalvador@suse.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Qu Wenruo <wqu@suse.com>
  Robin Murphy <robin.murphy@arm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shakeel Butt <shakeel.butt@linux.dev>
  Su Hui <suhui@nfschina.com>
  Suma Hegde <suma.hegde@amd.com>
  Suren Baghdasaryan <surenb@google.com>
  Tasos Sahanidis <tasos@tasossah.com>
  Thadeu Lima de Souza Cascardo <cascardo@igalia.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Vasant Hegde <vasant.hegde@amd.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Li <liwei391@huawei.com>
  Weiwen Hu <huweiwen@linux.alibaba.com>
  Will Deacon <will@kernel.org>
  Yujie Liu <yujie.liu@intel.com>
  Zack Rusin <zack.rusin@broadcom.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2462 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 08 11:38:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Jun 2024 11:38:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736721.1142798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFuOW-0001P4-Qw; Sat, 08 Jun 2024 11:38:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736721.1142798; Sat, 08 Jun 2024 11:38:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFuOW-0001Ox-NC; Sat, 08 Jun 2024 11:38:00 +0000
Received: by outflank-mailman (input) for mailman id 736721;
 Sat, 08 Jun 2024 11:37:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFuOV-0001On-Qd; Sat, 08 Jun 2024 11:37:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFuOV-0005mv-NB; Sat, 08 Jun 2024 11:37:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFuOV-00013c-CH; Sat, 08 Jun 2024 11:37:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFuOV-0006tz-Br; Sat, 08 Jun 2024 11:37:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oxYPMuXR3w8dKU8zMRAOV0LSwlRGeTVNDUyWU4NzoVc=; b=erP4ZLuKUE9V5nDyHIVGA9qF2m
	xztFBr317RKCGok6rgMsqn4vFvaabPz3YvObAtGL3h4sy/nU391+/IOR0C+e4iZdWADTAtcC6DL/e
	yJtNDsNy1k3UZwptbkh1j5Mpn0Y2Abq/BufmDb35U7HCy0USBO3Bcpshr0VBfmV1D8lo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186285-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186285: trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-credit2:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl-credit2:host-install(5):broken:heisenbug
    xen-unstable:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Jun 2024 11:37:59 +0000

flight 186285 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186285/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2     <job status>                 broken

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   5 host-install(5)          broken pass in 186272
 test-armhf-armhf-xl           8 xen-boot         fail in 186272 pass in 186285

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186272 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186272 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186272
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186272
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186272
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186272
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186272
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186272
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186285  2024-06-08 01:52:14 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  broken  
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-credit2 broken
broken-step test-armhf-armhf-xl-credit2 host-install(5)

Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jun 08 12:57:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Jun 2024 12:57:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736738.1142808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFvd3-0001df-Iy; Sat, 08 Jun 2024 12:57:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736738.1142808; Sat, 08 Jun 2024 12:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sFvd3-0001dY-F9; Sat, 08 Jun 2024 12:57:05 +0000
Received: by outflank-mailman (input) for mailman id 736738;
 Sat, 08 Jun 2024 12:57:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFvd2-0001dA-39; Sat, 08 Jun 2024 12:57:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFvd2-00078i-0e; Sat, 08 Jun 2024 12:57:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sFvd1-0004Lo-KM; Sat, 08 Jun 2024 12:57:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sFvd1-0002Qe-Jt; Sat, 08 Jun 2024 12:57:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Sl+4lH8ZZIQbY7RMtvQK36wkgB3xJuFwWZujegMqufg=; b=jo6hMb0lXAUVkbf/Ja45r6hDky
	PHpD0VH6+/T8JBITWlT+Q7nPJXddWb6u8ZH5qRt56So8SJ6+9sFmr8+/PUTRqBHt6aANTku04ce1I
	WWRBVX6gKNgmBC/FvW2nEg1fdHYGLGxB5t/9sN+q1uhS10qzkb7CM7t8jetJkjBth2X0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186286-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186286: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=a7eb7de53171b4cdabc3d36524c468abfe2590fa
X-Osstest-Versions-That:
    libvirt=f8ec3f9c2f8ddb3ea4ae89f1849897ef23633d83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Jun 2024 12:57:03 +0000

flight 186286 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186286/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186274
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              a7eb7de53171b4cdabc3d36524c468abfe2590fa
baseline version:
 libvirt              f8ec3f9c2f8ddb3ea4ae89f1849897ef23633d83

Last test of basis   186274  2024-06-07 04:18:45 Z    1 days
Testing same since   186286  2024-06-08 04:20:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   f8ec3f9c2f..a7eb7de531  a7eb7de53171b4cdabc3d36524c468abfe2590fa -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 08 18:13:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Jun 2024 18:13:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736758.1142817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG0Yl-0001WF-Ah; Sat, 08 Jun 2024 18:12:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736758.1142817; Sat, 08 Jun 2024 18:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG0Yl-0001W8-7z; Sat, 08 Jun 2024 18:12:59 +0000
Received: by outflank-mailman (input) for mailman id 736758;
 Sat, 08 Jun 2024 18:12:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sG0Yj-0001Vy-8Y; Sat, 08 Jun 2024 18:12:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sG0Yj-0004tT-2C; Sat, 08 Jun 2024 18:12:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sG0Yi-0003YZ-KO; Sat, 08 Jun 2024 18:12:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sG0Yi-0002hv-Js; Sat, 08 Jun 2024 18:12:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GoYQv9A383/MnVvHXtmhRpQ0F9pfEJYaaJIzYbK+iAw=; b=YvZ1+iO8ee2XV5Rw7VTWZFOSpW
	qUOBn2XJZb2aHxRizadMAOb68erDD6zBEtEM6e/ivdI++1D/pXeCnhYgB5tRpD/gGbdNHDlEMf08p
	d0Vz6WpkASFtWqs/y9xaOHlW0Rvaoc+BzNFppm5kKn6wFVGigd1ugKUiLdiLN1UjFZOQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186288-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186288: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=dc772f8237f9b0c9ea3f34d0dc4a57d1f6a5070d
X-Osstest-Versions-That:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 Jun 2024 18:12:56 +0000

flight 186288 linux-linus real [real]
flight 186289 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186288/
http://logs.test-lab.xenproject.org/osstest/logs/186289/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 186282

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-raw       8 xen-boot                     fail  like 186278
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186282
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186282
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                dc772f8237f9b0c9ea3f34d0dc4a57d1f6a5070d
baseline version:
 linux                8a92980606e3585d72d510a03b59906e96755b8a

Last test of basis   186282  2024-06-07 18:13:19 Z    0 days
Testing same since   186284  2024-06-08 00:43:36 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Potapenko <glider@google.com>
  Andreas Hindborg <a.hindborg@samsung.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Barry Song <baohua@kernel.org>
  Barry Song <v-songbaohua@oppo.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Björn Töpel <bjorn@rivosinc.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Borislav Petkov <bp@alien8.de>
  Brian Johannesmeyer <bjohannesmeyer@gmail.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chengming Zhou <chengming.zhou@linux.dev>
  Chris Bainbridge <chris.bainbridge@gmail.com>
  Chris Li <chrisl@kernel.org>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Chunguang Xu <chunguang.xu@shopee.com>
  Cong Wang <cong.wang@bytedance.com>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  David Sterba <dsterba@suse.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dr. David Alan Gilbert <linux@treblig.org>
  Filipe Manana <fdmanana@suse.com>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Hagar Hemdan <hagarhem@amazon.com>
  Hans de Goede <hdegoede@redhat.com>
  Heiko Carstens <hca@linux.ibm.com>
  Huisong Li <lihuisong@huawei.com>
  Ian Forbes <ian.forbes@broadcom.com>
  Jean-Christophe Guillain <jean-christophe@guillain.net>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Joerg Roedel <jroedel@suse.de>
  Johannes Weiner <hannes@cmpxchg.org>
  Keith Busch <kbusch@kernel.org>
  Kun(llfl) <llfl@linux.alibaba.com>
  Lance Yang <ioworker0@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liviu Dudau <liviu.dudau@arm.com>
  Lu Baolu <baolu.lu@linux.intel.com>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Hocko <mhocko@suse.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Mikhail Zaslonko <zaslonko@linux.ibm.com>
  Muhammad Usama Anjum <usama.anjum@collabora.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Niklas Cassel <cassel@kernel.org>
  Omar Sandoval <osandov@fb.com>
  Oscar Salvador <osalvador@suse.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Qu Wenruo <wqu@suse.com>
  Robin Murphy <robin.murphy@arm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shakeel Butt <shakeel.butt@linux.dev>
  Su Hui <suhui@nfschina.com>
  Suma Hegde <suma.hegde@amd.com>
  Suren Baghdasaryan <surenb@google.com>
  Tasos Sahanidis <tasos@tasossah.com>
  Thadeu Lima de Souza Cascardo <cascardo@igalia.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Vasant Hegde <vasant.hegde@amd.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Li <liwei391@huawei.com>
  Weiwen Hu <huweiwen@linux.alibaba.com>
  Will Deacon <will@kernel.org>
  Yujie Liu <yujie.liu@intel.com>
  Zack Rusin <zack.rusin@broadcom.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2462 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 08 23:07:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Jun 2024 23:07:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736772.1142827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG59R-0006qu-NL; Sat, 08 Jun 2024 23:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736772.1142827; Sat, 08 Jun 2024 23:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG59R-0006qn-Km; Sat, 08 Jun 2024 23:07:09 +0000
Received: by outflank-mailman (input) for mailman id 736772;
 Sat, 08 Jun 2024 23:07:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ovq7=NK=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sG59Q-0006qh-HI
 for xen-devel@lists.xenproject.org; Sat, 08 Jun 2024 23:07:08 +0000
Received: from mail-oa1-x2a.google.com (mail-oa1-x2a.google.com
 [2001:4860:4864:20::2a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d316d401-25eb-11ef-90a2-e314d9c70b13;
 Sun, 09 Jun 2024 01:07:07 +0200 (CEST)
Received: by mail-oa1-x2a.google.com with SMTP id
 586e51a60fabf-25488f4e55aso604342fac.0
 for <xen-devel@lists.xenproject.org>; Sat, 08 Jun 2024 16:07:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d316d401-25eb-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717888025; x=1718492825; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4Spwh/GE+fNNbhje1XWBauuE++xKMkbCnuBE04umdjI=;
        b=dTdjB4JykwPv4F9aTdSVGFbkOZm79v4P9WG1s6u+W+euAXeX/OlOtX7RV9W92EqCoS
         QVBTQInu0IoyJEEh5/3HpJ7oquGury1ihKNvoEbPd38juow7x/Pry7KdrY8xVxXpVwjB
         Tg0UAWkK/fpY/wPSBaWPAm1tjQYKysdG/EfiVqtWFUAnQ2s8vC0q84Wk8Mf/la1AIUnz
         xU6D50jcVObTAXVMGhd+CWzaLkZhKdiwSaaSbzsCyt4Yqm+2nlYfiVZL4io/8ons6azk
         FwIkCUiA8JpbfAvsvcMWNxo5vQSiks5r8HN/iumZ2zSaumvuvtUkQiFse1n3eD1f0WO4
         UsqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717888025; x=1718492825;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4Spwh/GE+fNNbhje1XWBauuE++xKMkbCnuBE04umdjI=;
        b=w6smnMJ12udhD+QdkGvh3cehAtzEn+SZXkOmCRe2zbHV2vVvbLroya5cYDyUMmt1aH
         Ej0MQA+76aW6mitwDfh0C44v0N1OPrGgmzv911KFGtULOgCLaZnfI6ffFd1Vc5d3fqIe
         nUeAM8QyqOBKSN4/W1QB2j4GlaKM0JRHqW6BToUS1G1P6/a0j6W7cufjHnBYNem6RDre
         O1DmUZU97wC4HjxGnstAhmXZP4H1QiyGGbD+Pgxphs8o97ITLSpqYIRrbfJC1r4HE8Jg
         jKvsj1JptENzC8IPK27uoQByBDmBaNeWCoBd6GqlPauqWKhBYOIxOxK45im/JAb18N0J
         jubg==
X-Forwarded-Encrypted: i=1; AJvYcCVLTsVVbAhrQyqiTsibbwMLeeD+z0t4TGw7KI/PcCcM/tJ4oyJmfpKtDX604wA619tnLfcfOkDnKdz0L8uLa+dvWrwY+i3zxKInCS9J12E=
X-Gm-Message-State: AOJu0Yzirq4j1c0efxnvC/6ixbtNNY4SjZhybJ+qTp6Mbt4lObym7g4C
	C1ikUYM2x5vcMoK9uwRzJf3lXh6Z0GwBzWFkVYk5nimFneeeMGvesovEDIN6QpyTs1zroV1HUCt
	Uf6XDxYyWopAXOXdUw25b5Y7+L/k=
X-Google-Smtp-Source: AGHT+IHdnCdp3e/T9uLZe5E15B2ImKKI3C0FZ31eLlDoSgaLf37Q3i1/ToYYau9ERtGnJ4gnRi/ALn76jd9er59deHI=
X-Received: by 2002:a05:6870:b491:b0:24f:d930:3cb0 with SMTP id
 586e51a60fabf-254648000c1mr6247789fac.43.1717888025325; Sat, 08 Jun 2024
 16:07:05 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1717356829.git.w1benny@gmail.com> <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com>
In-Reply-To: <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Sun, 9 Jun 2024 01:06:54 +0200
Message-ID: <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Jun 6, 2024 at 9:24 AM Jan Beulich <jbeulich@suse.com> wrote:
> > @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
> >      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> >
> >      mm_lock_init(&d->arch.altp2m_list_lock);
> > -    for ( i = 0; i < MAX_ALTP2M; i++ )
> > +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
> > +
> > +    if ( !d->arch.altp2m_p2m )
> > +        return -ENOMEM;
>
> This isn't really needed, is it? Both ...
>
> > +    for ( i = 0; i < d->nr_altp2m; i++ )
>
> ... this and ...
>
> >      {
> >          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
> >          if ( p2m == NULL )
> > @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
> >      unsigned int i;
> >      struct p2m_domain *p2m;
> >
> > -    for ( i = 0; i < MAX_ALTP2M; i++ )
> > +    if ( !d->arch.altp2m_p2m )
> > +        return;
> > +
> > +    for ( i = 0; i < d->nr_altp2m; i++ )
> >      {
> >          if ( !d->arch.altp2m_p2m[i] )
> >              continue;
> > @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
> >          d->arch.altp2m_p2m[i] = NULL;
> >          p2m_free_one(p2m);
> >      }
> > +
> > +    XFREE(d->arch.altp2m_p2m);
> >  }
>
> ... this ought to be fine without?

Could you please elaborate? I honestly don't know what you mean here
by "this isn't needed".

P.
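
[Editorial note: the allocation/teardown pattern the quoted patch introduces can be sketched in plain, self-contained C. The names below are illustrative stand-ins, not Xen's actual code: `calloc` models `xzalloc_array()`, `free` models `XFREE()`, and `init_one` stands in for `p2m_init_one()`.]

```c
#include <stdlib.h>

/* Hypothetical stand-in for struct p2m_domain. */
struct p2m_domain { int unused; };

/* Stand-in for p2m_init_one(): allocate one zeroed view. */
static struct p2m_domain *init_one(void)
{
    return calloc(1, sizeof(struct p2m_domain));
}

/* Allocate the view array up front, mirroring the patch: fail early with
 * an ENOMEM-style error if the array itself cannot be allocated, then
 * populate nr entries. */
static int init_altp2m(struct p2m_domain ***out, unsigned int nr)
{
    struct p2m_domain **arr = calloc(nr, sizeof(*arr)); /* ~ xzalloc_array() */

    if ( !arr )
        return -1;

    for ( unsigned int i = 0; i < nr; i++ )
        arr[i] = init_one();

    *out = arr;
    return 0;
}

/* Teardown tolerates a NULL array (e.g. when init failed part-way),
 * mirroring the early return the patch adds before the loop. */
static void teardown_altp2m(struct p2m_domain **arr, unsigned int nr)
{
    if ( !arr )
        return;

    for ( unsigned int i = 0; i < nr; i++ )
        free(arr[i]);

    free(arr); /* ~ XFREE() */
}
```

In this sketch the NULL guard in teardown is what makes calling it after a failed init safe, which is the crux of the exchange above.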


From xen-devel-bounces@lists.xenproject.org Sat Jun 08 23:09:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 Jun 2024 23:09:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736779.1142838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG5BI-0007QF-6X; Sat, 08 Jun 2024 23:09:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736779.1142838; Sat, 08 Jun 2024 23:09:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG5BI-0007Q8-3s; Sat, 08 Jun 2024 23:09:04 +0000
Received: by outflank-mailman (input) for mailman id 736779;
 Sat, 08 Jun 2024 23:09:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ovq7=NK=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sG5BH-0007Q2-EL
 for xen-devel@lists.xenproject.org; Sat, 08 Jun 2024 23:09:03 +0000
Received: from mail-oa1-x2c.google.com (mail-oa1-x2c.google.com
 [2001:4860:4864:20::2c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 168b6bb5-25ec-11ef-b4bb-af5377834399;
 Sun, 09 Jun 2024 01:09:01 +0200 (CEST)
Received: by mail-oa1-x2c.google.com with SMTP id
 586e51a60fabf-250e4c4d650so1660471fac.2
 for <xen-devel@lists.xenproject.org>; Sat, 08 Jun 2024 16:09:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 168b6bb5-25ec-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717888139; x=1718492939; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7L9vV3dhgZONH3ptEOr5Jmb57jUsakuPocb/rHAHRgc=;
        b=cZEeijQNcTPfuQVLAaTCqDyaqruVqTDJP7Kq7f4q+WjoqNVhhetTcYz9p9wuKTUOUS
         wS4UeVYzJyIe+LK7DzAFuKitNK6+XxpEOkryo9JG9ZIJz1T5fwqX5lJpa2bUyCETB2bV
         adsIN1HwTQzjMhfxrRq9alEXItmcuvr2Z47Mzl99xvGNpndJ0/DxP9iwodtFVinfTdXr
         h8ROFZXsmsg/+RdJcGlw+AlB2+Gzv8w2s0nJUzHMgSHWPfs4du+yEtQ27Kxp9y+TwzMV
         nSirtzfFMlA5Jpa7recpTcNFVyWj0BD3leGTvJ6d+D3w4zEcTbmhsytcCqskxr7TmHTZ
         O0fQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717888139; x=1718492939;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7L9vV3dhgZONH3ptEOr5Jmb57jUsakuPocb/rHAHRgc=;
        b=ghzVFSV4up26w8EIu9vH8VAaypZlV5R7Vq67RqfijeEFXRuhHvBrF85mTczK6xGZND
         WPBJVvpGamxzf3WLDCkKZVdtzeuViWxTTaUaP3vGvtyTB25/J1zf/k3frpcnvqwTxGCZ
         49Nz48+yUJNVxpYaG645VnmzOHefmjOgim82TREe/E3z48rN7EXkrPv0fA2qI0ekLepX
         9PeRwX3r5oYGUWfaXApvzLZJDV/LHLTXJWG9/DUopwsllR0ZxfDR9VLXiN1jWK+G3c4H
         7O1P20Wym8GwgSwrTs6ymqERVHDtZIEXVXaH/h2wrTrb3XQZ2/uXQHbJFnbqBir8XrYo
         oZ5g==
X-Forwarded-Encrypted: i=1; AJvYcCUAo0eYahVl7MV67HvpyBqVC6WkoZ+q0WFw/BJL47j/jI6LwQtDSnJHA9t98hsbIzJKfe8wb1OMmSTHlIn/8Vpc0LpDVhtz4hYT+EmX1Qg=
X-Gm-Message-State: AOJu0YxicZafcR9wh17AHAbv59iikRkvErT069ekDIXw9VnD1CW8Qn7I
	yvKm1RYG7zb4IBYFlTXHQYSt2HO8q0FdA6zWxzmmIURLTJTa9LCV/uPex3TaCw1xYoL55tAD3jX
	3x8t0XvcjPADDsRel4L0sJBXC63g=
X-Google-Smtp-Source: AGHT+IEDK8jNM0ovWLj5srWB7RE/aLhEQEu5+m518NFjLnGw/1Vb1QW5LfwMsbE7NROQeyMwblzud3wlY4M9+c2crbA=
X-Received: by 2002:a05:6870:3107:b0:24c:b0ca:9650 with SMTP id
 586e51a60fabf-254644641edmr7126863fac.1.1717888138542; Sat, 08 Jun 2024
 16:08:58 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1717356829.git.w1benny@gmail.com> <d6fd97b66b5f1a974e317c9d3f72fb139b39118f.1717356829.git.w1benny@gmail.com>
 <8936b5ef-1ef7-4606-9f19-c75287aa88fa@suse.com>
In-Reply-To: <8936b5ef-1ef7-4606-9f19-c75287aa88fa@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Sun, 9 Jun 2024 01:08:47 +0200
Message-ID: <CAKBKdXi9uiADe+rXyHSd6HV3A-MvT04fgJAdJA=LNuYOr2Eedw@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v5 09/10] xen/x86: Disallow creating domains
 with altp2m enabled and altp2m.nr == 0
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

The flag isn't a bool. It can hold one of 3 values (besides 0).

P.

On Thu, Jun 6, 2024 at 9:57 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 02.06.2024 22:04, Petr Beneš wrote:
> > From: Petr Beneš <w1benny@gmail.com>
> >
> > Since libxl finally sends the altp2m.nr value, we can remove the previously
> > introduced temporary workaround.
> >
> > Creating domain with enabled altp2m while setting altp2m.nr == 0 doesn't
> > make sense and it's probably not what user wants.
>
> Yet: Do we need separate count and flag anymore? Can't "nr != 0"
> take the place of the flag being true?
>
> Jan
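
[Editorial note: Petr's point above, that the flag is a multi-valued mode rather than a bool, can be sketched as follows. The identifiers are hypothetical, not Xen's actual ones, though the three non-zero values correspond to the mixed/external/limited altp2m modes.]

```c
/* Illustrative mode field: zero means off, but three distinct non-zero
 * values are possible, so the field carries more information than a bool. */
enum altp2m_mode {
    ALTP2M_OFF      = 0,
    ALTP2M_MIXED    = 1,
    ALTP2M_EXTERNAL = 2,
    ALTP2M_LIMITED  = 3,
};

/* "nr != 0" can only stand in for the on/off aspect of the flag; it
 * cannot encode which of the three modes is active, which is why the
 * count alone cannot replace the mode field. */
static int altp2m_active(enum altp2m_mode mode, unsigned int nr)
{
    return mode != ALTP2M_OFF && nr != 0;
}
```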


From xen-devel-bounces@lists.xenproject.org Sun Jun 09 01:25:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Jun 2024 01:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736788.1142848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG7JC-0005c8-Kd; Sun, 09 Jun 2024 01:25:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736788.1142848; Sun, 09 Jun 2024 01:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sG7JC-0005c1-Hv; Sun, 09 Jun 2024 01:25:22 +0000
Received: by outflank-mailman (input) for mailman id 736788;
 Sun, 09 Jun 2024 01:25:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sG7JA-0005br-EZ; Sun, 09 Jun 2024 01:25:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sG7JA-0003wI-9Q; Sun, 09 Jun 2024 01:25:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sG7J9-0000MH-Te; Sun, 09 Jun 2024 01:25:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sG7J9-0003K4-TD; Sun, 09 Jun 2024 01:25:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wzpd9yJr4Iu1Sh2+b4bZUVxs05Z7GVl7+Qu/CjlImOM=; b=gfxD9q4mxse/A1ZZbmZlTJzoTS
	39OiZQkT4kIVdtuF0atJ3oUYKmlshSA/Le9RaJNg8TLiryltaOW9G7ePqXh8rnjTHXG5t4dcyfk2Y
	uOrI5ZrWux0wIQj+xgSg8t/TPyQ3pmPXdR+B+sXwPIHiEOd/O1ztcxPMoAOJcolZvnI8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186290-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186290: FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:<job status>:broken:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:host-reuse/final:broken:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=061d1af7b0305227182bd9da60c7706c079348b7
X-Osstest-Versions-That:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 Jun 2024 01:25:19 +0000

flight 186290 linux-linus real [real]
flight 186291 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186290/
http://logs.test-lab.xenproject.org/osstest/logs/186291/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm    <job status>  broken in 186291

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 25 host-reuse/final broken in 186291 pass in 186290
 test-armhf-armhf-examine      8 reboot              fail pass in 186291-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186291-retest
 test-armhf-armhf-xl-arndale   8 xen-boot            fail pass in 186291-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186282

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186291 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186291 never pass
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186278
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186278
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186282
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186282
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                061d1af7b0305227182bd9da60c7706c079348b7
baseline version:
 linux                8a92980606e3585d72d510a03b59906e96755b8a

Last test of basis   186282  2024-06-07 18:13:19 Z    1 days
Failing since        186284  2024-06-08 00:43:36 Z    1 days    3 attempts
Testing same since   186290  2024-06-08 18:41:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Potapenko <glider@google.com>
  Andreas Hindborg <a.hindborg@samsung.com>
  Andrew Ballance <andrewjballance@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Aseda Aboagye <aaboagye@chromium.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Barry Song <baohua@kernel.org>
  Barry Song <v-songbaohua@oppo.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Benjamin Tissoires <bentiss@kernel.org>
  Bingbu Cao <bingbu.cao@intel.com>
  Björn Töpel <bjorn@rivosinc.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Borislav Petkov <bp@alien8.de>
  Brian Johannesmeyer <bjohannesmeyer@gmail.com>
  Carlos Llamas <cmllamas@google.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chengming Zhou <chengming.zhou@linux.dev>
  Chris Bainbridge <chris.bainbridge@gmail.com>
  Chris Li <chrisl@kernel.org>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunguang Xu <chunguang.xu@shopee.com>
  Cong Wang <cong.wang@bytedance.com>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  David Kaplan <david.kaplan@amd.com>
  David Sterba <dsterba@suse.com>
  Dmitry Safonov <0x7f454c46@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dr. David Alan Gilbert <linux@treblig.org>
  Filipe Manana <fdmanana@suse.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Hagar Hemdan <hagarhem@amazon.com>
  Haifeng Xu <haifeng.xu@shopee.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Heiko Carstens <hca@linux.ibm.com>
  Huisong Li <lihuisong@huawei.com>
  Ian Forbes <ian.forbes@broadcom.com>
  Jean-Christophe Guillain <jean-christophe@guillain.net>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Jiri Kosina <jkosina@suse.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan+linaro@kernel.org>
  Johannes Weiner <hannes@cmpxchg.org>
  José Expósito <jose.exposito89@gmail.com>
  Keith Busch <kbusch@kernel.org>
  Kun(llfl) <llfl@linux.alibaba.com>
  Lance Yang <ioworker0@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liviu Dudau <liviu.dudau@arm.com>
  Louis Dalibard <ontake@ontake.dev>
  Lu Baolu <baolu.lu@linux.intel.com>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin Tůma <martin.tuma@digiteqautomotive.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Hocko <mhocko@suse.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Mikhail Zaslonko <zaslonko@linux.ibm.com>
  Muhammad Usama Anjum <usama.anjum@collabora.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Niklas Cassel <cassel@kernel.org>
  Omar Sandoval <osandov@fb.com>
  Oscar Salvador <osalvador@suse.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Randy Dunlap <rdunlap@infradead.org>
  Richard Acayan <mailingradian@gmail.com>
  Robin Murphy <robin.murphy@arm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Samuel Holland <samuel.holland@sifive.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shakeel Butt <shakeel.butt@linux.dev>
  Steev Klimaszewski <steev@kali.org>
  Su Hui <suhui@nfschina.com>
  Suma Hegde <suma.hegde@amd.com>
  Sunil V L <sunilvl@ventanamicro.com>
  Suren Baghdasaryan <surenb@google.com>
  Tasos Sahanidis <tasos@tasossah.com>
  Thadeu Lima de Souza Cascardo <cascardo@igalia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Vasant Hegde <vasant.hegde@amd.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Li <liwei391@huawei.com>
  Weiwen Hu <huweiwen@linux.alibaba.com>
  Will Deacon <will@kernel.org>
  Yazen Ghannam <yazen.ghannam@amd.com>
  Yujie Liu <yujie.liu@intel.com>
  Zack Rusin <zack.rusin@broadcom.com>
  Zhang Lixu <lixu.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-amd64-amd64-xl-qemut-debianhvm-i386-xsm broken

Not pushing.

(No revision log; it would be 3554 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 09 08:47:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Jun 2024 08:47:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736806.1142857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGED4-0004tV-59; Sun, 09 Jun 2024 08:47:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736806.1142857; Sun, 09 Jun 2024 08:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGED4-0004tO-2F; Sun, 09 Jun 2024 08:47:30 +0000
Received: by outflank-mailman (input) for mailman id 736806;
 Sun, 09 Jun 2024 08:47:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGED3-0004tE-9x; Sun, 09 Jun 2024 08:47:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGED3-0004b5-6k; Sun, 09 Jun 2024 08:47:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGED2-0001Ii-S2; Sun, 09 Jun 2024 08:47:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGED2-00080V-RW; Sun, 09 Jun 2024 08:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BKATRq97Y92SjU0Fn3ezUYYEEAbIjResPwDiqB4winQ=; b=bf8KQ/ZGiUbkayrgheRtPtnzwY
	6oel+x6DU+OklsJV5d2BVzcw2VN0cDYAiKxY21s+MtF7MHrv21R695Js4ZP+n81gxipxNyn9HUZnu
	SndgwxM7FFhXnx8hCsumbGm9tYRJh2l9GSXoszSI/KYuxAf4CaVEu2Tx98Df8vd9Pc8g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186292-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186292: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=061d1af7b0305227182bd9da60c7706c079348b7
X-Osstest-Versions-That:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 Jun 2024 08:47:28 +0000

flight 186292 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186292/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186282
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 186282

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   8 xen-boot         fail in 186290 pass in 186292
 test-armhf-armhf-xl           8 xen-boot                   fail pass in 186290
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail pass in 186290
 test-armhf-armhf-xl-multivcpu  8 xen-boot                  fail pass in 186290

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186282

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-qcow2     8 xen-boot            fail in 186290 like 186278
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail in 186290 like 186278
 test-armhf-armhf-xl         15 migrate-support-check fail in 186290 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186290 never pass
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186290 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186290 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186290 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186290 never pass
 test-armhf-armhf-xl-raw       8 xen-boot                     fail  like 186278
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186282
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186282
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                061d1af7b0305227182bd9da60c7706c079348b7
baseline version:
 linux                8a92980606e3585d72d510a03b59906e96755b8a

Last test of basis   186282  2024-06-07 18:13:19 Z    1 days
Failing since        186284  2024-06-08 00:43:36 Z    1 days    4 attempts
Testing same since   186290  2024-06-08 18:41:41 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Potapenko <glider@google.com>
  Andreas Hindborg <a.hindborg@samsung.com>
  Andrew Ballance <andrewjballance@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Aseda Aboagye <aaboagye@chromium.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Barry Song <baohua@kernel.org>
  Barry Song <v-songbaohua@oppo.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Benjamin Tissoires <bentiss@kernel.org>
  Bingbu Cao <bingbu.cao@intel.com>
  Björn Töpel <bjorn@rivosinc.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Borislav Petkov <bp@alien8.de>
  Brian Johannesmeyer <bjohannesmeyer@gmail.com>
  Carlos Llamas <cmllamas@google.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chengming Zhou <chengming.zhou@linux.dev>
  Chris Bainbridge <chris.bainbridge@gmail.com>
  Chris Li <chrisl@kernel.org>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunguang Xu <chunguang.xu@shopee.com>
  Cong Wang <cong.wang@bytedance.com>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  David Kaplan <david.kaplan@amd.com>
  David Sterba <dsterba@suse.com>
  Dmitry Safonov <0x7f454c46@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dr. David Alan Gilbert <linux@treblig.org>
  Filipe Manana <fdmanana@suse.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Hagar Hemdan <hagarhem@amazon.com>
  Haifeng Xu <haifeng.xu@shopee.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Heiko Carstens <hca@linux.ibm.com>
  Huisong Li <lihuisong@huawei.com>
  Ian Forbes <ian.forbes@broadcom.com>
  Jean-Christophe Guillain <jean-christophe@guillain.net>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Jiri Kosina <jkosina@suse.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan+linaro@kernel.org>
  Johannes Weiner <hannes@cmpxchg.org>
  José Expósito <jose.exposito89@gmail.com>
  Keith Busch <kbusch@kernel.org>
  Kun(llfl) <llfl@linux.alibaba.com>
  Lance Yang <ioworker0@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liviu Dudau <liviu.dudau@arm.com>
  Louis Dalibard <ontake@ontake.dev>
  Lu Baolu <baolu.lu@linux.intel.com>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin Tůma <martin.tuma@digiteqautomotive.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Hocko <mhocko@suse.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Mikhail Zaslonko <zaslonko@linux.ibm.com>
  Muhammad Usama Anjum <usama.anjum@collabora.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Niklas Cassel <cassel@kernel.org>
  Omar Sandoval <osandov@fb.com>
  Oscar Salvador <osalvador@suse.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Randy Dunlap <rdunlap@infradead.org>
  Richard Acayan <mailingradian@gmail.com>
  Robin Murphy <robin.murphy@arm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Samuel Holland <samuel.holland@sifive.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shakeel Butt <shakeel.butt@linux.dev>
  Steev Klimaszewski <steev@kali.org>
  Su Hui <suhui@nfschina.com>
  Suma Hegde <suma.hegde@amd.com>
  Sunil V L <sunilvl@ventanamicro.com>
  Suren Baghdasaryan <surenb@google.com>
  Tasos Sahanidis <tasos@tasossah.com>
  Thadeu Lima de Souza Cascardo <cascardo@igalia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Vasant Hegde <vasant.hegde@amd.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Li <liwei391@huawei.com>
  Weiwen Hu <huweiwen@linux.alibaba.com>
  Will Deacon <will@kernel.org>
  Yazen Ghannam <yazen.ghannam@amd.com>
  Yujie Liu <yujie.liu@intel.com>
  Zack Rusin <zack.rusin@broadcom.com>
  Zhang Lixu <lixu.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3554 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 09 11:23:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Jun 2024 11:23:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736820.1142867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGGda-0005L4-Ne; Sun, 09 Jun 2024 11:23:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736820.1142867; Sun, 09 Jun 2024 11:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGGda-0005Kx-Km; Sun, 09 Jun 2024 11:23:02 +0000
Received: by outflank-mailman (input) for mailman id 736820;
 Sun, 09 Jun 2024 11:23:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGGdZ-0005Kn-8B; Sun, 09 Jun 2024 11:23:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGGdZ-0007SX-1o; Sun, 09 Jun 2024 11:23:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGGdY-0007Lb-LW; Sun, 09 Jun 2024 11:23:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGGdY-00007D-LC; Sun, 09 Jun 2024 11:23:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ceUIaYZV0nIWSvUYDhDKQycNhYeqbx1VZp4udIjDB1o=; b=1DEBZeE+hbCFPMbTrf2d/1UoCn
	DgoYdimJo4/tK+YV5zIBujXeNK/PDwOnndECIcvebrm8oSQfhgRu/usZ3Q78XtcwSem6yxSMRHOcv
	REzJXuH01ENVnturN/cIUy4grdZ8jYIMAH+RCYulKXwCJ8mtmGbtoisgYlgrqCEgU3eM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186293: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 Jun 2024 11:23:00 +0000

flight 186293 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186293/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186225
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 186234
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186285
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186285
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186285
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186285
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186285
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186285
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186293  2024-06-09 01:51:55 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 09 16:56:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 Jun 2024 16:56:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736836.1142878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGLpp-0006NI-Nq; Sun, 09 Jun 2024 16:56:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736836.1142878; Sun, 09 Jun 2024 16:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGLpp-0006NB-KM; Sun, 09 Jun 2024 16:56:01 +0000
Received: by outflank-mailman (input) for mailman id 736836;
 Sun, 09 Jun 2024 16:56:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGLpo-0006N1-E0; Sun, 09 Jun 2024 16:56:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGLpo-0006Uv-95; Sun, 09 Jun 2024 16:56:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGLpn-0008GQ-Sg; Sun, 09 Jun 2024 16:55:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGLpn-0003K4-SA; Sun, 09 Jun 2024 16:55:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0vMtFRjKhJ5GQfE7J0IaC8h6LcqpV08dKghcpg8yji0=; b=Dgj2VE4CGZZBixN1apzRWJ+jv1
	aU32yZcLSTihLKubXUKAe/ao0SJTSU/B2JP1VHyqPxPAclDpEzs1rM7d3F4HCykBWTjuIOnnwhvKn
	Oibj0IUEVBjqmupVBSNpl4SvzBfFmHtVCde86xJ3mm5x2AN0APyb6fAfAMBNTq08PuH0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186294-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186294: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=771ed66105de9106a6f3e4311e06451881cdac5e
X-Osstest-Versions-That:
    linux=8a92980606e3585d72d510a03b59906e96755b8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 Jun 2024 16:55:59 +0000

flight 186294 linux-linus real [real]
flight 186295 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186294/
http://logs.test-lab.xenproject.org/osstest/logs/186295/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 186295-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186295 like 186282
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186295 never pass
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186270
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186278
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186282
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186282
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                771ed66105de9106a6f3e4311e06451881cdac5e
baseline version:
 linux                8a92980606e3585d72d510a03b59906e96755b8a

Last test of basis   186282  2024-06-07 18:13:19 Z    1 days
Failing since        186284  2024-06-08 00:43:36 Z    1 days    5 attempts
Testing same since   186294  2024-06-09 08:51:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Potapenko <glider@google.com>
  Andreas Hindborg <a.hindborg@samsung.com>
  Andrew Ballance <andrewjballance@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Aseda Aboagye <aaboagye@chromium.org>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Barry Song <baohua@kernel.org>
  Barry Song <v-songbaohua@oppo.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Benjamin Tissoires <bentiss@kernel.org>
  Bingbu Cao <bingbu.cao@intel.com>
  Björn Töpel <bjorn@rivosinc.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Borislav Petkov <bp@alien8.de>
  Brian Johannesmeyer <bjohannesmeyer@gmail.com>
  Carlos Llamas <cmllamas@google.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chengming Zhou <chengming.zhou@linux.dev>
  Chris Bainbridge <chris.bainbridge@gmail.com>
  Chris Li <chrisl@kernel.org>
  Christian Borntraeger <borntraeger@linux.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Chunguang Xu <chunguang.xu@shopee.com>
  Cong Wang <cong.wang@bytedance.com>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David Kaplan <david.kaplan@amd.com>
  David Sterba <dsterba@suse.com>
  Dmitry Safonov <0x7f454c46@gmail.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dr. David Alan Gilbert <linux@treblig.org>
  Enzo Matsumiya <ematsumiya@suse.de>
  Filipe Manana <fdmanana@suse.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Hagar Hemdan <hagarhem@amazon.com>
  Haifeng Xu <haifeng.xu@shopee.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Heiko Carstens <hca@linux.ibm.com>
  Huisong Li <lihuisong@huawei.com>
  Ian Forbes <ian.forbes@broadcom.com>
  Jean-Christophe Guillain <jean-christophe@guillain.net>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Jessica Zhang <quic_jesszhan@quicinc.com>
  Jiri Kosina <jkosina@suse.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan+linaro@kernel.org>
  Johannes Weiner <hannes@cmpxchg.org>
  José Expósito <jose.exposito89@gmail.com>
  Keith Busch <kbusch@kernel.org>
  Kun(llfl) <llfl@linux.alibaba.com>
  Lance Yang <ioworker0@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liviu Dudau <liviu.dudau@arm.com>
  Louis Dalibard <ontake@ontake.dev>
  Lu Baolu <baolu.lu@linux.intel.com>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin Tůma <martin.tuma@digiteqautomotive.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Hocko <mhocko@suse.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Mikhail Zaslonko <zaslonko@linux.ibm.com>
  Muhammad Usama Anjum <usama.anjum@collabora.com>
  Nam Cao <namcao@linutronix.de>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Niklas Cassel <cassel@kernel.org>
  Omar Sandoval <osandov@fb.com>
  Oscar Salvador <osalvador@suse.de>
  Palmer Dabbelt <palmer@rivosinc.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qu Wenruo <wqu@suse.com>
  Randy Dunlap <rdunlap@infradead.org>
  Richard Acayan <mailingradian@gmail.com>
  Robin Murphy <robin.murphy@arm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Samuel Holland <samuel.holland@sifive.com>
  Sebastian Andrzej Siewior <bigeasy@linutronix.de>
  Shakeel Butt <shakeel.butt@linux.dev>
  Steev Klimaszewski <steev@kali.org>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Su Hui <suhui@nfschina.com>
  Suma Hegde <suma.hegde@amd.com>
  Sunil V L <sunilvl@ventanamicro.com>
  Suren Baghdasaryan <surenb@google.com>
  Tasos Sahanidis <tasos@tasossah.com>
  Thadeu Lima de Souza Cascardo <cascardo@igalia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Vasant Hegde <vasant.hegde@amd.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Li <liwei391@huawei.com>
  Weiwen Hu <huweiwen@linux.alibaba.com>
  Will Deacon <will@kernel.org>
  Yazen Ghannam <yazen.ghannam@amd.com>
  Yujie Liu <yujie.liu@intel.com>
  Zack Rusin <zack.rusin@broadcom.com>
  Zhang Lixu <lixu.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   8a92980606e3..771ed66105de  771ed66105de9106a6f3e4311e06451881cdac5e -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 00:55:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 00:55:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736851.1142888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGTJS-0006Gi-9k; Mon, 10 Jun 2024 00:55:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736851.1142888; Mon, 10 Jun 2024 00:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGTJS-0006GR-4Y; Mon, 10 Jun 2024 00:55:06 +0000
Received: by outflank-mailman (input) for mailman id 736851;
 Mon, 10 Jun 2024 00:55:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGTJQ-0006GH-Gm; Mon, 10 Jun 2024 00:55:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGTJQ-0007uw-Ch; Mon, 10 Jun 2024 00:55:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGTJQ-00068Z-2T; Mon, 10 Jun 2024 00:55:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGTJQ-0007JM-21; Mon, 10 Jun 2024 00:55:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HevxPEJr6u6Y/209WOwgyrt3cFZFsT4nPDDGNYzm1GE=; b=NoxMtBnSvbPlPXD8B2Akv4P3Dw
	1sblw/BHyTHZ9kU5isCO2Zw0tnRjzkwwNxwcvfipPH4M/oWbmky3jp13ccAZuHaxSOXCV0oBVy27t
	yhTn7r0XLiyVCxqALtP3180fNPkDyJOXqo1HHq78xFLSkYqdnAbXvhKxn9LgPSY3E++Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186296-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186296: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b8481381d4e2549f06812eb6069198144696340c
X-Osstest-Versions-That:
    linux=771ed66105de9106a6f3e4311e06451881cdac5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 00:55:04 +0000

flight 186296 linux-linus real [real]
flight 186297 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186296/
http://logs.test-lab.xenproject.org/osstest/logs/186297/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186294

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail pass in 186297-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186294
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186297 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186297 never pass
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186294
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186294
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b8481381d4e2549f06812eb6069198144696340c
baseline version:
 linux                771ed66105de9106a6f3e4311e06451881cdac5e

Last test of basis   186294  2024-06-09 08:51:48 Z    0 days
Testing same since   186296  2024-06-09 17:11:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ingo Molnar <mingo@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Rutland <mark.rutland@arm.com>
  Milian Wolff <milian.wolff@kdab.com>
  Namhyung Kim <namhyung@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 652 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 04:04:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 04:04:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736865.1142898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGWG2-0000S1-3U; Mon, 10 Jun 2024 04:03:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736865.1142898; Mon, 10 Jun 2024 04:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGWG2-0000Ru-0E; Mon, 10 Jun 2024 04:03:46 +0000
Received: by outflank-mailman (input) for mailman id 736865;
 Mon, 10 Jun 2024 04:03:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AMvd=NM=suse.de=osalvador@srs-se1.protection.inumbo.net>)
 id 1sGWG0-0000Rm-5G
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 04:03:44 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c75c83e-26de-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 06:03:42 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 48E781F74B;
 Mon, 10 Jun 2024 04:03:41 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 2ECBB13A85;
 Mon, 10 Jun 2024 04:03:40 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id K/XmCBx7ZmbzEgAAD6G6ig
 (envelope-from <osalvador@suse.de>); Mon, 10 Jun 2024 04:03:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c75c83e-26de-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1717992221; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IYHZUWJMx20LtuFCKKBR+s4SdF/zVxwDIU/zuxJyVpA=;
	b=ydtYYv4zpnWoA+DOEizDfb0xxwSoFgVJdNwaHwsZ4KNXXr0McZcQoDjm0TnP6modKw7qU/
	YU6A1x5Wb6Mc5i5eAKt/8AnZNi+ROMDh3WDQz74dIE+RuF5EAw2jT5VaieXlP6Lpkd0vmq
	E86gqlWw/GGQiz4G8sn557CdigPAhQM=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1717992221;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=IYHZUWJMx20LtuFCKKBR+s4SdF/zVxwDIU/zuxJyVpA=;
	b=2VfRPhmrW05hUtLnVJUcfs9KMJfRv9vz9QyGoBqjvqYUGpchjhJed480Cqm/I/F3MSoXT2
	Q9/kUhFsd2NbvbBw==
Authentication-Results: smtp-out2.suse.de;
	none
Date: Mon, 10 Jun 2024 06:03:38 +0200
From: Oscar Salvador <osalvador@suse.de>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	Eugenio =?iso-8859-1?Q?P=E9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
Message-ID: <ZmZ7GgwJw4ucPJaM@localhost.localdomain>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240607090939.89524-2-david@redhat.com>
X-Spam-Flag: NO
X-Spam-Score: -4.30
X-Spam-Level: 
X-Spamd-Result: default: False [-4.30 / 50.00];
	BAYES_HAM(-3.00)[99.99%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	MIME_TRACE(0.00)[0:+];
	MISSING_XM_UA(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[23];
	RCVD_TLS_ALL(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	TO_DN_SOME(0.00)[];
	FROM_EQ_ENVFROM(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo]

On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences of memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

All looks good but I am puzzled with something.

> +	} else {
> +		/* memblock adjusts totalram_pages() ahead of time. */
> +		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> +	}

You say that memblock adjusts totalram_pages ahead of time, and I guess
you mean in memblock_free_all():

 pages = free_low_memory_core_early();
 totalram_pages_add(pages);

but that is not ahead; it looks like it is updating __after__ sending
them to the buddy allocator?


-- 
Oscar Salvador
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 04:24:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 04:24:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736871.1142908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGWZl-0003PV-N6; Mon, 10 Jun 2024 04:24:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736871.1142908; Mon, 10 Jun 2024 04:24:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGWZl-0003PO-K1; Mon, 10 Jun 2024 04:24:09 +0000
Received: by outflank-mailman (input) for mailman id 736871;
 Mon, 10 Jun 2024 04:24:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AMvd=NM=suse.de=osalvador@srs-se1.protection.inumbo.net>)
 id 1sGWZk-0003PI-B8
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 04:24:08 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de
 [2a07:de40:b251:101:10:150:64:1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 45b5b476-26e1-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 06:24:05 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 88FA1219DC;
 Mon, 10 Jun 2024 04:24:04 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 5D2BD13A7F;
 Mon, 10 Jun 2024 04:24:03 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id jBLQE+N/ZmYYFwAAD6G6ig
 (envelope-from <osalvador@suse.de>); Mon, 10 Jun 2024 04:24:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45b5b476-26e1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1717993444; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=STes9dvsOiZHvfV9vRJe27/8pEpJsEu/gt/yrG0Qqek=;
	b=o8tqZ6ptgNBJgDzFFFlXJ4BuRYktYynoSB9CmRx3kAKTTWOQVXmg2tdchdht62Uh4KiAoT
	Xjh9Ea6U1Dp2/lG8fVpOXMgyDKEwE6igfCaySZdZRfxaGYQYm6usncgdP35NZiqT5JNfOa
	RSbw87vp6yMYDpWUD7crMXYt1zHzDeA=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1717993444;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=STes9dvsOiZHvfV9vRJe27/8pEpJsEu/gt/yrG0Qqek=;
	b=fQWD18ajS3CyXfmlEKPJp0P/CvQjtxtJJeWW8NIZLqMy2mo4mWHdImltCgMGR17riJ+K30
	kQDBwHnbIU0PuZBw==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1717993444; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=STes9dvsOiZHvfV9vRJe27/8pEpJsEu/gt/yrG0Qqek=;
	b=o8tqZ6ptgNBJgDzFFFlXJ4BuRYktYynoSB9CmRx3kAKTTWOQVXmg2tdchdht62Uh4KiAoT
	Xjh9Ea6U1Dp2/lG8fVpOXMgyDKEwE6igfCaySZdZRfxaGYQYm6usncgdP35NZiqT5JNfOa
	RSbw87vp6yMYDpWUD7crMXYt1zHzDeA=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1717993444;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=STes9dvsOiZHvfV9vRJe27/8pEpJsEu/gt/yrG0Qqek=;
	b=fQWD18ajS3CyXfmlEKPJp0P/CvQjtxtJJeWW8NIZLqMy2mo4mWHdImltCgMGR17riJ+K30
	kQDBwHnbIU0PuZBw==
Date: Mon, 10 Jun 2024 06:23:57 +0200
From: Oscar Salvador <osalvador@suse.de>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	Eugenio =?iso-8859-1?Q?P=E9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of
 !ZONE_DEVICE with PageOffline() instead of PageReserved()
Message-ID: <ZmZ_3Xc7fdrL1R15@localhost.localdomain>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-3-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240607090939.89524-3-david@redhat.com>
X-Spam-Level: 
X-Spamd-Result: default: False [-4.30 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	SUBJECT_HAS_EXCLAIM(0.00)[];
	ARC_NA(0.00)[];
	MIME_TRACE(0.00)[0:+];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	RCPT_COUNT_TWELVE(0.00)[23];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	FROM_HAS_DN(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	FROM_EQ_ENVFROM(0.00)[];
	TO_DN_SOME(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	MISSING_XM_UA(0.00)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo]
X-Spam-Score: -4.30
X-Spam-Flag: NO

On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
> We currently initialize the memmap such that PG_reserved is set and the
> refcount of the page is 1. In virtio-mem code, we have to manually clear
> that PG_reserved flag to make memory offlining with partially hotplugged
> memory blocks possible: has_unmovable_pages() would otherwise bail out on
> such pages.
> 
> We want to avoid PG_reserved where possible and move to typed pages
> instead. Further, we want to enlighten memory offlining code more about
> PG_offline: offline pages in an online memory section. One example is
> handling managed page count adjustments in a cleaner way during memory
> offlining.
> 
> So let's initialize the pages with PG_offline instead of PG_reserved.
> generic_online_page()->__free_pages_core() will now clear that flag before
> handing that memory to the buddy.
> 
> Note that the page refcount is still 1 and would forbid offlining of such
> memory except when special care is taken during GOING_OFFLINE as
> currently only implemented by virtio-mem.
> 
> With this change, we can now get non-PageReserved() pages in the XEN
> balloon list. From what I can tell, that can already happen via
> decrease_reservation(), so that should be fine.
> 
> HV-balloon should not really observe a change: partial online memory
> blocks still cannot get surprise-offlined, because the refcount of these
> PageOffline() pages is 1.
> 
> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
> hotplugged pages are now PageOffline() instead of PageReserved() before
> they are handed over to the buddy.
> 
> We'll leave the ZONE_DEVICE case alone for now.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 27e3be75edcf7..0254059efcbe1 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -734,7 +734,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
>  /*
>   * Associate the pfn range with the given zone, initializing the memmaps
>   * and resizing the pgdat/zone data to span the added pages. After this
> - * call, all affected pages are PG_reserved.
> + * call, all affected pages are PageOffline().
>   *
>   * All aligned pageblocks are initialized to the specified migratetype
>   * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
> @@ -1100,8 +1100,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>  
>  	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>  
> -	for (i = 0; i < nr_pages; i++)
> -		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
> +	for (i = 0; i < nr_pages; i++) {
> +		struct page *page = pfn_to_page(pfn + i);
> +
> +		__ClearPageOffline(page);
> +		SetPageVmemmapSelfHosted(page);

So, refresh my memory here please.
AFAIR, those VmemmapSelfHosted pages were marked Reserved before, but now,
memmap_init_range() will not mark them reserved anymore.
I am not sure that is ok; I am worried about walkers getting this wrong.

We usually skip PageReserved pages in walkers because they are pages we
cannot deal with for those purposes, but with this change we will leak
PageVmemmapSelfHosted pages into them, and I am not sure we are ready for that.

Moreover, boot memmap pages are marked as PageReserved, which would now
be inconsistent with those added during hotplug operations.

All in all, I feel uneasy about this change.

-- 
Oscar Salvador
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 04:29:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 04:29:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736876.1142917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGWem-0003zf-8G; Mon, 10 Jun 2024 04:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736876.1142917; Mon, 10 Jun 2024 04:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGWem-0003zY-5V; Mon, 10 Jun 2024 04:29:20 +0000
Received: by outflank-mailman (input) for mailman id 736876;
 Mon, 10 Jun 2024 04:29:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AMvd=NM=suse.de=osalvador@srs-se1.protection.inumbo.net>)
 id 1sGWek-0003zS-4l
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 04:29:18 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ff8b8a70-26e1-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 06:29:17 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6FD4C1F76E;
 Mon, 10 Jun 2024 04:29:16 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 4E42513A7F;
 Mon, 10 Jun 2024 04:29:15 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id IEF7EBuBZmYbGAAAD6G6ig
 (envelope-from <osalvador@suse.de>); Mon, 10 Jun 2024 04:29:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff8b8a70-26e1-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1717993756; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dZ0M1mMkyIQjD9GHv7ILLIsJEt6S/v29bdhzDaDdHV0=;
	b=ouS1aMbVEZAQVmHrDj9M+xsCMzi5lTc6/jUPSYA+SppS+h5NEBGkqZ/xLzWibfr8WOv/o9
	cc8rOjHzymZljwOTKMr5dRcYBGcH8jm6EnAkfFpJkmjP6ECb0c8FE+m96t+996chAS7XNW
	bCBh8iHhcqz1FH8y+Mtdlf61/swOKLc=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1717993756;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dZ0M1mMkyIQjD9GHv7ILLIsJEt6S/v29bdhzDaDdHV0=;
	b=xMKg0LFzrRdT0JzUutNU0hVH0KB4afQ7p5Tw5Clmij6qnvVx6ObtCh65qQCvh1PUPif7Vq
	EyZx8fBScE2Fr4AQ==
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1717993756; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dZ0M1mMkyIQjD9GHv7ILLIsJEt6S/v29bdhzDaDdHV0=;
	b=ouS1aMbVEZAQVmHrDj9M+xsCMzi5lTc6/jUPSYA+SppS+h5NEBGkqZ/xLzWibfr8WOv/o9
	cc8rOjHzymZljwOTKMr5dRcYBGcH8jm6EnAkfFpJkmjP6ECb0c8FE+m96t+996chAS7XNW
	bCBh8iHhcqz1FH8y+Mtdlf61/swOKLc=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1717993756;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dZ0M1mMkyIQjD9GHv7ILLIsJEt6S/v29bdhzDaDdHV0=;
	b=xMKg0LFzrRdT0JzUutNU0hVH0KB4afQ7p5Tw5Clmij6qnvVx6ObtCh65qQCvh1PUPif7Vq
	EyZx8fBScE2Fr4AQ==
Date: Mon, 10 Jun 2024 06:29:13 +0200
From: Oscar Salvador <osalvador@suse.de>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	Eugenio =?iso-8859-1?Q?P=E9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 3/3] mm/memory_hotplug: skip
 adjust_managed_page_count() for PageOffline() pages when offlining
Message-ID: <ZmaBGSqchtEWnqM1@localhost.localdomain>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-4-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240607090939.89524-4-david@redhat.com>
X-Spam-Level: 
X-Spamd-Result: default: False [-4.30 / 50.00];
	BAYES_HAM(-3.00)[99.99%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	MIME_TRACE(0.00)[0:+];
	MISSING_XM_UA(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[23];
	RCVD_TLS_ALL(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	TO_DN_SOME(0.00)[];
	FROM_EQ_ENVFROM(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.de:email]
X-Spam-Score: -4.30
X-Spam-Flag: NO

On Fri, Jun 07, 2024 at 11:09:38AM +0200, David Hildenbrand wrote:
> We currently have a hack for virtio-mem in place to handle memory
> offlining with PageOffline pages for which we already adjusted the
> managed page count.
> 
> Let's enlighten memory offlining code so we can get rid of that hack,
> and document the situation.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Oscar Salvador <osalvador@suse.de>

-- 
Oscar Salvador
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 05:12:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 05:12:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736846.1142927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGXK1-0003FS-AI; Mon, 10 Jun 2024 05:11:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736846.1142927; Mon, 10 Jun 2024 05:11:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGXK1-0003FL-7M; Mon, 10 Jun 2024 05:11:57 +0000
Received: by outflank-mailman (input) for mailman id 736846;
 Sun, 09 Jun 2024 18:44:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zSGE=NL=gmail.com=jain.abhinav177@srs-se1.protection.inumbo.net>)
 id 1sGNWp-0001TT-R1
 for xen-devel@lists.xenproject.org; Sun, 09 Jun 2024 18:44:31 +0000
Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com
 [2607:f8b0:4864:20::429])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4db7e3a7-2690-11ef-90a2-e314d9c70b13;
 Sun, 09 Jun 2024 20:44:30 +0200 (CEST)
Received: by mail-pf1-x429.google.com with SMTP id
 d2e1a72fcca58-704261a1f67so887901b3a.3
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 11:44:30 -0700 (PDT)
Received: from dev0.. ([132.154.51.183]) by smtp.gmail.com with ESMTPSA id
 41be03b00d2f7-6e2b38167a2sm3714385a12.90.2024.06.09.11.44.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 11:44:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4db7e3a7-2690-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1717958668; x=1718563468; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=QUUN5jvMypW6P+Pp1YvyXgjvxQ678L4ijvc2XJHbh1A=;
        b=AGN9YmvmorPQ3+LWljXpwdOcvwq0QiuVBdNR2yfO/bOKI10cJC8w1eIXka62kxcsXy
         GL+bCzQZkqv8Vkfxxmr5O+RAVEkyh0EbK2Cn7zq4aNy1PIFgRgTaHfsOU27JyVAYCKw3
         0kWHJC0IRy81n0KVTd035PlVtWO2fqPeLgnM+s8o51Cxj6xcHIVkJ+b6R0Z0USOsMd53
         xtaGJXG3+XwR0Y9LYVnadZHp43oeUChwq3FIpaBh7XO7L109N/z++uJm98DOglmJljWb
         2OJ2+hkXSbTYoGnQj7wAPSl0aiVQTB6snmXXVd3P+Ir+ootBQzmoFYk6D4fCfcGRqE2k
         BWTg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1717958668; x=1718563468;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=QUUN5jvMypW6P+Pp1YvyXgjvxQ678L4ijvc2XJHbh1A=;
        b=KE68/JB9CpjH8QZDIoGXUHs6wyqOVnPdsf2omAhwDg/OMvvfoU93HJNLiO9SP3MZjq
         1y2PndJKt9nn2n8Hwr3oNoqDtFRUsI/fjEMmZmWH08X9WPfeNdEyo47bboVN3T+sexiX
         kp3zQvwyfx+G+xhnGfIQ8hIMlY7mW1JXo6YbTHv8ivTSJAVRx0eCpZYANuY5+hiK5N6R
         P7oLmEVQNPqYPVLPCHO9OXmrVSqxzbqnjuh5ooEo0bvLhnXZagaEle3BJnSs0W2SYy7x
         IAFTQ7l8ZGx4ehWhLQp51dL10xtE9/FPswLI3wjPltWpBPVs8qFVBA4Y43aMbZm6GbOq
         Evzw==
X-Forwarded-Encrypted: i=1; AJvYcCVdtdldg6e0zSpHfVn0efDYe6JiQG+BGyE97S7r9PiT+voLLarS3ETto1RcuVmsfGJvppIbsmefOdP+xZjTfRqk4Swrgn2UAbmF7RktXeI=
X-Gm-Message-State: AOJu0YwevMZKbXXWSD7iqaL4aFsj8MJ6bSVQhzxv1NSd2W5iihkBSbAe
	s8dTh+IIHUfKonqYDwWBpeAMq2v1dWpHAtyaqAYy7izB9rPky0Vg
X-Google-Smtp-Source: AGHT+IHKKiBJNRWQNNKgV62vwr/OlJ+hKJrqkVpM54C1atGqjWvBWdpuEMXjHXRaNIX0ePqrcNmTIg==
X-Received: by 2002:a05:6a00:17a3:b0:705:96b5:8bf2 with SMTP id d2e1a72fcca58-70596b596a2mr946312b3a.3.1717958668561;
        Sun, 09 Jun 2024 11:44:28 -0700 (PDT)
From: Abhinav Jain <jain.abhinav177@gmail.com>
To: jgross@suse.com,
	sstabellini@kernel.org,
	oleksandr_tyshchenko@epam.com,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: skhan@linuxfoundation.org,
	javier.carrasco.cruz@gmail.com,
	jain.abhinav177@gmail.com
Subject: [PATCH] xen: xen-pciback: Export a bridge and all its children as per TODO
Date: Sun,  9 Jun 2024 18:44:10 +0000
Message-Id: <20240609184410.53500-1-jain.abhinav177@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Check if the device is a bridge.
If it is a bridge, iterate over all its child devices and export them.
Log an error with the device details if the export fails for any
particular device.
The export error string is split across lines, as I could see several
other such occurrences in the file.
Please let me know if I should change it in some way.

Signed-off-by: Abhinav Jain <jain.abhinav177@gmail.com>
---
 drivers/xen/xen-pciback/xenbus.c | 39 +++++++++++++++++++++++++-------
 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
index b11e401f1b1e..d15271d33ad6 100644
--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -258,14 +258,37 @@ static int xen_pcibk_export_device(struct xen_pcibk_device *pdev,
 		xen_register_device_domain_owner(dev, pdev->xdev->otherend_id);
 	}
 
-	/* TODO: It'd be nice to export a bridge and have all of its children
-	 * get exported with it. This may be best done in xend (which will
-	 * have to calculate resource usage anyway) but we probably want to
-	 * put something in here to ensure that if a bridge gets given to a
-	 * driver domain, that all devices under that bridge are not given
-	 * to other driver domains (as he who controls the bridge can disable
-	 * it and stop the other devices from working).
-	 */
+	/* Check if the device is a bridge and export all its children */
> +	if ((dev->hdr_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
+		struct pci_dev *child = NULL;
+
+		/* Iterate over all the devices in this bridge */
+		list_for_each_entry(child, &dev->subordinate->devices,
+				bus_list) {
+			dev_dbg(&pdev->xdev->dev,
+				"exporting child device %04x:%02x:%02x.%d\n",
> +				pci_domain_nr(child->bus), child->bus->number,
+				PCI_SLOT(child->devfn),
+				PCI_FUNC(child->devfn));
+
+			err = xen_pcibk_export_device(pdev,
> +						      pci_domain_nr(child->bus),
+						      child->bus->number,
+						      PCI_SLOT(child->devfn),
+						      PCI_FUNC(child->devfn),
+						      devid);
+			if (err) {
+				dev_err(&pdev->xdev->dev,
+					"failed to export child device : "
+					"%04x:%02x:%02x.%d\n",
> +					pci_domain_nr(child->bus),
+					child->bus->number,
+					PCI_SLOT(child->devfn),
+					PCI_FUNC(child->devfn));
+				goto out;
+			}
+		}
+	}
 out:
 	return err;
 }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:39:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:39:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736892.1142937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYg6-0007IF-F9; Mon, 10 Jun 2024 06:38:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736892.1142937; Mon, 10 Jun 2024 06:38:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYg6-0007I8-Cf; Mon, 10 Jun 2024 06:38:50 +0000
Received: by outflank-mailman (input) for mailman id 736892;
 Mon, 10 Jun 2024 06:38:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGYg5-0007I2-7c
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:38:49 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 170a0994-26f4-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 08:38:47 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a6266ffdba8so305456866b.1
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:38:47 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c5d21517dsm4990879a12.69.2024.06.09.23.38.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 09 Jun 2024 23:38:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 170a0994-26f4-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718001527; x=1718606327; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=m3481wDw6uVUVzHCqP2CpdcoJMJKK9Rm15AkwHAnuUE=;
        b=INHQOAU09u6oplfU13WKhQSpaPl2yLsDyD7Q3JdDxSHkpDx6clCLUqCj5WJ/PKgO6Y
         qZoVXLPVBa3CIJ/8BTWlUHL49Xaszpt9EeSYq7J1lTaTTclzwsSww3Om1xMkXWljvexy
         fgRTHfybVWfbLVwAAuhIWp0JA08LktWzQbyCVtTkzVLYMbALqBYXTW1Ig2GFr36u0cdN
         Kbv2m0YWJVrmxBNU/hiqWYVwOilVWlvKzkujrcGGr5Fa50cRSLluPF57HJBNQPBcVJOl
         Cz2H2bv7YVxqZgWOxtOJI4jbtfSXt1lNOw/1bfVbyg6jPMVjR+M27IpgM9wrpMLt4O9q
         wqtg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718001527; x=1718606327;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=m3481wDw6uVUVzHCqP2CpdcoJMJKK9Rm15AkwHAnuUE=;
        b=TZ2LfAexPOwIg3rWe1DesLCNWO9fIfj8ae+YGnXIY1Yv+Xmu00e7zZZfyPV9F/UJFk
         Gfi+DndYDzTYeuHTk0mqOzxp8F7mWJid7M3igNZvT9XNJBsJITBGNLDM2JZHE08yMK0r
         9h2CiVpkdaPiu1BABUf2DG7FOM5gfs3pxXz7GRtJUmKgE0GJPrQOawoak+SOBp1F7eLZ
         fz9OIWgh0rcocL9FyFXGgRUMkl0qEImxCh7g6wCe+FlmKQHPRyEvw3R/TrMek5q2lL+6
         7fkp2480clqx3n6p+By5HG63UzFoYuNNxmAPrpCbCVwwBNMyqhrklHp0i2WqFnfZgvWI
         iMXw==
X-Gm-Message-State: AOJu0YwWn305y9PPN0HLiniFsJIYjm8MHIYwpxPp+lCbcArCn8tOY855
	2w5JoljSNzdbQGR7SWTkzrWWuHHCJIe6IBx+hcSwCYfABYLD6KtTGTa3m8i6aro2wfZn70LNXTU
	=
X-Google-Smtp-Source: AGHT+IFfSibWKFizkFzc5f03K+R4eMM/yllM5gZw32omBTCqlxCOK4JmT7NL/fOb/ulgVUzEYbmvBQ==
X-Received: by 2002:a17:906:71c1:b0:a6a:6ed0:fbd7 with SMTP id a640c23a62f3a-a6cd7891aeemr528066266b.37.1718001526666;
        Sun, 09 Jun 2024 23:38:46 -0700 (PDT)
Message-ID: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>
Date: Mon, 10 Jun 2024 08:38:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Daniel Smith <dpsmith@apertussolutions.com>,
 Marek Marczykowski <marmarek@invisiblethingslab.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] MAINTAINERS: alter EFI section
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

To get past the recurring friction on the approach to take wrt
workarounds needed for various firmware flaws, I'm stepping down as the
maintainer of our code interfacing with EFI firmware. Two new
maintainers are being introduced in my place.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
For the new maintainers, here's a 1st patch to consider right away:
https://lists.xen.org/archives/html/xen-devel/2024-03/msg00931.html.

--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -308,7 +308,9 @@ F:	automation/eclair_analysis/
 F:	automation/scripts/eclair
 
 EFI
-M:	Jan Beulich <jbeulich@suse.com>
+M:	Daniel P. Smith <dpsmith@apertussolutions.com>
+M:	Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
+R:	Jan Beulich <jbeulich@suse.com>
 S:	Supported
 F:	xen/arch/x86/efi/
 F:	xen/arch/x86/include/asm/efi*.h


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:53:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736902.1142978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYui-0002W2-E3; Mon, 10 Jun 2024 06:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736902.1142978; Mon, 10 Jun 2024 06:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYui-0002Vt-AS; Mon, 10 Jun 2024 06:53:56 +0000
Received: by outflank-mailman (input) for mailman id 736902;
 Mon, 10 Jun 2024 06:53:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYug-0001oX-J9
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:53:54 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32e3b659-26f6-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 08:53:53 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6ef8e62935so228092266b.3
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:53 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32e3b659-26f6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002432; x=1718607232; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XH0hx5FhOx9XD5Quim9d/dWx8ptFKhc8V8iHDPJ7zTg=;
        b=lLIYtpGn4JjIP1moIj/qS+1o2d58yQNtFpIXoNNsDH73mx4rqOOLqxEPI+qgiSI7hC
         95AIt8fxyhj1lXu98iKMDINzdvgoKYY+fiIY3AZdyJ5EFS4uLg9ZjqrQWtEX7O5xOUny
         SRaD0rJqX4Qlc2cICZ0XLOyjp6fI7x5DbqzT2hT+Z+5l7QFQlg/lwmThGnUrbblblY6F
         DqXpVYSwsHJLLK9uPreBl3yZi3XRzE+FjaeRgPhTVDULIuk5L+UOhXjHJ2Zs9cHzu9nW
         Dl/F0wh+QsJ5PLdVryLPpf4FeGRoObmzdh0s58EOPTzXp+WerohVtYyDpFO2J3IFDGHR
         SbVA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002432; x=1718607232;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=XH0hx5FhOx9XD5Quim9d/dWx8ptFKhc8V8iHDPJ7zTg=;
        b=k0fhwsZ3LSSf4kH3KlL0gF+YB8GHj5aoN1nIQX0NEnsxFw6+IcNaE9NtBorpDpvT8r
         USSwMu/4Pmoar0FIjc4tPn8dI1RUK/iiDkgKaolDgp4zKTeT3cJDVwMtGbx26xqpNQiW
         QL/LMBvQ730uGSwANllHM0wkrjmNxFZmsZQIK7WWNUhzhYR8W++9tX/VyzmaWA44iri1
         3TXThWe0cBAiIUGHholX1exJmPI+OcDXZKfc6JOOMCZ3yJrsv2O+YPW8CL6tnt9I2Xao
         2YdyjuZ4vRFmOE7IdgZPkDL3lxPjkL62HqU8ZI+R4A4W4ZW3Y1t8K/WfpLAhrTSImJnu
         k4zg==
X-Gm-Message-State: AOJu0Yz7trcAhjQQId7k9j83LHThx1K87wjEoTE5oAfwJNSMLlBv2JT4
	m2qKbggWa8GgtAmrxmKjtpeF4HcpVTLMeYGy4SoHgCKpBJfRd38aP0a1HCpEVYpb+ApopyuOvjo
	HWoI=
X-Google-Smtp-Source: AGHT+IFarHqSdGAqU5TZnBn1sJgCv2O8vReWZ1tFvpOuZd/60yHNPdDl8basOLuUVpHD3bVWJ7BmSg==
X-Received: by 2002:a17:906:80ca:b0:a6f:1004:dc30 with SMTP id a640c23a62f3a-a6f1004e1f0mr310918866b.65.1718002431882;
        Sun, 09 Jun 2024 23:53:51 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v6 3/7] xen/arm: ffa: simplify ffa_handle_mem_share()
Date: Mon, 10 Jun 2024 08:53:39 +0200
Message-Id: <20240610065343.2594943-4-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Simplify ffa_handle_mem_share() by removing the start_page_idx and
last_page_idx parameters from get_shm_pages() and checking that the number
of pages matches expectations at the end of get_shm_pages().

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
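As an illustration of the pattern this patch adopts, here is a minimal,
self-contained sketch (not the actual Xen code: range_t, SHM_OK,
SHM_INVALID, count_and_check() and example_ranges are hypothetical names).
It models counting pages while walking the ranges and validating the total
once at the end, instead of returning a last-page index for the caller to
check:

```c
#include <assert.h>

/* Hypothetical stand-in for an FF-A address range descriptor. */
typedef struct {
    unsigned int page_count;  /* pages covered by this range */
} range_t;

enum { SHM_OK = 0, SHM_INVALID = -2 };

/* Walk the ranges, accumulate the page count, and validate it at
 * the end against the expected total, as the patch does in
 * get_shm_pages(). */
static int count_and_check(const range_t *ranges, unsigned int range_count,
                           unsigned int expected_pages)
{
    unsigned int pg_idx = 0;

    for ( unsigned int n = 0; n < range_count; n++ )
        pg_idx += ranges[n].page_count;

    /* The ranges must add up */
    if ( pg_idx < expected_pages )
        return SHM_INVALID;

    return SHM_OK;
}

/* Sample data: two ranges covering 3 + 5 = 8 pages in total. */
static const range_t example_ranges[2] = { { 3 }, { 5 } };
```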
 xen/arch/arm/tee/ffa_shm.c | 18 ++++++------------
 1 file changed, 6 insertions(+), 12 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_shm.c b/xen/arch/arm/tee/ffa_shm.c
index 75a5b66aeb4c..370d83ec5cf8 100644
--- a/xen/arch/arm/tee/ffa_shm.c
+++ b/xen/arch/arm/tee/ffa_shm.c
@@ -159,10 +159,9 @@ static int32_t ffa_mem_reclaim(uint32_t handle_lo, uint32_t handle_hi,
  */
 static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
                          const struct ffa_address_range *range,
-                         uint32_t range_count, unsigned int start_page_idx,
-                         unsigned int *last_page_idx)
+                         uint32_t range_count)
 {
-    unsigned int pg_idx = start_page_idx;
+    unsigned int pg_idx = 0;
     gfn_t gfn;
     unsigned int n;
     unsigned int m;
@@ -191,7 +190,9 @@ static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
         }
     }
 
-    *last_page_idx = pg_idx;
+    /* The ranges must add up */
+    if ( pg_idx < shm->page_count )
+            return FFA_RET_INVALID_PARAMETERS;
 
     return FFA_RET_OK;
 }
@@ -460,7 +461,6 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
     struct domain *d = current->domain;
     struct ffa_ctx *ctx = d->arch.tee;
     struct ffa_shm_mem *shm = NULL;
-    unsigned int last_page_idx = 0;
     register_t handle_hi = 0;
     register_t handle_lo = 0;
     int ret = FFA_RET_DENIED;
@@ -570,15 +570,9 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
         goto out;
     }
 
-    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count,
-                        0, &last_page_idx);
+    ret = get_shm_pages(d, shm, region_descr->address_range_array, range_count);
     if ( ret )
         goto out;
-    if ( last_page_idx != shm->page_count )
-    {
-        ret = FFA_RET_INVALID_PARAMETERS;
-        goto out;
-    }
 
     /* Note that share_shm() uses our tx buffer */
     spin_lock(&ffa_tx_buffer_lock);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:53:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736899.1142947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYuf-0001ou-Pg; Mon, 10 Jun 2024 06:53:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736899.1142947; Mon, 10 Jun 2024 06:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYuf-0001on-NB; Mon, 10 Jun 2024 06:53:53 +0000
Received: by outflank-mailman (input) for mailman id 736899;
 Mon, 10 Jun 2024 06:53:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYue-0001oX-5t
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:53:52 +0000
Received: from mail-ej1-x641.google.com (mail-ej1-x641.google.com
 [2a00:1450:4864:20::641])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2fd63ba6-26f6-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 08:53:48 +0200 (CEST)
Received: by mail-ej1-x641.google.com with SMTP id
 a640c23a62f3a-a6f177b78dcso95409266b.1
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:47 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fd63ba6-26f6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002427; x=1718607227; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=8ydBCMucw8T2ZNRmg/cuXstuBHa/XNErLk9auQP/k8g=;
        b=dHkH9gu+PexT7opiJbEyLr7t2r5W8m1YZMJrAlsdV6iINOJ9+sW+3glTj5HBu0uTRJ
         CdVslleygNUnzYIoxjWP91foO2dj2mXnT9fH/oAttlhT5ZGYULXh0X7yCoDUxlf8Y4J5
         cGZ7/o9eVBjNlt1TL8nZp0w2ijsMiWo29eaUziIcZogDV0ohzuyF5I2erJ7+PwWHQUGg
         CajP477rtPcV5qSg4Si+OrE4YXrmyugbNNhIGOIObFejrnrygZ524LUXFP/ZEt8vQ+3i
         wIp7cnJQ48wdkDPdHhb63RufWwlTXukXIg8ybVcL3eCChHyyqVYzJ2CCeLlXt/SzwkoG
         fV6A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002427; x=1718607227;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=8ydBCMucw8T2ZNRmg/cuXstuBHa/XNErLk9auQP/k8g=;
        b=Sbnm9VExG50m2jYwXxd12wZQfOHgFpXofwxqMkSbO6DtTQ2cal9oucfa1zLbt4Kqt7
         +1LZVzZrHBz7ZoHY9XLLLB4HjZ2S79ElzX5SWskfnZvQKLVL5yqe4k+utU8GV9XeOvG/
         erSRzQFOwhFWIqNFrcavemvxhxU5UH10cOcwTRUtlxDcZtyx8MIKmvG2Z0w03hVTpHw4
         oS1dqFaYCUNITELEGxk0fFIeQ63v8KYHwTm6+A56Md9Vo4iz6aLVOuwYA5/FHo1e4qGs
         D6cSTeLiZnmK/fLzlRHqs819bdMimyxHWAkb5MUNDOnh1V3+hwCygc/ggCcqjtgTfj5X
         TkwA==
X-Gm-Message-State: AOJu0Yx8r4aBAEvLxV0NA2XgXVv1Xj3WkQg8xWrmDf9IkaJse++hnJRQ
	o5Wns2P2nsXaC0M9DNH2MJDxQVFNkQXgbft5XTXsXTt7bO/xuccA9/h+dWQHtQx8Z2gdu7YIAhf
	8PpYVKw==
X-Google-Smtp-Source: AGHT+IE83438JhKbRk+s8D/haJ85GMQd9q5RFeQuWrLdFH3yc7ZyFidDJlQegYEqPB1u0RzRrsecew==
X-Received: by 2002:a17:906:f20b:b0:a6f:268a:1fc4 with SMTP id a640c23a62f3a-a6f268a2031mr45606166b.61.1718002426704;
        Sun, 09 Jun 2024 23:53:46 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v6 0/7] FF-A notifications
Date: Mon, 10 Jun 2024 08:53:36 +0200
Message-Id: <20240610065343.2594943-1-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

This patch set adds support for FF-A notifications. We only support
global notifications; per-vCPU notifications remain unsupported.

The first three patches are further cleanup and can be merged before the
rest if desired.

A physical SGI is used to make Xen aware of pending FF-A notifications. The
physical SGI is selected by the SPMC in the secure world. Since it must not
already be used by Xen, the SPMC is in practice forced to donate one of the
secure SGIs, but that's normally not a problem. The SGI handling in Xen is
updated to support registration of handlers for SGIs that aren't statically
assigned, that is, SGI IDs above GIC_SGI_MAX.
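The dispatch idea above can be sketched as follows. This is an illustrative
model only, not the actual Xen API: NR_SGI, SGI_STATIC_MAX, sgi_handler_t,
register_dynamic_sgi(), dispatch_sgi() and record_sgi() are hypothetical
names. SGI IDs below the static maximum belong to fixed, built-in handlers,
while higher IDs (such as one donated by the SPMC) go through a table of
dynamically registered handlers:

```c
#define NR_SGI         16  /* GIC architectural SGI range: IDs 0-15 */
#define SGI_STATIC_MAX  4  /* IDs below this are statically assigned */

typedef void (*sgi_handler_t)(unsigned int sgi);

static sgi_handler_t dynamic_handlers[NR_SGI];

/* Claim a non-static SGI ID; fails for static IDs, out-of-range IDs,
 * or IDs already taken. */
static int register_dynamic_sgi(unsigned int sgi, sgi_handler_t fn)
{
    if ( sgi < SGI_STATIC_MAX || sgi >= NR_SGI || dynamic_handlers[sgi] )
        return -1;
    dynamic_handlers[sgi] = fn;
    return 0;
}

/* Route a dynamically assigned SGI to its handler, if any.
 * Returns 0 if a handler ran, -1 otherwise. */
static int dispatch_sgi(unsigned int sgi)
{
    if ( sgi >= SGI_STATIC_MAX && sgi < NR_SGI && dynamic_handlers[sgi] )
    {
        dynamic_handlers[sgi](sgi);
        return 0;
    }
    return -1;
}

/* A trivial handler that records which SGI fired, for demonstration. */
static unsigned int last_sgi;
static void record_sgi(unsigned int sgi) { last_sgi = sgi; }
```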

The patch "xen/arm: add and call init_tee_secondary()" provides a hook for
registering the needed per-CPU interrupt handler in "xen/arm: ffa: support
notification".

The patch "xen/arm: add and call tee_free_domain_ctx()" provides a hook for
later freeing of the TEE context. This hook is used in "xen/arm: ffa:
support notification" and avoids the problem of the TEE context being freed
while we still need to access it when handling a Schedule Receiver interrupt. It
was suggested as an alternative in [1] that the TEE context could be freed
from complete_domain_destroy().

[1] https://lore.kernel.org/all/CAHUa44H4YpoxYT7e6WNH5XJFpitZQjqP9Ng4SmTy4eWhyN+F+w@mail.gmail.com/

Thanks,
Jens

v5->v6:
- Added Bertrand's R-B for "xen/arm: add and call tee_free_domain_ctx()",
  "xen/arm: add and call init_tee_secondary()"
- Added Julien's A-B for "xen/arm: allow dynamically assigned SGI handlers"
- Updated "xen/arm: ffa: support notification", details in the patch

v4->v5:
- Added two new patches "xen/arm: add and call init_tee_interrupt()" and
  "xen/arm: add and call tee_free_domain_ctx()"
- Updated "xen/arm: ffa: support notification", details in the patch

v3->v4:
- "xen/arm: ffa: support notification" and
  "xen/arm: allow dynamically assigned SGI handlers" updated as requested,
  details in each patch.

v2->v3:
- "xen/arm: ffa: support notification" and
  "xen/arm: allow dynamically assigned SGI handlers" updated as requested,
  details in each patch.

v1->v2:
- "xen/arm: ffa: support notification" and
  "xen/arm: allow dynamically assigned SGI handlers" updated as requested,
  details in each patch.
- Added Bertrand's R-B for "xen/arm: ffa: refactor ffa_handle_call()",
  "xen/arm: ffa: use ACCESS_ONCE()", and
  "xen/arm: ffa: simplify ffa_handle_mem_share()"



Jens Wiklander (7):
  xen/arm: ffa: refactor ffa_handle_call()
  xen/arm: ffa: use ACCESS_ONCE()
  xen/arm: ffa: simplify ffa_handle_mem_share()
  xen/arm: allow dynamically assigned SGI handlers
  xen/arm: add and call init_tee_secondary()
  xen/arm: add and call tee_free_domain_ctx()
  xen/arm: ffa: support notification

 xen/arch/arm/domain.c              |   1 +
 xen/arch/arm/gic.c                 |  12 +-
 xen/arch/arm/include/asm/gic.h     |   2 +-
 xen/arch/arm/include/asm/tee/tee.h |  14 +
 xen/arch/arm/irq.c                 |  18 +-
 xen/arch/arm/smpboot.c             |   2 +
 xen/arch/arm/tee/Makefile          |   1 +
 xen/arch/arm/tee/ffa.c             | 105 +++++--
 xen/arch/arm/tee/ffa_notif.c       | 425 +++++++++++++++++++++++++++++
 xen/arch/arm/tee/ffa_partinfo.c    |   9 +-
 xen/arch/arm/tee/ffa_private.h     |  56 +++-
 xen/arch/arm/tee/ffa_shm.c         |  33 +--
 xen/arch/arm/tee/tee.c             |  14 +-
 xen/include/public/arch-arm.h      |  14 +
 14 files changed, 643 insertions(+), 63 deletions(-)
 create mode 100644 xen/arch/arm/tee/ffa_notif.c

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:53:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736903.1142984 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYui-0002Zg-QG; Mon, 10 Jun 2024 06:53:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736903.1142984; Mon, 10 Jun 2024 06:53:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYui-0002ZW-Kd; Mon, 10 Jun 2024 06:53:56 +0000
Received: by outflank-mailman (input) for mailman id 736903;
 Mon, 10 Jun 2024 06:53:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYuh-0002SJ-J0
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:53:55 +0000
Received: from mail-ej1-x641.google.com (mail-ej1-x641.google.com
 [2a00:1450:4864:20::641])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3391d7a6-26f6-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 08:53:54 +0200 (CEST)
Received: by mail-ej1-x641.google.com with SMTP id
 a640c23a62f3a-a63359aaaa6so594625266b.2
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:54 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3391d7a6-26f6-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002433; x=1718607233; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JceofD7RBMGtBhxQJQYStOTtcfNOzjMXKDJSoSIwmug=;
        b=JaWqYG55WainfdKf+S2kg5CQ07FYmhbpwoj0U4RaPRbWap2Ml+0/l7yejbeTiIxd5O
         0lsUwdYQ+U7xGeGRiA6/Q+T7TnJLAZL8IqJsREGYbqqtxw4s+Swm9rF7b0yaMZXNY64p
         CqqSg6Q7pf2X9ICvZRS5rd78KC6j6F8UMmZx7t0a9HZWj7cctJ8oeXwTBuQJBlXpVyEu
         25wQan0tfVwqUmTQVTOWHdwahyTaezxlaBj0LVNz1lJhI6uQRJxuEycUgPHPJxpWoXoF
         lgEYCr7sDzCnmwEfhNyl108f70Njx0cAlNLGLwuG4xPVGSnwKvzc2vilpVJFCR/Sf0VE
         oPJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002433; x=1718607233;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=JceofD7RBMGtBhxQJQYStOTtcfNOzjMXKDJSoSIwmug=;
        b=UNyhsPTOP3G0fgmNM9MD5Kl1Es69F5YMNkH2jrxsMcNQNqpfUvIymnVNkURpUjb269
         wvVdBvQCARlkVQp5ztZNXfbk0YzrpAASQ1cOBSHSPoSOElFz8KLGU3qfJsRhJuucelWL
         HZIrPfLh6ts08kTk1kBqtosrV4mR9Wc3oK+FaZUgdDbtRX7ee1K3/UqptxdISRblQrGJ
         ANaKE4DRy8siHr2R0t0pa7rl/J9KMlWBe1KKv9H71rSVcr87t8Mrpb1Vi+efDGL1DasO
         AuPEGnfxr0k0iyFjQu1N1gV6OrFh3ggvPJLPcA78g+Lg55/0Y+WNS1Bt5PraeE3CTcDL
         Mviw==
X-Gm-Message-State: AOJu0YzfOgGH1k2jZVwKpvmL5DCKU6m26jRI0gl4acXs8eEAKrfV+Bh5
	q7PXfXMNos6w6dvEt45T5U/uTGW+sU2RzNRODsSmYLcQaMKCmHgm8XONQQj1Dcv5d+Xtg63Pu9n
	4YUf5wA==
X-Google-Smtp-Source: AGHT+IENIvSGyjGpOz9XjYX5uB8uPLMvPPR9yA55yF5Sw+1pXuvS4Hh317Vncdxx9CfRgcNU+2A6zw==
X-Received: by 2002:a17:906:1c87:b0:a6f:1106:5dc7 with SMTP id a640c23a62f3a-a6f11065e29mr239543466b.5.1718002433493;
        Sun, 09 Jun 2024 23:53:53 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [XEN PATCH v6 4/7] xen/arm: allow dynamically assigned SGI handlers
Date: Mon, 10 Jun 2024 08:53:40 +0200
Message-Id: <20240610065343.2594943-5-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update the code so that request_irq() can be used with a dynamically
assigned SGI IRQ as input. This prepares for a later patch where an FF-A schedule
receiver interrupt handler is installed for an SGI generated by the
secure world.

From the Arm Base System Architecture v1.0C [1]:
"The system shall implement at least eight Non-secure SGIs, assigned to
interrupt IDs 0-7."

gic_route_irq_to_xen() doesn't call gic_set_irq_type() for SGIs since they
are always edge triggered.

gic_interrupt() is updated to route the dynamically assigned SGIs to
do_IRQ() instead of do_sgi(). The latter still handles the statically
assigned SGI handlers like for instance GIC_SGI_CALL_FUNCTION.

[1] https://developer.arm.com/documentation/den0094/

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Acked-by: Julien Grall <jgrall@amazon.com>
---
v3->v4
- Use IRQ_TYPE_EDGE_RISING instead of DT_IRQ_TYPE_EDGE_RISING

v2->v3
- Rename GIC_SGI_MAX to GIC_SGI_STATIC_MAX and rename do_sgi() to
  do_static_sgi()
- Update comment in setup_irq() to mention that SGI irq_desc is banked
- Add ASSERT() in do_IRQ() that the irq isn't an SGI before calling
  vgic_inject_irq()
- Initialize local_irqs_type[] range for SGIs as IRQ_TYPE_EDGE_RISING
- Add link to the Arm Base System Architecture v1.0C

v1->v2
- Update patch description as requested
---
 xen/arch/arm/gic.c             | 12 +++++++-----
 xen/arch/arm/include/asm/gic.h |  2 +-
 xen/arch/arm/irq.c             | 18 ++++++++++++++----
 3 files changed, 22 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index b3467a76ae75..3eaf670fd731 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -38,7 +38,7 @@ const struct gic_hw_operations *gic_hw_ops;
 static void __init __maybe_unused build_assertions(void)
 {
     /* Check our enum gic_sgi only covers SGIs */
-    BUILD_BUG_ON(GIC_SGI_MAX > NR_GIC_SGI);
+    BUILD_BUG_ON(GIC_SGI_STATIC_MAX > NR_GIC_SGI);
 }
 
 void register_gic_ops(const struct gic_hw_operations *ops)
@@ -117,7 +117,9 @@ void gic_route_irq_to_xen(struct irq_desc *desc, unsigned int priority)
 
     desc->handler = gic_hw_ops->gic_host_irq_type;
 
-    gic_set_irq_type(desc, desc->arch.type);
+    /* SGIs are always edge-triggered, so there is no need to set it */
+    if ( desc->irq >= NR_GIC_SGI)
+        gic_set_irq_type(desc, desc->arch.type);
     gic_set_irq_priority(desc, priority);
 }
 
@@ -322,7 +324,7 @@ void gic_disable_cpu(void)
     gic_hw_ops->disable_interface();
 }
 
-static void do_sgi(struct cpu_user_regs *regs, enum gic_sgi sgi)
+static void do_static_sgi(struct cpu_user_regs *regs, enum gic_sgi sgi)
 {
     struct irq_desc *desc = irq_to_desc(sgi);
 
@@ -367,7 +369,7 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
         /* Reading IRQ will ACK it */
         irq = gic_hw_ops->read_irq();
 
-        if ( likely(irq >= 16 && irq < 1020) )
+        if ( likely(irq >= GIC_SGI_STATIC_MAX && irq < 1020) )
         {
             isb();
             do_IRQ(regs, irq, is_fiq);
@@ -379,7 +381,7 @@ void gic_interrupt(struct cpu_user_regs *regs, int is_fiq)
         }
         else if ( unlikely(irq < 16) )
         {
-            do_sgi(regs, irq);
+            do_static_sgi(regs, irq);
         }
         else
         {
diff --git a/xen/arch/arm/include/asm/gic.h b/xen/arch/arm/include/asm/gic.h
index 03f209529b13..541f0eeb808a 100644
--- a/xen/arch/arm/include/asm/gic.h
+++ b/xen/arch/arm/include/asm/gic.h
@@ -285,7 +285,7 @@ enum gic_sgi {
     GIC_SGI_EVENT_CHECK,
     GIC_SGI_DUMP_STATE,
     GIC_SGI_CALL_FUNCTION,
-    GIC_SGI_MAX,
+    GIC_SGI_STATIC_MAX,
 };
 
 /* SGI irq mode types */
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index e5fb26a3de2d..c60502444ccf 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -142,7 +142,13 @@ void __init init_IRQ(void)
 
     spin_lock(&local_irqs_type_lock);
     for ( irq = 0; irq < NR_LOCAL_IRQS; irq++ )
-        local_irqs_type[irq] = IRQ_TYPE_INVALID;
+    {
+        /* SGIs are always edge-triggered */
+        if ( irq < NR_GIC_SGI )
+            local_irqs_type[irq] = IRQ_TYPE_EDGE_RISING;
+        else
+            local_irqs_type[irq] = IRQ_TYPE_INVALID;
+    }
     spin_unlock(&local_irqs_type_lock);
 
     BUG_ON(init_local_irq_data(smp_processor_id()) < 0);
@@ -214,9 +220,12 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
 
     perfc_incr(irqs);
 
-    ASSERT(irq >= 16); /* SGIs do not come down this path */
+    /* Statically assigned SGIs do not come down this path */
+    ASSERT(irq >= GIC_SGI_STATIC_MAX);
 
-    if ( irq < 32 )
+    if ( irq < NR_GIC_SGI )
+        perfc_incr(ipis);
+    else if ( irq < NR_GIC_LOCAL_IRQS )
         perfc_incr(ppis);
     else
         perfc_incr(spis);
@@ -250,6 +259,7 @@ void do_IRQ(struct cpu_user_regs *regs, unsigned int irq, int is_fiq)
          * The irq cannot be a PPI, we only support delivery of SPIs to
          * guests.
          */
+        ASSERT(irq >= NR_GIC_SGI);
         vgic_inject_irq(info->d, NULL, info->virq, true);
         goto out_no_end;
     }
@@ -386,7 +396,7 @@ int setup_irq(unsigned int irq, unsigned int irqflags, struct irqaction *new)
     {
         gic_route_irq_to_xen(desc, GIC_PRI_IRQ);
         /* It's fine to use smp_processor_id() because:
-         * For PPI: irq_desc is banked
+         * For SGI and PPI: irq_desc is banked
          * For SPI: we don't care for now which CPU will receive the
          * interrupt
          * TODO: Handle case where SPI is setup on different CPU than
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:53:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736900.1142954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYug-0001rg-2z; Mon, 10 Jun 2024 06:53:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736900.1142954; Mon, 10 Jun 2024 06:53:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYuf-0001r0-Tf; Mon, 10 Jun 2024 06:53:53 +0000
Received: by outflank-mailman (input) for mailman id 736900;
 Mon, 10 Jun 2024 06:53:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYue-0001oX-DX
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:53:52 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30aa8c65-26f6-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 08:53:49 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-a6efae34c83so189671266b.0
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:49 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30aa8c65-26f6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002428; x=1718607228; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2VzeWK8x82YxZO3FACx+2RlqRVfXhRtSExk4PP34ai4=;
        b=cYsfA46SHwpPnhzK1vOVQX3W3bmDYPSgg4XnVWEIWo9UKzr2jjItSjdHom0gRbXmlk
         a/t4kOXTzxmY/jdrvGS69i/5UUZ1V/rSnZz/QF+QUTBhBXuhkZ0d2fEjQCVAr+MUwLss
         6nPc6UuCRWX4MHH74B3MJgWp9DdsZp0+anxG1maJ8B+fdRXNCPMAcYLnmpnI93iX6jOC
         moluf+21qrjf0g/jCb1BBgrxRXsqiue9a95VESH8/4iaZTv2hgtv47thf3tEGxpqDOVW
         sQ5P8bamOshepzCCg9AfzuHVnsm84S6TdFI3Czv0OoqEHPFSvVzBbtUXinDZ3NejeD7z
         RRCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002428; x=1718607228;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2VzeWK8x82YxZO3FACx+2RlqRVfXhRtSExk4PP34ai4=;
        b=e9ZzU1FNeb3dMYgHMbJkn7TRNDMpgXZ5ZbRyeqX0/8FY+TyptNNO9OkSqmYI2yu7yW
         CdvKbysUn2b5uoTX1p1V/YdFu+KQDsC+v2YWhUEyLJNH71DD6QwE+PmgCFrjgBgbpk+Q
         3Lxu9Ynf22Alf+7BDv1GG5iYV9Ubq1ENuwUi/W8kiDb7lSvPlVapLigOllD7TqVQzDcO
         QEY+28H5iAbielTPk5lxLM19sxIb4rRkU0SZn12O5h3+7fk4kUUwlwIG7KGM0J/s9jZI
         WE458BarE5fvPSc2f4KISkVitipulTZEX5+AMsBgknhyK3lUoJADn4EstzgeKf7Npka8
         5+0Q==
X-Gm-Message-State: AOJu0Yx7Mw7fYYGCY5696jAEGOM0GgmpHQ9BjH5j1mTqTeSySrFAmIwI
	Z0url6ngzjd6QvnTJn/Tvq9zY35MmcIfvl7Mv0YBeFFuZzA66WP96sF65afqJf6mRKuWaNFjwxs
	l59A=
X-Google-Smtp-Source: AGHT+IFksq917houloNmrm5lAA40EHRgPJB3pWIU+MWKx9Za4TcrtnD1C83f+jBsjtwQ38ngh3ac0g==
X-Received: by 2002:a17:906:ca0f:b0:a6f:1d50:bf1e with SMTP id a640c23a62f3a-a6f1d50c0a1mr115807866b.43.1718002428347;
        Sun, 09 Jun 2024 23:53:48 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v6 1/7] xen/arm: ffa: refactor ffa_handle_call()
Date: Mon, 10 Jun 2024 08:53:37 +0200
Message-Id: <20240610065343.2594943-2-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Refactor the large switch block in ffa_handle_call() to use common code
for the simple cases where the result is either an error code or success
with no further parameters.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/tee/ffa.c | 30 ++++++++++--------------------
 1 file changed, 10 insertions(+), 20 deletions(-)

diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 8665201e34a9..5209612963e1 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -273,18 +273,10 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
     case FFA_RXTX_MAP_64:
         e = ffa_handle_rxtx_map(fid, get_user_reg(regs, 1),
 				get_user_reg(regs, 2), get_user_reg(regs, 3));
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
     case FFA_RXTX_UNMAP:
         e = ffa_handle_rxtx_unmap();
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
     case FFA_PARTITION_INFO_GET:
         e = ffa_handle_partition_info_get(get_user_reg(regs, 1),
                                           get_user_reg(regs, 2),
@@ -299,11 +291,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
         return true;
     case FFA_RX_RELEASE:
         e = ffa_handle_rx_release();
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
     case FFA_MSG_SEND_DIRECT_REQ_32:
     case FFA_MSG_SEND_DIRECT_REQ_64:
         handle_msg_send_direct_req(regs, fid);
@@ -316,17 +304,19 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
         e = ffa_handle_mem_reclaim(regpair_to_uint64(get_user_reg(regs, 2),
                                                      get_user_reg(regs, 1)),
                                    get_user_reg(regs, 3));
-        if ( e )
-            ffa_set_regs_error(regs, e);
-        else
-            ffa_set_regs_success(regs, 0, 0);
-        return true;
+        break;
 
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         return true;
     }
+
+    if ( e )
+        ffa_set_regs_error(regs, e);
+    else
+        ffa_set_regs_success(regs, 0, 0);
+    return true;
 }
 
 static int ffa_domain_init(struct domain *d)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:53:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736901.1142960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYug-0001ub-BC; Mon, 10 Jun 2024 06:53:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736901.1142960; Mon, 10 Jun 2024 06:53:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYug-0001t9-4N; Mon, 10 Jun 2024 06:53:54 +0000
Received: by outflank-mailman (input) for mailman id 736901;
 Mon, 10 Jun 2024 06:53:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYue-0001oX-Qm
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:53:52 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 31cd4f37-26f6-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 08:53:51 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6ef64b092cso224166366b.1
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:51 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31cd4f37-26f6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002430; x=1718607230; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Injz1eHKMU34CsdRACoEV/3/mGn5muvQCz5MmACq6o4=;
        b=c1zGfPGabAngaLO0Mv+Fenz9hpoP4iVVtsnrPeTaPzQaa2LQ16QIRK0J63MpdYYoNw
         i+uND7RC1vR3NS4XdHEEflTCmyf8/negg3+zh4u+gZmlgRayc++Hfq1SGUefYJlXN8vS
         khLiRYBuZGObm1TbWQQe4bRySkGmkT0usBUw3Niamnuvf9cJGKCatM27w1bJJSR932B7
         rdHwJdhfFPkG+Anb/7kEgvtcEtwGayn98N1EGOziHhYXygQ8DPp0nahrdjk8afuVGsoH
         vgZ1vdMVh08khBCj3pzkvyMbEQQlCPavBpBGyViomvT/VxqlbqWfEodDiLKlGgg6E7LJ
         6dyA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002430; x=1718607230;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Injz1eHKMU34CsdRACoEV/3/mGn5muvQCz5MmACq6o4=;
        b=YYTUKhnYc0UbARAi/QQMoYlvFkOWqfyhN6JdMQVRfPUbQaKHAtVzX178uGYlbcHKDM
         GhULVAo6yG6l6JP3DryF8nZ+V42QMveZ+lE0Uu+3nkvG3KaFFPoQ99s/+odwk6/iNKT9
         3d4V8WpmZRgb91r2EWJn332hgWY99bLPMpIx0RTZsgW3pBZig6mD8BS+FIXonCyuqvRB
         tVW2MBRdn0eFoUX7yfGUrGYobSSrbnti/9jHCm/5s4L71FbDcMvc4Smxt9GMMpvnTKZB
         RXX7dsciqAWY6sLtxwW471h+kiN4kjszE/gOxJkx/RnXQ3Pdvx5R9tk+TMiAMEAbiQju
         OjFw==
X-Gm-Message-State: AOJu0Yy4PuT6Sm2x1aIuLXQuYG/lJT5dBM4DU9GYNd0KAtX+q/RE9XBE
	nfReEn1pcXTKg9ddd23WAbBxBMEXqjbHq+dOTdGYOfwkgpRh+YfkSxR2IFZ0erHLfKgbaNYk8bd
	b4Gk=
X-Google-Smtp-Source: AGHT+IGMHCd19V2Ca1l7FaUhJpfnKwGOHKc5HQE5b6x+izr2N0yEBtHEkS4HKkyYX7r9eBBn5Pjgmw==
X-Received: by 2002:a17:906:7b87:b0:a6f:9ee:bd47 with SMTP id a640c23a62f3a-a6f09ef34d0mr300998766b.58.1718002429948;
        Sun, 09 Jun 2024 23:53:49 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v6 2/7] xen/arm: ffa: use ACCESS_ONCE()
Date: Mon, 10 Jun 2024 08:53:38 +0200
Message-Id: <20240610065343.2594943-3-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Replace read_atomic() with ACCESS_ONCE() to match the intended use, that
is, to prevent the compiler from optimizing the code into reading shared
memory more than once.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/tee/ffa_shm.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/xen/arch/arm/tee/ffa_shm.c b/xen/arch/arm/tee/ffa_shm.c
index eed9ad2d2986..75a5b66aeb4c 100644
--- a/xen/arch/arm/tee/ffa_shm.c
+++ b/xen/arch/arm/tee/ffa_shm.c
@@ -7,6 +7,7 @@
 #include <xen/sizes.h>
 #include <xen/types.h>
 #include <xen/mm.h>
+#include <xen/lib.h>
 #include <xen/list.h>
 #include <xen/spinlock.h>
 
@@ -171,8 +172,8 @@ static int get_shm_pages(struct domain *d, struct ffa_shm_mem *shm,
 
     for ( n = 0; n < range_count; n++ )
     {
-        page_count = read_atomic(&range[n].page_count);
-        addr = read_atomic(&range[n].address);
+        page_count = ACCESS_ONCE(range[n].page_count);
+        addr = ACCESS_ONCE(range[n].address);
         for ( m = 0; m < page_count; m++ )
         {
             if ( pg_idx >= shm->page_count )
@@ -527,13 +528,13 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
         goto out_unlock;
 
     mem_access = ctx->tx + trans.mem_access_offs;
-    if ( read_atomic(&mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
+    if ( ACCESS_ONCE(mem_access->access_perm.perm) != FFA_MEM_ACC_RW )
     {
         ret = FFA_RET_NOT_SUPPORTED;
         goto out_unlock;
     }
 
-    region_offs = read_atomic(&mem_access->region_offs);
+    region_offs = ACCESS_ONCE(mem_access->region_offs);
     if ( sizeof(*region_descr) + region_offs > frag_len )
     {
         ret = FFA_RET_NOT_SUPPORTED;
@@ -541,8 +542,8 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
     }
 
     region_descr = ctx->tx + region_offs;
-    range_count = read_atomic(&region_descr->address_range_count);
-    page_count = read_atomic(&region_descr->total_page_count);
+    range_count = ACCESS_ONCE(region_descr->address_range_count);
+    page_count = ACCESS_ONCE(region_descr->total_page_count);
 
     if ( !page_count )
     {
@@ -557,7 +558,7 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs)
         goto out_unlock;
     }
     shm->sender_id = trans.sender_id;
-    shm->ep_id = read_atomic(&mem_access->access_perm.endpoint_id);
+    shm->ep_id = ACCESS_ONCE(mem_access->access_perm.endpoint_id);
 
     /*
      * Check that the Composite memory region descriptor fits.
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:53:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736904.1142998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYul-000344-8S; Mon, 10 Jun 2024 06:53:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736904.1142998; Mon, 10 Jun 2024 06:53:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYul-00033v-5e; Mon, 10 Jun 2024 06:53:59 +0000
Received: by outflank-mailman (input) for mailman id 736904;
 Mon, 10 Jun 2024 06:53:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYuj-0001oX-J5
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:53:57 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34a79980-26f6-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 08:53:56 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a62ef52e837so528756866b.3
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:56 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34a79980-26f6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002435; x=1718607235; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=v0XZWV38zo7fRMFW2k6millPwbwyTFNlIqQWXfkZcoE=;
        b=wm3NVcf0cwmsKRDHMpzk0pUjs8pRNsKotohsW5zAUtAxPqcjiweXaPjV15z2nk83Dn
         1qe7DYXdaGjoyVZW+UzaKiufbfdrDI5LNPOHPsXhb1k8qDHBisLhPMHi0No75/eizajk
         Fq4Nz1G6LT+UXrGT1Jf5NOk6jtu0tun/5PObGi/fUJE8eGrOfEYocrSrj7dIUD4zvuGX
         RTlQW/pyQossoInGJS4gvs71Bhc4XmWN1kBru3mBbGTb5FYtqtsncnIDBgrIC3hwPWNe
         u7aADhnSGB6vWvFmaHFSZgy+tRPffv2PaLfk0DSp+Z+IqDo8lF3gN2Jma00mEBRmZZBr
         jnCA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002435; x=1718607235;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=v0XZWV38zo7fRMFW2k6millPwbwyTFNlIqQWXfkZcoE=;
        b=NCfN7gDrafE5mYqYlrA+BE7VSyzLsQyVBuWHz0ICbsDsPNBkOMi+Ah1PJA5skvfKVs
         k0uDNyFCX3FtMOln7FNy+rV6kJ4h1ZU4ZNrJcspLF2+kBofxVU4jemyHbVqWwC/rzwli
         fcT6gNw+DfZHxZ7D/Abln4/pZB1kO65b/griFFv35XyGGbMW/L9eGtPrs4BA/yocSvFv
         je2USAZHaHnuxlOu1O40sHZnsfpiFheK39/Mb6HOUA0MS8ZWIg5hiaDMK/7YecDEx6L2
         IZvbKRNsh+SgNHKMf0OMaaSSE98+NgOcc0qir/5XcDYBAu2xo5PcXR7bcj+RiSqJBa4y
         D0Jw==
X-Gm-Message-State: AOJu0YykQRm73cX6ALs4b44yrUbgH9utKduQ6bftTZGoTIRcbixWS6eX
	HIuUFqHU8PTPNM2wIodF/90+tCVNjQ98XFF0wBVr0SpYABCScGlqo7/4MEyJAdQ3KEM2HCYXuH3
	I744=
X-Google-Smtp-Source: AGHT+IFrqovL90KjjtIWH7ELqoOBQ3DlSigv3se5/sV3vqUZIu9PEL+pc5diOLHr+PJK6s+8Xzun2Q==
X-Received: by 2002:a17:906:29d5:b0:a6f:1b40:82ab with SMTP id a640c23a62f3a-a6f1b408397mr163610266b.76.1718002434843;
        Sun, 09 Jun 2024 23:53:54 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v6 5/7] xen/arm: add and call init_tee_secondary()
Date: Mon, 10 Jun 2024 08:53:41 +0200
Message-Id: <20240610065343.2594943-6-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add init_tee_secondary() to the TEE mediator framework and call it from
start_secondary() late enough that per-cpu interrupts can be configured
on CPUs as they are initialized. This is needed in later patches.

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
v5->v6:
- Rename init_tee_interrupt() to init_tee_secondary() as requested
---
 xen/arch/arm/include/asm/tee/tee.h | 8 ++++++++
 xen/arch/arm/smpboot.c             | 2 ++
 xen/arch/arm/tee/tee.c             | 6 ++++++
 3 files changed, 16 insertions(+)

diff --git a/xen/arch/arm/include/asm/tee/tee.h b/xen/arch/arm/include/asm/tee/tee.h
index da324467e130..6bc13da885b6 100644
--- a/xen/arch/arm/include/asm/tee/tee.h
+++ b/xen/arch/arm/include/asm/tee/tee.h
@@ -28,6 +28,9 @@ struct tee_mediator_ops {
      */
     bool (*probe)(void);
 
+    /* Initialize secondary CPUs */
+    void (*init_secondary)(void);
+
     /*
      * Called during domain construction if toolstack requests to enable
      * TEE support so mediator can inform TEE about new
@@ -66,6 +69,7 @@ int tee_domain_init(struct domain *d, uint16_t tee_type);
 int tee_domain_teardown(struct domain *d);
 int tee_relinquish_resources(struct domain *d);
 uint16_t tee_get_type(void);
+void init_tee_secondary(void);
 
 #define REGISTER_TEE_MEDIATOR(_name, _namestr, _type, _ops)         \
 static const struct tee_mediator_desc __tee_desc_##_name __used     \
@@ -105,6 +109,10 @@ static inline uint16_t tee_get_type(void)
     return XEN_DOMCTL_CONFIG_TEE_NONE;
 }
 
+static inline void init_tee_secondary(void)
+{
+}
+
 #endif  /* CONFIG_TEE */
 
 #endif /* __ARCH_ARM_TEE_TEE_H__ */
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 93a10d7721b4..04e363088d60 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -29,6 +29,7 @@
 #include <asm/procinfo.h>
 #include <asm/psci.h>
 #include <asm/acpi.h>
+#include <asm/tee/tee.h>
 
 /* Override macros from asm/page.h to make them work with mfn_t */
 #undef virt_to_mfn
@@ -401,6 +402,7 @@ void asmlinkage start_secondary(void)
      */
     init_maintenance_interrupt();
     init_timer_interrupt();
+    init_tee_secondary();
 
     local_abort_enable();
 
diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c
index ddd17506a9ff..9fd1d7495b2e 100644
--- a/xen/arch/arm/tee/tee.c
+++ b/xen/arch/arm/tee/tee.c
@@ -96,6 +96,12 @@ static int __init tee_init(void)
 
 __initcall(tee_init);
 
+void __init init_tee_secondary(void)
+{
+    if ( cur_mediator && cur_mediator->ops->init_secondary )
+        cur_mediator->ops->init_secondary();
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:53:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736905.1143003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYul-00036W-JZ; Mon, 10 Jun 2024 06:53:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736905.1143003; Mon, 10 Jun 2024 06:53:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYul-000366-DK; Mon, 10 Jun 2024 06:53:59 +0000
Received: by outflank-mailman (input) for mailman id 736905;
 Mon, 10 Jun 2024 06:53:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYuj-0002SJ-VW
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:53:57 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35603f96-26f6-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 08:53:57 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a63359aaaa6so594629866b.2
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:57 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35603f96-26f6-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002436; x=1718607236; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4VXeaEelKJYpG/6TyRv7n3Wpu8YGpUrR2rYBr36fFWI=;
        b=kTAnCAJnx9Thdpc0FXS2PUhrwcnsDYMb++DlD1CTsZu1OdpH4LM+3bxURpQTdgodMN
         Hn5TQr67piXzgCrOk94ZFY+ISeEZLUA/3JvSSG2XhuNn7jfnnvsl/Fd1Bxt/BU4MXxG7
         MACov1toSd84WvSEwzbWqwqrpO1HhqXWCfM6mr1jQKPiloxamLFibrvsUBm6yUtnhD+l
         kBvXLMXYfCSw7qhDPK8dKV6NtQNR7ssv0/ggeDlPqYDiA3t3nVc8EBWjiGU72A/eO2SD
         wsewUFmPnI9uJr8VMMpfo6d3A1ZvvS5XPcttbpHjRL45lNqRpdXP2515ByZ//ldPgvd9
         1bLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002436; x=1718607236;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4VXeaEelKJYpG/6TyRv7n3Wpu8YGpUrR2rYBr36fFWI=;
        b=k8gX/POf27nSrWfIALZwdwOWuMoYmloplDpxoPL6nhPnyxjvvANpl3shSqn20Sbb6U
         e0wDFWjiWITvcLkEcwiEAT9ZeMroTiwZ2IqSG1Pn5cZFJMe+NlXf1XOq5qn031q3wIuz
         Yp45ONf/FhrYp4jn7DQmdYyO9vAphkuQn7+VvVcvJrdb03M1r3dnj1GxvTjeDcdTZxLT
         zKLkCBztCVsn1pWymdrUgVav23bxYuWT7c2TEqsrL+WVcCCgCLAqk9de4O4K7NVUJBMY
         WryySP7/Nc7Huu3DcpcaAB8x4sY4QllgpmERTuvD/1Uo0QGFpoBnAFEgWnQ9+TLdLOgn
         aHNA==
X-Gm-Message-State: AOJu0YwegWssKE/fimCyT99Et9lvNoRaVyMHX6W+hSzSqV7lX6n2WRvk
	62bDzyVj8PogkBPuiwcjxwvf2D+moLNJDC49axxwpmcluHnFK8Rc68G/ZlPblXySqwCkp20hP3l
	MZ4Q=
X-Google-Smtp-Source: AGHT+IHNPbpnRBW3v3XVGITo/4pgqQeNlf8GqFQc2TOpQHtSgwbVAgzIPb6eK/wrE0lBSuIW9tPH6g==
X-Received: by 2002:a17:906:ca0f:b0:a6f:1d50:bf1e with SMTP id a640c23a62f3a-a6f1d50c0a1mr115826066b.43.1718002436462;
        Sun, 09 Jun 2024 23:53:56 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [XEN PATCH v6 6/7] xen/arm: add and call tee_free_domain_ctx()
Date: Mon, 10 Jun 2024 08:53:42 +0200
Message-Id: <20240610065343.2594943-7-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add tee_free_domain_ctx() to the TEE mediator framework.
tee_free_domain_ctx() is called from arch_domain_destroy() to allow late
freeing of the d->arch.tee context. This will simplify access to
d->arch.tee for domains retrieved with rcu_lock_domain_by_id().

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/domain.c              | 1 +
 xen/arch/arm/include/asm/tee/tee.h | 6 ++++++
 xen/arch/arm/tee/tee.c             | 6 ++++++
 3 files changed, 13 insertions(+)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8bde2f730dfb..7cfcefd27944 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -843,6 +843,7 @@ int arch_domain_teardown(struct domain *d)
 
 void arch_domain_destroy(struct domain *d)
 {
+    tee_free_domain_ctx(d);
     /* IOMMU page table is shared with P2M, always call
      * iommu_domain_destroy() before p2m_final_teardown().
      */
diff --git a/xen/arch/arm/include/asm/tee/tee.h b/xen/arch/arm/include/asm/tee/tee.h
index 6bc13da885b6..0169fd746bcd 100644
--- a/xen/arch/arm/include/asm/tee/tee.h
+++ b/xen/arch/arm/include/asm/tee/tee.h
@@ -38,6 +38,7 @@ struct tee_mediator_ops {
      */
     int (*domain_init)(struct domain *d);
     int (*domain_teardown)(struct domain *d);
+    void (*free_domain_ctx)(struct domain *d);
 
     /*
      * Called during domain destruction to relinquish resources used
@@ -70,6 +71,7 @@ int tee_domain_teardown(struct domain *d);
 int tee_relinquish_resources(struct domain *d);
 uint16_t tee_get_type(void);
 void init_tee_secondary(void);
+void tee_free_domain_ctx(struct domain *d);
 
 #define REGISTER_TEE_MEDIATOR(_name, _namestr, _type, _ops)         \
 static const struct tee_mediator_desc __tee_desc_##_name __used     \
@@ -113,6 +115,10 @@ static inline void init_tee_secondary(void)
 {
 }
 
+static inline void tee_free_domain_ctx(struct domain *d)
+{
+}
+
 #endif  /* CONFIG_TEE */
 
 #endif /* __ARCH_ARM_TEE_TEE_H__ */
diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c
index 9fd1d7495b2e..b1cae16c17a1 100644
--- a/xen/arch/arm/tee/tee.c
+++ b/xen/arch/arm/tee/tee.c
@@ -102,6 +102,12 @@ void __init init_tee_secondary(void)
         cur_mediator->ops->init_secondary();
 }
 
+void tee_free_domain_ctx(struct domain *d)
+{
+    if ( cur_mediator && cur_mediator->ops->free_domain_ctx)
+        cur_mediator->ops->free_domain_ctx(d);
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 06:54:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 06:54:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736906.1143018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYun-0003ca-Rp; Mon, 10 Jun 2024 06:54:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736906.1143018; Mon, 10 Jun 2024 06:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGYun-0003cO-Nz; Mon, 10 Jun 2024 06:54:01 +0000
Received: by outflank-mailman (input) for mailman id 736906;
 Mon, 10 Jun 2024 06:54:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FMhB=NM=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGYum-0002SJ-RT
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 06:54:01 +0000
Received: from mail-ej1-x644.google.com (mail-ej1-x644.google.com
 [2a00:1450:4864:20::644])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 365e6658-26f6-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 08:53:58 +0200 (CEST)
Received: by mail-ej1-x644.google.com with SMTP id
 a640c23a62f3a-a6f09eaf420so154697066b.3
 for <xen-devel@lists.xenproject.org>; Sun, 09 Jun 2024 23:53:59 -0700 (PDT)
Received: from rayden.urgonet (h-217-31-164-171.A175.priv.bahnhof.se.
 [217.31.164.171]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e6795b9sm107981966b.174.2024.06.09.23.53.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 09 Jun 2024 23:53:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 365e6658-26f6-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718002438; x=1718607238; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=305iG9fx40qqEJp0HWKt7JOcd7D+k2FTNaXogI/WLSM=;
        b=E5bUyMO9elYHOfM5gKFwuUHKzW/Cge+s0otluzvzLbSo6hJRsptueTmhYzCwApPpW5
         dYYh/hCqgICI13xZInQ4fH2gBd0Y2YYpiNc+nXfg/oWaxcG8sEmVWMfSONn2QCScOmJ/
         /k9egZN2Lc6UezHrArn4yYXforSByu4/j6nFTs8LOBCfpAtNRdINg2V+kngH/lGuoGtw
         jOIGhhkU0nqo+lAWwaKdPzjRSw/+7USKu+ViRn+bLe13R8BCl1a/1kHdzfw9AP26hDSe
         AZnlGHqXcRkgUCb/EWH/k8OVMxgsnOKh53KOveO/VQIo/3UcdowmHYi0w0f21Pp57T5D
         NYyA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718002438; x=1718607238;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=305iG9fx40qqEJp0HWKt7JOcd7D+k2FTNaXogI/WLSM=;
        b=avi1nC2Sv+sDQG3R9k1sPYRtnHYS4+oCq/a7Pt8NPApv+7u/tF5V/xq35oHP7Prvwn
         5k86ILDcvCMLM8UDOijoX7tNZ7at2iPJBMNqdweYGEcbgT0zhTBtGfPULpb9Q8bnTxm3
         lJiOeJtZ6KHffzL5GBa0t8WrL+5U1w8v7TVQf41V2l3evle1xOv7VdBkthevowEvOnqL
         J+W7g7LLBg6VMZ1HeVa7h7yxT2DRAtXjIIJh3bHH90FkPjZ89rUZJdNsOZH/iwGbmFJ4
         38dZZ3C4ApQmguGtJc+Q8y+4RR7IVFLI4WdOjcD+HLgPf4oD/O/3+bdGbca44VRMk8zZ
         ICXw==
X-Gm-Message-State: AOJu0Yyv7j+nSE+L4KNwG3Ukf1ZcBCwggs1GMnNDr8mTJ1DCf6sGNpV+
	uHjYjNNDLiR17wqhbtUQ8dJpmAJuwfoVT1uxgbVibfHNL3SwSzwJz4INXqQyRcnL35NlHDw8Xh7
	1kQabcg==
X-Google-Smtp-Source: AGHT+IFovAOFQGDgK4ZMx0Nd6uSETjAmoBfhQRI6+9c4qvCcBcggDMQKP095+P3hPn/jy29eXTOuJw==
X-Received: by 2002:a17:906:3f92:b0:a59:9dbf:677b with SMTP id a640c23a62f3a-a6cdb0f53b4mr562970566b.48.1718002437935;
        Sun, 09 Jun 2024 23:53:57 -0700 (PDT)
From: Jens Wiklander <jens.wiklander@linaro.org>
To: xen-devel@lists.xenproject.org
Cc: patches@linaro.org,
	Jens Wiklander <jens.wiklander@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [XEN PATCH v6 7/7] xen/arm: ffa: support notification
Date: Mon, 10 Jun 2024 08:53:43 +0200
Message-Id: <20240610065343.2594943-8-jens.wiklander@linaro.org>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for FF-A notifications, currently limited to an SP (Secure
Partition) sending an asynchronous notification to a guest.

Guests and Xen itself are made aware of pending notifications with an
interrupt. The interrupt handler triggers a tasklet to retrieve the
notifications using the FF-A ABI and deliver them to their destinations.

Update ffa_partinfo_domain_init() to return an error code, matching
ffa_notif_domain_init().

Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>
---
v5->v6:
- Add a local ffa_init_secondary() that calls ffa_notif_init_interrupt() as
  requested
- Add comments in notif_vm_pend_intr() to explain the cause and consequences of
  not finding the domain of a vm_id or if the found domain doesn't have a
  FF-A context.

v4->v5:
- Move the freeing of d->arch.tee to the new TEE mediator free_domain_ctx
  callback to make the context accessible during rcu_lock_domain_by_id()
  from a tasklet
- Initialize interrupt handlers for secondary CPUs from the new TEE mediator
  init_interrupt() callback
- Restore the ffa_probe() from v3, that is, remove the
  presmp_initcall(ffa_init) approach and use ffa_probe() as usual now that we
  have the init_interrupt callback.
- A tasklet is added to handle the Schedule Receiver interrupt. The tasklet
  finds each relevant domain with rcu_lock_domain_by_id() which now is enough
  to guarantee that the FF-A context can be accessed.
- The notification interrupt handler only schedules the notification
  tasklet mentioned above

v3->v4:
- Add another note on FF-A limitations
- Clear secure_pending in ffa_handle_notification_get() if both SP and SPM
  bitmaps are retrieved
- ASSERT that ffa_rcu_lock_domain_by_vm_id() isn't passed the vm_id FF-A
  uses for Xen itself
- Replace the get_domain_by_id() call done via ffa_get_domain_by_vm_id() in
  notif_irq_handler() with a call to rcu_lock_live_remote_domain_by_id() via
  ffa_rcu_lock_domain_by_vm_id()
- Remove spinlock in struct ffa_ctx_notif and use atomic functions as needed
  to access and update the secure_pending field
- In notif_irq_handler(), look for the first online CPU instead of assuming
  that the first CPU is online
- Initialize FF-A via presmp_initcall() before the other CPUs are online,
  use register_cpu_notifier() to install the interrupt handler
  notif_irq_handler()
- Update commit message to reflect recent updates

v2->v3:
- Add a GUEST_ prefix and move FFA_NOTIF_PEND_INTR_ID and
  FFA_SCHEDULE_RECV_INTR_ID to public/arch-arm.h
- Register the Xen SRI handler on each CPU using on_selected_cpus() and
  setup_irq()
- Check that the SGI ID retrieved with FFA_FEATURE_SCHEDULE_RECV_INTR
  doesn't conflict with static SGI handlers

v1->v2:
- Addressing review comments
- Change ffa_handle_notification_{bind,unbind,set}() to take struct
  cpu_user_regs *regs as argument.
- Update ffa_partinfo_domain_init() and ffa_notif_domain_init() to return
  an error code.
- Fixing a bug in handle_features() for FFA_FEATURE_SCHEDULE_RECV_INTR.

---
 xen/arch/arm/tee/Makefile       |   1 +
 xen/arch/arm/tee/ffa.c          |  77 +++++-
 xen/arch/arm/tee/ffa_notif.c    | 425 ++++++++++++++++++++++++++++++++
 xen/arch/arm/tee/ffa_partinfo.c |   9 +-
 xen/arch/arm/tee/ffa_private.h  |  56 ++++-
 xen/arch/arm/tee/tee.c          |   2 +-
 xen/include/public/arch-arm.h   |  14 ++
 7 files changed, 569 insertions(+), 15 deletions(-)
 create mode 100644 xen/arch/arm/tee/ffa_notif.c

diff --git a/xen/arch/arm/tee/Makefile b/xen/arch/arm/tee/Makefile
index f0112a2f922d..7c0f46f7f446 100644
--- a/xen/arch/arm/tee/Makefile
+++ b/xen/arch/arm/tee/Makefile
@@ -2,5 +2,6 @@ obj-$(CONFIG_FFA) += ffa.o
 obj-$(CONFIG_FFA) += ffa_shm.o
 obj-$(CONFIG_FFA) += ffa_partinfo.o
 obj-$(CONFIG_FFA) += ffa_rxtx.o
+obj-$(CONFIG_FFA) += ffa_notif.o
 obj-y += tee.o
 obj-$(CONFIG_OPTEE) += optee.o
diff --git a/xen/arch/arm/tee/ffa.c b/xen/arch/arm/tee/ffa.c
index 5209612963e1..022089278e1c 100644
--- a/xen/arch/arm/tee/ffa.c
+++ b/xen/arch/arm/tee/ffa.c
@@ -39,6 +39,12 @@
  *   - at most 32 shared memory regions per guest
  * o FFA_MSG_SEND_DIRECT_REQ:
  *   - only supported from a VM to an SP
+ * o FFA_NOTIFICATION_*:
+ *   - only supports global notifications, that is, per vCPU notifications
+ *     are not supported
+ *   - doesn't support signalling the secondary scheduler of pending
+ *     notification for secure partitions
+ *   - doesn't support notifications for Xen itself
  *
  * There are some large locked sections with ffa_tx_buffer_lock and
  * ffa_rx_buffer_lock. Especially the ffa_tx_buffer_lock spinlock used
@@ -194,6 +200,8 @@ out:
 
 static void handle_features(struct cpu_user_regs *regs)
 {
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
     uint32_t a1 = get_user_reg(regs, 1);
     unsigned int n;
 
@@ -240,6 +248,30 @@ static void handle_features(struct cpu_user_regs *regs)
         BUILD_BUG_ON(PAGE_SIZE != FFA_PAGE_SIZE);
         ffa_set_regs_success(regs, 0, 0);
         break;
+    case FFA_FEATURE_NOTIF_PEND_INTR:
+        if ( ctx->notif.enabled )
+            ffa_set_regs_success(regs, GUEST_FFA_NOTIF_PEND_INTR_ID, 0);
+        else
+            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        break;
+    case FFA_FEATURE_SCHEDULE_RECV_INTR:
+        if ( ctx->notif.enabled )
+            ffa_set_regs_success(regs, GUEST_FFA_SCHEDULE_RECV_INTR_ID, 0);
+        else
+            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        break;
+
+    case FFA_NOTIFICATION_BIND:
+    case FFA_NOTIFICATION_UNBIND:
+    case FFA_NOTIFICATION_GET:
+    case FFA_NOTIFICATION_SET:
+    case FFA_NOTIFICATION_INFO_GET_32:
+    case FFA_NOTIFICATION_INFO_GET_64:
+        if ( ctx->notif.enabled )
+            ffa_set_regs_success(regs, 0, 0);
+        else
+            ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        break;
     default:
         ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
         break;
@@ -305,6 +337,22 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
                                                      get_user_reg(regs, 1)),
                                    get_user_reg(regs, 3));
         break;
+    case FFA_NOTIFICATION_BIND:
+        e = ffa_handle_notification_bind(regs);
+        break;
+    case FFA_NOTIFICATION_UNBIND:
+        e = ffa_handle_notification_unbind(regs);
+        break;
+    case FFA_NOTIFICATION_INFO_GET_32:
+    case FFA_NOTIFICATION_INFO_GET_64:
+        ffa_handle_notification_info_get(regs);
+        return true;
+    case FFA_NOTIFICATION_GET:
+        ffa_handle_notification_get(regs);
+        return true;
+    case FFA_NOTIFICATION_SET:
+        e = ffa_handle_notification_set(regs);
+        break;
 
     default:
         gprintk(XENLOG_ERR, "ffa: unhandled fid 0x%x\n", fid);
@@ -322,6 +370,7 @@ static bool ffa_handle_call(struct cpu_user_regs *regs)
 static int ffa_domain_init(struct domain *d)
 {
     struct ffa_ctx *ctx;
+    int ret;
 
     if ( !ffa_version )
         return -ENODEV;
@@ -345,10 +394,11 @@ static int ffa_domain_init(struct domain *d)
      * error, so no need for cleanup in this function.
      */
 
-    if ( !ffa_partinfo_domain_init(d) )
-        return -EIO;
+    ret = ffa_partinfo_domain_init(d);
+    if ( ret )
+        return ret;
 
-    return 0;
+    return ffa_notif_domain_init(d);
 }
 
 static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time)
@@ -376,13 +426,6 @@ static void ffa_domain_teardown_continue(struct ffa_ctx *ctx, bool first_time)
     }
     else
     {
-        /*
-         * domain_destroy() might have been called (via put_domain() in
-         * ffa_reclaim_shms()), so we can't touch the domain structure
-         * anymore.
-         */
-        xfree(ctx);
-
         /* Only check if there has been a change to the teardown queue */
         if ( !first_time )
         {
@@ -423,17 +466,28 @@ static int ffa_domain_teardown(struct domain *d)
         return 0;
 
     ffa_rxtx_domain_destroy(d);
+    ffa_notif_domain_destroy(d);
 
     ffa_domain_teardown_continue(ctx, true /* first_time */);
 
     return 0;
 }
 
+static void ffa_free_domain_ctx(struct domain *d)
+{
+    XFREE(d->arch.tee);
+}
+
 static int ffa_relinquish_resources(struct domain *d)
 {
     return 0;
 }
 
+static void ffa_init_secondary(void)
+{
+    ffa_notif_init_interrupt();
+}
+
 static bool ffa_probe(void)
 {
     uint32_t vers;
@@ -502,6 +556,7 @@ static bool ffa_probe(void)
     if ( !ffa_partinfo_init() )
         goto err_rxtx_destroy;
 
+    ffa_notif_init();
     INIT_LIST_HEAD(&ffa_teardown_head);
     init_timer(&ffa_teardown_timer, ffa_teardown_timer_callback, NULL, 0);
 
@@ -517,8 +572,10 @@ err_rxtx_destroy:
 static const struct tee_mediator_ops ffa_ops =
 {
     .probe = ffa_probe,
+    .init_secondary = ffa_init_secondary,
     .domain_init = ffa_domain_init,
     .domain_teardown = ffa_domain_teardown,
+    .free_domain_ctx = ffa_free_domain_ctx,
     .relinquish_resources = ffa_relinquish_resources,
     .handle_call = ffa_handle_call,
 };
diff --git a/xen/arch/arm/tee/ffa_notif.c b/xen/arch/arm/tee/ffa_notif.c
new file mode 100644
index 000000000000..541e61d2f606
--- /dev/null
+++ b/xen/arch/arm/tee/ffa_notif.c
@@ -0,0 +1,425 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2024  Linaro Limited
+ */
+
+#include <xen/const.h>
+#include <xen/cpu.h>
+#include <xen/list.h>
+#include <xen/notifier.h>
+#include <xen/spinlock.h>
+#include <xen/tasklet.h>
+#include <xen/types.h>
+
+#include <asm/smccc.h>
+#include <asm/regs.h>
+
+#include "ffa_private.h"
+
+static bool __ro_after_init notif_enabled;
+static unsigned int __ro_after_init notif_sri_irq;
+
+int ffa_handle_notification_bind(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t src_dst = get_user_reg(regs, 1);
+    uint32_t flags = get_user_reg(regs, 2);
+    uint32_t bitmap_lo = get_user_reg(regs, 3);
+    uint32_t bitmap_hi = get_user_reg(regs, 4);
+
+    if ( !notif_enabled )
+        return FFA_RET_NOT_SUPPORTED;
+
+    if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    if ( flags )    /* Only global notifications are supported */
+        return FFA_RET_DENIED;
+
+    /*
+     * We only support notifications from SPs, so there is no need to check
+     * the sender endpoint ID; the SPMC will take care of that for us.
+     */
+    return ffa_simple_call(FFA_NOTIFICATION_BIND, src_dst, flags, bitmap_hi,
+                           bitmap_lo);
+}
+
+int ffa_handle_notification_unbind(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t src_dst = get_user_reg(regs, 1);
+    uint32_t bitmap_lo = get_user_reg(regs, 3);
+    uint32_t bitmap_hi = get_user_reg(regs, 4);
+
+    if ( !notif_enabled )
+        return FFA_RET_NOT_SUPPORTED;
+
+    if ( (src_dst & 0xFFFFU) != ffa_get_vm_id(d) )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /*
+     * We only support notifications from SPs, so there is no need to check
+     * the destination endpoint ID; the SPMC will take care of that for us.
+     */
+    return ffa_simple_call(FFA_NOTIFICATION_UNBIND, src_dst, 0, bitmap_hi,
+                           bitmap_lo);
+}
+
+void ffa_handle_notification_info_get(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( !notif_enabled )
+    {
+        ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        return;
+    }
+
+    if ( test_and_clear_bool(ctx->notif.secure_pending) )
+    {
+        /* A pending global notification for the guest */
+        ffa_set_regs(regs, FFA_SUCCESS_64, 0,
+                     1U << FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT, ffa_get_vm_id(d),
+                     0, 0, 0, 0);
+    }
+    else
+    {
+        /* Report an error if there were no pending global notifications */
+        ffa_set_regs_error(regs, FFA_RET_NO_DATA);
+    }
+}
+
+void ffa_handle_notification_get(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t recv = get_user_reg(regs, 1);
+    uint32_t flags = get_user_reg(regs, 2);
+    uint32_t w2 = 0;
+    uint32_t w3 = 0;
+    uint32_t w4 = 0;
+    uint32_t w5 = 0;
+    uint32_t w6 = 0;
+    uint32_t w7 = 0;
+
+    if ( !notif_enabled )
+    {
+        ffa_set_regs_error(regs, FFA_RET_NOT_SUPPORTED);
+        return;
+    }
+
+    if ( (recv & 0xFFFFU) != ffa_get_vm_id(d) )
+    {
+        ffa_set_regs_error(regs, FFA_RET_INVALID_PARAMETERS);
+        return;
+    }
+
+    if ( flags & ( FFA_NOTIF_FLAG_BITMAP_SP | FFA_NOTIF_FLAG_BITMAP_SPM ) )
+    {
+        struct arm_smccc_1_2_regs arg = {
+            .a0 = FFA_NOTIFICATION_GET,
+            .a1 = recv,
+            .a2 = flags & ( FFA_NOTIF_FLAG_BITMAP_SP |
+                            FFA_NOTIF_FLAG_BITMAP_SPM ),
+        };
+        struct arm_smccc_1_2_regs resp;
+        int32_t e;
+
+        /*
+         * Clear secure pending if both FFA_NOTIF_FLAG_BITMAP_SP and
+         * FFA_NOTIF_FLAG_BITMAP_SPM are set since secure world can't have
+         * any more pending notifications.
+         */
+        if ( ( flags  & FFA_NOTIF_FLAG_BITMAP_SP ) &&
+             ( flags & FFA_NOTIF_FLAG_BITMAP_SPM ) )
+        {
+                struct ffa_ctx *ctx = d->arch.tee;
+
+                ACCESS_ONCE(ctx->notif.secure_pending) = false;
+        }
+
+        arm_smccc_1_2_smc(&arg, &resp);
+        e = ffa_get_ret_code(&resp);
+        if ( e )
+        {
+            ffa_set_regs_error(regs, e);
+            return;
+        }
+
+        if ( flags & FFA_NOTIF_FLAG_BITMAP_SP )
+        {
+            w2 = resp.a2;
+            w3 = resp.a3;
+        }
+
+        if ( flags & FFA_NOTIF_FLAG_BITMAP_SPM )
+            w6 = resp.a6;
+    }
+
+    ffa_set_regs(regs, FFA_SUCCESS_32, 0, w2, w3, w4, w5, w6, w7);
+}
+
+int ffa_handle_notification_set(struct cpu_user_regs *regs)
+{
+    struct domain *d = current->domain;
+    uint32_t src_dst = get_user_reg(regs, 1);
+    uint32_t flags = get_user_reg(regs, 2);
+    uint32_t bitmap_lo = get_user_reg(regs, 3);
+    uint32_t bitmap_hi = get_user_reg(regs, 4);
+
+    if ( !notif_enabled )
+        return FFA_RET_NOT_SUPPORTED;
+
+    if ( (src_dst >> 16) != ffa_get_vm_id(d) )
+        return FFA_RET_INVALID_PARAMETERS;
+
+    /* Let the SPMC check the destination of the notification */
+    return ffa_simple_call(FFA_NOTIFICATION_SET, src_dst, flags, bitmap_lo,
+                           bitmap_hi);
+}
+
+/*
+ * Extract a 16-bit ID (index n) from the successful return value from
+ * FFA_NOTIFICATION_INFO_GET_64 or FFA_NOTIFICATION_INFO_GET_32. IDs are
+ * returned in registers 3 to 7 with four IDs per register for 64-bit
+ * calling convention and two IDs per register for 32-bit calling
+ * convention.
+ */
+static uint16_t get_id_from_resp(struct arm_smccc_1_2_regs *resp,
+                                 unsigned int n)
+{
+    unsigned int ids_per_reg;
+    unsigned int reg_idx;
+    unsigned int reg_shift;
+
+    if ( smccc_is_conv_64(resp->a0) )
+        ids_per_reg = 4;
+    else
+        ids_per_reg = 2;
+
+    reg_idx = n / ids_per_reg + 3;
+    reg_shift = ( n % ids_per_reg ) * 16;
+
+    switch ( reg_idx )
+    {
+    case 3:
+        return resp->a3 >> reg_shift;
+    case 4:
+        return resp->a4 >> reg_shift;
+    case 5:
+        return resp->a5 >> reg_shift;
+    case 6:
+        return resp->a6 >> reg_shift;
+    case 7:
+        return resp->a7 >> reg_shift;
+    default:
+        ASSERT(0); /* "Can't happen" */
+        return 0;
+    }
+}
+
+static void notif_vm_pend_intr(uint16_t vm_id)
+{
+    struct ffa_ctx *ctx;
+    struct domain *d;
+    struct vcpu *v;
+
+    /*
+     * vm_id == 0 means a notification pending for Xen itself, but
+     * we don't support that yet.
+     */
+    if ( !vm_id )
+        return;
+
+    /*
+     * This can fail if the domain has been destroyed after
+     * FFA_NOTIFICATION_INFO_GET_64. Ignoring this is harmless since the
+     * guest doesn't exist any more.
+     */
+    d = ffa_rcu_lock_domain_by_vm_id(vm_id);
+    if ( !d )
+        return;
+
+    /*
+     * Failing here is unlikely: it would require the domain ID to have been
+     * reused for a new domain between the FFA_NOTIFICATION_INFO_GET_64 and
+     * ffa_rcu_lock_domain_by_vm_id() calls.
+     *
+     * Continuing with that scenario, if the new domain has FF-A enabled we
+     * can't tell here that the domain ID has been reused, so we inject an
+     * NPI anyway. When the NPI handler in the domain calls
+     * FFA_NOTIFICATION_GET it will get accurate information; the worst
+     * case is a spurious NPI.
+     */
+    ctx = d->arch.tee;
+    if ( !ctx )
+        goto out_unlock;
+
+    /*
+     * arch.tee is freed from complete_domain_destroy() so the RCU lock
+     * guarantees that the data structure isn't freed while we're accessing
+     * it.
+     */
+    ACCESS_ONCE(ctx->notif.secure_pending) = true;
+
+    /*
+     * Since we're only delivering global notifications, always deliver
+     * to the first online vCPU. It doesn't matter which one we choose,
+     * as long as it's available.
+     */
+    for_each_vcpu(d, v)
+    {
+        if ( is_vcpu_online(v) )
+        {
+            vgic_inject_irq(d, v, GUEST_FFA_NOTIF_PEND_INTR_ID,
+                            true);
+            break;
+        }
+    }
+    if ( !v )
+        printk(XENLOG_ERR "ffa: can't inject NPI, all vCPUs offline\n");
+
+out_unlock:
+    rcu_unlock_domain(d);
+}
+
+static void notif_sri_action(void *unused)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_NOTIFICATION_INFO_GET_64,
+    };
+    struct arm_smccc_1_2_regs resp;
+    unsigned int id_pos;
+    unsigned int list_count;
+    uint64_t ids_count;
+    unsigned int n;
+    int32_t res;
+
+    do {
+        arm_smccc_1_2_smc(&arg, &resp);
+        res = ffa_get_ret_code(&resp);
+        if ( res )
+        {
+            if ( res != FFA_RET_NO_DATA )
+                printk(XENLOG_ERR "ffa: notification info get failed: error %d\n",
+                       res);
+            return;
+        }
+
+        ids_count = resp.a2 >> FFA_NOTIF_INFO_GET_ID_LIST_SHIFT;
+        list_count = ( resp.a2 >> FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT ) &
+                     FFA_NOTIF_INFO_GET_ID_COUNT_MASK;
+
+        id_pos = 0;
+        for ( n = 0; n < list_count; n++ )
+        {
+            unsigned int count = ((ids_count >> 2 * n) & 0x3) + 1;
+            uint16_t vm_id = get_id_from_resp(&resp, id_pos);
+
+            notif_vm_pend_intr(vm_id);
+
+            id_pos += count;
+        }
+
+    } while (resp.a2 & FFA_NOTIF_INFO_GET_MORE_FLAG);
+}
+
+static DECLARE_TASKLET(notif_sri_tasklet, notif_sri_action, NULL);
+
+static void notif_irq_handler(int irq, void *data)
+{
+    tasklet_schedule(&notif_sri_tasklet);
+}
+
+static int32_t ffa_notification_bitmap_create(uint16_t vm_id,
+                                              uint32_t vcpu_count)
+{
+    return ffa_simple_call(FFA_NOTIFICATION_BITMAP_CREATE, vm_id, vcpu_count,
+                           0, 0);
+}
+
+static int32_t ffa_notification_bitmap_destroy(uint16_t vm_id)
+{
+    return ffa_simple_call(FFA_NOTIFICATION_BITMAP_DESTROY, vm_id, 0, 0, 0);
+}
+
+void ffa_notif_init_interrupt(void)
+{
+    int ret;
+
+    if ( notif_enabled && notif_sri_irq < NR_GIC_SGI )
+    {
+        /*
+         * An error here is unlikely since the primary CPU has already
+         * succeeded in installing the interrupt handler. If this fails it
+         * may lead to a problem with notifications.
+         *
+         * A CPU without the notification handler installed will not react
+         * to the SGI indicating that there are notifications pending,
+         * while the SPMC in the secure world will not notice that the
+         * interrupt was lost.
+         */
+        ret = request_irq(notif_sri_irq, 0, notif_irq_handler, "FF-A notif",
+                          NULL);
+        if ( ret )
+            printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
+                   notif_sri_irq, ret);
+    }
+}
+
+void ffa_notif_init(void)
+{
+    const struct arm_smccc_1_2_regs arg = {
+        .a0 = FFA_FEATURES,
+        .a1 = FFA_FEATURE_SCHEDULE_RECV_INTR,
+    };
+    struct arm_smccc_1_2_regs resp;
+    unsigned int irq;
+    int ret;
+
+    arm_smccc_1_2_smc(&arg, &resp);
+    if ( resp.a0 != FFA_SUCCESS_32 )
+        return;
+
+    irq = resp.a2;
+    notif_sri_irq = irq;
+    if ( irq >= NR_GIC_SGI )
+        irq_set_type(irq, IRQ_TYPE_EDGE_RISING);
+    ret = request_irq(irq, 0, notif_irq_handler, "FF-A notif", NULL);
+    if ( ret )
+    {
+        printk(XENLOG_ERR "ffa: request_irq irq %u failed: error %d\n",
+               irq, ret);
+        return;
+    }
+
+    notif_enabled = true;
+}
+
+int ffa_notif_domain_init(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+    int32_t res;
+
+    if ( !notif_enabled )
+        return 0;
+
+    res = ffa_notification_bitmap_create(ffa_get_vm_id(d), d->max_vcpus);
+    if ( res )
+        return -ENOMEM;
+
+    ctx->notif.enabled = true;
+
+    return 0;
+}
+
+void ffa_notif_domain_destroy(struct domain *d)
+{
+    struct ffa_ctx *ctx = d->arch.tee;
+
+    if ( ctx->notif.enabled )
+    {
+        ffa_notification_bitmap_destroy(ffa_get_vm_id(d));
+        ctx->notif.enabled = false;
+    }
+}
diff --git a/xen/arch/arm/tee/ffa_partinfo.c b/xen/arch/arm/tee/ffa_partinfo.c
index dc1059584828..93a03c6bc672 100644
--- a/xen/arch/arm/tee/ffa_partinfo.c
+++ b/xen/arch/arm/tee/ffa_partinfo.c
@@ -306,7 +306,7 @@ static void vm_destroy_bitmap_init(struct ffa_ctx *ctx,
     }
 }
 
-bool ffa_partinfo_domain_init(struct domain *d)
+int ffa_partinfo_domain_init(struct domain *d)
 {
     unsigned int count = BITS_TO_LONGS(subscr_vm_destroyed_count);
     struct ffa_ctx *ctx = d->arch.tee;
@@ -315,7 +315,7 @@ bool ffa_partinfo_domain_init(struct domain *d)
 
     ctx->vm_destroy_bitmap = xzalloc_array(unsigned long, count);
     if ( !ctx->vm_destroy_bitmap )
-        return false;
+        return -ENOMEM;
 
     for ( n = 0; n < subscr_vm_created_count; n++ )
     {
@@ -330,7 +330,10 @@ bool ffa_partinfo_domain_init(struct domain *d)
     }
     vm_destroy_bitmap_init(ctx, n);
 
-    return n == subscr_vm_created_count;
+    if ( n != subscr_vm_created_count )
+        return -EIO;
+
+    return 0;
 }
 
 bool ffa_partinfo_domain_destroy(struct domain *d)
diff --git a/xen/arch/arm/tee/ffa_private.h b/xen/arch/arm/tee/ffa_private.h
index 98236cbf14a3..7c6b06f686fc 100644
--- a/xen/arch/arm/tee/ffa_private.h
+++ b/xen/arch/arm/tee/ffa_private.h
@@ -25,6 +25,7 @@
 #define FFA_RET_DENIED                  -6
 #define FFA_RET_RETRY                   -7
 #define FFA_RET_ABORTED                 -8
+#define FFA_RET_NO_DATA                 -9
 
 /* FFA_VERSION helpers */
 #define FFA_VERSION_MAJOR_SHIFT         16U
@@ -175,6 +176,21 @@
  */
 #define FFA_PARTITION_INFO_GET_COUNT_FLAG BIT(0, U)
 
+/* Flags used in calls to FFA_NOTIFICATION_GET interface  */
+#define FFA_NOTIF_FLAG_BITMAP_SP        BIT(0, U)
+#define FFA_NOTIF_FLAG_BITMAP_VM        BIT(1, U)
+#define FFA_NOTIF_FLAG_BITMAP_SPM       BIT(2, U)
+#define FFA_NOTIF_FLAG_BITMAP_HYP       BIT(3, U)
+
+#define FFA_NOTIF_INFO_GET_MORE_FLAG        BIT(0, U)
+#define FFA_NOTIF_INFO_GET_ID_LIST_SHIFT    12
+#define FFA_NOTIF_INFO_GET_ID_COUNT_SHIFT   7
+#define FFA_NOTIF_INFO_GET_ID_COUNT_MASK    0x1F
+
+/* Feature IDs used with FFA_FEATURES */
+#define FFA_FEATURE_NOTIF_PEND_INTR     0x1U
+#define FFA_FEATURE_SCHEDULE_RECV_INTR  0x2U
+
 /* Function IDs */
 #define FFA_ERROR                       0x84000060U
 #define FFA_SUCCESS_32                  0x84000061U
@@ -213,6 +229,24 @@
 #define FFA_MEM_FRAG_TX                 0x8400007BU
 #define FFA_MSG_SEND                    0x8400006EU
 #define FFA_MSG_POLL                    0x8400006AU
+#define FFA_NOTIFICATION_BITMAP_CREATE  0x8400007DU
+#define FFA_NOTIFICATION_BITMAP_DESTROY 0x8400007EU
+#define FFA_NOTIFICATION_BIND           0x8400007FU
+#define FFA_NOTIFICATION_UNBIND         0x84000080U
+#define FFA_NOTIFICATION_SET            0x84000081U
+#define FFA_NOTIFICATION_GET            0x84000082U
+#define FFA_NOTIFICATION_INFO_GET_32    0x84000083U
+#define FFA_NOTIFICATION_INFO_GET_64    0xC4000083U
+
+struct ffa_ctx_notif {
+    bool enabled;
+
+    /*
+     * True if domain is reported by FFA_NOTIFICATION_INFO_GET to have
+     * pending global notifications.
+     */
+    bool secure_pending;
+};
 
 struct ffa_ctx {
     void *rx;
@@ -228,6 +262,7 @@ struct ffa_ctx {
     struct list_head shm_list;
     /* Number of allocated shared memory object */
     unsigned int shm_count;
+    struct ffa_ctx_notif notif;
     /*
      * tx_lock is used to serialize access to tx
      * rx_lock is used to serialize access to rx
@@ -257,7 +292,7 @@ void ffa_handle_mem_share(struct cpu_user_regs *regs);
 int ffa_handle_mem_reclaim(uint64_t handle, uint32_t flags);
 
 bool ffa_partinfo_init(void);
-bool ffa_partinfo_domain_init(struct domain *d);
+int ffa_partinfo_domain_init(struct domain *d);
 bool ffa_partinfo_domain_destroy(struct domain *d);
 int32_t ffa_handle_partition_info_get(uint32_t w1, uint32_t w2, uint32_t w3,
                                       uint32_t w4, uint32_t w5, uint32_t *count,
@@ -271,12 +306,31 @@ uint32_t ffa_handle_rxtx_map(uint32_t fid, register_t tx_addr,
 uint32_t ffa_handle_rxtx_unmap(void);
 int32_t ffa_handle_rx_release(void);
 
+void ffa_notif_init(void);
+void ffa_notif_init_interrupt(void);
+int ffa_notif_domain_init(struct domain *d);
+void ffa_notif_domain_destroy(struct domain *d);
+
+int ffa_handle_notification_bind(struct cpu_user_regs *regs);
+int ffa_handle_notification_unbind(struct cpu_user_regs *regs);
+void ffa_handle_notification_info_get(struct cpu_user_regs *regs);
+void ffa_handle_notification_get(struct cpu_user_regs *regs);
+int ffa_handle_notification_set(struct cpu_user_regs *regs);
+
 static inline uint16_t ffa_get_vm_id(const struct domain *d)
 {
     /* +1 since 0 is reserved for the hypervisor in FF-A */
     return d->domain_id + 1;
 }
 
+static inline struct domain *ffa_rcu_lock_domain_by_vm_id(uint16_t vm_id)
+{
+    ASSERT(vm_id);
+
+    /* -1 to match ffa_get_vm_id() */
+    return rcu_lock_domain_by_id(vm_id - 1);
+}
+
 static inline void ffa_set_regs(struct cpu_user_regs *regs, register_t v0,
                                 register_t v1, register_t v2, register_t v3,
                                 register_t v4, register_t v5, register_t v6,
diff --git a/xen/arch/arm/tee/tee.c b/xen/arch/arm/tee/tee.c
index b1cae16c17a1..3f65e45a7892 100644
--- a/xen/arch/arm/tee/tee.c
+++ b/xen/arch/arm/tee/tee.c
@@ -94,7 +94,7 @@ static int __init tee_init(void)
     return 0;
 }
 
-__initcall(tee_init);
+presmp_initcall(tee_init);
 
 void __init init_tee_secondary(void)
 {
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 289af81bd69d..e2412a17474e 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -505,6 +505,7 @@ typedef uint64_t xen_callback_t;
 #define GUEST_MAX_VCPUS 128
 
 /* Interrupts */
+
 #define GUEST_TIMER_VIRT_PPI    27
 #define GUEST_TIMER_PHYS_S_PPI  29
 #define GUEST_TIMER_PHYS_NS_PPI 30
@@ -515,6 +516,19 @@ typedef uint64_t xen_callback_t;
 #define GUEST_VIRTIO_MMIO_SPI_FIRST   33
 #define GUEST_VIRTIO_MMIO_SPI_LAST    43
 
+/*
+ * An SGI is the preferred delivery mechanism for the FF-A notification
+ * pending and schedule receive interrupts. SGIs 8-15 are normally not used
+ * by a guest since, in a non-virtualized system, they are typically
+ * assigned to the secure world. Here we're free to use SGIs 8-15 since
+ * they are virtual and have nothing to do with the secure world.
+ *
+ * For partitioning of SGIs see also Arm Base System Architecture v1.0C,
+ * https://developer.arm.com/documentation/den0094/
+ */
+#define GUEST_FFA_NOTIF_PEND_INTR_ID      8
+#define GUEST_FFA_SCHEDULE_RECV_INTR_ID   9
+
 /* PSCI functions */
 #define PSCI_cpu_suspend 0
 #define PSCI_cpu_off     1
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:15:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 07:15:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736952.1143027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZFl-0001Jx-QA; Mon, 10 Jun 2024 07:15:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736952.1143027; Mon, 10 Jun 2024 07:15:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZFl-0001Jq-Nf; Mon, 10 Jun 2024 07:15:41 +0000
Received: by outflank-mailman (input) for mailman id 736952;
 Mon, 10 Jun 2024 07:15:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGZFk-0001Jk-MB
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 07:15:40 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d5903ea-26f9-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 09:15:39 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so80989701fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 00:15:39 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6c80581c1csm615294966b.17.2024.06.10.00.15.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 00:15:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d5903ea-26f9-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718003739; x=1718608539; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KxOqOh8UfBu2g09J9Z0iUbukC5LS8oAauWEWOrv76J0=;
        b=VzkIuvKcP1UdzrlhHtnHWkWwgMm7RFPS1X2bcvvOct4tYl51TU3qkpnG9e3KZOUgtk
         jLyd7Lhuh6Em8VIXB5SfbIsH+55ZmnjgO/g/i5xXLfSVJtQGKbVesr5z3kjIajlN4L1j
         YumsKGF7wm7ErQl1zO20K+QVcYzP62c1vw32alIiFOP1/nqiHdONsbrYkTnKa+41peGl
         VsLoUTjpiIXs3yzNOoBgI+rhSw4I0hR5EHjpHVxSpLwbMd1b79QQsrEjoNBNvZUi+Z/6
         NJBjt/dmpxXMgL66qPkr/dcKrnqdFFwBoMsr5+4eSUPlZOXAzmXTmzbdAL1SAAhd4J/4
         9FTw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718003739; x=1718608539;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KxOqOh8UfBu2g09J9Z0iUbukC5LS8oAauWEWOrv76J0=;
        b=LOlbAY/S/IxAMYiFiiq1IHlev5HSmtfNuFOFfglW+rc8c3CbReV+PoinH/pkGBXowf
         9xfkNuGhUPLIMsZmPLzFDU/BUx7VcoeYB+nI8nTPXt/Xq9eG0r3c6bHJrH/5C9rRufRn
         ELjjVFBF7ZSSEKeWW+wDQKKVyroeQ5+oeA6X2WGrtHYPCKLLqSC/ehncAHHuqnxtMebT
         tQDkNzHnvMCsBae3FyWIQHooNE9xyU0hg2zZxj4j8G6uMLlJd3w6O0gb+yovTrTpmMZj
         HmhUuE2Mb9OQg6xRuz9aIt/z07KOMEYpjsTue1oMAPXnCHwZlCsqLa4RkFv5mIuGr7j5
         WU8w==
X-Forwarded-Encrypted: i=1; AJvYcCU1Gdt5GlFvwbng6cNcIcXx8uRK8QUbY58lvPOJEzk/ITD2orX48zepGqxM7LMgiSwHsPDgxIwA6FbdExCWTHtVOiZ4FcaTQNP2tQibGxo=
X-Gm-Message-State: AOJu0YzDUfYFexy2wC/mn1t5POnrBqsZ+lmMxPg9rNzWEa4sLm/tYgxD
	5Hrh7N7fcvvpstPhOIcOtIfZyxolC89ncwqu0QcHXNC8BUVtIbKtSJekAze6/w==
X-Google-Smtp-Source: AGHT+IGPt0CYADzORpmApfrMzt0iCjrpewhgiiew/HUx1RM0IEzMThHfiIei/VXmKnn5sksxNFbJ2A==
X-Received: by 2002:a05:6512:3ca5:b0:52b:c296:9739 with SMTP id 2adb3069b0e04-52bc2969770mr6331782e87.41.1718003738710;
        Mon, 10 Jun 2024 00:15:38 -0700 (PDT)
Message-ID: <afc347c0-ca2f-4972-b895-71184b1074ea@suse.com>
Date: Mon, 10 Jun 2024 09:15:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: vcpumask_to_pcpumask() case study
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <3bb4e3fa-376b-4641-824d-61864b4e1e8e@citrix.com>
 <c5951643-5172-4aa1-9833-1a7a0eebb540@suse.com>
 <1745d84b-59b7-4f90-a0a8-5d459b83b0bc@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <1745d84b-59b7-4f90-a0a8-5d459b83b0bc@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 07.06.2024 14:35, Andrew Cooper wrote:
> On 03/06/2024 10:19 pm, Jan Beulich wrote:
>> On 01.06.2024 20:50, Andrew Cooper wrote:
>>> One of the follow-on items I had from the bitops clean-up is this:
>>>
>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>> index 648d6dd475ba..9c3a017606ed 100644
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -3425,7 +3425,7 @@ static int vcpumask_to_pcpumask(
>>>              unsigned int cpu;
>>>  
>>>              vcpu_id = ffsl(vmask) - 1;
>>> -            vmask &= ~(1UL << vcpu_id);
>>> +            vmask &= vmask - 1;
>>>              vcpu_id += vcpu_bias;
>>>              if ( (vcpu_id >= d->max_vcpus) )
>>>                  return 0;
>>>
>>> which yields the following improvement:
>>>
>>>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-34 (-34)
>>>   Function                                     old     new   delta
>>>   vcpumask_to_pcpumask                         519     485     -34
>> Nice. At the risk of getting flamed again for raising dumb questions:
>> Considering that elsewhere "trickery" like the &= mask - 1 here was
>> deemed not nice to have (at least wanting to be hidden behind a
>> suitably named macro; see e.g. ISOLATE_LSB()), wouldn't __clear_bit()
>> be suitable here too, and less at risk of being considered "trickery"?
> 
> __clear_bit() is even worse, because it forces the bitmap to be spilled
> to memory.  It hopefully won't when I've given the test/set helpers the
> same treatment as ffs/fls.

Sorry, not directly related here: When you're saying "when I've given"
does that mean you'd like Oleksii's "xen: introduce generic non-atomic
test_*bit()" to not go in once at least an Arm ack has arrived?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:17:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 07:17:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736957.1143037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZHA-0001rF-4R; Mon, 10 Jun 2024 07:17:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736957.1143037; Mon, 10 Jun 2024 07:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZHA-0001r8-1c; Mon, 10 Jun 2024 07:17:08 +0000
Received: by outflank-mailman (input) for mailman id 736957;
 Mon, 10 Jun 2024 07:17:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGZH8-0001qy-No
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 07:17:06 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 70dca8c3-26f9-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 09:17:05 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57c76497cefso1284319a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 00:17:05 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57aae237408sm6922828a12.93.2024.06.10.00.17.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 00:17:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70dca8c3-26f9-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718003825; x=1718608625; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=JfXo5DY86DUjRjTZno/zguF/2yTWu03Tx+dhEuiFnwE=;
        b=RZDi/OiwzJ3F6E/EMocV9dkU6QeX0aTpleNKSoj58ihCvMR8cyBAb9Iq0M9APZOoUe
         GzI1rsmt5E3I/uHBAbvVkdhpDvd/2FopRSm5qcJi1V8UNh01zgAVwIV0BYFUi6KqiP9r
         ZGumVyjonhl7Zunk7K+GNYC9R+YGemxM6NpBXRjYFrHpP8M6qXVnXFfN+mrcxLki0kQx
         PlvsIwdhnHZaC+SH2+TLbvzNINrfnTWveFRb0Gx41SBnvVuNl/9p7IdAVmlTHBJl2gOx
         ZjacjDA/FMKCfHYtYhY2200QNywtHUJIsMRmyRwcaNMBW/9FjVkt97W25LMHIfWDLHlk
         qo9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718003825; x=1718608625;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JfXo5DY86DUjRjTZno/zguF/2yTWu03Tx+dhEuiFnwE=;
        b=a0Ta50jAJfykKyvw+b0z9WRxP8HkfMQI8NqkpEYcDiHpqF0p59yPighPTpVTUnPSuF
         Bnj2MGNlwbx+87oIK3H35RU0b/5Z45my0oO3e5RDM0WIuj8CtGG1dSGCX1oZ6yFAp/XR
         Zfs5+kCKpeBNkAVdYL5qlaX/yocBG2elZssQDwRspGm3kQJEc+2J/2JvWdwgkL0/7Yn0
         2GJ0+BvPw+M00zSUAPsz4/3EKmu1TWerSLvNX73aB1QlTzHrNNqE0sqALUOU1L2KrqT/
         u9CvX5jRdpLpNbvb/mZVFSy4ckgfcMTp7BdRFQzKf4erXkst0BQ4MQRt/p3akjjiBNsC
         8uXQ==
X-Forwarded-Encrypted: i=1; AJvYcCWSsjUINzIdt7v5SkSTu365Fv1C0mH9kODNfXy3cYPryTRLehJTrJBNbBIpMq1EuejK8NdaRW9I/6Qfxfc88rEwRW5gqy1kAf40OCwldog=
X-Gm-Message-State: AOJu0YwKtsHVyWOvFYX8CMQVjoBY9aGpf5xsl+7/VqaeOhRTKR4MCxf3
	UaJl31hwcevu8eHZ0UbdII6QbupKFQ0V7DHrckj21JZJ4x12Kgn9I8eTVzOt6A==
X-Google-Smtp-Source: AGHT+IFdDdcYuzFyiNiGOqyXwLz8UwIbjCTHYMbqhf4AD3V2WY9wDjhS5bzzzAAD7go3RrQS1b5hMw==
X-Received: by 2002:a50:c306:0:b0:57c:6861:d735 with SMTP id 4fb4d7f45d1cf-57c6861d783mr4418309a12.19.1718003825210;
        Mon, 10 Jun 2024 00:17:05 -0700 (PDT)
Message-ID: <c642d1ef-9466-424a-9e84-54accecd8c6a@suse.com>
Date: Mon, 10 Jun 2024 09:17:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/cpufreq: separate powernow/hwp cpufreq code
To: Sergiy Kibrik <sergiy_kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jason Andryuk <jason.andryuk@amd.com>, xen-devel@lists.xenproject.org
References: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
 <5cb13d1a-1452-4542-b50d-23e6a9d9d3ef@suse.com>
 <c66966da-bbe3-432e-8a2f-809bf434db39@epam.com>
 <ab57f7f3-ac54-4b41-950a-1f7bee4293ab@suse.com>
 <647b086a-04b0-42be-a7b8-a266c4f4e64b@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <647b086a-04b0-42be-a7b8-a266c4f4e64b@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 11:14, Sergiy Kibrik wrote:
> 06.06.24 10:54, Jan Beulich:
>> On 06.06.2024 09:30, Sergiy Kibrik wrote:
>>> 06.06.24 10:08, Jan Beulich:
>>>> On 04.06.2024 11:34, Sergiy Kibrik wrote:
>>>>> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>>>> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>>>> @@ -657,7 +657,7 @@ static int __init cf_check cpufreq_driver_init(void)
>>>>>    
>>>>>            case X86_VENDOR_AMD:
>>>>>            case X86_VENDOR_HYGON:
>>>>> -            ret = powernow_register_driver();
>>>>> +            ret = IS_ENABLED(CONFIG_AMD) ? powernow_register_driver() : -ENODEV;
>>>>>                break;
>>>>>            }
>>>>
>>>> What about the Intel-specific code immediately up from here?
>>>> Dealing with that as well may likely permit to reduce ...
>>>
>>> you mean to guard a call to hwp_register_driver() the same way as for
>>> powernow_register_driver(), and save one stub?
>>
>> Yes, and perhaps more. Maybe more stubs can be avoided? And
>> acpi_cpufreq_driver doesn't need registering either, and hence
>> would presumably be left unreferenced when !INTEL?
>>
> 
> {get,set}_hwp_para() can be avoided, as they're being called just once 
> and may be guarded by IS_ENABLED(CONFIG_INTEL).
> The same for hwp_cmdline_parse().
> As for hwp_active(), it's used many times by generic cpufreq code 
> and even outside of cpufreq, so it probably has to be either a stub, or 
> be moved outside of hwp.c and become something like this:
> 
>   bool hwp_active(void)
>   {
>      return IS_ENABLED(CONFIG_INTEL) && hwp_in_use;
>   }
> 
> Though I'm not sure such a move would be any better than a stub.
> 
> acpi_cpufreq_driver, i.e. most of the code in the cpufreq.c file, can 
> probably be separated into acpi.c and put under CONFIG_INTEL as well. 
> What do you think of this?

Sounds like the direction I think we want to be following.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:23:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 07:23:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736964.1143048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZN4-0004Ai-Pc; Mon, 10 Jun 2024 07:23:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736964.1143048; Mon, 10 Jun 2024 07:23:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZN4-0004Ab-M6; Mon, 10 Jun 2024 07:23:14 +0000
Received: by outflank-mailman (input) for mailman id 736964;
 Mon, 10 Jun 2024 07:23:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGZN3-0004AV-Ip
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 07:23:13 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4af663db-26fa-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 09:23:11 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57c681dd692so2289552a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 00:23:11 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f18a33ad8sm161384866b.137.2024.06.10.00.23.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 00:23:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4af663db-26fa-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718004191; x=1718608991; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=9TjfTOlDvjR2FjhJaVij3/Cs4UqatQfpVPyMsfBMvgY=;
        b=Fy+ST3wzvhLFQ5KgX2cyEQGDru+KOGXGs93Q+zAKgqu6m3x9QLb9bBxwOofDUnyPO/
         DwZ0mTsiKXmBwokmXhzykz1XbOqTatVJaq86+yx9p2VnBI6p0/GgY62Nz1i8nSnVKrOB
         gE+maQYTVva3SBPp7RbSAPGWNeUFFDrJQbn05XWikqA11zx1vApcIdXksc7aDGT4A+Ys
         tmkomopQJnn3OgRmLbNfI4PmcPX+ic//8QhdzTsyYxWoM1no+UHIi+2hKMyJ2TeLHBDS
         dRMF/yHdySCJctO4nkaxf85PXqSAAcdCBMBjIrjSMu8heJg546UBuoSrGaJkzLRX/fBA
         SwEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718004191; x=1718608991;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=9TjfTOlDvjR2FjhJaVij3/Cs4UqatQfpVPyMsfBMvgY=;
        b=Jz0cv8urYrmzA4X6PlABZyj5UzL/+dKFvs96thqplqffr1OF9YTRg46PYFtDczMEvi
         /ldrcdMn8JbYpET22WJ8rPJJo51mWDPwSkdWuZvfw28T5/foGBDvLlk52JzuTKZFK9n3
         iaiXL8c9MXzugjar/tMSnsFpMxCrU7do+oV79vknsjucBL/0phu50AIxTYAO2y8jUhV2
         a8oDI+VL9X8GOT/hlxrsbNLvyofcSRJkJlHECU+gvj2mTQ1TMBRjCbxbYckmaKHlLzcR
         y5s5yiATBxdqvHuRV1qVDhhS5gLvOAsUi4aJmxU2v8Z0kXOLVPquMshNI1TZprWG/ISN
         geIg==
X-Forwarded-Encrypted: i=1; AJvYcCW0GnhBR7ZWFuifZd8OMWvuZ9PuNJCzoc3CVi1Se+5S1LrFeP1EJMxBJbI6yyiPU3jB7x/xcU3fK/loYJdJlMrLBLXhftjpQbFQkLxajKw=
X-Gm-Message-State: AOJu0YzaTG1Dlu2uvX/b8pfdhPqJ1/YokU7K+laa18zql5VoXEXImhjQ
	4pC4dK7V9niaW2nsRVL6ltMiNoHY7GKsCTpsOmU1eCS3d/9Vnw4LeSUZfPSoZw==
X-Google-Smtp-Source: AGHT+IEuTXxUtyHqC7z09FnXVivv41HR0M//BFJRSVrI0BkvUlqdHc+r3S+JBGzweTWPKku5JRt76g==
X-Received: by 2002:a17:906:f354:b0:a6f:13fe:75c9 with SMTP id a640c23a62f3a-a6f13fe7eb2mr204749466b.52.1718004190816;
        Mon, 10 Jun 2024 00:23:10 -0700 (PDT)
Message-ID: <b72ca235-81a5-4a68-94e3-b9d1522a42e4@suse.com>
Date: Mon, 10 Jun 2024 09:23:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 02/16] x86/altp2m: check if altp2m active when
 giving away p2midx
To: Sergiy Kibrik <sergiy_kibrik@epam.com>,
 Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <9306d31184b8e714c3a10ccc6a2b2c6a80777ddb.1717410850.git.Sergiy_Kibrik@epam.com>
 <7493c91c-070b-4828-a7f9-15d618d49ce5@suse.com>
 <c72ef6c8-6e5a-4533-a049-7636f6387e4b@suse.com>
 <971ed412-c016-4e95-b691-2e6795637c61@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <971ed412-c016-4e95-b691-2e6795637c61@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 11:40, Sergiy Kibrik wrote:
> 07.06.24 10:50, Jan Beulich:
>> On 07.06.2024 09:25, Jan Beulich wrote:
>>> On 03.06.2024 13:09, Sergiy Kibrik wrote:
>>>> @@ -38,9 +34,13 @@ static inline bool altp2m_active(const struct domain *d)
>>>>   }
>>>>   
>>>>   /* Only declaration is needed. DCE will optimise it out when linking. */
>>>> -uint16_t altp2m_vcpu_idx(const struct vcpu *v);
>>>>   void altp2m_vcpu_disable_ve(struct vcpu *v);
>>>>   
>>>>   #endif
>>>>   
>>>> +static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
>>>> +{
>>>> +    return altp2m_active(v->domain) ? vcpu_altp2m(v).p2midx : 0;
>>>> +}
>>>
>>> While perhaps okay this way as a first step,
>>
>> Hmm, or maybe not. 0 is a valid index, and hence could be misleading
>> at call sites.
> 
> I'm returning 0 index here because implementation of 
> p2m_get_mem_access() for x86 & ARM expects it to be 0 when altp2m not 
> active or not implemented.

Tamas, considering the comment in x86's p2m_get_mem_access(), what purpose
do d->arch.altp2m_p2m[0] and d->arch.altp2m_eptp[0] serve then? If it indeed
is unused, why would p2m_init_altp2m() set up slot 0 in the first place?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:30:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 07:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736969.1143057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZUI-0005vf-HG; Mon, 10 Jun 2024 07:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736969.1143057; Mon, 10 Jun 2024 07:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZUI-0005vY-Ec; Mon, 10 Jun 2024 07:30:42 +0000
Received: by outflank-mailman (input) for mailman id 736969;
 Mon, 10 Jun 2024 07:30:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGZUH-0005uC-Re
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 07:30:41 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56949246-26fb-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 09:30:40 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a63359aaaa6so599766366b.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 00:30:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1e099418sm116013466b.72.2024.06.10.00.30.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 00:30:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56949246-26fb-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718004640; x=1718609440; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=azg+i8jPdC9upu0njj2bLfuldZjvuVjTdieDPOOveK0=;
        b=eB3W2YntMTMC6hr8rLFGIeiNQ5Quw5/V4X7z2S+t85bu4MMBTKWYkAZRcdP+Thigy1
         +4XmSgVUtQeElygpI3Lo1YxJY1v5ce2O+dCpqRyIF/nBHhVosxODXE56qG83GIqOlVwl
         4/ogSeY0j7d5plF5JBuKaNm3opselRU/dzSrnj5bJWMcdkdK2/rOZzoG9ukBsu+U6CMN
         8WJIg5xC07L3uUGdWtiU/0tZHJHKCjPDQQg/TKp0GQbxJDi5g4pcywDb4/tL9ewjQvbI
         YQik6JGmXvHgSpCSwmzBuJ2mTEYRKi+NNz6OoNH8UsJFKGvDFl0fsqao6Q8tiSyFzmP+
         2yRg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718004640; x=1718609440;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=azg+i8jPdC9upu0njj2bLfuldZjvuVjTdieDPOOveK0=;
        b=VKo9b/I/FOLMSucCE44t0lnSYcH1ducSlfuDx9W0IbWAZUUCsYwObtYFFcsvQOr30T
         bABcy7Nfy+4uOFsF0aUPyyaHsFKdElQpdtPlWyCLiy7XWYQJ6+TubLy0+FHb9ynaICXo
         nDTAE9bYzz0YJ4baS3aQRAeFGgezHKHcunhEUhzJa5cGqHZNXYYZJda58q1KU0JiBGEA
         IDGKeGaB8P1kK5dcl3PAgoR+cYvfbZqa+HX+a9KI6szRXfx653IQORwofAIZnhLQQUqr
         9dlKEmu20qUXV8LAlN2YwB1NHvP8GPgkhWePAYS8j2sIzP/kQ747upQeT0p/D3D1305A
         IdmQ==
X-Forwarded-Encrypted: i=1; AJvYcCUCUS2E31tLdp67PwVNT/aOaAoNgm4NSp1sn52fKLYlb/WVWuDtkAjqx1iqdBjrRdsN9aEOHrqPKQLR17iL60Pc7xEy3YpmjHrZDjIdM6s=
X-Gm-Message-State: AOJu0Yzj7ba+5/TPqVDfmGycjHXt0naheAR7gDDoZYZI0UYFu1RwCfju
	ee4utwQDegwmr6SBJ4XMHnh9GbvxZWOyIcNt4zXSKSsWDd3ekgj7pXh4O9yvBg==
X-Google-Smtp-Source: AGHT+IF43XhAidsP8/nIzdYsw8KXB0QcdwlVoDPrsdPQMs1o6uANeGnoQm44q2kXrBHXxTatKZsBow==
X-Received: by 2002:a17:906:8418:b0:a6f:1fd6:61b6 with SMTP id a640c23a62f3a-a6f1fd661femr89962066b.35.1718004639785;
        Mon, 10 Jun 2024 00:30:39 -0700 (PDT)
Message-ID: <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com>
Date: Mon, 10 Jun 2024 09:30:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com>
 <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 09.06.2024 01:06, Petr Beneš wrote:
> On Thu, Jun 6, 2024 at 9:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
>>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>>>
>>>      mm_lock_init(&d->arch.altp2m_list_lock);
>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>> +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
>>> +
>>> +    if ( !d->arch.altp2m_p2m )
>>> +        return -ENOMEM;
>>
>> This isn't really needed, is it? Both ...
>>
>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>
>> ... this and ...
>>
>>>      {
>>>          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
>>>          if ( p2m == NULL )
>>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
>>>      unsigned int i;
>>>      struct p2m_domain *p2m;
>>>
>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>> +    if ( !d->arch.altp2m_p2m )
>>> +        return;

I'm sorry, the question was meant to be on this if() instead.

>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>>      {
>>>          if ( !d->arch.altp2m_p2m[i] )
>>>              continue;
>>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
>>>          d->arch.altp2m_p2m[i] = NULL;
>>>          p2m_free_one(p2m);
>>>      }
>>> +
>>> +    XFREE(d->arch.altp2m_p2m);
>>>  }
>>
>> ... this ought to be fine without?
> 
> Could you please elaborate? I honestly don't know what you mean here
> by "this isn't needed".

I hope the above correction is enough?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:33:25 2024
Message-ID: <651a4f58-9737-462e-9ffa-1d3d14db6963@suse.com>
Date: Mon, 10 Jun 2024 09:33:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 09/10] xen/x86: Disallow creating domains
 with altp2m enabled and altp2m.nr == 0
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <d6fd97b66b5f1a974e317c9d3f72fb139b39118f.1717356829.git.w1benny@gmail.com>
 <8936b5ef-1ef7-4606-9f19-c75287aa88fa@suse.com>
 <CAKBKdXi9uiADe+rXyHSd6HV3A-MvT04fgJAdJA=LNuYOr2Eedw@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKBKdXi9uiADe+rXyHSd6HV3A-MvT04fgJAdJA=LNuYOr2Eedw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 09.06.2024 01:08, Petr Beneš wrote:
> The flag isn't a bool. It can hold one of 3 values (besides 0).

Oh, sorry, yes.

Acked-by: Jan Beulich <jbeulich@suse.com>

Jan

> On Thu, Jun 6, 2024 at 9:57 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 02.06.2024 22:04, Petr Beneš wrote:
>>> From: Petr Beneš <w1benny@gmail.com>
>>>
>>> Since libxl finally sends the altp2m.nr value, we can remove the previously
>>> introduced temporary workaround.
>>>
>>> Creating a domain with altp2m enabled while setting altp2m.nr == 0 doesn't
>>> make sense and is probably not what the user wants.
>>
>> Yet: Do we need separate count and flag anymore? Can't "nr != 0"
>> take the place of the flag being true?
>>
>> Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:41:21 2024
Message-ID: <769044faa70235c4c293b56024ac434de55f5918.camel@gmail.com>
Subject: Re: [PATCH] MAINTAINERS: alter EFI section
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Daniel Smith <dpsmith@apertussolutions.com>, Marek Marczykowski
 <marmarek@invisiblethingslab.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>,  George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Date: Mon, 10 Jun 2024 09:41:04 +0200
In-Reply-To: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>
References: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Mon, 2024-06-10 at 08:38 +0200, Jan Beulich wrote:
> To get past the recurring friction on the approach to take wrt
> workarounds needed for various firmware flaws, I'm stepping down as
> the
> maintainer of our code interfacing with EFI firmware. Two new
> maintainers are being introduced in my place.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> ---
> For the new maintainers, here's a 1st patch to consider right away:
> https://lists.xen.org/archives/html/xen-devel/2024-03/msg00931.html.
>
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -308,7 +308,9 @@ F:	automation/eclair_analysis/
>  F:	automation/scripts/eclair
>
>  EFI
> -M:	Jan Beulich <jbeulich@suse.com>
> +M:	Daniel P. Smith <dpsmith@apertussolutions.com>
> +M:	Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> +R:	Jan Beulich <jbeulich@suse.com>
>  S:	Supported
>  F:	xen/arch/x86/efi/
>  F:	xen/arch/x86/include/asm/efi*.h


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:43:18 2024
Message-ID: <0cae0e19-8512-40e0-9ef2-6e91069779ec@suse.com>
Date: Mon, 10 Jun 2024 09:43:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH for-4.19 v2 3/3] automation/eclair_analysis: add more
 clean MISRA guidelines
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1717790683.git.nicola.vetrini@bugseng.com>
 <42645b41cf9d2d8b5ef72f0b171989711edb00a1.1717790683.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <42645b41cf9d2d8b5ef72f0b171989711edb00a1.1717790683.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 22:13, Nicola Vetrini wrote:
> Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they are added
> to the list of clean guidelines.

Why is 20.9 being mentioned here when ...

> --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
> +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
> @@ -60,6 +60,7 @@ MC3R1.R11.7||
>  MC3R1.R11.9||
>  MC3R1.R12.5||
>  MC3R1.R14.1||
> +MC3R1.R14.4||
>  MC3R1.R16.7||
>  MC3R1.R17.1||
>  MC3R1.R17.3||
> @@ -73,6 +74,7 @@ MC3R1.R20.4||
>  MC3R1.R20.6||
>  MC3R1.R20.9||
>  MC3R1.R20.11||
> +MC3R1.R20.12||
>  MC3R1.R20.13||
>  MC3R1.R20.14||
>  MC3R1.R21.3||

... nothing changes in its regard?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:49:20 2024
Date: Mon, 10 Jun 2024 15:48:31 +0800
From: kernel test robot <lkp@intel.com>
To: Abhinav Jain <jain.abhinav177@gmail.com>, jgross@suse.com,
	sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: oe-kbuild-all@lists.linux.dev, skhan@linuxfoundation.org,
	javier.carrasco.cruz@gmail.com, jain.abhinav177@gmail.com
Subject: Re: [PATCH] xen: xen-pciback: Export a bridge and all its children
 as per TODO
Message-ID: <202406101511.hTO5m855-lkp@intel.com>
References: <20240609184410.53500-1-jain.abhinav177@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240609184410.53500-1-jain.abhinav177@gmail.com>

Hi Abhinav,

kernel test robot noticed the following build errors:

[auto build test ERROR on xen-tip/linux-next]
[also build test ERROR on linus/master v6.10-rc3 next-20240607]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting a patch, we suggest using '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Abhinav-Jain/xen-xen-pciback-Export-a-bridge-and-all-its-children-as-per-TODO/20240610-024623
base:   https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next
patch link:    https://lore.kernel.org/r/20240609184410.53500-1-jain.abhinav177%40gmail.com
patch subject: [PATCH] xen: xen-pciback: Export a bridge and all its children as per TODO
config: arm64-defconfig (https://download.01.org/0day-ci/archive/20240610/202406101511.hTO5m855-lkp@intel.com/config)
compiler: aarch64-linux-gcc (GCC) 13.2.0
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240610/202406101511.hTO5m855-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406101511.hTO5m855-lkp@intel.com/

All errors (new ones prefixed by >>):

   In file included from include/linux/device.h:15,
                    from include/xen/xenbus.h:37,
                    from drivers/xen/xen-pciback/xenbus.c:15:
   drivers/xen/xen-pciback/xenbus.c: In function 'xen_pcibk_export_device':
>> drivers/xen/xen-pciback/xenbus.c:270:38: error: 'struct pci_dev' has no member named 'domain'
     270 |                                 child->domain, child->bus->number,
         |                                      ^~
   include/linux/dev_printk.h:129:48: note: in definition of macro 'dev_printk'
     129 |                 _dev_printk(level, dev, fmt, ##__VA_ARGS__);            \
         |                                                ^~~~~~~~~~~
   drivers/xen/xen-pciback/xenbus.c:268:25: note: in expansion of macro 'dev_dbg'
     268 |                         dev_dbg(&pdev->xdev->dev,
         |                         ^~~~~~~
   drivers/xen/xen-pciback/xenbus.c:275:60: error: 'struct pci_dev' has no member named 'domain'
     275 |                                                       child->domain,
         |                                                            ^~
   drivers/xen/xen-pciback/xenbus.c:284:46: error: 'struct pci_dev' has no member named 'domain'
     284 |                                         child->domain,
         |                                              ^~
   include/linux/dev_printk.h:110:37: note: in definition of macro 'dev_printk_index_wrap'
     110 |                 _p_func(dev, fmt, ##__VA_ARGS__);                       \
         |                                     ^~~~~~~~~~~
   drivers/xen/xen-pciback/xenbus.c:281:33: note: in expansion of macro 'dev_err'
     281 |                                 dev_err(&pdev->xdev->dev,
         |                                 ^~~~~~~


vim +270 drivers/xen/xen-pciback/xenbus.c

   225	
   226	static int xen_pcibk_export_device(struct xen_pcibk_device *pdev,
   227					 int domain, int bus, int slot, int func,
   228					 int devid)
   229	{
   230		struct pci_dev *dev;
   231		int err = 0;
   232	
   233		dev_dbg(&pdev->xdev->dev, "exporting dom %x bus %x slot %x func %x\n",
   234			domain, bus, slot, func);
   235	
   236		dev = pcistub_get_pci_dev_by_slot(pdev, domain, bus, slot, func);
   237		if (!dev) {
   238			err = -EINVAL;
   239			xenbus_dev_fatal(pdev->xdev, err,
   240					 "Couldn't locate PCI device "
   241					 "(%04x:%02x:%02x.%d)! "
   242					 "perhaps already in-use?",
   243					 domain, bus, slot, func);
   244			goto out;
   245		}
   246	
   247		err = xen_pcibk_add_pci_dev(pdev, dev, devid,
   248					    xen_pcibk_publish_pci_dev);
   249		if (err)
   250			goto out;
   251	
   252		dev_info(&dev->dev, "registering for %d\n", pdev->xdev->otherend_id);
   253		if (xen_register_device_domain_owner(dev,
   254						     pdev->xdev->otherend_id) != 0) {
   255			dev_err(&dev->dev, "Stealing ownership from dom%d.\n",
   256				xen_find_device_domain_owner(dev));
   257			xen_unregister_device_domain_owner(dev);
   258			xen_register_device_domain_owner(dev, pdev->xdev->otherend_id);
   259		}
   260	
   261		/* Check if the device is a bridge and export all its children */
   262		if ((dev->hdr_type && PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
   263			struct pci_dev *child = NULL;
   264	
   265			/* Iterate over all the devices in this bridge */
   266			list_for_each_entry(child, &dev->subordinate->devices,
   267					bus_list) {
   268				dev_dbg(&pdev->xdev->dev,
   269					"exporting child device %04x:%02x:%02x.%d\n",
 > 270					child->domain, child->bus->number,
   271					PCI_SLOT(child->devfn),
   272					PCI_FUNC(child->devfn));
   273	
   274				err = xen_pcibk_export_device(pdev,
   275							      child->domain,
   276							      child->bus->number,
   277							      PCI_SLOT(child->devfn),
   278							      PCI_FUNC(child->devfn),
   279							      devid);
   280				if (err) {
   281					dev_err(&pdev->xdev->dev,
   282						"failed to export child device : "
   283						"%04x:%02x:%02x.%d\n",
   284						child->domain,
   285						child->bus->number,
   286						PCI_SLOT(child->devfn),
   287						PCI_FUNC(child->devfn));
   288					goto out;
   289				}
   290			}
   291		}
   292	out:
   293		return err;
   294	}
   295	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:49:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 07:49:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.736997.1143107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZmq-0003EN-0d; Mon, 10 Jun 2024 07:49:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 736997.1143107; Mon, 10 Jun 2024 07:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZmp-0003EG-UE; Mon, 10 Jun 2024 07:49:51 +0000
Received: by outflank-mailman (input) for mailman id 736997;
 Mon, 10 Jun 2024 07:49:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGZmp-0003D5-0Y
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 07:49:51 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 03f2c942-26fe-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 09:49:50 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6266ffdba8so311449766b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 00:49:50 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6efed85dc3sm315910266b.124.2024.06.10.00.49.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 00:49:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03f2c942-26fe-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718005790; x=1718610590; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KwiEJNVRVOeG3gLM45b2+/KiKZQYIw3B7SIhM/nKLRs=;
        b=F/iPOaz7QSKrE3flLtmCdRe4xwsOfYDbQ5DaJ/ggMbmckVDQ8PZ4tL2Grv77jF1ZuJ
         qulVYfdNI5oXKcw+5mEUoix3TxQzFP96vAUevbuI6NxvxZL5VlykvHxgzkTOGHDDUiul
         ir3TIA5ipt4K3TxbUDFuSD3hLyFiaPsertk12sStPFf/1VKjthGtDuIq1Qvg/9K8jBkc
         0xsUswWg/fW0SyeG6wv40KmTk1i+XUiUoIYJ6LZkYgLIZJkzNY7hVEOamFD23nodIOD0
         52xFVuUwqhXglO3ddNja6FuR8m64BLU4chzz9MBaf39dkYZUHd7DJbl4fOu+D/hGc8Mi
         9SwQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718005790; x=1718610590;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KwiEJNVRVOeG3gLM45b2+/KiKZQYIw3B7SIhM/nKLRs=;
        b=jXxwoBUltee4iW3D3ehSM6aUGAOLIt2TBEG2piHFbO+Oe6MdqtmIdGbTKAri3PSsQY
         NPfdH7pfC53Pe8W0a3/TQFBuzfiTT2S5mdRRFUbcXD/KGNWup60JZdgvxPhWkbUADzPM
         w86nAhm6egxdNPn5oFyq6pQDT1DoUT3XPhg+W0wY66/YwhtgeYAAI4bdhjJjC/Iw6Jh7
         JrS759o+QvzVUIwnDHEdzrs8ArMNLj1Z7eOffs4W8+UO25pHTwFAciUMUPV80mNApq4S
         MUKYDwakJEdrHkCZtvQdn3jMdW1UDDsKokiOoBYyXpiAMlL2cX3WonLKwpyxc6RYEjQg
         diiQ==
X-Forwarded-Encrypted: i=1; AJvYcCXP+zAB26KaBJBjDP3+5Gp6TlPsCvcvmBVdWP/Uxy0Lyj00o2z5HKRKLAQiFs5iRpy6ESpCA4G4ueCo1tm6vGgf9DmMFHwHix9t+I0gZuk=
X-Gm-Message-State: AOJu0YwehqONKcNwo0fh/B3TfxBg/c3ycDxgSi6MuLXI0SxNBzpygE85
	U/mvXiP1ftkTCL66NDJjOnEbPJNaL0FDrW5KnGMlRQgFI4uB2DrjA7w2vfhvkw==
X-Google-Smtp-Source: AGHT+IHwC/Vukz1JeyzipUsg056gd0k4WnZOoz8F/+UkhcQ0n2rmEUO9tEIwvIH7PM7xzw4G844e+g==
X-Received: by 2002:a17:906:68c5:b0:a6f:2d5c:5c8d with SMTP id a640c23a62f3a-a6f2d5c6342mr17558766b.30.1718005789617;
        Mon, 10 Jun 2024 00:49:49 -0700 (PDT)
Message-ID: <a5b5b0f4-0fe3-451b-b2d5-c9d2bcc91bc4@suse.com>
Date: Mon, 10 Jun 2024 09:49:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen: xen-pciback: Export a bridge and all its children as
 per TODO
To: Abhinav Jain <jain.abhinav177@gmail.com>
Cc: skhan@linuxfoundation.org, javier.carrasco.cruz@gmail.com,
 jgross@suse.com, sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <20240609184410.53500-1-jain.abhinav177@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240609184410.53500-1-jain.abhinav177@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 09.06.2024 20:44, Abhinav Jain wrote:
> Check if the device is a bridge.
> If it is a bridge, iterate over all its child devices and export them.
> Log an error with the device's details if the export fails for any
> particular device.
> The export error string is split across lines, matching several other
> such occurrences in the file.
> Please let me know if I should change it in some way.
> 
> Signed-off-by: Abhinav Jain <jain.abhinav177@gmail.com>
> ---
>  drivers/xen/xen-pciback/xenbus.c | 39 +++++++++++++++++++++++++-------
>  1 file changed, 31 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/xen/xen-pciback/xenbus.c b/drivers/xen/xen-pciback/xenbus.c
> index b11e401f1b1e..d15271d33ad6 100644
> --- a/drivers/xen/xen-pciback/xenbus.c
> +++ b/drivers/xen/xen-pciback/xenbus.c
> @@ -258,14 +258,37 @@ static int xen_pcibk_export_device(struct xen_pcibk_device *pdev,
>  		xen_register_device_domain_owner(dev, pdev->xdev->otherend_id);
>  	}
>  
> -	/* TODO: It'd be nice to export a bridge and have all of its children
> -	 * get exported with it. This may be best done in xend (which will
> -	 * have to calculate resource usage anyway) but we probably want to
> -	 * put something in here to ensure that if a bridge gets given to a
> -	 * driver domain, that all devices under that bridge are not given
> -	 * to other driver domains (as he who controls the bridge can disable
> -	 * it and stop the other devices from working).
> -	 */
> +	/* Check if the device is a bridge and export all its children */
> +	if ((dev->hdr_type && PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {

Besides it wanting to be & here, it feels as if such a change can't be done
standalone in pciback. It likely needs adjustments in the tool stack (even
if that's not xend anymore) and possibly also arrangements in the hypervisor
(to correctly deal with bridges when handed to other than Dom0).

> +		struct pci_dev *child = NULL;
> +
> +		/* Iterate over all the devices in this bridge */
> +		list_for_each_entry(child, &dev->subordinate->devices,
> +				bus_list) {
> +			dev_dbg(&pdev->xdev->dev,
> +				"exporting child device %04x:%02x:%02x.%d\n",
> +				child->domain, child->bus->number,
> +				PCI_SLOT(child->devfn),
> +				PCI_FUNC(child->devfn));
> +
> +			err = xen_pcibk_export_device(pdev,
> +						      child->domain,
> +						      child->bus->number,
> +						      PCI_SLOT(child->devfn),
> +						      PCI_FUNC(child->devfn),
> +						      devid);
> +			if (err) {
> +				dev_err(&pdev->xdev->dev,
> +					"failed to export child device : "
> +					"%04x:%02x:%02x.%d\n",
> +					child->domain,
> +					child->bus->number,
> +					PCI_SLOT(child->devfn),
> +					PCI_FUNC(child->devfn));
> +				goto out;

Hmm, and leaving things in a partially-exported state?

Jan

> +			}
> +		}
> +	}
>  out:
>  	return err;
>  }



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 07:58:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 07:58:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737012.1143118 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZuy-0005rR-TX; Mon, 10 Jun 2024 07:58:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737012.1143118; Mon, 10 Jun 2024 07:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGZuy-0005rK-QL; Mon, 10 Jun 2024 07:58:16 +0000
Received: by outflank-mailman (input) for mailman id 737012;
 Mon, 10 Jun 2024 07:58:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGZux-0005rE-Ky
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 07:58:15 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2fe8ff29-26ff-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 09:58:13 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6269885572so900858566b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 00:58:13 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c6d34f607sm3793582a12.30.2024.06.10.00.58.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 00:58:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fe8ff29-26ff-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718006293; x=1718611093; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=tjZzn6huI32bLIfB+xQcOGq8u/3dyaxQvwAifJWwFaw=;
        b=K0yHl5SRD+n1j5XWf9F9+41UZeqdcvD4eBiX3lIBGWqNuPvJ0DXuu53C6a96Xm5sef
         E30qkBeKsxipn+Wdntz5cKClSs/aPK+lbhQMpQHzEhASXp3paA+A+CeBVnkveYUVb+Hb
         zlnq0PpdBYzD8QRbSV/MmlILWjwVkOFwykNsBUIisIutN7OzWuiNCZXoh+64NhyTSwRt
         NZzzolZsAMKvX7vsuBK3a60J4IZptxgQt0zOY3BZRR6eN5juYLCfj42ki5ZU4R6Dvoht
         88+YMNIifJIOnCVm6K4G4kW4t5t7M3Ug/VUVREI36WhitFyybW2Sm/GbzlchOV9aRySr
         WC3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718006293; x=1718611093;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=tjZzn6huI32bLIfB+xQcOGq8u/3dyaxQvwAifJWwFaw=;
        b=wTft+Fvcjo+39933XEq6fk7z7ZGLGJbOngpiKz2l6VRh+MMkiRsM2t4NLM042vPMc+
         dPTIg96z/uptov5RBnBQ+UA73kHE4IZ/+bqOnt37jvmp10pYUF9pHDVm3yaiIqEjSS81
         Elia/t4qUadcsLl688KpDMxXtoveyzH6sJhJCpZPgKSBbaUDnibA4AU0BpXITFkrBsm1
         IbCI2TOIlf32cMB56tvgTQyLOwiQuMxEVIaelMTOjV+1FKfVVVHaRYNK4+QaXHuLZvQn
         zYR4SYw3KDh23KLNvTtwp6uxQpia2KqUKn9QDqKX8nR3pCao6eQ6+cnkc/cAfbBAtdvB
         X2KQ==
X-Gm-Message-State: AOJu0YxsqdL72gV+TRwcxwnn/xgxNIxrenucBFXGiMg6UXdC+qJ31KhC
	DtJcGDsd3UyxsBrwxmh99cDEJuX/ZVB47KJT+cjqCwb7Zu4o3QIaUJDw7xxIsETuJ1EMY0v5V3c
	=
X-Google-Smtp-Source: AGHT+IEaVLAs9FvqUYV7YrgTyMEalk2IGuQCeGAr6nbl25cdJPM0gEGAV1Hn3SJnnlUpt+YQ41jLAg==
X-Received: by 2002:a17:906:adb:b0:a6e:f655:ef29 with SMTP id a640c23a62f3a-a6ef655f132mr491449066b.17.1718006292898;
        Mon, 10 Jun 2024 00:58:12 -0700 (PDT)
Message-ID: <b609eaab-0a0a-433b-81d3-84a0cd90ebc1@suse.com>
Date: Mon, 10 Jun 2024 09:58:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Segment truncation in multi-segment PCI handling?
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <ZmNjoeFAwWz8xhfM@mail-itl>
 <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
Content-Language: en-US
Cc: xen-devel <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 07.06.2024 21:52, Andrew Cooper wrote:
> On 07/06/2024 8:46 pm, Marek Marczykowski-Górecki wrote:
>> Hi,
>>
>> I've got a new system, and it has two PCI segments:
>>
>>     0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
>>     0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
>>     ...
>>     10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
>>     10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
>>     10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
>>
>> But looks like Xen doesn't handle it correctly:
>>
>>     (XEN) 0000:e0:06.0: unknown type 0
>>     (XEN) 0000:e0:06.2: unknown type 0
>>     (XEN) 0000:e1:00.0: unknown type 0
>>     ...
>>     (XEN) ==== PCI devices ====
>>     (XEN) ==== segment 0000 ====
>>     (XEN) 0000:e1:00.0 - NULL - node -1 
>>     (XEN) 0000:e0:06.2 - NULL - node -1 
>>     (XEN) 0000:e0:06.0 - NULL - node -1 
>>     (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
>>     (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
>>     ...
>>
>> This isn't exactly surprising, since pci_sbdf_t.seg is uint16_t, so
>> 0x10000 doesn't fit. OSDev wiki says PCI Express can have 65536 PCI
>> Segment Groups, each with 256 bus segments.
>>
>> Fortunately, I don't need this to work, if I disable VMD in the
>> firmware, I get a single segment and everything works fine.
>>
> 
> This is a known issue.  Work is being done, albeit slowly.

Is work being done? After the design session in Prague I put it on my
todo list, but at low priority. I'd be happy to take it off there if I
knew someone else is looking into this.

> 0x10000 is indeed not a spec-compliant PCI segment.  It's something
> model specific the Linux VMD driver is doing.

I wouldn't call this "model specific" - this numbering is purely a
software one (and would need coordinating between Dom0 and Xen).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:15:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:15:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737026.1143128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaAq-0002B2-8b; Mon, 10 Jun 2024 08:14:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737026.1143128; Mon, 10 Jun 2024 08:14:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaAq-0002Au-3Z; Mon, 10 Jun 2024 08:14:40 +0000
Received: by outflank-mailman (input) for mailman id 737026;
 Mon, 10 Jun 2024 08:14:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yElo=NM=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sGaAo-0002Ao-P8
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:14:38 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 79a5e171-2701-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 10:14:36 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id B75DD4EE0737;
 Mon, 10 Jun 2024 10:14:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79a5e171-2701-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Mon, 10 Jun 2024 10:14:35 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Oleksii
 Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH for-4.19 v2 3/3] automation/eclair_analysis: add more
 clean MISRA guidelines
In-Reply-To: <0cae0e19-8512-40e0-9ef2-6e91069779ec@suse.com>
References: <cover.1717790683.git.nicola.vetrini@bugseng.com>
 <42645b41cf9d2d8b5ef72f0b171989711edb00a1.1717790683.git.nicola.vetrini@bugseng.com>
 <0cae0e19-8512-40e0-9ef2-6e91069779ec@suse.com>
Message-ID: <e199bad317efee793a995523d6d10eac@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-10 09:43, Jan Beulich wrote:
> On 07.06.2024 22:13, Nicola Vetrini wrote:
>> Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they are 
>> added
>> to the list of clean guidelines.
> 
> Why is 20.9 being mentioned here when ...
> 
>> --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
>> +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
>> @@ -60,6 +60,7 @@ MC3R1.R11.7||
>>  MC3R1.R11.9||
>>  MC3R1.R12.5||
>>  MC3R1.R14.1||
>> +MC3R1.R14.4||
>>  MC3R1.R16.7||
>>  MC3R1.R17.1||
>>  MC3R1.R17.3||
>> @@ -73,6 +74,7 @@ MC3R1.R20.4||
>>  MC3R1.R20.6||
>>  MC3R1.R20.9||
>>  MC3R1.R20.11||
>> +MC3R1.R20.12||
>>  MC3R1.R20.13||
>>  MC3R1.R20.14||
>>  MC3R1.R21.3||
> 
> ... nothing changes in its regard?
> 

Right, it should be removed from the message.

> Jan

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:29:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:29:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737033.1143138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaOg-0005GW-AZ; Mon, 10 Jun 2024 08:28:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737033.1143138; Mon, 10 Jun 2024 08:28:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaOg-0005GP-7e; Mon, 10 Jun 2024 08:28:58 +0000
Received: by outflank-mailman (input) for mailman id 737033;
 Mon, 10 Jun 2024 08:28:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGaOf-0005F2-Sn
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:28:57 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 79ee76d2-2703-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 10:28:55 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id
 2adb3069b0e04-52bc1261f45so2585180e87.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 01:28:55 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-421b905908asm33111905e9.20.2024.06.10.01.28.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 01:28:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79ee76d2-2703-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718008135; x=1718612935; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=Tk0EtazRDnLiSQ09qxK61PlwAAEhoixNyO5SJIi56Bg=;
        b=eb7DtaE10Vb/SR2TyZgYCp5rh4zZ6X6R9oQWlpW79RvWkXCBdhhvfZ2t5vCPdBxfut
         /10MpQtii53UG06wjBza1Ts3eCLhGQGuVBJ7VMAWWxkP4yKpggp8FdbmeDPG8YO18oy/
         WQYsEpF/oABYoj+xEo1J2UF1MbiMa4csidflg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718008135; x=1718612935;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Tk0EtazRDnLiSQ09qxK61PlwAAEhoixNyO5SJIi56Bg=;
        b=CnJtRRJ5FJwvTiwWjy5W02r1ubZ4igxHe2AzG7MPQeE+51+Ipk2qwyR5Y87pdf0fAU
         bKGI/6Mlob/m+TpyTGz6ts5yT6GVmJhfchP33T7Fd4gwRLOdUOXh62HEEZEkeWGRZ8hn
         VJJCgkl8e1ixk8ov0nRiOvNjjr8udjjX4vcBciaDDVHXr0KvKkUmkAEc3L/GvtQLOcKm
         saJhkKG0g2shGUvH0Nh0x11Ch403zBdPALQAhCEr8NUOTw8sJe3R5Jg83E+GqebRWkNV
         XcBYA9ga/PUELZoeFyCZmeE4befkXv/uP44LXBEfUXsOiDZLiSXDlUcgqk2LtOUd/HF6
         iOQA==
X-Forwarded-Encrypted: i=1; AJvYcCWHz/sWNGtqG80PD9PkGWYzIJgw4RdiEREcQo/V4ExQDT6k2M0PJnvjp29+6S8JG0LOO0XTgmMJz4FKMfS0izX8XIdAw8uNW8QCIQI/ee4=
X-Gm-Message-State: AOJu0YygE/Hmbg9sIS40ruTZszueMQhcd093UhQ+3qapevBfvMPLIn2D
	cp9FOITRKgQp9JaEIOHrdGd5+iPR2pB+U3kQmKTtBTsFHRQhw4MnvxwDqGmlF/Q=
X-Google-Smtp-Source: AGHT+IFo+Lz1v9SpkOVGLo+aTKf5D9dA/hlMM0GpA6vz7w9aKnvAasLII4NZ7XW61n6c87e6doCUTQ==
X-Received: by 2002:ac2:5b92:0:b0:52c:87d7:4b3f with SMTP id 2adb3069b0e04-52c87d74df2mr1709780e87.54.1718008135332;
        Mon, 10 Jun 2024 01:28:55 -0700 (PDT)
Date: Mon, 10 Jun 2024 10:28:54 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>, javi.merino@cloud.com
Subject: Re: Segment truncation in multi-segment PCI handling?
Message-ID: <Zma5Rj_cswrIYcD2@macbook>
References: <ZmNjoeFAwWz8xhfM@mail-itl>
 <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
 <b609eaab-0a0a-433b-81d3-84a0cd90ebc1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b609eaab-0a0a-433b-81d3-84a0cd90ebc1@suse.com>

On Mon, Jun 10, 2024 at 09:58:11AM +0200, Jan Beulich wrote:
> On 07.06.2024 21:52, Andrew Cooper wrote:
> > On 07/06/2024 8:46 pm, Marek Marczykowski-Górecki wrote:
> >> Hi,
> >>
> >> I've got a new system, and it has two PCI segments:
> >>
> >>     0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
> >>     0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
> >>     ...
> >>     10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
> >>     10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
> >>     10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
> >>
> >> But looks like Xen doesn't handle it correctly:

In the meantime you can probably disable VMD from the firmware and the
NVMe devices should appear on the regular PCI bus.

> >>     (XEN) 0000:e0:06.0: unknown type 0
> >>     (XEN) 0000:e0:06.2: unknown type 0
> >>     (XEN) 0000:e1:00.0: unknown type 0
> >>     ...
> >>     (XEN) ==== PCI devices ====
> >>     (XEN) ==== segment 0000 ====
> >>     (XEN) 0000:e1:00.0 - NULL - node -1 
> >>     (XEN) 0000:e0:06.2 - NULL - node -1 
> >>     (XEN) 0000:e0:06.0 - NULL - node -1 
> >>     (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
> >>     (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
> >>     ...
> >>
> >> This isn't exactly surprising, since pci_sbdf_t.seg is uint16_t, so
> >> 0x10000 doesn't fit. OSDev wiki says PCI Express can have 65536 PCI
> >> Segment Groups, each with 256 bus segments.
> >>
> >> Fortunately, I don't need this to work; if I disable VMD in the
> >> firmware, I get a single segment and everything works fine.
> >>
> > 
> > This is a known issue.  Work is being done, albeit slowly.
> 
> Is work being done? After the design session in Prague I put it on my
> todo list, but at low priority. I'd be happy to take it off there if I
> knew someone else is looking into this.

We had a design session about VMD?  If so I'm afraid I've missed it.

> > 0x10000 is indeed not a spec-compliant PCI segment.  It's something
> > model specific the Linux VMD driver is doing.
> 
> I wouldn't call this "model specific" - this numbering is purely a
> software one (and would need coordinating between Dom0 and Xen).

Hm, TBH I'm not sure whether Xen needs to be aware of VMD devices.
The resources used by the VMD devices are all assigned to the VMD
root.  My current hypothesis is that it might be possible to manage
such devices without Xen being aware of their existence.

Regards, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:34:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:34:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737039.1143147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaUI-0006iv-St; Mon, 10 Jun 2024 08:34:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737039.1143147; Mon, 10 Jun 2024 08:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaUI-0006io-QC; Mon, 10 Jun 2024 08:34:46 +0000
Received: by outflank-mailman (input) for mailman id 737039;
 Mon, 10 Jun 2024 08:34:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ruA2=NM=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sGaUH-0006ii-G6
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:34:45 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 49891480-2704-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 10:34:44 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a6e43dad8ecso416198266b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 01:34:44 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6c805cc764sm605803266b.76.2024.06.10.01.34.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 01:34:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49891480-2704-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718008483; x=1718613283; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=FqPUijVzLO/kkuZ31y7nw3TyhqDSp+Y8LEwyr2uuO/g=;
        b=VNysWOs8h4zquLUxS8IGNyVuG+YZaPO+Ob3Knn3mSTem0G5w89GKEH5ZOnOrlhs79u
         Q27HczDXBqkX3M5RhMhvSkmdpcn0Gl/sxJfOTmk8l5yaCS3mLEuZxrY3ZehOZZih0gSJ
         MRfo29+EJ8S87o5jDsvpsIHEXAgPWEmk/TQvk/Tejtb9tHhNo/eCrVCptLw0qsug08Vk
         uHvZN8cqf/KE7nUwROwoW1jJ5EIhYWdZTSpRUyNptsx7FTM2CeOyNC2Sdw5bQEdmpQwV
         nk0wce3qqySXg6f1ePBs1flvLLyuV5izSH6uVoHafcCQU9qaGFe4OOmHx98FrDqJgxp/
         jeOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718008483; x=1718613283;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=FqPUijVzLO/kkuZ31y7nw3TyhqDSp+Y8LEwyr2uuO/g=;
        b=m2dbZK3IIW7JUwJXuQEh6Ej85dus7y/LHuUcVihUJHgxhSVQ3bzXgR7aAOcwHodD2y
         lXUs9eNn/O8/3SAc8nicCj6jeOYZpgtoo4d43bQSzQ30UdmpLu4QNhRsBthRhlsgXCYz
         4q56V7wLe1RG/BkXhjmKiZIt7Gb0b3DEgF84hK7aMD/8ldd5ddKHxCbGpV8LkfAzr7JB
         R50SsslomKGHaNWx7XahDYxZTHaMSiChGyNtCS8aEz7A3h+gCZyGf9JgltMtbv9zEP31
         xA67tHu5hICICN3664PHYu9xrdadF8bkVaiaQ0s0066fbJPJPxkDqxadWG03h2Gf0iVa
         IvTQ==
X-Forwarded-Encrypted: i=1; AJvYcCUX5vNGwrrl1xbeekO8vDgXs31Fv1KvdNFi385XeI/gjMXcS30I3fCkPhbrn9ngx0u+1en1BC9cVOrEQ7s+D+0FNk6ofBIRCYnWQUFDO9I=
X-Gm-Message-State: AOJu0Yz2Qfo6E6F1C1dkoXy6wOInpSu9m0a9N3DcH6CwDvkrrfH9s42G
	O3Qa5EMg8P7yxSPqjFT1DtW4i8rqJ7ByXOahZscFdCed8o/7V8IF
X-Google-Smtp-Source: AGHT+IH3W+CLY/uNaCTiMr1Pz4aIqkvvN9/0K3WknXtXbu+B2QtXiDRCmmZQujKuO2SqP8BHy0rXeQ==
X-Received: by 2002:a17:907:86a4:b0:a6f:1727:cf4b with SMTP id a640c23a62f3a-a6f1727d4dbmr229189166b.23.1718008483202;
        Mon, 10 Jun 2024 01:34:43 -0700 (PDT)
Message-ID: <d53485f2dd22ae73102311eb582187c911e6be5c.camel@gmail.com>
Subject: Re: [PATCH for-4.19] x86/pvh: declare PVH dom0 supported with
 caveats
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Community Manager <community.manager@xenproject.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Date: Mon, 10 Jun 2024 10:34:42 +0200
In-Reply-To: <20240607100320.11723-1-roger.pau@citrix.com>
References: <20240607100320.11723-1-roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Fri, 2024-06-07 at 12:03 +0200, Roger Pau Monne wrote:
> PVH dom0 is functionally very similar to PVH domU except for the domain
> builder and the added set of hypercalls available to it.
> 
> The main concern with declaring it "Supported" is the lack of some features
> when compared to classic PV dom0, hence switch its status to supported with
> caveats.  List the known missing features; there might be more features
> missing or not working as expected apart from the ones listed.
> 
> Note there's some (limited) PVH dom0 testing on both osstest and gitlab.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> ---
> Hopefully this will attract more testing and resources to PVH dom0 in order
> to try to finish the missing features.
> ---
>  CHANGELOG.md |  1 +
>  SUPPORT.md   | 15 ++++++++++++++-
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 201478aa1c0e..1778419cae64 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>     - HVM PIRQs are disabled by default.
>     - Reduce IOMMU setup time for hardware domain.
>     - Allow HVM/PVH domains to map foreign pages.
> +   - Declare PVH dom0 supported with caveats.
>   - xl/libxl configures vkb=[] for HVM domains with priority over vkb_device.
>   - Increase the maximum number of CPUs Xen can be built for from 4095 to
>     16383.
> diff --git a/SUPPORT.md b/SUPPORT.md
> index d5d60c62ec11..711aacf34662 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -161,7 +161,20 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM).
>  Dom0 support requires an IOMMU (Intel VT-d / AMD IOMMU).
>  
>      Status, domU: Supported
> -    Status, dom0: Experimental
> +    Status, dom0: Supported, with caveats
> +
> +PVH dom0 hasn't received the same test coverage as PV dom0, so it can exhibit
> +unexpected behavior or issues on some hardware.
> +
> +At least the following features are missing on a PVH dom0:
> +
> +  * PCI SR-IOV and Resizable BARs.
> +
> +  * Native NMI forwarding (nmi=dom0 command line option).
> +
> +  * MCE handling.
> +
> +  * PCI Passthrough to any kind of domUs.
>  
>  ### ARM
>  



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:38:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:38:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737044.1143157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaXk-0007YO-Av; Mon, 10 Jun 2024 08:38:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737044.1143157; Mon, 10 Jun 2024 08:38:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaXk-0007YH-89; Mon, 10 Jun 2024 08:38:20 +0000
Received: by outflank-mailman (input) for mailman id 737044;
 Mon, 10 Jun 2024 08:38:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Iexp=NM=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sGaXi-0007YB-V4
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:38:18 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c79ffbd9-2704-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 10:38:16 +0200 (CEST)
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-114-0O7pTDAMMqGbMeHa9jzmJQ-1; Mon, 10 Jun 2024 04:38:08 -0400
Received: by mail-wr1-f72.google.com with SMTP id
 ffacd0b85a97d-35f1ddd8a47so594721f8f.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 01:38:08 -0700 (PDT)
Received: from ?IPV6:2a09:80c0:192:0:5dac:bf3d:c41:c3e7?
 ([2a09:80c0:192:0:5dac:bf3d:c41:c3e7])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35f29629231sm157912f8f.67.2024.06.10.01.38.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 01:38:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c79ffbd9-2704-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718008695;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=w6nrbStgbGoljYTouWdcfenV64/6o+i5+It1jT8aCEg=;
	b=Jd7djWf3gGJHWkT268XMsa0ZaIsXYN7IKYiDdUv7q4pI8WWRi5NEL4R5Co4WJrlUlUi5xB
	SS9JV+lWklw4KnBE5scNcJQ04AkmGxYocW1Mg60yyFMZDYHU4FgZdCi9kfHIEmf9X9IUKZ
	S+EPASv1y52bJPDW49QBxwoOLpZdiGY=
X-MC-Unique: 0O7pTDAMMqGbMeHa9jzmJQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718008687; x=1718613487;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=w6nrbStgbGoljYTouWdcfenV64/6o+i5+It1jT8aCEg=;
        b=n8AVnzmJmMYNrjr3+0oSNArvDOA4YgR/2DpcJa2BlO+F7XXeeVccs+LcEt9KDs8VQz
         khk6C5sTHNLYDRvyHSeCeTosIZ77PIJhzrW3cv368PgjwdwoTIf/VwIhQUG/vFTcriIQ
         sk1U5HMDWxigmso5/TyiX5LnMyoG0v/ZjTHE622vhduL6gimVJs71/cKxdx1FCT8iR3e
         t9k+kBhvbVQrw+8rtneMLODnYDFmHylfQMyz2dBCFSGewHActZsinT+xwxo7vK9YcF60
         kSUzSuA+9oUs/Li1xm5RdscGF2TC02+tCqk8XuIk3vXsNZCL07Ip9u9HDu9+5Wpmdibk
         iV9g==
X-Forwarded-Encrypted: i=1; AJvYcCWevfOfgDf9Ri7VmAC3UxCeiOpBvgAXp5NwAY9WEjq+LEWr10FeO6uF3HjX87jkIOuPcUPIblfjskxOwir9SjXK2tVZs6afUZJXAj1CGPM=
X-Gm-Message-State: AOJu0YytpeCwNYmJ2ofQRanXDiHFE75WHEGjceCisSjQmqsIE5qOv+kl
	wneSe77Uw+AFCtIWpWcro/pXfotm+b7GgAxECjvLx7HLx8DHmqKEMiFFBSpYdTNVxgjpt8F5EXd
	9vK53nMGcVRxf1M6rJtcvIrvg4hWpITj3bofOpo40ihkMyg90DE6w09GQAMjJ7VRm
X-Received: by 2002:a5d:5f90:0:b0:35f:22d9:cab3 with SMTP id ffacd0b85a97d-35f22d9cd51mr2249202f8f.36.1718008687513;
        Mon, 10 Jun 2024 01:38:07 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IHXEFywI0gemoc9w1rogNSIeoaxga56ZirfcbsHoHRmzr//4KCG2uOiq+HM1RIZR6Nee59Wpg==
X-Received: by 2002:a5d:5f90:0:b0:35f:22d9:cab3 with SMTP id ffacd0b85a97d-35f22d9cd51mr2249152f8f.36.1718008686974;
        Mon, 10 Jun 2024 01:38:06 -0700 (PDT)
Message-ID: <13070847-4129-490c-b228-2e52bd77566a@redhat.com>
Date: Mon, 10 Jun 2024 10:38:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
To: Oscar Salvador <osalvador@suse.de>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
 Andrew Morton <akpm@linux-foundation.org>, Mike Rapoport <rppt@kernel.org>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 Eugenio Pérez <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
 <ZmZ7GgwJw4ucPJaM@localhost.localdomain>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <ZmZ7GgwJw4ucPJaM@localhost.localdomain>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 10.06.24 06:03, Oscar Salvador wrote:
> On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote:
>> In preparation for further changes, let's teach __free_pages_core()
>> about the differences of memory hotplug handling.
>>
>> Move the memory hotplug specific handling from generic_online_page() to
>> __free_pages_core(), use adjust_managed_page_count() on the memory
>> hotplug path, and spell out why memory freed via memblock
>> cannot currently use adjust_managed_page_count().
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> All looks good but I am puzzled with something.
> 
>> +	} else {
>> +		/* memblock adjusts totalram_pages() ahead of time. */
>> +		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
>> +	}
> 
> You say that memblock adjusts totalram_pages ahead of time, and I guess
> you mean in memblock_free_all()

And memblock_free_late(), which uses atomic_long_inc().

> 
>   pages = free_low_memory_core_early()
>   totalram_pages_add(pages);
> 
> but that is not ahead; it looks like it is updating __after__ sending
> them to buddy?

Right (it's suboptimal, but not really problematic so far. Hopefully Wei 
can clean it up and move it in here as well).

For the time being

"/* memblock adjusts totalram_pages() manually. */"

?

Thanks!

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:41:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:41:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737052.1143167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaai-00019h-S3; Mon, 10 Jun 2024 08:41:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737052.1143167; Mon, 10 Jun 2024 08:41:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGaai-00019a-PT; Mon, 10 Jun 2024 08:41:24 +0000
Received: by outflank-mailman (input) for mailman id 737052;
 Mon, 10 Jun 2024 08:41:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGaah-00019S-9h
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:41:23 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3626cfa5-2705-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 10:41:21 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2e95a75a90eso46008741fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 01:41:21 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c55d8aca8sm5351801a12.97.2024.06.10.01.41.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 01:41:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3626cfa5-2705-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718008880; x=1718613680; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=T1rWKJ/YYX2AvVliR35EdJCq8qppYe73Dcruj3rCPW0=;
        b=Qz2cQSsgJzSFV8ww37j0mRdhVQbvbqbzjs+Iq5XWNjig3Sr1v0UlQSArT0cOGHfOXA
         zvALPUPbZHAWRbt2L0AzF17UE16yqc7niQwi3XxHqqndbytfdgepa2wMqGYbdr3B0Wfh
         CFRk5k5wFdHWxyWZDJJaW4VBA2pHVr4UQvF4zta+WkL97LKjJFmLSjiaPp7FlCaalXoH
         n0JAh4v8Q++8xDtDBp9C3arDfWeHSpPQjuMazHQ9Cmjy3+v1NHHIO/TMYsoHkIPxgBdk
         mTiMuC95O9rN6GiFDZKzXpftpUktdfJmS1N8dEdLpZUQQ8/eRr6MzrSBPlks0Xh565fG
         qTdA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718008880; x=1718613680;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=T1rWKJ/YYX2AvVliR35EdJCq8qppYe73Dcruj3rCPW0=;
        b=uMl7wXMehVHr17Cso/vsFTFVKbjlagHY1Bt5mNm2CK54YOE1TFfT9MxGYPtO51L+qc
         wxxxJiN0L+afjJaJYlCKnmDDGOgkYrZWtWHbPS4HXxy6K8LxCzZUxbYnzZtDIA64NYy5
         jNkRB5q2QQhlp6vJvBpYhsbgyvncUWw9cJLOx99j6m+zY2QuAodoogVcbbHNXa6xXozg
         hBXP+TgrIui125dyEpAnRaVzvGu6lUXVnL7GrvxphQYRb3P81mEGwk0MD05rnJ/nxVRN
         99BRGEfazlpoueeSBAxstP9C92HfgnNyq80rApHcDST2SVTJaKbhEzIyuG6UjdeUXDjv
         dSXw==
X-Forwarded-Encrypted: i=1; AJvYcCWr6wgGPO5OYQX0cqDuZ5Qeo0HNNJo9XU1nTOoFFDslpkWrtVi1Ium/0tIqMUoG8hZgtNxHsqCBgDu6sjosOIjF9JQrx0tDgkQxTPijAP0=
X-Gm-Message-State: AOJu0Ywm7QdU06o7a5JkN8TSwTjhQhXNN3tURGWjXvs45CA4OrYJs0sR
	9p9rnSHsKVNPpmnASJCP8Yl+fE9FuCA8utXJK0TM4cZziwbmB5cBXjtUtlIdkg==
X-Google-Smtp-Source: AGHT+IG5TWnqe+1Cq7n1ktY0e7d/mkRjAbM6/l8C6EwP0FeislCOTK2Zg2BcrPzOTKYQSkZ2WWB8pA==
X-Received: by 2002:a2e:925a:0:b0:2eb:ddca:e175 with SMTP id 38308e7fff4ca-2ebddcae2e8mr20331441fa.42.1718008880597;
        Mon, 10 Jun 2024 01:41:20 -0700 (PDT)
Message-ID: <a8225a94-54ed-4b24-8867-b9da65cb0a14@suse.com>
Date: Mon, 10 Jun 2024 10:41:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Segment truncation in multi-segment PCI handling?
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Marek Marczykowski-Górecki
 <marmarek@invisiblethingslab.com>, xen-devel
 <xen-devel@lists.xenproject.org>, javi.merino@cloud.com
References: <ZmNjoeFAwWz8xhfM@mail-itl>
 <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
 <b609eaab-0a0a-433b-81d3-84a0cd90ebc1@suse.com> <Zma5Rj_cswrIYcD2@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <Zma5Rj_cswrIYcD2@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 10:28, Roger Pau Monné wrote:
> On Mon, Jun 10, 2024 at 09:58:11AM +0200, Jan Beulich wrote:
>> On 07.06.2024 21:52, Andrew Cooper wrote:
>>> On 07/06/2024 8:46 pm, Marek Marczykowski-Górecki wrote:
>>>> Hi,
>>>>
>>>> I've got a new system, and it has two PCI segments:
>>>>
>>>>     0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
>>>>     0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
>>>>     ...
>>>>     10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
>>>>     10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
>>>>     10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
>>>>
>>>> But it looks like Xen doesn't handle it correctly:
> 
> In the meantime you can probably disable VMD from the firmware and the
> NVMe devices should appear on the regular PCI bus.
> 
>>>>     (XEN) 0000:e0:06.0: unknown type 0
>>>>     (XEN) 0000:e0:06.2: unknown type 0
>>>>     (XEN) 0000:e1:00.0: unknown type 0
>>>>     ...
>>>>     (XEN) ==== PCI devices ====
>>>>     (XEN) ==== segment 0000 ====
>>>>     (XEN) 0000:e1:00.0 - NULL - node -1 
>>>>     (XEN) 0000:e0:06.2 - NULL - node -1 
>>>>     (XEN) 0000:e0:06.0 - NULL - node -1 
>>>>     (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
>>>>     (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
>>>>     ...
>>>>
>>>> This isn't exactly surprising, since pci_sbdf_t.seg is uint16_t, so
>>>> 0x10000 doesn't fit. OSDev wiki says PCI Express can have 65536 PCI
>>>> Segment Groups, each with 256 bus segments.
>>>>
>>>> Fortunately, I don't need this to work; if I disable VMD in the
>>>> firmware, I get a single segment and everything works fine.
>>>>
>>>
>>> This is a known issue.  Work is being done, albeit slowly.
>>
>> Is work being done? After the design session in Prague I put it on my
>> todo list, but at low priority. I'd be happy to take it off there if I
>> knew someone else is looking into this.
> 
> We had a design session about VMD?  If so I'm afraid I've missed it.

In Prague last year, not just now in Lisbon.

>>> 0x10000 is indeed not a spec-compliant PCI segment.  It's something
>>> model specific the Linux VMD driver is doing.
>>
>> I wouldn't call this "model specific" - this numbering is purely a
>> software one (and would need coordinating between Dom0 and Xen).
> 
> Hm, TBH I'm not sure whether Xen needs to be aware of VMD devices.
> The resources used by the VMD devices are all assigned to the VMD
> root.  My current hypothesis is that it might be possible to manage
> such devices without Xen being aware of their existence.

Well, it may be possible to have things work in Dom0 without Xen
knowing much. Then Dom0 would need to suppress any physdevop calls
with such software-only segment numbers (in order to at least not
confuse Xen). I'd be curious though how e.g. MSI setup would work in
such a scenario. Plus clearly any passing through of a device behind
the VMD bridge will quite likely need Xen involvement (unless of
course the only way of doing such pass-through was to pass on the
entire hierarchy).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:51:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:51:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737061.1143186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGak2-0003Og-QC; Mon, 10 Jun 2024 08:51:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737061.1143186; Mon, 10 Jun 2024 08:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGak2-0003OZ-NE; Mon, 10 Jun 2024 08:51:02 +0000
Received: by outflank-mailman (input) for mailman id 737061;
 Mon, 10 Jun 2024 08:51:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGak1-0003NB-7y
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:51:01 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f868f72-2706-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 10:51:00 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id
 2adb3069b0e04-52c8ddc2b29so470095e87.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 01:51:00 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35f0e49e898sm6676580f8f.22.2024.06.10.01.50.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 01:50:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f868f72-2706-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718009459; x=1718614259; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=De9eQf+RRJBH4siiQAx4glFqcwjN/h5NTtVT8vduH1g=;
        b=dU17lyMgqVgVHSaOryqMqOtFuGz1nXj/uRSA6/DLjzM69jFygNkg+NpJSlHgj3FDWY
         a/ZuVW0VLxwUF3KLT+VGgZT+FpgQl4z4d6BF+G94wdOJTOBliWJrNAHrLm9JNauqhvIq
         zgRu1jjIyqxgB7ebN8BBoxRrnVsKF6MHmeA0A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718009459; x=1718614259;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=De9eQf+RRJBH4siiQAx4glFqcwjN/h5NTtVT8vduH1g=;
        b=J9KThnik8jFaPQCuWfCsHJX9fssDhbjhUsCZabkAgMza3Fa6m6EH473tLL5A9G7bo4
         fxU0aEjH0nAbkS4uwlAmaD8n8aikb18aqy1c3NTQTSVvBmvEMN4fcDN3Oo+Wf9CsG5Ev
         KVSSp76/wOL6vD1PDK8nIP7fmNawJ7O/1nPjoWxNypK8Z9h6mWr9SZI7NZ81vSEoO1G6
         z/nNq1BBTigVXmQnwAyvFKtVMiocv+yjA6j1iJS6RXUJnmvBWNvf0sFOsC2K3GbPpBU6
         4spNUUlozUONiaO6FB321qR/bfn+KNBPmvHbWDbrYZQyDnRhGM2MdaMo52MFEnHMPlo4
         Bsnw==
X-Gm-Message-State: AOJu0Yz57/vLyXCr9sktD4Nj2/smJrJd4XlpVhbIJ+I1gzVahSzr4otx
	Nwc9fd4dlJIzC91p5Njp5VMhYDzF2r1stblr61WPvFqkUyqecTTIzsK4QvzPgSKB5KnFAqDEEXi
	Z
X-Google-Smtp-Source: AGHT+IG2aSm0jdoW2rk0v5C9fQQkJy/SNNR/86B2O4aqqEQ74/wErpgNdDF2TrAdubVoOqjErn8GKg==
X-Received: by 2002:ac2:5dfc:0:b0:52c:8a9b:17f4 with SMTP id 2adb3069b0e04-52c8a9b19e4mr1411736e87.37.1718009458784;
        Mon, 10 Jun 2024 01:50:58 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH for-4.19 v2] x86/pvh: declare PVH dom0 supported with caveats
Date: Mon, 10 Jun 2024 10:50:52 +0200
Message-ID: <20240610085052.8499-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

PVH dom0 is functionally very similar to PVH domU except for the domain
builder and the added set of hypercalls available to it.

The main concern with declaring it "Supported" is the lack of some features
when compared to classic PV dom0, hence switch its status to supported with
caveats.  List the known missing features; there might be more features missing
or not working as expected apart from the ones listed.

Note there's some (limited) PVH dom0 testing on both osstest and gitlab.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Changes since v1:
 - Remove boot warning.
---
 CHANGELOG.md                  |  1 +
 SUPPORT.md                    | 15 ++++++++++++++-
 xen/arch/x86/hvm/dom0_build.c |  1 -
 3 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 201478aa1c0e..1778419cae64 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    - HVM PIRQs are disabled by default.
    - Reduce IOMMU setup time for hardware domain.
    - Allow HVM/PVH domains to map foreign pages.
+   - Declare PVH dom0 supported with caveats.
  - xl/libxl configures vkb=[] for HVM domains with priority over vkb_device.
  - Increase the maximum number of CPUs Xen can be built for from 4095 to
    16383.
diff --git a/SUPPORT.md b/SUPPORT.md
index d5d60c62ec11..711aacf34662 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -161,7 +161,20 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM).
 Dom0 support requires an IOMMU (Intel VT-d / AMD IOMMU).
 
     Status, domU: Supported
-    Status, dom0: Experimental
+    Status, dom0: Supported, with caveats
+
+PVH dom0 hasn't received the same test coverage as PV dom0, so it can exhibit
+unexpected behavior or issues on some hardware.
+
+At least the following features are missing on a PVH dom0:
+
+  * PCI SR-IOV and Resizable BARs.
+
+  * Native NMI forwarding (nmi=dom0 command line option).
+
+  * MCE handling.
+
+  * PCI Passthrough to any kind of domUs.
 
 ### ARM
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 6acbaceb94c1..f3eddb684686 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -1360,7 +1360,6 @@ int __init dom0_construct_pvh(struct domain *d, const module_t *image,
         print_e820_memory_map(d->arch.e820, d->arch.nr_e820);
     }
 
-    printk("WARNING: PVH is an experimental mode with limited functionality\n");
     return 0;
 }
 
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:56:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:56:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737068.1143196 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGap3-00047g-Dm; Mon, 10 Jun 2024 08:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737068.1143196; Mon, 10 Jun 2024 08:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGap3-00047Z-9q; Mon, 10 Jun 2024 08:56:13 +0000
Received: by outflank-mailman (input) for mailman id 737068;
 Mon, 10 Jun 2024 08:56:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Iexp=NM=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sGap2-00047T-Gk
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:56:12 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4796440e-2707-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 10:56:10 +0200 (CEST)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-130-expWzW3WMCiVE8FmTOwIDA-1; Mon, 10 Jun 2024 04:56:05 -0400
Received: by mail-wm1-f70.google.com with SMTP id
 5b1f17b1804b1-421739476b3so20088365e9.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 01:56:05 -0700 (PDT)
Received: from ?IPV6:2a09:80c0:192:0:5dac:bf3d:c41:c3e7?
 ([2a09:80c0:192:0:5dac:bf3d:c41:c3e7])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35f23c67e70sm2326824f8f.33.2024.06.10.01.56.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 01:56:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4796440e-2707-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718009769;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=K7xP27zKqY4v47zLxGRsdu26ar7Ig8tqaIa+76/1NNg=;
	b=UPCMhPfCHtURhvB5Qax91K1kgjpRGhV4XzGnC0m2olSpu3F/gEnYnYl77A9ER/84nmDeII
	6mBGxmUAVnQlCE0H3U0tONiks1M/S+16h6Hg569P29VTER+o6WoUOBORoJWw/+RJGQNwNG
	oX3vHiF7yxSB4AtB243mIb+Kg3E0MQY=
X-MC-Unique: expWzW3WMCiVE8FmTOwIDA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718009764; x=1718614564;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=K7xP27zKqY4v47zLxGRsdu26ar7Ig8tqaIa+76/1NNg=;
        b=agJQE6jmgoG/iAjOvXwMVmZbL2qp/cd3tB+3erxD2RkFMZmZH74CyC8IbWrj4U7VmA
         X6FazabZOiCss+Pn/Qta+GxNMOgCok+o7Q7X/Hjo5rsc/09h/oPQUIjzbAPrbpuajzLP
         xlCSQ1aPXtznxJ//JDcTAemwvjsoYPs0a7TknnEHz0RjfOOxX1lh+NNGcj5aJEhovqOF
         adiBPunRBjj4N/6QrzjrjU2PF7f2JV4I2N2a9drZUI0SrV5xvPCf+xrESnxGgv1lTT2d
         VQn6GHmdpiT/E68BpzZflQh3V4H5/lfjeGRCJk0aNpMYx1kpZt5Ssm3QpfiQD7fymbtT
         84hg==
X-Forwarded-Encrypted: i=1; AJvYcCXIFmwlM/XUtitFoohK5a7Bihzdbpl0TsFhEBM0zeDA8Ddb98/NwjZmXnvl4ppYXgK/br3yMwf0Mm/7r6V8dbRXr0k98iBK5UH9c9NGKGE=
X-Gm-Message-State: AOJu0YxhIs0KjyaAPJs0Vtg1POS1o1U/THzNuSMvV+olIbl4lWH3WP6m
	Hz5b+nF4laIkHctOZfdPGJlAVflQIo0JIbCUNIebjQfSDy7T3Gt2ogsDLF/eVG0whYJp8Emg3Dl
	2xQI5FayzTBBHb2wXKktgsulsUF5pM3PK64TCmAMoxOrFRcUmjFiM8zJIrAHqSR5t
X-Received: by 2002:a05:600c:4fc1:b0:422:aca:f87e with SMTP id 5b1f17b1804b1-4220acafc07mr8547165e9.19.1718009764158;
        Mon, 10 Jun 2024 01:56:04 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IE8VQ0Q4YEczV/EAVgWH6dX6M5wlMcQB+GZ0NoLgBFXD6JcjrTO0xiRa6CUmh2pcsQldmgXsA==
X-Received: by 2002:a05:600c:4fc1:b0:422:aca:f87e with SMTP id 5b1f17b1804b1-4220acafc07mr8546975e9.19.1718009763680;
        Mon, 10 Jun 2024 01:56:03 -0700 (PDT)
Message-ID: <5d9583e1-3374-437d-8eea-6ab1e1400a30@redhat.com>
Date: Mon, 10 Jun 2024 10:56:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of
 !ZONE_DEVICE with PageOffline() instead of PageReserved()
To: Oscar Salvador <osalvador@suse.de>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
 Andrew Morton <akpm@linux-foundation.org>, Mike Rapoport <rppt@kernel.org>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-3-david@redhat.com>
 <ZmZ_3Xc7fdrL1R15@localhost.localdomain>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <ZmZ_3Xc7fdrL1R15@localhost.localdomain>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 10.06.24 06:23, Oscar Salvador wrote:
> On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
>> We currently initialize the memmap such that PG_reserved is set and the
>> refcount of the page is 1. In virtio-mem code, we have to manually clear
>> that PG_reserved flag to make memory offlining with partially hotplugged
>> memory blocks possible: has_unmovable_pages() would otherwise bail out on
>> such pages.
>>
>> We want to avoid PG_reserved where possible and move to typed pages
>> instead. Moreover, we want to further enlighten memory offlining code about
>> PG_offline: offline pages in an online memory section. One example is
>> handling managed page count adjustments in a cleaner way during memory
>> offlining.
>>
>> So let's initialize the pages with PG_offline instead of PG_reserved.
>> generic_online_page()->__free_pages_core() will now clear that flag before
>> handing that memory to the buddy.
>>
>> Note that the page refcount is still 1 and would forbid offlining of such
>> memory except when special care is taken during GOING_OFFLINE, as
>> currently only implemented by virtio-mem.
>>
>> With this change, we can now get non-PageReserved() pages in the XEN
>> balloon list. From what I can tell, that can already happen via
>> decrease_reservation(), so that should be fine.
>>
>> HV-balloon should not really observe a change: partial online memory
>> blocks still cannot get surprise-offlined, because the refcount of these
>> PageOffline() pages is 1.
>>
>> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
>> hotplugged pages are now PageOffline() instead of PageReserved() before
>> they are handed over to the buddy.
>>
>> We'll leave the ZONE_DEVICE case alone for now.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
>> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
>> index 27e3be75edcf7..0254059efcbe1 100644
>> --- a/mm/memory_hotplug.c
>> +++ b/mm/memory_hotplug.c
>> @@ -734,7 +734,7 @@ static inline void section_taint_zone_device(unsigned long pfn)
>>   /*
>>    * Associate the pfn range with the given zone, initializing the memmaps
>>    * and resizing the pgdat/zone data to span the added pages. After this
>> - * call, all affected pages are PG_reserved.
>> + * call, all affected pages are PageOffline().
>>    *
>>    * All aligned pageblocks are initialized to the specified migratetype
>>    * (usually MIGRATE_MOVABLE). Besides setting the migratetype, no related
>> @@ -1100,8 +1100,12 @@ int mhp_init_memmap_on_memory(unsigned long pfn, unsigned long nr_pages,
>>   
>>   	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
>>   
>> -	for (i = 0; i < nr_pages; i++)
>> -		SetPageVmemmapSelfHosted(pfn_to_page(pfn + i));
>> +	for (i = 0; i < nr_pages; i++) {
>> +		struct page *page = pfn_to_page(pfn + i);
>> +
>> +		__ClearPageOffline(page);
>> +		SetPageVmemmapSelfHosted(page);
> 
> So, refresh my memory here please.
> AFAIR, those VmemmapSelfHosted pages were marked Reserved before, but now,
> memmap_init_range() will not mark them reserved anymore.

Correct.

> I do not think that is ok. I am worried about walkers getting this wrong.
> 
> We usually skip PageReserved pages in walkers because they are pages we
> cannot deal with for those purposes, but with this change, we will leak
> PageVmemmapSelfHosted, and I am not sure whether we are ready for that.

There are fortunately not that many left.

I'd even say marking them (vmemmap) reserved is more wrong than right: 
note that ordinary vmemmap pages after memory hotplug are not reserved! 
Only bootmem should be reserved.

Let's take a look at the relevant core-mm ones (arch stuff is mostly just for
MMIO remapping):

fs/proc/task_mmu.c:     if (PageReserved(page))
fs/proc/task_mmu.c:     if (PageReserved(page))

-> If we find vmemmap pages mapped into user space we already messed up
    seriously

kernel/power/snapshot.c:        if (PageReserved(page) ||
kernel/power/snapshot.c:        if (PageReserved(page)

-> There should be no change (saveable_page() would still allow saving
    them, highmem does not apply)

mm/hugetlb_vmemmap.c:           if (!PageReserved(head))
mm/hugetlb_vmemmap.c:   if (PageReserved(page))

-> Wants to identify bootmem, but we already properly exclude these
    PageVmemmapSelfHosted() pages on the splitting path


mm/page_alloc.c:                VM_WARN_ON_ONCE(PageReserved(p));
mm/page_alloc.c:                if (PageReserved(page))

-> pfn_range_valid_contig() would scan them, just like for ordinary
    vmemmap pages during hotplug. We'll simply fail isolating/migrating
    them similarly (like any unmovable allocations) later

mm/page_ext.c:          BUG_ON(PageReserved(page));

-> free_page_ext handling, does not apply

mm/page_isolation.c:            if (PageReserved(page))

-> has_unmovable_pages() should still detect them as unmovable (e.g.,
    neither movable nor LRU).

mm/page_owner.c:                        if (PageReserved(page))
mm/page_owner.c:                        if (PageReserved(page))

-> page_ext_get() will simply return NULL instead, and we'll similarly
    skip them

mm/sparse.c:            if (!PageReserved(virt_to_page(ms->usage))) {

-> Detecting boot memory for ms->usage allocation, does not apply to
    vmemmap.

virt/kvm/kvm_main.c:    if (!PageReserved(page))
virt/kvm/kvm_main.c:    return !PageReserved(page);

-> For MMIO remapping purposes, does not apply to vmemmap


> Moreover, boot memmap pages are marked as PageReserved, which would be
> now inconsistent with those added during hotplug operations.

Just like vmemmap pages allocated dynamically during memory hotplug. 
Now, really only the bootmem ones are PageReserved.

> All in all, I feel uneasy about this change.

I really don't want to mark these pages here PageReserved for the sake 
of it.

Is there any PageReserved user that I am missing, or a reason why we should
handle these vmemmap pages differently from the ones allocated during ordinary
memory hotplug?

In the future, we might want to consider using a dedicated page type for
them, so we can stop using a bit that doesn't allow us to reliably identify
them. (We should then mark all vmemmap pages with that type.)

Thanks!

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 08:56:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 08:56:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737069.1143206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGapH-0004Rf-OJ; Mon, 10 Jun 2024 08:56:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737069.1143206; Mon, 10 Jun 2024 08:56:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGapH-0004RY-KO; Mon, 10 Jun 2024 08:56:27 +0000
Received: by outflank-mailman (input) for mailman id 737069;
 Mon, 10 Jun 2024 08:56:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Iexp=NM=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sGapG-00047T-AV
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 08:56:26 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 502557a4-2707-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 10:56:24 +0200 (CEST)
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-663-Kn2zBcOyMCCPj7Wb6kHzrA-1; Mon, 10 Jun 2024 04:56:21 -0400
Received: by mail-wm1-f70.google.com with SMTP id
 5b1f17b1804b1-4216dbadb75so15926855e9.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 01:56:21 -0700 (PDT)
Received: from ?IPV6:2a09:80c0:192:0:5dac:bf3d:c41:c3e7?
 ([2a09:80c0:192:0:5dac:bf3d:c41:c3e7])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c1aa2f7sm133235285e9.14.2024.06.10.01.56.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 01:56:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 502557a4-2707-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718009783;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=QUbORrAp1NZZYf9SPcpO+7DjP/I7StvG/pRCPWB4lqg=;
	b=DaDQce6FOfhLCwpV9ZXnMAcdiSEwHp4NclWS7440MRFxzLb+EgST98hoHEaX6JxMcZlh51
	BWtHnkwbPez5+Q/JV36KJDs3h3dqgyUCtUOcCFq1s/D9Z+pRThyAqhBZExNnVmmpInytn5
	8eXzQj4ogYCmdDUhgek0+RZA2cIL7ro=
X-MC-Unique: Kn2zBcOyMCCPj7Wb6kHzrA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718009780; x=1718614580;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=QUbORrAp1NZZYf9SPcpO+7DjP/I7StvG/pRCPWB4lqg=;
        b=iCGxhFnhCEY1/RolMRobbloUBNNqhP0mL1U6MAnDGW3B7J8YuI7qo1te3ts/aGTQ26
         4sZYz72ncS90wfuBCxTmGcm6h3b+GL0PxghlWNxD+S9jz39gNix7DtbnOEbDwvdWlXWD
         PbHKe1Z5Udv69k0nrIdV0fmngxMhM70EZR+bMOCm+L5aABBxHYE3Jbl/BK/Qwo7EUkeU
         734vIDSH6GBh3hndz86upjCwaysYPC+GACVCTPMUt9ohmMEpIg2F3x98i84brpJt0J+W
         2udYGk2fJ4LjcvOSV04ZSSEftI1qqfGsDaKgzmLKdkMA5tygwCosMOGaiG/lmkG98Axp
         yPsw==
X-Forwarded-Encrypted: i=1; AJvYcCUkZj9TihAXeWybr1+h2YLsJXX03ZF/pYaGfmZ+EmbCAVVZI/tUxe/TeFgx3hdeT8sCzZ/EuMyUD940Jfy6j9hgPqBgnRX9MCrJMMxfOrc=
X-Gm-Message-State: AOJu0YxeYWcmv3vFRLvckPkbcZxeixXB3Y5S9zmyvvJZSSh+2ahTWnrI
	CBC6EhrE1dihWwLfUV7rDo8aYy3XfnngBiFTixJKxmnytY9f2t1ws3icdV3IIvmbbiUyaVVsQSx
	kHvyo+wwCRUUN3OgXitclUDo0bCCfStPKhUnbJ7Ali6n+Wd0NnF9X6NmL/SK+llUOz+pD86/S
X-Received: by 2002:a05:600c:4f84:b0:421:80d2:9db1 with SMTP id 5b1f17b1804b1-42180d2a37emr34136885e9.25.1718009780254;
        Mon, 10 Jun 2024 01:56:20 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IFCzA4qbOZu38eWA7pCHPb8vQSRCDdVK9WMYWcHk2kUSDfS7dx+PhewtKS0ZxiqzH3kwbKdyQ==
X-Received: by 2002:a05:600c:4f84:b0:421:80d2:9db1 with SMTP id 5b1f17b1804b1-42180d2a37emr34136735e9.25.1718009779932;
        Mon, 10 Jun 2024 01:56:19 -0700 (PDT)
Message-ID: <aa370847-14a6-4806-8a04-d2da0a591014@redhat.com>
Date: Mon, 10 Jun 2024 10:56:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 3/3] mm/memory_hotplug: skip
 adjust_managed_page_count() for PageOffline() pages when offlining
To: Oscar Salvador <osalvador@suse.de>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
 Andrew Morton <akpm@linux-foundation.org>, Mike Rapoport <rppt@kernel.org>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-4-david@redhat.com>
 <ZmaBGSqchtEWnqM1@localhost.localdomain>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat
In-Reply-To: <ZmaBGSqchtEWnqM1@localhost.localdomain>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 10.06.24 06:29, Oscar Salvador wrote:
> On Fri, Jun 07, 2024 at 11:09:38AM +0200, David Hildenbrand wrote:
>> We currently have a hack for virtio-mem in place to handle memory
>> offlining with PageOffline pages for which we already adjusted the
>> managed page count.
>>
>> Let's enlighten memory offlining code so we can get rid of that hack,
>> and document the situation.
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
> 
> Acked-by: Oscar Salvador <osalvador@suse.de>
> 

Thanks for the review!

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 09:10:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 09:10:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737081.1143216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGb38-0000Tz-Sn; Mon, 10 Jun 2024 09:10:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737081.1143216; Mon, 10 Jun 2024 09:10:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGb38-0000Ts-Pw; Mon, 10 Jun 2024 09:10:46 +0000
Received: by outflank-mailman (input) for mailman id 737081;
 Mon, 10 Jun 2024 09:10:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGb37-0000Tm-Og
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 09:10:45 +0000
Received: from mail-oa1-x2c.google.com (mail-oa1-x2c.google.com
 [2001:4860:4864:20::2c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 509a9218-2709-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 11:10:44 +0200 (CEST)
Received: by mail-oa1-x2c.google.com with SMTP id
 586e51a60fabf-254ab6d5745so901103fac.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 02:10:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 509a9218-2709-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718010642; x=1718615442; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Zh24kQOy+eI4fBtuWuIBT0um3qXBUckqBM47GYuT7+E=;
        b=iH5jt+3Jo4o/qnZGBOAQjTrBuNAYUdBTWlckp913FOFQAOP+HMRmaJ2lhB9bWZtLhl
         Mr6759MnSJOYnYCKN0l5runFEt+ar3n2lb88Pc2vze1u0r2zXqG+Dfht0kZ6h8koQtnE
         B9LiZ2pA/CQe1hrM0CviVSz2Og2CAyHzP/mHewluV5iknZFZ9CpMxk2FKwMCMKUKhJOk
         hwCbxCB0oNrWqKY2kTo77jfan3+byp+fIzT/cL3oa5N6usgUM6YU9fJbbSOUNeySOSvM
         iwegWyU0pIxQDzeUvs7z0kmFHmGpoJGx9/A8Nqqd8ZCFSsC+sSJC81KBehbTnGgfIrXO
         +u2A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718010642; x=1718615442;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Zh24kQOy+eI4fBtuWuIBT0um3qXBUckqBM47GYuT7+E=;
        b=ss2uEnrxC+zYp8MpgQb73ZwJEzD5RSavzyfCk7qht7M2RJ1uUPIeQznBs+BN5rKuoW
         nMlheTSuBZ5j3AKJq6k2oL7jvW50A1gMT/52e6PfT3Vxop8oUblfFltuoKoKB6dF0TKo
         X3u7HLq6QrVpTiHStOHMQemK4HHLzN7acbtYIFfjS9hosv729Ir3brbqIhcD7o4V14Po
         EJBCp6pTVdgzAK4v9uDxoeSw1A3S47zFRy/gBOG0P+ETPlfZqtC9a2EAt5ZR7+xOdzXo
         DEaQYWAj3vMUS409CM9fdvLrAs4dbOcCw0poRWJN9rmpxiy0yujx71Kt0VKZ2aIXaS1S
         8S2A==
X-Forwarded-Encrypted: i=1; AJvYcCUAr+bopBewN2WQTXDGd7/MZ6JYnoqRC6546E/EVj8CnKDEeZr7fxDYFQK/CcfTF3yauye0jZxyjwYrpr6JwTSmXFxXVCCdUuKdZT+UWZM=
X-Gm-Message-State: AOJu0Yz9BJ7nvjvdSLeJT7bfiWgDCFOhqgaraNMu92LAs3u6qK4KD1tp
	hnPkQNsVFy4Bo0+JV6egi91CUyL5ViU/iPqqi7zCmsTWE+zc8XUPAoig3stc+6X59jQfCEnz9td
	3XQANYhkLXi2yfLWXRue7D3be6yk=
X-Google-Smtp-Source: AGHT+IGPphbNxbkOIVS1Q2L9q867m24QYlQGpxuWkknmDyyV2wJIWyd345yilSFqzX+5aeul9cBvlpcU2KOZFKBKl5c=
X-Received: by 2002:a05:6871:e015:b0:250:7353:c8f2 with SMTP id
 586e51a60fabf-254647efd11mr9805537fac.43.1718010642523; Mon, 10 Jun 2024
 02:10:42 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1717356829.git.w1benny@gmail.com> <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com> <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
 <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com>
In-Reply-To: <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Mon, 10 Jun 2024 11:10:31 +0200
Message-ID: <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 9:30 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 09.06.2024 01:06, Petr Beneš wrote:
> > On Thu, Jun 6, 2024 at 9:24 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
> >>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
> >>>
> >>>      mm_lock_init(&d->arch.altp2m_list_lock);
> >>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
> >>> +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
> >>> +
> >>> +    if ( !d->arch.altp2m_p2m )
> >>> +        return -ENOMEM;
> >>
> >> This isn't really needed, is it? Both ...
> >>
> >>> +    for ( i = 0; i < d->nr_altp2m; i++ )
> >>
> >> ... this and ...
> >>
> >>>      {
> >>>          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
> >>>          if ( p2m == NULL )
> >>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>      unsigned int i;
> >>>      struct p2m_domain *p2m;
> >>>
> >>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
> >>> +    if ( !d->arch.altp2m_p2m )
> >>> +        return;
>
> I'm sorry, the question was meant to be on this if() instead.
>
> >>> +    for ( i = 0; i < d->nr_altp2m; i++ )
> >>>      {
> >>>          if ( !d->arch.altp2m_p2m[i] )
> >>>              continue;
> >>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>          d->arch.altp2m_p2m[i] = NULL;
> >>>          p2m_free_one(p2m);
> >>>      }
> >>> +
> >>> +    XFREE(d->arch.altp2m_p2m);
> >>>  }
> >>
> >> ... this ought to be fine without?
> >
> > Could you, please, elaborate? I honestly don't know what you mean here
> > (by "this isn't needed").
>
> I hope the above correction is enough?

I'm sorry, but not really? I feel like I'm blind, but I can't see
anything I could remove without causing (or risking) a crash.

P.
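
The pattern the patch introduces (and the if() Jan is asking about) can be
sketched as a stand-alone model; this is an illustration only, with
xzalloc_array()/XFREE() replaced by calloc()/free() and the Xen types reduced
to stubs, not the actual Xen code:

```c
#include <stdlib.h>

/* Hypothetical stand-ins for the real Xen types in the patch. */
struct p2m_domain { int stub; };

struct domain {
    unsigned int nr_altp2m;
    struct p2m_domain **altp2m_p2m;
};

static struct p2m_domain *p2m_init_one(void)
{
    return calloc(1, sizeof(struct p2m_domain));
}

static int p2m_init_altp2m(struct domain *d)
{
    unsigned int i;

    /* xzalloc_array() modeled with calloc(). */
    d->altp2m_p2m = calloc(d->nr_altp2m, sizeof(*d->altp2m_p2m));
    if (!d->altp2m_p2m)
        return -1;  /* -ENOMEM in the real code */

    for (i = 0; i < d->nr_altp2m; i++) {
        d->altp2m_p2m[i] = p2m_init_one();
        if (!d->altp2m_p2m[i])
            return -1;  /* caller is expected to run teardown */
    }
    return 0;
}

static void p2m_teardown_altp2m(struct domain *d)
{
    unsigned int i;

    /* The guard under discussion: without it, a teardown on a domain
     * whose init never ran (or failed before allocating the array)
     * would dereference a NULL pointer in the loop below. */
    if (!d->altp2m_p2m)
        return;

    for (i = 0; i < d->nr_altp2m; i++) {
        free(d->altp2m_p2m[i]);
        d->altp2m_p2m[i] = NULL;
    }

    free(d->altp2m_p2m);    /* XFREE() also resets the pointer */
    d->altp2m_p2m = NULL;
}
```

Whether the guard is removable thus hinges on whether teardown can ever be
reached with the array unallocated, which is exactly the point being debated.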


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 09:46:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 09:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737090.1143227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGbbY-00069h-B3; Mon, 10 Jun 2024 09:46:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737090.1143227; Mon, 10 Jun 2024 09:46:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGbbY-00069a-6m; Mon, 10 Jun 2024 09:46:20 +0000
Received: by outflank-mailman (input) for mailman id 737090;
 Mon, 10 Jun 2024 09:46:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGbbW-00069U-W1
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 09:46:19 +0000
Received: from mail-yw1-x112c.google.com (mail-yw1-x112c.google.com
 [2607:f8b0:4864:20::112c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 47cbfb34-270e-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 11:46:16 +0200 (CEST)
Received: by mail-yw1-x112c.google.com with SMTP id
 00721157ae682-627ebbe7720so42885197b3.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 02:46:17 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b07b867db7sm10024006d6.144.2024.06.10.02.46.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 02:46:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47cbfb34-270e-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718012776; x=1718617576; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=nXoHdGhpLM+SDnEOJBdw2O0WiRmYlBzKC+6UP7Leex8=;
        b=X5ovzayx3tRIAGkEWSuCmYhodPTxd1WBWI8/RWBFvwUiRXX6mEbml/xAaW0zSt4GNR
         5q/kHIwEd+UrOfPGINj6eGB4CPa2R0hzJjkBf8RJa5wFKPjdIrkUZ5hM6eGo1lVtCn+8
         2oJeHcVae3uzZNvWDAZbT4JXSSX8yADj+q4Qc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718012776; x=1718617576;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=nXoHdGhpLM+SDnEOJBdw2O0WiRmYlBzKC+6UP7Leex8=;
        b=AC0PYdt+sIxMr5cN4Pnm7RhBjd5CGrCiiDBG2zJFHG1Dzr905jVLXpIoKSe7CxZTvb
         InIDP2tyj2L6Zd+GkqEgBGF2MLxE5oK3tExg9/X7dJf526fR2S21Yl/9lse2uZ+ZDp3J
         9xiUMy7SHXw0IJEaHArLPhEwSgRB7PpeXqsOiyU9IM+hkJVc4k50eJxbaOqiZMwaZa2u
         qYEHpZi6qN1j1W/OhSsHGYK+yemZttzd2fTvC1qeSdqKm6NixCvKSgsAWa6GnSalu/2B
         yedbES52hKqMEbsBk90ObLwKJlWWpl+cmvAZeKV30gZhVtjnRRfuSQuwUjMzb2c0r9TO
         CuWg==
X-Forwarded-Encrypted: i=1; AJvYcCW4zQTWIYoIE0UsFm40sd8GqKabxBr7L1xKyQeyqCJca9dmBt6IvbMVmucwKXf3AklIK+EUwb/hje4KkwQYGJBSgAXorG0VTn/3QuUIUAg=
X-Gm-Message-State: AOJu0YxLIwXnw+vhP6eVHTTjErK0l5quKbyp0769a+BEGKWOEhCEt/AD
	EmJTtxsFUWKyOZ3zVxYk5N72C8MykRfXASrd5aMSW1jL1HEVJ+cB87P1R2hUS9E=
X-Google-Smtp-Source: AGHT+IEQwCZ7QEfFMfDPcfeYAn+xFaH3900sOchlGctQEt8L+DQkIl5lY57HUs1Gf6KYhUdlyqv6rQ==
X-Received: by 2002:a05:690c:fca:b0:62d:354:98ce with SMTP id 00721157ae682-62d03549a96mr34821147b3.51.1718012775470;
        Mon, 10 Jun 2024 02:46:15 -0700 (PDT)
Date: Mon, 10 Jun 2024 11:46:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>, javi.merino@cloud.com
Subject: Re: Segment truncation in multi-segment PCI handling?
Message-ID: <ZmbLZHSOg8KuRvAw@macbook>
References: <ZmNjoeFAwWz8xhfM@mail-itl>
 <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
 <b609eaab-0a0a-433b-81d3-84a0cd90ebc1@suse.com>
 <Zma5Rj_cswrIYcD2@macbook>
 <a8225a94-54ed-4b24-8867-b9da65cb0a14@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a8225a94-54ed-4b24-8867-b9da65cb0a14@suse.com>

On Mon, Jun 10, 2024 at 10:41:19AM +0200, Jan Beulich wrote:
> On 10.06.2024 10:28, Roger Pau Monné wrote:
> > On Mon, Jun 10, 2024 at 09:58:11AM +0200, Jan Beulich wrote:
> >> On 07.06.2024 21:52, Andrew Cooper wrote:
> >>> On 07/06/2024 8:46 pm, Marek Marczykowski-Górecki wrote:
> >>>> Hi,
> >>>>
> >>>> I've got a new system, and it has two PCI segments:
> >>>>
> >>>>     0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
> >>>>     0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
> >>>>     ...
> >>>>     10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
> >>>>     10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
> >>>>     10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
> >>>>
> >>>> But looks like Xen doesn't handle it correctly:
> > 
> > In the meantime you can probably disable VMD from the firmware and the
> > NVMe devices should appear on the regular PCI bus.
> > 
> >>>>     (XEN) 0000:e0:06.0: unknown type 0
> >>>>     (XEN) 0000:e0:06.2: unknown type 0
> >>>>     (XEN) 0000:e1:00.0: unknown type 0
> >>>>     ...
> >>>>     (XEN) ==== PCI devices ====
> >>>>     (XEN) ==== segment 0000 ====
> >>>>     (XEN) 0000:e1:00.0 - NULL - node -1 
> >>>>     (XEN) 0000:e0:06.2 - NULL - node -1 
> >>>>     (XEN) 0000:e0:06.0 - NULL - node -1 
> >>>>     (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
> >>>>     (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
> >>>>     ...
> >>>>
> >>>> This isn't exactly surprising, since pci_sbdf_t.seg is uint16_t, so
> >>>> 0x10000 doesn't fit. OSDev wiki says PCI Express can have 65536 PCI
> >>>> Segment Groups, each with 256 bus segments.
> >>>>
> >>>> Fortunately, I don't need this to work, if I disable VMD in the
> >>>> firmware, I get a single segment and everything works fine.
> >>>>
> >>>
> >>> This is a known issue.  Work is being done, albeit slowly.
> >>
> >> Is work being done? After the design session in Prague I put it on my
> >> todo list, but at low priority. I'd be happy to take it off there if I
> >> knew someone else is looking into this.
> > 
> > We had a design session about VMD?  If so I'm afraid I've missed it.
> 
> In Prague last year, not just now in Lisbon.
> 
> >>> 0x10000 is indeed not a spec-compliant PCI segment.  It's something
> >>> model specific the Linux VMD driver is doing.
> >>
> >> I wouldn't call this "model specific" - this numbering is purely a
> >> software one (and would need coordinating between Dom0 and Xen).
> > 
> > Hm, TBH I'm not sure whether Xen needs to be aware of VMD devices.
> > The resources used by the VMD devices are all assigned to the VMD
> > root.  My current hypothesis is that it might be possible to manage
> > such devices without Xen being aware of their existence.
> 
> Well, it may be possible to have things work in Dom0 without Xen
> knowing much. Then Dom0 would need to suppress any physdevop calls
> with such software-only segment numbers (in order to at least not
> confuse Xen). I'd be curious though how e.g. MSI setup would work in
> such a scenario.

IIRC from my read of the spec, VMD devices don't use regular MSI
data/address fields, and instead configure an index into the MSI table
on the VMD root for the interrupt they want to use.  It's only the VMD
root device (which is a normal device on the PCI bus) that has
MSI(-X) configured with real vectors, and multiplexes interrupts for
all devices behind it.

If we had to pass through VMD devices we might have to intercept writes
to the VMD MSI(-X) entries, but since they can only be safely assigned
to dom0 I think it's not an issue ATM (see below).

> Plus clearly any passing through of a device behind
> the VMD bridge will quite likely need Xen involvement (unless of
> course the only way of doing such pass-through was to pass on the
> entire hierarchy).

All VMD devices share the Requestor ID of the VMD root, so AFAIK it's
not possible to pass them through (unless you pass through the whole VMD
root) because they all share the same context entry on the IOMMU.

Thanks, Roger.
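
The truncation Marek reported can be reproduced in isolation. The helper
below is purely illustrative (pci_sbdf_t.seg modeled as a plain uint16_t,
not the actual Xen structure):

```c
#include <stdint.h>

/* pci_sbdf_t.seg in Xen is a uint16_t, while Linux's VMD driver hands
 * out software segment numbers starting at 0x10000 -- one past the
 * largest value a 16-bit segment field can hold. */
static uint16_t seg_as_stored_by_xen(uint32_t linux_segment)
{
    /* Implicit narrowing: only the low 16 bits survive. */
    return (uint16_t)linux_segment;
}
```

Segment 0x10000 truncates to 0x0000, which is why the VMD devices above
(10000:e0:06.0 etc.) show up under "==== segment 0000 ====" in the Xen
debug output.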


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 09:54:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 09:54:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737096.1143236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGbjh-0008Ek-2N; Mon, 10 Jun 2024 09:54:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737096.1143236; Mon, 10 Jun 2024 09:54:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGbjg-0008Ed-Vm; Mon, 10 Jun 2024 09:54:44 +0000
Received: by outflank-mailman (input) for mailman id 737096;
 Mon, 10 Jun 2024 09:54:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Rpda=NM=intel.com=lkp@srs-se1.protection.inumbo.net>)
 id 1sGbjf-0008EX-10
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 09:54:43 +0000
Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.18])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 726da0b4-270f-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 11:54:39 +0200 (CEST)
Received: from fmviesa009.fm.intel.com ([10.60.135.149])
 by orvoesa110.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 10 Jun 2024 02:54:36 -0700
Received: from lkp-server01.sh.intel.com (HELO 8967fbab76b3) ([10.239.97.150])
 by fmviesa009.fm.intel.com with ESMTP; 10 Jun 2024 02:54:33 -0700
Received: from kbuild by 8967fbab76b3 with local (Exim 4.96)
 (envelope-from <lkp@intel.com>) id 1sGbjT-00022n-1H;
 Mon, 10 Jun 2024 09:54:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 726da0b4-270f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1718013279; x=1749549279;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=yGdh2DgY5dbRPihxx4eSyhu1F2/ygbWCIoyyX/MAbWQ=;
  b=K9vp++lef2T6RG+/4hDpBXAoOfwwPHHR6232yWBoYf6Y53ic5OGahGtR
   I/ZUk7MkW1zjlgNLiFAQ3jj3o0KvC8ZXVcg7ejTrpTul4rh4eO0rIVuA8
   asLUf9OmHZ+e++c5qQpcIWr+asWeyPAqo95I+BMQEj280nIPMeApzEUJk
   TZSShgRma9kz8lXi3hLZYof+34+mOUkQkzV7w+8wWxnJ9cHCjWaNs7zOa
   yPD3UFz10CJEfY3OUHPyQGxAiUE2PwcsMyCVZ1C4JfBE+BdM31MRQN59D
   HxCOOh6QjC0VKq1HBJvxl0Y2LYsb6RGQEKrbHAFj8laPrW4T7uHwCUgHh
   A==;
X-CSE-ConnectionGUID: 6QZWxRt8TCOxxYfz32r18Q==
X-CSE-MsgGUID: yOx5sKorSGuIIEFoN+W0pQ==
X-IronPort-AV: E=McAfee;i="6600,9927,11098"; a="14818096"
X-IronPort-AV: E=Sophos;i="6.08,227,1712646000"; 
   d="scan'208";a="14818096"
X-CSE-ConnectionGUID: epjzt1QBSXWXDm/6gQpdhA==
X-CSE-MsgGUID: E+67wbjFTGe7ZAIo23ieyQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.08,227,1712646000"; 
   d="scan'208";a="39129952"
Date: Mon, 10 Jun 2024 17:53:44 +0800
From: kernel test robot <lkp@intel.com>
To: Abhinav Jain <jain.abhinav177@gmail.com>, jgross@suse.com,
	sstabellini@kernel.org, oleksandr_tyshchenko@epam.com,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: llvm@lists.linux.dev, oe-kbuild-all@lists.linux.dev,
	skhan@linuxfoundation.org, javier.carrasco.cruz@gmail.com,
	jain.abhinav177@gmail.com
Subject: Re: [PATCH] xen: xen-pciback: Export a bridge and all its children
 as per TODO
Message-ID: <202406101933.49pM50Ii-lkp@intel.com>
References: <20240609184410.53500-1-jain.abhinav177@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240609184410.53500-1-jain.abhinav177@gmail.com>

Hi Abhinav,

kernel test robot noticed the following build warnings:

[auto build test WARNING on xen-tip/linux-next]
[also build test WARNING on linus/master v6.10-rc3 next-20240607]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Abhinav-Jain/xen-xen-pciback-Export-a-bridge-and-all-its-children-as-per-TODO/20240610-024623
base:   https://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git linux-next
patch link:    https://lore.kernel.org/r/20240609184410.53500-1-jain.abhinav177%40gmail.com
patch subject: [PATCH] xen: xen-pciback: Export a bridge and all its children as per TODO
config: x86_64-randconfig-006-20240610 (https://download.01.org/0day-ci/archive/20240610/202406101933.49pM50Ii-lkp@intel.com/config)
compiler: clang version 18.1.5 (https://github.com/llvm/llvm-project 617a15a9eac96088ae5e9134248d8236e34b91b1)
reproduce (this is a W=1 build): (https://download.01.org/0day-ci/archive/20240610/202406101933.49pM50Ii-lkp@intel.com/reproduce)

If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add following tags
| Reported-by: kernel test robot <lkp@intel.com>
| Closes: https://lore.kernel.org/oe-kbuild-all/202406101933.49pM50Ii-lkp@intel.com/

All warnings (new ones prefixed by >>):

>> drivers/xen/xen-pciback/xenbus.c:262:21: warning: use of logical '&&' with constant operand [-Wconstant-logical-operand]
     262 |         if ((dev->hdr_type && PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
         |                            ^  ~~~~~~~~~~~~~~~~~~~~
   drivers/xen/xen-pciback/xenbus.c:262:21: note: use '&' for a bitwise operation
     262 |         if ((dev->hdr_type && PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
         |                            ^~
         |                            &
   drivers/xen/xen-pciback/xenbus.c:262:21: note: remove constant to silence this warning
     262 |         if ((dev->hdr_type && PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
         |                            ^~~~~~~~~~~~~~~~~~~~~~~
   drivers/xen/xen-pciback/xenbus.c:270:12: error: no member named 'domain' in 'struct pci_dev'
     270 |                                 child->domain, child->bus->number,
         |                                 ~~~~~  ^
   include/linux/dev_printk.h:163:47: note: expanded from macro 'dev_dbg'
     163 |                 dev_printk(KERN_DEBUG, dev, dev_fmt(fmt), ##__VA_ARGS__); \
         |                                                             ^~~~~~~~~~~
   include/linux/dev_printk.h:129:34: note: expanded from macro 'dev_printk'
     129 |                 _dev_printk(level, dev, fmt, ##__VA_ARGS__);            \
         |                                                ^~~~~~~~~~~
   drivers/xen/xen-pciback/xenbus.c:275:20: error: no member named 'domain' in 'struct pci_dev'
     275 |                                                       child->domain,
         |                                                       ~~~~~  ^
   drivers/xen/xen-pciback/xenbus.c:284:13: error: no member named 'domain' in 'struct pci_dev'
     284 |                                         child->domain,
         |                                         ~~~~~  ^
   include/linux/dev_printk.h:144:65: note: expanded from macro 'dev_err'
     144 |         dev_printk_index_wrap(_dev_err, KERN_ERR, dev, dev_fmt(fmt), ##__VA_ARGS__)
         |                                                                        ^~~~~~~~~~~
   include/linux/dev_printk.h:110:23: note: expanded from macro 'dev_printk_index_wrap'
     110 |                 _p_func(dev, fmt, ##__VA_ARGS__);                       \
         |                                     ^~~~~~~~~~~
   1 warning and 3 errors generated.


vim +262 drivers/xen/xen-pciback/xenbus.c

   225	
   226	static int xen_pcibk_export_device(struct xen_pcibk_device *pdev,
   227					 int domain, int bus, int slot, int func,
   228					 int devid)
   229	{
   230		struct pci_dev *dev;
   231		int err = 0;
   232	
   233		dev_dbg(&pdev->xdev->dev, "exporting dom %x bus %x slot %x func %x\n",
   234			domain, bus, slot, func);
   235	
   236		dev = pcistub_get_pci_dev_by_slot(pdev, domain, bus, slot, func);
   237		if (!dev) {
   238			err = -EINVAL;
   239			xenbus_dev_fatal(pdev->xdev, err,
   240					 "Couldn't locate PCI device "
   241					 "(%04x:%02x:%02x.%d)! "
   242					 "perhaps already in-use?",
   243					 domain, bus, slot, func);
   244			goto out;
   245		}
   246	
   247		err = xen_pcibk_add_pci_dev(pdev, dev, devid,
   248					    xen_pcibk_publish_pci_dev);
   249		if (err)
   250			goto out;
   251	
   252		dev_info(&dev->dev, "registering for %d\n", pdev->xdev->otherend_id);
   253		if (xen_register_device_domain_owner(dev,
   254						     pdev->xdev->otherend_id) != 0) {
   255			dev_err(&dev->dev, "Stealing ownership from dom%d.\n",
   256				xen_find_device_domain_owner(dev));
   257			xen_unregister_device_domain_owner(dev);
   258			xen_register_device_domain_owner(dev, pdev->xdev->otherend_id);
   259		}
   260	
   261		/* Check if the device is a bridge and export all its children */
 > 262		if ((dev->hdr_type && PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE) {
   263			struct pci_dev *child = NULL;
   264	
   265			/* Iterate over all the devices in this bridge */
   266			list_for_each_entry(child, &dev->subordinate->devices,
   267					bus_list) {
   268				dev_dbg(&pdev->xdev->dev,
   269					"exporting child device %04x:%02x:%02x.%d\n",
   270					child->domain, child->bus->number,
   271					PCI_SLOT(child->devfn),
   272					PCI_FUNC(child->devfn));
   273	
   274				err = xen_pcibk_export_device(pdev,
   275							      child->domain,
   276							      child->bus->number,
   277							      PCI_SLOT(child->devfn),
   278							      PCI_FUNC(child->devfn),
   279							      devid);
   280				if (err) {
   281					dev_err(&pdev->xdev->dev,
   282						"failed to export child device : "
   283						"%04x:%02x:%02x.%d\n",
   284						child->domain,
   285						child->bus->number,
   286						PCI_SLOT(child->devfn),
   287						PCI_FUNC(child->devfn));
   288					goto out;
   289				}
   290			}
   291		}
   292	out:
   293		return err;
   294	}
   295	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki
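
The -Wconstant-logical-operand warning above comes down to logical '&&'
versus bitwise '&'. A minimal side-by-side sketch (mask and bridge values
as defined in Linux's include/uapi/linux/pci_regs.h):

```c
/* Values as in Linux's pci_regs.h */
#define PCI_HEADER_TYPE_MASK    0x7f
#define PCI_HEADER_TYPE_BRIDGE  1

/* The check as written in the patch: '&&' collapses both operands to
 * 0 or 1, so this compares 1 == PCI_HEADER_TYPE_BRIDGE for ANY
 * non-zero hdr_type, bridge or not. */
static int is_bridge_buggy(unsigned char hdr_type)
{
    return (hdr_type && PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE;
}

/* The intended check: '&' masks off the multifunction bit (0x80)
 * before comparing the header type. */
static int is_bridge_fixed(unsigned char hdr_type)
{
    return (hdr_type & PCI_HEADER_TYPE_MASK) == PCI_HEADER_TYPE_BRIDGE;
}
```

For example, a multifunction endpoint (hdr_type 0x80) is misclassified as
a bridge by the '&&' form, while the '&' form handles it correctly.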


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:04:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737105.1143246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGbtK-00029r-1g; Mon, 10 Jun 2024 10:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737105.1143246; Mon, 10 Jun 2024 10:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGbtJ-00029k-V4; Mon, 10 Jun 2024 10:04:41 +0000
Received: by outflank-mailman (input) for mailman id 737105;
 Mon, 10 Jun 2024 10:04:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGbtJ-00029a-EO; Mon, 10 Jun 2024 10:04:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGbtJ-0002tx-BO; Mon, 10 Jun 2024 10:04:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGbtI-0001je-W8; Mon, 10 Jun 2024 10:04:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGbtI-0008Ov-Vd; Mon, 10 Jun 2024 10:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YsSDOa0YDldmRWIsvjDqG0/el5E/SMGQ0uZUz4oEm2E=; b=id37HiMCrjvVCqP+cFOmoo4sfM
	EqMLp5o0sYDUmKiaeQ5kmcTsiw2jrYbEsoQM1JSj+4Rcvt5EH0bBnBQ19OC6RsSrfKFl+3DSWZAP2
	Gajy9+q189386iNFapKorlrcEt8bhXs5fb0oXOTIyXeIXHpDcNClubsZNJ7AKz6HE9pM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186298-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186298: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=83a7eefedc9b56fe7bfeff13b6c7356688ffa670
X-Osstest-Versions-That:
    linux=771ed66105de9106a6f3e4311e06451881cdac5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 10:04:40 +0000

flight 186298 linux-linus real [real]
flight 186300 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186298/
http://logs.test-lab.xenproject.org/osstest/logs/186300/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186294

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186294

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186294
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186294
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186294
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                83a7eefedc9b56fe7bfeff13b6c7356688ffa670
baseline version:
 linux                771ed66105de9106a6f3e4311e06451881cdac5e

Last test of basis   186294  2024-06-09 08:51:48 Z    1 days
Failing since        186296  2024-06-09 17:11:41 Z    0 days    2 attempts
Testing same since   186298  2024-06-10 01:11:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ingo Molnar <mingo@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Rutland <mark.rutland@arm.com>
  Milian Wolff <milian.wolff@kdab.com>
  Namhyung Kim <namhyung@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 658 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:12:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737114.1143255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGc0Q-0004G2-PH; Mon, 10 Jun 2024 10:12:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737114.1143255; Mon, 10 Jun 2024 10:12:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGc0Q-0004Fv-Mb; Mon, 10 Jun 2024 10:12:02 +0000
Received: by outflank-mailman (input) for mailman id 737114;
 Mon, 10 Jun 2024 10:12:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGc0P-0004FU-J7
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:12:01 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e05314c3-2711-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 12:12:00 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6f1da33826so93670966b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 03:12:00 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6efcbf645fsm348974066b.33.2024.06.10.03.11.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 03:11:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e05314c3-2711-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718014320; x=1718619120; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=LfdBiwd7A/UuG9Ic6GuyYfKI7xVDU7JyZ8LzU9Cbtxg=;
        b=ORowi5Kw5z8eKfglLYbiD7y+sG2fZBwi5SbMOoVpyjUKurUkELKQbJQiwvIu4OMu6W
         uNO8b614FUO0o29IuZ/D6yaICmzMK0Ai5hzLIwKJ5hwesBWqJxYU2ciBiAPBLMKqTotw
         iduLfS3hjlwETfnrYpIDG8tNBE5QblvvCTcYgpyQviRqWo2btMS6sK7xLv9/1+lzn5Lk
         6+7KkkDS+mZxKv6BiW3xbToEIYu36JByTV35k7QXDmCWVa4JRAAQBDt+Lib0qW5X16R2
         tuXZL2P0iuMUWYxX8dap07DIzlpmweXcoMqxitjKAumoFoqQWdf8KJJ4LlCmSxZ5iVrk
         p86w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718014320; x=1718619120;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LfdBiwd7A/UuG9Ic6GuyYfKI7xVDU7JyZ8LzU9Cbtxg=;
        b=R3d+Jym9zDGDpvv54vddsiy9fqICYp+poTZUHXSkmCyzdi/C6omrdG79SA0+HpjvhX
         oTTlkaueSd6Mll+N7PLvf8HyHY1r+S96ZXfA08rc3M2b/F3VsxKPkJrpTxgxmqyHBTm7
         a5+5os2W/nGRqpWUxO3ahrL4OHiVIF04pXNWnOaDw9oloUJZLNTjDJM2SbPud0TA7amk
         a3+kJiNWdFASC6PHBD5OAHdC4x+J56Q9kptVjw9IpF/TLeDQ+v+9dLk77X8Cb6+D+l6H
         +Hwiw7NF7/W8LfhmAhu8H970XoFIWGEW8UXHOrQK/dum1o84veXXX3vwKcOBkku9IHDJ
         iEgw==
X-Forwarded-Encrypted: i=1; AJvYcCWgVh1r6GJoK4OSN+c4lCvcmRben1cR3aDZlgdc4mwdQq3oIfrUnmaAOGnlv9QWsIwBhGtW/Y2eRYjFk5+/sB81g8hT+1g7RYeBEdqot2c=
X-Gm-Message-State: AOJu0YyqJIFMnUvJiNRb3P2/Tp341JaOiCtwau8ZS3x7F1Xib5RcZ6rx
	PUeYd6w5iZdeNXBq6E1E2xmF0tUvlnCtY9XALCace6SBHF9ldjvyQc0NDZ22PQ==
X-Google-Smtp-Source: AGHT+IGohszrASHgS1qGzSQ4gEjJcTXzWqghCt+4g+yCs6JNyTXpsO0d6sJJj/ymI7y2rou7ILF6UQ==
X-Received: by 2002:a17:907:31cd:b0:a62:edcd:87c1 with SMTP id a640c23a62f3a-a6cd560f947mr837058366b.10.1718014319701;
        Mon, 10 Jun 2024 03:11:59 -0700 (PDT)
Message-ID: <9092e4d2-1feb-4667-86df-644a92468f58@suse.com>
Date: Mon, 10 Jun 2024 12:11:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Segment truncation in multi-segment PCI handling?
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel
 <xen-devel@lists.xenproject.org>, javi.merino@cloud.com
References: <ZmNjoeFAwWz8xhfM@mail-itl>
 <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
 <b609eaab-0a0a-433b-81d3-84a0cd90ebc1@suse.com> <Zma5Rj_cswrIYcD2@macbook>
 <a8225a94-54ed-4b24-8867-b9da65cb0a14@suse.com> <ZmbLZHSOg8KuRvAw@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmbLZHSOg8KuRvAw@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 11:46, Roger Pau Monné wrote:
> On Mon, Jun 10, 2024 at 10:41:19AM +0200, Jan Beulich wrote:
>> On 10.06.2024 10:28, Roger Pau Monné wrote:
>>> On Mon, Jun 10, 2024 at 09:58:11AM +0200, Jan Beulich wrote:
>>>> On 07.06.2024 21:52, Andrew Cooper wrote:
>>>>> On 07/06/2024 8:46 pm, Marek Marczykowski-Górecki wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I've got a new system, and it has two PCI segments:
>>>>>>
>>>>>>     0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
>>>>>>     0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
>>>>>>     ...
>>>>>>     10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
>>>>>>     10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
>>>>>>     10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
>>>>>>
>>>>>> But it looks like Xen doesn't handle it correctly:
>>>
>>> In the meantime you can probably disable VMD from the firmware and the
>>> NVMe devices should appear on the regular PCI bus.
>>>
>>>>>>     (XEN) 0000:e0:06.0: unknown type 0
>>>>>>     (XEN) 0000:e0:06.2: unknown type 0
>>>>>>     (XEN) 0000:e1:00.0: unknown type 0
>>>>>>     ...
>>>>>>     (XEN) ==== PCI devices ====
>>>>>>     (XEN) ==== segment 0000 ====
>>>>>>     (XEN) 0000:e1:00.0 - NULL - node -1 
>>>>>>     (XEN) 0000:e0:06.2 - NULL - node -1 
>>>>>>     (XEN) 0000:e0:06.0 - NULL - node -1 
>>>>>>     (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
>>>>>>     (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
>>>>>>     ...
>>>>>>
>>>>>> This isn't exactly surprising: pci_sbdf_t.seg is uint16_t, so
>>>>>> 0x10000 doesn't fit. The OSDev wiki says PCI Express can have 65536
>>>>>> PCI Segment Groups, each with 256 buses.
>>>>>>
>>>>>> Fortunately, I don't need this to work: if I disable VMD in the
>>>>>> firmware, I get a single segment and everything works fine.
>>>>>>
>>>>>
>>>>> This is a known issue.  Work is being done, albeit slowly.
>>>>
>>>> Is work being done? After the design session in Prague I put it on my
>>>> todo list, but at low priority. I'd be happy to take it off there if I
>>>> knew someone else is looking into this.
>>>
>>> We had a design session about VMD?  If so I'm afraid I've missed it.
>>
>> In Prague last year, not just now in Lisbon.
>>
>>>>> 0x10000 is indeed not a spec-compliant PCI segment.  It's something
>>>>> model specific the Linux VMD driver is doing.
>>>>
>>>> I wouldn't call this "model specific" - this numbering is purely a
>>>> software one (and would need coordinating between Dom0 and Xen).
>>>
>>> Hm, TBH I'm not sure whether Xen needs to be aware of VMD devices.
>>> The resources used by the VMD devices are all assigned to the VMD
>>> root.  My current hypothesis is that it might be possible to manage
>>> such devices without Xen being aware of their existence.
>>
>> Well, it may be possible to have things work in Dom0 without Xen
>> knowing much. Then Dom0 would need to suppress any physdevop calls
>> with such software-only segment numbers (in order to at least not
>> confuse Xen). I'd be curious though how e.g. MSI setup would work in
>> such a scenario.
> 
> IIRC from my read of the spec,

So you have found a spec somewhere? I haven't found one so far, and I had
even asked Intel ...

> VMD devices don't use regular MSI
> data/address fields, and instead configure an index into the MSI table
> on the VMD root for the interrupt they want to use.  It's only the VMD
> root device (which is a normal device on the PCI bus) that has
> MSI(-X) configured with real vectors, and multiplexes interrupts for
> all devices behind it.
> 
> If we had to pass through VMD devices we might have to intercept writes
> to the VMD MSI(-X) entries, but since they can only be safely assigned
> to dom0 I think it's not an issue ATM (see below).
> 
>> Plus clearly any passing through of a device behind
>> the VMD bridge will quite likely need Xen involvement (unless of
>> course the only way of doing such pass-through was to pass on the
>> entire hierarchy).
> 
> All VMD devices share the Requestor ID of the VMD root, so AFAIK it's
> not possible to pass them through (unless you pass through the whole VMD
> root) because they all share the same context entry on the IOMMU.

While that was my vague understanding too, it seemed too limiting to me
to be true.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:12:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:12:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737118.1143266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGc0v-0004i2-1Q; Mon, 10 Jun 2024 10:12:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737118.1143266; Mon, 10 Jun 2024 10:12:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGc0u-0004hv-UZ; Mon, 10 Jun 2024 10:12:32 +0000
Received: by outflank-mailman (input) for mailman id 737118;
 Mon, 10 Jun 2024 10:12:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OhkP=NM=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sGc0t-0004FU-96
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:12:31 +0000
Received: from mail-oa1-x33.google.com (mail-oa1-x33.google.com
 [2001:4860:4864:20::33])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f1f074dc-2711-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 12:12:30 +0200 (CEST)
Received: by mail-oa1-x33.google.com with SMTP id
 586e51a60fabf-25488f4e55aso1021899fac.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 03:12:30 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-795758dfe3asm111014985a.105.2024.06.10.03.12.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 03:12:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1f074dc-2711-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718014349; x=1718619149; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=2Yq7nVrlyB4+bTuo8fkH9BXvZhOByQ/M4/xpYdLoCT8=;
        b=dFq8h6sNTzmY56G3CJQHgtbP/FqUj1WU0b/sAhmgCu04LhxmIrOw4woA2F6snXKz4U
         OtEwzFhXX/GRtTqf5kvZCAIchmCT/RDcg74dnxY1Sel3oJKlwUVlu8Th3zT03oBASJXA
         l4sl56WXb5jjVva67+lh78lVb/4+LEmZXWA9k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718014349; x=1718619149;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2Yq7nVrlyB4+bTuo8fkH9BXvZhOByQ/M4/xpYdLoCT8=;
        b=f/ZXdBetA3W/F4mV03/wGrDB0sJsYHDPBXdBMjwMexKyEkLhf6xyfS/sRjCiVUqYE8
         KHzZkC+9D32nLt2mUcTZIpIYn3jl+IhB1O9FB/pw9BU0DMKjeMM3qWIHdSg3gid3JKgz
         a8oXXlJVXCyZBaBgD5km8TE/3sQ69l8QGsMDtP5s3HZOhxhFKSPMfBJvYYZrhGyuQPgo
         H4eWJdGv3ZsgjBZuBokAqDzHP0r6GKxEnVSescwAJHVv2dIemSYqAbqrabe44k1Z1Imm
         4Xp3Er4b1HuMdrSUO/Ky4Yzof1hE7Fv4M7coWXYsvJiLjDjdjS3zuMcjxhNnGU6xheyM
         5Ybg==
X-Forwarded-Encrypted: i=1; AJvYcCU5zd6KzE9hBxj9FnM2Zi+PC9W9GdnQ2SYYI/ZjATVoWDU8xl5Hy0I5kQupNSUzV/Xdf488Xb5L0jhje+OoRrVvbnWtKg5QYR/32igqWUs=
X-Gm-Message-State: AOJu0Ywf3bL8q+EOPcD9TsqfXgUT3GOIELJ5jdvW9vus5eqAFCgLLR7n
	LG0kD3j8SGnRPR4lQmDk+s9i0Oaj5bZ41nz0a3naEytrEIrpbXl4gwUf+B9x6QnUKokxUTsUb1h
	R+qA=
X-Google-Smtp-Source: AGHT+IEWO2epNIxWRQ6A5n0Id/ENs/Rn1RjQOADBpcesO9xbbcYR3sc9vIQIFtLAI5itj+UtbAQAXg==
X-Received: by 2002:a05:6870:71cf:b0:254:b73d:fe14 with SMTP id 586e51a60fabf-254b73e02cdmr3307281fac.52.1718014349237;
        Mon, 10 Jun 2024 03:12:29 -0700 (PDT)
Message-ID: <e56a4519-9d4e-4267-a189-e8e2fec1518b@citrix.com>
Date: Mon, 10 Jun 2024 11:12:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: vcpumask_to_pcpumask() case study
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <3bb4e3fa-376b-4641-824d-61864b4e1e8e@citrix.com>
 <c5951643-5172-4aa1-9833-1a7a0eebb540@suse.com>
 <1745d84b-59b7-4f90-a0a8-5d459b83b0bc@citrix.com>
 <afc347c0-ca2f-4972-b895-71184b1074ea@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <afc347c0-ca2f-4972-b895-71184b1074ea@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10/06/2024 8:15 am, Jan Beulich wrote:
> On 07.06.2024 14:35, Andrew Cooper wrote:
>> On 03/06/2024 10:19 pm, Jan Beulich wrote:
>>> On 01.06.2024 20:50, Andrew Cooper wrote:
>>>> One of the follow-on items I had from the bitops clean-up is this:
>>>>
>>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>>> index 648d6dd475ba..9c3a017606ed 100644
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -3425,7 +3425,7 @@ static int vcpumask_to_pcpumask(
>>>>              unsigned int cpu;
>>>>  
>>>>              vcpu_id = ffsl(vmask) - 1;
>>>> -            vmask &= ~(1UL << vcpu_id);
>>>> +            vmask &= vmask - 1;
>>>>              vcpu_id += vcpu_bias;
>>>>              if ( (vcpu_id >= d->max_vcpus) )
>>>>                  return 0;
>>>>
>>>> which yields the following improvement:
>>>>
>>>>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-34 (-34)
>>>>   Function                                     old     new   delta
>>>>   vcpumask_to_pcpumask                         519     485     -34
>>> Nice. At the risk of getting flamed again for raising dumb questions:
>>> Considering that elsewhere "trickery" like the &= mask - 1 here were
>>> deemed not nice to have (at least wanting to be hidden behind a
>>> suitably named macro; see e.g. ISOLATE_LSB()), wouldn't __clear_bit()
>>> be suitable here too, and less at risk of being considered "trickery"?
>> __clear_bit() is even worse, because it forces the bitmap to be spilled
>> to memory.  It hopefully won't when I've given the test/set helpers the
>> same treatment as ffs/fls.
> Sorry, not directly related here: When you're saying "when I've given"
> does that mean you'd like Oleksii's "xen: introduce generic non-atomic
> test_*bit()" to not go in once at least an Arm ack has arrived?

If we weren't deep in a code freeze, I'd be insisting on some changes in
that patch.

For now, I'll settle for not introducing regressions, so it needs at
least one more spin (there's a MISRA and UB regression I spotted, but I
haven't reviewed it in detail yet).

But yes - they're going to end up rather different when I've applied all
the compile time optimisations which are available.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:16:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:16:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737126.1143276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGc4y-0005Si-KD; Mon, 10 Jun 2024 10:16:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737126.1143276; Mon, 10 Jun 2024 10:16:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGc4y-0005Sb-HU; Mon, 10 Jun 2024 10:16:44 +0000
Received: by outflank-mailman (input) for mailman id 737126;
 Mon, 10 Jun 2024 10:16:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGc4w-0005SR-Kg
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:16:42 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 87333be5-2712-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 12:16:40 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a63359aaacaso645791466b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 03:16:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f104a4f9fsm261662466b.83.2024.06.10.03.16.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 03:16:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87333be5-2712-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718014600; x=1718619400; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=sJqWECq0p0sNtzlS4Wxcd0ULLzGoy1RNu5MhKxaG+Pk=;
        b=LOy2+4hS5HRb+V3VZ6SBkNZR5A9MkIYLycCzMVRT0ZI4j9BEcwS31p0zXaGLgTp6qf
         xoCQx8JVNtxmxjV2SZST/ggJZO2DT+T5WpdyoYP1IjBPQ3IEI6LCNHU6dvWZ+YGYpVoF
         a7ytIJgjQW5WEhiiPkC3BPseOX8wB0C7OXqOr2K5uY7SlDNXgDij+FhBAvrCsV5120eV
         blT71hZoz/kssEZ939ZDZjjXdp9BNb40qXzScWV9cKP+cIUUm7xk0+3EY9o1hoXlzYsq
         v0kV2ySAttTI+IeYw89ZLfGoFAqAmaafP1RNwzSw8TmbSM32RTMUeqhCd7/AxLkJVpH2
         n2hw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718014600; x=1718619400;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sJqWECq0p0sNtzlS4Wxcd0ULLzGoy1RNu5MhKxaG+Pk=;
        b=fvuSA8kjnDpE6H042wpUEDSMIXog3d3qwb41ZIQ2xm3o8PJKTG4hprBR7ACbmbRrs/
         yw8HkQ1Cc5lU+3ulGhfDeO2FlFON8OUYokDIfJwlJBVo/RipE0NhR2QkfDEVa4/M3Aeo
         XnhEsV/ercGU8TgGDUPBwDTCi38T1+oWXo4J7W0V7v4gJKmjlAjFjzlENNm2uALB80JF
         6mWRFLvXIJTXLhxY64Uxo8H3losS/CftbFU2WW0gRzC/iBLA8LO/iA9uDzyGSKi8+nHC
         BlIMf4wFBUCUEkHiyhty3vHpyhUFviqOeLw4MKseQ4ao5nNwjSFJcMktf5b8a87x7Hty
         aJug==
X-Forwarded-Encrypted: i=1; AJvYcCVORxFMp75DtD5YfRxukNcAwwlCxTurb50yeJNTjDSbmT2wRkqvNF5PAKeCkZ/qrYs8wJW8QXbyxSVNav7dDY9pcjiSK65AkQWPv3TOqnA=
X-Gm-Message-State: AOJu0YxhSH2ka+4ZQH/r/ntRR/gQrBMg63mBuweqpx25u0VEPpTlyi+l
	qVWmOD37gZfVomwZPJrX6Sgp2UtE8m6VLS2fyRPPf9ApKyLpW6CtK1daBtj26Q==
X-Google-Smtp-Source: AGHT+IGSjelDK2JM1Ey8ENpVqMPqfpjPHakodbd23mz01dSoQtgH3LRJGMaxbaN8imnx5OzYfygpvw==
X-Received: by 2002:a17:906:f0c9:b0:a6e:7e1f:592f with SMTP id a640c23a62f3a-a6e7e1f5a11mr453033766b.39.1718014599744;
        Mon, 10 Jun 2024 03:16:39 -0700 (PDT)
Message-ID: <8961cf72-4eeb-4c47-9723-35da3e47d4d2@suse.com>
Date: Mon, 10 Jun 2024 12:16:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com>
 <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
 <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com>
 <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 11:10, Petr Beneš wrote:
> On Mon, Jun 10, 2024 at 9:30 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 09.06.2024 01:06, Petr Beneš wrote:
>>> On Thu, Jun 6, 2024 at 9:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
>>>>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>>>>>
>>>>>      mm_lock_init(&d->arch.altp2m_list_lock);
>>>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>>>> +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
>>>>> +
>>>>> +    if ( !d->arch.altp2m_p2m )
>>>>> +        return -ENOMEM;
>>>>
>>>> This isn't really needed, is it? Both ...
>>>>
>>>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>>>
>>>> ... this and ...
>>>>
>>>>>      {
>>>>>          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
>>>>>          if ( p2m == NULL )
>>>>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
>>>>>      unsigned int i;
>>>>>      struct p2m_domain *p2m;
>>>>>
>>>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>>>> +    if ( !d->arch.altp2m_p2m )
>>>>> +        return;
>>
>> I'm sorry, the question was meant to be on this if() instead.
>>
>>>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>>>>      {
>>>>>          if ( !d->arch.altp2m_p2m[i] )
>>>>>              continue;
>>>>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
>>>>>          d->arch.altp2m_p2m[i] = NULL;
>>>>>          p2m_free_one(p2m);
>>>>>      }
>>>>> +
>>>>> +    XFREE(d->arch.altp2m_p2m);
>>>>>  }
>>>>
>>>> ... this ought to be fine without?
>>>
>>> Could you, please, elaborate? I honestly don't know what you mean here
>>> (by "this isn't needed").
>>
>> I hope the above correction is enough?
> 
> I'm sorry, but not really? I feel like I'm blind but I can't see
> anything I could remove without causing (or risking) a crash.

The loop is going to do nothing when d->nr_altp2m == 0, and the XFREE() is
going to do nothing when d->arch.altp2m_p2m == NULL. Hence what does the
if() guard against? IOW what possible crashes are you seeing that I don't
see?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:26:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:26:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737134.1143286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcEF-0007l9-DQ; Mon, 10 Jun 2024 10:26:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737134.1143286; Mon, 10 Jun 2024 10:26:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcEF-0007l2-At; Mon, 10 Jun 2024 10:26:19 +0000
Received: by outflank-mailman (input) for mailman id 737134;
 Mon, 10 Jun 2024 10:26:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GtSS=NM=epam.com=prvs=289119432d=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sGcED-0007kw-7P
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:26:17 +0000
Received: from mx0a-0039f301.pphosted.com (mx0a-0039f301.pphosted.com
 [148.163.133.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dd328607-2713-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 12:26:15 +0200 (CEST)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45A92l6D004683;
 Mon, 10 Jun 2024 10:26:04 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2109.outbound.protection.outlook.com [104.47.18.109])
 by mx0a-0039f301.pphosted.com (PPS) with ESMTPS id 3ymgp3cknt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 10 Jun 2024 10:26:04 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by DB8PR03MB6284.eurprd03.prod.outlook.com (2603:10a6:10:13e::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.16; Mon, 10 Jun
 2024 10:26:00 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Mon, 10 Jun 2024
 10:26:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd328607-2713-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TUmBXX+BdNifj+SmxYYIlDmkjGnW9U88ijj9+N3PdFVSasz03ekEwWmlg5q6kg81fgX5d8Qtkf8cPS598LfgjDxqk1uyWtttjIzqXSe6m18Ait7wfX31gOFLmLVOGBTAT/E+1/zMXdYpPUdevwybKlNQOcFAfJAYGTL8nmtnmIbiXpovZXcUQvuBfykxfnJtKnm4d1yuJGN96jVKgenlKdggzAc4SLaV1oBiWNvnKtPSCHB1EwzuTmcRuC6pCxXcIDX8xX2/AInqi6lBIQXgZj6Evyyqi/d5sAD5olwmnZqIZY4J+g0GgQs64/DCDu5JnEOPqn0y/J85yCZR5S4zbg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=9LwftG1D1TxQ41ZNc/rQVIomsuS+AvqPSY6z59+eY8I=;
 b=Tl9lOHT3BqerJ9g//EMiVZPdlZxNmIrUGW1UZh8T3XYBFFuFBKvGYccggIn1OlI/iLqe/UduOL8hvoGtvgwQUcN6YzMTq15/1fEmn12p1gVTqhuCXOOTIy4qPgECKd4uwjrIOB5k6PJoEKJyh88AJ8yalXTbDI4Dc4OvGQl2bTwJNv8k7Pxfi4XXXEU821KcFrZB5Wp+RryFCGD8q75fyNPc8fp486TEMdJ6vtpyTkeNyarDHf/MU4KOtCVwrTbWfTQAr19T6vAo4kAAPT6lJpTDX2tdkU7uwFBxNzzBkIqW8W7hlkDX0TRqMOAtTYZAXxZfaCRIeQWxMR+Xc5RDxQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9LwftG1D1TxQ41ZNc/rQVIomsuS+AvqPSY6z59+eY8I=;
 b=dqs5mrdsW6jHcX45I5K/3O5VhA3ZGgteZGAn10bZvGxhMEyxg9vcIV0VGxeRZft6NaSOooItr+euVD4OvUmVHp1ww0rioVKVJX82oRoQL5ufXtKBZ5glOEGFRVDc3Yff280U781tNDgr8AqL7XOqimL10Ij8UCDmR1QaOfYorKDS6EmSsItFjomleE4uiCHI167/BdaYoVI7Hu3kYoQcnmkfJmsZWwLYWnYJahlOkG40AK+DzcZy5frsp8leWzZnBlGH2XLBYikHUCGqIFL5zNspL91gKLsXP/HQadsGME332DZt8HVctnstYp3dLEdePobn99/gak6aaxyHL3BEKQ==
Message-ID: <1bd86288-38bc-4bd6-a937-d3e965f4276e@epam.com>
Date: Mon, 10 Jun 2024 13:25:58 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/cpufreq: separate powernow/hwp cpufreq code
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Jason Andryuk <jason.andryuk@amd.com>, xen-devel@lists.xenproject.org
References: <20240604093406.2448552-1-Sergiy_Kibrik@epam.com>
 <5cb13d1a-1452-4542-b50d-23e6a9d9d3ef@suse.com>
 <c66966da-bbe3-432e-8a2f-809bf434db39@epam.com>
 <ab57f7f3-ac54-4b41-950a-1f7bee4293ab@suse.com>
 <647b086a-04b0-42be-a7b8-a266c4f4e64b@epam.com>
 <c642d1ef-9466-424a-9e84-54accecd8c6a@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <c642d1ef-9466-424a-9e84-54accecd8c6a@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA1P291CA0011.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:19::8) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|DB8PR03MB6284:EE_
X-MS-Office365-Filtering-Correlation-Id: 591715b7-cee6-4b9d-0f94-08dc8937b93a
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|376005|366007|1800799015;
X-Microsoft-Antispam-Message-Info: 
	=?utf-8?B?NVBjS0pMV2RwVEtuWmJ3K3kzSm5zR0xDSkpFbHNOcHl2M1RlMEowOEY3dW5U?=
 =?utf-8?B?eGRpODgyUU93U2dEYUY0NjZkZ0tmU05sdTJ0T2kySmxzNWdFbklvaUYrUDYz?=
 =?utf-8?B?K2M5K2pVZkdLV2hnOGZIdk83NGpWeXpWK1hPY1haRWxKdzdITXdMcXRmdlNv?=
 =?utf-8?B?S211b2wwUEdHdmRZY1dxT0hiZGQyMFRMdmE4Y04vZzVVK2ZzMzhXNVJSYmRt?=
 =?utf-8?B?RFZtYzMwQmNONjNSejFRaGlpQmFvVGVDN1psdnliZXhVSXV0ZXBDT3FLWjhz?=
 =?utf-8?B?L25UY3E3MFdyNWhXRDZyVUU1bG1XcWxocitxQkhyVGMzOTFXZ1ZmcDdqU3g2?=
 =?utf-8?B?K0RKT0ZLMUtwWFlkYVhTbDVKeW9tTC9IcE5IMitERSs3b2p2S0FHT3JNY2pl?=
 =?utf-8?B?YTFndnppOG1mSDNlZDdXR2syRytkUTdoWkY5T3RDdGljV0Q1Mzdabk5PdWJo?=
 =?utf-8?B?enFzM1QzczFiOFFtZHNJUXRrN000dFlHekZmeTNrSzE1RXJkK0RVWnhxbytI?=
 =?utf-8?B?VHpONVMrdWJiQjFHVmIyUy9YMHVzZ3Y5OFhLY1RuN2puU0F2c1JHTUxUa0lh?=
 =?utf-8?B?czFnSS9Jb1pXNnhtTi9OQWNOdG5EU0ZNZUlJWGpLY0VqY1lRbWkxdnBhNlIv?=
 =?utf-8?B?YVQzb1NyVW43azdRdUttZzZvZXcvazgvZUZKakY4eCttYmV5QS9GZnAxSWs4?=
 =?utf-8?B?bHlTanBqMHVFc2tXUStKeWFrWkI1Z2FZdGVJWnZpNnBVckttUnRkc2hBc2d5?=
 =?utf-8?B?dTJ2ZGNIOC9BNi9Vb1BRc2lCTGJLY3BTeUppSno0OVRKc2s4RU91R3VjMEY1?=
 =?utf-8?B?ZlF0MHNpNnc4MnBJQVREVjJMSHFyU3dlcXI1dVRmTWwyZ1hja3BSWjNpdWR2?=
 =?utf-8?B?Y2J1ZlBSaHZQTnAyTS9ISXlzdnhaVVhUUy9lbjd0Tkk4RERMUFlPZVpWR2k5?=
 =?utf-8?B?bmZjTUtKSnZkVkIzN3VnMGF5RVRaNXVFVWxVSWZkd040OW9MNUZKdE9WbmZK?=
 =?utf-8?B?bExDR0FlMmxGNUxDcElLa2sva25MWVlZdUNvUUpjcUovSDlRRHVKd0lnS1pG?=
 =?utf-8?B?ci94Q25sbG5sbGl2VElxNjJJQTgwcVNqMGl4UitVYVAzQUtXcDJUbTgvVk16?=
 =?utf-8?B?eU1XdkU0ZHpWT2t5T0NlTmpzc0Fwa2xNRVFqQmNVYXRpeS9QK3NuWkNlVEVa?=
 =?utf-8?B?SG9yNmJMNDA0U0lSNkdBKzZNejRlazM2ZXNQcFVMRmNrcVErelR3aDgzVXVl?=
 =?utf-8?B?R1k3UHc5VlVmcko4UUNDaXlyaTc3T3FFMXlNUUZkOFNGL2ZybzU4RXNoRzBB?=
 =?utf-8?B?NW0xME1lTEFrQy9uclVUZUF5dGdSMlF3RTRYQWY2NnZjaC8yVTBYc0ZNZVU0?=
 =?utf-8?B?L1B6c0U3d0N4ZzhLaVhmL0EzVkdSWVVPL3RXbzRrMnMrdjNNWjhpMGFXOFVS?=
 =?utf-8?B?OHZOSmpZcTdVd1pjNFZKYklmSDJGVmRydWQwNTZDWXljV011TDF1c3B2MkJS?=
 =?utf-8?B?Q0VYMFBadGc1dnVGR1FQaElNZGtNck1Vb0hhOWEvN0YzZXJUbTNlYTNXMUQ2?=
 =?utf-8?B?c2lOUUtHdDVWOStEVjc5bDc0b3hGMDFiVm5OUDh4T2p2L1ZnR2Z0Nklhcysz?=
 =?utf-8?B?d3I5S2FMaE1uT1NtdUlLUzdTL05oOTFTT21QZnN3L2wxQ0hWbXZYaEkyamhL?=
 =?utf-8?B?dzEyT2tXb2pvSkFFR0V2ZGp5czZuTDQyWUsySmJXaTVTMjVEVUZ5Y2hKYURi?=
 =?utf-8?Q?JxRbIaTE+cS95+NgJ0=3D?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR03MB9192.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(376005)(366007)(1800799015);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?utf-8?B?QmJ2ako2ZU9aQTB3bWNNMXRXQVBycW5UUm1pMEMzWTlUMGFyRllPdk9QUkhz?=
 =?utf-8?B?NTRiUE04blZBdVM4ZmJNM3ZWQzlXUFdmRHVyODJOTEpRd29VaVZIamxaU2Yw?=
 =?utf-8?B?Um16aVdzTldpWGRCaGdBZDlKSWozdm1YamtaK2lWa1VYY2RoRnlsSHNDU010?=
 =?utf-8?B?VG1CaXBNREVIR0IzeWVUQWNzUStKb3FkUUc3MmxsSWFYZkZLU0ZSS1pUV2gr?=
 =?utf-8?B?SnprakdQYnRRdGt5N0hxbFFUdXNRdzZ5Y1VZQlJXTFY0eUNPQjB3UkVUSTlL?=
 =?utf-8?B?b3JrVm9NNFN3OWRvL0Q1aFNJeTJsdm1GeXlXRjNBRzJoYzFvZjRtNDhHTndz?=
 =?utf-8?B?Y01zYyt1RlBxT0FldlZ0ZFFzNFMrT0c3UUxKNWJLYmhLeC9tVEljRVhYZElt?=
 =?utf-8?B?aTU5NkpocnZXbmNucGJFZG80TXhZUWNHWko0a2ttb1ZiZVJWMkVjR1Z5Q3Yy?=
 =?utf-8?B?UUcwTVp2RzgyQ0RINVYycnlkUlZCV2QxNHBaL2RUUm1wcDhiaTFFZEdRQWdQ?=
 =?utf-8?B?MmZjdUdRdWhrU24zVW5nM3NXbmpjckVoY09TdllvZVJxRWpaZVpRU0VNTlBY?=
 =?utf-8?B?UzlFTmJVU0RrWFJuVU81L2dwRi9HL29iRnJvRjJuYituM2pNWUhQUThtczRt?=
 =?utf-8?B?VjNEVXdWcm1zZS9KUUdFNjExbkNVQkV0SXRkREdIeFZRU2ZxQ0UrL0hnZnZj?=
 =?utf-8?B?WVNEaEg2aWxBc25NR2wrdHNqRjQzTnNkT2VJUWptNUUxcE5NUGo3SFJXZHBo?=
 =?utf-8?B?VGtVMzVDSlRBbXdpaUpkQnlGWlRUOXplNnRYc3BzTWZTQ0xVRVE3cVFYRnBx?=
 =?utf-8?B?aTlieExjWS9nNFVsMjAvRmR4WU9iRGM5MUw3RnBWc1VSaDBVeTBKY3ZodXo5?=
 =?utf-8?B?aFFrQStMY0ozZklZa29MUEJVWm1JVHJkWHcrdFhzS1VrYzJhWG0rSTFPTU4x?=
 =?utf-8?B?a3RrQVFva2JYZnYrdVhwcDJIWGd1S2RUVGt5SSswRzhwUTRFRGhkVWVIV1RQ?=
 =?utf-8?B?eWlPdUxSTDM1ZmVTbktINW8wNjNmdGgyV2EwYlRJOFV1RkRJMU5BUVdueG1q?=
 =?utf-8?B?UlRVNEFyZklVTVBsdUFNaHhsd1NEOW1hL3o4alBaekdEZmczL2VET2FlTFFO?=
 =?utf-8?B?OW9mK1JGRXZZK2ZqOU1TZGRsbFB5U2NEbHFTNFFCSlRDdHQ4YkZLZEZZUnlQ?=
 =?utf-8?B?UzM3NTdlR1R1Y0RrZ01Jc1QzbVRjZDVyTDBwaWV5b0Jid2xJajk1TVBwcUtt?=
 =?utf-8?B?bTRuQlpGOHpja0l4TEpXYVVZKzJETmEyQzdSWUFhY3hIVFMrMkhzUmhwWGZB?=
 =?utf-8?B?eG12YnYreUdmMEdsVjUvOG9rRDdNZCt2N1I4TkdBRlk4dWVvQmJqVWpOQ2ZH?=
 =?utf-8?B?d3BLWC9jbXpNM3FyemlDM2RycEc5T3BBVHlVOUlkTEM5K3RjZS9TTi9sTEFY?=
 =?utf-8?B?clhhU05OYTRLWjdvbFdlODdCOFNjOXNJTnd4Y1krS2g0ZHVaa0drYUMyaElt?=
 =?utf-8?B?R05qNkxlSGFDSEdsVjVtLzN0MCsycGJ5Zlc4NXUwdUtyelZWbDhadDRzcWlq?=
 =?utf-8?B?WUhLTXM3eS95Q3dHYkhQeVF5b29rOWlQTDEvamJqLzh4S0NtbGRXcWdVTkZH?=
 =?utf-8?B?NEtHalR5TFZJSVdHUXNTdnZ3eDZlSzZsVndZN2RLZDJKY2Y2VVhIRDdmZWVZ?=
 =?utf-8?B?VFFEMnNsREVTb2VlZUVxSU43YUJJc0dVbnArMFJkZjNObEthTkd4SmVFd25U?=
 =?utf-8?B?QkRhcnR0eXJnNzRsdHJOVTdzRDFZNmpOZ3ZGbFdHc21HRUNuYXk3Q2UzdFVK?=
 =?utf-8?B?OHpwTTBPRmIrSjY3elphWEtUdnNPeE4xNHVRMjA5UVNWSUh2VFpFNWRNSTMw?=
 =?utf-8?B?bEI0WjFNT0ZZTG5SZVJwakJqQVh3RjlUbVNnRTY4Umd1Z0wzb1JxNVR0eGx2?=
 =?utf-8?B?QnVOWnU2WGk1Ukovempud3VIUUtEWHRWc2ZOV3NodzlNWktJUFFpZGNEUytn?=
 =?utf-8?B?bHNsUzFXNTdaZVowL0ZXMCtsemNFVWxicVFpS29hK3B2KzhNN3RTSDMvSUM1?=
 =?utf-8?B?ZWNhUUVLcDFCV2VpUDVHMDlUQklaWUlyZjJLRWFvbjNzNGxIbzFibVBQWkFW?=
 =?utf-8?Q?X8mQJxJsbYPtCPkNFsD5lOfel?=
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 591715b7-cee6-4b9d-0f94-08dc8937b93a
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2024 10:26:00.5807
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fq3rqlllo/1DUM/axrECjKO6ztBe4jdFp1PTxdjPJxSCgkoF3pa2uP/9YA3bdrtBCncvm8J08LAoTUyqiUGyVQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR03MB6284
X-Proofpoint-GUID: Z46RluE6bS419zDJ7i2_cOENv570fq-D
X-Proofpoint-ORIG-GUID: Z46RluE6bS419zDJ7i2_cOENv570fq-D
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-10_02,2024-06-10_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxscore=0 spamscore=0
 clxscore=1015 priorityscore=1501 lowpriorityscore=0 phishscore=0
 mlxlogscore=820 impostorscore=0 adultscore=0 bulkscore=0 malwarescore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406100078

>> Though I'm not sure such movement would be any better than a stub.
>>
>> acpi_cpufreq_driver, i.e. the most of code in cpufreq.c file, can
>> probably be separated into acpi.c and put under CONFIG_INTEL as well.
>> What do you think of this?
> 
> Sounds like the direction I think we want to be following.
> 

Sure, then I'll make a series for that.
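[Editorial note: the split agreed on here might end up looking roughly like the following build fragment. This is purely an illustrative sketch; the actual directory, file names, and the exact use of CONFIG_INTEL/CONFIG_AMD would be settled by the series itself.]

```
# Sketch of a per-vendor cpufreq split (paths/names hypothetical)
obj-$(CONFIG_INTEL) += acpi.o      # acpi_cpufreq_driver, split out of cpufreq.c
obj-$(CONFIG_INTEL) += hwp.o
obj-$(CONFIG_AMD)   += powernow.o
```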

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:30:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737138.1143296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcIC-0001Kn-Tv; Mon, 10 Jun 2024 10:30:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737138.1143296; Mon, 10 Jun 2024 10:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcIC-0001Kg-R1; Mon, 10 Jun 2024 10:30:24 +0000
Received: by outflank-mailman (input) for mailman id 737138;
 Mon, 10 Jun 2024 10:30:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GtSS=NM=epam.com=prvs=289119432d=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sGcIC-0001JK-1o
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:30:24 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 70de8ee4-2714-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 12:30:22 +0200 (CEST)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45A8Dt8H003943;
 Mon, 10 Jun 2024 10:30:12 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2107.outbound.protection.outlook.com [104.47.18.107])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3ymekyvpu0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 10 Jun 2024 10:30:11 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by PA4PR03MB6703.eurprd03.prod.outlook.com (2603:10a6:102:ec::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.16; Mon, 10 Jun
 2024 10:30:09 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Mon, 10 Jun 2024
 10:30:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70de8ee4-2714-11ef-90a2-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=IBUOC75GemOoBqVJFUwuIHnlBQIUHlh47DK0341pVXE5RYd/rMpe3KahmzJZnZC0hIMHKCSJaThTtbGg/CqtGhxaSoWgd9R6qAWjZ8lgKQW73aBxERdthnUXB3FHpUSlNrpOXaM7TFpyJE0T4xBfYJdsWDuVum4Uxd+cM8Vqhc+Ut7xsfyBijHzjElNKnILdC65+xOkll6TH7rB+XpeQlcozFMrd3U12bGCxL7zNEiSb1eDH4BkVo51c+21x41MWjnoO/BoiQP2w8YOmbmdfk7SEkc42v5MaYvnUOZ9jOkyRDPB1WoETlFHGYsZ8L6wULAM/QL+M6FnWLs1ma9tttA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7g57bSPk8mp4js75g+IUlOwMXroVDSlfYL+EhouJvoo=;
 b=nDOQUVkRdkpyPUaXg1FFMpLEGa1Gf2HGYrbmNA2j2DSHo2heYCxlcKD/CjO//u22tG0LmsfaV6Ac7wmxRuSh8JLNy6PSO4XESzcNaYFvInU9xzQXrflJfhnCvUHUSRiNnciSxNItV4kgjCZmTznmJDyhyBH+/CWIHIkuYAH6SEo+I0w0QlbYRXOeBjfwXVm52SMDtwciMHOU3eHkDnHlW12KfIgKkG7ykWkRMCeE2DMUdPkDwy/ZExvIIFaGyJQpa5a1OvavHj4aXKJHST8FsB/R4kGSAGZJRiS2dJnprMuR9/b5MtmRXscvHMG6Qtf7rmiuVk50w7SPOcV3hEm3Tg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7g57bSPk8mp4js75g+IUlOwMXroVDSlfYL+EhouJvoo=;
 b=cIdfD2B458XWYlG0MA7hJ9APU/CKXiKZg02l2TVN0h65zEwQswps/rNAwdCKIc4tuybZeNnnX8ijHty2+8NEmLu6gaBThfAncY7HULqaKPxHI1BzsUoiK0HxY+zROyFz1Pi6WNraOURNNraQ5jZVjzKrwQvnmpycBOllD1NVlVd4mGxenLUSt6KOCX4UrddKoloPK6daXngX/OhFKv/96HL6uQw72GLhtTmoj/Vo4Wy0Son6LUuibymHqx1HUDINnr45fii7bsgCN5jri98Uu6pcuQI/cQkJwejtjtzriLVzNIHRUm66aXyPwdYBUDX72PWnZws5IgAWjaHtGghEvA==
Message-ID: <66c92f54-f124-48c1-a419-4789bbf64c28@epam.com>
Date: Mon, 10 Jun 2024 13:30:06 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 07/16] x86/hvm: guard AMD-V and Intel VT-x
 hvm_function_table initializers
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
        xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <25d4ade03f22ae4eb260af3eae5f48528f2e3ca8.1717410850.git.Sergiy_Kibrik@epam.com>
 <9255e072-f86a-4c82-90dc-9c41d11326fc@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <9255e072-f86a-4c82-90dc-9c41d11326fc@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA1P291CA0003.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:19::6) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|PA4PR03MB6703:EE_
X-MS-Office365-Filtering-Correlation-Id: 2658eb01-6ee9-4dc6-44eb-08dc89384d60
X-LD-Processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|1800799015|376005|366007;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR03MB9192.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(1800799015)(376005)(366007);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2658eb01-6ee9-4dc6-44eb-08dc89384d60
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2024 10:30:09.0255
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: A3Adk2FUAgA6mys0qOGLfa7DO5oQty+thXP3TcUjagMgwv5l0712nLEt9aclLlZMffyeeN8WgAT3/aDRkydRMw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR03MB6703
X-Proofpoint-ORIG-GUID: 6TUzi6UF7Ut2J30CGQJmb-F4pKx_wkbS
X-Proofpoint-GUID: 6TUzi6UF7Ut2J30CGQJmb-F4pKx_wkbS
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-10_02,2024-06-10_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 suspectscore=0 mlxscore=0
 spamscore=0 impostorscore=0 mlxlogscore=813 priorityscore=1501 bulkscore=0
 clxscore=1015 phishscore=0 malwarescore=0 lowpriorityscore=0 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.19.0-2405170001
 definitions=main-2406100079

07.06.24 11:01, Jan Beulich:
> On 03.06.2024 13:20, Sergiy Kibrik wrote:
>> From: Xenia Ragiadakou <burzalodowa@gmail.com>
>>
>> Since start_svm() is AMD-V specific while start_vmx() is Intel VT-x specific,
>> either of them can potentially be excluded from the build completely via the
>> CONFIG_SVM or CONFIG_VMX options respectively, hence we have to explicitly
>> check whether they're available, using the dedicated macros using_{svm,vmx}.
>>
>> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> Yet again I think this could sensibly be folded with the earlier
> two patches.
> 

sure, I'll squash together patches 5,6,7

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:34:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:34:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737146.1143306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcLh-0002DG-F5; Mon, 10 Jun 2024 10:34:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737146.1143306; Mon, 10 Jun 2024 10:34:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcLh-0002D9-Bw; Mon, 10 Jun 2024 10:34:01 +0000
Received: by outflank-mailman (input) for mailman id 737146;
 Mon, 10 Jun 2024 10:33:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GtSS=NM=epam.com=prvs=289119432d=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sGcLf-0002D1-R9
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:33:59 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0ffda67-2714-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 12:33:57 +0200 (CEST)
Received: from pps.filterd (m0174680.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45AAL1hA015120;
 Mon, 10 Jun 2024 10:33:52 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2168.outbound.protection.outlook.com [104.47.17.168])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3ynynjg1t3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 10 Jun 2024 10:33:52 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by AM9PR03MB7489.eurprd03.prod.outlook.com (2603:10a6:20b:272::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.17; Mon, 10 Jun
 2024 10:33:49 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Mon, 10 Jun 2024
 10:33:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0ffda67-2714-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EsFpwTm8cxnCnqtJNL0KbuAlWwAwdEw1uF9VgR3D1wmhzQat9EvGdt1zccDqLMr/HKsrBO2mD/g5YuyTPbCNZuFKUQgKfqxycTpzaWv7Vxo5MHejD0VdtNYmkSmMU6+i8HQ+1kCsT0mimTf0Gk0p+TTbQ0gwYNNOwQtRujn5rxpP0Voa+thepv72ni7t02jCJR08snZmfsDWarVpAa0Dak232J4wScwBY9hp8T/L29a+EU2DznAIxq6GBfBigq64KVTlWz6WNHuHiAHxPuiX1sxh6mDU1m1J8i7DTYHfmDr95CMtsOJflb4ZSwpI4pJTq3l1WR4GCCE9Xq/w05ng0w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=cd1moHxhc1xm770fqgTpoNlWwNdQuDUv7YAsKOXFusI=;
 b=G0Q4zyTM42oYWrw2l8Z6fsMpB8mGU6cw7A0Q3Fhc/YrNUmOq4D1ETpC868cC+2o7pt+DHcx+eQuub8vDJGS1P3MyyGBbCjJwFcQMQzbmCiFUsSz3VTD7iSwMt6GLm2TCyfKXjEdHZhGEwM7eHnMdDbp23l3OSPpCcD7arjgZM4aPoMJm5kdGUM7S6qywhq1Ks4KFYI7myHohXcp0D9MmRlkxZkATDtqf/z3koPJ5SWzHbuiOrxB/1p9nC/w3ZKOf6skeDqMH8pUE/rjmYkv39BQZmo5HWlCFKQ62pzRLc5AITLdkuTwC39B5+gztYiwCT4qFvA/Oif4ZPweeCt5L8A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=cd1moHxhc1xm770fqgTpoNlWwNdQuDUv7YAsKOXFusI=;
 b=oKi0f8sPNNJA4p+AB6jF5/JKo4sC0T91DB94f8T+mT60Fy6HVFpmgIlbQt9zA4o49OxZH/pJIsVEd4NLibMmx32+ENtbX/2TuTxO/EAfnOiR3tJWaEXopTFyqxIsii/NYN+xH/0QXZmiELpyjV6COGcAFUxQpYyG8+ns9SnAYF3aW93+ets9zETQzb4Pz90Ld4atP7q8TYcCDRm/SRJglpO344/xdtocmlPQN0p/5vgCfFelCcXS1t26cW9qTfq577wdQFM00KznMZhS9B3LLXYcAn6wLdjZvGauFGHb+AYywjZ3/8PLh6puoXFciw2ovcpjKDvm/nb002FWjtmBDg==
Message-ID: <cc33b870-4ada-43b3-aad5-d938a12a970b@epam.com>
Date: Mon, 10 Jun 2024 13:33:47 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 03/16] x86/p2m: guard altp2m routines
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
        Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org,
        Jan Beulich <jbeulich@suse.com>
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <acb98c1c52613555a59cd27aad853a24caef0e19.1717410850.git.Sergiy_Kibrik@epam.com>
 <ade6ba3c-5a53-463c-97fd-34f6ec7dacaf@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <ade6ba3c-5a53-463c-97fd-34f6ec7dacaf@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA1P291CA0021.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:19::21) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|AM9PR03MB7489:EE_
X-MS-Office365-Filtering-Correlation-Id: e45265c4-9e7c-4e24-9ae1-08dc8938d0e9
X-LD-Processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|1800799015|366007|376005;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AS8PR03MB9192.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(1800799015)(366007)(376005);DIR:OUT;SFP:1102;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e45265c4-9e7c-4e24-9ae1-08dc8938d0e9
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2024 10:33:49.7165
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dFu8ohnSSI2Tt+3bk7MXP6isqv3P1IT3n6/WsCnHhgmNFVK4jcFBsDUPyYyTnX8lBdgsaz7BgIEntrnaRk9yxQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7489
X-Proofpoint-ORIG-GUID: BRopdfu_bHSNrzpGm3nvHzt7pjvt0zn0
X-Proofpoint-GUID: BRopdfu_bHSNrzpGm3nvHzt7pjvt0zn0
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-10_02,2024-06-10_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 priorityscore=1501
 clxscore=1015 bulkscore=0 impostorscore=0 phishscore=0 adultscore=0
 suspectscore=0 spamscore=0 mlxlogscore=999 lowpriorityscore=0 mlxscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406100080

hi Stefano,

07.06.24 10:32, Jan Beulich:
> On 03.06.2024 13:11, Sergiy Kibrik wrote:
>> Initialize and bring down altp2m only when it is supported by the platform,
>> e.g. VMX. Also guard p2m_altp2m_propagate_change().
>> The purpose of this is to make it possible to disable altp2m support and exclude
>> its code from the build completely, when it's not supported by the target platform.
>>
>> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> I question though whether Stefano's R-b was valid to retain with ...
> 
>> ---
>> changes in v3:
>>   - put hvm_altp2m_supported() first
>>   - rewrite changes to p2m_init() with less code
> 
> ... this.

could you please take a look at this patch once more, after it's been 
through these modifications? Thank you!

    -Sergiy


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:34:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737150.1143316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcMH-0002g7-MW; Mon, 10 Jun 2024 10:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737150.1143316; Mon, 10 Jun 2024 10:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcMH-0002g0-JE; Mon, 10 Jun 2024 10:34:37 +0000
Received: by outflank-mailman (input) for mailman id 737150;
 Mon, 10 Jun 2024 10:34:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGcMG-0002fq-5e
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:34:36 +0000
Received: from mail-oi1-x232.google.com (mail-oi1-x232.google.com
 [2607:f8b0:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06ebcd3b-2715-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 12:34:34 +0200 (CEST)
Received: by mail-oi1-x232.google.com with SMTP id
 5614622812f47-3d226c5a157so673649b6e.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 03:34:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06ebcd3b-2715-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718015673; x=1718620473; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=g7ZgxAqjXY9AET+7F5hBQjyBy8YsMtzIxOARp//Ut1k=;
        b=MEFfyxCYmW0lBGEQ8GHXXm3liJNyqDh05Ep9SX9FzAFa2jT4rKJlsOXlJPoZn0pmWl
         bRH8KReR1tnXgMeC4AEfpZg4vfpVpRWYplp/ffCeCWqZD8UGwVY15CRlmIAEZZG2YAGn
         Cq8JWGCKEhkGohUxJbaSf3l9J5P3d9UuAfmnoJ0lwSAwFkGimCPiHnUtYFI/oF02v5kE
         +QEe0xBSJw1167oDmNfhc8ekxzbqVLstb5+AUl3fEb1T1hP3jeOBR9JiQyk8h6KjcK3I
         vPjI59n7dcTS5WTLJIMjyLNtBbKUrtdpf1z+tzYuHWFPsMpzQMa/pfCwqlDqOHn73vMI
         +jfA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718015673; x=1718620473;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=g7ZgxAqjXY9AET+7F5hBQjyBy8YsMtzIxOARp//Ut1k=;
        b=MlfD71wtTRffmLG9ZL+2rEMzRp5fTNsZTdmEo231fONCgAIGhwVe4WRdJnOHp3pYzm
         P5kx3OEfF920PpgA9Ed6GUsBMgc7LftXupf/eGWUdPibvYfrZfk+WUmsQsts0dEHs8fi
         9399/3jbnSywQt3+kyzU9Yo9Xm6PmCE9v27bOpzeRfibMWjibjkQ6+pmhqU3EnZS1WkM
         UYky02zVOrF52g7pooRpKFhoqIocVmZy6mkEwBnoQQhKdtp/3+UPOjLvSd820lR1cQmy
         5w4SEtcPlgTidJUrIUbt6R4OwPgEq/MUOZ6mJ8Qode7jotqA8qphbjxXrck/LHmE8XxR
         CZHA==
X-Forwarded-Encrypted: i=1; AJvYcCXHwGLajnuaSjkTM49q4lHtu6t6g9uz20Blittd3K1KXAuVbfTYpXuyJ7BAol5Abmix88qLq7OePUbqXIxYsUNvtABl9mabDkAa5u3Z4vE=
X-Gm-Message-State: AOJu0YwFP6TZBxHVDODGPNvFyo/8aIv4tpaUJhL5bCOFlwNBtn4F86al
	LXWFCF25S+7zozrGw/DhdvIot6dmU8rAN0fHWse5QdqAwYzV39wP7BiKJKFibzDNEYAuqw7bVo8
	emfAybwtzBoG4KjL+Y2076XOwj8o=
X-Google-Smtp-Source: AGHT+IE+UlLNyd1/61vLAoD8Eisrzv+MvwA3XfkK6aJ5nPuZsOV8aV16RayME3zDO8Fy5BiCHA6eVKJ4KPQ3M3Q3nEg=
X-Received: by 2002:a05:6870:d24c:b0:24f:c0bc:5ac6 with SMTP id
 586e51a60fabf-254644514e6mr10170426fac.11.1718015672792; Mon, 10 Jun 2024
 03:34:32 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1717356829.git.w1benny@gmail.com> <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com> <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
 <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com> <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
 <8961cf72-4eeb-4c47-9723-35da3e47d4d2@suse.com>
In-Reply-To: <8961cf72-4eeb-4c47-9723-35da3e47d4d2@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Mon, 10 Jun 2024 12:34:22 +0200
Message-ID: <CAKBKdXiQhFeihx9HeuOv5cFe8K7H2O+GFUXy4ThF1X6ZGjCrig@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 12:16=E2=80=AFPM Jan Beulich <jbeulich@suse.com> wr=
ote:
>
> On 10.06.2024 11:10, Petr Bene=C5=A1 wrote:
> > On Mon, Jun 10, 2024 at 9:30=E2=80=AFAM Jan Beulich <jbeulich@suse.com>=
 wrote:
> >>
> >> On 09.06.2024 01:06, Petr Bene=C5=A1 wrote:
> >>> On Thu, Jun 6, 2024 at 9:24=E2=80=AFAM Jan Beulich <jbeulich@suse.com=
> wrote:
> >>>>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
> >>>>>      struct p2m_domain *hostp2m =3D p2m_get_hostp2m(d);
> >>>>>
> >>>>>      mm_lock_init(&d->arch.altp2m_list_lock);
> >>>>> -    for ( i =3D 0; i < MAX_ALTP2M; i++ )
> >>>>> +    d->arch.altp2m_p2m =3D xzalloc_array(struct p2m_domain *, d->n=
r_altp2m);
> >>>>> +
> >>>>> +    if ( !d->arch.altp2m_p2m )
> >>>>> +        return -ENOMEM;
> >>>>
> >>>> This isn't really needed, is it? Both ...
> >>>>
> >>>>> +    for ( i =3D 0; i < d->nr_altp2m; i++ )
> >>>>
> >>>> ... this and ...
> >>>>
> >>>>>      {
> >>>>>          d->arch.altp2m_p2m[i] =3D p2m =3D p2m_init_one(d);
> >>>>>          if ( p2m =3D=3D NULL )
> >>>>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>>>      unsigned int i;
> >>>>>      struct p2m_domain *p2m;
> >>>>>
> >>>>> -    for ( i =3D 0; i < MAX_ALTP2M; i++ )
> >>>>> +    if ( !d->arch.altp2m_p2m )
> >>>>> +        return;
> >>
> >> I'm sorry, the question was meant to be on this if() instead.
> >>
> >>>>> +    for ( i =3D 0; i < d->nr_altp2m; i++ )
> >>>>>      {
> >>>>>          if ( !d->arch.altp2m_p2m[i] )
> >>>>>              continue;
> >>>>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>>>          d->arch.altp2m_p2m[i] = NULL;
> >>>>>          p2m_free_one(p2m);
> >>>>>      }
> >>>>> +
> >>>>> +    XFREE(d->arch.altp2m_p2m);
> >>>>>  }
> >>>>
> >>>> ... this ought to be fine without?
> >>>
> >>> Could you, please, elaborate? I honestly don't know what you mean here
> >>> (by "this isn't needed").
> >>
> >> I hope the above correction is enough?
> >
> > I'm sorry, but not really? I feel like I'm blind but I can't see
> > anything I could remove without causing (or risking) crash.
>
> The loop is going to do nothing when d->nr_altp2m == 0, and the XFREE() is
> going to do nothing when d->arch.altp2m_p2m == NULL. Hence what does the
> if() guard against? IOW what possible crashes are you seeing that I don't
> see?

I see now. I was seeing ghosts - my thinking was that if
p2m_init_altp2m fails to allocate altp2m_p2m, it would call
p2m_teardown_altp2m, which, without the if(), would start iterating
through an array that was never allocated. That doesn't happen; it
just returns -ENOMEM.

So to reiterate:

    if ( !d->arch.altp2m_p2m )
        return;

... are we talking that this condition inside p2m_teardown_altp2m
isn't needed? Or is there anything else?
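For what it's worth, the two no-op properties Jan points at can be checked with a tiny standalone sketch. Note these are simplified stand-ins, not the real Xen code: `struct fake_domain`, `fake_teardown()` and the plain `free()`-based `XFREE()` below are illustrative only.

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for Xen's XFREE(): free and NULL the pointer.
 * free(NULL) is defined to be a no-op, as is Xen's xfree(NULL). */
#define XFREE(p) do { free(p); (p) = NULL; } while ( 0 )

struct fake_domain {
    unsigned int nr_altp2m;
    void **altp2m_p2m;          /* NULL if allocation never happened */
};

/* Teardown written without the "if ( !d->altp2m_p2m ) return;" guard.
 * The loop body cannot run when nr_altp2m == 0, and XFREE(NULL) is a
 * no-op, so the guard would only matter if nr_altp2m were non-zero
 * while altp2m_p2m was still NULL - which, per the discussion above,
 * the -ENOMEM error path does not produce. */
static void fake_teardown(struct fake_domain *d)
{
    unsigned int i;

    for ( i = 0; i < d->nr_altp2m; i++ )
    {
        if ( !d->altp2m_p2m[i] )
            continue;
        free(d->altp2m_p2m[i]);
        d->altp2m_p2m[i] = NULL;
    }

    XFREE(d->altp2m_p2m);
}
```

Calling `fake_teardown()` on a domain with `nr_altp2m == 0` and a NULL array neither dereferences anything nor double-frees, which is why the extra if() can go.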

P.


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:40:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:40:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737156.1143326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcRY-0004n3-A2; Mon, 10 Jun 2024 10:40:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737156.1143326; Mon, 10 Jun 2024 10:40:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcRY-0004mQ-5d; Mon, 10 Jun 2024 10:40:04 +0000
Received: by outflank-mailman (input) for mailman id 737156;
 Mon, 10 Jun 2024 10:40:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGcRX-0004Yd-14; Mon, 10 Jun 2024 10:40:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGcRW-0003jr-Ul; Mon, 10 Jun 2024 10:40:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGcRW-0002dK-J4; Mon, 10 Jun 2024 10:40:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGcRW-00017i-IY; Mon, 10 Jun 2024 10:40:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SF5BwBYGD9mE2JvRk3oKQarjDGQkxDyCSLqBA3ZcT8I=; b=bJ4UlakIoF0zJBgfmLEtMEZlkZ
	fDjtXlegDFM7em5L2DL1O0qq8VPjoTxXgGzlbLz4PenQ2hBED04G1pXp0QCgXoMbjQndkRdQwmNPv
	0KcQA/VmLOKw+wYdYfOxoMaNc1DxUfmq0Lu9/hDhov8KAcMY2hj8fMv/cdDg1r8ram1U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186302-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186302: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3dcc7b73df2b1c38c3c1a31724d577f4085f3ab1
X-Osstest-Versions-That:
    ovmf=ab069d580111ccc64d6b0c9697b7c5fd6e1507ce
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 10:40:02 +0000

flight 186302 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186302/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3dcc7b73df2b1c38c3c1a31724d577f4085f3ab1
baseline version:
 ovmf                 ab069d580111ccc64d6b0c9697b7c5fd6e1507ce

Last test of basis   186283  2024-06-07 18:13:28 Z    2 days
Testing same since   186302  2024-06-10 09:11:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ab069d5801..3dcc7b73df  3dcc7b73df2b1c38c3c1a31724d577f4085f3ab1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:46:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:46:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737165.1143335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcXP-0005gP-Lf; Mon, 10 Jun 2024 10:46:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737165.1143335; Mon, 10 Jun 2024 10:46:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcXP-0005gI-JA; Mon, 10 Jun 2024 10:46:07 +0000
Received: by outflank-mailman (input) for mailman id 737165;
 Mon, 10 Jun 2024 10:46:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7tSL=NM=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sGcXO-0005gC-Cs
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:46:06 +0000
Received: from fout8-smtp.messagingengine.com (fout8-smtp.messagingengine.com
 [103.168.172.151]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a1600d76-2716-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 12:46:02 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailfout.nyi.internal (Postfix) with ESMTP id 9D5BE13800FD;
 Mon, 10 Jun 2024 06:46:01 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute2.internal (MEProxy); Mon, 10 Jun 2024 06:46:01 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 10 Jun 2024 06:46:00 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1600d76-2716-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718016361;
	 x=1718102761; bh=TqsrYyZlyVjXe4uGvyckEHwS22hu+++kF6oFj23YcBI=; b=
	Bj375HFlrVgHISPDglgYgJtvTQOsarPJDaRlFGQpC77tmq94m3L04CBBP5mIBciE
	s9w4cav1PaYRufUV4s7o4CzXGWam1kPElGOinnOQm023H1gs74LXAXNmy3duCkRg
	INTN9iBjqK9GgIGYpih7gM+MH1KHKkgXovYZqVbalEtjF9YqvMcv42vVYSpAj9+y
	xyP05U9/JvFUJySPBfomGujz7xjdB6bSvTMzqbrgcehKMUpXLN+8ETnSLGKl4pvS
	+qeDPE9rQkqsRuCsIbtPEwB8y5482aO7+wfEuNnAay5nqkkwISrQTctIkeOvJJYp
	D1Nf3m1y2l1ADDxiN9R/qA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718016361; x=1718102761; bh=TqsrYyZlyVjXe4uGvyckEHwS22hu
	+++kF6oFj23YcBI=; b=ZE6l44pyVjcTJYM8X+F/2Zxf4BygF8QMaqEUtJQaEEWj
	4o0aUaPA/d1NHS2miZ73KyBr6dSDX6a3bI+sp904QV54408//Xu4c5j6wsDQIvCd
	FdEN35LB8jjS37C+v8+spaubCH9BdmkWZWn6xb3P4rWtDZ5Iu3iUnfm1z4sXWamL
	XCMM4T28SR9PfT1YebQHhqJtaUTTKTYd7Ow/xBEupecOT58+Evpsp26IB4SiQa9z
	XevLXFs+YwIq9u7A4HDm5Bx+1yfs2yZ8GQnp/7zQKWlyW86iPoIMoW9Ehtr6AT5P
	8iFg0zrhtqM7OP3Lj5xf5xI5UWhTr4LSAWcbVkfuWw==
Feedback-ID: i1568416f:Fastmail
Date: Mon, 10 Jun 2024 12:45:56 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, javi.merino@cloud.com
Subject: Re: Segment truncation in multi-segment PCI handling?
Message-ID: <ZmbZZpMBjN3Kw9w1@mail-itl>
References: <ZmNjoeFAwWz8xhfM@mail-itl>
 <9cbb6dce-b669-4237-8932-b5cd64eb7288@citrix.com>
 <b609eaab-0a0a-433b-81d3-84a0cd90ebc1@suse.com>
 <Zma5Rj_cswrIYcD2@macbook>
 <a8225a94-54ed-4b24-8867-b9da65cb0a14@suse.com>
 <ZmbLZHSOg8KuRvAw@macbook>
 <9092e4d2-1feb-4667-86df-644a92468f58@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="RJRoFmr8Hqj/2V+I"
Content-Disposition: inline
In-Reply-To: <9092e4d2-1feb-4667-86df-644a92468f58@suse.com>


--RJRoFmr8Hqj/2V+I
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 10 Jun 2024 12:45:56 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel <xen-devel@lists.xenproject.org>, javi.merino@cloud.com
Subject: Re: Segment truncation in multi-segment PCI handling?

On Mon, Jun 10, 2024 at 12:11:58PM +0200, Jan Beulich wrote:
> On 10.06.2024 11:46, Roger Pau Monné wrote:
> > On Mon, Jun 10, 2024 at 10:41:19AM +0200, Jan Beulich wrote:
> >> On 10.06.2024 10:28, Roger Pau Monné wrote:
> >>> On Mon, Jun 10, 2024 at 09:58:11AM +0200, Jan Beulich wrote:
> >>>> On 07.06.2024 21:52, Andrew Cooper wrote:
> >>>>> On 07/06/2024 8:46 pm, Marek Marczykowski-Górecki wrote:
> >>>>>> Hi,
> >>>>>>
> >>>>>> I've got a new system, and it has two PCI segments:
> >>>>>>
> >>>>>>     0000:00:00.0 Host bridge: Intel Corporation Device 7d14 (rev 04)
> >>>>>>     0000:00:02.0 VGA compatible controller: Intel Corporation Meteor Lake-P [Intel Graphics] (rev 08)
> >>>>>>     ...
> >>>>>>     10000:e0:06.0 System peripheral: Intel Corporation RST VMD Managed Controller
> >>>>>>     10000:e0:06.2 PCI bridge: Intel Corporation Device 7ecb (rev 10)
> >>>>>>     10000:e1:00.0 Non-Volatile memory controller: Phison Electronics Corporation PS5021-E21 PCIe4 NVMe Controller (DRAM-less) (rev 01)
> >>>>>>
> >>>>>> But looks like Xen doesn't handle it correctly:
> >>>
> >>> In the meantime you can probably disable VMD from the firmware and the
> >>> NVMe devices should appear on the regular PCI bus.
> >>>
> >>>>>>     (XEN) 0000:e0:06.0: unknown type 0
> >>>>>>     (XEN) 0000:e0:06.2: unknown type 0
> >>>>>>     (XEN) 0000:e1:00.0: unknown type 0
> >>>>>>     ...
> >>>>>>     (XEN) ==== PCI devices ====
> >>>>>>     (XEN) ==== segment 0000 ====
> >>>>>>     (XEN) 0000:e1:00.0 - NULL - node -1
> >>>>>>     (XEN) 0000:e0:06.2 - NULL - node -1
> >>>>>>     (XEN) 0000:e0:06.0 - NULL - node -1
> >>>>>>     (XEN) 0000:2b:00.0 - d0 - node -1  - MSIs < 161 >
> >>>>>>     (XEN) 0000:00:1f.6 - d0 - node -1  - MSIs < 148 >
> >>>>>>     ...
> >>>>>>
> >>>>>> This isn't exactly surprising, since pci_sbdf_t.seg is uint16_t, so
> >>>>>> 0x10000 doesn't fit. OSDev wiki says PCI Express can have 65536 PCI
> >>>>>> Segment Groups, each with 256 bus segments.
> >>>>>>
> >>>>>> Fortunately, I don't need this to work, if I disable VMD in the
> >>>>>> firmware, I get a single segment and everything works fine.
> >>>>>>
> >>>>>
> >>>>> This is a known issue.  Work is being done, albeit slowly.
> >>>>
> >>>> Is work being done? After the design session in Prague I put it on my
> >>>> todo list, but at low priority. I'd be happy to take it off there if I
> >>>> knew someone else is looking into this.
> >>>
> >>> We had a design session about VMD?  If so I'm afraid I've missed it.
> >>
> >> In Prague last year, not just now in Lisbon.
> >>
> >>>>> 0x10000 is indeed not a spec-compliant PCI segment.  It's something
> >>>>> model specific the Linux VMD driver is doing.
> >>>>
> >>>> I wouldn't call this "model specific" - this numbering is purely a
> >>>> software one (and would need coordinating between Dom0 and Xen).
> >>>
> >>> Hm, TBH I'm not sure whether Xen needs to be aware of VMD devices.
> >>> The resources used by the VMD devices are all assigned to the VMD
> >>> root.  My current hypothesis is that it might be possible to manage
> >>> such devices without Xen being aware of their existence.
> >>
> >> Well, it may be possible to have things work in Dom0 without Xen
> >> knowing much. Then Dom0 would need to suppress any physdevop calls
> >> with such software-only segment numbers (in order to at least not
> >> confuse Xen). I'd be curious though how e.g. MSI setup would work in
> >> such a scenario.
> >
> > IIRC from my read of the spec,
>
> So you have found a spec somewhere? I haven't so far, and I had even asked
> Intel ...
>
> > VMD devices don't use regular MSI
> > data/address fields, and instead configure an index into the MSI table
> > on the VMD root for the interrupt they want to use.  It's only the VMD
> > root device (which is a normal device on the PCI bus) that has
> > MSI(-X) configured with real vectors, and multiplexes interrupts for
> > all devices behind it.
> >
> > If we had to passthrough VMD devices we might have to intercept writes
> > to the VMD MSI(-X) entries, but since they can only be safely assigned
> > to dom0 I think it's not an issue ATM (see below).
> >
> >> Plus clearly any passing through of a device behind
> >> the VMD bridge will quite likely need Xen involvement (unless of
> >> course the only way of doing such pass-through was to pass on the
> >> entire hierarchy).
> >
> > All VMD devices share the Requestor ID of the VMD root, so AFAIK it's
> > not possible to passthrough them (unless you passthrough the whole VMD
> > root) because they all share the same context entry on the IOMMU.
>
> While that was my vague understanding too, it seemed too limiting to me
> to be true.

In my case, it was a single NVMe disk behind this VMD thing, so passing
through the whole VMD device wouldn't be too bad. I have no idea (nor
really any interest in...) how it behaves with more disks.
From the above discussion I understand the 0x10000 segment is really a
software construct, not anything that hardware expects, so IMO dom0
shouldn't tell Xen anything about it.

Since I have the hardware, I can do some more tests if somebody is
interested in results. But for now I have disabled VMD in firmware and
everything is fine.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--RJRoFmr8Hqj/2V+I
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZm2WYACgkQ24/THMrX
1yz3Igf/V6gNcUAQ5BJcLFjjP/nfEZ1XoAGwRMT4pfjcdJFF+0SOPYMuRaH2N5cC
VEE/vLWm33etkTSmvx3N/GvgiZRESLbsD/EHtRSu16gEWFh2zJAL+w8jtmn3f80o
kuJCG+ZbKX/Y/7jFUF6tWTpZJATfq3UNzud8l25nNyAXmn1mvYcGhSPxfItp6pS7
fKa/V/3IbPufqEmlCm7uFHMU4Cb+7r/fGPTY0JtAn+14JVUK2e+xdVm5dhQDqu9b
V7NiS/MNWVPbxw0HKWX8f2yCRrqMnKBmSmHNr6al69QtDfUTff6apLYNAqdtdXAa
apowTdeghwPytxz5GNrium3q9WrVRQ==
=Qr1T
-----END PGP SIGNATURE-----

--RJRoFmr8Hqj/2V+I--


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 10:49:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 10:49:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737172.1143346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcaE-0006b6-6j; Mon, 10 Jun 2024 10:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737172.1143346; Mon, 10 Jun 2024 10:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGcaE-0006az-3i; Mon, 10 Jun 2024 10:49:02 +0000
Received: by outflank-mailman (input) for mailman id 737172;
 Mon, 10 Jun 2024 10:49:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GtSS=NM=epam.com=prvs=289119432d=sergiy_kibrik@srs-se1.protection.inumbo.net>)
 id 1sGcaC-0006ZZ-QU
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 10:49:00 +0000
Received: from mx0b-0039f301.pphosted.com (mx0b-0039f301.pphosted.com
 [148.163.137.242]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a2575dc-2717-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 12:48:58 +0200 (CEST)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45A97JP8028931;
 Mon, 10 Jun 2024 10:48:54 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2168.outbound.protection.outlook.com [104.47.17.168])
 by mx0b-0039f301.pphosted.com (PPS) with ESMTPS id 3ynxk10eec-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Mon, 10 Jun 2024 10:48:54 +0000 (GMT)
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com (2603:10a6:20b:5c0::11)
 by VI1PR03MB9965.eurprd03.prod.outlook.com (2603:10a6:800:1cb::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.16; Mon, 10 Jun
 2024 10:48:51 +0000
Received: from AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d]) by AS8PR03MB9192.eurprd03.prod.outlook.com
 ([fe80::baa9:29b3:908:ed7d%6]) with mapi id 15.20.7656.012; Mon, 10 Jun 2024
 10:48:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a2575dc-2717-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=lCPIXS/11gaFq4AMvFuX69wkD6YU9aYTnkOMzf5Uvd2eNED0C9U7VWBiAsW9xJcrLwHj0MCvIlbnNh9kTVlEtPChinuk0U0uTldZ4MG1TxWMWhvg+wArA0cnNGQO7wtCyusgCysbdtGAcAGLuo2LPJT2d09I12lJh52SwL82dquXkS2qjxV6vd3TX5Mo5foEbvH0bgFwf22e+7b9eJo4QKwOjkvVKOr9guokuZYwkTmnxxCuxmVN67j+zzHR4VBz7+r1lqGXBjupgJtq/QhEW1HhiZ1qYAMp5uhHO8idUG2CDmUYMvCPbPUXETAWrTuerDOCBnrOp5uOV0x3hX+xUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=o7ch+KUupsGWBalC7hP5erJUYI2POY1MW1Bg21YSMJU=;
 b=nKJ8lphR5YuRB0utbeQTrmklpdp8tBw1vMPKKz5rJmRs/fx1gAGPwqZ9b+8BAr+7Q1VucEHkLhYyF1A0CCjdbjUedLx7EKpo1d8xCoh4L0nqr8b0OSF/soBiGvv8Dn8z2Z/zaSFqkQ/ZOYG4VvkQkvRWzKNCPXSL3GfmHZSmscKVhdJfD8ThAW9NaOUVGtXZMeitCNeAPqAx2oAIzox33cCde97h/h7tgfhiOrgWWp1Hkw40pK1fH2NCEfJV/G89yITW3mkrzcOg2li6PpzW4vn221dxXbjuresjp72HEiF390uHGWzAftP4qblSWByYdx03R+xioJ5ABI+Dt1oULQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o7ch+KUupsGWBalC7hP5erJUYI2POY1MW1Bg21YSMJU=;
 b=dDsKgvdeZzN2ZDb6zNkfQUmxEDBMX5PtHmSQhqJTRBE+kxWko0uabUQ0tPDfTy693SrkKPP0vJJzf+DKMEPYgH5ki3hm21mpF08P/3/UAuIElzi4VL9+HnjemHSKb2qNQZsr1uiIxxLeedNH1oUlbzWLxIR1p9LfdN+r0PwgNcqhAAeqHEboSNvWusMxbCeul3YS9cevxezm2W3xytZtwmdb+3qo19OOjPm0p2SwISEGJMnWSsnjbrbVogvXUJm4pYqDHiWUHQyw4b+4Cy+mSJQ8C/1Wmd9BfEBje4gmgYwn90A8DlLjFyZ/yfkyVj5MazR9kiWJxuAeWcMctceSsg==
Message-ID: <6b55015a-4cea-4064-a50e-38ab2b2e665c@epam.com>
Date: Mon, 10 Jun 2024 13:48:49 +0300
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 04/16] x86: introduce CONFIG_ALTP2M Kconfig option
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
        =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
        Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <035f63f2b92b963f2585064fa21e09e73157f9d3.1717410850.git.Sergiy_Kibrik@epam.com>
 <856e3517-4aef-4e18-85b1-174ebf5c358f@suse.com>
Content-Language: en-US
From: Sergiy Kibrik <sergiy_kibrik@epam.com>
In-Reply-To: <856e3517-4aef-4e18-85b1-174ebf5c358f@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ClientProxiedBy: WA1P291CA0022.POLP291.PROD.OUTLOOK.COM
 (2603:10a6:1d0:19::29) To AS8PR03MB9192.eurprd03.prod.outlook.com
 (2603:10a6:20b:5c0::11)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: AS8PR03MB9192:EE_|VI1PR03MB9965:EE_
X-MS-Office365-Filtering-Correlation-Id: df8297be-b35b-458b-344e-08dc893aea4e
X-LD-Processed: b41b72d0-4e9f-4c26-8a69-f949f367c91d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230031|366007|1800799015|376005;
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: df8297be-b35b-458b-344e-08dc893aea4e
X-MS-Exchange-CrossTenant-AuthSource: AS8PR03MB9192.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2024 10:48:51.2854
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 07U6A/9Lnyidh+oLZGrCqshTlLNry4o2FBqD+4819rgyusoe9Pc/c+EsC6e3JtMjXH37w2Sq4kZvawdEbYzMFA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB9965
X-Proofpoint-GUID: 2P1f-gF-VyI8AcHHs2XUQISRxL4P_Adt
X-Proofpoint-ORIG-GUID: 2P1f-gF-VyI8AcHHs2XUQISRxL4P_Adt
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-10_02,2024-06-10_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=973
 suspectscore=0 phishscore=0 clxscore=1015 spamscore=0 adultscore=0
 malwarescore=0 mlxscore=0 lowpriorityscore=0 impostorscore=0
 priorityscore=1501 bulkscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.19.0-2405170001 definitions=main-2406100082

07.06.24 10:47, Jan Beulich:
> On 03.06.2024 13:13, Sergiy Kibrik wrote:
>> --- a/xen/arch/x86/include/asm/p2m.h
>> +++ b/xen/arch/x86/include/asm/p2m.h
>> @@ -577,10 +577,10 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>>           return _gfn(mfn_x(mfn));
>>   }
>>   
>> -#ifdef CONFIG_HVM
>>   #define AP2MGET_prepopulate true
>>   #define AP2MGET_query false
>>   
>> +#ifdef CONFIG_ALTP2M
>>   /*
>>    * Looks up altp2m entry. If the entry is not found it looks up the entry in
>>    * hostp2m.
> 
> In principle this #ifdef shouldn't need moving. It's just that the
> three use sites need taking care of a little differently. E.g. ...
> 
>> @@ -589,6 +589,15 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>>   int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
>>                                  p2m_type_t *t, p2m_access_t *a,
>>                                  bool prepopulate);
>> +#else
>> +static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
>> +                                             gfn_t gfn, mfn_t *mfn,
>> +                                             p2m_type_t *t, p2m_access_t *a,
>> +                                             bool prepopulate)
>> +{
>> +    ASSERT_UNREACHABLE();
>> +    return -EOPNOTSUPP;
>> +}
> 
> static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
>                                               gfn_t gfn, mfn_t *mfn,
>                                               p2m_type_t *t, p2m_access_t *a)
> {
>      ASSERT_UNREACHABLE();
>      return -EOPNOTSUPP;
> }
> #define altp2m_get_effective_entry(ap2m, gfn, mfn, t, a, prepopulate) \
>          altp2m_get_effective_entry(ap2m, gfn, mfn, t, a)
> 
> Misra doesn't like such shadowing, so the inline function may want
> naming slightly differently, e.g. _ap2m_get_effective_entry().


I can do that, sure.
Though here I'm curious what benefit we're getting from the little
complication of an indirect call to an empty stub -- is avoiding the
AP2MGET_* defines worth it?

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:21:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:21:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737180.1143356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGd5y-0005Wz-LF; Mon, 10 Jun 2024 11:21:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737180.1143356; Mon, 10 Jun 2024 11:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGd5y-0005Ws-IX; Mon, 10 Jun 2024 11:21:50 +0000
Received: by outflank-mailman (input) for mailman id 737180;
 Mon, 10 Jun 2024 11:21:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGd5y-0005Wm-86
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:21:50 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a02a9f63-271b-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 13:21:47 +0200 (CEST)
Received: by mail-ed1-x530.google.com with SMTP id
 4fb4d7f45d1cf-57c681dd692so2613298a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:21:47 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f117c09c9sm254984866b.138.2024.06.10.04.21.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:21:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a02a9f63-271b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718018507; x=1718623307; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=XGldr9vcI6oDKmX5z8MXspCZpPQK+hCqKg1tnDCMmg8=;
        b=c4lstLTH++ftMsmmT3EJqEGGZ1UfmHXwKmPtAJPGHglpvtn913t/K9XqK7ZOjgDE0B
         abKqdAoLXqSLgIJroyp9QndIWzhuKwEBMbEl9ftd3axebl+CbVGbf2WNgUCswMeu2xow
         Z/E1PxFC/mDAQl15tSgA6Gmvxjydt8I0Cc6NHgqpEGyK97tP3/Jskawbqt1LLoCW2wGA
         Sz87TDFlnAqoGZ4x7MR7b9yVd4KrQdBqKp1QyWZs/KiOhBwKUklwHkkNCU5S5sNyO9gV
         FGpFa7M8ltkRUS7dmXV/zMzNnvGbM0zDmiU7d8McV4w2xLWAr/+ri+WldPISaXEGWgn3
         WrDQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718018507; x=1718623307;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XGldr9vcI6oDKmX5z8MXspCZpPQK+hCqKg1tnDCMmg8=;
        b=YswWTpRH0JpMIg6+rBEC9q/M6XLtaV5fCcLDSCcnO7D8OGQr+dTX3mHCb/8w8kMpSy
         H9w25OHA+o+SJFzCvIXIRnvbc2FmiwYaoJPs9iQ5tVNOGZkYSfjltG5r4JG1Z1ralv13
         cRT1D1vFIqpYRAUwWzC1zxn4OdV26Hh4KHJm1XtLT2UFiQBLtrco5T5z8oZYdKuKh+fc
         OzIJDD6G2EPm4BcbmagiWQEdGtElg4AWWDCmj5XE0w6D0dPdO7E0hxqP1BeUqUtrE+Qc
         bQ4AS6jdjaNZTx+BpG3iir7cKN4vzpXArAyvVRB8qRvJc1xqE5RIoKSh/3xAlBSM5Lf1
         yrZw==
X-Forwarded-Encrypted: i=1; AJvYcCXLnUomdP67GYbeKtWLVBeNIL+o/kufljTSZ1P19Jxvfzf07nVh/bfeT1mO7es4rG1iVTD2bafTfJICUL+NXRl0D1tM8yacJ353e63yOyQ=
X-Gm-Message-State: AOJu0YygVIRy59AT5syoxwjtdhp6Up6+K6yLm7kzFktmFHRy/hF5drNY
	IIEILx9NYx1OqEcjeyBy3Sk0mdWWgojSGLcVZup4CiWuBuafZsZwkW2+pt5KsA==
X-Google-Smtp-Source: AGHT+IG0WA96tnh1GEBo5R2OR82Iu0ENmjmOHoi+59W01QN+m1PO8GWfl6wK381/FjHJKz9mEF1bzg==
X-Received: by 2002:a17:906:fe07:b0:a6d:b66f:7b21 with SMTP id a640c23a62f3a-a6db66f7e7emr721229066b.54.1718018507004;
        Mon, 10 Jun 2024 04:21:47 -0700 (PDT)
Message-ID: <093a45d0-da0b-44d1-902e-730eede80112@suse.com>
Date: Mon, 10 Jun 2024 13:21:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com>
 <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
 <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com>
 <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
 <8961cf72-4eeb-4c47-9723-35da3e47d4d2@suse.com>
 <CAKBKdXiQhFeihx9HeuOv5cFe8K7H2O+GFUXy4ThF1X6ZGjCrig@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CAKBKdXiQhFeihx9HeuOv5cFe8K7H2O+GFUXy4ThF1X6ZGjCrig@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 12:34, Petr Beneš wrote:
> On Mon, Jun 10, 2024 at 12:16 PM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 10.06.2024 11:10, Petr Beneš wrote:
>>> On Mon, Jun 10, 2024 at 9:30 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 09.06.2024 01:06, Petr Beneš wrote:
>>>>> On Thu, Jun 6, 2024 at 9:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
>>>>>>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>>>>>>>
>>>>>>>      mm_lock_init(&d->arch.altp2m_list_lock);
>>>>>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>>>>>> +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
>>>>>>> +
>>>>>>> +    if ( !d->arch.altp2m_p2m )
>>>>>>> +        return -ENOMEM;
>>>>>>
>>>>>> This isn't really needed, is it? Both ...
>>>>>>
>>>>>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>>>>>
>>>>>> ... this and ...
>>>>>>
>>>>>>>      {
>>>>>>>          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
>>>>>>>          if ( p2m == NULL )
>>>>>>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
>>>>>>>      unsigned int i;
>>>>>>>      struct p2m_domain *p2m;
>>>>>>>
>>>>>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>>>>>> +    if ( !d->arch.altp2m_p2m )
>>>>>>> +        return;
>>>>
>>>> I'm sorry, the question was meant to be on this if() instead.
>>>>
>>>>>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>>>>>>      {
>>>>>>>          if ( !d->arch.altp2m_p2m[i] )
>>>>>>>              continue;
>>>>>>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
>>>>>>>          d->arch.altp2m_p2m[i] = NULL;
>>>>>>>          p2m_free_one(p2m);
>>>>>>>      }
>>>>>>> +
>>>>>>> +    XFREE(d->arch.altp2m_p2m);
>>>>>>>  }
>>>>>>
>>>>>> ... this ought to be fine without?
>>>>>
>>>>> Could you, please, elaborate? I honestly don't know what you mean here
>>>>> (by "this isn't needed").
>>>>
>>>> I hope the above correction is enough?
>>>
>>> I'm sorry, but not really? I feel like I'm blind but I can't see
>>> anything I could remove without causing (or risking) crash.
>>
>> The loop is going to do nothing when d->nr_altp2m == 0, and the XFREE() is
>> going to do nothing when d->arch.altp2m_p2m == NULL. Hence what does the
>> if() guard against? IOW what possible crashes are you seeing that I don't
>> see?
> 
> I see now. I was seeing ghosts - my thinking was that if
> p2m_init_altp2m fails to allocate altp2m_p2m, it will call
> p2m_teardown_altp2m - which, without the if(), would start to iterate
> through an array that is not allocated. It doesn't happen, it just
> returns -ENOMEM.
> 
> So to reiterate:
> 
>     if ( !d->arch.altp2m_p2m )
>         return;
> 
> ... are we talking that this condition inside p2m_teardown_altp2m
> isn't needed?

I'm not sure about "isn't" vs "shouldn't". The call from p2m_final_teardown()
also needs to remain safe to make. Which may require that upon allocation
failure you zap d->nr_altp2m. Or which alternatively may mean that the if()
actually needs to stay.

> Or is there anything else?

There was also the question of whether to guard the allocation, to avoid a
degenerate xmalloc_array() of zero size. Yet in the interest of avoiding
conditionals which aren't strictly necessary, that may well want to remain
as is.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:27:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:27:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737186.1143366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdAw-00067N-7N; Mon, 10 Jun 2024 11:26:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737186.1143366; Mon, 10 Jun 2024 11:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdAw-00067G-4Y; Mon, 10 Jun 2024 11:26:58 +0000
Received: by outflank-mailman (input) for mailman id 737186;
 Mon, 10 Jun 2024 11:26:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGdAv-00067A-94
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:26:57 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 57905e70-271c-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 13:26:55 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-57864327f6eso6269482a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:26:55 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c8615c188sm1006411a12.10.2024.06.10.04.26.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:26:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57905e70-271c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718018815; x=1718623615; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=/vdEmArE5XL+7quSxxBXVAaCtSRnV30OuJL1b8ceXW4=;
        b=HKaMpkwV/at14w/dr63vM5trO71TVAVQGNyiJpcmd73YTgR0GCFdfRtijiu/tQAJvX
         dmO+2nO3GXdzeJ51hrtA7iEh1X63oU5clijF+h0YEMoXQkrDAJXppMa1GaWyueomDXFC
         uQfv3vUyYcr3y7PaGWBXEuRTt5Qp3001xZ7bweN21mMNRHVteRCHmrUuJgseBenP7eu7
         +aj2yVdbjaMkm2ewZ+FczNRp1r5HAbOxhBuJUGt9MBcIJshLpcKKHbbTu6iglBisGZ6n
         PGGa7ftYMu2eJQuSrVOFie+ciuRo2RnkxzXZbdsOGsb83jPi6wdHD72vBbjj3P9Rf0a7
         /VUw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718018815; x=1718623615;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/vdEmArE5XL+7quSxxBXVAaCtSRnV30OuJL1b8ceXW4=;
        b=Q0QXuYxWLA2Jf+KDZKwDXnfrlGKOMLrHKdZyexh3kOHLhNWfuab6jWgRN3fbSnTHY2
         LtpI97QP7DBMyOEOw0LidirZ0XIXov6UQtsukHyQjcXn9s2N88oHIGlqQAyZqnXI5uhF
         NUhoOYWk2p2lPxedMluQOdLL8d0PUlthn6qp8OZpxTL0sYyHJwmgBdNACjedkuNuHr2F
         +Pe2n/5tIcg09lGjnl5EsB8umt+DLzfAgD4aRAsJejEso0eVUKngeokTCpr+e8Li3Zb4
         IYzROYoWFM+jmV4FO+wM6mKNYiHvtfIFTMyZLXgvmLnI+m+7sEfCu+B9rfeQiPILc16D
         nEsw==
X-Forwarded-Encrypted: i=1; AJvYcCWursxzJ5g69v7CqsaSwdFYRAqB6x5AuWgaoa94TAlJGci2oWZAxJdL3lsoOk6b+PgPu+DpIswbJ0bYV0IgiYXnpt9Jn2vhRHNGceQpdWI=
X-Gm-Message-State: AOJu0YzVlspNgTk8ylKeWlecjpbPTtjdznEr+FtqDiB1UKv+R2lIZCi1
	QNTeXfIBbPFyDPDwDF8+Qmzx7iEFScIU6uZUzF9ASkZ1JGjk4USLdoYI2yh3Dg==
X-Google-Smtp-Source: AGHT+IH3YGzNvlzAjkw1WXC7R6lSf8RUEE6T6culeoS7lotBzQaTnwVn5pHNP7rMePfBMEq5jBgXxw==
X-Received: by 2002:a50:9f4b:0:b0:57c:603a:6b2b with SMTP id 4fb4d7f45d1cf-57c603a6eebmr5118402a12.21.1718018814703;
        Mon, 10 Jun 2024 04:26:54 -0700 (PDT)
Message-ID: <be69d7c6-c004-44de-acee-c16dc923d412@suse.com>
Date: Mon, 10 Jun 2024 13:26:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 04/16] x86: introduce CONFIG_ALTP2M Kconfig option
To: Sergiy Kibrik <sergiy_kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <035f63f2b92b963f2585064fa21e09e73157f9d3.1717410850.git.Sergiy_Kibrik@epam.com>
 <856e3517-4aef-4e18-85b1-174ebf5c358f@suse.com>
 <6b55015a-4cea-4064-a50e-38ab2b2e665c@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6b55015a-4cea-4064-a50e-38ab2b2e665c@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2024 12:48, Sergiy Kibrik wrote:
> 07.06.24 10:47, Jan Beulich:
>> On 03.06.2024 13:13, Sergiy Kibrik wrote:
>>> --- a/xen/arch/x86/include/asm/p2m.h
>>> +++ b/xen/arch/x86/include/asm/p2m.h
>>> @@ -577,10 +577,10 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>>>           return _gfn(mfn_x(mfn));
>>>   }
>>>   
>>> -#ifdef CONFIG_HVM
>>>   #define AP2MGET_prepopulate true
>>>   #define AP2MGET_query false
>>>   
>>> +#ifdef CONFIG_ALTP2M
>>>   /*
>>>    * Looks up altp2m entry. If the entry is not found it looks up the entry in
>>>    * hostp2m.
>>
>> In principle this #ifdef shouldn't need moving. It's just that the
>> three use sites need taking care of a little differently. E.g. ...
>>
>>> @@ -589,6 +589,15 @@ static inline gfn_t mfn_to_gfn(const struct domain *d, mfn_t mfn)
>>>   int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
>>>                                  p2m_type_t *t, p2m_access_t *a,
>>>                                  bool prepopulate);
>>> +#else
>>> +static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
>>> +                                             gfn_t gfn, mfn_t *mfn,
>>> +                                             p2m_type_t *t, p2m_access_t *a,
>>> +                                             bool prepopulate)
>>> +{
>>> +    ASSERT_UNREACHABLE();
>>> +    return -EOPNOTSUPP;
>>> +}
>>
>> static inline int altp2m_get_effective_entry(struct p2m_domain *ap2m,
>>                                               gfn_t gfn, mfn_t *mfn,
>>                                               p2m_type_t *t, p2m_access_t *a)
>> {
>>      ASSERT_UNREACHABLE();
>>      return -EOPNOTSUPP;
>> }
>> #define altp2m_get_effective_entry(ap2m, gfn, mfn, t, a, prepopulate) \
>>          altp2m_get_effective_entry(ap2m, gfn, mfn, t, a)
>>
>> Misra doesn't like such shadowing, so the inline function may want
>> naming slightly differently, e.g. _ap2m_get_effective_entry().
> 
> 
> I can do that, sure.
> Though here I'm curious what benefit we're getting from the little
> complication of an indirect call to an empty stub -- is avoiding the
> AP2MGET_* defines worth it?

First - how is an indirect call coming into the picture here? We aim at
avoiding indirect calls where possible, after all. Yet iirc all calls
to altp2m_get_effective_entry() are direct ones.

As to avoiding the (or in fact any such) #define-s - imo it's better to
not have items in the name space which can't validly be used, unless this
(elsewhere) helps to keep the code tidy overall. This way problems (e.g.
from misunderstandings or flaws elsewhere) can be detected early.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:35:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737196.1143379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdJF-0008Sk-6Z; Mon, 10 Jun 2024 11:35:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737196.1143379; Mon, 10 Jun 2024 11:35:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdJF-0008Sd-3M; Mon, 10 Jun 2024 11:35:33 +0000
Received: by outflank-mailman (input) for mailman id 737196;
 Mon, 10 Jun 2024 11:35:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGdJE-0008SX-0d
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:35:32 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ad728c7-271d-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 13:35:31 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6f177b78dcso132296166b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:35:30 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f29bda0c7sm42166366b.41.2024.06.10.04.35.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:35:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ad728c7-271d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718019330; x=1718624130; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=0SFDPkYnBed/Q2Q5Aa4oXHCXEGjVbxYmaTR8Vh3jjWU=;
        b=aY3meXx7UCnj60sMDozY26oXY0tmNO2y+yDOmWfefTUT46sfhhzlnzsv6w/cdh/vZ/
         5uwJWPaQOXL3bzA4qF4SrtgBfpaFNBwJwi/puIHK/t4vm1ydecpVdH82VSqAmWYhlxJb
         cHMUqVCJcOI9dcwH2LSJNDXb4RFOWqm4eh65t/95BWFCXOQ7GUpQ7A01WIMBtiXmEcFS
         RaLFQuNlGMlI1cweCR558DB+v2druWWM/f17uDd6RqeirGhQMX0xYFAwPkX1ZBwXENzu
         0aJD/u6/DgGDc3Idc19wSvNke9vRDLzo3M4COX6I5ptun5btMmr02ncBLp1SNaYeqCTX
         3qdA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718019330; x=1718624130;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0SFDPkYnBed/Q2Q5Aa4oXHCXEGjVbxYmaTR8Vh3jjWU=;
        b=XhhA481HGbf0BRNM4Yd2h3EfurH4CtAYfwoFvYqb82eFVjnqUv5vgXFQlPZ7yZOEG7
         sX1z5f4uGXgzSa6LTyHiP+bKGXh3VpqGBF1MixtFhAUiLfr6C5q3+qPmItMNthMmB989
         RlyEeI4XoNGATEKIsZxAwNxOfKd9YPlIF+Uyv/7iV3y/ClEGxPte7K740fSjCFdL8f3E
         TDjGfN+q/Ui2n7pGUHLHpcRca/cL8QnOls0Qx3nhAvbsgx+NAYaQ+5oBpllh4plPgeJx
         BM9hh2+awjFI33fVYebg2jjjYM6QXMk+k0j0vcavPFBfOTCcWEj7oozjD7MBZmK4++Bg
         ejlg==
X-Forwarded-Encrypted: i=1; AJvYcCW9ojK5Z8e3xVYuCSj/7bM/05HNnSv1g/OolEIG/54hzv3A5F7+NVWTYYCTNDjKQSJf2OPeirbbnmIULCvUZhaK0+jazLAZROd9e0d423M=
X-Gm-Message-State: AOJu0YzBxoQ1FCWdgHfNaekkvRtVUdYu+O3DrWmdOM3VbIb7CWwpsvDR
	ASzRhCd0aKhNrxDJK/spxbtpoRzGdqb5gsFWQq7rORNjdflA5j3sW7nzIRvitA==
X-Google-Smtp-Source: AGHT+IH4YBjSdt4haXOO7+h5t1Y0IDE16FADpoKemfhaCoMcaQQ3omH68oIysITblo9KTqMtb1b/Rg==
X-Received: by 2002:a17:906:3b4d:b0:a6f:1b3a:8898 with SMTP id a640c23a62f3a-a6f1b3a88b6mr191307666b.2.1718019330443;
        Mon, 10 Jun 2024 04:35:30 -0700 (PDT)
Message-ID: <1fe65c97-6aea-452d-99c3-d9da018b33f7@suse.com>
Date: Mon, 10 Jun 2024 13:35:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] MAINTAINERS: add me as scheduler maintainer
To: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>
References: <20240606054745.23555-1-jgross@suse.com>
 <20240606054745.23555-2-jgross@suse.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240606054745.23555-2-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

George, Dario,

On 06.06.2024 07:47, Juergen Gross wrote:
> I've been active in the scheduling code for many years now. Add
> me as a maintainer.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  MAINTAINERS | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 6ba7d2765f..cc40c0be9d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -490,6 +490,7 @@ F:	xen/common/sched/rt.c
>  SCHEDULING
>  M:	George Dunlap <george.dunlap@citrix.com>
>  M:	Dario Faggioli <dfaggioli@suse.com>
> +M:	Juergen Gross <jgross@suse.com>
>  S:	Supported
>  F:	xen/common/sched/

no matter what get_maintainer.pl may say for changes here, I think it's
largely on the two of you to approve this addition.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:37:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:37:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737199.1143390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdL4-0000ZK-Hl; Mon, 10 Jun 2024 11:37:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737199.1143390; Mon, 10 Jun 2024 11:37:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdL4-0000ZD-Ej; Mon, 10 Jun 2024 11:37:26 +0000
Received: by outflank-mailman (input) for mailman id 737199;
 Mon, 10 Jun 2024 11:37:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ruA2=NM=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sGdL2-0000Z5-OY
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:37:24 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce1d197c-271d-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 13:37:23 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id
 2adb3069b0e04-52bc0a9cea4so2300652e87.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:37:23 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52c8a301adbsm492170e87.270.2024.06.10.04.37.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 04:37:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce1d197c-271d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718019443; x=1718624243; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=M5MI3l7/fYMSuNh4hcT3Z44ssiaoPkKOh+q8gStjpmE=;
        b=kS7WDCoW7JPWqyQK/50/mVgIGJMAvxXtlIM9Cjdxp+SxaWgNOUMwcEeCBtfHtv3slX
         VGgunTsq+bSJpiCDDE7BH8qqSOQer1akq4i1yhLXzT6hsZUsobJg54kDppptqRAJyGQo
         zscr7Ogjx6rAOZyoGDhccjqEwEW/Y6mC8MnQsaaV6feNagFutTpZ+gbvyPH+wY2x3dFS
         Rsnybejy/sakqiecyCAU0jqA66zSTs8lmfkRRBBoHdF5Vd5haejaOzyq03KW+ENoDgR4
         Mg/Y/w/A68IWQw2YG539twh7NSoQdikvum9X2Uk95+/UpTzhs3CahmDlCe2ynE4Fvw22
         BqTw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718019443; x=1718624243;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=M5MI3l7/fYMSuNh4hcT3Z44ssiaoPkKOh+q8gStjpmE=;
        b=Q0v72mCOcWvvndHsafP6bbqHwzvuONn4nh1FD1R3qqvmnimK0XxxfU7AgOgQqk1vzj
         1PXY00YBLDONX82njOnQYuqoDePDH49fg/rWItI+dZUOwfYK426QxNexslFQ2VXQHc4E
         UWL4kZgHsKK+DmeJzNZ73fzO0Xvbvusz2edNvtKfq/RSWjgHtm7KNZMZDNPeLknqRHkn
         0kDVDss+PAUYBbCYyxIZeqFcD6agyUcKkxJg4tkVj4dEwUCYyMVMIivIrJ5ro4FaL6Ac
         2Z4fPH3qL7f8p8r5OserS1pQ98Xa1J3MD0OkzK2/SpyytVNATJUS9yvTJUM+EAZABhha
         nkYg==
X-Forwarded-Encrypted: i=1; AJvYcCUY/1f6MAhFXf6DlXG6eA0LTbL/X09XysP/GuJfsl4Mlzj3CsTaJY0sKN5Zf/56gRxziTBWS2/S8hYBr/P7mmvvKLWzrDPxxb0Az2FU1is=
X-Gm-Message-State: AOJu0YxBiFqZ+k7BTNoFADZhy3gbDAflJmtSuIZpQidx09F3jDPr8I2y
	rmnqmslcsfDusKmzrTGIedq9njUhjuclq8iMTxdrTcceM5aVYyCf
X-Google-Smtp-Source: AGHT+IGIhBwAwXM1G+r+gzDKb7v55p/TZQ8OhDY9VGOn9t897k86CYkyx+Xr7ATiKgue/Z+sctH8/g==
X-Received: by 2002:a19:8c5b:0:b0:52c:8906:4415 with SMTP id 2adb3069b0e04-52c890644fbmr780742e87.1.1718019442854;
        Mon, 10 Jun 2024 04:37:22 -0700 (PDT)
Message-ID: <1a1a5605a77fef5b2926a606673ad8458161c577.camel@gmail.com>
Subject: Re: [PATCH v12 8/8] xen/README: add compiler and binutils versions
 for RISC-V64
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,  Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>
Date: Mon, 10 Jun 2024 13:37:22 +0200
In-Reply-To: <d4e5b4c8-d494-440b-8970-488b49bee12e@citrix.com>
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
	 <c6ff49af9a107965f8121862e6b32c24548956e6.1717008161.git.oleksii.kurochko@gmail.com>
	 <d4e5b4c8-d494-440b-8970-488b49bee12e@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Thu, 2024-05-30 at 20:52 +0100, Andrew Cooper wrote:
> On 29/05/2024 8:55 pm, Oleksii Kurochko wrote:
> > diff --git a/README b/README
> > index c8a108449e..30da5ff9c0 100644
> > --- a/README
> > +++ b/README
> > @@ -48,6 +48,10 @@ provided by your OS distributor:
> >       - For ARM 64-bit:
> >         - GCC 5.1 or later
> >         - GNU Binutils 2.24 or later
> > +      - For RISC-V 64-bit:
> > +        - GCC 12.2 or later
> > +        - GNU Binutils 2.39 or later
> 
> I would like to petition for GCC 10 and Binutils 2.35.
> 
> These are the versions provided as cross-compilers by Debian, and
> therefore are the versions I would prefer to do smoke testing with.
We can consider that, but I'd prefer to do that in a separate patch
series.

~ Oleksii

> 
> One issue is in cpu_relax(), needing this diff to fix:
> 
> diff --git a/xen/arch/riscv/include/asm/processor.h
> b/xen/arch/riscv/include/asm/processor.h
> index 6846151717f7..830a05dd8e3a 100644
> --- a/xen/arch/riscv/include/asm/processor.h
> +++ b/xen/arch/riscv/include/asm/processor.h
> @@ -67,7 +67,7 @@ static inline void cpu_relax(void)
>      __asm__ __volatile__ ( "pause" );
>  #else
>      /* Encoding of the pause instruction */
> -    __asm__ __volatile__ ( ".insn 0x0100000F" );
> +    __asm__ __volatile__ ( ".4byte 0x0100000F" );
>  #endif
>  
>      barrier();
> 
> The .insn directive appears to check that the byte pattern is a known
> extension, where .4byte doesn't.  AFAICT, this makes .insn pretty
> useless for what I'd expect is its main purpose...
> 
> 
> The other problem is a real issue in cmpxchg.h, already committed to
> staging (51dabd6312c).
> 
> RISC-V makes toolchain support for the Zbb extension conditional
> (xen/arch/riscv/rules.mk), but unconditionally uses the ANDN
> instruction in emulate_xchg_1_2().
> 
> Nevertheless, this is also quite easy to fix:
> 
> diff --git a/xen/arch/riscv/include/asm/cmpxchg.h
> b/xen/arch/riscv/include/asm/cmpxchg.h
> index d5e678c03678..12ecb0950701 100644
> --- a/xen/arch/riscv/include/asm/cmpxchg.h
> +++ b/xen/arch/riscv/include/asm/cmpxchg.h
> @@ -18,6 +18,20 @@
>          : "r" (new) \
>          : "memory" );
>  
> +/*
> + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is too
> + * old, form it from a NOT+AND pair.
> + */
> +#ifdef __riscv_zbb
> +#define ANDN_INSN(rd, rs1, rs2)                 \
> +    "andn " rd ", " rs1 ", " rs2 "\n"
> +#else
> +#define ANDN_INSN(rd, rs1, rs2)                 \
> +    "not " rd ", " rs2 "\n"                     \
> +    "and " rd ", " rs1 ", " rd "\n"
> +#endif
> +
> +
>  /*
>   * For LR and SC, the A extension requires that the address held in rs1 be
>   * naturally aligned to the size of the operand (i.e., eight-byte aligned
> @@ -48,7 +62,7 @@
>      \
>      asm volatile ( \
>          "0: lr.w" lr_sfx " %[old], %[ptr_]\n" \
> -        "   andn  %[scratch], %[old], %[mask]\n" \
> +        ANDN_INSN("%[scratch]", "%[old]", "%[mask]") \
>          "   or    %[scratch], %[scratch], %z[new_]\n" \
>          "   sc.w" sc_sfx " %[scratch], %[scratch], %[ptr_]\n" \
>          "   bnez %[scratch], 0b\n" \
> 
> 
> 
> And with that, everything builds... but doesn't link.  I've got this:
> 
>   LDS     arch/riscv/xen.lds
> riscv64-linux-gnu-ld      -T arch/riscv/xen.lds -N prelink.o \
>     ./common/symbols-dummy.o -o ./.xen-syms.0
> riscv64-linux-gnu-ld: prelink.o: in function
> `keyhandler_crash_action':
> /local/xen.git/xen/common/keyhandler.c:552: undefined reference to
> `guest_physmap_remove_page'
> riscv64-linux-gnu-ld: ./.xen-syms.0: hidden symbol
> `guest_physmap_remove_page' isn't defined
> riscv64-linux-gnu-ld: final link failed: bad value
> 
> which is completely bizarre.
> 
> keyhandler_crash_action() has no actual reference to
> guest_physmap_remove_page(), and keyhandler.o has no such relocation.
> 
> I'm still investigating this one.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:44:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:44:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737208.1143399 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdRb-0002gf-83; Mon, 10 Jun 2024 11:44:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737208.1143399; Mon, 10 Jun 2024 11:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdRb-0002gY-5O; Mon, 10 Jun 2024 11:44:11 +0000
Received: by outflank-mailman (input) for mailman id 737208;
 Mon, 10 Jun 2024 11:44:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGdRa-0002gS-0v
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:44:10 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bf731d62-271e-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 13:44:08 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6f0dc80ab9so208771466b.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:44:08 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f188a5162sm188434866b.81.2024.06.10.04.44.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:44:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf731d62-271e-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718019848; x=1718624648; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Nabna8pHSgnlv31SaIuDo73RdgCTyIgBJ3uEhhULGYE=;
        b=FgohtixbkooMR4EdiOVRR7GYz75OQ+mWDCkjLVFY+4VMfuthjcsa0Rcpk3E42lptrO
         XpWb2dRUwUrNeNnBRVx4agB6k7McuBW5FKn0jigkgmP28CgKkgXJ31I6j2dfIVgRDBZb
         TwD7z+AdJgCzVqbcwWi3jTsYz2sFFrc4BDOtWJbczHBrYXCxVWqvyCb0uavbLzeurN5I
         TKfOxfY3aV1hql9t5IX12oJS85hxnCVRoN7mTEWpPMUZaSspf/j1svu173ktgG0+ixYo
         JrwTJCNku/tUhmbiFaXadWBqhDODfWjx6NfJGxQ9kX5za0YT4jhLiCnr542gSAZl0nSm
         uKTg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718019848; x=1718624648;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Nabna8pHSgnlv31SaIuDo73RdgCTyIgBJ3uEhhULGYE=;
        b=njCrmqeaGtVtOXolfSvtlVhrJeSDRUe13B0mAtiGHKdwgAOTwWE7YUePpRBEVQbCwB
         wINbCHYSIT8qCi5DUQ57D1/Jd3R6pgZbCKzI9iRxYgYnaEa6M4hOgPznjoDs/29CUh9N
         hNHSquutOQ4QzwBIjzEfeoc5ECfxTgTkZZsqaDkUaQ03NmRccHyXEFXMQD0r9ff8loPI
         rkUESw0drAUOWFZfhudaMVOTUWzkXWoNm99Wt0CMIkvuZCkM3WnFoXe1jfcswm7kAO/1
         GYA0JN4x900T6FQgeG02IC8sk6c9Urq4X33+0q5Pxji0P00asHOcAI+YlYh5wj+pRG9K
         1LYw==
X-Forwarded-Encrypted: i=1; AJvYcCXuvGPbcjL/g9ym52/cyxQbnmchUHMH9eFi6+CCB4r9YS4u68jJcGWlIQrr4NurrBxZysbDEBsBWLmmpPztxsO0Y7H0yMkI7+f7elv5qJY=
X-Gm-Message-State: AOJu0Yx7qund0jkWssVvujXrDQx1EwZICpaWjKDBUtp30Vw+4DMOEvMv
	BYvbHy3K9r16IpHY955CYl05DBIXpyjCHzxWNRHBZQUud4IqQw+5QYbuk1VJUQ==
X-Google-Smtp-Source: AGHT+IHNDj6qcYxui4p4BtTAc/gqUvGli4rMnqkhvVE5nw4PvIpGHKgqh67hxaFbSnI1BtI3WWTswQ==
X-Received: by 2002:a17:906:f5a0:b0:a6f:1702:e131 with SMTP id a640c23a62f3a-a6f1702e8fbmr223693666b.49.1718019848188;
        Mon, 10 Jun 2024 04:44:08 -0700 (PDT)
Message-ID: <9cf06291-3389-4241-9d20-4b5411fb3d2e@suse.com>
Date: Mon, 10 Jun 2024 13:44:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: vcpumask_to_pcpumask() case study
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <3bb4e3fa-376b-4641-824d-61864b4e1e8e@citrix.com>
 <c5951643-5172-4aa1-9833-1a7a0eebb540@suse.com>
 <1745d84b-59b7-4f90-a0a8-5d459b83b0bc@citrix.com>
 <afc347c0-ca2f-4972-b895-71184b1074ea@suse.com>
 <e56a4519-9d4e-4267-a189-e8e2fec1518b@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <e56a4519-9d4e-4267-a189-e8e2fec1518b@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 12:12, Andrew Cooper wrote:
> On 10/06/2024 8:15 am, Jan Beulich wrote:
>> On 07.06.2024 14:35, Andrew Cooper wrote:
>>> On 03/06/2024 10:19 pm, Jan Beulich wrote:
>>>> On 01.06.2024 20:50, Andrew Cooper wrote:
>>>>> One of the followon items I had from the bitops clean-up is this:
>>>>>
>>>>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>>>>> index 648d6dd475ba..9c3a017606ed 100644
>>>>> --- a/xen/arch/x86/mm.c
>>>>> +++ b/xen/arch/x86/mm.c
>>>>> @@ -3425,7 +3425,7 @@ static int vcpumask_to_pcpumask(
>>>>>              unsigned int cpu;
>>>>>  
>>>>>              vcpu_id = ffsl(vmask) - 1;
>>>>> -            vmask &= ~(1UL << vcpu_id);
>>>>> +            vmask &= vmask - 1;
>>>>>              vcpu_id += vcpu_bias;
>>>>>              if ( (vcpu_id >= d->max_vcpus) )
>>>>>                  return 0;
>>>>>
>>>>> which yields the following improvement:
>>>>>
>>>>>   add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-34 (-34)
>>>>>   Function                                     old     new   delta
>>>>>   vcpumask_to_pcpumask                         519     485     -34
>>>> Nice. At the risk of getting flamed again for raising dumb questions:
>>>> Considering that elsewhere "trickery" like the &= mask - 1 here were
>>>> deemed not nice to have (at least wanting to be hidden behind a
>>>> suitably named macro; see e.g. ISOLATE_LSB()), wouldn't __clear_bit()
>>>> be suitable here too, and less at risk of being considered "trickery"?
>>> __clear_bit() is even worse, because it forces the bitmap to be spilled
>>> to memory.  It hopefully won't when I've given the test/set helpers the
>>> same treatment as ffs/fls.
>> Sorry, not directly related here: When you're saying "when I've given"
>> does that mean you'd like Oleksii's "xen: introduce generic non-atomic
>> test_*bit()" to not go in once at least an Arm ack has arrived?
> 
> If we weren't deep in a code freeze, I'd be insisting on some changes in
> that patch.
> 
> For now, I'll settle for not introducing regressions, so it needs at
> least one more spin (there's a MISRA and UB regression I spotted, but I
> haven't reviewed it in detail yet).

Did you point this[1] out to him? I've just checked, and I can't seem to
find any reply of yours on any of the respective sub-threads, which
formally still means the patch would be fine to go in as is once an Arm ack
arrives (taking the same approach as you did elsewhere wrt a PPC one). It's
just that informally at least I now know to wait ...

Jan

[1] It'll likely be embarrassing to learn what I've overlooked during the
many rounds of review.

> But yes - they're going to end up rather different when I've applied all
> the compile time optimisations which are available.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:46:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:46:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737213.1143411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdUE-0003Fn-MT; Mon, 10 Jun 2024 11:46:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737213.1143411; Mon, 10 Jun 2024 11:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdUE-0003Fg-HR; Mon, 10 Jun 2024 11:46:54 +0000
Received: by outflank-mailman (input) for mailman id 737213;
 Mon, 10 Jun 2024 11:46:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGdUD-0003FZ-7C
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:46:53 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f9fc5ce-271f-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 13:46:50 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a6efae34c83so221960666b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:46:50 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6efd6cfb74sm348309366b.20.2024.06.10.04.46.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:46:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f9fc5ce-271f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718020009; x=1718624809; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=7EGkeVwdFSfeDMmh69UV5bstx3D2U4/JgPiOn88t3rg=;
        b=IPSjWBF4/U/mAADgvlItfVRCteXvyZJTamr9KGdFDQPSGNHTszQbnVX/DoYucYPfva
         Yd0qfQQuiA+uWc0uUsCqtNGJY7bS5DC3Nole/RG3lAauhqDSuIBT2bBDcTvtNAHAhkj8
         QpnYIhptVDDR/mlQEPbQkiIvkP+4if8zHmqfBMQ2tzRUsI5dLBSY2La6C5ZQJEAnbu9r
         8OcUrhfZmbEMGYViQa9FGGpaSXWVqAbdw8vEECcJIHdgMl6xvR9RuiscL0whQE8lr8Bv
         2l07cxA+K+jnlCBCyguoGEDWAY8TNSubou7BWgiPp84ioZiW4DbkwETx7J9psrxMQCP0
         RBmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718020009; x=1718624809;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7EGkeVwdFSfeDMmh69UV5bstx3D2U4/JgPiOn88t3rg=;
        b=IObisdGwnkTE1LGNlxRy+4/Yr5CTsviYxgp9tVmOklZ3Zm3lLZcg/meiLfa4dLlTK/
         +b6w4BeHWHo9zLkukntoXRS5wB3OZIIXmJu8eT185wlHn1wbCafbKVCQtEAeCRGMKzpY
         A4+aTnZzRrBaA/tL7Yw3AsCRpnmzL9PedCk44wY9eUyP8SLgu9uWi2Bs630moUc1rh0q
         k5DPwunvJLGdQenz0ELmeRvrg5RwagPomcNpAiwwMK2rf2N68gn9n/SV0IDgmco7htR7
         XlgKkoZAjWGqulhaxwdTOOgbh7QkNdsAb+KkuPk69oiTWywMabfZyLqQQQyxkxXweW+C
         m7hQ==
X-Forwarded-Encrypted: i=1; AJvYcCXb3O8zXeOsD08NYglwVbDct6CytMaoNcO+xQekrfviBQKWFQO8urSnJGH2HBNLmKJO1mow7r4sHumdkSsfmq2nrpEw3W9Mzb/vfGjxNnw=
X-Gm-Message-State: AOJu0YzYVn8zSIrGi3zwMD1WF0TdmOUsQreJFon/8PhpT8FdUSlAsLpz
	n4C7vH7fIslI1u3rjc9KsLsIISyFyk2xLs1K/MijSGFdp8Q+m43gLGB4MAOzN4RbDxjRZPMtpQs
	=
X-Google-Smtp-Source: AGHT+IGJYEwI85FbXJPs34RJV7jLkCs3aMVxuWieFYT1JDMYNlQCcoAatbZK8ybHnIQA1SLl6H01dA==
X-Received: by 2002:a17:907:2da7:b0:a6f:1378:1325 with SMTP id a640c23a62f3a-a6f1378150amr324147266b.9.1718020009456;
        Mon, 10 Jun 2024 04:46:49 -0700 (PDT)
Message-ID: <8a2b4ac5-d725-46d4-92ac-8e06258b3776@suse.com>
Date: Mon, 10 Jun 2024 13:46:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 08/16] x86/p2m: guard EPT functions with using_vmx
 macro
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <52c64ffd589f289fda271422fee1e957f94aac6e.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <52c64ffd589f289fda271422fee1e957f94aac6e.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:22, Sergiy Kibrik wrote:
> From: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Replace cpu_has_vmx check with using_vmx, so that presence of functions
> ept_p2m_init() and ept_p2m_uninit() in the build gets checked.
> Since currently Intel EPT implementation depends on CONFIG_VMX config option,
> when VMX is off these functions are unavailable.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

The actual code change is okay, so
Acked-by: Jan Beulich <jbeulich@suse.com>
I don't, however, understand what "gets checked" in the description is meant
to say. Elsewhere the wording was aimed more at the actual goal, I think.

Jan

> --- a/xen/arch/x86/mm/p2m-basic.c
> +++ b/xen/arch/x86/mm/p2m-basic.c
> @@ -40,7 +40,7 @@ static int p2m_initialise(struct domain *d, struct p2m_domain *p2m)
>      p2m_pod_init(p2m);
>      p2m_nestedp2m_init(p2m);
>  
> -    if ( hap_enabled(d) && cpu_has_vmx )
> +    if ( hap_enabled(d) && using_vmx )
>          ret = ept_p2m_init(p2m);
>      else
>          p2m_pt_init(p2m);
> @@ -72,7 +72,7 @@ struct p2m_domain *p2m_init_one(struct domain *d)
>  void p2m_free_one(struct p2m_domain *p2m)
>  {
>      p2m_free_logdirty(p2m);
> -    if ( hap_enabled(p2m->domain) && cpu_has_vmx )
> +    if ( hap_enabled(p2m->domain) && using_vmx )
>          ept_p2m_uninit(p2m);
>      free_cpumask_var(p2m->dirty_cpumask);
>      xfree(p2m);



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:47:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:47:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737221.1143420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdUr-0003pA-28; Mon, 10 Jun 2024 11:47:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737221.1143420; Mon, 10 Jun 2024 11:47:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdUq-0003p3-UQ; Mon, 10 Jun 2024 11:47:32 +0000
Received: by outflank-mailman (input) for mailman id 737221;
 Mon, 10 Jun 2024 11:47:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AMvd=NM=suse.de=osalvador@srs-se1.protection.inumbo.net>)
 id 1sGdUp-0003a1-R8
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:47:32 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de
 [2a07:de40:b251:101:10:150:64:1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 378e868d-271f-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 13:47:31 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 8172421A5B;
 Mon, 10 Jun 2024 11:47:29 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 6843213A7F;
 Mon, 10 Jun 2024 11:47:28 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 5brfFtDnZmbLFQAAD6G6ig
 (envelope-from <osalvador@suse.de>); Mon, 10 Jun 2024 11:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 378e868d-271f-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718020049; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=WJbUj0Me5/uIhq7+KJarpD1i6Sf3idu2woFatetNcL0=;
	b=jQO+Qx/4Mj+vDjMEgjaHWKq33ho5f4Qb6TYgmmU0qZfZs7hYxUfoNdVh0yH7bMllbzqIzj
	cBgRVPF2mcuk7QqY+/pskie2ykcdmVfJAarIg0QR57zOzx4iU4VSWIXTSa8mKRdzIBa0QD
	lO9DjXCGRfL9yocXH6oR8JUqcqdOAxc=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718020049;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=WJbUj0Me5/uIhq7+KJarpD1i6Sf3idu2woFatetNcL0=;
	b=pE7TkXSXVcV2z59Zw2hFhiW+a9rT+ETwB+nt36sIeCijTXDS4lhfOoWDZeBLz9algnI9z/
	eVn0jCO+My8aZ1Ag==
Authentication-Results: smtp-out1.suse.de;
	none
Date: Mon, 10 Jun 2024 13:47:18 +0200
From: Oscar Salvador <osalvador@suse.de>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	Eugenio =?iso-8859-1?Q?P=E9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
Message-ID: <ZmbnxrOuoarMbC6X@localhost.localdomain>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
 <ZmZ7GgwJw4ucPJaM@localhost.localdomain>
 <13070847-4129-490c-b228-2e52bd77566a@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <13070847-4129-490c-b228-2e52bd77566a@redhat.com>
X-Spam-Level: 
X-Spamd-Result: default: False [-4.30 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-0.993];
	MIME_GOOD(-0.10)[text/plain];
	ARC_NA(0.00)[];
	MISSING_XM_UA(0.00)[];
	MIME_TRACE(0.00)[0:+];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[23];
	FROM_EQ_ENVFROM(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_DN_SOME(0.00)[]
X-Spam-Score: -4.30
X-Spam-Flag: NO

On Mon, Jun 10, 2024 at 10:38:05AM +0200, David Hildenbrand wrote:
> On 10.06.24 06:03, Oscar Salvador wrote:
> > On Fri, Jun 07, 2024 at 11:09:36AM +0200, David Hildenbrand wrote:
> > > In preparation for further changes, let's teach __free_pages_core()
> > > about the differences of memory hotplug handling.
> > > 
> > > Move the memory hotplug specific handling from generic_online_page() to
> > > __free_pages_core(), use adjust_managed_page_count() on the memory
> > > hotplug path, and spell out why memory freed via memblock
> > > cannot currently use adjust_managed_page_count().
> > > 
> > > Signed-off-by: David Hildenbrand <david@redhat.com>
> > 
> > All looks good but I am puzzled with something.
> > 
> > > +	} else {
> > > +		/* memblock adjusts totalram_pages() ahead of time. */
> > > +		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
> > > +	}
> > 
> > You say that memblock adjusts totalram_pages ahead of time, and I guess
> > you mean in memblock_free_all()
> 
> And memblock_free_late(), which uses atomic_long_inc().

Ah yes.

 
> Right (it's suboptimal, but not really problematic so far. Hopefully Wei can
> clean it up and move it in here as well)

That would be great.

> For the time being
> 
> "/* memblock adjusts totalram_pages() manually. */"

Yes, I think that is better ;-)

Thanks!
 

-- 
Oscar Salvador
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:48:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:48:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737226.1143430 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdVi-0004pR-9i; Mon, 10 Jun 2024 11:48:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737226.1143430; Mon, 10 Jun 2024 11:48:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdVi-0004pK-5z; Mon, 10 Jun 2024 11:48:26 +0000
Received: by outflank-mailman (input) for mailman id 737226;
 Mon, 10 Jun 2024 11:48:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGdVh-0004Hp-7z
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:48:25 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 576c5dbb-271f-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 13:48:23 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6269885572so937277866b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:48:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f13e08c89sm226356766b.88.2024.06.10.04.48.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:48:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 576c5dbb-271f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718020103; x=1718624903; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=kJrhT4z/+SEXmZZQ9t1aIgmTTofzN0MQlnz75vfH5XU=;
        b=d2B+XSavQVSjeMozbHF/+R0lkshJkNQguHZLFFn7PDITOp3SzCqDVT+/XC1OOFjgVk
         UAw9P0KBpeunK7fxaM4Z0zPEl+rMOnoEEVKQ0SfZ4IQMs7aF8ZnBM79G0d7TiKjEI+t7
         wy/vahYyQdt+HDdUu5Tx+wtyh90hrGiZqOWc/X3yEFLiIt5LcdQHIZLabgcJEQaeST2n
         8mKqpAPPKsJVmTmeGyiA22PDvPQdDgNeGFapCcLXE9/Wq6AX7V8TLRYxC1yUbsedjf2m
         vvmZgV3wdf9pnSPaQFHTd27yw0U5Ft/vw4/PP/qlhAC6gfOzXeQI/h1Vsw2rt282YYo9
         h9JQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718020103; x=1718624903;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kJrhT4z/+SEXmZZQ9t1aIgmTTofzN0MQlnz75vfH5XU=;
        b=OLlCWDNvjowqmiT4m/Kgc+YtzODzCmSLCWX+2ELoT+Ba6ZasT+XTawSUawrfFYfWsw
         U2LL8271jXFCXOAb3avS7tQVOa+0cHPtoofU47SQdJQ3FH/v6N+H3AZ62GEzb63g1l5A
         cFjgi/9OKFseNGO8rBLJD/HUyj3LMw3RId9RC1R3BwOoc2A9JR/y3rhhMEpqoSt1sslT
         N5utJVj6vvWXh4BOHjJW5pYfo5ylaltysPpC+i5alD2WNskTIYwF2cq8QIReFWVIoNM6
         IYuCr8N1+56asbEgSEFT+Mr1BkqahUSSwQtqRDjOivdPZ7XJkUY2EZ1M449fpir1TkSq
         PWtQ==
X-Forwarded-Encrypted: i=1; AJvYcCVvuZLA2toEZB48mucrwRyH4VzF407TTx+ISVXCZG2f0/jbtRFUPRMMfeglcO5kScy9rLBeeLu/tNUlxJJhXMumpJmlrxEmeUZFGAyVHx4=
X-Gm-Message-State: AOJu0YxWuXGedqsEuAtFCOMCt+6cuK1pN5ot97O0wOx4HZrfa/5c4/i9
	HLYmMVn+nghkmj0JYXc3xJYmzlgHJmm0xQH4tEopL4K3CCGyyzyjqN0rnqSS6g==
X-Google-Smtp-Source: AGHT+IFKyOjlVdZEmvu/S0b1lQUYM7ukMFhSo287rIk4v1Z7axronyTecLutUNmD0nMcwp39tLHBUQ==
X-Received: by 2002:a17:906:3592:b0:a6e:f8c1:8395 with SMTP id a640c23a62f3a-a6ef8c18434mr455730466b.9.1718020103045;
        Mon, 10 Jun 2024 04:48:23 -0700 (PDT)
Message-ID: <eee43358-b8b8-495d-94b7-0e1bb9c1cc1f@suse.com>
Date: Mon, 10 Jun 2024 13:48:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 05/16] x86: introduce using_{svm,vmx} macros
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <9860c4b497038abda71084ea3bce698eab3b277c.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <9860c4b497038abda71084ea3bce698eab3b277c.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:16, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/include/asm/hvm/hvm.h
> +++ b/xen/arch/x86/include/asm/hvm/hvm.h
> @@ -26,6 +26,9 @@ extern bool opt_hvm_fep;
>  #define opt_hvm_fep 0
>  #endif
>  
> +#define using_vmx ( IS_ENABLED(CONFIG_VMX) && cpu_has_vmx )
> +#define using_svm ( IS_ENABLED(CONFIG_SVM) && cpu_has_svm )

Btw, having seen a number of uses by now: Can we consider making these at
least function-like macros (with no arguments), if not inline functions?
Imo that'll make use sites look more "natural".

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:51:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:51:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737230.1143439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdYB-0006aw-LU; Mon, 10 Jun 2024 11:50:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737230.1143439; Mon, 10 Jun 2024 11:50:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdYB-0006ap-IT; Mon, 10 Jun 2024 11:50:59 +0000
Received: by outflank-mailman (input) for mailman id 737230;
 Mon, 10 Jun 2024 11:50:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGdY9-0006af-Rp
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:50:57 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b2651d89-271f-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 13:50:56 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a63359aaacaso659846766b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:50:56 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f18bbf3cbsm186992666b.1.2024.06.10.04.50.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:50:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2651d89-271f-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718020256; x=1718625056; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=O1ZGV3J5wNqgYc0jHVNgrLKV8UB8hbRxEYdcXlor08c=;
        b=H/1qHww14YKYnYJaS2lHqDknegCHZrrM1OL3/ns7fhk34ZfGH93GWrw8xOXBD0D+i9
         Zl50SLOIzu0MUKSsIwcbTDlNiwD26eBoVMt8yLVl8rkFw6EDcwx5iWI/MjLA7QBxplWI
         iHTbXd29Mb+rB4tdHNqajnyi74M+7NuTz7r0HqeMaG/HL1Q/lHd1xX1xaEPru7lgaZ72
         +j1EdfL0ZY9IzyTJUNxrieTWi/jZ7WAbqn+jNeT+nv/mK5eKK1C+at1TZYHIwWL367Se
         Y+8TBPcUNIlfe+31nHahs9S7zjQFbPNpE6rXp5G23AXvbGY+VrE9t2qHJSbU2SmKqPuW
         EMCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718020256; x=1718625056;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=O1ZGV3J5wNqgYc0jHVNgrLKV8UB8hbRxEYdcXlor08c=;
        b=O+kCiSwoRfyn1c6QLIDUIzbAnYhFUFWW+xn9q0KsBXwE6TzsmZL7h+6nkZ+8u23ZVS
         jg0F76NatI4C401mC4WCGsv0BCbbMMFvg9lKStNII1vM60Kko2KM5LxrstRyj7r7xit0
         3NacybK2tHNmaWqsB+2wOEg9PWjyAzF3uep9aix96Ga6hrr58r8f1oRrr/ftU5y7PDae
         vlGkcW3o/F0+GJB91Q7OvN0l5dfTdHDR8IDs6YDOYVowkmxe6mehZYf6QoAH1bKzxBE+
         8WWyIWy9TeGE5DQMIe6ntAE+fZ9hJsVikX/wvV0JNPKKiPmPzsEDlhpDYEaU6Xtsg80U
         /eRA==
X-Forwarded-Encrypted: i=1; AJvYcCX+JtUKz8cehel88wRjL0dL0F1wWKYUhI0RK5sxv4M8LoqG1oYClvMjJwSZvsZPgwQ1cPR3ZHG3Oe8CMXmHkoZqhdBfm5ZWadHQAcBGWLg=
X-Gm-Message-State: AOJu0Ywb0nDx5D6Dt6w+hwp6Dch0Pgc0FoF6rXxh6zIL1QNUoNtC8BnK
	8+v+YBE5Q88v+dQH9Kd52/o2MfTI4Kr1rzeXQu2SR5YYjLvC8YZu+NdRKHzLcw==
X-Google-Smtp-Source: AGHT+IHZDj9HvAgwD3EhGOx8JG0inz+TWItul4SQrZOUiGN4ZvOltlajhKqfHyft/8tAZPI4Zpf2sQ==
X-Received: by 2002:a17:907:7748:b0:a6f:15f5:261e with SMTP id a640c23a62f3a-a6f15f52770mr221409066b.7.1718020255704;
        Mon, 10 Jun 2024 04:50:55 -0700 (PDT)
Message-ID: <c86a4b33-7c35-4b1f-8d02-2431decf5140@suse.com>
Date: Mon, 10 Jun 2024 13:50:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 01/16] x86: introduce AMD-V and Intel VT-x Kconfig
 options
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <72439ab1749b4bdca3c74a7d2af0254d23626797.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <72439ab1749b4bdca3c74a7d2af0254d23626797.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:07, Sergiy Kibrik wrote:
> From: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Introduce two new Kconfig options, SVM and VMX, to allow code
> specific to each virtualization technology to be separated and, when not
> required, stripped.
> CONFIG_SVM will be used to enable virtual machine extensions on platforms that
> implement the AMD Virtualization Technology (AMD-V).
> CONFIG_VMX will be used to enable virtual machine extensions on platforms that
> implement the Intel Virtualization Technology (Intel VT-x).
> 
> Both features depend on HVM support.
> 
> Since, at this point, disabling any of them would cause Xen to not compile,
> the options are enabled by default if HVM and are not selectable by the user.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>

Yet to clarify - my question as to ...

> changes in v1:
>  - change kconfig option name AMD_SVM/INTEL_VMX -> SVM/VMX

... undoing this still remains.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:51:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737238.1143450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdYs-00074s-TQ; Mon, 10 Jun 2024 11:51:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737238.1143450; Mon, 10 Jun 2024 11:51:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdYs-00074l-Qh; Mon, 10 Jun 2024 11:51:42 +0000
Received: by outflank-mailman (input) for mailman id 737238;
 Mon, 10 Jun 2024 11:51:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdYr-00074X-Ol; Mon, 10 Jun 2024 11:51:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdYr-0005Gb-Mn; Mon, 10 Jun 2024 11:51:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdYr-00057g-AE; Mon, 10 Jun 2024 11:51:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdYr-0003bb-9o; Mon, 10 Jun 2024 11:51:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KNNEZtNCiv03SKfxASIG6sM1+YEn+imJo5QPAsFNstI=; b=5bAANnnfhznncBb9JUzVOueVQb
	WcBuYfYv77wY5YnnwpqYSg43naL4YTwWguQ1RveCxJcScY/TfzrSNL2NM4vGXY/VLuNOj61PZiq9g
	ZhI2VKmR+DgjJL7BQT/uPEvbNj0ZPi6iBPI16XaGJaiPK2uU8yGDddfbSvoNLHnd9fe4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186301-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186301: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5b2ca32c1506bbb0e636a2dfab7502a52fe136
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 11:51:41 +0000

flight 186301 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186301/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0a5b2ca32c1506bbb0e636a2dfab7502a52fe136
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186223  2024-06-01 15:00:23 Z    8 days
Testing same since   186301  2024-06-10 09:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c2d5e63c73..0a5b2ca32c  0a5b2ca32c1506bbb0e636a2dfab7502a52fe136 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:51:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:51:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737241.1143461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdZ6-0007Tw-7c; Mon, 10 Jun 2024 11:51:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737241.1143461; Mon, 10 Jun 2024 11:51:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdZ6-0007Tp-2M; Mon, 10 Jun 2024 11:51:56 +0000
Received: by outflank-mailman (input) for mailman id 737241;
 Mon, 10 Jun 2024 11:51:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGdZ4-0006af-Q6
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 11:51:54 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4efa8f6-271f-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 13:51:54 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6ef46d25efso306023166b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 04:51:54 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f2e231f73sm27785366b.63.2024.06.10.04.51.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 04:51:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4efa8f6-271f-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718020314; x=1718625114; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=WM7cbeVxrugWysXgWa3o9HPhZiHUZzLyXfDDvyvD6ig=;
        b=b9bgmzMoWdHgAA8dd9yQ/4XytKwCNZh9HMhlbyxU4CteKB2/YiG9bkYV+wXoVf3tW4
         J0SHoNgPjeGGxyDDUnS6fA6xXukV5VQnr3vcbqBMkGmBPL7kPx7/r3iJKiD4uF4rd+z6
         D4NugQC3W7DLbTR/D88tXFk12bPJtM12B9Tq5N8WFB2iMEQWeSprTSs5YOtuL7LdDOG9
         PAYr0TPUOO5GD2zGg4G89xl1RvGXNaTj8iQibFI1xmDoMJrnjh8ergb0TLzvEWAAVgKK
         7038lh0cX4XwYHajDmmDOGdgLOXWlZoprnPScCi37hbFCkADA5gpz/kVHGNlRe//zguj
         m82Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718020314; x=1718625114;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WM7cbeVxrugWysXgWa3o9HPhZiHUZzLyXfDDvyvD6ig=;
        b=RKGkQ+UJaPj4si8ZSEWDp6et6aWxnfnO1rxYdRGjLUWdWePe3B8onrNCkWbOPkCJOf
         q8UNW3taDSrVwIeIqb3yQB2hIo9KTSn/uePMmCNs+osGtKT9p1dnyKE4YN+fHDY4t+97
         bW9j2qC3piECg3u+RCr7hIj8t/dGbjqtgjoh5LyDGmYApBMVDYC8RiP2ewkWv752Fv3R
         JdrkEdj8eUzY98oC2lkWYXyNOlC8/vZAXzTiQG8QoKvoDSaRK9+mj1l/xJOODt06xuoX
         7q6wKMOu3w6SittRdRdk/mWObAPNdGmmtcMLfLV/UQbRNRht0zeJbEXnl0SgxnrklMom
         t8hw==
X-Forwarded-Encrypted: i=1; AJvYcCWdc9laAA8ITHEuC4wlHcHSYkwYetyKXTuI7RE50tWhIYiKZBxU/q8kzltgk60H8jr6tLivTBVvPL4ZlNcQkiywcQ/Z4w5Hpu/kCubc7PA=
X-Gm-Message-State: AOJu0YwEjW/F2WsPcnU7XGtJlWVlXGMluUb4ku72UL6RfnUrxo1CTXhg
	xWpTFA+AURokDtd0WWmQyhss9KrN6GBUoluTP2at1gf69Lo8RZLEZXLdqDScUQ==
X-Google-Smtp-Source: AGHT+IHrOBEb4llKZ5J/HIzYgUbKjaF8ONdloK5PjgSgRfpM6vPu4rryJEWuzrlxC0EVn8U11u7yTA==
X-Received: by 2002:a17:906:ae4a:b0:a6e:46ab:9a9b with SMTP id a640c23a62f3a-a6e46ab9b22mr472769366b.31.1718020313805;
        Mon, 10 Jun 2024 04:51:53 -0700 (PDT)
Message-ID: <0a57be60-c8c6-4aba-9ef7-27725945a151@suse.com>
Date: Mon, 10 Jun 2024 13:51:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 09/16] x86/traps: guard vmx specific functions with
 usinc_vmx macro
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <63045d707485c818af5eafa45752e60405ecf887.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <63045d707485c818af5eafa45752e60405ecf887.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:24, Sergiy Kibrik wrote:
> From: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Replace the cpu_has_vmx check with using_vmx, so that not only VMX support
> in the CPU gets checked, but also the presence of the functions
> vmx_vmcs_enter() and vmx_vmcs_exit() in the build.
> 
> Also, since CONFIG_VMX is checked in using_vmx and depends on CONFIG_HVM,
> we can drop the #ifdef CONFIG_HVM lines around using_vmx.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 10 11:59:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 11:59:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737252.1143469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdgE-0000Jb-03; Mon, 10 Jun 2024 11:59:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737252.1143469; Mon, 10 Jun 2024 11:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGdgD-0000JU-Tf; Mon, 10 Jun 2024 11:59:17 +0000
Received: by outflank-mailman (input) for mailman id 737252;
 Mon, 10 Jun 2024 11:59:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdgD-0000Hd-46; Mon, 10 Jun 2024 11:59:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdgD-0005Qn-11; Mon, 10 Jun 2024 11:59:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdgC-0005s6-7s; Mon, 10 Jun 2024 11:59:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGdgC-0000zP-7e; Mon, 10 Jun 2024 11:59:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nJKyfqY9MBbTATmCwVulujknglma+1pP/nUDYVTQi1I=; b=2SoFxvL5ox7wjYEFNc/CmOvzuy
	LZShRL774BgnQ6dJAR23WarpLsSny13z2V3MNmvuG/ZX49sXurKaAhaxbvL3E1DfF63HnjFsztgDf
	QzYX0OmeQ1VgZ4kXhqt7gPZ3e4av8fLGud/YyykIEL3a30U/k4TLekQaRLsKSOZsSWP4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186299-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186299: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 11:59:16 +0000

flight 186299 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186299/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186293
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186293
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186293
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186293
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186293
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186293
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186299  2024-06-10 01:51:55 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 12:21:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 12:21:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737266.1143480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGe1p-0005zh-U0; Mon, 10 Jun 2024 12:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737266.1143480; Mon, 10 Jun 2024 12:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGe1p-0005za-RE; Mon, 10 Jun 2024 12:21:37 +0000
Received: by outflank-mailman (input) for mailman id 737266;
 Mon, 10 Jun 2024 12:21:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGe1n-0005zP-Ob
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 12:21:35 +0000
Received: from mail-oa1-x2b.google.com (mail-oa1-x2b.google.com
 [2001:4860:4864:20::2b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8e781e1-2723-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 14:21:33 +0200 (CEST)
Received: by mail-oa1-x2b.google.com with SMTP id
 586e51a60fabf-254c5bc84c6so479931fac.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 05:21:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8e781e1-2723-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718022092; x=1718626892; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=oNBQSeigbJsHzkU0snWQtpMi1kk6ZjIJqgd5Q0fbImI=;
        b=OJUKDkgQ3liiZK8HCjH/JY9W+JF+4thM/4gbePQOBpOgg5s61S7LEg7VcJ3QEF+qfU
         yMazyC1NuL9UWfjUTuzGvo+Scek7UKk4tNBBTWi+sAKcMX39+ezzmHv2CtdVFvhnWUW+
         SoSOtKwYm98A9Oxx5ZtXAZBGEucAzyUGjlQHt3GzO+18mrItgamTiaHTV5v4GQEcQHrN
         +ZV0nRcew6fPLJTtGgWnMXz/q3iRNdJro63c4TC5fVp+j5i3YskC89Pm/3zk8wq3wzag
         yuYfqcyy14Jl5q45H+ID+Opid7BTF5/5Y3rBdngewDw5tpx4eQ12K/7HQbe4X4HeieDr
         uC8Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718022092; x=1718626892;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=oNBQSeigbJsHzkU0snWQtpMi1kk6ZjIJqgd5Q0fbImI=;
        b=tvBaAPAJSFM8Xn5MyBreyetly2tn/cJY9dLUMn6eVQWpXHUalGZThwXJv2/RQ/q7q9
         SJ29E2Q8yL8DmuESbVBMW6d5TV1o3vX/6BaXBYvMi4cbEL3sWP3xWBhYV5VX7Recso75
         rV6MukJlfITsWA5AUC/c/ez/zV/kmRgemeoU92YR61BZ7lxTm4qtsvqWrUY/qeZwCd8i
         FMBlnE/iOS84gM12F/qX8toZeXmZxj1o1WQZJKeLmCTlNRw9jo0zP0DR8jEN6HGd4tJ4
         qsdIq8Gp/9AVGvYFkdC1gaiIkgLO7x+UECwQ0RiKUf9OIAmTV9N+Oi7yVAzGGvOMWjjv
         2YPA==
X-Forwarded-Encrypted: i=1; AJvYcCUhoo8oZpdSTwOAaevozZmSsVWpVgapBd8pirR5C3UEwtcaq9vezKyz36Eh9Yuq9gAdi1zFZqu6lry/LBTZwVNn51fDfWT3Agelz5iumDo=
X-Gm-Message-State: AOJu0YzSzAWCZX7jHq6AcFik963Wr+znIcQx6CG0cQ6U9EjIBYdkAS5q
	wLTb0Z6q+3mjG2kJ3xdcDJTw26PaXaspoz/B1iD4ti0dAyupeiqiVkAZQKZ25ilVRBdo9B1fNAI
	6SXWzO+uq5oFXO1j4FYhLNW2zK3aUxHRr
X-Google-Smtp-Source: AGHT+IFjlGkFVbDMELsDZcehtgz35sJ23cqrWFOP1uude3AjQojhHc4nMgIi6EIKAnAftXA6SILzFgLesf9XHovIC6A=
X-Received: by 2002:a05:6870:224c:b0:254:c3d7:fbeb with SMTP id
 586e51a60fabf-254c3d8015dmr3437114fac.13.1718022091764; Mon, 10 Jun 2024
 05:21:31 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1717356829.git.w1benny@gmail.com> <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com> <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
 <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com> <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
 <8961cf72-4eeb-4c47-9723-35da3e47d4d2@suse.com> <CAKBKdXiQhFeihx9HeuOv5cFe8K7H2O+GFUXy4ThF1X6ZGjCrig@mail.gmail.com>
 <093a45d0-da0b-44d1-902e-730eede80112@suse.com>
In-Reply-To: <093a45d0-da0b-44d1-902e-730eede80112@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Mon, 10 Jun 2024 14:21:19 +0200
Message-ID: <CAKBKdXjWmVJtCNWsGHnM_9TT2BZ6S=qyxYbYS7hsLWqb4vR16w@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 1:21=E2=80=AFPM Jan Beulich <jbeulich@suse.com> wro=
te:
>
> On 10.06.2024 12:34, Petr Bene=C5=A1 wrote:
> > On Mon, Jun 10, 2024 at 12:16=E2=80=AFPM Jan Beulich <jbeulich@suse.com=
> wrote:
> >>
> >> On 10.06.2024 11:10, Petr Bene=C5=A1 wrote:
> >>> On Mon, Jun 10, 2024 at 9:30=E2=80=AFAM Jan Beulich <jbeulich@suse.co=
m> wrote:
> >>>>
> >>>> On 09.06.2024 01:06, Petr Bene=C5=A1 wrote:
> >>>>> On Thu, Jun 6, 2024 at 9:24=E2=80=AFAM Jan Beulich <jbeulich@suse.c=
om> wrote:
> >>>>>>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
> >>>>>>>      struct p2m_domain *hostp2m =3D p2m_get_hostp2m(d);
> >>>>>>>
> >>>>>>>      mm_lock_init(&d->arch.altp2m_list_lock);
> >>>>>>> -    for ( i =3D 0; i < MAX_ALTP2M; i++ )
> >>>>>>> +    d->arch.altp2m_p2m =3D xzalloc_array(struct p2m_domain *, d-=
>nr_altp2m);
> >>>>>>> +
> >>>>>>> +    if ( !d->arch.altp2m_p2m )
> >>>>>>> +        return -ENOMEM;
> >>>>>>
> >>>>>> This isn't really needed, is it? Both ...
> >>>>>>
> >>>>>>> +    for ( i =3D 0; i < d->nr_altp2m; i++ )
> >>>>>>
> >>>>>> ... this and ...
> >>>>>>
> >>>>>>>      {
> >>>>>>>          d->arch.altp2m_p2m[i] =3D p2m =3D p2m_init_one(d);
> >>>>>>>          if ( p2m =3D=3D NULL )
> >>>>>>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>>>>>      unsigned int i;
> >>>>>>>      struct p2m_domain *p2m;
> >>>>>>>
> >>>>>>> -    for ( i =3D 0; i < MAX_ALTP2M; i++ )
> >>>>>>> +    if ( !d->arch.altp2m_p2m )
> >>>>>>> +        return;
> >>>>
> >>>> I'm sorry, the question was meant to be on this if() instead.
> >>>>
> >>>>>>> +    for ( i =3D 0; i < d->nr_altp2m; i++ )
> >>>>>>>      {
> >>>>>>>          if ( !d->arch.altp2m_p2m[i] )
> >>>>>>>              continue;
> >>>>>>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
> >>>>>>>          d->arch.altp2m_p2m[i] =3D NULL;
> >>>>>>>          p2m_free_one(p2m);
> >>>>>>>      }
> >>>>>>> +
> >>>>>>> +    XFREE(d->arch.altp2m_p2m);
> >>>>>>>  }
> >>>>>>
> >>>>>> ... this ought to be fine without?
> >>>>>
> >>>>> Could you, please, elaborate? I honestly don't know what you mean h=
ere
> >>>>> (by "this isn't needed").
> >>>>
> >>>> I hope the above correction is enough?
> >>>
> >>> I'm sorry, but not really? I feel like I'm blind but I can't see
> >>> anything I could remove without causing (or risking) crash.
> >>
> >> The loop is going to do nothing when d->nr_altp2m =3D=3D 0, and the XF=
REE() is
> >> going to do nothing when d->arch.altp2m_p2m =3D=3D NULL. Hence what do=
es the
> >> if() guard against? IOW what possible crashes are you seeing that I do=
n't
> >> see?
> >
> > I see now. I was seeing ghosts - my thinking was that if
> > p2m_init_altp2m fails to allocate altp2m_p2m, it will call
> > p2m_teardown_altp2m - which, without the if(), would start to iterate
> > through an array that is not allocated. It doesn't happen, it just
> > returns -ENOMEM.
> >
> > So to reiterate:
> >
> >     if ( !d->arch.altp2m_p2m )
> >         return;
> >
> > ... are we talking that this condition inside p2m_teardown_altp2m
> > isn't needed?
>
> I'm not sure about "isn't" vs "shouldn't". The call from p2m_final_teardo=
wn()
> also needs to remain safe to make. Which may require that upon allocation
> failure you zap d->nr_altp2m. Or which alternatively may mean that the if=
()
> actually needs to stay.

True, p2m_final_teardown is called whenever p2m_init (and by extension
p2m_init_altp2m) fails. Which means that condition must stay - or, as
you suggested, reset nr_altp2m to 0.

I would rather leave the code as is. Modifying nr_altp2m would (in my
opinion) feel semantically incorrect, as that value should behave more
or less as const, i.e. be initialized once and not changed afterwards.
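To make the trade-off concrete, here is a minimal self-contained sketch of the guarded teardown, with plain malloc/free standing in for Xen's allocators; the struct and function names only mirror the thread, not the real hypervisor code:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the Xen structures under discussion --
 * names mirror the thread, not the real hypervisor code. */
struct p2m_domain { int dummy; };

struct domain {
    unsigned int nr_altp2m;
    struct p2m_domain **altp2m_p2m;
};

/* Teardown must remain safe even when the array was never allocated,
 * e.g. when reached via the final-teardown path after an allocation
 * failure -- hence the early-return guard under discussion. */
void p2m_teardown_altp2m(struct domain *d)
{
    unsigned int i;

    if ( !d->altp2m_p2m )
        return;

    for ( i = 0; i < d->nr_altp2m; i++ )
    {
        if ( !d->altp2m_p2m[i] )
            continue;
        free(d->altp2m_p2m[i]);        /* p2m_free_one() stand-in */
        d->altp2m_p2m[i] = NULL;
    }

    free(d->altp2m_p2m);               /* XFREE() stand-in */
    d->altp2m_p2m = NULL;
}
```

Without the guard, a NULL array with nr_altp2m > 0 would be dereferenced in the loop; dropping the guard is therefore only safe if nr_altp2m is also zapped on allocation failure, which is exactly the alternative raised in the thread.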

> > Or is there anything else?
>
> There was also the question of whether to guard the allocation, to avoid a
> degenerate xmalloc_array() of zero size. Yet in the interest of avoiding
> not strictly necessary conditionals, that may well want to remain as is.

True, nr_altp2m == 0 would mean a zero-sized allocation, as
p2m_init_altp2m is called unconditionally (when booted with
altp2m=1). Is that a problem, though?
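For context, this is what the "degenerate" zero-size case looks like in plain C. This is illustrative only; Xen's xmalloc_array() is its own allocator and may behave differently:

```c
#include <assert.h>
#include <stdlib.h>

/* A zero-size array allocation in plain C: calloc(0, size) may return
 * NULL or a unique pointer that is only valid to pass to free().
 * Either way, iterating over zero elements and freeing the result
 * stays well-defined, which is why guarding the call buys little. */
void **alloc_views(unsigned int nr)
{
    return calloc(nr, sizeof(void *));
}
```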

P.


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 12:28:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 12:28:43 +0000
MIME-Version: 1.0
References: <cover.1717356829.git.w1benny@gmail.com> <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com> <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
 <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com> <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
 <8961cf72-4eeb-4c47-9723-35da3e47d4d2@suse.com> <CAKBKdXiQhFeihx9HeuOv5cFe8K7H2O+GFUXy4ThF1X6ZGjCrig@mail.gmail.com>
 <093a45d0-da0b-44d1-902e-730eede80112@suse.com> <CAKBKdXjWmVJtCNWsGHnM_9TT2BZ6S=qyxYbYS7hsLWqb4vR16w@mail.gmail.com>
In-Reply-To: <CAKBKdXjWmVJtCNWsGHnM_9TT2BZ6S=qyxYbYS7hsLWqb4vR16w@mail.gmail.com>
From: Petr Beneš <w1benny@gmail.com>
Date: Mon, 10 Jun 2024 14:28:24 +0200
Message-ID: <CAKBKdXhb=AnPN=N_HvCkdjT7EhEYnEuNE6HwKF7fJau+5byJCA@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Roger Pau Monné <roger.pau@citrix.com>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"

> (when booted with altp2m=1)

Sorry, I forgot to remove this statement before sending the email;
it's not really true.

I wanted to add that, as I wrote in a previous email exchange, altp2m
should ideally be initialized only when it's requested - as opposed to
the current situation, where it's initialized in domain_create.
However, that is more suited for 4.20.
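A rough sketch of that on-demand shape (hypothetical names, not the actual Xen interfaces; real code would sit behind the altp2m enable path):

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical lazy-initialization shape: allocate the view array on
 * first request instead of unconditionally in domain_create.  All
 * names here are illustrative. */
struct domain {
    unsigned int nr_altp2m;
    void **altp2m_p2m;
};

int altp2m_ensure_init(struct domain *d)
{
    if ( d->altp2m_p2m )      /* already set up: nothing to do */
        return 0;

    d->altp2m_p2m = calloc(d->nr_altp2m, sizeof(*d->altp2m_p2m));
    return d->altp2m_p2m ? 0 : -1;   /* -ENOMEM in Xen terms */
}
```

Being idempotent, such a helper could be called from every altp2m entry point, so domains that never use altp2m pay no allocation cost.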

P.


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 13:32:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 13:32:55 +0000
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: xen-devel@lists.xenproject.org
Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v1] automation: add a test for HVM domU on PVH dom0
Date: Mon, 10 Jun 2024 15:32:09 +0200
Message-ID: <20240610133210.724346-1-marmarek@invisiblethingslab.com>
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This tests whether QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
in the kernel, so do that too.

Add it to both x86 runners, similar to the PVH domU test.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
---
Requires rebuilding test-artifacts/kernel/6.1.19

I'm actually not sure whether it makes sense to test HVM domU on both
runners, when the PVH domU variant is already tested on both. Are there any
differences between Intel and AMD relevant for QEMU in dom0?
---
 automation/gitlab-ci/test.yaml                | 16 ++++++++++++++++
 automation/scripts/qubes-x86-64.sh            | 19 ++++++++++++++++---
 .../tests-artifacts/kernel/6.1.19.dockerfile  |  1 +
 3 files changed, 33 insertions(+), 3 deletions(-)

diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
index 902139e14893..898d2adc8c5b 100644
--- a/automation/gitlab-ci/test.yaml
+++ b/automation/gitlab-ci/test.yaml
@@ -175,6 +175,14 @@ adl-smoke-x86-64-dom0pvh-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.18-gcc-debug
 
+adl-smoke-x86-64-dom0pvh-hvm-gcc-debug:
+  extends: .adl-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh dom0pvh-hvm 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.18-gcc-debug
+
 adl-suspend-x86-64-gcc-debug:
   extends: .adl-x86-64
   script:
@@ -215,6 +223,14 @@ zen3p-smoke-x86-64-dom0pvh-gcc-debug:
     - *x86-64-test-needs
     - alpine-3.18-gcc-debug
 
+zen3p-smoke-x86-64-dom0pvh-hvm-gcc-debug:
+  extends: .zen3p-x86-64
+  script:
+    - ./automation/scripts/qubes-x86-64.sh dom0pvh-hvm 2>&1 | tee ${LOGFILE}
+  needs:
+    - *x86-64-test-needs
+    - alpine-3.18-gcc-debug
+
 zen3p-pci-hvm-x86-64-gcc-debug:
   extends: .zen3p-x86-64
   script:
diff --git a/automation/scripts/qubes-x86-64.sh b/automation/scripts/qubes-x86-64.sh
index 087ab2dc171c..816c5dd9aa77 100755
--- a/automation/scripts/qubes-x86-64.sh
+++ b/automation/scripts/qubes-x86-64.sh
@@ -19,8 +19,8 @@ vif = [ "bridge=xenbr0", ]
 disk = [ ]
 '
 
-### test: smoke test & smoke test PVH
-if [ -z "${test_variant}" ] || [ "${test_variant}" = "dom0pvh" ]; then
+### test: smoke test & smoke test PVH & smoke test HVM
+if [ -z "${test_variant}" ] || [ "${test_variant}" = "dom0pvh" ] || [ "${test_variant}" = "dom0pvh-hvm" ]; then
     passed="ping test passed"
     domU_check="
 ifconfig eth0 192.168.0.2
@@ -37,10 +37,23 @@ done
 set -x
 echo \"${passed}\"
 "
-if [ "${test_variant}" = "dom0pvh" ]; then
+if [ "${test_variant}" = "dom0pvh" ] || [ "${test_variant}" = "dom0pvh-hvm" ]; then
     extra_xen_opts="dom0=pvh"
 fi
 
+if [ "${test_variant}" = "dom0pvh-hvm" ]; then
+    domU_config='
+type = "hvm"
+name = "domU"
+kernel = "/boot/vmlinuz"
+ramdisk = "/boot/initrd-domU"
+extra = "root=/dev/ram0 console=hvc0"
+memory = 512
+vif = [ "bridge=xenbr0", ]
+disk = [ ]
+'
+fi
+
 ### test: S3
 elif [ "${test_variant}" = "s3" ]; then
     passed="suspend test passed"
diff --git a/automation/tests-artifacts/kernel/6.1.19.dockerfile b/automation/tests-artifacts/kernel/6.1.19.dockerfile
index 3a4096780d20..021bde26c790 100644
--- a/automation/tests-artifacts/kernel/6.1.19.dockerfile
+++ b/automation/tests-artifacts/kernel/6.1.19.dockerfile
@@ -32,6 +32,7 @@ RUN curl -fsSLO https://cdn.kernel.org/pub/linux/kernel/v6.x/linux-"$LINUX_VERSI
     make xen.config && \
     scripts/config --enable BRIDGE && \
     scripts/config --enable IGC && \
+    scripts/config --enable TUN && \
     cp .config .config.orig && \
     cat .config.orig | grep XEN | grep =m |sed 's/=m/=y/g' >> .config && \
     make -j$(nproc) bzImage && \
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 13:37:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 13:37:07 +0000
Message-ID: <f7228594-fa64-4fd8-b55b-506d004b73cb@suse.com>
Date: Mon, 10 Jun 2024 15:37:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH RFC] Arm64: amend "xen/arm64: head: Add missing code symbol
 annotations"
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

While the change[1] hasn't gone in yet, the intention is for the ELF
metadata annotations from xen/linkage.h to also effect honoring of
CONFIG_CC_SPLIT_SECTIONS. In code that is placement / ordering sensitive,
these annotations therefore need to be used with some care.

[1] https://lists.xen.org/archives/html/xen-devel/2024-02/msg00470.html

Fixes: fba250ae604e ("xen/arm64: head: Add missing code symbol annotations")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
An alternative would be to use LABEL{,_LOCAL}() instead of FUNC{,_LOCAL}().
That would avoid the need for any override, but would also lose the type
information. The question is whether the annotated ranges really are
"functions" in whichever wide or narrow sense.

The Fixes: tag is slightly questionable, seeing that the patch actually
adding section switching didn't go in yet; the issue right now is a
latent one only. Whichever form of an adjustment we'll settle on will
likely want to become part of [1] above. Hence the RFC.

--- a/xen/arch/arm/arm64/head.S
+++ b/xen/arch/arm/arm64/head.S
@@ -28,6 +28,14 @@
 #include <asm/arm64/efibind.h>
 #endif
 
+/*
+ * Code here is, at least in part, ordering sensitive.  Annotations
+ * from xen/linkage.h therefore may not switch sections (honoring
+ * CONFIG_CC_SPLIT_SECTIONS).  Override the respective macro.
+ */
+#undef SYM_PUSH_SECTION
+#define SYM_PUSH_SECTION(name, attr)
+
 #define __HEAD_FLAG_PAGE_SIZE   ((PAGE_SHIFT - 10) / 2)
 
 #define __HEAD_FLAG_PHYS_BASE   1


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:20:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:20:57 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 0/7] x86/irq: fixes for CPU hot{,un}plug
Date: Mon, 10 Jun 2024 16:20:36 +0200
Message-ID: <20240610142043.11924-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hello,

The following series aims to fix interrupt handling when doing CPU
plug/unplug operations.  Without this series, running:

cpus=`xl info max_cpu_id`
while [ 1 ]; do
    for i in `seq 1 $cpus`; do
        xen-hptool cpu-offline $i;
        xen-hptool cpu-online $i;
    done
done

Quite quickly results in interrupts getting lost and "No irq handler for
vector" messages on the Xen console.  Drivers in dom0 also start getting
interrupt timeouts and the system becomes unusable.

After applying the series, running the loop overnight still results in a
fully usable system: no "No irq handler for vector" messages at all, and no
interrupt losses reported by dom0.  Tested with x2apic-mode={mixed,cluster}.

I've attempted to document all the code as well as I could; interrupt
handling has some unexpected corner cases that are hard to diagnose and
reason about.

I'm currently also doing some extra testing with XenRT in case I've
missed something.

Thanks, Roger.

Roger Pau Monne (7):
  x86/smp: do not use shorthand IPI destinations in CPU hot{,un}plug
    contexts
  x86/irq: describe how the interrupt CPU movement works
  x86/irq: limit interrupt movement done by fixup_irqs()
  x86/irq: restrict CPU movement in set_desc_affinity()
  x86/irq: deal with old_cpu_mask for interrupts in movement in
    fixup_irqs()
  x86/irq: handle moving interrupts in _assign_irq_vector()
  x86/irq: forward pending interrupts to new destination in fixup_irqs()

 xen/arch/x86/include/asm/apic.h |   5 +
 xen/arch/x86/include/asm/irq.h  |  40 ++++++-
 xen/arch/x86/irq.c              | 197 ++++++++++++++++++++++++--------
 xen/arch/x86/smp.c              |   2 +-
 xen/common/cpu.c                |   5 +
 xen/include/xen/cpu.h           |  10 ++
 xen/include/xen/rwlock.h        |   2 +
 7 files changed, 214 insertions(+), 47 deletions(-)

-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:20:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:20:57 +0000
X-Inumbo-ID: a585f964-2734-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718029253; x=1718634053; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xj/7AKm3tnuoONAQVj+xWnm7kpsb26VRcfyAbHMA6OE=;
        b=Cs7+gK0Blpb8nX19OWjwm8ImsRVYDnuiGp2kKz7Qiexg0LoHwLKk8Fb4nOougrMOSw
         VWVk80ITKKhF7m5xCTqdTk+6RYr4+e+GG7hGRy1dJhhmtUoSHkMzLCSAUQS1RFJvRbx3
         nnw3BHpES43AeJFt5hr4/2K3jgn+AwV6ko3mc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718029253; x=1718634053;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=xj/7AKm3tnuoONAQVj+xWnm7kpsb26VRcfyAbHMA6OE=;
        b=H58slk8sEYcoxJYNpS8KGFsNgVn9eyjCXyboyALtoN/USdVR+Aeqo5roXNlNubkvKw
         0PlzhtdXOpcpKv52pAcPYDnmVuuFvd0bR9/L2C93kY7jADjqolhIZlUZNtwDSEyNzEii
         65wB6cDtAtE+OsNc2PW8hELTOgOUBGzf6XrHozSud4fwgmiaJz6qPhfXGjhSGSkBB418
         oufiG5KaXepdMyGhBfOdRlJPWuVr1ZgTWAY9rlvSBksKz9JSnv14lnxmQEReIYF+20NX
         EebvEFHIXZjQyWfvWQ+VcusKMkgcLsttLLrnOpqZLcUXRPMZbYbXjooMd8i4LTwM6HIU
         dteA==
X-Gm-Message-State: AOJu0YyAok5BY+t47TCMHTL6lEiUgyVzYGZIBDPK1sVzmrDEuHDXYQqK
	4FzDnCKoAzQlk/4Wh87eRl29trDeJNlI2JIRljqQab70NigHcmhoOO1DgFdWSeEOo21oyvKtUXG
	b
X-Google-Smtp-Source: AGHT+IE7KOuxwbv+yqrmisaABSEWMWauOW82L28C76Zoz4VFVCV+s9AHsB515/AQkXeHu+w19TE3Rw==
X-Received: by 2002:ac8:5703:0:b0:440:a22f:ac61 with SMTP id d75a77b69052e-440a22fadf0mr44014571cf.32.1718029253243;
        Mon, 10 Jun 2024 07:20:53 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 3/7] x86/irq: limit interrupt movement done by fixup_irqs()
Date: Mon, 10 Jun 2024 16:20:39 +0200
Message-ID: <20240610142043.11924-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240610142043.11924-1-roger.pau@citrix.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The current check used in fixup_irqs() to decide whether to move interrupts
around is based on the affinity mask, but such a mask can have all bits set,
and hence is unlikely to be a subset of the input mask.  For example, if an
interrupt has an affinity mask of all 1s, any input to fixup_irqs() that's not
a fully set CPU mask would cause that interrupt to be shuffled around
unconditionally.

What fixup_irqs() cares about is evacuating interrupts from CPUs not set in
the input CPU mask, and for that purpose it should check whether the interrupt
is assigned to a CPU not present in the input mask.  Assume that
->arch.cpu_mask is a subset of the ->affinity mask, and keep the current logic
that resets the ->affinity mask if the interrupt has to be shuffled around.

Doing the affinity movement based on ->arch.cpu_mask requires removing the
special handling of ->arch.cpu_mask done for high priority vectors, otherwise
the adjustment done to cpu_mask makes them always skip the interrupt
movement.

While there, also adjust the comment describing the purpose of fixup_irqs().

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Adjust handling of high priority vectors.

Changes since v0 (outside the series):
 - Do not AND ->arch.cpu_mask with cpu_online_map.
 - Keep using cpumask_subset().
 - Integrate into bigger series.
---
 xen/arch/x86/include/asm/irq.h |  2 +-
 xen/arch/x86/irq.c             | 21 +++++++++++----------
 2 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/include/asm/irq.h b/xen/arch/x86/include/asm/irq.h
index 94f634ce10a1..5f445299be61 100644
--- a/xen/arch/x86/include/asm/irq.h
+++ b/xen/arch/x86/include/asm/irq.h
@@ -168,7 +168,7 @@ void free_domain_pirqs(struct domain *d);
 int map_domain_emuirq_pirq(struct domain *d, int pirq, int emuirq);
 int unmap_domain_pirq_emuirq(struct domain *d, int pirq);
 
-/* Reset irq affinities to match the given CPU mask. */
+/* Evacuate interrupts assigned to CPUs not present in the input CPU mask. */
 void fixup_irqs(const cpumask_t *mask, bool verbose);
 void fixup_eoi(void);
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index d101ffeaf9f3..263e502bc0f6 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2516,7 +2516,7 @@ static int __init cf_check setup_dump_irqs(void)
 }
 __initcall(setup_dump_irqs);
 
-/* Reset irq affinities to match the given CPU mask. */
+/* Evacuate interrupts assigned to CPUs not present in the input CPU mask. */
 void fixup_irqs(const cpumask_t *mask, bool verbose)
 {
     unsigned int irq;
@@ -2540,19 +2540,15 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
         vector = irq_to_vector(irq);
         if ( vector >= FIRST_HIPRIORITY_VECTOR &&
-             vector <= LAST_HIPRIORITY_VECTOR )
+             vector <= LAST_HIPRIORITY_VECTOR &&
+             desc->handler == &no_irq_type )
         {
-            cpumask_and(desc->arch.cpu_mask, desc->arch.cpu_mask, mask);
-
             /*
              * This can in particular happen when parking secondary threads
              * during boot and when the serial console wants to use a PCI IRQ.
              */
-            if ( desc->handler == &no_irq_type )
-            {
-                spin_unlock(&desc->lock);
-                continue;
-            }
+            spin_unlock(&desc->lock);
+            continue;
         }
 
         if ( desc->arch.move_cleanup_count )
@@ -2573,7 +2569,12 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                                affinity);
         }
 
-        if ( !desc->action || cpumask_subset(desc->affinity, mask) )
+        /*
+         * Avoid shuffling the interrupt around as long as current target CPUs
+         * are a subset of the input mask.  What fixup_irqs() cares about is
+         * evacuating interrupts from CPUs not in the input mask.
+         */
+        if ( !desc->action || cpumask_subset(desc->arch.cpu_mask, mask) )
         {
             spin_unlock(&desc->lock);
             continue;
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:20:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:20:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737297.1143529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftG-00009Z-RR; Mon, 10 Jun 2024 14:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737297.1143529; Mon, 10 Jun 2024 14:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftG-00009S-On; Mon, 10 Jun 2024 14:20:54 +0000
Received: by outflank-mailman (input) for mailman id 737297;
 Mon, 10 Jun 2024 14:20:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGftE-00007Q-Oc
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 14:20:52 +0000
Received: from mail-qk1-x733.google.com (mail-qk1-x733.google.com
 [2607:f8b0:4864:20::733])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a35acadf-2734-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 16:20:51 +0200 (CEST)
Received: by mail-qk1-x733.google.com with SMTP id
 af79cd13be357-79767180a15so45375185a.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 07:20:51 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-795758dfe3asm127231685a.105.2024.06.10.07.20.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 07:20:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a35acadf-2734-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718029249; x=1718634049; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=EmeqCABNjDotan3Y72svsvPEn7gJhsjuiG6QHhCgPvQ=;
        b=YvJfA12OB3sFdIw1RRotS2fex80GJdQh/gtzshnOXTHsaqPhGu3KpYWccUbMOo06RZ
         PF5rKb5RGDdC4R61n3IL2MZ+CbMoEhWqU1tUmeDp0WKXudkN05o5AcNRKxLy2XKkIUgu
         NDEPXfdUUCg3Wp/fH5KSBhWYt6PY8AtfsLxbk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718029249; x=1718634049;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=EmeqCABNjDotan3Y72svsvPEn7gJhsjuiG6QHhCgPvQ=;
        b=iLMYJJPxJjepAG7OTjl31Q8V0IYwXt4W4nHBKbiTGytoZa//K4C8OFquGbEkJRK9WV
         TILdQTZdBdSWyb3IDSzaMm2dHF/JYJGS+1MohQNrgGab2Ouy+VyTRzu3CiccYH3tGAgo
         ovvtfemPIWurJnaQS6iqMF1tF88pxP6+WXVXAI3sHk/3ILl5wH51wj8AEZqOGWx8DkM/
         mZRcnMku3Oc+6OI29P8AgBFusJiF/yu0tTck/yukhra39XrK+0Eyk4Ht5npAJaTfBLqC
         bHprzqWIY9DEer3qz0u8D2JIeQ0FaN7AqjauHL9ra+4mPKHz3HAXB6YU66Cq2Cwn4XxP
         EeyA==
X-Gm-Message-State: AOJu0Yz6RmjOt/FSJ1cWLL+8QEc5IOn5MdHDiUe33UzynUu4dqc6udbn
	RiV2/zXd6Un9+TGqmbBt6db7GeIfCOCm9/wiE2WVcfjnz94Y8tkw6ILsOtqy7OdVtKK6z3wU6MM
	5
X-Google-Smtp-Source: AGHT+IFVJ6qqqsqLC04Mqokc1ZPW7ysMrqYU9cIPAJV1KObi0//GhIoewHRe9csIUHpaXS5nGBBDKA==
X-Received: by 2002:a05:620a:3913:b0:795:f319:e4ae with SMTP id af79cd13be357-795f319f448mr418663485a.26.1718029248808;
        Mon, 10 Jun 2024 07:20:48 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 1/7] x86/smp: do not use shorthand IPI destinations in CPU hot{,un}plug contexts
Date: Mon, 10 Jun 2024 16:20:37 +0200
Message-ID: <20240610142043.11924-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240610142043.11924-1-roger.pau@citrix.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Due to the current rwlock logic, if the CPU calling get_cpu_maps() does so from
a cpu_hotplug_{begin,done}() region the function will still return success,
because a CPU taking the rwlock in read mode after having taken it in write
mode is allowed.  Such behavior however defeats the purpose of get_cpu_maps(),
as it should always return false when called while a CPU hot{,un}plug
operation is in progress.  Otherwise the logic in send_IPI_mask() is wrong, as it could
decide to use the shorthand even when a CPU operation is in progress.

Introduce a new helper to detect whether the current caller is within a
cpu_hotplug_{begin,done}() region and use it in send_IPI_mask() to restrict
shorthand usage.

Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Modify send_IPI_mask() to detect CPU hotplug context.
---
 xen/arch/x86/smp.c       |  2 +-
 xen/common/cpu.c         |  5 +++++
 xen/include/xen/cpu.h    | 10 ++++++++++
 xen/include/xen/rwlock.h |  2 ++
 4 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
index 7443ad20335e..04c6a0572319 100644
--- a/xen/arch/x86/smp.c
+++ b/xen/arch/x86/smp.c
@@ -88,7 +88,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
      * the system have been accounted for.
      */
     if ( system_state > SYS_STATE_smp_boot &&
-         !unaccounted_cpus && !disabled_cpus &&
+         !unaccounted_cpus && !disabled_cpus && !cpu_in_hotplug_context() &&
          /* NB: get_cpu_maps lock requires enabled interrupts. */
          local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
          (park_offline_cpus ||
diff --git a/xen/common/cpu.c b/xen/common/cpu.c
index 8709db4d2957..6e35b114c080 100644
--- a/xen/common/cpu.c
+++ b/xen/common/cpu.c
@@ -68,6 +68,11 @@ void cpu_hotplug_done(void)
     write_unlock(&cpu_add_remove_lock);
 }
 
+bool cpu_in_hotplug_context(void)
+{
+    return rw_is_write_locked_by_me(&cpu_add_remove_lock);
+}
+
 static NOTIFIER_HEAD(cpu_chain);
 
 void __init register_cpu_notifier(struct notifier_block *nb)
diff --git a/xen/include/xen/cpu.h b/xen/include/xen/cpu.h
index e1d4eb59675c..6bf578675008 100644
--- a/xen/include/xen/cpu.h
+++ b/xen/include/xen/cpu.h
@@ -13,6 +13,16 @@ void put_cpu_maps(void);
 void cpu_hotplug_begin(void);
 void cpu_hotplug_done(void);
 
+/*
+ * Returns true when the caller CPU is within a cpu_hotplug_{begin,done}()
+ * region.
+ *
+ * This is required to safely identify hotplug contexts, as get_cpu_maps()
+ * would otherwise succeed because a caller holding the lock in write mode is
+ * allowed to acquire the same lock in read mode.
+ */
+bool cpu_in_hotplug_context(void);
+
 /* Receive notification of CPU hotplug events. */
 void register_cpu_notifier(struct notifier_block *nb);
 
diff --git a/xen/include/xen/rwlock.h b/xen/include/xen/rwlock.h
index a2e98cad343e..4e7802821859 100644
--- a/xen/include/xen/rwlock.h
+++ b/xen/include/xen/rwlock.h
@@ -316,6 +316,8 @@ static always_inline void write_lock_irq(rwlock_t *l)
 
 #define rw_is_locked(l)               _rw_is_locked(l)
 #define rw_is_write_locked(l)         _rw_is_write_locked(l)
+#define rw_is_write_locked_by_me(l) \
+    lock_evaluate_nospec(_is_write_locked_by_me(atomic_read(&(l)->cnts)))
 
 
 typedef struct percpu_rwlock percpu_rwlock_t;
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:20:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:20:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737298.1143537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftH-0000D9-6p; Mon, 10 Jun 2024 14:20:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737298.1143537; Mon, 10 Jun 2024 14:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftG-0000CU-Vc; Mon, 10 Jun 2024 14:20:54 +0000
Received: by outflank-mailman (input) for mailman id 737298;
 Mon, 10 Jun 2024 14:20:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGftF-00007Q-DN
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 14:20:53 +0000
Received: from mail-yw1-x1135.google.com (mail-yw1-x1135.google.com
 [2607:f8b0:4864:20::1135])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4542401-2734-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 16:20:52 +0200 (CEST)
Received: by mail-yw1-x1135.google.com with SMTP id
 00721157ae682-62cf4d32dceso296927b3.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 07:20:52 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b07b4b999csm12151576d6.63.2024.06.10.07.20.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 07:20:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4542401-2734-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718029251; x=1718634051; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DO8Ho59BP8Cja4wQL980lT7rQ+BqnpHm8XHoOOnAnmY=;
        b=BgIP2REMw6cxRnkppT294rIeiTMVH57PORLjt6+oxx5ictq+LSLyFEr+g8rlq+MvME
         RsMOIYregntOZMHwtjgMIBq24z7otUeL9K+PDWOOPNX+U3Z68Y4cRfcMkhLImCAW5INw
         1zmFxCC0KcrOVtsFmsu+FPeFwJNdS2DKnhR/w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718029251; x=1718634051;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=DO8Ho59BP8Cja4wQL980lT7rQ+BqnpHm8XHoOOnAnmY=;
        b=NEyOjF/nOa78oNfx+amOXQak2lWnKH+eD+IL9AI/3uKm/Xwwom7pTCgMVZw222QUD9
         zQ6xhZVKhhN1iW4V3ujl3Kyp6+4wGotq3gHpndZsTe5ahDEmP+c+6/etsiuMysyoTFR+
         g8QyXF/3kH1g1JcQHwPoo7Dq0BVlm53kc7nJcF1qwPeA/3kZhYKHw5Tgt7uQMxbPx2CQ
         XsyZPXDoBvs2lgPSyvKV3FMeyW7hUOX6nANC+jAAGtAdNfbveqf6frK+15kLRTPj8GXn
         4ykHYMzuYSqm0vocs+wkwY8xNdgGFSU1mKwdP2Gf9rjydYApTYMNyFquynV66XhlNMpw
         TGgw==
X-Gm-Message-State: AOJu0YypvDd6RBsaHZ5aw7u9pJwOqQKTHndjbtwPKYLI4hRkjfNchcmk
	fnIcJ7pBETNJmnA4I9DG4HdELbqmAqWvIHZOYDqUoaf5dq/Zd5JzerLVf3IjcPQJeceqbyE7rV7
	u
X-Google-Smtp-Source: AGHT+IEoMYN3SXv7692z9fWNkS/rCmIyG+AjwRjuRyQB6tQ2K4ivPZvs/F+EjKLE4LDTVnXU8Z9psw==
X-Received: by 2002:a0d:cb0a:0:b0:61e:a3a:2538 with SMTP id 00721157ae682-62cd55d157emr87669307b3.18.1718029251068;
        Mon, 10 Jun 2024 07:20:51 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 2/7] x86/irq: describe how the interrupt CPU movement works
Date: Mon, 10 Jun 2024 16:20:38 +0200
Message-ID: <20240610142043.11924-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240610142043.11924-1-roger.pau@citrix.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The logic to move interrupts across CPUs is complex; attempt to provide a
comment that describes the expected behavior, so that users of the interrupt
system have more context about the usage of the arch_irq_desc structure
fields.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Mention the logic involved in IRQ_MOVE_PENDING and the reduction of
   old_cpu_mask.
---
 xen/arch/x86/include/asm/irq.h | 38 ++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/xen/arch/x86/include/asm/irq.h b/xen/arch/x86/include/asm/irq.h
index 413994d2133b..94f634ce10a1 100644
--- a/xen/arch/x86/include/asm/irq.h
+++ b/xen/arch/x86/include/asm/irq.h
@@ -28,6 +28,44 @@ typedef struct {
 
 struct irq_desc;
 
+/*
+ * Xen logic for moving interrupts around CPUs allows manipulating interrupts
+ * that target remote CPUs.  The logic to move an interrupt from CPU(s) is as
+ * follows:
+ *
+ * 1. irq_set_affinity() is called with the new destination mask, such mask is
+ *    copied into pending_mask and IRQ_MOVE_PENDING is set in status to notice
+ *    an affinity change has been requested.
+ * 2. An interrupt acked with the IRQ_MOVE_PENDING will trigger the logic to
+ *    migrate it to a destination in pending_mask as long as the mask contains
+ *    any online CPUs.
+ * 3. cpu_mask and vector is copied to old_cpu_mask and old_vector.
+ * 4. New cpu_mask and vector are set, vector is setup at the new destination.
+ * 5. move_in_progress is set.
+ * 6. Interrupt source is updated to target new CPU and vector.
+ * 7. Interrupts arriving at old_cpu_mask are processed normally.
+ * 8. When the first interrupt is delivered at the new destination (cpu_mask) as
+ *    part of acking the interrupt the cleanup of the old destination(s) is
+ *    engaged.  move_in_progress is cleared and old_cpu_mask is
+ *    reduced to the online CPUs.  If the result is empty the old vector is
+ *    released.  Otherwise move_cleanup_count is set to the weight of online
+ *    CPUs in old_cpu_mask and IRQ_MOVE_CLEANUP_VECTOR is sent to them.
+ * 9. When receiving IRQ_MOVE_CLEANUP_VECTOR CPUs in old_cpu_mask clean the
+ *    vector entry and decrease the count in move_cleanup_count.  The CPU that
+ *    sets move_cleanup_count to 0 releases the vector.
+ *
+ * Note that when interrupt movement (either move_in_progress or
+ * move_cleanup_count set) is in progress it's not possible to move the
+ * interrupt to yet a different CPU.
+ *
+ * Interrupt movements done by fixup_irqs() skip setting IRQ_MOVE_PENDING and
+ * pending_mask as the movement must be performed right away, and so start
+ * directly from step 3.
+ *
+ * By keeping the vector in the old CPU(s) configured until the interrupt is
+ * acked on the new destination Xen allows draining any pending interrupts at
+ * the old destinations.
+ */
 struct arch_irq_desc {
         s16 vector;                  /* vector itself is only 8 bits, */
         s16 old_vector;              /* but we use -1 for unassigned  */
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:20:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737300.1143560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftL-0000x5-HD; Mon, 10 Jun 2024 14:20:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737300.1143560; Mon, 10 Jun 2024 14:20:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftL-0000wy-E5; Mon, 10 Jun 2024 14:20:59 +0000
Received: by outflank-mailman (input) for mailman id 737300;
 Mon, 10 Jun 2024 14:20:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGftK-0008MI-O1
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 14:20:58 +0000
Received: from mail-qk1-x734.google.com (mail-qk1-x734.google.com
 [2607:f8b0:4864:20::734])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a6f335bc-2734-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 16:20:57 +0200 (CEST)
Received: by mail-qk1-x734.google.com with SMTP id
 af79cd13be357-7952b60b4d7so300652285a.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 07:20:57 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b0822399ccsm6739426d6.123.2024.06.10.07.20.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 07:20:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6f335bc-2734-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718029255; x=1718634055; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GGTwxJouy4hdHEkmRULV0M0hn4a97amWG2LKmAA4/LI=;
        b=gdPTr4yNpyzE5fZUnwiRORL5A1efOheLMHHR0UTvaLk9Ph0eCTVj/Z/WqhFkfubKrD
         aluOFtmfm0P9hP7rvGy//Rwqq4pjVK/S8F1SKTwWf9ZEhUV2zpBNyA4VaXJTYHID1CJB
         sXzOu56ykRSeU4qSAmYNbV8gPvS/mGUc1mxvU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718029255; x=1718634055;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=GGTwxJouy4hdHEkmRULV0M0hn4a97amWG2LKmAA4/LI=;
        b=ndv6p356hElMnpJE/XKF9pC53yvbGjBEc/0tL2ch23rybEc3Sk+i50DrUZCExHwOtI
         17OK9OKuSrG4XvgttLcsl6UTyqWyD5KVyQtFzme6XOr5UmVkXfduK8ryPfxzrLj4aS9k
         DeaXfW0U0SKtFmLhjVoCV+uUpArX9OiNAcZuYjMWxx3s7L6xro5brSiTLAmNx6xBQZM4
         joKUgJSH31zrz94HFlzXguRTavbkXVcxl8fpz3E7hHvidOla7mPssT/9iGxD+TmWqkkl
         O/qDv30UU4DNFUV+bgp0a6nMDVQm+bk3wOR2wc6oT3B5hKEjGjR/kdtP9AntbO+1HfT6
         srVQ==
X-Gm-Message-State: AOJu0Yx9wtC9F5aWJKcgH0Lx2RD0wnjvBax8q7/kGmFVdjAqZHw8VVfj
	HgYCf7uBl6adFnbt68H88XT6IXWq3uxte6hFpFWj4yx6fbwEyh6LbCwrrmOGgprvmVp75f2cH0C
	D
X-Google-Smtp-Source: AGHT+IFxyBZcjImT23c50YxZ3XhRQ3nOFbdcVSIf3HtTe5DOm8honhk2yNmjD3TfewFRcmiHX1JqGA==
X-Received: by 2002:a05:6214:4520:b0:6af:2344:5811 with SMTP id 6a1803df08f44-6b059f93d58mr116646196d6.55.1718029255616;
        Mon, 10 Jun 2024 07:20:55 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 4/7] x86/irq: restrict CPU movement in set_desc_affinity()
Date: Mon, 10 Jun 2024 16:20:40 +0200
Message-ID: <20240610142043.11924-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240610142043.11924-1-roger.pau@citrix.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

If external interrupts are using logical mode it's possible to have an overlap
between the current ->arch.cpu_mask and the provided mask (or TARGET_CPUS).  If
that's the case, avoid assigning a new vector and just move the interrupt to a
member of ->arch.cpu_mask that overlaps with the provided mask and is online.

While there, also add an extra assert to ensure the mask containing the
possible destinations is not empty before calling cpu_mask_to_apicid(), as at
that point an empty mask is not expected.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/irq.c | 34 +++++++++++++++++++++++++++-------
 1 file changed, 27 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 263e502bc0f6..306e7756112f 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -837,19 +837,38 @@ void cf_check irq_complete_move(struct irq_desc *desc)
 
 unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
 {
-    int ret;
-    unsigned long flags;
     cpumask_t dest_mask;
 
     if ( mask && !cpumask_intersects(mask, &cpu_online_map) )
         return BAD_APICID;
 
-    spin_lock_irqsave(&vector_lock, flags);
-    ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
-    spin_unlock_irqrestore(&vector_lock, flags);
+    /*
+     * mask input set can contain CPUs that are not online.  To decide whether
+     * the interrupt needs to be migrated restrict the input mask to the CPUs
+     * that are online.
+     */
+    if ( mask )
+        cpumask_and(&dest_mask, mask, &cpu_online_map);
+    else
+        cpumask_copy(&dest_mask, TARGET_CPUS);
 
-    if ( ret < 0 )
-        return BAD_APICID;
+    /*
+     * Only move the interrupt if there are no CPUs left in ->arch.cpu_mask
+     * that can handle it, otherwise just shuffle it around ->arch.cpu_mask
+     * to an available destination.
+     */
+    if ( !cpumask_intersects(desc->arch.cpu_mask, &dest_mask) )
+    {
+        int ret;
+        unsigned long flags;
+
+        spin_lock_irqsave(&vector_lock, flags);
+        ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
+        spin_unlock_irqrestore(&vector_lock, flags);
+
+        if ( ret < 0 )
+            return BAD_APICID;
+    }
 
     if ( mask )
     {
@@ -862,6 +881,7 @@ unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
         cpumask_copy(&dest_mask, desc->arch.cpu_mask);
     }
     cpumask_and(&dest_mask, &dest_mask, &cpu_online_map);
+    ASSERT(!cpumask_empty(&dest_mask));
 
     return cpu_mask_to_apicid(&dest_mask);
 }
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:21:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:21:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737301.1143570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftN-0001Ec-SF; Mon, 10 Jun 2024 14:21:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737301.1143570; Mon, 10 Jun 2024 14:21:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftN-0001EL-Of; Mon, 10 Jun 2024 14:21:01 +0000
Received: by outflank-mailman (input) for mailman id 737301;
 Mon, 10 Jun 2024 14:21:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGftL-00007Q-Vr
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 14:20:59 +0000
Received: from mail-qk1-x72c.google.com (mail-qk1-x72c.google.com
 [2607:f8b0:4864:20::72c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a841e0bc-2734-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 16:20:59 +0200 (CEST)
Received: by mail-qk1-x72c.google.com with SMTP id
 af79cd13be357-7955f3d4516so106311385a.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 07:20:59 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-440387964afsm36806921cf.0.2024.06.10.07.20.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 07:20:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a841e0bc-2734-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718029258; x=1718634058; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/CHpMOxsvIdkAbF+8l0n1eVm0ADzVdTBMt6vKxN7VAA=;
        b=hqCGMqSu/r5P9AKQyqMjcgxzD1ZcE+SqoGU2JRvcDeHtdieRa34+QZGZEzUaXHJdmb
         RPUxW9gVZW0NxdXFMkios4liiz+m2VeteqKrKmmi3QlhlLG/tGpV2l5H0YI1ZiwqFRWw
         JEghK7IpJLndxVp2OaIAaN8HZrGXfN4tBw9aY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718029258; x=1718634058;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=/CHpMOxsvIdkAbF+8l0n1eVm0ADzVdTBMt6vKxN7VAA=;
        b=cNQiqj6rqdC0Nqyr9NSJIK/8FXOE0bU3yNbHFPt5zY0JQcWSG4R+XszN3zaOj1Z/qf
         21g3dyFEJuCZuSEYYRlgdZqUwiO5YWxm+QopwNByKGIRIlcLQpzitiE9aq6D3XozjiZ2
         6dSHQkZk5PPZ3mPHtj+VhVvNNBqjivKuSACbgOtycixIQGMuJtmIC5iifE7Vbezf8GjQ
         gn87TQvIzzLhLWcLZqgnsLs7unaqEydOgzqhM+AyoycQeQ7Ap3eeSXRvo34ewlLx55ml
         0S1Zhq6GMGZEXd23eq3HYPCoLC/BMM7xqzBsyiBwS4hNKOHthVZNdBP9CQ0XD4b2KQfX
         ugJw==
X-Gm-Message-State: AOJu0YxQbLQ6ZyqF1X+0mnGWUVt1PLcq7XanajlANbyDV1db4OeT5YBN
	vc7R+EiaJee11t6C6J3UE5sb5cRAyyOtgd1QprEZHaQlQDaD3wtaejXoy9isPIbKoqQ6XfDELHd
	q
X-Google-Smtp-Source: AGHT+IEoORcnXRvpb01P4p2Pp35n1REoevTWgtTP+Wf8+LLKFZGHm5W9a3McbUnGGXRgS9uXGpQhRQ==
X-Received: by 2002:a05:620a:4149:b0:795:5c48:4271 with SMTP id af79cd13be357-7955c4843c3mr781619185a.25.1718029257749;
        Mon, 10 Jun 2024 07:20:57 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 5/7] x86/irq: deal with old_cpu_mask for interrupts in movement in fixup_irqs()
Date: Mon, 10 Jun 2024 16:20:41 +0200
Message-ID: <20240610142043.11924-6-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240610142043.11924-1-roger.pau@citrix.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Given the current logic it's possible for ->arch.old_cpu_mask to get out of
sync: if a CPU set in old_cpu_mask is offlined and then onlined again without
old_cpu_mask having been updated, the data in the mask will no longer be
accurate, as when brought back online the CPU will no longer have old_vector
configured to handle the old interrupt source.

If there's an interrupt movement in progress, and the to-be-offlined CPU
(which is the call context) is in old_cpu_mask, clear it and update the mask
so it doesn't contain stale data.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/irq.c | 24 +++++++++++++++++++++++-
 1 file changed, 23 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 306e7756112f..f07e09b63b53 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2546,7 +2546,7 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
         bool break_affinity = false, set_affinity = true;
-        unsigned int vector;
+        unsigned int vector, cpu = smp_processor_id();
         cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
@@ -2589,6 +2589,28 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                                affinity);
         }
 
+        if ( desc->arch.move_in_progress &&
+             !cpumask_test_cpu(cpu, &cpu_online_map) &&
+             cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
+        {
+            /*
+             * This CPU is going offline, remove it from ->arch.old_cpu_mask
+             * and possibly release the old vector if the old mask becomes
+             * empty.
+             *
+             * Note cleaning ->arch.old_cpu_mask is required if the CPU is
+             * brought offline and then online again, as when re-onlined the
+             * per-cpu vector table will no longer have ->arch.old_vector
+             * set up, and hence ->arch.old_cpu_mask would be stale.
+             */
+            cpumask_clear_cpu(cpu, desc->arch.old_cpu_mask);
+            if ( cpumask_empty(desc->arch.old_cpu_mask) )
+            {
+                desc->arch.move_in_progress = 0;
+                release_old_vec(desc);
+            }
+        }
+
         /*
          * Avoid shuffling the interrupt around as long as current target CPUs
          * are a subset of the input mask.  What fixup_irqs() cares about is
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:21:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:21:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737302.1143580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftR-0001ZF-6R; Mon, 10 Jun 2024 14:21:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737302.1143580; Mon, 10 Jun 2024 14:21:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftR-0001Yz-1z; Mon, 10 Jun 2024 14:21:05 +0000
Received: by outflank-mailman (input) for mailman id 737302;
 Mon, 10 Jun 2024 14:21:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGftP-0008MI-DJ
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 14:21:03 +0000
Received: from mail-qt1-x833.google.com (mail-qt1-x833.google.com
 [2607:f8b0:4864:20::833])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a9a999f6-2734-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 16:21:01 +0200 (CEST)
Received: by mail-qt1-x833.google.com with SMTP id
 d75a77b69052e-43ffbc0927fso25295631cf.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 07:21:01 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-44038a68e55sm36473281cf.24.2024.06.10.07.20.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 07:20:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9a999f6-2734-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718029260; x=1718634060; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4+NbzKdNCPo1s6Mdl+9AvKyg4EH4oF+mfwLbxFaB8IA=;
        b=PxS/mHzupQbbfPj2V9dYqIojoNX3KQAPgxwMocNksyqibNCKhoc+Ouqo89sbvY8Yf8
         +Fp9L5/aCWEvLkXKet74jKqZOlpdc4NtQeAsY5VSANgsd8SAbZha6ytZEdHXIdAx7EG4
         ZmzOVMrBiY2NvS4EFLQVy3+toFJS44erp+c5M=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718029260; x=1718634060;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4+NbzKdNCPo1s6Mdl+9AvKyg4EH4oF+mfwLbxFaB8IA=;
        b=F6FjbupZ1v/Ka8nIat/V07JWb5dXYDyo8BpPbzt/xzPc0bGG0oVIRluTHxnfjM7NBc
         Yup0NFAZ7HUDi+RH3aXc1JCvg4up8gexTPw2WrB65xZXrZGljMjaUKIIfVM70lTg/qR+
         BWD2dSr43FD5cX7Tpsc2gtkxg/PBTEtIPjGvaxy4hJ1juR4J6uAobqA0ObGeZQtxFD93
         6bHJPrRISht0qt7OzJnEZDi7If0q/eCvwz6Em9mPSmPc4CTs0o4iFr4RHOjkbCQT/cHQ
         5sjIzvlLVVA+YAnyZueh4fUtl3FcI1j9nWNJivzeL73kGTHqARzbi8ay+PuPAtBBodrM
         nGjA==
X-Gm-Message-State: AOJu0Yyo3XtjejCA+Mef3AqKgFgU1Uud02Mf15nJgSi3SfJrnYAvcUKO
	6THJeONcXkvlC2X0UnlsZVeFdDaR58ZouuIFcZ7qzGhuxuEk4tb9uLKG3iWjKPurQ8wUktwYR4/
	s
X-Google-Smtp-Source: AGHT+IGoVswSKHWjkGlfxiCqCrFXbRF14HzayUnmcbGoCXniQP/N9W0gX6JoNsdwjyxFEKSB1kvHow==
X-Received: by 2002:a05:622a:13c8:b0:43a:e730:3a23 with SMTP id d75a77b69052e-44041c31ad9mr113957351cf.3.1718029259897;
        Mon, 10 Jun 2024 07:20:59 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 6/7] x86/irq: handle moving interrupts in _assign_irq_vector()
Date: Mon, 10 Jun 2024 16:20:42 +0200
Message-ID: <20240610142043.11924-7-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240610142043.11924-1-roger.pau@citrix.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Currently there's logic in fixup_irqs() that attempts to prevent
_assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
interrupts from the CPUs not present in the input mask.  The current logic in
fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.

Instead of attempting to fix up the interrupt descriptor in fixup_irqs() so that
_assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
to deal with interrupts that have either move_{in_progress,cleanup_count} set
and no remaining online CPUs in ->arch.cpu_mask.

If _assign_irq_vector() is requested to move an interrupt in the state
described above, first check whether ->arch.old_cpu_mask contains any valid
CPUs that could be used as a fallback, and if so move the interrupt back to
the previous destination.  Note this is easier because the old vector hasn't
been released yet, so there's no need to allocate and set up a new vector on
the destination.

Due to the logic in fixup_irqs() that clears offline CPUs from
->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
shouldn't be possible to get into _assign_irq_vector() with
->arch.move_{in_progress,cleanup_count} set but no online CPUs in
->arch.old_cpu_mask.

However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
longer part of the affinity set.  In that case move the interrupt to a
different CPU that's part of the provided mask, and keep the current
->arch.old_{cpu_mask,vector} so the pending interrupt movement can be
completed.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Further refine the logic in _assign_irq_vector().
---
 xen/arch/x86/irq.c | 87 ++++++++++++++++++++++++++++++----------------
 1 file changed, 58 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index f07e09b63b53..54eabd23995c 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
     }
 
     if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
-        return -EAGAIN;
+    {
+        /*
+         * If the current destination is online refuse to shuffle.  Retry after
+         * the in-progress movement has finished.
+         */
+        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
+            return -EAGAIN;
+
+        /*
+         * Due to the logic in fixup_irqs() that clears offlined CPUs from
+         * ->arch.old_cpu_mask it shouldn't be possible to get here with
+         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
+         * ->arch.old_cpu_mask.
+         */
+        ASSERT(valid_irq_vector(desc->arch.old_vector));
+        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
+
+        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
+        {
+            /*
+             * Fallback to the old destination if moving is in progress and the
+             * current destination is to be offlined.  This is only possible if
+             * the CPUs in old_cpu_mask intersect with the affinity mask passed
+             * in the 'mask' parameter.
+             */
+            desc->arch.vector = desc->arch.old_vector;
+            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
+
+            /* Undo any possibly done cleanup. */
+            for_each_cpu(cpu, desc->arch.cpu_mask)
+                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
+
+            /* Cancel the pending move. */
+            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
+            cpumask_clear(desc->arch.old_cpu_mask);
+            desc->arch.move_in_progress = 0;
+            desc->arch.move_cleanup_count = 0;
+
+            return 0;
+        }
+
+        /*
+         * There's an interrupt movement in progress but the destination(s) in
+         * ->arch.old_cpu_mask are not suitable given the passed 'mask', go
+         * through the full logic to find a new vector in a suitable CPU.
+         */
+    }
 
     err = -ENOSPC;
 
@@ -600,7 +646,17 @@ next:
         current_vector = vector;
         current_offset = offset;
 
-        if ( valid_irq_vector(old_vector) )
+        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
+        {
+            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
+            /*
+             * Special case when evacuating an interrupt from a CPU to be
+             * offlined and the interrupt was already in the process of being
+             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
+             * replace ->arch.{cpu_mask,vector} with the new destination.
+             */
+        }
+        else if ( valid_irq_vector(old_vector) )
         {
             cpumask_and(desc->arch.old_cpu_mask, desc->arch.cpu_mask,
                         &cpu_online_map);
@@ -2622,33 +2678,6 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
             continue;
         }
 
-        /*
-         * In order for the affinity adjustment below to be successful, we
-         * need _assign_irq_vector() to succeed. This in particular means
-         * clearing desc->arch.move_in_progress if this would otherwise
-         * prevent the function from succeeding. Since there's no way for the
-         * flag to get cleared anymore when there's no possible destination
-         * left (the only possibility then would be the IRQs enabled window
-         * after this loop), there's then also no race with us doing it here.
-         *
-         * Therefore the logic here and there need to remain in sync.
-         */
-        if ( desc->arch.move_in_progress &&
-             !cpumask_intersects(mask, desc->arch.cpu_mask) )
-        {
-            unsigned int cpu;
-
-            cpumask_and(affinity, desc->arch.old_cpu_mask, &cpu_online_map);
-
-            spin_lock(&vector_lock);
-            for_each_cpu(cpu, affinity)
-                per_cpu(vector_irq, cpu)[desc->arch.old_vector] = ~irq;
-            spin_unlock(&vector_lock);
-
-            release_old_vec(desc);
-            desc->arch.move_in_progress = 0;
-        }
-
         if ( !cpumask_intersects(mask, desc->affinity) )
         {
             break_affinity = true;
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:21:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:21:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737303.1143590 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftT-0001tT-DS; Mon, 10 Jun 2024 14:21:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737303.1143590; Mon, 10 Jun 2024 14:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGftT-0001tB-9v; Mon, 10 Jun 2024 14:21:07 +0000
Received: by outflank-mailman (input) for mailman id 737303;
 Mon, 10 Jun 2024 14:21:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3ow3=NM=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGftS-0008MI-L0
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 14:21:06 +0000
Received: from mail-oa1-x2d.google.com (mail-oa1-x2d.google.com
 [2001:4860:4864:20::2d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aaf2755e-2734-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 16:21:03 +0200 (CEST)
Received: by mail-oa1-x2d.google.com with SMTP id
 586e51a60fabf-254e42df409so198619fac.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 07:21:03 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-795ca413ed3sm109519685a.43.2024.06.10.07.21.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 07:21:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaf2755e-2734-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718029262; x=1718634062; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1/0Nmqg5/SRyeHMyQYQ756u1OolFE3kJadfERqjt0Rk=;
        b=Gc70o86TfMk3tM1/TdasBJbrSAS2n6QgRTQCEdHAoC4SwabvYQMj2YIRxRSVRwF8lk
         o+LOWyx+390iEwQXUi6rpZvKp6Jk1QMr0I0V+AVh1uwCIliYz3HC7HH56cacAU6HebGN
         ssGWUvD6OX1Qf+G9KDXEXHoJqsBUgBAXxjUWQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718029262; x=1718634062;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1/0Nmqg5/SRyeHMyQYQ756u1OolFE3kJadfERqjt0Rk=;
        b=aCTg/1FCquiIsW9t3q8krF0s7WjXERVFSUeMoooFsuF5RwbK26h15EbZ0h2cVAnWt9
         pYllin5nCPIPXQSjUEKezdH/XR0k7tBlKK7rd017YQMOMrs88UBVDMjTuJamjYpJLGYH
         us/+/gNBZHnLPsUi17JjiQssFZrBliP5n1M2oprdSfzFN4VDA7Fw/zxs5wUBTW1q+dMP
         F3BES/yQ0nRrEB9sCSrSCTNHaqEh4qZgirV3YmHZgzZzzBQRRuD2rlTLNsdRmQ25E1LS
         NE/s6buzFMBqynwRo5B2y9CTnM/dS6VcLNOPFVdWcRh90AGeXLulQNQMVsdOkoyX++/q
         stRA==
X-Gm-Message-State: AOJu0Yz62cchlDNVb4OaTFYo3MG3olqgwBdcbNt7ccmyHTJ0HqPP43ym
	gber1Z1gDuyomNDtrYCNcblr9Yfw+j+OIYViaf35kFLLmbx+WnwcDAU9PCrK8F17FHdPfYS1u/e
	X
X-Google-Smtp-Source: AGHT+IHogHOuVFsgIfOA2XNMT/LsSuo3pBEUpsSOSwxa/IxpdguZPEPVX8JY1QIHhekoZfHCLyzG6w==
X-Received: by 2002:a05:6870:70a9:b0:250:7032:5633 with SMTP id 586e51a60fabf-2546441a0a4mr10990106fac.14.1718029262133;
        Mon, 10 Jun 2024 07:21:02 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 7/7] x86/irq: forward pending interrupts to new destination in fixup_irqs()
Date: Mon, 10 Jun 2024 16:20:43 +0200
Message-ID: <20240610142043.11924-8-roger.pau@citrix.com>
X-Mailer: git-send-email 2.44.0
In-Reply-To: <20240610142043.11924-1-roger.pau@citrix.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.  Given
the CPU is to become offline, the normal migration logic used by Xen, where
the vector on the previous target(s) is left configured until the interrupt
is received on the new destination, is not suitable.

Instead attempt to do as much as possible in order to prevent losing
interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
currently the case) attempt to forward pending vectors when interrupts that
target the current CPU are migrated to a different destination.

Additionally, for interrupts that have already been moved from the current CPU
prior to the call to fixup_irqs() but that haven't been delivered to the new
destination (iow: interrupts with move_in_progress set and the current CPU set
in ->arch.old_cpu_mask) also check whether the previous vector is pending and
forward it to the new destination.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Rename to apic_irr_read().
---
 xen/arch/x86/include/asm/apic.h |  5 +++++
 xen/arch/x86/irq.c              | 37 ++++++++++++++++++++++++++++++++-
 2 files changed, 41 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
index d1cb001fb4ab..7bd66dc6e151 100644
--- a/xen/arch/x86/include/asm/apic.h
+++ b/xen/arch/x86/include/asm/apic.h
@@ -132,6 +132,11 @@ static inline bool apic_isr_read(uint8_t vector)
             (vector & 0x1f)) & 1;
 }
 
+static inline bool apic_irr_read(unsigned int vector)
+{
+    return apic_read(APIC_IRR + (vector / 32 * 0x10)) & (1U << (vector % 32));
+}
+
 static inline u32 get_apic_id(void)
 {
     u32 id = apic_read(APIC_ID);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 54eabd23995c..ed262fb55f4a 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2601,7 +2601,7 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
-        bool break_affinity = false, set_affinity = true;
+        bool break_affinity = false, set_affinity = true, check_irr = false;
         unsigned int vector, cpu = smp_processor_id();
         cpumask_t *affinity = this_cpu(scratch_cpumask);
 
@@ -2649,6 +2649,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
              !cpumask_test_cpu(cpu, &cpu_online_map) &&
              cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
         {
+            /*
+             * This to-be-offlined CPU was the target of an interrupt that's
+             * been moved, and the new destination target hasn't yet
+             * acknowledged any interrupt from it.
+             *
+             * We know the interrupt is configured to target the new CPU at
+             * this point, so we can check IRR for any pending vectors and
+             * forward them to the new destination.
+             *
+             * Note the difference between move_in_progress and
+             * move_cleanup_count being set.  For the latter we know the new
+             * destination has already acked at least one interrupt from this
+             * source, and hence there's no need to forward any stale
+             * interrupts.
+             */
+            if ( apic_irr_read(desc->arch.old_vector) )
+                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                              desc->arch.vector);
+
             /*
              * This CPU is going offline, remove it from ->arch.old_cpu_mask
              * and possibly release the old vector if the old mask becomes
@@ -2689,11 +2708,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
         if ( desc->handler->disable )
             desc->handler->disable(desc);
 
+        /*
+         * If the current CPU is going offline and is (one of) the target(s) of
+         * the interrupt, signal to check whether there are any pending vectors
+         * to be handled in the local APIC after the interrupt has been moved.
+         */
+        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
+            check_irr = true;
+
         if ( desc->handler->set_affinity )
             desc->handler->set_affinity(desc, affinity);
         else if ( !(warned++) )
             set_affinity = false;
 
+        if ( check_irr && apic_irr_read(vector) )
+            /*
+             * Forward pending interrupt to the new destination, this CPU is
+             * going offline and otherwise the interrupt would be lost.
+             */
+            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                          desc->arch.vector);
+
         if ( desc->handler->enable )
             desc->handler->enable(desc);
 
-- 
2.44.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 14:59:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 14:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737346.1143600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgU6-0000x5-3J; Mon, 10 Jun 2024 14:58:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737346.1143600; Mon, 10 Jun 2024 14:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgU6-0000wy-0g; Mon, 10 Jun 2024 14:58:58 +0000
Received: by outflank-mailman (input) for mailman id 737346;
 Mon, 10 Jun 2024 14:58:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGgU4-0000ws-HZ
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 14:58:56 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f4a313c9-2739-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 16:58:54 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id
 a640c23a62f3a-a6f04afcce1so233246666b.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 07:58:54 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f0ea2d230sm303163466b.28.2024.06.10.07.58.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 07:58:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4a313c9-2739-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718031534; x=1718636334; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=FiLoIBjOFubDY1li5qS1G1hfyqoewCi9zr062S3gC8E=;
        b=Bi9OwoDfIHIv9ZnKcsKIgcYfvQfrGpzUsUok4Ki21LdRdka56iuaPhQF/ljAE6wlmP
         wapgi0YDrS53PdzZGajBzS1vUvYGs2QBGRmDik579I+0/YbS4wADtTQUf8BImZY/16db
         ixQ5bWKE5AxX4a/UgB2LmGDHx2xQrVRXbR+UfkI5SjkjvX2UVnZl5as7pZd/6LILULcy
         zHPavhlwY/s1ZTbzU3nDmYLXTyX9Xt5cr562MUYyyaGCpan41wocPoMM2A03wDpIxDwA
         oitFqwDEDZplalOgYJ3t/Lq+pS6/nBhgNvvWML00vggQG91oi6O4YL7wQxxJVyJ2WBWx
         zILw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718031534; x=1718636334;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=FiLoIBjOFubDY1li5qS1G1hfyqoewCi9zr062S3gC8E=;
        b=Al8fyiLJCrpm86Kh9ZvaH/8/ECHNQPN3+gTiZnfWFR2LeSwOI9DKSpTFV4BeKJUGER
         yYTHSmG8bUEMkkMoHFEK1VndZ2GaPV4BWtuEyu5urQyWkdAnhcfhvrJwTDpYp6rsTITO
         RtaxZGt4zIma0ypYXH4GG3SnARXg/B4toj2N4RYw8+x69v0YdJBDjWtazVSQCeLfS1Ze
         ogp9ovHrFB4XTQw47NgOx8UaITmlBQk2dv2BBAYy3K8QZ/l549p2EESCDevDOrp+GLZu
         Pkdr6sKhtEAHjz5gkMwkcz22vMe7E5ifePkKe52ssbMZHsTzRjYHbUWGjOdQlzAN/qXx
         +OOA==
X-Gm-Message-State: AOJu0Yye6+7MVy8wEn5mUrADanOG3grORlYfTFKBAJH0a1609UjyjQih
	ArVIPTD5zv+sAHAXdtq2FsS2C2/nMEeNbh/b/j2sDdX0JHO1dfnN53AQx+O6OVU9kqcYNPJZGjs
	=
X-Google-Smtp-Source: AGHT+IEiqYVs8B4tfCt3b7EqMPA9pWdgBz02Rhkib0Mf4VZ+cBostXPcBIpa3U2iO1VVlqwDr9iYtg==
X-Received: by 2002:a17:907:376:b0:a6f:1541:e8d5 with SMTP id a640c23a62f3a-a6f1541e9d0mr339055766b.23.1718031533774;
        Mon, 10 Jun 2024 07:58:53 -0700 (PDT)
Message-ID: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
Date: Mon, 10 Jun 2024 16:58:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
access to actual MMIO space should not generally be restricted to UC
only; especially video frame buffer accesses are unduly affected by such
a restriction. Permit PAT use for directly assigned MMIO as long as the
domain is known to have been granted some level of cache control.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Considering that we've just declared PVH Dom0 "supported", this may well
qualify for 4.19. The issue was particularly noticeable there.

The conditional may be more complex than really necessary, but it's in
line with what we do elsewhere. And imo it's better to remain a little
too restrictive than to become too lax.

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
 
     if ( !mfn_valid(mfn) )
     {
-        *ipat = true;
+        *ipat = type != p2m_mmio_direct ||
+                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
         return X86_MT_UC;
     }
 

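To make the effect of the hunk concrete, here is a standalone C sketch of the patched decision. The enum, struct, and helper below are illustrative stand-ins (not the real Xen definitions); `is_iommu_enabled()` and `cache_flush_permitted()` are modeled as plain booleans:

```c
#include <stdbool.h>

/* Standalone model of the patched branch in epte_get_entry_emt().
 * All names below are stand-ins for the real Xen definitions. */

typedef enum { p2m_ram_rw, p2m_mmio_direct } p2m_type_t;

struct domain {
    bool iommu_enabled;        /* stand-in for is_iommu_enabled(d) */
    bool cache_flush_allowed;  /* stand-in for cache_flush_permitted(d) */
};

/* For an MFN that fails mfn_valid(): only force "ignore PAT" (iPAT)
 * when the mapping is not direct MMIO, or when the domain has been
 * granted no cache control at all.  The memory type stays UC either
 * way; what changes is whether the guest's PAT can override it. */
static bool ipat_for_invalid_mfn(const struct domain *d, p2m_type_t type)
{
    return type != p2m_mmio_direct ||
           (!d->iommu_enabled && !d->cache_flush_allowed);
}
```

So a domain with a device assigned (IOMMU enabled) keeps PAT control over its frame buffer mapping, while a plain RAM-type invalid MFN is still pinned to UC.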

From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:00:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:00:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737352.1143609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgVG-0002Oh-En; Mon, 10 Jun 2024 15:00:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737352.1143609; Mon, 10 Jun 2024 15:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgVG-0002Oa-CC; Mon, 10 Jun 2024 15:00:10 +0000
Received: by outflank-mailman (input) for mailman id 737352;
 Mon, 10 Jun 2024 15:00:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGgVE-0002OK-Tg
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:00:08 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1fc514f8-273a-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 17:00:06 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a6f13dddf7eso184452966b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:00:06 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1d8369casm168170166b.225.2024.06.10.08.00.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:00:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fc514f8-273a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718031606; x=1718636406; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=A60tftk2W4ZZSJmfLCRU6alxBe9/4ElxSP0dkCVeOp4=;
        b=GU3URsnMKaas9MmDUpSvqkQM1l7ZvNho3duymTunOG1m5kC6v1nt1imvrxuEP+TuPS
         mPwhVzmQjoV6ubOZXb0hTbdYvr2WEIcuag1LrYFP/cZhEfgjfFWlxTb+NgQbuZnZg077
         Dmt0bzvRiOamIqC5a18swQhWe1w1md9+s5jH86lAyrsCHWYP6Zyzb4WD4lvVaienSwHR
         SsSzvJ2F50Yf9e3RgJ4tregYpnDiVijcZDpArtBRSevqlX1tdy89DYqFCwrjk1twOjDX
         GZB4sYxKRVatQgjFMQOqEeBmh7Xk+nIGQ8ZXYojpi0fmHjuyGtbsXsEyBgTziw1y3o+A
         pRCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718031606; x=1718636406;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=A60tftk2W4ZZSJmfLCRU6alxBe9/4ElxSP0dkCVeOp4=;
        b=DCcYy2oCQL0FxwjGWzadpwUnhP7jD5h8qrMOGv1QcKwcoTXGBAsSYZuMXieCTuey7p
         6UWpKAyHaaIELqozGUzpK/zJhjidFkmxVT+Fc8hc2i0NsSgLNolxIjQl3RqheeWzeWxo
         dd8mpONGTnrYYGwn9uTWhQs5IeLO82bEN/LCKcYaiu07nOSzh1+RyhFfZ5py6NuUISQZ
         MnZmhB0MIXG/c8beKg7VKWyk3C4l21i4Wlc0HXxcsX/tc8N03TZSfXD7DTr84iBGZ9PO
         vZ6G4r0UVRphoeB9VSmB9SpiG7NI/9Bqa46f4a9zmUnf81MzjGO/bLzVkk4qnssVwyV0
         6pWA==
X-Gm-Message-State: AOJu0YwLIWHJx4wC0O9Z+P0NnD3cR/qXgn5t9PXJqAFDwynYiOJCbyDy
	EFsLgF0WELkr7A0vurMdu5A6dgx/HJ5Et+na4as81FFl9c/nY5d+9MdkBUT4pSrXF6lEEABYKDo
	=
X-Google-Smtp-Source: AGHT+IE4ZHN11aSHdrnEkfqOLqDlff3wmihPd1/qlTTDooB0/fZcRP5H4FKX8NpeXPurJO2X2Tfhvg==
X-Received: by 2002:a17:906:2b57:b0:a6f:1d8c:f22f with SMTP id a640c23a62f3a-a6f1d8cf68emr213049566b.26.1718031606165;
        Mon, 10 Jun 2024 08:00:06 -0700 (PDT)
Message-ID: <dcdb2217-d3be-4cfa-b698-d18bdfdd91e3@suse.com>
Date: Mon, 10 Jun 2024 17:00:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] x86/EPT: relax iPAT for "invalid" MFNs
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
Content-Language: en-US
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2024 16:58, Jan Beulich wrote:
> mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
> access to actual MMIO space should not generally be restricted to UC
> only; especially video frame buffer accesses are unduly affected by such
> a restriction. Permit PAT use for directly assigned MMIO as long as the
> domain is known to have been granted some level of cache control.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Considering that we've just declared PVH Dom0 "supported", this may well
> qualify for 4.19. The issue was particularly noticeable there.

Actually - meant to Cc Oleksii for this, and then forgot.

Jan

> The conditional may be more complex than really necessary, but it's in
> line with what we do elsewhere. And imo it's better to remain a little
> too restrictive than to become too lax.
> 
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
>  
>      if ( !mfn_valid(mfn) )
>      {
> -        *ipat = true;
> +        *ipat = type != p2m_mmio_direct ||
> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
>          return X86_MT_UC;
>      }
>  



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:05:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:05:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737359.1143620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgaU-00031f-1i; Mon, 10 Jun 2024 15:05:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737359.1143620; Mon, 10 Jun 2024 15:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgaT-00031Y-VC; Mon, 10 Jun 2024 15:05:33 +0000
Received: by outflank-mailman (input) for mailman id 737359;
 Mon, 10 Jun 2024 15:05:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGgaT-00031S-KJ
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:05:33 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e1308350-273a-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 17:05:31 +0200 (CEST)
Received: by mail-lf1-x12d.google.com with SMTP id
 2adb3069b0e04-52bc27cfb14so3643233e87.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:05:31 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6c806eab0asm646111366b.134.2024.06.10.08.05.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:05:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1308350-273a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718031931; x=1718636731; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=RSUhmMvQ9TTq7jmQyRryGF8gV4D3aN8zOpEkmyaM2BQ=;
        b=BxIMhZunt79KD2Z87LCO3ZFZwe2rozf1shEPpZ3ZLy+aX3oyPYtyF0c3y79v2tGOBi
         nr4/WS4DJKKg8T5kfAsTk2YuXogwcsGPTo1ZuPOnUphvDiV3niypbHA3CB3IkHpnlV7T
         aAaoMl6+oDunvqB2Xb2HQUOyfVGSIrj9CuDlz+QctpiMzECHrui5ZrBfX7PfM23vTl4N
         NNy0SoTLEN60+r6Ip5pbgpVD/f6+AnPdwoaRHMhbM+GYRLEy52Xg0TAXmdyaygE4vvCy
         r2Dni0vy3HV4FvrHBPzCYrxtaFAKDDmrHkq8pMV3qGVn3505JG0dQczV8JyKEM4Ot0kx
         1b7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718031931; x=1718636731;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RSUhmMvQ9TTq7jmQyRryGF8gV4D3aN8zOpEkmyaM2BQ=;
        b=exehrjUV70hbHnS/Pni6ToYdzE03hvT6Q3oNhR25GF/OyGyA5aBWQST3PVvjc8jD+/
         eIoe/6qNM0beg8jGCfprrk6MNnDSjT6zc1UBOamn8fi7a9JXD/c4lUHnbRmulwqwg3ry
         Gwo7QMF+2JPmfvzApg8eo94gPM6n27rk00zMXASDJncaqeQx9YlwMOfS3WCmv/vgqdjh
         QBb3GCoN/eSWoR7u3xR/Wb19vaxOqbCgg8RP8zrWhyAzjZW7oM7sVimC6QfrAH6V4pp+
         WtT3iFdg2GMmHxV5Ar8GMl8q8PbMmMHyQuFPTSXP/mSrj4vNddYCWQp0mYU5bD8ASfyd
         akMg==
X-Forwarded-Encrypted: i=1; AJvYcCW6/yqhDfFKG3XstjWb/e+qpWrpU/G5tMSkYRtiuOBqWyqbf1wE3W5yyFY6pzS3qnYAK9C762yynqlshD4R34DRtbzVg+mZ+ll53bsL2yg=
X-Gm-Message-State: AOJu0Yx/DuL33+gUjMgCCr5QaENycWjVjXAPMBbRWkMBtnhnOGi20X3E
	Dgh56LsyOwy1BZMo7X/wtsgmBJVqlBm3gP3mnx9u2P3bKUcxF13TmZRvmcfR6Q==
X-Google-Smtp-Source: AGHT+IFJilQ0RRk0BGfR5r2VQsBIHIA+vhN9xxNoE1EvyJ/AirFLKaw0ux2xImWHZuwBNz0T4anNcA==
X-Received: by 2002:a19:8c42:0:b0:52c:7fbf:39f6 with SMTP id 2adb3069b0e04-52c7fbf3b15mr4575077e87.26.1718031930677;
        Mon, 10 Jun 2024 08:05:30 -0700 (PDT)
Message-ID: <19314130-dd3f-4056-97f3-b04ad24bdcae@suse.com>
Date: Mon, 10 Jun 2024 17:05:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v5 07/10] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1717356829.git.w1benny@gmail.com>
 <84794f97bc738add96a66790425a3aa5f5084a25.1717356829.git.w1benny@gmail.com>
 <22eabe14-10c3-4095-91d3-b63911908cb2@suse.com>
 <CAKBKdXhZ4HOqThPMkwaWB5ZhQOc6gE=xsKzkoL4_h+M6y33dcQ@mail.gmail.com>
 <f3cd00f2-bdcb-4604-bdc2-fd13eddb8ea0@suse.com>
 <CAKBKdXje+_dd7kh3+aDJACw84+-1ozXt6N==KbA6Tgm7GeZEnQ@mail.gmail.com>
 <8961cf72-4eeb-4c47-9723-35da3e47d4d2@suse.com>
 <CAKBKdXiQhFeihx9HeuOv5cFe8K7H2O+GFUXy4ThF1X6ZGjCrig@mail.gmail.com>
 <093a45d0-da0b-44d1-902e-730eede80112@suse.com>
 <CAKBKdXjWmVJtCNWsGHnM_9TT2BZ6S=qyxYbYS7hsLWqb4vR16w@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CAKBKdXjWmVJtCNWsGHnM_9TT2BZ6S=qyxYbYS7hsLWqb4vR16w@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 14:21, Petr Beneš wrote:
> On Mon, Jun 10, 2024 at 1:21 PM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 10.06.2024 12:34, Petr Beneš wrote:
>>> On Mon, Jun 10, 2024 at 12:16 PM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 10.06.2024 11:10, Petr Beneš wrote:
>>>>> On Mon, Jun 10, 2024 at 9:30 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>
>>>>>> On 09.06.2024 01:06, Petr Beneš wrote:
>>>>>>> On Thu, Jun 6, 2024 at 9:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>> @@ -122,7 +131,12 @@ int p2m_init_altp2m(struct domain *d)
>>>>>>>>>      struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
>>>>>>>>>
>>>>>>>>>      mm_lock_init(&d->arch.altp2m_list_lock);
>>>>>>>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>>>>>>>> +    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
>>>>>>>>> +
>>>>>>>>> +    if ( !d->arch.altp2m_p2m )
>>>>>>>>> +        return -ENOMEM;
>>>>>>>>
>>>>>>>> This isn't really needed, is it? Both ...
>>>>>>>>
>>>>>>>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>>>>>>>
>>>>>>>> ... this and ...
>>>>>>>>
>>>>>>>>>      {
>>>>>>>>>          d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
>>>>>>>>>          if ( p2m == NULL )
>>>>>>>>> @@ -143,7 +157,10 @@ void p2m_teardown_altp2m(struct domain *d)
>>>>>>>>>      unsigned int i;
>>>>>>>>>      struct p2m_domain *p2m;
>>>>>>>>>
>>>>>>>>> -    for ( i = 0; i < MAX_ALTP2M; i++ )
>>>>>>>>> +    if ( !d->arch.altp2m_p2m )
>>>>>>>>> +        return;
>>>>>>
>>>>>> I'm sorry, the question was meant to be on this if() instead.
>>>>>>
>>>>>>>>> +    for ( i = 0; i < d->nr_altp2m; i++ )
>>>>>>>>>      {
>>>>>>>>>          if ( !d->arch.altp2m_p2m[i] )
>>>>>>>>>              continue;
>>>>>>>>> @@ -151,6 +168,8 @@ void p2m_teardown_altp2m(struct domain *d)
>>>>>>>>>          d->arch.altp2m_p2m[i] = NULL;
>>>>>>>>>          p2m_free_one(p2m);
>>>>>>>>>      }
>>>>>>>>> +
>>>>>>>>> +    XFREE(d->arch.altp2m_p2m);
>>>>>>>>>  }
>>>>>>>>
>>>>>>>> ... this ought to be fine without?
>>>>>>>
>>>>>>> Could you, please, elaborate? I honestly don't know what you mean here
>>>>>>> (by "this isn't needed").
>>>>>>
>>>>>> I hope the above correction is enough?
>>>>>
>>>>> I'm sorry, but not really? I feel like I'm blind but I can't see
>>>>> anything I could remove without causing (or risking) crash.
>>>>
>>>> The loop is going to do nothing when d->nr_altp2m == 0, and the XFREE() is
>>>> going to do nothing when d->arch.altp2m_p2m == NULL. Hence what does the
>>>> if() guard against? IOW what possible crashes are you seeing that I don't
>>>> see?
>>>
>>> I see now. I was seeing ghosts - my thinking was that if
>>> p2m_init_altp2m fails to allocate altp2m_p2m, it will call
>>> p2m_teardown_altp2m - which, without the if(), would start to iterate
>>> through an array that is not allocated. It doesn't happen, it just
>>> returns -ENOMEM.
>>>
>>> So to reiterate:
>>>
>>>     if ( !d->arch.altp2m_p2m )
>>>         return;
>>>
>>> ... are we talking that this condition inside p2m_teardown_altp2m
>>> isn't needed?
>>
>> I'm not sure about "isn't" vs "shouldn't". The call from p2m_final_teardown()
>> also needs to remain safe to make. Which may require that upon allocation
>> failure you zap d->nr_altp2m. Or which alternatively may mean that the if()
>> actually needs to stay.
> 
> True, p2m_final_teardown is called whenever p2m_init (and by extension
> p2m_init_altp2m) fails. Which means that condition must stay - or, as
> you suggested, reset nr_altp2m to 0.
> 
> I would rather leave the code as is. Modifying nr_altp2m would (in my
> opinion) feel semantically incorrect, as that value should behave more
> or less as const, i.e. be initialized only once.
> 
>>> Or is there anything else?
>>
>> There was also the question of whether to guard the allocation, to avoid a
>> de-generate xmalloc_array() of zero size. Yet in the interest of avoiding
>> not strictly necessary conditionals, that may well want to remain as is.
> 
> True, a zero nr_altp2m would mean a zero-sized allocation, as p2m_init_altp2m
> is called unconditionally (when booted with altp2m=1). Is it a
> problem, though?

Not a significant one. Initially I thought this would end up being a non-
zero-size allocation, which we might like to avoid. But as it's a zero-
size one, I think that's okay to leave as is.

Jan
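The allocation/teardown pattern under discussion can be sketched in standalone C. Names loosely mirror the Xen code, but xzalloc_array(), p2m_init_one(), and the hypervisor types are replaced here with calloc()/malloc() stand-ins, so this illustrates only the role of the guard, not the actual implementation:

```c
#include <stdlib.h>

/* Miniature of the discussed pattern: an array of per-view pointers
 * sized by nr_altp2m, torn down safely even when allocation failed
 * (array == NULL) or when nr_altp2m == 0. */

struct p2m { int dummy; };

struct dom {
    unsigned int nr_altp2m;
    struct p2m **altp2m_p2m;
};

static int init_altp2m(struct dom *d)
{
    unsigned int i;

    /* calloc stands in for xzalloc_array() */
    d->altp2m_p2m = calloc(d->nr_altp2m, sizeof(*d->altp2m_p2m));
    if ( !d->altp2m_p2m )
        return -1; /* -ENOMEM in the real code */

    for ( i = 0; i < d->nr_altp2m; i++ )
    {
        d->altp2m_p2m[i] = malloc(sizeof(struct p2m)); /* p2m_init_one() */
        if ( !d->altp2m_p2m[i] )
            return -1;
    }
    return 0;
}

static void teardown_altp2m(struct dom *d)
{
    unsigned int i;

    /* The guard debated above: with it, teardown stays safe to call
     * from final teardown even when init never allocated the array,
     * without having to zap nr_altp2m on allocation failure. */
    if ( !d->altp2m_p2m )
        return;

    for ( i = 0; i < d->nr_altp2m; i++ )
        free(d->altp2m_p2m[i]); /* free(NULL) is a no-op, like the real loop's skip */

    free(d->altp2m_p2m);
    d->altp2m_p2m = NULL;
}
```

Calling teardown on a never-initialized or already-torn-down structure is a no-op in this sketch, which is the property the if() preserves.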


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:10:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:10:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737364.1143630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgf1-0004vg-GG; Mon, 10 Jun 2024 15:10:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737364.1143630; Mon, 10 Jun 2024 15:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgf1-0004vZ-Cd; Mon, 10 Jun 2024 15:10:15 +0000
Received: by outflank-mailman (input) for mailman id 737364;
 Mon, 10 Jun 2024 15:10:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGgf0-0004vT-9P
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:10:14 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 88700d4f-273b-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 17:10:11 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a6f1cf00b3aso204387466b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:10:11 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6ef61aa6afsm428291066b.101.2024.06.10.08.10.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:10:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88700d4f-273b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718032211; x=1718637011; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=aCva5wXetFBP2BAPl883qTT/FreXl4XL7F5vVY8dcxI=;
        b=eTW2ao4Yx4xqwCtMZ75oDIV3RKHP1o+zFksdDkaUOkHHZ9oYNPw8q706AvquOiA+Qy
         BYtVGOgsWvtAiA7yPmH450oNx7UiQtITgXJ3KUV8gCR3ku3+QC+7iWNuNhi8VB8wjnII
         SyEa+Tg8Avvi4ouo9z+HJf3rojPFntzD3kgT59X/SlXT7+H/Lcyf4rI+8S4+MrZLq330
         KgyCczTYMDb7ZdusmhDdcSSWWM5zNk+jxm7kdsANV5PFpS5wpCmKDqc+iOVD27YL1j9P
         1BhmUj6GcvS0QJm5jx6C9hGs1FK2S+BdLuROGkhRalvzKIM6iBiFNyPIyCMlxXaHQnUR
         wwTg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718032211; x=1718637011;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aCva5wXetFBP2BAPl883qTT/FreXl4XL7F5vVY8dcxI=;
        b=r+5iDnLB2BWtHU8nb87ipg9RDxMxtrMH7D832pCYJ4BhRJDX4nGyphFa68G/NbS38f
         oTh+vNt7le06y+GWv3TyVDP82m0/taOiAihsw9YBCR8QHXZusxt/TXNo9NHctGqj6u7h
         kbfohYFEn5jUILVHsreV0ymCBpYoeqxkX0jtuOF5mMN/PRjf/qmQm1NcG8uALT+5rJF+
         e+zt0lqkEcYIKeFMydveLt853g3jLpm9BpLWCuvink+ACjRQSXNybfFsuCjwhwTw8imW
         LsFAUpKomG5O8T49dTugb+QtphkMWjG21GyQLw7rtJt8drCKsiHe7TeWXeAw5otA/95g
         v3RQ==
X-Forwarded-Encrypted: i=1; AJvYcCXRZGzJ9sNT9jdrR92ZgwQq0f/fIhrlimHs8XyiyXse/hMhgUYBNW0xnbBWikxerdn1Sk+BY1cXtYshyjAr7wqLTSo6HbPXTtZcuKe1PiM=
X-Gm-Message-State: AOJu0Yxr5ZZVIUZ4f2q+B+awxlafQUkIRsnnC+kuoq7ueXi+gkKj/RGW
	K/ZaJoUS1fRacZDBG1CisNgUGZsP4poLKwR4Zw/rk2V3zpBY+GgiO1J6NCyYcg==
X-Google-Smtp-Source: AGHT+IHDs3sBs1VsKohIlv1RvlAqOR3U00lAv+fAH5wa1BnaLFDstKjRwsAk90j60Q8dco4N46fwhA==
X-Received: by 2002:a17:907:6d24:b0:a6f:2002:5bc4 with SMTP id a640c23a62f3a-a6f20025c37mr195339166b.15.1718032211274;
        Mon, 10 Jun 2024 08:10:11 -0700 (PDT)
Message-ID: <1bf03bd6-0f30-4a57-b597-bb59dcf800fa@suse.com>
Date: Mon, 10 Jun 2024 17:10:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 10/16] x86/domain: guard svm specific functions
 with using_svm macro
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <e03693d1daa386a31e09794b0167d282df5a8bfe.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <e03693d1daa386a31e09794b0167d282df5a8bfe.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:26, Sergiy Kibrik wrote:
> From: Xenia Ragiadakou <burzalodowa@gmail.com>
> 
> Replace the cpu_has_svm check with using_svm, so that not only SVM support
> in the CPU gets checked, but the presence of the functions svm_load_segs()
> and svm_load_segs_prefetch() in the build gets checked as well.
> 
> Since SVM depends on HVM, using_svm can be used alone, without a CONFIG_HVM
> guard.
> 
> Signed-off-by: Xenia Ragiadakou <burzalodowa@gmail.com>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

The code you're touching is solely for PV, even if it's interacting with HVM
code. Therefore "x86/PV:" may be the better subject prefix.

> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1731,11 +1731,9 @@ static void load_segments(struct vcpu *n)
>          if ( !(n->arch.flags & TF_kernel_mode) )
>              SWAP(gsb, gss);
>  
> -#ifdef CONFIG_HVM
> -        if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 )
> +        if ( using_svm && (uregs->fs | uregs->gs) <= 3 )
>              fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
>                                         n->arch.pv.fs_base, gsb, gss);
> -#endif
>      }
>  
>      if ( !fs_gs_done )
> @@ -2048,9 +2046,9 @@ static void __context_switch(void)
>  
>      write_ptbase(n);
>  
> -#if defined(CONFIG_PV) && defined(CONFIG_HVM)
> +#if defined(CONFIG_PV)

In such a case, would you mind switching (back) to the shorter "#ifdef" form?
Then
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan

>      /* Prefetch the VMCB if we expect to use it later in the context switch */
> -    if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
> +    if ( using_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
>          svm_load_segs_prefetch();
>  #endif
>  



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:16:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:16:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737372.1143639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgl7-0005do-3n; Mon, 10 Jun 2024 15:16:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737372.1143639; Mon, 10 Jun 2024 15:16:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgl7-0005dh-0s; Mon, 10 Jun 2024 15:16:33 +0000
Received: by outflank-mailman (input) for mailman id 737372;
 Mon, 10 Jun 2024 15:16:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGgl5-0005db-KT
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:16:31 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 69fb46b8-273c-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:16:30 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id
 4fb4d7f45d1cf-57864327f6eso45225a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:16:30 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c81a98bfdsm2023951a12.26.2024.06.10.08.16.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:16:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69fb46b8-273c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718032590; x=1718637390; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=AfawYvK/3FPV8tNdd4JFKAjuYyTJiAyKxi7NnMRsCIk=;
        b=Bnik8D0+b/yWo6EuoPd4kcz3iCgByeXrDXPkTEeAu0o/kTx+yBxaoN0e5Nqxx3cP/C
         hTrYxHQGW2RvOUKyDN82I1AIKW2X1z83ZSFDuTXK/uRQtAx1qCvrVftxmo5rUIqiF45B
         CGoDF13qkUJruuH6SndCEhg5b4softxdA7cbRaFpNNHvsahHDowJWhKnLv3tM8HYGqud
         +GiXLfAYBMrwsZwphSUHeqdEu/QBzCtTqoIdtpgfVpCocYr8wUP5u/MGs9bEqeNVUNXm
         Udb9NdONufqR0kEKRgMo/kUBiaQnH66NDKmKaOas4SvowdvH0MJVVkbRjnyL7KxxTVxP
         v4HA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718032590; x=1718637390;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AfawYvK/3FPV8tNdd4JFKAjuYyTJiAyKxi7NnMRsCIk=;
        b=TUNEFy/w6cFdIRp9fzh3o0hUXfHIgcGhBow9P9tGGVwr5LfqrsPANw3xNQNvdmRVe7
         XnfLrLYIlUKea1AtUClm4lQFRgfDCXnpBKJ2UvZCDrshEKE1q7ZleUBEEz2XIROCE70y
         eUnauIeFpz3752YMFLbPXBf3usPTTysFMiMC0SAVvfuJ9iG8t9bQoGKRtjyy/UoBWglf
         Xb/DvfhrRJJMQG9A+mTnJHFxPS/gbWtjM2S23mm6jAb4AYHx/fWJ80x5wRK+h+P7UsBz
         dD+IVjcbBPPcjE4Xf6GQ6Q7lVLxtOjQZPpRMrjMvTvh9zJ8LLPNop5Hyogz73wLpmDOE
         FweQ==
X-Forwarded-Encrypted: i=1; AJvYcCVIUD9DIk1wuCtrsolQOusf08WXeCWafOg1ySfuPHsw8MDsIw9dukO6dhh3kpHCiSsDN4d3tumSQjOPaSrsQKcDNO2POb9CFyhw0tn4y8k=
X-Gm-Message-State: AOJu0YwfNb94Qq0WbqfWQrnUs6Vpej3UYRZCLBgxZpnNfVo8iR2rU1u3
	qVRjapSxq217UTUrdx/27u+3HGwBsQAi0es0bHvSF0aAngVjeWH9+2htLrwPpg==
X-Google-Smtp-Source: AGHT+IHRuMGFukhvdwxiYpB+k+Yt11iuwcSWLgGLy0v/omMOJ+oy1oz2FIhxLjiWt1K0Tr8wXO0oNA==
X-Received: by 2002:a50:874a:0:b0:57a:6aa6:b4f7 with SMTP id 4fb4d7f45d1cf-57aa55c697amr9815410a12.19.1718032589669;
        Mon, 10 Jun 2024 08:16:29 -0700 (PDT)
Message-ID: <6c7d0cc9-46b7-4aea-8181-0b21f8a9da48@suse.com>
Date: Mon, 10 Jun 2024 17:16:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 12/16] x86/vmx: guard access to cpu_has_vmx_* in
 common code
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Paul Durrant <paul@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <1645c0d4a5aae7b53cfb166ac10235e12ae4dbb1.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <1645c0d4a5aae7b53cfb166ac10235e12ae4dbb1.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:30, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -5197,7 +5197,7 @@ int hvm_debug_op(struct vcpu *v, int32_t op)
>      {
>          case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_ON:
>          case XEN_DOMCTL_DEBUG_OP_SINGLE_STEP_OFF:
> -            if ( !cpu_has_monitor_trap_flag )
> +            if ( !using_vmx || !cpu_has_monitor_trap_flag )
>                  return -EOPNOTSUPP;
>              break;

Here and elsewhere you're adding a redundant check of cpu_has_vmx, even
if that's not visible without looking at using_vmx. I'm inclined to
think that adding IS_ENABLED() to the various cpu_has_* that you care
about might end up better (for being a little cheaper at runtime).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:22:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:22:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737380.1143649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgr3-0007oj-R0; Mon, 10 Jun 2024 15:22:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737380.1143649; Mon, 10 Jun 2024 15:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgr3-0007oc-Nu; Mon, 10 Jun 2024 15:22:41 +0000
Received: by outflank-mailman (input) for mailman id 737380;
 Mon, 10 Jun 2024 15:22:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGgr2-0007oW-6V
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:22:40 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 45ab1514-273d-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:22:38 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57a677d3d79so10240539a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:22:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f0dfb73c0sm310472366b.180.2024.06.10.08.22.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:22:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45ab1514-273d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718032958; x=1718637758; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=hT1c3yKG2C3ot1gvkoNSLJnU1soOSOzEvlqE6vLPc+o=;
        b=GqCbBRXKGboiDWWMNmDkCfWbrVOUYy3fL/2BQaEp6n015cXb7upyvfkYMJIYZ+S97t
         h3oQsicemCFCI5eEXo7HQMD2odzdSm9OYptl1F/nm+Uub8ogGhBkpfKGI9pXmLwKuGYj
         3G2K+JqYjLpw8Gx7BrEUvsCmplcgvmNAnn8Tido882wlzjeC3+ALCKfhbCvuQ6RA3WTV
         XCiqHPQTCd7Gyl2rK00a2YOjKZh30uBJ67zu1fo3P+3qwtSgPrCDWq/Zwtlk0c6NvVsq
         ViJmzLItnxe07pFxgyRCpxYkvUaHlUQetOExRVMjmKFxbPTIjW5IdBPcwgeyqdsFnKH4
         SGXQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718032958; x=1718637758;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=hT1c3yKG2C3ot1gvkoNSLJnU1soOSOzEvlqE6vLPc+o=;
        b=kBTZkJzgKr5FnJV7+ujpPaRh4JsnvmXlFjZvLJsbWh+tvafmg0XLbHSG9sqb5Oexa1
         g6GLl3wBFiLkdQqTDSruj/fIT0v6I+gr1d7qM0QeUYA691+8+EVeNTtpHFD3IsNUA4ny
         cTi13E78H1cVmt+egESwKIG0qnYvpxx8RmD3GtB8Ugf8R+7kNNs2VvWtMETbjfXMGTX9
         ZIXgqMq2R6JfvIHNVPKwe+o3LahVn/c/TlXv14u942qnhqPicChaeQbGnAYY1hyZ12HC
         SrLW5IUV4SJkV64+6/A3FlRKOz2W0VeGlf3pLlDG5/i9Tio0oQ6C2YHZ/GkT3ixg0/0v
         s4Gg==
X-Forwarded-Encrypted: i=1; AJvYcCUbUSc9p/nBGOhTvHv5+EOF3S8cLjwUbNwkLdhrsJiV2oYqB64Fg3kFLVQ7efSv7C92oOHNfzuahLuehufJXbhFN3cw8srWGPDBPMFihNo=
X-Gm-Message-State: AOJu0Yz83bd3Yj7sS+SgvTKmSyzQBnuIRquknpUYnhVt3pDqE1YbCvP2
	XS6zAtg13+Bx+D35UZIFBUb6XOreT1HTwFd+yYToKRvHF0x0aw0ExxgZpERSew==
X-Google-Smtp-Source: AGHT+IEdQ4HotAWzHu4hb8IhHt0h83vEfEAgPxyq7C3dAwH5akP0l9tHc3E6NaVlbAYFsi0+DugxWQ==
X-Received: by 2002:a17:906:d157:b0:a6f:1f67:9816 with SMTP id a640c23a62f3a-a6f1f67988cmr198391666b.22.1718032958200;
        Mon, 10 Jun 2024 08:22:38 -0700 (PDT)
Message-ID: <c3fcf89d-4f6a-4f7d-a49a-4e50e4978bc0@suse.com>
Date: Mon, 10 Jun 2024 17:22:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 13/16] x86/vpmu: guard calls to vmx/svm functions
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <b7f68e09ccc54782410d65173e490f477364a5f0.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b7f68e09ccc54782410d65173e490f477364a5f0.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:32, Sergiy Kibrik wrote:
> If VMX/SVM disabled in the build, we may still want to have vPMU drivers for
> PV guests. Yet in such case before using VMX/SVM features and functions we have
> to explicitly check if they're available in the build. For this puspose

Nit: It's not the first time that I see this apparent typo - I assume here
and elsewhere "purpose" is meant?

> --- a/xen/arch/x86/cpu/vpmu_amd.c
> +++ b/xen/arch/x86/cpu/vpmu_amd.c
> @@ -27,6 +27,7 @@
>  #define is_pmu_enabled(msr) ((msr) & (1ULL << MSR_F10H_EVNTSEL_EN_SHIFT))
>  #define set_guest_mode(msr) ((msr) |= (1ULL << MSR_F10H_EVNTSEL_GO_SHIFT))
>  #define is_overflowed(msr) (!((msr) & (1ULL << (MSR_F10H_COUNTER_LENGTH - 1))))
> +#define is_svm_vcpu(v) (using_svm && is_hvm_vcpu(v))

As with the earlier patch, the implicit cpu_has_svm check you add here is
redundant, since we already know we're dealing with AMD hardware. Please
consider switching to IS_ENABLED().

> --- a/xen/arch/x86/cpu/vpmu_intel.c
> +++ b/xen/arch/x86/cpu/vpmu_intel.c
> @@ -54,6 +54,8 @@
>  #define MSR_PMC_ALIAS_MASK       (~(MSR_IA32_PERFCTR0 ^ MSR_IA32_A_PERFCTR0))
>  static bool __read_mostly full_width_write;
>  
> +#define is_vmx_vcpu(v) ( using_vmx && is_hvm_vcpu(v) )

Nit (style): unlike above, but as iirc elsewhere in the series, you have
stray blanks here immediately inside the parentheses. Beyond that, the
comment above applies here too (with s/AMD/Intel).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:23:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:23:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737385.1143659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgsE-0008M0-2z; Mon, 10 Jun 2024 15:23:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737385.1143659; Mon, 10 Jun 2024 15:23:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgsE-0008Lt-0V; Mon, 10 Jun 2024 15:23:54 +0000
Received: by outflank-mailman (input) for mailman id 737385;
 Mon, 10 Jun 2024 15:23:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGgsC-0008Ll-Gc
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:23:52 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7112a8de-273d-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:23:51 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57864327f6eso61250a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:23:51 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57aadf9e208sm7778903a12.6.2024.06.10.08.23.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:23:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7112a8de-273d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718033031; x=1718637831; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=6xCki/NpCnpA1HteArI1mb6N8AF4E1RIvQM6ewyrFxE=;
        b=fg2T/1IxkbK7BMHAlYl94Dn1Sg5GfvlZT8muK1BGcUdRpP15TPit4vmBESaCXdEAur
         nqWKvx7j/RqBoPC1dtd1+IYB1uKDi7OZEO0W8Y0GCOYk7+YZwevojTUeOC7p8XCLvml8
         DeBwJu2aydzM63DeaG7rMZGtBXxMEkXH9Oe0vzeNJQgOgR0HZI+VdyIaMUmwYoWTgELA
         93BVXNBnKt8ocSErCfy2eaK04Q5RaCSUlPaYua028ccYR11zANclTTdGyQUOZGssSpTU
         ecoPkZ5hRtH+LshZ4vgMoASu63+mxnqtDs+Bdcz7D5x8c5jQJftRjBLxwWHIqwdx38rp
         dmxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718033031; x=1718637831;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6xCki/NpCnpA1HteArI1mb6N8AF4E1RIvQM6ewyrFxE=;
        b=ZN3TukUPmLoBZ1WwMTHpW0iNy1UEZcy9AWeOZPiMOfvwenAy7cDKV+Ws7uM7B+CFsN
         uWWxGA4hSM7AqlwvWJMH9T8/HMr0pskrXeYlMn+LSjuhYvcDOv5TglGnzUgI+yc/OaE5
         BYRns/HLwTCgAN27J1oxXk2xASb4xSZfLzPXZXgIM3elOj6MJGh+mPLUxKCN4pfGdtwR
         Qn/QU+SUv8iWTB51XDM8NjAsQ3kAM9QXsUng5K8DA6kqx3IH1lMVoF/FcCOgWQx9+Ha6
         XGI3K4O9G2AGzSIBpQ/WuywAM23wO3P8EZoEzmxWkqYhA9D9HX4R1/2hIzKsTe0kMK2C
         YzHg==
X-Forwarded-Encrypted: i=1; AJvYcCWzCmv/PbA03Hg8DmHMLB+KItkwFYU0Ia0ahiYMBDfXAUxKlvf5+QRA3O3mpSKyq/9nfUOWKJuLkzdWK8QHI9EQvpxlFkK9MwXiGfQuqHU=
X-Gm-Message-State: AOJu0YxhRdbBQN+uEd6oQuWm/3lqXxtCjZW+axSjGraTSlXb7FrZ0Syx
	Yjl/DJ3tSmuWT/Zrztnt69O6GMvf3x7Co5Akly4u3zkAwd9UC8rbI1xsW+kvPA==
X-Google-Smtp-Source: AGHT+IH63vYMdlfXmRkZ0J/GObvRE+suGA3ilsk4/9x9D9/rtqi+HvtVbLSY/l4sGn+nU73Fs+yMIA==
X-Received: by 2002:a50:a697:0:b0:57c:6d37:4f43 with SMTP id 4fb4d7f45d1cf-57c6d375294mr4003584a12.11.1718033031158;
        Mon, 10 Jun 2024 08:23:51 -0700 (PDT)
Message-ID: <6182c5ca-f9df-4e6e-9a3d-9e71b3619fbe@suse.com>
Date: Mon, 10 Jun 2024 17:23:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 15/16] x86/vmx: replace CONFIG_HVM with CONFIG_VMX
 in vmx.h
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xenia Ragiadakou <xenia.ragiadakou@amd.com>, xen-devel@lists.xenproject.org
References: <cover.1717410850.git.Sergiy_Kibrik@epam.com>
 <9a1d4a9af373ff7164c20b9774eea5249af60b01.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <9a1d4a9af373ff7164c20b9774eea5249af60b01.1717410850.git.Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 03.06.2024 13:36, Sergiy Kibrik wrote:
> Now that we have a separate config option for VMX, which itself depends on
> CONFIG_HVM, we need to use it to provide vmx_pi_hooks_{assign,deassign}
> stubs for the case when VMX is disabled while HVM is enabled.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:25:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737389.1143669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgtQ-0000TR-DD; Mon, 10 Jun 2024 15:25:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737389.1143669; Mon, 10 Jun 2024 15:25:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGgtQ-0000TK-9y; Mon, 10 Jun 2024 15:25:08 +0000
Received: by outflank-mailman (input) for mailman id 737389;
 Mon, 10 Jun 2024 15:25:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OhkP=NM=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sGgtO-0000T6-Eg
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:25:06 +0000
Received: from mail-oi1-x233.google.com (mail-oi1-x233.google.com
 [2607:f8b0:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9cb983bb-273d-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:25:05 +0200 (CEST)
Received: by mail-oi1-x233.google.com with SMTP id
 5614622812f47-3d22a32bef8so433540b6e.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:25:05 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 5614622812f47-3d20b66cb6esm1658399b6e.8.2024.06.10.08.25.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:25:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cb983bb-273d-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718033104; x=1718637904; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=rnhFPfi2RC6ZER/wnN4bTXS24XOJtsf19bmT7kaWdmQ=;
        b=IKylY2S4/BoXD8rFp4oe+4WTMVZZTG1PyL80BFqEfdw23d9jMXKeMw0nEJb0Sv7s1u
         +EjLHsrENto/P+UGuwH7jEbJVg9U+i//5AsghPndYvzZlcuGjqXEGLohpoN9Kd9ALraz
         wP02BB7mm0dfgxwLXqDKymYrZG35y3gG2tGc0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718033104; x=1718637904;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=rnhFPfi2RC6ZER/wnN4bTXS24XOJtsf19bmT7kaWdmQ=;
        b=MDg32Ssq+YLmjglR0CS0Ov4k1BWRrfussjx9C9TFKAFXDeFPEZdz064lIGwePwvsXq
         3NNxh7P+TgZv2Jozwck6R2jHBJ8/vaSikIjfuuBVmWiOkEknr7XVZDDUJPiSzIekgQmJ
         YWSq3ZSRW8WyI0/a0tiWBwXK5TlnnTW3/ZG6z468nncgv8G8IOoNf94s+duldJTa3mCY
         wRPWHkN74r0rpg2tVmDkzolH6TAh7pRITikJjydGjVxtt9bUcbLq6Cf1iCyG1QkP/UVi
         QKi7pxBbVWrWSxG/2YfwRLw2QJakk2+WAdECxbKc3mfwmvCapjFUoTQHbsEMA0r2CZMd
         Q9Eg==
X-Forwarded-Encrypted: i=1; AJvYcCVAAWeOP4WYXN7OQe15gwbamMqJCcJvoxjRbHpnerC9NNWQx3QLzoLfYtW8p6/gRH8Ny1ygaVkKiaEugamZgrZVq01KKSe7Y/ZQRb7gzF8=
X-Gm-Message-State: AOJu0YxdxLq5CsvvazRxoYWMT9E2WA+kJJhXHDaxMbI79LwrpJx40QRj
	jVylsYqhpjFDm+X+ZxeX9ou25wBjPer6xUMG8CRgwadEDDBob42aAjvgrRA75Os=
X-Google-Smtp-Source: AGHT+IFTH2NuD4/XzvmFY7efAG2CKs72FQmMvt7EWxoKbrOPUh7swlz3oXwd3Afus714XJx0qNiAvA==
X-Received: by 2002:a05:6808:1396:b0:3d2:1da2:a0b0 with SMTP id 5614622812f47-3d21da2a8bbmr2957306b6e.9.1718033104168;
        Mon, 10 Jun 2024 08:25:04 -0700 (PDT)
Message-ID: <67a6fc3a-bcc3-48e8-beb8-b3c05217083c@citrix.com>
Date: Mon, 10 Jun 2024 16:25:01 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH
 dom0
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20240610133210.724346-1-marmarek@invisiblethingslab.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <20240610133210.724346-1-marmarek@invisiblethingslab.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
> This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
> in the kernel, so do that too.
>
> Add it to both x86 runners, similar to the PVH domU test.
>
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

CC Oleksii.

> ---
> Requires rebuilding test-artifacts/kernel/6.1.19

Ok.

But on a tangent, shouldn't that move forwards somewhat?

>
> I'm actually not sure whether it makes sense to test HVM domU on both
> runners, when the PVH domU variant is already tested on both. Are there any
> differences between Intel and AMD relevant for QEMU in dom0?

It's not just QEMU, it's also HVMLoader, and the particulars of the
VT-x/SVM VMExit decode information needed to generate ioreqs.

I'd firmly suggest having both.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:39:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737397.1143680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGh73-0003Uj-Ig; Mon, 10 Jun 2024 15:39:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737397.1143680; Mon, 10 Jun 2024 15:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGh73-0003Uc-F2; Mon, 10 Jun 2024 15:39:13 +0000
Received: by outflank-mailman (input) for mailman id 737397;
 Mon, 10 Jun 2024 15:39:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGh72-0003UW-AY
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:39:12 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94adb3a3-273f-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:39:10 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id
 4fb4d7f45d1cf-57a44c2ce80so5271511a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:39:10 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c82a5e0f9sm1847534a12.12.2024.06.10.08.39.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:39:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94adb3a3-273f-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718033950; x=1718638750; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=L699gjE9dnfl5DrdeNkMwQqNupwDuxbteRalIAJif1A=;
        b=EOLYVFtiTdm7leC1EcUENaqs7pyyDqpI5gt5v8YVBDgTPxDeRsy8fj2HE6m1giEI3c
         oPm4rIvvlc+BK5SzAho+uvGxWoO2hZ+akrUPg6gDfrlnHIqyp7zaqmqL+6BL/TQtqukr
         IE5UHzhuNGNK9LUKmSUfqhzgsp68uEC7Lc6k5/2AK9Nk/cf1B5MGsWK0oVJs2grPm/gr
         8EPMfEAxZMbKlq9K1f7J958L/VT/0Hhk98TN7n6vxod5Yo3vH982Svp7RZkELjxjAEtR
         ccB2Ks1iT24YY8P2cv1wgSeYaEERiFWkuF26VpJb7upeEkXzT/c9l06voelBeN5l2yHW
         qF4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718033950; x=1718638750;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=L699gjE9dnfl5DrdeNkMwQqNupwDuxbteRalIAJif1A=;
        b=AysHPF2oAbKrkCoSMjhSetDlbk807ohCY4dN/qOWqrpM9kFuZYbIh85cCG+gM8n/Gg
         UYGi0J2/e8TDDivPfwGWbiPxVg+mfNO9OuR5LHaqke3b8euVCL/MpnuBPewy7GlswsFY
         xSSiNMxdyIXZQ3eTPJ1GdEgBLThUFg5um9sxA66sGpHGdhCneydY1pH1bZugGRf+04C2
         yUPb52jtKCpekhtgvedWjtgI+uPtTpGy23w9IC8N4g5cgP0tolIBGq7ZAIMiuJCe9vLQ
         we2DJkpr/VV35NKKktOSn2FR8Zdo/0eknA77NCOOADN3v1hQtHBjF29eiqf/P3ltoo/U
         qqUQ==
X-Forwarded-Encrypted: i=1; AJvYcCVCdA6ANKu8hRhoGJhKf+DPFPCyB671MpcJAKn9i1ZlrvPZUIvoOHRE59fcbeCndaBw2k/oAuVh0fs/tW3nlii1gRzq93G5lJ/5L0dfcZk=
X-Gm-Message-State: AOJu0YwgEuSyPKS94MT92Q9LQ0AnkeerPSCM32A6X5Pz7YqkSolZ2F2V
	TXGOgcrJtSGk07vcysRKHbrobG25DCWZFGs9hM/AcmjEkot5lyapMp7KIEGkEg==
X-Google-Smtp-Source: AGHT+IHHhohiPHKv7y4w0IAS2V0Vbc3EC3Jj7Q91xOYqOmch+BmkKbhVRm9WayctOFOK+BxPSkPTHw==
X-Received: by 2002:a50:c355:0:b0:57c:8339:6dcc with SMTP id 4fb4d7f45d1cf-57c83396e44mr1643099a12.33.1718033949808;
        Mon, 10 Jun 2024 08:39:09 -0700 (PDT)
Message-ID: <3398ea60-4562-454d-858f-9a5906ad2792@suse.com>
Date: Mon, 10 Jun 2024 17:39:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/intel: optional build of PSR support
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240606083908.2510396-1-Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240606083908.2510396-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 06.06.2024 10:39, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -1160,6 +1160,7 @@ long arch_do_domctl(
>          break;
>      }
>  
> +#ifdef CONFIG_INTEL
>      case XEN_DOMCTL_psr_cmt_op:
>          if ( !psr_cmt_enabled() )
>          {
> @@ -1262,6 +1263,7 @@ long arch_do_domctl(
>          }
>  
>          break;
> +#endif

Imo the result thereof shouldn't be -ENOSYS, but -EOPNOTSUPP (at least
for XEN_DOMCTL_psr_alloc; for XEN_DOMCTL_psr_cmt_op it shouldn't change,
even if I consider this wrong, but that's a separate topic). Wouldn't it
be possible here to reduce the #ifdef scope a little to just most of the
inner switch()es (i.e. requiring it to be split into two regions), whose
"default" case is already (kind of) doing what we want?

> --- a/xen/arch/x86/include/asm/psr.h
> +++ b/xen/arch/x86/include/asm/psr.h
> @@ -72,6 +72,8 @@ static inline bool psr_cmt_enabled(void)
>      return !!psr_cmt;
>  }
>  
> +#ifdef CONFIG_INTEL
> +
>  int psr_alloc_rmid(struct domain *d);
>  void psr_free_rmid(struct domain *d);
>  void psr_ctxt_switch_to(struct domain *d);
> @@ -86,6 +88,19 @@ int psr_set_val(struct domain *d, unsigned int socket,
>  void psr_domain_init(struct domain *d);
>  void psr_domain_free(struct domain *d);
>  
> +#else
> +
> +static inline void psr_ctxt_switch_to(struct domain *d)
> +{
> +}
> +static inline void psr_domain_init(struct domain *d)
> +{
> +}
> +static inline void psr_domain_free(struct domain *d)
> +{
> +}
> +#endif /* CONFIG_INTEL */

As I think I did mention elsewhere, such stubs can have the braces on the
same line as the function specifier.

> @@ -169,6 +171,7 @@ long arch_do_sysctl(
>      }
>      break;
>  
> +#ifdef CONFIG_INTEL
>      case XEN_SYSCTL_psr_cmt_op:
>          if ( !psr_cmt_enabled() )
>              return -ENODEV;
> @@ -286,6 +289,7 @@ long arch_do_sysctl(
>          }
>          break;
>      }
> +#endif

Like for domctl I think you want to reduce the #ifdef scope some, even if
for XEN_SYSCTL_psr_cmt_op that'll still result in -ENOSYS. At least we'll
then be consistent in clearing sysctl->u.psr_cmt_op.u.data there.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:44:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:44:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737405.1143690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhCB-00051J-7D; Mon, 10 Jun 2024 15:44:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737405.1143690; Mon, 10 Jun 2024 15:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhCB-000517-4Y; Mon, 10 Jun 2024 15:44:31 +0000
Received: by outflank-mailman (input) for mailman id 737405;
 Mon, 10 Jun 2024 15:44:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oRC9=NM=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sGhCA-00050q-2C
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:44:30 +0000
Received: from mail-ot1-x335.google.com (mail-ot1-x335.google.com
 [2607:f8b0:4864:20::335])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 521cf19e-2740-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:44:28 +0200 (CEST)
Received: by mail-ot1-x335.google.com with SMTP id
 46e09a7af769-6f9866bd5ccso66580a34.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:44:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 521cf19e-2740-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718034267; x=1718639067; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tdir/UNx0SSe9If8XC4A9luyE79i5T/zOKoNAl1clQM=;
        b=RXQX2yycty5AeUFWuZK6jOUBU1tS5RjHhEvHC9xv73pv6Dpso13tf2nfwgoV1eMDs8
         +fKJE9XXo1yB4rtFSYpP6zJ2drdAXK+NrZIX7hQY8ZXEMXghLMqjX5Ix3FnqUaydVA5H
         X5/qP8CmZHwXReVUtcNJT2Xb4npmA6akDW1i4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718034267; x=1718639067;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=tdir/UNx0SSe9If8XC4A9luyE79i5T/zOKoNAl1clQM=;
        b=kFcQy5RESAkiFiQ10AV0PZxlrsSONhmrKiT3DNZTPfMrvE20xJn8cv3avTYd6RMZPX
         XIPa8YAxLFpeyhh+IDOJkgAFIB2cI6NfKp3NB62bHlIEeVID2jABMvaTXUzr15TGQB8n
         Vngk4MjGY7IMF9JbZiRu1aEe+C7OiCcw2tSLz0wJAqzwD+wY4hyvxaX80sgXTeQyxxb6
         tsZndtpi9JPKC2V5yE6Twp4RqaT48f8LOn+ztePXAdnp6WG+EsnC+9A+l483ziXdG56X
         uAWBB2mwqtE7mvoe2lGnJ6nUvcLcWMw1v6i45+AWOHpEImxcW00UxhKw5T2dNEm0pZ6J
         CAfA==
X-Gm-Message-State: AOJu0Yxx2XOJOpOLbhKUv6tUQdxhnYpRTpm9Sg0D2XlydPp7kOl5mDs9
	D9DArU75bp9zmquQnbuC4Lr8e8uNI6MKP/IfdvSvY1tPJGLsSoVuyiblkdWLRJjzeadEarTVXF/
	7gf8vl33fbCRQeltEXL1d7/cqMFIdscPCVyEWmQ==
X-Google-Smtp-Source: AGHT+IHGHopiDXgahkEfW7GwMW+RmLo6MeGwznjXO2H+eXg6yect+cbsCO+B7a/xyikoTTj4P0L4gsC/LsVVwcJl3A0=
X-Received: by 2002:a05:6870:65ac:b0:254:b4a6:958d with SMTP id
 586e51a60fabf-254b4a6a1f8mr4558972fac.2.1718034267534; Mon, 10 Jun 2024
 08:44:27 -0700 (PDT)
MIME-Version: 1.0
References: <20240606054745.23555-1-jgross@suse.com> <20240606054745.23555-2-jgross@suse.com>
In-Reply-To: <20240606054745.23555-2-jgross@suse.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Mon, 10 Jun 2024 16:44:16 +0100
Message-ID: <CA+zSX=akVNGAKnhsXRvMpBthUi-gZGpjKjimP88rgnux=XfQ+w@mail.gmail.com>
Subject: Re: [PATCH] MAINTAINERS: add me as scheduler maintainer
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Jun 6, 2024 at 6:48 AM Juergen Gross <jgross@suse.com> wrote:
>
> I've been active in the scheduling code for many years now. Add
> me as a maintainer.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  MAINTAINERS | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 6ba7d2765f..cc40c0be9d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -490,6 +490,7 @@ F:  xen/common/sched/rt.c
>  SCHEDULING
>  M:     George Dunlap <george.dunlap@citrix.com>
>  M:     Dario Faggioli <dfaggioli@suse.com>
> +M:     Juergen Gross <jgross@suse.com>

Reviewed-by: George Dunlap <george.dunlap@cloud.com>

Welcome aboard, Juergen!


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:48:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737410.1143700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhFk-0005mq-MW; Mon, 10 Jun 2024 15:48:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737410.1143700; Mon, 10 Jun 2024 15:48:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhFk-0005mj-JI; Mon, 10 Jun 2024 15:48:12 +0000
Received: by outflank-mailman (input) for mailman id 737410;
 Mon, 10 Jun 2024 15:48:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGhFj-0005lU-HV
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:48:11 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d6a65027-2740-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:48:10 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6ef8bf500dso7529166b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:48:10 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1da43b48sm170053266b.195.2024.06.10.08.48.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:48:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d6a65027-2740-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718034490; x=1718639290; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=43VHVI7viJItf5tXXGvFGZX4EMMfUUWYpq7iBqeHw4Q=;
        b=XRnesecxoveCg7E7zWDNlDnD5wX/iyc6SdLDmt4mrV0V77RoQOcCxVNP97AaD8BPYq
         fRK+ToThB+BqYlQyhF3CpiEGlf15l7XpkTNlGr9YOeyEtWlXgpHQ5a/+0VObnQSpe+XA
         677OcLdZgSUZxR0drZ6gd74M2w3JNHTva2TvpTMCcvYGLoGQUzf9L9Xo/tr+Z4md/qgH
         qzT5pvgfWqUenBCEkbfcqZDJvkiC9l7CPGrbLlOhWzxRfC+0AEzwbalyB2WJ1J0/Gbgm
         16PwswjBrFAPFq7L5va6tc4ctnQ6bOUExTe913HMgevoNCx59hZoKHSB8jv3D3pkZFtv
         Gtog==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718034490; x=1718639290;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=43VHVI7viJItf5tXXGvFGZX4EMMfUUWYpq7iBqeHw4Q=;
        b=tXZ9hoG8CoZjzDaXXS9CMQx+pqWjezufhUarL31OKywVF0/xybT8/oAQZFg/Po+1Fu
         ayaYb5tV18E4j+/adCPCXJrwnhJ7dLEyd1HAYCXzIdlIcEF4lC0yyhCjkAlHpo8RhuBm
         lZ+llGoTmS025N33aTs35BKHDjL801H5/qdAJaVhjpQUvf0df+YsCO215szskXcSFj74
         izoOPiQMp6mRXd7Rts3DOcWHOiblx/6v83DVm/29reu2W1wJjuz8PNIpnnsiBrJTOdXa
         4sfzmGxfA6h1sNuhnjEfKlKE23QgQntVdSOJBjdNDd9a4la512pjyQk3C2z98JbIDpQ6
         YbWg==
X-Forwarded-Encrypted: i=1; AJvYcCUsQFaCa6iDcsqv/X2Jy2p4qrhF3qpOIQ2hEWnZ8bV4mAHp6oHptK4RcCLulK4Pz46UjWZHqm6Q/VrcQmnE7P1ByhXi1LlCPCwx1Zf8pks=
X-Gm-Message-State: AOJu0YwL9SLVrK7/cPCXjixbLrdYpFY1ptpjk0n5I3AOuNGfN93ftLZf
	cJ4KUetKLO7g7kzpnenJQD8tAglF5Sf41Rnbd8mqnNnYsibUw8gq77VPF9dTOA==
X-Google-Smtp-Source: AGHT+IHfvc5jZMB24O1fa/yAzFCAkTiHrAZc/8yHQAt6ScwBrhMxzDgWwfpozuuVani2P0sUI0IPdQ==
X-Received: by 2002:a17:906:2c16:b0:a68:a800:5f7e with SMTP id a640c23a62f3a-a6cd561f12cmr656222866b.10.1718034490047;
        Mon, 10 Jun 2024 08:48:10 -0700 (PDT)
Message-ID: <260612cb-caa4-49e1-abd1-cbc82ffe65c2@suse.com>
Date: Mon, 10 Jun 2024 17:48:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/intel: optional build of TSX support
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240606110448.2540261-1-Sergiy_Kibrik@epam.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240606110448.2540261-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 06.06.2024 13:04, Sergiy Kibrik wrote:
> --- a/xen/arch/x86/spec_ctrl.c
> +++ b/xen/arch/x86/spec_ctrl.c
> @@ -116,8 +116,10 @@ static int __init cf_check parse_spec_ctrl(const char *s)
>              if ( opt_pv_l1tf_domu < 0 )
>                  opt_pv_l1tf_domu = 0;
>  
> +#ifdef CONFIG_INTEL
>              if ( opt_tsx == -1 )
>                  opt_tsx = -3;
> +#endif

Personally I prefer using the direct check in such cases, rather than one
on a prereq symbol. I.e. "#ifndef opt_tsx" both here and below. Other
maintainers may have a different view, though, so I won't insist unless
at least one of them shares this perspective with me. Other than this:
Looks largely okay to me.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:54:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:54:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737414.1143710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhLZ-0007k5-AC; Mon, 10 Jun 2024 15:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737414.1143710; Mon, 10 Jun 2024 15:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhLZ-0007jy-6x; Mon, 10 Jun 2024 15:54:13 +0000
Received: by outflank-mailman (input) for mailman id 737414;
 Mon, 10 Jun 2024 15:54:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n2do=NM=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1sGhLX-0007js-GM
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:54:11 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 (mail-he1eur04on2050.outbound.protection.outlook.com [40.107.7.50])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acc5592c-2741-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:54:09 +0200 (CEST)
Received: from AM5PR0301CA0031.eurprd03.prod.outlook.com
 (2603:10a6:206:14::44) by DB9PR08MB7793.eurprd08.prod.outlook.com
 (2603:10a6:10:398::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Mon, 10 Jun
 2024 15:53:37 +0000
Received: from AMS0EPF00000195.eurprd05.prod.outlook.com
 (2603:10a6:206:14:cafe::f2) by AM5PR0301CA0031.outlook.office365.com
 (2603:10a6:206:14::44) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.24 via Frontend
 Transport; Mon, 10 Jun 2024 15:53:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AMS0EPF00000195.mail.protection.outlook.com (10.167.16.215) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Mon, 10 Jun 2024 15:53:37 +0000
Received: ("Tessian outbound 5a0abdb578b5:v332");
 Mon, 10 Jun 2024 15:53:36 +0000
Received: from cdbf5670d108.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A45A2202-4EC6-454D-B0A9-C2DF70337AB0.1; 
 Mon, 10 Jun 2024 15:53:30 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cdbf5670d108.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 10 Jun 2024 15:53:30 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com (2603:10a6:10:25a::24)
 by VE1PR08MB5632.eurprd08.prod.outlook.com (2603:10a6:800:1b3::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Mon, 10 Jun
 2024 15:53:27 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a]) by DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a%5]) with mapi id 15.20.7633.036; Mon, 10 Jun 2024
 15:53:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acc5592c-2741-11ef-90a2-e314d9c70b13
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=bguY1naO9uxwsBxtxJPOVQRzY2ffOBAQZYfB7Pg2PRZoJUGF0WNEKtvUaK44aK6mYD/cqEi0JuMjbfKETdNfraKdXQt+ECOnlJdIiyv0xZX4cIAMd5HPBYci/pOvdRyHP5SYiMHBxUg3nO9uETvu1kqFi4RHM6ajI4ET92Q8bZ+GFU+XrFYwYVsg30bdJGVY8hFDKBjBiOKWs/YDHZHw7K+1eCZZVbxn8ne315TO5Kcr4jycBXl39oX2WKntNoC8AyRjw+Ltpx6yd1C29rnzHsQzOAIDFOpUZ8jryt7YOGZalGZJWKeS62kJSP0TZYGh6V6+pn0c6ahStBv1hragdg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OSWHpE7p/xuYoSd4qe9eLkhU+IRQkqKwz1qAwZKCEY4=;
 b=nxZ13340E/xLjyWUhHJQYU0fRrGhY3yzlRVdfb7xlY7eLMXaOoD+X/5uPg0KO3iIiYI8Nw/Hq4KsJOOrrxfzSWatJmt3vnfFF/SKuOMdxyyfsXppXSCX2uGeyGq2kMSWRjAhTvtHsivlua6vUSpJMt0Nuob/l52B3TaFHnlGZaODMGt6hflTV3Q43FsHI6pftLnkFXPw6XN4ZVfqZhJu1qNH5/9Rf9PdEf3BV0SqfMbK2V6ycH0xDXur/wNGqGLr+mCM5As3ydaVwj9P/njZH762GaXZPwNasxMeDI0HWlZGJ5SOmay+rbHPrV+Rczfx36b5MnUX2nM/IRulZEJR1g==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OSWHpE7p/xuYoSd4qe9eLkhU+IRQkqKwz1qAwZKCEY4=;
 b=krSVCiLNjkeLX4GLPdx/AX4QWQNwx1ISRl+QzkHsSbqx2y17RgA0ky/r9KaVo2EzwDD0tcdPJ7ng8rTgInRoTU0RY7k2y80vmvh6UU23HuLtLxWjiyMNbQA+kFyb+YBDROWmxXnD3eJQQbpYrusBuTv7xdYFtBugIsnW3bq9+z4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3d2b30a75b036435
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vk1yzlBoi9XbseqQ2qO1KlygbbLMBpJV0NHwwuQdBZyeWc7cFys4vGGcTYWu8t2/81ITrQaNwsEMcGbCmAfEqZk7z5tO1Nmr5GIVTH3SSOjklP6pV7ekpMahdNgmZXXsk2izLqJHwv7EpTJTX6uG3ZzDf0vGxlqoUPYedMkCQ0GdwOt+FHJ72P9yeWwdkcnOZlTlNeqDITTlgVYHqfkWIJznDg5P5jjCnfs3fnWzSxTTut55lbEUIIvyBHTm7+ZHX1dlALJOQwK3EJotbZQ156Rg7PMQjxBdwA6Dr7GXHf4qYCUy3CHBekZ+g35uYgXx8jPQzBzLCZ8GuGgp4OkEOg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=OSWHpE7p/xuYoSd4qe9eLkhU+IRQkqKwz1qAwZKCEY4=;
 b=lnc+B4yH6kjcHj9/sX874G9Ev1Z8tJf8Xaf60y6YuHAHkYixKGefcmlsxL27ljWDriSrsMLa6TAGZxXnaBvaO+7ecXhX2iGUZM0kKSdJ3PWtwoG+dehrdtDz2/ZLZtRYAdCb/LxoQao/7b/fzXvseIPHQrKjshjwP2EzawxG+j1ODdir3X9/H+iwXdPjMG5Aduz1/rZev+uIf6QhRMRwJUZCWV21d1O2dFBJQV1k6oLX00CaJw+0bQ6jM4KpAMwFybtwl/DCDPzv+hDBAyvuwlPzrKMIE5mcOqZlzypPV0lbjJJc4ezViJvJUyONhHrVPlSTYSjWWJA5u8bw+ZGkig==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OSWHpE7p/xuYoSd4qe9eLkhU+IRQkqKwz1qAwZKCEY4=;
 b=krSVCiLNjkeLX4GLPdx/AX4QWQNwx1ISRl+QzkHsSbqx2y17RgA0ky/r9KaVo2EzwDD0tcdPJ7ng8rTgInRoTU0RY7k2y80vmvh6UU23HuLtLxWjiyMNbQA+kFyb+YBDROWmxXnD3eJQQbpYrusBuTv7xdYFtBugIsnW3bq9+z4=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, "patches@linaro.org"
	<patches@linaro.org>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Michal
 Orzel <michal.orzel@amd.com>
Subject: Re: [XEN PATCH v6 7/7] xen/arm: ffa: support notification
Thread-Topic: [XEN PATCH v6 7/7] xen/arm: ffa: support notification
Thread-Index: AQHauwMDaP+bX9VAx0y24WgBW6XzArHBJpiA
Date: Mon, 10 Jun 2024 15:53:27 +0000
Message-ID: <AD483AD3-CE42-41A8-87AB-5793F6CFE16D@arm.com>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
 <20240610065343.2594943-8-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-8-jens.wiklander@linaro.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	DB9PR08MB6588:EE_|VE1PR08MB5632:EE_|AMS0EPF00000195:EE_|DB9PR08MB7793:EE_
X-MS-Office365-Filtering-Correlation-Id: f0741e98-413e-41bf-2ca6-08dc89657dae
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230031|1800799015|376005|366007|38070700009;
X-Microsoft-Antispam-Message-Info-Original:
 =?us-ascii?Q?9PQEM/rMn7eumsqvs5ybZc1e+tAqgiVAPIp2gutuk+H3J8EkfDJC7o2rtAPj?=
 =?us-ascii?Q?CBBdWM5tVkn7Zvscrqfo+gvkQcU7fZXCF0dmHZbIY3TiGH6bswl2YKmEBAqT?=
 =?us-ascii?Q?hgOVuGxhTia5oLV2/JP6X9ITb0sxGtgqVXr96D6fR6uitfaGyvg0dF8MKPFF?=
 =?us-ascii?Q?pCz/t4WDMohrdWXxqslk7i80KS5j8qXv/vYqzgacqz0Mz9ZY9AhZe9JTiIs7?=
 =?us-ascii?Q?gX7Gt6wyZXIcMFdzDe9geK09OsVcldYltfWgWNkjvzfPiUe9Vg2qncg6OOKq?=
 =?us-ascii?Q?QfwL2c8PEcQzRlM+FeoTObAqt9H6Q4nHiME5AsnFesQmajYvXXiLlTtSGPDi?=
 =?us-ascii?Q?AN0hrlDyOVGlYi2IE024AFXguuRPWLaAFmCcJW/uQdv1pBM3y5jN9GkSDf5Q?=
 =?us-ascii?Q?rhFLqZuGMvoOpFnUk+mtKUTQppfeDIWB2xIVzmg67owBMeaasqkXZt9B4ksO?=
 =?us-ascii?Q?scIfLOxDRMs6gQ9ebrchmco+RFZ/26tSXDG1A2jlV7D2O9ZJaS+Icn2Ie1++?=
 =?us-ascii?Q?SkPeASkskRTGJKtCC2IpIy9Un7+8aL4519wH2dVSjIEQBRo+moBLSPOCEitB?=
 =?us-ascii?Q?eyB7c7q6+5b0BNFlST/SoT9iN9MAd6JLg+zU0We9l5qJQE3svuwMesiPZ/pQ?=
 =?us-ascii?Q?lDAdhMoSodxKH2MUr8M5ZXqsjtvI7N8cEU9L8+s+it9w7rENEKvV5KlrAPUg?=
 =?us-ascii?Q?FFiMmA4Yv4BIV+LQkGZlNXql9JfG8VeClLg4Q4BM4rzxwqFEJTyCisPmLpHc?=
 =?us-ascii?Q?wwYrxtvhX5H07kVRDBb/kpsMuY/rIQZl4bt+DRx4y2T37pyOuFkSd6ccG8ru?=
 =?us-ascii?Q?0Jg9CBCpgMK8HOsrS0SUqZx70Q+/TSwCX1mJzEAhlkR01RCP++Wkego14/KK?=
 =?us-ascii?Q?3KHrjIRQII06XwegpMBmxUez/LtLDnxUp/GgWmAiHmwH4YW+o/MGeLoXPEUx?=
 =?us-ascii?Q?PXuvbPiWIAAD8amCj2268ATGEu6eBOLmPJFlzSEjqPbRJyfiSm5ZcBtemaQ1?=
 =?us-ascii?Q?OYgziSm8uZJ+OJ63Fhwc00yDr/GOM5bMCPQ5CbxEFPQNkyDFndAxojg4b/vF?=
 =?us-ascii?Q?VdYIbgT3m+p9BxLz3QhHHvdNC5ooXq+tYqhKMr86Iu4JpiDkaBzNke9/fJTC?=
 =?us-ascii?Q?BU/1m3Fvb1tfItTZJPaJoad7j8Zu2BOz40oU1T2052QMDOWG1qrZrKmanbo6?=
 =?us-ascii?Q?6MZ/sWDHeVv8c0TwmyiGWzuII5aSvGt7X6Wh6WqP+hLPv4A57mjGh93UaXnC?=
 =?us-ascii?Q?b6VD4KF6yXH1dOH4tJuie7DObHzfiEXoC52pExb81FtHAiqPhC8zoW0GIMa/?=
 =?us-ascii?Q?jQzFuNsM13iVZsNsDc/Brn2JEoqlFNE6AZ1a2kcNGs8juA3dB0qd1pUwpadn?=
 =?us-ascii?Q?XAFg/fI=3D?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR08MB6588.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(1800799015)(376005)(366007)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <E483D945B42A5E44809F0B5A71EA08A2@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5632
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AMS0EPF00000195.eurprd05.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ebb49d8b-3b4b-4a90-45a1-08dc896577da
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|35042699013|82310400017|376005|36860700004|1800799015;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?0hhTPs/lPW3Cd2hMj2GV++cMWbLYlE6usVLgcuZnsHF/KufE9CumpBph9xcu?=
 =?us-ascii?Q?kLabxi68jKcAD9pK7+ccUygT8dJYQRKo2zOh009ehOMlBRP/hIMQZh6qg+pj?=
 =?us-ascii?Q?E+Sb6sa/BFs8GonmymjqvXFSB3sscQLlbyT5hXsZHszET0Kle+L4Bo/Y5bZO?=
 =?us-ascii?Q?kYWmtFJhjTpCFs2+Xy40hmAbZjdeNAYtj4NFXMwvOBbnJqoJEyYVJHGghBHe?=
 =?us-ascii?Q?bd53vtorpTY2oWIbGggrg6fOsYUYY2IYEQsCekK6wOEJZ5NX3iima2nLrnSn?=
 =?us-ascii?Q?Yg+yaYdQ6YzUNAcZ5SZFN2R6FTch/0+hVclG+846ufCauxoz63KAOY041+9V?=
 =?us-ascii?Q?a8qla6T16tbXNCKg5Hj8utzHt/OTU/RsRnCkwYQzj4CYFChTpSost/C37VE8?=
 =?us-ascii?Q?7BxXFzFBPsAkRPweTKCSXrJzNmDutd0kVo9963Arjd+6zbkBtktwF8UvgQ/x?=
 =?us-ascii?Q?MgYI/XanW7Ge0ewurxtCBAjU1c5Tsc+ixF7sOeFWXX7W3WoADdoMgCgNbo5w?=
 =?us-ascii?Q?5BOL2mwBJCWpky0QBuPuMb1eHYEflYQRy1Phu00jp4hxqYBNoeuX3wtMOJY6?=
 =?us-ascii?Q?X2X+eC3f1dx8KTKHoTNBZwslZ6gGmOp45N8+sf/rzhunmm5QYH/vl/DT2Yer?=
 =?us-ascii?Q?E40TQzlZoHQ422rLqKgLQyt8QttvEBxuJkx/P//2/Eb7jKtQdXwlVoSryneM?=
 =?us-ascii?Q?E6OTtKDHqdOwiwzJZ/OmIGoG/bwQt7IkfKwicGkwVDRFBaU9PzonXzuuj5CP?=
 =?us-ascii?Q?Gm1X6GhwvJjBZRcKwP5ImLCpCVvVDJZTP9BwStCxdOjIB4haoPNaGpUinFOW?=
 =?us-ascii?Q?XUGjh7ucVuWaqTKlFCtf3Nm4dN53TMut5F+65HBrJSy7gdeA1UGJp0bsMS3O?=
 =?us-ascii?Q?uYHg8qzvN+6dE+gK15jUnhKnZIjFkHx5+T40PdVNsnT1ga47m0Ipu7Keh9uU?=
 =?us-ascii?Q?a9/Yqgp1WY3HLXkNwIGFbFxCEc33Z6mNlh66n+gWvpX8g9TcckBHhroYDRAJ?=
 =?us-ascii?Q?zjS4kwY27llRGJjCgmOgYgHqRIXnegogMqpkYr8j60OAWiu3tRjqhnUTOGiQ?=
 =?us-ascii?Q?sdnKb3SLn85svhe+M0WcQUXP7F7WkD263r73MhoWEEghNnAFQ57gyTLr/0Vd?=
 =?us-ascii?Q?ZGNTBbsgB6+zk8NNdDOl1TpESvuDN4Nh0zvDwhkl6pJOLBsJ+onBFR8LydT5?=
 =?us-ascii?Q?nqqajoufMCWT518YZnMvqbd6HE4N1UvU75+CXMyOwj51SdkiU1KsB7IQcrcL?=
 =?us-ascii?Q?dTbljzyeBsraze6ywgQkqT+NATWEtZ2tZH6VBwMdq+3cuNrfyL4ZxuiTVnuJ?=
 =?us-ascii?Q?OYxk+ABmku2uly2b9bUwK5i1p0/H4T9IJHFCx8zht5Xnhp91IbrqHIq9hblB?=
 =?us-ascii?Q?Xgrc/HWlYMjRQdXfvmYuNpltay3yygn8q4EmvT3i8Irrs22cnw=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230031)(35042699013)(82310400017)(376005)(36860700004)(1800799015);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2024 15:53:37.2567
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f0741e98-413e-41bf-2ca6-08dc89657dae
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AMS0EPF00000195.eurprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB7793

Hi Jens,

> On 10 Jun 2024, at 08:53, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>
> Add support for FF-A notifications, currently limited to an SP (Secure
> Partition) sending an asynchronous notification to a guest.
>
> Guests and Xen itself are made aware of pending notifications with an
> interrupt. The interrupt handler triggers a tasklet to retrieve the
> notifications using the FF-A ABI and deliver them to their destinations.
>
> Update ffa_partinfo_domain_init() to return an error code like
> ffa_notif_domain_init().
>
> Signed-off-by: Jens Wiklander <jens.wiklander@linaro.org>

Thanks for the fixes.

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:54:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:54:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737418.1143720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhLz-0008B3-Ir; Mon, 10 Jun 2024 15:54:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737418.1143720; Mon, 10 Jun 2024 15:54:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhLz-0008Aw-Ff; Mon, 10 Jun 2024 15:54:39 +0000
Received: by outflank-mailman (input) for mailman id 737418;
 Mon, 10 Jun 2024 15:54:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n2do=NM=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1sGhLy-00088v-5t
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:54:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20600.outbound.protection.outlook.com
 [2a01:111:f403:2613::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bc79a9f7-2741-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 17:54:36 +0200 (CEST)
Received: from AM0PR01CA0175.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:aa::44) by GV2PR08MB8124.eurprd08.prod.outlook.com
 (2603:10a6:150:74::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Mon, 10 Jun
 2024 15:54:29 +0000
Received: from AMS0EPF000001B0.eurprd05.prod.outlook.com
 (2603:10a6:208:aa:cafe::21) by AM0PR01CA0175.outlook.office365.com
 (2603:10a6:208:aa::44) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.24 via Frontend
 Transport; Mon, 10 Jun 2024 15:54:29 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AMS0EPF000001B0.mail.protection.outlook.com (10.167.16.164) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Mon, 10 Jun 2024 15:54:28 +0000
Received: ("Tessian outbound 221fbec6f361:v332");
 Mon, 10 Jun 2024 15:54:28 +0000
Received: from 6a90a6854fa5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 17213163-404A-447C-A4A6-79ED0BCB7E38.1; 
 Mon, 10 Jun 2024 15:54:21 +0000
Received: from EUR02-AM0-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 6a90a6854fa5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 10 Jun 2024 15:54:21 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com (2603:10a6:10:25a::24)
 by PAXPR08MB6656.eurprd08.prod.outlook.com (2603:10a6:102:135::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Mon, 10 Jun
 2024 15:54:18 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a]) by DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a%5]) with mapi id 15.20.7633.036; Mon, 10 Jun 2024
 15:54:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc79a9f7-2741-11ef-b4bb-af5377834399
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=VW2LiGdnb2XUJRJdWgE0e9OwVnilmxnDZGjOtAKlH8vzo+W37hKgK/zcuZNKGGviNPvJ50BbHgeVIlw/JiFyP8DrjXP4I6U4dpqoKa2cDD10m85C1DN8a17XeSGROyC+bnWw0CP5NgSHoQjdgkloySkm4g7rcrVUG0t+plLKZH92qF22VUVjjzy+szht1ZHUCFYuwh+R4JUA8YMvCJZgs4MhH7TZz6I+7M1ELcSW19YyPyQZOPHh1cGltpOXGUpVFrT1Icy2caRDKto1pKVZ9TS4cs2odfPNyRWpq32GCF9ndQWGYcccHHCXB6YX9NO1k92LonxhRrIrsr1VqX734Q==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gTWnI8wwauVsQUR9yslO8/Uj3sbFNoZs/LAgkpZb6Dg=;
 b=T2MsZHZJfWVGDE4OztDKzoZj12vrFRLPbxfkXdJm368b3rUPKaHUZsxmEaprvChD9h8ygI+r4FoicMVSvmiR09uTd4C4WBpBIQCMYML4caXEzUjlXlgNPogQkM2fMiy+U6OsEYtAip1G6U/KiMlikx0OjjPv54r1YTlxh0Li8rpypwSwGPW8NuStrKQA214Am08VTkOL31IUC2vO3F3n05Om4/TgwFpnjaFT9uMsOnOYij5POggkVZVgp4lJULbKw6zRS30fKMOnTFHjt1ahCH959FwulLjZki3+l/Syp5IrbgLBKevijRpudu+lY5KvN5N3E5ra5SzPC7qQ3bl19w==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gTWnI8wwauVsQUR9yslO8/Uj3sbFNoZs/LAgkpZb6Dg=;
 b=jks4M6psIs4om8qRRLZeTSdf0zzG+lUxJAuyaKdIDgD64xZOzt8+hdGQomPSGdvnQ1dYiuOD8EUCr5Sog5RP/tFTSi6hSPM8+E64MIgBaTP/F2N86Ma9SqbMU3hs7TfpEUN8ArKelMFaA7Tn+a88d9Ht5KOEPcmKRQNAJNzL+hM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: faf34b9208f580df
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=bKMTZ68IDMYQsxnbzykjWI6Y2NN/WXXiGHqWpCQdDtku2Addweoy8u6ZZDd6YgBhd7HjZWz5TZ5dRi+mQJv1IYr0eLSvAG0AvqnF5PMQpzFrJoh4hfEeuq5VwMJZimdLYkZXXSzUYPtC/jevSHO1/VDz/sF2DvOfVevqasa3ROPqrCtwACLKvFAC8CVuH7uY2+1rlQb3f1SPT5s1TmE159RtM/unyP2fKWOC/r0o93X4By6VQ+QVN8BLp0jwuY+V+vVoaV6Yt5oQPOw7hZ6pUlUr59ahENx7M3yExOCYLsxtmT3Wala2s3u2Tww4Ki1cRcdKhFYHEYzViH6QMUUCRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=gTWnI8wwauVsQUR9yslO8/Uj3sbFNoZs/LAgkpZb6Dg=;
 b=STBMQJoKvwe17y8k66S4XRUjshNoBODBoSWdxctsrMDfttxfRNmZjgsSNnqYanUZa3XJwnI60wT1H08oaWFpDaUssc/brI8Zs4Bd5bxdL0noP83uKt8XrYk1cCmEPuIA+rG9+h08x0YG4Cha+RZEZ/nMUyVMqSKyOYec6ACX5cWaawmcvRNwJw/NWbvTYMDrDEquNR46Xkdd3d2y2lV+2/v7aeisCUtf3u2KEM3k75Y9u9zxYR+05et8SPreD0/CBIZfQs1omWS9BlU/nkEOwjCIztoovXtUzVYeJMcYgiGz8d6355PyGYVgSxSH3nu2X8ZMe7AMlEMYlCDsFVUBJQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gTWnI8wwauVsQUR9yslO8/Uj3sbFNoZs/LAgkpZb6Dg=;
 b=jks4M6psIs4om8qRRLZeTSdf0zzG+lUxJAuyaKdIDgD64xZOzt8+hdGQomPSGdvnQ1dYiuOD8EUCr5Sog5RP/tFTSi6hSPM8+E64MIgBaTP/F2N86Ma9SqbMU3hs7TfpEUN8ArKelMFaA7Tn+a88d9Ht5KOEPcmKRQNAJNzL+hM=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jens Wiklander <jens.wiklander@linaro.org>
CC: Xen-devel <xen-devel@lists.xenproject.org>, "patches@linaro.org"
	<patches@linaro.org>, Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Michal
 Orzel <michal.orzel@amd.com>
Subject: Re: [XEN PATCH v6 0/7] FF-A notifications
Thread-Topic: [XEN PATCH v6 0/7] FF-A notifications
Thread-Index: AQHauwL3zf/sfUoleE2hXc1GLXTtIrHBJtUA
Date: Mon, 10 Jun 2024 15:54:18 +0000
Message-ID: <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
In-Reply-To: <20240610065343.2594943-1-jens.wiklander@linaro.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	DB9PR08MB6588:EE_|PAXPR08MB6656:EE_|AMS0EPF000001B0:EE_|GV2PR08MB8124:EE_
X-MS-Office365-Filtering-Correlation-Id: 71000f88-a2a1-4b92-8259-08dc89659c25
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230031|1800799015|376005|366007|38070700009;
X-Microsoft-Antispam-Message-Info-Original:
 =?us-ascii?Q?4Uleo4FmdebGHHf2U8loBeZH7fX6Bys8ISidMtFVH1tUzpl8AyIM3KYN5bSa?=
 =?us-ascii?Q?cniWuclSxUQHERrpOvKmfwVLKokdyytsFWwwSoguvPx6iZbzm8/SC27+F7L1?=
 =?us-ascii?Q?FOBBY8ml+tzZ8iI5YE7F9ohcHi/xhG+O+UJdrQks87PBuPoTiaX124KYaAvb?=
 =?us-ascii?Q?ceJErJfn1OcW+3AqQwRS6CTqzn+S4DC+oTb+E38zWexec+EbPkxgRrmMlrhs?=
 =?us-ascii?Q?igZAlANmXQcAP1hq9SDu+FsJKnXaE84UQ8k/nMjhfxO8kxKRQLODmsjNENXF?=
 =?us-ascii?Q?C2zj9W420YNLh3oXhim4F+4de9M9+v84qjB/yuxR9DIfIfUIMK3F/awG59oc?=
 =?us-ascii?Q?tuYDlca7RZEXQ0IirksP0Ro/K47ir7JKVkdXL2mqBJYlkvcAFo2JTME/xbnw?=
 =?us-ascii?Q?nW3ajum5zQt3gE+DT0sdWntsyGgYu0H2WpAGLanpyO50adKkkl36a/69EJ56?=
 =?us-ascii?Q?/K7c2c/h9/R0nuHTd3f7BPskzy4143/5J6PtwS6ea5zRTJ3+Nn4u9TTTpE3o?=
 =?us-ascii?Q?rk4WuQZnrVwz4VIljvzvXT5ZJlsRm7OFmy5RaSiYC0mc/Y+MjwUoXSnnMOes?=
 =?us-ascii?Q?A2fyLTlsSMbai3uGaulTSKaLS2pVHaOPAuPF6TVXKsb2El8dGgFSQiwcJwoZ?=
 =?us-ascii?Q?HSVWhFoZM7ZaBOUOIk+XJl/rl9wyqo9BS4v6qzVON2hpxXKdNMcUJh0iRuLy?=
 =?us-ascii?Q?kbH8vaa/UFyqsIN2KMvhCNRkyT+txInPlICogPoMzId0/hFlj3E33OJ3HjaR?=
 =?us-ascii?Q?f7j0Q3hsGY57rgHTrjdGhdbsaL5iAN4503K8psuyXDtVk9MpI0FNNWwUjdns?=
 =?us-ascii?Q?fI2XEzMC24vKK1XY91OY/8Es1M2iyxr7QId6NTqpGZ0SGfZ//nBw/0RchXW0?=
 =?us-ascii?Q?lcqmMsLbG4y65svh72sGYjaFIe8ZU86eRXfSL9QyjVyeVlnjIrDjmkoDDVLs?=
 =?us-ascii?Q?kYQffR7Yzbo2LJllnwuqtN0rMEPelsYI0vjR/aEX2Iw6Jscy1b/J4Nc9YE2O?=
 =?us-ascii?Q?lP+QYT0jPBZHp47Sv1RqDGkQWu43CmxkLm6Q/x/3IvhNn1rpYqxQB7cud/dX?=
 =?us-ascii?Q?ruYO6urV/ZTgbT1HM0pGk3sCMUuXP/PKmWM9peZtQsK8rDxDfHSowLtMb2uh?=
 =?us-ascii?Q?Q+7CvosmCqNFG32A3g3eklSHPURb/1AHUF9IUlypVcWBN/DShJjKHKPhDHAm?=
 =?us-ascii?Q?iuTEaENm+ZUeX3fXSECC/XMClmcqiXMs4v2Z5u4cDFlgVvBjKOpiMzVY1Rda?=
 =?us-ascii?Q?/usni4y74FfgPk8VP7IzczxObFvyszM1G1EUfpU1zaFIQ2I0wLuPx67ef4NC?=
 =?us-ascii?Q?qO9mRZg5R8O+MMllnmZQaIsB?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR08MB6588.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(1800799015)(376005)(366007)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <C5BE74499A62E64E96968EA6B71B5B4A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6656
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AMS0EPF000001B0.eurprd05.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1bf6584f-07e6-4786-8c87-08dc89659634
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|35042699013|82310400017|1800799015|36860700004|376005;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?Qf5N0PDeFikb1BefHbn6Y9AP6EnhM5tGPGW34oP6WwLAzvF/kuhmtyqFZOm5?=
 =?us-ascii?Q?1cRyxwMg1ikVMc7CT5LEdQ3qej5/x9m5b8Oktv3osphqW/HgOtWXofUUaoUN?=
 =?us-ascii?Q?kDFqNQTT0lkgY8afCf39eT7u4b1ots7cpYKzasbYszzmo9iRMiVP/yhetcgg?=
 =?us-ascii?Q?/VnEqb6bFPen2TqMfT08V2A3l38fHxRi8/dYb/Zse3BU497VfHA0XNMW3ZAR?=
 =?us-ascii?Q?zccx4IHPglTU0ESGhG0hJ6lTWklFHV0UCZZN64QMS6KFt60BEVPPsC8Gm3zf?=
 =?us-ascii?Q?g6hsl8OHs8uOOtr0cVOXAC5ISopTygSRVi7oh227djlMN2j5KDv+Q6fdXgUs?=
 =?us-ascii?Q?VDrYnnvbRqvY4l37mD3Po0n16gCnv6TetORgbq2fWwmpTQ8ZU/zG+idYuOK2?=
 =?us-ascii?Q?GZq+WEJvjYRfwXA8P+zpVozOrxmUFQMKa0ohdOLTP5nr9UdPPD/Sa10nJ6Li?=
 =?us-ascii?Q?VqaCtXQtWrWH1CK9JDXbJWpRWMDhFTedYxlCOMbxRl1vTJxX9C6hbx0S/1c9?=
 =?us-ascii?Q?5STmx/jm7ItQIz8ErxDSGw1go2I8iZJMHvGZZ/AtjwNu9x1BDx0TJ7dneBJ0?=
 =?us-ascii?Q?HNrmJVrI9A3xTttROGE0jZR3PD9J/KpGIEgEe1zHONf/SVEMXugtKsCnu9fn?=
 =?us-ascii?Q?O6JuYD4sRpALLgPskJxMuIFDeOq3/bZ6BtqaU5HaY2JiLoXcEYUv5XpDLq9J?=
 =?us-ascii?Q?rkuFQ9llVLVqWPjciKaG8OXhr0GwdAcCzvjdQ5CP60tF1iDo7CZNiWZzDtSV?=
 =?us-ascii?Q?GPcA6PyaJREXap7N7T+qqI0ZRzuTjDn08Sxt8NDIurb1n4zhOx19+S0T8W+r?=
 =?us-ascii?Q?uw3Uolk/h0KwLLnLXbG+w+RONGtEchGnfsu/493IsAzFp98rWWu0SjYq95/7?=
 =?us-ascii?Q?4v+FOQnZ8vfkB9QMO+ap3P5sPdfcZxl5pOJTh2esJM5BZXaRhmwla+KEIAZV?=
 =?us-ascii?Q?AkoxonnXyfjj0lemLqCapSFeDEoKbB9HQdczeOJKv4fJkeL5pw6tG9Zlzkip?=
 =?us-ascii?Q?mw/4E5C3i3f2bfw52Q6a4DnK2Mocc3o7V7kWGN5ZaMRNDBCeSEgOhl9BFoy+?=
 =?us-ascii?Q?SLC8cY4MGVvR22qVq3lU8jfo9WW8hmf4JjIZoECmVDwoXg64un8uiHuU4f+2?=
 =?us-ascii?Q?Pna1xZI9Wj2XJ9Vxrwnak1bpJc5ySBJWY+RWHivrRcl4x1WNSKhRtXcj1Frw?=
 =?us-ascii?Q?qzPRsmxA1GTaTvRdNksd8KvXpcf7wpvTml8X4htvx2eSz3dGjYVnpCK8gJEP?=
 =?us-ascii?Q?WixO+2Qod34gQeXk80fsBnV+ftq5z01SNfCr/R5vIWtpS2nvG2WdSBQO/epz?=
 =?us-ascii?Q?llGSqPOMsTZ5Bb6wG4ZkrS5BKEqF39f2u5R/axLnpaXSSxj1LHHELvAJQJpY?=
 =?us-ascii?Q?UY+KBoI=3D?=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230031)(35042699013)(82310400017)(1800799015)(36860700004)(376005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 Jun 2024 15:54:28.3548
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 71000f88-a2a1-4b92-8259-08dc89659c25
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AMS0EPF000001B0.eurprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB8124

Hi Jens,

> On 10 Jun 2024, at 08:53, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>
> Hi,
>
> This patch set adds support for FF-A notifications. We only support
> global notifications; per-vCPU notifications remain unsupported.
>
> The first three patches are further cleanup and can be merged before the
> rest if desired.
>
> A physical SGI is used to make Xen aware of pending FF-A notifications. The
> physical SGI is selected by the SPMC in the secure world. Since it must not
> already be used by Xen, the SPMC is in practice forced to donate one of the
> secure SGIs, but that's normally not a problem. The SGI handling in Xen is
> updated to support registration of handlers for SGIs that aren't statically
> assigned, that is, SGI IDs above GIC_SGI_MAX.
>
> The patch "xen/arm: add and call init_tee_secondary()" provides a hook for
> registering the needed per-cpu interrupt handler in "xen/arm: ffa: support
> notification".
>
> The patch "xen/arm: add and call tee_free_domain_ctx()" provides a hook for
> later freeing of the TEE context. This hook is used in "xen/arm: ffa:
> support notification" and avoids the problem of the TEE context being freed
> while we need to access it when handling a Schedule Receiver interrupt. It
> was suggested as an alternative in [1] that the TEE context could be freed
> from complete_domain_destroy().
>
> [1] https://lore.kernel.org/all/CAHUa44H4YpoxYT7e6WNH5XJFpitZQjqP9Ng4SmTy4eWhyN+F+w@mail.gmail.com/
>
> Thanks,
> Jens
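The SGI-handling scheme described above -- statically assigned IDs below GIC_SGI_MAX, with higher IDs claimable at runtime, e.g. for an SPMC-donated notification SGI -- could be sketched roughly as below. This is a minimal illustration only; request_sgi(), dispatch_sgi(), note_sgi and NR_SGIS are hypothetical names, not Xen's actual API.

```c
#include <stddef.h>

/* Illustrative values: SGI IDs below GIC_SGI_MAX are statically
 * assigned; IDs from GIC_SGI_MAX up to NR_SGIS may be claimed at
 * runtime (e.g. the notification SGI donated by the SPMC). */
#define GIC_SGI_MAX 8
#define NR_SGIS     16

typedef void (*sgi_handler_t)(unsigned int sgi);

static sgi_handler_t sgi_handlers[NR_SGIS];

/* Claim a dynamic SGI; reject statically assigned or busy IDs. */
int request_sgi(unsigned int sgi, sgi_handler_t fn)
{
    if (sgi < GIC_SGI_MAX || sgi >= NR_SGIS || sgi_handlers[sgi])
        return -1;
    sgi_handlers[sgi] = fn;
    return 0;
}

/* Called from the interrupt path when an SGI fires. */
int dispatch_sgi(unsigned int sgi)
{
    if (sgi >= NR_SGIS || !sgi_handlers[sgi])
        return -1;
    sgi_handlers[sgi](sgi);
    return 0;
}

/* Demo handler (illustrative only): records the last SGI seen. */
static unsigned int last_sgi;
static void note_sgi(unsigned int sgi) { last_sgi = sgi; }
```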

All patches are now reviewed and/or acked so I think they can get in for the release.

Regards
Bertrand



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:55:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:55:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737425.1143730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhN7-0000Pw-0O; Mon, 10 Jun 2024 15:55:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737425.1143730; Mon, 10 Jun 2024 15:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhN6-0000Pp-TI; Mon, 10 Jun 2024 15:55:48 +0000
Received: by outflank-mailman (input) for mailman id 737425;
 Mon, 10 Jun 2024 15:55:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oRC9=NM=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sGhN5-0000Ph-H4
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:55:47 +0000
Received: from mail-oi1-x22a.google.com (mail-oi1-x22a.google.com
 [2607:f8b0:4864:20::22a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e61a7610-2741-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:55:46 +0200 (CEST)
Received: by mail-oi1-x22a.google.com with SMTP id
 5614622812f47-3d220039bc6so1057186b6e.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:55:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e61a7610-2741-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718034945; x=1718639745; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=z2w6PSj8mpMLLY7MVOid/n75KZdOEiDxsszO1eh6e0g=;
        b=Ql/XYO3+/p47O2QKtUA1Ppok1QrJYjUsGt4OXJhhLdOf+12naZQWgvBGmy8kclERLl
         PgJ7Mu/cfRK15BqoUgkef4fomjmi85ChBB174+OzW0D07a4R9bMOTHRf0/Op+HztWUQ2
         zEiEKSuIwLbT2Svsx4KFyFuOj4iAnx83oFrSg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718034945; x=1718639745;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=z2w6PSj8mpMLLY7MVOid/n75KZdOEiDxsszO1eh6e0g=;
        b=hPkpIdbOdZd+0OcgT+tql8dsk9zkDU4W7Ox6Tc2bM2BOx9bSlW47SzVkUPPqEJUIb7
         W4/pc7XTeqxFbc1gVkwU11KDMuO6SekCQEnUkMSZmTRnSu67eIru+6/ReD61Jx/Qhv1d
         7YoCH1OW0KQTcrLM3i2inqpXlk33f4Ua2aZFbpYFFs7w7AYUqyYTonI5f04B4uvRo1WK
         NgPDDDWpDVqBAsg9VomC7kjgvfHyZ+ZqTJhY7qiokc/fJOWUnukputqzRtgUm+3w/kzm
         eP3phIUXm0NRpTboZ+htuX1rrXV5waG14tFsE5KGaKTWVxLF6LjevQwa4jtzwww8lslk
         FvnA==
X-Gm-Message-State: AOJu0YxjV60RRLjXmGEoLxVk2fPiWVAY/5M9dgb/391qXHETPZIxeK6J
	06+C1W0cB5Mutu6AQHgEI5mmBSpbGPGeYodRX/FV2PGixpcMd/xLDVamZyjAfVe5N84JQPFhYVh
	KamgLS+mwKiuHbEccaEHX0Nm3GOtmq9iXKHHWbQ==
X-Google-Smtp-Source: AGHT+IENT19HJ9VMpxwSIhvqeehdG66MzmQ9GZSYhsx1RsJWD1gBssMaOM+lYSnB1G+tQ7LnNq2JFnk9z7VTbWWAqwA=
X-Received: by 2002:a05:6870:93cb:b0:250:7a8d:1756 with SMTP id
 586e51a60fabf-25464b78e32mr10900044fac.12.1718034945272; Mon, 10 Jun 2024
 08:55:45 -0700 (PDT)
MIME-Version: 1.0
References: <20240610085052.8499-1-roger.pau@citrix.com>
In-Reply-To: <20240610085052.8499-1-roger.pau@citrix.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Mon, 10 Jun 2024 16:55:34 +0100
Message-ID: <CA+zSX=Z3O_b44Jum3s9rRJ_h+BKjJzd11gAr249wFOxQCcFKEQ@mail.gmail.com>
Subject: Re: [PATCH for-4.19 v2] x86/pvh: declare PVH dom0 supported with caveats
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, 
	Oleksii Kurochko <oleksii.kurochko@gmail.com>, 
	Community Manager <community.manager@xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 9:50 AM Roger Pau Monne <roger.pau@citrix.com> wrote:
>
> PVH dom0 is functionally very similar to PVH domU except for the domain
> builder and the added set of hypercalls available to it.
>
> The main concern with declaring it "Supported" is the lack of some features
> when compared to classic PV dom0, hence switch its status to supported with
> caveats.  List the known missing features; there might be more features missing
> or not working as expected apart from the ones listed.
>
> Note there's some (limited) PVH dom0 testing on both osstest and gitlab.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> ---
> Changes since v1:
>  - Remove boot warning.
> ---
>  CHANGELOG.md                  |  1 +
>  SUPPORT.md                    | 15 ++++++++++++++-
>  xen/arch/x86/hvm/dom0_build.c |  1 -
>  3 files changed, 15 insertions(+), 2 deletions(-)
>
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 201478aa1c0e..1778419cae64 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>     - HVM PIRQs are disabled by default.
>     - Reduce IOMMU setup time for hardware domain.
>     - Allow HVM/PVH domains to map foreign pages.
> +   - Declare PVH dom0 supported with caveats.
>  - xl/libxl configures vkb=[] for HVM domains with priority over vkb_device.
>   - Increase the maximum number of CPUs Xen can be built for from 4095 to
>     16383.
> diff --git a/SUPPORT.md b/SUPPORT.md
> index d5d60c62ec11..711aacf34662 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -161,7 +161,20 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM).
>  Dom0 support requires an IOMMU (Intel VT-d / AMD IOMMU).
>
>      Status, domU: Supported
> -    Status, dom0: Experimental
> +    Status, dom0: Supported, with caveats
> +
> +PVH dom0 hasn't received the same test coverage as PV dom0, so it can exhibit
> +unexpected behavior or issues on some hardware.

What are the criteria for removing this paragraph?

FAOD I'm OK with it being checked in as-is, but I feel like this
paragraph is somewhat anomalous, and would at least like to have an
idea what might trigger its removal.

 -George


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 15:58:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 15:58:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737434.1143739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhPt-0001Q6-BS; Mon, 10 Jun 2024 15:58:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737434.1143739; Mon, 10 Jun 2024 15:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhPt-0001Pz-8z; Mon, 10 Jun 2024 15:58:41 +0000
Received: by outflank-mailman (input) for mailman id 737434;
 Mon, 10 Jun 2024 15:58:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGhPr-0001P6-VI
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 15:58:39 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d179d9e-2742-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 17:58:38 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a6e43dad8ecso495944166b.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 08:58:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6c8072ac1fsm653146266b.222.2024.06.10.08.58.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 08:58:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d179d9e-2742-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718035118; x=1718639918; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ha3gXWnhYiXnhWqYZL762tgIJAgzO9EIImER1q6mngg=;
        b=Z5zKb0WZzscSv3hQYZQtV8VzUCpLeFiHKc0fhpeKkyJRu+/lRhAUEWkBtKrptqjfm1
         L5iH2ycHjDRviK5NwAF+JJWBwvTquWpadlFQ0YryBbtlgRA7Upw68LsFQqo968zzrkeO
         KxBKI0suW3DVEOFl4g6dxybHI7+pVNB1vU6KcSb80IYhd/eUMy0sSvnDHbPTsE+CX9Jw
         4F/vQ6wCUm1/85r5wzHkM4xJeXaiKPZNhFK0fh+VJr0pIpH0KLAB4qHSKAMm4QlKhwqW
         iIdplGZCtY+7xz1+/dee+BvJWa4MCRfyao6vpDeOKy37H94xkCXM3xhUsWVynHRhsx9E
         lXuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718035118; x=1718639918;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ha3gXWnhYiXnhWqYZL762tgIJAgzO9EIImER1q6mngg=;
        b=LiursuIoUQU0ZIQuUmoi6oS5OeJHTTZFbeog4Ajomf1Hd8r37Hn06oKj4br92up0lb
         4eaGwXPu+NRlm7bVah45bzKiNYt1KKlkUGz8OM51CDq6HhCTkcfUmWMvNn/kbBby3efz
         J3HTnNcGJhuxigLS27s993ySozx8nX+6af3vRlMLmKJThOUccNfwUL9rRVJQcVWha6fW
         Z6+ZfEHh8d6boT/gxUUmhjcyUX4bOKFvxgWAAx5MCoKnaNGynytjyUq3D7NbHGUNCvtx
         FgcehjvoycWtaunUha7LkxDp9GqlBvd5GFpxlICzQVXwpVsCoQ450Tb7ioXxFDyDeTvG
         445g==
X-Forwarded-Encrypted: i=1; AJvYcCUKQUSlTCkazVzHAM0NNKfvNmKRuwPGoULxopQP3jSWxvemeZTMe6Vb9YHkrn1+GoCU4Lo162hObOS8RfGV42ZXJClr7DrdxzwwWWE+YB4=
X-Gm-Message-State: AOJu0Yw0mfq4mjnGkRjMUJHVqoFnvxzZUdJaJiutSD5sLPqQCkQwGfAU
	Ug/zt03hzpXpCqmQIkR25C9Q9/4s2woFF955jgd/OO++deZf9tYGALzC3WLwhw==
X-Google-Smtp-Source: AGHT+IGyaqcvnQ6yq7vdbXiphgSd1IzzdXKCjqkWoehW+j4W4UfNztB4kZ5DnnoguNWDZOhlEkijXg==
X-Received: by 2002:a17:906:1ccb:b0:a6e:542f:b87b with SMTP id a640c23a62f3a-a6f34cf5bf9mr5013666b.19.1718035118184;
        Mon, 10 Jun 2024 08:58:38 -0700 (PDT)
Message-ID: <efc35614-561c-4baa-9d94-d17ecb528a4b@suse.com>
Date: Mon, 10 Jun 2024 17:58:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-3-Jiqian.Chen@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240607081127.126593-3-Jiqian.Chen@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 10:11, Jiqian Chen wrote:
> If Xen runs with a PVH dom0 and an HVM domU, the HVM domain maps a pirq
> for a passthrough device by using a gsi; see the qemu code
> xen_pt_realize->xc_physdev_map_pirq and the libxl code
> pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then
> calls into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
> is not allowed because currd is a PVH dom0 and PVH has no
> X86_EMU_USE_PIRQ flag, so it fails the has_pirq check.
> 
> So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
> PHYSDEVOP_unmap_pirq so the failure path can unmap the pirq. Also
> add a new check to prevent self-mapping when the subject domain has no
> PIRQ flag.
> 
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

What's imo missing in the description is a clarification / justification of
why it is going to be a good idea (or at least an acceptable one) to expose
the concept of PIRQs to PVH. If I'm not mistaken that concept so far has
been entirely a PV one.

> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -71,8 +71,14 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>      switch ( cmd )
>      {
> +    /*
> +     * Only being permitted for management of other domains.
> +     * Further restrictions are enforced in do_physdev_op.
> +     */
>      case PHYSDEVOP_map_pirq:
>      case PHYSDEVOP_unmap_pirq:
> +        break;

Nit: Imo such a comment ought to be indented like code (statements), not
like the case labels.
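For concreteness, here is a sketch of the layout being suggested, with the comment indented like the statements rather than like the case labels. This is a standalone reconstruction of the dispatch logic from the diff, not a verbatim tree excerpt; the hypercall numbers are taken from Xen's public physdev.h, and the helper name is invented for illustration:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypercall numbers as in Xen's public/physdev.h. */
#define PHYSDEVOP_map_pirq       13
#define PHYSDEVOP_unmap_pirq     14
#define PHYSDEVOP_pci_device_add 25 /* example of an op still rejected here */

/* Standalone stand-in for the filtering done in hvm_physdev_op(). */
static bool hvm_physdev_op_permitted(int cmd)
{
    switch ( cmd )
    {
        /*
         * Only being permitted for management of other domains.
         * Further restrictions are enforced in do_physdev_op.
         */
    case PHYSDEVOP_map_pirq:
    case PHYSDEVOP_unmap_pirq:
        return true;

    default:
        return false;
    }
}
```

The placement of the comment is behaviourally irrelevant, of course; it is purely a Xen coding-style point about which indentation level block comments inside a switch should use.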

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 16:05:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 16:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737440.1143750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhVp-0003vI-Vv; Mon, 10 Jun 2024 16:04:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737440.1143750; Mon, 10 Jun 2024 16:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhVp-0003v9-Sz; Mon, 10 Jun 2024 16:04:49 +0000
Received: by outflank-mailman (input) for mailman id 737440;
 Mon, 10 Jun 2024 16:04:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGhVo-0003v1-05
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 16:04:48 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 286d05c8-2743-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 18:04:46 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6ef46d25efso345702766b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 09:04:46 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f2d3a02dcsm57697066b.106.2024.06.10.09.04.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 09:04:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 286d05c8-2743-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718035486; x=1718640286; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=bTWDUdkQF7tsItu6ZwtaKt2W0JOuvyHlIXFHCYV4rEE=;
        b=at84pYW+lFwAlW0Wd6OOM32nps9RDREyTpuWax82M2gvuV6eGYhjES0iV+sFilOnxN
         c1ikkqnc0bxJg8g7MSyS4mJUdzwSlqLQTpz8V2oHtJixYC6HNtmBlPqXaIZbssiq1/Tr
         v1XCFurrSfIm/6+DIifwV7ydtRxGH97EWwfRD3xJHpqRoxpQRISPTTEiYwIXfQGFmMR5
         WV91OqHUiYuSyqIhWz2iuA6OjMcK/LchuTr124KLbPMD+/2pyynBl3xKWlprZ2hCh/ju
         mMIaVw2DAiVIk9kKWmS1ZU6uMdaH/VQwDcoQSLS8azgGPIQq01DKxyBANE38GqlD5U4e
         qy2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718035486; x=1718640286;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bTWDUdkQF7tsItu6ZwtaKt2W0JOuvyHlIXFHCYV4rEE=;
        b=cw7nTeis3CJfbWq/BcJlHFsgcQiGHNfl3LffnGD8AQ/hDjTw4vdCn6VQnjJ/2qUiCz
         yf1So+HleDa2Lz5nA4jDpbY1+qEmZufDAm6AScGFezxyXa0kJqj6rysJtzwWb1BJV6Nz
         I6PJVeJ+VhzuC4qzJUd5C0LyRdUf68nR7a28qcAyG6X8H1ilwrjbiTK4m+TGyQU1FaBv
         GUTSsevW9fAK3/WFwqEg8yubGmwQ7UrB5F2wcUHGTMU5AZ3sM2gLEpplGQAsRgloSIjD
         d4SIILg6XoxqDepRTQ2zgf3HAxiNQmDjoo43kakeX95y91zpNbKGvs8BKo0V8oCUKbkk
         kacw==
X-Forwarded-Encrypted: i=1; AJvYcCU1vF4FrsPqhudDe6yYyue4Zba9/jobj6higt+8nAd/2AwW/ieHadRA/bhorZ8WCjbfkpZFcp8Vrwu+R6wyde7fPjAz1jE+gWkfnKZdEAI=
X-Gm-Message-State: AOJu0YzVruwaaSJ0RlB3cBs8Uzh4xAtEZCB6enCg3PiSz9HpW/fUVlFC
	9dNHGsZSQwFFLbGiw68pST5k0DCYUMI6VnFPbRQbZQV7on4XER4mluSAa+vslw==
X-Google-Smtp-Source: AGHT+IFbP5G4bPHAhpeqdqRbQ1kF8ONmRQvp1jB12XG0rjl+KPNHDNMnktyjJoLYcL2FmOIdTeI+IA==
X-Received: by 2002:a17:906:6bc8:b0:a6e:c5b0:b64e with SMTP id a640c23a62f3a-a6ec5b0b6e4mr524206766b.9.1718035486164;
        Mon, 10 Jun 2024 09:04:46 -0700 (PDT)
Message-ID: <38b5bf96-22fe-444c-824b-b4c2d6e107d0@suse.com>
Date: Mon, 10 Jun 2024 18:04:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v9 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-4-Jiqian.Chen@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240607081127.126593-4-Jiqian.Chen@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 10:11, Jiqian Chen wrote:
> On PVH dom0, the gsis don't get registered, but
> the gsi of a passthrough device must be configured for it
> to be mappable into an hvm domU.
> On the Linux kernel side, it calls PHYSDEVOP_setup_gsi for
> passthrough devices to register the gsi when dom0 is PVH.

"it calls" implies that ...

> So, add PHYSDEVOP_setup_gsi for above purpose.
> 
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
> ---
> The code link that will call this hypercall on linux kernel side is as follows
> https://lore.kernel.org/lkml/20240607075109.126277-3-Jiqian.Chen@amd.com/T/#u

... the code only to be added there would already be upstream. As I think the
hypervisor change wants to come first, this part of the description will want
re-wording along the lines of "will need to" or some such.

As to GSIs not being registered: If that's not a problem for Dom0's own
operation, I think it'll also want/need explaining why what is sufficient for
Dom0 alone isn't sufficient when pass-through comes into play.
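As background for readers, a sketch of the argument block this hypercall takes; the struct mirrors struct physdev_setup_gsi from Xen's public physdev.h, while the populate helper and its trigger/polarity choices are purely illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors struct physdev_setup_gsi from Xen's public physdev interface. */
struct physdev_setup_gsi {
    int gsi;            /* IN: the GSI to register */
    uint8_t triggering; /* IN: 0 = edge triggered, 1 = level triggered */
    uint8_t polarity;   /* IN: 0 = active high, 1 = active low */
};

/*
 * Illustrative helper (not from the patch): the argument a PVH dom0
 * kernel might pass for a typical level-triggered, active-low PCI
 * legacy interrupt before issuing the PHYSDEVOP_setup_gsi hypercall.
 */
static struct physdev_setup_gsi setup_gsi_args(int gsi)
{
    struct physdev_setup_gsi args = {
        .gsi = gsi,
        .triggering = 1, /* level */
        .polarity = 1,   /* active low */
    };
    return args;
}
```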

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 16:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 16:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737444.1143760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhYU-0004VT-Dl; Mon, 10 Jun 2024 16:07:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737444.1143760; Mon, 10 Jun 2024 16:07:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhYU-0004VM-Ak; Mon, 10 Jun 2024 16:07:34 +0000
Received: by outflank-mailman (input) for mailman id 737444;
 Mon, 10 Jun 2024 16:07:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mjq2=NM=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGhYS-0004Tz-Bb
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 16:07:32 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8a284a27-2743-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 18:07:31 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57c75464e77so42285a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 09:07:31 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1ec85a74sm156717866b.56.2024.06.10.09.07.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 09:07:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a284a27-2743-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718035650; x=1718640450; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=F8T0Yn8TRGJABK7UP2QjtOX0FaifK/kYwMk5L2z4/6I=;
        b=QAMbSRn0qi7HwjBYPp1B9oaSsfmEofbjNvbnCBDuNDCe4m2N/WUJokB7goqd69iArt
         U3jezbDgSBxZHVIxFKRnYXzJ/IoEIRx8oWmEtyywDOh6OSX293Vq9iuhi7WDyv956INg
         D2k9O2rpRXnD82SwoBOPCIujeTbrBzUV5IZpUVyqgeNtHkydbHKmi9QayQ6bp7rEYMqL
         GnfL1rldX5jheU/3Lnu5jDJa+34xKkRb55qDuKKIeprQbAGgvf0EwFdyprLiwa+/UYKE
         lYwy3zxIsWYUZ1Xzz3iWODbt2afvEmDavkDnbPqLe+oJTs5mNNV/Kh98u+aK00lA5w/C
         KB5w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718035650; x=1718640450;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=F8T0Yn8TRGJABK7UP2QjtOX0FaifK/kYwMk5L2z4/6I=;
        b=Bxh8UUU8z6QdTsOMIDmbaX3FkmvDR8ZYmTq/hXDi5ut7N54xKmKLSSb7dFljg3UOJf
         IhC6FNoAyrrjVPnnW3M34lzlCS+mrd27EqW9gqgDWByqwhMBEZGUJDeaRim6qFK8l1vR
         D2gveoT03tYIVfQowmlkWOMnzC4/88xb8W9oo268T8pe+c2qI1v7Z4PnQxwqY3tsK96V
         dvXGQvEUTKJgeRAXBHFgAsknWKDB1ZltUVQwWWMzLda01N+S7k5B1gHwoDDtTUnOLnPc
         a/OYSRrbKZmPWbQ/IASGEZVe9G88BPLyslEWYrspQoaVv2MBtVMTpMkiw0caofTtSYkS
         eMHQ==
X-Forwarded-Encrypted: i=1; AJvYcCUBwtuYsFizQ8gCSFs2dIGSvyy/wmsNCaYUBbcNevSuKT7qjp8lBpxP/ZOpen8+29P/9gw3VIxqekGtXJSx59Mq7TxhQbwkhU0hU7q3rw8=
X-Gm-Message-State: AOJu0YwMjyJt6aY1/8eS1vyc+8Al2X/z0ROZHmbMUTxe/YIagllHLMg1
	qBuL/vfWMQnb6IePgWwaCIs0NGaf+x3O+uSadrotPnnRGXOJ5rIy4AF9/HonZw==
X-Google-Smtp-Source: AGHT+IEcGRf6/truxW/iq0L3+nvgrRFLsOCTE7thYlhYsI3422gYl8NGdy3OZO3qcwj4+13lUD5umQ==
X-Received: by 2002:a17:906:36cf:b0:a6f:24ae:f061 with SMTP id a640c23a62f3a-a6f24aef087mr154932566b.59.1718035649998;
        Mon, 10 Jun 2024 09:07:29 -0700 (PDT)
Message-ID: <e2202691-5ca2-42ad-a360-31761f73d889@suse.com>
Date: Mon, 10 Jun 2024 18:07:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v9 0/5] Support device passthrough when dom0 is PVH on
 Xen
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240607081127.126593-1-Jiqian.Chen@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 10:11, Jiqian Chen wrote:
> Hi All,
> This is the v9 series to support passthrough when dom0 is PVH
> v8->v9 changes:
> * patch#1: Move pcidevs_unlock below write_lock, and remove "ASSERT(pcidevs_locked());" from vpci_reset_device_state;
>            Add pci_device_state_reset_type to distinguish the reset types.
> * patch#2: Add a comment above PHYSDEVOP_map_pirq to describe why this hypercall is needed.
>            Change "!is_pv_domain(d)" to "is_hvm_domain(d)", and "map.domid == DOMID_SELF" to "d == current->domain".
> * patch#3: Remove the check of PHYSDEVOP_setup_gsi, since there is the same check below.

Having looked at patch 3, what check(s) is (are) being talked about here?
It feels as if to understand this revision log entry, one would still need
to go back to the earlier version. Yet the purpose of these is that one
(preferably) wouldn't need to do so.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 16:32:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 16:32:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737451.1143770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhwS-00022Z-Ax; Mon, 10 Jun 2024 16:32:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737451.1143770; Mon, 10 Jun 2024 16:32:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhwS-00022S-7I; Mon, 10 Jun 2024 16:32:20 +0000
Received: by outflank-mailman (input) for mailman id 737451;
 Mon, 10 Jun 2024 16:32:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGhwR-00022I-1g; Mon, 10 Jun 2024 16:32:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGhwQ-0003eJ-Uw; Mon, 10 Jun 2024 16:32:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGhwQ-0001X2-GS; Mon, 10 Jun 2024 16:32:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGhwQ-0005m0-Fz; Mon, 10 Jun 2024 16:32:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BmARdpNE6UjSuxnk7urfvB82a9T2DJuPoU6wDhkHQAg=; b=4sB3TzLwU8LMn06pRB2+YHQtOT
	Q7Kk7uZjlrRFG7ySHB9mA9iOgdGdcxu6ggbaI8CcolFI79YYBAV2Z05IIKO41Woeyr4IoNi0CvKhn
	GYu/afmjGEiXwOgm0axLp+y5yyR/4zot44FHf2aMViUyOg5Ll+ris/A1Qm/FuPH6dGbs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186304-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186304: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
X-Osstest-Versions-That:
    xen=0a5b2ca32c1506bbb0e636a2dfab7502a52fe136
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 16:32:18 +0000

flight 186304 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186304/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
baseline version:
 xen                  0a5b2ca32c1506bbb0e636a2dfab7502a52fe136

Last test of basis   186301  2024-06-10 09:00:24 Z    0 days
Testing same since   186304  2024-06-10 12:04:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0a5b2ca32c..ea1cb444c2  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 16:35:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 16:35:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737460.1143779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhzY-0002gT-S4; Mon, 10 Jun 2024 16:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737460.1143779; Mon, 10 Jun 2024 16:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGhzY-0002gM-PX; Mon, 10 Jun 2024 16:35:32 +0000
Received: by outflank-mailman (input) for mailman id 737460;
 Mon, 10 Jun 2024 16:35:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OhkP=NM=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sGhzX-0002fx-8t
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 16:35:31 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 72b5b869-2747-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 18:35:29 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ebd95f136bso1419821fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 09:35:29 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c71885170sm4080400a12.2.2024.06.10.09.35.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 09:35:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72b5b869-2747-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718037329; x=1718642129; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=NCZM3ue1C5jtbs037YDqPxE+Szl/qajvIkmTEBbSwnE=;
        b=OIZ4ydt2vbvperIRsOknXV49Zm3BCu+O5tWRn7BVTZL7jhvAzvtb1TcU2Q/Hog00j3
         H4OXA1H0N+Qps1OorTA+jtHbQHUOBYocxXoJpy/xqoGs67VpeqCgpv+44LTyx1i9CRsM
         c96qjDObHzBV6bAMR4h7hYBIEOIyqOGiAwPJc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718037329; x=1718642129;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NCZM3ue1C5jtbs037YDqPxE+Szl/qajvIkmTEBbSwnE=;
        b=i3RWdtEncO81zB/NrjU8USmdWShVa8wd680CxXv+gchrM4d394xhnPlKLB7xWYV2m2
         VgWh3u/MDhZs11kL3905NSD8PehCEnfhrx3/keWS2jDbvYXQ3KCRZYMgIrgSYmDGMiaB
         aDsAqQ9127K3LYVgvlbJ5RIh0eHy1EU0odfxf3qMffJ0vDXGEqDHUdBPOVL6g+jJwnP2
         3E2o3IUF9HhHwsxUubqKGkQwN+B+xzuUgjxAhTTtOinBGba5MVaaLzb0tt3bY6/9NRhO
         ZS2wSPF+IH9KAXcJSe12rytKyNB9IXA6yORFPDbklILrYdbNSlNei0wpvcKMwRbpZAJR
         N0EA==
X-Forwarded-Encrypted: i=1; AJvYcCUKgQoGL8dgKbYcGdZTR0WH6/uOlDRDyaixq8hcN7EbXxVKuIqFzDHZsl34GhIft4lGjQ2lbsx7qG57BTaC6yjAEPwlnBjctPn5skTgz94=
X-Gm-Message-State: AOJu0YzyWVjdYrlQqSDAPBjzo4skz8JwIzNKhARLSfyCbABozCSmvXEi
	nLNl9htH7YuFwmhgQId7KGIXfPweU4qyzzvm7EKyID5FC+FPlTp+Dnc0CUkAlZI=
X-Google-Smtp-Source: AGHT+IFqrzWFifI6gn01+bvvf3cEa5rqFIdVvKQgZL0ctW1EzjgiwrKlw5S3/smCaExwp07elO/EIw==
X-Received: by 2002:a2e:8e88:0:b0:2eb:cdae:6791 with SMTP id 38308e7fff4ca-2ebcdae691bmr36605701fa.45.1718037328677;
        Mon, 10 Jun 2024 09:35:28 -0700 (PDT)
Message-ID: <d3f5e8a0-ee98-4574-95ac-2fd315ee9a45@citrix.com>
Date: Mon, 10 Jun 2024 17:35:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v1] x86/intel: optional build of TSX support
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>, xen-devel@lists.xenproject.org
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>
References: <20240606110448.2540261-1-Sergiy_Kibrik@epam.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <20240606110448.2540261-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 06/06/2024 12:04 pm, Sergiy Kibrik wrote:
> Transactional Synchronization Extensions are available for certain Intel's
> CPUs only, hence can be put under CONFIG_INTEL build option.

Careful with "available" vs "supported".

They're only supported on certain Intel CPUs, but suffice it to say that
c0dd53b8cb was discovered because of Xen's TSX unit testing.

>
> The whole TSX support, even if supported by CPU, may need to be disabled via
> options, by microcode or through spec-ctrl, depending on a set of specific
> conditions. To make sure nothing gets accidentally rutime-broken all

runtime

> modifications of global TSX configuration variables is secured by #ifdef's,
> while variables themselves redefined to 0, so that ones can't mistakenly be
> written to.
>
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> ---
>  xen/arch/x86/Makefile                | 2 +-
>  xen/arch/x86/include/asm/processor.h | 8 ++++++++
>  xen/arch/x86/spec_ctrl.c             | 4 ++++
>  3 files changed, 13 insertions(+), 1 deletion(-)

This needs a command line adjustment too.

diff --git a/docs/misc/xen-command-line.pandoc
b/docs/misc/xen-command-line.pandoc
index 1dea7431fab6..c8d32c13bbaa 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2584,10 +2584,11 @@ pages) must also be specified via the tbuf_size
parameter.
 ### tsx
     = <bool>
 
-    Applicability: x86
+    Applicability: x86 with CONFIG_INTEL active
     Default: false on parts vulnerable to TAA, true otherwise
 
-Controls for the use of Transactional Synchronization eXtensions.
+Controls for the use of Transactional Synchronization eXtensions,
available if
+Xen was compiled with `CONFIG_INTEL` active.
 
 Several microcode updates are relevant:
 


> diff --git a/xen/arch/x86/include/asm/processor.h b/xen/arch/x86/include/asm/processor.h
> index c26ef9090c..8b12627ab0 100644
> --- a/xen/arch/x86/include/asm/processor.h
> +++ b/xen/arch/x86/include/asm/processor.h
> @@ -503,9 +503,17 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
>      return fam;
>  }
>  
> +#ifdef CONFIG_INTEL
>  extern int8_t opt_tsx;
>  extern bool rtm_disabled;
>  void tsx_init(void);
> +#else
> +#define opt_tsx      0     /* explicitly indicate TSX is off */
> +#define rtm_disabled false /* RTM was not force-disabled */
> +static inline void tsx_init(void)
> +{
> +}

For trivial things like this, we allow

static inline void tsx_init(void) {}

All can be fixed on commit, but none of this is tagged for 4.19 and is
4.20 material IMO.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737475.1143870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXv-0002hM-E5; Mon, 10 Jun 2024 17:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737475.1143870; Mon, 10 Jun 2024 17:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXv-0002fG-6b; Mon, 10 Jun 2024 17:11:03 +0000
Received: by outflank-mailman (input) for mailman id 737475;
 Mon, 10 Jun 2024 17:11:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXu-0000kq-DG
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:11:02 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 68e72a3b-274c-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 19:11:00 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id
 4fb4d7f45d1cf-57c68682d1aso2741897a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:11:00 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68e72a3b-274c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039459; x=1718644259; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wrX59hkSrQ0XvK870O5bXvRvRJt2HGVyFsC9P2tIoGs=;
        b=dS8D7DzULlvC9SjSLDIgpiypaQ7RI40ME3Rim2YDqhrHOv0cy+3BjLyIVGYand3lLA
         nnyfk1cA7IYNNr7YfpcYeT7wh3sAunF/U5wrzRZ1dFMoG9WXYN22VLcQ5fYsW6B1BLuj
         hfbariVFmZEWcffCwIC/mEqlKOS4dWuLVn7xDrfZt4u0vUc/AKIMX3PAuy094SGSZ7eI
         CB4U6bx8PsVGq8JiDjuABRQ3FfTGH6fOOUTW10INtTLN3uFfyvLQHfh+jJMAlWU+Mu5Y
         VR+hm6on53DEjyd359D75R53GReAdg9HjeQhSsZCC7gafGN8iED0jNR+2O0i3WtNXAUE
         LeIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039459; x=1718644259;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wrX59hkSrQ0XvK870O5bXvRvRJt2HGVyFsC9P2tIoGs=;
        b=dV5j+P+Sq4rfXa4zBaHxpaERZMH1N4xrAdO+LuBrGN0n3AxP96M1B3FT4/EGUB8lNH
         B6W3zCu1KznFUY2+FZq0b4JlZH4mukqbPTE0p7wTegKZjYFEg2YLH3v1JaqLVGGIPIoD
         /t/BVP2GxX/V0005S0R7xx8rYfAoZFd6DVJlsZPbN63l+4Zz4pO89bM57ux9N7+n48Wd
         wGZJzcCa6A9CRv6cQa22zOTnbMn4otDct8vBvhHEwTwXwFncEGzhn8o7D0EpTrx6jJ3K
         zZzzOn/3DOf7qtB+st37gNJ3lrYuKE7dGpbbAeBdTaMdxsDznNYi9LCnksO15Sk9ljIB
         4zjQ==
X-Gm-Message-State: AOJu0YzSzCk9hxy2H8OhlSknnCjwuahg/uYRFIiHqRhJJTmCI4Pj1/Sq
	xCt8zYp64VuopiJ0TXu6gm0o6Z32/VjlmnNo1Sp7Hm1chOshH98w40ECOw==
X-Google-Smtp-Source: AGHT+IEXkPlHF5sRTc7fOS3R5VN5O5OYuYtfka9YGxGHMmFLmaA67En+ici0Ne59LQ3XF+BZaXxFyA==
X-Received: by 2002:a50:ab4b:0:b0:579:c8cb:ec3d with SMTP id 4fb4d7f45d1cf-57c509992b4mr5982072a12.37.1718039459552;
        Mon, 10 Jun 2024 10:10:59 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH for-4.19? v6 8/9] xen/x86: Disallow creating domains with altp2m enabled and altp2m.nr == 0
Date: Mon, 10 Jun 2024 17:10:46 +0000
Message-Id: <d3d5b3812db34758edcc8541b99c23408c79fd61.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Since libxl finally sends the altp2m.nr value, we can remove the previously
introduced temporary workaround.

Creating a domain with altp2m enabled while setting altp2m.nr == 0 doesn't
make sense and is probably not what the user wants.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/domain.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index faec09e15e..721d753c95 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -747,8 +747,9 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)

         if ( !config->altp2m.nr )
         {
-            /* Fix the value to the legacy default */
-            config->altp2m.nr = 10;
+            dprintk(XENLOG_INFO,
+                    "altp2m must be requested with altp2m.nr > 0\n");
+            return -EINVAL;
         }

         if ( config->altp2m.nr > MAX_NR_ALTP2M )
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737471.1143821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXr-0001LR-Nx; Mon, 10 Jun 2024 17:10:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737471.1143821; Mon, 10 Jun 2024 17:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXr-0001Iw-HB; Mon, 10 Jun 2024 17:10:59 +0000
Received: by outflank-mailman (input) for mailman id 737471;
 Mon, 10 Jun 2024 17:10:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXp-0000kp-Mw
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:10:57 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66d2807f-274c-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 19:10:56 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-57c778b5742so2017809a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:56 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66d2807f-274c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039456; x=1718644256; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7xYGePkFSrt21Sw0npf4IiaMWXG46KCQA4sFct5gdmA=;
        b=YYxBq7yIPwtQ2UdFShtYnUZ3gFQdCrQ+Cd0g7fA9U3tt40XSi3iXA4qW+qMp8VskUV
         M0XjVveRd4dqY07nsfhYe4V9cIsOgr59XlSNTcmQxdnePQWg43CV46tUq2LR3H30ZXCP
         SZYo6yNB5wth4HZkAxXTenCRV45a+S4RQZi5e3uxPXs9uOqYOd340T4gwsMsM64abGhD
         5cybsSAumzG07CTw7S8AawQWgG71kVb3b1z7kdDBLyTi38tPmZs6LhPHjIMvmQmLY2te
         ugAnJJ+73kJY31e4fVoM5yLB/DcwbQYGq1LAA1d8nnon0tckL6TvYTC41kGtdh66RYbT
         eqJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039456; x=1718644256;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=7xYGePkFSrt21Sw0npf4IiaMWXG46KCQA4sFct5gdmA=;
        b=h4w5MN+EmP3cdKioBYv4zJTlxk0NkD8JvQhflr91h/QTFQ4ESUDHtocuLDBBlvgvTu
         YsAGR6TZnNEhIzBOjYY9ZYgCn9A7qXD+gLgg40T56U2Z7y3vy+8tMoN6C83UuDTJ6gnV
         V7QQiQFmgtqidyQZpVHibSdYv2mRpbW/vXZ99Ae+C3cxmzmPzpR0BGVpx88KFFOYwgzT
         TlJZFOLGFq2OmaRoDpLPO1r4dHqzBRkAfmNBQ0aWCpGZt4nYcUYez1vrp36x8WvSN6QB
         UFdD+M3pK/YIr3KizKZo55ui8WeZIqdPVm1u40JBXmZN3/5lRBWZAkQ+0O8NTZgqyaYG
         WMXw==
X-Gm-Message-State: AOJu0YxiJyQTKGA5JZKd5XU14QiS5DD8hzCAwtatDIVcCURhknh5sNXg
	YRb9S1FGeT4+shT/a/PQVHYPCD/Vb2gecG2yEC/Jk8DDJQoMTlO9urN+Dg==
X-Google-Smtp-Source: AGHT+IEf2l7g+205A9TwswItjn+rA+qNovEvPuFHbnYVuAjlddDejE2Rj6EKRursxEsZ7BLg5k+8oQ==
X-Received: by 2002:a50:c047:0:b0:57c:47c3:dc62 with SMTP id 4fb4d7f45d1cf-57c50829346mr5850619a12.5.1718039455710;
        Mon, 10 Jun 2024 10:10:55 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.19? v6 4/9] tools/xl: Add altp2m_count parameter
Date: Mon, 10 Jun 2024 17:10:42 +0000
Message-Id: <02e0eefe1bed87cb55490f6ea13fa28c94af2a0d.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Introduce a new altp2m_count parameter to control the maximum number of altp2m
views a domain can use. By default, if altp2m_count is unspecified and altp2m
is enabled, the value is set to 10, reflecting the legacy behavior.

This change is preparatory; it establishes the groundwork for the feature but
does not activate it.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/golang/xenlight/helpers.gen.go | 2 ++
 tools/golang/xenlight/types.gen.go   | 1 +
 tools/include/libxl.h                | 8 ++++++++
 tools/libs/light/libxl_create.c      | 9 +++++++++
 tools/libs/light/libxl_types.idl     | 1 +
 tools/xl/xl_parse.c                  | 9 +++++++++
 6 files changed, 30 insertions(+)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index fe5110474d..0449c55f31 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1159,6 +1159,7 @@ if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
 x.Altp2M = Altp2MMode(xc.altp2m)
+x.Altp2MCount = uint32(xc.altp2m_count)
 x.VmtraceBufKb = int(xc.vmtrace_buf_kb)
 if err := x.Vpmu.fromC(&xc.vpmu);err != nil {
 return fmt.Errorf("converting field Vpmu: %v", err)
@@ -1676,6 +1677,7 @@ if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
 return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
 }
 xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
+xc.altp2m_count = C.uint32_t(x.Altp2MCount)
 xc.vmtrace_buf_kb = C.int(x.VmtraceBufKb)
 if err := x.Vpmu.toC(&xc.vpmu); err != nil {
 return fmt.Errorf("converting field Vpmu: %v", err)
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index c9e45b306f..54607758d3 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -603,6 +603,7 @@ ArchX86 struct {
 MsrRelaxed Defbool
 }
 Altp2M Altp2MMode
+Altp2MCount uint32
 VmtraceBufKb int
 Vpmu Defbool
 }
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index f5c7167742..bfa06caad2 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1250,6 +1250,14 @@ typedef struct libxl__ctx libxl_ctx;
  */
 #define LIBXL_HAVE_ALTP2M 1

+/*
+ * LIBXL_HAVE_ALTP2M_COUNT
+ * If this is defined, then libxl supports setting the maximum number of
+ * alternate p2m tables.
+ */
+#define LIBXL_HAVE_ALTP2M_COUNT 1
+#define LIBXL_ALTP2M_COUNT_DEFAULT (~(uint32_t)0)
+
 /*
  * LIBXL_HAVE_REMUS
  * If this is defined, then libxl supports remus.
diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 569e3d21ed..11d2f282f5 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -482,6 +482,15 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         return -ERROR_INVAL;
     }

+    if (b_info->altp2m_count == LIBXL_ALTP2M_COUNT_DEFAULT) {
+        if ((libxl_defbool_val(b_info->u.hvm.altp2m) ||
+            b_info->altp2m != LIBXL_ALTP2M_MODE_DISABLED))
+            /* Reflect the default legacy count */
+            b_info->altp2m_count = 10;
+        else
+            b_info->altp2m_count = 0;
+    }
+
     /* Assume that providing a bootloader user implies enabling restrict. */
     libxl_defbool_setdefault(&b_info->bootloader_restrict,
                              !!b_info->bootloader_user);
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index 4e65e6fda5..2963c5e250 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -729,6 +729,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
     # Alternate p2m is not bound to any architecture or guest type, as it is
     # supported by x86 HVM and ARM support is planned.
     ("altp2m", libxl_altp2m_mode),
+    ("altp2m_count", uint32, {'init_val': 'LIBXL_ALTP2M_COUNT_DEFAULT'}),

     # Size of preallocated vmtrace trace buffers (in KBYTES).
     # Use zero value to disable this feature.
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index e3a4800f6e..a82b8fe6e4 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2063,6 +2063,15 @@ void parse_config_data(const char *config_source,
         }
     }

+    if (!xlu_cfg_get_long(config, "altp2m_count", &l, 1)) {
+        if (l != (uint16_t)l) {
+            fprintf(stderr, "ERROR: invalid value %ld for \"altp2m_count\"\n", l);
+            exit (1);
+        }
+
+        b_info->altp2m_count = l;
+    }
+
     if (!xlu_cfg_get_long(config, "vmtrace_buf_kb", &l, 1) && l) {
         b_info->vmtrace_buf_kb = l;
     }
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737474.1143853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXt-000296-NV; Mon, 10 Jun 2024 17:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737474.1143853; Mon, 10 Jun 2024 17:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXt-00027t-G8; Mon, 10 Jun 2024 17:11:01 +0000
Received: by outflank-mailman (input) for mailman id 737474;
 Mon, 10 Jun 2024 17:11:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXs-0000kp-9o
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:11:00 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 688b69e0-274c-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 19:10:59 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2ebe40673d8so1199121fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:59 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 688b69e0-274c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039459; x=1718644259; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nyVxB8SzIq3ix8NG+232jLUPvfNmU9Jk5I/4qPsfmIc=;
        b=Ov2fxg+Ymyf5y2iXTV5LWDz1HqViFD2ZGZjlyTXZpAR/yxthFRVEdo1CWGeLWrkVdg
         +Y1ctiiKm/XckOGdRmF3H6SNvRG6AmQfloyXGBu6sqqLMFKi2sTsxsiHbvRzeqCk7O3G
         E8OX9aeB4DLwdjBT8dnFTEC1OM3g1jmMpnKJlaiOfKtrNv3T27b+zLW3+BLezDWfXEKw
         GETB8CsEed47hCIeRAz3RQk+XIC06h0raDpjdHYtUJBZNvuit6/nmUvU0YvkC+X8CvHD
         6BpY4rITVohMDek5cJcrWshtE5PhbNjQm7nN/TdCSosODn2mgdAIWDCnHdG/hEriLAUD
         7waQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039459; x=1718644259;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=nyVxB8SzIq3ix8NG+232jLUPvfNmU9Jk5I/4qPsfmIc=;
        b=KiVDexNOhWnhl38kfCaBFb2Oinx9b2/+7d8sTrDPuvGUC2wXPpL1+w/grNm7FS34XT
         DnZRB4jRtpMefhY7Z8TjW3hI4fz+ON7fgjOTKNWbii5jVmPp/u4ADF29paBTmeM5LyGf
         hai7vFs8W3Z/wbYu0+AHT2y7V5ytrEDxeriRzdbw6uJajmJwoyIOM1MRmU2dWZla/fzi
         8EBUsu+y67OcnxqrdETUPUbCirdI5wyNFafXQNKIhrfdsBdEwS9Iyi1ShMqrE3uMVu9b
         1ZdzoH+mgppn+fiipt8q3NgjqCI2R8y3JcC9HAkxbKJdQ2BPr8yOuEweEHe4bR4dEByV
         VXAQ==
X-Gm-Message-State: AOJu0YyyRPvnoKM+uEo7Fe4hM9SsNodh86mT8fGl9U3+rm9+Ao7jZ/MV
	Fg6t69jVdoamWDU6T2wMNkhOaqMErFxdSZouapEFEO7Ty7WzAbB87R8u6g==
X-Google-Smtp-Source: AGHT+IFZPpkHHTt0mbGPE0Nuo9jxfgNOdR2DGrtSnHr5o7+pGeI/4cHtN8q9sZV2e7Yu65V3+DJvow==
X-Received: by 2002:a2e:be0d:0:b0:2eb:f0be:442a with SMTP id 38308e7fff4ca-2ebf0be46b9mr2773011fa.39.1718039458759;
        Mon, 10 Jun 2024 10:10:58 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.19? v6 7/9] tools/libxl: Activate the altp2m_count feature
Date: Mon, 10 Jun 2024 17:10:45 +0000
Message-Id: <ad7aa98a3b0a0493130f1d9a84724e98be766897.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

This commit activates the previously introduced altp2m_count parameter,
establishing the connection between libxl and Xen.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 tools/libs/light/libxl_create.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index 11d2f282f5..5ad552c4ec 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -656,6 +656,10 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
             .max_grant_frames = b_info->max_grant_frames,
             .max_maptrack_frames = b_info->max_maptrack_frames,
             .grant_opts = XEN_DOMCTL_GRANT_version(b_info->max_grant_version),
+            .altp2m = {
+                .opts = 0, /* .opts will be set below */
+                .nr = b_info->altp2m_count,
+            },
             .vmtrace_size = ROUNDUP(b_info->vmtrace_buf_kb << 10, XC_PAGE_SHIFT),
             .cpupool_id = info->poolid,
         };
--
2.34.1
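The plumbing this patch adds can be sketched with simplified stand-ins for the real structures. The types below are local mock-ups, not the actual `struct xen_domctl_createdomain` or libxl's `b_info`; only the field names (`altp2m.opts`, `altp2m.nr`, `altp2m_count`) follow the patch.

```c
#include <stdint.h>

/* Mock of the nested altp2m block from the domctl interface. */
struct altp2m_cfg {
    uint32_t opts;   /* mode bits, filled in later from b_info->altp2m */
    uint32_t nr;     /* number of altp2m views to allocate */
};

/* Simplified stand-ins for the createdomain and build-info structures. */
struct createdomain {
    struct altp2m_cfg altp2m;
};

struct build_info {
    uint32_t altp2m_count;
};

/*
 * The connection the commit message describes: the count configured in
 * libxl is copied into the hypercall structure at domain creation.
 */
static void fill_create(struct createdomain *create,
                        const struct build_info *b_info)
{
    create->altp2m.opts = 0;    /* .opts is set below from the altp2m mode */
    create->altp2m.nr = b_info->altp2m_count;
}
```

Initialising `.opts` to 0 here matches the patch: the mode bits are OR-ed in further down `libxl__domain_make()`, after the designated initialiser runs.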



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737468.1143799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXq-0000ze-MK; Mon, 10 Jun 2024 17:10:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737468.1143799; Mon, 10 Jun 2024 17:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXq-0000zX-JV; Mon, 10 Jun 2024 17:10:58 +0000
Received: by outflank-mailman (input) for mailman id 737468;
 Mon, 10 Jun 2024 17:10:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXo-0000kp-H8
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:10:56 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 66438e4c-274c-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 19:10:56 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id
 4fb4d7f45d1cf-57a1fe63a96so6034347a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:56 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66438e4c-274c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039455; x=1718644255; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Q0bBFO5q1RXhnEhFZi3YywzgmIW8ZFC/EvRHCN4/klw=;
        b=iwt/fm6Er4vVeB3dnZnO2tWRoaadhMQzz6Gz87MFKKWzdjDnNCyetdkzR87aEC80It
         wUK9/wq/ywqhK8GP3njxkwOPKhfe6m5kWKtD2HlVgkPdPWObnP4IxW6896T6ancIYmxh
         UT6CzAzE36sizTBOU/f95sMHmo1ge1oIZjz6JtTBbHE/HjZs6KaU2FzD30xaSlXvQnBy
         LvrLFyBORnWfrWlMSgZedFWBSmWAQc18G1N8ChqiWvyV4y7WV7LCbY2hG4oUJofhUMgI
         8rp0iBgqSoVMoMIqZNKkIfeeV7UbTqQHil2ZpMzJ9rDwS+nPax2FQL2ZGTh6Trbp68um
         AgPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039455; x=1718644255;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Q0bBFO5q1RXhnEhFZi3YywzgmIW8ZFC/EvRHCN4/klw=;
        b=BUFlvLaegYdtMG1ul3dDblKQGnQpl3lLaZBBoNouNQ2bFoae2UAG/UwzV9HRBIpVZo
         pTfhsixjqwi8tD+k0ZLSJ+jmchf9lWIaa8qLuv42kZu5r6P3weNMp038tqtiqdHoCCge
         XJ4zO4Nj82xtFwP2PNsibnQ0ts1bXO4ZLNEAnLGxTVI0h9P/jQH8AIREe8awQkrqMhwA
         3MmYKo1ZKyA+GtdRxsd/9grPoBWWlPvy4eNZEZAijod2U/wohD6oEaWDRzRT82S7a39v
         RWK2WpgfPHk8fq2mJm/7yHYMtJB7ywwnISbbDi7Z2YlxehcqB4Ue/TvrAlQbXh/l5p1S
         Az8A==
X-Gm-Message-State: AOJu0YzY02By7zneSErMFCHHD1dSVkrQ2dw+6AepWAR+v5y01/30qglL
	NEoIfnM7ZCOrYPJtqbTM0HFRKn0vrV/jFXpLpyqSdNHQRlSG0vz4tXQvAIx4
X-Google-Smtp-Source: AGHT+IHfPN5/U6D0d1riqJfkCo9zHTyJhnBxn6b4dy6ex7KulMrSOCKuuET5dGM//7BuI5dw3Ls7cA==
X-Received: by 2002:a50:ab04:0:b0:57c:6248:62b7 with SMTP id 4fb4d7f45d1cf-57c62486320mr4930825a12.13.1718039454906;
        Mon, 10 Jun 2024 10:10:54 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a structured format
Date: Mon, 10 Jun 2024 17:10:41 +0000
Message-Id: <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Encapsulate the altp2m options within a struct. This change is preparatory
and lays the groundwork for introducing an additional parameter in a
subsequent commit.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
Acked-by: Julien Grall <jgrall@amazon.com> # arm
Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
---
 tools/libs/light/libxl_create.c     | 6 +++---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
 xen/arch/arm/domain.c               | 2 +-
 xen/arch/x86/domain.c               | 4 ++--
 xen/arch/x86/hvm/hvm.c              | 2 +-
 xen/include/public/domctl.h         | 4 +++-
 6 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/tools/libs/light/libxl_create.c b/tools/libs/light/libxl_create.c
index edeadd57ef..569e3d21ed 100644
--- a/tools/libs/light/libxl_create.c
+++ b/tools/libs/light/libxl_create.c
@@ -680,17 +680,17 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         LOG(DETAIL, "altp2m: %s", libxl_altp2m_mode_to_string(b_info->altp2m));
         switch(b_info->altp2m) {
         case LIBXL_ALTP2M_MODE_MIXED:
-            create.altp2m_opts |=
+            create.altp2m.opts |=
                 XEN_DOMCTL_ALTP2M_mode(XEN_DOMCTL_ALTP2M_mixed);
             break;

         case LIBXL_ALTP2M_MODE_EXTERNAL:
-            create.altp2m_opts |=
+            create.altp2m.opts |=
                 XEN_DOMCTL_ALTP2M_mode(XEN_DOMCTL_ALTP2M_external);
             break;

         case LIBXL_ALTP2M_MODE_LIMITED:
-            create.altp2m_opts |=
+            create.altp2m.opts |=
                 XEN_DOMCTL_ALTP2M_mode(XEN_DOMCTL_ALTP2M_limited);
             break;

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index a529080129..e6c977521f 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -231,7 +231,9 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		.max_maptrack_frames = Int_val(VAL_MAX_MAPTRACK_FRAMES),
 		.grant_opts =
 		    XEN_DOMCTL_GRANT_version(Int_val(VAL_MAX_GRANT_VERSION)),
-		.altp2m_opts = Int32_val(VAL_ALTP2M_OPTS),
+		.altp2m = {
+			.opts = Int32_val(VAL_ALTP2M_OPTS),
+		},
 		.vmtrace_size = vmtrace_size,
 		.cpupool_id = Int32_val(VAL_CPUPOOL_ID),
 	};
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 8bde2f730d..5234b627d0 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -688,7 +688,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( config->altp2m_opts )
+    if ( config->altp2m.opts )
     {
         dprintk(XENLOG_INFO, "Altp2m not supported\n");
         return -EINVAL;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ccadfe0c9e..a4f2e7bad1 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -637,7 +637,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
     bool hap = config->flags & XEN_DOMCTL_CDF_hap;
     bool nested_virt = config->flags & XEN_DOMCTL_CDF_nested_virt;
     unsigned int max_vcpus;
-    unsigned int altp2m_mode = MASK_EXTR(config->altp2m_opts,
+    unsigned int altp2m_mode = MASK_EXTR(config->altp2m.opts,
                                          XEN_DOMCTL_ALTP2M_mode_mask);

     if ( hvm ? !hvm_enabled : !IS_ENABLED(CONFIG_PV) )
@@ -717,7 +717,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( config->altp2m_opts & ~XEN_DOMCTL_ALTP2M_mode_mask )
+    if ( config->altp2m.opts & ~XEN_DOMCTL_ALTP2M_mode_mask )
     {
         dprintk(XENLOG_INFO, "Invalid altp2m options selected: %#x\n",
                 config->flags);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8334ab1711..a66ebaaceb 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -659,7 +659,7 @@ int hvm_domain_initialise(struct domain *d,
     d->arch.hvm.params[HVM_PARAM_TRIPLE_FAULT_REASON] = SHUTDOWN_reboot;

     /* Set altp2m based on domctl flags. */
-    switch ( MASK_EXTR(config->altp2m_opts, XEN_DOMCTL_ALTP2M_mode_mask) )
+    switch ( MASK_EXTR(config->altp2m.opts, XEN_DOMCTL_ALTP2M_mode_mask) )
     {
     case XEN_DOMCTL_ALTP2M_mixed:
         d->arch.hvm.params[HVM_PARAM_ALTP2M] = XEN_ALTP2M_mixed;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 2a49fe46ce..dea399aa8e 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -86,6 +86,7 @@ struct xen_domctl_createdomain {

     uint32_t grant_opts;

+    struct {
 /*
  * Enable altp2m mixed mode.
  *
@@ -102,7 +103,8 @@ struct xen_domctl_createdomain {
 /* Altp2m mode signaling uses bits [0, 1]. */
 #define XEN_DOMCTL_ALTP2M_mode_mask  (0x3U)
 #define XEN_DOMCTL_ALTP2M_mode(m)    ((m) & XEN_DOMCTL_ALTP2M_mode_mask)
-    uint32_t altp2m_opts;
+        uint32_t opts;
+    } altp2m;

     /* Per-vCPU buffer size in bytes.  0 to disable. */
     uint32_t vmtrace_size;
--
2.34.1
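A minimal sketch of the refactored layout: the bare `uint32_t altp2m_opts` becomes a struct, so a companion field (added later in this series) can sit next to the mode bits. The constants below mirror the `XEN_DOMCTL_ALTP2M_*` values in `domctl.h` but are redefined locally; since the mode mask occupies bits [0, 1], extraction reduces to a plain AND, which is what Xen's `MASK_EXTR()` evaluates to for a mask anchored at bit 0.

```c
#include <stdint.h>

/* Local mirrors of the XEN_DOMCTL_ALTP2M_* definitions in domctl.h. */
#define ALTP2M_mode_mask  0x3U
#define ALTP2M_mixed      1U
#define ALTP2M_external   2U
#define ALTP2M_limited    3U

/* The struct the patch introduces around the former altp2m_opts field. */
struct altp2m_cfg {
    uint32_t opts;   /* mode signalling in bits [0, 1] */
    uint32_t nr;     /* view count, introduced by a later patch */
};

/* Extract the mode, as the MASK_EXTR() call sites above now do. */
static unsigned int altp2m_mode(const struct altp2m_cfg *cfg)
{
    return cfg->opts & ALTP2M_mode_mask;
}
```

Keeping the mask-based extraction unchanged is what lets the patch be purely mechanical: every `config->altp2m_opts` becomes `config->altp2m.opts` with no behavioural difference.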



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737467.1143789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXp-0000lI-Fp; Mon, 10 Jun 2024 17:10:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737467.1143789; Mon, 10 Jun 2024 17:10:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXp-0000lB-DH; Mon, 10 Jun 2024 17:10:57 +0000
Received: by outflank-mailman (input) for mailman id 737467;
 Mon, 10 Jun 2024 17:10:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXn-0000kp-Rt
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:10:55 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65775015-274c-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 19:10:54 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id
 4fb4d7f45d1cf-57c83100bd6so1606917a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:54 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65775015-274c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039454; x=1718644254; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=HhcL0W6YpzAV4zN8oTvV+rbaKSalw9aj5zl6oIKR4Zo=;
        b=lHPrIbAnF4xaFyS/UWVjzrI9ukhvDEQ1TDaF2ThyYcoGLCd5ycfEVeLW2uVsfip0c9
         /11Fz6aFMPWstv68qCLn5f4Z/nmSMoQg4Sh9kobcmFdPLTBXfL7zZFX7k5ciuriWyJwy
         1O73iSXN+9cZNpJCtgKHSqX+sd7j8q0PCkvdNewybczmRGIxn9KRRcPkRX40DtYJ+Gpt
         R9hPibjgXm7HMktPpxGjxcqmcv6gwyiEdKWgknaIPsEOdBXUxYe2+GyIrqGcpqNua2gS
         OkAP04nEiAFIWHJSr3hp3IW549tT68u6cbWMHvihtUiGiaQK9Q4DF1Vj271QyVIHmmph
         gcvA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039454; x=1718644254;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=HhcL0W6YpzAV4zN8oTvV+rbaKSalw9aj5zl6oIKR4Zo=;
        b=mP4bmrYUeVmF3IleAx/HBH5BmztFP3AbHMXbE4tjeh3elnH0W8y00ZlF10Kp9EX9l7
         Y0CGWkXViyy1U0WSfy9YLu5kNeU2FfCvdCTbz5nu48k+7JZ204ABUDFGEdWw+AyZSlKx
         2zAawkpuUsvV+WnCuGzMWcL6CQGXLhLOJNQ1r1Cmwa6Dp+mbACItdSa2bn6Popw/PcLk
         DSb6RuVYRDHbGOgDM1nozbTfQMbN+H2cgCF/Thss+ZDTWZ6A+/nJd1wxBl62zlo9f8v9
         IEb7RjBDq9Fr7+KMZlC+vHVAIPnGwonTgOZXfV50iewXKuql0C/ps7XKtfFAJrvBq/Gf
         VmMw==
X-Gm-Message-State: AOJu0YwszEZrKxVHx7qofk5pxx8dn6s4lE4TPQYfKM79T9VDasxV1cf1
	kszgp7Rdo/VPzAVtomZs2OLfsUKsnP8KntQiqTrVAroyj36tupQbOeHcRw==
X-Google-Smtp-Source: AGHT+IF/9drnPWbHUMULOPE6fQwj4LAi0XjphFGkLlfpN5YKzynad25966nxv2gDHdzMT+xQ8bOqCA==
X-Received: by 2002:a50:9eef:0:b0:57a:3273:e648 with SMTP id 4fb4d7f45d1cf-57c508ffe2cmr6658644a12.18.1718039453777;
        Mon, 10 Jun 2024 10:10:53 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: [PATCH for-4.19? v6 2/9] tools/ocaml: Add missing ocaml bindings for altp2m_opts
Date: Mon, 10 Jun 2024 17:10:40 +0000
Message-Id: <f824c69128c85b1cab9c9554d94f7030e89d5e56.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Fixes: 0291089f6ea8 ("xen: enable altp2m at create domain domctl")

Signed-off-by: Petr Beneš <w1benny@gmail.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      | 1 +
 tools/ocaml/libs/xc/xenctrl.mli     | 1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c | 9 ++++++---
 3 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 55923857ec..2690f9a923 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -85,6 +85,7 @@ type domctl_create_config =
     max_grant_frames: int;
     max_maptrack_frames: int;
     max_grant_version: int;
+    altp2m_opts: int32;
     vmtrace_buf_kb: int32;
     cpupool_id: int32;
     arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 9b4b45db3a..febbe1f6ae 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -77,6 +77,7 @@ type domctl_create_config = {
   max_grant_frames: int;
   max_maptrack_frames: int;
   max_grant_version: int;
+  altp2m_opts: int32;
   vmtrace_buf_kb: int32;
   cpupool_id: int32;
   arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index e86c455802..a529080129 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -210,9 +210,10 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #define VAL_MAX_GRANT_FRAMES    Field(config, 6)
 #define VAL_MAX_MAPTRACK_FRAMES Field(config, 7)
 #define VAL_MAX_GRANT_VERSION   Field(config, 8)
-#define VAL_VMTRACE_BUF_KB      Field(config, 9)
-#define VAL_CPUPOOL_ID          Field(config, 10)
-#define VAL_ARCH                Field(config, 11)
+#define VAL_ALTP2M_OPTS         Field(config, 9)
+#define VAL_VMTRACE_BUF_KB      Field(config, 10)
+#define VAL_CPUPOOL_ID          Field(config, 11)
+#define VAL_ARCH                Field(config, 12)

 	uint32_t domid = Int_val(wanted_domid);
 	uint64_t vmtrace_size = Int32_val(VAL_VMTRACE_BUF_KB);
@@ -230,6 +231,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		.max_maptrack_frames = Int_val(VAL_MAX_MAPTRACK_FRAMES),
 		.grant_opts =
 		    XEN_DOMCTL_GRANT_version(Int_val(VAL_MAX_GRANT_VERSION)),
+		.altp2m_opts = Int32_val(VAL_ALTP2M_OPTS),
 		.vmtrace_size = vmtrace_size,
 		.cpupool_id = Int32_val(VAL_CPUPOOL_ID),
 	};
@@ -288,6 +290,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #undef VAL_ARCH
 #undef VAL_CPUPOOL_ID
 #undef VAL_VMTRACE_BUF_KB
+#undef VAL_ALTP2M_OPTS
 #undef VAL_MAX_GRANT_VERSION
 #undef VAL_MAX_MAPTRACK_FRAMES
 #undef VAL_MAX_GRANT_FRAMES
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737476.1143879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXw-0002yI-NB; Mon, 10 Jun 2024 17:11:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737476.1143879; Mon, 10 Jun 2024 17:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXw-0002y3-Iv; Mon, 10 Jun 2024 17:11:04 +0000
Received: by outflank-mailman (input) for mailman id 737476;
 Mon, 10 Jun 2024 17:11:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXv-0000kq-DE
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:11:03 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 696d7dd0-274c-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 19:11:01 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57a1fe63a96so6034508a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:11:01 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:11:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 696d7dd0-274c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039460; x=1718644260; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gMdK+Jvhj1V77/a0DgjcEJwq1OtvW7iw1n5SJqbw9Ys=;
        b=eZqSHRLH0qFfjnwyaa2iELeT3PzLEoKlQp+bqt/4wUErho+T08vRP1/0VZBGtlicJz
         PIQbaHpMiGbo5Z7EtBmVt3MsmXK57B21L1eG72CmnI3LDRLRKf1EML0bsZsD9jA3W9wT
         5EShInB7s+cy40KHDtdOK59uI4SPRCRT7vCpRA/Vdqgt2H8ICyH3cZHc5NdI6KcRK5Iv
         F0MK1C3zBGkPdnOknqlFmEV7yrywVFhzyu683y1g1HX7mF1dxqVyIcajHhHMOizdeLQf
         nbFC7eEPkRWJoDJ205CNrHBRWtZa7rtoliynaktxV0SK+caYXlwChWAjtBnKd9ZDwtXt
         dlpA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039460; x=1718644260;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=gMdK+Jvhj1V77/a0DgjcEJwq1OtvW7iw1n5SJqbw9Ys=;
        b=EKw+s+m2FoK9E01JOB66qtZ8/u7+IwNobdD3YFawKQbeN39gy3jzjyPhENt8AqKDhW
         ygtLBV3B1hj+CYWqieEnsSff3GHl7U8Y9fnwgzkcACux2O7x86hv7UAd4IFfZf6pCSzd
         eHMx4FX74qd+I/zcBsFcwSBeG8zNukVx1G6MXr8ogx4mxiv56F2FohcNhWrgBHgTVMon
         WnIVr9kQoc7j7PS1+EdoWvvidTiTpYwIkNxxzrBxqnJ5seUZLdP5D/Cq4vZaSlUDMB1L
         74GdYM7QcMeye/wche8enxuKYQihFfv7P7ip9ecEffx1JVhhZkO8TVM7eKuaQuuRe8PL
         tuAQ==
X-Gm-Message-State: AOJu0YxBc/HhKgkc8I68bMH5dSHliBnvlDxPFQqTzP1Ivzx10rQCrSNB
	LXxkAxl5Hh9Lqzoa9C9NTPwxZVuSjkdEm4Iwq+OCWnjmsSHTK3ziSxweWg==
X-Google-Smtp-Source: AGHT+IEihXCzU827XqqARib1MQU3lplFIw2//ymH9ABmt/wth7JA8G74o5NW1SYyKq7UL8yerUguhQ==
X-Received: by 2002:a50:c2d9:0:b0:57c:5fc8:c95d with SMTP id 4fb4d7f45d1cf-57c5fc8ca83mr5564429a12.18.1718039460428;
        Mon, 10 Jun 2024 10:11:00 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: [PATCH for-4.19? v6 9/9] tools/ocaml: Add altp2m_count parameter
Date: Mon, 10 Jun 2024 17:10:47 +0000
Message-Id: <f6fdce2ec0eb88a61be980b6eca55b543630118e.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Allow developers using the OCaml bindings to set the altp2m_count parameter.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
---
 tools/ocaml/libs/xc/xenctrl.ml      |  1 +
 tools/ocaml/libs/xc/xenctrl.mli     |  1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c | 19 +++++++++++++++----
 3 files changed, 17 insertions(+), 4 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index 2690f9a923..a3e50ac394 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -86,6 +86,7 @@ type domctl_create_config =
     max_maptrack_frames: int;
     max_grant_version: int;
     altp2m_opts: int32;
+    altp2m_count: int32;
     vmtrace_buf_kb: int32;
     cpupool_id: int32;
     arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index febbe1f6ae..b97021d3d2 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -78,6 +78,7 @@ type domctl_create_config = {
   max_maptrack_frames: int;
   max_grant_version: int;
   altp2m_opts: int32;
+  altp2m_count: int32;
   vmtrace_buf_kb: int32;
   cpupool_id: int32;
   arch: arch_domainconfig;
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index e6c977521f..78ae4967e6 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -211,13 +211,22 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #define VAL_MAX_MAPTRACK_FRAMES Field(config, 7)
 #define VAL_MAX_GRANT_VERSION   Field(config, 8)
 #define VAL_ALTP2M_OPTS         Field(config, 9)
-#define VAL_VMTRACE_BUF_KB      Field(config, 10)
-#define VAL_CPUPOOL_ID          Field(config, 11)
-#define VAL_ARCH                Field(config, 12)
+#define VAL_ALTP2M_COUNT        Field(config, 10)
+#define VAL_VMTRACE_BUF_KB      Field(config, 11)
+#define VAL_CPUPOOL_ID          Field(config, 12)
+#define VAL_ARCH                Field(config, 13)

 	uint32_t domid = Int_val(wanted_domid);
+	uint32_t altp2m_opts = Int32_val(VAL_ALTP2M_OPTS);
+	uint32_t altp2m_nr = Int32_val(VAL_ALTP2M_COUNT);
 	uint64_t vmtrace_size = Int32_val(VAL_VMTRACE_BUF_KB);

+	if ( altp2m_opts != (uint16_t)altp2m_opts )
+		caml_invalid_argument("altp2m_opts");
+
+	if ( altp2m_nr != (uint16_t)altp2m_nr )
+		caml_invalid_argument("altp2m_count");
+
 	vmtrace_size = ROUNDUP(vmtrace_size << 10, XC_PAGE_SHIFT);
 	if ( vmtrace_size != (uint32_t)vmtrace_size )
 		caml_invalid_argument("vmtrace_buf_kb");
@@ -232,7 +241,8 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		.grant_opts =
 		    XEN_DOMCTL_GRANT_version(Int_val(VAL_MAX_GRANT_VERSION)),
 		.altp2m = {
-			.opts = Int32_val(VAL_ALTP2M_OPTS),
+			.opts = altp2m_opts,
+			.nr = altp2m_nr,
 		},
 		.vmtrace_size = vmtrace_size,
 		.cpupool_id = Int32_val(VAL_CPUPOOL_ID),
@@ -292,6 +302,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 #undef VAL_ARCH
 #undef VAL_CPUPOOL_ID
 #undef VAL_VMTRACE_BUF_KB
+#undef VAL_ALTP2M_COUNT
 #undef VAL_ALTP2M_OPTS
 #undef VAL_MAX_GRANT_VERSION
 #undef VAL_MAX_MAPTRACK_FRAMES
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737470.1143811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXr-000175-86; Mon, 10 Jun 2024 17:10:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737470.1143811; Mon, 10 Jun 2024 17:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXr-00015o-1t; Mon, 10 Jun 2024 17:10:59 +0000
Received: by outflank-mailman (input) for mailman id 737470;
 Mon, 10 Jun 2024 17:10:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXp-0000kq-6O
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:10:57 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65226e0f-274c-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 19:10:54 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57a1fe63947so138279a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:54 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65226e0f-274c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039453; x=1718644253; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=80YAum1ZZZhvvQ1bWVYLzziFeIFlw7BUBUnbDQYTJMk=;
        b=HGp4SOf4JnSLYepaSCeu/iTPxfLweoXcjZEpeT6oTXB6xhD3C6R8NdvDqb73b0wiP7
         gcr4F9jYg44zUBb984uzvhfMHX/73EBpK6NuOX5ONfh1Si/SFOAg7k805GqICdkeOGfe
         3gzxYxr1su5/d61c1wQBUPaJq95hxkKBMnQuTbtmjNUN4If0C3u48BkqpPw7NqRVzBxW
         v3ZrpQcmjKZFLqkX2OLQUTC762XD3/IB1lzcosC5/8uspd/TibQS32TmazUirQUL/O25
         Hs5TC4rr9COJbl5I0e7Wu5MKuS1ELsuED9AvfXKP94nyXE0Z4iRUF1ej5JVUaMi7TXav
         BQpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039453; x=1718644253;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=80YAum1ZZZhvvQ1bWVYLzziFeIFlw7BUBUnbDQYTJMk=;
        b=JcEpOWWYKN1Q5IOn0CyXImVg5B8pWuVLOxHdTkG7/w1VBYRb68IHy1YbjVOKNd5Cj1
         HLXSBYLz/SvqK7MfNLmrz3eZxWSrwNYw4TGBOPwc7iVqIKmsiY1+sqliw1qVDO/VCRFg
         sdZ30b/plF7uNBRNNak2SltrsPWxcVYzpCviE+QSp8u2FJxq9nNnt1KgOeI2+SQEiIQv
         yTa9Tn3k1wJY2dMQ1JU0WBNu0Apo8tUjOggh5FopnDc8eVL/jesCsZBlU7sr00Av7Ptc
         UQk9d8xfcd1HwjQfeVZyCkQI4fXGD8+maYmCR9VAhszC965hTokHOjjcxKcatZVpKdG2
         BluQ==
X-Gm-Message-State: AOJu0YzTK1w1V0qoY4jPFIqnmxtHEBac9/yshSoc2MhLBFM5ix1mXHy/
	3h1vqz+8JqiRwLSVERj0Z4wM3pzJKU16m/mQNKTYFYFkhkv0db9PlarXjA==
X-Google-Smtp-Source: AGHT+IFnmsdMByN59njo3QQ2aK0e8qFDGU1LXZbxrUVf9TYA/BBM9e1Ujr8rdvOIFpkVwJl+bKHrzA==
X-Received: by 2002:a50:d4d8:0:b0:579:e6d1:d38b with SMTP id 4fb4d7f45d1cf-57c50886a45mr5532742a12.2.1718039452961;
        Mon, 10 Jun 2024 10:10:52 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>,
	Christian Lindig <christian.lindig@cloud.com>
Subject: [PATCH for-4.19? v6 1/9] tools/ocaml: Fix mixed tabs/spaces
Date: Mon, 10 Jun 2024 17:10:39 +0000
Message-Id: <5e006de3b3e49419737d1280e15f5528193986f5.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

No functional change.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
Acked-by: Christian Lindig <christian.lindig@cloud.com>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index c6da9bb091..e86c455802 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -138,12 +138,12 @@ static void domain_handle_of_uuid_string(xen_domain_handle_t h,
  * integers in the Ocaml ABI for more idiomatic handling.
  */
 static value c_bitmap_to_ocaml_list
-             /* ! */
-             /*
+	     /* ! */
+	     /*
 	      * All calls to this function must be in a form suitable
 	      * for xenctrl_abi_check.  The parsing there is ad-hoc.
 	      */
-             (unsigned int bitmap)
+	         (unsigned int bitmap)
 {
 	CAMLparam0();
 	CAMLlocal2(list, tmp);
@@ -180,8 +180,8 @@ static value c_bitmap_to_ocaml_list
 }

 static unsigned int ocaml_list_to_c_bitmap(value l)
-             /* ! */
-             /*
+	     /* ! */
+	     /*
 	      * All calls to this function must be in a form suitable
 	      * for xenctrl_abi_check.  The parsing there is ad-hoc.
 	      */
@@ -259,7 +259,7 @@ CAMLprim value stub_xc_domain_create(value xch_val, value wanted_domid, value co
 		/* Quick & dirty check for ABI changes. */
 		BUILD_BUG_ON(sizeof(cfg) != 68);

-        /* Mnemonics for the named fields inside xen_x86_arch_domainconfig */
+		/* Mnemonics for the named fields inside xen_x86_arch_domainconfig */
 #define VAL_EMUL_FLAGS          Field(arch_domconfig, 0)
 #define VAL_MISC_FLAGS          Field(arch_domconfig, 1)

@@ -351,7 +351,7 @@ static value dom_op(value xch_val, value domid,
 	caml_enter_blocking_section();
 	result = fn(xch, c_domid);
 	caml_leave_blocking_section();
-        if (result)
+	if (result)
 		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
@@ -383,7 +383,7 @@ CAMLprim value stub_xc_domain_resume_fast(value xch_val, value domid)
 	caml_enter_blocking_section();
 	result = xc_domain_resume(xch, c_domid, 1);
 	caml_leave_blocking_section();
-        if (result)
+	if (result)
 		failwith_xc(xch);
 	CAMLreturn(Val_unit);
 }
@@ -426,7 +426,7 @@ static value alloc_domaininfo(xc_domaininfo_t * info)
 	Store_field(result, 13, Val_int(info->max_vcpu_id));
 	Store_field(result, 14, caml_copy_int32(info->ssidref));

-        tmp = caml_alloc_small(16, 0);
+	tmp = caml_alloc_small(16, 0);
 	for (i = 0; i < 16; i++) {
 		Field(tmp, i) = Val_int(info->handle[i]);
 	}
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737472.1143827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXs-0001Pt-20; Mon, 10 Jun 2024 17:11:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737472.1143827; Mon, 10 Jun 2024 17:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXr-0001NA-PS; Mon, 10 Jun 2024 17:10:59 +0000
Received: by outflank-mailman (input) for mailman id 737472;
 Mon, 10 Jun 2024 17:10:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXq-0000kp-9K
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:10:58 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 670c81b1-274c-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 19:10:57 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-579fa270e53so126105a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:57 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 670c81b1-274c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039456; x=1718644256; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xPmb+DHWEiqVT7szGSWw+jNtsJuaWk/ic+48cvb9OAo=;
        b=NsOWpw7sS/nXO+/4D4XoDrI/ApJmnH4hV+SssHOeplwmcdILGGPn0+/3e3o7yGUqA/
         j2flRi+gbXlglItLymEYqfXGnQ+SZtKU1ldKRtk1ttdlZPT+6v1tjjP9YnxAvmN3Ka86
         cstIgL4kR+I6ADqZZBcxCJsfm58Aic2ojVBV14S/hCIT4FtmLrlCjvGd59Rj+cbCYrmr
         1lg0Lv8elhHZzxCf5Q7aFhxiqJaJIOu/Pw2v6mZaWOyB5HwtuiGKY2aveleFpVczIBc1
         An0s9WhEHuS/irhMMLFE3TSv9ZHA+XHAVJiipx9Eh46f9vnEmSW3Qsudxai8ewzRTHH0
         fVtQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039456; x=1718644256;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=xPmb+DHWEiqVT7szGSWw+jNtsJuaWk/ic+48cvb9OAo=;
        b=de6LtBBBpGB4iDNOY/99lpd2f7wAV5Q0OcgOXEQVoROKZ/AWbSM5KAZduO+6XqHqG/
         IL590RC10OaEyZ9Xoh2ehptpzaoWyXOiNauO/3ZdyQ4vuYN7kVbg1Sscfr+VWXbTOLcB
         NtwaxQkArvfIccqjQumvorxYkdW1JCdYcIdaENRDpktH59mbNOiLXCJ+FyF81V4apBxA
         +aBbYlrog1jprXdCFh65ss649YpNYxBI0BqEe/AXTSw6uUhqG+EI4w79upmysxIcdGF6
         66oG4D04BDYTL4CS2tBgyDwP6AIIMDqxMtgDrGmEklyubLh3JvQ+D5lDNvQ7hah3H253
         9VtA==
X-Gm-Message-State: AOJu0Yw3Gy1n65F2bSy0smSI6ZCBkOE+Bw2rVUtuY5vxvQ9U5BXSEcxB
	DWrj/eui6ju7gmn+7zDhvkL30KFLiQhv0NZKplkJYr8yqxfomEFOVHQ7Hw==
X-Google-Smtp-Source: AGHT+IFF7WY/utzbt16P6pMIGjBwTMzNZPRvG1xOsy5dBLd/SvLAgJU2Ul0KuGeNsWZToGp5v3lUdA==
X-Received: by 2002:a17:906:388:b0:a6e:f596:7433 with SMTP id a640c23a62f3a-a6ef59675b7mr637961966b.45.1718039456495;
        Mon, 10 Jun 2024 10:10:56 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH for-4.19? v6 5/9] docs/man: Add altp2m_count parameter to the xl.cfg manual
Date: Mon, 10 Jun 2024 17:10:43 +0000
Message-Id: <056a6d3337aafa36f341596e6236cf21dd7e705b.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

Update manual pages to include detailed information about the altp2m_count
configuration parameter.
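
For illustration, a guest configuration using this option might look like the
following fragment (values are hypothetical; `altp2m` and `shadow_memory` are
existing xl.cfg options, while `altp2m_count` is the option this series adds):

```
# Illustrative xl.cfg fragment (hypothetical values)
altp2m = "external"      # existing option enabling alternate-p2m support
altp2m_count = 32        # new option from this series: up to 32 views
shadow_memory = 256      # may need raising to accommodate extra views
```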

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 docs/man/xl.cfg.5.pod.in | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index ac3f88fd57..ff03b43884 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2039,6 +2039,20 @@ a single guest HVM domain. B<This option is deprecated, use the option
 B<Note>: While the option "altp2mhvm" is deprecated, legacy applications for
 x86 systems will continue to work using it.

+=item B<altp2m_count=NUMBER>
+
+Specifies the maximum number of alternate-p2m views available to the guest.
+This setting is crucial in domain introspection scenarios that require
+multiple physical-to-machine (p2m) memory mappings to be established
+simultaneously.
+
+Enabling multiple p2m views may increase memory usage. It is advisable to
+review and adjust the B<shadow_memory> setting as necessary to accommodate
+the additional memory requirements.
+
+B<Note>: This option is ignored if B<altp2m> is disabled. The default value
+is 10.
+
 =item B<nestedhvm=BOOLEAN>

 Enable or disables guest access to hardware virtualisation features,
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737469.1143803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXq-000126-U7; Mon, 10 Jun 2024 17:10:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737469.1143803; Mon, 10 Jun 2024 17:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXq-000117-QY; Mon, 10 Jun 2024 17:10:58 +0000
Received: by outflank-mailman (input) for mailman id 737469;
 Mon, 10 Jun 2024 17:10:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXo-0000kq-Vg
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:10:56 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64b0cc57-274c-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 19:10:53 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57c72d6d5f3so2646558a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:53 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64b0cc57-274c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039452; x=1718644252; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=4jqdzkutA78dZetgHBumqJPxJP/89ei7jWnyGl0WcyM=;
        b=WdKjH2rGEzKAw2hG1xalrhfbyfd1um2vWkg3EMXjuxtf28K1o7KfdVVQIjIdb3svVh
         uKMEUKlWGabQ0HULdWCnvL3LARuYxIo85vdybYK2Zpf7wxftIxcZoZrfrJBPC12HQv6X
         ooYqmPNwqkmVSIbVuDz2oR9IY1wn6ttyAR/rm7t499mezj7dNXggKjaslVUEO38EsfwP
         Zt19RO99pjiE00bnLUcN9lmwU6/S2V39I/dRiu+Xmb7cXLHVqeVJlZoeG1KAbbzN78BX
         xjOhh9RMESBvf6dl10DCW9kd8TU5ZeGUb2PnufrjgFYhBf+rXYE9l0Q8wMcVLRqfoFQK
         5vaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039452; x=1718644252;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=4jqdzkutA78dZetgHBumqJPxJP/89ei7jWnyGl0WcyM=;
        b=xL+20do+Ta9VjEaAPzdUavEaDuyTrTlGo+qVH34x6rvJqhYCxPhF4nkKly/ZqNtLpx
         yoMPG6KACFmckQCS37XvrdetNxY5Zhyt8TSV18CrVm+VCAtEZZuF2BjhJ4BkRb+8kE84
         OzCrNNF4gAvISHJq6IINNVCVhYICi2tw3Y7sggWySsThoDG/YOwyofLeEqf8yOTogID0
         V1Wxc8mA+V7gPWdIsBsEqqa5kx/66gGKfNseYmpvwHudQeG9SqV2Ham5W2YzeKkcWP0b
         57KIKx0ZdR9XMbTnZyEeo9qCx8vU29s2Ip9sdi8E1UXzzhw4N6zU0AMxzHgIyKogs1gU
         1kIw==
X-Gm-Message-State: AOJu0YzdJ1TaeoU1Gu6XcG5l6ot/TqqoUjVVs2D++GXrXTqRRBkGJ+eh
	fG/uux+p/S9Hm23ae80hbGMh4EPieFSVneCMvxg/gARg8ab1fkdNjqUroLO9
X-Google-Smtp-Source: AGHT+IFcAQE0CgOwAfUFe3Sjm9ajlMWqD6HI9MKGe8zcS2xaqcczKqJoMl0L3DUKtGGal6WgMPCjyQ==
X-Received: by 2002:a50:9f22:0:b0:57c:6e94:a1a9 with SMTP id 4fb4d7f45d1cf-57c6e94a66emr3501246a12.17.1718039452143;
        Mon, 10 Jun 2024 10:10:52 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Nick Rosbrook <rosbrookn@gmail.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH for-4.19? v6 0/9] x86: Make MAX_ALTP2M configurable
Date: Mon, 10 Jun 2024 17:10:38 +0000
Message-Id: <cover.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

This series introduces the ability to configure the maximum number of altp2m
tables during domain creation. Previously, the limit was hardcoded to a
maximum of 10. This change allows for greater flexibility in environments that
require more or fewer altp2m views.

This enhancement is particularly relevant for users leveraging Xen's features
for virtual machine introspection.

Changes since v5:
- Reverted "Introduction of accessor functions for altp2m arrays and
  refactoring the code to use them."
  - The reason is to minimize code changes and preserve code consistency.
  - I've addressed (hopefully all) issues with long lines and mismatched
    _nospec replacements mentioned in previous reviews.
- Removed "struct domain *d" from altp2m_vcpu_initialise/destroy.

Changes since v4:
- Rebased on top of staging (applying Roger's changes).
- Fix mixed tabs/spaces in xenctrl_stubs.c.
- Add missing OCaml bindings for altp2m_opts.
- Moved altp2m_opts into an unnamed structure. (This prepares for the next
  patch, which introduces the `nr` field.)
- altp2m.opts is then shortened to uint16_t and a new field altp2m.nr is added -
  also uint16_t. This value is then verified by libxl to not exceed the maximum
  uint16_t value.

  This puts a hard limit of 65535 on the number of altp2m views, which is
  enough, at least for the time being. Also, altp2m.opts currently uses only
  2 bits, so I believe this change is justified.
- Introduction of accessor functions for altp2m arrays and refactoring the code
  to use them.
- Added a check to arm/arch_sanitise_domain_config() to disallow creating
  domains with altp2m.nr != 0.
- Added dummy hvm_altp2m_supported() to avoid build errors when CONFIG_HVM is
  disabled.
- Finally, expose altp2m_count to OCaml bindings (and verify both altp2m_opts
  and altp2m_count fit uint16_t).
- I also removed Christian Lindig from the Acked-by, since I think this change
  is significant enough to require a re-review.

Changes since v3:
- Rebased on top of staging (some functions were moved to altp2m.c).
- Re-added the array_index_nospec() where it was removed.

Changes since v2:
- Changed max_altp2m to nr_altp2m.
- Moved arch-dependent check from xen/common/domain.c to xen/arch/x86/domain.c.
- Replaced min(d->nr_altp2m, MAX_EPTP) occurrences with just d->nr_altp2m.
- Replaced array_index_nospec(altp2m_idx, ...) with just altp2m_idx.
- Shortened long lines.
- Removed unnecessary comments in altp2m_vcpu_initialise/destroy.
- Moved nr_altp2m field after max_ fields in xen_domctl_createdomain.
- Removed the commit that adjusted the initial allocation of pages from 256
  to 1024. This means that after these patches, technically, nr_altp2m will
  be capped at (256 - 1 - vcpus - MAX_NESTEDP2M) instead of MAX_EPTP (512).
  Future work will be needed to fix this.

Petr Beneš (9):
  tools/ocaml: Fix mixed tabs/spaces
  tools/ocaml: Add missing ocaml bindings for altp2m_opts
  xen: Refactor altp2m options into a structured format
  tools/xl: Add altp2m_count parameter
  docs/man: Add altp2m_count parameter to the xl.cfg manual
  xen: Make the maximum number of altp2m views configurable for x86
  tools/libxl: Activate the altp2m_count feature
  xen/x86: Disallow creating domains with altp2m enabled and altp2m.nr
    == 0
  tools/ocaml: Add altp2m_count parameter

 docs/man/xl.cfg.5.pod.in             | 14 ++++++
 tools/golang/xenlight/helpers.gen.go |  2 +
 tools/golang/xenlight/types.gen.go   |  1 +
 tools/include/libxl.h                |  8 ++++
 tools/libs/light/libxl_create.c      | 19 +++++++--
 tools/libs/light/libxl_types.idl     |  1 +
 tools/ocaml/libs/xc/xenctrl.ml       |  2 +
 tools/ocaml/libs/xc/xenctrl.mli      |  2 +
 tools/ocaml/libs/xc/xenctrl_stubs.c  | 40 +++++++++++------
 tools/xl/xl_parse.c                  |  9 ++++
 xen/arch/arm/domain.c                |  2 +-
 xen/arch/x86/domain.c                | 45 +++++++++++++++----
 xen/arch/x86/hvm/hvm.c               | 10 ++++-
 xen/arch/x86/hvm/vmx/vmx.c           |  2 +-
 xen/arch/x86/include/asm/domain.h    |  9 ++--
 xen/arch/x86/include/asm/hvm/hvm.h   |  5 +++
 xen/arch/x86/include/asm/p2m.h       |  4 +-
 xen/arch/x86/mm/altp2m.c             | 64 ++++++++++++++++++----------
 xen/arch/x86/mm/hap/hap.c            |  6 +--
 xen/arch/x86/mm/mem_access.c         | 14 +++---
 xen/arch/x86/mm/mem_sharing.c        |  2 +-
 xen/arch/x86/mm/p2m-ept.c            |  7 +--
 xen/arch/x86/mm/p2m.c                |  8 ++--
 xen/common/domain.c                  |  1 +
 xen/include/public/domctl.h          |  7 ++-
 xen/include/xen/sched.h              |  2 +
 26 files changed, 210 insertions(+), 76 deletions(-)

--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:11:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:11:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737473.1143848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXt-00021m-A6; Mon, 10 Jun 2024 17:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737473.1143848; Mon, 10 Jun 2024 17:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGiXt-000216-4Y; Mon, 10 Jun 2024 17:11:01 +0000
Received: by outflank-mailman (input) for mailman id 737473;
 Mon, 10 Jun 2024 17:10:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7qHj=NM=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGiXr-0000kp-JU
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 17:10:59 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67c7a0ed-274c-11ef-90a2-e314d9c70b13;
 Mon, 10 Jun 2024 19:10:58 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id
 4fb4d7f45d1cf-5751bcb3139so90853a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 10:10:58 -0700 (PDT)
Received: from lab.home
 (dynamic-2a00-1028-83a4-4bca-c0bb-96ff-feed-9d50.ipv6.o2.cz.
 [2a00:1028:83a4:4bca:c0bb:96ff:feed:9d50])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm3233169a12.7.2024.06.10.10.10.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 10 Jun 2024 10:10:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67c7a0ed-274c-11ef-90a2-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718039458; x=1718644258; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Ow/yFJlrc5q1U2MI32twYDrNyr1fDFVzZ03MnavAp8U=;
        b=A04P+wT/NXVQza/VIdGz5R9yNUg1fs+ffz4AsUdpJSx+3EFS23Rk9zD+OUWkbc4wa3
         zEdb7nVXDrUGOlmkOesEYZzH1FeXZqaiqazzWbrPWngyFL5cZyJIoNhkNer6D1hCS10+
         H2LVmKvC/BcMXRAX559aF5b5gfT4KHntbW5IGIpMji0AtNQHWG+YfRP379emIRfZz868
         uEtR/oeWU8vFoGJxpHVCZ03sgWcym88dJx/O0/gJjtATOU2s5Y86LYVJxkzDHTwlsi1T
         jchoOFbQKQQLUGyUft8PMFNiwywx5agMXsH52z/FCB5eTw4IWuMmCZ8b5lmHmRQJBxdQ
         MWCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718039458; x=1718644258;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Ow/yFJlrc5q1U2MI32twYDrNyr1fDFVzZ03MnavAp8U=;
        b=iMz0TlIj17/Nv8CNSREplq5B84+79SY1rbXRpK2raYNT2O9x9YbAQoPzjqyEVh7Mku
         FaMaLKbjny1UuEsqCcpumJ5Ryib2CLzIriq1sizvBNuOj/vwIBHuhPZBH6SvaRWkjMrF
         D237JECkbU0912MohPMdhIB/oAMid/aO7PDUWJ7C2sYYD8CnT/ZRrp+jvVopUfBz4Oi3
         yhXVC0lgTOeI+prWNbyMMvNsfWaigs1xJnQBs5Bi7FLa4sZOSCm7U6tVDNH/KsOOM2OT
         LdkeDfvhIDLDg/BiJkZ+mIdGgIVjZl82jxQ6zKHnF0HSsy+a4u1/7pzcZulyrFynZKk2
         zXEA==
X-Gm-Message-State: AOJu0YwUUv9Z/9XDEa3EuzQZhcmI4Wq03yHYJwvbv+yZ9Efyy9jy98OZ
	ha07n6gbiUMsSdHaINK5T3bBT9lV998rgG91ZdeECXbv0VQ83BP5BJcVXUcQ
X-Google-Smtp-Source: AGHT+IE7RG/p/Fg6RKtMzYJqkZN7lo2Zux5OHR0D2sOPf1CEXxSjuKlH/wc2Kky/Jqp+Rh9x/vh2aw==
X-Received: by 2002:a50:9e06:0:b0:57a:2a46:701 with SMTP id 4fb4d7f45d1cf-57c50902d83mr5874821a12.19.1718039457578;
        Mon, 10 Jun 2024 10:10:57 -0700 (PDT)
From: "=?UTF-8?q?Petr=20Bene=C5=A1?=" <w1benny@gmail.com>
X-Google-Original-From: =?UTF-8?q?Petr=20Bene=C5=A1?= <petr.benes@gendigital.com>
To: xen-devel@lists.xenproject.org
Cc: =?UTF-8?q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH for-4.19? v6 6/9] xen: Make the maximum number of altp2m views configurable for x86
Date: Mon, 10 Jun 2024 17:10:44 +0000
Message-Id: <fee20e24a94cb29dea81631a6b775933d1151da4.1718038855.git.w1benny@gmail.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718038855.git.w1benny@gmail.com>
References: <cover.1718038855.git.w1benny@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Petr Beneš <w1benny@gmail.com>

This commit introduces the ability to configure the maximum number of altp2m
views for a domain during its creation. Previously, the limit was hardcoded
to a maximum of 10. This change allows for greater flexibility in environments
that require more or fewer altp2m views.

The maximum configurable limit for nr_altp2m on x86 is now set to
MAX_NR_ALTP2M (which currently holds the MAX_EPTP value, 512). This cap is
linked to the architectural limit of the EPTP-switching VMFUNC, which supports
up to 512 entries. Although there is no inherent need to limit nr_altp2m in
scenarios that do not use VMFUNC, decoupling these components would require
substantial code changes.

xen_domctl_createdomain::altp2m is extended with a new field `nr`, which
configures this limit for a domain. Additionally, the existing altp2m.opts
field is narrowed from uint32_t to uint16_t so that the two fields together
occupy no more space than before.

Accesses to the altp2m_p2m array are modified to respect the new nr_altp2m
value. Accesses to the altp2m_(visible_)eptp arrays are left unmodified, since
these arrays always have a fixed size of MAX_EPTP.

A dummy hvm_altp2m_supported() function is introduced for non-HVM builds so
that they continue to compile.

Additional sanitization is introduced in x86's arch_sanitise_domain_config()
to force altp2m.nr to 10 when it is 0. This behaviour is only temporary and is
removed in the following commit (which will disallow creating a domain with
altp2m enabled and nr_altp2m set to zero).

The reason for this temporary workaround is to retain the legacy behavior
until the feature is fully activated in libxl.

Also, arm's arch_sanitise_domain_config() is extended to reject a non-zero
altp2m.nr.

Signed-off-by: Petr Beneš <w1benny@gmail.com>
---
 xen/arch/arm/domain.c              |  2 +-
 xen/arch/x86/domain.c              | 40 +++++++++++++++----
 xen/arch/x86/hvm/hvm.c             |  8 +++-
 xen/arch/x86/hvm/vmx/vmx.c         |  2 +-
 xen/arch/x86/include/asm/domain.h  |  9 +++--
 xen/arch/x86/include/asm/hvm/hvm.h |  5 +++
 xen/arch/x86/include/asm/p2m.h     |  4 +-
 xen/arch/x86/mm/altp2m.c           | 64 +++++++++++++++++++-----------
 xen/arch/x86/mm/hap/hap.c          |  6 +--
 xen/arch/x86/mm/mem_access.c       | 14 +++----
 xen/arch/x86/mm/mem_sharing.c      |  2 +-
 xen/arch/x86/mm/p2m-ept.c          |  7 ++--
 xen/arch/x86/mm/p2m.c              |  8 ++--
 xen/common/domain.c                |  1 +
 xen/include/public/domctl.h        |  5 ++-
 xen/include/xen/sched.h            |  2 +
 16 files changed, 121 insertions(+), 58 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 5234b627d0..e5785d2d96 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -688,7 +688,7 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( config->altp2m.opts )
+    if ( config->altp2m.opts || config->altp2m.nr )
     {
         dprintk(XENLOG_INFO, "Altp2m not supported\n");
         return -EINVAL;
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4f2e7bad1..faec09e15e 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -724,16 +724,42 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
         return -EINVAL;
     }

-    if ( altp2m_mode && nested_virt )
+    if ( altp2m_mode )
     {
-        dprintk(XENLOG_INFO,
-                "Nested virt and altp2m are not supported together\n");
-        return -EINVAL;
-    }
+        if ( nested_virt )
+        {
+            dprintk(XENLOG_INFO,
+                    "Nested virt and altp2m are not supported together\n");
+            return -EINVAL;
+        }
+
+        if ( !hap )
+        {
+            dprintk(XENLOG_INFO, "altp2m is only supported with HAP\n");
+            return -EINVAL;
+        }
+
+        if ( !hvm_altp2m_supported() )
+        {
+            dprintk(XENLOG_INFO, "altp2m is not supported\n");
+            return -EINVAL;
+        }
+
+        if ( !config->altp2m.nr )
+        {
+            /* Fix the value to the legacy default */
+            config->altp2m.nr = 10;
+        }

-    if ( altp2m_mode && !hap )
+        if ( config->altp2m.nr > MAX_NR_ALTP2M )
+        {
+            dprintk(XENLOG_INFO, "altp2m.nr must be <= %lu\n", MAX_NR_ALTP2M);
+            return -EINVAL;
+        }
+    }
+    else if ( config->altp2m.nr )
     {
-        dprintk(XENLOG_INFO, "altp2m is only supported with HAP\n");
+        dprintk(XENLOG_INFO, "altp2m.nr must be zero when altp2m is off\n");
         return -EINVAL;
     }

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index a66ebaaceb..3d0357a0f8 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4657,6 +4657,12 @@ static int do_altp2m_op(
         goto out;
     }

+    if ( d->nr_altp2m == 0 )
+    {
+        rc = -EINVAL;
+        goto out;
+    }
+
     if ( (rc = xsm_hvm_altp2mhvm_op(XSM_OTHER, d, mode, a.cmd)) )
         goto out;

@@ -5245,7 +5251,7 @@ void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx)
     if ( !hvm_is_singlestep_supported() )
         return;

-    if ( p2midx >= MAX_ALTP2M )
+    if ( p2midx >= v->domain->nr_altp2m )
         return;

     v->arch.hvm.single_step = true;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f16faa6a61..8548044278 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -4885,7 +4885,7 @@ bool asmlinkage vmx_vmenter_helper(const struct cpu_user_regs *regs)
         {
             unsigned int i;

-            for ( i = 0; i < MAX_ALTP2M; ++i )
+            for ( i = 0; i < currd->nr_altp2m; ++i )
             {
                 if ( currd->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
                     continue;
diff --git a/xen/arch/x86/include/asm/domain.h b/xen/arch/x86/include/asm/domain.h
index f5daeb182b..855e844bed 100644
--- a/xen/arch/x86/include/asm/domain.h
+++ b/xen/arch/x86/include/asm/domain.h
@@ -258,11 +258,12 @@ struct paging_vcpu {
     struct shadow_vcpu shadow;
 };

-#define MAX_NESTEDP2M 10
+#define MAX_EPTP        (PAGE_SIZE / sizeof(uint64_t))
+#define MAX_NR_ALTP2M   MAX_EPTP
+#define MAX_NESTEDP2M   10

-#define MAX_ALTP2M      10 /* arbitrary */
 #define INVALID_ALTP2M  0xffff
-#define MAX_EPTP        (PAGE_SIZE / sizeof(uint64_t))
+
 struct p2m_domain;
 struct time_scale {
     int shift;
@@ -353,7 +354,7 @@ struct arch_domain

     /* altp2m: allow multiple copies of host p2m */
     bool altp2m_active;
-    struct p2m_domain *altp2m_p2m[MAX_ALTP2M];
+    struct p2m_domain **altp2m_p2m;
     mm_lock_t altp2m_list_lock;
     uint64_t *altp2m_eptp;
     uint64_t *altp2m_visible_eptp;
diff --git a/xen/arch/x86/include/asm/hvm/hvm.h b/xen/arch/x86/include/asm/hvm/hvm.h
index 1c01e22c8e..277648dd18 100644
--- a/xen/arch/x86/include/asm/hvm/hvm.h
+++ b/xen/arch/x86/include/asm/hvm/hvm.h
@@ -828,6 +828,11 @@ static inline bool hvm_hap_supported(void)
     return false;
 }

+static inline bool hvm_altp2m_supported(void)
+{
+    return false;
+}
+
 static inline bool hvm_nested_virt_supported(void)
 {
     return false;
diff --git a/xen/arch/x86/include/asm/p2m.h b/xen/arch/x86/include/asm/p2m.h
index c1478ffc36..3bf4ce0782 100644
--- a/xen/arch/x86/include/asm/p2m.h
+++ b/xen/arch/x86/include/asm/p2m.h
@@ -886,7 +886,7 @@ static inline struct p2m_domain *p2m_get_altp2m(struct vcpu *v)
     if ( index == INVALID_ALTP2M )
         return NULL;

-    BUG_ON(index >= MAX_ALTP2M);
+    BUG_ON(index >= v->domain->nr_altp2m);

     return v->domain->arch.altp2m_p2m[index];
 }
@@ -896,7 +896,7 @@ static inline bool p2m_set_altp2m(struct vcpu *v, unsigned int idx)
 {
     struct p2m_domain *orig;

-    BUG_ON(idx >= MAX_ALTP2M);
+    BUG_ON(idx >= v->domain->nr_altp2m);

     if ( idx == vcpu_altp2m(v).p2midx )
         return false;
diff --git a/xen/arch/x86/mm/altp2m.c b/xen/arch/x86/mm/altp2m.c
index 6fe1e9ed6b..4ad24de714 100644
--- a/xen/arch/x86/mm/altp2m.c
+++ b/xen/arch/x86/mm/altp2m.c
@@ -15,6 +15,9 @@
 void
 altp2m_vcpu_initialise(struct vcpu *v)
 {
+    if ( v->domain->nr_altp2m == 0 )
+        return;
+
     if ( v != current )
         vcpu_pause(v);

@@ -32,6 +35,9 @@ altp2m_vcpu_destroy(struct vcpu *v)
 {
     struct p2m_domain *p2m;

+    if ( v->domain->nr_altp2m == 0 )
+        return;
+
     if ( v != current )
         vcpu_pause(v);

@@ -122,7 +128,12 @@ int p2m_init_altp2m(struct domain *d)
     struct p2m_domain *hostp2m = p2m_get_hostp2m(d);

     mm_lock_init(&d->arch.altp2m_list_lock);
-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    d->arch.altp2m_p2m = xzalloc_array(struct p2m_domain *, d->nr_altp2m);
+
+    if ( !d->arch.altp2m_p2m )
+        return -ENOMEM;
+
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         d->arch.altp2m_p2m[i] = p2m = p2m_init_one(d);
         if ( p2m == NULL )
@@ -143,7 +154,10 @@ void p2m_teardown_altp2m(struct domain *d)
     unsigned int i;
     struct p2m_domain *p2m;

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    if ( !d->arch.altp2m_p2m )
+        return;
+
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         if ( !d->arch.altp2m_p2m[i] )
             continue;
@@ -151,6 +165,8 @@ void p2m_teardown_altp2m(struct domain *d)
         d->arch.altp2m_p2m[i] = NULL;
         p2m_free_one(p2m);
     }
+
+    XFREE(d->arch.altp2m_p2m);
 }

 int altp2m_get_effective_entry(struct p2m_domain *ap2m, gfn_t gfn, mfn_t *mfn,
@@ -200,7 +216,7 @@ bool p2m_switch_vcpu_altp2m_by_id(struct vcpu *v, unsigned int idx)
     struct domain *d = v->domain;
     bool rc = false;

-    if ( idx >= MAX_ALTP2M )
+    if ( idx >= d->nr_altp2m )
         return rc;

     altp2m_list_lock(d);
@@ -306,8 +322,8 @@ static void p2m_reset_altp2m(struct domain *d, unsigned int idx,
 {
     struct p2m_domain *p2m;

-    ASSERT(idx < MAX_ALTP2M);
-    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    ASSERT(idx < d->nr_altp2m);
+    p2m = d->arch.altp2m_p2m[array_index_nospec(idx, d->nr_altp2m)];

     p2m_lock(p2m);

@@ -332,7 +348,7 @@ void p2m_flush_altp2m(struct domain *d)

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         p2m_reset_altp2m(d, i, ALTP2M_DEACTIVATE);
         d->arch.altp2m_eptp[i] = mfn_x(INVALID_MFN);
@@ -348,9 +364,9 @@ static int p2m_activate_altp2m(struct domain *d, unsigned int idx,
     struct p2m_domain *hostp2m, *p2m;
     int rc;

-    ASSERT(idx < MAX_ALTP2M);
+    ASSERT(idx < d->nr_altp2m);

-    p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    p2m = d->arch.altp2m_p2m[array_index_nospec(idx, d->nr_altp2m)];
     hostp2m = p2m_get_hostp2m(d);

     p2m_lock(p2m);
@@ -388,7 +404,7 @@ int p2m_init_altp2m_by_id(struct domain *d, unsigned int idx)
     int rc = -EINVAL;
     struct p2m_domain *hostp2m = p2m_get_hostp2m(d);

-    if ( idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+    if ( idx >= d->nr_altp2m )
         return rc;

     altp2m_list_lock(d);
@@ -415,7 +431,7 @@ int p2m_init_next_altp2m(struct domain *d, uint16_t *idx,

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             continue;
@@ -437,7 +453,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
     struct p2m_domain *p2m;
     int rc = -EBUSY;

-    if ( !idx || idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) )
+    if ( !idx || idx >= d->nr_altp2m )
         return rc;

     rc = domain_pause_except_self(d);
@@ -450,7 +466,7 @@ int p2m_destroy_altp2m_by_id(struct domain *d, unsigned int idx)
     if ( d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] !=
          mfn_x(INVALID_MFN) )
     {
-        p2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+        p2m = d->arch.altp2m_p2m[array_index_nospec(idx, d->nr_altp2m)];

         if ( !_atomic_read(p2m->active_vcpus) )
         {
@@ -475,7 +491,7 @@ int p2m_switch_domain_altp2m_by_id(struct domain *d, unsigned int idx)
     struct vcpu *v;
     int rc = -EINVAL;

-    if ( idx >= MAX_ALTP2M )
+    if ( idx >= d->nr_altp2m )
         return rc;

     rc = domain_pause_except_self(d);
@@ -510,13 +526,13 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
     mfn_t mfn;
     int rc = -EINVAL;

-    if ( idx >=  min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+    if ( idx >= d->nr_altp2m ||
          d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
          mfn_x(INVALID_MFN) )
         return rc;

     hp2m = p2m_get_hostp2m(d);
-    ap2m = array_access_nospec(d->arch.altp2m_p2m, idx);
+    ap2m = d->arch.altp2m_p2m[array_index_nospec(idx, d->nr_altp2m)];

     p2m_lock(hp2m);
     p2m_lock(ap2m);
@@ -572,7 +588,7 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         p2m_type_t t;
         p2m_access_t a;
@@ -595,7 +611,7 @@ int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
             else
             {
                 /* At least 2 altp2m's impacted, so reset everything */
-                for ( i = 0; i < MAX_ALTP2M; i++ )
+                for ( i = 0; i < d->nr_altp2m; i++ )
                 {
                     if ( i == last_reset_idx ||
                          d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
@@ -659,12 +675,13 @@ int p2m_set_suppress_ve_multi(struct domain *d,

     if ( sve->view > 0 )
     {
-        if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( sve->view >= d->nr_altp2m ||
              d->arch.altp2m_eptp[array_index_nospec(sve->view, MAX_EPTP)] ==
              mfn_x(INVALID_MFN) )
             return -EINVAL;

-        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, sve->view);
+        p2m = ap2m =
+            d->arch.altp2m_p2m[array_index_nospec(sve->view, d->nr_altp2m)];
     }

     p2m_lock(host_p2m);
@@ -727,12 +744,13 @@ int p2m_get_suppress_ve(struct domain *d, gfn_t gfn, bool *suppress_ve,

     if ( altp2m_idx > 0 )
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
              mfn_x(INVALID_MFN) )
             return -EINVAL;

-        p2m = ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        p2m = ap2m =
+            d->arch.altp2m_p2m[array_index_nospec(altp2m_idx, d->nr_altp2m)];
     }
     else
         p2m = host_p2m;
@@ -763,9 +781,9 @@ int p2m_set_altp2m_view_visibility(struct domain *d, unsigned int altp2m_idx,

     /*
      * Eptp index is correlated with altp2m index and should not exceed
-     * min(MAX_ALTP2M, MAX_EPTP).
+     * d->nr_altp2m.
      */
-    if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+    if ( altp2m_idx >= d->nr_altp2m ||
          d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
          mfn_x(INVALID_MFN) )
         rc = -EINVAL;
diff --git a/xen/arch/x86/mm/hap/hap.c b/xen/arch/x86/mm/hap/hap.c
index d2011fde24..501fd9848b 100644
--- a/xen/arch/x86/mm/hap/hap.c
+++ b/xen/arch/x86/mm/hap/hap.c
@@ -515,7 +515,7 @@ int hap_enable(struct domain *d, u32 mode)
             d->arch.altp2m_visible_eptp[i] = mfn_x(INVALID_MFN);
         }

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             rv = p2m_alloc_table(d->arch.altp2m_p2m[i]);
             if ( rv != 0 )
@@ -538,7 +538,7 @@ void hap_final_teardown(struct domain *d)
     unsigned int i;

     if ( hvm_altp2m_supported() )
-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
             p2m_teardown(d->arch.altp2m_p2m[i], true, NULL);

     /* Destroy nestedp2m's first */
@@ -590,7 +590,7 @@ void hap_teardown(struct domain *d, bool *preempted)
         FREE_XENHEAP_PAGE(d->arch.altp2m_eptp);
         FREE_XENHEAP_PAGE(d->arch.altp2m_visible_eptp);

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             p2m_teardown(d->arch.altp2m_p2m[i], false, preempted);
             if ( preempted && *preempted )
diff --git a/xen/arch/x86/mm/mem_access.c b/xen/arch/x86/mm/mem_access.c
index 60a0cce68a..f98408d187 100644
--- a/xen/arch/x86/mm/mem_access.c
+++ b/xen/arch/x86/mm/mem_access.c
@@ -347,12 +347,12 @@ long p2m_set_mem_access(struct domain *d, gfn_t gfn, uint32_t nr,
     /* altp2m view 0 is treated as the hostp2m */
     if ( altp2m_idx )
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
              mfn_x(INVALID_MFN) )
             return -EINVAL;

-        ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        ap2m = d->arch.altp2m_p2m[array_index_nospec(altp2m_idx, d->nr_altp2m)];
     }

     if ( !xenmem_access_to_p2m_access(p2m, access, &a) )
@@ -403,12 +403,12 @@ long p2m_set_mem_access_multi(struct domain *d,
     /* altp2m view 0 is treated as the hostp2m */
     if ( altp2m_idx )
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
              mfn_x(INVALID_MFN) )
             return -EINVAL;

-        ap2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        ap2m = d->arch.altp2m_p2m[array_index_nospec(altp2m_idx, d->nr_altp2m)];
     }

     p2m_lock(p2m);
@@ -466,12 +466,12 @@ int p2m_get_mem_access(struct domain *d, gfn_t gfn, xenmem_access_t *access,
     }
     else if ( altp2m_idx ) /* altp2m view 0 is treated as the hostp2m */
     {
-        if ( altp2m_idx >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
+        if ( altp2m_idx >= d->nr_altp2m ||
              d->arch.altp2m_eptp[array_index_nospec(altp2m_idx, MAX_EPTP)] ==
              mfn_x(INVALID_MFN) )
             return -EINVAL;

-        p2m = array_access_nospec(d->arch.altp2m_p2m, altp2m_idx);
+        p2m = d->arch.altp2m_p2m[array_index_nospec(altp2m_idx, d->nr_altp2m)];
     }

     return _p2m_get_mem_access(p2m, gfn, access);
@@ -486,7 +486,7 @@ void arch_p2m_set_access_required(struct domain *d, bool access_required)
     if ( altp2m_active(d) )
     {
         unsigned int i;
-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             struct p2m_domain *p2m = d->arch.altp2m_p2m[i];

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index da28266ef0..83bb9dd5df 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -912,7 +912,7 @@ static int nominate_page(struct domain *d, gfn_t gfn,

         altp2m_list_lock(d);

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             ap2m = d->arch.altp2m_p2m[i];
             if ( !ap2m )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f83610cb8c..69fce28d73 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -1293,7 +1293,7 @@ static void ept_set_ad_sync(struct domain *d, bool value)
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             struct p2m_domain *p2m;

@@ -1500,7 +1500,8 @@ void setup_ept_dump(void)

 void p2m_init_altp2m_ept(struct domain *d, unsigned int i)
 {
-    struct p2m_domain *p2m = array_access_nospec(d->arch.altp2m_p2m, i);
+    struct p2m_domain *p2m =
+        d->arch.altp2m_p2m[array_index_nospec(i, d->nr_altp2m)];
     struct p2m_domain *hostp2m = p2m_get_hostp2m(d);
     struct ept_data *ept;

@@ -1519,7 +1520,7 @@ unsigned int p2m_find_altp2m_by_eptp(struct domain *d, uint64_t eptp)

     altp2m_list_lock(d);

-    for ( i = 0; i < MAX_ALTP2M; i++ )
+    for ( i = 0; i < d->nr_altp2m; i++ )
     {
         if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
             continue;
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index e7e327d6a6..ac1d3685f0 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -105,7 +105,7 @@ void p2m_change_entry_type_global(struct domain *d,
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             {
@@ -140,7 +140,7 @@ void p2m_memory_type_changed(struct domain *d)
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             {
@@ -913,7 +913,7 @@ void p2m_change_type_range(struct domain *d,
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             {
@@ -986,7 +986,7 @@ int p2m_finish_type_change(struct domain *d,
     {
         unsigned int i;

-        for ( i = 0; i < MAX_ALTP2M; i++ )
+        for ( i = 0; i < d->nr_altp2m; i++ )
         {
             if ( d->arch.altp2m_eptp[i] != mfn_x(INVALID_MFN) )
             {
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 67cadb7c3f..776442cec0 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -610,6 +610,7 @@ struct domain *domain_create(domid_t domid,
     if ( config )
     {
         d->options = config->flags;
+        d->nr_altp2m = config->altp2m.nr;
         d->vmtrace_size = config->vmtrace_size;
     }

diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index dea399aa8e..056bbc82a2 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -103,7 +103,10 @@ struct xen_domctl_createdomain {
 /* Altp2m mode signaling uses bits [0, 1]. */
 #define XEN_DOMCTL_ALTP2M_mode_mask  (0x3U)
 #define XEN_DOMCTL_ALTP2M_mode(m)    ((m) & XEN_DOMCTL_ALTP2M_mode_mask)
-        uint32_t opts;
+        uint16_t opts;
+
+        /* Number of altp2ms to allocate. */
+        uint16_t nr;
     } altp2m;

     /* Per-vCPU buffer size in bytes.  0 to disable. */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 2dcd1d1a4f..7119f3c44f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -610,6 +610,8 @@ struct domain
         unsigned int guest_request_sync          : 1;
     } monitor;

+    unsigned int nr_altp2m;    /* Number of altp2m tables */
+
     unsigned int vmtrace_size; /* Buffer size in bytes, or 0 to disable. */

 #ifdef CONFIG_ARGO
--
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:55:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:55:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737531.1143889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGjEb-0004Pg-Sg; Mon, 10 Jun 2024 17:55:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737531.1143889; Mon, 10 Jun 2024 17:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGjEb-0004PZ-Q7; Mon, 10 Jun 2024 17:55:09 +0000
Received: by outflank-mailman (input) for mailman id 737531;
 Mon, 10 Jun 2024 17:55:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjEa-0004PO-Km; Mon, 10 Jun 2024 17:55:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjEa-0005FE-Hl; Mon, 10 Jun 2024 17:55:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjEa-0003il-6g; Mon, 10 Jun 2024 17:55:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjEa-00067q-6G; Mon, 10 Jun 2024 17:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=s0O1leV4fgueMN58ad/Pq+E9YygjaIO2ND8UiSKxlj0=; b=TXTgAOWQKLfM+TJ/nEeBxkaY+n
	AW8cOMjrBEKq1zkCkW05Q+RB3l4wA0iFLJiM73p/SwtqsmoBQKNeKn39XtM9o61zdnT+5c0TYbimY
	E8rFKNfomsZ3PRSA/AYRvbvcAEl4qKLkN2dbPp8OUxzV9IqnoemjpeF1JSzP/eF7FkuM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186306-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186306: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6d15276ceddd2bf05995ee2efa86316fca1cd73a
X-Osstest-Versions-That:
    ovmf=3dcc7b73df2b1c38c3c1a31724d577f4085f3ab1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 17:55:08 +0000

flight 186306 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186306/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6d15276ceddd2bf05995ee2efa86316fca1cd73a
baseline version:
 ovmf                 3dcc7b73df2b1c38c3c1a31724d577f4085f3ab1

Last test of basis   186302  2024-06-10 09:11:12 Z    0 days
Testing same since   186306  2024-06-10 16:12:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sebastian Witt <sebastian.witt@siemens.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3dcc7b73df..6d15276ced  6d15276ceddd2bf05995ee2efa86316fca1cd73a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 17:58:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 17:58:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737538.1143899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGjHN-0005St-Af; Mon, 10 Jun 2024 17:58:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737538.1143899; Mon, 10 Jun 2024 17:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGjHN-0005Sl-86; Mon, 10 Jun 2024 17:58:01 +0000
Received: by outflank-mailman (input) for mailman id 737538;
 Mon, 10 Jun 2024 17:58:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjHM-0005Sc-Uj; Mon, 10 Jun 2024 17:58:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjHM-0005HF-T2; Mon, 10 Jun 2024 17:58:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjHM-0003n9-KL; Mon, 10 Jun 2024 17:58:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGjHM-0007qt-Jv; Mon, 10 Jun 2024 17:58:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wS6BLn94MK0Wqr698k7aWZeO+kX06itbNFhFN6Ks6ro=; b=2VsIiSgDUs8fVQJ9Dh8x5PHuAy
	OO9XfWUJRaGpLzYx9p25/X8YxkmgytresHDwMsIZPd/j3O8mX85RFUow3chj+HU8O3G41eT85OfF8
	iSrJEwuXiFu8bYE5/2seKop4bjPaThiseJjIUxRe9Y1yUB2khwVcgY12bweueOpyc9DI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186303-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186303: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=83a7eefedc9b56fe7bfeff13b6c7356688ffa670
X-Osstest-Versions-That:
    linux=771ed66105de9106a6f3e4311e06451881cdac5e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 17:58:00 +0000

flight 186303 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186303/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot         fail in 186298 pass in 186303
 test-armhf-armhf-examine      8 reboot           fail in 186298 pass in 186303
 test-armhf-armhf-xl-raw       8 xen-boot                   fail pass in 186298
 test-armhf-armhf-xl           8 xen-boot                   fail pass in 186298
 test-armhf-armhf-xl-arndale   8 xen-boot                   fail pass in 186298

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186294
 test-armhf-armhf-xl         15 migrate-support-check fail in 186298 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186298 never pass
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186298 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186298 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186298 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186298 never pass
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186294
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186294
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186294
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                83a7eefedc9b56fe7bfeff13b6c7356688ffa670
baseline version:
 linux                771ed66105de9106a6f3e4311e06451881cdac5e

Last test of basis   186294  2024-06-09 08:51:48 Z    1 days
Failing since        186296  2024-06-09 17:11:41 Z    1 days    3 attempts
Testing same since   186298  2024-06-10 01:11:40 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ingo Molnar <mingo@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Mark Rutland <mark.rutland@arm.com>
  Milian Wolff <milian.wolff@kdab.com>
  Namhyung Kim <namhyung@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   771ed66105de..83a7eefedc9b  83a7eefedc9b56fe7bfeff13b6c7356688ffa670 -> tested/linux-linus
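The "hooks ignored" hints in the push log above come from git skipping repository hooks that lack the executable bit. A minimal sketch of the two remedies (set the executable bit, or silence the advice), demonstrated on a throwaway bare repository rather than the real xenbits one:

```shell
set -e
repo=$(mktemp -d)                                    # throwaway bare repo for illustration
git init --bare "$repo" >/dev/null
printf '#!/bin/sh\nexit 0\n' > "$repo/hooks/update"  # hook created without exec bit
chmod +x "$repo/hooks/update"                        # remedy 1: set the executable bit
hook_state=$(test -x "$repo/hooks/update" && echo enabled)
echo "update hook: $hook_state"
# remedy 2: keep hooks disabled but suppress the warning
git --git-dir="$repo" config advice.ignoredHook false
rm -rf "$repo"
```

On the real server the same `chmod +x hooks/update hooks/post-receive hooks/post-update` in the bare repository directory would make git run the hooks on the next push.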


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 18:47:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 18:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737549.1143910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGk3D-0003vW-W7; Mon, 10 Jun 2024 18:47:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737549.1143910; Mon, 10 Jun 2024 18:47:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGk3D-0003vP-Ta; Mon, 10 Jun 2024 18:47:27 +0000
Received: by outflank-mailman (input) for mailman id 737549;
 Mon, 10 Jun 2024 18:47:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7tSL=NM=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sGk3C-0003vJ-CC
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 18:47:26 +0000
Received: from fhigh4-smtp.messagingengine.com
 (fhigh4-smtp.messagingengine.com [103.168.172.155])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ddec8ad8-2759-11ef-b4bb-af5377834399;
 Mon, 10 Jun 2024 20:47:21 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 8914C114014F;
 Mon, 10 Jun 2024 14:47:19 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute2.internal (MEProxy); Mon, 10 Jun 2024 14:47:19 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 10 Jun 2024 14:47:18 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddec8ad8-2759-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718045239;
	 x=1718131639; bh=S8SC96e7az8r2HEhWCKxQ+Xgv9my0B/BvfG/6hoXoaA=; b=
	i89jFrVEKgiBIKq2RW/rgr6CJlH5NvUe22NkAk2FtxgvaxynKYqHGoWCZXa8lac4
	+wDZgrYJBBdqb/6F7pZr4uzEU0qi777F8ql4j4bQ0BtZ4QftBQqKfB5yvG0fUVQI
	e/spvA680opHA4BKYUbrhrWE6jPCmTuMYrsMChSTR0rGQ7qtK293+I/wGF6M1EmC
	FOBDT2l+GLMxQE8Rxx8ZEUMW+rzxJkqwyUb55Ms6vGsAsBYnt2K4dhyNP7Bu5jiS
	svbkwT/GzsHUDFZ4lHTG2SwC8aUpEAEEWjO7nOHzOZDaCBLruh3ciaLGb+H4tyZy
	VLtCE2U0N2WuEcnOZRzmTg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718045239; x=1718131639; bh=S8SC96e7az8r2HEhWCKxQ+Xgv9my
	0B/BvfG/6hoXoaA=; b=ZULHV3cQPrWPRC4i9H5odOOD7swXVohB8gzu8xq9+YTY
	woB/daVikxXo3goYJinaicrhHdf/ahxaUiG50NLIT20zifBbg7CDhmOnMV3rg1K0
	RKz5MtzrtKTdJ+kYZPq5qWbopBGtIG/BvKyLz1Y9QTjJdziTx/ac1Eqmks6Vewy4
	xQLh4DgBo8pIQvPpO5U+5UDQUEIMzUn6uPoDsiUMd5vlmJ4lN6ccz/Yki6dtOOtu
	fmEqmzXWBzpJPmyVK6RXDHBfxsHEwov1MAndbGR4De5FO0zRi86u0juH0cPcbFgI
	tT1hpyAAWLw0MJeAkE8twcKxOHj4PSFvcqM1WGa0Bg==
Feedback-ID: i1568416f:Fastmail
Date: Mon, 10 Jun 2024 20:47:14 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH
 dom0
Message-ID: <ZmdKMthsjw0qejyg@mail-itl>
References: <20240610133210.724346-1-marmarek@invisiblethingslab.com>
 <67a6fc3a-bcc3-48e8-beb8-b3c05217083c@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Z7NtFVrrua97WZbt"
Content-Disposition: inline
In-Reply-To: <67a6fc3a-bcc3-48e8-beb8-b3c05217083c@citrix.com>


--Z7NtFVrrua97WZbt
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 10 Jun 2024 20:47:14 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: xen-devel@lists.xenproject.org,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH
 dom0

On Mon, Jun 10, 2024 at 04:25:01PM +0100, Andrew Cooper wrote:
> On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
> > This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
> > in the kernel, so do that too.
> >
> > Add it to both x86 runners, similar to the PVH domU test.
> >
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> CC Oleksii.
>
> > ---
> > Requires rebuilding test-artifacts/kernel/6.1.19
>
> Ok.
>
> But on a tangent, shouldn't that move forwards somewhat?

There is already "[PATCH 08/12] automation: update kernel for x86 tests"
in the stubdom test series. As noted in the cover letter there, most of
those patches can be applied independently, and they already have
R-by/A-by from Stefano.

> > I'm actually not sure whether there is any point in testing HVM domU on
> > both runners, when the PVH domU variant is already tested on both. Are
> > there any differences between Intel and AMD relevant for QEMU in dom0?
>
> It's not just Qemu, it's also HVMLoader, and the particulars of VT-x/SVM
> VMExit decode information in order to generate ioreqs.

For just HVM, we have PCI passthrough tests on both - they run HVM (but
on PV dom0). My question was more about PVH-dom0 specific parts.

> I'd firmly suggest having both.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--Z7NtFVrrua97WZbt
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZnSjIACgkQ24/THMrX
1yzBsAf/UUWGVzXgyeZ+olY1gl/F4Y5uCzBNhtce04PJLdQBk44152Mo4JqubnHQ
xjiC7uxhLH+bC2gMkJEBoyOOhaBluUQonzKFtvo2/CNkxati7xxkAI9NxupB+OIn
lkc9IukpYH6NppjF+2vFUuXfsoED1wJI73dm2vp5kr2O/b1uN0lvWvPWoz3aOWjf
cTLmX7YaYQo1hSnjNCRs958NQk6CP/u9MFOIrFrPK1iOzSXb2+k93DutKTu4R3HV
iVn3DcEVrOiURhuUZnpaJ9zcne1ZzGqVszmC83bD/A8RDNC3CIJ2RfudbgMjbroC
W1Kg8NHyb2BIq43LSFmJy/RiVliMsQ==
=P3x9
-----END PGP SIGNATURE-----

--Z7NtFVrrua97WZbt--


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 20:38:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 20:38:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737558.1143920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGlmh-00081S-8K; Mon, 10 Jun 2024 20:38:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737558.1143920; Mon, 10 Jun 2024 20:38:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGlmh-00081L-5n; Mon, 10 Jun 2024 20:38:31 +0000
Received: by outflank-mailman (input) for mailman id 737558;
 Mon, 10 Jun 2024 20:38:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sGlmf-00081D-DM
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 20:38:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sGlme-0008Lm-97; Mon, 10 Jun 2024 20:38:28 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.245])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sGlme-0005gt-2j; Mon, 10 Jun 2024 20:38:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=p2WrA/EEMgK3kjMP30WlqWYMjW3EJ8VqTbsaH2qOaOo=; b=WJ6RySQqUlRPU8iwqPSskKraOZ
	d/aP30u9AKLtAe14f8VABPah/2HN8QFXpMPpNoandm60eMdpL+BThMNt+H5fgF7kVbwACI5uieuWB
	w17tiH5o6QQqJxGZ43k0pYbT0hEKx7abE10WmCkX8L8Y9xrGqkzoWxSzPQxkC3rw85BE=;
Message-ID: <615f1766-253d-43dc-b0f0-f8e2eb7360b5@xen.org>
Date: Mon, 10 Jun 2024 21:38:26 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v6 0/7] FF-A notifications
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Jens Wiklander <jens.wiklander@linaro.org>,
 Oleksii <oleksii.kurochko@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 "patches@linaro.org" <patches@linaro.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <michal.orzel@amd.com>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
 <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com>
Content-Language: en-GB
From: Julien Grall <julien@xen.org>
In-Reply-To: <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Bertrand,

On 10/06/2024 16:54, Bertrand Marquis wrote:
> Hi Jens,
> 
>> On 10 Jun 2024, at 08:53, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>>
>> Hi,
>>
>> This patch set adds support for FF-A notifications. We only support
>> global notifications; per-vCPU notifications remain unsupported.
>>
>> The first three patches are further cleanup and can be merged before the
>> rest if desired.
>>
>> A physical SGI is used to make Xen aware of pending FF-A notifications. The
>> physical SGI is selected by the SPMC in the secure world. Since it must not
>> already be used by Xen the SPMC is in practice forced to donate one of the
>> secure SGIs, but that's normally not a problem. The SGI handling in Xen is
>> updated to support registration of handlers for SGIs that aren't statically
>> assigned, that is, SGI IDs above GIC_SGI_MAX.
>>
>> The patch "xen/arm: add and call init_tee_secondary()" provides a hook for
>> registering the needed per-cpu interrupt handler in "xen/arm: ffa: support
>> notification".
>>
>> The patch "xen/arm: add and call tee_free_domain_ctx()" provides a hook for
>> later freeing of the TEE context. This hook is used in "xen/arm: ffa:
>> support notification" and avoids the problem of the TEE context being freed
>> while we still need to access it when handling a Schedule Receiver interrupt. It
>> was suggested as an alternative in [1] that the TEE context could be freed
>> from complete_domain_destroy().
>>
>> [1] https://lore.kernel.org/all/CAHUa44H4YpoxYT7e6WNH5XJFpitZQjqP9Ng4SmTy4eWhyN+F+w@mail.gmail.com/
>>
>> Thanks,
>> Jens
> 
> All patches are now reviewed and/or acked so I think they can get in for the release.

This would need a release-ack from Oleksii (I can't seem to find one
already).

As we discussed last week, I am fine with the idea of merging the FF-A 
patches, as the feature is tech-preview. But there are some changes in 
the generic Arm code. Do you (or Jens) have an assessment of the risk of 
those changes?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 21:10:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 21:10:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737564.1143930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGmHa-0005Lr-KR; Mon, 10 Jun 2024 21:10:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737564.1143930; Mon, 10 Jun 2024 21:10:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGmHa-0005Lk-Gz; Mon, 10 Jun 2024 21:10:26 +0000
Received: by outflank-mailman (input) for mailman id 737564;
 Mon, 10 Jun 2024 21:10:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OhkP=NM=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sGmHa-0005Le-30
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 21:10:26 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dab1ec9a-276d-11ef-90a3-e314d9c70b13;
 Mon, 10 Jun 2024 23:10:24 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-421798185f0so22939455e9.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 14:10:24 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c1aa2f7sm151881475e9.14.2024.06.10.14.10.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 14:10:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dab1ec9a-276d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718053824; x=1718658624; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=UO1E0YV6wqdAbLiHhuKko7dau2iK752deLBVS+LJyeo=;
        b=Cm+GM5d22czNue9p4yuQW599UGhdwyeRjJoQMjzbedt77dWl1A7VwCXNVC7ueGIiT8
         J858HnNPBCcyMJLMwdV/cuNlUJcRGM/x8Ju4aV8MY/8pvz6zA4ETFcwZN8ZYKfphpJYN
         dqTmO7HQV22mOUdqrUQPqddsTOYcfo1iYifT4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718053824; x=1718658624;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UO1E0YV6wqdAbLiHhuKko7dau2iK752deLBVS+LJyeo=;
        b=uLu0lug1XyYPPMAwlb99YHamrAzp/dlfdrTN/ZtdUHasBmMu70NTMw6PSndM3kD0YD
         jYPW7zQZvBKZcWWOiC5ZRGQ549BONYuMg1DPEp3AabvWnHStke7v6090omz1wDV33L0p
         2kUy8wHQ8g6l1AHUXWtv3B8ra1Ik5Bg5XceMO6goglpeaxO4WMhmAEhGIiYLbpTBItLp
         /f9doheW3Rk/BRObZKg2AYqKfWmrs+8f9Zg0xJcGm8aazcIBjdqqX9v9lOXxO92FxfyC
         VkTQLgprJMKxFG1qDAdMv+1Cr7XMMnxSYkBPdOiE4DCuF9UEE9p2In8R8OPYOMhRZBHC
         aHRg==
X-Gm-Message-State: AOJu0YxYaJjgRbcX+RiTlfJKYCjRDIFsgfd9KPfwHjObWHZsaqFeaFNa
	eI7Kakt6zQv4bYA40Gep5FgXd7kQPFJJQJvp8euhcnwQv2nuoqBIH1N0X9M2q/0=
X-Google-Smtp-Source: AGHT+IGJH5ewgRoMSY1DvomImJJRsYknZa/0Cdb1iZuv3dAlj4zPLzXOWWT0d9E8OtPwyOqg8dkBog==
X-Received: by 2002:a05:600c:3aca:b0:421:b47d:4e9d with SMTP id 5b1f17b1804b1-421b47d4eaemr32655145e9.40.1718053824135;
        Mon, 10 Jun 2024 14:10:24 -0700 (PDT)
Message-ID: <f5c63d7a-5254-49f1-a907-9876be7f50b6@citrix.com>
Date: Mon, 10 Jun 2024 22:10:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH
 dom0
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20240610133210.724346-1-marmarek@invisiblethingslab.com>
 <67a6fc3a-bcc3-48e8-beb8-b3c05217083c@citrix.com> <ZmdKMthsjw0qejyg@mail-itl>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <ZmdKMthsjw0qejyg@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10/06/2024 7:47 pm, Marek Marczykowski-Górecki wrote:
> On Mon, Jun 10, 2024 at 04:25:01PM +0100, Andrew Cooper wrote:
>> On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
>>> This tests if QEMU works in PVH dom0. QEMU in dom0 requires enabling TUN
>>> in the kernel, so do that too.
>>>
>>> Add it to both x86 runners, similar to the PVH domU test.
>>>
>>> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>
>> CC Oleksii.
>>
>>> ---
>>> Requires rebuilding test-artifacts/kernel/6.1.19
>> Ok.
>>
>> But on a tangent, shouldn't that move forwards somewhat?
> There is already "[PATCH 08/12] automation: update kernel for x86 tests"
> in the stubdom test series. And as noted in the cover letter there, most
> patches can be applied independently, and also they got R-by/A-by from
> Stefano already.

I've got yet more fixes to come too.  I'll chase down some CI R-ack's in
due course.

>
>>> I'm actually not sure if there is a sense in testing HVM domU on both
>>> runners, when PVH domU variant is already tested on both. Are there any
>>> differences between Intel and AMD relevant for QEMU in dom0?
>> It's not just Qemu, it's also HVMLoader, and the particulars of VT-x/SVM
>> VMExit decode information in order to generate ioreqs.
> For just HVM, we have PCI passthrough tests on both - they run HVM (but
> on PV dom0). My question was more about PVH-dom0 specific parts.

Still a firm recommendation for both.

Dom0 is a very different set of codepaths to other domains, and unlike
PV (where almost all logic is common), for PVH there's a large variety
of VT-x/SVM specifics, both in terms of configuring dom0 to start with
and at runtime.

Within XenRT, it's been very rare that we've found a PVH dom0 bug
affecting Intel and AMD equally.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 23:13:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 23:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737574.1143947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGoCV-0001it-7D; Mon, 10 Jun 2024 23:13:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737574.1143947; Mon, 10 Jun 2024 23:13:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGoCV-0001im-4h; Mon, 10 Jun 2024 23:13:19 +0000
Received: by outflank-mailman (input) for mailman id 737574;
 Mon, 10 Jun 2024 23:13:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7tSL=NM=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sGoCT-0001ie-4Q
 for xen-devel@lists.xenproject.org; Mon, 10 Jun 2024 23:13:17 +0000
Received: from fhigh7-smtp.messagingengine.com
 (fhigh7-smtp.messagingengine.com [103.168.172.158])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 01df97f2-277f-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 01:13:12 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 4D2E0114017B;
 Mon, 10 Jun 2024 19:13:11 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Mon, 10 Jun 2024 19:13:11 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 10 Jun 2024 19:13:09 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01df97f2-277f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718061191;
	 x=1718147591; bh=XdB1LeNJvq5cTqeCGvMUXa2njhR2pweuTKlndzyLsPU=; b=
	KMSSkTvhUHf7Wny3EmIW8zkRSrIsajj0JC9sNO/jnCM4Vh1Wpy7PLSQHxFGYRJYD
	9vxd2nZikfJG8FGpFretDpt3ag3Ja7N+tu7IQiAhdS8W/hn/m+svZdBD0FTJuq3I
	i5gmI50W8p27EXEY1pT4ABoF0vGmtQJMmOVBrpBijKkzT+tfaUojba70SjQBJvn6
	KWQrgUxckJAbKcbhKdPVvRlf3U9aPqB2ct6EN0j3QM7nArCMzEXQsLMnz0yRXh3U
	vf03TFns3qKhxVS3vKVi5zP7en+BVONVr2FR04Eha0RMxk99k/HsHNmlYh31/xU9
	xJfd7moyStFPsSt4xftFuA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718061191; x=1718147591; bh=XdB1LeNJvq5cTqeCGvMUXa2njhR2
	pweuTKlndzyLsPU=; b=LdfUZQSXY06z1Oa9xvucvXpDqvKgTokuAvP7U4RvMK8P
	VigpzTZjYXbGxyClYJGl0NnPr/8vktOdmg2prx17dqw9/5ZoQRVAuIhmt2xf9IpO
	Qeg/wxO8cG8i1gMo9PYZ+xGkjuq26Cz/fUr6KMYi8vQ8+WDF/Jx6b+936ylS1XgD
	r42ICwOJ1hNu0EbLlrKQIHHlKfyJJtrXqDa2VC/ZtA5LlTu5Agqh+N9YIvBl2lgv
	uTKPMQLPyN4oT79r/yead77TnNbqnDJQV9DNP6U3Bjdo8noi7E1+uB6QKRYYH2iv
	wkcgMMRZrs6NSbEmRXh6j3w6h5uSVBGz6TtNAHluDw==
X-ME-Sender: <xms:hohnZsIQKC2ufqpf81oxTD18etEzt3xj-tLwbZo8jbJCOdXH--Jgqw>
    <xme:hohnZsKK6y-n8wlguPZ7w6CgjLtc8YLFcSIzMZvk08Ya_Zq3xazXUgW37maO4FHoO
    2imMoFnzgJjBg>
X-ME-Received: <xmr:hohnZsu4LCQVHmEk6Uvd7OKu3CWOLeWygTCDJcPpgJbEMKvrQyVaVR05VnBHHtzWDOz_8VwMqzhNp_BbByVLutbTQ0Uu1uoehA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeduuddgudekucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiuceomhgrrhhmrghrvghksehinhhvihhsihgslhgvth
    hhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgeeuleejfeegtdeuhfek
    veelgedtvddtfffhkefggeeuudeuveevhffggedugfehnecuffhomhgrihhnpeigvghnrd
    horhhgnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhep
    mhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:h4hnZpZ8GD9nfEiIGN2Sr-erWGKbzOZm9qBSkvEcuhSAf_qJQK-ubw>
    <xmx:h4hnZjaU5wcBe0AdALzeitTz-Z0EVMpD-qyrJLvL5EPOUGHBBAG6PA>
    <xmx:h4hnZlA7cWP_wOuAKeIXDaLlQ3QTzCPbphrj0S2se4l5GLV_-Vqjug>
    <xmx:h4hnZpay3ZIpHnks75oei6lmOuNb8cA4hAXSXsEN1eb_S5DULRT5EQ>
    <xmx:h4hnZn5JNZTWxcPyzbtRf7RJmYNVQVhBtNBrUng7_G6RgtyMBH-tev5a>
Feedback-ID: i1568416f:Fastmail
Date: Tue, 11 Jun 2024 01:13:06 +0200
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Daniel Smith <dpsmith@apertussolutions.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH] MAINTAINERS: alter EFI section
Message-ID: <ZmeIgkFux7tbCZk4@mail-itl>
References: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="v41JMIFRagluK5zY"
Content-Disposition: inline
In-Reply-To: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>


--v41JMIFRagluK5zY
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 11 Jun 2024 01:13:06 +0200
From: Marek Marczykowski <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Daniel Smith <dpsmith@apertussolutions.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH] MAINTAINERS: alter EFI section

On Mon, Jun 10, 2024 at 08:38:45AM +0200, Jan Beulich wrote:
> To get past the recurring friction on the approach to take wrt
> workarounds needed for various firmware flaws, I'm stepping down as the
> maintainer of our code interfacing with EFI firmware. Two new
> maintainers are being introduced in my place.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'm not sure what the proper tag is for cases like this, but:
Acked-by: Marek Marczykowski <marmarek@invisiblethingslab.com>

> ---
> For the new maintainers, here's a 1st patch to consider right away:
> https://lists.xen.org/archives/html/xen-devel/2024-03/msg00931.html.
>
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -308,7 +308,9 @@ F:	automation/eclair_analysis/
>  F:	automation/scripts/eclair
>
>  EFI
> -M:	Jan Beulich <jbeulich@suse.com>
> +M:	Daniel P. Smith <dpsmith@apertussolutions.com>
> +M:	Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> +R:	Jan Beulich <jbeulich@suse.com>
>  S:	Supported
>  F:	xen/arch/x86/efi/
>  F:	xen/arch/x86/include/asm/efi*.h

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--v41JMIFRagluK5zY
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZniIIACgkQ24/THMrX
1yxlMQgAk1G5lANkDxtIWJGe8EyEYWTZm2Tj6VAvgqfXWmhC3wRWboezjwICQjMr
DzT9HBPO/N++PsVtqz+1eKZQxeaVxTGUAS4m5ulj2svIis+qoMEuJRli23GGmlg5
y79VhNFPynkNDBrr670UrBvxPZJWm3NAGLGmqEG8j38zZbj+MWxr34em/CZXjcr5
ZlcU+FRRkyFf33hNM7HxllqI+IoLuPYl5q8u7T9bwHtXfYJsCE6SAhvYt0Ag5tYr
w8VCF6Fwacq8h7hP29AVxzfaWHB+4qlvdX5w4CczlBxZz9BVg+4D7MM5qVDj630O
g+8pgXRNk1aaF9sqMr7iaaYak2zXtA==
=Nh2x
-----END PGP SIGNATURE-----

--v41JMIFRagluK5zY--


From xen-devel-bounces@lists.xenproject.org Mon Jun 10 23:14:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 Jun 2024 23:14:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737578.1143958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGoDF-0002CF-Fu; Mon, 10 Jun 2024 23:14:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737578.1143958; Mon, 10 Jun 2024 23:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGoDF-0002C8-Ck; Mon, 10 Jun 2024 23:14:05 +0000
Received: by outflank-mailman (input) for mailman id 737578;
 Mon, 10 Jun 2024 23:14:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGoDD-0002By-BM; Mon, 10 Jun 2024 23:14:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGoDD-0002ch-9E; Mon, 10 Jun 2024 23:14:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGoDC-0002uj-Tt; Mon, 10 Jun 2024 23:14:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGoDC-0006x0-TL; Mon, 10 Jun 2024 23:14:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hffl9TDMkJaOj1stquvlInVi4DAupZn3vSrANrnmb+A=; b=4gLyH+Pcl8OT5IEF4D3kpsNfr+
	hAgb4AiCv1mFWUcQpc3+kPyduSt4faMW57uQu1bG8gP4ePACH5nH8lQUwKUqy7B5Qf6RcYRC/ebzp
	Mona9e6PkpnaSFyLyty3IEf77KaYpGhG9Z58MnNiLzUX/XGqk9BJ89dsf1qjq8PL+hqI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186305-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186305: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=0a5b2ca32c1506bbb0e636a2dfab7502a52fe136
X-Osstest-Versions-That:
    xen=c2d5e63c7380c7cb435d00211b512c53accb528e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 Jun 2024 23:14:02 +0000

flight 186305 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186305/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186299
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186299
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186299
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186299
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186299
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186299
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  0a5b2ca32c1506bbb0e636a2dfab7502a52fe136
baseline version:
 xen                  c2d5e63c7380c7cb435d00211b512c53accb528e

Last test of basis   186299  2024-06-10 01:51:55 Z    0 days
Testing same since   186305  2024-06-10 12:07:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c2d5e63c73..0a5b2ca32c  0a5b2ca32c1506bbb0e636a2dfab7502a52fe136 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737593.1143968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvH-0006Mt-KT; Tue, 11 Jun 2024 05:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737593.1143968; Tue, 11 Jun 2024 05:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvH-0006Mm-HT; Tue, 11 Jun 2024 05:19:55 +0000
Received: by outflank-mailman (input) for mailman id 737593;
 Tue, 11 Jun 2024 05:19:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvE-0006Mb-Pq
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:19:53 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 39b505ae-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:19:51 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtuu-00000007Qnj-2sBx; Tue, 11 Jun 2024 05:19:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39b505ae-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=xQZ0tGVxbRfMe28F2c3vaqhpukPqzWUR2mw3UooUIr8=; b=SP0peDuUs+KEwheRnMyMlYgkS7
	BJO3TbWxLcYZrtF9hG/zSYlaaEBbMSCF64VvtU307jsPdSrtdMGsHLbLIPlb+3JQ+eYPkj0WFLQ82
	0RHyWfdxKEhDZW5T1ob5hHG0b14ikv/6qveZEUAQmVDgvdsZVt4TYWW6sqJaVmrwg5KcUOZQ84nPg
	CjpVq303dQfXA8cJ1suDHbBA/aeCpy9t1nwAxlcRoZCDjrGUy5Vn9fE6+OpvbxggFWtigsP5ZcQpg
	pUZerKUXlbVu4I0/QgfLjMkjQJmRXdy7KEomGmmJMn8IDs2fYlaMIz6Izl+aZ3CL2CjeVsmwbwNGT
	O/+0DiLA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: move features flags into queue_limits
Date: Tue, 11 Jun 2024 07:19:00 +0200
Message-ID: <20240611051929.513387-1-hch@lst.de>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Hi all,

this is the third and last major series to convert settings to
queue_limits for this merge window.  After a bunch of prep patches to
get various drivers in shape, it moves all the queue_flags that specify
driver-controlled features into the queue limits so that they can be
set atomically and are separated from the blk-mq internal flags.

Note that I've only Cc'ed the maintainers for drivers with non-mechanical
changes, as the Cc list is already huge.

This series sits on top of the "convert the SCSI ULDs to the atomic queue
limits API v2" and "move integrity settings to queue_limits v2" series.

A git tree is available here:

    git://git.infradead.org/users/hch/block.git block-limit-flags

Gitweb:

    http://git.infradead.org/?p=users/hch/block.git;a=shortlog;h=refs/heads/block-limit-flags

Diffstat:
 Documentation/block/writeback_cache_control.rst |   67 +++++---
 arch/m68k/emu/nfblock.c                         |    1 
 arch/um/drivers/ubd_kern.c                      |    3 
 arch/xtensa/platforms/iss/simdisk.c             |    5 
 block/blk-core.c                                |    7 
 block/blk-flush.c                               |   36 ++--
 block/blk-mq-debugfs.c                          |   13 -
 block/blk-mq.c                                  |   42 +++--
 block/blk-settings.c                            |   46 ++----
 block/blk-sysfs.c                               |  118 ++++++++-------
 block/blk-wbt.c                                 |    4 
 block/blk.h                                     |    2 
 drivers/block/amiflop.c                         |    5 
 drivers/block/aoe/aoeblk.c                      |    1 
 drivers/block/ataflop.c                         |    5 
 drivers/block/brd.c                             |    6 
 drivers/block/drbd/drbd_main.c                  |    6 
 drivers/block/floppy.c                          |    3 
 drivers/block/loop.c                            |   79 +++++-----
 drivers/block/mtip32xx/mtip32xx.c               |    2 
 drivers/block/n64cart.c                         |    2 
 drivers/block/nbd.c                             |   24 +--
 drivers/block/null_blk/main.c                   |   13 -
 drivers/block/null_blk/zoned.c                  |    3 
 drivers/block/pktcdvd.c                         |    1 
 drivers/block/ps3disk.c                         |    8 -
 drivers/block/rbd.c                             |   12 -
 drivers/block/rnbd/rnbd-clt.c                   |   14 -
 drivers/block/sunvdc.c                          |    1 
 drivers/block/swim.c                            |    5 
 drivers/block/swim3.c                           |    5 
 drivers/block/ublk_drv.c                        |   21 +-
 drivers/block/virtio_blk.c                      |   37 ++--
 drivers/block/xen-blkfront.c                    |   33 +---
 drivers/block/zram/zram_drv.c                   |    6 
 drivers/cdrom/gdrom.c                           |    1 
 drivers/md/bcache/super.c                       |    9 -
 drivers/md/dm-table.c                           |  181 +++++-------------------
 drivers/md/dm-zone.c                            |    2 
 drivers/md/dm-zoned-target.c                    |    2 
 drivers/md/dm.c                                 |   13 -
 drivers/md/md.c                                 |   40 -----
 drivers/md/raid5.c                              |    6 
 drivers/mmc/core/block.c                        |   42 ++---
 drivers/mmc/core/queue.c                        |   20 +-
 drivers/mmc/core/queue.h                        |    3 
 drivers/mtd/mtd_blkdevs.c                       |    9 -
 drivers/nvdimm/btt.c                            |    4 
 drivers/nvdimm/pmem.c                           |   14 -
 drivers/nvme/host/core.c                        |   33 ++--
 drivers/nvme/host/multipath.c                   |   24 ---
 drivers/nvme/host/zns.c                         |    3 
 drivers/s390/block/dasd_genhd.c                 |    1 
 drivers/s390/block/dcssblk.c                    |    2 
 drivers/s390/block/scm_blk.c                    |    5 
 drivers/scsi/iscsi_tcp.c                        |    8 -
 drivers/scsi/scsi_lib.c                         |    5 
 drivers/scsi/sd.c                               |   60 +++----
 drivers/scsi/sd.h                               |    7 
 drivers/scsi/sd_zbc.c                           |   17 +-
 include/linux/blkdev.h                          |  119 +++++++++++----
 61 files changed, 556 insertions(+), 710 deletions(-)


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737594.1143972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvH-0006Q4-R8; Tue, 11 Jun 2024 05:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737594.1143972; Tue, 11 Jun 2024 05:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvH-0006PW-Np; Tue, 11 Jun 2024 05:19:55 +0000
Received: by outflank-mailman (input) for mailman id 737594;
 Tue, 11 Jun 2024 05:19:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvF-0006Mb-Sg
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:19:53 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b47ac41-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:19:53 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtuz-00000007Qo9-2Hoe; Tue, 11 Jun 2024 05:19:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b47ac41-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=LgTI+Mo1pVMP7QW3HpPljhgNIGtSuvGjjqfKNfk6Ue4=; b=T7BSPgV/QyrrcIKlwpgJzxhidX
	awUsR5pwkW2Y+/MUO52qBbxgexFKnN5lOA2j2FyQhx+MMPXFJnlBRAC7Q73t7WcP8+sz5LnII/s8t
	WjAaWbWnPNl4wwIf+gxy+iK9/rbGDSyhdks8mhXL++xpcrKaIympBw3/ZuRB6sO340tcY+E4rtfwn
	TwEEHtQ3MRsu599usMYpH5vHlE7O6axb/guHQvsshFZgotq4wxafrcixMXnVqL150iG8TowKDlHNj
	v336ivp7VUra5eU9XwMH15s+LohPlgy69vq7IBuXhmdR6+Ce7ImdHMsb7gj8BK/MgQUeBNe8bpumT
	UBRUvVrQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 02/26] sd: move zone limits setup out of sd_read_block_characteristics
Date: Tue, 11 Jun 2024 07:19:02 +0200
Message-ID: <20240611051929.513387-3-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move a bit of code that sets up the zone flag and the write granularity
into sd_zbc_read_zones to be with the rest of the zoned limits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/scsi/sd.c     | 21 +--------------------
 drivers/scsi/sd_zbc.c | 13 ++++++++++++-
 2 files changed, 13 insertions(+), 21 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 85b45345a27739..5bfed61c70db8f 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3308,29 +3308,10 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
 	}
 
-
-#ifdef CONFIG_BLK_DEV_ZONED /* sd_probe rejects ZBD devices early otherwise */
-	if (sdkp->device->type == TYPE_ZBC) {
-		lim->zoned = true;
-
-		/*
-		 * Per ZBC and ZAC specifications, writes in sequential write
-		 * required zones of host-managed devices must be aligned to
-		 * the device physical block size.
-		 */
-		lim->zone_write_granularity = sdkp->physical_block_size;
-	} else {
-		/*
-		 * Host-aware devices are treated as conventional.
-		 */
-		lim->zoned = false;
-	}
-#endif /* CONFIG_BLK_DEV_ZONED */
-
 	if (!sdkp->first_scan)
 		return;
 
-	if (lim->zoned)
+	if (sdkp->device->type == TYPE_ZBC)
 		sd_printk(KERN_NOTICE, sdkp, "Host-managed zoned block device\n");
 	else if (sdkp->zoned == 1)
 		sd_printk(KERN_NOTICE, sdkp, "Host-aware SMR disk used as regular disk\n");
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 422eaed8457227..e9501db0450be3 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -598,8 +598,19 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	u32 zone_blocks = 0;
 	int ret;
 
-	if (!sd_is_zoned(sdkp))
+	if (!sd_is_zoned(sdkp)) {
+		lim->zoned = false;
 		return 0;
+	}
+
+	lim->zoned = true;
+
+	/*
+	 * Per ZBC and ZAC specifications, writes in sequential write required
+	 * zones of host-managed devices must be aligned to the device physical
+	 * block size.
+	 */
+	lim->zone_write_granularity = sdkp->physical_block_size;
 
 	/* READ16/WRITE16/SYNC16 is mandatory for ZBC devices */
 	sdkp->device->use_16_for_rw = 1;
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737596.1143998 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvM-00074s-B4; Tue, 11 Jun 2024 05:20:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737596.1143998; Tue, 11 Jun 2024 05:20:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvM-00074j-8G; Tue, 11 Jun 2024 05:20:00 +0000
Received: by outflank-mailman (input) for mailman id 737596;
 Tue, 11 Jun 2024 05:19:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvL-0006Mb-0Y
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:19:59 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3e5b5cb7-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:19:58 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtv4-00000007QqN-2LdZ; Tue, 11 Jun 2024 05:19:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e5b5cb7-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=/rZOEnnB7Ck+WcvkswrD5CLavDNj5FzzjxdVzwDte5o=; b=WiVjV03MdppIPPUHNpSsnAOswF
	ZzMT4QsJKrqeWjtewh91sTq17Q4mo/C/OnhzWIHeo3L6EpP/u5Gsw2y5ZNZfc7s43LyqK4sDE1FpG
	n1CGW9hR1hsCjP39wbq7dK7SQZcyfIub4k/1gShQl6ug5pyEmBvaRSdlHDbKv6EQUFXWG9X2Kzvcy
	zRMbflEfBG9QqO9Hao0c27BB16ZyEsohd1SqZIqSoJ2Ac8A6Z5CwEpmzXHO5aP1ywzKVK6Fios+k2
	5kTJ0aXnUDA7DWdvI+0EcSX4K7nERHV9k7gUQZmoCHBh5G3C3UOTx9SpD50hTrGhUjyYp0My1+kjg
	Nmuwb5EA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 04/26] loop: always update discard settings in loop_reconfigure_limits
Date: Tue, 11 Jun 2024 07:19:04 +0200
Message-ID: <20240611051929.513387-5-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Simplify loop_reconfigure_limits by always updating the discard limits.
This adds a little more work to loop_set_block_size, but doesn't change
the outcome as the discard flag won't change.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 93a49c40a31a71..c658282454af1b 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -975,8 +975,7 @@ loop_set_status_from_info(struct loop_device *lo,
 	return 0;
 }
 
-static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize,
-		bool update_discard_settings)
+static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 {
 	struct queue_limits lim;
 
@@ -984,8 +983,7 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize,
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
-	if (update_discard_settings)
-		loop_config_discard(lo, &lim);
+	loop_config_discard(lo, &lim);
 	return queue_limits_commit_update(lo->lo_queue, &lim);
 }
 
@@ -1086,7 +1084,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	else
 		bsize = 512;
 
-	error = loop_reconfigure_limits(lo, bsize, true);
+	error = loop_reconfigure_limits(lo, bsize);
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
 
@@ -1496,7 +1494,7 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
 	invalidate_bdev(lo->lo_device);
 
 	blk_mq_freeze_queue(lo->lo_queue);
-	err = loop_reconfigure_limits(lo, arg, false);
+	err = loop_reconfigure_limits(lo, arg);
 	loop_update_dio(lo);
 	blk_mq_unfreeze_queue(lo->lo_queue);
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737597.1144008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvQ-0007vm-Jc; Tue, 11 Jun 2024 05:20:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737597.1144008; Tue, 11 Jun 2024 05:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvQ-0007v9-Gj; Tue, 11 Jun 2024 05:20:04 +0000
Received: by outflank-mailman (input) for mailman id 737597;
 Tue, 11 Jun 2024 05:20:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvP-0006Mb-5y
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:03 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 40859eb5-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:20:02 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtv2-00000007QpR-0wAa; Tue, 11 Jun 2024 05:19:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40859eb5-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=QZOtfSoQAA02jj5gA5gptGlEH50imtUdZkhUebJgjrY=; b=UXmi+XkXDTmhUIFmQ+lCw4UloG
	14qIavWEJ+8aZIyq6Y+DxHB0S+SOdr/Qm7RnBCGVrQ6ORBr+Hqf8WMeKLVA1Ap/HXQFtuZp9XeHwE
	mLWCp4AAXKrG1/s2Fd8LRv78JletbtIZkPhtpE7wERgDDf281dTYOKtaadOkC4olXUszQKCGm5yP1
	BpkgDcxJ6TrEqCdmFlqaO5NBmiBg3C6wPCtwYpOHue5QOYvKPnHjae3lNtfzFW81dHNN4ZfSiAMBf
	4uq0nk+3pDzVezm1sl4pM5+/+sE3FR5B72qqnqHf1v7fD0uNQAYqOZPwF/oW5OjnMH5uLvQLA7tKb
	NcAQXwJw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 03/26] loop: stop using loop_reconfigure_limits in __loop_clr_fd
Date: Tue, 11 Jun 2024 07:19:03 +0200
Message-ID: <20240611051929.513387-4-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

__loop_clr_fd wants to clear all settings on the device.  Prepare for
moving more settings into the block limits by open coding
loop_reconfigure_limits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 93780f41646b75..93a49c40a31a71 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1133,6 +1133,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 
 static void __loop_clr_fd(struct loop_device *lo, bool release)
 {
+	struct queue_limits lim;
 	struct file *filp;
 	gfp_t gfp = lo->old_gfp_mask;
 
@@ -1156,7 +1157,14 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	lo->lo_offset = 0;
 	lo->lo_sizelimit = 0;
 	memset(lo->lo_file_name, 0, LO_NAME_SIZE);
-	loop_reconfigure_limits(lo, 512, false);
+
+	/* reset the block size to the default */
+	lim = queue_limits_start_update(lo->lo_queue);
+	lim.logical_block_size = 512;
+	lim.physical_block_size = 512;
+	lim.io_min = 512;
+	queue_limits_commit_update(lo->lo_queue, &lim);
+
 	invalidate_disk(lo->lo_disk);
 	loop_sysfs_exit(lo);
 	/* let user-space know about this change */
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737601.1144018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvT-0008VD-2g; Tue, 11 Jun 2024 05:20:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737601.1144018; Tue, 11 Jun 2024 05:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvS-0008Ul-Vn; Tue, 11 Jun 2024 05:20:06 +0000
Received: by outflank-mailman (input) for mailman id 737601;
 Tue, 11 Jun 2024 05:20:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvS-0006gk-Jw
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:06 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41f05385-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:05 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtv6-00000007Qry-3x0S; Tue, 11 Jun 2024 05:19:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41f05385-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=eTQ/ypSzK4lWAaVunDc3tjHt1iOpV7nN08jg8TeOXAw=; b=sHoQc0fUtpHycjsITV56G6LvHu
	Xm6IZXN1xSVejiDFIWNtspmlJUAy3p7hyoPQDHaK2j8uPpjWRbLpfhR0ew8O9GOJzk9M0kamdKVSK
	h+gO2FlpjqqnZU18qlfXHcI65jms2sw8Gusy9MG0fvAfD2y9LgREXQqEidgiJekt6Ad0BXV8H7wJ7
	laGv8UKsVQsPiuQ/hLbDrBliLkQDBQpg9pL4x4e1mvw/4MdqnWTVfp+cTO0elCEI+YMIJ7UGMAb5i
	o+KErO/5ahImbdE+1ecP+Db6dV/kjpPeVfK/+qvScno/vTvojiYJmwz/wDzBrXkKvWBAHc0v7OlDs
	cGWiEsNA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 05/26] loop: regularize upgrading the block size for direct I/O
Date: Tue, 11 Jun 2024 07:19:05 +0200
Message-ID: <20240611051929.513387-6-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

The LOOP_CONFIGURE path automatically upgrades the block size to that
of the underlying file for O_DIRECT file descriptors, but the
LOOP_SET_BLOCK_SIZE path does not.  Fix this by lifting the code to
pick the block size into common code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index c658282454af1b..4f6d8514d19bd6 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -975,10 +975,24 @@ loop_set_status_from_info(struct loop_device *lo,
 	return 0;
 }
 
+static unsigned short loop_default_blocksize(struct loop_device *lo,
+		struct block_device *backing_bdev)
+{
+	/* In case of direct I/O, match underlying block size */
+	if ((lo->lo_backing_file->f_flags & O_DIRECT) && backing_bdev)
+		return bdev_logical_block_size(backing_bdev);
+	return 512;
+}
+
 static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 {
+	struct file *file = lo->lo_backing_file;
+	struct inode *inode = file->f_mapping->host;
 	struct queue_limits lim;
 
+	if (!bsize)
+		bsize = loop_default_blocksize(lo, inode->i_sb->s_bdev);
+
 	lim = queue_limits_start_update(lo->lo_queue);
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
@@ -997,7 +1011,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	int error;
 	loff_t size;
 	bool partscan;
-	unsigned short bsize;
 	bool is_loop;
 
 	if (!file)
@@ -1076,15 +1089,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	if (!(lo->lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
 		blk_queue_write_cache(lo->lo_queue, true, false);
 
-	if (config->block_size)
-		bsize = config->block_size;
-	else if ((lo->lo_backing_file->f_flags & O_DIRECT) && inode->i_sb->s_bdev)
-		/* In case of direct I/O, match underlying block size */
-		bsize = bdev_logical_block_size(inode->i_sb->s_bdev);
-	else
-		bsize = 512;
-
-	error = loop_reconfigure_limits(lo, bsize);
+	error = loop_reconfigure_limits(lo, config->block_size);
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737595.1143988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvL-0006pd-44; Tue, 11 Jun 2024 05:19:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737595.1143988; Tue, 11 Jun 2024 05:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvL-0006pS-0p; Tue, 11 Jun 2024 05:19:59 +0000
Received: by outflank-mailman (input) for mailman id 737595;
 Tue, 11 Jun 2024 05:19:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvJ-0006gk-Tp
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:19:57 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3a9dd0f2-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:19:54 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtux-00000007Qnu-0nLE; Tue, 11 Jun 2024 05:19:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a9dd0f2-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=lvXE5gTUk5P3TVKgX5q5Q8J1/DjEMwO75JeCg7ieY7s=; b=rUA9ZaXUx5MpyhS2j5LBHsyXxQ
	KbTuAJUR1haH+9S1m8G4GZVBD1g7bx2ayPONElMYjcxYJ0QDJPl5o5zOp4k78Qgw0XyFdjZCPo5wp
	vIgPX59oPcfHLDliRB8Y+Bi/vjpbtw2V1cwzkbepDA/QzMJIwAsmVAdMFC3UF8wgtmukkiprCLdOG
	RnSyiCv+Mvh3ga6qyL+ash+wP6WvC9+JDwisvTf85QvcZuGjp66L5kSvkUkgxcMfJHjnZZ91k2qO0
	ThG6qMn0y6iW42CQt05TqF0tEm3cI9aOk07AkgbKRZMt1CIdlg2QNDBGCgPEgHOzn7Iuc8T8ztdEX
	0c6zkTmg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 01/26] sd: fix sd_is_zoned
Date: Tue, 11 Jun 2024 07:19:01 +0200
Message-ID: <20240611051929.513387-2-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Since commit 7437bb73f087 ("block: remove support for the host aware zone
model"), only ZBC devices expose a zoned access model.  sd_is_zoned is
used to check for that and thus must return false for host aware devices.

Fixes: 7437bb73f087 ("block: remove support for the host aware zone model")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/scsi/sd.h     | 7 ++++++-
 drivers/scsi/sd_zbc.c | 7 +------
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index 726f1613f6cb56..65dff3c2108926 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -222,9 +222,14 @@ static inline sector_t sectors_to_logical(struct scsi_device *sdev, sector_t sec
 
 void sd_dif_config_host(struct scsi_disk *sdkp, struct queue_limits *lim);
 
+/*
+ * Check if we support a zoned model for this device.
+ *
+ * Note that host aware devices are treated as conventional by Linux.
+ */
 static inline int sd_is_zoned(struct scsi_disk *sdkp)
 {
-	return sdkp->zoned == 1 || sdkp->device->type == TYPE_ZBC;
+	return sdkp->device->type == TYPE_ZBC;
 }
 
 #ifdef CONFIG_BLK_DEV_ZONED
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index f685838d9ed214..422eaed8457227 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -598,13 +598,8 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	u32 zone_blocks = 0;
 	int ret;
 
-	if (!sd_is_zoned(sdkp)) {
-		/*
-		 * Device managed or normal SCSI disk, no special handling
-		 * required.
-		 */
+	if (!sd_is_zoned(sdkp))
 		return 0;
-	}
 
 	/* READ16/WRITE16/SYNC16 is mandatory for ZBC devices */
 	sdkp->device->use_16_for_rw = 1;
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737603.1144028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvV-0000PX-AK; Tue, 11 Jun 2024 05:20:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737603.1144028; Tue, 11 Jun 2024 05:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvV-0000PM-6d; Tue, 11 Jun 2024 05:20:09 +0000
Received: by outflank-mailman (input) for mailman id 737603;
 Tue, 11 Jun 2024 05:20:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvU-0006Mb-6r
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:08 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 437cbbf5-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:20:07 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvC-00000007QuL-20ql; Tue, 11 Jun 2024 05:19:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 437cbbf5-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=ssio61bxvyjbVvX4GR2JkhWcuVQT+rCdDVvaw5rJLVc=; b=EJfeUdDKwEweKUiqH+eMopsdVZ
	x0gPUHB6qGWFYwZZXoBfv6LI+X2dpdCL1TKhBlBBGZmhekTG11CsY1Tp8gFm1z77BgnAggRFyPHtv
	/dS9x1QJgeNzRrx5rQgmApC3kW21NnhiRIEaqoJzqGn3bNpz/08IyT53p4q+cqbxaW2iE/+UsRzno
	ybACNAR9eGBjFjHk1OhTQhWhdG1r7xhSZGewPFQ+tjMcJUiDtfg4DFIXSmxBMyXPF8Somg+Z608jF
	modWNKz1rFqGQflCfEdrX2jtnU5vYnXih6AoW/RLdg3bhHrcGssn0BCXi6xxJfpGEC4cpZmmzl8lq
	/NJxmiHw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 06/26] loop: also use the default block size from an underlying block device
Date: Tue, 11 Jun 2024 07:19:06 +0200
Message-ID: <20240611051929.513387-7-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Fix the code in loop_reconfigure_limits that picks a default block size for
O_DIRECT file descriptors so that it also works when the loop device sits
directly on top of a block device, not just on a regular file on a block
device based file system.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 4f6d8514d19bd6..d7cf6bbbfb1b86 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -988,10 +988,16 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 {
 	struct file *file = lo->lo_backing_file;
 	struct inode *inode = file->f_mapping->host;
+	struct block_device *backing_bdev = NULL;
 	struct queue_limits lim;
 
+	if (S_ISBLK(inode->i_mode))
+		backing_bdev = I_BDEV(inode);
+	else if (inode->i_sb->s_bdev)
+		backing_bdev = inode->i_sb->s_bdev;
+
 	if (!bsize)
-		bsize = loop_default_blocksize(lo, inode->i_sb->s_bdev);
+		bsize = loop_default_blocksize(lo, backing_bdev);
 
 	lim = queue_limits_start_update(lo->lo_queue);
 	lim.logical_block_size = bsize;
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737607.1144038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtva-0000vh-I4; Tue, 11 Jun 2024 05:20:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737607.1144038; Tue, 11 Jun 2024 05:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtva-0000vY-Ep; Tue, 11 Jun 2024 05:20:14 +0000
Received: by outflank-mailman (input) for mailman id 737607;
 Tue, 11 Jun 2024 05:20:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvZ-0006gk-Ai
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:13 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 463fc4c0-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:11 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvI-00000007Qyl-3It7; Tue, 11 Jun 2024 05:19:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 463fc4c0-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=kL0f+BmFpJHwuUReMJjo5PGBiELHza4z/Zsrvg8Z7ys=; b=gdW9T5rz4N+rEJBWq0kAF2g3Vp
	o77U9+9fDZ5/f3DhEVBIVnsx+JISiCMAKbcTag0Zkah1FJ+d7ef7a0XWw5GyKTOuAjHDH2z6PQLXI
	qqm8eVD9+X/AUEDR4su0Fv0ORO890Gug6klvG+fz9LLmonMsZ/nWiuFaMeJEtd1ZpWZnp/6SQiBuw
	sJhpQDUOouNBi1OIMt6miJfjkGSe6SkbiTNcGWDKAw5t83d+9BGwUvy+rhPKTllj7eZvWDRIHkv9C
	XGWPCU2DEv4+tkiZUyd+xNvRq9iNhaNZSATRihjuGMZhyNxWmkMH18Ug0QkgF+uOM87yi68/tGWq+
	D62iDLEw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 08/26] virtio_blk: remove virtblk_update_cache_mode
Date: Tue, 11 Jun 2024 07:19:08 +0200
Message-ID: <20240611051929.513387-9-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

virtblk_update_cache_mode boils down to a single call to
blk_queue_write_cache.  Remove it in preparation for moving the cache
control flags into the queue_limits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/virtio_blk.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 2351f411fa4680..378b241911ca87 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1089,14 +1089,6 @@ static int virtblk_get_cache_mode(struct virtio_device *vdev)
 	return writeback;
 }
 
-static void virtblk_update_cache_mode(struct virtio_device *vdev)
-{
-	u8 writeback = virtblk_get_cache_mode(vdev);
-	struct virtio_blk *vblk = vdev->priv;
-
-	blk_queue_write_cache(vblk->disk->queue, writeback, false);
-}
-
 static const char *const virtblk_cache_types[] = {
 	"write through", "write back"
 };
@@ -1116,7 +1108,7 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 		return i;
 
 	virtio_cwrite8(vdev, offsetof(struct virtio_blk_config, wce), i);
-	virtblk_update_cache_mode(vdev);
+	blk_queue_write_cache(disk->queue, virtblk_get_cache_mode(vdev), false);
 	return count;
 }
 
@@ -1528,7 +1520,8 @@ static int virtblk_probe(struct virtio_device *vdev)
 	vblk->index = index;
 
 	/* configure queue flush support */
-	virtblk_update_cache_mode(vdev);
+	blk_queue_write_cache(vblk->disk->queue, virtblk_get_cache_mode(vdev),
+			false);
 
 	/* If disk is read-only in the host, the guest should obey */
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737610.1144048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvf-0001Wv-RN; Tue, 11 Jun 2024 05:20:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737610.1144048; Tue, 11 Jun 2024 05:20:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvf-0001Wl-MH; Tue, 11 Jun 2024 05:20:19 +0000
Received: by outflank-mailman (input) for mailman id 737610;
 Tue, 11 Jun 2024 05:20:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvd-0006gk-Ro
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:17 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 48da98a5-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:16 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvL-00000007R0v-1C21; Tue, 11 Jun 2024 05:19:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 48da98a5-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=lt3t26OR+aUwy+yHARJ3+3kli+Y9MzGBmWJgtPDO0Yk=; b=3Uu1BULf7magHnf6R7hsR1GIlN
	SJd84Gj4n7hMxXgVxqwTjx12uB4BxsPsKofgqeK8s1tB9x/8UIqfFwYdEmhAf22CuIAOwrm0oxESi
	ycgsWyJNCcFgkZRL9iNnvCLYcGs2YN4KpB13M0ItXIwsYf4zxZ//KlgnW18ry7obpXptNUl3Jior/
	Fwwpt7Ey8K33VEKo8oxSt00ZEPoPxl7Xaeg9EVXtqGT/k85qAxMYikzkGk0wNt6tJYX2Bnl0J+7lG
	lA1TnI+25z1ZuXjSGhoq/L28+4F0/MOFLHnnstU1+tbzD0qia6Man53VkKd6ZswYiGXsl7DEHICUx
	lOPAQasw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 09/26] nbd: move setting the cache control flags to __nbd_set_size
Date: Tue, 11 Jun 2024 07:19:09 +0200
Message-ID: <20240611051929.513387-10-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move setting the cache control flags in nbd to __nbd_set_size in
preparation for moving these flags into the queue_limits structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/nbd.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index ad887d614d5b3f..44b8c671921e5c 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -342,6 +342,12 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		lim.max_hw_discard_sectors = UINT_MAX;
 	else
 		lim.max_hw_discard_sectors = 0;
+	if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH))
+		blk_queue_write_cache(nbd->disk->queue, false, false);
+	else if (nbd->config->flags & NBD_FLAG_SEND_FUA)
+		blk_queue_write_cache(nbd->disk->queue, true, true);
+	else
+		blk_queue_write_cache(nbd->disk->queue, true, false);
 	lim.logical_block_size = blksize;
 	lim.physical_block_size = blksize;
 	error = queue_limits_commit_update(nbd->disk->queue, &lim);
@@ -1286,19 +1292,10 @@ static void nbd_bdev_reset(struct nbd_device *nbd)
 
 static void nbd_parse_flags(struct nbd_device *nbd)
 {
-	struct nbd_config *config = nbd->config;
-	if (config->flags & NBD_FLAG_READ_ONLY)
+	if (nbd->config->flags & NBD_FLAG_READ_ONLY)
 		set_disk_ro(nbd->disk, true);
 	else
 		set_disk_ro(nbd->disk, false);
-	if (config->flags & NBD_FLAG_SEND_FLUSH) {
-		if (config->flags & NBD_FLAG_SEND_FUA)
-			blk_queue_write_cache(nbd->disk->queue, true, true);
-		else
-			blk_queue_write_cache(nbd->disk->queue, true, false);
-	}
-	else
-		blk_queue_write_cache(nbd->disk->queue, false, false);
 }
 
 static void send_disconnects(struct nbd_device *nbd)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:20:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:20:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737619.1144058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvo-0002MX-1S; Tue, 11 Jun 2024 05:20:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737619.1144058; Tue, 11 Jun 2024 05:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGtvn-0002MO-Uz; Tue, 11 Jun 2024 05:20:27 +0000
Received: by outflank-mailman (input) for mailman id 737619;
 Tue, 11 Jun 2024 05:20:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvm-0006gk-36
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:26 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4d914041-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:24 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvQ-00000007R5I-2gcI; Tue, 11 Jun 2024 05:20:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d914041-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=Lo072dU2QpGVh/xLHOL31dXMwicRfcSfRIC85i+l4AM=; b=L+Tee0sieKd6J7cvbELhlhVaks
	LPgpuWpIZFicteZRDgauMdYkEwQiZlmV5XX+JbDB3vC4m0KJjDG5bevXe1oDrF7Jlfr+LBmHklh7Z
	M40vIFh7OGGJJuWxvhSfZOjx5DnhPxo/8ZdarjapcrQwl0gEEF0aMgeR7GX/Vza1VeumjCpiApl3p
	PLelqO46Phs6ttgR0xqN2LNDZRODEhHCY/W5L7HCIbxlf4xUJx7pMbuzHT/BrilnMkwuSzhiCHprb
	3j9JgyMjfS6K9KPGvRSqyYB7cH/3qG08T81bWKh0w5UA9xhwkAnTEMljKkuADRwOj/NUqNEybtjm0
	nk5ysDdw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 11/26] block: freeze the queue in queue_attr_store
Date: Tue, 11 Jun 2024 07:19:11 +0200
Message-ID: <20240611051929.513387-12-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

queue_attr_store updates attributes that are used to control generating
I/O, and changing them with I/O in flight can cause malformed bios.
Freeze the queue in the common code instead of adding the freezing to
almost every attribute store method.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq.c    | 5 +++--
 block/blk-sysfs.c | 9 ++-------
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0d4cd39c3d25da..58b0d6c7cc34d6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4631,13 +4631,15 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 	int ret;
 	unsigned long i;
 
+	if (WARN_ON_ONCE(!q->mq_freeze_depth))
+		return -EINVAL;
+
 	if (!set)
 		return -EINVAL;
 
 	if (q->nr_requests == nr)
 		return 0;
 
-	blk_mq_freeze_queue(q);
 	blk_mq_quiesce_queue(q);
 
 	ret = 0;
@@ -4671,7 +4673,6 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 	}
 
 	blk_mq_unquiesce_queue(q);
-	blk_mq_unfreeze_queue(q);
 
 	return ret;
 }
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f0f9314ab65c61..5c787965b7d09e 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -189,12 +189,9 @@ static ssize_t queue_discard_max_store(struct request_queue *q,
 	if ((max_discard_bytes >> SECTOR_SHIFT) > UINT_MAX)
 		return -EINVAL;
 
-	blk_mq_freeze_queue(q);
 	lim = queue_limits_start_update(q);
 	lim.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT;
 	err = queue_limits_commit_update(q, &lim);
-	blk_mq_unfreeze_queue(q);
-
 	if (err)
 		return err;
 	return ret;
@@ -241,11 +238,9 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
 	if (ret < 0)
 		return ret;
 
-	blk_mq_freeze_queue(q);
 	lim = queue_limits_start_update(q);
 	lim.max_user_sectors = max_sectors_kb << 1;
 	err = queue_limits_commit_update(q, &lim);
-	blk_mq_unfreeze_queue(q);
 	if (err)
 		return err;
 	return ret;
@@ -585,13 +580,11 @@ static ssize_t queue_wb_lat_store(struct request_queue *q, const char *page,
 	 * ends up either enabling or disabling wbt completely. We can't
 	 * have IO inflight if that happens.
 	 */
-	blk_mq_freeze_queue(q);
 	blk_mq_quiesce_queue(q);
 
 	wbt_set_min_lat(q, val);
 
 	blk_mq_unquiesce_queue(q);
-	blk_mq_unfreeze_queue(q);
 
 	return count;
 }
@@ -722,9 +715,11 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
 	if (!entry->store)
 		return -EIO;
 
+	blk_mq_freeze_queue(q);
 	mutex_lock(&q->sysfs_lock);
 	res = entry->store(q, page, length);
 	mutex_unlock(&q->sysfs_lock);
+	blk_mq_unfreeze_queue(q);
 	return res;
 }
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737639.1144092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2e-0004i6-OM; Tue, 11 Jun 2024 05:27:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737639.1144092; Tue, 11 Jun 2024 05:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2e-0004h7-JH; Tue, 11 Jun 2024 05:27:32 +0000
Received: by outflank-mailman (input) for mailman id 737639;
 Tue, 11 Jun 2024 05:27:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtw7-0006gk-5n
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:47 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a143c15-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:45 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvl-00000007RO6-3vhP; Tue, 11 Jun 2024 05:20:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a143c15-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=DgqWvoasERFRGuaCM+34rIUlXtPEepp99/7axWusXVI=; b=hy3Il6hUt6KqrOBQKKM8CUO3v+
	Mpo2qrQ6tRt5yXvoXLX8c/7qQJhPWZlEEG4tzuq8l2ohCB9gTJ/YcHMFRigpuEsWcFWwfDO6ZJ/cf
	o212GgxfQY+Ob2IFb8hv+EeZk58OyRrCiwaOiS0wyHKzLe74X4AWtScXMSTEJy5aGx4XFiCogybQV
	GFSIG6PVwRQf+1ujQ7y4zlHamOxPmlsMqbrOjGpgW6cQZ52UcXYU75vAybSksE5qfQbMHMcAdIBUb
	cH9YlxIMwbPYf5TcJdJZ3DC4jhlrTTAYwo5X/aRhV5iPfmcdGbCzukB6CsVrVyBGTV3ln8cnYw9b/
	TJsyHOTQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 18/26] block: move the synchronous flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:18 +0200
Message-ID: <20240611051929.513387-19-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the synchronous flag into the queue_limits feature field so that it
can be set atomically and all I/O is frozen when changing the flag.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c        | 1 -
 drivers/block/brd.c           | 2 +-
 drivers/block/zram/zram_drv.c | 4 ++--
 drivers/nvdimm/btt.c          | 3 +--
 drivers/nvdimm/pmem.c         | 4 ++--
 include/linux/blkdev.h        | 7 ++++---
 6 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index eb73f1d348e5a9..957774e40b1d0c 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -85,7 +85,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SAME_COMP),
 	QUEUE_FLAG_NAME(FAIL_IO),
 	QUEUE_FLAG_NAME(NOXMERGES),
-	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
 	QUEUE_FLAG_NAME(POLL),
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index b25dc463b5e3a6..d77deb571dbd06 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -335,6 +335,7 @@ static int brd_alloc(int i)
 		.max_hw_discard_sectors	= UINT_MAX,
 		.max_discard_segments	= 1,
 		.discard_granularity	= PAGE_SIZE,
+		.features		= BLK_FEAT_SYNCHRONOUS,
 	};
 
 	list_for_each_entry(brd, &brd_devices, brd_list)
@@ -366,7 +367,6 @@ static int brd_alloc(int i)
 	strscpy(disk->disk_name, buf, DISK_NAME_LEN);
 	set_capacity(disk, rd_size * 2);
 	
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
 	err = add_disk(disk);
 	if (err)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index f8f1b5b54795ac..efcb8d9d274c31 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2208,7 +2208,8 @@ static int zram_add(void)
 #if ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE
 		.max_write_zeroes_sectors	= UINT_MAX,
 #endif
-		.features			= BLK_FEAT_STABLE_WRITES,
+		.features			= BLK_FEAT_STABLE_WRITES |
+						  BLK_FEAT_SYNCHRONOUS,
 	};
 	struct zram *zram;
 	int ret, device_id;
@@ -2246,7 +2247,6 @@ static int zram_add(void)
 
 	/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
 	set_capacity(zram->disk, 0);
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, zram->disk->queue);
 	ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
 	if (ret)
 		goto out_cleanup_disk;
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index e474afa8e9f68d..e79c06d65bb77b 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1501,6 +1501,7 @@ static int btt_blk_init(struct btt *btt)
 		.logical_block_size	= btt->sector_size,
 		.max_hw_sectors		= UINT_MAX,
 		.max_integrity_segments	= 1,
+		.features		= BLK_FEAT_SYNCHRONOUS,
 	};
 	int rc;
 
@@ -1518,8 +1519,6 @@ static int btt_blk_init(struct btt *btt)
 	btt->btt_disk->fops = &btt_fops;
 	btt->btt_disk->private_data = btt;
 
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, btt->btt_disk->queue);
-
 	set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
 	rc = device_add_disk(&btt->nd_btt->dev, btt->btt_disk, NULL);
 	if (rc)
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 501cf226df0187..b821dcf018f6ae 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -455,7 +455,8 @@ static int pmem_attach_disk(struct device *dev,
 		.logical_block_size	= pmem_sector_size(ndns),
 		.physical_block_size	= PAGE_SIZE,
 		.max_hw_sectors		= UINT_MAX,
-		.features		= BLK_FEAT_WRITE_CACHE,
+		.features		= BLK_FEAT_WRITE_CACHE |
+					  BLK_FEAT_SYNCHRONOUS,
 	};
 	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
@@ -546,7 +547,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
 	if (pmem->pfn_flags & PFN_MAP)
 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index db14c61791e022..4d908e29c760da 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -301,6 +301,9 @@ enum {
 
 	/* don't modify data until writeback is done */
 	BLK_FEAT_STABLE_WRITES			= (1u << 5),
+
+	/* always completes in submit context */
+	BLK_FEAT_SYNCHRONOUS			= (1u << 6),
 };
 
 /*
@@ -566,7 +569,6 @@ struct request_queue {
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
-#define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
@@ -1315,8 +1317,7 @@ static inline bool bdev_nonrot(struct block_device *bdev)
 
 static inline bool bdev_synchronous(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_SYNCHRONOUS,
-			&bdev_get_queue(bdev)->queue_flags);
+	return bdev->bd_disk->queue->limits.features & BLK_FEAT_SYNCHRONOUS;
 }
 
 static inline bool bdev_stable_writes(struct block_device *bdev)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737634.1144068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2c-0004BR-QL; Tue, 11 Jun 2024 05:27:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737634.1144068; Tue, 11 Jun 2024 05:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2c-0004BK-Mi; Tue, 11 Jun 2024 05:27:30 +0000
Received: by outflank-mailman (input) for mailman id 737634;
 Tue, 11 Jun 2024 05:27:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtwK-0006Mb-Q7
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:21:00 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 62a76c16-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:21:00 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvs-00000007RUV-3WAY; Tue, 11 Jun 2024 05:20:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62a76c16-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=ZOX70G3pF40EpuFFjfhAf0Owh5pRXHCD6/58K2Ao0Eg=; b=aID6+0uheHH/EPlHuMos9YrI82
	Svu7qFrDLDOo4EHoAbaWREcr9JJG8DrxnncOI+EwSzgsR0Einz2cIZJ8ElJVHMzt35A/SYECI64bF
	4qfLxH0VSZ68DegO45KwBlm+6Gjhg/N+JsquVedMaxS8a67GHzehvoiPt5nXfE567FuPI3t9wfybG
	4IawUezNERiarVEfyYmkFa7JO75Y11QhDeAZmaHpe6b8ZUbC1++FMs6TEFuqjDb4D/YonWbaxEljj
	iIhBWn3lPe4wmwc+GJwhEF+wJ3+s5D2ozh7tp4WNHTI+8QVb2dkyCHPDFcW3TNQltFsiD/o7VNOoT
	JSGMNkWg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 20/26] block: move the dax flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:20 +0200
Message-ID: <20240611051929.513387-21-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the dax flag into the queue_limits feature field so that it
can be set atomically and all I/O is frozen when changing the flag.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c       | 1 -
 drivers/md/dm-table.c        | 4 ++--
 drivers/nvdimm/pmem.c        | 7 ++-----
 drivers/s390/block/dcssblk.c | 2 +-
 include/linux/blkdev.h       | 6 ++++--
 5 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 62b132e9a9ce3b..f4fa820251ce83 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -88,7 +88,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
 	QUEUE_FLAG_NAME(POLL),
-	QUEUE_FLAG_NAME(DAX),
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index eee43d27733f9a..d3a960aee03c6a 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1834,11 +1834,11 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 		limits->features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
 
 	if (dm_table_supports_dax(t, device_not_dax_capable)) {
-		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
+		limits->features |= BLK_FEAT_DAX;
 		if (dm_table_supports_dax(t, device_not_dax_synchronous_capable))
 			set_dax_synchronous(t->md->dax_dev);
 	} else
-		blk_queue_flag_clear(QUEUE_FLAG_DAX, q);
+		limits->features &= ~BLK_FEAT_DAX;
 
 	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
 		dax_write_cache(t->md->dax_dev, true);
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index b821dcf018f6ae..1dd74c969d5a09 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -465,7 +465,6 @@ static int pmem_attach_disk(struct device *dev,
 	struct dax_device *dax_dev;
 	struct nd_pfn_sb *pfn_sb;
 	struct pmem_device *pmem;
-	struct request_queue *q;
 	struct gendisk *disk;
 	void *addr;
 	int rc;
@@ -499,6 +498,8 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	if (fua)
 		lim.features |= BLK_FEAT_FUA;
+	if (is_nd_pfn(dev))
+		lim.features |= BLK_FEAT_DAX;
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
 				dev_name(&ndns->dev))) {
@@ -509,7 +510,6 @@ static int pmem_attach_disk(struct device *dev,
 	disk = blk_alloc_disk(&lim, nid);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
-	q = disk->queue;
 
 	pmem->disk = disk;
 	pmem->pgmap.owner = pmem;
@@ -547,9 +547,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	if (pmem->pfn_flags & PFN_MAP)
-		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
-
 	disk->fops		= &pmem_fops;
 	disk->private_data	= pmem;
 	nvdimm_namespace_disk_name(ndns, disk->disk_name);
diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
index 6d1689a2717e5f..d5a5d11ae0dcdf 100644
--- a/drivers/s390/block/dcssblk.c
+++ b/drivers/s390/block/dcssblk.c
@@ -548,6 +548,7 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char
 {
 	struct queue_limits lim = {
 		.logical_block_size	= 4096,
+		.features		= BLK_FEAT_DAX,
 	};
 	int rc, i, j, num_of_segments;
 	struct dcssblk_dev_info *dev_info;
@@ -643,7 +644,6 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char
 	dev_info->gd->fops = &dcssblk_devops;
 	dev_info->gd->private_data = dev_info;
 	dev_info->gd->flags |= GENHD_FL_NO_PART;
-	blk_queue_flag_set(QUEUE_FLAG_DAX, dev_info->gd->queue);
 
 	seg_byte_size = (dev_info->end - dev_info->start + 1);
 	set_capacity(dev_info->gd, seg_byte_size >> 9); // size in sectors
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 59c2327692589b..c2545580c5b134 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -307,6 +307,9 @@ enum {
 
 	/* supports REQ_NOWAIT */
 	BLK_FEAT_NOWAIT				= (1u << 7),
+
+	/* supports DAX */
+	BLK_FEAT_DAX				= (1u << 8),
 };
 
 /*
@@ -575,7 +578,6 @@ struct request_queue {
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
-#define QUEUE_FLAG_DAX		19	/* device supports DAX */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
@@ -602,7 +604,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_io_stat(q)	((q)->limits.features & BLK_FEAT_IO_STAT)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
-#define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
+#define blk_queue_dax(q)	((q)->limits.features & BLK_FEAT_DAX)
 #define blk_queue_pci_p2pdma(q)	\
 	test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737637.1144088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2e-0004fG-Ey; Tue, 11 Jun 2024 05:27:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737637.1144088; Tue, 11 Jun 2024 05:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2e-0004f2-BP; Tue, 11 Jun 2024 05:27:32 +0000
Received: by outflank-mailman (input) for mailman id 737637;
 Tue, 11 Jun 2024 05:27:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvr-0006Mb-0Y
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:31 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5017e03c-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:20:29 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvW-00000007RAW-1CJA; Tue, 11 Jun 2024 05:20:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 13/26] block: move cache control settings out of queue->flags
Date: Tue, 11 Jun 2024 07:19:13 +0200
Message-ID: <20240611051929.513387-14-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the cache control settings into the queue_limits so that they
can be set atomically and all I/O is frozen when changing the
flags.

Add new features and flags fields to hold the driver-set feature flags and
the internal (usually sysfs-controlled) flags in the block layer.  Note
that we'll eventually remove enough fields from queue_limits to bring it
back to the previous size.

The disable flag is inverted compared to the previous meaning, which
means it now survives a rescan, similar to the max_sectors and
max_discard_sectors user limits.

The FLUSH and FUA flags are now inherited by blk_stack_limits, which
simplifies the code in dm a lot, but also causes a slight behavior
change in that dm-switch and dm-unstripe now advertise a write cache
despite setting num_flush_bios to 0.  The I/O path will handle this
gracefully, but as far as I can tell the lack of num_flush_bios
and thus flush support is a pre-existing data integrity bug in those
targets that really needs fixing, after which a non-zero num_flush_bios
should be required in dm for targets that map to underlying devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 .../block/writeback_cache_control.rst         | 67 +++++++++++--------
 arch/um/drivers/ubd_kern.c                    |  2 +-
 block/blk-core.c                              |  2 +-
 block/blk-flush.c                             |  9 ++-
 block/blk-mq-debugfs.c                        |  2 -
 block/blk-settings.c                          | 29 ++------
 block/blk-sysfs.c                             | 29 +++++---
 block/blk-wbt.c                               |  4 +-
 drivers/block/drbd/drbd_main.c                |  2 +-
 drivers/block/loop.c                          |  9 +--
 drivers/block/nbd.c                           | 14 ++--
 drivers/block/null_blk/main.c                 | 12 ++--
 drivers/block/ps3disk.c                       |  7 +-
 drivers/block/rnbd/rnbd-clt.c                 | 10 +--
 drivers/block/ublk_drv.c                      |  8 ++-
 drivers/block/virtio_blk.c                    | 20 ++++--
 drivers/block/xen-blkfront.c                  |  9 ++-
 drivers/md/bcache/super.c                     |  7 +-
 drivers/md/dm-table.c                         | 39 +++--------
 drivers/md/md.c                               |  8 ++-
 drivers/mmc/core/block.c                      | 42 ++++++------
 drivers/mmc/core/queue.c                      | 12 ++--
 drivers/mmc/core/queue.h                      |  3 +-
 drivers/mtd/mtd_blkdevs.c                     |  5 +-
 drivers/nvdimm/pmem.c                         |  4 +-
 drivers/nvme/host/core.c                      |  7 +-
 drivers/nvme/host/multipath.c                 |  6 --
 drivers/scsi/sd.c                             | 28 +++++---
 include/linux/blkdev.h                        | 38 +++++++++--
 29 files changed, 227 insertions(+), 207 deletions(-)

diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst
index b208488d0aae85..9cfe27f90253c7 100644
--- a/Documentation/block/writeback_cache_control.rst
+++ b/Documentation/block/writeback_cache_control.rst
@@ -46,41 +46,50 @@ worry if the underlying devices need any explicit cache flushing and how
 the Forced Unit Access is implemented.  The REQ_PREFLUSH and REQ_FUA flags
 may both be set on a single bio.
 
+Feature settings for block drivers
+----------------------------------
 
-Implementation details for bio based block drivers
---------------------------------------------------------------
+For devices that do not support volatile write caches there is no driver
+support required, the block layer completes empty REQ_PREFLUSH requests before
+entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
+requests that have a payload.
 
-These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
-directly below the submit_bio interface.  For remapping drivers the REQ_FUA
-bits need to be propagated to underlying devices, and a global flush needs
-to be implemented for bios with the REQ_PREFLUSH bit set.  For real device
-drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
-on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
-data can be completed successfully without doing any work.  Drivers for
-devices with volatile caches need to implement the support for these
-flags themselves without any help from the block layer.
+For devices with volatile write caches the driver needs to tell the block layer
+that it supports flushing caches by setting the
 
+   BLK_FEAT_WRITE_CACHE
 
-Implementation details for request_fn based block drivers
----------------------------------------------------------
+flag in the queue_limits feature field.  For devices that also support the FUA
+bit the block layer needs to be told to pass on the REQ_FUA bit by also setting
+the
 
-For devices that do not support volatile write caches there is no driver
-support required, the block layer completes empty REQ_PREFLUSH requests before
-entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
-requests that have a payload.  For devices with volatile write caches the
-driver needs to tell the block layer that it supports flushing caches by
-doing::
+   BLK_FEAT_FUA
+
+flag in the features field of the queue_limits structure.
+
+Implementation details for bio based block drivers
+--------------------------------------------------
+
+For bio based drivers the REQ_PREFLUSH and REQ_FUA bits are simply passed on
+to the driver if the driver sets the BLK_FEAT_WRITE_CACHE flag, and the driver
+needs to handle them.
+
+*NOTE*: The REQ_FUA bit also gets passed on when the BLK_FEAT_FUA flag is
+_not_ set.  Any bio based driver that sets BLK_FEAT_WRITE_CACHE also needs to
+handle REQ_FUA.
 
-	blk_queue_write_cache(sdkp->disk->queue, true, false);
+For remapping drivers the REQ_FUA bits need to be propagated to underlying
+devices, and a global flush needs to be implemented for bios with the
+REQ_PREFLUSH bit set.
 
-and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn.  Note that
-REQ_PREFLUSH requests with a payload are automatically turned into a sequence
-of an empty REQ_OP_FLUSH request followed by the actual write by the block
-layer.  For devices that also support the FUA bit the block layer needs
-to be told to pass through the REQ_FUA bit using::
+Implementation details for blk-mq drivers
+-----------------------------------------
 
-	blk_queue_write_cache(sdkp->disk->queue, true, true);
+When the BLK_FEAT_WRITE_CACHE flag is set, REQ_OP_WRITE | REQ_PREFLUSH requests
+with a payload are automatically turned into a sequence of a REQ_OP_FLUSH
+request followed by the actual write by the block layer.
 
-and the driver must handle write requests that have the REQ_FUA bit set
-in prep_fn/request_fn.  If the FUA bit is not natively supported the block
-layer turns it into an empty REQ_OP_FLUSH request after the actual write.
+When the BLK_FEAT_FUA flag is set, the REQ_FUA bit is simply passed on for the
+REQ_OP_WRITE request; otherwise a REQ_OP_FLUSH request is sent by the block
+layer after the completion of the write request for bio submissions with the
+REQ_FUA bit set.
diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index cdcb75a68989dd..19e01691ea0ea7 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -835,6 +835,7 @@ static int ubd_add(int n, char **error_out)
 	struct queue_limits lim = {
 		.max_segments		= MAX_SG,
 		.seg_boundary_mask	= PAGE_SIZE - 1,
+		.features		= BLK_FEAT_WRITE_CACHE,
 	};
 	struct gendisk *disk;
 	int err = 0;
@@ -882,7 +883,6 @@ static int ubd_add(int n, char **error_out)
 	}
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
-	blk_queue_write_cache(disk->queue, true, false);
 	disk->major = UBD_MAJOR;
 	disk->first_minor = n << UBD_SHIFT;
 	disk->minors = 1 << UBD_SHIFT;
diff --git a/block/blk-core.c b/block/blk-core.c
index 82c3ae22d76d88..2b45a4df9a1aa1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -782,7 +782,7 @@ void submit_bio_noacct(struct bio *bio)
 		if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_WRITE &&
 				 bio_op(bio) != REQ_OP_ZONE_APPEND))
 			goto end_io;
-		if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
+		if (!bdev_write_cache(bdev)) {
 			bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA);
 			if (!bio_sectors(bio)) {
 				status = BLK_STS_OK;
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 2234f8b3fc05f2..30b9d5033a2b85 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -381,8 +381,8 @@ static void blk_rq_init_flush(struct request *rq)
 bool blk_insert_flush(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	unsigned long fflags = q->queue_flags;	/* may change, cache */
 	struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
+	bool supports_fua = q->limits.features & BLK_FEAT_FUA;
 	unsigned int policy = 0;
 
 	/* FLUSH/FUA request must never be merged */
@@ -394,11 +394,10 @@ bool blk_insert_flush(struct request *rq)
 	/*
 	 * Check which flushes we need to sequence for this operation.
 	 */
-	if (fflags & (1UL << QUEUE_FLAG_WC)) {
+	if (blk_queue_write_cache(q)) {
 		if (rq->cmd_flags & REQ_PREFLUSH)
 			policy |= REQ_FSEQ_PREFLUSH;
-		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
-		    (rq->cmd_flags & REQ_FUA))
+		if ((rq->cmd_flags & REQ_FUA) && !supports_fua)
 			policy |= REQ_FSEQ_POSTFLUSH;
 	}
 
@@ -407,7 +406,7 @@ bool blk_insert_flush(struct request *rq)
 	 * REQ_PREFLUSH and FUA for the driver.
 	 */
 	rq->cmd_flags &= ~REQ_PREFLUSH;
-	if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
+	if (!supports_fua)
 		rq->cmd_flags &= ~REQ_FUA;
 
 	/*
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 770c0c2b72faaa..e8b9db7c30c455 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -93,8 +93,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(INIT_DONE),
 	QUEUE_FLAG_NAME(STABLE_WRITES),
 	QUEUE_FLAG_NAME(POLL),
-	QUEUE_FLAG_NAME(WC),
-	QUEUE_FLAG_NAME(FUA),
 	QUEUE_FLAG_NAME(DAX),
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
diff --git a/block/blk-settings.c b/block/blk-settings.c
index f11c8676eb4c67..536ee202fcdccb 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -261,6 +261,9 @@ static int blk_validate_limits(struct queue_limits *lim)
 		lim->misaligned = 0;
 	}
 
+	if (!(lim->features & BLK_FEAT_WRITE_CACHE))
+		lim->features &= ~BLK_FEAT_FUA;
+
 	err = blk_validate_integrity_limits(lim);
 	if (err)
 		return err;
@@ -454,6 +457,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 {
 	unsigned int top, bottom, alignment, ret = 0;
 
+	t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
+
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_user_sectors = min_not_zero(t->max_user_sectors,
 			b->max_user_sectors);
@@ -711,30 +716,6 @@ void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
 }
 EXPORT_SYMBOL(blk_set_queue_depth);
 
-/**
- * blk_queue_write_cache - configure queue's write cache
- * @q:		the request queue for the device
- * @wc:		write back cache on or off
- * @fua:	device supports FUA writes, if true
- *
- * Tell the block layer about the write cache of @q.
- */
-void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
-{
-	if (wc) {
-		blk_queue_flag_set(QUEUE_FLAG_HW_WC, q);
-		blk_queue_flag_set(QUEUE_FLAG_WC, q);
-	} else {
-		blk_queue_flag_clear(QUEUE_FLAG_HW_WC, q);
-		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
-	}
-	if (fua)
-		blk_queue_flag_set(QUEUE_FLAG_FUA, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_FUA, q);
-}
-EXPORT_SYMBOL_GPL(blk_queue_write_cache);
-
 int bdev_alignment_offset(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 5c787965b7d09e..4f524c1d5e08bd 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -423,32 +423,41 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
 
 static ssize_t queue_wc_show(struct request_queue *q, char *page)
 {
-	if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
-		return sprintf(page, "write back\n");
-
-	return sprintf(page, "write through\n");
+	if (q->limits.features & BLK_FLAGS_WRITE_CACHE_DISABLED)
+		return sprintf(page, "write through\n");
+	return sprintf(page, "write back\n");
 }
 
 static ssize_t queue_wc_store(struct request_queue *q, const char *page,
 			      size_t count)
 {
+	struct queue_limits lim;
+	bool disable;
+	int err;
+
 	if (!strncmp(page, "write back", 10)) {
-		if (!test_bit(QUEUE_FLAG_HW_WC, &q->queue_flags))
-			return -EINVAL;
-		blk_queue_flag_set(QUEUE_FLAG_WC, q);
+		disable = false;
 	} else if (!strncmp(page, "write through", 13) ||
-		 !strncmp(page, "none", 4)) {
-		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
+		   !strncmp(page, "none", 4)) {
+		disable = true;
 	} else {
 		return -EINVAL;
 	}
 
+	lim = queue_limits_start_update(q);
+	if (disable)
+		lim.flags |= BLK_FLAGS_WRITE_CACHE_DISABLED;
+	else
+		lim.flags &= ~BLK_FLAGS_WRITE_CACHE_DISABLED;
+	err = queue_limits_commit_update(q, &lim);
+	if (err)
+		return err;
 	return count;
 }
 
 static ssize_t queue_fua_show(struct request_queue *q, char *page)
 {
-	return sprintf(page, "%u\n", test_bit(QUEUE_FLAG_FUA, &q->queue_flags));
+	return sprintf(page, "%u\n", !!(q->limits.features & BLK_FEAT_FUA));
 }
 
 static ssize_t queue_dax_show(struct request_queue *q, char *page)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 64472134dd26df..1a5e4b049ecd1d 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -206,8 +206,8 @@ static void wbt_rqw_done(struct rq_wb *rwb, struct rq_wait *rqw,
 	 */
 	if (wb_acct & WBT_DISCARD)
 		limit = rwb->wb_background;
-	else if (test_bit(QUEUE_FLAG_WC, &rwb->rqos.disk->queue->queue_flags) &&
-	         !wb_recent_wait(rwb))
+	else if (blk_queue_write_cache(rwb->rqos.disk->queue) &&
+		 !wb_recent_wait(rwb))
 		limit = 0;
 	else
 		limit = rwb->wb_normal;
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 113b441d4d3670..bf42a46781fa21 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2697,6 +2697,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 		 * connect.
 		 */
 		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
 	};
 
 	device = minor_to_device(minor);
@@ -2736,7 +2737,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	disk->private_data = device;
 
 	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
-	blk_queue_write_cache(disk->queue, true, true);
 
 	device->md_io.page = alloc_page(GFP_KERNEL);
 	if (!device->md_io.page)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 2c4a5eb3a6a7f9..0b23fdc4e2edcc 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -985,6 +985,9 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
+	lim.features &= ~BLK_FEAT_WRITE_CACHE;
+	if (file->f_op->fsync && !(lo->lo_flags & LO_FLAGS_READ_ONLY))
+		lim.features |= BLK_FEAT_WRITE_CACHE;
 	if (!backing_bdev || bdev_nonrot(backing_bdev))
 		blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
 	else
@@ -1078,9 +1081,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	lo->old_gfp_mask = mapping_gfp_mask(mapping);
 	mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
 
-	if (!(lo->lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
-		blk_queue_write_cache(lo->lo_queue, true, false);
-
 	error = loop_reconfigure_limits(lo, config->block_size);
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
@@ -1131,9 +1131,6 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	struct file *filp;
 	gfp_t gfp = lo->old_gfp_mask;
 
-	if (test_bit(QUEUE_FLAG_WC, &lo->lo_queue->queue_flags))
-		blk_queue_write_cache(lo->lo_queue, false, false);
-
 	/*
 	 * Freeze the request queue when unbinding on a live file descriptor and
 	 * thus an open device.  When called from ->release we are guaranteed
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 44b8c671921e5c..cb1c86a6a3fb9d 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -342,12 +342,14 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		lim.max_hw_discard_sectors = UINT_MAX;
 	else
 		lim.max_hw_discard_sectors = 0;
-	if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH))
-		blk_queue_write_cache(nbd->disk->queue, false, false);
-	else if (nbd->config->flags & NBD_FLAG_SEND_FUA)
-		blk_queue_write_cache(nbd->disk->queue, true, true);
-	else
-		blk_queue_write_cache(nbd->disk->queue, true, false);
+	if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH)) {
+		lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
+	} else if (nbd->config->flags & NBD_FLAG_SEND_FUA) {
+		lim.features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
+	} else {
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		lim.features &= ~BLK_FEAT_FUA;
+	}
 	lim.logical_block_size = blksize;
 	lim.physical_block_size = blksize;
 	error = queue_limits_commit_update(nbd->disk->queue, &lim);
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 631dca2e4e8442..73e4aecf5bb492 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1928,6 +1928,13 @@ static int null_add_dev(struct nullb_device *dev)
 			goto out_cleanup_tags;
 	}
 
+	if (dev->cache_size > 0) {
+		set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		if (dev->fua)
+			lim.features |= BLK_FEAT_FUA;
+	}
+
 	nullb->disk = blk_mq_alloc_disk(nullb->tag_set, &lim, nullb);
 	if (IS_ERR(nullb->disk)) {
 		rv = PTR_ERR(nullb->disk);
@@ -1940,11 +1947,6 @@ static int null_add_dev(struct nullb_device *dev)
 		nullb_setup_bwtimer(nullb);
 	}
 
-	if (dev->cache_size > 0) {
-		set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
-		blk_queue_write_cache(nullb->q, true, dev->fua);
-	}
-
 	nullb->q->queuedata = nullb;
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
 
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index b810ac0a5c4b97..8b73cf459b5937 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -388,9 +388,8 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 		.max_segments		= -1,
 		.max_segment_size	= dev->bounce_size,
 		.dma_alignment		= dev->blk_size - 1,
+		.features		= BLK_FEAT_WRITE_CACHE,
 	};
-
-	struct request_queue *queue;
 	struct gendisk *gendisk;
 
 	if (dev->blk_size < 512) {
@@ -447,10 +446,6 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 		goto fail_free_tag_set;
 	}
 
-	queue = gendisk->queue;
-
-	blk_queue_write_cache(queue, true, false);
-
 	priv->gendisk = gendisk;
 	gendisk->major = ps3disk_major;
 	gendisk->first_minor = devidx * PS3DISK_MINORS;
diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index b7ffe03c61606d..02c4b173182719 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1389,6 +1389,12 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev,
 			le32_to_cpu(rsp->max_discard_sectors);
 	}
 
+	if (rsp->cache_policy & RNBD_WRITEBACK) {
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		if (rsp->cache_policy & RNBD_FUA)
+			lim.features |= BLK_FEAT_FUA;
+	}
+
 	dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, &lim, dev);
 	if (IS_ERR(dev->gd))
 		return PTR_ERR(dev->gd);
@@ -1397,10 +1403,6 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev,
 
 	blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, dev->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, dev->queue);
-	blk_queue_write_cache(dev->queue,
-			      !!(rsp->cache_policy & RNBD_WRITEBACK),
-			      !!(rsp->cache_policy & RNBD_FUA));
-
 	return rnbd_clt_setup_gen_disk(dev, rsp, idx);
 }
 
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 4e159948c912c2..e45c65c1848d31 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -487,8 +487,6 @@ static void ublk_dev_param_basic_apply(struct ublk_device *ub)
 	struct request_queue *q = ub->ub_disk->queue;
 	const struct ublk_param_basic *p = &ub->params.basic;
 
-	blk_queue_write_cache(q, p->attrs & UBLK_ATTR_VOLATILE_CACHE,
-			p->attrs & UBLK_ATTR_FUA);
 	if (p->attrs & UBLK_ATTR_ROTATIONAL)
 		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
 	else
@@ -2210,6 +2208,12 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 		lim.max_zone_append_sectors = p->max_zone_append_sectors;
 	}
 
+	if (ub->params.basic.attrs & UBLK_ATTR_VOLATILE_CACHE) {
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		if (ub->params.basic.attrs & UBLK_ATTR_FUA)
+			lim.features |= BLK_FEAT_FUA;
+	}
+
 	if (wait_for_completion_interruptible(&ub->completion) != 0)
 		return -EINTR;
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 378b241911ca87..b1a3c293528519 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1100,6 +1100,7 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 	struct gendisk *disk = dev_to_disk(dev);
 	struct virtio_blk *vblk = disk->private_data;
 	struct virtio_device *vdev = vblk->vdev;
+	struct queue_limits lim;
 	int i;
 
 	BUG_ON(!virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_CONFIG_WCE));
@@ -1108,7 +1109,17 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 		return i;
 
 	virtio_cwrite8(vdev, offsetof(struct virtio_blk_config, wce), i);
-	blk_queue_write_cache(disk->queue, virtblk_get_cache_mode(vdev), false);
+
+	lim = queue_limits_start_update(disk->queue);
+	if (virtblk_get_cache_mode(vdev))
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+	else
+		lim.features &= ~BLK_FEAT_WRITE_CACHE;
+	blk_mq_freeze_queue(disk->queue);
+	i = queue_limits_commit_update(disk->queue, &lim);
+	blk_mq_unfreeze_queue(disk->queue);
+	if (i)
+		return i;
 	return count;
 }
 
@@ -1504,6 +1515,9 @@ static int virtblk_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_tags;
 
+	if (virtblk_get_cache_mode(vdev))
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+
 	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, &lim, vblk);
 	if (IS_ERR(vblk->disk)) {
 		err = PTR_ERR(vblk->disk);
@@ -1519,10 +1533,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 	vblk->disk->fops = &virtblk_fops;
 	vblk->index = index;
 
-	/* configure queue flush support */
-	blk_queue_write_cache(vblk->disk->queue, virtblk_get_cache_mode(vdev),
-			false);
-
 	/* If disk is read-only in the host, the guest should obey */
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
 		set_disk_ro(vblk->disk, 1);
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 9794ac2d3299d1..de38e025769b14 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -956,6 +956,12 @@ static void blkif_set_queue_limits(const struct blkfront_info *info,
 			lim->max_secure_erase_sectors = UINT_MAX;
 	}
 
+	if (info->feature_flush) {
+		lim->features |= BLK_FEAT_WRITE_CACHE;
+		if (info->feature_fua)
+			lim->features |= BLK_FEAT_FUA;
+	}
+
 	/* Hard sector size and max sectors impersonate the equiv. hardware. */
 	lim->logical_block_size = info->sector_size;
 	lim->physical_block_size = info->physical_sector_size;
@@ -1150,9 +1156,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 
-	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
-			      info->feature_fua ? true : false);
-
 	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
 		info->gd->disk_name, flush_info(info),
 		"persistent grants:", info->feature_persistent ?
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 4d11fc664cb0b8..cb6595c8b5514e 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -897,7 +897,6 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 		sector_t sectors, struct block_device *cached_bdev,
 		const struct block_device_operations *ops)
 {
-	struct request_queue *q;
 	const size_t max_stripes = min_t(size_t, INT_MAX,
 					 SIZE_MAX / sizeof(atomic_t));
 	struct queue_limits lim = {
@@ -909,6 +908,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 		.io_min			= block_size,
 		.logical_block_size	= block_size,
 		.physical_block_size	= block_size,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
 	};
 	uint64_t n;
 	int idx;
@@ -975,12 +975,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 	d->disk->fops		= ops;
 	d->disk->private_data	= d;
 
-	q = d->disk->queue;
-
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, d->disk->queue);
-
-	blk_queue_write_cache(q, true, true);
-
 	return 0;
 
 out_bioset_exit:
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index fd789eeb62d943..fbe125d55e25b4 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1686,34 +1686,16 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	return validate_hardware_logical_block_alignment(t, limits);
 }
 
-static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
-				sector_t start, sector_t len, void *data)
-{
-	unsigned long flush = (unsigned long) data;
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return (q->queue_flags & flush);
-}
-
-static bool dm_table_supports_flush(struct dm_table *t, unsigned long flush)
+/*
+ * Check if a target requires flush support even if none of the underlying
+ * devices need it (e.g. to persist target-specific metadata).
+ */
+static bool dm_table_supports_flush(struct dm_table *t)
 {
-	/*
-	 * Require at least one underlying device to support flushes.
-	 * t->devices includes internal dm devices such as mirror logs
-	 * so we need to use iterate_devices here, which targets
-	 * supporting flushes must provide.
-	 */
 	for (unsigned int i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
-		if (!ti->num_flush_bios)
-			continue;
-
-		if (ti->flush_supported)
-			return true;
-
-		if (ti->type->iterate_devices &&
-		    ti->type->iterate_devices(ti, device_flush_capable, (void *) flush))
+		if (ti->num_flush_bios && ti->flush_supported)
 			return true;
 	}
 
@@ -1855,7 +1837,6 @@ static int device_requires_stable_pages(struct dm_target *ti,
 int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 			      struct queue_limits *limits)
 {
-	bool wc = false, fua = false;
 	int r;
 
 	if (dm_table_supports_nowait(t))
@@ -1876,12 +1857,8 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (!dm_table_supports_secure_erase(t))
 		limits->max_secure_erase_sectors = 0;
 
-	if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
-		wc = true;
-		if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_FUA)))
-			fua = true;
-	}
-	blk_queue_write_cache(q, wc, fua);
+	if (dm_table_supports_flush(t))
+		limits->features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
 
 	if (dm_table_supports_dax(t, device_not_dax_capable)) {
 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 67ece2cd725f50..2f4c5d1755d857 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5785,7 +5785,10 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int partitioned;
 	int shift;
 	int unit;
-	int error ;
+	int error;
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
+	};
 
 	/*
 	 * Wait for any previous instance of this device to be completely
@@ -5825,7 +5828,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 		 */
 		mddev->hold_active = UNTIL_STOP;
 
-	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
+	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		error = PTR_ERR(disk);
 		goto out_free_mddev;
@@ -5843,7 +5846,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	disk->fops = &md_fops;
 	disk->private_data = mddev;
 
-	blk_queue_write_cache(disk->queue, true, true);
 	disk->events |= DISK_EVENT_MEDIA_CHANGE;
 	mddev->gendisk = disk;
 	error = add_disk(disk);
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 367509b5b6466c..2c9963248fcbd6 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2466,8 +2466,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	struct mmc_blk_data *md;
 	int devidx, ret;
 	char cap_str[10];
-	bool cache_enabled = false;
-	bool fua_enabled = false;
+	unsigned int features = 0;
 
 	devidx = ida_alloc_max(&mmc_blk_ida, max_devices - 1, GFP_KERNEL);
 	if (devidx < 0) {
@@ -2499,7 +2498,24 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	 */
 	md->read_only = mmc_blk_readonly(card);
 
-	md->disk = mmc_init_queue(&md->queue, card);
+	if (mmc_host_cmd23(card->host)) {
+		if ((mmc_card_mmc(card) &&
+		     card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
+		    (mmc_card_sd(card) &&
+		     card->scr.cmds & SD_SCR_CMD23_SUPPORT))
+			md->flags |= MMC_BLK_CMD23;
+	}
+
+	if (md->flags & MMC_BLK_CMD23 &&
+	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
+	     card->ext_csd.rel_sectors)) {
+		md->flags |= MMC_BLK_REL_WR;
+		features |= (BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
+	} else if (mmc_cache_enabled(card->host)) {
+		features |= BLK_FEAT_WRITE_CACHE;
+	}
+
+	md->disk = mmc_init_queue(&md->queue, card, features);
 	if (IS_ERR(md->disk)) {
 		ret = PTR_ERR(md->disk);
 		goto err_kfree;
@@ -2539,26 +2555,6 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 
 	set_capacity(md->disk, size);
 
-	if (mmc_host_cmd23(card->host)) {
-		if ((mmc_card_mmc(card) &&
-		     card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
-		    (mmc_card_sd(card) &&
-		     card->scr.cmds & SD_SCR_CMD23_SUPPORT))
-			md->flags |= MMC_BLK_CMD23;
-	}
-
-	if (md->flags & MMC_BLK_CMD23 &&
-	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
-	     card->ext_csd.rel_sectors)) {
-		md->flags |= MMC_BLK_REL_WR;
-		fua_enabled = true;
-		cache_enabled = true;
-	}
-	if (mmc_cache_enabled(card->host))
-		cache_enabled  = true;
-
-	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
-
 	string_get_size((u64)size, 512, STRING_UNITS_2,
 			cap_str, sizeof(cap_str));
 	pr_info("%s: %s %s %s%s\n",
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 241cdc2b2a2a3b..97ff993d31570c 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -344,10 +344,12 @@ static const struct blk_mq_ops mmc_mq_ops = {
 };
 
 static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
-		struct mmc_card *card)
+		struct mmc_card *card, unsigned int features)
 {
 	struct mmc_host *host = card->host;
-	struct queue_limits lim = { };
+	struct queue_limits lim = {
+		.features		= features,
+	};
 	struct gendisk *disk;
 
 	if (mmc_can_erase(card))
@@ -413,10 +415,12 @@ static inline bool mmc_merge_capable(struct mmc_host *host)
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
  * @card: mmc card to attach this queue
+ * @features: block layer features (BLK_FEAT_*)
  *
  * Initialise a MMC card request queue.
  */
-struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
+struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
+		unsigned int features)
 {
 	struct mmc_host *host = card->host;
 	struct gendisk *disk;
@@ -460,7 +464,7 @@ struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
 		return ERR_PTR(ret);
 		
 
-	disk = mmc_alloc_disk(mq, card);
+	disk = mmc_alloc_disk(mq, card, features);
 	if (IS_ERR(disk))
 		blk_mq_free_tag_set(&mq->tag_set);
 	return disk;
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 9ade3bcbb714e4..1498840a4ea008 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -94,7 +94,8 @@ struct mmc_queue {
 	struct work_struct	complete_work;
 };
 
-struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card);
+struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
+		unsigned int features);
 extern void mmc_cleanup_queue(struct mmc_queue *);
 extern void mmc_queue_suspend(struct mmc_queue *);
 extern void mmc_queue_resume(struct mmc_queue *);
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 3caa0717d46c01..1b9f57f231e8be 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -336,6 +336,8 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	lim.logical_block_size = tr->blksize;
 	if (tr->discard)
 		lim.max_hw_discard_sectors = UINT_MAX;
+	if (tr->flush)
+		lim.features |= BLK_FEAT_WRITE_CACHE;
 
 	/* Create gendisk */
 	gd = blk_mq_alloc_disk(new->tag_set, &lim, new);
@@ -373,9 +375,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	spin_lock_init(&new->queue_lock);
 	INIT_LIST_HEAD(&new->rq_list);
 
-	if (tr->flush)
-		blk_queue_write_cache(new->rq, true, false);
-
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 598fe2e89bda45..aff818469c114c 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -455,6 +455,7 @@ static int pmem_attach_disk(struct device *dev,
 		.logical_block_size	= pmem_sector_size(ndns),
 		.physical_block_size	= PAGE_SIZE,
 		.max_hw_sectors		= UINT_MAX,
+		.features		= BLK_FEAT_WRITE_CACHE,
 	};
 	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
@@ -495,6 +496,8 @@ static int pmem_attach_disk(struct device *dev,
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 		fua = 0;
 	}
+	if (fua)
+		lim.features |= BLK_FEAT_FUA;
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
 				dev_name(&ndns->dev))) {
@@ -543,7 +546,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	blk_queue_write_cache(q, true, fua);
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
 	if (pmem->pfn_flags & PFN_MAP)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5a673fa5cb2612..9fc5e36fe2e55e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2056,7 +2056,6 @@ static int nvme_update_ns_info_generic(struct nvme_ns *ns,
 static int nvme_update_ns_info_block(struct nvme_ns *ns,
 		struct nvme_ns_info *info)
 {
-	bool vwc = ns->ctrl->vwc & NVME_CTRL_VWC_PRESENT;
 	struct queue_limits lim;
 	struct nvme_id_ns_nvm *nvm = NULL;
 	struct nvme_zone_info zi = {};
@@ -2106,6 +2105,11 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 	    ns->head->ids.csi == NVME_CSI_ZNS)
 		nvme_update_zone_info(ns, &lim, &zi);
 
+	if (ns->ctrl->vwc & NVME_CTRL_VWC_PRESENT)
+		lim.features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
+	else
+		lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
+
 	/*
 	 * Register a metadata profile for PI, or the plain non-integrity NVMe
 	 * metadata masquerading as Type 0 if supported, otherwise reject block
@@ -2132,7 +2136,6 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 	if ((id->dlfeat & 0x7) == 0x1 && (id->dlfeat & (1 << 3)))
 		ns->head->features |= NVME_NS_DEAC;
 	set_disk_ro(ns->disk, nvme_ns_is_readonly(ns, info));
-	blk_queue_write_cache(ns->disk->queue, vwc, vwc);
 	set_bit(NVME_NS_READY, &ns->flags);
 	blk_mq_unfreeze_queue(ns->disk->queue);
 
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 12c59db02539e5..3d0e23a0a4ddd8 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -521,7 +521,6 @@ static void nvme_requeue_work(struct work_struct *work)
 int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 {
 	struct queue_limits lim;
-	bool vwc = false;
 
 	mutex_init(&head->lock);
 	bio_list_init(&head->requeue_list);
@@ -562,11 +561,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	if (ctrl->tagset->nr_maps > HCTX_TYPE_POLL &&
 	    ctrl->tagset->map[HCTX_TYPE_POLL].nr_queues)
 		blk_queue_flag_set(QUEUE_FLAG_POLL, head->disk->queue);
-
-	/* we need to propagate up the VMC settings */
-	if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
-		vwc = true;
-	blk_queue_write_cache(head->disk->queue, vwc, vwc);
 	return 0;
 }
 
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 5bfed61c70db8f..8764ea14c9b881 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -120,17 +120,18 @@ static const char *sd_cache_types[] = {
 	"write back, no read (daft)"
 };
 
-static void sd_set_flush_flag(struct scsi_disk *sdkp)
+static void sd_set_flush_flag(struct scsi_disk *sdkp,
+		struct queue_limits *lim)
 {
-	bool wc = false, fua = false;
-
 	if (sdkp->WCE) {
-		wc = true;
+		lim->features |= BLK_FEAT_WRITE_CACHE;
 		if (sdkp->DPOFUA)
-			fua = true;
+			lim->features |= BLK_FEAT_FUA;
+		else
+			lim->features &= ~BLK_FEAT_FUA;
+	} else {
+		lim->features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
 	}
-
-	blk_queue_write_cache(sdkp->disk->queue, wc, fua);
 }
 
 static ssize_t
@@ -168,9 +169,18 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 	wce = (ct & 0x02) && !sdkp->write_prot ? 1 : 0;
 
 	if (sdkp->cache_override) {
+		struct queue_limits lim;
+
 		sdkp->WCE = wce;
 		sdkp->RCD = rcd;
-		sd_set_flush_flag(sdkp);
+
+		lim = queue_limits_start_update(sdkp->disk->queue);
+		sd_set_flush_flag(sdkp, &lim);
+		blk_mq_freeze_queue(sdkp->disk->queue);
+		ret = queue_limits_commit_update(sdkp->disk->queue, &lim);
+		blk_mq_unfreeze_queue(sdkp->disk->queue);
+		if (ret)
+			return ret;
 		return count;
 	}
 
@@ -3659,7 +3669,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * We now have all cache related info, determine how we deal
 	 * with flush requests.
 	 */
-	sd_set_flush_flag(sdkp);
+	sd_set_flush_flag(sdkp, &lim);
 
 	/* Initial block count limit based on CDB TRANSFER LENGTH field size. */
 	dev_max = sdp->use_16_for_rw ? SD_MAX_XFER_BLOCKS : SD_DEF_XFER_BLOCKS;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c792d4d81e5fcc..4e8931a2c76b07 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -282,6 +282,28 @@ static inline bool blk_op_is_passthrough(blk_opf_t op)
 	return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
 }
 
+/* flags set by the driver in queue_limits.features */
+enum {
+	/* supports a volatile write cache */
+	BLK_FEAT_WRITE_CACHE			= (1u << 0),
+
+	/* supports passing on the FUA bit */
+	BLK_FEAT_FUA				= (1u << 1),
+};
+
+/*
+ * Flags automatically inherited when stacking limits.
+ */
+#define BLK_FEAT_INHERIT_MASK \
+	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA)
+
+
+/* internal flags in queue_limits.flags */
+enum {
+	/* do not send FLUSH or FUA commands despite an advertised write cache */
+	BLK_FLAGS_WRITE_CACHE_DISABLED		= (1u << 31),
+};
+
 /*
  * BLK_BOUNCE_NONE:	never bounce (default)
  * BLK_BOUNCE_HIGH:	bounce all highmem pages
@@ -292,6 +314,8 @@ enum blk_bounce {
 };
 
 struct queue_limits {
+	unsigned int		features;
+	unsigned int		flags;
 	enum blk_bounce		bounce;
 	unsigned long		seg_boundary_mask;
 	unsigned long		virt_boundary_mask;
@@ -536,12 +560,9 @@ struct request_queue {
 #define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
-#define QUEUE_FLAG_HW_WC	13	/* Write back caching supported */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
 #define QUEUE_FLAG_STABLE_WRITES 15	/* don't modify blks until WB is done */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
-#define QUEUE_FLAG_WC		17	/* Write back caching */
-#define QUEUE_FLAG_FUA		18	/* device supports FUA writes */
 #define QUEUE_FLAG_DAX		19	/* device supports DAX */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
@@ -951,7 +972,6 @@ void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 		sector_t offset, const char *pfx);
 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
 extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
-extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
 
 struct blk_independent_access_ranges *
 disk_alloc_independent_access_ranges(struct gendisk *disk, int nr_ia_ranges);
@@ -1305,14 +1325,20 @@ static inline bool bdev_stable_writes(struct block_device *bdev)
 	return test_bit(QUEUE_FLAG_STABLE_WRITES, &q->queue_flags);
 }
 
+static inline bool blk_queue_write_cache(struct request_queue *q)
+{
+	return (q->limits.features & BLK_FEAT_WRITE_CACHE) &&
+		!(q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED);
+}
+
 static inline bool bdev_write_cache(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
+	return blk_queue_write_cache(bdev_get_queue(bdev));
 }
 
 static inline bool bdev_fua(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
+	return bdev_get_queue(bdev)->limits.features & BLK_FEAT_FUA;
 }
 
 static inline bool bdev_nowait(struct block_device *bdev)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737635.1144073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2d-0004EC-0v; Tue, 11 Jun 2024 05:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737635.1144073; Tue, 11 Jun 2024 05:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2c-0004D1-Tb; Tue, 11 Jun 2024 05:27:30 +0000
Received: by outflank-mailman (input) for mailman id 737635;
 Tue, 11 Jun 2024 05:27:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtw9-0006gk-8P
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:49 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5b3a3601-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:47 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvo-00000007RQZ-48Dr; Tue, 11 Jun 2024 05:20:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b3a3601-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=ShRCwrCpF80xuwktmYUd9TOxPwGnFgLSsg8hYRw+Pkc=; b=tAuMhP51GB5eMCe3QLYJStwXie
	NVsAmrkVPwA4BThIOeSWqMvoL7mHvdAM0pL1U6O6rQx0QcJyDhiWVJ/IYeSjOsDAXrwAvWSyzwkul
	lvfEjFTejjDUYSoDvBBSZhgGu5TxZXMe63GW9ckfBajgXTgJnVcbqwG4/Rm6LLrXNR1AelbMph8we
	AC5Xk/mK0OhKgbOuCkXuIa6c+/inoEp4DNhjvb4muIDBiiwivz+eM2856aMJk4o/UPlqbRrv5WX4F
	+VJdYqio9G+nzwlRfRsF9ZBhm2fWQ+4OHcaUl70wWT2qCEyw3t0sOqK0uoAxe9CItfywXTMzYzs3n
	S9mM0N3A==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 19/26] block: move the nowait flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:19 +0200
Message-ID: <20240611051929.513387-20-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the nowait flag into the queue_limits feature field so that it
can be set atomically and all I/O is frozen when changing the flag.

Stacking drivers are simplified in that they can now simply set the
flag, and blk_stack_limits will clear it when the feature is not
supported by any of the underlying devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c        |  1 -
 block/blk-mq.c                |  2 +-
 block/blk-settings.c          |  9 +++++++++
 drivers/block/brd.c           |  4 ++--
 drivers/md/dm-table.c         | 16 ++--------------
 drivers/md/md.c               | 18 +-----------------
 drivers/nvme/host/multipath.c |  3 +--
 include/linux/blkdev.h        |  9 +++++----
 8 files changed, 21 insertions(+), 41 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 957774e40b1d0c..62b132e9a9ce3b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -96,7 +96,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(ZONE_RESETALL),
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
-	QUEUE_FLAG_NAME(NOWAIT),
 	QUEUE_FLAG_NAME(SQ_SCHED),
 	QUEUE_FLAG_NAME(SKIP_TAGSET_QUIESCE),
 };
diff --git a/block/blk-mq.c b/block/blk-mq.c
index cf67dc13f7dd4c..43235acc87505f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4118,7 +4118,7 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 
 	if (!lim)
 		lim = &default_lim;
-	lim->features |= BLK_FEAT_IO_STAT;
+	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
 
 	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 536ee202fcdccb..bf4622c19b5c09 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -459,6 +459,15 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 
 	t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
 
+	/*
+	 * BLK_FEAT_NOWAIT needs to be supported both by the stacking driver
+	 * and all underlying devices.  The stacking driver sets the flag
+	 * before stacking the limits, and this will clear the flag if any
+	 * of the underlying devices does not support it.
+	 */
+	if (!(b->features & BLK_FEAT_NOWAIT))
+		t->features &= ~BLK_FEAT_NOWAIT;
+
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_user_sectors = min_not_zero(t->max_user_sectors,
 			b->max_user_sectors);
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index d77deb571dbd06..a300645cd9d4a5 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -335,7 +335,8 @@ static int brd_alloc(int i)
 		.max_hw_discard_sectors	= UINT_MAX,
 		.max_discard_segments	= 1,
 		.discard_granularity	= PAGE_SIZE,
-		.features		= BLK_FEAT_SYNCHRONOUS,
+		.features		= BLK_FEAT_SYNCHRONOUS |
+					  BLK_FEAT_NOWAIT,
 	};
 
 	list_for_each_entry(brd, &brd_devices, brd_list)
@@ -367,7 +368,6 @@ static int brd_alloc(int i)
 	strscpy(disk->disk_name, buf, DISK_NAME_LEN);
 	set_capacity(disk, rd_size * 2);
 	
-	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
 	err = add_disk(disk);
 	if (err)
 		goto out_cleanup_disk;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index f4e1b50ffdcda5..eee43d27733f9a 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -582,7 +582,7 @@ int dm_split_args(int *argc, char ***argvp, char *input)
 static void dm_set_stacking_limits(struct queue_limits *limits)
 {
 	blk_set_stacking_limits(limits);
-	limits->features |= BLK_FEAT_IO_STAT;
+	limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
 }
 
 /*
@@ -1746,12 +1746,6 @@ static bool dm_table_supports_write_zeroes(struct dm_table *t)
 	return true;
 }
 
-static int device_not_nowait_capable(struct dm_target *ti, struct dm_dev *dev,
-				     sector_t start, sector_t len, void *data)
-{
-	return !bdev_nowait(dev->bdev);
-}
-
 static bool dm_table_supports_nowait(struct dm_table *t)
 {
 	for (unsigned int i = 0; i < t->num_targets; i++) {
@@ -1759,10 +1753,6 @@ static bool dm_table_supports_nowait(struct dm_table *t)
 
 		if (!dm_target_supports_nowait(ti->type))
 			return false;
-
-		if (!ti->type->iterate_devices ||
-		    ti->type->iterate_devices(ti, device_not_nowait_capable, NULL))
-			return false;
 	}
 
 	return true;
@@ -1825,9 +1815,7 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	int r;
 
-	if (dm_table_supports_nowait(t))
-		blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, q);
+	if (!dm_table_supports_nowait(t))
+		limits->features &= ~BLK_FEAT_NOWAIT;
 
 	if (!dm_table_supports_discards(t)) {
 		limits->max_hw_discard_sectors = 0;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 8db0db8d5a27ac..f1c7d4f281c521 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5788,7 +5788,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int error;
 	struct queue_limits lim = {
 		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-					  BLK_FEAT_IO_STAT,
+					  BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT,
 	};
 
 	/*
@@ -6150,13 +6150,6 @@ int md_run(struct mddev *mddev)
 		}
 	}
 
-	if (!mddev_is_dm(mddev)) {
-		struct request_queue *q = mddev->gendisk->queue;
-
-		/* Set the NOWAIT flags if all underlying devices support it */
-		if (nowait)
-			blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
-	}
 	if (pers->sync_request) {
 		if (mddev->kobj.sd &&
 		    sysfs_create_group(&mddev->kobj, &md_redundancy_group))
@@ -7115,15 +7108,6 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev)
 	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
 	if (!mddev->thread)
 		md_update_sb(mddev, 1);
-	/*
-	 * If the new disk does not support REQ_NOWAIT,
-	 * disable on the whole MD.
-	 */
-	if (!bdev_nowait(rdev->bdev)) {
-		pr_info("%s: Disabling nowait because %pg does not support nowait\n",
-			mdname(mddev), rdev->bdev);
-		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, mddev->gendisk->queue);
-	}
 	/*
 	 * Kick recovery, maybe this spare has to be added to the
 	 * array immediately.
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 173796f2ddea9f..61a162c9cf4e6c 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -538,7 +538,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 
 	blk_set_stacking_limits(&lim);
 	lim.dma_alignment = 3;
-	lim.features |= BLK_FEAT_IO_STAT;
+	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
 	if (head->ids.csi != NVME_CSI_ZNS)
 		lim.max_zone_append_sectors = 0;
 
@@ -550,7 +550,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	sprintf(head->disk->disk_name, "nvme%dn%d",
 			ctrl->subsys->instance, head->instance);
 
-	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
 	/*
 	 * This assumes all controllers that refer to a namespace either
 	 * support poll queues or not.  That is not a strict guarantee,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4d908e29c760da..59c2327692589b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -304,6 +304,9 @@ enum {
 
 	/* always completes in submit context */
 	BLK_FEAT_SYNCHRONOUS			= (1u << 6),
+
+	/* supports REQ_NOWAIT */
+	BLK_FEAT_NOWAIT				= (1u << 7),
 };
 
 /*
@@ -580,12 +583,10 @@ struct request_queue {
 #define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
-#define QUEUE_FLAG_NOWAIT       29	/* device supports NOWAIT */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
 #define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
 
-#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_SAME_COMP) |	\
-				 (1UL << QUEUE_FLAG_NOWAIT))
+#define QUEUE_FLAG_MQ_DEFAULT	(1UL << QUEUE_FLAG_SAME_COMP)
 
 void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
 void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
@@ -1349,7 +1350,7 @@ static inline bool bdev_fua(struct block_device *bdev)
 
 static inline bool bdev_nowait(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_NOWAIT, &bdev_get_queue(bdev)->queue_flags);
+	return bdev->bd_disk->queue->limits.features & BLK_FEAT_NOWAIT;
 }
 
 static inline bool bdev_is_zoned(struct block_device *bdev)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737642.1144107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2g-0005Ax-7d; Tue, 11 Jun 2024 05:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737642.1144107; Tue, 11 Jun 2024 05:27:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2g-0005Ab-2G; Tue, 11 Jun 2024 05:27:34 +0000
Received: by outflank-mailman (input) for mailman id 737642;
 Tue, 11 Jun 2024 05:27:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtw1-0006gk-89
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:41 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 550115b2-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:37 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvc-00000007RFZ-1Iu6; Tue, 11 Jun 2024 05:20:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 550115b2-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=WGv6hqDdXniN9xi+a0/kjXE1W6HmXn1u2/8L3CscUq8=; b=qK5xGOTWJ4lyFVCf3JTHyru2av
	cFRouC9fjjmHKE2CB+9AVYtjJDoOLnohkj4yc4r541cJW5dAd5UJaXYbHI3E/7qhQuTj7GLDdfaXH
	lTanTUcNIgbnsGSHMI0vN729Mu5vuNzSyWgo9BcoxokSVt3LAleYun3Rg5q3gTBnJpm4qusohVRIK
	IXe0lk81L74nL/zArBR+rLmOhEbwgG4AvjIhjKhAmAhkMV5+TbnCgJoSUV4rK9zuGqclI+IVpJ013
	9zyFQkQF4OHMuGL+301mX8NXlt4IGKa1tCEiwyJWxWGD+cpIdKjkHd5XqgvVAPb7H1VQfekwCSShh
	8skxxz8A==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 15/26] block: move the add_random flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:15 +0200
Message-ID: <20240611051929.513387-16-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the add_random flag into the queue_limits feature field so that it
can be set atomically and all I/O is frozen when changing the flag.

Note that this also removes code from dm to clear the flag based on
the underlying devices, which can't be reached as dm devices will
always start out without the flag set.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c            |  1 -
 block/blk-sysfs.c                 |  6 +++---
 drivers/block/mtip32xx/mtip32xx.c |  1 -
 drivers/md/dm-table.c             | 18 ------------------
 drivers/mmc/core/queue.c          |  2 --
 drivers/mtd/mtd_blkdevs.c         |  3 ---
 drivers/s390/block/scm_blk.c      |  4 ----
 drivers/scsi/scsi_lib.c           |  3 +--
 drivers/scsi/sd.c                 | 11 +++--------
 include/linux/blkdev.h            |  5 +++--
 10 files changed, 10 insertions(+), 44 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 4d0e62ec88f033..6b7edb50bfd3fa 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -86,7 +86,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(FAIL_IO),
 	QUEUE_FLAG_NAME(IO_STAT),
 	QUEUE_FLAG_NAME(NOXMERGES),
-	QUEUE_FLAG_NAME(ADD_RANDOM),
 	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 637ed3bbbfb46f..9174aca3b85526 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -323,7 +323,7 @@ queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
 }
 
 QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
-QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
+QUEUE_SYSFS_FEATURE(add_random, BLK_FEAT_ADD_RANDOM)
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
 #undef QUEUE_SYSFS_BIT_FNS
@@ -561,7 +561,7 @@ static struct queue_sysfs_entry queue_hw_sector_size_entry = {
 
 QUEUE_RW_ENTRY(queue_rotational, "rotational");
 QUEUE_RW_ENTRY(queue_iostats, "iostats");
-QUEUE_RW_ENTRY(queue_random, "add_random");
+QUEUE_RW_ENTRY(queue_add_random, "add_random");
 QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
 
 #ifdef CONFIG_BLK_WBT
@@ -665,7 +665,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_nomerges_entry.attr,
 	&queue_iostats_entry.attr,
 	&queue_stable_writes_entry.attr,
-	&queue_random_entry.attr,
+	&queue_add_random_entry.attr,
 	&queue_poll_entry.attr,
 	&queue_wc_entry.attr,
 	&queue_fua_entry.attr,
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 1dbbf72659d549..c6ef0546ffc9d2 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3485,7 +3485,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 		goto start_service_thread;
 
 	/* Set device limits. */
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, dd->queue);
 	dma_set_max_seg_size(&dd->pdev->dev, 0x400000);
 
 	/* Set the capacity of the device in 512 byte sectors. */
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 3514a57c2df5d2..7654babc2775c1 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1716,14 +1716,6 @@ static int device_dax_write_cache_enabled(struct dm_target *ti,
 	return false;
 }
 
-static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
-			     sector_t start, sector_t len, void *data)
-{
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return !blk_queue_add_random(q);
-}
-
 static int device_not_write_zeroes_capable(struct dm_target *ti, struct dm_dev *dev,
 					   sector_t start, sector_t len, void *data)
 {
@@ -1876,16 +1868,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	else
 		blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
 
-	/*
-	 * Determine whether or not this queue's I/O timings contribute
-	 * to the entropy pool, Only request-based targets use this.
-	 * Clear QUEUE_FLAG_ADD_RANDOM if any underlying device does not
-	 * have it set.
-	 */
-	if (blk_queue_add_random(q) &&
-	    dm_table_any_dev_attr(t, device_is_not_random, NULL))
-		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
-
 	/*
 	 * For a zoned target, setup the zones related queue attributes
 	 * and resources necessary for zone append emulation if necessary.
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index b4f62fa845864c..da00904d4a3c7e 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -387,8 +387,6 @@ static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, mq->queue);
 	blk_queue_rq_timeout(mq->queue, 60 * HZ);
 
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
-
 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
 
 	INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index bf8369ce7ddf1d..47ead84407cdcf 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -374,9 +374,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	/* Create the request queue */
 	spin_lock_init(&new->queue_lock);
 	INIT_LIST_HEAD(&new->rq_list);
-
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
-
 	gd->queue = new->rq;
 
 	if (new->readonly)
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index 2e2309fa9a0b34..3fcfe029db1b3a 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -439,7 +439,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 		.logical_block_size	= 1 << 12,
 	};
 	unsigned int devindex;
-	struct request_queue *rq;
 	int len, ret;
 
 	lim.max_segments = min(scmdev->nr_max_block,
@@ -474,9 +473,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 		ret = PTR_ERR(bdev->gendisk);
 		goto out_tag;
 	}
-	rq = bdev->rq = bdev->gendisk->queue;
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, rq);
-
 	bdev->gendisk->private_data = scmdev;
 	bdev->gendisk->fops = &scm_blk_devops;
 	bdev->gendisk->major = scm_major;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index ec39acc986d6ec..54f771ec8cfb5e 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -631,8 +631,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
 	if (blk_update_request(req, error, bytes))
 		return true;
 
-	// XXX:
-	if (blk_queue_add_random(q))
+	if (q->limits.features & BLK_FEAT_ADD_RANDOM)
 		add_disk_randomness(req->q->disk);
 
 	WARN_ON_ONCE(!blk_rq_is_passthrough(req) &&
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 254b00f896dbb4..6b645bec6c4a56 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3297,7 +3297,6 @@ static void sd_read_block_limits_ext(struct scsi_disk *sdkp)
 static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 		struct queue_limits *lim)
 {
-	struct request_queue *q = sdkp->disk->queue;
 	struct scsi_vpd *vpd;
 	u16 rot;
 
@@ -3313,10 +3312,8 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 	sdkp->zoned = (vpd->data[8] >> 4) & 3;
 	rcu_read_unlock();
 
-	if (rot == 1) {
-		lim->features &= ~BLK_FEAT_ROTATIONAL;
-		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
-	}
+	if (rot == 1)
+		lim->features &= ~(BLK_FEAT_ROTATIONAL | BLK_FEAT_ADD_RANDOM);
 
 	if (!sdkp->first_scan)
 		return;
@@ -3595,7 +3592,6 @@ static int sd_revalidate_disk(struct gendisk *disk)
 {
 	struct scsi_disk *sdkp = scsi_disk(disk);
 	struct scsi_device *sdp = sdkp->device;
-	struct request_queue *q = sdkp->disk->queue;
 	sector_t old_capacity = sdkp->capacity;
 	struct queue_limits lim;
 	unsigned char *buffer;
@@ -3642,8 +3638,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 		 * cause this to be updated correctly and any device which
 		 * doesn't support it should be treated as rotational.
 		 */
-		lim.features |= BLK_FEAT_ROTATIONAL;
-		blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q);
+		lim.features |= (BLK_FEAT_ROTATIONAL | BLK_FEAT_ADD_RANDOM);
 
 		if (scsi_device_supports_vpd(sdp)) {
 			sd_read_block_provisioning(sdkp);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c103f5adc17d84..e6a2382e21c4fe 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -292,6 +292,9 @@ enum {
 
 	/* rotational device (hard drive or floppy) */
 	BLK_FEAT_ROTATIONAL			= (1u << 2),
+
+	/* contributes to the random number pool */
+	BLK_FEAT_ADD_RANDOM			= (1u << 3),
 };
 
 /*
@@ -557,7 +560,6 @@ struct request_queue {
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
 #define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
-#define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
@@ -591,7 +593,6 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
 #define blk_queue_nonrot(q)	(!((q)->limits.features & BLK_FEAT_ROTATIONAL))
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
-#define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737646.1144117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2h-0005Qh-IG; Tue, 11 Jun 2024 05:27:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737646.1144117; Tue, 11 Jun 2024 05:27:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2h-0005Ph-Cm; Tue, 11 Jun 2024 05:27:35 +0000
Received: by outflank-mailman (input) for mailman id 737646;
 Tue, 11 Jun 2024 05:27:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtwR-0006gk-Ef
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:21:07 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 662b5116-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:21:05 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtw1-00000007Rcw-49Vx; Tue, 11 Jun 2024 05:20:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 662b5116-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=qlKmtKo7/zX0VXUQdwTjMLe6YSEDvI0ubwsdjUjoyVQ=; b=GHFc2na0UnihqmaixJGu/TELuO
	DA59e38dKWl4bLCWTE6MHwRpSY1i9TxhbUjbtlM5TMURGvj3fj9j7CbnFF6TCYc7bllp/4knQOHhs
	w1bkjVbhY44VYG+FJXbBfN/JegdBwx8hX2Ya1zQMEAdTkV3v3/EhTGrpD/DmBxnm7ClTS3qkxngmM
	bAtKFw3WPoTaIJGRqFPvvzbrBIc/FLHxu+suaiZz/SZNxnIJ5do9PeMFJo8imd1y0ZrW61bsSqjHa
	MeFMBRb54Ys8MywI117bala20KZHCnjb/NTo1y43kXdEBRtbd3/u6Ty+mxhLz2TNMNQU60CWbCCIp
	jD1pQUFg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 23/26] block: move the zone_resetall flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:23 +0200
Message-ID: <20240611051929.513387-24-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the zone_resetall flag into the queue_limits feature field so that
it can be set atomically and all I/O is frozen when changing the flag.
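The mechanical shape of the change is the same as for the other feature
flags: the helper tests a bit in limits.features instead of a bit in
queue_flags. A minimal userspace sketch of the resulting helper follows;
the flag values are assumptions for illustration, not the kernel's.

```c
#include <assert.h>

/* Assumed bit values; the real ones live in include/linux/blkdev.h. */
#define BLK_FEAT_ZONED         (1u << 10)
#define BLK_FEAT_ZONE_RESETALL (1u << 11)

struct queue_limits { unsigned int features; };
struct request_queue { struct queue_limits limits; };

/* After the patch the helper reads the limits feature field, which is
 * only updated under a frozen queue, rather than an atomic queue_flags
 * bit that could be flipped at any time. */
#define blk_queue_zone_resetall(q) \
	((q)->limits.features & BLK_FEAT_ZONE_RESETALL)
```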

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c         | 1 -
 drivers/block/null_blk/zoned.c | 3 +--
 drivers/block/ublk_drv.c       | 4 +---
 drivers/block/virtio_blk.c     | 3 +--
 drivers/nvme/host/zns.c        | 3 +--
 drivers/scsi/sd_zbc.c          | 5 +----
 include/linux/blkdev.h         | 6 ++++--
 7 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 3a21527913840d..f2fd72f4414ae8 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -91,7 +91,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
 	QUEUE_FLAG_NAME(PCI_P2PDMA),
-	QUEUE_FLAG_NAME(ZONE_RESETALL),
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
 	QUEUE_FLAG_NAME(SQ_SCHED),
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index ca8e739e76b981..b42c00f1313254 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -158,7 +158,7 @@ int null_init_zoned_dev(struct nullb_device *dev,
 		sector += dev->zone_size_sects;
 	}
 
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 	lim->chunk_sectors = dev->zone_size_sects;
 	lim->max_zone_append_sectors = dev->zone_append_max_sectors;
 	lim->max_open_zones = dev->zone_max_open;
@@ -171,7 +171,6 @@ int null_register_zoned_dev(struct nullb *nullb)
 	struct request_queue *q = nullb->q;
 	struct gendisk *disk = nullb->disk;
 
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
 	disk->nr_zones = bdev_nr_zones(disk->part0);
 
 	pr_info("%s: using %s zone append\n",
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 69c16018cbb19a..4fdff13fc23b8a 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -248,8 +248,6 @@ static int ublk_dev_param_zoned_validate(const struct ublk_device *ub)
 
 static void ublk_dev_param_zoned_apply(struct ublk_device *ub)
 {
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, ub->ub_disk->queue);
-
 	ub->ub_disk->nr_zones = ublk_get_nr_zones(ub);
 }
 
@@ -2196,7 +2194,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 		if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
 			return -EOPNOTSUPP;
 
-		lim.features |= BLK_FEAT_ZONED;
+		lim.features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 		lim.max_active_zones = p->max_active_zones;
 		lim.max_open_zones =  p->max_open_zones;
 		lim.max_zone_append_sectors = p->max_zone_append_sectors;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index cea45b296f8bec..6c64a67ab9c901 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -728,7 +728,7 @@ static int virtblk_read_zoned_limits(struct virtio_blk *vblk,
 
 	dev_dbg(&vdev->dev, "probing host-managed zoned device\n");
 
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 
 	virtio_cread(vdev, struct virtio_blk_config,
 		     zoned.max_open_zones, &v);
@@ -1548,7 +1548,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 	 */
 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
 	    (lim.features & BLK_FEAT_ZONED)) {
-		blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, vblk->disk->queue);
 		err = blk_revalidate_disk_zones(vblk->disk);
 		if (err)
 			goto out_cleanup_disk;
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index 06f2417aa50de7..99bb89c2495ae3 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -108,13 +108,12 @@ int nvme_query_zone_info(struct nvme_ns *ns, unsigned lbaf,
 void nvme_update_zone_info(struct nvme_ns *ns, struct queue_limits *lim,
 		struct nvme_zone_info *zi)
 {
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 	lim->max_open_zones = zi->max_open_zones;
 	lim->max_active_zones = zi->max_active_zones;
 	lim->max_zone_append_sectors = ns->ctrl->max_zone_append;
 	lim->chunk_sectors = ns->head->zsze =
 		nvme_lba_to_sect(ns->head, zi->zone_size);
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, ns->queue);
 }
 
 static void *nvme_zns_alloc_report_buffer(struct nvme_ns *ns,
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 26b6e92350cda9..8c79f588f80d8b 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -592,8 +592,6 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
 int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 		u8 buf[SD_BUF_SIZE])
 {
-	struct gendisk *disk = sdkp->disk;
-	struct request_queue *q = disk->queue;
 	unsigned int nr_zones;
 	u32 zone_blocks = 0;
 	int ret;
@@ -603,7 +601,7 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 		return 0;
 	}
 
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 
 	/*
 	 * Per ZBC and ZAC specifications, writes in sequential write required
@@ -632,7 +630,6 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	sdkp->early_zone_info.zone_blocks = zone_blocks;
 
 	/* The drive satisfies the kernel restrictions: set it up */
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
 	if (sdkp->zones_max_open == U32_MAX)
 		lim->max_open_zones = 0;
 	else
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c0e06ff1b24a3d..ffb7a42871b4ed 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -316,6 +316,9 @@ enum {
 
 	/* is a zoned device */
 	BLK_FEAT_ZONED				= (1u << 10),
+
+	/* supports Zone Reset All */
+	BLK_FEAT_ZONE_RESETALL			= (1u << 11),
 };
 
 /*
@@ -586,7 +589,6 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
 #define QUEUE_FLAG_PCI_P2PDMA	25	/* device supports PCI p2p requests */
-#define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
@@ -607,7 +609,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_nonrot(q)	(!((q)->limits.features & BLK_FEAT_ROTATIONAL))
 #define blk_queue_io_stat(q)	((q)->limits.features & BLK_FEAT_IO_STAT)
 #define blk_queue_zone_resetall(q)	\
-	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
+	((q)->limits.features & BLK_FEAT_ZONE_RESETALL)
 #define blk_queue_dax(q)	((q)->limits.features & BLK_FEAT_DAX)
 #define blk_queue_pci_p2pdma(q)	\
 	test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737650.1144128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2j-0005mU-7t; Tue, 11 Jun 2024 05:27:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737650.1144128; Tue, 11 Jun 2024 05:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2j-0005mE-2M; Tue, 11 Jun 2024 05:27:37 +0000
Received: by outflank-mailman (input) for mailman id 737650;
 Tue, 11 Jun 2024 05:27:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtwV-0006Mb-Mj
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:21:11 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 697d6d9a-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:21:11 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtw7-00000007Riw-3tq0; Tue, 11 Jun 2024 05:20:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 697d6d9a-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=29U87g0UHkeYya05OUe00vczhbUtZ3UGETb+Z6Ht4RY=; b=GCSo6txVa5cYxZtp+tehNyFXpj
	7a0UzoIrHi3enEZSKL7o3EsXwmQZvVmrhYIFbumaGvp0YEB55dp84N66m+2E/5oDp+RoJV3Z0Z3b7
	7sx/ZtrwG9PDjeKwz7QEiP8S1yrPfa3GfzN7CBiLaPYwc2xvGnc/gPxGtFfKj6+v9f8sceyyWiH85
	oMRjZuDPcDqHsADK3eb89Cn6DY7/k1rjiwqjTFXlxGLpvfV7YRNk3O0R8k4nZVa9gpk8S4FDTe9us
	laMjQuuhp3M/5BexMgm9l2Var8w7BGjDv7Z5eRI3ogIypg2iVUumTp9GXdvQNtvL0MNNMEpr4RKMh
	lkoGGvyQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 25/26] block: move the skip_tagset_quiesce flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:25 +0200
Message-ID: <20240611051929.513387-26-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the skip_tagset_quiesce flag into the queue_limits feature field so
that it can be set atomically and all I/O is frozen when changing the
flag.
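Because the nvme fabrics connect queue now receives the feature through the
queue_limits passed at allocation time, the bit is in place before the queue
is ever used instead of being set afterwards. A toy sketch of that
allocation-time pattern follows; the names, flag value and signature are
illustrative, not the real blk-mq API.

```c
#include <assert.h>
#include <stddef.h>

/* Assumed bit value, for illustration only; see include/linux/blkdev.h. */
#define BLK_FEAT_SKIP_TAGSET_QUIESCE (1u << 13)

struct queue_limits { unsigned int features; };
struct request_queue { struct queue_limits limits; };

/* Toy stand-in for queue allocation: the caller hands over a complete
 * limits snapshot, so features are applied in one shot at setup time
 * rather than toggled on the live queue. */
static void toy_alloc_queue(struct request_queue *q,
			    const struct queue_limits *lim)
{
	q->limits = lim ? *lim : (struct queue_limits){ 0 };
}
```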

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c   | 1 -
 drivers/nvme/host/core.c | 8 +++++---
 include/linux/blkdev.h   | 6 ++++--
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 8b5a68861c119b..344f9e503bdb32 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -93,7 +93,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
 	QUEUE_FLAG_NAME(SQ_SCHED),
-	QUEUE_FLAG_NAME(SKIP_TAGSET_QUIESCE),
 };
 #undef QUEUE_FLAG_NAME
 
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 31e752e8d632cd..bf410d10b12006 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4489,13 +4489,15 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 		return ret;
 
 	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		ctrl->connect_q = blk_mq_alloc_queue(set, NULL, NULL);
+		struct queue_limits lim = {
+			.features	= BLK_FEAT_SKIP_TAGSET_QUIESCE,
+		};
+
+		ctrl->connect_q = blk_mq_alloc_queue(set, &lim, NULL);
 		if (IS_ERR(ctrl->connect_q)) {
 			ret = PTR_ERR(ctrl->connect_q);
 			goto out_free_tag_set;
 		}
-		blk_queue_flag_set(QUEUE_FLAG_SKIP_TAGSET_QUIESCE,
-				   ctrl->connect_q);
 	}
 
 	ctrl->tagset = set;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cc4f6e64e8e3f5..d7ad25def6e50b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -322,6 +322,9 @@ enum {
 
 	/* supports PCI(e) p2p requests */
 	BLK_FEAT_PCI_P2PDMA			= (1u << 12),
+
+	/* skip this queue in blk_mq_(un)quiesce_tagset */
+	BLK_FEAT_SKIP_TAGSET_QUIESCE		= (1u << 13),
 };
 
 /*
@@ -594,7 +597,6 @@ struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
-#define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
 
 #define QUEUE_FLAG_MQ_DEFAULT	(1UL << QUEUE_FLAG_SAME_COMP)
 
@@ -629,7 +631,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_sq_sched(q)	test_bit(QUEUE_FLAG_SQ_SCHED, &(q)->queue_flags)
 #define blk_queue_skip_tagset_quiesce(q) \
-	test_bit(QUEUE_FLAG_SKIP_TAGSET_QUIESCE, &(q)->queue_flags)
+	((q)->limits.features & BLK_FEAT_SKIP_TAGSET_QUIESCE)
 
 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737653.1144134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2j-0005qK-SR; Tue, 11 Jun 2024 05:27:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737653.1144134; Tue, 11 Jun 2024 05:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2j-0005oo-Ex; Tue, 11 Jun 2024 05:27:37 +0000
Received: by outflank-mailman (input) for mailman id 737653;
 Tue, 11 Jun 2024 05:27:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvt-0006gk-LT
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:33 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 512fc680-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:30 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvZ-00000007RDH-05Ac; Tue, 11 Jun 2024 05:20:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 512fc680-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=1GxRxLsXE5moMg5n1cEb8ABHGfGe1RvptT4thwIV1Fk=; b=e9a1qdzRlIf94UvmitGONJTDL/
	cTWVHCBiikYggKRgbBs5nhBk5UxvjNItjBI/k3PE7IjbV5Px9WBkA4HbLiU+kW6F5LUOJ6a2i6fdL
	z0nNZCd5OMG+TSt0ZHoSmOYKupq3DYNf+1EaQsiITpxg2jmR9J8Q/BzPU6plbv+2K5BrdtjCzk7kw
	YyWJ1OpDJaglFil5OfnBt3QoNYurFJODBWiPFmzmkihtvwXC0hEM4DX+IBxax+ZK7Jrya7Dcjhy9r
	DMWMok5FJky92jmoAEdBl0H82WqRUrV1LofCEkIGFt3XpqgbC0jsN4OP+cvn13CGtcEu9XpoREpna
	w6q2i1CA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 14/26] block: move the nonrot flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:14 +0200
Message-ID: <20240611051929.513387-15-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the nonrot flag into the queue_limits feature field so that it can
be set atomically and all I/O is frozen while the flag is changed.

Take this opportunity to switch the default to non-rotational and
require drivers to opt into the rotational flag, which matches the
polarity of the sysfs interface.

For z2ram, ps3vram, the two memstick drivers, ubiblock and dcssblk the
new rotational flag is not set because they clearly are not rotational
devices, even though this is a behavior change.  Some other drivers
unconditionally set the rotational flag to preserve the existing
behavior, as they can arguably be used on rotational devices even if
that is probably not their main use today (e.g. virtio_blk and drbd).

The flag is automatically inherited in blk_stack_limits, matching the
existing behavior in dm and md.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 arch/m68k/emu/nfblock.c             |  1 +
 arch/um/drivers/ubd_kern.c          |  1 -
 arch/xtensa/platforms/iss/simdisk.c |  5 +++-
 block/blk-mq-debugfs.c              |  1 -
 block/blk-sysfs.c                   | 39 ++++++++++++++++++++++++++---
 drivers/block/amiflop.c             |  5 +++-
 drivers/block/aoe/aoeblk.c          |  1 +
 drivers/block/ataflop.c             |  5 +++-
 drivers/block/brd.c                 |  2 --
 drivers/block/drbd/drbd_main.c      |  3 ++-
 drivers/block/floppy.c              |  3 ++-
 drivers/block/loop.c                |  8 +++---
 drivers/block/mtip32xx/mtip32xx.c   |  1 -
 drivers/block/n64cart.c             |  2 --
 drivers/block/nbd.c                 |  5 ----
 drivers/block/null_blk/main.c       |  1 -
 drivers/block/pktcdvd.c             |  1 +
 drivers/block/ps3disk.c             |  3 ++-
 drivers/block/rbd.c                 |  3 ---
 drivers/block/rnbd/rnbd-clt.c       |  4 ---
 drivers/block/sunvdc.c              |  1 +
 drivers/block/swim.c                |  5 +++-
 drivers/block/swim3.c               |  5 +++-
 drivers/block/ublk_drv.c            |  9 +++----
 drivers/block/virtio_blk.c          |  4 ++-
 drivers/block/xen-blkfront.c        |  1 -
 drivers/block/zram/zram_drv.c       |  2 --
 drivers/cdrom/gdrom.c               |  1 +
 drivers/md/bcache/super.c           |  2 --
 drivers/md/dm-table.c               | 12 ---------
 drivers/md/md.c                     | 13 ----------
 drivers/mmc/core/queue.c            |  1 -
 drivers/mtd/mtd_blkdevs.c           |  1 -
 drivers/nvdimm/btt.c                |  1 -
 drivers/nvdimm/pmem.c               |  1 -
 drivers/nvme/host/core.c            |  1 -
 drivers/nvme/host/multipath.c       |  1 -
 drivers/s390/block/dasd_genhd.c     |  1 -
 drivers/s390/block/scm_blk.c        |  1 -
 drivers/scsi/sd.c                   |  4 +--
 include/linux/blkdev.h              | 10 ++++----
 41 files changed, 83 insertions(+), 88 deletions(-)

diff --git a/arch/m68k/emu/nfblock.c b/arch/m68k/emu/nfblock.c
index 642fb80c5c4e31..8eea7ef9115146 100644
--- a/arch/m68k/emu/nfblock.c
+++ b/arch/m68k/emu/nfblock.c
@@ -98,6 +98,7 @@ static int __init nfhd_init_one(int id, u32 blocks, u32 bsize)
 {
 	struct queue_limits lim = {
 		.logical_block_size	= bsize,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	struct nfhd_device *dev;
 	int dev_id = id - NFHD_DEV_OFFSET;
diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 19e01691ea0ea7..9f1e76ddda5a26 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -882,7 +882,6 @@ static int ubd_add(int n, char **error_out)
 		goto out_cleanup_tags;
 	}
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
 	disk->major = UBD_MAJOR;
 	disk->first_minor = n << UBD_SHIFT;
 	disk->minors = 1 << UBD_SHIFT;
diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c
index defc67909a9c74..d6d2b533a5744d 100644
--- a/arch/xtensa/platforms/iss/simdisk.c
+++ b/arch/xtensa/platforms/iss/simdisk.c
@@ -263,6 +263,9 @@ static const struct proc_ops simdisk_proc_ops = {
 static int __init simdisk_setup(struct simdisk *dev, int which,
 		struct proc_dir_entry *procdir)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	char tmp[2] = { '0' + which, 0 };
 	int err;
 
@@ -271,7 +274,7 @@ static int __init simdisk_setup(struct simdisk *dev, int which,
 	spin_lock_init(&dev->lock);
 	dev->users = 0;
 
-	dev->gd = blk_alloc_disk(NULL, NUMA_NO_NODE);
+	dev->gd = blk_alloc_disk(&lim, NUMA_NO_NODE);
 	if (IS_ERR(dev->gd)) {
 		err = PTR_ERR(dev->gd);
 		goto out;
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index e8b9db7c30c455..4d0e62ec88f033 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -84,7 +84,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(NOMERGES),
 	QUEUE_FLAG_NAME(SAME_COMP),
 	QUEUE_FLAG_NAME(FAIL_IO),
-	QUEUE_FLAG_NAME(NONROT),
 	QUEUE_FLAG_NAME(IO_STAT),
 	QUEUE_FLAG_NAME(NOXMERGES),
 	QUEUE_FLAG_NAME(ADD_RANDOM),
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 4f524c1d5e08bd..637ed3bbbfb46f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -263,6 +263,39 @@ static ssize_t queue_dma_alignment_show(struct request_queue *q, char *page)
 	return queue_var_show(queue_dma_alignment(q), page);
 }
 
+static ssize_t queue_feature_store(struct request_queue *q, const char *page,
+		size_t count, unsigned int feature)
+{
+	struct queue_limits lim;
+	unsigned long val;
+	ssize_t ret;
+
+	ret = queue_var_store(&val, page, count);
+	if (ret < 0)
+		return ret;
+
+	lim = queue_limits_start_update(q);
+	if (val)
+		lim.features |= feature;
+	else
+		lim.features &= ~feature;
+	ret = queue_limits_commit_update(q, &lim);
+	if (ret)
+		return ret;
+	return count;
+}
+
+#define QUEUE_SYSFS_FEATURE(_name, _feature)				 \
+static ssize_t queue_##_name##_show(struct request_queue *q, char *page) \
+{									 \
+	return sprintf(page, "%u\n", !!(q->limits.features & _feature)); \
+}									 \
+static ssize_t queue_##_name##_store(struct request_queue *q,		 \
+		const char *page, size_t count)				 \
+{									 \
+	return queue_feature_store(q, page, count, _feature);		 \
+}
+
 #define QUEUE_SYSFS_BIT_FNS(name, flag, neg)				\
 static ssize_t								\
 queue_##name##_show(struct request_queue *q, char *page)		\
@@ -289,7 +322,7 @@ queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
 	return ret;							\
 }
 
-QUEUE_SYSFS_BIT_FNS(nonrot, NONROT, 1);
+QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
 QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
@@ -526,7 +559,7 @@ static struct queue_sysfs_entry queue_hw_sector_size_entry = {
 	.show = queue_logical_block_size_show,
 };
 
-QUEUE_RW_ENTRY(queue_nonrot, "rotational");
+QUEUE_RW_ENTRY(queue_rotational, "rotational");
 QUEUE_RW_ENTRY(queue_iostats, "iostats");
 QUEUE_RW_ENTRY(queue_random, "add_random");
 QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
@@ -624,7 +657,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
 	&queue_zone_write_granularity_entry.attr,
-	&queue_nonrot_entry.attr,
+	&queue_rotational_entry.attr,
 	&queue_zoned_entry.attr,
 	&queue_nr_zones_entry.attr,
 	&queue_max_open_zones_entry.attr,
diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index a25414228e4741..ff45701f7a5e31 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -1776,10 +1776,13 @@ static const struct blk_mq_ops amiflop_mq_ops = {
 
 static int fd_alloc_disk(int drive, int system)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	struct gendisk *disk;
 	int err;
 
-	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, &lim, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index b6dac8cee70fe1..2028795ec61cbb 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -337,6 +337,7 @@ aoeblk_gdalloc(void *vp)
 	struct queue_limits lim = {
 		.max_hw_sectors		= aoe_maxsectors,
 		.io_opt			= SZ_2M,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	ulong flags;
 	int late = 0;
diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index cacc4ba942a814..4ee10a742bdb93 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -1992,9 +1992,12 @@ static const struct blk_mq_ops ataflop_mq_ops = {
 
 static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	struct gendisk *disk;
 
-	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, &lim, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 558d8e67056608..b25dc463b5e3a6 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -366,8 +366,6 @@ static int brd_alloc(int i)
 	strscpy(disk->disk_name, buf, DISK_NAME_LEN);
 	set_capacity(disk, rd_size * 2);
 	
-	/* Tell the block layer that this is not a rotational device */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
 	err = add_disk(disk);
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index bf42a46781fa21..2ef29a47807550 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2697,7 +2697,8 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 		 * connect.
 		 */
 		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
-		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+					  BLK_FEAT_ROTATIONAL,
 	};
 
 	device = minor_to_device(minor);
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 25c9d85667f1a2..6d7f7df97c3a6c 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -4516,7 +4516,8 @@ static bool floppy_available(int drive)
 static int floppy_alloc_disk(unsigned int drive, unsigned int type)
 {
 	struct queue_limits lim = {
-		.max_hw_sectors = 64,
+		.max_hw_sectors		= 64,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	struct gendisk *disk;
 
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 0b23fdc4e2edcc..6b01b30245b74a 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -985,13 +985,11 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
-	lim.features &= ~BLK_FEAT_WRITE_CACHE;
+	lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_ROTATIONAL);
 	if (file->f_op->fsync && !(lo->lo_flags & LO_FLAGS_READ_ONLY))
 		lim.features |= BLK_FEAT_WRITE_CACHE;
-	if (!backing_bdev || bdev_nonrot(backing_bdev))
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, lo->lo_queue);
+	if (backing_bdev && !bdev_nonrot(backing_bdev))
+		lim.features |= BLK_FEAT_ROTATIONAL;
 	loop_config_discard(lo, &lim);
 	return queue_limits_commit_update(lo->lo_queue, &lim);
 }
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 43a187609ef794..1dbbf72659d549 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3485,7 +3485,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 		goto start_service_thread;
 
 	/* Set device limits. */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, dd->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, dd->queue);
 	dma_set_max_seg_size(&dd->pdev->dev, 0x400000);
 
diff --git a/drivers/block/n64cart.c b/drivers/block/n64cart.c
index 27b2187e7a6d55..b9fdeff31cafdf 100644
--- a/drivers/block/n64cart.c
+++ b/drivers/block/n64cart.c
@@ -150,8 +150,6 @@ static int __init n64cart_probe(struct platform_device *pdev)
 	set_capacity(disk, size >> SECTOR_SHIFT);
 	set_disk_ro(disk, 1);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
-
 	err = add_disk(disk);
 	if (err)
 		goto out_cleanup_disk;
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index cb1c86a6a3fb9d..6cddf5baffe02a 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1867,11 +1867,6 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
 		goto out_err_disk;
 	}
 
-	/*
-	 * Tell the block layer that we are not a rotational device
-	 */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
-
 	mutex_init(&nbd->config_lock);
 	refcount_set(&nbd->config_refs, 0);
 	/*
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 73e4aecf5bb492..3c521ec123ea3b 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1948,7 +1948,6 @@ static int null_add_dev(struct nullb_device *dev)
 	}
 
 	nullb->q->queuedata = nullb;
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
 
 	rv = ida_alloc(&nullb_indexes, GFP_KERNEL);
 	if (rv < 0)
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 8a2ce80700109d..7cece5884b9c67 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2622,6 +2622,7 @@ static int pkt_setup_dev(dev_t dev, dev_t* pkt_dev)
 	struct queue_limits lim = {
 		.max_hw_sectors		= PACKET_MAX_SECTORS,
 		.logical_block_size	= CD_FRAMESIZE,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	int idx;
 	int ret = -ENOMEM;
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index 8b73cf459b5937..ff45ed76646957 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -388,7 +388,8 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 		.max_segments		= -1,
 		.max_segment_size	= dev->bounce_size,
 		.dma_alignment		= dev->blk_size - 1,
-		.features		= BLK_FEAT_WRITE_CACHE,
+		.features		= BLK_FEAT_WRITE_CACHE |
+					  BLK_FEAT_ROTATIONAL,
 	};
 	struct gendisk *gendisk;
 
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 22ad704f81d8b9..ec1f1c7d4275cd 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4997,9 +4997,6 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	disk->fops = &rbd_bd_ops;
 	disk->private_data = rbd_dev;
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-	/* QUEUE_FLAG_ADD_RANDOM is off by default for blk-mq */
-
 	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
 
diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 02c4b173182719..4918b0f68b46cd 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1352,10 +1352,6 @@ static int rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev,
 	if (dev->access_mode == RNBD_ACCESS_RO)
 		set_disk_ro(dev->gd, true);
 
-	/*
-	 * Network device does not need rotational
-	 */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, dev->queue);
 	err = add_disk(dev->gd);
 	if (err)
 		put_disk(dev->gd);
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 5286cb8e0824d1..2d38331ee66793 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -791,6 +791,7 @@ static int probe_disk(struct vdc_port *port)
 		.seg_boundary_mask		= PAGE_SIZE - 1,
 		.max_segment_size		= PAGE_SIZE,
 		.max_segments			= port->ring_cookies,
+		.features			= BLK_FEAT_ROTATIONAL,
 	};
 	struct request_queue *q;
 	struct gendisk *g;
diff --git a/drivers/block/swim.c b/drivers/block/swim.c
index 6731678f3a41db..126f151c4f2cf0 100644
--- a/drivers/block/swim.c
+++ b/drivers/block/swim.c
@@ -787,6 +787,9 @@ static void swim_cleanup_floppy_disk(struct floppy_state *fs)
 
 static int swim_floppy_init(struct swim_priv *swd)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	int err;
 	int drive;
 	struct swim __iomem *base = swd->base;
@@ -820,7 +823,7 @@ static int swim_floppy_init(struct swim_priv *swd)
 			goto exit_put_disks;
 
 		swd->unit[drive].disk =
-			blk_mq_alloc_disk(&swd->unit[drive].tag_set, NULL,
+			blk_mq_alloc_disk(&swd->unit[drive].tag_set, &lim,
 					  &swd->unit[drive]);
 		if (IS_ERR(swd->unit[drive].disk)) {
 			blk_mq_free_tag_set(&swd->unit[drive].tag_set);
diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
index a04756ac778ee8..90be1017f7bfcd 100644
--- a/drivers/block/swim3.c
+++ b/drivers/block/swim3.c
@@ -1189,6 +1189,9 @@ static int swim3_add_device(struct macio_dev *mdev, int index)
 static int swim3_attach(struct macio_dev *mdev,
 			const struct of_device_id *match)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	struct floppy_state *fs;
 	struct gendisk *disk;
 	int rc;
@@ -1210,7 +1213,7 @@ static int swim3_attach(struct macio_dev *mdev,
 	if (rc)
 		goto out_unregister;
 
-	disk = blk_mq_alloc_disk(&fs->tag_set, NULL, fs);
+	disk = blk_mq_alloc_disk(&fs->tag_set, &lim, fs);
 	if (IS_ERR(disk)) {
 		rc = PTR_ERR(disk);
 		goto out_free_tag_set;
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index e45c65c1848d31..4fcde099935868 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -484,14 +484,8 @@ static inline unsigned ublk_pos_to_tag(loff_t pos)
 
 static void ublk_dev_param_basic_apply(struct ublk_device *ub)
 {
-	struct request_queue *q = ub->ub_disk->queue;
 	const struct ublk_param_basic *p = &ub->params.basic;
 
-	if (p->attrs & UBLK_ATTR_ROTATIONAL)
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
-	else
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-
 	if (p->attrs & UBLK_ATTR_READ_ONLY)
 		set_disk_ro(ub->ub_disk, true);
 
@@ -2214,6 +2208,9 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 			lim.features |= BLK_FEAT_FUA;
 	}
 
+	if (ub->params.basic.attrs & UBLK_ATTR_ROTATIONAL)
+		lim.features |= BLK_FEAT_ROTATIONAL;
+
 	if (wait_for_completion_interruptible(&ub->completion) != 0)
 		return -EINTR;
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b1a3c293528519..13a2f24f176628 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1451,7 +1451,9 @@ static int virtblk_read_limits(struct virtio_blk *vblk,
 static int virtblk_probe(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk;
-	struct queue_limits lim = { };
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	int err, index;
 	unsigned int queue_depth;
 
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index de38e025769b14..4fe95a2bffe91a 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1133,7 +1133,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 		err = PTR_ERR(gd);
 		goto out_free_tag_set;
 	}
-	blk_queue_flag_set(QUEUE_FLAG_VIRT, gd->queue);
 
 	strcpy(gd->disk_name, DEV_NAME);
 	ptr = encode_disk_name(gd->disk_name + sizeof(DEV_NAME) - 1, offset);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3acd7006ad2ccd..aad840fc7e18e3 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2245,8 +2245,6 @@ static int zram_add(void)
 
 	/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
 	set_capacity(zram->disk, 0);
-	/* zram devices sort of resembles non-rotational disks */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, zram->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, zram->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);
 	ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index eefdd422ad8e9f..71cfe7a85913c4 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -744,6 +744,7 @@ static int probe_gdrom(struct platform_device *devptr)
 		.max_segments			= 1,
 		/* set a large max size to get most from DMA */
 		.max_segment_size		= 0x40000,
+		.features			= BLK_FEAT_ROTATIONAL,
 	};
 	int err;
 
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index cb6595c8b5514e..baa364eedd0051 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -974,8 +974,6 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 	d->disk->minors		= BCACHE_MINORS;
 	d->disk->fops		= ops;
 	d->disk->private_data	= d;
-
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, d->disk->queue);
 	return 0;
 
 out_bioset_exit:
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index fbe125d55e25b4..3514a57c2df5d2 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1716,12 +1716,6 @@ static int device_dax_write_cache_enabled(struct dm_target *ti,
 	return false;
 }
 
-static int device_is_rotational(struct dm_target *ti, struct dm_dev *dev,
-				sector_t start, sector_t len, void *data)
-{
-	return !bdev_nonrot(dev->bdev);
-}
-
 static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
 			     sector_t start, sector_t len, void *data)
 {
@@ -1870,12 +1864,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
 		dax_write_cache(t->md->dax_dev, true);
 
-	/* Ensure that all underlying devices are non-rotational. */
-	if (dm_table_any_dev_attr(t, device_is_rotational, NULL))
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
-	else
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-
 	/*
 	 * Some devices don't use blk_integrity but still want stable pages
 	 * because they do their own checksumming.
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 2f4c5d1755d857..c23423c51fb7c2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -6151,20 +6151,7 @@ int md_run(struct mddev *mddev)
 
 	if (!mddev_is_dm(mddev)) {
 		struct request_queue *q = mddev->gendisk->queue;
-		bool nonrot = true;
 
-		rdev_for_each(rdev, mddev) {
-			if (rdev->raid_disk >= 0 && !bdev_nonrot(rdev->bdev)) {
-				nonrot = false;
-				break;
-			}
-		}
-		if (mddev->degraded)
-			nonrot = false;
-		if (nonrot)
-			blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-		else
-			blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
 		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, q);
 
 		/* Set the NOWAIT flags if all underlying devices support it */
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 97ff993d31570c..b4f62fa845864c 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -387,7 +387,6 @@ static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, mq->queue);
 	blk_queue_rq_timeout(mq->queue, 60 * HZ);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
 
 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 1b9f57f231e8be..bf8369ce7ddf1d 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -375,7 +375,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	spin_lock_init(&new->queue_lock);
 	INIT_LIST_HEAD(&new->rq_list);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
 
 	gd->queue = new->rq;
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index c5f8451b494d6c..e474afa8e9f68d 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1518,7 +1518,6 @@ static int btt_blk_init(struct btt *btt)
 	btt->btt_disk->fops = &btt_fops;
 	btt->btt_disk->private_data = btt;
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, btt->btt_disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, btt->btt_disk->queue);
 
 	set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index aff818469c114c..501cf226df0187 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -546,7 +546,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
 	if (pmem->pfn_flags & PFN_MAP)
 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9fc5e36fe2e55e..0d753fe71f35b0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3744,7 +3744,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 	if (ctrl->opts && ctrl->opts->data_digest)
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
 	if (ctrl->ops->supports_pci_p2pdma &&
 	    ctrl->ops->supports_pci_p2pdma(ctrl))
 		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 3d0e23a0a4ddd8..58c13304e558e0 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -549,7 +549,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	sprintf(head->disk->disk_name, "nvme%dn%d",
 			ctrl->subsys->instance, head->instance);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, head->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_IO_STAT, head->disk->queue);
 	/*
diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
index 4533dd055ca8e3..1aa426b1deddc7 100644
--- a/drivers/s390/block/dasd_genhd.c
+++ b/drivers/s390/block/dasd_genhd.c
@@ -68,7 +68,6 @@ int dasd_gendisk_alloc(struct dasd_block *block)
 		blk_mq_free_tag_set(&block->tag_set);
 		return PTR_ERR(gdp);
 	}
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, gdp->queue);
 
 	/* Initialize gendisk structure. */
 	gdp->major = DASD_MAJOR;
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index 1d456a5a3bfb8e..2e2309fa9a0b34 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -475,7 +475,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 		goto out_tag;
 	}
 	rq = bdev->rq = bdev->gendisk->queue;
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, rq);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, rq);
 
 	bdev->gendisk->private_data = scmdev;
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 8764ea14c9b881..254b00f896dbb4 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3314,7 +3314,7 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 	rcu_read_unlock();
 
 	if (rot == 1) {
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
+		lim->features &= ~BLK_FEAT_ROTATIONAL;
 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
 	}
 
@@ -3642,7 +3642,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 		 * cause this to be updated correctly and any device which
 		 * doesn't support it should be treated as rotational.
 		 */
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
+		lim.features |= BLK_FEAT_ROTATIONAL;
 		blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q);
 
 		if (scsi_device_supports_vpd(sdp)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 4e8931a2c76b07..c103f5adc17d84 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -289,14 +289,16 @@ enum {
 
 	/* supports passing on the FUA bit */
 	BLK_FEAT_FUA				= (1u << 1),
+
+	/* rotational device (hard drive or floppy) */
+	BLK_FEAT_ROTATIONAL			= (1u << 2),
 };
 
 /*
  * Flags automatically inherited when stacking limits.
  */
 #define BLK_FEAT_INHERIT_MASK \
-	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA)
-
+	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -553,8 +555,6 @@ struct request_queue {
 #define QUEUE_FLAG_NOMERGES     3	/* disable merge attempts */
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
-#define QUEUE_FLAG_NONROT	6	/* non-rotational device (SSD) */
-#define QUEUE_FLAG_VIRT		QUEUE_FLAG_NONROT /* paravirt device */
 #define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
 #define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
@@ -589,7 +589,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_nomerges(q)	test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
 #define blk_queue_noxmerges(q)	\
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
-#define blk_queue_nonrot(q)	test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
+#define blk_queue_nonrot(q)	(!((q)->limits.features & BLK_FEAT_ROTATIONAL))
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737654.1144140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2k-0005wZ-Ko; Tue, 11 Jun 2024 05:27:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737654.1144140; Tue, 11 Jun 2024 05:27:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2k-0005uh-1U; Tue, 11 Jun 2024 05:27:38 +0000
Received: by outflank-mailman (input) for mailman id 737654;
 Tue, 11 Jun 2024 05:27:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtwO-0006Mb-Cy
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:21:04 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64da028a-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:21:03 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvy-00000007RZs-3sVF; Tue, 11 Jun 2024 05:20:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64da028a-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=5mPtzUFYWku1ONmVEH8jZ17L+LcdHqvBXUpltlTWtTA=; b=H5hnHf06QTVfs2OqEucyFELRfr
	PEG5Y/mAHSJAe5iiJYIr+4/Z1GEubo4APGRf/ohbtDmDJ1KP0rUDnDJ5yuYW8QiFN/WG0fh/CsKZn
	+iofXA+NbMD9kchsArbhniwvy7h+NRtqygX6fT1m+/xdIC4zfXUSz85rGeU8L+mL1ETLjLUzzh1KM
	hdXMgFDqLq5i0Oj21OU7rhibSZ5Nr3a6CQyRvHzcKbtDKaidYmYY3LEgqEhFtxAHENLSHR3BVM1o5
	+WWotNHL2Qs6XWxhEP/A+ohChDHg/Wz5SZ906+XqqpqRwc24kl5NWWOn0f3FUjSZDhQTaiHxepGo0
	ab83rQ0w==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 22/26] block: move the zoned flag into the feature field
Date: Tue, 11 Jun 2024 07:19:22 +0200
Message-ID: <20240611051929.513387-23-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the boolean zoned field into the features field to reclaim a little
bit of space.
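As a standalone illustration of the change, the bool turns into a bit test
against the features word. The enum values below mirror the BLK_FEAT_*
constants added by this series, but the helpers are simplified stand-ins,
not the kernel's blkdev.h definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* values mirror the BLK_FEAT_* constants in this series */
enum {
	BLK_FEAT_POLL  = (1u << 9),	/* supports I/O polling */
	BLK_FEAT_ZONED = (1u << 10),	/* is a zoned device */
};

/* replaces reads of the old "lim->zoned" bool */
static bool features_zoned(uint32_t features)
{
	return features & BLK_FEAT_ZONED;
}

/* replaces "lim->zoned = true/false" writes, leaving other bits alone */
static uint32_t features_set_zoned(uint32_t features, bool zoned)
{
	if (zoned)
		return features | BLK_FEAT_ZONED;
	return features & ~BLK_FEAT_ZONED;
}
```

Packing the flag this way frees the storage previously used by the bool
member while keeping reads and writes confined to a single word.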

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-settings.c           |  5 ++---
 drivers/block/null_blk/zoned.c |  2 +-
 drivers/block/ublk_drv.c       |  2 +-
 drivers/block/virtio_blk.c     |  5 +++--
 drivers/md/dm-table.c          | 11 ++++++-----
 drivers/md/dm-zone.c           |  2 +-
 drivers/md/dm-zoned-target.c   |  2 +-
 drivers/nvme/host/zns.c        |  2 +-
 drivers/scsi/sd_zbc.c          |  4 ++--
 include/linux/blkdev.h         |  9 ++++++---
 10 files changed, 24 insertions(+), 20 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 026ba68d829856..96e07f24bd9aa1 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -68,7 +68,7 @@ static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
 
 static int blk_validate_zoned_limits(struct queue_limits *lim)
 {
-	if (!lim->zoned) {
+	if (!(lim->features & BLK_FEAT_ZONED)) {
 		if (WARN_ON_ONCE(lim->max_open_zones) ||
 		    WARN_ON_ONCE(lim->max_active_zones) ||
 		    WARN_ON_ONCE(lim->zone_write_granularity) ||
@@ -602,8 +602,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 						   b->max_secure_erase_sectors);
 	t->zone_write_granularity = max(t->zone_write_granularity,
 					b->zone_write_granularity);
-	t->zoned = max(t->zoned, b->zoned);
-	if (!t->zoned) {
+	if (!(t->features & BLK_FEAT_ZONED)) {
 		t->zone_write_granularity = 0;
 		t->max_zone_append_sectors = 0;
 	}
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index f118d304f31080..ca8e739e76b981 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -158,7 +158,7 @@ int null_init_zoned_dev(struct nullb_device *dev,
 		sector += dev->zone_size_sects;
 	}
 
-	lim->zoned = true;
+	lim->features |= BLK_FEAT_ZONED;
 	lim->chunk_sectors = dev->zone_size_sects;
 	lim->max_zone_append_sectors = dev->zone_append_max_sectors;
 	lim->max_open_zones = dev->zone_max_open;
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 4fcde099935868..69c16018cbb19a 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2196,7 +2196,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 		if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
 			return -EOPNOTSUPP;
 
-		lim.zoned = true;
+		lim.features |= BLK_FEAT_ZONED;
 		lim.max_active_zones = p->max_active_zones;
 		lim.max_open_zones =  p->max_open_zones;
 		lim.max_zone_append_sectors = p->max_zone_append_sectors;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 13a2f24f176628..cea45b296f8bec 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -728,7 +728,7 @@ static int virtblk_read_zoned_limits(struct virtio_blk *vblk,
 
 	dev_dbg(&vdev->dev, "probing host-managed zoned device\n");
 
-	lim->zoned = true;
+	lim->features |= BLK_FEAT_ZONED;
 
 	virtio_cread(vdev, struct virtio_blk_config,
 		     zoned.max_open_zones, &v);
@@ -1546,7 +1546,8 @@ static int virtblk_probe(struct virtio_device *vdev)
 	 * All steps that follow use the VQs therefore they need to be
 	 * placed after the virtio_device_ready() call above.
 	 */
-	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && lim.zoned) {
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+	    (lim.features & BLK_FEAT_ZONED)) {
 		blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, vblk->disk->queue);
 		err = blk_revalidate_disk_zones(vblk->disk);
 		if (err)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 653c253b6f7f32..48ccd9a396d8e6 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1605,12 +1605,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
 		ti->type->iterate_devices(ti, dm_set_device_limits,
 					  &ti_limits);
 
-		if (!zoned && ti_limits.zoned) {
+		if (!zoned && (ti_limits.features & BLK_FEAT_ZONED)) {
 			/*
 			 * After stacking all limits, validate all devices
 			 * in table support this zoned model and zone sectors.
 			 */
-			zoned = ti_limits.zoned;
+			zoned = (ti_limits.features & BLK_FEAT_ZONED);
 			zone_sectors = ti_limits.chunk_sectors;
 		}
 
@@ -1658,12 +1658,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	 *   zoned model on host-managed zoned block devices.
 	 * BUT...
 	 */
-	if (limits->zoned) {
+	if (limits->features & BLK_FEAT_ZONED) {
 		/*
 		 * ...IF the above limits stacking determined a zoned model
 		 * validate that all of the table's devices conform to it.
 		 */
-		zoned = limits->zoned;
+		zoned = limits->features & BLK_FEAT_ZONED;
 		zone_sectors = limits->chunk_sectors;
 	}
 	if (validate_hardware_zoned(t, zoned, zone_sectors))
@@ -1834,7 +1834,8 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	 * For a zoned target, setup the zones related queue attributes
 	 * and resources necessary for zone append emulation if necessary.
 	 */
-	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && limits->zoned) {
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+	    (limits->features & BLK_FEAT_ZONED)) {
 		r = dm_set_zones_restrictions(t, q, limits);
 		if (r)
 			return r;
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index 5d66d916730efa..88d313229b43ff 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -263,7 +263,7 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
 	if (nr_conv_zones >= ret) {
 		lim->max_open_zones = 0;
 		lim->max_active_zones = 0;
-		lim->zoned = false;
+		lim->features &= ~BLK_FEAT_ZONED;
 		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
 		disk->nr_zones = 0;
 		return 0;
diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
index 12236e6f46f39c..cd0ee144973f9f 100644
--- a/drivers/md/dm-zoned-target.c
+++ b/drivers/md/dm-zoned-target.c
@@ -1009,7 +1009,7 @@ static void dmz_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	limits->max_sectors = chunk_sectors;
 
 	/* We are exposing a drive-managed zoned block device */
-	limits->zoned = false;
+	limits->features &= ~BLK_FEAT_ZONED;
 }
 
 /*
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index 77aa0f440a6d2a..06f2417aa50de7 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -108,7 +108,7 @@ int nvme_query_zone_info(struct nvme_ns *ns, unsigned lbaf,
 void nvme_update_zone_info(struct nvme_ns *ns, struct queue_limits *lim,
 		struct nvme_zone_info *zi)
 {
-	lim->zoned = 1;
+	lim->features |= BLK_FEAT_ZONED;
 	lim->max_open_zones = zi->max_open_zones;
 	lim->max_active_zones = zi->max_active_zones;
 	lim->max_zone_append_sectors = ns->ctrl->max_zone_append;
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index e9501db0450be3..26b6e92350cda9 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -599,11 +599,11 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	int ret;
 
 	if (!sd_is_zoned(sdkp)) {
-		lim->zoned = false;
+		lim->features &= ~BLK_FEAT_ZONED;
 		return 0;
 	}
 
-	lim->zoned = true;
+	lim->features |= BLK_FEAT_ZONED;
 
 	/*
 	 * Per ZBC and ZAC specifications, writes in sequential write required
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d0db354b12db47..c0e06ff1b24a3d 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -313,6 +313,9 @@ enum {
 
 	/* supports I/O polling */
 	BLK_FEAT_POLL				= (1u << 9),
+
+	/* is a zoned device */
+	BLK_FEAT_ZONED				= (1u << 10),
 };
 
 /*
@@ -320,7 +323,7 @@ enum {
  */
 #define BLK_FEAT_INHERIT_MASK \
 	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL | \
-	 BLK_FEAT_STABLE_WRITES)
+	 BLK_FEAT_STABLE_WRITES | BLK_FEAT_ZONED)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -372,7 +375,6 @@ struct queue_limits {
 	unsigned char		misaligned;
 	unsigned char		discard_misaligned;
 	unsigned char		raid_partial_stripes_expensive;
-	bool			zoned;
 	unsigned int		max_open_zones;
 	unsigned int		max_active_zones;
 
@@ -654,7 +656,8 @@ static inline enum rpm_status queue_rpm_status(struct request_queue *q)
 
 static inline bool blk_queue_is_zoned(struct request_queue *q)
 {
-	return IS_ENABLED(CONFIG_BLK_DEV_ZONED) && q->limits.zoned;
+	return IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+		(q->limits.features & BLK_FEAT_ZONED);
 }
 
 #ifdef CONFIG_BLK_DEV_ZONED
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737658.1144146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2l-00066w-F4; Tue, 11 Jun 2024 05:27:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737658.1144146; Tue, 11 Jun 2024 05:27:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2k-00064l-S0; Tue, 11 Jun 2024 05:27:38 +0000
Received: by outflank-mailman (input) for mailman id 737658;
 Tue, 11 Jun 2024 05:27:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtwY-0006gk-EG
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:21:14 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a7a20aa-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:21:12 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtw4-00000007Rfx-3Hsc; Tue, 11 Jun 2024 05:20:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a7a20aa-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=MPwremY8Q7NX1Ef8RTwP3wepSQ5hhCHGmRWL/QC9GrI=; b=f1hZ2SuHKaMYNYDlKddtsMrEj4
	HhJ0iwXM6G2h13fzU48w2Wp1cg9nk2YZ6gpAuUou9ff1GYduJZ0WfzXf3shKgk6wyYrmmiGL51jtj
	XbNogudD9e403EfmIZRu+cHyfIbgFlIIY5jXRlKtfcYVvelaQBaiB5rariZryMc+5ShP0wImrZGUf
	HwPzDYeZDkEqyQ9JjhzMhbUEJgoB0ykUCUe71NZZ4iM8X9lzH0xXrhhtc3ukr1Zq2/qge7T4ojgtC
	PPW5bNUe9scd9Xqt6YR7LEPh7UJ5/vAECPRKTcBRuA2XxgnZmYKTh8weAU8h6E46qMGktD9WSYAY8
	XOMueOlQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 24/26] block: move the pci_p2pdma flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:24 +0200
Message-ID: <20240611051929.513387-25-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the pci_p2pdma flag into the queue_limits features field so that it
can be set atomically, with all I/O frozen while the flag changes.
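The update discipline this buys can be sketched in isolation: callers
mutate a private snapshot of the limits and publish the whole struct in
one step while the queue is frozen. queue_limits_start_update() and
queue_limits_commit_update() are the real kernel entry points; the
structs and the frozen counter below are simplified stand-ins for
illustration only:

```c
#include <assert.h>
#include <stdint.h>

enum { BLK_FEAT_PCI_P2PDMA = (1u << 12) };	/* supports PCI(e) p2p requests */

struct limits { uint32_t features; };

struct queue {
	int frozen;		/* stand-in for blk_mq_freeze_queue() */
	struct limits limits;
};

/* hand out a private snapshot the caller can modify freely */
static struct limits queue_limits_start_update(struct queue *q)
{
	return q->limits;
}

/* publish the snapshot while the queue is frozen, so in-flight I/O
 * never observes a half-updated set of limits */
static int queue_limits_commit_update(struct queue *q, struct limits *lim)
{
	q->frozen++;
	q->limits = *lim;
	q->frozen--;
	return 0;
}
```

Setting the p2pdma bit in the snapshot before commit is what replaces
the old standalone blk_queue_flag_set() call on a live queue.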

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c   | 1 -
 drivers/nvme/host/core.c | 8 +++-----
 include/linux/blkdev.h   | 7 ++++---
 3 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index f2fd72f4414ae8..8b5a68861c119b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -90,7 +90,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
-	QUEUE_FLAG_NAME(PCI_P2PDMA),
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
 	QUEUE_FLAG_NAME(SQ_SCHED),
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5ecf762d7c8837..31e752e8d632cd 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3735,6 +3735,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 
 	if (ctrl->opts && ctrl->opts->data_digest)
 		lim.features |= BLK_FEAT_STABLE_WRITES;
+	if (ctrl->ops->supports_pci_p2pdma &&
+	    ctrl->ops->supports_pci_p2pdma(ctrl))
+		lim.features |= BLK_FEAT_PCI_P2PDMA;
 
 	disk = blk_mq_alloc_disk(ctrl->tagset, &lim, ns);
 	if (IS_ERR(disk))
@@ -3744,11 +3747,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 
 	ns->disk = disk;
 	ns->queue = disk->queue;
-
-	if (ctrl->ops->supports_pci_p2pdma &&
-	    ctrl->ops->supports_pci_p2pdma(ctrl))
-		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
-
 	ns->ctrl = ctrl;
 	kref_init(&ns->kref);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ffb7a42871b4ed..cc4f6e64e8e3f5 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -319,6 +319,9 @@ enum {
 
 	/* supports Zone Reset All */
 	BLK_FEAT_ZONE_RESETALL			= (1u << 11),
+
+	/* supports PCI(e) p2p requests */
+	BLK_FEAT_PCI_P2PDMA			= (1u << 12),
 };
 
 /*
@@ -588,7 +591,6 @@ struct request_queue {
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
-#define QUEUE_FLAG_PCI_P2PDMA	25	/* device supports PCI p2p requests */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
@@ -611,8 +613,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_zone_resetall(q)	\
 	((q)->limits.features & BLK_FEAT_ZONE_RESETALL)
 #define blk_queue_dax(q)	((q)->limits.features & BLK_FEAT_DAX)
-#define blk_queue_pci_p2pdma(q)	\
-	test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
+#define blk_queue_pci_p2pdma(q)	((q)->limits.features & BLK_FEAT_PCI_P2PDMA)
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 #define blk_queue_rq_alloc_time(q)	\
 	test_bit(QUEUE_FLAG_RQ_ALLOC_TIME, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737666.1144167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2p-00073l-1D; Tue, 11 Jun 2024 05:27:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737666.1144167; Tue, 11 Jun 2024 05:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2o-00071f-LQ; Tue, 11 Jun 2024 05:27:42 +0000
Received: by outflank-mailman (input) for mailman id 737666;
 Tue, 11 Jun 2024 05:27:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvX-0006Mb-3Q
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:11 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 45197cae-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:20:10 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvF-00000007Qwk-2PPi; Tue, 11 Jun 2024 05:19:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45197cae-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=rkSQ+6nZApqoAfrjo6q+Wq9Ws51k8by9JfSQNhUShHI=; b=Ki2bJ+c1QdcY5HPLBtBOGK1wy/
	Q9aJI78KWVLLTM5bn/EHpuvmHBU1C6sHSCaz91uiphE8Z9XZb4wA6Z9/B4venlxYIjbipq5bq5ghm
	BkzL4kj4cd9KDP78OrmjzOjuqsJKThI/tpWzKzlobFrOA7RjkUXMQ7VDaowU7CCJwPP44T5LppJkk
	lpvt9qNjIE/gUVreXqbK4kl5Ybt76uIrx0PbUPDOSxfiuZoXgDuak4EadqaXl00yP4T0+Rszn/XLp
	1aJwQOVF0wGakOipCvBGudj4WytD8/RAkq+/mJDhpHGOfXqKz8BCgAd9vJ4nk27dOpXX9s68LMdDs
	jBqpOuCg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 07/26] loop: fold loop_update_rotational into loop_reconfigure_limits
Date: Tue, 11 Jun 2024 07:19:07 +0200
Message-ID: <20240611051929.513387-8-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

This prepares for moving the rotational flag into the queue_limits and
also fixes the flag for the case where the loop device is backed by a
block device.
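The decision the folded code makes reduces to a single predicate. Here
struct bdev is a hypothetical stand-in for the kernel's struct
block_device, with bdev_nonrot() modeled as a plain field:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct bdev { bool nonrot; };	/* stand-in for struct block_device */

/*
 * A loop device is treated as non-rotational unless its backing file
 * lives on a rotational block device. Not all filesystems (e.g. tmpfs)
 * have a backing bdev, in which case backing_bdev is NULL and the
 * device defaults to non-rotational.
 */
static bool loop_is_nonrot(const struct bdev *backing_bdev)
{
	return backing_bdev == NULL || backing_bdev->nonrot;
}
```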

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/loop.c | 23 ++++-------------------
 1 file changed, 4 insertions(+), 19 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index d7cf6bbbfb1b86..2c4a5eb3a6a7f9 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -916,24 +916,6 @@ static void loop_free_idle_workers_timer(struct timer_list *timer)
 	return loop_free_idle_workers(lo, false);
 }
 
-static void loop_update_rotational(struct loop_device *lo)
-{
-	struct file *file = lo->lo_backing_file;
-	struct inode *file_inode = file->f_mapping->host;
-	struct block_device *file_bdev = file_inode->i_sb->s_bdev;
-	struct request_queue *q = lo->lo_queue;
-	bool nonrot = true;
-
-	/* not all filesystems (e.g. tmpfs) have a sb->s_bdev */
-	if (file_bdev)
-		nonrot = bdev_nonrot(file_bdev);
-
-	if (nonrot)
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
-}
-
 /**
  * loop_set_status_from_info - configure device from loop_info
  * @lo: struct loop_device to configure
@@ -1003,6 +985,10 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
+	if (!backing_bdev || bdev_nonrot(backing_bdev))
+		blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
+	else
+		blk_queue_flag_clear(QUEUE_FLAG_NONROT, lo->lo_queue);
 	loop_config_discard(lo, &lim);
 	return queue_limits_commit_update(lo->lo_queue, &lim);
 }
@@ -1099,7 +1085,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
 
-	loop_update_rotational(lo);
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737672.1144174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2q-0007RT-H9; Tue, 11 Jun 2024 05:27:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737672.1144174; Tue, 11 Jun 2024 05:27:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2q-0007P2-7T; Tue, 11 Jun 2024 05:27:44 +0000
Received: by outflank-mailman (input) for mailman id 737672;
 Tue, 11 Jun 2024 05:27:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtwa-0006gk-ER
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:21:16 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6bb4aa03-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:21:14 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtwA-00000007RkZ-1jS3; Tue, 11 Jun 2024 05:20:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bb4aa03-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=xY3slEHkiIq7NT2sGuMOxA8hJLcOIj4MfNseHu5TMns=; b=4T9NMLhZ85LGaZsJ9WbdPngt7y
	Lg47NyWbOusFuIcnPlTazNjFjWXkT5yAEsdF2Tng0oqqiQUya3UrgF3lv0cr8mioMxIZ95opUklxw
	3M1OWMNS18z5UDDHYnISdGZR4enIuBTfKqZeh9NEHC/wwn7yQffyG8jZyBMEHv7CUIPTauCM/4A4t
	uZ54Izj2QaXtkwLI0uaSn4JJf/3BR911bsG/t7tsxTYATIA0iybKQ85jYBhiCUPleyeGd+jL60ZaI
	wLfr7z+VuHHqSsgF7BQktz3lOGgIphJl2pF+e1sp4qBMFXL07QcuVEXmkU7DDi+7mFGg/G/7yUWm6
	lSBsjOfg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 26/26] block: move the bounce flag into the feature field
Date: Tue, 11 Jun 2024 07:19:26 +0200
Message-ID: <20240611051929.513387-27-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the bounce field into the features field to reclaim a little bit of
space.
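Dropping the explicit "t->bounce = max(t->bounce, b->bounce)" line works
because BLK_FEAT_BOUNCE_HIGH joins BLK_FEAT_INHERIT_MASK, so the generic
feature stacking carries it up. A sketch of that stacking step, with the
mask trimmed to two bits for illustration (the full mask and
blk_stack_limits() live in the kernel):

```c
#include <assert.h>
#include <stdint.h>

enum {
	BLK_FEAT_ZONED       = (1u << 10),	/* inheritable */
	BLK_FEAT_PCI_P2PDMA  = (1u << 12),	/* not inheritable */
	BLK_FEAT_BOUNCE_HIGH = (1u << 14),	/* inheritable after this patch */
};

/* trimmed stand-in for BLK_FEAT_INHERIT_MASK */
#define FEAT_INHERIT_MASK (BLK_FEAT_ZONED | BLK_FEAT_BOUNCE_HIGH)

/* the feature half of limits stacking: the top device keeps its own
 * features and inherits the inheritable ones of the device below it */
static uint32_t stack_features(uint32_t top, uint32_t bottom)
{
	return top | (bottom & FEAT_INHERIT_MASK);
}
```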

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-settings.c    | 1 -
 block/blk.h             | 2 +-
 drivers/scsi/scsi_lib.c | 2 +-
 include/linux/blkdev.h  | 6 ++++--
 4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 96e07f24bd9aa1..d0e9096f93ca8a 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -479,7 +479,6 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 					b->max_write_zeroes_sectors);
 	t->max_zone_append_sectors = min(queue_limits_max_zone_append_sectors(t),
 					 queue_limits_max_zone_append_sectors(b));
-	t->bounce = max(t->bounce, b->bounce);
 
 	t->seg_boundary_mask = min_not_zero(t->seg_boundary_mask,
 					    b->seg_boundary_mask);
diff --git a/block/blk.h b/block/blk.h
index 79e8d5d4fe0caf..fa32f7fad5d7e6 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -394,7 +394,7 @@ struct bio *__blk_queue_bounce(struct bio *bio, struct request_queue *q);
 static inline bool blk_queue_may_bounce(struct request_queue *q)
 {
 	return IS_ENABLED(CONFIG_BOUNCE) &&
-		q->limits.bounce == BLK_BOUNCE_HIGH &&
+		(q->limits.features & BLK_FEAT_BOUNCE_HIGH) &&
 		max_low_pfn >= max_pfn;
 }
 
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 54f771ec8cfb5e..e2f7bfb2b9e450 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1986,7 +1986,7 @@ void scsi_init_limits(struct Scsi_Host *shost, struct queue_limits *lim)
 		shost->dma_alignment, dma_get_cache_alignment() - 1);
 
 	if (shost->no_highmem)
-		lim->bounce = BLK_BOUNCE_HIGH;
+		lim->features |= BLK_FEAT_BOUNCE_HIGH;
 
 	dma_set_seg_boundary(dev, shost->dma_boundary);
 	dma_set_max_seg_size(dev, shost->max_segment_size);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index d7ad25def6e50b..d1d9787e76ce73 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -325,6 +325,9 @@ enum {
 
 	/* skip this queue in blk_mq_(un)quiesce_tagset */
 	BLK_FEAT_SKIP_TAGSET_QUIESCE		= (1u << 13),
+
+	/* bounce all highmem pages */
+	BLK_FEAT_BOUNCE_HIGH			= (1u << 14),
 };
 
 /*
@@ -332,7 +335,7 @@ enum {
  */
 #define BLK_FEAT_INHERIT_MASK \
 	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL | \
-	 BLK_FEAT_STABLE_WRITES | BLK_FEAT_ZONED)
+	 BLK_FEAT_STABLE_WRITES | BLK_FEAT_ZONED | BLK_FEAT_BOUNCE_HIGH)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -352,7 +355,6 @@ enum blk_bounce {
 struct queue_limits {
 	unsigned int		features;
 	unsigned int		flags;
-	enum blk_bounce		bounce;
 	unsigned long		seg_boundary_mask;
 	unsigned long		virt_boundary_mask;
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737688.1144187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2u-0008Qd-U0; Tue, 11 Jun 2024 05:27:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737688.1144187; Tue, 11 Jun 2024 05:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2u-0008Ok-HF; Tue, 11 Jun 2024 05:27:48 +0000
Received: by outflank-mailman (input) for mailman id 737688;
 Tue, 11 Jun 2024 05:27:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvi-0006Mb-8l
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:22 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4c14d9ee-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:20:21 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvO-00000007R2v-03k6; Tue, 11 Jun 2024 05:20:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c14d9ee-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=XX8OKzSF4I8HqKWv9CPGPzdF42TlXrJn92mCb7zGUpU=; b=MgJCZXdCXtq5WcrpKLj6n44BMN
	x7as6F25HhuflNiQsl6jB5W6Jl8Cql52H+wX7EAkBatRuj+eJQEWJ77tUg0C3taOh9WTyyJ5Bqsdi
	RXIGXMyVTRoSsh9a4uAfJCaQK5+FHeeQHkBBZE0YZA9FygNVsS+hGq+FQgsOMtLKMq7cOPugwuWbd
	nHgbDPqb89/cUkFHBBzn5P6ypmdPwRtPSnVwTBJUEpsncbavyujHEC5azlnNETobhWGFD7GOTUPkY
	4mvSB/tNstXuD79kJeevqTycR98vtBAfM0kE9ci/1QpnRCn9+ZGtDS5u57IeabqphYzfqjxqKjOQg
	V0F/xAxw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 10/26] xen-blkfront: don't disable cache flushes when they fail
Date: Tue, 11 Jun 2024 07:19:10 +0200
Message-ID: <20240611051929.513387-11-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

blkfront has always had a robust negotiation protocol for detecting a
write cache.  Stop simply disabling cache flushes when they fail, as
that is a grave error.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/xen-blkfront.c | 29 +++++++++--------------------
 1 file changed, 9 insertions(+), 20 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 9b4ec3e4908cce..9794ac2d3299d1 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -982,18 +982,6 @@ static const char *flush_info(struct blkfront_info *info)
 		return "barrier or flush: disabled;";
 }
 
-static void xlvbd_flush(struct blkfront_info *info)
-{
-	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
-			      info->feature_fua ? true : false);
-	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
-		info->gd->disk_name, flush_info(info),
-		"persistent grants:", info->feature_persistent ?
-		"enabled;" : "disabled;", "indirect descriptors:",
-		info->max_indirect_segments ? "enabled;" : "disabled;",
-		"bounce buffer:", info->bounce ? "enabled" : "disabled;");
-}
-
 static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
 {
 	int major;
@@ -1162,7 +1150,15 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 
-	xlvbd_flush(info);
+	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
+			      info->feature_fua ? true : false);
+
+	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
+		info->gd->disk_name, flush_info(info),
+		"persistent grants:", info->feature_persistent ?
+		"enabled;" : "disabled;", "indirect descriptors:",
+		info->max_indirect_segments ? "enabled;" : "disabled;",
+		"bounce buffer:", info->bounce ? "enabled" : "disabled;");
 
 	if (info->vdisk_info & VDISK_READONLY)
 		set_disk_ro(gd, 1);
@@ -1622,13 +1618,6 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 				       info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 			}
-			if (unlikely(blkif_req(req)->error)) {
-				if (blkif_req(req)->error == BLK_STS_NOTSUPP)
-					blkif_req(req)->error = BLK_STS_OK;
-				info->feature_fua = 0;
-				info->feature_flush = 0;
-				xlvbd_flush(info);
-			}
 			fallthrough;
 		case BLKIF_OP_READ:
 		case BLKIF_OP_WRITE:
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737708.1144198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2z-00014a-4k; Tue, 11 Jun 2024 05:27:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737708.1144198; Tue, 11 Jun 2024 05:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2y-00014L-Uz; Tue, 11 Jun 2024 05:27:52 +0000
Received: by outflank-mailman (input) for mailman id 737708;
 Tue, 11 Jun 2024 05:27:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtvk-0006Mb-HO
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:24 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d6e32e7-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:20:24 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvT-00000007R7j-1fbd; Tue, 11 Jun 2024 05:20:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d6e32e7-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=d3AwxBu+VkFli34awimWaaFVvEWHaqxKTv0UwQ9CRyc=; b=tqeyL4qAQR7OhvjRw/yA9bfsYL
	N7xOcD+7T+hDGiHjgbYtqeGtHZOfnb7lkqTfhINRoXm8Ee/FCoElOSL9s5+c7xym/wAGYkkCCfTnf
	qnZh6UX/gAl/6P74rJ/m8SqAil023LUZcv7Cr265hYsCU/L9dwK5YaQOsXptA+e34hFEY+D4fsWNP
	I/TkliZmU6+cRQG+O7IAEegj9IsAiU1oLp5bK5FUQVjUzc+yLZkVwnddnikR9Tnjh2lQ+tABeLqD6
	BTy061IMVJym4Pxy6W2dcyfJp0bugTmVuat/LwZBLmMMQuWFCsrfBogivxrNR625W3HQo2OxmIK95
	Gt3RlwjA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 12/26] block: remove blk_flush_policy
Date: Tue, 11 Jun 2024 07:19:12 +0200
Message-ID: <20240611051929.513387-13-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Fold blk_flush_policy into the only caller to prepare for pending changes
to it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-flush.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index c17cf8ed8113db..2234f8b3fc05f2 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -100,23 +100,6 @@ blk_get_flush_queue(struct request_queue *q, struct blk_mq_ctx *ctx)
 	return blk_mq_map_queue(q, REQ_OP_FLUSH, ctx)->fq;
 }
 
-static unsigned int blk_flush_policy(unsigned long fflags, struct request *rq)
-{
-	unsigned int policy = 0;
-
-	if (blk_rq_sectors(rq))
-		policy |= REQ_FSEQ_DATA;
-
-	if (fflags & (1UL << QUEUE_FLAG_WC)) {
-		if (rq->cmd_flags & REQ_PREFLUSH)
-			policy |= REQ_FSEQ_PREFLUSH;
-		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
-		    (rq->cmd_flags & REQ_FUA))
-			policy |= REQ_FSEQ_POSTFLUSH;
-	}
-	return policy;
-}
-
 static unsigned int blk_flush_cur_seq(struct request *rq)
 {
 	return 1 << ffz(rq->flush.seq);
@@ -399,12 +382,26 @@ bool blk_insert_flush(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 	unsigned long fflags = q->queue_flags;	/* may change, cache */
-	unsigned int policy = blk_flush_policy(fflags, rq);
 	struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
+	unsigned int policy = 0;
 
 	/* FLUSH/FUA request must never be merged */
 	WARN_ON_ONCE(rq->bio != rq->biotail);
 
+	if (blk_rq_sectors(rq))
+		policy |= REQ_FSEQ_DATA;
+
+	/*
+	 * Check which flushes we need to sequence for this operation.
+	 */
+	if (fflags & (1UL << QUEUE_FLAG_WC)) {
+		if (rq->cmd_flags & REQ_PREFLUSH)
+			policy |= REQ_FSEQ_PREFLUSH;
+		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
+		    (rq->cmd_flags & REQ_FUA))
+			policy |= REQ_FSEQ_POSTFLUSH;
+	}
+
 	/*
 	 * @policy now records what operations need to be done.  Adjust
 	 * REQ_PREFLUSH and FUA for the driver.
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737715.1144203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2z-0001B3-Lw; Tue, 11 Jun 2024 05:27:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737715.1144203; Tue, 11 Jun 2024 05:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu2z-0001AZ-Cn; Tue, 11 Jun 2024 05:27:53 +0000
Received: by outflank-mailman (input) for mailman id 737715;
 Tue, 11 Jun 2024 05:27:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtw2-0006Mb-MB
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:42 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 57a88125-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:20:41 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvi-00000007RLM-3OgN; Tue, 11 Jun 2024 05:20:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57a88125-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=pzsnAOfifZqQSLy6CuvG93O9Rr4qQBbiyDOq0rei51I=; b=t2U7oX6n7Tjv8GHV4fu28FZ43h
	WGvHDUGMmFkttnRXe8Tvh6R56aak0mOLWfXLZpLoc4mIZp7uxolTLs81R2XqFYNAAXL/lP0Pmi1Nx
	WTcPMpMUjOPsfCfJZHgNG3vV0vM+YzdP6Q12mq9Gmds8iEpzvN2bZDC4SYTfWafIxg2BYqJ1EudC9
	lxEXx7y7RzHs6B4o1uLQYKzfPvl0ZUyW3abl2fb+ncCp5t//264TIuekCdrjXAiJaFJE7QGY7PbVk
	PFaHFUIXV4yFE0Tai88QVbbl/FR62DIBZ7JudTJzjKOoP3TypM3X6oSx37NH807FmHs/42o4ifma+
	h9jcm7GQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 17/26] block: move the stable_writes flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:17 +0200
Message-ID: <20240611051929.513387-18-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the stable_writes flag into the queue_limits feature field so that
it can be set atomically and all I/O is frozen when changing the flag.

The flag is now inherited by blk_stack_limits, which greatly simplifies
the code in dm, and fixes md, which previously did not pass on the flag
set on lower devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c         |  1 -
 block/blk-sysfs.c              | 29 +----------------------------
 drivers/block/drbd/drbd_main.c |  5 ++---
 drivers/block/rbd.c            |  9 +++------
 drivers/block/zram/zram_drv.c  |  2 +-
 drivers/md/dm-table.c          | 19 -------------------
 drivers/md/raid5.c             |  6 ++++--
 drivers/mmc/core/queue.c       |  5 +++--
 drivers/nvme/host/core.c       |  9 +++++----
 drivers/nvme/host/multipath.c  |  4 ----
 drivers/scsi/iscsi_tcp.c       |  8 ++++----
 include/linux/blkdev.h         |  9 ++++++---
 12 files changed, 29 insertions(+), 77 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index cbe99444ed1a54..eb73f1d348e5a9 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -88,7 +88,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
-	QUEUE_FLAG_NAME(STABLE_WRITES),
 	QUEUE_FLAG_NAME(POLL),
 	QUEUE_FLAG_NAME(DAX),
 	QUEUE_FLAG_NAME(STATS),
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 6f58530fb3c08e..cde525724831ef 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -296,37 +296,10 @@ static ssize_t queue_##_name##_store(struct request_queue *q,		 \
 	return queue_feature_store(q, page, count, _feature);		 \
 }
 
-#define QUEUE_SYSFS_BIT_FNS(name, flag, neg)				\
-static ssize_t								\
-queue_##name##_show(struct request_queue *q, char *page)		\
-{									\
-	int bit;							\
-	bit = test_bit(QUEUE_FLAG_##flag, &q->queue_flags);		\
-	return queue_var_show(neg ? !bit : bit, page);			\
-}									\
-static ssize_t								\
-queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
-{									\
-	unsigned long val;						\
-	ssize_t ret;							\
-	ret = queue_var_store(&val, page, count);			\
-	if (ret < 0)							\
-		 return ret;						\
-	if (neg)							\
-		val = !val;						\
-									\
-	if (val)							\
-		blk_queue_flag_set(QUEUE_FLAG_##flag, q);		\
-	else								\
-		blk_queue_flag_clear(QUEUE_FLAG_##flag, q);		\
-	return ret;							\
-}
-
 QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
 QUEUE_SYSFS_FEATURE(add_random, BLK_FEAT_ADD_RANDOM)
 QUEUE_SYSFS_FEATURE(iostats, BLK_FEAT_IO_STAT)
-QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
-#undef QUEUE_SYSFS_BIT_FNS
+QUEUE_SYSFS_FEATURE(stable_writes, BLK_FEAT_STABLE_WRITES);
 
 static ssize_t queue_zoned_show(struct request_queue *q, char *page)
 {
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 2ef29a47807550..f92673f05c7abc 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2698,7 +2698,8 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 		 */
 		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
 		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-					  BLK_FEAT_ROTATIONAL,
+					  BLK_FEAT_ROTATIONAL |
+					  BLK_FEAT_STABLE_WRITES,
 	};
 
 	device = minor_to_device(minor);
@@ -2737,8 +2738,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	sprintf(disk->disk_name, "drbd%d", minor);
 	disk->private_data = device;
 
-	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
-
 	device->md_io.page = alloc_page(GFP_KERNEL);
 	if (!device->md_io.page)
 		goto out_no_io_page;
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index ec1f1c7d4275cd..008e850555f41a 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4949,7 +4949,6 @@ static const struct blk_mq_ops rbd_mq_ops = {
 static int rbd_init_disk(struct rbd_device *rbd_dev)
 {
 	struct gendisk *disk;
-	struct request_queue *q;
 	unsigned int objset_bytes =
 	    rbd_dev->layout.object_size * rbd_dev->layout.stripe_count;
 	struct queue_limits lim = {
@@ -4979,12 +4978,14 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 		lim.max_write_zeroes_sectors = objset_bytes >> SECTOR_SHIFT;
 	}
 
+	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
+		lim.features |= BLK_FEAT_STABLE_WRITES;
+
 	disk = blk_mq_alloc_disk(&rbd_dev->tag_set, &lim, rbd_dev);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_tag_set;
 	}
-	q = disk->queue;
 
 	snprintf(disk->disk_name, sizeof(disk->disk_name), RBD_DRV_NAME "%d",
 		 rbd_dev->dev_id);
@@ -4996,10 +4997,6 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 		disk->minors = RBD_MINORS_PER_MAJOR;
 	disk->fops = &rbd_bd_ops;
 	disk->private_data = rbd_dev;
-
-	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
-
 	rbd_dev->disk = disk;
 
 	return 0;
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index aad840fc7e18e3..f8f1b5b54795ac 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2208,6 +2208,7 @@ static int zram_add(void)
 #if ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE
 		.max_write_zeroes_sectors	= UINT_MAX,
 #endif
+		.features			= BLK_FEAT_STABLE_WRITES,
 	};
 	struct zram *zram;
 	int ret, device_id;
@@ -2246,7 +2247,6 @@ static int zram_add(void)
 	/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
 	set_capacity(zram->disk, 0);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, zram->disk->queue);
-	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);
 	ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
 	if (ret)
 		goto out_cleanup_disk;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 3e3b713502f61e..f4e1b50ffdcda5 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1819,13 +1819,6 @@ static bool dm_table_supports_secure_erase(struct dm_table *t)
 	return true;
 }
 
-static int device_requires_stable_pages(struct dm_target *ti,
-					struct dm_dev *dev, sector_t start,
-					sector_t len, void *data)
-{
-	return bdev_stable_writes(dev->bdev);
-}
-
 int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 			      struct queue_limits *limits)
 {
@@ -1862,18 +1855,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
 		dax_write_cache(t->md->dax_dev, true);
 
-	/*
-	 * Some devices don't use blk_integrity but still want stable pages
-	 * because they do their own checksumming.
-	 * If any underlying device requires stable pages, a table must require
-	 * them as well.  Only targets that support iterate_devices are considered:
-	 * don't want error, zero, etc to require stable pages.
-	 */
-	if (dm_table_any_dev_attr(t, device_requires_stable_pages, NULL))
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
-
 	/*
 	 * For a zoned target, setup the zones related queue attributes
 	 * and resources necessary for zone append emulation if necessary.
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 675c68fa6c6403..e875763d69917d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7082,12 +7082,14 @@ raid5_store_skip_copy(struct mddev *mddev, const char *page, size_t len)
 		err = -ENODEV;
 	else if (new != conf->skip_copy) {
 		struct request_queue *q = mddev->gendisk->queue;
+		struct queue_limits lim = queue_limits_start_update(q);
 
 		conf->skip_copy = new;
 		if (new)
-			blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
+			lim.features |= BLK_FEAT_STABLE_WRITES;
 		else
-			blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
+			lim.features &= ~BLK_FEAT_STABLE_WRITES;
+		err = queue_limits_commit_update(q, &lim);
 	}
 	mddev_unlock_and_resume(mddev);
 	return err ?: len;
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index da00904d4a3c7e..d0b3ca8a11f071 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -378,13 +378,14 @@ static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
 		lim.max_segments = host->max_segs;
 	}
 
+	if (mmc_host_is_spi(host) && host->use_spi_crc)
+		lim.features |= BLK_FEAT_STABLE_WRITES;
+
 	disk = blk_mq_alloc_disk(&mq->tag_set, &lim, mq);
 	if (IS_ERR(disk))
 		return disk;
 	mq->queue = disk->queue;
 
-	if (mmc_host_is_spi(host) && host->use_spi_crc)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, mq->queue);
 	blk_queue_rq_timeout(mq->queue, 60 * HZ);
 
 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0d753fe71f35b0..5ecf762d7c8837 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3724,6 +3724,7 @@ static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
 
 static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 {
+	struct queue_limits lim = { };
 	struct nvme_ns *ns;
 	struct gendisk *disk;
 	int node = ctrl->numa_node;
@@ -3732,7 +3733,10 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 	if (!ns)
 		return;
 
-	disk = blk_mq_alloc_disk(ctrl->tagset, NULL, ns);
+	if (ctrl->opts && ctrl->opts->data_digest)
+		lim.features |= BLK_FEAT_STABLE_WRITES;
+
+	disk = blk_mq_alloc_disk(ctrl->tagset, &lim, ns);
 	if (IS_ERR(disk))
 		goto out_free_ns;
 	disk->fops = &nvme_bdev_ops;
@@ -3741,9 +3745,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 	ns->disk = disk;
 	ns->queue = disk->queue;
 
-	if (ctrl->opts && ctrl->opts->data_digest)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue);
-
 	if (ctrl->ops->supports_pci_p2pdma &&
 	    ctrl->ops->supports_pci_p2pdma(ctrl))
 		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index eea727cfa9e67d..173796f2ddea9f 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -868,10 +868,6 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, __le32 anagrpid)
 		nvme_mpath_set_live(ns);
 	}
 
-	if (test_bit(QUEUE_FLAG_STABLE_WRITES, &ns->queue->queue_flags) &&
-	    ns->head->disk)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
-				   ns->head->disk->queue);
 #ifdef CONFIG_BLK_DEV_ZONED
 	if (blk_queue_is_zoned(ns->queue) && ns->head->disk)
 		ns->head->disk->nr_zones = ns->disk->nr_zones;
diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 60688f18fac6f7..c708e105963833 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -1057,15 +1057,15 @@ static umode_t iscsi_sw_tcp_attr_is_visible(int param_type, int param)
 	return 0;
 }
 
-static int iscsi_sw_tcp_slave_configure(struct scsi_device *sdev)
+static int iscsi_sw_tcp_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(sdev->host);
 	struct iscsi_session *session = tcp_sw_host->session;
 	struct iscsi_conn *conn = session->leadconn;
 
 	if (conn->datadgst_en)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
-				   sdev->request_queue);
+		lim->features |= BLK_FEAT_STABLE_WRITES;
 	return 0;
 }
 
@@ -1083,7 +1083,7 @@ static const struct scsi_host_template iscsi_sw_tcp_sht = {
 	.eh_device_reset_handler= iscsi_eh_device_reset,
 	.eh_target_reset_handler = iscsi_eh_recover_target,
 	.dma_boundary		= PAGE_SIZE - 1,
-	.slave_configure        = iscsi_sw_tcp_slave_configure,
+	.device_configure	= iscsi_sw_tcp_device_configure,
 	.proc_name		= "iscsi_tcp",
 	.this_id		= -1,
 	.track_queue_depth	= 1,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f8e38f94fd8c9a..db14c61791e022 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -298,13 +298,17 @@ enum {
 
 	/* do disk/partitions IO accounting */
 	BLK_FEAT_IO_STAT			= (1u << 4),
+
+	/* don't modify data until writeback is done */
+	BLK_FEAT_STABLE_WRITES			= (1u << 5),
 };
 
 /*
  * Flags automatically inherited when stacking limits.
  */
 #define BLK_FEAT_INHERIT_MASK \
-	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL)
+	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL | \
+	 BLK_FEAT_STABLE_WRITES)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -565,7 +569,6 @@ struct request_queue {
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
-#define QUEUE_FLAG_STABLE_WRITES 15	/* don't modify blks until WB is done */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
 #define QUEUE_FLAG_DAX		19	/* device supports DAX */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
@@ -1324,7 +1327,7 @@ static inline bool bdev_stable_writes(struct block_device *bdev)
 	if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
 	    q->limits.integrity.csum_type != 0)
 		return true;
-	return test_bit(QUEUE_FLAG_STABLE_WRITES, &q->queue_flags);
+	return q->limits.features & BLK_FEAT_STABLE_WRITES;
 }
 
 static inline bool blk_queue_write_cache(struct request_queue *q)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737720.1144217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu31-0001t8-RP; Tue, 11 Jun 2024 05:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737720.1144217; Tue, 11 Jun 2024 05:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu31-0001rf-Nr; Tue, 11 Jun 2024 05:27:55 +0000
Received: by outflank-mailman (input) for mailman id 737720;
 Tue, 11 Jun 2024 05:27:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtw3-0006gk-87
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:20:43 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5536dac9-27b2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:20:37 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvf-00000007RHo-0EdF; Tue, 11 Jun 2024 05:20:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5536dac9-27b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=JbMVJ/FXRp2fRg+LG74XYlDu5LD/z/A0OThY3vyxvEY=; b=U+OCOYuQKA7+zHZ8rI+/LEGIbf
	yyzT81mUDdX1g1y0nBhYq/a+EznJGskITZ623q2rthgZDGhP42yUIKsgXE0ezCNhyosXm+RT9FFEY
	A52iyPdbnDIJ5efpmZnKHUpUenY083jIVDC+WPBVWwVEAvPeAn6CciLM4N1Hcpehk9s3Mp9orQCQx
	68Ico/AWJ9UgH34ohFg3R2d02LxxNNUJwlskWjJLiWi8hQSJdujL9ZerI2qqS9YzvjknwcVoZeFU1
	+DCgonENDPhmpHZVeWByIfwrHG/4RyEwbq1nEHWLdTax+87Gv1zqzs9FGYPjA3r3/RjI/XKpixsP7
	sylNMw8A==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 16/26] block: move the io_stat flag setting to queue_limits
Date: Tue, 11 Jun 2024 07:19:16 +0200
Message-ID: <20240611051929.513387-17-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the io_stat flag into the queue_limits feature field so that it
can be set atomically, with all I/O frozen while the flag is changed.

Simplify md and dm to set the flag unconditionally instead of avoiding
setting a simple flag for cases where it is already set by other means,
which is a bit pointless.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c        |  1 -
 block/blk-mq.c                |  6 +++++-
 block/blk-sysfs.c             |  2 +-
 drivers/md/dm-table.c         | 12 +++++++++---
 drivers/md/dm.c               | 13 +++----------
 drivers/md/md.c               |  5 ++---
 drivers/nvme/host/multipath.c |  2 +-
 include/linux/blkdev.h        |  9 +++++----
 8 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 6b7edb50bfd3fa..cbe99444ed1a54 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -84,7 +84,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(NOMERGES),
 	QUEUE_FLAG_NAME(SAME_COMP),
 	QUEUE_FLAG_NAME(FAIL_IO),
-	QUEUE_FLAG_NAME(IO_STAT),
 	QUEUE_FLAG_NAME(NOXMERGES),
 	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 58b0d6c7cc34d6..cf67dc13f7dd4c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4116,7 +4116,11 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 	struct request_queue *q;
 	int ret;
 
-	q = blk_alloc_queue(lim ? lim : &default_lim, set->numa_node);
+	if (!lim)
+		lim = &default_lim;
+	lim->features |= BLK_FEAT_IO_STAT;
+
+	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
 		return q;
 	q->queuedata = queuedata;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 9174aca3b85526..6f58530fb3c08e 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -324,7 +324,7 @@ queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
 
 QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
 QUEUE_SYSFS_FEATURE(add_random, BLK_FEAT_ADD_RANDOM)
-QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
+QUEUE_SYSFS_FEATURE(iostats, BLK_FEAT_IO_STAT)
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
 #undef QUEUE_SYSFS_BIT_FNS
 
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 7654babc2775c1..3e3b713502f61e 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -579,6 +579,12 @@ int dm_split_args(int *argc, char ***argvp, char *input)
 	return 0;
 }
 
+static void dm_set_stacking_limits(struct queue_limits *limits)
+{
+	blk_set_stacking_limits(limits);
+	limits->features |= BLK_FEAT_IO_STAT;
+}
+
 /*
  * Impose necessary and sufficient conditions on a devices's table such
  * that any incoming bio which respects its logical_block_size can be
@@ -617,7 +623,7 @@ static int validate_hardware_logical_block_alignment(struct dm_table *t,
 	for (i = 0; i < t->num_targets; i++) {
 		ti = dm_table_get_target(t, i);
 
-		blk_set_stacking_limits(&ti_limits);
+		dm_set_stacking_limits(&ti_limits);
 
 		/* combine all target devices' limits */
 		if (ti->type->iterate_devices)
@@ -1591,7 +1597,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	unsigned int zone_sectors = 0;
 	bool zoned = false;
 
-	blk_set_stacking_limits(limits);
+	dm_set_stacking_limits(limits);
 
 	t->integrity_supported = true;
 	for (unsigned int i = 0; i < t->num_targets; i++) {
@@ -1604,7 +1610,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	for (unsigned int i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
-		blk_set_stacking_limits(&ti_limits);
+		dm_set_stacking_limits(&ti_limits);
 
 		if (!ti->type->iterate_devices) {
 			/* Set I/O hints portion of queue limits */
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 13037d6a6f62a2..8a976cee448bed 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2386,22 +2386,15 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 	struct table_device *td;
 	int r;
 
-	switch (type) {
-	case DM_TYPE_REQUEST_BASED:
+	WARN_ON_ONCE(type == DM_TYPE_NONE);
+
+	if (type == DM_TYPE_REQUEST_BASED) {
 		md->disk->fops = &dm_rq_blk_dops;
 		r = dm_mq_init_request_queue(md, t);
 		if (r) {
 			DMERR("Cannot initialize queue for request-based dm mapped device");
 			return r;
 		}
-		break;
-	case DM_TYPE_BIO_BASED:
-	case DM_TYPE_DAX_BIO_BASED:
-		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, md->queue);
-		break;
-	case DM_TYPE_NONE:
-		WARN_ON_ONCE(true);
-		break;
 	}
 
 	r = dm_calculate_queue_limits(t, &limits);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index c23423c51fb7c2..8db0db8d5a27ac 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5787,7 +5787,8 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int unit;
 	int error;
 	struct queue_limits lim = {
-		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+					  BLK_FEAT_IO_STAT,
 	};
 
 	/*
@@ -6152,8 +6153,6 @@ int md_run(struct mddev *mddev)
 	if (!mddev_is_dm(mddev)) {
 		struct request_queue *q = mddev->gendisk->queue;
 
-		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, q);
-
 		/* Set the NOWAIT flags if all underlying devices support it */
 		if (nowait)
 			blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 58c13304e558e0..eea727cfa9e67d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -538,6 +538,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 
 	blk_set_stacking_limits(&lim);
 	lim.dma_alignment = 3;
+	lim.features |= BLK_FEAT_IO_STAT;
 	if (head->ids.csi != NVME_CSI_ZNS)
 		lim.max_zone_append_sectors = 0;
 
@@ -550,7 +551,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 			ctrl->subsys->instance, head->instance);
 
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
-	blk_queue_flag_set(QUEUE_FLAG_IO_STAT, head->disk->queue);
 	/*
 	 * This assumes all controllers that refer to a namespace either
 	 * support poll queues or not.  That is not a strict guarantee,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index e6a2382e21c4fe..f8e38f94fd8c9a 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -295,6 +295,9 @@ enum {
 
 	/* contributes to the random number pool */
 	BLK_FEAT_ADD_RANDOM			= (1u << 3),
+
+	/* do disk/partitions IO accounting */
+	BLK_FEAT_IO_STAT			= (1u << 4),
 };
 
 /*
@@ -558,7 +561,6 @@ struct request_queue {
 #define QUEUE_FLAG_NOMERGES     3	/* disable merge attempts */
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
-#define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
@@ -577,8 +579,7 @@ struct request_queue {
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
 #define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
 
-#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_IO_STAT) |		\
-				 (1UL << QUEUE_FLAG_SAME_COMP) |	\
+#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_SAME_COMP) |	\
 				 (1UL << QUEUE_FLAG_NOWAIT))
 
 void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
@@ -592,7 +593,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_noxmerges(q)	\
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
 #define blk_queue_nonrot(q)	((q)->limits.features & BLK_FEAT_ROTATIONAL)
-#define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
+#define blk_queue_io_stat(q)	((q)->limits.features & BLK_FEAT_IO_STAT)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:27:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:27:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737723.1144228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu33-0002H3-H3; Tue, 11 Jun 2024 05:27:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737723.1144228; Tue, 11 Jun 2024 05:27:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGu33-0002Ge-Ar; Tue, 11 Jun 2024 05:27:57 +0000
Received: by outflank-mailman (input) for mailman id 737723;
 Tue, 11 Jun 2024 05:27:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M7An=NN=bombadil.srs.infradead.org=BATV+2fedbe304aabaf399917+7597+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sGtwL-0006Mb-QE
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:21:01 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 62a89e84-27b2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:21:00 +0200 (CEST)
Received: from
 2a02-8389-2341-5b80-cdb4-8e7d-405d-6b77.cable.dynamic.v6.surfer.at
 ([2a02:8389:2341:5b80:cdb4:8e7d:405d:6b77] helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sGtvv-00000007RX9-390b; Tue, 11 Jun 2024 05:20:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62a89e84-27b2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=V5hRHqDq4ZHZPbFUzLlCFThXRG0MEFvxawznaCVh6Qw=; b=Gl/M7JDu0KQApY9VUFpuumTI7l
	qsq1ZgkJiAE284oaN0Vafr5cFhk3M+F21Rng3KYBGR6r80WQoQxaro2LrHKFdMMwj2LuSQbPY+OJg
	w/IKFIDE9CjsHXbNoktUIadjSWoBZZiuhIc04MvtimnyHGIZ3m+cpenH6Lk1OsfDyjROTw9iSz8zI
	QWpMf3BXDsaAeOb0STa1nJYpvmxBmuO7Mx1h6bwfKUspQZZnEF6vn+a0+AyPjhDhasAXSRe819drt
	v0EsRlmuhSAOlBG/GgEKuksu6kH7wLy7OWH28Aqx7yShEJ4yqz9Wgq72nYKdAVGmbUNdi4p84YPFm
	x6wVt/eg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 21/26] block: move the poll flag to queue_limits
Date: Tue, 11 Jun 2024 07:19:21 +0200
Message-ID: <20240611051929.513387-22-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240611051929.513387-1-hch@lst.de>
References: <20240611051929.513387-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the poll flag into the queue_limits feature field so that it
can be set atomically, with all I/O frozen while the flag is changed.

Stacking drivers are simplified in that they can now simply set the
flag, and blk_stack_limits will clear it when the feature is not
supported by any of the underlying devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-core.c              |  5 ++--
 block/blk-mq-debugfs.c        |  1 -
 block/blk-mq.c                | 31 +++++++++++---------
 block/blk-settings.c          | 10 ++++---
 block/blk-sysfs.c             |  4 +--
 drivers/md/dm-table.c         | 54 +++++++++--------------------------
 drivers/nvme/host/multipath.c | 12 +-------
 include/linux/blkdev.h        |  4 ++-
 8 files changed, 45 insertions(+), 76 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 2b45a4df9a1aa1..8d9fbd353fc7fc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -791,7 +791,7 @@ void submit_bio_noacct(struct bio *bio)
 		}
 	}
 
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!(q->limits.features & BLK_FEAT_POLL))
 		bio_clear_polled(bio);
 
 	switch (bio_op(bio)) {
@@ -915,8 +915,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
 		return 0;
 
 	q = bdev_get_queue(bdev);
-	if (cookie == BLK_QC_T_NONE ||
-	    !test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (cookie == BLK_QC_T_NONE || !(q->limits.features & BLK_FEAT_POLL))
 		return 0;
 
 	blk_flush_plug(current->plug, false);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index f4fa820251ce83..3a21527913840d 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -87,7 +87,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(NOXMERGES),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
-	QUEUE_FLAG_NAME(POLL),
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 43235acc87505f..e2b9710ddc5ad1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4109,6 +4109,12 @@ void blk_mq_release(struct request_queue *q)
 	blk_mq_sysfs_deinit(q);
 }
 
+static bool blk_mq_can_poll(struct blk_mq_tag_set *set)
+{
+	return set->nr_maps > HCTX_TYPE_POLL &&
+		set->map[HCTX_TYPE_POLL].nr_queues;
+}
+
 struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 		struct queue_limits *lim, void *queuedata)
 {
@@ -4119,6 +4125,8 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 	if (!lim)
 		lim = &default_lim;
 	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+	if (blk_mq_can_poll(set))
+		lim->features |= BLK_FEAT_POLL;
 
 	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
@@ -4273,17 +4281,6 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 	mutex_unlock(&q->sysfs_lock);
 }
 
-static void blk_mq_update_poll_flag(struct request_queue *q)
-{
-	struct blk_mq_tag_set *set = q->tag_set;
-
-	if (set->nr_maps > HCTX_TYPE_POLL &&
-	    set->map[HCTX_TYPE_POLL].nr_queues)
-		blk_queue_flag_set(QUEUE_FLAG_POLL, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
-}
-
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		struct request_queue *q)
 {
@@ -4311,7 +4308,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	q->tag_set = set;
 
 	q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
-	blk_mq_update_poll_flag(q);
 
 	INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
 	INIT_LIST_HEAD(&q->flush_list);
@@ -4798,8 +4794,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 fallback:
 	blk_mq_update_queue_map(set);
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
+		struct queue_limits lim;
+
 		blk_mq_realloc_hw_ctxs(set, q);
-		blk_mq_update_poll_flag(q);
+
 		if (q->nr_hw_queues != set->nr_hw_queues) {
 			int i = prev_nr_hw_queues;
 
@@ -4811,6 +4809,13 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 			set->nr_hw_queues = prev_nr_hw_queues;
 			goto fallback;
 		}
+		lim = queue_limits_start_update(q);
+		if (blk_mq_can_poll(set))
+			lim.features |= BLK_FEAT_POLL;
+		else
+			lim.features &= ~BLK_FEAT_POLL;
+		if (queue_limits_commit_update(q, &lim) < 0)
+			pr_warn("updating the poll flag failed\n");
 		blk_mq_map_swqueue(q);
 	}
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index bf4622c19b5c09..026ba68d829856 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -460,13 +460,15 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
 
 	/*
-	 * BLK_FEAT_NOWAIT needs to be supported both by the stacking driver
-	 * and all underlying devices.  The stacking driver sets the flag
-	 * before stacking the limits, and this will clear the flag if any
-	 * of the underlying devices does not support it.
+	 * BLK_FEAT_NOWAIT and BLK_FEAT_POLL need to be supported both by the
+	 * stacking driver and all underlying devices.  The stacking driver sets
+	 * the flags before stacking the limits, and this will clear the flags
+	 * if any of the underlying devices does not support it.
 	 */
 	if (!(b->features & BLK_FEAT_NOWAIT))
 		t->features &= ~BLK_FEAT_NOWAIT;
+	if (!(b->features & BLK_FEAT_POLL))
+		t->features &= ~BLK_FEAT_POLL;
 
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_user_sectors = min_not_zero(t->max_user_sectors,
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index cde525724831ef..da4e96d686f91e 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -394,13 +394,13 @@ static ssize_t queue_poll_delay_store(struct request_queue *q, const char *page,
 
 static ssize_t queue_poll_show(struct request_queue *q, char *page)
 {
-	return queue_var_show(test_bit(QUEUE_FLAG_POLL, &q->queue_flags), page);
+	return queue_var_show(q->limits.features & BLK_FEAT_POLL, page);
 }
 
 static ssize_t queue_poll_store(struct request_queue *q, const char *page,
 				size_t count)
 {
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!(q->limits.features & BLK_FEAT_POLL))
 		return -EINVAL;
 	pr_info_ratelimited("writes to the poll attribute are ignored.\n");
 	pr_info_ratelimited("please use driver specific parameters instead.\n");
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index d3a960aee03c6a..653c253b6f7f32 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -582,7 +582,7 @@ int dm_split_args(int *argc, char ***argvp, char *input)
 static void dm_set_stacking_limits(struct queue_limits *limits)
 {
 	blk_set_stacking_limits(limits);
-	limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+	limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
 }
 
 /*
@@ -1024,14 +1024,13 @@ bool dm_table_request_based(struct dm_table *t)
 	return __table_type_request_based(dm_table_get_type(t));
 }
 
-static bool dm_table_supports_poll(struct dm_table *t);
-
 static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
 {
 	enum dm_queue_mode type = dm_table_get_type(t);
 	unsigned int per_io_data_size = 0, front_pad, io_front_pad;
 	unsigned int min_pool_size = 0, pool_size;
 	struct dm_md_mempools *pools;
+	unsigned int bioset_flags = 0;
 
 	if (unlikely(type == DM_TYPE_NONE)) {
 		DMERR("no table type is set, can't allocate mempools");
@@ -1048,6 +1047,9 @@ static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *
 		goto init_bs;
 	}
 
+	if (md->queue->limits.features & BLK_FEAT_POLL)
+		bioset_flags |= BIOSET_PERCPU_CACHE;
+
 	for (unsigned int i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
@@ -1060,8 +1062,7 @@ static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *
 
 	io_front_pad = roundup(per_io_data_size,
 		__alignof__(struct dm_io)) + DM_IO_BIO_OFFSET;
-	if (bioset_init(&pools->io_bs, pool_size, io_front_pad,
-			dm_table_supports_poll(t) ? BIOSET_PERCPU_CACHE : 0))
+	if (bioset_init(&pools->io_bs, pool_size, io_front_pad, bioset_flags))
 		goto out_free_pools;
 	if (t->integrity_supported &&
 	    bioset_integrity_create(&pools->io_bs, pool_size))
@@ -1404,14 +1405,6 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
 	return &t->targets[(KEYS_PER_NODE * n) + k];
 }
 
-static int device_not_poll_capable(struct dm_target *ti, struct dm_dev *dev,
-				   sector_t start, sector_t len, void *data)
-{
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return !test_bit(QUEUE_FLAG_POLL, &q->queue_flags);
-}
-
 /*
  * type->iterate_devices() should be called when the sanity check needs to
  * iterate and check all underlying data devices. iterate_devices() will
@@ -1459,19 +1452,6 @@ static int count_device(struct dm_target *ti, struct dm_dev *dev,
 	return 0;
 }
 
-static bool dm_table_supports_poll(struct dm_table *t)
-{
-	for (unsigned int i = 0; i < t->num_targets; i++) {
-		struct dm_target *ti = dm_table_get_target(t, i);
-
-		if (!ti->type->iterate_devices ||
-		    ti->type->iterate_devices(ti, device_not_poll_capable, NULL))
-			return false;
-	}
-
-	return true;
-}
-
 /*
  * Check whether a table has no data devices attached using each
  * target's iterate_devices method.
@@ -1817,6 +1797,13 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (dm_table_supports_nowait(t))
 		limits->features &= ~BLK_FEAT_NOWAIT;
 
+	/*
+	 * The current polling implementation does not support request based
+	 * stacking.
+	 */
+	if (!__table_type_bio_based(t->type))
+		limits->features &= ~BLK_FEAT_POLL;
+
 	if (!dm_table_supports_discards(t)) {
 		limits->max_hw_discard_sectors = 0;
 		limits->discard_granularity = 0;
@@ -1858,21 +1845,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 		return r;
 
 	dm_update_crypto_profile(q, t);
-
-	/*
-	 * Check for request-based device is left to
-	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
-	 *
-	 * For bio-based device, only set QUEUE_FLAG_POLL when all
-	 * underlying devices supporting polling.
-	 */
-	if (__table_type_bio_based(t->type)) {
-		if (dm_table_supports_poll(t))
-			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
-		else
-			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
-	}
-
 	return 0;
 }
 
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 61a162c9cf4e6c..4933194d00e592 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -538,7 +538,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 
 	blk_set_stacking_limits(&lim);
 	lim.dma_alignment = 3;
-	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
 	if (head->ids.csi != NVME_CSI_ZNS)
 		lim.max_zone_append_sectors = 0;
 
@@ -549,16 +549,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	head->disk->private_data = head;
 	sprintf(head->disk->disk_name, "nvme%dn%d",
 			ctrl->subsys->instance, head->instance);
-
-	/*
-	 * This assumes all controllers that refer to a namespace either
-	 * support poll queues or not.  That is not a strict guarantee,
-	 * but if the assumption is wrong the effect is only suboptimal
-	 * performance but not correctness problem.
-	 */
-	if (ctrl->tagset->nr_maps > HCTX_TYPE_POLL &&
-	    ctrl->tagset->map[HCTX_TYPE_POLL].nr_queues)
-		blk_queue_flag_set(QUEUE_FLAG_POLL, head->disk->queue);
 	return 0;
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index c2545580c5b134..d0db354b12db47 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -310,6 +310,9 @@ enum {
 
 	/* supports DAX */
 	BLK_FEAT_DAX				= (1u << 8),
+
+	/* supports I/O polling */
+	BLK_FEAT_POLL				= (1u << 9),
 };
 
 /*
@@ -577,7 +580,6 @@ struct request_queue {
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
-#define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:46:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:46:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737796.1144237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuLL-0007AW-0A; Tue, 11 Jun 2024 05:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737796.1144237; Tue, 11 Jun 2024 05:46:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuLK-0007AP-Td; Tue, 11 Jun 2024 05:46:50 +0000
Received: by outflank-mailman (input) for mailman id 737796;
 Tue, 11 Jun 2024 05:46:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGuLI-0007AJ-UP
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:46:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fcdfc634-27b5-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:46:47 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id DA72160C78;
 Tue, 11 Jun 2024 05:46:44 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3F6E2C2BD10;
 Tue, 11 Jun 2024 05:46:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fcdfc634-27b5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718084804;
	bh=ExeLOYUGYjsXp3wdbBqJbcQ6b1Q8a86JlJB2p9Et3no=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=WBxCLVZAQ2Cg02C5kvrfE9AgQqRfV0Vp9wUF3fqyRhPTMqteB7yjcEcGuYnFUrdIw
	 Nu+HCmDrsEWaDY0FS7c46/3H7ATl/iGMVLCSF+Y1IXagrNKa23K/U8gLfuLZVv+pfk
	 se724bhlH7gvtfrzVvNU5z4sgOI97FPDkZ/i4w+FATTtdCE12k8Vsqx39K60DyqVqK
	 eKEG3WqOVb470PJhBzxUxSx4QvapUkoaIbwLZb4Jn2jRXEpSQbkqLj2jPx5RESJvRH
	 5VCPbFYE05UT1aOpVYJIEWldcQvGMdLjZeDXOZ3CBVDxeKuyv8iCzyjAX6fSrOkmTW
	 QhoVFafzbn1yw==
Message-ID: <d50efca4-ba29-49f9-94cc-5bd4795f6e38@kernel.org>
Date: Tue, 11 Jun 2024 14:46:38 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 01/26] sd: fix sd_is_zoned
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-2-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-2-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Since commit 7437bb73f087 ("block: remove support for the host aware zone
> model"), only ZBC devices expose a zoned access model.  sd_is_zoned is
> used to check for that and thus should return false for host aware devices.
> 
> Fixes: 7437bb73f087 ("block: remove support for the host aware zone model")
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:51:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:51:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737800.1144247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuPu-0000Ke-Gz; Tue, 11 Jun 2024 05:51:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737800.1144247; Tue, 11 Jun 2024 05:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuPu-0000KX-EJ; Tue, 11 Jun 2024 05:51:34 +0000
Received: by outflank-mailman (input) for mailman id 737800;
 Tue, 11 Jun 2024 05:51:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGuPt-0000KR-Jy
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:51:33 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a738c21d-27b6-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:51:32 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 1ABDF60C8C;
 Tue, 11 Jun 2024 05:51:31 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 4F8B3C2BD10;
 Tue, 11 Jun 2024 05:51:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a738c21d-27b6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718085090;
	bh=n/truQQNALnyrE4ZbwFnrA1tq2cozaH91kaKcSN3ADo=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=ANn0WrF5kOQJZxWO+lYlEGkU5hVzQiFb2dNu1fRL0d2IWY3RReTj+asObSRTOVsDn
	 nik3p07qhEGu9jO0A5uH0hbRiLwLOJ3d6qy5Lpfrdzdf1JpR8Qb6Y+TeigefvpTdHC
	 PaMP9CtOKbfsr2fBnfzUYFtnczoPg5lf2QS4Trvu+DMNLb/UDOZQCC9uHBx6LdmPM8
	 /dgqDFvO1q4KPpY9GF6hhisUoWhTc6hasUEAygvr3guCne7aOz50r92/yOBM880RPV
	 /mezANfcSB8zUh/+nThd+2qNqFJrJ+m45Wswa5U72/uWVWbOFnW/S5hmwY1Gx82O5w
	 +K3mqL1+sk3wQ==
Message-ID: <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
Date: Tue, 11 Jun 2024 14:51:24 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-3-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-3-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move a bit of code that sets up the zone flag and the write granularity
> into sd_zbc_read_zones to be with the rest of the zoned limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/scsi/sd.c     | 21 +--------------------
>  drivers/scsi/sd_zbc.c | 13 ++++++++++++-
>  2 files changed, 13 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> index 85b45345a27739..5bfed61c70db8f 100644
> --- a/drivers/scsi/sd.c
> +++ b/drivers/scsi/sd.c
> @@ -3308,29 +3308,10 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
>  		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
>  	}
>  
> -
> -#ifdef CONFIG_BLK_DEV_ZONED /* sd_probe rejects ZBD devices early otherwise */
> -	if (sdkp->device->type == TYPE_ZBC) {
> -		lim->zoned = true;
> -
> -		/*
> -		 * Per ZBC and ZAC specifications, writes in sequential write
> -		 * required zones of host-managed devices must be aligned to
> -		 * the device physical block size.
> -		 */
> -		lim->zone_write_granularity = sdkp->physical_block_size;
> -	} else {
> -		/*
> -		 * Host-aware devices are treated as conventional.
> -		 */
> -		lim->zoned = false;
> -	}
> -#endif /* CONFIG_BLK_DEV_ZONED */
> -
>  	if (!sdkp->first_scan)
>  		return;
>  
> -	if (lim->zoned)
> +	if (sdkp->device->type == TYPE_ZBC)

Nit: use sd_is_zoned() here?

>  		sd_printk(KERN_NOTICE, sdkp, "Host-managed zoned block device\n");
>  	else if (sdkp->zoned == 1)
>  		sd_printk(KERN_NOTICE, sdkp, "Host-aware SMR disk used as regular disk\n");
> diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
> index 422eaed8457227..e9501db0450be3 100644
> --- a/drivers/scsi/sd_zbc.c
> +++ b/drivers/scsi/sd_zbc.c
> @@ -598,8 +598,19 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
>  	u32 zone_blocks = 0;
>  	int ret;
>  
> -	if (!sd_is_zoned(sdkp))
> +	if (!sd_is_zoned(sdkp)) {
> +		lim->zoned = false;

Maybe we should clear the other zone-related limits here? If the drive is
reformatted/converted from SMR to CMR (FORMAT WITH PRESET), the other zone
limits may already be set, no?

>  		return 0;
> +	}
> +
> +	lim->zoned = true;
> +
> +	/*
> +	 * Per ZBC and ZAC specifications, writes in sequential write required
> +	 * zones of host-managed devices must be aligned to the device physical
> +	 * block size.
> +	 */
> +	lim->zone_write_granularity = sdkp->physical_block_size;
>  
>  	/* READ16/WRITE16/SYNC16 is mandatory for ZBC devices */
>  	sdkp->device->use_16_for_rw = 1;

-- 
Damien Le Moal
Western Digital Research
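
The review comment above asks whether the remaining zone-related limits should also be cleared on the non-zoned early return. A sketch of what that could look like, with stub fields standing in for the real struct queue_limits (illustrative only, not the actual sd_zbc.c change):

```c
#include <assert.h>
#include <stdbool.h>

/* Stub of the zone-related queue_limits fields under discussion. */
struct queue_limits {
	bool zoned;
	unsigned int zone_write_granularity;
	unsigned int max_open_zones;
	unsigned int max_active_zones;
	unsigned int chunk_sectors;	/* zone size, in sectors */
};

/* Explicitly zero every zone field when the device is not (or no longer)
 * zoned, so stale values from a previous SMR format cannot leak through. */
static void clear_zoned_limits(struct queue_limits *lim)
{
	lim->zoned = false;
	lim->zone_write_granularity = 0;
	lim->max_open_zones = 0;
	lim->max_active_zones = 0;
	lim->chunk_sectors = 0;
}
```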



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:52:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:52:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737806.1144258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuR2-0000sM-Pw; Tue, 11 Jun 2024 05:52:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737806.1144258; Tue, 11 Jun 2024 05:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuR2-0000sF-NI; Tue, 11 Jun 2024 05:52:44 +0000
Received: by outflank-mailman (input) for mailman id 737806;
 Tue, 11 Jun 2024 05:52:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QBFV=NN=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sGuR2-0000s7-2z
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:52:44 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1cc7ba0-27b6-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:52:43 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 3636D67373; Tue, 11 Jun 2024 07:52:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1cc7ba0-27b6-11ef-90a3-e314d9c70b13
Date: Tue, 11 Jun 2024 07:52:39 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
Message-ID: <20240611055239.GA3141@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-3-hch@lst.de> <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 02:51:24PM +0900, Damien Le Moal wrote:
> > -	if (lim->zoned)
> > +	if (sdkp->device->type == TYPE_ZBC)
> 
> Nit: use sd_is_zoned() here?

Yes.

> > -	if (!sd_is_zoned(sdkp))
> > +	if (!sd_is_zoned(sdkp)) {
> > +		lim->zoned = false;
> 
> Maybe we should clear the other zone-related limits here? If the drive is
> reformatted/converted from SMR to CMR (FORMAT WITH PRESET), the other zone
> limits may already be set, no?

blk_validate_zoned_limits already takes care of that.



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:53:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:53:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737813.1144268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuRn-0001Rl-5r; Tue, 11 Jun 2024 05:53:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737813.1144268; Tue, 11 Jun 2024 05:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuRn-0001Re-33; Tue, 11 Jun 2024 05:53:31 +0000
Received: by outflank-mailman (input) for mailman id 737813;
 Tue, 11 Jun 2024 05:53:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGuRl-0001KK-Go
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:53:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ebdbe814-27b6-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:53:27 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 5366D60C88;
 Tue, 11 Jun 2024 05:53:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7DFA5C2BD10;
 Tue, 11 Jun 2024 05:53:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebdbe814-27b6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718085206;
	bh=RvinM2CEhnrYy7/V+Hc/2gNsHgLPMfOqlJof5s6pLpY=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=acWTCLh69lElyFzhBDitf/FJSPcZHoSKSYkPin9SpKmAkY2E8tt2RX9YMgvNW08VJ
	 pXyQOUrKXLHipujeCeYwwOOV1ng3liGMUrh79LCxXQAnCnsegn/0fa+OxA8G7Re61L
	 fmOHhV3QGFVIYl1a9Zvr8YBZq5o5MC+kVjqKkUi4kq/pw8SNm9WHEEcZBI2M9gnJTB
	 rNM+9TLvbzpYIaLgbWYUi2Bsf66NXsvZZfHsd+k+aOkAbuju2GyIYlvuYfiZ5s0zaB
	 Ghir0etXCncHmfvSSJDhrNjKYgtJ1ghf+Dq41ngHLndOPehGRZnzQ4OTjdqTetZ0XF
	 qcw8t2IN/8aPA==
Message-ID: <ca5a3441-768a-4331-a1c2-a4bdadf2f150@kernel.org>
Date: Tue, 11 Jun 2024 14:53:19 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 03/26] loop: stop using loop_reconfigure_limits in
 __loop_clr_fd
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-4-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-4-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> __loop_clr_fd wants to clear all settings on the device.  Prepare for
> moving more settings into the block limits by open coding
> loop_reconfigure_limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/loop.c | 10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index 93780f41646b75..93a49c40a31a71 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -1133,6 +1133,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
>  
>  static void __loop_clr_fd(struct loop_device *lo, bool release)
>  {
> +	struct queue_limits lim;
>  	struct file *filp;
>  	gfp_t gfp = lo->old_gfp_mask;
>  
> @@ -1156,7 +1157,14 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
>  	lo->lo_offset = 0;
>  	lo->lo_sizelimit = 0;
>  	memset(lo->lo_file_name, 0, LO_NAME_SIZE);
> -	loop_reconfigure_limits(lo, 512, false);
> +
> +	/* reset the block size to the default */
> +	lim = queue_limits_start_update(lo->lo_queue);
> +	lim.logical_block_size = 512;

Nit: SECTOR_SIZE, maybe?

> +	lim.physical_block_size = 512;
> +	lim.io_min = 512;
> +	queue_limits_commit_update(lo->lo_queue, &lim);
> +
>  	invalidate_disk(lo->lo_disk);
>  	loop_sysfs_exit(lo);
>  	/* let user-space know about this change */

Otherwise, looks OK to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research
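
The nit above suggests spelling the 512-byte default as SECTOR_SIZE rather than a bare 512. A stub-typed sketch of the reset helper written that way (illustrative only, not the real drivers/block/loop.c):

```c
#include <assert.h>

#define SECTOR_SIZE 512u	/* matches the kernel's definition */

/* Stub of the block-size limits being reset in __loop_clr_fd. */
struct queue_limits {
	unsigned int logical_block_size;
	unsigned int physical_block_size;
	unsigned int io_min;
};

/* Reset all three block-size limits to the 512-byte default, using the
 * named constant instead of a magic number. */
static void loop_reset_block_size(struct queue_limits *lim)
{
	lim->logical_block_size = SECTOR_SIZE;
	lim->physical_block_size = SECTOR_SIZE;
	lim->io_min = SECTOR_SIZE;
}
```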



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:54:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737818.1144278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuSP-0001xG-DB; Tue, 11 Jun 2024 05:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737818.1144278; Tue, 11 Jun 2024 05:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuSP-0001x9-AJ; Tue, 11 Jun 2024 05:54:09 +0000
Received: by outflank-mailman (input) for mailman id 737818;
 Tue, 11 Jun 2024 05:54:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QBFV=NN=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sGuSO-0001rF-CI
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:54:08 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 046716f4-27b7-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:54:07 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id B228C68B05; Tue, 11 Jun 2024 07:54:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 046716f4-27b7-11ef-90a3-e314d9c70b13
Date: Tue, 11 Jun 2024 07:54:05 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
Message-ID: <20240611055405.GA3256@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-3-hch@lst.de> <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org> <20240611055239.GA3141@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240611055239.GA3141@lst.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 07:52:39AM +0200, Christoph Hellwig wrote:
> > Maybe we should clear the other zone-related limits here? If the drive is
> > reformatted/converted from SMR to CMR (FORMAT WITH PRESET), the other zone
> > limits may already be set, no?
> 
> blk_validate_zoned_limits already takes care of that.

Sorry, brainfart.  The integrity code does that, but not the zoned
code.  I suspect the core code might be a better place for it,
though.
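
A sketch of doing that cleanup in the core validation path, as suggested above. The helper name and fields below are illustrative, not the actual block/blk-settings.c code:

```c
#include <assert.h>
#include <stdbool.h>

/* Stub of the zone-related queue_limits fields. */
struct queue_limits {
	bool zoned;
	unsigned int zone_write_granularity;
	unsigned int max_open_zones;
	unsigned int max_active_zones;
	unsigned int chunk_sectors;	/* zone size, in sectors */
};

/* Hypothetical core-side validation: wipe stale zone limits for non-zoned
 * devices so no driver has to remember to do it, and reject a zoned device
 * that reports no zone size. */
static int validate_zoned_limits_sketch(struct queue_limits *lim)
{
	if (!lim->zoned) {
		lim->zone_write_granularity = 0;
		lim->max_open_zones = 0;
		lim->max_active_zones = 0;
		lim->chunk_sectors = 0;
		return 0;
	}
	return lim->chunk_sectors ? 0 : -22;	/* -EINVAL */
}
```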



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:54:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:54:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737823.1144287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuSk-0002S6-KR; Tue, 11 Jun 2024 05:54:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737823.1144287; Tue, 11 Jun 2024 05:54:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuSk-0002Rz-Hh; Tue, 11 Jun 2024 05:54:30 +0000
Received: by outflank-mailman (input) for mailman id 737823;
 Tue, 11 Jun 2024 05:54:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGuSj-0001rF-1P
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:54:29 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0c7b95ea-27b7-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:54:27 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 91F99CE19A1;
 Tue, 11 Jun 2024 05:54:16 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 12FF9C2BD10;
 Tue, 11 Jun 2024 05:54:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c7b95ea-27b7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718085255;
	bh=ImOyYBG3xCcX0n3tLvSbs6xw/kdWptb6Do6trOqK4QA=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=qHFHpGL8aUbN9wv5klAImzJ5jjK0oiGT8U4BTD8xfsG5jiCOpGSUv7Km3WvNqtPCD
	 6j1nrDjh7PhyALjuNQNHYkKLmYYq6Y1xffYfB0MxJDe2yW8SOyNDdlvw/znXF9eZc8
	 KT4CX+clUIQtduOC06IcHkCGn74lBXgof4aV9GT8PsAx4EZCRJSboZ+IHz3mWN+raN
	 KotqG4Gp62inAvWnPIr4JchEXO3PbNevMpTBNuCyAHk0M6HQGrIQYLuf6JcDaVG6mV
	 fVlgaG9YnpTo4YxUBgRoQVMO/dLvogh2Z6hk8kGRTiZh4LaBEnh8YlC7x62pw9weqZ
	 kWW9/78SoGctQ==
Message-ID: <c5524425-b2bd-40c4-bdc2-5b7e51ce6d67@kernel.org>
Date: Tue, 11 Jun 2024 14:54:11 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 04/26] loop: always update discard settings in
 loop_reconfigure_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-5-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-5-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Simplify loop_reconfigure_limits by always updating the discard limits.
> This adds a little more work to loop_set_block_size, but doesn't change
> the outcome as the discard flag won't change.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks OK to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:54:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:54:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737827.1144298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuTB-0002zY-RZ; Tue, 11 Jun 2024 05:54:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737827.1144298; Tue, 11 Jun 2024 05:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuTB-0002zR-Oh; Tue, 11 Jun 2024 05:54:57 +0000
Received: by outflank-mailman (input) for mailman id 737827;
 Tue, 11 Jun 2024 05:54:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QBFV=NN=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sGuTB-0002Pi-ER
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:54:57 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20dc6037-27b7-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:54:55 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id A7EA968BEB; Tue, 11 Jun 2024 07:54:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20dc6037-27b7-11ef-b4bb-af5377834399
Date: Tue, 11 Jun 2024 07:54:53 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 03/26] loop: stop using loop_reconfigure_limits in
 __loop_clr_fd
Message-ID: <20240611055453.GA3384@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-4-hch@lst.de> <ca5a3441-768a-4331-a1c2-a4bdadf2f150@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ca5a3441-768a-4331-a1c2-a4bdadf2f150@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 02:53:19PM +0900, Damien Le Moal wrote:
> > +	/* reset the block size to the default */
> > +	lim = queue_limits_start_update(lo->lo_queue);
> > +	lim.logical_block_size = 512;
> 
> Nit: SECTOR_SIZE, maybe?

Yes.  I was following the existing code, but SECTOR_SIZE is probably
a better choice here.



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:57:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:57:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737833.1144308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuVS-0003cZ-7j; Tue, 11 Jun 2024 05:57:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737833.1144308; Tue, 11 Jun 2024 05:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuVS-0003cS-4c; Tue, 11 Jun 2024 05:57:18 +0000
Received: by outflank-mailman (input) for mailman id 737833;
 Tue, 11 Jun 2024 05:57:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGuVQ-0003cM-Rj
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:57:16 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6fb475b3-27b7-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:57:15 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id CA4B3CE0B9B;
 Tue, 11 Jun 2024 05:57:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7C4CBC2BD10;
 Tue, 11 Jun 2024 05:57:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fb475b3-27b7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718085426;
	bh=xFN1h7CLNOVeSD1DVq7Y6t3Bb5KlL582IrHJpAfOI+I=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=fv21i4LDKBohOjAMc7pIwhaiiTsAlafv5K+jqxJasBRtlliGgCtTrXsT0H8yV5+9T
	 njwksi1EuST41csRG3Vqsp2L+QlP+t8YOn8e3W08JXRiXRq8hhVj8Dagc3+6nbt30R
	 F+mIQhqmSwNPdFOouK0TdZvzzsTWzz1Lqs4jPTNbQ0Jw2LrEoRIpZD54zKPNaLm1HA
	 t4c032+2B98Y3f+oLMEONdKDXJVupqzzbghc2EZK18tDOe6HzPnnhx/GWgADRWFlM4
	 oxYC9o2lrFo6wnGlsgbujf5T6gsAVdypUTxeMWDRMSrvLMxkDZnNQFg5hAvDm4J/fA
	 kYvXmValPXY/Q==
Message-ID: <dabc33cd-feb9-4263-8f6e-4d2ab3d71430@kernel.org>
Date: Tue, 11 Jun 2024 14:56:59 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 05/26] loop: regularize upgrading the lock size for direct
 I/O
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-6-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-6-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> The LOOP_CONFIGURE path automatically upgrades the block size to that
> of the underlying file for O_DIRECT file descriptors, but the
> LOOP_SET_BLOCK_SIZE path does not.  Fix this by lifting the code to
> pick the block size into common code.

s/lock/block/ in the commit title.

> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/loop.c | 25 +++++++++++++++----------
>  1 file changed, 15 insertions(+), 10 deletions(-)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index c658282454af1b..4f6d8514d19bd6 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -975,10 +975,24 @@ loop_set_status_from_info(struct loop_device *lo,
>  	return 0;
>  }
>  
> +static unsigned short loop_default_blocksize(struct loop_device *lo,
> +		struct block_device *backing_bdev)
> +{
> +	/* In case of direct I/O, match underlying block size */
> +	if ((lo->lo_backing_file->f_flags & O_DIRECT) && backing_bdev)
> +		return bdev_logical_block_size(backing_bdev);
> +	return 512;

Nit: SECTOR_SIZE?

> +}
> +
>  static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
>  {
> +	struct file *file = lo->lo_backing_file;
> +	struct inode *inode = file->f_mapping->host;
>  	struct queue_limits lim;
>  
> +	if (!bsize)
> +		bsize = loop_default_blocksize(lo, inode->i_sb->s_bdev);

If bsize is specified and there is a backing dev used with direct I/O, should it
be checked that bsize is a multiple of bdev_logical_block_size(backing_bdev)?

> +
>  	lim = queue_limits_start_update(lo->lo_queue);
>  	lim.logical_block_size = bsize;
>  	lim.physical_block_size = bsize;
> @@ -997,7 +1011,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
>  	int error;
>  	loff_t size;
>  	bool partscan;
> -	unsigned short bsize;
>  	bool is_loop;
>  
>  	if (!file)
> @@ -1076,15 +1089,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
>  	if (!(lo->lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
>  		blk_queue_write_cache(lo->lo_queue, true, false);
>  
> -	if (config->block_size)
> -		bsize = config->block_size;
> -	else if ((lo->lo_backing_file->f_flags & O_DIRECT) && inode->i_sb->s_bdev)
> -		/* In case of direct I/O, match underlying block size */
> -		bsize = bdev_logical_block_size(inode->i_sb->s_bdev);
> -	else
> -		bsize = 512;
> -
> -	error = loop_reconfigure_limits(lo, bsize);
> +	error = loop_reconfigure_limits(lo, config->block_size);
>  	if (WARN_ON_ONCE(error))
>  		goto out_unlock;
>  

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:59:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737838.1144317 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuXH-0004hr-It; Tue, 11 Jun 2024 05:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737838.1144317; Tue, 11 Jun 2024 05:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuXH-0004hk-Fe; Tue, 11 Jun 2024 05:59:11 +0000
Received: by outflank-mailman (input) for mailman id 737838;
 Tue, 11 Jun 2024 05:59:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGuXG-0004hc-9u
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:59:10 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b5bbe1b2-27b7-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:59:07 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 591E8CE19AB;
 Tue, 11 Jun 2024 05:59:04 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id F39B0C2BD10;
 Tue, 11 Jun 2024 05:58:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5bbe1b2-27b7-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718085543;
	bh=ytNiiCLVFE28gHr4h5G30TNqYehF/KogG4pSKSbNQ2E=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=juelLKuku0LFbgtXG9OzRzRWb3ooPaRYkdEGYbxCY/l/EREHcpsd6y+kWaUp4iBKj
	 Cs53RLnIaSbrZccm/GN9QVCVAOcAI/+By8TM/0/LFmq4RPTUmj+axscKmxzzB5coCx
	 TPpMXPmrdbBJt790n7mZ2SG7cnazZrwM4LN+e2wDErlp27GY0Q5cW5+prp1xuTs+nr
	 t3wCoIWjpcxWO+rGzFp9IqhsxvwTy7jx2vG3FMgsOx4qNL4tLOJuL4SBWR9Le8e1Gu
	 FbIAW9jk7NKo8gZzoXSm3zjx5aS60f2RIO1ZIKWZDZVw79ebRnV6Cm+DP9ptNCmDsn
	 briDJeVws9DJA==
Message-ID: <27e76310-1831-473e-803a-e0294b91463c@kernel.org>
Date: Tue, 11 Jun 2024 14:58:56 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 06/26] loop: also use the default block size from an
 underlying block device
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-7-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-7-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Fix the code in loop_reconfigure_limits to pick a default block size for
> O_DIRECT file descriptors to also work when the loop device sits on top
> of a block device and not just on a regular file on a block device based
> file system.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/loop.c | 8 +++++++-
>  1 file changed, 7 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index 4f6d8514d19bd6..d7cf6bbbfb1b86 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -988,10 +988,16 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
>  {
>  	struct file *file = lo->lo_backing_file;
>  	struct inode *inode = file->f_mapping->host;
> +	struct block_device *backing_bdev = NULL;
>  	struct queue_limits lim;
>  
> +	if (S_ISBLK(inode->i_mode))
> +		backing_bdev = I_BDEV(inode);
> +	else if (inode->i_sb->s_bdev)
> +		backing_bdev = inode->i_sb->s_bdev;
> +

Why not move this hunk inside the "if" below? (The backing_bdev declaration can
go there too.)

>  	if (!bsize)
> -		bsize = loop_default_blocksize(lo, inode->i_sb->s_bdev);
> +		bsize = loop_default_blocksize(lo, backing_bdev);
>  
>  	lim = queue_limits_start_update(lo->lo_queue);
>  	lim.logical_block_size = bsize;

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:59:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737839.1144328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuXK-0004x0-PE; Tue, 11 Jun 2024 05:59:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737839.1144328; Tue, 11 Jun 2024 05:59:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuXK-0004wt-Mi; Tue, 11 Jun 2024 05:59:14 +0000
Received: by outflank-mailman (input) for mailman id 737839;
 Tue, 11 Jun 2024 05:59:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QBFV=NN=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sGuXJ-0004hc-7X
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:59:13 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b9656513-27b7-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 07:59:11 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id B7C0F68CFE; Tue, 11 Jun 2024 07:59:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9656513-27b7-11ef-b4bb-af5377834399
Date: Tue, 11 Jun 2024 07:59:06 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 05/26] loop: regularize upgrading the lock size for
 direct I/O
Message-ID: <20240611055906.GA3640@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-6-hch@lst.de> <dabc33cd-feb9-4263-8f6e-4d2ab3d71430@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <dabc33cd-feb9-4263-8f6e-4d2ab3d71430@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 02:56:59PM +0900, Damien Le Moal wrote:
> > +	if (!bsize)
> > +		bsize = loop_default_blocksize(lo, inode->i_sb->s_bdev);
> 
> If bsize is specified and there is a backing dev used with direct I/O, should it
> be checked that bsize is a multiple of bdev_logical_block_size(backing_bdev)?

For direct I/O that check would be useful.  For buffered I/O we can do
read-modify-write cycles.  However, this series is already huge and not
primarily about improving the loop driver's parameter validation, so
I'll defer this for now.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 05:59:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 05:59:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737850.1144337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuXv-0005kk-4F; Tue, 11 Jun 2024 05:59:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737850.1144337; Tue, 11 Jun 2024 05:59:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuXv-0005kZ-18; Tue, 11 Jun 2024 05:59:51 +0000
Received: by outflank-mailman (input) for mailman id 737850;
 Tue, 11 Jun 2024 05:59:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=QBFV=NN=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sGuXt-0005WP-DW
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 05:59:49 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cfa9641d-27b7-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 07:59:48 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id B1AFB68C4E; Tue, 11 Jun 2024 07:59:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfa9641d-27b7-11ef-90a3-e314d9c70b13
Date: Tue, 11 Jun 2024 07:59:46 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 06/26] loop: also use the default block size from an
 underlying block device
Message-ID: <20240611055946.GA3777@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-7-hch@lst.de> <27e76310-1831-473e-803a-e0294b91463c@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <27e76310-1831-473e-803a-e0294b91463c@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 02:58:56PM +0900, Damien Le Moal wrote:
> > +	if (S_ISBLK(inode->i_mode))
> > +		backing_bdev = I_BDEV(inode);
> > +	else if (inode->i_sb->s_bdev)
> > +		backing_bdev = inode->i_sb->s_bdev;
> > +
> 
> Why not move this hunk inside the "if" below? (The backing_bdev declaration
> can go there too.)

Because another use will pop up a bit later :)



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 06:01:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 06:01:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737855.1144348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuZD-0007H6-EF; Tue, 11 Jun 2024 06:01:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737855.1144348; Tue, 11 Jun 2024 06:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuZD-0007Gz-Ar; Tue, 11 Jun 2024 06:01:11 +0000
Received: by outflank-mailman (input) for mailman id 737855;
 Tue, 11 Jun 2024 06:01:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGuZB-0007GY-BQ
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 06:01:09 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fda5c7a4-27b7-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 08:01:06 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 4355B60C78;
 Tue, 11 Jun 2024 06:01:05 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 61916C2BD10;
 Tue, 11 Jun 2024 06:01:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fda5c7a4-27b7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718085665;
	bh=SwdezD+VbbCE5IbMCIb8XIlsQn00OdtPFM8sRr4ztZk=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=Zr2vvzA/p7vYbT1k0uSBle8fVBLgAMu7Z8G2rZTtwcHwZJBLkdKFIjTMoJNCzigg7
	 HTTeNcLNiD8mOl1bIMJSt8wupoDF5SVZaTY0NG2Ov76mIgZZigWXzzY7XVXH2QtfH1
	 x8KGc2pS1U1PRUpHUWYyhzSKqD5PI1im3Hi8m7vBLaJJJ9qEpljyUtR6RhIgyapNwR
	 oXmNiibgRDzuOY4ziojjhBQyINExkUL4KDOxUOXHRojL4EJXnDtsoUuvYOOl28n8tz
	 mMTtS5NPXLxyTGNL4ivwnqnaNnzfQN6jrGtKZFwmi4ccqOatSNqZrVoj3XSteIqnUe
	 tMfkIjYNQxDVw==
Message-ID: <89258309-c77a-4b82-a5a1-a4f08f4e119e@kernel.org>
Date: Tue, 11 Jun 2024 15:00:59 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 07/26] loop: fold loop_update_rotational into
 loop_reconfigure_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-8-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-8-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> This prepares for moving the rotational flag into the queue_limits and
> also fixes it for the case where the loop device is backed by a block
> device.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 06:20:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 06:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737869.1144358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGus3-0002ML-KR; Tue, 11 Jun 2024 06:20:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737869.1144358; Tue, 11 Jun 2024 06:20:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGus3-0002ME-HB; Tue, 11 Jun 2024 06:20:39 +0000
Received: by outflank-mailman (input) for mailman id 737869;
 Tue, 11 Jun 2024 06:20:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGus2-0002Ko-4n; Tue, 11 Jun 2024 06:20:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGus2-0002ob-1L; Tue, 11 Jun 2024 06:20:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGus1-0008JU-Ji; Tue, 11 Jun 2024 06:20:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGus1-0002kn-JC; Tue, 11 Jun 2024 06:20:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=anhOyE4TQSHOSWL/MIZa+ULOliGjN9I48XvalQ7YqA8=; b=gmVmSnLjQeeAwhdzmM97x0hZn5
	RlhLiDXBuZBXNFk2HYxGFpkOEdqxZfTUkXVpZ6JjebLlox9/A5TsuvKg5SogyZwe1KlrWZsMh1+Gw
	6hcnSlku8YiSQRSVSgaR6XHl298f8/4gD+NxRKc4pxnA+yCN4gERz0PZl9v0pSG8Nvas=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186307-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186307: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
X-Osstest-Versions-That:
    xen=0a5b2ca32c1506bbb0e636a2dfab7502a52fe136
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Jun 2024 06:20:37 +0000

flight 186307 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186307/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186305
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186305
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186305
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186305
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186305
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186305
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
baseline version:
 xen                  0a5b2ca32c1506bbb0e636a2dfab7502a52fe136

Last test of basis   186305  2024-06-10 12:07:11 Z    0 days
Testing same since   186307  2024-06-10 23:40:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0a5b2ca32c..ea1cb444c2  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 06:22:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 06:22:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737878.1144372 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGuu0-0003Jk-10; Tue, 11 Jun 2024 06:22:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737878.1144372; Tue, 11 Jun 2024 06:22:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGutz-0003Jd-US; Tue, 11 Jun 2024 06:22:39 +0000
Received: by outflank-mailman (input) for mailman id 737878;
 Tue, 11 Jun 2024 06:22:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGutz-0003JX-0g
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 06:22:39 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id feedc4bc-27ba-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 08:22:36 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id
 4fb4d7f45d1cf-57a1fe63947so842076a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 23:22:36 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c76740d6asm4203850a12.7.2024.06.10.23.22.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 23:22:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: feedc4bc-27ba-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718086956; x=1718691756; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=aI4GPFKJNyMFjrw09qqn9X7Y2hv0LBzHHit4fK5jXa0=;
        b=Rswky5aKH1t4fTPwkPBXKPZ1+jPJl4p5MZlU+qhYBz0ZTvU6gp55pw23YpAHKhpCFh
         Qyq6tu2Olr07QZnq7rW/tn+9q15O8ZwVbXTvVbHx9hkbm6V4xFXJ52Xv3kckqaCCaTN4
         59hr6/uUoXJ51W9UWmGaEpdIHuxUTIxvqhgJsScnoM++kWHrov7WgQiVTAwKDfk2lFRK
         rdRS1EryPo+3GK7HucdK38hOgc58gZpjN0ORUhtoDJTudheCxZSNf3DIcaKjQDLxscJz
         ZsJlAc0MZG7fYz7C7BjJpFkQidSeZa9I267eDKKtKA9Snq9POt6jaB8BRyp6e3N+efXC
         8qDQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718086956; x=1718691756;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aI4GPFKJNyMFjrw09qqn9X7Y2hv0LBzHHit4fK5jXa0=;
        b=jOtzXpm8Wuo8HZaWdh9Z05TtiRkVw8tOmXWnMdcw2zykhW5Ak+pphz9J3TtCpPG6m1
         QgK/SJmzt5YoFZsGY6buRI0yr1WpP1nq1/UiLXHoO9U2pQ+8yZG44zZoVqMZoXw1Topb
         ZxVKMldAbY74XhLZvLp6sZ2ILT7af9LzDcBEHsHrZHsz8zsj7NRqD7GTvPuRNrl4l2Wj
         KTmRcOZd9UO2bfh7odvtX5NJJZ4kv8UGV3zaNflq2kxixCsbVCTDtluLyLwyFsz8DaKM
         siTTiACCYTtCqvUiKZqDXYuwgCbSIpZEqzXqbrtUaPdx3l6DJx6ayL3Q/xDXsf1WyoVV
         oWlQ==
X-Gm-Message-State: AOJu0YzrCK2c8yFYdGhLu3KRhGS1zTHQsvw7tuETf7BJxU8tivniq+c5
	CKPbOb0b6kGFU9RI0LDLd9hkHGLAghoDvY0q/XL0/F11k+80Bv+PR9GvEs8IlQ==
X-Google-Smtp-Source: AGHT+IHVZnpm8b+Tpaw/kN9ouiaaicpDnRCZJJcKWNiyXafPx+zcktxqZPRO+5ob9g+yMPBMfETD+Q==
X-Received: by 2002:a50:d6d7:0:b0:57c:60e8:c519 with SMTP id 4fb4d7f45d1cf-57c60e8c5c4mr5160147a12.16.1718086956058;
        Mon, 10 Jun 2024 23:22:36 -0700 (PDT)
Message-ID: <f28875ae-1b87-4a83-b4b0-6f42046c03bf@suse.com>
Date: Tue, 11 Jun 2024 08:22:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] MAINTAINERS: alter EFI section
To: Marek Marczykowski <marmarek@invisiblethingslab.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel Smith <dpsmith@apertussolutions.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>
 <ZmeIgkFux7tbCZk4@mail-itl>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmeIgkFux7tbCZk4@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 11.06.2024 01:13, Marek Marczykowski wrote:
> On Mon, Jun 10, 2024 at 08:38:45AM +0200, Jan Beulich wrote:
>> To get past the recurring friction on the approach to take wrt
>> workarounds needed for various firmware flaws, I'm stepping down as the
>> maintainer of our code interfacing with EFI firmware. Two new
>> maintainers are being introduced in my place.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I'm not sure what is the proper tag for cases like this, but:
> Acked-by: Marek Marczykowski <marmarek@invisiblethingslab.com>

Exactly this, afaia. Thanks!

Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 06:41:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 06:41:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737885.1144382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvBu-0007Nv-JW; Tue, 11 Jun 2024 06:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737885.1144382; Tue, 11 Jun 2024 06:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvBu-0007No-Ge; Tue, 11 Jun 2024 06:41:10 +0000
Received: by outflank-mailman (input) for mailman id 737885;
 Tue, 11 Jun 2024 06:41:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGvBt-0007Nf-64
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 06:41:09 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 94467d27-27bd-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 08:41:06 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6265d48ec3so64050266b.0
 for <xen-devel@lists.xenproject.org>; Mon, 10 Jun 2024 23:41:06 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f3672d1efsm61063866b.224.2024.06.10.23.41.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 Jun 2024 23:41:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94467d27-27bd-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718088066; x=1718692866; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Nu3ax5KVPeqG8BM8VsCcDtpYZL+s3TZJfuanIM6yJKg=;
        b=E0pheYonal01B20RBPY67zmkmtcDgWDggO6WIJlBmwYDWzIqGLKSptDsMeSN97OUgv
         3vfXhqObkgrkhNazJhEVy9cA3etoBEab9uzA9iyAJQ9tt3nX5OqD5bAxXL7rK7J6Qd5A
         B31IudoCjLSPyUeSh47QszOhW5Zwf0M+JyTjI5OyrecjIRTCqZZOpMfvEvEHcoHEtZvS
         KrYWnFXdHjzTg1xqdzSVLdV+YecZZgRV1/E2w48o5EuDGqZP/OnWuk8/nDFPWW0+Ij8+
         h/E8cj3RJI3SfGac6pjrpD6nqc0jjjPpJRdUsAZQkHL2jlmsEAduQ8jxbdVu14AEJEt9
         XBHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718088066; x=1718692866;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Nu3ax5KVPeqG8BM8VsCcDtpYZL+s3TZJfuanIM6yJKg=;
        b=YNHblUMCTChHWWo5zVjRN4ix95o//orWEj6ukxKMjUvkarX+n+LRC43u3nKtgR5m8h
         SwQvGm+D5gUgTTAG01fuy8yG8nqpKpK26FAB+ByvRv2IQHUV5tDU6+WA6rK5QfNg5zKC
         onVVU7k+CbjVJFDduReJngNXwNyF3ARbihshz/Rc26XEp12UQdpEDwfYdB6V5FsnzLoW
         OGpc3ofQUe2hfFZ8PJMJ/LEl8fVxrYEBH9gBRWoRy54Yz8FXVSbBBsaH1C4BeWDCtzOL
         4nWkWltPp7PRB38qxwaeHuu6bZve23ihHx5wqqoe2wXVDeYlJHvZXeRuOfIy5oOdLNKu
         YfGw==
X-Forwarded-Encrypted: i=1; AJvYcCXK+ViDsEgvYGA2IV7HdhS8hKuuMJmmcq/GUbzD3xU0j/r61lKQlI+BAV6+E50/DxeKAsjgzjtgDRNrCGthTBlo25UsfFHFSeRnt4vhps8=
X-Gm-Message-State: AOJu0YwZqVIMTm3f/hkWghBQyaEVZdNTj2Axjq2/sQmkkPUKrWv5ixlf
	VwmkJedt/AUoBOwLai0XeJapKxulyZG9HMrCKa6WTequXbS3K9KXpXbPHSydYQ==
X-Google-Smtp-Source: AGHT+IHrH2tqMk1qnEjMwCF/74ZahiYTmfudGevPccfGDqcP01Rl6TQ8pYazhXeDfv84w6hyEIycNw==
X-Received: by 2002:a17:906:4a4f:b0:a6f:1045:d5dc with SMTP id a640c23a62f3a-a6f1045d63dmr445833366b.6.1718088065655;
        Mon, 10 Jun 2024 23:41:05 -0700 (PDT)
Message-ID: <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com>
Date: Tue, 11 Jun 2024 08:41:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a
 structured format
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <cover.1718038855.git.w1benny@gmail.com>
 <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 19:10, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> Encapsulate the altp2m options within a struct. This change is preparatory
> and lays the groundwork for introducing an additional parameter in a
> subsequent commit.
> 
> Signed-off-by: Petr Beneš <w1benny@gmail.com>
> Acked-by: Julien Grall <jgrall@amazon.com> # arm
> Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor

Looks like you lost Christian's ack for ...

> ---
>  tools/libs/light/libxl_create.c     | 6 +++---
>  tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-

... the adjustment of this file?

Jan

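For context, the refactoring under discussion groups the altp2m options into a dedicated struct so that a later commit can add a parameter without churning every caller. A minimal sketch of the idea follows; the struct layout and field names are illustrative, not the actual Xen definitions:

```c
#include <stdint.h>

/* Hypothetical "before": the altp2m configuration passed as a bare field. */
struct createdomain_old {
    uint32_t altp2m_opts;
};

/* "After": options grouped in a struct, leaving room for the additional
 * parameter the follow-up commit introduces without touching callers. */
struct altp2m_options {
    uint32_t mode;   /* e.g. disabled / mixed / external / limited */
    /* further parameters can be added here */
};

struct createdomain_new {
    struct altp2m_options altp2m;
};

/* Callers now go through the nested struct. */
static uint32_t get_altp2m_mode(const struct createdomain_new *cfg)
{
    return cfg->altp2m.mode;
}
```

The point of the indirection is that tools (libxl, the ocaml bindings) and the hypervisor all reference one named aggregate, so extending it is a single-site change.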

From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:09:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:09:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737895.1144392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvdM-0003WU-M1; Tue, 11 Jun 2024 07:09:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737895.1144392; Tue, 11 Jun 2024 07:09:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvdM-0003WN-JQ; Tue, 11 Jun 2024 07:09:32 +0000
Received: by outflank-mailman (input) for mailman id 737895;
 Tue, 11 Jun 2024 07:09:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X7bi=NN=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1sGvdK-0003WH-Ip
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:09:30 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05on20610.outbound.protection.outlook.com
 [2a01:111:f400:7e1a::610])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 89f5f709-27c1-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:09:27 +0200 (CEST)
Received: from AM0PR01CA0147.eurprd01.prod.exchangelabs.com
 (2603:10a6:208:aa::16) by GV2PR08MB9925.eurprd08.prod.outlook.com
 (2603:10a6:150:c3::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Tue, 11 Jun
 2024 07:09:17 +0000
Received: from AM4PEPF00027A68.eurprd04.prod.outlook.com
 (2603:10a6:208:aa:cafe::86) by AM0PR01CA0147.outlook.office365.com
 (2603:10a6:208:aa::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.17 via Frontend
 Transport; Tue, 11 Jun 2024 07:09:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM4PEPF00027A68.mail.protection.outlook.com (10.167.16.85) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Tue, 11 Jun 2024 07:09:20 +0000
Received: ("Tessian outbound 0445a89d5280:v332");
 Tue, 11 Jun 2024 07:09:20 +0000
Received: from cc4cd251fb73.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 95E8E8A4-8798-427F-B76A-BD33C8437888.1; 
 Tue, 11 Jun 2024 07:09:14 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id cc4cd251fb73.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 11 Jun 2024 07:09:14 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com (2603:10a6:10:25a::24)
 by FRZPR08MB11192.eurprd08.prod.outlook.com (2603:10a6:d10:13c::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.37; Tue, 11 Jun
 2024 07:09:12 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a]) by DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a%5]) with mapi id 15.20.7633.036; Tue, 11 Jun 2024
 07:09:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89f5f709-27c1-11ef-b4bb-af5377834399
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=O9hkJGhDPqShSDuVJSwwfbgFQ9kuHxqLWp0evB1QWjQVz9Adj8iLnNNR1xRdqtEUAzaXDDWMGfSVcdmVSXzJUdTm/hqSXIhzegyN452suxul6Ji6qyyWvHVmxoGGNTb3nLUntz/1EP6oovAp2Fs8ECAYWlaXfRCpL6mdiqwSg8cOo782HCHRkf7n3to1C2fQ1gOgqZkrjmM3nxL9phdywElBem/nO8MoiVASX4XHpsqawEz9Rz+kZ0tKShW98fuflavF9H65uV2F9ZPi+NiIUaz7UwuFnYgny0GVSeaMm9S7r+8I4gfcGeO2nUcl0Y/rk5AC2I9/c6oqB2jFkBDbgw==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xOIwESuqNCQdxuNEAdJsvkDTLrPgkfSnOuwTA5Bkni4=;
 b=HBoLO0KmArR6TlAccCSuH5DOkf1zWi4ZmtDWS9fRcMWU7jn3N8AN8mEXJLH+bD46ae9sWvAEwynPHgq7gK9qsk5HuNyNNbrhgjIfGGB7ggPE9lpsgATOrANlAi1Y5aVNAFiV9rlVjnHDpjb81+zAmv+bBcKJQPXNteTvLj+Fu8OfXD8irJPAMJwxLwJUzbz3QW5E0X4Ue0xcZ9iXrFca6Zp7DCIThHcxi5Cw0VYyS0Qh1IkbSy29P1pEVCq1Sj9kBD3P4VN2SQefADi5FYz6tEINYa+a/4SYTWsh/LoiTsCJPwbV+owfEsKUb+fSGmpfxdyJAF+V3ggCbFP0Z5rxag==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xOIwESuqNCQdxuNEAdJsvkDTLrPgkfSnOuwTA5Bkni4=;
 b=To6t2fArsygeH2OAGLh7E4b93p3ufcMmragkVdroNvFAggRr7U0UlRAHgG3GexAoNrKsy+OGfBqKLPFTrmvvSYIgKrl41XkocTadD2IGmweb3H6PrRqNB1hYYP747xR20EYdeyh9QWGdOn2WpJVF7EvZ7h33FlIV0hfuC/BEvsw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: edc5ccbdb2b44d33
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=etZylHpzGtoeM9+yxzd+tVWj0c75Sn5+M7AtCPqWz9PksBZ9hC3bFaVIBfsFnczIC7QAegWmzy3Us8p10AOblGvwRljzFKB+EVHhl057n8kY6ykyGBKWzYssp3Qtt0QXspsqnOOovIWxUPaBem2pIW64CrR5lQWfbgPriXR3t1nkTd+YZnPtOTiAUKNYMAK9NjINkr2Ku2IeBm2tM0cs7xa3425HHBLNuwqUkre4MQ243ifXo9WUqKRPr8FQV1odHboLnPqKFFDPU0GIKy2mZu2ZF9rADYAlJwTDnXHSWmi5R4BMqcWEnFs3t2NgEb9crcnW/ohp+ZZGcG+iVJV3ng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xOIwESuqNCQdxuNEAdJsvkDTLrPgkfSnOuwTA5Bkni4=;
 b=oNXOyFvMdzevZvTxYHoPS9rGObsPcF/Myn5FjSNFHfqjMQmlgpCYPDP4IKA0YzyJsfWn+3Mn/Q54wgKDc4XE0Y9Nlg4sN88n0Ghsa3L+BpjrLezoVtc3iR5/wakcGzKkOHs2nsQEXKZELoEVSHdAPGvEfAirPQLynx3MPSyOJF17zV18iO/Axl8c0RYdFCQsbHc6IWkn6DJRs11cY+WCIzPLwDQd/bq4a7uUqw7hDBfX/WxMP5OiFIjJlFDf7dt/muu73uUYEdRRmRYJrqWvAp3yKXmzw9VeIBvYExxB4W4SVY7ywtw3jWvobyOyEWA40B8112LYddX1+u3V1diOjw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xOIwESuqNCQdxuNEAdJsvkDTLrPgkfSnOuwTA5Bkni4=;
 b=To6t2fArsygeH2OAGLh7E4b93p3ufcMmragkVdroNvFAggRr7U0UlRAHgG3GexAoNrKsy+OGfBqKLPFTrmvvSYIgKrl41XkocTadD2IGmweb3H6PrRqNB1hYYP747xR20EYdeyh9QWGdOn2WpJVF7EvZ7h33FlIV0hfuC/BEvsw=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>, Oleksii <oleksii.kurochko@gmail.com>
CC: Jens Wiklander <jens.wiklander@linaro.org>, Xen-devel
	<xen-devel@lists.xenproject.org>, "patches@linaro.org" <patches@linaro.org>,
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Michal Orzel <michal.orzel@amd.com>
Subject: Re: [XEN PATCH v6 0/7] FF-A notifications
Thread-Topic: [XEN PATCH v6 0/7] FF-A notifications
Thread-Index: AQHauwL3zf/sfUoleE2hXc1GLXTtIrHBJtUAgABPbgCAALAvgA==
Date: Tue, 11 Jun 2024 07:09:08 +0000
Message-ID: <8558AEB5-2F38-4F8C-A017-794E32045068@arm.com>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
 <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com>
 <615f1766-253d-43dc-b0f0-f8e2eb7360b5@xen.org>
In-Reply-To: <615f1766-253d-43dc-b0f0-f8e2eb7360b5@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	DB9PR08MB6588:EE_|FRZPR08MB11192:EE_|AM4PEPF00027A68:EE_|GV2PR08MB9925:EE_
X-MS-Office365-Filtering-Correlation-Id: ae00cb4d-15bf-45a7-9763-08dc89e56a76
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230031|376005|366007|1800799015|38070700009;
X-Microsoft-Antispam-Message-Info-Original:
 =?us-ascii?Q?IpOxQOLeamimOT5f84QOBmz9WsLCkMA9nJLJdOpHomKwChdyL2ZzLNvXyn//?=
 =?us-ascii?Q?ECTKfm3mSRy287/jlXJA9d1hYQupOhM5S7JJGNNpYCM/kgGRxeoHPq0tr9tt?=
 =?us-ascii?Q?ymSbjNxJ0UR7tchHLdd1hKq+l1xGlxgDREDHRoxzXmam5wgnWiq0Yyb/KPKt?=
 =?us-ascii?Q?hUcuWapqauBWIfDlQiClkCUfSAqSg48o3OGEdgZVwr/l9rQv8j4Ei05A4jZe?=
 =?us-ascii?Q?VJZIb2jPbCLyoi9+8/XSORUgiE9K5orfKcCsfhkdYDiT6ohpekAOLDLO1c7V?=
 =?us-ascii?Q?LAc7ghu8k2J0HBD4bUcTpr3pAeIm95c0AYSpnlIKhf/OCLYVG1qCDZaMSxn8?=
 =?us-ascii?Q?6Yz0U2Abl8lm8SmsuIJx0eKti4BCptG0NvbuF99vqJAqYsgcGyRu1+tT8CWs?=
 =?us-ascii?Q?R9vLONpK4HjFzFOlZdsEk6q/gB051nV3KZY0amwTTvt9q4BuJjoFPSZao/n9?=
 =?us-ascii?Q?ccaiq1gl8IyN6kd7+xvZqJwilJNINQL/edpDGE1uPeml+KIGanjGemTSqD2b?=
 =?us-ascii?Q?Q4FlQPcGTvvL90eIF7Bu5U5gdGCOx5+T1h6qldhkuWuIIEJxcrMmJgxOnQkK?=
 =?us-ascii?Q?CsN2Fu0gEJVJniLe++Jt/mm9PR7KyEAYR9g/wovfQG3Ac1v8EuZT1FHP2puj?=
 =?us-ascii?Q?MI8rNKkfAp4xOr0cBgoJEtOG7/gipOHBUnGspDbi0ijPdElaygg4VzK6CPkg?=
 =?us-ascii?Q?swTlIf9cOGKph3aeK/sZILBvNIxNXVrwfjDsHVwXNgHzki1fV/v3/M/WvmLs?=
 =?us-ascii?Q?C819iFjT1UpFxBVlL0t5MMqkgC0XCV7TC4yREBjbD6vHJNIfa/3l5ar8KYLa?=
 =?us-ascii?Q?L0B1QP1jcx8ZyqTeIAdzSLyl8Q/fj4NsbJGfA3dKj2KMC94JAziPEmABq4nR?=
 =?us-ascii?Q?bw9bN+zMR6wOqfo8Q8yoUOkZYMAyMmXm8gRTFow9WV4jv1FYtJi8LSHEmIz+?=
 =?us-ascii?Q?Lcxw/0K/zb8nCK5UdGPeLIRZprnPniA/KF0DkLCnmEvT11D8pvO/mv1qNpg+?=
 =?us-ascii?Q?P9viNl6+u3hLLsPdElY87QnKvejMgOEruXDRyH+Fd6AT1dbJCJLyyRkoqWud?=
 =?us-ascii?Q?9aeqay8XuBGFsbV2KDS8onRph2zRFLZmwANdzW+ZjEdb6LbxfTs+EcQC4FiA?=
 =?us-ascii?Q?DGYZh9E2qhOGuN+8fbJsmsDSunIX5q2X8RNIRAxWO7Av0R2r9S922r5i58m+?=
 =?us-ascii?Q?BWXUrJvdvrzqIos/mcTJhhGLfFSvFaO/+mawaSXecSv7z9xxMQGQjsdxB4S5?=
 =?us-ascii?Q?gqm6P2GuFqk6yCv8uO0NmHdLLqpDOZnLRX0d3k7CDpobhvjZ1ANbfcJ0rBn5?=
 =?us-ascii?Q?r+x9qelk6FkooNcAYNGwDtUs?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR08MB6588.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(376005)(366007)(1800799015)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4940ADFA46C775449E47820770F079DF@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: FRZPR08MB11192
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM4PEPF00027A68.eurprd04.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	35884d9d-d1de-48f6-6313-08dc89e562fa
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|35042699013|36860700004|82310400017|376005|1800799015;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?mNlhDE79dkinkMapOr0YW9LlFF06qDDfaIqg5sDuhlBEmf4r33x5nv3bdxtc?=
 =?us-ascii?Q?3yj2rnlRVIwwaLQI43nwQq77mRyknGJ/FqLSgDkQGmPKaoq/FVGX1uYZ9k/P?=
 =?us-ascii?Q?g2HXPN7KeoAjVye93S7hr6GZrD21rKCoaz0mkIkC7tkPI70ltP4DvuZy/EsK?=
 =?us-ascii?Q?6dWh1o7ivkm+RJe4NzuKgmaFvNwdtvEW3YlExbfO55fsl06KoNkFA4YrzECQ?=
 =?us-ascii?Q?8oyJVObLch1W5ag+4Q6iH3tROxDdSpUSJJii1YPsN/BKN8qVs8NMDPWIRMmu?=
 =?us-ascii?Q?FnHdLUOhaG2HE5ED2fMeaLswJUqI0tK4uO2gVUb4EXXXjaRMR7TpJpAvQIfc?=
 =?us-ascii?Q?voS6vzbI7dELBHqpRE0G/1UAEuplKSJ6bNTxIwUkjk6ad6g0K4sRSLeQMoSs?=
 =?us-ascii?Q?VBPEFy7YoYDrmJGwzsvNIRusftJOmDMLaJntCnSRB1pd7m4uWNvOVEqwDNd2?=
 =?us-ascii?Q?w9ZHgB1aZ8lpyr96Q77rUwBQtQVdiacSCjozt8ILP2GnDQM5SAzret91Eewc?=
 =?us-ascii?Q?QNZrJ7De/8cUGuR5Ml6QqtmE9H6HqhGqUmy4aV1d+Fr3IVjAzZdUtX6yo5vP?=
 =?us-ascii?Q?WK3+MJWXGRT8RVr+JPrDK3Kr7h3hD/1fY+YbyAJfPO9bu/zj1ntt/DuDUfw2?=
 =?us-ascii?Q?SNPSvteLL1N4WUfJgnXI7jXyo0V2l0XODRxeQYsQ2gLgi9o0ZfDtRBut2Wzi?=
 =?us-ascii?Q?NJSaB6UPmCiB0zdf3LOxGcPt+s1YsTsL/AfAxcbApn+z9G5XCLCggvf039DU?=
 =?us-ascii?Q?72Qn9imljpbi6XU9cO8DyWrjWmnlaTsvvmVF8pOrs8WyDkf1cAySutgsH3XH?=
 =?us-ascii?Q?BXf0ZtAryXCglWIWYseUFtwrlKWdkRJjkmrnfMoIMSk8OiLNheHJ9eQxTijd?=
 =?us-ascii?Q?VQFSglLgWg8HGWz+G9XRy1TQNHnkDrt/5wfhEqnem3VIxIPm5zIc6M7fu5hX?=
 =?us-ascii?Q?atZd9rsnzRO1zfmwcgEgAQxTII0PZ5hgg7PJ5XyA87RlDrAoVcAn0O1X8IDR?=
 =?us-ascii?Q?2lzQYyttMMNw3CaV4P3WLf4pOsQZI5yqXP7YIkU8szlktFCyh88p7h3oEpak?=
 =?us-ascii?Q?fkdErbu2XWfSVZaWWoBGqKfH9jUkctCWMO9PPUkhyfr4jj8zFIQ7OIX0kTNk?=
 =?us-ascii?Q?IPSQtCi0BBSNTR18+WqiISYJ6qjj2OLcfkzwG8HCc0Q7S9Rv/ZRZvvHfjzyx?=
 =?us-ascii?Q?k3f4/dF7gUt/IPNAjO1wNr3//XR/dVMZGf3V4PQrBzLPSVbUB6Rac3pD3zov?=
 =?us-ascii?Q?kcgSNJAg9qWWDuhGGFSu/zQrb9+MNg/zZEqA66opXZdNi8Y8wwXiE9ZxUqUw?=
 =?us-ascii?Q?4sUZzt+pjj+gIRsaz16QR8kx/RctUxk2qdK2c4fgNo5H0pW1xw1wEsCGg1A/?=
 =?us-ascii?Q?f1ZzUr4=3D?=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230031)(35042699013)(36860700004)(82310400017)(376005)(1800799015);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2024 07:09:20.5972
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ae00cb4d-15bf-45a7-9763-08dc89e56a76
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM4PEPF00027A68.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: GV2PR08MB9925

Hi Julien and Oleksii,

@Oleksii: Could we consider having this series merged for the next release?

It is a feature that is in tech-preview at the moment and as such should not
have any consequences on existing systems unless it is activated explicitly
in the Xen configuration.

There are some changes in the arm common code introducing support for
registering SGI interrupt handlers from drivers. As no driver apart from
FF-A tries to register such handlers, the risk is minimal for existing
systems.
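The arm common code change mentioned above (letting drivers register handlers for SGIs that are not statically assigned) can be pictured as a small registry keyed by SGI ID. This is a simplified model with illustrative names (NR_SGIS, register_sgi_handler), not the actual Xen code:

```c
#include <stddef.h>

#define GIC_SGI_MAX 8    /* illustrative: IDs below this stay statically assigned */
#define NR_SGIS    16    /* SGIs occupy interrupt IDs 0-15 on the GIC */

typedef void (*sgi_handler_t)(unsigned int sgi);

static sgi_handler_t sgi_handlers[NR_SGIS];

static int ffa_notif_count;

/* Stand-in for the FF-A driver's notification handler. */
static void ffa_notif_handler(unsigned int sgi)
{
    (void)sgi;
    ffa_notif_count++;
}

/* Register a handler for a dynamically assigned SGI.  Fails for IDs in
 * the statically assigned range, out-of-range IDs, or IDs already taken. */
static int register_sgi_handler(unsigned int sgi, sgi_handler_t fn)
{
    if (sgi < GIC_SGI_MAX || sgi >= NR_SGIS || sgi_handlers[sgi])
        return -1;
    sgi_handlers[sgi] = fn;
    return 0;
}

/* Dispatch: statically assigned SGIs keep their existing path; behaviour
 * only changes once a driver (here, FF-A) has registered a handler. */
static int dispatch_sgi(unsigned int sgi)
{
    if (sgi >= GIC_SGI_MAX && sgi < NR_SGIS && sgi_handlers[sgi]) {
        sgi_handlers[sgi](sgi);
        return 1;
    }
    return 0; /* fall back to the existing static handling */
}
```

This structure is why the risk assessment above holds: with no registered handler, dispatch falls through to the pre-existing path unchanged.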


> On 10 Jun 2024, at 22:38, Julien Grall <julien@xen.org> wrote:
> 
> Hi Bertrand,
> 
> On 10/06/2024 16:54, Bertrand Marquis wrote:
>> Hi Jens,
>>> On 10 Jun 2024, at 08:53, Jens Wiklander <jens.wiklander@linaro.org> wrote:
>>> 
>>> Hi,
>>> 
>>> This patch set adds support for FF-A notifications. We only support
>>> global notifications; per-vCPU notifications remain unsupported.
>>> 
>>> The first three patches are further cleanup and can be merged before the
>>> rest if desired.
>>> 
>>> A physical SGI is used to make Xen aware of pending FF-A notifications. The
>>> physical SGI is selected by the SPMC in the secure world. Since it must not
>>> already be used by Xen, the SPMC is in practice forced to donate one of the
>>> secure SGIs, but that's normally not a problem. The SGI handling in Xen is
>>> updated to support registration of handlers for SGIs that aren't statically
>>> assigned, that is, SGI IDs above GIC_SGI_MAX.
>>> 
>>> The patch "xen/arm: add and call init_tee_secondary()" provides a hook for
>>> registering the needed per-cpu interrupt handler in "xen/arm: ffa: support
>>> notification".
>>> 
>>> The patch "xen/arm: add and call tee_free_domain_ctx()" provides a hook for
>>> later freeing of the TEE context. This hook is used in "xen/arm: ffa:
>>> support notification" and avoids the problem of the TEE context being freed
>>> while we need to access it when handling a Schedule Receiver interrupt. It
>>> was suggested as an alternative in [1] that the TEE context could be freed
>>> from complete_domain_destroy().
>>> 
>>> [1] https://lore.kernel.org/all/CAHUa44H4YpoxYT7e6WNH5XJFpitZQjqP9Ng4SmTy4eWhyN+F+w@mail.gmail.com/
>>> 
>>> Thanks,
>>> Jens
>> All patches are now reviewed and/or acked so I think they can get in for
>> the release.
> 
> This would need a release-ack from Oleksii (I can't seem to find one already).

You are right, I do not know why I thought we already had one.

> 
> As we discussed last week, I am fine with the idea to merge the FFA patches
> as the feature is tech-preview. But there are some changes in the arm
> generic code. Do you (or Jens) have an assessment of the risk of the changes?

Agreed.

In my view, the changes alter the behaviour of some internal functions when
an interrupt handler is registered for an SGI, but they should not have any
impact on other kinds of interrupts.
As nothing before the FF-A driver registered such handlers, the risk of the
changes having any impact on existing configurations not activating FF-A is
fairly low.

@Jens: do you agree with my analysis?

I added a note for Oleksii at the beginning of this mail.

Cheers

Bertrand

> 
> Cheers,
> 
> -- 
> Julien Grall




From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:17:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:17:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737901.1144402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvlF-000521-DK; Tue, 11 Jun 2024 07:17:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737901.1144402; Tue, 11 Jun 2024 07:17:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvlF-00051u-AX; Tue, 11 Jun 2024 07:17:41 +0000
Received: by outflank-mailman (input) for mailman id 737901;
 Tue, 11 Jun 2024 07:17:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3L0u=NN=linaro.org=jens.wiklander@srs-se1.protection.inumbo.net>)
 id 1sGvlE-00051o-Ah
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:17:40 +0000
Received: from mail-oo1-xc36.google.com (mail-oo1-xc36.google.com
 [2607:f8b0:4864:20::c36])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id af0d7da3-27c2-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 09:17:39 +0200 (CEST)
Received: by mail-oo1-xc36.google.com with SMTP id
 006d021491bc7-5ba18126a3bso1995303eaf.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 00:17:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af0d7da3-27c2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718090258; x=1718695058; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UyoYBBNLSSP1O4lL3mZy+Sbfi2OWvv3v8FsDQhqIkBY=;
        b=WDnCk1RX2YsHKvBdxTfa1ma0qmqSKpuwcVSylSfq8k4tD8/4GI5TvlbJ4AEnPONkSa
         FxcQnndksaeO7iB1+7xwOPs96v31p8jTl47MCR/uLaiyWVamN+CF4v0QvXXpvltxy7jT
         mHlcbCF9dSdyzc7rhcIWM8sZ6yuBnkDInYqckzexQDET5DgYIQ5Jz6+B98wb8MgsZZMK
         TSqgS+zpUy2n+/XC+dT2EiIi4RBFhjFNw/doh2xqYb4U8RRJE36++qVzlQifypqiu37O
         NjBRRMursi84dghnl1z7KlxBjHpPYGLSqC9GdliwOL3GCjlRL5DkCjr3sTxVudLgocWp
         lmWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718090258; x=1718695058;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=UyoYBBNLSSP1O4lL3mZy+Sbfi2OWvv3v8FsDQhqIkBY=;
        b=lkeRQzlT0zOLgNfKo+G0oAjuPyVuLytRyuAYm23556fsq1sm4nSVMR69HGGQ17kLAk
         YP/5Vo1CzLsE6jhHxuJ0bPB8ia/MDX/Tk64HigUc7NuVJwWLjwAQdKHHYo1HFukPNQE/
         N5y8K04Xtfau6xXlPWmD6uu7rRKdUaFKYB/hJ8wW4P9y341l9h2RnRaRe+V0JtNBwmFM
         SL74UfZbloR2gA0zbIea9p4BOdMt7CbdSaZz3uJPyqv1liRe4TDNRJpoiznw/x2Wf/fM
         3GrAi7IUNZfL9Jxi9MfCiCc0zFeH3u28n1kh7R8uJ5pGD49RqhYhemKifVRNDkVl/Ct/
         RpCg==
X-Forwarded-Encrypted: i=1; AJvYcCWMBZAV66p1b80HW5fA9KIGG2lof8IXTEMUwzdSsxFMtYy/tCV5QCIlihqv8M81FRxm6O3ZnKQjcgZilD58QRLvsdzwie39j8DHXf5QP8Y=
X-Gm-Message-State: AOJu0YzmTqQPf9SiAUNMICd/ywjgsvq3OAjADDAgbnWPN+rH5DK6cDRy
	XvzKV6sFru7B4oWDV3GUq0qsWaGN6+w85WYG8tX1n7fZznnTvKmSXAgCybP5mOmjZAPonqSnvqn
	/P8j31gy7AZI1r5p9ju8JvB67yJveApC4+UMwVg==
X-Google-Smtp-Source: AGHT+IElWgHm1YsdjYTbTrK1XC4MMT1WF+Bag115PY4VvffFAmulpL3C5j87T3O+tieGy998GKY1OY46P/EJpZxxwsE=
X-Received: by 2002:a05:6820:612:b0:5bb:1ae0:17de with SMTP id
 006d021491bc7-5bb1f7b92b8mr884702eaf.1.1718090257860; Tue, 11 Jun 2024
 00:17:37 -0700 (PDT)
MIME-Version: 1.0
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
 <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com> <615f1766-253d-43dc-b0f0-f8e2eb7360b5@xen.org>
 <8558AEB5-2F38-4F8C-A017-794E32045068@arm.com>
In-Reply-To: <8558AEB5-2F38-4F8C-A017-794E32045068@arm.com>
From: Jens Wiklander <jens.wiklander@linaro.org>
Date: Tue, 11 Jun 2024 09:17:26 +0200
Message-ID: <CAHUa44GJYs3mqXG=4T-YyePXK+71FD0qtTmB-_G00FmackZYWA@mail.gmail.com>
Subject: Re: [XEN PATCH v6 0/7] FF-A notifications
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Julien Grall <julien@xen.org>, Oleksii <oleksii.kurochko@gmail.com>, 
	Xen-devel <xen-devel@lists.xenproject.org>, "patches@linaro.org" <patches@linaro.org>, 
	Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Michal Orzel <michal.orzel@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi all,

On Tue, Jun 11, 2024 at 9:09 AM Bertrand Marquis
<Bertrand.Marquis@arm.com> wrote:
>
> Hi Julien and Oleksii,
>
> @Oleksii: Could we consider having this series merged for the next release?
>
> It is a feature that is in tech-preview at the moment and as such should not
> have any consequences on existing systems unless it is activated explicitly
> in the Xen configuration.
>
> There are some changes in the arm common code introducing support for
> registering SGI interrupt handlers from drivers. As no driver apart from
> FF-A tries to register such handlers, the risk is minimal for existing
> systems.
>
>
> > On 10 Jun 2024, at 22:38, Julien Grall <julien@xen.org> wrote:
> >
> > Hi Bertrand,
> >
> > On 10/06/2024 16:54, Bertrand Marquis wrote:
> >> Hi Jens,
> >>> On 10 Jun 2024, at 08:53, Jens Wiklander <jens.wiklander@linaro.org> wrote:
> >>>
> >>> Hi,
> >>>
> >>> This patch set adds support for FF-A notifications. We only support
> >>> global notifications; per-vCPU notifications remain unsupported.
> >>>
> >>> The first three patches are further cleanup and can be merged before the
> >>> rest if desired.
> >>>
> >>> A physical SGI is used to make Xen aware of pending FF-A notifications. The
> >>> physical SGI is selected by the SPMC in the secure world. Since it must not
> >>> already be used by Xen, the SPMC is in practice forced to donate one of the
> >>> secure SGIs, but that's normally not a problem. The SGI handling in Xen is
> >>> updated to support registration of handlers for SGIs that aren't statically
> >>> assigned, that is, SGI IDs above GIC_SGI_MAX.
> >>>
> >>> The patch "xen/arm: add and call init_tee_secondary()" provides a hook for
> >>> registering the needed per-cpu interrupt handler in "xen/arm: ffa: support
> >>> notification".
> >>>
> >>> The patch "xen/arm: add and call tee_free_domain_ctx()" provides a hook for
> >>> later freeing of the TEE context. This hook is used in "xen/arm: ffa:
> >>> support notification" and avoids the problem of the TEE context being freed
> >>> while we need to access it when handling a Schedule Receiver interrupt. It
> >>> was suggested as an alternative in [1] that the TEE context could be freed
> >>> from complete_domain_destroy().
> >>>
> >>> [1] https://lore.kernel.org/all/CAHUa44H4YpoxYT7e6WNH5XJFpitZQjqP9Ng4SmTy4eWhyN+F+w@mail.gmail.com/
> >>>
> >>> Thanks,
> >>> Jens
> >> All patches are now reviewed and/or acked so I think they can get in for
> >> the release.
> >
> > This would need a release-ack from Oleksii (I can't seem to find one already).
>
> You are right, I do not know why I thought we already had one.
>
> >
> > As we discussed last week, I am fine with the idea to merge the FFA patches
> > as the feature is tech-preview. But there are some changes in the arm
> > generic code. Do you (or Jens) have an assessment of the risk of the changes?
>
> Agreed.
>
> In my view, the changes alter the behaviour of some internal functions when
> an interrupt handler is registered for an SGI, but they should not have any
> impact on other kinds of interrupts.
> As nothing before the FF-A driver registered such handlers, the risk of the
> changes having any impact on existing configurations not activating FF-A is
> fairly low.
>
> @Jens: do you agree with my analysis?

Yes, I agree.

Cheers,
Jens

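The two hooks discussed in the quoted cover letter, init_tee_secondary() and tee_free_domain_ctx(), can be pictured as entries in a mediator-style ops table. The sketch below is a simplified model; the struct layout and the ffa_* helpers are assumptions for illustration, not the actual Xen interfaces:

```c
#include <stdlib.h>

/* Per-domain TEE context and mediator hook table.  The hook names follow
 * the patch titles; the signatures are guesses, not the real Xen ones. */
struct tee_ctx {
    int schedule_receiver_pending;
};

struct tee_ops {
    void (*init_secondary)(void);                  /* per-CPU setup */
    void (*free_domain_ctx)(struct tee_ctx **ctx); /* late ctx teardown */
};

static int sgi_handler_installed;

static void ffa_init_secondary(void)
{
    /* Would register the notification SGI handler on this CPU. */
    sgi_handler_installed = 1;
}

static void ffa_free_domain_ctx(struct tee_ctx **ctx)
{
    /* Freed late (e.g. from complete_domain_destroy()), so a Schedule
     * Receiver interrupt can never race with a stale context pointer. */
    free(*ctx);
    *ctx = NULL;
}

static const struct tee_ops ffa_ops = {
    .init_secondary  = ffa_init_secondary,
    .free_domain_ctx = ffa_free_domain_ctx,
};
```

Deferring the context free until the dedicated hook runs is what closes the use-after-free window when a Schedule Receiver interrupt arrives during domain destruction.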

From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:21:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737912.1144411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvod-0006pR-VY; Tue, 11 Jun 2024 07:21:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737912.1144411; Tue, 11 Jun 2024 07:21:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvod-0006pK-Sq; Tue, 11 Jun 2024 07:21:11 +0000
Received: by outflank-mailman (input) for mailman id 737912;
 Tue, 11 Jun 2024 07:21:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGvoc-0006on-Rm
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:21:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28d5b08b-27c3-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:21:03 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 5EAFE60C76;
 Tue, 11 Jun 2024 07:21:02 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9B857C2BD10;
 Tue, 11 Jun 2024 07:20:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28d5b08b-27c3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718090461;
	bh=SxGiXC2eXjeuicvqjgmxBbbuRkQDAALVXqGltltCvpM=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=J9t/JghW0gR/ETwdzjpl7VD7KnwAlMW8rRhwZr9kxWhQ6tl8Phg9WjO76NiKxxgu8
	 fLxeqF9M5t6BVVC5kK/sKL3zR1K2qLPoyVha6i0EJluUzp1kNJ5g2TNKh8g+EJPN33
	 hrljcXVnUuN8Lr5KlIujiAJZzEJDuCsFcPLl3jPutPmhKf9R0GeNiPNem29bg2OBO7
	 a1cFWzw0qNt9DSnc6yLqLEresNea8LBs8q7/++QEaUosyu+AjkSi/+C9VElPN9n7PP
	 XlPsIxsXYvBqYod+WVUBRRvW1FiyVSTv+Mlz8E7DodnwczmTJdbU3tnt32fP5VZhAs
	 3tDYxoJTsEL7w==
Message-ID: <6bf90562-0ff9-46b6-8a58-7381332e3beb@kernel.org>
Date: Tue, 11 Jun 2024 16:20:54 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Alasdair Kergon <agk@redhat.com>,
 Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka <mpatocka@redhat.com>,
 Song Liu <song@kernel.org>, Yu Kuai <yukuai3@huawei.com>,
 Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-3-hch@lst.de>
 <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
 <20240611055239.GA3141@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611055239.GA3141@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:52 PM, Christoph Hellwig wrote:
> On Tue, Jun 11, 2024 at 02:51:24PM +0900, Damien Le Moal wrote:
>>> -	if (lim->zoned)
>>> +	if (sdkp->device->type == TYPE_ZBC)
>>
>> Nit: use sd_is_zoned() here ?
> 
> Yes.
> 
>>> -	if (!sd_is_zoned(sdkp))
>>> +	if (!sd_is_zoned(sdkp)) {
>>> +		lim->zoned = false;
>>
>> Maybe we should clear the other zone related limits here ? If the drive is
>> reformatted/converted from SMR to CMR (FORMAT WITH PRESET), the other zone
>> limits may be set already, no ?
> 
> blk_validate_zoned_limits already takes care of that.

I do not think it does:

static int blk_validate_zoned_limits(struct queue_limits *lim)
{
        if (!lim->zoned) {
                if (WARN_ON_ONCE(lim->max_open_zones) ||
                    WARN_ON_ONCE(lim->max_active_zones) ||
                    WARN_ON_ONCE(lim->zone_write_granularity) ||
                    WARN_ON_ONCE(lim->max_zone_append_sectors))
                        return -EINVAL;
                return 0;
        }
	...

So setting lim->zoned to false without clearing the other limits will
potentially trigger warnings...

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:25:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:25:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737919.1144422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvsw-0007WV-Gh; Tue, 11 Jun 2024 07:25:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737919.1144422; Tue, 11 Jun 2024 07:25:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvsw-0007WO-D1; Tue, 11 Jun 2024 07:25:38 +0000
Received: by outflank-mailman (input) for mailman id 737919;
 Tue, 11 Jun 2024 07:25:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGvsv-0007WI-Dh
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:25:37 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c5be8907-27c3-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:25:33 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id B2D1CCE19E4;
 Tue, 11 Jun 2024 07:25:24 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 80F46C2BD10;
 Tue, 11 Jun 2024 07:25:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5be8907-27c3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718090724;
	bh=SKDzpdZjTAHkWp+zsEQwH6Ky8PVIUSEQggaioawXXkU=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=UCtiHiiw5QvJYmElyhHUIvI9qZN2kq7wHApTDMzEobrX5tFMllUF7QgwTYv0VDSQe
	 4AvJZ9Xc1WQTZynkZ5eqZVN35p41xaFCgmlC7UMaLIvvCM36uO+T8E3D8B2kWkrwrw
	 CxnQ5rfAOwcQk9F3stXF5NyYVqFnlDWOGrsxEQrWt2LCiqL+fvPkUXoNp9aPDrZKK/
	 Jf1WpJR5Tfo8C4kN4+IDjAFgA3wv+/TYcPyqih/c85wJGX2v2SpjxCABdfzaKWUWw8
	 mwFNFCAMG5oIrZT1WMKm7NaX1XpI2tHzzzRuvurRNs7Kd2xxeNWyHz9cxkpcbZ5fhc
	 X7SraO1QML8XQ==
Message-ID: <92df5033-5df7-4b2a-98ad-a27f8443ee6a@kernel.org>
Date: Tue, 11 Jun 2024 16:25:18 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Alasdair Kergon <agk@redhat.com>,
 Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka <mpatocka@redhat.com>,
 Song Liu <song@kernel.org>, Yu Kuai <yukuai3@huawei.com>,
 Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-3-hch@lst.de>
 <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
 <20240611055239.GA3141@lst.de> <20240611055405.GA3256@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611055405.GA3256@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:54 PM, Christoph Hellwig wrote:
> On Tue, Jun 11, 2024 at 07:52:39AM +0200, Christoph Hellwig wrote:
>>> Maybe we should clear the other zone related limits here ? If the drive is
>>> reformatted/converted from SMR to CMR (FORMAT WITH PRESET), the other zone
>>> limits may be set already, no ?
>>
>> blk_validate_zoned_limits already takes care of that.
> 
> Sorry, brainfart.  The integrity code does that, but not the zoned
> code.  I suspect the core code might be a better place for it,
> though.

Yes. Just replied to your previous email before seeing this one.
I think that:

static int blk_validate_zoned_limits(struct queue_limits *lim)
{
        if (!lim->zoned) {
                if (WARN_ON_ONCE(lim->max_open_zones) ||
                    WARN_ON_ONCE(lim->max_active_zones) ||
                    WARN_ON_ONCE(lim->zone_write_granularity) ||
                    WARN_ON_ONCE(lim->max_zone_append_sectors))
                        return -EINVAL;
                return 0;
        }
	...

could be changed into:

static int blk_validate_zoned_limits(struct queue_limits *lim)
{
	if (!lim->zoned) {
		lim->max_open_zones = 0;
		lim->max_active_zones = 0;
		lim->zone_write_granularity = 0;
		lim->max_zone_append_sectors = 0;
		return 0;
	}

But then we would not see "bad" drivers. We could have a small

blk_clear_zoned_limits(struct queue_limits *lim)

helper too.

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:27:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:27:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737927.1144431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvuG-00082m-QH; Tue, 11 Jun 2024 07:27:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737927.1144431; Tue, 11 Jun 2024 07:27:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvuG-00082f-Nc; Tue, 11 Jun 2024 07:27:00 +0000
Received: by outflank-mailman (input) for mailman id 737927;
 Tue, 11 Jun 2024 07:26:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGvuE-00082Z-Sl
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:26:58 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f94110fe-27c3-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:26:56 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 9B23CCE19AB;
 Tue, 11 Jun 2024 07:26:51 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A2CA5C2BD10;
 Tue, 11 Jun 2024 07:26:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f94110fe-27c3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718090811;
	bh=HG5UcXVYc2pVMu2PGi4iUFqByljMapi6PfR/6ZqBwfo=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=lhYNStRMJNPVo8fichD7Z0COMh5KCSazuVjE6/0APdy+3x8aX6d+I4Uw+ap69ZwAp
	 3u5aa7mjFdEVQyfJG5WY+eMXxSD+64OfuIlIsHMpLDbrxnp9UPUkR/yayj5dU13RXh
	 23qGW6mbrSQ4iFRccSriB2Li9Cf3c2qv3X/apL7b2mQFQrd0/fuwpUhAaRGZ1yOusP
	 n79nDvyWzKvLB0HpWl65/GKtJMfxc7ZpotjxL9zrrr1dQ3UsAtGXJa69S+Dlnhewkx
	 FHwdj5ZYnFJw9FRojR+8wLW1FYdzAyPRpPQ7jZN/hjLct6+Y9JYli0WJbtrwanSHKq
	 GWPNeceTmkiMQ==
Message-ID: <b3a0692c-05f2-459d-9bed-33b7aa3d79c0@kernel.org>
Date: Tue, 11 Jun 2024 16:26:45 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 08/26] virtio_blk: remove virtblk_update_cache_mode
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-9-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-9-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> virtblk_update_cache_mode boils down to a single call to
> blk_queue_write_cache.  Remove it in preparation for moving the cache
> control flags into the queue_limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:28:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737932.1144441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvvW-0000Xk-2y; Tue, 11 Jun 2024 07:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737932.1144441; Tue, 11 Jun 2024 07:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvvW-0000Xd-00; Tue, 11 Jun 2024 07:28:18 +0000
Received: by outflank-mailman (input) for mailman id 737932;
 Tue, 11 Jun 2024 07:28:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGvvU-0000XP-HU
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:28:16 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 28eb489a-27c4-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 09:28:15 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id C7491CE19E8;
 Tue, 11 Jun 2024 07:28:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B5C86C2BD10;
 Tue, 11 Jun 2024 07:28:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28eb489a-27c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718090891;
	bh=PzNpxodKMaXzOrbrQbgP0Usp5K8OR/0f+y46RXYG5GI=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=LTBp4slDow5NcmeSUO+en/N5x+IvZpqfr4dZ5xfX2JzmBZDcu3dZh1tN6fO3CqfN5
	 fC9Rl/OM4T8uVQSZE7K0T1VbSc5kGVEloD8CTTVrPkbvI0U3LUhjHwjw0gYgT6m0wL
	 fsRPyDZqXTT6dQCkbhnlW45jwonhukSCr4JLmJKisZxNZLowwOQwk30YR8lK0/xTNh
	 Ya4myshjfW/7uX8nEJDGL/WMaUQq8CNmicSbttSzKnr2l9hJq0Cmz3iSLAwvK+uere
	 9vcVlwnm7Ycw2p4k6Qg21I4xZXBRD9ik7edbDXJpFSNi3bC1udvNreNPZ0wJBOp6Lf
	 6blq/N52Sp8VQ==
Message-ID: <d1bff26d-0059-4122-8179-75a1b72f3cfc@kernel.org>
Date: Tue, 11 Jun 2024 16:28:05 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 09/26] nbd: move setting the cache control flags to
 __nbd_set_size
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-10-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-10-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move setting the cache control flags in nbd in preparation for moving
> these flags into the queue_limits structure.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:30:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:30:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737936.1144453 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvxx-0002W6-GL; Tue, 11 Jun 2024 07:30:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737936.1144453; Tue, 11 Jun 2024 07:30:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvxx-0002Vz-Bx; Tue, 11 Jun 2024 07:30:49 +0000
Received: by outflank-mailman (input) for mailman id 737936;
 Tue, 11 Jun 2024 07:30:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGvxw-0002Vt-K7
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:30:48 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 84b4d173-27c4-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 09:30:47 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 1E9A560D2B;
 Tue, 11 Jun 2024 07:30:46 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EEC6CC2BD10;
 Tue, 11 Jun 2024 07:30:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84b4d173-27c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718091045;
	bh=mXwlDhzGw5x2p2Nr8gAFPISP2nDIAfZIJOaFJyvJXOI=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=VCI1eDgXTsePnpt155uEye/3wYDicIPNZJ3+S2Hq2BKZNkLNvV5I0fT+JdCzdUnSb
	 yuXJ7xiSHP9TiSA6PVPXt+qO9Dn4e1NQaWtG0WZKRDKr7qk2obBVABl9+DPiAto15R
	 ZY06Qeaqugg7VSWPCcJ0qkt9/u4FV8hF/W3PG2hU6Xg27Ev50hNdLuTn4EcUBYyJ8n
	 S+KRGHhRfCxLhpogZoDRTutfmrs5hXd1HvW8FhrlCCZSHQ5haYJkZbrzON8/BzrfEH
	 kcYMtKu/xYq8uguhHsXhgHJwfT/OivpjDtyI/mVqW6dF9WSlmB2m2KaCIH1Th4AQTt
	 8udJurnL3fqPA==
Message-ID: <fdfc024a-368a-4495-8b85-b5ab7741f6d4@kernel.org>
Date: Tue, 11 Jun 2024 16:30:39 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when they
 fail
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-11-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-11-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> blkfront always had a robust negotiation protocol for detecting a write
> cache.  Stop simply disabling cache flushes when they fail as that is
> a grave error.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good to me, but maybe mention the removal of xlvbd_flush() as well?
And it feels like the "stop disabling cache flushes when they fail" part should
be a fix patch sent separately...

Anyway, for both parts, feel free to add:

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

> ---
>  drivers/block/xen-blkfront.c | 29 +++++++++--------------------
>  1 file changed, 9 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 9b4ec3e4908cce..9794ac2d3299d1 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -982,18 +982,6 @@ static const char *flush_info(struct blkfront_info *info)
>  		return "barrier or flush: disabled;";
>  }
>  
> -static void xlvbd_flush(struct blkfront_info *info)
> -{
> -	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
> -			      info->feature_fua ? true : false);
> -	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
> -		info->gd->disk_name, flush_info(info),
> -		"persistent grants:", info->feature_persistent ?
> -		"enabled;" : "disabled;", "indirect descriptors:",
> -		info->max_indirect_segments ? "enabled;" : "disabled;",
> -		"bounce buffer:", info->bounce ? "enabled" : "disabled;");
> -}
> -
>  static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
>  {
>  	int major;
> @@ -1162,7 +1150,15 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
>  	info->sector_size = sector_size;
>  	info->physical_sector_size = physical_sector_size;
>  
> -	xlvbd_flush(info);
> +	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
> +			      info->feature_fua ? true : false);
> +
> +	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
> +		info->gd->disk_name, flush_info(info),
> +		"persistent grants:", info->feature_persistent ?
> +		"enabled;" : "disabled;", "indirect descriptors:",
> +		info->max_indirect_segments ? "enabled;" : "disabled;",
> +		"bounce buffer:", info->bounce ? "enabled" : "disabled;");
>  
>  	if (info->vdisk_info & VDISK_READONLY)
>  		set_disk_ro(gd, 1);
> @@ -1622,13 +1618,6 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>  				       info->gd->disk_name, op_name(bret.operation));
>  				blkif_req(req)->error = BLK_STS_NOTSUPP;
>  			}
> -			if (unlikely(blkif_req(req)->error)) {
> -				if (blkif_req(req)->error == BLK_STS_NOTSUPP)
> -					blkif_req(req)->error = BLK_STS_OK;
> -				info->feature_fua = 0;
> -				info->feature_flush = 0;
> -				xlvbd_flush(info);
> -			}
>  			fallthrough;
>  		case BLKIF_OP_READ:
>  		case BLKIF_OP_WRITE:

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:32:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:32:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737941.1144462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvzg-00032S-Pg; Tue, 11 Jun 2024 07:32:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737941.1144462; Tue, 11 Jun 2024 07:32:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGvzg-00032L-Mu; Tue, 11 Jun 2024 07:32:36 +0000
Received: by outflank-mailman (input) for mailman id 737941;
 Tue, 11 Jun 2024 07:32:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGvzf-00032F-K6
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:32:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3dc4308-27c4-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:32:33 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 32D5260C47;
 Tue, 11 Jun 2024 07:32:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5F0F2C2BD10;
 Tue, 11 Jun 2024 07:32:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3dc4308-27c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718091151;
	bh=AF+zG6ZFufj7ktoeAMS7+47wp+ON6l7hR24Gr5fCwt0=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=SPZjzavtcTKyCdC4WGEmHFxtV9LSffVVHpZkghrgh94KIDRkU8Rgia29Z6gttxoe9
	 zOX7C6t8muRYmEJZDxtGZLcKQdFby+uhJ+qwHFKzMHofsRuUcCtmhO2REjHrW7IVjC
	 7C0j0tRe+awY6OwUGv767JT4KMLbYhYrBxC1MF4vQdaDAWH1YkqCVa5cS1IpTDl7jQ
	 DwpBzOgI8vuLSRr2nP+6MuC1MYxBAnJs8yxlw2hav/xabgNZGi1G1i8tZ6aqMjzXS4
	 h+CSlB5tmAFTFG1Iuv9NHsTDXLnN731DIz3tGAdH8iuIjAV/89MvaPrp5ZojK8LhAf
	 PBu9mWf+fZU2w==
Message-ID: <77ea357f-f73f-4524-8995-ed204d5f3431@kernel.org>
Date: Tue, 11 Jun 2024 16:32:26 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 11/26] block: freeze the queue in queue_attr_store
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-12-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-12-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> queue_attr_store updates attributes used to control generating I/O, and
> can cause malformed bios if changed with I/O in flight.  Freeze the queue
> in common code instead of adding it to almost every attribute.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:33:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:33:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737947.1144472 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGw0l-0003eG-4d; Tue, 11 Jun 2024 07:33:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737947.1144472; Tue, 11 Jun 2024 07:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGw0l-0003e9-1x; Tue, 11 Jun 2024 07:33:43 +0000
Received: by outflank-mailman (input) for mailman id 737947;
 Tue, 11 Jun 2024 07:33:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGw0j-0003dk-Sz
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:33:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec30b661-27c4-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 09:33:41 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id D4C1760C25;
 Tue, 11 Jun 2024 07:33:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CD458C2BD10;
 Tue, 11 Jun 2024 07:33:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec30b661-27c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718091219;
	bh=oHlrsHMGSP+TFKBHBiaHuNGPDyXYV0yztvtK1Y/4xoA=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=NK69Ca5T+un738Cht9WHCgDsDCtG+gx8LYD6gWuaN5n85S7Yoy0xxNUkFXn4cHxHv
	 Wlv+wGcdYIf+EVH9ZUhb5I5VG8NRhQniAqY8z0Q49SoQKDz4xicGpKlJi87Afp005I
	 EtH89giKv6HAFkHTX4qfxqAZNfqbbdEoL02qUnKFAY7p5J0TI+KJ8srSwUMmzd4BcU
	 hfvqIdBP7X08PHZVdAv1hTm1A4VlEuGPTd2au2zULBAaTxZxCjrUn/VKKe7PVNMklX
	 962B+4yOZjBDU8ZEfAUGNkCr9yE5Pg/u5zYNHenczAAetVmyH+GxTRZJ9JSnWhGfdr
	 S8dGvnoNweUXw==
Message-ID: <57a98863-e1ca-4ef8-aa7c-5012daa22808@kernel.org>
Date: Tue, 11 Jun 2024 16:33:32 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 12/26] block: remove blk_flush_policy
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-13-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-13-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Fold blk_flush_policy into the only caller to prepare for pending changes
> to it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:41:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:41:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737957.1144482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGw82-0006HV-Q3; Tue, 11 Jun 2024 07:41:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737957.1144482; Tue, 11 Jun 2024 07:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGw82-0006HO-Mq; Tue, 11 Jun 2024 07:41:14 +0000
Received: by outflank-mailman (input) for mailman id 737957;
 Tue, 11 Jun 2024 07:41:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGw81-0006HI-BE
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:41:13 +0000
Received: from mail-qk1-x735.google.com (mail-qk1-x735.google.com
 [2607:f8b0:4864:20::735])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8bf255b-27c5-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:41:11 +0200 (CEST)
Received: by mail-qk1-x735.google.com with SMTP id
 af79cd13be357-7955c585af0so50776085a.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 00:41:11 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-440ee9ca87esm16532421cf.23.2024.06.11.00.41.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 00:41:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8bf255b-27c5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718091670; x=1718696470; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=XG4UrLljBpRJt7MEDg4S7rUqsAvR0Hgz2iYi7r9pP6c=;
        b=UMe5XbVxfVPzu+kuY01BlO/sA2FnnFlUBzMRsl0lqcRP9K6asf8ZgSIwdjXQbnpHoS
         M2vzmnfWi6rw24P3qymiVYCzr6j4gdmmaUiqMeC5+ZBEmG8fxVoiA40CimJy2lAloiLO
         cxWESF3YGSz5cwYdpncOxwUeKUFv/9zSKCqV0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718091670; x=1718696470;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XG4UrLljBpRJt7MEDg4S7rUqsAvR0Hgz2iYi7r9pP6c=;
        b=Ylt7snudkRLR5oO+MhcmqNgRIiAnbwOuksfCxpdhVIYTJbD94SPEngZAt/nGU2NKsT
         tPg898GppuhtfBxtuBX1ximQdZA/hlCsyP8MkK8FNeKRHQ+uYf5lCO4RVKECYh/pKKW3
         P//na+dZxekQjLHqcGDKRJt6JENDLskX7gSvgI1+p8gDhmYkD2ucyINa/dakplyZ7kSK
         WuSB2tQUeUdDPzNeV3mbFm3vlYsei0PV1HIrgxm1Rl59MB2LmZvkT+SElJZKUIOSz19Y
         BwVqSl6wwfzMO9aTa9b5jnXTQN92Pdlgw2NucapNx9wutcJvzIMdz2EVSc25EPzI2X59
         HZRw==
X-Gm-Message-State: AOJu0Yyfss7DlVf52AXf8DQGeNUZCyiZgBCp1zK+ny0cd76ny+0xJyOi
	zchh+uPEP67AVfcbDET8XjORfjYLjF5hB+Y33hgLkJr8zjomyW/ZipZ1JqEWToMXaTONYu34VUf
	O
X-Google-Smtp-Source: AGHT+IHsq+vu4+N9ojz+udTBXM5pQ9W+lTRzU3ByNBUxG4x42gqYouv/A4AL02Nq9L04MV54PGE8YQ==
X-Received: by 2002:a05:620a:4792:b0:795:5fc1:3217 with SMTP id af79cd13be357-7955fc133c9mr561960985a.61.1718091670119;
        Tue, 11 Jun 2024 00:41:10 -0700 (PDT)
Date: Tue, 11 Jun 2024 09:41:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
Message-ID: <Zmf_k2meED8iG3H5@macbook>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>

On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
> mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
> access to actual MMIO space should not generally be restricted to UC
> only; especially video frame buffer accesses are unduly affected by such
> a restriction. Permit PAT use for directly assigned MMIO as long as the
> domain is known to have been granted some level of cache control.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Considering that we've just declared PVH Dom0 "supported", this may well
> qualify for 4.19. The issue was specifically very noticeable there.
> 
> The conditional may be more complex than really necessary, but it's in
> line with what we do elsewhere. And imo it's better to continue being a
> little too restrictive than to move to being too lax.
> 
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
>  
>      if ( !mfn_valid(mfn) )
>      {
> -        *ipat = true;
> +        *ipat = type != p2m_mmio_direct ||
> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));

Looking at this, shouldn't the !mfn_valid special case be removed, and
mfns without a valid page be processed normally, so that the guest
MTRR values are taken into account, and no iPAT is enforced?

I also think this likely wants a:

Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')

AFAICT, before that commit direct MMIO regions would set iPAT to WB,
which would result in the correct attributes (albeit guest MTRR was
still ignored).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:42:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:42:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737962.1144492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGw9W-0006pF-31; Tue, 11 Jun 2024 07:42:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737962.1144492; Tue, 11 Jun 2024 07:42:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGw9W-0006p8-0L; Tue, 11 Jun 2024 07:42:46 +0000
Received: by outflank-mailman (input) for mailman id 737962;
 Tue, 11 Jun 2024 07:42:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGw9U-0006p0-2d
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:42:44 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2efbfffa-27c6-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:42:41 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6ef8bf500dso83426666b.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 00:42:41 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f13e08c89sm323843166b.88.2024.06.11.00.42.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 00:42:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2efbfffa-27c6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718091761; x=1718696561; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Hv4dhT8JGFXr7CnwQDKQ4vVPjy6zfDICAYO+IVglWEA=;
        b=Wke5SrlQ6TgfThUigLz7YwdfIa1D4whedOLhdpqZ6C1l3+MVWdEZA4SgZ1KcDfX0Ct
         6z5iAE8DXsQXGU3UcZJxDtNPrgW6xV5AHIbEnkvqGa/R1jcgpi3gC0Td8dsLH1mN7ixB
         vv853OaBAwfjq3hGJSxUICEz06dxUcTg073GltP3ZT5ilxDs37H7c8DqqTc7dWm36X2g
         Cuy7tYzQTuQWttZ+8b9FqmTPKSgUesXqdmLD6MOrdx4LiYY/ZcwcmViZ4CDXFqI/p8hR
         Rq/eC+/rer8sl6VIA9Q+EvnY3j/A1WRkBCWEEl47pCRIE4HQDHojwY3YSupd+Tb1d41y
         963A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718091761; x=1718696561;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Hv4dhT8JGFXr7CnwQDKQ4vVPjy6zfDICAYO+IVglWEA=;
        b=GevHSD4ymCeB1kkSiEyd00ky4gOX1ZAW5H3T2KxLhtO4d3dhte7ITdqbZf4iNs0m1R
         8V1j/0N5XPUx7Mf4nQQ6OUym5C5TssdxSlAw8ldaLCE5v5UmHxZ82Pqhj9UMRtyUTJXO
         djUiN0i+Ejecye8W0v7QX+e1JoYCwDiiYz2vh7oYI/TIDjThfaOyOcqNxEmIs3LILaC6
         jk+F4QvLv3LvmDbnW1FEn3hQRwNcOUlH0tl19CO0lgVYUm4qpoDeVXiJhDfe7qXqxbXU
         GvLJbesyhfFTEED3q9JCjRv9RX9AnRajMJy2yeP/Mp8WT8zN/nZH+PkzFdLmuVYZU2df
         d+zA==
X-Forwarded-Encrypted: i=1; AJvYcCVbc/6tQ0i6fLKO8U5f/4mMaTgyPvdWRvMY2Z+pOSvq1QJX9ybZi/HKBbIYI6PDgtZLvmtklinK8DTW1nMUYhn1TQY0cpii93WNVCIXZbE=
X-Gm-Message-State: AOJu0Yyn9QnJwgf7s0zodzo7WaPCtVDAz0R2asDwdpS1M8RFqKquxh+I
	qzEJflIhujQqYjb0wI15Yq/6PGVBMNcub+l7ghddgLbuYA6TrWaeaxYMw9P4hQ==
X-Google-Smtp-Source: AGHT+IF9XwYl+aUCSUVfoMpYlXfFFbibv6DMxCGtNyYTHPtFXUsJm0X7P5uYj6VAJIrs2XBK0x3oTw==
X-Received: by 2002:a17:907:7d9e:b0:a6f:ea6:9534 with SMTP id a640c23a62f3a-a6f0ea698d6mr610622066b.76.1718091761219;
        Tue, 11 Jun 2024 00:42:41 -0700 (PDT)
Message-ID: <615582c8-c153-424d-bce4-eb0c903d28ad@suse.com>
Date: Tue, 11 Jun 2024 09:42:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 1/7] x86/smp: do not use shorthand IPI destinations in
 CPU hot{,un}plug contexts
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-2-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240610142043.11924-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 16:20, Roger Pau Monne wrote:
> Due to the current rwlock logic, if the CPU calling get_cpu_maps() does so from
> a cpu_hotplug_{begin,done}() region the function will still return success,
> because a CPU taking the rwlock in read mode after having taken it in write
> mode is allowed.  Such behavior however defeats the purpose of get_cpu_maps(),

I'm not happy to see you still have this claim here. The behavior may (appear
to) defeat the purpose here, but as expressed previously I don't view that as
being a general pattern.

> as it should always return false when called while a CPU hot{,un}plug
> operation is in progress.  Otherwise the logic in send_IPI_mask() is wrong, as it could
> decide to use the shorthand even when a CPU operation is in progress.
> 
> Introduce a new helper to detect whether the current caller is between a
> cpu_hotplug_{begin,done}() region and use it in send_IPI_mask() to restrict
> shorthand usage.
> 
> Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
>  - Modify send_IPI_mask() to detect CPU hotplug context.
> ---
>  xen/arch/x86/smp.c       |  2 +-
>  xen/common/cpu.c         |  5 +++++
>  xen/include/xen/cpu.h    | 10 ++++++++++
>  xen/include/xen/rwlock.h |  2 ++
>  4 files changed, 18 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
> index 7443ad20335e..04c6a0572319 100644
> --- a/xen/arch/x86/smp.c
> +++ b/xen/arch/x86/smp.c
> @@ -88,7 +88,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
>       * the system have been accounted for.
>       */
>      if ( system_state > SYS_STATE_smp_boot &&
> -         !unaccounted_cpus && !disabled_cpus &&
> +         !unaccounted_cpus && !disabled_cpus && !cpu_in_hotplug_context() &&
>           /* NB: get_cpu_maps lock requires enabled interrupts. */
>           local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
>           (park_offline_cpus ||

Along with changing the condition you also want to update the comment to
reflect the code adjustment.

If we can agree on respective wording in both places, I'd be happy to make
respective adjustments while committing; the code changes themselves are
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:44:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:44:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737967.1144502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwBT-0007N0-EI; Tue, 11 Jun 2024 07:44:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737967.1144502; Tue, 11 Jun 2024 07:44:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwBT-0007Mt-Bn; Tue, 11 Jun 2024 07:44:47 +0000
Received: by outflank-mailman (input) for mailman id 737967;
 Tue, 11 Jun 2024 07:44:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGwBR-0007Ml-V5
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:44:45 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77e7df4a-27c6-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 09:44:44 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a6efe62f583so56867866b.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 00:44:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1b7f5b12sm270687566b.196.2024.06.11.00.44.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 00:44:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77e7df4a-27c6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718091884; x=1718696684; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=RqT0W4YHSax70D5wSkQzykxk2VA8rXEPlBhJzRJ7a4I=;
        b=bFx40jl48VNROPcfbp4dgeb/Qcgvuc0eEXsNcqLRlma83Ga3/VN4D4KwkDTA5KWOKg
         pJG6IftdTht2WHPyqhXQudUgs5GTPrUqzJvq8miiZ3HVCGTY5dPU0XZ6+xtphnLRLpxF
         15FlKtHwxxxiooTtTOtBQYxNXGx5Q4iRpk9LnErPtEdjGexkSHaMC+cQspWzX2W3vWAc
         CxJWG/XRoQiESSMnTGfQi8TyxJZV7jOFHt8nkzH24C+Cmf7Pfsm49SKoN+geWB5DDmoO
         O04jv9QFIY6Yog9gCKhPkX4u06qotOfwzlgzOewMuUK/yE/t3rJHj9zrZ495Z9Eaicir
         x0pA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718091884; x=1718696684;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RqT0W4YHSax70D5wSkQzykxk2VA8rXEPlBhJzRJ7a4I=;
        b=fi+Pmc0UZotbEwCkZlZ39ZSILfCaVHC9PH1knWwUhRhMn+TUusUR1D+aHflmq/jauZ
         OLPZt5WNljkxPgwiJyh1RG1mL1umyPu4WbNRHF0hhImbufbHtKZujgTNtMIIo/aiu8a9
         TsqH+6S6kJfV9uX97CEE9PoyN0No0pjowt0zEwmSb39ukV7f7TEprbJWM8aMkw0et+no
         1MqCatn1CZquIP69iaEzMvjb9A1YgHU/Bb3sOTuDtYbfcnj1V2WA3NdGUbf/tbn9oUTh
         qFQfptU2aykDbV0wG1kUJ2kiYsPLXmXorBKRrFUYV/3nsb2ztlW/TBrNSs7Wxa/2mtHi
         uvaA==
X-Forwarded-Encrypted: i=1; AJvYcCW21Viu0ibl9aWwgJ8tJy+JyThduj6YSaPCgZ1yFyp7gFK9CZlD2RbpS+IfY3d4LMl1WnxrvIdFw4fR3osDsvL+tcGYJMyZYOw6FPq5nao=
X-Gm-Message-State: AOJu0YzxA9/SlKXlz2HzkZAmBVRE6oIx7yjKwDFSevA3s7CtpfgGe5ig
	12zz+0Q2Kbe66PGLNoKphYwspT4MpF+nIftpYMEgUxN24R3VwIVTeCHHn6nKrw==
X-Google-Smtp-Source: AGHT+IHQDjqgxm5+PCebhgYwTsf2wmns2k0ZghhWnAMv9pJ95vbQIuAxVDBA6MDwxCJ2Bnhhjq+i8w==
X-Received: by 2002:a17:906:94f:b0:a6f:3155:bb89 with SMTP id a640c23a62f3a-a6f3155bbf1mr144096666b.70.1718091883604;
        Tue, 11 Jun 2024 00:44:43 -0700 (PDT)
Message-ID: <9048733a-e942-4384-b926-e8a304095356@suse.com>
Date: Tue, 11 Jun 2024 09:44:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 2/7] x86/irq: describe how the interrupt CPU movement
 works
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-3-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240610142043.11924-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 16:20, Roger Pau Monne wrote:
> The logic to move interrupts across CPUs is complex; attempt to provide a
> comment that describes the expected behavior so users of the interrupt system
> have more context about the usage of the arch_irq_desc structure fields.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:45:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:45:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737971.1144512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwBr-0007pi-Ma; Tue, 11 Jun 2024 07:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737971.1144512; Tue, 11 Jun 2024 07:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwBr-0007pb-JM; Tue, 11 Jun 2024 07:45:11 +0000
Received: by outflank-mailman (input) for mailman id 737971;
 Tue, 11 Jun 2024 07:45:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGwBr-0007Ml-6R
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:45:11 +0000
Received: from mail-qv1-xf2f.google.com (mail-qv1-xf2f.google.com
 [2607:f8b0:4864:20::f2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 87419b1d-27c6-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 09:45:10 +0200 (CEST)
Received: by mail-qv1-xf2f.google.com with SMTP id
 6a1803df08f44-6b0783b6dd5so9547386d6.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 00:45:10 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b08d97d68csm3702646d6.128.2024.06.11.00.45.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 00:45:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87419b1d-27c6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718091909; x=1718696709; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=v1wRuKc+h3Qi+BV6WCNGLiGZWMBEevyVedTfAwQoAfs=;
        b=jROWtDxHzHpv0dSNWW1VZRmIF3wG1AddvyqZEUZUMeZ2a31zDcVJtsgcbBD2rR9CbE
         550VTu9q4xUVOEFvSrn+U2iBVhNOY5gSRn84n9yg1i42TJae8V/QHtzA1ja/yDFxlqwX
         NGfmZqWUJalHChjuO//1DX2QbGIKButum+21A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718091909; x=1718696709;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=v1wRuKc+h3Qi+BV6WCNGLiGZWMBEevyVedTfAwQoAfs=;
        b=LOVKH1gR3P65QL5fEXAGbptwhsZetJnt4SW0tJJNp2hDyVS1346SfXW2Ov+sSgk1zF
         ytloDXrIakJUJHguwGP/0q81hvuXeCGM9bVG06E9kG3EOQKjn3/Dd6AYSkWJe3crxLHB
         2+DNqUTjjlERCVhDrFJKnycU7d0un7ZFZ8uu3GT6n0Feg0lt9G8+pKu7AxFFWtJXopKd
         Bz3i58wVeyg722htF1s46f44G6G0N6JUnY+4U5DqQ5+dC6iuVlpyXyp+GWRSmodLovoR
         o/1fPcGSB2x0R7PX90d7Pj4S4kq3XzbT17odrQT9H4B8rLVSTEZxJu0lwhL8w4yVRord
         Ks2w==
X-Gm-Message-State: AOJu0YxduKWF/hjaL49fu9umvcLrm35wZZvHrHzpCz2V3hXhPy3WrGLq
	eDeU8h1NoPv5I1+KkMEbekiOOyXzZfHX1WhDj2EsM5NfN0EfstIPqtUFUAN4DSE/U63v9VGCMPr
	S
X-Google-Smtp-Source: AGHT+IGHow0qnZOWhajZutBwKlWIj613DycXRylqVyrt4CX26nRsYL7wHi2Yw+9NWc8Bsp/kodgClg==
X-Received: by 2002:a05:6214:3389:b0:6ad:75ac:ab35 with SMTP id 6a1803df08f44-6b059b50ba0mr122931386d6.8.1718091909260;
        Tue, 11 Jun 2024 00:45:09 -0700 (PDT)
Date: Tue, 11 Jun 2024 09:45:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: George Dunlap <george.dunlap@cloud.com>
Cc: xen-devel@lists.xenproject.org,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH for-4.19 v2] x86/pvh: declare PVH dom0 supported with
 caveats
Message-ID: <ZmgAg0Io0fSLl6s5@macbook>
References: <20240610085052.8499-1-roger.pau@citrix.com>
 <CA+zSX=Z3O_b44Jum3s9rRJ_h+BKjJzd11gAr249wFOxQCcFKEQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CA+zSX=Z3O_b44Jum3s9rRJ_h+BKjJzd11gAr249wFOxQCcFKEQ@mail.gmail.com>

On Mon, Jun 10, 2024 at 04:55:34PM +0100, George Dunlap wrote:
> On Mon, Jun 10, 2024 at 9:50 AM Roger Pau Monne <roger.pau@citrix.com> wrote:
> >
> > PVH dom0 is functionally very similar to PVH domU except for the domain
> > builder and the added set of hypercalls available to it.
> >
> > The main concern with declaring it "Supported" is the lack of some features
> > when compared to classic PV dom0, hence switch it's status to supported with
> > caveats.  List the known missing features, there might be more features missing
> > or not working as expected apart from the ones listed.
> >
> > Note there's some (limited) PVH dom0 testing on both osstest and gitlab.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > ---
> > Changes since v1:
> >  - Remove boot warning.
> > ---
> >  CHANGELOG.md                  |  1 +
> >  SUPPORT.md                    | 15 ++++++++++++++-
> >  xen/arch/x86/hvm/dom0_build.c |  1 -
> >  3 files changed, 15 insertions(+), 2 deletions(-)
> >
> > diff --git a/CHANGELOG.md b/CHANGELOG.md
> > index 201478aa1c0e..1778419cae64 100644
> > --- a/CHANGELOG.md
> > +++ b/CHANGELOG.md
> > @@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
> >     - HVM PIRQs are disabled by default.
> >     - Reduce IOMMU setup time for hardware domain.
> >     - Allow HVM/PVH domains to map foreign pages.
> > +   - Declare PVH dom0 supported with caveats.
> >   - xl/libxl configures vkb=[] for HVM domains with priority over vkb_device.
> >   - Increase the maximum number of CPUs Xen can be built for from 4095 to
> >     16383.
> > diff --git a/SUPPORT.md b/SUPPORT.md
> > index d5d60c62ec11..711aacf34662 100644
> > --- a/SUPPORT.md
> > +++ b/SUPPORT.md
> > @@ -161,7 +161,20 @@ Requires hardware virtualisation support (Intel VMX / AMD SVM).
> >  Dom0 support requires an IOMMU (Intel VT-d / AMD IOMMU).
> >
> >      Status, domU: Supported
> > -    Status, dom0: Experimental
> > +    Status, dom0: Supported, with caveats
> > +
> > +PVH dom0 hasn't received the same test coverage as PV dom0, so it can exhibit
> > +unexpected behavior or issues on some hardware.
> 
> What's the criteria for removing this paragraph?
> 
> FAOD I'm OK with it being checked in as-is, but I feel like this
> paragraph is somewhat anomalous, and would at least like to have an
> idea what might trigger its removal.

More testing is the only way this paragraph can be removed IMO.

For example I would be happy to remove it if dom0 PVH works on all
hardware in the XenRT lab.  So far the Linux dom0 version used by
XenServer is missing some required fixes for such testing to be
feasible.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:46:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:46:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737978.1144522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwCg-0008T0-32; Tue, 11 Jun 2024 07:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737978.1144522; Tue, 11 Jun 2024 07:46:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwCf-0008St-VY; Tue, 11 Jun 2024 07:46:01 +0000
Received: by outflank-mailman (input) for mailman id 737978;
 Tue, 11 Jun 2024 07:46:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UxxS=NN=suse.de=osalvador@srs-se1.protection.inumbo.net>)
 id 1sGwCe-0008H5-GX
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:46:00 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a452a0ab-27c6-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 09:45:58 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 0B24722CCD;
 Tue, 11 Jun 2024 07:45:57 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id D8AB213A55;
 Tue, 11 Jun 2024 07:45:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 4MdfMrMAaGYMUQAAD6G6ig
 (envelope-from <osalvador@suse.de>); Tue, 11 Jun 2024 07:45:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a452a0ab-27c6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718091958; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=HVao4QsDP4Z+JAjo5dOzj5sxwVNV/Y4USEllUqua+SQ=;
	b=u5mEgWuSpunEORDd99KB+qYCCofPeL9U0D+nligMM1u7MVYkAsDQ4tHD3Kx6+UbLLoWb9Q
	ZKLtpOD0A7zMaX96b5O0y7p1jSfvfiouCBinzY0ukzGsQHISDNnZX5BxP7yYnZl30CXRvD
	XUtm+fRniwugupRUTF0pqvUJ6aUhHeI=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718091958;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=HVao4QsDP4Z+JAjo5dOzj5sxwVNV/Y4USEllUqua+SQ=;
	b=Vs4QnX1RXzMMer0yUJ/gnSrGDlsxSJRsNXNkxuY1DjYdWEK4c26NAw8fho5r6ae0TqbKkl
	TducIsP3PboEO4DQ==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718091957; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=HVao4QsDP4Z+JAjo5dOzj5sxwVNV/Y4USEllUqua+SQ=;
	b=beS+LyS4FBWnwGXe6a5zd/EhBO0SySrgPbqyL6f5aR8KILqXrR2/CI5lDKJ80NRBdN/pSq
	JbySBQlifQRqEKrIl1R/jcgrzWFcaHlw0Fl6CIKXrfrMDA8Ys0+B7INFr6lrOOFwfTr1iS
	ef8d3ylhBReZsLmY5KbBBlD/myYBhkQ=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718091957;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=HVao4QsDP4Z+JAjo5dOzj5sxwVNV/Y4USEllUqua+SQ=;
	b=blQE/fLATW0heDMQ1LEDrIBV+gvCQOtxZiUyMLRgeESKGRSW5JNnkJ/TCLcUuKPV7v/6M6
	dbtt7phZc0CzDZBw==
Date: Tue, 11 Jun 2024 09:45:54 +0200
From: Oscar Salvador <osalvador@suse.de>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	Eugenio =?iso-8859-1?Q?P=E9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of
 !ZONE_DEVICE with PageOffline() instead of PageReserved()
Message-ID: <ZmgAsolx7SAHeDW7@localhost.localdomain>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-3-david@redhat.com>
 <ZmZ_3Xc7fdrL1R15@localhost.localdomain>
 <5d9583e1-3374-437d-8eea-6ab1e1400a30@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5d9583e1-3374-437d-8eea-6ab1e1400a30@redhat.com>
X-Spam-Level: 
X-Spamd-Result: default: False [-4.30 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-0.999];
	MIME_GOOD(-0.10)[text/plain];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	SUBJECT_HAS_EXCLAIM(0.00)[];
	ARC_NA(0.00)[];
	MIME_TRACE(0.00)[0:+];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	RCPT_COUNT_TWELVE(0.00)[23];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	FROM_HAS_DN(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	FROM_EQ_ENVFROM(0.00)[];
	TO_DN_SOME(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	MISSING_XM_UA(0.00)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo]
X-Spam-Score: -4.30
X-Spam-Flag: NO

On Mon, Jun 10, 2024 at 10:56:02AM +0200, David Hildenbrand wrote:
> There are fortunately not that many left.
> 
> I'd even say marking them (vmemmap) reserved is more wrong than right: note
> that ordinary vmemmap pages after memory hotplug are not reserved! Only
> bootmem should be reserved.

Ok, that is a very good point that I missed.
I thought that hotplugged vmemmap pages (not self-hosted) were marked as
Reserved; that is why I thought this would be inconsistent.
But then, if that is the case, I think we are safe, as the kernel can
already encounter vmemmap pages that are not reserved and deals with them
somehow.

> Let's take at the relevant core-mm ones (arch stuff is mostly just for MMIO
> remapping)
> 
... 
> Any PageReserved user that I am missing, or why we should handle these
> vmemmap pages differently than the ones allocated during ordinary memory
> hotplug?

No, I cannot think of a reason why normal vmemmap pages should behave
differently from self-hosted ones.

I was also confused because I thought that after this change
pfn_to_online_page() would behave differently for self-hosted vmemmap
pages, because I thought that somehow we relied on PageOffline(), but
that is not the case.

> In the future, we might want to consider using a dedicated page type for
> them, so we can stop using a bit that doesn't allow to reliably identify
> them. (we should mark all vmemmap with that type then)

Yes, an all-vmemmap page type would be a good thing, so we do not have
to special-case them.

Just one last thing.
Now self-hosted vmemmap pages will have PageOffline cleared, and that
will still remain after the memory block they belong to has gone
offline, which is ok because those vmemmap pages stay around until the
chunk of memory gets removed.

Ok, just wanted to convince myself that there will be no surprises.

Thanks David for clarifying.
 

-- 
Oscar Salvador
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 07:55:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 07:55:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737988.1144531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwLb-00036C-T3; Tue, 11 Jun 2024 07:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737988.1144531; Tue, 11 Jun 2024 07:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwLb-000365-QZ; Tue, 11 Jun 2024 07:55:15 +0000
Received: by outflank-mailman (input) for mailman id 737988;
 Tue, 11 Jun 2024 07:55:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwLa-00035z-2S
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 07:55:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id edff2b10-27c7-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 09:55:12 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id B1A1F60D2D;
 Tue, 11 Jun 2024 07:55:10 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AA554C32789;
 Tue, 11 Jun 2024 07:55:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: edff2b10-27c7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718092510;
	bh=KMSwFKEyi35uaRtii0t7lI/4MvH1eLdQhWL/35KPWTI=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=IS1YYCI7UiDI5FXif6bcLeLXLCw0414L3MOXmusciXh7PaojbE/cgrfs+I8YCjyJJ
	 V0+j5AdfNNezODyfiopqQYbQWcBuIJ2lWiwcBrqOui7H8bgYLm4NNYizjvbLwlEqrP
	 O2g64hgllN2eL51XXDwSw43lH278hXsPf5y/ZZzX5hxgkSpC5QFW7kLRA5elZAYCsp
	 GnDfFNfx7wcoV8dBHEwNHJaRxFY4lLNmZ2w4G3IhNEOsULl1snsZfisN/xq8Pn2fzq
	 CD/V47DAq0TI3OBp5yS7Se4f0Y8iTK1ZkHv1i7mSOq4XYXg4eyabIba1vcFOH5GNOL
	 KLm9npjmJEeqA==
Message-ID: <d21b162a-1fd3-4fd1-a17f-f127f964bdf1@kernel.org>
Date: Tue, 11 Jun 2024 16:55:04 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 13/26] block: move cache control settings out of
 queue->flags
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-14-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-14-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the cache control settings into the queue_limits so that they
> can be set atomically and all I/O is frozen when changing the
> flags.

...so that they can be set atomically with the device queue frozen when
changing the flags.

may be better.

> 
> Add new features and flags field for the driver set flags, and internal
> (usually sysfs-controlled) flags in the block layer.  Note that we'll
> eventually remove enough field from queue_limits to bring it back to the
> previous size.
> 
> The disable flag is inverted compared to the previous meaning, which
> means it now survives a rescan, similar to the max_sectors and
> max_discard_sectors user limits.
> 
> The FLUSH and FUA flags are now inherited by blk_stack_limits, which
> simplified the code in dm a lot, but also causes a slight behavior
> change in that dm-switch and dm-unstripe now advertise a write cache
> despite setting num_flush_bios to 0.  The I/O path will handle this
> gracefully, but as far as I can tell the lack of num_flush_bios
> and thus flush support is a pre-existing data integrity bug in those
> targets that really needs fixing, after which a non-zero num_flush_bios
> should be required in dm for targets that map to underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  .../block/writeback_cache_control.rst         | 67 +++++++++++--------
>  arch/um/drivers/ubd_kern.c                    |  2 +-
>  block/blk-core.c                              |  2 +-
>  block/blk-flush.c                             |  9 ++-
>  block/blk-mq-debugfs.c                        |  2 -
>  block/blk-settings.c                          | 29 ++------
>  block/blk-sysfs.c                             | 29 +++++---
>  block/blk-wbt.c                               |  4 +-
>  drivers/block/drbd/drbd_main.c                |  2 +-
>  drivers/block/loop.c                          |  9 +--
>  drivers/block/nbd.c                           | 14 ++--
>  drivers/block/null_blk/main.c                 | 12 ++--
>  drivers/block/ps3disk.c                       |  7 +-
>  drivers/block/rnbd/rnbd-clt.c                 | 10 +--
>  drivers/block/ublk_drv.c                      |  8 ++-
>  drivers/block/virtio_blk.c                    | 20 ++++--
>  drivers/block/xen-blkfront.c                  |  9 ++-
>  drivers/md/bcache/super.c                     |  7 +-
>  drivers/md/dm-table.c                         | 39 +++--------
>  drivers/md/md.c                               |  8 ++-
>  drivers/mmc/core/block.c                      | 42 ++++++------
>  drivers/mmc/core/queue.c                      | 12 ++--
>  drivers/mmc/core/queue.h                      |  3 +-
>  drivers/mtd/mtd_blkdevs.c                     |  5 +-
>  drivers/nvdimm/pmem.c                         |  4 +-
>  drivers/nvme/host/core.c                      |  7 +-
>  drivers/nvme/host/multipath.c                 |  6 --
>  drivers/scsi/sd.c                             | 28 +++++---
>  include/linux/blkdev.h                        | 38 +++++++++--
>  29 files changed, 227 insertions(+), 207 deletions(-)
> 
> diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst
> index b208488d0aae85..9cfe27f90253c7 100644
> --- a/Documentation/block/writeback_cache_control.rst
> +++ b/Documentation/block/writeback_cache_control.rst
> @@ -46,41 +46,50 @@ worry if the underlying devices need any explicit cache flushing and how
>  the Forced Unit Access is implemented.  The REQ_PREFLUSH and REQ_FUA flags
>  may both be set on a single bio.
>  
> +Feature settings for block drivers
> +----------------------------------
>  
> -Implementation details for bio based block drivers
> ---------------------------------------------------------------
> +For devices that do not support volatile write caches there is no driver
> +support required, the block layer completes empty REQ_PREFLUSH requests before
> +entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> +requests that have a payload.
>  
> -These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
> -directly below the submit_bio interface.  For remapping drivers the REQ_FUA
> -bits need to be propagated to underlying devices, and a global flush needs
> -to be implemented for bios with the REQ_PREFLUSH bit set.  For real device
> -drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
> -on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
> -data can be completed successfully without doing any work.  Drivers for
> -devices with volatile caches need to implement the support for these
> -flags themselves without any help from the block layer.
> +For devices with volatile write caches the driver needs to tell the block layer
> +that it supports flushing caches by setting the
>  
> +   BLK_FEAT_WRITE_CACHE
>  
> -Implementation details for request_fn based block drivers
> ----------------------------------------------------------
> +flag in the queue_limits feature field.  For devices that also support the FUA
> +bit the block layer needs to be told to pass on the REQ_FUA bit by also setting
> +the
>  
> -For devices that do not support volatile write caches there is no driver
> -support required, the block layer completes empty REQ_PREFLUSH requests before
> -entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> -requests that have a payload.  For devices with volatile write caches the
> -driver needs to tell the block layer that it supports flushing caches by
> -doing::
> +   BLK_FEAT_FUA
> +
> +flag in the features field of the queue_limits structure.
> +
> +Implementation details for bio based block drivers
> +--------------------------------------------------
> +
> +For bio based drivers the REQ_PREFLUSH and REQ_FUA bit are simplify passed on
> +to the driver if the drivers sets the BLK_FEAT_WRITE_CACHE flag and the drivers
> +needs to handle them.
> +
> +*NOTE*: The REQ_FUA bit also gets passed on when the BLK_FEAT_FUA flags is
> +_not_ set.  Any bio based driver that sets BLK_FEAT_WRITE_CACHE also needs to
> +handle REQ_FUA.
>  
> -	blk_queue_write_cache(sdkp->disk->queue, true, false);
> +For remapping drivers the REQ_FUA bits need to be propagated to underlying
> +devices, and a global flush needs to be implemented for bios with the
> +REQ_PREFLUSH bit set.
>  
> -and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn.  Note that
> -REQ_PREFLUSH requests with a payload are automatically turned into a sequence
> -of an empty REQ_OP_FLUSH request followed by the actual write by the block
> -layer.  For devices that also support the FUA bit the block layer needs
> -to be told to pass through the REQ_FUA bit using::
> +Implementation details for blk-mq drivers
> +-----------------------------------------
>  
> -	blk_queue_write_cache(sdkp->disk->queue, true, true);
> +When the BLK_FEAT_WRITE_CACHE flag is set, REQ_OP_WRITE | REQ_PREFLUSH requests
> +with a payload are automatically turned into a sequence of a REQ_OP_FLUSH
> +request followed by the actual write by the block layer.
>  
> -and the driver must handle write requests that have the REQ_FUA bit set
> -in prep_fn/request_fn.  If the FUA bit is not natively supported the block
> -layer turns it into an empty REQ_OP_FLUSH request after the actual write.
> +When the BLK_FEA_FUA flags is set, the REQ_FUA bit simplify passed on for the

s/BLK_FEA_FUA/BLK_FEAT_FUA

> +REQ_OP_WRITE request, else a REQ_OP_FLUSH request is sent by the block layer
> +after the completion of the write request for bio submissions with the REQ_FUA
> +bit set.
	
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 5c787965b7d09e..4f524c1d5e08bd 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -423,32 +423,41 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
>  
>  static ssize_t queue_wc_show(struct request_queue *q, char *page)
>  {
> -	if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
> -		return sprintf(page, "write back\n");
> -
> -	return sprintf(page, "write through\n");
> +	if (q->limits.features & BLK_FLAGS_WRITE_CACHE_DISABLED)
> +		return sprintf(page, "write through\n");
> +	return sprintf(page, "write back\n");
>  }
>  
>  static ssize_t queue_wc_store(struct request_queue *q, const char *page,
>  			      size_t count)
>  {
> +	struct queue_limits lim;
> +	bool disable;
> +	int err;
> +
>  	if (!strncmp(page, "write back", 10)) {
> -		if (!test_bit(QUEUE_FLAG_HW_WC, &q->queue_flags))
> -			return -EINVAL;
> -		blk_queue_flag_set(QUEUE_FLAG_WC, q);
> +		disable = false;
>  	} else if (!strncmp(page, "write through", 13) ||
> -		 !strncmp(page, "none", 4)) {
> -		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
> +		   !strncmp(page, "none", 4)) {
> +		disable = true;
>  	} else {
>  		return -EINVAL;
>  	}

I think you can drop the curly brackets for this chain of if-else-if-else.

>  
> +	lim = queue_limits_start_update(q);
> +	if (disable)
> +		lim.flags |= BLK_FLAGS_WRITE_CACHE_DISABLED;
> +	else
> +		lim.flags &= ~BLK_FLAGS_WRITE_CACHE_DISABLED;
> +	err = queue_limits_commit_update(q, &lim);
> +	if (err)
> +		return err;
>  	return count;
>  }


> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index fd789eeb62d943..fbe125d55e25b4 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1686,34 +1686,16 @@ int dm_calculate_queue_limits(struct dm_table *t,
>  	return validate_hardware_logical_block_alignment(t, limits);
>  }
>  
> -static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
> -				sector_t start, sector_t len, void *data)
> -{
> -	unsigned long flush = (unsigned long) data;
> -	struct request_queue *q = bdev_get_queue(dev->bdev);
> -
> -	return (q->queue_flags & flush);
> -}
> -
> -static bool dm_table_supports_flush(struct dm_table *t, unsigned long flush)
> +/*
> + * Check if an target requires flush support even if none of the underlying

s/an/a

> + * devices need it (e.g. to persist target-specific metadata).
> + */
> +static bool dm_table_supports_flush(struct dm_table *t)
>  {
> -	/*
> -	 * Require at least one underlying device to support flushes.
> -	 * t->devices includes internal dm devices such as mirror logs
> -	 * so we need to use iterate_devices here, which targets
> -	 * supporting flushes must provide.
> -	 */
>  	for (unsigned int i = 0; i < t->num_targets; i++) {
>  		struct dm_target *ti = dm_table_get_target(t, i);
>  
> -		if (!ti->num_flush_bios)
> -			continue;
> -
> -		if (ti->flush_supported)
> -			return true;
> -
> -		if (ti->type->iterate_devices &&
> -		    ti->type->iterate_devices(ti, device_flush_capable, (void *) flush))
> +		if (ti->num_flush_bios && ti->flush_supported)
>  			return true;
>  	}


> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index c792d4d81e5fcc..4e8931a2c76b07 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -282,6 +282,28 @@ static inline bool blk_op_is_passthrough(blk_opf_t op)
>  	return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
>  }
>  
> +/* flags set by the driver in queue_limits.features */
> +enum {
> +	/* supports a a volatile write cache */

Repeated "a".

> +	BLK_FEAT_WRITE_CACHE			= (1u << 0),
> +
> +	/* supports passing on the FUA bit */
> +	BLK_FEAT_FUA				= (1u << 1),
> +};


> +static inline bool blk_queue_write_cache(struct request_queue *q)
> +{
> +	return (q->limits.features & BLK_FEAT_WRITE_CACHE) &&
> +		(q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED);

Hmm, shouldn't this be !(q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED) ?

> +}
> +
>  static inline bool bdev_write_cache(struct block_device *bdev)
>  {
> -	return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
> +	return blk_queue_write_cache(bdev_get_queue(bdev));
>  }
>  
>  static inline bool bdev_fua(struct block_device *bdev)
>  {
> -	return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
> +	return bdev_get_queue(bdev)->limits.features & BLK_FEAT_FUA;
>  }
>  
>  static inline bool bdev_nowait(struct block_device *bdev)

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:01:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:01:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737998.1144541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwR9-0006EU-IY; Tue, 11 Jun 2024 08:00:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737998.1144541; Tue, 11 Jun 2024 08:00:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwR9-0006EN-G0; Tue, 11 Jun 2024 08:00:59 +0000
Received: by outflank-mailman (input) for mailman id 737998;
 Tue, 11 Jun 2024 08:00:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rr1P=NN=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGwR7-0006EH-Ur
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:00:57 +0000
Received: from mail-oo1-xc2d.google.com (mail-oo1-xc2d.google.com
 [2607:f8b0:4864:20::c2d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id baffc0c3-27c8-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:00:56 +0200 (CEST)
Received: by mail-oo1-xc2d.google.com with SMTP id
 006d021491bc7-5bae81effd1so370589eaf.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 01:00:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: baffc0c3-27c8-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718092855; x=1718697655; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KEi3zevVXfjsj8/mtFCGT2oWlIyt4mIIfGQQN0Jtzkg=;
        b=PJ1q81f2O/DvDqMUv//micrTsRXAh5hMM2/gN1BmSQBfhiBo78gLAH0MesHIfcZdCp
         c0kTKfjvBQi3ZE9oZ3MfdRPKgx/a1Nom4qBdL/nEcDMpQKG6EfFz6ZwaS3uJO0YB10qo
         jt6yadofL8b7hozrBpbJ4oKEr+cafttUhuDwv3PC7fQwORu9oyXEBtsNsi9TkQTpvFGp
         hSt65Ev7+utvpmZowWPt9d/bortDRDk6UBQMCr3J1vVRAR07AsAL5BUBGgEjFjTiyi9L
         55Ca3FwpOOvSffZxbE+DSteU3f3YgBA2QdTMtH9iUJN5huh1vZehMULFouzGMNnl5ExY
         C0Mg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718092855; x=1718697655;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=KEi3zevVXfjsj8/mtFCGT2oWlIyt4mIIfGQQN0Jtzkg=;
        b=trHROC+lW6S1c0d1TQ5uotONTFZIBhZPv3NhNbY2aqJ0zwXdIb/8rSNrzRayXj6/t5
         OHb652UOOF5Woqf9rY/c2qrmCLz3rVha5k7THNbQ738ha8QHJbocs6I8cy6slICzm8Th
         f8sKP4+3nzJkRC7UsxMT1Yd4cJNqfBdFoHfPXx6VC4HbeAiyFPUUHaG7c4a88U5kzQrQ
         qiN4LVXJz5Y+jeffQQwxq2tuTvk7xyEBeV4aIufFBDX65/ESl4P6qidXQDpUkHMpkZYy
         oheHfk2oRmI7ePitwvWJxdOz5h5IOJ7BPaSg+NjYGPpuMagbGx2FNqkB8im26T9fxs5g
         QxCQ==
X-Forwarded-Encrypted: i=1; AJvYcCVIaAXHZwb0RY/z4qHWdLdzzFYGYxC8xCTqd26xHrjBP2C/2VyB+Yoq6EfxnfoNqCPOatsVAciVOkLXNhgCuTZxiHG6FQlwC2hk5ZMOO7o=
X-Gm-Message-State: AOJu0YyvC5oxmZSsdVRKBeF/Oy0hHoomqiA3IJuuOFq7IK7LRgoy1oBi
	PO74dpuqDqQNfyfDxiGOFs1njUKIpYPiS7p6h5p3Ck+wiZzt02/2oxNkl8yr/ZOAEGgYi6Cz3qB
	Le7EAx0iChTgC06JxqdOZRWJVxI9zfQ6U6F4=
X-Google-Smtp-Source: AGHT+IE74HBmDhRiWxm4vxl1WUdXHqN3oJKZlvaLuTbtgwqszDV2wvyUSVi4cy+NI5VND4dOHACaLZz9grWvhFwnaFU=
X-Received: by 2002:a05:6871:79d:b0:254:b30a:2ed0 with SMTP id
 586e51a60fabf-254b30a3048mr7454125fac.32.1718092854919; Tue, 11 Jun 2024
 01:00:54 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1718038855.git.w1benny@gmail.com> <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
 <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com>
In-Reply-To: <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Tue, 11 Jun 2024 10:00:43 +0200
Message-ID: <CAKBKdXiiZdz70nWx7kqp2S5RdbRsku+qtn6z9DBk44LZOgp3Qw@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a
 structured format
To: Jan Beulich <jbeulich@suse.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 11, 2024 at 8:41 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> > On 10.06.2024 19:10, Petr Beneš wrote:
> > > From: Petr Beneš <w1benny@gmail.com>
> >
> > > Encapsulate the altp2m options within a struct. This change is preparatory
> > > and sets the groundwork for introducing an additional parameter in a
> > > subsequent commit.
> >
> > > Signed-off-by: Petr Beneš <w1benny@gmail.com>
> > Acked-by: Julien Grall <jgrall@amazon.com> # arm
> > Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
>
> Looks like you lost Christian's ack for ...
>
> > ---
> >  tools/libs/light/libxl_create.c     | 6 +++---
> >  tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
>
> ... the adjustment of this file?

In the cover email, Christian only acked:

> tools/ocaml/libs/xc/xenctrl.ml       |   2 +
> tools/ocaml/libs/xc/xenctrl.mli      |   2 +
> tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---

P.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:01:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:01:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.737999.1144552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwRL-0006Xl-Q1; Tue, 11 Jun 2024 08:01:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 737999.1144552; Tue, 11 Jun 2024 08:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwRL-0006Xe-MO; Tue, 11 Jun 2024 08:01:11 +0000
Received: by outflank-mailman (input) for mailman id 737999;
 Tue, 11 Jun 2024 08:01:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=UxxS=NN=suse.de=osalvador@srs-se1.protection.inumbo.net>)
 id 1sGwRJ-0006EH-ET
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:01:09 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0ffaf2c-27c8-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:01:05 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 3DCEE20560;
 Tue, 11 Jun 2024 08:01:05 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 2124F13A55;
 Tue, 11 Jun 2024 08:01:04 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id cdtzBUAEaGY4VgAAD6G6ig
 (envelope-from <osalvador@suse.de>); Tue, 11 Jun 2024 08:01:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0ffaf2c-27c8-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718092865; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qHTLyIX7jLzbH1ivzBIljZd8gbAzcrr4kki4IuM/w4s=;
	b=eZBr7hv01QtJAQ63m8FhIkoxR6rfVWdIAtAM3tMSW7+msYF8sKzstxlxUcLGVQkzYd+xcg
	yOUDCjNkttpZD4LSyjDGBE0HLudW2/1+Hz5TvpMX2H9pqAKeYuboG4MyCcVVBme6zH6ZFU
	gW3gI5sxhgleQef05pj0FqPwwOu6HPU=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718092865;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qHTLyIX7jLzbH1ivzBIljZd8gbAzcrr4kki4IuM/w4s=;
	b=OXxCJyUQRJS9OhqC4VgzbBVBP3JyYprOS8XwZTJXMp2JoR9G+H5IoGoPm1Pm1PuGPVCFI5
	a/T9/prPxYrogmCQ==
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718092865; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qHTLyIX7jLzbH1ivzBIljZd8gbAzcrr4kki4IuM/w4s=;
	b=eZBr7hv01QtJAQ63m8FhIkoxR6rfVWdIAtAM3tMSW7+msYF8sKzstxlxUcLGVQkzYd+xcg
	yOUDCjNkttpZD4LSyjDGBE0HLudW2/1+Hz5TvpMX2H9pqAKeYuboG4MyCcVVBme6zH6ZFU
	gW3gI5sxhgleQef05pj0FqPwwOu6HPU=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718092865;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qHTLyIX7jLzbH1ivzBIljZd8gbAzcrr4kki4IuM/w4s=;
	b=OXxCJyUQRJS9OhqC4VgzbBVBP3JyYprOS8XwZTJXMp2JoR9G+H5IoGoPm1Pm1PuGPVCFI5
	a/T9/prPxYrogmCQ==
Date: Tue, 11 Jun 2024 10:01:02 +0200
From: Oscar Salvador <osalvador@suse.de>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
	Andrew Morton <akpm@linux-foundation.org>,
	Mike Rapoport <rppt@kernel.org>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
	Eugenio =?iso-8859-1?Q?P=E9rez?= <eperezma@redhat.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Alexander Potapenko <glider@google.com>,
	Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of
 !ZONE_DEVICE with PageOffline() instead of PageReserved()
Message-ID: <ZmgEPgjyG4EfYkNM@localhost.localdomain>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-3-david@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240607090939.89524-3-david@redhat.com>
X-Spam-Level: 
X-Spamd-Result: default: False [-4.26 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.16)[-0.822];
	MIME_GOOD(-0.10)[text/plain];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	MIME_TRACE(0.00)[0:+];
	MISSING_XM_UA(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[23];
	SUBJECT_HAS_EXCLAIM(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	TO_DN_SOME(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.de:email]
X-Spam-Score: -4.26
X-Spam-Flag: NO

On Fri, Jun 07, 2024 at 11:09:37AM +0200, David Hildenbrand wrote:
> We currently initialize the memmap such that PG_reserved is set and the
> refcount of the page is 1. In virtio-mem code, we have to manually clear
> that PG_reserved flag to make memory offlining with partially hotplugged
> memory blocks possible: has_unmovable_pages() would otherwise bail out on
> such pages.
> 
> We want to avoid PG_reserved where possible and move to typed pages
> instead. Further, we want to further enlighten memory offlining code about
> PG_offline: offline pages in an online memory section. One example is
> handling managed page count adjustments in a cleaner way during memory
> offlining.
> 
> So let's initialize the pages with PG_offline instead of PG_reserved.
> generic_online_page()->__free_pages_core() will now clear that flag before
> handing that memory to the buddy.
> 
> Note that the page refcount is still 1 and would forbid offlining of such
> memory except when special care is take during GOING_OFFLINE as
> currently only implemented by virtio-mem.
> 
> With this change, we can now get non-PageReserved() pages in the XEN
> balloon list. From what I can tell, that can already happen via
> decrease_reservation(), so that should be fine.
> 
> HV-balloon should not really observe a change: partial online memory
> blocks still cannot get surprise-offlined, because the refcount of these
> PageOffline() pages is 1.
> 
> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
> hotplugged pages are now PageOffline() instead of PageReserved() before
> they are handed over to the buddy.
> 
> We'll leave the ZONE_DEVICE case alone for now.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>

Acked-by: Oscar Salvador <osalvador@suse.de> # for the generic
memory-hotplug bits


-- 
Oscar Salvador
SUSE Labs


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:02:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738010.1144562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwSm-0007Wi-90; Tue, 11 Jun 2024 08:02:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738010.1144562; Tue, 11 Jun 2024 08:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwSm-0007Wb-5h; Tue, 11 Jun 2024 08:02:40 +0000
Received: by outflank-mailman (input) for mailman id 738010;
 Tue, 11 Jun 2024 08:02:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwSl-0007WR-Bo
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:02:39 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f70c4b51-27c8-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:02:38 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 58DC4CE10F6;
 Tue, 11 Jun 2024 08:02:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CAD55C2BD10;
 Tue, 11 Jun 2024 08:02:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f70c4b51-27c8-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718092954;
	bh=9448E2ZQUrvNp7ZURrtI3/D86m7bLK+HS2+4F3/kAbo=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=cVm1t8g2Dmo9pyaeFexP0cVZp2V9+zrzLaosx0fHWpCgyVMjhJafz/YjIRFcP6e/N
	 JiQrRCWm2cXvKJW25gK4jKXdI9z1r5PiGAPnVoL3Jgi9iLA+RTBXO3LhS10/kQ9VJv
	 S0HFEAZmlsLJdqInbsOGCHZ7lARtlpYV0wnusyawzuHxpb/RclmLX5rlq1DxBTsAuq
	 DUc+w0vOa4IhkCXuAcAxPK35SsjdXOHX0tyGNoT44IBIj7FNuzH/TkqH47YX1TuM03
	 t8bqET45kpSSPx+5HvwwYL7yfxY/bG8VgPBzRal4b99z62CNkY4oDCrnC6ah7dxmTJ
	 PwVnGOKYPVpoA==
Message-ID: <01366bae-699e-45dc-bad1-9541883a8b42@kernel.org>
Date: Tue, 11 Jun 2024 17:02:28 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 14/26] block: move the nonrot flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-15-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-15-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the norot flag into the queue_limits feature field so that it can be

s/norot/nonrot

> set atomically and all I/O is frozen when changing the flag.

and... -> with the queue frozen when... ?

> 
> Use the chance to switch to defaulting to non-rotational and require
> the driver to opt into rotational, which matches the polarity of the
> sysfs interface.
> 
> For the z2ram, ps3vram, 2x memstick, ubiblock and dcssblk the new
> rotational flag is not set as they clearly are not rotational despite
> this being a behavior change.  There are some other drivers that
> unconditionally set the rotational flag to keep the existing behavior
> as they arguably can be used on rotational devices even if that is
> probably not their main use today (e.g. virtio_blk and drbd).
> 
> The flag is automatically inherited in blk_stack_limits matching the
> existing behavior in dm and md.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Other than that, looks good to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:04:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:04:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738014.1144571 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwUa-0008I1-Ib; Tue, 11 Jun 2024 08:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738014.1144571; Tue, 11 Jun 2024 08:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwUa-0008Hu-Fn; Tue, 11 Jun 2024 08:04:32 +0000
Received: by outflank-mailman (input) for mailman id 738014;
 Tue, 11 Jun 2024 08:04:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I6Ds=NN=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sGwUZ-0008Ho-5z
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:04:31 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 39969dd0-27c9-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:04:28 +0200 (CEST)
Received: from mail-lj1-f200.google.com (mail-lj1-f200.google.com
 [209.85.208.200]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-360-Qcpa36pOOuW6iPMUKDwWXA-1; Tue, 11 Jun 2024 04:04:19 -0400
Received: by mail-lj1-f200.google.com with SMTP id
 38308e7fff4ca-2eae96cecaeso30962451fa.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 01:04:19 -0700 (PDT)
Received: from ?IPV6:2003:cb:c748:ba00:1c00:48ea:7b5a:c12b?
 (p200300cbc748ba001c0048ea7b5ac12b.dip0.t-ipconnect.de.
 [2003:cb:c748:ba00:1c00:48ea:7b5a:c12b])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42182ed2b23sm75337235e9.18.2024.06.11.01.04.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 01:04:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39969dd0-27c9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718093067;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=hqXBYKFMfHuGo/oWXFTKDIQ0hmdU9g1oFp9QTADLe9k=;
	b=dec9CIfZetR2cYPaBbivWEvSrnc2UqHVfkxbKoFbs9gsRusG1hLtHnQuEfcaR/E2YVn6Rb
	o4NGBkvh0wDqcOyZEDqid/Z56VCcCVH3/VvZoVmyqTqtcaaILI0z0BEOp8s+WLFVqdKWmk
	m9qowwUdWoKYRxEVrBsS9AykH9FEvRw=
X-MC-Unique: Qcpa36pOOuW6iPMUKDwWXA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718093058; x=1718697858;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=hqXBYKFMfHuGo/oWXFTKDIQ0hmdU9g1oFp9QTADLe9k=;
        b=fZUrJ8P2MsDv0GOY7o5YtHpFSlubMB1jllp6DmLN6NfRi0OrwiSIitlDEWlauCL5K/
         mx8vvqNCBaXlSGWqnFYpQ4oh/uURMFO9OzhyZY1pjr/rpHrCA7Vm9D+VbIAAK2dqnwZl
         OAjtS6PgXIdqIXrgWAjGtI6knVO3Fl0F0PKySdG7R8lGrmeChyIUhGfCyaXnYrUCbbnp
         aVV7b1G2rwqWLHLA4Y7r8C5JHF/HVxLKNUm3s2CsYjxSbN9samR7Mt+zwQ6GjGl87XNx
         VHnQsOPs0BUkE2R0eLftBZ6maZparfCnA+75pdO+Cg8HsEgaMEeRWYg5r+dl8BZr/BEx
         QtmA==
X-Forwarded-Encrypted: i=1; AJvYcCVTyS/569+WnBZ3fKl8bUdyyVm7frLGQnP64ffBN91lU6fDiP+DY7X8BFkKkHSXrQ7pEsYoi8io5frEiCKiyNYTCjXrPk0ahBTfkVV8g+E=
X-Gm-Message-State: AOJu0YwCwlA9yoY0mcIG/OSDIjqTyDgZFbQk4PEPufVRnKDP+613dFrz
	SMxtXLfvxVSvehpU/9iYCectqKPn06UATNKHUkTzM2xrW3b+ZlYmAXZQpTKklvfb5ZnLGaao6RS
	1zWzQQF5Jj+MhvUJaP7CPfABpQxWEh0vLW5ZnH7C/i3tObEFDuAe2302Op1gj9jyW
X-Received: by 2002:a2e:87cb:0:b0:2eb:f82a:d8d2 with SMTP id 38308e7fff4ca-2ebf82adfe7mr159631fa.41.1718093058332;
        Tue, 11 Jun 2024 01:04:18 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IFKToayglVvfmqXbbZxIgMw6bY+YaaS9CFQLQWFoPjFHVHH1W6TdHfgwTXUw4yHKqwXuMGUGQ==
X-Received: by 2002:a2e:87cb:0:b0:2eb:f82a:d8d2 with SMTP id 38308e7fff4ca-2ebf82adfe7mr159231fa.41.1718093057647;
        Tue, 11 Jun 2024 01:04:17 -0700 (PDT)
Message-ID: <30b5d493-b7c2-4e63-86c1-dcc73d21dc15@redhat.com>
Date: Tue, 11 Jun 2024 10:04:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of
 !ZONE_DEVICE with PageOffline() instead of PageReserved()
To: Oscar Salvador <osalvador@suse.de>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
 Andrew Morton <akpm@linux-foundation.org>, Mike Rapoport <rppt@kernel.org>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-3-david@redhat.com>
 <ZmZ_3Xc7fdrL1R15@localhost.localdomain>
 <5d9583e1-3374-437d-8eea-6ab1e1400a30@redhat.com>
 <ZmgAsolx7SAHeDW7@localhost.localdomain>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <ZmgAsolx7SAHeDW7@localhost.localdomain>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11.06.24 09:45, Oscar Salvador wrote:
> On Mon, Jun 10, 2024 at 10:56:02AM +0200, David Hildenbrand wrote:
>> There are fortunately not that many left.
>>
>> I'd even say marking them (vmemmap) reserved is more wrong than right: note
>> that ordinary vmemmap pages after memory hotplug are not reserved! Only
>> bootmem should be reserved.
> 
> Ok, that is a very good point that I missed.
> I thought that hotplugged-vmemmap pages (not selfhosted) were marked as
> Reserved, that is why I thought this would be inconsistent.
> But then, if that is the case, I think we are safe, as the kernel can
> already encounter vmemmap pages that are not reserved and deals with
> them somehow.
> 
>> Let's take a look at the relevant core-mm ones (arch stuff is mostly just for MMIO
>> remapping)
>>
> ...
>> Any PageReserved user that I am missing, or why we should handle these
>> vmemmap pages differently than the ones allocated during ordinary memory
>> hotplug?
> 
> No, I cannot think of a reason why normal vmemmap pages should behave
> differently from self-hosted ones.
> 
> I was also confused because I thought that after this change
> pfn_to_online_page() would be different for self-hosted vmemmap pages,
> because I thought that somehow we relied on PageOffline(), but it is not
> the case.

Fortunately not :) PageFakeOffline() or PageLogicallyOffline() might be
clearer, but I don't quite like these names. If you have a better idea,
please let me know.

> 
>> In the future, we might want to consider using a dedicated page type for
>> them, so we can stop using a bit that doesn't allow to reliably identify
>> them. (we should mark all vmemmap with that type then)
> 
> Yes, an all-vmemmap page type would be a good thing, so we do not have
> to special-case them.
> 
> Just one last thing.
> Now self-hosted vmemmap pages will have PageOffline cleared, and that
> will still remain after the memory block they belong to has gone
> offline, which is OK because those vmemmap pages stick around until the
> chunk of memory gets removed.

Yes, and that memmap might even get poisoned in debug kernels to catch 
any wrong access.

> 
> Ok, just wanted to convince myself that there will be no surprises.
> 
> Thanks, David, for clarifying.

Thanks for the review and raising that. I'll add more details to the 
patch description!

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:06:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:06:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738022.1144582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwWf-0000Q3-V5; Tue, 11 Jun 2024 08:06:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738022.1144582; Tue, 11 Jun 2024 08:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwWf-0000Pw-Rf; Tue, 11 Jun 2024 08:06:41 +0000
Received: by outflank-mailman (input) for mailman id 738022;
 Tue, 11 Jun 2024 08:06:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwWf-0000Ow-0s
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:06:41 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 86b65d7b-27c9-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:06:40 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id A0AEDCE193F;
 Tue, 11 Jun 2024 08:06:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 59A7BC2BD10;
 Tue, 11 Jun 2024 08:06:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86b65d7b-27c9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718093195;
	bh=jCn0oX3HrJ6Qlcw/LuQoAhJRxhLaDvFIki6GqIikVDs=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=d/4pmxeJzHoEeQ+g2h36PLWMNipYFVLUgldUlVqiJ/oHVrEeGGWvSVD8Qj0WOLs1h
	 W+bfZgsci1Mj1j7xr+uUvjowbUTryqU8M7niWdxvm06YM2JVc8jmnbjYFRmHHcrBjc
	 35EtrgT1CO5oyRbn/EfIn5kNPHJPwVOwHupvi057HyipWOQnDb4p6IBWsy9PJTe2z7
	 aguwef6PSD/S3jnHhAFsQmA+8/MCfLVtbrqTe0OoYvK6gaKfrbg4IMTS8YbzdTrpnX
	 ruwuQZ89i04k1WQRZE0wdLm48U3gwb4DUnNOpOrrhv3eKIrI6XQd7uuU3njn1J65js
	 90POvgbQ696Tw==
Message-ID: <0f01ed9c-6f85-427c-9690-1551e67e46a9@kernel.org>
Date: Tue, 11 Jun 2024 17:06:29 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 15/26] block: move the add_random flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-16-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-16-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the add_random flag into the queue_limits feature field so that it
> can be set atomically and all I/O is frozen when changing the flag.

Same remark as for the previous patches about the end of this sentence.

> 
> Note that this also removes code from dm to clear the flag based on
> the underlying devices, which can't be reached as dm devices will
> always start out without the flag set.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Other than that, looks OK to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:09:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:09:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738027.1144591 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwZo-0001fI-CD; Tue, 11 Jun 2024 08:09:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738027.1144591; Tue, 11 Jun 2024 08:09:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwZo-0001fB-9e; Tue, 11 Jun 2024 08:09:56 +0000
Received: by outflank-mailman (input) for mailman id 738027;
 Tue, 11 Jun 2024 08:09:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwZm-0001f5-OS
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:09:54 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fb3b6b5f-27c9-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:09:53 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 7900960D2E;
 Tue, 11 Jun 2024 08:09:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8F729C2BD10;
 Tue, 11 Jun 2024 08:09:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb3b6b5f-27c9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718093392;
	bh=q0QC/DwdDecsb+SsO2WPgUlAOrUNDUio/s56qnb48bM=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=ASIoL/u9H/6b4lUyYk5oEzSADY2SPITglYQKbn22RE2v5Zmv+Bdu4yXt2dx4LBuyj
	 NVG1rUDMHnNTWyjPGwlGMS7q0dEKke4PAWQg/UzftKu/Ly2DNOhNfxEkKQj6EFDaHU
	 gbMPLXlbaDGII4KmlbUUWCoStYYGCahBnCwU3sHyUUKQqZ1YdU0vcRocoxwls2ROg2
	 gKF/wxkN6x65ww9rMbS9NJlQNYUg7Y5U2acfwFYdpXRWwDXOcSxo1CVQbqP3uooiqP
	 0hvYmqaICGemmnL938SwljJXBPejQKyIB0+ALAtSSclYdQCRHYhtFi/CQNV4wF5aIA
	 8HiTkYhUDl6Cg==
Message-ID: <d51e4163-99e3-4435-870d-faef3887ab6a@kernel.org>
Date: Tue, 11 Jun 2024 17:09:45 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 16/26] block: move the io_stat flag setting to
 queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-17-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-17-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the io_stat flag into the queue_limits feature field so that it
> can be set atomically and all I/O is frozen when changing the flag.

Why a feature? It seems more appropriate for io_stat to be a flag rather
than a feature, as that is a block-layer thing rather than a device
characteristic, no?

> 
> Simplify md and dm to set the flag unconditionally instead of avoiding
> setting a simple flag for cases where it already is set by other means,
> which is a bit pointless.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:11:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738031.1144601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwbI-0003NO-MO; Tue, 11 Jun 2024 08:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738031.1144601; Tue, 11 Jun 2024 08:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwbI-0003NH-JY; Tue, 11 Jun 2024 08:11:28 +0000
Received: by outflank-mailman (input) for mailman id 738031;
 Tue, 11 Jun 2024 08:11:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwbG-0003Lx-D1
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:11:26 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 31925599-27ca-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:11:24 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 9402920557;
 Tue, 11 Jun 2024 08:11:23 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id EBB04137DF;
 Tue, 11 Jun 2024 08:11:22 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id DmIXOaoGaGZ8WQAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31925599-27ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718093483; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wnQIUpYdHgOL/hM9GsXN8B0Ts994fdsauoO8bl3CExk=;
	b=Xw7LckrVyqrAakH7KSEDESVADg9MT2w/sOoFpkdH/KzwzUXrjoXFGlasjukRVjQpOXbqnR
	VEDyTRo7qiPPyB7GBgictg+KZT62wfeU0rWkpx7ertnwVCjRX97V2ITAvcIuwzNOYZyJWm
	R6IQoIRGoRGYPlreoXuIGgLHDGiR/+I=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718093483;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wnQIUpYdHgOL/hM9GsXN8B0Ts994fdsauoO8bl3CExk=;
	b=tY/SuqTuiamIUOCP9Gy6Edetjtr/Waqk3ZBvBpF4W/PGm0bmy0haYRoNg/wzfxeSc++Z+2
	TUnVbKx5YmbhspBg==
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718093483; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wnQIUpYdHgOL/hM9GsXN8B0Ts994fdsauoO8bl3CExk=;
	b=Xw7LckrVyqrAakH7KSEDESVADg9MT2w/sOoFpkdH/KzwzUXrjoXFGlasjukRVjQpOXbqnR
	VEDyTRo7qiPPyB7GBgictg+KZT62wfeU0rWkpx7ertnwVCjRX97V2ITAvcIuwzNOYZyJWm
	R6IQoIRGoRGYPlreoXuIGgLHDGiR/+I=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718093483;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=wnQIUpYdHgOL/hM9GsXN8B0Ts994fdsauoO8bl3CExk=;
	b=tY/SuqTuiamIUOCP9Gy6Edetjtr/Waqk3ZBvBpF4W/PGm0bmy0haYRoNg/wzfxeSc++Z+2
	TUnVbKx5YmbhspBg==
Message-ID: <f85620ad-a19b-400d-bae7-29a1815fc33d@suse.de>
Date: Tue, 11 Jun 2024 10:11:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 01/26] sd: fix sd_is_zoned
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-2-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-2-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Flag: NO
X-Spam-Score: -8.29
X-Spam-Level: 
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-0.996];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.de:email]

On 6/11/24 07:19, Christoph Hellwig wrote:
> Since commit 7437bb73f087 ("block: remove support for the host aware zone
> model"), only ZBC devices expose a zoned access model.  sd_is_zoned is
> used to check for that and should thus return false for host-aware
> devices.
> 
> Fixes: 7437bb73f087 ("block: remove support for the host aware zone model")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/scsi/sd.h     | 7 ++++++-
>   drivers/scsi/sd_zbc.c | 7 +------
>   2 files changed, 7 insertions(+), 7 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:12:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:12:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738037.1144611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwc5-00046V-1y; Tue, 11 Jun 2024 08:12:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738037.1144611; Tue, 11 Jun 2024 08:12:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwc4-00046O-Vl; Tue, 11 Jun 2024 08:12:16 +0000
Received: by outflank-mailman (input) for mailman id 738037;
 Tue, 11 Jun 2024 08:12:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwc4-00046C-CL
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:12:16 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 500ea7ca-27ca-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:12:15 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id F414222D0F;
 Tue, 11 Jun 2024 08:12:14 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id EBB43137DF;
 Tue, 11 Jun 2024 08:12:13 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id VT1AN90GaGbKWQAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:12:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 500ea7ca-27ca-11ef-90a3-e314d9c70b13
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <4032635d-a17f-44e5-a547-b175fa271945@suse.de>
Date: Tue, 11 Jun 2024 10:12:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-3-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-3-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spam-Level: 
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Rspamd-Queue-Id: F414222D0F
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Action: no action

On 6/11/24 07:19, Christoph Hellwig wrote:
> Move a bit of code that sets up the zone flag and the write granularity
> into sd_zbc_read_zones to be with the rest of the zoned limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/scsi/sd.c     | 21 +--------------------
>   drivers/scsi/sd_zbc.c | 13 ++++++++++++-
>   2 files changed, 13 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> index 85b45345a27739..5bfed61c70db8f 100644
> --- a/drivers/scsi/sd.c
> +++ b/drivers/scsi/sd.c
> @@ -3308,29 +3308,10 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
>   		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
>   	}
>   
> -
> -#ifdef CONFIG_BLK_DEV_ZONED /* sd_probe rejects ZBD devices early otherwise */
> -	if (sdkp->device->type == TYPE_ZBC) {
> -		lim->zoned = true;
> -
> -		/*
> -		 * Per ZBC and ZAC specifications, writes in sequential write
> -		 * required zones of host-managed devices must be aligned to
> -		 * the device physical block size.
> -		 */
> -		lim->zone_write_granularity = sdkp->physical_block_size;
> -	} else {
> -		/*
> -		 * Host-aware devices are treated as conventional.
> -		 */
> -		lim->zoned = false;
> -	}
> -#endif /* CONFIG_BLK_DEV_ZONED */
> -
>   	if (!sdkp->first_scan)
>   		return;
>   
> -	if (lim->zoned)
> +	if (sdkp->device->type == TYPE_ZBC)

Why not sd_is_zoned()?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:12:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:12:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738043.1144621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwcj-0004oo-BC; Tue, 11 Jun 2024 08:12:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738043.1144621; Tue, 11 Jun 2024 08:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwcj-0004oh-7Z; Tue, 11 Jun 2024 08:12:57 +0000
Received: by outflank-mailman (input) for mailman id 738043;
 Tue, 11 Jun 2024 08:12:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwci-0004cV-6u
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:12:56 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6551dd5e-27ca-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:12:53 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 3E604CE0988;
 Tue, 11 Jun 2024 08:12:46 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 85352C2BD10;
 Tue, 11 Jun 2024 08:12:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6551dd5e-27ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718093565;
	bh=zMBbLHWiI0o4/6MjscYA2/ULza201WjvhSMdL22ilmQ=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=efMUQjqzDR6/6LZgdCczIkMTU67PAqpDM0RBu6vBwxbRTCrqV/SLQPjysAgXVIH+u
	 Gs7FwJjGTrCL01zQP92DMiL3AF1rQHaJ0EqlTORyNynpUYXTzsuY6sJ9mfoqPT1Tvg
	 BSo7cvDW+fM7fvH1aN5LdfHWfYWXWuQSJiOAXVL+z2KUCiOWthz8XV9sMCo6FSpXv+
	 2SHsDignO7grM22XDSxgAJ0znadsNiLDmWusigAzWpriOV5Y9jvSmQhMPwNSEb8Bvj
	 SvInHsx54FSttIPvfIwG30ulL5P3X40mKZipSzOnWLUzNb0kOcJMR/irZgSdTd0fnE
	 LJc4doaBwGC7g==
Message-ID: <a10087ad-8b2c-4a6c-accb-fb1e8015e704@kernel.org>
Date: Tue, 11 Jun 2024 17:12:40 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 17/26] block: move the stable_write flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-18-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-18-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the io_stat flag into the queue_limits feature field so that it can

s/io_stat/stable_write

> be set atomically and all I/O is frozen when changing the flag.
> 
> The flag is now inherited by blk_stack_limits, which greatly simplifies
> the code in dm, and fixed md which previously did not pass on the flag
> set on lower devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Other than the nit above, looks OK to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:14:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:14:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738047.1144631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwds-0005UD-JW; Tue, 11 Jun 2024 08:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738047.1144631; Tue, 11 Jun 2024 08:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwds-0005U6-Gl; Tue, 11 Jun 2024 08:14:08 +0000
Received: by outflank-mailman (input) for mailman id 738047;
 Tue, 11 Jun 2024 08:14:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwdq-0005U0-R4
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:14:06 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8ea37c12-27ca-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:14:05 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 69373CE193F;
 Tue, 11 Jun 2024 08:13:57 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 17C1DC2BD10;
 Tue, 11 Jun 2024 08:13:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ea37c12-27ca-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718093636;
	bh=Ts7aLdf8C9OWwGOUYNUO/M/a+9bfhQDB/UXt0XvRgP8=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=rofJBX5JNUQ4+kum9tXFynL7EYsPL7Q0KpCGApK8UUywKKWzZFWf//NTc6ETwxvAj
	 a8A/PpgedR3JFeoFFYagjhmZDCoIfcDwvtbjsSmqm4tUPrc5+/CNJBuJdjSU/gAghn
	 gFAjLu8rDiAmvD+AGVC2HFwxhOJxaDRk0LEG637q0x+wWdh1qDqXecrioXhw/WgLq/
	 z6eYyXSBzOA5E93r3CufKFnmCp0Rd06eblngnOUKkmVMJslwphJuJlwn0dieAbpnXx
	 mIsJEpZRQysVLdUZy59Nxz+wB/+28rVZ5Fwtt4tdw0yLdp6mqNz/XQPlvKYsJFt70s
	 9HDIcvYwKXcTA==
Message-ID: <0d4a7361-f3f1-4014-af92-9abd45223fed@kernel.org>
Date: Tue, 11 Jun 2024 17:13:50 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 18/26] block: move the synchronous flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-19-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-19-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the synchronous flag into the queue_limits feature field so that it
> can be set atomically and all I/O is frozen when changing the flag.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:15:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:15:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738053.1144642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwei-0006Fj-Rg; Tue, 11 Jun 2024 08:15:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738053.1144642; Tue, 11 Jun 2024 08:15:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwei-0006Fc-P5; Tue, 11 Jun 2024 08:15:00 +0000
Received: by outflank-mailman (input) for mailman id 738053;
 Tue, 11 Jun 2024 08:14:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGweh-0005rY-7e
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:14:59 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b0c4b3ed-27ca-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:14:57 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2A4D222D25;
 Tue, 11 Jun 2024 08:14:57 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id BF8FC137DF;
 Tue, 11 Jun 2024 08:14:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id /IkVLYAHaGaWWgAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:14:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0c4b3ed-27ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718093697; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TvaIIm79eqqPbLminoznJSDrbLqw3mW5mHXwDZlXypc=;
	b=UAVg+dgYJHfI7a6nZAEcbYAtWsk0klMOZYa2iy7C0rW5az0ssEDtiefWG3AdjIjXybWcmF
	kOlJh4yiiUlDOqqkpdL3T6JLzA2oV66njCG0MhC2FVDPhA9gftHedq8olQocDeOTfFDu9K
	2Xqf28MwCvKZCyx7CKXTGSS0ovR6/js=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718093697;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TvaIIm79eqqPbLminoznJSDrbLqw3mW5mHXwDZlXypc=;
	b=2OUCyC8guBhwEZpETd6z76Sh3cvmPC9h5I3QPoR5PyJO0aX1s3/L4DedyeoNADr+ncNmNZ
	UJWfCWpPENXpODCQ==
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <00f59eb6-f6fc-4d93-8d45-6ce6a2a200ed@suse.de>
Date: Tue, 11 Jun 2024 10:14:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 03/26] loop: stop using loop_reconfigure_limits in
 __loop_clr_fd
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-4-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-4-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Level: 
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	MIME_TRACE(0.00)[0:+];
	ARC_NA(0.00)[];
	FROM_HAS_DN(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	FROM_EQ_ENVFROM(0.00)[];
	TO_DN_SOME(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[lst.de:email,imap1.dmz-prg2.suse.org:helo,suse.de:email]
X-Spam-Score: -8.29
X-Spam-Flag: NO

On 6/11/24 07:19, Christoph Hellwig wrote:
> __loop_clr_fd wants to clear all settings on the device.  Prepare for
> moving more settings into the block limits by open coding
> loop_reconfigure_limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 10 +++++++++-
>   1 file changed, 9 insertions(+), 1 deletion(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:15:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738058.1144652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwfZ-0006ym-4Q; Tue, 11 Jun 2024 08:15:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738058.1144652; Tue, 11 Jun 2024 08:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwfZ-0006yf-1V; Tue, 11 Jun 2024 08:15:53 +0000
Received: by outflank-mailman (input) for mailman id 738058;
 Tue, 11 Jun 2024 08:15:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwfX-0006yV-B1
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:15:51 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cfa72e69-27ca-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:15:49 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E567122D2B;
 Tue, 11 Jun 2024 08:15:48 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 9054B137DF;
 Tue, 11 Jun 2024 08:15:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id OPB/IbQHaGbxWgAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:15:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfa72e69-27ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718093749; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2r7MKAg6vOI9CQHj9QnmN2izcLv5sbG9WOrlm/Tmo8I=;
	b=Msni/WWtQLJC7aiHZZiIPHdLoVyj8dvs0ywS03oXyJvZOgwHsNbEtMFNNO/rKPPwIVlOTY
	c1E1VE43mVs+c82WXEHd5kGOw9r1nGpnTlv/8hcy40zaNCRaYIIWc1L4DQqC+daZ2ZJ4FO
	MR9U/2yYW0xdzUkr2wVwGOmAbopiS+M=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718093749;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2r7MKAg6vOI9CQHj9QnmN2izcLv5sbG9WOrlm/Tmo8I=;
	b=jzzrLP1ZySMJr/ZpnKFyGXiwELwFPv7+mdiJjZVwlAeIGClmXFhoxCoKqdvUasa9E9ZDL2
	k/hSZ7RUw5WKZYDA==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718093748; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2r7MKAg6vOI9CQHj9QnmN2izcLv5sbG9WOrlm/Tmo8I=;
	b=Qr6H2VmmwLwr7pMkTPc1IokvvR4M2YocGyeffhfCeL+fAvcnsb5fEvomUeoLAXoe+JGH4e
	8M3jjuywd208xFOoLE6NVO6cmyBZFpxxulLERQX4AFhnF4GlsMoMZtCanQx148T9MG9Uqk
	uPc16jDnqAa/RN/AE7ZGr8tZbqoW/Oo=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718093748;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2r7MKAg6vOI9CQHj9QnmN2izcLv5sbG9WOrlm/Tmo8I=;
	b=9xNbjaTNKIkHrIWVNAWqobr8/55RzaflSr0XLs4aXDQe/ZVLfdWVr681lzwfO99s/69NMi
	1sxmXZD4BgTrA7Aw==
Message-ID: <b586980b-0a5a-4371-bcb7-e578633ed71c@suse.de>
Date: Tue, 11 Jun 2024 10:15:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 04/26] loop: always update discard settings in
 loop_reconfigure_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-5-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-5-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Level: 
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	MIME_TRACE(0.00)[0:+];
	ARC_NA(0.00)[];
	FROM_HAS_DN(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	FROM_EQ_ENVFROM(0.00)[];
	TO_DN_SOME(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[lst.de:email,imap1.dmz-prg2.suse.org:helo,suse.de:email]
X-Spam-Score: -8.29
X-Spam-Flag: NO

On 6/11/24 07:19, Christoph Hellwig wrote:
> Simplify loop_reconfigure_limits by always updating the discard limits.
> This adds a little more work to loop_set_block_size, but doesn't change
> the outcome as the discard flag won't change.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 10 ++++------
>   1 file changed, 4 insertions(+), 6 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:16:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:16:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738068.1144662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwgR-0007Um-Da; Tue, 11 Jun 2024 08:16:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738068.1144662; Tue, 11 Jun 2024 08:16:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwgR-0007Uf-A7; Tue, 11 Jun 2024 08:16:47 +0000
Received: by outflank-mailman (input) for mailman id 738068;
 Tue, 11 Jun 2024 08:16:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwgQ-0006yV-3S
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:16:46 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eff57d90-27ca-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:16:44 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id E04C060C3E;
 Tue, 11 Jun 2024 08:16:42 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2A204C2BD10;
 Tue, 11 Jun 2024 08:16:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eff57d90-27ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718093802;
	bh=0xCsZy2WAGO31vOt2jMux3XsQG5JzkwMLoCsgqbDM7s=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=BshMQEOFKHQCAeU7Uq7D/QqRf+yvZZUvvFAljFVHRMbxOyzti+TjMe0D1IJKi9gMV
	 zyUA+05eoWqeEmhuLL8uoXj2uFvoj39+EiUXirtXWtXQFdTuvV4hWDH4qLaMCjLfy+
	 AnqgnZ0xCGWpIVHNkCHGqby5QcQzPelY2FXqqGrtk4KASWZEpNZldaxiF+wB3Rw9dc
	 xbMnzOX/e3WR+yoGOTEoauLs/X9V3dTUosPnw+YpOF05bIeErHzOz8DqMDnd798OeG
	 g23Oj4HG5YREgeqkRw2UIxm1G6V2rlg7SUCDzzjKvw7jq8xou/AYaSDtybrj1/IgEi
	 S8SrLRUjTG4qQ==
Message-ID: <4845aae8-ad03-407e-bf31-f164b8f684d4@kernel.org>
Date: Tue, 11 Jun 2024 17:16:37 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 19/26] block: move the nowait flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-20-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-20-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the nowait flag into the queue_limits feature field so that it
> can be set atomically and all I/O is frozen when changing the flag.
> 
> Stacking drivers are simplified in that they now can simply set the
> flag, and blk_stack_limits will clear it when the features is not
> supported by any of the underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>


> @@ -1825,9 +1815,7 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
>  	int r;
>  
>  	if (dm_table_supports_nowait(t))
> -		blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
> -	else
> -		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, q);
> +		limits->features &= ~BLK_FEAT_NOWAIT;

Shouldn't you set the flag here instead of clearing it ?

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:17:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:17:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738070.1144672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwgf-0007pl-OU; Tue, 11 Jun 2024 08:17:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738070.1144672; Tue, 11 Jun 2024 08:17:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwgf-0007pb-Kz; Tue, 11 Jun 2024 08:17:01 +0000
Received: by outflank-mailman (input) for mailman id 738070;
 Tue, 11 Jun 2024 08:17:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwge-0006yV-Aa
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:17:00 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8fcdb66-27ca-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:16:58 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2899F22D2F;
 Tue, 11 Jun 2024 08:16:58 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id BFD26137DF;
 Tue, 11 Jun 2024 08:16:57 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 2a1qLvkHaGY8WwAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:16:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8fcdb66-27ca-11ef-b4bb-af5377834399
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <e25047ab-2c01-4704-b554-df85a8d34cd7@suse.de>
Date: Tue, 11 Jun 2024 10:16:57 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 05/26] loop: regularize upgrading the block size for direct
 I/O
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-6-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-6-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spam-Level: 
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Rspamd-Queue-Id: 2899F22D2F
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Action: no action

On 6/11/24 07:19, Christoph Hellwig wrote:
> The LOOP_CONFIGURE path automatically upgrades the block size to that
> of the underlying file for O_DIRECT file descriptors, but the
> LOOP_SET_BLOCK_SIZE path does not.  Fix this by lifting the code to
> pick the block size into common code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 25 +++++++++++++++----------
>   1 file changed, 15 insertions(+), 10 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:17:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738079.1144682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwhL-0000Lv-1O; Tue, 11 Jun 2024 08:17:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738079.1144682; Tue, 11 Jun 2024 08:17:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwhK-0000Lo-Td; Tue, 11 Jun 2024 08:17:42 +0000
Received: by outflank-mailman (input) for mailman id 738079;
 Tue, 11 Jun 2024 08:17:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwhJ-0000Hi-LO
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:17:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1198be18-27cb-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:17:40 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 964F460C99;
 Tue, 11 Jun 2024 08:17:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 001E2C2BD10;
 Tue, 11 Jun 2024 08:17:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1198be18-27cb-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718093859;
	bh=8hoK2gKDaRsBvhpNjKWXz1NiL4R/W55wPg0pcyUnTg8=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=vRlCqU6kXmdmYokXUm2SdAVQKVkuohc6QD634jV5YaAQTtPSpRt6l4PYnhfTViNDw
	 sbGdXs6SfI8gJDA08QAexH1JSxVl9S/Par06DxrUB4GI0tIhYG/yn58Ww6l/0QCu56
	 WseAAZIp7UfGEfNQUZDJH/9CKtHnu/6zi6luI2GQJ1poRNm+fPzSuf2EsNdvmJ7qNi
	 nJSmsZ8E48CIwWwRJgt5yxhenbvusswprhVLBGGicCH3/2nSpxzVzmGUhEFnCcducJ
	 jqZanUchjNsStFy5gRtuzoGaUYgOY6jNc667Bvz8/tDlqfTuQVB8PXPIosnj3IAUtu
	 ilukga62R9dEQ==
Message-ID: <c52f1553-21a2-415b-a9a6-02bc5cde1ac7@kernel.org>
Date: Tue, 11 Jun 2024 17:17:32 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 20/26] block: move the dax flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-21-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-21-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the dax flag into the queue_limits feature field so that it
> can be set atomically and all I/O is frozen when changing the flag.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:18:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738082.1144692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwhc-0000tl-7U; Tue, 11 Jun 2024 08:18:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738082.1144692; Tue, 11 Jun 2024 08:18:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwhc-0000te-4o; Tue, 11 Jun 2024 08:18:00 +0000
Received: by outflank-mailman (input) for mailman id 738082;
 Tue, 11 Jun 2024 08:17:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwha-0000rh-Gf
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:17:58 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de
 [2a07:de40:b251:101:10:150:64:1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b2f337b-27cb-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:17:56 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id D69B422D35;
 Tue, 11 Jun 2024 08:17:55 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 1F239137DF;
 Tue, 11 Jun 2024 08:17:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id hDAwBzMIaGbQWwAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:17:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b2f337b-27cb-11ef-b4bb-af5377834399
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <fc162d48-de62-437e-b2a7-bbf56a507c4d@suse.de>
Date: Tue, 11 Jun 2024 10:17:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 06/26] loop: also use the default block size from an
 underlying block device
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-7-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-7-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Action: no action
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org
X-Spam-Level: 
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Rspamd-Queue-Id: D69B422D35

On 6/11/24 07:19, Christoph Hellwig wrote:
> Fix the code in loop_reconfigure_limits that picks a default block size
> for O_DIRECT file descriptors so that it also works when the loop device
> sits on top of a block device, and not just on a regular file on a block
> device based file system.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 8 +++++++-
>   1 file changed, 7 insertions(+), 1 deletion(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:18:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:18:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738091.1144702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwiL-0001oG-Gj; Tue, 11 Jun 2024 08:18:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738091.1144702; Tue, 11 Jun 2024 08:18:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwiL-0001o9-DV; Tue, 11 Jun 2024 08:18:45 +0000
Received: by outflank-mailman (input) for mailman id 738091;
 Tue, 11 Jun 2024 08:18:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwiJ-0001nn-Gq
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:18:43 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 364c3a8a-27cb-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:18:41 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 532A822151;
 Tue, 11 Jun 2024 08:18:41 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id AF5F2137DF;
 Tue, 11 Jun 2024 08:18:40 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id VlsQJmAIaGYaXAAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:18:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 364c3a8a-27cb-11ef-b4bb-af5377834399
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <6a785fab-f2b4-4238-bb3b-c5bb54e38c59@suse.de>
Date: Tue, 11 Jun 2024 10:18:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 07/26] loop: fold loop_update_rotational into
 loop_reconfigure_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-8-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-8-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spam-Level: 
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Rspamd-Queue-Id: 532A822151
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Action: no action

On 6/11/24 07:19, Christoph Hellwig wrote:
> This prepares for moving the rotational flag into the queue_limits and
> also fixes it for the case where the loop device is backed by a block
> device.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/loop.c | 23 ++++-------------------
>   1 file changed, 4 insertions(+), 19 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:19:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738096.1144712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwj4-0002Qq-P6; Tue, 11 Jun 2024 08:19:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738096.1144712; Tue, 11 Jun 2024 08:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwj4-0002Qj-L9; Tue, 11 Jun 2024 08:19:30 +0000
Received: by outflank-mailman (input) for mailman id 738096;
 Tue, 11 Jun 2024 08:19:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwj3-0002QV-OS
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:19:29 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de
 [2a07:de40:b251:101:10:150:64:1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 51d4da25-27cb-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:19:27 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 4AADB22D13;
 Tue, 11 Jun 2024 08:19:27 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 20BA0137DF;
 Tue, 11 Jun 2024 08:19:26 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 8CuOB44IaGZWXAAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:19:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51d4da25-27cb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718093967; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GGJw6dDkRs6k2mZvFIE+UpGETGYtBieUVHqbMn2Gy0w=;
	b=Mz5UZyRupBOC5OyhaduIbHV4wlYjNn2j7sCWgFf6vyQPQnjkDPAfZ6Q/JsN5Cmj6BSiYUF
	xwezQUEPyxhygG9mxrG/r7jSoy12gHEp2VD+DxfzLjw84R1R+ikRm3duxRjgQt2dRc+rR/
	zySz8BDz3RCb9rOF3+/OqcxNzULWgh8=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718093967;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GGJw6dDkRs6k2mZvFIE+UpGETGYtBieUVHqbMn2Gy0w=;
	b=q1OceNt62q6MVmBZro11IIzTmhALV7YjKRa+RmhhyhFm32XqJpB3kE67x0ShtykE6k333v
	IXaAxWMTn07ACfBQ==
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <1208a68f-bac4-4f10-8f67-58eabf5ba89e@suse.de>
Date: Tue, 11 Jun 2024 10:19:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 08/26] virtio_blk: remove virtblk_update_cache_mode
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-9-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-9-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Level: 
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.de:email,lst.de:email,imap1.dmz-prg2.suse.org:helo]
X-Spam-Score: -8.29
X-Spam-Flag: NO

On 6/11/24 07:19, Christoph Hellwig wrote:
> virtblk_update_cache_mode boils down to a single call to
> blk_queue_write_cache.  Remove it in preparation for moving the cache
> control flags into the queue_limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/virtio_blk.c | 13 +++----------
>   1 file changed, 3 insertions(+), 10 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:20:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:20:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738101.1144721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwk1-0003zH-0W; Tue, 11 Jun 2024 08:20:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738101.1144721; Tue, 11 Jun 2024 08:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwk0-0003zA-UB; Tue, 11 Jun 2024 08:20:28 +0000
Received: by outflank-mailman (input) for mailman id 738101;
 Tue, 11 Jun 2024 08:20:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwk0-0003z1-AD
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:20:28 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74bfe177-27cb-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:20:26 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E4DDB2056A;
 Tue, 11 Jun 2024 08:20:25 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 812E7137DF;
 Tue, 11 Jun 2024 08:20:25 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id io/THskIaGa0XAAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:20:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74bfe177-27cb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718094026; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jGomuCAp2pz/1uNBu4SRDVlzjHuYKTuVW/cC7GVoJ+s=;
	b=CeZPcfC1tei4Yoyik+tGn3KMUx0tYad+9QT86YWgmWIoMwD8SG6Olr8/OpQgGmSlpFOFrV
	1QMhq17GV3KTHtbekRcFjW6L+u5akZQinRuMUbw96XHeekDnFkAcnzJHDSMFcVSc1JTgCf
	PIvex34kEKn8lkXWIw8YlQLqvIZdM4k=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718094026;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jGomuCAp2pz/1uNBu4SRDVlzjHuYKTuVW/cC7GVoJ+s=;
	b=rbO1KscBBDGwI875BWEKNiUKvkGWpIHf7GXnAoTtDPKgnWXQVIXWhgCl2vWo6uzLSeZXs1
	kBnIZBFi8VbrEHDg==
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718094025; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jGomuCAp2pz/1uNBu4SRDVlzjHuYKTuVW/cC7GVoJ+s=;
	b=ru97uNsYchlahhQRdCUMGWhruNOUhzwEthEA7N1gdGL1zKHNbQZPYXDbsm29DX8BaibMfU
	TpmRKmq6nzdpHbPVCOGOBsSVgCWvYVMacGcYa+dNJn5XRlpn58CiIVsix1vUeqAKLDkIDY
	khUgAadqtH4P4RSYCuTBRHzcjnQ3m7o=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718094025;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jGomuCAp2pz/1uNBu4SRDVlzjHuYKTuVW/cC7GVoJ+s=;
	b=JCo1uwRLDQmHCULWVwq4uscwDUMV84vAByFfE7YcJRHJ+TMx+QhsTnZckChZRP2tohL5Do
	JYyX/2Jy2/atV/DQ==
Message-ID: <161db500-6d5e-4dcc-8e44-f75c0f777a0c@suse.de>
Date: Tue, 11 Jun 2024 10:20:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 09/26] nbd: move setting the cache control flags to
 __nbd_set_size
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-10-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-10-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Level: 
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.de:email,lst.de:email,imap1.dmz-prg2.suse.org:helo]
X-Spam-Score: -8.29
X-Spam-Flag: NO

On 6/11/24 07:19, Christoph Hellwig wrote:
> Move setting the cache control flags in nbd into __nbd_set_size, in
> preparation for moving these flags into the queue_limits structure.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/nbd.c | 17 +++++++----------
>   1 file changed, 7 insertions(+), 10 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:21:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:21:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738108.1144733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwkl-0004l0-FB; Tue, 11 Jun 2024 08:21:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738108.1144733; Tue, 11 Jun 2024 08:21:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwkl-0004kt-AX; Tue, 11 Jun 2024 08:21:15 +0000
Received: by outflank-mailman (input) for mailman id 738108;
 Tue, 11 Jun 2024 08:21:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwkk-0004OI-DM
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:21:14 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9093f3c9-27cb-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:21:13 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 9FE0660C54;
 Tue, 11 Jun 2024 08:21:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EC86FC2BD10;
 Tue, 11 Jun 2024 08:21:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9093f3c9-27cb-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718094072;
	bh=JnxwCboQk8q7RlLZlW4EySiyxLKgUjk778Traf3fmoA=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=T4lYpc9oPKzbZgXNScRXj6fCCJuF2imMpVDO2TmKoprDVoEMAifMh8yJGso4Vfp5y
	 OgmplPRZ+vOoc/3W5Z2/vvUR59B9dvvmiJ3TXlfZjt80a4a+B616vcQsbnQtB8TQlx
	 hCCsxl7ULUCU4AFz7+jEuDHX9a+L+WI83HnZLceoWGeScqnvdznMEcmPFZA+cvHULW
	 tsWbQ1wsp08S3pZw6QSDKAzAlfLw76qcbODE/Rb62WAcCaMIMds4nLrdbgO640++hO
	 MSBTvFYh9It5CVDUDafarknsby40aO62VVd0BASXyBUwrOv7fDEDdarOIcY3IX1DTf
	 +T3qkIPPHiYZg==
Message-ID: <d1775d3f-daaa-4193-9f68-06ec47563b35@kernel.org>
Date: Tue, 11 Jun 2024 17:21:07 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 21/26] block: move the poll flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-22-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-22-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the poll flag into the queue_limits feature field so that it
> can be set atomically and all I/O is frozen when changing the flag.
> 
> Stacking drivers are simplified in that they can now simply set the
> flag, and blk_stack_limits will clear it when the feature is not
> supported by any of the underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Kind of the same remark as for io_stat: this is not really a device
feature. But if "features" is understood as describing the queue rather
than just the device, then having poll (and io_stat) as a feature rather
than a flag is OK.

So:

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:22:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:22:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738112.1144742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwlW-0005Pp-Lb; Tue, 11 Jun 2024 08:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738112.1144742; Tue, 11 Jun 2024 08:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwlW-0005Pi-J0; Tue, 11 Jun 2024 08:22:02 +0000
Received: by outflank-mailman (input) for mailman id 738112;
 Tue, 11 Jun 2024 08:22:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwlU-0005PO-S7
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:22:00 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aa970c92-27cb-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:21:56 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 3E7E522D3A;
 Tue, 11 Jun 2024 08:21:56 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 8F474137DF;
 Tue, 11 Jun 2024 08:21:55 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id lmNGIiMJaGZHXQAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:21:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa970c92-27cb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718094116; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RLD/wZqQacB/W5rh70zz1o8n7VA43LzLz19EabvWDaE=;
	b=YxtPwcWIaOFUzTXw/mkYpgmWsLeCUBBNkuJxe9snqPt2Igi4jPHgxy+GcGQB+1aaAHK29b
	GjrkLYWkmjpe7Q5jSlCSeSkBnzNZRyHO/HKydFhnkbqGxwUxxXH6gZxmJ2l5/OxMgMLkMS
	YcV7qHBuqHuwOs+bqZAAWdeK7NFcRCY=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718094116;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RLD/wZqQacB/W5rh70zz1o8n7VA43LzLz19EabvWDaE=;
	b=+PLWltxYVglRNk6Ug+A6Unrn+sSBWMRgpps7gDT8CP5b2+hzJF26JbZd7C6yHh/hwsLcJm
	TT/wBbLUU2iXIeDQ==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718094116; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RLD/wZqQacB/W5rh70zz1o8n7VA43LzLz19EabvWDaE=;
	b=YxtPwcWIaOFUzTXw/mkYpgmWsLeCUBBNkuJxe9snqPt2Igi4jPHgxy+GcGQB+1aaAHK29b
	GjrkLYWkmjpe7Q5jSlCSeSkBnzNZRyHO/HKydFhnkbqGxwUxxXH6gZxmJ2l5/OxMgMLkMS
	YcV7qHBuqHuwOs+bqZAAWdeK7NFcRCY=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718094116;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RLD/wZqQacB/W5rh70zz1o8n7VA43LzLz19EabvWDaE=;
	b=+PLWltxYVglRNk6Ug+A6Unrn+sSBWMRgpps7gDT8CP5b2+hzJF26JbZd7C6yHh/hwsLcJm
	TT/wBbLUU2iXIeDQ==
Message-ID: <51ec022c-a978-407b-bd05-54399551841e@suse.de>
Date: Tue, 11 Jun 2024 10:21:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when they
 fail
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-11-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-11-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Flag: NO
X-Spam-Score: -8.29
X-Spam-Level: 
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	MIME_TRACE(0.00)[0:+];
	MID_RHS_MATCH_FROM(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	TO_DN_SOME(0.00)[];
	FROM_HAS_DN(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,suse.de:email,lst.de:email]

On 6/11/24 07:19, Christoph Hellwig wrote:
> blkfront always had a robust negotiation protocol for detecting a write
> cache.  Stop simply disabling cache flushes when they fail, as that is
> a grave error.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/xen-blkfront.c | 29 +++++++++--------------------
>   1 file changed, 9 insertions(+), 20 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:23:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738116.1144752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwmS-000693-VR; Tue, 11 Jun 2024 08:23:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738116.1144752; Tue, 11 Jun 2024 08:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwmS-000684-S1; Tue, 11 Jun 2024 08:23:00 +0000
Received: by outflank-mailman (input) for mailman id 738116;
 Tue, 11 Jun 2024 08:22:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwmQ-000668-Pf
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:22:58 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ceed2a7f-27cb-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:22:57 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id F3E2920572;
 Tue, 11 Jun 2024 08:22:56 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 3BD78137DF;
 Tue, 11 Jun 2024 08:22:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id pW+KDGAJaGarXQAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:22:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceed2a7f-27cb-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718094177; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v2QFshTHpTXr8H3hpjusrZ3HFg1YQT7iiPIq9+Y77hw=;
	b=kX5WA5hx35jK/JcqWyaStVcQvk/UeScZVw4zmQvw3svmy33Z5O4RzzbPMhO2R/Ib1Pm6sQ
	LoTxwVrlbd2b5eJMe78VfVsp2FGlAmziX2M2y+SFE5wjjOf1OiLSYoBnGMfbMD4cfjnWjG
	epGVsrP8Mu28b7oMLf8H0iBbAhZ6KXo=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718094177;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v2QFshTHpTXr8H3hpjusrZ3HFg1YQT7iiPIq9+Y77hw=;
	b=2bVYtBA/QCeXiwq/2s8JG71dsOf00+ub4v6Qle2M2kTPxicWQaMbNubz94QGvvQd976scy
	rRmq0YpmzWl6GxDw==
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718094177; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v2QFshTHpTXr8H3hpjusrZ3HFg1YQT7iiPIq9+Y77hw=;
	b=kX5WA5hx35jK/JcqWyaStVcQvk/UeScZVw4zmQvw3svmy33Z5O4RzzbPMhO2R/Ib1Pm6sQ
	LoTxwVrlbd2b5eJMe78VfVsp2FGlAmziX2M2y+SFE5wjjOf1OiLSYoBnGMfbMD4cfjnWjG
	epGVsrP8Mu28b7oMLf8H0iBbAhZ6KXo=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718094177;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v2QFshTHpTXr8H3hpjusrZ3HFg1YQT7iiPIq9+Y77hw=;
	b=2bVYtBA/QCeXiwq/2s8JG71dsOf00+ub4v6Qle2M2kTPxicWQaMbNubz94QGvvQd976scy
	rRmq0YpmzWl6GxDw==
Message-ID: <5a7f03c3-81e3-4ed9-9c57-0ca86dd97cd0@suse.de>
Date: Tue, 11 Jun 2024 10:22:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 11/26] block: freeze the queue in queue_attr_store
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-12-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-12-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Level: 
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[lst.de:email,suse.de:email,imap1.dmz-prg2.suse.org:helo]
X-Spam-Score: -8.29
X-Spam-Flag: NO

On 6/11/24 07:19, Christoph Hellwig wrote:
> queue_attr_store updates attributes used to control generating I/O, and
> can cause malformed bios if changed with I/O in flight.  Freeze the queue
> in common code instead of adding it to almost every attribute.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-mq.c    | 5 +++--
>   block/blk-sysfs.c | 9 ++-------
>   2 files changed, 5 insertions(+), 9 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:23:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:23:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738120.1144762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwn7-0006iw-7X; Tue, 11 Jun 2024 08:23:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738120.1144762; Tue, 11 Jun 2024 08:23:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwn7-0006ip-4A; Tue, 11 Jun 2024 08:23:41 +0000
Received: by outflank-mailman (input) for mailman id 738120;
 Tue, 11 Jun 2024 08:23:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwn6-0006Wk-4i
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:23:40 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e4eef363-27cb-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:23:37 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 534AECE1A05;
 Tue, 11 Jun 2024 08:23:31 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 86496C2BD10;
 Tue, 11 Jun 2024 08:23:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4eef363-27cb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718094210;
	bh=PCwVdRvz4XJGGIiNmDy6wMkXrr3H3HUMT3pkIraybk8=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=JFzobLSEwQfpgalfDjsXhew6RBycCszqE2f2C/Ru/r8ifaDggADerb3ZWcDYMf7wy
	 2UguDgiYd7paLCb6tqnoYu2dxS7VJu76k45dUpfxH2c8l3qFSrNn2EGh4jWjnKfIAS
	 HBvH4pVK5+Is/WKoiUVoWutNlCyWeMQP2hCSD9Kx8S9haYF4sfr4JiJhYG2nonJP87
	 ivR19am2NLev5ARZzY2SbXtQPtiKMKt2kSHRh2CyzGWwYvO/eLAkUihfoIzngcBiyb
	 99DP2nymQwawaYtHi2qz0jmjPnoMpqNkIxNh8Ag/5f+XcYLmzaYxN9ppmZ11QAIus7
	 gALv/eJkusunA==
Message-ID: <29c6bbe8-f0fe-49dd-a28b-327d86ceb51d@kernel.org>
Date: Tue, 11 Jun 2024 17:23:25 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 22/26] block: move the zoned flag into the feature field
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-23-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-23-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the boolean zoned field into the flags field to reclaim a little
> bit of space.

Nit: flags -> feature flags

> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:23:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:23:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738122.1144772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwnN-00074P-Gi; Tue, 11 Jun 2024 08:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738122.1144772; Tue, 11 Jun 2024 08:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwnN-00074I-Bq; Tue, 11 Jun 2024 08:23:57 +0000
Received: by outflank-mailman (input) for mailman id 738122;
 Tue, 11 Jun 2024 08:23:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGwnM-0006Wk-Cc
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:23:56 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f0f32d8a-27cb-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:23:54 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 57C2E20572;
 Tue, 11 Jun 2024 08:23:54 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 6A965137DF;
 Tue, 11 Jun 2024 08:23:53 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 1p7uGJkJaGbsXQAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 08:23:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0f32d8a-27cb-11ef-b4bb-af5377834399
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <def8fea1-66ae-4fea-9b49-2842b91404ea@suse.de>
Date: Tue, 11 Jun 2024 10:23:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 12/26] block: remove blk_flush_policy
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-13-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-13-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Action: no action
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org
X-Spam-Level: 
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Rspamd-Queue-Id: 57C2E20572

On 6/11/24 07:19, Christoph Hellwig wrote:
> Fold blk_flush_policy into the only caller to prepare for pending changes
> to it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-flush.c | 33 +++++++++++++++------------------
>   1 file changed, 15 insertions(+), 18 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:24:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:24:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738127.1144782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwnf-0007m9-Mw; Tue, 11 Jun 2024 08:24:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738127.1144782; Tue, 11 Jun 2024 08:24:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwnf-0007m2-JN; Tue, 11 Jun 2024 08:24:15 +0000
Received: by outflank-mailman (input) for mailman id 738127;
 Tue, 11 Jun 2024 08:24:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwnd-000731-Vp
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:24:13 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fba6eec9-27cb-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:24:13 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id CD28C60D2E;
 Tue, 11 Jun 2024 08:24:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DD2C5C2BD10;
 Tue, 11 Jun 2024 08:24:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fba6eec9-27cb-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718094251;
	bh=EFLCVdoWVXgI8x+vDqpH+KGLXZNaxOUmiQChqNs4meI=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=Q/kBEFZ/0xrAI2sEu9TaynLfxDXFSelXwQfPVhceWBMJF67rXnsoSf0WPmrSssyjG
	 wAyQEjvNKOSDuGTW99P/n/USyXBpEs/hJPBtWDhhYK8Vw7vkoUePhP/xVsLzFkoqW6
	 lvKTjjxF0YRUZJg2F6zMuxFTCUXMjyMw9CECUZorzcdjTlaZUCv5Igv6ct9+LSc3ef
	 11G6zLnpcAxQhEGJrLPj98OB8egVZFZCicz/XL1AiLWkTGFK+0l5ihYD/pd34iBMBZ
	 L/cJqhcyVqLZ5BGlUzg23fH/c4iANFyH2WwiPuUXno5waidx9DiZljw1OQFE15jkSs
	 eG4YOkLzWNcuA==
Message-ID: <cb865b5b-ea4d-49cc-b41b-7f46b62b9dd0@kernel.org>
Date: Tue, 11 Jun 2024 17:24:04 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 23/26] block: move the zone_resetall flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-24-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-24-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the zone_resetall flag into the queue_limits feature field so that
> it can be set atomically and all I/O is frozen when changing the flag.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:24:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738133.1144792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwoG-0000G9-3j; Tue, 11 Jun 2024 08:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738133.1144792; Tue, 11 Jun 2024 08:24:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwoG-0000G2-0E; Tue, 11 Jun 2024 08:24:52 +0000
Received: by outflank-mailman (input) for mailman id 738133;
 Tue, 11 Jun 2024 08:24:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwoF-0006Wk-0E
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:24:51 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f30edf0-27cc-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:24:49 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 8C529CE1986;
 Tue, 11 Jun 2024 08:24:43 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 22FE3C2BD10;
 Tue, 11 Jun 2024 08:24:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f30edf0-27cc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718094282;
	bh=aa39oj6grsQuWCvXe8Xg/GJ/V11Jvezu/rMLECj5P4s=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=aV9kUDMWPWfXNXmeOO+XWzXEhkoXq7ADFSgqT6UOp6QmIpyBhYF9XRI2luZ1jDGrm
	 fjnG6eWakSd1CNbsGjF5O5LrabvpJGoxEOU9IkGiZiHR46HxhFdMHk0kIvmZI9wx0L
	 JOsygBPd1TdA6IDc/8xDg7nKHyfJ/2D590dnIGGMTWnavCtGtya+9Az2pKzwaIFhFJ
	 j4onYH3Luprc5hdFhkVV5RWd7C4hgmyLZChJZgAFOBU129M9EEZQ0rLACWD8TZ/l9S
	 OXq+G88IRiQo0zQ3DlC2WKDuqnhjQLUQDaVvY8Aat/yU/aTyp6lpmYzMIDEOv1I7i1
	 CT8Dtiuz6hADg==
Message-ID: <d457fc95-9231-4bc8-a2dd-2991aa8732ec@kernel.org>
Date: Tue, 11 Jun 2024 17:24:38 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 24/26] block: move the pci_p2pdma flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-25-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-25-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the pci_p2pdma flag into the queue_limits feature field so that it
> can be set atomically and all I/O is frozen when changing the flag.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:25:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:25:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738137.1144801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwpE-00016f-Cc; Tue, 11 Jun 2024 08:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738137.1144801; Tue, 11 Jun 2024 08:25:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwpE-00016Y-9l; Tue, 11 Jun 2024 08:25:52 +0000
Received: by outflank-mailman (input) for mailman id 738137;
 Tue, 11 Jun 2024 08:25:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwpC-00016O-OB
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:25:50 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 333c9edd-27cc-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:25:48 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 142C8CE1A1B;
 Tue, 11 Jun 2024 08:25:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 617E6C2BD10;
 Tue, 11 Jun 2024 08:25:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 333c9edd-27cc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718094344;
	bh=SsjtIxdZTv9alxTM6s+eXXe/b4WXYQcjMfNg5CfEKdc=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=ZQZg1j5YVztWgVvkxKyPKd72qqKRNzGYuDroL0hLwlP4ltRHp2mvUzylgapCjo/QX
	 /9JUYHtH9lVcV7U8Irn9NtQ6Tf4T6BlUFQm06rRyckQkTNr9Kms7oXAKJzeYMkpiMK
	 Ybljnp8uYPvlerlZvfIK+8/5AdyU/rT7kIsgR7AlzPHmM2fE435z6Gjn2Vvlo55KVu
	 JYmLy8lPunL3Cs67SCKuZOcNFm3ArkP7Bsy0nS3RPixSC4k7DYJ0IbBTowTxu4OiQO
	 q7okcClLynGKcjyF6P0660uu2Ke9bJiS/94AvjFYJ69OKCMBI3EL8IsbNDCkR/OILW
	 7cbHkjEZuN7bA==
Message-ID: <f4497895-93ce-4d96-bcaa-6ad77be83c83@kernel.org>
Date: Tue, 11 Jun 2024 17:25:39 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 25/26] block: move the skip_tagset_quiesce flag to
 queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-26-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-26-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the skip_tagset_quiesce flag into the queue_limits feature field so
> that it can be set atomically and all I/O is frozen when changing the
> flag.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:26:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738155.1144812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwpw-0001md-KJ; Tue, 11 Jun 2024 08:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738155.1144812; Tue, 11 Jun 2024 08:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwpw-0001mW-HZ; Tue, 11 Jun 2024 08:26:36 +0000
Received: by outflank-mailman (input) for mailman id 738155;
 Tue, 11 Jun 2024 08:26:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGwpw-00016O-2H
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:26:36 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4fc76610-27cc-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 10:26:34 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-a6ef64b092cso364992266b.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 01:26:34 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f188a5162sm291486366b.81.2024.06.11.01.26.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 01:26:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fc76610-27cc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718094393; x=1718699193; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=i3MvvtXF42y2SpxtapsHIt7Jb43QszZt0LMwTm3bkkQ=;
        b=It85GZcoyoU5ndOeDYXYAJevVD8b8W6tIc/a2A51R/x9JRrEGm/DaR8i/FQwWuIdxO
         NQDqZnrXdOGnL/XDXjRqmn0FlB7NAL2xfAcrGSO5tsoHHN3afEDRiJrBls759hzOvlAx
         eo28cHw5fv3KOGR63BSrkXKOOX0yAG2AhNwJePL5L9GWGemTypW4tUaBY1M8StbDRuhQ
         YFNOz85jZ0BAr4fbgbuAHg6n/9JwFT9uP5Q9edDqizWi/1ogAxvAZ2kpRs5Sc5L9Qhcl
         /nG66UB9QOcN26vVxraNsdvnDeoOW0D4dCOc1jWtLHnITIdWRjwuXH0gHBChYSA0Ph+F
         t+zw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718094393; x=1718699193;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=i3MvvtXF42y2SpxtapsHIt7Jb43QszZt0LMwTm3bkkQ=;
        b=P6HQJjzo9590LVw43zrCAnBCAUsNgCeEbK4v8qZvbvKU6pni6IqY8HbSet95LsHZaN
         JHHFQyWd79hNUIv3FrBw1iLydt40Bw8dZqlq6V9qSsTE6brH3EnDIj88abUG0oH/Tr6/
         t1kprfjMjgEQRvYyIAHHZ+uf8jW6mhCJZnNtY3cLuEUux0hB/JbtcWb5TUjHYyAW3lhv
         33EILHyH4yQoEx8ucGZcf+tWzA/iZoRHZOtFOTUMN97thQQffTF40lJbldkS1obkwowe
         lSvOr3LUdfunH4ez6maNIB5VjyQVuDr1XDU3Pnsg7O98wkpv3mxARDSHzxBv85MTsfu4
         SqFA==
X-Gm-Message-State: AOJu0YzetLYQTWy+HMYThI109g6ZBLrYHY7XgKBOkfVph+/POBNLKIWI
	MSUqlxkLHxvQvCUG5uV+fb3hgDOONCi4BJdRvUsvH2y/qhP5mhPzXfUXl8p+5Q==
X-Google-Smtp-Source: AGHT+IHv7pa3+obiuK0L5n+LJ3s6BpfzgnjHn+GXWfs83BUVndRPXYU0OaSGuMJJUuKvQDpqtU5o4Q==
X-Received: by 2002:a17:906:eb0f:b0:a6e:f869:d722 with SMTP id a640c23a62f3a-a6ef869d8ccmr530938166b.64.1718094393135;
        Tue, 11 Jun 2024 01:26:33 -0700 (PDT)
Message-ID: <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
Date: Tue, 11 Jun 2024 10:26:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <Zmf_k2meED8iG3H5@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 09:41, Roger Pau Monné wrote:
> On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
>> mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
>> access to actual MMIO space should not generally be restricted to UC
>> only; especially video frame buffer accesses are unduly affected by such
>> a restriction. Permit PAT use for directly assigned MMIO as long as the
>> domain is known to have been granted some level of cache control.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Considering that we've just declared PVH Dom0 "supported", this may well
>> qualify for 4.19. The issue was specifically very noticeable there.
>>
>> The conditional may be more complex than really necessary, but it's in
>> line with what we do elsewhere. And imo better continue to be a little
>> too restrictive, than moving to too lax.
>>
>> --- a/xen/arch/x86/mm/p2m-ept.c
>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
>>  
>>      if ( !mfn_valid(mfn) )
>>      {
>> -        *ipat = true;
>> +        *ipat = type != p2m_mmio_direct ||
>> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
> 
> Looking at this, shouldn't the !mfn_valid special case be removed, and
> mfns without a valid page be processed normally, so that the guest
> MTRR values are taken into account, and no iPAT is enforced?

Such removal is what, in the post commit message remark, I'm referring to
as "moving to too lax". Doing so might be okay, but will imo be hard to
prove to be correct for all possible cases. Along these lines goes also
that I'm adding the IOMMU-enabled and cache-flush checks: In principle
p2m_mmio_direct should not be used when neither of these return true. Yet
a similar consideration would apply to the immediately subsequent if().

Removing this code would, in particular, result in INVALID_MFN getting a
type of WB by way of the subsequent if(), unless the type there would
also be p2m_mmio_direct (which, as said, it ought to never be for non-
pass-through domains). That again _may_ not be a problem as long as such
EPT entries would never be marked present, yet that's again difficult to
prove.

I was in fact wondering whether to special-case INVALID_MFN in the change
I'm making. Question there is: Are we sure that by now we've indeed got
rid of all arithmetic mistakenly done on MFN variables happening to hold
INVALID_MFN as the value? IOW I fear that there might be code left which
would pass in INVALID_MFN masked down to a 2M or 1G boundary. At which
point checking for just INVALID_MFN would end up insufficient. If we
meant to rely on this (tagging possible leftover issues as bugs we don't
mean to attempt to cover for here anymore), then indeed the mfn_valid()
check could be replaced by a comparison with INVALID_MFN (following a
pattern we've been slowly trying to carry through elsewhere, especially
in shadow code). Yet it could still not be outright dropped imo.

Furthermore simply dropping (or replacing as per above) that check won't
work either: Further down in the function we use mfn_to_page(), which
requires an up-front mfn_valid() check. That said, this code looks
partly broken to me anyway: For a 1G page mfn_valid() on the start of it
doesn't really imply all parts of it are valid. I guess I need to make a
2nd patch to address that as well, which may then want to be a prereq
change to the one here (if we decided to go the route you're asking for).

> I also think this likely wants a:
> 
> Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')

Oh, indeed, I should have dug out when this broke. I didn't because I
knew this mfn_valid() check was there forever, neglecting that it wasn't
always (almost) first.

> As AFAICT before that commit direct MMIO regions would set iPAT to WB,
> which would result in the correct attributes (albeit guest MTRR was
> still ignored).

Two corrections here: First iPAT is a boolean; it can't be set to WB.
And then what was happening prior to that change was that for the APIC
access page iPAT was set to true, thus forcing WB there. iPAT was left
set to false for all other p2m_mmio_direct pages, yielding (PAT-
overridable) UC there.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 08:27:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 08:27:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738159.1144822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwqI-0002LU-Sr; Tue, 11 Jun 2024 08:26:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738159.1144822; Tue, 11 Jun 2024 08:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGwqI-0002LN-Q0; Tue, 11 Jun 2024 08:26:58 +0000
Received: by outflank-mailman (input) for mailman id 738159;
 Tue, 11 Jun 2024 08:26:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7xGt=NN=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sGwqH-0001QC-Ha
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 08:26:57 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d0b71e1-27cc-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 10:26:56 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id A868E60D2E;
 Tue, 11 Jun 2024 08:26:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C91B8C2BD10;
 Tue, 11 Jun 2024 08:26:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d0b71e1-27cc-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718094415;
	bh=zPCa8xPO5T29Ue6m/rGvsIVSedmfYpv71DR4OREdVBQ=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=ez17pQw3y3AnnRAsbp/TeB/EKjEe7TyilW2EVLSJJrew7wwyol5eLtQF972oIfLpG
	 oYcDa9ge7GLmbENS0nP9E6ta6tD7CQef1VnquDAuNjK/vmtFTMl8qc2CWkRZn6DFMS
	 raKMjYT80FmXY5BCGp2iWYpN2mIcXyE1s1Z58hYCCGi8z4FHsqMGJz+kXo4P2Nk278
	 r+zW/RpHA3mNDdc41rCNeksvCw0askxtETJGNAUBwkUPNPgC4kqE5kupqRA+Iycdzy
	 jj5qMbPiZljguvK2fsauAxET3ATsJgfsyWMl0BAtw4WiWIzM56ISDOgnDaaVGyV3zf
	 MkLh4tzLpuL/g==
Message-ID: <b5db88d4-5639-47a9-9611-2628235f4244@kernel.org>
Date: Tue, 11 Jun 2024 17:26:49 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 26/26] block: move the bounce flag into the feature field
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-27-hch@lst.de>
Content-Language: en-US
From: Damien Le Moal <dlemoal@kernel.org>
Organization: Western Digital Research
In-Reply-To: <20240611051929.513387-27-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> Move the bounce field into the flags field to reclaim a little bit of

s/flags/feature

> space.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:03:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:03:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738176.1144831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxP1-0002vG-H3; Tue, 11 Jun 2024 09:02:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738176.1144831; Tue, 11 Jun 2024 09:02:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxP1-0002v9-EW; Tue, 11 Jun 2024 09:02:51 +0000
Received: by outflank-mailman (input) for mailman id 738176;
 Tue, 11 Jun 2024 09:02:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGxOz-0002v3-Rs
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:02:49 +0000
Received: from mail-qv1-xf31.google.com (mail-qv1-xf31.google.com
 [2607:f8b0:4864:20::f31])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5f939c65-27d1-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 11:02:48 +0200 (CEST)
Received: by mail-qv1-xf31.google.com with SMTP id
 6a1803df08f44-6b060f0f48aso4458856d6.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:02:48 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b04f6c3a68sm55361436d6.52.2024.06.11.02.02.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 02:02:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f939c65-27d1-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718096567; x=1718701367; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=mtDEpSuaBXCZz37XDOSHA0WZI0YDApNnvtidOUUCopk=;
        b=Px4r9oa94V7AY6fsTrUgB3qHstuPQeIEHKC180KueseVUa1yV7IFOCTKCWNHeJSJnf
         P5SYmrEKBI4kY4nqgT0fFXJksV7rk7nb5t2yBdk4m0WDwCAs81KWkECOrPhalimZGAFl
         xLe87kreWC8kecBROBu+UHB2rk+/T0QgyMzgk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718096567; x=1718701367;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=mtDEpSuaBXCZz37XDOSHA0WZI0YDApNnvtidOUUCopk=;
        b=G4xq/R2iXlPBPsyCmrtY25gC5UVSviuaP5kDfz44FZTgvOPgBetpacecdJ1WRWjFSK
         UrUqBqvh+NQURQErhsIhIl5Kh1v5RUkZeQP4ghDE66xegrStQ10+kI2btFzLipKFSXdC
         /cwEwKwreS3g1Ksq8CatjhlF7cjmpJw1/G1ZUlSalPttgnxTguYvgN8G8E8jcSWgAS/Z
         FZjJQBzU4r3TzJHeqKP/gcLv/2AocxxycFhShKrtteipMgwEgRRWHyq60rW0SZUVEiaR
         grvNEeO36enjew2q7G58FJ/tct1xauABFhfLZKsoR/RFWaw4qYFeDSooeKsQRLzidI2B
         LHSw==
X-Gm-Message-State: AOJu0Yx625VFGNCR6xcVvGgoHHuGD8AJQH0clpoms1XFxF3/BCyO6Wna
	6jQmGUhE5TF0I56+H9FC4PYJTttbX8QtfHl1H7NMDh6zoj3HwpNUSEKE0jwUQVBj7rgXmmQatRy
	g
X-Google-Smtp-Source: AGHT+IHYAOsR7T+GCbYwnw/K/cTCXmexF2H8IXYqNYtlu8BmsqLRMk5tpAB3K3uZjuV6fKgl9mPEmA==
X-Received: by 2002:a05:6214:4a8f:b0:6b0:6837:752c with SMTP id 6a1803df08f44-6b068377582mr100067436d6.24.1718096566977;
        Tue, 11 Jun 2024 02:02:46 -0700 (PDT)
Date: Tue, 11 Jun 2024 11:02:44 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
Message-ID: <ZmgStGbVRuGaNUD_@macbook>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook>
 <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a11259be-7114-4332-b873-d1b163687a3e@suse.com>

On Tue, Jun 11, 2024 at 10:26:32AM +0200, Jan Beulich wrote:
> On 11.06.2024 09:41, Roger Pau Monné wrote:
> > On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
> >> mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
> >> access to actual MMIO space should not generally be restricted to UC
> >> only; especially video frame buffer accesses are unduly affected by such
> >> a restriction. Permit PAT use for directly assigned MMIO as long as the
> >> domain is known to have been granted some level of cache control.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> ---
> >> Considering that we've just declared PVH Dom0 "supported", this may well
> >> qualify for 4.19. The issue was specifically very noticeable there.
> >>
> >> The conditional may be more complex than really necessary, but it's in
> >> line with what we do elsewhere. And imo better continue to be a little
> >> too restrictive, than moving to too lax.
> >>
> >> --- a/xen/arch/x86/mm/p2m-ept.c
> >> +++ b/xen/arch/x86/mm/p2m-ept.c
> >> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
> >>  
> >>      if ( !mfn_valid(mfn) )
> >>      {
> >> -        *ipat = true;
> >> +        *ipat = type != p2m_mmio_direct ||
> >> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
> > 
> > Looking at this, shouldn't the !mfn_valid special case be removed, and
> > mfns without a valid page be processed normally, so that the guest
> > MTRR values are taken into account, and no iPAT is enforced?
> 
> Such removal is what, in the post commit message remark, I'm referring to
> as "moving to too lax". Doing so might be okay, but will imo be hard to
> prove to be correct for all possible cases. Along these lines goes also
> that I'm adding the IOMMU-enabled and cache-flush checks: In principle
> p2m_mmio_direct should not be used when neither of these return true. Yet
> a similar consideration would apply to the immediately subsequent if().
> 
> Removing this code would, in particular, result in INVALID_MFN getting a
> type of WB by way of the subsequent if(), unless the type there would
> also be p2m_mmio_direct (which, as said, it ought to never be for non-
> pass-through domains). That again _may_ not be a problem as long as such
> EPT entries would never be marked present, yet that's again difficult to
> prove.

My understanding is that the !mfn_valid() check was a way to detect
MMIO regions in order to exit early and set those to UC.  I however
don't follow why the guest MTRR settings shouldn't also be applied to
those regions.

I'm also confused by your comment about "as such EPT entries would
never be marked present": non-present EPT entries don't even get into
epte_get_entry_emt(), and hence we could assert in epte_get_entry_emt
that mfn != INVALID_MFN?

> I was in fact wondering whether to special-case INVALID_MFN in the change
> I'm making. Question there is: Are we sure that by now we've indeed got
> rid of all arithmetic mistakenly done on MFN variables happening to hold
> INVALID_MFN as the value? IOW I fear that there might be code left which
> would pass in INVALID_MFN masked down to a 2M or 1G boundary. At which
> point checking for just INVALID_MFN would end up insufficient. If we
> meant to rely on this (tagging possible leftover issues as bugs we don't
> mean to attempt to cover for here anymore), then indeed the mfn_valid()
> check could be replaced by a comparison with INVALID_MFN (following a
> pattern we've been slowly trying to carry through elsewhere, especially
> in shadow code). Yet it could still not be outright dropped imo.
> 
> Furthermore simply dropping (or replacing as per above) that check won't
> work either: Further down in the function we use mfn_to_page(), which
> requires an up-front mfn_valid() check. That said, this code looks
> partly broken to me anyway: For a 1G page mfn_valid() on the start of it
> doesn't really imply all parts of it are valid. I guess I need to make a
> 2nd patch to address that as well, which may then want to be a prereq
> change to the one here (if we decided to go the route you're asking for).

I see, yes, the loop over the special pages array will need to be
adjusted to account for mfn_to_page() possibly returning NULL.

Overall I don't understand the need for this special case for
!mfn_valid().  The rest of the special cases we have (the special pages
and domains without devices or MMIO regions assigned) are performance
optimizations which I do understand.  Yet the special-casing of
!mfn_valid regions bypassing guest MTRR settings seems bogus to me.

> 
> > I also think this likely wants a:
> > 
> > Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
> 
> Oh, indeed, I should have dug out when this broke. I didn't because I
> knew this mfn_valid() check was there forever, neglecting that it wasn't
> always (almost) first.
> 
> > As AFAICT before that commit direct MMIO regions would set iPAT to WB,
> > which would result in the correct attributes (albeit guest MTRR was
> > still ignored).
> 
> Two corrections here: First iPAT is a boolean; it can't be set to WB.
> And then what was happening prior to that change was that for the APIC
> access page iPAT was set to true, thus forcing WB there. iPAT was left
> set to false for all other p2m_mmio_direct pages, yielding (PAT-
> overridable) UC there.

Right, that behavior was still dubious to me, as I would assume those
regions would also want to fetch the type from guest MTRRs.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:15:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:15:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738184.1144842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxai-0000vB-Ml; Tue, 11 Jun 2024 09:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738184.1144842; Tue, 11 Jun 2024 09:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxai-0000v4-Jy; Tue, 11 Jun 2024 09:14:56 +0000
Received: by outflank-mailman (input) for mailman id 738184;
 Tue, 11 Jun 2024 09:14:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGxag-0000tm-SF
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:14:54 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f5088e2-27d3-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 11:14:52 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57c83100bd6so2439141a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:14:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57aae2321e0sm8968376a12.88.2024.06.11.02.14.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 02:14:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f5088e2-27d3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718097292; x=1718702092; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=VjiAdgWOVSKYgm8t/BP0TJ5qmzYPqyPBHxotHaIsFnA=;
        b=HpIlvaZ0ztba/U5RpxVeK80Oo5/rHmrpWyhDfIeAE/VKgDLqyUlh/IyAvWaWVqh3Q4
         tP/a5uWkusDwmr3Ssez/UZOwExeTFA06XNhRLNdXqDABK8ivS+PzxrAifj34EaAa5439
         gDWkStXSU/SlKpNN0ogzrVcmMZaGABo2unS/EHLnWCNE54VxwsAx91A5bwnixMzQubDK
         6iQuBCDDVpWwQJtQuGsE4X4E5tCmxEKqyp3rNgK2oV9amTuZyRhYPLn8zVU1YTJvUwaT
         hM5gG5dA5ovNcJPtZsg94PWPXxuIGTRHL1VAGETRgtjt8CJ+V/M+UZcKVhgbNNyO3XyW
         8vJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718097292; x=1718702092;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VjiAdgWOVSKYgm8t/BP0TJ5qmzYPqyPBHxotHaIsFnA=;
        b=Uo1sxqI27GkkuaZWvtOumXvy+2XLVuzH1SOeKqY7qHCgWd8T2lDsRg4nM97hdU9UQ0
         CJmvvNWmGZfC8QN5i3qTHeJ28PwyP1Q3TNN9feAlBkl2YPOPnBal1J0WN2X1NrQE4iSI
         lsqhkTTE1ZPuN0Dh3ZdBQ7Fa02xC3NcBIiqFnqho5FNdtm8F4lgUe1RxKOjKqZaaDmBW
         8Zm0WHbk/710+aE1v/E8k1wW9SpRggPgmKpE+tBOiCygo+wpna0+B/Vq+nht4LGeSCE/
         an/W0nJE4LLX4GgsQ7vAeSOJ4ReQnQ4G3HZdXFM30ZbtsOQUE/1nbAUxu+5sTtrolwmR
         0yMg==
X-Forwarded-Encrypted: i=1; AJvYcCV4k/L+q4dnNQXklIqCh802pp7QVYmPSUNH3GbEzKtKAWMkJ9JLL0qHWn5jBCof1pvuxoAE+55FfFmmCLH3VN8ZqaulhYJf6qyK+Q4dcOg=
X-Gm-Message-State: AOJu0YxBXXDyoRuNsTq4zkCLiBVMPBP8hk4Td0nizFUwBMJpAY3ZV/4d
	XJCz1H5LZGTSYdgV2il7NWbXrXkYbu6A1HIwUiezHYt7EPCgNHZh0s0RC70DVA==
X-Google-Smtp-Source: AGHT+IHXYDohSIaX+xR7RjJ6ieqlKeJbUF7DVXX6SHgEGlZAKvCndfJQGWMeiN5z+eLNr2rrv6jb4Q==
X-Received: by 2002:a50:d796:0:b0:57c:6ed2:870c with SMTP id 4fb4d7f45d1cf-57c6ed28bb1mr4686608a12.2.1718097291976;
        Tue, 11 Jun 2024 02:14:51 -0700 (PDT)
Message-ID: <217202e9-608f-4788-b689-8140567e4485@suse.com>
Date: Tue, 11 Jun 2024 11:14:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a
 structured format
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <cover.1718038855.git.w1benny@gmail.com>
 <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
 <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com>
 <CAKBKdXiiZdz70nWx7kqp2S5RdbRsku+qtn6z9DBk44LZOgp3Qw@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CAKBKdXiiZdz70nWx7kqp2S5RdbRsku+qtn6z9DBk44LZOgp3Qw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 10:00, Petr Beneš wrote:
> On Tue, Jun 11, 2024 at 8:41 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 10.06.2024 19:10, Petr Beneš wrote:
>>> From: Petr Beneš <w1benny@gmail.com>
>>>
>>> Encapsulate the altp2m options within a struct. This change is preparatory
>>> and sets the groundwork for introducing an additional parameter in a
>>> subsequent commit.
>>>
>>> Signed-off-by: Petr Beneš <w1benny@gmail.com>
>>> Acked-by: Julien Grall <jgrall@amazon.com> # arm
>>> Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
>>
>> Looks like you lost Christian's ack for ...
>>
>>> ---
>>>  tools/libs/light/libxl_create.c     | 6 +++---
>>>  tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
>>
>> ... the adjustment of this file?
> 
> In the cover email, Christian only acked:
> 
>> tools/ocaml/libs/xc/xenctrl.ml       |   2 +
>> tools/ocaml/libs/xc/xenctrl.mli      |   2 +
>> tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---

Right, but above I was talking about the last of these three files.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:33:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:33:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738191.1144853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxsf-000122-6l; Tue, 11 Jun 2024 09:33:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738191.1144853; Tue, 11 Jun 2024 09:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxsf-00011E-1l; Tue, 11 Jun 2024 09:33:29 +0000
Received: by outflank-mailman (input) for mailman id 738191;
 Tue, 11 Jun 2024 09:33:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGxse-00010y-74
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:33:28 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7719f45-27d5-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 11:33:26 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a6f1c4800easo239535566b.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:33:26 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f392fba29sm50910866b.109.2024.06.11.02.33.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 02:33:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7719f45-27d5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718098406; x=1718703206; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=XQFTv1399tZhTTTL6nPwZ82TtAVPVZCA3Jne/9EUPlc=;
        b=VmUJK4/ixlwMcIgQsEApg95VJv8e6zmFWiqY3xR+z4QPy1QZwnpdAgRtV7rF0zR44N
         SC2C/cZ8dhKc+zqOD1U8+I6uoCgHhL910f9NblGFcb5d4fGa/Ai9lE0BaHxLTttiISTv
         iWiTW6dgUY9ZTAN+KzE350FXGa0WZGuAF4TBJRSip6JRRLkSCCkqaGskIDvZH347MWqS
         l3Vhg1SSvxEr/NAq3GxFTXG4JKTyJ/Cdhs1oqEzKNOklJwEVQxJ1UkmNXAydH3SEKEfM
         mgZh3Co5bUyM6c//qaFh7FxsHYJ+4kamp1q1qk44McK7Tek425MTkyuve74Mt1PmYU1K
         DRQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718098406; x=1718703206;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XQFTv1399tZhTTTL6nPwZ82TtAVPVZCA3Jne/9EUPlc=;
        b=BdL8zNu9vayPirbdmwKHdDPZCX4wHDQ5BYFP97CRo0DOBZC9cVkoXiXz/VjRyGZiTv
         njgCgyaeTQFHn9dKtErzhhF8nJMOsPWxp8jur7raXn3p13eNUblAfkV8vr8ajflbfPE+
         gySSMEH6InlJlasFerZbr+/HaY93xm2OAMKfRBIVnSlNQDSHK+jz0BkfJsBJLFIOMP6Z
         /me1CS5vGBWCU44G24lpRmyH9tsrJHKjWYhmAywZc9R2lYcai1nWO6E4wW4DeCxUV36h
         qvtWTu61hxO8vezooOrQVzCmO315R60YDli5r9b+mltgvP0zz1PqO/ygJs2NkxAP8Vhk
         wARQ==
X-Gm-Message-State: AOJu0YwnDHLQ9w2qTsmoVknd8H/TZtGd+rbbKKJ4/MLb8l1LM1KvKTs7
	Zv7mLdZ3VtPRRai/UBgHFWIoRrZUEeLlEapRyns8OuokvQzy/0JUTNXfKXSeyw==
X-Google-Smtp-Source: AGHT+IEDx4acLv0WFhFICqx+rLm9ZVsA5KSr0I4BhGZbEEC1P7W/D4E9ZU41kxMS8JHq2SBDVqKrNg==
X-Received: by 2002:a17:906:1c87:b0:a6f:1106:5dc7 with SMTP id a640c23a62f3a-a6f11065e29mr439712366b.5.1718098405800;
        Tue, 11 Jun 2024 02:33:25 -0700 (PDT)
Message-ID: <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
Date: Tue, 11 Jun 2024 11:33:24 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook> <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmgStGbVRuGaNUD_@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 11:02, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 10:26:32AM +0200, Jan Beulich wrote:
>> On 11.06.2024 09:41, Roger Pau Monné wrote:
>>> On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
>>>>  
>>>>      if ( !mfn_valid(mfn) )
>>>>      {
>>>> -        *ipat = true;
>>>> +        *ipat = type != p2m_mmio_direct ||
>>>> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
>>>
>>> Looking at this, shouldn't the !mfn_valid special case be removed, and
>>> mfns without a valid page be processed normally, so that the guest
>>> MTRR values are taken into account, and no iPAT is enforced?
>>
>> Such removal is what, in the post-commit-message remark, I'm referring to
>> as "moving to too lax". Doing so might be okay, but will imo be hard to
>> prove to be correct for all possible cases. Along these lines goes also
>> that I'm adding the IOMMU-enabled and cache-flush checks: In principle
>> p2m_mmio_direct should not be used when neither of these return true. Yet
>> a similar consideration would apply to the immediately subsequent if().
>>
>> Removing this code would, in particular, result in INVALID_MFN getting a
>> type of WB by way of the subsequent if(), unless the type there would
>> also be p2m_mmio_direct (which, as said, it ought to never be for non-
>> pass-through domains). That again _may_ not be a problem as long as such
>> EPT entries would never be marked present, yet that's again difficult to
>> prove.
> 
> My understanding is that the !mfn_valid() check was a way to detect
> MMIO regions in order to exit early and set those to UC.  I however
> don't follow why the guest MTRR settings shouldn't also be applied to
> those regions.

It's unclear to me whether the original purpose of the check really was
(just) MMIO. It could also have been to cover the (then not yet named
that way) case of INVALID_MFN.

As to ignoring guest MTRRs for MMIO: I think that's to be on the safe
side. We don't want guests to map uncacheable memory with a cacheable
memory type. Yet control isn't fine-grained enough to prevent just
that. Hence we force UC, allowing merely a move to WC via PAT.

> I'm also confused by your comment about "as such EPT entries would
> never be marked present": non-present EPT entries don't even get into
> epte_get_entry_emt(), and hence we could assert in epte_get_entry_emt
> that mfn != INVALID_MFN?

I don't think we can. Especially for the call from ept_set_entry() I
can't spot anything that would prevent the call for non-present entries.
This may be a mistake, but I can't do anything about it right here.

>> I was in fact wondering whether to special-case INVALID_MFN in the change
>> I'm making. Question there is: Are we sure that by now we've indeed got
>> rid of all arithmetic mistakenly done on MFN variables happening to hold
>> INVALID_MFN as the value? IOW I fear that there might be code left which
>> would pass in INVALID_MFN masked down to a 2M or 1G boundary. At which
>> point checking for just INVALID_MFN would end up insufficient. If we
>> meant to rely on this (tagging possible leftover issues as bugs we don't
>> mean to attempt to cover for here anymore), then indeed the mfn_valid()
>> check could be replaced by a comparison with INVALID_MFN (following a
>> pattern we've been slowly trying to carry through elsewhere, especially
>> in shadow code). Yet it could still not be outright dropped imo.
>>
>> Furthermore simply dropping (or replacing as per above) that check won't
>> work either: Further down in the function we use mfn_to_page(), which
>> requires an up-front mfn_valid() check. That said, this code looks
>> partly broken to me anyway: For a 1G page, mfn_valid() on the start of it
>> doesn't really imply all parts of it are valid. I guess I need to make a
>> 2nd patch to address that as well, which may then want to be a prereq
>> change to the one here (if we decided to go the route you're asking for).
> 
> I see, yes, the loop over the special pages array will need to be
> adjusted to account for mfn_to_page() possibly returning NULL.

Except that NULL will hardly ever come back there. What we need is an
explicit mfn_valid() check. I already have a patch, but I'd like to
submit it only once I know how v2 of the one here is going to look.

> Overall I don't understand the need for this special case for
> !mfn_valid().  The rest of special cases we have (the special pages
> and domains without devices or MMIO regions assigned) are performance
> optimizations which I do understand.  Yet the special casing of
> !mfn_valid regions bypassing guest MTRR settings seems bogus to me.

As said, it may well be that we can (now) switch to comparison against
INVALID_MFN there, if we're certain MMIO isn't to be covered by this
(anymore).

>>> I also think this likely wants a:
>>>
>>> Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
>>
>> Oh, indeed, I should have dug out when this broke. I didn't because I
>> knew this mfn_valid() check was there forever, neglecting that it wasn't
>> always (almost) first.
>>
>>> As AFAICT before that commit direct MMIO regions would set iPAT to WB,
>>> which would result in the correct attributes (albeit guest MTRR was
>>> still ignored).
>>
>> Two corrections here: First iPAT is a boolean; it can't be set to WB.
>> And then what was happening prior to that change was that for the APIC
>> access page iPAT was set to true, thus forcing WB there. iPAT was left
>> set to false for all other p2m_mmio_direct pages, yielding (PAT-
>> overridable) UC there.
> 
> Right, that behavior was still dubious to me, as I would assume those
> regions would also want to fetch the type from guest MTRRs.

Well, for the APIC access page we want to prevent it becoming UC. It's MMIO
from the guest's perspective, yet _we_ know it's really ordinary RAM. For
actual MMIO see above; the only case where we probably ought to respect
guest MTRRs is when they say WC (following from what I said further up).
Yet that's again an independent change to (possibly) make.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:34:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:34:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738196.1144862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxu1-00024O-FI; Tue, 11 Jun 2024 09:34:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738196.1144862; Tue, 11 Jun 2024 09:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxu1-00024H-C9; Tue, 11 Jun 2024 09:34:53 +0000
Received: by outflank-mailman (input) for mailman id 738196;
 Tue, 11 Jun 2024 09:34:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rr1P=NN=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGxu0-000247-Bl
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:34:52 +0000
Received: from mail-ot1-x331.google.com (mail-ot1-x331.google.com
 [2607:f8b0:4864:20::331])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d92deb77-27d5-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 11:34:50 +0200 (CEST)
Received: by mail-ot1-x331.google.com with SMTP id
 46e09a7af769-6f97a4c4588so1785054a34.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:34:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d92deb77-27d5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718098489; x=1718703289; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mCPQMBYY0J4AiCv1dyzFad01O/YvgYeZmL/mAmkRHB8=;
        b=HN3WVP1IVZVBHcRzsUfnVCXPnJp0KtCZ2FICbu3t9fIrWMgTGZQNJnsfrlguATrzR0
         oUzqhDX6VXcuzodkquntL13ih5BoE9UAExTrNbWMHqKMcIV6LS57BFjiTjqSb9dUSbOv
         VxpLzrGGLb9FGHrgUHpGtlRsUhO6zW3YC2L1e2rQzAPcs7ZUP3MBYelmt9HtXUdjUxGH
         BHIQ8V0yp2uR180lxs5rPic5FeIusziaAEqRcVfEtj21Yxy6aPu0AkkglxWqlaCNqqVH
         NwKSULE+ytz/MknbKi2whUHjuQgskLAdBSPGzdD+hajuJ3zu+qv1qI725FsBq9uV+MQ8
         r1XQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718098489; x=1718703289;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=mCPQMBYY0J4AiCv1dyzFad01O/YvgYeZmL/mAmkRHB8=;
        b=cVSdJ0bXwsRmetIBfrlvywoK9W56rnqNkUlhc4IXTkI1uoKSuf60Vj9LGv47b0ftl0
         JecWeyzIrj8md6y8Fekjnk7lFlqwNpavi+mKkusepciSk1ov3NTUaThiWCLIRDO+JCc0
         O/pGUXv3mp3q/NhPoSQ0BQ6qup/CqRz9XiHoPm2nSbbkDEWklWvt9H3run+ztsG6PwL0
         I3DLX9GDT7uepXzhbGEyMrRfkN2Psd3YeJKKyFaNpQS9HsdatJ19MeIblzIiAkeBri1v
         pRJYsApPdWvIH4X1EUZ18KRe7UpBvVjaAd2QviiIPIOZqufyHe+ag0nn74rJby4UGIRe
         fV6w==
X-Forwarded-Encrypted: i=1; AJvYcCVgVpUimA4yluNDXr9qeR+sPj+Yd193/hRFTkI6InIDe9B4uH+naPohL7nQnousT1Ws08OmtKu8JACOxra8YisXlSQZzv6v+5DVTcXnJXw=
X-Gm-Message-State: AOJu0YwQJ5ZJ3BI157TbnKQQSCqIpc5/tYkPUbC9+K9xnjQVEpsTZ4mV
	plLX54KdbrcC4Sf1V/KDJa+T50eNSpg4lPcTaCOgCx1c6EWB8JuTSmWqqg+6gzy0s5nOHrf/Fn8
	LKDNWUNfGkOq0lVb7GAm4PDneZzo=
X-Google-Smtp-Source: AGHT+IFnNsxRD2WxB8PrZcdHmOO7p5dwF3j71uoFD9ah8g4jNP55HWn3xF1m/HjdWah08g4BdlCwpkXqM0Zvjxreomc=
X-Received: by 2002:a05:6870:f144:b0:250:67c4:d73c with SMTP id
 586e51a60fabf-25464ccb42amr12685336fac.28.1718098488919; Tue, 11 Jun 2024
 02:34:48 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1718038855.git.w1benny@gmail.com> <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
 <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com> <CAKBKdXiiZdz70nWx7kqp2S5RdbRsku+qtn6z9DBk44LZOgp3Qw@mail.gmail.com>
 <217202e9-608f-4788-b689-8140567e4485@suse.com>
In-Reply-To: <217202e9-608f-4788-b689-8140567e4485@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Tue, 11 Jun 2024 11:34:38 +0200
Message-ID: <CAKBKdXhzRZuaiZ+cDYD=ofShgRySbGyZjSZe=G9Rdd0T8wof3A@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a
 structured format
To: Jan Beulich <jbeulich@suse.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, Jun 11, 2024 at 11:14 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 11.06.2024 10:00, Petr Beneš wrote:
> > On Tue, Jun 11, 2024 at 8:41 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 10.06.2024 19:10, Petr Beneš wrote:
> >>> From: Petr Beneš <w1benny@gmail.com>
> >>>
> >>> Encapsulate the altp2m options within a struct. This change is preparatory
> >>> and sets the groundwork for introducing additional parameter in subsequent
> >>> commit.
> >>>
> >>> Signed-off-by: Petr Beneš <w1benny@gmail.com>
> >>> Acked-by: Julien Grall <jgrall@amazon.com> # arm
> >>> Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
> >>
> >> Looks like you lost Christian's ack for ...
> >>
> >>> ---
> >>>  tools/libs/light/libxl_create.c     | 6 +++---
> >>>  tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
> >>
> >> ... the adjustment of this file?
> >
> > In the cover email, Christian only acked:
> >
> >> tools/ocaml/libs/xc/xenctrl.ml       |   2 +
> >> tools/ocaml/libs/xc/xenctrl.mli      |   2 +
> >> tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---
>
> Right, but above I was talking about the last of these three files.
>
> Jan

Ouch. It didn't occur to me that Ack on cover email acks each of the
files in every separate patch. My thinking was it acks only the
patches where those three are together. Anyway, it makes sense. I'll
resend v7.

P.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:36:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738203.1144872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxvb-0002x1-Q5; Tue, 11 Jun 2024 09:36:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738203.1144872; Tue, 11 Jun 2024 09:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxvb-0002wu-Mk; Tue, 11 Jun 2024 09:36:31 +0000
Received: by outflank-mailman (input) for mailman id 738203;
 Tue, 11 Jun 2024 09:36:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGxva-0002wZ-Tr
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:36:30 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15276604-27d6-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 11:36:30 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2ebe3fb5d4dso19071591fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:36:30 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 38308e7fff4ca-2ebeeedb430sm5640841fa.7.2024.06.11.02.36.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 02:36:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15276604-27d6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718098590; x=1718703390; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=fWUldjVAa8Hkj/pKWiQmgY0XPWD+ktn/SGr6EoSJ0Tw=;
        b=O2jqoSnedHIge8bnHyY3AXQUP5SCf44Jwxv45zlzp05jw+lrCLBSo3AJclqmFJ4zLl
         g8v0k82bGv88MkgyoVWzP7wzr11lPWV3egR1TnkVavCXHwX09AzW0O23uHMBCoO8TS/N
         ScQGoX1XIy2zKKYn95WZ/ILDgx9/W4ea1uNkZTTdGGFtfxA1a/ZS8zJdCft9MV9ZYJX6
         Tib1d4VwDMR4lST1MOh00CuMKQPCpyyXcDMdymLx3NaOMnPVhZyH4MxPqeiOlhln1RLA
         xqUxpnoBWAZlfBtk0IbAj+u1+VP17vdnf1LKJJoGTSsIlupoAGUxT5He9ODOVaUNMHsV
         X8tA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718098590; x=1718703390;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=fWUldjVAa8Hkj/pKWiQmgY0XPWD+ktn/SGr6EoSJ0Tw=;
        b=C4OKqGLYjjE4iiHi3t+V+P7/2//z86xb99ofsZHo9oVawyF85iKfa+4XLZEtooS5bL
         aSOGg0ZEoJ3SH8VU8aiuvJBCtdMDy09BfrqczgITZtpUAeQeS6W91zzWwe8TN+u4v1tS
         n1vPKW2qGOkHvq+HkH06bFw6hzSfLN0BtNmKLZnrtGmaR1i+0mpm+pdxnTphK+PHLk7P
         YJ8Z8x3UPBdztC9PG5/KYfR4C3KJvh2rXgKFgKRsSqZwLub1xbzgcadwqZvuyjunQvu1
         OmWb5prGRaDNWGVjZ11hbpdyJJKbLGdcp+pVmlt6o/79DIv/dPTU0Qeq5ea54gBwYzBe
         XmWg==
X-Forwarded-Encrypted: i=1; AJvYcCWbxoXces1UI0VhzVPs3vXRwYTOXqGQfueEhDGRu9mapDjko4btYp9KD8/kINE7+eqyMGNJBOKi5kN8J9pafLdKtAR3JOitFh5gXNBQ5H8=
X-Gm-Message-State: AOJu0Yw68lHvILtPpYeE76Kyq7zhvsU9CJ2FcUpU0QkNe4rhAueX35sg
	5XfWYMeiqHblIkTeVOomhjjnEBdYYVUDKb61IFIv0V3TGMX1osaXLENpnqEAgA==
X-Google-Smtp-Source: AGHT+IG5UrTbCiVxuZiYvN+SWySPYtARVrjgzCCkQ3Cf9TiFEsc0npD5O1DTuXMd4B8Eq9Qw/XGWvg==
X-Received: by 2002:a05:651c:b0d:b0:2eb:f626:ba6e with SMTP id 38308e7fff4ca-2ebf626bbd1mr3129321fa.21.1718098589818;
        Tue, 11 Jun 2024 02:36:29 -0700 (PDT)
Message-ID: <6cfeeca8-10dd-4d79-8436-fbe3cf342a54@suse.com>
Date: Tue, 11 Jun 2024 11:36:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a
 structured format
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott
 <dave@recoil.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
References: <cover.1718038855.git.w1benny@gmail.com>
 <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
 <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com>
 <CAKBKdXiiZdz70nWx7kqp2S5RdbRsku+qtn6z9DBk44LZOgp3Qw@mail.gmail.com>
 <217202e9-608f-4788-b689-8140567e4485@suse.com>
 <CAKBKdXhzRZuaiZ+cDYD=ofShgRySbGyZjSZe=G9Rdd0T8wof3A@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CAKBKdXhzRZuaiZ+cDYD=ofShgRySbGyZjSZe=G9Rdd0T8wof3A@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 11:34, Petr Beneš wrote:
> On Tue, Jun 11, 2024 at 11:14 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 11.06.2024 10:00, Petr Beneš wrote:
>>> On Tue, Jun 11, 2024 at 8:41 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 10.06.2024 19:10, Petr Beneš wrote:
>>>>> From: Petr Beneš <w1benny@gmail.com>
>>>>>
>>>>> Encapsulate the altp2m options within a struct. This change is preparatory
>>>>> and sets the groundwork for introducing additional parameter in subsequent
>>>>> commit.
>>>>>
>>>>> Signed-off-by: Petr Beneš <w1benny@gmail.com>
>>>>> Acked-by: Julien Grall <jgrall@amazon.com> # arm
>>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
>>>>
>>>> Looks like you lost Christian's ack for ...
>>>>
>>>>> ---
>>>>>  tools/libs/light/libxl_create.c     | 6 +++---
>>>>>  tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
>>>>
>>>> ... the adjustment of this file?
>>>
>>> In the cover email, Christian only acked:
>>>
>>>> tools/ocaml/libs/xc/xenctrl.ml       |   2 +
>>>> tools/ocaml/libs/xc/xenctrl.mli      |   2 +
>>>> tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---
>>
>> Right, but above I was talking about the last of these three files.
>>
>> Jan
> 
> Ouch. It didn't occur to me that Ack on cover email acks each of the
> files in every separate patch. My thinking was it acks only the
> patches where those three are together. Anyway, it makes sense. I'll
> resend v7.

Well, no need to just for this. Yet if a v7 turns out necessary, please
make sure you have the ack recorded.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:38:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:38:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738210.1144882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxxc-0004Kq-Af; Tue, 11 Jun 2024 09:38:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738210.1144882; Tue, 11 Jun 2024 09:38:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGxxc-0004Kj-7T; Tue, 11 Jun 2024 09:38:36 +0000
Received: by outflank-mailman (input) for mailman id 738210;
 Tue, 11 Jun 2024 09:38:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rr1P=NN=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sGxxa-0004Kd-FL
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:38:34 +0000
Received: from mail-oa1-x2d.google.com (mail-oa1-x2d.google.com
 [2001:4860:4864:20::2d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5d82c309-27d6-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 11:38:32 +0200 (CEST)
Received: by mail-oa1-x2d.google.com with SMTP id
 586e51a60fabf-2548c80efc6so1430687fac.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:38:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d82c309-27d6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718098711; x=1718703511; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=pqO/jtbOhl0WTM/03VmSyYnKt2ZVAFH87f5BRYJfwJY=;
        b=eW3Q2xiYmDAim9fBk8XbI49L6qHwGQmGsrYID/DvA/u3xz1YYwQy8JGTvn25kTbM0b
         e6B0P1aAGvT8cNnifm0B4tFaSYgnFzOuPQ3iBA3ZHTw3P2Qs8A42l8thgsBfTDadgvtc
         cBgX6qPfE31S3rFYKPEmL0LKmZrplyaJBMikO955v38o7Fk6cjaOcpgOEq9q53f15RhI
         1qlSvJBlw/GeOq7INTkPTA57YVuvjrz/hdC6DdkA8Voe0mqf+hoKDIofnaAL+lGy7alx
         pOdd6s1hnAgEuY9dj8um1jgI7QOzxnjullnXv7uKmWqddIvktvIC8Z8I2cXk/4OKtJLz
         R5xQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718098711; x=1718703511;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=pqO/jtbOhl0WTM/03VmSyYnKt2ZVAFH87f5BRYJfwJY=;
        b=uQLWfzbo4151aclMgbAejomy1beBuQjCmwTkiSkpifaG2/Q3bA39BFBmuziubq9Sk8
         XVduKPFZAY4VrEjofy8JCeCdSYoFLNbUvfpfoIVsM124478pUmm8CfwRa6VkdaCZe840
         bpC5df/PpR0zYKp6aV7v1GlAi5IW6IP8fllwJdtib4mpEzsaACtPKHaHnniuX7PZdHnL
         QYlu/i2EDCaJRPUTXCNV16+4DqARKq26tHmSnjkMOecrmDUA5pI5Lw4/E5qQoGgjfiQN
         wksElyroKgiQo1AFwTj2O9ax2FkP9PnIiu9v3hB6/1c6VcFzgrQgNIhmFVmYXvuXBn0O
         MNAw==
X-Forwarded-Encrypted: i=1; AJvYcCVUpyqwlVNAyZT0E2i/nu8FtfBbylHmD+mgZxv8PDZ67V6EuXz75T3Cpt8lTdlJsfM/cQ7BHY7B9LwkKFgl4P+ppBGr8JAPmBDGuDpge5A=
X-Gm-Message-State: AOJu0YwR5j+Bh0Zr2M05Uj3BEM/HqYULQMxQbLsYfKdSRn1lFx2DI/Ud
	Zj5ThQnK4xRsP5G2PwJfon7dtrOhvSuc9YzrsKksiAcHz4euXoZnm3xxaE0DDH75MdcUuVJwk3Z
	LIVXRBJChFj0pTkptU8sTvzZa3RY=
X-Google-Smtp-Source: AGHT+IFr8puub9immjrdfMVWGjf+tf55lOcsST15FqiIjzPfPuQ0Ef+1AZllU10MIaFk0SoIssPNeeX46MivowhXcjo=
X-Received: by 2002:a05:6870:d182:b0:254:eda1:912c with SMTP id
 586e51a60fabf-254eda19b9emr861418fac.5.1718098711066; Tue, 11 Jun 2024
 02:38:31 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1718038855.git.w1benny@gmail.com> <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
 <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com> <CAKBKdXiiZdz70nWx7kqp2S5RdbRsku+qtn6z9DBk44LZOgp3Qw@mail.gmail.com>
 <217202e9-608f-4788-b689-8140567e4485@suse.com> <CAKBKdXhzRZuaiZ+cDYD=ofShgRySbGyZjSZe=G9Rdd0T8wof3A@mail.gmail.com>
 <6cfeeca8-10dd-4d79-8436-fbe3cf342a54@suse.com>
In-Reply-To: <6cfeeca8-10dd-4d79-8436-fbe3cf342a54@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Tue, 11 Jun 2024 11:38:20 +0200
Message-ID: <CAKBKdXgHGx5_tw5HNjuuHzT__VC_dT7J7rF3KFrJ6htVmeQobA@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a
 structured format
To: Jan Beulich <jbeulich@suse.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 11, 2024 at 11:36=E2=80=AFAM Jan Beulich <jbeulich@suse.com> wr=
ote:
>
> On 11.06.2024 11:34, Petr Bene=C5=A1 wrote:
> > On Tue, Jun 11, 2024 at 11:14=E2=80=AFAM Jan Beulich <jbeulich@suse.com=
> wrote:
> >> On 11.06.2024 10:00, Petr Bene=C5=A1 wrote:
> >>> On Tue, Jun 11, 2024 at 8:41=E2=80=AFAM Jan Beulich <jbeulich@suse.co=
m> wrote:
> >>>> On 10.06.2024 19:10, Petr Bene=C5=A1 wrote:
> >>>>> From: Petr Bene=C5=A1 <w1benny@gmail.com>
> >>>>>
> >>>>> Encapsulate the altp2m options within a struct. This change is
> >>>>> preparatory and sets the groundwork for introducing an additional
> >>>>> parameter in a subsequent commit.
> >>>>>
> >>>>> Signed-off-by: Petr Beneš <w1benny@gmail.com>
> >>>>> Acked-by: Julien Grall <jgrall@amazon.com> # arm
> >>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
> >>>>
> >>>> Looks like you lost Christian's ack for ...
> >>>>
> >>>>> ---
> >>>>>  tools/libs/light/libxl_create.c     | 6 +++---
> >>>>>  tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
> >>>>
> >>>> ... the adjustment of this file?
> >>>
> >>> In the cover email, Christian only acked:
> >>>
> >>>> tools/ocaml/libs/xc/xenctrl.ml       |   2 +
> >>>> tools/ocaml/libs/xc/xenctrl.mli      |   2 +
> >>>> tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---
> >>
> >> Right, but above I was talking about the last of these three files.
> >>
> >> Jan
> >
> > Ouch. It didn't occur to me that an ack on the cover email acks each of
> > the files in every separate patch. My thinking was that it acks only the
> > patches where those three files appear together. Anyway, it makes sense.
> > I'll resend v7.
>
> Well, no need to just for this. Yet if a v7 turns out necessary, please
> make sure you have the ack recorded.
>
> Jan

Noted.

P.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:43:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:43:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738215.1144892 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGy21-00071E-PJ; Tue, 11 Jun 2024 09:43:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738215.1144892; Tue, 11 Jun 2024 09:43:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGy21-000717-Lp; Tue, 11 Jun 2024 09:43:09 +0000
Received: by outflank-mailman (input) for mailman id 738215;
 Tue, 11 Jun 2024 09:43:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I6Ds=NN=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sGy1z-0006zn-Py
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:43:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0001f2fb-27d7-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 11:43:05 +0200 (CEST)
Received: from mail-lf1-f69.google.com (mail-lf1-f69.google.com
 [209.85.167.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-498-kVRicqTLOXGsh_la4MqUdw-1; Tue, 11 Jun 2024 05:43:00 -0400
Received: by mail-lf1-f69.google.com with SMTP id
 2adb3069b0e04-52bbd8ac2c1so3189025e87.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:43:00 -0700 (PDT)
Received: from ?IPV6:2003:cb:c748:ba00:1c00:48ea:7b5a:c12b?
 (p200300cbc748ba001c0048ea7b5ac12b.dip0.t-ipconnect.de.
 [2003:cb:c748:ba00:1c00:48ea:7b5a:c12b])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4215c2cd247sm176185215e9.40.2024.06.11.02.42.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 02:42:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0001f2fb-27d7-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718098984;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=4Fvo4snqjmjLL5YGwAq7AoShRlEWBSMshPzLS47jMwA=;
	b=bDEHSvn1pkrM2xWjOzrnCT7EE58Kqp+J1PxYmA1sw/NIWno3Wy+9Gzd5/qpDrmPb5T6tA2
	mcxAGUqAjlf24ZEcRSUndxkziTAJvIWMla8SgkUoCjZlsHFWoW443/XwClFEFKK+x1FeeR
	8oSBs4RTnMqlv4D9E2GOolxdUMEKznE=
X-MC-Unique: kVRicqTLOXGsh_la4MqUdw-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718098979; x=1718703779;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=4Fvo4snqjmjLL5YGwAq7AoShRlEWBSMshPzLS47jMwA=;
        b=X2zqVxPT7EBS1b2vRnAAqytEY/9CguKR/Oo1ABHk2HMrmScJne8rvCvWCHMl6bQ1Pr
         gaKkQrUgI64dwtzG3WYL7g7G/3MbsP8NJSKHTvUTuLICyiw8ns4K8jNgBhFCtYC4WFP5
         +86KQSMTtAgoStGVPtAPsAgJODcN2SU6O+WEcbj+keGtJYnDvFve7/9YFs0eD9DgKdND
         2956t10CmGRnoyXpcwr1APAtVjSPzBTpoqOaNVYUBHy+xEwZRD6AFhS4EQJmL0VA/uzB
         1mO4IugAdEjjcFcaaTJRpgsUe7MwNyEmdhg9P9lnv9RbncAFRQMvlxrw3E7Rhabq4lYs
         I77g==
X-Forwarded-Encrypted: i=1; AJvYcCW2ud+BCLIAb6/AaBYlapr/W8B7q+PQq/X3vObR+dyzVGahWad1XAnpmAj82VHPRS+IDJbqR6mfGz85DAfFMOxLH043lqiVf6dj5T1zSJE=
X-Gm-Message-State: AOJu0YxEIBDzAyxcnQVtliLz8OCLd12AMlSTH1W64BCB55Iagz8JKZpF
	Q5raCRQG5/NAwVI4lAdXYO30YuFHvPAGoKo6wkemo3S3R5eL3i0rzHHsHNnhfbW4BDAR1XujQ1u
	Tnbs+FEvq1KQkkbxjpFeFTi0Sa7g6VH88cV8uo7tNCse+eDuxl6j5SeejIG4LEpUo
X-Received: by 2002:ac2:5a43:0:b0:52b:e7ff:4eb7 with SMTP id 2adb3069b0e04-52be7ff4ed8mr5513980e87.59.1718098978996;
        Tue, 11 Jun 2024 02:42:58 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IGXolbyC1nt04woF61IXkeFzl40jH1amuFwhtMdKb0jcnAeO4WTr7EvpCOinxTpGFsUzHiFKA==
X-Received: by 2002:ac2:5a43:0:b0:52b:e7ff:4eb7 with SMTP id 2adb3069b0e04-52be7ff4ed8mr5513958e87.59.1718098978571;
        Tue, 11 Jun 2024 02:42:58 -0700 (PDT)
Message-ID: <824c319a-530e-4153-80f5-20e2c463fa81@redhat.com>
Date: Tue, 11 Jun 2024 11:42:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of
 !ZONE_DEVICE with PageOffline() instead of PageReserved()
To: linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 kasan-dev@googlegroups.com, Mike Rapoport <rppt@kernel.org>,
 Oscar Salvador <osalvador@suse.de>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-3-david@redhat.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <20240607090939.89524-3-david@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 07.06.24 11:09, David Hildenbrand wrote:
> We currently initialize the memmap such that PG_reserved is set and the
> refcount of the page is 1. In virtio-mem code, we have to manually clear
> that PG_reserved flag to make memory offlining with partially hotplugged
> memory blocks possible: has_unmovable_pages() would otherwise bail out on
> such pages.
> 
> We want to avoid PG_reserved where possible and move to typed pages
> instead. Further, we want to further enlighten memory offlining code about
> PG_offline: offline pages in an online memory section. One example is
> handling managed page count adjustments in a cleaner way during memory
> offlining.
> 
> So let's initialize the pages with PG_offline instead of PG_reserved.
> generic_online_page()->__free_pages_core() will now clear that flag before
> handing that memory to the buddy.
> 
> Note that the page refcount is still 1 and would forbid offlining of such
> memory except when special care is taken during GOING_OFFLINE, as
> currently only implemented by virtio-mem.
> 
> With this change, we can now get non-PageReserved() pages in the XEN
> balloon list. From what I can tell, that can already happen via
> decrease_reservation(), so that should be fine.
> 
> HV-balloon should not really observe a change: partial online memory
> blocks still cannot get surprise-offlined, because the refcount of these
> PageOffline() pages is 1.
> 
> Update virtio-mem, HV-balloon and XEN-balloon code to be aware that
> hotplugged pages are now PageOffline() instead of PageReserved() before
> they are handed over to the buddy.
> 
> We'll leave the ZONE_DEVICE case alone for now.
> 

@Andrew, can we add here:

"Note that self-hosted vmemmap pages will no longer be marked as 
reserved. This matches ordinary vmemmap pages allocated from the buddy 
during memory hotplug. Now, really only vmemmap pages allocated from 
memblock during early boot will be marked reserved. Existing 
PageReserved() checks seem to be handling all relevant cases correctly 
even after this change."

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:58:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:58:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738223.1144902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGyGm-0005E1-VJ; Tue, 11 Jun 2024 09:58:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738223.1144902; Tue, 11 Jun 2024 09:58:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGyGm-0005Du-Se; Tue, 11 Jun 2024 09:58:24 +0000
Received: by outflank-mailman (input) for mailman id 738223;
 Tue, 11 Jun 2024 09:58:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=W6la=NN=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sGyGk-0005Do-Nn
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:58:22 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 226678d6-27d9-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 11:58:21 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id B119320601;
 Tue, 11 Jun 2024 09:58:20 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 565AB13A55;
 Tue, 11 Jun 2024 09:58:20 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id Igo1E7wfaGa8fAAAD6G6ig
 (envelope-from <hare@suse.de>); Tue, 11 Jun 2024 09:58:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 226678d6-27d9-11ef-90a3-e314d9c70b13
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <34a7b2a4-b0cb-4580-85c9-b598fd70449e@suse.de>
Date: Tue, 11 Jun 2024 11:58:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 13/26] block: move cache control settings out of
 queue->flags
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-14-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240611051929.513387-14-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Action: no action
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org
X-Spam-Level: 
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Rspamd-Queue-Id: B119320601

On 6/11/24 07:19, Christoph Hellwig wrote:
> Move the cache control settings into the queue_limits so that they
> can be set atomically and all I/O is frozen when changing the
> flags.
> 
> Add new features and flags fields for the driver-set flags, and internal
> (usually sysfs-controlled) flags in the block layer.  Note that we'll
> eventually remove enough fields from queue_limits to bring it back to the
> previous size.
> 
> The disable flag is inverted compared to the previous meaning, which
> means it now survives a rescan, similar to the max_sectors and
> max_discard_sectors user limits.
> 
> The FLUSH and FUA flags are now inherited by blk_stack_limits, which
> simplified the code in dm a lot, but also causes a slight behavior
> change in that dm-switch and dm-unstripe now advertise a write cache
> despite setting num_flush_bios to 0.  The I/O path will handle this
> gracefully, but as far as I can tell the lack of num_flush_bios
> and thus flush support is a pre-existing data integrity bug in those
> targets that really needs fixing, after which a non-zero num_flush_bios
> should be required in dm for targets that map to underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   .../block/writeback_cache_control.rst         | 67 +++++++++++--------
>   arch/um/drivers/ubd_kern.c                    |  2 +-
>   block/blk-core.c                              |  2 +-
>   block/blk-flush.c                             |  9 ++-
>   block/blk-mq-debugfs.c                        |  2 -
>   block/blk-settings.c                          | 29 ++------
>   block/blk-sysfs.c                             | 29 +++++---
>   block/blk-wbt.c                               |  4 +-
>   drivers/block/drbd/drbd_main.c                |  2 +-
>   drivers/block/loop.c                          |  9 +--
>   drivers/block/nbd.c                           | 14 ++--
>   drivers/block/null_blk/main.c                 | 12 ++--
>   drivers/block/ps3disk.c                       |  7 +-
>   drivers/block/rnbd/rnbd-clt.c                 | 10 +--
>   drivers/block/ublk_drv.c                      |  8 ++-
>   drivers/block/virtio_blk.c                    | 20 ++++--
>   drivers/block/xen-blkfront.c                  |  9 ++-
>   drivers/md/bcache/super.c                     |  7 +-
>   drivers/md/dm-table.c                         | 39 +++--------
>   drivers/md/md.c                               |  8 ++-
>   drivers/mmc/core/block.c                      | 42 ++++++------
>   drivers/mmc/core/queue.c                      | 12 ++--
>   drivers/mmc/core/queue.h                      |  3 +-
>   drivers/mtd/mtd_blkdevs.c                     |  5 +-
>   drivers/nvdimm/pmem.c                         |  4 +-
>   drivers/nvme/host/core.c                      |  7 +-
>   drivers/nvme/host/multipath.c                 |  6 --
>   drivers/scsi/sd.c                             | 28 +++++---
>   include/linux/blkdev.h                        | 38 +++++++++--
>   29 files changed, 227 insertions(+), 207 deletions(-)
> 
> diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst
> index b208488d0aae85..9cfe27f90253c7 100644
> --- a/Documentation/block/writeback_cache_control.rst
> +++ b/Documentation/block/writeback_cache_control.rst
> @@ -46,41 +46,50 @@ worry if the underlying devices need any explicit cache flushing and how
>   the Forced Unit Access is implemented.  The REQ_PREFLUSH and REQ_FUA flags
>   may both be set on a single bio.
>   
> +Feature settings for block drivers
> +----------------------------------
>   
> -Implementation details for bio based block drivers
> ---------------------------------------------------------------
> +For devices that do not support volatile write caches there is no driver
> +support required, the block layer completes empty REQ_PREFLUSH requests before
> +entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> +requests that have a payload.
>   
> -These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
> -directly below the submit_bio interface.  For remapping drivers the REQ_FUA
> -bits need to be propagated to underlying devices, and a global flush needs
> -to be implemented for bios with the REQ_PREFLUSH bit set.  For real device
> -drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
> -on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
> -data can be completed successfully without doing any work.  Drivers for
> -devices with volatile caches need to implement the support for these
> -flags themselves without any help from the block layer.
> +For devices with volatile write caches the driver needs to tell the block layer
> +that it supports flushing caches by setting the
>   
> +   BLK_FEAT_WRITE_CACHE
>   
> -Implementation details for request_fn based block drivers
> ----------------------------------------------------------
> +flag in the queue_limits features field.  For devices that also support the FUA
> +bit the block layer needs to be told to pass on the REQ_FUA bit by also setting
> +the
>   
> -For devices that do not support volatile write caches there is no driver
> -support required, the block layer completes empty REQ_PREFLUSH requests before
> -entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> -requests that have a payload.  For devices with volatile write caches the
> -driver needs to tell the block layer that it supports flushing caches by
> -doing::
> +   BLK_FEAT_FUA
> +
> +flag in the features field of the queue_limits structure.
> +
> +Implementation details for bio based block drivers
> +--------------------------------------------------
> +
> +For bio based drivers the REQ_PREFLUSH and REQ_FUA bits are simply passed on
> +to the driver if the driver sets the BLK_FEAT_WRITE_CACHE flag, and the driver
> +needs to handle them.
> +
> +*NOTE*: The REQ_FUA bit also gets passed on when the BLK_FEAT_FUA flag is
> +_not_ set.  Any bio based driver that sets BLK_FEAT_WRITE_CACHE also needs to
> +handle REQ_FUA.
>   
> -	blk_queue_write_cache(sdkp->disk->queue, true, false);
> +For remapping drivers the REQ_FUA bits need to be propagated to underlying
> +devices, and a global flush needs to be implemented for bios with the
> +REQ_PREFLUSH bit set.
>   
> -and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn.  Note that
> -REQ_PREFLUSH requests with a payload are automatically turned into a sequence
> -of an empty REQ_OP_FLUSH request followed by the actual write by the block
> -layer.  For devices that also support the FUA bit the block layer needs
> -to be told to pass through the REQ_FUA bit using::
> +Implementation details for blk-mq drivers
> +-----------------------------------------
>   
> -	blk_queue_write_cache(sdkp->disk->queue, true, true);
> +When the BLK_FEAT_WRITE_CACHE flag is set, REQ_OP_WRITE | REQ_PREFLUSH requests
> +with a payload are automatically turned into a sequence of a REQ_OP_FLUSH
> +request followed by the actual write by the block layer.
>   
> -and the driver must handle write requests that have the REQ_FUA bit set
> -in prep_fn/request_fn.  If the FUA bit is not natively supported the block
> -layer turns it into an empty REQ_OP_FLUSH request after the actual write.
> +When the BLK_FEAT_FUA flag is set, the REQ_FUA bit is simply passed on for the
> +REQ_OP_WRITE request; otherwise a REQ_OP_FLUSH request is sent by the block
> +layer after the completion of the write request for bio submissions with the
> +REQ_FUA bit set.
> diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
> index cdcb75a68989dd..19e01691ea0ea7 100644
> --- a/arch/um/drivers/ubd_kern.c
> +++ b/arch/um/drivers/ubd_kern.c
> @@ -835,6 +835,7 @@ static int ubd_add(int n, char **error_out)
>   	struct queue_limits lim = {
>   		.max_segments		= MAX_SG,
>   		.seg_boundary_mask	= PAGE_SIZE - 1,
> +		.features		= BLK_FEAT_WRITE_CACHE,
>   	};
>   	struct gendisk *disk;
>   	int err = 0;
> @@ -882,7 +883,6 @@ static int ubd_add(int n, char **error_out)
>   	}
>   
>   	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
> -	blk_queue_write_cache(disk->queue, true, false);
>   	disk->major = UBD_MAJOR;
>   	disk->first_minor = n << UBD_SHIFT;
>   	disk->minors = 1 << UBD_SHIFT;
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 82c3ae22d76d88..2b45a4df9a1aa1 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -782,7 +782,7 @@ void submit_bio_noacct(struct bio *bio)
>   		if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_WRITE &&
>   				 bio_op(bio) != REQ_OP_ZONE_APPEND))
>   			goto end_io;
> -		if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
> +		if (!bdev_write_cache(bdev)) {
>   			bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA);
>   			if (!bio_sectors(bio)) {
>   				status = BLK_STS_OK;
> diff --git a/block/blk-flush.c b/block/blk-flush.c
> index 2234f8b3fc05f2..30b9d5033a2b85 100644
> --- a/block/blk-flush.c
> +++ b/block/blk-flush.c
> @@ -381,8 +381,8 @@ static void blk_rq_init_flush(struct request *rq)
>   bool blk_insert_flush(struct request *rq)
>   {
>   	struct request_queue *q = rq->q;
> -	unsigned long fflags = q->queue_flags;	/* may change, cache */
>   	struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
> +	bool supports_fua = q->limits.features & BLK_FEAT_FUA;

Shouldn't we have a helper like blk_feat_fua() here?

>   	unsigned int policy = 0;
>   
>   	/* FLUSH/FUA request must never be merged */
> @@ -394,11 +394,10 @@ bool blk_insert_flush(struct request *rq)
>   	/*
>   	 * Check which flushes we need to sequence for this operation.
>   	 */
> -	if (fflags & (1UL << QUEUE_FLAG_WC)) {
> +	if (blk_queue_write_cache(q)) {
>   		if (rq->cmd_flags & REQ_PREFLUSH)
>   			policy |= REQ_FSEQ_PREFLUSH;
> -		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
> -		    (rq->cmd_flags & REQ_FUA))
> +		if ((rq->cmd_flags & REQ_FUA) && !supports_fua)
>   			policy |= REQ_FSEQ_POSTFLUSH;
>   	}
>   
> @@ -407,7 +406,7 @@ bool blk_insert_flush(struct request *rq)
>   	 * REQ_PREFLUSH and FUA for the driver.
>   	 */
>   	rq->cmd_flags &= ~REQ_PREFLUSH;
> -	if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
> +	if (!supports_fua)
>   		rq->cmd_flags &= ~REQ_FUA;
>   
>   	/*
> diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
> index 770c0c2b72faaa..e8b9db7c30c455 100644
> --- a/block/blk-mq-debugfs.c
> +++ b/block/blk-mq-debugfs.c
> @@ -93,8 +93,6 @@ static const char *const blk_queue_flag_name[] = {
>   	QUEUE_FLAG_NAME(INIT_DONE),
>   	QUEUE_FLAG_NAME(STABLE_WRITES),
>   	QUEUE_FLAG_NAME(POLL),
> -	QUEUE_FLAG_NAME(WC),
> -	QUEUE_FLAG_NAME(FUA),
>   	QUEUE_FLAG_NAME(DAX),
>   	QUEUE_FLAG_NAME(STATS),
>   	QUEUE_FLAG_NAME(REGISTERED),
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index f11c8676eb4c67..536ee202fcdccb 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -261,6 +261,9 @@ static int blk_validate_limits(struct queue_limits *lim)
>   		lim->misaligned = 0;
>   	}
>   
> +	if (!(lim->features & BLK_FEAT_WRITE_CACHE))
> +		lim->features &= ~BLK_FEAT_FUA;
> +
>   	err = blk_validate_integrity_limits(lim);
>   	if (err)
>   		return err;
> @@ -454,6 +457,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
>   {
>   	unsigned int top, bottom, alignment, ret = 0;
>   
> +	t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
> +
>   	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
>   	t->max_user_sectors = min_not_zero(t->max_user_sectors,
>   			b->max_user_sectors);
> @@ -711,30 +716,6 @@ void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
>   }
>   EXPORT_SYMBOL(blk_set_queue_depth);
>   
> -/**
> - * blk_queue_write_cache - configure queue's write cache
> - * @q:		the request queue for the device
> - * @wc:		write back cache on or off
> - * @fua:	device supports FUA writes, if true
> - *
> - * Tell the block layer about the write cache of @q.
> - */
> -void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
> -{
> -	if (wc) {
> -		blk_queue_flag_set(QUEUE_FLAG_HW_WC, q);
> -		blk_queue_flag_set(QUEUE_FLAG_WC, q);
> -	} else {
> -		blk_queue_flag_clear(QUEUE_FLAG_HW_WC, q);
> -		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
> -	}
> -	if (fua)
> -		blk_queue_flag_set(QUEUE_FLAG_FUA, q);
> -	else
> -		blk_queue_flag_clear(QUEUE_FLAG_FUA, q);
> -}
> -EXPORT_SYMBOL_GPL(blk_queue_write_cache);
> -
>   int bdev_alignment_offset(struct block_device *bdev)
>   {
>   	struct request_queue *q = bdev_get_queue(bdev);
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 5c787965b7d09e..4f524c1d5e08bd 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -423,32 +423,41 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
>   
>   static ssize_t queue_wc_show(struct request_queue *q, char *page)
>   {
> -	if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
> -		return sprintf(page, "write back\n");
> -
> -	return sprintf(page, "write through\n");
> +	if (q->limits.features & BLK_FLAGS_WRITE_CACHE_DISABLED)

Where is the difference between 'flags' and 'features'?
I.e. why is it named BLK_FEAT_FUA but BLK_FLAGS_WRITE_CACHE_DISABLED?
And if the feature is the existence of a capability, and the flag is
the setting of that capability, can you make that clear in the documentation?

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 09:59:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 09:59:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738231.1144912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGyI3-00068t-CY; Tue, 11 Jun 2024 09:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738231.1144912; Tue, 11 Jun 2024 09:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGyI3-00068T-9J; Tue, 11 Jun 2024 09:59:43 +0000
Received: by outflank-mailman (input) for mailman id 738231;
 Tue, 11 Jun 2024 09:59:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGyI2-00068N-Jd
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 09:59:42 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51fe4a30-27d9-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 11:59:41 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a6f21ff4e6dso196933366b.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 02:59:41 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f19171cd0sm294426566b.61.2024.06.11.02.59.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 02:59:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51fe4a30-27d9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718099980; x=1718704780; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=4JyI9QIbwAtP+nBSRYQBz9kJI7Jh3JRHrHptlZlbyTo=;
        b=dOH72qAugutv4amqYWZjApjQIhZ7CNi5u/D0XQAV7kKA8RQn9xvLETajh2crNYrWIL
         wz8RA6VQQdkUVpnr/datZYwppaJGH0EZHB5cNbBXjouq+2kHJWzZvpGLBCmNbJIeTSw2
         TdjwMP/7kf83B8dQ1UhYxn0TtAkjeRNJ6el5MsJYZ1v4pmJvWwXvOQ+6pkjFp1y8PXXi
         cKFHiIiMnjWDiUDwH+t5VDxJZA2xZr+9mas+NSX79IaiwW0dmhEAc4IFTNja9QDsxvFO
         YnArXYtBEPdamGtoP9coOzz+31nlmQ0C8XZrBMxZ5Z3wQ8T3J7MQLeP1x1s2TXW9IB3E
         LaMA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718099980; x=1718704780;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4JyI9QIbwAtP+nBSRYQBz9kJI7Jh3JRHrHptlZlbyTo=;
        b=bs5xhMEBNJCh7KUfgE/my+CMIzVv2aU20yOdHZ5+GhiWrceaM0SNUi2On/rRWYZFpZ
         Qxw0AQr5CHqbSAagcXZEanCwzFJmkhEOCAwDcyyNUQaW0VTo5yOfppqRQCA8WxZshUrP
         a6/V3+EtYn5Kq0Q0mOvbrMhCw4uYMjkGzIaRLNMfdTkJfmDK5G57LZpIoz6lbxi2S76c
         U3UxLSxtvtOSVdbgfwn+xh0zm+bv6Ua837x2UTrRof40Utr+mbGvHU4oGcxp79cGBYw9
         yKcLWUEvq8et+xYB9ub6OmKbSbJntlEOsPnX8mLxq3qQ28MpdWWGN3K+6iNb2ZvjkMIC
         9HAw==
X-Forwarded-Encrypted: i=1; AJvYcCXN+sCSTgj1oYFzzOYFWVpSUapsOy8H9FwCFhu9lWR0pAqYv15H62PUWjsTvfGbYywqj/sJa0tLW6hi0F52Pg+R93iAzu/50AjLVIirRAE=
X-Gm-Message-State: AOJu0YwpA0OaiW627cTKeXZPNOqBDLx6wqxjMJMxBA7c4OOee97ugxW5
	UxQpN49szPFuRqe8zsYmpYCFmpLSuVjj0JOhxRyM2xSCqs6nbtyZgVVPx2bt4Q==
X-Google-Smtp-Source: AGHT+IHjE/BNHjSgeHQTsk9rOctS4IQFWy1K85WcYoxuQmqTYlpJA+LzY+KqSwG4TKJmCZMp1xHqEg==
X-Received: by 2002:a17:906:cc93:b0:a6e:88cc:bee9 with SMTP id a640c23a62f3a-a6e88ccbf2bmr618877866b.24.1718099980329;
        Tue, 11 Jun 2024 02:59:40 -0700 (PDT)
Message-ID: <5660db44-b169-44e3-9439-67d3b55bcac0@suse.com>
Date: Tue, 11 Jun 2024 11:59:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 3/7] x86/irq: limit interrupt movement done by
 fixup_irqs()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-4-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240610142043.11924-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 16:20, Roger Pau Monne wrote:
> The current check used in fixup_irqs() to decide whether to move around
> interrupts is based on the affinity mask, but such a mask can have all bits
> set, and hence is unlikely to be a subset of the input mask.  For example, if
> an interrupt has an affinity mask of all 1s, any input to fixup_irqs() that's
> not an all-set CPU mask would cause that interrupt to be shuffled around
> unconditionally.
> 
> What fixup_irqs() cares about is evacuating interrupts from CPUs not set on the
> input CPU mask, and for that purpose it should check whether the interrupt is
> assigned to a CPU not present in the input mask.  Assume that ->arch.cpu_mask
> is a subset of the ->affinity mask, and keep the current logic that resets the
> ->affinity mask if the interrupt has to be shuffled around.
> 
> Doing the affinity movement based on ->arch.cpu_mask requires removing the
> special handling to ->arch.cpu_mask done for high priority vectors, otherwise
> the adjustment done to cpu_mask makes them always skip the CPU interrupt
> movement.
> 
> While there also adjust the comment as to the purpose of fixup_irqs().
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Aiui this is independent of patch 1, so could go in while we still settle on
how to word things there?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 10:07:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 10:07:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738238.1144922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGyPF-0000jR-25; Tue, 11 Jun 2024 10:07:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738238.1144922; Tue, 11 Jun 2024 10:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGyPE-0000jK-Vc; Tue, 11 Jun 2024 10:07:08 +0000
Received: by outflank-mailman (input) for mailman id 738238;
 Tue, 11 Jun 2024 10:07:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I6Ds=NN=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sGyPD-0000i3-Gr
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 10:07:07 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a2c71ec-27da-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 12:07:05 +0200 (CEST)
Received: from mail-lj1-f200.google.com (mail-lj1-f200.google.com
 [209.85.208.200]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-650-4heh_eiFOiKnSkrDExmWow-1; Tue, 11 Jun 2024 06:07:00 -0400
Received: by mail-lj1-f200.google.com with SMTP id
 38308e7fff4ca-2eb6f6b1b2dso24544741fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 03:07:00 -0700 (PDT)
Received: from ?IPV6:2003:cb:c748:ba00:1c00:48ea:7b5a:c12b?
 (p200300cbc748ba001c0048ea7b5ac12b.dip0.t-ipconnect.de.
 [2003:cb:c748:ba00:1c00:48ea:7b5a:c12b])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-421818907b6sm86762715e9.27.2024.06.11.03.06.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 03:06:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a2c71ec-27da-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718100424;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=F/H/c4fGsFUdJRDofZpCfuHa+2bAf+RkO1DuZv3WdgI=;
	b=W3X2GaYPXGldjAN/D+mmcGq/lSkv/xPGx1uRfAau8CjtK+tAxC7WeFzEAgxDiIPNJlOBuD
	KiZOHwpN35iZUe1eXA4sZSQ8QTOD7cjCLOrujXSu+4YkZNeJswjc6QiAsx4ngJ457TvNf3
	6Hy217x1oLGOM9GgRhCVxVnb2NzURQ8=
X-MC-Unique: 4heh_eiFOiKnSkrDExmWow-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718100419; x=1718705219;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=F/H/c4fGsFUdJRDofZpCfuHa+2bAf+RkO1DuZv3WdgI=;
        b=f+lp1/8UhTzWNjTEz1mSTCz+W67v2URgk81KhYWbCSrk07G4zSzXZmfrEPfP+I1XYK
         tj74rZY8uvLY+nq6ud8h5P4r1GXK+Rwxp9/5KbYuamsl5PFd9r/vSS534dyFB3OpQbeq
         9tqMlDv0oPjrK3DiJCw9v8O3LxpwT1/p5AuXYwl6YebZrd5uJgbhuw32haRHqnT3VdED
         2x6fJRtX23+pxXFq1WaAhH7VfB7aKdN43mNNvbZrNLtzTq6Ewt5cpmJmLrzyzb1WXSOi
         Cu40tRbRV62koKEvBmXt+1Rq+kfW+jVo23dVTq+m+6tlokFZ7HN/TmvApqho3uf3eCUB
         rTjw==
X-Forwarded-Encrypted: i=1; AJvYcCXz+WfTgtIRGeDhqXC8LGCAI+0EpinAIf9QyQTu9YF2cguasq5BUMyjoYMxNK6OWfxv1UPKMnizEU/p/6heR8EilE7FxtXxLOll2RnL9+c=
X-Gm-Message-State: AOJu0YzL5UifRkZuHeGE6SkBLdb+//vT68sFziHkTDd+u1Qj6nBARPvW
	kceyWp+o44ttLa8ikfrKzyxYS0qn1qcfMQd74eVXXOvgx6J3KKFQoHfmEpR+wYMZeBarX8g8QCJ
	NB64n4+L7QtkV9h92mVqC0GsC0fBrylbmLzfKMUwuv//91mn2BfOJtsNbhzY5xxGA
X-Received: by 2002:a2e:908e:0:b0:2eb:ee64:1e19 with SMTP id 38308e7fff4ca-2ebee641fb3mr18566611fa.42.1718100418968;
        Tue, 11 Jun 2024 03:06:58 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IFTPfn39pFW+4UY1ixf2NXkOeipXmnEb8MezD4IOdN3tIhaG3lKM4bAlWBQ/Nnh1BIsuQ/pCw==
X-Received: by 2002:a2e:908e:0:b0:2eb:ee64:1e19 with SMTP id 38308e7fff4ca-2ebee641fb3mr18566441fa.42.1718100418548;
        Tue, 11 Jun 2024 03:06:58 -0700 (PDT)
Message-ID: <2ed64218-7f3b-4302-a5dc-27f060654fe2@redhat.com>
Date: Tue, 11 Jun 2024 12:06:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
To: linux-kernel@vger.kernel.org, Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 kasan-dev@googlegroups.com, Mike Rapoport <rppt@kernel.org>,
 Oscar Salvador <osalvador@suse.de>, "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <20240607090939.89524-2-david@redhat.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 07.06.24 11:09, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences in memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---

@Andrew, can you squash the following?

 From 0a7921cf21cacf178ca7485da0138fc38a97a28e Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Tue, 11 Jun 2024 12:05:09 +0200
Subject: [PATCH] fixup: mm/highmem: make nr_free_highpages() return "unsigned
  long"

Fixup the memblock comment.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
  mm/page_alloc.c | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index e0c8a8354be36..fc53f96db58a2 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1245,7 +1245,7 @@ void __free_pages_core(struct page *page, unsigned int order,
  		debug_pagealloc_map_pages(page, nr_pages);
  		adjust_managed_page_count(page, nr_pages);
  	} else {
-		/* memblock adjusts totalram_pages() ahead of time. */
+		/* memblock adjusts totalram_pages() manually. */
  		atomic_long_add(nr_pages, &page_zone(page)->managed_pages);
  	}
  
-- 
2.45.2



-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 10:20:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 10:20:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738244.1144932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGycH-0008HC-5O; Tue, 11 Jun 2024 10:20:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738244.1144932; Tue, 11 Jun 2024 10:20:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGycH-0008H5-1c; Tue, 11 Jun 2024 10:20:37 +0000
Received: by outflank-mailman (input) for mailman id 738244;
 Tue, 11 Jun 2024 10:20:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGycG-0008Gz-8O
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 10:20:36 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d8e234a-27dc-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 12:20:35 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6f1da33826so116198666b.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 03:20:35 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f0d20a616sm411470366b.30.2024.06.11.03.20.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 03:20:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d8e234a-27dc-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718101235; x=1718706035; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=a7M4NIhbu347nb7OvoNmNePCXq7KJFFbaaV9wMmnR7Y=;
        b=DKYdvYiLHauOvdXuUhWpqEo1El0qzqrgjQEKZWesbdfUXQP+0eZinl3v58SGsPOQPv
         wc7b3i95LgYOAnr1FjhzT/BdKivZ71y5X8RaefmZDSsVNFUxpBQZ0lA+7D+gdSbkCnyF
         EXuExp7ildsNkzYCdUNQSz4iUocWWp1OAR6IjtxMk0/9YUkSw6zSbuIOwHh7e9LAQ3cd
         Ml/DYdv1xMHiSOv5Mo+EtRxjLKJzA2qINoLXIID2c03HHgB9147COyEBcb6tMlPtTBcc
         E55RHVKkkA9daR4yXJYXQljv4BXQv7wpJArCzMUVZP1GfMCbKkZzDYKVoShisLd4suYQ
         DpOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718101235; x=1718706035;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=a7M4NIhbu347nb7OvoNmNePCXq7KJFFbaaV9wMmnR7Y=;
        b=o0oFv6LqMTeRE5Zzsj5/X5IgtPzLeqh2jsTeIasDArmyrcXZ1VjuzmrMX2sfSRlf41
         M63THHTrgMvFH0t/6RinFAqVLITs4gwFsBBSNXRwXYZDT5+dfa80QdlcYUfK1ISZSY35
         oGVlaAjr/i76lFJ4zqFV0qFuas+4M5HhTBFqboDgbV/0VZKSsmbZ+UK4NamufMUc8z5L
         a3OfhL25r8su6K7d6wFqQCCLxGoZfFCURo6ZsPZ5WBrCs/1UmJMlJdFww/mxGqxsbpkp
         5vW35PWIgFbM9A+AX/3lMYR0PB2nQ1oShiSJt4mhS69800lffwG7EWaSrwOlN8niU/UG
         HVlg==
X-Forwarded-Encrypted: i=1; AJvYcCVrSPhGTUTXRXTFLMe3YDXDBBmLQHNe0G5TkOms4++dysWFkENtekHKbB71rXWLbVk2fZG5RaLBZRfd03oqiswj5OuBs9ylpTHRWUUeGlQ=
X-Gm-Message-State: AOJu0YzsXsSVN9dGdO8GO1RsHII/WycIf04C4lsN3unaNzvIGbKBbaRZ
	SGbJj4t2D2RUkUQ30wMW2ENuLJYOelW28unjhoFT7jjkda6tMuzHgF9Jvohzbw==
X-Google-Smtp-Source: AGHT+IEiG0IN2v4mmTWyppDinQxyQygJB7SAFBygCuWSh3dfPP/JexRCm78SyhxtaMT0WBI00tv7/w==
X-Received: by 2002:a17:906:16d9:b0:a6f:a54:1598 with SMTP id a640c23a62f3a-a6f0a541702mr646991666b.49.1718101234586;
        Tue, 11 Jun 2024 03:20:34 -0700 (PDT)
Message-ID: <b2e8eed9-1df8-442b-ae7e-401c406eaef8@suse.com>
Date: Tue, 11 Jun 2024 12:20:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 4/7] x86/irq: restrict CPU movement in
 set_desc_affinity()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-5-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240610142043.11924-5-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2024 16:20, Roger Pau Monne wrote:
> If external interrupts are using logical mode it's possible to have an overlap
> between the current ->arch.cpu_mask and the provided mask (or TARGET_CPUS).  If
> that's the case avoid assigning a new vector and just move the interrupt to a
> member of ->arch.cpu_mask that overlaps with the provided mask and is online.

What I'm kind of missing here is an explanation of why what _assign_irq_vector()
does to avoid unnecessary migration (very first conditional there) isn't
sufficient.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -837,19 +837,38 @@ void cf_check irq_complete_move(struct irq_desc *desc)
>  
>  unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
>  {
> -    int ret;
> -    unsigned long flags;
>      cpumask_t dest_mask;
>  
>      if ( mask && !cpumask_intersects(mask, &cpu_online_map) )
>          return BAD_APICID;
>  
> -    spin_lock_irqsave(&vector_lock, flags);
> -    ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);
> -    spin_unlock_irqrestore(&vector_lock, flags);
> +    /*
> +     * mask input set can contain CPUs that are not online.  To decide whether
> +     * the interrupt needs to be migrated, restrict the input mask to the CPUs
> +     * that are online.
> +     */
> +    if ( mask )
> +        cpumask_and(&dest_mask, mask, &cpu_online_map);
> +    else
> +        cpumask_copy(&dest_mask, TARGET_CPUS);

Why once &cpu_online_map and once TARGET_CPUS?

> -    if ( ret < 0 )
> -        return BAD_APICID;
> +    /*
> +     * Only move the interrupt if there are no CPUs left in ->arch.cpu_mask
> +     * that can handle it, otherwise just shuffle it around ->arch.cpu_mask
> +     * to an available destination.
> +     */

"an available destination" (singular) gives the impression that it's only
ever going to be a single CPU. Yet cpu_mask_to_apicid_flat() and
cpu_mask_to_apicid_x2apic_cluster() can produce sets of multiple CPUs.
Therefore maybe "an available destination / the (sub)set of available
destinations"? Or as that's getting longish "(an) available destination(s)"?

> +    if ( !cpumask_intersects(desc->arch.cpu_mask, &dest_mask) )
> +    {
> +        int ret;
> +        unsigned long flags;
> +
> +        spin_lock_irqsave(&vector_lock, flags);
> +        ret = _assign_irq_vector(desc, mask ?: TARGET_CPUS);

Why not pass dest_mask here, as you now calculate that up front? The
function will skip offline CPUs anyway.

> @@ -862,6 +881,7 @@ unsigned int set_desc_affinity(struct irq_desc *desc, const cpumask_t *mask)
>          cpumask_copy(&dest_mask, desc->arch.cpu_mask);
>      }
>      cpumask_and(&dest_mask, &dest_mask, &cpu_online_map);
> +    ASSERT(!cpumask_empty(&dest_mask));
>  
>      return cpu_mask_to_apicid(&dest_mask);

I wonder whether the assertion wouldn't better live in cpu_mask_to_apicid()
itself (the macro, not the backing functions).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 10:37:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Message-ID: <6a255f3dccc609e680659ed05b613c21a33cfb20.camel@gmail.com>
Subject: Re: [XEN PATCH v6 0/7] FF-A notifications
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall
 <julien@xen.org>
Cc: Jens Wiklander <jens.wiklander@linaro.org>, Xen-devel
 <xen-devel@lists.xenproject.org>, "patches@linaro.org"
 <patches@linaro.org>,  Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Michal Orzel
 <michal.orzel@amd.com>
Date: Tue, 11 Jun 2024 12:36:54 +0200
In-Reply-To: <8558AEB5-2F38-4F8C-A017-794E32045068@arm.com>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
	 <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com>
	 <615f1766-253d-43dc-b0f0-f8e2eb7360b5@xen.org>
	 <8558AEB5-2F38-4F8C-A017-794E32045068@arm.com>

Hi Bertrand and Julien,

On Tue, 2024-06-11 at 07:09 +0000, Bertrand Marquis wrote:
> Hi Julien and Oleksii,
> 
> @Oleksii: Could we consider having this series merged for the next
> release?
We can consider including it in Xen 4.19 as it has a low impact on
existing systems and needs to be explicitly activated:
 Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> 
> It is a feature that is in tech preview at the moment and as such
> should not have any consequences on existing systems unless it is
> activated explicitly in the Xen configuration.
> 
> There are some changes in the Arm common code introducing support for
> registering SGI interrupt handlers in drivers. As no driver apart from
> FF-A tries to register such handlers, the risk to existing systems is
> minimal.
> 
> 
> > On 10 Jun 2024, at 22:38, Julien Grall <julien@xen.org> wrote:
> > 
> > Hi Bertrand,
> > 
> > On 10/06/2024 16:54, Bertrand Marquis wrote:
> > > Hi Jens,
> > > > On 10 Jun 2024, at 08:53, Jens Wiklander
> > > > <jens.wiklander@linaro.org> wrote:
> > > > 
> > > > Hi,
> > > > 
> > > > This patch set adds support for FF-A notifications. We only
> > > > support global notifications; per-vCPU notifications remain
> > > > unsupported.
> > > > 
> > > > The first three patches are further cleanup and can be merged
> > > > before the rest if desired.
> > > > 
> > > > A physical SGI is used to make Xen aware of pending FF-A
> > > > notifications. The physical SGI is selected by the SPMC in the
> > > > secure world. Since it must not already be used by Xen, the
> > > > SPMC is in practice forced to donate one of the secure SGIs,
> > > > but that's normally not a problem. The SGI handling in Xen is
> > > > updated to support registration of handlers for SGIs that
> > > > aren't statically assigned, that is, SGI IDs above GIC_SGI_MAX.
> > > > 
> > > > The patch "xen/arm: add and call init_tee_secondary()" provides
> > > > a hook for registering the needed per-CPU interrupt handler in
> > > > "xen/arm: ffa: support notification".
> > > > 
> > > > The patch "xen/arm: add and call tee_free_domain_ctx()"
> > > > provides a hook for later freeing of the TEE context. This hook
> > > > is used in "xen/arm: ffa: support notification" and avoids the
> > > > problem of the TEE context being freed while we still need to
> > > > access it when handling a Schedule Receiver interrupt. It was
> > > > suggested as an alternative in [1] that the TEE context could
> > > > be freed from complete_domain_destroy().
> > > > 
> > > > [1] https://lore.kernel.org/all/CAHUa44H4YpoxYT7e6WNH5XJFpitZQjqP9Ng4SmTy4eWhyN+F+w@mail.gmail.com/
> > > > Thanks,
> > > > Jens
> > > All patches are now reviewed and/or acked, so I think they can
> > > get in for the release.
> > 
> > This would need a release-ack from Oleksii (I can't seem to find
> > one already).
> 
> You are right, I do not know why I thought we already had one.
> 
> > 
> > As we discussed last week, I am fine with the idea of merging the
> > FF-A patches as the feature is tech preview. But there are some
> > changes in the Arm generic code. Do you (or Jens) have an
> > assessment of the risk of the changes?
> 
> Agreed.
> 
> In my view, the changes alter the behaviour of some internal
> functions if an interrupt handler is registered for an SGI, but
> should not have any impact on other kinds of interrupts.
> As there was nothing before the FF-A driver registering such
> interrupts, the risk of the changes having any impact on existing
> configurations not activating FF-A is fairly low.
> 
> @Jens: do you agree with my analysis?
> 
> I made a text for Oleksii at the beginning of the mail.
> 
> Cheers
> 
> Bertrand
> 
> > 
> > Cheers,
> > 
> > -- 
> > Julien Grall
> 
> 



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 10:38:51 2024
Message-ID: <fcfe1bb749473920a72858ee1cbbb443ca059a09.camel@gmail.com>
Subject: Re: [PATCH for-4.19 v1] automation: add a test for HVM domU on PVH
 dom0
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Marek
 Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
 xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>, Stefano Stabellini
	 <sstabellini@kernel.org>
Date: Tue, 11 Jun 2024 12:38:47 +0200
In-Reply-To: <67a6fc3a-bcc3-48e8-beb8-b3c05217083c@citrix.com>
References: <20240610133210.724346-1-marmarek@invisiblethingslab.com>
	 <67a6fc3a-bcc3-48e8-beb8-b3c05217083c@citrix.com>

On Mon, 2024-06-10 at 16:25 +0100, Andrew Cooper wrote:
> On 10/06/2024 2:32 pm, Marek Marczykowski-Górecki wrote:
> > This tests whether QEMU works in a PVH dom0. QEMU in dom0 requires
> > enabling TUN in the kernel, so do that too.
> > 
> > Add it to both x86 runners, similar to the PVH domU test.
> > 
> > Signed-off-by: Marek Marczykowski-Górecki
> > <marmarek@invisiblethingslab.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> CC Oleksii.
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> 
> > ---
> > Requires rebuilding test-artifacts/kernel/6.1.19
> 
> OK.
> 
> But on a tangent, shouldn't that move forwards somewhat?
> 
> > 
> > I'm actually not sure there is a point in testing HVM domU on both
> > runners, when the PVH domU variant is already tested on both. Are
> > there any differences between Intel and AMD relevant for QEMU in
> > dom0?
> 
> It's not just QEMU, it's also HVMLoader, and the particulars of
> VT-x/SVM VMExit decode information in order to generate ioreqs.
> 
> I'd firmly suggest having both.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 10:40:40 2024
Message-ID: <8d75d6983d3a9e1d98c6a3739a8cea2e9bbad85a.camel@gmail.com>
Subject: Re: [PATCH for-4.19] x86/EPT: relax iPAT for "invalid" MFNs
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monné
 <roger.pau@citrix.com>
Date: Tue, 11 Jun 2024 12:40:36 +0200
In-Reply-To: <dcdb2217-d3be-4cfa-b698-d18bdfdd91e3@suse.com>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
	 <dcdb2217-d3be-4cfa-b698-d18bdfdd91e3@suse.com>

On Mon, 2024-06-10 at 17:00 +0200, Jan Beulich wrote:
> On 10.06.2024 16:58, Jan Beulich wrote:
> > mfn_valid() is RAM-focused; it will often return false for MMIO.
> > Yet access to actual MMIO space should not generally be restricted
> > to UC only; especially video frame buffer accesses are unduly
> > affected by such a restriction. Permit PAT use for directly
> > assigned MMIO as long as the domain is known to have been granted
> > some level of cache control.
> > 
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> > ---
> > Considering that we've just declared PVH Dom0 "supported", this may
> > well qualify for 4.19. The issue was specifically very noticeable
> > there.
> 
> Actually - I meant to Cc Oleksii for this, and then forgot.
> 
> Jan
> 
> > The conditional may be more complex than really necessary, but it's
> > in line with what we do elsewhere. And IMO it's better to continue
> > being a little too restrictive than to move to being too lax.
> > 
> > --- a/xen/arch/x86/mm/p2m-ept.c
> > +++ b/xen/arch/x86/mm/p2m-ept.c
> > @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
> >  
> >      if ( !mfn_valid(mfn) )
> >      {
> > -        *ipat = true;
> > +        *ipat = type != p2m_mmio_direct ||
> > +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
> >          return X86_MT_UC;
> >      }
> >  
> 



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 10:40:57 2024
Date: Tue, 11 Jun 2024 12:40:49 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Message-ID: <ZmgpsZJ4afLd1Fc3@macbook>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
In-Reply-To: <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>

On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
> In some cases, only a few registers on a page need to be
> write-protected. Examples include the USB3 console (64 bytes worth of
> registers) or MSI-X's PBA table (which doesn't need to span the whole
> table either), although in the latter case the spec forbids placing
> other registers on the same page. The current API allows only marking
> whole pages read-only, which sometimes may cover other registers that
> the guest may need to write to.
> 
> Currently, when a guest tries to write to an MMIO page on
> mmio_ro_ranges, it's either immediately crashed on an EPT violation
> (if it's HVM) or, if PV, it gets #PF. In the case of Linux PV, if the
> access was from userspace (e.g. /dev/mem), it will try to fix up by
> updating the page tables (which Xen will again force to read-only)
> and will hit that #PF again, looping endlessly. Both behaviours are
> undesirable if the guest could actually be allowed the write.
> 
> Introduce an API that allows marking part of a page read-only. Since
> sub-page permissions are not a thing in page tables (they are in EPT,
> but not granular enough), do this via emulation (or simply the page
> fault handler for PV) that handles writes that are supposed to be
> allowed. The new subpage_mmio_ro_add() takes a start physical address
> and the region size in bytes. Both the start address and the size
> need to be 8-byte aligned, as a practical simplification (this allows
> using a smaller bitmask, and smaller granularity isn't really
> necessary right now).
> It will internally add the relevant pages to mmio_ro_ranges, but if
> either the start or end address is not page-aligned, it additionally
> adds that page to a list for sub-page R/O handling. The list holds a
> bitmask of which qwords are supposed to be read-only and an address
> where the page is mapped for write emulation - this mapping is done
> only on the first access. A plain list is used instead of a more
> efficient structure, because there aren't supposed to be many pages
> needing this precise R/O control.
> 
> The mechanism this API is plugged in is slightly different for PV and
> HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
> it's already called for #PF on read-only MMIO page. For HVM however, EPT
> violation on p2m_mmio_direct page results in a direct domain_crash() for
> non hardware domains.  To reach mmio_ro_emulated_write(), change how
> write violations for p2m_mmio_direct are handled - specifically, check
> if they relate to such partially protected page via
> subpage_mmio_write_accept() and if so, call hvm_emulate_one_mmio() for
> them too. This decodes what guest is trying write and finally calls
> mmio_ro_emulated_write(). The EPT write violation is detected as
> npfec.write_access and npfec.present both being true (similar to other
> places), which may cover some other (future?) cases - if that happens,
> emulator might get involved unnecessarily, but since it's limited to
> pages marked with subpage_mmio_ro_add() only, the impact is minimal.
> Both of those paths need an MFN to which guest tried to write (to check
> which part of the page is supposed to be read-only, and where
> the page is mapped for writes). This information currently isn't
> available directly in mmio_ro_emulated_write(), but in both cases it is
> already resolved somewhere higher in the call tree. Pass it down to
> mmio_ro_emulated_write() via new mmio_ro_emulate_ctxt.mfn field.
> 
> This may give HVM guests a bit more access to the instruction
> emulator (the change in hvm_hap_nested_page_fault()), but only for
> pages explicitly marked with subpage_mmio_ro_add() - so, only if the
> guest has a passed-through device partially used by Xen.
> As of the next patch, this applies only to configurations explicitly
> documented as not security-supported.
> 
> The subpage_mmio_ro_add() function cannot be called with overlapping
> ranges, or on pages already added to mmio_ro_ranges separately.
> Successful calls result in correct handling, but error paths may
> leave incorrect state (like pages removed from mmio_ro_ranges too
> early). Debug builds have asserts for the relevant cases.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ---
> Shadow mode is not tested, but I don't expect it to behave
> differently from HAP in the areas related to this patch.
> 
> Changes in v4:
> - rename SUBPAGE_MMIO_RO_ALIGN to MMIO_RO_SUBPAGE_GRAN
> - guard subpage_mmio_write_accept with CONFIG_HVM, as it's used only
>   there
> - rename ro_qwords to ro_elems
> - use unsigned arguments for subpage_mmio_ro_remove_page()
> - use volatile for __iomem
> - do not set mmio_ro_ctxt.mfn for mmcfg case
> - comment where fields of mmio_ro_ctxt are used
> - use bool for result of __test_and_set_bit
> - do not open-code mfn_to_maddr()
> - remove leftover RCU
> - mention hvm_hap_nested_page_fault() explicitly in the commit message
> Changes in v3:
> - use unsigned int for loop iterators
> - use __set_bit/__clear_bit when under spinlock
> - avoid ioremap() under spinlock
> - do not cast away const
> - handle unaligned parameters in release build
> - comment fixes
> - remove RCU - the add functions are __init and actual usage is only
>   much later after domains are running
> - add checks for overlapping ranges in debug builds and document the
>   limitations
> - change subpage_mmio_ro_add() so the error path doesn't potentially
>   remove pages from mmio_ro_ranges
> - move printing message to avoid one goto in
>   subpage_mmio_write_emulate()
> Changes in v2:
> - Simplify subpage_mmio_ro_add() parameters
> - add to mmio_ro_ranges from within subpage_mmio_ro_add()
> - use ioremap() instead of caller-provided fixmap
> - use 8-bytes granularity (largest supported single write) and a bitmap
>   instead of a rangeset
> - clarify commit message
> - change how it's plugged in for HVM domain, to not change the behavior for
>   read-only parts (keep it hitting domain_crash(), instead of ignoring
>   write)
> - remove unused subpage_mmio_ro_remove()
> ---
>  xen/arch/x86/hvm/emulate.c      |   2 +-
>  xen/arch/x86/hvm/hvm.c          |   4 +-
>  xen/arch/x86/include/asm/mm.h   |  25 +++-
>  xen/arch/x86/mm.c               | 273 +++++++++++++++++++++++++++++++++-
>  xen/arch/x86/pv/ro-page-fault.c |   6 +-
>  5 files changed, 305 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index ab1bc516839a..e98513afc69b 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -2735,7 +2735,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>          .write      = mmio_ro_emulated_write,
>          .validate   = hvmemul_validate,
>      };
> -    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = { .cr2 = gla };
> +    struct mmio_ro_emulate_ctxt mmio_ro_ctxt = { .cr2 = gla, .mfn = _mfn(mfn) };
>      struct hvm_emulate_ctxt ctxt;
>      const struct x86_emulate_ops *ops;
>      unsigned int seg, bdf;
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 9594e0a5c530..73bbfe2bdc99 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2001,8 +2001,8 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>          goto out_put_gfn;
>      }
>  
> -    if ( (p2mt == p2m_mmio_direct) && is_hardware_domain(currd) &&
> -         npfec.write_access && npfec.present &&
> +    if ( (p2mt == p2m_mmio_direct) && npfec.write_access && npfec.present &&
> +         (is_hardware_domain(currd) || subpage_mmio_write_accept(mfn, gla)) &&
>           (hvm_emulate_one_mmio(mfn_x(mfn), gla) == X86EMUL_OKAY) )
>      {
>          rc = 1;
> diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/mm.h
> index 98b66edaca5e..d04cf2c4165e 100644
> --- a/xen/arch/x86/include/asm/mm.h
> +++ b/xen/arch/x86/include/asm/mm.h
> @@ -522,9 +522,34 @@ extern struct rangeset *mmio_ro_ranges;
>  void memguard_guard_stack(void *p);
>  void memguard_unguard_stack(void *p);
>  
> +/*
> + * Add more precise r/o marking for a MMIO page. Range specified here
> + * will still be R/O, but the rest of the page (not marked as R/O via another
> + * call) will have writes passed through.
> + * The start address and the size must be aligned to MMIO_RO_SUBPAGE_GRAN.
> + *
> + * This API cannot be used for overlapping ranges, nor for pages already added
> + * to mmio_ro_ranges separately.
> + *
> + * Since there is currently no subpage_mmio_ro_remove(), the relevant
> + * device should not be hot-unplugged.
> + *
> + * Return values:
> + *  - negative: error
> + *  - 0: success
> + */
> +#define MMIO_RO_SUBPAGE_GRAN 8
> +int subpage_mmio_ro_add(paddr_t start, size_t size);
> +#ifdef CONFIG_HVM
> +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
> +#endif
> +
>  struct mmio_ro_emulate_ctxt {
>          unsigned long cr2;
> +        /* Used only for mmcfg case */
>          unsigned int seg, bdf;
> +        /* Used only for non-mmcfg case */
> +        mfn_t mfn;
>  };
>  
>  int cf_check mmio_ro_emulated_write(
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index d968bbbc7315..dab7cc018c3f 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -150,6 +150,17 @@ bool __read_mostly machine_to_phys_mapping_valid;
>  
>  struct rangeset *__read_mostly mmio_ro_ranges;
>  
> +/* Handling sub-page read-only MMIO regions */
> +struct subpage_ro_range {
> +    struct list_head list;
> +    mfn_t mfn;
> +    void __iomem *mapped;
> +    DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
> +};
> +
> +static LIST_HEAD(subpage_ro_ranges);
> +static DEFINE_SPINLOCK(subpage_ro_lock);
> +
>  static uint32_t base_disallow_mask;
>  /* Global bit is allowed to be set on L1 PTEs. Intended for user mappings. */
>  #define L1_DISALLOW_MASK ((base_disallow_mask | _PAGE_GNTTAB) & ~_PAGE_GLOBAL)
> @@ -4910,6 +4921,265 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      return rc;
>  }
>  
> +/*
> + * Mark part of the page as R/O.
> + * Returns:
> + * - 0 on success - first range in the page
> + * - 1 on success - subsequent range in the page
> + * - <0 on error
> + *
> + * This needs subpage_ro_lock already taken.
> + */
> +static int __init subpage_mmio_ro_add_page(
> +    mfn_t mfn, unsigned int offset_s, unsigned int offset_e)

Nit: parameters here seem to be indented differently than below.

> +{
> +    struct subpage_ro_range *entry = NULL, *iter;
> +    unsigned int i;
> +
> +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> +    {
> +        if ( mfn_eq(iter->mfn, mfn) )
> +        {
> +            entry = iter;
> +            break;
> +        }
> +    }

AFAICT you could put the search logic into a separate function and use
it here, plus in subpage_mmio_ro_remove_page(),
subpage_mmio_write_emulate() and subpage_mmio_write_accept() possibly.

> +    if ( !entry )
> +    {
> +        /* iter == NULL marks it was a newly allocated entry */
> +        iter = NULL;
> +        entry = xzalloc(struct subpage_ro_range);
> +        if ( !entry )
> +            return -ENOMEM;
> +        entry->mfn = mfn;
> +    }
> +
> +    for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> +    {
> +        bool oldbit = __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRAN,
> +                                        entry->ro_elems);
> +        ASSERT(!oldbit);
> +    }
> +
> +    if ( !iter )
> +        list_add(&entry->list, &subpage_ro_ranges);
> +
> +    return iter ? 1 : 0;
> +}
> +
> +/* This needs subpage_ro_lock already taken */
> +static void __init subpage_mmio_ro_remove_page(
> +    mfn_t mfn,
> +    unsigned int offset_s,
> +    unsigned int offset_e)
> +{
> +    struct subpage_ro_range *entry = NULL, *iter;
> +    unsigned int i;
> +
> +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> +    {
> +        if ( mfn_eq(iter->mfn, mfn) )
> +        {
> +            entry = iter;
> +            break;
> +        }
> +    }
> +    if ( !entry )
> +        return;
> +
> +    for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> +        __clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> +
> +    if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN) )
> +        return;
> +
> +    list_del(&entry->list);
> +    if ( entry->mapped )
> +        iounmap(entry->mapped);
> +    xfree(entry);
> +}
> +
> +int __init subpage_mmio_ro_add(
> +    paddr_t start,
> +    size_t size)
> +{
> +    mfn_t mfn_start = maddr_to_mfn(start);
> +    paddr_t end = start + size - 1;
> +    mfn_t mfn_end = maddr_to_mfn(end);
> +    unsigned int offset_end = 0;
> +    int rc;
> +    bool subpage_start, subpage_end;
> +
> +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> +        size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> +
> +    if ( !size )
> +        return 0;
> +
> +    if ( mfn_eq(mfn_start, mfn_end) )
> +    {
> +        /* Both starting and ending parts handled at once */
> +        subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> +        subpage_end = false;

Given the intended usage of this, don't we want to limit to only a
single page?  So that PFN_DOWN(start + size) == PFN_DOWN(start), as
that would simplify the logic here?

Mostly asking because I think for the usage of XHCI the registers that
need to be marked RO are all inside the same page, and hence would
like to avoid introducing logic to handle multipage ranges if that's
not tested at all.

> +    }
> +    else
> +    {
> +        subpage_start = PAGE_OFFSET(start);
> +        subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
> +    }
> +
> +    spin_lock(&subpage_ro_lock);

Do you really need the lock if modifications can only happen during
init?  Xen initialization is single threaded, so you can likely avoid
the lock during boot.

> +
> +    if ( subpage_start )
> +    {
> +        offset_end = mfn_eq(mfn_start, mfn_end) ?
> +                     PAGE_OFFSET(end) :
> +                     (PAGE_SIZE - 1);
> +        rc = subpage_mmio_ro_add_page(mfn_start,
> +                                      PAGE_OFFSET(start),
> +                                      offset_end);
> +        if ( rc < 0 )
> +            goto err_unlock;
> +        /* Check if not marking R/W part of a page intended to be fully R/O */
> +        ASSERT(rc || !rangeset_contains_singleton(mmio_ro_ranges,
> +                                                  mfn_x(mfn_start)));

I think it would be better if this check was done ahead, and an error
was returned.  I see no point in delaying the check until the region
has already been registered.

> +    }
> +
> +    if ( subpage_end )
> +    {
> +        rc = subpage_mmio_ro_add_page(mfn_end, 0, PAGE_OFFSET(end));
> +        if ( rc < 0 )
> +            goto err_unlock_remove;
> +        /* Check if not marking R/W part of a page intended to be fully R/O */
> +        ASSERT(rc || !rangeset_contains_singleton(mmio_ro_ranges,
> +                                                  mfn_x(mfn_end)));
> +    }
> +
> +    spin_unlock(&subpage_ro_lock);
> +
> +    rc = rangeset_add_range(mmio_ro_ranges, mfn_x(mfn_start), mfn_x(mfn_end));
> +    if ( rc )
> +        goto err_remove;
> +
> +    return 0;
> +
> + err_remove:
> +    spin_lock(&subpage_ro_lock);
> +    if ( subpage_end )
> +        subpage_mmio_ro_remove_page(mfn_end, 0, PAGE_OFFSET(end));
> + err_unlock_remove:
> +    if ( subpage_start )
> +        subpage_mmio_ro_remove_page(mfn_start, PAGE_OFFSET(start), offset_end);
> + err_unlock:
> +    spin_unlock(&subpage_ro_lock);
> +    return rc;
> +}
> +
> +static void __iomem *subpage_mmio_map_page(
> +    struct subpage_ro_range *entry)
> +{
> +    void __iomem *mapped_page;
> +
> +    if ( entry->mapped )
> +        return entry->mapped;
> +
> +    mapped_page = ioremap(mfn_to_maddr(entry->mfn), PAGE_SIZE);
> +
> +    spin_lock(&subpage_ro_lock);
> +    /* Re-check under the lock */
> +    if ( entry->mapped )
> +    {
> +        spin_unlock(&subpage_ro_lock);
> +        if ( mapped_page )
> +            iounmap(mapped_page);
> +        return entry->mapped;
> +    }
> +
> +    entry->mapped = mapped_page;
> +    spin_unlock(&subpage_ro_lock);
> +    return entry->mapped;
> +}
> +
> +static void subpage_mmio_write_emulate(
> +    mfn_t mfn,
> +    unsigned int offset,
> +    const void *data,
> +    unsigned int len)
> +{
> +    struct subpage_ro_range *entry;
> +    volatile void __iomem *addr;
> +
> +    list_for_each_entry(entry, &subpage_ro_ranges, list)
> +    {
> +        if ( mfn_eq(entry->mfn, mfn) )
> +        {
> +            if ( test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems) )
> +            {
> + write_ignored:
> +                gprintk(XENLOG_WARNING,
> +                        "ignoring write to R/O MMIO 0x%"PRI_mfn"%03x len %u\n",
> +                        mfn_x(mfn), offset, len);
> +                return;
> +            }
> +
> +            addr = subpage_mmio_map_page(entry);

Given the very limited usage of this subpage RO infrastructure, I
would be tempted to just map the mfn when the page is registered, in
order to simplify the logic here.  The only use-case we have is XHCI,
and further usage of this are likely to be limited to similar hardware
that's shared between Xen and the hardware domain.

> +            if ( !addr )
> +            {
> +                gprintk(XENLOG_ERR,
> +                        "Failed to map page for MMIO write at 0x%"PRI_mfn"%03x\n",
> +                        mfn_x(mfn), offset);
> +                return;
> +            }
> +
> +            switch ( len )
> +            {
> +            case 1:
> +                writeb(*(const uint8_t*)data, addr);
> +                break;
> +            case 2:
> +                writew(*(const uint16_t*)data, addr);
> +                break;
> +            case 4:
> +                writel(*(const uint32_t*)data, addr);
> +                break;
> +            case 8:
> +                writeq(*(const uint64_t*)data, addr);
> +                break;
> +            default:
> +                /* mmio_ro_emulated_write() already validated the size */
> +                ASSERT_UNREACHABLE();
> +                goto write_ignored;
> +            }
> +            return;
> +        }
> +    }
> +    /* Do not print message for pages without any writable parts. */
> +}
> +
> +#ifdef CONFIG_HVM
> +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
> +{
> +    unsigned int offset = PAGE_OFFSET(gla);
> +    const struct subpage_ro_range *entry;
> +
> +    list_for_each_entry(entry, &subpage_ro_ranges, list)
> +        if ( mfn_eq(entry->mfn, mfn) &&
> +             !test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems) )
> +        {
> +            /*
> +             * We don't know the write size at this point yet, so it could be
> +             * an unaligned write, but accept it here anyway and deal with it
> +             * later.
> +             */
> +            return true;

For accesses that fall into the RO region, I think you need to accept
them here and just terminate them?  I see no point in propagating
them further in hvm_hap_nested_page_fault().

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 10:50:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 10:50:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Geert Uytterhoeven <geert@linux-m68k.org>, Richard Weinberger
	<richard@nod.at>, Philipp Reisner <philipp.reisner@linbit.com>, Lars
 Ellenberg <lars.ellenberg@linbit.com>,
	Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Alasdair Kergon
	<agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka
	<mpatocka@redhat.com>, Song Liu <song@kernel.org>, Yu Kuai
	<yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "linux-m68k@lists.linux-m68k.org"
	<linux-m68k@lists.linux-m68k.org>, "linux-um@lists.infradead.org"
	<linux-um@lists.infradead.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"dm-devel@lists.linux.dev" <dm-devel@lists.linux.dev>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-s390@vger.kernel.org" <linux-s390@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: Re: [PATCH 01/26] sd: fix sd_is_zoned
Thread-Topic: [PATCH 01/26] sd: fix sd_is_zoned
Thread-Index: AQHau78EzIfqHhndw06EBBro2gSZqbHCYtuA
Date: Tue, 11 Jun 2024 10:50:30 +0000
Message-ID: <b9b02f22-6835-4a9c-a1b7-790fa6c0afb5@wdc.com>
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-2-hch@lst.de>
In-Reply-To: <20240611051929.513387-2-hch@lst.de>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0

Looks good,
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 11:00:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 11:00:51 +0000
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:VC5oZi-x67odXBvIGFjupn0TbgU6vCLpLWyRLVC30JTQCeHZYpNDmQ>
    <xmx:VC5oZltvIq3OwZj7WWId2dYx27swdrwuK5OuZboouFYplYm3o5Cqzg>
    <xmx:VC5oZhHIf7pjCiVdMphYd0H5UOyehn1sGkljdXCAYx8S9BCkH1XlwA>
    <xmx:VC5oZsMGcdZI76Mn4s9XlxAMtiln6Y2o4dLDRMkWrPtVBKmlgiSnqQ>
    <xmx:VC5oZtK-2MbP5GZDff9iFm0hFBS72gRcfVCvayfotSwnz_Ll2P0_DeD2>
Feedback-ID: i1568416f:Fastmail
Date: Tue, 11 Jun 2024 13:00:32 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Message-ID: <ZmguUCL4Ggb66wxL@mail-itl>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
 <d2ce1c48-fd95-47f9-b821-8e01d5006e8e@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="Wol1ROn3KyGiQtoL"
Content-Disposition: inline
In-Reply-To: <d2ce1c48-fd95-47f9-b821-8e01d5006e8e@suse.com>


--Wol1ROn3KyGiQtoL
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 11 Jun 2024 13:00:32 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only

On Fri, Jun 07, 2024 at 09:01:25AM +0200, Jan Beulich wrote:
> On 22.05.2024 17:39, Marek Marczykowski-Górecki wrote:
> > --- a/xen/arch/x86/include/asm/mm.h
> > +++ b/xen/arch/x86/include/asm/mm.h
> > @@ -522,9 +522,34 @@ extern struct rangeset *mmio_ro_ranges;
> >  void memguard_guard_stack(void *p);
> >  void memguard_unguard_stack(void *p);
> >
> > +/*
> > + * Add more precise r/o marking for a MMIO page. Range specified here
> > + * will still be R/O, but the rest of the page (not marked as R/O via another
> > + * call) will have writes passed through.
> > + * The start address and the size must be aligned to MMIO_RO_SUBPAGE_GRAN.
> > + *
> > + * This API cannot be used for overlapping ranges, nor for pages already added
> > + * to mmio_ro_ranges separately.
> > + *
> > + * Since there is currently no subpage_mmio_ro_remove(), relevant device should
> > + * not be hot-unplugged.
>
> Yet there are no guarantees whatsoever. I think we should refuse
> hot-unplug attempts (not just here, but also e.g. for an EHCI
> controller that we use the debug feature of), but doing so would
> likely require coordination with Dom0. Nothing to be done right
> here, of course.
>
> > + * Return values:
> > + *  - negative: error
> > + *  - 0: success
> > + */
> > +#define MMIO_RO_SUBPAGE_GRAN 8
> > +int subpage_mmio_ro_add(paddr_t start, size_t size);
> > +#ifdef CONFIG_HVM
> > +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
> > +#endif
>
> I'd suggest to omit the #ifdef here. The declaration alone doesn't
> hurt, and the #ifdef harms readability (if only a bit).

Ok.


> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -150,6 +150,17 @@ bool __read_mostly machine_to_phys_mapping_valid;
> >
> >  struct rangeset *__read_mostly mmio_ro_ranges;
> >
> > +/* Handling sub-page read-only MMIO regions */
> > +struct subpage_ro_range {
> > +    struct list_head list;
> > +    mfn_t mfn;
> > +    void __iomem *mapped;
> > +    DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
> > +};
> > +
> > +static LIST_HEAD(subpage_ro_ranges);
>
> With modifications all happen from __init code, this likely wants
> to be LIST_HEAD_RO_AFTER_INIT() (which would need introducing, to
> parallel LIST_HEAD_READ_MOSTLY()).

Makes sense. And then I would be comfortable with dropping the spinlock
as Roger suggested.
I tried to make this API a bit more generic than I currently need, but
indeed it can be simplified for this particular use case.
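A possible shape for such a macro (just my guess at it, paralleling LIST_HEAD_READ_MOSTLY(); the list primitives below are minimal stand-ins, not the real xen/list.h):

```c
/* Minimal stand-ins for the list primitives (illustrative only,
 * not the real xen/list.h definitions). */
struct list_head { struct list_head *next, *prev; };
#define LIST_HEAD_INIT(name) { &(name), &(name) }

/* Guessed shape of LIST_HEAD_RO_AFTER_INIT(), paralleling
 * LIST_HEAD_READ_MOSTLY(): same initializer, but placing the head in
 * a section that is made read-only once init completes. */
#define __ro_after_init __attribute__((__section__(".data.ro_after_init")))
#define LIST_HEAD_RO_AFTER_INIT(name) \
    struct list_head name __ro_after_init = LIST_HEAD_INIT(name)

LIST_HEAD_RO_AFTER_INIT(subpage_ro_ranges);
```

The list head is still initialized to point at itself, exactly as with the existing macros; only the section placement changes.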

> > +int __init subpage_mmio_ro_add(
> > +    paddr_t start,
> > +    size_t size)
> > +{
> > +    mfn_t mfn_start = maddr_to_mfn(start);
> > +    paddr_t end = start + size - 1;
> > +    mfn_t mfn_end = maddr_to_mfn(end);
> > +    unsigned int offset_end = 0;
> > +    int rc;
> > +    bool subpage_start, subpage_end;
> > +
> > +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > +        size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
>
> I'm puzzled: You first check suitable alignment and then adjust size
> to have suitable granularity. Either it is a mistake to call the
> function with a bad size, or it is not. If it's a mistake, the
> release build alternative to the assertion would be to return an
> error. If it's not a mistake, the assertion ought to go away.
>
> If the assertion is to stay, then I'll further question why the
> other one doesn't also have release build safety fallback logic.

For some reason I read your earlier comment as a request to (try to)
continue safely in this case. But indeed an error is a better option; it
isn't supposed to happen anyway.
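For illustration, the error-returning variant could look like the sketch below (stand-in typedefs and macros instead of the Xen ones; the helper name is hypothetical and this is untested against the tree):

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-ins for the Xen definitions involved (illustrative only). */
typedef uint64_t paddr_t;
#define MMIO_RO_SUBPAGE_GRAN 8
#define IS_ALIGNED(x, a) (((x) & ((a) - 1)) == 0)
#define EINVAL 22

/* Reject misaligned callers with an error, instead of asserting and
 * then rounding the size up in release builds. */
static int subpage_mmio_ro_check_args(paddr_t start, size_t size)
{
    if ( !IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN) ||
         !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
        return -EINVAL;

    return 0;
}
```

This keeps release builds safe too: a bad caller gets -EINVAL rather than a silently adjusted range.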

> > +    if ( !size )
> > +        return 0;
> > +
> > +    if ( mfn_eq(mfn_start, mfn_end) )
> > +    {
> > +        /* Both starting and ending parts handled at once */
> > +        subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > +        subpage_end = false;
> > +    }
> > +    else
> > +    {
> > +        subpage_start = PAGE_OFFSET(start);
> > +        subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > +    }
>
> Since you calculate "end" before adjusting "size", the logic here
> depends on there being the assertion further up.
>
> Jan
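(As an aside, the start/end classification above can be exercised standalone; the sketch below is purely illustrative, with stand-in definitions for PAGE_SIZE/PAGE_OFFSET/PFN rather than the real Xen headers.)

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in definitions (illustrative only, not the Xen originals). */
typedef uint64_t paddr_t;
#define PAGE_SHIFT 12
#define PAGE_SIZE (1UL << PAGE_SHIFT)
#define PAGE_OFFSET(x) ((x) & (PAGE_SIZE - 1))
#define PFN(x) ((x) >> PAGE_SHIFT)

/* Classify whether the first and/or last page of [start, start+size)
 * is only partially covered by the range. */
static void classify(paddr_t start, size_t size,
                     bool *subpage_start, bool *subpage_end)
{
    paddr_t end = start + size - 1;

    if ( PFN(start) == PFN(end) )
    {
        /* Both starting and ending parts handled at once. */
        *subpage_start = PAGE_OFFSET(start) ||
                         PAGE_OFFSET(end) != PAGE_SIZE - 1;
        *subpage_end = false;
    }
    else
    {
        *subpage_start = PAGE_OFFSET(start);
        *subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
    }
}
```

A range exactly covering its only page reports neither subpage flag, which is the intent of the "handled at once" comment.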

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--Wol1ROn3KyGiQtoL
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZoLlAACgkQ24/THMrX
1yzr0Qf+K2Yv3CcKbg/CIDEiJbtsrpk1QksUKHST6p4+OQvLRvMC5OYq1pLe2l5o
92nBhuciOoCChB7Vrj809TBOP0pNRYanhiw+G6uS9AKEQL61iN3bcHN2JDPFR6x8
Af0AyQRaopkN8l2yTV7NXw2RGlSMYjcAEFOOu4g8QwJ0YyJExXyJN/59UGqVfwAr
21B4XG7ilGnszIoLb0rXKmu4ovKUhHzj5pgUbCuv/tsfnBj+Di4TbpjoCFjF9Rkd
ZPBJfN+Z54iXqC9Nz2ywDjttauLqFoZHUQKAA0Xw3aMMHypsOM46zM3rY3a1ZD8Q
pO/qjD8ly/EvBu9m62gEbhVEbOLHVA==
=aJAV
-----END PGP SIGNATURE-----

--Wol1ROn3KyGiQtoL--


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 11:08:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 11:08:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738289.1145001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzMf-0002TX-5w; Tue, 11 Jun 2024 11:08:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738289.1145001; Tue, 11 Jun 2024 11:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzMf-0002TQ-3E; Tue, 11 Jun 2024 11:08:33 +0000
Received: by outflank-mailman (input) for mailman id 738289;
 Tue, 11 Jun 2024 11:08:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sGzMe-0002TK-4m
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 11:08:32 +0000
Received: from mail-qv1-xf31.google.com (mail-qv1-xf31.google.com
 [2607:f8b0:4864:20::f31])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id eeac0481-27e2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 13:08:29 +0200 (CEST)
Received: by mail-qv1-xf31.google.com with SMTP id
 6a1803df08f44-6b08d661dbaso3853596d6.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 04:08:29 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b09131edcbsm1118256d6.25.2024.06.11.04.08.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 04:08:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eeac0481-27e2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718104108; x=1718708908; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=CLdfwikmmiK6vX/ivtXLz8rsFripeFRIuZ/WeVgWS3w=;
        b=LD+iZLWqq8RI0Ko1uwZg07W8npgPSV+uzt4fYNCCwul6G6Bh1WELkDmLnjb+FlG8BE
         yCn/dWuKQl5WL3/nkciXXaPq0qeAyysRWoc9itn6/Q2vFMRI6UZ7zaBBcI71SNj8Yvsw
         qF4eewAHNZnsAqKv0ypuGeqxLwCozKd3UB6Eg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718104108; x=1718708908;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=CLdfwikmmiK6vX/ivtXLz8rsFripeFRIuZ/WeVgWS3w=;
        b=ToevjBaUb1UWDUmX7aOJpAeBXfRZBZzlDbxwFxqa8QDhdqngF+VJEbYverJ6Iaodto
         UadRLrkFgWaYIyWIHnWwpBkOwFGksAMYEPBqRgj/ElsFvVr6IGO6PNcuuvku8mgq/c1V
         q94So7RfRIChTbJkO68/jaiDsVPmZh/Bs4+RRmBsNcN4+hm2CKpoQOkwjaEY0fpexn2S
         0NnjVu7aYCEgH8l3mpjru/a9odvPE/TFoAyoSnZgXxerUO6VVmYZHVgbLcf5WTX/GRZG
         OYqBRgjubw9e1DjH1L66drKpdcfYtT43G8BBhxqAsK0YxtHyJNoh2pW9xIjvhqZjPTi8
         Zgqw==
X-Gm-Message-State: AOJu0YwbZmzN5ZoulNWIk5aAyrtpo7rXR7gRLfKhIsS6oeF+6MAtJ1zM
	jBg6VOQ0rx34kQdsODLxGGywTza6Iqj3SSvmQIMjDnv0ZsEUKBrXLCa1Ja7il0Q=
X-Google-Smtp-Source: AGHT+IF6VTucfoGipLqrI7yQZdj4uyVvOoLM+nKEaSJe++0hZlY9/pTIIxVUPNNhMRqUEncy5J63uQ==
X-Received: by 2002:a05:6214:568d:b0:6b0:7f36:8ae4 with SMTP id 6a1803df08f44-6b089f41c3emr36419066d6.14.1718104108483;
        Tue, 11 Jun 2024 04:08:28 -0700 (PDT)
Date: Tue, 11 Jun 2024 13:08:26 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
Message-ID: <ZmgwKmcLDJDhIsl7@macbook>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook>
 <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook>
 <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>

On Tue, Jun 11, 2024 at 11:33:24AM +0200, Jan Beulich wrote:
> On 11.06.2024 11:02, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 10:26:32AM +0200, Jan Beulich wrote:
> >> On 11.06.2024 09:41, Roger Pau Monné wrote:
> >>> On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
> >>>> --- a/xen/arch/x86/mm/p2m-ept.c
> >>>> +++ b/xen/arch/x86/mm/p2m-ept.c
> >>>> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
> >>>>  
> >>>>      if ( !mfn_valid(mfn) )
> >>>>      {
> >>>> -        *ipat = true;
> >>>> +        *ipat = type != p2m_mmio_direct ||
> >>>> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
> >>>
> >>> Looking at this, shouldn't the !mfn_valid special case be removed, and
> >>> mfns without a valid page be processed normally, so that the guest
> >>> MTRR values are taken into account, and no iPAT is enforced?
> >>
> >> Such removal is what, in the post commit message remark, I'm referring to
> >> as "moving to too lax". Doing so might be okay, but will imo be hard to
> >> prove to be correct for all possible cases. Along these lines goes also
> >> that I'm adding the IOMMU-enabled and cache-flush checks: In principle
> >> p2m_mmio_direct should not be used when neither of these return true. Yet
> >> a similar consideration would apply to the immediately subsequent if().
> >>
> >> Removing this code would, in particular, result in INVALID_MFN getting a
> >> type of WB by way of the subsequent if(), unless the type there would
> >> also be p2m_mmio_direct (which, as said, it ought to never be for non-
> >> pass-through domains). That again _may_ not be a problem as long as such
> >> EPT entries would never be marked present, yet that's again difficult to
> >> prove.
> > 
> > My understanding is that the !mfn_valid() check was a way to detect
> > MMIO regions in order to exit early and set those to UC.  I however
> > don't follow why the guest MTRR settings shouldn't also be applied to
> > those regions.
> 
> It's unclear to me whether the original purpose of the check really was
> (just) MMIO. It could as well also have been to cover the (then not yet
> named that way) case of INVALID_MFN.
> 
> As to ignoring guest MTRRs for MMIO: I think that's to be on the safe
> side. We don't want guests to map uncachable memory with a cachable
> memory type. Yet control isn't fine grained enough to prevent just
> that. Hence why we force UC, allowing merely to move to WC via PAT.

Would that be to cover up for guest bugs, or is there a coherency
reason for not allowing guests to access memory using fully guest
chosen cache attributes?

I really wonder whether Xen has enough information to figure out
whether a hole (MMIO region) is supposed to be accessed as UC or
something else.

Your proposed patch already allows guests to set such attributes in
PAT, and hence I don't see why also taking guest MTRRs into account
would be any worse.

> > I'm also confused by your comment about "as such EPT entries would
> > never be marked present": non-present EPT entries don't even get into
> > epte_get_entry_emt(), and hence we could assert in epte_get_entry_emt
> > that mfn != INVALID_MFN?
> 
> I don't think we can. Especially for the call from ept_set_entry() I
> can't spot anything that would prevent the call for non-present entries.
> This may be a mistake, but I can't do anything about it right here.

Hm, I see, then we should explicitly handle INVALID_MFN in
epte_get_entry_emt(), and just return early.
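i.e. something along these lines (purely illustrative sketch with stand-in types and a made-up helper name; the real function obviously carries much more context):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins (illustrative only): mfn_t and the UC memory type. */
typedef struct { uint64_t m; } mfn_t;
#define INVALID_MFN ((mfn_t){ ~0ULL })
#define MTRR_TYPE_UNCACHABLE 0

static bool mfn_eq(mfn_t a, mfn_t b) { return a.m == b.m; }

/* Early-exit guard: handle INVALID_MFN explicitly before any other
 * classification, forcing iPAT and UC for such entries. */
static int emt_early_check(mfn_t mfn, bool *ipat)
{
    if ( mfn_eq(mfn, INVALID_MFN) )
    {
        *ipat = true;
        return MTRR_TYPE_UNCACHABLE;
    }

    return -1; /* no early answer; continue with normal classification */
}
```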

> >>> I also think this likely wants a:
> >>>
> >>> Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
> >>
> >> Oh, indeed, I should have dug out when this broke. I didn't because I
> >> knew this mfn_valid() check was there forever, neglecting that it wasn't
> >> always (almost) first.
> >>
> >>> As AFAICT before that commit direct MMIO regions would set iPAT to WB,
> >>> which would result in the correct attributes (albeit guest MTRR was
> >>> still ignored).
> >>
> >> Two corrections here: First iPAT is a boolean; it can't be set to WB.
> >> And then what was happening prior to that change was that for the APIC
> >> access page iPAT was set to true, thus forcing WB there. iPAT was left
> >> set to false for all other p2m_mmio_direct pages, yielding (PAT-
> >> overridable) UC there.
> > 
> > Right, that behavior was still dubious to me, as I would assume those
> > regions would also want to fetch the type from guest MTRRs.
> 
> Well, for the APIC access page we want to prevent it becoming UC. It's MMIO
> from the guest's perspective, yet _we_ know it's really ordinary RAM. For
> actual MMIO see above; the only case where we probably ought to respect
> guest MTRRs is when they say WC (following from what I said further up).
> Yet that's again an independent change to (possibly) make.

For emulated devices we might map regular RAM into what the guest
otherwise thinks is MMIO.  Maybe the mfn_valid() check should be
inverted, and return WB when the underlying mfn is RAM, and otherwise
use the guest MTRRs to decide the cache attribute?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 11:26:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 11:26:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738298.1145012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzdx-0000Wo-Ro; Tue, 11 Jun 2024 11:26:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738298.1145012; Tue, 11 Jun 2024 11:26:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzdx-0000Wh-O2; Tue, 11 Jun 2024 11:26:25 +0000
Received: by outflank-mailman (input) for mailman id 738298;
 Tue, 11 Jun 2024 11:26:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sGzdw-0000Wb-DU
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 11:26:24 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6eb6af78-27e5-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 13:26:23 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6ef46d25efso435319366b.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 04:26:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1226c190sm362513566b.93.2024.06.11.04.26.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 04:26:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6eb6af78-27e5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718105182; x=1718709982; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=PLBOlfmpvygct6ZbMMhgVvalzCMpGnzPOX1hgd8wYU4=;
        b=J3th1U/CBLCHf6hlBLPKKUPs3B4Qy7hYKsaMNdSMgmK+8RkjwWqogm86fE5REzrXg6
         BKPKfJy5xGcHv50EkgADnLtqQPa0bdSrEPsexyCIK+QcFmro1dGZbumqW9xPgSwFuvRJ
         CQhRFuMEUHQ7FpWo3GoPhBedyd5sfajonxiF8wa2z6WNJqoVymTdHdeY8FrWrmO7wdTP
         IWj8QAkKy46KmeRKfCP7vue5cI8CI2rMu/jBHtsMvmm42LJPHS8JzgU65eRxIasIwZk0
         xCFsooP4VxsGPS0Hr5q4iSryE/hK7TM3xxe5nxaVPbEhOGK8EIwbiejUdsB39ISE/z2X
         LhOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718105182; x=1718709982;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PLBOlfmpvygct6ZbMMhgVvalzCMpGnzPOX1hgd8wYU4=;
        b=JAPzOLePrb26JVDiYuVVCAsGCLWKL7vcjABbj0LhZgSo8IM6IMwsCIVTaztaqrB2MC
         LBEIDq6Wj1dh3xeKR5e0ZLzX2Fs8dHexktVYYifuXz/JNcWSOZZO3KKgoZPVkxUchDPi
         LJzR9Qo2oA/eOJvTLSTZs7xSbEyJG8bvLzRUieVD4kXiXIlyzVYLRq2z/W3pp9Hc5gTC
         qG16DMYQVtefLkRKmoNX+UQrURlV8A0yRfH6SdD0x7l20pSSA8P7x8tZuAMk3ulIATrg
         MVWpiP5UruA0dQt1DvPFV8awgrD3tKb/QLAryyfytbiVJsZY8k6KWb1OLZOoKN0sYchR
         vm7A==
X-Gm-Message-State: AOJu0Yy65UdPkb+c/mpZNCMuoh/2AmNknFICcCuc10e8UgXNBhamVve5
	09PReXXBJjfpf8GZkuRF7TYx3ExLIJ+ca4J25W+jmSyP5dbXazGD7VciYaxdX5mpJ1Mc3z6BI+M
	=
X-Google-Smtp-Source: AGHT+IHKne9GF30+gAF2mcTow4+gjbglsJsEu8b2Nr4/uxawWMpTDrwzZxIMr46w/XdUIYRMt9QmGQ==
X-Received: by 2002:a17:906:b190:b0:a6e:f6b0:66cc with SMTP id a640c23a62f3a-a6ef6b06d15mr614292866b.18.1718105182577;
        Tue, 11 Jun 2024 04:26:22 -0700 (PDT)
Message-ID: <a1281966-3f3b-4dbc-aa98-0cabbfc4e16a@suse.com>
Date: Tue, 11 Jun 2024 13:26:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
 <ZmgpsZJ4afLd1Fc3@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmgpsZJ4afLd1Fc3@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 12:40, Roger Pau Monné wrote:
> On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
>> +int __init subpage_mmio_ro_add(
>> +    paddr_t start,
>> +    size_t size)
>> +{
>> +    mfn_t mfn_start = maddr_to_mfn(start);
>> +    paddr_t end = start + size - 1;
>> +    mfn_t mfn_end = maddr_to_mfn(end);
>> +    unsigned int offset_end = 0;
>> +    int rc;
>> +    bool subpage_start, subpage_end;
>> +
>> +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
>> +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
>> +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
>> +        size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
>> +
>> +    if ( !size )
>> +        return 0;
>> +
>> +    if ( mfn_eq(mfn_start, mfn_end) )
>> +    {
>> +        /* Both starting and ending parts handled at once */
>> +        subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
>> +        subpage_end = false;
> 
> Given the intended usage of this, don't we want to limit to only a
> single page?  So that PFN_DOWN(start + size) == PFN_DOWN(start), as
> that would simplify the logic here?
> 
> Mostly asking because I think for the usage of XHCI the registers that
> need to be marked RO are all inside the same page, and hence would
> like to avoid introducing logic to handle multipage ranges if that's
> not tested at all.
> 
>> +    }
>> +    else
>> +    {
>> +        subpage_start = PAGE_OFFSET(start);
>> +        subpage_end = PAGE_OFFSET(end) != PAGE_SIZE - 1;
>> +    }
>> +
>> +    spin_lock(&subpage_ro_lock);
> 
> Do you really need the lock if modifications can only happen during
> init?  Xen initialization is single threaded, so you can likely avoid
> the lock during boot.

I was wondering the same, but then concluded the locking here is for
the sake of completeness, not because it's strictly needed.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 11:39:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 11:39:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738304.1145022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzpr-0004PP-SQ; Tue, 11 Jun 2024 11:38:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738304.1145022; Tue, 11 Jun 2024 11:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzpr-0004PI-PY; Tue, 11 Jun 2024 11:38:43 +0000
Received: by outflank-mailman (input) for mailman id 738304;
 Tue, 11 Jun 2024 11:38:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H2Xh=NN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sGzpp-0004Nx-Vh
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 11:38:42 +0000
Received: from fhigh6-smtp.messagingengine.com
 (fhigh6-smtp.messagingengine.com [103.168.172.157])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 25320fa4-27e7-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 13:38:39 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 3639E1140197;
 Tue, 11 Jun 2024 07:38:38 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Tue, 11 Jun 2024 07:38:38 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 11 Jun 2024 07:38:36 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25320fa4-27e7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718105918;
	 x=1718192318; bh=9jk86Gh47kEw/xgU70UqKZayQAMG0fS3Qjj5di27MrM=; b=
	h+NARRE8nOY27f6QsUSgeVOQhUS2r6B7U4Dxm5qBezpzCO5YWNGfDjMA/J/iiltU
	wAus8q0+OlJWovitFE/1Xf6nsXq5a64XWiuztneOgK28pNNrfjEGz4okiiSaz/Ik
	b8o3zvqbicxV0c1lz9rCkxCZYCyvZohdqMwX0Iyl3c2Fow3OX86FReZyPS5ED2Go
	kGsBiYV6GjHNOxO02V2IA/hyB63KqP3Eg8vbcoiW3YDvhfdCCFeMOLJHpIOi5LZI
	4JqpZIg+8kl52OBsUOi45d8g0G4lna9Jqk2E99an3cxyp2LI76ewjs6U48jAAhtk
	UKrIM7oI6TAIEi1gmG5MLQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718105918; x=1718192318; bh=9jk86Gh47kEw/xgU70UqKZayQAMG
	0fS3Qjj5di27MrM=; b=b8OAY5Aqca4Eni1QQNp0c2PlWokJ3+GUPHcx/Z14JfIJ
	+8CjRajTT49s69uvrRu7z8G0pb16jfV6eh4K6bB7R+Xthu7Enw8U+zmMYSc7bAuI
	jMZyDksbsQJ+mhqHgsP2t3FlO0LlMW/Nzqc96kg+s3Po1ZUq3a++2dl275GGTSgX
	KfD/H/WqYdpwQ8p5YScviRcD3aHCZzZfjDQJ7Pvl4P1kRKIxQQMO5TPJywSfKo1X
	NbnweeMlso8LdtHUWeIciz1Ao8zAG/23N7JpG0J44SlFAd5iGzb83K/kKrlQPwa+
	RXCEaPkX62TJtamEScjkMR6J3YacIv2s6VbvLkuSOw==
X-ME-Sender: <xms:PTdoZu_z2TPjBX4lUe5hNpU93x0rWy5J5uSVGmqQwBMkYQOCjgAJqA>
    <xme:PTdoZuuyP08BhANmRG0RbUx0lMCgZpS8Wzh-o8iQDEmlLvrdqhIVbsV5y0STw1BFU
    r2o94ESfpMRKA>
X-ME-Received: <xmr:PTdoZkCo4qrVKQFQKqI8P5V-h34NragGlmeSsof5hg27GdaFm_B-YUVFH25zZo52uEldes6y4FOu50iJD-fFgRr63CxVXCZYLA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeduvddggedvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:PTdoZmePjkUXybgNNxMZNEPL1DaDOoW1j1S1sJ2Fs0IUbRbO-9KkWQ>
    <xmx:PTdoZjNdwXNKbfJp2OppJCgfFk37zSO4kFgxSg9LS8QHLQeYj9adRw>
    <xmx:PTdoZgkd-6h4cnUGqNCSKNuWaubrvj_nZmQBTCNIrj-NpDwFsCPQzA>
    <xmx:PTdoZlsE65NtsNe1SEQyvMRunci0rpBbDpphY1kRwuV0liyqUNtAsg>
    <xmx:PjdoZsq6XG4OTVbVQAyI3ppl9eMplRdUOegDr4Oy9ArlHVZwu1OZiRqK>
Feedback-ID: i1568416f:Fastmail
Date: Tue, 11 Jun 2024 13:38:35 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Message-ID: <Zmg3O7zvd9KBC1Fv@mail-itl>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
 <ZmgpsZJ4afLd1Fc3@macbook>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="UvpSW2j5ZN71EM0u"
Content-Disposition: inline
In-Reply-To: <ZmgpsZJ4afLd1Fc3@macbook>


--UvpSW2j5ZN71EM0u
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 11 Jun 2024 13:38:35 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only

On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monn=C3=A9 wrote:
> On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-G=C3=B3recki=
 wrote:
> > In some cases, only a few registers on a page need to be write-protected.
> > Examples include USB3 console (64 bytes worth of registers) or MSI-X's
> > PBA table (which doesn't need to span the whole table either), although
> > in the latter case the spec forbids placing other registers on the same
> > page. The current API allows only marking whole pages read-only, which
> > sometimes may cover other registers that the guest may need to write
> > into.
> >=20
> > Currently, when a guest tries to write to an MMIO page on the
> > mmio_ro_ranges, it's either immediately crashed on EPT violation - if
> > that's HVM, or if PV, it gets #PF. In the case of Linux PV, if the
> > access was from userspace (e.g. via /dev/mem), it will try to fix it up
> > by updating the page tables (which Xen will again force to read-only)
> > and will hit that #PF again (looping endlessly). Both behaviors are
> > undesirable if the guest could actually be allowed the write.
> >=20
> > Introduce an API that allows marking part of a page read-only. Since
> > sub-page permissions are not a thing in page tables (they are in EPT,
> > but not granular enough), do this via emulation (or simply page fault
> > handler for PV) that handles writes that are supposed to be allowed.
> > The new subpage_mmio_ro_add() takes a start physical address and the
> > region size in bytes. Both start address and the size need to be 8-byte
> > aligned, as a practical simplification (allows using smaller bitmask,
> > and a smaller granularity isn't really necessary right now).
> > It will internally add relevant pages to mmio_ro_ranges, but if either
> > the start or end address is not page-aligned, it additionally adds that
> > page to a list for sub-page R/O handling. The list holds a bitmask of
> > which qwords are supposed to be read-only and an address where the page
> > is mapped for write emulation - this mapping is done only on the first
> > access. A plain list is used instead of a more efficient structure,
> > because there aren't supposed to be many pages needing this precise
> > r/o control.
> >=20
> > The mechanism this API is plugged in is slightly different for PV and
> > HVM. For both paths, it's plugged into mmio_ro_emulated_write(). For PV,
> > it's already called for #PF on read-only MMIO page. For HVM however, EPT
> > violation on p2m_mmio_direct page results in a direct domain_crash() for
> > non hardware domains.  To reach mmio_ro_emulated_write(), change how
> > write violations for p2m_mmio_direct are handled - specifically, check
> > if they relate to such partially protected page via
> > subpage_mmio_write_accept() and if so, call hvm_emulate_one_mmio() for
> > them too. This decodes what the guest is trying to write and finally calls
> > mmio_ro_emulated_write(). The EPT write violation is detected as
> > npfec.write_access and npfec.present both being true (similar to other
> > places), which may cover some other (future?) cases - if that happens,
> > emulator might get involved unnecessarily, but since it's limited to
> > pages marked with subpage_mmio_ro_add() only, the impact is minimal.
> > Both of those paths need an MFN to which guest tried to write (to check
> > which part of the page is supposed to be read-only, and where
> > the page is mapped for writes). This information currently isn't
> > available directly in mmio_ro_emulated_write(), but in both cases it is
> > already resolved somewhere higher in the call tree. Pass it down to
> > mmio_ro_emulated_write() via new mmio_ro_emulate_ctxt.mfn field.
> >=20
> > This may give HVM guests a bit more access to the instruction emulator
> > (the change in hvm_hap_nested_page_fault()), but only for pages
> > explicitly marked with subpage_mmio_ro_add() - so, only if the guest
> > has a passed-through device partially used by Xen.
> > As of the next patch, it applies only to a configuration explicitly
> > documented as not security supported.
> >=20
> > The subpage_mmio_ro_add() function cannot be called with overlapping
> > ranges, and on pages already added to mmio_ro_ranges separately.
> > Successful calls would result in correct handling, but error paths may
> > result in incorrect state (like pages removed from mmio_ro_ranges too
> > early). Debug build has asserts for relevant cases.
> >=20
> > Signed-off-by: Marek Marczykowski-G=C3=B3recki <marmarek@invisiblething=
slab.com>
> > ---
> > Shadow mode is not tested, but I don't expect it to work differently th=
an
> > HAP in areas related to this patch.
> >=20
> > Changes in v4:
> > - rename SUBPAGE_MMIO_RO_ALIGN to MMIO_RO_SUBPAGE_GRAN
> > - guard subpage_mmio_write_accept with CONFIG_HVM, as it's used only
> >   there
> > - rename ro_qwords to ro_elems
> > - use unsigned arguments for subpage_mmio_ro_remove_page()
> > - use volatile for __iomem
> > - do not set mmio_ro_ctxt.mfn for mmcfg case
> > - comment where fields of mmio_ro_ctxt are used
> > - use bool for result of __test_and_set_bit
> > - do not open-code mfn_to_maddr()
> > - remove leftover RCU
> > - mention hvm_hap_nested_page_fault() explicitly in the commit message
> > Changes in v3:
> > - use unsigned int for loop iterators
> > - use __set_bit/__clear_bit when under spinlock
> > - avoid ioremap() under spinlock
> > - do not cast away const
> > - handle unaligned parameters in release build
> > - comment fixes
> > - remove RCU - the add functions are __init and actual usage is only
> >   much later after domains are running
> > - add checks overlapping ranges in debug build and document the
> >   limitations
> > - change subpage_mmio_ro_add() so the error path doesn't potentially
> >   remove pages from mmio_ro_ranges
> > - move printing message to avoid one goto in
> >   subpage_mmio_write_emulate()
> > Changes in v2:
> > - Simplify subpage_mmio_ro_add() parameters
> > - add to mmio_ro_ranges from within subpage_mmio_ro_add()
> > - use ioremap() instead of caller-provided fixmap
> > - use 8-bytes granularity (largest supported single write) and a bitmap
> >   instead of a rangeset
> > - clarify commit message
> > - change how it's plugged in for HVM domain, to not change the behavior=
 for
> >   read-only parts (keep it hitting domain_crash(), instead of ignoring
> >   write)
> > - remove unused subpage_mmio_ro_remove()
> > ---
> >  xen/arch/x86/hvm/emulate.c      |   2 +-
> >  xen/arch/x86/hvm/hvm.c          |   4 +-
> >  xen/arch/x86/include/asm/mm.h   |  25 +++-
> >  xen/arch/x86/mm.c               | 273 ++++++++++++++++++++++++++++++++=
+-
> >  xen/arch/x86/pv/ro-page-fault.c |   6 +-
> >  5 files changed, 305 insertions(+), 5 deletions(-)
> >=20
> > diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> > index ab1bc516839a..e98513afc69b 100644
> > --- a/xen/arch/x86/hvm/emulate.c
> > +++ b/xen/arch/x86/hvm/emulate.c
> > @@ -2735,7 +2735,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsig=
ned long gla)
> >          .write      =3D mmio_ro_emulated_write,
> >          .validate   =3D hvmemul_validate,
> >      };
> > -    struct mmio_ro_emulate_ctxt mmio_ro_ctxt =3D { .cr2 =3D gla };
> > +    struct mmio_ro_emulate_ctxt mmio_ro_ctxt =3D { .cr2 =3D gla, .mfn =
=3D _mfn(mfn) };
> >      struct hvm_emulate_ctxt ctxt;
> >      const struct x86_emulate_ops *ops;
> >      unsigned int seg, bdf;
> > diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> > index 9594e0a5c530..73bbfe2bdc99 100644
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -2001,8 +2001,8 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsign=
ed long gla,
> >          goto out_put_gfn;
> >      }
> > =20
> > -    if ( (p2mt =3D=3D p2m_mmio_direct) && is_hardware_domain(currd) &&
> > -         npfec.write_access && npfec.present &&
> > +    if ( (p2mt =3D=3D p2m_mmio_direct) && npfec.write_access && npfec.=
present &&
> > +         (is_hardware_domain(currd) || subpage_mmio_write_accept(mfn, =
gla)) &&
> >           (hvm_emulate_one_mmio(mfn_x(mfn), gla) =3D=3D X86EMUL_OKAY) )
> >      {
> >          rc =3D 1;
> > diff --git a/xen/arch/x86/include/asm/mm.h b/xen/arch/x86/include/asm/m=
m.h
> > index 98b66edaca5e..d04cf2c4165e 100644
> > --- a/xen/arch/x86/include/asm/mm.h
> > +++ b/xen/arch/x86/include/asm/mm.h
> > @@ -522,9 +522,34 @@ extern struct rangeset *mmio_ro_ranges;
> >  void memguard_guard_stack(void *p);
> >  void memguard_unguard_stack(void *p);
> > =20
> > +/*
> > + * Add more precise r/o marking for a MMIO page. Range specified here
> > + * will still be R/O, but the rest of the page (not marked as R/O via =
another
> > + * call) will have writes passed through.
> > + * The start address and the size must be aligned to MMIO_RO_SUBPAGE_G=
RAN.
> > + *
> > + * This API cannot be used for overlapping ranges, nor for pages alrea=
dy added
> > + * to mmio_ro_ranges separately.
> > + *
> > + * Since there is currently no subpage_mmio_ro_remove(), relevant devi=
ce should
> > + * not be hot-unplugged.
> > + *
> > + * Return values:
> > + *  - negative: error
> > + *  - 0: success
> > + */
> > +#define MMIO_RO_SUBPAGE_GRAN 8
> > +int subpage_mmio_ro_add(paddr_t start, size_t size);
> > +#ifdef CONFIG_HVM
> > +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla);
> > +#endif
> > +
> >  struct mmio_ro_emulate_ctxt {
> >          unsigned long cr2;
> > +        /* Used only for mmcfg case */
> >          unsigned int seg, bdf;
> > +        /* Used only for non-mmcfg case */
> > +        mfn_t mfn;
> >  };
> > =20
> >  int cf_check mmio_ro_emulated_write(
> > diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> > index d968bbbc7315..dab7cc018c3f 100644
> > --- a/xen/arch/x86/mm.c
> > +++ b/xen/arch/x86/mm.c
> > @@ -150,6 +150,17 @@ bool __read_mostly machine_to_phys_mapping_valid;
> > =20
> >  struct rangeset *__read_mostly mmio_ro_ranges;
> > =20
> > +/* Handling sub-page read-only MMIO regions */
> > +struct subpage_ro_range {
> > +    struct list_head list;
> > +    mfn_t mfn;
> > +    void __iomem *mapped;
> > +    DECLARE_BITMAP(ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN);
> > +};
> > +
> > +static LIST_HEAD(subpage_ro_ranges);
> > +static DEFINE_SPINLOCK(subpage_ro_lock);
> > +
> >  static uint32_t base_disallow_mask;
> >  /* Global bit is allowed to be set on L1 PTEs. Intended for user mappi=
ngs. */
> >  #define L1_DISALLOW_MASK ((base_disallow_mask | _PAGE_GNTTAB) & ~_PAGE=
_GLOBAL)
> > @@ -4910,6 +4921,265 @@ long arch_memory_op(unsigned long cmd, XEN_GUES=
T_HANDLE_PARAM(void) arg)
> >      return rc;
> >  }
> > =20
> > +/*
> > + * Mark part of the page as R/O.
> > + * Returns:
> > + * - 0 on success - first range in the page
> > + * - 1 on success - subsequent range in the page
> > + * - <0 on error
> > + *
> > + * This needs subpage_ro_lock already taken.
> > + */
> > +static int __init subpage_mmio_ro_add_page(
> > +    mfn_t mfn, unsigned int offset_s, unsigned int offset_e)
>=20
> Nit: parameters here seem to be indented differently than below.
>=20
> > +{
> > +    struct subpage_ro_range *entry =3D NULL, *iter;
> > +    unsigned int i;
> > +
> > +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> > +    {
> > +        if ( mfn_eq(iter->mfn, mfn) )
> > +        {
> > +            entry =3D iter;
> > +            break;
> > +        }
> > +    }
>=20
> AFAICT you could put the search logic into a separate function and use
> it here, plus in subpage_mmio_ro_remove_page(),
> subpage_mmio_write_emulate() and subpage_mmio_write_accept() possibly.

Good idea.

> > +    if ( !entry )
> > +    {
> > +        /* iter =3D=3D NULL marks it was a newly allocated entry */
> > +        iter =3D NULL;
> > +        entry =3D xzalloc(struct subpage_ro_range);
> > +        if ( !entry )
> > +            return -ENOMEM;
> > +        entry->mfn =3D mfn;
> > +    }
> > +
> > +    for ( i =3D offset_s; i <=3D offset_e; i +=3D MMIO_RO_SUBPAGE_GRAN=
 )
> > +    {
> > +        bool oldbit =3D __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRAN,
> > +                                        entry->ro_elems);
> > +        ASSERT(!oldbit);
> > +    }
> > +
> > +    if ( !iter )
> > +        list_add(&entry->list, &subpage_ro_ranges);
> > +
> > +    return iter ? 1 : 0;
> > +}
> > +
> > +/* This needs subpage_ro_lock already taken */
> > +static void __init subpage_mmio_ro_remove_page(
> > +    mfn_t mfn,
> > +    unsigned int offset_s,
> > +    unsigned int offset_e)
> > +{
> > +    struct subpage_ro_range *entry =3D NULL, *iter;
> > +    unsigned int i;
> > +
> > +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> > +    {
> > +        if ( mfn_eq(iter->mfn, mfn) )
> > +        {
> > +            entry =3D iter;
> > +            break;
> > +        }
> > +    }
> > +    if ( !entry )
> > +        return;
> > +
> > +    for ( i =3D offset_s; i <=3D offset_e; i +=3D MMIO_RO_SUBPAGE_GRAN=
 )
> > +        __clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> > +
> > +    if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GR=
AN) )
> > +        return;
> > +
> > +    list_del(&entry->list);
> > +    if ( entry->mapped )
> > +        iounmap(entry->mapped);
> > +    xfree(entry);
> > +}
> > +
> > +int __init subpage_mmio_ro_add(
> > +    paddr_t start,
> > +    size_t size)
> > +{
> > +    mfn_t mfn_start =3D maddr_to_mfn(start);
> > +    paddr_t end =3D start + size - 1;
> > +    mfn_t mfn_end =3D maddr_to_mfn(end);
> > +    unsigned int offset_end =3D 0;
> > +    int rc;
> > +    bool subpage_start, subpage_end;
> > +
> > +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > +        size =3D ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> > +
> > +    if ( !size )
> > +        return 0;
> > +
> > +    if ( mfn_eq(mfn_start, mfn_end) )
> > +    {
> > +        /* Both starting and ending parts handled at once */
> > +        subpage_start =3D PAGE_OFFSET(start) || PAGE_OFFSET(end) !=3D =
PAGE_SIZE - 1;
> > +        subpage_end =3D false;
>=20
> Given the intended usage of this, don't we want to limit to only a
> single page?  So that PFN_DOWN(start + size) =3D=3D PFN_DOWN/(start), as
> that would simplify the logic here?

I have considered that, but I haven't found anything in the spec
mandating that the XHCI DbC registers not cross a page boundary.
Currently (on the system I test this on) they don't cross a page
boundary, but I don't want to assume extra constraints - to avoid issues
like before (on the older system I tested, the DbC registers didn't
share a page with other registers, but then they did share a page on
newer hardware).

> Mostly asking because I think for the usage of XHCI the registers that
> need to be marked RO are all inside the same page, and hence would
> like to avoid introducing logic to handle multipage ranges if that's
> not tested at all.
>=20
> > +    }
> > +    else
> > +    {
> > +        subpage_start =3D PAGE_OFFSET(start);
> > +        subpage_end =3D PAGE_OFFSET(end) !=3D PAGE_SIZE - 1;
> > +    }
> > +
> > +    spin_lock(&subpage_ro_lock);
>=20
> Do you really need the lock if modifications can only happen during
> init?  Xen initialization is single threaded, so you can likely avoid
> the lock during boot.

With adding (and removing) firmly tied to init (via __ro_after_init), I
think I'm okay with dropping the spinlock here. Yet, it's still needed
for mapping the page.

> > +
> > +    if ( subpage_start )
> > +    {
> > +        offset_end =3D mfn_eq(mfn_start, mfn_end) ?
> > +                     PAGE_OFFSET(end) :
> > +                     (PAGE_SIZE - 1);
> > +        rc =3D subpage_mmio_ro_add_page(mfn_start,
> > +                                      PAGE_OFFSET(start),
> > +                                      offset_end);
> > +        if ( rc < 0 )
> > +            goto err_unlock;
> > +        /* Check if not marking R/W part of a page intended to be full=
y R/O */
> > +        ASSERT(rc || !rangeset_contains_singleton(mmio_ro_ranges,
> > +                                                  mfn_x(mfn_start)));
>=20
> I think it would be better if this check was done ahead, and an error
> was returned.  I see no point in delaying the check until the region
> has already been registered.

I need the return value from subpage_mmio_ro_add_page() for this check,
because currently it's okay to mark further regions read-only (at which
point the page is already on mmio_ro_ranges). Theoretically I could
probably limit the scope of this API even further - to just one R/O
region per page - but even in the XHCI driver I can imagine needing to
mark more regions (which might share a page, depending on hardware
layout) in some future version that could gain more features.

> > +    }
> > +
> > +    if ( subpage_end )
> > +    {
> > +        rc =3D subpage_mmio_ro_add_page(mfn_end, 0, PAGE_OFFSET(end));
> > +        if ( rc < 0 )
> > +            goto err_unlock_remove;
> > +        /* Check if not marking R/W part of a page intended to be full=
y R/O */
> > +        ASSERT(rc || !rangeset_contains_singleton(mmio_ro_ranges,
> > +                                                  mfn_x(mfn_end)));
> > +    }
> > +
> > +    spin_unlock(&subpage_ro_lock);
> > +
> > +    rc =3D rangeset_add_range(mmio_ro_ranges, mfn_x(mfn_start), mfn_x(=
mfn_end));
> > +    if ( rc )
> > +        goto err_remove;
> > +
> > +    return 0;
> > +
> > + err_remove:
> > +    spin_lock(&subpage_ro_lock);
> > +    if ( subpage_end )
> > +        subpage_mmio_ro_remove_page(mfn_end, 0, PAGE_OFFSET(end));
> > + err_unlock_remove:
> > +    if ( subpage_start )
> > +        subpage_mmio_ro_remove_page(mfn_start, PAGE_OFFSET(start), off=
set_end);
> > + err_unlock:
> > +    spin_unlock(&subpage_ro_lock);
> > +    return rc;
> > +}
> > +
> > +static void __iomem *subpage_mmio_map_page(
> > +    struct subpage_ro_range *entry)
> > +{
> > +    void __iomem *mapped_page;
> > +
> > +    if ( entry->mapped )
> > +        return entry->mapped;
> > +
> > +    mapped_page =3D ioremap(mfn_to_maddr(entry->mfn), PAGE_SIZE);
> > +
> > +    spin_lock(&subpage_ro_lock);
> > +    /* Re-check under the lock */
> > +    if ( entry->mapped )
> > +    {
> > +        spin_unlock(&subpage_ro_lock);
> > +        if ( mapped_page )
> > +            iounmap(mapped_page);
> > +        return entry->mapped;
> > +    }
> > +
> > +    entry->mapped =3D mapped_page;
> > +    spin_unlock(&subpage_ro_lock);
> > +    return entry->mapped;
> > +}
> > +
> > +static void subpage_mmio_write_emulate(
> > +    mfn_t mfn,
> > +    unsigned int offset,
> > +    const void *data,
> > +    unsigned int len)
> > +{
> > +    struct subpage_ro_range *entry;
> > +    volatile void __iomem *addr;
> > +
> > +    list_for_each_entry(entry, &subpage_ro_ranges, list)
> > +    {
> > +        if ( mfn_eq(entry->mfn, mfn) )
> > +        {
> > +            if ( test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_ele=
ms) )
> > +            {
> > + write_ignored:
> > +                gprintk(XENLOG_WARNING,
> > +                        "ignoring write to R/O MMIO 0x%"PRI_mfn"%03x l=
en %u\n",
> > +                        mfn_x(mfn), offset, len);
> > +                return;
> > +            }
> > +
> > +            addr =3D subpage_mmio_map_page(entry);
>=20
> Given the very limited usage of this subpage RO infrastructure, I
> would be tempted to just map the mfn when the page is registered, in
> order to simplify the logic here.  The only use-case we have is XHCI,
> and further usage of this are likely to be limited to similar hardware
> that's shared between Xen and the hardware domain.

In an earlier similar series (which in practice was about 1 or 2 pages
per device) Jan requested doing lazy mapping, so I did it similarly in
this series too.

> > +            if ( !addr )
> > +            {
> > +                gprintk(XENLOG_ERR,
> > +                        "Failed to map page for MMIO write at 0x%"PRI_=
mfn"%03x\n",
> > +                        mfn_x(mfn), offset);
> > +                return;
> > +            }
> > +
> > +            switch ( len )
> > +            {
> > +            case 1:
> > +                writeb(*(const uint8_t*)data, addr);
> > +                break;
> > +            case 2:
> > +                writew(*(const uint16_t*)data, addr);
> > +                break;
> > +            case 4:
> > +                writel(*(const uint32_t*)data, addr);
> > +                break;
> > +            case 8:
> > +                writeq(*(const uint64_t*)data, addr);
> > +                break;
> > +            default:
> > +                /* mmio_ro_emulated_write() already validated the size=
 */
> > +                ASSERT_UNREACHABLE();
> > +                goto write_ignored;
> > +            }
> > +            return;
> > +        }
> > +    }
> > +    /* Do not print message for pages without any writable parts. */
> > +}
> > +
> > +#ifdef CONFIG_HVM
> > +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
> > +{
> > +    unsigned int offset =3D PAGE_OFFSET(gla);
> > +    const struct subpage_ro_range *entry;
> > +
> > +    list_for_each_entry(entry, &subpage_ro_ranges, list)
> > +        if ( mfn_eq(entry->mfn, mfn) &&
> > +             !test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems)=
 )
> > +        {
> > +            /*
> > +             * We don't know the write size at this point yet, so it c=
ould be
> > +             * an unaligned write, but accept it here anyway and deal =
with it
> > +             * later.
> > +             */
> > +            return true;
>=20
> For accesses that fall into the RO region, I think you need to accept
> them here and just terminate them?  I see no point in propagating
> them further in hvm_hap_nested_page_fault().

If a write hits an R/O region on a page with some writable regions, the
handling should be the same as it would be for a page simply on
mmio_ro_ranges. This is what the patch does.
There may be an opportunity to simplify the mmio_ro_ranges handling
somewhere, but I don't think it belongs in this patch.

--=20
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab

--UvpSW2j5ZN71EM0u
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZoNzsACgkQ24/THMrX
1yzN6Af/enoXpilNZ3xO0MWSlmNKlRMB6DV9nrG6Cq01f9aYzghws9gffLSA11gR
v82kVJbl+epSMDmxqCAAKK4UCvTlvADY/R8iqUko2sYktVVLb65js0T8lrmw3pLV
zejBpRIOvGFiWUCVQAwb3Uc9ZpwE39+QRW2EfsO8JonNFjTjiDM+fg9lFwHoEZlr
TvUnlTjpwnuFqzS9hJxYy6HpZ4FIsTSxu/JJeh/GsSRQfsr0cnHWMkyGEBXh28aW
JVNhk1mgvbVjLmLa5SGeSBkbzXSOAkybjCr3snPedk8mNtiokVrGiMwXtrHfRmwh
yS4wHltWmiuc8foRXXNfxa5JzSSmkA==
=/xei
-----END PGP SIGNATURE-----

--UvpSW2j5ZN71EM0u--


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 11:44:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 11:44:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738312.1145032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzv3-0007TK-Il; Tue, 11 Jun 2024 11:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738312.1145032; Tue, 11 Jun 2024 11:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sGzv3-0007TD-GA; Tue, 11 Jun 2024 11:44:05 +0000
Received: by outflank-mailman (input) for mailman id 738312;
 Tue, 11 Jun 2024 11:44:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGzv2-0007T3-51; Tue, 11 Jun 2024 11:44:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGzv2-0001AS-2H; Tue, 11 Jun 2024 11:44:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sGzv1-000345-Nz; Tue, 11 Jun 2024 11:44:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sGzv1-00023Z-NW; Tue, 11 Jun 2024 11:44:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yCr2rT8ZXeOVa77VB0nxw3XwH39CsXCRt3lORMldAnQ=; b=4/M9xQODDKsyWqb59dmzQOaioT
	1i9PZQPYMpKY55kPxZzIPmgK1viNfap9qQAp+WLpDulBZ/4z2LpiHWyptjdW9//hVG//ifrjU8GmW
	zsjn7McWaVWhdTUMlTKF45xfPK0x+HPeRtbp8oHw7JCyuMACwa5h/aCta1x8B9GTgRls=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186309-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186309: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0982da4f50279bfb2be479f97821b86feb87c336
X-Osstest-Versions-That:
    ovmf=6d15276ceddd2bf05995ee2efa86316fca1cd73a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Jun 2024 11:44:03 +0000

flight 186309 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186309/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0982da4f50279bfb2be479f97821b86feb87c336
baseline version:
 ovmf                 6d15276ceddd2bf05995ee2efa86316fca1cd73a

Last test of basis   186306  2024-06-10 16:12:57 Z    0 days
Testing same since   186309  2024-06-11 09:43:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dhaval <dhaval@rivosinc.com>
  Dhaval Sharma <dhaval@rivosinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6d15276ced..0982da4f50  0982da4f50279bfb2be479f97821b86feb87c336 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 11:49:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 11:49:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738322.1145042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH00J-0000O8-67; Tue, 11 Jun 2024 11:49:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738322.1145042; Tue, 11 Jun 2024 11:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH00J-0000O1-2t; Tue, 11 Jun 2024 11:49:31 +0000
Received: by outflank-mailman (input) for mailman id 738322;
 Tue, 11 Jun 2024 11:49:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3f28=NN=wdc.com=prvs=8854759ac=Johannes.Thumshirn@srs-se1.protection.inumbo.net>)
 id 1sH00I-0000Ml-59
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 11:49:30 +0000
Received: from esa6.hgst.iphmx.com (esa6.hgst.iphmx.com [216.71.154.45])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7c09a9b-27e8-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 13:49:28 +0200 (CEST)
Received: from mail-mw2nam10lp2048.outbound.protection.outlook.com (HELO
 NAM10-MW2-obe.outbound.protection.outlook.com) ([104.47.55.48])
 by ob1.hgst.iphmx.com with ESMTP; 11 Jun 2024 19:49:22 +0800
Received: from PH0PR04MB7416.namprd04.prod.outlook.com (2603:10b6:510:12::17)
 by PH0PR04MB7688.namprd04.prod.outlook.com (2603:10b6:510:5f::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.37; Tue, 11 Jun
 2024 11:49:20 +0000
Received: from PH0PR04MB7416.namprd04.prod.outlook.com
 ([fe80::ee22:5d81:bfcf:7969]) by PH0PR04MB7416.namprd04.prod.outlook.com
 ([fe80::ee22:5d81:bfcf:7969%4]) with mapi id 15.20.7633.037; Tue, 11 Jun 2024
 11:49:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7c09a9b-27e8-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=wdc.com; i=@wdc.com; q=dns/txt; s=dkim.wdc.com;
  t=1718106568; x=1749642568;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=joufdrNO2/FLdgGzlnaieoHfZ28FYCwNXGhrZ08f24E=;
  b=O0lTdn4H/rPo5k84KR32U7w4fO9fFmmvyjDlz6sr22zLBtWcVbep6oMs
   H6OkH2Arhx2hC6MyCE3VRWKdaZqqRdsZjZzOMlUNeA/aaWHPNBV1x2jeB
   gBfhwAqu31TEG+w9hUogXbVl27WkwARPSQpz1qI9B7TqaPdK7EiIyph+J
   8l3WhO/ML3e+M7AFvYfEB2lD3R86wITVOUWvD+XCIxFs7LdU82j35Fb0p
   6sbQ8y4B3SFSknCGW1ELW2QbC762aIoCArQvTAA597c2Tw0GnM6fC883k
   J1Nw9Rwlj9xfjBoRaCN2M/yxARpEm3sSgmzhBo8vCXB5b7YtISxABCmWZ
   g==;
X-CSE-ConnectionGUID: JrPDpdWfRG+ZqkmmTiOxOg==
X-CSE-MsgGUID: E3cjbR6US1mJc3gvMlH95g==
X-IronPort-AV: E=Sophos;i="6.08,229,1712592000"; 
   d="scan'208";a="18781832"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CRmGq6iSB90y4s8aNIFdy1KnNncJpFLUxVfTcv/yRxyNve+RltmXcIEvt1bSKHThU0VlPg28INa8JFeIncYLOuyorWyLraqrqYhY47AVMStS6ZTLCDwkMEjP6L+Yh6V51U04Q4Klft0CXv3xJNk9KWDjp7k/5jUlOH1jzctSHglGH9KsLVwzkAa1XRekLwtWywu/5nkkrWCp/FJ6z2CvjW63ZPFOUPwrzLmDsPHp8HlrGLhAcsuxOpL1BF83X/qmDVRhp3hnqj1UksYcbIADlUWBSzXIEQAynQOHBvd8tumoZlS9CNtm6EOqNQbcBQIc3+mA6UjNEiFzI+Q4WRkp4w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=joufdrNO2/FLdgGzlnaieoHfZ28FYCwNXGhrZ08f24E=;
 b=iO5F3SwUN3X3j77HvgMjY6jG5O7gm7dZZrLDmD0WIOFQAxdRfIZ7BR8wEU9ZMv4qxZjdQcX3RgrKsJCQB5+h7FBo5Rvnzxh9yf/qPpCr5YaaKLndyFNj4h+fxJUnM1HmgMYzeut5iGT+BkoJmEiFET4CVOUNbTIbY1mACXiR3q88O5AVnDLouz7DGwaVjNsrDkBVAyD/80TfEsWxlq0bDJ/WMzaXVkibVOHG9JeCsGBPocarfWyz2IMj7vU2zcKVHi+5hB15gzYfYsZw2ioCshBWZOhEQaY21G8mUhGAF+eCX6MOfsqKJ33eacPbQldtHmTlG2v/gNRzVvspJxF0lA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=wdc.com; dmarc=pass action=none header.from=wdc.com; dkim=pass
 header.d=wdc.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=sharedspace.onmicrosoft.com; s=selector2-sharedspace-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=joufdrNO2/FLdgGzlnaieoHfZ28FYCwNXGhrZ08f24E=;
 b=OVto4xmIBCZSfAUJmn/NJAVGklt21Os5hPTqNzTajqHqfyRNN/xQkPzIx0w4SrbcjO3PrBonbpVhNOJ+smxroDnUEuwHQyNRVuh8sJ6rhfRV1AdlkCaURdQyWxJ38Jr2v842hXjWVaNAOZs2wpsNceCT0GfYnZH6a+RyQfLDbDA=
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Geert Uytterhoeven <geert@linux-m68k.org>, Richard Weinberger
	<richard@nod.at>, Philipp Reisner <philipp.reisner@linbit.com>, Lars
 Ellenberg <lars.ellenberg@linbit.com>,
	=?utf-8?B?Q2hyaXN0b3BoIELDtmhtd2FsZGVy?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Alasdair Kergon
	<agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka
	<mpatocka@redhat.com>, Song Liu <song@kernel.org>, Yu Kuai
	<yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "linux-m68k@lists.linux-m68k.org"
	<linux-m68k@lists.linux-m68k.org>, "linux-um@lists.infradead.org"
	<linux-um@lists.infradead.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"dm-devel@lists.linux.dev" <dm-devel@lists.linux.dev>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-s390@vger.kernel.org" <linux-s390@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>
Subject: Re: [PATCH 08/26] virtio_blk: remove virtblk_update_cache_mode
Thread-Topic: [PATCH 08/26] virtio_blk: remove virtblk_update_cache_mode
Thread-Index: AQHau79kF7G5+NsLzE2Hf/a9e+yyI7HCc0sA
Date: Tue, 11 Jun 2024 11:49:20 +0000
Message-ID: <89d4138d-92b7-4899-a028-277dde4b662f@wdc.com>
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-9-hch@lst.de>
In-Reply-To: <20240611051929.513387-9-hch@lst.de>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla Thunderbird
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=wdc.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: PH0PR04MB7416:EE_|PH0PR04MB7688:EE_
x-ms-office365-filtering-correlation-id: 89a728e8-ce41-4765-5853-08dc8a0c87ff
wdcipoutbound: EOP-TRUE
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230031|366007|376005|1800799015|7416005|38070700009;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR04MB7416.namprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(366007)(376005)(1800799015)(7416005)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <958B41E259453E438A1DB5FF9C75C781@namprd04.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: wdc.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR04MB7416.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 89a728e8-ce41-4765-5853-08dc8a0c87ff
X-MS-Exchange-CrossTenant-originalarrivaltime: 11 Jun 2024 11:49:20.5036
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b61c8803-16f3-4c35-9b17-6f65f441df86
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: abShtI8J+M5F2N3VJfAC6CnUQqnjNC8m27YeKFq1a1ey9sM9Qw5Nzs0SS0ktb01VOGcWcXfhjpCV/tqsv6p866q7FWxCaD+jL++RMAczM2c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR04MB7688

TG9va3MgZ29vZCwNClJldmlld2VkLWJ5OiBKb2hhbm5lcyBUaHVtc2hpcm4gPGpvaGFubmVzLnRo
dW1zaGlybkB3ZGMuY29tPg0K


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 11:53:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 11:53:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738328.1145053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH03i-0002tD-MU; Tue, 11 Jun 2024 11:53:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738328.1145053; Tue, 11 Jun 2024 11:53:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH03i-0002t6-Hv; Tue, 11 Jun 2024 11:53:02 +0000
Received: by outflank-mailman (input) for mailman id 738328;
 Tue, 11 Jun 2024 11:53:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sH03h-0002sy-02
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 11:53:01 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2650b50a-27e9-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 13:52:59 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id
 4fb4d7f45d1cf-57c72d6d5f3so3681544a12.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 04:52:59 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c7ef8d914sm4223789a12.71.2024.06.11.04.52.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 04:52:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2650b50a-27e9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718106779; x=1718711579; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KZpbqDn0Y/BPvQ8JCxbw4Bue1wDMPXTDQt7/Xb+xbmc=;
        b=P2tmtft+hmtHY6vrqbHt6eGwBRhXI9zCl3Oyg6GeTec26Vy1hq72niMrbEWKbkSxBk
         b36jQmEaMIQ9or2FU7LRHI1UQWhio3uVcPEuqbs8mhtmEAVkpQI5zRhgoiqMeaXDWUh1
         0hgzuEMqFgFhLRgnDz2gSs39qpiDJ48/NT5vPsAFwOYJ+lNMzkKl7v6NA6qc36Ne1Na1
         r4K4drE6NYRvggNMFL1NTAAV6Qo6hF84Dqqxe0ubR+/jZ3AItSdQ0mzO4wjI/ORaI5Vb
         RmA6kXVYlHllIFCQxrELuWIuhjzUFvrYE98afbs1eVkQxeE/0VQbIAkZTIhsK5s1G/xZ
         ZZSw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718106779; x=1718711579;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KZpbqDn0Y/BPvQ8JCxbw4Bue1wDMPXTDQt7/Xb+xbmc=;
        b=dG51AcergG0Vc4zt7Klo9ewPQ42qbFIloEmeEXTf7+9S2aum8n6hyVn+1Fbj6WrtBv
         /vaR08Jej45DUiMJDFSwKdS+GNR8ir4vWbokds7F1wQS3aqsAvd65PeiXL8I9SvQSKoX
         x/VsLcLnWp0DWBGG2SDFCMaYJM//ipccWxzpRjh7FtsPWSgcsaprjAnTcAjvDzdAp2A0
         wqRiC1D3UbE0jRu9oivCrjEAkrR5AYxJauKAQX6NshNTmv+xr+lr2yDHVL0fhwy73w8P
         QbWAObUBX2S/bRLRo+6DmHuwCpiENLCbxy5R9DaNnof2lFBJ+0H5WNZU4KNiwCagVqvg
         bvHQ==
X-Gm-Message-State: AOJu0YzvOSQ5kyuDztJB5eGpbHrS4KlhG+1y4JfaAM4HAegIDvGciPtY
	zUuIpMt94qwI7pKnqvwVZlTnSP3W+1sZGG4OL8nBNj22uGhOI0HYihcg0SfshA==
X-Google-Smtp-Source: AGHT+IFzWKobZSODZqfS9LJOmwX+h6jUTdWf8BkyZO8sTQihiv03H0EeVUfKfGgRXFtyVmK/vtREEQ==
X-Received: by 2002:a50:cdda:0:b0:57c:7ce3:6cd9 with SMTP id 4fb4d7f45d1cf-57c7ce36e15mr3622795a12.23.1718106779083;
        Tue, 11 Jun 2024 04:52:59 -0700 (PDT)
Message-ID: <b076dc8d-701e-4a9f-a147-c54673959009@suse.com>
Date: Tue, 11 Jun 2024 13:52:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook> <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook> <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
 <ZmgwKmcLDJDhIsl7@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmgwKmcLDJDhIsl7@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 13:08, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 11:33:24AM +0200, Jan Beulich wrote:
>> On 11.06.2024 11:02, Roger Pau Monné wrote:
>>> On Tue, Jun 11, 2024 at 10:26:32AM +0200, Jan Beulich wrote:
>>>> On 11.06.2024 09:41, Roger Pau Monné wrote:
>>>>> On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
>>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>>>> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
>>>>>>  
>>>>>>      if ( !mfn_valid(mfn) )
>>>>>>      {
>>>>>> -        *ipat = true;
>>>>>> +        *ipat = type != p2m_mmio_direct ||
>>>>>> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
>>>>>
>>>>> Looking at this, shouldn't the !mfn_valid special case be removed, and
>>>>> mfns without a valid page be processed normally, so that the guest
>>>>> MTRR values are taken into account, and no iPAT is enforced?
>>>>
>>>> Such removal is what, in the post-commit-message remark, I'm referring to
>>>> as "moving to too lax". Doing so might be okay, but will imo be hard to
>>>> prove to be correct for all possible cases. Along these lines goes also
>>>> that I'm adding the IOMMU-enabled and cache-flush checks: In principle
>>>> p2m_mmio_direct should not be used when neither of these return true. Yet
>>>> a similar consideration would apply to the immediately subsequent if().
>>>>
>>>> Removing this code would, in particular, result in INVALID_MFN getting a
>>>> type of WB by way of the subsequent if(), unless the type there would
>>>> also be p2m_mmio_direct (which, as said, it ought to never be for non-
>>>> pass-through domains). That again _may_ not be a problem as long as such
>>>> EPT entries would never be marked present, yet that's again difficult to
>>>> prove.
>>>
>>> My understanding is that the !mfn_valid() check was a way to detect
>>> MMIO regions in order to exit early and set those to UC.  I however
>>> don't follow why the guest MTRR settings shouldn't also be applied to
>>> those regions.
>>
>> It's unclear to me whether the original purpose of the check really was
>> (just) MMIO. It could as well also have been to cover the (then not yet
>> named that way) case of INVALID_MFN.
>>
>> As to ignoring guest MTRRs for MMIO: I think that's to be on the safe
>> side. We don't want guests to map uncachable memory with a cachable
>> memory type. Yet control isn't fine grained enough to prevent just
>> that. Hence why we force UC, allowing merely to move to WC via PAT.
> 
> Would that be to cover up for guest bugs, or is there a coherency
> reason for not allowing guests to access memory using fully guest
> chosen cache attributes?

I think the main reason is that this way we don't need to bother thinking
of whether MMIO regions may need caches flushed in order for us to be
sure memory is all up-to-date. But I have no insight into what the
original reasons here may have been.

> I really wonder whether Xen has enough information to figure out
> whether a hole (MMIO region) is supposed to be accessed as UC or
> something else.

It certainly hasn't, and hence is erring on the (safe) side of forcing
UC.

> Your proposed patch already allows guest to set such attributes in
> PAT, and hence I don't see why also taking guest MTRRs into account
> would be any worse.

Whatever the guest sets in PAT, UC in EMT will win except for the
special case of WC.

>>>>> I also think this likely wants a:
>>>>>
>>>>> Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
>>>>
>>>> Oh, indeed, I should have dug out when this broke. I didn't because I
>>>> knew this mfn_valid() check was there forever, neglecting that it wasn't
>>>> always (almost) first.
>>>>
>>>>> As AFAICT before that commit direct MMIO regions would set iPAT to WB,
>>>>> which would result in the correct attributes (albeit guest MTRR was
>>>>> still ignored).
>>>>
>>>> Two corrections here: First iPAT is a boolean; it can't be set to WB.
>>>> And then what was happening prior to that change was that for the APIC
>>>> access page iPAT was set to true, thus forcing WB there. iPAT was left
>>>> set to false for all other p2m_mmio_direct pages, yielding (PAT-
>>>> overridable) UC there.
>>>
>>> Right, that behavior was still dubious to me, as I would assume those
>>> regions would also want to fetch the type from guest MTRRs.
>>
>> Well, for the APIC access page we want to prevent it becoming UC. It's MMIO
>> from the guest's perspective, yet _we_ know it's really ordinary RAM. For
>> actual MMIO see above; the only case where we probably ought to respect
>> guest MTRRs is when they say WC (following from what I said further up).
>> Yet that's again an independent change to (possibly) make.
> 
> For emulated devices we might map regular RAM into what the guest
> otherwise thinks it's MMIO.

Right, and for non-pass-through domains we force everything to WB already.

>  Maybe the mfn_valid() check should be
> inverted, and return WB when the underlying mfn is RAM, and otherwise
> use the guest MTRRs to decide the cache attribute?

First: Whether WB is correct for RAM isn't known. With some peculiar device
assigned, the guest may want/need part of its RAM to be e.g. WC or WT. (It's
only without any physical devices assigned that we can be quite sure that
WB is good for all of RAM.) Therefore, second, I think respecting MTRRs for
RAM is less likely to cause problems than respecting them for MMIO.

I think at this point the main question is: Do we want to do things at least
along the lines of this v1, or do we instead feel certain enough to switch
the mfn_valid() to a comparison against INVALID_MFN (and perhaps moving it
up to almost the top of the function)? One caveat here that I forgot to
mention before: MFNs taken out of EPT entries will never be INVALID_MFN, due to
the truncation that happens when populating entries. In that case we rely on
mfn_valid() to be "rejecting" them.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 12:36:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 12:36:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738337.1145062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0jT-0004JF-Td; Tue, 11 Jun 2024 12:36:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738337.1145062; Tue, 11 Jun 2024 12:36:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0jT-0004J8-QS; Tue, 11 Jun 2024 12:36:11 +0000
Received: by outflank-mailman (input) for mailman id 738337;
 Tue, 11 Jun 2024 12:36:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3+M=NN=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1sH0jR-0004Iz-Ug
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 12:36:10 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 (mail-ve1eur01on20600.outbound.protection.outlook.com
 [2a01:111:f403:2602::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2c563826-27ef-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 14:36:06 +0200 (CEST)
Received: from DBBPR09CA0035.eurprd09.prod.outlook.com (2603:10a6:10:d4::23)
 by PA4PR08MB5886.eurprd08.prod.outlook.com (2603:10a6:102:e2::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Tue, 11 Jun
 2024 12:35:59 +0000
Received: from DU6PEPF0000B622.eurprd02.prod.outlook.com
 (2603:10a6:10:d4:cafe::c0) by DBBPR09CA0035.outlook.office365.com
 (2603:10a6:10:d4::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7656.25 via Frontend
 Transport; Tue, 11 Jun 2024 12:35:59 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DU6PEPF0000B622.mail.protection.outlook.com (10.167.8.139) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Tue, 11 Jun 2024 12:36:03 +0000
Received: ("Tessian outbound 5a0abdb578b5:v332");
 Tue, 11 Jun 2024 12:36:03 +0000
Received: from bffecf3a46c1.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 483B029E-DFB4-4946-A6D8-C791C6DE8B5D.1; 
 Tue, 11 Jun 2024 12:35:52 +0000
Received: from EUR02-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bffecf3a46c1.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 11 Jun 2024 12:35:52 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS8PR08MB8299.eurprd08.prod.outlook.com (2603:10a6:20b:56f::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.37; Tue, 11 Jun
 2024 12:35:48 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6204:b901:9cc6:bf2b]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6204:b901:9cc6:bf2b%6]) with mapi id 15.20.7633.037; Tue, 11 Jun 2024
 12:35:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c563826-27ef-11ef-b4bb-af5377834399
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=mVwpRiw1Rd1ggeW2tUds8zcs1eNcgHxW82eRjcICLcBaJd4yxGydTBdfgdQbf1/T+vyaFM1i8VNKT9egSC76Zfk98puF+LHqr/WGZhFGrgpzqMoIrsK2JdHtYkDYWuoMiq7dEzZB69k7+EXVlfbVfOqQ7vX3+3ybSuRCWexTyN8fZS1bxUUweVgur630Haz+MtSW3/fVwmAmAhAr0Pqb3FdpOh8PEKdm0UbCjSfd6Sl9vm0wxz7c4uxbTi6ci/ZAcw1mQSKV7S5ts2cybcTsYu/duIL2YhBJJS/NdFcH05eA8Aq7e/VS334KC2BLqtQKKlIec7Ompptmmp4NfW/+yA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Oq2k33ssq0RsWs77F3gWkmHi5LZo8p3bDqy2F5s3s2w=;
 b=NDNNB1i/Zl9rw9jF66gQcyVOtolcNoRl3O9nZA6zXbi9bAnEAmABel2TAhU6kP0Uw0qAJlmRqc4l5tal/bCaqPgz1ew7cp3JFn1R3DLxS10bHbs7NndQMXnRQid6JRYJtBnit+jQjkyqHT4BRgrkcJVSPqhkpN+xYUxpn2X0V9aQyKzFLIEFuEFpx0XBo9+qFd6hmpqlgJeOVVl5QKbFN4GEyrbcxcL3YHoEbFZfxv3ziD36Qava862spKguFsmL2jCWqxFH6Ph44lm7mCybivXojTqJyJX86Yy189StXug9bPpY4rc3lVWsfrwLzhtffCu6g6hppPcQ7KG4JPsaAA==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Oq2k33ssq0RsWs77F3gWkmHi5LZo8p3bDqy2F5s3s2w=;
 b=JuLOFpaZy9gt+MHgwDbxWyJPE1M7cIrMDUuh360tlRuKm15W8RtQTG/0Of/EkBIjr5o5QPJJ/Ab6xposivmj58xdQrEFNNqkgUyluI6j4Lo4BD9g5Ndvt8Tyl/vfIDdfOM3nVA7hW6T2hLlXJALsqozNbdnSKznlJcOu1tad+H8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: fb45eb3c75c539e4
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ijoSzSz1Cw9KAbFPyWYRquqHyYtyuAXxVIE8a2FFL2+QHDJaqZ5Gh+sHcXD9FddDB39lkFeO4HU2m82ZNgTH6kqXNQROWAT8iiU27dfn+eQFDW4ItmcrC6X17DNwMCTWXbYbnRrUeFxHgtk07K9cORF/HtHLvqZ79N6pKZdBIV/rfVFAJMcFvwVBc4mY2jNWNRZFBX7DGJmSGkL7pyoa+xBVwPjPq2+pjqkFcDzJX33v9bCfBCvg1LGWZ7bh5Fn15oD51QSo2wjVH+4JziAJIbOdHZSiAKHPLs496WkFXD6qTkEKG1RJnc4D2+1e/fvXkfozGIflyKZuRUc/SALYDQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Oq2k33ssq0RsWs77F3gWkmHi5LZo8p3bDqy2F5s3s2w=;
 b=H6mBWT6VNTGfKkvblwenk5S2H6OvvatusVCsk98tGNYOuaejTUjNYSR5O91a/8WqNRfs5BShnExU00EQd6CrTzF/i13cB4CHaPYI0YiRt/s6qV3mtphpShGRvGOlFHg7Wu+uqq2O7a3ZCrxGcGh8wd+lk6VI2a3VZ4tPryf2nseRTAXpL+0/rK5t3cHlvCyLClpl053lqV6jk5IacuD/2pNafkpBdD5jtuEG7/rTPCrpNtXnnoQdvri2jZz5SKUTemqmsYgQsDyk2XKYSRjQv5BXALN5gz9n5JcHSlFHdodHwUM/te8Va0UHxojLpEfzU0D2TIv3Tlc0YXiRQCyI2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Oq2k33ssq0RsWs77F3gWkmHi5LZo8p3bDqy2F5s3s2w=;
 b=JuLOFpaZy9gt+MHgwDbxWyJPE1M7cIrMDUuh360tlRuKm15W8RtQTG/0Of/EkBIjr5o5QPJJ/Ab6xposivmj58xdQrEFNNqkgUyluI6j4Lo4BD9g5Ndvt8Tyl/vfIDdfOM3nVA7hW6T2hLlXJALsqozNbdnSKznlJcOu1tad+H8=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, Michal Orzel
	<michal.orzel@amd.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v4 0/7] Static shared memory followup v2 - pt2
Thread-Topic: [PATCH v4 0/7] Static shared memory followup v2 - pt2
Thread-Index: AQHarde1wBZMIqRdOku1vS0eoyuKdLHCnAmA
Date: Tue, 11 Jun 2024 12:35:44 +0000
Message-ID: <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
References: <20240524124055.3871399-1-luca.fancellu@arm.com>
In-Reply-To: <20240524124055.3871399-1-luca.fancellu@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|AS8PR08MB8299:EE_|DU6PEPF0000B622:EE_|PA4PR08MB5886:EE_
X-MS-Office365-Filtering-Correlation-Id: a65a8f92-37e4-4246-a56c-08dc8a130eb4
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230031|1800799015|366007|376005|38070700009;
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230031)(1800799015)(366007)(376005)(38070700009);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <50C5AE020BA5EA499D4268EAC3822111@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB8299
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DU6PEPF0000B622.eurprd02.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6274787a-5728-4a53-58b6-08dc8a130324
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|35042699013|82310400017|1800799015|376005|36860700004;
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230031)(35042699013)(82310400017)(1800799015)(376005)(36860700004);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2024 12:36:03.5499
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a65a8f92-37e4-4246-a56c-08dc8a130eb4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DU6PEPF0000B622.eurprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB5886

+ Oleksii

> On 24 May 2024, at 13:40, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
> 
> This series is a partial rework of this other series:
> https://patchwork.kernel.org/project/xen-devel/cover/20231206090623.1932275-1-Penny.Zheng@arm.com/
> 
> The original series addresses three issues of the static shared memory
> feature: it impacts the memory footprint of other components when the
> feature is enabled, it impacts the device tree generation for the guests
> when the feature is enabled and used, and it lacks the option to have a
> static shared memory region that is not from the host address space.
> 
> This series handles some comments on the original series and splits the
> rework into two parts. The first part addresses the memory footprint issue
> and the device tree generation and is currently fully merged
> (https://patchwork.kernel.org/project/xen-devel/cover/20240418073652.3622828-1-luca.fancellu@arm.com/);
> this series addresses the static shared memory allocation from the Xen heap.
> 
> Luca Fancellu (5):
>  xen/arm: Lookup bootinfo shm bank during the mapping
>  xen/arm: Wrap shared memory mapping code in one function
>  xen/arm: Parse xen,shared-mem when host phys address is not provided
>  xen/arm: Rework heap page allocation outside allocate_bank_memory
>  xen/arm: Implement the logic for static shared memory from Xen heap
> 
> Penny Zheng (2):
>  xen/p2m: put reference for level 2 superpage
>  xen/docs: Describe static shared memory when host address is not
>    provided
> 
> docs/misc/arm/device-tree/booting.txt   |  52 ++-
> xen/arch/arm/arm32/mmu/mm.c             |  11 +-
> xen/arch/arm/dom0less-build.c           |   4 +-
> xen/arch/arm/domain_build.c             |  84 +++--
> xen/arch/arm/include/asm/domain_build.h |   9 +-
> xen/arch/arm/mmu/p2m.c                  |  82 +++--
> xen/arch/arm/setup.c                    |  14 +-
> xen/arch/arm/static-shmem.c             | 432 +++++++++++++++++-------
> 8 files changed, 502 insertions(+), 186 deletions(-)
> 
> --
> 2.34.1
> 
> 

Hi,

We would like this series to be in Xen 4.19. There was a misunderstanding on
our side: we thought that, since the series was sent before the last posting
date, it could be a candidate for merging in the new release. After speaking
with Julien and Oleksii, we are now aware that we need to provide a
justification for its inclusion.

The pro of this series is that it closes the circle for static shared memory,
allowing it to use memory either from the host or from Xen. It is also a
feature that is not enabled by default, so it should not cause too much
disruption in case any bugs escaped the review; moreover, we have tested many
configurations with and without the feature enabled, if that can be an
additional value.

The con is that we are touching some common code related to p2m, but there
too the impact should be minimal, because the new code is subject to the l2
foreign mapping path (to be confirmed, maybe by a p2m expert like Julien).

The comments on patch 3 of this series are addressed by this patch:
https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fancellu@arm.com/
And the series is fully reviewed.

So our request is to allow this series into 4.19. Oleksii, ARM maintainers,
do you agree?

Cheers,
Luca


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 12:42:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 12:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738343.1145071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0pJ-0006m9-Hh; Tue, 11 Jun 2024 12:42:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738343.1145071; Tue, 11 Jun 2024 12:42:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0pJ-0006m2-F6; Tue, 11 Jun 2024 12:42:13 +0000
Received: by outflank-mailman (input) for mailman id 738343;
 Tue, 11 Jun 2024 12:42:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PtoW=NN=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sH0pI-0006lw-1s
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 12:42:12 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20604.outbound.protection.outlook.com
 [2a01:111:f403:2408::604])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 035d82e6-27f0-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 14:42:08 +0200 (CEST)
Received: from SJ0PR03CA0001.namprd03.prod.outlook.com (2603:10b6:a03:33a::6)
 by SN7PR12MB7978.namprd12.prod.outlook.com (2603:10b6:806:34b::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.24; Tue, 11 Jun
 2024 12:42:04 +0000
Received: from SJ1PEPF00002312.namprd03.prod.outlook.com
 (2603:10b6:a03:33a:cafe::23) by SJ0PR03CA0001.outlook.office365.com
 (2603:10b6:a03:33a::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.17 via Frontend
 Transport; Tue, 11 Jun 2024 12:42:04 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SJ1PEPF00002312.mail.protection.outlook.com (10.167.242.166) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Tue, 11 Jun 2024 12:42:03 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Tue, 11 Jun
 2024 07:42:03 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Tue, 11 Jun
 2024 07:42:02 -0500
Received: from [10.252.147.188] (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server id 15.1.2507.39 via Frontend
 Transport; Tue, 11 Jun 2024 07:42:01 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 035d82e6-27f0-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=L71tS/Rvzgu7J9QoR1Z+8FqAY2wO3Y2nA5/j5uXfqaKjpJS9ySp172B2KKPP7Ph0dwFMmefC/XkeLELfwNu/DWXyWPUPtSanvphzppqTBK4OIRNssvNVMYe8FCfOVD62Ltx4mi9pkYU9zaEI7QHN8ve42tGngDuQ3zwymORWNrHo8QqD1FkMDfkZ1kQeI+NJbow/5gwRh09jo5c3vcaAmk9EWK0B6/rMFN1W0EQ3vsSszHuV/kECQTejR9oTJjyOZglHghZJiuAJ5rbW7d4xxKKZ49/kURvEgzBaQqhhi7zstga19ExyVHVomlFuIopdNzZ9P36UE8T0oBDm5SKtog==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=JwN7sSeMlmS1YUWczykEipW7HhUPiWbqBijU+k3Ruuk=;
 b=Zpp1dJ5AYpzrSGUsfjgbE1CKuUNwvh3gxeqkalAvM9/EwHDgHA/L0TKEo3ykQukkzF9NsNkz9bEs6FpcU1BjmDv66cIJlwQKq9Ym/FKISNLkDm9xXxU0At7152BkO8vIdZmeu9DGct8y5LgRJwOFat35UEz9RSHmAcMwr5PkK3PprU8zq84i6ywjsOR4UeamMOPadlgjnl7jBH1z7ffrtNNQvCGnvauxX7QAegBo/C1NQjEW8BiKdE92wQ8fNjYPDajEtQZEmY4z7eyo+2f2zGFbpcwV2u3ivYzh9TXEXX6HhMcWdiIRCgLm+RYmK9CFTMAOxwwntciqPHwkUux7Kg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=arm.com smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JwN7sSeMlmS1YUWczykEipW7HhUPiWbqBijU+k3Ruuk=;
 b=g4xvvApvlFjQCNugxEJ5L6NNWd4wFlCwjNm5Ddi95ubwp1+Y2pBHeHJ+jiLOJeginNrE20AewO1/AFb6M5ZOjkYn2yR3WqBF6TJcRG83qjEPq/sOEz99NAF7wsc1JhL1kkt/gRKVy4h5B4QfPa/8HOdikP+PLX9cjXtjoRrB+yk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <66e5d584-b326-4197-81d5-ec2b8233a3fa@amd.com>
Date: Tue, 11 Jun 2024 14:42:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 0/7] Static shared memory followup v2 - pt2
To: Luca Fancellu <Luca.Fancellu@arm.com>, Xen-devel
	<xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
	<julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>, "Volodymyr
 Babchuk" <Volodymyr_Babchuk@epam.com>, Oleksii Kurochko
	<oleksii.kurochko@gmail.com>
References: <20240524124055.3871399-1-luca.fancellu@arm.com>
 <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ1PEPF00002312:EE_|SN7PR12MB7978:EE_
X-MS-Office365-Filtering-Correlation-Id: 60bb0cf4-e36b-4b9c-cd0e-08dc8a13e58b
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230031|376005|82310400017|1800799015|36860700004;
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230031)(376005)(82310400017)(1800799015)(36860700004);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 Jun 2024 12:42:03.8048
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 60bb0cf4-e36b-4b9c-cd0e-08dc8a13e58b
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SJ1PEPF00002312.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7978

Hi,

On 11/06/2024 14:35, Luca Fancellu wrote:
> 
> 
> + Oleksii
> 
>> On 24 May 2024, at 13:40, Luca Fancellu <Luca.Fancellu@arm.com> wrote:
>>
>> This series is a partial rework of this other series:
>> https://patchwork.kernel.org/project/xen-devel/cover/20231206090623.1932275-1-Penny.Zheng@arm.com/
>>
>> The original series addresses three issues of the static shared memory
>> feature: it impacts the memory footprint of other components when the
>> feature is enabled, it impacts the device tree generation for the guests
>> when the feature is enabled and used, and it lacks the option to have a
>> static shared memory region that is not from the host address space.
>>
>> This series handles some comments on the original series and splits the
>> rework into two parts. The first part addresses the memory footprint issue
>> and the device tree generation and is currently fully merged
>> (https://patchwork.kernel.org/project/xen-devel/cover/20240418073652.3622828-1-luca.fancellu@arm.com/),
>> this series addresses the static shared memory allocation from the Xen heap.
>>
>> Luca Fancellu (5):
>>  xen/arm: Lookup bootinfo shm bank during the mapping
>>  xen/arm: Wrap shared memory mapping code in one function
>>  xen/arm: Parse xen,shared-mem when host phys address is not provided
>>  xen/arm: Rework heap page allocation outside allocate_bank_memory
>>  xen/arm: Implement the logic for static shared memory from Xen heap
>>
>> Penny Zheng (2):
>>  xen/p2m: put reference for level 2 superpage
>>  xen/docs: Describe static shared memory when host address is not
>>    provided
>>
>> docs/misc/arm/device-tree/booting.txt   |  52 ++-
>> xen/arch/arm/arm32/mmu/mm.c             |  11 +-
>> xen/arch/arm/dom0less-build.c           |   4 +-
>> xen/arch/arm/domain_build.c             |  84 +++--
>> xen/arch/arm/include/asm/domain_build.h |   9 +-
>> xen/arch/arm/mmu/p2m.c                  |  82 +++--
>> xen/arch/arm/setup.c                    |  14 +-
>> xen/arch/arm/static-shmem.c             | 432 +++++++++++++++++-------
>> 8 files changed, 502 insertions(+), 186 deletions(-)
>>
>> --
>> 2.34.1
>>
>>
> 
> Hi,
> 
> We would like this series to be in Xen 4.19. There was a misunderstanding on our side: we thought
> that, since the series was sent before the last posting date, it could be a candidate for merging in
> the new release. After speaking with Julien and Oleksii, we are now aware that we need to provide a
> justification for its inclusion.
> 
> The pro of this series is that it closes the circle for static shared memory, allowing it to use
> memory either from the host or from Xen. It is also a feature that is not enabled by default, so it
> should not cause too much disruption in case any bugs escaped the review; moreover, we have tested
> many configurations with and without the feature enabled, if that can be an additional value.
> 
> The con is that we are touching some common code related to p2m, but there too the impact should be
> minimal, because the new code is subject to the l2 foreign mapping path (to be confirmed, maybe by a
> p2m expert like Julien).
> 
> The comments on patch 3 of this series are addressed by this patch:
> https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fancellu@arm.com/
> And the series is fully reviewed.
> 
> So our request is to allow this series into 4.19. Oleksii, ARM maintainers, do you agree?
As the main reviewer of this series, I'm ok with having it in. It is nicely encapsulated, and the
feature itself is still in an unsupported state. I don't foresee any issues with it.

~Michal
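
[Editor's note] For context on what the series enables: the static shared memory binding described in
docs/misc/arm/device-tree/booting.txt attaches a shared-memory node to a dom0less domain, and this
series adds the variant where the host physical address is omitted so that Xen allocates the backing
memory from its own heap. A minimal sketch of such a node follows; the addresses, sizes, and node
names are illustrative only, not taken from the series itself:

```dts
/* Dom0less domU with a static shared memory bank. The host physical
 * address cell is omitted from xen,shared-mem, so Xen picks the backing
 * pages from its own heap (the behaviour this series implements). */
domU1 {
    compatible = "xen,domain";
    #address-cells = <0x1>;
    #size-cells = <0x1>;
    cpus = <1>;
    memory = <0x0 0x20000>;          /* domain memory, in KB */

    /* Shared memory bank: only guest address and size are given. */
    domU1-shared-mem@50000000 {
        compatible = "xen,domain-shared-memory-v1";
        role = "owner";              /* or "borrower" */
        xen,shm-id = "my-shared-mem-0";
        /* <guest_phys_addr size>; no host_phys_addr cell */
        xen,shared-mem = <0x50000000 0x10000000>;
    };
};
```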


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 12:45:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 12:45:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738350.1145082 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0sE-0007P0-5H; Tue, 11 Jun 2024 12:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738350.1145082; Tue, 11 Jun 2024 12:45:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0sE-0007Ot-1A; Tue, 11 Jun 2024 12:45:14 +0000
Received: by outflank-mailman (input) for mailman id 738350;
 Tue, 11 Jun 2024 12:45:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sH0sC-0007On-PG
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 12:45:12 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 71141817-27f0-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 14:45:11 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id
 a640c23a62f3a-a6269885572so1141650766b.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 05:45:11 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f0e47b8f7sm411091366b.31.2024.06.11.05.45.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 05:45:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71141817-27f0-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718109911; x=1718714711; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gErMh05jD6muaAbq6AkdIwJpe5lI8cqV2npPiwbmxvw=;
        b=CTTyDEE2JPVM7MImABHG5lm3nlmnotKDIIw89/LVEkc0oyq1cxE1fuMf4Yqpn0J+le
         qxempD4vUkkiH4JAClqPI9i1gpHBV59hh435xJVUw90e21+TOeQhyIMl1dBDsqjab3EB
         pXJSM33QAoVJhrn6ZCjSTUR1cj0X+Ev8Fek6A8Htotlw4Ffr1x/tHtkhQkA6Z3+svgHY
         xyTZb5Cod0TiwAP3ZfF1q14mU+C3YvmgnjnNMF3Jj5uwpefnry3n5F5rlxvsaGVjyOaw
         gb02wZIj4vCdPOgZHyO4Xsgy6ETzG0UTJwXJrVS1FjFcCumsJJMtnRtKW03w/O5+M30V
         rVig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718109911; x=1718714711;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gErMh05jD6muaAbq6AkdIwJpe5lI8cqV2npPiwbmxvw=;
        b=lYN3kgLv9r2fJsdFnDJ0uVbmWWa08ZCUiQfU2DWkWQ7kkNWmYcCFJxAje2iWQM74Df
         7W0EQ+1WT2kdKxdUYhdKJYeywCx0cRqT2T3WPq1vYLOSlbadvy4JnUjlxIeNkW0Cfnx0
         kW2qUk3mAaexV3JWuY5DR1pQk02iRppcNXlahQStuWkYxcl/jzkzvgxsg6Ytuk5J3DEy
         ULRHtTGT/Hi5nL1hJFLSeJzu4g2wQ8Is5B/FLnN5CiDfZo+oun17P0aU7G6pCOH7La+H
         Rgh/ywMSzgOPu6QhIK+VABlPufvtYnMA+YJZOCJHH4Zc0CqULAsu7zdc1tW5V5pXFq2D
         8rfw==
X-Forwarded-Encrypted: i=1; AJvYcCVxst752e4zKn0MC0hkqankwHYDyQ5A6oD5NPadU7BQgr5pIvMqloCujEIsM3NrLDiAnm9Kr4ipIK2R4aooWS1wQEIri/JhD08xdrLyUWc=
X-Gm-Message-State: AOJu0YyeuHmAr1s9xc7DcIKxffVb0hvwrXu8531jOlAdHsSMyL4uibNh
	nJzJM65wXbojaNw0ouagllhVTHPhnbfQ3wCcAFtp0z6wCZnVciA/6frebcZDHQ==
X-Google-Smtp-Source: AGHT+IH9st5mdVjW2jsQa9hGOa1pLiBKclRhlA0iSgGJ6tqX7vUU23EOJJjNl0mvldoaEwRrG21eQQ==
X-Received: by 2002:a17:906:199b:b0:a6f:2380:3a32 with SMTP id a640c23a62f3a-a6f34ceceb5mr166952066b.21.1718109911043;
        Tue, 11 Jun 2024 05:45:11 -0700 (PDT)
Message-ID: <66fc06cc-f1f6-4f12-83d4-a3b9788bffba@suse.com>
Date: Tue, 11 Jun 2024 14:45:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 5/7] x86/irq: deal with old_cpu_mask for interrupts in
 movement in fixup_irqs()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-6-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240610142043.11924-6-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2024 16:20, Roger Pau Monne wrote:
> Given the current logic it's possible for ->arch.old_cpu_mask to get out of
> sync: if a CPU set in old_cpu_mask is offlined and then onlined
> again without old_cpu_mask having been updated the data in the mask will no
> longer be accurate, as when brought back online the CPU will no longer have
> old_vector configured to handle the old interrupt source.
> 
> If there's an interrupt movement in progress, and the to be offlined CPU (which
> is the call context) is in the old_cpu_mask clear it and update the mask, so it
> doesn't contain stale data.

This imo is too __cpu_disable()-centric. In the code you cover the
smp_send_stop() case afaict, where it's all _other_ CPUs which are being
offlined. As we're not meaning to bring CPUs online again in that case,
dealing with the situation likely isn't needed. Yet the description should
imo at least make clear that the case was considered.

> @@ -2589,6 +2589,28 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>                                 affinity);
>          }
>  
> +        if ( desc->arch.move_in_progress &&
> +             !cpumask_test_cpu(cpu, &cpu_online_map) &&

This part of the condition is, afaict, what covers (excludes) the
smp_send_stop() case. Might be nice to have a brief comment here, thus
also clarifying ...

> +             cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
> +        {
> +            /*
> +             * This CPU is going offline, remove it from ->arch.old_cpu_mask
> +             * and possibly release the old vector if the old mask becomes
> +             * empty.
> +             *
> +             * Note cleaning ->arch.old_cpu_mask is required if the CPU is
> +             * brought offline and then online again, as when re-onlined the
> +             * per-cpu vector table will no longer have ->arch.old_vector
> +             * setup, and hence ->arch.old_cpu_mask would be stale.
> +             */
> +            cpumask_clear_cpu(cpu, desc->arch.old_cpu_mask);
> +            if ( cpumask_empty(desc->arch.old_cpu_mask) )
> +            {
> +                desc->arch.move_in_progress = 0;
> +                release_old_vec(desc);
> +            }

... that none of this is really wanted or necessary in that other case.
Assuming my understanding above is correct, the code change itself is
once again
Reviewed-by: Jan Beulich <jbeulich@suse.com>
yet here I'm uncertain whether to offer on-commit editing, as it's not
really clear to me whether there's a dependency on patch 4.
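
To make the behaviour under discussion concrete, here's a minimal standalone
C model of the quoted hunk. It is not Xen code: cpumask is reduced to a
32-bit word, release_old_vec() to a flag, and all names are illustrative
stand-ins for the ->arch fields being reviewed.

```c
/*
 * Hypothetical model of the quoted fixup_irqs() hunk: when a CPU that is
 * part of an in-progress interrupt movement goes offline, clear it from
 * the old mask, and release the old vector once the mask becomes empty.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct irq_desc_model {
    uint32_t old_cpu_mask;   /* stands in for ->arch.old_cpu_mask */
    bool move_in_progress;   /* stands in for ->arch.move_in_progress */
    bool old_vector_held;    /* true while the old vector is reserved */
};

static void fixup_offline_cpu(struct irq_desc_model *d, unsigned int cpu)
{
    /* Nothing to do unless a movement is pending and this CPU is in it. */
    if (!d->move_in_progress || !(d->old_cpu_mask & (1u << cpu)))
        return;

    d->old_cpu_mask &= ~(1u << cpu);   /* cpumask_clear_cpu() */
    if (d->old_cpu_mask == 0) {        /* cpumask_empty() */
        d->move_in_progress = false;
        d->old_vector_held = false;    /* release_old_vec() */
    }
}
```

Offlining CPU 0 of an old mask {0, 2} leaves the movement pending; offlining
CPU 2 afterwards empties the mask and releases the vector, which is exactly
the stale-mask scenario the commit message describes.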

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 12:46:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 12:46:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738356.1145091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0tP-0007vB-Ck; Tue, 11 Jun 2024 12:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738356.1145091; Tue, 11 Jun 2024 12:46:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0tP-0007v4-AC; Tue, 11 Jun 2024 12:46:27 +0000
Received: by outflank-mailman (input) for mailman id 738356;
 Tue, 11 Jun 2024 12:46:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH0tO-0007up-2y; Tue, 11 Jun 2024 12:46:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH0tN-0002JW-Vi; Tue, 11 Jun 2024 12:46:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH0tN-0004Wg-IV; Tue, 11 Jun 2024 12:46:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sH0tN-0001e8-Hz; Tue, 11 Jun 2024 12:46:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1vVwaZzSP8P6NymiqLNUDAYlnHw9nehQS7lBHm4Me5Q=; b=SM5WAHhTPnPU1BJ1CSl8lpYASK
	K7P+Q+baZ5WhV6UUG4YiX72hTiACjVcBzB8ReE9iMA0y7gdv/yGF5mxgC8nSOPPS/2gq2xseFZ/+T
	5I96ukTjCME5uBx+zaT+8XOinAegKpA3Nm4WfLkAYrtZyI4yXouSULU2J5mEf8vnOSRk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186308-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186308: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
X-Osstest-Versions-That:
    xen=ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Jun 2024 12:46:25 +0000

flight 186308 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186308/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-amd 20 guest-localmigrate/x10    fail pass in 186307
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 186307

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186307
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186307
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186307
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186307
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186307
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186307
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
baseline version:
 xen                  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3

Last test of basis   186308  2024-06-11 06:24:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 12:47:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 12:47:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738364.1145102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0u3-0008Rl-MH; Tue, 11 Jun 2024 12:47:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738364.1145102; Tue, 11 Jun 2024 12:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH0u3-0008Re-Jh; Tue, 11 Jun 2024 12:47:07 +0000
Received: by outflank-mailman (input) for mailman id 738364;
 Tue, 11 Jun 2024 12:47:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eM8s=NN=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sH0u1-0008RL-U4
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 12:47:05 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b41e27e9-27f0-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 14:47:04 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id
 4fb4d7f45d1cf-57c778b5742so3205412a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 05:47:04 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1cf00b49sm280511766b.164.2024.06.11.05.47.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 05:47:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b41e27e9-27f0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718110023; x=1718714823; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=9EwulIzL/UVcCt0Fpy5jfPOow5MxBueMIwAtgvXuH+Q=;
        b=E/FyjNDpXpNmk4NKRjAhFF0/CMc4vfq3wMeC5fjzgiXj/MjuJBOiysw5t0J3/ik1Pa
         sI17fRl5hV3GzJsjl/zma0aouNOLitr5lXz/GSEXk5L+U58bRS6WdCAxf0pnWlALv+R3
         dIzUwle72rLskqGagIbOavid8LXtLR2n7TGaQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718110023; x=1718714823;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=9EwulIzL/UVcCt0Fpy5jfPOow5MxBueMIwAtgvXuH+Q=;
        b=JaInqudtbqx4UHYi4LBIja7c7H2fm3iEwORcIW18PY1SKDse8yBhVm7hhzZ5t1jeTn
         SM/9SP33+jqOmIn+e7tB5OSVp7lxazur8JXtMGbPMoWyeHpSkxE3Pn8QLK7kXukDyv0O
         uj9JajC9CukRyxhPUtregwrBkR0Y1kQjcmObhqNeVZrc57IhmCrCTVlpo7vTc7W1wywq
         KtEpj7tMsPR4wOndZnwWSAjvfppR2NpzdcFkVajXUeC2DTyM0gSnkU/ToI3CSsUNv1Lu
         bshCn0YiESGnrlLgFa4ZG1w78IUtCCfwmATTx87bHNL/wgsBWQy2QNtIPG7me2UU/fdK
         rD0Q==
X-Gm-Message-State: AOJu0YxHQ1uK8/P64BxPSSc3XN4Rm/z+LLvwk9QGVJpPxkI1Ze1mu7Ii
	qQ4a5xoPrnww7K+U1Lu3eiMjUEWLecljghC65QnWAm/lgae9bFfnzVounVzCxgxAMjbifjGs8/j
	2ooU=
X-Google-Smtp-Source: AGHT+IGpeSWS2SB4iybVauE/ENvrfT8wtDnK+N2OiC9ynJczTVjE2ByPrHS27sb8T/NrNVlRccJb9w==
X-Received: by 2002:a17:906:2602:b0:a6f:ee4:ffa5 with SMTP id a640c23a62f3a-a6f0ee4ffe4mr449282166b.62.1718110022985;
        Tue, 11 Jun 2024 05:47:02 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Michal Orzel <michal.orzel@amd.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [PATCH for-4.19] CI: Update FreeBSD to 13.3
Date: Tue, 11 Jun 2024 13:47:01 +0100
Message-Id: <20240611124701.802752-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: Jan Beulich <JBeulich@suse.com>

Updated run:
  https://cirrus-ci.com/task/4903594304995328

For 4.19, and for backporting to all trees, including security trees.
FreeBSD-13.2 isn't available any more:
  https://cirrus-ci.com/task/4554831417835520

which causes build failures.
---
 .cirrus.yml | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/.cirrus.yml b/.cirrus.yml
index d0a9021a77e4..c431d8d2447d 100644
--- a/.cirrus.yml
+++ b/.cirrus.yml
@@ -17,7 +17,7 @@ freebsd_template: &FREEBSD_TEMPLATE
 task:
   name: 'FreeBSD 13'
   freebsd_instance:
-    image_family: freebsd-13-2
+    image_family: freebsd-13-3
   << : *FREEBSD_TEMPLATE
 
 task:

base-commit: ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 12:55:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 12:55:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738373.1145116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH129-0002tJ-Lx; Tue, 11 Jun 2024 12:55:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738373.1145116; Tue, 11 Jun 2024 12:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH129-0002tC-I2; Tue, 11 Jun 2024 12:55:29 +0000
Received: by outflank-mailman (input) for mailman id 738373;
 Tue, 11 Jun 2024 12:55:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sH128-0002t1-2W
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 12:55:28 +0000
Received: from mail-qk1-x736.google.com (mail-qk1-x736.google.com
 [2607:f8b0:4864:20::736])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id deea1aaa-27f1-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 14:55:26 +0200 (CEST)
Received: by mail-qk1-x736.google.com with SMTP id
 af79cd13be357-7952b60b4d7so64805485a.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 05:55:26 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-440582554d4sm30366351cf.90.2024.06.11.05.55.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 05:55:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: deea1aaa-27f1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718110525; x=1718715325; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=Pni60Czign4fM1pm0krKw9OQQFHWSNrVDNNnAJ4U5JY=;
        b=BAOwJ5EH+LJnC5sA17PAxN6XVFCqcpaSNYBXyPwHlKAITwyhsx8i3XZpBjMVRDSfhv
         FTWUi/fWjJG/SpELOFKPr9JINTSDEhz3vGfJ7+aHztjkQnBNBOvpgtilEtrYYSLkYM+0
         b9NS5/3YixdmiowMA+LswERZ+ufd415uKYN3E=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718110525; x=1718715325;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Pni60Czign4fM1pm0krKw9OQQFHWSNrVDNNnAJ4U5JY=;
        b=oCe7PpMcFfHr1PGKlWrzCzIK+FKlMnAqPRsLofArZOtthndQDpv7eH9P15sWmIGqrn
         3sX+UP5n1K6g+UoCfbw8oRIBR8qzqu0MnqIWlk2VgzhjCZP8t/fpu+RKtGJJm14+AUkF
         nqCc60lBD6n0fv1La2wW6/b3GFIvlE92hZtSOevT5WEr+VqVkxj3sHm4c7jEOO4YpXOc
         Ee8U+cszd5O75QKxV3jivsTTqHIeIJ4aGtufqHJ8GIkU2TtIV1fw+yOIBGrlX5yV/r7i
         R6MZ2dJkkiTpifyp9ebex8/VLNW/X+QP3OeOZLc679A7JQdowhTFEyr9j6q0T81foNIZ
         IW6Q==
X-Gm-Message-State: AOJu0YxUNew9h6zAm45c6tIEi5ANeza/in6zBRU9J09bRI5EpQC09qrQ
	xyuaal9960KH8HVoL5bku2TdkqwBHZK/brEhw6hE7BmpEt/9NBAJ0P8OLCJS0a8=
X-Google-Smtp-Source: AGHT+IF8/RsdU0w9jMk0yCUEAKAEabwOnjD2na2Jn+IilvALFTq8oL0+Yj35ZNFzMKYbWVaC6tSusQ==
X-Received: by 2002:a05:620a:2989:b0:795:59ba:4a3d with SMTP id af79cd13be357-79559ba51ffmr978461485a.78.1718110524589;
        Tue, 11 Jun 2024 05:55:24 -0700 (PDT)
Date: Tue, 11 Jun 2024 14:55:22 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Message-ID: <ZmhJOjggtJiNccPo@macbook>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
 <ZmgpsZJ4afLd1Fc3@macbook>
 <Zmg3O7zvd9KBC1Fv@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <Zmg3O7zvd9KBC1Fv@mail-itl>

On Tue, Jun 11, 2024 at 01:38:35PM +0200, Marek Marczykowski-Górecki wrote:
> On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monné wrote:
> > On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
> > > +    if ( !entry )
> > > +    {
> > > +        /* iter == NULL marks it was a newly allocated entry */
> > > +        iter = NULL;
> > > +        entry = xzalloc(struct subpage_ro_range);
> > > +        if ( !entry )
> > > +            return -ENOMEM;
> > > +        entry->mfn = mfn;
> > > +    }
> > > +
> > > +    for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > +    {
> > > +        bool oldbit = __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRAN,
> > > +                                        entry->ro_elems);
> > > +        ASSERT(!oldbit);
> > > +    }
> > > +
> > > +    if ( !iter )
> > > +        list_add(&entry->list, &subpage_ro_ranges);
> > > +
> > > +    return iter ? 1 : 0;
> > > +}
> > > +
> > > +/* This needs subpage_ro_lock already taken */
> > > +static void __init subpage_mmio_ro_remove_page(
> > > +    mfn_t mfn,
> > > +    unsigned int offset_s,
> > > +    unsigned int offset_e)
> > > +{
> > > +    struct subpage_ro_range *entry = NULL, *iter;
> > > +    unsigned int i;
> > > +
> > > +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> > > +    {
> > > +        if ( mfn_eq(iter->mfn, mfn) )
> > > +        {
> > > +            entry = iter;
> > > +            break;
> > > +        }
> > > +    }
> > > +    if ( !entry )
> > > +        return;
> > > +
> > > +    for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > +        __clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> > > +
> > > +    if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN) )
> > > +        return;
> > > +
> > > +    list_del(&entry->list);
> > > +    if ( entry->mapped )
> > > +        iounmap(entry->mapped);
> > > +    xfree(entry);
> > > +}
> > > +
> > > +int __init subpage_mmio_ro_add(
> > > +    paddr_t start,
> > > +    size_t size)
> > > +{
> > > +    mfn_t mfn_start = maddr_to_mfn(start);
> > > +    paddr_t end = start + size - 1;
> > > +    mfn_t mfn_end = maddr_to_mfn(end);
> > > +    unsigned int offset_end = 0;
> > > +    int rc;
> > > +    bool subpage_start, subpage_end;
> > > +
> > > +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > > +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > > +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > > +        size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> > > +
> > > +    if ( !size )
> > > +        return 0;
> > > +
> > > +    if ( mfn_eq(mfn_start, mfn_end) )
> > > +    {
> > > +        /* Both starting and ending parts handled at once */
> > > +        subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > > +        subpage_end = false;
> > 
> > Given the intended usage of this, don't we want to limit it to only a
> > single page?  So that PFN_DOWN(start + size) == PFN_DOWN(start), as
> > that would simplify the logic here?
> 
> I have considered that, but I haven't found anything in the spec
> mandating that the XHCI DbC registers not cross a page boundary. Currently
> (on a system I test this on) they don't cross a page boundary, but I don't
> want to assume extra constraints - to avoid issues like before (when
> on the older system I tested, the DbC registers didn't share a page with
> other registers, but then they shared the page on newer hardware).

Oh, from our conversation at XenSummit I got the impression the debug
registers were always at the same position.  Looking at patch 2/2, it
seems you only need to block access to a single register.  Are registers
in XHCI size-aligned?  That would guarantee a register doesn't cross a
page boundary (as long as the register is <= 4096 bytes in size).

> > > +            if ( !addr )
> > > +            {
> > > +                gprintk(XENLOG_ERR,
> > > +                        "Failed to map page for MMIO write at 0x%"PRI_mfn"%03x\n",
> > > +                        mfn_x(mfn), offset);
> > > +                return;
> > > +            }
> > > +
> > > +            switch ( len )
> > > +            {
> > > +            case 1:
> > > +                writeb(*(const uint8_t*)data, addr);
> > > +                break;
> > > +            case 2:
> > > +                writew(*(const uint16_t*)data, addr);
> > > +                break;
> > > +            case 4:
> > > +                writel(*(const uint32_t*)data, addr);
> > > +                break;
> > > +            case 8:
> > > +                writeq(*(const uint64_t*)data, addr);
> > > +                break;
> > > +            default:
> > > +                /* mmio_ro_emulated_write() already validated the size */
> > > +                ASSERT_UNREACHABLE();
> > > +                goto write_ignored;
> > > +            }
> > > +            return;
> > > +        }
> > > +    }
> > > +    /* Do not print message for pages without any writable parts. */
> > > +}
> > > +
> > > +#ifdef CONFIG_HVM
> > > +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
> > > +{
> > > +    unsigned int offset = PAGE_OFFSET(gla);
> > > +    const struct subpage_ro_range *entry;
> > > +
> > > +    list_for_each_entry(entry, &subpage_ro_ranges, list)
> > > +        if ( mfn_eq(entry->mfn, mfn) &&
> > > +             !test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems) )
> > > +        {
> > > +            /*
> > > +             * We don't know the write size at this point yet, so it could be
> > > +             * an unaligned write, but accept it here anyway and deal with it
> > > +             * later.
> > > +             */
> > > +            return true;
> > 
> > For accesses that fall into the RO region, I think you need to accept
> > them here and just terminate them?  I see no point in propagating
> > them further in hvm_hap_nested_page_fault().
> 
> If write hits an R/O region on a page with some writable regions the
> handling should be the same as it would be just on the mmio_ro_ranges.
> This is what the patch does.
> There may be an opportunity to simplify mmio_ro_ranges handling
> somewhere, but I don't think it belongs to this patch.

Maybe worth adding a comment that the logic here intends to deal only
with the RW bits of a page that's otherwise RO, and that by not
handling the RO regions the intention is that those are dealt with just
like fully RO pages.

I guess there's some message printed when attempting to write to a RO
page that you would also like to print here?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 12:59:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 12:59:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738380.1145126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH162-0004L3-4h; Tue, 11 Jun 2024 12:59:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738380.1145126; Tue, 11 Jun 2024 12:59:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH162-0004Kw-0t; Tue, 11 Jun 2024 12:59:30 +0000
Received: by outflank-mailman (input) for mailman id 738380;
 Tue, 11 Jun 2024 12:59:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sH160-0004Jz-SW
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 12:59:28 +0000
Received: from mail-qt1-x82c.google.com (mail-qt1-x82c.google.com
 [2607:f8b0:4864:20::82c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6eb0a908-27f2-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 14:59:27 +0200 (CEST)
Received: by mail-qt1-x82c.google.com with SMTP id
 d75a77b69052e-441187684e3so9783971cf.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 05:59:27 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-4405526e8c0sm32519421cf.60.2024.06.11.05.59.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 05:59:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6eb0a908-27f2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718110766; x=1718715566; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=NeDnhCiELrWayyc5wgOeORcf9fnb1qKshODJE+9tPoU=;
        b=VVm4duaYDr61tCDBXaQnpPRG1WPTnqpo3lUkCr5ZwIA+L//qO/H5XAtNde3I/dCD6G
         G4KQw5DaAyBBlMBsCI2lAaUJPQ41DbDjtmi29OxVLxXA68+1FGw9kXIqaVz5KZwcL7gR
         tfI+lZErQmRdiF9TkbHCr94+B3VuEL1B0S5NA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718110766; x=1718715566;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=NeDnhCiELrWayyc5wgOeORcf9fnb1qKshODJE+9tPoU=;
        b=ca4nwFgobX6iKIFqflgOJnL+1/iD8e9VZ/G4Suq89Zu5uqJjsEO47bVA2KFEQnhsCh
         TvpLc/u2PZNFAL4LDnhZYgHTqPJTUlR/0VOm/AkcioQXP58vDU69p+aPuiq9yHCQ3ClO
         s6elmIFTmRon6INh64+HZKZ3LVbems2MWt8na81dYvHqvzGL+zOjB2TkWTt7WhDDbq9d
         eyF26lEQRVp04hHQnazyPng6FSsyKD/oR1uriQAgf17+PyOg4PdGpVDPMnDApQngVNwv
         ukFxS2LgloTiDEEvceZMI7d796Fq1TGsqXSev7sF+QL6KWfxwoABhZTlJFYz9X/ETdU8
         eTEg==
X-Gm-Message-State: AOJu0YziICKATRUHTLlzMmzggjodl3zFeipYDoTmYg+UYjPowWs0VgHE
	uf46ZLeBFLK3CiGq8ZY0trTIwcsAUiCFau1VeoixTVkKrTivAmknvpzqjWpMCu8=
X-Google-Smtp-Source: AGHT+IHC1Syrj91mHBha1zW1XO2xFlbCM5t6doUp2botDMKLCRw9WrurjwazkXjhc6YXSNejy+XVzQ==
X-Received: by 2002:a05:622a:188b:b0:440:891c:4022 with SMTP id d75a77b69052e-440891c412emr47598671cf.7.1718110765863;
        Tue, 11 Jun 2024 05:59:25 -0700 (PDT)
Date: Tue, 11 Jun 2024 14:59:23 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH for-4.19] CI: Update FreeBSD to 13.3
Message-ID: <ZmhKK4PcLki8EVST@macbook>
References: <20240611124701.802752-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240611124701.802752-1-andrew.cooper3@citrix.com>

On Tue, Jun 11, 2024 at 01:47:01PM +0100, Andrew Cooper wrote:
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Although I'm unsure whether that's some kind of glitch or an error on
the FreeBSD side: 13.2 is not EOL until June 30, 2024 [0].

Thanks, Roger.

[0] https://www.freebsd.org/security/#sup


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:15:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 13:15:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738386.1145136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH1Lq-0007xB-Eu; Tue, 11 Jun 2024 13:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738386.1145136; Tue, 11 Jun 2024 13:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH1Lq-0007x4-BR; Tue, 11 Jun 2024 13:15:50 +0000
Received: by outflank-mailman (input) for mailman id 738386;
 Tue, 11 Jun 2024 13:15:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H2Xh=NN=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sH1Lp-0007wy-Nc
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 13:15:49 +0000
Received: from fhigh7-smtp.messagingengine.com
 (fhigh7-smtp.messagingengine.com [103.168.172.158])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b70ce9f8-27f4-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 15:15:47 +0200 (CEST)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 3845411400F8;
 Tue, 11 Jun 2024 09:15:46 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Tue, 11 Jun 2024 09:15:46 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 11 Jun 2024 09:15:45 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b70ce9f8-27f4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718111746;
	 x=1718198146; bh=jBLnpq0+MEHS7l7798QLDaAjEcS8410zQss5n3UlQUU=; b=
	fSrF2o3hwS6eKZBKlsNRp76U53Geq/O7wjFtm3tY0XHtC+xHrw3T0Kx94hyRm9up
	WgVPROcR/P8diYGrVTCviZ4rSbaUFs53OGVInUyt6oO2d/jmF/azydKnjk0tu3+Q
	MeHR+zHRF+aILpq7dM0lsuW+XsinWkz7gr69YTef4AIUnJvWcUL76Wxo4L/w1jY8
	0w9rzqYUEsKC6kcwwF/2BdT5pA59VghLr3Z32L3FCzllNHlLNCqH7EDF7yTwwa8E
	JGBjaVBJ6ZS+s2zeQtOui5XsZPFWUuoFUIQqaQS0gvzLxU5AY67uO9uk4yRh4knt
	giVR2NP/N5wPQ4C4yFEVRg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718111746; x=1718198146; bh=jBLnpq0+MEHS7l7798QLDaAjEcS8
	410zQss5n3UlQUU=; b=XEa2mPdyztPus9dKyAnf53zARzgxFOPQA0FcfswfEBew
	6QsSt5eeQ7l4ZqwCnbzqMq03jQaZX4aJIRp2BLAwm4MGOOAz12rA5wVtIPv03C6P
	y6PzCsCznAx8zMwlChsMtnlnIkMmrUpiex3QYXbuX2MmTJCm8ZC/kvnWA4L1iEM/
	c3FDmOxx3zYV2KeRjCH+Ji0/XP71ACkC45EzAWsqJyENvHaNbtBIjln/hkQwcB1K
	dUif3Ku1ZZdeehK/LP6gHc/Y1LVKjYtz4qbF9Hix7TuH9OBVxEHi4PsvtK6emTS8
	v+N2IwOHEmP0NM2uw27G8xsvzEJlB6CgMsMIMn2VuQ==
X-ME-Sender: <xms:AU5oZl0KDd3EYM7HIQBu3XtDgtN3uiVzHysxtRAjuONMdcCwlnrl8Q>
    <xme:AU5oZsETj1HfuUjOs4OatmyKBXSdG5xRwJgfgZSBwXwdRJdeXLdgXR1bZjwFsXryc
    OOupN66RsLpjg>
X-ME-Received: <xmr:AU5oZl4n99DrogVEDmbna6LC1YJazBUetYKJv95ly-cvRr8CrbHWFqVy3ql115XzR6vaVAN93zsw5CB3yHEFyK-kn7lBLjoTgw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeduvddgieduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgfdu
    leetfeevhfefheeiteeliefhjefhleduveetteekveettddvgeeuteefjedunecuvehluh
    hsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghk
    sehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:AU5oZi2jJRqhswznDv8wN1KJDybnyzMtibbb0nudCciZBbxLDn_LEw>
    <xmx:AU5oZoGsYKKDd_VXLukuha7hZHetNfRB643DEsMqpJbZUUhAPvGCjQ>
    <xmx:AU5oZj_192PyOwuzxo-GKfYpPVFAfH0jauMIb6L_UGUks3te6ktmLw>
    <xmx:AU5oZlmXm4FYlKqSDL0nUgKTB2oK0pOun6SGBxTjm-2vHvjM8ZE2_Q>
    <xmx:Ak5oZsAjDF0gjvcIHOKIDTxvVcrDxfDIKJGdNipP79Sn15DpR8V_tu4i>
Feedback-ID: i1568416f:Fastmail
Date: Tue, 11 Jun 2024 15:15:42 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Message-ID: <ZmhN_hNHp7WtyPyD@mail-itl>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
 <ZmgpsZJ4afLd1Fc3@macbook>
 <Zmg3O7zvd9KBC1Fv@mail-itl>
 <ZmhJOjggtJiNccPo@macbook>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="DmQuZvV0PUqnDOdo"
Content-Disposition: inline
In-Reply-To: <ZmhJOjggtJiNccPo@macbook>


--DmQuZvV0PUqnDOdo
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 11 Jun 2024 15:15:42 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only

On Tue, Jun 11, 2024 at 02:55:22PM +0200, Roger Pau Monn=C3=A9 wrote:
> On Tue, Jun 11, 2024 at 01:38:35PM +0200, Marek Marczykowski-G=C3=B3recki=
 wrote:
> > On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monn=C3=A9 wrote:
> > > On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-G=C3=B3r=
ecki wrote:
> > > > +    if ( !entry )
> > > > +    {
> > > > +        /* iter =3D=3D NULL marks it was a newly allocated entry */
> > > > +        iter =3D NULL;
> > > > +        entry =3D xzalloc(struct subpage_ro_range);
> > > > +        if ( !entry )
> > > > +            return -ENOMEM;
> > > > +        entry->mfn =3D mfn;
> > > > +    }
> > > > +
> > > > +    for ( i =3D offset_s; i <=3D offset_e; i +=3D MMIO_RO_SUBPAGE_=
GRAN )
> > > > +    {
> > > > +        bool oldbit =3D __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRA=
N,
> > > > +                                        entry->ro_elems);
> > > > +        ASSERT(!oldbit);
> > > > +    }
> > > > +
> > > > +    if ( !iter )
> > > > +        list_add(&entry->list, &subpage_ro_ranges);
> > > > +
> > > > +    return iter ? 1 : 0;
> > > > +}
> > > > +
> > > > +/* This needs subpage_ro_lock already taken */
> > > > +static void __init subpage_mmio_ro_remove_page(
> > > > +    mfn_t mfn,
> > > > +    unsigned int offset_s,
> > > > +    unsigned int offset_e)
> > > > +{
> > > > +    struct subpage_ro_range *entry =3D NULL, *iter;
> > > > +    unsigned int i;
> > > > +
> > > > +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> > > > +    {
> > > > +        if ( mfn_eq(iter->mfn, mfn) )
> > > > +        {
> > > > +            entry =3D iter;
> > > > +            break;
> > > > +        }
> > > > +    }
> > > > +    if ( !entry )
> > > > +        return;
> > > > +
> > > > +    for ( i =3D offset_s; i <=3D offset_e; i +=3D MMIO_RO_SUBPAGE_=
GRAN )
> > > > +        __clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> > > > +
> > > > +    if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAG=
E_GRAN) )
> > > > +        return;
> > > > +
> > > > +    list_del(&entry->list);
> > > > +    if ( entry->mapped )
> > > > +        iounmap(entry->mapped);
> > > > +    xfree(entry);
> > > > +}
> > > > +
> > > > +int __init subpage_mmio_ro_add(
> > > > +    paddr_t start,
> > > > +    size_t size)
> > > > +{
> > > > +    mfn_t mfn_start =3D maddr_to_mfn(start);
> > > > +    paddr_t end =3D start + size - 1;
> > > > +    mfn_t mfn_end =3D maddr_to_mfn(end);
> > > > +    unsigned int offset_end =3D 0;
> > > > +    int rc;
> > > > +    bool subpage_start, subpage_end;
> > > > +
> > > > +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > > > +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > > > +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > > > +        size =3D ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> > > > +
> > > > +    if ( !size )
> > > > +        return 0;
> > > > +
> > > > +    if ( mfn_eq(mfn_start, mfn_end) )
> > > > +    {
> > > > +        /* Both starting and ending parts handled at once */
> > > > +        subpage_start =3D PAGE_OFFSET(start) || PAGE_OFFSET(end) !=
=3D PAGE_SIZE - 1;
> > > > +        subpage_end =3D false;
> > >=20
> > > Given the intended usage of this, don't we want to limit to only a
> > > single page?  So that PFN_DOWN(start + size) =3D=3D PFN_DOWN/(start),=
 as
> > > that would simplify the logic here?
> >=20
> > I have considered that, but I haven't found anything in the spec
> > mandating the XHCI DbC registers to not cross page boundary. Currently
> > (on a system I test this on) they don't cross page boundary, but I don't
> > want to assume extra constrains - to avoid issues like before (when
> > on the older system I tested the DbC registers didn't shared page with
> > other registers, but then they shared the page on a newer hardware).
>=20
> Oh, from our conversation at XenSummit I got the impression debug registe=
rs
> where always at the same position.  Looking at patch 2/2, it seems you
> only need to block access to a single register.  Are registers in XHCI
> size aligned?  As this would guarantee it doesn't cross a page
> boundary (as long as the register is <=3D 4096 in size).

It's a couple of registers (one "extended capability"), see
`struct dbc_reg` in xhci-dbc.c. Its location is discovered at startup
(the device presents a linked list of capabilities in one of its BARs).
The spec talks only about the alignment of individual registers, not the
whole group...

> > > > +            if ( !addr )
> > > > +            {
> > > > +                gprintk(XENLOG_ERR,
> > > > +                        "Failed to map page for MMIO write at 0x%"=
PRI_mfn"%03x\n",
> > > > +                        mfn_x(mfn), offset);
> > > > +                return;
> > > > +            }
> > > > +
> > > > +            switch ( len )
> > > > +            {
> > > > +            case 1:
> > > > +                writeb(*(const uint8_t*)data, addr);
> > > > +                break;
> > > > +            case 2:
> > > > +                writew(*(const uint16_t*)data, addr);
> > > > +                break;
> > > > +            case 4:
> > > > +                writel(*(const uint32_t*)data, addr);
> > > > +                break;
> > > > +            case 8:
> > > > +                writeq(*(const uint64_t*)data, addr);
> > > > +                break;
> > > > +            default:
> > > > +                /* mmio_ro_emulated_write() already validated the =
size */
> > > > +                ASSERT_UNREACHABLE();
> > > > +                goto write_ignored;
> > > > +            }
> > > > +            return;
> > > > +        }
> > > > +    }
> > > > +    /* Do not print message for pages without any writable parts. =
*/
> > > > +}
> > > > +
> > > > +#ifdef CONFIG_HVM
> > > > +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
> > > > +{
> > > > +    unsigned int offset =3D PAGE_OFFSET(gla);
> > > > +    const struct subpage_ro_range *entry;
> > > > +
> > > > +    list_for_each_entry(entry, &subpage_ro_ranges, list)
> > > > +        if ( mfn_eq(entry->mfn, mfn) &&
> > > > +             !test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_el=
ems) )
> > > > +        {
> > > > +            /*
> > > > +             * We don't know the write size at this point yet, so =
it could be
> > > > +             * an unaligned write, but accept it here anyway and d=
eal with it
> > > > +             * later.
> > > > +             */
> > > > +            return true;
> > >=20
> > > For accesses that fall into the RO region, I think you need to accept
> > > them here and just terminate them?  I see no point in propagating
> > > them further in hvm_hap_nested_page_fault().
> >=20
> > If write hits an R/O region on a page with some writable regions the
> > handling should be the same as it would be just on the mmio_ro_ranges.
> > This is what the patch does.
> > There may be an opportunity to simplify mmio_ro_ranges handling
> > somewhere, but I don't think it belongs to this patch.
>=20
> Maybe worth adding a comment that the logic here intends to deal only
> with the RW bits of a page that's otherwise RO, and that by not
> handling the RO regions the intention is that those are dealt just
> like fully RO pages.

I can extend the comment, but I assumed it's kinda implied already (if
nothing else, by the function name).

> I guess there's some message printed when attempting to write to a RO
> page that you would also like to print here?

If an HVM domain writes to an R/O area, it is crashed, so you will get a
message. This applies to both fully R/O pages and partial R/O ones. PV
doesn't go through subpage_mmio_write_accept().

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:18:37 2024
Message-ID: <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com>
Date: Tue, 11 Jun 2024 15:18:32 +0200
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240610142043.11924-7-roger.pau@citrix.com>

On 10.06.2024 16:20, Roger Pau Monne wrote:
> Currently there's logic in fixup_irqs() that attempts to prevent
> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> interrupts from the CPUs not present in the input mask.  The current logic in
> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> 
> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> and no remaining online CPUs in ->arch.cpu_mask.
> 
> If _assign_irq_vector() is requested to move an interrupt in the state
> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> CPUs that could be used as fallback, and if that's the case do move the
> interrupt back to the previous destination.  Note this is easier because the
> vector hasn't been released yet, so there's no need to allocate and setup a new
> vector on the destination.
> 
> Due to the logic in fixup_irqs() that clears offline CPUs from
> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> shouldn't be possible to get into _assign_irq_vector() with
> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> ->arch.old_cpu_mask.
> 
> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> longer part of the affinity set,

I'm having trouble relating this (->arch.old_cpu_mask related) to ...

> move the interrupt to a different CPU part of
> the provided mask

... this (->arch.cpu_mask related).

> and keep the current ->arch.old_{cpu_mask,vector} for the
> pending interrupt movement to be completed.

Right, that's to clean up state from before the initial move. What isn't
clear to me is what's to happen with the state of the intermediate
placement. Description and code changes leave me with the impression that
it's okay to simply abandon it, without any cleanup, yet I can't quite
figure out why that would be an okay thing to do.

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>      }
>  
>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> -        return -EAGAIN;
> +    {
> +        /*
> +         * If the current destination is online refuse to shuffle.  Retry after
> +         * the in-progress movement has finished.
> +         */
> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> +            return -EAGAIN;
> +
> +        /*
> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> +         * ->arch.old_cpu_mask.
> +         */
> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> +
> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> +        {
> +            /*
> +             * Fallback to the old destination if moving is in progress and the
> +             * current destination is to be offlined.  This is only possible if
> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> +             * in the 'mask' parameter.
> +             */
> +            desc->arch.vector = desc->arch.old_vector;
> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> +
> +            /* Undo any possibly done cleanup. */
> +            for_each_cpu(cpu, desc->arch.cpu_mask)
> +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
> +
> +            /* Cancel the pending move. */
> +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> +            cpumask_clear(desc->arch.old_cpu_mask);
> +            desc->arch.move_in_progress = 0;
> +            desc->arch.move_cleanup_count = 0;
> +
> +            return 0;
> +        }

To what extent is this guaranteed to respect the (new) affinity that was
set, presumably having led to the movement in the first place?

> @@ -600,7 +646,17 @@ next:
>          current_vector = vector;
>          current_offset = offset;
>  
> -        if ( valid_irq_vector(old_vector) )
> +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> +        {
> +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
> +            /*
> +             * Special case when evacuating an interrupt from a CPU to be
> +             * offlined and the interrupt was already in the process of being
> +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
> +             * replace ->arch.{cpu_mask,vector} with the new destination.
> +             */

And where's the cleaning up of ->arch.old_* going to be taken care of then?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:25:15 2024
Message-ID: <a93b4aee-6574-4442-8f14-40e9df96f56e@suse.com>
Date: Tue, 11 Jun 2024 15:25:04 +0200
Subject: Re: [PATCH for-4.19 0/9] x86/irq: fixes for CPU hot{,un}plug
To: "Oleksii K." <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20240529090132.59434-1-roger.pau@citrix.com>
 <96cbb9df754f35d8df805df0138c942466a8f904.camel@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <96cbb9df754f35d8df805df0138c942466a8f904.camel@gmail.com>

On 29.05.2024 11:37, Oleksii K. wrote:
> On Wed, 2024-05-29 at 11:01 +0200, Roger Pau Monne wrote:
>> Hello,
>>
>> The following series aim to fix interrupt handling when doing CPU
>> plug/unplug operations.  Without this series running:
>>
>> cpus=`xl info max_cpu_id`
>> while [ 1 ]; do
>>     for i in `seq 1 $cpus`; do
>>         xen-hptool cpu-offline $i;
>>         xen-hptool cpu-online $i;
>>     done
>> done
>>
>> Quite quickly this results in interrupts getting lost and "No irq
>> handler for vector" messages on the Xen console.  Drivers in dom0
>> also start getting interrupt timeouts and the system becomes
>> unusable.
>>
>> After applying the series, running the loop overnight still results in
>> a fully usable system, no "No irq handler for vector" messages at all,
>> and no interrupt losses reported by dom0.  Tested with
>> x2apic-mode={mixed,cluster}.
>>
>> I'm tagging this for 4.19 as these are IMO bugfixes, but the series
>> has grown quite a bit bigger than expected, and hence we need to be
>> careful not to introduce breakages late in the release cycle.  I've
>> attempted to document all code as well as I could; interrupt handling
>> has some unexpected corner cases that are hard to diagnose and reason
>> about.
> Despite the fact that these can be considered bugfixes, it seems to
> me that this patch series can be risky. Let's wait for the
> maintainers' opinion...

Working my way through v2 of this series, I think I'd be okay with
including stuff there up to patch 5. Patch 6, which I just finished
taking a first look at, is likely correct (and it's just me missing some
aspects to fully grok the changes done there), but at the same time it
looks to be more intrusive than we would like at this point of the
release cycle. That said, I'd be pretty okay with being overridden in
this regard by Roger and/or Andrew.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:25:35 2024
From: Kelly Choi <kelly.choi@cloud.com>
Date: Tue, 11 Jun 2024 14:24:56 +0100
Message-ID: <CAO-mL=xMsBa=Gndq6sPkC3_JRnJiV5YeMZTqGxS2Nu_n61rMxg@mail.gmail.com>
Subject: [ANNOUNCE] Call for agenda items - Community Call 13th June 2024
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org


Hi all,

*Please add your proposed agenda items below.*

https://cryptpad.fr/pad/#/2/pad/edit/Uj1NJNpA9pgpXJJ54dyuqhDQ/

If any action items are missing or have been resolved, please add/remove
them from the sheet.

*CALL LINK: https://meet.jit.si/XenProjectCommunityCall*

*DATE: Thursday 13th June 2024*

*TIME: 1500 UTC (4 pm UK time)*
*Note the following administrative conventions for the call:*


** To allow time to switch between meetings, we plan on starting the
agenda at 15:05 UTC sharp.  Aim to join by 15:03 UTC if possible to
allocate time to sort out technical difficulties.*

** If you want to be CC'ed please add or remove yourself from the
sign-up-sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/*

== Dial-in Information ==

## Meeting time
16:00 - 17:00 British time
Further International meeting times:
https://www.timeanddate.com/worldclock/meetingdetails.html?year=2024&month=6&day=13=4&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179

## Dial in details
https://meet.jit.si/static/dialInInfo.html?room=XenProjectCommunityCall

Many thanks,
Kelly Choi

Community Manager
Xen Project



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:47:23 2024
         EJ5w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718113624; x=1718718424;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JHbMF6CeepYRy3Wn3DYzNdxmP6DQLhUOKuKCEK/M2JE=;
        b=Ey/5rvv+ZzBY2B+kmiY/2dShya0MpkW1n44Q6kiKXp/p+tNkBjrmGXdxwO5fcmqgv9
         QmLbJBxOqODXl6D7JTALbwca7kbMlwL2sDH3gRdb7hyLrvSr20OREo9Bb8XG1HXGhY09
         17cZBXXKV7irxg58cm7tL9qYENe5zIcFD00tOPju72/6fZz6mjJ7ND2/rpOuAfINDOkF
         laTzNyMYnkIsULEoaRE3vMQ18Ej2XVOUZBUAhvLwNoIjohOGwU/0LSn2Q2lTJJrBUifM
         Q6BzqzP1FC50VNF7eaCq4x0/pXcgMrCrQQB2MI3wnOt/vcvONU5CsshvKoFp8QxGMhN2
         NDlw==
X-Forwarded-Encrypted: i=1; AJvYcCV4WRJdJvt9cgW3tiPBywet3psclzTuUmAfpEsnqiBlWhRmUZqdlgZl3DbI8oseM1KiC9ARcZ5iAjd3Cdn5pl1/bx4nqWZd+5BSFNpmyGs=
X-Gm-Message-State: AOJu0Yy0QID036I644CNtVkYk8K16GsykTsCurGhcvZhDSAJ9kor/7Yb
	55mNkkQoYra55htMnf7D5WGo9Ebayab2LGgw9WexdEmaFHbLy29tSefQNCW24KlEpJx5bmsgNuA
	=
X-Google-Smtp-Source: AGHT+IHReDcHCS+52c26gyCdRw2S/o2RgLjjuqd1dJKhSZ3UgjnLw6Jjzgi3U4DedD7YQO6l1EyeqQ==
X-Received: by 2002:a50:d5da:0:b0:57c:61ca:eb8c with SMTP id 4fb4d7f45d1cf-57c61caec8fmr6117494a12.42.1718113624310;
        Tue, 11 Jun 2024 06:47:04 -0700 (PDT)
Message-ID: <9a7e5fab-7ea7-4196-bbc5-5c9e286cf576@suse.com>
Date: Tue, 11 Jun 2024 15:47:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 5/7] x86/irq: deal with old_cpu_mask for interrupts in
 movement in fixup_irqs()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-6-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240610142043.11924-6-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 10.06.2024 16:20, Roger Pau Monne wrote:
> @@ -2589,6 +2589,28 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>                                 affinity);
>          }
>  
> +        if ( desc->arch.move_in_progress &&
> +             !cpumask_test_cpu(cpu, &cpu_online_map) &&

Btw - any reason you're open-coding !cpu_online() here? I've noticed this
in the context of patch 7, where a little further down a !cpu_online() is
being added. Those likely all want to be consistent.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:50:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 13:50:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738443.1145202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH1tf-0002Pu-5u; Tue, 11 Jun 2024 13:50:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738443.1145202; Tue, 11 Jun 2024 13:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH1tf-0002Pn-2o; Tue, 11 Jun 2024 13:50:47 +0000
Received: by outflank-mailman (input) for mailman id 738443;
 Tue, 11 Jun 2024 13:50:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sH1td-0002PK-Ht
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 13:50:45 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 993ceb9d-27f9-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 15:50:44 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57c61165af6so4826751a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 06:50:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c61155622sm7034544a12.5.2024.06.11.06.50.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 06:50:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 993ceb9d-27f9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718113844; x=1718718644; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=pzlFqip9Fy1ra5YP2sJX1mQ40a+ZGo8+KGnVcl0gd+Y=;
        b=CYPzC2DxeSE4OVtoweyQhY6s4mMuqXNUsVuDaYwnKBr/5JQnIWmQ+5W0+61jhOAe8v
         fMd7aW+2pzzgOhRlcH9lF1rz3CdA0lagND+OqlRPkA7/n8F4zRXZGXY/3v8ghnNG6akY
         McsCBDDNOmDHSK2VEMXxrveeVcjuL/KoF6l9VXiJxd5MYDRVc7thFCdGCdPyAwj8SSoo
         d4vOKmbv5pfK3ciZwrUtNSgYAGhuVCsmx05eS82yzVnzpJM8ylkJkOjZkz7/PtyW7MLA
         jZsUk8RTlUcTUWFanxTC6Lz1NV2l+usVPzVrWXWL8FvkbbegzipiNxuZ8rpZ8xEc+sDH
         nBfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718113844; x=1718718644;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=pzlFqip9Fy1ra5YP2sJX1mQ40a+ZGo8+KGnVcl0gd+Y=;
        b=kOpBN1Egp7CgfbMv4OktI68z9OKnOehD0LbWyVaKfUQDPD9XJSPLiwnF31D064Huly
         IiuJ+0RlWhGAB0s+Ja5e5TKSL1HJMLMEfe9TXxshHCbyHWtk2NUpPOY1F+vxdy1fDQZk
         x7psHIInAaUR97qZLpFd+7nRAI0C3mRE4UCZw23MwC3DHLGVW9a3RVeub9pdUUTT8bA8
         +T8q/c+N/3UKRplIt8HMDdoBENR2b982jl4yGJBRweQjuU6Sjd7rxJwUBylxZiaykO15
         10Cn0zaWNhfp3S5Pkwt5nIERdG0KeQfrDbJFJRwetjpOo9+VIjWTOA/8C0jA3GDbzhXn
         ZjcA==
X-Forwarded-Encrypted: i=1; AJvYcCXN1iwInoKEToJgJlUzaDPbAVGPhuEAguFkZd+ntwlC83YMqFSdASYpuKxbzGYFW0FCV9rdKIug6NHV0VBWuQDLi2UTsR8DaD7bIMfTHhM=
X-Gm-Message-State: AOJu0YxrNSwJ+JHcNXXXngGBXIGjGGkZEIIpDf6pj0tyoGzez37wIm4e
	jxEYp6mbq9ZcTFpryqDCmOUjgT9lBdiiBy9g4vD2SCnVobdME0hJt/5/EXIlPQ==
X-Google-Smtp-Source: AGHT+IExe6cNH0vRheyeUdlgTbw2gG902SXKuwCRdzt2J/gSzHpN9VG6iysKKRO3kCe8ujMDk0zNAg==
X-Received: by 2002:a50:c31b:0:b0:57c:74ea:8d24 with SMTP id 4fb4d7f45d1cf-57c74ea8f6amr4344767a12.18.1718113843853;
        Tue, 11 Jun 2024 06:50:43 -0700 (PDT)
Message-ID: <7e090e00-2061-4ef1-a0a4-b45ac86c5ee6@suse.com>
Date: Tue, 11 Jun 2024 15:50:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 7/7] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-8-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240610142043.11924-8-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 16:20, Roger Pau Monne wrote:
> fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.  Given
> the CPU is to become offline, the normal migration logic used by Xen where the
> vector in the previous target(s) is left configured until the interrupt is
> received on the new destination is not suitable.
> 
> Instead attempt to do as much as possible in order to prevent losing
> interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
> currently the case) attempt to forward pending vectors when interrupts that
> target the current CPU are migrated to a different destination.
> 
> Additionally, for interrupts that have already been moved from the current CPU
> prior to the call to fixup_irqs() but that haven't been delivered to the new
> destination (iow: interrupts with move_in_progress set and the current CPU set
> in ->arch.old_cpu_mask) also check whether the previous vector is pending and
> forward it to the new destination.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
>  - Rename to apic_irr_read().
> ---
>  xen/arch/x86/include/asm/apic.h |  5 +++++
>  xen/arch/x86/irq.c              | 37 ++++++++++++++++++++++++++++++++-
>  2 files changed, 41 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
> index d1cb001fb4ab..7bd66dc6e151 100644
> --- a/xen/arch/x86/include/asm/apic.h
> +++ b/xen/arch/x86/include/asm/apic.h
> @@ -132,6 +132,11 @@ static inline bool apic_isr_read(uint8_t vector)
>              (vector & 0x1f)) & 1;
>  }
>  
> +static inline bool apic_irr_read(unsigned int vector)
> +{
> +    return apic_read(APIC_IRR + (vector / 32 * 0x10)) & (1U << (vector % 32));
> +}
> +
>  static inline u32 get_apic_id(void)
>  {
>      u32 id = apic_read(APIC_ID);
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> index 54eabd23995c..ed262fb55f4a 100644
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -2601,7 +2601,7 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>  
>      for ( irq = 0; irq < nr_irqs; irq++ )
>      {
> -        bool break_affinity = false, set_affinity = true;
> +        bool break_affinity = false, set_affinity = true, check_irr = false;
>          unsigned int vector, cpu = smp_processor_id();
>          cpumask_t *affinity = this_cpu(scratch_cpumask);
>  
> @@ -2649,6 +2649,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>               !cpumask_test_cpu(cpu, &cpu_online_map) &&
>               cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
>          {
> +            /*
> +             * This to-be-offlined CPU was the target of an interrupt that's
> +             * been moved, and the new destination target hasn't yet
> +             * acknowledged any interrupt from it.
> +             *
> +             * We know the interrupt is configured to target the new CPU at
> +             * this point, so we can check IRR for any pending vectors and
> +             * forward them to the new destination.
> +             *
> +             * Note the difference between move_in_progress or
> +             * move_cleanup_count being set.  For the latter we know the new
> +             * destination has already acked at least one interrupt from this
> +             * source, and hence there's no need to forward any stale
> +             * interrupts.
> +             */

I'm a little confused by this last paragraph: It talks about a difference,
yet ...

> +            if ( apic_irr_read(desc->arch.old_vector) )
> +                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> +                              desc->arch.vector);

... in the code being commented there's no difference visible. Hmm, I guess
this is related to the enclosing if(). Maybe this could be worded a little
differently, e.g. starting with "Note that for the other case -
move_cleanup_count being non-zero - we know ..."?

> @@ -2689,11 +2708,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>          if ( desc->handler->disable )
>              desc->handler->disable(desc);
>  
> +        /*
> +         * If the current CPU is going offline and is (one of) the target(s) of
> +         * the interrupt signal to check whether there are any pending vectors
> +         * to be handled in the local APIC after the interrupt has been moved.
> +         */

After reading this a number of times, I think there wants to be a comma between
"interrupt" and "signal". Or am I misreading what is meant?

> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
> +            check_irr = true;
> +
>          if ( desc->handler->set_affinity )
>              desc->handler->set_affinity(desc, affinity);
>          else if ( !(warned++) )
>              set_affinity = false;
>  
> +        if ( check_irr && apic_irr_read(vector) )
> +            /*
> +             * Forward pending interrupt to the new destination, this CPU is
> +             * going offline and otherwise the interrupt would be lost.
> +             */
> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> +                          desc->arch.vector);
> +
>          if ( desc->handler->enable )
>              desc->handler->enable(desc);
>  

Down from here, after the loop, there's a 1ms window where latched but not
yet delivered interrupts can be received. How's that playing together with
the changes you're making? Aren't we then liable to get two interrupts, one
at the old and one at the new source, in unknown order?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:53:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 13:53:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738450.1145211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH1vm-00032U-Kq; Tue, 11 Jun 2024 13:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738450.1145211; Tue, 11 Jun 2024 13:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH1vm-00032N-HR; Tue, 11 Jun 2024 13:52:58 +0000
Received: by outflank-mailman (input) for mailman id 738450;
 Tue, 11 Jun 2024 13:52:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sH1vk-00032D-UP
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 13:52:56 +0000
Received: from mail-qk1-x735.google.com (mail-qk1-x735.google.com
 [2607:f8b0:4864:20::735])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e724d29c-27f9-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 15:52:55 +0200 (CEST)
Received: by mail-qk1-x735.google.com with SMTP id
 af79cd13be357-7954dcf3158so230878085a.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 06:52:55 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79557127817sm298719685a.40.2024.06.11.06.52.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 06:52:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e724d29c-27f9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718113974; x=1718718774; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=R0TsreSRtoMQiKoTDWojasdOseqDouGJ98Nj9rOopks=;
        b=j/e0wxyHNb252YaedX1NkXYk9tCF0X6R4TMD7NRE8HfxSXfwwDLrFyYulfFPDfA9aW
         hkfUdpUFD3x0ye3b/lScsfBZiU0K3G6b0jUxm/SdDTizCDYiczq777682ukwmfx2Kzf9
         vR+CvjfdOx+4L358Du5oSbWLdBhRHrlf0bheI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718113974; x=1718718774;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=R0TsreSRtoMQiKoTDWojasdOseqDouGJ98Nj9rOopks=;
        b=k6v/FCiIyUoCPsOIYQ+VQpnrF0KZmDv7v8JlA1ONXr6i7YTNqx5iq2y2V/KFmzHzot
         I41SrdAhe72vu1FDEcNYG7BY+5LNODwCwMFVtCW56309abzL9jomLBlCk8j5Va/Cyp6D
         l6676VojdQ1AiHqJW96JwBmzg7bpCoz/r4J0ZBZHhhYMqqzMWR3Z1APapuGGkRPBeCAi
         KOBTWJ9pWJ/dTe+d6ZRuscxePmOSMee/2ch1uR7ISUf53HWG9NV8uXmLOHYq7UJ2Xvzm
         AQZ2ytED+Rcbf8SG+yU4U7OBzmRLxUgxi0GoQYGMHeRGNW4DY43JIIafx45CT2nRoi+a
         aNmw==
X-Gm-Message-State: AOJu0Ywjicb6Rp2mOy16pMPYb4hMG2HEIhvpFv8LNXcTzYLXZBCqhIu7
	Wg/Ai9g9JGxYIENyiTP2vC5nDdRr7RSrIF7BMzjce0DKzjobeIyoemHzuvz6c+0=
X-Google-Smtp-Source: AGHT+IFXFgwbWlA2W7vJSrVyR6otpPEQTBl3eYEZSLdGMgzrlB8xlI7zayO4ZAt9wxfSI/yKwS6MZQ==
X-Received: by 2002:a05:620a:298a:b0:795:fb0e:1eb1 with SMTP id af79cd13be357-795fb0e22e1mr674679185a.63.1718113974325;
        Tue, 11 Jun 2024 06:52:54 -0700 (PDT)
Date: Tue, 11 Jun 2024 15:52:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
Message-ID: <ZmhWtEyuwjTuIAxK@macbook>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook>
 <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook>
 <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
 <ZmgwKmcLDJDhIsl7@macbook>
 <b076dc8d-701e-4a9f-a147-c54673959009@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b076dc8d-701e-4a9f-a147-c54673959009@suse.com>

On Tue, Jun 11, 2024 at 01:52:58PM +0200, Jan Beulich wrote:
> On 11.06.2024 13:08, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 11:33:24AM +0200, Jan Beulich wrote:
> >> On 11.06.2024 11:02, Roger Pau Monné wrote:
> >>> On Tue, Jun 11, 2024 at 10:26:32AM +0200, Jan Beulich wrote:
> >>>> On 11.06.2024 09:41, Roger Pau Monné wrote:
> >>>>> On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
> >>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
> >>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
> >>>>>> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
> >>>>>>  
> >>>>>>      if ( !mfn_valid(mfn) )
> >>>>>>      {
> >>>>>> -        *ipat = true;
> >>>>>> +        *ipat = type != p2m_mmio_direct ||
> >>>>>> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
> >>>>>
> >>>>> Looking at this, shouldn't the !mfn_valid special case be removed, and
> >>>>> mfns without a valid page be processed normally, so that the guest
> >>>>> MTRR values are taken into account, and no iPAT is enforced?
> >>>>
> >>>> Such removal is what, in the post commit message remark, I'm referring to
> >>>> as "moving to too lax". Doing so might be okay, but will imo be hard to
> >>>> prove to be correct for all possible cases. Along these lines goes also
> >>>> that I'm adding the IOMMU-enabled and cache-flush checks: In principle
> >>>> p2m_mmio_direct should not be used when neither of these return true. Yet
> >>>> a similar consideration would apply to the immediately subsequent if().
> >>>>
> >>>> Removing this code would, in particular, result in INVALID_MFN getting a
> >>>> type of WB by way of the subsequent if(), unless the type there would
> >>>> also be p2m_mmio_direct (which, as said, it ought to never be for non-
> >>>> pass-through domains). That again _may_ not be a problem as long as such
> >>>> EPT entries would never be marked present, yet that's again difficult to
> >>>> prove.
> >>>
> >>> My understanding is that the !mfn_valid() check was a way to detect
> >>> MMIO regions in order to exit early and set those to UC.  I however
> >>> don't follow why the guest MTRR settings shouldn't also be applied to
> >>> those regions.
> >>
> >> It's unclear to me whether the original purpose of the check really was
> >> (just) MMIO. It could as well also have been to cover the (then not yet
> >> named that way) case of INVALID_MFN.
> >>
> >> As to ignoring guest MTRRs for MMIO: I think that's to be on the safe
> >> side. We don't want guests to map uncachable memory with a cachable
> >> memory type. Yet control isn't fine grained enough to prevent just
> >> that. Hence why we force UC, allowing merely to move to WC via PAT.
> > 
> > Would that be to cover up for guests bugs, or there's a coherency
> > reason for not allowing guests to access memory using fully guest
> > chosen cache attributes?
> 
> I think the main reason is that this way we don't need to bother thinking
> of whether MMIO regions may need caches flushed in order for us to be
> sure memory is all up-to-date. But I have no insight into what the
> original reasons here may have been.
> 
> > I really wonder whether Xen has enough information to figure out
> > whether a hole (MMIO region) is supposed to be accessed as UC or
> > something else.
> 
> It certainly hasn't, and hence is erring on the (safe) side of forcing
> UC.

Except that for the VESA framebuffer at least this is a bad choice :).

> > Your proposed patch already allows guest to set such attributes in
> > PAT, and hence I don't see why also taking guest MTRRs into account
> > would be any worse.
> 
> Whatever the guest sets in PAT, UC in EMT will win except for the
> special case of WC.
> 
> >>>>> I also think this likely wants a:
> >>>>>
> >>>>> Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
> >>>>
> >>>> Oh, indeed, I should have dug out when this broke. I didn't because I
> >>>> knew this mfn_valid() check was there forever, neglecting that it wasn't
> >>>> always (almost) first.
> >>>>
> >>>>> As AFAICT before that commit direct MMIO regions would set iPAT to WB,
> >>>>> which would result in the correct attributes (albeit guest MTRR was
> >>>>> still ignored).
> >>>>
> >>>> Two corrections here: First iPAT is a boolean; it can't be set to WB.
> >>>> And then what was happening prior to that change was that for the APIC
> >>>> access page iPAT was set to true, thus forcing WB there. iPAT was left
> >>>> set to false for all other p2m_mmio_direct pages, yielding (PAT-
> >>>> overridable) UC there.
> >>>
> >>> Right, that behavior was still dubious to me, as I would assume those
> >>> regions would also want to fetch the type from guest MTRRs.
> >>
> >> Well, for the APIC access page we want to prevent it becoming UC. It's MMIO
> >> from the guest's perspective, yet _we_ know it's really ordinary RAM. For
> >> actual MMIO see above; the only case where we probably ought to respect
> >> guest MTRRs is when they say WC (following from what I said further up).
> >> Yet that's again an independent change to (possibly) make.
> > 
> > For emulated devices we might map regular RAM into what the guest
> > otherwise thinks it's MMIO.
> 
> Right, and for non-pass-through domains we force everything to WB already.
> 
> >  Maybe the mfn_valid() check should be
> > inverted, and return WB when the underlying mfn is RAM, and otherwise
> > use the guest MTRRs to decide the cache attribute?
> 
> First: Whether WB is correct for RAM isn't known. With some peculiar device
> assigned, the guest may want/need part of its RAM be e.g. WC or WT. (It's
> only without any physical devices assigned that we can be quite sure that
> WB is good for all of RAM.) Therefore, second, I think respecting MTRRs for
> RAM is less likely to cause problems than respecting them for MMIO.
> 
> I think at this point the main question is: Do we want to do things at least
> along the lines of this v1, or do we instead feel certain enough to switch
> the mfn_valid() to a comparison against INVALID_MFN (and perhaps moving it
> up to almost the top of the function)?

My preferred option would be the latter, as that would remove a special
case.  However, I'm unsure how much fallout this could cause - caching
changes are always tricky and tend to lead to unexpected breakage.

OTOH the current !mfn_valid() check is very restrictive, as it forces
all MMIO to UC.  So by removing it we allow guest-chosen types to take
effect, which are likely less restrictive than UC (whether those are
correct is another question).

> One caveat here that I forgot to
> mention before: MFNs taken out of EPT entries will never be INVALID_MFN, for
> the truncation that happens when populating entries. In that case we rely on
> mfn_valid() to be "rejecting" them.

The only caller where mfns from EPT entries are passed to
epte_get_entry_emt() is in resolve_misconfig() AFAICT, and in that
case the EPT entry must be present for epte_get_entry_emt() to be
called.  So it seems to me that epte_get_entry_emt() can never be
called with an mfn constructed from an INVALID_MFN EPT entry (but it's
worth adding an assert for it).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 13:55:57 2024
Message-ID: <42e4d6de-4428-40a9-90d8-1329fbee5a1f@citrix.com>
Date: Tue, 11 Jun 2024 14:55:50 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook> <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook> <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11/06/2024 10:33 am, Jan Beulich wrote:
> On 11.06.2024 11:02, Roger Pau Monné wrote:
>> On Tue, Jun 11, 2024 at 10:26:32AM +0200, Jan Beulich wrote:
>>> On 11.06.2024 09:41, Roger Pau Monné wrote:
>>>> On Mon, Jun 10, 2024 at 04:58:52PM +0200, Jan Beulich wrote:
>>>>> --- a/xen/arch/x86/mm/p2m-ept.c
>>>>> +++ b/xen/arch/x86/mm/p2m-ept.c
>>>>> @@ -503,7 +503,8 @@ int epte_get_entry_emt(struct domain *d,
>>>>>  
>>>>>      if ( !mfn_valid(mfn) )
>>>>>      {
>>>>> -        *ipat = true;
>>>>> +        *ipat = type != p2m_mmio_direct ||
>>>>> +                (!is_iommu_enabled(d) && !cache_flush_permitted(d));
>>>> Looking at this, shouldn't the !mfn_valid special case be removed, and
>>>> mfns without a valid page be processed normally, so that the guest
>>>> MTRR values are taken into account, and no iPAT is enforced?
>>> Such removal is what, in the post commit message remark, I'm referring to
>>> as "moving to too lax". Doing so might be okay, but will imo be hard to
>>> prove to be correct for all possible cases. Along these lines goes also
>>> that I'm adding the IOMMU-enabled and cache-flush checks: In principle
>>> p2m_mmio_direct should not be used when neither of these return true. Yet
>>> a similar consideration would apply to the immediately subsequent if().
>>>
>>> Removing this code would, in particular, result in INVALID_MFN getting a
>>> type of WB by way of the subsequent if(), unless the type there would
>>> also be p2m_mmio_direct (which, as said, it ought to never be for non-
>>> pass-through domains). That again _may_ not be a problem as long as such
>>> EPT entries would never be marked present, yet that's again difficult to
>>> prove.
>> My understanding is that the !mfn_valid() check was a way to detect
>> MMIO regions in order to exit early and set those to UC.  I however
>> don't follow why the guest MTRR settings shouldn't also be applied to
>> those regions.
> It's unclear to me whether the original purpose of the check really was
> (just) MMIO. It could as well also have been to cover the (then not yet
> named that way) case of INVALID_MFN.
>
> As to ignoring guest MTRRs for MMIO: I think that's to be on the safe
> side. We don't want guests to map uncachable memory with a cachable
> memory type. Yet control isn't fine grained enough to prevent just
> that. Hence why we force UC, allowing merely to move to WC via PAT.
>
>> I'm also confused by your comment about "as such EPT entries would
>> never be marked present": non-present EPT entries don't even get into
>> epte_get_entry_emt(), and hence we could assert in epte_get_entry_emt
>> that mfn != INVALID_MFN?
> I don't think we can. Especially for the call from ept_set_entry() I
> can't spot anything that would prevent the call for non-present entries.
> This may be a mistake, but I can't do anything about it right here.
>
>>> I was in fact wondering whether to special-case INVALID_MFN in the change
>>> I'm making. Question there is: Are we sure that by now we've indeed got
>>> rid of all arithmetic mistakenly done on MFN variables happening to hold
>>> INVALID_MFN as the value? IOW I fear that there might be code left which
>>> would pass in INVALID_MFN masked down to a 2M or 1G boundary. At which
>>> point checking for just INVALID_MFN would end up insufficient. If we
>>> meant to rely on this (tagging possible leftover issues as bugs we don't
>>> mean to attempt to cover for here anymore), then indeed the mfn_valid()
>>> check could be replaced by a comparison with INVALID_MFN (following a
>>> pattern we've been slowly trying to carry through elsewhere, especially
>>> in shadow code). Yet it could still not be outright dropped imo.
>>>
>>> Furthermore simply dropping (or replacing as per above) that check won't
>>> work either: Further down in the function we use mfn_to_page(), which
>>> requires an up-front mfn_valid() check. That said, this code looks
>>> partly broken to me anyway: For a 1G page mfn_valid() on the start of it
>>> doesn't really imply all parts of it are valid. I guess I need to make a
>>> 2nd patch to address that as well, which may then want to be a prereq
>>> change to the one here (if we decided to go the route you're asking for).
>> I see, yes, the loop over the special pages array will need to be
>> adjusted to account for mfn_to_page() possibly returning NULL.
> Except that NULL will hardly ever come back there. What we need is an
> explicit mfn_valid() check. I already have a patch, but I'd like to
> submit it only once I know how the v2 of the one here is going to
> look.
>
>> Overall I don't understand the need for this special case for
>> !mfn_valid().  The rest of special cases we have (the special pages
>> and domains without devices or MMIO regions assigned) are performance
>> optimizations which I do understand.  Yet the special casing of
>> !mfn_valid regions bypassing guest MTRR settings seems bogus to me.
> As said, it may well be that we can (now) switch to comparison against
> INVALID_MFN there, if we're certain MMIO isn't to be covered by this
> (anymore).
>
>>>> I also think this likely wants a:
>>>>
>>>> Fixes: 81fd0d3ca4b2 ('x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()')
>>> Oh, indeed, I should have dug out when this broke. I didn't because I
>>> knew this mfn_valid() check was there forever, neglecting that it wasn't
>>> always (almost) first.
>>>
>>>> As AFAICT before that commit direct MMIO regions would set iPAT to WB,
>>>> which would result in the correct attributes (albeit guest MTRR was
>>>> still ignored).
>>> Two corrections here: First iPAT is a boolean; it can't be set to WB.
>>> And then what was happening prior to that change was that for the APIC
>>> access page iPAT was set to true, thus forcing WB there. iPAT was left
>>> set to false for all other p2m_mmio_direct pages, yielding (PAT-
>>> overridable) UC there.
>> Right, that behavior was still dubious to me, as I would assume those
>> regions would also want to fetch the type from guest MTRRs.
> Well, for the APIC access page we want to prevent it becoming UC. It's MMIO
> from the guest's perspective, yet _we_ know it's really ordinary RAM.

It's really not "ordinary" RAM.

For both Intel and AMD, APIC acceleration is triggered based on a memory
operand match in host physical address space, but accesses are
redirected to the (per vCPU) APIC register page.

Intel state that the EPT translation must be a 4k translation, and AMD
state that the NPT perms must be RW.

I can't actually find any statement about cacheability.  I expect this
is because it's never actually accessed.  (Intel go as far as saying
that even if you CLFLUSH against it, because of the redirect, you'll end
up flushing the respective line in the APIC Regs page.)

Irrespective, it appears that the cacheability doesn't matter, but I
would recommend against using it as a representative example for the
discussion here.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 14:07:15 2024
Date: Tue, 11 Jun 2024 16:07:03 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Message-ID: <ZmhaB57Tc6BsknVO@macbook>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
 <ZmgpsZJ4afLd1Fc3@macbook>
 <Zmg3O7zvd9KBC1Fv@mail-itl>
 <ZmhJOjggtJiNccPo@macbook>
 <ZmhN_hNHp7WtyPyD@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZmhN_hNHp7WtyPyD@mail-itl>

On Tue, Jun 11, 2024 at 03:15:42PM +0200, Marek Marczykowski-Górecki wrote:
> On Tue, Jun 11, 2024 at 02:55:22PM +0200, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 01:38:35PM +0200, Marek Marczykowski-Górecki wrote:
> > > On Tue, Jun 11, 2024 at 12:40:49PM +0200, Roger Pau Monné wrote:
> > > > On Wed, May 22, 2024 at 05:39:03PM +0200, Marek Marczykowski-Górecki wrote:
> > > > > +    if ( !entry )
> > > > > +    {
> > > > > +        /* iter == NULL marks it was a newly allocated entry */
> > > > > +        iter = NULL;
> > > > > +        entry = xzalloc(struct subpage_ro_range);
> > > > > +        if ( !entry )
> > > > > +            return -ENOMEM;
> > > > > +        entry->mfn = mfn;
> > > > > +    }
> > > > > +
> > > > > +    for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > > > +    {
> > > > > +        bool oldbit = __test_and_set_bit(i / MMIO_RO_SUBPAGE_GRAN,
> > > > > +                                        entry->ro_elems);
> > > > > +        ASSERT(!oldbit);
> > > > > +    }
> > > > > +
> > > > > +    if ( !iter )
> > > > > +        list_add(&entry->list, &subpage_ro_ranges);
> > > > > +
> > > > > +    return iter ? 1 : 0;
> > > > > +}
> > > > > +
> > > > > +/* This needs subpage_ro_lock already taken */
> > > > > +static void __init subpage_mmio_ro_remove_page(
> > > > > +    mfn_t mfn,
> > > > > +    unsigned int offset_s,
> > > > > +    unsigned int offset_e)
> > > > > +{
> > > > > +    struct subpage_ro_range *entry = NULL, *iter;
> > > > > +    unsigned int i;
> > > > > +
> > > > > +    list_for_each_entry(iter, &subpage_ro_ranges, list)
> > > > > +    {
> > > > > +        if ( mfn_eq(iter->mfn, mfn) )
> > > > > +        {
> > > > > +            entry = iter;
> > > > > +            break;
> > > > > +        }
> > > > > +    }
> > > > > +    if ( !entry )
> > > > > +        return;
> > > > > +
> > > > > +    for ( i = offset_s; i <= offset_e; i += MMIO_RO_SUBPAGE_GRAN )
> > > > > +        __clear_bit(i / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems);
> > > > > +
> > > > > +    if ( !bitmap_empty(entry->ro_elems, PAGE_SIZE / MMIO_RO_SUBPAGE_GRAN) )
> > > > > +        return;
> > > > > +
> > > > > +    list_del(&entry->list);
> > > > > +    if ( entry->mapped )
> > > > > +        iounmap(entry->mapped);
> > > > > +    xfree(entry);
> > > > > +}
> > > > > +
> > > > > +int __init subpage_mmio_ro_add(
> > > > > +    paddr_t start,
> > > > > +    size_t size)
> > > > > +{
> > > > > +    mfn_t mfn_start = maddr_to_mfn(start);
> > > > > +    paddr_t end = start + size - 1;
> > > > > +    mfn_t mfn_end = maddr_to_mfn(end);
> > > > > +    unsigned int offset_end = 0;
> > > > > +    int rc;
> > > > > +    bool subpage_start, subpage_end;
> > > > > +
> > > > > +    ASSERT(IS_ALIGNED(start, MMIO_RO_SUBPAGE_GRAN));
> > > > > +    ASSERT(IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN));
> > > > > +    if ( !IS_ALIGNED(size, MMIO_RO_SUBPAGE_GRAN) )
> > > > > +        size = ROUNDUP(size, MMIO_RO_SUBPAGE_GRAN);
> > > > > +
> > > > > +    if ( !size )
> > > > > +        return 0;
> > > > > +
> > > > > +    if ( mfn_eq(mfn_start, mfn_end) )
> > > > > +    {
> > > > > +        /* Both starting and ending parts handled at once */
> > > > > +        subpage_start = PAGE_OFFSET(start) || PAGE_OFFSET(end) != PAGE_SIZE - 1;
> > > > > +        subpage_end = false;
> > > > 
> > > > Given the intended usage of this, don't we want to limit to only a
> > > > single page?  So that PFN_DOWN(start + size) == PFN_DOWN/(start), as
> > > > that would simplify the logic here?
> > > 
> > > I have considered that, but I haven't found anything in the spec
> > > mandating the XHCI DbC registers to not cross page boundary. Currently
> > > (on a system I test this on) they don't cross page boundary, but I don't
> > > want to assume extra constrains - to avoid issues like before (when
> > > on the older system I tested the DbC registers didn't shared page with
> > > other registers, but then they shared the page on a newer hardware).
> > 
> > Oh, from our conversation at XenSummit I got the impression debug registers
> > were always at the same position.  Looking at patch 2/2, it seems you
> > only need to block access to a single register.  Are registers in XHCI
> > size aligned?  As this would guarantee it doesn't cross a page
> > boundary (as long as the register is <= 4096 in size).
> 
> It's a couple of registers (one "extended capability"), see
> `struct dbc_reg` in xhci-dbc.c.

That looks to be an awful lot of individual registers...

> Its location is discovered at startup
> (device presents a linked-list of capabilities in one of its BARs).
> The spec talks only about alignment of individual registers, not the
> whole group...

Never mind then, I had the expectation we could get away with a single
page, but that doesn't look to be the case.

I assume the spec doesn't mention anything about the BAR where the
capabilities reside having a size <= 4KiB.

> > > > > +            if ( !addr )
> > > > > +            {
> > > > > +                gprintk(XENLOG_ERR,
> > > > > +                        "Failed to map page for MMIO write at 0x%"PRI_mfn"%03x\n",
> > > > > +                        mfn_x(mfn), offset);
> > > > > +                return;
> > > > > +            }
> > > > > +
> > > > > +            switch ( len )
> > > > > +            {
> > > > > +            case 1:
> > > > > +                writeb(*(const uint8_t*)data, addr);
> > > > > +                break;
> > > > > +            case 2:
> > > > > +                writew(*(const uint16_t*)data, addr);
> > > > > +                break;
> > > > > +            case 4:
> > > > > +                writel(*(const uint32_t*)data, addr);
> > > > > +                break;
> > > > > +            case 8:
> > > > > +                writeq(*(const uint64_t*)data, addr);
> > > > > +                break;
> > > > > +            default:
> > > > > +                /* mmio_ro_emulated_write() already validated the size */
> > > > > +                ASSERT_UNREACHABLE();
> > > > > +                goto write_ignored;
> > > > > +            }
> > > > > +            return;
> > > > > +        }
> > > > > +    }
> > > > > +    /* Do not print message for pages without any writable parts. */
> > > > > +}
> > > > > +
> > > > > +#ifdef CONFIG_HVM
> > > > > +bool subpage_mmio_write_accept(mfn_t mfn, unsigned long gla)
> > > > > +{
> > > > > +    unsigned int offset = PAGE_OFFSET(gla);
> > > > > +    const struct subpage_ro_range *entry;
> > > > > +
> > > > > +    list_for_each_entry(entry, &subpage_ro_ranges, list)
> > > > > +        if ( mfn_eq(entry->mfn, mfn) &&
> > > > > +             !test_bit(offset / MMIO_RO_SUBPAGE_GRAN, entry->ro_elems) )
> > > > > +        {
> > > > > +            /*
> > > > > +             * We don't know the write size at this point yet, so it could be
> > > > > +             * an unaligned write, but accept it here anyway and deal with it
> > > > > +             * later.
> > > > > +             */
> > > > > +            return true;
> > > > 
> > > > For accesses that fall into the RO region, I think you need to accept
> > > > them here and just terminate them?  I see no point in propagating
> > > > them further in hvm_hap_nested_page_fault().
> > > 
> > > If write hits an R/O region on a page with some writable regions the
> > > handling should be the same as it would be just on the mmio_ro_ranges.
> > > This is what the patch does.
> > > There may be an opportunity to simplify mmio_ro_ranges handling
> > > somewhere, but I don't think it belongs to this patch.
> > 
> > Maybe worth adding a comment that the logic here intends to deal only
> > with the RW bits of a page that's otherwise RO, and that by not
> > handling the RO regions the intention is that those are dealt just
> > like fully RO pages.
> 
> I can extend the comment, but I assumed it's kinda implied already (if
> nothing else, by the function name).

Well, at this point we know the write is not going to make it to host
memory.  The only reason to not handle the access here is that we want
to unify the consequences it has for a guest writing to a RO address.

> > I guess there's some message printed when attempting to write to a RO
> > page that you would also like to print here?
> 
> If a HVM domain writes to an R/O area, it is crashed, so you will get a
> message. This applies to both full page R/O and partial R/O. PV doesn't
> go through subpage_mmio_write_accept().

Oh, crashing the domain is more strict than I was expecting.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 14:40:07 2024
Message-ID: <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
Date: Tue, 11 Jun 2024 16:39:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: Jiqian Chen <Jiqian.Chen@amd.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240607081127.126593-6-Jiqian.Chen@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 07.06.2024 10:11, Jiqian Chen wrote:
> Some types of domain, like PVH, don't have PIRQs and do not issue
> PHYSDEVOP_map_pirq for each GSI. When passing a device through
> to a guest on PVH dom0, the call path
> pci_add_dm_done->XEN_DOMCTL_irq_permission will fail at
> domain_pirq_to_irq, because PVH has no mapping between GSI, pIRQ,
> and IRQ on the Xen side.

All of this is, to me at least, in pretty sharp contradiction to what
patch 2 says and does. IOW: Do we want the concept of pIRQ in PVH, or
do we want to keep that to PV?

> What's more, the current hypercall XEN_DOMCTL_irq_permission requires
> passing in a pIRQ and grants access to the corresponding IRQ; it is
> not suitable for a dom0 that has no PIRQ flag, because passing a
> device through requires a GSI and granting the corresponding IRQ to
> the guest. So, add a new hypercall to grant GSI permission when dom0
> is not PV, or when dom0 does not have the PIRQ flag.
> 
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>

A problem throughout the series, it seems: Who's the author of these
patches? There's no From: saying it's not you, but your S-o-b also
isn't first.

> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1412,6 +1412,37 @@ static bool pci_supp_legacy_irq(void)
>  #define PCI_SBDF(seg, bus, devfn) \
>              ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>  
> +static int pci_device_set_gsi(libxl_ctx *ctx,
> +                              libxl_domid domid,
> +                              libxl_device_pci *pci,
> +                              bool map,
> +                              int *gsi_back)
> +{
> +    int r, gsi, pirq;
> +    uint32_t sbdf;
> +
> +    sbdf = PCI_SBDF(pci->domain, pci->bus, (PCI_DEVFN(pci->dev, pci->func)));
> +    r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
> +    *gsi_back = r;
> +    if (r < 0)
> +        return r;
> +
> +    gsi = r;
> +    pirq = r;

r is a GSI as per above; why would you store such a value in a variable
named pirq? And how can ...

> +    if (map)
> +        r = xc_physdev_map_pirq(ctx->xch, domid, gsi, &pirq);
> +    else
> +        r = xc_physdev_unmap_pirq(ctx->xch, domid, pirq);

... that value be the correct one to pass into here? In fact, the pIRQ number
you obtain above in the "map" case isn't handed to the caller, i.e. it is
effectively lost. Yet that's what would need passing into such an unmap call.

> +    if (r)
> +        return r;
> +
> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);

Looking at the hypervisor side, this will fail for PV Dom0, in which
case imo it would be better to avoid making the call in the first place.

> +    if (r && errno == EOPNOTSUPP)

Up to this point you don't really need the pIRQ number; if all it is
really needed for is ...

> +        r = xc_domain_irq_permission(ctx->xch, domid, pirq, map);

... this, then it should probably also be obtained only when it's
needed. Yet overall the intentions here aren't quite clear to me.

> @@ -1485,6 +1516,19 @@ static void pci_add_dm_done(libxl__egc *egc,
>      fclose(f);
>      if (!pci_supp_legacy_irq())
>          goto out_no_irq;
> +
> +    r = pci_device_set_gsi(ctx, domid, pci, 1, &gsi);
> +    if (gsi >= 0) {
> +        if (r < 0) {

This unusual way of error checking likely wants a comment.

> +            rc = ERROR_FAIL;
> +            LOGED(ERROR, domainid,
> +                  "pci_device_set_gsi gsi=%d (error=%d)", gsi, errno);
> +            goto out;
> +        } else {
> +            goto process_permissive;
> +        }

Btw, no need for "else" when the earlier if ends in "goto" or alike.
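To illustrate the style nit, a small self-contained sketch (hypothetical names, loosely following the quoted hunk): because the error branch ends in a goto, the success path needs no "else":

```c
/* handle_result() mimics the quoted control flow: gsi < 0 falls
 * through, r < 0 is an error, anything else takes the permissive
 * path.  Return values are arbitrary markers for illustration. */
static int handle_result(int r, int gsi)
{
    if (gsi >= 0) {
        if (r < 0)
            goto out_err;
        /* No "else" needed: the branch above never falls through. */
        goto process_permissive;
    }
    return 0;            /* gsi < 0: nothing to do */

process_permissive:
    return 1;            /* success path */

out_err:
    return -1;           /* error path */
}
```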

> @@ -1493,13 +1537,6 @@ static void pci_add_dm_done(libxl__egc *egc,
>          goto out_no_irq;
>      }
>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
> -        sbdf = PCI_SBDF(pci->domain, pci->bus,
> -                        (PCI_DEVFN(pci->dev, pci->func)));
> -        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
> -        /* if fail, keep using irq; if success, r is gsi, use gsi */
> -        if (r != -1) {
> -            irq = r;
> -        }

If I'm not mistaken, this and ...

> @@ -2255,13 +2302,6 @@ skip_bar:
>      }
>  
>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
> -        sbdf = PCI_SBDF(pci->domain, pci->bus,
> -                        (PCI_DEVFN(pci->dev, pci->func)));
> -        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
> -        /* if fail, keep using irq; if success, r is gsi, use gsi */
> -        if (r != -1) {
> -            irq = r;
> -        }

... this is code added by the immediately preceding patch. It's pretty odd
for that to be deleted here again right away. Can the interaction of the
two patches perhaps be re-arranged to avoid this anomaly?

> @@ -237,6 +238,43 @@ long arch_do_domctl(
>          break;
>      }
>  
> +    case XEN_DOMCTL_gsi_permission:
> +    {
> +        unsigned int gsi = domctl->u.gsi_permission.gsi;
> +        int irq = gsi_2_irq(gsi);

I'm not sure it is a good idea to issue this call ahead of the basic error
checks below.

> +        bool allow = domctl->u.gsi_permission.allow_access;

This allows any non-zero value to mean "true". I think you want to bail
on values larger than 1, much as you also want to check that the
padding fields are all zero.
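A hedged sketch of the suggested validation (the struct layout follows the quoted public header; the check_gsi_permission helper name is made up for illustration):

```c
#include <errno.h>
#include <stdint.h>

struct xen_domctl_gsi_permission {
    uint32_t gsi;
    uint8_t allow_access;
    uint8_t pad[3];
};

static int check_gsi_permission(const struct xen_domctl_gsi_permission *p)
{
    /* Only 0 and 1 are meaningful; reject anything larger. */
    if (p->allow_access > 1)
        return -EINVAL;

    /* Padding must be zero so the bits can gain meaning later. */
    if (p->pad[0] || p->pad[1] || p->pad[2])
        return -EINVAL;

    return 0;
}
```

Rejecting non-zero padding now is what keeps those bytes available for future extension of the interface.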

> +        /*
> +         * If current domain is PV or it has PIRQ flag, it has a mapping
> +         * of gsi, pirq and irq, so it should use XEN_DOMCTL_irq_permission
> +         * to grant irq permission.
> +         */
> +        if ( is_pv_domain(current->domain) || has_pirq(current->domain) )

Please use currd here (and also again below).

> +        {
> +            ret = -EOPNOTSUPP;
> +            break;
> +        }
> +
> +        if ( gsi >= nr_irqs_gsi || irq < 0 )
> +        {
> +            ret = -EINVAL;
> +            break;
> +        }
> +
> +        if ( !irq_access_permitted(current->domain, irq) ||
> +             xsm_irq_permission(XSM_HOOK, d, irq, allow) )

Daniel, is it okay to issue the XSM check using the translated value, not
the one that was originally passed into the hypercall?

> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -955,6 +955,27 @@ static int pin_2_irq(int idx, int apic, int pin)
>      return irq;
>  }
>  
> +int gsi_2_irq(int gsi)
> +{
> +    int entry, ioapic, pin;
> +
> +    ioapic = mp_find_ioapic(gsi);
> +    if ( ioapic < 0 )
> +        return -1;

Can this be a proper errno value (likely -EINVAL), please?

> +    pin = gsi - io_apic_gsi_base(ioapic);

Hmm, instead of the below: Once you have an (ioapic,pin) tuple, can't
you simply call apic_pin_2_gsi_irq()?

> +    entry = find_irq_entry(ioapic, pin, mp_INT);
> +    /*
> +     * If there is no override mapping for irq and gsi in mp_irqs,
> +     * then the default identity mapping applies.
> +     */
> +    if ( entry < 0 )
> +        return gsi;
> +
> +    return pin_2_irq(entry, ioapic, pin);

Under certain conditions this may return 0. Yet you surely don't want
to pass IRQ0 back as a result; you want to hand an error back instead.
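Putting the two comments together, a sketch of what the adjusted gsi_2_irq() could look like. The mp_* lookups are stubbed with made-up behavior purely so the shape is self-contained; only the error-return structure is the point:

```c
#include <errno.h>

/* Stub: pretend GSIs below 24 belong to I/O APIC 0, others are unknown. */
static int stub_mp_find_ioapic(int gsi)
{
    return (gsi < 24) ? 0 : -1;
}

/* Stub: identity pin-to-IRQ translation. */
static int stub_pin_2_irq(int pin)
{
    return pin;
}

static int gsi_2_irq_sketch(int gsi)
{
    int ioapic = stub_mp_find_ioapic(gsi);
    int irq;

    if (ioapic < 0)
        return -EINVAL;    /* proper errno value instead of -1 */

    irq = stub_pin_2_irq(gsi);
    if (irq == 0)
        return -EINVAL;    /* never hand IRQ0 back as a result */

    return irq;
}
```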

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -465,6 +465,14 @@ struct xen_domctl_irq_permission {
>  };
>  
>  
> +/* XEN_DOMCTL_gsi_permission */
> +struct xen_domctl_gsi_permission {
> +    uint32_t gsi;
> +    uint8_t allow_access;    /* flag to specify enable/disable of x86 gsi access */
> +    uint8_t pad[3];
> +};
> +
> +

Nit: no (new) double blank lines, please. In fact (didn't I say this
before already?) you could insert the new struct between the two blank
lines above, such that the existing issue also disappears.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 14:53:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 14:53:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738486.1145251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH2sJ-0007r1-JA; Tue, 11 Jun 2024 14:53:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738486.1145251; Tue, 11 Jun 2024 14:53:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH2sJ-0007qu-Ge; Tue, 11 Jun 2024 14:53:27 +0000
Received: by outflank-mailman (input) for mailman id 738486;
 Tue, 11 Jun 2024 14:53:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WBrw=NN=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sH2sH-0007qo-W3
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 14:53:25 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a1ca1da-2802-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 16:53:24 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-a6efe62f583so105178966b.3
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 07:53:24 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1ec85a74sm268414866b.56.2024.06.11.07.53.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 07:53:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a1ca1da-2802-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718117603; x=1718722403; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=l86eqjnMZ92TwVa78jcH2B2phLBNWQ4tUfYoh0IYomQ=;
        b=CRrMa2Vd8S5zNyGVSImHaJ2MPv/9wFaYPUujqUybAk1QUiO59chtBjuXNh3+ODgldv
         jFqTW+WKwtA+eMevOhKFnAqmEkgrolhVLtFlaInQRbR/XWbLp1EgfdVGGCsYDVA57Mpw
         ijYd55f25Y8d/p8kbo8GuYqrwWxwG77rFPfFfU7VdoixjfR0u+t8rvDmW6X7j8d4Nbiy
         NupMMG5kHuJMOLsBBpcPCSKWz5s0DJUW1cY3c0APKlyMGEd2kV6sPPLD9YEgG/gTgjA6
         wvwKYR1aJbyKWWhIUaIwNZgbFrPOlBOz13Z7t7LiL29I66B6DQq0VItmXjcUVPokCkJr
         ojpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718117603; x=1718722403;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=l86eqjnMZ92TwVa78jcH2B2phLBNWQ4tUfYoh0IYomQ=;
        b=Yvv7BJH2t/EF1h8u5q1Lg1kOa3s56gmxjaDTWUcGzYvWizevq+gAYJtkMaaZYb8Vwz
         BOtQFd5MvFToG0X9n3CdxX0L0r4Vx6Xy+STiM9nQA9fN1UgzJpdW0W7+ky0vRHKVzNSZ
         Wue3qG2Ll4Wp+LCKjEJvP0tWJlLbthLBQ4a2hpUS4jHYEuheq6LRc5XJrfdO2AJ/aB5t
         hh9k8Jc+jRJVPMa/WTrMoENp8m/vlxc1/1AGry/5S7LU0mEGlsOrJiPyIw6MHMRmgaki
         oXW2voWRz/8XUDTjPe+EMT5xDiO5qlylQI9EIzQLKgX8aYHffTDCvPGYOnDcCPQy0/0H
         QK4Q==
X-Gm-Message-State: AOJu0YxhXs4Ce9NT5a04P/lR6BWo/2YOyL/aRW9q/7QQKx5d2sIoQNga
	0XE3crHQkUMQGH4v6gvtVGuVJR8HvRo0j58uEjIhvHJCYUAw+XV9QEHo2Y87hw==
X-Google-Smtp-Source: AGHT+IFHkfRWPaS7Y2lJELkNeMp+n3RSdhRBSG7JZdGXmODvCHEG8YPaHNCac74C0DP8I/yncbR2Ew==
X-Received: by 2002:a17:907:7d8f:b0:a6f:d7a:d650 with SMTP id a640c23a62f3a-a6f0d7ad8fcmr644505566b.50.1718117603364;
        Tue, 11 Jun 2024 07:53:23 -0700 (PDT)
Message-ID: <beb67703-c1f0-490a-a3ad-36e2f331f5e4@suse.com>
Date: Tue, 11 Jun 2024 16:53:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook> <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook> <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
 <ZmgwKmcLDJDhIsl7@macbook> <b076dc8d-701e-4a9f-a147-c54673959009@suse.com>
 <ZmhWtEyuwjTuIAxK@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZmhWtEyuwjTuIAxK@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 15:52, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 01:52:58PM +0200, Jan Beulich wrote:
>> On 11.06.2024 13:08, Roger Pau Monné wrote:
>>> I really wonder whether Xen has enough information to figure out
>>> whether a hole (MMIO region) is supposed to be accessed as UC or
>>> something else.
>>
>> It certainly hasn't, and hence is erring on the (safe) side of forcing
>> UC.
> 
> Except that for the vesa framebuffer at least this is a bad choice :).

Well, yes, that's where we want WC to be permitted. But for that we only
need to avoid setting iPAT; we still can uniformly hand back UC. Except
(as mentioned elsewhere earlier) if the guest uses MTRRs rather than PAT
to arrange for WC.

>>>  Maybe the mfn_valid() check should be
>>> inverted, and return WB when the underlying mfn is RAM, and otherwise
>>> use the guest MTRRs to decide the cache attribute?
>>
>> First: Whether WB is correct for RAM isn't known. With some peculiar device
>> assigned, the guest may want/need part of its RAM be e.g. WC or WT. (It's
>> only without any physical devices assigned that we can be quite sure that
>> WB is good for all of RAM.) Therefore, second, I think respecting MTRRs for
>> RAM is less likely to cause problems than respecting them for MMIO.
>>
>> I think at this point the main question is: Do we want to do things at least
>> along the lines of this v1, or do we instead feel certain enough to switch
>> the mfn_valid() to a comparison against INVALID_MFN (and perhaps moving it
>> up to almost the top of the function)?
> 
> My preferred option would be the later, as that would remove a special
> casing.  However, I'm unsure how much fallout this could cause - those
> caching changes are always tricky and lead to unexpected fallout.

Which is the very reason why I tried to avoid going too far with this.

> OTOH the current !mfn_valid() check is very restrictive, as it forces
> all MMIO to UC.

Which is why, in this v1, I'm relaxing only the iPAT part.

>  So by removing it we allow guest chosen types to take
> effect, which are likely less restrictive than UC (whether those are
> correct is another question).

No, guest chosen types still wouldn't come into play, due to what the
switch() further down in the function does for p2m_mmio_direct.

>> One caveat here that I forgot to
>> mention before: MFNs taken out of EPT entries will never be INVALID_MFN, for
>> the truncation that happens when populating entries. In that case we rely on
>> mfn_valid() to be "rejecting" them.
> 
> The only caller where mfns from EPT entries are passed to
> epte_get_entry_emt() is in resolve_misconfig() AFAICT, and in that
> case the EPT entry must be present for epte_get_entry_emt() to be
> called.  So it seems to me that epte_get_entry_emt() can never be
> called from an mfn constructed from an INVALID_MFN EPT entry (but it's
> worth adding an assert for it).

Are you sure? I agree for the first of those two calls, but the second
isn't quite as obvious. There we'd need to first prove that we will
never create non-present super-page entries. Yet, if I'm not mistaken,
for PoD we may create such entries.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 14:59:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 14:59:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738492.1145262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH2y5-0000Kz-7I; Tue, 11 Jun 2024 14:59:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738492.1145262; Tue, 11 Jun 2024 14:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH2y5-0000Ks-3g; Tue, 11 Jun 2024 14:59:25 +0000
Received: by outflank-mailman (input) for mailman id 738492;
 Tue, 11 Jun 2024 14:59:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eM8s=NN=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sH2y4-0000Km-1H
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 14:59:24 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ffcc929-2803-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 16:59:22 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id
 a640c23a62f3a-a6265d48ec3so122735966b.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 07:59:22 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f00108531sm468117866b.211.2024.06.11.07.59.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 07:59:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ffcc929-2803-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718117962; x=1718722762; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=s/+sL8UQsp33974oHDcUejEizKLfQHWG3c2yC3tFfww=;
        b=fOX1M1UOsCWQLMTwdS02jPSkG08b3QLEdX3CCiW0iKcbCm70fLjNdzqH0AeaVyu+5I
         vl/VSbkwKuPVd+0OBgmVc0GbS1IWZNVlzi5XhDs9b/j60aLqayoec/MZp6d1HSog+s7R
         GuCOnosfYEcOLhw4uMjhwboq3jLIvriISEOWA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718117962; x=1718722762;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=s/+sL8UQsp33974oHDcUejEizKLfQHWG3c2yC3tFfww=;
        b=ADWqK4X8j+Ho1P4IMwMQQrmetnY4ukAzAISAUUoy61bMClGWYbz1QyjaTgciIOVsYf
         J0Wm4bx2ag5CyDy0IrBivH42Yw6rkLeo8POsCDb+MmU5Y4WbATDy6ZHVuwiQDPCQs1nG
         +0axCzI5UGMvfd7LUbQ2Luhe7yHcbbznYso4CjzxgYfYP1UySzTFaIgnkZuJnFGPFFu7
         z0XYvIv61jD0ZEySaI266hJz73wzgXBlDrLLiSWjlS/EbqFr10HIsnjfDlgIIe2+opdj
         MZH2jwSj8YqYjY+HZ16glmsUS+c7E5kkkjd0xKVe/3uIDVEcAGM/NXx6r82ByacYEjEv
         T3QQ==
X-Forwarded-Encrypted: i=1; AJvYcCUCxEu5OHA/W1ZKVXf6aUW3652awsrZGKzMgjVvEy8pTo3U8JvYepCghf//BEAy0LQmeGfM5cpuWEDRQXr0DGnEq6QQnqeLyAc/QNpP1qQ=
X-Gm-Message-State: AOJu0YwT2ajt2iU7qdPs4KTwHYNkFPWLc0rFTu8iEzReq7S0PBdulyRW
	3Nphi98yeeGJheq0A4EXifo8DmpEhx3LXqtiRnZ5GLoMoequmxPs8LphcWK3em8=
X-Google-Smtp-Source: AGHT+IFudohw47bDbB5i5747u3iZsRlP5B+jcAlQ/0xitOYbKpLMw+bk7sntCjz2k6k6h7HW33MdMA==
X-Received: by 2002:a17:907:84c:b0:a64:e418:f93f with SMTP id a640c23a62f3a-a6cdacfedbcmr1039452666b.60.1718117962183;
        Tue, 11 Jun 2024 07:59:22 -0700 (PDT)
Message-ID: <476f3b8d-6b0a-4246-878f-f2e284d75a3d@citrix.com>
Date: Tue, 11 Jun 2024 15:59:21 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v12 8/8] xen/README: add compiler and binutils versions
 for RISC-V64
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
 <c6ff49af9a107965f8121862e6b32c24548956e6.1717008161.git.oleksii.kurochko@gmail.com>
 <d4e5b4c8-d494-440b-8970-488b49bee12e@citrix.com>
 <79a2d936-62f1-4749-9e75-0be019cd3d99@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <79a2d936-62f1-4749-9e75-0be019cd3d99@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 31/05/2024 7:18 am, Jan Beulich wrote:
> On 30.05.2024 21:52, Andrew Cooper wrote:
>> On 29/05/2024 8:55 pm, Oleksii Kurochko wrote:
>>> diff --git a/README b/README
>>> index c8a108449e..30da5ff9c0 100644
>>> --- a/README
>>> +++ b/README
>>> @@ -48,6 +48,10 @@ provided by your OS distributor:
>>>        - For ARM 64-bit:
>>>          - GCC 5.1 or later
>>>          - GNU Binutils 2.24 or later
>>> +      - For RISC-V 64-bit:
>>> +        - GCC 12.2 or later
>>> +        - GNU Binutils 2.39 or later
>> I would like to petition for GCC 10 and Binutils 2.35.
>>
>> These are the versions provided as cross-compilers by Debian, and
>> therefore are the versions I would prefer to do smoke testing with.
> See why I asked to amend the specified versions by a softening sentence that
> you (only now) said you dislike? The "this is what we use in CI" makes it a
> very random choice, entirely unrelated to the compiler's abilities.

"what's in CI" is an arbitrary choice, and that's *explicitly* fine and
the right choice for Oleksii to have made.

He's got the hard job of making the damn thing work in the first place. 
Requiring him to also go and get some old compilers to backdate the
support statement is unreasonable for you to demand.

In this case, I'm saying that it would be convenient for *me* if the
numbers were older, because that's what *I* have and what *I'm* wanting
to test with.  This means that I'm the one taking on the
responsibility of playing backwards-compatibility-roulette.

Now, for other reasons I no longer have those versions, but one of the
3 bugs I raised is still a real bug needing fixing.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:02:02 2024
Date: Tue, 11 Jun 2024 17:01:49 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4 1/2] x86/mm: add API for marking only part of a MMIO
 page read only
Message-ID: <Zmhm3r8U4Nz7vxhQ@mail-itl>
References: <cover.68462f37276d69ab6e268be94d049f866a321f73.1716392340.git-series.marmarek@invisiblethingslab.com>
 <30562c807ff2e434731a76d7110d48614a58884b.1716392340.git-series.marmarek@invisiblethingslab.com>
 <ZmgpsZJ4afLd1Fc3@macbook>
 <Zmg3O7zvd9KBC1Fv@mail-itl>
 <ZmhJOjggtJiNccPo@macbook>
 <ZmhN_hNHp7WtyPyD@mail-itl>
 <ZmhaB57Tc6BsknVO@macbook>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="VBToB/jz9xQ1jL61"
Content-Disposition: inline
In-Reply-To: <ZmhaB57Tc6BsknVO@macbook>


--VBToB/jz9xQ1jL61
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 11, 2024 at 04:07:03PM +0200, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 03:15:42PM +0200, Marek Marczykowski-Górecki wrote:
> > Its location is discovered at startup
> > (device presents a linked-list of capabilities in one of its BARs).
> > The spec talks only about alignment of individual registers, not the
> > whole group...
>
> Never mind then, I had the expectation we could get away with a single
> page, but that doesn't look to be the case.
>
> I assume the spec doesn't mention anything about the BAR where the
> capabilities reside having a size <= 4KiB.

No, and in fact I see it's a BAR of 64KiB on one of the devices...
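The discovery mechanism described above (a linked list of capability structures inside a BAR) can be sketched roughly as follows. The 3-byte header layout (a 1-byte id followed by a 2-byte offset of the next entry) and all names here are illustrative assumptions, not the device spec's actual definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/*
 * Walk a linked list of capabilities inside a memory-mapped BAR and
 * return the offset of the capability with the given id, or 0 if not
 * found.  Assumed entry layout: byte 0 = capability id, bytes 1-2 =
 * offset of the next entry (0 terminates the list).
 */
static uint16_t find_cap(const uint8_t *bar, size_t bar_size,
                         uint16_t first, uint8_t id)
{
    uint16_t off = first;

    /* A real implementation must also guard against cycles in the list. */
    while ( off != 0 && (size_t)off + 3 <= bar_size )
    {
        uint8_t cap_id = bar[off];
        uint16_t next;

        memcpy(&next, bar + off + 1, sizeof(next)); /* unaligned-safe read */

        if ( cap_id == id )
            return off;

        off = next;
    }

    return 0;
}
```

Since the list can point anywhere inside the BAR, there is no guarantee the entries (or the registers they describe) all fall within one 4 KiB page, which is the problem being discussed.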

> > > Maybe worth adding a comment that the logic here intends to deal only
> > > with the RW bits of a page that's otherwise RO, and that by not
> > > handling the RO regions the intention is that those are dealt just
> > > like fully RO pages.
> >
> > I can extend the comment, but I assumed it's kinda implied already (if
> > nothing else, by the function name).
>
> Well, at this point we know the write is not going to make it to host
> memory.  The only reason to not handle the access here is that we want
> to unify the consequences it has for a guest writing to a RO address.

Yup.

> > > I guess there's some message printed when attempting to write to a RO
> > > page that you would also like to print here?
> >
> > If an HVM domain writes to an R/O area, it is crashed, so you will get
> > a message. This applies to both full-page R/O and partial R/O. PV
> > doesn't go through subpage_mmio_write_accept().
>
> Oh, crashing the domain is more strict than I was expecting.

That's how it was before; I'm not really changing it here. It's less
strict for PV though (it either gets a #PF forwarded back to the guest,
or the write is ignored).
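A minimal sketch of the sort of bookkeeping a partial-page R/O API needs: one bit per fixed-size chunk of the page, with a write accepted only when every chunk it touches is marked writable. The 8-byte granularity, the structure, and all names are assumptions for illustration, not Xen's actual subpage_mmio implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE     4096
#define SUBPAGE_GRAN  8                          /* assumed write granularity */
#define SUBPAGE_BITS  (PAGE_SIZE / SUBPAGE_GRAN)

/* One bit per 8-byte chunk of the page: set = writable, clear = R/O. */
struct subpage_ro {
    uint8_t rw_bitmap[SUBPAGE_BITS / 8];
};

/* Mark [start, start + len) of the page as writable. */
static void mark_range_rw(struct subpage_ro *s, unsigned int start,
                          unsigned int len)
{
    for ( unsigned int off = start; off < start + len; off += SUBPAGE_GRAN )
        s->rw_bitmap[off / SUBPAGE_GRAN / 8] |= 1U << (off / SUBPAGE_GRAN % 8);
}

/* Accept the write only if every chunk it touches is marked writable. */
static bool write_accept(const struct subpage_ro *s, unsigned int off,
                         unsigned int len)
{
    for ( unsigned int o = off; o < off + len; o += SUBPAGE_GRAN )
        if ( !(s->rw_bitmap[o / SUBPAGE_GRAN / 8] &
               (1U << (o / SUBPAGE_GRAN % 8))) )
            return false;

    return true;
}
```

In this model a write rejected by `write_accept()` falls through to whatever policy applies to fully R/O pages, matching the unification Roger describes above.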

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:25:29 2024
Message-ID: <ff3a6de3c5cb08f4ebf55bc0ab26a02272a57c74.camel@gmail.com>
Subject: Re: [PATCH v12 5/8] xen/riscv: add minimal stuff to mm.h to build
 full Xen
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org,  Jan Beulich <jbeulich@suse.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, George
 Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Date: Tue, 11 Jun 2024 17:25:02 +0200
In-Reply-To: <70128dba-498f-4d85-8507-bb1621182754@citrix.com>
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
	 <d00b86f41ef2c7d928a28dadd8c34fb845f23d0a.1717008161.git.oleksii.kurochko@gmail.com>
	 <70128dba-498f-4d85-8507-bb1621182754@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Thu, 2024-05-30 at 18:23 +0100, Andrew Cooper wrote:
> On 29/05/2024 8:55 pm, Oleksii Kurochko wrote:
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > Acked-by: Jan Beulich <jbeulich@suse.com>
>
> This patch looks like it can go in independently?  Or does it depend
> on having bitops.h working in practice?
>
> However, one very strong suggestion...
>
>
> > diff --git a/xen/arch/riscv/include/asm/mm.h
> > b/xen/arch/riscv/include/asm/mm.h
> > index 07c7a0abba..cc4a07a71c 100644
> > --- a/xen/arch/riscv/include/asm/mm.h
> > +++ b/xen/arch/riscv/include/asm/mm.h
> > @@ -3,11 +3,246 @@
> > <snip>
> > +/* PDX of the first page in the frame table. */
> > +extern unsigned long frametable_base_pdx;
> > +
> > +/* Convert between machine frame numbers and page-info structures. */
> > +#define mfn_to_page(mfn)                                            \
> > +    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
> > +#define page_to_mfn(pg)                                             \
> > +    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
>
> Do yourself a favour and not introduce frametable_base_pdx to begin
> with.

To drop frametable_base_pdx, would the following changes be enough?
   diff --git a/xen/arch/riscv/include/asm/mm.h
   b/xen/arch/riscv/include/asm/mm.h
   index cc4a07a71c..fdac7e0646 100644
   --- a/xen/arch/riscv/include/asm/mm.h
   +++ b/xen/arch/riscv/include/asm/mm.h
   @@ -107,14 +107,11 @@ struct page_info
   
    #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
   
   -/* PDX of the first page in the frame table. */
   -extern unsigned long frametable_base_pdx;
   -
    /* Convert between machine frame numbers and page-info structures. */
    #define mfn_to_page(mfn)                                            \
   -    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
   +    (frame_table + (mfn_to_pdx(mfn) - FRAMETABLE_BASE_OFFSET))
    #define page_to_mfn(pg)                                             \
   -    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
   +    pdx_to_mfn((unsigned long)((pg) - frame_table) + FRAMETABLE_BASE_OFFSET)
   
    static inline void *page_to_virt(const struct page_info *pg)
    {
   diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
   index 9c0fd80588..8f6dbdc699 100644
   --- a/xen/arch/riscv/mm.c
   +++ b/xen/arch/riscv/mm.c
   @@ -15,7 +15,6 @@
    #include <asm/page.h>
    #include <asm/processor.h>
   
   -unsigned long __ro_after_init frametable_base_pdx;
    unsigned long __ro_after_init frametable_virt_end;
   
    struct mmu_desc {


>
> Every RISC-V board I can find has things starting from 0 in physical
> address space, with RAM starting immediately after.
>
> Taking the microchip board as an example, RAM actually starts at
> 0x8000000, which means that having frametable_base_pdx and assuming it
> does get set to 0x8000 (which isn't even a certainty, given that I
> think you'll need struct pages covering the PLICs), then what you are
> trading off is:
>
> * Saving 32k of virtual address space only (no need to even allocate
> memory for this range of the frametable), by
> * Having an extra memory load and add/sub in every page <-> mfn
> conversion, which is a screaming hotpath all over Xen.
Are you referring here to `mfn_to_pdx()` used in `mfn_to_page()` and
`pdx_to_mfn()` in `page_to_mfn()`?

My expectation was that when CONFIG_PDX_COMPRESSION is disabled, these
macros don't do anything:
/* pdx<->pfn == identity */
#define pdx_to_pfn(x) (x)
#define pfn_to_pdx(x) (x)
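With identity pdx macros the conversion is indeed pure pointer arithmetic; the slide only adds one extra subtract/add per conversion. A toy model of the two macros being debated (the array size, the offset value, and the BASE_PDX name are illustrative, not Xen's):

```c
#include <assert.h>

/* pdx <-> pfn are identity when PDX compression is disabled. */
#define pfn_to_pdx(x) (x)
#define pdx_to_pfn(x) (x)

struct page_info { unsigned long data; };

static struct page_info frame_table[16];

/*
 * Illustrative slide: pretend RAM (and hence the frame table) starts at
 * pdx 4, so frame_table[0] describes pdx 4.  With the slide at 0, the
 * subtraction and addition below disappear entirely.
 */
#define BASE_PDX 4UL

#define mfn_to_page(mfn) (frame_table + (pfn_to_pdx(mfn) - BASE_PDX))
#define page_to_mfn(pg)  pdx_to_pfn((unsigned long)((pg) - frame_table) + BASE_PDX)
```

The extra load/add Andrew objects to is the `BASE_PDX` term when it is a runtime variable rather than a compile-time constant.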


>
> It's a terribly short-sighted tradeoff.
>
> 32k of VA space might be worth saving in a 32bit build (I personally
> wouldn't - especially as there's no need to share Xen's VA space with
> guests, given no PV guests on ARM/RISC-V), but it's absolutely not at
> all in a 64bit build with TB of VA space available.
Why 32k? If RAM starts at 0x8000_0000 then we have to cover 0x80000
entries, and the size of that is 0x80000 * 64 = 33554432 bytes, so it
is 32 MiB.
Am I confusing something?

P.S.: Should I map this initial 32 MiB? Or is it enough to have a slide
(FRAMETABLE_BASE_OFFSET)?
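The 32 MiB figure checks out under the stated assumptions (4 KiB pages, a 64-byte struct page_info, RAM starting at 0x8000_0000); the constants below are just that arithmetic written out, not actual Xen definitions:

```c
#include <assert.h>

/*
 * Frame-table cost of covering [0, RAM_START) when RAM starts at
 * 0x8000_0000, assuming 4 KiB pages and a 64-byte struct page_info.
 */
#define PAGE_SHIFT    12
#define PAGE_INFO_SZ  64UL
#define RAM_START     0x80000000UL

#define HOLE_PAGES    (RAM_START >> PAGE_SHIFT)   /* page frames below RAM */
#define HOLE_BYTES    (HOLE_PAGES * PAGE_INFO_SZ) /* frame-table bytes     */
```

Andrew's much smaller "32k" figure corresponds to the microchip example with RAM at 0x8000000, i.e. a hole of 0x8000 frame-table entries.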

~ Oleksii

>
> Even if we do find a board with the first interesting thing in the
> frametable starting sufficiently away from 0 that it might be worth
> considering this slide, then it should still be Kconfig-able in a
> similar way to PDX_COMPRESSION.
>
> You don't want to be penalising everyone because of a
> theoretical/weird corner case.
>
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:28:51 2024
Message-ID: <64edc5419adf68a21c8792337bb4000820be002e.camel@gmail.com>
Subject: Re: [PATCH v4 0/7] Static shared memory followup v2 - pt2
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Luca Fancellu <Luca.Fancellu@arm.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>,  Bertrand Marquis <Bertrand.Marquis@arm.com>, Michal
 Orzel <michal.orzel@amd.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date: Tue, 11 Jun 2024 17:28:36 +0200
In-Reply-To: <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
References: <20240524124055.3871399-1-luca.fancellu@arm.com>
	 <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Tue, 2024-06-11 at 12:35 +0000, Luca Fancellu wrote:
> + Oleksii
>
> > On 24 May 2024, at 13:40, Luca Fancellu <Luca.Fancellu@arm.com>
> > wrote:
> >
> > This series is a partial rework of this other series:
> > https://patchwork.kernel.org/project/xen-devel/cover/20231206090623.1932275-1-Penny.Zheng@arm.com/
> >
> > The original series addresses an issue with the static shared memory
> > feature that impacts the memory footprint of other components when
> > the feature is enabled; another issue impacts the device tree
> > generation for the guests when the feature is enabled and used; and
> > the last one is a missing feature, namely the option to have a static
> > shared memory region that is not from the host address space.
> >
> > This series handles some comments on the original series and splits
> > the rework in two parts. The first part addresses the memory
> > footprint issue and the device tree generation, and is currently
> > fully merged
> > (https://patchwork.kernel.org/project/xen-devel/cover/20240418073652.3622828-1-luca.fancellu@arm.com/);
> > this series addresses the static shared memory allocation from the
> > Xen heap.
> >
> > Luca Fancellu (5):
> >  xen/arm: Lookup bootinfo shm bank during the mapping
> >  xen/arm: Wrap shared memory mapping code in one function
> >  xen/arm: Parse xen,shared-mem when host phys address is not provided
> >  xen/arm: Rework heap page allocation outside allocate_bank_memory
> >  xen/arm: Implement the logic for static shared memory from Xen heap
> >
> > Penny Zheng (2):
> >  xen/p2m: put reference for level 2 superpage
> >  xen/docs: Describe static shared memory when host address is not
> >    provided
> >
> > docs/misc/arm/device-tree/booting.txt   |  52 ++-
> > xen/arch/arm/arm32/mmu/mm.c             |  11 +-
> > xen/arch/arm/dom0less-build.c           |   4 +-
> > xen/arch/arm/domain_build.c             |  84 +++--
> > xen/arch/arm/include/asm/domain_build.h |   9 +-
> > xen/arch/arm/mmu/p2m.c                  |  82 +++--
> > xen/arch/arm/setup.c                    |  14 +-
> > xen/arch/arm/static-shmem.c             | 432 +++++++++++++++++-------
> > 8 files changed, 502 insertions(+), 186 deletions(-)
> >
> > -- 
> > 2.34.1
> >
> >
>
> Hi,
>
> We would like this series to be in Xen 4.19. There was a
> misunderstanding on our side: because the series was sent before the
> last posting date, we thought it could have been a candidate for
> merging in the new release; after speaking with Julien and Oleksii we
> are now aware that we need to provide a justification for its
> inclusion.
>
> A pro of this series is that we are closing the circle for static
> shared memory, allowing it to use memory from the host or from Xen. It
> is also a feature that is not enabled by default, so it should not
> cause too much disruption in case of any bugs that escaped review;
> we've also tested many configurations with and without the feature
> enabled, if that can be an additional value.
>
> Cons: we are touching some common code related to p2m, but there too
> the impact should be minimal, because the new code is subject to l2
> foreign mapping (to be confirmed, maybe by a p2m expert like Julien).
>
> The comments on patch 3 of this series are addressed by this patch:
> https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fancellu@arm.com/
> And the series is fully reviewed.
>
> So our request is to allow this series in 4.19. Oleksii, ARM
> maintainers, do you agree on that?
Considering that it is not enabled by default and its effect on common
code is minimal, we should consider this patch series for 4.19:
 Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii




From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:32:28 2024
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718119941; x=1718724741; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Tpm0S6fj0bcy9G/8UXbYvcBQjs1Nke37QyJgI25AkUs=;
        b=gBelceU09e8LsQ5uTaBRI5kjPIu75UK4q7vmodXRV62dz4icMEkzNJBoHl1Co7F5sU
         PXrFHI/sB4JjbfVJHFNhP0hx4w1WmR3ks+2SY4zaG6vqsqCFrdJrsSTdT6EE4tQDXupV
         KAxQv/SxV/l86kdhVrqDq6MpVpJ952vJ5lzdaYJ+6hEW78OG3GxOkmPFgRD3b1i6z7Z6
         Ha7gctxFfeXU9mw3S47phZXKxS976ffMLXopg+Hf0MzKQkLbFImFu21SKgqFT4MuQ6de
         syJIzAtRfVv5isU+VozsHY/+P5s9K3ki78O/VCgVj4q6cu8BX3v+maEZ/U2+XNizkhKX
         GROQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718119941; x=1718724741;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Tpm0S6fj0bcy9G/8UXbYvcBQjs1Nke37QyJgI25AkUs=;
        b=L6L6qHnt1ZYMVP8mw3wkrJMB24OxdkFYxo3cZ1UfDqt278QdfpnIqvn+iLys9gRR3V
         MTp4r3JGmKLkJnNoudVRKFxnDpD+C49YAqN+oh8vTY29TeflSAc6hc4oPXUwYfxdBvba
         QGijZsXMYwqRxOOpzbTQ222Y9W+dTbH7q08OBaLOoQKsCT4hUlwdwN9kXW+xTdVq7oRz
         b4v7lPxUNdAwnPc/FUZm4Dwnec48LSxB7sjqF2OS8iOxXgPhVr+fiOR4K1s6cSLOvMBl
         QiSlHGaBErHJ4iXaaZD9DHBmHRQpW1kAbWs9CI8Rr1YJE2LG20PnW1+FMO0wfI+zLndQ
         3r0Q==
X-Forwarded-Encrypted: i=1; AJvYcCWHjzB+6aR+hsFcX1VJECwl/g4kpc8DEsC7uJw+KugaK4xCW/RF8aiakS92/RfsVhTWExwWtSTWf8mR6ILCukP40mMJBBwlwnMhzPCaYLs=
X-Gm-Message-State: AOJu0YynnK3YaxlcJZEjGo6y/EJeYm7aXulO4W9dLdEsH7m8yFf3iWvC
	rrjkIVqLfriTQlfBfS2Krr0phlxdHak6L/Bw8NGovPam+aF2f23X
X-Google-Smtp-Source: AGHT+IEcWsldpicF8KuMr4ItGuSVT86QYhc8Cl644vhCjir/Qm6lEIp3An4+X98cSiqe/apC2CBJrg==
X-Received: by 2002:a05:6512:404:b0:52b:8ad9:cf0a with SMTP id 2adb3069b0e04-52bb9fc5e89mr8513310e87.51.1718119941142;
        Tue, 11 Jun 2024 08:32:21 -0700 (PDT)
Message-ID: <50a2438030c160505603501044bc4045749a769c.camel@gmail.com>
Subject: Re: [PATCH for-4.19] CI: Update FreeBSD to 13.3
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Michal Orzel
	 <michal.orzel@amd.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, Jan Beulich <JBeulich@suse.com>
Date: Tue, 11 Jun 2024 17:32:20 +0200
In-Reply-To: <20240611124701.802752-1-andrew.cooper3@citrix.com>
References: <20240611124701.802752-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Tue, 2024-06-11 at 13:47 +0100, Andrew Cooper wrote:
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


> ---
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: Jan Beulich <JBeulich@suse.com>
>
> Updated run:
>   https://cirrus-ci.com/task/4903594304995328
>
> For 4.19, and for backporting to all trees including security trees.
> FreeBSD-13.2 isn't available any more:
>   https://cirrus-ci.com/task/4554831417835520
>
> causing build failures.
> ---
>  .cirrus.yml | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/.cirrus.yml b/.cirrus.yml
> index d0a9021a77e4..c431d8d2447d 100644
> --- a/.cirrus.yml
> +++ b/.cirrus.yml
> @@ -17,7 +17,7 @@ freebsd_template: &FREEBSD_TEMPLATE
>  task:
>    name: 'FreeBSD 13'
>    freebsd_instance:
> -    image_family: freebsd-13-2
> +    image_family: freebsd-13-3
>    << : *FREEBSD_TEMPLATE
>
>  task:
>
> base-commit: ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:43:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:43:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738540.1145322 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3ep-0002BR-GJ; Tue, 11 Jun 2024 15:43:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738540.1145322; Tue, 11 Jun 2024 15:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3ep-0002BK-Dj; Tue, 11 Jun 2024 15:43:35 +0000
Received: by outflank-mailman (input) for mailman id 738540;
 Tue, 11 Jun 2024 15:43:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JAKB=NN=redhat.com=stefanha@srs-se1.protection.inumbo.net>)
 id 1sH3eo-0002BE-N6
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:43:34 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5b4ad31f-2809-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 17:43:33 +0200 (CEST)
Received: from mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com
 (ec2-54-186-198-63.us-west-2.compute.amazonaws.com [54.186.198.63]) by
 relay.mimecast.com with ESMTP with STARTTLS (version=TLSv1.3,
 cipher=TLS_AES_256_GCM_SHA384) id us-mta-541-M7gCHZsfNvaRoPk4wQQxjw-1; Tue,
 11 Jun 2024 11:43:27 -0400
Received: from mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com
 (mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com [10.30.177.15])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by mx-prod-mc-03.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTPS
 id 8414F1956068; Tue, 11 Jun 2024 15:43:15 +0000 (UTC)
Received: from localhost (unknown [10.39.193.36])
 by mx-prod-int-02.mail-002.prod.us-west-2.aws.redhat.com (Postfix) with ESMTP
 id DD4331954AC1; Tue, 11 Jun 2024 15:43:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5b4ad31f-2809-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718120612;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=m3mVOlE8X03VcFujwgTUstJqoq0FTNeRmucJ5uXzzb4=;
	b=g3ImpHwFPd3zYDbSo6+DyCmL/sFBqBusrGTGQ+nUJ1Jc3pjC2QGt6IElAywhXBseiygDNC
	z6Eh48Xu/zfSEPDq602vJsDch/Zk73o2E+gsTG0mWXmfLdGVuBh2n43fjFcUbiwN2993ci
	LMQuTert1Fi/8EYVt+zdL5AN8obFP8c=
X-MC-Unique: M7gCHZsfNvaRoPk4wQQxjw-1
Date: Tue, 11 Jun 2024 11:43:09 -0400
From: Stefan Hajnoczi <stefanha@redhat.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 08/26] virtio_blk: remove virtblk_update_cache_mode
Message-ID: <20240611154309.GA371660@fedora.redhat.com>
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-9-hch@lst.de>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="RRHJimCDVdpqhQ+7"
Content-Disposition: inline
In-Reply-To: <20240611051929.513387-9-hch@lst.de>
X-Scanned-By: MIMEDefang 3.0 on 10.30.177.15


--RRHJimCDVdpqhQ+7
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 11, 2024 at 07:19:08AM +0200, Christoph Hellwig wrote:
> virtblk_update_cache_mode boils down to a single call to
> blk_queue_write_cache.  Remove it in preparation for moving the cache
> control flags into the queue_limits.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/virtio_blk.c | 13 +++----------
>  1 file changed, 3 insertions(+), 10 deletions(-)

Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>

--RRHJimCDVdpqhQ+7
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhpWov9P5fNqsNXdanKSrs4Grc8gFAmZocI0ACgkQnKSrs4Gr
c8gYxQf+MiHN7lIto5cvBArHuLRaYXdHSqN8WkOxjyk6pKDVJN3zByol4IsQ1or0
gi3U/1yXaU1lyM8v76HhRI789ZE9OXHiRD8iKWM54w0uldvJLPNzByqsrvapKvmR
XjYyMxgp/uFJZ4qxg3nonI2Fa2FzSjqA/ct/sTYj8AbXOsOEK/bUZasvnrwUuIhP
FwODujdCtfIpzMvn4c262LUiz3TOY+p3nH/CSKsYZwR5xiUbbZCf30PKrwN4RcmU
ti4hIKoOJcLH5gjgeXpfx7jOM/6Qr7eQrEelsDnuMAKYXC9WMj48+O6Cf8mFja4M
N1txQKX0NepjOjzDmydD5Dx/69S/sg==
=ehtQ
-----END PGP SIGNATURE-----

--RRHJimCDVdpqhQ+7--



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:47:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:47:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738546.1145332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3iD-0002jW-Vs; Tue, 11 Jun 2024 15:47:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738546.1145332; Tue, 11 Jun 2024 15:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3iD-0002jP-Rs; Tue, 11 Jun 2024 15:47:05 +0000
Received: by outflank-mailman (input) for mailman id 738546;
 Tue, 11 Jun 2024 15:47:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BOE/=NN=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sH3iD-0002jJ-6k
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:47:05 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d881bafa-2809-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 17:47:03 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 1AC6760ED4;
 Tue, 11 Jun 2024 15:47:02 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A77A1C2BD10;
 Tue, 11 Jun 2024 15:47:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d881bafa-2809-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718120821;
	bh=wwtIEU7tFR0wb8jBKgY1L9opmSWOB8Rc754j5LMe1wQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=lAzCRqYrlkFHStrtCzN226EyhMSSC7BlbsT/1xMid7kcB5MLefpFjnyCEhOf8KDY8
	 EepRy2cGK0c06LXrwYLJttu423+eBh+yTrJRUOitgqqGKynoAQ9Bk8yaXLxhTxeFNE
	 b4J4Vs1oA1DjDCNsiFz4QEsLExpAwIZ8FqweTlZfYapj6Z1pBizCFdE02SuD24NCDl
	 LnQIgpK08YQs27DSd9ESBrJYu8//HVwE+QKKP/pHbDPDwltqiy/nkcgOfQjKgV6HHN
	 SMDy1CGMQu//w8ZUfWKzs/5V3nt2gVH8C17TINRtIu+V8hMABdwGzkYwCkgxBNklpq
	 AwRy5Y1ckVXKg==
Date: Tue, 11 Jun 2024 10:46:59 -0500 (CDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Andrew Cooper <andrew.cooper3@citrix.com>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>, 
    Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH for-4.19] CI: Update FreeBSD to 13.3
In-Reply-To: <ZmhKK4PcLki8EVST@macbook>
Message-ID: <alpine.DEB.2.22.394.2406111046430.1328433@ubuntu-linux-20-04-desktop>
References: <20240611124701.802752-1-andrew.cooper3@citrix.com> <ZmhKK4PcLki8EVST@macbook>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-2056669113-1718120821=:1328433"


--8323329-2056669113-1718120821=:1328433
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Tue, 11 Jun 2024, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 01:47:01PM +0100, Andrew Cooper wrote:
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
--8323329-2056669113-1718120821=:1328433--


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738551.1145342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oP-0004vh-I7; Tue, 11 Jun 2024 15:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738551.1145342; Tue, 11 Jun 2024 15:53:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oP-0004va-FI; Tue, 11 Jun 2024 15:53:29 +0000
Received: by outflank-mailman (input) for mailman id 738551;
 Tue, 11 Jun 2024 15:53:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eM8s=NN=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sH3oN-0004vT-RO
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:27 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bc995427-280a-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 17:53:25 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57c83100cb4so2797702a12.1
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 08:53:25 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f0f98a285sm421760966b.210.2024.06.11.08.53.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 08:53:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc995427-280a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718121204; x=1718726004; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=MULxB8npoLyZiEUAp0PTIrbFWoZ8N2zCmpko8qDNTmU=;
        b=IQKPrXcyIT5CK1fA4P9qTaZ6JRNgIMy0WF/NzJnl0bTxvvm1fdOHlpWL268hYp/HeN
         VGDjiU5mxZKE8M8rYD8aOG4k2KRESaSAIy9CgCFotT9yFPX3f1wgzmvkbVClkEFoG7ow
         kodJ/7SxIWB3BhHBDzF4ZR8fM3ytGJZtPiNWU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718121204; x=1718726004;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MULxB8npoLyZiEUAp0PTIrbFWoZ8N2zCmpko8qDNTmU=;
        b=KgpZ4YRt5rcpoIRgVxEWetQc8A1EmzYIpi3/HHIAzTtK5uAgmyUIkAxLwldwAkPDBU
         sNSyr/8uantWtUqfea0cogKtfOLWfNwMd4Av4gf/c/pRwX6hucD42hCleDDpPPF1rlED
         4po8MNFHyU6GorJpzn5yVCdM72Mj7fnPBsA3vbzbINQdiOKBIipMDFq5kEjIcWncVCXn
         RYVTQ4T1rX1hq8wVp7pn7GSqsSguOhQ15G29oFdUPUSUBUZ+5YHnXcOI6KMyCuOF2Dwy
         fKIjkGA4ZH4v0UViTpTauypdWuvR5+p33fA7NGq2aXXe8/QJXvr6BP/sgfXb8lREsKhI
         laTA==
X-Forwarded-Encrypted: i=1; AJvYcCU44HwxhM5a2L2U4n0MDVyfpngWd/Qd3YBHXZdupFnEiNUW/tUPPkeMlZD19DlR8MBvYTOQu4doiFoehoYGfAiVtmaNTqlay0IJ30YTW/4=
X-Gm-Message-State: AOJu0Yx2zS7AIV2gxIooiDPKnukhELaEaH/R5QG2wxKCbKQ4156F4UXa
	XOIAt5g6YAdVDQbI49pJDW3/mjH6ajHzpDsSf1FnTeyTYVkJNp+bBmH9luOITh5PoR3t1RZ8j30
	sZaA=
X-Google-Smtp-Source: AGHT+IFFDzd1myyds4FSAI+TZUpNGRE6QElKc3qGe2lkweFI9SkFt6N3zQIaaZum6yZL7V7r9X6Hzw==
X-Received: by 2002:a17:907:86a2:b0:a6f:ea5:a168 with SMTP id a640c23a62f3a-a6f0ea5a6a3mr652247066b.57.1718121204545;
        Tue, 11 Jun 2024 08:53:24 -0700 (PDT)
Message-ID: <e80e30c9-6558-4b70-ab2f-18c34c359dae@citrix.com>
Date: Tue, 11 Jun 2024 16:53:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v12 5/8] xen/riscv: add minimal stuff to mm.h to build
 full Xen
To: "Oleksii K." <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
 <d00b86f41ef2c7d928a28dadd8c34fb845f23d0a.1717008161.git.oleksii.kurochko@gmail.com>
 <70128dba-498f-4d85-8507-bb1621182754@citrix.com>
 <7721c1b4eb0ea76cca5460264954d40d639499b7.camel@gmail.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <7721c1b4eb0ea76cca5460264954d40d639499b7.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 30/05/2024 7:22 pm, Oleksii K. wrote:
> On Thu, 2024-05-30 at 18:23 +0100, Andrew Cooper wrote:
>> On 29/05/2024 8:55 pm, Oleksii Kurochko wrote:
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>> This patch looks like it can go in independently?  Or does it depend
>> on
>> having bitops.h working in practice?
>>
>> However, one very strong suggestion...
>>
>>
>>> diff --git a/xen/arch/riscv/include/asm/mm.h
>>> b/xen/arch/riscv/include/asm/mm.h
>>> index 07c7a0abba..cc4a07a71c 100644
>>> --- a/xen/arch/riscv/include/asm/mm.h
>>> +++ b/xen/arch/riscv/include/asm/mm.h
>>> @@ -3,11 +3,246 @@
>>> <snip>
>>> +/* PDX of the first page in the frame table. */
>>> +extern unsigned long frametable_base_pdx;
>>> +
>>> +/* Convert between machine frame numbers and page-info structures.
>>> */
>>> +#define mfn_to_page(mfn)                                            \
>>> +    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
>>> +#define page_to_mfn(pg)                                             \
>>> +    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
>> Do yourself a favour and not introduce frametable_base_pdx to begin
>> with.
>>
>> Every RISC-V board I can find has things starting from 0 in physical
>> address space, with RAM starting immediately after.
> I checked the Linux kernel and grepped there:
>    [ok@fedora linux-aia]$ grep -Rni "memory@" arch/riscv/boot/dts/ --exclude "*.tmp" -I
>    arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi:33:     memory@40000000 {
>    arch/riscv/boot/dts/starfive/jh7100-common.dtsi:28:     memory@80000000 {
>    arch/riscv/boot/dts/microchip/mpfs-sev-kit.dts:49:      ddrc_cache: memory@1000000000 {
>    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:33:   ddrc_cache_lo: memory@80000000 {
>    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:37:   ddrc_cache_hi: memory@1040000000 {
>    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:34:      ddrc_cache_lo: memory@80000000 {
>    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:40:      ddrc_cache_hi: memory@1000000000 {
>    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:22:   ddrc_cache_lo: memory@80000000 {
>    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:27:   ddrc_cache_hi: memory@1000000000 {
>    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:57:   ddrc_cache_lo: memory@80000000 {
>    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:63:   ddrc_cache_hi: memory@1040000000 {
>    arch/riscv/boot/dts/thead/th1520-beaglev-ahead.dts:32:  memory@0 {
>    arch/riscv/boot/dts/thead/th1520-lichee-module-4a.dtsi:14:      memory@0 {
>    arch/riscv/boot/dts/sophgo/cv1800b-milkv-duo.dts:26:    memory@80000000 {
>    arch/riscv/boot/dts/sophgo/cv1812h.dtsi:12:     memory@80000000 {
>    arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts:26: memory@80000000 {
>    arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts:25: memory@80000000 {
>    arch/riscv/boot/dts/canaan/k210.dtsi:82:        sram: memory@80000000 {
>
> Based on that, the majority of boards supported by the Linux kernel have
> RAM starting at a non-zero physical address. Am I confusing something?
>
>> Taking the microchip board as an example, RAM actually starts at
>> 0x8000000,
> Today we had a conversation with someone from SiFive in the xen-devel
> channel, and he mentioned that they are using "starfive visionfive2 and
> sifive unleashed platforms", which based on the grep above have RAM not
> at address 0.
>
> Also, QEMU uses 0x8000000.
>
>>  which means that having frametable_base_pdx and assuming it
>> does get set to 0x8000 (which isn't even a certainty, given that I
>> think
>> you'll need struct pages covering the PLICs), then what you are
>> trading
>> off is:
>>
>> * Saving 32k of virtual address space only (no need to even allocate
>> memory for this range of the framtable), by
>> * Having an extra memory load and add/sub in every page <-> mfn
>> conversion, which is a screaming hotpath all over Xen.
>>
>> It's a terribly short-sighted tradeoff.
>>
>> 32k of VA space might be worth saving in a 32bit build (I personally
>> wouldn't - especially as there's no need to share Xen's VA space with
>> guests, given no PV guests on ARM/RISC-V), but it's absolutely not at
>> all in an a 64bit build with TB of VA space available.
>>
>> Even if we do find a board with the first interesting thing in the
>> frametable starting sufficiently away from 0 that it might be worth
>> considering this slide, then it should still be Kconfig-able in a
>> similar way to PDX_COMPRESSION.
> I find your tradeoffs reasonable, but I don't understand how it will
> work if RAM does not start from 0, as the frametable address and the
> RAM address are linked.
> I tried to look at the PDX_COMPRESSION config and couldn't find any
> "slide" there. Could you please clarify this for me?
> If we used this "slide", would it help to avoid the tradeoffs
> mentioned above?
>
> One more question: if we decide to go without frametable_base_pdx,
> would it be sufficient to simply remove mentions of it from the code
> (at least for now)?

There is a relationship between system/host physical addresses (what Xen
calls maddr/mfn), and the frametable.  The frametable has one entry per
mfn.

In the most simple case, there's a 1:1 relationship.  i.e. frametable[0]
= maddr(0), frametable[1] = maddr(4k) etc.  This is very simple, and
very easy to calculate (page_to_mfn()/mfn_to_page()).

The frametable is one big array.  It starts at a compile-time fixed
address, and needs to be long enough to cover everything interesting in
memory.  Therefore it potentially takes a large amount of virtual
address space.

However, only interesting maddrs need to have data in the frametable, so
it's fine for the backing RAM to be sparsely allocated/mapped in the
frametable virtual addresses.

For 64bit, that's really all you need, because there's always far more
virtual address space than physical RAM in the system, even when you're
looking at TB-scale giant servers.


For 32bit, virtual address space is a limited resource.  (Also to an
extent, 64bit x86 with PV guests because we give 98% of the virtual
address space to the guest kernel to use.)

There are two tricks to reduce the virtual address space used, but they
both cost performance in fastpaths.

1) PDX Compression.

PDX compression makes a non-linear mfn <-> maddr mapping.  This is for a
usecase where you've got multiple RAM banks which are separated by a
large distance (and evenly spaced), then you can "compress" a single
range of 0's out of the middle of the system/host physical address.

The cost is that all page <-> mfn conversions need to read two masks and
a shift-count from variables in memory, to split/shift/recombine the
address bits.

2) A slide, which is frametable_base_pdx in this context.

When there's a big gap between 0 and the start of something interesting,
you could chop out that range by just subtracting base_pdx.  What
qualifies as "big" is subjective, but Qemu starting at 128M certainly
does not qualify as big enough to warrant frametable_base_pdx.

This is less expensive than PDX compression.  It only adds one memory
read to the fastpath, but it also doesn't save as much virtual address
space as PDX compression.


When virtual address space is a major constraint (32 bit builds), both
of these techniques are worth doing.  But when there's no constraint on
virtual address space (64 bit builds), there's no reason to use either;
and the performance will definitely improve as a result.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738558.1145387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3og-0005sK-UQ; Tue, 11 Jun 2024 15:53:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738558.1145387; Tue, 11 Jun 2024 15:53:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3og-0005qr-KA; Tue, 11 Jun 2024 15:53:46 +0000
Received: by outflank-mailman (input) for mailman id 738558;
 Tue, 11 Jun 2024 15:53:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sLxx=NN=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sH3of-0005DH-BH
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:45 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c850d614-280a-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 17:53:44 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 3DA734EE075B;
 Tue, 11 Jun 2024 17:53:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c850d614-280a-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [XEN PATCH 6/6] automation/eclair_analysis: clean ECLAIR configuration scripts
Date: Tue, 11 Jun 2024 17:53:36 +0200
Message-Id: <e5bae94c215d85671ed21c39c5dcc3d67d02bbb0.1718117557.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718117557.git.nicola.vetrini@bugseng.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an unused (and already ignored) option from the ECLAIR
integration scripts, and make the help texts consistent with the rest
of the scripts.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/analyze.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/analyze.sh b/automation/eclair_analysis/ECLAIR/analyze.sh
index 0ea5520c93a6..e96456c3c18d 100755
--- a/automation/eclair_analysis/ECLAIR/analyze.sh
+++ b/automation/eclair_analysis/ECLAIR/analyze.sh
@@ -11,7 +11,7 @@ fatal() {
 }
 
 usage() {
-  fatal "Usage: ${script_name} <ARM64|X86_64> <Set0|Set1|Set2|Set3>"
+  fatal "Usage: ${script_name} <ARM64|X86_64> <accepted|monitored>"
 }
 
 if [[ $# -ne 2 ]]; then
@@ -40,7 +40,6 @@ ECLAIR_REPORT_LOG=${ECLAIR_OUTPUT_DIR}/REPORT.log
 if [[ "$1" = "X86_64" ]]; then
   export CROSS_COMPILE=
   export XEN_TARGET_ARCH=x86_64
-  EXTRA_ECLAIR_ENV_OPTIONS=-disable=MC3R1.R20.7
 elif [[ "$1" = "ARM64" ]]; then
   export CROSS_COMPILE=aarch64-linux-gnu-
   export XEN_TARGET_ARCH=arm64
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738552.1145353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oe-0005Fh-Ry; Tue, 11 Jun 2024 15:53:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738552.1145353; Tue, 11 Jun 2024 15:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oe-0005Fa-MX; Tue, 11 Jun 2024 15:53:44 +0000
Received: by outflank-mailman (input) for mailman id 738552;
 Tue, 11 Jun 2024 15:53:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sLxx=NN=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sH3oc-0005DH-PE
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:42 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c5df9a76-280a-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 17:53:41 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 70C834EE0754;
 Tue, 11 Jun 2024 17:53:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5df9a76-280a-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 0/6] address several violations of MISRA Rule 20.7
Date: Tue, 11 Jun 2024 17:53:30 +0200
Message-Id: <cover.1718117557.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

this series addresses several violations of Rule 20.7, and also contains
a small fix to the ECLAIR integration scripts: the change does not
influence the current behaviour, but removes an option that was
mistakenly part of the upstream configuration.

Note that even after applying this series the rule has a few leftover
violations.  Most of those are in x86 code in
xen/arch/x86/include/asm/msi.h .  I did send a patch [1] to deal with
those, limited only to addressing the MISRA violations, but it was
ultimately dropped, by agreement, in favour of a more general cleanup
of the file, which is why those changes are not included here.

[1] https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/

Nicola Vetrini (6):
  automation/eclair: address violations of MISRA C Rule 20.7
  xen/self-tests: address violations of MISRA rule 20.7
  xen/guest_access: address violations of MISRA rule 20.7
  x86emul: address violations of MISRA C Rule 20.7
  x86/irq: address violations of MISRA C Rule 20.7
  automation/eclair_analysis: clean ECLAIR configuration scripts

 automation/eclair_analysis/ECLAIR/analyze.sh     | 3 +--
 automation/eclair_analysis/ECLAIR/deviations.ecl | 8 ++++++++
 xen/arch/x86/x86_emulate/x86_emulate.c           | 4 ++--
 xen/include/xen/guest_access.h                   | 4 ++--
 xen/include/xen/irq.h                            | 2 +-
 xen/include/xen/self-tests.h                     | 8 ++++----
 6 files changed, 18 insertions(+), 11 deletions(-)

-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738560.1145393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oh-0005xY-CN; Tue, 11 Jun 2024 15:53:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738560.1145393; Tue, 11 Jun 2024 15:53:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3og-0005vr-Uw; Tue, 11 Jun 2024 15:53:46 +0000
Received: by outflank-mailman (input) for mailman id 738560;
 Tue, 11 Jun 2024 15:53:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sLxx=NN=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sH3of-0004vT-O1
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:45 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c7aba4e1-280a-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 17:53:43 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 1BA284EE0755;
 Tue, 11 Jun 2024 17:53:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7aba4e1-280a-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 4/6] x86emul: address violations of MISRA C Rule 20.7
Date: Tue, 11 Jun 2024 17:53:34 +0200
Message-Id: <0a502d2a9c5ce13be13281d9de49d263313b7852.1718117557.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718117557.git.nicola.vetrini@bugseng.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
These local helpers could in principle be deviated, but the readability
and functionality are essentially unchanged by complying with the rule,
so modifying the macro definitions seemed the preferred option.
---
 xen/arch/x86/x86_emulate/x86_emulate.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_emulate/x86_emulate.c b/xen/arch/x86/x86_emulate/x86_emulate.c
index 2d5c1de8ecc2..9352d341346a 100644
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2255,7 +2255,7 @@ x86_emulate(
         switch ( modrm_reg & 7 )
         {
 #define GRP2(name, ext) \
-        case ext: \
+        case (ext): \
             if ( ops->rmw && dst.type == OP_MEM ) \
                 state->rmw = rmw_##name; \
             else \
@@ -8611,7 +8611,7 @@ int x86_emul_rmw(
             unsigned long dummy;
 
 #define XADD(sz, cst, mod) \
-        case sz: \
+        case (sz): \
             asm ( "" \
                   COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
                   _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738553.1145357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3of-0005Iw-41; Tue, 11 Jun 2024 15:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738553.1145357; Tue, 11 Jun 2024 15:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oe-0005Gb-Ti; Tue, 11 Jun 2024 15:53:44 +0000
Received: by outflank-mailman (input) for mailman id 738553;
 Tue, 11 Jun 2024 15:53:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH3oc-0005DU-UE; Tue, 11 Jun 2024 15:53:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH3oc-0005UZ-Qm; Tue, 11 Jun 2024 15:53:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH3oc-0000Iv-F9; Tue, 11 Jun 2024 15:53:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sH3oc-0002gP-Ei; Tue, 11 Jun 2024 15:53:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/Q0CE3FDz5ipRBusi4N2Bn2xINV+93b8m2cLQ8AHMxU=; b=ToPfjJcQamLD+GfB+ZyAVE8tg9
	NjuR5TlHUYeyI32qamuMPzTN9xYJUPyN8p0Qax9oe7f6vqBY+lL+HXjZwwWjpfZUTKf8D27zaemaU
	jWa/n5/i9Kb1DzBdouuB/G50rbYkv4GntrCTPADj2uqiICtoxq3TZIoZ+/mjQOU8+7kY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186310: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43de96a70f00b631d0f4c658c232204079b2f2b2
X-Osstest-Versions-That:
    xen=ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Jun 2024 15:53:42 +0000

flight 186310 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186310/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43de96a70f00b631d0f4c658c232204079b2f2b2
baseline version:
 xen                  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3

Last test of basis   186304  2024-06-10 12:04:02 Z    1 days
Testing same since   186310  2024-06-11 13:02:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ea1cb444c2..43de96a70f  43de96a70f00b631d0f4c658c232204079b2f2b2 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738556.1145371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3og-0005ce-13; Tue, 11 Jun 2024 15:53:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738556.1145371; Tue, 11 Jun 2024 15:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3of-0005al-PZ; Tue, 11 Jun 2024 15:53:45 +0000
Received: by outflank-mailman (input) for mailman id 738556;
 Tue, 11 Jun 2024 15:53:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sLxx=NN=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sH3oe-0005DH-8P
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:44 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c75677dd-280a-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 17:53:43 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 830934EE0756;
 Tue, 11 Jun 2024 17:53:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c75677dd-280a-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH 3/6] xen/guest_access: address violations of MISRA rule 20.7
Date: Tue, 11 Jun 2024 17:53:33 +0200
Message-Id: <2dbc4b40261b91de2148e467ce0fdade5cc89c50.1718117557.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718117557.git.nicola.vetrini@bugseng.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 xen/include/xen/guest_access.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index af33ae3ab652..8bd2a124e823 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -49,9 +49,9 @@
     ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)(ptr) })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)(ptr) })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738561.1145413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oi-0006ap-Ol; Tue, 11 Jun 2024 15:53:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738561.1145413; Tue, 11 Jun 2024 15:53:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3oi-0006Yw-Ep; Tue, 11 Jun 2024 15:53:48 +0000
Received: by outflank-mailman (input) for mailman id 738561;
 Tue, 11 Jun 2024 15:53:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sLxx=NN=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sH3og-0004vT-O5
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:46 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c80337eb-280a-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 17:53:44 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id A42F54EE0759;
 Tue, 11 Jun 2024 17:53:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c80337eb-280a-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH 5/6] x86/irq: address violations of MISRA C Rule 20.7
Date: Tue, 11 Jun 2024 17:53:35 +0200
Message-Id: <a34c9483e17d59a79ddb2cc9c74cf5809b8f2e70.1718117557.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718117557.git.nicola.vetrini@bugseng.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
Note that the rule does not apply to f because that parameter
is not used as an expression in the macro, but rather as an identifier.
---
 xen/include/xen/irq.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index adf33547d25f..0401f06c4fca 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -178,7 +178,7 @@ extern struct pirq *pirq_get_info(struct domain *d, int pirq);
 
 #define pirq_field(d, p, f, def) ({ \
     const struct pirq *__pi = pirq_info(d, p); \
-    __pi ? __pi->f : def; \
+    __pi ? __pi->f : (def); \
 })
 #define pirq_to_evtchn(d, pirq) pirq_field(d, pirq, evtchn, 0)
 #define pirq_masked(d, pirq) pirq_field(d, pirq, masked, 0)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738557.1145379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3og-0005jW-Ea; Tue, 11 Jun 2024 15:53:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738557.1145379; Tue, 11 Jun 2024 15:53:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3og-0005g8-6L; Tue, 11 Jun 2024 15:53:46 +0000
Received: by outflank-mailman (input) for mailman id 738557;
 Tue, 11 Jun 2024 15:53:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sLxx=NN=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sH3oe-0004vT-Ns
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:44 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c6fd1c78-280a-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 17:53:42 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 9D6B04EE0758;
 Tue, 11 Jun 2024 17:53:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6fd1c78-280a-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH 2/6] xen/self-tests: address violations of MISRA rule 20.7
Date: Tue, 11 Jun 2024 17:53:32 +0200
Message-Id: <38d6b849e0ed868f1025d4af548dcebe89bda42d.1718117557.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718117557.git.nicola.vetrini@bugseng.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
In this case the use of parentheses can detect misuses of the COMPILE_CHECK
macro for the fn argument that happen to pass the compile-time check
(see e.g. https://godbolt.org/z/n4zTdz595).

An alternative would be to deviate these macros, but since they are used
to check the correctness of other code, it seemed preferable to further
ensure that all usages of the macros are safe.
---
 xen/include/xen/self-tests.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/include/xen/self-tests.h b/xen/include/xen/self-tests.h
index 42a4cc4d17fe..58484fe5a8ae 100644
--- a/xen/include/xen/self-tests.h
+++ b/xen/include/xen/self-tests.h
@@ -19,11 +19,11 @@
 #if !defined(CONFIG_CC_IS_CLANG) || CONFIG_CLANG_VERSION >= 80000
 #define COMPILE_CHECK(fn, val, res)                                     \
     do {                                                                \
-        typeof(fn(val)) real = fn(val);                                 \
+        typeof((fn)(val)) real = (fn)(val);                             \
                                                                         \
         if ( !__builtin_constant_p(real) )                              \
             asm ( ".error \"'" STR(fn(val)) "' not compile-time constant\"" ); \
-        else if ( real != res )                                         \
+        else if ( real != (res) )                                       \
             asm ( ".error \"Compile time check '" STR(fn(val) == res) "' failed\"" ); \
     } while ( 0 )
 #else
@@ -37,9 +37,9 @@
  */
 #define RUNTIME_CHECK(fn, val, res)                     \
     do {                                                \
-        typeof(fn(val)) real = fn(HIDE(val));           \
+        typeof((fn)(val)) real = (fn)(HIDE(val));       \
                                                         \
-        if ( real != res )                              \
+        if ( real != (res) )                            \
             panic("%s: %s(%s) expected %u, got %u\n",   \
                   __func__, #fn, #val, real, res);      \
     } while ( 0 )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 15:53:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 15:53:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738555.1145365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3of-0005Ph-GH; Tue, 11 Jun 2024 15:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738555.1145365; Tue, 11 Jun 2024 15:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3of-0005ML-7k; Tue, 11 Jun 2024 15:53:45 +0000
Received: by outflank-mailman (input) for mailman id 738555;
 Tue, 11 Jun 2024 15:53:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sLxx=NN=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sH3od-0005DH-87
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:53:43 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c66ffc9c-280a-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 17:53:41 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 9693E4EE0757;
 Tue, 11 Jun 2024 17:53:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c66ffc9c-280a-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: nicola.vetrini@bugseng.com,
	xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [XEN PATCH 1/6] automation/eclair: address violations of MISRA C Rule 20.7
Date: Tue, 11 Jun 2024 17:53:31 +0200
Message-Id: <44d392cb30949ed9ddb4551fa7f2a5faa504629f.1718117557.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718117557.git.nicola.vetrini@bugseng.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses".

The helper macro bitmap_switch has parameters that cannot be parenthesized
to comply with the rule, as that would break its functionality.
Moreover, the risk of misuse due to developer confusion is deemed not
substantial enough to warrant a more involved refactor, thus the macro
is deviated for this rule.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661d1..c2698e7074aa 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -463,6 +463,14 @@ of this macro do not lead to developer confusion, and can thus be deviated."
 -config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(^count_args_$))))"}
 -doc_end
 
+-doc_begin="The arguments of the bitmap_switch macro can't be parenthesized as
+the rule would require, without breaking the functionality of the macro. This is
+a specialized local helper macro only used within the bitmap.h header, so it is
+less likely to lead to developer confusion and it is deemed better to deviate it."
+-file_tag+={xen_bitmap_h, "^xen/include/xen/bitmap\\.h$"}
+-config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(loc(file(xen_bitmap_h))&&^bitmap_switch$))))"}
+-doc_end
+
 -doc_begin="Uses of variadic macros that have one of their arguments defined as
 a macro and used within the body for both ordinary parameter expansion and as an
 operand to the # or ## operators have a behavior that is well-understood and
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 16:04:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 16:04:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738621.1145431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3zN-0003co-Ki; Tue, 11 Jun 2024 16:04:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738621.1145431; Tue, 11 Jun 2024 16:04:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH3zN-0003ch-I0; Tue, 11 Jun 2024 16:04:49 +0000
Received: by outflank-mailman (input) for mailman id 738621;
 Tue, 11 Jun 2024 16:04:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uR64=NN=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1sH3zM-0003cb-A1
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 16:04:48 +0000
Received: from sender4-of-o50.zoho.com (sender4-of-o50.zoho.com
 [136.143.188.50]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 516dbaf8-280c-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 18:04:45 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1718121879640188.48405145338086;
 Tue, 11 Jun 2024 09:04:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 516dbaf8-280c-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; t=1718121881; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=T2PNNVP5U5+K9ZnxNZw6iQlsNzR7xvEU5qj9pN5ZWj5NK5TyofSuBFuNcCdIshziM/m7XJvJJIJaElqwOJigIqw0kQ9mut4bWdT4rHGSaq7e4rIAxazOh2HYrkc27v3UhJfIk4E+FmwBNoXC/saqLJzlpB2QX+anRlR3JxIdgBI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1718121881; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=KfeiOOgfpOFgvkhWmHPxuL2X27FsXUGlE70IAUigRFA=; 
	b=AzURdVW6SVPmcMQiyTOKhibd5kvPXBLaTmhjKRi4zkf/HKrEOt9YUmvEuv05tbDgnZ0XcYUs2qp2AHoaDJ+ZynZVqezW0b+Rm+Zzw/0MhdiJUq25uGz+Rj8ISMreT/w857WHJrLk4lVaUMguGyO5UpL8+XIzJNXNlIJ/bGH4MUs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1718121881;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:Cc:Cc:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=KfeiOOgfpOFgvkhWmHPxuL2X27FsXUGlE70IAUigRFA=;
	b=H2qWCh+MmVBKIsA+oLMtksxMlitWUla4Vy4TrhIgDL6WHV/OXQqc3Y9CMwpG9DEM
	M+czwmDb/j8Z1jrsk7BFseGc5EKpml9e77rZeVsH9jJwEH2Vf1v+TkoohF8d6ANpu5u
	9e/Q2hX5KMAkohNaPlIkvYn6kklrTgY1fkn8r+MI=
Message-ID: <6a5e71a6-52cb-4795-af8e-5e674754ab63@apertussolutions.com>
Date: Tue, 11 Jun 2024 12:04:37 -0400
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] MAINTAINERS: alter EFI section
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Marek Marczykowski <marmarek@invisiblethingslab.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Autocrypt: addr=dpsmith@apertussolutions.com; keydata=
 xsJuBFYrueARCACPWL3r2bCSI6TrkIE/aRzj4ksFYPzLkJbWLZGBRlv7HQLvs6i/K4y/b4fs
 JDq5eL4e9BdfdnZm/b+K+Gweyc0Px2poDWwKVTFFRgxKWq9R7McwNnvuZ4nyXJBVn7PTEn/Z
 G7D08iZg94ZsnUdeXfgYdJrqmdiWA6iX9u84ARHUtb0K4r5WpLUMcQ8PVmnv1vVrs/3Wy/Rb
 foxebZNWxgUiSx+d02e3Ad0aEIur1SYXXv71mqKwyi/40CBSHq2jk9eF6zmEhaoFi5+MMMgX
 X0i+fcBkvmT0N88W4yCtHhHQds+RDbTPLGm8NBVJb7R5zbJmuQX7ADBVuNYIU8hx3dF3AQCm
 601w0oZJ0jGOV1vXQgHqZYJGHg5wuImhzhZJCRESIwf+PJxik7TJOgBicko1hUVOxJBZxoe0
 x+/SO6tn+s8wKlR1Yxy8gYN9ZRqV2I83JsWZbBXMG1kLzV0SAfk/wq0PAppA1VzrQ3JqXg7T
 MZ3tFgxvxkYqUP11tO2vrgys+InkZAfjBVMjqXWHokyQPpihUaW0a8mr40w9Qui6DoJj7+Gg
 DtDWDZ7Zcn2hoyrypuht88rUuh1JuGYD434Q6qwQjUDlY+4lgrUxKdMD8R7JJWt38MNlTWvy
 rMVscvZUNc7gxcmnFUn41NPSKqzp4DDRbmf37Iz/fL7i01y7IGFTXaYaF3nEACyIUTr/xxi+
 MD1FVtEtJncZNkRn7WBcVFGKMAf+NEeaeQdGYQ6mGgk++i/vJZxkrC/a9ZXme7BhWRP485U5
 sXpFoGjdpMn4VlC7TFk2qsnJi3yF0pXCKVRy1ukEls8o+4PF2JiKrtkCrWCimB6jxGPIG3lk
 3SuKVS/din3RHz+7Sr1lXWFcGYDENmPd/jTwr1A1FiHrSj+u21hnJEHi8eTa9029F1KRfocp
 ig+k0zUEKmFPDabpanI323O5Tahsy7hwf2WOQwTDLvQ+eqQu40wbb6NocmCNFjtRhNZWGKJS
 b5GrGDGu/No5U6w73adighEuNcCSNBsLyUe48CE0uTO7eAL6Vd+2k28ezi6XY4Y0mgASJslb
 NwW54LzSSM0uRGFuaWVsIFAuIFNtaXRoIDxkcHNtaXRoQGFwZXJ0dXNzb2x1dGlvbnMuY29t
 PsJ6BBMRCAAiBQJWK7ngAhsjBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRBTc6WbYpR8
 KrQ9AP94+xjtFfJ8gj5c7PVx06Zv9rcmFUqQspZ5wSEkvxOuQQEAg6qEsPYegI7iByLVzNEg
 7B7fUG7pqWIfMqFwFghYhQzOwU0EViu54BAIAL6MXXNlrJ5tRUf+KMBtVz1LJQZRt/uxWrCb
 T06nZjnbp2UcceuYNbISOVHGXTzu38r55YzpkEA8eURQf+5hjtvlrOiHxvpD+Z6WcpV6rrMB
 kcAKWiZTQihW2HoGgVB3gwG9dCh+n0X5OzliAMiGK2a5iqnIZi3o0SeW6aME94bSkTkuj6/7
 OmH9KAzK8UnlhfkoMg3tXW8L6/5CGn2VyrjbB/rcrbIR4mCQ+yCUlocuOjFCJhBd10AG1IcX
 OXUa/ux+/OAV9S5mkr5Fh3kQxYCTcTRt8RY7+of9RGBk10txi94dXiU2SjPbassvagvu/hEi
 twNHms8rpkSJIeeq0/cAAwUH/jV3tXpaYubwcL2tkk5ggL9Do+/Yo2WPzXmbp8vDiJPCvSJW
 rz2NrYkd/RoX+42DGqjfu8Y04F9XehN1zZAFmCDUqBMa4tEJ7kOT1FKJTqzNVcgeKNBGcT7q
 27+wsqbAerM4A0X/F/ctjYcKwNtXck1Bmd/T8kiw2IgyeOC+cjyTOSwKJr2gCwZXGi5g+2V8
 NhJ8n72ISPnOh5KCMoAJXmCF+SYaJ6hIIFARmnuessCIGw4ylCRIU/TiXK94soilx5aCqb1z
 ke943EIUts9CmFAHt8cNPYOPRd20pPu4VFNBuT4fv9Ys0iv0XGCEP+sos7/pgJ3gV3pCOric
 p15jV4PCYQQYEQgACQUCViu54AIbDAAKCRBTc6WbYpR8Khu7AP9NJrBUn94C/3PeNbtQlEGZ
 NV46Mx5HF0P27lH3sFpNrwD/dVdZ5PCnHQYBZ287ZxVfVr4Zuxjo5yJbRjT93Hl0vMY=
In-Reply-To: <5b9d57b4-bd28-4523-bb80-f4a5912eb3e8@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

On 6/10/24 02:38, Jan Beulich wrote:
> To get past the recurring friction on the approach to take wrt
> workarounds needed for various firmware flaws, I'm stepping down as the
> maintainer of our code interfacing with EFI firmware. Two new
> maintainers are being introduced in my place.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> For the new maintainers, here's a 1st patch to consider right away:
> https://lists.xen.org/archives/html/xen-devel/2024-03/msg00931.html.
> 
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -308,7 +308,9 @@ F:	automation/eclair_analysis/
>   F:	automation/scripts/eclair
>   
>   EFI
> -M:	Jan Beulich <jbeulich@suse.com>
> +M:	Daniel P. Smith <dpsmith@apertussolutions.com>
> +M:	Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> +R:	Jan Beulich <jbeulich@suse.com>
>   S:	Supported
>   F:	xen/arch/x86/efi/
>   F:	xen/arch/x86/include/asm/efi*.h

Acked-by: Daniel P. Smith <dpsmith@apertussolutions.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 16:22:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 16:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738630.1145445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH4Fv-0007Xg-2v; Tue, 11 Jun 2024 16:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738630.1145445; Tue, 11 Jun 2024 16:21:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH4Fv-0007XZ-0G; Tue, 11 Jun 2024 16:21:55 +0000
Received: by outflank-mailman (input) for mailman id 738630;
 Tue, 11 Jun 2024 16:21:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b7dS=NN=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sH4Fu-0007XT-1B
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 16:21:54 +0000
Received: from mail-qk1-x735.google.com (mail-qk1-x735.google.com
 [2607:f8b0:4864:20::735])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b5423b95-280e-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 18:21:51 +0200 (CEST)
Received: by mail-qk1-x735.google.com with SMTP id
 af79cd13be357-797a7f9b52eso114272885a.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 09:21:51 -0700 (PDT)
Received: from localhost ([213.195.114.223]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-795330b2426sm529069285a.73.2024.06.11.09.21.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 09:21:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5423b95-280e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718122910; x=1718727710; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=bgcEAoD20pyXujmelPMl1KRx/ATOqc4py2jWXdGdleI=;
        b=g8fGrOjCsB2lhH7FX1SYiZcyt2XZI2mXsXhhjq+sZzkc3ISlwgeDBKiycQlKRebyYy
         Robt0+U7CroVAaDQ0OeWbVamRnEJPChO7sRZNls/ib4bEZqnH8Dq+lyRVMLjiu9bTFFh
         8btGILhHTDSCp94MzdrXrB2xKD6m36V2fdG0Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718122910; x=1718727710;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=bgcEAoD20pyXujmelPMl1KRx/ATOqc4py2jWXdGdleI=;
        b=W07g2eyurAh3k6X9D2vZM4C8FgnA4yAhOchbWCzkWphgcyDP4ykPQdEIDLplqcnfXm
         wg0lehtkeREfBlRbwHNCZ541HiyhSpALUU1pkT6cXGOQm2gxjuVzHxkCWctDcNYdeq9l
         +5M2DPCUkuzgI53d84Jv9R8/X8SJu0XcdHXv3fMpW/wBvENmqVKOQR3o6B9iCHliHvf6
         GcTbmustmuN0Cuuh4aNyKnGLAHBrIY8b0EcrA15JqNUXj6Y4TRtzVEAxUk6qkKUNuevo
         rVVgZ6z0RJtMi7qHTm7bZXvj8F1xgS27A2fmbDASoK2scWSt5W6K2/XOOoDsz62exFHM
         Scog==
X-Gm-Message-State: AOJu0YyiWEjqiRABcZ232T7uBoDYspuEaLfoekdh9HHiIZwN4dpnbvhj
	41zu6oweyb4uS9FZ+aW1Wt4XBrMxN9sL2ZxJSTzi7/4dlqmeJZptHC8pwo0d/Gw=
X-Google-Smtp-Source: AGHT+IE2Pd/Q1egGA5IbjzttdKRI0ftwMFmKGnFSUKoVi1Rggkl4QF6W+uKp/VK9T8nZtGB8WNGiUA==
X-Received: by 2002:a05:620a:c50:b0:794:f330:6caa with SMTP id af79cd13be357-7953c49de71mr1389368185a.58.1718122909639;
        Tue, 11 Jun 2024 09:21:49 -0700 (PDT)
Date: Tue, 11 Jun 2024 18:21:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
Message-ID: <Zmh5mw15_FwITnj1@macbook>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook>
 <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook>
 <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
 <ZmgwKmcLDJDhIsl7@macbook>
 <b076dc8d-701e-4a9f-a147-c54673959009@suse.com>
 <ZmhWtEyuwjTuIAxK@macbook>
 <beb67703-c1f0-490a-a3ad-36e2f331f5e4@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <beb67703-c1f0-490a-a3ad-36e2f331f5e4@suse.com>

On Tue, Jun 11, 2024 at 04:53:22PM +0200, Jan Beulich wrote:
> On 11.06.2024 15:52, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 01:52:58PM +0200, Jan Beulich wrote:
> >> On 11.06.2024 13:08, Roger Pau Monné wrote:
> >>> I really wonder whether Xen has enough information to figure out
> >>> whether a hole (MMIO region) is supposed to be accessed as UC or
> >>> something else.
> >>
> >> It certainly hasn't, and hence is erring on the (safe) side of forcing
> >> UC.
> > 
> > Except that for the vesa framebuffer at least this is a bad choice :).
> 
> Well, yes, that's where we want WC to be permitted. But for that we only
> need to avoid setting iPAT; we still can uniformly hand back UC. Except
> (as mentioned elsewhere earlier) if the guest uses MTRRs rather than PAT
> to arrange for WC.

If we want to get this into 4.19, we likely want to go with your
proposed approach then, as it's less risky.

I think a comment would be helpful to note that the fix here, not
enforcing iPAT while still returning UC, is mostly done to accommodate
vesa regions mapped with PAT attributes so they can use WC.

I would also like to add some kind of note that special casing
!mfn_valid() might not be needed, but that removing it must be done
carefully to not cause regressions.

> >>>  Maybe the mfn_valid() check should be
> >>> inverted, and return WB when the underlying mfn is RAM, and otherwise
> >>> use the guest MTRRs to decide the cache attribute?
> >>
> >> First: Whether WB is correct for RAM isn't known. With some peculiar device
> >> assigned, the guest may want/need part of its RAM be e.g. WC or WT. (It's
> >> only without any physical devices assigned that we can be quite sure that
> >> WB is good for all of RAM.) Therefore, second, I think respecting MTRRs for
> >> RAM is less likely to cause problems than respecting them for MMIO.
> >>
> >> I think at this point the main question is: Do we want to do things at least
> >> along the lines of this v1, or do we instead feel certain enough to switch
> >> the mfn_valid() to a comparison against INVALID_MFN (and perhaps moving it
> >> up to almost the top of the function)?
> > 
> > My preferred option would be the later, as that would remove a special
> > casing.  However, I'm unsure how much fallout this could cause - those
> > caching changes are always tricky and lead to unexpected fallout.
> 
> Which is the very reason why I tried to avoid going too far with this.
> 
> > OTOH the current !mfn_valid() check is very restrictive, as it forces
> > all MMIO to UC.
> 
> Which is why, in this v1, I'm relaxing only the iPAT part.
> 
> >  So by removing it we allow guest chosen types to take
> > effect, which are likely less restrictive than UC (whether those are
> > correct is another question).
> 
> No, guest chosen types still wouldn't come into play, due to what the
> switch() further down in the function does for p2m_mmio_direct.

Indeed.  That should also be removed if we decide for MMIO cache
attributes to be controlled by guest MTRRs.

> 
> >> One caveat here that I forgot to
> >> mention before: MFNs taken out of EPT entries will never be INVALID_MFN, for
> >> the truncation that happens when populating entries. In that case we rely on
> >> mfn_valid() to be "rejecting" them.
> > 
> > The only caller where mfns from EPT entries are passed to
> > epte_get_entry_emt() is in resolve_misconfig() AFAICT, and in that
> > case the EPT entry must be present for epte_get_entry_emt() to be
> > called.  So it seems to me that epte_get_entry_emt() can never be
> > called from an mfn constructed from an INVALID_MFN EPT entry (but it's
> > worth adding an assert for it).
> 
> Are you sure? I agree for the first of those two calls, but the second
> isn't quite as obvious. There we'd need to first prove that we will
> never create non-present super-page entries. Yet if I'm not mistaken
> for PoD we may create such.

I should go look then, didn't know PoD would do that.

Regards, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 16:51:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 16:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738640.1145455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH4i5-0004Wt-3n; Tue, 11 Jun 2024 16:51:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738640.1145455; Tue, 11 Jun 2024 16:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH4i5-0004Wm-19; Tue, 11 Jun 2024 16:51:01 +0000
Received: by outflank-mailman (input) for mailman id 738640;
 Tue, 11 Jun 2024 16:50:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GLyx=NN=toxicpanda.com=josef@srs-se1.protection.inumbo.net>)
 id 1sH4i3-0004Wg-Hs
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 16:50:59 +0000
Received: from mail-yb1-xb2c.google.com (mail-yb1-xb2c.google.com
 [2607:f8b0:4864:20::b2c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6544281-2812-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 18:50:58 +0200 (CEST)
Received: by mail-yb1-xb2c.google.com with SMTP id
 3f1490d57ef6-dfa727cde2dso6227383276.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 09:50:58 -0700 (PDT)
Received: from localhost (syn-076-182-020-124.res.spectrum.com.
 [76.182.20.124]) by smtp.gmail.com with ESMTPSA id
 00721157ae682-62ccaef2825sm20935207b3.139.2024.06.11.09.50.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 Jun 2024 09:50:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6544281-2812-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=toxicpanda-com.20230601.gappssmtp.com; s=20230601; t=1718124657; x=1718729457; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=k7qH/Egkqtc5OmSCKcG8suDn1pwScsKWRQ+gioBgKgY=;
        b=PJESTWoQVy6Dhkt6D7dBOsO6aw4Cg07a8vF8akRIb2EBaRYSDHChl7n57hw7EODl9s
         KOv4+Ddfrh2UXXVaGaWsBV/g/BGIm+McSqa3ftaffhA0pzKIkAIjtK7kK0cSJIg1gzEA
         DCEEsJ6rjoe3/pcBAClEQzKGjL0yj0xnEKNkatCP8TCYDLEV/FqkDQf3pX58H5Lqc6lJ
         DLcsRkC73Fm2jHWr66DOXlWbZJXdzCiKdohpILjRNUuQL30LxgxyySFR0EnFjQbpnO99
         NVAf+vSjTpuCkyXUPi9TiWIJre2itl3aIip9jtko6ZSqsixxJVb2q6JpkZVy6rJcIL9o
         /NyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718124657; x=1718729457;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=k7qH/Egkqtc5OmSCKcG8suDn1pwScsKWRQ+gioBgKgY=;
        b=w0OANrEeaXOW833fM9sCZg98eX7bKPkmtqC56RgZENoJHrsxnUEzPrvD4v8z9/EHe5
         covFfXYt956SYFr6a2AJ0CkddhYNuP7X5KXgLfUnKOIpldmWqlucGLGzpCQpuQw0nIXE
         v5O7OxvGo61+vxQikkP5mALTZ6lRf5eJ/zsv8MHwRTlV9pLkYj/8oEstSD4OkEzxIuSy
         IOt5yPFnNUcp1O0pwOu9+o4FYP7QGlyj68MZmN+1CBBgDN6tmE0VLW2nYDbb42qggz+x
         IyharMJpcrb16PeUU8H1W+Y2oOzKPncDqhXWzrO4OXbPSXA80FsVdD/TNQfBl4jaAZNc
         gSFw==
X-Forwarded-Encrypted: i=1; AJvYcCUQzyq19Md2j3RF4F3H7qNClVFbHygpIq1IwRq0BBV2Ql+jAK9y4maXTr6QfepHQLV7LyPiFC4PRE6F52ZirXvQ942O9B4OlcwdgRJm7zo=
X-Gm-Message-State: AOJu0YxjGEzzCi529dKk1ELVPMyIhQHkBfAFo7CznKYXxm+/Mr47pzCt
	XPXOZuuLzvfs5i6aTqUKiaVRb1GxVjHK/Jg6HyWXd4t54Gz44xWUpDgLN2D5fxM=
X-Google-Smtp-Source: AGHT+IFLbNcRGoXteJvsvQz9ePKtTqtffGprFSIVvQoC+S7q0aJ0DFFhhqHjAFy+KyUVXYjD8ktBew==
X-Received: by 2002:a0d:d851:0:b0:618:95a3:70b9 with SMTP id 00721157ae682-62cd565129cmr130634777b3.36.1718124656832;
        Tue, 11 Jun 2024 09:50:56 -0700 (PDT)
Date: Tue, 11 Jun 2024 12:50:55 -0400
From: Josef Bacik <josef@toxicpanda.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 09/26] nbd: move setting the cache control flags to
 __nbd_set_size
Message-ID: <20240611165055.GD247672@perftesting>
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-10-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240611051929.513387-10-hch@lst.de>

On Tue, Jun 11, 2024 at 07:19:09AM +0200, Christoph Hellwig wrote:
> Move setting the cache control flags in nbd in preparation for moving
> these flags into the queue_limits structure.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Josef Bacik <josef@toxicpanda.com>

Thanks,

Josef


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 18:24:07 2024
Message-ID: <1b3b389156ad924f00af8af1d173b89fc533050e.camel@gmail.com>
Subject: Re: [PATCH v12 5/8] xen/riscv: add minimal stuff to mm.h to build
 full Xen
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, George
 Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Date: Tue, 11 Jun 2024 20:23:43 +0200
In-Reply-To: <e80e30c9-6558-4b70-ab2f-18c34c359dae@citrix.com>
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
	 <d00b86f41ef2c7d928a28dadd8c34fb845f23d0a.1717008161.git.oleksii.kurochko@gmail.com>
	 <70128dba-498f-4d85-8507-bb1621182754@citrix.com>
	 <7721c1b4eb0ea76cca5460264954d40d639499b7.camel@gmail.com>
	 <e80e30c9-6558-4b70-ab2f-18c34c359dae@citrix.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Tue, 2024-06-11 at 16:53 +0100, Andrew Cooper wrote:
> On 30/05/2024 7:22 pm, Oleksii K. wrote:
> > On Thu, 2024-05-30 at 18:23 +0100, Andrew Cooper wrote:
> > > On 29/05/2024 8:55 pm, Oleksii Kurochko wrote:
> > > > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > > > Acked-by: Jan Beulich <jbeulich@suse.com>
> > > This patch looks like it can go in independently?  Or does it
> > > depend on having bitops.h working in practice?
> > >
> > > However, one very strong suggestion...
> > >
> > >
> > > > diff --git a/xen/arch/riscv/include/asm/mm.h
> > > > b/xen/arch/riscv/include/asm/mm.h
> > > > index 07c7a0abba..cc4a07a71c 100644
> > > > --- a/xen/arch/riscv/include/asm/mm.h
> > > > +++ b/xen/arch/riscv/include/asm/mm.h
> > > > @@ -3,11 +3,246 @@
> > > > <snip>
> > > > +/* PDX of the first page in the frame table. */
> > > > +extern unsigned long frametable_base_pdx;
> > > > +
> > > > +/* Convert between machine frame numbers and page-info structures. */
> > > > +#define mfn_to_page(mfn)                                            \
> > > > +    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
> > > > +#define page_to_mfn(pg)                                             \
> > > > +    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
> > > Do yourself a favour and don't introduce frametable_base_pdx to
> > > begin with.
> > >
> > > Every RISC-V board I can find has things starting from 0 in
> > > physical address space, with RAM starting immediately after.
> > I checked the Linux kernel and grepped there:
> >    [ok@fedora linux-aia]$ grep -Rni "memory@" arch/riscv/boot/dts/ --exclude "*.tmp" -I
> >    arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi:33:    memory@40000000 {
> >    arch/riscv/boot/dts/starfive/jh7100-common.dtsi:28:    memory@80000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-sev-kit.dts:49:     ddrc_cache: memory@1000000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:33:  ddrc_cache_lo: memory@80000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:37:  ddrc_cache_hi: memory@1040000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:34:     ddrc_cache_lo: memory@80000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:40:     ddrc_cache_hi: memory@1000000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:22:  ddrc_cache_lo: memory@80000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:27:  ddrc_cache_hi: memory@1000000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:57:  ddrc_cache_lo: memory@80000000 {
> >    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:63:  ddrc_cache_hi: memory@1040000000 {
> >    arch/riscv/boot/dts/thead/th1520-beaglev-ahead.dts:32:  memory@0 {
> >    arch/riscv/boot/dts/thead/th1520-lichee-module-4a.dtsi:14:     memory@0 {
> >    arch/riscv/boot/dts/sophgo/cv1800b-milkv-duo.dts:26:   memory@80000000 {
> >    arch/riscv/boot/dts/sophgo/cv1812h.dtsi:12:    memory@80000000 {
> >    arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts:26: memory@80000000 {
> >    arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts:25: memory@80000000 {
> >    arch/riscv/boot/dts/canaan/k210.dtsi:82:        sram: memory@80000000 {
> >
> > Based on that, the majority of boards supported by the Linux kernel
> > have RAM starting not from 0 in physical address space. Am I
> > confusing something?
> >
> > > Taking the microchip board as an example, RAM actually starts at
> > > 0x8000000,
> > Today we had a conversation with a guy from SiFive in the xen-devel
> > channel, and he mentioned that they are using "starfive visionfive2
> > and sifive unleashed platforms", which based on the grep above have
> > RAM not at address 0.
> >
> > Also, QEMU uses 0x8000000.
> >
> > > which means that having frametable_base_pdx and assuming it does
> > > get set to 0x8000 (which isn't even a certainty, given that I
> > > think you'll need struct pages covering the PLICs), then what you
> > > are trading off is:
> > >
> > > * Saving 32k of virtual address space only (no need to even
> > > allocate memory for this range of the frametable), by
> > > * Having an extra memory load and add/sub in every page <-> mfn
> > > conversion, which is a screaming hotpath all over Xen.
> > >
> > > It's a terribly short-sighted tradeoff.
> > >
> > > 32k of VA space might be worth saving in a 32-bit build (I
> > > personally wouldn't - especially as there's no need to share
> > > Xen's VA space with guests, given no PV guests on ARM/RISC-V),
> > > but it's absolutely not at all in a 64-bit build with TBs of VA
> > > space available.
> > >
> > > Even if we do find a board with the first interesting thing in
> > > the frametable starting sufficiently far away from 0 that it
> > > might be worth considering this slide, then it should still be
> > > Kconfig-able in a similar way to PDX_COMPRESSION.
> > I find your tradeoffs reasonable, but I don't understand how it
> > will work if RAM does not start from 0, as the frametable address
> > and the RAM address are linked.
> > I tried to look at the PDX_COMPRESSION config and couldn't find any
> > "slide" there. Could you please clarify this for me?
> > If we used this "slide", would it help to avoid the tradeoffs
> > mentioned above?
> >
> > One more question: if we decide to go without frametable_base_pdx,
> > would it be sufficient to simply remove mentions of it from the
> > code (at least, for now)?
>
> There is a relationship between system/host physical addresses (what
> Xen calls maddr/mfn) and the frametable.  The frametable has one
> entry per mfn.
>
> In the most simple case, there's a 1:1 relationship, i.e.
> frametable[0] = maddr(0), frametable[1] = maddr(4k), etc.  This is
> very simple, and very easy to calculate
> (page_to_mfn()/mfn_to_page()).
>
> The frametable is one big array.  It starts at a compile-time fixed
> address, and needs to be long enough to cover everything interesting
> in memory.  Therefore it potentially takes a large amount of virtual
> address space.
>
> However, only interesting maddrs need to have data in the
> frametable, so it's fine for the backing RAM to be sparsely
> allocated/mapped in the frametable virtual addresses.
>
> For 64-bit, that's really all you need, because there's always far
> more virtual address space than physical RAM in the system, even
> when you're looking at TB-scale giant servers.
>
>
> For 32-bit, virtual address space is a limited resource.  (Also to
> an extent, 64-bit x86 with PV guests, because we give 98% of the
> virtual address space to the guest kernel to use.)
>
> There are two tricks to reduce the virtual address space used, but
> they both cost performance in fastpaths.
>
> 1) PDX Compression.
>
> PDX compression makes a non-linear mfn <-> maddr mapping.  This is
> for a usecase where you've got multiple RAM banks which are
> separated by a large distance (and evenly spaced); then you can
> "compress" a single range of 0's out of the middle of the
> system/host physical address.
>
> The cost is that all page <-> mfn conversions need to read two masks
> and a shift-count from variables in memory, to split/shift/recombine
> the address bits.
>
> 2) A slide, which is frametable_base_pdx in this context.
>
> When there's a big gap between 0 and the start of something
> interesting, you can chop out that range by just subtracting
> base_pdx.  What qualifies as "big" is subjective, but Qemu starting
> at 128M certainly does not qualify as big enough to warrant
> frametable_base_pdx.
>
> This is less expensive than PDX compression.  It only adds one
> memory read to the fastpath, but it also doesn't save as much
> virtual address space as PDX compression.
>
>
> When virtual address space is a major constraint (32-bit builds),
> both of these techniques are worth doing.  But when there's no
> constraint on virtual address space (64-bit builds), there's no
> reason to use either, and the performance will definitely improve as
> a result.

Thanks for such a good explanation.

For RISC-V we have the PDX config disabled, as I haven't seen multiple
RAM banks on boards which have the hypervisor extension. Thereby
mfn_to_pdx() and pdx_to_mfn() do nothing. The same goes for
frametable_base_pdx: with PDX disabled, it is just an offset (or a
slide).

IIUC, you meant that it makes sense to map the frametable not to the
address where RAM starts, but to 0, so that we don't need this
+-frametable_base_pdx. The price for that is losing VA space for the
range from 0 to the RAM start address.

Right now, we are trying to support 3 boards with the following RAM
start addresses:
1. 0x8000_0000 - QEMU and SiFive boards
2. 0x40_0000_0000 - Microchip board

So if we map the frametable to 0 (not to the RAM start) we will lose:
1. 0x8_0000 (number of page entries to cover the range [0,
0x8000_0000]) * 64 (size of struct page_info) = 32 MB
2. 0x400_0000 (number of page entries to cover the range [0,
0x40_0000_0000]) * 64 (size of struct page_info) = 4 GB

In terms of available virtual address space for RV-64 we can consider
both options as acceptable.

[OPTION 1] If we accept losing 4 GB of VA, then we could implement
mfn_to_page() and page_to_mfn() in the following way:
```
   diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
   index cc4a07a71c..fdac7e0646 100644
   --- a/xen/arch/riscv/include/asm/mm.h
   +++ b/xen/arch/riscv/include/asm/mm.h
   @@ -107,14 +107,11 @@ struct page_info

    #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)

   -/* PDX of the first page in the frame table. */
   -extern unsigned long frametable_base_pdx;
   -
    /* Convert between machine frame numbers and page-info structures. */
    #define mfn_to_page(mfn)                                            \
   -    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
   +    (frame_table + (mfn))
    #define page_to_mfn(pg)                                             \
   -    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
   +    ((unsigned long)((pg) - frame_table))

    static inline void *page_to_virt(const struct page_info *pg)
    {
   diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
   index 9c0fd80588..8f6dbdc699 100644
   --- a/xen/arch/riscv/mm.c
   +++ b/xen/arch/riscv/mm.c
   @@ -15,7 +15,7 @@
    #include <asm/page.h>
    #include <asm/processor.h>

   -unsigned long __ro_after_init frametable_base_pdx;
    unsigned long __ro_after_init frametable_virt_end;

    struct mmu_desc {
```

[OPTION 2] If we do not accept losing 4 GB of VA, it could be reworked
in the following way:
```
   diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
   index cc4a07a71c..fdac7e0646 100644
   --- a/xen/arch/riscv/include/asm/mm.h
   +++ b/xen/arch/riscv/include/asm/mm.h
   @@ -107,14 +107,11 @@ struct page_info

    #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)

   -/* PDX of the first page in the frame table. */
   -extern unsigned long frametable_base_pdx;
   -
    /* Convert between machine frame numbers and page-info structures. */
    #define mfn_to_page(mfn)                                            \
   -    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
   +    (frame_table + (mfn) - FRAMETABLE_BASE_OFFSET)
    #define page_to_mfn(pg)                                             \
   -    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
   +    ((unsigned long)((pg) - frame_table) + FRAMETABLE_BASE_OFFSET)

    static inline void *page_to_virt(const struct page_info *pg)
    {
   diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
   index 9c0fd80588..8f6dbdc699 100644
   --- a/xen/arch/riscv/mm.c
   +++ b/xen/arch/riscv/mm.c
   @@ -15,7 +15,7 @@
    #include <asm/page.h>
    #include <asm/processor.h>

   -unsigned long __ro_after_init frametable_base_pdx;
    unsigned long __ro_after_init frametable_virt_end;

    struct mmu_desc {
```

And I am not sure that there is any sense in option 2, as it is
basically the same as having the following macro definitions with PDX
disabled:
```
   /* Convert between machine frame numbers and page-info structures. */
   #define mfn_to_page(mfn)                                             \
       (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
   #define page_to_mfn(pg)                                              \
       pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
```
The only case where option 2 makes sense is when FRAMETABLE_BASE_OFFSET
is equal to 0, so the compiler will generate the code without an
additional sub/add of FRAMETABLE_BASE_OFFSET.

Could you please clarify whether my understanding is correct?

Should we still change the definitions of mfn_to_page() and
page_to_mfn(), given that PDX is disabled? If yes, is OPTION 1 okay,
or am I missing something?

Thanks in advance.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:19:18 2024
Message-ID: <fc9fb8dd-05e4-48b8-ab01-d1dd84996df4@acm.org>
Date: Tue, 11 Jun 2024 12:18:40 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 01/26] sd: fix sd_is_zoned
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-2-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-2-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> Since commit 7437bb73f087 ("block: remove support for the host aware zone
> model"), only ZBC devices expose a zoned access model.  sd_is_zoned is
> used to check for that and thus return false for host aware devices.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:19:49 2024
Date: Tue, 11 Jun 2024 12:19:42 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com, Mike Rapoport
 <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>, "K. Y. Srinivasan"
 <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu
 <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Xuan Zhuo <xuanzhuo@linux.alibaba.com>, Eugenio =?ISO-8859-1?Q?P=E9rez?=
 <eperezma@redhat.com>, Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Oleksandr Tyshchenko
 <oleksandr_tyshchenko@epam.com>, Alexander Potapenko <glider@google.com>,
 Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
Message-Id: <20240611121942.050a2215143af0ecb576122f@linux-foundation.org>
In-Reply-To: <2ed64218-7f3b-4302-a5dc-27f060654fe2@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
	<20240607090939.89524-2-david@redhat.com>
	<2ed64218-7f3b-4302-a5dc-27f060654fe2@redhat.com>
X-Mailer: Sylpheed 3.8.0beta1 (GTK+ 2.24.33; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand <david@redhat.com> wrote:

> On 07.06.24 11:09, David Hildenbrand wrote:
> > In preparation for further changes, let's teach __free_pages_core()
> > about the differences of memory hotplug handling.
> > 
> > Move the memory hotplug specific handling from generic_online_page() to
> > __free_pages_core(), use adjust_managed_page_count() on the memory
> > hotplug path, and spell out why memory freed via memblock
> > cannot currently use adjust_managed_page_count().
> > 
> > Signed-off-by: David Hildenbrand <david@redhat.com>
> > ---
> 
> @Andrew, can you squash the following?

Sure.

I queued it against "mm: pass meminit_context to __free_pages_core()",
not against

> Subject: [PATCH] fixup: mm/highmem: make nr_free_highpages() return "unsigned
>   long"



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:21:11 2024
        b=K7HOnmAtYOzms/86wmhpdnEU1Rlr/a1RDbXajERTucHtMyR/Jcye3jp8R3j9Jlb+me
         eVNG0QRDWJANDaphI082Oc9mogahNEhlt4EWofFBOJN7GykfngOMzN0crOtYj9Y3icqp
         JxaKFvaTDD5Jp3SLRaeWJUH33Xe4bpGfexktruzauN5dX1vY7ftqfjxe9dX7mdB8pD80
         mx168rNjslJCAAzgt1iyR5bNK7iUbb/qbidKRvYIVxSntqQD9etuty9tEYBiZ3aOZZdn
         3nnJoGjAUhvYgl+jrtopI77z93QiVbWBbaxQOh6Apw10Kvg5A1bZh6zWeNX0YLELsuvg
         Vc5A==
X-Forwarded-Encrypted: i=1; AJvYcCVw5u4UD5yaJvmqmZNsFv4nUwW2bIHaDuZjqxZe8z3mtH3CrHVTHXlOzxEbf5qSDu1OC8qGhknlJvJKjqOdQkFgDmST/tCkZRluCrBNvMA=
X-Gm-Message-State: AOJu0Yz528SgFtI2vKWpZavIZd5bLRa/EGQQao4uBJVWcBELoiBL3aVW
	vRvEhiqs6ukZDoETzJx+FNfKUf47rXOVWl9l3g+sVJPggN4rvSqJrygkLsy9AEjEfMpSOp3mMPV
	n9Kb6ZZlLbnGvwr2iIFFtxCeWnNmYdz1Cyj+eiX0GRbiWkPRY5NqtBau2gOy9V52+
X-Received: by 2002:a05:600c:4e87:b0:421:7f30:7cfb with SMTP id 5b1f17b1804b1-4217f308036mr79478505e9.40.1718133660222;
        Tue, 11 Jun 2024 12:21:00 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IGZHgVQLSKwOZIrNSQnQXJqFPIbdpuB578AjS4z7cwEh73NInG9mIJ1L/Lk0ZLrDzgll6w0ZQ==
X-Received: by 2002:a05:600c:4e87:b0:421:7f30:7cfb with SMTP id 5b1f17b1804b1-4217f308036mr79478295e9.40.1718133659765;
        Tue, 11 Jun 2024 12:20:59 -0700 (PDT)
Message-ID: <6165471b-e86b-456b-99c6-c308bf5d6e4c@redhat.com>
Date: Tue, 11 Jun 2024 21:20:57 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
 Mike Rapoport <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
 <2ed64218-7f3b-4302-a5dc-27f060654fe2@redhat.com>
 <20240611121942.050a2215143af0ecb576122f@linux-foundation.org>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <20240611121942.050a2215143af0ecb576122f@linux-foundation.org>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11.06.24 21:19, Andrew Morton wrote:
> On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand <david@redhat.com> wrote:
> 
>> On 07.06.24 11:09, David Hildenbrand wrote:
>>> In preparation for further changes, let's teach __free_pages_core()
>>> about the differences of memory hotplug handling.
>>>
>>> Move the memory hotplug specific handling from generic_online_page() to
>>> __free_pages_core(), use adjust_managed_page_count() on the memory
>>> hotplug path, and spell out why memory freed via memblock
>>> cannot currently use adjust_managed_page_count().
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>
>> @Andrew, can you squash the following?
> 
> Sure.
> 
> I queued it against "mm: pass meminit_context to __free_pages_core()",
> not against

Ah yes, sorry. Thanks!

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:22:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:22:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738682.1145506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH74Q-00014J-Ej; Tue, 11 Jun 2024 19:22:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738682.1145506; Tue, 11 Jun 2024 19:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH74Q-00014C-Bi; Tue, 11 Jun 2024 19:22:14 +0000
Received: by outflank-mailman (input) for mailman id 738682;
 Tue, 11 Jun 2024 19:22:12 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH74O-000146-OK
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:22:12 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e68694ea-2827-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 21:22:11 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJTG32hjzlgMVS;
 Tue, 11 Jun 2024 19:22:10 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id kg4NlOQnyRM4; Tue, 11 Jun 2024 19:22:01 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJSw3kFKzlgMVR;
 Tue, 11 Jun 2024 19:21:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e68694ea-2827-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718133721; x=1720725722; bh=2kItM8M6DKC8yk2cGf7wD1da
	KQLqhlWP/tN3daGsJLE=; b=AyIVaWHWYQL5dujfBjhbCIsCn397P+T5JLcgaDqJ
	pzESqNRWihm+746zwor/sibSvVIHxIyLj3I91PU4zHR4eiIJTnWSankubQFHkoES
	nh+QGckPwl5LlUArR5NM75XHxtPvhUfvqBY9mUu1pPzAlLzEtFY8gm1BpazYZh+r
	eW0iCZxPI/ZM6Um6Jh0o6Qaqo9cTiflVm+qX5Iae8FNcpCd4IG1igDsvNxIWTO8h
	pCjh2iWPrBsFmJCx0AeTInPZYsGcJ2NUw3I9d9axGajvmLN3Qd5Vay7WgFzs+E7r
	R9Jl5pi76ng65UABo1TMrVFSsftoCgjknuPmDNeELXHvOQ==
X-Virus-Scanned: by MailRoute
Message-ID: <490fb178-8246-46cc-87fc-a57e076b9657@acm.org>
Date: Tue, 11 Jun 2024 12:21:50 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 03/26] loop: stop using loop_reconfigure_limits in
 __loop_clr_fd
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-4-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-4-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> __loop_clr_fd wants to clear all settings on the device.  Prepare for
> moving more settings into the block limits by open coding
> loop_reconfigure_limits.

If Damien's comment is addressed, feel free to add:

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:24:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738686.1145515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH76C-0001eZ-Ok; Tue, 11 Jun 2024 19:24:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738686.1145515; Tue, 11 Jun 2024 19:24:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH76C-0001eS-M0; Tue, 11 Jun 2024 19:24:04 +0000
Received: by outflank-mailman (input) for mailman id 738686;
 Tue, 11 Jun 2024 19:24:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH76A-0001eM-Lh
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:24:02 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2705bd1f-2828-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 21:24:00 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJWL4fk8zlgMVV;
 Tue, 11 Jun 2024 19:23:58 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id zYqmexEwqHMb; Tue, 11 Jun 2024 19:23:50 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJVz4rs6zlgMVT;
 Tue, 11 Jun 2024 19:23:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2705bd1f-2828-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718133830; x=1720725831; bh=rZhPumdb2n42pDla++r+0nDR
	HS4rqP791XrFPg2G27s=; b=tf12JUS8O/vzIYeKCwJjv+pwBfVWJQo9KgcDGp0z
	zC7lV0/c/f2ThXKlndGod8/GtTQwAvp2FqIrETvuNz72J0EV/Nq7OLg2Ro3uTx1k
	a3Lag73AcQE3SA5r4iVyJOxbBbB3z7MfHM9AUAy4N74Z8QUhqfhGTIxZ6DcyQWB0
	8Q6dtcQ0sUi3rbVLLp7uCK5xx11aui7r2k4FrkxildcMH3hs4D+DUFLmfX5g1L14
	BQ47XWYzIYACtm6ChhwhZqZfg6iWXjIO/WaP2y1Mjt4ULw04tLtnXIqdkxdlkhKw
	j/QTJWPUoPxIYeArXQy68tOsZbpeaOqMb660pTcOxJP27w==
X-Virus-Scanned: by MailRoute
Message-ID: <165613a2-237d-4f2b-9843-75ce0f928dff@acm.org>
Date: Tue, 11 Jun 2024 12:23:36 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 04/26] loop: always update discard settings in
 loop_reconfigure_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-5-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-5-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> Simplify loop_reconfigure_limits by always updating the discard limits.
> This adds a little more work to loop_set_block_size, but doesn't change
> the outcome as the discard flag won't change.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:28:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:28:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738697.1145526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7A8-0002Ic-75; Tue, 11 Jun 2024 19:28:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738697.1145526; Tue, 11 Jun 2024 19:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7A8-0002IV-4V; Tue, 11 Jun 2024 19:28:08 +0000
Received: by outflank-mailman (input) for mailman id 738697;
 Tue, 11 Jun 2024 19:28:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH7A6-0002IP-4u
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:28:06 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b8b7d6da-2828-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 21:28:05 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJc26ZdMzlgMVS;
 Tue, 11 Jun 2024 19:28:02 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id 3RE5VBL2zIbE; Tue, 11 Jun 2024 19:27:54 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJbg1ZPszlgMVR;
 Tue, 11 Jun 2024 19:27:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8b7d6da-2828-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718134074; x=1720726075; bh=gYA8TbA/2Ugidtah56dUhtBG
	Galu8/NG0wUScZCuyoQ=; b=EdKe51ZYqmfiPrpKkhPZ4ZF3PIbJwSbtn3RaaOF4
	ofhW5+7WMGPdWrsE2RMUMe4iOqCcKHfWsBE3jV1OI639Mq7dV1i23L3heMui5/1M
	qnw7wzx3me2m0nYbh4wIXdkzvzP1wJeBO9R+vxMKtan9+zubixL4PI1lPSTjLtQo
	fiVS5xI+5EIjbwxGLfn4YEaQCSjam9kUL6ZrBJ399SkJbGkj6meDfAJdNRLPTK4J
	cHr7uNkLCLE6LFJeypIVrgAn7huEmidRUea5q5IRH9I+kWNO+72D6ry2ZEj0M5Y2
	xKogQ+lj25MniDZvtXiYa8/diDPnTI2EYnDYd8bY8qFhAA==
X-Virus-Scanned: by MailRoute
Message-ID: <ec31fd62-8a1d-44b2-8c0d-d6cca64f752e@acm.org>
Date: Tue, 11 Jun 2024 12:27:41 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 05/26] loop: regularize upgrading the lock size for direct
 I/O
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-6-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-6-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> The LOOP_CONFIGURE path automatically upgrades the block size to that
> of the underlying file for O_DIRECT file descriptors, but the
> LOOP_SET_BLOCK_SIZE path does not.  Fix this by lifting the code to
> pick the block size into common code.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:29:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:29:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738701.1145535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7BA-0002xq-H6; Tue, 11 Jun 2024 19:29:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738701.1145535; Tue, 11 Jun 2024 19:29:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7BA-0002xj-ER; Tue, 11 Jun 2024 19:29:12 +0000
Received: by outflank-mailman (input) for mailman id 738701;
 Tue, 11 Jun 2024 19:29:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH7B9-0002xZ-Ib
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:29:11 +0000
Received: from 008.lax.mailroute.net (008.lax.mailroute.net [199.89.1.11])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id df3f4f08-2828-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 21:29:09 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 008.lax.mailroute.net (Postfix) with ESMTP id 4VzJdH4SBmz6CmSMs;
 Tue, 11 Jun 2024 19:29:07 +0000 (UTC)
Received: from 008.lax.mailroute.net ([127.0.0.1])
 by localhost (008.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id Y9txyge2gF45; Tue, 11 Jun 2024 19:28:57 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 008.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJcx1mXLz6Cnv3g;
 Tue, 11 Jun 2024 19:28:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df3f4f08-2828-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718134137; x=1720726138; bh=oG6oPhBgt1YU+3jZs6S1n+VS
	Zh2oZl9u1VSlv448PeE=; b=Oaqg2I3+YXy1J0fPjxkC83856qeFGY4vdRjp9Qxa
	+wJkULGln3djEGax1PdSGiMNRvHdhe9zcgKixqUsGvW/jmJutoRu16UQwdaDHriC
	OWpW8VcZGkZ3HaBLVdmiIThIeHIstXHbEFA2s8IvfnGumvvdzyOzyjekQ8PtqMht
	tc/Cm4NZMpZWX1ohKOFJv4ViPSVr2Wyaq1nq8MXu+VI/Wbij9MCLD6SSveQUOj+n
	YMojTei3bCPalW8xv1DDlQMCQRyqpz8at3LNToLtEddZ3uBLR/OfbGYFrnGZBNmB
	qMDz26SxddNWTGEqhUGD1F/cvKoCXD3FCgJCiUxU5DlHjg==
X-Virus-Scanned: by MailRoute
Message-ID: <3697d0ed-9567-4aa9-b006-e0715d3c1e9a@acm.org>
Date: Tue, 11 Jun 2024 12:28:47 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 06/26] loop: also use the default block size from an
 underlying block device
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-7-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-7-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> Fix the code in loop_reconfigure_limits to pick a default block size for
> O_DIRECT file descriptors to also work when the loop device sits on top
> of a block device and not just on a regular file on a block device based
> file system.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:31:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738709.1145545 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7Db-0004wu-T8; Tue, 11 Jun 2024 19:31:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738709.1145545; Tue, 11 Jun 2024 19:31:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7Db-0004wn-QM; Tue, 11 Jun 2024 19:31:43 +0000
Received: by outflank-mailman (input) for mailman id 738709;
 Tue, 11 Jun 2024 19:31:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH7Da-0004vG-VB
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:31:42 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 39fcebdc-2829-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 21:31:41 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJhC6RZtzlgMVP;
 Tue, 11 Jun 2024 19:31:39 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id DQP8SoW0yS3G; Tue, 11 Jun 2024 19:31:31 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJgv1SgJzlgMVN;
 Tue, 11 Jun 2024 19:31:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39fcebdc-2829-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718134291; x=1720726292; bh=Cf7GcJGKcu7D5BiBoZvRx6x3
	LTfH72Sz+jyW+IP4PGw=; b=d+QkU/8VZ6Dq+WOkUNTA34lPXUILl3LVbu6kO4uj
	dRf9ZLceTOdlVMVrbZoxWEf4u5ywVZd7xmS7z7Qu9fGFq3c09JpTwe9Izw4JtWWY
	tHhkQiq7BjVQmgAeymktZWCjgXvLtR18TTk9AbAP5hLoxcQzO5YN29aNyMj+6U8N
	mnLgDc3DfcoUZU+HIpoj33PjIC9sY+avSJ5HN8s208j4yIUXD3pm8VAHyLZOImgN
	bjIvKsv7Fn1K4+4dykzuQT4iVGBagOEJKDZaAYW7tHizV3z3TzNCBZqR7MTEAs6D
	ikorZXLPiNhCfCU2hPtZjpFb2ahQxj4wO1mOgHFaYmHiNQ==
X-Virus-Scanned: by MailRoute
Message-ID: <8206c0cf-1787-4282-bcec-746d7c7f3880@acm.org>
Date: Tue, 11 Jun 2024 12:31:21 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 07/26] loop: fold loop_update_rotational into
 loop_reconfigure_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-8-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-8-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> This prepares for moving the rotational flag into the queue_limits and
> also fixes it for the case where the loop device is backed by a block
> device.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:32:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:32:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738715.1145555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7Eh-0005Y6-BL; Tue, 11 Jun 2024 19:32:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738715.1145555; Tue, 11 Jun 2024 19:32:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7Eh-0005Xz-8T; Tue, 11 Jun 2024 19:32:51 +0000
Received: by outflank-mailman (input) for mailman id 738715;
 Tue, 11 Jun 2024 19:32:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH7Ef-0005Xb-P8
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:32:49 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 615c8525-2829-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 21:32:48 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJjV1BYPzlgMVP;
 Tue, 11 Jun 2024 19:32:46 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id Y1jELP_gqM3e; Tue, 11 Jun 2024 19:32:38 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJj93JfNzlgMVN;
 Tue, 11 Jun 2024 19:32:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 615c8525-2829-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718134358; x=1720726359; bh=NinUkL0nGF4zzUpzdC8fXnN/
	I9QzcU4DY70U0/CiXlk=; b=tydEMcbkmjZvUgM3BMYGB8ptZaiX1xSWkqvzlXjS
	N+d6qsqQsAQuVet9UxqHiZLwEopChIDKNIIFy8hqfGJbWbRcsptm4r1eCxxBB7bS
	zJlCWiRgrj9C771wRVNgaqEgeb47b1UPK6RX0EWnJz2fmGzdd9Yle0FBhwOb2shG
	s5V6Vwq4SlacZgToBgTwE4ROC+lC73bVlCAAnBKs9iY/9mXR1a7bE0gFdSzJ8o4Z
	AAmvlaj1apsee5D+9h6qX3hN9wQGqsRU9xb9fn79I+qilXROStdod47NBQuzB6lE
	Cuk8ybjhW1/CdxDOGi4kQk7/IXQCqfKZONnPRP3FnS4gWw==
X-Virus-Scanned: by MailRoute
Message-ID: <69663f60-1d42-4f95-97cc-acb73ed5d7c8@acm.org>
Date: Tue, 11 Jun 2024 12:32:27 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 08/26] virtio_blk: remove virtblk_update_cache_mode
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-9-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-9-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> virtblk_update_cache_mode boils down to a single call to
> blk_queue_write_cache.  Remove it in preparation for moving the cache
> control flags into the queue_limits.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:35:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:35:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738722.1145566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7HJ-00066q-OX; Tue, 11 Jun 2024 19:35:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738722.1145566; Tue, 11 Jun 2024 19:35:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7HJ-00066j-L6; Tue, 11 Jun 2024 19:35:33 +0000
Received: by outflank-mailman (input) for mailman id 738722;
 Tue, 11 Jun 2024 19:35:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH7HI-00066d-IA
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:35:32 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c21c4265-2829-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 21:35:30 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJmc1ZkpzlgMVP;
 Tue, 11 Jun 2024 19:35:28 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id kM3NVR9fxZIH; Tue, 11 Jun 2024 19:35:20 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJm40fCWzlgMVN;
 Tue, 11 Jun 2024 19:34:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c21c4265-2829-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718134520; x=1720726521; bh=qIoic/+Eicm2rmoTh1GHTDwZ
	kragJhr4fvW17JXrG0M=; b=map4zviGJJNMHWZcL0XSQcKZOt8HWwKs4uPCn6h8
	ClKZM//GsD59uwkW/BTQY4EwjOkcFI8O/2h1lvfA4LAL1jmbGgLTNuYoF0Q6mH/f
	f0c9ECm+XNFBDbP6yRxg58slOwTWkCmc/bwd3R9+TXJ6JNN+7VCrT6hBYuAJbGau
	cP0Ibr0GKyH9NB+v4g+CkSDs2g8BRJcKK2rvBZA6bcKGt4wBGdtuJJyCZ9IkY8vH
	+jBWfaKTTSMD0aQJ/O6PovdqQWYkZxLId41z+BCIcQxfQOcW8v1mrgpZgFJMJsy7
	Jf+H2kPgaFs82OcmUoGRFeFKF0L/aVmUOEUaFL7Q+8CSAQ==
X-Virus-Scanned: by MailRoute
Message-ID: <5a806e25-554f-4179-b73e-d3fd9f440441@acm.org>
Date: Tue, 11 Jun 2024 12:34:58 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 09/26] nbd: move setting the cache control flags to
 __nbd_set_size
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-10-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-10-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> Move setting the cache control flags in nbd in preparation for moving
> these flags into the queue_limits structure.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:36:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:36:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738726.1145576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7I1-0006a5-Vl; Tue, 11 Jun 2024 19:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738726.1145576; Tue, 11 Jun 2024 19:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7I1-0006Zy-SU; Tue, 11 Jun 2024 19:36:17 +0000
Received: by outflank-mailman (input) for mailman id 738726;
 Tue, 11 Jun 2024 19:36:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p+2X=NN=linux-foundation.org=akpm@srs-se1.protection.inumbo.net>)
 id 1sH7Hz-0006Zi-P7
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:36:15 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dcbde3f3-2829-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 21:36:14 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id C5D446117E;
 Tue, 11 Jun 2024 19:36:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C5183C2BD10;
 Tue, 11 Jun 2024 19:36:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcbde3f3-2829-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1718134572;
	bh=GdA6KT5UlfpnGhC3psTeDTcc9QB/a0N+IRhJ64cOBEU=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=JB948CVYmQmTc/5A4rHdUtUx98VRAUbQLgcPx7ZLb7Ixz24iZxaAzJDVfs+JRO5YW
	 CCPvVl85B+g0dyiNojTR+kxGEPc9ZgbqQD2ZBp34OLwozipdz5vU+CNeaE3NaT5REm
	 svy2ZcgGUGTCQSvDFEiwKvklcemFxCLjWAbltRSA=
Date: Tue, 11 Jun 2024 12:36:11 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com, Mike Rapoport
 <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>, "K. Y. Srinivasan"
 <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu
 <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Xuan Zhuo <xuanzhuo@linux.alibaba.com>, Eugenio =?ISO-8859-1?Q?P=E9rez?=
 <eperezma@redhat.com>, Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Oleksandr Tyshchenko
 <oleksandr_tyshchenko@epam.com>, Alexander Potapenko <glider@google.com>,
 Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 2/3] mm/memory_hotplug: initialize memmap of
 !ZONE_DEVICE with PageOffline() instead of PageReserved()
Message-Id: <20240611123611.36d0633c65ec8171152fe803@linux-foundation.org>
In-Reply-To: <824c319a-530e-4153-80f5-20e2c463fa81@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
	<20240607090939.89524-3-david@redhat.com>
	<824c319a-530e-4153-80f5-20e2c463fa81@redhat.com>
X-Mailer: Sylpheed 3.8.0beta1 (GTK+ 2.24.33; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Tue, 11 Jun 2024 11:42:56 +0200 David Hildenbrand <david@redhat.com> wrote:

> > We'll leave the ZONE_DEVICE case alone for now.
> > 
> 
> @Andrew, can we add here:
> 
> "Note that self-hosted vmemmap pages will no longer be marked as 
> reserved. This matches ordinary vmemmap pages allocated from the buddy 
> during memory hotplug. Now, really only vmemmap pages allocated from 
> memblock during early boot will be marked reserved. Existing 
> PageReserved() checks seem to be handling all relevant cases correctly 
> even after this change."

Done, thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:37:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:37:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738730.1145586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7J2-00078W-6p; Tue, 11 Jun 2024 19:37:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738730.1145586; Tue, 11 Jun 2024 19:37:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7J2-00078P-42; Tue, 11 Jun 2024 19:37:20 +0000
Received: by outflank-mailman (input) for mailman id 738730;
 Tue, 11 Jun 2024 19:37:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH7J0-00078F-Sb
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:37:18 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 01f8ffc0-282a-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 21:37:16 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJpg3N77zlgMVP;
 Tue, 11 Jun 2024 19:37:15 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id I78Iw22WN8VT; Tue, 11 Jun 2024 19:37:06 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJpL0wGgzlgMVN;
 Tue, 11 Jun 2024 19:36:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01f8ffc0-282a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718134626; x=1720726627; bh=nxeAEPoRIF1w17Vf6H50avwR
	d47tQeq24V55mlSo6Fo=; b=n4YdDdBfOscNrfscK2NP2xV4LYi1o1AdP30LHl21
	+IbqLgtMyNqee6SbzA50sLVTZE0Ji6k+2w0WLmPh57DienJvg5IuAQuKYp0InUsK
	R6rrYr54yHJbztT/cndM4ZANOd0BSHTpKcctiVeFtmZFjzrLeHPbM9VtSJmKwf7X
	Af8olT+2oDMLUTt1MIpMS3WiRglrB6WCMiAEgpmD+A4upmkHshWHc+3fJHBYgE5i
	tD5snqlsdZGaJP09e2u2aXuuDiqbxQTmEF2zfjaEm7Jr+gWSnF6gfshEHOyMC1oC
	wY6QiEq3x8Ed+pcVJd6N4WwEZhVXh/S1bVC6eE1yI7Gx6w==
X-Virus-Scanned: by MailRoute
Message-ID: <1f7c1123-0f80-42a4-8252-c082ded33ca6@acm.org>
Date: Tue, 11 Jun 2024 12:36:55 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 11/26] block: freeze the queue in queue_attr_store
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-12-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-12-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> queue_attr_store updates attributes used to control generating I/O, and
> can cause malformed bios if changed with I/O in flight.  Freeze the queue
> in common code instead of adding it to almost every attribute.

Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:38:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:38:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738734.1145595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7Jl-0007mc-Es; Tue, 11 Jun 2024 19:38:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738734.1145595; Tue, 11 Jun 2024 19:38:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7Jl-0007mV-CI; Tue, 11 Jun 2024 19:38:05 +0000
Received: by outflank-mailman (input) for mailman id 738734;
 Tue, 11 Jun 2024 19:38:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f7IT=NN=acm.org=bvanassche@srs-se1.protection.inumbo.net>)
 id 1sH7Jk-0007mF-GJ
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:38:04 +0000
Received: from 009.lax.mailroute.net (009.lax.mailroute.net [199.89.1.12])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d381ddc-282a-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 21:38:02 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by 009.lax.mailroute.net (Postfix) with ESMTP id 4VzJqY2LNKzlgMVP;
 Tue, 11 Jun 2024 19:38:01 +0000 (UTC)
Received: from 009.lax.mailroute.net ([127.0.0.1])
 by localhost (009.lax [127.0.0.1]) (mroute_mailscanner, port 10029) with LMTP
 id TYPL5HYORF0n; Tue, 11 Jun 2024 19:37:53 +0000 (UTC)
Received: from [100.96.154.26] (unknown [104.132.0.90])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 (Authenticated sender: bvanassche@acm.org)
 by 009.lax.mailroute.net (Postfix) with ESMTPSA id 4VzJqD5hVvzlgMVN;
 Tue, 11 Jun 2024 19:37:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d381ddc-282a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=acm.org; h=
	content-transfer-encoding:content-type:content-type:in-reply-to
	:from:from:content-language:references:subject:subject
	:user-agent:mime-version:date:date:message-id:received:received;
	 s=mr01; t=1718134673; x=1720726674; bh=0hnbQEod74zEsc3xGCFcX6S0
	bBHUsC8JGcAKSWF8hiY=; b=YL4MxZ10FXpIpPLCE4CZCH+u/phW+zPSavUSnees
	X3hFh+szuiEXRkWSrqQXNYIhU0UJbzi/GHx8Kl9pzwID8L7djiktSWoqSJawtcTN
	eACQHyinxvwid5PBDKOPXHioaILwfZth5RTqz1dRgkYDlk023V89V3LY0fPlpXR0
	ZjGrS0ATmGTxDvR88c454EcuD5z2B7TEK4AUazeGvS0OkvYK9K5aLe/SEJZB1fEB
	WtswPKKjQnmZAIZ2btE2JJFpRK4bbK63mBG2amE83a1CL9Fd2be6YrLh9YZh7k7F
	6DXimFz0V4Au4RPDc4bBe0TfqrqfZwLel7pBkDBg2DZJFg==
X-Virus-Scanned: by MailRoute
Message-ID: <f5a5f79e-43f5-46ae-9c11-371e8e558685@acm.org>
Date: Tue, 11 Jun 2024 12:37:42 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 12/26] block: remove blk_flush_policy
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-13-hch@lst.de>
Content-Language: en-US
From: Bart Van Assche <bvanassche@acm.org>
In-Reply-To: <20240611051929.513387-13-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/10/24 10:19 PM, Christoph Hellwig wrote:
> Fold blk_flush_policy into the only caller to prepare for pending changes
> to it.
Reviewed-by: Bart Van Assche <bvanassche@acm.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 19:51:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 19:51:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738750.1145606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7WH-0003b1-IL; Tue, 11 Jun 2024 19:51:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738750.1145606; Tue, 11 Jun 2024 19:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH7WH-0003au-Dz; Tue, 11 Jun 2024 19:51:01 +0000
Received: by outflank-mailman (input) for mailman id 738750;
 Tue, 11 Jun 2024 19:51:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=I6Ds=NN=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sH7WG-0003ao-JB
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:51:00 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.129.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec11d165-282b-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 21:50:59 +0200 (CEST)
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-253-MlxRDyHpPaej60Ac1jwDIQ-1; Tue, 11 Jun 2024 15:50:56 -0400
Received: by mail-wm1-f69.google.com with SMTP id
 5b1f17b1804b1-42153125d3eso44059405e9.2
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 12:50:56 -0700 (PDT)
Received: from ?IPV6:2003:cb:c748:ba00:1c00:48ea:7b5a:c12b?
 (p200300cbc748ba001c0048ea7b5ac12b.dip0.t-ipconnect.de.
 [2003:cb:c748:ba00:1c00:48ea:7b5a:c12b])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42274379b83sm16254225e9.28.2024.06.11.12.50.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 Jun 2024 12:50:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec11d165-282b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718135457;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=VC/FKQpyiv8QuT0prkiboQ+mv2U8+abllOC5cV7bqrg=;
	b=LIHd12/+iiuaD64VAKpt0kUPqHDOin39OncI1af6A8GWSipj7qOYseukDigUEiwfZD20IY
	KvQgDisbRAwu7xDPn+gMuGJtbsuvyRV7QRURfLzED9ZkOTqiVm/B18xWLCEc6RvIFs9lo4
	aW6DmYh144JfkVJ9Ak4Q/qUJUWcMrEY=
X-MC-Unique: MlxRDyHpPaej60Ac1jwDIQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718135455; x=1718740255;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=VC/FKQpyiv8QuT0prkiboQ+mv2U8+abllOC5cV7bqrg=;
        b=V5QZmyjVqULH57uq1dkfVxWs8Gh6DNNvf2x6nzliojxxeNtaf9oXzxG4gVpo7cDet4
         Jpdf+qkihyMCpOPMZf9pU8oe2ognGxuz1Snn0z6KrnAvJmPz3nM1xJYtBN7YXMgMM+uh
         S+GMQ6AmoU2JmPuSu0qm/78BAPEyjNnb7uTHwPNra9mgOOY3RgaWzg7cHF4jAi2R8YYw
         Ovd9GPxJlbUxkwzoUhnvMAFa5EOzqAH3seC6gDif83/eDJGEPv7GmsgSxyBsYVKc9bRK
         6NhrobqyOkAP/f0oakx6wOWtN/Jz47Ujpw/OwwKJXJT9MIA2bwVcLOb5ydWck+qAoKmK
         LRlg==
X-Forwarded-Encrypted: i=1; AJvYcCWUA8ptub3THXiuJcYacwWQ7KaxcgXU19I7jxApVFPxr7JznPGjRftEnXVIPwgld+8mrkA2pzC3GtqINKGtI539MKwnGp+Aq6h0wBZFvIs=
X-Gm-Message-State: AOJu0YxhNZu6hibSGlkgsR4R8A9Rf1kRU3CLy84zlSPuQstGmT2c5NhA
	StsnSjR6YP3TEHM5u/748se+/RlE7JZ3XKXCBXIWWZlAkuScDcYGJyvV2dT+E75+558WmV+gD8t
	bXT13mZxjzenFj9cuwE5yHpIIDuv/+DfJwtVrfiJ6iZFu8MCYG0CU20rPz06v1163
X-Received: by 2002:a05:600c:1c91:b0:421:8028:a507 with SMTP id 5b1f17b1804b1-4218028a5f6mr70330435e9.18.1718135455367;
        Tue, 11 Jun 2024 12:50:55 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IEeIZw5rvJCb1LtW+k52Hf6likkqbQ5y9zsOlyQgFY0I2VYm+kIPKpawNjMsNjEje9W3JPASg==
X-Received: by 2002:a05:600c:1c91:b0:421:8028:a507 with SMTP id 5b1f17b1804b1-4218028a5f6mr70330165e9.18.1718135454911;
        Tue, 11 Jun 2024 12:50:54 -0700 (PDT)
Message-ID: <fff6e4d3-4a11-4481-b28c-edfb072daf35@redhat.com>
Date: Tue, 11 Jun 2024 21:50:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
To: Tim Chen <tim.c.chen@linux.intel.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 kasan-dev@googlegroups.com, Andrew Morton <akpm@linux-foundation.org>,
 Mike Rapoport <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
 <80532f73e52e2c21fdc9aac7bce24aefb76d11b0.camel@linux.intel.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <80532f73e52e2c21fdc9aac7bce24aefb76d11b0.camel@linux.intel.com>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11.06.24 21:41, Tim Chen wrote:
> On Fri, 2024-06-07 at 11:09 +0200, David Hildenbrand wrote:
>> In preparation for further changes, let's teach __free_pages_core()
>> about the differences of memory hotplug handling.
>>
>> Move the memory hotplug specific handling from generic_online_page() to
>> __free_pages_core(), use adjust_managed_page_count() on the memory
>> hotplug path, and spell out why memory freed via memblock
>> cannot currently use adjust_managed_page_count().
>>
>> Signed-off-by: David Hildenbrand <david@redhat.com>
>> ---
>>   mm/internal.h       |  3 ++-
>>   mm/kmsan/init.c     |  2 +-
>>   mm/memory_hotplug.c |  9 +--------
>>   mm/mm_init.c        |  4 ++--
>>   mm/page_alloc.c     | 17 +++++++++++++++--
>>   5 files changed, 21 insertions(+), 14 deletions(-)
>>
>> diff --git a/mm/internal.h b/mm/internal.h
>> index 12e95fdf61e90..3fdee779205ab 100644
>> --- a/mm/internal.h
>> +++ b/mm/internal.h
>> @@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
>>   				    int mt);
>>   extern void memblock_free_pages(struct page *page, unsigned long pfn,
>>   					unsigned int order);
>> -extern void __free_pages_core(struct page *page, unsigned int order);
>> +extern void __free_pages_core(struct page *page, unsigned int order,
>> +		enum meminit_context);
> 
> Shouldn't the above be
> 		enum meminit_context context);

Although C allows parameters without names in declarations, this was 
unintended.

Thanks!

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Tue Jun 11 21:48:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 21:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738761.1145615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH9LS-0008Mi-Rr; Tue, 11 Jun 2024 21:47:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738761.1145615; Tue, 11 Jun 2024 21:47:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sH9LS-0008Mb-PK; Tue, 11 Jun 2024 21:47:58 +0000
Received: by outflank-mailman (input) for mailman id 738761;
 Tue, 11 Jun 2024 21:47:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH9LR-0008MR-Py; Tue, 11 Jun 2024 21:47:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH9LR-00051V-OR; Tue, 11 Jun 2024 21:47:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sH9LR-0004GW-E3; Tue, 11 Jun 2024 21:47:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sH9LR-0007Ts-DX; Tue, 11 Jun 2024 21:47:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=r8Iiw9AmhheBoanjnqs6ZAwnhj0RB6v8rygPgqxVxXE=; b=4rHKzuVH9n0197i0P1StDpyJG5
	g36Y50tigKBOA1q+tSIeqa0TqDzeNegl0N7KuO9OggAlF1HwZgCVslT8bh89ufYAHMEdF9ZwE9dMj
	cqfNPao3dBXzv79KxnfrW7ANLNnV2P5ZDLIAQrY44Oxln8DfqwWKeKmCi5KlatDDyweA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186312-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186312: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
X-Osstest-Versions-That:
    xen=43de96a70f00b631d0f4c658c232204079b2f2b2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 Jun 2024 21:47:57 +0000

flight 186312 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186312/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
baseline version:
 xen                  43de96a70f00b631d0f4c658c232204079b2f2b2

Last test of basis   186310  2024-06-11 13:02:08 Z    0 days
Testing same since   186312  2024-06-11 17:02:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   43de96a70f..5ea7f2c9d7  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 11 23:54:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 Jun 2024 23:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738774.1145626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHBJh-0006VT-C0; Tue, 11 Jun 2024 23:54:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738774.1145626; Tue, 11 Jun 2024 23:54:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHBJh-0006VL-9M; Tue, 11 Jun 2024 23:54:17 +0000
Received: by outflank-mailman (input) for mailman id 738774;
 Tue, 11 Jun 2024 23:54:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MXno=NN=quicinc.com=quic_jjohnson@srs-se1.protection.inumbo.net>)
 id 1sHBJg-0006VF-2U
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 23:54:16 +0000
Received: from mx0a-0031df01.pphosted.com (mx0a-0031df01.pphosted.com
 [205.220.168.131]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e6cca60d-284d-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 01:54:13 +0200 (CEST)
Received: from pps.filterd (m0279865.ppops.net [127.0.0.1])
 by mx0a-0031df01.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45BBa1Zo008217;
 Tue, 11 Jun 2024 23:54:08 GMT
Received: from nalasppmta03.qualcomm.com (Global_NAT1.qualcomm.com
 [129.46.96.20])
 by mx0a-0031df01.pphosted.com (PPS) with ESMTPS id 3yme8s08t6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 11 Jun 2024 23:54:08 +0000 (GMT)
Received: from nalasex01a.na.qualcomm.com (nalasex01a.na.qualcomm.com
 [10.47.209.196])
 by NALASPPMTA03.qualcomm.com (8.17.1.19/8.17.1.19) with ESMTPS id
 45BNs6iK022456
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Tue, 11 Jun 2024 23:54:06 GMT
Received: from [169.254.0.1] (10.49.16.6) by nalasex01a.na.qualcomm.com
 (10.47.209.196) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.9; Tue, 11 Jun
 2024 16:54:05 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6cca60d-284d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=quicinc.com; h=
	cc:content-transfer-encoding:content-type:date:from:message-id
	:mime-version:subject:to; s=qcppdkim1; bh=0Y4GBa+TXFeSS85sJfdH9J
	Pc3l5l71wn18Ou06XOFkE=; b=e4TLos3dK/YG8S+9jMfoo4Wu+sKGwfLL4fKyS0
	xiEXEYx43KDh9ay6ggX33lNeVg28igFxu4fUTQAZvQVmgwiBy0kn4F56a/FQg0Rx
	inxz72fZKXyRAvETnJDyzm2JR6M8j8DjKgCkUBTWwA/PFphcnsrUw/qI3gMk0Okm
	Et9HdGp4QNmllsaHDj+xp7juIvfMgwK5tADIZhw4Vsw72eH7EQQO52A60s28JtP1
	/xa2YAbW7mskRvXdrSyGva+1+lOhKxy/UGof3JXoyGFH1zt4KhpIrIbSTtTC/aYk
	qWMhE3YiSkFHy/MpZgqrzJk1qBrFGajQlYLwgfRyy1/Wz6+g==
From: Jeff Johnson <quic_jjohnson@quicinc.com>
Date: Tue, 11 Jun 2024 16:54:05 -0700
Subject: [PATCH] xen: add missing MODULE_DESCRIPTION() macros
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Message-ID: <20240611-md-drivers-xen-v1-1-1eb677364ca6@quicinc.com>
X-B4-Tracking: v=1; b=H4sIAJzjaGYC/x3MywrCMBBG4Vcps3YgCV6KryIu0uSPHbBRZmopl
 L670eW3OGcjgwqMrt1GikVMXrXBHzpKY6wPsORmCi4c3dl7njJnlQVqvKLyKYSSelxQXE8teiu
 KrP/h7d48RAMPGmsaf5un1M/KU7QZSvv+BbPmCuV/AAAA
To: Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>
CC: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>,
        <kernel-janitors@vger.kernel.org>,
        Jeff Johnson <quic_jjohnson@quicinc.com>
X-Mailer: b4 0.13.0
X-Originating-IP: [10.49.16.6]
X-ClientProxiedBy: nalasex01b.na.qualcomm.com (10.47.209.197) To
 nalasex01a.na.qualcomm.com (10.47.209.196)
X-QCInternal: smtphost
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=5800 signatures=585085
X-Proofpoint-ORIG-GUID: y8hV1AhieqTHTlSwvyJF07qR0S-m50t2
X-Proofpoint-GUID: y8hV1AhieqTHTlSwvyJF07qR0S-m50t2
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-11_12,2024-06-11_01,2024-05-17_01
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1011 adultscore=0
 suspectscore=0 bulkscore=0 phishscore=0 spamscore=0 lowpriorityscore=0
 priorityscore=1501 mlxlogscore=999 impostorscore=0 mlxscore=0
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.19.0-2405170001 definitions=main-2406110162

With ARCH=x86, make allmodconfig && make W=1 C=1 reports:
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/xen/xen-pciback/xen-pciback.o
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/xen/xen-evtchn.o
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/xen/xen-privcmd.o

Add the missing invocations of the MODULE_DESCRIPTION() macro.

Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
---
Corrections to these descriptions are welcome. I'm not an expert in
this code, so in most cases I've taken these descriptions directly from
code comments, Kconfig descriptions, or git logs. History has shown
that in some cases such descriptions were originally wrong due to
cut-and-paste errors, and in other cases the drivers have evolved such
that the original information is no longer accurate.
---
 drivers/xen/evtchn.c               | 1 +
 drivers/xen/privcmd-buf.c          | 1 +
 drivers/xen/privcmd.c              | 1 +
 drivers/xen/xen-pciback/pci_stub.c | 1 +
 4 files changed, 4 insertions(+)

diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
index f6a2216c2c87..9b7fcc7dbb38 100644
--- a/drivers/xen/evtchn.c
+++ b/drivers/xen/evtchn.c
@@ -729,4 +729,5 @@ static void __exit evtchn_cleanup(void)
 module_init(evtchn_init);
 module_exit(evtchn_cleanup);
 
+MODULE_DESCRIPTION("Xen /dev/xen/evtchn device driver");
 MODULE_LICENSE("GPL");
diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
index 2fa10ca5be14..0f0dad427d7e 100644
--- a/drivers/xen/privcmd-buf.c
+++ b/drivers/xen/privcmd-buf.c
@@ -19,6 +19,7 @@
 
 #include "privcmd.h"
 
+MODULE_DESCRIPTION("Xen Mmap of hypercall buffers");
 MODULE_LICENSE("GPL");
 
 struct privcmd_buf_private {
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 67dfa4778864..b9b784633c01 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -48,6 +48,7 @@
 
 #include "privcmd.h"
 
+MODULE_DESCRIPTION("Xen hypercall passthrough driver");
 MODULE_LICENSE("GPL");
 
 #define PRIV_VMA_LOCKED ((void *)1)
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index e34b623e4b41..4faebbb84999 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -1708,5 +1708,6 @@ static void __exit xen_pcibk_cleanup(void)
 module_init(xen_pcibk_init);
 module_exit(xen_pcibk_cleanup);
 
+MODULE_DESCRIPTION("Xen PCI-device stub driver");
 MODULE_LICENSE("Dual BSD/GPL");
 MODULE_ALIAS("xen-backend:pci");

---
base-commit: 83a7eefedc9b56fe7bfeff13b6c7356688ffa670
change-id: 20240611-md-drivers-xen-522fc8e7ef08



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 00:21:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 00:21:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738780.1145636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHBk5-0003b2-L9; Wed, 12 Jun 2024 00:21:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738780.1145636; Wed, 12 Jun 2024 00:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHBk5-0003av-Hj; Wed, 12 Jun 2024 00:21:33 +0000
Received: by outflank-mailman (input) for mailman id 738780;
 Wed, 12 Jun 2024 00:21:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHBk4-0003al-36; Wed, 12 Jun 2024 00:21:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHBk3-0000EH-W9; Wed, 12 Jun 2024 00:21:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHBk3-0007lZ-6E; Wed, 12 Jun 2024 00:21:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHBk3-0007d6-5p; Wed, 12 Jun 2024 00:21:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yETrKN2tut21VZC80AqiugbTodvbH43l0qX6NILhSWc=; b=U7U46FTtkHWVt2EGPOBQe46YhF
	nj1RSMyUpHOSLdIgfN2J/j2snZ4pOv/tsuHouih8N2hZRgR/VIDy6u7uWiIFUmsJZCp3LothFZ4BI
	T2fjkKEcHUy3v2xrTmLW0WL/legSGJKfqo8zMAuMN3j3n0g3i3Rm+bfWm0UYa0nwGiPc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186311-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186311: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-raw:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43de96a70f00b631d0f4c658c232204079b2f2b2
X-Osstest-Versions-That:
    xen=ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 00:21:31 +0000

flight 186311 xen-unstable real [real]
flight 186313 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186311/
http://logs.test-lab.xenproject.org/osstest/logs/186313/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-raw 21 guest-start/debian.repeat fail pass in 186313-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186308
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186308
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186308
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186308
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186308
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186308
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43de96a70f00b631d0f4c658c232204079b2f2b2
baseline version:
 xen                  ea1cb444c28ce3ae7915d9c94c4344f4bf6d87d3

Last test of basis   186308  2024-06-11 06:24:29 Z    0 days
Testing same since   186311  2024-06-11 16:08:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      fail    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ea1cb444c2..43de96a70f  43de96a70f00b631d0f4c658c232204079b2f2b2 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 02:34:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 02:34:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738793.1145650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHDoD-0004Kt-JC; Wed, 12 Jun 2024 02:33:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738793.1145650; Wed, 12 Jun 2024 02:33:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHDoD-0004Km-G3; Wed, 12 Jun 2024 02:33:57 +0000
Received: by outflank-mailman (input) for mailman id 738793;
 Wed, 12 Jun 2024 02:33:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nAc7=NO=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHDoD-0004Kg-2R
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 02:33:57 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20600.outbound.protection.outlook.com
 [2a01:111:f403:2416::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34f53537-2864-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 04:33:53 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by SA1PR12MB8723.namprd12.prod.outlook.com (2603:10b6:806:385::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Wed, 12 Jun
 2024 02:33:48 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.036; Wed, 12 Jun 2024
 02:33:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34f53537-2864-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BLx4IXGQUDRKVpSyIxXmeaapRYSPUyCs3EPtRcD3/+yvU4eJsk7mYk/XrMuPFeW6zCEKYJ+8RCZ2MucKTl2jbuUb5PB9VQRHTaJQkcbBQCE4qb0nC77BArqNOhulTrpB03CcN8ZL2g63PbZdxsfMD5gCY1ap8/fHjrbKC42/TFDElWtcyshxbHBOH9wHzkBTPQJMYLzPjGoCK0eFTTOU73vKee0rBQezMi8Zxw0YcZse4ySxBqIx9qVOJpGh9pRILb+Xye1m8EFCvdKzact70kci6291lLzAnOxUBGbRpYlOXX7/rhaJZeM0EbVs2PsilXOK5l/3G4sbuZRPW3ydCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KasVo+XhgybeRV6E11VQ8Riy9H2SSGjPWcryqoqwTnw=;
 b=Ak3C5MUfuSZ5s6IyJhZWTTFaIIcIWIDfJy3wmvtk12i+rS1t0d2XctCPxJyIp5bIBt7vbNMbhUk27Nor9LvPrQK4U+Q7EYpjTEaUpsiKOvANYpAOTkK4Ow2Eak26OQd862+eSMHk2DTsPNDGvEH4U793QZYjLpueC10dPiF8CsIIKnIzsS6Z5jPwJNOK3VDK4092BiWFEMn/0FIg1mjoqRU731HEXE4pfa4W/wvqcuUUj4KaEY1fKrd9IiLXzPGIogNMEQWd4UXXama4fuhWkdDmx9YBCrB/ggHZohPAwhMRNgNgbWDyq8/i+twFfeyuBZCc/F30oHCyjhS2VyAkiA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KasVo+XhgybeRV6E11VQ8Riy9H2SSGjPWcryqoqwTnw=;
 b=g7CnB3Bfh7e2bAFPDzELUQnQu9YiNtbfycAw2wIG2TgllPNNtw3TFF7gnfw9EwYKXQuCfmWwg98RMeislSp77ig48vLH9MsvFLl+o23HmMnzbYhFv9TfgVPEPVxbHL+Mgz0NCDLU86bua+cvybBP9YvvII9jWhlUoBp6g5aLoVI=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v9 0/5] Support device passthrough when dom0 is PVH on
 Xen
Thread-Topic: [XEN PATCH v9 0/5] Support device passthrough when dom0 is PVH
 on Xen
Thread-Index: AQHauLJeUet4FsoIl0WzirfdjEDvvLHBLy8AgALGaAA=
Date: Wed, 12 Jun 2024 02:33:48 +0000
Message-ID:
 <BL1PR12MB5849971F44987BA2140DB412E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <e2202691-5ca2-42ad-a360-31761f73d889@suse.com>
In-Reply-To: <e2202691-5ca2-42ad-a360-31761f73d889@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|SA1PR12MB8723:EE_
x-ms-office365-filtering-correlation-id: 7185f01c-3bde-4f3c-3dde-08dc8a88171a
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230032|376006|7416006|366008|1800799016|38070700010;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230032)(376006)(7416006)(366008)(1800799016)(38070700010);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <620EC99086C844468B30BA8FD52F9721@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7185f01c-3bde-4f3c-3dde-08dc8a88171a
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jun 2024 02:33:48.7493
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: /A8UCMS5D0p8UDLiET3JQheBdtdyhOfDAmqjmCZg9oWJrKL/lBm74bfFclafW/RJQ6dAAWohtdNO7w4YZuS9XQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB8723

On 2024/6/11 00:07, Jan Beulich wrote:
> On 07.06.2024 10:11, Jiqian Chen wrote:
>> Hi All,
>> This is v9 series to support passthrough when dom0 is PVH
>> v8->v9 changes:
>> * patch#1: Move pcidevs_unlock below write_lock, and remove "ASSERT(pcidevs_locked());" from vpci_reset_device_state;
>>            Add pci_device_state_reset_type to distinguish the reset types.
>> * patch#2: Add a comment above PHYSDEVOP_map_pirq to describe why need this hypercall.
>>            Change "!is_pv_domain(d)" to "is_hvm_domain(d)", and "map.domid == DOMID_SELF" to "d == current->domian".
>> * patch#3: Remove the check of PHYSDEVOP_setup_gsi, since there is same checke in below.
> 
> Having looked at patch 3, what check(s) is (are) being talked about here?
> It feels as if to understand this revision log entry, one would still need
> to go back to the earlier version. Yet the purpose of these is that one
> (preferably) wouldn't need to do so.
Sorry, it should be:
patch#3: Remove the check of PHYSDEVOP_setup_gsi, since there is same check in below. Although their return values are different, this difference is acceptable for the sake of code consistency
                 if ( !is_hardware_domain(currd) )
                     return -ENOSYS;
                 break;
I will change in next version.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 02:44:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 02:44:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738798.1145660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHDxt-00066z-Gs; Wed, 12 Jun 2024 02:43:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738798.1145660; Wed, 12 Jun 2024 02:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHDxt-00066s-DA; Wed, 12 Jun 2024 02:43:57 +0000
Received: by outflank-mailman (input) for mailman id 738798;
 Wed, 12 Jun 2024 02:43:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nAc7=NO=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHDxs-00066m-L8
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 02:43:56 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20622.outbound.protection.outlook.com
 [2a01:111:f403:2414::622])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9bff1155-2865-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 04:43:55 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by BL1PR12MB5731.namprd12.prod.outlook.com (2603:10b6:208:386::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Wed, 12 Jun
 2024 02:43:52 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.036; Wed, 12 Jun 2024
 02:43:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bff1155-2865-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YM7zGWZbY73n3z3VECnXoPWAJN5ggR+nqqvAnJHLmUuT4rycN2z3B5TIbS7ulcECmlHIK0G2f3uyAXe10V5loiaaWuWGYJTNCaWTNoy4V1An0DmHbU44lTLuGbTiInCjtNkM8qk8SYGFVur5Ax/UJ9DETxQKZtW2+MzZLXNzBiY2Ymg5uRgtcC21DATtA0NjjYAQl3AQ9NjJNkaj3Ymp2rcP4zA5EPwK9OZP9pJ0UxlHGRWojJHf3dk4DwGQuaJEBRYGgOf15FGpu3gM6ul+iwGFDaHnctHp81dqPztIyC69NGsfdM7bG0FXK4QeklerstIXFqahwDdAz/thHXouTg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=YsyLDthGinb2wG8OOq9BSBAG2db9iZHQburqgXXesKM=;
 b=EflQTm4IngNAUMpbkpt7TJW1sFMYo9xkGU+guTmMkLD5olYE/r4las4iDzYOt4SP/p2nGWIktLufpri+GnHh9eWbd24AZIlHbqubGYWba8a4AdwuzFOW44Xqnyz++SP22FTbEgpQweYURIQEwpOwhC/uxzK/KvIbPq/xPd76mq1lCp5wdaPc2yTnxqAgdiE/m6qT3900WF95R403uWREWoMIA7otvvc+cbx83+IYS6uwjOb9LPNVmwJW+YgkveMj5LREmRmfHv6DnYiE/vAEdJLFyBHd/kUsWAGzubwuGg9RU/mu4vu8fQ8jPFYEEWLR8rRIf2BaOFVtPdUt6tWnEg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YsyLDthGinb2wG8OOq9BSBAG2db9iZHQburqgXXesKM=;
 b=CTHKQPPtfBEvT+Ke+xDnVtfsxuDjr1Irm36ZcZwD14j6RTQJdK+3YJrGcxQyO9s52P/Tx2DE3WLQ9RLBKpZ8Bc2oxH+Kz4mJ59YvhBaxq91cKEtgrlYtaq/ds9sG0tHtdymSJ3VHI41liP6XJvJNvaU0eb+xlDlcpTpTFZqkqYg=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Topic: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Index: AQHauLJgk79xZtykz0GMHuXNeMsUFbHBLLUAgALK4YA=
Date: Wed, 12 Jun 2024 02:43:51 +0000
Message-ID:
 <BL1PR12MB5849B1B536BAD641C37A4308E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-3-Jiqian.Chen@amd.com>
 <efc35614-561c-4baa-9d94-d17ecb528a4b@suse.com>
In-Reply-To: <efc35614-561c-4baa-9d94-d17ecb528a4b@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|BL1PR12MB5731:EE_
x-ms-office365-filtering-correlation-id: bcd9349d-13b7-4ff2-7c13-08dc8a897ea9
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230032|1800799016|376006|366008|7416006|38070700010;
Content-Type: text/plain; charset="utf-8"
Content-ID: <04C8574644F6AE4CB9A1EAB248FA644D@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bcd9349d-13b7-4ff2-7c13-08dc8a897ea9
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jun 2024 02:43:51.9341
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: m4tM78/Jh36dODqh3p3w4WkDMQlhi8YppExJeNtvnKG78BJ8FARRKQz8knb4aOYmUKLG5Ur1XfsQA8J5v5knJA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL1PR12MB5731

On 2024/6/10 23:58, Jan Beulich wrote:
> On 07.06.2024 10:11, Jiqian Chen wrote:
>> If run Xen with PVH dom0 and hvm domU, hvm will map a pirq for
>> a passthrough device by using gsi, see qemu code
>> xen_pt_realize->xc_physdev_map_pirq and libxl code
>> pci_add_dm_done->xc_physdev_map_pirq. Then xc_physdev_map_pirq
>> will call into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
>> is not allowed because currd is PVH dom0 and PVH has no
>> X86_EMU_USE_PIRQ flag, it will fail at has_pirq check.
>>
>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
>> PHYSDEVOP_unmap_pirq for the failed path to unmap pirq. And
>> add a new check to prevent self map when subject domain has no
>> PIRQ flag.
>>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> What's imo missing in the description is a clarification / justification of
> why it is going to be a good idea (or at least an acceptable one) to expose
> the concept of PIRQs to PVH. If I'm not mistaken that concept so far has
> been entirely a PV one.
I didn't want to expose the concept of PIRQs to PVH.
This patch is for HVM guests that use PIRQs; what I said in the commit
message is that HVM will map a pirq for a gsi, not PVH.
The original code checks "!has_pirq(currd)", but currd is PVH dom0, so the
check fails. So I need to allow PHYSDEVOP_map_pirq even when currd has no
PIRQs, as long as the subject domain does.

> 
>> --- a/xen/arch/x86/hvm/hypercall.c
>> +++ b/xen/arch/x86/hvm/hypercall.c
>> @@ -71,8 +71,14 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>  
>>      switch ( cmd )
>>      {
>> +    /*
>> +     * Only being permitted for management of other domains.
>> +     * Further restrictions are enforced in do_physdev_op.
>> +     */
>>      case PHYSDEVOP_map_pirq:
>>      case PHYSDEVOP_unmap_pirq:
>> +        break;
> 
> Nit: Imo such a comment ought to be indented like code (statements), not
> like the case labels.
Thanks, I will change it in the next version.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 04:46:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 04:46:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738805.1145671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHFsB-0004HN-JJ; Wed, 12 Jun 2024 04:46:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738805.1145671; Wed, 12 Jun 2024 04:46:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHFsB-0004HG-FG; Wed, 12 Jun 2024 04:46:11 +0000
Received: by outflank-mailman (input) for mailman id 738805;
 Wed, 12 Jun 2024 04:46:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHFsA-0004H7-6L
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 04:46:10 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id acc76355-2876-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 06:46:04 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 2B0FA68BEB; Wed, 12 Jun 2024 06:45:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acc76355-2876-11ef-b4bb-af5377834399
Date: Wed, 12 Jun 2024 06:45:58 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
Message-ID: <20240612044558.GA26468@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-3-hch@lst.de> <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 02:51:24PM +0900, Damien Le Moal wrote:
> > -	if (!sd_is_zoned(sdkp))
> > +	if (!sd_is_zoned(sdkp)) {
> > +		lim->zoned = false;
> 
> Maybe we should clear the other zone related limits here ? If the drive is
> reformatted/converted from SMR to CMR (FORMAT WITH PRESET), the other zone
> limits may be set already, no ?

Yes, but we would not end up here.  The device type is constant over
the lifetime of the scsi_device structure, and we'd have to fully reprobe it.

So we don't need to clear any flags, including the actual zoned flag
here.



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 04:50:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 04:50:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738812.1145679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHFwQ-00062l-61; Wed, 12 Jun 2024 04:50:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738812.1145679; Wed, 12 Jun 2024 04:50:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHFwQ-00062e-38; Wed, 12 Jun 2024 04:50:34 +0000
Received: by outflank-mailman (input) for mailman id 738812;
 Wed, 12 Jun 2024 04:50:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHFwO-00062Y-E4
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 04:50:32 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4b84cd6c-2877-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 06:50:30 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 27E1B68BEB; Wed, 12 Jun 2024 06:50:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b84cd6c-2877-11ef-b4bb-af5377834399
Date: Wed, 12 Jun 2024 06:50:26 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when
 they fail
Message-ID: <20240612045026.GA26653@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-11-hch@lst.de> <fdfc024a-368a-4495-8b85-b5ab7741f6d4@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <fdfc024a-368a-4495-8b85-b5ab7741f6d4@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 04:30:39PM +0900, Damien Le Moal wrote:
> On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> > blkfront always had a robust negotiation protocol for detecting a write
> > cache.  Stop simply disabling cache flushes when they fail as that is
> > a grave error.
> > 
> > Signed-off-by: Christoph Hellwig <hch@lst.de>
> 
> Looks good to me but maybe mention that removal of xlvbd_flush() as well ?
> And it feels like the "stop disabling cache flushes when they fail" part should
> be a fix patch sent separately...

I'll move the patch to the front of the series to get more attention from
the maintainers, but otherwise the xlvbd_flush removal is the really
trivial part here.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 04:53:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 04:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738817.1145689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHFyr-0006bq-Hm; Wed, 12 Jun 2024 04:53:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738817.1145689; Wed, 12 Jun 2024 04:53:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHFyr-0006bj-Ej; Wed, 12 Jun 2024 04:53:05 +0000
Received: by outflank-mailman (input) for mailman id 738817;
 Wed, 12 Jun 2024 04:53:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHFyp-0006bd-EI
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 04:53:03 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a578828f-2877-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 06:53:01 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id D6EFD68BEB; Wed, 12 Jun 2024 06:52:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a578828f-2877-11ef-b4bb-af5377834399
Date: Wed, 12 Jun 2024 06:52:57 +0200
From: Christoph Hellwig <hch@lst.de>
To: Hannes Reinecke <hare@suse.de>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 13/26] block: move cache control settings out of
 queue->flags
Message-ID: <20240612045257.GA26776@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-14-hch@lst.de> <34a7b2a4-b0cb-4580-85c9-b598fd70449e@suse.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <34a7b2a4-b0cb-4580-85c9-b598fd70449e@suse.de>
User-Agent: Mutt/1.5.17 (2007-11-01)

A friendly reminder that I've skipped over the full quote.  Please
properly quote mails if you want your replies to be seen.



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 04:54:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 04:54:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738822.1145700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG0H-000780-RH; Wed, 12 Jun 2024 04:54:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738822.1145700; Wed, 12 Jun 2024 04:54:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG0H-00077t-OX; Wed, 12 Jun 2024 04:54:33 +0000
Received: by outflank-mailman (input) for mailman id 738822;
 Wed, 12 Jun 2024 04:54:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHG0H-00077n-AR
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 04:54:33 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id db1f4dac-2877-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 06:54:31 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 6904768BEB; Wed, 12 Jun 2024 06:54:29 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db1f4dac-2877-11ef-b4bb-af5377834399
Date: Wed, 12 Jun 2024 06:54:29 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 13/26] block: move cache control settings out of
 queue->flags
Message-ID: <20240612045429.GB26776@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-14-hch@lst.de> <d21b162a-1fd3-4fd1-a17f-f127f964bdf1@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d21b162a-1fd3-4fd1-a17f-f127f964bdf1@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 04:55:04PM +0900, Damien Le Moal wrote:
> On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> > Move the cache control settings into the queue_limits so that they
> > can be set atomically and all I/O is frozen when changing the
> > flags.
> 
> ...so that they can be set atomically with the device queue frozen when
> changing the flags.
> 
> may be better.

Sure.

If there was anything below I've skipped it after skipping over two
pages of full quotes.



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 04:58:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 04:58:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738831.1145710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG4A-0007po-Ai; Wed, 12 Jun 2024 04:58:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738831.1145710; Wed, 12 Jun 2024 04:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG4A-0007ph-83; Wed, 12 Jun 2024 04:58:34 +0000
Received: by outflank-mailman (input) for mailman id 738831;
 Wed, 12 Jun 2024 04:58:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHG48-0007pb-SP
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 04:58:32 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6a79711b-2878-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 06:58:32 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 6FC9B68BFE; Wed, 12 Jun 2024 06:58:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a79711b-2878-11ef-90a3-e314d9c70b13
Date: Wed, 12 Jun 2024 06:58:28 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 16/26] block: move the io_stat flag setting to
 queue_limits
Message-ID: <20240612045828.GC26776@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-17-hch@lst.de> <d51e4163-99e3-4435-870d-faef3887ab6a@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d51e4163-99e3-4435-870d-faef3887ab6a@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 05:09:45PM +0900, Damien Le Moal wrote:
> On 6/11/24 2:19 PM, Christoph Hellwig wrote:
> > Move the io_stat flag into the queue_limits feature field so that it
> > can be set atomically and all I/O is frozen when changing the flag.
> 
> Why a feature ? It seems more appropriate for io_stat to be a flag rather than
> a feature as that is a block layer thing rather than a device characteristic, no ?

Because it must actually be supported by the driver for bio-based
drivers.  Then again we also support changing it through sysfs, so
we might actually need both.  At least unlike, say, the cache flag it's
not actively harmful when enabled despite not being supported.

I can look into that, but I'll do it in another series after getting
all the driver changes out.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 05:01:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 05:01:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738836.1145720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG6m-00019t-Ni; Wed, 12 Jun 2024 05:01:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738836.1145720; Wed, 12 Jun 2024 05:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG6m-00019m-Jx; Wed, 12 Jun 2024 05:01:16 +0000
Received: by outflank-mailman (input) for mailman id 738836;
 Wed, 12 Jun 2024 05:01:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHG6l-00019A-Up
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 05:01:15 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cbd7de87-2878-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 07:01:15 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 775D868C4E; Wed, 12 Jun 2024 07:01:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cbd7de87-2878-11ef-90a3-e314d9c70b13
Date: Wed, 12 Jun 2024 07:01:09 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 19/26] block: move the nowait flag to queue_limits
Message-ID: <20240612050109.GA26959@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-20-hch@lst.de> <4845aae8-ad03-407e-bf31-f164b8f684d4@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4845aae8-ad03-407e-bf31-f164b8f684d4@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 05:16:37PM +0900, Damien Le Moal wrote:
> > @@ -1825,9 +1815,7 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
> >  	int r;
> >  
> >  	if (dm_table_supports_nowait(t))
> > -		blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
> > -	else
> > -		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, q);
> > +		limits->features &= ~BLK_FEAT_NOWAIT;
> 
> Shouldn't you set the flag here instead of clearing it ?

No, but the dm_table_supports_nowait check needs to be inverted.
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 05:03:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 05:03:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738842.1145729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG9F-0001ir-3L; Wed, 12 Jun 2024 05:03:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738842.1145729; Wed, 12 Jun 2024 05:03:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHG9F-0001ik-0k; Wed, 12 Jun 2024 05:03:49 +0000
Received: by outflank-mailman (input) for mailman id 738842;
 Wed, 12 Jun 2024 05:03:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHG9D-0001iX-FI
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 05:03:47 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 255c6c7c-2879-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 07:03:45 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 0069168C4E; Wed, 12 Jun 2024 07:03:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 255c6c7c-2879-11ef-b4bb-af5377834399
Date: Wed, 12 Jun 2024 07:03:41 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 21/26] block: move the poll flag to queue_limits
Message-ID: <20240612050341.GA27049@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-22-hch@lst.de> <d1775d3f-daaa-4193-9f68-06ec47563b35@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d1775d3f-daaa-4193-9f68-06ec47563b35@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 05:21:07PM +0900, Damien Le Moal wrote:
> Kind of the same remark as for io_stat about this not really being a device
> feature. But I guess seeing "features" as a queue feature rather than just a
> device feature makes it OK to have poll (and io_stat) as a feature rather than
> a flag.

So unlike io_stat, this very much is a feature and a feature only, as
we don't even allow changing it.  It purely exposes a device (or
rather driver) capability.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 05:06:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 05:06:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738609.1145740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHGBq-0002Gk-Go; Wed, 12 Jun 2024 05:06:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738609.1145740; Wed, 12 Jun 2024 05:06:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHGBq-0002Gd-Dk; Wed, 12 Jun 2024 05:06:30 +0000
Received: by outflank-mailman (input) for mailman id 738609;
 Tue, 11 Jun 2024 15:58:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7gKT=NN=bounce.vates.tech=bounce-md_30504962.66687428.v1-e812e3cb5a86419e990fcbce5cbc4876@srs-se1.protection.inumbo.net>)
 id 1sH3tM-0001Zg-2k
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 15:58:36 +0000
Received: from mail180-20.suw31.mandrillapp.com
 (mail180-20.suw31.mandrillapp.com [198.2.180.20])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 744f2ff4-280b-11ef-90a3-e314d9c70b13;
 Tue, 11 Jun 2024 17:58:34 +0200 (CEST)
Received: from pmta11.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail180-20.suw31.mandrillapp.com (Mailchimp) with ESMTP id
 4VzCyJ3tgpzFCWbSW
 for <xen-devel@lists.xenproject.org>; Tue, 11 Jun 2024 15:58:32 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 e812e3cb5a86419e990fcbce5cbc4876; Tue, 11 Jun 2024 15:58:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 744f2ff4-280b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718121512; x=1718382012;
	bh=qBnJRR2GLdLoxmMsT2I+0SYigL3MpNvaVIQF5Xvqsx8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=inFLXmjUr+A4mLmjAo41HPAuw/gMxiJK2QU7ng9WBOvqWV718V9MZN1LWdrky9TgQ
	 tvQoCCMeYfrqW56860L0pKSIIVFFaD22HRkgt07MRpoeAbnLT+k2+O0/GIz3vvwt/M
	 MBEUOsYClgHiB0wbry7xaxhQL/BTwM4dqh47KJIR3eK4K3Mgbf+7CJ3JvdVLBgafpV
	 1p2AYPVOcBbZH3Q2a985EYoItpDnVPMBI175/cgldCkgxIdd8gYahcJH/+VRY4LktX
	 XPJSOxnIRdJen0wGoPQrjl3LZTI1I/KMhRQG6ptCqKohE5JJvzWLNwX6ELDYBwM4wq
	 UowVawaTNPuww==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718121512; x=1718382012; i=anthony.perard@vates.tech;
	bh=qBnJRR2GLdLoxmMsT2I+0SYigL3MpNvaVIQF5Xvqsx8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=gCtUEc4ans5+W9FORQ8cvdI4/fRKucB1NCMakkFbQ+4EejywKIiai1uDyEdhv1fPp
	 ujA49C8AQ4HxAuCXSaf16lvgduyFjc4QMZLiTket1uDty2z28sBF19gATSNYvp5xDP
	 iFQx/b+ouHoIdwPGIbaW9aa+aLezx4ojt0pBe2/e+c9I3KlRP2Sw+CrofZQT6e5Ar6
	 SAtSXtA6h3NUdXPLD5LLRkecTJoLm9FQRQWx5KuUz30FSHgtzR/OOlRpDeqyfhCTMi
	 4AFbYV19/sL1OuYrNTdpKeMeeaCML94Y+la8NImhuN+6geVbbdNIFgoiWnqoaznuvk
	 94hri9KUyFjXA==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20for-4.19=3F=20v6=203/9]=20xen:=20Refactor=20altp2m=20options=20into=20a=20structured=20format?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718121511518
To: =?utf-8?Q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>
Cc: xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Julien Grall <jgrall@amazon.com>
Message-Id: <ZmhxFFnanyUzI4RY@l14>
References: <cover.1718038855.git.w1benny@gmail.com> <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
In-Reply-To: <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.e812e3cb5a86419e990fcbce5cbc4876?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240611:md
Date: Tue, 11 Jun 2024 15:58:32 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 05:10:41PM +0000, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> Encapsulate the altp2m options within a struct. This change is preparatory
> and sets the groundwork for introducing an additional parameter in a
> subsequent commit.
> 
> Signed-off-by: Petr Beneš <w1benny@gmail.com>
> Acked-by: Julien Grall <jgrall@amazon.com> # arm
> Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
> ---
>  tools/libs/light/libxl_create.c     | 6 +++---

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 05:06:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 05:06:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738746.1145744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHGBq-0002JM-OH; Wed, 12 Jun 2024 05:06:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738746.1145744; Wed, 12 Jun 2024 05:06:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHGBq-0002IL-Jn; Wed, 12 Jun 2024 05:06:30 +0000
Received: by outflank-mailman (input) for mailman id 738746;
 Tue, 11 Jun 2024 19:41:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rMx3=NN=linux.intel.com=tim.c.chen@srs-se1.protection.inumbo.net>)
 id 1sH7NV-0001BJ-K5
 for xen-devel@lists.xenproject.org; Tue, 11 Jun 2024 19:41:57 +0000
Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.21])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a7323852-282a-11ef-b4bb-af5377834399;
 Tue, 11 Jun 2024 21:41:54 +0200 (CEST)
Received: from fmviesa009.fm.intel.com ([10.60.135.149])
 by orvoesa113.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Jun 2024 12:41:53 -0700
Received: from mmasroor-mobl.amr.corp.intel.com (HELO [10.255.231.206])
 ([10.255.231.206])
 by fmviesa009-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 11 Jun 2024 12:41:52 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7323852-282a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1718134915; x=1749670915;
  h=message-id:subject:from:to:cc:date:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=3jSJzP/wKvbuiDtG0srhckDI0jqTpk/NVaK27TegDYY=;
  b=Txa/O434fPtNVN8jejVEK4GQd5KLG/dxr4fJIpUpVMFpyyfy3moEf96o
   3t6h70pNz/+OX5tzRNfphI9UzLkxXTDSIdWjFYzkAhYh8YdIkSIq0PxXz
   NoLZViLbrfbROUfTH8xleCglCnrWw1vvLpRxA8BM3VJcy3ihHWX8fX2dF
   dA1tFcmdszgqoi477kAhSoi7MvrpLEMefAsRUIta0L8UXsUgzobZtWMit
   zXi+dHs+HGr+tuPz25BCXunJTRvKFzoLm95mt1uQgPYmndBp0NiTFh3dM
   uDjJSySMyG2krjc1ltX2bBJ+AqrF1Yivhts05T81/ExByjBmlU8pSEls7
   A==;
X-CSE-ConnectionGUID: aN8xGcLkQSiGT8PYSgEXfQ==
X-CSE-MsgGUID: KG80D3N5Qs6iUzB2G3FBvA==
X-IronPort-AV: E=McAfee;i="6600,9927,11100"; a="14826583"
X-IronPort-AV: E=Sophos;i="6.08,231,1712646000"; 
   d="scan'208";a="14826583"
X-CSE-ConnectionGUID: RmVZgz1VRteabv7gZ38CIw==
X-CSE-MsgGUID: YWjwba72SB6sl89EETJtkw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.08,231,1712646000"; 
   d="scan'208";a="39643390"
Message-ID: <80532f73e52e2c21fdc9aac7bce24aefb76d11b0.camel@linux.intel.com>
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
From: Tim Chen <tim.c.chen@linux.intel.com>
To: David Hildenbrand <david@redhat.com>, linux-kernel@vger.kernel.org
Cc: linux-mm@kvack.org, linux-hyperv@vger.kernel.org, 
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org, 
 kasan-dev@googlegroups.com, Andrew Morton <akpm@linux-foundation.org>, Mike
 Rapoport <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>, "K. Y.
 Srinivasan" <kys@microsoft.com>,  Haiyang Zhang <haiyangz@microsoft.com>,
 Wei Liu <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,  "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, Xuan Zhuo
 <xuanzhuo@linux.alibaba.com>, Eugenio =?ISO-8859-1?Q?P=E9rez?=
 <eperezma@redhat.com>, Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Oleksandr Tyshchenko
 <oleksandr_tyshchenko@epam.com>,  Alexander Potapenko <glider@google.com>,
 Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Date: Tue, 11 Jun 2024 12:41:51 -0700
In-Reply-To: <20240607090939.89524-2-david@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
	 <20240607090939.89524-2-david@redhat.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.44.4 (3.44.4-3.fc36) 
MIME-Version: 1.0

On Fri, 2024-06-07 at 11:09 +0200, David Hildenbrand wrote:
> In preparation for further changes, let's teach __free_pages_core()
> about the differences of memory hotplug handling.
> 
> Move the memory hotplug specific handling from generic_online_page() to
> __free_pages_core(), use adjust_managed_page_count() on the memory
> hotplug path, and spell out why memory freed via memblock
> cannot currently use adjust_managed_page_count().
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> ---
>  mm/internal.h       |  3 ++-
>  mm/kmsan/init.c     |  2 +-
>  mm/memory_hotplug.c |  9 +--------
>  mm/mm_init.c        |  4 ++--
>  mm/page_alloc.c     | 17 +++++++++++++++--
>  5 files changed, 21 insertions(+), 14 deletions(-)
> 
> diff --git a/mm/internal.h b/mm/internal.h
> index 12e95fdf61e90..3fdee779205ab 100644
> --- a/mm/internal.h
> +++ b/mm/internal.h
> @@ -604,7 +604,8 @@ extern void __putback_isolated_page(struct page *page=
, unsigned int order,
>  				    int mt);
>  extern void memblock_free_pages(struct page *page, unsigned long pfn,
>  					unsigned int order);
> -extern void __free_pages_core(struct page *page, unsigned int order);
> +extern void __free_pages_core(struct page *page, unsigned int order,
> +		enum meminit_context);

Shouldn't the above be
		enum meminit_context context);
> 
>  /*
> >   * This will have no effect, other than possibly generating a warning, if the

Thanks.

Tim


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 07:58:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 07:58:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738901.1145766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHIs6-0006j7-Jl; Wed, 12 Jun 2024 07:58:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738901.1145766; Wed, 12 Jun 2024 07:58:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHIs6-0006j0-GZ; Wed, 12 Jun 2024 07:58:18 +0000
Received: by outflank-mailman (input) for mailman id 738901;
 Wed, 12 Jun 2024 07:58:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHIs5-0006iq-Ie; Wed, 12 Jun 2024 07:58:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHIs5-0000M7-G6; Wed, 12 Jun 2024 07:58:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHIs5-00018n-3r; Wed, 12 Jun 2024 07:58:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHIs5-0002HK-3T; Wed, 12 Jun 2024 07:58:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rQEABXqHuPX5VmaPoIJVzrVVkEb1jgRRXP5bOooP0O0=; b=wXmnjIPSEhUJavUnlaIpKbKer8
	8vQ/U2igyN1xm+LH/j7RgGwQ2u0JH2s4NFt1qE1gO3kphMkBDlGldnQzmjsXbgAOQ/vxXixSq7RoZ
	u9gMQiYkFG9h+8adV69nnnMio+/5pCfHPIxzpxssiybz39bPAzIWgXOYC91hKu+vWg8c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186314-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186314: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2ef5971ff345d3c000873725db555085e0131961
X-Osstest-Versions-That:
    linux=83a7eefedc9b56fe7bfeff13b6c7356688ffa670
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 07:58:17 +0000

flight 186314 linux-linus real [real]
flight 186317 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186314/
http://logs.test-lab.xenproject.org/osstest/logs/186317/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail pass in 186317-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186317 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186317 never pass
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186298
 test-armhf-armhf-xl-raw       8 xen-boot                     fail  like 186303
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186303
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186303
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186303
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186303
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186303
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186303
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186303
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                2ef5971ff345d3c000873725db555085e0131961
baseline version:
 linux                83a7eefedc9b56fe7bfeff13b6c7356688ffa670

Last test of basis   186303  2024-06-10 10:07:55 Z    1 days
Testing same since   186314  2024-06-12 00:10:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Baokun Li <libaokun1@huawei.com>
  Chandan Babu R <chandanbabu@kernel.org>
  Christian Brauner <brauner@kernel.org>
  Gao Xiang <hsiangkao@linux.alibaba.com>
  Jeff Layton <jlayton@kernel.org>
  John Garry <john.g.garry@oracle.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Ritesh Harjani (IBM) <ritesh.list@gmail.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Yuntao Wang <yuntao.wang@linux.dev>
  Zhang Yi <yi.zhang@huawei.com>
  Zizhi Wo <wozizhi@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   83a7eefedc9b..2ef5971ff345  2ef5971ff345d3c000873725db555085e0131961 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:01:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:01:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738911.1145775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHIv9-0000Hm-44; Wed, 12 Jun 2024 08:01:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738911.1145775; Wed, 12 Jun 2024 08:01:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHIv9-0000Hc-1B; Wed, 12 Jun 2024 08:01:27 +0000
Received: by outflank-mailman (input) for mailman id 738911;
 Wed, 12 Jun 2024 08:01:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHIv8-0000HT-6b
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:01:26 +0000
Received: from mail-yw1-x1133.google.com (mail-yw1-x1133.google.com
 [2607:f8b0:4864:20::1133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5e2ab11-2891-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:01:23 +0200 (CEST)
Received: by mail-yw1-x1133.google.com with SMTP id
 00721157ae682-628c1f09f5cso23561547b3.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:01:23 -0700 (PDT)
Received: from localhost ([46.222.2.38]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b093aff889sm6894416d6.101.2024.06.12.01.01.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 01:01:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5e2ab11-2891-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718179283; x=1718784083; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=1PGf6HlJc1ANQenmlixWu1Sno9FA8D3ndcnX4NZVy9I=;
        b=qLOUZn7tY+yv/P2ZpA83zNeCW2OAXyc+7DvvMI7UeC/9SIPanqgn2I69BpyuX84Cew
         iMCPFyN8ZH1GKo8771xNGwUwRkW1SjmyuEZFwr9oaVOF/bqiQ4yRLvu7Dd/ZRJTajJxm
         YiTm1zXC0AMq7DZCX609knQTFjF6MJ/ZsiDZA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718179283; x=1718784083;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1PGf6HlJc1ANQenmlixWu1Sno9FA8D3ndcnX4NZVy9I=;
        b=Ao3NyM0qS/eps6o5Af83zx/Mb4SKpSlmWC6+5YMOYrvssWWpb4oorcPYIrIph57Zp6
         aRtDNeyB8cPM2cK/5j/9W/thDb3ZIIgc3g1tpXQDol4HW1U71ftDxzZUByS+MB8/GQmE
         36LBHJVLGCuel3/i/Oaw7p1SIWWslGBYNBrnhIgv+eAm/2w1cnJT8kT2QG/4aoL8VN66
         IDOh2uY3SbRpC4vcLuxnd7/N0Q1Nfn11bFRk5FUYdJqiz8g7KS6THZBx4KlioP3IYd9i
         OXQkp6AU1br/gcRzsTNFGbUiTV3bu5Mx5qErTQ6Xk0SsOVqhAnPWqNrTfUTJC0wTxvUx
         o0Ow==
X-Forwarded-Encrypted: i=1; AJvYcCWSmK9zAqN0GIMCVC3wOyBKMjr1QLy9Jy3vmamUpdBgvM/7DGDtMsNtHJ4kWpYbb0utxmI2/d6MXkGVIYqBI+UMA//b0qrwxe/1qHMADM8=
X-Gm-Message-State: AOJu0Yy/n4/Gqgvg7IHeS575Yw9+a6kuGcN0t24eREA0tfMRkinDxbK9
	Ekf8t4oDaMWiAdSGfGc/b89sXK5+m3LGC8ITMnInj54+3Z7ceGRS7SLPzZpTo3U=
X-Google-Smtp-Source: AGHT+IEk5bObzz8qCZsqdRqFNOU7a/dh67daOPgnEKJGJzh0sP06pKqPApTvT03CHGJxr8rf1zPYww==
X-Received: by 2002:a81:b647:0:b0:61b:e62e:73f1 with SMTP id 00721157ae682-62fb8a58273mr12605907b3.3.1718179282688;
        Wed, 12 Jun 2024 01:01:22 -0700 (PDT)
Date: Wed, 12 Jun 2024 10:01:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?utf-8?Q?B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when
 they fail
Message-ID: <ZmlVziizbaboaBSn@macbook>
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-11-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20240611051929.513387-11-hch@lst.de>

On Tue, Jun 11, 2024 at 07:19:10AM +0200, Christoph Hellwig wrote:
> blkfront always had a robust negotiation protocol for detecting a write
> cache.  Stop simply disabling cache flushes when they fail as that is
> a grave error.

It's my understanding that the current code attempts to compensate for
the lack of guarantees the feature itself provides:

 * feature-barrier
 *      Values:         0/1 (boolean)
 *      Default Value:  0
 *
 *      A value of "1" indicates that the backend can process requests
 *      containing the BLKIF_OP_WRITE_BARRIER request opcode.  Requests
 *      of this type may still be returned at any time with the
 *      BLKIF_RSP_EOPNOTSUPP result code.
 *
 * feature-flush-cache
 *      Values:         0/1 (boolean)
 *      Default Value:  0
 *
 *      A value of "1" indicates that the backend can process requests
 *      containing the BLKIF_OP_FLUSH_DISKCACHE request opcode.  Requests
 *      of this type may still be returned at any time with the
 *      BLKIF_RSP_EOPNOTSUPP result code.

So even when the feature is exposed, the backend might return
EOPNOTSUPP for the flush/barrier operations.

Such a failure is tied to whether the underlying blkback storage
supports REQ_OP_WRITE with the REQ_PREFLUSH flag set.  blkback will
expose "feature-barrier" and/or "feature-flush-cache" without knowing
whether the underlying backend supports those operations, hence the
weird fallback in blkfront.
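
That fallback can be sketched with a toy model (this is not the actual
xen-blkfront code; `struct frontend`, `submit_flush` and the
`storage_supports_flush` parameter are illustrative names standing in
for the real negotiation state):

```c
#include <assert.h>
#include <stdbool.h>

#define BLKIF_RSP_OKAY        0
#define BLKIF_RSP_EOPNOTSUPP -2

/* Illustrative frontend state: feature advertised via xenstore. */
struct frontend {
    bool feature_flush;   /* backend advertised "feature-flush-cache" */
};

/* Hypothetical backend: it advertises the feature, yet the underlying
 * storage may still reject BLKIF_OP_FLUSH_DISKCACHE at runtime. */
static int backend_flush(bool storage_supports_flush)
{
    return storage_supports_flush ? BLKIF_RSP_OKAY : BLKIF_RSP_EOPNOTSUPP;
}

/* Current (pre-patch) behavior: on EOPNOTSUPP, silently disable the
 * feature and report success -- the "weird fallback" described above. */
static int submit_flush(struct frontend *fe, bool storage_supports_flush)
{
    if (!fe->feature_flush)
        return BLKIF_RSP_OKAY;          /* no cache to flush */

    int rsp = backend_flush(storage_supports_flush);
    if (rsp == BLKIF_RSP_EOPNOTSUPP) {
        fe->feature_flush = false;      /* cover up the failure */
        return BLKIF_RSP_OKAY;
    }
    return rsp;
}
```

The patch under discussion would instead treat the EOPNOTSUPP response
as a hard error rather than quietly dropping cache flushes.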

I'm unsure whether lack of REQ_PREFLUSH support is something we still
need to worry about; it seems it was a concern when the code was
introduced, but that's > 10y ago.

Overall blkback should ensure that REQ_PREFLUSH is supported before
exposing "feature-barrier" or "feature-flush-cache", as then the
exposed features would really match what the underlying backend
supports (rather than the commands blkback knows about).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:09:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:09:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738920.1145786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJ36-00017Z-PD; Wed, 12 Jun 2024 08:09:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738920.1145786; Wed, 12 Jun 2024 08:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJ36-00017S-MW; Wed, 12 Jun 2024 08:09:40 +0000
Received: by outflank-mailman (input) for mailman id 738920;
 Wed, 12 Jun 2024 08:09:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHJ35-00017M-Ep
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:09:39 +0000
Received: from mail-qk1-x72c.google.com (mail-qk1-x72c.google.com
 [2607:f8b0:4864:20::72c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c266f0d-2893-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:09:37 +0200 (CEST)
Received: by mail-qk1-x72c.google.com with SMTP id
 af79cd13be357-7955841fddaso68316685a.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:09:37 -0700 (PDT)
Received: from localhost ([46.222.2.38]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-7956b8d6038sm286669185a.11.2024.06.12.01.09.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 01:09:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c266f0d-2893-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718179776; x=1718784576; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=M7ywAhhvLAK4IhY6eRe1keaABENQyDZEJGYffq9r5Og=;
        b=l4zUhnVTZNX1PNxO29VCC6F8XSmpA5KoRhJ5ryZdnhWhBOnPBvw+inHVImZBffWwHo
         Qay1eyTOiQJoPRvvNEOs/5ooV3BfblaSCJQnY8s00OrolhYdl8fi1sI2oWSjp9tsl06m
         VBGnYGtDjQAeV8HGrrISXXhq+R/IW9yS421Mw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718179776; x=1718784576;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=M7ywAhhvLAK4IhY6eRe1keaABENQyDZEJGYffq9r5Og=;
        b=r+TeBe3lL91/vUemPp4Iobf4jmUL15TecZVBSsXo0oOuyx6JroFEF13njKZtED9s0L
         eHLYoIT9/x+HnZa64bV3wlsgwleqMKh0O1Sq8XbDP1boFAxLlrBACW7JWwq19G/M4gDl
         7JqUgemn8VQElvGiS1wZYbDs+LjpfUe20J0V2sy8010L8iXwPMlrytX1XdnhcPETfMoO
         HEKWmlQ+HuDwtMXSO0NALTgzWn2cXchcdMqYqWdmVfec0hCTJgCX3D8aHdFBDJG5g8sQ
         B4aDcA+3qrOPBoMGv/jxd39K+70qYEb8kSbckTZUOScuojD7UJ0aQEqNkQZ6gBs/L7Qp
         QPhA==
X-Forwarded-Encrypted: i=1; AJvYcCWGAyrT08BH4NhN68vDnEeMW+npgkSfsRsHWfBXITy2wFcqx/itG3MKLX+CdO3qfV97/IgjK6pklxWIBd4qDtERZzAw46/Dn39NTRWnxZ8=
X-Gm-Message-State: AOJu0YwJgjcP3JXtECyiKBuW0wnByqbaEvtfkfkw9QJRqOO98GA2Pppr
	aQ2IzMrri/72i0kM1ozmYx8n2abeLLnVNTi8TA4Qf9qJTg8U9+LMUpoWDS+Z2bw=
X-Google-Smtp-Source: AGHT+IEO6VEeG1fc1ZRiL+XsLLigphmueKuTA95tN9EfbEeI69e4qNpY+O5pq08Qm6vBE12oTaTsTA==
X-Received: by 2002:a05:620a:43a7:b0:795:4997:38e2 with SMTP id af79cd13be357-797c32ad662mr620809185a.33.1718179776300;
        Wed, 12 Jun 2024 01:09:36 -0700 (PDT)
Date: Wed, 12 Jun 2024 10:09:33 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 1/7] x86/smp: do not use shorthand IPI destinations in
 CPU hot{,un}plug contexts
Message-ID: <ZmlXve3rV2Vx9bH7@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-2-roger.pau@citrix.com>
 <615582c8-c153-424d-bce4-eb0c903d28ad@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <615582c8-c153-424d-bce4-eb0c903d28ad@suse.com>

On Tue, Jun 11, 2024 at 09:42:39AM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > Due to the current rwlock logic, if the CPU calling get_cpu_maps() does so from
> > a cpu_hotplug_{begin,done}() region the function will still return success,
> > because a CPU taking the rwlock in read mode after having taken it in write
> > mode is allowed.  Such behavior however defeats the purpose of get_cpu_maps(),
> 
> I'm not happy to see you still have this claim here. The behavior may (appear
> to) defeat the purpose here, but as expressed previously I don't view that as
> being a general pattern.

Right.  What about replacing the paragraph with:

"Due to the current rwlock logic, if the CPU calling get_cpu_maps() does so from
a cpu_hotplug_{begin,done}() region the function will still return success,
because a CPU taking the rwlock in read mode after having taken it in write
mode is allowed.  Such a corner case makes using get_cpu_maps() alone
not enough to prevent using the shorthand in CPU hotplug regions."

I think the rest of the commit message is not controversial.
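
For reference, the corner case can be modeled with a toy sketch (this
is not Xen's actual percpu rwlock; `write_owner` and the helper names
are illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the cpu_add_remove_lock: it records its write owner,
 * so a read attempt by that same CPU succeeds (recursion allowed). */
struct toy_rwlock {
    int write_owner;   /* CPU id holding it in write mode, -1 if none */
    int readers;
};

static struct toy_rwlock cpu_add_remove_lock = { .write_owner = -1 };

static void cpu_hotplug_begin(int cpu)
{
    cpu_add_remove_lock.write_owner = cpu;
}

static void cpu_hotplug_done(int cpu)
{
    (void)cpu;
    cpu_add_remove_lock.write_owner = -1;
}

/* get_cpu_maps(): try-read-lock.  Succeeds even for the write owner,
 * which is exactly the corner case described above. */
static bool get_cpu_maps(int cpu)
{
    if (cpu_add_remove_lock.write_owner == -1 ||
        cpu_add_remove_lock.write_owner == cpu) {
        cpu_add_remove_lock.readers++;
        return true;
    }
    return false;
}

/* The proposed helper: detect being inside a hotplug region directly,
 * instead of relying on get_cpu_maps() failing. */
static bool cpu_in_hotplug_context(int cpu)
{
    return cpu_add_remove_lock.write_owner == cpu;
}
```

In the model, a CPU inside cpu_hotplug_begin()/done() still gets a
successful get_cpu_maps(), so only the extra helper can tell
send_IPI_mask() not to use the shorthand there.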

> > as it should always return false when called with a CPU hot{,un}plug operation
> > is in progress.  Otherwise the logic in send_IPI_mask() is wrong, as it could
> > decide to use the shorthand even when a CPU operation is in progress.
> > 
> > Introduce a new helper to detect whether the current caller is between a
> > cpu_hotplug_{begin,done}() region and use it in send_IPI_mask() to restrict
> > shorthand usage.
> > 
> > Fixes: 5500d265a2a8 ('x86/smp: use APIC ALLBUT destination shorthand when possible')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Changes since v1:
> >  - Modify send_IPI_mask() to detect CPU hotplug context.
> > ---
> >  xen/arch/x86/smp.c       |  2 +-
> >  xen/common/cpu.c         |  5 +++++
> >  xen/include/xen/cpu.h    | 10 ++++++++++
> >  xen/include/xen/rwlock.h |  2 ++
> >  4 files changed, 18 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/x86/smp.c b/xen/arch/x86/smp.c
> > index 7443ad20335e..04c6a0572319 100644
> > --- a/xen/arch/x86/smp.c
> > +++ b/xen/arch/x86/smp.c
> > @@ -88,7 +88,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
> >       * the system have been accounted for.
> >       */
> >      if ( system_state > SYS_STATE_smp_boot &&
> > -         !unaccounted_cpus && !disabled_cpus &&
> > +         !unaccounted_cpus && !disabled_cpus && !cpu_in_hotplug_context() &&
> >           /* NB: get_cpu_maps lock requires enabled interrupts. */
> >           local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
> >           (park_offline_cpus ||
> 
> Along with changing the condition you also want to update the comment to
> reflect the code adjustment.

I've assumed the wording in the comment that says: "no CPU hotplug or
unplug operations are taking place" would already cover the addition
of the !cpu_in_hotplug_context() check.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:10:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:10:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738926.1145797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJ49-0002Xq-6l; Wed, 12 Jun 2024 08:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738926.1145797; Wed, 12 Jun 2024 08:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJ49-0002Xj-2F; Wed, 12 Jun 2024 08:10:45 +0000
Received: by outflank-mailman (input) for mailman id 738926;
 Wed, 12 Jun 2024 08:10:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHJ47-00017M-HC
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:10:43 +0000
Received: from mail-yw1-x112d.google.com (mail-yw1-x112d.google.com
 [2607:f8b0:4864:20::112d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 428e4cf5-2893-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:10:42 +0200 (CEST)
Received: by mail-yw1-x112d.google.com with SMTP id
 00721157ae682-62a08091c2bso24939937b3.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:10:42 -0700 (PDT)
Received: from localhost ([46.222.2.38]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-795f314fcb5sm260760585a.72.2024.06.12.01.10.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 01:10:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 428e4cf5-2893-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718179841; x=1718784641; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=Ok1fOm0ZQJOWLsqRuLXcwY4KZXnpsaiAj+FUXeRbHrQ=;
        b=srELggTjaH3FTXH1DY5JZ8E3S46C+rkly4CYMJFZt1/4HHKjaovUIFHQZVkYPC3a4c
         f1GBDe+ExwxkDlqf3MIL0TAT0Mwidn9ANtx2RNGGkztfZx+DxbYOJmCWn8Lj0wPmXqwL
         2wbfYSb7+2gnMdx65WaBFGQtl52EThaTqI1AM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718179841; x=1718784641;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Ok1fOm0ZQJOWLsqRuLXcwY4KZXnpsaiAj+FUXeRbHrQ=;
        b=m8SkYYuSK6QRkccyQ2MCmXmOZGZkwibjFS2NPEKAdiskc1UzIREKMxZ6kF1X1ZjR6+
         uyP/bzOrlXIcdRXGYV8XrDyhOo4qiWaxq/G/vbdIZtwFCG27X3+qjuCzCd2MVnq6aIed
         fzc0w8y844bp2EtpKLJG8EddRPfiu4vvdTHclj03gKzQ+ojHp/4DM636t6wJc5BQa1S0
         jk4GazAkUuvV93JTBvMNzD0OOMQiIFhRJytuQB3L4bFbvsmyCaV4d6VPQGqAtm7DGzm4
         7afLDA+UyA+fBcPUhqjes2Bq86gtLiIKKNN4eQL9uQ8lfWzMqultUXUt/3DXnqXnmZk1
         lykg==
X-Forwarded-Encrypted: i=1; AJvYcCUQXcCl031obYSFLEuMgumt07K2ypMINbvEaJ/900cg4c0Fp7MOYpFw2IhMwrnJKWIiJDhClsrLAYuHDFZqo3jFBJZMyhx/hVsQTNEa74o=
X-Gm-Message-State: AOJu0Yy+1EjxVMiqWRXkQzchK7DVQ5+mjd7Yz7nmM9/zRhNvwLzQS4Gz
	dGkml5m2BjQ/GttMoz6RYLzS5kWGWPQuQ7EVDT2J5jIzsUoA4HNUoNziHloLsdM2lEXGLZnU6RC
	q
X-Google-Smtp-Source: AGHT+IHn3yWYZkBGFUL1w1Mt4rk4mRmDpYKUkqoEgOCfvwJwbmygVnLlEMplEgM0m5bZnLRMUIc8YQ==
X-Received: by 2002:a0d:d403:0:b0:62d:a29:5384 with SMTP id 00721157ae682-62fbdaaa65cmr10460997b3.43.1718179840830;
        Wed, 12 Jun 2024 01:10:40 -0700 (PDT)
Date: Wed, 12 Jun 2024 10:10:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 3/7] x86/irq: limit interrupt movement done by
 fixup_irqs()
Message-ID: <ZmlX_cxqJLerZKee@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-4-roger.pau@citrix.com>
 <5660db44-b169-44e3-9439-67d3b55bcac0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5660db44-b169-44e3-9439-67d3b55bcac0@suse.com>

On Tue, Jun 11, 2024 at 11:59:39AM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > The current check used in fixup_irqs() to decide whether to move around
> > interrupts is based on the affinity mask, but such a mask can have all bits
> > set, and hence is unlikely to be a subset of the input mask.  For example,
> > if an interrupt has an affinity mask of all 1s, any input to fixup_irqs()
> > that's not an all-set CPU mask would cause that interrupt to be shuffled
> > around unconditionally.
> > 
> > What fixup_irqs() cares about is evacuating interrupts from CPUs not set in the
> > input CPU mask, and for that purpose it should check whether the interrupt is
> > assigned to a CPU not present in the input mask.  Assume that ->arch.cpu_mask
> > is a subset of the ->affinity mask, and keep the current logic that resets the
> > ->affinity mask if the interrupt has to be shuffled around.
> > 
> > Doing the affinity movement based on ->arch.cpu_mask requires removing the
> > special handling of ->arch.cpu_mask done for high-priority vectors; otherwise
> > the adjustment done to cpu_mask makes them always skip the CPU interrupt
> > movement.
> > 
> > While there, also adjust the comment describing the purpose of fixup_irqs().
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Aiui this is independent of patch 1, so could go in while we still settle on
> how to word things there?

I think so; the issue patch 1 fixes is independent of the rest of the
series.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:24:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738942.1145806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJH8-0004SY-8r; Wed, 12 Jun 2024 08:24:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738942.1145806; Wed, 12 Jun 2024 08:24:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJH8-0004SR-64; Wed, 12 Jun 2024 08:24:10 +0000
Received: by outflank-mailman (input) for mailman id 738942;
 Wed, 12 Jun 2024 08:24:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0BDc=NO=bounce.vates.tech=bounce-md_30504962.66695b25.v1-b9d18e805cb34b3aad09cf3d27c46ab2@srs-se1.protection.inumbo.net>)
 id 1sHJH7-0004SL-3c
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:24:09 +0000
Received: from mail180-20.suw31.mandrillapp.com
 (mail180-20.suw31.mandrillapp.com [198.2.180.20])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2245e87a-2895-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:24:06 +0200 (CEST)
Received: from pmta11.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail180-20.suw31.mandrillapp.com (Mailchimp) with ESMTP id
 4VzdqT48JFzFCWhBK
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:24:05 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 b9d18e805cb34b3aad09cf3d27c46ab2; Wed, 12 Jun 2024 08:24:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2245e87a-2895-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718180645; x=1718441145;
	bh=WR+ob45H4wrg2breZLSlO1RKjvBeRuzC/58xjyoZb1U=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=sRpw1IfVJHkyZlucuGSdgEQlRx1E+WC+8C/dVMrYGvcUvAJAOtqF3nqX0HOud0UGK
	 5mB+vx8gRUfNq1Js0ysgGRT8HQ4fXLDC+cbTJJFA4MeZtnHO4KVzoBDttgwYYeDdXd
	 FN4BcfB3BDD1AGTkc9fajFn1Z5FnnzonoMSNWxOwJX73+ZuaDE8ev2lbfmWLY00klP
	 G90uYtc3UtSf9yEch4nZ6yISDNBRkYn1ylruGCuRpqcS8vHp3H4D4vYtGHQRJj2zD5
	 gg+q+bCKRUV/HnQZ9xs53mJ6DU7bBsv31tQioy4HjoP2xV2CnuB9kiUn0pdkMHo0QD
	 Bp7eWzoPhmvpw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718180645; x=1718441145; i=anthony.perard@vates.tech;
	bh=WR+ob45H4wrg2breZLSlO1RKjvBeRuzC/58xjyoZb1U=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=ELcFGMOnx+09icFWl6C3eFu7S3kgU5K58zlDwF8hv42SprsJ9YtE3OAmHjOyBSlsf
	 4msc8dwk9b55TiGvt8A+nNgYMwqysrv2h6+RKIuSpL2MUy2Hbfqq6Pg+sBij0hnXwm
	 BbNNG4Bau1ObON5UzFuk3+LGCA/LCOzw76SxNBxWCnEGasgFkEHlpq7clTsQM5pA36
	 41aUs6bJ5seUGStvXF+WzWWRs2XMwobJiuV4QD2D6EyNffou+yXRRGv7Hg0gbS7go2
	 Osav0FvJ+0MPiuRULHcgJeJQFxHcA0/70qFqh6mqcmCA5q8lWkeApw0vzNxTNtRc5d
	 eOPk1cjOJnrlQ==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20for-4.19=3F=20v6=204/9]=20tools/xl:=20Add=20altp2m=5Fcount=20parameter?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718180642893
To: =?utf-8?Q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>, Nick Rosbrook <rosbrookn@gmail.com>, Juergen Gross <jgross@suse.com>
Message-Id: <ZmlbInvgw6iu7s8b@l14>
References: <cover.1718038855.git.w1benny@gmail.com> <02e0eefe1bed87cb55490f6ea13fa28c94af2a0d.1718038855.git.w1benny@gmail.com>
In-Reply-To: <02e0eefe1bed87cb55490f6ea13fa28c94af2a0d.1718038855.git.w1benny@gmail.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.b9d18e805cb34b3aad09cf3d27c46ab2?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240612:md
Date: Wed, 12 Jun 2024 08:24:05 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 05:10:42PM +0000, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> Introduce a new altp2m_count parameter to control the maximum number of altp2m
> views a domain can use. By default, if altp2m_count is unspecified and altp2m
> is enabled, the value is set to 10, reflecting the legacy behavior.
> 
> This change is preparatory; it establishes the groundwork for the feature but
> does not activate it.
> 
> Signed-off-by: Petr Beneš <w1benny@gmail.com>
> ---
>  tools/golang/xenlight/helpers.gen.go | 2 ++
>  tools/golang/xenlight/types.gen.go   | 1 +
>  tools/include/libxl.h                | 8 ++++++++
>  tools/libs/light/libxl_create.c      | 9 +++++++++
>  tools/libs/light/libxl_types.idl     | 1 +
>  tools/xl/xl_parse.c                  | 9 +++++++++
>  6 files changed, 30 insertions(+)
> 
> diff --git a/tools/include/libxl.h b/tools/include/libxl.h
> index f5c7167742..bfa06caad2 100644
> --- a/tools/include/libxl.h
> +++ b/tools/include/libxl.h
> @@ -1250,6 +1250,14 @@ typedef struct libxl__ctx libxl_ctx;
>   */
>  #define LIBXL_HAVE_ALTP2M 1
> 
> +/*
> + * LIBXL_HAVE_ALTP2M_COUNT
> + * If this is defined, then libxl supports setting the maximum number of
> + * alternate p2m tables.
> + */
> +#define LIBXL_HAVE_ALTP2M_COUNT 1
> +#define LIBXL_ALTP2M_COUNT_DEFAULT (~(uint32_t)0)

Can you move this define (LIBXL_ALTP2M_COUNT_DEFAULT) out of the public
header? I don't think it needs to be known to applications using libxl
(like xl). You can move it to "libxl_internal.h"; I don't think there's
a better place, and there are already a few "default" (more like initial
value) defines there.

Besides that, the patch looks fine,
so with that change: Reviewed-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:27:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:27:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738948.1145815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJJt-0004zc-LD; Wed, 12 Jun 2024 08:27:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738948.1145815; Wed, 12 Jun 2024 08:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJJt-0004zV-Ih; Wed, 12 Jun 2024 08:27:01 +0000
Received: by outflank-mailman (input) for mailman id 738948;
 Wed, 12 Jun 2024 08:27:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UyOc=NO=bounce.vates.tech=bounce-md_30504962.66695bd2.v1-61969071693b4b219bff4215114ecb07@srs-se1.protection.inumbo.net>)
 id 1sHJJs-0004ys-MZ
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:27:00 +0000
Received: from mail112.us4.mandrillapp.com (mail112.us4.mandrillapp.com
 [205.201.136.112]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 89547847-2895-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 10:26:59 +0200 (CEST)
Received: from pmta15.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail112.us4.mandrillapp.com (Mailchimp) with ESMTP id 4Vzdtp4GJ4z8XRrPm
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:26:58 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 61969071693b4b219bff4215114ecb07; Wed, 12 Jun 2024 08:26:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89547847-2895-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718180818; x=1718441318;
	bh=j6VIeUnTOqUuAjOuIeyai7Ic3Rr0EWVfW8bkQxexx94=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=lNXFHiWayfGZsEWIOCHxfpe7NHNtfFTspErk8s/WuXbjLXc51ZUx3+QyIdwO+97B7
	 FSS9nujfqSGKp0nr6VPRKDdt9x/pHwqTYIRKHHKyymtyC7dgtQfGxUJeeJwOpueckq
	 v3QTChQAaJ750NjA0+vP/ClqkDPucQ/+Bwi+Vj/soSZ5naBCZhqCPkzDz38mvyG7CT
	 aALvqsJBVP8oGA7Z6HK+vDRT8p78YbYwQrZdEYMPxoKiO81e9DT/g+GkQNgA/vKqzD
	 xIDzloVsWiOF5voMP31HNa9ZvEg3f8uTntTaTzBdJChxU3m8O+CPFDGeEgWrlSHSGF
	 qedKCxXYuqrjQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718180818; x=1718441318; i=anthony.perard@vates.tech;
	bh=j6VIeUnTOqUuAjOuIeyai7Ic3Rr0EWVfW8bkQxexx94=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=av72xdjc/PkgVNdcw+DDaprQmS0Cx7f2D2g/RWcyFAr48/iZvolFDawHMbytslTL2
	 MBteLucpN6zrHyaKKpMsFtBkeVJ1zYgXJUZ7lFdElXAocD8mbydy+C11nBrCj81Qkf
	 0SYaojxsfcvRE9LCJ7UADa/JnrF8F9m9nKz3zHlvr2i3HfBJXDa1HYx1Nk0BAuQetD
	 paBnF4se10hR1D7nxzUXOb2BiLC5ANPcXxd2f8Kws1LICDtFcqELcnWy9V7J+gecsO
	 vegD13FIj9nUv4UuPxIIxnd1j68NzgUTCmPbHhV/kXOQn1BQDWfMfi8HmKd4i1K67t
	 7fwXTL7B1C6aA==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20for-4.19=3F=20v6=205/9]=20docs/man:=20Add=20altp2m=5Fcount=20parameter=20to=20the=20xl.cfg=20manual?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718180817728
To: =?utf-8?Q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>
Cc: xen-devel@lists.xenproject.org
Message-Id: <Zmlb0eN1SEvTDL5P@l14>
References: <cover.1718038855.git.w1benny@gmail.com> <056a6d3337aafa36f341596e6236cf21dd7e705b.1718038855.git.w1benny@gmail.com>
In-Reply-To: <056a6d3337aafa36f341596e6236cf21dd7e705b.1718038855.git.w1benny@gmail.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.61969071693b4b219bff4215114ecb07?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240612:md
Date: Wed, 12 Jun 2024 08:26:58 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 05:10:43PM +0000, Petr Beneš wrote:
> From: Petr Beneš <w1benny@gmail.com>
> 
> Update manual pages to include detailed information about the altp2m_count
> configuration parameter.
> 
> Signed-off-by: Petr Beneš <w1benny@gmail.com>

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:31:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:31:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738957.1145826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJNu-0006qq-74; Wed, 12 Jun 2024 08:31:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738957.1145826; Wed, 12 Jun 2024 08:31:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJNu-0006qj-2X; Wed, 12 Jun 2024 08:31:10 +0000
Received: by outflank-mailman (input) for mailman id 738957;
 Wed, 12 Jun 2024 08:31:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHJNt-0006q6-8Z
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:31:09 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1d7257fe-2896-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:31:07 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-421d32fda86so30369335e9.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:31:07 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-422874de618sm16423335e9.37.2024.06.12.01.31.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 01:31:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d7257fe-2896-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718181067; x=1718785867; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=8A0WhxCTC1sf7GsqpPCXviAZ1key9LA6PyIZaOL7QxA=;
        b=NrUFY9PcAjv6Oi12A62VUtZYCVbH8bf2Vw7DHXVylWjRa93gydBkQReSGRGp0kknQP
         bzNkjdBc1npUBAGWjd9Id99J71NVFHVz7yYzUbEzWmDrL7deHHDQZ93bwaegTeXQMbXa
         4+AniWpyQ5fFe2RVz3sO9qcvZTo/Y9w8zA7hg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718181067; x=1718785867;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8A0WhxCTC1sf7GsqpPCXviAZ1key9LA6PyIZaOL7QxA=;
        b=aZaFXq7NXstJfiJN8eF8GupVfYU3/sIhCAS/uIqp5/MdXCgCUkxvi1Da/zoogNuFM/
         XXSkwilqjQggdiuTvNhJHYq1vV8YdNL6uPQaG/nkaVawGXUXBfjJ27hpqYmH6eDmmxSr
         vfJggM7S99rjCkIZFdiI8w62t/HqCUg8ZFAX0Vc1OcXPTOXVQDA0qJZ8pwPpdEdf2MwG
         MiE7fqRgcZU3ZOZ8BR7d2uOfxKdTxdiw6lF8sEKLC7zeocqvEK7G5+zOB8xyemGZHwEa
         4djSFj3U1S45SpKTtRtfzRXiMou8m8DbTEFbd7DFiZ7MI5LNmHd/jM0ld6fOj5ra7pdw
         8igQ==
X-Forwarded-Encrypted: i=1; AJvYcCVjRnbibSvb0ILUJxQ4inu2dIIBN0L3eQdsAuOuvEMUxbZ7yhKlYw4qUUBzbztNjD18zIowb4u1npvJaN/27CC0h8F31rGzgd7kVRT5d88=
X-Gm-Message-State: AOJu0YwIZ6p2Oz6HvK0QrMrCl3QkjZemT3Va3iE+ISuf8u8LZjs6Yedv
	EP277ewnhSnZJ6HUX3Y+jHo9JVqZsxj9DPsweAOAPNja5SbuyqagE3C93YB+MUg=
X-Google-Smtp-Source: AGHT+IEuBdChjBDzysOORmqPbbCyzmTofVzaRyuyCob+e8MQ8OQyrPKu+c57GZbpSLh/BBAvqdcFaQ==
X-Received: by 2002:a05:600c:46ce:b0:422:e8d:e559 with SMTP id 5b1f17b1804b1-422864aebe0mr12309155e9.23.1718181067152;
        Wed, 12 Jun 2024 01:31:07 -0700 (PDT)
Date: Wed, 12 Jun 2024 10:31:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 4/7] x86/irq: restrict CPU movement in
 set_desc_affinity()
Message-ID: <ZmlcyqkHh562lL2j@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-5-roger.pau@citrix.com>
 <b2e8eed9-1df8-442b-ae7e-401c406eaef8@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <b2e8eed9-1df8-442b-ae7e-401c406eaef8@suse.com>

On Tue, Jun 11, 2024 at 12:20:33PM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > If external interrupts are using logical mode, it's possible to have an
> > overlap between the current ->arch.cpu_mask and the provided mask (or
> > TARGET_CPUS).  If that's the case, avoid assigning a new vector and just move
> > the interrupt to a member of ->arch.cpu_mask that overlaps with the provided
> > mask and is online.
> 
> What I'm kind of missing here is an explanation of why what _assign_irq_vector()
> does to avoid unnecessary migration (very first conditional there) isn't
> sufficient.

Somehow I looked at that and thought it wasn't enough, but now I cannot
figure out why, so it might be just fine and this patch may not be
needed.  Let me test again and get back to you; for the time being,
ignore this patch.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:32:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:32:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738963.1145836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJOq-0007M7-De; Wed, 12 Jun 2024 08:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738963.1145836; Wed, 12 Jun 2024 08:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJOq-0007M0-Ac; Wed, 12 Jun 2024 08:32:08 +0000
Received: by outflank-mailman (input) for mailman id 738963;
 Wed, 12 Jun 2024 08:32:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LIwU=NO=bounce.vates.tech=bounce-md_30504962.66695cfe.v1-064d15fccc2f44c2a0ae4ace68583eb4@srs-se1.protection.inumbo.net>)
 id 1sHJOo-0007Lu-Rl
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:32:06 +0000
Received: from mail180-20.suw31.mandrillapp.com
 (mail180-20.suw31.mandrillapp.com [198.2.180.20])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3f96030e-2896-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 10:32:06 +0200 (CEST)
Received: from pmta11.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail180-20.suw31.mandrillapp.com (Mailchimp) with ESMTP id
 4Vzf0h2FRnzFCWZ7Y
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:32:04 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 064d15fccc2f44c2a0ae4ace68583eb4; Wed, 12 Jun 2024 08:31:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f96030e-2896-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718181124; x=1718441624;
	bh=1jBDSWnswaV80WNWJr1M+dh0MUU46rLIQXch3TjMFfs=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=A3kBiTkesPL8EXf6KhMcht0/7Kfiu3atvcCE5bQfHYbWXXFh/3ti38/3v9CD+zKbf
	 837H7l/+NDot/y9B5rRGBJvqAoGf5Djl4ugGZ89zCwUutt3MiN0hrtGB3gwMBBWMTH
	 tCYICCQiu115WH+pAtRwbqqITsqUvrg4ey0CGQW/9ae7tKnPBS+u3xXA8pu9ndEnbZ
	 AddvCFTX/1441hA07YxjiRLDTugR1sAivMSxF4TRCDaZUL7BqxrkPfLQAE1dM9vbNx
	 TUAy70/m5FV8LR9MxeoVfDALJkQyxyoyXojjeZO3TP0vFciI+pvunFkuru+Rz2qhsI
	 x17ZewPyRErcw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718181124; x=1718441624; i=anthony.perard@vates.tech;
	bh=1jBDSWnswaV80WNWJr1M+dh0MUU46rLIQXch3TjMFfs=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=XtbasWmKYevWo8uDVdPxP+FeONkyx9Sdo38eovxWK6vWnB2NT2/vdhZuiSCeMTa/s
	 p3CGM60BsRnJRq0J6hifHrwK20MmEpnefce2v1hEFGa0tofxW52DlVmU2QidiDOysu
	 oWnE3gFWxaaD+CESFgzrVvWfWztTvfnPHJL3s6YqoTHvt86RvVaoHtjfbD78uXfHJK
	 vSiDtg9AsOFEpS2L3ityZX+LVxn0UFUHgD9ogse0yqTiUPQr+NzvBSbnu+wq4LwC8s
	 V/8Jyy1sfJtcollZnCDdh136vdyEDxK/FHb15w+Z8zPhbBilDtzIHA0+P6jdegCCSc
	 0q0cy2ZGurXUQ==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20for-4.19=3F=20v6=207/9]=20tools/libxl:=20Activate=20the=20altp2m=5Fcount=20feature?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718181117933
To: =?utf-8?Q?Petr=20Bene=C5=A1?= <w1benny@gmail.com>
Cc: xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
Message-Id: <Zmlc/e2cfak0n2f0@l14>
References: <cover.1718038855.git.w1benny@gmail.com> <ad7aa98a3b0a0493130f1d9a84724e98be766897.1718038855.git.w1benny@gmail.com>
In-Reply-To: <ad7aa98a3b0a0493130f1d9a84724e98be766897.1718038855.git.w1benny@gmail.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.064d15fccc2f44c2a0ae4ace68583eb4?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240612:md
Date: Wed, 12 Jun 2024 08:31:58 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 10, 2024 at 05:10:45PM +0000, Petr Bene=C5=A1 wrote:
> From: Petr Bene=C5=A1 <w1benny@gmail.com>
> 
> This commit activates the previously introduced altp2m_count parameter,
> establishing the connection between libxl and Xen.
> 
> Signed-off-by: Petr Bene=C5=A1 <w1benny@gmail.com>

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:37:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:37:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738968.1145845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJTZ-0007xl-UZ; Wed, 12 Jun 2024 08:37:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738968.1145845; Wed, 12 Jun 2024 08:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJTZ-0007xe-Rs; Wed, 12 Jun 2024 08:37:01 +0000
Received: by outflank-mailman (input) for mailman id 738968;
 Wed, 12 Jun 2024 08:37:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHJTX-0007xD-UL
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:36:59 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee71e0e7-2896-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:36:58 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-4217f2e3450so36418805e9.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:36:58 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-422874de68asm16694835e9.29.2024.06.12.01.36.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 01:36:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee71e0e7-2896-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718181418; x=1718786218; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=bCBnd0K8E9ClUJFrDbRKrnASQruoG/gxzWhvOfVi1q0=;
        b=gdlagN/qEK1GvI2GIW4wGjuqtTmna9TxFqMvKSb74aQRIL2D80nai2a65+F3IXXG2e
         dW3Tsdi9xFfmDTBzr8pPzKW98a2tQkEkXKdyObw69m+p3GtV+k6RY7TsBN6sbPFdr4Et
         UNKd7aDnhIqSY3UUCNzoq1gNZqUYXqjHgDXQw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718181418; x=1718786218;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bCBnd0K8E9ClUJFrDbRKrnASQruoG/gxzWhvOfVi1q0=;
        b=Aruzrs0EQ42YkySJ7QKKQPucMZQcnKeGj0EKddcfUgx9MrDneDv4OWaMA19f1ZcOpa
         WLalEYfPbMMl5fwaWRLse7PRg/WkH5AkXrTVTFpaQ/5QOpX2vJDsg4G6NYXSRtnruInl
         RaN0H+Vxga7cjkF0+sGdOl1duL5sDXSVciZLMhQ7ZKs9Ve/7FIRdNvd/2ZoaCYYvx0IH
         a9yvQpSZN73psKdvBaUAydH1gERRORC3EbKLhnhrp8tcERbBSa6az/SLuA7GzO9Hbd4F
         fZ0Md3bwc1h445VI344xwXCgZqZKppK+hN4oDNRWZnNPTDiGZQd5ROqEnGrf+G3Q+rNt
         Ft7Q==
X-Forwarded-Encrypted: i=1; AJvYcCUTh0r7OtboRxoHrRZBqFImAP8m04o94O0qGSuHYqmLfMaEhhSHOUAFi3xDoST2w/DC0ZHAbFEbWARppYzUxxqK24yw/ZMsxK5zAPMw2dQ=
X-Gm-Message-State: AOJu0YwSj/PFRiRz1inMW6He8DrBBDhZPZySGtrfH5y18keRUhjDc8SW
	3rqUcTi4ZWSdtYveeqXqBBnmOzeCKS3rcK91LRd1vP1q1fhUHKZLHtfO522md8o=
X-Google-Smtp-Source: AGHT+IFcMsHQuLU9LBXv3M95LM8OVMShZdL8DfYkzJ/ZtDALAYNKxYA1QgVZVj9jULuRT5gBNcW2KA==
X-Received: by 2002:a05:600c:314d:b0:421:7c1e:5d5d with SMTP id 5b1f17b1804b1-422867bf846mr14811335e9.35.1718181417764;
        Wed, 12 Jun 2024 01:36:57 -0700 (PDT)
Date: Wed, 12 Jun 2024 10:36:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 5/7] x86/irq: deal with old_cpu_mask for interrupts in
 movement in fixup_irqs()
Message-ID: <ZmleKLpdfqqgS8gd@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-6-roger.pau@citrix.com>
 <9a7e5fab-7ea7-4196-bbc5-5c9e286cf576@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <9a7e5fab-7ea7-4196-bbc5-5c9e286cf576@suse.com>

On Tue, Jun 11, 2024 at 03:47:03PM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > @@ -2589,6 +2589,28 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >                                 affinity);
> >          }
> >  
> > +        if ( desc->arch.move_in_progress &&
> > +             !cpumask_test_cpu(cpu, &cpu_online_map) &&
> 
> Btw - any reason you're open-coding !cpu_online() here? I've noticed this
> in the context of patch 7, where a little further down a !cpu_online() is
> being added. Those likely all want to be consistent.

No reason really - just me not realizing we had that helper.  I can
adjust it in the next version.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:45:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:45:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738977.1145855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJbI-0001m3-P9; Wed, 12 Jun 2024 08:45:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738977.1145855; Wed, 12 Jun 2024 08:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJbI-0001lw-Mf; Wed, 12 Jun 2024 08:45:00 +0000
Received: by outflank-mailman (input) for mailman id 738977;
 Wed, 12 Jun 2024 08:45:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHJbI-0001lR-1o
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:45:00 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0c78ff17-2898-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:44:58 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a63359aaacaso271791366b.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:44:58 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f0d8b280esm545166366b.149.2024.06.12.01.44.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 01:44:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c78ff17-2898-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718181897; x=1718786697; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=MJxOAo5SAvKlD+sCEsB+J6Gl8aLcSMBlLyM4QhDUGS0=;
        b=LophMwB4iiCz729ZKiEAmrnSC/YZavwTVS+V3KglVmdUS1q2toKNBKqhkE1qsD52kp
         /xy+IjWQuonFVfHk1pQpqWXJ6TBHtlMdJnhWsv56w63o9n8W6MneI1aoHBHIeFHLhnbI
         jwm13UD816Gq+1FQxUOJWBsGF+qeleMN3CNU/asj3U2HTAZ3GVVh+7gVo7aGMozSnSWX
         Y/yAUGzR+lJA6L1ltnUl0WOinCpFGeJZJzHPzwjg+OtApGJ0JAH9MCrCfbmoKtSECkB7
         C5SCKrTmDPA66sYrKelnZJIp4Qch0IGbmvLBmeVINXSztTtpkhZQzD4Q/oJqv50dhkEs
         DBxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718181897; x=1718786697;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=MJxOAo5SAvKlD+sCEsB+J6Gl8aLcSMBlLyM4QhDUGS0=;
        b=Dn2jm5K0gpqEY5hTR04IjHUviAJ81Qn8aO8bXWQr84E8Fz5TGeQd54B91oD3Z9PUu4
         67YETsSmr4Lun1/R1Z1eo7klRevXRyN76NggEn+4lLFAbsWdSlKKwg85GK0TxuVFLeCj
         /+vQZ72v7iCnxdL8s2a5DVZY/JDxmNBquuLblpA7HFinzgQT1sHT33XbvIUTqRhboo9L
         aStGHpNNkTrAtbVApQc2cEduSFttNXPm4dKnQSyib8NH9968hOZmkwUrx9ghofbZwd2v
         Q7/J3JBg8UAsAD+bCqUol/rpYSJrdMMHx2aAKzUCJXurYpjwjfSveBjochoLGFFpoV6E
         beew==
X-Gm-Message-State: AOJu0Yy1jTSRaXunDgIMDCO+uRrs/qZhskro9UBbBOq/NtPeb/7/0bSb
	OnK8Cep+R3KK5dAnLqJWKbarmJkHoos9aycZ1yAaCiVO5X7O45R3ALkNMwJMkdsrqL/Io25Exsg
	=
X-Google-Smtp-Source: AGHT+IH3/mJPB15Wxv4XcOSQ6xBp6KLbI28BGrGFp8yc4/A2JnPWSKI73tcpHR855408wahk8+ZyqA==
X-Received: by 2002:a17:907:9482:b0:a6f:4824:e2f2 with SMTP id a640c23a62f3a-a6f4824e4f3mr85716566b.55.1718181897589;
        Wed, 12 Jun 2024 01:44:57 -0700 (PDT)
Message-ID: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
Date: Wed, 12 Jun 2024 10:44:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Chen Jiqian <Jiqian.Chen@amd.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.19???] x86/physdev: replace physdev_{,un}map_pirq()
 checking against DOMID_SELF
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

It's hardly ever correct to check for just DOMID_SELF, as guests have
ways to figure out their domain IDs and hence could instead use those as
inputs to respective hypercalls. Note, however, that for ordinary DomU-s
the adjustment is relaxing things rather than tightening them, since
- as a result of XSA-237 - the respective XSM checks would have rejected
self (un)mapping attempts for other than the control domain.

Since handling in physdev_map_pirq() is overall a little easier this
way, move obtaining of the domain pointer into the caller. Doing the
same for physdev_unmap_pirq() is just to keep both consistent in this
regard. For both this has the advantage that it is now provable (by the
build not failing) that there are no DOMID_SELF checks left (and none
could easily be re-added).

Fixes: 0b469cd68708 ("Interrupt remapping to PIRQs in HVM guests")
Fixes: 9e1a3415b773 ("x86: fixes after emuirq changes")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Note that the moving of rcu_lock_domain_by_any_id() is also going to
help
https://lists.xen.org/archives/html/xen-devel/2024-06/msg00206.html.

--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -18,9 +18,9 @@
 #include <xsm/xsm.h>
 #include <asm/p2m.h>
 
-int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
+int physdev_map_pirq(struct domain *d, int type, int *index, int *pirq_p,
                      struct msi_info *msi);
-int physdev_unmap_pirq(domid_t domid, int pirq);
+int physdev_unmap_pirq(struct domain *d, int pirq);
 
 #include "x86_64/mmconfig.h"
 
@@ -88,13 +88,12 @@ static int physdev_hvm_map_pirq(
     return ret;
 }
 
-int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
+int physdev_map_pirq(struct domain *d, int type, int *index, int *pirq_p,
                      struct msi_info *msi)
 {
-    struct domain *d = current->domain;
     int ret;
 
-    if ( domid == DOMID_SELF && is_hvm_domain(d) && has_pirq(d) )
+    if ( d == current->domain && is_hvm_domain(d) && has_pirq(d) )
     {
         /*
          * Only makes sense for vector-based callback, else HVM-IRQ logic
@@ -106,13 +105,9 @@ int physdev_map_pirq(domid_t domid, int
         return physdev_hvm_map_pirq(d, type, index, pirq_p);
     }
 
-    d = rcu_lock_domain_by_any_id(domid);
-    if ( d == NULL )
-        return -ESRCH;
-
     ret = xsm_map_domain_pirq(XSM_DM_PRIV, d);
     if ( ret )
-        goto free_domain;
+        return ret;
 
     /* Verify or get irq. */
     switch ( type )
@@ -135,24 +130,17 @@ int physdev_map_pirq(domid_t domid, int
         break;
     }
 
- free_domain:
-    rcu_unlock_domain(d);
     return ret;
 }
 
-int physdev_unmap_pirq(domid_t domid, int pirq)
+int physdev_unmap_pirq(struct domain *d, int pirq)
 {
-    struct domain *d;
     int ret = 0;
 
-    d = rcu_lock_domain_by_any_id(domid);
-    if ( d == NULL )
-        return -ESRCH;
-
-    if ( domid != DOMID_SELF || !is_hvm_domain(d) || !has_pirq(d) )
+    if ( d != current->domain || !is_hvm_domain(d) || !has_pirq(d) )
         ret = xsm_unmap_domain_pirq(XSM_DM_PRIV, d);
     if ( ret )
-        goto free_domain;
+        return ret;
 
     if ( is_hvm_domain(d) && has_pirq(d) )
     {
@@ -160,8 +148,8 @@ int physdev_unmap_pirq(domid_t domid, in
         if ( domain_pirq_to_emuirq(d, pirq) != IRQ_UNBOUND )
             ret = unmap_domain_pirq_emuirq(d, pirq);
         write_unlock(&d->event_lock);
-        if ( domid == DOMID_SELF || ret )
-            goto free_domain;
+        if ( d == current->domain || ret )
+            return ret;
     }
 
     pcidevs_lock();
@@ -170,8 +158,6 @@ int physdev_unmap_pirq(domid_t domid, in
     write_unlock(&d->event_lock);
     pcidevs_unlock();
 
- free_domain:
-    rcu_unlock_domain(d);
     return ret;
 }
 #endif /* COMPAT */
@@ -184,6 +170,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
 
     switch ( cmd )
     {
+        struct domain *d;
+
     case PHYSDEVOP_eoi: {
         struct physdev_eoi eoi;
         struct pirq *pirq;
@@ -331,8 +319,15 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         msi.sbdf.devfn = map.devfn;
         msi.entry_nr = map.entry_nr;
         msi.table_base = map.table_base;
-        ret = physdev_map_pirq(map.domid, map.type, &map.index, &map.pirq,
-                               &msi);
+
+        d = rcu_lock_domain_by_any_id(map.domid);
+        ret = -ESRCH;
+        if ( !d )
+            break;
+
+        ret = physdev_map_pirq(d, map.type, &map.index, &map.pirq, &msi);
+
+        rcu_unlock_domain(d);
 
         if ( map.type == MAP_PIRQ_TYPE_MULTI_MSI )
             map.entry_nr = msi.entry_nr;
@@ -348,7 +343,15 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( copy_from_guest(&unmap, arg, 1) != 0 )
             break;
 
-        ret = physdev_unmap_pirq(unmap.domid, unmap.pirq);
+        d = rcu_lock_domain_by_any_id(unmap.domid);
+        ret = -ESRCH;
+        if ( !d )
+            break;
+
+        ret = physdev_unmap_pirq(d, unmap.pirq);
+
+        rcu_unlock_domain(d);
+
         break;
     }
 


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:47:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:47:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738984.1145865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJdr-0002L9-5i; Wed, 12 Jun 2024 08:47:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738984.1145865; Wed, 12 Jun 2024 08:47:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJdr-0002L2-2l; Wed, 12 Jun 2024 08:47:39 +0000
Received: by outflank-mailman (input) for mailman id 738984;
 Wed, 12 Jun 2024 08:47:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHJdq-0002Kw-27
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:47:38 +0000
Received: from mail-wm1-x334.google.com (mail-wm1-x334.google.com
 [2a00:1450:4864:20::334])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ae8da24-2898-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 10:47:36 +0200 (CEST)
Received: by mail-wm1-x334.google.com with SMTP id
 5b1f17b1804b1-42278f3aea4so9101785e9.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:47:36 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-35f215d4602sm8542927f8f.74.2024.06.12.01.47.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 01:47:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ae8da24-2898-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718182056; x=1718786856; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=+CRRYU5mEu5rN9aNcGMiLxUPkcX1eGaLdhfh49hCeDs=;
        b=wK9xyQQj+VUq6vPrGQBM5EbPYXK6f7j+HClN6lAUrOtokkNJOf99xXwncnOv4Q4A7e
         DN7A6EM/ecFJFZk0QD4S6bmaY5e/NddkmrYX2FuQs1rWoH2DqZI7W5UX8tnSAzgJA3jI
         9d20fjXtkuC1sAYK3uJADTTEtGro7Nx/wn59c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718182056; x=1718786856;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+CRRYU5mEu5rN9aNcGMiLxUPkcX1eGaLdhfh49hCeDs=;
        b=fsFs/n3Ps5xXRBFXC4wx6Q0i7bI0CGkmcmZshyQi1Pt9Dd1ZRzn11dqxWiaVOKEEXV
         XAT3OoVpd82NGIE6s4BkQ/1GQ20vszbE2WkQBe5LWNLmq2s/gXkz0AeSamz1aJhR7Exs
         jGIiB3jRGNdVMNjh+b7o2t66OsQd3OXIn7AqcZjRNV+LwWDWlmyw9Me1LqMShUAgpvQO
         ftNajf5spK3KABMqRfyzEQzkK7cmNhEaA2JVe0PnCzK/DmcU1OiA1QXSMYUBVtQ2zGJe
         1CDVwiR1LoWh6huF/1rcqp8HQUnnQF11x44LHsWV/Olgx7BpZ8oDMlD91hL090ioYQEu
         K3ZA==
X-Forwarded-Encrypted: i=1; AJvYcCVE+fszywJvYlKBI+1/fOar/YW1U+GI5BGQKJ3u2lMbfU+w/noaAy5FLDHTZ3GwkozA0XH/wMdtK5waLU1vctB4/jTO770Nk28sWDu3Keo=
X-Gm-Message-State: AOJu0Yw5JfsGGAPswJf5QK249U37nZzbFezF3V/Twt615CnpXROr9lze
	7cCTvNumHvqMHwNFieFdjLzQZ2xIxldzJLbegBUbRxoP0MIvgevNTZiAfOFfOrg=
X-Google-Smtp-Source: AGHT+IFWr14ByUDbOpQZsZFALbhGBqITtnXzd2qYwsknTGBU5g4Gp9iV+8EnDFeQpNb7yVGcYqeMow==
X-Received: by 2002:adf:e98d:0:b0:35f:2197:dbff with SMTP id ffacd0b85a97d-35fe8915491mr724885f8f.24.1718182056098;
        Wed, 12 Jun 2024 01:47:36 -0700 (PDT)
Date: Wed, 12 Jun 2024 10:47:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 5/7] x86/irq: deal with old_cpu_mask for interrupts in
 movement in fixup_irqs()
Message-ID: <Zmlgp9C2ryFtT65B@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-6-roger.pau@citrix.com>
 <66fc06cc-f1f6-4f12-83d4-a3b9788bffba@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <66fc06cc-f1f6-4f12-83d4-a3b9788bffba@suse.com>

On Tue, Jun 11, 2024 at 02:45:09PM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > Given the current logic it's possible for ->arch.old_cpu_mask to get out of
> > sync: if a CPU set in old_cpu_mask is offlined and then onlined again
> > without old_cpu_mask having been updated, the data in the mask will no
> > longer be accurate, as when brought back online the CPU will no longer have
> > old_vector configured to handle the old interrupt source.
> > 
> > If there's an interrupt movement in progress, and the to-be-offlined CPU
> > (which is the call context) is in the old_cpu_mask, clear it and update the
> > mask so it doesn't contain stale data.
> 
> This imo is too __cpu_disable()-centric. In the code you cover the
> smp_send_stop() case afaict, where it's all _other_ CPUs which are being
> offlined. As we're not meaning to bring CPUs online again in that case,
> dealing with the situation likely isn't needed. Yet the description should
> imo at least make clear that the case was considered.

What about adding the following paragraph:

Note that when the system is going down fixup_irqs() will be called by
smp_send_stop() from CPU 0 with a mask with only CPU 0 on it,
effectively asking to move all interrupts to the current caller (CPU
0) which is the only CPU online.  In that case we don't care to
migrate interrupts that are in the process of being moved, as it's
likely we won't be able to move all interrupts to CPU 0 due to vector
shortage anyway.

> 
> > @@ -2589,6 +2589,28 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >                                 affinity);
> >          }
> >  
> > +        if ( desc->arch.move_in_progress &&
> > +             !cpumask_test_cpu(cpu, &cpu_online_map) &&
> 
> This part of the condition is, afaict, what covers (excludes) the
> smp_send_stop() case. Might be nice to have a brief comment here, thus
> also clarifying ...

Would you be fine with:

        if ( desc->arch.move_in_progress &&
             /*
              * Only attempt to migrate if the current CPU is going
              * offline, otherwise the whole system is going down and
              * leaving stale interrupts is fine.
              */
             !cpumask_test_cpu(cpu, &cpu_online_map) &&
             cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )


> > +             cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
> > +        {
> > +            /*
> > +             * This CPU is going offline, remove it from ->arch.old_cpu_mask
> > +             * and possibly release the old vector if the old mask becomes
> > +             * empty.
> > +             *
> > +             * Note cleaning ->arch.old_cpu_mask is required if the CPU is
> > +             * brought offline and then online again, as when re-onlined the
> > +             * per-cpu vector table will no longer have ->arch.old_vector
> > +             * setup, and hence ->arch.old_cpu_mask would be stale.
> > +             */
> > +            cpumask_clear_cpu(cpu, desc->arch.old_cpu_mask);
> > +            if ( cpumask_empty(desc->arch.old_cpu_mask) )
> > +            {
> > +                desc->arch.move_in_progress = 0;
> > +                release_old_vec(desc);
> > +            }
> 
> ... that none of this is really wanted or necessary in that other case.
> Assuming my understanding above is correct, the code change itself is
> once again

It is.  For the smp_send_stop() case we don't care much about leaving
stale data around, as the system is going down.  It's also likely
impossible to move all interrupts to CPU0 due to vector shortage, so
some interrupts will be left assigned to different CPUs.

> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> yet here I'm uncertain whether to offer on-commit editing, as it's not
> really clear to me whether there's a dependencies on patch 4.

No, in principle it should be fine to skip patch 4, but I would like
to do another round of testing before confirming.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:53:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:53:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738989.1145876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJjh-0004Wa-Pq; Wed, 12 Jun 2024 08:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738989.1145876; Wed, 12 Jun 2024 08:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJjh-0004WT-MS; Wed, 12 Jun 2024 08:53:41 +0000
Received: by outflank-mailman (input) for mailman id 738989;
 Wed, 12 Jun 2024 08:53:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHJjg-0004WN-S9
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:53:40 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42d412a4-2899-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 10:53:38 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a6f1dc06298so249070366b.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:53:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f107adb1esm500989966b.50.2024.06.12.01.53.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 01:53:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42d412a4-2899-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718182418; x=1718787218; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=hasYh0cGP8cu6FU9agK0WjRPVAcQ9kpJmW7Mb/1Ogvk=;
        b=cJXz3GflU9uMHUSavwKyVM3WIiBuTePk0t/4t+++beoeeaYYjRIrlIytLOWX4ide3Q
         cImNldTEoGWLj/tWseJgqXoZYMI8dBLERPU1YX9IDVhkOtg6uV/7cimtwF93SzckxnwW
         mG33lRJZqxwPYYc89FDDKpGLUYUb+V4kh/8PoQVvFYH/UZjSsACKaC300mVErXbOhtcE
         SRsYAH7ODfXdP4AnDdgauDIPllB+GjsrnjF8bPa6IGO856y18LhbJ9/Af7rlgK88CB6V
         oN7+xjhurGGfYXP69Bb4xXUz4mzAzyr4VhlzJVpbNtwGRYDiYIknG02aVulbc7tgkrNS
         rDCg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718182418; x=1718787218;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=hasYh0cGP8cu6FU9agK0WjRPVAcQ9kpJmW7Mb/1Ogvk=;
        b=urNAHhUpe2erIv4bcrSYuIt7MUvTtg4puxmwxjuvbgviyvUpE4DdLlLjHCNn78TIKB
         awSbMG4TS4Mhmn3XljgpOh2fogakh5pM5gos4TC3hNObW136Slw5eXdSyl+2PXhwBasH
         U03QK2wV+XSeEa6dzFinfDFc101gWQ8801pi7hz95HwxNb8RwH9t8Izfoi1btNiI4V1g
         sMU3PN3sw2hMgZFtH0gPeQnxJ5OarvwY3a1Zr6hxQsHI27RLWHEOpx0cUUgPjjQt85I+
         W6rrEaZ4XkdxbCh8bzpWWclx1lH0Dn56+LFjc7jEbBm4N07pa/FyyV7s0soulcud6ulj
         oYMw==
X-Forwarded-Encrypted: i=1; AJvYcCWuuOzCrISwF4qoXw1xzDrnSbJarJRByHPSPvEGx8JU7IFqit4TBoaXijP2OMEmUnHUje2Fli/qxJ9joSl7RlIuugY7fTa4G/lsNCrJj8k=
X-Gm-Message-State: AOJu0YziuZlqVpSoSg/ShdGc/iUS7Mc44R4u9z7z9og1BODeEeJvEAm4
	n/dwNXzgjmSrG0l7KO2xcku//OfaM7qGqx44w0LbRWhGnCP5mzjVJiATusBNHw==
X-Google-Smtp-Source: AGHT+IGGBEbfPA3rSVKmZgbYbuUjwb07t5oH5IsxX9ptuFcOfSVBJUoFli+rxKMXCfYZX223gNLy5g==
X-Received: by 2002:a17:906:cec3:b0:a6f:d7a:d650 with SMTP id a640c23a62f3a-a6f47d3b998mr62605266b.50.1718182418285;
        Wed, 12 Jun 2024 01:53:38 -0700 (PDT)
Message-ID: <41905257-e2e6-4bce-b723-516916448dfd@suse.com>
Date: Wed, 12 Jun 2024 10:53:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-3-Jiqian.Chen@amd.com>
 <efc35614-561c-4baa-9d94-d17ecb528a4b@suse.com>
 <BL1PR12MB5849B1B536BAD641C37A4308E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849B1B536BAD641C37A4308E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12.06.2024 04:43, Chen, Jiqian wrote:
> On 2024/6/10 23:58, Jan Beulich wrote:
>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>> When running Xen with a PVH dom0 and an HVM domU, the HVM guest
>>> will map a pirq for a passthrough device by using a gsi; see the
>>> qemu code xen_pt_realize->xc_physdev_map_pirq and the libxl code
>>> pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then
>>> calls into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq is not
>>> allowed because currd is the PVH dom0 and PVH has no
>>> X86_EMU_USE_PIRQ flag, so it fails the has_pirq check.
>>>
>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
>>> PHYSDEVOP_unmap_pirq on the failure path to unmap the pirq. Also
>>> add a new check to prevent self-mapping when the subject domain
>>> has no PIRQ flag.
>>>
>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>
>> What's imo missing in the description is a clarification / justification of
>> why it is going to be a good idea (or at least an acceptable one) to expose
>> the concept of PIRQs to PVH. If I'm not mistaken that concept so far has
>> been entirely a PV one.
> I didn't want to expose the concept of PIRQs to PVH.
> I did this patch for HVM guests that use PIRQs; what I said in the commit message is that the HVM guest will map a pirq for a gsi, not PVH.
> The original code checks "!has_pirq(currd)", but currd is PVH dom0, so it fails. So I need to allow PHYSDEVOP_map_pirq
> even when currd has no PIRQs, as long as the subject domain has them.

But that's not what you're enforcing in do_physdev_op(). There you only
prevent self-mapping. If I'm not mistaken all you need to do is drop the
"d == current->domain" checks from those conditionals.

Further see also
https://lists.xen.org/archives/html/xen-devel/2024-06/msg00540.html.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 08:56:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 08:56:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.738996.1145893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJmD-00059f-6U; Wed, 12 Jun 2024 08:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 738996.1145893; Wed, 12 Jun 2024 08:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJmD-00059Y-3x; Wed, 12 Jun 2024 08:56:17 +0000
Received: by outflank-mailman (input) for mailman id 738996;
 Wed, 12 Jun 2024 08:56:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHJmC-00059S-9Y
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 08:56:16 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0156058-2899-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 10:56:15 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6f21ff4e6dso358837866b.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 01:56:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f4b691964sm28845666b.71.2024.06.12.01.56.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 01:56:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0156058-2899-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718182575; x=1718787375; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=feaN8oXIcM5A7obxQ/KqJNLFUb+wYq7HPhGPru0feU0=;
        b=afUxdYjxUInbcYjYiPtW4Xe9vVPCAuQLx4lCpo7NxeN4m29eDagvFvN+pwy/WS3av6
         8xPv1C7NI+tthKLv79KFe4fEPHFmscEsZUCWN+37MoykKFaujXVNmD6HC8XHsuZTtVrR
         a2Yn7qJZZOpiDUNxcYceLP8rV13sQSiyMCZNopFUOn4keVJcVOF2epGmDm57d8aMtwHg
         CuwraqMnfdB+E16XrYhJkaNLRjed9YVOEkUl+hPqiN151YB7ntoqXvhQj6O8+DAQMcZ3
         u6ujBHKi2Jxoysndaj1xoLt94wYf2ClyBSvYxhBkhyENmKu07oAZ3wmPAMwEBXGka1e0
         yCKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718182575; x=1718787375;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=feaN8oXIcM5A7obxQ/KqJNLFUb+wYq7HPhGPru0feU0=;
        b=UHu9D7RDYWFLVfQLmWzdMCd/meNrotzhIvSJZvBujn2KHaxBHkRWTO++8tGsNHsuI1
         q5TOaRYg+p5yAVQprTPz8IlVT/wDp+r/883Cg72a8XesFLgQdWpN2oX0m63+d9RFAomc
         3sPyTk9/zBIvFvMwRO2sxWSHt3a5dGNOAjM06oZfwu1xxBgViSiMzsPhotc/ewNrunDN
         FGDR83oiE1tUe6vl5zvPHtauDMnnrDA7CQ5lClxRWhubA0f5hchgkBOsidyhk3DiMF6t
         jSnJVsF44IP4ni45qYTZnXLIgvPuHh03R6fuJWlKVH7jp3Bj5g7WFq36yiv9/63CM6/M
         IXFg==
X-Forwarded-Encrypted: i=1; AJvYcCU3Q6/hEE/kpJFYY2qhDIjZePs9GHGf6yP7rEQv86Ng7DxLoVO6ckvWuoPqJ13TcKX5IWJsUzxZ9XxVAjPWov4gUA44wuTmsmQRwwZw4Io=
X-Gm-Message-State: AOJu0YzFP3JQMbUPfSsEAXhGMyvaOIVi2rvIuF/vlkBDnQWnrT40Lmop
	zjhAki+cL6ozJkbDXN5MCiuyrYQiYZegXT1AtICE0fOtuppH1+hPdW/P0bW0vw==
X-Google-Smtp-Source: AGHT+IGyh4fKP1D6C6FfPfUslS7sckZ6ZH+wCx933435WTrpIYswoj2K2oJWkZ7LsOQPG3zvDj2udA==
X-Received: by 2002:a17:906:234e:b0:a6f:1d19:c0b1 with SMTP id a640c23a62f3a-a6f47d5240bmr61769666b.18.1718182574848;
        Wed, 12 Jun 2024 01:56:14 -0700 (PDT)
Message-ID: <92584a2d-6695-4884-ba2e-990842318d8a@suse.com>
Date: Wed, 12 Jun 2024 10:56:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 1/7] x86/smp: do not use shorthand IPI destinations in
 CPU hot{,un}plug contexts
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-2-roger.pau@citrix.com>
 <615582c8-c153-424d-bce4-eb0c903d28ad@suse.com> <ZmlXve3rV2Vx9bH7@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmlXve3rV2Vx9bH7@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 10:09, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 09:42:39AM +0200, Jan Beulich wrote:
>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>> Due to the current rwlock logic, if the CPU calling get_cpu_maps() does so from
>>> a cpu_hotplug_{begin,done}() region the function will still return success,
>>> because a CPU taking the rwlock in read mode after having taken it in write
>>> mode is allowed.  Such behavior however defeats the purpose of get_cpu_maps(),
>>
>> I'm not happy to see you still have this claim here. The behavior may (appear
>> to) defeat the purpose here, but as expressed previously I don't view that as
>> being a general pattern.
> 
> Right.  What about replacing the paragraph with:
> 
> "Due to the current rwlock logic, if the CPU calling get_cpu_maps() does so from
> a cpu_hotplug_{begin,done}() region the function will still return success,
> because a CPU taking the rwlock in read mode after having taken it in write
> mode is allowed.  Such a corner case makes using get_cpu_maps() alone
> insufficient to prevent using the shorthand in CPU hotplug regions."

Thanks.

> I think the rest of the commit message is not controversial.

Indeed.

>>> --- a/xen/arch/x86/smp.c
>>> +++ b/xen/arch/x86/smp.c
>>> @@ -88,7 +88,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
>>>       * the system have been accounted for.
>>>       */
>>>      if ( system_state > SYS_STATE_smp_boot &&
>>> -         !unaccounted_cpus && !disabled_cpus &&
>>> +         !unaccounted_cpus && !disabled_cpus && !cpu_in_hotplug_context() &&
>>>           /* NB: get_cpu_maps lock requires enabled interrupts. */
>>>           local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
>>>           (park_offline_cpus ||
>>
>> Along with changing the condition you also want to update the comment to
>> reflect the code adjustment.
> 
> I've assumed the wording in the comment that says: "no CPU hotplug or
> unplug operations are taking place" would already cover the addition
> of the !cpu_in_hotplug_context() check.

Hmm, yes, you're right. Just needs a release-ack then to go in (with the
description adjustment).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:04:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:04:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739012.1145904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJuB-0007gR-9n; Wed, 12 Jun 2024 09:04:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739012.1145904; Wed, 12 Jun 2024 09:04:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJuB-0007gK-72; Wed, 12 Jun 2024 09:04:31 +0000
Received: by outflank-mailman (input) for mailman id 739012;
 Wed, 12 Jun 2024 09:04:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHJuA-0007gE-ES
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 09:04:30 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c648c2e4-289a-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 11:04:29 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6f0c3d0792so240954566b.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 02:04:29 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f266884c5sm301465166b.29.2024.06.12.02.04.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 02:04:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c648c2e4-289a-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718183068; x=1718787868; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Wcm5F37KeemHjaSK5yrqiqOX4fxzpUOyJ0KEJrS0kh4=;
        b=OPY3+VKaJt0Ce6r4EGlquvUmHVNPncSygNs3Y08vwtXP2Daqz6DAk2b/7+f7OF9CCa
         f7pozHSdgSq86VwPb4qVr8uGyozDXx5k6gEvcQNaW+U+BvTuqgWef0Rd5wSQjyr5rQyY
         SdALkSZzb9t6Kutg9VsT6cPhGi9EAgzLrbNzNwNnCsDFcHMSxIdLwAsn2wEY+YHB+4k5
         ic4pf2/LLGIYXJZ5cw4Hs4IRxsNGI/sOWzYzWedPDCHPEtGvwi1wXg/Sgc3fQ93lQLQn
         yH9COf3eclJLS7Ug0bhLCIwLnPyrw0AsujejPwChtt56z99+DBbKK76k2rYKngz0XV/u
         idoA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718183068; x=1718787868;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Wcm5F37KeemHjaSK5yrqiqOX4fxzpUOyJ0KEJrS0kh4=;
        b=pcxC6zWNTF2vj8GoRSanAN+FovvGzwj5v+pFT2LGvvdDmU7vcVQBQnddM/xXbWMSyr
         BaIxT/wU59mLoBo6c6F0fsa3Fbp+7rGVXxjRFV2Qc+NaLkO5MaXgTFJrhSyTHgFuqR3g
         wqdbP6aPcKubPm1JWEdOnJ0bjjoZFdISJ9T56boVOaITcb4ofvTPajJPy+cohWewwLfe
         54sY9npPbbjdJ/9M5Uvim9VsEonxlddiqpPLvDUDdELCTq9PB+p5vAONt89PlO1TgCiw
         lfYd3KB0IUBHL5CzT63hZGnalHEItawp6ywg6sJrrR0275lmVIBoJdWxn5wYI6ncKaH4
         9eoA==
X-Forwarded-Encrypted: i=1; AJvYcCXtacVgjkgESqehzzYE6wQeZ5vsaMQxCTtjUox994ClmzuFm6l3Yh6WPt+qFKM5a80rT8h1o0Ge4cegAWjVhWWVxAc4tsyu62BIy0avp0U=
X-Gm-Message-State: AOJu0YxcgR6D8xr7LoA+jBB9OY/YnbbUW5r5Wt6Ul++Vxi+5oVj8WgEh
	CpZxVHRONUjhUJ/FosMLL+44KEIlmDTJTH/i6WaNCM5tkPd/tGXZkK9E4OVKtA==
X-Google-Smtp-Source: AGHT+IESfo6cyxU+RqY/Xv8Z0T+s9lNVJ3IE7AApxpz3Flox2Cpp4Q1n7VERJeIGvAzISbIsEL5v8Q==
X-Received: by 2002:a17:906:6899:b0:a6f:2c2:9ea2 with SMTP id a640c23a62f3a-a6f47c9b62bmr83248566b.1.1718183068247;
        Wed, 12 Jun 2024 02:04:28 -0700 (PDT)
Message-ID: <ddef2b3d-9766-4697-ac15-4105a0592090@suse.com>
Date: Wed, 12 Jun 2024 11:04:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 5/7] x86/irq: deal with old_cpu_mask for interrupts in
 movement in fixup_irqs()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-6-roger.pau@citrix.com>
 <66fc06cc-f1f6-4f12-83d4-a3b9788bffba@suse.com> <Zmlgp9C2ryFtT65B@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Zmlgp9C2ryFtT65B@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 10:47, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 02:45:09PM +0200, Jan Beulich wrote:
>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>> Given the current logic it's possible for ->arch.old_cpu_mask to get out of
>>> sync: if a CPU set in old_cpu_mask is offlined and then onlined
>>> again without old_cpu_mask having been updated the data in the mask will no
>>> longer be accurate, as when brought back online the CPU will no longer have
>>> old_vector configured to handle the old interrupt source.
>>>
>>> If there's an interrupt movement in progress, and the to-be-offlined CPU (which
>>> is the call context) is in the old_cpu_mask, clear it and update the mask so it
>>> doesn't contain stale data.
>>
>> This imo is too __cpu_disable()-centric. In the code you cover the
>> smp_send_stop() case afaict, where it's all _other_ CPUs which are being
>> offlined. As we're not meaning to bring CPUs online again in that case,
>> dealing with the situation likely isn't needed. Yet the description should
>> imo at least make clear that the case was considered.
> 
> What about adding the following paragraph:

Sounds good, just maybe one small adjustment:

> Note that when the system is going down fixup_irqs() will be called by
> smp_send_stop() from CPU 0 with a mask with only CPU 0 on it,
> effectively asking to move all interrupts to the current caller (CPU
> 0) which is the only CPU online.  In that case we don't care to

"... the only CPU to remain online."

> migrate interrupts that are in the process of being moved, as it's
> likely we won't be able to move all interrupts to CPU 0 due to vector
> shortage anyway.
> 
>>
>>> @@ -2589,6 +2589,28 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>>>                                 affinity);
>>>          }
>>>  
>>> +        if ( desc->arch.move_in_progress &&
>>> +             !cpumask_test_cpu(cpu, &cpu_online_map) &&
>>
>> This part of the condition is, afaict, what covers (excludes) the
>> smp_send_stop() case. Might be nice to have a brief comment here, thus
>> also clarifying ...
> 
> Would you be fine with:
> 
>         if ( desc->arch.move_in_progress &&
>              /*
>               * Only attempt to migrate if the current CPU is going
>               * offline, otherwise the whole system is going down and
>               * leaving stale interrupts is fine.
>               */
>              !cpumask_test_cpu(cpu, &cpu_online_map) &&
>              cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )

Sure, this is even more verbose (i.e. better) than I was after.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:07:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:07:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739019.1145914 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJxM-0008GV-NU; Wed, 12 Jun 2024 09:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739019.1145914; Wed, 12 Jun 2024 09:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHJxM-0008GO-Kj; Wed, 12 Jun 2024 09:07:48 +0000
Received: by outflank-mailman (input) for mailman id 739019;
 Wed, 12 Jun 2024 09:07:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nAc7=NO=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHJxL-0008GG-B1
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 09:07:47 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20600.outbound.protection.outlook.com
 [2a01:111:f403:2415::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3a9ce332-289b-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 11:07:45 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by CY8PR12MB7537.namprd12.prod.outlook.com (2603:10b6:930:94::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Wed, 12 Jun
 2024 09:07:41 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.036; Wed, 12 Jun 2024
 09:07:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a9ce332-289b-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XNg0Y8iq3ODO4GlZX3SO84IEVp/tJyJvHJeI8Y0HPubc1i2hL9HSvDRDb1m1K/VLRJVpChiqV4GKsNHYTeR5uBf9LsrOXDjd5odUfXr4mIv2xG6aya3dh0jHjYF/306jjnnXmCH00hZylpCy/zyLtrXWGjuKlA904DoUNhdreqgADsIvI60CZsHyhAlyLLbLd6FMT6BwvuOZgOtMVOo5YGjj7V+PlQhIE8OjwOq5z0mdCFMkAjwIHzseh8FnWye6iZSLGP4hQd+8KdV4f6Sc4iHwxAWy/sRBKfO3ErOvqx2QVR5T4/9bhmk24VTE640ukD/Y5nedu9z8UGvtAOBoUw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=bU2ATdsX0Yzo7wY8o3X6Dp8VNRgdUums5xFQ35452tE=;
 b=WK/QszVatm5jV/I5+ISwmHRS4Wbll7tFC50tV5vP+8ITMp/HBK0VBc53eOt3F/X8FEAri26GG6r7btZf91xqZwpAZCt+fIMSK0k5FYCePrJlseiF9IJzGpQV0fiqQ6evkrYvFKU062+xMdQUJZ0dcpAmwLIVwbddalniyNms0vYwHoyQr/ujGVYd+oxAubMT6qKX7vSQSoTs3OLq+sY/tGxpeyRJCdJe+Ng4uMY5aeIVH22JuTEUcUDsLB4IeFmYja8X2v4NPxoDlR1memnPBov0DUq1fyXN9+wHZ1MC1BrvQmuXO9DzfXNu79ygkiPNHV9YCbJiIPonTDFFeNvsrA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bU2ATdsX0Yzo7wY8o3X6Dp8VNRgdUums5xFQ35452tE=;
 b=OFow5uJiGNsuEUD0tUfC1Z1X3l/Pnlb+scWgVdbqwQId9RikTOoauQmDb7yQ/PZ+7lpJ7zwv/E1VNepNo+AuIYWnLIsztNHbpmSVpYmCndTiZ5cHvjFEc53PHDsMvy89eiEzqyI6Sdgc2JghOi21nSwmsDzXX2YwG0d03Nbs2kM=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Topic: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Index: AQHauLJgk79xZtykz0GMHuXNeMsUFbHBLLUAgALK4YD//+MLAIAAiCyA
Date: Wed, 12 Jun 2024 09:07:40 +0000
Message-ID:
 <BL1PR12MB58493C065A5CA4FF2A9C03B6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-3-Jiqian.Chen@amd.com>
 <efc35614-561c-4baa-9d94-d17ecb528a4b@suse.com>
 <BL1PR12MB5849B1B536BAD641C37A4308E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <41905257-e2e6-4bce-b723-516916448dfd@suse.com>
In-Reply-To: <41905257-e2e6-4bce-b723-516916448dfd@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|CY8PR12MB7537:EE_
x-ms-office365-filtering-correlation-id: f93d5200-bb91-4881-935d-08dc8abf1d25
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230032|376006|7416006|1800799016|366008|38070700010;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230032)(376006)(7416006)(1800799016)(366008)(38070700010);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <CF85F22B7C44B74983628958B67ABB6F@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f93d5200-bb91-4881-935d-08dc8abf1d25
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jun 2024 09:07:40.9567
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: /ZWfhax2XgtFrKsSavsfrG2dJdNpsRwUtEDwKIG2qrtkZlVE9Jydq+3A8HyhYnw3cD5CQZ8eMeLQ3gB76D73yw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB7537

On 2024/6/12 16:53, Jan Beulich wrote:
> On 12.06.2024 04:43, Chen, Jiqian wrote:
>> On 2024/6/10 23:58, Jan Beulich wrote:
>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>> If run Xen with PVH dom0 and hvm domU, hvm will map a pirq for
>>>> a passthrough device by using gsi, see qemu code
>>>> xen_pt_realize->xc_physdev_map_pirq and libxl code
>>>> pci_add_dm_done->xc_physdev_map_pirq. Then xc_physdev_map_pirq
>>>> will call into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
>>>> is not allowed because currd is PVH dom0 and PVH has no
>>>> X86_EMU_USE_PIRQ flag, it will fail at has_pirq check.
>>>>
>>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
>>>> PHYSDEVOP_unmap_pirq for the failed path to unmap pirq. And
>>>> add a new check to prevent self map when subject domain has no
>>>> PIRQ flag.
>>>>
>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>
>>> What's imo missing in the description is a clarification / justification of
>>> why it is going to be a good idea (or at least an acceptable one) to expose
>>> the concept of PIRQs to PVH. If I'm not mistaken that concept so far has
>>> been entirely a PV one.
>> I didn't want to expose the concept of PIRQs to PVH.
>> I did this patch is for HVM that use PIRQs, what I said in commit message is HVM will map a pirq for gsi, not PVH.
>> For the original code, it checks " !has_pirq(currd)", but currd is PVH dom0, so it failed. So I need to allow PHYSDEVOP_map_pirq
>> even currd has no PIRQs, but the subject domain has.
> 
> But that's not what you're enforcing in do_physdev_op(). There you only
> prevent self-mapping. If I'm not mistaken all you need to do is drop the
> "d == current->domain" checks from those conditionals.
What I want is to allow PHYSDEVOP_map_pirq when currd doesn't have PIRQs but the subject domain does.
If I just added "break" in hvm_physdev_op without any checks, that would cause self-mapping problems.
And in a previous mail thread, you suggested preventing self-mapping when the subject domain doesn't have PIRQs.
So I added the checks in do_physdev_op.

> 
> Further see also
> https://lists.xen.org/archives/html/xen-devel/2024-06/msg00540.html.
> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:12:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:12:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739024.1145923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK27-00026l-8I; Wed, 12 Jun 2024 09:12:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739024.1145923; Wed, 12 Jun 2024 09:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK27-00026e-5f; Wed, 12 Jun 2024 09:12:43 +0000
Received: by outflank-mailman (input) for mailman id 739024;
 Wed, 12 Jun 2024 09:12:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHK25-000250-Ph
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 09:12:41 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb4d41c1-289b-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 11:12:40 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57ca8e45a1bso631420a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 02:12:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c7681d659sm6172799a12.54.2024.06.12.02.12.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 02:12:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb4d41c1-289b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718183560; x=1718788360; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=McyaLi/nhCcZg+uIMfat7vuPezhIz2XY+n1hh1uQxKc=;
        b=bkclvpCfnCKKHcU+PPBizLB3H+HcJHCv8JNxCQeZ7q8o6aCI4IfeE4BUdBOuZiozAJ
         NuoKMS8aA2Ed6WGadhKJoa4Qt+bvdZetG41J8cidVREByJTrUOgyCAmCoA1WSXcI0pzx
         mDaf7MmakodsNs3x55SqKjnFQWzK++WeYHAaRJnYaTNLhuZu4IIOnVTjgnxtKu4feiAb
         gj4Fi/7bS9Y2CWU1EFJ9hDPGNIycjJ+IB9tYOu2LE9CBpKUgt/VWqCh4AMujX6z3FXpM
         hygjSAEKYt/JZaPXcRr0Jx/TkmbhXs5YXeATUs+Nej1lzlbBFtJwrS28WrV/ESURR99R
         Ks4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718183560; x=1718788360;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=McyaLi/nhCcZg+uIMfat7vuPezhIz2XY+n1hh1uQxKc=;
        b=asfFDvv5q2U8PF+OpFG0AGtctlXzNSPfza+Rr7WKB80sSULfwyrU3KWlH0aIvSAVsw
         a2hCC5UaL221sgfr3+k5FZPJSIGty97Ay89zk8xeiu8Rxy3/vA0rNtQq6nHdbjdLnTpC
         WILv9inx/XOxs3xcXKbBeTguNbvQPCRwkkmDhDZs4ANZU8JNyVw2Vj67gB0IeMIdTPP+
         xyddS7OO/aEF4pMybg6lFeHNmfZgkqilAsE2S/qbm4MMRgA7hc3a/CTBZHoEmVGD1V7t
         Q7+mnsPX1zPiTrLpI6iPkYG5h7WTyCOJt9t5GSZG6HvsXNiBkk7eDFctYkQhdlNbI4b1
         EdsQ==
X-Forwarded-Encrypted: i=1; AJvYcCWWxeDJlYa5yoNB18ZlBA9nuZrQFMrd6IkJQVuS/CxJ+JU2mEeyCjAwHDGYJzh0Qg3vfeiUVVewUbhX8T6Irc4smjbU+vgw8thPCejSHB0=
X-Gm-Message-State: AOJu0YwvkoLEtWY/kvKalkOvsH08G+W3hUwo6FAUqQwNW7yJ/oRMY2eU
	1hbzc0BNsefTf8zuEp+jJt1w51/DLX93GDSC14CFTOKtn5Qf+bKgRI06u0tmgQ==
X-Google-Smtp-Source: AGHT+IEgyiZ1YesR+iwI3YyovAuT9BE7VBa7/3WPB4NYTmvUgQJYxNBAOnodfybizoXWc8fw7044+w==
X-Received: by 2002:a50:8d52:0:b0:57c:55f6:b068 with SMTP id 4fb4d7f45d1cf-57caa9bee1cmr1167549a12.32.1718183559961;
        Wed, 12 Jun 2024 02:12:39 -0700 (PDT)
Message-ID: <ca29b910-332d-420a-89d0-60f3c15a750d@suse.com>
Date: Wed, 12 Jun 2024 11:12:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 2/6] xen/self-tests: address violations of MISRA rule
 20.7
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
 <38d6b849e0ed868f1025d4af548dcebe89bda42d.1718117557.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <38d6b849e0ed868f1025d4af548dcebe89bda42d.1718117557.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 11.06.2024 17:53, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses". Therefore, some
> macro definitions should gain additional parentheses to ensure that all
> current and future users will be safe with respect to expansions that
> can possibly alter the semantics of the passed-in macro parameter.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> ---
> In this case the use of parentheses can detect misuses of the COMPILE_CHECK
> macro for the fn argument that happen to pass the compile-time check
> (see e.g. https://godbolt.org/z/n4zTdz595).

While readability suffers a little, I'm okay with the approach taken:
Reviewed-by: Jan Beulich <jbeulich@suse.com>
I'd like to give in particular Andrew some time to possibly object, though.
And anyway I don't think we want to rush any more Misra changes into 4.19.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:13:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:13:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739029.1145935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK2m-0002ee-Mi; Wed, 12 Jun 2024 09:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739029.1145935; Wed, 12 Jun 2024 09:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK2m-0002eX-HT; Wed, 12 Jun 2024 09:13:24 +0000
Received: by outflank-mailman (input) for mailman id 739029;
 Wed, 12 Jun 2024 09:13:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHK2k-0002eJ-QD; Wed, 12 Jun 2024 09:13:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHK2k-0002Eb-Nw; Wed, 12 Jun 2024 09:13:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHK2k-00037R-FC; Wed, 12 Jun 2024 09:13:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHK2k-00026u-Eh; Wed, 12 Jun 2024 09:13:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wjFo24sGLG7XqYxVJbBXGPrViQKhlFRORRtUQyVwVFQ=; b=k3Xu/nPjrbdmew8A9ESdaMOz8A
	JnLs/2wNkOLIy0k7MejM4H6b6dJFNhwji3W2ExK8tv9gIUfgMDHNiuimkLtvMJ/MSJHnGp/xPAVa4
	xEnqSWd+MtWLUPKwQ4wobICUHm+OKk7d8mdPXLfY/YVBefPH1c1eEkevQ6I9+FB55I4U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186318-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186318: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d3b32dca06b987d7214637f3952c2ce1ce69f308
X-Osstest-Versions-That:
    ovmf=0982da4f50279bfb2be479f97821b86feb87c336
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 09:13:22 +0000

flight 186318 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186318/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d3b32dca06b987d7214637f3952c2ce1ce69f308
baseline version:
 ovmf                 0982da4f50279bfb2be479f97821b86feb87c336

Last test of basis   186309  2024-06-11 09:43:10 Z    0 days
Testing same since   186318  2024-06-12 07:41:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0982da4f50..d3b32dca06  d3b32dca06b987d7214637f3952c2ce1ce69f308 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:13:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:13:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739035.1145944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK3A-00039u-TM; Wed, 12 Jun 2024 09:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739035.1145944; Wed, 12 Jun 2024 09:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK3A-00039n-QY; Wed, 12 Jun 2024 09:13:48 +0000
Received: by outflank-mailman (input) for mailman id 739035;
 Wed, 12 Jun 2024 09:13:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHK3A-00036N-8M
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 09:13:48 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12c3bf47-289c-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 11:13:46 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57ca8e45a1bso632874a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 02:13:46 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cab4fe431sm719264a12.61.2024.06.12.02.13.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 02:13:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12c3bf47-289c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718183626; x=1718788426; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=JeHwpJrOKwSwOR/lxPqxf/lKQ8BcNFSutmoLuilAq0I=;
        b=Nl4143ohJDc6LB6661SzyNWfTh6Rm0ZE9wRt1kcwCQugGvWUgLhJsBtRRnB50tMMD2
         ak9X9+Lm9yFpmh5DaDjTgW0MxfLRPmTQwyg1C27j4TKyxLE/8R0kWnEG8NQoXbeR95RY
         MpmWUjLiphG/lgHPtseVNiZDD5vOvYAYvHuihWq856AofHXnxPXqFaJVY0dcG/CovI4C
         suoIHYlmAA4iUvy0b/qzBf0uwgE7f5L2CD5vfxBYBtC6wGtY+5jT3f5khPLT+xHsXwhJ
         EyB9nHpRS2XeaAcKyWBzPIYVvEyt6kGWy/n1j9Jzy5IWXI/E6tw+B4QWGxvKne3tvBIR
         E+9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718183626; x=1718788426;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JeHwpJrOKwSwOR/lxPqxf/lKQ8BcNFSutmoLuilAq0I=;
        b=r7q+KK4jmWeA3ZS4KczvjsUA5LIpO+X3xl2T2OLvnD+wfY5/oJyLuBwO/NPv971Rlk
         hkRZal+3btjlA13OZJ7LZfyYRPqzBdF0uOUGblyZFKfpZLNe5RXX5uqTKtcMFVJjgKA/
         WWRF1ZSqqhFvJ5XOfR1WgcmjhGOnU5ogejBDKFKtTXlrm3iqg0BkWKBwIMWZeJcfOskl
         ZA8fmOiTeDpdQ8yd6EJg2BzyhybDV8cE70jQohkm94jWYByiQbJsrdvyW3ZWK+ieMouz
         WN/GsQx3DoThceJbAELk8COrNBmcMsNCsudFEg2QBR2avvzrIpr6n2UoFHS7sFTTb8fC
         orqw==
X-Forwarded-Encrypted: i=1; AJvYcCX8xBpXv95hSi1QJcvGxbxdcb4AMOGC640wMf8JcraHs6WQvfo/HiwOio3/R6Adh4JrPrN5b1kSskOEJMKgE2kARh6IIMDXyhV7GVnGwJU=
X-Gm-Message-State: AOJu0Yx4C2cB4E4VOTisOVk0Fa9ehvUlnElUkYzNRc3Tp0Gd27rCe2Tc
	PoCSQPmXJs3hDwhfvt1ueIr0T83ZdVwUIEzlFro55Y/ji7ruY2lhRhHNtMX2Gg==
X-Google-Smtp-Source: AGHT+IHLpflgp4IjhWs5Tlqr/xG++cBBxfZTyQDwrePgzAh128ePS8WSFMQr+mdvuE4amIGOCs8EdQ==
X-Received: by 2002:a50:9f41:0:b0:57c:9dbc:2b6e with SMTP id 4fb4d7f45d1cf-57caaac753dmr929725a12.42.1718183626264;
        Wed, 12 Jun 2024 02:13:46 -0700 (PDT)
Message-ID: <6d392d69-c062-4dd3-a6d6-1ef08b1619f0@suse.com>
Date: Wed, 12 Jun 2024 11:13:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 3/6] xen/guest_access: address violations of MISRA
 rule 20.7
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
 <2dbc4b40261b91de2148e467ce0fdade5cc89c50.1718117557.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <2dbc4b40261b91de2148e467ce0fdade5cc89c50.1718117557.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 11.06.2024 17:53, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses". Therefore, some
> macro definitions should gain additional parentheses to ensure that all
> current and future users will be safe with respect to expansions that
> can possibly alter the semantics of the passed-in macro parameter.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:20:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:20:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739045.1145953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK9F-0005JX-HO; Wed, 12 Jun 2024 09:20:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739045.1145953; Wed, 12 Jun 2024 09:20:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHK9F-0005JQ-EY; Wed, 12 Jun 2024 09:20:05 +0000
Received: by outflank-mailman (input) for mailman id 739045;
 Wed, 12 Jun 2024 09:20:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHK9E-00052r-TB
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 09:20:04 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f26bb629-289c-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 11:20:02 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6efae34c83so498022966b.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 02:20:02 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6e98f12e9bsm678174766b.32.2024.06.12.02.20.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 02:20:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f26bb629-289c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718184001; x=1718788801; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ZjPtJtdKfEGZMCprJaHEP/yb+AHta6yT2Oa6Q0eFdWs=;
        b=VHbwfyDxcxgDDGT1vjtIsOyzEZ9yFOEzVaMhApYBwnpnjakQXt4DzsmcXT0cvLsU/B
         qM6p27AJ/0dL8JOnyoMh0U5ONfBZG6c6cQhVSEIsLOac31E/qUiiXjQnFGNraHFY9X3e
         Kq5Phwu+5BXqLbSoD2/jYuIw/Uq5IXH26Z/Y77BRhALzik4bThXU3TZIukVQY9zXJoY+
         68c9n6wsLHqtMtYrhwk34fRBdqeabd7k0gGwG8mK1F1z/vSfhlKnlbsKsaPQQKyyMxqd
         ROj5iwRT3Rb0pIaq2/uLrWl77dwIovQ0I64yYtV/jebJPPf+jupck7aVtxLJ6zR5G+Ni
         EwgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718184001; x=1718788801;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZjPtJtdKfEGZMCprJaHEP/yb+AHta6yT2Oa6Q0eFdWs=;
        b=aeZvuPU3ifMaBZAEezSOLtQ0IgGaZSxi/pY8Qf9fuKgNSp0oQ5kPgLQ3sgw2I07eao
         1LGh0WjAQPapko46fNxgkOe0x3wqtZVtNzsqJcHJizzEJqb2ha/noN+0HHUaxTEjbL5+
         x9QmDzmN1EjNhzDUsIUbzz111uSkXWQVWuBv45VuhutqAExeJ9yf8Jz+HiZxIzX2TSHr
         CsNba1reQMkUvJWxX9ucYUpaDBC5a6bstiMz1dUPW0scLJhl6gIpODX+TLaMFKm4FNIw
         yTN4XJQkOS41ymurKOnQ639hMEn42Dadx/RHbU1pDQNO/ozcziziNYnB1U6EEtHJ3RPK
         T25w==
X-Forwarded-Encrypted: i=1; AJvYcCWyRNkvK7sZ03jP0S92vUChTp4yLDNbtFQGJzTgFsKtyGC9z2AiucCULArsDsB0zgKo3UEqK8JM5ZuinwShWE500K/EVnpXC0ZguU0hQEM=
X-Gm-Message-State: AOJu0YyrhYrxm8ONU6Dh5ZWVVjembVmwg5w6t4D7OiMZXbWZG7coJ6DR
	N+yFKjTR7SWAzAbeXTM5onVeYPvmN3JF50mFHIjYYqwEq24SVabBe8CQdX4w3Q==
X-Google-Smtp-Source: AGHT+IFFWK9MQfbx7X21PlkLm3NcgcqnjRfqQuEQLYahWkU9M82MIH59IFiU54tY/AizM4h0607fCw==
X-Received: by 2002:a17:906:d8ae:b0:a6f:4be5:a661 with SMTP id a640c23a62f3a-a6f4be5a7c1mr36197666b.46.1718184001244;
        Wed, 12 Jun 2024 02:20:01 -0700 (PDT)
Message-ID: <12ce10af-cd36-492e-a73b-2b81b5bf60cc@suse.com>
Date: Wed, 12 Jun 2024 11:19:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 4/6] x86emul: address violations of MISRA C Rule 20.7
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
 <0a502d2a9c5ce13be13281d9de49d263313b7852.1718117557.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <0a502d2a9c5ce13be13281d9de49d263313b7852.1718117557.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 11.06.2024 17:53, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses". Therefore, some
> macro definitions should gain additional parentheses to ensure that all
> current and future users will be safe with respect to expansions that
> can possibly alter the semantics of the passed-in macro parameter.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> ---
> These local helpers could in principle be deviated, but the readability
> and functionality are essentially unchanged by complying with the rule,
> so I decided to modify the macro definition as the preferred option.

Well, yes, but ...

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -2255,7 +2255,7 @@ x86_emulate(
>          switch ( modrm_reg & 7 )
>          {
>  #define GRP2(name, ext) \
> -        case ext: \
> +        case (ext): \
>              if ( ops->rmw && dst.type == OP_MEM ) \
>                  state->rmw = rmw_##name; \
>              else \
> @@ -8611,7 +8611,7 @@ int x86_emul_rmw(
>              unsigned long dummy;
>  
>  #define XADD(sz, cst, mod) \
> -        case sz: \
> +        case (sz): \
>              asm ( "" \
>                    COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
>                    _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \

... this is really nitpicky of the rule / tool. What halfway realistic
ways do you see to actually misuse these macros? What follows the "case"
keyword is just an expression; no precedence related issues are possible.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:21:48 2024
Message-ID: <5bb436a7-d426-4413-84bf-907615e12212@suse.com>
Date: Wed, 12 Jun 2024 11:21:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-3-Jiqian.Chen@amd.com>
 <efc35614-561c-4baa-9d94-d17ecb528a4b@suse.com>
 <BL1PR12MB5849B1B536BAD641C37A4308E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <41905257-e2e6-4bce-b723-516916448dfd@suse.com>
 <BL1PR12MB58493C065A5CA4FF2A9C03B6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58493C065A5CA4FF2A9C03B6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12.06.2024 11:07, Chen, Jiqian wrote:
> On 2024/6/12 16:53, Jan Beulich wrote:
>> On 12.06.2024 04:43, Chen, Jiqian wrote:
>>> On 2024/6/10 23:58, Jan Beulich wrote:
>>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>>> When running Xen with a PVH dom0 and an HVM domU, the HVM guest
>>>>> will map a pirq for a passthrough device by using a GSI; see the
>>>>> qemu code xen_pt_realize->xc_physdev_map_pirq and the libxl code
>>>>> pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then
>>>>> calls into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq is not
>>>>> allowed because currd is the PVH dom0 and PVH has no
>>>>> X86_EMU_USE_PIRQ flag, so the call fails at the has_pirq check.
>>>>>
>>>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
>>>>> PHYSDEVOP_unmap_pirq so the failure path can unmap the pirq. Also
>>>>> add a new check to prevent self-mapping when the subject domain
>>>>> has no PIRQ flag.
>>>>>
>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>>
>>>> What's imo missing in the description is a clarification / justification of
>>>> why it is going to be a good idea (or at least an acceptable one) to expose
>>>> the concept of PIRQs to PVH. If I'm not mistaken that concept so far has
>>>> been entirely a PV one.
>>> I didn't want to expose the concept of PIRQs to PVH.
>>> This patch is for HVM guests that use PIRQs; what I said in the commit
>>> message is that the HVM guest maps a pirq for a GSI, not PVH.
>>> The original code checks "!has_pirq(currd)", but currd is the PVH dom0,
>>> so the check fails. So I need to allow PHYSDEVOP_map_pirq even when
>>> currd has no PIRQs, as long as the subject domain does.
>>
>> But that's not what you're enforcing in do_physdev_op(). There you only
>> prevent self-mapping. If I'm not mistaken all you need to do is drop the
>> "d == current->domain" checks from those conditionals.
> What I want is to allow PHYSDEVOP_map_pirq when currd doesn't have PIRQs
> but the subject domain does. If I just add "break" in hvm_physdev_op
> without any checks, that will allow self-mapping.
> In a previous mail thread you suggested preventing self-mapping when the
> subject domain doesn't have PIRQs, so I added the checks in do_physdev_op.

Self-mapping was a primary concern of mine. Yet why deal with only a subset
of what needs preventing, when generalizing things can actually be done
with less code?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:22:41 2024
Message-ID: <756f5200-a309-484e-9278-b4981dd82ec2@suse.com>
Date: Wed, 12 Jun 2024 11:22:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 5/6] x86/irq: address violations of MISRA C Rule 20.7
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
 <a34c9483e17d59a79ddb2cc9c74cf5809b8f2e70.1718117557.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <a34c9483e17d59a79ddb2cc9c74cf5809b8f2e70.1718117557.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 11.06.2024 17:53, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses". Therefore, some
> macro definitions should gain additional parentheses to ensure that all
> current and future users will be safe with respect to expansions that
> can possibly alter the semantics of the passed-in macro parameter.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:33:42 2024
Message-ID: <8ba52fb7b6fd6757d5defa515d45f50b99f7bc0d.camel@gmail.com>
Subject: Re: [PATCH for-4.19???] x86/physdev: replace
 physdev_{,un}map_pirq() checking against DOMID_SELF
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, Chen Jiqian <Jiqian.Chen@amd.com>
Date: Wed, 12 Jun 2024 11:33:30 +0200
In-Reply-To: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
References: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40)
MIME-Version: 1.0

On Wed, 2024-06-12 at 10:44 +0200, Jan Beulich wrote:
> It's hardly ever correct to check for just DOMID_SELF, as guests have
> ways to figure out their domain IDs and hence could instead use those
> as inputs to respective hypercalls. Note, however, that for ordinary
> DomU-s the adjustment is relaxing things rather than tightening them,
> since - as a result of XSA-237 - the respective XSM checks would have
> rejected self (un)mapping attempts for other than the control domain.
> 
> Since in physdev_map_pirq() handling overall is a little easier this
> way, move obtaining of the domain pointer into the caller. Doing the
> same for physdev_unmap_pirq() is just to keep both consistent in this
> regard. For both this has the advantage that it is now provable (by
> the build not failing) that there are no DOMID_SELF checks left (and
> none could easily be re-added).
> 
> Fixes: 0b469cd68708 ("Interrupt remapping to PIRQs in HVM guests")
> Fixes: 9e1a3415b773 ("x86: fixes after emuirq changes")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> ---
> Note that the moving of rcu_lock_domain_by_any_id() is also going to
> help
> https://lists.xen.org/archives/html/xen-devel/2024-06/msg00206.html.
> 
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -18,9 +18,9 @@
>  #include <xsm/xsm.h>
>  #include <asm/p2m.h>
>  
> -int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
> +int physdev_map_pirq(struct domain *d, int type, int *index, int *pirq_p,
>                       struct msi_info *msi);
> -int physdev_unmap_pirq(domid_t domid, int pirq);
> +int physdev_unmap_pirq(struct domain *d, int pirq);
>  
>  #include "x86_64/mmconfig.h"
>  
> @@ -88,13 +88,12 @@ static int physdev_hvm_map_pirq(
>      return ret;
>  }
>  
> -int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
> +int physdev_map_pirq(struct domain *d, int type, int *index, int *pirq_p,
>                       struct msi_info *msi)
>  {
> -    struct domain *d = current->domain;
>      int ret;
>  
> -    if ( domid == DOMID_SELF && is_hvm_domain(d) && has_pirq(d) )
> +    if ( d == current->domain && is_hvm_domain(d) && has_pirq(d) )
>      {
>          /*
>           * Only makes sense for vector-based callback, else HVM-IRQ logic
> @@ -106,13 +105,9 @@ int physdev_map_pirq(domid_t domid, int
>          return physdev_hvm_map_pirq(d, type, index, pirq_p);
>      }
>  
> -    d = rcu_lock_domain_by_any_id(domid);
> -    if ( d == NULL )
> -        return -ESRCH;
> -
>      ret = xsm_map_domain_pirq(XSM_DM_PRIV, d);
>      if ( ret )
> -        goto free_domain;
> +        return ret;
>  
>      /* Verify or get irq. */
>      switch ( type )
> @@ -135,24 +130,17 @@ int physdev_map_pirq(domid_t domid, int
>          break;
>      }
>  
> - free_domain:
> -    rcu_unlock_domain(d);
>      return ret;
>  }
>  
> -int physdev_unmap_pirq(domid_t domid, int pirq)
> +int physdev_unmap_pirq(struct domain *d, int pirq)
>  {
> -    struct domain *d;
>      int ret = 0;
>  
> -    d = rcu_lock_domain_by_any_id(domid);
> -    if ( d == NULL )
> -        return -ESRCH;
> -
> -    if ( domid != DOMID_SELF || !is_hvm_domain(d) || !has_pirq(d) )
> +    if ( d != current->domain || !is_hvm_domain(d) || !has_pirq(d) )
>          ret = xsm_unmap_domain_pirq(XSM_DM_PRIV, d);
>      if ( ret )
> -        goto free_domain;
> +        return ret;
>  
>      if ( is_hvm_domain(d) && has_pirq(d) )
>      {
> @@ -160,8 +148,8 @@ int physdev_unmap_pirq(domid_t domid, in
>          if ( domain_pirq_to_emuirq(d, pirq) != IRQ_UNBOUND )
>              ret = unmap_domain_pirq_emuirq(d, pirq);
>          write_unlock(&d->event_lock);
> -        if ( domid == DOMID_SELF || ret )
> -            goto free_domain;
> +        if ( d == current->domain || ret )
> +            return ret;
>      }
>  
>      pcidevs_lock();
> @@ -170,8 +158,6 @@ int physdev_unmap_pirq(domid_t domid, in
>      write_unlock(&d->event_lock);
>      pcidevs_unlock();
>  
> - free_domain:
> -    rcu_unlock_domain(d);
>      return ret;
>  }
>  #endif /* COMPAT */
> @@ -184,6 +170,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>  
>      switch ( cmd )
>      {
> +        struct domain *d;
> +
>      case PHYSDEVOP_eoi: {
>          struct physdev_eoi eoi;
>          struct pirq *pirq;
> @@ -331,8 +319,15 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          msi.sbdf.devfn = map.devfn;
>          msi.entry_nr = map.entry_nr;
>          msi.table_base = map.table_base;
> -        ret = physdev_map_pirq(map.domid, map.type, &map.index, &map.pirq,
> -                               &msi);
> +
> +        d = rcu_lock_domain_by_any_id(map.domid);
> +        ret = -ESRCH;
> +        if ( !d )
> +            break;
> +
> +        ret = physdev_map_pirq(d, map.type, &map.index, &map.pirq, &msi);
> +
> +        rcu_unlock_domain(d);
>  
>          if ( map.type == MAP_PIRQ_TYPE_MULTI_MSI )
>              map.entry_nr = msi.entry_nr;
> @@ -348,7 +343,15 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          if ( copy_from_guest(&unmap, arg, 1) != 0 )
>              break;
>  
> -        ret = physdev_unmap_pirq(unmap.domid, unmap.pirq);
> +        d = rcu_lock_domain_by_any_id(unmap.domid);
> +        ret = -ESRCH;
> +        if ( !d )
> +            break;
> +
> +        ret = physdev_unmap_pirq(d, unmap.pirq);
> +
> +        rcu_unlock_domain(d);
> +
>          break;
>      }
>  



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:41:32 2024
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=r9TaUceBDQG0m5IF534ZS9rRZ2X3+VQ4z+aBycwjfv29rxaqyE+3oDZHizeTX8BVd
	 BWdiNBGzlgzoAY/IDhTXB7LNhRbS4hspjROmHZGgIFp65aP73vyfrqc+iA7Nqt+yAk
	 dRAKvsGSooaDPmxgr4C3j+ZNpX5ryMIF+yNuu9CNoCbYOEjGlqUmGdJbz+bK9x67Wq
	 wn1Wq9J4u7fizWLGuQfocokoVQzx8fQR+dcNt7htT3AWdgUJ0WZptW8PnGUK4xsiCA
	 X8AMFnAdD6ldL0Izq8MmmqGXQcIIv4IkF8cFcZNJXHXwe3IWzaaoATIWVhDHhtt34P
	 Db7oL030Tu0Bg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718185242; x=1718445742; i=damien.thenot@vates.tech;
	bh=0dzQz5g+yQ1z3qAhbSgWZ1eOZhES88esoSrO8/gqnNU=;
	h=From:Subject:Message-Id:To:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=SsPjvoLjiHURPpQ8jVs1pJf/STej3jDn/JcWK5UDPR3x8HDc57MOwGTf1o9X9l6uG
	 TXzcGzluiJ9ied9ArialbxC/UhrmSa5Zt7zPAv63bEsED/NZEoyyciAPICqopQC/jQ
	 jyv2CpoRZR2sNP4WYg6ENJO8jdYANLC3T3R1YFRHOTdfKTVfFTocwEgOZvtx2F+IvO
	 gl4XWWp/+PHHRaGMC7MTHnWSY491iAp4yiz0B8VMF2zEu9J0XELi/vvtcRRbp8lOUu
	 TxAhth8/PK44Q9CoAX1joe88RPomsBpf6wV5ci457xlihs+w4nZl4RwXBRX/vnbT4i
	 dGjzhZ1KQF6Pw==
From: Damien Thenot <damien.thenot@vates.tech>
Subject: =?utf-8?Q?BlueIris=20error=20on=20Xen=204.17?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718185241193
Message-Id: <ced16fca-3b55-40a1-a7e2-ffadd9707394@vates.tech>
To: xen-devel@lists.xenproject.org
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.69e8e1b4a6ec4e9a978221c2533b8984?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240612:md
Date: Wed, 12 Jun 2024 09:40:41 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello,

An XCP-ng 8.3 user running the Blue Iris software encountered a crash 
after Xen was upgraded to version 4.17.
It worked correctly when XCP-ng 8.3 used Xen 4.13.
It happens on Intel Xeon E-2378 CPU @ 2.60GHz machines and, it seems, 
on more recent Intel CPUs as well.
The guests are Windows VMs with an NVIDIA GPU passed through.

The user added:
 > On an older box with an i9-9900K CPU it does not happen and the VM 
 > works as expected. It also works on an older Xeon E-2146G and an 
 > E-2276G. On anything newer than that, however, the VM will just BSOD.

You can find more information in the XCP-ng forum post: 
https://xcp-ng.org/forum/topic/8873/windows-blue-iris-xcp-ng-8-3

The user tried enabling `msr-relaxed`, following the notes in the Xen 
4.15 documentation, but it didn't change the behavior and the guest 
still crashes.
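
For reference, this corresponds to the `msr_relaxed` guest setting from xl.cfg; the exact XCP-ng plumbing for setting it may differ, so the fragment below is illustrative only:

```
# xl guest configuration fragment (illustrative)
# Tolerate accesses to unhandled MSRs instead of injecting #GP
msr_relaxed = 1
```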

Has someone else observed such behavior?

Here is a Xen dmesg output with the error that the user was able to obtain:

```

(d1) [  132.028963] xen|BUGCHECK: ====>
(d1) [  132.029008] xen|BUGCHECK: SYSTEM_SERVICE_EXCEPTION: 00000000C0000096 FFFFF80418A21E27 FFFFAC009F27B900 0000000000000000
(d1) [  132.029057] xen|BUGCHECK: EXCEPTION (FFFFF80418A21E27):
(d1) [  132.029096] xen|BUGCHECK: - Code = 8589320F
(d1) [  132.029134] xen|BUGCHECK: - Flags = 000000A8
(d1) [  132.029174] xen|BUGCHECK: - Address = 8589320F008EECE9
(d1) [  132.029214] xen|BUGCHECK: - Parameter[0] = 0F000001D9B90000
(d1) [  132.029255] xen|BUGCHECK: - Parameter[1] = 4166300FFCE08332
(d1) [  132.029297] xen|BUGCHECK: - Parameter[2] = 0355000002C881F7
(d1) [  132.029338] xen|BUGCHECK: - Parameter[3] = 0002A0818B497B74
(d1) [  132.029379] xen|BUGCHECK: - Parameter[4] = 000002A8918B4900
(d1) [  132.029421] xen|BUGCHECK: - Parameter[5] = 8B49CA230FC0230F
(d1) [  132.029462] xen|BUGCHECK: - Parameter[6] = 918B49000002B081
(d1) [  132.029502] xen|BUGCHECK: - Parameter[7] = 0FD0230F000002B8
(d1) [  132.029544] xen|BUGCHECK: - Parameter[8] = 0002C8918B49DA23
(d1) [  132.029585] xen|BUGCHECK: - Parameter[9] = 230FF0230FC03300
(d1) [  132.029626] xen|BUGCHECK: - Parameter[10] = 008AE22504F665FA
(d1) [  132.029667] xen|BUGCHECK: - Parameter[11] = 00C2F76639740200
(d1) [  132.029709] xen|BUGCHECK: - Parameter[12] = 81342405F7327403
(d1) [  132.029753] xen|BUGCHECK: - Parameter[13] = 6626750000000200
(d1) [  132.029794] xen|BUGCHECK: - Parameter[14] = C88303740200C2F7
(d1) [  132.029835] xen|BUGCHECK: EXCEPTION (0D8B000000AC9589):
(d1) [  132.029888] xen|BUGCHECK: CONTEXT (FFFFAC009F27B900):
(d1) [  132.029925] xen|BUGCHECK: - GS = 002B
(d1) [  132.029959] xen|BUGCHECK: - FS = 0053
(d1) [  132.029994] xen|BUGCHECK: - ES = 002B
(d1) [  132.030028] xen|BUGCHECK: - DS = 002B
(d1) [  132.030056] xen|BUGCHECK: - SS = 0018
(d1) [  132.030089] xen|BUGCHECK: - CS = 0010
(d1) [  132.030123] xen|BUGCHECK: - EFLAGS = 00040046
(d1) [  132.030160] xen|BUGCHECK: - RDI = 00000000000002C4
(d1) [  132.030199] xen|BUGCHECK: - RSI = 00000000005FFA48
(d1) [  132.030237] xen|BUGCHECK: - RBX = 00000000426ED080
(d1) [  132.030275] xen|BUGCHECK: - RDX = 0000000000000000
(d1) [  132.030312] xen|BUGCHECK: - RCX = 00000000000001DD
(d1) [  132.030349] xen|BUGCHECK: - RAX = 0000000000000000
(d1) [  132.030386] xen|BUGCHECK: - RBP = 00000000E4427520
(d1) [  132.030424] xen|BUGCHECK: - RIP = 0000000018A21E27
(d1) [  132.030463] xen|BUGCHECK: - RSP = 00000000E4427498
(d1) [  132.030504] xen|BUGCHECK: - R8 = 0000000000000000
(d1) [  132.030543] xen|BUGCHECK: - R9 = 000000009F25A000
(d1) [  132.030580] xen|BUGCHECK: - R10 = 00000000000002C4
(d1) [  132.030618] xen|BUGCHECK: - R11 = 0000000000000246
(d1) [  132.030657] xen|BUGCHECK: - R12 = 0000000002CEB528
(d1) [  132.030696] xen|BUGCHECK: - R13 = 00000000A81E3A80
(d1) [  132.030735] xen|BUGCHECK: - R14 = 00000000000002C4
(d1) [  132.030775] xen|BUGCHECK: - R15 = 0000000000000000
(d1) [  132.030812] xen|BUGCHECK: STACK:
(d1) [  132.030858] xen|BUGCHECK: 00000000E44274A0: (00000000426ED080 00000000005FF898 0000000000000000 0000000000000000) ntoskrnl.exe + 000000000043667A
(d1) [  132.030935] xen|BUGCHECK: 00000000005FFA18: (00000000A74AD76E 0000000000000000 00000000A81E3A80 000000000000000E) 00007FFBA9BFF4D4
(d1) [  132.031010] xen|BUGCHECK: 00000000005FFA20: (0000000000000000 00000000A81E3A80 000000000000000E 0000000000000003) 00007FFBA74AD76E
(d1) [  132.031086] xen|BUGCHECK: 00000000005FFA28: (00000000A81E3A80 000000000000000E 0000000000000003 00000000005FFB10) 0000000000000000
(d1) [  132.031151] xen|BUGCHECK: <====
(XEN) [  132.040828] memory_map:remove: dom1 gfn=f60c0 mfn=a3080 nr=4
(XEN) [  132.040975] memory_map:remove: dom1 gfn=f5000 mfn=a2000 nr=1000
(XEN) [  132.041124] memory_map:remove: dom1 gfn=e0000 mfn=4000000 nr=10000
(XEN) [  132.041463] memory_map:remove: dom1 gfn=e8000 mfn=4008000 nr=8000
(XEN) [  132.041905] memory_map:remove: dom1 gfn=f8000 mfn=4010000 nr=2000
(XEN) [  132.042072] ioport_map:remove: dom1 gport=c100 mport=6000 nr=80
```

Thank you



Damien Thenot | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:41:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:41:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739063.1145994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKTu-0003L4-1D; Wed, 12 Jun 2024 09:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739063.1145994; Wed, 12 Jun 2024 09:41:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKTt-0003Kx-Tr; Wed, 12 Jun 2024 09:41:25 +0000
Received: by outflank-mailman (input) for mailman id 739063;
 Wed, 12 Jun 2024 09:24:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V2Vf=NO=nfschina.com=zeming@srs-se1.protection.inumbo.net>)
 id 1sHKDz-0007J8-3Z
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 09:24:59 +0000
Received: from mail.nfschina.com (unknown [42.101.60.195])
 by se1-gles-flk1.inumbo.com (Halon) with SMTP
 id a0d21b7e-289d-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 11:24:55 +0200 (CEST)
Received: from localhost.localdomain (unknown [219.141.250.2])
 by mail.nfschina.com (Maildata Gateway V2.8.8) with ESMTPA id 96FED60230176;
 Wed, 12 Jun 2024 17:24:49 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0d21b7e-289d-11ef-b4bb-af5377834399
X-MD-Sfrom: zeming@nfschina.com
X-MD-SrcIP: 219.141.250.2
From: Li zeming <zeming@nfschina.com>
To: jgross@suse.com,
	bhelgaas@google.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	dave.hansen@linux.intel.com
Cc: x86@kernel.org,
	xen-devel@lists.xenproject.org,
	linux-pci@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	Li zeming <zeming@nfschina.com>
Subject: [PATCH] =?UTF-8?q?x86:=20pci:=20xen:=20Remove=20unnecessary=20?= =?UTF-8?q?=E2=80=980=E2=80=99=20values=20from=20ret?=
Date: Wed, 12 Jun 2024 17:24:06 +0800
Message-Id: <20240612092406.39007-1-zeming@nfschina.com>
X-Mailer: git-send-email 2.18.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

ret is assigned before it is read on every path, so the initialization
to 0 is a dead store and can be dropped.

Signed-off-by: Li zeming <zeming@nfschina.com>
---
 arch/x86/pci/xen.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
index 652cd53e77f6..67cb9dc9b2e7 100644
--- a/arch/x86/pci/xen.c
+++ b/arch/x86/pci/xen.c
@@ -267,7 +267,7 @@ static bool __read_mostly pci_seg_supported = true;
 
 static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 {
-	int ret = 0;
+	int ret;
 	struct msi_desc *msidesc;
 
 	msi_for_each_desc(msidesc, &dev->dev, MSI_DESC_NOTASSOCIATED) {
@@ -353,7 +353,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
 
 bool xen_initdom_restore_msi(struct pci_dev *dev)
 {
-	int ret = 0;
+	int ret;
 
 	if (!xen_initial_domain())
 		return true;
-- 
2.18.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 09:52:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 09:52:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739091.1146014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKeV-0006UX-61; Wed, 12 Jun 2024 09:52:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739091.1146014; Wed, 12 Jun 2024 09:52:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKeV-0006UQ-2V; Wed, 12 Jun 2024 09:52:23 +0000
Received: by outflank-mailman (input) for mailman id 739091;
 Wed, 12 Jun 2024 09:52:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uw/A=NO=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sHKeU-0006UJ-DL
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 09:52:22 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 754ae3b4-28a1-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 11:52:19 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 70A734EE0754;
 Wed, 12 Jun 2024 11:52:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 754ae3b4-28a1-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Wed, 12 Jun 2024 11:52:19 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 4/6] x86emul: address violations of MISRA C Rule 20.7
In-Reply-To: <12ce10af-cd36-492e-a73b-2b81b5bf60cc@suse.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
 <0a502d2a9c5ce13be13281d9de49d263313b7852.1718117557.git.nicola.vetrini@bugseng.com>
 <12ce10af-cd36-492e-a73b-2b81b5bf60cc@suse.com>
Message-ID: <ac1faf5feded028ce80752ce69983352@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-12 11:19, Jan Beulich wrote:
> On 11.06.2024 17:53, Nicola Vetrini wrote:
>> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
>> of macro parameters shall be enclosed in parentheses". Therefore, some
>> macro definitions should gain additional parentheses to ensure that 
>> all
>> current and future users will be safe with respect to expansions that
>> can possibly alter the semantics of the passed-in macro parameter.
>> 
>> No functional change.
>> 
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>> ---
>> These local helpers could in principle be deviated, but the 
>> readability
>> and functionality are essentially unchanged by complying with the 
>> rule,
>> so I decided to modify the macro definition as the preferred option.
> 
> Well, yes, but ...
> 
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -2255,7 +2255,7 @@ x86_emulate(
>>          switch ( modrm_reg & 7 )
>>          {
>>  #define GRP2(name, ext) \
>> -        case ext: \
>> +        case (ext): \
>>              if ( ops->rmw && dst.type == OP_MEM ) \
>>                  state->rmw = rmw_##name; \
>>              else \
>> @@ -8611,7 +8611,7 @@ int x86_emul_rmw(
>>              unsigned long dummy;
>> 
>>  #define XADD(sz, cst, mod) \
>> -        case sz: \
>> +        case (sz): \
>>              asm ( "" \
>>                    COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
>>                    _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \
> 
> ... this is really nitpicky of the rule / tool. What halfway realistic
> ways do you see to actually misuse these macros? What follows the 
> "case"
> keyword is just an expression; no precedence related issues are 
> possible.
> 

I do share that view: no real danger arises in sensible uses. MISRA 
rules are often stricter than strictly necessary so that they can keep 
a simple formulation and avoid too many special cases.

However, if a deviation is formulated, then it needs to be maintained, 
for no real readability benefit in this case, in my opinion. I can be 
convinced otherwise, of course.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:07:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:07:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739101.1146024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKtM-0000du-Cq; Wed, 12 Jun 2024 10:07:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739101.1146024; Wed, 12 Jun 2024 10:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKtM-0000dn-A6; Wed, 12 Jun 2024 10:07:44 +0000
Received: by outflank-mailman (input) for mailman id 739101;
 Wed, 12 Jun 2024 10:07:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FFiX=NO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sHKtL-0000da-AA
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:07:43 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9a88af79-28a3-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 12:07:41 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id
 2adb3069b0e04-52c32d934c2so4161362e87.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:07:41 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52bb41e1bd6sm2394771e87.48.2024.06.12.03.07.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 03:07:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a88af79-28a3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718186861; x=1718791661; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=ep0PBQ7KNW188xRXp5IS+ALJEl/avlLGzIRQZPwnSBY=;
        b=V9r9zd23RLfcBlf6oeI4Q5cq3EDovB562HcEIrDfK7qbHdSHz0seIuGiPFDWne6gJN
         hUTFji7L2nodq5/BjTOxh1m+vRa56gij9yE3Z++LnTYXoBBt3JgsHUB5E3WlQ/iQu5Ao
         gPjJwVbtxP20UayX6OUAY6MMZnHp1/GePiayNkvrp5NRD2dmRk53qVmebnasxysYKMtn
         Jire+gyP6In3/hSC/wIMGXy3Q75NDP9eN/HsjcSwas+Q3q0Vgqytxjd2uTGKxm8tBQqP
         b0N6vTF1a8nhOkPL67JZMvz5XKVSmIxFIFR4quECN4sKD0OuW59crTmRFxTrnwTaad/2
         3BIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718186861; x=1718791661;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=ep0PBQ7KNW188xRXp5IS+ALJEl/avlLGzIRQZPwnSBY=;
        b=WQzyomxr7l9Bayn04dQmKVWIy/SCidLV3wqKI5uDlav4MxAJIDfA/iAgJ3YrjP1VxQ
         zAMlhUUvZ2QcfniXVhXfpYbtQRQQvSqegCb2Mp6uvyEcSHgltkHeXKser21VmJ3iICqV
         JMlkBVSEn9Ir70MjOouUka7qBWgwc9MsId9QRaUwjyeIztTmoPylzuC9qn04nPkinrAD
         eYBhHrr+pbrcQY6cqjGdOsXxAQp0pWNZFCkhxHhCrw0guFY80Brpk2Wvr5gjbQvqnPPh
         rTe3wWSEadEoD7UvvxZB59KgJnJfW7YfzIni/2g/lxSVRLiXU9LTQdVaN2Jqpat03xdi
         TBSA==
X-Forwarded-Encrypted: i=1; AJvYcCVO8kBD4QcAqh8+1giIo+NX+lSzxFWyoGXIfwesVoe05iH0PGVkNk91OIJQJN4MrGNC8FKsmm4dUx+RrbuSlmiS4j/EalEDzFPbJCB49hQ=
X-Gm-Message-State: AOJu0YyxhUjVSTbPNbfbfhg86hjymC7ZfgzeROPcJ5IeXhQkC9lH4UT9
	4h6DP7ItxNh0No/IkPBVtO3w5X/JXcch6mpvMCDmgZNjbq8pk0cn
X-Google-Smtp-Source: AGHT+IG6lD2KlqlU6/mPFVCj87W0QLKiGM3w0TpPFZUmnuH2oz0JsR9TcJIuWLJQym2B1jcoAKSlgw==
X-Received: by 2002:a05:6512:452:b0:52c:4cfa:c5a6 with SMTP id 2adb3069b0e04-52c9a3df26fmr640282e87.34.1718186860374;
        Wed, 12 Jun 2024 03:07:40 -0700 (PDT)
Message-ID: <3d5287cacaf7960cfabc1262fe1603e53267a344.camel@gmail.com>
Subject: Re: [PATCH v2 1/7] x86/smp: do not use shorthand IPI destinations
 in CPU hot{,un}plug contexts
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Wed, 12 Jun 2024 12:07:39 +0200
In-Reply-To: <92584a2d-6695-4884-ba2e-990842318d8a@suse.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
	 <20240610142043.11924-2-roger.pau@citrix.com>
	 <615582c8-c153-424d-bce4-eb0c903d28ad@suse.com> <ZmlXve3rV2Vx9bH7@macbook>
	 <92584a2d-6695-4884-ba2e-990842318d8a@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Wed, 2024-06-12 at 10:56 +0200, Jan Beulich wrote:
> On 12.06.2024 10:09, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 09:42:39AM +0200, Jan Beulich wrote:
> > > On 10.06.2024 16:20, Roger Pau Monne wrote:
> > > > Due to the current rwlock logic, if the CPU calling
> > > > get_cpu_maps() does so from a cpu_hotplug_{begin,done}() region
> > > > the function will still return success, because a CPU taking
> > > > the rwlock in read mode after having taken it in write mode is
> > > > allowed.  Such behavior however defeats the purpose of
> > > > get_cpu_maps(),
> > > 
> > > I'm not happy to see you still have this claim here. The behavior
> > > may (appear to) defeat the purpose here, but as expressed
> > > previously I don't view that as being a general pattern.
> > 
> > Right.  What about replacing the paragraph with:
> > 
> > "Due to the current rwlock logic, if the CPU calling get_cpu_maps()
> > does so from a cpu_hotplug_{begin,done}() region the function will
> > still return success, because a CPU taking the rwlock in read mode
> > after having taken it in write mode is allowed.  Such corner case
> > makes using get_cpu_maps() alone not enough to prevent using the
> > shorthand in CPU hotplug regions."
> 
> Thanks.
> 
> > I think the rest of the commit message is not controversial.
> 
> Indeed.
> 
> > > > --- a/xen/arch/x86/smp.c
> > > > +++ b/xen/arch/x86/smp.c
> > > > @@ -88,7 +88,7 @@ void send_IPI_mask(const cpumask_t *mask, int vector)
> > > >      * the system have been accounted for.
> > > >      */
> > > >     if ( system_state > SYS_STATE_smp_boot &&
> > > > -         !unaccounted_cpus && !disabled_cpus &&
> > > > +         !unaccounted_cpus && !disabled_cpus && !cpu_in_hotplug_context() &&
> > > >          /* NB: get_cpu_maps lock requires enabled interrupts. */
> > > >          local_irq_is_enabled() && (cpus_locked = get_cpu_maps()) &&
> > > >          (park_offline_cpus ||
> > > 
> > > Along with changing the condition you also want to update the
> > > comment to reflect the code adjustment.
> > 
> > I've assumed the wording in the comment that says: "no CPU hotplug
> > or unplug operations are taking place" would already cover the
> > addition of the !cpu_in_hotplug_context() check.
> 
> Hmm, yes, you're right. Just needs a release-ack then to go in (with
> the description adjustment).
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:08:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:08:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739107.1146034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKuK-0001PE-OO; Wed, 12 Jun 2024 10:08:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739107.1146034; Wed, 12 Jun 2024 10:08:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKuK-0001P7-Ld; Wed, 12 Jun 2024 10:08:44 +0000
Received: by outflank-mailman (input) for mailman id 739107;
 Wed, 12 Jun 2024 10:08:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FFiX=NO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sHKuI-0000da-Rw
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:08:42 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id be361f78-28a3-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 12:08:40 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id
 4fb4d7f45d1cf-57c923e03caso2282210a12.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:08:41 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c71b6dcaesm7134035a12.78.2024.06.12.03.08.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 03:08:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be361f78-28a3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718186921; x=1718791721; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=q/J/Jkf5evDDxyzdW18ps6v9OcP9gBIFDKcaK5iQWfY=;
        b=kAadM38BioPVaoH6A6LZ8oeBcKXCIlg1OoX8PzupLl6u/9fJ4YhqQyxW5KN5BPrbmd
         NMkrGIcoTtiUa1pyBF+NzquojMZDbgdW4SUBtLzK1/uVQ6QjWGjQ+KPcEbSqAApvekuq
         SgySkvzZ446X9ohnrwfNyuBXcknp/8VKAtAfi/vI/jVv56TN4yPjxshTrnDUNBG9c1WY
         hGXthbyOw3n6VNh4Iz8duNbc4tnf/cSPm374MlcNBwJVMLZn4qC8g+cvvQk7cG820fFv
         upX12E3mb/XDn7RRrJIKycNMHnh00hgwx2V8ZMoYRLmKU448iiV1MnhNoahgjIhi6AuC
         jxvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718186921; x=1718791721;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=q/J/Jkf5evDDxyzdW18ps6v9OcP9gBIFDKcaK5iQWfY=;
        b=DJTzVfzwbfggkz7UABo/Wx5YjW+3AuVotoGk/s+saXumF77DYPv6ucf4UaUqkfGvLJ
         6bp32TcHrB9o/GdF1oDcmZMYcGPPn/gjf8kEpcZMjyjBbKeSl8INZfrVQ3+se810daIj
         X2U3uVC9SnQCTAFv9cBWsazBbJcysUbcdA3wLO7i07s8OKj+G2oG8qXzF7C0jaXSO88l
         yeJ+MU1dMjVI1cy0CfqLcoPvAJ+cUvW6sMkxesvzTPCStpOj/NU1M5mit7th0ReWVcTX
         XxWHqTde+piUdE4ajmix2TlLQnSTGgAT39k/zByhnCbQXB+JUae3Q8ZKyQ3JqAvb69fL
         Lkrw==
X-Forwarded-Encrypted: i=1; AJvYcCVzv2XXx4iXgewiqI3A+5vpnRzifQsJeuLrYJUUqm/Lbq374iHPcMUzVd1ELAfHeq6p7tvTDG8tScPStyCG04rVlMCiFkjv+o5aB5NNwrA=
X-Gm-Message-State: AOJu0YxgZAivpH3caSM4zuFU50f0mqxpE/AhX6FSpsk2Fe12M+jlpSPW
	fhO8zxO7jZ9hZZjKFCAk3doYLyJlMb0pdYOdCGs9myShD8k7h7EHfm7FUA==
X-Google-Smtp-Source: AGHT+IFhxUOiL9oh0eEj3VXIP9Y7MuKGG4nO61HNtKXIe2QhHKbtGrQ20+oeUSWGURxWwSYm4A47eg==
X-Received: by 2002:a50:ab58:0:b0:57c:6953:2cac with SMTP id 4fb4d7f45d1cf-57ca976aac8mr817059a12.22.1718186920476;
        Wed, 12 Jun 2024 03:08:40 -0700 (PDT)
Message-ID: <b84269bc7d8cb10bf1c1c118d959634c30902603.camel@gmail.com>
Subject: Re: [PATCH v2 2/7] x86/irq: describe how the interrupt CPU movement
 works
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Date: Wed, 12 Jun 2024 12:08:39 +0200
In-Reply-To: <9048733a-e942-4384-b926-e8a304095356@suse.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
	 <20240610142043.11924-3-roger.pau@citrix.com>
	 <9048733a-e942-4384-b926-e8a304095356@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Tue, 2024-06-11 at 09:44 +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > The logic to move interrupts across CPUs is complex, attempt to
> > provide a
> > comment that describes the expected behavior so users of the
> > interrupt system
> > have more context about the usage of the arch_irq_desc structure
> > fields.
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:12:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739112.1146044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKxv-0003DO-5s; Wed, 12 Jun 2024 10:12:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739112.1146044; Wed, 12 Jun 2024 10:12:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKxv-0003DH-2v; Wed, 12 Jun 2024 10:12:27 +0000
Received: by outflank-mailman (input) for mailman id 739112;
 Wed, 12 Jun 2024 10:12:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nAc7=NO=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHKxs-0003D4-Vj
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:12:25 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2062d.outbound.protection.outlook.com
 [2a01:111:f403:2009::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 424033e9-28a4-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 12:12:23 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by SN7PR12MB6912.namprd12.prod.outlook.com (2603:10b6:806:26d::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.21; Wed, 12 Jun
 2024 10:12:18 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.036; Wed, 12 Jun 2024
 10:12:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 424033e9-28a4-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kHWKtqXsmou//7VUbjbnOASyecYH8nZ1p7ixa4ESHzS7k2gowC7pzRJPnJ91SLXYJukEcmwEHpx5LuBeu8AZWE6hrQP3gDjzlARbocxXP+27i3/IGTpm33BTOzeJGUPg1Xe9WwYcoColfLmBxA+QPrVKpKufNOqe/B+8qzKv/ApcImQtyo+MkttQPxDCSk+nye3/iIm27UnrGekmfLJb1EI5iAqtzS5JqGvsCL7z4lv5eApz597qM9eum1BpIKVwKS+U0REsfAo86yd7ItegwYLxuKlNPtjEXAR9eOsYETX37u3rmt9u0Za1kmaCML+I1ZjQ7dhqzjxP6iSvMpYbhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mPNspxN0qPxIyjp0LEj+8zHpDKYloUZF3qw2f1jDq8E=;
 b=MjzKg/lxjm5wp+6bdEC9BgrUcNuF0qI8FNqrZ+TUAr9YMict0LlLsk+gKZWv2F6Q78U69QqtYRSoB1oLmgxWcXteXr2gRbgDIym9c3jot9G4Tljad1vNUHhQ17/p9gowtOz6hhscZYMEGPVjYHBDi/rkoEy/o0fB+fsv6OVUT8P/mrADDqAr7sQmoaIzwxa9MLzGo0dR+s7nCQRyhRdWRpLy28a7I8GXaDF85e3L/FuF0eTXT54h94t+wES8TTiM7cl8514qC8bSS6tbHHZb/1eD+L/zKnLpZ/4RMLZyK43hMcJSov2imjJqJXFBKb5haPucmx7CsclDRumZmPYRzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mPNspxN0qPxIyjp0LEj+8zHpDKYloUZF3qw2f1jDq8E=;
 b=e37bby2m3as3UEpXWLoj//diY7pB/u5qGOfgsTMR8P8FTHdKTyXC91M+MokCj0vfvvGldibDSHVonXFe1mlVv61/mgfZsHW+xy54Kwu0fw3wLPXJOO6X36pJjYSi3p4Lg4l/a6LA6yeTb5hawNU9KjzRiwaJ4bFoT9VhnxCCzWk=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index: AQHauLJn3FT0TCW9KkmRq1iRWqwBt7HCqQ8AgAG9m4A=
Date: Wed, 12 Jun 2024 10:12:18 +0000
Message-ID:
 <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
In-Reply-To: <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|SN7PR12MB6912:EE_
x-ms-office365-filtering-correlation-id: e60e41cd-6547-4cb8-0aaf-08dc8ac8240c
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1FF46195B3578C4B9185164C23E6B05B@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e60e41cd-6547-4cb8-0aaf-08dc8ac8240c
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jun 2024 10:12:18.1995
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: fGzmg96m4rEMkaxEvZL+BWlmrrjeesvTnRHamnXSuUmxSXCaQ9qcbsGvhr6qys8oCQO2Xl79vxM7PeLfY7t7jg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB6912

Hi Jan,

On 2024/6/11 22:39, Jan Beulich wrote:
> On 07.06.2024 10:11, Jiqian Chen wrote:
>> Some type of domain don't have PIRQ, like PVH, it do not do
>> PHYSDEVOP_map_pirq for each gsi. When passthrough a device
>> to guest on PVH dom0, callstack
>> pci_add_dm_done->XEN_DOMCTL_irq_permission will failed at
>> domain_pirq_to_irq, because PVH has no mapping of gsi, pirq
>> and irq on Xen side.
> 
> All of this is, to me at least, in pretty sharp contradiction to what
> patch 2 says and does. IOW: Do we want the concept of pIRQ in PVH, or
> do we want to keep that to PV?
It's not contradictory.
What I did is not to add the concept of PIRQs for PVH.
All previous passthrough code was implemented on the basis of pv dom0 + hvm domU.
For pv dom0, it has PIRQs. For hvm domU, it has PIRQs too.
So the codes are not suitable for PVH dom0 + hvm domU, because PVH dom0 has no PIRQs.
Patch 2 do PHYSDEVOP_map_pirq for hvm domU even when dom0 is PVH instead of PV. It didn't add PIRQs for PVH.
This patch is to grant irq( that get from gsi ) to hvm domU, why XEN_DOMCTL_irq_permission is not useful is because PVH has no PIRQs, we can't get irq through pirq like PV does.

> 
>> What's more, current hypercall XEN_DOMCTL_irq_permission require
>> passing in pirq and grant the access of irq, it is not suitable
>> for dom0 that has no PIRQ flag, because passthrough a device
>> needs gsi and grant the corresponding irq to guest. So, add a
>> new hypercall to grant gsi permission when dom0 is not PV or dom0
>> has not PIRQ flag.
>>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
> 
> A problem throughout the series as it seems: Who's the author of these
> patches? There's no From: saying it's not you, but your S-o-b also
> isn't first.
So I need to change to:
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I am the author.
Signed-off-by: Huang Rui <ray.huang@amd.com> means Rui sent them to upstream firstly.
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I take continue to upstream.

> 
>> --- a/tools/libs/light/libxl_pci.c
>> +++ b/tools/libs/light/libxl_pci.c
>> @@ -1412,6 +1412,37 @@ static bool pci_supp_legacy_irq(void)
>>  #define PCI_SBDF(seg, bus, devfn) \
>>              ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>  
>> +static int pci_device_set_gsi(libxl_ctx *ctx,
>> +                              libxl_domid domid,
>> +                              libxl_device_pci *pci,
>> +                              bool map,
>> +                              int *gsi_back)
>> +{
>> +    int r, gsi, pirq;
>> +    uint32_t sbdf;
>> +
>> +    sbdf = PCI_SBDF(pci->domain, pci->bus, (PCI_DEVFN(pci->dev, pci->func)));
>> +    r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>> +    *gsi_back = r;
>> +    if (r < 0)
>> +        return r;
>> +
>> +    gsi = r;
>> +    pirq = r;
> 
> r is a GSI as per above; why would you store such in a variable named pirq?
> And how can ...
> 
>> +    if (map)
>> +        r = xc_physdev_map_pirq(ctx->xch, domid, gsi, &pirq);
>> +    else
>> +        r = xc_physdev_unmap_pirq(ctx->xch, domid, pirq);
> 
> ... that value be the correct one to pass into here? In fact, the pIRQ number
> you obtain above in the "map" case isn't handed to the caller, i.e. it is
> effectively lost. Yet that's what would need passing into such an unmap call.
Yes r is GSI and I know pirq will be replaced by xc_physdev_map_pirq.
What I do "pirq = r" is for xc_physdev_unmap_pirq, unmap need passing in pirq,
and the number of pirq is always equal to gsi.

> 
>> +    if (r)
>> +        return r;
>> +
>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
> 
> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
> you better would avoid making the call in the first place.
Yes, for PV dom0, the errno is EOPNOTSUPP, then it will do below xc_domain_irq_permission.

> 
>> +    if (r && errno == EOPNOTSUPP)
> 
> Before here you don't really need the pIRQ number; if all it really is needed
> for is ...
> 
>> +        r = xc_domain_irq_permission(ctx->xch, domid, pirq, map);
> 
> ... this, then it probably also should only be obtained when it's needed. Yet
> overall the intentions here aren't quite clear to me.
Adding the function pci_device_set_gsi is for PVH dom0, while also ensuring compatibility with PV dom0.
When PVH dom0, it does xc_physdev_map_pirq and xc_domain_gsi_permission(new hypercall for PVH dom0)
When PV dom0, it keeps the same actions as before codes, it does xc_physdev_map_pirq and xc_domain_irq_permission.

> 
>> @@ -1485,6 +1516,19 @@ static void pci_add_dm_done(libxl__egc *egc,
>>      fclose(f);
>>      if (!pci_supp_legacy_irq())
>>          goto out_no_irq;
>> +
>> +    r = pci_device_set_gsi(ctx, domid, pci, 1, &gsi);
>> +    if (gsi >= 0) {
>> +        if (r < 0) {
> 
> This unusual way of error checking likely wants a comment.
Will add in next version.

> 
>> +            rc = ERROR_FAIL;
>> +            LOGED(ERROR, domainid,
>> +                  "pci_device_set_gsi gsi=%d (error=%d)", gsi, errno);
>> +            goto out;
>> +        } else {
>> +            goto process_permissive;
>> +        }
> 
> Btw, no need for "else" when the earlier if ends in "goto" or alike.
OK, I will change in next version.

> 
>> @@ -1493,13 +1537,6 @@ static void pci_add_dm_done(libxl__egc *egc,
>>          goto out_no_irq;
>>      }
>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>> -        sbdf = PCI_SBDF(pci->domain, pci->bus,
>> -                        (PCI_DEVFN(pci->dev, pci->func)));
>> -        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>> -        /* if fail, keep using irq; if success, r is gsi, use gsi */
>> -        if (r != -1) {
>> -            irq = r;
>> -        }
> 
> If I'm not mistaken, this and ...
> 
>> @@ -2255,13 +2302,6 @@ skip_bar:
>>      }
>>  
>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>> -        sbdf = PCI_SBDF(pci->domain, pci->bus,
>> -                        (PCI_DEVFN(pci->dev, pci->func)));
>> -        r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>> -        /* if fail, keep using irq; if success, r is gsi, use gsi */
>> -        if (r != -1) {
>> -            irq = r;
>> -        }
> 
> ... this is code added by the immediately preceding patch. It's pretty odd
> for that to be deleted here again right away. Can the interaction of the
> two patches perhaps be re-arranged to avoid this anomaly?
OK, I will do in next version.

> 
>> @@ -237,6 +238,43 @@ long arch_do_domctl(
>>          break;
>>      }
>>  
>> +    case XEN_DOMCTL_gsi_permission:
>> +    {
>> +        unsigned int gsi = domctl->u.gsi_permission.gsi;
>> +        int irq = gsi_2_irq(gsi);
> 
> I'm not sure it is a good idea to issue this call ahead of the basic error
> checks below.
I will move it below the checks.

> 
>> +        bool allow = domctl->u.gsi_permission.allow_access;
> 
> This allows any non-zero values to mean "true". I think you want to bail
> on values larger than 1, much like you also want to check that the padding
> fields are all zero.
Will change in next version.

> 
>> +        /*
>> +         * If current domain is PV or it has PIRQ flag, it has a mapping
>> +         * of gsi, pirq and irq, so it should use XEN_DOMCTL_irq_permission
>> +         * to grant irq permission.
>> +         */
>> +        if ( is_pv_domain(current->domain) || has_pirq(current->domain) )
> 
> Please use currd here (and also again below).
Will change in next version.
> 
>> +        {
>> +            ret = -EOPNOTSUPP;
>> +            break;
>> +        }
>> +
>> +        if ( gsi >= nr_irqs_gsi || irq < 0 )
>> +        {
>> +            ret = -EINVAL;
>> +            break;
>> +        }
>> +
>> +        if ( !irq_access_permitted(current->domain, irq) ||
>> +             xsm_irq_permission(XSM_HOOK, d, irq, allow) )
> 
> Daniel, is it okay to issue the XSM check using the translated value, not
> the one that was originally passed into the hypercall?
> 
>> --- a/xen/arch/x86/io_apic.c
>> +++ b/xen/arch/x86/io_apic.c
>> @@ -955,6 +955,27 @@ static int pin_2_irq(int idx, int apic, int pin)
>>      return irq;
>>  }
>>  
>> +int gsi_2_irq(int gsi)
>> +{
>> +    int entry, ioapic, pin;
>> +
>> +    ioapic = mp_find_ioapic(gsi);
>> +    if ( ioapic < 0 )
>> +        return -1;
> 
> Can this be a proper errno value (likely -EINVAL), please?
Will change in next version.
> 
>> +    pin = gsi - io_apic_gsi_base(ioapic);
> 
> Hmm, instead of the below: Once you have an (ioapic,pin) tuple, can't
> you simply call apic_pin_2_gsi_irq()?
Will change in next version.
> 
>> +    entry = find_irq_entry(ioapic, pin, mp_INT);
>> +    /*
>> +     * If there is no override mapping for irq and gsi in mp_irqs,
>> +     * then the default identity mapping applies.
>> +     */
>> +    if ( entry < 0 )
>> +        return gsi;
>> +
>> +    return pin_2_irq(entry, ioapic, pin);
> 
> Under certain conditions this may return 0. Yet you surely don't want
> to pass IRQ0 back as a result; you want to hand an error back instead.
Will add in next version.
> 
>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -465,6 +465,14 @@ struct xen_domctl_irq_permission {
>>  };
>>  
>>  
>> +/* XEN_DOMCTL_gsi_permission */
>> +struct xen_domctl_gsi_permission {
>> +    uint32_t gsi;
>> +    uint8_t allow_access;    /* flag to specify enable/disable of x86 gsi access */
>> +    uint8_t pad[3];
>> +};
>> +
>> +
> 
> Nit: No (new) double blank lines please. In fact (didn't I say this before
> already?) you could insert between the two above, such that the existing issue
> also disappears.
I remember I changed it here, maybe I made a mistake somewhere, and I will modify it in the next version.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:13:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:13:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739118.1146054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKyo-0003kj-FL; Wed, 12 Jun 2024 10:13:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739118.1146054; Wed, 12 Jun 2024 10:13:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHKyo-0003kc-CM; Wed, 12 Jun 2024 10:13:22 +0000
Received: by outflank-mailman (input) for mailman id 739118;
 Wed, 12 Jun 2024 10:13:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FFiX=NO=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sHKyn-0003kR-Pd
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:13:21 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65245206-28a4-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 12:13:20 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6f21ff4e6dso370767566b.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:13:20 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f17b2bcbcsm438058266b.102.2024.06.12.03.13.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 03:13:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65245206-28a4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718187200; x=1718792000; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=aiz/H2My8QHO6/QGgq1w7yGj1brhz9OJKzqqTH7M94U=;
        b=T/RXLLtjG9/KbSBUykTyAg6pxjbpZwonw4G/j0wmFK09+9qd3Ddghcj9NNxUb74ZTL
         ZwY5ntZaNPGm25YxZJDRSYdfJMTpjyYmfVZG+6NajeeZzUkBaDdJftmabbDe+f8io09T
         Z/QuZ1k+fONaF3qycOLSnY712wvRnoczJS669s8uZm0SrdAXXW0l3IeUwG6ZR2LN7wvR
         MNc+yUN/Vi04Z60nC0OeY7lnwoRBODC0Jm46EgudDrDr+cFI3tjjpEuzuu3v/TFqdaMc
         427/xr/KxgG85Nol5pSxQWIYJEe7SqkRXmiAbZtzMOLLwz7RsxEMK1ZYmCnBCEsOfK3m
         lQQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718187200; x=1718792000;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=aiz/H2My8QHO6/QGgq1w7yGj1brhz9OJKzqqTH7M94U=;
        b=Ty2u7bZ7C0ZFda8ZiTwAGYsCgm9EW2ycgxqibpw7Nu76nz6/10B7zYxU5SGjdSCFlE
         s6237+A6X9ZURwukwiMG34STavxeA1Brg8sH+LSxMiRP94v7+7CvorjsvPKjk5qsg60U
         shkUZlgp7281j/eN2CHb7HS4I9DX4zoasjUL02GTio6w5iXxp0P1+XvgSzrlbBr4zCsM
         7doKGz6lT//8XALe/Zp0Jyf6Rd5IfXwuixAYtoQdcjp66ke/y0E4NFmFW/HXYataQ2eQ
         7hLdKa74hTmcTbI0oVvpInUh3bBmMy1/AhGOn+Z2B3QadkNOYZuGVkDr+pO4eNO41qLG
         0YIQ==
X-Forwarded-Encrypted: i=1; AJvYcCXddCJI4YucTZRHr6k/KU7vuBsAQ5qB2sNvpIQ77LQlCAC3CS+CELGD57YZcvzvoMpNGoVwn/uMCHkYU0cYS5xfvtVf96J9teZeMlCXXos=
X-Gm-Message-State: AOJu0YwuxPGMa9LwgQEzmeNh3hXIDGFco8ZswmmuXfoUjcYFDbFnRIoN
	mow0zh6eFU1QJyIZoEnTwYtvA+kStxJLx1uTwt2osqnUyvoZARQv
X-Google-Smtp-Source: AGHT+IFVVLt3s9+yTNrXWUlNKYloRUA4AtCJdyqDt+cis782nXGLaX2jslfhoUxN0LUJU2sPCghA0Q==
X-Received: by 2002:a17:906:f2c7:b0:a68:fb7e:f476 with SMTP id a640c23a62f3a-a6f47d52b3fmr66819866b.30.1718187200154;
        Wed, 12 Jun 2024 03:13:20 -0700 (PDT)
Message-ID: <335bebc196baba16679cdfc9ba997acff1705da0.camel@gmail.com>
Subject: Re: [PATCH v2 3/7] x86/irq: limit interrupt movement done by
 fixup_irqs()
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Date: Wed, 12 Jun 2024 12:13:18 +0200
In-Reply-To: <5660db44-b169-44e3-9439-67d3b55bcac0@suse.com>
References: <20240610142043.11924-1-roger.pau@citrix.com>
	 <20240610142043.11924-4-roger.pau@citrix.com>
	 <5660db44-b169-44e3-9439-67d3b55bcac0@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40)
MIME-Version: 1.0

On Tue, 2024-06-11 at 11:59 +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > The current check used in fixup_irqs() to decide whether to move around
> > interrupts is based on the affinity mask, but such mask can have all bits
> > set, and hence is unlikely to be a subset of the input mask.  For example
> > if an interrupt has an affinity mask of all 1s, any input to fixup_irqs()
> > that's not an all set CPU mask would cause that interrupt to be shuffled
> > around unconditionally.
> >
> > What fixup_irqs() care about is evacuating interrupts from CPUs not set
> > on the input CPU mask, and for that purpose it should check whether the
> > interrupt is assigned to a CPU not present in the input mask.  Assume
> > that ->arch.cpu_mask is a subset of the ->affinity mask, and keep the
> > current logic that resets the ->affinity mask if the interrupt has to
> > be shuffled around.
> >
> > Doing the affinity movement based on ->arch.cpu_mask requires removing
> > the special handling to ->arch.cpu_mask done for high priority vectors,
> > otherwise the adjustment done to cpu_mask makes them always skip the
> > CPU interrupt movement.
> >
> > While there also adjust the comment as to the purpose of fixup_irqs().
> >
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
>
> Aiui this is independent of patch 1, so could go in while we still settle
> on how to word things there?
>
> Jan
>



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:15:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:15:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739125.1146063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHL1D-0004Ni-VP; Wed, 12 Jun 2024 10:15:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739125.1146063; Wed, 12 Jun 2024 10:15:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHL1D-0004Nb-SJ; Wed, 12 Jun 2024 10:15:51 +0000
Received: by outflank-mailman (input) for mailman id 739125;
 Wed, 12 Jun 2024 10:15:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nAc7=NO=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHL1C-0004NV-QF
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:15:50 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20600.outbound.protection.outlook.com
 [2a01:111:f403:2415::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bd0105b6-28a4-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 12:15:48 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by MW4PR12MB7481.namprd12.prod.outlook.com (2603:10b6:303:212::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.37; Wed, 12 Jun
 2024 10:15:44 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.036; Wed, 12 Jun 2024
 10:15:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd0105b6-28a4-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cXG+rr+anpClgKYFJjDsLo2OZGL0x5lbjhFhJQj79VTKRMpD0gi2bqbREhivRkrhLUxJeX9m2dfEuHcAvyJYxMJOxsgDWyosZLE/Nh7pdk8fCRdAy8ZpOU3Q3YWNADyv0P5cJDnZeUK8mLEOxHqcfORy3O5DCHDrSvjaSQn7qWpTNN0+hR3wmUh1kMmcx3sS5GivRI7jmxvauVfpECZeItXnFPKXXSzrnYh+JNTcGKFoWgjITFwNNdzCHlHrlj0cgxkRupsoD/xKs/R/+TUwmU99WTiL0zU4icxBuVLT2wtJnJrxnzQLZIZjCuzLEm+j4p6fPoAP0Ub1RC87VxTMrA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NlkX//18Qxj+WXJZ1k/mDIZx9FU/OP9f2KkKTS44ThE=;
 b=Jr+0/RPTRhZmIshhoJTfKRKMQsLPWq+NxOOrp4yrJB5IaZiWzyQpXyXOl0GqdnC9+C1MZNsLCQ8jfvscRxBTdKL0VleVVDVdV+a8qAcTTG5wgcJVMgv341M/F0pMQZ3/3F8hAx+tV/5XAgKJp2Qc7G1AaMaFNnr503v+5txDBJdqQfIyWEA1QII4HAHjUm2nADMqa1Loig1wIEDqXp7htjEXZFzwqkdbbjLI/CmFSO6jhDPgLkmDsNLCMZY86GCbquFPTm6f0cIaP5jrOnHSe61M7GL+MhWs+pUZR8EiVmXLBHOWpIrdYi8mQDQpPXPA+p2UeQfY2DxPj2NKdULHfg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NlkX//18Qxj+WXJZ1k/mDIZx9FU/OP9f2KkKTS44ThE=;
 b=xjxujFL/OmVk78xy1370uUIYoqvbeBG2MGi16KKq/Hk9TMdrIV3inK26fxR/JrdIwx9bgMCwyfzi5QudgYRsMzKWAiglVylH4nKrD+6ESqqCiFAS8XpAeJkJLjTovPNswrUhosY8Rs8v6lLsfONIHDoOFVQ8C+vFYNnfGnmpRDQ=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Topic: [XEN PATCH v9 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Index:
 AQHauLJgk79xZtykz0GMHuXNeMsUFbHBLLUAgALK4YD//+MLAIAAiCyA//9/rICAAJTZAA==
Date: Wed, 12 Jun 2024 10:15:44 +0000
Message-ID:
 <BL1PR12MB584946C6DAC365C1C040C258E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-3-Jiqian.Chen@amd.com>
 <efc35614-561c-4baa-9d94-d17ecb528a4b@suse.com>
 <BL1PR12MB5849B1B536BAD641C37A4308E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <41905257-e2e6-4bce-b723-516916448dfd@suse.com>
 <BL1PR12MB58493C065A5CA4FF2A9C03B6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <5bb436a7-d426-4413-84bf-907615e12212@suse.com>
In-Reply-To: <5bb436a7-d426-4413-84bf-907615e12212@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|MW4PR12MB7481:EE_
x-ms-office365-filtering-correlation-id: 05089407-9b0a-4370-0d72-08dc8ac89ec0
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230032|1800799016|366008|376006|7416006|38070700010;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230032)(1800799016)(366008)(376006)(7416006)(38070700010);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <EF1FAFCFA9425C4DAE231777B1DD9132@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 05089407-9b0a-4370-0d72-08dc8ac89ec0
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jun 2024 10:15:44.0832
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: voKbKBJe5p8MbvIF2XMvr5wYjW9F0t6RS+gxPY2+sjJQK7mMGDsJFIFhtz6jHzeqiLwJyza9YCaPHIcoYGuFnA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7481

On 2024/6/12 17:21, Jan Beulich wrote:
> On 12.06.2024 11:07, Chen, Jiqian wrote:
>> On 2024/6/12 16:53, Jan Beulich wrote:
>>> On 12.06.2024 04:43, Chen, Jiqian wrote:
>>>> On 2024/6/10 23:58, Jan Beulich wrote:
>>>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>>>> If run Xen with PVH dom0 and hvm domU, hvm will map a pirq for
>>>>>> a passthrough device by using gsi, see qemu code
>>>>>> xen_pt_realize->xc_physdev_map_pirq and libxl code
>>>>>> pci_add_dm_done->xc_physdev_map_pirq. Then xc_physdev_map_pirq
>>>>>> will call into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
>>>>>> is not allowed because currd is PVH dom0 and PVH has no
>>>>>> X86_EMU_USE_PIRQ flag, it will fail at has_pirq check.
>>>>>>
>>>>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
>>>>>> PHYSDEVOP_unmap_pirq for the failed path to unmap pirq. And
>>>>>> add a new check to prevent self map when subject domain has no
>>>>>> PIRQ flag.
>>>>>>
>>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>>> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>>>>>
>>>>> What's imo missing in the description is a clarification / justification of
>>>>> why it is going to be a good idea (or at least an acceptable one) to expose
>>>>> the concept of PIRQs to PVH. If I'm not mistaken that concept so far has
>>>>> been entirely a PV one.
>>>> I didn't want to expose the concept of PIRQs to PVH.
>>>> I did this patch is for HVM that use PIRQs, what I said in commit message is HVM will map a pirq for gsi, not PVH.
>>>> For the original code, it checks " !has_pirq(currd)", but currd is PVH dom0, so it failed. So I need to allow PHYSDEVOP_map_pirq
>>>> even currd has no PIRQs, but the subject domain has.
>>>
>>> But that's not what you're enforcing in do_physdev_op(). There you only
>>> prevent self-mapping. If I'm not mistaken all you need to do is drop the
>>> "d == current->domain" checks from those conditionals.
>> What I want is to allow PHYSDEVOP_map_pirq when currd doesn't have PIRQs, but subject domain has.
>> Then I just add "break" in hvm_physdev_op without any checks, that will cause self-mapping problems.
>> And in previous mail thread, you suggested me to prevent self-mapping when subject domain doesn't have PIRQs.
>> So I added checks in do_physdev_op.
> 
> Self-mapping was a primary concern of mine. Yet why deal with only a subset
> of what needs preventing, when generalizing things actually can be done by
> having less code.
Make sense. I will rebase the branch once your codes are merged.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:16:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739126.1146074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHL1R-0004gO-7j; Wed, 12 Jun 2024 10:16:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739126.1146074; Wed, 12 Jun 2024 10:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHL1R-0004gH-3P; Wed, 12 Jun 2024 10:16:05 +0000
Received: by outflank-mailman (input) for mailman id 739126;
 Wed, 12 Jun 2024 10:16:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nAc7=NO=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHL1P-0004fa-MI
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:16:03 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20630.outbound.protection.outlook.com
 [2a01:111:f403:200a::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c4ef25b7-28a4-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 12:16:02 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by MW4PR12MB7481.namprd12.prod.outlook.com (2603:10b6:303:212::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.37; Wed, 12 Jun
 2024 10:15:58 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.036; Wed, 12 Jun 2024
 10:15:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4ef25b7-28a4-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ga0G6YRXj16i4+m+3h6WdqQNWKvqm/QFxVLS+TI0cZb2OLQ7obBlVQaW7OWgx2xOSYd0O5n/7z3ff5UvgYLyxq3m7Tj327cY1ZjHVzOTiO2AG/BFeMAdStzXY3LjzYUG0aUxoMOSFMiJqMbLKkuwkCHwBHBYCilCryCadxMmWB9Ca+NjxemeeqSATL3sMyDOeUJGP9NJSD8k3HPFhj4s08/gcl0LCTy8qBR2yE/p75BYmmF2N2zaqWDLXBXewJSC/IhTzleijCiqfO3aucj/QyYqo5FdgMcu7wZnukls2wzmlsQ1i2Gt/ERCBp3NfwBTNbDDYM5rORJVWo49ZwcC2g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vc/18CKA6D5qLuF90oPGG3kp6+92Km7slGuHcaPIoRA=;
 b=hfkgQGXNdwS8H00RWCC+57OC0uznQSJ2sGU1xXrwJ5VKIcOjGHtxvVAPkEAPCvnytXI6IV+h+3mTrnIGgxznfRVsH5DB8J6pkx1RCuLrvbbt18pDXyDOQmHEm5Mef0q8Xf3/Fveka5pm2XQXvmT54FH9F/TOkvZw5mDPjL3+MqZABf6TxzghrfLBPQmXy8D0Y4KybTYKaCTORnR2KdicI97p8s2YthUhKYsbTTEE7FSirUSIwXlKWsYCl0/SufIHwGKL2ayztkUC9xBE2elK6aI0m0HolEllWhL5zka7N9fl+AXmeRNMcCr6ChitUb5ud+G6UbDbrS8qPte28Ld20A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vc/18CKA6D5qLuF90oPGG3kp6+92Km7slGuHcaPIoRA=;
 b=Lx2Y6jcTUw0zTP8Ggsl0xiWn5IWGAG7/ntHI5OF74upXXQd121iTN4LwdMF5Rbw6xU+YyCZN7HELUleUzJUtfuWt7AatLPeVxHMuG9JUuK2en63vy0kpRsp1fTrrRbWceBFAHC8I9XTUiSuwlyuIF2FbB4gOwL+C7RxSaMp69WU=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v9 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Thread-Topic: [XEN PATCH v9 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Thread-Index: AQHauLJiOTCaFyDADkao1O0TqexhE7HBLmwAgALLiIA=
Date: Wed, 12 Jun 2024 10:15:58 +0000
Message-ID:
 <BL1PR12MB58499C90372E8D5D1FA7F26AE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-4-Jiqian.Chen@amd.com>
 <38b5bf96-22fe-444c-824b-b4c2d6e107d0@suse.com>
In-Reply-To: <38b5bf96-22fe-444c-824b-b4c2d6e107d0@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|MW4PR12MB7481:EE_
x-ms-office365-filtering-correlation-id: 723c8a6b-9336-4768-f036-08dc8ac8a76e
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230032|1800799016|366008|376006|7416006|38070700010;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230032)(1800799016)(366008)(376006)(7416006)(38070700010);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <F0186C142DCB464EB7CFAD74E392C102@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 723c8a6b-9336-4768-f036-08dc8ac8a76e
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jun 2024 10:15:58.6639
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: RLsFz6PNCUd0p5PESbA1dgpDOA55NIOBHI37Mvsz0/wHv+5qcrW3xRj0YkAKJ2+rCGrAihnf7ApjaPJ/rHJNZA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7481

On 2024/6/11 00:04, Jan Beulich wrote:
> On 07.06.2024 10:11, Jiqian Chen wrote:
>> On PVH dom0, the gsis don't get registered, but
>> the gsi of a passthrough device must be configured for it to
>> be able to be mapped into a hvm domU.
>> On Linux kernel side, it calles PHYSDEVOP_setup_gsi for
>> passthrough devices to register gsi when dom0 is PVH.
> 
> "it calls" implies that ...
> 
>> So, add PHYSDEVOP_setup_gsi for above purpose.
>>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>> ---
>> The code link that will call this hypercall on linux kernel side is as follows
>> https://lore.kernel.org/lkml/20240607075109.126277-3-Jiqian.Chen@amd.com/T/#u
> 
> ... the code only to be added there would already be upstream. As I think the
> hypervisor change wants to come first, this part of the description will want
> re-wording to along the lines of "will need to" or some such.
Thanks, I will change in next version.

> 
> As to GSIs not being registered: If that's not a problem for Dom0's own
> operation, I think it'll also want/need explaining why what is sufficient for
> Dom0 alone isn't sufficient when pass-through comes into play.
OK, I will add in next version.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:26:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739139.1146083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLAw-0007Pm-2a; Wed, 12 Jun 2024 10:25:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739139.1146083; Wed, 12 Jun 2024 10:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLAv-0007Pf-Vu; Wed, 12 Jun 2024 10:25:53 +0000
Received: by outflank-mailman (input) for mailman id 739139;
 Wed, 12 Jun 2024 10:25:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oKT8=NO=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sHLAu-0007PZ-Ei
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:25:52 +0000
Received: from mail-qt1-x82d.google.com (mail-qt1-x82d.google.com
 [2607:f8b0:4864:20::82d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2332f224-28a6-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 12:25:49 +0200 (CEST)
Received: by mail-qt1-x82d.google.com with SMTP id
 d75a77b69052e-440655e64bdso19671241cf.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:25:49 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-4404f5bc3edsm44095061cf.54.2024.06.12.03.25.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 03:25:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2332f224-28a6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718187949; x=1718792749; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=qtihfR1s1QsEGhiNOU25jC8KetGBf6NK2giC6XWhyZk=;
        b=vQVQZzV8x1VDehUQp/ixeokeSJqvZITFx1PAHTXGFSX+jrsiC188NSVHWz8LeTn12T
         jDOTG3m2JQeXJoS9WDxe9ax1/Egtu5qDIz4Ephb7t3de/5iosqlfD51CtGTFDS1W8Okw
         CmHLwTfH2mVtqTi8QuD9j6t39LdryT7nmT/uM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718187949; x=1718792749;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=qtihfR1s1QsEGhiNOU25jC8KetGBf6NK2giC6XWhyZk=;
        b=gaLqWGiCnih3pla7oiZXoFwb4tMVm9L6KfFSFY1n5HTuqyAwIYHsI1P5Mc6pROl7yg
         1LN/dYRNyTvi03l++JrWXR3WIiQms3vy925/bRti2tit43OWbuTOPk2kf1ve0wB+w2pA
         P38b3UGQ50zUwiy0/lo9MXOBN8O2pzQVZ1I6mOL7wPzNWWuzJzBnJgh+QLhbwIsOZkbL
         3EAUfUYEUEHiSFBAEHY64hZpgRlruOw/E2QPVbm8LE7FckudnlRh8zsimf8RpFhZeD1a
         JP2T3anTpU+9jiOMVCbDhoU91FihFiyOTC6QaCz/X5poZqf/GaEv4ePHhAWbjjgp1y9a
         YlrA==
X-Forwarded-Encrypted: i=1; AJvYcCXzi3e257jcXr6kEMEj0z9AwAQpgCw1Zu9d2YCD5Xjy2CSk1/I2Vqm+ul5uQAPetlLjBxG/ASx9S9km5HTqtqvhweHz7TYtMdpQDixO/yE=
X-Gm-Message-State: AOJu0YwjC2cHUk14c6ab+jPqhJkP9YO9jrLoqw+8/VntnWmfEwvg7Skg
	3CS+JWv7ZdzJC84bAv0xraWYgS2hwE6nVt9GYHKRJtmCpIChAvhpY3sSrptkjKY=
X-Google-Smtp-Source: AGHT+IFT4OIASyZBSKf/NhczPNuJX6UGmXkr+edVkgfwefBbDOVMFKVS2HOgGVKJvOobyRyYSq/pqA==
X-Received: by 2002:ac8:5f49:0:b0:441:2998:3e7b with SMTP id d75a77b69052e-4415abeac4bmr17517471cf.30.1718187948555;
        Wed, 12 Jun 2024 03:25:48 -0700 (PDT)
Message-ID: <1eaed8c1-981f-4c16-a2f6-e783fa43963f@citrix.com>
Date: Wed, 12 Jun 2024 11:25:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: BlueIris error on Xen 4.17
To: Damien Thenot <damien.thenot@vates.tech>, xen-devel@lists.xenproject.org
References: <ced16fca-3b55-40a1-a7e2-ffadd9707394@vates.tech>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <ced16fca-3b55-40a1-a7e2-ffadd9707394@vates.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12/06/2024 10:40 am, Damien Thenot wrote:
> Hello,
>
> An XCP-ng 8.3 user who uses the Blue Iris software encountered a crash
> after Xen was upgraded to version 4.17.
> It worked correctly when XCP-ng 8.3 used Xen 4.13.
> It is happening on Intel Xeon E-2378 CPU @ 2.60GHz machines and, it
> seems, on more recent Intel CPUs as well.
> His guests are Windows VMs with an NVIDIA GPU passed through.
>
> The user added:
>  > On an older box with an i9-9900K CPU it does not happen and the VM
>  > works as expected. Also working on an older Xeon E-2146G and an
>  > E-2276G. On anything newer than that, however, the VM will just BSOD.
>
> You can find more information in the XCP-ng forum post: 
> https://xcp-ng.org/forum/topic/8873/windows-blue-iris-xcp-ng-8-3
>
> The user tried enabling `msr-relaxed`, following notes in the Xen 4.15 
> documentation, but it didn't change the behavior and the guest still
> crashes.
>
> Has someone else observed such behavior?
>
> Here is a Xen dmesg output with the error that the user was able to obtain:
>
> ```
>
> (d1) [  132.028963] xen|BUGCHECK: ====>
> (d1) [  132.029008] xen|BUGCHECK: SYSTEM_SERVICE_EXCEPTION: 
> 00000000C0000096 FFFFF80418A21E27 FFFFAC009F27B900 0000000000000000
> (d1) [  132.029057] xen|BUGCHECK: EXCEPTION (FFFFF80418A21E27):
> (d1) [  132.029096] xen|BUGCHECK: - Code = 8589320F
> (d1) [  132.029134] xen|BUGCHECK: - Flags = 000000A8
> (d1) [  132.029174] xen|BUGCHECK: - Address = 8589320F008EECE9
> (d1) [  132.029214] xen|BUGCHECK: - Parameter[0] = 0F000001D9B90000
> (d1) [  132.029255] xen|BUGCHECK: - Parameter[1] = 4166300FFCE08332
> (d1) [  132.029297] xen|BUGCHECK: - Parameter[2] = 0355000002C881F7
> (d1) [  132.029338] xen|BUGCHECK: - Parameter[3] = 0002A0818B497B74
> (d1) [  132.029379] xen|BUGCHECK: - Parameter[4] = 000002A8918B4900
> (d1) [  132.029421] xen|BUGCHECK: - Parameter[5] = 8B49CA230FC0230F
> (d1) [  132.029462] xen|BUGCHECK: - Parameter[6] = 918B49000002B081
> (d1) [  132.029502] xen|BUGCHECK: - Parameter[7] = 0FD0230F000002B8
> (d1) [  132.029544] xen|BUGCHECK: - Parameter[8] = 0002C8918B49DA23
> (d1) [  132.029585] xen|BUGCHECK: - Parameter[9] = 230FF0230FC03300
> (d1) [  132.029626] xen|BUGCHECK: - Parameter[10] = 008AE22504F665FA
> (d1) [  132.029667] xen|BUGCHECK: - Parameter[11] = 00C2F76639740200
> (d1) [  132.029709] xen|BUGCHECK: - Parameter[12] = 81342405F7327403
> (d1) [  132.029753] xen|BUGCHECK: - Parameter[13] = 6626750000000200
> (d1) [  132.029794] xen|BUGCHECK: - Parameter[14] = C88303740200C2F7
> (d1) [  132.029835] xen|BUGCHECK: EXCEPTION (0D8B000000AC9589):
> (d1) [  132.029888] xen|BUGCHECK: CONTEXT (FFFFAC009F27B900):
> (d1) [  132.029925] xen|BUGCHECK: - GS = 002B
> (d1) [  132.029959] xen|BUGCHECK: - FS = 0053
> (d1) [  132.029994] xen|BUGCHECK: - ES = 002B
> (d1) [  132.030028] xen|BUGCHECK: - DS = 002B
> (d1) [  132.030056] xen|BUGCHECK: - SS = 0018
> (d1) [  132.030089] xen|BUGCHECK: - CS = 0010
> (d1) [  132.030123] xen|BUGCHECK: - EFLAGS = 00040046
> (d1) [  132.030160] xen|BUGCHECK: - RDI = 00000000000002C4
> (d1) [  132.030199] xen|BUGCHECK: - RSI = 00000000005FFA48
> (d1) [  132.030237] xen|BUGCHECK: - RBX = 00000000426ED080
> (d1) [  132.030275] xen|BUGCHECK: - RDX = 0000000000000000
> (d1) [  132.030312] xen|BUGCHECK: - RCX = 00000000000001DD
> (d1) [  132.030349] xen|BUGCHECK: - RAX = 0000000000000000
> (d1) [  132.030386] xen|BUGCHECK: - RBP = 00000000E4427520
> (d1) [  132.030424] xen|BUGCHECK: - RIP = 0000000018A21E27
> (d1) [  132.030463] xen|BUGCHECK: - RSP = 00000000E4427498
> (d1) [  132.030504] xen|BUGCHECK: - R8 = 0000000000000000
> (d1) [  132.030543] xen|BUGCHECK: - R9 = 000000009F25A000
> (d1) [  132.030580] xen|BUGCHECK: - R10 = 00000000000002C4
> (d1) [  132.030618] xen|BUGCHECK: - R11 = 0000000000000246
> (d1) [  132.030657] xen|BUGCHECK: - R12 = 0000000002CEB528
> (d1) [  132.030696] xen|BUGCHECK: - R13 = 00000000A81E3A80
> (d1) [  132.030735] xen|BUGCHECK: - R14 = 00000000000002C4
> (d1) [  132.030775] xen|BUGCHECK: - R15 = 0000000000000000
> (d1) [  132.030812] xen|BUGCHECK: STACK:
> (d1) [  132.030858] xen|BUGCHECK: 00000000E44274A0: (00000000426ED080 
> 00000000005FF898 0000000000000000 0000000000000000) ntoskrnl.exe + 
> 000000000043667A
> (d1) [  132.030935] xen|BUGCHECK: 00000000005FFA18: (00000000A74AD76E 
> 0000000000000000 00000000A81E3A80 000000000000000E) 00007FFBA9BFF4D4
> (d1) [  132.031010] xen|BUGCHECK: 00000000005FFA20: (0000000000000000 
> 00000000A81E3A80 000000000000000E 0000000000000003) 00007FFBA74AD76E
> (d1) [  132.031086] xen|BUGCHECK: 00000000005FFA28: (00000000A81E3A80 
> 000000000000000E 0000000000000003 00000000005FFB10) 0000000000000000
> (d1) [  132.031151] xen|BUGCHECK: <====
> (XEN) [  132.040828] memory_map:remove: dom1 gfn=f60c0 mfn=a3080 nr=4
> (XEN) [  132.040975] memory_map:remove: dom1 gfn=f5000 mfn=a2000 nr=1000
> (XEN) [  132.041124] memory_map:remove: dom1 gfn=e0000 mfn=4000000 nr=10000
> (XEN) [  132.041463] memory_map:remove: dom1 gfn=e8000 mfn=4008000 nr=8000
> (XEN) [  132.041905] memory_map:remove: dom1 gfn=f8000 mfn=4010000 nr=2000
> (XEN) [  132.042072] ioport_map:remove: dom1 gport=c100 mport=6000 nr=80
> ```

There was a #GP fault accessing MSR 0x1DD.

You'll need to investigate how that MSR is supposed to behave on real
hardware, and see if Xen's behaviour matches or not.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:34:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:34:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739148.1146093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLJC-0001OO-1D; Wed, 12 Jun 2024 10:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739148.1146093; Wed, 12 Jun 2024 10:34:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLJB-0001OH-UU; Wed, 12 Jun 2024 10:34:25 +0000
Received: by outflank-mailman (input) for mailman id 739148;
 Wed, 12 Jun 2024 10:34:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHLJA-0001OB-QA
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:34:24 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 55748f1b-28a7-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 12:34:23 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57c76497cefso4801131a12.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:34:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57ca82877b7sm1133692a12.48.2024.06.12.03.34.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 03:34:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55748f1b-28a7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718188462; x=1718793262; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=uKjO2lbOacLKbkepJHdnURZOX2A8rIBxV0nZa0iKZ4k=;
        b=K6SKP5jFUZeBvv5+ZHMtlpm05ODj7JJ++OS5CQjpXYQGzfkfy63Ah/+RyvhnDsgJ/h
         Syq+8rHmUEDHRVABmHhSMEgsu4W/qtb60POtKX9pGfwLaaU2pKy+ZPI/XD7OecxO64On
         zZss+VFFX+MKKH+SrZ85A/Lbi5QKKrz9pKVfD3VJnsyK9ntAIWTPSAISO4gOMM9eASVV
         O/5+u4fE6NfS/aPVKi11mxS1vnOFYdWaowxD3qtzGNc+lsmW2+Q2ERD86pk2Aj9w/c6g
         52o0QF88VFM+C4c5+FC7C4bQ9cbe0yIuh4ye1toJ2R/5v9EwEPYJzD0YEhDFamYr4Gui
         jrcQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718188462; x=1718793262;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=uKjO2lbOacLKbkepJHdnURZOX2A8rIBxV0nZa0iKZ4k=;
        b=rfQcuWtMgtor/9nQ81J2gXapBc+AZa7PNlfeN7SIdnHBRj+/LTDTueSJxAQMDNulV9
         cJXavm+yll5ZjaHTPJTUSWjNFhuiiz59VKtARhl5WOyU/dUsGN6TtPrFW0yev6mjh5V3
         RK40oapYPfYd0Wit7wChplMDhbCl3cm/XZww0k4dNys2QEJT9xkYtJAjExoPopG4N/oC
         +9C/aIF96A3cv7XLprATOADM+oP7qHsI8suc3gkPSLA36THkC/yTMbOCvyF1tXsYBrzX
         PgUo9HfXlh4KO2mre4pt9LEZhwmDBOXocH0bwS71i/poJj8/ATnnbu5XPobbeskIBivy
         HWzQ==
X-Forwarded-Encrypted: i=1; AJvYcCXs6sLZ96q40obzZ3yoLhMm7X7ehsmj8t8EnoLUqXc/jW9cPSO2qWrQ5437qGiNGhtRQW1JU96TtD2a0XZ2+WxgpbLxWeqTlEErwqzUbaI=
X-Gm-Message-State: AOJu0YwmSxFmmwgN8tHmHIpg9gc0SGN01YGmGsmA9SMdBQyRqZJK/A7U
	felc1QqBvPrMnJXQBmTH+fS1z/nMeUNW+BlScKIBLkiZ7tcmpZm/PYmiG01cRQ==
X-Google-Smtp-Source: AGHT+IEclVmKgX0GCWA7DQsqIaF7OXVSF4LDq3ufQ+QDn+5v4Hf5yp2bDguG6LnATnT+kLvyHyKgyQ==
X-Received: by 2002:a50:9e24:0:b0:578:57b7:9f32 with SMTP id 4fb4d7f45d1cf-57caaabf033mr1058344a12.35.1718188462574;
        Wed, 12 Jun 2024 03:34:22 -0700 (PDT)
Message-ID: <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com>
Date: Wed, 12 Jun 2024 12:34:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
 <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12.06.2024 12:12, Chen, Jiqian wrote:
> On 2024/6/11 22:39, Jan Beulich wrote:
>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>> Some types of domain, like PVH, don't have pIRQs and do not do
>>> PHYSDEVOP_map_pirq for each gsi. When passing a device through
>>> to a guest on PVH dom0, the callstack
>>> pci_add_dm_done->XEN_DOMCTL_irq_permission will fail at
>>> domain_pirq_to_irq, because PVH has no mapping of gsi, pirq
>>> and irq on the Xen side.
>>
>> All of this is, to me at least, in pretty sharp contradiction to what
>> patch 2 says and does. IOW: Do we want the concept of pIRQ in PVH, or
>> do we want to keep that to PV?
> It's not contradictory.
> What I did does not add the concept of pIRQs for PVH.

After your further explanations on patch 2 - yes, I see now. But there in
particular it needs to be made clearer what case it is that is being enabled
by the changes.

>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>
>> A problem throughout the series as it seems: Who's the author of these
>> patches? There's no From: saying it's not you, but your S-o-b also
>> isn't first.
> So I need to change it to:
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I am the author.
> Signed-off-by: Huang Rui <ray.huang@amd.com> means Rui sent them upstream first.
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I continue the upstreaming.

I guess so, yes.

>>> --- a/tools/libs/light/libxl_pci.c
>>> +++ b/tools/libs/light/libxl_pci.c
>>> @@ -1412,6 +1412,37 @@ static bool pci_supp_legacy_irq(void)
>>>  #define PCI_SBDF(seg, bus, devfn) \
>>>              ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>>  
>>> +static int pci_device_set_gsi(libxl_ctx *ctx,
>>> +                              libxl_domid domid,
>>> +                              libxl_device_pci *pci,
>>> +                              bool map,
>>> +                              int *gsi_back)
>>> +{
>>> +    int r, gsi, pirq;
>>> +    uint32_t sbdf;
>>> +
>>> +    sbdf = PCI_SBDF(pci->domain, pci->bus, (PCI_DEVFN(pci->dev, pci->func)));
>>> +    r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>> +    *gsi_back = r;
>>> +    if (r < 0)
>>> +        return r;
>>> +
>>> +    gsi = r;
>>> +    pirq = r;
>>
>> r is a GSI as per above; why would you store such in a variable named pirq?
>> And how can ...
>>
>>> +    if (map)
>>> +        r = xc_physdev_map_pirq(ctx->xch, domid, gsi, &pirq);
>>> +    else
>>> +        r = xc_physdev_unmap_pirq(ctx->xch, domid, pirq);
>>
>> ... that value be the correct one to pass into here? In fact, the pIRQ number
>> you obtain above in the "map" case isn't handed to the caller, i.e. it is
>> effectively lost. Yet that's what would need passing into such an unmap call.
> Yes, r is the GSI, and I know pirq will be overwritten by xc_physdev_map_pirq.
> The reason I do "pirq = r" is for xc_physdev_unmap_pirq: unmap needs the pirq
> passed in, and the pirq number is always equal to the gsi.

Why would that be? pIRQ is purely a software construct (of Xen's), I
don't think there's any guarantee whatsoever on the numbering. And even
if there were (for e.g. non-MSI ones), it would be pIRQ == IRQ. And recall
that elsewhere I think I have meanwhile succeeded in explaining to you that
IRQ != GSI (in the general case, even if in most cases they match).

>>> +    if (r)
>>> +        return r;
>>> +
>>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
>>
>> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
>> you better would avoid making the call in the first place.
> Yes, for PV dom0 the errno is EOPNOTSUPP, and then it falls through to xc_domain_irq_permission below.

Hence why call xc_domain_gsi_permission() at all on a PV Dom0?

>>> +    if (r && errno == EOPNOTSUPP)
>>
>> Before here you don't really need the pIRQ number; if all it really is needed
>> for is ...
>>
>>> +        r = xc_domain_irq_permission(ctx->xch, domid, pirq, map);
>>
>> ... this, then it probably also should only be obtained when it's needed. Yet
>> overall the intentions here aren't quite clear to me.
> The function pci_device_set_gsi is added for PVH dom0, while also ensuring compatibility with PV dom0.
> On PVH dom0, it does xc_physdev_map_pirq and xc_domain_gsi_permission (the new hypercall for PVH dom0).
> On PV dom0, it keeps the same actions as the previous code: it does xc_physdev_map_pirq and xc_domain_irq_permission.

And why does PVH Dom0 need to call xc_physdev_map_pirq(), when in that case
the pIRQ isn't used?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:36:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:36:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739154.1146103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLLS-00020A-DH; Wed, 12 Jun 2024 10:36:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739154.1146103; Wed, 12 Jun 2024 10:36:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLLS-000203-AQ; Wed, 12 Jun 2024 10:36:46 +0000
Received: by outflank-mailman (input) for mailman id 739154;
 Wed, 12 Jun 2024 10:36:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHLLR-0001zS-3w
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:36:45 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9bf1503-28a7-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 12:36:44 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6f1da33826so283047566b.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:36:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f16427842sm455078866b.100.2024.06.12.03.36.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 03:36:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9bf1503-28a7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718188604; x=1718793404; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ghMlT45U+M0R4YXPKCzBM3f2z/I7uLceGhZXqbMMbOQ=;
        b=IQXe57s8McFFJ+iFONEGPBPsBWL19RXdwihSp8sPlLlswiAKE7j1Wl2RuUhZeE9Hfs
         +cFcy1lgbJ0RzqoeGsHSC/dCi5yXfTBFF/gQHDibdvYprqs/7xKP9xR6Yt0C/h2sl4ft
         QGZuakiPELMNXhMfTzLJtCxWl9A7BDQ/zwVD4YS669MOZ5YHBNET6hXJsDfUuuBo0LsR
         sLOFiygYQviK/RKMU24BoyBhN2FQDVCB/S5NnUZZCfD15M/cD5CcDGBVnbeY2JJSXX6J
         L96THnFmnZgfbnYtBhWq6WQQr2UTeA4pRFPUFJZ5vTiFvfORPjmB3bIJL+UCqfEEc49U
         5L3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718188604; x=1718793404;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ghMlT45U+M0R4YXPKCzBM3f2z/I7uLceGhZXqbMMbOQ=;
        b=MEJY4K7p7YZ3mieH/RESQdf4HQ/q3xQl67jXPzsR1I5aUiKa24X32CRKgu3h/PF+7k
         k7nbKwibNK9Ta+HP+DYRu+MvX44CEB35lKmE+h57pC73+bDoNsoH/kAkCLlvm1B82r3U
         3AMlAS7GR1taPtLJic/Z8Hc8+IHdH6m/Anmkq94DJpPVSDehONj/SBmiOQxJOD5g6gm0
         lA86ju/VliN97/OR2hTlFdQRjv6M3xscLAk5MxX9rbsPlmKp2VLimo3CGZo27R54EkIX
         62wSluWq8arntYsxBkiDp9MIZ3GnzrrrIafnx1ss+mVlirUOH4Y0wFYUrjuVERXU9Sbn
         xVEA==
X-Forwarded-Encrypted: i=1; AJvYcCVMhJXYxuZvaVYuxY75bAjq1tvckpHyEoLfT6afqTJI+w1X605/0FgawMhpnbupmOIeoSh76CBTtwcaTzlZ4bICnuZlOpXLwwFKKQehP3A=
X-Gm-Message-State: AOJu0YwvxpcJUIPwok4JJHB0bPxwQJ8haJ99HWFtl6riKiTf/yWbvxEH
	Ixrb5Mpqcsi1hl4tX1qEkh1EkrNPdwr4BKNqjXIWMo8sdvKBrIwWfgA5FYHZ6A==
X-Google-Smtp-Source: AGHT+IGuFnNylE/bptuT+c/FeVzA7aLpmi44cGFEyR02p96hc6dXVYElIiYyAccoFaCW+O/GvbJmaw==
X-Received: by 2002:a17:906:1d55:b0:a6f:2253:d1f7 with SMTP id a640c23a62f3a-a6f480086b4mr110643966b.61.1718188603972;
        Wed, 12 Jun 2024 03:36:43 -0700 (PDT)
Message-ID: <c40bcf67-ebdd-4bcf-b6bc-ecec6a1fd7eb@suse.com>
Date: Wed, 12 Jun 2024 12:36:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 4/6] x86emul: address violations of MISRA C Rule 20.7
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
 <0a502d2a9c5ce13be13281d9de49d263313b7852.1718117557.git.nicola.vetrini@bugseng.com>
 <12ce10af-cd36-492e-a73b-2b81b5bf60cc@suse.com>
 <ac1faf5feded028ce80752ce69983352@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ac1faf5feded028ce80752ce69983352@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12.06.2024 11:52, Nicola Vetrini wrote:
> On 2024-06-12 11:19, Jan Beulich wrote:
>> On 11.06.2024 17:53, Nicola Vetrini wrote:
>>> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
>>> of macro parameters shall be enclosed in parentheses". Therefore, some
>>> macro definitions should gain additional parentheses to ensure that all
>>> current and future users will be safe with respect to expansions that
>>> can possibly alter the semantics of the passed-in macro parameter.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>>> ---
>>> These local helpers could in principle be deviated, but the readability
>>> and functionality are essentially unchanged by complying with the rule,
>>> so I decided to modify the macro definition as the preferred option.
>>
>> Well, yes, but ...
>>
>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>> @@ -2255,7 +2255,7 @@ x86_emulate(
>>>          switch ( modrm_reg & 7 )
>>>          {
>>>  #define GRP2(name, ext) \
>>> -        case ext: \
>>> +        case (ext): \
>>>              if ( ops->rmw && dst.type == OP_MEM ) \
>>>                  state->rmw = rmw_##name; \
>>>              else \
>>> @@ -8611,7 +8611,7 @@ int x86_emul_rmw(
>>>              unsigned long dummy;
>>>
>>>  #define XADD(sz, cst, mod) \
>>> -        case sz: \
>>> +        case (sz): \
>>>              asm ( "" \
>>>                    COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
>>>                    _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \
>>
>> ... this is really nitpicky of the rule / tool. What halfway realistic
>> ways do you see to actually misuse these macros? What follows the "case"
>> keyword is just an expression; no precedence related issues are possible.
>> 
> 
> I do share the view: no real danger is possible in sensible uses. Often
> MISRA rules are stricter than necessary to have a simple formulation, by
> avoiding too many special cases.
> 
> However, if a deviation is formulated, then it needs to be maintained, 
> for no real readability benefit in this case, in my opinion. I can be 
> convinced otherwise, of course.

Well, aiui you're thinking of a per-macro deviation here. Whereas I'd be
thinking of deviating the pattern.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:40:05 2024
Date: Wed, 12 Jun 2024 12:39:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
Message-ID: <Zml6-ViFPTWI1cUc@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com>

On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > Currently there's logic in fixup_irqs() that attempts to prevent
> > _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> > interrupts from the CPUs not present in the input mask.  The current logic in
> > fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> > move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> > 
> > Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> > _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> > to deal with interrupts that have either move_{in_progress,cleanup_count} set
> > and no remaining online CPUs in ->arch.cpu_mask.
> > 
> > If _assign_irq_vector() is requested to move an interrupt in the state
> > described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> > CPUs that could be used as fallback, and if that's the case do move the
> > interrupt back to the previous destination.  Note this is easier because the
> > vector hasn't been released yet, so there's no need to allocate and setup a new
> > vector on the destination.
> > 
> > Due to the logic in fixup_irqs() that clears offline CPUs from
> > ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> > shouldn't be possible to get into _assign_irq_vector() with
> > ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> > ->arch.old_cpu_mask.
> > 
> > However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> > also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> > longer part of the affinity set,
> 
> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> 
> > move the interrupt to a different CPU part of
> > the provided mask
> 
> ... this (->arch.cpu_mask related).

No, the "provided mask" here is the "mask" parameter, not
->arch.cpu_mask.

> 
> > and keep the current ->arch.old_{cpu_mask,vector} for the
> > pending interrupt movement to be completed.
> 
> Right, that's to clean up state from before the initial move. What isn't
> clear to me is what's to happen with the state of the intermediate
> placement. Description and code changes leave me with the impression that
> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> why that would be an okay thing to do.

There isn't much we can do with the intermediate placement, as the CPU
is going offline.  However we can drain any pending interrupts from
IRR after the new destination has been set, since setting the
destination is done from the CPU that's the current target of the
interrupts.  So we can ensure the draining is done strictly after the
target has been switched, hence ensuring no further interrupts from
this source will be delivered to the current CPU.

> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >      }
> >  
> >      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> > -        return -EAGAIN;
> > +    {
> > +        /*
> > +         * If the current destination is online refuse to shuffle.  Retry after
> > +         * the in-progress movement has finished.
> > +         */
> > +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> > +            return -EAGAIN;
> > +
> > +        /*
> > +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> > +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> > +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> > +         * ->arch.old_cpu_mask.
> > +         */
> > +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> > +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> > +
> > +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> > +        {
> > +            /*
> > +             * Fallback to the old destination if moving is in progress and the
> > +             * current destination is to be offlined.  This is only possible if
> > +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> > +             * in the 'mask' parameter.
> > +             */
> > +            desc->arch.vector = desc->arch.old_vector;
> > +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> > +
> > +            /* Undo any possibly done cleanup. */
> > +            for_each_cpu(cpu, desc->arch.cpu_mask)
> > +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
> > +
> > +            /* Cancel the pending move. */
> > +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> > +            cpumask_clear(desc->arch.old_cpu_mask);
> > +            desc->arch.move_in_progress = 0;
> > +            desc->arch.move_cleanup_count = 0;
> > +
> > +            return 0;
> > +        }
> 
> In how far is this guaranteed to respect the (new) affinity that was set,
> presumably having led to the movement in the first place?

The 'mask' parameter should account for the new affinity, hence the
cpumask_intersects() check guarantees we are moving to a CPU still in
the affinity mask.

> > @@ -600,7 +646,17 @@ next:
> >          current_vector = vector;
> >          current_offset = offset;
> >  
> > -        if ( valid_irq_vector(old_vector) )
> > +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> > +        {
> > +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
> > +            /*
> > +             * Special case when evacuating an interrupt from a CPU to be
> > +             * offlined and the interrupt was already in the process of being
> > +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
> > +             * replace ->arch.{cpu_mask,vector} with the new destination.
> > +             */
> 
> And where's the cleaning up of ->arch.old_* going to be taken care of then?

Such cleaning will be handled normally by the interrupt still having
->arch.move_{in_progress,cleanup_count} set.  The CPUs in
->arch.old_cpu_mask must not all be offline, otherwise the logic in
fixup_irqs() would have already released the old vector.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:41:44 2024
Date: Wed, 12 Jun 2024 12:41:36 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 5/7] x86/irq: deal with old_cpu_mask for interrupts in
 movement in fixup_irqs()
Message-ID: <Zml7YIzY86E248mt@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-6-roger.pau@citrix.com>
 <66fc06cc-f1f6-4f12-83d4-a3b9788bffba@suse.com>
 <Zmlgp9C2ryFtT65B@macbook>
 <ddef2b3d-9766-4697-ac15-4105a0592090@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ddef2b3d-9766-4697-ac15-4105a0592090@suse.com>

On Wed, Jun 12, 2024 at 11:04:26AM +0200, Jan Beulich wrote:
> On 12.06.2024 10:47, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 02:45:09PM +0200, Jan Beulich wrote:
> >> On 10.06.2024 16:20, Roger Pau Monne wrote:
> >>> Given the current logic it's possible for ->arch.old_cpu_mask to get out of
> >>> sync: if a CPU set in old_cpu_mask is offlined and then onlined
> >>> again without old_cpu_mask having been updated the data in the mask will no
> >>> longer be accurate, as when brought back online the CPU will no longer have
> >>> old_vector configured to handle the old interrupt source.
> >>>
> >>> If there's an interrupt movement in progress, and the to be offlined CPU (which
> >>> is the call context) is in the old_cpu_mask clear it and update the mask, so it
> >>> doesn't contain stale data.
> >>
> >> This imo is too __cpu_disable()-centric. In the code you cover the
> >> smp_send_stop() case afaict, where it's all _other_ CPUs which are being
> >> offlined. As we're not meaning to bring CPUs online again in that case,
> >> dealing with the situation likely isn't needed. Yet the description should
> >> imo at least make clear that the case was considered.
> > 
> > What about adding the following paragraph:
> 
> Sounds good, just maybe one small adjustment:
> 
> > Note that when the system is going down fixup_irqs() will be called by
> > smp_send_stop() from CPU 0 with a mask with only CPU 0 on it,
> > effectively asking to move all interrupts to the current caller (CPU
> > 0) which is the only CPU online.  In that case we don't care to
> 
> "... the only CPU to remain online."

Right, that's better.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:47:15 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186315-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186315: tolerable FAIL - PUSHED
X-Osstest-Versions-This:
    xen=5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
X-Osstest-Versions-That:
    xen=43de96a70f00b631d0f4c658c232204079b2f2b2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 10:47:05 +0000

flight 186315 xen-unstable real [real]
flight 186321 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186315/
http://logs.test-lab.xenproject.org/osstest/logs/186321/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1   8 xen-boot            fail pass in 186321-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186321 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186321 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186311
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186311
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186311
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186311
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186311
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186311
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
baseline version:
 xen                  43de96a70f00b631d0f4c658c232204079b2f2b2

Last test of basis   186311  2024-06-11 16:08:45 Z    0 days
Testing same since   186315  2024-06-12 00:41:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   43de96a70f..5ea7f2c9d7  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:49:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:49:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739185.1146148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLYC-0007IS-9o; Wed, 12 Jun 2024 10:49:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739185.1146148; Wed, 12 Jun 2024 10:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLYC-0007IL-6E; Wed, 12 Jun 2024 10:49:56 +0000
Received: by outflank-mailman (input) for mailman id 739185;
 Wed, 12 Jun 2024 10:49:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHLYB-0007IF-5o
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:49:55 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7f938287-28a9-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 12:49:52 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a6265d48ec3so238250766b.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:49:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1da43b48sm394577166b.195.2024.06.12.03.49.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 03:49:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f938287-28a9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718189392; x=1718794192; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=kCHi9i7kiAc1JKyqc9KQQd8JvO+yBsQDZQQQuXhfu7E=;
        b=Q2bxN5WYbw7SZUaNkfs93IsmtLWfuCCcEfM7YK/zoqOCgtLqxDvaU9srrkeBKYmaRB
         +oJ5bTCF/q8r9emV/6XZKGb3Xt8WZtSaLyjzOLoSVy9GSxl4yRVuW5gqCPtqhc/ExGDT
         LLHfMvFYv2idI7I6OO/TxnaU8JYly1cZPpK72oJKKQYLk3JBC+YMAr+sRZvMzsADxF3S
         OZJywFJoNl5XqGdsmvVbQoqnsXpCtgj5aMhl3RDdWZsnTVAqj5QPbIsXa8nStVEvj+XN
         wCiBFcODB9fbaGg8/6U5lOvYU7WtgeY7+9qZGbPZ22pX91uURU7mTU7gE0SKz3GZ/2uk
         +VaA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718189392; x=1718794192;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=kCHi9i7kiAc1JKyqc9KQQd8JvO+yBsQDZQQQuXhfu7E=;
        b=g9d8+0cmAI24xnVsJbU+E/lO3OOsSC5hoIIQ5+WB6iC0JVKcCeL0chWwDBkRNapNOw
         ML9/plcjotqiizA1oi6/ajWu3qf+KtiVknsXiR1krrb1cp9z1oJK5K4Ho4V8odWHreYx
         j5n5Ci9D1fypsNXjkyBYYhmPyX+4NCNjrUNl/1rw2kJzsUC1LMIeh+d0pe+7ZhD/VXc1
         wMfNln/H3B0za0HPTzpNx5a3OTBuYq9/MxgkUQy1vSg5X/t2syIq9p3o2igB/AOQxy3E
         7LxsjGfAbECWiO4c5G7bsUpo4MzvFRBtMEHAJmfVBZZMa6AMtUKaCuie8DFgVxqS5Iq4
         3X9Q==
X-Forwarded-Encrypted: i=1; AJvYcCVxURAaVlPOzQIZXl8Pix18pI1jA357pX358n15RUp+k199igUxA+YmyDwxfipTRcaob8tKR0Nbv90s816VECAEc8HifjahICDftKTuvV0=
X-Gm-Message-State: AOJu0YzZQ0ia5DlIGJFC47OETa3Th9hcQEzx3AdvHVuYInfFWUZxeby7
	eLIlvup+dn3mYDM6/zscF+H9oo1n4Knai+lNc+q4wRAX7Hua7U5k41c3B1ap6Q==
X-Google-Smtp-Source: AGHT+IHTGawWv1MOPTU/1//RtsPxP2m9rYAYIbKMsVppJiovMaDyNVw9fu4s1AmLLKHfBoPwjv/ZPg==
X-Received: by 2002:a17:906:350c:b0:a6f:3e18:f9e8 with SMTP id a640c23a62f3a-a6f47d5f67amr81931566b.76.1718189392233;
        Wed, 12 Jun 2024 03:49:52 -0700 (PDT)
Message-ID: <7a7b10da-8429-4f5c-a946-7442d4896ab5@suse.com>
Date: Wed, 12 Jun 2024 12:49:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: BlueIris error on Xen 4.17
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <ced16fca-3b55-40a1-a7e2-ffadd9707394@vates.tech>
 <1eaed8c1-981f-4c16-a2f6-e783fa43963f@citrix.com>
Content-Language: en-US
Cc: Damien Thenot <damien.thenot@vates.tech>, xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <1eaed8c1-981f-4c16-a2f6-e783fa43963f@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 12:25, Andrew Cooper wrote:
> On 12/06/2024 10:40 am, Damien Thenot wrote:
>> Hello,
>>
>> An XCP-ng 8.3 user who uses the Blue Iris software encountered a crash 
>> after Xen was upgraded to version 4.17.
>> It worked correctly when XCP-ng 8.3 used Xen 4.13.
>> It happens on Intel Xeon E-2378 CPU @ 2.60GHz CPUs and, it seems, on 
>> more recent Intel CPUs as well.
>> The guests are Windows VMs with an NVIDIA GPU passed through to the guest.
>>
>> The user added:
>>  > On an older box with an i9-9900K CPU it does not happen and the VM
>>  > works as expected. Also working on older Intel Xeon E-2146G and
>>  > E-2276G CPUs. On anything newer than that, however, the VM will
>>  > just BSOD.
>>
>> You can find more information in the XCP-ng forum post: 
>> https://xcp-ng.org/forum/topic/8873/windows-blue-iris-xcp-ng-8-3
>>
>> The user tried enabling `msr-relaxed`, following notes in the Xen 4.15 
>> documentation, but it didn't change the behavior and the guest still 
>> crashes.
>>
>> Has anyone else observed this behavior?
>>
>> Here is a Xen dmesg output with the error that the user was able to obtain:
>>
>> ```
>>
>> (d1) [  132.028963] xen|BUGCHECK: ====>
>> (d1) [  132.029008] xen|BUGCHECK: SYSTEM_SERVICE_EXCEPTION: 
>> 00000000C0000096 FFFFF80418A21E27 FFFFAC009F27B900 0000000000000000
>> (d1) [  132.029057] xen|BUGCHECK: EXCEPTION (FFFFF80418A21E27):
>> (d1) [  132.029096] xen|BUGCHECK: - Code = 8589320F
>> (d1) [  132.029134] xen|BUGCHECK: - Flags = 000000A8
>> (d1) [  132.029174] xen|BUGCHECK: - Address = 8589320F008EECE9
>> (d1) [  132.029214] xen|BUGCHECK: - Parameter[0] = 0F000001D9B90000
>> (d1) [  132.029255] xen|BUGCHECK: - Parameter[1] = 4166300FFCE08332
>> (d1) [  132.029297] xen|BUGCHECK: - Parameter[2] = 0355000002C881F7
>> (d1) [  132.029338] xen|BUGCHECK: - Parameter[3] = 0002A0818B497B74
>> (d1) [  132.029379] xen|BUGCHECK: - Parameter[4] = 000002A8918B4900
>> (d1) [  132.029421] xen|BUGCHECK: - Parameter[5] = 8B49CA230FC0230F
>> (d1) [  132.029462] xen|BUGCHECK: - Parameter[6] = 918B49000002B081
>> (d1) [  132.029502] xen|BUGCHECK: - Parameter[7] = 0FD0230F000002B8
>> (d1) [  132.029544] xen|BUGCHECK: - Parameter[8] = 0002C8918B49DA23
>> (d1) [  132.029585] xen|BUGCHECK: - Parameter[9] = 230FF0230FC03300
>> (d1) [  132.029626] xen|BUGCHECK: - Parameter[10] = 008AE22504F665FA
>> (d1) [  132.029667] xen|BUGCHECK: - Parameter[11] = 00C2F76639740200
>> (d1) [  132.029709] xen|BUGCHECK: - Parameter[12] = 81342405F7327403
>> (d1) [  132.029753] xen|BUGCHECK: - Parameter[13] = 6626750000000200
>> (d1) [  132.029794] xen|BUGCHECK: - Parameter[14] = C88303740200C2F7
>> (d1) [  132.029835] xen|BUGCHECK: EXCEPTION (0D8B000000AC9589):
>> (d1) [  132.029888] xen|BUGCHECK: CONTEXT (FFFFAC009F27B900):
>> (d1) [  132.029925] xen|BUGCHECK: - GS = 002B
>> (d1) [  132.029959] xen|BUGCHECK: - FS = 0053
>> (d1) [  132.029994] xen|BUGCHECK: - ES = 002B
>> (d1) [  132.030028] xen|BUGCHECK: - DS = 002B
>> (d1) [  132.030056] xen|BUGCHECK: - SS = 0018
>> (d1) [  132.030089] xen|BUGCHECK: - CS = 0010
>> (d1) [  132.030123] xen|BUGCHECK: - EFLAGS = 00040046
>> (d1) [  132.030160] xen|BUGCHECK: - RDI = 00000000000002C4
>> (d1) [  132.030199] xen|BUGCHECK: - RSI = 00000000005FFA48
>> (d1) [  132.030237] xen|BUGCHECK: - RBX = 00000000426ED080
>> (d1) [  132.030275] xen|BUGCHECK: - RDX = 0000000000000000
>> (d1) [  132.030312] xen|BUGCHECK: - RCX = 00000000000001DD
>> (d1) [  132.030349] xen|BUGCHECK: - RAX = 0000000000000000
>> (d1) [  132.030386] xen|BUGCHECK: - RBP = 00000000E4427520
>> (d1) [  132.030424] xen|BUGCHECK: - RIP = 0000000018A21E27
>> (d1) [  132.030463] xen|BUGCHECK: - RSP = 00000000E4427498
>> (d1) [  132.030504] xen|BUGCHECK: - R8 = 0000000000000000
>> (d1) [  132.030543] xen|BUGCHECK: - R9 = 000000009F25A000
>> (d1) [  132.030580] xen|BUGCHECK: - R10 = 00000000000002C4
>> (d1) [  132.030618] xen|BUGCHECK: - R11 = 0000000000000246
>> (d1) [  132.030657] xen|BUGCHECK: - R12 = 0000000002CEB528
>> (d1) [  132.030696] xen|BUGCHECK: - R13 = 00000000A81E3A80
>> (d1) [  132.030735] xen|BUGCHECK: - R14 = 00000000000002C4
>> (d1) [  132.030775] xen|BUGCHECK: - R15 = 0000000000000000
>> (d1) [  132.030812] xen|BUGCHECK: STACK:
>> (d1) [  132.030858] xen|BUGCHECK: 00000000E44274A0: (00000000426ED080 
>> 00000000005FF898 0000000000000000 0000000000000000) ntoskrnl.exe + 
>> 000000000043667A
>> (d1) [  132.030935] xen|BUGCHECK: 00000000005FFA18: (00000000A74AD76E 
>> 0000000000000000 00000000A81E3A80 000000000000000E) 00007FFBA9BFF4D4
>> (d1) [  132.031010] xen|BUGCHECK: 00000000005FFA20: (0000000000000000 
>> 00000000A81E3A80 000000000000000E 0000000000000003) 00007FFBA74AD76E
>> (d1) [  132.031086] xen|BUGCHECK: 00000000005FFA28: (00000000A81E3A80 
>> 000000000000000E 0000000000000003 00000000005FFB10) 0000000000000000
>> (d1) [  132.031151] xen|BUGCHECK: <====
>> (XEN) [  132.040828] memory_map:remove: dom1 gfn=f60c0 mfn=a3080 nr=4
>> (XEN) [  132.040975] memory_map:remove: dom1 gfn=f5000 mfn=a2000 nr=1000
>> (XEN) [  132.041124] memory_map:remove: dom1 gfn=e0000 mfn=4000000 nr=10000
>> (XEN) [  132.041463] memory_map:remove: dom1 gfn=e8000 mfn=4008000 nr=8000
>> (XEN) [  132.041905] memory_map:remove: dom1 gfn=f8000 mfn=4010000 nr=2000
>> (XEN) [  132.042072] ioport_map:remove: dom1 gport=c100 mport=6000 nr=80
>> ```
> 
> There was a #GP fault accessing MSR 0x1DD

That's LASTINTFROMIP, so ...

> You'll need to investigate how that MSR is supposed to behave on real
> hardware, and see if Xen's behaviour matches or not.

... I'd be inclined to assume our get_model_specific_lbr() isn't up-to-date.
Even with architectural LBRs (which I might guess is what their hardware is
offering, except that then software wouldn't be respecting CPUID output) I
take it that LASTINT{FROM,TO}IP would still be available and working as
before (leaving aside layout aspects).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:52:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:52:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739189.1146158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLar-0000I0-Lu; Wed, 12 Jun 2024 10:52:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739189.1146158; Wed, 12 Jun 2024 10:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLar-0000Hs-JI; Wed, 12 Jun 2024 10:52:41 +0000
Received: by outflank-mailman (input) for mailman id 739189;
 Wed, 12 Jun 2024 10:52:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHLaq-0000Hl-Q1
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:52:40 +0000
Received: from mail-vk1-xa2a.google.com (mail-vk1-xa2a.google.com
 [2607:f8b0:4864:20::a2a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2403568-28a9-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 12:52:38 +0200 (CEST)
Received: by mail-vk1-xa2a.google.com with SMTP id
 71dfb90a1353d-4e7eadf3668so2682941e0c.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:52:38 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-7977e0eb111sm233665285a.89.2024.06.12.03.52.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 03:52:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2403568-28a9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718189558; x=1718794358; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=YN+2joQB63AnNvNJYmaybfBAzJR2+D9Xvo6Lh7CUxc4=;
        b=OJBmPFyYMvLgZ02sogur5G+7/ZNb44G2mOp/bsBqjvzCbBPEGWDg6e72atIMdZAOpH
         RMzxJy09wxpoFM1NbVa41/uosyXITBhcvBsa0U9b0+wCSVgk75pgJi2ylIsjD0oo0HFr
         26rkbZo8uKtircM255WUlb5JZDXfp0dov5zGM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718189558; x=1718794358;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=YN+2joQB63AnNvNJYmaybfBAzJR2+D9Xvo6Lh7CUxc4=;
        b=VzVXinBR+Za9OonXizkNMWN4heggKWaBNSkQu8QMneyqcFQ4rx6PQ0ae5cg+hpBgkn
         qmMEj11oeN6YCWL+hlbtZF5X+w2eO3gvgn2wQCxJJl48to5NU4R8yT1ynHEYZyk+5oeQ
         iXSMw57akIl/4fFTz7vH4vs8cDL6NtUsDRhE1XUNQmPiP9Lo0XmZcRMoR/VJOXLQUMfX
         OzoCrRP1kA3TSFTXXATH6101Yp2hFGovlvBNjEhCTBVph9eJyx5mnbv4uqmpi4aGxW1Q
         QSouG6fE5q0KoIaz5iMSc1BgsjwtoPkchH71DzlY34GmsrxtnkC2VID3Yrt+RmhOrq3+
         CSpQ==
X-Gm-Message-State: AOJu0Yx+lVwbR+GXyzE7COMNy75icJ0gFEjQagRD165FA+yTrvKRM9hq
	1fQo1YQ+pRH3fb48LrTOWPeohckeNKTNVBB7J6uTqf72u5u3QZRdnS0spm6NhJ0=
X-Google-Smtp-Source: AGHT+IHfZNkRX6Vf1LsDt0h3dZnMW3/Xnztc/bst51K5CsXrvyjG0gvGapyk0X3zyPSdLdjevfKAGw==
X-Received: by 2002:a05:6122:2cd:b0:4df:1a3f:2ec1 with SMTP id 71dfb90a1353d-4ed07ab34ecmr1337076e0c.1.1718189557750;
        Wed, 12 Jun 2024 03:52:37 -0700 (PDT)
Date: Wed, 12 Jun 2024 12:52:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Chen Jiqian <Jiqian.Chen@amd.com>
Subject: Re: [PATCH for-4.19???] x86/physdev: replace physdev_{,un}map_pirq()
 checking against DOMID_SELF
Message-ID: <Zml984lQW1XcrG9_@macbook>
References: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>

On Wed, Jun 12, 2024 at 10:44:56AM +0200, Jan Beulich wrote:
> It's hardly ever correct to check for just DOMID_SELF, as guests have
> ways to figure out their domain IDs and hence could instead use those as
> inputs to respective hypercalls. Note, however, that for ordinary DomU-s
> the adjustment is relaxing things rather than tightening them, since
> - as a result of XSA-237 - the respective XSM checks would have rejected
> self (un)mapping attempts for other than the control domain.
> 
> Since in physdev_map_pirq() handling overall is a little easier this
> way, move obtaining of the domain pointer into the caller. Doing the
> same for physdev_unmap_pirq() is just to keep both consistent in this
> regard. For both this has the advantage that it is now provable (by the
> build not failing) that there are no DOMID_SELF checks left (and none
> could easily be re-added).
> 
> Fixes: 0b469cd68708 ("Interrupt remapping to PIRQs in HVM guests")
> Fixes: 9e1a3415b773 ("x86: fixes after emuirq changes")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

I wonder if we should introduce a helper to check for the current
domain:

#define is_current_domain(d) ((d) == current->domain)

Or similar.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:55:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:55:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739196.1146168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLdU-0000wL-5q; Wed, 12 Jun 2024 10:55:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739196.1146168; Wed, 12 Jun 2024 10:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLdU-0000wE-2K; Wed, 12 Jun 2024 10:55:24 +0000
Received: by outflank-mailman (input) for mailman id 739196;
 Wed, 12 Jun 2024 10:55:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nAc7=NO=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHLdS-0000vz-UJ
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:55:23 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20609.outbound.protection.outlook.com
 [2a01:111:f403:2009::609])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41f960f6-28aa-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 12:55:20 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by BY5PR12MB4194.namprd12.prod.outlook.com (2603:10b6:a03:210::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.37; Wed, 12 Jun
 2024 10:55:15 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7633.036; Wed, 12 Jun 2024
 10:55:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41f960f6-28aa-11ef-b4bb-af5377834399
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0lmlbwjIzoh+1fOGTk+gBrxq5g3NNwL8TlImwiNgls8=;
 b=MiWSKpuiQkADDxi1RbThbPAUgeT9N90us7xPASx74a9h8t6+HzLFrp5vyJrDUccaXiRrD/GloFReOLN+q2XmWGSirxujLgn8dIUpWbGZdCc/l0wTyEg3oXcf0JN32mTiBq10oCvNu0+bZY73kT87ehAtRhZHeC1ryskVvEbj34A=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Hildebrand,
 Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Daniel P
 . Smith" <dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index: AQHauLJn3FT0TCW9KkmRq1iRWqwBt7HCqQ8AgAG9m4D//5AdgIAAiWiA
Date: Wed, 12 Jun 2024 10:55:14 +0000
Message-ID:
 <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
 <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com>
In-Reply-To: <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|BY5PR12MB4194:EE_
x-ms-office365-filtering-correlation-id: 8870a766-651d-4bfe-8441-08dc8ace23c8
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <A0314608A686E3418F08E7F596D77606@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8870a766-651d-4bfe-8441-08dc8ace23c8
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 Jun 2024 10:55:14.7720
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: yVj97751tn4lLMDkS/mrpj+kVrZ8Z/RvMGt7zL3cPPsm6D+fBgAjXX6NN6RduC9hC9vgEl0z1HwO0K0zINCljw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR12MB4194

On 2024/6/12 18:34, Jan Beulich wrote:
> On 12.06.2024 12:12, Chen, Jiqian wrote:
>> On 2024/6/11 22:39, Jan Beulich wrote:
>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>> Some type of domain don't have PIRQ, like PVH, it do not do
>>>> PHYSDEVOP_map_pirq for each gsi. When passthrough a device
>>>> to guest on PVH dom0, callstack
>>>> pci_add_dm_done->XEN_DOMCTL_irq_permission will failed at
>>>> domain_pirq_to_irq, because PVH has no mapping of gsi, pirq
>>>> and irq on Xen side.
>>>
>>> All of this is, to me at least, in pretty sharp contradiction to what
>>> patch 2 says and does. IOW: Do we want the concept of pIRQ in PVH, or
>>> do we want to keep that to PV?
>> It's not contradictory.
>> What I did is not to add the concept of PIRQs for PVH.
> 
> After your further explanations on patch 2 - yes, I see now. But in particular
> there it needs making more clear what case it is that is being enabled by the
> changes.
OK, I will add some descriptions in next version.

> 
>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>
>>> A problem throughout the series as it seems: Who's the author of these
>>> patches? There's no From: saying it's not you, but your S-o-b also
>>> isn't first.
>> So I need to change to:
>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I am the author.
>> Signed-off-by: Huang Rui <ray.huang@amd.com> means Rui sent them to upstream firstly.
>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I take continue to upstream.
> 
> I guess so, yes.
Thanks.

> 
>>>> --- a/tools/libs/light/libxl_pci.c
>>>> +++ b/tools/libs/light/libxl_pci.c
>>>> @@ -1412,6 +1412,37 @@ static bool pci_supp_legacy_irq(void)
>>>>  #define PCI_SBDF(seg, bus, devfn) \
>>>>              ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>>>  
>>>> +static int pci_device_set_gsi(libxl_ctx *ctx,
>>>> +                              libxl_domid domid,
>>>> +                              libxl_device_pci *pci,
>>>> +                              bool map,
>>>> +                              int *gsi_back)
>>>> +{
>>>> +    int r, gsi, pirq;
>>>> +    uint32_t sbdf;
>>>> +
>>>> +    sbdf = PCI_SBDF(pci->domain, pci->bus, (PCI_DEVFN(pci->dev, pci->func)));
>>>> +    r = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>>> +    *gsi_back = r;
>>>> +    if (r < 0)
>>>> +        return r;
>>>> +
>>>> +    gsi = r;
>>>> +    pirq = r;
>>>
>>> r is a GSI as per above; why would you store such in a variable named pirq?
>>> And how can ...
>>>
>>>> +    if (map)
>>>> +        r = xc_physdev_map_pirq(ctx->xch, domid, gsi, &pirq);
>>>> +    else
>>>> +        r = xc_physdev_unmap_pirq(ctx->xch, domid, pirq);
>>>
>>> ... that value be the correct one to pass into here? In fact, the pIRQ number
>>> you obtain above in the "map" case isn't handed to the caller, i.e. it is
>>> effectively lost. Yet that's what would need passing into such an unmap call.
>> Yes r is GSI and I know pirq will be replaced by xc_physdev_map_pirq.
>> What I do "pirq = r" is for xc_physdev_unmap_pirq, unmap need passing in pirq,
>> and the number of pirq is always equal to gsi.
> 
> Why would that be? pIRQ is purely a software construct (of Xen's), I
> don't think there's any guarantee whatsoever on the numbering. And even
> if there was (for e.g. non-MSI ones), it would be pIRQ == IRQ. And recall
> that elsewhere I think I meanwhile succeeded in explaining to you that
> IRQ != GSI (in the common case, even if in most cases they match).
OK, will change in next version.

> 
>>>> +    if (r)
>>>> +        return r;
>>>> +
>>>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
>>>
>>> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
>>> you better would avoid making the call in the first place.
>> Yes, for PV dom0, the errno is EOPNOTSUPP, then it will do below xc_domain_irq_permission.
> 
> Hence why call xc_domain_gsi_permission() at all on a PV Dom0?
Is there a function to distinguish that current dom0 is PV or PVH dom0 in tools/libs?

> 
>>>> +    if (r && errno == EOPNOTSUPP)
>>>
>>> Before here you don't really need the pIRQ number; if all it really is needed
>>> for is ...
>>>
>>>> +        r = xc_domain_irq_permission(ctx->xch, domid, pirq, map);
>>>
>>> ... this, then it probably also should only be obtained when it's needed. Yet
>>> overall the intentions here aren't quite clear to me.
>> Adding the function pci_device_set_gsi is for PVH dom0, while also ensuring compatibility with PV dom0.
>> When PVH dom0, it does xc_physdev_map_pirq and xc_domain_gsi_permission(new hypercall for PVH dom0)
>> When PV dom0, it keeps the same actions as before codes, it does xc_physdev_map_pirq and xc_domain_irq_permission.
> 
> And why does PVH Dom0 need to call xc_physdev_map_pirq(), when in that case
> the pIRQ isn't used?
I didn't expect that introducing pci_device_set_gsi causes so many confusions.
I will remove it in the next version and make modifications directly in pci_add_dm_done.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 10:57:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 10:57:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739206.1146177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLfB-0001Vz-F5; Wed, 12 Jun 2024 10:57:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739206.1146177; Wed, 12 Jun 2024 10:57:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLfB-0001Vs-Cf; Wed, 12 Jun 2024 10:57:09 +0000
Received: by outflank-mailman (input) for mailman id 739206;
 Wed, 12 Jun 2024 10:57:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+mOs=NO=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sHLf9-0001Vk-EY
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 10:57:07 +0000
Received: from mail-oa1-x30.google.com (mail-oa1-x30.google.com
 [2001:4860:4864:20::30])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 819e8e76-28aa-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 12:57:06 +0200 (CEST)
Received: by mail-oa1-x30.google.com with SMTP id
 586e51a60fabf-24c9f297524so3726133fac.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 03:57:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 819e8e76-28aa-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718189825; x=1718794625; darn=lists.xenproject.org;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=d4CF5jyC63XCq0u9FoYR5aOs8m8c0dHKwp4gOWXkpMU=;
        b=beizp3e6myFwFZX2xeLZi0hY7on814W3eC0iWOyEhDVy5tSxyuS1ZxKbosWr5v/xs9
         hVyUWoeE+MHoOecECKmp7dk42ZZZIHU8xib/NcLhrfkCvbEsxpe69QdzcFi2Km6o95uT
         lrEgS0VKCGIC1x37f0x1pZWQc5rrvkA1rYZNM=
X-Received: by 2002:a05:6871:726:b0:250:7f7e:fa6a with SMTP id
 586e51a60fabf-25514c5acb1mr1549416fac.23.1718189824841; Wed, 12 Jun 2024
 03:57:04 -0700 (PDT)
MIME-Version: 1.0
From: George Dunlap <george.dunlap@cloud.com>
Date: Wed, 12 Jun 2024 11:56:54 +0100
Message-ID: <CA+zSX=YoUCL40_GtQy0j9_8=3zT-QsuBOmBwRsAUJiBmji1bZA@mail.gmail.com>
Subject: Design Session: Matrix Channel
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

Nobody volunteered *up front* to take notes for the Matrix design
session [1], but I did end up taking a few notes, so I agreed to do some
follow-up.

The general issue seemed to be how difficult it was to pull out
"signal" from "noise" (where "noise" is individual; i.e., something
completely ARM-specific would be "signal" to the ARM maintainers, but
might be "noise" to someone else).  We also talked about a few other
things that could potentially be improved.

The decision was to make a guidelines page we could point people to
quickly, with points like the following:

---

- When possible, please reply in thread, even for short things.  If
you realize you haven't been replying in thread, please start as soon
as possible; and before replying, look for an existing thread.

- If there's been a technical discussion that has reached an agreement
or conclusion, try to remember to @ relevant people, saying something
like "any thoughts?", just to make sure they have an opportunity to
weigh in.

- Also consider whether it would be good for posterity to paste the
entire thread onto xen-devel, for wider discussion.  (We used to do
this a lot more for IRC conversations.)

- Don't be shy about asking people to move the discussion to a more
appropriate channel.

- We may at some point consider having more specific sub-channels.  In
most cases the core maintainers would need to be part of all the
channels anyway, but having multiple channels still helps, by 1)
breaking down the stuff to catch up on into smaller chunks, and 2)
sorting it into bits which should be easier to skim.  E.g., x86
maintainers would skim through the backlog on an ARM channel fairly
quickly, while spending a lot more time going through the backlog on an
x86 channel.

- For patches, console logs, etc. longer than a few lines, please use a
pastebin.  We recommend paste.debian.net, a gitlab snippet, or
termbin (instructions for termbin below).
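[For termbin, the usual invocation is to pipe the text into netcat on port 9999; termbin.com replies with a URL you can paste into the channel. A minimal sketch, using a stand-in log file (the file name here is only illustrative):]

```shell
# Write the text to share into a file (stand-in content for illustration).
printf 'example console log line\n' > /tmp/design-session-log.txt

# Canonical termbin invocation: pipe the file to termbin.com port 9999.
# The service replies with a shareable URL.  (Needs network access, so it
# is shown commented out here.)
#   cat /tmp/design-session-log.txt | nc termbin.com 9999

# Show what would be sent.
cat /tmp/design-session-log.txt
```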

---

Any thoughts?

[1] https://design-sessions.xenproject.org/uid/discussion/disc_Dp9L1y1poJPV69rgUjYO/view


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:06:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:06:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739211.1146188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLng-0004NU-5u; Wed, 12 Jun 2024 11:05:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739211.1146188; Wed, 12 Jun 2024 11:05:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHLng-0004NN-2M; Wed, 12 Jun 2024 11:05:56 +0000
Received: by outflank-mailman (input) for mailman id 739211;
 Wed, 12 Jun 2024 11:05:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oKT8=NO=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sHLne-0004NH-AX
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 11:05:54 +0000
Received: from mail-yw1-x1129.google.com (mail-yw1-x1129.google.com
 [2607:f8b0:4864:20::1129])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bbb6405d-28ab-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 13:05:53 +0200 (CEST)
Received: by mail-yw1-x1129.google.com with SMTP id
 00721157ae682-62f86eaffbeso8610447b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 04:05:53 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b06ae2266bsm45452746d6.3.2024.06.12.04.05.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 04:05:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbb6405d-28ab-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718190352; x=1718795152; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=51VaqkyJ2zdpT7OtjzMVMDyDBYuqua5OIfu8akJWwLY=;
        b=aEvJE2yQUCDDyr0e7GdMvpGfGOL1nZjN0/LkaHGC6BjXgsOHZtdzMGDT1DMU7/VHrg
         tGYjLIbUCR2EQ6Qsgyo3SsCVwY/3PdOPI7lRS13tQ9Rbf8ysT7aappzaiqw394lbbIlA
         zbZ8cMQ73uVms2cUTDrnCzT11o63kND+kqGbw=
X-Received: by 2002:a25:c741:0:b0:dfd:fac6:ff80 with SMTP id 3f1490d57ef6-dfe68b1451emr1267277276.57.1718190351972;
        Wed, 12 Jun 2024 04:05:51 -0700 (PDT)
Message-ID: <37ccb940-dfcd-419d-8cea-93800fd2c865@citrix.com>
Date: Wed, 12 Jun 2024 12:05:49 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19???] x86/physdev: replace physdev_{,un}map_pirq()
 checking against DOMID_SELF
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Roger Pau Monné <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Chen Jiqian <Jiqian.Chen@amd.com>
References: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12/06/2024 9:44 am, Jan Beulich wrote:
> It's hardly ever correct to check for just DOMID_SELF, as guests have
> ways to figure out their domain IDs and hence could instead use those as
> inputs to respective hypercalls. Note, however, that for ordinary DomU-s
> the adjustment is relaxing things rather than tightening them, since
> - as a result of XSA-237 - the respective XSM checks would have rejected
> self (un)mapping attempts for other than the control domain.
>
> Since in physdev_map_pirq() handling overall is a little easier this
> way, move obtaining of the domain pointer into the caller. Doing the
> same for physdev_unmap_pirq() is just to keep both consistent in this
> regard. For both this has the advantage that it is now provable (by the
> build not failing) that there are no DOMID_SELF checks left (and none
> could easily be re-added).
>
> Fixes: 0b469cd68708 ("Interrupt remapping to PIRQs in HVM guests")
> Fixes: 9e1a3415b773 ("x86: fixes after emuirq changes")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I think it is right to perform the domid lookup in do_physdev_op() and
pass d down into physdev_{un,}map_pirq().

But I don't see what this has to do with the build failing.  You're not
undef-ing DOMID_SELF, so I don't see what kind of provability you've added.

> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -184,6 +170,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>  
>      switch ( cmd )
>      {
> +        struct domain *d;
> +

Please don't introduce any more of these.

We've discussed several times wanting to start using trivial
autovar init support, and every one of these additions is going to need
reverting.

In this case, there's literally no difference having it at function scope.

Furthermore, I recall it being a MISRA violation now too.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:23:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:23:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739220.1146197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHM4O-0000jL-Iy; Wed, 12 Jun 2024 11:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739220.1146197; Wed, 12 Jun 2024 11:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHM4O-0000jE-GQ; Wed, 12 Jun 2024 11:23:12 +0000
Received: by outflank-mailman (input) for mailman id 739220;
 Wed, 12 Jun 2024 11:23:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHM4N-0000j8-AL
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 11:23:11 +0000
Received: from mail-oi1-x22b.google.com (mail-oi1-x22b.google.com
 [2607:f8b0:4864:20::22b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24facf0c-28ae-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 13:23:08 +0200 (CEST)
Received: by mail-oi1-x22b.google.com with SMTP id
 5614622812f47-3d227b1f4f0so1161179b6e.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 04:23:09 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-44085391d1esm29071241cf.16.2024.06.12.04.23.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 04:23:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24facf0c-28ae-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718191388; x=1718796188; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=vcgFRsodj/wMV+ko0JmTgp51Mi6AMfxvIQHoqlh4BT8=;
        b=Z5XmlrmL/6+6r1LUv5MScrh04IqoM+0Jkb4E/Jpp+mh3mlUI6CLkGe240SqYMAhDwU
         iAplxfDOey1p+gCFlbtZr8/Gs5Hc3vGDa9jWLKkbUTMNpIhmJBnJo4Lr1y/RYFkoYlPR
         8AiFbpRGVtzh0N5D7DCTW+j3ZANEqzeQB/H6g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718191388; x=1718796188;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=vcgFRsodj/wMV+ko0JmTgp51Mi6AMfxvIQHoqlh4BT8=;
        b=Buugfp0QHNzKiNH1mdX3cu78OD//Gc8zW7RwbL8967P2lQ57TU6K/qCfr+tPiFmvNA
         5N0akxkiQoLOlPZ3oxaT35tqQ43/LZKRWXkcgBeu5ara2ntGrnY6E/BZeaXuVqFuvlBB
         htDlQC1l7mi0deJe8sOSl7OwTVmwbcsl5LLjqT/a6IZkCKUk+iHRjAxMDy1/8CHGhSH+
         q4nkPn7nB3G2o0j+VhFti5UDM4jFJoY++7e3anY4+z775IBG3mkPWfw+Tz+wmbQnGMK2
         1lZyMUivQOvGic8WY37vokifQ/5w24Iik+3kKBv7GSr/4X/j71Fn0KQkAxjtRM/QAKSy
         jAOg==
X-Forwarded-Encrypted: i=1; AJvYcCVBQwnnwnYIgAYroeNAM+CaF++kZkW6WMgonq7YbkKcYsOhHBloYGbyhkGiGgImi27oEKHWZQeV32n3ge/tVQgB4DjfvPGtKMYUq9kf/V0=
X-Gm-Message-State: AOJu0Yzn6Q3qFJstmJfg9wrEg3A4OfM1RQ/iAwzyutZl+e6hcE07w5kp
	OEuX/KZepLu95Shqlj2k3hLEyJTLtQejX0YXVzs3pmTu8JTeypyMzKV5wIR7MPb9lEcXG0M4dTg
	6
X-Google-Smtp-Source: AGHT+IHBKYr6ZlOs/EJeu0I8X6WloWCRXmj47u36gLyGpEevia2d56RvKCeY/DfNifLIVCUDQUlkiA==
X-Received: by 2002:a05:6808:2f0a:b0:3d2:1e7e:60a0 with SMTP id 5614622812f47-3d23dfd92c8mr1721865b6e.11.1718191387507;
        Wed, 12 Jun 2024 04:23:07 -0700 (PDT)
Date: Wed, 12 Jun 2024 13:23:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 7/7] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
Message-ID: <ZmmFGc7TSoKsCH95@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-8-roger.pau@citrix.com>
 <7e090e00-2061-4ef1-a0a4-b45ac86c5ee6@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7e090e00-2061-4ef1-a0a4-b45ac86c5ee6@suse.com>

On Tue, Jun 11, 2024 at 03:50:42PM +0200, Jan Beulich wrote:
> On 10.06.2024 16:20, Roger Pau Monne wrote:
> > fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.  Given
> > the CPU is to become offline, the normal migration logic used by Xen where the
> > vector in the previous target(s) is left configured until the interrupt is
> > received on the new destination is not suitable.
> > 
> > Instead attempt to do as much as possible in order to prevent losing
> > interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
> > currently the case) attempt to forward pending vectors when interrupts that
> > target the current CPU are migrated to a different destination.
> > 
> > Additionally, for interrupts that have already been moved from the current CPU
> > prior to the call to fixup_irqs() but that haven't been delivered to the new
> > destination (iow: interrupts with move_in_progress set and the current CPU set
> > in ->arch.old_cpu_mask) also check whether the previous vector is pending and
> > forward it to the new destination.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Changes since v1:
> >  - Rename to apic_irr_read().
> > ---
> >  xen/arch/x86/include/asm/apic.h |  5 +++++
> >  xen/arch/x86/irq.c              | 37 ++++++++++++++++++++++++++++++++-
> >  2 files changed, 41 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
> > index d1cb001fb4ab..7bd66dc6e151 100644
> > --- a/xen/arch/x86/include/asm/apic.h
> > +++ b/xen/arch/x86/include/asm/apic.h
> > @@ -132,6 +132,11 @@ static inline bool apic_isr_read(uint8_t vector)
> >              (vector & 0x1f)) & 1;
> >  }
> >  
> > +static inline bool apic_irr_read(unsigned int vector)
> > +{
> > +    return apic_read(APIC_IRR + (vector / 32 * 0x10)) & (1U << (vector % 32));
> > +}
> > +
> >  static inline u32 get_apic_id(void)
> >  {
> >      u32 id = apic_read(APIC_ID);
> > diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> > index 54eabd23995c..ed262fb55f4a 100644
> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -2601,7 +2601,7 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >  
> >      for ( irq = 0; irq < nr_irqs; irq++ )
> >      {
> > -        bool break_affinity = false, set_affinity = true;
> > +        bool break_affinity = false, set_affinity = true, check_irr = false;
> >          unsigned int vector, cpu = smp_processor_id();
> >          cpumask_t *affinity = this_cpu(scratch_cpumask);
> >  
> > @@ -2649,6 +2649,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >               !cpumask_test_cpu(cpu, &cpu_online_map) &&
> >               cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
> >          {
> > +            /*
> > +             * This to-be-offlined CPU was the target of an interrupt that's
> > +             * been moved, and the new destination target hasn't yet
> > +             * acknowledged any interrupt from it.
> > +             *
> > +             * We know the interrupt is configured to target the new CPU at
> > +             * this point, so we can check IRR for any pending vectors and
> > +             * forward them to the new destination.
> > +             *
> > +             * Note the difference between move_in_progress or
> > +             * move_cleanup_count being set.  For the latter we know the new
> > +             * destination has already acked at least one interrupt from this
> > +             * source, and hence there's no need to forward any stale
> > +             * interrupts.
> > +             */
> 
> I'm a little confused by this last paragraph: It talks about a difference,
> yet ...
> 
> > +            if ( apic_irr_read(desc->arch.old_vector) )
> > +                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> > +                              desc->arch.vector);
> 
> ... in the code being commented there's no difference visible. Hmm, I guess
> this is related to the enclosing if(). Maybe this could be worded a little
> differently, e.g. starting with "Note that for the other case -
> move_cleanup_count being non-zero - we know ..."?

Hm, I see.  Yes, the difference is that for interrupts that have
move_cleanup_count set we don't forward pending interrupts in IRR on
this CPU.  I put this here because I think it's more naturally
arranged with the rest of the comment.  I can pull the whole comment
ahead of the if() if that's better.

> > @@ -2689,11 +2708,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >          if ( desc->handler->disable )
> >              desc->handler->disable(desc);
> >  
> > +        /*
> > +         * If the current CPU is going offline and is (one of) the target(s) of
> > +         * the interrupt signal to check whether there are any pending vectors
> > +         * to be handled in the local APIC after the interrupt has been moved.
> > +         */
> 
> After reading this a number of times, I think there wants to be a comma between
> "interrupt" and "signal". Or am I getting wrong what is being meant?

Indeed.

> > +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
> > +            check_irr = true;
> > +
> >          if ( desc->handler->set_affinity )
> >              desc->handler->set_affinity(desc, affinity);
> >          else if ( !(warned++) )
> >              set_affinity = false;
> >  
> > +        if ( check_irr && apic_irr_read(vector) )
> > +            /*
> > +             * Forward pending interrupt to the new destination, this CPU is
> > +             * going offline and otherwise the interrupt would be lost.
> > +             */
> > +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> > +                          desc->arch.vector);
> > +
> >          if ( desc->handler->enable )
> >              desc->handler->enable(desc);
> >  
> 
> Down from here, after the loop, there's a 1ms window where latched but not
> yet delivered interrupts can be received. How's that playing together with
> the changes you're making? Aren't we then liable to get two interrupts, one
> at the old and one at the new source, in unknown order?

I was mistakenly thinking that clear_local_APIC() would block
interrupt delivery, but that's not the case, so yes, interrupts should
still be delivered in the window below.

Let me test without this last patch.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:44:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:44:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739228.1146208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMON-0004oV-7R; Wed, 12 Jun 2024 11:43:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739228.1146208; Wed, 12 Jun 2024 11:43:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMON-0004oO-4R; Wed, 12 Jun 2024 11:43:51 +0000
Received: by outflank-mailman (input) for mailman id 739228;
 Wed, 12 Jun 2024 11:43:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHMOL-0004oF-Vw
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 11:43:49 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 080a234f-28b1-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 13:43:48 +0200 (CEST)
Received: by mail-lf1-x12f.google.com with SMTP id
 2adb3069b0e04-52c9034860dso3486581e87.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 04:43:48 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f16427842sm461770366b.100.2024.06.12.04.43.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 04:43:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 080a234f-28b1-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718192628; x=1718797428; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=DR8MrFMpPBxo4kGOVNcrzp+benZP9y/vQ7nGRow2i/U=;
        b=KtC0ctw0f3KTnUf7tmb1YXS0tlsgwzLQF757AJ1jMRz1PJ37ux7u6DKebu851iEj5S
         OrSQ9L3t9CU+b+cWxMrflV0BL5NDqMM9S17SLEVmj49ckN25BI09PVBC8VuVegv1H05p
         3AkHNGlPuwbxUqB2Fb1KUZvIGsOTovg0zJVk0HUCGsCQvB3CHrt5agE3HEw9Jq/dyOJr
         UvdrvuXv0cBaZnaxmxpCv0PW+SSmO5xS1IuBJNom3w7UgAuyUhYKjC56hXjv79oVJkAO
         ecPpCJcDTTRYv4BZ7wf2amrk3xfmLfQan51AQcxYw/D56kpH1VShbs3TyL0D85DHHxex
         vLLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718192628; x=1718797428;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DR8MrFMpPBxo4kGOVNcrzp+benZP9y/vQ7nGRow2i/U=;
        b=vqGRVcvrwHYHeTqZlF2N95hUr3r4HF3nM9mhWRMO67nripNWEwq5KRQ5AC73XXvnnT
         1j8JDDi8LnrZpA18aQ0EpoUPKZMJ+O1nXRKjSmPuwZWvyvhjbyseLrKTVNbvY+ROuYit
         I4I1kFQ/E9GJ8b1FWz//TJ38CzS5piIBYpyQVfI31nMLUJl1y+T6dPhf5WrxqCeyGz43
         bRo5KyOeuWydAnTzFwqFDe0PCgT01OxIom1aeMpDxC32Pv6f7ta6JuCYgY/+PB6tzmHR
         OYKCFV0pcVgIttR2dqxSb2Ss9miZl1pZwlEowYSd17IGqzJIVjitQEH+4AiWZv2IuVC7
         DWpQ==
X-Forwarded-Encrypted: i=1; AJvYcCUHCTUaNlZPB5k0D370rxOTLgPEClsFlKX10YvlkhTu3+xJBrQ7ppxONf2egPGDL0mv/R8fvxGHgaSdeKei+liBTkBXDtvOb5lGGW/bZjE=
X-Gm-Message-State: AOJu0Yya1biRK6z5ymwxVc/UD0o9FLS6wzQDebxga3CJyo1VX1v9fdKP
	wM9NSr1z45AH2auqYgXn4Gn6VmDkqCsYgofH7GLr2hANUMvuZX1tMfpp0u2W9Q==
X-Google-Smtp-Source: AGHT+IHZiNvJpPTDC7TlKYlJrzr43r9YbGWfkBZepwqIx8UpABBVxFd16HQOGRKfmcOr1HZcg5JFLg==
X-Received: by 2002:a19:2d4e:0:b0:52b:c88e:cec1 with SMTP id 2adb3069b0e04-52c9a3e3b37mr1144596e87.33.1718192627587;
        Wed, 12 Jun 2024 04:43:47 -0700 (PDT)
Message-ID: <258b3241-c9fd-4534-9dd6-88d8542757bc@suse.com>
Date: Wed, 12 Jun 2024 13:43:45 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
 <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com>
 <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12.06.2024 12:55, Chen, Jiqian wrote:
> On 2024/6/12 18:34, Jan Beulich wrote:
>> On 12.06.2024 12:12, Chen, Jiqian wrote:
>>> On 2024/6/11 22:39, Jan Beulich wrote:
>>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
>>>>
>>>> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
>>>> you better would avoid making the call in the first place.
>>> Yes, for PV dom0, the errno is EOPNOTSUPP, and then it falls back to the xc_domain_irq_permission call below.
>>
>> Hence why call xc_domain_gsi_permission() at all on a PV Dom0?
> Is there a function in tools/libs to distinguish whether the current dom0 is PV or PVH?

That's a question to the tools folks, I suppose?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:45:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:45:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739235.1146218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMQB-0005L5-K0; Wed, 12 Jun 2024 11:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739235.1146218; Wed, 12 Jun 2024 11:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMQB-0005Ky-G8; Wed, 12 Jun 2024 11:45:43 +0000
Received: by outflank-mailman (input) for mailman id 739235;
 Wed, 12 Jun 2024 11:45:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHMQA-0005Kq-Lk
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 11:45:42 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4b03eb47-28b1-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 13:45:40 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-579fa270e53so3459255a12.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 04:45:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57c75ab6297sm6475289a12.14.2024.06.12.04.45.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 04:45:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b03eb47-28b1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718192740; x=1718797540; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ODzRth6Dmpaw3RHDhtgAUYcVH8M5z4K/UWksi07P3n8=;
        b=gdYN81b20D7Q8gA3tcN95wK9yu/MNeT0wZjgeCz/AKThMoMgXAbFx5tkb8D7uytMvX
         2vOPcUckM2LATjrtpR/Jx3p03btFlAvIlL3TDRYU7CZ8Ko0Zl/nbeawWneBiZpeT52cO
         vRdUmhBYayR+xVtcIc3iLG8CZ/iHcgbfD32EadCFhANDUmxHuLuTfQXdzKY9ann9BgZs
         X3dfDHbaxUdcYrFOjYp9dFLwd6UN2SYrM0eR9KaCLbTVbvcVzvWZkCkPUCcdVgENbzZk
         altDYicIPybBaoGrsXOHc04C8NGssrfWuMhF/fTEzQYxEhCJlV37IWGABKisEIbn84cy
         MtTQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718192740; x=1718797540;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ODzRth6Dmpaw3RHDhtgAUYcVH8M5z4K/UWksi07P3n8=;
        b=dQg83mTnXd7pzDDkrksMtaAulW/6pSj1yi2z1lEiCQbeV0RSRXHcrTDgwYjuDAuhPO
         dZKtyMn/dgZ0nSIvGbflL09h7ji3BkiIKKgAqpdzfilJs2UMCGZzqtF5mHTqQuSUhfIc
         xGfp03lA+rLK98mGn9/p44UExdz1P0QTaoSWrfb77yGPg2emFJSw2nFu+iyYD6ksQsbT
         VRwm+iC1R0xbViozPFclOkMMRGTe9eSlHdhMzzIUwYImyQ1LUuwSalpSLoiTAFouclgh
         jIIARue0ekvXQTyH1I9X6Uw+pfRYBVAe7uMPRa8b7GE4BgLYQyl+w3F5pYlS4XDJj/Ik
         t62Q==
X-Forwarded-Encrypted: i=1; AJvYcCXsWZDC4W0ip9VolNpeN+0WUU0oQdXj3MQIMdUOYSVnsS3y5hYGi2VX64Q+nBFMRKV3hqcUugNi3yL3cUZ4uGhaBPZgVDUw48DLoSRmAig=
X-Gm-Message-State: AOJu0YxJk0CKgFwdgKEjPyjTulbl8hJfnbqHqA6wVkX8cX2tM49/HWi9
	w41rc8hseX2cBX85c9KYvRzpDtH0wczKPuOzwNN2qRl6URB2m1GZtCizTviRiQ==
X-Google-Smtp-Source: AGHT+IFu5IndTRoDxRrrnaGWGrUuyO081/1KDUK9FrUPC2Po3TmzySz5TCbd41b7OBIARXpGjZzcbg==
X-Received: by 2002:a50:8e5d:0:b0:57c:6ae2:abda with SMTP id 4fb4d7f45d1cf-57ca9743b46mr1460555a12.5.1718192740109;
        Wed, 12 Jun 2024 04:45:40 -0700 (PDT)
Message-ID: <5f568372-5114-4cd7-92e1-aae5028c923c@suse.com>
Date: Wed, 12 Jun 2024 13:45:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19???] x86/physdev: replace physdev_{,un}map_pirq()
 checking against DOMID_SELF
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Chen Jiqian <Jiqian.Chen@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
 <37ccb940-dfcd-419d-8cea-93800fd2c865@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <37ccb940-dfcd-419d-8cea-93800fd2c865@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 13:05, Andrew Cooper wrote:
> On 12/06/2024 9:44 am, Jan Beulich wrote:
>> It's hardly ever correct to check for just DOMID_SELF, as guests have
>> ways to figure out their domain IDs and hence could instead use those as
>> inputs to respective hypercalls. Note, however, that for ordinary DomU-s
>> the adjustment is relaxing things rather than tightening them, since
>> - as a result of XSA-237 - the respective XSM checks would have rejected
>> self (un)mapping attempts for other than the control domain.
>>
>> Since in physdev_map_pirq() handling overall is a little easier this
>> way, move obtaining of the domain pointer into the caller. Doing the
>> same for physdev_unmap_pirq() is just to keep both consistent in this
>> regard. For both this has the advantage that it is now provable (by the
>> build not failing) that there are no DOMID_SELF checks left (and none
>> could easily be re-added).
>>
>> Fixes: 0b469cd68708 ("Interrupt remapping to PIRQs in HVM guests")
>> Fixes: 9e1a3415b773 ("x86: fixes after emuirq changes")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I think it is right to perform the domid lookup in do_physdev_op() and
> pass d down into physdev_{un,}map_pirq().
> 
> But I don't see what this has to do with the build failing.  You're not
> undef-ing DOMID_SELF, so I don't see what kind of provability you've added.

I'm talking about provability for the two functions in question, not
globally of course.

>> --- a/xen/arch/x86/physdev.c
>> +++ b/xen/arch/x86/physdev.c
>> @@ -184,6 +170,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>>  
>>      switch ( cmd )
>>      {
>> +        struct domain *d;
>> +
> 
> Please don't introduce any more of these.
> 
> We've discussed several times about wanting to start using trivial
> autovar init support, and every one of these additions is going to need
> reverting.
> 
> In this case, there's literally no difference having it at function scope.

Will do; sorry, habits.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:47:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739241.1146227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMRP-0005s0-Sb; Wed, 12 Jun 2024 11:46:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739241.1146227; Wed, 12 Jun 2024 11:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMRP-0005rt-Pf; Wed, 12 Jun 2024 11:46:59 +0000
Received: by outflank-mailman (input) for mailman id 739241;
 Wed, 12 Jun 2024 11:46:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHMRO-0005rn-9w
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 11:46:58 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 78176594-28b1-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 13:46:56 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57c76497cefso4891361a12.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 04:46:56 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57aae0c9f88sm10891913a12.21.2024.06.12.04.46.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 04:46:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78176594-28b1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718192816; x=1718797616; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=/8Bqkp/I/7LiyMqxZ2e88rah2tV1WpwqC+6QLe+jgZM=;
        b=akR6C/1MoIocJV/t2c1mZH6WTS4HV7Yo+BygQkuN1+fzKLwL/uDoEnU1+55GpHY5+0
         1FUefHsnxDcSW31KT+UmKWgFgpcpfKdbevAL5ZqAmRPOdiN2XFXm6Pojv4vvZZdPlx90
         kGMFiuNJYuD+o8HVzhlHetQU/nZTM1lJoKwZQK2y9Eo8Ac/AjNzzBWqMAUf+BzsXWG+O
         4oCjTzlJEdOceOejpfHM+5zUed4XMyk95K4/G9i5mqUoJ7EEkcbB/WhWE4b6YAhU/Crd
         QVma6pa4Z8GRQi5O/a2PNwqwt3jFYR6NehdwZg6KfXhIydZ3nurbpK0Uxr8RZzyY/3tB
         z6xg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718192816; x=1718797616;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/8Bqkp/I/7LiyMqxZ2e88rah2tV1WpwqC+6QLe+jgZM=;
        b=wNWAmWLJ0nbBjlXImlzxGm7cLfjMmDl4a0y4+cDm9JkvEfHafqvBtEiXaxIGDQCcvf
         /kcRayhltLOGTNKdo2f/r6SHZUmfVv+e5oVNkJX+x5b/7evDsOBIptUlK7Kchc583dJU
         GwDW24kYQ+Un14HuQFwIZsAdHMuamAMpB2vIyL5LxNC9ooL6zv6GN94vI5u3VRcSrqxa
         8HETbF5Y6L4Sm6AeqL843ifZIOKr8Cbyfijr/HDqvBjJqfJko6Usrvt1rsPZQDp/dVue
         nZMRy7RuhTbLiIIb6PFl/1jeKuTvrqBuLHUvMItFfpPoV5yUzhjguFd8Fk6p8h0KcrGv
         98nA==
X-Gm-Message-State: AOJu0Yxd86tBG7eK1SfWX2i3h8Z507g5ZFhmuOjn3NDpFKvUbEEIKS1t
	GwFsuQhhsyptd+P94aWRiQMlpSZbdwGwsnoitzpgJZ35brBf1SjMzRwsDzbbSQ==
X-Google-Smtp-Source: AGHT+IHK5Pw6kBKzQHvkTRwAlohxJm4OmztL/eN/LDUZwEMWiGohjwXjhe8CsFih5tcUxGmeDyqaTQ==
X-Received: by 2002:a50:9e84:0:b0:57c:6740:f47c with SMTP id 4fb4d7f45d1cf-57ca97936a9mr1180871a12.27.1718192815631;
        Wed, 12 Jun 2024 04:46:55 -0700 (PDT)
Message-ID: <aaac0f5d-18d9-4cd6-ad89-e8d5aaa3a797@suse.com>
Date: Wed, 12 Jun 2024 13:46:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19???] x86/physdev: replace physdev_{,un}map_pirq()
 checking against DOMID_SELF
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Chen Jiqian <Jiqian.Chen@amd.com>
References: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
 <Zml984lQW1XcrG9_@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <Zml984lQW1XcrG9_@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 12:52, Roger Pau Monné wrote:
> On Wed, Jun 12, 2024 at 10:44:56AM +0200, Jan Beulich wrote:
>> It's hardly ever correct to check for just DOMID_SELF, as guests have
>> ways to figure out their domain IDs and hence could instead use those as
>> inputs to respective hypercalls. Note, however, that for ordinary DomU-s
>> the adjustment is relaxing things rather than tightening them, since
>> - as a result of XSA-237 - the respective XSM checks would have rejected
>> self (un)mapping attempts for other than the control domain.
>>
>> Since in physdev_map_pirq() handling overall is a little easier this
>> way, move obtaining of the domain pointer into the caller. Doing the
>> same for physdev_unmap_pirq() is just to keep both consistent in this
>> regard. For both this has the advantage that it is now provable (by the
>> build not failing) that there are no DOMID_SELF checks left (and none
>> could easily be re-added).
>>
>> Fixes: 0b469cd68708 ("Interrupt remapping to PIRQs in HVM guests")
>> Fixes: 9e1a3415b773 ("x86: fixes after emuirq changes")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I wonder if we should introduce a helper to check for the current
> domain:
> 
> #define is_current_domain(d) ((d) == current->domain)

Hmm, that's not even shorter, and imo not any more "meaningful". Plus
it wouldn't cover the case where we have currd already in a local var.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:52:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:52:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739255.1146243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMX3-0008Gb-Rm; Wed, 12 Jun 2024 11:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739255.1146243; Wed, 12 Jun 2024 11:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMX3-0008FZ-M5; Wed, 12 Jun 2024 11:52:49 +0000
Received: by outflank-mailman (input) for mailman id 739255;
 Wed, 12 Jun 2024 11:52:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHMX2-0008DZ-18
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 11:52:48 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4888322d-28b2-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 13:52:45 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6f0e153eddso499071266b.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 04:52:46 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f2942b02fsm299228766b.167.2024.06.12.04.52.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 04:52:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4888322d-28b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718193165; x=1718797965; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=0GtsBUhKtUS/+Nyih41jc/QcusLWAMSW99BhU1x1Fl8=;
        b=MJXxQXBZrO10EoYG7ZNyX35yfp5ZfygYAJaEp/PUtl4Hd022OHZ3NUqWyZMpWKBiuC
         K4Fz6pPCQWF/187gtlh3QXvwTkSrl5HjV7ZX+Y9OXkVyz+Q1xw4Guf43OZYjuUSupdLR
         M/BBmvRNW159aujIy+lPHkR9RApxBt4B1f+XgUD8/BqE+Rz/oJG3OQ9G9A8nusG2C15a
         kETu715T0LBfLaBoewBRLVoCbP8TNsEj8d0uWn9wWtZ9M0xGIwnSd7F7MO0AyTY7ZzFI
         U548reaO+wlWyCJFmelJoK1TDtdjP8wFQLHlsFM4mxSwVUfd+HPMj2iG+NeiZQr06ftp
         XZHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718193165; x=1718797965;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0GtsBUhKtUS/+Nyih41jc/QcusLWAMSW99BhU1x1Fl8=;
        b=k4WqKkEdEmKUqZld0gc5ioTFRX7kdMA+IAGwEKPFjUFeQckfv4Vn8hgtcpRMfEsnis
         HSBKt653Gxg9BNAI1hlm+egHLKZsScDiODTKARn3HOBgiuVouWN8bJItHyKdD7fTnAWf
         B9d8cVjACzRBmteqbK3LnRQhvPaKV8GmasPtE5989ydMXAY3rw/ufcyYgr8VjXSQxkU5
         MsKWuXCljce4xqz+isj1ltJuYPyoUbHJKuKs1VLTYPA9ll2urdomolk0mBME+PJ6tNnY
         aQaLAih8M4qtJfigqhAJvaWDxOSr0NArlYUpdBecp1l18avQhFC510ZMVJNbTSJCSjTo
         EbTQ==
X-Gm-Message-State: AOJu0YzdeQ9iU1ofv3NSl5OZQnHzQn/bnAqFAwc0KzVUZ6jxW9RFUpTa
	toMYClunYkFgGKiz6juF6gmAma+037K+TZyajYVTbhA24S8RDZt11DiZRxznlQ==
X-Google-Smtp-Source: AGHT+IHENud6V7zSoeaqUpCmDgKudaSsfadMj1optQt2VFipfDengS1K4o3GLdbzxBKKkAwlLdk4iw==
X-Received: by 2002:a17:907:7e87:b0:a6f:4b7d:599b with SMTP id a640c23a62f3a-a6f4b7d6bf0mr76758666b.33.1718193165251;
        Wed, 12 Jun 2024 04:52:45 -0700 (PDT)
Message-ID: <cc7442c2-a218-4fc4-bc03-638faf41fd97@suse.com>
Date: Wed, 12 Jun 2024 13:52:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] x86/EPT: relax iPAT for "invalid" MFNs
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <56063a8f-f569-4130-ac25-f0f064e288a1@suse.com>
 <Zmf_k2meED8iG3H5@macbook> <a11259be-7114-4332-b873-d1b163687a3e@suse.com>
 <ZmgStGbVRuGaNUD_@macbook> <f171c98a-c78d-41c8-88d8-7d631b80333b@suse.com>
 <ZmgwKmcLDJDhIsl7@macbook> <b076dc8d-701e-4a9f-a147-c54673959009@suse.com>
 <ZmhWtEyuwjTuIAxK@macbook> <beb67703-c1f0-490a-a3ad-36e2f331f5e4@suse.com>
 <Zmh5mw15_FwITnj1@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <Zmh5mw15_FwITnj1@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11.06.2024 18:21, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 04:53:22PM +0200, Jan Beulich wrote:
>> On 11.06.2024 15:52, Roger Pau Monné wrote:
>>> On Tue, Jun 11, 2024 at 01:52:58PM +0200, Jan Beulich wrote:
>>>> On 11.06.2024 13:08, Roger Pau Monné wrote:
>>>>> I really wonder whether Xen has enough information to figure out
>>>>> whether a hole (MMIO region) is supposed to be accessed as UC or
>>>>> something else.
>>>>
>>>> It certainly hasn't, and hence is erring on the (safe) side of forcing
>>>> UC.
>>>
>>> Except that for the vesa framebuffer at least this is a bad choice :).
>>
>> Well, yes, that's where we want WC to be permitted. But for that we only
>> need to avoid setting iPAT; we still can uniformly hand back UC. Except
>> (as mentioned elsewhere earlier) if the guest uses MTRRs rather than PAT
>> to arrange for WC.
> 
> If we want to get this into 4.19, we likely want to go your proposed
> approach then, as it's less risky.
> 
> I think a comment would be helpful to note that the fix here - not
> enforcing iPAT while still returning UC - is mostly done to allow vesa
> regions mapped with PAT attributes to use WC.
> 
> I would also like to add some kind of note that special casing
> !mfn_valid() might not be needed, but that removing it must be done
> carefully to not cause regressions.

Hmm, in the meantime I have myself sufficiently convinced that with a
small (hopefully easy / uncontroversial) change to ept_set_entry() I
can arrange for the guarantee that neither INVALID_MFN nor a truncated
form of it can make it into the function, allowing the check to be
dropped (as you had initially asked for).

>>>> One caveat here that I forgot to
>>>> mention before: MFNs taken out of EPT entries will never be INVALID_MFN,
>>>> due to the truncation that happens when populating entries. In that case
>>>> we rely on mfn_valid() to be "rejecting" them.
>>>
>>> The only caller where mfns from EPT entries are passed to
>>> epte_get_entry_emt() is in resolve_misconfig() AFAICT, and in that
>>> case the EPT entry must be present for epte_get_entry_emt() to be
>>> called.  So it seems to me that epte_get_entry_emt() can never be
>>> called from an mfn constructed from an INVALID_MFN EPT entry (but it's
>>> worth adding an assert for it).
>>
>> Are you sure? I agree for the first of those two calls, but the second
>> isn't quite as obvious. There we'd need to first prove that we will
>> never create non-present super-page entries. Yet if I'm not mistaken
>> for PoD we may create such.
> 
> I should go look then, didn't know PoD would do that.

I've meanwhile checked, and indeed we do. That's what I hope to make
no longer be the case with said prereq change.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:52:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:52:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739253.1146238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMX3-0008Dm-JY; Wed, 12 Jun 2024 11:52:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739253.1146238; Wed, 12 Jun 2024 11:52:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMX3-0008Df-Fl; Wed, 12 Jun 2024 11:52:49 +0000
Received: by outflank-mailman (input) for mailman id 739253;
 Wed, 12 Jun 2024 11:52:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHMX1-0008DP-Ch; Wed, 12 Jun 2024 11:52:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHMX1-0005No-Ac; Wed, 12 Jun 2024 11:52:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHMX0-0000H1-VX; Wed, 12 Jun 2024 11:52:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHMX0-00088S-V3; Wed, 12 Jun 2024 11:52:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tyRcWju6xGKMNuJ71D6oVlr2FtElMrz+lDenL4m13Ts=; b=yWagQq7oYG8cP1oVU2vCijsvFi
	r07s53DLxpGRojRaTv77q4x+FAEyfn/Nl3a4b02+hkvKgIkctAnW+fYY7QndZNibc19ArpVpZoxoT
	qSB0E2y5ZDe+wFsv9GHSIjgrUKZdTSnP6p6bMIcw6RjFUKCce4vnnMJxm2mflfHplfQE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186319-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186319: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b0e5352c600ce42f109ddb43a4233ac2c9e0abbd
X-Osstest-Versions-That:
    xen=5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 11:52:46 +0000

flight 186319 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186319/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b0e5352c600ce42f109ddb43a4233ac2c9e0abbd
baseline version:
 xen                  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d

Last test of basis   186312  2024-06-11 17:02:07 Z    0 days
Testing same since   186319  2024-06-12 09:02:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5ea7f2c9d7..b0e5352c60  b0e5352c600ce42f109ddb43a4233ac2c9e0abbd -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 11:54:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 11:54:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739268.1146258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMYk-0000rA-49; Wed, 12 Jun 2024 11:54:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739268.1146258; Wed, 12 Jun 2024 11:54:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMYk-0000r3-15; Wed, 12 Jun 2024 11:54:34 +0000
Received: by outflank-mailman (input) for mailman id 739268;
 Wed, 12 Jun 2024 11:54:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oKT8=NO=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sHMYi-0000pT-Qs
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 11:54:32 +0000
Received: from mail-qv1-xf2f.google.com (mail-qv1-xf2f.google.com
 [2607:f8b0:4864:20::f2f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86d03da8-28b2-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 13:54:30 +0200 (CEST)
Received: by mail-qv1-xf2f.google.com with SMTP id
 6a1803df08f44-6b08857f3b8so12026446d6.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 04:54:31 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b093f3a38bsm8121306d6.138.2024.06.12.04.54.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 04:54:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86d03da8-28b2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718193270; x=1718798070; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=3bxY91Y/OFfBPXGRV3EPnTmcgydfN2OXanriSVHk1gs=;
        b=L2AFqe/j+slL4kq8g9XWE7m9glrwhVRVdrDZUF1UmUsj2+XR6FXrlFKduDYEFCfiKz
         IdOZJrgaoPQnaQnjAj6Wbf9PYaCHCLbM6JiXGQII1O5DTlOvgDQ2L7jTKYTX77opJ7rY
         zTbsHiDqqeRFFcMkjGFha1Uy5o8G468X7/kow=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718193270; x=1718798070;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3bxY91Y/OFfBPXGRV3EPnTmcgydfN2OXanriSVHk1gs=;
        b=wdVuw+d6+5PaI52F6YXphXP1S5V1mvN3T9mM+RNVX808B+iqLqfGCssBJYl5O4v1M9
         4DX82aR7h/G88d2ubhswLR0DxEUjbvlip6JF+kgvPFT0QMlcc1AmkSnPVVb0/ipp1sOJ
         W46bLLXpqHFQ+sF2dkz67ZvQ0uV0oE5DVF8oDWj1ISZDN9qieKT7hy+GpCM6s+ETVJ3C
         IJBTvYlSOwGocjkZ4xPfo3EU9E02+t0bIbgyg4/DHQXp8KrrEn/W03E2ufQzKyYvNe5/
         eJxQ3s5oVRFYIVZZFN/JCi9Z7mmxg0uGkkqL8ssaecD487nrUL2tY4yIiMBBOHApWYGd
         J+1A==
X-Forwarded-Encrypted: i=1; AJvYcCURGFPirX93akDXBjrWw2MysDox9v5X/xbHClqNNurgAo4c2mFdLXQ2UxrmdCj1WK+MkJJCNPZrK/mkVmWy/C/JaLiriIKV/CIvz5G63Ig=
X-Gm-Message-State: AOJu0Yxpgsnqt0jSSFWG5iO09IU3ml3SXv5Kmv84BdYGSceMMsjYVrLY
	Xtj4JWZnV/Z1OiZAOSO+Ql73rfgZUHY+DSRoal79q8DtYEWhDlQCE+/PyTtoii4=
X-Google-Smtp-Source: AGHT+IF63grcHPGjkTv5wNaLdMCCfx5MwPhDjVlnHzJCbcsK5Np7aIPGKpELKnYFitlkDzseNa64AA==
X-Received: by 2002:a05:6214:590b:b0:6b0:623c:15d1 with SMTP id 6a1803df08f44-6b1a7ad46e6mr15856426d6.56.1718193269915;
        Wed, 12 Jun 2024 04:54:29 -0700 (PDT)
Message-ID: <a39d06c2-0c1e-41dc-833e-d73c9c45e30f@citrix.com>
Date: Wed, 12 Jun 2024 12:54:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19???] x86/physdev: replace physdev_{,un}map_pirq()
 checking against DOMID_SELF
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Chen Jiqian <Jiqian.Chen@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c7d12669-7851-4701-9b2d-0b22f9d32c1d@suse.com>
 <37ccb940-dfcd-419d-8cea-93800fd2c865@citrix.com>
 <5f568372-5114-4cd7-92e1-aae5028c923c@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <5f568372-5114-4cd7-92e1-aae5028c923c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12/06/2024 12:45 pm, Jan Beulich wrote:
> On 12.06.2024 13:05, Andrew Cooper wrote:
>> On 12/06/2024 9:44 am, Jan Beulich wrote:
>>> It's hardly ever correct to check for just DOMID_SELF, as guests have
>>> ways to figure out their domain IDs and hence could instead use those as
>>> inputs to respective hypercalls. Note, however, that for ordinary DomU-s
>>> the adjustment is relaxing things rather than tightening them, since
>>> - as a result of XSA-237 - the respective XSM checks would have rejected
>>> self (un)mapping attempts for other than the control domain.
>>>
>>> Since in physdev_map_pirq() handling overall is a little easier this
>>> way, move obtaining of the domain pointer into the caller. Doing the
>>> same for physdev_unmap_pirq() is just to keep both consistent in this
>>> regard. For both this has the advantage that it is now provable (by the
>>> build not failing) that there are no DOMID_SELF checks left (and none
>>> could easily be re-added).
>>>
>>> Fixes: 0b469cd68708 ("Interrupt remapping to PIRQs in HVM guests")
>>> Fixes: 9e1a3415b773 ("x86: fixes after emuirq changes")
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> I think it is right to perform the domid lookup in do_physdev_op() and
>> pass d down into physdev_{un,}map_pirq().
>>
>> But I don't see what this has to do with the build failing.  You're not
>> undef-ing DOMID_SELF, so I don't see what kind of provability you've added.
> I'm talking of provability for the two functions in question. Not
> globally of course.

I'd suggest simply dropping the sentence.

I don't think it adds anything (both functions are trivially short) to
the overall message, and the "build breaking" part in particular is at
odds with the change.

>>> --- a/xen/arch/x86/physdev.c
>>> +++ b/xen/arch/x86/physdev.c
>>> @@ -184,6 +170,8 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>>>  
>>>      switch ( cmd )
>>>      {
>>> +        struct domain *d;
>>> +
>> Please don't introduce any more of these.
>>
>> We've discussed several times about wanting to start using trivial
>> autovar init support, and every one of these additions is going to need
>> reverting.
>>
>> In this case, there's literally no difference having it at function scope.
> Will do; sorry, habits.

Thanks.

With that, and preferably the adjusted commit message, Reviewed-by:
Andrew Cooper <andrew.cooper3@citrix.com>
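[The switch-scope point discussed above can be illustrated with a small
standalone sketch (hypothetical code, not the actual physdev.c): a
declaration placed between "switch ( cmd ) {" and the first case label
is in scope for the whole statement, but an initializer there would
never execute, since control jumps straight to a case label; for an
uninitialized variable, function scope is therefore equivalent.]

```c
/*
 * Illustrative sketch only, not the actual xen/arch/x86/physdev.c code.
 * A declaration between "switch ( cmd ) {" and the first case label is
 * in scope for the whole switch, but any initializer on it would never
 * run, which is why hoisting it to function scope loses nothing.
 */
static int lookup(int cmd)
{
    int d = -1;                 /* function scope: the initializer runs */

    switch ( cmd )
    {
    case 1:
        d = 42;
        break;

    default:
        break;
    }

    return d;
}
```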


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 12:16:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 12:16:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739291.1146268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMuJ-0005Do-Vi; Wed, 12 Jun 2024 12:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739291.1146268; Wed, 12 Jun 2024 12:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMuJ-0005Dh-SU; Wed, 12 Jun 2024 12:16:51 +0000
Received: by outflank-mailman (input) for mailman id 739291;
 Wed, 12 Jun 2024 12:16:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AdqZ=NO=gmail.com=milandjokic1995@srs-se1.protection.inumbo.net>)
 id 1sHMuI-0005Db-L6
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 12:16:50 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4db3b3f-28b5-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 14:16:49 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-4210aa00c94so56973025e9.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 05:16:49 -0700 (PDT)
Received: from Xen-host.domain.local ([89.216.37.146])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-422a1e68d04sm4567705e9.36.2024.06.12.05.16.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 05:16:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4db3b3f-28b5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718194608; x=1718799408; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=S99lrXXA8NoM9HNxPtZJ+V98dLUlIFlE99OYkDQ01bw=;
        b=f+0FZzlgOZmo/JChgGFe22sdzv1DJfMxMFa2VVBRnvyfsWLzCfkkcwbrayojJXfmXB
         gnToiW9h292OXfvuVkAouSv55LCfXGgJkcbIW0FxzvMlQIRKtZtyljI+41HAlDp8DsoL
         7AbS2sq2DaQ35Ml6dXFlfBpwh770ske/kht8DRIZcjWHH/yWG4RIl4+66+uiwq2cmVAo
         /RxrAbFzNzNnsEjNXr6jVJA+0uUzlrLKk0CLhED7T7JaHUXVd5uPBp1BlHu0bBefsnHA
         +2ZUFRor4AtBlGWojffoyUul1nq2J1Yz1/YgDaAWhBKpGBb8Q6I+qk5FFeTTTeYQ1IqS
         oaJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718194608; x=1718799408;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=S99lrXXA8NoM9HNxPtZJ+V98dLUlIFlE99OYkDQ01bw=;
        b=M0351e/kmTfAIwXcRXuCcB1kLiGqfW4+hGh8q1LGyvGLOfVtSNzFripq1KrSenMMtN
         W1DmuLseQAOc/ILWut++zi29erdNB7R3pQTLSPp6NwV+aVVwPXQU8xdl/usmxmQZZh2k
         p3otbthTwF+fSltGVB1s+UXn6aRB7Q1M3rwTNZYSV5QJKxtiY7Tf567OkhdknW/orrlH
         dABEvA4YQgWG5KtWBuTRCb3er4iAJcAJxlY/L6eNwWsyKVh9nkjwaOfMuq1RTXVXDeQc
         BSkaw3mssavZX7k25lQZ0RtCxGi6TptAEd2KtLFRR0nb80mheMI/WVUj9+tSrTjFqYAH
         sQTg==
X-Gm-Message-State: AOJu0YzijPm8O+DIOeq9ix2kMaxVD8k4xs6rLoBTwjbxeZJhQqaQlrEn
	ta4DZ4vYNmMaY8Z5qbjVHy8TXFD48OM1vpqq+7lBpsMhW1t6HHVdR1MDZriVyGA=
X-Google-Smtp-Source: AGHT+IHOfgVHbRT2rETFlqz8L9ilmWJLeglNxaHjjabF332uZMkUE3PlReKTC9YCYHDp9on6AGoxwg==
X-Received: by 2002:a05:600c:4e52:b0:421:de31:89 with SMTP id 5b1f17b1804b1-422863b6249mr13439415e9.18.1718194608193;
        Wed, 12 Jun 2024 05:16:48 -0700 (PDT)
From: milandjokic1995@gmail.com
To: xen-devel@lists.xenproject.org
Cc: milan.djokic@rt-rk.com,
	Nikola Jelic <nikola.jelic@rt-rk.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/riscv: PE/COFF image header for RISC-V target
Date: Wed, 12 Jun 2024 14:15:45 +0200
Message-Id: <0e10ee9c215269b577321ba44f5d038a5eb299a7.1718193326.git.milan.djokic@rt-rk.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
References: <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Nikola Jelic <nikola.jelic@rt-rk.com>

Extend the RISC-V Xen image with PE/COFF headers, in order to support
booting Xen from popular bootloaders such as U-Boot. The image header
is included optionally (with CONFIG_RISCV_EFI), so both a plain ELF
image and an image with a PE/COFF header can now be generated as build
artifacts. Note that this patch also represents initial EFI application
format support (the image contains an EFI application header embedded
into the binary when CONFIG_RISCV_EFI is enabled). For full EFI
application support in Xen, boot/runtime services and system table
handling are yet to be implemented.

Tested on both QEMU and the StarFive VisionFive 2 with an
OpenSBI->U-Boot->Xen->dom0 boot chain.

Signed-off-by: Nikola Jelic <nikola.jelic@rt-rk.com>

---
Changes since v1 (following review comments from Jan Beulich):
  * Fix coding style
  * Extended image header with all the necessary PE/COFF (EFI) fields
    (instead of only those used by U-Boot)
  * Removed usage of deprecated types
---
 xen/arch/riscv/Kconfig                        |   9 ++
 xen/arch/riscv/include/asm/pe.h               | 148 ++++++++++++++++++
 .../riscv/include/asm/riscv_image_header.h    |  54 +++++++
 xen/arch/riscv/riscv64/head.S                 | 141 ++++++++++++++++-
 xen/arch/riscv/xen.lds.S                      |   6 +-
 5 files changed, 356 insertions(+), 2 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/pe.h
 create mode 100644 xen/arch/riscv/include/asm/riscv_image_header.h

diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
index f382b36f6c..59bf5aa2a6 100644
--- a/xen/arch/riscv/Kconfig
+++ b/xen/arch/riscv/Kconfig
@@ -9,6 +9,15 @@ config ARCH_DEFCONFIG
 	string
 	default "arch/riscv/configs/tiny64_defconfig"
 
+config RISCV_EFI
+	bool "UEFI boot service support"
+	depends on RISCV_64
+	default n
+	help
+	  This option provides support for boot services through
+	  UEFI firmware. A UEFI stub is provided to allow Xen to
+	  be booted as an EFI application.
+
 menu "Architecture Features"
 
 source "arch/Kconfig"
diff --git a/xen/arch/riscv/include/asm/pe.h b/xen/arch/riscv/include/asm/pe.h
new file mode 100644
index 0000000000..084de1e712
--- /dev/null
+++ b/xen/arch/riscv/include/asm/pe.h
@@ -0,0 +1,148 @@
+#ifndef _ASM_RISCV_PE_H
+#define _ASM_RISCV_PE_H
+
+#define LINUX_EFISTUB_MAJOR_VERSION     0x1
+#define LINUX_EFISTUB_MINOR_VERSION     0x0
+
+#define MZ_MAGIC                    0x5a4d          /* "MZ" */
+
+#define PE_MAGIC                    0x00004550      /* "PE\0\0" */
+#define PE_OPT_MAGIC_PE32           0x010b
+#define PE_OPT_MAGIC_PE32PLUS       0x020b
+
+/* machine type */
+#define IMAGE_FILE_MACHINE_RISCV32  0x5032
+#define IMAGE_FILE_MACHINE_RISCV64  0x5064
+
+/* flags */
+#define IMAGE_FILE_EXECUTABLE_IMAGE 0x0002
+#define IMAGE_FILE_LINE_NUMS_STRIPPED 0x0004
+#define IMAGE_FILE_DEBUG_STRIPPED   0x0200
+#define IMAGE_SUBSYSTEM_EFI_APPLICATION 10
+
+#define IMAGE_SCN_CNT_CODE          0x00000020      /* .text */
+#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040   /* .data */
+#define IMAGE_SCN_MEM_EXECUTE       0x20000000
+#define IMAGE_SCN_MEM_READ          0x40000000      /* readable */
+#define IMAGE_SCN_MEM_WRITE         0x80000000      /* writeable */
+
+#ifndef __ASSEMBLY__
+
+struct mz_hdr {
+    uint16_t magic;                  /* MZ_MAGIC */
+    uint16_t lbsize;                 /* size of last used block */
+    uint16_t blocks;                 /* pages in file, 0x3 */
+    uint16_t relocs;                 /* relocations */
+    uint16_t hdrsize;                /* header size in "paragraphs" */
+    uint16_t min_extra_pps;          /* .bss */
+    uint16_t max_extra_pps;          /* runtime limit for the arena size */
+    uint16_t ss;                     /* relative stack segment */
+    uint16_t sp;                     /* initial %sp register */
+    uint16_t checksum;               /* word checksum */
+    uint16_t ip;                     /* initial %ip register */
+    uint16_t cs;                     /* initial %cs relative to load segment */
+    uint16_t reloc_table_offset;     /* offset of the first relocation */
+    uint16_t overlay_num;
+    uint16_t reserved0[4];
+    uint16_t oem_id;
+    uint16_t oem_info;
+    uint16_t reserved1[10];
+    uint32_t peaddr;                 /* address of pe header */
+    char     message[];              /* message to print */
+};
+
+struct pe_hdr {
+    uint32_t magic;                  /* PE magic */
+    uint16_t machine;                /* machine type */
+    uint16_t sections;               /* number of sections */
+    uint32_t timestamp;
+    uint32_t symbol_table;           /* symbol table offset */
+    uint32_t symbols;                /* number of symbols */
+    uint16_t opt_hdr_size;           /* size of optional header */
+    uint16_t flags;                  /* flags */
+};
+
+struct pe32_opt_hdr {
+    /* "standard" header */
+    uint16_t magic;                  /* file type */
+    uint8_t  ld_major;               /* linker major version */
+    uint8_t  ld_minor;               /* linker minor version */
+    uint32_t text_size;
+    uint32_t data_size;
+    uint32_t bss_size;
+    uint32_t entry_point;            /* file offset of entry point */
+    uint32_t code_base;              /* relative code addr in ram */
+    uint32_t data_base;              /* relative data addr in ram */
+    /* "extra" header fields */
+    uint32_t image_base;             /* preferred load address */
+    uint32_t section_align;          /* alignment in bytes */
+    uint32_t file_align;             /* file alignment in bytes */
+    uint16_t os_major;
+    uint16_t os_minor;
+    uint16_t image_major;
+    uint16_t image_minor;
+    uint16_t subsys_major;
+    uint16_t subsys_minor;
+    uint32_t win32_version;          /* reserved, must be 0 */
+    uint32_t image_size;
+    uint32_t header_size;
+    uint32_t csum;
+    uint16_t subsys;
+    uint16_t dll_flags;
+    uint32_t stack_size_req;         /* amt of stack requested */
+    uint32_t stack_size;             /* amt of stack required */
+    uint32_t heap_size_req;          /* amt of heap requested */
+    uint32_t heap_size;              /* amt of heap required */
+    uint32_t loader_flags;           /* reserved, must be 0 */
+    uint32_t data_dirs;              /* number of data dir entries */
+};
+
+struct pe32plus_opt_hdr {
+    uint16_t magic;                  /* file type */
+    uint8_t  ld_major;               /* linker major version */
+    uint8_t  ld_minor;               /* linker minor version */
+    uint32_t text_size;
+    uint32_t data_size;
+    uint32_t bss_size;
+    uint32_t entry_point;            /* file offset of entry point */
+    uint32_t code_base;              /* relative code addr in ram */
+    /* "extra" header fields */
+    uint64_t image_base;             /* preferred load address */
+    uint32_t section_align;          /* alignment in bytes */
+    uint32_t file_align;             /* file alignment in bytes */
+    uint16_t os_major;
+    uint16_t os_minor;
+    uint16_t image_major;
+    uint16_t image_minor;
+    uint16_t subsys_major;
+    uint16_t subsys_minor;
+    uint32_t win32_version;          /* reserved, must be 0 */
+    uint32_t image_size;
+    uint32_t header_size;
+    uint32_t csum;
+    uint16_t subsys;
+    uint16_t dll_flags;
+    uint64_t stack_size_req;         /* amt of stack requested */
+    uint64_t stack_size;             /* amt of stack required */
+    uint64_t heap_size_req;          /* amt of heap requested */
+    uint64_t heap_size;              /* amt of heap required */
+    uint32_t loader_flags;           /* reserved, must be 0 */
+    uint32_t data_dirs;              /* number of data dir entries */
+};
+
+struct section_header {
+    char     name[8];                /* name or "/12\0" string tbl offset */
+    uint32_t virtual_size;           /* size of loaded section in ram */
+    uint32_t virtual_address;        /* relative virtual address */
+    uint32_t raw_data_size;          /* size of the section */
+    uint32_t data_addr;              /* file pointer to first page of section */
+    uint32_t relocs;                 /* file pointer to relocation entries */
+    uint32_t line_numbers;
+    uint16_t num_relocs;
+    uint16_t num_lin_numbers;
+    uint32_t flags;
+};
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PE_H */
diff --git a/xen/arch/riscv/include/asm/riscv_image_header.h b/xen/arch/riscv/include/asm/riscv_image_header.h
new file mode 100644
index 0000000000..89c7511d56
--- /dev/null
+++ b/xen/arch/riscv/include/asm/riscv_image_header.h
@@ -0,0 +1,54 @@
+#ifndef _ASM_RISCV_IMAGE_H
+#define _ASM_RISCV_IMAGE_H
+
+#define RISCV_IMAGE_MAGIC "RISCV\0\0\0"
+#define RISCV_IMAGE_MAGIC2 "RSC\x05"
+
+#define RISCV_IMAGE_FLAG_BE_SHIFT 0
+
+#define RISCV_IMAGE_FLAG_LE 0
+#define RISCV_IMAGE_FLAG_BE 1
+
+#define __HEAD_FLAG_BE RISCV_IMAGE_FLAG_LE
+
+#define __HEAD_FLAG(field) (__HEAD_FLAG_##field << RISCV_IMAGE_FLAG_##field##_SHIFT)
+
+#define __HEAD_FLAGS (__HEAD_FLAG(BE))
+
+#define RISCV_HEADER_VERSION_MAJOR 0
+#define RISCV_HEADER_VERSION_MINOR 2
+
+#define RISCV_HEADER_VERSION (RISCV_HEADER_VERSION_MAJOR << 16 | \
+                              RISCV_HEADER_VERSION_MINOR)
+
+#ifndef __ASSEMBLY__
+/*
+ * struct riscv_image_header - riscv xen image header
+ *
+ * @code0:              Executable code
+ * @code1:              Executable code
+ * @text_offset:        Image load offset
+ * @image_size:         Effective Image size
+ * @reserved:           reserved
+ * @reserved:           reserved
+ * @reserved:           reserved
+ * @magic:              Magic number
+ * @reserved:           reserved
+ * @reserved:           reserved (will be used for PE COFF offset)
+ */
+
+struct riscv_image_header
+{
+    uint32_t code0;
+    uint32_t code1;
+    uint64_t text_offset;
+    uint64_t image_size;
+    uint64_t res1;
+    uint64_t res2;
+    uint64_t res3;
+    uint64_t magic;
+    uint32_t res4;
+    uint32_t res5;
+};
+#endif /* __ASSEMBLY__ */
+#endif /* _ASM_RISCV_IMAGE_H */
diff --git a/xen/arch/riscv/riscv64/head.S b/xen/arch/riscv/riscv64/head.S
index 3261e9fce8..609638b921 100644
--- a/xen/arch/riscv/riscv64/head.S
+++ b/xen/arch/riscv/riscv64/head.S
@@ -1,14 +1,150 @@
 #include <asm/asm.h>
 #include <asm/riscv_encoding.h>
+#include <asm/riscv_image_header.h>
+#ifdef CONFIG_RISCV_EFI
+#include <asm/pe.h>
+#endif
 
         .section .text.header, "ax", %progbits
 
         /*
          * OpenSBI pass to start():
          *   a0 -> hart_id ( bootcpu_id )
-         *   a1 -> dtb_base 
+         *   a1 -> dtb_base
          */
 FUNC(start)
+
+efi_head:
+
+#ifdef CONFIG_RISCV_EFI
+        /*
+         * This instruction decodes to "MZ" ASCII required by UEFI.
+         */
+        c.li s4,-13
+        j xen_start
+#else
+        /* jump to start kernel */
+        j xen_start
+        /* reserved */
+        .word 0
+#endif
+        .balign 8
+#ifdef CONFIG_RISCV_64
+        /* Image load offset(2MB) from start of RAM */
+        .dword 0x200000
+#else
+        /* Image load offset(4MB) from start of RAM */
+        .dword 0x400000
+#endif
+        /* Effective size of xen image */
+        .dword _end - _start
+        .dword __HEAD_FLAGS
+        .word RISCV_HEADER_VERSION
+        .word 0
+        .dword 0
+        .ascii RISCV_IMAGE_MAGIC
+        .balign 4
+        .ascii RISCV_IMAGE_MAGIC2
+#ifndef CONFIG_RISCV_EFI
+        .word 0
+#else
+        .word pe_head_start - efi_head
+pe_head_start:
+        .long	PE_MAGIC
+coff_header:
+#ifdef CONFIG_RISCV_64
+        .short  IMAGE_FILE_MACHINE_RISCV64              /* Machine */
+#else
+        .short  IMAGE_FILE_MACHINE_RISCV32              /* Machine */
+#endif
+        .short  section_count                           /* NumberOfSections */
+        .long   0                                       /* TimeDateStamp */
+        .long   0                                       /* PointerToSymbolTable */
+        .long   0                                       /* NumberOfSymbols */
+        .short  section_table - optional_header         /* SizeOfOptionalHeader */
+        .short  IMAGE_FILE_DEBUG_STRIPPED | \
+                IMAGE_FILE_EXECUTABLE_IMAGE | \
+                IMAGE_FILE_LINE_NUMS_STRIPPED           /* Characteristics */
+
+optional_header:
+#ifdef CONFIG_RISCV_64
+        .short  PE_OPT_MAGIC_PE32PLUS                   /* PE32+ format */
+#else
+        .short  PE_OPT_MAGIC_PE32                       /* PE32 format */
+#endif
+        .byte   0x02                                    /* MajorLinkerVersion */
+        .byte   0x14                                    /* MinorLinkerVersion */
+        .long   _end - xen_start                        /* SizeOfCode */
+        .long   0                                       /* SizeOfInitializedData */
+        .long   0                                       /* SizeOfUninitializedData */
+        .long   0                                       /* AddressOfEntryPoint */
+        .long   xen_start - efi_head                    /* BaseOfCode */
+
+extra_header_fields:
+        .quad   0                                       /* ImageBase */
+        .long   PECOFF_SECTION_ALIGNMENT                /* SectionAlignment */
+        .long   PECOFF_FILE_ALIGNMENT                   /* FileAlignment */
+        .short  0                                       /* MajorOperatingSystemVersion */
+        .short  0                                       /* MinorOperatingSystemVersion */
+        .short  LINUX_EFISTUB_MAJOR_VERSION             /* MajorImageVersion */
+        .short  LINUX_EFISTUB_MINOR_VERSION             /* MinorImageVersion */
+        .short  0                                       /* MajorSubsystemVersion */
+        .short  0                                       /* MinorSubsystemVersion */
+        .long   0                                       /* Win32VersionValue */
+        .long   _end - efi_head                         /* SizeOfImage */
+
+        /* Everything before the xen image is considered part of the header */
+        .long   xen_start - efi_head                    /* SizeOfHeaders */
+        .long   0                                       /* CheckSum */
+        .short  IMAGE_SUBSYSTEM_EFI_APPLICATION         /* Subsystem */
+        .short  0                                       /* DllCharacteristics */
+        .quad   0                                       /* SizeOfStackReserve */
+        .quad   0                                       /* SizeOfStackCommit */
+        .quad   0                                       /* SizeOfHeapReserve */
+        .quad   0                                       /* SizeOfHeapCommit */
+        .long   0                                       /* LoaderFlags */
+        .long   (section_table - .) / 8                 /* NumberOfRvaAndSizes */
+        .quad   0                                       /* ExportTable */
+        .quad   0                                       /* ImportTable */
+        .quad   0                                       /* ResourceTable */
+        .quad   0                                       /* ExceptionTable */
+        .quad   0                                       /* CertificationTable */
+        .quad   0                                       /* BaseRelocationTable */
+
+/* Section table */
+section_table:
+        .ascii  ".text\0\0\0"
+        .long   0
+        .long   0
+        .long   0                                       /* SizeOfRawData */
+        .long   0                                       /* PointerToRawData */
+        .long   0                                       /* PointerToRelocations */
+        .long   0                                       /* PointerToLineNumbers */
+        .short  0                                       /* NumberOfRelocations */
+        .short  0                                       /* NumberOfLineNumbers */
+        .long   IMAGE_SCN_CNT_CODE | \
+                IMAGE_SCN_MEM_READ | \
+                IMAGE_SCN_MEM_EXECUTE                   /* Characteristics */
+
+        .ascii  ".data\0\0\0"
+        .long   _end - xen_start                        /* VirtualSize */
+        .long   xen_start - efi_head                    /* VirtualAddress */
+        .long   __init_end_efi - xen_start              /* SizeOfRawData */
+        .long   xen_start - efi_head                    /* PointerToRawData */
+        .long   0                                       /* PointerToRelocations */
+        .long   0                                       /* PointerToLineNumbers */
+        .short  0                                       /* NumberOfRelocations */
+        .short  0                                       /* NumberOfLineNumbers */
+        .long   IMAGE_SCN_CNT_INITIALIZED_DATA | \
+                IMAGE_SCN_MEM_READ | \
+                IMAGE_SCN_MEM_WRITE                    /* Characteristics */
+
+        .set    section_count, (. - section_table) / 40
+
+        .balign  0x1000
+#endif/* CONFIG_RISCV_EFI */
+
+FUNC(xen_start)
         /* Mask all interrupts */
         csrw    CSR_SIE, zero
 
@@ -60,6 +196,9 @@ FUNC(start)
         mv      a1, s1
 
         tail    start_xen
+
+END(xen_start)
+
 END(start)
 
         .section .text, "ax", %progbits
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
index 8510a87c4d..2eddde43c1 100644
--- a/xen/arch/riscv/xen.lds.S
+++ b/xen/arch/riscv/xen.lds.S
@@ -12,6 +12,9 @@ PHDRS
 #endif
 }
 
+PECOFF_SECTION_ALIGNMENT = 0x1000;
+PECOFF_FILE_ALIGNMENT = 0x200;
+
 SECTIONS
 {
     . = XEN_VIRT_START;
@@ -144,7 +147,7 @@ SECTIONS
     .got.plt : {
         *(.got.plt)
     } : text
-
+    __init_end_efi = .;
     . = ALIGN(POINTER_ALIGN);
     __init_end = .;
 
@@ -165,6 +168,7 @@ SECTIONS
         . = ALIGN(POINTER_ALIGN);
         __bss_end = .;
     } :text
+
     _end = . ;
 
     /* Section for the device tree blob (if any). */
-- 
2.25.1
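[The head.S comment above, that "c.li s4,-13" decodes to the ASCII
bytes "MZ" required by UEFI, can be sanity-checked by encoding the
compressed instruction by hand. This is a standalone sketch following
the RVC C.LI field layout, not code from the patch.]

```c
#include <stdint.h>

/*
 * Standalone sanity check (not code from the patch): encode the RISC-V
 * compressed instruction C.LI by hand, per the RVC field layout
 * [15:13]=funct3(010), [12]=imm[5], [11:7]=rd, [6:2]=imm[4:0],
 * [1:0]=01, and confirm that "c.li s4,-13" (s4 is x20) assembles to
 * 0x5a4d, i.e. the little-endian bytes 'M','Z' that UEFI loaders
 * expect at the start of an image.
 */
static uint16_t c_li(unsigned int rd, int imm)
{
    uint16_t insn = 0;

    insn |= 2u << 13;                               /* funct3 = 010 (C.LI) */
    insn |= (((unsigned int)imm >> 5) & 1u) << 12;  /* imm[5] */
    insn |= (rd & 0x1fu) << 7;                      /* rd */
    insn |= ((unsigned int)imm & 0x1fu) << 2;       /* imm[4:0] */
    insn |= 1u;                                     /* op = 01 (quadrant 1) */

    return insn;
}
```

[The resulting 0x5a4d matches the MZ_MAGIC value defined in the new
asm/pe.h above.]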



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 12:19:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 12:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739295.1146278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMwX-000662-A5; Wed, 12 Jun 2024 12:19:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739295.1146278; Wed, 12 Jun 2024 12:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHMwX-00065v-7P; Wed, 12 Jun 2024 12:19:09 +0000
Received: by outflank-mailman (input) for mailman id 739295;
 Wed, 12 Jun 2024 12:19:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bXq+=NO=bounce.vates.tech=bounce-md_30504962.66699237.v1-867ad23be80446a7ae42bac702f26d9f@srs-se1.protection.inumbo.net>)
 id 1sHMwV-00065n-BY
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 12:19:07 +0000
Received: from mail180-20.suw31.mandrillapp.com
 (mail180-20.suw31.mandrillapp.com [198.2.180.20])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f549961d-28b5-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 14:19:04 +0200 (CEST)
Received: from pmta11.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail180-20.suw31.mandrillapp.com (Mailchimp) with ESMTP id
 4Vzl2b5GkVzFCWkXd
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 12:19:03 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 867ad23be80446a7ae42bac702f26d9f; Wed, 12 Jun 2024 12:19:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f549961d-28b5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718194743; x=1718455243;
	bh=CCZxrXkAmvRoGJj76f4KfCKkWKu978lQtcipbRAaLzo=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=v1nBT4mqsKnOMzey1Yq3iRB4NXZXKwW1daXOcOvPz+Puz3KcTxBIhj4P589ZvLSFJ
	 Wtro3KxfdMq8Gel/To4tf95JKraew5JrLruK4SnZ6CGDIyFf8Jkm4SS+F9jAjLyAmR
	 H3Let47zIt5DUiKOvW06/IscSnFNQ7UYcYtdRkzX7ZqH49K0tt0xvIlBlyJLf2CQrj
	 Mz8gbgpsbLbMNejAbgYAd772JfCojWOnl/j7vJJgJI9rZblZe9zdWoS6ISdGxAeLVx
	 P5ljqDPbMkejqwCIn+YIfMW5RTgnFBKa8FjfOjx8IRr9L3+zk9TyDHXu4d3hKQ1iHA
	 fYKa5kcpvAFRQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718194743; x=1718455243; i=anthony.perard@vates.tech;
	bh=CCZxrXkAmvRoGJj76f4KfCKkWKu978lQtcipbRAaLzo=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=NDNy3OAoFXOSm4nb8KjyhaHC/hObaVL/on975D89MAdJ4LtkXYzD085vJIa+V8MaL
	 uSpqjzz5NsfwSDvmhYWcwF4Q3Ub0/McMz5t4L7pfNzyJXlHi+IZ9aj9xSZMxIP/SK8
	 CUubCdA8SA/LogwTE73nmfGhZRZ4T8kWxQ2yw6QIW3af5PaiFB/ZYh6kOWdFVeJ00L
	 m+eFD660kxZWm228GLE8gyo67K5QzsNvV6Hs1i8kr1Zae5bU5OftARSEACXu8PAvfp
	 srPasW2yohk62WkwGPutFLQ8iYtdTZCcrlBKAd7dNYtX/vWfoKebrMxQn2izzWYHxl
	 ClqYoMceMkGpw==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20v4=203/4]=20tools/init-dom0less:=20Avoid=20hardcoding=20GUEST=5FMAGIC=5FBASE?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718194741680
To: Stefano Stabellini <stefano.stabellini@amd.com>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org, bertrand.marquis@arm.com, michal.orzel@amd.com, Volodymyr_Babchuk@epam.com, Henry Wang <xin.wang2@amd.com>, Alec Kwapis <alec.kwapis@medtronic.com>, Jason Andryuk <jason.andryuk@amd.com>
Message-Id: <ZmmSNWaOp6RYcrmU@l14>
References: <alpine.DEB.2.22.394.2405241552240.2557291@ubuntu-linux-20-04-desktop> <20240524225522.2878481-3-stefano.stabellini@amd.com>
In-Reply-To: <20240524225522.2878481-3-stefano.stabellini@amd.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.867ad23be80446a7ae42bac702f26d9f?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240612:md
Date: Wed, 12 Jun 2024 12:19:03 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Fri, May 24, 2024 at 03:55:21PM -0700, Stefano Stabellini wrote:
> From: Henry Wang <xin.wang2@amd.com>
> 
> Currently the GUEST_MAGIC_BASE in the init-dom0less application is
> hardcoded, which will lead to failures for 1:1 direct-mapped Dom0less
> DomUs.
> 
> Since the guest magic region allocation from init-dom0less is for
> XenStore, and the XenStore page is now allocated from the hypervisor,
> instead of hardcoding the guest magic pages region, use
> xc_hvm_param_get() to get the XenStore page PFN. Rename alloc_xs_page()
> to get_xs_page() to reflect the changes.
> 
> With this change, some existing code is not needed anymore, including:
> (1) The definition of the XenStore page offset.
> (2) Call to xc_domain_setmaxmem() and xc_clear_domain_page() as we
>     don't need to set the max mem and clear the page anymore.
> (3) Foreign mapping of the XenStore page, setting of XenStore interface
>     status and HVM_PARAM_STORE_PFN from init-dom0less, as they are set
>     by the hypervisor.
> 
> Take the opportunity to do some coding style improvements when possible.
> 
> Reported-by: Alec Kwapis <alec.kwapis@medtronic.com>
> Signed-off-by: Henry Wang <xin.wang2@amd.com>
> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
> ---
> +static int get_xs_page(struct xc_interface_core *xch, libxl_dominfo *info,
> +                       uint64_t *xenstore_pfn)
>  {

[...]

> +    rc = xc_hvm_param_get(xch, info->domid, HVM_PARAM_STORE_PFN, xenstore_pfn);
> +    if (rc < 0) {
> +        printf("Failed to get HVM_PARAM_STORE_PFN\n");

Shouldn't we print the error message to "stderr" instead?
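A minimal sketch of the suggested change (illustrative only, not the actual init-dom0less code; the helper name and message format are invented): report the failure on stderr rather than stdout, and include the returned rc so the caller's diagnostics stay useful.

```c
#include <stdio.h>

/* Hypothetical helper mirroring the reviewed pattern: print to stderr
 * (not stdout) on failure and propagate a nonzero status. This is a
 * sketch of the review suggestion, not the posted patch. */
static int report_param_failure(int rc)
{
    if (rc < 0) {
        fprintf(stderr, "Failed to get HVM_PARAM_STORE_PFN (rc=%d)\n", rc);
        return 1;
    }
    return 0;
}
```

Keeping error text off stdout matters for tools that parse the program's normal output.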

> @@ -245,20 +232,11 @@ static int init_domain(struct xs_handle *xsh,
>      if (!xenstore_evtchn)
>          return 0;
>  
> -    /* Alloc xenstore page */
> -    if (alloc_xs_page(xch, info, &xenstore_pfn) != 0) {
> -        printf("Error on alloc magic pages\n");
> -        return 1;
> -    }
> -
> -    intf = xenforeignmemory_map(xfh, info->domid, PROT_READ | PROT_WRITE, 1,
> -                                &xenstore_pfn, NULL);
> -    if (!intf) {
> -        printf("Error mapping xenstore page\n");
> +    /* Get xenstore page */
> +    if (get_xs_page(xch, info, &xenstore_pfn) != 0) {
> +        printf("Error on getting xenstore page\n");

Same here.


In any case:
Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 12:39:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 12:39:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739307.1146304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNGG-0002MZ-0L; Wed, 12 Jun 2024 12:39:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739307.1146304; Wed, 12 Jun 2024 12:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNGF-0002MS-Sw; Wed, 12 Jun 2024 12:39:31 +0000
Received: by outflank-mailman (input) for mailman id 739307;
 Wed, 12 Jun 2024 12:39:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHNGF-0002MI-0D; Wed, 12 Jun 2024 12:39:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHNGE-0006RW-Tq; Wed, 12 Jun 2024 12:39:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHNGE-00039i-In; Wed, 12 Jun 2024 12:39:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHNGE-0000NL-IQ; Wed, 12 Jun 2024 12:39:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KP6zXLImLyStSUvo9/vKoSCNG/m+LmiuWzwZSPdALzs=; b=mKQJBvaRvsqaJh61CqdKph5uf8
	LJucblr461Zn4dErXYuMpE+3m7/lnXPnRx0T7PRhUnFmT+VSeYiTkhzmj7XZA/74QAEN12xmc2VKP
	GI47jBTB/jsuMbl7oBzXFAo17TX8hqCIsTU3mXTmOqrIBygYDT+557+yWWQk0Tl3fs2A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186316-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186316: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=acb26f22a19f80caec6a2d3cdfd4ad49744ea968
X-Osstest-Versions-That:
    libvirt=a7eb7de53171b4cdabc3d36524c468abfe2590fa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 12:39:30 +0000

flight 186316 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186316/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186286
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              acb26f22a19f80caec6a2d3cdfd4ad49744ea968
baseline version:
 libvirt              a7eb7de53171b4cdabc3d36524c468abfe2590fa

Last test of basis   186286  2024-06-08 04:20:31 Z    4 days
Testing same since   186316  2024-06-12 04:20:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  Georgia Garcia <georgia.garcia@canonical.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   a7eb7de531..acb26f22a1  acb26f22a19f80caec6a2d3cdfd4ad49744ea968 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 13:15:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 13:15:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739319.1146314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNos-0008DX-ID; Wed, 12 Jun 2024 13:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739319.1146314; Wed, 12 Jun 2024 13:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNos-0008DQ-En; Wed, 12 Jun 2024 13:15:18 +0000
Received: by outflank-mailman (input) for mailman id 739319;
 Wed, 12 Jun 2024 13:15:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHNor-0008DK-Cv
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 13:15:17 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ce486d43-28bd-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 15:15:14 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6f09b457fdso390477466b.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 06:15:14 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb3a9f972sm134302a12.93.2024.06.12.06.15.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 06:15:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce486d43-28bd-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718198114; x=1718802914; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:content-language:cc:to:subject
         :from:user-agent:mime-version:date:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Cdxu6LAZNNFM3u5h7DZ9HSxzF/+Mj8nmFDxYxnGhnzE=;
        b=dzORleSGZClfaIzyC68G4GHwtaIKwMm5STMBoUTUBkY7zJoOfjSWL2OcDLnYqUAdU1
         N0bgho5qxk7XljUJodiBgX1zyfPQxGW+LTaFqGZkd/zl7qjo2j5ftcKWdANvvWbuoT+m
         9OGUcKg3ZN1yCuqitLuhhZDTHn5q0YNIdplLsYR2q137TxVCF5KnB2PO3awk6yshVatu
         Q1vCK4pbsvH+KA/+smDI9RGFfv4X78uW/770K2ic6XOwhr6ZZ4EB8gkEe7dTXxSrSyY0
         3dunLi1MOvDmcc2iSxJBUB5626x5HtgOKoboXDxhhoPowXzfqVkjejslRuzGooIbjjI+
         O+uA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718198114; x=1718802914;
        h=content-transfer-encoding:autocrypt:content-language:cc:to:subject
         :from:user-agent:mime-version:date:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Cdxu6LAZNNFM3u5h7DZ9HSxzF/+Mj8nmFDxYxnGhnzE=;
        b=Eb1OZx4oPTuMHFhjjPW50xDNBkQxsDobd6GBOkkzJMpzorp/Uub1+GQWjg+NvbnFF1
         uU8gOF/ZuquxLksRig1GPrdmy720qsi1GND2hHLpMAp7B/OAP6E/uVrQ38Ps+YD7099R
         hlMWg6+WJB70chaNQnsdlMQzMcMF7QF2dNyfLFtaNn0ec+C0ng7DzZQyKodvyIQwUijI
         v7ik4llepLqj4/HDx5LLH0exHYOUbGAaZHAlY/48TSSUzK0pTEMH5VV8enpvsopKTMG3
         lCCFDbrBAacBNe0tzbe1jmmBTIoPCu+YxvmE6bvfrEMwdQiwDsQe29PrxFkQnIieXOKf
         jiIA==
X-Gm-Message-State: AOJu0YxkTfgHd1s7D7txO61m23bejyzJELEfrsUhMDJZi6GnYm8/fdHh
	zIYqROH3QhQSQMHaUYF2hLNkUgRYOCE8eRx/KMiCs//e2Aid4/Osok5bCaZzfVKDOCQhH6HpJXo
	=
X-Google-Smtp-Source: AGHT+IFG6nY0bSo5ZRD5zWBWypsgCR56fl7pSBYXY4IDEYzSIq6V34OGLJhZMhoP2VDWnqm4KEui5g==
X-Received: by 2002:a17:906:f190:b0:a68:e335:3e62 with SMTP id a640c23a62f3a-a6f480268admr106902766b.72.1718198114181;
        Wed, 12 Jun 2024 06:15:14 -0700 (PDT)
Message-ID: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
Date: Wed, 12 Jun 2024 15:15:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 for-4.19 0/3] x86/EPT: avoid undue forcing of MMIO accesses
 to UC
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
Content-Language: en-US
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

..., getting in the way of, in particular, PVH Dom0 accessing its video
frame buffer (if it has a console).

While the 1st one in particular may not appear to be so, both of the
earlier patches are strict prerequisites of the last one.

1: correct special page checking in epte_get_entry_emt()
2: avoid marking non-present entries for re-configuring
3: drop questionable mfn_valid() from epte_get_entry_emt()

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 13:16:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 13:16:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739322.1146323 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNqD-0000J4-RN; Wed, 12 Jun 2024 13:16:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739322.1146323; Wed, 12 Jun 2024 13:16:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNqD-0000Ix-Om; Wed, 12 Jun 2024 13:16:41 +0000
Received: by outflank-mailman (input) for mailman id 739322;
 Wed, 12 Jun 2024 13:16:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHNqC-0000Ir-Ur
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 13:16:40 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 01116f7b-28be-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 15:16:40 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a6f1dc06298so287400666b.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 06:16:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f1c99d8f1sm426917466b.175.2024.06.12.06.16.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 06:16:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01116f7b-28be-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718198199; x=1718802999; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=IIBq/heXgaX7T28VY8MGlPRmzmqXncKPkrU8y33t/6Q=;
        b=Mmh7EiEOBFRwDyV91f2ap5XTMIEWo3YTypsleYMGQq48N5g/fWTtGxLmmNYp1rozSo
         UVcZOKQzihLQM4ul+x5MJweNkukTQrgXmdtj+jdLgDQWQHwWgXIMTFtDtQ48rWKUb8GW
         yf+KDfcHXdKeXi9/p+Qyo0ATyAWwFxhZyUH6jIFs8CP5DBX4fMd8kMoKf0LsF0YsboN8
         d0GEYo3ABWaoVkqDeYiyMzAMwpSDeN+dzFMbMlLd9uCXfRcWYdhLOCfOaBcqwXoaHgrw
         s0HJOcQXF7ygNGhXDq79hLFJn8vkCo0dZYGQP0T9/p5WmyRHSkonaecVMMeflWuvbTam
         0kZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718198199; x=1718802999;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=IIBq/heXgaX7T28VY8MGlPRmzmqXncKPkrU8y33t/6Q=;
        b=dbugReXPgfwlbgbsTeqCh6qlCxC+iWIqKY4zwDeo8dgi1RyFZK9qt/VaWHNjHEwJ6z
         xxiXtjFF+eWPgWlog1wPqnIkrX8l4hObP9frktUBYGj/6koFz/juj+TLz/yyrNxY1kF7
         XDAhKFlLt26y7BTXdRXd+xOpjAR9hY/lr75Ji84TvQpqkao6ljCVDxuenspnJzEgLZ7f
         pRXwN5pdpNyQTJp8iZCwGNHj4TG2MRLhA7UfLyBVRMmHXgx4UDFqNvcAFI9GkDlUaD64
         gz73X19f+oFZQw03SZKGpXYQv4h2eNBwL+nezhyONF82t4NsiVRUnhQZE8KjSdxyq/7w
         obzg==
X-Gm-Message-State: AOJu0Yx4hbdML/XNA2gvAGDI079bLlhWWCHfYLL43Hcj2o2Dng0cue9E
	aczJgGNl2OnbQnp3UN/ojiCHK2FXniy75RronRELkxKCKijxlChjUp+DQXxEO7tN5PfiYzgLlyA
	=
X-Google-Smtp-Source: AGHT+IGa7qIcGL9sb6ucqx2qnH+S/tl++sd2dtseLK+/TzKRMt3cKSekBFUIbqMltkXqp/I+pnl8CQ==
X-Received: by 2002:a17:906:d972:b0:a6f:6ef:225 with SMTP id a640c23a62f3a-a6f47c9c2dcmr115080166b.19.1718198199385;
        Wed, 12 Jun 2024 06:16:39 -0700 (PDT)
Message-ID: <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com>
Date: Wed, 12 Jun 2024 15:16:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: [PATCH v2 for-4.19 1/3] x86/EPT: correct special page checking in
 epte_get_entry_emt()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
Content-Language: en-US
In-Reply-To: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

mfn_valid() granularity is (currently) 256MB. Therefore the start of a
1GB page passing the test doesn't necessarily mean all parts of such a
range would also pass. Yet using the result of mfn_to_page() on an MFN
which doesn't pass mfn_valid() checking is liable to result in a crash
(the invocation of mfn_to_page() alone is presumably "just" UB in such a
case).

Fixes: ca24b2ffdbd9 ("x86/hvm: set 'ipat' in EPT for special pages")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Of course we could leverage mfn_valid() granularity here to do an
increment by more than 1 if mfn_valid() returned false. Yet doing so
likely would want a suitable helper to be introduced first, rather than
open-coding such logic here.
---
v2: New.

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -519,8 +519,12 @@ int epte_get_entry_emt(struct domain *d,
     }
 
     for ( special_pgs = i = 0; i < (1ul << order); i++ )
-        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
+    {
+        mfn_t cur = mfn_add(mfn, i);
+
+        if ( mfn_valid(cur) && is_special_page(mfn_to_page(cur)) )
             special_pgs++;
+    }
 
     if ( special_pgs )
     {
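The skip-ahead idea mentioned in the post-commit-message remark could look roughly like this. This is an illustrative, self-contained sketch with simulated predicates, not Xen code: MFN_VALID_STRIDE, sim_mfn_valid(), sim_is_special(), and count_special() are all invented names standing in for the actual mfn_valid() granularity and the hypothetical helper.

```c
#include <stdbool.h>

/* Invented stride: with 4KiB pages, 256MB corresponds to 2^16 frames. */
#define MFN_VALID_STRIDE (1UL << 16)

/* Simulated predicates, for illustration only. */
static bool sim_mfn_valid(unsigned long mfn)
{
    /* Pretend only the first 256MB chunk of the range is covered. */
    return mfn < MFN_VALID_STRIDE;
}

static bool sim_is_special(unsigned long mfn)
{
    return (mfn & 1) == 0; /* arbitrary: even frames are "special" */
}

/* Count "special" frames in [mfn, mfn + count), jumping to the next
 * granularity boundary when the validity check fails, instead of
 * advancing one frame at a time. */
static unsigned long count_special(unsigned long mfn, unsigned long count)
{
    unsigned long special = 0, i = 0;

    while (i < count) {
        unsigned long cur = mfn + i;

        if (!sim_mfn_valid(cur)) {
            /* Skip the remainder of this invalid granularity chunk. */
            unsigned long next = (cur | (MFN_VALID_STRIDE - 1)) + 1;

            i = next - mfn;
            continue;
        }
        if (sim_is_special(cur))
            special++;
        i++;
    }
    return special;
}
```

As the remark notes, the real version would want such logic wrapped in a suitable helper rather than open-coded at the call site.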



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 13:17:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 13:17:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739324.1146333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNqa-0000j2-2k; Wed, 12 Jun 2024 13:17:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739324.1146333; Wed, 12 Jun 2024 13:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNqa-0000iv-04; Wed, 12 Jun 2024 13:17:04 +0000
Received: by outflank-mailman (input) for mailman id 739324;
 Wed, 12 Jun 2024 13:17:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHNqY-0000Ir-A4
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 13:17:02 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0e08868c-28be-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 15:17:01 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6e349c0f2bso680882466b.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 06:17:01 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f0ffcbf38sm544927766b.142.2024.06.12.06.17.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 06:17:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e08868c-28be-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718198221; x=1718803021; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=YkdNjBOZ78Xa9HqoWeOu8L3vhUg6jdFS1CUj7DasUQM=;
        b=NaBF1Vd3LhQtM9NPdUWJtCNV8uRmGIGJEIu63CvWks+wmh2gqUKdzeqha3VuvSzabI
         Qi514K4khOxJAvsDvDrZBzMu/z7U8/yM2aL02WWwbW/nqT8eP0+9C9BYWwa4WaBie5b2
         al8cXHxnSVjlOnfv7BciSqL04uUPEvyjKk5oQv53ysj5LYkvVpZsvc+TuA2puxi+SKgm
         CrRO4Ddj50rFK3iooxHVeDmf2BDXHSrLNpdNQMZhXnt1TmYqNbNtSedury+ozS/Ia4O5
         6r5eqdBRa3Yw98ui2+FtCMzPMkg2iUYUatfhY1OIWSQXecFjl8SKRIxxxvPqM1LVNPs5
         zM1g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718198221; x=1718803021;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=YkdNjBOZ78Xa9HqoWeOu8L3vhUg6jdFS1CUj7DasUQM=;
        b=ovsPzmrTegy20117T4lQ8rWC7qbEtUR0/X5eOnJEHefe90+QdlXwJxajW6pHE6aGw2
         Xixhx/Fe65UPOumIxUY1VAc4ICTIZuAX6nPArjJ/9Lfnmqj+DnclYrBs+GvMLb1LsCff
         67cDr3uehqX6qwOPCLa8+qIN48DUtWocUAXL0a3DV4BoNWwnPhbhR9p8oGkwZtt3BqQ5
         lDYhr/IXf+AH7SCvZxS7qbEx3dGIgZRecD0fT10ObCYLsKx5aZ58D0eMLbBtF/IO9Uhg
         zV3bZ9/lOGwfgrFzIEJlg8IG3yubAIWr+Qn4s86RYXhm/YiYlLoZEWH4Xah2TLiAhL4J
         nyzA==
X-Gm-Message-State: AOJu0YzcXq5FRgiwdmTOMgMKNUMXslgsu9PWqAiy1ibOfMf7mmOI2jog
	DopBcMJ6e4Ii6rXya4I4jFIwPsBwH9tAk2m6zWu7+dvIHfvkOLl6BhWA5gI/8OSJsBqj46i+kKo
	=
X-Google-Smtp-Source: AGHT+IHq2OVHWuKZ/0M/6EvrW0HJ4JL2KtqF1J1urmoia2YhOyNoort1lZ3zhFA/wiaWN+QQ9uI9kA==
X-Received: by 2002:a17:907:f81:b0:a6f:1972:7fd with SMTP id a640c23a62f3a-a6f480272c3mr110037866b.67.1718198220955;
        Wed, 12 Jun 2024 06:17:00 -0700 (PDT)
Message-ID: <d31f0f8e-4eb7-4617-86f6-81f38b5c61aa@suse.com>
Date: Wed, 12 Jun 2024 15:16:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: [PATCH v2 for-4.19 2/3] x86/EPT: avoid marking non-present entries
 for re-configuring
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
Content-Language: en-US
In-Reply-To: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

For non-present entries, EMT, like most other fields, is meaningless to
hardware. Make the logic in ept_set_entry() setting the field (and iPAT)
conditional upon dealing with a present entry, leaving the value at 0
otherwise. This has two effects for epte_get_entry_emt(), which we'll
want to leverage subsequently:
1) The call moved here now won't be issued with INVALID_MFN anymore (a
   respective BUG_ON() is being added).
2) Neither of the other two calls can now be issued with a truncated
   form of INVALID_MFN anymore (as long as there's no bug anywhere
   marking an entry present when it was populated using INVALID_MFN).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -650,6 +650,8 @@ static int cf_check resolve_misconfig(st
             if ( e.emt != MTRR_NUM_TYPES )
                 break;
 
+            ASSERT(is_epte_present(&e));
+
             if ( level == 0 )
             {
                 for ( gfn -= i, i = 0; i < EPT_PAGETABLE_ENTRIES; ++i )
@@ -915,17 +917,6 @@ ept_set_entry(struct p2m_domain *p2m, gf
 
     if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
     {
-        bool ipat;
-        int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
-                                     i * EPT_TABLE_ORDER, &ipat,
-                                     p2mt);
-
-        if ( emt >= 0 )
-            new_entry.emt = emt;
-        else /* ept_handle_misconfig() will need to take care of this. */
-            new_entry.emt = MTRR_NUM_TYPES;
-
-        new_entry.ipat = ipat;
         new_entry.sp = !!i;
         new_entry.sa_p2mt = p2mt;
         new_entry.access = p2ma;
@@ -941,6 +932,22 @@ ept_set_entry(struct p2m_domain *p2m, gf
             need_modify_vtd_table = 0;
 
         ept_p2m_type_to_flags(p2m, &new_entry);
+
+        if ( is_epte_present(&new_entry) )
+        {
+            bool ipat;
+            int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
+                                         i * EPT_TABLE_ORDER, &ipat,
+                                         p2mt);
+
+            BUG_ON(mfn_eq(mfn, INVALID_MFN));
+
+            if ( emt >= 0 )
+                new_entry.emt = emt;
+            else /* ept_handle_misconfig() will need to take care of this. */
+                new_entry.emt = MTRR_NUM_TYPES;
+            new_entry.ipat = ipat;
+        }
     }
 
     if ( sve != -1 )



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 13:17:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 13:17:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739333.1146343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNrE-0001WF-AL; Wed, 12 Jun 2024 13:17:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739333.1146343; Wed, 12 Jun 2024 13:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHNrE-0001W8-7Z; Wed, 12 Jun 2024 13:17:44 +0000
Received: by outflank-mailman (input) for mailman id 739333;
 Wed, 12 Jun 2024 13:17:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHNrC-0000uE-LP
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 13:17:42 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 254f252c-28be-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 15:17:40 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-57c6011d75dso2660444a12.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 06:17:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cab3f4741sm1100360a12.22.2024.06.12.06.17.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 06:17:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 254f252c-28be-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718198260; x=1718803060; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=OE6BxbFhz/GuABFcUaV8g29dJf9tmzpd1oPgGPbHYps=;
        b=IdT5xownBC46L/4jUwkqJEUgl/IaPfATr8M6Qktcslfs2X6nIHuxs17YgMi6Q3TW53
         nYBw0Pdag5wunGCTYySK8wZSDPFUpuVnVXRlYjOolCYSJRYNeOlngdOC/o2R8rLme38b
         75yzHJUJp7FDuirLm6t5Y+pu7ElogOvKScWJDWFVrENlBAuHvyc9BPUmrt0XvM7d2ROH
         HOs4XAq1itfNuTkxSWrHtq/J78ScZQ1YEkAuEfdMSyfFVBMkU9kQ7yH5uMGe9tq8GTgH
         U3KkZr4XnIHQzDCKVyHu6/qBWzNDjyA2ch0wYvSYyapERaOHBeN0f6iAtcd8a1nl0n3a
         xdYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718198260; x=1718803060;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=OE6BxbFhz/GuABFcUaV8g29dJf9tmzpd1oPgGPbHYps=;
        b=o9uYJJF7IRPbodYg3l2mYdm9EjvpKQguEJJfyCtViYMT6DwiuHupGltAJOOxxd2FEU
         9pOTlhi3K0DFo61VI7HjpXm8rrODl0SD1wxFJK/AbXOL8QKPZyESzW9VocU+jTHBja69
         UaKuZ0NbxyosZ+W6wxSwKrsDDKOXyJoNEGCeB+uKtEeJMov5OEfLb3VkWwZG2Tx/MPPH
         ZQPH0VjOkK69/oaYcOYk9bhD1L2SgvcKwzPWk8KCxPRVIk/K4JuwzCpX//OE37xfYyjm
         +rRjEcpyK7r70pTH7kNnMIImHpvATDtRQjToj1T5hXEX7eJJ/D/tk6/QypvL3blvgr/r
         tbqA==
X-Gm-Message-State: AOJu0Yxu3LmDJRkQspSUlUq8LTkQt370nbN7kxK6IMvT76rl7GC/XAsU
	xqq30HANwZ6riLSRPZQ/4s241Gns40+MA38S5V4w+Eq7Ug4I8UwSA+DG26OH4mY6Kr0OUoDqOgA
	=
X-Google-Smtp-Source: AGHT+IGsMdg4tUyavdeBHQOEiXhnoC90DvTKudxzlW25vpQM2p+VKF34nC8unmxL9isU1dg41/8w2w==
X-Received: by 2002:a50:8acd:0:b0:57c:6f0a:bc57 with SMTP id 4fb4d7f45d1cf-57caaae520bmr1225185a12.36.1718198260234;
        Wed, 12 Jun 2024 06:17:40 -0700 (PDT)
Message-ID: <7607c5f7-772a-4c49-b2df-19f32ec2180b@suse.com>
Date: Wed, 12 Jun 2024 15:17:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: [PATCH v2 for-4.19 3/3] x86/EPT: drop questionable mfn_valid() from
 epte_get_entry_emt()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
Content-Language: en-US
In-Reply-To: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
access to actual MMIO space should not generally be restricted to UC
only; in particular, video frame buffer accesses are unduly affected by
such a restriction.

Since, as of ???????????? ("x86/EPT: avoid marking non-present entries
for re-configuring"), the function won't be called with INVALID_MFN or,
worse, truncated forms thereof anymore, we can fully drop that check.

Fixes: 81fd0d3ca4b2 ("x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
Considering that we've just declared PVH Dom0 "supported", this may well
qualify for 4.19. The issue was particularly noticeable there.
---
v2: Different approach (and hence different title and description).

--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -501,12 +501,6 @@ int epte_get_entry_emt(struct domain *d,
         return -1;
     }
 
-    if ( !mfn_valid(mfn) )
-    {
-        *ipat = true;
-        return X86_MT_UC;
-    }
-
     /*
      * Conditional must be kept in sync with the code in
      * {iomem,ioports}_{permit,deny}_access().



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 13:43:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 13:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739347.1146353 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHOFl-0006Nd-7D; Wed, 12 Jun 2024 13:43:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739347.1146353; Wed, 12 Jun 2024 13:43:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHOFl-0006NW-4f; Wed, 12 Jun 2024 13:43:05 +0000
Received: by outflank-mailman (input) for mailman id 739347;
 Wed, 12 Jun 2024 13:43:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHOFj-0006NQ-5E
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 13:43:03 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id af350871-28c1-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 15:43:00 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id
 4fb4d7f45d1cf-57c73a3b3d7so4925294a12.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 06:43:00 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f49c36547sm77869966b.8.2024.06.12.06.42.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 06:42:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af350871-28c1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718199780; x=1718804580; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gEupkGP7AJGQgCEF5kqRH/ZL9AKYn7qiZMsmBY3mbsE=;
        b=OA62A5qId8pCSDYWwOW7NxOYwvY3NfSwfCKLaBvCo5BDSoM4YKMy4yj2+ODiXmlHXa
         kUUQwFtsiOM5wllUiAwANw5ECOikhy3oTtxWX5RuL9tP9cfzI5LWgBE132Vn7F+MBU7Y
         yVS1+5q+t2GpoxRATWe2x7FHTpEwkhPanshNVTtPPBJQhCf8Y7tgvHolKOI9WaX3VRJC
         0sm17LKAhrz6HL6mQcdj76hJODMZNREv67WolGak6ER5sIWIujL2u7zQ5g+Pu0n+z4A/
         XLiOplvYdbZrI4BIXPmVc1eawzYVt3G4nRXrTCG6BdDvvq47I+oyEuPxOB17GJNGJo/B
         78vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718199780; x=1718804580;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gEupkGP7AJGQgCEF5kqRH/ZL9AKYn7qiZMsmBY3mbsE=;
        b=UitYkLkkSEsmqPOQtMGZfeppIT68erAikkfB5wnFerb/oTKoXtfradR2xqLipMoyw/
         cHSESclEDC8UXzOLoEkF9B6hO7klq+RMSsLEbPgfSqdfjlda25L5TBNIoT/i2zHe6I+0
         U2AMrEUYfQcC/ijrFIjCPyQPXgbY9eayPyahvcBpTOACWBn6Rd9QYaujXPKH79ASXkv5
         FTEaeSozLxdUtrRTxYGJOUwbrtHrzEwVMIYN0pnbRYKUEBJ6RN1nof2spm4X/AwzYvqS
         6TURbI22XywE+ldN2kWd8rtWL7e+ZCWSJeNS3HVCx2R8VD8cQ4d+nWRS0zsUCPYZiBpL
         2KKw==
X-Forwarded-Encrypted: i=1; AJvYcCUEX7uqZNqiNjmBND8rf/ZxW2Lp+x0fBxATz0EPSMpV734cRrsAbKl6DdnjRghPK4xf7MPOtG3yX3kb1ytlgYVr57jvJdBhxnl+4bvo+cM=
X-Gm-Message-State: AOJu0Yyjm2CZuof9Rr9iN9H/yVJO+IKcSRJsSRXmBKr6kPI0dRTwj3tn
	C8dVMls0CCGtfEApK83p2mYXP4tOyGxIcA/0eGt9QMOAQuUtl/zF54GwR3XCj1yYp7QjPlGxBDU
	=
X-Google-Smtp-Source: AGHT+IGXQ0bzS9dLJCcJ+AYyYjtFeeBFkI3DgVIzx78jTCElWzCpEKevhlO44mzCKzHZWJzz2fHq9g==
X-Received: by 2002:a17:906:a08:b0:a68:a137:d041 with SMTP id a640c23a62f3a-a6f47d36659mr108265266b.12.1718199780030;
        Wed, 12 Jun 2024 06:43:00 -0700 (PDT)
Message-ID: <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com>
Date: Wed, 12 Jun 2024 15:42:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com> <Zml6-ViFPTWI1cUc@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Zml6-ViFPTWI1cUc@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 12:39, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>
>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>
>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>> CPUs that could be used as fallback, and if that's the case do move the
>>> interrupt back to the previous destination.  Note this is easier because the
>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>> vector on the destination.
>>>
>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>> shouldn't be possible to get into _assign_irq_vector() with
>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>> ->arch.old_cpu_mask.
>>>
>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>> longer part of the affinity set,
>>
>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>
>>> move the interrupt to a different CPU part of
>>> the provided mask
>>
>> ... this (->arch.cpu_mask related).
> 
> No, the "provided mask" here is the "mask" parameter, not
> ->arch.cpu_mask.

Oh, so this describes the case of "hitting" the comment at the very bottom of
the first hunk then? (I was probably misreading this because I was expecting
it to describe a code change, rather than the case where the original behavior
needs retaining. IOW - all fine here then.)

>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>> pending interrupt movement to be completed.
>>
>> Right, that's to clean up state from before the initial move. What isn't
>> clear to me is what's to happen with the state of the intermediate
>> placement. Description and code changes leave me with the impression that
>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>> why that would be an okay thing to do.
> 
> There isn't much we can do with the intermediate placement, as the CPU
> is going offline.  However we can drain any pending interrupts from
> IRR after the new destination has been set, since setting the
> destination is done from the CPU that's the current target of the
> interrupts.  So we can ensure the draining is done strictly after the
> target has been switched, hence ensuring no further interrupts from
> this source will be delivered to the current CPU.

Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
the ...

>>> --- a/xen/arch/x86/irq.c
>>> +++ b/xen/arch/x86/irq.c
>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>      }
>>>  
>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>> -        return -EAGAIN;
>>> +    {
>>> +        /*
>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>> +         * the in-progress movement has finished.
>>> +         */
>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>> +            return -EAGAIN;
>>> +
>>> +        /*
>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>> +         * ->arch.old_cpu_mask.
>>> +         */
>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>> +
>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>> +        {
>>> +            /*
>>> +             * Fallback to the old destination if moving is in progress and the
>>> +             * current destination is to be offlined.  This is only possible if
>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>> +             * in the 'mask' parameter.
>>> +             */
>>> +            desc->arch.vector = desc->arch.old_vector;
>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);

... replacing of vector (and associated mask), without any further accounting.

>>> +            /* Undo any possibly done cleanup. */
>>> +            for_each_cpu(cpu, desc->arch.cpu_mask)
>>> +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
>>> +
>>> +            /* Cancel the pending move. */
>>> +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
>>> +            cpumask_clear(desc->arch.old_cpu_mask);
>>> +            desc->arch.move_in_progress = 0;
>>> +            desc->arch.move_cleanup_count = 0;
>>> +
>>> +            return 0;
>>> +        }
>>
>> In how far is this guaranteed to respect the (new) affinity that was set,
>> presumably having led to the movement in the first place?
> 
> The 'mask' parameter should account for the new affinity, hence the
> cpumask_intersects() check guarantees we are moving to a CPU still in
> the affinity mask.

Ah, right, I must have been confused.

>>> @@ -600,7 +646,17 @@ next:
>>>          current_vector = vector;
>>>          current_offset = offset;
>>>  
>>> -        if ( valid_irq_vector(old_vector) )
>>> +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>> +        {
>>> +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
>>> +            /*
>>> +             * Special case when evacuating an interrupt from a CPU to be
>>> +             * offlined and the interrupt was already in the process of being
>>> +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
>>> +             * replace ->arch.{cpu_mask,vector} with the new destination.
>>> +             */
>>
>> And where's the cleaning up of ->arch.old_* going to be taken care of then?
> 
> Such cleaning will be handled normally by the interrupt still having
> ->arch.move_{in_progress,cleanup_count} set.  The CPUs in
> ->arch.old_cpu_mask must not all be offline, otherwise the logic in
> fixup_irqs() would have already released the old vector.

Maybe add "Cleanup will be done normally" to the comment?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 13:47:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 13:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739356.1146364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHOJj-0006zU-Ry; Wed, 12 Jun 2024 13:47:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739356.1146364; Wed, 12 Jun 2024 13:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHOJj-0006zN-O2; Wed, 12 Jun 2024 13:47:11 +0000
Received: by outflank-mailman (input) for mailman id 739356;
 Wed, 12 Jun 2024 13:47:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHOJi-0006zH-HM
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 13:47:10 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42d4299c-28c2-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 15:47:08 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-57ca81533d0so1476665a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 06:47:08 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f15ffa147sm473861366b.86.2024.06.12.06.47.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 06:47:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42d4299c-28c2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718200028; x=1718804828; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=TwfoME/5wT6WN0woE3J1881dmdAqtMaIqmBYpIhhWBo=;
        b=YlQJSGsy+Ve/OeEap+I09vX63/VYJpoktw80I3hVr3G8VzolW6/6CVMO62NPvldUVN
         nOZADt3TQoYHWF4h3KmCpNFmVZspmSDsW+NmcUtXbrRTeKvd3yLXXSuv3kGcuCPIz+Hm
         +Zh/spmuh6a6RRB8P7TxPP2TSbHAGfyXEZ44W7pA1cYcT118f9V2NHmaGBFi4Q2AzKSW
         qvOvGC/RErmqlSBFa6liAxR3OheoUxO1eN8vTAY/hIkXtKVsTP6DqM8+aVqjES3PthFA
         t+hMKFxZzk7EN+7Dlh2FJbkf/HiHdroMaY46LHtGRioTmXwJ/SYAAiQm5jojTN3O7xnT
         aqpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718200028; x=1718804828;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TwfoME/5wT6WN0woE3J1881dmdAqtMaIqmBYpIhhWBo=;
        b=OmFlmoawN78UjXTA+CgHpDyfksraHKFrae1ny4CeWcUnhy8NXhvUEozcezmCzZ+Yfx
         1EAQaPITG00vz749GIx50FsxZTIns1qMFOX6d4c59nxts2ffAEukxjzmF0o5xPtLj5I5
         2lB+c3Tv1yRzge9ugFqhcjFdgwWkDWmZGwEUcp0o1a4ywUptu+2JbsC+cmdEAU0HMFvU
         QoCSwyTkbVEIiFPConqktdh+TSEQb+pg+zL7YEVD6kk6CAeZJFbknP+j60dY8yWEn7bt
         HDKaXQAwUdTlNZm4BxF+sIkntLUMU0LZhWB63ScgUeASl0kihlVyCT0NT5BXzt3kX5EK
         ZMqg==
X-Forwarded-Encrypted: i=1; AJvYcCWZ+YL+EytPmN4rq9WY984slhr4C7Y33Yx0+AzN1AvKlgDfk8nxM2+7aAPMuBFzyDa2oZswappBgTgoYgVu6pVy8fgfiuIXHCdMp6gs+BM=
X-Gm-Message-State: AOJu0YyH6rIR7C9Il31vEubcSjPKeq1ISwC2WPr0y/Z4n3ObpT6Ve0sb
	hRT5m9klKUMjB6i5iTPSRBu70+OIlORqMl1Nlnv9l4NhJPerOqT8DcQAkV01I8AEZfLoICfq820
	=
X-Google-Smtp-Source: AGHT+IHyjiikrhL6e98blGm4+3UZLiyW3t4W4n1LrrxA8hGkFVa4mnE24Qz/kS5aUgyXNfipb4Cdfg==
X-Received: by 2002:a17:906:4f0a:b0:a6f:3b3b:b7cb with SMTP id a640c23a62f3a-a6f3b3bb944mr310780166b.7.1718200027835;
        Wed, 12 Jun 2024 06:47:07 -0700 (PDT)
Message-ID: <6e4e04b5-7fa3-4109-b222-8e7b7d75d48f@suse.com>
Date: Wed, 12 Jun 2024 15:47:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 7/7] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-8-roger.pau@citrix.com>
 <7e090e00-2061-4ef1-a0a4-b45ac86c5ee6@suse.com> <ZmmFGc7TSoKsCH95@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZmmFGc7TSoKsCH95@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 13:23, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 03:50:42PM +0200, Jan Beulich wrote:
>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>> @@ -2649,6 +2649,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>>>               !cpumask_test_cpu(cpu, &cpu_online_map) &&
>>>               cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
>>>          {
>>> +            /*
>>> +             * This to be offlined CPU was the target of an interrupt that's
>>> +             * been moved, and the new destination target hasn't yet
>>> +             * acknowledged any interrupt from it.
>>> +             *
>>> +             * We know the interrupt is configured to target the new CPU at
>>> +             * this point, so we can check IRR for any pending vectors and
>>> +             * forward them to the new destination.
>>> +             *
>>> +             * Note the difference between move_in_progress or
>>> +             * move_cleanup_count being set.  For the latter we know the new
>>> +             * destination has already acked at least one interrupt from this
>>> +             * source, and hence there's no need to forward any stale
>>> +             * interrupts.
>>> +             */
>>
>> I'm a little confused by this last paragraph: It talks about a difference,
>> yet ...
>>
>>> +            if ( apic_irr_read(desc->arch.old_vector) )
>>> +                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>> +                              desc->arch.vector);
>>
>> ... in the code being commented there's no difference visible. Hmm, I guess
>> this is related to the enclosing if(). Maybe this could be worded a little
>> differently, e.g. starting with "Note that for the other case -
>> move_cleanup_count being non-zero - we know ..."?
> 
> Hm, I see.  Yes, the difference is that for interrupts that have
> move_cleanup_count set we don't forward pending interrupts in IRR on
> this CPU.  I put this here because I think it's more naturally
> arranged with the rest of the comment.  I can pull the whole comment
> ahead of the if() if that's better.

I actually agree with you that the placement right now is "more natural".
I'm really just after making it clearer what difference is being
talked about. Assuming of course ...

>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
>>> +            check_irr = true;
>>> +
>>>          if ( desc->handler->set_affinity )
>>>              desc->handler->set_affinity(desc, affinity);
>>>          else if ( !(warned++) )
>>>              set_affinity = false;
>>>  
>>> +        if ( check_irr && apic_irr_read(vector) )
>>> +            /*
>>> +             * Forward pending interrupt to the new destination, this CPU is
>>> +             * going offline and otherwise the interrupt would be lost.
>>> +             */
>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>> +                          desc->arch.vector);
>>> +
>>>          if ( desc->handler->enable )
>>>              desc->handler->enable(desc);
>>>  
>>
>> Down from here, after the loop, there's a 1ms window where latched but not
>> yet delivered interrupts can be received. How's that playing together with
>> the changes you're making? Aren't we then liable to get two interrupts, one
>> at the old and one at the new source, in unknown order?
> 
> I was mistakenly thinking that clear_local_APIC() would block
> interrupt delivery, but that's not the case, so yes, interrupts should
> still be delivered in the window below.
> 
> Let me test without this last patch.

... the patch wants / needs retaining in the first place.

Jan
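The forwarding condition under discussion can be sketched outside Xen with a
toy model. All type and function names below are invented stand-ins (a plain
bitmap models the local APIC IRR); this is not Xen's real
apic_irr_read()/send_IPI_mask() path:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy model of the logic in the hunk above: when the CPU being offlined
 * was the old target of an in-progress interrupt move, any vector still
 * pending in its (modelled) IRR warrants a forwarding IPI to the new
 * destination.
 */
typedef struct {
    uint64_t irr;                    /* one bit per vector (0..63 here) */
    bool move_in_progress;           /* new destination hasn't acked yet */
    unsigned int move_cleanup_count; /* >0: new destination already acked */
} model_irq_desc;

static bool irr_pending(const model_irq_desc *d, unsigned int vector)
{
    return d->irr & (UINT64_C(1) << vector);
}

/* Return true if forwarding to the new destination is warranted. */
static bool should_forward(const model_irq_desc *d, unsigned int old_vector)
{
    /*
     * Only the move_in_progress case needs forwarding: with
     * move_cleanup_count set, the new destination has already acked at
     * least one interrupt, so stale vectors need not be replayed.
     */
    return d->move_in_progress && irr_pending(d, old_vector);
}
```

In the move_cleanup_count case the new destination has already acked an
interrupt, so the model deliberately skips forwarding there, mirroring the
distinction the patch comment tries to describe.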


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 14:11:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 14:11:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739373.1146373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHOhO-0003Mn-Ir; Wed, 12 Jun 2024 14:11:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739373.1146373; Wed, 12 Jun 2024 14:11:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHOhO-0003Mg-GN; Wed, 12 Jun 2024 14:11:38 +0000
Received: by outflank-mailman (input) for mailman id 739373;
 Wed, 12 Jun 2024 14:11:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHOhN-0003Ma-HR
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 14:11:37 +0000
Received: from mail-qv1-xf33.google.com (mail-qv1-xf33.google.com
 [2607:f8b0:4864:20::f33])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id acda2a32-28c5-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 16:11:35 +0200 (CEST)
Received: by mail-qv1-xf33.google.com with SMTP id
 6a1803df08f44-6afc61f9a2eso8758026d6.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 07:11:35 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b08c25d7a7sm18080436d6.41.2024.06.12.07.11.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 07:11:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acda2a32-28c5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718201494; x=1718806294; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=G6v3euzWvAR7MYKGWycaAXUSOp/5qPdZ84CiBBNCQis=;
        b=cAD1dnhiWRaVnQagLfYvreg8QN92uIdFxbOsNYZOsjGd6pC/XlDj3mKSwNwFXKRbg0
         yxXnjcBlIMbN7Ih+0WO70mgTP5Rz0qRvMvcH68GPWnPjzR5Il7yvifL9fBbxANEMsvHU
         JhbHeNOJEe9cKKhJSPKpu34kfCCxCR5eNsgWU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718201494; x=1718806294;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=G6v3euzWvAR7MYKGWycaAXUSOp/5qPdZ84CiBBNCQis=;
        b=e/J1h8wI5l8KEmS+ds4kGkFiWRxvIu4WFAjAr1BpUqOe34LTUoFuY2f+rEbZy5H4aL
         cr2SPHuJASAU42fSw7/0Zx1OBB/6qxIpjCPkpdiwa9YT2U09DINAII71tXvJfEKfuvsH
         woTLnAV63ikr7CrF3tO6fE7EbqqXAP1shn6lp117XYlIfKpVflKGk+jkNNAuZhJq4F/Y
         ovy/lj+Nf51sAdkj3IbJw5TTHvrXnlOcwGHlwGhe5bDq1AgGN0XVy0TgYdT4EezP8UM4
         qQaTZxpnWojp0TmLZtFVwUBNyNpCXV2dAk+TLF0taYO3bf4Z6lgTC2vdhkVVF/SI2IvP
         QV6w==
X-Gm-Message-State: AOJu0YyL6PBrnNsxBA/Tb9YkHsyX9dr4c3t/v9fLfjBCXjXZnUVcar3l
	hqQjAzOOthwHsuFXfG6D4NXdr2c0WMTvHbpcgFAeN8k+08dXH6fWAEC8F6+ZMtZN9pLR7C92rZF
	N
X-Google-Smtp-Source: AGHT+IFSzINSG79ZYBLSDuOMUUT4EUTGhK8ebc/Lz1wyJB7gsXSf7EESHeb5qo9+zHA7L7Y6BZNpIA==
X-Received: by 2002:a05:6214:2621:b0:6b0:72ac:b306 with SMTP id 6a1803df08f44-6b089e89cc6mr108736186d6.1.1718201494117;
        Wed, 12 Jun 2024 07:11:34 -0700 (PDT)
Date: Wed, 12 Jun 2024 16:11:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v2 for-4.19 1/3] x86/EPT: correct special page checking
 in epte_get_entry_emt()
Message-ID: <ZmmskwdoKvAotRk-@macbook>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com>

On Wed, Jun 12, 2024 at 03:16:37PM +0200, Jan Beulich wrote:
> mfn_valid() granularity is (currently) 256Mb. Therefore the start of a
> 1Gb page passing the test doesn't necessarily mean all parts of such a
> range would also pass.

How would such a superpage end up in the EPT?

I would assume this can only happen when adding an MMIO superpage for
which part of the range returns success from mfn_valid()?

> Yet using the result of mfn_to_page() on an MFN
> which doesn't pass mfn_valid() checking is liable to result in a crash
> (the invocation of mfn_to_page() alone is presumably "just" UB in such a
> case).
> 
> Fixes: ca24b2ffdbd9 ("x86/hvm: set 'ipat' in EPT for special pages")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Of course we could leverage mfn_valid() granularity here to do an
> increment by more than 1 if mfn_valid() returned false. Yet doing so
> likely would want a suitable helper to be introduced first, rather than
> open-coding such logic here.

We would still need to call is_special_page() on each 4K chunk, at
which point taking advantage of the mfn_valid() granularity is likely
to make the code more complicated to follow IMO.

Thanks, Roger.
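The per-4K walk Roger refers to can be modelled with toy predicates. The
block size and all names here are invented for illustration, standing in for
mfn_valid()'s 256Mb granularity and for is_special_page():

```c
#include <assert.h>
#include <stdbool.h>

#define MODEL_BLOCK_PAGES 4 /* stand-in for the 256Mb mfn_valid() block */

/* Toy predicates: which coarse blocks hold valid MFNs, which pages are special. */
static bool model_mfn_valid(unsigned long mfn, const bool *valid_block)
{
    return valid_block[mfn / MODEL_BLOCK_PAGES];
}

static bool model_is_special(unsigned long mfn, const bool *special_page)
{
    return special_page[mfn];
}

/*
 * Straightforward per-4K walk, as in the patch under review: every page
 * is checked individually, with mfn_valid() gating the per-page special
 * check.  An optimised variant could instead skip a whole block when
 * model_mfn_valid() fails, at the cost of extra complexity.
 */
static bool range_has_special(unsigned long mfn, unsigned long count,
                              const bool *valid_block,
                              const bool *special_page)
{
    for ( unsigned long i = 0; i < count; ++i )
        if ( model_mfn_valid(mfn + i, valid_block) &&
             model_is_special(mfn + i, special_page) )
            return true;
    return false;
}
```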


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 14:39:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 14:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739381.1146383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHP7u-0007SJ-I6; Wed, 12 Jun 2024 14:39:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739381.1146383; Wed, 12 Jun 2024 14:39:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHP7u-0007SC-FG; Wed, 12 Jun 2024 14:39:02 +0000
Received: by outflank-mailman (input) for mailman id 739381;
 Wed, 12 Jun 2024 14:39:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHP7t-0007Rq-R7
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 14:39:01 +0000
Received: from mail-qv1-xf35.google.com (mail-qv1-xf35.google.com
 [2607:f8b0:4864:20::f35])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 80a4524a-28c9-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 16:39:00 +0200 (CEST)
Received: by mail-qv1-xf35.google.com with SMTP id
 6a1803df08f44-6b07937b84fso11908836d6.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 07:38:58 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b04f6c3a68sm68200066d6.52.2024.06.12.07.38.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 07:38:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80a4524a-28c9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718203138; x=1718807938; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=YsQmLlGiAdIAtpleQKN3iUqLiU1T3f22WnBDLc4QB8w=;
        b=IBCFEl/W3PZ1/gdR08av3+is1Zdk1uviWN/G8oXdZph5O0ZxgSJRkPciHXfckTva32
         zMdFwLcylGpnrBVIUFEqHUi50sgF336QVQqVscgHrmOBlvEysfr6Jv9x+DfE6f9g1Fwv
         jIfXMbTByudG1ldFvUqchygVVo4ZZ6Jw/H5Mk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718203138; x=1718807938;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=YsQmLlGiAdIAtpleQKN3iUqLiU1T3f22WnBDLc4QB8w=;
        b=qA3MHI7xmFK4jhrBejRk+1oFQ/TjLu5mIUNx4MM530BfEWICUFrA9ltPNwLL2lpOfa
         felKZUV25f+OQXgjhmAGvfanGtdcmhLbq7vq08z30rFdT+zU2xRU+QDhMiDzPnRDKqYl
         g8uOz5ZpF/9la4jAO12pSsAezFUSgOyJ3K+VQYlpCzEDrHCfr0HJJLkFWC2zsWro1IOG
         cGg/g2/d7eW33ozmyOH4srKP6RLYk9d2oMd9YazwlrUAObJjbf5LEjE7+ZwASNqljnaJ
         nXdW5zqMCIoiRizW/9MvpFQlV02nAy7kyZ26cR/coG8Vf/SHZ11mtmFBwjsNMq4s+K0J
         UZjw==
X-Gm-Message-State: AOJu0YxMUpe8m3xFKa6hCUgnR/Y0lHSHMUbgpJji/wrmaR8V+KUTAMkD
	veygBqCSYPfBJWlm4rn4i1E/UF61VnjWpGET+MAqePIaJ5VfZT8BH5ASBUFp97c=
X-Google-Smtp-Source: AGHT+IEGEEKCole9RnH2jsFIyRYxxPxtF2PWIksbsDpwL38sX2ypPFN6XSRLcotoqFfyxcqsnFJOzw==
X-Received: by 2002:a05:6214:3109:b0:6b0:7ceb:3076 with SMTP id 6a1803df08f44-6b191684731mr19629566d6.12.1718203137666;
        Wed, 12 Jun 2024 07:38:57 -0700 (PDT)
Date: Wed, 12 Jun 2024 16:38:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v2 for-4.19 2/3] x86/EPT: avoid marking non-present
 entries for re-configuring
Message-ID: <Zmmy_-JqqWRuwvCj@macbook>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <d31f0f8e-4eb7-4617-86f6-81f38b5c61aa@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <d31f0f8e-4eb7-4617-86f6-81f38b5c61aa@suse.com>

On Wed, Jun 12, 2024 at 03:16:59PM +0200, Jan Beulich wrote:
> For non-present entries EMT, like most other fields, is meaningless to
> hardware. Make the logic in ept_set_entry() setting the field (and iPAT)
> conditional upon dealing with a present entry, leaving the value at 0
> otherwise. This has two effects for epte_get_entry_emt() which we'll
> want to leverage subsequently:
> 1) The call moved here now won't be issued with INVALID_MFN anymore (a
>    respective BUG_ON() is being added).
> 2) Neither of the other two calls could now be issued with a truncated
>    form of INVALID_MFN anymore (as long as there's no bug anywhere
>    marking an entry present when that was populated using INVALID_MFN).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
> 
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -650,6 +650,8 @@ static int cf_check resolve_misconfig(st
>              if ( e.emt != MTRR_NUM_TYPES )
>                  break;
>  
> +            ASSERT(is_epte_present(&e));

If this is added here, then there's a condition further below:

if ( !is_epte_valid(&e) || !is_epte_present(&e) )

That needs adjusting AFAICT.

However, in ept_set_entry() we seem to unconditionally call
resolve_misconfig() against the new entry to be populated; won't this
possibly cause resolve_misconfig() to be called against non-present
EPT entries?  I think this is fine because such non-present entries
will have emt == 0, and hence will take the break just ahead of the
added ASSERT().

> +
>              if ( level == 0 )
>              {
>                  for ( gfn -= i, i = 0; i < EPT_PAGETABLE_ENTRIES; ++i )
> @@ -915,17 +917,6 @@ ept_set_entry(struct p2m_domain *p2m, gf
>  
>      if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
>      {
> -        bool ipat;
> -        int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
> -                                     i * EPT_TABLE_ORDER, &ipat,
> -                                     p2mt);
> -
> -        if ( emt >= 0 )
> -            new_entry.emt = emt;
> -        else /* ept_handle_misconfig() will need to take care of this. */
> -            new_entry.emt = MTRR_NUM_TYPES;
> -
> -        new_entry.ipat = ipat;
>          new_entry.sp = !!i;
>          new_entry.sa_p2mt = p2mt;
>          new_entry.access = p2ma;
> @@ -941,6 +932,22 @@ ept_set_entry(struct p2m_domain *p2m, gf
>              need_modify_vtd_table = 0;
>  
>          ept_p2m_type_to_flags(p2m, &new_entry);
> +
> +        if ( is_epte_present(&new_entry) )
> +        {
> +            bool ipat;
> +            int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
> +                                         i * EPT_TABLE_ORDER, &ipat,
> +                                         p2mt);
> +
> +            BUG_ON(mfn_eq(mfn, INVALID_MFN));
> +
> +            if ( emt >= 0 )
> +                new_entry.emt = emt;
> +            else /* ept_handle_misconfig() will need to take care of this. */
> +                new_entry.emt = MTRR_NUM_TYPES;
> +            new_entry.ipat = ipat;
> +        }

Should we assert that if new_entry.emt == MTRR_NUM_TYPES the entry
must have the present bit set before the atomic_write_ept_entry()
call?

Thanks, Roger.
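The two observations above (non-present entries carry emt == 0 and so take
the early break, and the MTRR_NUM_TYPES sentinel should only ever appear on
present entries) describe a single invariant. A toy model, using simplified
stand-in names rather than the real p2m-ept.c types, might look like:

```c
#include <assert.h>
#include <stdbool.h>

#define MODEL_MTRR_NUM_TYPES 7 /* sentinel meaning "EMT needs recomputing" */

typedef struct {
    unsigned int emt;
    bool present;
} model_epte;

/*
 * resolve_misconfig() only acts on entries carrying the sentinel;
 * non-present entries are written with emt == 0 per the patch, so they
 * take the early break and never reach the new ASSERT().
 */
static bool model_resolve_acts_on(const model_epte *e)
{
    if ( e->emt != MODEL_MTRR_NUM_TYPES )
        return false;       /* the "break" in the real loop */
    assert(e->present);     /* the ASSERT() being added */
    return true;
}

/*
 * Roger's suggested writer-side check: before an entry is written, the
 * sentinel must only ever be paired with the present bit set.
 */
static void model_write_entry(model_epte *dst, const model_epte *src)
{
    assert(src->emt != MODEL_MTRR_NUM_TYPES || src->present);
    *dst = *src;
}
```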


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 14:47:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 14:47:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739387.1146393 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPFw-0000dB-8U; Wed, 12 Jun 2024 14:47:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739387.1146393; Wed, 12 Jun 2024 14:47:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPFw-0000d4-5c; Wed, 12 Jun 2024 14:47:20 +0000
Received: by outflank-mailman (input) for mailman id 739387;
 Wed, 12 Jun 2024 14:47:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHPFu-0000cy-Q2
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 14:47:18 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a81e3b8e-28ca-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 16:47:14 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id
 2adb3069b0e04-52bc29c79fdso6268081e87.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 07:47:14 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f51d7db37sm728266b.23.2024.06.12.07.47.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 07:47:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a81e3b8e-28ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718203634; x=1718808434; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=1qth86KmYUuRlpIkhFYQ/Pic099/vjZZTjocKnp7wjI=;
        b=MIcSKfbz59YI4biok26CMLckHmJyq5f7QyWQs8rPLUJuS8J3MK53RnosaBJxgaIjBt
         HisLblpN0aGqZ9MSNct6R7ks5n7SiSVeXbymyqCF8Vjot4ewbG9AfHv99dLFgWFSRkFs
         5+/uHGULtZ+j3AMXHBZjpXdoIGel9zJxdrOFrhMeuOFDCbOc63ivWEz4GbPOtHuzzm0C
         4yHjzIJm9ZREN+em90TCSi079557tpqI9HAZLNyTQ7vLFzH3UaUiDjsXfrdy7RQZtshB
         XPD4WdgL2aBm/ATl97z/fj/zGYVPCMvVM0+W/7KN+/qpo41QZtxN2dqyh4DWf3P7RSFy
         lIig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718203634; x=1718808434;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1qth86KmYUuRlpIkhFYQ/Pic099/vjZZTjocKnp7wjI=;
        b=vSMOOZwJvSc2LiTFcMah0m7RoWafASflWcttndNXHvqjxhQJf0BYvt75unq1589Vs1
         eN76ZwzDWfxb0P1DG0oaSWAcDT6onoO6qQbWIBjpRfb0Bi4B8iUnE//iZyryue6eY/c+
         jAin7BdX4S9jPPk90HaAK8OfuQCz9LoBWQBJogbIS3+Ql2VVrSNu/7F0HAStL/9TQsGj
         dUpgJ8Qd/VEpKLLZm2PmRjFWTI7JqJ9dSPhgeyqXnUNZf7wo3deoOjebjfBzKNfHv8ds
         ljyvdw0/VQYGd3rDRiF3AF8t40+hw0N4BsBe3kAUIi3ngmj7GIVY28ilHgjoGRx+OLVU
         w8KQ==
X-Gm-Message-State: AOJu0YyWZAyIYB07L2xETeg18C0u85t9yZ/htsfR93IQ+3k8GGud4QXU
	2GYxaF8t7j6ggjTbqDVCQD/NTmPczmYjW/JipUBzsope9bzr4CimVyjwXm0V+w==
X-Google-Smtp-Source: AGHT+IG3fRONXzlz1YcmOA+OZs4NndjjN2rZKCsbEekz3fcqc8JQcJapDHhFa+/fNEMaAxPsY2e1mg==
X-Received: by 2002:a05:6512:3694:b0:52c:993d:b462 with SMTP id 2adb3069b0e04-52c9a3e233bmr1444359e87.29.1718203634008;
        Wed, 12 Jun 2024 07:47:14 -0700 (PDT)
Message-ID: <b2985742-75e4-4974-be9d-be088d728731@suse.com>
Date: Wed, 12 Jun 2024 16:47:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 for-4.19 1/3] x86/EPT: correct special page checking in
 epte_get_entry_emt()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com> <ZmmskwdoKvAotRk-@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmmskwdoKvAotRk-@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 16:11, Roger Pau Monné wrote:
> On Wed, Jun 12, 2024 at 03:16:37PM +0200, Jan Beulich wrote:
>> mfn_valid() granularity is (currently) 256Mb. Therefore the start of a
>> 1Gb page passing the test doesn't necessarily mean all parts of such a
>> range would also pass.
> 
> How would such a superpage end up in the EPT?
> 
> I would assume this can only happen when adding an MMIO superpage for
> which part of the range returns success from mfn_valid()?

Yes, that's the only way I can think of.

>> Yet using the result of mfn_to_page() on an MFN
>> which doesn't pass mfn_valid() checking is liable to result in a crash
>> (the invocation of mfn_to_page() alone is presumably "just" UB in such a
>> case).
>>
>> Fixes: ca24b2ffdbd9 ("x86/hvm: set 'ipat' in EPT for special pages")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> Of course we could leverage mfn_valid() granularity here to do an
>> increment by more than 1 if mfn_valid() returned false. Yet doing so
>> likely would want a suitable helper to be introduced first, rather than
>> open-coding such logic here.
> 
> We would still need to call is_special_page() on each 4K chunk,

Why? Within any block for which mfn_valid() returns false, there can be
no RAM pages and hence also no special ones. It's only blocks where
mfn_valid() returns true that we'd need to iterate through page-by-page.

> at
> which point taking advantage of the mfn_valid() granularity is likely
> to make the code more complicated to follow IMO.

Right, the added complication is the main counter-argument. Hence I
think that if we were to go such a route at all, it would need some new
helper(s) first, so that things at the use sites would still remain
reasonably clear.

Jan
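A granularity-skipping helper of the kind Jan alludes to might, very roughly,
look like the following toy version (block size and all names invented for
illustration, not proposed interfaces):

```c
#include <assert.h>
#include <stdbool.h>

#define MODEL_BLOCK_PAGES 4 /* stand-in for the 256Mb mfn_valid() block */

static bool model_mfn_valid(unsigned long mfn, const bool *valid_block)
{
    return valid_block[mfn / MODEL_BLOCK_PAGES];
}

/*
 * Hypothetical helper in the spirit of the suggestion: given an MFN for
 * which mfn_valid() failed, return the first MFN of the next block.  A
 * range walk can then skip the whole invalid block (which can contain no
 * RAM pages and hence no special ones) instead of incrementing by 1.
 */
static unsigned long model_next_valid_candidate(unsigned long mfn)
{
    return (mfn / MODEL_BLOCK_PAGES + 1) * MODEL_BLOCK_PAGES;
}
```

Blocks where mfn_valid() returns true would still be walked page by page,
which is why the helper mainly pays off when large invalid ranges occur.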


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 14:53:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 14:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739393.1146403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPLi-0002Wu-SY; Wed, 12 Jun 2024 14:53:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739393.1146403; Wed, 12 Jun 2024 14:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPLi-0002Wn-PT; Wed, 12 Jun 2024 14:53:18 +0000
Received: by outflank-mailman (input) for mailman id 739393;
 Wed, 12 Jun 2024 14:53:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHPLh-0002Wf-2j
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 14:53:17 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7faf3f02-28cb-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 16:53:15 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57ca81533d0so1620405a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 07:53:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6ef83ac0c4sm654263666b.74.2024.06.12.07.53.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 07:53:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7faf3f02-28cb-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718203995; x=1718808795; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KMfultdLDvWcbbfzC1McmTyqZlehqGAI5EuXh7v3no0=;
        b=XHBxSEWMV4Xett/K4NnOKPkBfGF7UBPP892dMGMe1iRbvyZqZEkoUA8xjuA7iJlOse
         agQp1gshxHJ2razcZkGOE99pj1pYsQ28s8F11ajp43KLHfWhBQLIyFTv6pJJRcpfLyxO
         ycaXgV9vMR979moDQp2pkYQLTsN31WZAkh0CX2HnVAnxae1mOFHzD9o8ZJu9K7EPL5vn
         j3e+hwjSKpvScU5JqmzxuPdbgkWiOUcOOyHjWw0tUoJsS7sUWbLLaQrs635uyJ/bi9EP
         ZyG9HX8QMomrH9w2TTOEMkGGJrtg/vRMUwLa6JuGAdWKold/2/WBIj5NMqt33w/UgWIT
         cxGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718203995; x=1718808795;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KMfultdLDvWcbbfzC1McmTyqZlehqGAI5EuXh7v3no0=;
        b=IcTky1h7QJ0RSSSwGP+BoXkJ/JtWYWMV7bXxgkNu4F6E1I+EY/BiwRNgym76BxDR2y
         8P3KLkJMlz/pdZ81FeElMFsKQE8fBrxeebnOnL/72UmKdU/sK4EDaVaKtDmRRFGqxCHU
         0FCgyPbcDWDZaCI2YDwXzbOh27efTeXmQRILr+YbLhJUUNsTrEj4t8LPv9eOHS98CQ4x
         OqiQftVPkQprW6Zm4m9t761AywKHGAkjgPJXKuMJ/l11QWXTetTtJi4Di3Y+cPVCV1uZ
         lG2vPatCErUzmxIeiYZ/ORZdTn2C4J5CFur/ra8hSKI5i2RTMKz3lOh5UxGNZds0Glcs
         6Aog==
X-Gm-Message-State: AOJu0YzQP7uC/HG2uS9NBDv8oiEfRcZxyBuA5ByUej2KBiNFu6EH6ek9
	ZIfxwTqg7BPwlfbxinBhZTZLgs4DVTIrKOHORh2aP3vInHgO9/4DWVPMC5UNjw==
X-Google-Smtp-Source: AGHT+IG4X/QPKD+xKjNDSrdz9/1FXMTgnLZcFFfG1GXXbfAMAyMOtizDFHET+2hsiUxS0SLWoydP/Q==
X-Received: by 2002:a17:906:31d3:b0:a6e:f99b:cd57 with SMTP id a640c23a62f3a-a6f468f2032mr138709166b.34.1718203995279;
        Wed, 12 Jun 2024 07:53:15 -0700 (PDT)
Message-ID: <e944583a-2459-435f-90fb-04bcca18197f@suse.com>
Date: Wed, 12 Jun 2024 16:53:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 for-4.19 2/3] x86/EPT: avoid marking non-present
 entries for re-configuring
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <d31f0f8e-4eb7-4617-86f6-81f38b5c61aa@suse.com> <Zmmy_-JqqWRuwvCj@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <Zmmy_-JqqWRuwvCj@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 16:38, Roger Pau Monné wrote:
> On Wed, Jun 12, 2024 at 03:16:59PM +0200, Jan Beulich wrote:
>> For non-present entries EMT, like most other fields, is meaningless to
>> hardware. Make the logic in ept_set_entry() setting the field (and iPAT)
>> conditional upon dealing with a present entry, leaving the value at 0
>> otherwise. This has two effects for epte_get_entry_emt() which we'll
>> want to leverage subsequently:
>> 1) The call moved here now won't be issued with INVALID_MFN anymore (a
>>    respective BUG_ON() is being added).
>> 2) Neither of the other two calls could now be issued with a truncated
>>    form of INVALID_MFN anymore (as long as there's no bug anywhere
>>    marking an entry present when that was populated using INVALID_MFN).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: New.
>>
>> --- a/xen/arch/x86/mm/p2m-ept.c
>> +++ b/xen/arch/x86/mm/p2m-ept.c
>> @@ -650,6 +650,8 @@ static int cf_check resolve_misconfig(st
>>              if ( e.emt != MTRR_NUM_TYPES )
>>                  break;
>>  
>> +            ASSERT(is_epte_present(&e));
> 
> If this is added here, then there's a condition further below:
> 
> if ( !is_epte_valid(&e) || !is_epte_present(&e) )
> 
> That needs adjusting AFAICT.

I don't think so, because e was re-fetched in between.

> However, in ept_set_entry() we seem to unconditionally call
> resolve_misconfig() against the new entry to be populated, won't this
> possibly cause resolve_misconfig() to be called against non-present
> EPT entries?  I think this is fine because such non-present entries
> will have emt == 0, and hence will take the break just ahead of the
> added ASSERT().

Right, hence how I placed this assertion.

>> @@ -941,6 +932,22 @@ ept_set_entry(struct p2m_domain *p2m, gf
>>              need_modify_vtd_table = 0;
>>  
>>          ept_p2m_type_to_flags(p2m, &new_entry);
>> +
>> +        if ( is_epte_present(&new_entry) )
>> +        {
>> +            bool ipat;
>> +            int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
>> +                                         i * EPT_TABLE_ORDER, &ipat,
>> +                                         p2mt);
>> +
>> +            BUG_ON(mfn_eq(mfn, INVALID_MFN));
>> +
>> +            if ( emt >= 0 )
>> +                new_entry.emt = emt;
>> +            else /* ept_handle_misconfig() will need to take care of this. */
>> +                new_entry.emt = MTRR_NUM_TYPES;
>> +            new_entry.ipat = ipat;
>> +        }
> 
> Should we assert that if new_entry.emt == MTRR_NUM_TYPES the entry
> must have the present bit set before the atomic_write_ept_entry()
> call?

This would feel excessive to me. All writing to new_entry is close together,
immediately ahead of that atomic_write_ept_entry(). And we're (now) writing
MTRR_NUM_TYPES only when is_epte_present() is true (note that it's not "the
present bit").

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 14:54:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 14:54:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739400.1146413 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPMa-00035l-8J; Wed, 12 Jun 2024 14:54:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739400.1146413; Wed, 12 Jun 2024 14:54:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPMa-00035e-5g; Wed, 12 Jun 2024 14:54:12 +0000
Received: by outflank-mailman (input) for mailman id 739400;
 Wed, 12 Jun 2024 14:54:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FQOM=NO=linaro.org=ulf.hansson@srs-se1.protection.inumbo.net>)
 id 1sHPMY-0002Wf-6A
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 14:54:10 +0000
Received: from mail-yw1-x1130.google.com (mail-yw1-x1130.google.com
 [2607:f8b0:4864:20::1130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e5cdde8-28cb-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 16:54:08 +0200 (CEST)
Received: by mail-yw1-x1130.google.com with SMTP id
 00721157ae682-627f3265898so74440227b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 07:54:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e5cdde8-28cb-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718204046; x=1718808846; darn=lists.xenproject.org;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=bziVRdXSiVshwhhdlB5Zfeta9rMg2I/94yUew4zEeXk=;
        b=QVuD2bdYbA859HEWPW1orkKSN5beD+n3/uq3xzZPgLSl46SoSufK4v3Bw5YR1VnX8J
         2bpz2jDe6Hd2hrLUXeRC9GP9w2CRq76PcgPwRKDEshMVGWYt+8onfikLwVRW0WEjYfxD
         n0TR/3Or9wcCnkeosc+l8ypXOYWrGb4i9V7FBnTU80PL29BMzpay7e5Yp/nP7CDKuwQO
         Gu5tuBfZ99ufV9GURlTmQqh3L/yTfrGJOhrkpMLKdZoETEPdq8YgnGGO5eJiUHyLZehZ
         2A1eDIeZalekxrVOlnpm+dZrUzL00RHH3baPnmvirKlga6VJ95S/AmlG7/QCH9lcP6sf
         b9jw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718204046; x=1718808846;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=bziVRdXSiVshwhhdlB5Zfeta9rMg2I/94yUew4zEeXk=;
        b=TgZSjuitAFaPqyseBAeXoITo81an3U1Y7zL48zJQwqbkGq99iUu+cfqMKr2qcKllsB
         IBEkW96cysDpwNRP6oT3EdhRQURxIFL3nagrNiPuk5hQz3LVKkj4YpbOPV+HYZLGDsXW
         0xrZODH4CwvpJSDlKbpMtxzzFve3TYgoAL1BydfL5zJjitmRJG3Nk2z8ufg+lRC+pfvv
         CDZ55Li9h8FOnR6LRCQLgS3WVC3BqgDi/VpfAt4VmAGOFMlR6/4xYlsXdMgt0tRYd9gF
         5uZ2lwRcIYrE0eHvZfAEGiJE/eGVazYuzaYqyEy3H/+EUBiBj/VAYT8DXpep98kWjZD8
         0FDQ==
X-Forwarded-Encrypted: i=1; AJvYcCVEheVPi/d2Slnsyy3Sim51SxZk1j4dJRd68HtmGn23sYKs1T5f07Gm/NOSZHtElTmlH3i33JT3m1Ubhl/j90/lj5uYQ/4mivPfZrgW2zI=
X-Gm-Message-State: AOJu0YxOpjKo9FObCT6vwb37WQG7vMxWy9nEi+5tAut+pDZboTo/nwk2
	S8gKnHl4NlDjg+gIejhzvq/WyqhjiGkw856v1x9kMObQceWBG0Ci9nazwTHu1WsnkmsU4BwOG9l
	hsGvgyJzZz/uBDjfPA4Sv/rmX7o5eZKZ6LlnxkQ==
X-Google-Smtp-Source: AGHT+IFCxCnLhA/PF4dGFj22NjNJfNyDagLJmvuQKDpy4ehXHJPkwxBLp6VP7TUPMCoJJJ2fz6XxV2M8hd2gzn3jjIQ=
X-Received: by 2002:a25:c5c8:0:b0:dfb:24e4:cee3 with SMTP id
 3f1490d57ef6-dfe668718f8mr1977546276.24.1718204046206; Wed, 12 Jun 2024
 07:54:06 -0700 (PDT)
MIME-Version: 1.0
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-14-hch@lst.de>
In-Reply-To: <20240611051929.513387-14-hch@lst.de>
From: Ulf Hansson <ulf.hansson@linaro.org>
Date: Wed, 12 Jun 2024 16:53:29 +0200
Message-ID: <CAPDyKFrv+Gg=BKzKN249MiQy+bPCRALM-LL-zyhbJ38GHtHgAA@mail.gmail.com>
Subject: Re: [PATCH 13/26] block: move cache control settings out of queue->flags
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>, 
	Richard Weinberger <richard@nod.at>, Philipp Reisner <philipp.reisner@linbit.com>, 
	Lars Ellenberg <lars.ellenberg@linbit.com>, 
	=?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>, 
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, 
	"Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, 
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>, Yu Kuai <yukuai3@huawei.com>, 
	Vineeth Vijayan <vneethv@linux.ibm.com>, "Martin K. Petersen" <martin.petersen@oracle.com>, 
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org, 
	drbd-dev@lists.linbit.com, nbd@other.debian.org, 
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org, 
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org, 
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev, 
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org, 
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev, 
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org, 
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"

On Tue, 11 Jun 2024 at 07:24, Christoph Hellwig <hch@lst.de> wrote:
>
> Move the cache control settings into the queue_limits so that they
> can be set atomically and all I/O is frozen when changing the
> flags.
>
> Add new features and flags fields for driver-set flags and internal
> (usually sysfs-controlled) flags in the block layer.  Note that we'll
> eventually remove enough fields from queue_limits to bring it back to the
> previous size.
>
> The disable flag is inverted compared to the previous meaning, which
> means it now survives a rescan, similar to the max_sectors and
> max_discard_sectors user limits.
>
> The FLUSH and FUA flags are now inherited by blk_stack_limits, which
> simplifies the code in dm a lot, but also causes a slight behavior
> change in that dm-switch and dm-unstripe now advertise a write cache
> despite setting num_flush_bios to 0.  The I/O path will handle this
> gracefully, but as far as I can tell the lack of num_flush_bios
> and thus flush support is a pre-existing data integrity bug in those
> targets that really needs fixing, after which a non-zero num_flush_bios
> should be required in dm for targets that map to underlying devices.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Ulf Hansson <ulf.hansson@linaro.org> # For MMC

FYI, for now I don't expect any other patches in my mmc tree to clash
with this for v6.11, assuming that is the target.

Kind regards
Uffe

> ---
>  .../block/writeback_cache_control.rst         | 67 +++++++++++--------
>  arch/um/drivers/ubd_kern.c                    |  2 +-
>  block/blk-core.c                              |  2 +-
>  block/blk-flush.c                             |  9 ++-
>  block/blk-mq-debugfs.c                        |  2 -
>  block/blk-settings.c                          | 29 ++------
>  block/blk-sysfs.c                             | 29 +++++---
>  block/blk-wbt.c                               |  4 +-
>  drivers/block/drbd/drbd_main.c                |  2 +-
>  drivers/block/loop.c                          |  9 +--
>  drivers/block/nbd.c                           | 14 ++--
>  drivers/block/null_blk/main.c                 | 12 ++--
>  drivers/block/ps3disk.c                       |  7 +-
>  drivers/block/rnbd/rnbd-clt.c                 | 10 +--
>  drivers/block/ublk_drv.c                      |  8 ++-
>  drivers/block/virtio_blk.c                    | 20 ++++--
>  drivers/block/xen-blkfront.c                  |  9 ++-
>  drivers/md/bcache/super.c                     |  7 +-
>  drivers/md/dm-table.c                         | 39 +++--------
>  drivers/md/md.c                               |  8 ++-
>  drivers/mmc/core/block.c                      | 42 ++++++------
>  drivers/mmc/core/queue.c                      | 12 ++--
>  drivers/mmc/core/queue.h                      |  3 +-
>  drivers/mtd/mtd_blkdevs.c                     |  5 +-
>  drivers/nvdimm/pmem.c                         |  4 +-
>  drivers/nvme/host/core.c                      |  7 +-
>  drivers/nvme/host/multipath.c                 |  6 --
>  drivers/scsi/sd.c                             | 28 +++++---
>  include/linux/blkdev.h                        | 38 +++++++++--
>  29 files changed, 227 insertions(+), 207 deletions(-)
>
> diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst
> index b208488d0aae85..9cfe27f90253c7 100644
> --- a/Documentation/block/writeback_cache_control.rst
> +++ b/Documentation/block/writeback_cache_control.rst
> @@ -46,41 +46,50 @@ worry if the underlying devices need any explicit cache flushing and how
>  the Forced Unit Access is implemented.  The REQ_PREFLUSH and REQ_FUA flags
>  may both be set on a single bio.
>
> +Feature settings for block drivers
> +----------------------------------
>
> -Implementation details for bio based block drivers
> ---------------------------------------------------------------
> +For devices that do not support volatile write caches there is no driver
> +support required, the block layer completes empty REQ_PREFLUSH requests before
> +entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> +requests that have a payload.
>
> -These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
> -directly below the submit_bio interface.  For remapping drivers the REQ_FUA
> -bits need to be propagated to underlying devices, and a global flush needs
> -to be implemented for bios with the REQ_PREFLUSH bit set.  For real device
> -drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
> -on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
> -data can be completed successfully without doing any work.  Drivers for
> -devices with volatile caches need to implement the support for these
> -flags themselves without any help from the block layer.
> +For devices with volatile write caches the driver needs to tell the block layer
> +that it supports flushing caches by setting the
>
> +   BLK_FEAT_WRITE_CACHE
>
> -Implementation details for request_fn based block drivers
> ----------------------------------------------------------
> +flag in the queue_limits feature field.  For devices that also support the FUA
> +bit the block layer needs to be told to pass on the REQ_FUA bit by also setting
> +the
>
> -For devices that do not support volatile write caches there is no driver
> -support required, the block layer completes empty REQ_PREFLUSH requests before
> -entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
> -requests that have a payload.  For devices with volatile write caches the
> -driver needs to tell the block layer that it supports flushing caches by
> -doing::
> +   BLK_FEAT_FUA
> +
> +flag in the features field of the queue_limits structure.
> +
> +Implementation details for bio based block drivers
> +--------------------------------------------------
> +
> +For bio based drivers the REQ_PREFLUSH and REQ_FUA bits are simply passed on
> +to the driver if the driver sets the BLK_FEAT_WRITE_CACHE flag, and the driver
> +needs to handle them.
> +
> +*NOTE*: The REQ_FUA bit also gets passed on when the BLK_FEAT_FUA flag is
> +_not_ set.  Any bio based driver that sets BLK_FEAT_WRITE_CACHE also needs to
> +handle REQ_FUA.
>
> -       blk_queue_write_cache(sdkp->disk->queue, true, false);
> +For remapping drivers the REQ_FUA bits need to be propagated to underlying
> +devices, and a global flush needs to be implemented for bios with the
> +REQ_PREFLUSH bit set.
>
> -and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn.  Note that
> -REQ_PREFLUSH requests with a payload are automatically turned into a sequence
> -of an empty REQ_OP_FLUSH request followed by the actual write by the block
> -layer.  For devices that also support the FUA bit the block layer needs
> -to be told to pass through the REQ_FUA bit using::
> +Implementation details for blk-mq drivers
> +-----------------------------------------
>
> -       blk_queue_write_cache(sdkp->disk->queue, true, true);
> +When the BLK_FEAT_WRITE_CACHE flag is set, REQ_OP_WRITE | REQ_PREFLUSH requests
> +with a payload are automatically turned into a sequence of a REQ_OP_FLUSH
> +request followed by the actual write by the block layer.
>
> -and the driver must handle write requests that have the REQ_FUA bit set
> -in prep_fn/request_fn.  If the FUA bit is not natively supported the block
> -layer turns it into an empty REQ_OP_FLUSH request after the actual write.
> +When the BLK_FEAT_FUA flag is set, the REQ_FUA bit is simply passed on for the
> +REQ_OP_WRITE request; otherwise a REQ_OP_FLUSH request is sent by the block
> +layer after the completion of the write request for bio submissions with the
> +REQ_FUA bit set.
> diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
> index cdcb75a68989dd..19e01691ea0ea7 100644
> --- a/arch/um/drivers/ubd_kern.c
> +++ b/arch/um/drivers/ubd_kern.c
> @@ -835,6 +835,7 @@ static int ubd_add(int n, char **error_out)
>         struct queue_limits lim = {
>                 .max_segments           = MAX_SG,
>                 .seg_boundary_mask      = PAGE_SIZE - 1,
> +               .features               = BLK_FEAT_WRITE_CACHE,
>         };
>         struct gendisk *disk;
>         int err = 0;
> @@ -882,7 +883,6 @@ static int ubd_add(int n, char **error_out)
>         }
>
>         blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
> -       blk_queue_write_cache(disk->queue, true, false);
>         disk->major = UBD_MAJOR;
>         disk->first_minor = n << UBD_SHIFT;
>         disk->minors = 1 << UBD_SHIFT;
> diff --git a/block/blk-core.c b/block/blk-core.c
> index 82c3ae22d76d88..2b45a4df9a1aa1 100644
> --- a/block/blk-core.c
> +++ b/block/blk-core.c
> @@ -782,7 +782,7 @@ void submit_bio_noacct(struct bio *bio)
>                 if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_WRITE &&
>                                  bio_op(bio) != REQ_OP_ZONE_APPEND))
>                         goto end_io;
> -               if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
> +               if (!bdev_write_cache(bdev)) {
>                         bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA);
>                         if (!bio_sectors(bio)) {
>                                 status = BLK_STS_OK;
> diff --git a/block/blk-flush.c b/block/blk-flush.c
> index 2234f8b3fc05f2..30b9d5033a2b85 100644
> --- a/block/blk-flush.c
> +++ b/block/blk-flush.c
> @@ -381,8 +381,8 @@ static void blk_rq_init_flush(struct request *rq)
>  bool blk_insert_flush(struct request *rq)
>  {
>         struct request_queue *q = rq->q;
> -       unsigned long fflags = q->queue_flags;  /* may change, cache */
>         struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
> +       bool supports_fua = q->limits.features & BLK_FEAT_FUA;
>         unsigned int policy = 0;
>
>         /* FLUSH/FUA request must never be merged */
> @@ -394,11 +394,10 @@ bool blk_insert_flush(struct request *rq)
>         /*
>          * Check which flushes we need to sequence for this operation.
>          */
> -       if (fflags & (1UL << QUEUE_FLAG_WC)) {
> +       if (blk_queue_write_cache(q)) {
>                 if (rq->cmd_flags & REQ_PREFLUSH)
>                         policy |= REQ_FSEQ_PREFLUSH;
> -               if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
> -                   (rq->cmd_flags & REQ_FUA))
> +               if ((rq->cmd_flags & REQ_FUA) && !supports_fua)
>                         policy |= REQ_FSEQ_POSTFLUSH;
>         }
>
> @@ -407,7 +406,7 @@ bool blk_insert_flush(struct request *rq)
>          * REQ_PREFLUSH and FUA for the driver.
>          */
>         rq->cmd_flags &= ~REQ_PREFLUSH;
> -       if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
> +       if (!supports_fua)
>                 rq->cmd_flags &= ~REQ_FUA;
>
>         /*
> diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
> index 770c0c2b72faaa..e8b9db7c30c455 100644
> --- a/block/blk-mq-debugfs.c
> +++ b/block/blk-mq-debugfs.c
> @@ -93,8 +93,6 @@ static const char *const blk_queue_flag_name[] = {
>         QUEUE_FLAG_NAME(INIT_DONE),
>         QUEUE_FLAG_NAME(STABLE_WRITES),
>         QUEUE_FLAG_NAME(POLL),
> -       QUEUE_FLAG_NAME(WC),
> -       QUEUE_FLAG_NAME(FUA),
>         QUEUE_FLAG_NAME(DAX),
>         QUEUE_FLAG_NAME(STATS),
>         QUEUE_FLAG_NAME(REGISTERED),
> diff --git a/block/blk-settings.c b/block/blk-settings.c
> index f11c8676eb4c67..536ee202fcdccb 100644
> --- a/block/blk-settings.c
> +++ b/block/blk-settings.c
> @@ -261,6 +261,9 @@ static int blk_validate_limits(struct queue_limits *lim)
>                 lim->misaligned = 0;
>         }
>
> +       if (!(lim->features & BLK_FEAT_WRITE_CACHE))
> +               lim->features &= ~BLK_FEAT_FUA;
> +
>         err = blk_validate_integrity_limits(lim);
>         if (err)
>                 return err;
> @@ -454,6 +457,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
>  {
>         unsigned int top, bottom, alignment, ret = 0;
>
> +       t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
> +
>         t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
>         t->max_user_sectors = min_not_zero(t->max_user_sectors,
>                         b->max_user_sectors);
> @@ -711,30 +716,6 @@ void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
>  }
>  EXPORT_SYMBOL(blk_set_queue_depth);
>
> -/**
> - * blk_queue_write_cache - configure queue's write cache
> - * @q:         the request queue for the device
> - * @wc:                write back cache on or off
> - * @fua:       device supports FUA writes, if true
> - *
> - * Tell the block layer about the write cache of @q.
> - */
> -void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
> -{
> -       if (wc) {
> -               blk_queue_flag_set(QUEUE_FLAG_HW_WC, q);
> -               blk_queue_flag_set(QUEUE_FLAG_WC, q);
> -       } else {
> -               blk_queue_flag_clear(QUEUE_FLAG_HW_WC, q);
> -               blk_queue_flag_clear(QUEUE_FLAG_WC, q);
> -       }
> -       if (fua)
> -               blk_queue_flag_set(QUEUE_FLAG_FUA, q);
> -       else
> -               blk_queue_flag_clear(QUEUE_FLAG_FUA, q);
> -}
> -EXPORT_SYMBOL_GPL(blk_queue_write_cache);
> -
>  int bdev_alignment_offset(struct block_device *bdev)
>  {
>         struct request_queue *q = bdev_get_queue(bdev);
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 5c787965b7d09e..4f524c1d5e08bd 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -423,32 +423,41 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
>
>  static ssize_t queue_wc_show(struct request_queue *q, char *page)
>  {
> -       if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
> -               return sprintf(page, "write back\n");
> -
> -       return sprintf(page, "write through\n");
> +       if (q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED)
> +               return sprintf(page, "write through\n");
> +       return sprintf(page, "write back\n");
>  }
>
>  static ssize_t queue_wc_store(struct request_queue *q, const char *page,
>                               size_t count)
>  {
> +       struct queue_limits lim;
> +       bool disable;
> +       int err;
> +
>         if (!strncmp(page, "write back", 10)) {
> -               if (!test_bit(QUEUE_FLAG_HW_WC, &q->queue_flags))
> -                       return -EINVAL;
> -               blk_queue_flag_set(QUEUE_FLAG_WC, q);
> +               disable = false;
>         } else if (!strncmp(page, "write through", 13) ||
> -                !strncmp(page, "none", 4)) {
> -               blk_queue_flag_clear(QUEUE_FLAG_WC, q);
> +                  !strncmp(page, "none", 4)) {
> +               disable = true;
>         } else {
>                 return -EINVAL;
>         }
>
> +       lim = queue_limits_start_update(q);
> +       if (disable)
> +               lim.flags |= BLK_FLAGS_WRITE_CACHE_DISABLED;
> +       else
> +               lim.flags &= ~BLK_FLAGS_WRITE_CACHE_DISABLED;
> +       err = queue_limits_commit_update(q, &lim);
> +       if (err)
> +               return err;
>         return count;
>  }
>
>  static ssize_t queue_fua_show(struct request_queue *q, char *page)
>  {
> -       return sprintf(page, "%u\n", test_bit(QUEUE_FLAG_FUA, &q->queue_flags));
> +       return sprintf(page, "%u\n", !!(q->limits.features & BLK_FEAT_FUA));
>  }
>
>  static ssize_t queue_dax_show(struct request_queue *q, char *page)
> diff --git a/block/blk-wbt.c b/block/blk-wbt.c
> index 64472134dd26df..1a5e4b049ecd1d 100644
> --- a/block/blk-wbt.c
> +++ b/block/blk-wbt.c
> @@ -206,8 +206,8 @@ static void wbt_rqw_done(struct rq_wb *rwb, struct rq_wait *rqw,
>          */
>         if (wb_acct & WBT_DISCARD)
>                 limit = rwb->wb_background;
> -       else if (test_bit(QUEUE_FLAG_WC, &rwb->rqos.disk->queue->queue_flags) &&
> -                !wb_recent_wait(rwb))
> +       else if (blk_queue_write_cache(rwb->rqos.disk->queue) &&
> +                !wb_recent_wait(rwb))
>                 limit = 0;
>         else
>                 limit = rwb->wb_normal;
> diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
> index 113b441d4d3670..bf42a46781fa21 100644
> --- a/drivers/block/drbd/drbd_main.c
> +++ b/drivers/block/drbd/drbd_main.c
> @@ -2697,6 +2697,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
>                  * connect.
>                  */
>                 .max_hw_sectors         = DRBD_MAX_BIO_SIZE_SAFE >> 8,
> +               .features               = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
>         };
>
>         device = minor_to_device(minor);
> @@ -2736,7 +2737,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
>         disk->private_data = device;
>
>         blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
> -       blk_queue_write_cache(disk->queue, true, true);
>
>         device->md_io.page = alloc_page(GFP_KERNEL);
>         if (!device->md_io.page)
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index 2c4a5eb3a6a7f9..0b23fdc4e2edcc 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -985,6 +985,9 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
>         lim.logical_block_size = bsize;
>         lim.physical_block_size = bsize;
>         lim.io_min = bsize;
> +       lim.features &= ~BLK_FEAT_WRITE_CACHE;
> +       if (file->f_op->fsync && !(lo->lo_flags & LO_FLAGS_READ_ONLY))
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
>         if (!backing_bdev || bdev_nonrot(backing_bdev))
>                 blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
>         else
> @@ -1078,9 +1081,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
>         lo->old_gfp_mask = mapping_gfp_mask(mapping);
>         mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
>
> -       if (!(lo->lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
> -               blk_queue_write_cache(lo->lo_queue, true, false);
> -
>         error = loop_reconfigure_limits(lo, config->block_size);
>         if (WARN_ON_ONCE(error))
>                 goto out_unlock;
> @@ -1131,9 +1131,6 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
>         struct file *filp;
>         gfp_t gfp = lo->old_gfp_mask;
>
> -       if (test_bit(QUEUE_FLAG_WC, &lo->lo_queue->queue_flags))
> -               blk_queue_write_cache(lo->lo_queue, false, false);
> -
>         /*
>          * Freeze the request queue when unbinding on a live file descriptor and
>          * thus an open device.  When called from ->release we are guaranteed
> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> index 44b8c671921e5c..cb1c86a6a3fb9d 100644
> --- a/drivers/block/nbd.c
> +++ b/drivers/block/nbd.c
> @@ -342,12 +342,14 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
>                 lim.max_hw_discard_sectors = UINT_MAX;
>         else
>                 lim.max_hw_discard_sectors = 0;
> -       if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH))
> -               blk_queue_write_cache(nbd->disk->queue, false, false);
> -       else if (nbd->config->flags & NBD_FLAG_SEND_FUA)
> -               blk_queue_write_cache(nbd->disk->queue, true, true);
> -       else
> -               blk_queue_write_cache(nbd->disk->queue, true, false);
> +       if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH)) {
> +               lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
> +       } else if (nbd->config->flags & NBD_FLAG_SEND_FUA) {
> +               lim.features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
> +       } else {
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
> +               lim.features &= ~BLK_FEAT_FUA;
> +       }
>         lim.logical_block_size = blksize;
>         lim.physical_block_size = blksize;
>         error = queue_limits_commit_update(nbd->disk->queue, &lim);
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index 631dca2e4e8442..73e4aecf5bb492 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -1928,6 +1928,13 @@ static int null_add_dev(struct nullb_device *dev)
>                         goto out_cleanup_tags;
>         }
>
> +       if (dev->cache_size > 0) {
> +               set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
> +               if (dev->fua)
> +                       lim.features |= BLK_FEAT_FUA;
> +       }
> +
>         nullb->disk = blk_mq_alloc_disk(nullb->tag_set, &lim, nullb);
>         if (IS_ERR(nullb->disk)) {
>                 rv = PTR_ERR(nullb->disk);
> @@ -1940,11 +1947,6 @@ static int null_add_dev(struct nullb_device *dev)
>                 nullb_setup_bwtimer(nullb);
>         }
>
> -       if (dev->cache_size > 0) {
> -               set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
> -               blk_queue_write_cache(nullb->q, true, dev->fua);
> -       }
> -
>         nullb->q->queuedata = nullb;
>         blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
>
> diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
> index b810ac0a5c4b97..8b73cf459b5937 100644
> --- a/drivers/block/ps3disk.c
> +++ b/drivers/block/ps3disk.c
> @@ -388,9 +388,8 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
>                 .max_segments           = -1,
>                 .max_segment_size       = dev->bounce_size,
>                 .dma_alignment          = dev->blk_size - 1,
> +               .features               = BLK_FEAT_WRITE_CACHE,
>         };
> -
> -       struct request_queue *queue;
>         struct gendisk *gendisk;
>
>         if (dev->blk_size < 512) {
> @@ -447,10 +446,6 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
>                 goto fail_free_tag_set;
>         }
>
> -       queue = gendisk->queue;
> -
> -       blk_queue_write_cache(queue, true, false);
> -
>         priv->gendisk = gendisk;
>         gendisk->major = ps3disk_major;
>         gendisk->first_minor = devidx * PS3DISK_MINORS;
> diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
> index b7ffe03c61606d..02c4b173182719 100644
> --- a/drivers/block/rnbd/rnbd-clt.c
> +++ b/drivers/block/rnbd/rnbd-clt.c
> @@ -1389,6 +1389,12 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev,
>                         le32_to_cpu(rsp->max_discard_sectors);
>         }
>
> +       if (rsp->cache_policy & RNBD_WRITEBACK) {
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
> +               if (rsp->cache_policy & RNBD_FUA)
> +                       lim.features |= BLK_FEAT_FUA;
> +       }
> +
>         dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, &lim, dev);
>         if (IS_ERR(dev->gd))
>                 return PTR_ERR(dev->gd);
> @@ -1397,10 +1403,6 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev,
>
>         blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, dev->queue);
>         blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, dev->queue);
> -       blk_queue_write_cache(dev->queue,
> -                             !!(rsp->cache_policy & RNBD_WRITEBACK),
> -                             !!(rsp->cache_policy & RNBD_FUA));
> -
>         return rnbd_clt_setup_gen_disk(dev, rsp, idx);
>  }
>
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index 4e159948c912c2..e45c65c1848d31 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -487,8 +487,6 @@ static void ublk_dev_param_basic_apply(struct ublk_device *ub)
>         struct request_queue *q = ub->ub_disk->queue;
>         const struct ublk_param_basic *p = &ub->params.basic;
>
> -       blk_queue_write_cache(q, p->attrs & UBLK_ATTR_VOLATILE_CACHE,
> -                       p->attrs & UBLK_ATTR_FUA);
>         if (p->attrs & UBLK_ATTR_ROTATIONAL)
>                 blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
>         else
> @@ -2210,6 +2208,12 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
>                 lim.max_zone_append_sectors = p->max_zone_append_sectors;
>         }
>
> +       if (ub->params.basic.attrs & UBLK_ATTR_VOLATILE_CACHE) {
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
> +               if (ub->params.basic.attrs & UBLK_ATTR_FUA)
> +                       lim.features |= BLK_FEAT_FUA;
> +       }
> +
>         if (wait_for_completion_interruptible(&ub->completion) != 0)
>                 return -EINTR;
>
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 378b241911ca87..b1a3c293528519 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -1100,6 +1100,7 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
>         struct gendisk *disk = dev_to_disk(dev);
>         struct virtio_blk *vblk = disk->private_data;
>         struct virtio_device *vdev = vblk->vdev;
> +       struct queue_limits lim;
>         int i;
>
>         BUG_ON(!virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_CONFIG_WCE));
> @@ -1108,7 +1109,17 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
>                 return i;
>
>         virtio_cwrite8(vdev, offsetof(struct virtio_blk_config, wce), i);
> -       blk_queue_write_cache(disk->queue, virtblk_get_cache_mode(vdev), false);
> +
> +       lim = queue_limits_start_update(disk->queue);
> +       if (virtblk_get_cache_mode(vdev))
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
> +       else
> +               lim.features &= ~BLK_FEAT_WRITE_CACHE;
> +       blk_mq_freeze_queue(disk->queue);
> +       i = queue_limits_commit_update(disk->queue, &lim);
> +       blk_mq_unfreeze_queue(disk->queue);
> +       if (i)
> +               return i;
>         return count;
>  }
>
> @@ -1504,6 +1515,9 @@ static int virtblk_probe(struct virtio_device *vdev)
>         if (err)
>                 goto out_free_tags;
>
> +       if (virtblk_get_cache_mode(vdev))
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
> +
>         vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, &lim, vblk);
>         if (IS_ERR(vblk->disk)) {
>                 err = PTR_ERR(vblk->disk);
> @@ -1519,10 +1533,6 @@ static int virtblk_probe(struct virtio_device *vdev)
>         vblk->disk->fops = &virtblk_fops;
>         vblk->index = index;
>
> -       /* configure queue flush support */
> -       blk_queue_write_cache(vblk->disk->queue, virtblk_get_cache_mode(vdev),
> -                       false);
> -
>         /* If disk is read-only in the host, the guest should obey */
>         if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
>                 set_disk_ro(vblk->disk, 1);
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 9794ac2d3299d1..de38e025769b14 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -956,6 +956,12 @@ static void blkif_set_queue_limits(const struct blkfront_info *info,
>                         lim->max_secure_erase_sectors = UINT_MAX;
>         }
>
> +       if (info->feature_flush) {
> +               lim->features |= BLK_FEAT_WRITE_CACHE;
> +               if (info->feature_fua)
> +                       lim->features |= BLK_FEAT_FUA;
> +       }
> +
>         /* Hard sector size and max sectors impersonate the equiv. hardware. */
>         lim->logical_block_size = info->sector_size;
>         lim->physical_block_size = info->physical_sector_size;
> @@ -1150,9 +1156,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
>         info->sector_size = sector_size;
>         info->physical_sector_size = physical_sector_size;
>
> -       blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
> -                             info->feature_fua ? true : false);
> -
>         pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
>                 info->gd->disk_name, flush_info(info),
>                 "persistent grants:", info->feature_persistent ?
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index 4d11fc664cb0b8..cb6595c8b5514e 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -897,7 +897,6 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
>                 sector_t sectors, struct block_device *cached_bdev,
>                 const struct block_device_operations *ops)
>  {
> -       struct request_queue *q;
>         const size_t max_stripes = min_t(size_t, INT_MAX,
>                                          SIZE_MAX / sizeof(atomic_t));
>         struct queue_limits lim = {
> @@ -909,6 +908,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
>                 .io_min                 = block_size,
>                 .logical_block_size     = block_size,
>                 .physical_block_size    = block_size,
> +               .features               = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
>         };
>         uint64_t n;
>         int idx;
> @@ -975,12 +975,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
>         d->disk->fops           = ops;
>         d->disk->private_data   = d;
>
> -       q = d->disk->queue;
> -
>         blk_queue_flag_set(QUEUE_FLAG_NONROT, d->disk->queue);
> -
> -       blk_queue_write_cache(q, true, true);
> -
>         return 0;
>
>  out_bioset_exit:
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index fd789eeb62d943..fbe125d55e25b4 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1686,34 +1686,16 @@ int dm_calculate_queue_limits(struct dm_table *t,
>         return validate_hardware_logical_block_alignment(t, limits);
>  }
>
> -static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
> -                               sector_t start, sector_t len, void *data)
> -{
> -       unsigned long flush = (unsigned long) data;
> -       struct request_queue *q = bdev_get_queue(dev->bdev);
> -
> -       return (q->queue_flags & flush);
> -}
> -
> -static bool dm_table_supports_flush(struct dm_table *t, unsigned long flush)
> +/*
> + * Check if a target requires flush support even if none of the underlying
> + * devices need it (e.g. to persist target-specific metadata).
> + */
> +static bool dm_table_supports_flush(struct dm_table *t)
>  {
> -       /*
> -        * Require at least one underlying device to support flushes.
> -        * t->devices includes internal dm devices such as mirror logs
> -        * so we need to use iterate_devices here, which targets
> -        * supporting flushes must provide.
> -        */
>         for (unsigned int i = 0; i < t->num_targets; i++) {
>                 struct dm_target *ti = dm_table_get_target(t, i);
>
> -               if (!ti->num_flush_bios)
> -                       continue;
> -
> -               if (ti->flush_supported)
> -                       return true;
> -
> -               if (ti->type->iterate_devices &&
> -                   ti->type->iterate_devices(ti, device_flush_capable, (void *) flush))
> +               if (ti->num_flush_bios && ti->flush_supported)
>                         return true;
>         }
>
> @@ -1855,7 +1837,6 @@ static int device_requires_stable_pages(struct dm_target *ti,
>  int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
>                               struct queue_limits *limits)
>  {
> -       bool wc = false, fua = false;
>         int r;
>
>         if (dm_table_supports_nowait(t))
> @@ -1876,12 +1857,8 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
>         if (!dm_table_supports_secure_erase(t))
>                 limits->max_secure_erase_sectors = 0;
>
> -       if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
> -               wc = true;
> -               if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_FUA)))
> -                       fua = true;
> -       }
> -       blk_queue_write_cache(q, wc, fua);
> +       if (dm_table_supports_flush(t))
> +               limits->features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
>
>         if (dm_table_supports_dax(t, device_not_dax_capable)) {
>                 blk_queue_flag_set(QUEUE_FLAG_DAX, q);
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 67ece2cd725f50..2f4c5d1755d857 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -5785,7 +5785,10 @@ struct mddev *md_alloc(dev_t dev, char *name)
>         int partitioned;
>         int shift;
>         int unit;
> -       int error ;
> +       int error;
> +       struct queue_limits lim = {
> +               .features               = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
> +       };
>
>         /*
>          * Wait for any previous instance of this device to be completely
> @@ -5825,7 +5828,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
>                  */
>                 mddev->hold_active = UNTIL_STOP;
>
> -       disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
> +       disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
>         if (IS_ERR(disk)) {
>                 error = PTR_ERR(disk);
>                 goto out_free_mddev;
> @@ -5843,7 +5846,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
>         disk->fops = &md_fops;
>         disk->private_data = mddev;
>
> -       blk_queue_write_cache(disk->queue, true, true);
>         disk->events |= DISK_EVENT_MEDIA_CHANGE;
>         mddev->gendisk = disk;
>         error = add_disk(disk);
> diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
> index 367509b5b6466c..2c9963248fcbd6 100644
> --- a/drivers/mmc/core/block.c
> +++ b/drivers/mmc/core/block.c
> @@ -2466,8 +2466,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>         struct mmc_blk_data *md;
>         int devidx, ret;
>         char cap_str[10];
> -       bool cache_enabled = false;
> -       bool fua_enabled = false;
> +       unsigned int features = 0;
>
>         devidx = ida_alloc_max(&mmc_blk_ida, max_devices - 1, GFP_KERNEL);
>         if (devidx < 0) {
> @@ -2499,7 +2498,24 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>          */
>         md->read_only = mmc_blk_readonly(card);
>
> -       md->disk = mmc_init_queue(&md->queue, card);
> +       if (mmc_host_cmd23(card->host)) {
> +               if ((mmc_card_mmc(card) &&
> +                    card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
> +                   (mmc_card_sd(card) &&
> +                    card->scr.cmds & SD_SCR_CMD23_SUPPORT))
> +                       md->flags |= MMC_BLK_CMD23;
> +       }
> +
> +       if (md->flags & MMC_BLK_CMD23 &&
> +           ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
> +            card->ext_csd.rel_sectors)) {
> +               md->flags |= MMC_BLK_REL_WR;
> +               features |= (BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
> +       } else if (mmc_cache_enabled(card->host)) {
> +               features |= BLK_FEAT_WRITE_CACHE;
> +       }
> +
> +       md->disk = mmc_init_queue(&md->queue, card, features);
>         if (IS_ERR(md->disk)) {
>                 ret = PTR_ERR(md->disk);
>                 goto err_kfree;
> @@ -2539,26 +2555,6 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
>
>         set_capacity(md->disk, size);
>
> -       if (mmc_host_cmd23(card->host)) {
> -               if ((mmc_card_mmc(card) &&
> -                    card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
> -                   (mmc_card_sd(card) &&
> -                    card->scr.cmds & SD_SCR_CMD23_SUPPORT))
> -                       md->flags |= MMC_BLK_CMD23;
> -       }
> -
> -       if (md->flags & MMC_BLK_CMD23 &&
> -           ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
> -            card->ext_csd.rel_sectors)) {
> -               md->flags |= MMC_BLK_REL_WR;
> -               fua_enabled = true;
> -               cache_enabled = true;
> -       }
> -       if (mmc_cache_enabled(card->host))
> -               cache_enabled  = true;
> -
> -       blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
> -
>         string_get_size((u64)size, 512, STRING_UNITS_2,
>                         cap_str, sizeof(cap_str));
>         pr_info("%s: %s %s %s%s\n",
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 241cdc2b2a2a3b..97ff993d31570c 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -344,10 +344,12 @@ static const struct blk_mq_ops mmc_mq_ops = {
>  };
>
>  static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
> -               struct mmc_card *card)
> +               struct mmc_card *card, unsigned int features)
>  {
>         struct mmc_host *host = card->host;
> -       struct queue_limits lim = { };
> +       struct queue_limits lim = {
> +               .features               = features,
> +       };
>         struct gendisk *disk;
>
>         if (mmc_can_erase(card))
> @@ -413,10 +415,12 @@ static inline bool mmc_merge_capable(struct mmc_host *host)
>   * mmc_init_queue - initialise a queue structure.
>   * @mq: mmc queue
>   * @card: mmc card to attach this queue
> + * @features: block layer features (BLK_FEAT_*)
>   *
>   * Initialise a MMC card request queue.
>   */
> -struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
> +struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
> +               unsigned int features)
>  {
>         struct mmc_host *host = card->host;
>         struct gendisk *disk;
> @@ -460,7 +464,7 @@ struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
>                 return ERR_PTR(ret);
>
>
> -       disk = mmc_alloc_disk(mq, card);
> +       disk = mmc_alloc_disk(mq, card, features);
>         if (IS_ERR(disk))
>                 blk_mq_free_tag_set(&mq->tag_set);
>         return disk;
> diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
> index 9ade3bcbb714e4..1498840a4ea008 100644
> --- a/drivers/mmc/core/queue.h
> +++ b/drivers/mmc/core/queue.h
> @@ -94,7 +94,8 @@ struct mmc_queue {
>         struct work_struct      complete_work;
>  };
>
> -struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card);
> +struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
> +               unsigned int features);
>  extern void mmc_cleanup_queue(struct mmc_queue *);
>  extern void mmc_queue_suspend(struct mmc_queue *);
>  extern void mmc_queue_resume(struct mmc_queue *);
> diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
> index 3caa0717d46c01..1b9f57f231e8be 100644
> --- a/drivers/mtd/mtd_blkdevs.c
> +++ b/drivers/mtd/mtd_blkdevs.c
> @@ -336,6 +336,8 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>         lim.logical_block_size = tr->blksize;
>         if (tr->discard)
>                 lim.max_hw_discard_sectors = UINT_MAX;
> +       if (tr->flush)
> +               lim.features |= BLK_FEAT_WRITE_CACHE;
>
>         /* Create gendisk */
>         gd = blk_mq_alloc_disk(new->tag_set, &lim, new);
> @@ -373,9 +375,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
>         spin_lock_init(&new->queue_lock);
>         INIT_LIST_HEAD(&new->rq_list);
>
> -       if (tr->flush)
> -               blk_queue_write_cache(new->rq, true, false);
> -
>         blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
>         blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
>
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index 598fe2e89bda45..aff818469c114c 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -455,6 +455,7 @@ static int pmem_attach_disk(struct device *dev,
>                 .logical_block_size     = pmem_sector_size(ndns),
>                 .physical_block_size    = PAGE_SIZE,
>                 .max_hw_sectors         = UINT_MAX,
> +               .features               = BLK_FEAT_WRITE_CACHE,
>         };
>         int nid = dev_to_node(dev), fua;
>         struct resource *res = &nsio->res;
> @@ -495,6 +496,8 @@ static int pmem_attach_disk(struct device *dev,
>                 dev_warn(dev, "unable to guarantee persistence of writes\n");
>                 fua = 0;
>         }
> +       if (fua)
> +               lim.features |= BLK_FEAT_FUA;
>
>         if (!devm_request_mem_region(dev, res->start, resource_size(res),
>                                 dev_name(&ndns->dev))) {
> @@ -543,7 +546,6 @@ static int pmem_attach_disk(struct device *dev,
>         }
>         pmem->virt_addr = addr;
>
> -       blk_queue_write_cache(q, true, fua);
>         blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
>         blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
>         if (pmem->pfn_flags & PFN_MAP)
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 5a673fa5cb2612..9fc5e36fe2e55e 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -2056,7 +2056,6 @@ static int nvme_update_ns_info_generic(struct nvme_ns *ns,
>  static int nvme_update_ns_info_block(struct nvme_ns *ns,
>                 struct nvme_ns_info *info)
>  {
> -       bool vwc = ns->ctrl->vwc & NVME_CTRL_VWC_PRESENT;
>         struct queue_limits lim;
>         struct nvme_id_ns_nvm *nvm = NULL;
>         struct nvme_zone_info zi = {};
> @@ -2106,6 +2105,11 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
>             ns->head->ids.csi == NVME_CSI_ZNS)
>                 nvme_update_zone_info(ns, &lim, &zi);
>
> +       if (ns->ctrl->vwc & NVME_CTRL_VWC_PRESENT)
> +               lim.features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
> +       else
> +               lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
> +
>         /*
>          * Register a metadata profile for PI, or the plain non-integrity NVMe
>          * metadata masquerading as Type 0 if supported, otherwise reject block
> @@ -2132,7 +2136,6 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
>         if ((id->dlfeat & 0x7) == 0x1 && (id->dlfeat & (1 << 3)))
>                 ns->head->features |= NVME_NS_DEAC;
>         set_disk_ro(ns->disk, nvme_ns_is_readonly(ns, info));
> -       blk_queue_write_cache(ns->disk->queue, vwc, vwc);
>         set_bit(NVME_NS_READY, &ns->flags);
>         blk_mq_unfreeze_queue(ns->disk->queue);
>
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 12c59db02539e5..3d0e23a0a4ddd8 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -521,7 +521,6 @@ static void nvme_requeue_work(struct work_struct *work)
>  int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
>  {
>         struct queue_limits lim;
> -       bool vwc = false;
>
>         mutex_init(&head->lock);
>         bio_list_init(&head->requeue_list);
> @@ -562,11 +561,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
>         if (ctrl->tagset->nr_maps > HCTX_TYPE_POLL &&
>             ctrl->tagset->map[HCTX_TYPE_POLL].nr_queues)
>                 blk_queue_flag_set(QUEUE_FLAG_POLL, head->disk->queue);
> -
> -       /* we need to propagate up the VMC settings */
> -       if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
> -               vwc = true;
> -       blk_queue_write_cache(head->disk->queue, vwc, vwc);
>         return 0;
>  }
>
> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> index 5bfed61c70db8f..8764ea14c9b881 100644
> --- a/drivers/scsi/sd.c
> +++ b/drivers/scsi/sd.c
> @@ -120,17 +120,18 @@ static const char *sd_cache_types[] = {
>         "write back, no read (daft)"
>  };
>
> -static void sd_set_flush_flag(struct scsi_disk *sdkp)
> +static void sd_set_flush_flag(struct scsi_disk *sdkp,
> +               struct queue_limits *lim)
>  {
> -       bool wc = false, fua = false;
> -
>         if (sdkp->WCE) {
> -               wc = true;
> +               lim->features |= BLK_FEAT_WRITE_CACHE;
>                 if (sdkp->DPOFUA)
> -                       fua = true;
> +                       lim->features |= BLK_FEAT_FUA;
> +               else
> +                       lim->features &= ~BLK_FEAT_FUA;
> +       } else {
> +               lim->features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
>         }
> -
> -       blk_queue_write_cache(sdkp->disk->queue, wc, fua);
>  }
>
>  static ssize_t
> @@ -168,9 +169,18 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
>         wce = (ct & 0x02) && !sdkp->write_prot ? 1 : 0;
>
>         if (sdkp->cache_override) {
> +               struct queue_limits lim;
> +
>                 sdkp->WCE = wce;
>                 sdkp->RCD = rcd;
> -               sd_set_flush_flag(sdkp);
> +
> +               lim = queue_limits_start_update(sdkp->disk->queue);
> +               sd_set_flush_flag(sdkp, &lim);
> +               blk_mq_freeze_queue(sdkp->disk->queue);
> +               ret = queue_limits_commit_update(sdkp->disk->queue, &lim);
> +               blk_mq_unfreeze_queue(sdkp->disk->queue);
> +               if (ret)
> +                       return ret;
>                 return count;
>         }
>
> @@ -3659,7 +3669,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
>          * We now have all cache related info, determine how we deal
>          * with flush requests.
>          */
> -       sd_set_flush_flag(sdkp);
> +       sd_set_flush_flag(sdkp, &lim);
>
>         /* Initial block count limit based on CDB TRANSFER LENGTH field size. */
>         dev_max = sdp->use_16_for_rw ? SD_MAX_XFER_BLOCKS : SD_DEF_XFER_BLOCKS;
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index c792d4d81e5fcc..4e8931a2c76b07 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -282,6 +282,28 @@ static inline bool blk_op_is_passthrough(blk_opf_t op)
>         return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
>  }
>
> +/* flags set by the driver in queue_limits.features */
> +enum {
> +       /* supports a a volatile write cache */
> +       BLK_FEAT_WRITE_CACHE                    = (1u << 0),
> +
> +       /* supports passing on the FUA bit */
> +       BLK_FEAT_FUA                            = (1u << 1),
> +};
> +
> +/*
> + * Flags automatically inherited when stacking limits.
> + */
> +#define BLK_FEAT_INHERIT_MASK \
> +       (BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA)
> +
> +
> +/* internal flags in queue_limits.flags */
> +enum {
> +       /* do not send FLUSH or FUA command despite advertised write cache */
> +       BLK_FLAGS_WRITE_CACHE_DISABLED          = (1u << 31),
> +};
> +
>  /*
>   * BLK_BOUNCE_NONE:    never bounce (default)
>   * BLK_BOUNCE_HIGH:    bounce all highmem pages
> @@ -292,6 +314,8 @@ enum blk_bounce {
>  };
>
>  struct queue_limits {
> +       unsigned int            features;
> +       unsigned int            flags;
>         enum blk_bounce         bounce;
>         unsigned long           seg_boundary_mask;
>         unsigned long           virt_boundary_mask;
> @@ -536,12 +560,9 @@ struct request_queue {
>  #define QUEUE_FLAG_ADD_RANDOM  10      /* Contributes to random pool */
>  #define QUEUE_FLAG_SYNCHRONOUS 11      /* always completes in submit context */
>  #define QUEUE_FLAG_SAME_FORCE  12      /* force complete on same CPU */
> -#define QUEUE_FLAG_HW_WC       13      /* Write back caching supported */
>  #define QUEUE_FLAG_INIT_DONE   14      /* queue is initialized */
>  #define QUEUE_FLAG_STABLE_WRITES 15    /* don't modify blks until WB is done */
>  #define QUEUE_FLAG_POLL                16      /* IO polling enabled if set */
> -#define QUEUE_FLAG_WC          17      /* Write back caching */
> -#define QUEUE_FLAG_FUA         18      /* device supports FUA writes */
>  #define QUEUE_FLAG_DAX         19      /* device supports DAX */
>  #define QUEUE_FLAG_STATS       20      /* track IO start and completion times */
>  #define QUEUE_FLAG_REGISTERED  22      /* queue has been registered to a disk */
> @@ -951,7 +972,6 @@ void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
>                 sector_t offset, const char *pfx);
>  extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
>  extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
> -extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
>
>  struct blk_independent_access_ranges *
>  disk_alloc_independent_access_ranges(struct gendisk *disk, int nr_ia_ranges);
> @@ -1305,14 +1325,20 @@ static inline bool bdev_stable_writes(struct block_device *bdev)
>         return test_bit(QUEUE_FLAG_STABLE_WRITES, &q->queue_flags);
>  }
>
> +static inline bool blk_queue_write_cache(struct request_queue *q)
> +{
> +       return (q->limits.features & BLK_FEAT_WRITE_CACHE) &&
> +               !(q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED);
> +}
> +
>  static inline bool bdev_write_cache(struct block_device *bdev)
>  {
> -       return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
> +       return blk_queue_write_cache(bdev_get_queue(bdev));
>  }
>
>  static inline bool bdev_fua(struct block_device *bdev)
>  {
> -       return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
> +       return bdev_get_queue(bdev)->limits.features & BLK_FEAT_FUA;
>  }
>
>  static inline bool bdev_nowait(struct block_device *bdev)
> --
> 2.43.0
>
>
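[Editor's illustration, not part of the quoted patch: the feature-bit model the series introduces can be sketched as a standalone C mock. The struct and helper below mirror the patch's BLK_FEAT_WRITE_CACHE / BLK_FEAT_FUA / BLK_FLAGS_WRITE_CACHE_DISABLED names but are simplified stand-ins, not the kernel definitions.]

```c
#include <assert.h>

/* mirrors of the driver-settable feature bits from the patch */
enum {
	MOCK_BLK_FEAT_WRITE_CACHE = (1u << 0),
	MOCK_BLK_FEAT_FUA         = (1u << 1),
};

/* mirror of the internal flag that masks an advertised write cache */
enum {
	MOCK_BLK_FLAGS_WRITE_CACHE_DISABLED = (1u << 31),
};

struct mock_queue_limits {
	unsigned int features;	/* what the device supports */
	unsigned int flags;	/* internal state, e.g. admin disable */
};

struct mock_request_queue {
	struct mock_queue_limits limits;
};

/*
 * The cache is effective only if the driver advertised it AND it has
 * not been administratively disabled (note the negated flag test).
 */
static inline int mock_blk_queue_write_cache(const struct mock_request_queue *q)
{
	return (q->limits.features & MOCK_BLK_FEAT_WRITE_CACHE) &&
		!(q->limits.flags & MOCK_BLK_FLAGS_WRITE_CACHE_DISABLED);
}

static inline int mock_bdev_fua(const struct mock_request_queue *q)
{
	return q->limits.features & MOCK_BLK_FEAT_FUA;
}
```

A driver in this model sets `limits.features` once before allocating the disk (as the converted null_blk, nbd, and sd hunks above do), instead of calling a setter on the live queue.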


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:00:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:00:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739409.1146424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPSp-0005J8-2w; Wed, 12 Jun 2024 15:00:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739409.1146424; Wed, 12 Jun 2024 15:00:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPSo-0005J1-VH; Wed, 12 Jun 2024 15:00:38 +0000
Received: by outflank-mailman (input) for mailman id 739409;
 Wed, 12 Jun 2024 15:00:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tg07=NO=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHPSn-0005Iv-LH
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:00:37 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 862b64a0-28cc-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 17:00:36 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 9E11668AFE; Wed, 12 Jun 2024 17:00:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 862b64a0-28cc-11ef-90a3-e314d9c70b13
Date: Wed, 12 Jun 2024 17:00:30 +0200
From: Christoph Hellwig <hch@lst.de>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when
 they fail
Message-ID: <20240612150030.GA29188@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-11-hch@lst.de> <ZmlVziizbaboaBSn@macbook>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZmlVziizbaboaBSn@macbook>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 12, 2024 at 10:01:18AM +0200, Roger Pau Monné wrote:
> On Tue, Jun 11, 2024 at 07:19:10AM +0200, Christoph Hellwig wrote:
> > blkfront always had a robust negotiation protocol for detecting a write
> > cache.  Stop simply disabling cache flushes when they fail as that is
> > a grave error.
> 
> It's my understanding the current code attempts to cover up for the
> lack of guarantees the feature itself provides:

> So even when the feature is exposed, the backend might return
> EOPNOTSUPP for the flush/barrier operations.

How is this supposed to work?  I mean in the worst case we could
just immediately complete the flush requests in the driver, but
then we'd really be lying to any upper layer.

> Such failure is tied to whether the underlying blkback storage
> supports REQ_OP_WRITE with REQ_PREFLUSH operation.  blkback will
> expose "feature-barrier" and/or "feature-flush-cache" without knowing
> whether the underlying backend supports those operations, hence the
> weird fallback in blkfront.

If we are just talking about the Linux blkback driver (I know there
are probably a few other implementations) it won't ever do that.
I see it has code to do so, but the Linux block layer doesn't
allow the flush operation to randomly fail if it was previously
advertised.  Note that even blkfront conforms to this, as it fixes
up the return value to ok when it gets this notsupp error.
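The fixup being referred to can be sketched in a few lines of userspace
C.  This is only a model of the behavior described above, not the actual
xen-blkfront code (which also has to adjust the queue settings); the
BLKIF_RSP_* values follow the blkif protocol, and the info structure is
heavily simplified:

```c
#include <assert.h>
#include <stdbool.h>

#define BLKIF_RSP_OKAY        0
#define BLKIF_RSP_EOPNOTSUPP  (-2)

struct blkfront_info {
    bool feature_flush;   /* did the backend advertise flush support? */
};

/* Translate a backend response for a flush/barrier request.  If the
 * backend unexpectedly returns EOPNOTSUPP, report success upward and
 * drop the feature so later requests are not issued as flushes. */
static int fixup_flush_response(struct blkfront_info *info, int status)
{
    if (status == BLKIF_RSP_EOPNOTSUPP) {
        info->feature_flush = false;   /* stop sending flushes */
        return BLKIF_RSP_OKAY;         /* "lie" to the upper layers */
    }
    return status;
}
```

In other words the frontend papers over the backend's refusal once, then
never issues a flush again, which is exactly the behavior being debated.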

> Overall blkback should ensure that REQ_PREFLUSH is supported before
> exposing "feature-barrier" or "feature-flush-cache", as then the
> exposed features would really match what the underlying backend
> supports (rather than the commands blkback knows about).

Yes.  The in-tree xen-blkback does that, but even without that the
Linux block layer actually makes sure flushes sent by upper layers
always succeed even when not supported.



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:01:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739412.1146434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPTA-0005gf-98; Wed, 12 Jun 2024 15:01:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739412.1146434; Wed, 12 Jun 2024 15:01:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPTA-0005gY-6a; Wed, 12 Jun 2024 15:01:00 +0000
Received: by outflank-mailman (input) for mailman id 739412;
 Wed, 12 Jun 2024 15:00:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHPT8-0005ZN-Jd
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:00:58 +0000
Received: from mail-qt1-x829.google.com (mail-qt1-x829.google.com
 [2607:f8b0:4864:20::829])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 91ee1b19-28cc-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 17:00:56 +0200 (CEST)
Received: by mail-qt1-x829.google.com with SMTP id
 d75a77b69052e-44055ca3103so23964531cf.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:00:56 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-440bf2d4468sm29640381cf.9.2024.06.12.08.00.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 08:00:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91ee1b19-28cc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718204456; x=1718809256; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=dxZP0hKUEW2kZeY+yafFRnwUKUsUxP+QHgSC1avzSHc=;
        b=EsRcOH0iqpQrE95+URHef3QdljAaGD+GmN4+nulsZPFzwRbR5eRAYZmsv2prnHkTiq
         1Hz+dhEGrS1bqfov2G+CNeiH3K22M40APN1oqb1nnz22mqa7nKqobJBCBsQ7OLuc9r1A
         duzO0Pa8nEjk6frri5TJNW5s2r8V6B6Ack/ao=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718204456; x=1718809256;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=dxZP0hKUEW2kZeY+yafFRnwUKUsUxP+QHgSC1avzSHc=;
        b=I9Rvb2IqGN859Iyfupr13UCKTfZWhZI85yKEhg2eopfSkGVjmPL6ZuOGEZw1X5yo9Z
         GbsKK8JiKGcn5oqTLRLE1T59O9lTLwlHcYYe8ABoOHBjJZzd95t7a0Ock1E4zqvRWYY0
         Z9xqkmIyP6voiBJKgWFdRZV7tVUAjqjQKc+jOyQKxJaWMtdY9Ro4aC8pVYJgDqtQQII0
         DIvL2nRvPBLhH7/38gFSNMEcu+IL2K7y9F4mvDaK+bCI9xUXO6fWR+4lZ0GZy7/7Gx9/
         OKJ9iJtbwmahdl8rjKnoFk7bNuHPk/09q9N7hjcUDkx7bPxzd4/kAyoXrp6TOjOsebNO
         KFJA==
X-Gm-Message-State: AOJu0YwTvC+InmlYObc5fdKln/1vaAi31i4rcDYA9HFk5Y/3rRKQNKNo
	7I0IOQdFJMa8+JdTTXxccjF8hp5I9bZNPDiHsRsOUm+qzSQPKL3fT10IeJUDpCs=
X-Google-Smtp-Source: AGHT+IGPyL4Km+M0jYWUgVXQvWNL/m6sF3b1HMSqHI63rPS86O4XF6l7h+EeJmLqOgJRplBMznZKPg==
X-Received: by 2002:ac8:7d0c:0:b0:441:5994:fd48 with SMTP id d75a77b69052e-4415ac7354fmr14283441cf.63.1718204455763;
        Wed, 12 Jun 2024 08:00:55 -0700 (PDT)
Date: Wed, 12 Jun 2024 17:00:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v2 for-4.19 3/3] x86/EPT: drop questionable mfn_valid()
 from epte_get_entry_emt()
Message-ID: <Zmm4JdaLL0oRALL_@macbook>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <7607c5f7-772a-4c49-b2df-19f32ec2180b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7607c5f7-772a-4c49-b2df-19f32ec2180b@suse.com>

On Wed, Jun 12, 2024 at 03:17:38PM +0200, Jan Beulich wrote:
> mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
> access to actual MMIO space should not generally be restricted to UC
> only; especially video frame buffer accesses are unduly affected by such
> a restriction.
> 
> Since, as of ???????????? ("x86/EPT: avoid marking non-present entries
> for re-configuring"), the function won't be called with INVALID_MFN or,
> worse, truncated forms thereof anymore, we can fully drop that check.
> 
> Fixes: 81fd0d3ca4b2 ("x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

I do think this is the way to go (removing quirks from
epte_get_entry_emt()), however it's a risky change to make at this
point in the release.

If this turns out to cause some unexpected damage, it would only
affect HVM guests with PCI passthrough and PVH dom0, which I consider
not great, but tolerable.

I would be more comfortable with making the change just not so close
to the release, but that's where we are.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I wonder if you should explicitly mention that, if the mfn_valid()
check was added to ensure all mappings of MMIO are created with an
effective UC caching attribute, it wasn't fully correct either: Xen
could map those using a different effective caching attribute, by
virtue of the host MTRRs being in effect plus Xen's chosen PAT
attributes.
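To illustrate the interaction being referred to, here is a heavily
simplified subset of the x86 effective-memory-type combination rules
(only the UC, WC and WB PAT entries; the enum names are ad hoc, and the
authoritative table is in the Intel SDM):

```c
#include <assert.h>

enum mtype { UC, WC, WB };

/* Effective memory type for a mapping, given the MTRR type covering
 * the physical range and the PAT type chosen in the page tables.
 * Sketch only: WT/WP and the UC- ("UC minus") PAT entry are omitted. */
static enum mtype effective_type(enum mtype mtrr, enum mtype pat)
{
    switch (pat) {
    case UC: return UC;    /* PAT UC always yields UC */
    case WC: return WC;    /* PAT WC overrides the MTRR type */
    case WB: return mtrr;  /* PAT WB defers to the MTRR */
    }
    return UC;
}
```

So a PAT WB mapping over a UC MTRR range is still effectively UC, while
a PAT WC mapping is WC even over a UC MTRR range, which is why forcing a
particular PAT value alone doesn't pin down the effective attribute.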

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:03:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:03:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739420.1146444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPV4-0006NO-LN; Wed, 12 Jun 2024 15:02:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739420.1146444; Wed, 12 Jun 2024 15:02:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPV4-0006NH-I6; Wed, 12 Jun 2024 15:02:58 +0000
Received: by outflank-mailman (input) for mailman id 739420;
 Wed, 12 Jun 2024 15:02:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHPV3-0006NB-H8
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:02:57 +0000
Received: from mail-qk1-x72b.google.com (mail-qk1-x72b.google.com
 [2607:f8b0:4864:20::72b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d88e52f2-28cc-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 17:02:55 +0200 (CEST)
Received: by mail-qk1-x72b.google.com with SMTP id
 af79cd13be357-7954dcf3158so313902285a.3
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:02:55 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79689c09e28sm270178385a.121.2024.06.12.08.02.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 08:02:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d88e52f2-28cc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718204574; x=1718809374; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=ke6anFFy82rvVNho3c9tbKmAMX7YrT9i5uibTvjd4Dk=;
        b=XmBEn2S/R1FhstoXs48uS3TxLWNbjQJF6ow+X4KdYwr1ASMcatDsJ4aNVktRzgX3fK
         vpN6L2jwZ1INTaSBTKIeM9rmn1AdYWy7KXsgpaYrJpMmGJ1TqkzlMZhKQbu1+ip+XFJh
         KqwlcIvA8rT9moY1YCKQvN9GgYSNuduAIcA6A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718204574; x=1718809374;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=ke6anFFy82rvVNho3c9tbKmAMX7YrT9i5uibTvjd4Dk=;
        b=dUj7N26eFINdiArv3KjUJbrixrsBE0mVniH4zis4e5oH2kAhxwVGUkVI+iKxS301ku
         uCrv3bKTdZqqF1AyZEr//HSuwhbLuq4luce4TcdBALTIGnk3wx40BD5q2tasLbFrJ00C
         ConfheAfy4PhmwgtWwXF/nBc/LN4DkkjNm4Or9nlj8mhVmxBjTDeYLNvWNbi34dublGR
         0vh86U8m8I5OAKtg0bDn7rFA3dhW511WeKlVMrmf2gtWHexrEg6y7tomNRlVP5+MIiCx
         rIhE7NMIJrEqfARQjk2K4ZYO4h5F8xkQFHC2O6su1K+oTIfU/tDufdk71DpitvGS9ZQa
         Oo7g==
X-Gm-Message-State: AOJu0Ywm1lBnIej/BFUL3vZV4qtPhFA18+ri2IDft0QHiaac5aZRMBBZ
	wA+vSlgZhxE4o0mQzJkJ9GfbI9/qHKWEhRGsbYbGin2RndIcvxsG9F8qQrcxmok=
X-Google-Smtp-Source: AGHT+IHiWCp4URa5VNmtZvDPALCUuObEtqJMiW36OOdyA+LIp9lO8xBCphBTcAp2/7QIXepRAhGoCg==
X-Received: by 2002:a05:620a:410a:b0:795:5828:547b with SMTP id af79cd13be357-797f61d6b1dmr211028685a.61.1718204574145;
        Wed, 12 Jun 2024 08:02:54 -0700 (PDT)
Date: Wed, 12 Jun 2024 17:02:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v2 for-4.19 1/3] x86/EPT: correct special page checking
 in epte_get_entry_emt()
Message-ID: <Zmm4nFOw_wN0PKt0@macbook>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com>
 <ZmmskwdoKvAotRk-@macbook>
 <b2985742-75e4-4974-be9d-be088d728731@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b2985742-75e4-4974-be9d-be088d728731@suse.com>

On Wed, Jun 12, 2024 at 04:47:12PM +0200, Jan Beulich wrote:
> On 12.06.2024 16:11, Roger Pau Monné wrote:
> > On Wed, Jun 12, 2024 at 03:16:37PM +0200, Jan Beulich wrote:
> >> mfn_valid() granularity is (currently) 256MiB. Therefore the start of a
> >> 1GiB page passing the test doesn't necessarily mean all parts of such a
> >> range would also pass.
> > 
> > How would such a superpage end up in the EPT?
> > 
> > I would assume this can only happen when adding a superpage MMIO that
> > has part of it return success from mfn_valid()?
> 
> Yes, that's the only way I can think of.
> 
> >> Yet using the result of mfn_to_page() on an MFN
> >> which doesn't pass mfn_valid() checking is liable to result in a crash
> >> (the invocation of mfn_to_page() alone is presumably "just" UB in such a
> >> case).
> >>
> >> Fixes: ca24b2ffdbd9 ("x86/hvm: set 'ipat' in EPT for special pages")
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> >> ---
> >> Of course we could leverage mfn_valid() granularity here to do an
> >> increment by more than 1 if mfn_valid() returned false. Yet doing so
> >> likely would want a suitable helper to be introduced first, rather than
> >> open-coding such logic here.
> > 
> > We would still need to call is_special_page() on each 4K chunk,
> 
> Why? Within any block for which mfn_valid() returns false, there can be
> no RAM pages and hence also no special ones. It's only blocks where
> mfn_valid() returns true that we'd need to iterate through page-by-page.

Oh right, I was thinking the other way around (mfn_valid() returning
true), never mind.
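The skipping scheme Jan describes could be sketched roughly as follows.
This is a toy model, not the Xen implementation: mfn_valid() and
is_special_page() here are stand-ins, and 256MiB is just the granularity
mentioned in the patch description:

```c
#include <assert.h>
#include <stdbool.h>

/* 256MiB worth of 4KiB frames: the assumed mfn_valid() granularity. */
#define FRAMES_PER_BLOCK  ((256UL << 20) >> 12)

static bool mfn_valid(unsigned long mfn)
{
    return mfn < FRAMES_PER_BLOCK;  /* pretend only the first block is RAM */
}

static bool is_special_page(unsigned long mfn)
{
    return mfn == 42;               /* one special page, for illustration */
}

/* Walk [start, start + nr) page-by-page, but jump to the next
 * granularity boundary whenever mfn_valid() is false, since such a
 * block can contain no RAM and hence no special pages. */
static unsigned long count_special(unsigned long start, unsigned long nr,
                                   unsigned long *pages_walked)
{
    unsigned long special = 0, i = 0;

    while (i < nr) {
        unsigned long mfn = start + i;

        if (!mfn_valid(mfn)) {
            i = (mfn / FRAMES_PER_BLOCK + 1) * FRAMES_PER_BLOCK - start;
            continue;
        }
        special += is_special_page(mfn);
        ++i;
        ++*pages_walked;
    }
    return special;
}
```

For a 1GiB range with only the first 256MiB block valid, only that block
is inspected page-by-page; the remaining three blocks are skipped in one
step each.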

> > at
> > which point taking advantage of the mfn_valid() granularity is likely
> > to make the code more complicated to follow IMO.
> 
> Right, this making it more complicated is the main counter argument. Hence
> I think that if we go such a route at all, it would need some new
> helper(s) such that things at the use sites would still remain reasonably
> clear.

We could also add an extra check to exit the loop early if special
pages have been found but don't match the current loop index, as it's
all special pages or none.
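That early-exit check could look roughly like this (a sketch over a
plain array standing in for per-page is_special_page() results, not the
actual EPT code):

```c
#include <assert.h>
#include <stdbool.h>

/* Count special pages across a range, stopping early once the range is
 * known to be mixed: a non-zero count that doesn't match the number of
 * pages inspected so far means the range can't be "all special". */
static unsigned long count_special_early(const bool *special,
                                         unsigned long nr)
{
    unsigned long count = 0;

    for (unsigned long i = 0; i < nr; i++) {
        count += special[i];
        if (count && count != i + 1)
            break;              /* mixed range: bail out early */
    }
    return count;
}
```

Note the check only helps for mixed ranges; when there are no special
pages at all, count stays 0 and the loop still runs to completion.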

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:06:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:06:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739425.1146454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPYa-0006yI-3o; Wed, 12 Jun 2024 15:06:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739425.1146454; Wed, 12 Jun 2024 15:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPYa-0006yB-0q; Wed, 12 Jun 2024 15:06:36 +0000
Received: by outflank-mailman (input) for mailman id 739425;
 Wed, 12 Jun 2024 15:06:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHPYY-0006y3-D5
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:06:34 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 59ea64b7-28cd-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 17:06:31 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a6f253a06caso278527566b.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:06:32 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f15095318sm494459866b.131.2024.06.12.08.06.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 08:06:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59ea64b7-28cd-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718204791; x=1718809591; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=K+pyMSftbXnT5i26IhXc+Hdnpw2pgGxMRiyPScqFRnQ=;
        b=P3/lhQoV1vlhag2W1fbilbZf5eULcaeyiy3LN6cLE9m28IBmJn1CEGpziZMeGPmKhR
         bQIJwEy8aKQiihHCfhYJpPGHVotsWdgIuH5Ne1OznevL5UHFvB6ryOwz1yDTa4FyM4Q2
         O8BkXFZj0DtZ7mTdD11m59AuafCxVt8DDHaa+GteVuRGjzZepla/h9g8pvsaRLw76sjd
         kI5pw4d/XI/WgJEV/MZSyZmKNfBMBnN9Mi8WmTINyvSDYuJ9wQRaQFl3ICu91Z/2LIjC
         5TaReAiqgjOe/3MRD7qdVYCuqvse4px2igK/d9LnViZb2E/xaBc4SYUWOfwyruokcbs6
         YJzg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718204791; x=1718809591;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=K+pyMSftbXnT5i26IhXc+Hdnpw2pgGxMRiyPScqFRnQ=;
        b=JAuZQHUtOZrWqsTKG62Us2Atl5inqQGPE5MHdTJF9lpxrDjPWAUv+i6EkLuDFxbEbC
         y3vbC9O7aAUIM2kIXRrm1P+IePneDh4r+f1X+LD2jttFqJHYUXcaRhchaWwboCmay65T
         fTILHKYBbQ8qDqTdQ1NzN8OT67/7OFOd0UeBXuSfwnXCEqb8ptZZry5pRjDyDYMXxxfz
         ljxcaxkMRMDZJzHM14Ix57V11gCoKtOceMfV4fvZ3pj4LKBca9VMQKd4hPjGM7CsKEwh
         9JxLVki2qqVyyPVopnAwoyLir22y9XRBgqPep9wUXA03edElEOPxccXRA2Qqj5OHZI6q
         uakA==
X-Gm-Message-State: AOJu0Yzb0mREs59KPCuEfDIadpVJQBUb0UeJ1ks9hhYALWgjN1sBE1kx
	CMzEmns/zEWUGpMrG08f8BHnVnEt2VWJWPXOjJ6xq4mQ29pX0JpOghWM0zocyg==
X-Google-Smtp-Source: AGHT+IG1bnnqcK+93fnPFFZ4+99CstwDE7PI+l8sLW3EesjKydZh6nXDOk7t/pBuNUGqU/o7/Gggig==
X-Received: by 2002:a17:906:55cf:b0:a6e:ff2f:8780 with SMTP id a640c23a62f3a-a6f47f9519dmr176391066b.36.1718204791349;
        Wed, 12 Jun 2024 08:06:31 -0700 (PDT)
Message-ID: <610ec8c5-26e4-4dff-a750-fdc7bbe97f39@suse.com>
Date: Wed, 12 Jun 2024 17:06:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 for-4.19 1/3] x86/EPT: correct special page checking in
 epte_get_entry_emt()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com> <ZmmskwdoKvAotRk-@macbook>
 <b2985742-75e4-4974-be9d-be088d728731@suse.com> <Zmm4nFOw_wN0PKt0@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <Zmm4nFOw_wN0PKt0@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 17:02, Roger Pau Monné wrote:
> We could also add an extra check to exit the loop early if special
> pages have been found but don't match the current loop index, as it's
> all special pages or none.

I was actually considering making such a change, but then concluded
that in the common case there'll be no special pages, and hence we
need to run the loop to completion anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:14:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:14:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739434.1146464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPgR-0000rd-SS; Wed, 12 Jun 2024 15:14:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739434.1146464; Wed, 12 Jun 2024 15:14:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPgR-0000rW-P2; Wed, 12 Jun 2024 15:14:43 +0000
Received: by outflank-mailman (input) for mailman id 739434;
 Wed, 12 Jun 2024 15:14:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h1N0=NO=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHPgQ-0000rO-7P
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:14:42 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7d0a9310-28ce-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 17:14:39 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a6f3efa1cc7so185387066b.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:14:39 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f37662fa1sm239075366b.17.2024.06.12.08.14.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 08:14:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d0a9310-28ce-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718205279; x=1718810079; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=8t3YgES7Gx6Zo2DxcW4yVljY/BB46w86dYTBiDFh8Lg=;
        b=fabjvoe2alei20ueGFmQ4gYPdGtQ2qUgYwrl/PXhtHklX7mbv0VYLtz7YgeU7Oi2Ap
         uqBcHYp1d6e5P28VYwQNefsLCyXCWzoh1M4aPmSp8tlHSXC2k/3xKj3LkE7mbxlUqaie
         ArtB7lOK6QQUb5Bf0TRZU7o9Un/j1US9E/thT5ZfXOm/VMmKHqRw1IYEUmx7L4rwo3iN
         08yoS7fv6L4GDDFSzgl3AzdebNAi3v14/+BGlvRZCIqnSsrlSYqlqNyiKzbbqistVpCg
         y0sjH8fxg8ITVUMGSmollyIZv0mjHDiCOmm1TQtt7dg8EBJtmyeo+JoD+NObFhw0HZh9
         i7Pw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718205279; x=1718810079;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8t3YgES7Gx6Zo2DxcW4yVljY/BB46w86dYTBiDFh8Lg=;
        b=VAaRDmAelAycj65Dm5s9QE534IOhMuI97AtPjXDN9PMiOwuFeAAGR6SG12JTQ51ka/
         dL3r0PTr5jj5W9ujzojbO3ub+sV46JthsUrCabBPaobssqL+hRUU/E0cCAfEx/M/fQi6
         lpmCYyFnxRxTwTrzGXHnQZh2QXA/hFdeMZ+23Rus+QC/CHO9Cc0d42camRW1zBcx+wFM
         dCIm81XmFAJKb8JBgd7iKEufTuD7XTfQSvQlz1fnzXhAoMdq5I1sE1z8wAwr1XjZ8mGp
         t5H0PH6QFftdQxrE94uyDQZaWKjxDzGqDYGZjui6zPhXb5zwkaZLYUSULiVRxMTlAKSA
         lelw==
X-Gm-Message-State: AOJu0Ywggk8wVakrokdSon1p3vNwxwZ/FQAVLbFMN/yaib2jzzEVAoDr
	bfEoTNIBu37AcmLh8d/G8YW6VQq4xOSp2d40MY9eNGhw+dn3O5ggMIRWoid5MQ==
X-Google-Smtp-Source: AGHT+IHj53qtTbJ6mA9ykDH2xjaPQgBi+0moO+BoPjiXm3uVEsvnTThTKLbxUN8/VXXd2SWc+cRyew==
X-Received: by 2002:a17:907:bb93:b0:a6f:3996:517c with SMTP id a640c23a62f3a-a6f39965843mr361726766b.18.1718205279260;
        Wed, 12 Jun 2024 08:14:39 -0700 (PDT)
Message-ID: <07d38484-dda3-4494-9dbb-75d4d2dbc3c3@suse.com>
Date: Wed, 12 Jun 2024 17:14:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 for-4.19 3/3] x86/EPT: drop questionable mfn_valid()
 from epte_get_entry_emt()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <7607c5f7-772a-4c49-b2df-19f32ec2180b@suse.com> <Zmm4JdaLL0oRALL_@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Zmm4JdaLL0oRALL_@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 17:00, Roger Pau Monné wrote:
> On Wed, Jun 12, 2024 at 03:17:38PM +0200, Jan Beulich wrote:
>> mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
>> access to actual MMIO space should not generally be restricted to UC
>> only; especially video frame buffer accesses are unduly affected by such
>> a restriction.
>>
>> Since, as of ???????????? ("x86/EPT: avoid marking non-present entries
>> for re-configuring"), the function won't be called with INVALID_MFN or,
>> worse, truncated forms thereof anymore, we can fully drop that check.
>>
>> Fixes: 81fd0d3ca4b2 ("x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> I do think this is the way to go (removing quirks from
> epte_get_entry_emt()), however it's a risky change to make at this
> point in the release.
> 
> If this turns out to cause some unexpected damage, it would only
> affect HVM guests with PCI passthrough and PVH dom0, which I consider
> not great, but tolerable.
> 
> I would be more comfortable with making the change just not so close
> to the release, but that's where we are.

Certainly, and I could live with Oleksii revoking his R-a-b (or simply
not offering it for either of the two prereq changes). The main thing for
me is that PVH Dom0 finally isn't so horribly slow anymore. However, if
the change doesn't go into the release, then I'd also be unsure about
eventual backporting.

> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I wonder if you should explicitly mention that if adding the
> mfn_valid() check was done to ensure all mappings to MMIO are created
> with effective UC caching attribute it won't be fully correct either.
> Xen could map those using a different effective caching attribute by
> virtue of host MTRRs being in effect plus Xen chosen PAT attributes.

Well, the mfn_valid() can't have been there to cover _all_ MMIO. It was
maybe a flawed initial attempt at doing so, and then wasn't properly
adjusted / dropped. So overall - no, I don't think extending the
description with anything along the lines of the above would make a lot
of sense.
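For illustration, the effect of dropping the check can be captured in a minimal, self-contained model (this is not Xen's actual code; `mfn_is_ram`, the enum values, and the RAM cut-off are hypothetical stand-ins):

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical cache attributes, mirroring a few x86 PAT memory types. */
enum emt { EMT_UC = 0, EMT_WC = 1, EMT_WB = 6 };

/* Stand-in for mfn_valid(): true only for RAM frames (cut-off is made up). */
static bool mfn_is_ram(unsigned long mfn) { return mfn < 0x100000UL; }

/*
 * Old behaviour: any frame failing the RAM-focused check was forced to UC,
 * penalising direct-MMIO mappings such as a video frame buffer that would
 * prefer WC.
 */
static enum emt emt_old(unsigned long mfn, enum emt wanted)
{
    if ( !mfn_is_ram(mfn) )
        return EMT_UC;        /* the questionable check forces UC */
    return wanted;
}

/* New behaviour: the RAM check is dropped and the wanted type is honoured. */
static enum emt emt_new(unsigned long mfn, enum emt wanted)
{
    (void)mfn;
    return wanted;
}
```

With the check gone, an MMIO frame asking for WC actually gets WC instead of being demoted to UC, while RAM frames behave as before.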

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:23:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:23:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739443.1146474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPoa-0003DL-OA; Wed, 12 Jun 2024 15:23:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739443.1146474; Wed, 12 Jun 2024 15:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPoa-0003DE-L5; Wed, 12 Jun 2024 15:23:08 +0000
Received: by outflank-mailman (input) for mailman id 739443;
 Wed, 12 Jun 2024 15:23:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHPoZ-0003D8-RP
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:23:07 +0000
Received: from mail-qt1-x82d.google.com (mail-qt1-x82d.google.com
 [2607:f8b0:4864:20::82d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aa2b368f-28cf-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 17:23:05 +0200 (CEST)
Received: by mail-qt1-x82d.google.com with SMTP id
 d75a77b69052e-43fb094da40so9496641cf.0
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:23:05 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b07e89f6c9sm31326466d6.79.2024.06.12.08.23.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 08:23:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa2b368f-28cf-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718205784; x=1718810584; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=DebMbvDX/E4xNEF8xci3NeCxNb/W5oNFpq3ktstEZy8=;
        b=pBcY9JzunQow3YbntIiVZF+1CL+AxI7D5M5cmDnEtjfsDbC3Spc3uaAE/fUT4XDlAy
         9bP2NERVv2xo6iShT/yoTvygxhP7mR2tOFJxU6hnL/GlQ8phBz81ISc9yY6PzHCUNsta
         k51REf7oG4SRbHF3dAbW9WaGwGlxYM61kVJe8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718205784; x=1718810584;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=DebMbvDX/E4xNEF8xci3NeCxNb/W5oNFpq3ktstEZy8=;
        b=L8fJYgR4xaE5h3Qajo30NXav/niDPTK1iQckK79ZcvllQPxI1gICZHmCHEcMpw3t5e
         UIhBdSDhjdy6n6u1r3FocNKl2DYQsqqMgzzUAodfGhEpfXjeK2Zb3YqcoZ7tc6I5avOB
         6H8j6vIpzZXU4Vhu9ivJHO6mhl2lucwN7FazKIVPuMerIy9NjmC6fHP/+NGR3hzgs0yz
         DvbglbCMtk/ZvG9wDbtQSNCf5fyvlyeG6g7MRMGViLSIU75KeKAJ5tkRO9bCicoXxy8j
         W1sn3UVyKEpE6GoduBcZuOkZGoaPjPj39QEVZ5RsYJgDNzkt4Lb9MXiWNKz2HGqY0Z4Z
         I6SA==
X-Gm-Message-State: AOJu0YwHsCYfOzkJs5iAQtInRHpIsxKoNsuS1LHrZI08ZEjMjwIa2pen
	QBU2UTT8o/HMHyTrH5ievQrmo5jn0GxTaoJNpvrCheIEzoVerhlQ4K9AWl/ibig=
X-Google-Smtp-Source: AGHT+IFvNbeEKRw0Fmq2Ul6UDaZQA/n9PO0xmH2Vl9AjyG30J/fnxGzVKyhbVPoDq6gpCBamzEf4AA==
X-Received: by 2002:ad4:5f4e:0:b0:6af:4fcd:3065 with SMTP id 6a1803df08f44-6b089f46738mr116597326d6.19.1718205784473;
        Wed, 12 Jun 2024 08:23:04 -0700 (PDT)
Date: Wed, 12 Jun 2024 17:23:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v2 for-4.19 2/3] x86/EPT: avoid marking non-present
 entries for re-configuring
Message-ID: <Zmm9VuMjsOMhQCMQ@macbook>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <d31f0f8e-4eb7-4617-86f6-81f38b5c61aa@suse.com>
 <Zmmy_-JqqWRuwvCj@macbook>
 <e944583a-2459-435f-90fb-04bcca18197f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e944583a-2459-435f-90fb-04bcca18197f@suse.com>

On Wed, Jun 12, 2024 at 04:53:14PM +0200, Jan Beulich wrote:
> On 12.06.2024 16:38, Roger Pau Monné wrote:
> > On Wed, Jun 12, 2024 at 03:16:59PM +0200, Jan Beulich wrote:
> >> For non-present entries EMT, like most other fields, is meaningless to
> >> hardware. Make the logic in ept_set_entry() setting the field (and iPAT)
> >> conditional upon dealing with a present entry, leaving the value at 0
> >> otherwise. This has two effects for epte_get_entry_emt() which we'll
> >> want to leverage subsequently:
> >> 1) The call moved here now won't be issued with INVALID_MFN anymore (a
> >>    respective BUG_ON() is being added).
> >> 2) Neither of the other two calls could now be issued with a truncated
> >>    form of INVALID_MFN anymore (as long as there's no bug anywhere
> >>    marking an entry present when that was populated using INVALID_MFN).
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> >> ---
> >> v2: New.
> >>
> >> --- a/xen/arch/x86/mm/p2m-ept.c
> >> +++ b/xen/arch/x86/mm/p2m-ept.c
> >> @@ -650,6 +650,8 @@ static int cf_check resolve_misconfig(st
> >>              if ( e.emt != MTRR_NUM_TYPES )
> >>                  break;
> >>  
> >> +            ASSERT(is_epte_present(&e));
> > 
> > If this is added here, then there's a condition further below:
> > 
> > if ( !is_epte_valid(&e) || !is_epte_present(&e) )
> > 
> > That needs adjusting AFAICT.
> 
> I don't think so, because e was re-fetched in between.

Oh, I see, we take the opportunity to do the recalculation for all the
EPT entries that share the same page table.

> > However, in ept_set_entry() we seem to unconditionally call
> > resolve_misconfig() against the new entry to be populated, won't this
> > possibly cause resolve_misconfig() to be called against non-present
> > EPT entries?  I think this is fine because such non-present entries
> > will have emt == 0, and hence will take the break just ahead of the
> > added ASSERT().
> 
> Right, hence how I placed this assertion.

OK, just wanted to double check.
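The ordering being double-checked here can be modelled in a few lines (a sketch only; `struct epte` and `resolve_one` are simplified stand-ins for the real EPT entry layout and the resolve_misconfig() loop body):

```c
#include <assert.h>
#include <stdbool.h>

#define MTRR_NUM_TYPES 7  /* sentinel: "EMT needs recalculation" */

/* Simplified EPT entry: just the fields relevant to the discussion. */
struct epte { bool present; unsigned int emt; };

/*
 * Non-present entries always carry emt == 0, so they take the early
 * break and never reach the present-entry assertion.
 */
static int resolve_one(const struct epte *e)
{
    if ( e->emt != MTRR_NUM_TYPES )
        return 0;             /* early break: nothing to recalculate */

    assert(e->present);       /* only marked (hence present) entries here */
    return 1;                 /* entry would be recalculated */
}
```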

> >> @@ -941,6 +932,22 @@ ept_set_entry(struct p2m_domain *p2m, gf
> >>              need_modify_vtd_table = 0;
> >>  
> >>          ept_p2m_type_to_flags(p2m, &new_entry);
> >> +
> >> +        if ( is_epte_present(&new_entry) )
> >> +        {
> >> +            bool ipat;
> >> +            int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
> >> +                                         i * EPT_TABLE_ORDER, &ipat,
> >> +                                         p2mt);
> >> +
> >> +            BUG_ON(mfn_eq(mfn, INVALID_MFN));
> >> +
> >> +            if ( emt >= 0 )
> >> +                new_entry.emt = emt;
> >> +            else /* ept_handle_misconfig() will need to take care of this. */
> >> +                new_entry.emt = MTRR_NUM_TYPES;
> >> +            new_entry.ipat = ipat;
> >> +        }
> > 
> > Should we assert that if new_entry.emt == MTRR_NUM_TYPES the entry
> > must have the present bit set before the atomic_write_ept_entry()
> > call?
> 
> This would feel excessive to me. All writing to new_entry is close together,
> immediately ahead of that atomic_write_ept_entry(). And we're (now) writing
> MTRR_NUM_TYPES only when is_epte_present() is true (note that it's not "the
> present bit").

Fair enough.
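The conditional in the quoted hunk boils down to the following sketch (simplified; the real epte_get_entry_emt() call with its gfn/mfn/order arguments, and the hardware field widths, are omitted):

```c
#include <assert.h>

#define MTRR_NUM_TYPES 7   /* sentinel meaning "recalculate later" */

/* Simplified EPT entry; the real bitfield layout is omitted. */
struct epte { int present; unsigned int emt; unsigned int ipat; };

/*
 * EMT/iPAT are only meaningful for present entries, so they are left
 * at 0 otherwise; a negative EMT lookup result defers the decision to
 * ept_handle_misconfig() via the MTRR_NUM_TYPES sentinel.
 */
static void set_emt(struct epte *e, int emt, unsigned int ipat)
{
    if ( !e->present )
        return;               /* leave emt/ipat at 0 for non-present */

    e->emt = (emt >= 0) ? (unsigned int)emt : MTRR_NUM_TYPES;
    e->ipat = ipat;
}
```

This also makes the invariant discussed above visible: MTRR_NUM_TYPES can only ever be written for a present entry.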

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:27:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:27:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739449.1146484 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPsg-0003mL-6f; Wed, 12 Jun 2024 15:27:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739449.1146484; Wed, 12 Jun 2024 15:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHPsg-0003mE-3t; Wed, 12 Jun 2024 15:27:22 +0000
Received: by outflank-mailman (input) for mailman id 739449;
 Wed, 12 Jun 2024 15:27:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHPsf-0003m8-3W
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:27:21 +0000
Received: from mail-qt1-x829.google.com (mail-qt1-x829.google.com
 [2607:f8b0:4864:20::829])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 41e20d5c-28d0-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 17:27:20 +0200 (CEST)
Received: by mail-qt1-x829.google.com with SMTP id
 d75a77b69052e-44054a2c153so12915831cf.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:27:20 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-441136b1d16sm23245271cf.25.2024.06.12.08.27.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 08:27:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41e20d5c-28d0-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718206039; x=1718810839; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=A94detY1UyHmoOWbBrK/++zmK7b8qtAxB+K8E9DexK0=;
        b=DekJgvxwYZYc0hQx+nzw5cxo3sKybwBWZ4NDDZBJqC/EmRNZ2tigSfF61qyIqxehE8
         Ngi2IqpIQe8A7ySkwGqTTk+MKpOgeHhIN41r6oRVhflDTq5ipE73+WjSsqZdxnoiqoXl
         Wu3B3g6yZmXfmSAxa7QqlYCtIPel/Jr7f0Gig=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718206039; x=1718810839;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=A94detY1UyHmoOWbBrK/++zmK7b8qtAxB+K8E9DexK0=;
        b=EpKx+jmJc6A+JpIvksxSA/USUG4YcsBGC4yFS909XPOWiarFp8050RH/QawDyG9U1z
         wqswsG8jFVWKgZika+kpX9zyMv5MDkq2h1B84UoVJeHzoyFVFMI+i8UyowHOMdyJ2dy3
         RtgI4ub+oxbOARuVMMM7C1YUuJ0SsIty5kcsI8Luc0OWTJwM0Ov8C+JdPPRcTCD2TSjE
         gN4uMlu5BmfoZpnMm3mTq2HNXpVwIJm0CWL2VOWylX7LlJq6tFymJ+wyst6O7aR2xll2
         KKiFMkIdqP/J4IHedM/MOSIvH4n7qp5J3JBwW2B1bOaL/KeCaSnszpgLRQgswkmRceip
         1uoQ==
X-Gm-Message-State: AOJu0YzAIAiAPf5jEG+QHYVczZp/lyNu1vpR4bcOnuO+jwHsRSU6sChf
	u+oprFrA1byo9aqklhN2wjO02cpOsj8FG00EF/j6LDlzXTT/a2oA/CaNyuPPWoU=
X-Google-Smtp-Source: AGHT+IHCiLpJPetdyLh9zWdcFF+M9fI61y5Jh1ibc8wrGCrLx/bJXxIXC1Ap0CC5Y1OC2DXC/g/bPQ==
X-Received: by 2002:ac8:7f10:0:b0:441:58e9:678d with SMTP id d75a77b69052e-4415ac6e056mr20030781cf.64.1718206039065;
        Wed, 12 Jun 2024 08:27:19 -0700 (PDT)
Date: Wed, 12 Jun 2024 17:27:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v2 for-4.19 3/3] x86/EPT: drop questionable mfn_valid()
 from epte_get_entry_emt()
Message-ID: <Zmm-VGEvAecY4UlV@macbook>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <7607c5f7-772a-4c49-b2df-19f32ec2180b@suse.com>
 <Zmm4JdaLL0oRALL_@macbook>
 <07d38484-dda3-4494-9dbb-75d4d2dbc3c3@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <07d38484-dda3-4494-9dbb-75d4d2dbc3c3@suse.com>

On Wed, Jun 12, 2024 at 05:14:37PM +0200, Jan Beulich wrote:
> On 12.06.2024 17:00, Roger Pau Monné wrote:
> > I wonder if you should explicitly mention that if adding the
> > mfn_valid() check was done to ensure all mappings to MMIO are created
> > with effective UC caching attribute it won't be fully correct either.
> > Xen could map those using a different effective caching attribute by
> > virtue of host MTRRs being in effect plus Xen chosen PAT attributes.
> 
> Well, the mfn_valid() can't have been there to cover _all_ MMIO. It was
> maybe a flawed initial attempt at doing so, and then wasn't properly
> adjusted / dropped. So overall - no, I don't think extending the
> description with anything along the lines of the above would make a lot
> of sense.

I realized myself when writing the paragraph that I wouldn't even know
how to word it properly; nor would it be much help without knowing the
exact intention behind adding the mfn_valid() check in the first place.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:36:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:36:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739460.1146493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHQ1h-0005hW-13; Wed, 12 Jun 2024 15:36:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739460.1146493; Wed, 12 Jun 2024 15:36:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHQ1g-0005hP-UG; Wed, 12 Jun 2024 15:36:40 +0000
Received: by outflank-mailman (input) for mailman id 739460;
 Wed, 12 Jun 2024 15:36:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHQ1f-0005hJ-DN
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:36:39 +0000
Received: from mail-qk1-x72b.google.com (mail-qk1-x72b.google.com
 [2607:f8b0:4864:20::72b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8d6620cc-28d1-11ef-b4bb-af5377834399;
 Wed, 12 Jun 2024 17:36:36 +0200 (CEST)
Received: by mail-qk1-x72b.google.com with SMTP id
 af79cd13be357-795dc9e0d15so194436485a.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:36:36 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b075e37080sm39841916d6.46.2024.06.12.08.36.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 08:36:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d6620cc-28d1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718206595; x=1718811395; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=yqVruEeXc2MDMuSjgeHLOK+zggStZivpQZ292dmamSk=;
        b=GNA7MoEwTVuRkYvcD0WhzChx5+hn0JichZnb3t2xB8UQgItxrwA6CbeLrXE2JLsmK6
         FgDZF7z2tn8wKW3IOIfuL2YUrKOdkc24XWdovHxeLqyfo3hNIQNQnXr6UA3LoQrvAYCz
         z3f/L9XwCakWLZ4D5aMC5+ows3ja2/K5ByPfw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718206595; x=1718811395;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=yqVruEeXc2MDMuSjgeHLOK+zggStZivpQZ292dmamSk=;
        b=vs64AB4sM/w330ODAv0kiCVmARsor6+gabc5HOo0JheVwqttBrHCSnh3dbACX4wbw2
         WFVoB72JUiHlJR1vpEjnUYQEkVf63yV6o+qPqU2xaAAsZl36uxYsmadcPZfbRN14tDr4
         Tjl7acr9K9FuHR+ZHLzTp2HIbLbXxkiPz1NRFN0J2obHvvjEBL1EjaHNvoSxoN2hATWM
         NgVJ/WVjP2YZrZNf3oc7Zcro6PhInnRXiNxppE90AOCICI6MfvoQud0jyHFVqEd4NdnA
         DG9sxL3cshSrVtKVaxtrw/JIEcmbIIDC+eX3uBKxxq7iRK1mFJ7UEalVIJ9AVoetmptM
         naqw==
X-Forwarded-Encrypted: i=1; AJvYcCWGNNmzxat0RY3+G8DRST7hL2wtbFg7p8KkIBslxNUOX0PQCUQvEBXkQAwE3zMKrBU1QA3MFiS9d31ad0HGnBD++/WrO0GMAmsZ0Y6UtX8=
X-Gm-Message-State: AOJu0YwxuaFrBMoXmgOUU/+O2V4Ybh4XCNvDLjrdTsG6FPeGieU8go2e
	s65BkFdx/tzV26QIrwTV/W/8H3Mu8MTTIT61Gp6BS5n6G7i6CZlABqnB4iZ3nPLuLHdglIBSZS8
	I
X-Google-Smtp-Source: AGHT+IEqpJmbSzajuYOcJrD85VqbzOVEmctVt8Y0cNXpcqW27coPGKftpzE3w/jzUbf30J9iZFbLPg==
X-Received: by 2002:a05:6214:3c8a:b0:6b0:8041:8ae1 with SMTP id 6a1803df08f44-6b1a7419aecmr27509786d6.61.1718206595227;
        Wed, 12 Jun 2024 08:36:35 -0700 (PDT)
Date: Wed, 12 Jun 2024 17:36:33 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
Message-ID: <ZmnAgSBjjP6N-uJS@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com>
 <Zml6-ViFPTWI1cUc@macbook>
 <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com>

On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
> On 12.06.2024 12:39, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> >> On 10.06.2024 16:20, Roger Pau Monne wrote:
> >>> Currently there's logic in fixup_irqs() that attempts to prevent
> >>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> >>> interrupts from the CPUs not present in the input mask.  The current logic in
> >>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> >>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> >>>
> >>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> >>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> >>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> >>> and no remaining online CPUs in ->arch.cpu_mask.
> >>>
> >>> If _assign_irq_vector() is requested to move an interrupt in the state
> >>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> >>> CPUs that could be used as fallback, and if that's the case do move the
> >>> interrupt back to the previous destination.  Note this is easier because the
> >>> vector hasn't been released yet, so there's no need to allocate and setup a new
> >>> vector on the destination.
> >>>
> >>> Due to the logic in fixup_irqs() that clears offline CPUs from
> >>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> >>> shouldn't be possible to get into _assign_irq_vector() with
> >>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> >>> ->arch.old_cpu_mask.
> >>>
> >>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> >>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> >>> longer part of the affinity set,
> >>
> >> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> >>
> >>> move the interrupt to a different CPU part of
> >>> the provided mask
> >>
> >> ... this (->arch.cpu_mask related).
> > 
> > No, the "provided mask" here is the "mask" parameter, not
> > ->arch.cpu_mask.
> 
> Oh, so this describes the case of "hitting" the comment at the very bottom of
> the first hunk then? (I probably was misreading this because I was expecting
> it to describe a code change, rather than the case where original behavior
> needs retaining. IOW - all fine here then.)
> 
> >>> and keep the current ->arch.old_{cpu_mask,vector} for the
> >>> pending interrupt movement to be completed.
> >>
> >> Right, that's to clean up state from before the initial move. What isn't
> >> clear to me is what's to happen with the state of the intermediate
> >> placement. Description and code changes leave me with the impression that
> >> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> >> out why that would be an okay thing to do.
> > 
> > There isn't much we can do with the intermediate placement, as the CPU
> > is going offline.  However we can drain any pending interrupts from
> > IRR after the new destination has been set, since setting the
> > destination is done from the CPU that's the current target of the
> > interrupts.  So we can ensure the draining is done strictly after the
> > target has been switched, hence ensuring no further interrupts from
> > this source will be delivered to the current CPU.
> 
> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
> the ...
> 
> >>> --- a/xen/arch/x86/irq.c
> >>> +++ b/xen/arch/x86/irq.c
> >>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >>>      }
> >>>  
> >>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>> -        return -EAGAIN;
> >>> +    {
> >>> +        /*
> >>> +         * If the current destination is online refuse to shuffle.  Retry after
> >>> +         * the in-progress movement has finished.
> >>> +         */
> >>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> >>> +            return -EAGAIN;
> >>> +
> >>> +        /*
> >>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> >>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> >>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> >>> +         * ->arch.old_cpu_mask.
> >>> +         */
> >>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> >>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> >>> +
> >>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> >>> +        {
> >>> +            /*
> >>> +             * Fallback to the old destination if moving is in progress and the
> >>> +             * current destination is to be offlined.  This is only possible if
> >>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> >>> +             * in the 'mask' parameter.
> >>> +             */
> >>> +            desc->arch.vector = desc->arch.old_vector;
> >>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> 
> ... replacing of vector (and associated mask), without any further accounting.

It's quite likely I'm missing something here, but what further
accounting would you like to do?

The CPUs in the current target of the interrupt (->arch.cpu_mask prior
to the cpumask_and()) are all going offline, so any attempt to record
them in ->arch.old_cpu_mask would just leave stale (offline) CPUs set
in ->arch.old_cpu_mask, which previous patches attempted to
solve.

Maybe by "further accounting" you meant something else not related to
->arch.old_{cpu_mask,vector}?

> >>> @@ -600,7 +646,17 @@ next:
> >>>          current_vector = vector;
> >>>          current_offset = offset;
> >>>  
> >>> -        if ( valid_irq_vector(old_vector) )
> >>> +        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>> +        {
> >>> +            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
> >>> +            /*
> >>> +             * Special case when evacuating an interrupt from a CPU to be
> >>> +             * offlined and the interrupt was already in the process of being
> >>> +             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
> >>> +             * replace ->arch.{cpu_mask,vector} with the new destination.
> >>> +             */
> >>
> >> And where's the cleaning up of ->arch.old_* going to be taken care of then?
> > 
> > Such cleaning will be handled normally by the interrupt still having
> > ->arch.move_{in_progress,cleanup_count} set.  The CPUs in
> > ->arch.old_cpu_mask must not all be offline, otherwise the logic in
> > fixup_irqs() would have already released the old vector.
> 
> Maybe add "Cleanup will be done normally" to the comment?

Can do.

Thanks, Roger.
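The fallback path under discussion can be sketched with a toy cpumask model (hypothetical and heavily simplified; the real cpumask_t is not a plain integer, and the surrounding error handling in _assign_irq_vector() is richer):

```c
#include <assert.h>
#include <stdint.h>

typedef uint32_t cpumask_t;        /* toy cpumask: one bit per CPU */

/* Simplified per-IRQ state; only the fields the fallback touches. */
struct irq_state {
    cpumask_t cpu_mask, old_cpu_mask;
    int vector, old_vector;
};

/*
 * With the current destination fully offline and an in-progress move
 * still holding old_vector, reuse the old vector if old_cpu_mask
 * intersects the requested affinity 'mask'.
 */
static int fallback_to_old(struct irq_state *d, cpumask_t online,
                           cpumask_t mask)
{
    if ( d->cpu_mask & online )
        return -1;                  /* -EAGAIN: retry after the move */

    assert(d->old_cpu_mask & online);  /* fixup_irqs() guarantees this */

    if ( d->old_cpu_mask & mask )
    {
        d->vector = d->old_vector;
        d->cpu_mask = d->old_cpu_mask & mask;
        return 0;                   /* moved back; no new vector needed */
    }
    return 1;                       /* caller must allocate a new vector */
}
```

Because the old vector is still held, the "move back" branch requires no new vector allocation, matching the rationale in the commit message.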


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 15:56:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 15:56:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739474.1146504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHQKj-0001Ym-JY; Wed, 12 Jun 2024 15:56:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739474.1146504; Wed, 12 Jun 2024 15:56:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHQKj-0001Yf-Gt; Wed, 12 Jun 2024 15:56:21 +0000
Received: by outflank-mailman (input) for mailman id 739474;
 Wed, 12 Jun 2024 15:56:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kV4F=NO=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHQKi-0001YZ-Bm
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 15:56:20 +0000
Received: from mail-qv1-xf2f.google.com (mail-qv1-xf2f.google.com
 [2607:f8b0:4864:20::f2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4e4c5e6d-28d4-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 17:56:19 +0200 (CEST)
Received: by mail-qv1-xf2f.google.com with SMTP id
 6a1803df08f44-6b09072c9d9so5936d6.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 08:56:19 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b0884337e9sm22877866d6.16.2024.06.12.08.56.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 Jun 2024 08:56:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e4c5e6d-28d4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718207778; x=1718812578; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=0JAgWI6yypeQvJHtBsLjxy9EsiTAtsBZFfQkPH4VMNc=;
        b=cNBipdUbfoVb7qaspFGAc2v6AVDt0z3+7wMzJJXX1OG6OOY9+FYe3ws/YFBP/7u89I
         qZ3uR1mXzFIk2Hl8Tpjt90vKhwfdrZqivbLoDUuLbq4TbRG5yUWvwjdkhmt9fdf9Bz4O
         DCWzZYmRgsNiZihllrVSh2JmagvyvWokSrrHQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718207778; x=1718812578;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=0JAgWI6yypeQvJHtBsLjxy9EsiTAtsBZFfQkPH4VMNc=;
        b=qRdLwNFi5MmOJ5Gi06w7FMdhF3SXNqaszHUd5CPapEGUTZWWgDXbG3mKNQUut/eP/o
         0yMl7si3Fv75a5Lf9NnrbL+dBrlxlWc7s6TyHh8GROymTH2rSr63H//CNlbmM+c04owo
         rhCgv5UZdOiLmaJhLrAXNSuhJdKPoXq70oO3XOxQXEdqsLto4d+ztrKey1MfBcbjOgML
         z9sfhcio2UeM6g6zztrvydtl21pWwjdQL94lNok6RGvn71mSwoH/+ToNXsK7zkHITaOQ
         9jvbfs5AGpaWqsY1VtAeDDGNefR94//hOya3sVohRt3ldcnIEt41mrHVHWQeMeCMkZln
         GZvw==
X-Forwarded-Encrypted: i=1; AJvYcCViqHZs/kku+hwfQOQciL5H5rqCueMga6bv+PvxIrgDYsMJlosFfge/oXinRNLLh9a+stwT9/+O5IPd6p/D53TkesWLOeZFvYn3LkPgTQ8=
X-Gm-Message-State: AOJu0Yz8IJl1xgKxHvtD1fUxTlUl6U6+WeP8C0IEos7xAnoT8aKVk2Q+
	YewjBPisvI8icF/ysGyxVpkLArfz4AS2phJbDKVciNXRHkoQkCFPeTIzMlNhi88=
X-Google-Smtp-Source: AGHT+IH3vJ/Cfk8LzYoeLv9cXT/SgN6Pt3X56xC7VVYFazmjn8i4deMSmAqg4iM25NKqjeo5TBvU1g==
X-Received: by 2002:a05:6214:2a47:b0:6b0:7365:dde0 with SMTP id 6a1803df08f44-6b2a33de160mr1306776d6.18.1718207777691;
        Wed, 12 Jun 2024 08:56:17 -0700 (PDT)
Date: Wed, 12 Jun 2024 17:56:15 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?utf-8?Q?B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when
 they fail
Message-ID: <ZmnFH17bTV2Ot_iR@macbook>
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-11-hch@lst.de>
 <ZmlVziizbaboaBSn@macbook>
 <20240612150030.GA29188@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240612150030.GA29188@lst.de>

On Wed, Jun 12, 2024 at 05:00:30PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 12, 2024 at 10:01:18AM +0200, Roger Pau Monné wrote:
> > On Tue, Jun 11, 2024 at 07:19:10AM +0200, Christoph Hellwig wrote:
> > > blkfront always had a robust negotiation protocol for detecting a write
> > > cache.  Stop simply disabling cache flushes when they fail as that is
> > > a grave error.
> > 
> > It's my understanding the current code attempts to cover up for the
> > lack of guarantees the feature itself provides:
> 
> > So even when the feature is exposed, the backend might return
> > EOPNOTSUPP for the flush/barrier operations.
> 
> How is this supposed to work?  I mean in the worst case we could
> just immediately complete the flush requests in the driver, but
> we're really lying to any upper layer.

Right.  AFAICT advertising "feature-barrier" and/or
"feature-flush-cache" could be done based on whether blkback
understands those commands, not on whether the underlying storage
supports the equivalent of them.

Worst case, we could print a one-time warning that the underlying
storage failed to complete flush/barrier requests (so data integrity
might not be guaranteed going forward), and not propagate the error to
the upper layer?

What would be the consequence of propagating a flush error to the
upper layers?

> > Such failure is tied on whether the underlying blkback storage
> > supports REQ_OP_WRITE with REQ_PREFLUSH operation.  blkback will
> > expose "feature-barrier" and/or "feature-flush-cache" without knowing
> > whether the underlying backend supports those operations, hence the
> > weird fallback in blkfront.
> 
> If we are just talking about the Linux blkback driver (I know there
> probably are a few other implementations) it won't ever do that.
> I see it has code to do so, but the Linux block layer doesn't
> allow the flush operation to randomly fail if it was previously
> advertised.  Note that even blkfront conforms to this as it fixes
> up the return value when it gets this notsupp error to ok.

Yes, I'm afraid it's impossible to know what the multiple incarnations
of all the scattered blkback implementations might do (FreeBSD,
NetBSD, QEMU and blktap are at least the ones I know of).

> > Overall blkback should ensure that REQ_PREFLUSH is supported before
> > exposing "feature-barrier" or "feature-flush-cache", as then the
> > exposed features would really match what the underlying backend
> > supports (rather than the commands blkback knows about).
> 
> Yes.  The in-tree xen-blkback does that, but even without that the
> Linux block layer actually makes sure flushes sent by upper layers
> always succeed even when not supported.

Given the description of the feature in the blkif header, I'm afraid
we cannot guarantee that seeing the feature exposed implies barrier or
flush support, since the request could fail at any time (or even from
the start of the disk attachment) and it would still, sadly, be a
correct implementation given the description of those options.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 16:33:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 16:33:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739491.1146513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHQuW-0000Wq-Ab; Wed, 12 Jun 2024 16:33:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739491.1146513; Wed, 12 Jun 2024 16:33:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHQuW-0000Wj-85; Wed, 12 Jun 2024 16:33:20 +0000
Received: by outflank-mailman (input) for mailman id 739491;
 Wed, 12 Jun 2024 16:33:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHQuV-0000WZ-9B; Wed, 12 Jun 2024 16:33:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHQuV-0002zH-5V; Wed, 12 Jun 2024 16:33:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHQuU-0004oT-Ps; Wed, 12 Jun 2024 16:33:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHQuU-0007T4-PI; Wed, 12 Jun 2024 16:33:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zk4eiBlbsm66BXhH8dpj3Dx4No7by2i9yxQVR/tdzT0=; b=HGXzb8dwFgUz8TbonWvqAR0PZR
	mkaI6JJnLwOF0Whf/LVzDECwCKN4i7xl7aYEfTiDfyiiNV6AHg/gnnGmAETbdEHlTYIlDa9LnqlLg
	BrggaeXrG2DMQSfFh2+w4X0R53Nez8DWhiVfMXDD8IUFN9nN0p5MuPbOLcMBPZlNzJiw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186323-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186323: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=401448f2d12f969956fac3ef3d29dc065430fd56
X-Osstest-Versions-That:
    xen=b0e5352c600ce42f109ddb43a4233ac2c9e0abbd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 16:33:18 +0000

flight 186323 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186323/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  401448f2d12f969956fac3ef3d29dc065430fd56
baseline version:
 xen                  b0e5352c600ce42f109ddb43a4233ac2c9e0abbd

Last test of basis   186319  2024-06-12 09:02:07 Z    0 days
Testing same since   186323  2024-06-12 13:02:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b0e5352c60..401448f2d1  401448f2d12f969956fac3ef3d29dc065430fd56 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 18:04:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 18:04:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739512.1146524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHSK7-0004RH-Oq; Wed, 12 Jun 2024 18:03:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739512.1146524; Wed, 12 Jun 2024 18:03:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHSK7-0004RA-Lm; Wed, 12 Jun 2024 18:03:51 +0000
Received: by outflank-mailman (input) for mailman id 739512;
 Wed, 12 Jun 2024 18:03:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHSK7-0004R0-6p; Wed, 12 Jun 2024 18:03:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHSK6-0004VD-PR; Wed, 12 Jun 2024 18:03:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHSK6-0007D2-Co; Wed, 12 Jun 2024 18:03:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHSK6-00036U-CI; Wed, 12 Jun 2024 18:03:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Jv8DMboBH4uiYV6jHqr1IrSIxBqI+nX5FmfHfD/6kyU=; b=ZIRSslsciuAjvNAtOct71tOguV
	524oDz8mAhXLF6T+DUnf4UWB7J8UD4GkCgOPgTvrm9Xm8Ho/ef5wHmuhwnnnHtyX3kbKkqCdEK1VX
	mtNDXJ2AWBaUlkzMrSJznrJGzh/ycebeSgF7eYXi0EMBw78IS7w1ARKgqne5TcfxbGMU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186320-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-6.1 test] 186320: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-6.1:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ae9f2a70d69e9c840ee1eda201f09662ca7e2038
X-Osstest-Versions-That:
    linux=88690811da69826fdb59d908a6e5e9d0c63b581a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 18:03:50 +0000

flight 186320 linux-6.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186320/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186150
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186150
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186150
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186150
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186150
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186150
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                ae9f2a70d69e9c840ee1eda201f09662ca7e2038
baseline version:
 linux                88690811da69826fdb59d908a6e5e9d0c63b581a

Last test of basis   186150  2024-05-26 01:42:32 Z   17 days
Testing same since   186320  2024-06-12 09:12:33 Z    0 days    1 attempts

------------------------------------------------------------
421 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   88690811da69..ae9f2a70d69e  ae9f2a70d69e9c840ee1eda201f09662ca7e2038 -> tested/linux-6.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 12 18:38:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 18:38:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739523.1146535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHSrh-0000MP-A8; Wed, 12 Jun 2024 18:38:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739523.1146535; Wed, 12 Jun 2024 18:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHSrh-0000MI-58; Wed, 12 Jun 2024 18:38:33 +0000
Received: by outflank-mailman (input) for mailman id 739523;
 Wed, 12 Jun 2024 18:38:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CiHp=NO=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sHSrg-0000MC-31
 for xen-devel@lists.xenproject.org; Wed, 12 Jun 2024 18:38:32 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f55e7c15-28ea-11ef-90a3-e314d9c70b13;
 Wed, 12 Jun 2024 20:38:29 +0200 (CEST)
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-478-g3taUYkWP26CFMlHXrXTpQ-1; Wed, 12 Jun 2024 14:38:25 -0400
Received: by mail-wm1-f71.google.com with SMTP id
 5b1f17b1804b1-42183fdd668so1052435e9.2
 for <xen-devel@lists.xenproject.org>; Wed, 12 Jun 2024 11:38:25 -0700 (PDT)
Received: from ?IPV6:2003:cb:c702:bf00:abf6:cc3a:24d6:fa55?
 (p200300cbc702bf00abf6cc3a24d6fa55.dip0.t-ipconnect.de.
 [2003:cb:c702:bf00:abf6:cc3a:24d6:fa55])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-422870f760fsm35980815e9.33.2024.06.12.11.38.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 12 Jun 2024 11:38:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f55e7c15-28ea-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1718217507;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=eHkiYypujAnwSi5QrP8pZu0qQw3cLgJft3y7PW/t9t4=;
	b=ORYCUpDHd7ycd/UMM0kfT657bEQiHayWHOm9X9/YzGVbZJzjFNtWIpK6Dq7HdV+mpr5U2j
	XXxZiLH2vk9K5cw5pVdof+2xchKEY6bFWvoPny2KzpIRNJ4lv/FhkdlRInvps6MIPxnoot
	H4GMilHb5k72hooLgFZlTs7OTsuGXgw=
X-MC-Unique: g3taUYkWP26CFMlHXrXTpQ-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718217504; x=1718822304;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=eHkiYypujAnwSi5QrP8pZu0qQw3cLgJft3y7PW/t9t4=;
        b=PfS2YTVu4wHuQIq/vt/3WMqL3ACzT4PDn86OA9hYlT5vZcPGVZssWY0vBYWPCDBEgM
         8UlVjB6zjqA6anRH4i4qo6jGX4+mcs1HvKB4o63ebCjx5S9TlaMCZKAWq+T9auDb9q8i
         /syB/M9Mwl0kdMY/M5Bhu0rQXnNR+bN39n3uirc9p27YFHowSb5/nQfDRXgSId4A6vfo
         rNOh5cNnLcmFcl+sVl2M+W3ePre3f0DewcT0tu8VAi1glTSEhgVNqdW7y35FRVzlxiGH
         evu3lifw+TUgZzx8rxwAFmOGsL6jMOBL5PCyuMFWE4FIvMcNJ1CXIpagz/YJI7F6AvZ4
         oXAg==
X-Forwarded-Encrypted: i=1; AJvYcCUH7tr3AcTVhnVi0a7W/o46Brl/CU11IR4nOBBAsJW9q/3AW/N/zxGR7sd1O6JQaGNpZdD5pj6U9CrwUAMF3Zx/Ro/37IfEiFgu1MRvZcM=
X-Gm-Message-State: AOJu0YxKhdVd6LpgLkGcs4R6iF2XgpZf9W23Xg9nBO6ce9VOd4AfZ0Sl
	VLx5QsOebgoOmb3g+k2h+kPRsqV2S3is1hFh/V/pFKNT9ot6FSL126HhXo5Y3s6KBixuR+cTTvP
	YmhGBPn3Pgi6ki+Sp+8For9lyyxb3fri+R4oBeIs/QdqsQBV7Ecpe6M7fE1QmfeWH
X-Received: by 2002:a05:600c:358b:b0:422:683b:df4d with SMTP id 5b1f17b1804b1-422862aca70mr25635735e9.8.1718217504323;
        Wed, 12 Jun 2024 11:38:24 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IF9c6UnJX0cTtZy9cm2l0blMA61xj4ptfyilK9HIBnbYNBzABHUDtJIhxGr1xkMg1evDFPT+w==
X-Received: by 2002:a05:600c:358b:b0:422:683b:df4d with SMTP id 5b1f17b1804b1-422862aca70mr25635525e9.8.1718217503744;
        Wed, 12 Jun 2024 11:38:23 -0700 (PDT)
Message-ID: <ca575956-f0dd-4fb9-a307-6b7621681ed9@redhat.com>
Date: Wed, 12 Jun 2024 20:38:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 1/3] mm: pass meminit_context to __free_pages_core()
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
 Mike Rapoport <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240607090939.89524-2-david@redhat.com>
 <2ed64218-7f3b-4302-a5dc-27f060654fe2@redhat.com>
 <20240611121942.050a2215143af0ecb576122f@linux-foundation.org>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <20240611121942.050a2215143af0ecb576122f@linux-foundation.org>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 11.06.24 21:19, Andrew Morton wrote:
> On Tue, 11 Jun 2024 12:06:56 +0200 David Hildenbrand <david@redhat.com> wrote:
> 
>> On 07.06.24 11:09, David Hildenbrand wrote:
>>> In preparation for further changes, let's teach __free_pages_core()
>>> about the differences of memory hotplug handling.
>>>
>>> Move the memory hotplug specific handling from generic_online_page() to
>>> __free_pages_core(), use adjust_managed_page_count() on the memory
>>> hotplug path, and spell out why memory freed via memblock
>>> cannot currently use adjust_managed_page_count().
>>>
>>> Signed-off-by: David Hildenbrand <david@redhat.com>
>>> ---
>>
>> @Andrew, can you squash the following?
> 
> Sure.
> 
> I queued it against "mm: pass meminit_context to __free_pages_core()",
> not against
> 
>> Subject: [PATCH] fixup: mm/highmem: make nr_free_highpages() return "unsigned
>>    long"
> 

Can you squash the following as well? (hopefully the last fixup, otherwise I
might just resend a v2)


 From 53c8c5834e638b2ae5e2a34fa7d49ce0dcf25192 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Wed, 12 Jun 2024 20:31:07 +0200
Subject: [PATCH] fixup: mm: pass meminit_context to __free_pages_core()

Let's add the parameter name also in the declaration.

Signed-off-by: David Hildenbrand <david@redhat.com>
---
  mm/internal.h | 2 +-
  1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/internal.h b/mm/internal.h
index 14bab8a41baf6..254dd907bf9a2 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -605,7 +605,7 @@ extern void __putback_isolated_page(struct page *page, unsigned int order,
  extern void memblock_free_pages(struct page *page, unsigned long pfn,
  					unsigned int order);
  extern void __free_pages_core(struct page *page, unsigned int order,
-		enum meminit_context);
+		enum meminit_context context);
  
  /*
   * This will have no effect, other than possibly generating a warning, if the
-- 
2.45.2


-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Wed Jun 12 22:29:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 Jun 2024 22:29:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739567.1146544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHWSy-0003nI-2l; Wed, 12 Jun 2024 22:29:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739567.1146544; Wed, 12 Jun 2024 22:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHWSx-0003nB-W0; Wed, 12 Jun 2024 22:29:15 +0000
Received: by outflank-mailman (input) for mailman id 739567;
 Wed, 12 Jun 2024 22:29:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHWSw-0003n1-4F; Wed, 12 Jun 2024 22:29:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHWSv-0000oP-St; Wed, 12 Jun 2024 22:29:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHWSv-0000iv-DF; Wed, 12 Jun 2024 22:29:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHWSv-0006sR-Cj; Wed, 12 Jun 2024 22:29:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6hgU9+xdJAgCYEvuL85F+VeCSM0iEhvyDAExBn90Bj0=; b=RmgqbXwFC6KvdaRrAsIkhi74eb
	dILFF4B1+sndLPDtwmOs72+xsyjpi+bmXn49Cb4dL3pLKw6o5rOs2wjg0f66iBt5wJfQysZ5FTZQ0
	ymm2NeZZFej04TwW3meSjpX+qQYRKGQb16cCmiCXkqNC6BQAgMi9vXaHtNpLbodY5mc0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186322-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186322: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
X-Osstest-Versions-That:
    xen=5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 Jun 2024 22:29:13 +0000

flight 186322 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186322/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186315
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186315
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186315
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186315
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186315
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186315
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
baseline version:
 xen                  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d

Last test of basis   186322  2024-06-12 10:51:29 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 04:31:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 04:31:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739609.1146554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHc6m-0004Oq-1S; Thu, 13 Jun 2024 04:30:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739609.1146554; Thu, 13 Jun 2024 04:30:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHc6l-0004Oj-Un; Thu, 13 Jun 2024 04:30:43 +0000
Received: by outflank-mailman (input) for mailman id 739609;
 Thu, 13 Jun 2024 04:30:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHc6j-0004OZ-S6; Thu, 13 Jun 2024 04:30:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHc6j-0006qf-Jn; Thu, 13 Jun 2024 04:30:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHc6j-00051Q-7X; Thu, 13 Jun 2024 04:30:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHc6j-0008BI-71; Thu, 13 Jun 2024 04:30:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VuLjCdc+nJ8nIoUrH9WYgUoYAR4t3G/OKXUzwTj6o+Q=; b=5I4jRo7O1YYu5dKILkHgTEJy1+
	cXTK+5j6n93jGe3fYFkQglxPfOJgzRFQ02Scfiq2hfYJ0r46bN3VFvqhN4KsvDLMZkozmSr4g4T+t
	CZvMQkx9NYk5sHFJAQzGCp43FmRNn5Y+qSQMzHvsfaNcYn9S+4r649WKwA12psg46SIE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186324-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186324: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=cea2a26553ace13ee36b56dc09ad548b5e6907df
X-Osstest-Versions-That:
    linux=2ef5971ff345d3c000873725db555085e0131961
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 04:30:41 +0000

flight 186324 linux-linus real [real]
flight 186329 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186324/
http://logs.test-lab.xenproject.org/osstest/logs/186329/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-qcow2     8 xen-boot                 fail REGR. vs. 186314

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186314
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186314
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186314
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186314
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186314
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                cea2a26553ace13ee36b56dc09ad548b5e6907df
baseline version:
 linux                2ef5971ff345d3c000873725db555085e0131961

Last test of basis   186314  2024-06-12 00:10:33 Z    1 days
Testing same since   186324  2024-06-12 17:12:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cea2a26553ace13ee36b56dc09ad548b5e6907df
Author: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Date:   Tue Jun 11 17:58:57 2024 +0300

    mailmap: Add my outdated addresses to the map file
    
    There are a couple of outdated addresses that are still visible
    in the Git history; add them to .mailmap.
    
    While at it, replace one in the comment.
    
    Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 06:38:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 06:38:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739638.1146564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHe6Y-0002HB-VJ; Thu, 13 Jun 2024 06:38:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739638.1146564; Thu, 13 Jun 2024 06:38:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHe6Y-0002H4-Qs; Thu, 13 Jun 2024 06:38:38 +0000
Received: by outflank-mailman (input) for mailman id 739638;
 Thu, 13 Jun 2024 06:38:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p+hf=NP=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sHe6Y-0002Gy-76
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 06:38:38 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8fcc26a5-294f-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 08:38:36 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.79.104])
 by support.bugseng.com (Postfix) with ESMTPSA id DD4DA4EE0756;
 Thu, 13 Jun 2024 08:38:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fcc26a5-294f-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2] automation/eclair: extend existing deviations of MISRA C:2012 Rule 16.3
Date: Thu, 13 Jun 2024 08:38:25 +0200
Message-Id: <20c0779f2d749a682758defc06514772e97c9d89.1718260010.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to deviate more cases where an
unintentional fallthrough cannot happen.

Add Rule 16.3 to the monitored set and tag it as clean for arm.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
The v1 of this patch did not receive any comments:
https://lists.xenproject.org/archives/html/xen-devel/2024-05/msg01754.html
I am sending this new version with some wording improvements.
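To illustrate what these deviations cover, here is a minimal, hypothetical C
sketch (not taken from the Xen tree) of switch clauses ending with the
"allowed terminal statements" the deviations describe: a return, a call to a
noreturn function, a break, or an if-else whose branches both end that way.
The function names are made up for the example.

```c
#include <stdlib.h>

/* A function that does not give control back (attribute noreturn). */
__attribute__((noreturn)) static void fatal(void)
{
    exit(1);
}

static int classify(int x)
{
    switch (x) {
    case 0:
        return 10;       /* terminal: return statement */
    case 1:
        if (x > 0)
            return 11;   /* if-else with both branches terminal */
        else
            return 12;
    case 2:
        fatal();         /* terminal: call to a noreturn function */
    default:
        break;           /* terminal: break statement */
    }
    return -1;
}
```

None of these clauses can fall through unintentionally, so under the
configuration above they would not be reported against Rule 16.3.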
---
 .../eclair_analysis/ECLAIR/deviations.ecl     | 30 ++++++++++++++-----
 .../eclair_analysis/ECLAIR/monitored.ecl      |  1 +
 automation/eclair_analysis/ECLAIR/tagging.ecl |  2 +-
 docs/misra/deviations.rst                     | 28 +++++++++++++++--
 4 files changed, 49 insertions(+), 12 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661..dd9445578b 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -364,14 +364,29 @@ therefore it is deemed better to leave such files as is."
 -config=MC3R1.R16.2,reports+={deliberate, "any_area(any_loc(file(x86_emulate||x86_svm_emulate)))"}
 -doc_end
 
--doc_begin="Switch clauses ending with continue, goto, return statements are
-safe."
--config=MC3R1.R16.3,terminals+={safe, "node(continue_stmt||goto_stmt||return_stmt)"}
+-doc_begin="Statements that change the control flow (i.e., break, continue, goto, return) and calls to functions that do not return the control back are \"allowed terminal statements\"."
+-stmt_selector+={r16_3_allowed_terminal, "node(break_stmt||continue_stmt||goto_stmt||return_stmt)||call(property(noreturn))"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_allowed_terminal"}
+-doc_end
+
+-doc_begin="An if-else statement having both branches ending with an allowed terminal statement is itself an allowed terminal statement."
+-stmt_selector+={r16_3_if, "node(if_stmt)&&(child(then,r16_3_allowed_terminal)||child(then,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
+-stmt_selector+={r16_3_else, "node(if_stmt)&&(child(else,r16_3_allowed_terminal)||child(else,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
+-stmt_selector+={r16_3_if_else, "r16_3_if&&r16_3_else"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_else"}
+-doc_end
+
+-doc_begin="An if-else statement having an always true condition and the true branch ending with an allowed terminal statement is itself an allowed terminal statement."
+-stmt_selector+={r16_3_if_true, "r16_3_if&&child(cond,definitely_in(1..))"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_true"}
+-doc_end
+
+-doc_begin="Switch clauses ending with a statement expression which, in turn, ends with an allowed terminal statement are safe."
+-config=MC3R1.R16.3,terminals+={safe, "node(stmt_expr)&&child(stmt,node(compound_stmt)&&any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
 -doc_end
 
--doc_begin="Switch clauses ending with a call to a function that does not give
-the control back (i.e., a function with attribute noreturn) are safe."
--config=MC3R1.R16.3,terminals+={safe, "call(property(noreturn))"}
+-doc_begin="Switch clauses ending with a do-while-false which, in turn, ends with an allowed terminal statement are safe, except for debug macro ASSERT_UNREACHABLE()."
+-config=MC3R1.R16.3,terminals+={safe, "!macro(name(ASSERT_UNREACHABLE))&&node(do_stmt)&&child(cond,definitely_in(0))&&child(body,any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
 -doc_end
 
 -doc_begin="Switch clauses ending with pseudo-keyword \"fallthrough\" are
@@ -383,8 +398,7 @@ safe."
 -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(/BUG\\(\\);/))))"}
 -doc_end
 
--doc_begin="Switch clauses not ending with the break statement are safe if an
-explicit comment indicating the fallthrough intention is present."
+-doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
 -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
 -doc_end
 
diff --git a/automation/eclair_analysis/ECLAIR/monitored.ecl b/automation/eclair_analysis/ECLAIR/monitored.ecl
index 4daecb0c83..45a60074f9 100644
--- a/automation/eclair_analysis/ECLAIR/monitored.ecl
+++ b/automation/eclair_analysis/ECLAIR/monitored.ecl
@@ -22,6 +22,7 @@
 -enable=MC3R1.R14.1
 -enable=MC3R1.R14.4
 -enable=MC3R1.R16.2
+-enable=MC3R1.R16.3
 -enable=MC3R1.R16.6
 -enable=MC3R1.R16.7
 -enable=MC3R1.R17.1
diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
index a354ff322e..07de2e7b65 100644
--- a/automation/eclair_analysis/ECLAIR/tagging.ecl
+++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
@@ -105,7 +105,7 @@ if(string_equal(target,"x86_64"),
 )
 
 if(string_equal(target,"arm64"),
-    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
+    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.3||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
 )
 
 -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..f693bb59af 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -309,12 +309,34 @@ Deviations related to MISRA C:2012 Rules:
      - Tagged as `deliberate` for ECLAIR.
 
    * - R16.3
-     - Switch clauses ending with continue, goto, return statements are safe.
+     - Statements that change the control flow (i.e., break, continue, goto,
+       return) and calls to functions that do not return the control back are
+       \"allowed terminal statements\".
      - Tagged as `safe` for ECLAIR.
 
    * - R16.3
-     - Switch clauses ending with a call to a function that does not give
-       the control back (i.e., a function with attribute noreturn) are safe.
+     - An if-else statement having both branches ending with one of the allowed
+       terminal statements is itself an allowed terminal statement.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - An if-else statement having an always true condition and the true
+       branch ending with an allowed terminal statement is itself an allowed
+       terminal statement.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - Switch clauses ending with a statement expression which, in turn, ends
+       with an allowed terminal statement are safe (e.g., the expansion of
+       generate_exception()).
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - Switch clauses ending with a do-while-false which, in turn, ends with an
+       allowed terminal statement are safe (e.g., PARSE_ERR_RET()).
+       Since ASSERT_UNREACHABLE() is a construct that is effective in debug
+       builds only, it is not considered an allowed terminal statement,
+       despite its definition.
      - Tagged as `safe` for ECLAIR.
 
    * - R16.3
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 07:11:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 07:11:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739657.1146585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHecN-0007CR-AH; Thu, 13 Jun 2024 07:11:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739657.1146585; Thu, 13 Jun 2024 07:11:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHecN-0007CK-7J; Thu, 13 Jun 2024 07:11:31 +0000
Received: by outflank-mailman (input) for mailman id 739657;
 Thu, 13 Jun 2024 07:11:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHecM-0007CA-CE; Thu, 13 Jun 2024 07:11:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHecM-0001al-Ad; Thu, 13 Jun 2024 07:11:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHecM-0002uq-0h; Thu, 13 Jun 2024 07:11:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHecM-0003Fi-0E; Thu, 13 Jun 2024 07:11:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9m/RccQhnYwYXmzezJAUfVFgzAGi5dZQUqBPJquynwE=; b=Ri9ZfedW7OWKpTQUYZS3bZoz9o
	7w4fKvSOdZ+CqYFRc7eYN4UnP7ArTSQ27PvCtsgDYVuUDj7c/8NB0Xc1IJC/QIWl2sLmogimQStx0
	29XEpvZpdttgZE6ge1SQ34haEAo2ugfRk93lVdxDSAYxMZWAYiLgJ88CsDQ/N5HcVUaA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186328-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186328: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=401448f2d12f969956fac3ef3d29dc065430fd56
X-Osstest-Versions-That:
    xen=5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 07:11:30 +0000

flight 186328 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186328/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186322
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186322
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186322
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186322
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186322
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186322
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  401448f2d12f969956fac3ef3d29dc065430fd56
baseline version:
 xen                  5ea7f2c9d7a1334b3b2bd5f67fab4d447b60613d

Last test of basis   186322  2024-06-12 10:51:29 Z    0 days
Testing same since   186328  2024-06-12 22:40:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Smith <dpsmith@apertussolutions.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Marek Marczykowski <marmarek@invisiblethingslab.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5ea7f2c9d7..401448f2d1  401448f2d12f969956fac3ef3d29dc065430fd56 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 07:33:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 07:33:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739668.1146608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHex8-0001ga-5e; Thu, 13 Jun 2024 07:32:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739668.1146608; Thu, 13 Jun 2024 07:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHex8-0001gT-2c; Thu, 13 Jun 2024 07:32:58 +0000
Received: by outflank-mailman (input) for mailman id 739668;
 Thu, 13 Jun 2024 07:32:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHex6-0001gN-Dx
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 07:32:56 +0000
Received: from mail-yw1-x1132.google.com (mail-yw1-x1132.google.com
 [2607:f8b0:4864:20::1132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 21811a5f-2957-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 09:32:48 +0200 (CEST)
Received: by mail-yw1-x1132.google.com with SMTP id
 00721157ae682-62f39fcb010so8723337b3.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 00:32:48 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5bf2890sm4109446d6.2.2024.06.13.00.32.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 00:32:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21811a5f-2957-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718263967; x=1718868767; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=S/+Jpp5P1BNC8rRtA3cmywRtmY8n2l7RdBddXTwgHjw=;
        b=cLZhpLbn1VVBOAZN3N3JmXp2ndsRX1PFvHwqzF5e1oZda98NA8rHHF568kaN8VGgbe
         THpXdry3NqolL//ndFDONpensk3trJbXH01UoQGbvV4B+jWrhygq3xekfnOURyZysrIe
         56WXKQ2+/Yu8Wr3Vp6RLw5L0/QWD9up1dLITk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718263967; x=1718868767;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=S/+Jpp5P1BNC8rRtA3cmywRtmY8n2l7RdBddXTwgHjw=;
        b=mJ3hnaYe0BAZrnYPgCNMoWreM0segO1kieKxPmDEOULrI4KlIfk03bqghca4WcCONv
         GUSM6LjHdhlKLmn5QQg1oj6w6zTC8umz6zAg9UOdqKNi3XvA0e/vxh+81yMlAnsPRCQP
         pz5YOYTzapZKchPje4MsB166nk2/JrPB7wVY3uPu++tKx1YKqELYqeXRhw7A1pLS0Zi/
         hFFDRP5YQjIne1jwa2lJq1pvPdVWh4eE4j9EEqYI+rLp/1xqbHKseYrGqU1nAYMWUamz
         932ps8ZoDfH9Uijy5CmPFrstdad5iNPoI6n3c2K6Fc0mJO1ExZTM/uJKuioCUfrSaf8r
         JZjw==
X-Gm-Message-State: AOJu0Yw+zGWFB6nsfujpOlHbsJlP5LqwpfwyEL7ZVm4Nlbfng/Ac/nxs
	EISGAit5eizU0yJbvYD2JZPorqkjr4PrSYQkiKYyPlXsNiUjyRwFhmPnxmq+wE4=
X-Google-Smtp-Source: AGHT+IFivZj+q1Rxyq3t0OXNyBjKNeJ4VI/L8RcS3AzaM3/mY+QXGJBbacXqoJ9TbK413abbw+7Rvw==
X-Received: by 2002:a0d:df0a:0:b0:61b:e62e:8fad with SMTP id 00721157ae682-62fbbdee2e7mr38263427b3.21.1718263966010;
        Thu, 13 Jun 2024 00:32:46 -0700 (PDT)
Date: Thu, 13 Jun 2024 09:32:43 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v2 for-4.19 3/3] x86/EPT: drop questionable mfn_valid()
 from epte_get_entry_emt()
Message-ID: <ZmqgmzEH5-5dNDVJ@macbook>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
 <7607c5f7-772a-4c49-b2df-19f32ec2180b@suse.com>
 <Zmm4JdaLL0oRALL_@macbook>
 <07d38484-dda3-4494-9dbb-75d4d2dbc3c3@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <07d38484-dda3-4494-9dbb-75d4d2dbc3c3@suse.com>

On Wed, Jun 12, 2024 at 05:14:37PM +0200, Jan Beulich wrote:
> On 12.06.2024 17:00, Roger Pau Monné wrote:
> > On Wed, Jun 12, 2024 at 03:17:38PM +0200, Jan Beulich wrote:
> >> mfn_valid() is RAM-focused; it will often return false for MMIO. Yet
> >> access to actual MMIO space should not generally be restricted to UC
> >> only; especially video frame buffer accesses are unduly affected by such
> >> a restriction.
> >>
> >> Since, as of ???????????? ("x86/EPT: avoid marking non-present entries
> >> for re-configuring"), the function won't be called with INVALID_MFN or,
> >> worse, truncated forms thereof anymore, we can fully drop that check.
> >>
> >> Fixes: 81fd0d3ca4b2 ("x86/hvm: simplify 'mmio_direct' check in epte_get_entry_emt()")
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >> Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > 
> > I do think this is the way to go (removing quirks from
> > epte_get_entry_emt()), however it's a risky change to make at this
> > point in the release.
> > 
> > If this turns out to cause some unexpected damage, it would only
> > affect HVM guests with PCI passthrough and PVH dom0, which I consider
> > not great, but tolerable.
> > 
> > I would be more comfortable with making the change just not so close
> > to the release, but that's where we are.
> 
> Certainly, and I could live with Oleksii revoking his R-a-b (or simply
> not offering it for either of the two prereq changes). Main thing for
> me is - PVH Dom0 finally isn't so horribly slow anymore. However, if it
> doesn't go into the release, then I'd also be unsure about eventual
> backporting.

Thinking about this, it's also likely to fix issues with PCI
passthrough to HVM guests, so I'm quite sure we would need to
backport it.

David Woodhouse already had to fix it once:

https://xenbits.xen.org/gitweb/?p=xen.git;a=commit;h=30921dc2df3665ca1b2593595aa6725ff013d386

And I'm quite sure this fix was not related to PVH dom0.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 08:16:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 08:16:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739683.1146618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHfd6-0007eA-II; Thu, 13 Jun 2024 08:16:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739683.1146618; Thu, 13 Jun 2024 08:16:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHfd6-0007e3-FJ; Thu, 13 Jun 2024 08:16:20 +0000
Received: by outflank-mailman (input) for mailman id 739683;
 Thu, 13 Jun 2024 08:16:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHfd4-0007dx-RP
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 08:16:18 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34121396-295d-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 10:16:15 +0200 (CEST)
Received: by mail-lf1-x133.google.com with SMTP id
 2adb3069b0e04-52bc29c79fdso1005074e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 01:16:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56dd2daasm45696566b.97.2024.06.13.01.16.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 01:16:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34121396-295d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718266575; x=1718871375; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=/ET4vw2AlU+0cfjqxdZV81Wk6N10Ew+YwgelARvdfJQ=;
        b=TDj5YQtB8sJp23gQQKCxi6zdas9Grlihk8KaG4bc/yhT6lHjbDC9XmM9M7dHhESRns
         OalvEyjhMMSvH7JsoRi49mDbaVmJSb9VLdOYN7HtCiJqf8OtCnYbXtv26ZgHt8WDnX7q
         /KrkMmR/iGVOgdJUc2Ml3dDAC9QBbMnCT5JJVZP6inYbjShrTT2/f6nKs245X1G5/kze
         9LHcXmWvdx2N6ZcduddELFw98q8kvDObXinE9idGaQ+IS9nSj+0m1mhvzPnEyt2Hrm8m
         P4rKJrbTBi4hfgcxB5AdwhR/s8H61GkJE2efO52ccRN8AznrI4LII7G4apfWFRvkuZpq
         wIeg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718266575; x=1718871375;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/ET4vw2AlU+0cfjqxdZV81Wk6N10Ew+YwgelARvdfJQ=;
        b=ACd30a8EIyBi9+UxmwkotyjZR+XG7QQ29+T/03QKnsO03MGInShE/8ddGEqYbo7xpb
         v5SDFs4C1xPCYuGVJABrCrRrzF96hwXhPDh44oJ016t0ntZDOg6txbVrX1yqzL3H0aRk
         apLlZWRsEtDOjBB3X7cE7pfZKR8862RiAv8SjGkXB93t7auPc7l7CievtCBCv/X+Xk1J
         MFqByMnuD8xsK8T1BXC/pFAU8kdISJyA9gRufIJ05hjxvEbwRC7pgSSk6qoV3jHQIfOV
         +Leh4s1aJhWSyEF6TiuQJUVuOWTxhcGq5Jr/kCq1w9F/ZzmsOiaxJTesDviCEgUZHjsi
         dO2g==
X-Forwarded-Encrypted: i=1; AJvYcCXNt+YbYg961BleY1N1ZyZ49AkjQ5+sVIuI4sbP/92d/+1xQKu+YdEWZTP7Oz0F9Vr1IgtPAYpaHOJ3mCXDrKCNb4oAboJOdYNzMQYBeAA=
X-Gm-Message-State: AOJu0Yxz/wiUFwUDXDP4yQLaPyw6nR9Mn7lVRWHjhlK8TwIVCxL9bZgm
	5hTanNLQfAWfdHIHUcryXftQEbd6t6xXZ9Mky1EcxuP3XSXmjO2alqQUaHiEyQ==
X-Google-Smtp-Source: AGHT+IEc067c8Jx+141+0kjmU/kly8F3BtRw6SVvxaMDdG7Y6qEkHz8/wu59gOh9k5jSkoIOe1mtVw==
X-Received: by 2002:a19:e00a:0:b0:52c:8811:42f7 with SMTP id 2adb3069b0e04-52c9a3d2020mr3075963e87.19.1718266574844;
        Thu, 13 Jun 2024 01:16:14 -0700 (PDT)
Message-ID: <cbd59b32-213b-4b5c-90fb-67906b7ae680@suse.com>
Date: Thu, 13 Jun 2024 10:16:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] automation/eclair: extend existing deviations of
 MISRA C:2012 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <20c0779f2d749a682758defc06514772e97c9d89.1718260010.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20c0779f2d749a682758defc06514772e97c9d89.1718260010.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 08:38, Federico Serafini wrote:
> Update ECLAIR configuration to deviate more cases where an
> unintentional fallthrough cannot happen.
> 
> Add Rule 16.3 to the monitored set and tag it as clean for arm.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
> The v1 of this patch did not receive any comments:
> https://lists.xenproject.org/archives/html/xen-devel/2024-05/msg01754.html
> I am sending this new version with some wording improvements.
> ---
>  .../eclair_analysis/ECLAIR/deviations.ecl     | 30 ++++++++++++++-----
>  .../eclair_analysis/ECLAIR/monitored.ecl      |  1 +
>  automation/eclair_analysis/ECLAIR/tagging.ecl |  2 +-
>  docs/misra/deviations.rst                     | 28 +++++++++++++++--
>  4 files changed, 49 insertions(+), 12 deletions(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 447c1e6661..dd9445578b 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -364,14 +364,29 @@ therefore it is deemed better to leave such files as is."
>  -config=MC3R1.R16.2,reports+={deliberate, "any_area(any_loc(file(x86_emulate||x86_svm_emulate)))"}
>  -doc_end
>  
> --doc_begin="Switch clauses ending with continue, goto, return statements are
> -safe."
> --config=MC3R1.R16.3,terminals+={safe, "node(continue_stmt||goto_stmt||return_stmt)"}
> +-doc_begin="Statements that change the control flow (i.e., break, continue, goto, return) and calls to functions that do not return the control back are \"allowed terminal statements\"."
> +-stmt_selector+={r16_3_allowed_terminal, "node(break_stmt||continue_stmt||goto_stmt||return_stmt)||call(property(noreturn))"}
> +-config=MC3R1.R16.3,terminals+={safe, "r16_3_allowed_terminal"}
> +-doc_end
> +
> +-doc_begin="An if-else statement having both branches ending with an allowed terminal statement is itself an allowed terminal statement."
> +-stmt_selector+={r16_3_if, "node(if_stmt)&&(child(then,r16_3_allowed_terminal)||child(then,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
> +-stmt_selector+={r16_3_else, "node(if_stmt)&&(child(else,r16_3_allowed_terminal)||child(else,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
> +-stmt_selector+={r16_3_if_else, "r16_3_if&&r16_3_else"}
> +-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_else"}
> +-doc_end
> +
> +-doc_begin="An if-else statement having an always true condition and the true branch ending with an allowed terminal statement is itself an allowed terminal statement."
> +-stmt_selector+={r16_3_if_true, "r16_3_if&&child(cond,definitely_in(1..))"}
> +-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_true"}
> +-doc_end
> +
> +-doc_begin="Switch clauses ending with a statement expression which, in turn, ends with an allowed terminal statement are safe."
> +-config=MC3R1.R16.3,terminals+={safe, "node(stmt_expr)&&child(stmt,node(compound_stmt)&&any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
>  -doc_end
>  
> --doc_begin="Switch clauses ending with a call to a function that does not give
> -the control back (i.e., a function with attribute noreturn) are safe."
> --config=MC3R1.R16.3,terminals+={safe, "call(property(noreturn))"}
> +-doc_begin="Switch clauses ending with a do-while-false which, in turn, ends with an allowed terminal statement are safe, except for debug macro ASSERT_UNREACHABLE()."
> +-config=MC3R1.R16.3,terminals+={safe, "!macro(name(ASSERT_UNREACHABLE))&&node(do_stmt)&&child(cond,definitely_in(0))&&child(body,any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
>  -doc_end
>  
>  -doc_begin="Switch clauses ending with pseudo-keyword \"fallthrough\" are
> @@ -383,8 +398,7 @@ safe."
>  -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(/BUG\\(\\);/))))"}
>  -doc_end
>  
> --doc_begin="Switch clauses not ending with the break statement are safe if an
> -explicit comment indicating the fallthrough intention is present."
> +-doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
>  -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
>  -doc_end
>  
> diff --git a/automation/eclair_analysis/ECLAIR/monitored.ecl b/automation/eclair_analysis/ECLAIR/monitored.ecl
> index 4daecb0c83..45a60074f9 100644
> --- a/automation/eclair_analysis/ECLAIR/monitored.ecl
> +++ b/automation/eclair_analysis/ECLAIR/monitored.ecl
> @@ -22,6 +22,7 @@
>  -enable=MC3R1.R14.1
>  -enable=MC3R1.R14.4
>  -enable=MC3R1.R16.2
> +-enable=MC3R1.R16.3
>  -enable=MC3R1.R16.6
>  -enable=MC3R1.R16.7
>  -enable=MC3R1.R17.1
> diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
> index a354ff322e..07de2e7b65 100644
> --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
> +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
> @@ -105,7 +105,7 @@ if(string_equal(target,"x86_64"),
>  )
>  
>  if(string_equal(target,"arm64"),
> -    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
> +    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.3||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
>  )
>  
>  -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}
> diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> index 36959aa44a..f693bb59af 100644
> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -309,12 +309,34 @@ Deviations related to MISRA C:2012 Rules:
>       - Tagged as `deliberate` for ECLAIR.
>  
>     * - R16.3
> -     - Switch clauses ending with continue, goto, return statements are safe.
> +     - Statements that change the control flow (i.e., break, continue, goto,
> +       return) and calls to functions that do not return the control back are
> +       \"allowed terminal statements\".
>       - Tagged as `safe` for ECLAIR.
>  
>     * - R16.3
> -     - Switch clauses ending with a call to a function that does not give
> -       the control back (i.e., a function with attribute noreturn) are safe.
> +     - An if-else statement having both branches ending with one of the allowed
> +       terminal statemets is itself an allowed terminal statements.

Nit: "... terminal statements is ... terminal statement."

> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R16.3
> +     - An if-else statement having an always true condition and the true
> +       branch ending with an allowed terminal statement is itself an allowed
> +       terminal statement.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R16.3
> +     - Switch clauses ending with a statement expression which, in turn, ends
> +       with an allowed terminal statement are safe (e.g., the expansion of
> +       generate_exception()).
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R16.3
> +     - Switch clauses ending with a do-while-false which, in turn, ends with an

Maybe more precisely "the body of which"?

> +       allowed terminal statement are safe (e.g., PARSE_ERR_RET()).
> +       Being ASSERT_UNREACHABLE() a construct that is effective in debug builds
> +       only, it is not considered as an allowed terminal statement, despite its
> +       definition.

DYM despite its name? Its definition is what specifically renders it unsuitable
for release builds.

Also I think the sentence wants to either start "ASSERT_UNREACHABLE() being a
..." or wants to be re-ordered to e.g. "Being a construct that is effective in
debug builds only, ASSERT_UNREACHABLE() is not considered ..."
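For reference, the do-while-false shape the deviation describes can be sketched as below (hypothetical macro; the real PARSE_ERR_RET() is not reproduced here):

```c
#include <stdio.h>

/* Hypothetical error-return macro in the do-while-false style the
 * deviation describes; the body ends with an allowed terminal
 * statement (return), so a switch clause ending with it is safe. */
#define FAIL_RET(rc) do {                          \
    fprintf(stderr, "parse error %d\n", (rc));     \
    return (rc);                                   \
} while ( 0 )

static int parse(int token)
{
    switch ( token )
    {
    case 0:
        return 0;
    case 1:
        FAIL_RET(-1);  /* clause ends with the do-while-false */
    default:
        break;
    }
    return token;
}
```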

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 08:19:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 08:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739688.1146627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHfgE-0008S2-Vi; Thu, 13 Jun 2024 08:19:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739688.1146627; Thu, 13 Jun 2024 08:19:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHfgE-0008Rv-T8; Thu, 13 Jun 2024 08:19:34 +0000
Received: by outflank-mailman (input) for mailman id 739688;
 Thu, 13 Jun 2024 08:19:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHfgD-0008Rl-Mc
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 08:19:33 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9a417d3-295d-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 10:19:32 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6f1da33826so109889466b.0
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 01:19:32 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ed3590sm45571666b.98.2024.06.13.01.19.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 01:19:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9a417d3-295d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718266772; x=1718871572; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=N8LlFuRQgnRz0O9t1OBsogqAF1aAI8bFLWduLEN2M4c=;
        b=UWZMsdy5DaNRAfqGgJnKFNQp5FYAUz0I0a5urzm9Hbvta8ktF0JeWn8QyNF+B/qYZo
         dGsY6LG0oFy9OdqTbBBkOtAsKIDxOxShjqNrfEAR6MEB1bsa5comKSjd8+o4MxlMqGMe
         pIZvSR+dEpItUEdy1IZ8SABtO+dMYTVgQ7oA5/gPN2a8923V+IUZzFGgaz1Fymft5nsX
         iuZKQGE+N8e9lrkOIf8k2pW0nnpNKLOvZQqrOSoJ7XpE432cm+cTxvtUp+bbqg/BPIy0
         L81696XZeUQAZtVlneCJe8aXlYYIzOoZeFzpOboDsz21LQcTQrExRya2t4jrxMWF+9zZ
         ZYjA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718266772; x=1718871572;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=N8LlFuRQgnRz0O9t1OBsogqAF1aAI8bFLWduLEN2M4c=;
        b=GhcE5LEgicuJxCNwkrnSmGKVkHZ0I0ggCRYCI3+B+e1IDq/JdQe7TQ2fOeL79sl6wG
         b4VLRqdmLH8ySYw+Pu6xtAkXY/rnTFF2s13QIQts3voUF17CfLPGqjjIq2vdm2zmrxne
         wZT3qfyIJRwsKPRQUhq/byongYnUsPXBwD9b5+98srBsyzLntJU1OCPMOSRzkeXH8cGK
         BGK7q82FBY14je9UlU++llr6Oi8lR4UyKSr/qCFRTK43sXLStwRlDEMgfowVN0ndMOlC
         Sd3Z57cWJdekWfpK2NtIlfQjMPIordjX0vqLRDWjP5jXDXD6wz7K4iXbYpbwqqhu+LAm
         Ag+Q==
X-Gm-Message-State: AOJu0YzkBbeenn+BtshTCH3DJMRwVBVnaGKsBr25c2cR6hRPGMQQIGZk
	U4fz9VkSbzhQzrwQFRFquyOwIslEBWLh+CYlf+OvOSj9mwcE7DAB02MpPY4N8PulS351SFKUn2g
	=
X-Google-Smtp-Source: AGHT+IERydvPM3D/32D++x4lvjvmilGv2w915HTQhdNwSpmX0Ga4CU/rKGOL5SRt7VND7dMAerfgxg==
X-Received: by 2002:a17:907:9407:b0:a6f:535e:77f1 with SMTP id a640c23a62f3a-a6f535e79d3mr167427166b.56.1718266771791;
        Thu, 13 Jun 2024 01:19:31 -0700 (PDT)
Message-ID: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
Date: Thu, 13 Jun 2024 10:19:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Intel CPUs have an MSR bit to limit CPUID enumeration to leaf two. If
this bit is set by the BIOS then CPUID evaluation does not work when
data from any leaf greater than two is needed; early_cpu_init() in
particular wants to collect leaf 7 data.

Cure this by unlocking CPUID right before evaluating anything which
depends on the maximum CPUID leaf being greater than two.

Inspired by (and description cloned from) Linux commit 0c2f6d04619e
("x86/topology/intel: Unlock CPUID before evaluating anything").

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
While I couldn't spot anything, it kind of feels as if I'm overlooking
further places where we might be inspecting, in particular, leaf 7 data
yet earlier.

No Fixes: tag(s), as imo there would be too many that would want
enumerating.

--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -336,7 +336,8 @@ void __init early_cpu_init(bool verbose)
 
 	c->x86_vendor = x86_cpuid_lookup_vendor(ebx, ecx, edx);
 	switch (c->x86_vendor) {
-	case X86_VENDOR_INTEL:    actual_cpu = intel_cpu_dev;    break;
+	case X86_VENDOR_INTEL:    intel_unlock_cpuid_leaves(c);
+				  actual_cpu = intel_cpu_dev;    break;
 	case X86_VENDOR_AMD:      actual_cpu = amd_cpu_dev;      break;
 	case X86_VENDOR_CENTAUR:  actual_cpu = centaur_cpu_dev;  break;
 	case X86_VENDOR_SHANGHAI: actual_cpu = shanghai_cpu_dev; break;
--- a/xen/arch/x86/cpu/cpu.h
+++ b/xen/arch/x86/cpu/cpu.h
@@ -24,3 +24,5 @@ void amd_init_lfence(struct cpuinfo_x86
 void amd_init_ssbd(const struct cpuinfo_x86 *c);
 void amd_init_spectral_chicken(void);
 void detect_zen2_null_seg_behaviour(void);
+
+void intel_unlock_cpuid_leaves(struct cpuinfo_x86 *c);
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -303,10 +303,24 @@ static void __init noinline intel_init_l
 		ctxt_switch_masking = intel_ctxt_switch_masking;
 }
 
-static void cf_check early_init_intel(struct cpuinfo_x86 *c)
+/* Unmask CPUID levels if masked. */
+void intel_unlock_cpuid_leaves(struct cpuinfo_x86 *c)
 {
-	u64 misc_enable, disable;
+	uint64_t misc_enable, disable;
+
+	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
+
+	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
+	if (disable) {
+		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
+		bootsym(trampoline_misc_enable_off) |= disable;
+		c->cpuid_level = cpuid_eax(0);
+		printk(KERN_INFO "revised cpuid level: %u\n", c->cpuid_level);
+	}
+}
 
+static void cf_check early_init_intel(struct cpuinfo_x86 *c)
+{
 	/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
 	if (c->x86 == 15 && c->x86_cache_alignment == 64)
 		c->x86_cache_alignment = 128;
@@ -315,16 +329,7 @@ static void cf_check early_init_intel(st
 	    bootsym(trampoline_misc_enable_off) & MSR_IA32_MISC_ENABLE_XD_DISABLE)
 		printk(KERN_INFO "re-enabled NX (Execute Disable) protection\n");
 
-	/* Unmask CPUID levels and NX if masked: */
-	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
-
-	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
-	if (disable) {
-		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
-		bootsym(trampoline_misc_enable_off) |= disable;
-		printk(KERN_INFO "revised cpuid level: %d\n",
-		       cpuid_eax(0));
-	}
+	intel_unlock_cpuid_leaves(c);
 
 	/* CPUID workaround for Intel 0F33/0F34 CPU */
 	if (boot_cpu_data.x86 == 0xF && boot_cpu_data.x86_model == 3 &&


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 08:38:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 08:38:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739701.1146638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHfyj-00031W-Fi; Thu, 13 Jun 2024 08:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739701.1146638; Thu, 13 Jun 2024 08:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHfyj-00031P-D2; Thu, 13 Jun 2024 08:38:41 +0000
Received: by outflank-mailman (input) for mailman id 739701;
 Thu, 13 Jun 2024 08:38:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHfyi-00031J-03
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 08:38:40 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 540a49ad-2960-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 10:38:38 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id
 4fb4d7f45d1cf-57cad452f8bso649317a12.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 01:38:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ecdd63sm46689266b.124.2024.06.13.01.38.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 01:38:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 540a49ad-2960-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718267917; x=1718872717; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=QTwzH08dHLfZS4mGNrImJhr+GbxIyr9gFLLXFHhBz+Y=;
        b=BO4cqVnN+XO5Xff6w3eCQUZAhJLOgadmasbQJKIISPelJu5mI12uKEqhhgEGCx7cn2
         hbx70QGY7DpL0FS+Lq84DsLuw2x73Et34YTnk3OM8IKyQ0s0uPoef5YLz/uZNS38H6Is
         4rC4RLbjt0HgxDOdwsMOgO+zmbcjdcHaZkey7p8NVm9s60UIrXdIFXiBU5G9fYT1UHWI
         VMcoEMrwchG8jb8FA3QMIHaB2T3ldo/ut2jj1+KmlncUzapi5F6Vr/ndP7Uod03ggFLo
         BQUbtd9YnVUpI06pDYODmmdZYeCZ9P1oh9Vdfrcaw4I6g4K8sPoJXGfaLgAclV7VmSmx
         GzIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718267917; x=1718872717;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=QTwzH08dHLfZS4mGNrImJhr+GbxIyr9gFLLXFHhBz+Y=;
        b=E1wWh55SYoyA0PCJOczs72jgbaqlrAUBpVCCglZAg0lmIs7bC8hnrVuqTEn4po+bxB
         coq5KEupei5yJNBPhTUkWc4u0JDmKVd/C5Z31NPV4zfAbq4rWHc4cWga1dFXiwTlhE9u
         AfrcKV8eEqm5+4TV+hPLaFDcRguli8sHHqQ4nyh7ovOG6wwJaPMQGCGCJK7U6WCZTlNs
         ewlFMOe3TD8tQ5ExKhCoTEvGdeqNB/qMtPl7/4xNksHoBnSupjDkHTAYI6+bPT7ibyhD
         wLMonBjZq2wgMhN1h06bw7/O+SeVrEZJaAgABr36IGl8KDJL5JN1YgSY09nCqyvMAbQL
         TxjQ==
X-Forwarded-Encrypted: i=1; AJvYcCVni497LvF7UPulVuTmbB1jOnM5pUAfpm4BbGSWFPSa3dm2puMT5gNhXRmLn3Mo7UwWAEHxeT8LmaLxzB8ZpqCXDGqAT+gAKW7/PrIbMFA=
X-Gm-Message-State: AOJu0YyfoD8toozP9LsRIGiO2klLViBvUd/xviZ64R17XSwuLx/05C+l
	2gfhfM8PufVASVgdLPAJEQd4qWyLsTkyyZpM7Tbn4t63zi2Rnx4zD6p/qqSatw==
X-Google-Smtp-Source: AGHT+IFu8lhNuAEoxwvqTTJRYMze4CK2ZerVD00WoN96Nygc7CPcXFdl7oxtI7MWE5CQcy7gHJbbog==
X-Received: by 2002:a17:906:2518:b0:a6f:4fc8:2658 with SMTP id a640c23a62f3a-a6f4fc870e7mr156321566b.14.1718267917036;
        Thu, 13 Jun 2024 01:38:37 -0700 (PDT)
Message-ID: <d45ef203-aa29-4aa6-8b40-0449334a2bf0@suse.com>
Date: Thu, 13 Jun 2024 10:38:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com> <Zml6-ViFPTWI1cUc@macbook>
 <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com> <ZmnAgSBjjP6N-uJS@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmnAgSBjjP6N-uJS@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 12.06.2024 17:36, Roger Pau Monné wrote:
> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
>> On 12.06.2024 12:39, Roger Pau Monné wrote:
>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>>>
>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>>>
>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>>>> CPUs that could be used as fallback, and if that's the case do move the
>>>>> interrupt back to the previous destination.  Note this is easier because the
>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>>>> vector on the destination.
>>>>>
>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>>>> shouldn't be possible to get into _assign_irq_vector() with
>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>>>> ->arch.old_cpu_mask.
>>>>>
>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>>>> longer part of the affinity set,
>>>>
>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>>>
>>>>> move the interrupt to a different CPU part of
>>>>> the provided mask
>>>>
>>>> ... this (->arch.cpu_mask related).
>>>
>>> No, the "provided mask" here is the "mask" parameter, not
>>> ->arch.cpu_mask.
>>
>> Oh, so this describes the case of "hitting" the comment at the very bottom of
>> the first hunk then? (I probably was misreading this because I was expecting
>> it to describe a code change, rather than the case where original behavior
>> needs retaining. IOW - all fine here then.)
>>
>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>>>> pending interrupt movement to be completed.
>>>>
>>>> Right, that's to clean up state from before the initial move. What isn't
>>>> clear to me is what's to happen with the state of the intermediate
>>>> placement. Description and code changes leave me with the impression that
>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>>>> why that would be an okay thing to do.
>>>
>>> There isn't much we can do with the intermediate placement, as the CPU
>>> is going offline.  However we can drain any pending interrupts from
>>> IRR after the new destination has been set, since setting the
>>> destination is done from the CPU that's the current target of the
>>> interrupts.  So we can ensure the draining is done strictly after the
>>> target has been switched, hence ensuring no further interrupts from
>>> this source will be delivered to the current CPU.
>>
>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
>> the ...
>>
>>>>> --- a/xen/arch/x86/irq.c
>>>>> +++ b/xen/arch/x86/irq.c
>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>>>      }
>>>>>  
>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>>>> -        return -EAGAIN;
>>>>> +    {
>>>>> +        /*
>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>>>> +         * the in-progress movement has finished.
>>>>> +         */
>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>>>> +            return -EAGAIN;
>>>>> +
>>>>> +        /*
>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>>>> +         * ->arch.old_cpu_mask.
>>>>> +         */
>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>>>> +
>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>>>> +        {
>>>>> +            /*
>>>>> +             * Fallback to the old destination if moving is in progress and the
>>>>> +             * current destination is to be offlined.  This is only possible if
>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>>>> +             * in the 'mask' parameter.
>>>>> +             */
>>>>> +            desc->arch.vector = desc->arch.old_vector;
>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
>>
>> ... replacing of vector (and associated mask), without any further accounting.
> 
> It's quite likely I'm missing something here, but what further
> accounting you would like to do?
> 
> The current target of the interrupt (->arch.cpu_mask previous to
> cpumask_and()) is all going offline, so any attempt to set it in
> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
> set in ->arch.old_cpu_mask, which previous patches attempted to
> solve.
> 
> Maybe by "further accounting" you meant something else not related to
> ->arch.old_{cpu_mask,vector}?

Indeed. What I'm thinking of is what release_old_vec() would normally
do (of which only the desc->arch.used_vectors update would appear to be
relevant, seeing that the CPU is going offline). The other item I was
thinking of, updating vector_irq[], is likely also unnecessary, again
because that's per-CPU data of a CPU going down.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 09:02:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 09:02:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739714.1146647 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgLd-00073V-F3; Thu, 13 Jun 2024 09:02:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739714.1146647; Thu, 13 Jun 2024 09:02:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgLd-00073O-C9; Thu, 13 Jun 2024 09:02:21 +0000
Received: by outflank-mailman (input) for mailman id 739714;
 Thu, 13 Jun 2024 09:02:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p+hf=NP=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sHgLc-00073I-1y
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 09:02:20 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a2fc1e37-2963-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 11:02:18 +0200 (CEST)
Received: from [172.20.10.8] (unknown [78.209.79.104])
 by support.bugseng.com (Postfix) with ESMTPSA id 7AA344EE0756;
 Thu, 13 Jun 2024 11:02:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2fc1e37-2963-11ef-90a3-e314d9c70b13
Message-ID: <81332f64-9bd3-47bb-a6a5-adeecabf9730@bugseng.com>
Date: Thu, 13 Jun 2024 11:02:16 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] automation/eclair: extend existing deviations of
 MISRA C:2012 Rule 16.3
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <20c0779f2d749a682758defc06514772e97c9d89.1718260010.git.federico.serafini@bugseng.com>
 <cbd59b32-213b-4b5c-90fb-67906b7ae680@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <cbd59b32-213b-4b5c-90fb-67906b7ae680@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 13/06/24 10:16, Jan Beulich wrote:
> On 13.06.2024 08:38, Federico Serafini wrote:
>> Update ECLAIR configuration to deviate more cases where an
>> unintentional fallthrough cannot happen.
>>
>> Add Rule 16.3 to the monitored set and tag it as clean for arm.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>> ---
>>   
>>      * - R16.3
>> -     - Switch clauses ending with continue, goto, return statements are safe.
>> +     - Statements that change the control flow (i.e., break, continue, goto,
>> +       return) and calls to functions that do not return the control back are
>> +       \"allowed terminal statements\".
>>        - Tagged as `safe` for ECLAIR.
>>   
>>      * - R16.3
>> -     - Switch clauses ending with a call to a function that does not give
>> -       the control back (i.e., a function with attribute noreturn) are safe.
>> +     - An if-else statement having both branches ending with one of the allowed
>> +       terminal statemets is itself an allowed terminal statements.
> 
> Nit: "... terminal statements is ... terminal statement."

Thanks.

> 
>> +     - Tagged as `safe` for ECLAIR.
>> +
>> +   * - R16.3
>> +     - An if-else statement having an always true condition and the true
>> +       branch ending with an allowed terminal statement is itself an allowed
>> +       terminal statement.
>> +     - Tagged as `safe` for ECLAIR.
>> +
>> +   * - R16.3
>> +     - Switch clauses ending with a statement expression which, in turn, ends
>> +       with an allowed terminal statement are safe (e.g., the expansion of
>> +       generate_exception()).
>> +     - Tagged as `safe` for ECLAIR.
>> +
>> +   * - R16.3
>> +     - Switch clauses ending with a do-while-false which, in turn, ends with an
> 
> Maybe more precisely "the body of which"?

Will do.

> 
>> +       allowed terminal statement are safe (e.g., PARSE_ERR_RET()).
>> +       Being ASSERT_UNREACHABLE() a construct that is effective in debug builds
>> +       only, it is not considered as an allowed terminal statement, despite its
>> +       definition.
> 
> DYM despite its name? Its definition is what specifically renders it unsuitable
> for release builds.

In debug builds, ASSERT_UNREACHABLE() expands to a do-while-false
the body of which ends with __builtin_unreachable() which is a builtin
marked as "noreturn" and thus considered as one of the "allowed
terminal statements".
As a result, ASSERT_UNREACHABLE() will be considered as an
"allowed terminal statement" as well, which is something we want to
avoid.

> 
> Also I think the sentence wants to either start "ASSERT_UNREACHABLE() being a
> ..." or wants to be re-ordered to e.g. "Being a construct that is effective in
> debug builds only, ASSERT_UNREACHABLE() is not considered ..."
> 
> Jan

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 09:07:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 09:07:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739723.1146658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgQb-0007fw-1L; Thu, 13 Jun 2024 09:07:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739723.1146658; Thu, 13 Jun 2024 09:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgQa-0007fo-UH; Thu, 13 Jun 2024 09:07:28 +0000
Received: by outflank-mailman (input) for mailman id 739723;
 Thu, 13 Jun 2024 09:07:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p+hf=NP=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sHgQZ-0007fI-A7
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 09:07:27 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5aa6ba75-2964-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 11:07:26 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.79.104])
 by support.bugseng.com (Postfix) with ESMTPSA id 5EFCE4EE0756;
 Thu, 13 Jun 2024 11:07:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5aa6ba75-2964-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH] automation/eclair: add deviation for MISRA C Rule 17.7
Date: Thu, 13 Jun 2024 11:07:11 +0200
Message-Id: <a5137c812eefab7e0417670386b0fee35468504d.1718264354.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to deviate some cases where not using
the return value of a function is not dangerous.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl |  4 ++++
 docs/misra/deviations.rst                        | 11 +++++++++++
 2 files changed, 15 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661..7bae804569 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -413,6 +413,10 @@ explicit comment indicating the fallthrough intention is present."
 -config=MC3R1.R17.1,macros+={hide , "^va_(arg|start|copy|end)$"}
 -doc_end
 
+-doc_begin="Not using the return value of a function does not endanger safety if it coincides with the first actual argument."
+-config=MC3R1.R17.7,calls+={safe, "any()", "decl(name(__builtin_memcpy||__builtin_memmove||__builtin_memset||cpumask_check||strlcat||strlcpy))"}
+-doc_end
+
 #
 # Series 18.
 #
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..0bbac3cb9a 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -364,6 +364,17 @@ Deviations related to MISRA C:2012 Rules:
        by `stdarg.h`.
      - Tagged as `deliberate` for ECLAIR.
 
+   * - R17.7
+     - Not using the return value of a function does not endanger safety if it
+       coincides with the first actual argument.
+     - Tagged as `safe` for ECLAIR. Such functions are:
+         - __builtin_memcpy()
+         - __builtin_memmove()
+         - __builtin_memset()
+         - cpumask_check()
+         - strlcat()
+         - strlcpy()
+
    * - R20.4
      - The override of the keyword \"inline\" in xen/compiler.h is present so
        that section contents checks pass when the compiler chooses not to
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 09:39:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 09:39:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739735.1146669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgvX-0003X1-Dr; Thu, 13 Jun 2024 09:39:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739735.1146669; Thu, 13 Jun 2024 09:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgvX-0003Wu-9L; Thu, 13 Jun 2024 09:39:27 +0000
Received: by outflank-mailman (input) for mailman id 739735;
 Thu, 13 Jun 2024 09:39:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tbfa=NP=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHgvV-0003Wl-OA
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 09:39:25 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d077f58a-2968-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 11:39:22 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 7DE0E68B05; Thu, 13 Jun 2024 11:39:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d077f58a-2968-11ef-90a3-e314d9c70b13
Date: Thu, 13 Jun 2024 11:39:18 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
Message-ID: <20240613093918.GA27629@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-3-hch@lst.de> <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, Jun 11, 2024 at 02:51:24PM +0900, Damien Le Moal wrote:
> > +	if (sdkp->device->type == TYPE_ZBC)
> 
> Nit: use sd_is_zoned() here ?

Actually - is there much point in even keeping sd_is_zoned now that the
host-aware support is removed?  Just open-coding the type check isn't
any more code, and it is probably easier to follow.



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 09:41:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 09:41:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739742.1146677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgxn-0004vz-Nc; Thu, 13 Jun 2024 09:41:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739742.1146677; Thu, 13 Jun 2024 09:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHgxn-0004vs-L1; Thu, 13 Jun 2024 09:41:47 +0000
Received: by outflank-mailman (input) for mailman id 739742;
 Thu, 13 Jun 2024 09:41:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qa8k=NP=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sHgxm-0004vk-8y
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 09:41:46 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 258a7355-2969-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 11:41:45 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id A241F4EE0756;
 Thu, 13 Jun 2024 11:41:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 258a7355-2969-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Thu, 13 Jun 2024 11:41:44 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH 4/6] x86emul: address violations of MISRA C Rule 20.7
In-Reply-To: <c40bcf67-ebdd-4bcf-b6bc-ecec6a1fd7eb@suse.com>
References: <cover.1718117557.git.nicola.vetrini@bugseng.com>
 <0a502d2a9c5ce13be13281d9de49d263313b7852.1718117557.git.nicola.vetrini@bugseng.com>
 <12ce10af-cd36-492e-a73b-2b81b5bf60cc@suse.com>
 <ac1faf5feded028ce80752ce69983352@bugseng.com>
 <c40bcf67-ebdd-4bcf-b6bc-ecec6a1fd7eb@suse.com>
Message-ID: <05eadcff96c640c3b8aa744c4b362661@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-12 12:36, Jan Beulich wrote:
> On 12.06.2024 11:52, Nicola Vetrini wrote:
>> On 2024-06-12 11:19, Jan Beulich wrote:
>>> On 11.06.2024 17:53, Nicola Vetrini wrote:
>>>> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
>>>> of macro parameters shall be enclosed in parentheses". Therefore,
>>>> some macro definitions should gain additional parentheses to ensure
>>>> that all current and future users will be safe with respect to
>>>> expansions that can possibly alter the semantics of the passed-in
>>>> macro parameter.
>>>> 
>>>> No functional change.
>>>> 
>>>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>>>> ---
>>>> These local helpers could in principle be deviated, but the
>>>> readability and functionality are essentially unchanged by
>>>> complying with the rule, so I decided to modify the macro
>>>> definition as the preferred option.
>>> 
>>> Well, yes, but ...
>>> 
>>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>>> @@ -2255,7 +2255,7 @@ x86_emulate(
>>>>          switch ( modrm_reg & 7 )
>>>>          {
>>>>  #define GRP2(name, ext) \
>>>> -        case ext: \
>>>> +        case (ext): \
>>>>              if ( ops->rmw && dst.type == OP_MEM ) \
>>>>                  state->rmw = rmw_##name; \
>>>>              else \
>>>> @@ -8611,7 +8611,7 @@ int x86_emul_rmw(
>>>>              unsigned long dummy;
>>>> 
>>>>  #define XADD(sz, cst, mod) \
>>>> -        case sz: \
>>>> +        case (sz): \
>>>>              asm ( "" \
>>>>                    COND_LOCK(xadd) " %"#mod"[reg], %[mem]; " \
>>>>                    _POST_EFLAGS("[efl]", "[msk]", "[tmp]") \
>>> 
>>> ... this is really nitpicky of the rule / tool. What halfway
>>> realistic ways do you see to actually misuse these macros? What
>>> follows the "case" keyword is just an expression; no precedence
>>> related issues are possible.
>>> 
>> 
>> I do share the view: no real danger is possible in sensible uses.
>> Often MISRA rules are stricter than necessary to have a simple
>> formulation, by avoiding too many special cases.
>> 
>> However, if a deviation is formulated, then it needs to be maintained,
>> for no real readability benefit in this case, in my opinion. I can be
>> convinced otherwise, of course.
> 
> Well, aiui you're thinking of a per-macro deviation here. Whereas
> I'd be thinking of deviating the pattern.
> 

Might be reasonable. I'll think about it for v2.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 10:03:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 10:03:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739753.1146688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhIc-00080H-E7; Thu, 13 Jun 2024 10:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739753.1146688; Thu, 13 Jun 2024 10:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhIc-00080A-AV; Thu, 13 Jun 2024 10:03:18 +0000
Received: by outflank-mailman (input) for mailman id 739753;
 Thu, 13 Jun 2024 10:03:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHhIb-000804-ED
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 10:03:17 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 270b997f-296c-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 12:03:16 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6f0e153eddso111085966b.0
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 03:03:16 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da40easm54712366b.26.2024.06.13.03.03.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 03:03:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 270b997f-296c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718272995; x=1718877795; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=CHFu7upoFYNR81PQR/MEomtxlSSiAke3/gsmQel2lOU=;
        b=SNtHZ3Ba5pJoUlBgF3nhoc8mYFBAiGdfWMPbIPHH8yf+JEf73Qf6IMMhs8Q1X94Ebi
         WKkpIyzcD0dMBBvpfIAiekQubRRrLBlMB/GXeU2v68IrCuJTywl5wxDb1LDCrJf98JPk
         4kmzP8JXHEVjIFlYpbHczYKdbTX5S3KbZIU9Gjnl+hA2OCwdPR3PcnRNxXLe8vTHD4uN
         PUfqT8jWmT/QRd21mz6Mc1yPQ0ZfvGdcbUIKbs9UcRsWUCvZo6IVqnXcjzpDDCQSinxs
         S/NVQCx+4D2RP1FYUk4MEMEVetYGfqImQrSm42vol9IiIQRX0lA6x8dMvvdZLU7r6Dus
         g1Og==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718272995; x=1718877795;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CHFu7upoFYNR81PQR/MEomtxlSSiAke3/gsmQel2lOU=;
        b=Ix9Jxb6p/Sm29hTNHqdZd5Lr0RjTjKzjJ5Uii1lVNJ/EoG5iWzC+tX6HE5gNjPO+Fg
         M162DtDKfcN4qd0epeuJAIp1e6CvfWXor2ZtukEHHUUrON7jba2YvQeIVA+Ge+EyB07W
         5w4hHH3V2awUWoC/SZQJTmhR9lsBGZrM/tYBRwKgKQEpU7MNy2vTJsmN4e92S8Ck+yix
         mA9JbufMof6FGvEPThRmq6lc1gFLy/gjAD3IImDvTKqjlZGbH42wFIuyu9ZcvcJTxFlu
         7AZzf20uXKDuPI4AHK3BpDNpoPbA14Vj4q30CVC8mUQLXeKcdtpcpY8e1zakrSpIaPZw
         uhCg==
X-Forwarded-Encrypted: i=1; AJvYcCUZxca/6DkO93KvCARfXenCaZq/yNWxj7swTKr1WcQs0Y5o1GJ7rDWOR1nHtlxanjyBGVHNiHveUdrbFxR+Gi4zic379wi/O5QxfDeffuk=
X-Gm-Message-State: AOJu0Yzgyg+eF9XjJrL1CPwmhz928Zb1KDV6XL6o874fPkUlKeKwzG9x
	EtkUi/F1/H2U2Y/bTKNusKkhzdbqqVsgCrDhJGVpORcHHKhlz7axodsLVfEL4w==
X-Google-Smtp-Source: AGHT+IHoyZSgFwmD7LypWrbCqxd9mNYhTrWz7fYI4dqmZyGZ1sgLZ4JhjBnIult8ZC0hNQZuVGnZRw==
X-Received: by 2002:a17:906:c192:b0:a6d:b66f:7b24 with SMTP id a640c23a62f3a-a6f48028205mr269293166b.75.1718272995524;
        Thu, 13 Jun 2024 03:03:15 -0700 (PDT)
Message-ID: <1dd68917-090b-45ab-88a5-157a4afe0f6a@suse.com>
Date: Thu, 13 Jun 2024 12:03:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] automation/eclair: extend existing deviations of
 MISRA C:2012 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <20c0779f2d749a682758defc06514772e97c9d89.1718260010.git.federico.serafini@bugseng.com>
 <cbd59b32-213b-4b5c-90fb-67906b7ae680@suse.com>
 <81332f64-9bd3-47bb-a6a5-adeecabf9730@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <81332f64-9bd3-47bb-a6a5-adeecabf9730@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 11:02, Federico Serafini wrote:
> On 13/06/24 10:16, Jan Beulich wrote:
>> On 13.06.2024 08:38, Federico Serafini wrote:
>>> +   * - R16.3
>>> +     - Switch clauses ending with a do-while-false which, in turn, ends with an
>>
>> Maybe more precisely "the body of which"?
> 
> Will do.
> 
>>
>>> +       allowed terminal statement are safe (e.g., PARSE_ERR_RET()).
>>> +       Being ASSERT_UNREACHABLE() a construct that is effective in debug builds
>>> +       only, it is not considered as an allowed terminal statement, despite its
>>> +       definition.
>>
>> DYM despite its name? Its definition is what specifically renders it unsuitable
>> for release builds.
> 
> In debug builds, ASSERT_UNREACHABLE() expands to a do-while-false
> the body of which ends with __builtin_unreachable() which is a builtin
> marked as "noreturn" and thus considered as one of the "allowed
> terminal statements".
> As a result, ASSERT_UNREACHABLE() will be considered as an
> "allowed terminal statement" as well, which is something we want to
> avoid.

Hmm, then maybe add "there" at the end of the sentence, to refer back to
"debug builds"?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 10:08:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 10:08:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739758.1146698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhNz-0000Q1-Vk; Thu, 13 Jun 2024 10:08:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739758.1146698; Thu, 13 Jun 2024 10:08:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhNz-0000Pu-Sv; Thu, 13 Jun 2024 10:08:51 +0000
Received: by outflank-mailman (input) for mailman id 739758;
 Thu, 13 Jun 2024 10:08:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHhNz-0000Po-CY
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 10:08:51 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ee2a7115-296c-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 12:08:50 +0200 (CEST)
Received: by mail-lf1-x136.google.com with SMTP id
 2adb3069b0e04-52bc29c79fdso1149090e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 03:08:50 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f5952f85fsm41422066b.11.2024.06.13.03.08.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 03:08:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee2a7115-296c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718273329; x=1718878129; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=X4dmGxxpXaLEGno/tSpoArXj5CLMuZyTfgyBDtG9zfM=;
        b=NrtE4v248cp29DLiDKUu0+JsT/cfWXlzKj6rxINFyYnmAyQjXA8a8uHssHs/Q1o2Rk
         K0oVHDMPNR2b3efe4LPdV6GXft/Q+Dx8LTd4V+PzpE0JzMmwMuO1PNecYLPRf9mw+tv0
         KikqEYvhSrbTH0k4+VlROycrjmsipryt7oMeZaIPnV4iH2bZmjmpq83jZsAMDCuoqEmD
         PCH0jIlTTkOYeBUd8wsgxRk/Py8Jc0XmpnIZbzNjR+hw5D3E2E922l4j70cpS5qo4lQT
         xklzT4kOKAoNGSmGpsswhrhwFwEYT/naLsrwD6XkzzjcUtbTHefOLsGGGRmHR9hyhpVW
         LL7w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718273329; x=1718878129;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=X4dmGxxpXaLEGno/tSpoArXj5CLMuZyTfgyBDtG9zfM=;
        b=ey5eq0M5ne2PxAHpRP9QCnWVfvBgM29bB0RA3zfgftImrquaKZZgJ6D36qtOHJ3ZPJ
         6q9mZBdrsT8qPV71h3BGz0NC0PT2AEVhLTplqxq1M0t4Q34bs7gxZuuU7NZiRjIVHx2n
         kslPcCMHgi6BBq6pgHQpPm8hwizdm8feT7Gk8eiA3LguVciNidBWRaPB3jcKeEh94oNW
         Iw/5Ak9RcBSouMPVFJX1dUv9sQZX84oq6kQdLYLrkrhfLJASCe7KDOVkrjiyReXq1+JO
         p6l2vuGicpxJV6P/EObnRMr74dt9ks6rrwReUAm2+veTx3bjQ3L4qEdtCJIz3LkYIRTW
         vohw==
X-Forwarded-Encrypted: i=1; AJvYcCXEJP1W/7SnoTh5Bo8duoh3Pp3JPvs6vjSsWTsYaWzU9nA/YId762eBjmJKGj+0VrN+T0K0ex+HlylBOTowUXLBjbCC+v+7riKGkaxnv3w=
X-Gm-Message-State: AOJu0Yya4EWh1IMO9IOFEaL1lW45H3QjHRXot/ZoGSllnRrWAFojhmym
	fmkGclEOBoNJVlnF1ygOndsz5X/kNuVK+cb4+6YdkmRj8ckg0qJprqKnFXVxcQ==
X-Google-Smtp-Source: AGHT+IHJuQxWZwcDvYSvaXnCRzzSLalRxQMVVVKS5m1NW2fsbtmTFOHUGEzYNJX1VMbTyD+xpwUJyw==
X-Received: by 2002:a05:6512:10cf:b0:52c:8b03:99d1 with SMTP id 2adb3069b0e04-52c9a40301amr4760939e87.48.1718273329548;
        Thu, 13 Jun 2024 03:08:49 -0700 (PDT)
Message-ID: <55f46457-4182-4e1b-a792-e94cc6c16864@suse.com>
Date: Thu, 13 Jun 2024 12:08:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] automation/eclair: add deviation for MISRA C Rule
 17.7
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <a5137c812eefab7e0417670386b0fee35468504d.1718264354.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <a5137c812eefab7e0417670386b0fee35468504d.1718264354.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 11:07, Federico Serafini wrote:
> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -364,6 +364,17 @@ Deviations related to MISRA C:2012 Rules:
>         by `stdarg.h`.
>       - Tagged as `deliberate` for ECLAIR.
>  
> +   * - R17.7
> +     - Not using the return value of a function does not endanger safety if it
> +       coincides with the first actual argument.
> +     - Tagged as `safe` for ECLAIR. Such functions are:
> +         - __builtin_memcpy()
> +         - __builtin_memmove()
> +         - __builtin_memset()
> +         - __cpumask_check()
> +         - strlcat()
> +         - strlcpy()

These last two aren't similar to strcat/strcpy in what they return, so I'm
not convinced they should be listed here. Certainly not with the "coincides"
justification.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 10:19:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 10:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739767.1146709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhYJ-0002K6-1E; Thu, 13 Jun 2024 10:19:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739767.1146709; Thu, 13 Jun 2024 10:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhYI-0002Jz-Sb; Thu, 13 Jun 2024 10:19:30 +0000
Received: by outflank-mailman (input) for mailman id 739767;
 Thu, 13 Jun 2024 10:19:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHhYH-0002Jt-Q0
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 10:19:29 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6a28fc68-296e-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 12:19:27 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-57c73a3b3d7so730327a12.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 03:19:27 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cbb30ba30sm194754a12.39.2024.06.13.03.19.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 03:19:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a28fc68-296e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718273967; x=1718878767; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=aeLOOdXKE2aiRzZ5FVjg3Rz9YKI1cO3KwN+PP5oYw/k=;
        b=Z54FzTgm0NgTJz2Sz+Ex9sCBp+0eTZUm5i7UqXMd0upE+WuXXfqimhtaq3rqouqTmb
         aEvZcKbnh4ERTKIXCHWDm8/jf8JxKwTFMUCLwZWW4T49s+37xY/dbhnfRp3N1+3ya+Wx
         nVkULAVQsIFPz4kYyMB8iwtTyrVnl4tBX0mOk+og5pr+eMIpYoOw/7QZn6mTsBu5X6R6
         5l9npH2+lQAiko82mDp8dTcx78TQz/KUGU82usb2JqnN2ThW++9KjzaebWK27v5pgVlK
         phLgGgRt3DEna1hEzatduHL8BZv3iMJVGKB5KHFfPTF4C+LKPQHiSgqzya0stNLEtoFD
         g10Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718273967; x=1718878767;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aeLOOdXKE2aiRzZ5FVjg3Rz9YKI1cO3KwN+PP5oYw/k=;
        b=Dx5ltFQ/nhk/Bi4mtXSiM52yfMko6bdkKdSqy/kvulB71t3TDfTiKa2kCDAZ18mhXY
         VjWWW6iiYsflbTwMsjw7756AStAGzrQedsCMVE77dOMLu6LkDL34K2uKoS6sdTMxMfTo
         RAoh5jtlGt4ObM8wNwq07W+vEJlXACmuKg6o+Mj6pu+3qFpEeGEZhsvb7jaMu3axZWJt
         knskYNjnA1G2va8NJ0d/egP4arn1XKzDH+S1ymAfItS2Gh4vrgeEtjTYtUyG306lGZtV
         MZld/OEoKFwVa3rrqDiGWz7NVZ+Ie+T8JZWOCZIvLQnTHW+RK/2a2x9ZSojSwQrPQwto
         hpFw==
X-Forwarded-Encrypted: i=1; AJvYcCXCoIqyKkpGCHIk5oskkhsw0CaJZ8i9QXk4faj0qU6YeQJkYWS/YP5/AFqYbyj72kuZkTEwtZc25ZnSHtrfgEa2fE+UEyKtyOrhydJmbjc=
X-Gm-Message-State: AOJu0YxQJEdKXT+U38/V+O9/lsFCIBqdY6WX4VkyELZqGN4ul/rDuIN8
	sOMeDjmM3wfN/Ax9vzqEQUz6p9ORKHFLSlfaA78Xcv3t5tz/+kgCKEjcEeO+ew==
X-Google-Smtp-Source: AGHT+IGwjVkKC8DOibl4F0rf3MSfjAIMG11OVXGGe0Yr01K+UgqzVsxtX+t7l7PLP4oSD41AYQTxUg==
X-Received: by 2002:a50:bb64:0:b0:57c:47c3:dc62 with SMTP id 4fb4d7f45d1cf-57ca97496cfmr2736123a12.5.1718273967209;
        Thu, 13 Jun 2024 03:19:27 -0700 (PDT)
Message-ID: <bc39fe10-68a4-47c3-aa70-c2a8865ea8cd@suse.com>
Date: Thu, 13 Jun 2024 12:19:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN v1] docs/designs: Introduce IOMMU context management
To: Teddy Astie <teddy.astie@vates.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
References: <1cdd746f1af79970f8c7151d23854d33416772e0.1713876394.git.teddy.astie@vates.tech>
 <5c7c76f29726d377e3ff8a22ba2e3eb01dfa4c3b.1713876394.git.teddy.astie@vates.tech>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <5c7c76f29726d377e3ff8a22ba2e3eb01dfa4c3b.1713876394.git.teddy.astie@vates.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 23.04.2024 14:59, Teddy Astie wrote:
> --- /dev/null
> +++ b/docs/designs/iommu-contexts.md
> @@ -0,0 +1,373 @@
> +# IOMMU context management in Xen
> +
> +Status: Experimental
> +Revision: 0
> +
> +# Background
> +
> +The design for *IOMMU paravirtualization for Dom0* [1] explains that some guests may
> +want access to IOMMU features. In order to implement this in Xen, several adjustments
> +need to be made to the IOMMU subsystem.
> +
> +This "hardware IOMMU domain"

One very basic question up front, before I can even think of properly reading this
doc: Here you refer to terminology used in that other doc, yet ...

> +[1] See pv-iommu.md

... it's not clear what this actually refers to. There's no such doc in our tree,
afaics.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 10:21:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 10:21:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739771.1146718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhaQ-0003j8-CK; Thu, 13 Jun 2024 10:21:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739771.1146718; Thu, 13 Jun 2024 10:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhaQ-0003j1-8B; Thu, 13 Jun 2024 10:21:42 +0000
Received: by outflank-mailman (input) for mailman id 739771;
 Thu, 13 Jun 2024 10:21:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p+hf=NP=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sHhaO-0003iv-Pu
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 10:21:40 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b82b7156-296e-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 12:21:38 +0200 (CEST)
Received: from [172.20.10.8] (unknown [78.209.79.104])
 by support.bugseng.com (Postfix) with ESMTPSA id 6C0494EE0756;
 Thu, 13 Jun 2024 12:21:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b82b7156-296e-11ef-b4bb-af5377834399
Message-ID: <d6540755-878d-404a-af89-b006a8e7fed5@bugseng.com>
Date: Thu, 13 Jun 2024 12:21:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] automation/eclair: add deviation for MISRA C Rule
 17.7
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <a5137c812eefab7e0417670386b0fee35468504d.1718264354.git.federico.serafini@bugseng.com>
 <55f46457-4182-4e1b-a792-e94cc6c16864@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <55f46457-4182-4e1b-a792-e94cc6c16864@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 13/06/24 12:08, Jan Beulich wrote:
> On 13.06.2024 11:07, Federico Serafini wrote:
>> --- a/docs/misra/deviations.rst
>> +++ b/docs/misra/deviations.rst
>> @@ -364,6 +364,17 @@ Deviations related to MISRA C:2012 Rules:
>>          by `stdarg.h`.
>>        - Tagged as `deliberate` for ECLAIR.
>>   
>> +   * - R17.7
>> +     - Not using the return value of a function does not endanger safety if it
>> +       coincides with the first actual argument.
>> +     - Tagged as `safe` for ECLAIR. Such functions are:
>> +         - __builtin_memcpy()
>> +         - __builtin_memmove()
>> +         - __builtin_memset()
>> +         - __cpumask_check()
>> +         - strlcat()
>> +         - strlcpy()
> 
> These last two aren't similar to strcat/strcpy in what they return, so I'm
> not convinced they should be listed here. Certainly not with the "coincides"
> justification.

Right, thanks.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 10:30:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 10:30:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739781.1146727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhiN-0004iS-3H; Thu, 13 Jun 2024 10:29:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739781.1146727; Thu, 13 Jun 2024 10:29:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhiN-0004iL-0k; Thu, 13 Jun 2024 10:29:55 +0000
Received: by outflank-mailman (input) for mailman id 739781;
 Thu, 13 Jun 2024 10:29:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p+hf=NP=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sHhiL-0004iF-ND
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 10:29:53 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id de544c0f-296f-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 12:29:52 +0200 (CEST)
Received: from [172.20.10.8] (unknown [78.209.79.104])
 by support.bugseng.com (Postfix) with ESMTPSA id DBB024EE0756;
 Thu, 13 Jun 2024 12:29:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de544c0f-296f-11ef-90a3-e314d9c70b13
Message-ID: <0d9fa72c-11f1-4b34-931a-d9afbaa4b339@bugseng.com>
Date: Thu, 13 Jun 2024 12:29:49 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] automation/eclair: extend existing deviations of
 MISRA C:2012 Rule 16.3
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <20c0779f2d749a682758defc06514772e97c9d89.1718260010.git.federico.serafini@bugseng.com>
 <cbd59b32-213b-4b5c-90fb-67906b7ae680@suse.com>
 <81332f64-9bd3-47bb-a6a5-adeecabf9730@bugseng.com>
 <1dd68917-090b-45ab-88a5-157a4afe0f6a@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <1dd68917-090b-45ab-88a5-157a4afe0f6a@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 13/06/24 12:03, Jan Beulich wrote:
> On 13.06.2024 11:02, Federico Serafini wrote:
>> On 13/06/24 10:16, Jan Beulich wrote:
>>> On 13.06.2024 08:38, Federico Serafini wrote:
>>>> +   * - R16.3
>>>> +     - Switch clauses ending with a do-while-false which, in turn, ends with an
>>>
>>> Maybe more precisely "the body of which"?
>>
>> Will do.
>>
>>>
>>>> +       allowed terminal statement are safe (e.g., PARSE_ERR_RET()).
>>>> +       Since ASSERT_UNREACHABLE() is a construct that is effective in debug
>>>> +       builds only, it is not considered an allowed terminal statement,
>>>> +       despite its definition.
>>>
>>> DYM despite its name? Its definition is what specifically renders it unsuitable
>>> for release builds.
>>
>> In debug builds, ASSERT_UNREACHABLE() expands to a do-while-false
>> the body of which ends with __builtin_unreachable() which is a builtin
>> marked as "noreturn" and thus considered as one of the "allowed
>> terminal statements".
>> As a result, ASSERT_UNREACHABLE() will be considered as an
>> "allowed terminal statement" as well, which is something we want to
>> avoid.
> 
> Hmm, then maybe add "there" at the end of the sentence, to refer back to
> "debug builds"?

Alright.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 10:35:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 10:35:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739794.1146739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhni-0006Dt-Mm; Thu, 13 Jun 2024 10:35:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739794.1146739; Thu, 13 Jun 2024 10:35:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHhni-0006Dm-If; Thu, 13 Jun 2024 10:35:26 +0000
Received: by outflank-mailman (input) for mailman id 739794;
 Thu, 13 Jun 2024 10:35:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHhnh-0006Dc-Vd; Thu, 13 Jun 2024 10:35:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHhnh-0006S6-Og; Thu, 13 Jun 2024 10:35:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHhnh-00039C-FA; Thu, 13 Jun 2024 10:35:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHhnh-0003Yd-Ea; Thu, 13 Jun 2024 10:35:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=M4evUDx92359op46+jXXf75dxz3h530V14FX+CaxFPs=; b=LLARbWxWQp4Rdl9ufzcp81gCO1
	Opdmpxne6Kb1S6Z6w8wC+ZbuXgoV2AKwns5/5jYp5CWL/3rgSQLghXZqhYnMmpHB79sLOhvGWBkc2
	mOObIL0pEgskVmw/en8ik30MRbDc0qWFxCY43lxGYDv3g73ariPaclA186/a1NG0tfeA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186330-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186330: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=230d81fc3a1398e66f482e404abe9759afd9bb59
X-Osstest-Versions-That:
    libvirt=acb26f22a19f80caec6a2d3cdfd4ad49744ea968
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 10:35:25 +0000

flight 186330 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186330/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186316
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              230d81fc3a1398e66f482e404abe9759afd9bb59
baseline version:
 libvirt              acb26f22a19f80caec6a2d3cdfd4ad49744ea968

Last test of basis   186316  2024-06-12 04:20:42 Z    1 days
Testing same since   186330  2024-06-13 04:18:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   acb26f22a1..230d81fc3a  230d81fc3a1398e66f482e404abe9759afd9bb59 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 11:05:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 11:05:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739811.1146747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHiGC-00029I-Ie; Thu, 13 Jun 2024 11:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739811.1146747; Thu, 13 Jun 2024 11:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHiGC-00029B-G0; Thu, 13 Jun 2024 11:04:52 +0000
Received: by outflank-mailman (input) for mailman id 739811;
 Thu, 13 Jun 2024 11:04:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHiGB-000295-D0
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 11:04:51 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfff9c0f-2974-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 13:04:48 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6266ffdba8so98409466b.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 04:04:48 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db67fdsm60232266b.46.2024.06.13.04.04.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 04:04:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfff9c0f-2974-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718276688; x=1718881488; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=yHyBPG1v4jxtE+C9XxxXDIA0QAMppzlHeGNld6GN0qo=;
        b=Hw/z9Bi7Evt6c4mDD16s0+NmEl9DsjOm/UUVYyxr1+PaAznAWSOOuNQjrbtBNfnt89
         DNkcKQX1aAFkPVvee6Zl0s+XWKxcOnifXU1KeenudncSKemGkmU6REx/QFiPgqdNs4hW
         jX4u8Ag8qmi4JK2ufMwiJvj+bkMxGH2rPziTjxcpjcy8kaA6ZF3EjVkgq824qm+UgEsV
         oNmKlJuQpgsDaskmD/Gu6BX/VGy6mLQskyM5dFLw1203j5igCtfIDrqyOGh4hIOAMNHH
         r3GRmmnfa1AMz58ZFVONQqFZfkTDLy5vz7Ne2LZvudEoA9QOSKYFWyLZxmZPG7Q90xFy
         rOaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718276688; x=1718881488;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=yHyBPG1v4jxtE+C9XxxXDIA0QAMppzlHeGNld6GN0qo=;
        b=sFbe1dNVvfzvIwUXCTDYi2fsXCtjzZr+ZUcVnOGiIaS8pQunHtDf4U4EFfw8RjJDOP
         aTuTZViRSgGQAqOxIJyiqXPm6FghacBiMGpRwhRbvBtACByDfDn1FCPpmkT98Hujpt1s
         GqxCyjdiWRJrQrstGN1PuEtK4yr7/Q7c6zWrhiJw0BWqa2EV2lQRRDCnXCsdYEknYO3L
         QYHtp6e01HgZMeNQdGIiBNoHwcq4w4X5dgr0tvK4yH7Wc3kMwAYmDssHIGWDGs5drNIj
         LG7HRFHJM2XK4dE9qi23t/zbl4ctdSoq2Mbaf3GVN0EQPXoqQWsbFQG6iBwORrxh2ARQ
         IXFA==
X-Forwarded-Encrypted: i=1; AJvYcCV+9iEwKgC0NS472ykdRYz4rElQFe073SJQczG3dJwv0ZEChtFLgyAzdJkEaJBqdW2zywxxw9LFXp+mFVJW0Mdnk/DgrXDMBjiQemBZbWs=
X-Gm-Message-State: AOJu0Yw5iUijt0chDiO5TlMF8JBAfsdoHx4I/h0EzMtqwfOuoolS0SHq
	3KbLDOPmVW0CxA+sG75M654dJkRc7U3c09SsGKbQhYQrmLkV22ADl6KqfCtirA==
X-Google-Smtp-Source: AGHT+IEUqV6GZEUF+9Ng0UwhYMLUGOZOIUpfUOD8Lq26I6wqQOSlT3GwIL+MV3IJhdhGl+w7AzNg4g==
X-Received: by 2002:a17:906:c105:b0:a6f:489a:3a28 with SMTP id a640c23a62f3a-a6f489a3ba7mr320763966b.61.1718276687715;
        Thu, 13 Jun 2024 04:04:47 -0700 (PDT)
Message-ID: <c18dbed6-07ac-4ce6-a5e4-4a72cbac3e12@suse.com>
Date: Thu, 13 Jun 2024 13:04:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC for-4.20 v1 1/1] x86/hvm: Introduce Xen-wide ASID Allocator
To: Vaishali Thakkar <vaishali.thakkar@vates.tech>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com,
 george.dunlap@citrix.com, xen-devel@lists.xenproject.org
References: <cover.1716551380.git.vaishali.thakkar@vates.tech>
 <f15042aa7953d986b6dbd4dc1512024ba6362420.1716551380.git.vaishali.thakkar@vates.tech>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <f15042aa7953d986b6dbd4dc1512024ba6362420.1716551380.git.vaishali.thakkar@vates.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.05.2024 14:31, Vaishali Thakkar wrote:
> --- a/xen/arch/x86/hvm/asid.c
> +++ b/xen/arch/x86/hvm/asid.c
> @@ -27,8 +27,8 @@ boolean_param("asid", opt_asid_enabled);
>   * the TLB.
>   *
>   * Sketch of the Implementation:
> - *
> - * ASIDs are a CPU-local resource.  As preemption of ASIDs is not possible,
> + * TODO(vaishali): Update this comment
> + * ASIDs are Xen-wide resource.  As preemption of ASIDs is not possible,
>   * ASIDs are assigned in a round-robin scheme.  To minimize the overhead of
>   * ASID invalidation, at the time of a TLB flush,  ASIDs are tagged with a
>   * 64-bit generation.  Only on a generation overflow the code needs to
> @@ -38,20 +38,21 @@ boolean_param("asid", opt_asid_enabled);
>   * ASID useage to retain correctness.
>   */
>  
> -/* Per-CPU ASID management. */
> +/* Xen-wide ASID management */
>  struct hvm_asid_data {
> -   uint64_t core_asid_generation;
> +   uint64_t asid_generation;
>     uint32_t next_asid;
>     uint32_t max_asid;
> +   uint32_t min_asid;
>     bool disabled;
>  };
>  
> -static DEFINE_PER_CPU(struct hvm_asid_data, hvm_asid_data);
> +static struct hvm_asid_data asid_data;
>  
>  void hvm_asid_init(int nasids)
>  {
>      static int8_t g_disabled = -1;
> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
> +    struct hvm_asid_data *data = &asid_data;
>  
>      data->max_asid = nasids - 1;
>      data->disabled = !opt_asid_enabled || (nasids <= 1);
> @@ -64,67 +65,73 @@ void hvm_asid_init(int nasids)
>      }
>  
>      /* Zero indicates 'invalid generation', so we start the count at one. */
> -    data->core_asid_generation = 1;
> +    data->asid_generation = 1;
>  
>      /* Zero indicates 'ASIDs disabled', so we start the count at one. */
>      data->next_asid = 1;

Both of these want to become compile-time initializers now, I think. The
comments would move there as well, and the second one in particular looks
to need updating.
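A compile-time initializer along the lines suggested might look like the
sketch below. This is illustrative only: the field layout is taken from the
patch under review, and the exact comment wording is my assumption.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Field layout as in the patch under review. */
struct hvm_asid_data {
    uint64_t asid_generation;
    uint32_t next_asid;
    uint32_t max_asid;
    uint32_t min_asid;
    bool disabled;
};

/*
 * Compile-time initialization replacing the two assignments in
 * hvm_asid_init().  Zero means 'invalid generation' and 'no ASID
 * assigned yet' respectively, hence both counts start at one.
 */
static struct hvm_asid_data asid_data = {
    .asid_generation = 1,
    .next_asid = 1,
};
```

The remaining fields (max_asid, disabled) still depend on runtime CPUID
data and would continue to be set in hvm_asid_init().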

>  }
>  
> -void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
> +void hvm_asid_flush_domain_asid(struct hvm_domain_asid *asid)
>  {
>      write_atomic(&asid->generation, 0);
>  }
>  
> -void hvm_asid_flush_vcpu(struct vcpu *v)
> +void hvm_asid_flush_domain(struct domain *d)
>  {
> -    hvm_asid_flush_vcpu_asid(&v->arch.hvm.n1asid);
> -    hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
> +    hvm_asid_flush_domain_asid(&d->arch.hvm.n1asid);
> +    //hvm_asid_flush_domain_asid(&vcpu_nestedhvm(v).nv_n2asid);

While in Lisbon Andrew indicated not to specifically bother about the nested
case, I don't think he meant by this to outright break it. Hence imo
something like this will need taking care of (the fact that it is merely
commented out may or may not indicate that this is the plan).

>  }
>  
> -void hvm_asid_flush_core(void)
> +void hvm_asid_flush_all(void)
>  {
> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
> +    struct hvm_asid_data *data = &asid_data;
>  
> -    if ( data->disabled )
> +    if ( data->disabled)
>          return;
>  
> -    if ( likely(++data->core_asid_generation != 0) )
> +    if ( likely(++data->asid_generation != 0) )
>          return;
>  
>      /*
> -     * ASID generations are 64 bit.  Overflow of generations never happens.
> -     * For safety, we simply disable ASIDs, so correctness is established; it
> -     * only runs a bit slower.
> -     */
> +    * ASID generations are 64 bit.  Overflow of generations never happens.
> +    * For safety, we simply disable ASIDs, so correctness is established; it
> +    * only runs a bit slower.
> +    */

Please don't screw up indentation; this comment was well-formed before. What
I question is whether, with the ultimate purpose in mind, the comment actually
will continue to be correct. We can't simply disable ASIDs when we have SEV
VMs running, can we?

>      printk("HVM: ASID generation overrun. Disabling ASIDs.\n");
>      data->disabled = 1;
>  }
>  
> -bool hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
> +/* This function is called only when first vmenter happens after creating a new domain */
> +bool hvm_asid_handle_vmenter(struct hvm_domain_asid *asid)

Given what the comment says, the function likely wants a different name.

>  {
> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
> +    struct hvm_asid_data *data = &asid_data;
>  
>      /* On erratum #170 systems we must flush the TLB. 
>       * Generation overruns are taken here, too. */
>      if ( data->disabled )
>          goto disabled;
>  
> -    /* Test if VCPU has valid ASID. */
> -    if ( read_atomic(&asid->generation) == data->core_asid_generation )
> +    /* Test if domain has valid ASID. */
> +    if ( read_atomic(&asid->generation) == data->asid_generation )
>          return 0;
>  
>      /* If there are no free ASIDs, need to go to a new generation */
>      if ( unlikely(data->next_asid > data->max_asid) )
>      {
> -        hvm_asid_flush_core();
> +        // TODO(vaishali): Add a check to pick the asid from the reclaimable asids if any
> +        hvm_asid_flush_all();
>          data->next_asid = 1;
>          if ( data->disabled )
>              goto disabled;
>      }
>  
> -    /* Now guaranteed to be a free ASID. */
> -    asid->asid = data->next_asid++;
> -    write_atomic(&asid->generation, data->core_asid_generation);
> +    /* Now guaranteed to be a free ASID. Only assign a new asid if the ASID is 1 */
> +    if (asid->asid == 0)

Comment and code appear to not be matching up, which makes it difficult
to figure what is actually meant.

> +    {
> +        asid->asid = data->next_asid++;
> +    }
> +
> +    write_atomic(&asid->generation, data->asid_generation);

Is this generation thing really still relevant when a domain has an ASID
assigned once and for all? Plus, does allocation really need to happen as
late as immediately ahead of the first VM entry? Can't that be part of
normal domain creation?

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -2519,10 +2519,7 @@ static int cf_check hvmemul_tlb_op(
>  
>      case x86emul_invpcid:
>          if ( x86emul_invpcid_type(aux) != X86_INVPCID_INDIV_ADDR )
> -        {
> -            hvm_asid_flush_vcpu(current);
>              break;
> -        }

What replaces this flush? Things like this (also e.g. the change to
switch_cr3_cr4(), and there are more further down) would be good to
explain in the description, perhaps by more generally explaining how
the flushing model changes.

> --- a/xen/arch/x86/hvm/svm/asid.c
> +++ b/xen/arch/x86/hvm/svm/asid.c
> @@ -7,31 +7,35 @@
>  #include <asm/amd.h>
>  #include <asm/hvm/nestedhvm.h>
>  #include <asm/hvm/svm/svm.h>
> -
> +#include <asm/processor.h>
>  #include "svm.h"
> +#include "xen/cpumask.h"

The blank line was here on purpose; please keep it. And please never
use "" for xen/ includes. That #include also wants moving up; the
"canonical" way of arranging #include-s would be <xen/...> first, then
<asm/...>, then anything custom. Each block separated by a blank line.

> -void svm_asid_init(const struct cpuinfo_x86 *c)
> +void svm_asid_init(void)

Since this is now global initialization, can't it be __init?

>  {
> +    unsigned int cpu = smp_processor_id();
> +    const struct cpuinfo_x86 *c;
>      int nasids = 0;
>  
> -    /* Check for erratum #170, and leave ASIDs disabled if it's present. */
> -    if ( !cpu_has_amd_erratum(c, AMD_ERRATUM_170) )
> -        nasids = cpuid_ebx(0x8000000aU);
> -
> +    for_each_online_cpu( cpu ) {

Nit (style): Brace goes on its own line.

> +        c = &cpu_data[cpu];
> +        /* Check for erratum #170, and leave ASIDs disabled if it's present. */
> +        if ( !cpu_has_amd_erratum(c, AMD_ERRATUM_170) )
> +            nasids += cpuid_ebx(0x8000000aU);

Why += ? Don't you mean to establish the minimum across all CPUs? That
would be assuming there might be an asymmetry, which generally we assume
there isn't.

And if you invoke CPUID, you'll need to do so on the very CPU, not many times
in a row on the BSP.
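If asymmetry were to be catered for at all, taking the minimum rather than
the sum could be sketched as below. This is a hypothetical helper for
illustration only: in Xen proper the per-CPU CPUID leaf would have to be
read on each CPU in turn, not collected up front into an array like this.

```c
#include <assert.h>
#include <limits.h>

/*
 * Hypothetical helper: given the ASID count reported by each online
 * CPU, return the minimum, i.e. the count usable on every CPU.
 * Returns 0 (ASIDs disabled) if there are no CPUs, or if any CPU
 * reports 0 (e.g. because erratum #170 is present there).
 */
static unsigned int min_nasids(const unsigned int *nasids_per_cpu,
                               unsigned int ncpus)
{
    unsigned int min = UINT_MAX;

    if ( !ncpus )
        return 0;

    for ( unsigned int i = 0; i < ncpus; i++ )
        if ( nasids_per_cpu[i] < min )
            min = nasids_per_cpu[i];

    return min;
}
```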

> +    }
>      hvm_asid_init(nasids);
>  }
>  
>  /*
> - * Called directly before VMRUN.  Checks if the VCPU needs a new ASID,
> + * Called directly before first VMRUN.  Checks if the domain needs an ASID,
>   * assigns it, and if required, issues required TLB flushes.
>   */
>  void svm_asid_handle_vmrun(void)

Again the function name will likely want to change if this is called just
once per domain. The question then again is whether this really needs doing
as late as ahead of the VMRUN, or whether perhaps at least parts can be done
during normal domain creation.

>  {
> -    struct vcpu *curr = current;
> -    struct vmcb_struct *vmcb = curr->arch.hvm.svm.vmcb;
> -    struct hvm_vcpu_asid *p_asid =
> -        nestedhvm_vcpu_in_guestmode(curr)
> -        ? &vcpu_nestedhvm(curr).nv_n2asid : &curr->arch.hvm.n1asid;
> +    struct vcpu *v = current;

Please don't move away from naming this "curr".

> +    struct domain *d = current->domain;

Then this, if it needs to be a local variable in the first place, would
want to be "currd".

> @@ -988,8 +986,8 @@ static void noreturn cf_check svm_do_resume(void)
>          v->arch.hvm.svm.launch_core = smp_processor_id();
>          hvm_migrate_timers(v);
>          hvm_migrate_pirqs(v);
> -        /* Migrating to another ASID domain.  Request a new ASID. */
> -        hvm_asid_flush_vcpu(v);
> +        /* Migrating to another ASID domain. Request a new ASID. */
> +        hvm_asid_flush_domain(v->domain);
>      }

Is "migrating to another ASID domain" actually still possible in the new
model?

> @@ -2358,8 +2351,9 @@ static bool cf_check is_invlpg(
>  
>  static void cf_check svm_invlpg(struct vcpu *v, unsigned long linear)
>  {
> +    v = current;

???

>      /* Safe fallback. Take a new ASID. */
> -    hvm_asid_flush_vcpu(v);
> +    hvm_asid_flush_domain(v->domain);
>  }
>  
>  static bool cf_check svm_get_pending_event(
> @@ -2533,6 +2527,7 @@ void asmlinkage svm_vmexit_handler(void)
>      struct cpu_user_regs *regs = guest_cpu_user_regs();
>      uint64_t exit_reason;
>      struct vcpu *v = current;
> +    struct domain *d = current->domain;
>      struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>      int insn_len, rc;
>      vintr_t intr;
> @@ -2927,7 +2922,7 @@ void asmlinkage svm_vmexit_handler(void)
>          }
>          if ( (insn_len = svm_get_insn_len(v, INSTR_INVLPGA)) == 0 )
>              break;
> -        svm_invlpga_intercept(v, regs->rax, regs->ecx);
> +        svm_invlpga_intercept(d, regs->rax, regs->ecx);
>          __update_guest_eip(regs, insn_len);
>          break;

The function may certainly benefit from introducing "d" (or better "currd"),
but please do so uniformly then (in a separate, prereq patch). Else just use
v->domain in this one place you're changing.

> --- a/xen/arch/x86/include/asm/hvm/asid.h
> +++ b/xen/arch/x86/include/asm/hvm/asid.h
> @@ -8,25 +8,24 @@
>  #ifndef __ASM_X86_HVM_ASID_H__
>  #define __ASM_X86_HVM_ASID_H__
>  
> +struct hvm_domain;
> +struct hvm_domain_asid;

I can see the need for the latter, but why the former?

> -struct vcpu;
> -struct hvm_vcpu_asid;
> -
> -/* Initialise ASID management for the current physical CPU. */
> +/* Initialise ASID management distributed across all CPUs. */
>  void hvm_asid_init(int nasids);
>  
>  /* Invalidate a particular ASID allocation: forces re-allocation. */
> -void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid);
> +void hvm_asid_flush_domain_asid(struct hvm_domain_asid *asid);
>  
> -/* Invalidate all ASID allocations for specified VCPU: forces re-allocation. */
> -void hvm_asid_flush_vcpu(struct vcpu *v);
> +/* Invalidate all ASID allocations for specified domain */
> +void hvm_asid_flush_domain(struct domain *d);
>  
> -/* Flush all ASIDs on this processor core. */
> -void hvm_asid_flush_core(void);
> +/* Flush all ASIDs on all processor cores */
> +void hvm_asid_flush_all(void);
>  
>  /* Called before entry to guest context. Checks ASID allocation, returns a
>   * boolean indicating whether all ASIDs must be flushed. */
> -bool hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid);
> +bool hvm_asid_handle_vmenter(struct hvm_domain_asid *asid);

Much like the comment on the definition, this comment then wants adjusting,
too.

> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
> @@ -525,6 +525,7 @@ void ept_sync_domain(struct p2m_domain *p2m);
>  
>  static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
>  {
> +    struct domain *d = current->domain;

Why "current" when "v" is being passed in?

> --- a/xen/arch/x86/mm/hap/hap.c
> +++ b/xen/arch/x86/mm/hap/hap.c
> @@ -739,13 +739,13 @@ static bool cf_check flush_tlb(const unsigned long *vcpu_bitmap)
>          if ( !flush_vcpu(v, vcpu_bitmap) )
>              continue;
>  
> -        hvm_asid_flush_vcpu(v);
> -
>          cpu = read_atomic(&v->dirty_cpu);
>          if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
>              __cpumask_set_cpu(cpu, mask);
>      }
>  
> +    hvm_asid_flush_domain(d);

Hmm, that's potentially much more flushing than is needed here. There
surely wants to be a way to flush at a granularity smaller than the
entire domain. (Likely applies elsewhere as well.)

> @@ -2013,6 +2014,12 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
>          printk(XENLOG_INFO "Parked %u CPUs\n", num_parked);
>      smp_cpus_done();
>  
> +    /* Initialize xen-wide ASID handling */
> +    #ifdef CONFIG_HVM
> +    if ( cpu_has_svm )
> +        svm_asid_init();
> +    #endif

Nit (style): Pre-processor directives want to start in the leftmost column.

Overall I'm afraid I have to say that there's too much hackery and too
little description to properly reply to this and address its RFC-ness. You
also don't really raise any specific questions.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 11:08:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 11:08:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739818.1146758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHiJc-00037j-5E; Thu, 13 Jun 2024 11:08:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739818.1146758; Thu, 13 Jun 2024 11:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHiJc-00037c-1N; Thu, 13 Jun 2024 11:08:24 +0000
Received: by outflank-mailman (input) for mailman id 739818;
 Thu, 13 Jun 2024 11:08:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8X4E=NP=gmail.com=anshulmakkar@srs-se1.protection.inumbo.net>)
 id 1sHiJb-00037W-DA
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 11:08:23 +0000
Received: from mail-pj1-x1030.google.com (mail-pj1-x1030.google.com
 [2607:f8b0:4864:20::1030])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3760da0f-2975-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 13:08:09 +0200 (CEST)
Received: by mail-pj1-x1030.google.com with SMTP id
 98e67ed59e1d1-2c2e31d319eso674769a91.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 04:08:09 -0700 (PDT)
Received: from [192.168.0.112] ([203.92.56.82])
 by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c4c45f68e4sm1358862a91.28.2024.06.13.04.08.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 04:08:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3760da0f-2975-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718276888; x=1718881688; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=8b3HjmUdCtnogQCYfsgu2iuWm6pk2Lq6e9hKd+Na3VI=;
        b=Q4jBvU3Xw3n+3fs1m+Zx6fm5v6FDjWswttQVyRZbVGiQorqzn/B8hUvVB9C4myTf3q
         YbFxfwml9AoqT8EHGU4NNKvdE4ouVNwHRCk8ozKnm7/VKW21eEqpWR2SIUwlPPLs7ayj
         /Jp5ZtRVJ/KSHvbftkJZUIvYcL0ZJ81b5SBJPD3CvBK3tlyk6KKIPRZ4UEypaavSV1hW
         tKC8DGj6GUu5ZsyrAoPVgPTRa1pY5bt0nLQU46GVorqckCGsc9xfoRZzTU15Es82kqOM
         VX1x4yYD7xMFYndeeGPFl9YNmwFY1zaIOf4qNuNF6KHCxsOSShI5bS/CMSvGdI5wC8x2
         YSKw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718276888; x=1718881688;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=8b3HjmUdCtnogQCYfsgu2iuWm6pk2Lq6e9hKd+Na3VI=;
        b=PWG1or6ui8lHfxQlTDyjt0xFREykZxuBJUXyAQsy6vbj2G22uEHyg+3ETUHLra5Aqj
         R3+P2f/BL1NoDJi+7pZ3FddNWmTtljN8c4ahC8XWilnZjBBbN0zxPPmV2LCJOcaPMVs3
         PQ/4IRB6g4zyMA0pZ0xhR63MkFA9WiBGy43FcPtuacPM6oIslf7Z4Zp0gN7X71RPcyzq
         eodK1ncRkNiewbIpvIWbYMI4pCXcsJOYXAtiW/0YGR6SJfpRrnURmj0qts4BJcFmAxj1
         kpthpRd5kwXHrIKZMbf2Ny1e7ekVCI+EHkA3iTwpbXEnvhYbfRre9XeadOAm07QJyzQO
         kZ/A==
X-Forwarded-Encrypted: i=1; AJvYcCV6SltQOhDuJBL6/yiID/FAxl0cRze+xdHM5dH6126ptRKDg7uHTzflM6ApIh+vFWx6ZLf6OE2F3F/Jift+2CUbXDe11YvxyTFWG8iC64c=
X-Gm-Message-State: AOJu0Yz2p7Dxg63rSVKPf1eSsOScD6NFtSls1CSThI1HCtA2YBv1wmlL
	s60s86KlqPzGPQD5WNfh5UoyB8ZRr5X0C0dgDf0956cdNFWwG6AD
X-Google-Smtp-Source: AGHT+IFT3YpFvtCt/hHSXRxsqxC5hnjPujuLXSplVeyKDZ6WbW4gF4L8a93hdm6bGpPgS5Re9XhMdg==
X-Received: by 2002:a17:90a:2dcb:b0:2c2:da02:a2c7 with SMTP id 98e67ed59e1d1-2c4a76fa971mr4056110a91.44.1718276888097;
        Thu, 13 Jun 2024 04:08:08 -0700 (PDT)
Message-ID: <7b0d8c58-c1f7-453c-bec4-76d383880bc1@gmail.com>
Date: Thu, 13 Jun 2024 16:38:03 +0530
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN v1] docs/designs: Introduce IOMMU context management
To: Jan Beulich <jbeulich@suse.com>, Teddy Astie <teddy.astie@vates.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Rahul Singh <rahul.singh@arm.com>, xen-devel@lists.xenproject.org
References: <1cdd746f1af79970f8c7151d23854d33416772e0.1713876394.git.teddy.astie@vates.tech>
 <5c7c76f29726d377e3ff8a22ba2e3eb01dfa4c3b.1713876394.git.teddy.astie@vates.tech>
 <bc39fe10-68a4-47c3-aa70-c2a8865ea8cd@suse.com>
Content-Language: en-GB
From: anshul makkar <anshulmakkar@gmail.com>
In-Reply-To: <bc39fe10-68a4-47c3-aa70-c2a8865ea8cd@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit


On 6/13/2024 3:49 PM, Jan Beulich wrote:
> On 23.04.2024 14:59, Teddy Astie wrote:
>> --- /dev/null
>> +++ b/docs/designs/iommu-contexts.md
>> @@ -0,0 +1,373 @@
>> +# IOMMU context management in Xen
>> +
>> +Status: Experimental
>> +Revision: 0
>> +
>> +# Background
>> +
>> +The design for *IOMMU paravirtualization for Dom0* [1] explains that some guests may
>> +want to access to IOMMU features. In order to implement this in Xen, several adjustments
>> +needs to be made to the IOMMU subsystem.
>> +
>> +This "hardware IOMMU domain"
> One very basic question up front, before I can even think of properly reading this
> doc: Here you refer to terminology used in that other doc, yet ...
>
>> +[1] See pv-iommu.md
> ... it's not clear what this actually refers to. There's no such doc in our tree,
> afaics.
>
> Jan

Went through the RFC. Please can you share what use case you are
trying to solve here?

Anshul

>


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 11:31:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 11:31:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739831.1146768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHig0-0007NR-LX; Thu, 13 Jun 2024 11:31:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739831.1146768; Thu, 13 Jun 2024 11:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHig0-0007NK-Iv; Thu, 13 Jun 2024 11:31:32 +0000
Received: by outflank-mailman (input) for mailman id 739831;
 Thu, 13 Jun 2024 11:31:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHifz-0007ND-7S
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 11:31:31 +0000
Received: from mail-qk1-x72c.google.com (mail-qk1-x72c.google.com
 [2607:f8b0:4864:20::72c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7a1faebf-2978-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 13:31:30 +0200 (CEST)
Received: by mail-qk1-x72c.google.com with SMTP id
 af79cd13be357-7960454db4fso51457385a.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 04:31:29 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abc0ced8sm43985785a.77.2024.06.13.04.31.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 04:31:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a1faebf-2978-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718278289; x=1718883089; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=dUd/HmDYST0f7kMjkMimmE34MAmFTG7peoxWvdrWoY8=;
        b=I7izLf6WvV3dtg+W3GZCvEJ/CMP7QKjlwla7Jzrt1Kpe/bwMzXUJHVy6fQrpaFBP9N
         PAzOddWNoHgwO2+3lEC9EW1mZvwy5VIykXtWtRT3Qfjjdkk/A+/IUQW3qFmOzKPeOduK
         355loBZJh0U/3CEEak+hMdXbd8wL3i/r+JWIM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718278289; x=1718883089;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=dUd/HmDYST0f7kMjkMimmE34MAmFTG7peoxWvdrWoY8=;
        b=HJ2of+OWaBNzEi6rSJcSkwGsOuDBrPbUFsqZqTKeDDq9Xvn5fmecwAyHnnw5MhRbYa
         0+AyDAl7FT1imgNvtPQsOF/bleqCC11qUfMERDc2BVxbdiywWGXraH3WiMgpi0KyQs0l
         XUQMUeT+F4C6qXdYleD36jHBKRAPasWdtaEnM3kSkhwflmbgUEB7LhB+0m7mxIF0E+Lg
         SJ0N57T0SRj1vVQNav/fz8s8acDHriaenP0FxUEt55leQfjdfgO4CU7CnL5KLN3wwlf7
         B5ylTCrxWlW0wDlMOK+Q/d5LGsMTF/Fd4MZZzmbUsBqGNWapRkyJusPUW9HKECy9cLey
         Z8lQ==
X-Forwarded-Encrypted: i=1; AJvYcCUMED4Uj4WK0lgmn4cTQ4pTW6efUYq0j1mlPc9IEdeoostvh/QSFN9tPVJrelHImKMcVPR1lnhmdTpndVyBuZl1GqWAwt4k7r+6bmP6lgE=
X-Gm-Message-State: AOJu0Yxl8oWqhTWO2/JaTYn4CpT8ymaDrBRX/ex6Ew08A7hYDHGjZBwZ
	Qt6ipvDfkQwUa4kxWUYXRRjfYj6EuXigQpI8oVt47MthMRXFDCEL/s10gqO0aoM=
X-Google-Smtp-Source: AGHT+IH68Iml0VUix8Rg0+ERbBShEoK1Kz8Tth9iMAI1nQ6S2ln5PdASpeG9OUq2C2dh2pA9LG09Pw==
X-Received: by 2002:a05:620a:19a6:b0:795:5930:d0e6 with SMTP id af79cd13be357-797f600af87mr514635585a.17.1718278288306;
        Thu, 13 Jun 2024 04:31:28 -0700 (PDT)
Date: Thu, 13 Jun 2024 13:31:26 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
Message-ID: <ZmrYjv2ljhf-1Ag_@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com>
 <Zml6-ViFPTWI1cUc@macbook>
 <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com>
 <ZmnAgSBjjP6N-uJS@macbook>
 <d45ef203-aa29-4aa6-8b40-0449334a2bf0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d45ef203-aa29-4aa6-8b40-0449334a2bf0@suse.com>

On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
> On 12.06.2024 17:36, Roger Pau Monné wrote:
> > On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
> >> On 12.06.2024 12:39, Roger Pau Monné wrote:
> >>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> >>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
> >>>>> Currently there's logic in fixup_irqs() that attempts to prevent
> >>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> >>>>> interrupts from the CPUs not present in the input mask.  The current logic in
> >>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> >>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> >>>>>
> >>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> >>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> >>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> >>>>> and no remaining online CPUs in ->arch.cpu_mask.
> >>>>>
> >>>>> If _assign_irq_vector() is requested to move an interrupt in the state
> >>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> >>>>> CPUs that could be used as fallback, and if that's the case do move the
> >>>>> interrupt back to the previous destination.  Note this is easier because the
> >>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
> >>>>> vector on the destination.
> >>>>>
> >>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
> >>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> >>>>> shouldn't be possible to get into _assign_irq_vector() with
> >>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> >>>>> ->arch.old_cpu_mask.
> >>>>>
> >>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> >>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> >>>>> longer part of the affinity set,
> >>>>
> >>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> >>>>
> >>>>> move the interrupt to a different CPU part of
> >>>>> the provided mask
> >>>>
> >>>> ... this (->arch.cpu_mask related).
> >>>
> >>> No, the "provided mask" here is the "mask" parameter, not
> >>> ->arch.cpu_mask.
> >>
> >> Oh, so this describes the case of "hitting" the comment at the very bottom of
> >> the first hunk then? (I probably was misreading this because I was expecting
> >> it to describe a code change, rather than the case where original behavior
> >> needs retaining. IOW - all fine here then.)
> >>
> >>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
> >>>>> pending interrupt movement to be completed.
> >>>>
> >>>> Right, that's to clean up state from before the initial move. What isn't
> >>>> clear to me is what's to happen with the state of the intermediate
> >>>> placement. Description and code changes leave me with the impression that
> >>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> >>>> why that would be an okay thing to do.
> >>>
> >>> There isn't much we can do with the intermediate placement, as the CPU
> >>> is going offline.  However we can drain any pending interrupts from
> >>> IRR after the new destination has been set, since setting the
> >>> destination is done from the CPU that's the current target of the
> >>> interrupts.  So we can ensure the draining is done strictly after the
> >>> target has been switched, hence ensuring no further interrupts from
> >>> this source will be delivered to the current CPU.
> >>
> >> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
> >> the ...
> >>
> >>>>> --- a/xen/arch/x86/irq.c
> >>>>> +++ b/xen/arch/x86/irq.c
> >>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >>>>>      }
> >>>>>  
> >>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>>>> -        return -EAGAIN;
> >>>>> +    {
> >>>>> +        /*
> >>>>> +         * If the current destination is online refuse to shuffle.  Retry after
> >>>>> +         * the in-progress movement has finished.
> >>>>> +         */
> >>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> >>>>> +            return -EAGAIN;
> >>>>> +
> >>>>> +        /*
> >>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> >>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> >>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> >>>>> +         * ->arch.old_cpu_mask.
> >>>>> +         */
> >>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> >>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> >>>>> +
> >>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> >>>>> +        {
> >>>>> +            /*
> >>>>> +             * Fallback to the old destination if moving is in progress and the
> >>>>> +             * current destination is to be offlined.  This is only possible if
> >>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> >>>>> +             * in the 'mask' parameter.
> >>>>> +             */
> >>>>> +            desc->arch.vector = desc->arch.old_vector;
> >>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> >>
> >> ... replacing of vector (and associated mask), without any further accounting.
> > 
> > It's quite likely I'm missing something here, but what further
> > accounting you would like to do?
> > 
> > The current target of the interrupt (->arch.cpu_mask previous to
> > cpumask_and()) is all going offline, so any attempt to set it in
> > ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
> > set in ->arch.old_cpu_mask, which previous patches attempted to
> > solve.
> > 
> > Maybe by "further accounting" you meant something else not related to
> > ->arch.old_{cpu_mask,vector}?
> 
> Indeed. What I'm thinking of is what normally release_old_vec() would
> do (of which only desc->arch.used_vectors updating would appear to be
> relevant, seeing the CPU's going offline). The other one I was thinking
> of, updating vector_irq[], likely is also unnecessary, again because
> that's per-CPU data of a CPU going down.

I think updating vector_irq[] should be explicitly avoided, as doing
so would prevent us from correctly draining any pending interrupts
because the vector -> irq mapping would be broken when the interrupt
enable window at the bottom of fixup_irqs() is reached.

For used_vectors: we might clear it.  I'm a bit worried however that at
some point we insert a check in the do_IRQ() path that ensures
vector_irq[] is in line with desc->arch.used_vectors, which would fail
for interrupts drained at the bottom of fixup_irqs().  Let me attempt
to clear the currently used vector from ->arch.used_vectors.
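
The fallback path under discussion can be sketched as follows (a minimal model: a 32-bit word stands in for Xen's cpumask_t, plain bit operations replace the cpumask helpers, and the function name is hypothetical; this is not the actual Xen code):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-in: each bit of a 32-bit word models one CPU's
 * membership in a cpumask_t (real masks are larger). */
typedef uint32_t cpumask_t;

struct irq_state {
    cpumask_t cpu_mask;      /* current destination CPUs */
    cpumask_t old_cpu_mask;  /* previous destination CPUs */
    int vector;
    int old_vector;
};

/*
 * Sketch of the logic added to _assign_irq_vector() when a move is
 * pending: returns -1 for the -EAGAIN case (current destination still
 * online), 0 when the fallback to the old vector was taken, and 1 when
 * the caller must allocate a fresh vector from 'mask'.
 */
static int fallback_or_retry(struct irq_state *d, cpumask_t mask,
                             cpumask_t online)
{
    if ( d->cpu_mask & online )
        return -1;                    /* destination still online: retry */

    assert(d->old_cpu_mask & online); /* guaranteed by fixup_irqs() */

    if ( d->old_cpu_mask & mask )
    {
        d->vector = d->old_vector;    /* old vector not yet released */
        d->cpu_mask = d->old_cpu_mask & mask;
        return 0;
    }
    return 1;                         /* allocate a new vector instead */
}
```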

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 11:31:56 2024
Message-ID: <502d0284-1ca2-478c-b4f4-fda5caff3723@xen.org>
Date: Thu, 13 Jun 2024 12:31:51 +0100
Subject: Re: [PATCH v4 0/7] Static shared memory followup v2 - pt2
To: Michal Orzel <michal.orzel@amd.com>, Luca Fancellu
 <Luca.Fancellu@arm.com>, Xen-devel <xen-devel@lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240524124055.3871399-1-luca.fancellu@arm.com>
 <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
 <66e5d584-b326-4197-81d5-ec2b8233a3fa@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <66e5d584-b326-4197-81d5-ec2b8233a3fa@amd.com>

Hi,

On 11/06/2024 13:42, Michal Orzel wrote:
>> We would like this series to be in Xen 4.19.  There was a misunderstanding on our side: we thought
>> that since the series was sent before the last posting date, it could be a candidate for merging in the
>> new release.  After speaking with Julien and Oleksii we are now aware that we need to provide a
>> justification for its presence.
>>
>> Pros: this series closes the circle for static shared memory, allowing it to use memory from
>> the host or from Xen.  It is also a feature that is not enabled by default, so it should not cause too much
>> disruption in case of any bugs that escaped review.  We have also tested many configurations
>> with and without the feature enabled, if that can be of additional value.
>>
>> Cons: we are touching some common code related to p2m, but even there the impact should be minimal because
>> the new code is subject to l2 foreign mapping (to be confirmed maybe by a p2m expert like Julien).
>>
>> The comments on patch 3 of this series are addressed by this patch:
>> https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fancellu@arm.com/
>> And the series is fully reviewed.
>>
>> So our request is to allow this series into 4.19.  Oleksii, ARM maintainers, do you agree?
> As the main reviewer of this series I'm ok to have it in. It is nicely encapsulated and the feature itself
> is still in an unsupported state. I don't foresee any issues with it.

There are changes in the p2m code and the memory allocation for boot 
domains. So is it really encapsulated?

For me there are two risks:
  * p2m (already mentioned by Luca): we modify the code that puts foreign 
mappings in place. The worst that can happen is that we fail to release a 
foreign mapping, which would mean the page is never freed. AFAIK, we don't 
exercise this path in the CI.
  * domain allocation: this mainly looks like refactoring, and the path 
is exercised in the CI.

So I am not concerned about the domain allocation one. @Luca, would it be 
possible to detail how you tested that the foreign pages were properly removed?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 11:37:05 2024
Message-ID: <f4c51152-e8a0-4715-af75-4b8c7801fa08@suse.com>
Date: Thu, 13 Jun 2024 13:36:55 +0200
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com> <Zml6-ViFPTWI1cUc@macbook>
 <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com> <ZmnAgSBjjP6N-uJS@macbook>
 <d45ef203-aa29-4aa6-8b40-0449334a2bf0@suse.com> <ZmrYjv2ljhf-1Ag_@macbook>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZmrYjv2ljhf-1Ag_@macbook>

On 13.06.2024 13:31, Roger Pau Monné wrote:
> On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
>> On 12.06.2024 17:36, Roger Pau Monné wrote:
>>> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
>>>> On 12.06.2024 12:39, Roger Pau Monné wrote:
>>>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>>>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>>>>>
>>>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>>>>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>>>>>
>>>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>>>>>> CPUs that could be used as fallback, and if that's the case do move the
>>>>>>> interrupt back to the previous destination.  Note this is easier because the
>>>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>>>>>> vector on the destination.
>>>>>>>
>>>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>>>>>> shouldn't be possible to get into _assign_irq_vector() with
>>>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>>>>>> ->arch.old_cpu_mask.
>>>>>>>
>>>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>>>>>> longer part of the affinity set,
>>>>>>
>>>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>>>>>
>>>>>>> move the interrupt to a different CPU part of
>>>>>>> the provided mask
>>>>>>
>>>>>> ... this (->arch.cpu_mask related).
>>>>>
>>>>> No, the "provided mask" here is the "mask" parameter, not
>>>>> ->arch.cpu_mask.
>>>>
>>>> Oh, so this describes the case of "hitting" the comment at the very bottom of
>>>> the first hunk then? (I probably was misreading this because I was expecting
>>>> it to describe a code change, rather than the case where original behavior
>>>> needs retaining. IOW - all fine here then.)
>>>>
>>>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>>>>>> pending interrupt movement to be completed.
>>>>>>
>>>>>> Right, that's to clean up state from before the initial move. What isn't
>>>>>> clear to me is what's to happen with the state of the intermediate
>>>>>> placement. Description and code changes leave me with the impression that
>>>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>>>>>> why that would be an okay thing to do.
>>>>>
>>>>> There isn't much we can do with the intermediate placement, as the CPU
>>>>> is going offline.  However we can drain any pending interrupts from
>>>>> IRR after the new destination has been set, since setting the
>>>>> destination is done from the CPU that's the current target of the
>>>>> interrupts.  So we can ensure the draining is done strictly after the
>>>>> target has been switched, hence ensuring no further interrupts from
>>>>> this source will be delivered to the current CPU.
>>>>
>>>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
>>>> the ...
>>>>
>>>>>>> --- a/xen/arch/x86/irq.c
>>>>>>> +++ b/xen/arch/x86/irq.c
>>>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>>>>>      }
>>>>>>>  
>>>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>>>>>> -        return -EAGAIN;
>>>>>>> +    {
>>>>>>> +        /*
>>>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>>>>>> +         * the in-progress movement has finished.
>>>>>>> +         */
>>>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>>>>>> +            return -EAGAIN;
>>>>>>> +
>>>>>>> +        /*
>>>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>>>>>> +         * ->arch.old_cpu_mask.
>>>>>>> +         */
>>>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>>>>>> +
>>>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>>>>>> +        {
>>>>>>> +            /*
>>>>>>> +             * Fallback to the old destination if moving is in progress and the
>>>>>>> +             * current destination is to be offlined.  This is only possible if
>>>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>>>>>> +             * in the 'mask' parameter.
>>>>>>> +             */
>>>>>>> +            desc->arch.vector = desc->arch.old_vector;
>>>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
>>>>
>>>> ... replacing of vector (and associated mask), without any further accounting.
>>>
>>> It's quite likely I'm missing something here, but what further
>>> accounting you would like to do?
>>>
>>> The current target of the interrupt (->arch.cpu_mask previous to
>>> cpumask_and()) is all going offline, so any attempt to set it in
>>> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
>>> set in ->arch.old_cpu_mask, which previous patches attempted to
>>> solve.
>>>
>>> Maybe by "further accounting" you meant something else not related to
>>> ->arch.old_{cpu_mask,vector}?
>>
>> Indeed. What I'm thinking of is what normally release_old_vec() would
>> do (of which only desc->arch.used_vectors updating would appear to be
>> relevant, seeing the CPU's going offline). The other one I was thinking
>> of, updating vector_irq[], likely is also unnecessary, again because
>> that's per-CPU data of a CPU going down.
> 
> I think updating vector_irq[] should be explicitly avoided, as doing
> so would prevent us from correctly draining any pending interrupts
> because the vector -> irq mapping would be broken when the interrupt
> enable window at the bottom of fixup_irqs() is reached.
> 
> For used_vectors: we might clear it.  I'm a bit worried however that at
> some point we insert a check in the do_IRQ() path that ensures
> vector_irq[] is in line with desc->arch.used_vectors, which would fail
> for interrupts drained at the bottom of fixup_irqs().  Let me attempt
> to clear the currently used vector from ->arch.used_vectors.

Just to clarify: It may well be that for draining the bit can't be cleared
right here. But it then still needs clearing _somewhere_, or else we
risk ending up with inconsistent state (e.g. triggering an assertion
later on) or leaking vectors. My problem here was that I couldn't
locate any such "somewhere", and the commentary didn't point me
anywhere either.
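
A toy model of the leak in question (the bitmap type and helper below are simplified, hypothetical stand-ins, not Xen's actual vmask_t handling): if the abandoned vector's bit stays set in the per-IRQ used-vector bitmap, that vector can never be handed out for this IRQ again.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical simplification: one 32-bit word models a per-IRQ
 * used-vector bitmap (the real desc->arch.used_vectors is larger). */
typedef uint32_t used_vectors_t;

/* Clearing the abandoned vector's bit is what keeps the vector
 * reusable; omitting this is the "leak" referred to above.
 * Returns whether the bit was actually set. */
static bool release_vector_bit(used_vectors_t *used, unsigned int vec)
{
    bool was_set = (*used >> vec) & 1u;

    *used &= ~(1u << vec);
    return was_set;
}
```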

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 11:50:53 2024
Message-ID: <c1f92d1f-0934-4603-b3b8-a77402643f22@bugseng.com>
Date: Thu, 13 Jun 2024 13:50:39 +0200
Subject: Re: [XEN PATCH] automation/eclair: add deviation for MISRA C Rule
 17.7
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <a5137c812eefab7e0417670386b0fee35468504d.1718264354.git.federico.serafini@bugseng.com>
 <55f46457-4182-4e1b-a792-e94cc6c16864@suse.com>
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <55f46457-4182-4e1b-a792-e94cc6c16864@suse.com>

On 13/06/24 12:08, Jan Beulich wrote:
> On 13.06.2024 11:07, Federico Serafini wrote:
>> --- a/docs/misra/deviations.rst
>> +++ b/docs/misra/deviations.rst
>> @@ -364,6 +364,17 @@ Deviations related to MISRA C:2012 Rules:
>>          by `stdarg.h`.
>>        - Tagged as `deliberate` for ECLAIR.
>>   
>> +   * - R17.7
>> +     - Not using the return value of a function does not endanger safety if it
>> +       coincides with the first actual argument.
>> +     - Tagged as `safe` for ECLAIR. Such functions are:
>> +         - __builtin_memcpy()
>> +         - __builtin_memmove()
>> +         - __builtin_memset()
>> +         - __cpumask_check()
>> +         - strlcat()
>> +         - strlcpy()
> 
> These last two aren't similar to strcat/strcpy in what they return, so I'm
> not convinced they should be listed here. Certainly not with the "coincides"
> justification.

Thanks to violations of Rule 17.7 I noticed that safe_strcpy()
and safe_strcat() are used without checking the return value.
Is this intentional?

[1]
https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/665/PROJECT.ecd;/by_service/MC3R1.R17.7.html#{"select":true,"selection":{"hiddenAreaKinds":[],"hiddenSubareaKinds":[],"show":true,"selector":{"enabled":true,"negated":false,"kind":2,"children":[{"enabled":true,"negated":false,"kind":0,"domain":"message","inputs":[{"enabled":true,"text":"^.*safe_strcpy"}]}]}}}

[2]
https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/665/PROJECT.ecd;/sources/xen/arch/x86/setup.c.html#R5021_1{"select":true,"selection":{"hiddenAreaKinds":[],"hiddenSubareaKinds":[],"show":true,"selector":{"enabled":true,"negated":false,"kind":2,"children":[{"enabled":true,"negated":false,"kind":0,"domain":"message","inputs":[{"enabled":true,"text":"^.*safe_strcat"}]}]}}}
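
For the mem* builtins in the list the rationale holds by construction: each returns its first argument, so discarding the return value loses nothing the caller does not already have. A minimal standard-C illustration (independent of the Xen tree):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* memset(), memcpy() and memmove() all return their first argument
 * (per the C standard), so a caller that discards the return value
 * retains all the information: it already holds the destination
 * pointer it passed in. */
static bool mem_builtins_return_dest(void)
{
    char dst[8];

    return memset(dst, 0, sizeof(dst)) == dst
        && memcpy(dst, "xen", 4) == dst
        && memmove(dst + 1, dst, 4) == dst + 1;
}
```

Note this reasoning deliberately does not extend to strlcpy()/strlcat(), whose return values are sizes rather than the first argument, matching the objection above.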

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:03:29 2024
X-Received: by 2002:a17:906:f59f:b0:a6f:1bf8:658d with SMTP id a640c23a62f3a-a6f524742b2mr196888266b.37.1718280199244;
        Thu, 13 Jun 2024 05:03:19 -0700 (PDT)
Message-ID: <4a49fe9b-66fd-4a32-ad01-14ed4c5fc34c@suse.com>
Date: Thu, 13 Jun 2024 14:03:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v6 6/9] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1718038855.git.w1benny@gmail.com>
 <fee20e24a94cb29dea81631a6b775933d1151da4.1718038855.git.w1benny@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <fee20e24a94cb29dea81631a6b775933d1151da4.1718038855.git.w1benny@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 10.06.2024 19:10, Petr Beneš wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -724,16 +724,42 @@ int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
>          return -EINVAL;
>      }
> 
> -    if ( altp2m_mode && nested_virt )
> +    if ( altp2m_mode )
>      {
> -        dprintk(XENLOG_INFO,
> -                "Nested virt and altp2m are not supported together\n");
> -        return -EINVAL;
> -    }
> +        if ( nested_virt )
> +        {
> +            dprintk(XENLOG_INFO,
> +                    "Nested virt and altp2m are not supported together\n");
> +            return -EINVAL;
> +        }
> +
> +        if ( !hap )
> +        {
> +            dprintk(XENLOG_INFO, "altp2m is only supported with HAP\n");
> +            return -EINVAL;
> +        }
> +
> +        if ( !hvm_altp2m_supported() )
> +        {
> +            dprintk(XENLOG_INFO, "altp2m is not supported\n");
> +            return -EINVAL;
> +        }

Wouldn't this better be first in the group?

> @@ -510,13 +526,13 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>      mfn_t mfn;
>      int rc = -EINVAL;
> 
> -    if ( idx >=  min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
> +    if ( idx >= d->nr_altp2m ||
>           d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==

This ends up being suspicious: The range check is against a value different
from what is passed to array_index_nospec(). The two weren't the same
before either, but there the range check was more strict (which now isn't
visible anymore, even though I think it would still be true). Imo this
wants a comment, or an assertion effectively taking the place of a comment.
(I actually wonder whether we really [still] need to allocate a full page
for d->arch.altp2m_eptp.)

> @@ -659,12 +675,13 @@ int p2m_set_suppress_ve_multi(struct domain *d,
> 
>      if ( sve->view > 0 )
>      {
> -        if ( sve->view >= min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
> +        if ( sve->view >= d->nr_altp2m ||
>               d->arch.altp2m_eptp[array_index_nospec(sve->view, MAX_EPTP)] ==
>               mfn_x(INVALID_MFN) )
>              return -EINVAL;

Same again here and at least twice more further down, and yet more of those
elsewhere. Since they're all "is this slot populated" checks, maybe we want
an is_altp2m_eptp_valid() helper?
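For illustration, such a helper might be shaped as follows — a sketch only, with stand-in types and constants in place of the real Xen ones (the real version would keep the array_index_nospec() hardening when indexing):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-ins for the real Xen definitions, for illustration only. */
#define MAX_EPTP    512
#define INVALID_MFN UINT64_MAX

struct domain {
    unsigned int nr_altp2m;         /* configured number of altp2m views */
    uint64_t altp2m_eptp[MAX_EPTP]; /* per-view EPTP slots */
};

/*
 * Combined "index in range and slot populated" check, as suggested in
 * the review.  A real implementation would index the array via
 * array_index_nospec(idx, MAX_EPTP) to retain the speculation barrier.
 */
static bool is_altp2m_eptp_valid(const struct domain *d, unsigned int idx)
{
    return idx < d->nr_altp2m &&
           d->altp2m_eptp[idx] != INVALID_MFN;
}
```

Each of the open-coded checks would then collapse to something like
`if ( !is_altp2m_eptp_valid(d, idx) ) return -EINVAL;`.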

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -103,7 +103,10 @@ struct xen_domctl_createdomain {
>  /* Altp2m mode signaling uses bits [0, 1]. */
>  #define XEN_DOMCTL_ALTP2M_mode_mask  (0x3U)
>  #define XEN_DOMCTL_ALTP2M_mode(m)    ((m) & XEN_DOMCTL_ALTP2M_mode_mask)
> -        uint32_t opts;
> +        uint16_t opts;
> +
> +        /* Number of altp2ms to allocate. */
> +        uint16_t nr;
>      } altp2m;

Nit: I wouldn't say "allocate" here, but "permit" or "support" or some such.
Whether any form of per-altp2m allocation is necessary is an implementation
detail.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:07:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:07:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739872.1146817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjEZ-0005zu-Dd; Thu, 13 Jun 2024 12:07:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739872.1146817; Thu, 13 Jun 2024 12:07:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjEZ-0005zn-B7; Thu, 13 Jun 2024 12:07:15 +0000
Received: by outflank-mailman (input) for mailman id 739872;
 Thu, 13 Jun 2024 12:07:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHjEY-0005zh-1K
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:07:14 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7733bbef-297d-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 14:07:12 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a6f04afcce1so127265866b.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 05:07:12 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f41721sm65334466b.154.2024.06.13.05.07.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 05:07:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7733bbef-297d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718280431; x=1718885231; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=tXvak5OJKqvKSELC6b06Z8QGY6YuES4yko3UV3ZxhZE=;
        b=EPcihI5UqRyVl5WolnqRpKoMSv77xtbGXp/5JvxpG1jEzyqJOAPpMAUZiwWox1EHs4
         simKblHxjkRLZ9n8RN35m3JoqNR43aSQX5Jx/L1wdioEd/kFFaKVDj+r4NUODVr8bYEZ
         x2KWnSGbYXut+sx6rSooS7/RNTur//HiGwpW+ssxN1GwrJk6cdmeFusgPcZaiRqeAXLn
         0wm261DBlmk2Qk1a1587rQc4esN4R1IiCsK6CI652v40DLulMtzuh3vD6rnb1RlUDKsg
         N3qxCSPs6Vq95X5SjS+SH8Xg+HpgEG5Y4t0s/XmkDD2YlC9g/BLtfE8wjb96qqNeOlnZ
         XiSg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718280431; x=1718885231;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tXvak5OJKqvKSELC6b06Z8QGY6YuES4yko3UV3ZxhZE=;
        b=cAbDNIxrj164rGOn70/mtARayGDuzSD7zq3N7J8Twjts8fL3Tg5S/DFK0G2uGC3khG
         qIjq1T8dVG+2vM7Sc5vRlTP8Nu+KO5W3/xAX7b4Lwdq3NEskR4Pq5ivcOmhnga1wG/Zt
         tjcK2fChBjgO4ztKCmPJq/Iwcba6Va1neNPUG796qyzRLYc/w2RwPOh+q7STGER/D2iD
         ZP422HdgW88WFW5jWhVnTbeN5oX7i9FvZJeAoqeudr0IgfRV/hIl4BkLVFiwnKB5wULM
         ld+SZ5MKcE3eR3gGPSef7773bd5fpMR1Cc/1NfNbSNEgMmffBkH6WQge/z+4s7tBxTGF
         IIfA==
X-Forwarded-Encrypted: i=1; AJvYcCV5j6aiWJz5OOuZYz6+R06AUfnVWJvxUn0rbQUBd6fMuEjnBdVBt5tF/85jMr6qd3Zua6xHE3RmsZvpBwTVTHQjjaF9HVTOstfjKKcYquQ=
X-Gm-Message-State: AOJu0Yyp4tPQDBHg39FhE157TpUnbO+z2GgrtoiQVqyKvzNAGqJSicwh
	xEFCxpPqT8aTKbsrvtXUCDK99+JsF935C9cRFSpY7Ry1M/JlBAkHiEINs1YI1w==
X-Google-Smtp-Source: AGHT+IGl3UsvU7WX/OH4GGzmvBcLOXgG7noV1Z3uNCCDxU9oJRwI09foBu2AgLoNP3hUiNzY1v1yVA==
X-Received: by 2002:a17:906:c193:b0:a6f:1004:dc30 with SMTP id a640c23a62f3a-a6f48019244mr342647966b.65.1718280431455;
        Thu, 13 Jun 2024 05:07:11 -0700 (PDT)
Message-ID: <0c523073-e8fa-4bb8-81fb-e8c3d2c1d9d3@suse.com>
Date: Thu, 13 Jun 2024 14:07:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] automation/eclair: add deviation for MISRA C Rule
 17.7
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <a5137c812eefab7e0417670386b0fee35468504d.1718264354.git.federico.serafini@bugseng.com>
 <55f46457-4182-4e1b-a792-e94cc6c16864@suse.com>
 <c1f92d1f-0934-4603-b3b8-a77402643f22@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <c1f92d1f-0934-4603-b3b8-a77402643f22@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 13:50, Federico Serafini wrote:
> On 13/06/24 12:08, Jan Beulich wrote:
>> On 13.06.2024 11:07, Federico Serafini wrote:
>>> --- a/docs/misra/deviations.rst
>>> +++ b/docs/misra/deviations.rst
>>> @@ -364,6 +364,17 @@ Deviations related to MISRA C:2012 Rules:
>>>          by `stdarg.h`.
>>>        - Tagged as `deliberate` for ECLAIR.
>>>   
>>> +   * - R17.7
>>> +     - Not using the return value of a function do not endanger safety if it
>>> +       coincides with the first actual argument.
>>> +     - Tagged as `safe` for ECLAIR. Such functions are:
>>> +         - __builtin_memcpy()
>>> +         - __builtin_memmove()
>>> +         - __builtin_memset()
>>> +         - __cpumask_check()
>>> +         - strlcat()
>>> +         - strlcpy()
>>
>> These last two aren't similar to strcat/strcpy in what they return, so I'm
>> not convinced they should be listed here. Certainly not with the "coincides"
>> justification.
> 
> Thanks to violations of Rule 17.7 I noticed that safe_strcpy()
> and safe_strcat() are used without checking the return value.
> Is this intentional?

I expect that's case by case judgement. The main thing for them is to make
sure the destination buffer isn't overrun. There may be callers which can
live with possible truncation, there may be other callers which guarantee
a suitably sized buffer, and there may also be callers which actually ought
to check.
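For reference, with BSD-style strlcpy() semantics (return value is the length of the source string), a caller that cannot tolerate truncation checks the return value against the buffer size. The sketch below uses a hypothetical local my_strlcpy(), not Xen's actual implementation:

```c
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical strlcpy with BSD semantics: copies at most size - 1
 * bytes, always NUL-terminates (when size > 0), and returns
 * strlen(src) so the caller can detect truncation.
 */
static size_t my_strlcpy(char *dst, const char *src, size_t size)
{
    size_t len = strlen(src);

    if ( size )
    {
        size_t copy = len >= size ? size - 1 : len;

        memcpy(dst, src, copy);
        dst[copy] = '\0';
    }

    return len;
}

/* A caller which must not truncate checks the return value: */
static int copy_name(char *dst, size_t size, const char *src)
{
    if ( my_strlcpy(dst, src, size) >= size )
        return -1; /* truncated */

    return 0;
}
```

Callers which can live with truncation simply ignore the return value — which is what makes checking it a case-by-case judgement.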

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:15:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:15:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739879.1146828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjM0-0007vg-69; Thu, 13 Jun 2024 12:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739879.1146828; Thu, 13 Jun 2024 12:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjM0-0007vZ-2M; Thu, 13 Jun 2024 12:14:56 +0000
Received: by outflank-mailman (input) for mailman id 739879;
 Thu, 13 Jun 2024 12:14:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vNde=NP=suse.de=tiwai@srs-se1.protection.inumbo.net>)
 id 1sHjLy-0007vT-3U
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:14:54 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de
 [2a07:de40:b251:101:10:150:64:1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8948a2f6-297e-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 14:14:53 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 42B9C353C5;
 Thu, 13 Jun 2024 12:14:51 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 112BD13A7F;
 Thu, 13 Jun 2024 12:14:51 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 0PcLA7viambcIwAAD6G6ig
 (envelope-from <tiwai@suse.de>); Thu, 13 Jun 2024 12:14:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8948a2f6-297e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718280891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=XYJJaDAiE8Jnnmi5U+ASAdIZI7iCR9QVodqnRXuBBqg=;
	b=gwOmERsCN9Jisej0D2mWfR0CF+yYvljzggdkpfJj7qVd+VT2vQLVdGERnIaT5PubIfvJaA
	4+efjBLW+2xKCPfQcc/KwR7VYsq3/I9Co9iGvsvyensynFWmfISCuu79w9v7NB8u16GPzW
	Gc+c//495/7gzJkPPIP8ltmP8bvg+0E=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718280891;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=XYJJaDAiE8Jnnmi5U+ASAdIZI7iCR9QVodqnRXuBBqg=;
	b=LNtBgVTjWRTqpp2XQ2vE+B3wRBXOSf2UaZHbBQZLjh+X35P/f4BcmZ11zvnrlim0rN2m+C
	B9/+VIQ7Dajm98BQ==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718280891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=XYJJaDAiE8Jnnmi5U+ASAdIZI7iCR9QVodqnRXuBBqg=;
	b=gwOmERsCN9Jisej0D2mWfR0CF+yYvljzggdkpfJj7qVd+VT2vQLVdGERnIaT5PubIfvJaA
	4+efjBLW+2xKCPfQcc/KwR7VYsq3/I9Co9iGvsvyensynFWmfISCuu79w9v7NB8u16GPzW
	Gc+c//495/7gzJkPPIP8ltmP8bvg+0E=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718280891;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=XYJJaDAiE8Jnnmi5U+ASAdIZI7iCR9QVodqnRXuBBqg=;
	b=LNtBgVTjWRTqpp2XQ2vE+B3wRBXOSf2UaZHbBQZLjh+X35P/f4BcmZ11zvnrlim0rN2m+C
	B9/+VIQ7Dajm98BQ==
Date: Thu, 13 Jun 2024 14:15:14 +0200
Message-ID: <87bk45ninh.wl-tiwai@suse.de>
From: Takashi Iwai <tiwai@suse.de>
To: linux@treblig.org
Cc: oleksandr_andrushchenko@epam.com,
	perex@perex.cz,
	tiwai@suse.com,
	xen-devel@lists.xenproject.org,
	linux-sound@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] ALSA: xen-front: remove unused struct 'alsa_sndif_hw_param'
In-Reply-To: <20240601232604.198662-1-linux@treblig.org>
References: <20240601232604.198662-1-linux@treblig.org>
User-Agent: Wanderlust/2.15.9 (Almost Unreal) Emacs/27.2 Mule/6.0
MIME-Version: 1.0 (generated by SEMI-EPG 1.14.7 - "Harue")
Content-Type: text/plain; charset=US-ASCII
X-Spam-Score: -3.25
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-3.25 / 50.00];
	BAYES_HAM(-2.95)[99.79%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	MID_CONTAINS_FROM(1.00)[];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	RCVD_TLS_ALL(0.00)[];
	RCPT_COUNT_SEVEN(0.00)[7];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	FROM_HAS_DN(0.00)[];
	MIME_TRACE(0.00)[0:+];
	FROM_EQ_ENVFROM(0.00)[];
	TO_DN_NONE(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo]

On Sun, 02 Jun 2024 01:26:04 +0200,
linux@treblig.org wrote:
> 
> From: "Dr. David Alan Gilbert" <linux@treblig.org>
> 
> 'alsa_sndif_hw_param' has been unused since the original
> commit 1cee559351a7 ("ALSA: xen-front: Implement ALSA virtual sound
> driver").
> 
> Remove it.
> 
> Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org>

Thanks, applied.


Takashi


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:16:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739885.1146844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNp-00006C-TB; Thu, 13 Jun 2024 12:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739885.1146844; Thu, 13 Jun 2024 12:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNp-0008Vl-Pa; Thu, 13 Jun 2024 12:16:49 +0000
Received: by outflank-mailman (input) for mailman id 739885;
 Thu, 13 Jun 2024 12:16:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4a53=NP=bounce.vates.tech=bounce-md_30504962.666ae32d.v1-6ca278561c174d59a5f3ec6b9390c276@srs-se1.protection.inumbo.net>)
 id 1sHjNo-0008Rn-Gu
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:16:48 +0000
Received: from mail187-11.suw11.mandrillapp.com
 (mail187-11.suw11.mandrillapp.com [198.2.187.11])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd16be1f-297e-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 14:16:46 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-11.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W0LxT0WFGzLfHVPn
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 12:16:45 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 6ca278561c174d59a5f3ec6b9390c276; Thu, 13 Jun 2024 12:16:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd16be1f-297e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718281005; x=1718541505;
	bh=hEcTUZt+HEBRIBv50h+VE0TB/aTFBpE2/hWqA5imAgs=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=BSZxQNX8u6Zv8CRNV9Dtg4XoE9Cn9svSu8gQTtWK619KGgnT1gEE5rRJaNJrMCNdl
	 fZVOmF01M66Rrxs6isf+CTGOoUG7fi7EB2BeAXpXs5rbmoLOuFAiLotg1gmqXyXg+z
	 r8sZIIEmgZWbkLqIcPYj87emYuBnUJu2dN60eF1cMklgK31xcwIfvOnMF5rsv2kEms
	 Q0IZH+3Ji1R7x+Y+Pkk6QM9wx/cR+mv4PXIVf/ExT6NMnoenEH2ij8O+lWKv279vWy
	 0ekWVSgvQKoS1dsPmSeLnxXe1Ef1gIsxJsw2TzmT3brhE8JWWUxi6QbynQkhfn7mBo
	 4BKaTYqmI5bww==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718281005; x=1718541505; i=teddy.astie@vates.tech;
	bh=hEcTUZt+HEBRIBv50h+VE0TB/aTFBpE2/hWqA5imAgs=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=HRwxBk/k1CXOLO0ZeP4qdNRBy5n+UsHg/Smm+/tLaFR54V8JSi9eMs/lzBINGakvf
	 df/XvoqIwJpjZJNy7dbsBcjcp536nuJ7LP9zZ537WquxsMegP99RYu4udEC0B+1Lvq
	 vKnnhpFDI/rYibVxgi9NuIrfVqRr61LsmqRjx5oLJowK8BPe7/7PL9BWTAUwN36u7H
	 v+I0bYhHgJ3RKNl9WDZ6f25iuuV4ua/lPGnTyxFZeEO7fPtZFHHC0RhPKigUuE2OcX
	 V4l6H8kUFNsTpfPLXp/RD6KEXl4Mw3aqDLvcZb4uYxxaf/sKBOYF+STO+qivMXuw1c
	 fUHZdJmMtqU2Q==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=202/5]=20docs/designs:=20Add=20a=20design=20document=20for=20IOMMU=20subsystem=20redesign?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718281002846
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <b133d04075b462493d77ade72cac23e7ffc50f62.1718269097.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1718269097.git.teddy.astie@vates.tech>
References: <cover.1718269097.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.6ca278561c174d59a5f3ec6b9390c276?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240613:md
Date: Thu, 13 Jun 2024 12:16:45 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

The current IOMMU subsystem has some limitations that make PV-IOMMU practically impossible
to implement. One of them is the assumption that each domain is bound to a single "IOMMU
domain", which also causes complications with the quarantine implementation.

Moreover, the current IOMMU subsystem is not entirely well-defined; for instance, the behavior
of map_page differs greatly between ARM SMMUv3 and x86 VT-d/AMD-Vi. On ARM it can modify
the domain page table, while on x86 it may be forbidden (e.g. when using HAP with PVH), or it
may modify only the device's view of memory (e.g. when using PV).

The goal of this redesign is to define the behavior and interface of the IOMMU subsystem
more explicitly, while allowing PV-IOMMU to be implemented effectively.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
 docs/designs/iommu-contexts.md | 398 +++++++++++++++++++++++++++++++++
 1 file changed, 398 insertions(+)
 create mode 100644 docs/designs/iommu-contexts.md

diff --git a/docs/designs/iommu-contexts.md b/docs/designs/iommu-contexts.md
new file mode 100644
index 0000000000..41bc701bf2
--- /dev/null
+++ b/docs/designs/iommu-contexts.md
@@ -0,0 +1,398 @@
+# IOMMU context management in Xen
+
+Status: Experimental
+Revision: 0
+
+# Background
+
+The design for *IOMMU paravirtualization for Dom0* [1] explains that some guests may
+want access to IOMMU features. In order to implement this in Xen, several adjustments
+need to be made to the IOMMU subsystem.
+
+The *hardware IOMMU domain* is currently implemented on a per-domain basis, such that
+each domain has exactly one specific *hardware IOMMU domain*. This design aims to allow
+a single Xen domain to manage several "IOMMU contexts", and to allow some domains
+(e.g. Dom0 [1]) to modify their IOMMU contexts.
+
+In addition, the quarantine feature can be refactored to use IOMMU contexts, reducing
+the complexity of platform-specific implementations and ensuring more consistency
+across platforms.
+
+# IOMMU context
+
+We define an "IOMMU context" as a *hardware IOMMU domain*, named a context to avoid
+confusion with Xen domains.
+It represents a hardware-specific data structure that contains mappings from device
+frame numbers to machine frame numbers (e.g. using a page table) that can be applied
+to a device by the IOMMU hardware.
+
+This structure is bound to a Xen domain, but a Xen domain may have several IOMMU contexts.
+These contexts may be modified through the interface defined in [1], aside from some
+specific cases (e.g. the default context cannot be modified).
+
+This is implemented in Xen as a new structure that will hold context-specific
+data.
+
+```c
+struct iommu_context {
+    u16 id; /* Context id (0 means default context) */
+    struct list_head devices;
+
+    struct arch_iommu_context arch;
+
+    bool opaque; /* context can't be modified nor accessed (e.g HAP) */
+};
+```
+
+A context is identified by a domain-specific number that may be used by IOMMU users,
+such as a guest using PV-IOMMU.
+
+`struct arch_iommu_context` is split out of `struct arch_iommu`:
+
+```c
+struct arch_iommu_context
+{
+    spinlock_t pgtables_lock;
+    struct page_list_head pgtables;
+
+    union {
+        /* Intel VT-d */
+        struct {
+            uint64_t pgd_maddr; /* io page directory machine address */
+            domid_t *didmap; /* per-iommu DID */
+            unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the context uses */
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            struct page_info *root_table;
+        } amd;
+    };
+};
+
+struct arch_iommu
+{
+    spinlock_t mapping_lock; /* io page table lock */
+    struct {
+        struct page_list_head list;
+        spinlock_t lock;
+    } pgtables;
+
+    struct list_head identity_maps;
+
+    union {
+        /* Intel VT-d */
+        struct {
+            /* no more context-specific values */
+            unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            unsigned int paging_mode;
+            struct guest_iommu *g_iommu;
+        } amd;
+    };
+};
+```
+
+Per-context IOMMU information is now carried by iommu_context rather than being
+integrated into struct arch_iommu.
+
+# Xen domain IOMMU structure
+
+`struct domain_iommu` is modified to allow multiple contexts to exist within a single
+Xen domain:
+
+```c
+struct iommu_context_list {
+    uint16_t count; /* Context count excluding default context */
+
+    /* if count > 0 */
+
+    uint64_t *bitmap; /* bitmap of context allocation */
+    struct iommu_context *map; /* Map of contexts */
+};
+
+struct domain_iommu {
+    /* ... */
+
+    struct iommu_context default_ctx;
+    struct iommu_context_list other_contexts;
+
+    /* ... */
+}
+```
+
+default_ctx is a special context with id=0 that holds the page table mapping the entire
+domain, which preserves the previous behavior. All devices are expected to be
+bound to this context during initialization.
+
+Alongside this default context, which always exists, we use a pool of contexts whose
+size is fixed at domain initialization. Contexts can be allocated from this pool (if
+possible), and each has an id matching its position in the map (with id != 0).
+These contexts may be used by IOMMU context users such as PV-IOMMU or the quarantine
+domain (DomIO).
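
As an illustration, the pool-based allocation scheme can be sketched outside Xen with simplified types. `ctx_alloc`/`ctx_free` are hypothetical names, and the id == slot + 1 convention (id 0 being the default context) is an assumption of this sketch, not the actual implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model of the per-domain context pool: contexts with id != 0
 * live in a fixed-size map, and a bitmap tracks which slots are in use. */
struct ctx_pool {
    uint16_t count;   /* number of slots, excluding the default context */
    uint64_t *bitmap; /* one bit per slot */
};

/* Returns a context id (slot index + 1), or 0 if the pool is exhausted. */
static uint16_t ctx_alloc(struct ctx_pool *pool)
{
    for (uint16_t i = 0; i < pool->count; i++) {
        if (!(pool->bitmap[i / 64] & (1ULL << (i % 64)))) {
            pool->bitmap[i / 64] |= 1ULL << (i % 64);
            return i + 1;
        }
    }
    return 0; /* no free context */
}

static void ctx_free(struct ctx_pool *pool, uint16_t id)
{
    assert(id != 0 && id <= pool->count);
    pool->bitmap[(id - 1) / 64] &= ~(1ULL << ((id - 1) % 64));
}
```

Because ids map directly to slot positions, looking up a context from its id is O(1), and the bitmap keeps allocation cheap even with a large pool.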
+
+# Platform independent context management interface
+
+A new platform-independent interface is introduced in the Xen hypervisor to allow
+IOMMU context users to create and manage contexts within domains.
+
+```c
+/* Direct context access functions (not supposed to be used directly) */
+#define iommu_default_context(d) (&dom_iommu(d)->default_ctx)
+struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no);
+int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ctx_no, u32 flags);
+int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags);
+
+/* Check if a specific context exists in the domain; note that ctx_no=0 always
+    exists */
+bool iommu_check_context(struct domain *d, u16 ctx_no);
+
+/* Flag for default context initialization */
+#define IOMMU_CONTEXT_INIT_default (1 << 0)
+
+/* Flag for quarantine contexts (scratch page, DMA Abort mode, ...) */
+#define IOMMU_CONTEXT_INIT_quarantine (1 << 1)
+
+/* Flag to specify that devices will need to be reattached to default domain */
+#define IOMMU_TEARDOWN_REATTACH_DEFAULT (1 << 0)
+
+/* Allocate a new context, uses CONTEXT_INIT flags */
+int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags);
+
+/* Free a context, uses CONTEXT_TEARDOWN flags */
+int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags);
+
+/* Move a device from one context to another, including between different domains. */
+int iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                            device_t *dev, u16 ctx_no);
+
+/* Add a device to a context for first initialization */
+int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no);
+
+/* Remove a device from a context, effectively removing it from the IOMMU. */
+int iommu_dettach_context(struct domain *d, device_t *dev);
+```
+
+This interface relies on a new driver-facing interface to implement these features.
+
+Some existing functions gain a new parameter specifying which context the operation applies to:
+- iommu_map (iommu_legacy_map untouched)
+- iommu_unmap (iommu_legacy_unmap untouched)
+- iommu_lookup_page
+- iommu_iotlb_flush
+
+These functions will modify the iommu_context structure to reflect the operations
+applied, and will be used to replace some operations previously performed in the
+IOMMU driver.
+
+# IOMMU platform_ops interface changes
+
+The IOMMU driver needs to expose a way to create and manage IOMMU contexts. The approach
+taken here is to modify the interface to allow specifying an IOMMU context on operations,
+while at the same time simplifying the interface by relying more on platform-independent
+IOMMU code.
+
+Functions added to iommu_ops:
+
+```c
+/* Initialize a context (creating page tables, allocating hardware, structures, ...) */
+int (*context_init)(struct domain *d, struct iommu_context *ctx,
+                    u32 flags);
+/* Destroy a context, assumes no device is bound to the context. */
+int (*context_teardown)(struct domain *d, struct iommu_context *ctx,
+                        u32 flags);
+/* Put a device in a context (assumes the device is not attached to another context) */
+int (*attach)(struct domain *d, device_t *dev,
+              struct iommu_context *ctx);
+/* Remove a device from a context, and from the IOMMU. */
+int (*dettach)(struct domain *d, device_t *dev,
+               struct iommu_context *prev_ctx);
+/* Move the device from a context to another, including if the new context is in
+   another domain. d corresponds to the target domain. */
+int (*reattach)(struct domain *d, device_t *dev,
+                struct iommu_context *prev_ctx,
+                struct iommu_context *ctx);
+
+#ifdef CONFIG_HAS_PCI
+/* Specific interface for phantom function devices. */
+int (*add_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                    struct iommu_context *ctx);
+int (*remove_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                struct iommu_context *ctx);
+#endif
+
+/* Changes in existing to use a specified iommu_context. */
+int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
+                                unsigned int flags,
+                                unsigned int *flush_flags,
+                                struct iommu_context *ctx);
+int __must_check (*unmap_page)(struct domain *d, dfn_t dfn,
+                                unsigned int order,
+                                unsigned int *flush_flags,
+                                struct iommu_context *ctx);
+int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                unsigned int *flags,
+                                struct iommu_context *ctx);
+
+int __must_check (*iotlb_flush)(struct iommu_context *ctx, dfn_t dfn,
+                                unsigned long page_count,
+                                unsigned int flush_flags);
+
+void (*clear_root_pgtable)(struct domain *d, struct iommu_context *ctx);
+```
+
+These functions are redundant with existing functions; therefore, the following functions
+are replaced with new equivalents:
+- quarantine_init : platform-independent code and IOMMU_CONTEXT_INIT_quarantine flag
+- add_device : attach and add_devfn (phantom)
+- assign_device : attach and add_devfn (phantom)
+- remove_device : dettach and remove_devfn (phantom)
+- reassign_device : reattach
+
+There are some functional differences from the previous functions; the following should
+be handled by platform-independent/arch-specific code instead of the IOMMU driver:
+- identity mappings (unity mappings and rmrr)
+- device list in context and domain
+- domain of a device
+- quarantine
+
+The idea behind this is to implement IOMMU context features while simplifying IOMMU
+drivers implementations and ensuring more consistency between IOMMU drivers.
+
+## Phantom function handling
+
+PCI devices may use additional devfns to do DMA operations. In order to support such
+devices, an interface is added to map specific device functions without implying that
+the device is mapped to a new context (which could cause duplicates in Xen data structures).
+
+The add_devfn and remove_devfn functions allow mapping an IOMMU context onto a specific
+devfn of a PCI device, without altering platform-independent data structures.
+
+It is important for the reattach operation to take these devices into account, using an
+all-or-nothing approach to prevent devices from being only partially reattached to the
+new context (see XSA-449 [2]).
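
The all-or-nothing approach can be sketched as follows; `attach_devfn`, `ctx_of` and `reattach_all` are hypothetical stand-ins for the real add_devfn/remove_devfn hooks, used only to illustrate the rollback logic:

```c
#include <assert.h>

/* Model of all-or-nothing reattach for phantom functions: each devfn is
 * moved to the new context, and on failure every devfn already moved is
 * rolled back, so the device never ends up split between two contexts
 * (the failure mode of XSA-449). */

#define NFN 4
static int ctx_of[NFN];  /* current context of each devfn */
static int fail_at = -1; /* devfn at which attach artificially fails */

static int attach_devfn(int devfn, int ctx)
{
    if (devfn == fail_at)
        return -1;
    ctx_of[devfn] = ctx;
    return 0;
}

static int reattach_all(int prev_ctx, int next_ctx)
{
    for (int fn = 0; fn < NFN; fn++) {
        if (attach_devfn(fn, next_ctx)) {
            /* roll back the devfns already moved */
            while (fn--)
                attach_devfn(fn, prev_ctx);
            return -1;
        }
    }
    return 0;
}
```

On failure the device is left entirely in its previous context, which is the property XSA-449 showed is needed.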
+
+# Quarantine refactoring using IOMMU contexts
+
+The quarantine mechanism can be entirely reimplemented using IOMMU contexts, making
+it simpler and more consistent between platforms.
+
+Quarantine is currently only supported on x86 platforms and works by creating a
+single *hardware IOMMU domain* per quarantined device. All the quarantine logic is
+implemented in a platform-specific fashion, even though the platforms implement the
+same concepts:
+
+The *hardware IOMMU context* data structures for quarantine are currently stored in
+the device structure itself (using arch_pci_dev), and the IOMMU driver needs to know
+whether it is dealing with quarantine operations or regular operations (often handled
+using macros such as QUARANTINE_SKIP or DEVICE_PGTABLE).
+
+The page table that applies to the quarantined device is created by mapping reserved
+device regions, and adding mappings to a scratch page if enabled (quarantine=scratch-page).
+
+A new approach we can use is allowing the quarantine domain (DomIO) to manage IOMMU
+contexts, and implement all the quarantine logic using IOMMU contexts.
+
+That way, the quarantine implementation can be platform-independent, and thus more
+consistent between platforms. It will also allow quarantine to work
+with other IOMMU implementations without having to implement platform-specific behavior.
+Moreover, quarantine operations can be implemented using regular context operations
+instead of relying on driver-specific code.
+
+The quarantine implementation can be summarised as:
+
+```c
+int iommu_quarantine_dev_init(device_t *dev)
+{
+    int ret;
+    u16 ctx_no;
+
+    if ( !iommu_quarantine )
+        return -EINVAL;
+
+    ret = iommu_context_alloc(dom_io, &ctx_no, IOMMU_CONTEXT_INIT_quarantine);
+
+    if ( ret )
+        return ret;
+
+    /** TODO: Setup scratch page, mappings... */
+
+    ret = iommu_reattach_context(dev->domain, dom_io, dev, ctx_no);
+
+    if ( ret )
+    {
+        ASSERT(!iommu_context_free(dom_io, ctx_no, 0));
+        return ret;
+    }
+
+    return ret;
+}
+```
+
+# Platform-specific considerations
+
+## Reference counters on target pages
+
+When mapping a guest page onto an IOMMU context, we need to make sure that
+this page is not reused for something else while it is still referenced
+by an IOMMU context. One way of doing this is incrementing the reference counter
+of each target page we map (excluding reserved regions), and decrementing it
+when the mapping isn't used anymore.
+
+One consideration to have is when destroying the context while having existing
+mappings in it. We can walk through the entire page table and decrement the
+reference counter of all mappings. All of that assumes that there is no reserved
+region mapped (which should be the case as a requirement of teardown, or as a
+consequence of REATTACH_DEFAULT flag).
+
+Another consideration is that the "cleanup mappings" operation may take a lot
+of time depending on the complexity of the page table. Making the teardown operation
+preemptible allows the hypercall to be preempted if needed, and also prevents a
+malicious guest from stalling a CPU in a teardown operation with a specially crafted
+IOMMU context (e.g. with several 1G superpages).
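
A minimal model of such a preemptible cleanup, assuming a simple per-invocation budget and a cursor standing in for real hypercall continuation state (`refcnt`, `cleanup_mappings` and the constants are illustrative, not the actual implementation):

```c
#include <stddef.h>

/* The walk decrements reference counters but only processes a bounded
 * number of entries per invocation, returning a "restart" indication
 * (modelled here as -1) so the hypercall can be continued later. */

#define NPAGES 10
#define BUDGET 4

static int refcnt[NPAGES];

static int cleanup_mappings(size_t *cursor)
{
    size_t done = 0;

    while (*cursor < NPAGES) {
        if (done++ == BUDGET)
            return -1;          /* preempt: caller re-invokes later */
        refcnt[(*cursor)++]--;  /* drop the reference the mapping held */
    }
    return 0;                   /* teardown complete */
}
```

Each invocation makes forward progress through the cursor, so a crafted context with many mappings costs the guest repeated hypercall continuations rather than a stalled CPU.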
+
+## Limit the amount of pages IOMMU contexts can use
+
+In order to prevent a (potentially malicious) guest from causing too many allocations
+in Xen, we can enforce limits on the memory the IOMMU subsystem can use for IOMMU
+contexts. A possible implementation is to preallocate a reasonably large chunk of memory
+and split it into pages for use by the IOMMU subsystem, only for non-default IOMMU
+contexts (e.g. the PV-IOMMU interface); if this limit is reached, some operations
+may fail on the guest side. These limitations shouldn't impact "usual" operations
+of the IOMMU subsystem (e.g. default context initialization).
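
Such a preallocated pool can be sketched as follows; `pool_get`/`pool_put` are hypothetical names and the fixed page count is an assumption of this sketch:

```c
#include <stddef.h>

/* A fixed number of page slots is carved out up front; page-table
 * allocations for guest-managed contexts draw from it, and fail
 * (rather than fall back to the general allocator) once exhausted. */

#define POOL_PAGES 3

static void *pool[POOL_PAGES];
static size_t pool_avail;

static void pool_init(void)
{
    static char backing[POOL_PAGES][4096]; /* stand-in for preallocated pages */

    for (size_t i = 0; i < POOL_PAGES; i++)
        pool[i] = backing[i];
    pool_avail = POOL_PAGES;
}

static void *pool_get(void)
{
    return pool_avail ? pool[--pool_avail] : NULL; /* fail, don't allocate */
}

static void pool_put(void *page)
{
    pool[pool_avail++] = page;
}
```

Bounding the pool size at domain creation turns a potential hypervisor memory exhaustion into an ordinary `-ENOMEM`-style failure visible to the guest.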
+
+## x86 Architecture
+
+TODO
+
+### Intel VT-d
+
+VT-d uses a DID to tag the *IOMMU domain* applied to a device and assumes that all
+entries with the same DID use the same page table (i.e. the same IOMMU context).
+Under certain circumstances (e.g. a DRHD with a DID limit below 16 bits), the *DID* is
+transparently converted into a DRHD-specific DID using an internally managed map.
+
+The current implementation of the code reuses the Xen domain_id as the DID.
+However, when using multiple IOMMU contexts per domain, we can't use the domain_id for
+all contexts (otherwise, different page tables would be mapped with the same DID).
+The following strategy is used:
+- on the default context, reuse the domain_id (the default context is unique per domain)
+- on non-default contexts, use an id allocated from the pseudo_domid map (currently used
+by quarantine), which is a DID outside of the Xen domain_id range
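
The strategy can be illustrated with a simplified counter standing in for the pseudo_domid map; the exact allocation scheme is an assumption of this sketch (the real map is a bitmap allocator), only the split between the two id ranges is the point:

```c
#include <stdint.h>

typedef uint16_t domid_t;
/* First id outside the range usable by ordinary Xen domains. */
#define DOMID_FIRST_RESERVED 0x7FF0U

static domid_t next_pseudo = DOMID_FIRST_RESERVED;

/* Pick the DID for a (domain, context) pair: the default context reuses
 * the domain_id; non-default contexts get an id from the reserved range,
 * so two different page tables never share a DID. */
static domid_t context_did(domid_t domain_id, uint16_t ctx_no)
{
    if (ctx_no == 0)
        return domain_id; /* default context: one per domain */
    return next_pseudo++; /* non-default: outside the domain_id range */
}
```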
+
+### AMD-Vi
+
+TODO
+
+## Device-tree platforms
+
+### SMMU and SMMUv3
+
+TODO
+
+* * *
+
+[1] See pv-iommu.md
+
+[2] pci: phantom functions assigned to incorrect contexts
+https://xenbits.xen.org/xsa/advisory-449.html
\ No newline at end of file
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:16:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:16:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739884.1146838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNp-0008S7-Gn; Thu, 13 Jun 2024 12:16:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739884.1146838; Thu, 13 Jun 2024 12:16:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNp-0008S0-De; Thu, 13 Jun 2024 12:16:49 +0000
Received: by outflank-mailman (input) for mailman id 739884;
 Thu, 13 Jun 2024 12:16:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xLSK=NP=bounce.vates.tech=bounce-md_30504962.666ae32b.v1-a1a12ac15ee84a27862ee32dd567ea52@srs-se1.protection.inumbo.net>)
 id 1sHjNn-0008Rn-Qr
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:16:47 +0000
Received: from mail177-18.suw61.mandrillapp.com
 (mail177-18.suw61.mandrillapp.com [198.2.177.18])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cc7f90e4-297e-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 14:16:45 +0200 (CEST)
Received: from pmta14.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail177-18.suw61.mandrillapp.com (Mailchimp) with ESMTP id
 4W0LxR70YbzCf9PPj
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 12:16:43 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 a1a12ac15ee84a27862ee32dd567ea52; Thu, 13 Jun 2024 12:16:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc7f90e4-297e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718281004; x=1718541504;
	bh=Si1H+0tUp3aV0kgXlGZ06VpmpP6WpGI1yiw22kocZDk=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=pDTrdiHzEmXxuwfPMWwUWnBUi6F/r2k0j3GA9HUoWlPFey1RzxoLBtM4l8ePfUo2X
	 wjRcYUP9oVNauoOz0lNMN/hZ2ndnXIB42W2gWcfQ/9/P8zVCiPjq5P3MS8DtODaWkB
	 JtFbSgTxm4ktw84BH6eMPOQ3RR43RyEgclPIAKPLkDdW1P4jj8e7VQM6aqK3z/3NYK
	 qo7Fxs+aoDmx93akKzm59NuSxNBSraxlIjY0H9l+sVe8H1FibcEpYQTNAsBf1ZuGLc
	 o281aN34sdfgw4jf2+GJN9Bh3+Ybhiofz8yb+hUSE2l/p7+0bZAuFZB5NkXPmGbLXL
	 78G9uwWCku6hA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718281004; x=1718541504; i=teddy.astie@vates.tech;
	bh=Si1H+0tUp3aV0kgXlGZ06VpmpP6WpGI1yiw22kocZDk=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=Ha5g06IMmj4caP8Agqckb45mSfuMqSAh4vl1ZbJ3mAl/Dpa8tYiQ0F/4EePW9L+EI
	 R1vXX8SNoLGbVHkxdpER+n+d5CTk61J+PfP4mMY+1l/sNGK9v0Hilq/fjlNVvprJhY
	 rCGQJ9xcsMM8GAMDGCgIkxpuZ15X4CmLMqQGMeNgdPeQS2+LWhiQ+H2r1yioHZUwl0
	 3tyXEICLLwDAcWvnUL5lkh0vwN+/jdDQm3czP3YO3PkoBtNvrUKZLIXLJe95NRTBtR
	 iODBjOUwf6gbv2cMJfVP7V//T+Yt/nBXD/SVR9O3Hd74J77x26DwnvtT3rK33YwkH2
	 jXl+i1+13mecA==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=201/5]=20docs/designs:=20Add=20a=20design=20document=20for=20PV-IOMMU?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718281001992
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <2aa4e20a1d7aeb51f393cde4d142732e18d3a57c.1718269097.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1718269097.git.teddy.astie@vates.tech>
References: <cover.1718269097.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.a1a12ac15ee84a27862ee32dd567ea52?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240613:md
Date: Thu, 13 Jun 2024 12:16:43 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Some operating systems want to use the IOMMU to implement various features (e.g
VFIO) or DMA protection.
This patch introduces a proposal for IOMMU paravirtualization for Dom0.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
 docs/designs/pv-iommu.md | 105 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 docs/designs/pv-iommu.md

diff --git a/docs/designs/pv-iommu.md b/docs/designs/pv-iommu.md
new file mode 100644
index 0000000000..c01062a3ad
--- /dev/null
+++ b/docs/designs/pv-iommu.md
@@ -0,0 +1,105 @@
+# IOMMU paravirtualization for Dom0
+
+Status: Experimental
+
+# Background
+
+By default, Xen only uses the IOMMU for itself, either to make device address
+space coherent with guest address space (x86 HVM/PVH) or to prevent devices
+from doing DMA outside their expected memory regions, including the hypervisor
+(x86 PV).
+
+A limitation is that guests (especially privileged ones) may want to use
+IOMMU hardware in order to implement features such as DMA protection and
+VFIO [1], as IOMMU functionality is currently not available outside of the
+hypervisor.
+
+[1] VFIO - "Virtual Function I/O" - https://www.kernel.org/doc/html/latest/driver-api/vfio.html
+
+# Design
+
+The operating system may want to have access to various IOMMU features such as
+context management and DMA remapping. We can create a new hypercall that allows
+the guest to have access to a new paravirtualized IOMMU interface.
+
+This feature is only meant to be available for Dom0: DomUs may have emulated
+devices that are not hardware and can't be managed on the Xen side, so we
+can't rely on the hardware IOMMU to enforce DMA remapping for them.
+
+This interface is exposed under the `iommu_op` hypercall.
+
+In addition, Xen domains are modified in order to allow the existence of several
+IOMMU contexts, including a default one that implements the default behavior (e.g.
+hardware assisted paging) and can't be modified by the guest. DomUs cannot have
+additional contexts, and therefore act as if they only have the default context.
+
+Each IOMMU context within a Xen domain is identified using a domain-specific
+context number that is used in the Xen IOMMU subsystem and the hypercall
+interface.
+
+The number of IOMMU contexts a domain can use is predetermined at domain creation
+and is configurable through the `dom0-iommu=nb-ctx=N` Xen command line option.
+
+# IOMMU operations
+
+## Alloc context
+
+Create a new IOMMU context for the guest and return the context number to the
+guest.
+Fail if the IOMMU context limit of the guest is reached.
+
+A flag can be specified to create an identity mapping.
+
+## Free context
+
+Destroy an IOMMU context created previously.
+It is not possible to free the default context.
+
+Reattach the context's devices to the default context if requested by the guest.
+
+Fail if there is a device in the context and the reattach-to-default flag is not
+specified.
+
+## Reattach device
+
+Reattach a device to another IOMMU context (including the default one).
+The target IOMMU context number must be valid and the context allocated.
+
+The guest needs to specify the PCI SBDF of a device it has access to.
+
+## Map/unmap page
+
+Map/unmap a page on a context.
+The guest needs to specify a gfn and target dfn to map.
+
+Refuse to create the mapping if one already exists for the same dfn.
+
+## Lookup page
+
+Get the gfn mapped by a specific dfn.
+
+# Implementation considerations
+
+## Hypercall batching
+
+In order to prevent unneeded hypercalls and IOMMU flushing, it is advisable to
+be able to batch some critical IOMMU operations (e.g map/unmap multiple pages).
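
A sketch of what batching buys, assuming hypothetical `map_op`/`map_batch` shapes (the real iommu_op ABI may differ): processing N map operations in one call allows a single IOTLB flush instead of N:

```c
#include <stddef.h>
#include <stdint.h>

struct map_op {
    uint64_t gfn; /* source guest frame */
    uint64_t dfn; /* target device frame */
};

static size_t flushes; /* counts IOTLB flushes issued */

static int do_map(uint64_t gfn, uint64_t dfn)
{
    (void)gfn; (void)dfn; /* actual page-table update omitted in this model */
    return 0;
}

/* Apply a whole batch of mappings, then flush once at the end instead of
 * once per page. */
static int map_batch(const struct map_op *ops, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        int rc = do_map(ops[i].gfn, ops[i].dfn);

        if (rc)
            return rc;
    }
    flushes++; /* one flush for the whole batch */
    return 0;
}
```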
+
+## Hardware without IOMMU support
+
+The operating system needs to be aware of the PV-IOMMU capability, and of whether it
+is able to create contexts. However, some operating systems may critically fail if
+they are unable to create a new IOMMU context, which is what happens
+if no IOMMU hardware is available.
+
+The hypercall interface needs a way to advertise the ability to create
+and manage IOMMU contexts, including the number of contexts the guest is able
+to use. Using this information, Dom0 may decide whether or not to use
+the PV-IOMMU interface.
+
+## Page pool for contexts
+
+In order to prevent a buggy Dom0 from unexpectedly starving the hypervisor of
+memory, we can preallocate the pages the contexts will use and make
+map/unmap use these pages instead of allocating them dynamically.
+
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:16:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:16:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739886.1146858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNr-0000VL-5r; Thu, 13 Jun 2024 12:16:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739886.1146858; Thu, 13 Jun 2024 12:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNr-0000VE-2k; Thu, 13 Jun 2024 12:16:51 +0000
Received: by outflank-mailman (input) for mailman id 739886;
 Thu, 13 Jun 2024 12:16:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnjO=NP=bounce.vates.tech=bounce-md_30504962.666ae32e.v1-471be41421624684a71f3bb9807026d6@srs-se1.protection.inumbo.net>)
 id 1sHjNp-0008Rn-Gu
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:16:49 +0000
Received: from mail177-18.suw61.mandrillapp.com
 (mail177-18.suw61.mandrillapp.com [198.2.177.18])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cda20959-297e-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 14:16:47 +0200 (CEST)
Received: from pmta14.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail177-18.suw61.mandrillapp.com (Mailchimp) with ESMTP id
 4W0LxV1cQZzCf9KKR
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 12:16:46 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 471be41421624684a71f3bb9807026d6; Thu, 13 Jun 2024 12:16:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cda20959-297e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718281006; x=1718541506;
	bh=zsmiGEzGYH4Na7mn3S3okl3HuYM19CV8RnCjm2eOULs=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=G/Pewr7xGkteKMRSRCUQpv3iRKVZ94vRAJiE+0HkhA/gVQ+zSLhMCMqdbKzrktKBD
	 CbKEIVRwsl/BcgJgWm9MDJpHr0D+j8rAL2OU/9wDg/0CbZb26R7eyryUogbgMPYX58
	 HX1WPFON+apWhxPjyUW3SEZAfjpiUZwrZm5MKpd+2i31whKYGCOkV4tnGFTwH8yhgY
	 G7zYRC/4hk6JcPJPQoBV8U/SlJ8u6GFRxVS2ZB1hQUR1yDS4PAOdT3dv31ZT9R0jNr
	 GzNBgGXUXpGoMA6b4xSPlqpQAO8eMKve+ZedU1alEFw3GqP/Xi37fzb26QNE+RD6lu
	 2a6XrxhoGJtZA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718281006; x=1718541506; i=teddy.astie@vates.tech;
	bh=zsmiGEzGYH4Na7mn3S3okl3HuYM19CV8RnCjm2eOULs=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=PVhWYT//8l3r+fY1nLrLslymaRq0VH3mLtvobH/oWiU2Bq3pESwOETBiJzRsxosdZ
	 7mYAm2U6rgA15Lx2RerBgxFgJPuZpERpaLMf4fUAqLFrF9QgnqILddpHIYGz6/8UUE
	 mMrtOAdMEMyFiDnUl+ap8j10nSMEt+Ar82dTfwolGBpDdUifienLoyQjyg/eNsYnDM
	 vXxjOV3gn/OU1PDD2rXB7b9zjFr+gql+XMzCLHi2GW9us5HefMWndGkVcDoqTOOXn2
	 K+4nEDzP9Io+MgO7GtR3QkyuKYUnU/ZID8K5/Wv9TKfM7E9rtSZvyEVodCGCPlCzyA
	 Swpo/iGocQhJw==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=200/5]=20IOMMU=20subsystem=20redesign=20and=20PV-IOMMU=20interface?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718281000809
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Lukasz Hawrylko <lukasz@hawrylko.pl>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, =?utf-8?Q?Mateusz=20M=C3=B3wka?= <mateusz.mowka@intel.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <cover.1718269097.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.471be41421624684a71f3bb9807026d6?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240613:md
Date: Thu, 13 Jun 2024 12:16:46 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

This work has been presented at Xen Summit 2024 during the
  IOMMU paravirtualization and Xen IOMMU subsystem rework
design session.

Operating systems may want to have access to an IOMMU in order to do DMA
protection or implement certain features (e.g VFIO on Linux).

VFIO support is mandatory for frameworks such as SPDK, which can be useful to
implement an alternative storage backend for virtual machines [1].

In this patch series, we introduce in Xen the ability to manage several
contexts per domain and provide a new hypercall interface to allow guests
to manage IOMMU contexts.

The VT-d driver is updated to support these new features.

[1] Using SPDK with the Xen hypervisor - FOSDEM 2023

Teddy Astie (5):
  docs/designs: Add a design document for PV-IOMMU
  docs/designs: Add a design document for IOMMU subsystem redesign
  IOMMU: Introduce redesigned IOMMU subsystem
  VT-d: Port IOMMU driver to new subsystem
  xen/public: Introduce PV-IOMMU hypercall interface

 docs/designs/iommu-contexts.md       |  398 +++++++
 docs/designs/pv-iommu.md             |  105 ++
 xen/arch/x86/domain.c                |    2 +-
 xen/arch/x86/include/asm/arena.h     |   54 +
 xen/arch/x86/include/asm/iommu.h     |   44 +-
 xen/arch/x86/include/asm/pci.h       |   17 -
 xen/arch/x86/mm/p2m-ept.c            |    2 +-
 xen/arch/x86/pv/dom0_build.c         |    4 +-
 xen/arch/x86/tboot.c                 |    4 +-
 xen/common/Makefile                  |    1 +
 xen/common/memory.c                  |    4 +-
 xen/common/pv-iommu.c                |  320 ++++++
 xen/drivers/passthrough/Kconfig      |   14 +
 xen/drivers/passthrough/Makefile     |    3 +
 xen/drivers/passthrough/context.c    |  626 +++++++++++
 xen/drivers/passthrough/iommu.c      |  333 ++----
 xen/drivers/passthrough/pci.c        |   49 +-
 xen/drivers/passthrough/quarantine.c |   49 +
 xen/drivers/passthrough/vtd/Makefile |    2 +-
 xen/drivers/passthrough/vtd/extern.h |   14 +-
 xen/drivers/passthrough/vtd/iommu.c  | 1555 +++++++++++---------------
 xen/drivers/passthrough/vtd/quirks.c |   21 +-
 xen/drivers/passthrough/x86/Makefile |    1 +
 xen/drivers/passthrough/x86/arena.c  |  157 +++
 xen/drivers/passthrough/x86/iommu.c  |  104 +-
 xen/include/hypercall-defs.c         |    6 +
 xen/include/public/pv-iommu.h        |  114 ++
 xen/include/public/xen.h             |    1 +
 xen/include/xen/iommu.h              |  118 +-
 xen/include/xen/pci.h                |    3 +
 30 files changed, 2822 insertions(+), 1303 deletions(-)
 create mode 100644 docs/designs/iommu-contexts.md
 create mode 100644 docs/designs/pv-iommu.md
 create mode 100644 xen/arch/x86/include/asm/arena.h
 create mode 100644 xen/common/pv-iommu.c
 create mode 100644 xen/drivers/passthrough/context.c
 create mode 100644 xen/drivers/passthrough/quarantine.c
 create mode 100644 xen/drivers/passthrough/x86/arena.c
 create mode 100644 xen/include/public/pv-iommu.h

-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:16:52 2024
From: Teddy Astie <teddy.astie@vates.tech>
Subject: [RFC XEN PATCH 5/5] xen/public: Introduce PV-IOMMU hypercall interface
X-Mailer: git-send-email 2.45.2
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Message-Id: <b596471737b726ffadd9569f5ad43d3ab5748a65.1718269097.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1718269097.git.teddy.astie@vates.tech>
References: <cover.1718269097.git.teddy.astie@vates.tech>
Date: Thu, 13 Jun 2024 12:16:48 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Introduce a new PV interface for managing the underlying IOMMU, its contexts,
and its devices. This interface allows Dom0 to create new contexts and to add
IOMMU mappings expressed from the guest's point of view (i.e. in guest frame
numbers).

This interface doesn't allow creating mappings that target other domains'
memory.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Missing in this RFC:

Usage of PV-IOMMU inside DomU

Differences from Malcolm Crossley's PV-IOMMU RFC [1]:

The original PV-IOMMU interface exposes IOMMU subsystem operations to the
guest, so it inherits the limitations of the Xen IOMMU subsystem. For
instance, all devices are bound to a single IOMMU domain, and the subsystem
can only modify that single domain-wide address space.
The main goal of the original implementation was to allow implementing vGPU
by mapping other guests into a device's address space (an address space that
is in fact shared by all devices of the domain).
That original draft cannot work with PVH (the HAP P2M is immutable from the
IOMMU driver's point of view) and cannot be used to implement the Linux IOMMU
subsystem (it cannot create separate IOMMU domains).

This new proposal aims to support the Linux IOMMU subsystem from Dom0 (and
from DomU in the future). It allows creating and managing IOMMU domains
(named "IOMMU contexts" here), separate from the "default context", on a
per-domain basis. There is no foreign-mapping support (yet), and the emphasis
is on uses of the Linux IOMMU subsystem (e.g. DMA protection and VFIO).

[1] https://lore.kernel.org/all/1455099035-17649-2-git-send-email-malcolm.crossley@citrix.com/
---
 xen/common/Makefile           |   1 +
 xen/common/pv-iommu.c         | 320 ++++++++++++++++++++++++++++++++++
 xen/include/hypercall-defs.c  |   6 +
 xen/include/public/pv-iommu.h | 114 ++++++++++++
 xen/include/public/xen.h      |   1 +
 5 files changed, 442 insertions(+)
 create mode 100644 xen/common/pv-iommu.c
 create mode 100644 xen/include/public/pv-iommu.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index d512cad524..336c5ea143 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -57,6 +57,7 @@ obj-y += wait.o
 obj-bin-y += warning.init.o
 obj-$(CONFIG_XENOPROF) += xenoprof.o
 obj-y += xmalloc_tlsf.o
+obj-y += pv-iommu.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma lzo unlzo unlz4 unzstd earlycpio,$(n).init.o)
 
diff --git a/xen/common/pv-iommu.c b/xen/common/pv-iommu.c
new file mode 100644
index 0000000000..844642ee54
--- /dev/null
+++ b/xen/common/pv-iommu.c
@@ -0,0 +1,320 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/common/pv_iommu.c
+ *
+ * PV-IOMMU hypercall interface.
+ */
+
+#include <xen/mm.h>
+#include <xen/lib.h>
+#include <xen/iommu.h>
+#include <xen/sched.h>
+#include <xen/pci.h>
+#include <xen/guest_access.h>
+#include <asm/p2m.h>
+#include <asm/event.h>
+#include <public/pv-iommu.h>
+
+#define PVIOMMU_PREFIX "[PV-IOMMU] "
+
+#define PVIOMMU_MAX_PAGES 256 /* Move to Kconfig ? */
+
+/* Allowed masks for each sub-operation */
+#define ALLOC_OP_FLAGS_MASK (0)
+#define FREE_OP_FLAGS_MASK (IOMMU_TEARDOWN_REATTACH_DEFAULT)
+
+static int get_paged_frame(struct domain *d, gfn_t gfn, mfn_t *mfn,
+                           struct page_info **page, int readonly)
+{
+    p2m_type_t p2mt;
+
+    *page = get_page_from_gfn(d, gfn_x(gfn), &p2mt,
+                             (readonly) ? P2M_ALLOC : P2M_UNSHARE);
+
+    if ( !(*page) )
+    {
+        *mfn = INVALID_MFN;
+        if ( p2m_is_shared(p2mt) )
+            return -EINVAL;
+        if ( p2m_is_paging(p2mt) )
+        {
+            p2m_mem_paging_populate(d, gfn);
+            return -EIO;
+        }
+
+        return -EPERM;
+    }
+
+    *mfn = page_to_mfn(*page);
+
+    return 0;
+}
+
+static int can_use_iommu_check(struct domain *d)
+{
+    if ( !iommu_enabled )
+    {
+        printk(PVIOMMU_PREFIX "IOMMU is not enabled\n");
+        return 0;
+    }
+
+    if ( !is_hardware_domain(d) )
+    {
+        printk(PVIOMMU_PREFIX "Non-hardware domain\n");
+        return 0;
+    }
+
+    if ( !is_iommu_enabled(d) )
+    {
+        printk(PVIOMMU_PREFIX "IOMMU disabled for this domain\n");
+        return 0;
+    }
+
+    return 1;
+}
+
+static long query_cap_op(struct pv_iommu_op *op, struct domain *d)
+{
+    op->cap.max_ctx_no = d->iommu.other_contexts.count;
+    op->cap.max_nr_pages = PVIOMMU_MAX_PAGES;
+    op->cap.max_iova_addr = (1LLU << 39) - 1; /* TODO: hardcoded 39-bits */
+
+    return 0;
+}
+
+static long alloc_context_op(struct pv_iommu_op *op, struct domain *d)
+{
+    u16 ctx_no = 0;
+    int status = 0;
+
+    status = iommu_context_alloc(d, &ctx_no, op->flags & ALLOC_OP_FLAGS_MASK);
+
+    if (status < 0)
+        return status;
+
+    printk("Created context %hu\n", ctx_no);
+
+    op->ctx_no = ctx_no;
+    return 0;
+}
+
+static long free_context_op(struct pv_iommu_op *op, struct domain *d)
+{
+    return iommu_context_free(d, op->ctx_no,
+                              IOMMU_TEARDOWN_PREEMPT | (op->flags & FREE_OP_FLAGS_MASK));
+}
+
+static long reattach_device_op(struct pv_iommu_op *op, struct domain *d)
+{
+    struct physdev_pci_device dev = op->reattach_device.dev;
+    device_t *pdev;
+
+    pdev = pci_get_pdev(d, PCI_SBDF(dev.seg, dev.bus, dev.devfn));
+
+    if ( !pdev )
+        return -ENOENT;
+
+    return iommu_reattach_context(d, d, pdev, op->ctx_no);
+}
+
+static long map_pages_op(struct pv_iommu_op *op, struct domain *d)
+{
+    int ret = 0, flush_ret;
+    struct page_info *page = NULL;
+    mfn_t mfn;
+    unsigned int flags;
+    unsigned int flush_flags = 0;
+    size_t i = 0;
+
+    if ( op->map_pages.nr_pages > PVIOMMU_MAX_PAGES )
+        return -E2BIG;
+
+    if ( !iommu_check_context(d, op->ctx_no) )
+        return -EINVAL;
+
+    //printk("Mapping gfn:%lx-%lx to dfn:%lx-%lx on %hu\n",
+    //       op->map_pages.gfn, op->map_pages.gfn + op->map_pages.nr_pages - 1,
+    //       op->map_pages.dfn, op->map_pages.dfn + op->map_pages.nr_pages - 1,
+    //       op->ctx_no);
+
+    flags = 0;
+
+    if ( op->flags & IOMMU_OP_readable )
+        flags |= IOMMUF_readable;
+
+    if ( op->flags & IOMMU_OP_writeable )
+        flags |= IOMMUF_writable;
+
+    for (i = 0; i < op->map_pages.nr_pages; i++)
+    {
+        gfn_t gfn = _gfn(op->map_pages.gfn + i);
+        dfn_t dfn = _dfn(op->map_pages.dfn + i);
+
+        /* Look up the page struct backing gfn */
+        ret = get_paged_frame(d, gfn, &mfn, &page, 0);
+
+        if ( ret )
+            break;
+
+        /* Check for conflict with existing mappings */
+        if ( !iommu_lookup_page(d, dfn, &mfn, &flags, op->ctx_no) )
+        {
+            put_page(page);
+            ret = -EADDRINUSE;
+            break;
+        }
+
+        ret = iommu_map(d, dfn, mfn, 1, flags, &flush_flags, op->ctx_no);
+
+        if ( ret )
+            break;
+    }
+
+    op->map_pages.mapped = i;
+
+    flush_ret = iommu_iotlb_flush(d, _dfn(op->map_pages.dfn),
+                                  op->map_pages.nr_pages, flush_flags,
+                                  op->ctx_no);
+
+    if ( flush_ret )
+        printk("Flush operation failed (%d)\n", flush_ret);
+
+    return ret;
+}
+
+static long unmap_pages_op(struct pv_iommu_op *op, struct domain *d)
+{
+    mfn_t mfn;
+    int ret = 0, flush_ret;
+    unsigned int flags;
+    unsigned int flush_flags = 0;
+    size_t i = 0;
+
+    if ( op->unmap_pages.nr_pages > PVIOMMU_MAX_PAGES )
+        return -E2BIG;
+
+    if ( !iommu_check_context(d, op->ctx_no) )
+        return -EINVAL;
+
+    //printk("Unmapping dfn:%lx-%lx on %hu\n",
+    //       op->unmap_pages.dfn, op->unmap_pages.dfn + op->unmap_pages.nr_pages - 1,
+    //       op->ctx_no);
+
+    for (i = 0; i < op->unmap_pages.nr_pages; i++)
+    {
+        dfn_t dfn = _dfn(op->unmap_pages.dfn + i);
+
+        /* Check if there is a valid mapping for this domain */
+        if ( iommu_lookup_page(d, dfn, &mfn, &flags, op->ctx_no) ) {
+            ret = -ENOENT;
+            break;
+        }
+
+        ret = iommu_unmap(d, dfn, 1, 0, &flush_flags, op->ctx_no);
+
+        if (ret)
+            break;
+
+        /* Decrement reference counter */
+        put_page(mfn_to_page(mfn));
+    }
+
+    op->unmap_pages.unmapped = i;
+
+    flush_ret = iommu_iotlb_flush(d, _dfn(op->unmap_pages.dfn),
+                                  op->unmap_pages.nr_pages, flush_flags,
+                                  op->ctx_no);
+
+    if ( flush_ret )
+        printk("Flush operation failed (%d)\n", flush_ret);
+
+    return ret;
+}
+
+static long lookup_page_op(struct pv_iommu_op *op, struct domain *d)
+{
+    mfn_t mfn;
+    gfn_t gfn;
+    unsigned int flags = 0;
+
+    if ( !iommu_check_context(d, op->ctx_no) )
+        return -EINVAL;
+
+    /* Check if there is a valid DFN mapping for this domain */
+    if ( iommu_lookup_page(d, _dfn(op->lookup_page.dfn), &mfn, &flags, op->ctx_no) )
+        return -ENOENT;
+
+    gfn = mfn_to_gfn(d, mfn);
+    BUG_ON(gfn_eq(gfn, INVALID_GFN));
+
+    op->lookup_page.gfn = gfn_x(gfn);
+
+    return 0;
+}
+
+long do_iommu_sub_op(struct pv_iommu_op *op)
+{
+    struct domain *d = current->domain;
+
+    if ( !can_use_iommu_check(d) )
+        return -EPERM;
+
+    switch ( op->subop_id )
+    {
+        case 0:
+            return 0;
+
+        case IOMMUOP_query_capabilities:
+            return query_cap_op(op, d);
+
+        case IOMMUOP_alloc_context:
+            return alloc_context_op(op, d);
+
+        case IOMMUOP_free_context:
+            return free_context_op(op, d);
+
+        case IOMMUOP_reattach_device:
+            return reattach_device_op(op, d);
+
+        case IOMMUOP_map_pages:
+            return map_pages_op(op, d);
+
+        case IOMMUOP_unmap_pages:
+            return unmap_pages_op(op, d);
+
+        case IOMMUOP_lookup_page:
+            return lookup_page_op(op, d);
+
+        default:
+            return -EINVAL;
+    }
+}
+
+long do_iommu_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    long ret = 0;
+    struct pv_iommu_op op;
+
+    if ( unlikely(copy_from_guest(&op, arg, 1)) )
+        return -EFAULT;
+
+    ret = do_iommu_sub_op(&op);
+
+    if ( ret == -ERESTART )
+        return hypercall_create_continuation(__HYPERVISOR_iommu_op, "h", arg);
+
+    if ( unlikely(copy_to_guest(arg, &op, 1)) )
+        return -EFAULT;
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/hypercall-defs.c b/xen/include/hypercall-defs.c
index 47c093acc8..84db1ab3c3 100644
--- a/xen/include/hypercall-defs.c
+++ b/xen/include/hypercall-defs.c
@@ -209,6 +209,9 @@ hypfs_op(unsigned int cmd, const char *arg1, unsigned long arg2, void *arg3, uns
 #ifdef CONFIG_X86
 xenpmu_op(unsigned int op, xen_pmu_params_t *arg)
 #endif
+#ifdef CONFIG_HAS_PASSTHROUGH
+iommu_op(void *arg)
+#endif
 
 #ifdef CONFIG_PV
 caller: pv64
@@ -295,5 +298,8 @@ mca                                do       do       -        -        -
 #ifndef CONFIG_PV_SHIM_EXCLUSIVE
 paging_domctl_cont                 do       do       do       do       -
 #endif
+#ifdef CONFIG_HAS_PASSTHROUGH
+iommu_op                           do       do       do       do        -
+#endif
 
 #endif /* !CPPCHECK */
diff --git a/xen/include/public/pv-iommu.h b/xen/include/public/pv-iommu.h
new file mode 100644
index 0000000000..45f9c44eb1
--- /dev/null
+++ b/xen/include/public/pv-iommu.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: MIT */
+/******************************************************************************
+ * pv-iommu.h
+ *
+ * Paravirtualized IOMMU driver interface.
+ *
+ * Copyright (c) 2024 Teddy Astie <teddy.astie@vates.tech>
+ */
+
+#ifndef __XEN_PUBLIC_PV_IOMMU_H__
+#define __XEN_PUBLIC_PV_IOMMU_H__
+
+#include "xen.h"
+#include "physdev.h"
+
+#define IOMMU_DEFAULT_CONTEXT (0)
+
+/**
+ * Query PV-IOMMU capabilities for this domain.
+ */
+#define IOMMUOP_query_capabilities    1
+
+/**
+ * Allocate an IOMMU context, the new context handle will be written to ctx_no.
+ */
+#define IOMMUOP_alloc_context         2
+
+/**
+ * Destroy a IOMMU context.
+ * All devices attached to this context are reattached to default context.
+ *
+ * The default context can't be destroyed (0).
+ */
+#define IOMMUOP_free_context          3
+
+/**
+ * Reattach the device to IOMMU context.
+ */
+#define IOMMUOP_reattach_device       4
+
+#define IOMMUOP_map_pages             5
+#define IOMMUOP_unmap_pages           6
+
+/**
+ * Get the GFN associated to a specific DFN.
+ */
+#define IOMMUOP_lookup_page           7
+
+struct pv_iommu_op {
+    uint16_t subop_id;
+    uint16_t ctx_no;
+
+/**
+ * Create a context that is cloned from default.
+ * The new context will be populated with 1:1 mappings covering the entire guest memory.
+ */
+#define IOMMU_CREATE_clone (1 << 0)
+
+#define IOMMU_OP_readable (1 << 0)
+#define IOMMU_OP_writeable (1 << 1)
+    uint32_t flags;
+
+    union {
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+            /* Number of pages to map */
+            uint32_t nr_pages;
+            /* Number of pages actually mapped after sub-op */
+            uint32_t mapped;
+        } map_pages;
+
+        struct {
+            uint64_t dfn;
+            /* Number of pages to unmap */
+            uint32_t nr_pages;
+            /* Number of pages actually unmapped after sub-op */
+            uint32_t unmapped;
+        } unmap_pages;
+
+        struct {
+            struct physdev_pci_device dev;
+        } reattach_device;
+
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+        } lookup_page;
+
+        struct {
+            /* Maximum number of IOMMU context this domain can use. */
+            uint16_t max_ctx_no;
+            /* Maximum number of pages that can be modified in a single map/unmap operation. */
+            uint32_t max_nr_pages;
+            /* Maximum device address (iova) that the guest can use for mappings. */
+            uint64_t max_iova_addr;
+        } cap;
+    };
+};
+
+typedef struct pv_iommu_op pv_iommu_op_t;
+DEFINE_XEN_GUEST_HANDLE(pv_iommu_op_t);
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
\ No newline at end of file
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b47d48d0e2..28ab815ebc 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -118,6 +118,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
 #define __HYPERVISOR_hypfs_op             42
+#define __HYPERVISOR_iommu_op             43
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
-- 
2.45.2





From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:16:53 2024
From: Teddy Astie <teddy.astie@vates.tech>
Subject: [RFC XEN PATCH 4/5] VT-d: Port IOMMU driver to new subsystem
X-Mailer: git-send-email 2.45.2
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monné <roger.pau@citrix.com>, Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Message-Id: <e2ccc784c1ad64e52ff60245299ab4c700e1d5fd.1718269097.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1718269097.git.teddy.astie@vates.tech>
References: <cover.1718269097.git.teddy.astie@vates.tech>
Date: Thu, 13 Jun 2024 12:16:47 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Port the driver following the guidance in iommu-contexts.md.

Add an arena-based allocator that reserves a fixed chunk of memory and
splits it into 4k pages for use by the IOMMU contexts. The chunk size is
configurable with X86_ARENA_ORDER and dom0-iommu=arena-order=N.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Missing in this RFC:

Preventing guests from mapping on top of reserved regions.
The RMRR-reattach failure cleanup code is incomplete and can cause issues
with a subsequent teardown operation.
---
 xen/arch/x86/include/asm/arena.h     |   54 +
 xen/arch/x86/include/asm/iommu.h     |   44 +-
 xen/arch/x86/include/asm/pci.h       |   17 -
 xen/drivers/passthrough/Kconfig      |   14 +
 xen/drivers/passthrough/vtd/Makefile |    2 +-
 xen/drivers/passthrough/vtd/extern.h |   14 +-
 xen/drivers/passthrough/vtd/iommu.c  | 1555 +++++++++++---------------
 xen/drivers/passthrough/vtd/quirks.c |   21 +-
 xen/drivers/passthrough/x86/Makefile |    1 +
 xen/drivers/passthrough/x86/arena.c  |  157 +++
 xen/drivers/passthrough/x86/iommu.c  |  104 +-
 11 files changed, 980 insertions(+), 1003 deletions(-)
 create mode 100644 xen/arch/x86/include/asm/arena.h
 create mode 100644 xen/drivers/passthrough/x86/arena.c

diff --git a/xen/arch/x86/include/asm/arena.h b/xen/arch/x86/include/asm/arena.h
new file mode 100644
index 0000000000..7555b100e0
--- /dev/null
+++ b/xen/arch/x86/include/asm/arena.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/**
+ * Simple arena-based page allocator.
+ */
+
+#ifndef __XEN_IOMMU_ARENA_H__
+#define __XEN_IOMMU_ARENA_H__
+
+#include "xen/domain.h"
+#include "xen/atomic.h"
+#include "xen/mm-frame.h"
+#include "xen/types.h"
+
+/**
+ * struct page_arena: Page arena structure
+ */
+struct iommu_arena {
+    /* mfn of the first page of the memory region */
+    mfn_t region_start;
+    /* bitmap of allocations */
+    unsigned long *map;
+
+    /* Order of the arena */
+    unsigned int order;
+
+    /* Used page count */
+    atomic_t used_pages;
+};
+
+/**
+ * Initialize an arena using the domheap allocator.
+ * @param [out] arena Arena to allocate
+ * @param [in] domain domain that has ownership of arena pages
+ * @param [in] order order of the arena (power of two of the size)
+ * @param [in] memflags Flags for domheap_alloc_pages()
+ * @return -ENOMEM on arena allocation error, 0 otherwise
+ */
+int iommu_arena_initialize(struct iommu_arena *arena, struct domain *domain,
+                           unsigned int order, unsigned int memflags);
+
+/**
+ * Teardown an arena.
+ * @param [out] arena arena to teardown
+ * @param [in] check check for existing allocations
+ * @return -EBUSY if check is true and pages are still allocated, 0 otherwise
+ */
+int iommu_arena_teardown(struct iommu_arena *arena, bool check);
+
+struct page_info *iommu_arena_allocate_page(struct iommu_arena *arena);
+bool iommu_arena_free_page(struct iommu_arena *arena, struct page_info *page);
+
+#define iommu_arena_size(arena) (1LLU << (arena)->order)
+
+#endif
diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/iommu.h
index 8dc464fbd3..8fb402f1ee 100644
--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -2,14 +2,18 @@
 #ifndef __ARCH_X86_IOMMU_H__
 #define __ARCH_X86_IOMMU_H__
 
+#include <xen/bitmap.h>
 #include <xen/errno.h>
 #include <xen/list.h>
 #include <xen/mem_access.h>
 #include <xen/spinlock.h>
+#include <xen/stdbool.h>
 #include <asm/apicdef.h>
 #include <asm/cache.h>
 #include <asm/processor.h>
 
+#include "arena.h"
+
 #define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
 
 struct g2m_ioport {
@@ -31,27 +35,48 @@ typedef uint64_t daddr_t;
 #define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
 #define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
 
-struct arch_iommu
+struct arch_iommu_context
 {
-    spinlock_t mapping_lock; /* io page table lock */
     struct {
         struct page_list_head list;
         spinlock_t lock;
     } pgtables;
 
-    struct list_head identity_maps;
+    /* Queue for freeing pages */
+    struct page_list_head free_queue;
 
     union {
         /* Intel VT-d */
         struct {
             uint64_t pgd_maddr; /* io page directory machine address */
+            domid_t *didmap; /* per-iommu DID */
+            unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the context uses */
+            bool duplicated_rmrr; /* tag indicating that duplicated rmrr mappings are mapped */
+            uint32_t superpage_progress; /* superpage progress during teardown */
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            struct page_info *root_table;
+        } amd;
+    };
+};
+
+struct arch_iommu
+{
+    spinlock_t lock; /* io page table lock */
+    struct list_head identity_maps;
+
+    struct iommu_arena pt_arena; /* allocator for non-default contexts */
+
+    union {
+        /* Intel VT-d */
+        struct {
             unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
-            unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
         } vtd;
         /* AMD IOMMU */
         struct {
             unsigned int paging_mode;
-            struct page_info *root_table;
+            struct guest_iommu *g_iommu;
         } amd;
     };
 };
@@ -128,14 +153,19 @@ unsigned long *iommu_init_domid(domid_t reserve);
 domid_t iommu_alloc_domid(unsigned long *map);
 void iommu_free_domid(domid_t domid, unsigned long *map);
 
-int __must_check iommu_free_pgtables(struct domain *d);
+struct iommu_context;
+int __must_check iommu_free_pgtables(struct domain *d, struct iommu_context *ctx);
 struct domain_iommu;
 struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd,
+                                                   struct iommu_context *ctx,
                                                    uint64_t contig_mask);
-void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
+void iommu_queue_free_pgtable(struct iommu_context *ctx, struct page_info *pg);
 
 /* Check [start, end] unity map range for correctness. */
 bool iommu_unity_region_ok(const char *prefix, mfn_t start, mfn_t end);
+int arch_iommu_context_init(struct domain *d, struct iommu_context *ctx, u32 flags);
+int arch_iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags);
+int arch_iommu_flush_free_queue(struct domain *d, struct iommu_context *ctx);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
diff --git a/xen/arch/x86/include/asm/pci.h b/xen/arch/x86/include/asm/pci.h
index fd5480d67d..214c1a0948 100644
--- a/xen/arch/x86/include/asm/pci.h
+++ b/xen/arch/x86/include/asm/pci.h
@@ -15,23 +15,6 @@
 
 struct arch_pci_dev {
     vmask_t used_vectors;
-    /*
-     * These fields are (de)initialized under pcidevs-lock. Other uses of
-     * them don't race (de)initialization and hence don't strictly need any
-     * locking.
-     */
-    union {
-        /* Subset of struct arch_iommu's fields, to be used in dom_io. */
-        struct {
-            uint64_t pgd_maddr;
-        } vtd;
-        struct {
-            struct page_info *root_table;
-        } amd;
-    };
-    domid_t pseudo_domid;
-    mfn_t leaf_mfn;
-    struct page_list_head pgtables_list;
 };
 
 int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 78edd80536..1b9f4c8b9c 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -91,3 +91,17 @@ choice
 	config IOMMU_QUARANTINE_SCRATCH_PAGE
 		bool "scratch page"
 endchoice
+
+config X86_ARENA_ORDER
+	int "IOMMU arena order" if EXPERT
+	depends on X86
+	default 9
+	help
+	  Specifies the default order of the Dom0 IOMMU arena allocator;
+	  the arena uses 2^order pages. If your system has many PCI devices
+	  or you encounter IOMMU errors in Dom0, try increasing this value.
+	  This value can be overridden with the command-line option
+	  dom0-iommu=arena-order=N.
+
+	  [7] 128 pages, 512 KB arena
+	  [9] 512 pages, 2 MB arena (default)
+	  [11] 2048 pages, 8 MB arena
diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index 667590ee52..69f808a44a 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -80,12 +80,10 @@ uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node);
 void free_pgtable_maddr(u64 maddr);
 void *map_vtd_domain_page(u64 maddr);
 void unmap_vtd_domain_page(const void *va);
-int domain_context_mapping_one(struct domain *domain, struct vtd_iommu *iommu,
-                               uint8_t bus, uint8_t devfn,
-                               const struct pci_dev *pdev, domid_t domid,
-                               paddr_t pgd_maddr, unsigned int mode);
-int domain_context_unmap_one(struct domain *domain, struct vtd_iommu *iommu,
-                             uint8_t bus, uint8_t devfn);
+int apply_context_single(struct domain *domain, struct iommu_context *ctx,
+                         struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn);
+int unapply_context_single(struct domain *domain, struct iommu_context *ctx,
+                           struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn);
 int cf_check intel_iommu_get_reserved_device_memory(
     iommu_grdm_t *func, void *ctxt);
 
@@ -106,8 +104,8 @@ void platform_quirks_init(void);
 void vtd_ops_preamble_quirk(struct vtd_iommu *iommu);
 void vtd_ops_postamble_quirk(struct vtd_iommu *iommu);
 int __must_check me_wifi_quirk(struct domain *domain, uint8_t bus,
-                               uint8_t devfn, domid_t domid, paddr_t pgd_maddr,
-                               unsigned int mode);
+                               uint8_t devfn, domid_t domid,
+                               unsigned int mode, struct iommu_context *ctx);
 void pci_vtd_quirk(const struct pci_dev *);
 void quirk_iommu_caps(struct vtd_iommu *iommu);
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index e13be244c1..068aeed876 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -30,12 +30,21 @@
 #include <xen/time.h>
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
+#include <xen/sched.h>
+#include <xen/event.h>
 #include <xen/keyhandler.h>
+#include <xen/list.h>
+#include <xen/spinlock.h>
+#include <xen/iommu.h>
+#include <xen/lib.h>
 #include <asm/msi.h>
 #include <asm/nops.h>
 #include <asm/irq.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/p2m.h>
+#include <asm/bitops.h>
+#include <asm/iommu.h>
+#include <asm/page.h>
 #include <mach_apic.h>
 #include "iommu.h"
 #include "dmar.h"
@@ -46,14 +55,6 @@
 #define CONTIG_MASK DMA_PTE_CONTIG_MASK
 #include <asm/pt-contig-markers.h>
 
-/* dom_io is used as a sentinel for quarantined devices */
-#define QUARANTINE_SKIP(d, pgd_maddr) ((d) == dom_io && !(pgd_maddr))
-#define DEVICE_DOMID(d, pdev) ((d) != dom_io ? (d)->domain_id \
-                                             : (pdev)->arch.pseudo_domid)
-#define DEVICE_PGTABLE(d, pdev) ((d) != dom_io \
-                                 ? dom_iommu(d)->arch.vtd.pgd_maddr \
-                                 : (pdev)->arch.vtd.pgd_maddr)
-
 bool __read_mostly iommu_igfx = true;
 bool __read_mostly iommu_qinval = true;
 #ifndef iommu_snoop
@@ -206,26 +207,14 @@ static bool any_pdev_behind_iommu(const struct domain *d,
  * clear iommu in iommu_bitmap and clear domain_id in domid_bitmap.
  */
 static void check_cleanup_domid_map(const struct domain *d,
+                                    const struct iommu_context *ctx,
                                     const struct pci_dev *exclude,
                                     struct vtd_iommu *iommu)
 {
-    bool found;
-
-    if ( d == dom_io )
-        return;
-
-    found = any_pdev_behind_iommu(d, exclude, iommu);
-    /*
-     * Hidden devices are associated with DomXEN but usable by the hardware
-     * domain. Hence they need considering here as well.
-     */
-    if ( !found && is_hardware_domain(d) )
-        found = any_pdev_behind_iommu(dom_xen, exclude, iommu);
-
-    if ( !found )
+    if ( !any_pdev_behind_iommu(d, exclude, iommu) )
     {
-        clear_bit(iommu->index, dom_iommu(d)->arch.vtd.iommu_bitmap);
-        cleanup_domid_map(d->domain_id, iommu);
+        clear_bit(iommu->index, ctx->arch.vtd.iommu_bitmap);
+        cleanup_domid_map(ctx->arch.vtd.didmap[iommu->index], iommu);
     }
 }
 
@@ -312,8 +301,9 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
  *   PTE for the requested address,
  * - for target == 0 the full PTE contents below PADDR_BITS limit.
  */
-static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
-                                       unsigned int target,
+static uint64_t addr_to_dma_page_maddr(struct domain *domain,
+                                       struct iommu_context *ctx,
+                                       daddr_t addr, unsigned int target,
                                        unsigned int *flush_flags, bool alloc)
 {
     struct domain_iommu *hd = dom_iommu(domain);
@@ -323,10 +313,9 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
     u64 pte_maddr = 0;
 
     addr &= (((u64)1) << addr_width) - 1;
-    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
     ASSERT(target || !alloc);
 
-    if ( !hd->arch.vtd.pgd_maddr )
+    if ( !ctx->arch.vtd.pgd_maddr )
     {
         struct page_info *pg;
 
@@ -334,13 +323,13 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
             goto out;
 
         pte_maddr = level;
-        if ( !(pg = iommu_alloc_pgtable(hd, 0)) )
+        if ( !(pg = iommu_alloc_pgtable(hd, ctx, 0)) )
             goto out;
 
-        hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+        ctx->arch.vtd.pgd_maddr = page_to_maddr(pg);
     }
 
-    pte_maddr = hd->arch.vtd.pgd_maddr;
+    pte_maddr = ctx->arch.vtd.pgd_maddr;
     parent = map_vtd_domain_page(pte_maddr);
     while ( level > target )
     {
@@ -376,7 +365,7 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
             }
 
             pte_maddr = level - 1;
-            pg = iommu_alloc_pgtable(hd, DMA_PTE_CONTIG_MASK);
+            pg = iommu_alloc_pgtable(hd, ctx, DMA_PTE_CONTIG_MASK);
             if ( !pg )
                 break;
 
@@ -428,38 +417,25 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
     return pte_maddr;
 }
 
-static paddr_t domain_pgd_maddr(struct domain *d, paddr_t pgd_maddr,
-                                unsigned int nr_pt_levels)
+static paddr_t get_context_pgd(struct domain *d, struct iommu_context *ctx,
+                               unsigned int nr_pt_levels)
 {
-    struct domain_iommu *hd = dom_iommu(d);
     unsigned int agaw;
+    paddr_t pgd_maddr;
 
-    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-
-    if ( pgd_maddr )
-        /* nothing */;
-    else if ( iommu_use_hap_pt(d) )
+    if ( !ctx->arch.vtd.pgd_maddr )
     {
-        pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
+        /*
+         * Ensure we have pagetables allocated down to the smallest
+         * level the loop below may need to run to.
+         */
+        addr_to_dma_page_maddr(d, ctx, 0, min_pt_levels, NULL, true);
 
-        pgd_maddr = pagetable_get_paddr(pgt);
+        if ( !ctx->arch.vtd.pgd_maddr )
+            return 0;
     }
-    else
-    {
-        if ( !hd->arch.vtd.pgd_maddr )
-        {
-            /*
-             * Ensure we have pagetables allocated down to the smallest
-             * level the loop below may need to run to.
-             */
-            addr_to_dma_page_maddr(d, 0, min_pt_levels, NULL, true);
-
-            if ( !hd->arch.vtd.pgd_maddr )
-                return 0;
-        }
 
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
-    }
+    pgd_maddr = ctx->arch.vtd.pgd_maddr;
 
     /* Skip top level(s) of page tables for less-than-maximum level DRHDs. */
     for ( agaw = level_to_agaw(4);
@@ -727,28 +703,18 @@ static int __must_check iommu_flush_all(void)
     return rc;
 }
 
-static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
+static int __must_check cf_check iommu_flush_iotlb(struct domain *d,
+                                                   struct iommu_context *ctx,
+                                                   dfn_t dfn,
                                                    unsigned long page_count,
                                                    unsigned int flush_flags)
 {
-    struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct vtd_iommu *iommu;
     bool flush_dev_iotlb;
     int iommu_domid;
     int ret = 0;
 
-    if ( flush_flags & IOMMU_FLUSHF_all )
-    {
-        dfn = INVALID_DFN;
-        page_count = 0;
-    }
-    else
-    {
-        ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
-        ASSERT(flush_flags);
-    }
-
     /*
      * No need pcideves_lock here because we have flush
      * when assign/deassign device
@@ -759,13 +725,20 @@ static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
 
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, hd->arch.vtd.iommu_bitmap) )
-            continue;
+        if ( ctx )
+        {
+            if ( !test_bit(iommu->index, ctx->arch.vtd.iommu_bitmap) )
+                continue;
+
+            iommu_domid = get_iommu_did(ctx->arch.vtd.didmap[iommu->index],
+                                        iommu, true);
+
+            if ( iommu_domid == -1 )
+                continue;
+        }
+        else
+            iommu_domid = 0;
 
         flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
-        iommu_domid = get_iommu_did(d->domain_id, iommu, !d->is_dying);
-        if ( iommu_domid == -1 )
-            continue;
 
         if ( !page_count || (page_count & (page_count - 1)) ||
              dfn_eq(dfn, INVALID_DFN) || !IS_ALIGNED(dfn_x(dfn), page_count) )
@@ -784,10 +757,13 @@ static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
             ret = rc;
     }
 
+    if ( !ret && ctx )
+        arch_iommu_flush_free_queue(d, ctx);
+
     return ret;
 }
 
-static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
+static void queue_free_pt(struct iommu_context *ctx, mfn_t mfn, unsigned int level)
 {
     if ( level > 1 )
     {
@@ -796,13 +772,13 @@ static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level
 
         for ( i = 0; i < PTE_NUM; ++i )
             if ( dma_pte_present(pt[i]) && !dma_pte_superpage(pt[i]) )
-                queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(pt[i])),
+                queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(pt[i])),
                               level - 1);
 
         unmap_domain_page(pt);
     }
 
-    iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
+    iommu_queue_free_pgtable(ctx, mfn_to_page(mfn));
 }
 
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
@@ -1433,11 +1409,6 @@ static int cf_check intel_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
-    hd->arch.vtd.iommu_bitmap = xzalloc_array(unsigned long,
-                                              BITS_TO_LONGS(nr_iommus));
-    if ( !hd->arch.vtd.iommu_bitmap )
-        return -ENOMEM;
-
     hd->arch.vtd.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
     return 0;
@@ -1465,32 +1436,22 @@ static void __hwdom_init cf_check intel_iommu_hwdom_init(struct domain *d)
     }
 }
 
-/*
- * This function returns
- * - a negative errno value upon error,
- * - zero upon success when previously the entry was non-present, or this isn't
- *   the "main" request for a device (pdev == NULL), or for no-op quarantining
- *   assignments,
- * - positive (one) upon success when previously the entry was present and this
- *   is the "main" request for a device (pdev != NULL).
+/**
+ * Apply a context to a device.
+ * @param domain Domain of the context
+ * @param ctx IOMMU context to apply
+ * @param iommu IOMMU hardware to use (must match the device's IOMMU)
+ * @param bus PCI bus of the device
+ * @param devfn PCI device function (may differ from the device's own
+ *              devfn, e.g. for phantom functions)
  */
-int domain_context_mapping_one(
-    struct domain *domain,
-    struct vtd_iommu *iommu,
-    uint8_t bus, uint8_t devfn, const struct pci_dev *pdev,
-    domid_t domid, paddr_t pgd_maddr, unsigned int mode)
+int apply_context_single(struct domain *domain, struct iommu_context *ctx,
+                         struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn)
 {
-    struct domain_iommu *hd = dom_iommu(domain);
     struct context_entry *context, *context_entries, lctxt;
-    __uint128_t old;
+    __uint128_t res, old;
     uint64_t maddr;
-    uint16_t seg = iommu->drhd->segment, prev_did = 0;
-    struct domain *prev_dom = NULL;
+    uint16_t seg = iommu->drhd->segment, prev_did = 0, did;
     int rc, ret;
-    bool flush_dev_iotlb;
-
-    if ( QUARANTINE_SKIP(domain, pgd_maddr) )
-        return 0;
+    bool flush_dev_iotlb, overwrite_entry = false;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1499,28 +1460,15 @@ int domain_context_mapping_one(
     context = &context_entries[devfn];
     old = (lctxt = *context).full;
 
-    if ( context_present(lctxt) )
-    {
-        domid_t domid;
+    did = ctx->arch.vtd.didmap[iommu->index];
 
+    if ( context_present(lctxt) )
+    {
         prev_did = context_domain_id(lctxt);
-        domid = did_to_domain_id(iommu, prev_did);
-        if ( domid < DOMID_FIRST_RESERVED )
-            prev_dom = rcu_lock_domain_by_id(domid);
-        else if ( pdev ? domid == pdev->arch.pseudo_domid : domid > DOMID_MASK )
-            prev_dom = rcu_lock_domain(dom_io);
-        if ( !prev_dom )
-        {
-            spin_unlock(&iommu->lock);
-            unmap_vtd_domain_page(context_entries);
-            dprintk(XENLOG_DEBUG VTDPREFIX,
-                    "no domain for did %u (nr_dom %u)\n",
-                    prev_did, cap_ndoms(iommu->cap));
-            return -ESRCH;
-        }
+        overwrite_entry = true;
     }
 
-    if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
+    if ( iommu_hwdom_passthrough && is_hardware_domain(domain) && !ctx->id )
     {
         context_set_translation_type(lctxt, CONTEXT_TT_PASS_THRU);
     }
@@ -1528,16 +1476,10 @@ int domain_context_mapping_one(
     {
         paddr_t root;
 
-        spin_lock(&hd->arch.mapping_lock);
-
-        root = domain_pgd_maddr(domain, pgd_maddr, iommu->nr_pt_levels);
+        root = get_context_pgd(domain, ctx, iommu->nr_pt_levels);
         if ( !root )
         {
-            spin_unlock(&hd->arch.mapping_lock);
-            spin_unlock(&iommu->lock);
             unmap_vtd_domain_page(context_entries);
-            if ( prev_dom )
-                rcu_unlock_domain(prev_dom);
             return -ENOMEM;
         }
 
@@ -1546,98 +1488,39 @@ int domain_context_mapping_one(
             context_set_translation_type(lctxt, CONTEXT_TT_DEV_IOTLB);
         else
             context_set_translation_type(lctxt, CONTEXT_TT_MULTI_LEVEL);
-
-        spin_unlock(&hd->arch.mapping_lock);
     }
 
-    rc = context_set_domain_id(&lctxt, domid, iommu);
+    rc = context_set_domain_id(&lctxt, did, iommu);
     if ( rc )
-    {
-    unlock:
-        spin_unlock(&iommu->lock);
-        unmap_vtd_domain_page(context_entries);
-        if ( prev_dom )
-            rcu_unlock_domain(prev_dom);
-        return rc;
-    }
-
-    if ( !prev_dom )
-    {
-        context_set_address_width(lctxt, level_to_agaw(iommu->nr_pt_levels));
-        context_set_fault_enable(lctxt);
-        context_set_present(lctxt);
-    }
-    else if ( prev_dom == domain )
-    {
-        ASSERT(lctxt.full == context->full);
-        rc = !!pdev;
         goto unlock;
-    }
-    else
-    {
-        ASSERT(context_address_width(lctxt) ==
-               level_to_agaw(iommu->nr_pt_levels));
-        ASSERT(!context_fault_disable(lctxt));
-    }
-
-    if ( cpu_has_cx16 )
-    {
-        __uint128_t res = cmpxchg16b(context, &old, &lctxt.full);
 
-        /*
-         * Hardware does not update the context entry behind our backs,
-         * so the return value should match "old".
-         */
-        if ( res != old )
-        {
-            if ( pdev )
-                check_cleanup_domid_map(domain, pdev, iommu);
-            printk(XENLOG_ERR
-                   "%pp: unexpected context entry %016lx_%016lx (expected %016lx_%016lx)\n",
-                   &PCI_SBDF(seg, bus, devfn),
-                   (uint64_t)(res >> 64), (uint64_t)res,
-                   (uint64_t)(old >> 64), (uint64_t)old);
-            rc = -EILSEQ;
-            goto unlock;
-        }
-    }
-    else if ( !prev_dom || !(mode & MAP_WITH_RMRR) )
-    {
-        context_clear_present(*context);
-        iommu_sync_cache(context, sizeof(*context));
+    context_set_address_width(lctxt, level_to_agaw(iommu->nr_pt_levels));
+    context_set_fault_enable(lctxt);
+    context_set_present(lctxt);
 
-        write_atomic(&context->hi, lctxt.hi);
-        /* No barrier should be needed between these two. */
-        write_atomic(&context->lo, lctxt.lo);
-    }
-    else /* Best effort, updating DID last. */
-    {
-         /*
-          * By non-atomically updating the context entry's DID field last,
-          * during a short window in time TLB entries with the old domain ID
-          * but the new page tables may be inserted.  This could affect I/O
-          * of other devices using this same (old) domain ID.  Such updating
-          * therefore is not a problem if this was the only device associated
-          * with the old domain ID.  Diverting I/O of any of a dying domain's
-          * devices to the quarantine page tables is intended anyway.
-          */
-        if ( !(mode & (MAP_OWNER_DYING | MAP_SINGLE_DEVICE)) )
-            printk(XENLOG_WARNING VTDPREFIX
-                   " %pp: reassignment may cause %pd data corruption\n",
-                   &PCI_SBDF(seg, bus, devfn), prev_dom);
+    res = cmpxchg16b(context, &old, &lctxt.full);
 
-        write_atomic(&context->lo, lctxt.lo);
-        /* No barrier should be needed between these two. */
-        write_atomic(&context->hi, lctxt.hi);
+    /*
+     * Hardware does not update the context entry behind our backs,
+     * so the return value should match "old".
+     */
+    if ( res != old )
+    {
+        printk(XENLOG_ERR
+                "%pp: unexpected context entry %016lx_%016lx (expected %016lx_%016lx)\n",
+                &PCI_SBDF(seg, bus, devfn),
+                (uint64_t)(res >> 64), (uint64_t)res,
+                (uint64_t)(old >> 64), (uint64_t)old);
+        rc = -EILSEQ;
+        goto unlock;
     }
 
     iommu_sync_cache(context, sizeof(struct context_entry));
-    spin_unlock(&iommu->lock);
 
     rc = iommu_flush_context_device(iommu, prev_did, PCI_BDF(bus, devfn),
-                                    DMA_CCMD_MASK_NOBIT, !prev_dom);
+                                    DMA_CCMD_MASK_NOBIT, !overwrite_entry);
     flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
-    ret = iommu_flush_iotlb_dsi(iommu, prev_did, !prev_dom, flush_dev_iotlb);
+    ret = iommu_flush_iotlb_dsi(iommu, prev_did, !overwrite_entry, flush_dev_iotlb);
 
     /*
      * The current logic for returns:
@@ -1653,230 +1536,55 @@ int domain_context_mapping_one(
     if ( rc > 0 )
         rc = 0;
 
-    set_bit(iommu->index, hd->arch.vtd.iommu_bitmap);
+    set_bit(iommu->index, ctx->arch.vtd.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
+    spin_unlock(&iommu->lock);
 
     if ( !seg && !rc )
-        rc = me_wifi_quirk(domain, bus, devfn, domid, pgd_maddr, mode);
-
-    if ( rc && !(mode & MAP_ERROR_RECOVERY) )
-    {
-        if ( !prev_dom ||
-             /*
-              * Unmapping here means DEV_TYPE_PCI devices with RMRRs (if such
-              * exist) would cause problems if such a region was actually
-              * accessed.
-              */
-             (prev_dom == dom_io && !pdev) )
-            ret = domain_context_unmap_one(domain, iommu, bus, devfn);
-        else
-            ret = domain_context_mapping_one(prev_dom, iommu, bus, devfn, pdev,
-                                             DEVICE_DOMID(prev_dom, pdev),
-                                             DEVICE_PGTABLE(prev_dom, pdev),
-                                             (mode & MAP_WITH_RMRR) |
-                                             MAP_ERROR_RECOVERY) < 0;
-
-        if ( !ret && pdev && pdev->devfn == devfn )
-            check_cleanup_domid_map(domain, pdev, iommu);
-    }
+        rc = me_wifi_quirk(domain, bus, devfn, did, 0, ctx);
 
-    if ( prev_dom )
-        rcu_unlock_domain(prev_dom);
+    return rc;
 
-    return rc ?: pdev && prev_dom;
+ unlock:
+    unmap_vtd_domain_page(context_entries);
+    spin_unlock(&iommu->lock);
+    return rc;
 }
 
-static const struct acpi_drhd_unit *domain_context_unmap(
-    struct domain *d, uint8_t devfn, struct pci_dev *pdev);
-
-static int domain_context_mapping(struct domain *domain, u8 devfn,
-                                  struct pci_dev *pdev)
+int apply_context(struct domain *d, struct iommu_context *ctx,
+                  struct pci_dev *pdev, u8 devfn)
 {
     const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
-    const struct acpi_rmrr_unit *rmrr;
-    paddr_t pgd_maddr = DEVICE_PGTABLE(domain, pdev);
-    domid_t orig_domid = pdev->arch.pseudo_domid;
     int ret = 0;
-    unsigned int i, mode = 0;
-    uint16_t seg = pdev->seg, bdf;
-    uint8_t bus = pdev->bus, secbus;
-
-    /*
-     * Generally we assume only devices from one node to get assigned to a
-     * given guest.  But even if not, by replacing the prior value here we
-     * guarantee that at least some basic allocations for the device being
-     * added will get done against its node.  Any further allocations for
-     * this or other devices may be penalized then, but some would also be
-     * if we left other than NUMA_NO_NODE untouched here.
-     */
-    if ( drhd && drhd->iommu->node != NUMA_NO_NODE )
-        dom_iommu(domain)->node = drhd->iommu->node;
-
-    ASSERT(pcidevs_locked());
-
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment != pdev->seg || bdf != pdev->sbdf.bdf )
-            continue;
 
-        mode |= MAP_WITH_RMRR;
-        break;
-    }
+    if ( !drhd )
+        return -EINVAL;
 
-    if ( domain != pdev->domain && pdev->domain != dom_io )
+    if ( pdev->type == DEV_TYPE_PCI_HOST_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe2PCI_BRIDGE ||
+         pdev->type == DEV_TYPE_LEGACY_PCI_BRIDGE )
     {
-        if ( pdev->domain->is_dying )
-            mode |= MAP_OWNER_DYING;
-        else if ( drhd &&
-                  !any_pdev_behind_iommu(pdev->domain, pdev, drhd->iommu) &&
-                  !pdev->phantom_stride )
-            mode |= MAP_SINGLE_DEVICE;
+        printk(XENLOG_WARNING VTDPREFIX " Ignoring apply_context on PCI bridge\n");
+        return 0;
     }
 
-    switch ( pdev->type )
-    {
-        bool prev_present;
-
-    case DEV_TYPE_PCI_HOST_BRIDGE:
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:Hostbridge: skip %pp map\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        if ( !is_hardware_domain(domain) )
-            return -EPERM;
-        break;
-
-    case DEV_TYPE_PCIe_BRIDGE:
-    case DEV_TYPE_PCIe2PCI_BRIDGE:
-    case DEV_TYPE_LEGACY_PCI_BRIDGE:
-        break;
-
-    case DEV_TYPE_PCIe_ENDPOINT:
-        if ( !drhd )
-            return -ENODEV;
-
-        if ( iommu_quarantine && orig_domid == DOMID_INVALID )
-        {
-            pdev->arch.pseudo_domid =
-                iommu_alloc_domid(drhd->iommu->pseudo_domid_map);
-            if ( pdev->arch.pseudo_domid == DOMID_INVALID )
-                return -ENOSPC;
-        }
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCIe: map %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn, pdev,
-                                         DEVICE_DOMID(domain, pdev), pgd_maddr,
-                                         mode);
-        if ( ret > 0 )
-            ret = 0;
-        if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
-            enable_ats_device(pdev, &drhd->iommu->ats_devices);
-
-        break;
-
-    case DEV_TYPE_PCI:
-        if ( !drhd )
-            return -ENODEV;
-
-        if ( iommu_quarantine && orig_domid == DOMID_INVALID )
-        {
-            pdev->arch.pseudo_domid =
-                iommu_alloc_domid(drhd->iommu->pseudo_domid_map);
-            if ( pdev->arch.pseudo_domid == DOMID_INVALID )
-                return -ENOSPC;
-        }
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCI: map %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
-                                         pdev, DEVICE_DOMID(domain, pdev),
-                                         pgd_maddr, mode);
-        if ( ret < 0 )
-            break;
-        prev_present = ret;
-
-        if ( (ret = find_upstream_bridge(seg, &bus, &devfn, &secbus)) < 1 )
-        {
-            if ( !ret )
-                break;
-            ret = -ENXIO;
-        }
-        /*
-         * Strictly speaking if the device is the only one behind this bridge
-         * and the only one with this (secbus,0,0) tuple, it could be allowed
-         * to be re-assigned regardless of RMRR presence.  But let's deal with
-         * that case only if it is actually found in the wild.  Note that
-         * dealing with this just here would still not render the operation
-         * secure.
-         */
-        else if ( prev_present && (mode & MAP_WITH_RMRR) &&
-                  domain != pdev->domain )
-            ret = -EOPNOTSUPP;
-
-        /*
-         * Mapping a bridge should, if anything, pass the struct pci_dev of
-         * that bridge. Since bridges don't normally get assigned to guests,
-         * their owner would be the wrong one. Pass NULL instead.
-         */
-        if ( ret >= 0 )
-            ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
-                                             NULL, DEVICE_DOMID(domain, pdev),
-                                             pgd_maddr, mode);
-
-        /*
-         * Devices behind PCIe-to-PCI/PCIx bridge may generate different
-         * requester-id. It may originate from devfn=0 on the secondary bus
-         * behind the bridge. Map that id as well if we didn't already.
-         *
-         * Somewhat similar as for bridges, we don't want to pass a struct
-         * pci_dev here - there may not even exist one for this (secbus,0,0)
-         * tuple. If there is one, without properly working device groups it
-         * may again not have the correct owner.
-         */
-        if ( !ret && pdev_type(seg, bus, devfn) == DEV_TYPE_PCIe2PCI_BRIDGE &&
-             (secbus != pdev->bus || pdev->devfn != 0) )
-            ret = domain_context_mapping_one(domain, drhd->iommu, secbus, 0,
-                                             NULL, DEVICE_DOMID(domain, pdev),
-                                             pgd_maddr, mode);
-
-        if ( ret )
-        {
-            if ( !prev_present )
-                domain_context_unmap(domain, devfn, pdev);
-            else if ( pdev->domain != domain ) /* Avoid infinite recursion. */
-                domain_context_mapping(pdev->domain, devfn, pdev);
-        }
+    ASSERT(pcidevs_locked());
 
-        break;
+    ret = apply_context_single(d, ctx, drhd->iommu, pdev->bus, devfn);
 
-    default:
-        dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
-                domain, pdev->type, &PCI_SBDF(seg, bus, devfn));
-        ret = -EINVAL;
-        break;
-    }
+    if ( !ret && ats_device(pdev, drhd) > 0 )
+        enable_ats_device(pdev, &drhd->iommu->ats_devices);
 
     if ( !ret && devfn == pdev->devfn )
         pci_vtd_quirk(pdev);
 
-    if ( ret && drhd && orig_domid == DOMID_INVALID )
-    {
-        iommu_free_domid(pdev->arch.pseudo_domid,
-                         drhd->iommu->pseudo_domid_map);
-        pdev->arch.pseudo_domid = DOMID_INVALID;
-    }
-
     return ret;
 }
 
-int domain_context_unmap_one(
-    struct domain *domain,
-    struct vtd_iommu *iommu,
-    uint8_t bus, uint8_t devfn)
+int unapply_context_single(struct domain *domain, struct iommu_context *ctx,
+                           struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn)
 {
     struct context_entry *context, *context_entries;
     u64 maddr;
@@ -1928,8 +1636,8 @@ int domain_context_unmap_one(
     unmap_vtd_domain_page(context_entries);
 
     if ( !iommu->drhd->segment && !rc )
-        rc = me_wifi_quirk(domain, bus, devfn, DOMID_INVALID, 0,
-                           UNMAP_ME_PHANTOM_FUNC);
+        rc = me_wifi_quirk(domain, bus, devfn, DOMID_INVALID, UNMAP_ME_PHANTOM_FUNC,
+                           NULL);
 
     if ( rc && !is_hardware_domain(domain) && domain != dom_io )
     {
@@ -1947,105 +1655,14 @@ int domain_context_unmap_one(
     return rc;
 }
 
-static const struct acpi_drhd_unit *domain_context_unmap(
-    struct domain *domain,
-    uint8_t devfn,
-    struct pci_dev *pdev)
-{
-    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
-    struct vtd_iommu *iommu = drhd ? drhd->iommu : NULL;
-    int ret;
-    uint16_t seg = pdev->seg;
-    uint8_t bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
-
-    switch ( pdev->type )
-    {
-    case DEV_TYPE_PCI_HOST_BRIDGE:
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:Hostbridge: skip %pp unmap\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        return ERR_PTR(is_hardware_domain(domain) ? 0 : -EPERM);
-
-    case DEV_TYPE_PCIe_BRIDGE:
-    case DEV_TYPE_PCIe2PCI_BRIDGE:
-    case DEV_TYPE_LEGACY_PCI_BRIDGE:
-        return ERR_PTR(0);
-
-    case DEV_TYPE_PCIe_ENDPOINT:
-        if ( !iommu )
-            return ERR_PTR(-ENODEV);
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCIe: unmap %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        ret = domain_context_unmap_one(domain, iommu, bus, devfn);
-        if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
-            disable_ats_device(pdev);
-
-        break;
-
-    case DEV_TYPE_PCI:
-        if ( !iommu )
-            return ERR_PTR(-ENODEV);
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCI: unmap %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        ret = domain_context_unmap_one(domain, iommu, bus, devfn);
-        if ( ret )
-            break;
-
-        tmp_bus = bus;
-        tmp_devfn = devfn;
-        if ( (ret = find_upstream_bridge(seg, &tmp_bus, &tmp_devfn,
-                                         &secbus)) < 1 )
-        {
-            if ( ret )
-            {
-                ret = -ENXIO;
-                if ( !domain->is_dying &&
-                     !is_hardware_domain(domain) && domain != dom_io )
-                {
-                    domain_crash(domain);
-                    /* Make upper layers continue in a best effort manner. */
-                    ret = 0;
-                }
-            }
-            break;
-        }
-
-        ret = domain_context_unmap_one(domain, iommu, tmp_bus, tmp_devfn);
-        /* PCIe to PCI/PCIx bridge */
-        if ( !ret && pdev_type(seg, tmp_bus, tmp_devfn) == DEV_TYPE_PCIe2PCI_BRIDGE )
-            ret = domain_context_unmap_one(domain, iommu, secbus, 0);
-
-        break;
-
-    default:
-        dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
-                domain, pdev->type, &PCI_SBDF(seg, bus, devfn));
-        return ERR_PTR(-EINVAL);
-    }
-
-    if ( !ret && pdev->devfn == devfn &&
-         !QUARANTINE_SKIP(domain, pdev->arch.vtd.pgd_maddr) )
-        check_cleanup_domid_map(domain, pdev, iommu);
-
-    return drhd;
-}
-
-static void cf_check iommu_clear_root_pgtable(struct domain *d)
+static void cf_check iommu_clear_root_pgtable(struct domain *d, struct iommu_context *ctx)
 {
-    struct domain_iommu *hd = dom_iommu(d);
-
-    spin_lock(&hd->arch.mapping_lock);
-    hd->arch.vtd.pgd_maddr = 0;
-    spin_unlock(&hd->arch.mapping_lock);
+    ctx->arch.vtd.pgd_maddr = 0;
 }
 
 static void cf_check iommu_domain_teardown(struct domain *d)
 {
-    struct domain_iommu *hd = dom_iommu(d);
+    struct iommu_context *ctx = iommu_default_context(d);
     const struct acpi_drhd_unit *drhd;
 
     if ( list_empty(&acpi_drhd_units) )
@@ -2053,37 +1670,15 @@ static void cf_check iommu_domain_teardown(struct domain *d)
 
     iommu_identity_map_teardown(d);
 
-    ASSERT(!hd->arch.vtd.pgd_maddr);
+    ASSERT(!ctx->arch.vtd.pgd_maddr);
 
     for_each_drhd_unit ( drhd )
         cleanup_domid_map(d->domain_id, drhd->iommu);
-
-    XFREE(hd->arch.vtd.iommu_bitmap);
-}
-
-static void quarantine_teardown(struct pci_dev *pdev,
-                                const struct acpi_drhd_unit *drhd)
-{
-    struct domain_iommu *hd = dom_iommu(dom_io);
-
-    ASSERT(pcidevs_locked());
-
-    if ( !pdev->arch.vtd.pgd_maddr )
-        return;
-
-    ASSERT(page_list_empty(&hd->arch.pgtables.list));
-    page_list_move(&hd->arch.pgtables.list, &pdev->arch.pgtables_list);
-    while ( iommu_free_pgtables(dom_io) == -ERESTART )
-        /* nothing */;
-    pdev->arch.vtd.pgd_maddr = 0;
-
-    if ( drhd )
-        cleanup_domid_map(pdev->arch.pseudo_domid, drhd->iommu);
 }
 
 static int __must_check cf_check intel_iommu_map_page(
     struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
-    unsigned int *flush_flags)
+    unsigned int *flush_flags, struct iommu_context *ctx)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page, *pte, old, new = {};
@@ -2095,32 +1690,28 @@ static int __must_check cf_check intel_iommu_map_page(
            PAGE_SIZE_4K);
 
     /* Do nothing if VT-d shares EPT page table */
-    if ( iommu_use_hap_pt(d) )
+    if ( iommu_use_hap_pt(d) && !ctx->id )
         return 0;
 
     /* Do nothing if hardware domain and iommu supports pass thru. */
-    if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
+    if ( iommu_hwdom_passthrough && is_hardware_domain(d) && !ctx->id )
         return 0;
 
-    spin_lock(&hd->arch.mapping_lock);
-
     /*
      * IOMMU mapping request can be safely ignored when the domain is dying.
      *
-     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * hd->lock guarantees that d->is_dying will be observed
      * before any page tables are freed (see iommu_free_pgtables())
      */
     if ( d->is_dying )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
         return 0;
-    }
 
-    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), level, flush_flags,
+    pg_maddr = addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), level, flush_flags,
                                       true);
     if ( pg_maddr < PAGE_SIZE )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
-    }
 
@@ -2141,7 +1732,6 @@ static int __must_check cf_check intel_iommu_map_page(
 
     if ( !((old.val ^ new.val) & ~DMA_PTE_CONTIG_MASK) )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
@@ -2170,7 +1760,7 @@ static int __must_check cf_check intel_iommu_map_page(
         new.val &= ~(LEVEL_MASK << level_to_offset_bits(level));
         dma_set_pte_superpage(new);
 
-        pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), ++level,
+        pg_maddr = addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), ++level,
                                           flush_flags, false);
         BUG_ON(pg_maddr < PAGE_SIZE);
 
@@ -2180,11 +1770,10 @@ static int __must_check cf_check intel_iommu_map_page(
         iommu_sync_cache(pte, sizeof(*pte));
 
         *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
-        iommu_queue_free_pgtable(hd, pg);
+        iommu_queue_free_pgtable(ctx, pg);
         perfc_incr(iommu_pt_coalesces);
     }
 
-    spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 
     *flush_flags |= IOMMU_FLUSHF_added;
@@ -2193,7 +1782,7 @@ static int __must_check cf_check intel_iommu_map_page(
         *flush_flags |= IOMMU_FLUSHF_modified;
 
         if ( IOMMUF_order(flags) && !dma_pte_superpage(old) )
-            queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+            queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(old)),
                           IOMMUF_order(flags) / LEVEL_STRIDE);
     }
 
@@ -2201,7 +1790,8 @@ static int __must_check cf_check intel_iommu_map_page(
 }
 
 static int __must_check cf_check intel_iommu_unmap_page(
-    struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags)
+    struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags,
+    struct iommu_context *ctx)
 {
     struct domain_iommu *hd = dom_iommu(d);
     daddr_t addr = dfn_to_daddr(dfn);
@@ -2216,28 +1806,23 @@ static int __must_check cf_check intel_iommu_unmap_page(
     ASSERT((hd->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
 
     /* Do nothing if VT-d shares EPT page table */
-    if ( iommu_use_hap_pt(d) )
+    if ( iommu_use_hap_pt(d) && !ctx->id )
         return 0;
 
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
 
-    spin_lock(&hd->arch.mapping_lock);
     /* get target level pte */
-    pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+    pg_maddr = addr_to_dma_page_maddr(d, ctx, addr, level, flush_flags, false);
     if ( pg_maddr < PAGE_SIZE )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
         return pg_maddr ? -ENOMEM : 0;
-    }
 
     page = map_vtd_domain_page(pg_maddr);
     pte = &page[address_level_offset(addr, level)];
 
     if ( !dma_pte_present(*pte) )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
@@ -2255,7 +1840,7 @@ static int __must_check cf_check intel_iommu_unmap_page(
 
         unmap_vtd_domain_page(page);
 
-        pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+        pg_maddr = addr_to_dma_page_maddr(d, ctx, addr, level, flush_flags, false);
         BUG_ON(pg_maddr < PAGE_SIZE);
 
         page = map_vtd_domain_page(pg_maddr);
@@ -2264,42 +1849,38 @@ static int __must_check cf_check intel_iommu_unmap_page(
         iommu_sync_cache(pte, sizeof(*pte));
 
         *flush_flags |= IOMMU_FLUSHF_all;
-        iommu_queue_free_pgtable(hd, pg);
+        iommu_queue_free_pgtable(ctx, pg);
         perfc_incr(iommu_pt_coalesces);
     }
 
-    spin_unlock(&hd->arch.mapping_lock);
-
     unmap_vtd_domain_page(page);
 
     *flush_flags |= IOMMU_FLUSHF_modified;
 
     if ( order && !dma_pte_superpage(old) )
-        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+        queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(old)),
                       order / LEVEL_STRIDE);
 
     return 0;
 }
 
 static int cf_check intel_iommu_lookup_page(
-    struct domain *d, dfn_t dfn, mfn_t *mfn, unsigned int *flags)
+    struct domain *d, dfn_t dfn, mfn_t *mfn, unsigned int *flags,
+    struct iommu_context *ctx)
 {
-    struct domain_iommu *hd = dom_iommu(d);
     uint64_t val;
 
     /*
      * If VT-d shares EPT page table or if the domain is the hardware
      * domain and iommu_passthrough is set then pass back the dfn.
      */
-    if ( iommu_use_hap_pt(d) ||
-         (iommu_hwdom_passthrough && is_hardware_domain(d)) )
+    if ( (iommu_use_hap_pt(d) ||
+          (iommu_hwdom_passthrough && is_hardware_domain(d))) &&
+         !ctx->id )
         return -EOPNOTSUPP;
 
-    spin_lock(&hd->arch.mapping_lock);
-
-    val = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0, NULL, false);
-
-    spin_unlock(&hd->arch.mapping_lock);
+    val = addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), 0, NULL, false);
 
     if ( val < PAGE_SIZE )
         return -ENOENT;
@@ -2320,7 +1901,7 @@ static bool __init vtd_ept_page_compatible(const struct vtd_iommu *iommu)
 
     /* EPT is not initialised yet, so we must check the capability in
      * the MSR explicitly rather than use cpu_has_vmx_ept_*() */
-    if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) != 0 ) 
+    if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) != 0 )
         return false;
 
     return (ept_has_2mb(ept_cap) && opt_hap_2mb) <=
@@ -2329,44 +1910,6 @@ static bool __init vtd_ept_page_compatible(const struct vtd_iommu *iommu)
             (cap_sps_1gb(vtd_cap) && iommu_superpages);
 }
 
-static int cf_check intel_iommu_add_device(u8 devfn, struct pci_dev *pdev)
-{
-    struct acpi_rmrr_unit *rmrr;
-    u16 bdf;
-    int ret, i;
-
-    ASSERT(pcidevs_locked());
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    for_each_rmrr_device ( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == pdev->seg && bdf == PCI_BDF(pdev->bus, devfn) )
-        {
-            /*
-             * iommu_add_device() is only called for the hardware
-             * domain (see xen/drivers/passthrough/pci.c:pci_add_device()).
-             * Since RMRRs are always reserved in the e820 map for the hardware
-             * domain, there shouldn't be a conflict.
-             */
-            ret = iommu_identity_mapping(pdev->domain, p2m_access_rw,
-                                         rmrr->base_address, rmrr->end_address,
-                                         0);
-            if ( ret )
-                dprintk(XENLOG_ERR VTDPREFIX, "%pd: RMRR mapping failed\n",
-                        pdev->domain);
-        }
-    }
-
-    ret = domain_context_mapping(pdev->domain, devfn, pdev);
-    if ( ret )
-        dprintk(XENLOG_ERR VTDPREFIX, "%pd: context mapping failed\n",
-                pdev->domain);
-
-    return ret;
-}
-
 static int cf_check intel_iommu_enable_device(struct pci_dev *pdev)
 {
     struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
@@ -2382,49 +1925,16 @@ static int cf_check intel_iommu_enable_device(struct pci_dev *pdev)
     return ret >= 0 ? 0 : ret;
 }
 
-static int cf_check intel_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
-{
-    const struct acpi_drhd_unit *drhd;
-    struct acpi_rmrr_unit *rmrr;
-    u16 bdf;
-    unsigned int i;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    drhd = domain_context_unmap(pdev->domain, devfn, pdev);
-    if ( IS_ERR(drhd) )
-        return PTR_ERR(drhd);
-
-    for_each_rmrr_device ( rmrr, bdf, i )
-    {
-        if ( rmrr->segment != pdev->seg || bdf != PCI_BDF(pdev->bus, devfn) )
-            continue;
-
-        /*
-         * Any flag is nothing to clear these mappings but here
-         * its always safe and strict to set 0.
-         */
-        iommu_identity_mapping(pdev->domain, p2m_access_x, rmrr->base_address,
-                               rmrr->end_address, 0);
-    }
-
-    quarantine_teardown(pdev, drhd);
-
-    if ( drhd )
-    {
-        iommu_free_domid(pdev->arch.pseudo_domid,
-                         drhd->iommu->pseudo_domid_map);
-        pdev->arch.pseudo_domid = DOMID_INVALID;
-    }
-
-    return 0;
-}
-
 static int __hwdom_init cf_check setup_hwdom_device(
     u8 devfn, struct pci_dev *pdev)
 {
-    return domain_context_mapping(pdev->domain, devfn, pdev);
+    if ( pdev->type == DEV_TYPE_PCI_HOST_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe2PCI_BRIDGE ||
+         pdev->type == DEV_TYPE_LEGACY_PCI_BRIDGE )
+        return 0;
+
+    return _iommu_attach_context(hardware_domain, pdev, 0);
 }
 
 void clear_fault_bits(struct vtd_iommu *iommu)
@@ -2518,7 +2028,7 @@ static int __must_check init_vtd_hw(bool resume)
 
     /*
      * Enable queue invalidation
-     */   
+     */
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
@@ -2539,7 +2049,7 @@ static int __must_check init_vtd_hw(bool resume)
 
     /*
      * Enable interrupt remapping
-     */  
+     */
     if ( iommu_intremap != iommu_intremap_off )
     {
         int apic;
@@ -2622,15 +2132,60 @@ static struct iommu_state {
     uint32_t fectl;
 } *__read_mostly iommu_state;
 
-static int __init cf_check vtd_setup(void)
+static void arch_iommu_dump_domain_contexts(struct domain *d)
 {
-    struct acpi_drhd_unit *drhd;
-    struct vtd_iommu *iommu;
-    unsigned int large_sizes = iommu_superpages ? PAGE_SIZE_2M | PAGE_SIZE_1G : 0;
-    int ret;
-    bool reg_inval_supported = true;
+    unsigned int i, iommu_no;
+    struct pci_dev *pdev;
+    struct iommu_context *ctx;
+    struct domain_iommu *hd = dom_iommu(d);
 
-    if ( list_empty(&acpi_drhd_units) )
+    printk("%pd contexts\n", d);
+
+    spin_lock(&hd->lock);
+
+    for ( i = 0; i < (1 + dom_iommu(d)->other_contexts.count); ++i )
+    {
+        if ( iommu_check_context(d, i) )
+        {
+            ctx = iommu_get_context(d, i);
+            printk(" Context %u (%"PRIx64")\n", i, ctx->arch.vtd.pgd_maddr);
+
+            for ( iommu_no = 0; iommu_no < nr_iommus; iommu_no++ )
+                printk("  IOMMU %u (used=%d; did=%u)\n", iommu_no,
+                       test_bit(iommu_no, ctx->arch.vtd.iommu_bitmap),
+                       ctx->arch.vtd.didmap[iommu_no]);
+
+            list_for_each_entry ( pdev, &ctx->devices, context_list )
+                printk("  - %pp\n", &pdev->sbdf);
+        }
+    }
+
+    spin_unlock(&hd->lock);
+}
+
+static void arch_iommu_dump_contexts(unsigned char key)
+{
+    struct domain *d;
+
+    for_each_domain ( d )
+    {
+        struct domain_iommu *hd = dom_iommu(d);
+
+        printk("%pd arena page usage: %d\n", d,
+               atomic_read(&hd->arch.pt_arena.used_pages));
+
+        arch_iommu_dump_domain_contexts(d);
+    }
+}
+
+static int __init cf_check vtd_setup(void)
+{
+    struct acpi_drhd_unit *drhd;
+    struct vtd_iommu *iommu;
+    unsigned int large_sizes = iommu_superpages ? PAGE_SIZE_2M | PAGE_SIZE_1G : 0;
+    int ret;
+    bool reg_inval_supported = true;
+
+    if ( list_empty(&acpi_drhd_units) )
     {
         ret = -ENODEV;
         goto error;
@@ -2749,6 +2304,7 @@ static int __init cf_check vtd_setup(void)
     iommu_ops.page_sizes |= large_sizes;
 
     register_keyhandler('V', vtd_dump_iommu_info, "dump iommu info", 1);
+    register_keyhandler('X', arch_iommu_dump_contexts, "dump iommu contexts", 1);
 
     return 0;
 
@@ -2763,192 +2319,6 @@ static int __init cf_check vtd_setup(void)
     return ret;
 }
 
-static int cf_check reassign_device_ownership(
-    struct domain *source,
-    struct domain *target,
-    u8 devfn, struct pci_dev *pdev)
-{
-    int ret;
-
-    if ( !QUARANTINE_SKIP(target, pdev->arch.vtd.pgd_maddr) )
-    {
-        if ( !has_arch_pdevs(target) )
-            vmx_pi_hooks_assign(target);
-
-#ifdef CONFIG_PV
-        /*
-         * Devices assigned to untrusted domains (here assumed to be any domU)
-         * can attempt to send arbitrary LAPIC/MSI messages. We are unprotected
-         * by the root complex unless interrupt remapping is enabled.
-         */
-        if ( !iommu_intremap && !is_hardware_domain(target) &&
-             !is_system_domain(target) )
-            untrusted_msi = true;
-#endif
-
-        ret = domain_context_mapping(target, devfn, pdev);
-
-        if ( !ret && pdev->devfn == devfn &&
-             !QUARANTINE_SKIP(source, pdev->arch.vtd.pgd_maddr) )
-        {
-            const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
-
-            if ( drhd )
-                check_cleanup_domid_map(source, pdev, drhd->iommu);
-        }
-    }
-    else
-    {
-        const struct acpi_drhd_unit *drhd;
-
-        drhd = domain_context_unmap(source, devfn, pdev);
-        ret = IS_ERR(drhd) ? PTR_ERR(drhd) : 0;
-    }
-    if ( ret )
-    {
-        if ( !has_arch_pdevs(target) )
-            vmx_pi_hooks_deassign(target);
-        return ret;
-    }
-
-    if ( devfn == pdev->devfn && pdev->domain != target )
-    {
-        write_lock(&source->pci_lock);
-        list_del(&pdev->domain_list);
-        write_unlock(&source->pci_lock);
-
-        pdev->domain = target;
-
-        write_lock(&target->pci_lock);
-        list_add(&pdev->domain_list, &target->pdev_list);
-        write_unlock(&target->pci_lock);
-    }
-
-    if ( !has_arch_pdevs(source) )
-        vmx_pi_hooks_deassign(source);
-
-    /*
-     * If the device belongs to the hardware domain, and it has RMRR, don't
-     * remove it from the hardware domain, because BIOS may use RMRR at
-     * booting time.
-     */
-    if ( !is_hardware_domain(source) )
-    {
-        const struct acpi_rmrr_unit *rmrr;
-        u16 bdf;
-        unsigned int i;
-
-        for_each_rmrr_device( rmrr, bdf, i )
-            if ( rmrr->segment == pdev->seg &&
-                 bdf == PCI_BDF(pdev->bus, devfn) )
-            {
-                /*
-                 * Any RMRR flag is always ignored when remove a device,
-                 * but its always safe and strict to set 0.
-                 */
-                ret = iommu_identity_mapping(source, p2m_access_x,
-                                             rmrr->base_address,
-                                             rmrr->end_address, 0);
-                if ( ret && ret != -ENOENT )
-                    return ret;
-            }
-    }
-
-    return 0;
-}
-
-static int cf_check intel_iommu_assign_device(
-    struct domain *d, u8 devfn, struct pci_dev *pdev, u32 flag)
-{
-    struct domain *s = pdev->domain;
-    struct acpi_rmrr_unit *rmrr;
-    int ret = 0, i;
-    u16 bdf, seg;
-    u8 bus;
-
-    if ( list_empty(&acpi_drhd_units) )
-        return -ENODEV;
-
-    seg = pdev->seg;
-    bus = pdev->bus;
-    /*
-     * In rare cases one given rmrr is shared by multiple devices but
-     * obviously this would put the security of a system at risk. So
-     * we would prevent from this sort of device assignment. But this
-     * can be permitted if user set
-     *      "pci = [ 'sbdf, rdm_policy=relaxed' ]"
-     *
-     * TODO: in the future we can introduce group device assignment
-     * interface to make sure devices sharing RMRR are assigned to the
-     * same domain together.
-     */
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == seg && bdf == PCI_BDF(bus, devfn) &&
-             rmrr->scope.devices_cnt > 1 )
-        {
-            bool relaxed = flag & XEN_DOMCTL_DEV_RDM_RELAXED;
-
-            printk(XENLOG_GUEST "%s" VTDPREFIX
-                   " It's %s to assign %pp"
-                   " with shared RMRR at %"PRIx64" for %pd.\n",
-                   relaxed ? XENLOG_WARNING : XENLOG_ERR,
-                   relaxed ? "risky" : "disallowed",
-                   &PCI_SBDF(seg, bus, devfn), rmrr->base_address, d);
-            if ( !relaxed )
-                return -EPERM;
-        }
-    }
-
-    if ( d == dom_io )
-        return reassign_device_ownership(s, d, devfn, pdev);
-
-    /* Setup rmrr identity mapping */
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == seg && bdf == PCI_BDF(bus, devfn) )
-        {
-            ret = iommu_identity_mapping(d, p2m_access_rw, rmrr->base_address,
-                                         rmrr->end_address, flag);
-            if ( ret )
-            {
-                printk(XENLOG_G_ERR VTDPREFIX
-                       "%pd: cannot map reserved region [%"PRIx64",%"PRIx64"]: %d\n",
-                       d, rmrr->base_address, rmrr->end_address, ret);
-                break;
-            }
-        }
-    }
-
-    if ( !ret )
-        ret = reassign_device_ownership(s, d, devfn, pdev);
-
-    /* See reassign_device_ownership() for the hwdom aspect. */
-    if ( !ret || is_hardware_domain(d) )
-        return ret;
-
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == seg && bdf == PCI_BDF(bus, devfn) )
-        {
-            int rc = iommu_identity_mapping(d, p2m_access_x,
-                                            rmrr->base_address,
-                                            rmrr->end_address, 0);
-
-            if ( rc && rc != -ENOENT )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "%pd: cannot unmap reserved region [%"PRIx64",%"PRIx64"]: %d\n",
-                       d, rmrr->base_address, rmrr->end_address, rc);
-                domain_crash(d);
-                break;
-            }
-        }
-    }
-
-    return ret;
-}
-
 static int cf_check intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 {
     u8 secbus;
@@ -3073,6 +2443,11 @@ static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     if ( level < 1 )
         return;
 
+    if ( !pt_maddr )
+    {
+        printk(" (empty)\n");
+        return;
+    }
+
     pt_vaddr = map_vtd_domain_page(pt_maddr);
 
     next_level = level - 1;
@@ -3103,158 +2478,478 @@ static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
 
 static void cf_check vtd_dump_page_tables(struct domain *d)
 {
-    const struct domain_iommu *hd = dom_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int i;
 
     printk(VTDPREFIX" %pd table has %d levels\n", d,
            agaw_to_level(hd->arch.vtd.agaw));
-    vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr,
-                              agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+
+    for ( i = 1; i < (1 + hd->other_contexts.count); ++i )
+    {
+        bool allocated = iommu_check_context(d, i);
+
+        printk(VTDPREFIX " %pd context %u: %s\n", d, i,
+               allocated ? "allocated" : "non-allocated");
+
+        if ( allocated )
+        {
+            const struct iommu_context *ctx = iommu_get_context(d, i);
+            vtd_dump_page_table_level(ctx->arch.vtd.pgd_maddr,
+                                      agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+        }
+    }
 }
 
-static int fill_qpt(struct dma_pte *this, unsigned int level,
-                    struct page_info *pgs[6])
+static int intel_iommu_context_init(struct domain *d, struct iommu_context *ctx, u32 flags)
 {
-    struct domain_iommu *hd = dom_iommu(dom_io);
-    unsigned int i;
-    int rc = 0;
+    struct acpi_drhd_unit *drhd;
 
-    for ( i = 0; !rc && i < PTE_NUM; ++i )
+    ctx->arch.vtd.didmap = xzalloc_array(u16, nr_iommus);
+
+    if ( !ctx->arch.vtd.didmap )
+        return -ENOMEM;
+
+    ctx->arch.vtd.iommu_bitmap = xzalloc_array(unsigned long,
+                                               BITS_TO_LONGS(nr_iommus));
+    if ( !ctx->arch.vtd.iommu_bitmap )
+        return -ENOMEM;
+
+    ctx->arch.vtd.duplicated_rmrr = false;
+    ctx->arch.vtd.superpage_progress = 0;
+
+    if ( flags & IOMMU_CONTEXT_INIT_default )
     {
-        struct dma_pte *pte = &this[i], *next;
+        ctx->arch.vtd.pgd_maddr = 0;
 
-        if ( !dma_pte_present(*pte) )
+        /* Populate context DID map using domain id. */
+        for_each_drhd_unit(drhd)
         {
-            if ( !pgs[level] )
-            {
-                /*
-                 * The pgtable allocator is fine for the leaf page, as well as
-                 * page table pages, and the resulting allocations are always
-                 * zeroed.
-                 */
-                pgs[level] = iommu_alloc_pgtable(hd, 0);
-                if ( !pgs[level] )
-                {
-                    rc = -ENOMEM;
-                    break;
-                }
-
-                if ( level )
-                {
-                    next = map_vtd_domain_page(page_to_maddr(pgs[level]));
-                    rc = fill_qpt(next, level - 1, pgs);
-                    unmap_vtd_domain_page(next);
-                }
-            }
+            ctx->arch.vtd.didmap[drhd->iommu->index] =
+                convert_domid(drhd->iommu, d->domain_id);
+        }
+    }
+    else
+    {
+        /* Populate context DID map using pseudo DIDs */
+        for_each_drhd_unit(drhd)
+        {
+            ctx->arch.vtd.didmap[drhd->iommu->index] =
+                iommu_alloc_domid(drhd->iommu->pseudo_domid_map);
+        }
+
+        /* Create initial context page */
+        addr_to_dma_page_maddr(d, ctx, 0, min_pt_levels, NULL, true);
+    }
 
-            dma_set_pte_addr(*pte, page_to_maddr(pgs[level]));
-            dma_set_pte_readable(*pte);
-            dma_set_pte_writable(*pte);
+    return arch_iommu_context_init(d, ctx, flags);
+}
+
+static int intel_iommu_cleanup_pte(uint64_t pte_maddr, bool preempt)
+{
+    size_t i;
+    struct dma_pte *pte = map_vtd_domain_page(pte_maddr);
+
+    for ( i = 0; i < (1 << PAGETABLE_ORDER); ++i )
+        if ( dma_pte_present(pte[i]) )
+        {
+            /* Remove the reference of the target mapping */
+            put_page(maddr_to_page(dma_pte_addr(pte[i])));
+
+            if ( preempt )
+                dma_clear_pte(pte[i]);
         }
-        else if ( level && !dma_pte_superpage(*pte) )
+
+    unmap_vtd_domain_page(pte);
+
+    return 0;
+}
+
+/*
+ * Cleanup logic:
+ * Walk through the entire page table, progressively removing mappings if
+ * preempt is set.
+ *
+ * Return values:
+ *  - Report preemption with -ERESTART.
+ *  - Report an empty pte/pgd with 0.
+ *
+ * When preempted during a superpage operation, state is stored in
+ * vtd.superpage_progress.
+ */
+
+static int intel_iommu_cleanup_superpage(struct iommu_context *ctx,
+                                          unsigned int page_order, uint64_t pte_maddr,
+                                          bool preempt)
+{
+    size_t i = 0, page_count = 1 << page_order;
+    struct page_info *page = maddr_to_page(pte_maddr);
+
+    if ( preempt )
+        i = ctx->arch.vtd.superpage_progress;
+
+    for ( ; i < page_count; ++i, ++page )
+    {
+        put_page(page);
+
+        if ( preempt && !(i & 0xff) && general_preempt_check() )
         {
-            next = map_vtd_domain_page(dma_pte_addr(*pte));
-            rc = fill_qpt(next, level - 1, pgs);
-            unmap_vtd_domain_page(next);
+            ctx->arch.vtd.superpage_progress = i + 1;
+            return -ERESTART;
         }
     }
 
-    return rc;
+    if ( preempt )
+        ctx->arch.vtd.superpage_progress = 0;
+
+    return 0;
 }
 
-static int cf_check intel_iommu_quarantine_init(struct pci_dev *pdev,
-                                                bool scratch_page)
+static int intel_iommu_cleanup_mappings(struct iommu_context *ctx,
+                                         unsigned int nr_pt_levels, uint64_t pgd_maddr,
+                                         bool preempt)
 {
-    struct domain_iommu *hd = dom_iommu(dom_io);
-    struct page_info *pg;
-    unsigned int agaw = hd->arch.vtd.agaw;
-    unsigned int level = agaw_to_level(agaw);
-    const struct acpi_drhd_unit *drhd;
-    const struct acpi_rmrr_unit *rmrr;
-    unsigned int i, bdf;
-    bool rmrr_found = false;
+    size_t i;
     int rc;
+    struct dma_pte *pgd = map_vtd_domain_page(pgd_maddr);
 
-    ASSERT(pcidevs_locked());
-    ASSERT(!hd->arch.vtd.pgd_maddr);
-    ASSERT(page_list_empty(&hd->arch.pgtables.list));
+    for ( i = 0; i < (1 << PAGETABLE_ORDER); ++i )
+    {
+        if ( dma_pte_present(pgd[i]) )
+        {
+            uint64_t pte_maddr = dma_pte_addr(pgd[i]);
+
+            if ( dma_pte_superpage(pgd[i]) )
+                rc = intel_iommu_cleanup_superpage(ctx, (nr_pt_levels - 1) * SUPERPAGE_ORDER,
+                                                   pte_maddr, preempt);
+            else if ( nr_pt_levels > 2 )
+                /* Next level is not PTE */
+                rc = intel_iommu_cleanup_mappings(ctx, nr_pt_levels - 1,
+                                                  pte_maddr, preempt);
+            else
+                rc = intel_iommu_cleanup_pte(pte_maddr, preempt);
+
+            if ( preempt && !rc )
+                /* Fold pgd (no more mappings in it) */
+                dma_clear_pte(pgd[i]);
+            else if ( preempt && (rc == -ERESTART || general_preempt_check()) )
+            {
+                unmap_vtd_domain_page(pgd);
+                return -ERESTART;
+            }
+        }
+    }
 
-    if ( pdev->arch.vtd.pgd_maddr )
+    unmap_vtd_domain_page(pgd);
+
+    return 0;
+}
+
+static int intel_iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    struct acpi_drhd_unit *drhd;
+
+    pcidevs_lock();
+
+    /* Cleanup mappings */
+    if ( intel_iommu_cleanup_mappings(ctx, agaw_to_level(d->iommu.arch.vtd.agaw),
+                                      ctx->arch.vtd.pgd_maddr,
+                                      flags & IOMMUF_preempt) < 0 )
     {
-        clear_domain_page(pdev->arch.leaf_mfn);
-        return 0;
+        pcidevs_unlock();
+        return -ERESTART;
     }
 
-    drhd = acpi_find_matched_drhd_unit(pdev);
-    if ( !drhd )
-        return -ENODEV;
+    if ( ctx->arch.vtd.didmap )
+    {
+        for_each_drhd_unit(drhd)
+        {
+            iommu_free_domid(ctx->arch.vtd.didmap[drhd->iommu->index],
+                drhd->iommu->pseudo_domid_map);
+        }
 
-    pg = iommu_alloc_pgtable(hd, 0);
-    if ( !pg )
-        return -ENOMEM;
+        xfree(ctx->arch.vtd.didmap);
+    }
 
-    rc = context_set_domain_id(NULL, pdev->arch.pseudo_domid, drhd->iommu);
+    pcidevs_unlock();
+    return arch_iommu_context_teardown(d, ctx, flags);
+}
 
-    /* Transiently install the root into DomIO, for iommu_identity_mapping(). */
-    hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+static int intel_iommu_map_identity(struct domain *d, struct pci_dev *pdev,
+                                    struct iommu_context *ctx, struct acpi_rmrr_unit *rmrr)
+{
+    /* TODO: This code doesn't clean up on failure */
 
-    for_each_rmrr_device ( rmrr, bdf, i )
+    int ret = 0, rc = 0;
+    unsigned int flush_flags = 0, flags;
+    u64 base_pfn = rmrr->base_address >> PAGE_SHIFT_4K;
+    u64 end_pfn = PAGE_ALIGN_4K(rmrr->end_address) >> PAGE_SHIFT_4K;
+    u64 pfn = base_pfn;
+
+    printk(XENLOG_INFO VTDPREFIX
+            " Mapping d%dc%d device %pp identity mapping [%08" PRIx64 ":%08" PRIx64 "]\n",
+            d->domain_id, ctx->id, &pdev->sbdf, rmrr->base_address, rmrr->end_address);
+
+    ASSERT(end_pfn >= base_pfn);
+
+    while ( pfn < end_pfn )
     {
-        if ( rc )
-            break;
+        mfn_t mfn = INVALID_MFN;
+        ret = intel_iommu_lookup_page(d, _dfn(pfn), &mfn, &flags, ctx);
 
-        if ( rmrr->segment == pdev->seg && bdf == pdev->sbdf.bdf )
+        if ( ret == -ENOENT )
         {
-            rmrr_found = true;
+            ret = intel_iommu_map_page(d, _dfn(pfn), _mfn(pfn),
+                                      IOMMUF_readable | IOMMUF_writable,
+                                      &flush_flags, ctx);
 
-            rc = iommu_identity_mapping(dom_io, p2m_access_rw,
-                                        rmrr->base_address, rmrr->end_address,
-                                        0);
-            if ( rc )
+            if ( ret < 0 )
+            {
                 printk(XENLOG_ERR VTDPREFIX
-                       "%pp: RMRR quarantine mapping failed\n",
-                       &pdev->sbdf);
+                        " Unable to map RMRR page %"PRI_pfn" (%d)\n",
+                        pfn, ret);
+                break;
+            }
+        }
+        else if ( ret )
+        {
+            printk(XENLOG_ERR VTDPREFIX
+                    " Unable to lookup page %"PRI_pfn" (%d)\n",
+                    pfn, ret);
+            break;
         }
+        else if ( mfn_x(mfn) != pfn )
+        {
+            /* The dfn is already mapped to something else, can't continue. */
+            printk(XENLOG_ERR VTDPREFIX
+                   " Unable to map RMRR page %"PRI_mfn"!=%"PRI_pfn" (incompatible mapping)\n",
+                   mfn_x(mfn), pfn);
+
+            ret = -EINVAL;
+            break;
+        }
+        else if ( mfn_x(mfn) == pfn )
+        {
+            /*
+             * There is already an identity mapping in this context; we need
+             * to be extra careful when detaching the device so as not to
+             * break another existing RMRR.
+             */
+            printk(XENLOG_WARNING VTDPREFIX
+                   "Duplicated RMRR mapping %"PRI_pfn"\n", pfn);
+
+            ctx->arch.vtd.duplicated_rmrr = true;
+        }
+
+        pfn++;
     }
 
-    iommu_identity_map_teardown(dom_io);
-    hd->arch.vtd.pgd_maddr = 0;
-    pdev->arch.vtd.pgd_maddr = page_to_maddr(pg);
+    rc = iommu_flush_iotlb(d, ctx, _dfn(base_pfn), end_pfn - base_pfn + 1, flush_flags);
 
-    if ( !rc && scratch_page )
+    return ret ?: rc;
+}
+
+static int intel_iommu_map_dev_rmrr(struct domain *d, struct pci_dev *pdev,
+                                    struct iommu_context *ctx)
+{
+    struct acpi_rmrr_unit *rmrr;
+    u16 bdf;
+    int ret, i;
+
+    for_each_rmrr_device(rmrr, bdf, i)
     {
-        struct dma_pte *root;
-        struct page_info *pgs[6] = {};
+        if ( PCI_SBDF(rmrr->segment, bdf).sbdf == pdev->sbdf.sbdf )
+        {
+            ret = intel_iommu_map_identity(d, pdev, ctx, rmrr);
 
-        root = map_vtd_domain_page(pdev->arch.vtd.pgd_maddr);
-        rc = fill_qpt(root, level - 1, pgs);
-        unmap_vtd_domain_page(root);
+            if ( ret < 0 )
+                return ret;
+        }
+    }
 
-        pdev->arch.leaf_mfn = page_to_mfn(pgs[0]);
+    return 0;
+}
+
+static int intel_iommu_unmap_identity(struct domain *d, struct pci_dev *pdev,
+                                      struct iommu_context *ctx, struct acpi_rmrr_unit *rmrr)
+{
+    /* TODO: This code doesn't clean up on failure */
+
+    int ret = 0, rc = 0;
+    unsigned int flush_flags = 0;
+    u64 base_pfn = rmrr->base_address >> PAGE_SHIFT_4K;
+    u64 end_pfn = PAGE_ALIGN_4K(rmrr->end_address) >> PAGE_SHIFT_4K;
+    u64 pfn = base_pfn;
+
+    printk(XENLOG_INFO VTDPREFIX
+            " Unmapping d%dc%d device %pp identity mapping [%08" PRIx64 ":%08" PRIx64 "]\n",
+            d->domain_id, ctx->id, &pdev->sbdf, rmrr->base_address, rmrr->end_address);
+
+    ASSERT(end_pfn >= base_pfn);
+
+    while ( pfn < end_pfn )
+    {
+        ret = intel_iommu_unmap_page(d, _dfn(pfn), PAGE_ORDER_4K, &flush_flags, ctx);
+
+        if ( ret )
+            break;
+
+        pfn++;
     }
 
-    page_list_move(&pdev->arch.pgtables_list, &hd->arch.pgtables.list);
+    rc = iommu_flush_iotlb(d, ctx, _dfn(base_pfn), end_pfn - base_pfn + 1, flush_flags);
 
-    if ( rc || (!scratch_page && !rmrr_found) )
-        quarantine_teardown(pdev, drhd);
+    return ret ?: rc;
+}
 
-    return rc;
+/* Check if another overlapping RMRR exists for another device of the context */
+static bool intel_iommu_check_duplicate(struct domain *d, struct pci_dev *pdev,
+                                        struct iommu_context *ctx,
+                                        struct acpi_rmrr_unit *rmrr)
+{
+    struct acpi_rmrr_unit *other_rmrr;
+    u16 bdf;
+    int i;
+
+    for_each_rmrr_device(other_rmrr, bdf, i)
+    {
+        if ( rmrr == other_rmrr )
+            continue;
+
+        /* Skip RMRR entries of the same device */
+        if ( PCI_SBDF(rmrr->segment, bdf).sbdf == pdev->sbdf.sbdf )
+            continue;
+
+        /*
+         * Check for overlap: the two ranges overlap if and only if each
+         * one starts before the other one ends. Checking containment
+         * alone would miss partially overlapping RMRR regions.
+         */
+        if ( rmrr->base_address <= other_rmrr->end_address
+            && other_rmrr->base_address <= rmrr->end_address )
+            return true;
+    }
+
+    return false;
+}
+
+static int intel_iommu_unmap_dev_rmrr(struct domain *d, struct pci_dev *pdev,
+                                      struct iommu_context *ctx)
+{
+    struct acpi_rmrr_unit *rmrr;
+    u16 bdf;
+    int ret, i;
+
+    for_each_rmrr_device(rmrr, bdf, i)
+    {
+        if ( PCI_SBDF(rmrr->segment, bdf).sbdf == pdev->sbdf.sbdf )
+        {
+            if ( ctx->arch.vtd.duplicated_rmrr
+                && intel_iommu_check_duplicate(d, pdev, ctx, rmrr) )
+                continue;
+
+            ret = intel_iommu_unmap_identity(d, pdev, ctx, rmrr);
+
+            if ( ret < 0 )
+                return ret;
+        }
+    }
+
+    return 0;
+}
+
+static int intel_iommu_attach(struct domain *d, struct pci_dev *pdev,
+                              struct iommu_context *ctx)
+{
+    int ret;
+    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
+
+    if ( !pdev || !drhd )
+        return -EINVAL;
+
+    if ( ctx->id )
+    {
+        ret = intel_iommu_map_dev_rmrr(d, pdev, ctx);
+
+        if ( ret )
+            return ret;
+    }
+
+    ret = apply_context(d, ctx, pdev, pdev->devfn);
+
+    if ( ret )
+        return ret;
+
+    pci_vtd_quirk(pdev);
+
+    return ret;
+}
+
+static int intel_iommu_dettach(struct domain *d, struct pci_dev *pdev,
+                               struct iommu_context *prev_ctx)
+{
+    int ret;
+    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
+
+    if ( !pdev || !drhd )
+        return -EINVAL;
+
+    ret = unapply_context_single(d, prev_ctx, drhd->iommu, pdev->bus, pdev->devfn);
+
+    if ( ret )
+        return ret;
+
+    if ( prev_ctx->id )
+        WARN_ON(intel_iommu_unmap_dev_rmrr(d, pdev, prev_ctx));
+
+    check_cleanup_domid_map(d, prev_ctx, NULL, drhd->iommu);
+
+    return ret;
+}
+
+static int intel_iommu_reattach(struct domain *d, struct pci_dev *pdev,
+                                struct iommu_context *prev_ctx,
+                                struct iommu_context *ctx)
+{
+    int ret;
+    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
+
+    if ( !pdev || !drhd )
+        return -EINVAL;
+
+    if ( ctx->id )
+    {
+        ret = intel_iommu_map_dev_rmrr(d, pdev, ctx);
+
+        if ( ret )
+            return ret;
+    }
+
+    ret = apply_context_single(d, ctx, drhd->iommu, pdev->bus, pdev->devfn);
+
+    if ( ret )
+        return ret;
+
+    if ( prev_ctx->id )
+        WARN_ON(intel_iommu_unmap_dev_rmrr(d, pdev, prev_ctx));
+
+    /* We are overwriting an entry, cleanup previous domid if needed. */
+    check_cleanup_domid_map(d, prev_ctx, pdev, drhd->iommu);
+
+    pci_vtd_quirk(pdev);
+
+    return ret;
 }
 
 static const struct iommu_ops __initconst_cf_clobber vtd_ops = {
     .page_sizes = PAGE_SIZE_4K,
     .init = intel_iommu_domain_init,
     .hwdom_init = intel_iommu_hwdom_init,
-    .quarantine_init = intel_iommu_quarantine_init,
-    .add_device = intel_iommu_add_device,
+    .context_init = intel_iommu_context_init,
+    .context_teardown = intel_iommu_context_teardown,
+    .attach = intel_iommu_attach,
+    .dettach = intel_iommu_dettach,
+    .reattach = intel_iommu_reattach,
     .enable_device = intel_iommu_enable_device,
-    .remove_device = intel_iommu_remove_device,
-    .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
     .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
-    .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
     .disable_x2apic = intel_iommu_disable_eim,
diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
index 950dcd56ef..719985f885 100644
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -408,9 +408,8 @@ void __init platform_quirks_init(void)
 
 static int __must_check map_me_phantom_function(struct domain *domain,
                                                 unsigned int dev,
-                                                domid_t domid,
-                                                paddr_t pgd_maddr,
-                                                unsigned int mode)
+                                                unsigned int mode,
+                                                struct iommu_context *ctx)
 {
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
@@ -422,18 +421,18 @@ static int __must_check map_me_phantom_function(struct domain *domain,
 
     /* map or unmap ME phantom function */
     if ( !(mode & UNMAP_ME_PHANTOM_FUNC) )
-        rc = domain_context_mapping_one(domain, drhd->iommu, 0,
-                                        PCI_DEVFN(dev, 7), NULL,
-                                        domid, pgd_maddr, mode);
+        rc = apply_context_single(domain, ctx, drhd->iommu, 0,
+                                  PCI_DEVFN(dev, 7));
     else
-        rc = domain_context_unmap_one(domain, drhd->iommu, 0,
-                                      PCI_DEVFN(dev, 7));
+        rc = unapply_context_single(domain, ctx, drhd->iommu, 0,
+                                    PCI_DEVFN(dev, 7));
 
     return rc;
 }
 
 int me_wifi_quirk(struct domain *domain, uint8_t bus, uint8_t devfn,
-                  domid_t domid, paddr_t pgd_maddr, unsigned int mode)
+                  domid_t domid, unsigned int mode,
+                  struct iommu_context *ctx)
 {
     u32 id;
     int rc = 0;
@@ -457,7 +456,7 @@ int me_wifi_quirk(struct domain *domain, uint8_t bus, uint8_t devfn,
             case 0x423b8086:
             case 0x423c8086:
             case 0x423d8086:
-                rc = map_me_phantom_function(domain, 3, domid, pgd_maddr, mode);
+                rc = map_me_phantom_function(domain, 3, mode, ctx);
                 break;
             default:
                 break;
@@ -483,7 +482,7 @@ int me_wifi_quirk(struct domain *domain, uint8_t bus, uint8_t devfn,
             case 0x42388086:        /* Puma Peak */
             case 0x422b8086:
             case 0x422c8086:
-                rc = map_me_phantom_function(domain, 22, domid, pgd_maddr, mode);
+                rc = map_me_phantom_function(domain, 22, mode, ctx);
                 break;
             default:
                 break;
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index 75b2885336..1614f3d284 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,3 @@
 obj-y += iommu.o
+obj-y += arena.o
 obj-$(CONFIG_HVM) += hvm.o
diff --git a/xen/drivers/passthrough/x86/arena.c b/xen/drivers/passthrough/x86/arena.c
new file mode 100644
index 0000000000..984bc4d643
--- /dev/null
+++ b/xen/drivers/passthrough/x86/arena.c
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/**
+ * Simple arena-based page allocator.
+ *
+ * Allocate a large block using alloc_domheap_pages, then allocate and free
+ * single pages using the iommu_arena_allocate_page and iommu_arena_free_page
+ * functions.
+ *
+ * Concurrent {allocate/free}_page is thread-safe;
+ *
+ * Written by Teddy Astie <teddy.astie@vates.tech>
+ */
+
+#include <asm/bitops.h>
+#include <asm/page.h>
+#include <xen/atomic.h>
+#include <xen/bug.h>
+#include <xen/config.h>
+#include <xen/mm-frame.h>
+#include <xen/mm.h>
+#include <xen/xmalloc.h>
+
+#include <asm/arena.h>
+
+/* Maximum number of scan tries if the found bit is not available */
+#define ARENA_TSL_MAX_TRIES 5
+
+int iommu_arena_initialize(struct iommu_arena *arena, struct domain *d,
+                           unsigned int order, unsigned int memflags)
+{
+    struct page_info *page;
+
+    /* TODO: Maybe allocate differently? */
+    page = alloc_domheap_pages(d, order, memflags);
+
+    if ( !page )
+        return -ENOMEM;
+
+    arena->map = xzalloc_array(unsigned long, BITS_TO_LONGS(1LLU << order));
+    arena->order = order;
+    arena->region_start = page_to_mfn(page);
+
+    _atomic_set(&arena->used_pages, 0);
+    bitmap_zero(arena->map, iommu_arena_size(arena));
+
+    printk(XENLOG_DEBUG "IOMMU: Allocated arena (%llu pages, start=%"PRI_mfn")\n",
+           iommu_arena_size(arena), mfn_x(arena->region_start));
+    return 0;
+}
+
+int iommu_arena_teardown(struct iommu_arena *arena, bool check)
+{
+    BUG_ON(mfn_x(arena->region_start) == 0);
+
+    /* Check for allocations if check is specified */
+    if ( check && (atomic_read(&arena->used_pages) > 0) )
+        return -EBUSY;
+
+    free_domheap_pages(mfn_to_page(arena->region_start), arena->order);
+
+    arena->region_start = _mfn(0);
+    _atomic_set(&arena->used_pages, 0);
+    xfree(arena->map);
+    arena->map = NULL;
+
+    return 0;
+}
+
+struct page_info *iommu_arena_allocate_page(struct iommu_arena *arena)
+{
+    unsigned int index;
+    unsigned int tsl_tries = 0;
+
+    BUG_ON(mfn_x(arena->region_start) == 0);
+
+    if ( atomic_read(&arena->used_pages) == iommu_arena_size(arena) )
+        /* All pages used */
+        return NULL;
+
+    do
+    {
+        index = find_first_zero_bit(arena->map, iommu_arena_size(arena));
+
+        if ( index >= iommu_arena_size(arena) )
+            /* No more free pages */
+            return NULL;
+
+        /*
+         * While there shouldn't be a lot of retries in practice, this loop
+         * *may* run indefinitely if the found bit is never free due to being
+         * overwritten by another CPU core right after. Add a safeguard for
+         * such very rare cases.
+         */
+        tsl_tries++;
+
+        if ( unlikely(tsl_tries == ARENA_TSL_MAX_TRIES) )
+        {
+            printk(XENLOG_ERR "ARENA: Too many TSL retries!\n");
+            return NULL;
+        }
+
+        /* Make sure that the bit we found is still free */
+    } while ( test_and_set_bit(index, arena->map) );
+
+    atomic_inc(&arena->used_pages);
+
+    return mfn_to_page(mfn_add(arena->region_start, index));
+}
+
+bool iommu_arena_free_page(struct iommu_arena *arena, struct page_info *page)
+{
+    unsigned long index;
+    mfn_t frame;
+
+    if ( !page )
+    {
+        printk(XENLOG_WARNING "IOMMU: Trying to free NULL page\n");
+        WARN();
+        return false;
+    }
+
+    frame = page_to_mfn(page);
+
+    /* Check if page belongs to our arena */
+    if ( (mfn_x(frame) < mfn_x(arena->region_start))
+        || (mfn_x(frame) >= (mfn_x(arena->region_start) + iommu_arena_size(arena))) )
+    {
+        printk(XENLOG_WARNING
+               "IOMMU: Trying to free outside arena region [mfn=%"PRI_mfn"]\n",
+               mfn_x(frame));
+        WARN();
+        return false;
+    }
+
+    index = mfn_x(frame) - mfn_x(arena->region_start);
+
+    /* Sanity check in case of underflow. */
+    ASSERT(index < iommu_arena_size(arena));
+
+    if ( !test_and_clear_bit(index, arena->map) )
+    {
+        /*
+         * Bit was free during our arena_free_page, which means that
+         * either this page was never allocated, or we are in a double-free
+         * situation.
+         */
+        printk(XENLOG_WARNING
+               "IOMMU: Freeing non-allocated region (double-free?) [mfn=%"PRI_mfn"]\n",
+               mfn_x(frame));
+        WARN();
+        return false;
+    }
+
+    atomic_dec(&arena->used_pages);
+
+    return true;
+}
\ No newline at end of file
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index a3fa0aef7c..078ed12b0a 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -12,6 +12,13 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/keyhandler.h>
+#include <xen/lib.h>
+#include <xen/pci.h>
+#include <xen/spinlock.h>
+#include <xen/bitmap.h>
+#include <xen/list.h>
+#include <xen/mm.h>
 #include <xen/cpu.h>
 #include <xen/sched.h>
 #include <xen/iocap.h>
@@ -28,6 +35,9 @@
 #include <asm/mem_paging.h>
 #include <asm/pt-contig-markers.h>
 #include <asm/setup.h>
+#include <asm/iommu.h>
+#include <asm/arena.h>
+#include <asm/page.h>
 
 const struct iommu_init_ops *__initdata iommu_init_ops;
 struct iommu_ops __ro_after_init iommu_ops;
@@ -183,15 +193,42 @@ void __hwdom_init arch_iommu_check_autotranslated_hwdom(struct domain *d)
         panic("PVH hardware domain iommu must be set in 'strict' mode\n");
 }
 
-int arch_iommu_domain_init(struct domain *d)
+int arch_iommu_context_init(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    INIT_PAGE_LIST_HEAD(&ctx->arch.pgtables.list);
+    spin_lock_init(&ctx->arch.pgtables.lock);
+
+    INIT_PAGE_LIST_HEAD(&ctx->arch.free_queue);
+
+    return 0;
+}
+
+int arch_iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
 {
+    /* Cleanup all page tables */
+    while ( iommu_free_pgtables(d, ctx) == -ERESTART )
+        /* nothing */;
+
+    return 0;
+}
+
+int arch_iommu_flush_free_queue(struct domain *d, struct iommu_context *ctx)
+{
+    struct page_info *pg;
     struct domain_iommu *hd = dom_iommu(d);
 
-    spin_lock_init(&hd->arch.mapping_lock);
+    while ( (pg = page_list_remove_head(&ctx->arch.free_queue)) )
+        iommu_arena_free_page(&hd->arch.pt_arena, pg);
+
+    return 0;
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
 
-    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
-    spin_lock_init(&hd->arch.pgtables.lock);
     INIT_LIST_HEAD(&hd->arch.identity_maps);
+    iommu_arena_initialize(&hd->arch.pt_arena, NULL, iommu_hwdom_arena_order, 0);
 
     return 0;
 }
@@ -203,8 +240,9 @@ void arch_iommu_domain_destroy(struct domain *d)
      * domain is destroyed. Note that arch_iommu_domain_destroy() is
      * called unconditionally, so pgtables may be uninitialized.
      */
-    ASSERT(!dom_iommu(d)->platform_ops ||
-           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
+    struct domain_iommu *hd = dom_iommu(d);
+
+    ASSERT(!hd->platform_ops);
 }
 
 struct identity_map {
@@ -227,7 +265,7 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
     ASSERT(base < end);
 
     /*
-     * No need to acquire hd->arch.mapping_lock: Both insertion and removal
+     * No need to acquire hd->arch.lock: Both insertion and removal
      * get done while holding pcidevs_lock.
      */
     list_for_each_entry( map, &hd->arch.identity_maps, list )
@@ -356,8 +394,8 @@ static int __hwdom_init cf_check identity_map(unsigned long s, unsigned long e,
              */
             if ( iomem_access_permitted(d, s, s) )
             {
-                rc = iommu_map(d, _dfn(s), _mfn(s), 1, perms,
-                               &info->flush_flags);
+                rc = _iommu_map(d, _dfn(s), _mfn(s), 1, perms,
+                                &info->flush_flags, 0);
                 if ( rc < 0 )
                     break;
                 /* Must map a frame at least, which is what we request for. */
@@ -366,8 +404,8 @@ static int __hwdom_init cf_check identity_map(unsigned long s, unsigned long e,
             }
             s++;
         }
-        while ( (rc = iommu_map(d, _dfn(s), _mfn(s), e - s + 1,
-                                perms, &info->flush_flags)) > 0 )
+        while ( (rc = _iommu_map(d, _dfn(s), _mfn(s), e - s + 1,
+                                 perms, &info->flush_flags, 0)) > 0 )
         {
             s += rc;
             process_pending_softirqs();
@@ -533,7 +571,6 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
 
 void arch_pci_init_pdev(struct pci_dev *pdev)
 {
-    pdev->arch.pseudo_domid = DOMID_INVALID;
 }
 
 unsigned long *__init iommu_init_domid(domid_t reserve)
@@ -564,8 +601,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
     static unsigned int start;
     unsigned int idx = find_next_zero_bit(map, UINT16_MAX - DOMID_MASK, start);
 
-    ASSERT(pcidevs_locked());
-
     if ( idx >= UINT16_MAX - DOMID_MASK )
         idx = find_first_zero_bit(map, UINT16_MAX - DOMID_MASK);
     if ( idx >= UINT16_MAX - DOMID_MASK )
@@ -591,7 +626,7 @@ void iommu_free_domid(domid_t domid, unsigned long *map)
         BUG();
 }
 
-int iommu_free_pgtables(struct domain *d)
+int iommu_free_pgtables(struct domain *d, struct iommu_context *ctx)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct page_info *pg;
@@ -601,17 +636,20 @@ int iommu_free_pgtables(struct domain *d)
         return 0;
 
     /* After this barrier, no new IOMMU mappings can be inserted. */
-    spin_barrier(&hd->arch.mapping_lock);
+    spin_barrier(&ctx->arch.pgtables.lock);
 
     /*
      * Pages will be moved to the free list below. So we want to
      * clear the root page-table to avoid any potential use after-free.
      */
-    iommu_vcall(hd->platform_ops, clear_root_pgtable, d);
+    iommu_vcall(hd->platform_ops, clear_root_pgtable, d, ctx);
 
-    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
+    while ( (pg = page_list_remove_head(&ctx->arch.pgtables.list)) )
     {
-        free_domheap_page(pg);
+        if ( ctx->id == 0 )
+            free_domheap_page(pg);
+        else
+            iommu_arena_free_page(&hd->arch.pt_arena, pg);
 
         if ( !(++done & 0xff) && general_preempt_check() )
             return -ERESTART;
@@ -621,6 +659,7 @@ int iommu_free_pgtables(struct domain *d)
 }
 
 struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
+                                      struct iommu_context *ctx,
                                       uint64_t contig_mask)
 {
     unsigned int memflags = 0;
@@ -632,7 +671,11 @@ struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
         memflags = MEMF_node(hd->node);
 #endif
 
-    pg = alloc_domheap_page(NULL, memflags);
+    if ( ctx->id == 0 )
+        pg = alloc_domheap_page(NULL, memflags);
+    else
+        pg = iommu_arena_allocate_page(&hd->arch.pt_arena);
+
     if ( !pg )
         return NULL;
 
@@ -665,9 +708,7 @@ struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
 
     unmap_domain_page(p);
 
-    spin_lock(&hd->arch.pgtables.lock);
-    page_list_add(pg, &hd->arch.pgtables.list);
-    spin_unlock(&hd->arch.pgtables.lock);
+    page_list_add(pg, &ctx->arch.pgtables.list);
 
     return pg;
 }
@@ -706,17 +747,22 @@ static void cf_check free_queued_pgtables(void *arg)
     }
 }
 
-void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg)
+void iommu_queue_free_pgtable(struct iommu_context *ctx, struct page_info *pg)
 {
     unsigned int cpu = smp_processor_id();
 
-    spin_lock(&hd->arch.pgtables.lock);
-    page_list_del(pg, &hd->arch.pgtables.list);
-    spin_unlock(&hd->arch.pgtables.lock);
+    spin_lock(&ctx->arch.pgtables.lock);
+    page_list_del(pg, &ctx->arch.pgtables.list);
+    spin_unlock(&ctx->arch.pgtables.lock);
 
-    page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
+    if ( !ctx->id )
+    {
+        page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
 
-    tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+        tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+    }
+    else
+        page_list_add_tail(pg, &ctx->arch.free_queue);
 }
 
 static int cf_check cpu_callback(
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:16:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:16:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739889.1146888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNv-0001JL-DM; Thu, 13 Jun 2024 12:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739889.1146888; Thu, 13 Jun 2024 12:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjNv-0001JA-9s; Thu, 13 Jun 2024 12:16:55 +0000
Received: by outflank-mailman (input) for mailman id 739889;
 Thu, 13 Jun 2024 12:16:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uZHm=NP=bounce.vates.tech=bounce-md_30504962.666ae332.v1-585464d0f5044fc79899a47ef17fb25e@srs-se1.protection.inumbo.net>)
 id 1sHjNu-0008Rn-9Q
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:16:54 +0000
Received: from mail187-11.suw11.mandrillapp.com
 (mail187-11.suw11.mandrillapp.com [198.2.187.11])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d0050012-297e-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 14:16:51 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-11.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W0LxZ1bqyzLfMFLl
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 12:16:50 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 585464d0f5044fc79899a47ef17fb25e; Thu, 13 Jun 2024 12:16:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0050012-297e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718281010; x=1718541510;
	bh=Oa/BVbybk5tb2G/uKn7Rk/ZHGV0IpjnUp3ozImGDpyM=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=oTOqQx1qVD645I4gdcg+g1sD7EFRdFI24HkBlUzcy2Tv1b1QDYqyxjFRafvwnAxzI
	 xReROCMXc/vX5LOSqZeuooW5mEeiSlR4goWmoQNAv7XFOFU7gDDO3KBCBHBxxtEf8z
	 LaWy1NHnsUu+Qw0yeO7TR+Gy15swOlVlw0nhbPB0KdN/qwP8k27UoKhs71SMlm/4/J
	 0bf695mExdK988ByOjd+30m+6ldhrXtcvpu3rzcsSYaN6pKNs86EyWK/OwzhZglo5r
	 fMLZ/usGhyDvCzmKLmMSRWsRrpCVdYW4A4AMQ8OrY6taqLuNSy1C/T0fAz3PlOn+a2
	 5NMbs3ZkkrZGA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718281010; x=1718541510; i=teddy.astie@vates.tech;
	bh=Oa/BVbybk5tb2G/uKn7Rk/ZHGV0IpjnUp3ozImGDpyM=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=JvkJNbGMBClGuqqCbefzFckbO/1FEsyOQtgBpgPwPmcBpfTrD99YKEyB98kPKzc3m
	 lBaSCUzjOMUcEqxP5YFnTRVSoPZgjJGNlSuRihYP7N1FKfeqfmTzO5wUWEDAkcuU4C
	 7wj0j2asV2NomUb6ck7EJJFf2vAy5mb58/4i3KhbSHNfqK2MhmA41Qe1OYXURnliqz
	 xrfm43BWdGPpWbzpZWpxpz7hMJaQ9Dw1/bU+TZYTK4MbZkV7VvHHUy8ulhXlwnio6G
	 sMGxK5JaNPPXKz4PgYEz8Z2GtpPZshwOUR+diVLUW2T0RNXisDLJLvu6+bSdS1LPv8
	 ExoLjyq61ne5w==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=203/5]=20IOMMU:=20Introduce=20redesigned=20IOMMU=20subsystem?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718281003958
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Lukasz Hawrylko <lukasz@hawrylko.pl>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, =?utf-8?Q?Mateusz=20M=C3=B3wka?= <mateusz.mowka@intel.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <99d93c1a8100c0d20d40d80c0e94f46f906a986b.1718269097.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1718269097.git.teddy.astie@vates.tech>
References: <cover.1718269097.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.585464d0f5044fc79899a47ef17fb25e?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240613:md
Date: Thu, 13 Jun 2024 12:16:50 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Based on docs/designs/iommu-contexts.md, implement the redesigned IOMMU subsystem.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Missing in this RFC

Quarantine implementation is incomplete.
Automatic determination of max ctx_no (maximum IOMMU context count) based
on PCI device count.
Automatic determination of max ctx_no for dom_io.
Empty/no default IOMMU context mode (UEFI IOMMU-based boot).
Support for DomU (and configuration using e.g. libxl).

---
 xen/arch/x86/domain.c                |   2 +-
 xen/arch/x86/mm/p2m-ept.c            |   2 +-
 xen/arch/x86/pv/dom0_build.c         |   4 +-
 xen/arch/x86/tboot.c                 |   4 +-
 xen/common/memory.c                  |   4 +-
 xen/drivers/passthrough/Makefile     |   3 +
 xen/drivers/passthrough/context.c    | 626 +++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu.c      | 333 ++++----------
 xen/drivers/passthrough/pci.c        |  49 ++-
 xen/drivers/passthrough/quarantine.c |  49 +++
 xen/include/xen/iommu.h              | 118 ++++-
 xen/include/xen/pci.h                |   3 +
 12 files changed, 897 insertions(+), 300 deletions(-)
 create mode 100644 xen/drivers/passthrough/context.c
 create mode 100644 xen/drivers/passthrough/quarantine.c

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 00a3aaa576..52de634c81 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2381,7 +2381,7 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(iommu_pagetables):
 
-        ret = iommu_free_pgtables(d);
+        ret = iommu_free_pgtables(d, iommu_default_context(d));
         if ( ret )
             return ret;
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f83610cb8c..94c3631818 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -970,7 +970,7 @@ out:
             rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
                                    (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                    (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                    : 0));
+                                                    : 0), 0);
         else if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
                 iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index d8043fa58a..db7298737d 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -76,7 +76,7 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
          * iommu_memory_setup() ended up mapping them.
          */
         if ( need_iommu_pt_sync(d) &&
-             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, 0, flush_flags) )
+             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, 0, flush_flags, 0) )
             BUG();
 
         /* Read-only mapping + PGC_allocated + page-table page. */
@@ -127,7 +127,7 @@ static void __init iommu_memory_setup(struct domain *d, const char *what,
 
     while ( (rc = iommu_map(d, _dfn(mfn_x(mfn)), mfn, nr,
                             IOMMUF_readable | IOMMUF_writable | IOMMUF_preempt,
-                            flush_flags)) > 0 )
+                            flush_flags, 0)) > 0 )
     {
         mfn = mfn_add(mfn, rc);
         nr -= rc;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index ba0700d2d5..ca55306830 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -216,9 +216,9 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
 
         if ( is_iommu_enabled(d) && is_vtd )
         {
-            const struct domain_iommu *dio = dom_iommu(d);
+            struct domain_iommu *dio = dom_iommu(d);
 
-            update_iommu_mac(&ctx, dio->arch.vtd.pgd_maddr,
+            update_iommu_mac(&ctx, iommu_default_context(d)->arch.vtd.pgd_maddr,
                              agaw_to_level(dio->arch.vtd.agaw));
         }
     }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index de2cc7ad92..0eb0f9da7b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -925,7 +925,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
         ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
-                                IOMMU_FLUSHF_modified);
+                                IOMMU_FLUSHF_modified, 0);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
@@ -939,7 +939,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
             put_page(pages[i]);
 
         ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done,
-                                IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
+                                IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified, 0);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index a1621540b7..69327080ab 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -4,6 +4,9 @@ obj-$(CONFIG_X86) += x86/
 obj-$(CONFIG_ARM) += arm/
 
 obj-y += iommu.o
+obj-y += context.o
+obj-y += quarantine.o
+
 obj-$(CONFIG_HAS_PCI) += pci.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-$(CONFIG_HAS_PCI) += ats.o
diff --git a/xen/drivers/passthrough/context.c b/xen/drivers/passthrough/context.c
new file mode 100644
index 0000000000..3cc7697164
--- /dev/null
+++ b/xen/drivers/passthrough/context.c
@@ -0,0 +1,626 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/iommu.h>
+#include <xen/sched.h>
+#include <xen/spinlock.h>
+#include <xen/bitops.h>
+#include <xen/bitmap.h>
+#include <xen/event.h>
+
+bool iommu_check_context(struct domain *d, u16 ctx_no) {
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if ( ctx_no == 0 )
+        return true; /* Default context always exists. */
+
+    if ( (ctx_no - 1) >= hd->other_contexts.count )
+        return false; /* out of bounds */
+
+    return test_bit(ctx_no - 1, hd->other_contexts.bitmap);
+}
+
+struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no) {
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if (!iommu_check_context(d, ctx_no))
+        return NULL;
+
+    if (ctx_no == 0)
+        return &hd->default_ctx;
+    else
+        return &hd->other_contexts.map[ctx_no - 1];
+}
+
+static unsigned int mapping_order(const struct domain_iommu *hd,
+                                  dfn_t dfn, mfn_t mfn, unsigned long nr)
+{
+    unsigned long res = dfn_x(dfn) | mfn_x(mfn);
+    unsigned long sizes = hd->platform_ops->page_sizes;
+    unsigned int bit = find_first_set_bit(sizes), order = 0;
+
+    ASSERT(bit == PAGE_SHIFT);
+
+    while ( (sizes = (sizes >> bit) & ~1) )
+    {
+        unsigned long mask;
+
+        bit = find_first_set_bit(sizes);
+        mask = (1UL << bit) - 1;
+        if ( nr <= mask || (res & mask) )
+            break;
+        order += bit;
+        nr >>= bit;
+        res >>= bit;
+    }
+
+    return order;
+}
+
+long _iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
+               unsigned long page_count, unsigned int flags,
+               unsigned int *flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned long i;
+    unsigned int order, j = 0;
+    int rc = 0;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if (!iommu_check_context(d, ctx_no))
+        return -ENOENT;
+
+    ASSERT(!IOMMUF_order(flags));
+
+    for ( i = 0; i < page_count; i += 1UL << order )
+    {
+        dfn_t dfn = dfn_add(dfn0, i);
+        mfn_t mfn = mfn_add(mfn0, i);
+
+        order = mapping_order(hd, dfn, mfn, page_count - i);
+
+        if ( (flags & IOMMUF_preempt) &&
+             ((!(++j & 0xfff) && general_preempt_check()) ||
+              i > LONG_MAX - (1UL << order)) )
+            return i;
+
+        rc = iommu_call(hd->platform_ops, map_page, d, dfn, mfn,
+                        flags | IOMMUF_order(order), flush_flags,
+                        iommu_get_context(d, ctx_no));
+
+        if ( likely(!rc) )
+            continue;
+
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
+
+        /* while statement to satisfy __must_check */
+        while ( _iommu_unmap(d, dfn0, i, 0, flush_flags, ctx_no) )
+            break;
+
+        if ( !ctx_no && !is_hardware_domain(d) )
+            domain_crash(d);
+
+        break;
+    }
+
+    /*
+     * Something went wrong so, if we were dealing with more than a single
+     * page, flush everything and clear flush flags.
+     */
+    if ( page_count > 1 && unlikely(rc) &&
+         !iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
+    return rc;
+}
+
+long iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
+               unsigned long page_count, unsigned int flags,
+               unsigned int *flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    long ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_map(d, dfn0, mfn0, page_count, flags, flush_flags, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
+                     unsigned long page_count, unsigned int flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int flush_flags = 0;
+    int rc;
+
+    ASSERT(!(flags & IOMMUF_preempt));
+
+    spin_lock(&hd->lock);
+    rc = _iommu_map(d, dfn, mfn, page_count, flags, &flush_flags, 0);
+
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = _iommu_iotlb_flush(d, dfn, page_count, flush_flags, 0);
+    spin_unlock(&hd->lock);
+
+    return rc;
+}
+
+long iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
+                 unsigned int flags, unsigned int *flush_flags,
+                 u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    long ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_unmap(d, dfn0, page_count, flags, flush_flags, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+long _iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
+                  unsigned int flags, unsigned int *flush_flags,
+                  u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned long i;
+    unsigned int order, j = 0;
+    int rc = 0;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if ( !iommu_check_context(d, ctx_no) )
+        return -ENOENT;
+
+    ASSERT(!(flags & ~IOMMUF_preempt));
+
+    for ( i = 0; i < page_count; i += 1UL << order )
+    {
+        dfn_t dfn = dfn_add(dfn0, i);
+        int err;
+
+        order = mapping_order(hd, dfn, _mfn(0), page_count - i);
+
+        if ( (flags & IOMMUF_preempt) &&
+             ((!(++j & 0xfff) && general_preempt_check()) ||
+              i > LONG_MAX - (1UL << order)) )
+            return i;
+
+        err = iommu_call(hd->platform_ops, unmap_page, d, dfn,
+                         flags | IOMMUF_order(order), flush_flags,
+                         iommu_get_context(d, ctx_no));
+
+        if ( likely(!err) )
+            continue;
+
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), err);
+
+        if ( !rc )
+            rc = err;
+
+        if ( !is_hardware_domain(d) )
+        {
+            domain_crash(d);
+            break;
+        }
+    }
+
+    /*
+     * Something went wrong so, if we were dealing with more than a single
+     * page, flush everything and clear flush flags.
+     */
+    if ( page_count > 1 && unlikely(rc) &&
+         !iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
+    return rc;
+}
+
+int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_count)
+{
+    unsigned int flush_flags = 0;
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    spin_lock(&hd->lock);
+    rc = _iommu_unmap(d, dfn, page_count, 0, &flush_flags, 0);
+
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = _iommu_iotlb_flush(d, dfn, page_count, flush_flags, 0);
+    spin_unlock(&hd->lock);
+
+    return rc;
+}
+
+int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                      unsigned int *flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    if ( !is_iommu_enabled(d) || !hd->platform_ops->lookup_page )
+        return -EOPNOTSUPP;
+
+    if (!iommu_check_context(d, ctx_no))
+        return -ENOENT;
+
+    spin_lock(&hd->lock);
+    ret = iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags, iommu_get_context(d, ctx_no));
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count,
+                       unsigned int flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
+         !page_count || !flush_flags )
+        return 0;
+
+    if ( dfn_eq(dfn, INVALID_DFN) )
+        return -EINVAL;
+
+    /* The caller holds hd->lock; it must not be dropped here. */
+    if ( !iommu_check_context(d, ctx_no) )
+        return -ENOENT;
+
+
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, iommu_get_context(d, ctx_no),
+                    dfn, page_count, flush_flags);
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %lu flags %x\n",
+                   d->domain_id, rc, dfn_x(dfn), page_count, flush_flags);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
+}
+
+int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count,
+                      unsigned int flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_iotlb_flush(d, dfn, page_count, flush_flags, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ctx_no, u32 flags)
+{
+    if ( !dom_iommu(d)->platform_ops->context_init )
+        return -ENOSYS;
+
+    INIT_LIST_HEAD(&ctx->devices);
+    ctx->id = ctx_no;
+    ctx->dying = false;
+
+    return iommu_call(dom_iommu(d)->platform_ops, context_init, d, ctx, flags);
+}
+
+int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags)
+{
+    unsigned int i;
+    int ret;
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->lock);
+
+    /* TODO: use TSL instead ? */
+    i = find_first_zero_bit(hd->other_contexts.bitmap, hd->other_contexts.count);
+
+    if ( i >= hd->other_contexts.count ) /* no free context */
+    {
+        spin_unlock(&hd->lock);
+        return -ENOSPC;
+    }
+    set_bit(i, hd->other_contexts.bitmap);
+    *ctx_no = i + 1;
+
+    ret = iommu_context_init(d, iommu_get_context(d, *ctx_no), *ctx_no, flags);
+
+    if ( ret )
+        __clear_bit(i, hd->other_contexts.bitmap);
+
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no)
+{
+    struct iommu_context *ctx;
+    int ret;
+
+    pcidevs_lock();
+
+    if ( !iommu_check_context(d, ctx_no) )
+    {
+        ret = -ENOENT;
+        goto unlock;
+    }
+
+    ctx = iommu_get_context(d, ctx_no);
+
+    if ( ctx->dying )
+    {
+        ret = -EINVAL;
+        goto unlock;
+    }
+
+    ret = iommu_call(dom_iommu(d)->platform_ops, attach, d, dev, ctx);
+
+    if ( !ret )
+    {
+        dev->context = ctx_no;
+        list_add(&dev->context_list, &ctx->devices);
+    }
+
+unlock:
+    pcidevs_unlock();
+    return ret;
+}
+
+int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_attach_context(d, dev, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_dettach_context(struct domain *d, device_t *dev)
+{
+    struct iommu_context *ctx;
+    int ret;
+
+    if ( !dev->domain )
+    {
+        printk(XENLOG_WARNING "IOMMU: Trying to detach a non-attached device\n");
+        WARN();
+        return 0;
+    }
+
+    /* Make sure device is actually in the domain. */
+    ASSERT(d == dev->domain);
+
+    pcidevs_lock();
+
+    ctx = iommu_get_context(d, dev->context);
+    /* dev->context must name a valid context here, or it is stale. */
+    ASSERT(ctx);
+
+    ret = iommu_call(dom_iommu(d)->platform_ops, dettach, d, dev, ctx);
+
+    if ( !ret )
+    {
+        list_del(&dev->context_list);
+
+        /** TODO: Do we need to remove the device from domain ?
+         *        Reattaching to something (quarantine, hardware domain ?)
+         */
+
+        /*
+         * rcu_lock_domain ?
+         * list_del(&dev->domain_list);
+         * dev->domain = ?;
+         */
+    }
+
+    pcidevs_unlock();
+    return ret;
+}
+
+int iommu_dettach_context(struct domain *d, device_t *dev)
+{
+    int ret;
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->lock);
+    ret = _iommu_dettach_context(d, dev);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                            device_t *dev, u16 ctx_no)
+{
+    struct domain_iommu *hd;
+    u16 prev_ctx_no;
+    device_t *ctx_dev;
+    struct iommu_context *prev_ctx, *next_ctx;
+    int ret;
+    bool same_domain;
+
+    /* Make sure we actually are doing something meaningful */
+    BUG_ON(!prev_dom && !next_dom);
+
+    /* TODO: Do such cases exist ? */
+    // /* Platform ops must match */
+    // if (dom_iommu(prev_dom)->platform_ops != dom_iommu(next_dom)->platform_ops)
+    //     return -EINVAL;
+
+    if ( !prev_dom )
+        return _iommu_attach_context(next_dom, dev, ctx_no);
+
+    if ( !next_dom )
+        return _iommu_dettach_context(prev_dom, dev);
+
+    pcidevs_lock();
+
+    hd = dom_iommu(prev_dom);
+    same_domain = prev_dom == next_dom;
+
+    prev_ctx_no = dev->context;
+
+    if ( same_domain && (ctx_no == prev_ctx_no) )
+    {
+        printk(XENLOG_DEBUG "Reattaching %pp to same IOMMU context c%hu\n", &dev->sbdf, ctx_no);
+        ret = 0;
+        goto unlock;
+    }
+
+    if ( !iommu_check_context(next_dom, ctx_no) )
+    {
+        ret = -ENOENT;
+        goto unlock;
+    }
+
+    prev_ctx = iommu_get_context(prev_dom, prev_ctx_no);
+    next_ctx = iommu_get_context(next_dom, ctx_no);
+
+    if ( next_ctx->dying )
+    {
+        ret = -EINVAL;
+        goto unlock;
+    }
+
+    ret = iommu_call(hd->platform_ops, reattach, next_dom, dev, prev_ctx,
+                     next_ctx);
+
+    if ( ret )
+        goto unlock;
+
+    /* Remove device from previous context, and add it to new one. */
+    list_for_each_entry(ctx_dev, &prev_ctx->devices, context_list)
+    {
+        if ( ctx_dev == dev )
+        {
+            list_del(&ctx_dev->context_list);
+            list_add(&ctx_dev->context_list, &next_ctx->devices);
+            break;
+        }
+    }
+
+    if ( !same_domain )
+    {
+        /* Update domain pci devices accordingly */
+
+        /** TODO: should be done here or elsewhere ? */
+    }
+
+    if ( !ret )
+        dev->context = ctx_no; /* update device context */
+
+unlock:
+    pcidevs_unlock();
+    return ret;
+}
+
+int iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                           device_t *dev, u16 ctx_no)
+{
+    int ret;
+    struct domain_iommu *prev_hd = dom_iommu(prev_dom);
+    struct domain_iommu *next_hd = dom_iommu(next_dom);
+
+    spin_lock(&prev_hd->lock);
+
+    if (prev_dom != next_dom)
+        spin_lock(&next_hd->lock);
+
+    ret = _iommu_reattach_context(prev_dom, next_dom, dev, ctx_no);
+
+    spin_unlock(&prev_hd->lock);
+
+    if (prev_dom != next_dom)
+        spin_unlock(&next_hd->lock);
+
+    return ret;
+}
+
+int _iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if ( !dom_iommu(d)->platform_ops->context_teardown )
+        return -ENOSYS;
+
+    ctx->dying = true;
+
+    /* first reattach devices back to default context if needed */
+    if ( flags & IOMMU_TEARDOWN_REATTACH_DEFAULT )
+    {
+        struct pci_dev *device;
+        list_for_each_entry(device, &ctx->devices, context_list)
+            _iommu_reattach_context(d, d, device, 0);
+    }
+    else if (!list_empty(&ctx->devices))
+        return -EBUSY; /* there is a device in context */
+
+    return iommu_call(hd->platform_ops, context_teardown, d, ctx, flags);
+}
+
+int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_context_teardown(d, ctx, flags);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags)
+{
+    int ret;
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if ( ctx_no == 0 )
+        return -EINVAL;
+
+    spin_lock(&hd->lock);
+
+    ret = iommu_check_context(d, ctx_no)
+          ? _iommu_context_teardown(d, iommu_get_context(d, ctx_no), flags)
+          : -ENOENT;
+
+    if ( !ret )
+        clear_bit(ctx_no - 1, hd->other_contexts.bitmap);
+
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index ba18136c46..a9e2a8a49b 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -12,6 +12,7 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/pci.h>
 #include <xen/sched.h>
 #include <xen/iommu.h>
 #include <xen/paging.h>
@@ -21,6 +22,10 @@
 #include <xen/softirq.h>
 #include <xen/keyhandler.h>
 #include <xsm/xsm.h>
+#include <asm/iommu.h>
+#include <asm/bitops.h>
+#include <asm/device.h>
+#include <xen/spinlock.h>
 
 #ifdef CONFIG_X86
 #include <asm/e820.h>
@@ -35,22 +40,6 @@ bool __read_mostly force_iommu;
 bool __read_mostly iommu_verbose;
 static bool __read_mostly iommu_crash_disable;
 
-#define IOMMU_quarantine_none         0 /* aka false */
-#define IOMMU_quarantine_basic        1 /* aka true */
-#define IOMMU_quarantine_scratch_page 2
-#ifdef CONFIG_HAS_PCI
-uint8_t __read_mostly iommu_quarantine =
-# if defined(CONFIG_IOMMU_QUARANTINE_NONE)
-    IOMMU_quarantine_none;
-# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC)
-    IOMMU_quarantine_basic;
-# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE)
-    IOMMU_quarantine_scratch_page;
-# endif
-#else
-# define iommu_quarantine IOMMU_quarantine_none
-#endif /* CONFIG_HAS_PCI */
-
 static bool __hwdom_initdata iommu_hwdom_none;
 bool __hwdom_initdata iommu_hwdom_strict;
 bool __read_mostly iommu_hwdom_passthrough;
@@ -61,6 +50,13 @@ int8_t __hwdom_initdata iommu_hwdom_reserved = -1;
 bool __read_mostly iommu_hap_pt_share = true;
 #endif
 
+uint16_t __read_mostly iommu_hwdom_nb_ctx = 8;
+bool __read_mostly iommu_hwdom_nb_ctx_forced = false;
+
+#ifdef CONFIG_X86
+unsigned int __read_mostly iommu_hwdom_arena_order = CONFIG_X86_ARENA_ORDER;
+#endif
+
 bool __read_mostly iommu_debug;
 
 DEFINE_PER_CPU(bool, iommu_dont_flush_iotlb);
@@ -156,6 +152,7 @@ static int __init cf_check parse_dom0_iommu_param(const char *s)
     int rc = 0;
 
     do {
+        long long ll_val;
         int val;
 
         ss = strchr(s, ',');
@@ -172,6 +169,20 @@ static int __init cf_check parse_dom0_iommu_param(const char *s)
             iommu_hwdom_reserved = val;
         else if ( !cmdline_strcmp(s, "none") )
             iommu_hwdom_none = true;
+        else if ( !parse_signed_integer("nb-ctx", s, ss, &ll_val) )
+        {
+            if (ll_val > 0 && ll_val < UINT16_MAX)
+                iommu_hwdom_nb_ctx = ll_val;
+            else
+                printk(XENLOG_WARNING "'nb-ctx=%lld' value out of range!\n", ll_val);
+        }
+        else if ( !parse_signed_integer("arena-order", s, ss, &ll_val) )
+        {
+            if (ll_val > 0)
+                iommu_hwdom_arena_order = ll_val;
+            else
+                printk(XENLOG_WARNING "'arena-order=%lld' value out of range!\n", ll_val);
+        }
         else
             rc = -EINVAL;
 
@@ -193,9 +204,26 @@ static void __hwdom_init check_hwdom_reqs(struct domain *d)
     arch_iommu_check_autotranslated_hwdom(d);
 }
 
+uint16_t __hwdom_init iommu_hwdom_ctx_count(void)
+{
+    if (iommu_hwdom_nb_ctx_forced)
+        return iommu_hwdom_nb_ctx;
+
+    /* TODO: Find a proper way of counting devices ? */
+    return 256;
+
+    /*
+    if (iommu_hwdom_nb_ctx != UINT16_MAX)
+        iommu_hwdom_nb_ctx++;
+    else
+        printk(XENLOG_WARNING "IOMMU: Can't prepare more contexts: too many devices\n");
+    */
+}
+
 int iommu_domain_init(struct domain *d, unsigned int opts)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    uint16_t other_context_count;
     int ret = 0;
 
     if ( is_hardware_domain(d) )
@@ -236,6 +264,37 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
 
     ASSERT(!(hd->need_sync && hd->hap_pt_share));
 
+    iommu_hwdom_nb_ctx = iommu_hwdom_ctx_count();
+
+    if ( is_hardware_domain(d) )
+    {
+        BUG_ON(iommu_hwdom_nb_ctx == 0); /* sanity check (prevent underflow) */
+        printk(XENLOG_INFO "Dom0 uses %lu IOMMU contexts\n",
+               (unsigned long)iommu_hwdom_nb_ctx);
+        hd->other_contexts.count = iommu_hwdom_nb_ctx - 1;
+    }
+    else if ( d == dom_io )
+    {
+        /* TODO: Determine count differently */
+        hd->other_contexts.count = 128;
+    }
+    else
+        hd->other_contexts.count = 0;
+
+    other_context_count = hd->other_contexts.count;
+    if (other_context_count > 0) {
+        /* Initialize context bitmap */
+        hd->other_contexts.bitmap = xzalloc_array(unsigned long,
+                                                  BITS_TO_LONGS(other_context_count));
+        hd->other_contexts.map = xzalloc_array(struct iommu_context,
+                                               other_context_count);
+    } else {
+        hd->other_contexts.bitmap = NULL;
+        hd->other_contexts.map = NULL;
+    }
+
+    iommu_context_init(d, &hd->default_ctx, 0, IOMMU_CONTEXT_INIT_default);
+
     return 0;
 }
 
@@ -249,13 +308,12 @@ static void cf_check iommu_dump_page_tables(unsigned char key)
 
     for_each_domain(d)
     {
-        if ( is_hardware_domain(d) || !is_iommu_enabled(d) )
+        if ( !is_iommu_enabled(d) )
             continue;
 
         if ( iommu_use_hap_pt(d) )
         {
             printk("%pd sharing page tables\n", d);
-            continue;
         }
 
         iommu_vcall(dom_iommu(d)->platform_ops, dump_page_tables, d);
@@ -276,10 +334,13 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     iommu_vcall(hd->platform_ops, hwdom_init, d);
 }
 
-static void iommu_teardown(struct domain *d)
+void iommu_domain_destroy(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
+    if ( !is_iommu_enabled(d) )
+        return;
+
     /*
      * During early domain creation failure, we may reach here with the
      * ops not yet initialized.
@@ -288,224 +349,10 @@ static void iommu_teardown(struct domain *d)
         return;
 
     iommu_vcall(hd->platform_ops, teardown, d);
-}
-
-void iommu_domain_destroy(struct domain *d)
-{
-    if ( !is_iommu_enabled(d) )
-        return;
-
-    iommu_teardown(d);
 
     arch_iommu_domain_destroy(d);
 }
 
-static unsigned int mapping_order(const struct domain_iommu *hd,
-                                  dfn_t dfn, mfn_t mfn, unsigned long nr)
-{
-    unsigned long res = dfn_x(dfn) | mfn_x(mfn);
-    unsigned long sizes = hd->platform_ops->page_sizes;
-    unsigned int bit = find_first_set_bit(sizes), order = 0;
-
-    ASSERT(bit == PAGE_SHIFT);
-
-    while ( (sizes = (sizes >> bit) & ~1) )
-    {
-        unsigned long mask;
-
-        bit = find_first_set_bit(sizes);
-        mask = (1UL << bit) - 1;
-        if ( nr <= mask || (res & mask) )
-            break;
-        order += bit;
-        nr >>= bit;
-        res >>= bit;
-    }
-
-    return order;
-}
-
-long iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
-               unsigned long page_count, unsigned int flags,
-               unsigned int *flush_flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-    unsigned long i;
-    unsigned int order, j = 0;
-    int rc = 0;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    ASSERT(!IOMMUF_order(flags));
-
-    for ( i = 0; i < page_count; i += 1UL << order )
-    {
-        dfn_t dfn = dfn_add(dfn0, i);
-        mfn_t mfn = mfn_add(mfn0, i);
-
-        order = mapping_order(hd, dfn, mfn, page_count - i);
-
-        if ( (flags & IOMMUF_preempt) &&
-             ((!(++j & 0xfff) && general_preempt_check()) ||
-              i > LONG_MAX - (1UL << order)) )
-            return i;
-
-        rc = iommu_call(hd->platform_ops, map_page, d, dfn, mfn,
-                        flags | IOMMUF_order(order), flush_flags);
-
-        if ( likely(!rc) )
-            continue;
-
-        if ( !d->is_shutting_down && printk_ratelimit() )
-            printk(XENLOG_ERR
-                   "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
-                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
-
-        /* while statement to satisfy __must_check */
-        while ( iommu_unmap(d, dfn0, i, 0, flush_flags) )
-            break;
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
-
-        break;
-    }
-
-    /*
-     * Something went wrong so, if we were dealing with more than a single
-     * page, flush everything and clear flush flags.
-     */
-    if ( page_count > 1 && unlikely(rc) &&
-         !iommu_iotlb_flush_all(d, *flush_flags) )
-        *flush_flags = 0;
-
-    return rc;
-}
-
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                     unsigned long page_count, unsigned int flags)
-{
-    unsigned int flush_flags = 0;
-    int rc;
-
-    ASSERT(!(flags & IOMMUF_preempt));
-    rc = iommu_map(d, dfn, mfn, page_count, flags, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
-long iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
-                 unsigned int flags, unsigned int *flush_flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-    unsigned long i;
-    unsigned int order, j = 0;
-    int rc = 0;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    ASSERT(!(flags & ~IOMMUF_preempt));
-
-    for ( i = 0; i < page_count; i += 1UL << order )
-    {
-        dfn_t dfn = dfn_add(dfn0, i);
-        int err;
-
-        order = mapping_order(hd, dfn, _mfn(0), page_count - i);
-
-        if ( (flags & IOMMUF_preempt) &&
-             ((!(++j & 0xfff) && general_preempt_check()) ||
-              i > LONG_MAX - (1UL << order)) )
-            return i;
-
-        err = iommu_call(hd->platform_ops, unmap_page, d, dfn,
-                         flags | IOMMUF_order(order), flush_flags);
-
-        if ( likely(!err) )
-            continue;
-
-        if ( !d->is_shutting_down && printk_ratelimit() )
-            printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
-                   d->domain_id, dfn_x(dfn), err);
-
-        if ( !rc )
-            rc = err;
-
-        if ( !is_hardware_domain(d) )
-        {
-            domain_crash(d);
-            break;
-        }
-    }
-
-    /*
-     * Something went wrong so, if we were dealing with more than a single
-     * page, flush everything and clear flush flags.
-     */
-    if ( page_count > 1 && unlikely(rc) &&
-         !iommu_iotlb_flush_all(d, *flush_flags) )
-        *flush_flags = 0;
-
-    return rc;
-}
-
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_count)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_count, 0, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
-int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                      unsigned int *flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-
-    if ( !is_iommu_enabled(d) || !hd->platform_ops->lookup_page )
-        return -EOPNOTSUPP;
-
-    return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
-}
-
-int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count,
-                      unsigned int flush_flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-    int rc;
-
-    if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
-         !page_count || !flush_flags )
-        return 0;
-
-    if ( dfn_eq(dfn, INVALID_DFN) )
-        return -EINVAL;
-
-    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn, page_count,
-                    flush_flags);
-    if ( unlikely(rc) )
-    {
-        if ( !d->is_shutting_down && printk_ratelimit() )
-            printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %lu flags %x\n",
-                   d->domain_id, rc, dfn_x(dfn), page_count, flush_flags);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
-    }
-
-    return rc;
-}
-
 int iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -515,7 +362,7 @@ int iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags)
          !flush_flags )
         return 0;
 
-    rc = iommu_call(hd->platform_ops, iotlb_flush, d, INVALID_DFN, 0,
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, NULL, INVALID_DFN, 0,
                     flush_flags | IOMMU_FLUSHF_all);
     if ( unlikely(rc) )
     {
@@ -531,24 +378,6 @@ int iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags)
     return rc;
 }
 
-int iommu_quarantine_dev_init(device_t *dev)
-{
-    const struct domain_iommu *hd = dom_iommu(dom_io);
-
-    if ( !iommu_quarantine || !hd->platform_ops->quarantine_init )
-        return 0;
-
-    return iommu_call(hd->platform_ops, quarantine_init,
-                      dev, iommu_quarantine == IOMMU_quarantine_scratch_page);
-}
-
-static int __init iommu_quarantine_init(void)
-{
-    dom_io->options |= XEN_DOMCTL_CDF_iommu;
-
-    return iommu_domain_init(dom_io, 0);
-}
-
 int __init iommu_setup(void)
 {
     int rc = -ENODEV;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 5a446d3dce..46c8a01801 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2008,  Netronome Systems, Inc.
- *                
+ *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
  * version 2, as published by the Free Software Foundation.
@@ -286,14 +286,14 @@ static void apply_quirks(struct pci_dev *pdev)
          * Device [8086:2fc0]
          * Erratum HSE43
          * CONFIG_TDP_NOMINAL CSR Implemented at Incorrect Offset
-         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v3-spec-update.html 
+         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v3-spec-update.html
          */
         { PCI_VENDOR_ID_INTEL, 0x2fc0 },
         /*
          * Devices [8086:6f60,6fa0,6fc0]
          * Errata BDF2 / BDX2
          * PCI BARs in the Home Agent Will Return Non-Zero Values During Enumeration
-         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v4-spec-update.html 
+         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v4-spec-update.html
         */
         { PCI_VENDOR_ID_INTEL, 0x6f60 },
         { PCI_VENDOR_ID_INTEL, 0x6fa0 },
@@ -870,8 +870,8 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             break;
-        ret = iommu_call(hd->platform_ops, reassign_device, d, target, devfn,
-                         pci_to_dev(pdev));
+        ret = iommu_call(hd->platform_ops, add_devfn, d, pci_to_dev(pdev), devfn,
+                         &target->iommu.default_ctx);
         if ( ret )
             goto out;
     }
@@ -880,9 +880,9 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
     vpci_deassign_device(pdev);
     write_unlock(&d->pci_lock);
 
-    devfn = pdev->devfn;
-    ret = iommu_call(hd->platform_ops, reassign_device, d, target, devfn,
-                     pci_to_dev(pdev));
+    ret = iommu_call(hd->platform_ops, reattach, target, pci_to_dev(pdev),
+                     iommu_get_context(d, pdev->context),
+                     iommu_default_context(target));
     if ( ret )
         goto out;
 
@@ -890,6 +890,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
         pdev->quarantine = false;
 
     pdev->fault.count = 0;
+    pdev->domain = target;
 
     write_lock(&target->pci_lock);
     /* Re-assign back to hardware_domain */
@@ -1329,12 +1330,7 @@ static int cf_check _dump_pci_devices(struct pci_seg *pseg, void *arg)
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
     {
         printk("%pp - ", &pdev->sbdf);
-#ifdef CONFIG_X86
-        if ( pdev->domain == dom_io )
-            printk("DomIO:%x", pdev->arch.pseudo_domid);
-        else
-#endif
-            printk("%pd", pdev->domain);
+        printk("%pd", pdev->domain);
         printk(" - node %-3d", (pdev->node != NUMA_NO_NODE) ? pdev->node : -1);
         pdev_dump_msi(pdev);
         printk("\n");
@@ -1373,7 +1369,7 @@ static int iommu_add_device(struct pci_dev *pdev)
     if ( !is_iommu_enabled(pdev->domain) )
         return 0;
 
-    rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
+    rc = iommu_attach_context(pdev->domain, pci_to_dev(pdev), 0);
     if ( rc || !pdev->phantom_stride )
         return rc;
 
@@ -1382,7 +1378,9 @@ static int iommu_add_device(struct pci_dev *pdev)
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             return 0;
-        rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
+
+        rc = iommu_call(hd->platform_ops, add_devfn, pdev->domain, pdev, devfn,
+                        iommu_default_context(pdev->domain));
         if ( rc )
             printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
                    &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
@@ -1409,6 +1407,7 @@ static int iommu_enable_device(struct pci_dev *pdev)
 static int iommu_remove_device(struct pci_dev *pdev)
 {
     const struct domain_iommu *hd;
+    struct iommu_context *ctx;
     u8 devfn;
 
     if ( !pdev->domain )
@@ -1418,6 +1417,10 @@ static int iommu_remove_device(struct pci_dev *pdev)
     if ( !is_iommu_enabled(pdev->domain) )
         return 0;
 
+    ctx = iommu_get_context(pdev->domain, pdev->context);
+    if ( !ctx )
+        return -EINVAL;
+
     for ( devfn = pdev->devfn ; pdev->phantom_stride; )
     {
         int rc;
@@ -1425,8 +1428,8 @@ static int iommu_remove_device(struct pci_dev *pdev)
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             break;
-        rc = iommu_call(hd->platform_ops, remove_device, devfn,
-                        pci_to_dev(pdev));
+        rc = iommu_call(hd->platform_ops, remove_devfn, pdev->domain, pdev,
+                        devfn, ctx);
         if ( !rc )
             continue;
 
@@ -1437,7 +1440,7 @@ static int iommu_remove_device(struct pci_dev *pdev)
 
     devfn = pdev->devfn;
 
-    return iommu_call(hd->platform_ops, remove_device, devfn, pci_to_dev(pdev));
+    return iommu_call(hd->platform_ops, dettach, pdev->domain, pdev, ctx);
 }
 
 static int device_assigned(u16 seg, u8 bus, u8 devfn)
@@ -1497,22 +1500,22 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     if ( pdev->domain != dom_io )
     {
         rc = iommu_quarantine_dev_init(pci_to_dev(pdev));
+        /** TODO: Consider phantom functions */
         if ( rc )
             goto done;
     }
 
     pdev->fault.count = 0;
 
-    rc = iommu_call(hd->platform_ops, assign_device, d, devfn, pci_to_dev(pdev),
-                    flag);
+    iommu_attach_context(d, pci_to_dev(pdev), 0);
 
     while ( pdev->phantom_stride && !rc )
     {
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             break;
-        rc = iommu_call(hd->platform_ops, assign_device, d, devfn,
-                        pci_to_dev(pdev), flag);
+        rc = iommu_call(hd->platform_ops, add_devfn, d, pci_to_dev(pdev),
+                        devfn, iommu_default_context(d));
     }
 
     if ( rc )
diff --git a/xen/drivers/passthrough/quarantine.c b/xen/drivers/passthrough/quarantine.c
new file mode 100644
index 0000000000..b58f136ad8
--- /dev/null
+++ b/xen/drivers/passthrough/quarantine.c
@@ -0,0 +1,49 @@
+#include <xen/stdint.h>
+#include <xen/iommu.h>
+#include <xen/sched.h>
+
+#ifdef CONFIG_HAS_PCI
+uint8_t __read_mostly iommu_quarantine =
+# if defined(CONFIG_IOMMU_QUARANTINE_NONE)
+    IOMMU_quarantine_none;
+# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC)
+    IOMMU_quarantine_basic;
+# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE)
+    IOMMU_quarantine_scratch_page;
+# endif
+#else
+# define iommu_quarantine IOMMU_quarantine_none
+#endif /* CONFIG_HAS_PCI */
+
+int iommu_quarantine_dev_init(device_t *dev)
+{
+    int ret;
+    u16 ctx_no;
+
+    if ( !iommu_quarantine )
+        return 0;
+
+    ret = iommu_context_alloc(dom_io, &ctx_no, IOMMU_CONTEXT_INIT_quarantine);
+
+    if ( ret )
+        return ret;
+
+    /** TODO: Setup scratch page, mappings... */
+
+    ret = iommu_reattach_context(dev->domain, dom_io, dev, ctx_no);
+
+    if ( ret )
+    {
+        ASSERT(!iommu_context_free(dom_io, ctx_no, 0));
+        return ret;
+    }
+
+    return ret;
+}
+
+int __init iommu_quarantine_init(void)
+{
+    dom_io->options |= XEN_DOMCTL_CDF_iommu;
+
+    return iommu_domain_init(dom_io, 0);
+}
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 442ae5322d..41b0e50827 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -52,7 +52,11 @@ static inline bool dfn_eq(dfn_t x, dfn_t y)
 #ifdef CONFIG_HAS_PASSTHROUGH
 extern bool iommu_enable, iommu_enabled;
 extern bool force_iommu, iommu_verbose;
+
 /* Boolean except for the specific purposes of drivers/passthrough/iommu.c. */
+#define IOMMU_quarantine_none         0 /* aka false */
+#define IOMMU_quarantine_basic        1 /* aka true */
+#define IOMMU_quarantine_scratch_page 2
 extern uint8_t iommu_quarantine;
 #else
 #define iommu_enabled false
@@ -107,6 +111,11 @@ extern bool amd_iommu_perdev_intremap;
 
 extern bool iommu_hwdom_strict, iommu_hwdom_passthrough, iommu_hwdom_inclusive;
 extern int8_t iommu_hwdom_reserved;
+extern uint16_t iommu_hwdom_nb_ctx;
+
+#ifdef CONFIG_X86
+extern unsigned int iommu_hwdom_arena_order;
+#endif
 
 extern unsigned int iommu_dev_iotlb_timeout;
 
@@ -161,11 +170,16 @@ enum
  */
 long __must_check iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
                             unsigned long page_count, unsigned int flags,
-                            unsigned int *flush_flags);
+                            unsigned int *flush_flags, u16 ctx_no);
+long __must_check _iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
+                             unsigned long page_count, unsigned int flags,
+                             unsigned int *flush_flags, u16 ctx_no);
 long __must_check iommu_unmap(struct domain *d, dfn_t dfn0,
                               unsigned long page_count, unsigned int flags,
-                              unsigned int *flush_flags);
-
+                              unsigned int *flush_flags, u16 ctx_no);
+long __must_check _iommu_unmap(struct domain *d, dfn_t dfn0,
+                               unsigned long page_count, unsigned int flags,
+                               unsigned int *flush_flags, u16 ctx_no);
 int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                                   unsigned long page_count,
                                   unsigned int flags);
@@ -173,11 +187,16 @@ int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
                                     unsigned long page_count);
 
 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                                   unsigned int *flags);
+                                   unsigned int *flags, u16 ctx_no);
 
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned long page_count,
-                                   unsigned int flush_flags);
+                                   unsigned int flush_flags,
+                                   u16 ctx_no);
+int __must_check _iommu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                   unsigned long page_count,
+                                   unsigned int flush_flags,
+                                   u16 ctx_no);
 int __must_check iommu_iotlb_flush_all(struct domain *d,
                                        unsigned int flush_flags);
 
@@ -250,20 +269,31 @@ struct page_info;
  */
 typedef int iommu_grdm_t(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt);
 
+struct iommu_context;
+
 struct iommu_ops {
     unsigned long page_sizes;
     int (*init)(struct domain *d);
     void (*hwdom_init)(struct domain *d);
-    int (*quarantine_init)(device_t *dev, bool scratch_page);
-    int (*add_device)(uint8_t devfn, device_t *dev);
+    int (*context_init)(struct domain *d, struct iommu_context *ctx,
+                        u32 flags);
+    int (*context_teardown)(struct domain *d, struct iommu_context *ctx,
+                            u32 flags);
+    int (*attach)(struct domain *d, device_t *dev,
+                  struct iommu_context *ctx);
+    int (*dettach)(struct domain *d, device_t *dev,
+                   struct iommu_context *prev_ctx);
+    int (*reattach)(struct domain *d, device_t *dev,
+                    struct iommu_context *prev_ctx,
+                    struct iommu_context *ctx);
+
     int (*enable_device)(device_t *dev);
-    int (*remove_device)(uint8_t devfn, device_t *dev);
-    int (*assign_device)(struct domain *d, uint8_t devfn, device_t *dev,
-                         uint32_t flag);
-    int (*reassign_device)(struct domain *s, struct domain *t,
-                           uint8_t devfn, device_t *dev);
 #ifdef CONFIG_HAS_PCI
     int (*get_device_group_id)(uint16_t seg, uint8_t bus, uint8_t devfn);
+    int (*add_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                     struct iommu_context *ctx);
+    int (*remove_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                    struct iommu_context *ctx);
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
@@ -274,12 +304,15 @@ struct iommu_ops {
      */
     int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
                                  unsigned int flags,
-                                 unsigned int *flush_flags);
+                                 unsigned int *flush_flags,
+                                 struct iommu_context *ctx);
     int __must_check (*unmap_page)(struct domain *d, dfn_t dfn,
                                    unsigned int order,
-                                   unsigned int *flush_flags);
+                                   unsigned int *flush_flags,
+                                   struct iommu_context *ctx);
     int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                                    unsigned int *flags);
+                                    unsigned int *flags,
+                                    struct iommu_context *ctx);
 
 #ifdef CONFIG_X86
     int (*enable_x2apic)(void);
@@ -292,14 +325,15 @@ struct iommu_ops {
     int (*setup_hpet_msi)(struct msi_desc *msi_desc);
 
     void (*adjust_irq_affinities)(void);
-    void (*clear_root_pgtable)(struct domain *d);
+    void (*clear_root_pgtable)(struct domain *d, struct iommu_context *ctx);
     int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
     void (*resume)(void);
     void (*crash_shutdown)(void);
-    int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
+    int __must_check (*iotlb_flush)(struct domain *d,
+                                    struct iommu_context *ctx, dfn_t dfn,
                                     unsigned long page_count,
                                     unsigned int flush_flags);
     int (*get_reserved_device_memory)(iommu_grdm_t *func, void *ctxt);
@@ -343,11 +377,36 @@ extern int iommu_get_extra_reserved_device_memory(iommu_grdm_t *func,
 # define iommu_vcall iommu_call
 #endif
 
+struct iommu_context {
+    u16 id; /* Context id (0 means default context) */
+    struct list_head devices;
+
+    struct arch_iommu_context arch;
+
+    bool opaque; /* context can't be modified nor accessed (e.g HAP) */
+    bool dying; /* the context is tearing down */
+};
+
+struct iommu_context_list {
+    uint16_t count; /* Context count excluding default context */
+
+    /* if count > 0 */
+
+    uint64_t *bitmap; /* bitmap of context allocation */
+    struct iommu_context *map; /* Map of contexts */
+};
+
+
 struct domain_iommu {
+    spinlock_t lock; /* iommu lock */
+
 #ifdef CONFIG_HAS_PASSTHROUGH
     struct arch_iommu arch;
 #endif
 
+    struct iommu_context default_ctx;
+    struct iommu_context_list other_contexts;
+
     /* iommu_ops */
     const struct iommu_ops *platform_ops;
 
@@ -380,6 +439,7 @@ struct domain_iommu {
 #define dom_iommu(d)              (&(d)->iommu)
 #define iommu_set_feature(d, f)   set_bit(f, dom_iommu(d)->features)
 #define iommu_clear_feature(d, f) clear_bit(f, dom_iommu(d)->features)
+#define iommu_default_context(d) (&dom_iommu(d)->default_ctx)
 
 /* Are we using the domain P2M table as its IOMMU pagetable? */
 #define iommu_use_hap_pt(d)       (IS_ENABLED(CONFIG_HVM) && \
@@ -405,6 +465,8 @@ int __must_check iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *func, void *ctxt);
+
+int __init iommu_quarantine_init(void);
 int iommu_quarantine_dev_init(device_t *dev);
 
 #ifdef CONFIG_HAS_PCI
@@ -414,6 +476,28 @@ int iommu_do_pci_domctl(struct xen_domctl *domctl, struct domain *d,
 
 void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
 
+struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no);
+bool iommu_check_context(struct domain *d, u16 ctx_no);
+
+#define IOMMU_CONTEXT_INIT_default (1 << 0)
+#define IOMMU_CONTEXT_INIT_quarantine (1 << 1)
+int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ctx_no, u32 flags);
+
+#define IOMMU_TEARDOWN_REATTACH_DEFAULT (1 << 0)
+#define IOMMU_TEARDOWN_PREEMPT (1 << 1)
+int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags);
+
+int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags);
+int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags);
+
+int iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                           device_t *dev, u16 ctx_no);
+int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no);
+int iommu_dettach_context(struct domain *d, device_t *dev);
+
+int _iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no);
+int _iommu_dettach_context(struct domain *d, device_t *dev);
+
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
  * avoid unecessary iotlb_flush in the low level IOMMU code.
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 63e49f0117..d6d4aaa6a5 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -97,6 +97,7 @@ struct pci_dev_info {
 struct pci_dev {
     struct list_head alldevs_list;
     struct list_head domain_list;
+    struct list_head context_list;
 
     struct list_head msi_list;
 
@@ -104,6 +105,8 @@ struct pci_dev {
 
     struct domain *domain;
 
+    uint16_t context; /* IOMMU context number of domain */
+
     const union {
         struct {
             uint8_t devfn;
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:44:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:44:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739942.1146898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjos-0000O6-Mw; Thu, 13 Jun 2024 12:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739942.1146898; Thu, 13 Jun 2024 12:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjos-0000Nz-JZ; Thu, 13 Jun 2024 12:44:46 +0000
Received: by outflank-mailman (input) for mailman id 739942;
 Thu, 13 Jun 2024 12:44:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sHjor-0000Nt-3y
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:44:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sHjop-0000XR-JJ; Thu, 13 Jun 2024 12:44:43 +0000
Received: from [15.248.2.235] (helo=[10.24.67.23])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sHjop-0004d8-A7; Thu, 13 Jun 2024 12:44:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=zOXqRhHHRj26v5QgB4k79VEHKG0EmO6UxKsZfxY/i3E=; b=GMn4pzAlQ1BhoF7LWObrENZkKP
	t49Ivkq2kYNwxBGS01YgyNkALthODw6Xilz4unatUHTPhBVe/n4IQ83r9B30f0bsH2wrAu69aBq2u
	y3fpOsm5j9Pd07Q7Xfk2GiRQCD8Wz4xprn4bL5BncEx+AHP0pOrJo5DM+OoNgHfTK5Vk=;
Message-ID: <fefb0ceb-0713-4520-b9a7-e37aa2f77850@xen.org>
Date: Thu, 13 Jun 2024 13:44:41 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v6 0/7] FF-A notifications
Content-Language: en-GB
To: "Oleksii K." <oleksii.kurochko@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Jens Wiklander <jens.wiklander@linaro.org>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 "patches@linaro.org" <patches@linaro.org>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Michal Orzel <michal.orzel@amd.com>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
 <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com>
 <615f1766-253d-43dc-b0f0-f8e2eb7360b5@xen.org>
 <8558AEB5-2F38-4F8C-A017-794E32045068@arm.com>
 <6a255f3dccc609e680659ed05b613c21a33cfb20.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <6a255f3dccc609e680659ed05b613c21a33cfb20.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 11/06/2024 11:36, Oleksii K. wrote:
> Hi Bertrand and Julien,
> 
> On Tue, 2024-06-11 at 07:09 +0000, Bertrand Marquis wrote:
>> Hi Julien and Oleksii,
>>
>> @Oleksii: Could we consider having this serie merged for next release
>> ?
> We can consider including it in Xen 4.19 as it has a low impact on
> existing systems and needs to be explicitly activated:
>   Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

It is now merged.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:51:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:51:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739946.1146907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjv6-0002X5-BT; Thu, 13 Jun 2024 12:51:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739946.1146907; Thu, 13 Jun 2024 12:51:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjv6-0002Wy-8q; Thu, 13 Jun 2024 12:51:12 +0000
Received: by outflank-mailman (input) for mailman id 739946;
 Thu, 13 Jun 2024 12:51:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sySU=NP=bounce.vates.tech=bounce-md_30504962.666aeb39.v1-c2b85fac760e4aaf95af6a4eaa84fc63@srs-se1.protection.inumbo.net>)
 id 1sHjv5-0002Wp-Np
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:51:11 +0000
Received: from mail187-11.suw11.mandrillapp.com
 (mail187-11.suw11.mandrillapp.com [198.2.187.11])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99c18921-2983-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 14:51:07 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-11.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W0Mj537XGzLfMRBF
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 12:51:05 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 c2b85fac760e4aaf95af6a4eaa84fc63; Thu, 13 Jun 2024 12:51:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99c18921-2983-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718283065; x=1718543565;
	bh=H4iqKVTD7LURL26EcQvbtrjH8XxZS+TosojtxE4Ee48=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=c08wS/AKcz57wazOW4VwyD1EncMNtY8sgZOsJqRbVDore7HbhEyB57ZEFllCiwjyf
	 8pn9XGXZLMnvBJlbJK7CKPwawMqEEMUQi7UFkrCwAHVhnhWqN7uvSb6dtuGk3G6jb/
	 afnX9g/Behp9Hd9Ndmu6fg9zn4dEsTm03WgiD5Niupynsiw65hMFhdfZyiHtxRPkor
	 0UbQowr8j7/a/5yTG31pEZhcE9OI2NYb4fiyZ+KwFXAviK/kUbE/WNhe7VDLGrg4k1
	 9V1MMXY7kRDDbafeePHBRsQn85kHybT7Yf7qYHIerxwEbUzqI2SnglJLuz5ELbyxzq
	 rWWisp75PW/AA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718283065; x=1718543565; i=anthony.perard@vates.tech;
	bh=H4iqKVTD7LURL26EcQvbtrjH8XxZS+TosojtxE4Ee48=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=RhXGyfq95LXceV67VjE3+2f+QTWRJokQf3eow1su+AZajJd7OsW2NdlFSII+c6ytz
	 PVFhsaiOIE+TyMiLlaOGGMK4MuGZ0IaBAJ1DwcXbz8sY6M93deivd2q2au9WaoSoqM
	 TFGDrmjGsE1c2Fx4UXNuzv75h5z38mo0l4z79OKDhpjbq5+UEt3MOxJueclDUvDLlb
	 LsyZqM8Z+yaD5CN/fCDJ95anDG3gyBQpkJiwKCJgCaDGrzR9sBtn6KTN0W3NcqmE6I
	 oAzkmIihS2fbIkCjXHG7log23VZuLOJ1K7XnXAPWwdkwduQrXYqmzIrQDlT9AT8v1f
	 7L6SaYPmUaqXQ==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[RFC=20XEN=20PATCH=20v9=205/5]=20domctl:=20Add=20XEN=5FDOMCTL=5Fgsi=5Fpermission=20to=20grant=20gsi?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718283062766
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>, "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org, "Daniel P . Smith" <dpsmith@apertussolutions.com>
Message-Id: <ZmrrNvv2sVaOIS5h@l14>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com> <20240607081127.126593-6-Jiqian.Chen@amd.com> <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com> <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com> <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com> <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
In-Reply-To: <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.c2b85fac760e4aaf95af6a4eaa84fc63?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240613:md
Date: Thu, 13 Jun 2024 12:51:05 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Wed, Jun 12, 2024 at 10:55:14AM +0000, Chen, Jiqian wrote:
> On 2024/6/12 18:34, Jan Beulich wrote:
> > On 12.06.2024 12:12, Chen, Jiqian wrote:
> >> On 2024/6/11 22:39, Jan Beulich wrote:
> >>> On 07.06.2024 10:11, Jiqian Chen wrote:
> >>>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
> >>>
> >>> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
> >>> you better would avoid making the call in the first place.
> >> Yes, for PV dom0, the errno is EOPNOTSUPP, then it will do below xc_domain_irq_permission.
> > 
> > Hence why call xc_domain_gsi_permission() at all on a PV Dom0?
> Is there a function to distinguish that current dom0 is PV or PVH dom0 in tools/libs?

That might never have been needed before, so probably not. There's
libxl__domain_type(), but if that works with dom0 it might return "HVM"
for a PVH dom0. So if xc_domain_getinfo_single() works and gives the
right info about dom0, libxl__domain_type() could be extended to deal
with dom0, I guess. I don't know if there's a good way to find out which
flavor of dom0 is running.

Cheers,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 12:55:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 12:55:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739955.1146934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjzO-0003G2-0C; Thu, 13 Jun 2024 12:55:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739955.1146934; Thu, 13 Jun 2024 12:55:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHjzN-0003Fv-Tk; Thu, 13 Jun 2024 12:55:37 +0000
Received: by outflank-mailman (input) for mailman id 739955;
 Thu, 13 Jun 2024 12:55:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHjzM-0003EK-Oq
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:55:36 +0000
Received: from mail-yb1-xb36.google.com (mail-yb1-xb36.google.com
 [2607:f8b0:4864:20::b36])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38ec0fd7-2984-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 14:55:34 +0200 (CEST)
Received: by mail-yb1-xb36.google.com with SMTP id
 3f1490d57ef6-dfe71dd22a5so1090605276.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 05:55:34 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5efb07csm6423696d6.134.2024.06.13.05.55.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 05:55:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38ec0fd7-2984-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718283333; x=1718888133; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=f1Gw1b4WLJzk7K8iju+47E5f7SockjSXaH80GGfWyLk=;
        b=QMSXEILuou2Ig67AQXMRXIwMTpnqwXdDZeu3wzV0fuYxctKtsgHBIhZy0ANSF+zU3u
         doLMxXU/et5SZe1qWuvVTuduAB9Vo12vvTu3+Gtp5uGzXIfkEJdpr+mq3AlowCRtBqVC
         061uIrlFkBs75TeUvVUBU/qpk2hTTCLYg4Pw4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718283333; x=1718888133;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=f1Gw1b4WLJzk7K8iju+47E5f7SockjSXaH80GGfWyLk=;
        b=dZ7y4poCZBzPLwR1aoo+QTw/WrWJ9ys+Q9aF+imFCQGeYbIadtwv+u678JBlvm2+ur
         VxBiBP3EJBPubNxGPKiDB39/y+Us2TftRw/JDB/QT1L3ZhI8a+/3Eb/rmfXbNlIGUOM0
         uUEJ+C4m2ghWyp+1pZ/FlvrKzeGeI2Ellb2u7O6cFmhABRBnt2cJAbbeKulr+F6BF/90
         XqsmJwo/fHxiZVhzaZTn76XiGGPhJuB331pqTOK3CMBTjUntjnBrvGqWP+DiuF7hEXsU
         pw79goM56fr1oUbTAqPr4Ag0tHdJftRudAVonIMQlh9/M4NPoH0FvETSKt4o6zX6+yv2
         Tifw==
X-Forwarded-Encrypted: i=1; AJvYcCWxanKHlQJKpqEmnLzOFKQXu+U2hh1HNKO5DtEBsicfOkYnCKpuvezdA9x+JF27SoXtRJOiPAqpt6c16+9HqwAWDhzs0vIQcSU/uG7qKwg=
X-Gm-Message-State: AOJu0Yz6noIK7P5nEkYmbTuNxzxYr0nGzDKRa+ZbO52wu0jORFm+c5Iz
	GUlWdQTg4dhvZLmjyhMs0Hcd6OzqIauAvNDrPu8bs4CyjEZmPFwzHAW35q6D24w=
X-Google-Smtp-Source: AGHT+IG+tZ1qnBMl2CXeQ0YZLH+yPb6khUYSy4mGnDeP0RSJFvKE64UCMbVpJ1zyWSA4iU3qQguULQ==
X-Received: by 2002:a05:6902:4e8:b0:dfa:5895:7814 with SMTP id 3f1490d57ef6-dfe67261a4cmr4545010276.36.1718283333475;
        Thu, 13 Jun 2024 05:55:33 -0700 (PDT)
Date: Thu, 13 Jun 2024 14:55:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
Message-ID: <ZmrsQ0ncZWD3tXXV@macbook>
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com>
 <Zml6-ViFPTWI1cUc@macbook>
 <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com>
 <ZmnAgSBjjP6N-uJS@macbook>
 <d45ef203-aa29-4aa6-8b40-0449334a2bf0@suse.com>
 <ZmrYjv2ljhf-1Ag_@macbook>
 <f4c51152-e8a0-4715-af75-4b8c7801fa08@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f4c51152-e8a0-4715-af75-4b8c7801fa08@suse.com>

On Thu, Jun 13, 2024 at 01:36:55PM +0200, Jan Beulich wrote:
> On 13.06.2024 13:31, Roger Pau Monné wrote:
> > On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
> >> On 12.06.2024 17:36, Roger Pau Monné wrote:
> >>> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
> >>>> On 12.06.2024 12:39, Roger Pau Monné wrote:
> >>>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
> >>>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
> >>>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
> >>>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> >>>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
> >>>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> >>>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> >>>>>>>
> >>>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> >>>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> >>>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> >>>>>>> and no remaining online CPUs in ->arch.cpu_mask.
> >>>>>>>
> >>>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
> >>>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> >>>>>>> CPUs that could be used as fallback, and if that's the case do move the
> >>>>>>> interrupt back to the previous destination.  Note this is easier because the
> >>>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
> >>>>>>> vector on the destination.
> >>>>>>>
> >>>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
> >>>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> >>>>>>> shouldn't be possible to get into _assign_irq_vector() with
> >>>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> >>>>>>> ->arch.old_cpu_mask.
> >>>>>>>
> >>>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> >>>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> >>>>>>> longer part of the affinity set,
> >>>>>>
> >>>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
> >>>>>>
> >>>>>>> move the interrupt to a different CPU part of
> >>>>>>> the provided mask
> >>>>>>
> >>>>>> ... this (->arch.cpu_mask related).
> >>>>>
> >>>>> No, the "provided mask" here is the "mask" parameter, not
> >>>>> ->arch.cpu_mask.
> >>>>
> >>>> Oh, so this describes the case of "hitting" the comment at the very bottom of
> >>>> the first hunk then? (I probably was misreading this because I was expecting
> >>>> it to describe a code change, rather than the case where original behavior
> >>>> needs retaining. IOW - all fine here then.)
> >>>>
> >>>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
> >>>>>>> pending interrupt movement to be completed.
> >>>>>>
> >>>>>> Right, that's to clean up state from before the initial move. What isn't
> >>>>>> clear to me is what's to happen with the state of the intermediate
> >>>>>> placement. Description and code changes leave me with the impression that
> >>>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
> >>>>>> why that would be an okay thing to do.
> >>>>>
> >>>>> There isn't much we can do with the intermediate placement, as the CPU
> >>>>> is going offline.  However we can drain any pending interrupts from
> >>>>> IRR after the new destination has been set, since setting the
> >>>>> destination is done from the CPU that's the current target of the
> >>>>> interrupts.  So we can ensure the draining is done strictly after the
> >>>>> target has been switched, hence ensuring no further interrupts from
> >>>>> this source will be delivered to the current CPU.
> >>>>
> >>>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
> >>>> the ...
> >>>>
> >>>>>>> --- a/xen/arch/x86/irq.c
> >>>>>>> +++ b/xen/arch/x86/irq.c
> >>>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >>>>>>>      }
> >>>>>>>  
> >>>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> >>>>>>> -        return -EAGAIN;
> >>>>>>> +    {
> >>>>>>> +        /*
> >>>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
> >>>>>>> +         * the in-progress movement has finished.
> >>>>>>> +         */
> >>>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> >>>>>>> +            return -EAGAIN;
> >>>>>>> +
> >>>>>>> +        /*
> >>>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> >>>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> >>>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> >>>>>>> +         * ->arch.old_cpu_mask.
> >>>>>>> +         */
> >>>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> >>>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> >>>>>>> +
> >>>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> >>>>>>> +        {
> >>>>>>> +            /*
> >>>>>>> +             * Fallback to the old destination if moving is in progress and the
> >>>>>>> +             * current destination is to be offlined.  This is only possible if
> >>>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> >>>>>>> +             * in the 'mask' parameter.
> >>>>>>> +             */
> >>>>>>> +            desc->arch.vector = desc->arch.old_vector;
> >>>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> >>>>
> >>>> ... replacing of vector (and associated mask), without any further accounting.
> >>>
> >>> It's quite likely I'm missing something here, but what further
> >>> accounting you would like to do?
> >>>
> >>> The current target of the interrupt (->arch.cpu_mask previous to
> >>> cpumask_and()) is all going offline, so any attempt to set it in
> >>> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
> >>> set in ->arch.old_cpu_mask, which previous patches attempted to
> >>> solve.
> >>>
> >>> Maybe by "further accounting" you meant something else not related to
> >>> ->arch.old_{cpu_mask,vector}?
> >>
> >> Indeed. What I'm thinking of is what normally release_old_vec() would
> >> do (of which only desc->arch.used_vectors updating would appear to be
> >> relevant, seeing the CPU's going offline). The other one I was thinking
> >> of, updating vector_irq[], likely is also unnecessary, again because
> >> that's per-CPU data of a CPU going down.
> > 
> > I think updating vector_irq[] should be explicitly avoided, as doing
> > so would prevent us from correctly draining any pending interrupts
> > because the vector -> irq mapping would be broken when the interrupt
> > enable window at the bottom of fixup_irqs() is reached.
> > 
> > For used_vectors: we might clean it. I'm a bit worried, however, that
> > at some point we insert a check in the do_IRQ() path that ensures
> > vector_irq[] is in line with desc->arch.used_vectors, which would fail
> > for interrupts drained at the bottom of fixup_irqs().  Let me attempt
> > to clean the currently used vector from ->arch.used_vectors.
> 
> Just to clarify: It may well be that for draining the bit can't be cleared
> right here. But it then still needs clearing _somewhere_, or else we
> chance ending up with inconsistent state (triggering e.g. an assertion
> later on) or the leaking of vectors. My problem here was that I also
> couldn't locate any such "somewhere", and commentary also didn't point me
> anywhere.

You are correct, there's no such place where the cleanup would happen.

I'm afraid the only option I see to correctly deal with this is to do
the cleanup of the old destination in _assign_irq_vector(), and then
do the pending interrupt draining from IRR like I had proposed in
patch 7/7, thus removing the interrupt enable window at the bottom of
fixup_irqs().

Let me know if that seems sensible.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 13:00:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 13:00:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739965.1146956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHk3W-0004Yb-N0; Thu, 13 Jun 2024 12:59:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739965.1146956; Thu, 13 Jun 2024 12:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHk3W-0004YU-JC; Thu, 13 Jun 2024 12:59:54 +0000
Received: by outflank-mailman (input) for mailman id 739965;
 Thu, 13 Jun 2024 12:59:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHk3U-0004YO-PX
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 12:59:52 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d192ebf7-2984-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 14:59:50 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6ef8e62935so136072466b.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 05:59:50 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f427a5sm69333066b.180.2024.06.13.05.59.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 05:59:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d192ebf7-2984-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718283589; x=1718888389; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=FSr56EEEylLogCWY43+qhZuZ5O2+BF64BoIdv2hiHDs=;
        b=cq8C2km0P7+MYH60QV8pSg+iUO4TivAkkRPKLCjwXpEgG6fVrM4afhQcX2n6q/tVPK
         TYwCqfAfP9o5Ew4J5ZfBz4azf2LN/lRdXknKW8QSOTeotFcrL2bxozCXwlYtk2A2yqnS
         Uckzvb8fM73sfpAdRFWEDYp4BtDsHaVd3z4tzw/P4ooT7DPYY/7iuXJobheQesHLMMIy
         rOfkpw8bFi7DwbZMFrhggo/EXrOHCSMkBlmip7up9MYjHyYA7rLzlfyBiu12PGlQoDE4
         4oCW2fB3b3DJpPCvoQyUf/eLJyo2xRY61m2VNiu4kae2te231A4KEDBjPPR5zD+YX8Vc
         UfiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718283589; x=1718888389;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FSr56EEEylLogCWY43+qhZuZ5O2+BF64BoIdv2hiHDs=;
        b=ia2r0nWO0tYrIZdSWAsooYyrNe2S4D6+QEF9J7B+0fGzypc7fIenz+R1tQEaWVv8lg
         CqUQbsKAIUDy1LTYvITB7I0wfs0F9C9Uiap9qyngsieyZkbBvJBbgOzGBQ0fsxvyAIHN
         8djOvHlvXOao2Wr2CmrtJ2W5VM2rZdEV9WGLOMUeHm6OYl82AR/uxSP6uEBWdncsvC2N
         9ne/HEPn4RbcpsMER4crXWC8aBqyZ96WFBVr6VVJm4o9y72blLDAZb+X6S4zwClTcs8n
         yuYgWVcMhyw2Cp/zRP6F1Ps1TBfZ8Z8F14KCMlEFsSvU9jcK8zm/NfGoBMfB1lBkwzTC
         lwkQ==
X-Forwarded-Encrypted: i=1; AJvYcCWYB6LLBc9Hf0vuRTLrXRwZgjWqTrNa/QDAdB+bgIWHMuOvL2tOeOvMmsZYzBkpR2ysQk51y56zntBouH4cwC/M+xwah9zPy3DPkKbeeeg=
X-Gm-Message-State: AOJu0YzDy/fCwGkkOvIJM5IMmwin77Sv87h3xQZFCu6yV3eBPwxGz3na
	MMLKsUfs1jMAcimFrwZhsCgenAjUDyzF/OhdNimJOSv8mrUQbOMspXD0FLQuWw==
X-Google-Smtp-Source: AGHT+IFjr9ScWq1dPebIOQ22jfP440qluXFBC3Uu0sjHC9u3yoKdb32x/gNXNlbtTpRf85YdJCPkFQ==
X-Received: by 2002:a17:906:1791:b0:a6f:4c32:3fb6 with SMTP id a640c23a62f3a-a6f4c32430bmr289865466b.45.1718283589499;
        Thu, 13 Jun 2024 05:59:49 -0700 (PDT)
Message-ID: <8112bee8-efdc-4db9-b0d4-58b160b4e923@suse.com>
Date: Thu, 13 Jun 2024 14:59:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/riscv: PE/COFF image header for RISC-V target
To: milandjokic1995@gmail.com
Cc: milan.djokic@rt-rk.com, Nikola Jelic <nikola.jelic@rt-rk.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
 <0e10ee9c215269b577321ba44f5d038a5eb299a7.1718193326.git.milan.djokic@rt-rk.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <0e10ee9c215269b577321ba44f5d038a5eb299a7.1718193326.git.milan.djokic@rt-rk.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 12.06.2024 14:15, milandjokic1995@gmail.com wrote:
> From: Nikola Jelic <nikola.jelic@rt-rk.com>
> 
> Extend the RISC-V Xen image with PE/COFF headers, in order to support
> booting Xen from popular bootloaders like U-Boot. The image header is
> optionally included (with CONFIG_RISCV_EFI), so both a plain ELF image
> and an image with a PE/COFF header can now be generated as build
> artifacts. Note that this patch also represents initial EFI application
> format support (the image contains an EFI application header embedded
> into the binary when CONFIG_RISCV_EFI is enabled). For full EFI
> application support in Xen, boot/runtime services and system table
> handling are yet to be implemented.
> 
> Tested on both QEMU and StarFive VisionFive 2 with OpenSBI->U-Boot->xen->dom0 boot chain.
> 
> Signed-off-by: Nikola Jelic <nikola.jelic@rt-rk.com>

This isn't you, is it? Your S-o-b is going to be needed, too.

> --- a/xen/arch/riscv/Kconfig
> +++ b/xen/arch/riscv/Kconfig
> @@ -9,6 +9,15 @@ config ARCH_DEFCONFIG
>  	string
>  	default "arch/riscv/configs/tiny64_defconfig"
>  
> +config RISCV_EFI
> +	bool "UEFI boot service support"
> +	depends on RISCV_64
> +	default n

Nit: This line can be omitted (and if I'm not mistaken we generally do omit
such).

> +	help
> +	  This option provides support for boot services through
> +	  UEFI firmware. A UEFI stub is provided to allow Xen to
> +	  be booted as an EFI application.

I don't think my v1 comment on this was addressed.

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/pe.h
> @@ -0,0 +1,148 @@
> +#ifndef _ASM_RISCV_PE_H
> +#define _ASM_RISCV_PE_H
> +
> +#define LINUX_EFISTUB_MAJOR_VERSION     0x1
> +#define LINUX_EFISTUB_MINOR_VERSION     0x0
> +
> +#define MZ_MAGIC                    0x5a4d          /* "MZ" */
> +
> +#define PE_MAGIC                    0x00004550      /* "PE\0\0" */
> +#define PE_OPT_MAGIC_PE32           0x010b
> +#define PE_OPT_MAGIC_PE32PLUS       0x020b
> +
> +/* machine type */
> +#define IMAGE_FILE_MACHINE_RISCV32  0x5032
> +#define IMAGE_FILE_MACHINE_RISCV64  0x5064

Apart from this, hardly anything in this header is RISC-V specific.
Please consider moving to xen/include/xen/.

> +/* flags */
> +#define IMAGE_FILE_EXECUTABLE_IMAGE 0x0002
> +#define IMAGE_FILE_LINE_NUMS_STRIPPED 0x0004
> +#define IMAGE_FILE_DEBUG_STRIPPED   0x0200
> +#define IMAGE_SUBSYSTEM_EFI_APPLICATION 10
> +
> +#define IMAGE_SCN_CNT_CODE          0x00000020      /* .text */
> +#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040   /* .data */
> +#define IMAGE_SCN_MEM_EXECUTE       0x20000000
> +#define IMAGE_SCN_MEM_READ          0x40000000      /* readable */
> +#define IMAGE_SCN_MEM_WRITE         0x80000000      /* writeable */
> +
> +#ifndef __ASSEMBLY__
> +
> +struct mz_hdr {
> +    uint16_t magic;                  /* MZ_MAGIC */
> +    uint16_t lbsize;                 /* size of last used block */
> +    uint16_t blocks;                 /* pages in file, 0x3 */
> +    uint16_t relocs;                 /* relocations */
> +    uint16_t hdrsize;                /* header size in "paragraphs" */
> +    uint16_t min_extra_pps;          /* .bss */
> +    uint16_t max_extra_pps;          /* runtime limit for the arena size */
> +    uint16_t ss;                     /* relative stack segment */
> +    uint16_t sp;                     /* initial %sp register */
> +    uint16_t checksum;               /* word checksum */
> +    uint16_t ip;                     /* initial %ip register */
> +    uint16_t cs;                     /* initial %cs relative to load segment */
> +    uint16_t reloc_table_offset;     /* offset of the first relocation */
> +    uint16_t overlay_num;
> +    uint16_t reserved0[4];
> +    uint16_t oem_id;
> +    uint16_t oem_info;
> +    uint16_t reserved1[10];
> +    uint32_t peaddr;                 /* address of pe header */
> +    char     message[];              /* message to print */
> +};
> +
> +struct pe_hdr {
> +    uint32_t magic;                  /* PE magic */
> +    uint16_t machine;                /* machine type */
> +    uint16_t sections;               /* number of sections */
> +    uint32_t timestamp;
> +    uint32_t symbol_table;           /* symbol table offset */
> +    uint32_t symbols;                /* number of symbols */
> +    uint16_t opt_hdr_size;           /* size of optional header */
> +    uint16_t flags;                  /* flags */
> +};
> +
> +struct pe32_opt_hdr {
> +    /* "standard" header */
> +    uint16_t magic;                  /* file type */
> +    uint8_t  ld_major;               /* linker major version */
> +    uint8_t  ld_minor;               /* linker minor version */
> +    uint32_t text_size;
> +    uint32_t data_size;
> +    uint32_t bss_size;
> +    uint32_t entry_point;            /* file offset of entry point */
> +    uint32_t code_base;              /* relative code addr in ram */
> +    uint32_t data_base;              /* relative data addr in ram */
> +    /* "extra" header fields */
> +    uint32_t image_base;             /* preferred load address */
> +    uint32_t section_align;          /* alignment in bytes */
> +    uint32_t file_align;             /* file alignment in bytes */
> +    uint16_t os_major;
> +    uint16_t os_minor;
> +    uint16_t image_major;
> +    uint16_t image_minor;
> +    uint16_t subsys_major;
> +    uint16_t subsys_minor;
> +    uint32_t win32_version;          /* reserved, must be 0 */
> +    uint32_t image_size;
> +    uint32_t header_size;
> +    uint32_t csum;
> +    uint16_t subsys;
> +    uint16_t dll_flags;
> +    uint32_t stack_size_req;         /* amt of stack requested */
> +    uint32_t stack_size;             /* amt of stack required */
> +    uint32_t heap_size_req;          /* amt of heap requested */
> +    uint32_t heap_size;              /* amt of heap required */
> +    uint32_t loader_flags;           /* reserved, must be 0 */
> +    uint32_t data_dirs;              /* number of data dir entries */
> +};
> +
> +struct pe32plus_opt_hdr {
> +    uint16_t magic;                  /* file type */
> +    uint8_t  ld_major;               /* linker major version */
> +    uint8_t  ld_minor;               /* linker minor version */
> +    uint32_t text_size;
> +    uint32_t data_size;
> +    uint32_t bss_size;
> +    uint32_t entry_point;            /* file offset of entry point */
> +    uint32_t code_base;              /* relative code addr in ram */
> +    /* "extra" header fields */
> +    uint64_t image_base;             /* preferred load address */
> +    uint32_t section_align;          /* alignment in bytes */
> +    uint32_t file_align;             /* file alignment in bytes */
> +    uint16_t os_major;
> +    uint16_t os_minor;
> +    uint16_t image_major;
> +    uint16_t image_minor;
> +    uint16_t subsys_major;
> +    uint16_t subsys_minor;
> +    uint32_t win32_version;          /* reserved, must be 0 */
> +    uint32_t image_size;
> +    uint32_t header_size;
> +    uint32_t csum;
> +    uint16_t subsys;
> +    uint16_t dll_flags;
> +    uint64_t stack_size_req;         /* amt of stack requested */
> +    uint64_t stack_size;             /* amt of stack required */
> +    uint64_t heap_size_req;          /* amt of heap requested */
> +    uint64_t heap_size;              /* amt of heap required */
> +    uint32_t loader_flags;           /* reserved, must be 0 */
> +    uint32_t data_dirs;              /* number of data dir entries */
> +};
> +
> +struct section_header {
> +    char     name[8];                /* name or "/12\0" string tbl offset */

Why 12?

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/riscv_image_header.h

Is this file taken from somewhere else, kind of making it desirable to keep
its name in sync with the original? Otherwise: We prefer dashes over underscores
in new files' names.

> @@ -0,0 +1,54 @@
> +#ifndef _ASM_RISCV_IMAGE_H
> +#define _ASM_RISCV_IMAGE_H
> +
> +#define RISCV_IMAGE_MAGIC "RISCV\0\0\0"
> +#define RISCV_IMAGE_MAGIC2 "RSC\x05"
> +
> +#define RISCV_IMAGE_FLAG_BE_SHIFT 0
> +
> +#define RISCV_IMAGE_FLAG_LE 0
> +#define RISCV_IMAGE_FLAG_BE 1
> +
> +#define __HEAD_FLAG_BE RISCV_IMAGE_FLAG_LE
> +
> +#define __HEAD_FLAG(field) (__HEAD_FLAG_##field << RISCV_IMAGE_FLAG_##field##_SHIFT)
> +
> +#define __HEAD_FLAGS (__HEAD_FLAG(BE))
> +
> +#define RISCV_HEADER_VERSION_MAJOR 0
> +#define RISCV_HEADER_VERSION_MINOR 2
> +
> +#define RISCV_HEADER_VERSION (RISCV_HEADER_VERSION_MAJOR << 16 | \
> +                              RISCV_HEADER_VERSION_MINOR)
> +
> +#ifndef __ASSEMBLY__
> +/*
> + * struct riscv_image_header - riscv xen image header

You saying "xen": Is there anything Xen-specific in this struct?

> + * @code0:              Executable code
> + * @code1:              Executable code
> + * @text_offset:        Image load offset
> + * @image_size:         Effective Image size
> + * @reserved:           reserved
> + * @reserved:           reserved
> + * @reserved:           reserved
> + * @magic:              Magic number
> + * @reserved:           reserved
> + * @reserved:           reserved (will be used for PE COFF offset)
> + */
> +
> +struct riscv_image_header
> +{
> +    uint32_t code0;
> +    uint32_t code1;
> +    uint64_t text_offset;
> +    uint64_t image_size;
> +    uint64_t res1;
> +    uint64_t res2;
> +    uint64_t res3;
> +    uint64_t magic;
> +    uint32_t res4;
> +    uint32_t res5;
> +};
> +#endif /* __ASSEMBLY__ */
> +#endif /* _ASM_RISCV_IMAGE_H */
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,14 +1,150 @@
>  #include <asm/asm.h>
>  #include <asm/riscv_encoding.h>
> +#include <asm/riscv_image_header.h>
> +#ifdef CONFIG_RISCV_EFI
> +#include <asm/pe.h>
> +#endif
>  
>          .section .text.header, "ax", %progbits
>  
>          /*
>           * OpenSBI pass to start():
>           *   a0 -> hart_id ( bootcpu_id )
> -         *   a1 -> dtb_base 
> +         *   a1 -> dtb_base
>           */
>  FUNC(start)
> +
> +efi_head:

Why is this ...

> +#ifdef CONFIG_RISCV_EFI

... ahead of this?

> +        /*
> +         * This instruction decodes to "MZ" ASCII required by UEFI.
> +         */
> +        c.li s4,-13

IOW RISCV_EFI ought to be (made) dependent upon RISCV_ISA_C?

> +        j xen_start

Doesn't this then need to be c.j, to be sure it fits in 16 bits (and
raise an assembler error if not)?

> +#else
> +        /* jump to start kernel */
> +        j xen_start
> +        /* reserved */
> +        .word 0

What struct field does this correspond to? Or if not a struct field,
why is this needed?

Also I can't help the impression that the resulting layout will be
different depending on whether RISCV_ISA_C is enabled, as the "j"
may translate to a 16-bit or 32-bit insn.

I wonder anyway what use everything from here to ...

> +#endif
> +        .balign 8
> +#ifdef CONFIG_RISCV_64
> +        /* Image load offset(2MB) from start of RAM */
> +        .dword 0x200000
> +#else
> +        /* Image load offset(4MB) from start of RAM */
> +        .dword 0x400000
> +#endif
> +        /* Effective size of xen image */
> +        .dword _end - _start
> +        .dword __HEAD_FLAGS
> +        .word RISCV_HEADER_VERSION
> +        .word 0
> +        .dword 0
> +        .ascii RISCV_IMAGE_MAGIC
> +        .balign 4
> +        .ascii RISCV_IMAGE_MAGIC2
> +#ifndef CONFIG_RISCV_EFI
> +        .word 0

... here is when RISCV_EFI=n. You add data which wasn't needed so far,
and for which it also isn't explained why it would suddenly be needed.

I also can't bring several of the fields above in sync with the
struct riscv_image_header definition in the header. Please can you
annotate each field with a comment naming the corresponding C struct
field (like you do further down, at least in a way)?

> +#else
> +        .word pe_head_start - efi_head
> +pe_head_start:
> +        .long	PE_MAGIC
> +coff_header:
> +#ifdef CONFIG_RISCV_64
> +        .short  IMAGE_FILE_MACHINE_RISCV64              /* Machine */
> +#else
> +        .short  IMAGE_FILE_MACHINE_RISCV32              /* Machine */
> +#endif
> +        .short  section_count                           /* NumberOfSections */
> +        .long   0                                       /* TimeDateStamp */
> +        .long   0                                       /* PointerToSymbolTable */
> +        .long   0                                       /* NumberOfSymbols */
> +        .short  section_table - optional_header         /* SizeOfOptionalHeader */
> +        .short  IMAGE_FILE_DEBUG_STRIPPED | \
> +                IMAGE_FILE_EXECUTABLE_IMAGE | \
> +                IMAGE_FILE_LINE_NUMS_STRIPPED           /* Characteristics */
> +
> +optional_header:
> +#ifdef CONFIG_RISCV_64
> +        .short  PE_OPT_MAGIC_PE32PLUS                   /* PE32+ format */
> +#else
> +        .short  PE_OPT_MAGIC_PE32                       /* PE32 format */
> +#endif
> +        .byte   0x02                                    /* MajorLinkerVersion */
> +        .byte   0x14                                    /* MinorLinkerVersion */
> +        .long   _end - xen_start                        /* SizeOfCode */
> +        .long   0                                       /* SizeOfInitializedData */
> +        .long   0                                       /* SizeOfUninitializedData */
> +        .long   0                                       /* AddressOfEntryPoint */
> +        .long   xen_start - efi_head                    /* BaseOfCode */
> +
> +extra_header_fields:
> +        .quad   0                                       /* ImageBase */

This is quad only for PE32+, iirc. In PE32 it's two separate 32-bit
fields instead. The overall result may be tolerable (a data RVA of 0
can't be quite right, but we may be able to get away with that), but
it will at least want commenting on.

And anyway - further up in the RISC-V header struct you use .word and
.dword. Why .long and .quad here? That's at least somewhat confusing.

> +        .long   PECOFF_SECTION_ALIGNMENT                /* SectionAlignment */
> +        .long   PECOFF_FILE_ALIGNMENT                   /* FileAlignment */
> +        .short  0                                       /* MajorOperatingSystemVersion */
> +        .short  0                                       /* MinorOperatingSystemVersion */
> +        .short  LINUX_EFISTUB_MAJOR_VERSION             /* MajorImageVersion */
> +        .short  LINUX_EFISTUB_MINOR_VERSION             /* MinorImageVersion */
> +        .short  0                                       /* MajorSubsystemVersion */
> +        .short  0                                       /* MinorSubsystemVersion */
> +        .long   0                                       /* Win32VersionValue */
> +        .long   _end - efi_head                         /* SizeOfImage */
> +
> +        /* Everything before the xen image is considered part of the header */
> +        .long   xen_start - efi_head                    /* SizeOfHeaders */
> +        .long   0                                       /* CheckSum */
> +        .short  IMAGE_SUBSYSTEM_EFI_APPLICATION         /* Subsystem */
> +        .short  0                                       /* DllCharacteristics */
> +        .quad   0                                       /* SizeOfStackReserve */
> +        .quad   0                                       /* SizeOfStackCommit */
> +        .quad   0                                       /* SizeOfHeapReserve */
> +        .quad   0                                       /* SizeOfHeapCommit */

All of these are again 32 bits only in PE32, if I'm not mistaken.

> +        .long   0                                       /* LoaderFlags */
> +        .long   (section_table - .) / 8                 /* NumberOfRvaAndSizes */
> +        .quad   0                                       /* ExportTable */
> +        .quad   0                                       /* ImportTable */
> +        .quad   0                                       /* ResourceTable */
> +        .quad   0                                       /* ExceptionTable */
> +        .quad   0                                       /* CertificationTable */
> +        .quad   0                                       /* BaseRelocationTable */

Would you mind clarifying on what basis this set of 6 entries was
chosen?

> +/* Section table */
> +section_table:
> +        .ascii  ".text\0\0\0"
> +        .long   0
> +        .long   0
> +        .long   0                                       /* SizeOfRawData */
> +        .long   0                                       /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_CODE | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_EXECUTE                   /* Characteristics */
> +
> +        .ascii  ".data\0\0\0"
> +        .long   _end - xen_start                        /* VirtualSize */
> +        .long   xen_start - efi_head                    /* VirtualAddress */
> +        .long   __init_end_efi - xen_start              /* SizeOfRawData */
> +        .long   xen_start - efi_head                    /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_INITIALIZED_DATA | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_WRITE                    /* Characteristics */

IOW no code and the entire image expressed as data. Interesting.
No matter whether that has a reason or is completely arbitrary, I
think it, too, wants commenting on.

> +        .set    section_count, (. - section_table) / 40
> +
> +        .balign  0x1000
> +#endif/* CONFIG_RISCV_EFI */
> +
> +FUNC(xen_start)
>          /* Mask all interrupts */
>          csrw    CSR_SIE, zero
>  
> @@ -60,6 +196,9 @@ FUNC(start)
>          mv      a1, s1
>  
>          tail    start_xen
> +
> +END(xen_start)
> +
>  END(start)

I don't think you addressed my function nesting comment here either.

> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -12,6 +12,9 @@ PHDRS
>  #endif
>  }
>  
> +PECOFF_SECTION_ALIGNMENT = 0x1000;
> +PECOFF_FILE_ALIGNMENT = 0x200;
> +
>  SECTIONS
>  {
>      . = XEN_VIRT_START;
> @@ -144,7 +147,7 @@ SECTIONS
>      .got.plt : {
>          *(.got.plt)
>      } : text
> -
> +    __init_end_efi = .;

Why does the blank line disappear? And why is ...

>      . = ALIGN(POINTER_ALIGN);
>      __init_end = .;

... __init_end not good enough? (I think I can guess the answer, but
then I further think the name of the symbol is misleading.)

> @@ -165,6 +168,7 @@ SECTIONS
>          . = ALIGN(POINTER_ALIGN);
>          __bss_end = .;
>      } :text
> +
>      _end = . ;

Interestingly an unrelated blank line suddenly appears here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 13:03:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 13:03:12 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
CC: Oleksii K. <oleksii.kurochko@gmail.com>, Jens Wiklander
	<jens.wiklander@linaro.org>, Xen-devel <xen-devel@lists.xenproject.org>,
	"patches@linaro.org" <patches@linaro.org>, Volodymyr Babchuk
	<volodymyr_babchuk@epam.com>, Stefano Stabellini <sstabellini@kernel.org>,
	Michal Orzel <michal.orzel@amd.com>
Subject: Re: [XEN PATCH v6 0/7] FF-A notifications
Date: Thu, 13 Jun 2024 13:02:54 +0000
Message-ID: <DBAF0EF2-0A4D-4F5C-9BD2-C8DEBB54125A@arm.com>
References: <20240610065343.2594943-1-jens.wiklander@linaro.org>
 <3C40228F-21AA-4CBF-A4BE-1C42DE6E94EB@arm.com>
 <615f1766-253d-43dc-b0f0-f8e2eb7360b5@xen.org>
 <8558AEB5-2F38-4F8C-A017-794E32045068@arm.com>
 <6a255f3dccc609e680659ed05b613c21a33cfb20.camel@gmail.com>
 <fefb0ceb-0713-4520-b9a7-e37aa2f77850@xen.org>
In-Reply-To: <fefb0ceb-0713-4520-b9a7-e37aa2f77850@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US

Hi Julien,

> On 13 Jun 2024, at 14:44, Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 11/06/2024 11:36, Oleksii K. wrote:
>> Hi Bertrand and Julien,
>> On Tue, 2024-06-11 at 07:09 +0000, Bertrand Marquis wrote:
>>> Hi Julien and Oleksii,
>>>
>>> @Oleksii: Could we consider having this series merged for the next
>>> release?
>> We can consider including it in Xen 4.19 as it has a low impact on
>> existing systems and needs to be explicitly activated:
>>  Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> It is now merged.

Great, thanks a lot :-)

Cheers
Bertrand

>
> Cheers,
>
> --
> Julien Grall



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 13:07:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 13:07:53 +0000
Message-ID: <1f8c1b9d-7503-4ec5-955f-a025fd06f1b8@suse.com>
Date: Thu, 13 Jun 2024 15:07:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 6/7] x86/irq: handle moving interrupts in
 _assign_irq_vector()
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240610142043.11924-1-roger.pau@citrix.com>
 <20240610142043.11924-7-roger.pau@citrix.com>
 <9de1a9c7-814c-4375-9182-90a2f04806b2@suse.com> <Zml6-ViFPTWI1cUc@macbook>
 <d5b1d273-913e-4d53-9fb6-9b01525da498@suse.com> <ZmnAgSBjjP6N-uJS@macbook>
 <d45ef203-aa29-4aa6-8b40-0449334a2bf0@suse.com> <ZmrYjv2ljhf-1Ag_@macbook>
 <f4c51152-e8a0-4715-af75-4b8c7801fa08@suse.com> <ZmrsQ0ncZWD3tXXV@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZmrsQ0ncZWD3tXXV@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.06.2024 14:55, Roger Pau Monné wrote:
> On Thu, Jun 13, 2024 at 01:36:55PM +0200, Jan Beulich wrote:
>> On 13.06.2024 13:31, Roger Pau Monné wrote:
>>> On Thu, Jun 13, 2024 at 10:38:35AM +0200, Jan Beulich wrote:
>>>> On 12.06.2024 17:36, Roger Pau Monné wrote:
>>>>> On Wed, Jun 12, 2024 at 03:42:58PM +0200, Jan Beulich wrote:
>>>>>> On 12.06.2024 12:39, Roger Pau Monné wrote:
>>>>>>> On Tue, Jun 11, 2024 at 03:18:32PM +0200, Jan Beulich wrote:
>>>>>>>> On 10.06.2024 16:20, Roger Pau Monne wrote:
>>>>>>>>> Currently there's logic in fixup_irqs() that attempts to prevent
>>>>>>>>> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
>>>>>>>>> interrupts from the CPUs not present in the input mask.  The current logic in
>>>>>>>>> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
>>>>>>>>> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
>>>>>>>>>
>>>>>>>>> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
>>>>>>>>> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
>>>>>>>>> to deal with interrupts that have either move_{in_progress,cleanup_count} set
>>>>>>>>> and no remaining online CPUs in ->arch.cpu_mask.
>>>>>>>>>
>>>>>>>>> If _assign_irq_vector() is requested to move an interrupt in the state
>>>>>>>>> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
>>>>>>>>> CPUs that could be used as fallback, and if that's the case do move the
>>>>>>>>> interrupt back to the previous destination.  Note this is easier because the
>>>>>>>>> vector hasn't been released yet, so there's no need to allocate and setup a new
>>>>>>>>> vector on the destination.
>>>>>>>>>
>>>>>>>>> Due to the logic in fixup_irqs() that clears offline CPUs from
>>>>>>>>> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
>>>>>>>>> shouldn't be possible to get into _assign_irq_vector() with
>>>>>>>>> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
>>>>>>>>> ->arch.old_cpu_mask.
>>>>>>>>>
>>>>>>>>> However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
>>>>>>>>> also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
>>>>>>>>> longer part of the affinity set,
>>>>>>>>
>>>>>>>> I'm having trouble relating this (->arch.old_cpu_mask related) to ...
>>>>>>>>
>>>>>>>>> move the interrupt to a different CPU part of
>>>>>>>>> the provided mask
>>>>>>>>
>>>>>>>> ... this (->arch.cpu_mask related).
>>>>>>>
>>>>>>> No, the "provided mask" here is the "mask" parameter, not
>>>>>>> ->arch.cpu_mask.
>>>>>>
>>>>>> Oh, so this describes the case of "hitting" the comment at the very bottom of
>>>>>> the first hunk then? (I probably was misreading this because I was expecting
>>>>>> it to describe a code change, rather than the case where original behavior
>>>>>> needs retaining. IOW - all fine here then.)
>>>>>>
>>>>>>>>> and keep the current ->arch.old_{cpu_mask,vector} for the
>>>>>>>>> pending interrupt movement to be completed.
>>>>>>>>
>>>>>>>> Right, that's to clean up state from before the initial move. What isn't
>>>>>>>> clear to me is what's to happen with the state of the intermediate
>>>>>>>> placement. Description and code changes leave me with the impression that
>>>>>>>> it's okay to simply abandon, without any cleanup, yet I can't quite figure
>>>>>>>> why that would be an okay thing to do.
>>>>>>>
>>>>>>> There isn't much we can do with the intermediate placement, as the CPU
>>>>>>> is going offline.  However we can drain any pending interrupts from
>>>>>>> IRR after the new destination has been set, since setting the
>>>>>>> destination is done from the CPU that's the current target of the
>>>>>>> interrupts.  So we can ensure the draining is done strictly after the
>>>>>>> target has been switched, hence ensuring no further interrupts from
>>>>>>> this source will be delivered to the current CPU.
>>>>>>
>>>>>> Hmm, I'm afraid I still don't follow: I'm specifically in trouble with
>>>>>> the ...
>>>>>>
>>>>>>>>> --- a/xen/arch/x86/irq.c
>>>>>>>>> +++ b/xen/arch/x86/irq.c
>>>>>>>>> @@ -544,7 +544,53 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>>>>>>>>>      }
>>>>>>>>>  
>>>>>>>>>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
>>>>>>>>> -        return -EAGAIN;
>>>>>>>>> +    {
>>>>>>>>> +        /*
>>>>>>>>> +         * If the current destination is online refuse to shuffle.  Retry after
>>>>>>>>> +         * the in-progress movement has finished.
>>>>>>>>> +         */
>>>>>>>>> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
>>>>>>>>> +            return -EAGAIN;
>>>>>>>>> +
>>>>>>>>> +        /*
>>>>>>>>> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
>>>>>>>>> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
>>>>>>>>> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
>>>>>>>>> +         * ->arch.old_cpu_mask.
>>>>>>>>> +         */
>>>>>>>>> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
>>>>>>>>> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
>>>>>>>>> +
>>>>>>>>> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
>>>>>>>>> +        {
>>>>>>>>> +            /*
>>>>>>>>> +             * Fallback to the old destination if moving is in progress and the
>>>>>>>>> +             * current destination is to be offlined.  This is only possible if
>>>>>>>>> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
>>>>>>>>> +             * in the 'mask' parameter.
>>>>>>>>> +             */
>>>>>>>>> +            desc->arch.vector = desc->arch.old_vector;
>>>>>>>>> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
>>>>>>
>>>>>> ... replacing of vector (and associated mask), without any further accounting.
>>>>>
>>>>> It's quite likely I'm missing something here, but what further
>>>>> accounting you would like to do?
>>>>>
>>>>> The current target of the interrupt (->arch.cpu_mask previous to
>>>>> cpumask_and()) is all going offline, so any attempt to set it in
>>>>> ->arch.old_cpu_mask would just result in a stale (offline) CPU getting
>>>>> set in ->arch.old_cpu_mask, which previous patches attempted to
>>>>> solve.
>>>>>
>>>>> Maybe by "further accounting" you meant something else not related to
>>>>> ->arch.old_{cpu_mask,vector}?
>>>>
>>>> Indeed. What I'm thinking of is what normally release_old_vec() would
>>>> do (of which only desc->arch.used_vectors updating would appear to be
>>>> relevant, seeing the CPU's going offline). The other one I was thinking
>>>> of, updating vector_irq[], likely is also unnecessary, again because
>>>> that's per-CPU data of a CPU going down.
>>>
>>> I think updating vector_irq[] should be explicitly avoided, as doing
>>> so would prevent us from correctly draining any pending interrupts
>>> because the vector -> irq mapping would be broken when the interrupt
>>> enable window at the bottom of fixup_irqs() is reached.
>>>
>>> For used_vectors: we might clean it, I'm a bit worried however that at
>>> some point we insert a check in the do_IRQ() path that ensures the
>>> vector_irq[] is in line with desc->arch.used_vectors, which would fail
>>> for interrupts drained at the bottom of fixup_irqs().  Let me attempt
>>> to clean the currently used vector from ->arch.used_vectors.
>>
>> Just to clarify: It may well be that for draining the bit can't be cleared
>> right here. But it then still needs clearing _somewhere_, or else we
>> chance ending up with inconsistent state (triggering e.g. an assertion
>> later on) or the leaking of vectors. My problem here was that I also
>> couldn't locate any such "somewhere", and commentary also didn't point me
>> anywhere.
> 
> You are correct, there's no such place where the cleanup would happen.
> 
> I'm afraid the only option I see to correctly deal with this is to do
> the cleanup of the old destination in _assign_irq_vector(), and then
> do the pending interrupt draining from IRR like I had proposed in
> patch 7/7, thus removing the interrupt enable window at the bottom of
> fixup_irqs().
> 
> Let me know if that seems sensible.

I think it does; doing away with that entirely heuristic window would be
pretty nice anyway.

Jan
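[Editorial note: the fallback logic debated above can be condensed into a small model for readers following the thread. The sketch below is illustrative only — plain 64-bit words stand in for cpumask_t, a single flag stands in for move_{in_progress,cleanup_count}, and the names merely mirror the patch; it is not the actual Xen code.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for struct irq_desc's arch fields (illustrative only). */
struct mini_desc {
	uint64_t cpu_mask;	/* current destination CPUs */
	uint64_t old_cpu_mask;	/* previous destination, move still pending */
	int vector;
	int old_vector;
	bool move_pending;	/* move_{in_progress,cleanup_count} set */
};

/* Returns 0 on success, -1 standing in for -EAGAIN. */
static int mini_assign(struct mini_desc *d, uint64_t online, uint64_t mask)
{
	if (!d->move_pending)
		return 0;	/* normal vector allocation path, elided */

	/* Current destination still (partly) online: refuse to shuffle. */
	if (d->cpu_mask & online)
		return -1;

	/* fixup_irqs() clears offline CPUs from old_cpu_mask, so if a move
	 * is still pending some online CPU must remain there. */
	assert(d->old_cpu_mask & online);

	if (d->old_cpu_mask & mask) {
		/* Fall back to the old destination: the old vector was never
		 * released, so no new allocation or setup is needed. */
		d->vector = d->old_vector;
		d->cpu_mask = d->old_cpu_mask & mask;
		return 0;
	}

	return -1;	/* would need a fresh vector on a CPU from 'mask' */
}
```

The property mirrored from the patch is that the fallback path reuses the old vector rather than allocating a new one, while an online current destination always takes the retry (-EAGAIN) path.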


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 13:50:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 13:50:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.739992.1146985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHkqR-0007Pi-U8; Thu, 13 Jun 2024 13:50:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 739992.1146985; Thu, 13 Jun 2024 13:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHkqR-0007Pb-RT; Thu, 13 Jun 2024 13:50:27 +0000
Received: by outflank-mailman (input) for mailman id 739992;
 Thu, 13 Jun 2024 13:50:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDAE=NP=bounce.vates.tech=bounce-md_30504962.666af91e.v1-c40a10789c0a4bc7a37d42d048579bcb@srs-se1.protection.inumbo.net>)
 id 1sHkqQ-0007PV-Bp
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 13:50:26 +0000
Received: from mail177-18.suw61.mandrillapp.com
 (mail177-18.suw61.mandrillapp.com [198.2.177.18])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e196fc1f-298b-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 15:50:24 +0200 (CEST)
Received: from pmta14.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail177-18.suw61.mandrillapp.com (Mailchimp) with ESMTP id
 4W0P1V5l3fzCf9KVH
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 13:50:22 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 c40a10789c0a4bc7a37d42d048579bcb; Thu, 13 Jun 2024 13:50:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e196fc1f-298b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718286622; x=1718547122;
	bh=yOp6n6ZWWUNSdC8h4VQyGHPUzOnfugZqh72W0IfKJNE=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=bluulqKV2uhDQ3h6rzdi6g/7DS4FjkAGcgjElY3Q58ME0EcBv0/QSPphXCDKAFh++
	 dhezZMDMNtebajEQNAIQc8t5vWKBCNsgBDnRJsMbS64hlsB7Gr4pncb4uogYHQ0NJo
	 C/JoAwxeWWYO4cz3YMgYL9N9L38WdBAOsuwKuJhibWC/Jgtkhqq1ZKavk8R0N2SaYx
	 Nvmwhy4LvgnDvaPSoS7JJQN74ZZO2DXseJBS/af8RODzff+vgu2hNsP8zcv7qqeTUb
	 kboK836RIq7aEBjyW/V/B/dtfjZLNmN6kdPA+3J84zx7sss1JQK3Qhlk4JO+jMfC7D
	 4UOR+E2EM3LSA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718286622; x=1718547122; i=teddy.astie@vates.tech;
	bh=yOp6n6ZWWUNSdC8h4VQyGHPUzOnfugZqh72W0IfKJNE=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=LYZGrVAkhwyV/dBB4cixg2j3YKNcjPWpHllqgD+1Gv+j3sTcq7MnrxNQGfusVnntG
	 pd1nndbX8TudUjijsqS5d9DCof+h9T/rQrysVwpnW4nvcTXcNTqhkFGtt0CT/jTKD2
	 8g8axTGdyd0TvXWf6kKjp0SU58KKoMBVDUA9XjZnM3K7P0u6VnQLkAFDw8DqL3vZHE
	 F6XWy0flnjoSUTy+Nr1f6bDEojrN7H1dbI5GEj7aQD9xOls2yfGI+d33r0vgEkoBw9
	 MAV1DdHC5F22q2Grk8u3MDeQx+RGzC9QYk8BJHf0BHfGKvw9WfTIymp7CG6p4xcQkN
	 WHR8fhBPhEG9w==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20PATCH]=20iommu/xen:=20Add=20Xen=20PV-IOMMU=20driver?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718286620009
To: xen-devel@lists.xenproject.org, iommu@lists.linux.dev
Cc: Teddy Astie <teddy.astie@vates.tech>, Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, Robin Murphy <robin.murphy@arm.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.c40a10789c0a4bc7a37d42d048579bcb?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240613:md
Date: Thu, 13 Jun 2024 13:50:22 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

In the context of Xen, Linux runs as Dom0 and doesn't have access to the
machine IOMMU. However, an IOMMU is mandatory for some kernel features
such as VFIO or DMA protection.

In Xen, we added a paravirtualized IOMMU exposed through the iommu_op
hypercall in order to allow Dom0 to implement such features. This commit
introduces a new IOMMU driver that uses this new hypercall interface.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
 arch/x86/include/asm/xen/hypercall.h |   6 +
 drivers/iommu/Kconfig                |   9 +
 drivers/iommu/Makefile               |   1 +
 drivers/iommu/xen-iommu.c            | 508 +++++++++++++++++++++++++++
 include/xen/interface/memory.h       |  33 ++
 include/xen/interface/pv-iommu.h     | 114 ++++++
 include/xen/interface/xen.h          |   1 +
 7 files changed, 672 insertions(+)
 create mode 100644 drivers/iommu/xen-iommu.c
 create mode 100644 include/xen/interface/pv-iommu.h

diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index a2dd24947eb8..6b1857f27c14 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -490,6 +490,12 @@ HYPERVISOR_xenpmu_op(unsigned int op, void *arg)
 	return _hypercall2(int, xenpmu_op, op, arg);
 }
 
+static inline int
+HYPERVISOR_iommu_op(void *arg)
+{
+	return _hypercall1(int, iommu_op, arg);
+}
+
 static inline int
 HYPERVISOR_dm_op(
 	domid_t dom, unsigned int nr_bufs, struct xen_dm_op_buf *bufs)
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 2b12b583ef4b..8d8a22b91e34 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -482,6 +482,15 @@ config VIRTIO_IOMMU
 
 	  Say Y here if you intend to run this kernel as a guest.
 
+config XEN_IOMMU
+	bool "Xen IOMMU driver"
+	depends on XEN_DOM0
+	select IOMMU_API
+	help
+	  Xen PV-IOMMU driver for Dom0.
+
+	  Say Y here if you intend to run this kernel as Xen Dom0.
+
 config SPRD_IOMMU
 	tristate "Unisoc IOMMU Support"
 	depends on ARCH_SPRD || COMPILE_TEST
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 769e43d780ce..11fa258d3a04 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -30,3 +30,4 @@ obj-$(CONFIG_VIRTIO_IOMMU) += virtio-iommu.o
 obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o io-pgfault.o
 obj-$(CONFIG_SPRD_IOMMU) += sprd-iommu.o
 obj-$(CONFIG_APPLE_DART) += apple-dart.o
+obj-$(CONFIG_XEN_IOMMU) += xen-iommu.o
\ No newline at end of file
diff --git a/drivers/iommu/xen-iommu.c b/drivers/iommu/xen-iommu.c
new file mode 100644
index 000000000000..2c8e42240a6b
--- /dev/null
+++ b/drivers/iommu/xen-iommu.c
@@ -0,0 +1,508 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xen PV-IOMMU driver.
+ *
+ * Copyright (C) 2024 Vates SAS
+ *
+ * Author: Teddy Astie <teddy.astie@vates.tech>
+ *
+ */
+
+#define pr_fmt(fmt)	"xen-iommu: " fmt
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/iommu.h>
+#include <linux/dma-map-ops.h>
+#include <linux/pci.h>
+#include <linux/list.h>
+#include <linux/string.h>
+#include <linux/device/driver.h>
+#include <linux/slab.h>
+#include <linux/err.h>
+#include <linux/printk.h>
+#include <linux/stddef.h>
+#include <linux/spinlock.h>
+#include <linux/minmax.h>
+#include <linux/string.h>
+#include <asm/iommu.h>
+
+#include <xen/xen.h>
+#include <xen/page.h>
+#include <xen/interface/memory.h>
+#include <xen/interface/physdev.h>
+#include <xen/interface/pv-iommu.h>
+#include <asm/xen/hypercall.h>
+#include <asm/xen/page.h>
+
+MODULE_DESCRIPTION("Xen IOMMU driver");
+MODULE_AUTHOR("Teddy Astie <teddy.astie@vates.tech>");
+MODULE_LICENSE("GPL");
+
+#define MSI_RANGE_START		(0xfee00000)
+#define MSI_RANGE_END		(0xfeefffff)
+
+#define XEN_IOMMU_PGSIZES       (0x1000)
+
+struct xen_iommu_domain {
+	struct iommu_domain domain;
+
+	u16 ctx_no; /* Xen PV-IOMMU context number */
+};
+
+static struct iommu_device xen_iommu_device;
+
+static uint32_t max_nr_pages;
+static uint64_t max_iova_addr;
+
+static spinlock_t lock;
+
+static inline struct xen_iommu_domain *to_xen_iommu_domain(struct iommu_domain *dom)
+{
+	return container_of(dom, struct xen_iommu_domain, domain);
+}
+
+static inline u64 addr_to_pfn(u64 addr)
+{
+	return addr >> 12;
+}
+
+static inline u64 pfn_to_addr(u64 pfn)
+{
+	return pfn << 12;
+}
+
+static bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
+{
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		return true;
+
+	default:
+		return false;
+	}
+}
+
+static struct iommu_domain *xen_iommu_domain_alloc(unsigned int type)
+{
+	struct xen_iommu_domain *domain;
+	u16 ctx_no;
+	int ret;
+
+	if (type & IOMMU_DOMAIN_IDENTITY) {
+		/* use default domain */
+		ctx_no = 0;
+	} else {
+		struct pv_iommu_op op = {
+			.ctx_no = 0,
+			.flags = 0,
+			.subop_id = IOMMUOP_alloc_context
+		};
+
+		ret = HYPERVISOR_iommu_op(&op);
+
+		if (ret) {
+			pr_err("Unable to create Xen IOMMU context (%d)", ret);
+			return ERR_PTR(ret);
+		}
+
+		ctx_no = op.ctx_no;
+	}
+
+	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+	if (!domain)
+		return ERR_PTR(-ENOMEM);
+
+	domain->ctx_no = ctx_no;
+
+	domain->domain.geometry.aperture_start = 0;
+
+	domain->domain.geometry.aperture_end = max_iova_addr;
+	domain->domain.geometry.force_aperture = true;
+
+	return &domain->domain;
+}
+
+static struct iommu_group *xen_iommu_device_group(struct device *dev)
+{
+	if (!dev_is_pci(dev))
+		return ERR_PTR(-ENODEV);
+
+	return pci_device_group(dev);
+}
+
+static struct iommu_device *xen_iommu_probe_device(struct device *dev)
+{
+	if (!dev_is_pci(dev))
+		return ERR_PTR(-ENODEV);
+
+	return &xen_iommu_device;
+}
+
+static void xen_iommu_probe_finalize(struct device *dev)
+{
+	set_dma_ops(dev, NULL);
+	iommu_setup_dma_ops(dev, 0, max_iova_addr);
+}
+
+static void xen_iommu_release_device(struct device *dev)
+{
+	int ret;
+	struct pci_dev *pdev;
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_reattach_device,
+		.flags = 0,
+		.ctx_no = 0 /* reattach device back to default context */
+	};
+
+	if (!dev_is_pci(dev))
+		return;
+
+	pdev = to_pci_dev(dev);
+
+	op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
+	op.reattach_device.dev.bus = pdev->bus->number;
+	op.reattach_device.dev.devfn = pdev->devfn;
+
+	ret = HYPERVISOR_iommu_op(&op);
+
+	if (ret)
+		pr_warn("Unable to release device %04x:%02x:%02x.%d (%d)\n",
+			op.reattach_device.dev.seg, op.reattach_device.dev.bus,
+			PCI_SLOT(pdev->devfn), PCI_FUNC(pdev->devfn), ret);
+}
+
+static int xen_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
+							   phys_addr_t paddr, size_t pgsize, size_t pgcount,
+							   int prot, gfp_t gfp, size_t *mapped)
+{
+	size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_map_pages,
+		.flags = 0,
+		.ctx_no = dom->ctx_no
+	};
+	/* NOTE: paddr is actually bound to pfn, not gfn */
+	uint64_t pfn = addr_to_pfn(paddr);
+	uint64_t dfn = addr_to_pfn(iova);
+	int ret = 0;
+
+	if (WARN(!dom->ctx_no, "Tried to map page to default context"))
+		return -EINVAL;
+
+	if (prot & IOMMU_READ)
+		op.flags |= IOMMU_OP_readable;
+
+	if (prot & IOMMU_WRITE)
+		op.flags |= IOMMU_OP_writeable;
+
+	while (xen_pg_count) {
+		size_t to_map = min(xen_pg_count, max_nr_pages);
+		uint64_t gfn = pfn_to_gfn(pfn);
+
+		op.map_pages.gfn = gfn;
+		op.map_pages.dfn = dfn;
+
+		op.map_pages.nr_pages = to_map;
+
+		ret = HYPERVISOR_iommu_op(&op);
+
+		if (mapped)
+			*mapped += XEN_PAGE_SIZE * op.map_pages.mapped;
+
+		if (ret)
+			break;
+
+		xen_pg_count -= to_map;
+
+		pfn += to_map;
+		dfn += to_map;
+	}
+
+	return ret;
+}
+
+static size_t xen_iommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
+									size_t pgsize, size_t pgcount,
+									struct iommu_iotlb_gather *iotlb_gather)
+{
+	size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_unmap_pages,
+		.ctx_no = dom->ctx_no,
+		.flags = 0,
+	};
+	uint64_t dfn = addr_to_pfn(iova);
+	int ret = 0;
+
+	if (WARN(!dom->ctx_no, "Tried to unmap pages from the default context"))
+		return 0;
+
+	while (xen_pg_count) {
+		size_t to_unmap = min(xen_pg_count, max_nr_pages);
+
+		op.unmap_pages.dfn = dfn;
+		op.unmap_pages.nr_pages = to_unmap;
+
+		ret = HYPERVISOR_iommu_op(&op);
+
+		if (ret)
+			pr_warn("Unmap failure (%llx-%llx)\n", dfn, dfn + to_unmap - 1);
+
+		xen_pg_count -= to_unmap;
+
+		dfn += to_unmap;
+	}
+
+	return pgcount * pgsize;
+}
+
+static int xen_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	struct pci_dev *pdev;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_reattach_device,
+		.flags = 0,
+		.ctx_no = dom->ctx_no,
+	};
+
+	if (!dev_is_pci(dev))
+		return -EINVAL;
+
+	pdev = to_pci_dev(dev);
+
+	op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
+	op.reattach_device.dev.bus = pdev->bus->number;
+	op.reattach_device.dev.devfn = pdev->devfn;
+
+	return HYPERVISOR_iommu_op(&op);
+}
+
+static void xen_iommu_free(struct iommu_domain *domain)
+{
+	int ret;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+
+	if (dom->ctx_no != 0) {
+		struct pv_iommu_op op = {
+			.ctx_no = dom->ctx_no,
+			.flags = 0,
+			.subop_id = IOMMUOP_free_context
+		};
+
+		ret = HYPERVISOR_iommu_op(&op);
+
+		if (ret)
+			pr_err("Context %hu destruction failure\n", dom->ctx_no);
+	}
+
+	kfree(domain);
+}
+
+static phys_addr_t xen_iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
+{
+	int ret;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+
+	struct pv_iommu_op op = {
+		.ctx_no = dom->ctx_no,
+		.flags = 0,
+		.subop_id = IOMMUOP_lookup_page,
+	};
+
+	op.lookup_page.dfn = addr_to_pfn(iova);
+
+	ret = HYPERVISOR_iommu_op(&op);
+
+	if (ret)
+		return 0;
+
+	phys_addr_t page_addr = pfn_to_addr(gfn_to_pfn(op.lookup_page.gfn));
+
+	/* Consider non-aligned iova */
+	return page_addr + (iova & 0xFFF);
+}
+
+static void xen_iommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *reg;
+	struct xen_reserved_device_memory *entries;
+	struct xen_reserved_device_memory_map map;
+	struct pci_dev *pdev;
+	int ret, i;
+
+	if (!dev_is_pci(dev))
+		return;
+
+	pdev = to_pci_dev(dev);
+
+	reg = iommu_alloc_resv_region(MSI_RANGE_START,
+		MSI_RANGE_END - MSI_RANGE_START + 1,
+		0, IOMMU_RESV_MSI, GFP_KERNEL);
+
+	if (!reg)
+		return;
+
+	list_add_tail(&reg->list, head);
+
+	/* Map xen-specific entries */
+
+	/* First, get number of entries to map */
+	map.buffer = NULL;
+	map.nr_entries = 0;
+	map.flags = 0;
+
+	map.dev.pci.seg = pci_domain_nr(pdev->bus);
+	map.dev.pci.bus = pdev->bus->number;
+	map.dev.pci.devfn = pdev->devfn;
+
+	ret = HYPERVISOR_memory_op(XENMEM_reserved_device_memory_map, &map);
+
+	if (ret == 0)
+		/* No reserved region, nothing to do */
+		return;
+
+	if (ret != -ENOBUFS) {
+		pr_err("Unable to get reserved region count (%d)\n", ret);
+		return;
+	}
+
+	/* Assume a reasonable number of entries; otherwise something is probably wrong */
+	if (WARN_ON(map.nr_entries > 256))
+		pr_warn("Xen reporting many reserved regions (%u)\n", map.nr_entries);
+
+	/* And finally get actual mappings */
+	entries = kcalloc(map.nr_entries, sizeof(struct xen_reserved_device_memory),
+					  GFP_KERNEL);
+
+	if (!entries) {
+		pr_err("No memory for map entries\n");
+		return;
+	}
+
+	map.buffer = entries;
+
+	ret = HYPERVISOR_memory_op(XENMEM_reserved_device_memory_map, &map);
+
+	if (ret != 0) {
+		pr_err("Unable to get reserved regions (%d)\n", ret);
+		kfree(entries);
+		return;
+	}
+
+	for (i = 0; i < map.nr_entries; i++) {
+		struct xen_reserved_device_memory entry = entries[i];
+
+		reg = iommu_alloc_resv_region(pfn_to_addr(entry.start_pfn),
+									  pfn_to_addr(entry.nr_pages),
+									  0, IOMMU_RESV_RESERVED, GFP_KERNEL);
+
+		if (!reg)
+			break;
+
+		list_add_tail(&reg->list, head);
+	}
+
+	kfree(entries);
+}
+
+static struct iommu_ops xen_iommu_ops = {
+	.capable = xen_iommu_capable,
+	.domain_alloc = xen_iommu_domain_alloc,
+	.probe_device = xen_iommu_probe_device,
+	.probe_finalize = xen_iommu_probe_finalize,
+	.device_group = xen_iommu_device_group,
+	.release_device = xen_iommu_release_device,
+	.get_resv_regions = xen_iommu_get_resv_regions,
+	.pgsize_bitmap = XEN_IOMMU_PGSIZES,
+	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.map_pages = xen_iommu_map_pages,
+		.unmap_pages = xen_iommu_unmap_pages,
+		.attach_dev = xen_iommu_attach_dev,
+		.iova_to_phys = xen_iommu_iova_to_phys,
+		.free = xen_iommu_free,
+	},
+};
+
+int __init xen_iommu_init(void)
+{
+	int ret;
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_query_capabilities
+	};
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	/* Check if iommu_op is supported */
+	if (HYPERVISOR_iommu_op(&op) == -ENOSYS)
+		return -ENODEV; /* No Xen IOMMU hardware */
+
+	pr_info("Initialising Xen IOMMU driver\n");
+	pr_info("max_nr_pages=%u\n", op.cap.max_nr_pages);
+	pr_info("max_ctx_no=%u\n", op.cap.max_ctx_no);
+	pr_info("max_iova_addr=%llx\n", op.cap.max_iova_addr);
+
+	if (op.cap.max_ctx_no == 0) {
+		pr_err("Unable to use IOMMU PV driver (no context available)\n");
+		return -EOPNOTSUPP;
+	}
+
+	if (xen_domain_type == XEN_PV_DOMAIN)
+		/*
+		 * TODO: In a PV domain, due to the existing pfn-gfn mapping, we
+		 * need to consider that under certain circumstances we have:
+		 *   pfn_to_gfn(x + 1) != pfn_to_gfn(x) + 1
+		 *
+		 * In these cases, we would want to split the subop into several
+		 * calls (only doing the grouped operation when the mapping is
+		 * actually contiguous).  Only the map operation is affected, as
+		 * unmap uses dfn, which doesn't have this kind of mapping.
+		 *
+		 * Force single-page operations to work around this issue for now.
+		 */
+		max_nr_pages = 1;
+	else
+		/* For HVM domains, pfn_to_gfn is the identity, so there is no such issue. */
+		max_nr_pages = op.cap.max_nr_pages;
+
+	max_iova_addr = op.cap.max_iova_addr;
+
+	spin_lock_init(&lock);
+
+	ret = iommu_device_sysfs_add(&xen_iommu_device, NULL, NULL, "xen-iommu");
+	if (ret) {
+		pr_err("Unable to add Xen IOMMU sysfs\n");
+		return ret;
+	}
+
+	ret = iommu_device_register(&xen_iommu_device, &xen_iommu_ops, NULL);
+	if (ret) {
+		pr_err("Unable to register Xen IOMMU device %d\n", ret);
+		iommu_device_sysfs_remove(&xen_iommu_device);
+		return ret;
+	}
+
+	/* swiotlb is redundant when IOMMU is active. */
+	x86_swiotlb_enable = false;
+
+	return 0;
+}
+
+void __exit xen_iommu_fini(void)
+{
+	pr_info("Unregistering Xen IOMMU driver\n");
+
+	iommu_device_unregister(&xen_iommu_device);
+	iommu_device_sysfs_remove(&xen_iommu_device);
+}
+
+module_init(xen_iommu_init);
+module_exit(xen_iommu_fini);
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index 1a371a825c55..08571add426b 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -10,6 +10,7 @@
 #ifndef __XEN_PUBLIC_MEMORY_H__
 #define __XEN_PUBLIC_MEMORY_H__
 
+#include "physdev.h"
 #include <linux/spinlock.h>
 
 /*
@@ -214,6 +215,38 @@ struct xen_add_to_physmap_range {
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap_range);
 
+/*
+ * With some legacy devices, certain guest-physical addresses cannot safely
+ * be used for other purposes, e.g. to map guest RAM.  This hypercall
+ * enumerates those regions so the toolstack can avoid using them.
+ */
+#define XENMEM_reserved_device_memory_map   27
+struct xen_reserved_device_memory {
+    xen_pfn_t start_pfn;
+    xen_ulong_t nr_pages;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory);
+
+struct xen_reserved_device_memory_map {
+#define XENMEM_RDM_ALL 1 /* Request all regions (ignore dev union). */
+    /* IN */
+    uint32_t flags;
+    /*
+     * IN/OUT
+     *
+     * Gets set to the required number of entries when too low,
+     * signaled by error code -ERANGE.
+     */
+    unsigned int nr_entries;
+    /* OUT */
+    GUEST_HANDLE(xen_reserved_device_memory) buffer;
+    /* IN */
+    union {
+        struct physdev_pci_device pci;
+    } dev;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory_map);
+
 /*
  * Returns the pseudo-physical memory map as it was when the domain
  * was started (specified by XENMEM_set_memory_map).
diff --git a/include/xen/interface/pv-iommu.h b/include/xen/interface/pv-iommu.h
new file mode 100644
index 000000000000..5560609d0e7a
--- /dev/null
+++ b/include/xen/interface/pv-iommu.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: MIT */
+/******************************************************************************
+ * pv-iommu.h
+ *
+ * Paravirtualized IOMMU driver interface.
+ *
+ * Copyright (c) 2024 Teddy Astie <teddy.astie@vates.tech>
+ */
+
+#ifndef __XEN_PUBLIC_PV_IOMMU_H__
+#define __XEN_PUBLIC_PV_IOMMU_H__
+
+#include "xen.h"
+#include "physdev.h"
+
+#define IOMMU_DEFAULT_CONTEXT (0)
+
+/**
+ * Query PV-IOMMU capabilities for this domain.
+ */
+#define IOMMUOP_query_capabilities    1
+
+/**
+ * Allocate an IOMMU context; the new context handle is written to ctx_no.
+ */
+#define IOMMUOP_alloc_context         2
+
+/**
+ * Destroy an IOMMU context.
+ * All devices attached to this context are reattached to default context.
+ *
+ * The default context can't be destroyed (0).
+ */
+#define IOMMUOP_free_context          3
+
+/**
+ * Reattach a device to the given IOMMU context.
+ */
+#define IOMMUOP_reattach_device       4
+
+#define IOMMUOP_map_pages             5
+#define IOMMUOP_unmap_pages           6
+
+/**
+ * Get the GFN associated with a specific DFN.
+ */
+#define IOMMUOP_lookup_page           7
+
+struct pv_iommu_op {
+    uint16_t subop_id;
+    uint16_t ctx_no;
+
+/**
+ * Create a context that is cloned from default.
+ * The new context will be populated with 1:1 mappings covering the entire guest memory.
+ */
+#define IOMMU_CREATE_clone (1 << 0)
+
+#define IOMMU_OP_readable (1 << 0)
+#define IOMMU_OP_writeable (1 << 1)
+    uint32_t flags;
+
+    union {
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+            /* Number of pages to map */
+            uint32_t nr_pages;
+            /* Number of pages actually mapped after sub-op */
+            uint32_t mapped;
+        } map_pages;
+
+        struct {
+            uint64_t dfn;
+            /* Number of pages to unmap */
+            uint32_t nr_pages;
+            /* Number of pages actually unmapped after sub-op */
+            uint32_t unmapped;
+        } unmap_pages;
+
+        struct {
+            struct physdev_pci_device dev;
+        } reattach_device;
+
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+        } lookup_page;
+
+        struct {
+            /* Maximum number of IOMMU contexts this domain can use. */
+            uint16_t max_ctx_no;
+            /* Maximum number of pages that can be modified in a single map/unmap operation. */
+            uint32_t max_nr_pages;
+            /* Maximum device address (iova) that the guest can use for mappings. */
+            uint64_t max_iova_addr;
+        } cap;
+    };
+};
+
+typedef struct pv_iommu_op pv_iommu_op_t;
+DEFINE_GUEST_HANDLE_STRUCT(pv_iommu_op_t);
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
\ No newline at end of file
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 0ca23eca2a9c..8b1daf3fecc6 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -65,6 +65,7 @@
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
+#define __HYPERVISOR_iommu_op             43
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 14:05:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 14:05:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740000.1146996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHl4m-0001E5-7z; Thu, 13 Jun 2024 14:05:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740000.1146996; Thu, 13 Jun 2024 14:05:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHl4m-0001Dy-4f; Thu, 13 Jun 2024 14:05:16 +0000
Received: by outflank-mailman (input) for mailman id 740000;
 Thu, 13 Jun 2024 14:05:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Tbfa=NP=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sHl4k-0001Ds-Nl
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 14:05:14 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f3a53c03-298d-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 16:05:13 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id C2B2968BEB; Thu, 13 Jun 2024 16:05:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3a53c03-298d-11ef-90a3-e314d9c70b13
Date: Thu, 13 Jun 2024 16:05:08 +0200
From: Christoph Hellwig <hch@lst.de>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when
 they fail
Message-ID: <20240613140508.GA16529@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-11-hch@lst.de> <ZmlVziizbaboaBSn@macbook> <20240612150030.GA29188@lst.de> <ZmnFH17bTV2Ot_iR@macbook>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZmnFH17bTV2Ot_iR@macbook>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 12, 2024 at 05:56:15PM +0200, Roger Pau Monné wrote:
> Right.  AFAICT advertising "feature-barrier" and/or
> "feature-flush-cache" could be done based on whether blkback
> understand those commands, not on whether the underlying storage
> supports the equivalent of them.
> 
> Worst case we can print a warning message once about the underlying
> storage failing to complete flush/barrier requests, and that data
> integrity might not be guaranteed going forward, and not propagate the
> error to the upper layer?
> 
> What would be the consequence of propagating a flush error to the
> upper layers?

If you propagate the error to the upper layer you will generate an
I/O error there, which usually leads to a file system shutdown.

> Given the description of the feature in the blkif header, I'm afraid
> we cannot guarantee that seeing the feature exposed implies barrier or
> flush support, since the request could fail at any time (or even from
> the start of the disk attachment) and it would still sadly be a correct
> implementation given the description of the options.

Well, then we could do something like the patch below, which keeps
the existing behavior, but isolates the block layer from it and
removes the only user of blk_queue_write_cache from interrupt
context:

---
>From e6e82c769ab209a77302994c3829cf6ff7a595b8 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <hch@lst.de>
Date: Thu, 30 May 2024 08:58:52 +0200
Subject: xen-blkfront: don't disable cache flushes when they fail

blkfront always had a robust negotiation protocol for detecting a write
cache.  Stop simply disabling cache flushes in the block layer as the
flags handling is moving to the atomic queue limits API that needs
user context to freeze the queue for that.  Instead handle the case
of the feature flags cleared inside of blkfront.  This removes old
debug code to check for such a mismatch which was previously impossible
to hit, including the check for passthrough requests that blkfront
never used to start with.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/xen-blkfront.c | 44 +++++++++++++++++++-----------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 9b4ec3e4908cce..e2c92d5095ff17 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -788,6 +788,14 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 			 * A barrier request a superset of FUA, so we can
 			 * implement it the same way.  (It's also a FLUSH+FUA,
 			 * since it is guaranteed ordered WRT previous writes.)
+			 *
+			 * Note that we can end up here with a FUA write and the
+			 * flags cleared.  This happens when the flag was
+			 * run-time disabled and raced with I/O submission in
+			 * the block layer.  We submit it as a normal write
+			 * here.  A pure flush should never end up here with
+			 * the flags cleared as they are completed earlier for
+			 * the !feature_flush case.
 			 */
 			if (info->feature_flush && info->feature_fua)
 				ring_req->operation =
@@ -795,8 +803,6 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 			else if (info->feature_flush)
 				ring_req->operation =
 					BLKIF_OP_FLUSH_DISKCACHE;
-			else
-				ring_req->operation = 0;
 		}
 		ring_req->u.rw.nr_segments = num_grant;
 		if (unlikely(require_extra_req)) {
@@ -887,16 +893,6 @@ static inline void flush_requests(struct blkfront_ring_info *rinfo)
 		notify_remote_via_irq(rinfo->irq);
 }
 
-static inline bool blkif_request_flush_invalid(struct request *req,
-					       struct blkfront_info *info)
-{
-	return (blk_rq_is_passthrough(req) ||
-		((req_op(req) == REQ_OP_FLUSH) &&
-		 !info->feature_flush) ||
-		((req->cmd_flags & REQ_FUA) &&
-		 !info->feature_fua));
-}
-
 static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
 			  const struct blk_mq_queue_data *qd)
 {
@@ -908,23 +904,30 @@ static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
 	rinfo = get_rinfo(info, qid);
 	blk_mq_start_request(qd->rq);
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
-	if (RING_FULL(&rinfo->ring))
-		goto out_busy;
 
-	if (blkif_request_flush_invalid(qd->rq, rinfo->dev_info))
-		goto out_err;
+	/*
+	 * Check if the backend actually supports flushes.
+	 *
+	 * While the block layer won't send us flushes if we don't claim to
+	 * support them, the Xen protocol allows the backend to revoke support
+	 * at any time.  That is of course a really bad idea and dangerous, but
+	 * has been allowed for 10+ years.  In that case we simply clear the
+	 * flags, and directly return here for an empty flush and ignore the
+	 * FUA flag later on.
+	 */
+	if (unlikely(req_op(qd->rq) == REQ_OP_FLUSH && !info->feature_flush))
+		goto out;
 
+	if (RING_FULL(&rinfo->ring))
+		goto out_busy;
 	if (blkif_queue_request(qd->rq, rinfo))
 		goto out_busy;
 
 	flush_requests(rinfo);
+out:
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 	return BLK_STS_OK;
 
-out_err:
-	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
-	return BLK_STS_IOERR;
-
 out_busy:
 	blk_mq_stop_hw_queue(hctx);
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
@@ -1627,7 +1630,6 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 					blkif_req(req)->error = BLK_STS_OK;
 				info->feature_fua = 0;
 				info->feature_flush = 0;
-				xlvbd_flush(info);
 			}
 			fallthrough;
 		case BLKIF_OP_READ:
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 14:32:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 14:32:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740014.1147006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlV2-0005nY-A4; Thu, 13 Jun 2024 14:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740014.1147006; Thu, 13 Jun 2024 14:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlV2-0005nR-5k; Thu, 13 Jun 2024 14:32:24 +0000
Received: by outflank-mailman (input) for mailman id 740014;
 Thu, 13 Jun 2024 14:32:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHlV0-0005nL-3d
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 14:32:22 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bd82f4f3-2991-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 16:32:19 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a6e43dad8ecso204241566b.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 07:32:20 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db5b98sm77613766b.86.2024.06.13.07.32.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 07:32:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd82f4f3-2991-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718289139; x=1718893939; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=H8hdHd+JfSqX4icctlXjtDZnZCcoDc5cvXDOcUQHCnQ=;
        b=bfqjEG6iqvQmAUx14cfKOLDexKsHhiApnV07Sh/X9KFCMZHZXLUYNjq8JDn/8IaLqr
         epwKeldosA8wTvFMg4rz2MeAhTZY4iwVyV7de+Kr6X/i69R5HZScKLHfFvq/R+2T94HV
         2cn/wzrNBTHfaxDhhlEEUsA8JKPmr/Ojnx7akDWVtn7cls7CbzB5U7NSS4vdkddTUA7z
         OkRpvUeknyG48/0rKOircNUQpGTH9jEVT7CMCTi32gfbXzs7aP2NdJxOC3xPW3f358Gx
         gDrLYAlqaD8b0xKF0CNjN7ecqDG53DXd2WSQg+aGUOm7N7lHUMYtbErt1aDVD2m+TZdG
         B2Vw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718289139; x=1718893939;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=H8hdHd+JfSqX4icctlXjtDZnZCcoDc5cvXDOcUQHCnQ=;
        b=FFumIbNr34uYb19nkeEHJyspd0t4S0NbubQuEDI/fKETLQ4pze3yTByRXc5b1/85YM
         iZepIvxErEk936aygthSLaEo9wIrgwnW8+kbxzOjh9brcpc8LAqqgdReaCcfJpPLNXZB
         sgfXJ7YnhvuSBAjrS+J68oiSqadX34AumHItnxvbEplqOmvaxhJDAqV3zDG9lg77oDjl
         amxD4hU3oYvxM1bjXVweOiJ+cdU8xSgdDt58Emyc0vNH9glP2XjaSLV0WXk6/IHewsSl
         AJV6LY2g1GZfXZs3gSSwcKo22Z0i4gPeuJLE2weLXo5zqRInG5EHXupszyWDBL5GMYTR
         0rxg==
X-Forwarded-Encrypted: i=1; AJvYcCVSq//2CIxJF/WTmbukidcl6yrl4pgldqlRO3NqEnRujv1F0SEF+2RYHafNxCRUC/o0Pi7SpY1lCovGTQy8OahUJ+uzxoptYwCPwaXsab8=
X-Gm-Message-State: AOJu0YyColC/br4Bqf1UqXJVcCAsprwIs9PoTOHp1PME/Pbr9R+Wlh+Y
	DnInOTBufU5HpjIMokzm/htRSpvIEo8ybgylMlkkUbJNIbaOXNLbetJFLHzAbQ==
X-Google-Smtp-Source: AGHT+IE2qhU5F+JpTGV9aNXNR4R4rO/hYPvUlKujrAOHJT5A7rvJJOddpFyigwaDJHM0jNu9Jjn3pA==
X-Received: by 2002:a17:907:944c:b0:a6f:56d7:2caf with SMTP id a640c23a62f3a-a6f56d72f33mr184680366b.30.1718289139347;
        Thu, 13 Jun 2024 07:32:19 -0700 (PDT)
Message-ID: <8b0151a8-2293-409a-8469-d9e73cf561a3@suse.com>
Date: Thu, 13 Jun 2024 16:32:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH] iommu/xen: Add Xen PV-IOMMU driver
To: Teddy Astie <teddy.astie@vates.tech>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 Robin Murphy <robin.murphy@arm.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 15:50, Teddy Astie wrote:
> @@ -214,6 +215,38 @@ struct xen_add_to_physmap_range {
>  };
>  DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap_range);
>  
> +/*
> + * With some legacy devices, certain guest-physical addresses cannot safely
> + * be used for other purposes, e.g. to map guest RAM.  This hypercall
> + * enumerates those regions so the toolstack can avoid using them.
> + */
> +#define XENMEM_reserved_device_memory_map   27
> +struct xen_reserved_device_memory {
> +    xen_pfn_t start_pfn;
> +    xen_ulong_t nr_pages;
> +};
> +DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory);
> +
> +struct xen_reserved_device_memory_map {
> +#define XENMEM_RDM_ALL 1 /* Request all regions (ignore dev union). */
> +    /* IN */
> +    uint32_t flags;
> +    /*
> +     * IN/OUT
> +     *
> +     * Gets set to the required number of entries when too low,
> +     * signaled by error code -ERANGE.
> +     */
> +    unsigned int nr_entries;
> +    /* OUT */
> +    GUEST_HANDLE(xen_reserved_device_memory) buffer;
> +    /* IN */
> +    union {
> +        struct physdev_pci_device pci;
> +    } dev;
> +};
> +DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory_map);

This is a tools-only (i.e. unstable) sub-function in Xen; even the comment
at the top says "toolstack". It is therefore not suitable for use in a
kernel.

> --- /dev/null
> +++ b/include/xen/interface/pv-iommu.h
> @@ -0,0 +1,114 @@
> +/* SPDX-License-Identifier: MIT */
> +/******************************************************************************
> + * pv-iommu.h
> + *
> + * Paravirtualized IOMMU driver interface.
> + *
> + * Copyright (c) 2024 Teddy Astie <teddy.astie@vates.tech>
> + */
> +
> +#ifndef __XEN_PUBLIC_PV_IOMMU_H__
> +#define __XEN_PUBLIC_PV_IOMMU_H__
> +
> +#include "xen.h"
> +#include "physdev.h"
> +
> +#define IOMMU_DEFAULT_CONTEXT (0)
> +
> +/**
> + * Query PV-IOMMU capabilities for this domain.
> + */
> +#define IOMMUOP_query_capabilities    1
> +
> +/**
> + * Allocate an IOMMU context; the new context handle is written to ctx_no.
> + */
> +#define IOMMUOP_alloc_context         2
> +
> +/**
> + * Destroy an IOMMU context.
> + * All devices attached to this context are reattached to the default context.
> + *
> + * The default context (0) can't be destroyed.
> + */
> +#define IOMMUOP_free_context          3
> +
> +/**
> + * Reattach the device to an IOMMU context.
> + */
> +#define IOMMUOP_reattach_device       4
> +
> +#define IOMMUOP_map_pages             5
> +#define IOMMUOP_unmap_pages           6
> +
> +/**
> + * Get the GFN associated with a specific DFN.
> + */
> +#define IOMMUOP_lookup_page           7
> +
> +struct pv_iommu_op {
> +    uint16_t subop_id;
> +    uint16_t ctx_no;
> +
> +/**
> + * Create a context that is cloned from the default context.
> + * The new context is populated with 1:1 mappings covering the entire guest memory.
> + */
> +#define IOMMU_CREATE_clone (1 << 0)
> +
> +#define IOMMU_OP_readable (1 << 0)
> +#define IOMMU_OP_writeable (1 << 1)
> +    uint32_t flags;
> +
> +    union {
> +        struct {
> +            uint64_t gfn;
> +            uint64_t dfn;
> +            /* Number of pages to map */
> +            uint32_t nr_pages;
> +            /* Number of pages actually mapped after sub-op */
> +            uint32_t mapped;
> +        } map_pages;
> +
> +        struct {
> +            uint64_t dfn;
> +            /* Number of pages to unmap */
> +            uint32_t nr_pages;
> +            /* Number of pages actually unmapped after sub-op */
> +            uint32_t unmapped;
> +        } unmap_pages;
> +
> +        struct {
> +            struct physdev_pci_device dev;
> +        } reattach_device;
> +
> +        struct {
> +            uint64_t gfn;
> +            uint64_t dfn;
> +        } lookup_page;
> +
> +        struct {
> +            /* Maximum number of IOMMU contexts this domain can use. */
> +            uint16_t max_ctx_no;
> +            /* Maximum number of pages that can be modified in a single map/unmap operation. */
> +            uint32_t max_nr_pages;
> +            /* Maximum device address (iova) that the guest can use for mappings. */
> +            uint64_t max_iova_addr;
> +        } cap;
> +    };
> +};
> +
> +typedef struct pv_iommu_op pv_iommu_op_t;
> +DEFINE_GUEST_HANDLE_STRUCT(pv_iommu_op_t);
> +
> +#endif
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> \ No newline at end of file

Nit: I'm pretty sure you want to avoid this.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 14:35:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 14:35:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740022.1147016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlYD-0006N7-NF; Thu, 13 Jun 2024 14:35:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740022.1147016; Thu, 13 Jun 2024 14:35:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlYD-0006N0-Jw; Thu, 13 Jun 2024 14:35:41 +0000
Received: by outflank-mailman (input) for mailman id 740022;
 Thu, 13 Jun 2024 14:35:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g45b=NP=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sHlYD-0006Mu-6E
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 14:35:41 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 34fe97a8-2992-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 16:35:40 +0200 (CEST)
Received: by mail-lf1-x12f.google.com with SMTP id
 2adb3069b0e04-52c4b92c09bso1725139e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 07:35:40 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ca2825b43sm215269e87.12.2024.06.13.07.35.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 07:35:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34fe97a8-2992-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718289340; x=1718894140; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=9yiTskAtNCA9fyCqx4OEuFLI4DcW6IWPjT6NF2UIsco=;
        b=KYLsbDx+4BFNMSlUNL5Fr2F+whWbMw+WVUtIoO6WKQY2W9O2XhyXAAFHs0wyQBqUrm
         1M9ttm5OL8XocgW8dBNenUs9lSvOSw/JOF0tBkOlk7OxUGJgGhTIKlRq9V4RZHyZPOtH
         6afjEpx6633anwDw9xbXdww4u7PgEu95e9nxyKGOWUBSwKn3b3ScqLV/Xa1E+N7Rlk0o
         BFa23gHmlx2+S3Y/k88HGO/YDXBEWmg6SCYl+bbj8JAqNonUd2TTaWp3xucJYw2HBsGi
         1CBPDVR2vxkdezsFoAW703o9robozkJroNF3bRqck3fxriQ7NzusNgaYFqMQ879nclSN
         bsSA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718289340; x=1718894140;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=9yiTskAtNCA9fyCqx4OEuFLI4DcW6IWPjT6NF2UIsco=;
        b=mIRfwWCjZsNiGEu32WMmRYiZyhIt6WZQm0SbBrhD+MKuo+fOYIxujsz+LeXdOrI9eo
         rPhxbbvhGuWcd+P+6DIl8KEsY5K9ZGmnYw/jox+UXZKkophr5MkcVK1LTbXy27dELomz
         L+0kYwWFJxkCmUnYAZIqigeWUtY7ZVk1rwGvs8QuChqXyxOIdy0Z+LWZPaHRiAQU2aSv
         /YbOZ86fujDF0QCU5MBNyti6Ro/LUclfypOwZ9nheT3hu4qhAhFES1sw3oQtmkDpoDTN
         pJLg9pU9PpUsjkPwLTZj8DRDVZO7RcuzcVlsHqaltnYq9Rfp1ocng3L2i+ZTz4xM6wP/
         WlBA==
X-Forwarded-Encrypted: i=1; AJvYcCUC7DVAVWT+qSrvD+n/vQS5VzoPKS2S1Jm4dEvsC3/jZkBUcjIJSQJcJ9gqJYC13yc9QhRZ0WT/GQVjsKSJOQwRokOlx9e2YwkcaI/WRsc=
X-Gm-Message-State: AOJu0Yz0EPCxUaPcWnlFztVW/LRRiwE6hkLuyqGXDxbkGss6rPvYgsSb
	+8n0Q0DLwnpfAPOVmZOJyifkwP16MHjWbaHIaxS4/Z29yKBgnOeu
X-Google-Smtp-Source: AGHT+IHCysbjE5hkGzIy6B7zMaSqwZWTRetzl6lngZ9l8PkcgV1+FzHBVM/9tPsW5MOdE1sV6IpTpw==
X-Received: by 2002:ac2:58c3:0:b0:52c:8984:6b63 with SMTP id 2adb3069b0e04-52c9a3c6c49mr3304092e87.26.1718289339421;
        Thu, 13 Jun 2024 07:35:39 -0700 (PDT)
Message-ID: <840d78dc1b27de61a0aa9931eb26ae896194fd0a.camel@gmail.com>
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Date: Thu, 13 Jun 2024 16:35:38 +0200
In-Reply-To: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Thu, 2024-06-13 at 10:19 +0200, Jan Beulich wrote:
> Intel CPUs have a MSR bit to limit CPUID enumeration to leaf two. If
> this bit is set by the BIOS then CPUID evaluation does not work when
> data from any leaf greater than two is needed; early_cpu_init() in
> particular wants to collect leaf 7 data.
> 
> Cure this by unlocking CPUID right before evaluating anything which
> depends on the maximum CPUID leaf being greater than two.
> 
> Inspired by (and description cloned from) Linux commit 0c2f6d04619e
> ("x86/topology/intel: Unlock CPUID before evaluating anything").
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> ---
> While I couldn't spot anything, it kind of feels as if I'm overlooking
> further places where we might be inspecting in particular leaf 7 yet
> earlier.
> 
> No Fixes: tag(s), as imo it would be too many that would want
> enumerating.
> 
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -336,7 +336,8 @@ void __init early_cpu_init(bool verbose)
>  
>  	c->x86_vendor = x86_cpuid_lookup_vendor(ebx, ecx, edx);
>  	switch (c->x86_vendor) {
> -	case X86_VENDOR_INTEL:    actual_cpu = intel_cpu_dev;    break;
> +	case X86_VENDOR_INTEL:    intel_unlock_cpuid_leaves(c);
> +				  actual_cpu = intel_cpu_dev;    break;
>  	case X86_VENDOR_AMD:      actual_cpu = amd_cpu_dev;      break;
>  	case X86_VENDOR_CENTAUR:  actual_cpu = centaur_cpu_dev;  break;
>  	case X86_VENDOR_SHANGHAI: actual_cpu = shanghai_cpu_dev; break;
> --- a/xen/arch/x86/cpu/cpu.h
> +++ b/xen/arch/x86/cpu/cpu.h
> @@ -24,3 +24,5 @@ void amd_init_lfence(struct cpuinfo_x86
>  void amd_init_ssbd(const struct cpuinfo_x86 *c);
>  void amd_init_spectral_chicken(void);
>  void detect_zen2_null_seg_behaviour(void);
> +
> +void intel_unlock_cpuid_leaves(struct cpuinfo_x86 *c);
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -303,10 +303,24 @@ static void __init noinline intel_init_l
>  		ctxt_switch_masking = intel_ctxt_switch_masking;
>  }
>  
> -static void cf_check early_init_intel(struct cpuinfo_x86 *c)
> +/* Unmask CPUID levels if masked. */
> +void intel_unlock_cpuid_leaves(struct cpuinfo_x86 *c)
>  {
> -	u64 misc_enable, disable;
> +	uint64_t misc_enable, disable;
> +
> +	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
> +
> +	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
> +	if (disable) {
> +		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
> +		bootsym(trampoline_misc_enable_off) |= disable;
> +		c->cpuid_level = cpuid_eax(0);
> +		printk(KERN_INFO "revised cpuid level: %u\n", c->cpuid_level);
> +	}
> +}
>  
> +static void cf_check early_init_intel(struct cpuinfo_x86 *c)
> +{
>  	/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
>  	if (c->x86 == 15 && c->x86_cache_alignment == 64)
>  		c->x86_cache_alignment = 128;
> @@ -315,16 +329,7 @@ static void cf_check early_init_intel(st
>  	    bootsym(trampoline_misc_enable_off) & MSR_IA32_MISC_ENABLE_XD_DISABLE)
>  		printk(KERN_INFO "re-enabled NX (Execute Disable) protection\n");
>  
> -	/* Unmask CPUID levels and NX if masked: */
> -	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
> -
> -	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
> -	if (disable) {
> -		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
> -		bootsym(trampoline_misc_enable_off) |= disable;
> -		printk(KERN_INFO "revised cpuid level: %d\n",
> -		       cpuid_eax(0));
> -	}
> +	intel_unlock_cpuid_leaves(c);
>  
>  	/* CPUID workaround for Intel 0F33/0F34 CPU */
>  	if (boot_cpu_data.x86 == 0xF && boot_cpu_data.x86_model == 3 &&


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 14:38:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 14:38:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740031.1147025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlb6-0007Nh-3C; Thu, 13 Jun 2024 14:38:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740031.1147025; Thu, 13 Jun 2024 14:38:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlb6-0007Na-0Q; Thu, 13 Jun 2024 14:38:40 +0000
Received: by outflank-mailman (input) for mailman id 740031;
 Thu, 13 Jun 2024 14:38:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g45b=NP=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sHlb5-0007NU-3d
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 14:38:39 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9efd71fc-2992-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 16:38:38 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id
 2adb3069b0e04-52c7f7fdd24so1586263e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 07:38:38 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ca287b357sm216038e87.236.2024.06.13.07.38.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 07:38:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9efd71fc-2992-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718289518; x=1718894318; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=crUZ8CcQxQzcWTke0O5S4Uk6BJ9qzUN61uNds8btAd4=;
        b=lck/4HtcmHjZJZ4keSY8eHFFJpKDzgIeCDA7kXm8SYuh+E17Am7uZ8Ug/CHqiUezX0
         p0anOn8bjfRkI6VI1BtnMYnVtTgDzOEhET12mC+FbTz1oKkF7sZ/bBeGvEekosqjhLkF
         lHQ4Nu5zKcRL4tBHpyz9ohZMGQsSsM/JygY2/nJ/mBYKfumOepSTtOPZ07f6zOchs6Yt
         OV9IV4s2xRcEyjt69uobWTEzszGRmEOqmTVNtyPmt0bnVQ2aOqof70HlvGpjWYTMv6LT
         x5oCRT+WaDTR8hIcnJDbE4RwVBXCm95FTzymhF3tWePgchwat3uOIIgOE9wXOJAV9vW2
         G4gA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718289518; x=1718894318;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=crUZ8CcQxQzcWTke0O5S4Uk6BJ9qzUN61uNds8btAd4=;
        b=GUPKsCpIGa1uvoJt0XnLkmzRFGwSCNKJgvlcul5nU9xksXBMT8pbtI+9CVzeMc7CUD
         qlJnldiz9KYJXe9JwZ95F8AHVV14oCmx+gH33tfU2ZWzbq/z8iuaJQFPqzy05SRAQEnz
         uSX9EgANse0R28jLb46sBf3wXGtYiB3w0DLnpVSxOwp13ruy3smGgWlIYUAj21+qWK+f
         h3DXtbdSem08k/pd2RzO9MFJTnuYZ1IfHycZQAN2NCveFbF/NngE/YuB3zvmEErDLZEI
         wAicJSoncT1GExxSIy100D6oQyfMRJxYfo5iaWyfTReRAiBh4DQ6NmeskDiCFEYMyRgL
         VSHw==
X-Forwarded-Encrypted: i=1; AJvYcCXHcS500dwXn3lb2UPPw8+BH22JFuytZHz/SeQpb8qEewjMoo0fzrUQ3FR06bHvC5HC30GEsWCB6LJ66f08lHOY3TQzintG5Mt4h+GiXiQ=
X-Gm-Message-State: AOJu0YyYPX8bjdK9702m1PPsEu6+iKb0ITyKGVxdR+MzESdcTcW5bVJt
	ZuCgFAnFDz2JufMcJ7RYWsrEqDXpi+84/VqADSXC9v81VIcen6n5
X-Google-Smtp-Source: AGHT+IG2JjFHDFkzX9vDpnk8HmkuybuIJTpYDm+aVb4jtILI5VGl98+ujPeuD7SMJJHA+gG+g0q82g==
X-Received: by 2002:a05:6512:131f:b0:52b:bee0:54b0 with SMTP id 2adb3069b0e04-52c9a3fd79dmr3599028e87.54.1718289517402;
        Thu, 13 Jun 2024 07:38:37 -0700 (PDT)
Message-ID: <39ecc9a67a2d71a4f97b80c97b7505664411046f.camel@gmail.com>
Subject: Re: [PATCH v2 for-4.19 1/3] x86/EPT: correct special page checking
 in epte_get_entry_emt()
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Date: Thu, 13 Jun 2024 16:38:36 +0200
In-Reply-To: <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
	 <175df1a2-a95f-462b-ad49-3a0fef727658@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Wed, 2024-06-12 at 15:16 +0200, Jan Beulich wrote:
> mfn_valid() granularity is (currently) 256Mb. Therefore the start of a
> 1Gb page passing the test doesn't necessarily mean all parts of such a
> range would also pass. Yet using the result of mfn_to_page() on an MFN
> which doesn't pass mfn_valid() checking is liable to result in a crash
> (the invocation of mfn_to_page() alone is presumably "just" UB in such
> a case).
> 
> Fixes: ca24b2ffdbd9 ("x86/hvm: set 'ipat' in EPT for special pages")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> ---
> Of course we could leverage mfn_valid() granularity here to do an
> increment by more than 1 if mfn_valid() returned false. Yet doing so
> likely would want a suitable helper to be introduced first, rather
> than open-coding such logic here.
> ---
> v2: New.
> 
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -519,8 +519,12 @@ int epte_get_entry_emt(struct domain *d,
>      }
>  
>      for ( special_pgs = i = 0; i < (1ul << order); i++ )
> -        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
> +    {
> +        mfn_t cur = mfn_add(mfn, i);
> +
> +        if ( mfn_valid(cur) && is_special_page(mfn_to_page(cur)) )
>              special_pgs++;
> +    }
>  
>      if ( special_pgs )
>      {
> 



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 14:39:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 14:39:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740038.1147037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlcI-0007xK-IO; Thu, 13 Jun 2024 14:39:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740038.1147037; Thu, 13 Jun 2024 14:39:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlcI-0007xD-Dr; Thu, 13 Jun 2024 14:39:54 +0000
Received: by outflank-mailman (input) for mailman id 740038;
 Thu, 13 Jun 2024 14:39:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=g45b=NP=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sHlcH-0007x5-7Z
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 14:39:53 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ca48798d-2992-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 16:39:52 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-52b7ffd9f6eso1223413e87.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 07:39:50 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ca287ae79sm219196e87.202.2024.06.13.07.39.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 07:39:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca48798d-2992-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718289590; x=1718894390; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Ec9JPpKhNcApee8NaDvVHEWeq39BraxKjX5B4FRrBUI=;
        b=UASa6DA/LtUBgVKW2VTNWAoTTxT2StDqVhTxwHklexAkF0W57YZshifNeKYLI10h5e
         mvJa50RqsdqXoFSZR6GgjF/5Tkfi9RuXBmnJxB5eC5xi77cWgHSSoPkM+OqPKsqZYRSA
         hAMBYE8uV1f+EA3k0BkWyzWml45cv7qxlCMYQAhWfZcxDD6o8BedcEpMnRj8lRzMVEON
         vgJF8iWUu/FS0ETQA8hgAy//ldNQyXdlYR89MXeBxkj0CJTR6OLDXhLeL7hQnXicTuDP
         t5kIaufiIh/XkDxIj4HI3UGPYtxZHKRq/MtOHU7OZTiNUl/0layzuT38b5gbRNyRWy/v
         ztLQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718289590; x=1718894390;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Ec9JPpKhNcApee8NaDvVHEWeq39BraxKjX5B4FRrBUI=;
        b=MbyJbqSEwgZftoEawOldgeD9seIp912LruHLktrWlWfl8hCbvPlGZGhJ4tSbTOSQCB
         JiQi17yhr/sHJdjp/VqCz9c+P39WjUt1GMutznAOIaOjdEfalZGSM6CN66FrUf57j+JH
         3bpT9cQYOuy2UIu9RkBVyDRHFRY3abz94QT3piQ79EvKE/Fp0NiSF0DXnSkhFDAZlCcH
         vQMpYGBi9gbxyMY7xGiMDJ/JRlLdXMXHth3BghwvTEmE5aRnKIcqcwOG2jrJsB48XV+t
         lzBBkqtqNDnnIZx6OBd4Q2ngMZ9y7dbst7MCahZY5jMxI9emJHDZmBastOxZfF7yBdO3
         z+SA==
X-Forwarded-Encrypted: i=1; AJvYcCXMCipBfpqtrjP9FjqIN41GCm3HtQU/VXKQQQI1fpnHCqWK8Ltm73kmGhSrdHED8QDKm7i4yfJUVO/LNelUZv4rJ7wVLzluozs0YwK9oG4=
X-Gm-Message-State: AOJu0YyZ0qI8pn9FGB1Cqg+TV5weJHfsPINADuSv9OtKNrNumV1t045b
	e7H8e2SFYID1hDBW3vHrBzor2PHf2GHf8wQGvqjBMC06lvQ5dkXJ
X-Google-Smtp-Source: AGHT+IFp4/JsvV6tLQqoQAz/NQPkpjzz5AikW9wFNTTlYMsx5gWmhIz6XSu+LrdqjpAX9Tr9tkwYOQ==
X-Received: by 2002:a19:384f:0:b0:52c:8009:e0cb with SMTP id 2adb3069b0e04-52c9a3dfacamr3049459e87.41.1718289590088;
        Thu, 13 Jun 2024 07:39:50 -0700 (PDT)
Message-ID: <eeed765c5d0a849742d51f829dc4c7ab2f967b81.camel@gmail.com>
Subject: Re: [PATCH v2 for-4.19 2/3] x86/EPT: avoid marking non-present
 entries for re-configuring
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Date: Thu, 13 Jun 2024 16:39:49 +0200
In-Reply-To: <d31f0f8e-4eb7-4617-86f6-81f38b5c61aa@suse.com>
References: <2936ffad-5395-45fd-877f-7fb2ca8b9dc8@suse.com>
	 <d31f0f8e-4eb7-4617-86f6-81f38b5c61aa@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Wed, 2024-06-12 at 15:16 +0200, Jan Beulich wrote:
> For non-present entries EMT, like most other fields, is meaningless
> to hardware. Make the logic in ept_set_entry() setting the field (and
> iPAT) conditional upon dealing with a present entry, leaving the
> value at 0 otherwise. This has two effects for epte_get_entry_emt()
> which we'll want to leverage subsequently:
> 1) The call moved here now won't be issued with INVALID_MFN anymore
>    (a respective BUG_ON() is being added).
> 2) Neither of the other two calls could now be issued with a
>    truncated form of INVALID_MFN anymore (as long as there's no bug
>    anywhere marking an entry present when that was populated using
>    INVALID_MFN).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> ---
> v2: New.
> 
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -650,6 +650,8 @@ static int cf_check resolve_misconfig(st
>              if ( e.emt != MTRR_NUM_TYPES )
>                  break;
>  
> +            ASSERT(is_epte_present(&e));
> +
>              if ( level == 0 )
>              {
>                  for ( gfn -= i, i = 0; i < EPT_PAGETABLE_ENTRIES; ++i )
> @@ -915,17 +917,6 @@ ept_set_entry(struct p2m_domain *p2m, gf
>  
>      if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
>      {
> -        bool ipat;
> -        int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
> -                                     i * EPT_TABLE_ORDER, &ipat,
> -                                     p2mt);
> -
> -        if ( emt >= 0 )
> -            new_entry.emt = emt;
> -        else /* ept_handle_misconfig() will need to take care of this. */
> -            new_entry.emt = MTRR_NUM_TYPES;
> -
> -        new_entry.ipat = ipat;
>          new_entry.sp = !!i;
>          new_entry.sa_p2mt = p2mt;
>          new_entry.access = p2ma;
> @@ -941,6 +932,22 @@ ept_set_entry(struct p2m_domain *p2m, gf
>              need_modify_vtd_table = 0;
>  
>          ept_p2m_type_to_flags(p2m, &new_entry);
> +
> +        if ( is_epte_present(&new_entry) )
> +        {
> +            bool ipat;
> +            int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
> +                                         i * EPT_TABLE_ORDER, &ipat,
> +                                         p2mt);
> +
> +            BUG_ON(mfn_eq(mfn, INVALID_MFN));
> +
> +            if ( emt >= 0 )
> +                new_entry.emt = emt;
> +            else /* ept_handle_misconfig() will need to take care of this. */
> +                new_entry.emt = MTRR_NUM_TYPES;
> +            new_entry.ipat = ipat;
> +        }
>      }
>  
>      if ( sve != -1 )
> 



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 14:59:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 14:59:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740046.1147046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHluu-00030l-1Y; Thu, 13 Jun 2024 14:59:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740046.1147046; Thu, 13 Jun 2024 14:59:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHlut-00030e-V7; Thu, 13 Jun 2024 14:59:07 +0000
Received: by outflank-mailman (input) for mailman id 740046;
 Thu, 13 Jun 2024 14:59:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X0CL=NP=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sHlus-00030Y-6J
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 14:59:06 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 79a86c59-2995-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 16:59:04 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-79-46-197-197.retail.telecomitalia.it [79.46.197.197])
 by support.bugseng.com (Postfix) with ESMTPSA id DCF064EE0756;
 Thu, 13 Jun 2024 16:59:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79a86c59-2995-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] public/sysctl: address violations of MISRA C: 2012 Rule 7.3
Date: Thu, 13 Jun 2024 16:58:43 +0200
Message-Id: <a68e796048912c816bc8320416024a60290f33e7.1718290222.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This addresses violations of MISRA C:2012 Rule 7.3, which states the
following: The lowercase character `l' shall not be used in a literal
suffix.

Moreover, the 'u' suffixes were changed to 'U' for better readability
next to the 'L's.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 xen/include/public/sysctl.h | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 3a6e7d48f0..b2a5a724db 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -898,15 +898,15 @@ struct xen_sysctl_psr_alloc {
  * instruction for PV guests.
  */
 struct xen_sysctl_cpu_levelling_caps {
-#define XEN_SYSCTL_CPU_LEVELCAP_faulting    (1ul <<  0) /* CPUID faulting    */
-#define XEN_SYSCTL_CPU_LEVELCAP_ecx         (1ul <<  1) /* 0x00000001.ecx    */
-#define XEN_SYSCTL_CPU_LEVELCAP_edx         (1ul <<  2) /* 0x00000001.edx    */
-#define XEN_SYSCTL_CPU_LEVELCAP_extd_ecx    (1ul <<  3) /* 0x80000001.ecx    */
-#define XEN_SYSCTL_CPU_LEVELCAP_extd_edx    (1ul <<  4) /* 0x80000001.edx    */
-#define XEN_SYSCTL_CPU_LEVELCAP_xsave_eax   (1ul <<  5) /* 0x0000000D:1.eax  */
-#define XEN_SYSCTL_CPU_LEVELCAP_thermal_ecx (1ul <<  6) /* 0x00000006.ecx    */
-#define XEN_SYSCTL_CPU_LEVELCAP_l7s0_eax    (1ul <<  7) /* 0x00000007:0.eax  */
-#define XEN_SYSCTL_CPU_LEVELCAP_l7s0_ebx    (1ul <<  8) /* 0x00000007:0.ebx  */
+#define XEN_SYSCTL_CPU_LEVELCAP_faulting    (1UL <<  0) /* CPUID faulting    */
+#define XEN_SYSCTL_CPU_LEVELCAP_ecx         (1UL <<  1) /* 0x00000001.ecx    */
+#define XEN_SYSCTL_CPU_LEVELCAP_edx         (1UL <<  2) /* 0x00000001.edx    */
+#define XEN_SYSCTL_CPU_LEVELCAP_extd_ecx    (1UL <<  3) /* 0x80000001.ecx    */
+#define XEN_SYSCTL_CPU_LEVELCAP_extd_edx    (1UL <<  4) /* 0x80000001.edx    */
+#define XEN_SYSCTL_CPU_LEVELCAP_xsave_eax   (1UL <<  5) /* 0x0000000D:1.eax  */
+#define XEN_SYSCTL_CPU_LEVELCAP_thermal_ecx (1UL <<  6) /* 0x00000006.ecx    */
+#define XEN_SYSCTL_CPU_LEVELCAP_l7s0_eax    (1UL <<  7) /* 0x00000007:0.eax  */
+#define XEN_SYSCTL_CPU_LEVELCAP_l7s0_ebx    (1UL <<  8) /* 0x00000007:0.ebx  */
     uint32_t caps;
 };
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 15:16:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 15:16:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740059.1147067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHmBi-00064E-JT; Thu, 13 Jun 2024 15:16:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740059.1147067; Thu, 13 Jun 2024 15:16:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHmBi-000647-Gw; Thu, 13 Jun 2024 15:16:30 +0000
Received: by outflank-mailman (input) for mailman id 740059;
 Thu, 13 Jun 2024 15:16:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHmBh-00063O-HA
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 15:16:29 +0000
Received: from mail-qk1-x731.google.com (mail-qk1-x731.google.com
 [2607:f8b0:4864:20::731])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e8068be1-2997-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 17:16:28 +0200 (CEST)
Received: by mail-qk1-x731.google.com with SMTP id
 af79cd13be357-7971a9947e6so60524685a.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 08:16:28 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798ab4c16acsm58807085a.59.2024.06.13.08.16.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 08:16:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8068be1-2997-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718291787; x=1718896587; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=PfLirjC4I7QEInamUbgkgzt7hcToCwfJqOKrYGCSyaQ=;
        b=iFQSpkfd+H/AHBTKlvwatEKw3VUQ36fhzShMpIkSwHm662wkyiWtB9A37ipvrhvcAX
         QN3Htrz/wcpZJmU/QDxcx1W4xDbxEdf+PDW1PC/EJTZHGDSzoL15mBlopnNXw22YIOp8
         EYgGe35+YQB2KD7LM+YC30xhnp3Mtn3NRIZ6U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718291787; x=1718896587;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PfLirjC4I7QEInamUbgkgzt7hcToCwfJqOKrYGCSyaQ=;
        b=nEPPa6afQFV1M+jHVIelJOZKYWrcbczAiPGghCdBtDt400DZYMjYKEGTAJFklvi92F
         9nyN5oIHF7m8RKEH3blM7QlIYj46LtV/mTJ5F52rzaL3WymLUf11YFLiGwq9uRi9kkPj
         WEOg/fKMmx9sQGHBqdC+L0ZWyFMLjmqBJ+x9RXd+fHLpjre+Ff2l3XssBQqf6cserH/2
         WagueA+GRsvr9BtNn1f0d+elRaK4gk8zVOo9xIAlnRU9ngNNCsYt63oBky/5z6hFOQPJ
         2uvMcM9ouHJsfA1wKNATaGvZQYqyFIFbSKlkecuQpLkQop8HBJx+pzNeFdo+HWCY53/N
         M8FA==
X-Gm-Message-State: AOJu0YwO9YHN6sV6x5bXFpNhfZk/8LzxLfhX9rj5wHWV2lwehFb0dfy7
	ryiA3YgS+e9PuuaCNuJSh0jKh0g3IKGRSnWeeKuIjDZYE3dgYWiuTQZpudNfNIY=
X-Google-Smtp-Source: AGHT+IEc3GLRs7dJQIIceFqtjsa8+3aiiyeOKgbOgzs7/CBDk5W5gZ9cAD4fcuD99qlHYQb/dWAIrQ==
X-Received: by 2002:a05:620a:29d2:b0:795:5440:f872 with SMTP id af79cd13be357-797f60a6d49mr609755785a.28.1718291787271;
        Thu, 13 Jun 2024 08:16:27 -0700 (PDT)
Date: Thu, 13 Jun 2024 17:16:25 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
Message-ID: <ZmsNSUmum8mRxkCs@macbook>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>

On Thu, Jun 13, 2024 at 10:19:30AM +0200, Jan Beulich wrote:
> Intel CPUs have a MSR bit to limit CPUID enumeration to leaf two. If
> this bit is set by the BIOS then CPUID evaluation does not work when
> data from any leaf greater than two is needed; early_cpu_init() in
> particular wants to collect leaf 7 data.
> 
> Cure this by unlocking CPUID right before evaluating anything which
> depends on the maximum CPUID leaf being greater than two.
> 
> Inspired by (and description cloned from) Linux commit 0c2f6d04619e
> ("x86/topology/intel: Unlock CPUID before evaluating anything").
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> While I couldn't spot anything, it kind of feels as if I'm overlooking
> further places where we might be inspecting in particular leaf 7 yet
> earlier.
> 
> No Fixes: tag(s), as imo it would be too many that would want
> enumerating.
> 
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -336,7 +336,8 @@ void __init early_cpu_init(bool verbose)
>  
>  	c->x86_vendor = x86_cpuid_lookup_vendor(ebx, ecx, edx);
>  	switch (c->x86_vendor) {
> -	case X86_VENDOR_INTEL:    actual_cpu = intel_cpu_dev;    break;
> +	case X86_VENDOR_INTEL:    intel_unlock_cpuid_leaves(c);
> +				  actual_cpu = intel_cpu_dev;    break;
>  	case X86_VENDOR_AMD:      actual_cpu = amd_cpu_dev;      break;
>  	case X86_VENDOR_CENTAUR:  actual_cpu = centaur_cpu_dev;  break;
>  	case X86_VENDOR_SHANGHAI: actual_cpu = shanghai_cpu_dev; break;
> --- a/xen/arch/x86/cpu/cpu.h
> +++ b/xen/arch/x86/cpu/cpu.h
> @@ -24,3 +24,5 @@ void amd_init_lfence(struct cpuinfo_x86
>  void amd_init_ssbd(const struct cpuinfo_x86 *c);
>  void amd_init_spectral_chicken(void);
>  void detect_zen2_null_seg_behaviour(void);
> +
> +void intel_unlock_cpuid_leaves(struct cpuinfo_x86 *c);
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -303,10 +303,24 @@ static void __init noinline intel_init_l
>  		ctxt_switch_masking = intel_ctxt_switch_masking;
>  }
>  
> -static void cf_check early_init_intel(struct cpuinfo_x86 *c)
> +/* Unmask CPUID levels if masked. */
> +void intel_unlock_cpuid_leaves(struct cpuinfo_x86 *c)
>  {
> -	u64 misc_enable, disable;
> +	uint64_t misc_enable, disable;
> +
> +	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
> +
> +	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
> +	if (disable) {
> +		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
> +		bootsym(trampoline_misc_enable_off) |= disable;
> +		c->cpuid_level = cpuid_eax(0);
> +		printk(KERN_INFO "revised cpuid level: %u\n", c->cpuid_level);
> +	}
> +}
>  
> +static void cf_check early_init_intel(struct cpuinfo_x86 *c)
> +{
>  	/* Netburst reports 64 bytes clflush size, but does IO in 128 bytes */
>  	if (c->x86 == 15 && c->x86_cache_alignment == 64)
>  		c->x86_cache_alignment = 128;
> @@ -315,16 +329,7 @@ static void cf_check early_init_intel(st
>  	    bootsym(trampoline_misc_enable_off) & MSR_IA32_MISC_ENABLE_XD_DISABLE)
>  		printk(KERN_INFO "re-enabled NX (Execute Disable) protection\n");
>  
> -	/* Unmask CPUID levels and NX if masked: */
> -	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
> -
> -	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
> -	if (disable) {
> -		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
> -		bootsym(trampoline_misc_enable_off) |= disable;
> -		printk(KERN_INFO "revised cpuid level: %d\n",
> -		       cpuid_eax(0));
> -	}
> +	intel_unlock_cpuid_leaves(c);

Do you really need to call intel_unlock_cpuid_leaves() here?

For the BSP it will be taken care of in early_cpu_init(), and for the
APs MSR_IA32_MISC_ENABLE will be set as part of the trampoline, with
the disables from trampoline_misc_enable_off.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 15:39:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 15:39:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740075.1147077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHmXQ-0001Vc-C9; Thu, 13 Jun 2024 15:38:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740075.1147077; Thu, 13 Jun 2024 15:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHmXQ-0001VV-99; Thu, 13 Jun 2024 15:38:56 +0000
Received: by outflank-mailman (input) for mailman id 740075;
 Thu, 13 Jun 2024 15:38:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T6Wr=NP=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sHmXP-0001VK-2T
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 15:38:55 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 09ca034f-299b-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 17:38:53 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-52ca342d6f3so700886e87.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 08:38:53 -0700 (PDT)
Received: from [192.168.1.132] (cm-93-156-201-199.telecable.es.
 [93.156.201.199]) by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-422f641a6c1sm28939125e9.40.2024.06.13.08.38.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 08:38:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09ca034f-299b-11ef-b4bb-af5377834399
Message-ID: <5d00914f-f43c-4e3e-885c-7a9d3fa48411@cloud.com>
Date: Thu, 13 Jun 2024 16:38:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Subject: Re: [PATCH v4 1/2] tools/xg: Streamline cpu policy
 serialise/deserialise calls
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>
References: <cover.1716992707.git.alejandro.vallejo@cloud.com>
 <f456bfb8996bb1fd4b965755622cda6fcb61b297.1716992707.git.alejandro.vallejo@cloud.com>
 <497624e9-053a-407e-86f0-e3d2c8883cd7@citrix.com>
Content-Language: en-GB
In-Reply-To: <497624e9-053a-407e-86f0-e3d2c8883cd7@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 31/05/2024 00:59, Andrew Cooper wrote:
> On 29/05/2024 3:30 pm, Alejandro Vallejo wrote:
>> diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
>> index e01f494b772a..85d56f26537b 100644
>> --- a/tools/include/xenguest.h
>> +++ b/tools/include/xenguest.h
>> @@ -799,15 +799,23 @@ int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
>>                               xc_cpu_policy_t *policy);
>>  
>>  /* Manipulate a policy via architectural representations. */
>> -int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *policy,
>> -                            xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
>> -                            xen_msr_entry_t *msrs, uint32_t *nr_msrs);
>> +int xc_cpu_policy_serialise(xc_interface *xch, xc_cpu_policy_t *policy);
>>  int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
>>                                 const xen_cpuid_leaf_t *leaves,
>>                                 uint32_t nr);
>>  int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
>>                                const xen_msr_entry_t *msrs, uint32_t nr);
>>  
>> +/*
>> + * Accessors for the serialised forms of the policy. The outputs are pointers
>> + * into the policy object and not fresh allocations, so their lifetimes are tied
>> + * to the policy object itself.
> 
> This is far more complicated.  See below.
> 
>> + */
>> +int xc_cpu_policy_get_leaves(xc_interface *xch, const xc_cpu_policy_t *policy,
>> +                             const xen_cpuid_leaf_t **leaves, uint32_t *nr);
>> +int xc_cpu_policy_get_msrs(xc_interface *xch, const xc_cpu_policy_t *policy,
>> +                           const xen_msr_entry_t **msrs, uint32_t *nr);
>> +
>>  /* Compatibility calculations. */
>>  bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
>>                                   xc_cpu_policy_t *guest);
>> diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
>> index 4453178100ad..6cab5c60bb41 100644
>> --- a/tools/libs/guest/xg_cpuid_x86.c
>> +++ b/tools/libs/guest/xg_cpuid_x86.c
>> @@ -834,14 +834,13 @@ void xc_cpu_policy_destroy(xc_cpu_policy_t *policy)
>>      }
>>  }
>>  
>> -static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy,
>> -                              unsigned int nr_leaves, unsigned int nr_entries)
>> +static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy)
>>  {
>>      uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
>>      int rc;
>>  
>>      rc = x86_cpuid_copy_from_buffer(&policy->policy, policy->leaves,
>> -                                    nr_leaves, &err_leaf, &err_subleaf);
>> +                                    policy->nr_leaves, &err_leaf, &err_subleaf);
>>      if ( rc )
>>      {
>>          if ( err_leaf != -1 )
> 
> Urgh - this is a mess.

That's a fair assessment, yes :)

> (Not your fault, but we really need to think
> twice before continuing.)
>
> xc_cpu_policy_serialise() is an exported function and, prior to this
> series, used by external entities to get at the content inside the
> opaque object.
>
> deserialize_policy()  (Clearly not written by me - Roger?) is a local
> helper.  Also it looks wonky in the next patch, although I think that's
> just code movement to avoid a forward declaration?

Correct.

> 
> By the end of the series, xc_cpu_policy_serialise() isn't used
> externally, but it's still exported.

Because it's external and I didn't want to break the ABI should it be
used somewhere downstream. Happy to change that though.

> 
> But, besides the visibility, there's a second difference...
> 
> 
>> @@ -851,7 +850,7 @@ static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy,
>>      }
>>  
>>      rc = x86_msr_copy_from_buffer(&policy->policy, policy->msrs,
>> -                                  nr_entries, &err_msr);
>> +                                  policy->nr_msrs, &err_msr);
>>      if ( rc )
>>      {
>>          if ( err_msr != -1 )
>> @@ -878,7 +877,10 @@ int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
>>          return rc;
>>      }
>>  
>> -    rc = deserialize_policy(xch, policy, nr_leaves, nr_msrs);
>> +    policy->nr_leaves = nr_leaves;
>> +    policy->nr_msrs = nr_msrs;
>> +
>> +    rc = deserialize_policy(xch, policy);
> 
> ... they're asymmetric as to whether the caller or the callee preloads
> policy->nr_*.
>
> Both of these need rationalising, one way or another.
> 


I'm not sure I follow. Neither the why nor the how.

nr_* assignments are not arbitrary. Deserialising doesn't involve
modifying the number of leaves/MSRs. The cases in which they are set are:

1. New policy is loaded from the hypervisor
    * Must be reset to maximum values before hypercall and set to the
outputs of the hypercall after.
2. Policy is serialised
    * Number of leaves could increase/decrease

Case (1) must be handled in the hypercall wrappers themselves, because
the policy as a whole is effectively an output object there. In case (2)
they must be set by the serialiser itself, as that's the one that knows
the end result.
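To make the two flows concrete, here is a minimal, self-contained model
of the asymmetry being discussed. All names and sizes are stand-ins (the
hypercall is a stub), not the real libxenguest code:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_LEAVES 8
#define MAX_MSRS   4

/* Toy stand-in for xc_cpu_policy_t: fixed staging arrays plus counts. */
struct policy {
    uint32_t leaves[MAX_LEAVES];
    uint32_t msrs[MAX_MSRS];
    uint32_t nr_leaves, nr_msrs;
};

/* Stub "hypercall": fills as many entries as the caller allows and
 * shrinks the counts to what was actually written. */
static int fake_get_policy(uint32_t *leaves, uint32_t *nr_leaves,
                           uint32_t *msrs, uint32_t *nr_msrs)
{
    uint32_t i, produced_leaves = 5, produced_msrs = 2;

    if (produced_leaves > *nr_leaves || produced_msrs > *nr_msrs)
        return -1;
    for (i = 0; i < produced_leaves; i++) leaves[i] = i;
    for (i = 0; i < produced_msrs; i++) msrs[i] = 100 + i;
    *nr_leaves = produced_leaves;
    *nr_msrs = produced_msrs;
    return 0;
}

/* Case (1): loading a policy.  The wrapper resets nr_* to the array
 * maxima before the call and keeps the hypercall's outputs after. */
int policy_get_system(struct policy *p)
{
    p->nr_leaves = MAX_LEAVES;
    p->nr_msrs = MAX_MSRS;
    return fake_get_policy(p->leaves, &p->nr_leaves, p->msrs, &p->nr_msrs);
}

/* Case (2): serialising.  Only the serialiser knows the end result,
 * so it sets nr_* itself (here: pretend 3 leaves and 1 MSR survive). */
int policy_serialise(struct policy *p)
{
    p->nr_leaves = 3;
    p->nr_msrs = 1;
    return 0;
}
```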

Could you elaborate on what you mean?

> 
> But, there's a related problem.
> 
> Previously there was only one canonical form (the deserialised form),
> and anything operating on state was responsible for getting it back to
> being the deserialised form.

That's true. But this tension exists regardless of this patch. Some
parts of the code want to operate on raw data and others on structured
data; and then others on featuresets because 3 forms are better than 2.

The roundtrip through featuresets already breaks apart the idea of a
"canonical" form. Truth is, users of this framework operate on a "I know
what form I'm operating on and what form I must restore it to", if not
by design then by accident.

I don't have a good argument about the fragility of the whole thing
besides it being silly, convoluted and visually noisy to dynamically
allocate a fixed-sized array in order to deserialize things. Plus it's
yet another source of errors when callers have to keep track of their
own dynamic buffers.

> 
> Now, there are two forms which are coexist side by side.  The buffer
> exposed by get_{cpuid,msr}() is only good until the next operation which
> uses what were (previously) the internal staging buffer(s).
> 
> And that makes it a fragile and error prone interface.

3, including the featuresets. And note that dynamically allocating
buffers outside the policy object is very error prone as well. It's a
matter of where we want to have fragility.

Unrelated to what you mean, but I'll put my Rust hat on and start
preaching: this is the sort of thing where Rust's borrow checker can be
leveraged to avoid lifetime-related pitfalls.

Can you think of another way that doesn't involve a copyout?
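For illustration, one conceivable scheme that avoids both a copyout and
stale reads is a dirty flag checked in the accessors. This is a toy
model only; none of these names exist in the real code, and it would
cost the accessors their const-ness:

```c
#include <assert.h>
#include <stdint.h>

#define MAX_LEAVES 8

/* Toy policy: a structured form (just a count here), a serialised
 * staging buffer, and a flag recording whether the buffer is stale. */
struct policy {
    uint32_t structured_nr;          /* "deserialised" form */
    uint32_t leaves[MAX_LEAVES];     /* serialised staging buffer */
    uint32_t nr_leaves;
    int dirty;                       /* structured form modified? */
};

static void serialise(struct policy *p)
{
    for (uint32_t i = 0; i < p->structured_nr; i++)
        p->leaves[i] = i;
    p->nr_leaves = p->structured_nr;
    p->dirty = 0;
}

/* Mutators mark the staging buffer stale... */
void policy_update(struct policy *p, uint32_t nr)
{
    p->structured_nr = nr;
    p->dirty = 1;
}

/* ...and the accessor re-serialises lazily, so callers can never
 * observe a stale buffer through it. */
const uint32_t *policy_get_leaves(struct policy *p, uint32_t *nr)
{
    if (p->dirty)
        serialise(p);
    *nr = p->nr_leaves;
    return p->leaves;
}
```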

>>      if ( rc )
>>      {
>>          errno = -rc;
>> @@ -903,7 +905,10 @@ int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
>>          return rc;
>>      }
>>  
>> -    rc = deserialize_policy(xch, policy, nr_leaves, nr_msrs);
>> +    policy->nr_leaves = nr_leaves;
>> +    policy->nr_msrs = nr_msrs;
>> +
>> +    rc = deserialize_policy(xch, policy);
>>      if ( rc )
>>      {
>>          errno = -rc;
>> @@ -917,17 +922,14 @@ int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
>>                               xc_cpu_policy_t *policy)
>>  {
>>      uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
>> -    unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
>> -    unsigned int nr_msrs = ARRAY_SIZE(policy->msrs);
>>      int rc;
>>  
>> -    rc = xc_cpu_policy_serialise(xch, policy, policy->leaves, &nr_leaves,
>> -                                 policy->msrs, &nr_msrs);
>> +    rc = xc_cpu_policy_serialise(xch, policy);
>>      if ( rc )
>>          return rc;
>>  
>> -    rc = xc_set_domain_cpu_policy(xch, domid, nr_leaves, policy->leaves,
>> -                                  nr_msrs, policy->msrs,
>> +    rc = xc_set_domain_cpu_policy(xch, domid, policy->nr_leaves, policy->leaves,
>> +                                  policy->nr_msrs, policy->msrs,
>>                                    &err_leaf, &err_subleaf, &err_msr);
>>      if ( rc )
>>      {
>> @@ -942,32 +944,26 @@ int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
>>      return rc;
>>  }
>>  
>> -int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *p,
>> -                            xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
>> -                            xen_msr_entry_t *msrs, uint32_t *nr_msrs)
>> +int xc_cpu_policy_serialise(xc_interface *xch, xc_cpu_policy_t *p)
>>  {
>>      int rc;
>> +    p->nr_leaves = ARRAY_SIZE(p->leaves);
>> +    p->nr_msrs = ARRAY_SIZE(p->msrs);
>>  
>> -    if ( leaves )
>> +    rc = x86_cpuid_copy_to_buffer(&p->policy, p->leaves, &p->nr_leaves);
>> +    if ( rc )
>>      {
>> -        rc = x86_cpuid_copy_to_buffer(&p->policy, leaves, nr_leaves);
>> -        if ( rc )
>> -        {
>> -            ERROR("Failed to serialize CPUID policy");
>> -            errno = -rc;
>> -            return -1;
>> -        }
>> +        ERROR("Failed to serialize CPUID policy");
>> +        errno = -rc;
>> +        return -1;
>>      }
>>  
>> -    if ( msrs )
>> +    rc = x86_msr_copy_to_buffer(&p->policy, p->msrs, &p->nr_msrs);
>> +    if ( rc )
>>      {
>> -        rc = x86_msr_copy_to_buffer(&p->policy, msrs, nr_msrs);
>> -        if ( rc )
>> -        {
>> -            ERROR("Failed to serialize MSR policy");
>> -            errno = -rc;
>> -            return -1;
>> -        }
>> +        ERROR("Failed to serialize MSR policy");
>> +        errno = -rc;
>> +        return -1;
>>      }
>>  
>>      errno = 0;
>> @@ -1012,6 +1008,42 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
>>      return rc;
>>  }
>>  
>> +int xc_cpu_policy_get_leaves(xc_interface *xch,
>> +                             const xc_cpu_policy_t *policy,
>> +                             const xen_cpuid_leaf_t **leaves,
>> +                             uint32_t *nr)
>> +{
>> +    if ( !policy )
>> +    {
>> +        ERROR("Failed to fetch CPUID leaves from policy object");
>> +        errno = -EINVAL;
>> +        return -1;
>> +    }
> 
> This check isn't useful, and it's making the interface inconsistent. 
> There's no case ever where a NULL policy is meaningful, except for the
> very initial failure to allocate, and there it's the return value not an
> input parameter.
>
> More importantly however, the error message is misleading as a consequence.

Pretty sure I've been asked before to validate trivial preconditions,
so I'm guessing various maintainers have conflicting opinions on how
aggressively to validate?

TL;DR: Sure, happy to rip that out.

> 
>> +
>> +    *leaves = policy->leaves;
>> +    *nr = policy->nr_leaves;
>> +
>> +    return 0;
>> +}
>> +
>> +int xc_cpu_policy_get_msrs(xc_interface *xch,
>> +                           const xc_cpu_policy_t *policy,
>> +                           const xen_msr_entry_t **msrs,
>> +                           uint32_t *nr)
>> +{
>> +    if ( !policy )
>> +    {
>> +        ERROR("Failed to fetch MSRs from policy object");
>> +        errno = -EINVAL;
>> +        return -1;
>> +    }
>> +
>> +    *msrs = policy->msrs;
>> +    *nr = policy->nr_msrs;
>> +
>> +    return 0;
>> +}
>> +
>>  bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
>>                                   xc_cpu_policy_t *guest)
>>  {
>> diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
>> index d73947094f2e..a65dae818f3d 100644
>> --- a/tools/libs/guest/xg_private.h
>> +++ b/tools/libs/guest/xg_private.h
>> @@ -177,6 +177,8 @@ struct xc_cpu_policy {
>>      struct cpu_policy policy;
>>      xen_cpuid_leaf_t leaves[CPUID_MAX_SERIALISED_LEAVES];
>>      xen_msr_entry_t msrs[MSR_MAX_SERIALISED_ENTRIES];
>> +    uint32_t nr_leaves;
>> +    uint32_t nr_msrs;
> 
> These need a comment explaining how they're used, and sadly they have no
> relationship to the lengths of the array.  There's a corner case where
> they can end up larger.

They may end up smaller, but they must absolutely never end up larger. If
such a corner case exists, please elaborate because it's a bug I'd like
to have fixed.

>  
>>  };
>>  #endif /* x86 */
>>  
>> diff --git a/tools/libs/guest/xg_sr_common_x86.c b/tools/libs/guest/xg_sr_common_x86.c
>> index 563b4f016877..a0d67c3211c6 100644
>> --- a/tools/libs/guest/xg_sr_common_x86.c
>> +++ b/tools/libs/guest/xg_sr_common_x86.c
>> @@ -1,4 +1,5 @@
>>  #include "xg_sr_common_x86.h"
>> +#include "xg_sr_stream_format.h"
> 
> I'm pretty sure this shouldn't be necessary.  Is it?

Indeed. It's a leftover from the previous version.

> 
>>  
>>  int write_x86_tsc_info(struct xc_sr_context *ctx)
>>  {
>> @@ -45,54 +46,39 @@ int handle_x86_tsc_info(struct xc_sr_context *ctx, struct xc_sr_record *rec)
>>  int write_x86_cpu_policy_records(struct xc_sr_context *ctx)
>>  {
>>      xc_interface *xch = ctx->xch;
>> -    struct xc_sr_record cpuid = { .type = REC_TYPE_X86_CPUID_POLICY, };
>> -    struct xc_sr_record msrs  = { .type = REC_TYPE_X86_MSR_POLICY, };
>> -    uint32_t nr_leaves = 0, nr_msrs = 0;
>> -    xc_cpu_policy_t *policy = NULL;
>> +    xc_cpu_policy_t *policy = xc_cpu_policy_init();
>>      int rc;
>>  
>> -    if ( xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs) < 0 )
>> -    {
>> -        PERROR("Unable to get CPU Policy size");
>> -        return -1;
>> -    }
>> -
>> -    cpuid.data = malloc(nr_leaves * sizeof(xen_cpuid_leaf_t));
>> -    msrs.data  = malloc(nr_msrs   * sizeof(xen_msr_entry_t));
>> -    policy = xc_cpu_policy_init();
>> -    if ( !cpuid.data || !msrs.data || !policy )
>> -    {
>> -        ERROR("Cannot allocate memory for CPU Policy");
>> -        rc = -1;
>> -        goto out;
>> -    }
>> -
>> -    if ( xc_cpu_policy_get_domain(xch, ctx->domid, policy) )
>> +    if ( !policy || xc_cpu_policy_get_domain(xch, ctx->domid, policy) )
>>      {
>>          PERROR("Unable to get d%d CPU Policy", ctx->domid);
>>          rc = -1;
>>          goto out;
>>      }
>> -    if ( xc_cpu_policy_serialise(xch, policy, cpuid.data, &nr_leaves,
>> -                                 msrs.data, &nr_msrs) )
>> -    {
>> -        PERROR("Unable to serialize d%d CPU Policy", ctx->domid);
>> -        rc = -1;
>> -        goto out;
>> -    }
> 
> Wow, the old code here was especially daft.

Can confirm.

> 
> We're having Xen serialise the policy, copying (double buffering) into
> the policy object then deserialising.  And vs the old copy, we've got
> rid of the re-serialise into yet another buffer.
> 
> But we should still be using a plain XEN_DOMCTL_get_cpu_policy here. 
> Literally all we want to do is take the array(s) Xen gave us and feed
> them straight into the fd.
> 
> deserialising is already a reasonably expensive operation (every
> individual leaf coordinate needs re-range checking), and is only ever
> going to get worse.

I wouldn't go that far. It's definitely in "don't do it every other
operation" territory, but it's a tiny fraction of the (current) cost of
a single hypercall.

> 
> It will probably help to split the changes to
> write_x86_cpu_policy_records() out into a separate patch.  It's more
> clear cut and also addresses one of the local vs external issues
> discussed above.
> 
> 
>>  
>> -    cpuid.length = nr_leaves * sizeof(xen_cpuid_leaf_t);
>> -    if ( cpuid.length )
>> +
>> +    if ( policy->nr_leaves )
>>      {
>> -        rc = write_record(ctx, &cpuid);
>> +        struct xc_sr_record record = {
>> +            .type = REC_TYPE_X86_CPUID_POLICY,
>> +            .data = policy->leaves,
>> +            .length = policy->nr_leaves * sizeof(*policy->leaves),
>> +        };
>> +
>> +        rc = write_record(ctx, &record);
> 
> Please keep this name being cpuid.  It's more helpful when grepping, and
> it also shrinks the diff.

Ack

> 
>>          if ( rc )
>>              goto out;
>>      }
>>  
>> -    msrs.length = nr_msrs * sizeof(xen_msr_entry_t);
>> -    if ( msrs.length )
>> +    if ( policy->nr_msrs )
>>      {
>> -        rc = write_record(ctx, &msrs);
>> +        struct xc_sr_record record = {
>> +            .type = REC_TYPE_X86_MSR_POLICY,
>> +            .data = policy->msrs,
>> +            .length = policy->nr_msrs * sizeof(*policy->msrs),
>> +        };
>> +
>> +        rc = write_record(ctx, &record);
>>          if ( rc )
>>              goto out;
>>      }
>> @@ -100,8 +86,6 @@ int write_x86_cpu_policy_records(struct xc_sr_context *ctx)
>>      rc = 0;
>>  
>>   out:
>> -    free(cpuid.data);
>> -    free(msrs.data);
>>      xc_cpu_policy_destroy(policy);
>>  
>>      return rc;
>> diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
>> index 4c4593528dfe..488f43378406 100644
>> --- a/tools/misc/xen-cpuid.c
>> +++ b/tools/misc/xen-cpuid.c
>> @@ -156,12 +156,18 @@ static void dump_info(xc_interface *xch, bool detail)
>>  
>>      free(fs);
>>  }
>> -
> 
> Stray (deleted) whitespace.

Ack

> 
>> -static void print_policy(const char *name,
>> -                         xen_cpuid_leaf_t *leaves, uint32_t nr_leaves,
>> -                         xen_msr_entry_t *msrs, uint32_t nr_msrs)
>> +static void print_policy(xc_interface *xch, const char *name,
>> +                         const xc_cpu_policy_t *policy)
>>  {
>>      unsigned int l;
>> +    const xen_cpuid_leaf_t *leaves;
>> +    const xen_msr_entry_t *msrs;
>> +    uint32_t nr_leaves, nr_msrs;
>> +
>> +    if ( xc_cpu_policy_get_leaves(xch, policy, &leaves, &nr_leaves) )
>> +        err(1, "xc_cpu_policy_get_leaves()");
>> +    if ( xc_cpu_policy_get_msrs(xch, policy, &msrs, &nr_msrs) )
>> +        err(1, "xc_cpu_policy_get_msrs()");
> 
> Not an issue here per se, but to drive home the main problem.
> 
> This doesn't return the current leaves/msrs.  It gives you whatever's
> stale in the staging buffer, which happens to be ok in xen-cpuid because
> it only ever reads a policy...
> 
> ~Andrew

It's not stale if the deserialized policy is not dirty. The alternative
is a copyout (what happened before) which was way more brittle and
involved more hypercalls.

As I mentioned before, if you can think of a better scheme, I'm happy
to consider it.

Cheers,
Alejandro


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 15:53:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 15:53:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740084.1147088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHml9-0004Sw-Lp; Thu, 13 Jun 2024 15:53:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740084.1147088; Thu, 13 Jun 2024 15:53:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHml9-0004Sp-In; Thu, 13 Jun 2024 15:53:07 +0000
Received: by outflank-mailman (input) for mailman id 740084;
 Thu, 13 Jun 2024 15:53:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mvQ+=NP=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sHml7-0004Sh-Ul
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 15:53:05 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 054695f3-299d-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 17:53:04 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6269885572so402393666b.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 08:53:04 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db6b32sm85095666b.59.2024.06.13.08.53.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 08:53:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 054695f3-299d-11ef-90a3-e314d9c70b13
Message-ID: <4cb42dae-430b-4740-bddd-df5c4c783724@suse.com>
Date: Thu, 13 Jun 2024 17:53:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
 <ZmsNSUmum8mRxkCs@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZmsNSUmum8mRxkCs@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.06.2024 17:16, Roger Pau Monné wrote:
> On Thu, Jun 13, 2024 at 10:19:30AM +0200, Jan Beulich wrote:
>> @@ -315,16 +329,7 @@ static void cf_check early_init_intel(st
>>  	    bootsym(trampoline_misc_enable_off) & MSR_IA32_MISC_ENABLE_XD_DISABLE)
>>  		printk(KERN_INFO "re-enabled NX (Execute Disable) protection\n");
>>  
>> -	/* Unmask CPUID levels and NX if masked: */
>> -	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
>> -
>> -	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
>> -	if (disable) {
>> -		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
>> -		bootsym(trampoline_misc_enable_off) |= disable;
>> -		printk(KERN_INFO "revised cpuid level: %d\n",
>> -		       cpuid_eax(0));
>> -	}
>> +	intel_unlock_cpuid_leaves(c);
> 
> Do you really need to call intel_unlock_cpuid_leaves() here?
> 
> For the BSP it will be taken care of in early_cpu_init(), and for the
> APs MSR_IA32_MISC_ENABLE will be set as part of the trampoline with
> the disables from trampoline_misc_enable_off.

The way the original code works, it allows dealing with the BSP having
the bit clear, but some or all of the APs having it set. I simply didn't
want to break that property.
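That property can be modelled with fake MSR state in a few lines. This
is purely an illustrative simulation; the real code reads and writes
MSR_IA32_MISC_ENABLE with rdmsrl/wrmsrl, and everything below is a
stand-in:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 22 of IA32_MISC_ENABLE: "Limit CPUID Maxval". */
#define LIMIT_CPUID (1ull << 22)

/* Fake per-CPU MSR state and the accumulated trampoline mask. */
static uint64_t misc_enable[4];
static uint64_t trampoline_misc_enable_off;

/* Stand-in for the unlock helper: clear the limit bit on the current
 * CPU and remember it so the AP trampoline clears it as well. */
static void unlock_cpuid_leaves(unsigned int cpu)
{
    uint64_t disable = misc_enable[cpu] & LIMIT_CPUID;

    if (disable) {
        misc_enable[cpu] &= ~disable;
        trampoline_misc_enable_off |= disable;
    }
}
```

Calling the helper on every CPU is then harmless when the bit is
already clear, but still catches an AP that has it set even though the
BSP did not.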

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 15:53:21 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186335-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186335: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 15:53:19 +0000

flight 186335 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186335/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b490f470f58d0b87653bfbee23c7e9342b049ab4
baseline version:
 xen                  401448f2d12f969956fac3ef3d29dc065430fd56

Last test of basis   186323  2024-06-12 13:02:39 Z    1 days
Testing same since   186335  2024-06-13 13:02:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jens Wiklander <jens.wiklander@linaro.org>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   401448f2d1..b490f470f5  b490f470f58d0b87653bfbee23c7e9342b049ab4 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:04:17 2024
Date: Thu, 13 Jun 2024 18:04:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
Message-ID: <ZmsYdA6uwR4nGEYp@macbook>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
 <ZmsNSUmum8mRxkCs@macbook>
 <4cb42dae-430b-4740-bddd-df5c4c783724@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4cb42dae-430b-4740-bddd-df5c4c783724@suse.com>

On Thu, Jun 13, 2024 at 05:53:02PM +0200, Jan Beulich wrote:
> On 13.06.2024 17:16, Roger Pau Monné wrote:
> > On Thu, Jun 13, 2024 at 10:19:30AM +0200, Jan Beulich wrote:
> >> @@ -315,16 +329,7 @@ static void cf_check early_init_intel(st
> >>  	    bootsym(trampoline_misc_enable_off) & MSR_IA32_MISC_ENABLE_XD_DISABLE)
> >>  		printk(KERN_INFO "re-enabled NX (Execute Disable) protection\n");
> >>  
> >> -	/* Unmask CPUID levels and NX if masked: */
> >> -	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
> >> -
> >> -	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
> >> -	if (disable) {
> >> -		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
> >> -		bootsym(trampoline_misc_enable_off) |= disable;
> >> -		printk(KERN_INFO "revised cpuid level: %d\n",
> >> -		       cpuid_eax(0));
> >> -	}
> >> +	intel_unlock_cpuid_leaves(c);
> > 
> > Do you really need to call intel_unlock_cpuid_leaves() here?
> > 
> > For the BSP it will be taken care of in early_cpu_init(), and for the APs
> > MSR_IA32_MISC_ENABLE will be set as part of the trampoline with the
> > disables from trampoline_misc_enable_off.
> 
> The way the original code works, it allows dealing with the BSP having the
> bit clear but some or all of the APs having it set. I simply didn't want
> to break that property.

Oh, I see.  This looks like something we would want to set unconditionally
in trampoline_misc_enable_off?  Except that would trigger an unnecessary
write to the MSR if the CPU already has the bit clear.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I think the printk could be made a printk_once, since it doesn't even
print the CPU where the bit has been seen set; but anyway, that would
be a further adjustment.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:12:58 2024
Message-ID: <4f518072-572e-469c-baed-349c34e7ddfc@suse.com>
Date: Thu, 13 Jun 2024 18:12:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
 <ZmsNSUmum8mRxkCs@macbook> <4cb42dae-430b-4740-bddd-df5c4c783724@suse.com>
 <ZmsYdA6uwR4nGEYp@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZmsYdA6uwR4nGEYp@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.06.2024 18:04, Roger Pau Monné wrote:
> On Thu, Jun 13, 2024 at 05:53:02PM +0200, Jan Beulich wrote:
>> On 13.06.2024 17:16, Roger Pau Monné wrote:
>>> On Thu, Jun 13, 2024 at 10:19:30AM +0200, Jan Beulich wrote:
>>>> @@ -315,16 +329,7 @@ static void cf_check early_init_intel(st
>>>>  	    bootsym(trampoline_misc_enable_off) & MSR_IA32_MISC_ENABLE_XD_DISABLE)
>>>>  		printk(KERN_INFO "re-enabled NX (Execute Disable) protection\n");
>>>>  
>>>> -	/* Unmask CPUID levels and NX if masked: */
>>>> -	rdmsrl(MSR_IA32_MISC_ENABLE, misc_enable);
>>>> -
>>>> -	disable = misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID;
>>>> -	if (disable) {
>>>> -		wrmsrl(MSR_IA32_MISC_ENABLE, misc_enable & ~disable);
>>>> -		bootsym(trampoline_misc_enable_off) |= disable;
>>>> -		printk(KERN_INFO "revised cpuid level: %d\n",
>>>> -		       cpuid_eax(0));
>>>> -	}
>>>> +	intel_unlock_cpuid_leaves(c);
>>>
>>> Do you really need to call intel_unlock_cpuid_leaves() here?
>>>
>>> For the BSP it will be taken care of in early_cpu_init(), and for the APs
>>> MSR_IA32_MISC_ENABLE will be set as part of the trampoline with the
>>> disables from trampoline_misc_enable_off.
>>
>> The way the original code works, it allows dealing with the BSP having the
>> bit clear but some or all of the APs having it set. I simply didn't want
>> to break that property.
> 
> Oh, I see.  This looks like something we would want to set unconditionally
> in trampoline_misc_enable_off?  Except that would trigger an unnecessary
> write to the MSR if the CPU already has the bit clear.

Might be an option; the extra MSR access may not be _that_ harmful.
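The trade-off in this exchange can be sketched as follows; a toy user-space
model with hypothetical names (the counter merely stands in for the cost of a
wrmsrl()), not the actual Xen code:

```c
#include <assert.h>
#include <stdint.h>

static unsigned int fake_msr_writes;  /* counts simulated wrmsrl() calls */

/*
 * Apply a disable mask such as trampoline_misc_enable_off to a (fake)
 * IA32_MISC_ENABLE value, skipping the MSR write when no bit would
 * change -- avoiding the "unnecessary write" mentioned above.
 */
static uint64_t apply_disables(uint64_t misc_enable, uint64_t off_mask)
{
    if (misc_enable & off_mask) {
        misc_enable &= ~off_mask;
        fake_msr_writes++;            /* simulated wrmsrl() */
    }
    return misc_enable;
}
```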

> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I think the printk could be made a printk_once, since it doesn't even
> print the CPU where the bit has been seen set; but anyway, that would
> be a further adjustment.

Well, yes and no. Having it the way it is now and seeing the message twice
in a log would be indicative of a problem.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:17:37 2024
Message-ID: <f51b2240-03da-4aee-8972-a72b53916ce1@citrix.com>
Date: Thu, 13 Jun 2024 17:17:30 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13/06/2024 9:19 am, Jan Beulich wrote:
> Intel CPUs have a MSR bit to limit CPUID enumeration to leaf two. If
> this bit is set by the BIOS then CPUID evaluation does not work when
> data from any leaf greater than two is needed; early_cpu_init() in
> particular wants to collect leaf 7 data.
>
> Cure this by unlocking CPUID right before evaluating anything which
> depends on the maximum CPUID leaf being greater than two.
>
> Inspired by (and description cloned from) Linux commit 0c2f6d04619e
> ("x86/topology/intel: Unlock CPUID before evaluating anything").
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> While I couldn't spot anything, it kind of feels as if I'm overlooking
> further places where we might be inspecting in particular leaf 7 yet
> earlier.
>
> No Fixes: tag(s), as imo it would be too many that would want
> enumerating.

I also saw that go by, but concluded that Xen doesn't need it, which is
why I left it alone.

The truth is that only the BSP needs it.  APs sort it out in the
trampoline via trampoline_misc_enable_off, because they need to clear
XD_DISABLE prior to enabling paging, so we should be taking it out of
early_init_intel().
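The AP path described above can be modelled as below. This is a hedged sketch
(illustrative bit positions and function name, not the real trampoline code),
showing why the disables recorded by the BSP suffice for the APs:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative positions; see the Intel SDM for IA32_MISC_ENABLE (0x1a0). */
#define MISC_ENABLE_LIMIT_CPUID (1ULL << 22)
#define MISC_ENABLE_XD_DISABLE  (1ULL << 34)

/*
 * Model of the AP trampoline step: clear every bit the BSP recorded in
 * trampoline_misc_enable_off.  XD_DISABLE in particular must be cleared
 * before paging is enabled, as NX page-table entries would otherwise
 * fault while the bit is set.
 */
static uint64_t ap_apply_trampoline_off(uint64_t misc_enable,
                                        uint64_t trampoline_off)
{
    return misc_enable & ~trampoline_off;
}
```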

But we don't have an early BSP-only hook, and I'm not overwhelmed at
the idea of exporting it from intel.c.

I was intending to leave it alone until I can burn this whole
infrastructure to the ground and make it work nicely with policies, but
that's not a job for this point in the release...

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:31:25 2024
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 585bbdf1-29a2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1718296273; x=1749832273;
  h=message-id:date:mime-version:subject:to:cc:references:
   from:in-reply-to:content-transfer-encoding;
  bh=EOs3UZ4vIwDq2rjD+Isyyo91v0ijB9Y4SztO4w/VQ7o=;
  b=ZAQsFGnIlhyaLsjRJdf0+2rxSKbN8W1kIs6fvChL6KQpqq1MKAm80QIR
   ryvQ/8qphBWyKCLE0gVxkCXPwDHseL8prARG4iXhskbOnE0fGdxFj6Vlo
   rFNktWqv3RVu3T9EXYW09yUyOmUfN27bZhXTO5xo4iyzlnNuRc64PW5Rr
   s=;
X-IronPort-AV: E=Sophos;i="6.08,235,1712620800"; 
   d="scan'208";a="4786701"
X-Farcaster-Flow-ID: 8100c6ab-c6a2-4d88-8eb8-fc882871ded1
Message-ID: <1ad9ccce-c02b-46c0-8fea-10b35b574cb8@amazon.com>
Date: Thu, 13 Jun 2024 17:31:01 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH V3 (resend) 01/19] x86: Create per-domain mapping of
 guest_root_pt
To: Jan Beulich <jbeulich@suse.com>
CC: <julien@xen.org>, <pdurrant@amazon.com>, <dwmw@amazon.com>, Hongyan Xia
	<hongyxia@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Julien Grall
	<jgrall@amazon.com>, <xen-devel@lists.xenproject.org>
References: <20240513134046.82605-1-eliasely@amazon.com>
 <20240513134046.82605-2-eliasely@amazon.com>
 <dd145c67-8e3e-4b15-94f7-c7cd1f127d45@suse.com>
 <bda3386e-26c5-4efd-b7ad-00f3643523fa@amazon.com>
 <b50d0a83-fab4-4f59-bf4d-5c5593923f34@suse.com>
Content-Language: en-US
From: Elias El Yandouzi <eliasely@amazon.com>
In-Reply-To: <b50d0a83-fab4-4f59-bf4d-5c5593923f34@suse.com>
Content-Type: text/plain; charset="UTF-8"; format=flowed
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.95.132.234]
X-ClientProxiedBy: EX19D039UWA004.ant.amazon.com (10.13.139.68) To
 EX19D018EUA002.ant.amazon.com (10.252.50.146)

Hi Jan,

On 16/05/2024 08:17, Jan Beulich wrote:
> On 15.05.2024 20:25, Elias El Yandouzi wrote:
>> However, I noticed quite a weird bug while doing some testing. I may
>> need your expertise to find the root cause.
> 
> Looks like you've overflowed the dom0 kernel stack, most likely because
> of recurring nested exceptions.
> 
>> In the case where I have more vCPUs than pCPUs (and let's consider we
>> have one pCPU for two vCPUs), I noticed that I would always get a page
>> fault in dom0 kernel (5.10.0-13-amd64) at the exact same location. I did
>> a bit of investigation but I couldn't come to a clear conclusion.
>> Looking at the stack trace [1], I have the feeling the crash occurs in a
>> loop or a recursive call.
>>
>> I tried to identify where the crash occurred using addr2line:
>>
>>   > addr2line -e vmlinux-5.10.0-29-amd64 0xffffffff810218a0
>> debian/build/build_amd64_none_amd64/arch/x86/xen/mmu_pv.c:880
>>
>> It turns out to point on the closing bracket of the function
>> xen_mm_unpin_all()[2].
>>
>> I thought the crash could happen while returning from the function in
>> the assembly epilogue but the output of objdump doesn't even show the
>> address.
>>
>> The only theory I could think of was that because we only have one pCPU,
>> we may never execute one of the two vCPUs, and never setup the mapping
>> to the guest_root_pt in write_ptbase(), hence the page fault. This is
>> just a random theory, I couldn't find any hint suggesting it would be
>> the case though. Any idea how I could debug this?
> 
> I guess you want to instrument Xen enough to catch the top level fault (or
> the 2nd from top, depending on where the nesting actually starts) to see
> why that happens. Quite likely some guest mapping isn't set up properly.
> 

Julien helped me with this one and I believe we have identified the 
problem.

As you suggested, I wrote the mapping of the guest root PT into our 
per-domain section, root_pt_l1tab, within the write_ptbase() function, 
since we'd always be in the case v == current, and switch_cr3_cr4() 
would always flush the local TLB.

However, there is a path, in toggle_guest_mode(), where we can call 
update_cr3()/make_cr3() without calling write_ptbase(), and hence the 
mappings aren't maintained properly. Instead, toggle_guest_mode() has a 
partly open-coded version of write_ptbase().

Would you rather see the mappings written in make_cr3(), or in 
toggle_guest_mode() within its partly open-coded version of write_ptbase()?

Elias



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:58:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 16:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740137.1147168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm8-0001K1-UM; Thu, 13 Jun 2024 16:58:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740137.1147168; Thu, 13 Jun 2024 16:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm8-0001Jo-R9; Thu, 13 Jun 2024 16:58:12 +0000
Received: by outflank-mailman (input) for mailman id 740137;
 Thu, 13 Jun 2024 16:58:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHnm7-00013v-BK
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 16:58:11 +0000
Received: from mail-ot1-x32f.google.com (mail-ot1-x32f.google.com
 [2607:f8b0:4864:20::32f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1cff2ad8-29a6-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 18:58:10 +0200 (CEST)
Received: by mail-ot1-x32f.google.com with SMTP id
 46e09a7af769-6f97a4c4588so698040a34.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 09:58:10 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abc037desm66960085a.97.2024.06.13.09.58.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 09:58:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cff2ad8-29a6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718297889; x=1718902689; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2OqTsWR9SuIo4uzvIZP/P5NOenHh2rQIF4jI5UgbX64=;
        b=DNVthAALlq9MSTc7XvwOPslvo1Y+ykqZ3BCSR8Wsb5lLhoP0qsVyCKl+oXwf3u6uhR
         MR1K6R/QO4a7rr7mB0K4jGeSKHUPxPzpBMyWml+2pAJTIPig8sQwP40mc3F+FYKPc6/R
         UwORBkvFyuyx91WQoviuKI1wCb5H+Ta2oZrtk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718297889; x=1718902689;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=2OqTsWR9SuIo4uzvIZP/P5NOenHh2rQIF4jI5UgbX64=;
        b=lNYsWJTrEJ3elRLB9bza0EmkN5lHX35KpAT/uLX/dR7p/I/YWbvqNWmUrB5psaa1m2
         dS3BYnV7BAHGuGfs30wCRV7EoOIRi8PyAJ5zwsQa5p6Twvox74ysIfL8GTSaPL7aqoJM
         bL+VV366RAHIczHDHL803yn4FzgQ/IMH81yrbzbHAIXkdNh3pIxToRnmi7gPvd2XZjSi
         eJ8I6OAyKa7wcWrgJzzx0evhRSoRetlqw5dgnO5Ur1+D+JU4DyrGOMUUc91O/QKD/G0N
         kkyuV5MQ+kpKQFRRRyyh1l1xARTb5tB1ln9Didp4ve/PLyHA2WNPDVmQUDlNqnzS4yAm
         VL7w==
X-Gm-Message-State: AOJu0YxfYnf3N2MKBhLlI4JlwhmPAV2VsKwshIfqE4Y4jeFTbpnxLXIl
	cm+/wkHruX/GFPO9nOINu5rg9bkXnEUOiafvOkc8uuqplxbGhjjvi9nsGWBpZWUs4lLgXGDQyQJ
	w
X-Google-Smtp-Source: AGHT+IH2m9tIxZZ8OzShQsKDG0eFpEjdz4EKdEJZcMSsChf60s3+TVzqkR3vmgdqdLizA1ele7eBXw==
X-Received: by 2002:a05:6830:4429:b0:6f9:74db:5dc4 with SMTP id 46e09a7af769-6fb9376dcdamr409423a34.14.1718297888900;
        Thu, 13 Jun 2024 09:58:08 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v3 3/3] x86/irq: forward pending interrupts to new destination in fixup_irqs()
Date: Thu, 13 Jun 2024 18:56:17 +0200
Message-ID: <20240613165617.42538-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240613165617.42538-1-roger.pau@citrix.com>
References: <20240613165617.42538-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

fixup_irqs() is used to evacuate interrupts from CPUs that are about to be
offlined.  Given the CPU is about to go offline, Xen's normal migration
logic, where the vector on the previous target(s) is left configured until
the interrupt is received on the new destination, is not suitable.

Instead, attempt to do as much as possible in order to prevent losing
interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
currently the case), attempt to forward pending vectors when interrupts that
target the current CPU are migrated to a different destination.

Additionally, for interrupts that have already been moved from the current CPU
prior to the call to fixup_irqs() but that haven't been delivered to the new
destination (iow: interrupts with move_in_progress set and the current CPU set
in ->arch.old_cpu_mask) also check whether the previous vector is pending and
forward it to the new destination.

This allows us to remove the window with interrupts enabled at the bottom of
fixup_irqs().  Such a window wasn't safe anyway: references to the CPU to be
offlined are removed from interrupt masks, but the per-CPU vector_irq[] array
is not updated to reflect those changes (as the CPU is going offline anyway).

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Remove interrupt enabled window from fixup_irqs().
 - Adjust comments and commit message.

Changes since v1:
 - Rename to apic_irr_read().
---
 xen/arch/x86/include/asm/apic.h |  5 ++++
 xen/arch/x86/irq.c              | 42 ++++++++++++++++++++++++++++-----
 2 files changed, 41 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
index d1cb001fb4ab..7bd66dc6e151 100644
--- a/xen/arch/x86/include/asm/apic.h
+++ b/xen/arch/x86/include/asm/apic.h
@@ -132,6 +132,11 @@ static inline bool apic_isr_read(uint8_t vector)
             (vector & 0x1f)) & 1;
 }
 
+static inline bool apic_irr_read(unsigned int vector)
+{
+    return apic_read(APIC_IRR + (vector / 32 * 0x10)) & (1U << (vector % 32));
+}
+
 static inline u32 get_apic_id(void)
 {
     u32 id = apic_read(APIC_ID);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index f36962fc1dc3..a2b04c793292 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2593,7 +2593,7 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
-        bool break_affinity = false, set_affinity = true;
+        bool break_affinity = false, set_affinity = true, check_irr = false;
         unsigned int vector, cpu = smp_processor_id();
         cpumask_t *affinity = this_cpu(scratch_cpumask);
 
@@ -2646,6 +2646,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
              !cpu_online(cpu) &&
              cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
         {
+            /*
+             * This to be offlined CPU was the target of an interrupt that's
+             * been moved, and the new destination target hasn't yet
+             * acknowledged any interrupt from it.
+             *
+             * We know the interrupt is configured to target the new CPU at
+             * this point, so we can check IRR for any pending vectors and
+             * forward them to the new destination.
+             *
+             * Note that for the other case of an interrupt movement being in
+             * progress (move_cleanup_count being non-zero) we know the new
+             * destination has already acked at least one interrupt from this
+             * source, and hence there's no need to forward any stale
+             * interrupts.
+             */
+            if ( apic_irr_read(desc->arch.old_vector) )
+                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                              desc->arch.vector);
+
             /*
              * This CPU is going offline, remove it from ->arch.old_cpu_mask
              * and possibly release the old vector if the old mask becomes
@@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
         if ( desc->handler->disable )
             desc->handler->disable(desc);
 
+        /*
+         * If the current CPU is going offline and is (one of) the target(s) of
+         * the interrupt, signal to check whether there are any pending vectors
+         * to be handled in the local APIC after the interrupt has been moved.
+         */
+        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
+            check_irr = true;
+
         if ( desc->handler->set_affinity )
             desc->handler->set_affinity(desc, affinity);
         else if ( !(warned++) )
             set_affinity = false;
 
+        if ( check_irr && apic_irr_read(vector) )
+            /*
+             * Forward pending interrupt to the new destination, this CPU is
+             * going offline and otherwise the interrupt would be lost.
+             */
+            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                          desc->arch.vector);
+
         if ( desc->handler->enable )
             desc->handler->enable(desc);
 
@@ -2707,11 +2742,6 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
             printk("Broke affinity for IRQ%u, new: {%*pbl}\n",
                    irq, CPUMASK_PR(affinity));
     }
-
-    /* That doesn't seem sufficient.  Give it 1ms. */
-    local_irq_enable();
-    mdelay(1);
-    local_irq_disable();
 }
 
 void fixup_eoi(void)
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:58:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 16:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740135.1147147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm2-0000pp-Ev; Thu, 13 Jun 2024 16:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740135.1147147; Thu, 13 Jun 2024 16:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm2-0000pi-CN; Thu, 13 Jun 2024 16:58:06 +0000
Received: by outflank-mailman (input) for mailman id 740135;
 Thu, 13 Jun 2024 16:58:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHnm2-0000pc-0P
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 16:58:06 +0000
Received: from mail-qk1-x72c.google.com (mail-qk1-x72c.google.com
 [2607:f8b0:4864:20::72c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18f14291-29a6-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 18:58:03 +0200 (CEST)
Received: by mail-qk1-x72c.google.com with SMTP id
 af79cd13be357-79767180a15so82812585a.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 09:58:03 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798aaecbe5esm67202485a.30.2024.06.13.09.58.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 09:58:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18f14291-29a6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718297882; x=1718902682; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=6JVEr9JpY61teVKNr86SoOyf1LAKqFW45HZt709v3dc=;
        b=OLc38gPEx7IX1LfjRLrNUA22tVkanguCpRyVOlvK8+h+nF0f1V1QWDY/QlCgLrLmza
         crVGHWAn/Uq11XvdbXHYQsyHqmO/3Xp6N8+Ix8x+wfz46GlZx9NY4lhAX1Qrj8H33I71
         oCnAluvYoB7TOmmGZkDUw3Q9KhTH+hFFSp7O4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718297882; x=1718902682;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=6JVEr9JpY61teVKNr86SoOyf1LAKqFW45HZt709v3dc=;
        b=ZdV1R2t+KKyzfAlk2RCxICCuVIer0Q6vIkWHQ4zd0gmJVKPqE1Pr881sKFe3Xd3wui
         YgR+9NjPfsoq+k1gNMEkTlJk1bywrR1r+X9ROizXKrV42jcC1jPNI90goelpokPyKih5
         Z17YpDdiwIHipFRK0Th0sdWZespoqbyhGuTRwFF2HR6B9J6TZkv6UHQ0XKX6f6+JLD3R
         Ea91Mp78RkZnz6Ovo0drsMm7mdIPQh6JAAra8bb6t1lfXkc1IsfnKDkxS0mNYXW+3V+w
         IVUIlnM6nWrix/S5w7calGft0B8LtD/y/SZ6OhrS79DoVoFxL0LrfutTxHQJ2XINz3eO
         rKTw==
X-Gm-Message-State: AOJu0YysuvApRdkIjAXvUYEF4TmKNdcYtpwmQXVDpXY5K94CwYvony06
	4pXIrHF6FDpduMEehbh78Q0tYKpGGWmhEBXrTL/WmMGtRH7el0ym+V3tEG7Nm8DxOD1I7gtX/+Y
	P
X-Google-Smtp-Source: AGHT+IHpSs4+6mXHWJLYaAvCpygKpSrq7mkyuLwbAv5zOAoJN6bzOMi7+iYp0Mu/I7XEm/gvGdPlFA==
X-Received: by 2002:a05:620a:4493:b0:795:47c5:59b1 with SMTP id af79cd13be357-798d23f9f04mr11876785a.7.1718297882206;
        Thu, 13 Jun 2024 09:58:02 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v3 0/3] x86/irq: fixes for CPU hot{,un}plug
Date: Thu, 13 Jun 2024 18:56:14 +0200
Message-ID: <20240613165617.42538-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.45.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hello,

The following series aims to fix interrupt handling when doing CPU
plug/unplug operations.  Without this series, running:

cpus=`xl info max_cpu_id`
while [ 1 ]; do
    for i in `seq 1 $cpus`; do
        xen-hptool cpu-offline $i;
        xen-hptool cpu-online $i;
    done
done

Quite quickly results in interrupts getting lost and "No irq handler for
vector" messages on the Xen console.  Drivers in dom0 also start getting
interrupt timeouts and the system becomes unusable.

After applying the series, running the loop overnight still results in a
fully usable system: no "No irq handler for vector" messages at all, and no
interrupt losses reported by dom0.  Tested with x2apic-mode={mixed,cluster}.

I've attempted to document the code as well as I could; interrupt
handling has some unexpected corner cases that are hard to diagnose and
reason about.

Some XenRT testing is underway to ensure no breakages.

Thanks, Roger.

Roger Pau Monne (3):
  x86/irq: deal with old_cpu_mask for interrupts in movement in
    fixup_irqs()
  x86/irq: handle moving interrupts in _assign_irq_vector()
  x86/irq: forward pending interrupts to new destination in fixup_irqs()

 xen/arch/x86/include/asm/apic.h |   5 +
 xen/arch/x86/irq.c              | 163 +++++++++++++++++++++++++-------
 2 files changed, 132 insertions(+), 36 deletions(-)

-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:58:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 16:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740136.1147158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm5-00014G-MT; Thu, 13 Jun 2024 16:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740136.1147158; Thu, 13 Jun 2024 16:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm5-000149-Jo; Thu, 13 Jun 2024 16:58:09 +0000
Received: by outflank-mailman (input) for mailman id 740136;
 Thu, 13 Jun 2024 16:58:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHnm4-00013v-Ei
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 16:58:08 +0000
Received: from mail-qv1-xf31.google.com (mail-qv1-xf31.google.com
 [2607:f8b0:4864:20::f31])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1a4f6571-29a6-11ef-90a3-e314d9c70b13;
 Thu, 13 Jun 2024 18:58:07 +0200 (CEST)
Received: by mail-qv1-xf31.google.com with SMTP id
 6a1803df08f44-6b072522bd5so6230486d6.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 09:58:07 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5efb07csm8501036d6.134.2024.06.13.09.58.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 09:58:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a4f6571-29a6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718297884; x=1718902684; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PC76GIkUGU8UlLIXtJNN8BuZtt7P1TDX2gYZkFadrsY=;
        b=Q3HxLKZGA8/LB9HR7L5aozT4/ibGj8IBRCVM4dXwQS3Xx6wocgDkCDXCcvknPtJGnk
         TA4Y1kCc/Oq+YwgA0NaMHqK5PAgTQBlvQyIXm+HDEHOA5F/PxJivpu83CN7tJOuSVQpp
         yhyufF23TYpXHE5TuQgflSa4X01/v6VlQyv3U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718297884; x=1718902684;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=PC76GIkUGU8UlLIXtJNN8BuZtt7P1TDX2gYZkFadrsY=;
        b=URHiF41Hvs6fpDzEMuxk+qKi2eAurp+KzSsgfpYbRCzfhdi3F6DM9Ay//p7byVxn4P
         Ol0ep40URTC/iMTGTHKNt7qwLgIS+3hdMdvjNNj9JpzW2iG9USkMCgqvPDUHP6P1K8Qm
         Mv6JGfHAvRe8liVCejzeigvUeJk5sHiYc9uZafSvkUM/0B+rb4f4FEnjN8vCeDvZh9Y8
         ntmLMGFXmfymt4iA/qsq3UjJZQ/zHfIDRjRxzeawCcSifeacdH4y3G6NrsMM247LNBtQ
         DJ8T6WT6481KulXAe2ivhId1kwV2mhh5oHT9YMp6hP3LqvNmfRQ0TRiOo53EfSod+plI
         G79A==
X-Gm-Message-State: AOJu0Yw6kg5BShd5ECmrgmEJFoSFm+ihdVsCq3axhv2dFcWI/2sNGrsA
	tWk90kb8ovdcNuO7A6teJP7xoPm5XiC7eGaH1H/gcnhsWmX1qkxVeAQPHzJ6jFw+uEz1ZHqXlkd
	d
X-Google-Smtp-Source: AGHT+IGul1+xQlyP5wng89N5Wm066PxOGU65bM5Nel+PzvhHTqNKFIMShWyiLZjCbJahOiflP3+Pzg==
X-Received: by 2002:a05:6214:943:b0:6b0:91d4:5825 with SMTP id 6a1803df08f44-6b2afd958d0mr1142786d6.56.1718297884533;
        Thu, 13 Jun 2024 09:58:04 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v3 1/3] x86/irq: deal with old_cpu_mask for interrupts in movement in fixup_irqs()
Date: Thu, 13 Jun 2024 18:56:15 +0200
Message-ID: <20240613165617.42538-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240613165617.42538-1-roger.pau@citrix.com>
References: <20240613165617.42538-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Given the current logic it's possible for ->arch.old_cpu_mask to get out of
sync: if a CPU set in old_cpu_mask is offlined and then onlined again
without old_cpu_mask having been updated, the data in the mask will no
longer be accurate, as when brought back online the CPU will no longer have
old_vector configured to handle the old interrupt source.

If there's an interrupt movement in progress, and the to-be-offlined CPU
(which is the call context) is set in old_cpu_mask, clear it from the mask
so the mask doesn't contain stale data.

Note that when the system is going down fixup_irqs() will be called by
smp_send_stop() from CPU 0 with a mask containing only CPU 0, effectively
asking to move all interrupts to the current caller (CPU 0), which is the
only CPU to remain online.  In that case we don't bother migrating
interrupts that are in the process of being moved, as we likely won't be
able to move all interrupts to CPU 0 due to vector shortage anyway.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Adjust commit message.
 - Add comment about excluding smp_send_stop() case.
 - Use cpu_online().
---
 xen/arch/x86/irq.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 263e502bc0f6..d305aed317f2 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2526,7 +2526,7 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
         bool break_affinity = false, set_affinity = true;
-        unsigned int vector;
+        unsigned int vector, cpu = smp_processor_id();
         cpumask_t *affinity = this_cpu(scratch_cpumask);
 
         if ( irq == 2 )
@@ -2569,6 +2569,33 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
                                affinity);
         }
 
+        if ( desc->arch.move_in_progress &&
+             /*
+              * Only attempt to adjust the mask if the current CPU is going
+              * offline, otherwise the whole system is going down and leaving
+              * stale data in the masks is fine.
+              */
+             !cpu_online(cpu) &&
+             cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
+        {
+            /*
+             * This CPU is going offline, remove it from ->arch.old_cpu_mask
+             * and possibly release the old vector if the old mask becomes
+             * empty.
+             *
+             * Note cleaning ->arch.old_cpu_mask is required if the CPU is
+             * brought offline and then online again, as when re-onlined the
+             * per-cpu vector table will no longer have ->arch.old_vector
+             * setup, and hence ->arch.old_cpu_mask would be stale.
+             */
+            cpumask_clear_cpu(cpu, desc->arch.old_cpu_mask);
+            if ( cpumask_empty(desc->arch.old_cpu_mask) )
+            {
+                desc->arch.move_in_progress = 0;
+                release_old_vec(desc);
+            }
+        }
+
         /*
          * Avoid shuffling the interrupt around as long as current target CPUs
          * are a subset of the input mask.  What fixup_irqs() cares about is
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 16:58:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 16:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740138.1147173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm9-0001Mh-7d; Thu, 13 Jun 2024 16:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740138.1147173; Thu, 13 Jun 2024 16:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHnm9-0001Lv-2Z; Thu, 13 Jun 2024 16:58:13 +0000
Received: by outflank-mailman (input) for mailman id 740138;
 Thu, 13 Jun 2024 16:58:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9y96=NP=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sHnm7-0000pc-Jx
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 16:58:11 +0000
Received: from mail-qk1-x729.google.com (mail-qk1-x729.google.com
 [2607:f8b0:4864:20::729])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c37be52-29a6-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 18:58:10 +0200 (CEST)
Received: by mail-qk1-x729.google.com with SMTP id
 af79cd13be357-795482e114cso71841885a.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 09:58:09 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-441f2ff7fe5sm7757661cf.79.2024.06.13.09.58.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 Jun 2024 09:58:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c37be52-29a6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718297887; x=1718902687; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OY8rwLUUYs4x7J36qwiR2SVYEkIjFJPbuqF3KJ+AInE=;
        b=RDU9WeiA5fuNlpa4SqrO8yc3nDLNMKUodv0znPy0ThqyG1booQET3h4YqCSuz3q5u0
         TO/1VRCS8cxC9C/hilVoG2IJpZfZf8UZcMBIZwPgc4eQ1npCUFuJ1+f6rxG1BE3TyWsD
         idksdIKneg2znpGWgu44GzqODO2w5ndoOe3CI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718297887; x=1718902687;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=OY8rwLUUYs4x7J36qwiR2SVYEkIjFJPbuqF3KJ+AInE=;
        b=EZxb5G/3u6iLuLizPWeLelRn3llcvYMJHQOZtMJtx4zXREv10NW7q39g5BgzM+G0/D
         s+VOWtoUOwS/KVkh7KoyY3lllom8YW8qC4/9gDUaWB2KA9IddAbFKzmN1lX9ne3OLr+N
         x/4OAPVXgQWzLKD6iNmo7gMHY1s2sqZ9//VymKESUzeEHkIylY4pqmE+JfiVpqyIb855
         Y8IFMmaJxuheQ+0F9qPTVju9QdAWRHQo0nd11+cO4bWgYPkLUY4qaqWV6th+dE01vAAm
         l5nP3s/XN0zY9gZC9ZxgqeNOtzJsFkXYaeYNLzAr9av/kZh3d5oQsixKkTwjcjwveYjZ
         00rQ==
X-Gm-Message-State: AOJu0Yxec2bo/d3ErVBPCIjfJcs/8zsEfqqY9GLml7pGCx7WbVIKpevg
	ykzmQNeZtxspPKX6NdxOQ/qwVqRxq2pK148gwRSTRM+KNNts9InPIvkEK9lab9yu+Wd3E8yft52
	e
X-Google-Smtp-Source: AGHT+IGZ1Mvm9118ZVFAEgnpnOMJEyewmbOC5FpeLdNjo2w51nZeduRXnb3h8W3COXmtzSe3y0e2eQ==
X-Received: by 2002:ad4:4b6d:0:b0:6b0:7b39:3c6d with SMTP id 6a1803df08f44-6b2afd8150bmr1035656d6.52.1718297886771;
        Thu, 13 Jun 2024 09:58:06 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v3 2/3] x86/irq: handle moving interrupts in _assign_irq_vector()
Date: Thu, 13 Jun 2024 18:56:16 +0200
Message-ID: <20240613165617.42538-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <20240613165617.42538-1-roger.pau@citrix.com>
References: <20240613165617.42538-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Currently there's logic in fixup_irqs() that attempts to prevent
_assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
interrupts from the CPUs not present in the input mask.  The current logic in
fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.

Instead of attempting to fix up the interrupt descriptor in fixup_irqs() so that
_assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector() itself
to deal with interrupts that have either of move_{in_progress,cleanup_count} set
and no remaining online CPUs in ->arch.cpu_mask.

If _assign_irq_vector() is requested to move an interrupt in the state
described above, first attempt to see if ->arch.old_cpu_mask contains any valid
CPUs that could be used as fallback, and if that's the case do move the
interrupt back to the previous destination.  Note this is easier because the
vector hasn't been released yet, so there's no need to allocate and setup a new
vector on the destination.

Due to the logic in fixup_irqs() that clears offline CPUs from
->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
shouldn't be possible to get into _assign_irq_vector() with
->arch.move_{in_progress,cleanup_count} set but no online CPUs in
->arch.old_cpu_mask.

However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
also changed affinity, it's possible that the members of ->arch.old_cpu_mask are
no longer part of the affinity set.  In that case move the interrupt to a
different CPU that is part of the provided mask, and keep the current
->arch.old_{cpu_mask,vector} so the pending interrupt movement can be
completed.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Adjust comments.
 - Clean old vector from used_vectors mask.

Changes since v1:
 - Further refine the logic in _assign_irq_vector().
---
 xen/arch/x86/irq.c | 99 ++++++++++++++++++++++++++++++++--------------
 1 file changed, 70 insertions(+), 29 deletions(-)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index d305aed317f2..f36962fc1dc3 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -544,7 +544,58 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
     }
 
     if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
-        return -EAGAIN;
+    {
+        /*
+         * If the current destination is online refuse to shuffle.  Retry after
+         * the in-progress movement has finished.
+         */
+        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
+            return -EAGAIN;
+
+        /*
+         * Due to the logic in fixup_irqs() that clears offlined CPUs from
+         * ->arch.old_cpu_mask it shouldn't be possible to get here with
+         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
+         * ->arch.old_cpu_mask.
+         */
+        ASSERT(valid_irq_vector(desc->arch.old_vector));
+        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
+
+        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
+        {
+            /*
+             * Fallback to the old destination if moving is in progress and the
+             * current destination is to be offlined.  This is only possible if
+             * the CPUs in old_cpu_mask intersect with the affinity mask passed
+             * in the 'mask' parameter.
+             */
+            desc->arch.vector = desc->arch.old_vector;
+            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
+
+            /* Undo any possibly done cleanup. */
+            for_each_cpu(cpu, desc->arch.cpu_mask)
+                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
+
+            /* Cancel the pending move and release the current vector. */
+            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
+            cpumask_clear(desc->arch.old_cpu_mask);
+            desc->arch.move_in_progress = 0;
+            desc->arch.move_cleanup_count = 0;
+            if ( desc->arch.used_vectors )
+            {
+                ASSERT(test_bit(old_vector, desc->arch.used_vectors));
+                clear_bit(old_vector, desc->arch.used_vectors);
+            }
+
+            return 0;
+        }
+
+        /*
+         * There's an interrupt movement in progress but the destination(s) in
+         * ->arch.old_cpu_mask are not suitable given the 'mask' parameter, go
+         * through the full logic to find a new vector in a suitable CPU.
+         */
+    }
 
     err = -ENOSPC;
 
@@ -600,7 +651,24 @@ next:
         current_vector = vector;
         current_offset = offset;
 
-        if ( valid_irq_vector(old_vector) )
+        if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
+        {
+            ASSERT(!cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map));
+            /*
+             * Special case when evacuating an interrupt from a CPU to be
+             * offlined and the interrupt was already in the process of being
+             * moved.  Leave ->arch.old_{vector,cpu_mask} as-is and just
+             * replace ->arch.{cpu_mask,vector} with the new destination.
+             * Cleanup will be done normally for the old fields, just release
+             * the current vector here.
+             */
+            if ( desc->arch.used_vectors )
+            {
+                ASSERT(test_bit(old_vector, desc->arch.used_vectors));
+                clear_bit(old_vector, desc->arch.used_vectors);
+            }
+        }
+        else if ( valid_irq_vector(old_vector) )
         {
             cpumask_and(desc->arch.old_cpu_mask, desc->arch.cpu_mask,
                         &cpu_online_map);
@@ -2607,33 +2675,6 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
             continue;
         }
 
-        /*
-         * In order for the affinity adjustment below to be successful, we
-         * need _assign_irq_vector() to succeed. This in particular means
-         * clearing desc->arch.move_in_progress if this would otherwise
-         * prevent the function from succeeding. Since there's no way for the
-         * flag to get cleared anymore when there's no possible destination
-         * left (the only possibility then would be the IRQs enabled window
-         * after this loop), there's then also no race with us doing it here.
-         *
-         * Therefore the logic here and there need to remain in sync.
-         */
-        if ( desc->arch.move_in_progress &&
-             !cpumask_intersects(mask, desc->arch.cpu_mask) )
-        {
-            unsigned int cpu;
-
-            cpumask_and(affinity, desc->arch.old_cpu_mask, &cpu_online_map);
-
-            spin_lock(&vector_lock);
-            for_each_cpu(cpu, affinity)
-                per_cpu(vector_irq, cpu)[desc->arch.old_vector] = ~irq;
-            spin_unlock(&vector_lock);
-
-            release_old_vec(desc);
-            desc->arch.move_in_progress = 0;
-        }
-
         if ( !cpumask_intersects(mask, desc->affinity) )
         {
             break_affinity = true;
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 18:19:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 18:19:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740172.1147187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHp2f-0006RM-R1; Thu, 13 Jun 2024 18:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740172.1147187; Thu, 13 Jun 2024 18:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHp2f-0006RF-OK; Thu, 13 Jun 2024 18:19:21 +0000
Received: by outflank-mailman (input) for mailman id 740172;
 Thu, 13 Jun 2024 18:19:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHp2e-0006R0-N6; Thu, 13 Jun 2024 18:19:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHp2e-0007HE-BT; Thu, 13 Jun 2024 18:19:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHp2d-00010Q-RU; Thu, 13 Jun 2024 18:19:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHp2d-0004mW-R7; Thu, 13 Jun 2024 18:19:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SqqJO0muW4uiSAvfMl67iOu2FjZorNj8PyVAn96iRc0=; b=NkON85QsauEl3azy98CwbvEEqQ
	aYsZ1hbhLKrl1T5FXhrMMweVQ4FWYc/tXiLfjgepMpgb1ApK7es3NwvVBux3mez9iuktVeYl7p6Qv
	rXrmz39guwohDM9TYvSVQyydUeSqHVOhD9zBQSjAFMEerATpXqj3arM1GoIa/Ew/tKzg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186338-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186338: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=712797cf19acd292bf203522a79e40e7e13d268b
X-Osstest-Versions-That:
    ovmf=d3b32dca06b987d7214637f3952c2ce1ce69f308
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 18:19:19 +0000

flight 186338 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186338/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 712797cf19acd292bf203522a79e40e7e13d268b
baseline version:
 ovmf                 d3b32dca06b987d7214637f3952c2ce1ce69f308

Last test of basis   186318  2024-06-12 07:41:12 Z    1 days
Testing same since   186338  2024-06-13 16:11:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>
  Pedro Falcato <pedro.falcato@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d3b32dca06..712797cf19  712797cf19acd292bf203522a79e40e7e13d268b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 18:43:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 18:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740183.1147197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHpPj-0001qK-Nc; Thu, 13 Jun 2024 18:43:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740183.1147197; Thu, 13 Jun 2024 18:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHpPj-0001qD-Kf; Thu, 13 Jun 2024 18:43:11 +0000
Received: by outflank-mailman (input) for mailman id 740183;
 Thu, 13 Jun 2024 18:43:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=efIH=NP=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sHpPh-0001q7-DL
 for xen-devel@lists.xenproject.org; Thu, 13 Jun 2024 18:43:09 +0000
Received: from fout2-smtp.messagingengine.com (fout2-smtp.messagingengine.com
 [103.168.172.145]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c521ca8f-29b4-11ef-b4bb-af5377834399;
 Thu, 13 Jun 2024 20:43:05 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailfout.nyi.internal (Postfix) with ESMTP id 77E1713800C3;
 Thu, 13 Jun 2024 14:43:04 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute2.internal (MEProxy); Thu, 13 Jun 2024 14:43:04 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Thu,
 13 Jun 2024 14:43:03 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c521ca8f-29b4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:message-id:mime-version:reply-to
	:subject:subject:to:to; s=fm1; t=1718304184; x=1718390584; bh=qW
	oUrAUWWgJn8eusZ6A5J29EQ7L6X0sMAPMummShSfY=; b=rNzD6g0o9huEoCLXWl
	nz5Vpl4BKz3pOznlI6sBsr7Cta1R9pcsbGnXBLX4fae9DOvXIG910XWiofk+Eaex
	pNHx9EwWFQquJBU9QDXgc4Mfe1dWibzxolfRrnFP+MsLwVNUjdbuDyi9IjbORunV
	UzhOkaVz5KvMuamxtrGSlAfBQVxyd9+FDtA03uSeYWGLlv5WulSqm4eNoKSe2mNe
	s/JkR/+/lSSs4NYlWInS++vuqOWqA49cvbx9RlCieCZ2jVBNTKakcWbfOBpscXjQ
	KaMuHZHmKSkPkaExJD4G9owOTohM7SHiULefFASDF41550/q4MDdnYRUMiFxeBvz
	hIyQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:message-id
	:mime-version:reply-to:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=
	1718304184; x=1718390584; bh=qWoUrAUWWgJn8eusZ6A5J29EQ7L6X0sMAPM
	ummShSfY=; b=E8EsXK02QsLFiEN3HTQbh5324EG2qMAxbu3ElBTy3QuvWB9V4ca
	CNBdTnqGHmvLexTF2SWNC4ZLJLkLOheu65scha4Acxk4WJ1id0V4dzEKXVewVAvP
	CfTKnaBY/H98fZdW2NQLNNWiIjeZClnWtj4VAlcuVGH1FUf956+T1+CFrGbFzD4F
	81GzmSB30fOrSZsdq1lnPuBLEE5mNuIdbRka2YFrmSla6MKfNGsFfqiCb2nTfrQH
	1Lbcl3/YzKAk/IuWkGRQwJpO6NZlQjp8uxPKEvuD4q/S3AkcmCe6Wk0H1fZVeAUT
	WcjTUNmEh5G3AgiyTLtdM942U04MjKJe/Vg==
X-ME-Sender: <xms:uD1rZo7RpyxgCF0zy1u_c_idfWSJfl7HfZWsqfT6U_FiOeZewYLHBw>
    <xme:uD1rZp7pl2TwDPUz7v7FMO3gY6raIwaV7ctuDXbx587B2hygG1wKkogQtQVZNkR2x
    QnRqdKQVhVyiwU>
X-ME-Received: <xmr:uD1rZnfEmqkwIhSasKimraN6aPTX5TedShLRfWr8BO-PlhQSja_-owoKPbHEM9oTV4MpDzGSRlnujKyAILMabZbxBJPHMl_zGzcqlvvdZd8aNvAu>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedujedguddvhecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfggtggusehgtderredttddvnecuhfhrohhmpeffvghmihcu
    ofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinhhgsh
    hlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeeiiedvfeevvddtgfduvdekieekieeg
    heelkeetffefjeekfedvgffgtdejveeljeenucevlhhushhtvghrufhiiigvpedtnecurf
    grrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhl
    rggsrdgtohhm
X-ME-Proxy: <xmx:uD1rZtL8_9e73UwN82bUQXlN6yifkkEmdz4JunohWP0gp5nNLcKdNA>
    <xmx:uD1rZsIV_14LQxXqfPqgJoRo39kl6MwkeSQZRD_GBM4HSLONr3l9Pg>
    <xmx:uD1rZuwr52uLtBQ5AjA88RuaIMCQca8lQmo3xU5srhdSFlWkKwLZEw>
    <xmx:uD1rZgJNETtiG09ogGFmS7yqLppIW4xeipNWXHixH9xasOvUtQs8mg>
    <xmx:uD1rZhG6ShCUlsqjahClr3uo6Zfy1Vj-cuBHvM7yoXo8eenDN1O3jqq2>
Feedback-ID: iac594737:Fastmail
Date: Thu, 13 Jun 2024 14:43:00 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Xen developer discussion <xen-devel@lists.xenproject.org>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper@citrix.com>,
	Ray Huang <ray.huang@amd.com>
Subject: Design session notes: GPU acceleration in Xen
Message-ID: <Zms9tjtg06kKtI_8@itl-email>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="0xB0yhqWj3EK5xsf"
Content-Disposition: inline


--0xB0yhqWj3EK5xsf
Content-Type: text/plain; charset=us-ascii; protected-headers=v1
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Thu, 13 Jun 2024 14:43:00 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Xen developer discussion <xen-devel@lists.xenproject.org>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Andrew Cooper <andrew.cooper@citrix.com>,
	Ray Huang <ray.huang@amd.com>
Subject: Design session notes: GPU acceleration in Xen

GPU acceleration requires that pageable host memory be able to be mapped
into a guest.  This requires changes to all three of the Xen hypervisor, the
Linux kernel, and the userspace device model.

### Goals

 - Allow any userspace pages to be mapped into a guest.
 - Support deprivileged operation: this API must not be usable for privilege escalation.
 - Use MMU notifiers to ensure safety with respect to use-after-free.

### Hypervisor changes

There are at least two Xen changes required:

1. Add a new flag to IOREQ that means "retry this instruction".

   An IOREQ server can set this flag after having successfully handled a
   page fault.  It is expected that the IOREQ server has successfully
   mapped a page into the guest at the location of the fault.
   Otherwise, the same fault will likely happen again.

2. Add support for `XEN_DOMCTL_memory_mapping` to map system RAM, not
   just IOMEM.  Mappings made with `XEN_DOMCTL_memory_mapping` must be
   guaranteed to revoke successfully via the same hypercall, so all
   operations that would create extra references to the mapped memory
   must be forbidden.  These include, but may not be limited to:

   1. Granting the pages to the same or other domains.
   2. Mapping into another domain using `XEN_DOMCTL_memory_mapping`.
   3. Another domain accessing the pages using the foreign memory APIs,
      unless it is privileged over the domain that owns the pages.

   Open question: what if the other domain goes away?  Ideally,
   unmapping would (vacuously) succeed in this case.  Qubes OS doesn't
   care about domid reuse but others might.

### Kernel changes

Linux will add support for mapping userspace memory into an emulated PCI
BAR.  This requires Linux to automatically revoke access when needed.

There will be an IOREQ server that handles page faults.  The discussion
assumed that this handling will happen in kernel mode, but if handling
in user mode is simpler that is also an option.

There is no async #PF in Xen (yet), so the entire vCPU will be blocked
while the fault is handled.  This is not great for performance, but
correctness comes first.

There will be a new kernel ioctl to perform the mapping.  A possible C
prototype (presented at design session, but not discussed there):

    struct xen_linux_register_memory {
        uint64_t pointer;
        uint64_t size;
        uint64_t gpa;
        uint32_t id;
        uint32_t guest_domid;
    };
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--0xB0yhqWj3EK5xsf
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZrPbYACgkQsoi1X/+c
IsEM+hAAsL7zoAFjhARxX0DWluGmir2SX3Q8fdbxBy/O0kT72u1kUhsnk1IFs1TQ
bFWJLywYbMsYT6UJQjlH/zGdU9vE4jpsWiwbj8b/wd9oRPsxktHo3aGD/nNxRouE
5MCOQB+S+1Vng7JQU8ESdIxiUzwwCNfwxGalmfCTbMDJ5BBHXe7x3Egh5Ni1yENz
aP5QFYd2LBjLA1iCYIq6PDGewrdIKSfOU3SZ17sfrqhpwNLOd5y5PQKhGoRDpwwp
gzEkPvVj7dX4nMTHRr9QaAa42awq7dGNh7fQ3jtJ8DeUkk2kSgKzDFkH5D96ytn9
ntLlPl7ROWsXq0hNpt6LwR2GTiE4LMtiE2eexXg7ZckXEtSo1HZ379v/dAe/ZBhp
2/Y+UsZ0rDGnesPHv9eIPeLPLcg8+ZUatDB3/SjZWygMzeFr++VwBAMjhqZv1V1j
mfWZ+QJ1/wGq8bEir53SjAkRzfmBk7g8W/7rRrhiI8DufubgrhO5U5MOXuLnfzH/
l33aBquc1sII4JxbDnA1bFOGHuFaiVEA+scJT3l9gAgN+Vkk1NVVrCvGkZSugMxb
oaxWr3I2PG9lekRDMq5/3DYP3XHdUDZuoAZlXegtM38Z8uOclBj5jc0XbTFCsSN7
19FU2IfQPmX+ZjiMcQMNv2OOH66dySFwatW01xZH45Ovdi8qRBw=
=1fuv
-----END PGP SIGNATURE-----

--0xB0yhqWj3EK5xsf--


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 18:58:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 18:58:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740195.1147208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHpdw-0003lM-Uf; Thu, 13 Jun 2024 18:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740195.1147208; Thu, 13 Jun 2024 18:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHpdw-0003lF-RW; Thu, 13 Jun 2024 18:57:52 +0000
Received: by outflank-mailman (input) for mailman id 740195;
 Thu, 13 Jun 2024 18:57:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHpdw-0003l5-Dm; Thu, 13 Jun 2024 18:57:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHpdw-0008D0-AA; Thu, 13 Jun 2024 18:57:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHpdw-0002UW-13; Thu, 13 Jun 2024 18:57:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHpdw-0002Ob-0b; Thu, 13 Jun 2024 18:57:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HoqXu7YTToJrgZlgrvkncLqcvR6Dl5MfmofrTF519mg=; b=kn+qVrqnl6/63WXwnmB8OMt/vh
	dk64vOixvdKlrs/6O7kSPjzSAYWtdBHOWuFmzlge1uePG6+tZSsB4z6LMGhuQnvBMlq/M0Zr7DDuJ
	4VZa3Hu4VVmCtUNEtzlxaZgN3nXw2fc61ScT01OwQIaes7Q+lbrxmmy/3hNs4iLrSsP0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186337-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186337: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4fdd8d75566fdad06667a79ec0ce6f43cc466c54
X-Osstest-Versions-That:
    xen=b490f470f58d0b87653bfbee23c7e9342b049ab4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 18:57:52 +0000

flight 186337 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186337/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4fdd8d75566fdad06667a79ec0ce6f43cc466c54
baseline version:
 xen                  b490f470f58d0b87653bfbee23c7e9342b049ab4

Last test of basis   186335  2024-06-13 13:02:09 Z    0 days
Testing same since   186337  2024-06-13 16:02:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b490f470f5..4fdd8d7556  4fdd8d75566fdad06667a79ec0ce6f43cc466c54 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 13 20:01:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 20:01:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740205.1147218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHqcs-0004Q1-Dz; Thu, 13 Jun 2024 20:00:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740205.1147218; Thu, 13 Jun 2024 20:00:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHqcs-0004Pu-B0; Thu, 13 Jun 2024 20:00:50 +0000
Received: by outflank-mailman (input) for mailman id 740205;
 Thu, 13 Jun 2024 20:00:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHqcq-0004Pk-Il; Thu, 13 Jun 2024 20:00:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHqcq-0001AE-4e; Thu, 13 Jun 2024 20:00:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHqcp-00046d-O4; Thu, 13 Jun 2024 20:00:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHqcp-0006Gx-NZ; Thu, 13 Jun 2024 20:00:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8F/2fQPbkXYTaEag3vZvRjivh23ojQac/wIRe7OJhcE=; b=UwkoW4dIJzWEFuVgG8xT6gU9vJ
	COfN4iB5r54JcWeX+wxLtf3BVGLhIXmiNiWz795blN8o8MV42Wq9zt6hMasF3Ql9/OsOuaDNxWUIW
	vrxIrQXuod0PYQ4kddtvxurCTWqGUkfIQNmiR0pgWKPOa+kt6uN73TlBvYLrlbWQmfQ8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186332-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186332: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=401448f2d12f969956fac3ef3d29dc065430fd56
X-Osstest-Versions-That:
    xen=401448f2d12f969956fac3ef3d29dc065430fd56
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 20:00:47 +0000

flight 186332 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186332/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186328
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186328
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186328
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186328
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186328
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186328
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  401448f2d12f969956fac3ef3d29dc065430fd56
baseline version:
 xen                  401448f2d12f969956fac3ef3d29dc065430fd56

Last test of basis   186332  2024-06-13 07:15:36 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Thu Jun 13 20:43:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 Jun 2024 20:43:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740216.1147228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHrIB-0001Rk-No; Thu, 13 Jun 2024 20:43:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740216.1147228; Thu, 13 Jun 2024 20:43:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHrIB-0001Rc-Kj; Thu, 13 Jun 2024 20:43:31 +0000
Received: by outflank-mailman (input) for mailman id 740216;
 Thu, 13 Jun 2024 20:43:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHrIB-0001RT-BZ; Thu, 13 Jun 2024 20:43:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHrIB-0001uC-6p; Thu, 13 Jun 2024 20:43:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHrIA-0005A6-Rx; Thu, 13 Jun 2024 20:43:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHrIA-0004GO-RN; Thu, 13 Jun 2024 20:43:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FjeyFzuTczwmF379ZsWvuAglkzUr5LD1g44Wnc/NHl0=; b=wLxW4ps8XdfiTRi2SbU2DDn/2b
	smvQYUPKLHEN772+9+Mj14UqOmiFyAu1wCDG1B6mSoCmtwHTZKBWKzjUR/xtEoyWgUnaX60dYjHt2
	ptiw+wllav9o7QFGFMp0MyazmbsXxl57BcyrUu7gRl6JCa8JW3Lu2PLKl7xSUIIjxMtI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186331-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186331: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=2ccbdf43d5e758f8493a95252073cf9078a5fea5
X-Osstest-Versions-That:
    linux=2ef5971ff345d3c000873725db555085e0131961
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 Jun 2024 20:43:30 +0000

flight 186331 linux-linus real [real]
flight 186339 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186331/
http://logs.test-lab.xenproject.org/osstest/logs/186339/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-qcow2     8 xen-boot                 fail REGR. vs. 186314
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 186314

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2  10 host-ping-check-xen fail pass in 186339-retest
 test-armhf-armhf-xl           8 xen-boot            fail pass in 186339-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186339 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186339 never pass
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186339 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186339 never pass
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186314
 test-armhf-armhf-xl-raw       8 xen-boot                     fail  like 186314
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186314
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186314
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186314
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186314
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                2ccbdf43d5e758f8493a95252073cf9078a5fea5
baseline version:
 linux                2ef5971ff345d3c000873725db555085e0131961

Last test of basis   186314  2024-06-12 00:10:33 Z    1 days
Failing since        186324  2024-06-12 17:12:12 Z    1 days    2 attempts
Testing same since   186331  2024-06-13 04:33:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Hongbo Li <lihongbo22@huawei.com>
  Kent Overstreet <kent.overstreet@linux.dev>
  Linus Torvalds <torvalds@linux-foundation.org>
  Ron Economos <re@w6rz.net>
  Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Thorsten Scherer <t.scherer@eckelmann.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 427 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:36:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:36:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740243.1147238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvrJ-0003Fv-IR; Fri, 14 Jun 2024 01:36:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740243.1147238; Fri, 14 Jun 2024 01:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvrJ-0003Fo-Ew; Fri, 14 Jun 2024 01:36:05 +0000
Received: by outflank-mailman (input) for mailman id 740243;
 Fri, 14 Jun 2024 01:36:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHvrI-0003Fi-B7
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:36:04 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 735c0eeb-29ee-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 03:36:00 +0200 (CEST)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1O6g3009650;
 Fri, 14 Jun 2024 01:35:44 GMT
Received: from iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta01.appoci.oracle.com [130.35.100.223])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymh7ftmy0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:35:44 +0000 (GMT)
Received: from pps.filterd
 (iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E10hg6027051; Fri, 14 Jun 2024 01:35:43 GMT
Received: from nam11-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam11lp2169.outbound.protection.outlook.com [104.47.57.169])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3yncdx2hsp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:35:42 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB4709.namprd10.prod.outlook.com (2603:10b6:510:3d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25; Fri, 14 Jun
 2024 01:35:40 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:35:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 735c0eeb-29ee-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=D3TQNM/YrayATk
	bTiRqKOtaetKIPFB5e1u1n1IPb54k=; b=WLTYZYi3KunjKyXR52g3FfhX352058
	OIW5NdUTkoriWAR362rWmaRMMrWXiQ2o7MvNhIxzHn+V5S2fDlajJ5yWo2+WbGga
	mP/OU5jWDrrPxgjSu2OBLIIbH4QkIiWjRL1FlCew/CA9K5GxekTOrvPv915o2sc4
	J8f9fMOECjm56KDWJIvFNjCx7RWvxKk8F2UbkDHPpTZf0QzteJpRNQvDaEmcrFUf
	GOzsVSVP1z/xVYDKaArHLi+fnKPGXlCgO3EFg7V7zc1xRggrO62SKRxxDpTPq+EA
	bHctEseAKeUkl19RrxuChuhjAZiSGPwLrI4bDQWv3Awh3B8/dFFKSAtw==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MaT2oBTB4m8FqXfKWda+CFJ6LwcMIpBH2dzgkCl69uP9wdiUO0Cuvrlb8cf6ut5ZATB5nj//FyB1OY+PxsjxTM79I4bG2CKpARnabKNSWYY8tmwo+emk7BV5Lg4/PvotN0/sef5dX9fOBwGZKNs1zC2d4+VDTnxvqvnLpl9GLz/k9aieI4Q2nFRyyFFusfiDiP56icx5gqIA0cIAocPRp4TU1xjgtVNmPogaXOi4xCg7ySHtSh5r969wT+YI2fzhN2sNKInlUQ27c2VGo00I8BkhyGQ0HXMpEhZKD9IqRnDuNdaTHzmfM4OQC9pNRLFIs1rSOJDQ8q2//+7hUfFM/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=D3TQNM/YrayATkbTiRqKOtaetKIPFB5e1u1n1IPb54k=;
 b=P8pLWQiqxEBZKk6rRTTsGBKc2WEGu8xWBwroaqnCcDUokDQrhn6UaqcqOR9i8yHuryP5/aolu9JSfs4tePZ4Yeliy/yGjbEdgWTkpDClvGu5BwyKIF+zC5JdJ7COeD4yZ7bcWdW5z57xJSCtdDpnhpjbjGyYKOD3jRywCCDnVw018IhHS0jLdGXI2WkidrOzSQQ9iRTfKxMhpSEIsXTVXr0lrQZV9IZivnVy2equxHU0dpquHo5XbwbcqmNuJReqOwxW/r5i/nauqczIW6177SFLvvJccSGZ+mHYtuf17uKTbq+DjYXiV6psN/ZVuIXSZs3Iq04Ztnvf8qqVpudwBg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=D3TQNM/YrayATkbTiRqKOtaetKIPFB5e1u1n1IPb54k=;
 b=TD9vkaDtpl7hT7fLz5trnODOSTgLorxA2D7DWEvMDmHG6X3m27O9EBeQ3yneK2qgw6tf5HUANT8Qhdv5vB6f+lGRqcJTqbebd6JLRwS4tCZzShgI1oKbXNZsO7Pd4nv+xk3oYzBLN2/zFkw9FnFLXwKySZLnhjitXsG+GLQBzHk=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org
Subject: Re: [PATCH 01/14] ubd: refactor the interrupt handler
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-2-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:47:56 +0200")
Organization: Oracle Corporation
Message-ID: <yq1wmmstih8.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-2-hch@lst.de>
Date: Thu, 13 Jun 2024 21:35:39 -0400
Content-Type: text/plain
X-ClientProxiedBy: MN2PR14CA0021.namprd14.prod.outlook.com
 (2603:10b6:208:23e::26) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB4709:EE_
X-MS-Office365-Filtering-Correlation-Id: fc9434ca-65c2-487f-2600-08dc8c124cdb
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Microsoft-Antispam-Message-Info: 
	=?us-ascii?Q?6tKH6/yUNz2JZ0R9YXERUAfL9Y1IMPBkQotKmP7CsDUHiL78yt0X31Ocn+Kn?=
 =?us-ascii?Q?W1+oPR1/DGwi4gFr6e0pj1eh2rY7oZXsrUwWtRX9fkblsRuDOkICyqecMzkC?=
 =?us-ascii?Q?i+cZCqc3KvUAO2SQtnlQ81nlPC4zeRBkhCfyhIdCTBy6Mr/1mppo36w9sjW9?=
 =?us-ascii?Q?/TNwXuw9aPu7JGFel98qs+DSlg7PFOGsbEP/40BeW8tRn+IaDZ+eLLpwYc1D?=
 =?us-ascii?Q?X9AXJkQSu4Ixqgg+rvOuhdkFsetbVO3WJs7t+rW7/xNPmIye7kV8OPDzH/H2?=
 =?us-ascii?Q?dOEJ+07Xh7EfCHy/QC36hKZNYGx6w5/2zNexoqFVZu0xM6wiHQ7zvNNgWB5A?=
 =?us-ascii?Q?REjXcDZ0xcJTjLDhMTQsskvSwQTtZssMseypzluOn26vtDlIIGzINrlPkece?=
 =?us-ascii?Q?JoDF+r4sb17yLlN3Sm4Tp4PBn2h8OnZVGydz7AAjhiLF5k9zNTkadwl8Su6M?=
 =?us-ascii?Q?nVhXNF+naCEGc4BZOSpNxk33kNDyM5l4ao+BbCUVq0KGHEc9TKQ+gw8nTIiZ?=
 =?us-ascii?Q?blMMgwxdFRIapODr0IoOsNrzRADzUY5t28mRo4Q9DDB01mkzbLIbqIDpSzEp?=
 =?us-ascii?Q?QeKE82lbpUZui3Hc0CihQtFWc+YJXWXEWhW0aWEc+5ezN9vNYeaTDMtLbYax?=
 =?us-ascii?Q?ZK+58vj1vlf3B+aSTK+iKgCf5ZksOtcppf+qWaObUGtAqZ/kDU2Lc5EQ2xut?=
 =?us-ascii?Q?WHcy9QrF1GGrLZ62EAGoTe5HaORkr72+IcDeb5PRI2tf7WnHNmN1hOExwmzt?=
 =?us-ascii?Q?0imEv2RVd2iqfFr0kB+KocaTftIUFM2MKFZXXa9O3t95hG3U1H6uFMehey18?=
 =?us-ascii?Q?LKSNcl69iem8v5x1rvqDXrnM2JmOn42Be9oCzfCSRqMQJqlXzXcWt/j8lW6y?=
 =?us-ascii?Q?JRnXB+2+wF1fsfiplxHo2uR81kYBEnChQsbuosfWCMUNDd/uyLcWAKcnG5VI?=
 =?us-ascii?Q?Yu1/ViHdlkp/RScaJ8EGeZBSDxVOeHj2TjAG+rXk5cRCNHMwRGddbKz0d+08?=
 =?us-ascii?Q?ryMLWTE2I3VFX5TCUcF8D2AnX/8J5xmdSuryLvE9dm/3lnKxD0K23Q6JmPsO?=
 =?us-ascii?Q?8QhpWMa1B6oAdFsmkQvhfsY+ohPJC5ah3PY/Z0FqV0Z+Fk4amGxS17E/rrnj?=
 =?us-ascii?Q?7cF/OwpoCWcbt+nvVXfF3xxkieoxWGNUnU7siWgYzrLgw4bUvTU7XDq7sCjv?=
 =?us-ascii?Q?aiWQb8IoVrAeDAnttZH28f6TypGi64yKGCEG2ytbT1S/DW4dhXx+tOntJLPh?=
 =?us-ascii?Q?fY0csFyVPM9Iwm8QhFzcT36bnTPWR8Bq4y0NJcbPIaI7a1D4RO7WZsh+ExBz?=
 =?us-ascii?Q?GPV5oCOcpmzCVW2x1ex/phLT?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?wPd2cFjO4jpNz/ACotIhIWnC7N5h2V5YS1aDOiru0MzGiOhNXkmUh+QcodM/?=
 =?us-ascii?Q?BgX2LVAQpFDCThWbEZ1mLPv/vRH1Rb7thkGntgdZqbAb6Baqc5qCUxvB7SPY?=
 =?us-ascii?Q?T4D1TTmk5qRyE9tk1BIIVDMroceXf9Di3oHHbY2Fcl4JT00ja+b+5Vs+qc/C?=
 =?us-ascii?Q?LDSkLEdE6poPBJyw/zteJqGZJ48qqGXSyG9ujQFQ8ZrBu9XZraJDapBKVvP1?=
 =?us-ascii?Q?mO9pwAks85xrE5VkvMCQmtS7DLiRubC1HcEWsnaFZfFAhiPXGgLKIWL3seXX?=
 =?us-ascii?Q?5gxCd6XxJajRXL6M35WXQw1ppbPTiXy2+ytYh6U4MRtd3SQR1L0bkk+G18Bl?=
 =?us-ascii?Q?64s5mtsPjzk+Ulci0jT6RnnZa0FiCrXMUEeW0Ug/qgMfq/Ui9CR8CXMfFFxG?=
 =?us-ascii?Q?8KzCQRSlIw7e/X5F/d2tLRbAmSIwSgisEEsbM/cM+QlIYvz3qZEl6ieHpgUw?=
 =?us-ascii?Q?R9x4JWnmE/3HVoyS6NhGP3Gw/0OZNjn3qg0o1Do8SnTvgeAyXfjjDhB73T9l?=
 =?us-ascii?Q?VAb1Ht5tqx06NXRr/SSSLMxFssRdYEepPniAWe1Cojr2beIyrdOTy/4KWHLa?=
 =?us-ascii?Q?nPfBXM7XtB+e9xnEkJN0xskNtTZnnCE3wTbIQvjxxT1a54RWy4sxrH8BperY?=
 =?us-ascii?Q?9X8PCFCRjea7k5jE1xb98/AOOlcctNsKVjCGiDKPeVZqVwJ6+LfaT2dU0Ssr?=
 =?us-ascii?Q?V/QN1lHF2ejlsrEe4TUgoyQ0FiuCzuzkjb92HegkeEsAoVpUm1UKwIUd+wf/?=
 =?us-ascii?Q?sb3UfxUpl4GzrI2DWWqjF594zUUtyBNU2QTC9mAiqPyZPZBvLikZoj8gJx1w?=
 =?us-ascii?Q?ot0azQhPzhQ0Hh5XpXsj866lEyqY6+QveAexoAQsiAFjDS+ZdbTvusHIBQnV?=
 =?us-ascii?Q?ldawLnD7immCU07qjlbZd0Yc1fZWX8kGHLhbGfSpM0Sk58YmBY6KQrPR1Otv?=
 =?us-ascii?Q?G0sgJ12elIsymU5VUrlDjrh6Sl+GxMGB4AXxL/mgPqwkxOX1oLC7r4hLWa5u?=
 =?us-ascii?Q?6ah+7u6/yhk1mtgOGd+l0Di7i8JnFvTVNVOs5PbmrylUULuD/V8qO8buhBPA?=
 =?us-ascii?Q?C6nhdSOpjEdJOmDJxTz8DxX2xXf3CjZUZSuvADJ1a96Ra5OBlFJVxe4D85CG?=
 =?us-ascii?Q?nBm2gOeIca/6jRtypiQwizHV+56MClieArPVCoXO8y3ktAKSahDfgFcrqdCC?=
 =?us-ascii?Q?CdkGXN64xuYM4LYZuicoEL9Cpn4eorIw9xBTNmbCsil8nzHtYdmWUWfrBmfz?=
 =?us-ascii?Q?cAHkGRPt81zwy8a52QzTZ8SM5NqLu9k/tVkZN+h7/Daiz7h0CV/QONRKQ5yK?=
 =?us-ascii?Q?lN09U9UP+igoZaJV9HIdX5q/mQuQUgG1Hj2rlZIsWsDyp4uydb+vsuqA8dDw?=
 =?us-ascii?Q?SUE0i0qdowteAXUDIhYHACax7aE1RXE7/LTMCc5Fqe0tO0j1Iuc61tHSgjm9?=
 =?us-ascii?Q?FAadpTq8ChjoEAoGFRyCTEa1m/niF6eqtCqUnpAFwPhza8AT25gNt2i47mW2?=
 =?us-ascii?Q?1pD2KkhmC9sdb0HaJbVY3jCGsjE/Vr84x1trUEBdupHHty8S1vJSeI4Ogf6Y?=
 =?us-ascii?Q?Xo5AUXR6CO/hik1TPIt7AJfnsnVE0hoeT3m+zBuVFUyYYgvWN/sxJMiSBxkJ?=
 =?us-ascii?Q?zQ=3D=3D?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 
	+Z0TBsuAxjbiJnXauGxjW+HJugc0tXmrA/+RtqtSJl9PZZoEZJsyEosL3j/jnY10EXxa+BWsFoin48Y4Zv+Xka5kBgUPLX3Cd7oOlZwJkTB3Kyj0kW1VzGYVGZPBsk1RkrB3OBxjIe+yI/XvPmB0/+R25uzfHy7tQ5voZYJzdqRuSAD6oJQCYUkz+et9LZPVQ6IVtYcUubCHJKoKbkEhIAvzwfhnziYpHcTC2v+z66y18lN4zQn3ZV+QNA+//Ltf/yB214ZQHTZWYF0i1gtYQ9olFmCZ3Qp9+RMWrIFH2CV7UvHfzzTBFVLX+K3krLMSpaxdZeKJlkkcAU/nLK8Yl/PltPXJmDw9xYM15TL1XTQI3lJujZbcb9EBAqYRH9tMHhLbFR+BuwSHLvKwkrI6nX7O5n5rVRjSXKDoFX1nA9MEFgNUnOC8o4aTZHztoyKTZTBkfpv4wCOY44/Yc+1Y7fEFrdgWcFfPGyKeKdRCVr7eIHnBeyl2BaDDjKRVr6NbrFS9ynYGPCH2YYTu4l8hUYU9qiuYc7AgVYaV3HBDiLooc4voeiRKM2tfLqeDucFrsagwYRirrFqqdf7eonXoso/97RADDWDa2B6MXySRil8=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fc9434ca-65c2-487f-2600-08dc8c124cdb
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:35:40.7410
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: oHShx3j6L42uWoFgenETXJXt2qriiNrZ/IuZ9rq/5sXVzotePpXy28mfWb1ZFxp/nZiyQNrG7/uu8xWHwZ0yGJh9PSMtGD/o3loENmehoPs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB4709
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 malwarescore=0
 mlxlogscore=999 bulkscore=0 suspectscore=0 spamscore=0 mlxscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140007
X-Proofpoint-GUID: dmXeuSBTYY4HSrcKvdYaxwCtng16SnEJ
X-Proofpoint-ORIG-GUID: dmXeuSBTYY4HSrcKvdYaxwCtng16SnEJ


Christoph,

> Instead of a separate handler function that leaves no work in the
> interrupt hanler itself, split out a per-request end I/O helper and
            ^^^^^^ handler

> clean up the coding style and variable naming while we're at it.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:36:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:36:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740246.1147248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvrp-0003g7-Rd; Fri, 14 Jun 2024 01:36:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740246.1147248; Fri, 14 Jun 2024 01:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvrp-0003g0-ON; Fri, 14 Jun 2024 01:36:37 +0000
Received: by outflank-mailman (input) for mailman id 740246;
 Fri, 14 Jun 2024 01:36:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHvro-0003fb-UV
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:36:36 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88cddaa3-29ee-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 03:36:35 +0200 (CEST)
Received: from pps.filterd (m0246631.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E136Ge028967;
 Fri, 14 Jun 2024 01:36:09 GMT
Received: from iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta01.appoci.oracle.com [130.35.100.223])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymh1mjrwn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:36:09 +0000 (GMT)
Received: from pps.filterd
 (iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45DNGV0h027090; Fri, 14 Jun 2024 01:36:08 GMT
Received: from nam11-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam11lp2168.outbound.protection.outlook.com [104.47.57.168])
 by iadpaimrmta01.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3yncdx2j3a-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:36:08 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB4709.namprd10.prod.outlook.com (2603:10b6:510:3d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25; Fri, 14 Jun
 2024 01:36:06 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:36:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88cddaa3-29ee-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=uZsVr147xntuGs
	grnROEfKl8s+FW44yp7i6KnMASNy4=; b=WfmA9pBmxuV8UuV5NBhgRLL7SbMsaQ
	yrbXIvWz0x2upt9QtQI5RZftX/tvHhdCZSBlE8TCDNaNrqQXn/DNpTNMTDNMQmXb
	vJsbWq8tld7NLuMeKfgY/Co+ivYAHa8+ccb9g942ObVqV+ajQCMiJh/kitPMY2/0
	M/OpNdoccjjaCVEDsjql+Uth87+/ciToMkRbOMUqhqwZxd4BTeY6NV/VwzOdOEu8
	jdi5fnb8vDiMvV7vM4Px83kYvPZnJYgj0LYpy/Vge0xKAnKUT7Ztz9r7TKNl00yq
	iG2dYzAvEMrx6u3FwF3Kz5hMR20dkQ/tPqbDrplBONeDoc4b0TyiX4Jg==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AxiSpXAJVVUvv9trv6Z7EX66y5T/m1oaoeaRFdGwxWpX/ZKF6uacCbVh/a/erlIpbrTvNgyAZ7m47MFbywnmPhc2lwGpOzMtkWPKj/19Ur+qEPDaIayB6J88KAuHCrGft/ZryUTXHz+Giu/o2cjrLP5+S0UYGTjjt82DgVgwohN28gFGWJMEFD0UWOlhw2WdedAZk3rzx29cAoeBV8LNj24/5m9zzS3eDslDARC3MVaEs8gLHNMz41SkX+lhRD3qm7KzsDwynzsg1u4Aly8iWT2d0OO4gp4U2hskHgBy1rrj0FMGg38gRDne+J17WddiSSk3VXGt/R1K0CjHHflqiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=uZsVr147xntuGsgrnROEfKl8s+FW44yp7i6KnMASNy4=;
 b=ZZDc+189uMvpUvBV4Da6TXpmhwR3f/n1JkQsf/+N3JsBlQG/vHcSX5sYdCLn7Mpeo1HGCYB4f/Ng6BbtlMXW6GYRA3nnkmUifd1rNgsRy2TY/Qt9laKf6YqEyZMP3q9T4b+36L38EoAIHTQgduGs3w4/Cf009O+MBWHOAwuCRqGoczj9PL1YM2J6e4B31vYRS/cHfuYC3w2HhdsHupD3qljq1GzrgUAdM/F1C59ZENMJZuGCgWVKN0OE1LoBZmnRL0lw/DdkJpAMbNNUKmqBA4/OMjv1iQqJeyhY0i6onAv09N6R9+cqtL3SrL8kfZRwychHs2hVE1XNl5+s/Xhqew==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uZsVr147xntuGsgrnROEfKl8s+FW44yp7i6KnMASNy4=;
 b=Skjv7X8BX4FmQG6qIRFStOjnBNZLCr+/4eWfUnA7N2vkbM/gPy7c7nLDLkfWLtQgKZ6PzwkfCM+fmLrTmpf87S7LxJ6F1Urp/9LIaMcAuu3FVaUsZgpaitsNbmfgHr8Yj8UDhAJGF8kFjBSLpI2yT3cngTdf93NAUOlU641r8Uk=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van
 Assche <bvanassche@acm.org>,
        Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 02/14] ubd: untagle discard vs write zeroes not support
 handling
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-3-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:47:57 +0200")
Organization: Oracle Corporation
Message-ID: <yq1r0d0tifi.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-3-hch@lst.de>
Date: Thu, 13 Jun 2024 21:36:04 -0400
Content-Type: text/plain
X-ClientProxiedBy: BL0PR02CA0105.namprd02.prod.outlook.com
 (2603:10b6:208:51::46) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB4709:EE_
X-MS-Office365-Filtering-Correlation-Id: 941f7191-4cea-479a-63e2-08dc8c125c42
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Microsoft-Antispam-Message-Info: 
	=?us-ascii?Q?4i8sekYYye7/EKxM/vCTd+IZVd03ce9hb1OlQEavN8C2CblBU83DqAZ/se2I?=
 =?us-ascii?Q?GcLxt6fXD9V63U5jKxRIap/nw+FqYJJD5Y+RJNU5gJBAl6A/CMvs/CcYGSPz?=
 =?us-ascii?Q?ezxM0mwexpK6Rdqos41xprHS2kYO1q4Zsx1XMQBZTaOHZHiqC+deD+DV1Rw0?=
 =?us-ascii?Q?hLWolfbBG7OBRAZsHszkHmQeRtUpxc7nfO+PONWgsmq//IiXb1Qz8Maqul3C?=
 =?us-ascii?Q?Uam2X6Y1m5Wd3CZ8oKypHmuwl2IpATkccpWJmCJCzUgbuz1F50XaXoamWYEr?=
 =?us-ascii?Q?Wj1fj4H6gDby4Oav2t4kxkHXvQOrbGvqLyvsEUZqeMqVNdSymxWup3bIlmH4?=
 =?us-ascii?Q?Ye2UqOmbv5LVdMihrJ638CoGv8VdOcq6ueYSFRmJhFL4mjW4/kiTBm4hJSAq?=
 =?us-ascii?Q?PFXAoR3RWz9l8fUkW9KvF5M1e/bpQJhBI4nueON+zWYxU1yPCcHQUOdN3enI?=
 =?us-ascii?Q?iiT15QgSPVWyHIqzp+XXiygErwmqmeBiDC1TsfTLwUeWaGi0ghE3VI0RTcW4?=
 =?us-ascii?Q?+fAhio3i+OVnkjNiGBoGaoLyvZQratreTwkOxRmb/ioTV/7/HejBwsk1rp2G?=
 =?us-ascii?Q?al65dOQVomI9e3Y+UmLiHjd2HsnontwqyPMyQEpDTkRPP1C5szsKnl2tJFfA?=
 =?us-ascii?Q?WprUA1iZQh5Zrw6gOJGZXl9uXXI45cs7v+ePW9Ptk87V1hW2NJt3i8WeqeXC?=
 =?us-ascii?Q?btqTMH6tecUUB2nMzq64wMX8bMJu/da5zDrkOXmESclR2WFVM5gb53doi8Aa?=
 =?us-ascii?Q?mEaS5AcH6KpHjbR39piQELqbAO9BSPAjJ1Zh6EYamx2vMu6zaDTh/fIGlbtI?=
 =?us-ascii?Q?PNaS5OoFeePGIec4pP+G2KN64ls4qOz0CeHKjls7P4vN0FMyTBEf3vwIuKu9?=
 =?us-ascii?Q?3p5W++QpL1b8T/XXO+hOXtBXEOHfYEJhiMkZ2/YOcL/JXHalTOiYQWHkNUbj?=
 =?us-ascii?Q?ZDBJWKMyiL9ty22PP+ArivqbHmygiRJ1rybWN7fn5FPHDtSro0hiD3n4vxVH?=
 =?us-ascii?Q?eMhYagl3DphtB8eY1qPUprHg26HCpZMGRZHMzABfpWLO+JQvjzH+Bccn8zQh?=
 =?us-ascii?Q?bEf/SNqqkq0kvshynjKdarW2EvyG9k93GFxkOYUwZ7m5RXMLJKx6tcR6Bn52?=
 =?us-ascii?Q?W+3ApnceXsevImKEIFLRJPS6q5MBhTJuXPLr4YuDhPGy58bijX2QF6T0pgL7?=
 =?us-ascii?Q?SObIeApa29tlEnmmugF29xEfy3ps5bzBwrf9MQBKa6MqPSOxYAHlZ0MiAbTC?=
 =?us-ascii?Q?H3ndF+ka85C3sgjj0AS7gMlMXpRUJmXP6cU4yAaembVXyjArv9DNmumMRJ8i?=
 =?us-ascii?Q?r605JghDZZHNDwR6BhXGKfKU?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?TvgLwiG8WcHzDBND66jfp1Y9ut3hczcMwU46+lHk3mLvsgWPQTbMx7zVYA+A?=
 =?us-ascii?Q?vS9LF74NIo+y4XWSLjH6aoWKdm//qokMYXa/sJQI/tsw2Xb+E0pcwrhi7HyD?=
 =?us-ascii?Q?Ke19DCtavxYDXTrmoL73PD8IDGAVrFxFetJGd/4Ia95kX2kWC/6DZPpsNt+K?=
 =?us-ascii?Q?efoEFexMkZyyEg9IVwu5+davrBrkjWURT8rYdfPOBTglsGICGg8gVlz9xewl?=
 =?us-ascii?Q?zDZkfxEp+IvBYqRtz+4bTBzrZnTsft5qVSoysGPDn59KDLU7jurSRNpEnDaj?=
 =?us-ascii?Q?WDMmITkbL5gbRTutlvxAIFKcGGurnld4SYaR/SelNfMyFWl9N+0MH/L8VnoV?=
 =?us-ascii?Q?yEhEd18e/HCTvFS4qNmjqxGNM80+DF5F/fZQhc48asfzJQCiCejtu7M4iOux?=
 =?us-ascii?Q?RQBi9k+/akxEgysI7mXjd64oJfZ/66gCeXPFFqesPBJUZyF62PZmkrMuJYjm?=
 =?us-ascii?Q?7ctRPE+IjpB63b/OTDJhpBTpvx2KXTZktRY2yeXxgaE/hlhKaVafoide/zlD?=
 =?us-ascii?Q?HacDs8WQziZ4x8Gipm9CIc3gSuDQhxk54/bTvf3HM0wy3oXuFI3wPZhxkNKI?=
 =?us-ascii?Q?p0xuVnnVrfsM21o37uoMXVOapo2w4VHRtobALcSAUD6+w2py/t4GBjpK5i/d?=
 =?us-ascii?Q?DE9dFPyuutjglnzWemR0UHdnDTNLrMAq0VXGPYzMGh+U+sVkjGAoAmPLhsGV?=
 =?us-ascii?Q?g1ORwvA9ISegoIuQYYF0jiUCpx+1jd+axyrM8CbL1buRKFu8Wx5raLSCSECJ?=
 =?us-ascii?Q?BJeXOBxABgKQ4PSWQ5JQogmrof1AjWP6BqQHM3HNigiFyHgTsbOEIKwgmt5d?=
 =?us-ascii?Q?rwYmZXM7my3ZuMFReFDz2DZQwuLhKKT6QQSryFQ2TwRCGP4J+oxspcc+hx+x?=
 =?us-ascii?Q?tc3PNSCgmsQPaQMJktQ8Os0EVB9YwnR/RIa2YOHkvKp/sAQtvm84ukqH4dzo?=
 =?us-ascii?Q?32c2vILBxL0rb4PiMP47GJkceKXfZK0sRaqF9W0gmxi0A+t7tYTJ9I8zfE4y?=
 =?us-ascii?Q?C+SP0m82apGQAP8PzZSPTzENno5NJSy246jTUsCcca4ubu2QV5qUcfjWytNl?=
 =?us-ascii?Q?Oe/jQciHnggOZ8w12vm7LyCLS2j2nT70hRkEHHhEuxiYEsBKnx6byzR2B+nZ?=
 =?us-ascii?Q?mnhRyWGSZaPfEoJ9qJSHvOypQrs5iUx3xB7CnBsEvJFvFGTuZiFmQNamR77C?=
 =?us-ascii?Q?uIVJXezQyXUjF5G5YlkPqQinOx7JdvWPEQqufm0RERtKIC5JoplKa/KLmfM/?=
 =?us-ascii?Q?06PkmmSljtridvatDQrRhmPQaA+/yqxTj5kqd4fXfwnGIYp/1k3JHNQAPadM?=
 =?us-ascii?Q?BAku+hSGHwgWsrVP0RGbztBd2Ijq1+wzd14Bw1URtfepMrvegitnX+363YIc?=
 =?us-ascii?Q?IINSbMBjnNVUCm90gmND6lS/ItDBdORKr2VI2ArqNJZMCRKrI84yXYwyX5x/?=
 =?us-ascii?Q?zlFR6FLHhXSA5RNWsj/T8lkK5IsqRc54pucwov1TuEn6otfLqkzbE4JrL/JW?=
 =?us-ascii?Q?fNqvJUgTAUJLJkv+zfFNakOFFAFsckIF18u7t7IKew5+JcwOaisEqmXULR4T?=
 =?us-ascii?Q?6bblNsmY8WYZw0govdztym9adcLbJwsWzx5oPRnlUXjOdDU2CY9ZiEReTcnp?=
 =?us-ascii?Q?Eg=3D=3D?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 
	jYLgP0/h3wyXNIL/9eam92dZS7rYXv7VqYLFFstBt1fPcfsGF6KXEduYhbfdgg1/D0sVdDiJsA14vcqsWRXehPjO83eh8clMRROeKWfmHYDAsM07PAkI2IwtMX5ihYqyHEx2OJKMWxNaoSfbslPBVo9SLRTz/q7YyLQGtWvr/HSfbrmBXD2hmvoSv6F0niwf1MrOaSmbui7fW2Ew+OOotSB/aJ4YH8OkQIoi6ylctRFojN63HNUNMVzZeU1Bfe0G06/5ybS6lZDTkZnz4TlzothizutcGWE/RcR1bPrcflqeB22clYlskyQIyBkUnZtxtUEuQwv0hXEc7Hh6juTPwMhuLAQyf6EW8UJmqg7vdQ5NnjWWESGAyDiPvyoMJfcFuLa4v5mts2OI3k+HXZbpIVvBQT/1NebLlx0mImCMPXd4a7pEdr6oKpFaQ5uv2qkToIRXrdfuJpq7yV5/dTtz91z2Xy/k14Xgi/xbxBVBepMD6T3FAB/LSz3dAJIz1N6BwaMjwja3/vdBSOvnn9ds9SJOw2Qq6AliIWyyZlfGmF3W6cfOHI2Ivel0T8fS3vvxW/S/AMxNH2PGraXid0O9pIDQK7hZqlTIamSP15kp7vo=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 941f7191-4cea-479a-63e2-08dc8c125c42
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:36:06.6018
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB4709


Christoph,

> Discard and Write Zeroes are different operations, implemented
> by different fallocate opcodes in ubd.  If one fails, the other
> can still work, and vice versa.
>
> Split the code that disables the operations in ubd_handler so that
> only the operation that actually failed is disabled.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:37:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740253.1147258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvsT-0004HI-8P; Fri, 14 Jun 2024 01:37:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740253.1147258; Fri, 14 Jun 2024 01:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvsT-0004HB-59; Fri, 14 Jun 2024 01:37:17 +0000
Received: by outflank-mailman (input) for mailman id 740253;
 Fri, 14 Jun 2024 01:37:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHvsS-0003fk-BC
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:37:16 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f5cd31a-29ee-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 03:37:14 +0200 (CEST)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1OQCG000838;
 Fri, 14 Jun 2024 01:37:00 GMT
Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta01.appoci.oracle.com [138.1.114.2])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymhajasvy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:37:00 +0000 (GMT)
Received: from pps.filterd
 (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E1ShCH020103; Fri, 14 Jun 2024 01:36:59 GMT
Received: from nam11-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam11lp2168.outbound.protection.outlook.com [104.47.57.168])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id
 3ync91yjxn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:36:59 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB4709.namprd10.prod.outlook.com (2603:10b6:510:3d::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25; Fri, 14 Jun
 2024 01:36:57 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:36:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f5cd31a-29ee-11ef-b4bb-af5377834399
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen" <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>,
        Ilya Dryomov <idryomov@gmail.com>,
        Dongsheng Yang <dongsheng.yang@easystack.cn>,
        Roger Pau Monné <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org
Subject: Re: [PATCH 03/14] rbd: increase io_opt again
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-4-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:47:58 +0200")
Organization: Oracle Corporation
Message-ID: <yq1le38tie4.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-4-hch@lst.de>
Date: Thu, 13 Jun 2024 21:36:52 -0400
Content-Type: text/plain
X-ClientProxiedBy: LO4P123CA0495.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1ab::14) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB4709:EE_
X-MS-Office365-Filtering-Correlation-Id: ad25b10d-c159-4839-6203-08dc8c127a74
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad25b10d-c159-4839-6203-08dc8c127a74
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:36:57.3583
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB4709


Christoph,

> Commit 16d80c54ad42 ("rbd: set io_min, io_opt and discard_granularity
> to alloc_size") lowered the io_opt size for rbd from objset_bytes,
> which is 4MB for a typical setup, to alloc_size, which is typically 64KB.
>
> The commit mostly talks about discard behavior and mentions io_min
> only in passing. Reducing io_opt means reducing the readahead size,
> which seems counter-intuitive given that rbd currently abuses the user
> max_sectors setting to actually increase the I/O size. Switch back to
> the old setting to allow larger reads (the readahead size, despite its
> name, actually limits the size of any buffered read) and to prepare
> for using io_opt in the max_sectors calculation, getting drivers out
> of the business of overriding the max_user_sectors value.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:40:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:40:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740259.1147268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvw2-0006AQ-O6; Fri, 14 Jun 2024 01:40:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740259.1147268; Fri, 14 Jun 2024 01:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvw2-0006AJ-LM; Fri, 14 Jun 2024 01:40:58 +0000
Received: by outflank-mailman (input) for mailman id 740259;
 Fri, 14 Jun 2024 01:40:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHvw0-0006A5-Mf
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:40:56 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 22dd7e24-29ef-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 03:40:54 +0200 (CEST)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1Nn6l029010;
 Fri, 14 Jun 2024 01:40:35 GMT
Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta01.appoci.oracle.com [138.1.114.2])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymh7dts7b-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:40:35 +0000 (GMT)
Received: from pps.filterd
 (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E1PiWv020108; Fri, 14 Jun 2024 01:40:34 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2177.outbound.protection.outlook.com [104.47.55.177])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id
 3ync91yn1u-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:40:34 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB5755.namprd10.prod.outlook.com (2603:10b6:510:149::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:40:30 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:40:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22dd7e24-29ef-11ef-b4bb-af5377834399
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen" <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>,
        Ilya Dryomov <idryomov@gmail.com>,
        Dongsheng Yang <dongsheng.yang@easystack.cn>,
        Roger Pau Monné <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van Assche <bvanassche@acm.org>,
        Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 04/14] block: take io_opt and io_min into account for
 max_sectors
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-5-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:47:59 +0200")
Organization: Oracle Corporation
Message-ID: <yq1frtgti7y.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-5-hch@lst.de>
Date: Thu, 13 Jun 2024 21:40:27 -0400
Content-Type: text/plain
X-ClientProxiedBy: BYAPR08CA0055.namprd08.prod.outlook.com
 (2603:10b6:a03:117::32) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: bfd42842-d075-4403-2b5f-08dc8c12f931
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bfd42842-d075-4403-2b5f-08dc8c12f931
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:40:29.8716
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755


Christoph,

> The soft max_sectors limit is normally capped by the hardware limit and
> an arbitrary upper limit enforced by the kernel, but can be modified by
> the user.  A few drivers want to increase this limit (nbd, rbd) or
> adjust it up or down based on hardware capabilities (sd).

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:42:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:42:01 +0000
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, "Martin K. Petersen" <martin.petersen@oracle.com>,
 Richard Weinberger <richard@nod.at>, Anton Ivanov <anton.ivanov@cambridgegreys.com>,
 Johannes Berg <johannes@sipsolutions.net>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Dongsheng Yang <dongsheng.yang@easystack.cn>,
 Roger Pau Monné <roger.pau@citrix.com>, linux-um@lists.infradead.org,
 linux-block@vger.kernel.org, nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
 Bart Van Assche <bvanassche@acm.org>, Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 05/14] sd: simplify the ZBC case in provisioning_mode_store
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-6-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:00 +0200")
Organization: Oracle Corporation
Message-ID: <yq1a5joti66.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-6-hch@lst.de>
Date: Thu, 13 Jun 2024 21:41:28 -0400
Content-Type: text/plain


Christoph,

> Don't reset the discard settings to no-op over and over when a user
> writes to the provisioning attribute as that is already the default
> mode for ZBC devices.  In hindsight we should have made writing to
> the attribute fail for ZBC devices, but the code has probably been
> around for far too long to change this now.
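[Editor's note: the no-op behaviour for ZBC devices described above can be modelled in a few lines. The enum values, function name, and mode strings below are hypothetical simplifications of the sd sysfs store path, for illustration only.]

```c
#include <string.h>
#include <assert.h>

/* Hypothetical, simplified logical block provisioning modes. */
enum lbp_mode { LBP_DISABLE = 0, LBP_UNMAP, LBP_WS16 };

/*
 * Sketch of the simplified store logic: on a zoned (ZBC) device the
 * discard setup is already the no-op default, so a write to the
 * attribute leaves the current mode untouched instead of re-applying
 * the default over and over.
 */
static enum lbp_mode store_provisioning_mode(int is_zbc,
                                             enum lbp_mode current_mode,
                                             const char *buf)
{
    if (is_zbc)
        return current_mode;   /* already the default; nothing to do */
    if (strcmp(buf, "unmap") == 0)
        return LBP_UNMAP;
    if (strcmp(buf, "writesame_16") == 0)
        return LBP_WS16;
    return LBP_DISABLE;
}
```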

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:42:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:42:23 +0000
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, "Martin K. Petersen" <martin.petersen@oracle.com>,
 Richard Weinberger <richard@nod.at>, Anton Ivanov <anton.ivanov@cambridgegreys.com>,
 Johannes Berg <johannes@sipsolutions.net>, Josef Bacik <josef@toxicpanda.com>,
 Ilya Dryomov <idryomov@gmail.com>, Dongsheng Yang <dongsheng.yang@easystack.cn>,
 Roger Pau Monné <roger.pau@citrix.com>, linux-um@lists.infradead.org,
 linux-block@vger.kernel.org, nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
 Bart Van Assche <bvanassche@acm.org>, Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 06/14] sd: add a sd_disable_discard helper
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-7-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:01 +0200")
Organization: Oracle Corporation
Message-ID: <yq14j9wti5b.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-7-hch@lst.de>
Date: Thu, 13 Jun 2024 21:41:58 -0400
Content-Type: text/plain


Christoph,

> Add helper to disable discard when it is not supported and use it
> instead of sd_config_discard in the I/O completion handler.  This avoids
> touching more fields than required in the I/O completion handler and
> prepares for converting sd to use the atomic queue limits API.
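[Editor's note: a minimal sketch of what a sd_disable_discard-style helper does, assuming a miniature stand-in for the queue limits; the struct and field names below only loosely mirror the kernel's struct queue_limits.]

```c
#include <assert.h>

/* Hypothetical miniature of the discard-related queue limits. */
struct fake_discard_limits {
    unsigned int max_discard_sectors;
    unsigned int discard_granularity;
};

/*
 * Sketch: on an unsupported or failed discard, clear only the
 * discard-related limits instead of re-running the full discard
 * configuration from the I/O completion handler.
 */
static struct fake_discard_limits
disable_discard(struct fake_discard_limits lim)
{
    lim.max_discard_sectors = 0;
    lim.discard_granularity = 0;
    return lim;
}
```

Touching only these fields is what keeps the completion path compatible with an atomic queue-limits update model.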

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:43:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:43:01 +0000
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van
 Assche <bvanassche@acm.org>,
        Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 07/14] sd: add a sd_disable_write_same helper
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-8-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:02 +0200")
Organization: Oracle Corporation
Message-ID: <yq1y178s3k1.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-8-hch@lst.de>
Date: Thu, 13 Jun 2024 21:42:31 -0400
Content-Type: text/plain
X-ClientProxiedBy: MN2PR12CA0030.namprd12.prod.outlook.com
 (2603:10b6:208:a8::43) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: 7684e247-7f4a-4cb3-9292-08dc8c13432d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Microsoft-Antispam-Message-Info: 
	=?us-ascii?Q?Km9EgCWJBIytRFLuGA7ouokOZLQwse4psS47QsxoOqXixIL8RxD/Tl6kKxqZ?=
 =?us-ascii?Q?4g1JYn7j4s1Rs9ZdXDaw0CG3LjIyjfIP9ntBiXlI+a/VretYIYdUVIR1s+pB?=
 =?us-ascii?Q?uVQGiVJlnVw4srL4e8wEx6i9V+t/7W4vB+6AZlrNVn1lDUwOrF9hDmNANXET?=
 =?us-ascii?Q?h9C6uh0njdpWP6u8lfr0H8aZdv8pE7bJGh6Jx183rSjtjm/uC9uy6Dbb7Wo1?=
 =?us-ascii?Q?i1NhtuQAqbAUj2nEdCZxFyod+RcA6b7GnzdTc27jNcMz+/KKtYUXF191K8kB?=
 =?us-ascii?Q?cwQDsWWly3O0kFd9MU0VCvm2X579zEjDjfRdXdOuGN59FG2NUY4zw8Ln6XYk?=
 =?us-ascii?Q?LDDNxGTNO08tkC/Z5KnNA70LPDeNWLHWGS/qpJiGNnZpJdkMs7tDkhbxHGtf?=
 =?us-ascii?Q?1DNAdJ2CZ4TQ4kaseyFtv4t38mldxBvjD/kxfx4vcgnVac4KYoq8O1/BFxaD?=
 =?us-ascii?Q?d1s7Tthe+kLX/vZaw11eJIC82KYDvRYtJcfU7u3mRTFwIicIVZi9pO6aScp0?=
 =?us-ascii?Q?wFfzv1KjtvRiZ4TXMFZGQDt8E1d0jmixXQObu/50OB+EIh8JSQwq1kpsOw7W?=
 =?us-ascii?Q?FBQRQuCMmuxWSFSv6amwe9NYg2RtfAc8Bs6r5i1uoEA7Bsc/weJMHFKk54/a?=
 =?us-ascii?Q?XzrfCF5pGs0/E4aYC3IaiE3R8T20ShBkXkN4G549DNm8zCVj5GUimaWlztkB?=
 =?us-ascii?Q?0+e3xBwxKRRuoDi9j1bfkZPjtS3X4u3J0fKLeNqpwiN1yp1IqmP1yEMT++Ch?=
 =?us-ascii?Q?Sf1FtYzyko54CfiGtvBlrPWG06bRmDYYVyVzpyw17hC9XwmNz4naS1It1tC3?=
 =?us-ascii?Q?4zrW1CF0zYrJJ0mPl0JssZ03R9xnQq/6Z2FRh2KyVnxkhVySdMkovYq3Zj2A?=
 =?us-ascii?Q?E6AO6DroRxlA4XyqFFX854QWsncA2nXLczVVvmyPYG0sGcoJcTpMtubv0B1n?=
 =?us-ascii?Q?oA+uJvxxXDnsmVFl67sZgYp+LtEo17LYR51cbL5PUiaZagjGddD54hOvaHnL?=
 =?us-ascii?Q?ZsFdUeJZN4NUOrJ7s7/n7sAQq7CWMnm9Db4Pm8xobUbgl8l39NCKvR4fr+/c?=
 =?us-ascii?Q?i8Lbnse9Z9rwSVkya2R4M3TmyC53sPZkmwBiHBOxzwWSzqm8+0e4dyqnJOcf?=
 =?us-ascii?Q?y5M1Y45tJR3DIaaBrJGlfcBxxXrjWo5NI9XqvejUK7JYHWvYW06Pn+wTJRFM?=
 =?us-ascii?Q?/mzZ07QUYpnrCFKSmxMxjt5yJBQe7n7MM8Ae23SzIH1X9FJIwoSSGjnB/F/m?=
 =?us-ascii?Q?SivWzOQsHyOMw3HvFTtlr1LOlqIbIWywGgEwYyc5Aq8P3iagStJh6torsTcz?=
 =?us-ascii?Q?0Pz0eTMr5bhUOu2JET1gsveL?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?CZXKfKvJjLC0Atf5rQ7h4GKbe6rneF/musq3W7mE8CN2e55OM6dfOx/h+p5c?=
 =?us-ascii?Q?5bQZJCIHgCdNaqHIXnk0zmXGsaqzA4GZbQpy8Yec/OaHeAP47IqiNX8yh02Y?=
 =?us-ascii?Q?lRAq6lndxkG2XOj8nKxwylTai+ldxmM0zE1rO28N0VX7PVTuCegHuaIJtgF4?=
 =?us-ascii?Q?xUx2NwC/7aJuaA2eUX2J13TtmQQJGYqPiePiAbgeesRRDbP3G0XhIPPxvqhe?=
 =?us-ascii?Q?XxMKb5OxZpSZH2bfnsj9sj0xrIpu4nmrOkTE/d+4yhyA4AlMQCp+KV7kdrwm?=
 =?us-ascii?Q?zpKDwDBDVcZ7mXL4O6It10TQS1r0o+nt8zpw1GiJTfBZghkIolLW2OANvBqY?=
 =?us-ascii?Q?p0la3bIlhX7zGdoqUZGyUiZYd0/NfeBlMWWRAfFvGHrjwxTk2McrXW2I4f+8?=
 =?us-ascii?Q?x9dsxyx/ltuD6WZLVaPemJoZqi1mTe0zFB7hnGVPQPECWY3BTfA57sHxuLpn?=
 =?us-ascii?Q?5gy/CMMclMndWQZVPA0exx55d+9W+ZDN609zEPQe2nrzzbISKljtllnnmtqm?=
 =?us-ascii?Q?smWc1RwQ3oBcGaasBqBYDYGWGMeYh2MbQhwCpNvB4LH9CehoYZtVVkjajooN?=
 =?us-ascii?Q?VZMxTyh3JKi5TKKCHnLWkcN7Xm26Ea4TNWP+lXOja+3Zdn+0uTeZ8eyi0I21?=
 =?us-ascii?Q?ta+/JOiFOGa1eNenoAs4JTqxJjXQQ42Uo3lEhwFIr7IdfVg9xu2PJdZyWHEQ?=
 =?us-ascii?Q?Spz87XxTYm2VYx0w2dnr3nZF+JTQotQ5Sbkcu29IDaNWgqkABkOKmVDmlZzX?=
 =?us-ascii?Q?o5PjWau3T09npaYWTOVrpU9zdrhOvsC+9VfYEJKxmacjQRDruxEWwywcUckO?=
 =?us-ascii?Q?4wdABBFeNunFfE+APUM+Ds7gHE5xxno3XTb4CzeiOAG9XUiGh913hbVkKgXL?=
 =?us-ascii?Q?AL/DQ2eMnfp8nsMlCc3nrCVe+ynF3ZFsLJdbVAhW//r78x3lOcka3+skLIVu?=
 =?us-ascii?Q?NsPppSr6lHNGm2UFEq7yyQAN9D6aKdWIfJ4j9byJIzSsZxobq9Kcqw/T2dEV?=
 =?us-ascii?Q?ZYQj+FbiIUvwcC4SSC6fgq70cVWi80ytz5LMTp3ntoNFADJFCzANJo2I7Tcs?=
 =?us-ascii?Q?/+6YpoGpRSK/FWqrzvZMrF1XW0b5/DU8mBEZwibOesTTKxwL/Sw7fkZf7sZc?=
 =?us-ascii?Q?UmB7SsEvkqXA1w2RLyI1NGEZurhsjnXd8hbDqbdEuo+m+o5WOxXREz13bOwH?=
 =?us-ascii?Q?GLsB1RlLL7gHmoxoHHQCcbjAy9XUlEVAazrX9GKstszwf+v05ahH8WYSaUKP?=
 =?us-ascii?Q?CpLIG5Q7EzfaBRcIgykRWoVF8hxvYA3KkDSapSq3lw8KZFszZ3uMiwjtppYR?=
 =?us-ascii?Q?kLTzjj+tIGUW9lbl2WBfPF0xzMBI+EAlrEpHsdGty5Mhx0soyL4g66ru0Sui?=
 =?us-ascii?Q?5NNIiX89sV4H9FUPs+tTlhL98pz8OLlhvV3v1TE6tmKi2ZXGe3M5ds9K4jVo?=
 =?us-ascii?Q?6OEH32G1poYqtn2YGsg3mx6zk4QAapaQbe+kGTJ/j6iP36Wd6bEtVogVbM89?=
 =?us-ascii?Q?XthNJC0x9IzGn9qo0ujAnU5awNeYjGA1e+7HGM+x6YyyjU3SgSiytcg1XLlU?=
 =?us-ascii?Q?wJEpCJ7yyAh5KEN7vHGFRVcPjYP0xc+OJXcvx8SoT4EZXXy42GHIHuKcf9pq?=
 =?us-ascii?Q?Wg=3D=3D?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 
	2Fjd0D9VSNFMlOkFj5ASqqINm+FnJ64mZ/WWIB/pBRnfzxbsllOKESNKaefswS3Wr23irO+8YDrfFWvdPCKPczkIwAgv0tvmYMRi0fvFS9fgk1b08PTs0d5Iqp4gv+/j8Pheh/mv9t15jktiu0rfH0z4vwOgA72W6z2fqq0GJ+7XLFopP5f0ypYGaj2mQGcJA8FS9hRd5xH6CzAyzyqYj+jZEqlvYhSMjAefKPg/w+Ezb1wZ17ODkizm2PZGzQx6udD1ClNMP10l85GmaE4rOF462tUQH8UyUKBG91iHTCGvaFdtp16rRLlwo2mx5bL7uIi+DJKlhBKFgAjyz+v+kCq8b0kE972vJBwI1/vYqzAmVIKEyI3l36KlI4yMZgICWHwvM390qNFgSKYMkb1wgJHVH5/hcmd0V9HclY9mgpguU5HvGSHwT/WDjn4a8A5fqYxp1D9jkQh5fMAcGwMUHAghvNRtAblYWoDlEBvgkwK4y1S9lDAyaQXK/AXmhKNHpCqMwS+PYwMhgn7J8OJb+T6O6z1rIn00vD/LXLOiFbeUDcIuF3MDtKxe/k8QL8+uMcgMj25qjtiPe/YyZ7kmzBhdh9MJNnOtDG4CuZmp3T4=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7684e247-7f4a-4cb3-9292-08dc8c13432d
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:42:33.9955
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NdH+62hBY5gS/mttHiNDJALKi6NjQGfHyyR9oUFXdWpPFOZDmprXs64ib+pnPXaxsVn+oZPMlKoa01F9dNGXN4OtU2opI8fUTYR5/TraW5w=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=885 adultscore=0
 phishscore=0 suspectscore=0 malwarescore=0 mlxscore=0 spamscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140008
X-Proofpoint-ORIG-GUID: timp3nIHCWQgrnjERU0q53rdPcZg0yxJ
X-Proofpoint-GUID: timp3nIHCWQgrnjERU0q53rdPcZg0yxJ


Christoph,

> Add helper to disable WRITE SAME when it is not supported and use it
> instead of sd_config_write_same in the I/O completion handler. This
> avoids touching more fields than required in the I/O completion
> handler and prepares for converting sd to use the atomic queue limits
> API.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:43:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740277.1147308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvyX-0008G0-TL; Fri, 14 Jun 2024 01:43:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740277.1147308; Fri, 14 Jun 2024 01:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvyX-0008Ft-Q0; Fri, 14 Jun 2024 01:43:33 +0000
Received: by outflank-mailman (input) for mailman id 740277;
 Fri, 14 Jun 2024 01:43:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHvyW-0008Fa-6o
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:43:32 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8084ff22-29ef-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 03:43:31 +0200 (CEST)
Received: from pps.filterd (m0333520.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1fQfK010219;
 Fri, 14 Jun 2024 01:43:12 GMT
Received: from phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta02.appoci.oracle.com [147.154.114.232])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymh1gjvvd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:43:11 +0000 (GMT)
Received: from pps.filterd
 (phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E0pne3014361; Fri, 14 Jun 2024 01:43:11 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2177.outbound.protection.outlook.com [104.47.55.177])
 by phxpaimrmta02.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id
 3yncexqe84-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:43:10 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB5755.namprd10.prod.outlook.com (2603:10b6:510:149::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:43:08 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:43:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8084ff22-29ef-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=83/oQlKqI0WIri
	uOwqQN/8vWAiDrwoYh8lQRRYbJelM=; b=O7echI3uaiuL45xLoXuVYsHFNzObkm
	1HQwlq3/NiFw25gbMUjRNHfVgwFQ+MOj3zHrA3fLF/4Y5CsTZg4WiEMhm5lh6rxJ
	YN02g+M389cxusrmQKrF7T9Qq6LdtnKFIztK8F9N1K4AJnewQyxSpTJvGqNDwFJT
	0jpsO0QSFj13SetCHxxAHN5/oPDwCaY77n8MNa0kN1tYz7T2s578xKx9E8oQiiRU
	wUnScjrS0CPRwVyE/QbX3vM4h21CTqqJyrgfJ9CnepUi2nwjxhXPpReEEhUdYOa/
	kKuL/iTWSAkgkISKUhnWbuQQzoeSfa01mwVnTyXnnSON2967uOAOmppQ==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=e4XIjfkf0M73Qwe6rGb4w91fAv4ZF6psgZMW7U08hTgs4TS/poxp02YNX6Bc4Hrz1PsdQBtdyhLH0MT+jlyvZnaXxqq+ozkjHaSIFu7anHoiLzx6eWGKzkoNVxc2kv3JbfQlX4iLA+uEAG9q20ev/VQmYQ/dDOLW/asUulaK8lSNXP9JLJ08Wop1ohvNwMeE/QVf4BfYsWisCyT5R8CoETWd7KUOpX+3iaPucuYXsj4NqZIFMXtGkm45twOF2Qmvn+BeA221ky2DiNksZUVA3fKjfGIG6EqEK0o2HHS/VryIH5Ep6YBTqVH8n5YwE6xF5fP5EcH8HoICSA+IIqj0qQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=83/oQlKqI0WIriuOwqQN/8vWAiDrwoYh8lQRRYbJelM=;
 b=Mzodl4+jOB3iSjnh9C7ouUrY0YPni9HSnOfxnCBpRh3scyDWjQvtisA8KB/xmMG7tDs7k0GD8eXEtf2WfRGCFCTi6DO3T0tC1Ku9YByWdJLyFkF6U7tqeiI7rMj7MukSdGJWPq1ggA/W8AiEgjEsd7VqOlp8EN6EKwN4eWolE5Imz+DGynDRYPlRrYyxLs2fbhdDmpNS5xJepM0tDHAjsjW1HTB6HcW1+0dxWb/ejNcCYjLYlArnEeHKqf9vPZZkNVITdEyfcmHTKXF3BrM8IoUJv3aAlZ/imREUsoCUwnYiq0tgHFmsNSyYmh4qK6zVMyN89mu/9cAYf/75ENx0kQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=83/oQlKqI0WIriuOwqQN/8vWAiDrwoYh8lQRRYbJelM=;
 b=KPQmcLCXnvj+i3xQjTfMGELVJ/p7Zx5bJYyxvHX95P8d4vRIxGGgEJlmKwNEaYlwHdy5If+0s+tgY+D+W3ug1SeGVH/gOhCqXzIEVvGWrcZhywxLOaklEAjxpSfZ2mGdBzuweagaOeYdchP4A0/DVWtU8ZT56tAalkzNMkR7UZg=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van
 Assche <bvanassche@acm.org>,
        Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 08/14] sd: simplify the disable case in sd_config_discard
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-9-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:03 +0200")
Organization: Oracle Corporation
Message-ID: <yq1sexgs3j2.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-9-hch@lst.de>
Date: Thu, 13 Jun 2024 21:43:06 -0400
Content-Type: text/plain
X-ClientProxiedBy: BL1PR13CA0155.namprd13.prod.outlook.com
 (2603:10b6:208:2bd::10) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: 8a12a1cf-0693-4ee1-a985-08dc8c1357a4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Microsoft-Antispam-Message-Info: 
	=?us-ascii?Q?FDqOOREikVKNZitaDZO9HCiUDcGw4sMiLpAs8DWGpz1WQTGxlBxBJScRX7/s?=
 =?us-ascii?Q?MOTz6t0daO/8QhtEBIpDhrLoV3mDmEWt2NkWyylhF3fDxFEBt19DUyNwuJk3?=
 =?us-ascii?Q?cxx5kh1UKjkosPnag5RkWsdZQ6YUdRWQrb0YyU4mtyBFQPjphU6hV6D8i1Gk?=
 =?us-ascii?Q?3umYiSAzXYvfOtad4+AIPAa5Kibt8sQ3/z1Z2wpYbF6sk9tn0kY2TovpJk8v?=
 =?us-ascii?Q?szZ+0N9Jx6Ql8QLfV62KO9BjTh8UWY6HkI8gDOLd6IjXD+SA+TmaLXP3WcbR?=
 =?us-ascii?Q?ZzQ7yvZTD8CYeaeskBfZQzvgoOAKcs5N6pmprHU00q4QpnLYlXBFOdSFZwOJ?=
 =?us-ascii?Q?YqnKtbFjRxPcfQO85vXDmFSllr+a0B19rUFoHEGs7VbHzN+Kpq5kO9mqziZy?=
 =?us-ascii?Q?ixisuhrw1DbTF0iZqh492F0K5iSyy0aJPLvld3wr70U74GtB510s8o9wZuqt?=
 =?us-ascii?Q?i8pExzTjVjiyHWmNp7lnjvBcOwFV9rUqx5uFdwctzYs2HMgW2c1iRh3nzeT2?=
 =?us-ascii?Q?pvTKSSCburRFxgkU1jKe+PcTamSF8807JqnIyOn28SCUN+T/2MV6l2hYhHR8?=
 =?us-ascii?Q?jWqXGE9TMBk8cFe/tn47dEBjjd7in/vZITwkSa8sTtIQva97fyJkzSupbnn0?=
 =?us-ascii?Q?q3ytLBhPJSLfhgMuWcq+J3hVcR2bxad22FJXEdgrGrSSO4gaGO6uufKE3KK3?=
 =?us-ascii?Q?ZXL6832Hb83E0KYtsAJW/QwVwe5c5HEyOS9mgCbDin5oRKeLFzbzMgisjbk9?=
 =?us-ascii?Q?mcZamAwAumOdmsbw00jJmRSIei6co6lyWG0VK1mVBMZlNPGR0sWKCWZ+gABC?=
 =?us-ascii?Q?ZT5sM2QiZxhJFdhsqRlgOfsoonr9eCol2DvrAwsf+yapgV1zpWnd3klll+eR?=
 =?us-ascii?Q?FJhmFKr+83fnuZClNEDqPxnAtg4Q9VBgfRHoA4htK64cO78TPD3q/PA5b0Tz?=
 =?us-ascii?Q?63OJyK61KKTWTE5oYsqAeo2Y24KRDU9EkR+ujnXT+JJKn+WWGTYSQeqFwJlm?=
 =?us-ascii?Q?TtskJJ4x9hYeK6xXQWBQYRzmDIZ1RkjOfU0hD2Dp8O907gs070ij3g0/3C2T?=
 =?us-ascii?Q?I2aHZ25aZEsFYjMIt2kdp/DA+iiDRTggMAil4k21ph03GwNTF2scwhi2xc8S?=
 =?us-ascii?Q?OkqbPdE25KkgqPapBlW2762KpKxU+B/TYO/Gf5JHUTfCrJQrBWjHGhnJrtlK?=
 =?us-ascii?Q?4X9hym1KRKkfq2iI7sL0/kkrOE9hK6US0e4aLnVHN1kA/RMAqlp6j8p6Juhk?=
 =?us-ascii?Q?ZCAQwjdoXo2PaFK0q+WBCNAWGjqwC/lDh5ExnjrXcJNaXqMWgRHebaB7B1eW?=
 =?us-ascii?Q?4S+rC72Q1VSfEFRQhM3DCsco?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?74MkWV0yw1MnSRYanqfmBg1SSrfCVMKKZot3UhbIi6ikIL7w61Hs+JzczNtY?=
 =?us-ascii?Q?GzKaP4CV++RccJgK4z2ZkYmuZHKa1rMqIpHDmVIHOhFVEf6GhueIuUUR+FU8?=
 =?us-ascii?Q?Df5/HUvT61SMQdBUcmoM2BURkobuLnos7Qfb468T3ZQ+pN7cqal1oiFIHp/3?=
 =?us-ascii?Q?FZ/XTmWS7q7jEWI+6L+MYzj6cR8jEX6Ekt+6lyNY35SLYHP7ioOzWoPUEqcF?=
 =?us-ascii?Q?0Y4zqNg8Us1HpdDAa+OnAI2sWFLO7YWktVY40o2smSJ554XCYJbmDT3H5gyq?=
 =?us-ascii?Q?wf1DVdHNO/QMqFA6Qs3EDwyyhI2S+s9aGFJt4DUrS0nFtgpXYqiyP0ZBesWF?=
 =?us-ascii?Q?5MdAk6LctyzSLL5/m5ixUi8wuLDqpZX0a1JCOklEI+KdGQERnbKG7S9rSlV+?=
 =?us-ascii?Q?rDQp3yk/4Z4KxOonnOa8Xd28eIo/WBykgs8IeCviLUKVSwdibge+RvO/Dwom?=
 =?us-ascii?Q?miOGstBiaai7oPeqoXCA+Z0Gd4+r65H8ykSDphXpVQXg9+LWie2FneBJ9Tmj?=
 =?us-ascii?Q?brPwqE5uUaaRHOsN94ibPikjiHVHGmO6xTQf1WrRd3VP59O6AAlCd2pQFwz4?=
 =?us-ascii?Q?QvErGpY1tYkMIcrRx0/hDaWN+zaMv9juZEFmTY9+bQI4A+MT9x+5FJ5y44FW?=
 =?us-ascii?Q?jxS51E7d1WfGa9zlcmzy6eJLm+teha7/5V4PlnuAFYF/UWgziunJm24/47s2?=
 =?us-ascii?Q?tG9Es8a6L4lpMEGOb/O9/Y4+f1SdpH7a8Qi7Aohd6CMGwsCjxgTIFAcnPaYa?=
 =?us-ascii?Q?bVoOhfXbeNkq66NNcvXe/fnEfcZOUwyG4VVBFY6xitlx0jAnnLFTR9x80Xc0?=
 =?us-ascii?Q?ixGEA4ri0wOOk3j9dOR+ecSVoLchWQ/YoiE9+KP3GFHKB8dRP+iEClFcBhYs?=
 =?us-ascii?Q?LB7eoPxNdRYBCKL6s7VNrBJpC8+yyWRpALLR4ACYmOnoZMKM1TBlXSF/XHI1?=
 =?us-ascii?Q?HWGYej51KPHYe+AgByXv5KU/OmODTSVAiLLHCPxxaeRPInF4+y+G88G4vaKS?=
 =?us-ascii?Q?k0M8wlvKSGMHV7w8SDjFBxs6xSvl6i85TIhJlMk3xl0194IrMoFt7R4eCpgI?=
 =?us-ascii?Q?5TQ4un5xjgUGpbr4qZEg4xhXXDP03wrYL0b+Z1LKCfYloHMhUK2CMPfDU6zo?=
 =?us-ascii?Q?wgNkPsGMD68s7ZywATErw2IOx428+Jeh/JE1yUpXeJ4SWfzwpAY9JXQWFAo7?=
 =?us-ascii?Q?K/lgSVgDwQTo8htittMBUUOpursd/dyyW/cjwL2FQCmn2nodBOb9w/XDYuOQ?=
 =?us-ascii?Q?VaTUkXBd3QE9zSYpGT4FuXyE7J/Pww+zdM5ka1s+N7w4vMU36Npvqtna6P8l?=
 =?us-ascii?Q?mhhx4RpGFl3Ifi0nuy6QU1oexmPXzJ1BRK/Wue3OL9JnGGPUbgoQeUdhG1td?=
 =?us-ascii?Q?mi4JnJbmavKc8GuK+2AMH4Ctn0Auz/ffmMsIzdxrtvmKyizfGpM7toy/29Af?=
 =?us-ascii?Q?VOJvKlutzyHloE6+MFjUNnU49viafriX63mmbzEowA5hrwi5pIO/c/aa9hcE?=
 =?us-ascii?Q?z3RBci4qmS7N54J+qTaLtbr+KWpWhokJcbW8yPFJfrAIrVoy7HxMYBdZslwP?=
 =?us-ascii?Q?qd9RKU+ghgOlRe6UUXPu3Dv5/9tHIy1E7oq5alwd3ABIkHzea3dteUP3gAfJ?=
 =?us-ascii?Q?rg=3D=3D?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 
	0lPW65u0YECiqQYUJ7yeLSLftKsAHjN0vcl3MYDlWJhYqGo1dq7NRSVTEsmed34r6CQz1qA+UprMSsSLaCFqU9NrwDmfO2BypP6f+YkFCcr1kHvvnbefQ5DR4GNKDc2/0JZYaqZaYJ0OODxtaaD/H+irq3zwv1OfavA4Xie7DrVYD7pEDynWFXlSHSiw0co95vgDz11sWyrZyd79CAkhGgwF76JaHuyJ69tmRG3XdAbBtukq0HKBfTKAd4sKbHyhdM5rJ8EvoKhv37PhNcMXnHKtunFpyxC6FjD0YSjHh4QdM/t+zm5DdmMa/O36WhFn/ejDLuO6caXNeqZWKxzMHbkD/tLjNA8XoDdpkiu0C2d/XyXEUdqAxRpK4gaEhdok626rnFQLAXzs5YPF5fa4aeG7Nu5kWMOQNlD8/AEUIcWbH2d/NHfdZ5x6IFMZ1jz26mzBcJnNwTJVIfWEFSDR32RpOqNpB/w7YZ5Kbq9iLFH0Jg4XJIjmmuR/R4+tXDgrGyQRXgY4UlI1M51NP0mHyYRpIPBIvUhsdtDBt/81TkUf0vZyY8L7WhrRHc7RR0JgmRDx17bS4TT/2ySZscEE0I2KENwCdp4HGU7d9yEmHzw=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a12a1cf-0693-4ee1-a985-08dc8c1357a4
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:43:08.3496
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Kn0mFWR8Rz6AnwqoM1ejr+zhCAbtriQhJe0d3JC58dDZhpYkaZ1yPc+A5lec2A32DRXSJrUqvJx7Ipwk1IyL5aXMnIKxpgLjSyX6HzL9dA0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0 suspectscore=0
 phishscore=0 bulkscore=0 malwarescore=0 spamscore=0 mlxscore=0
 mlxlogscore=999 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140008
X-Proofpoint-ORIG-GUID: RnxgKt0-4twp6Tm4NBsFYwO1NBlGp6S3
X-Proofpoint-GUID: RnxgKt0-4twp6Tm4NBsFYwO1NBlGp6S3


Christoph,

> Fall through to the main call to blk_queue_max_discard_sectors given
> that max_blocks has been initialized to zero above instead of
> duplicating the call.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:44:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:44:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740284.1147318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvzP-0000T3-9h; Fri, 14 Jun 2024 01:44:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740284.1147318; Fri, 14 Jun 2024 01:44:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvzP-0000Sw-5W; Fri, 14 Jun 2024 01:44:27 +0000
Received: by outflank-mailman (input) for mailman id 740284;
 Fri, 14 Jun 2024 01:44:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHvzN-0000Sk-OE
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:44:25 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a0807081-29ef-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 03:44:24 +0200 (CEST)
Received: from pps.filterd (m0333520.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1fQfN010219;
 Fri, 14 Jun 2024 01:44:08 GMT
Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta01.appoci.oracle.com [138.1.114.2])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymh1gjvwb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:44:07 +0000 (GMT)
Received: from pps.filterd
 (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E0tj8m020118; Fri, 14 Jun 2024 01:44:06 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2175.outbound.protection.outlook.com [104.47.55.175])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id
 3ync91yqbv-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:44:06 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB5755.namprd10.prod.outlook.com (2603:10b6:510:149::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:44:04 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:44:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0807081-29ef-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=+K/bRxHsws1KRz
	eBoVci8ut7cr6RRzSa4Eor49bSvHA=; b=LO5KKTzAux3HRgbSIeAAdNQJuoBFyS
	Qz7XF8QOxaw/DkoTGgqeWi4a9TsTE80/mPOmfWbXiEh+2LrNLJGyZrjrLYTI9afl
	qh9KYySX7q7sC9WFfdQqt4FsMGpIJnQexT+yx/TEUTQh5jJHZypZILFQKKFH2EZr
	bi4LFrjeEM3ZwbbKB1q0M8LB7RttfeS1tVPGdRwHRt55hacMVE3mBLH482+DYdWS
	KjJKvb0I61xkFXEx9YQa69tcdEyKCfCAh6W/3lNXKWXj8AmUqpnpkMSE6MDvBQxN
	dnrNu1vgMHB7X9X+qKAcCqbs+BebICrOpFH5r8m723YYg87mJYkKWSWg==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QIeFpvqgbAHe0A6cQldeNgm+RfJHt4qhC0Bfc2zzh5nHZg5p9s1HxjxmtQ/ct2hi+bE6VapQ7i6kHWKAV2jnubswiX4n5LYnoTKsMdkHshSHkDvjQiTXL6y4+D8keU0HlC2o51uJ4YlzXW5vyvYo/XAFLgaUfSTTqOB8lgnSpMFkBQ6v7GIhlVpahnV3EqemALGwTB/1rWPPA+14BIu1FtMtd/OR9+RxoWz7/9iQKGN+fYgT7/EB8Xp2e5Bld2Ow0NTQg830+KnBIhLzngt74A8eLdM90B9ShrV0HqI4kJk2DeT9UzhPldv2TGKrkhWIbbneuz+vNhjN7g94CysvKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+K/bRxHsws1KRzeBoVci8ut7cr6RRzSa4Eor49bSvHA=;
 b=dMgb8KeX8P3fUUtoXVqaEKJy/dtCnetXlF+Ft+iarB0eytBt1ENkzqGNryN+xe54FRzx6Y9IiSiX/x3Xi5Q640P7jb3FskW5G68HhuWyAwLuhZfdPRwozFLj/BwP6JBXxghhnRc9TjUju9/qc1fNnPIEqKak0qnBa2YBAcx2fdua2nFZ/3QhTTqTVu2r/nLBZZAgg2KAogV0nskpnbX8RucLQa3d43oOlwm9YZ0mmhcUJV5TzCQxp7489zArl2dtTHG8g2IDYZ4xJyyE1h+duFZGCjKoKS1jTcK2EJQhMdcaUbbmq+3q5yq9AtJbXQlSfnZXEjqgmc4zVIp7OTkkbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+K/bRxHsws1KRzeBoVci8ut7cr6RRzSa4Eor49bSvHA=;
 b=qXXD7ZZDQpMHn+JcNYKaTrWFFYzy9aSKkQBeG3T7mYk8BQncw45ht1Z1l/c5ifcJCpgDbiOyUHliDWZd3u8Rz23m+WnSn3XCDG6NaC+rBMdFW1/azkAzyWZfBKDgpULAF6tul8X4d92UT591P9VFn1Homob3VzT7CO/RzWacky0=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van
 Assche <bvanassche@acm.org>,
        Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 09/14] sd: factor out a sd_discard_mode helper
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-10-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:04 +0200")
Organization: Oracle Corporation
Message-ID: <yq1msnos3hl.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-10-hch@lst.de>
Date: Thu, 13 Jun 2024 21:43:58 -0400
Content-Type: text/plain
X-ClientProxiedBy: LO6P123CA0020.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:313::8) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: f9decfb8-e41d-41b5-e5b6-08dc8c1378ca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f9decfb8-e41d-41b5-e5b6-08dc8c1378ca
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:44:03.9615
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxlogscore=999
 bulkscore=0 malwarescore=0 spamscore=0 phishscore=0 mlxscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140008
X-Proofpoint-ORIG-GUID: ICQxOEpYu0amJn_xjXlrmMSH3Ui4nFV5
X-Proofpoint-GUID: ICQxOEpYu0amJn_xjXlrmMSH3Ui4nFV5


Christoph,

> Split the logic to pick the right discard mode into a little helper
> to prepare for further changes.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:45:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:45:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740288.1147328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvzz-0000yl-Ge; Fri, 14 Jun 2024 01:45:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740288.1147328; Fri, 14 Jun 2024 01:45:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHvzz-0000ye-Dz; Fri, 14 Jun 2024 01:45:03 +0000
Received: by outflank-mailman (input) for mailman id 740288;
 Fri, 14 Jun 2024 01:45:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHvzz-0000mE-1V
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:45:03 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b5e51c7e-29ef-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 03:45:01 +0200 (CEST)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1g36C023371;
 Fri, 14 Jun 2024 01:44:43 GMT
Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta03.appoci.oracle.com [130.35.103.27])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymh7dts9x-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:44:43 +0000 (GMT)
Received: from pps.filterd
 (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E1MGYk021157; Fri, 14 Jun 2024 01:44:41 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2174.outbound.protection.outlook.com [104.47.55.174])
 by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3yncayarj2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:44:41 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB5755.namprd10.prod.outlook.com (2603:10b6:510:149::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:44:39 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:44:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5e51c7e-29ef-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=+79uwD+gc7/Azk
	WHaz9elwN98+GlpMxUcxT2+tx+VF4=; b=D12rKqv82d8NSbeQ5vtou5ZNnEwecA
	+D4yu3ri1535ZUKULSCEBTJgnhob5Wx6m2WFNhNRQtwlJ/jqbZdKc/PbJPG7oo7p
	4kP4fRqOGAk/LqwGzf/wYWn8R6A6v1CQFuGZccplGmPYMoX5zV9aPlmb75AlQawi
	+Y+6UarVjAeT+xKvxbgAvFvN0V7cA8IoW7uYKAy9YtfBXFYefwv72a3poAERnv5S
	u5AgKyiTLNZseYm4HX5Nnq876XtqgGlbQGM+gsYoApO9Pi4osoEI+lsan56yCe3w
	V8AG/0HN6H7v1Mxp0IL2vK286pfuW0R1mQeU24Oc5SJyN5nLhOqC5Drw==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EZbr4pqZzVI5DRq8+M6WJ1ANVA1TWpiJoGxkSq8t1ym/Eyw0DDBmmUVgk6zA/mgSxFwJdII+0SUA9qf455l3jQ3/YYJUIvQOgJfAx6ranftCr2EGWHhR9X5fJ97TJLmGxoHwjFVge+EOEUJTUbSE9h4BwZglrpQet+hAYPLOBAtdeouQ8B/PI63SRXwfJ0Ce/LWT8X70aUmiPkquSOaVm6wYEdXXjHpZlf/hohgqPAwCoPo0zxCGKmrN9moMZU0Y+pbWroTQfCOmXoixKtw8JFMw4w07edl7G8g33pEIby2ze7Rv/+2/Nj/v4d6j7StX2ojiW5cQTuVM4Eq3Mt7efA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+79uwD+gc7/AzkWHaz9elwN98+GlpMxUcxT2+tx+VF4=;
 b=bpyeFv4bnpQeTVf7h7/+AlLuUmXtdybdRioxU312w+cQ0vj+s3MuDbNoqeC3MDV833TyNiyru4MjmXQZRIe6iz5D/Zp+zaBHGBQXS0yykAlq7k+zRnpMytSY8XJYOSTherI79mLPG9XxzX9Eisj4xGXsJBlk8Iu7zWzcrIJ5YoK4zh7G3YxRhmpHd7IWOIO3RKntZfVWNjOPtAmf+AJ+JCUozPfgCsmoebK3XBkrO/jZ9lmN3lQW/C1AQJGNQFZe8IDUEA+D6bIPM/QrZMJ6MeddI/ojl+Q0vjC4QEUHngq9FE6aFfdckL05NroNgwaQ+f7CaqZLzCwNJO7HLhB32Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+79uwD+gc7/AzkWHaz9elwN98+GlpMxUcxT2+tx+VF4=;
 b=G1xjlO7v2Ul0QWTppjIp6ElbfkNybUNLWGUMN42OO7YGv99mr0M4Gv4mk9JxOKqfSxFBgPeVtqd0jIHd1QFoHyZYwAeLAQkrPKR5Z+7zedMVQHdpYVn46cYHkhhCj6k3+IOsGJEPT7LdQU8jiqGTJtcB2Nf1sR4j14zEryx1G6I=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van
 Assche <bvanassche@acm.org>,
        Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 10/14] sd: cleanup zoned queue limits initialization
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-11-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:05 +0200")
Organization: Oracle Corporation
Message-ID: <yq1h6dws3gj.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-11-hch@lst.de>
Date: Thu, 13 Jun 2024 21:44:37 -0400
Content-Type: text/plain
X-ClientProxiedBy: BYAPR06CA0064.namprd06.prod.outlook.com
 (2603:10b6:a03:14b::41) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: 8072246a-2772-4be7-99dc-08dc8c138dff
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8072246a-2772-4be7-99dc-08dc8c138dff
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:44:39.5285
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 suspectscore=0
 malwarescore=0 spamscore=0 mlxscore=0 phishscore=0 mlxlogscore=999
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140008
X-Proofpoint-GUID: 4ru3HucXwh0S_MuScbVo-YjxuGcgwBhV
X-Proofpoint-ORIG-GUID: 4ru3HucXwh0S_MuScbVo-YjxuGcgwBhV


Christoph,

> Consolidate setting zone-related queue limits in sd_zbc_read_zones
> instead of splitting them between sd_zbc_revalidate_zones and
> sd_zbc_read_zones, and move the early_zone_information initialization
> in sd_zbc_read_zones above setting up the queue limits.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:48:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:48:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740296.1147338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw2k-0001mI-Ui; Fri, 14 Jun 2024 01:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740296.1147338; Fri, 14 Jun 2024 01:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw2k-0001mB-R0; Fri, 14 Jun 2024 01:47:54 +0000
Received: by outflank-mailman (input) for mailman id 740296;
 Fri, 14 Jun 2024 01:47:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHw2j-0001dt-Og
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:47:53 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b9aef7a-29f0-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 03:47:51 +0200 (CEST)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1fYUg003508;
 Fri, 14 Jun 2024 01:47:28 GMT
Received: from phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com
 (phxpaimrmta01.appoci.oracle.com [138.1.114.2])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymhajat5g-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:47:28 +0000 (GMT)
Received: from pps.filterd
 (phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com [127.0.0.1])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E0GIoR019902; Fri, 14 Jun 2024 01:47:27 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2049.outbound.protection.outlook.com [104.47.70.49])
 by phxpaimrmta01.imrmtpd1.prodappphxaev1.oraclevcn.com (PPS) with ESMTPS id
 3ync91ysmr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:47:27 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB5755.namprd10.prod.outlook.com (2603:10b6:510:149::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:47:24 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:47:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b9aef7a-29f0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=xz2nyBT+Qcfd5X
	dUAydJGxUHEjVbPYk5E5e0VlEOR/4=; b=SRJm5EcXZotiUmvO0lUgUjWERAomZi
	pHyN0P3DZpxyZmKFmP/uLq36q4BpszcTfMCzHmMDrMaEWJyZkjwqk6yR/imk6x1U
	AMJWqlIQ0ZcZ12mY19S6rvtkF6Q6UlmjtRZ4Gle7L3yGLhAscSccaniA63Qdtclk
	lAjMN3Cg/liQOQ9n5IHs6EHePzW/iCyUs5UKRzOka0+URfJnSdLLErnQ5BUue80n
	c3hKV10/y6oRc2YW9IYqcCNcY5MVqbz7atSNErHzcg0VJxHBpEHgLM5YqAyVaN9N
	Xwi+37XYhSj2Mxx6J8rrV5JfSlYsN6adZjCFVsXlupSXTX/T0tyyGo7A==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kDc3m0/C1IoXbj1xRlBWeffJyu0t9kjjuBUrLhLE5Tbh42G1Y14ttsOzmxHzQCzyOdH4ZXttXC0q/sjqelkkpJnWxQmsdvCjTicuh5iTQE8e52e1OCNyyCpwhwsTssq3bmG7iZekdVWUx7Axn5TCYdp3PkQM0IalOz7CfIVV1LYF2lKbmgxrV2xOkoAIDdjgQI8oMhYu7XyiHdAJFegZlfh9qJWkEceCyB6VFH3TMhp7bbiI/7q9APjqHv/TtRzrN+YAhMO8Dgi19Pe/yk7ZFy8uE2sQ7hsrUg9F7yLdAKK6fYuHT82eMKZkDc54XrGroprySsDjeR3Vw1Vo89SaQw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=xz2nyBT+Qcfd5XdUAydJGxUHEjVbPYk5E5e0VlEOR/4=;
 b=a0kJcFLuyr6Wi6k28KiRI5lPtAWtHRN4FyYsqmXdxAqqUpTObU4etiNrYdxaT00oKzusqUIiLOG1Mb8gNT4QK9BYWpSLcdIQn1hIs0wbxgBIrb05jZ1nSVcCV/lOPUl5Kb/0PzFuGuGLJDVmI7OcFdwd2rhWfGNDFzXaH1LTFP39JLaK32SXxlXWevTMfYE4g/eWrQEd6nzkzX3eUXClei2Gi7hY3ZZW3YiywKWDGUNOwQiZjqUoOpBMAKqna9ThfY+BikuN/p2/tTMl5FKl42jeFscuimU2HQV+IDTJzkZmKBBhkIJG3b39KkiX87wNDJe7YuPJ+f6wyHgFQUGPeg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xz2nyBT+Qcfd5XdUAydJGxUHEjVbPYk5E5e0VlEOR/4=;
 b=Tlq4beLQWCVSlGNuvrOMQgnSEXDDhzpBHhF02pQFq5po6OwFo/2oHlN1AzwisG6sXVqIktRhjNiMF7+vbkM7CgYt2H5bO17ihfjbWFFDYf8EdAIzLpTLmse2N+6/PJSGuZTVA6jie7FKPMcF/BixsjeMzF4E1sg+gkZ07L71NkQ=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Damien Le
 Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 11/14] sd: convert to the atomic queue limits API
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-12-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:06 +0200")
Organization: Oracle Corporation
Message-ID: <yq1bk44s3bx.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-12-hch@lst.de>
Date: Thu, 13 Jun 2024 21:47:22 -0400
Content-Type: text/plain
X-ClientProxiedBy: MN2PR11CA0015.namprd11.prod.outlook.com
 (2603:10b6:208:23b::20) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: 9ecc8082-9642-4147-4e4a-08dc8c13f05c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ecc8082-9642-4147-4e4a-08dc8c13f05c
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:47:24.5487
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6JkFr+VqQUFhIflRA7BB03jIEC+3tCSsyt539dzSXThJFJ6QZ5hEbXCOFACo9q04z6U4EgjdA/e/t2L3sdMoZyjv+JF5YSMwx0HReclFCRs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxlogscore=999
 bulkscore=0 malwarescore=0 spamscore=0 phishscore=0 mlxscore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140008
X-Proofpoint-GUID: OQ-Im8B6BvVuOqT1_6GUFXVlargNJvNc
X-Proofpoint-ORIG-GUID: OQ-Im8B6BvVuOqT1_6GUFXVlargNJvNc


Christoph,

> Assign all queue limits through a local queue_limits variable and
> queue_limits_commit_update so that we can't race updating them from
> multiple places, and freeze the queue when updating them so that
> in-progress I/O submissions don't see half-updated limits.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:48:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:48:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740305.1147348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw3W-0002h5-AJ; Fri, 14 Jun 2024 01:48:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740305.1147348; Fri, 14 Jun 2024 01:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw3W-0002gy-66; Fri, 14 Jun 2024 01:48:42 +0000
Received: by outflank-mailman (input) for mailman id 740305;
 Fri, 14 Jun 2024 01:48:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHw3U-0001dt-UA
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:48:40 +0000
Received: from mx0b-00069f02.pphosted.com (mx0b-00069f02.pphosted.com
 [205.220.177.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3801bc9d-29f0-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 03:48:39 +0200 (CEST)
Received: from pps.filterd (m0246630.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1fX4B029893;
 Fri, 14 Jun 2024 01:48:20 GMT
Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta03.appoci.oracle.com [130.35.103.27])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymh3panfb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:48:20 +0000 (GMT)
Received: from pps.filterd
 (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E1VxwD020526; Fri, 14 Jun 2024 01:48:19 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2040.outbound.protection.outlook.com [104.47.70.40])
 by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3yncayatnt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:48:19 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB5755.namprd10.prod.outlook.com (2603:10b6:510:149::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:48:17 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:48:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3801bc9d-29f0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=ohpvYIYQ5Y0rSx
	gbAlfez/Hw2+CM5J1pX2W9CzsjH5w=; b=J0Jki9/Wre9po1SfHT2rp3jbyNr7ko
	IDhjR9lZqhtbUECDHx3bULd3zR72iR53QNhDrUTbaZNeY/407YCTFAZAkqfS7KMF
	v5N9xjWQWTffzxlP977S7/D+UF1t5VZGDZn0DzGWPxaCNiHmHyrnGJIFL7zJStAY
	5zm4azP8DHe2GzYqS8og6dnW7EemRcz5syzkpFQynj8QMZPMdD61sMs4Tpq4hb4f
	3lZwVEtXhJ4Lwq7Cf4L5zE/exWwBtextwTcIjTIG+GPhUcSJgN4dJeBeH/m7/dDF
	RBPq3zcmYZacUmvc3wDPAPyX8H9Zem1Va0fB20QXgTGHcaTUyEtOdwjg==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dxbP85PzWyZ/7IA4Ug/QIIvhmCkRGOxCdhaNgBDsLlAOhfDdTiUcTvFMLGfrnC87PxGf5wezA2Q58USSpxNbEUnHo5sjUjyBwqDQ0BTkX6ksuCpSfPzkiNCn+GGx3h7sESGXryJZ5BFRAVxWqAm1QlwPzvveM4BTDQvm7EL4bX/GbNfA7Zr5PSZmNZ0AT5n9T70m6wYgcocpqfwHIuO9Fg1qOT37KX4D8ntPMAc2KseeWlSxAII7wTuB+blrDceAOTLOB3i5zG2f6D3YAS/Jf093HSy918u3pusS25nUjCzZ0dCyuKUWjNR1n0eoOIoKmd5hhDbQHNQhbf7AGUEfmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ohpvYIYQ5Y0rSxgbAlfez/Hw2+CM5J1pX2W9CzsjH5w=;
 b=HzT2sECt6435M5L/LKfVYgeberuuILMyFGgG+pCBCtn+N6Leuhz1nRQt/tI1+O0eekH+S98+gwdzc6oWIy89qoK7OCbxQU+pWnQdHd5ltMPuhHuZbKiKfMAsdHMeNoJzMtm0QLWitFUhF7jjCt/vrx7gP02A/Rexww76ijf1FdXTNi84wlj+i7mMg+3LazFAg4dUYU/5glJfKTJPCrwfpWlDwjkhR5NvDH88XweIPMuW7UMXIUtUaonf6yaEz3//IBmaZIM/f2IeGUbvxl0r/nicPQpP9hg5r78E5x4KgWahohP0wIQhxxkJFxv47vjP6Qh9lX9w26hnNejP9Gr5Hw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ohpvYIYQ5Y0rSxgbAlfez/Hw2+CM5J1pX2W9CzsjH5w=;
 b=jI6bfMd9Dnf6P2YbHi/7+rC8sMOyR7KNaNR6xgptKB/CnRhgKtSHXrh1p8uHl9c7VGJOE24MKJcdvg6QfaUXGvRNT/3FWAP9rQEBWY54ha1OlqetMVG+5rJoUNJrfi4ZxQNeypT2/ji6ndMGVkf4EAycc37608zi0FD+2brAZjI=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Damien Le
 Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 12/14] sr: convert to the atomic queue limits API
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-13-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:07 +0200")
Organization: Oracle Corporation
Message-ID: <yq15xucs3bj.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-13-hch@lst.de>
Date: Thu, 13 Jun 2024 21:48:15 -0400
Content-Type: text/plain
X-ClientProxiedBy: BLAPR03CA0080.namprd03.prod.outlook.com
 (2603:10b6:208:329::25) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: fe24db9e-b4a8-48b7-155f-08dc8c140fd4
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Microsoft-Antispam-Message-Info: 
	=?us-ascii?Q?OWfCm7UBPM8DG8Pfe8ib4K3jd8JVHQYvJgGArVv3hNKtCAFxrp/XSfkzZWLJ?=
 =?us-ascii?Q?32i7yokaK5DiX19R2NkpKisTzYLfiQHw4+HFe3rd6HUsFnhtxmwzzIGdEnYp?=
 =?us-ascii?Q?6Vr3GB3L9hcQE3OMXbyfDFMmzv0br3iQoNtnduoE3d4M2Id4yoSK2+f1877j?=
 =?us-ascii?Q?vMdEo8OsadqFrJPL23yhag4Y+JAlXmK7KOfe+pEMBEz0iMVLnNaNzUH4hipR?=
 =?us-ascii?Q?JDA4M/M7R/8OmgUntKa9uLm7AgkvGu79Ozd3wl9LeoD8ApVGf+UQbLoiFcsV?=
 =?us-ascii?Q?douv2NEkjcEelgID6F7w+sNhgttNCFul5kquUntIw18i/v67yxubQM+G5ynX?=
 =?us-ascii?Q?MmqO8gd4Uc6NgMfnQcqKHw+lgU77pzNpQ+fzDElE6hF/Omc9Hdyv/8I9K3ut?=
 =?us-ascii?Q?Xf5RFrA6+lDx3TWpdCld9tDZAmdS8gBdjewJeEfFLbdFtZ4MNFMQSyRaDfYB?=
 =?us-ascii?Q?1rFGu2HBYx3NBuhHguk0i3vtQs37r6LoldMgQAA3i6MpVeIBofIGNcr3VRQh?=
 =?us-ascii?Q?GDqJ5qwxwyqJo2VnkR1bJMeXQBegWHcjF6UOg6+pqylZocxpsGqyeSJllAkq?=
 =?us-ascii?Q?FIGphwvndEXzuLe70E6szlQ5UDYH8AvqVZI4YXoDwD9zh2VC2bq4p5uoeV3/?=
 =?us-ascii?Q?RLdRORlGxoCMQUjLwlIOLgt0jdDera5M48crMrrOcNL1X/pw485LAxarGsHG?=
 =?us-ascii?Q?GYSLbheUIGbT/Ndr00j5Q3d00Gq01ZsuCaBaQMNsNHsu76AVfXR9vAe544m1?=
 =?us-ascii?Q?fNRLAlnAEmJAJRr+fD6CI9pCmRHzZN/WU3cAZian9G5WuxaPfAQ3cqILFPHd?=
 =?us-ascii?Q?if6YROIaf7sL0qEpc6iirU0JwvOq7vsI0z0vTzRpmTRwKbNsFYi79z63gpex?=
 =?us-ascii?Q?d6XmmH8EmhjSt0zbSpCWFC8o2AHU6Jif58eqCYRlCHxbdE2RkqVcpO10Tv35?=
 =?us-ascii?Q?ftf66YIHF/tmJa2KlqIyaRocbGEbjmEYxynmq+u7slA0PIG6usWQZ/ZkOgP1?=
 =?us-ascii?Q?n50HX+xw3af6HfgljE6Z3wKsXBHvv3CyKgsp9nELrm5+P3R/9TX4QbFjXhOO?=
 =?us-ascii?Q?74KfWJlIgLqqfVmRcUvybn4Kl1nxci1MT1AiWDlkhn4Dissf4qSf7yKlQjsd?=
 =?us-ascii?Q?dwkfNpwQr8XZ0+bYHdXSMCGqrQN49x36mACxDSkO3EWXgh2WoUkJbG3NgH0k?=
 =?us-ascii?Q?gL2MBAEPR1ZylQuX7Z6/IzeixQERoJqyWLV2ba+xMA3YN3Gz9RiCeD+L07oF?=
 =?us-ascii?Q?o1qnVxfQXgt5He9g6Sj9kE7sM/jnoZ68CSIps333VVpQdMvMBrmr89fvyiHk?=
 =?us-ascii?Q?ClaSWx9ltghOKuRsTmDJz60J?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?Sj/o6BSjsPpq/MCTgy8XPEetesO3ZygJX7qeMnIHcNiwHP33e3M57zKrV9gA?=
 =?us-ascii?Q?hWjItlJ6x9uOLlRSamVFdAUq44TN1YFUgIKMdtvYUdtXSoDNj7YRCMn0zPVA?=
 =?us-ascii?Q?+T6X/osZfIyk0fN5Ou/2Ah9rQhBGESrmjcq+6N4VXZ2KVu/g0pGK/CBsaatq?=
 =?us-ascii?Q?RVuJ9wTbX8Wlsb3XEihNq9SUQqW7T+q6R/Qv6Ot4lutqJMWRp+3Uv8lk8Liu?=
 =?us-ascii?Q?mjkymcOf5qaokdTfnWOoKQIm/dqjOBw1q8BryqdiX+C7C2ez3K6xvwoNGKUx?=
 =?us-ascii?Q?cofLojRc39Q5hTCQAQ29/dLUwX74U73vgNPXbsoLXbDFxXBebp7fyWaa8WEf?=
 =?us-ascii?Q?yF/REFss0qzIxCsI2fzn8iPG6sJNJY/b0g7zGtc56Nj4hzlUTKqo6aB5ieZY?=
 =?us-ascii?Q?JXfu5ELBZCcJGBeDwUD1mZNSOd5Am0v5/+vThZrhLP+3dVDF2aSYxqn+HSmY?=
 =?us-ascii?Q?iyHHYJqeJEBTUpKD1BUQ3XQnqDdRI2QWOkV6kSk6CXwzo2Z0zhI9FeN8GqDX?=
 =?us-ascii?Q?RUpYhqVU7ZmnGxBRZ2Et9BftkaCH0/VEUjFWdGtoJjTkOn+IPjqpAbx1D3vs?=
 =?us-ascii?Q?wyAP6w+wYueENMP5+UuZmaQcwjy9zGP4IOtPNzl8hqTP4CgHqXax3f5pbkQw?=
 =?us-ascii?Q?/gd1+3z8w45SnSYtpv00gwF4Db5V+K9VGewD7ggbgY98dyz7H000Vp7X/7eM?=
 =?us-ascii?Q?D16SdX5grtXYdQID5QtnOHNb6WGehlCWdupotk7W4upf+8aIidXBr9GcrmAV?=
 =?us-ascii?Q?rCOt/UPvU82dKXBZn/bD2aaHvTg8/Uyg0k56IVX5EXSm5BUzzAqLm0Mvw17X?=
 =?us-ascii?Q?QP3pnPUh7vnbffIrZ3oEKlfYTviXZ9axYzhGPj++G6vjtG5XhWwpYfjlCOHb?=
 =?us-ascii?Q?lyftZBgvB4kK6ZjlPJldP6CT6iE3VxqxpG0DVeKWVXMI2FkY8XZ7AhtnbRsQ?=
 =?us-ascii?Q?vt80Tiw+eOgkjLewP/zhc1NpAGuRfn9FPVpsSLtdJkKedltsiwpynHlDDnIs?=
 =?us-ascii?Q?n6+gNn4o/CbnT5D7HlUWdgSS10x/jGyRvb2URR5kNO1mdLKJz2fywO8p9YAx?=
 =?us-ascii?Q?xMABzsl5aa0eBHThXIRoi+rxkvxriivwd03cAdWjgX22FbFFm+yKoRkH5Ioo?=
 =?us-ascii?Q?LNKavTU68K/agkJtL1UPu8D4nYEY38jwkvD3QIif2c3B04hNVO4AtzfplOVB?=
 =?us-ascii?Q?cdunnhsJdvm5UX//CTV0HzvQ8woBeZdZui7bvow9QOn+boXC4EijozIUifgz?=
 =?us-ascii?Q?gsDrMFtUpurUDIc+j50ufe2wnjw831iIIcH/hL23Bd7wmdYuYz0SGYAQDWrl?=
 =?us-ascii?Q?NR6apWsfMQ9ncuJrzpFpZN8Upa9hdYLB0q6pJklmGNnFUuMosVg1bx067Z33?=
 =?us-ascii?Q?s90H6BEqr/hinc0XFs0kTQD6W8XlXOESHMxiSTdw+aJLNGMIcQBczhrVr2ag?=
 =?us-ascii?Q?ZhVLz3BJ2wgmLNa7gHQrid90DJQT4YtfEdwbh/FoPClxO7acokZabzAp0Q48?=
 =?us-ascii?Q?4lebyIyuv6jBtL5+Q4xNX9H4Gh/vVGRX5PHOX0bHZCKKKFpp0vCJ4oR+gdU7?=
 =?us-ascii?Q?Prji6lw5QEZ98ugdWugYBWYQKV8bqWgBx1gf3DmSQ7PaUetMmLVf2MncA9RG?=
 =?us-ascii?Q?lw=3D=3D?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 
	MbpD4eiClUMzRM+aAWx9fHOuDDDOiWGbW0051E80tgtENvFA5sHk47RBiGJzbtqArE7l6iL3ZaUsj8+mQUiwCAUAL7v04oC7KztMXZwf8pqDu0NAtcf7mQ0aLPRn+JPA9/SX/S5a0lulHGDL3Rbkvj/m9AjsCrn0T//+TH6cLtfJNvaCYFvKlznLdZvlm5Ruf4tbsuQTbzixB6yFAIFv5XooS8M98qcrnQ6uItomEfZhgcxeJaESYEi5ZjfiIxzYlglDGHsQQcluIJPZeu0O+aKOudccCZa1bd5+EsS7GPtNRgpGd3V6W/kZyuOC3T2CfTtHvkocTKdQ625Tp332cq019J/s1a8ogXdqUmklKvazVnLCwp0NppdH3ZCevoVUsC/6gzS0hgxPN0WgB8ktdlIBQJqKotMEbi3v/at/vnOE6L2wJXmLRRzJMuOd595ix5rVjamWUlKF0Vg2VbVAClfytjPGPRGxnvAtGvIqm9w+Kop8yBjK9c7TKIxL+zmNIGcGpPkZW1RjYFOivjuAQkwk3/wWzrsiOmZhOTXVKusmtwYgswz98N67GLFdMNrQlZXAaYeuwa74rwsMZv6F9IzT+sR/jKwrgknQhu+9xKE=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fe24db9e-b4a8-48b7-155f-08dc8c140fd4
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:48:17.3419
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: NffmYyIziCYS4phZOecb+IlCnXGO9+m8ol9VCc/2Sbk4/jGK4P3qR9rfnVdk/wWB9+Yp1XhkGYX8N4vXrpRpBSwqmYIJ9S7vEfSYp6anLFg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 suspectscore=0
 malwarescore=0 spamscore=0 mlxscore=0 phishscore=0 mlxlogscore=999
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140009
X-Proofpoint-GUID: lU4zMv3_ZY9DHk9lG_Ex9-ZAo10cCLgm
X-Proofpoint-ORIG-GUID: lU4zMv3_ZY9DHk9lG_Ex9-ZAo10cCLgm


Christoph,

> Assign all queue limits through a local queue_limits variable and
> queue_limits_commit_update so that we can't race updating them from
> multiple places, and freeze the queue when updating them so that
> in-progress I/O submissions don't see half-updated limits.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:49:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740309.1147358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw3v-0003Jp-IN; Fri, 14 Jun 2024 01:49:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740309.1147358; Fri, 14 Jun 2024 01:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw3v-0003Ji-Eh; Fri, 14 Jun 2024 01:49:07 +0000
Received: by outflank-mailman (input) for mailman id 740309;
 Fri, 14 Jun 2024 01:49:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHw3u-00036U-Af
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:49:06 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 477caef7-29f0-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 03:49:05 +0200 (CEST)
Received: from pps.filterd (m0333521.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1fV0a006363;
 Fri, 14 Jun 2024 01:48:47 GMT
Received: from iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta03.appoci.oracle.com [130.35.103.27])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymhf1js89-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:48:46 +0000 (GMT)
Received: from pps.filterd
 (iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E03pnb020427; Fri, 14 Jun 2024 01:48:45 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2171.outbound.protection.outlook.com [104.47.55.171])
 by iadpaimrmta03.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3yncayatub-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:48:45 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by PH0PR10MB5755.namprd10.prod.outlook.com (2603:10b6:510:149::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:48:43 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:48:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 477caef7-29f0-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=fMAKYWmGAZ+qLI
	hrJJYLdlVPumk+F8f9TtYJ2iTb86o=; b=Snp3BwQmul4oKkLdC05ZpJx+dgUglV
	ICS8bt6Rx3AsIgfJuRiYDsdP/ty57jhLjvFKPb45aUd3Nx6HgMgJVaf1aEtXndJQ
	mJSMaflybskHnyljsgTOa5HI2aXBVGqgBpCZ5n+mEZC49xkVrRvVY8TQmDNIqn1J
	JZY3Ubcvz3PUJ9tK/9ELfxqu23OLjhKOBEGQxiKl8yiPiPvO8wF97G3UqB0dolko
	5IaV0KEx0xscZhr4e9lt/d8gvHLg2OBbC4meLyQuuKhcCEMz1L4Q7vcwDV2bjOm7
	rYgiqx5nPV8SJzCLGtPUo7gjfQimAjaAyuzcDXLq4pZtvSfGtbibne7A==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cQi+yUM3P4SqKUIYV9/yaa/9/+ZfaXBZRwj6tKqR8zHv7M0kduVLok87OB5hTAvToQiq4PMrJppxy/NBAUbt3RcJ09kAHHgBTwyUmsTYvap4iAa2K7Wg6L/EzbVFP0RmbLtCzjZGDXPIHI88zqv4O1+mztU0maJKOh0XJlOrMgDpQsUHOs8kjh1dwuk308rJ6dymWss1UxCN8a0thqGZr7IsH4lydrS/EAJ+1cfTZRNEacGGhShapTv60q5IJb9xQnAudY4+9dwUw1ZRLITuRi8Q6rmlSJRED5EWaw9TQH0Elj/rHbvufgwW6dEnOkx6VLnIK3wXxl4U/IlAEM5aTg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=fMAKYWmGAZ+qLIhrJJYLdlVPumk+F8f9TtYJ2iTb86o=;
 b=KGR17uFIk7qNfPFijHKAahYHUBpRQeHw8I+tCuER8DJwRn4jC66ny5GqUZSADk9yLGHUiBEmWScmiBwVBAvwC1kI/+o+UqX5NeTFoXbke/FPA/qYbqvV4+9AR1W2tRZncjiX6+z0SxILVrNrusGI9gEqn4YawTckDmskoibuOCI9PceGW8D2MdH2ZaqhpkdghX8XM3bv1yCjL6P5kVd8VYBklUqX88XRg+AY3Vyx5gerJRQ+H3CgzIX056M+wYhJMoH4Sx7/oCE1IHCUlY9HMCFVArn/bgOhXNm2ETJRAfofX6cINIGE5FFpX6ooEB++i5o5aQc5ISICvHJcimIbtw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fMAKYWmGAZ+qLIhrJJYLdlVPumk+F8f9TtYJ2iTb86o=;
 b=g1GRjIMQ+Ik8vyEPLcJ2Z4IINfo4DACfBP6jbFzmicnqw2jq4MlTjs62QmJhL5ELO+Ol1Y73vGJcsk43ym6P1IH2Q7jGYmJGQdUIm9k68LGg3Qw4Zx5I7pkJIi5XTWOQvGBL0ROIU8+VjWBcaUelJeBJwRSaMuA8eVKq8qEA+FU=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau =?utf-8?Q?Monn=C3=A9?=
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van
 Assche <bvanassche@acm.org>
Subject: Re: [PATCH 13/14] block: remove unused queue limits API
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-14-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:08 +0200")
Organization: Oracle Corporation
Message-ID: <yq1zfroqopc.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-14-hch@lst.de>
Date: Thu, 13 Jun 2024 21:48:41 -0400
Content-Type: text/plain
X-ClientProxiedBy: SJ0PR05CA0162.namprd05.prod.outlook.com
 (2603:10b6:a03:339::17) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|PH0PR10MB5755:EE_
X-MS-Office365-Filtering-Correlation-Id: 47790e30-7971-41e0-ee08-08dc8c141f65
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Microsoft-Antispam-Message-Info: 
	=?us-ascii?Q?UPcHvOW8u5EsHjSoXQH+27BQFD16qdu+oze9Pu51XAfqnb0RQK/YJaOKyQVB?=
 =?us-ascii?Q?evdwCYKVO2v7fcbLDjS9YOFwjzVE/FiwlywN4EPekh3t/XXj9NUAcJN5AXAZ?=
 =?us-ascii?Q?XzxQfmJ/3z4n7WWvYKD9APTgOSjmO4Y8YiLHzavTesR9iNpZmfxKLd5ye/rQ?=
 =?us-ascii?Q?Qo3gmWYXCqjHGAS2Q8oX8ufWencZJbzM/t4VBKUOroRw0+dR/i00+3Ew6x1Z?=
 =?us-ascii?Q?5rWhrEdOZm+YeIxRTFuY1pMFNFU05GMFHoacaFaeAhkpudn9UdQcZ2LNlUm7?=
 =?us-ascii?Q?s77cx06EE+mkl8Cn+2lVrI+leQiTSxj8ohm6h87/+bwfE39MOOOHsKSxkaHz?=
 =?us-ascii?Q?wol/xF2MqWBMDVme6H1cyb1AcfDQjlr7GGeUSjCfSWYvAoBoterEzaAfgZB7?=
 =?us-ascii?Q?ObwMrWiBGS3yEZMwE8PjekPAjOUUrt36PDmG1T04mr65wHTDYSmpVNP/Bh4x?=
 =?us-ascii?Q?3ajepieQM7ZZjVqhQYAWZ4KzYOLS+ci6uzRohhNW9kc7v5j/IyU4NJSc1Dsw?=
 =?us-ascii?Q?27WjP+xfA9xK+WUuEDIuNT+DGpRL87ThsEgmk9vaB2dKU8zD4bPmStxiV2+n?=
 =?us-ascii?Q?vAhC4mrjjBVsIvlIU6od61cRpHvTjlMID0q2XGbnBHgPtTOp7tTU98wUqYLm?=
 =?us-ascii?Q?5SA0eXTo6h0v/+OJcM3lmz3iaena6tMBQCTmbSGca9DP8OrdV/wgtl9uMOtP?=
 =?us-ascii?Q?3GuVNRLY9Lfuv+8mHEG0IhXW5NcL4iZyBZr4Vi5Dz//G06yvtvY/H1uIz03H?=
 =?us-ascii?Q?e3KtlvXvEo0URB52Oieyoq0yhDEX9ZTa7KRmhfTUxMN3hQSXoW7RtQFonkMU?=
 =?us-ascii?Q?mGgunLGOe/HavdEynbHyx4d0patxu/Zb5dMuvcFJKWTL06ALFciGGN5kkHYo?=
 =?us-ascii?Q?ZW7OlBrA+sLEACZhlhysPw+oDG62GPixkpgNZWMdA0CrpJZFRXeHH3MMmjhz?=
 =?us-ascii?Q?JdVga5jGOycV7KMQYGSp4GoieOnU07VFSKa5cksDEViwQCMqXQed9jY8YisS?=
 =?us-ascii?Q?+AFhSubPvhKhyt9WcK7vT3Z18/fgwV5yyPtizOguuNiz7IPnssPp9er+Rd5z?=
 =?us-ascii?Q?FV+zBMar0DHwBtEqxCF/fyaWzwmzTM3Hlez6oxwHZZYfZ1Fo3jQj7oj9Qm0W?=
 =?us-ascii?Q?Q2gV6iXdaCUXYPW+/0PoGwCc9IaCWn1XvGYypyuBVKe3D76Lp/OpsGLXeyPA?=
 =?us-ascii?Q?y1NoWAuN1GZxQQpdkAaPiDNhcjjCsjHG1dW35e1vIKJkhmeSmdGBgbeBkbYM?=
 =?us-ascii?Q?Ye1Sph/mVbI9I+Ax17kDAWWK7Z1RVQD472SB6cn+Z95CozDSZYki8Fv4bqr2?=
 =?us-ascii?Q?Xz0iGMmAMyEwj64mHwkG8KdD?=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: 
	=?us-ascii?Q?6our2LDB0ZxZfDg5ZnURebslzGcDZX1yVsXEStC9WPd28STp1Kr+OFbyducJ?=
 =?us-ascii?Q?sRY7ui08N1cqKaDH5K6JIsTOfDULJZpl2Mr1tdYvL8Esaeyb1QfqbL9N2klX?=
 =?us-ascii?Q?n0F+TOOUBe418qtbzw4kr2I2MTaFaJFDQ4HHX2ZmgbXZ1eLhyh/IlI6atQMz?=
 =?us-ascii?Q?xo1ChTXxGZGjcxzeiZjSgV89FsMd5NDDC3Z530UFba+ncLY/nIGOHEE7dAC/?=
 =?us-ascii?Q?B6mn39qLbLAvB305zx/4JsvmOO/zRqT75f6OZ+9WO9s3RSOws+rNACQuX1cn?=
 =?us-ascii?Q?mbNr0dyjekuJ5sVYfDSnIaS2HvvLD8EYHY4hlRolzt/cZe/0WlUYY7mjkolS?=
 =?us-ascii?Q?/smLIvY/ddiR0KsghMlJuZ6ImFBuRPiYPnXY2W0j4tc0mFr3XAPbsxOrWYAe?=
 =?us-ascii?Q?P4LBzJPLx0HAoeJYCD/Z/QZkFgMcBTPyVAqPFhtvjFZGQdqiBAPKt0klv/km?=
 =?us-ascii?Q?6Nl8QQOGIq2k/FW/0C3KvFbWxvw+5L1s65Z1er2YbmvUuVisDqTLDYWZidfo?=
 =?us-ascii?Q?LJBNDkB3Z0DJeQ7QyaC41/AZkf2u+XRv2Gj6J/9KxCpoSOiQmoqSwZmV5pGT?=
 =?us-ascii?Q?DLfycPYLcgWf0wtuZcMcXhYIb9JA2CvEhNKQ4h/g4/EDc/ks7rdIJwodfhCh?=
 =?us-ascii?Q?BUD3IzJIGR5Lvs2OmrTQ8sn8Lbw3MyBtkSBU/51dpBPIavVI2o+WLJEx+OP7?=
 =?us-ascii?Q?xaRhGIytiLKoyKyWtUgslYOUExNcJjB5eEEobRv5DeOjlZ4HewsCbB6nrrjQ?=
 =?us-ascii?Q?T9cdaY8ZyMJ4Yms6KMW81aJx+AdNswgD7XtrWzQxEdfTSlgv0LAnIl7wPsTv?=
 =?us-ascii?Q?mVvIagLsg+rbNQpxfmaImN34ppTIYMktxfOyaZtc7gxSfzpy4yuiBHWaV098?=
 =?us-ascii?Q?a4PoJ7oaEuCb44GzdAEtQvriKfZVKRBl0Z7YEjEZPAIkoAymWD07nk+kAf6h?=
 =?us-ascii?Q?v/kZ7ZLlsNasX/Fb9kxmNWeg6KuIt9vRtUnyF/CFO2crxbNk5G6raParV1IX?=
 =?us-ascii?Q?loCX8YiEZfdFf/WyDhRYX6kHitv8T5tcVbpFZL4CdM6AirerPFh9a3D6+eUO?=
 =?us-ascii?Q?j/gBUwKYoS++IdbAlZIz8KeIDR3Mq8SNnIfLnKoowJXZzmckpVbkDc+4x/Jz?=
 =?us-ascii?Q?GBBS+JvHTo/pvAfeDS+IYAUnENTeHCFrEpWKUce/DMTrtLMoWIy4pgeHKVkr?=
 =?us-ascii?Q?JAWdow8j/AYvzDfXSD0XmoxDwklyG6cp/Gd9FBNZueB/kJQQlN9nG95j0H8e?=
 =?us-ascii?Q?wd4d7whPgqml148usIdcq6jQQTcKuYmWRcyfME+9Dc3mws2tLww/gcYsHFwG?=
 =?us-ascii?Q?ugbTFiXckqX4F4pfe8Vuezr2jxuijQv3YNnqfx47t6GP5YDxaGVgehxoTET6?=
 =?us-ascii?Q?rg0gq//qVZUe5YEWEqmqlz45PaD04AEqMgDke2V5IRqN9nTjWZStOvoIak+v?=
 =?us-ascii?Q?TfuJS/hJ/01mOj6ur/UlnfQf7OaLf+117ldMIsEGlFo/Xj2+N5krMPW67twR?=
 =?us-ascii?Q?dvyBh/RQKLrZ8PDl8eXU62B6oVJ5L/ivA1KJ13sM8nIGYKiL82Bp9yHuxPK0?=
 =?us-ascii?Q?Hnj8bhI7WxSA8Ce5p9sFCq6uyUDAKfZxDXexQHNY6Pg33nt+aoWTk79DLngZ?=
 =?us-ascii?Q?gg=3D=3D?=
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-ExternalHop-MessageData-0: 
	Wrabk7OQdjUuHg4Sb3dkhxhkURu++SqTIJkhPIiJO/+UMx2JEnXqdgqHxwLCA0Y/bI9SOg4kGvZGyb9IwyK3CE/TbZYnSEe7PHQ4upRpAn/Fn1w5ZHXtRc+xI07Ug1Q6rsS3dEvlC47VtzPHaxEAOQZzXZKdE4alzfOhIs2y8URgP1jSf9st5VIqlIe0OketvmHpAbUTDkF74E8TFd+0fiXFFUtbNItPP9FD/Vbr5DpbPiEme+9SPIrmoMq9VAijrAIZ3R0BSgqIfTqmwT9Q7Mc+Rlqi2ljCsSZzOkcdOPB0ODex1m16pnz08pZ5o+Yruw7Nu6PIl/pVPcKEj0m4QqguhTFUz+4Cnc6nnca7mPvx7/pBJaL7NALcA+V3q670eT7kNugLxs9ZF9ncgWW59SLEBFZuc3c0/OxKcNuOV+KmHzHmQ9POfp1+7GCUMDDKrpXyGaYVh9PgJfMCxG7ZLOChOe0ptqAHr6p3F/tPc+YR/sDjEIBLcV9f+biN0eNqnsHNL17PwAVcvC9GQLRPeCxlq4CPNo/XDogMNqmtHT1ouDg3Rg1F0F0fkCKwjluNte7kCO3siOlHoIa1YAGZ7Nyan4PBLXNEzZuEGhbweeM=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 47790e30-7971-41e0-ee08-08dc8c141f65
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:48:43.4664
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZigB/Kfj4C7T3nO4z+73KeJmxIOb6IV0I/84566gQA/1ZPEMzbw6QiY6bLVTciGMP5VwXxGbgKPpIxDgwbvVrFx1KaeQ5iWxIDd4NzX0k48=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR10MB5755
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 suspectscore=0
 malwarescore=0 spamscore=0 mlxscore=0 phishscore=0 mlxlogscore=999
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2405010000 definitions=main-2406140009
X-Proofpoint-ORIG-GUID: JImaFDinQMRorhjRTuci7drsfCpRHjig
X-Proofpoint-GUID: JImaFDinQMRorhjRTuci7drsfCpRHjig


Christoph,

> Remove all APIs that are unused now that sd and sr have been converted
> to the atomic queue limits API.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 01:50:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 01:50:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740313.1147367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw4w-00059j-R8; Fri, 14 Jun 2024 01:50:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740313.1147367; Fri, 14 Jun 2024 01:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHw4w-00059c-O1; Fri, 14 Jun 2024 01:50:10 +0000
Received: by outflank-mailman (input) for mailman id 740313;
 Fri, 14 Jun 2024 01:50:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OutG=NQ=oracle.com=martin.petersen@srs-se1.protection.inumbo.net>)
 id 1sHw4u-00057k-Oa
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 01:50:08 +0000
Received: from mx0a-00069f02.pphosted.com (mx0a-00069f02.pphosted.com
 [205.220.165.32]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6bf15127-29f0-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 03:50:06 +0200 (CEST)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45E1fQEI003152;
 Fri, 14 Jun 2024 01:49:46 GMT
Received: from iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com
 (iadpaimrmta02.appoci.oracle.com [147.154.18.20])
 by mx0b-00069f02.pphosted.com (PPS) with ESMTPS id 3ymhajat7g-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:49:45 +0000 (GMT)
Received: from pps.filterd
 (iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com [127.0.0.1])
 by iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com (8.17.1.19/8.17.1.19)
 with ESMTP id 45E09lfY036560; Fri, 14 Jun 2024 01:49:44 GMT
Received: from nam11-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam11lp2177.outbound.protection.outlook.com [104.47.58.177])
 by iadpaimrmta02.imrmtpd1.prodappiadaev1.oraclevcn.com (PPS) with ESMTPS id
 3ynce139q8-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 Jun 2024 01:49:44 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com (2603:10b6:510:3d::12)
 by MN2PR10MB4127.namprd10.prod.outlook.com (2603:10b6:208:1d8::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7633.36; Fri, 14 Jun
 2024 01:49:42 +0000
Received: from PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7]) by PH0PR10MB4759.namprd10.prod.outlook.com
 ([fe80::5c74:6a24:843e:e8f7%4]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 01:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bf15127-29f0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=to
	:cc:subject:from:in-reply-to:message-id:references:date
	:content-type:mime-version; s=corp-2023-11-20; bh=7i0BWLigEmvelo
	6RcYuunWojtZaq12LF0xnWgDFBmLo=; b=VV4BITIaOdufl+RFTmJ711ibFCYN62
	XEa6en63RJIgpf2u/xDwHcYYZEYLBQz8QQ78jnQbeiLDhwxGUtnnJcijxMEOlkPK
	SynNEnRPK7Dw+EhzJnuQRp/E3t3tkXCiyk4v55UXzTbAmR6uy70HoLcELMXXMfWB
	w/Lw11/hjogfzalO9V+ktYQlIe9MeVEgv5fIJhBdNzBkTuddvSPTmt6UPz2JE5U6
	cE5E1AWpc96usbpWLD+C+zDc8bKFtSvAVupRZXYyuHsIv5c0iE368C01IT2CJgIH
	AtCxyQZ541+N19oZ/pgTv3SBXQ03kQMRGs+m5vfqjKJtpurI5hvlXx4A==
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YJt4WZtND+QWB8fJmyHVT61nvZ1LxQB267G63DmNhrXPF87/F+/Dz+PFk1DMuAuUGydfWiOrwutkh+XYcOa0K9as5mWnyjShLCs0n+9xiwmOWKEc7Hwh0LRznM6U09EKbwWQtBhmj9IvveWnn8sSWcOkhpy/GABV9XnvZmDSOwP/vKinWgOTXxFM97ih2fjcZSIJLgZxgMG7MzE2zmDgGWtVqbS0NahTyCB5AcFJ4fxph16SkjW89+4Mib2vTKBcTK3jImm70QhOPhg7jAQPEyoHw7KxElUutnYfYvKgUI2J5yCM5FfsTtgrwRuYkbFaNUAkog0Q4CBvNHclbydDNQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7i0BWLigEmvelo6RcYuunWojtZaq12LF0xnWgDFBmLo=;
 b=BCd7NL2GP8tFUwW/BwmZ2u1Mv0b/B4nREMRrRQLLYcGKy0d1qyJQPh1ue5ECTbFJr1xOVTADhcaSK499hMZjyFas0c7qnLMI6CkOPoCflEIooibHxlR9IRv0WbkYmtbpjiz6z2BErDG64fqAqEOsFVfssvwLU4b5kx4elaPN3l62xWRrICa0usTDeq8gVcemQyY2SyQxdiqzZBAOzaIspiLfwK06naYyszZu9q2srqARkvt+C6YCsNWUVgbvi/1AoF/n1jQjUIXbTxRNI9BWO8/14aMhyM9ggRYEDM4ewzqNeL5Wo/8/RYqkUq7kDIk5+v6UOYv0BuGShHOCfDRxfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7i0BWLigEmvelo6RcYuunWojtZaq12LF0xnWgDFBmLo=;
 b=iZOjTw1Vd779mdrhk42hFSvPel+UkvR25ds4lyUhzTsYIsOWQT5+0vOCG4bHnSlIf/zGxWt1SndzQ9F69GrRBhvpNiM9MGQrPJYB0bdgIf/Eik0KRITSzga4xT3S09MqOv/71GHnWFy893ybb/2tVWGHSzl4fR8T5rdu4aoeqPE=
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
        "Martin K. Petersen"
 <martin.petersen@oracle.com>,
        Richard Weinberger <richard@nod.at>,
        Anton
 Ivanov <anton.ivanov@cambridgegreys.com>,
        Johannes Berg
 <johannes@sipsolutions.net>,
        Josef Bacik <josef@toxicpanda.com>, Ilya
 Dryomov <idryomov@gmail.com>,
        Dongsheng Yang
 <dongsheng.yang@easystack.cn>,
        Roger Pau Monné
 <roger.pau@citrix.com>,
        linux-um@lists.infradead.org, linux-block@vger.kernel.org,
        nbd@other.debian.org, ceph-devel@vger.kernel.org,
        xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
        Bart Van
 Assche <bvanassche@acm.org>,
        Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 14/14] block: add special APIs for run-time disabling of
 discard and friends
From: "Martin K. Petersen" <martin.petersen@oracle.com>
In-Reply-To: <20240531074837.1648501-15-hch@lst.de> (Christoph Hellwig's
	message of "Fri, 31 May 2024 09:48:09 +0200")
Organization: Oracle Corporation
Message-ID: <yq1tthwqonx.fsf@ca-mkp.ca.oracle.com>
References: <20240531074837.1648501-1-hch@lst.de>
	<20240531074837.1648501-15-hch@lst.de>
Date: Thu, 13 Jun 2024 21:49:40 -0400
Content-Type: text/plain
X-ClientProxiedBy: SJ0PR03CA0169.namprd03.prod.outlook.com
 (2603:10b6:a03:338::24) To PH0PR10MB4759.namprd10.prod.outlook.com
 (2603:10b6:510:3d::12)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH0PR10MB4759:EE_|MN2PR10MB4127:EE_
X-MS-Office365-Filtering-Correlation-Id: 519fd7ed-4fb4-4a53-bcb4-08dc8c144259
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230035|1800799019|366011|7416009|376009;
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR10MB4759.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 519fd7ed-4fb4-4a53-bcb4-08dc8c144259
X-MS-Exchange-CrossTenant-AuthSource: PH0PR10MB4759.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 Jun 2024 01:49:42.1938
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ajE9wAEicao402gwx5PWNEpRKeQeG7mbf/HQo94ojaR6ee6DEG+736HJnY0TYrvK1/S63WsRd078JW9u0RqdmdkghR0ynrbTmETJtx+MToo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4127
X-Proofpoint-Virus-Version: vendor=baseguard
 engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.28.16
 definitions=2024-06-13_15,2024-06-13_02,2024-05-17_01
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 phishscore=0 adultscore=0
 mlxscore=0 bulkscore=0 malwarescore=0 mlxlogscore=999 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2405010000
 definitions=main-2406140009
X-Proofpoint-GUID: ldAyEroBn7WPHPTa4EEnlL1HfKJO2sYs
X-Proofpoint-ORIG-GUID: ldAyEroBn7WPHPTa4EEnlL1HfKJO2sYs


Christoph,

> A few drivers optimistically try to support discard, write zeroes and
> secure erase and disable the features from the I/O completion handler
> if the hardware can't support them. This disable can't be done using
> the atomic queue limits API because the I/O completion handlers can't
> take sleeping locks or freeze the queue. Keep the existing clearing of
> the relevant field to zero, but replace the old blk_queue_max_* APIs
> with new disable APIs that force the value to 0.

Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>

-- 
Martin K. Petersen	Oracle Linux Engineering


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 03:12:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 03:12:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740330.1147381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHxLz-00036M-O1; Fri, 14 Jun 2024 03:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740330.1147381; Fri, 14 Jun 2024 03:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHxLz-00036F-L7; Fri, 14 Jun 2024 03:11:51 +0000
Received: by outflank-mailman (input) for mailman id 740330;
 Fri, 14 Jun 2024 03:11:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eLuH=NQ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHxLz-000369-1X
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 03:11:51 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20600.outbound.protection.outlook.com
 [2a01:111:f403:2416::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d5489efa-29fb-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 05:11:48 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by IA1PR12MB6331.namprd12.prod.outlook.com (2603:10b6:208:3e3::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.20; Fri, 14 Jun
 2024 03:11:43 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 03:11:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5489efa-29fb-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W/1rwMcORubOtJ+ozlC44yE5MGAfmSm0qiK+J4EwXJeJ6RUukJ9QF2aYEhO/VReQ3KImmOJsTW2pi2Qdo1N9vHZnhIDHk5H1hGiZNeJISbXcRllCPEWa+UawXOCyP8kerL+QNVmR7h5is/lUV4wqjw97xc/AeGZVtYpvthl8Wus+7MXR0zNPRFJ85stjNGffFaAvxVs84d5HyjJWoGZ4e9a2Pyphe6Kxen9pfcIjlHXHoEy70hi+Kf0klFfHZclC+njAp5hZLrSWThh3bDOLfFMbtgZyw39bsz2+JxdVto4s0pDahyOAj9OC9z7skQnpXckdmtfilhdBVJN0CeyeZw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=sTjKdXIljRw0k4xLpeVRBGmFVG5YT/vjrzgp9ae5AcU=;
 b=USCUdpSkuE9ARXtWdM5ISAWIrkqSuhl2AkRsAoqzEm1PfE1F9QhmoIsm31h3GqVhrGy6w5+NzgLo5LqQ7AcUi8Cx2BL5O+7YfALPpkjX2h/dkqzteJeSsOBcnTjtfz1uKb4oPI4kuFBSQZzWtOEeBKMHqAzw6y4juGgBSrqzO7gtpjp7p2Yb/m7IeLjB7BvG23s4wJCW1ej68WJlJpAhL1YhFODxnC4yG1GA55qiPsZqiF66I8nWKEAT4TAR8VYoOJnaTJTBgSop2X6EYwr3H0eC7GE1s7LB6KM6X6Z1WrPASA/fP3LVC/1pZ6J+Likkc6t3J7aiwNwEtNRbEinM2A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sTjKdXIljRw0k4xLpeVRBGmFVG5YT/vjrzgp9ae5AcU=;
 b=JByyy6MIu8p39OPSSYNmvCrtDCl7+em/XZFBrikEPWaqgIghMJ/2GAjMlaYGb72Hg3tVyNj0+TJdyDW4i3xaDr5Y309FSwGK/Kj5AnOuRKs7eWVzukJhbuclB2HXQ15Qs00haLUXWoEm7KWtl7iJ1s3dq945MgMwnR7cWpx+KF4=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Anthony PERARD <anthony.perard@vates.tech>, Jan Beulich
	<jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
	<jgross@suse.com>, "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
	"Huang, Ray" <Ray.Huang@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHauLJn3FT0TCW9KkmRq1iRWqwBt7HCqQ8AgAG9m4D//5AdgIAAiWiAgAEvHwCAAW0dgA==
Date: Fri, 14 Jun 2024 03:11:43 +0000
Message-ID:
 <BL1PR12MB584926B7F6153287479E4CB4E7C22@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
 <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com>
 <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ZmrrNvv2sVaOIS5h@l14>
In-Reply-To: <ZmrrNvv2sVaOIS5h@l14>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.008)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|IA1PR12MB6331:EE_
x-ms-office365-filtering-correlation-id: 96e28b2d-624a-4234-5b70-08dc8c1fb78b
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230035|1800799019|366011|7416009|376009|38070700013;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(1800799019)(366011)(7416009)(376009)(38070700013);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <22BC7D97DAA6164FA8644633BAEEC9C8@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 96e28b2d-624a-4234-5b70-08dc8c1fb78b
X-MS-Exchange-CrossTenant-originalarrivaltime: 14 Jun 2024 03:11:43.0695
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Fe962HF16K5F48o9z7z7OVzlk2i/UbZJpAp7u+506WYJPS1sMBW6OkMS+bcO3Bqbmc2GFDOAaDyRpRvqqJ7JsQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6331

On 2024/6/13 20:51, Anthony PERARD wrote:
> On Wed, Jun 12, 2024 at 10:55:14AM +0000, Chen, Jiqian wrote:
>> On 2024/6/12 18:34, Jan Beulich wrote:
>>> On 12.06.2024 12:12, Chen, Jiqian wrote:
>>>> On 2024/6/11 22:39, Jan Beulich wrote:
>>>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>>>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
>>>>>
>>>>> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
>>>>> you better would avoid making the call in the first place.
>>>> Yes, for PV dom0, the errno is EOPNOTSUPP, then it will do below xc_domain_irq_permission.
>>>
>>> Hence why call xc_domain_gsi_permission() at all on a PV Dom0?
>> Is there a function to distinguish that current dom0 is PV or PVH dom0 in tools/libs?
> 
> That might have never been needed before, so probably not. There's
> libxl__domain_type() but if that works with dom0 it might return "HVM"
> for PVH dom0. So if xc_domain_getinfo_single() works and give the right
> info about dom0, libxl__domain_type() could be extended to deal with
> dom0 I guess. I don't know if there's a good way to find out which
> flavor of dom0 is running.
Thanks Anthony!
I think what we really need to check here is whether the current domain
has the PIRQ flag (X86_EMU_USE_PIRQ) or not.
And it seems xc_domain_gsi_permission already returns that information.
If the current domain has no PIRQs, then I should use
xc_domain_gsi_permission to grant permission; otherwise I should keep
the original function xc_domain_irq_permission.

> 
> Cheers,
> 

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 04:01:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 04:01:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740336.1147391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHy86-00087I-A2; Fri, 14 Jun 2024 04:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740336.1147391; Fri, 14 Jun 2024 04:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHy86-00087B-6x; Fri, 14 Jun 2024 04:01:34 +0000
Received: by outflank-mailman (input) for mailman id 740336;
 Fri, 14 Jun 2024 04:01:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eLuH=NQ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sHy85-000875-En
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 04:01:33 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20624.outbound.protection.outlook.com
 [2a01:111:f403:2414::624])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c78fa9e7-2a02-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 06:01:30 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by CH3PR12MB8546.namprd12.prod.outlook.com (2603:10b6:610:15f::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25; Fri, 14 Jun
 2024 04:01:26 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%6]) with mapi id 15.20.7677.024; Fri, 14 Jun 2024
 04:01:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c78fa9e7-2a02-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=asEC24kCQp7g+PznBUZQQ39JzNW1c4ERXkxCHzSjEdxorS7KtAVUAvYlryDapFJj5mbZ7HRZwebEFTIyJjsUcdnuoqYW6R2te7+42tbI7fPKWN5JuKkly+oN7ySPyin3jI9ZEY0cVxEw5odLOgxk8l0+okpeG60MAEU7nFSLYFu500x7PNmXRJjsLdxlteaFoLD6yAE5Ka1E8sQDAJ8NjRKH66sY/OodjsfTAoFiBDa0p3XLlarFK1L/waIkehI/9neKOQxyruYMn1Ce0pVlrc4bZjpBfTkNY4vOXQ1olj117ufNTJAHTBCO+RhJxSrLaXV3ulaPzwCmmt2iu5l5ew==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=06MsgDEZiAwmFJ2UuVfXnsegMPTTzvj5G+vfGBEw5E0=;
 b=FaO7HoXxWP3Eo8T3OAfYP1Nv2fM/1dKg70YYq84q+csw2IOiMktECqWvyvi2yBm1V1lDHi6UJgvdGYKWKFxrnem1oHH1TRYAlUnI9q2PpbPY3o19fZ0WxECpnHN3j0i8I+QB3Wmg/u6R5+3nnBlDqkYoBNR6uY/WVXywoDBFxX+wM7eVSQkjnWwPlfpVBKSqFkrzYtL+It0tPrxb2hESRvqn2dJH0a+JMEFYWGSgJDrt81snOMPJPDN6xLCHHk+iwYAKPmA0VoI5OZ5HHcU79WlCeceRN51neuk5LSg89XTN5uPGNGNy3eJCRry837gXxkVWGVR40YCrywsdHDKsSg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=06MsgDEZiAwmFJ2UuVfXnsegMPTTzvj5G+vfGBEw5E0=;
 b=tFJ/B4SW/qHGVbnBZT2H8YpIvYSNXB8ApzvKgNcmBCbQlUp5XsN/Kea+1LgWKxhJTgcJUAcGlymgB+gWeDGWDDhVL2+7VJlehJGRHtsGSRXJZ5FH5hXMkjVIHzRWqKYuyw03uPC5qahqqIBUiHJ1qpR8RLcvUiOHYnhQtddJ2WU=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: "Daniel P . Smith" <dpsmith@apertussolutions.com>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Roger Pau Monné
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, Juergen
 Gross <jgross@suse.com>, "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
	"Huang, Ray" <Ray.Huang@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index: AQHauLJn3FT0TCW9KkmRq1iRWqwBt7HCqQ8AgASJ3oA=
Date: Fri, 14 Jun 2024 04:01:26 +0000
Message-ID:
 <BL1PR12MB584991A01727FD8C7BD430BDE7C22@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
In-Reply-To: <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.008)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|CH3PR12MB8546:EE_
x-ms-office365-filtering-correlation-id: 7d961c13-3b37-42fe-c171-08dc8c26a9aa
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230035|376009|366011|7416009|1800799019|38070700013;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230035)(376009)(366011)(7416009)(1800799019)(38070700013);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <2D8FBA5EC224284D88F0C9BA25F0EDF1@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7d961c13-3b37-42fe-c171-08dc8c26a9aa
X-MS-Exchange-CrossTenant-originalarrivaltime: 14 Jun 2024 04:01:26.2539
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: EbmNz5aeJ0ZItYQv/aJJX+sAZzPkBkKJ7Ki+oVPFxOFgHRBIAGC0N22k0QByUJr7JhMubBycE4BJBfNsyKp1lQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8546

Hi Daniel,

On 2024/6/11 22:39, Jan Beulich wrote:
> On 07.06.2024 10:11, Jiqian Chen wrote:
>> +    case XEN_DOMCTL_gsi_permission:
>> +    {
>> +        unsigned int gsi = domctl->u.gsi_permission.gsi;
>> +        int irq = gsi_2_irq(gsi);
>> +        bool allow = domctl->u.gsi_permission.allow_access;
>> +        /*
>> +         * If current domain is PV or it has PIRQ flag, it has a mapping
>> +         * of gsi, pirq and irq, so it should use XEN_DOMCTL_irq_permission
>> +         * to grant irq permission.
>> +         */
>> +        if ( is_pv_domain(current->domain) || has_pirq(current->domain) )
>> +        {
>> +            ret = -EOPNOTSUPP;
>> +            break;
>> +        }
>> +
>> +        if ( gsi >= nr_irqs_gsi || irq < 0 )
>> +        {
>> +            ret = -EINVAL;
>> +            break;
>> +        }
>> +
>> +        if ( !irq_access_permitted(current->domain, irq) ||
>> +             xsm_irq_permission(XSM_HOOK, d, irq, allow) )
> 
> Daniel, is it okay to issue the XSM check using the translated value, not
> the one that was originally passed into the hypercall?
Is it okay?

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 05:45:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 05:45:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740361.1147421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHzk4-0006kb-Ov; Fri, 14 Jun 2024 05:44:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740361.1147421; Fri, 14 Jun 2024 05:44:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHzk4-0006kU-ME; Fri, 14 Jun 2024 05:44:52 +0000
Received: by outflank-mailman (input) for mailman id 740361;
 Fri, 14 Jun 2024 05:44:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzk3-0006kK-RM; Fri, 14 Jun 2024 05:44:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzk3-0003qc-Po; Fri, 14 Jun 2024 05:44:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzk3-0005Jm-DT; Fri, 14 Jun 2024 05:44:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzk3-0004dF-D6; Fri, 14 Jun 2024 05:44:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oigHOoahYBnglx8vA3pKmW30s/PSQSuteww3RVHki1A=; b=rdBh6FPYht+DrIDGwS7FNrA/tF
	gjaRY5SCK8WnghG/BxqzTbBZ7+mbfglLHjwgkLSDGzR4ARdjrGC8wotuIP2oAN0GmDBWdX2LNHlMM
	aglht4v8NJ5jm0j5b2A7zNyzbUlgIWMvibc7pn8ePZTAUdd92Dwhw98LrJdjA6JRhBMc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186341-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186341: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4fdd8d75566fdad06667a79ec0ce6f43cc466c54
X-Osstest-Versions-That:
    xen=401448f2d12f969956fac3ef3d29dc065430fd56
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 05:44:51 +0000

flight 186341 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186341/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4fdd8d75566fdad06667a79ec0ce6f43cc466c54
baseline version:
 xen                  401448f2d12f969956fac3ef3d29dc065430fd56

Last test of basis   186332  2024-06-13 07:15:36 Z    0 days
Testing same since   186341  2024-06-13 20:10:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Jens Wiklander <jens.wiklander@linaro.org>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   401448f2d1..4fdd8d7556  4fdd8d75566fdad06667a79ec0ce6f43cc466c54 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 05:53:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 05:53:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740377.1147450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHzsC-00016H-Ue; Fri, 14 Jun 2024 05:53:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740377.1147450; Fri, 14 Jun 2024 05:53:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sHzsC-00016A-S1; Fri, 14 Jun 2024 05:53:16 +0000
Received: by outflank-mailman (input) for mailman id 740377;
 Fri, 14 Jun 2024 05:53:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzsA-000160-Ui; Fri, 14 Jun 2024 05:53:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzsA-00041N-N7; Fri, 14 Jun 2024 05:53:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzsA-0005YE-8K; Fri, 14 Jun 2024 05:53:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sHzsA-0007ZQ-7y; Fri, 14 Jun 2024 05:53:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0Ab0RVTrj7rnGVFJPMaibuMbsP/F8ji0m+LeRYOAALQ=; b=h2JMOZ0FOgcFvXw4oWrjOUBet5
	ZN3TLvWMbqexNvMk6HDdWd6IgoHAEWC/nUeNe1PDed+8W3Va4kvwxlReW9dl/Fplezk24DyiXPNPP
	KGRRtoOug/I4kJtTFG+M8waDBqt50j5hfxChp5F4cuye/41j/5GXbj8RYQsNNoMKQG9U=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186342-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186342: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-armhf:xen-build:fail:regression
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d20f6b3d747c36889b7ce75ee369182af3decb6b
X-Osstest-Versions-That:
    linux=2ef5971ff345d3c000873725db555085e0131961
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 05:53:14 +0000

flight 186342 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186342/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186314

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-qcow2     1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-raw       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186314
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d20f6b3d747c36889b7ce75ee369182af3decb6b
baseline version:
 linux                2ef5971ff345d3c000873725db555085e0131961

Last test of basis   186314  2024-06-12 00:10:33 Z    2 days
Failing since        186324  2024-06-12 17:12:12 Z    1 days    3 attempts
Testing same since   186342  2024-06-13 21:12:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Csókás, Bence" <csokas.bence@prolan.hu>
  Aleksandr Mishin <amishin@t-argos.ru>
  Andrei Vagin <avagin@gmail.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Borislav Petkov (AMD) <bp@alien8.de>
  Chen Hanxiao <chenhx.fnst@fujitsu.com>
  Csókás, Bence <csokas.bence@prolan.hu>
  David S. Miller <davem@davemloft.net>
  David Wei <dw@davidwei.uk>
  Davide Ornaghi <d.ornaghi97@gmail.com>
  Dmitry Mastykin <mastichi@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Gal Pressman <gal@nvidia.com>
  Geliang Tang <geliang@kernel.org>
  Hongbo Li <lihongbo22@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Beulich <jbeulich@suse.com>
  Jan Kara <jack@suse.cz>
  Jie Wang <wangjie125@huawei.com>
  Jijie Shao <shaojijie@huawei.com>
  Johannes Berg <johannes.berg@intel.com>
  Joshua Washington <joshwash@google.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kory Maincent <kory.maincent@bootlin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lion Ackermann <nnamrec@gmail.com>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  Michael Chan <michael.chan@broadcom.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Naama Meir <naamax.meir@linux.intel.com>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nikolay Aleksandrov <razor@blackwall.org>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Olga Kornievskaia <kolga@netapp.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Pauli Virtanen <pav@iki.fi>
  Petr Pavlu <petr.pavlu@suse.com>
  Rao Shoaib <Rao.Shoaib@oracle.com>
  Rob Herring <robh@kernel.org>
  Ron Economos <re@w6rz.net>
  Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Sagar Cheluvegowda <quic_scheluve@quicinc.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sasha Neftin <sasha.neftin@intel.com>
  Scott Mayhew <smayhew@redhat.com>
  Taehee Yoo <ap420073@gmail.com>
  Tariq Toukan <tariqt@nvidia.com>
  Thorsten Scherer <t.scherer@eckelmann.de>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Udit Kumar <u-kumar1@ti.com>
  Wadim Egorov <w.egorov@phytec.de>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  Yonglong Liu <liuyonglong@huawei.com>
  YonglongLi <liyonglong@chinatelecom.cn>
  Ziwei Xiao <ziweixiao@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 blocked 
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1891 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 06:23:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 06:23:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740388.1147461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0LZ-00079C-35; Fri, 14 Jun 2024 06:23:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740388.1147461; Fri, 14 Jun 2024 06:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0LY-000795-Vu; Fri, 14 Jun 2024 06:23:36 +0000
Received: by outflank-mailman (input) for mailman id 740388;
 Fri, 14 Jun 2024 06:23:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI0LX-00078z-H5
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 06:23:35 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f1fdbab-2a16-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 08:23:33 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a6f09eaf420so205863066b.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 23:23:32 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db67e5sm148611466b.66.2024.06.13.23.23.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 23:23:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f1fdbab-2a16-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718346211; x=1718951011; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=MjPFxLQdffOSAFJB98ZTDFQqB/Q5QxWzfVKUdWHijdE=;
        b=Nn4apzEQRxa4jIHg6yYxcadflY8OulZz7g07c7ppcUefAf6l1FQ/eKJexNgKZKWwpb
         5JqM4oklgtGmqd7zTvBfbjNlgOv0gGzDM9TUQZlDj81fDMIuWjQdbFZ4pE0o08Yf0evZ
         jjvujFxFDvbOgz19nshpahJCima2tYRTsukEHLvWFFL0M6ZdwxFF4HxvpDyM4ssRCYH2
         JuxmXSArFj7wUBQonSjEFk1KxaG6b7weH/gczxVWKjAw4wm0t/rcu9GkMHbYYVK2HtWR
         JVPn7HsRpRNgJEp3FrG/ppvGOmEPZs6Wdf5pLNdl/shoUjHPUwMk59EnVeAvg1N47aZz
         ogvg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718346211; x=1718951011;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MjPFxLQdffOSAFJB98ZTDFQqB/Q5QxWzfVKUdWHijdE=;
        b=MAq2YD+guhgWLyMOJhJGX5b2ufJnhmivdI0M1HQvq5+8h6eobPPFh2WlfGMivS+57t
         r7laTslT8Tp8f/AnmbCQVsTZnvjCD9amCPzGtcU28NDW69D5vJqo711WBtRgxHRrduli
         DSp4rDQ4Y3mcBmFXQQGXoku/u4w2QCYP9bQmRPNlAivSbsE25w498Ua9kLHdGESlNxze
         xiFBrVvg4TgtDNZkhp9ibfuEvkMTuiJObyFOr8IWz98PK3vWXnnDBNAGpNIoemqtXEFv
         bb900u3kFUbKzvTd2w5clIq6bjiq0nVZWNQtpYfEgsE1lkqQnyRXcQqjSEJXcFlP4prs
         X8lQ==
X-Forwarded-Encrypted: i=1; AJvYcCUh4T36Sx0sbh9ghW9QMIiLoVNu+ss+eUjXmYhLZeBz380+Chzmx1I3BlIXuoHXMLR9/hOzYcgH6JsVBnrghLMF0JUf4aIRt+nRfge4HIY=
X-Gm-Message-State: AOJu0Yw3+B+MYu0J51eTzvBqf+wLo1ranbwXlLLnZF/RK+ROmB7mEbdp
	N1uGlVR0D3RgtqJXpb9RUJtd268Kvjw+08TKoaYa8cSRJEHVlJcqXw8NuB/78g==
X-Google-Smtp-Source: AGHT+IHTZl4IDtyta0OsljdSyemYEqeoKnFr3SMoZhE+XFwHdT22MKGr3jX2qUiuIlWKgrkvbh5Xmw==
X-Received: by 2002:a17:906:f74b:b0:a6f:2605:aaaf with SMTP id a640c23a62f3a-a6f60d204a7mr110589666b.22.1718346211414;
        Thu, 13 Jun 2024 23:23:31 -0700 (PDT)
Message-ID: <71f7b9c8-43f9-4703-b6e3-8b3fe8b740c0@suse.com>
Date: Fri, 14 Jun 2024 08:23:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH V3 (resend) 01/19] x86: Create per-domain mapping of
 guest_root_pt
To: Elias El Yandouzi <eliasely@amazon.com>
Cc: julien@xen.org, pdurrant@amazon.com, dwmw@amazon.com,
 Hongyan Xia <hongyxia@amazon.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20240513134046.82605-1-eliasely@amazon.com>
 <20240513134046.82605-2-eliasely@amazon.com>
 <dd145c67-8e3e-4b15-94f7-c7cd1f127d45@suse.com>
 <bda3386e-26c5-4efd-b7ad-00f3643523fa@amazon.com>
 <b50d0a83-fab4-4f59-bf4d-5c5593923f34@suse.com>
 <1ad9ccce-c02b-46c0-8fea-10b35b574cb8@amazon.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <1ad9ccce-c02b-46c0-8fea-10b35b574cb8@amazon.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 18:31, Elias El Yandouzi wrote:
> On 16/05/2024 08:17, Jan Beulich wrote:
>> On 15.05.2024 20:25, Elias El Yandouzi wrote:
>>> However, I noticed quite a weird bug while doing some testing. I may
>>> need your expertise to find the root cause.
>>
>> Looks like you've overflowed the dom0 kernel stack, most likely because
>> of recurring nested exceptions.
>>
>>> In the case where I have more vCPUs than pCPUs (and let's consider we
>>> have one pCPU for two vCPUs), I noticed that I would always get a page
>>> fault in dom0 kernel (5.10.0-13-amd64) at the exact same location. I did
>>> a bit of investigation but I couldn't come to a clear conclusion.
>>> Looking at the stack trace [1], I have the feeling the crash occurs in a
>>> loop or a recursive call.
>>>
>>> I tried to identify where the crash occurred using addr2line:
>>>
>>>   > addr2line -e vmlinux-5.10.0-29-amd64 0xffffffff810218a0
>>> debian/build/build_amd64_none_amd64/arch/x86/xen/mmu_pv.c:880
>>>
>>> It turns out to point on the closing bracket of the function
>>> xen_mm_unpin_all()[2].
>>>
>>> I thought the crash could happen while returning from the function in
>>> the assembly epilogue but the output of objdump doesn't even show the
>>> address.
>>>
>>> The only theory I could think of was that because we only have one pCPU,
>>> we may never execute one of the two vCPUs, and never setup the mapping
>>> to the guest_root_pt in write_ptbase(), hence the page fault. This is
>>> just a random theory, I couldn't find any hint suggesting it would be
>>> the case though. Any idea how I could debug this?
>>
>> I guess you want to instrument Xen enough to catch the top level fault (or
>> the 2nd from top, depending on where the nesting actually starts) to see
>> why that happens. Quite likely some guest mapping isn't set up properly.
>>
> 
> Julien helped me with this one and I believe we have identified the 
> problem.
> 
> As you've suggested, I wrote the mapping of the guest root PT in our 
> per-domain section, root_pt_l1tab, within the write_ptbase() function, 
> since we'd always be in the case v == current and switch_cr3_cr4() 
> would always flush the local TLB.
> 
> However, there exists a path, in toggle_guest_mode(), where we could 
> call update_cr3()/make_cr3() without calling write_ptbase() and hence 
> not maintain the mappings properly. Instead, toggle_guest_mode() has a 
> partly open-coded version of write_ptbase().
> 
> Would you rather see the mappings written in make_cr3() or in 
> toggle_guest_mode(), within its partly open-coded version of write_ptbase()?

Likely the latter, but that's hard to tell without seeing the resulting
code.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 06:27:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 06:27:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740393.1147471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0P6-0007h2-H3; Fri, 14 Jun 2024 06:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740393.1147471; Fri, 14 Jun 2024 06:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0P6-0007gv-EE; Fri, 14 Jun 2024 06:27:16 +0000
Received: by outflank-mailman (input) for mailman id 740393;
 Fri, 14 Jun 2024 06:27:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI0P5-0007gp-Bk
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 06:27:15 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 231680b7-2a17-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 08:27:13 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6f0dc80ab9so282767466b.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 23:27:13 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f4182fsm149764266b.178.2024.06.13.23.27.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 23:27:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 231680b7-2a17-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718346433; x=1718951233; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=uiiGIfn0KxmgRHe7E3PFtjPcK8iBnXHQ2pGIajwaFEM=;
        b=X+A5G9OIM8BpvhT9gsN3o07R5tHO3DoOUOhnlQVog+x3Xw51sqi2kW1+iuaKF445xJ
         1s0WqaoEfsIks3Hma9iPxzJqJmkux60/HgQ5ve4HBb1ww+aaS1P9YWCsaL8rBjJFON+v
         AJdwV0sIjXaEwD/3yAmDgEwj+ZcwpMpkWtuj9Lm8dHS6CG1AUYeBJyWpK33sAVIGXBe/
         RHLDuqcJb1Av0/T9CD7VyLGs8wH1Qe/r0VXJ6c4JC2AHH82LZEqg+OK6cTIH8dNhmsam
         RDxmfI+AFu9JaZYhfhekAvRiMH34Kx3N07CccJZEGWhaEbpsqqAfjUpe0rUz1UTCuvBC
         MMXQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718346433; x=1718951233;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=uiiGIfn0KxmgRHe7E3PFtjPcK8iBnXHQ2pGIajwaFEM=;
        b=Bx1dX1uFRdmRsqbz6tnq9mFaHy+7IohSSqM/hAjkxhgYPvm20ppPxnlC0Wr3lL9Vcf
         Tr4yuizbR8f4UQI0hSBU8H0swXF5NTEsuz8/TdFyXTDYxs9jJLIlrWpOlkXFG84WZ9yk
         EibJyv9za/y/J8ztlFUDNc3ifFfpAR4fOnskE92iNniKYWx8FaQH3Az1+36XZZLF3M+R
         hk/BxhryAk3a0n9eM3X47zEUG1h8j26mKhfSxY1qvslIfuzHiAWkEvgwHAdvCocf9rgx
         VV9H7tvQ5glmcZXto5MqKTnyvPWEJ+ePxIWnI2IQSeXPjAU2eCtH8NQnUSULnvKbMRmX
         LFqA==
X-Forwarded-Encrypted: i=1; AJvYcCXwtQY0WSZdoR1Mh3kwPx6z5y3ngEBnbLvoLjSBgQc4ukws5CifKRK1wgvKyJLSFPtEX01Daa+rDHqetDcLsbR7ODhfLdiECQUQHrlCgVk=
X-Gm-Message-State: AOJu0YxCnlCKRA6C29Dn0C3Q5VfkbJ6Qo6oyazPSFbHf/flfhk3cfG4S
	DaNnfJaF+R3owkkePERN/+dmtJx6dPeVWFj5pbab6ZynIa35HamCY5mbcGk1EQ==
X-Google-Smtp-Source: AGHT+IEZzlIwqPD90GoYGlKcMjMkKB1HtzNzQyksXMZRCRVxkORVHCCs11PdOI/z3vVWVoT153OX6g==
X-Received: by 2002:a17:906:388b:b0:a6f:1475:33a8 with SMTP id a640c23a62f3a-a6f60d29606mr105070866b.17.1718346432789;
        Thu, 13 Jun 2024 23:27:12 -0700 (PDT)
Message-ID: <e493035c-2954-418e-96fb-add1577df59f@suse.com>
Date: Fri, 14 Jun 2024 08:27:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
 <f51b2240-03da-4aee-8972-a72b53916ce1@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f51b2240-03da-4aee-8972-a72b53916ce1@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.06.2024 18:17, Andrew Cooper wrote:
> On 13/06/2024 9:19 am, Jan Beulich wrote:
>> Intel CPUs have an MSR bit to limit CPUID enumeration to leaf two. If
>> this bit is set by the BIOS then CPUID evaluation does not work when
>> data from any leaf greater than two is needed; early_cpu_init() in
>> particular wants to collect leaf 7 data.
>>
>> Cure this by unlocking CPUID right before evaluating anything which
>> depends on the maximum CPUID leaf being greater than two.
>>
>> Inspired by (and description cloned from) Linux commit 0c2f6d04619e
>> ("x86/topology/intel: Unlock CPUID before evaluating anything").
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> While I couldn't spot anything, it kind of feels as if I'm overlooking
>> further places where we might be inspecting, in particular, leaf 7 yet
>> earlier.
>>
>> No Fixes: tag(s), as imo there would be too many that would want
>> enumerating.
> 
> I also saw that go by, but concluded that Xen doesn't need it, hence why
> I left it alone.
> 
> The truth is that only the BSP needs it.  APs sort it out in the
> trampoline via trampoline_misc_enable_off, because they need to clear
> XD_DISABLE prior to enabling paging, so we should be taking it out of
> early_init_intel().

Except for the (odd) case also mentioned to Roger, where the BSP might have
the bit clear but some (or all) AP(s) have it set.

> But, we don't have an early BSP-only hook, and I'm not overwhelmed
> at the idea of exporting it from intel.c
> 
> I was intending to leave it alone until I can burn this whole
> infrastructure to the ground and make it work nicely with policies, but
> that's not a job for this point in the release...

This last part reads like the rest of your reply isn't an objection to me
putting this in with Roger's R-b, but it would be nice if you could
confirm this understanding of mine. Without this last part, especially
the 2nd from last paragraph, it certainly reads a little like an objection.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 06:31:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 06:31:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740400.1147481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0Sz-0001b9-34; Fri, 14 Jun 2024 06:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740400.1147481; Fri, 14 Jun 2024 06:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0Sz-0001b2-0R; Fri, 14 Jun 2024 06:31:17 +0000
Received: by outflank-mailman (input) for mailman id 740400;
 Fri, 14 Jun 2024 06:31:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6dE=NQ=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sI0Sy-0001aw-9L
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 06:31:16 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b2641d6d-2a17-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 08:31:14 +0200 (CEST)
Received: from truciolo.homenet.telecomitalia.it
 (host-82-58-35-96.retail.telecomitalia.it [82.58.35.96])
 by support.bugseng.com (Postfix) with ESMTPSA id 097F64EE0756;
 Fri, 14 Jun 2024 08:31:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2641d6d-2a17-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2] automation/eclair: add deviation for MISRA C Rule 17.7
Date: Fri, 14 Jun 2024 08:31:05 +0200
Message-Id: <5d4294f9a33cd647b6365614d88665b19a89d62b.1718346542.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to deviate some cases where not using
the return value of a function is not dangerous.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v2:
- do not deviate strlcpy and strlcat.
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 4 ++++
 docs/misra/deviations.rst                        | 9 +++++++++
 2 files changed, 13 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661..97281082a8 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -413,6 +413,10 @@ explicit comment indicating the fallthrough intention is present."
 -config=MC3R1.R17.1,macros+={hide , "^va_(arg|start|copy|end)$"}
 -doc_end
 
+-doc_begin="Not using the return value of a function does not endanger safety if it coincides with the first actual argument."
+-config=MC3R1.R17.7,calls+={safe, "any()", "decl(name(__builtin_memcpy||__builtin_memmove||__builtin_memset||cpumask_check))"}
+-doc_end
+
 #
 # Series 18.
 #
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..2a10de5a66 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -364,6 +364,15 @@ Deviations related to MISRA C:2012 Rules:
        by `stdarg.h`.
      - Tagged as `deliberate` for ECLAIR.
 
+   * - R17.7
+     - Not using the return value of a function does not endanger safety if it
+       coincides with the first actual argument.
+     - Tagged as `safe` for ECLAIR. Such functions are:
+         - __builtin_memcpy()
+         - __builtin_memmove()
+         - __builtin_memset()
+         - cpumask_check()
+
    * - R20.4
      - The override of the keyword \"inline\" in xen/compiler.h is present so
        that section contents checks pass when the compiler chooses not to
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 06:39:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 06:39:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740409.1147490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0aO-00032L-R8; Fri, 14 Jun 2024 06:38:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740409.1147490; Fri, 14 Jun 2024 06:38:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0aO-00032E-OJ; Fri, 14 Jun 2024 06:38:56 +0000
Received: by outflank-mailman (input) for mailman id 740409;
 Fri, 14 Jun 2024 06:38:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI0aN-000328-06
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 06:38:55 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c45cec3d-2a18-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 08:38:53 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6f09eaf420so207322766b.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 23:38:53 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db5bafsm149950166b.50.2024.06.13.23.38.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 23:38:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c45cec3d-2a18-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718347133; x=1718951933; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=LueDS0Sam1tzVyoa6y3TzgSgb4R1TaIjJXTdw5fwPjA=;
        b=Dc6F1M9TQLzEHHIrweP8qo/qff439XNDsHGXrJdEoetOs3fSmB2pvjPw9u1qWbeEcv
         UX2dCD43O65O31e1QR83igJQQ/v91wdM3T0jG30wukb9f/SZULmXQqAdvpa5HDBTnssf
         qQIJW9bD9g6lbEkio0+o7tyXNUpC8NIRa7JluXaNiOhnDECPdeyTznaHyR0Cz5Qjsl5R
         I/M6CqXxqkzRnEAMHY/CVWz4jl74mWb5y16GjOkUM5Sr7yNO+Q6sKCfIjTmXxSxlTwkl
         9dkPsap+cLPovJWzv3QXFKOUOv9qnh4UX9MBc6ke5vACiGksO2YhEUYG5mh/ujuxLA4s
         1w9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718347133; x=1718951933;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LueDS0Sam1tzVyoa6y3TzgSgb4R1TaIjJXTdw5fwPjA=;
        b=DZGaGMGKQ/rkQZYX3o+vy3zWM0LEge5kvHzrDgAYM1Rs3qXWmN1hOYOy+nvRMF0Zdy
         Y/pIh2iYUQhRQh+3i1XiBXCUbCPcVWI7XZYrRz+Hbn4ArLzW3wKUHxRe3+N19zU6gCUn
         J5whTK+SGAXXJIajWO1+CXyWp1EqB5idId34i7q4kuA+geXypUu9c4GltQuKtZPbWfLr
         Wgtin9L7bl1ekMKINM+j/aoVRBON7LLfqqbvWtjDtKOCEmYoBrvD6BMZ6mCNnKNByk9x
         Cu0xLF4oSoQ4Z8mH7N6HE+Vsyshsq9r6q9+2dh5VFTTG5gWzceu037Ii7Nizs7cMtNku
         sLSQ==
X-Forwarded-Encrypted: i=1; AJvYcCXMBhYgQp8aZ5Mtr7gsV3y1CzlbD7TZTVD7RQ7W26FIXmDF53hQFPDj8//d1gObsO84C0IiqhE/CdWU8a5uBbwb5aOeufjCZX9gXEd5p94=
X-Gm-Message-State: AOJu0YwO8A7mXJkNYUNQTrXaKSkXf+DZLRAYBPE9l/wYHYNycBcdql0F
	WkySzdTCtANsdnkfvB3V3y+cst2eFjKdnfa9R+TcKph1UT9G6mhs4ezRkK1AZw==
X-Google-Smtp-Source: AGHT+IGAb3o1NZEqsL7bUEt8CMjfPNLENDrWdc7Un+HOKJugXowepRoGNhYMf11blZTz2oMnBTh5Pw==
X-Received: by 2002:a17:906:57c2:b0:a6f:eb8:801a with SMTP id a640c23a62f3a-a6f60dc51c7mr101285166b.56.1718347133292;
        Thu, 13 Jun 2024 23:38:53 -0700 (PDT)
Message-ID: <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
Date: Fri, 14 Jun 2024 08:38:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <Zms9tjtg06kKtI_8@itl-email>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Zms9tjtg06kKtI_8@itl-email>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 20:43, Demi Marie Obenour wrote:
> GPU acceleration requires that pageable host memory be able to be mapped
> into a guest.

I'm sure it was explained in the session, which sadly I couldn't attend.
I've been asking Ray and Xenia the same before, but I'm afraid it still
hasn't become clear to me why this is a _requirement_. After all it's
contrary to what we do elsewhere (i.e. so far it has always been guest
memory that's mapped in the host). I can appreciate that it might be
more difficult to implement, but avoiding a violation of this (kind of)
fundamental rule might be worth the price (and would avoid other
complexities, of which more may be lurking than what you enumerate
below).

>  This requires changes to all of the Xen hypervisor, Linux
> kernel, and userspace device model.
> 
> ### Goals
> 
>  - Allow any userspace pages to be mapped into a guest.
>  - Support deprivileged operation: this API must not be usable for privilege escalation.
>  - Use MMU notifiers to ensure safety with respect to use-after-free.
> 
> ### Hypervisor changes
> 
> There are at least two Xen changes required:
> 
> 1. Add a new flag to IOREQ that means "retry this instruction".
> 
>    An IOREQ server can set this flag after having successfully handled a
>    page fault.  It is expected that the IOREQ server has successfully
>    mapped a page into the guest at the location of the fault.
>    Otherwise, the same fault will likely happen again.

Were there any thoughts on how to prevent this becoming an infinite loop?
I.e. how to (a) guarantee forward progress in the guest and (b) deal with
misbehaving IOREQ servers?

> 2. Add support for `XEN_DOMCTL_memory_mapping` to use system RAM, not
>    just IOMEM.  Mappings made with `XEN_DOMCTL_memory_mapping` are
>    guaranteed to be able to be successfully revoked with
>    `XEN_DOMCTL_memory_mapping`, so all operations that would create
>    extra references to the mapped memory must be forbidden.  These
>    include, but may not be limited to:
> 
>    1. Granting the pages to the same or other domains.
>    2. Mapping into another domain using `XEN_DOMCTL_memory_mapping`.
>    3. Another domain accessing the pages using the foreign memory APIs,
>       unless it is privileged over the domain that owns the pages.

All of which may call for actually converting the memory to kind-of-MMIO,
with a means to later convert it back.

Jan

>    Open question: what if the other domain goes away?  Ideally,
>    unmapping would (vacuously) succeed in this case.  Qubes OS doesn't
>    care about domid reuse but others might.
> 
> ### Kernel changes
> 
> Linux will add support for mapping userspace memory into an emulated PCI
> BAR.  This requires Linux to automatically revoke access when needed.
> 
> There will be an IOREQ server that handles page faults.  The discussion
> assumed that this handling will happen in kernel mode, but if handling
> in user mode is simpler that is also an option.
> 
> There is no async #PF in Xen (yet), so the entire vCPU will be blocked
> while the fault is handled.  This is not great for performance, but
> correctness comes first.
> 
> There will be a new kernel ioctl to perform the mapping.  A possible C
> prototype (presented at design session, but not discussed there):
> 
>     struct xen_linux_register_memory {
>         uint64_t pointer;
>         uint64_t size;
>         uint64_t gpa;
>         uint32_t id;
>         uint32_t guest_domid;
>     };



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 06:41:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 06:41:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740414.1147500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0ch-0004RR-6O; Fri, 14 Jun 2024 06:41:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740414.1147500; Fri, 14 Jun 2024 06:41:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0ch-0004RK-3k; Fri, 14 Jun 2024 06:41:19 +0000
Received: by outflank-mailman (input) for mailman id 740414;
 Fri, 14 Jun 2024 06:41:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI0cf-0004RE-LT
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 06:41:17 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 197588c3-2a19-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 08:41:16 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6f176c5c10so216804566b.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 23:41:16 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f9982asm148447766b.202.2024.06.13.23.41.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 23:41:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 197588c3-2a19-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718347276; x=1718952076; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=MwZ/lLOCD2cBUb21EwgNs8l6zW8XPvoE717Wle2NtRw=;
        b=E39CjanTa62udH/SH7uQbEalV38/aNhZhu/3yzoJZ3hMPkEcYBd/JIJ9tZ+WE5dj+T
         EM7M+275dgi3CcsAeAbcC2OLA+5Hqakt4RI6zAOJHQ/SWEB3RSRTJblpDTaPK3yQuhZ9
         Vg3X6zo+jUev70VlmybqIGNq+j09hP/8pHJ4I2Awm2MdTPTjHKjsUre6RZPo1VwXWiXY
         hApHDOKGTQ0/7JS3kapFTfSln5OSQNrK2ukzjOCuRnt34n7XUVkeHYwR8voyRsUmHy5w
         d0vl3bFksjVKEMQWTaCFweUvbPuuwFncAS6C5gLfgIWOyMzT6d6prMSAPFZ/l4nwISz2
         0+Kw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718347276; x=1718952076;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MwZ/lLOCD2cBUb21EwgNs8l6zW8XPvoE717Wle2NtRw=;
        b=pR0B7mYHeXmBhSUVPDQMWWK1tN1IoWXxsvyV2Mxp2xlM1xLX6dsgDwPnUwMdR2fzgf
         4n3jUUj8byOZ+TeKysYe5R5xz7YlNxb67gIW0DK3cJlMSkjZPO8WBO9PON5eFlpJRI+m
         C6x1PSWzK/NGU7qWxzuWdllGNTeqUUjKxWTescJ64zaPQGFrz8jlTi4DhtmV9xcx3Hcu
         u3vEcQpz9a4mrw+EpsjDuXxHZbHSlCz39kN3VKm0JBq8bPC9enlQQuJB8r3Z4OpOLyFh
         Va9NLD6VpF9wor4fwegKMAOsZbdBBsfeEoSJ6kngMTzOH863v3QkHYKLJCQ90jfkqb39
         6n4A==
X-Forwarded-Encrypted: i=1; AJvYcCXum6VdKAPcPyQIykB8nz/J/DCaHuLvxJrdgqxeTCSm5hKGuuxxfyA6Awb7tiFzM7WRwg9lUQcqkZ01ko4O8tnU2N/1dQdWpUWvjoF+Wyg=
X-Gm-Message-State: AOJu0YwTeO6oYqCTzJZHybiIGUHCubBZ5jZldEpWcrLMf3DvFBtbQX10
	wI6hniEIlRsjKLscCt0OWst58QY4EaxVgLL9D2bjUeqrnmfHSqcQ8rIap3qoHA==
X-Google-Smtp-Source: AGHT+IHxw9dymhwNcdPDIW8z5Ady2yM7Yig0iXHZIjxpnNeDR/cutci+Fp8LeIyC97A+zyRYvo320A==
X-Received: by 2002:a17:906:3e97:b0:a6f:1590:ab06 with SMTP id a640c23a62f3a-a6f60d298c6mr114262566b.31.1718347276040;
        Thu, 13 Jun 2024 23:41:16 -0700 (PDT)
Message-ID: <3fde1817-72a6-484f-9777-567b062c1913@suse.com>
Date: Fri, 14 Jun 2024 08:41:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>,
 Anthony PERARD <anthony.perard@vates.tech>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Juergen Gross <jgross@suse.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
 <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com>
 <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ZmrrNvv2sVaOIS5h@l14>
 <BL1PR12MB584926B7F6153287479E4CB4E7C22@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB584926B7F6153287479E4CB4E7C22@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 14.06.2024 05:11, Chen, Jiqian wrote:
> On 2024/6/13 20:51, Anthony PERARD wrote:
>> On Wed, Jun 12, 2024 at 10:55:14AM +0000, Chen, Jiqian wrote:
>>> On 2024/6/12 18:34, Jan Beulich wrote:
>>>> On 12.06.2024 12:12, Chen, Jiqian wrote:
>>>>> On 2024/6/11 22:39, Jan Beulich wrote:
>>>>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>>>>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
>>>>>>
>>>>>> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
>>>>>> you better would avoid making the call in the first place.
>>>>> Yes, for PV dom0, the errno is EOPNOTSUPP, then it will do below xc_domain_irq_permission.
>>>>
>>>> Hence why call xc_domain_gsi_permission() at all on a PV Dom0?
>>> Is there a function to distinguish that current dom0 is PV or PVH dom0 in tools/libs?
>>
>> That might have never been needed before, so probably not. There's
>> libxl__domain_type() but if that works with dom0 it might return "HVM"
>> for PVH dom0. So if xc_domain_getinfo_single() works and gives the right
>> info about dom0, libxl__domain_type() could be extended to deal with
>> dom0 I guess. I don't know if there's a good way to find out which
>> flavor of dom0 is running.
> Thanks Anthony!
> I think what we really need to check here is whether the current domain has the PIRQ flag (X86_EMU_USE_PIRQ) or not.
> And it seems xc_domain_gsi_permission already returns that information.

By way of failing, if I'm not mistaken? As indicated before, I don't
think you should invoke the function when it's clear it's going to fail.

Jan

> If the current domain has no PIRQs, then I should use xc_domain_gsi_permission to grant permission; otherwise I should
> keep the original function xc_domain_irq_permission.
> 
>>
>> Cheers,
>>
> 



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 06:46:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 06:46:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740421.1147510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0hF-00052l-OI; Fri, 14 Jun 2024 06:46:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740421.1147510; Fri, 14 Jun 2024 06:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0hF-00052e-LO; Fri, 14 Jun 2024 06:46:01 +0000
Received: by outflank-mailman (input) for mailman id 740421;
 Fri, 14 Jun 2024 06:46:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI0hE-00052Y-Fd
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 06:46:00 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c1f93146-2a19-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 08:45:59 +0200 (CEST)
Received: by mail-ed1-x530.google.com with SMTP id
 4fb4d7f45d1cf-57a5bcfb2d3so972208a12.3
 for <xen-devel@lists.xenproject.org>; Thu, 13 Jun 2024 23:45:59 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72da83asm1832865a12.25.2024.06.13.23.45.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 Jun 2024 23:45:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1f93146-2a19-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718347559; x=1718952359; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=PXgLrd85ARxcZJPOzvD6zAvkCTekLQfVFWsxeuf+ouI=;
        b=UA7IZQKDDyAIbooLVsav7vKmgGtGmEyv7QCAhb9Obt9JlrdgaJc0Ho/zaIK8dB3xNs
         cCkxYpH1YCYo2IEV3+/l60FIrNDhmi7LeBol7XEHNtKajY0lq0cRNq9II9jKu1Zysnth
         B8PkXS7aaD7vJGUl0VqrT2OPVcPt5eqnGnzNVr1SnO+qY2XKGpVFHpOoX2t6tiqYYlHm
         8oyLhuWmmWv5EkWe/fw7TfQ+OBqqLwwP75rbI58ecHu1ECfRcwJWNUQTG1eZZ8aJy4BL
         YgfGp9lHVu1h16V1YGvsQkS1G1wQp5VN2x08I2l0AZwub4cMCzCqaRqtYVkPRC6cYqmI
         OVFw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718347559; x=1718952359;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PXgLrd85ARxcZJPOzvD6zAvkCTekLQfVFWsxeuf+ouI=;
        b=cOhRfBRnQOMeubE7x68ZRs7yP01qe4hwyv48X5izVFyfB3EJKGwJrPQR3x4fVada+C
         fja0EUKZv2v4H1yYnD/RNvXCyOybLtztkuTgyCfArGGAsd9rbG/0Vz+ZXP0dn73SjKLt
         fI0kxYmpp4c+MhclPAy+4QbMNu9k/wFrBE3Fk8xuz9ByrtHOWnfYqisBlL+oFYjeR9HQ
         AcOiEEwLUqSfAz+BMaNTGHdeLT2L/QNzc+N2vqbahodiDghjzKviNg9PO2fUzUcEyV0M
         68vF9wW0jurTxlilzAK4TPqXW+YaqSd1maNgtOYUOFLvSP4vqfLafTAlS/Co4ZVW98uX
         FOsA==
X-Forwarded-Encrypted: i=1; AJvYcCXfE0mRyOCP6sUwKzPrKMy7va5RuqPShczWiP/1Fydwu5rlPv6Oy4k6n3OXD5bG83BTz1P1JxKsZBjnLhZ3BSzZ0KXTp5BQIEK1JY6PAiw=
X-Gm-Message-State: AOJu0Yz7qaGMKs44XQVPHBZp2wTD/7pE/Np5BXy9YAVIcWUhhkcHpG+r
	LZ7/OHFbMiqNaq/YWZPmvrg94J7l9zXAif6BDwhX8dkg8W1GTbfOTjjr07jeQA==
X-Google-Smtp-Source: AGHT+IEuLd6zpPtA1Z4B4oXzHduerSkaHlqsAzYyrUhzcH4XJukV6HFr/baqtZBi36kG6DGFGCTY6A==
X-Received: by 2002:a50:cd0e:0:b0:57c:a7fe:b155 with SMTP id 4fb4d7f45d1cf-57cbd68487bmr1409480a12.15.1718347558734;
        Thu, 13 Jun 2024 23:45:58 -0700 (PDT)
Message-ID: <39020e27-0b25-4b7e-b7d6-6171f4e30c41@suse.com>
Date: Fri, 14 Jun 2024 08:45:57 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] automation/eclair: add deviation for MISRA C Rule
 17.7
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <5d4294f9a33cd647b6365614d88665b19a89d62b.1718346542.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <5d4294f9a33cd647b6365614d88665b19a89d62b.1718346542.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 14.06.2024 08:31, Federico Serafini wrote:
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -413,6 +413,10 @@ explicit comment indicating the fallthrough intention is present."
>  -config=MC3R1.R17.1,macros+={hide , "^va_(arg|start|copy|end)$"}
>  -doc_end
>  
> +-doc_begin="Not using the return value of a function do not endanger safety if it coincides with the first actual argument."
> +-config=MC3R1.R17.7,calls+={safe, "any()", "decl(name(__builtin_memcpy||__builtin_memmove||__builtin_memset||cpumask_check))"}

While correct here, ...

> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -364,6 +364,15 @@ Deviations related to MISRA C:2012 Rules:
>         by `stdarg.h`.
>       - Tagged as `deliberate` for ECLAIR.
>  
> +   * - R17.7
> +     - Not using the return value of a function do not endanger safety if it
> +       coincides with the first actual argument.
> +     - Tagged as `safe` for ECLAIR. Such functions are:
> +         - __builtin_memcpy()
> +         - __builtin_memmove()
> +         - __builtin_memset()
> +         - __cpumask_check()

... there are stray leading underscores on the last one here. With that
adjustment (and perhaps "s/ do / does /") the deviations.rst change would then
look okay to me, but I don't feel competent to ack deviations.ecl changes.

Still, as another question: Is it really relevant here that the argument in
question is specifically the 1st one?

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 06:53:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 06:53:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740427.1147520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0o8-0007Vh-Er; Fri, 14 Jun 2024 06:53:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740427.1147520; Fri, 14 Jun 2024 06:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI0o8-0007Va-CE; Fri, 14 Jun 2024 06:53:08 +0000
Received: by outflank-mailman (input) for mailman id 740427;
 Fri, 14 Jun 2024 06:53:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eLuH=NQ=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sI0o7-0007VU-1n
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 06:53:07 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20601.outbound.protection.outlook.com
 [2a01:111:f403:2417::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bf07ca3c-2a1a-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 08:53:04 +0200 (CEST)
Received: from PH7PR12MB5854.namprd12.prod.outlook.com (2603:10b6:510:1d5::20)
 by MW6PR12MB8865.namprd12.prod.outlook.com (2603:10b6:303:23b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.24; Fri, 14 Jun
 2024 06:53:00 +0000
Received: from PH7PR12MB5854.namprd12.prod.outlook.com
 ([fe80::bd58:fa72:e622:dd76]) by PH7PR12MB5854.namprd12.prod.outlook.com
 ([fe80::bd58:fa72:e622:dd76%5]) with mapi id 15.20.7633.037; Fri, 14 Jun 2024
 06:53:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf07ca3c-2a1a-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l7bckpAC8muMYvg6t2KOf6DEvLnu4CpHkVy+0ZE8vyqDr+OmKfmTscOekw70hQ14YxX6DGbly2zyco45FkKHSqCCRIUN8N3zHIWTH6Uz2/cP9ieqYGieacR+Rl6aZbGmX00xHpCIMMH6Gh8AhX4JkhaJ6KuUDkI/OVRoZJyWRmLJ7vQubWlbinK7Z20TxWOyNLWucxlfnYyDChhlimotG/Ye5J2A+yhKsPfyHH5T/XM4sJ6MXdc8NLkoeeViyE34ojNqNdh/WtoWDTd4ZdD2HuR/fH+uzRmASnIa6bnBmM/IfbqpvX7BxpQWFc9/wXX5122+gLJ2mluUZn7+LtICmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=egHMo954rEXsb0aCxtZ4+HZhqsooqjiCjJ19VN12AHo=;
 b=MnewoMnV8NPZ4Zsx/iTuDAZHQbTqTjk7EdwK1XmHDwafBf44NsAw0zw4zOmBqVCrt0fhXfzMiNOcb4uncowyDP7G3dguW+PD5gO5MoWaba13g801N4rS9e7GXQRQ8HL63FLqfB/hLdxIi8nf42zBmE8Y/6dGfEvxa5MyOF6gMu0jDOXEKDh3b+dFRwwSLYqHUPSwd/WvsB4dAJDV9eS/BMJ9vBJQ6dc1/ull4myMLgUggyOiGaakvQlnqqObgFu0WElDLkJY76dUCUhQvXXZydtC46QwFC9oDI4DNj17hx16QJIDHz8fgbrrsPtW6oEcQqdb64/VowDMiolJsAtKkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=egHMo954rEXsb0aCxtZ4+HZhqsooqjiCjJ19VN12AHo=;
 b=TxQuHKSHFD92qrBdE59SRHBS9hCYKA3hCi5RpMKP88zuAWlyMzfAGOX2ZXF84tHXf4NFe90Glka80NmvKtwr+DcFYQy8Z6Dg9HCk+GQRc8xOcDQWJAU+BX3VCY70n/ITasTGDF+gjTwCMeRQWGBcTWkP8RPNQbKWtFE/prVEr1I=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>, Anthony PERARD
	<anthony.perard@vates.tech>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
	<jgross@suse.com>, "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
	"Huang, Ray" <Ray.Huang@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [RFC XEN PATCH v9 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHauLJn3FT0TCW9KkmRq1iRWqwBt7HCqQ8AgAG9m4D//5AdgIAAiWiAgAEvHwCAAW0dgP//veQAgACIjAA=
Date: Fri, 14 Jun 2024 06:53:00 +0000
Message-ID:
 <PH7PR12MB58543888E330570A28DE2466E7C22@PH7PR12MB5854.namprd12.prod.outlook.com>
References: <20240607081127.126593-1-Jiqian.Chen@amd.com>
 <20240607081127.126593-6-Jiqian.Chen@amd.com>
 <987f5d21-bbb5-4cdb-975b-91949e802921@suse.com>
 <BL1PR12MB5849FF595AEED1112622A98DE7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <c2a5b9cd-2a85-4e01-8b8b-31b85726dbd4@suse.com>
 <BL1PR12MB5849652CE3039C8D17CD7FA6E7C02@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ZmrrNvv2sVaOIS5h@l14>
 <BL1PR12MB584926B7F6153287479E4CB4E7C22@BL1PR12MB5849.namprd12.prod.outlook.com>
 <3fde1817-72a6-484f-9777-567b062c1913@suse.com>
In-Reply-To: <3fde1817-72a6-484f-9777-567b062c1913@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: PH7PR12MB5854.namprd12.prod.outlook.com
 (15.20.7633.034)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: PH7PR12MB5854:EE_|MW6PR12MB8865:EE_
x-ms-office365-filtering-correlation-id: 06d656cc-1df3-41aa-0c92-08dc8c3ea145
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230035|376009|7416009|1800799019|366011|38070700013;
Content-Type: text/plain; charset="utf-8"
Content-ID: <D76B493CA71BD74E82FEB103EB28F525@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH7PR12MB5854.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06d656cc-1df3-41aa-0c92-08dc8c3ea145
X-MS-Exchange-CrossTenant-originalarrivaltime: 14 Jun 2024 06:53:00.0775
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: i3kEB1DPxbGxiXSHV6vTsFoS4B4Fkg4bp606+GHEgvudLiB7kiSAF6XXyaGc8bWf/9axDClyg09OispwISk75Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW6PR12MB8865

On 2024/6/14 14:41, Jan Beulich wrote:
> On 14.06.2024 05:11, Chen, Jiqian wrote:
>> On 2024/6/13 20:51, Anthony PERARD wrote:
>>> On Wed, Jun 12, 2024 at 10:55:14AM +0000, Chen, Jiqian wrote:
>>>> On 2024/6/12 18:34, Jan Beulich wrote:
>>>>> On 12.06.2024 12:12, Chen, Jiqian wrote:
>>>>>> On 2024/6/11 22:39, Jan Beulich wrote:
>>>>>>> On 07.06.2024 10:11, Jiqian Chen wrote:
>>>>>>>> +    r = xc_domain_gsi_permission(ctx->xch, domid, gsi, map);
>>>>>>>
>>>>>>> Looking at the hypervisor side, this will fail for PV Dom0. In which case imo
>>>>>>> you better would avoid making the call in the first place.
>>>>>> Yes, for PV dom0, the errno is EOPNOTSUPP, then it will do below xc_domain_irq_permission.
>>>>>
>>>>> Hence why call xc_domain_gsi_permission() at all on a PV Dom0?
>>>> Is there a function to distinguish that current dom0 is PV or PVH dom0 in tools/libs?
>>>
>>> That might have never been needed before, so probably not. There's
>>> libxl__domain_type() but if that works with dom0 it might return "HVM"
>>> for PVH dom0. So if xc_domain_getinfo_single() works and give the right
>>> info about dom0, libxl__domain_type() could be extended to deal with
>>> dom0 I guess. I don't know if there's a good way to find out which
>>> flavor of dom0 is running.
>> Thanks Anthony!
>> I think here we really need to check is that whether current domain has PIRQ flag(X86_EMU_USE_PIRQ) or not.
>> And it seems xc_domain_gsi_permission already return the information.
>
> By way of failing, if I'm not mistaken? As indicated before, I don't
> think you should invoke the function when it's clear it's going to fail.
Sorry, I wrote wrong here, it should be " And it seems xc_domain_getinfo_single already return the information."
And next version will be like:
xc_domaininfo_t xcinfo;
xc_domain_getinfo_single(xc_handle, domid, &xcinfo);
if( xcinfo.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ )
	xc_domain_irq_permission
else
	xc_domain_gsi_permission

>
> Jan
>
>> If current domain has no PIRQs, then I should use xc_domain_gsi_permission to grant permission, otherwise I should
>> keep the original function xc_domain_irq_permission.
>>
>>>
>>> Cheers,
>>>
>>
>

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 07:22:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 07:22:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740438.1147531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1G7-0005Bu-P3; Fri, 14 Jun 2024 07:22:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740438.1147531; Fri, 14 Jun 2024 07:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1G7-0005Bn-MT; Fri, 14 Jun 2024 07:22:03 +0000
Received: by outflank-mailman (input) for mailman id 740438;
 Fri, 14 Jun 2024 07:22:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9HBj=NQ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sI1G6-0005Bh-Df
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 07:22:02 +0000
Received: from mail-qk1-x72d.google.com (mail-qk1-x72d.google.com
 [2607:f8b0:4864:20::72d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c9f74736-2a1e-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 09:22:00 +0200 (CEST)
Received: by mail-qk1-x72d.google.com with SMTP id
 af79cd13be357-795502843ccso109971485a.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 00:22:00 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798aaecc7f3sm120444685a.43.2024.06.14.00.21.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 00:21:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9f74736-2a1e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718349719; x=1718954519; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=xnoCCgRLyUWwUMYn4tudG1mI7YQwgMQiGD4YRa1Vi9o=;
        b=q2Vnj45uMtUipqyJgEmysKNfb0PT4SrkCMSLGnG+oS5QL6ICAjoiJG6gX+apR77osi
         ZE9Z2pLNzRYgpgRv6UR2p/SULRUXrPx7z1Vwj1sOmo6LIH+ejm23A+2GSHpL1oDh9kPl
         l0N8RTxtyqhVrO0B77HYnaGNPOnn/cx8YHeKM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718349719; x=1718954519;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xnoCCgRLyUWwUMYn4tudG1mI7YQwgMQiGD4YRa1Vi9o=;
        b=JuDv8wF3r7AzIANqrwwk2QTfdj3D2wy8CKBE0775zli7ZXg0+uUsqtHu/MIqQCA6Sf
         dNkVQ44y6bymP9fXWNaDI0sizETEboV5v95UQ3VZIp8Lg0/jXx2w80N/uUx1Mj33zFXO
         MlE2by0FSkATekvyD3TWAIts3ER+FmlAn/alKo4b7pZHCGvJzU6SgONl3T7JPbJR1J4F
         77+iPoJLKfTHNttsvYFz0ucq6AWNUa4oUFJmMWqfNYWmVvvXZUC2NN1UqX894gysShvj
         oo7FpVCvChMorQY7V24FwiY3zqfUADbShy4465cITqTiCGk+jRzkv3qDda/Xlrmc45fP
         TWbg==
X-Forwarded-Encrypted: i=1; AJvYcCVxgJmoGLp2WFKW9obw7phj4ICubvVMk4p/CdtrUp2rQ/wg3fyLuZeObdPou45pncj1ppc/WVmPYxYlCyrs/Z20XqxWhgFaHOqfbE9Q+Ug=
X-Gm-Message-State: AOJu0YxDOAtJgpWRKVjV37vGIi9VI3dznTkWVsKvd5CoUbHadJfML2bc
	MyizQthKMEzS7WHFbkS0VOMsqKig5zt1AA02R1JoEBYFuIcSjLMjeez7qhSka3A=
X-Google-Smtp-Source: AGHT+IHzw1zT/HmxBawCcIN4QJBKG3tS7vWSqr0CxQK7aIDCn4R1d0MSgfhp4uh2oDx99x237P0VtQ==
X-Received: by 2002:a05:620a:1919:b0:797:b1ea:953b with SMTP id af79cd13be357-798d23e6988mr192231985a.4.1718349719003;
        Fri, 14 Jun 2024 00:21:59 -0700 (PDT)
Date: Fri, 14 Jun 2024 09:21:56 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>,
	Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZmvvlF0gpqFB7UC9@macbook>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>

On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > GPU acceleration requires that pageable host memory be able to be mapped
> > into a guest.
> 
> I'm sure it was explained in the session, which sadly I couldn't attend.
> I've been asking Ray and Xenia the same before, but I'm afraid it still
> hasn't become clear to me why this is a _requirement_. After all that's
> against what we're doing elsewhere (i.e. so far it has always been
> guest memory that's mapped in the host). I can appreciate that it might
> be more difficult to implement, but avoiding a violation of this
> fundamental (kind of) rule might be worth the price (and would avoid
> other complexities, of which more may be lurking than what you
> enumerate below).

My limited understanding (please someone correct me if wrong) is that
the GPU buffer (or context I think it's also called?) is always
allocated from dom0 (the owner of the GPU).  The underlying memory
addresses of such a buffer need to be mapped into the guest.  The
buffer backing memory might be GPU MMIO from the device BAR(s) or
system RAM, and such a buffer can be paged by the dom0 kernel at any
time (iow: changing the backing memory from MMIO to RAM or vice
versa).  Also, the buffer must be contiguous in physical address
space.

I'm not sure it's possible to ensure that when using system RAM such
memory comes from the guest rather than the host, as it would likely
require some very intrusive hooks into the kernel logic, and
negotiation with the guest to allocate the requested amount of
memory and hand it over to dom0.  If the maximum size of the buffer is
known in advance, maybe dom0 can negotiate with the guest to allocate
such a region and grant dom0 access to it at driver attachment time.

One aspect I'm still lacking clarity on is how the process of
allocating and assigning a GPU buffer to a guest is performed (I think
this is the key to how GPU VirtIO native contexts work?).

Another question I have: are guests expected to have a single GPU
buffer, or can they have multiple GPU buffers allocated
simultaneously?

Regards, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 07:28:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 07:28:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740444.1147540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1M2-0006OV-CR; Fri, 14 Jun 2024 07:28:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740444.1147540; Fri, 14 Jun 2024 07:28:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1M2-0006OO-9s; Fri, 14 Jun 2024 07:28:10 +0000
Received: by outflank-mailman (input) for mailman id 740444;
 Fri, 14 Jun 2024 07:28:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9HBj=NQ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sI1M0-0006OI-T2
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 07:28:08 +0000
Received: from mail-qk1-x72e.google.com (mail-qk1-x72e.google.com
 [2607:f8b0:4864:20::72e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a5052de6-2a1f-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 09:28:08 +0200 (CEST)
Received: by mail-qk1-x72e.google.com with SMTP id
 af79cd13be357-795fb13b256so240523885a.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 00:28:07 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798c39d8cafsm99920785a.127.2024.06.14.00.28.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 00:28:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5052de6-2a1f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718350087; x=1718954887; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=AjdI3UexGQhSAVV8WQLC6fi29mLFXRJSnGVS0+jkJ74=;
        b=U2vjOQVZhwOWq4t9V5eaeE02i3DNwaWvxxGpTmYVxPZ6oGJCRechX1c0L6wfYIlcaE
         xYkIKVL/4wo906hBPT6qHZdsTsu1s//MiCM9SupADFhHFI7R44FlHdbwweT0SWUMfh56
         hwmwVhnQlv831ryjhkqouCUp66cSeI6MGNagE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718350087; x=1718954887;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AjdI3UexGQhSAVV8WQLC6fi29mLFXRJSnGVS0+jkJ74=;
        b=FguhVm9MOJ6RmaoIM2f4h8lYYTlGg/c3QmG/r5ZY5AkSjWSeAJ6azxOKj4khJ6392G
         eNF2ncUlVcYHLDZ0OP8NqZCotZ27xrzfPAJ18KKU5lCwC8pU85+jwYmd/czH5qk8FA+X
         51MdwiLHuO88O7LVs22U5tTztCpcqWopHZhkz0Np/lfy7o61kHPYVF7NZ05NkeR3V6v4
         oBIuqwrqzJCTM5SfGmF6tKtKYORGsLg+YWUCOZ4l/OKXXuODO7gToLVKiPwtGprhq/ct
         wFUcL4bViPwPdX6dSz3+CKC2caD+RFo4MaMkyEvd5I00S3OBKaAT0KH4T4ivyekK1AX9
         IT7g==
X-Gm-Message-State: AOJu0YxmchEkj9VObQOZuiTyps+8tQw4znm/K+4tDFVFbevbeVNwTZz5
	k77543NCnkepNG6uDgL5mRAHdvS/zebfmJaMETpkePJLVHhDKVFSxmQEbepcV/M=
X-Google-Smtp-Source: AGHT+IHhW+F41haxSkC2Q6cu+he4fZModuuYyU37dQ0vGZEezfrPmzEd+2p1UnWWVxKDh0kb8K4cHQ==
X-Received: by 2002:a05:620a:254e:b0:797:f5ee:91d with SMTP id af79cd13be357-798d03b7913mr366542485a.24.1718350086726;
        Fri, 14 Jun 2024 00:28:06 -0700 (PDT)
Date: Fri, 14 Jun 2024 09:28:04 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 0/3] x86/irq: fixes for CPU hot{,un}plug
Message-ID: <ZmvxBDomxxBjOYEK@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20240613165617.42538-1-roger.pau@citrix.com>

Sorry, forgot to add the for-4.19 tag and Cc Oleksii.

Since we have taken the start of the series, we might as well take the
remaining patches (if the other x86 maintainers agree) and hopefully fix
all the interrupt issues with CPU hotplug/unplug.

FTR: there are further issues when doing CPU hotplug/unplug from a PVH
dom0, but those are out of scope for 4.19, as I haven't even started to
diagnose what's going on.

Thanks, Roger.
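The reproducer from the quoted cover letter can be packaged as a standalone script; a sketch with basic error reporting added, assuming `xl info` prints a `max_cpu_id` field in `name : value` form and that xl and xen-hptool are in PATH:

```shell
#!/bin/sh
# CPU hot{,un}plug stress loop, a variant of the one quoted below.
# Extract max_cpu_id from "max_cpu_id : N" style xl info output.
cpus=$(xl info | awk -F: '/^max_cpu_id/ {gsub(/ /, "", $2); print $2}')
while :; do
    for i in $(seq 1 "$cpus"); do   # start at 1: CPU0 cannot be offlined
        xen-hptool cpu-offline "$i" || echo "offline of CPU$i failed" >&2
        xen-hptool cpu-online "$i"  || echo "online of CPU$i failed" >&2
    done
done
```

CPU 0 is deliberately skipped, since the boot CPU cannot be taken offline.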

On Thu, Jun 13, 2024 at 06:56:14PM +0200, Roger Pau Monne wrote:
> Hello,
> 
> The following series aims to fix interrupt handling when doing CPU
> plug/unplug operations.  Without this series, running:
> 
> cpus=`xl info max_cpu_id`
> while [ 1 ]; do
>     for i in `seq 1 $cpus`; do
>         xen-hptool cpu-offline $i;
>         xen-hptool cpu-online $i;
>     done
> done
> 
> Quite quickly results in interrupts getting lost and "No irq handler for
> vector" messages on the Xen console.  Drivers in dom0 also start getting
> interrupt timeouts and the system becomes unusable.
> 
> After applying the series, running the loop overnight still results in a
> fully usable system: no "No irq handler for vector" messages at all, and
> no interrupt losses reported by dom0.  Tested with
> x2apic-mode={mixed,cluster}.
> 
> I've attempted to document all the code as well as I could; interrupt
> handling has some unexpected corner cases that are hard to diagnose and
> reason about.
> 
> Some XenRT testing is underway to ensure there are no breakages.
> 
> Thanks, Roger.
> 
> Roger Pau Monne (3):
>   x86/irq: deal with old_cpu_mask for interrupts in movement in
>     fixup_irqs()
>   x86/irq: handle moving interrupts in _assign_irq_vector()
>   x86/irq: forward pending interrupts to new destination in fixup_irqs()
> 
>  xen/arch/x86/include/asm/apic.h |   5 +
>  xen/arch/x86/irq.c              | 163 +++++++++++++++++++++++++-------
>  2 files changed, 132 insertions(+), 36 deletions(-)
> 
> -- 
> 2.45.2
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 07:57:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 07:57:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740463.1147554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1nl-0002hb-J6; Fri, 14 Jun 2024 07:56:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740463.1147554; Fri, 14 Jun 2024 07:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1nl-0002hU-GJ; Fri, 14 Jun 2024 07:56:49 +0000
Received: by outflank-mailman (input) for mailman id 740463;
 Fri, 14 Jun 2024 07:56:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9HBj=NQ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sI1nj-0002hO-E5
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 07:56:47 +0000
Received: from mail-qt1-x830.google.com (mail-qt1-x830.google.com
 [2607:f8b0:4864:20::830])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a540d842-2a23-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 09:56:46 +0200 (CEST)
Received: by mail-qt1-x830.google.com with SMTP id
 d75a77b69052e-43ffbc0927fso10911321cf.3
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 00:56:46 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-441f2ff9ef8sm13823221cf.89.2024.06.14.00.56.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 00:56:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a540d842-2a23-11ef-90a3-e314d9c70b13
Date: Fri, 14 Jun 2024 09:56:42 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph Böhmwalder <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 10/26] xen-blkfront: don't disable cache flushes when
 they fail
Message-ID: <Zmv3usMvGGK7ZbMT@macbook>
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-11-hch@lst.de>
 <ZmlVziizbaboaBSn@macbook>
 <20240612150030.GA29188@lst.de>
 <ZmnFH17bTV2Ot_iR@macbook>
 <20240613140508.GA16529@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240613140508.GA16529@lst.de>

On Thu, Jun 13, 2024 at 04:05:08PM +0200, Christoph Hellwig wrote:
> On Wed, Jun 12, 2024 at 05:56:15PM +0200, Roger Pau Monné wrote:
> > Right.  AFAICT advertising "feature-barrier" and/or
> > "feature-flush-cache" could be done based on whether blkback
> > understands those commands, not on whether the underlying storage
> > supports the equivalent of them.
> > 
> > Worst case we can print a warning message once about the underlying
> > storage failing to complete flush/barrier requests, and that data
> > integrity might not be guaranteed going forward, and not propagate the
> > error to the upper layer?
> > 
> > What would be the consequence of propagating a flush error to the
> > upper layers?
> 
> If you propagate the error to the upper layer you will generate an
> I/O error there, which usually leads to a file system shutdown.
> 
> > Given the description of the feature in the blkif header, I'm afraid
> > we cannot guarantee that seeing the feature exposed implies barrier or
> > flush support, since the request could fail at any time (or even from
> > the start of the disk attachment) and it would still sadly be a correct
> > implementation given the description of the options.
> 
> Well, then we could do something like the patch below, which keeps
> the existing behavior, but insulates the block layer from it and
> removes the only user of blk_queue_write_cache from interrupt
> context:

LGTM, I'm not sure there's much else we can do.

> ---
> From e6e82c769ab209a77302994c3829cf6ff7a595b8 Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch@lst.de>
> Date: Thu, 30 May 2024 08:58:52 +0200
> Subject: xen-blkfront: don't disable cache flushes when they fail
> 
> blkfront always had a robust negotiation protocol for detecting a write
> cache.  Stop simply disabling cache flushes in the block layer as the
> flags handling is moving to the atomic queue limits API that needs
> user context to freeze the queue for that.  Instead handle the case
> of the feature flags cleared inside of blkfront.  This removes old
> debug code to check for such a mismatch which was previously impossible
> to hit, including the check for passthrough requests that blkfront
> never used to start with.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/block/xen-blkfront.c | 44 +++++++++++++++++++-----------------
>  1 file changed, 23 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 9b4ec3e4908cce..e2c92d5095ff17 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -788,6 +788,14 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  			 * A barrier request a superset of FUA, so we can
>  			 * implement it the same way.  (It's also a FLUSH+FUA,
>  			 * since it is guaranteed ordered WRT previous writes.)
> +			 *
> +			 * Note that we can end up here with a FUA write and the
> +			 * flags cleared.  This happens when the flag was
> +			 * run-time disabled and raced with I/O submission in
> +			 * the block layer.  We submit it as a normal write

Since blkfront no longer signals to the block layer that FUA has become
unavailable for the device, getting a request with FUA is not actually a
race, I think?

> +			 * here.  A pure flush should never end up here with
> +			 * the flags cleared as they are completed earlier for
> +			 * the !feature_flush case.
>  			 */
>  			if (info->feature_flush && info->feature_fua)
>  				ring_req->operation =
> @@ -795,8 +803,6 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  			else if (info->feature_flush)
>  				ring_req->operation =
>  					BLKIF_OP_FLUSH_DISKCACHE;
> -			else
> -				ring_req->operation = 0;
>  		}
>  		ring_req->u.rw.nr_segments = num_grant;
>  		if (unlikely(require_extra_req)) {
> @@ -887,16 +893,6 @@ static inline void flush_requests(struct blkfront_ring_info *rinfo)
>  		notify_remote_via_irq(rinfo->irq);
>  }
>  
> -static inline bool blkif_request_flush_invalid(struct request *req,
> -					       struct blkfront_info *info)
> -{
> -	return (blk_rq_is_passthrough(req) ||
> -		((req_op(req) == REQ_OP_FLUSH) &&
> -		 !info->feature_flush) ||
> -		((req->cmd_flags & REQ_FUA) &&
> -		 !info->feature_fua));
> -}
> -
>  static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
>  			  const struct blk_mq_queue_data *qd)
>  {
> @@ -908,23 +904,30 @@ static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
>  	rinfo = get_rinfo(info, qid);
>  	blk_mq_start_request(qd->rq);
>  	spin_lock_irqsave(&rinfo->ring_lock, flags);
> -	if (RING_FULL(&rinfo->ring))
> -		goto out_busy;
>  
> -	if (blkif_request_flush_invalid(qd->rq, rinfo->dev_info))
> -		goto out_err;
> +	/*
> +	 * Check if the backend actually supports flushes.
> +	 *
> +	 * While the block layer won't send us flushes if we don't claim to
> +	 * support them, the Xen protocol allows the backend to revoke support
> +	 * at any time.  That is of course a really bad idea and dangerous, but
> +	 * has been allowed for 10+ years.  In that case we simply clear the
> +	 * flags, and directly return here for an empty flush and ignore the
> +	 * FUA flag later on.
> +	 */
> +	if (unlikely(req_op(qd->rq) == REQ_OP_FLUSH && !info->feature_flush))
> +		goto out;

Don't you need to complete the request here?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 07:58:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 07:58:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740472.1147564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1pQ-0003nD-UG; Fri, 14 Jun 2024 07:58:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740472.1147564; Fri, 14 Jun 2024 07:58:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI1pQ-0003n6-RB; Fri, 14 Jun 2024 07:58:32 +0000
Received: by outflank-mailman (input) for mailman id 740472;
 Fri, 14 Jun 2024 07:58:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BY3m=NQ=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1sI1pP-0003n0-Sk
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 07:58:32 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur01on20600.outbound.protection.outlook.com
 [2a01:111:f403:2600::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2929496-2a23-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 09:58:29 +0200 (CEST)
Received: from DUZPR01CA0252.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4b5::22) by VI1PR08MB9984.eurprd08.prod.outlook.com
 (2603:10a6:800:1c7::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25; Fri, 14 Jun
 2024 07:58:24 +0000
Received: from DU6PEPF0000B620.eurprd02.prod.outlook.com
 (2603:10a6:10:4b5:cafe::38) by DUZPR01CA0252.outlook.office365.com
 (2603:10a6:10:4b5::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25 via Frontend
 Transport; Fri, 14 Jun 2024 07:58:24 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DU6PEPF0000B620.mail.protection.outlook.com (10.167.8.136) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Fri, 14 Jun 2024 07:58:24 +0000
Received: ("Tessian outbound e43fd1351ded:v332");
 Fri, 14 Jun 2024 07:58:24 +0000
Received: from 8e5663abba87.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 4027F759-80AB-4772-8B45-37473E5CDD19.1; 
 Fri, 14 Jun 2024 07:54:41 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8e5663abba87.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 14 Jun 2024 07:54:41 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by AS2PR08MB8309.eurprd08.prod.outlook.com (2603:10a6:20b:554::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25; Fri, 14 Jun
 2024 07:54:36 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6204:b901:9cc6:bf2b]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6204:b901:9cc6:bf2b%3]) with mapi id 15.20.7677.019; Fri, 14 Jun 2024
 07:54:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2929496-2a23-11ef-b4bb-af5377834399
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Julien Grall <julien@xen.org>
CC: Michal Orzel <michal.orzel@amd.com>, Xen-devel
	<xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Oleksii Kurochko
	<oleksii.kurochko@gmail.com>, Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>, Oleksandr Tyshchenko <olekstysh@gmail.com>
Subject: Re: [PATCH v4 0/7] Static shared memory followup v2 - pt2
Thread-Topic: [PATCH v4 0/7] Static shared memory followup v2 - pt2
Thread-Index: AQHarde1wBZMIqRdOku1vS0eoyuKdLHCnAmAgAAByACAAxERgIABVZYA
Date: Fri, 14 Jun 2024 07:54:36 +0000
Message-ID: <92B10944-083B-4DB3-8257-ADD452FBFF69@arm.com>
References: <20240524124055.3871399-1-luca.fancellu@arm.com>
 <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
 <66e5d584-b326-4197-81d5-ec2b8233a3fa@amd.com>
 <502d0284-1ca2-478c-b4f4-fda5caff3723@xen.org>
In-Reply-To: <502d0284-1ca2-478c-b4f4-fda5caff3723@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <39EB34B3AA00A04096BFB5000782570D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hi Julien,

> On 13 Jun 2024, at 12:31, Julien Grall <julien@xen.org> wrote:
> 
> Hi,
> 
> On 11/06/2024 13:42, Michal Orzel wrote:
>>> We would like this serie to be in Xen 4.19, there was a misunderstanding on our side because we thought
>>> that since the serie was sent before the last posting date, it could have been a candidate for merging in the
>>> new release, instead after speaking with Julien and Oleksii we are now aware that we need to provide a
>>> justification for its presence.
>>> 
>>> Pros to this serie is that we are closing the circle for static shared memory, allowing it to use memory from
>>> the host or from Xen, it is also a feature that is not enabled by default, so it should not cause too much
>>> disruption in case of any bugs that escaped the review, however we’ve tested many configurations for that
>>> with/without enabling the feature if that can be an additional value.
>>> 
>>> Cons: we are touching some common code related to p2m, but also there the impact should be minimal because
>>> the new code is subject to l2 foreign mapping (to be confirmed maybe from a p2m expert like Julien).
>>> 
>>> The comments on patch 3 of this serie are addressed by this patch:
>>> https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fancellu@arm.com/
>>> And the serie is fully reviewed.
>>> 
>>> So our request is to allow this serie in 4.19, Oleksii, ARM maintainers, do you agree on that?
>> As a main reviewer of this series I'm ok to have this series in. It is nicely encapsulated and the feature itself
>> is still in unsupported state. I don't foresee any issues with it.
> 
> There are changes in the p2m code and the memory allocation for boot domains. So is it really encapsulated?
> 
> For me there are two risks:
> * p2m (already mentioned by Luca): We modify the code to put foreign mapping. The worse that can happen if we don't release the foreign mapping. This would mean the page will not be freed. AFAIK, we don't exercise this path in the CI.
> * domain allocation: This mainly look like refactoring. And the path is exercised in the CI.
> 
> So I am not concerned with the domain allocation one. @Luca, would it be possible to detail how did you test the foreign pages were properly removed?

So at first we tested the code, with/without the static shared memory feature enabled, creating/destroying guest from Dom0 and checking that everything
was ok.

After a chat on Matrix with Julien he suggested that using a virtio-mmio disk was better to stress out the foreign mapping looking for
regressions.

Luckily I’ve found this slide deck from @Oleksandr: https://static.linaro.org/connect/lvc21/presentations/lvc21-314.pdf

So I did a setup using fvp-base, having a disk with two partitions containing Dom0 rootfs and DomU rootfs, Dom0 sees
this disk using VirtIO block.

The Dom0 rootfs contains the virtio-disk backend: https://github.com/xen-troops/virtio-disk

And the DomU XL configuration is using these parameters:

cmdline="console=hvc0 root=/dev/vda rw"
disk = ['/dev/vda2,raw,xvda,w,specification=virtio']

Running the setup and creating/destroying a couple of times the guest is not showing regressions, here an example of the output:

root@fvp-base:/opt/xtp/guests/linux-guests# xl create -c linux-ext-arm64-stresstests-rootfs.cfg
Parsing config from linux-ext-arm64-stresstests-rootfs.cfg
main: read frontend domid 2
  Info: connected to dom2

demu_seq_next: >XENSTORE_ATTACHED
demu_seq_next: domid = 2
demu_seq_next: devid = 51712
demu_seq_next: filename[0] = /dev/vda2
demu_seq_next: readonly[0] = 0
demu_seq_next: base[0]     = 0x2000000
demu_seq_next: irq[0]      = 33
demu_seq_next: >XENEVTCHN_OPEN
demu_seq_next: >XENFOREIGNMEMORY_OPEN
demu_seq_next: >XENDEVICEMODEL_OPEN
demu_seq_next: >XENGNTTAB_OPEN
demu_initialize: 1 vCPU(s)
demu_seq_next: >SERVER_REGISTERED
demu_seq_next: ioservid = 0
demu_seq_next: >RESOURCE_MAPPED
demu_seq_next: shared_iopage = 0x7f80c58000
demu_seq_next: >SERVER_ENABLED
demu_seq_next: >PORT_ARRAY_ALLOCATED
demu_seq_next: >EVTCHN_PORTS_BOUND
demu_seq_next: VCPU0: 3 -> 6
demu_register_memory_space: 2000000 - 20001ff
  Info: (virtio/mmio.c) virtio_mmio_init:165: virtio-mmio.devices=0x200@0x2000000:33
demu_seq_next: >DEVICE_INITIALIZED
demu_seq_next: >INITIALIZED
IO request not ready
(XEN) d2v0 Unhandled SMC/HVC: 0x84000050
(XEN) d2v0 Unhandled SMC/HVC: 0x8600ff01
(XEN) d2v0: vGICD: RAZ on reserved register offset 0x00000c
(XEN) d2v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
(XEN) d2v0: vGICR: SGI: unhandled word write 0x000000ffffffff to ICACTIVER0
[    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd0f0]
[    0.000000] Linux version 6.1.25 (lucfan01@e125770) (aarch64-poky-linux-gcc (GCC) 12.2.0, GNU ld (GNU Binutils) 2.40.20230119) #4 SMP PREEMPT Thu Jun 13 21:55:06 UTC 2024
[    0.000000] Machine model: XENVM-4.19
[    0.000000] Xen 4.19 support found
[    0.000000] efi: UEFI not found.
[    0.000000] NUMA: No NUMA configuration found

[...]

[    0.737758] virtio_blk virtio0: 1/0/0 default/read/poll queues
demu_detect_mappings_model: Use foreign mapping (addr 0x5d660000)
[    0.764258] virtio_blk virtio0: [vda] 747094 512-byte logical blocks (383 MB/365 MiB)
[    0.781866] Invalid max_queues (4), will use default max: 1.

[...]

INIT: Entering runlevel: 5
Configuring network interfaces... ip: SIOCGIFFLAGS: No such device
Starting syslogd/klogd: done

Poky (Yocto Project Reference Distro) 4.2.1 stressrootfs /dev/hvc0

stressrootfs login: [   62.593440] cfg80211: failed to load regulatory.db

Poky (Yocto Project Reference Distro) 4.2.1 stressrootfs /dev/hvc0

stressrootfs login: root
root@stressrootfs:~# ls /
bin         etc         lost+found  proc        sys         var
boot        home        media       run         tmp
dev         lib         mnt         sbin        usr
root@stressrootfs:~#

[...]

root@fvp-base:/opt/xtp/guests/linux-guests# xl destroy 2
  Error: reading frontend state failed

main: lost connection to dom2
demu_teardown: <INITIALIZED
demu_teardown: <DEVICE_INITIALIZED
demu_deregister_memory_space: 2000000
demu_teardown: <EVTCHN_PORTS_BOUND
demu_teardown: <PORT_ARRAY_ALLOCATED
demu_teardown: VCPU0: 6
demu_teardown: <SERVER_ENABLED
demu_teardown: <RESOURCE_MAPPED
demu_teardown: <SERVER_REGISTERED
demu_teardown: <XENGNTTAB_OPEN
demu_teardown: <XENDEVICEMODEL_OPEN
demu_teardown: <XENFOREIGNMEMORY_OPEN
demu_teardown: <XENEVTCHN_OPEN
demu_teardown: <XENSTORE_ATTACHED
  Info: disconnected from dom2

root@fvp-base:/opt/xtp/guests/linux-guests# xl list
Name                                        ID   Mem VCPUs      State   Time(s)
Domain-0                                     0  1024     2     r-----      66.6
root@fvp-base:/opt/xtp/guests/linux-guests#


Cheers,
Luca
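[Editor's sketch: the create/destroy stress procedure described in the message above can be driven by a small script. This is a hypothetical illustration only — the `stress_guest` helper, the assumption that the domain name matches the cfg basename, and the settle delay are not from the thread.]

```python
#!/usr/bin/env python3
# Hypothetical driver for the create/destroy stress test described above.
# Assumptions (not from the thread): the domain name equals the cfg file
# basename, and a fixed delay is enough for the virtio-mmio backend to attach.
import subprocess
import time

def stress_guest(cfg, iterations, xl="xl", delay=5.0):
    """Create and destroy the guest `iterations` times, stopping on error."""
    name = cfg.removesuffix(".cfg")  # assumed domain name
    for _ in range(iterations):
        subprocess.run([xl, "create", cfg], check=True)
        time.sleep(delay)  # let the virtio-disk backend attach
        subprocess.run([xl, "destroy", name], check=True)

# Usage on the fvp-base Dom0 (illustrative):
#   stress_guest("linux-ext-arm64-stresstests-rootfs.cfg", 10)
```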


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 08:12:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 08:12:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740489.1147573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI23D-0007Wa-Cq; Fri, 14 Jun 2024 08:12:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740489.1147573; Fri, 14 Jun 2024 08:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI23D-0007WT-AL; Fri, 14 Jun 2024 08:12:47 +0000
Received: by outflank-mailman (input) for mailman id 740489;
 Fri, 14 Jun 2024 08:12:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI23C-0007WN-4R
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 08:12:46 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dfada994-2a25-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 10:12:43 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57c83100c5fso1946673a12.3
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 01:12:43 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72cdffcsm1947325a12.6.2024.06.14.01.12.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 01:12:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfada994-2a25-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718352762; x=1718957562; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=de4i5K8rRLFBrvE5Dl/eCQ0mxMUURyt44lvMGtKgvls=;
        b=MI0JAyajzMvuU6FUB9hyj2kcf2ZmKLq1yu7SsF6b2OILO71ae/PQqgTCPKVAChoT0r
         L1g1yMj/a5R3QNEqMSTQBr/+mIWDCRwBIpof1Eo6ZzlY+3pPIvUGLOdTdI/1PxMIUNIi
         A9GGA6jcw+94YvpzeCe8FS08Fva5wleofU8IvxyWS9NY/pbAmCiJfMkQQw9qcdM5Myrk
         WtSikSobqqwrR23k0+6arKPoqM9LVWHRaImujGuIXW3O1f/xliCbol/AuJdlKCimExoo
         wcamvidRa6awr42jBTzGmeF4E0DE8ScDc2jwHIPez+MBCk9Q/HEvvoVoDQDnXfwtxcXK
         9yqw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718352762; x=1718957562;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=de4i5K8rRLFBrvE5Dl/eCQ0mxMUURyt44lvMGtKgvls=;
        b=gS8LjROMrivVX2hPLYqz7NoqtL1cW63O5ah8UQtB47d7iO9iGv1JXOZn5zZiLEkM88
         MshdAhnAMGZ4irdnjfKrrON+Jt5KSlYWILihQ7J20Xm3VJt1Ttm9xjeqH4IWx+07dhUW
         arIgf2XgEnPPFXNfNY6ksHxyQzS7mNPSd8FjDJ7tfBZ5cofat8EpgeEbIrYrF14XV0Sb
         tg5S0I6ssXLHrqEZPNi6G4q/TAWDDBmPM7HtnEep0rbXRQxf4C3Gw728uMVY0xijaXEU
         Jg2+ivNv55nXjeMONEygeJ9QsPrOHn+DmM2W7kDL9as+YndsA76zRH5bogu81Rg+Df/0
         5Fgg==
X-Forwarded-Encrypted: i=1; AJvYcCUSMOVuq7Zj9vjO0/s//zXLCvK69vKTOhjZAn2w2V/5+OlHzBTexegJrjleyNmMSF05fGWzeQsbghLyR2IcY1T721EMWjGrVO6U+vcBYlY=
X-Gm-Message-State: AOJu0Yyul0id0roZvCT6kKnVZVzO9x8LXE/Q/jkYsZ9pQldB+1aXwcDp
	WidRIBXRtAAITt2oxBky/WDTZ8g5G45ez3N9YPFADZ+KXGBVhsue62/C8m7fcQ==
X-Google-Smtp-Source: AGHT+IG4ypwXYWti4VpFUt5WlzvdS4pKcygUt3xXM3lvF61LJZ2VjRyjAZOyhLBYk9dQcuL2o9VyAw==
X-Received: by 2002:a50:d75a:0:b0:57c:6f1d:1926 with SMTP id 4fb4d7f45d1cf-57cbd68f814mr1374768a12.22.1718352762086;
        Fri, 14 Jun 2024 01:12:42 -0700 (PDT)
Message-ID: <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
Date: Fri, 14 Jun 2024 10:12:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Demi Marie Obenour <demi@invisiblethingslab.com>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com> <ZmvvlF0gpqFB7UC9@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZmvvlF0gpqFB7UC9@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14.06.2024 09:21, Roger Pau Monné wrote:
> On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
>> On 13.06.2024 20:43, Demi Marie Obenour wrote:
>>> GPU acceleration requires that pageable host memory be able to be mapped
>>> into a guest.
>>
>> I'm sure it was explained in the session, which sadly I couldn't attend.
>> I've been asking Ray and Xenia the same before, but I'm afraid it still
>> hasn't become clear to me why this is a _requirement_. After all that's
>> against what we're doing elsewhere (i.e. so far it has always been
>> guest memory that's mapped in the host). I can appreciate that it might
>> be more difficult to implement, but avoiding to violate this fundamental
>> (kind of) rule might be worth the price (and would avoid other
>> complexities, of which there may be lurking more than what you enumerate
>> below).
> 
> My limited understanding (please someone correct me if wrong) is that
> the GPU buffer (or context I think it's also called?) is always
> allocated from dom0 (the owner of the GPU).  The underling memory
> addresses of such buffer needs to be mapped into the guest.  The
> buffer backing memory might be GPU MMIO from the device BAR(s) or
> system RAM, and such buffer can be paged by the dom0 kernel at any
> time (iow: changing the backing memory from MMIO to RAM or vice
> versa).  Also, the buffer must be contiguous in physical address
> space.

This last one in particular would of course be a severe restriction.
Yet: There's an IOMMU involved, isn't there?

> I'm not sure it's possible to ensure that when using system RAM such
> memory comes from the guest rather than the host, as it would likely
> require some very intrusive hooks into the kernel logic, and
> negotiation with the guest to allocate the requested amount of
> memory and hand it over to dom0.  If the maximum size of the buffer is
> known in advance maybe dom0 can negotiate with the guest to allocate
> such a region and grant it access to dom0 at driver attachment time.

Besides the thought of transiently converting RAM to kind-of-MMIO, this
makes me think of another possible option: Could Dom0 transfer ownership
of the RAM that wants mapping in the guest (remotely resembling
grant-transfer)? Would require the guest to have ballooned down enough
first, of course. (In both cases it would certainly need working out how
the conversion / transfer back could be made work safely and reasonably
cleanly.)

Jan

> One aspect that I'm lacking clarity is better understanding of how the
> process of allocating and assigning a GPU buffer to a guest is
> performed (I think this is the key to how GPU VirtIO native contexts
> work?).
> 
> Another question I have, are guest expected to have a single GPU
> buffer, or they can have multiple GPU buffers simultaneously
> allocated?
> 
> Regards, Roger.



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 08:40:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 08:40:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740507.1147584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI2TH-0003Uq-EN; Fri, 14 Jun 2024 08:39:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740507.1147584; Fri, 14 Jun 2024 08:39:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI2TH-0003Uj-Bg; Fri, 14 Jun 2024 08:39:43 +0000
Received: by outflank-mailman (input) for mailman id 740507;
 Fri, 14 Jun 2024 08:39:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9HBj=NQ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sI2TF-0003Ub-O7
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 08:39:41 +0000
Received: from mail-yb1-xb2f.google.com (mail-yb1-xb2f.google.com
 [2607:f8b0:4864:20::b2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a39c8bbe-2a29-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 10:39:40 +0200 (CEST)
Received: by mail-yb1-xb2f.google.com with SMTP id
 3f1490d57ef6-dff0067a263so2206383276.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 01:39:40 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-4421772802fsm6066561cf.58.2024.06.14.01.39.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 01:39:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a39c8bbe-2a29-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718354379; x=1718959179; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=NQ7OxOE1EqJKw5zZWwVwDHuYw5v/P5AaYrgZ+3BOWtM=;
        b=M2tYbDdbKosB+vBJ+/UtWfS/saOyrMAhxSlORubCdUB6OUbkkPTK0FVxeFnWo5CYtL
         xVwjsRIgD+zFnwRpbJIE4anWniGZBGW0vn48WZXSEY9wAktwS5m2shr3keGsnatjNhPJ
         AwFNDKCaZYjJYZvtqna7sluqIPqaQ49fZSpUA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718354379; x=1718959179;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=NQ7OxOE1EqJKw5zZWwVwDHuYw5v/P5AaYrgZ+3BOWtM=;
        b=YIO70HGblV7zlt3ibtpbpnQKVu31kv6qJUeWhA5DQMYPA2sBuJrlxaU/xpslLBM1Me
         e7m2KC6GNFxnVr+q3pefUZbESwk11YJ5FRoz0LNR7lU4ADOvqK9Kcra1DOK5O7Ee0LbT
         ShjjgkJwx8B1D1hUJX4Nz7y8+n2O1QzIf9Z4A33SuZ+x7t1+G7SOtqfzgjTYrD52IcDa
         23JT5l/yS1hIvapaF/mq/IGhxjNYkzbP8TM+rcmScugs0Oy2Of2Wrv/atoLaXfU10Wbx
         DEa8V7jO8hc2uRMreMATemjZpFI5wQjC/X2YZMzmsSkLRXswIi/5Fob9nn5A4k1nJpPm
         S7QA==
X-Forwarded-Encrypted: i=1; AJvYcCUiawQLm0TOzauU1jzEnuzfclOKnCU//aGpTJoXCvJZHw9RpWlkLc90bgQvEC75MtOtAxCu00rXuNx0a8YlxlW+3eFJxrZy0DRoelb5wTk=
X-Gm-Message-State: AOJu0YwxZyhfD56EBjUEkb/y69RkWSouDZW3bm/YtkJchZA8NwuKfPrc
	OgrtSZ0SvpR9Mg1eDQZfalUWruhvtVFW6xaoGtTr81NVKbkr/CN49VCfyurYEdw=
X-Google-Smtp-Source: AGHT+IGYZSXa2eoTU8nbEHbr3mMWfDGCB51t3w6GRjs+cjvjV/owODB2XQoaPMaqy1Z9kb2+M42LsQ==
X-Received: by 2002:a25:ad48:0:b0:dfb:6d6:a542 with SMTP id 3f1490d57ef6-dff153a66d9mr1913013276.28.1718354379271;
        Fri, 14 Jun 2024 01:39:39 -0700 (PDT)
Date: Fri, 14 Jun 2024 10:39:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Demi Marie Obenour <demi@invisiblethingslab.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZmwByZnn5vKcVLKI@macbook>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>

On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> On 14.06.2024 09:21, Roger Pau Monné wrote:
> > On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> >> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> >>> GPU acceleration requires that pageable host memory be able to be mapped
> >>> into a guest.
> >>
> >> I'm sure it was explained in the session, which sadly I couldn't attend.
> >> I've been asking Ray and Xenia the same before, but I'm afraid it still
> >> hasn't become clear to me why this is a _requirement_. After all that's
> >> against what we're doing elsewhere (i.e. so far it has always been
> >> guest memory that's mapped in the host). I can appreciate that it might
> >> be more difficult to implement, but avoiding to violate this fundamental
> >> (kind of) rule might be worth the price (and would avoid other
> >> complexities, of which there may be lurking more than what you enumerate
> >> below).
> > 
> > My limited understanding (please someone correct me if wrong) is that
> > the GPU buffer (or context I think it's also called?) is always
> > allocated from dom0 (the owner of the GPU).  The underling memory
> > addresses of such buffer needs to be mapped into the guest.  The
> > buffer backing memory might be GPU MMIO from the device BAR(s) or
> > system RAM, and such buffer can be paged by the dom0 kernel at any
> > time (iow: changing the backing memory from MMIO to RAM or vice
> > versa).  Also, the buffer must be contiguous in physical address
> > space.
> 
> This last one in particular would of course be a severe restriction.
> Yet: There's an IOMMU involved, isn't there?

Yup, IIRC that's why Ray said it was much more easier for them to
support VirtIO GPUs from a PVH dom0 rather than classic PV one.

It might be easier to implement from a classic PV dom0 if there's
pv-iommu support, so that dom0 can create it's own contiguous memory
buffers from the device PoV.

> > I'm not sure it's possible to ensure that when using system RAM such
> > memory comes from the guest rather than the host, as it would likely
> > require some very intrusive hooks into the kernel logic, and
> > negotiation with the guest to allocate the requested amount of
> > memory and hand it over to dom0.  If the maximum size of the buffer is
> > known in advance maybe dom0 can negotiate with the guest to allocate
> > such a region and grant it access to dom0 at driver attachment time.
> 
> Besides the thought of transiently converting RAM to kind-of-MMIO, this

As a note here, changing the type to MMIO would likely involve
modifying the EPT/NPT tables to propagate the new type.  On a PVH dom0
this would likely involve shattering superpages in order to set the
correct memory types.

Depending on how often and how random those system RAM changes are
necessary this could also create contention on the p2m lock.

> makes me think of another possible option: Could Dom0 transfer ownership
> of the RAM that wants mapping in the guest (remotely resembling
> grant-transfer)? Would require the guest to have ballooned down enough
> first, of course. (In both cases it would certainly need working out how
> the conversion / transfer back could be made work safely and reasonably
> cleanly.)

Maybe.  The fact the guest needs to balloon down that amount of memory
seems weird to me, as from the guest PoV that mapped memory is
MMIO-like and not system RAM.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 08:53:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 08:53:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740452.1147600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI2g4-0006ca-Rc; Fri, 14 Jun 2024 08:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740452.1147600; Fri, 14 Jun 2024 08:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI2g4-0006bQ-Mx; Fri, 14 Jun 2024 08:52:56 +0000
Received: by outflank-mailman (input) for mailman id 740452;
 Fri, 14 Jun 2024 07:30:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oO8k=NQ=cambridgegreys.com=anton.ivanov@srs-se1.protection.inumbo.net>)
 id 1sI1OG-0007oa-VP
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 07:30:28 +0000
Received: from www.kot-begemot.co.uk (ns1.kot-begemot.co.uk [217.160.28.25])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f7b6474e-2a1f-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 09:30:26 +0200 (CEST)
Received: from [192.168.17.6] (helo=jain.kot-begemot.co.uk)
 by www.kot-begemot.co.uk with esmtps (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2)
 (envelope-from <anton.ivanov@cambridgegreys.com>)
 id 1sI1NW-004AyG-4w; Fri, 14 Jun 2024 07:29:42 +0000
Received: from jain.kot-begemot.co.uk ([192.168.3.3])
 by jain.kot-begemot.co.uk with esmtp (Exim 4.96)
 (envelope-from <anton.ivanov@cambridgegreys.com>)
 id 1sI1NT-000Wne-0I; Fri, 14 Jun 2024 08:29:41 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7b6474e-2a1f-11ef-b4bb-af5377834399
Message-ID: <b9909e61-7fc2-4d10-8000-d23b7def93de@cambridgegreys.com>
Date: Fri, 14 Jun 2024 08:29:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 02/14] ubd: untagle discard vs write zeroes not support
 handling
Content-Language: en-US
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
 "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Richard Weinberger <richard@nod.at>,
 Johannes Berg <johannes@sipsolutions.net>, Josef Bacik
 <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>,
 Dongsheng Yang <dongsheng.yang@easystack.cn>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 linux-um@lists.infradead.org, linux-block@vger.kernel.org,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org,
 Bart Van Assche <bvanassche@acm.org>, Damien Le Moal <dlemoal@kernel.org>
References: <20240531074837.1648501-1-hch@lst.de>
 <20240531074837.1648501-3-hch@lst.de>
From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
In-Reply-To: <20240531074837.1648501-3-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Spam-Score: -1.0
X-Spam-Score: -1.0
X-Clacks-Overhead: GNU Terry Pratchett



On 31/05/2024 08:47, Christoph Hellwig wrote:
> Discard and Write Zeroes are different operation and implemented
> by different fallocate opcodes for ubd.  If one fails the other one
> can work and vice versa.
> 
> Split the code to disable the operations in ubd_handler to only
> disable the operation that actually failed.
> 
> Fixes: 50109b5a03b4 ("um: Add support for DISCARD in the UBD Driver")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
> ---
>   arch/um/drivers/ubd_kern.c | 9 +++++----
>   1 file changed, 5 insertions(+), 4 deletions(-)
> 
> diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
> index 0c9542d58c01b7..093c87879d08ba 100644
> --- a/arch/um/drivers/ubd_kern.c
> +++ b/arch/um/drivers/ubd_kern.c
> @@ -449,10 +449,11 @@ static int bulk_req_safe_read(
>   
>   static void ubd_end_request(struct io_thread_req *io_req)
>   {
> -	if (io_req->error == BLK_STS_NOTSUPP &&
> -	    req_op(io_req->req) == REQ_OP_DISCARD) {
> -		blk_queue_max_discard_sectors(io_req->req->q, 0);
> -		blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
> +	if (io_req->error == BLK_STS_NOTSUPP) {
> +		if (req_op(io_req->req) == REQ_OP_DISCARD)
> +			blk_queue_max_discard_sectors(io_req->req->q, 0);
> +		else if (req_op(io_req->req) == REQ_OP_WRITE_ZEROES)
> +			blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
>   	}
>   	blk_mq_end_request(io_req->req, io_req->error);
>   	kfree(io_req);
Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
-- 
Anton R. Ivanov
Cambridgegreys Limited. Registered in England. Company Number 10273661
https://www.cambridgegreys.com/


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 08:53:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 08:53:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740449.1147593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI2g4-0006ZM-JG; Fri, 14 Jun 2024 08:52:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740449.1147593; Fri, 14 Jun 2024 08:52:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI2g4-0006ZF-GG; Fri, 14 Jun 2024 08:52:56 +0000
Received: by outflank-mailman (input) for mailman id 740449;
 Fri, 14 Jun 2024 07:29:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oO8k=NQ=cambridgegreys.com=anton.ivanov@srs-se1.protection.inumbo.net>)
 id 1sI1N6-0006sR-8G
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 07:29:16 +0000
Received: from www.kot-begemot.co.uk (ns1.kot-begemot.co.uk [217.160.28.25])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cd3fd67f-2a1f-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 09:29:15 +0200 (CEST)
Received: from [192.168.17.6] (helo=jain.kot-begemot.co.uk)
 by www.kot-begemot.co.uk with esmtps (TLS1.3) tls
 TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (Exim 4.94.2)
 (envelope-from <anton.ivanov@cambridgegreys.com>)
 id 1sI1MY-004Axd-Js; Fri, 14 Jun 2024 07:28:42 +0000
Received: from jain.kot-begemot.co.uk ([192.168.3.3])
 by jain.kot-begemot.co.uk with esmtp (Exim 4.96)
 (envelope-from <anton.ivanov@cambridgegreys.com>)
 id 1sI1MU-000Wne-1M; Fri, 14 Jun 2024 08:28:42 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd3fd67f-2a1f-11ef-90a3-e314d9c70b13
Message-ID: <b15de345-838b-4cbb-a156-22b527ed03b6@cambridgegreys.com>
Date: Fri, 14 Jun 2024 08:28:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 01/14] ubd: refactor the interrupt handler
Content-Language: en-US
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
 "Martin K. Petersen" <martin.petersen@oracle.com>
Cc: Richard Weinberger <richard@nod.at>,
 Johannes Berg <johannes@sipsolutions.net>, Josef Bacik
 <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>,
 Dongsheng Yang <dongsheng.yang@easystack.cn>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 linux-um@lists.infradead.org, linux-block@vger.kernel.org,
 nbd@other.debian.org, ceph-devel@vger.kernel.org,
 xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org
References: <20240531074837.1648501-1-hch@lst.de>
 <20240531074837.1648501-2-hch@lst.de>
From: Anton Ivanov <anton.ivanov@cambridgegreys.com>
In-Reply-To: <20240531074837.1648501-2-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Spam-Score: -1.0
X-Clacks-Overhead: GNU Terry Pratchett



On 31/05/2024 08:47, Christoph Hellwig wrote:
> Instead of a separate handler function that leaves no work in the
> interrupt handler itself, split out a per-request end I/O helper and
> clean up the coding style and variable naming while we're at it.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
> ---
>   arch/um/drivers/ubd_kern.c | 49 ++++++++++++++------------------------
>   1 file changed, 18 insertions(+), 31 deletions(-)
> 
> diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
> index ef805eaa9e013d..0c9542d58c01b7 100644
> --- a/arch/um/drivers/ubd_kern.c
> +++ b/arch/um/drivers/ubd_kern.c
> @@ -447,43 +447,30 @@ static int bulk_req_safe_read(
>   	return n;
>   }
>   
> -/* Called without dev->lock held, and only in interrupt context. */
> -static void ubd_handler(void)
> +static void ubd_end_request(struct io_thread_req *io_req)
>   {
> -	int n;
> -	int count;
> -
> -	while(1){
> -		n = bulk_req_safe_read(
> -			thread_fd,
> -			irq_req_buffer,
> -			&irq_remainder,
> -			&irq_remainder_size,
> -			UBD_REQ_BUFFER_SIZE
> -		);
> -		if (n < 0) {
> -			if(n == -EAGAIN)
> -				break;
> -			printk(KERN_ERR "spurious interrupt in ubd_handler, "
> -			       "err = %d\n", -n);
> -			return;
> -		}
> -		for (count = 0; count < n/sizeof(struct io_thread_req *); count++) {
> -			struct io_thread_req *io_req = (*irq_req_buffer)[count];
> -
> -			if ((io_req->error == BLK_STS_NOTSUPP) && (req_op(io_req->req) == REQ_OP_DISCARD)) {
> -				blk_queue_max_discard_sectors(io_req->req->q, 0);
> -				blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
> -			}
> -			blk_mq_end_request(io_req->req, io_req->error);
> -			kfree(io_req);
> -		}
> +	if (io_req->error == BLK_STS_NOTSUPP &&
> +	    req_op(io_req->req) == REQ_OP_DISCARD) {
> +		blk_queue_max_discard_sectors(io_req->req->q, 0);
> +		blk_queue_max_write_zeroes_sectors(io_req->req->q, 0);
>   	}
> +	blk_mq_end_request(io_req->req, io_req->error);
> +	kfree(io_req);
>   }
>   
>   static irqreturn_t ubd_intr(int irq, void *dev)
>   {
> -	ubd_handler();
> +	int len, i;
> +
> +	while ((len = bulk_req_safe_read(thread_fd, irq_req_buffer,
> +			&irq_remainder, &irq_remainder_size,
> +			UBD_REQ_BUFFER_SIZE)) >= 0) {
> +		for (i = 0; i < len / sizeof(struct io_thread_req *); i++)
> +			ubd_end_request((*irq_req_buffer)[i]);
> +	}
> +
> +	if (len < 0 && len != -EAGAIN)
> +		pr_err("spurious interrupt in %s, err = %d\n", __func__, len);
>   	return IRQ_HANDLED;
>   }
>   
Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
-- 
Anton R. Ivanov
Cambridgegreys Limited. Registered in England. Company Number 10273661
https://www.cambridgegreys.com/


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 09:16:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 09:16:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740532.1147613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI32C-0002Ln-Ki; Fri, 14 Jun 2024 09:15:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740532.1147613; Fri, 14 Jun 2024 09:15:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI32C-0002Lg-HW; Fri, 14 Jun 2024 09:15:48 +0000
Received: by outflank-mailman (input) for mailman id 740532;
 Fri, 14 Jun 2024 09:15:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6dE=NQ=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sI32B-0002LY-Cl
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 09:15:47 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae1600f4-2a2e-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 11:15:45 +0200 (CEST)
Received: from truciolo.homenet.telecomitalia.it
 (host-82-58-35-96.retail.telecomitalia.it [82.58.35.96])
 by support.bugseng.com (Postfix) with ESMTPSA id 41CEE4EE0756;
 Fri, 14 Jun 2024 11:15:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae1600f4-2a2e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v3] automation/eclair: add deviation for MISRA C Rule 17.7
Date: Fri, 14 Jun 2024 11:15:38 +0200
Message-Id: <b571bd05955ab9967a44517c9947545a2a530f01.1718354974.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to deviate some cases where not using
the return value of a function is not dangerous.
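For context, Rule 17.7 flags any call whose return value is discarded; the deviated functions all return a value that merely echoes an argument, so ignoring it loses no information. A minimal sketch of the pattern, using standard memcpy() as a stand-in for the deviated builtins:

```c
#include <assert.h>
#include <string.h>

/* memcpy() returns its destination pointer, which coincides with the
 * first actual argument.  Rule 17.7 would flag the call below for not
 * using that return value, but nothing is lost by ignoring it - the
 * rationale behind tagging such calls as safe. */
static int copy_and_check(void)
{
    char src[4] = "abc";
    char dst[4];

    memcpy(dst, src, sizeof(src));  /* return value deliberately ignored */
    return strcmp(dst, "abc") == 0;
}
```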

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v3:
- removed unwanted underscores;
- grammar fixed;
- do not constrain to the first actual argument.
Changes in v2:
- do not deviate strlcpy and strlcat.
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 4 ++++
 docs/misra/deviations.rst                        | 9 +++++++++
 2 files changed, 13 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661..97281082a8 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -413,6 +413,10 @@ explicit comment indicating the fallthrough intention is present."
 -config=MC3R1.R17.1,macros+={hide , "^va_(arg|start|copy|end)$"}
 -doc_end
 
+-doc_begin="Not using the return value of a function does not endanger safety if it coincides with an actual argument."
+-config=MC3R1.R17.7,calls+={safe, "any()", "decl(name(__builtin_memcpy||__builtin_memmove||__builtin_memset||cpumask_check))"}
+-doc_end
+
 #
 # Series 18.
 #
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..f3abe31eb5 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -364,6 +364,15 @@ Deviations related to MISRA C:2012 Rules:
        by `stdarg.h`.
      - Tagged as `deliberate` for ECLAIR.
 
+   * - R17.7
+     - Not using the return value of a function does not endanger safety if it
+       coincides with an actual argument.
+     - Tagged as `safe` for ECLAIR. Such functions are:
+         - __builtin_memcpy()
+         - __builtin_memmove()
+         - __builtin_memset()
+         - cpumask_check()
+
    * - R20.4
      - The override of the keyword \"inline\" in xen/compiler.h is present so
        that section contents checks pass when the compiler chooses not to
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 09:23:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 09:23:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740542.1147625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI39o-0004Q8-GF; Fri, 14 Jun 2024 09:23:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740542.1147625; Fri, 14 Jun 2024 09:23:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI39o-0004Q1-BP; Fri, 14 Jun 2024 09:23:40 +0000
Received: by outflank-mailman (input) for mailman id 740542;
 Fri, 14 Jun 2024 09:23:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6dE=NQ=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sI39m-0004Pv-Ld
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 09:23:38 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c7a06279-2a2f-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 11:23:37 +0200 (CEST)
Received: from truciolo.homenet.telecomitalia.it
 (host-82-58-35-96.retail.telecomitalia.it [82.58.35.96])
 by support.bugseng.com (Postfix) with ESMTPSA id 941834EE0756;
 Fri, 14 Jun 2024 11:23:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7a06279-2a2f-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v3] automation/eclair: extend existing deviations of MISRA C Rule 16.3
Date: Fri, 14 Jun 2024 11:23:26 +0200
Message-Id: <71a69d25e7889ed6e8546b5cd18d423006d69ceb.1718356683.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to deviate more cases where an
unintentional fallthrough cannot happen.

Add Rule 16.3 to the monitored set and tag it as clean for arm.
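One of the newly deviated shapes is a switch clause whose last statement is an if-else with both branches ending in an allowed terminal statement: there is no break, yet fallthrough cannot happen. A minimal illustration (hypothetical function, not from the Xen tree):

```c
#include <assert.h>

/* The "case 0" clause ends with an if-else whose branches both return,
 * so control can never fall through into "default" even though no
 * break statement is present.  The v3 configuration treats such an
 * if-else as an allowed terminal statement for Rule 16.3. */
static int classify(int op, int flag)
{
    switch (op) {
    case 0:
        if (flag)
            return 1;
        else
            return 2;
        /* unreachable: both branches above terminate the clause */
    default:
        return 0;
    }
}
```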

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes from v2:
- fixed grammar;
- rephrased deviations regarding do-while-false and ASSERT_UNREACHABLE().
---
 .../eclair_analysis/ECLAIR/deviations.ecl     | 31 ++++++++++++++-----
 .../eclair_analysis/ECLAIR/monitored.ecl      |  1 +
 automation/eclair_analysis/ECLAIR/tagging.ecl |  2 +-
 docs/misra/deviations.rst                     | 28 +++++++++++++++--
 4 files changed, 50 insertions(+), 12 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661..3bdfc3a84d 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -364,14 +364,30 @@ therefore it is deemed better to leave such files as is."
 -config=MC3R1.R16.2,reports+={deliberate, "any_area(any_loc(file(x86_emulate||x86_svm_emulate)))"}
 -doc_end
 
--doc_begin="Switch clauses ending with continue, goto, return statements are
-safe."
--config=MC3R1.R16.3,terminals+={safe, "node(continue_stmt||goto_stmt||return_stmt)"}
+-doc_begin="Statements that change the control flow (i.e., break, continue, goto, return) and calls to functions that do not return the control back are \"allowed terminal statements\"."
+-stmt_selector+={r16_3_allowed_terminal, "node(break_stmt||continue_stmt||goto_stmt||return_stmt)||call(property(noreturn))"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_allowed_terminal"}
+-doc_end
+
+-doc_begin="An if-else statement having both branches ending with an allowed terminal statement is itself an allowed terminal statement."
+-stmt_selector+={r16_3_if, "node(if_stmt)&&(child(then,r16_3_allowed_terminal)||child(then,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
+-stmt_selector+={r16_3_else, "node(if_stmt)&&(child(else,r16_3_allowed_terminal)||child(else,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
+-stmt_selector+={r16_3_if_else, "r16_3_if&&r16_3_else"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_else"}
+-doc_end
+
+-doc_begin="An if-else statement having an always true condition and the true branch ending with an allowed terminal statement is itself an allowed terminal statement."
+-stmt_selector+={r16_3_if_true, "r16_3_if&&child(cond,definitely_in(1..))"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_true"}
+-doc_end
+
+-doc_begin="A switch clause ending with a statement expression which, in turn, ends with an allowed terminal statement is safe."
+-config=MC3R1.R16.3,terminals+={safe, "node(stmt_expr)&&child(stmt,node(compound_stmt)&&any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
 -doc_end
 
--doc_begin="Switch clauses ending with a call to a function that does not give
-the control back (i.e., a function with attribute noreturn) are safe."
--config=MC3R1.R16.3,terminals+={safe, "call(property(noreturn))"}
+-doc_begin="A switch clause ending with a do-while-false the body of which, in turn, ends with an allowed terminal statement is safe.
+An exception to that is the macro ASSERT_UNREACHABLE() which is effective in debug build only: a switch clause ending with ASSERT_UNREACHABLE() is not considered safe."
+-config=MC3R1.R16.3,terminals+={safe, "!macro(name(ASSERT_UNREACHABLE))&&node(do_stmt)&&child(cond,definitely_in(0))&&child(body,any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
 -doc_end
 
 -doc_begin="Switch clauses ending with pseudo-keyword \"fallthrough\" are
@@ -383,8 +399,7 @@ safe."
 -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(/BUG\\(\\);/))))"}
 -doc_end
 
--doc_begin="Switch clauses not ending with the break statement are safe if an
-explicit comment indicating the fallthrough intention is present."
+-doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
 -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
 -doc_end
 
diff --git a/automation/eclair_analysis/ECLAIR/monitored.ecl b/automation/eclair_analysis/ECLAIR/monitored.ecl
index 4daecb0c83..45a60074f9 100644
--- a/automation/eclair_analysis/ECLAIR/monitored.ecl
+++ b/automation/eclair_analysis/ECLAIR/monitored.ecl
@@ -22,6 +22,7 @@
 -enable=MC3R1.R14.1
 -enable=MC3R1.R14.4
 -enable=MC3R1.R16.2
+-enable=MC3R1.R16.3
 -enable=MC3R1.R16.6
 -enable=MC3R1.R16.7
 -enable=MC3R1.R17.1
diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
index a354ff322e..07de2e7b65 100644
--- a/automation/eclair_analysis/ECLAIR/tagging.ecl
+++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
@@ -105,7 +105,7 @@ if(string_equal(target,"x86_64"),
 )
 
 if(string_equal(target,"arm64"),
-    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
+    service_selector({"additional_clean_guidelines","MC3R1.R14.4||MC3R1.R16.3||MC3R1.R16.6||MC3R1.R20.12||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.2||MC3R1.R7.3||MC3R1.R8.6||MC3R1.R9.3"})
 )
 
 -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..41cdfbe5f5 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -309,12 +309,34 @@ Deviations related to MISRA C:2012 Rules:
      - Tagged as `deliberate` for ECLAIR.
 
    * - R16.3
-     - Switch clauses ending with continue, goto, return statements are safe.
+     - Statements that change the control flow (i.e., break, continue, goto,
+       return) and calls to functions that do not return the control back are
+       \"allowed terminal statements\".
      - Tagged as `safe` for ECLAIR.
 
    * - R16.3
-     - Switch clauses ending with a call to a function that does not give
-       the control back (i.e., a function with attribute noreturn) are safe.
+     - An if-else statement having both branches ending with one of the allowed
+       terminal statements is itself an allowed terminal statement.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - An if-else statement having an always true condition and the true
+       branch ending with an allowed terminal statement is itself an allowed
+       terminal statement.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - A switch clause ending with a statement expression which, in turn, ends
+       with an allowed terminal statement (e.g., the expansion of
+       generate_exception()) is safe.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - A switch clause ending with a do-while-false the body of which, in turn,
+       ends with an allowed terminal statement (e.g., PARSE_ERR_RET()) is safe.
+       An exception to that is the macro ASSERT_UNREACHABLE() which is
+       effective in debug build only: a switch clause ending with
+       ASSERT_UNREACHABLE() is not considered safe.
      - Tagged as `safe` for ECLAIR.
 
    * - R16.3
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 09:32:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 09:32:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740549.1147634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3IV-0006NB-9P; Fri, 14 Jun 2024 09:32:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740549.1147634; Fri, 14 Jun 2024 09:32:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3IV-0006N4-5z; Fri, 14 Jun 2024 09:32:39 +0000
Received: by outflank-mailman (input) for mailman id 740549;
 Fri, 14 Jun 2024 09:32:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3IU-0006Mu-OQ; Fri, 14 Jun 2024 09:32:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3IU-0000iu-HY; Fri, 14 Jun 2024 09:32:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3IU-0007oK-2f; Fri, 14 Jun 2024 09:32:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3IU-0008IG-28; Fri, 14 Jun 2024 09:32:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8OmgvVt9XdIP13mh8Q1cgRKkr8vo1fNXP16fMfEShi4=; b=IQ695rKK6Ysa+8w+HLzFZ4CtXc
	BHiPQHWoOHbRs3rs6IGsJxrL5Yue6eVQcKNzQNzkButcoyOZIsXu/Xa2g3L9QAwNTsCs1y7n98eP/
	IkesH6NZF04ARdx6UGOaktJSWFWKyCp+6AvSnYv9z/H4EnILw8nBLAOBz2sazTpS9PvE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186343-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186343: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=991c324fae2cfca8d592437ffc386171d343c836
X-Osstest-Versions-That:
    libvirt=230d81fc3a1398e66f482e404abe9759afd9bb59
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 09:32:38 +0000

flight 186343 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186343/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186330
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              991c324fae2cfca8d592437ffc386171d343c836
baseline version:
 libvirt              230d81fc3a1398e66f482e404abe9759afd9bb59

Last test of basis   186330  2024-06-13 04:18:43 Z    1 days
Testing same since   186343  2024-06-14 04:22:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   230d81fc3a..991c324fae  991c324fae2cfca8d592437ffc386171d343c836 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 09:35:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 09:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740556.1147643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3L0-0006x8-N0; Fri, 14 Jun 2024 09:35:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740556.1147643; Fri, 14 Jun 2024 09:35:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3L0-0006x1-Jg; Fri, 14 Jun 2024 09:35:14 +0000
Received: by outflank-mailman (input) for mailman id 740556;
 Fri, 14 Jun 2024 09:35:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3L0-0006wr-0a; Fri, 14 Jun 2024 09:35:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3Kz-0000mH-Vw; Fri, 14 Jun 2024 09:35:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3Kz-0007su-NC; Fri, 14 Jun 2024 09:35:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sI3Kz-0001wk-Mi; Fri, 14 Jun 2024 09:35:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=igUA5ddgMFyY8muW04mJAuGvcYuswOBeaLhBe8iHnbI=; b=wVMPFG8FjiWpkRbRTmTIjr8Oca
	9LyUktHvQ+WekOD0XtgZ1dQxLo6flJ3LIFINjmp0f4wvx7ow1zLhM4fW6CSF9UQ1AgSHl8hjDw1+p
	TzlQY65Tw0FPxmfPUoyyp44A5mb4Pkrtc0BT1E34Nz5aXaHg+dnpGapo1rzoWkxOxJaE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186346-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186346: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ce91687a1b2d4e03b01abb474b4665629776f588
X-Osstest-Versions-That:
    ovmf=712797cf19acd292bf203522a79e40e7e13d268b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 09:35:13 +0000

flight 186346 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186346/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ce91687a1b2d4e03b01abb474b4665629776f588
baseline version:
 ovmf                 712797cf19acd292bf203522a79e40e7e13d268b

Last test of basis   186338  2024-06-13 16:11:15 Z    0 days
Testing same since   186346  2024-06-14 07:13:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiaxin Wu <jiaxin.wu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   712797cf19..ce91687a1b  ce91687a1b2d4e03b01abb474b4665629776f588 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 09:46:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 09:46:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740567.1147654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3WD-0000Wt-OX; Fri, 14 Jun 2024 09:46:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740567.1147654; Fri, 14 Jun 2024 09:46:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3WD-0000Wm-KW; Fri, 14 Jun 2024 09:46:49 +0000
Received: by outflank-mailman (input) for mailman id 740567;
 Fri, 14 Jun 2024 09:46:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sI3WC-0000We-8E
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 09:46:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sI3WB-0000yz-Oq; Fri, 14 Jun 2024 09:46:47 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.0.211])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sI3WB-00010A-If; Fri, 14 Jun 2024 09:46:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=nE3GURDV+edZoPtQu7IG+u+8p2e0vOsmsHTgeOnPpno=; b=xqOwj7sRUpp62iDKNg0DF/dIhj
	cySir8qomt+x11KMmFEEyMSaeRAxYeRX/KXoFt3GmIaGNcK7p7QI/5+PRkCTYCkqwyifxDf7ZfIIc
	Ix/v11o0+pLfAbtlHoUq3osjdzT4zEn5a4qE31ZtLsu1ufupuyZzl9pcU9GGZxtqGSUU=;
Message-ID: <216fc5bb-e148-47d4-8946-75abd7f3251e@xen.org>
Date: Fri, 14 Jun 2024 10:46:45 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 0/7] Static shared memory followup v2 - pt2
Content-Language: en-GB
To: Luca Fancellu <Luca.Fancellu@arm.com>
Cc: Michal Orzel <michal.orzel@amd.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Oleksandr Tyshchenko <olekstysh@gmail.com>
References: <20240524124055.3871399-1-luca.fancellu@arm.com>
 <3DDAAFF7-5E43-4B92-9D6B-6D8AFBA8496F@arm.com>
 <66e5d584-b326-4197-81d5-ec2b8233a3fa@amd.com>
 <502d0284-1ca2-478c-b4f4-fda5caff3723@xen.org>
 <92B10944-083B-4DB3-8257-ADD452FBFF69@arm.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <92B10944-083B-4DB3-8257-ADD452FBFF69@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 14/06/2024 08:54, Luca Fancellu wrote:
> Hi Julien,

Hi Luca,

>> On 13 Jun 2024, at 12:31, Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 11/06/2024 13:42, Michal Orzel wrote:
>>>> We would like this series to be in Xen 4.19. There was a misunderstanding on our side: we thought
>>>> that since the series was sent before the last posting date it could be a candidate for merging in the
>>>> new release, but after speaking with Julien and Oleksii we are now aware that we need to provide a
>>>> justification for its presence.
>>>>
>>>> Pros: this series closes the circle for static shared memory, allowing it to use memory from
>>>> the host or from Xen. It is also a feature that is not enabled by default, so it should not cause too much
>>>> disruption in case of any bugs that escaped review. We have also tested many configurations
>>>> with and without the feature enabled, if that can be an additional value.
>>>>
>>>> Cons: we are touching some common code related to p2m, but even there the impact should be minimal because
>>>> the new code is subject to the l2 foreign mapping (to be confirmed, perhaps by a p2m expert like Julien).
>>>>
>>>> The comments on patch 3 of this series are addressed by this patch:
>>>> https://patchwork.kernel.org/project/xen-devel/patch/20240528125603.2467640-1-luca.fancellu@arm.com/
>>>> And the series is fully reviewed.
>>>>
>>>> So our request is to allow this series in 4.19. Oleksii, ARM maintainers, do you agree with that?
>>> As the main reviewer of this series I'm OK with having it in. It is nicely encapsulated, and the feature itself
>>> is still in an unsupported state. I don't foresee any issues with it.
>>
>> There are changes in the p2m code and the memory allocation for boot domains. So is it really encapsulated?
>>
>> For me there are two risks:
>> * p2m (already mentioned by Luca): we modify the code that puts in place foreign mappings. The worst that can happen is that we don't release a foreign mapping, which would mean the page is never freed. AFAIK, we don't exercise this path in the CI.
>> * domain allocation: this mainly looks like refactoring, and the path is exercised in the CI.
>>
>> So I am not concerned with the domain allocation one. @Luca, would it be possible to detail how you tested that the foreign pages were properly removed?
> 
> So at first we tested the code, with and without the static shared memory feature enabled, creating/destroying guests from Dom0 and checking that everything
> was OK.

Just to add a bit more detail for the others: in a normal setup, a guest 
would be using PV block/network, which are based on the grant table. To 
exercise the foreign mapping path, you would need a backend that maps 
using the GFN directly.

This is the case for virtio MMIO. Other users are, IIRC, XenFB and IOREQ 
(if you emulate a device).

> 
> After a chat on Matrix with Julien, he suggested that using a virtio-mmio disk was a better way to stress the foreign mapping path while looking for
> regressions.
> 
> Luckily I’ve found this slide deck from @Oleksandr: https://static.linaro.org/connect/lvc21/presentations/lvc21-314.pdf
> 
> So I did a setup using fvp-base, with a disk holding two partitions containing the Dom0 rootfs and the DomU rootfs; Dom0 sees
> this disk through VirtIO block.

Thanks for the testing! I am satisfied with the result. I have committed 
the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 09:47:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 09:47:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740572.1147664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3X9-0001Ra-05; Fri, 14 Jun 2024 09:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740572.1147664; Fri, 14 Jun 2024 09:47:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3X8-0001RT-TW; Fri, 14 Jun 2024 09:47:46 +0000
Received: by outflank-mailman (input) for mailman id 740572;
 Fri, 14 Jun 2024 09:47:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sI3X7-0001RH-EO
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 09:47:45 +0000
Received: from mail-qt1-x830.google.com (mail-qt1-x830.google.com
 [2607:f8b0:4864:20::830])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 259d77ed-2a33-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 11:47:44 +0200 (CEST)
Received: by mail-qt1-x830.google.com with SMTP id
 d75a77b69052e-4403bb543a4so11011651cf.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 02:47:44 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-441f3111ff4sm14575411cf.53.2024.06.14.02.47.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 02:47:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 259d77ed-2a33-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718358463; x=1718963263; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=z1rMhTdFN2tBYyOnOGtD6mGUNPVmMbr8TyhzjgcH7oU=;
        b=QfjwTlhonMQs08k1lnW7wM9ojCNXX6d/OIEKJ7WSZBIXGoL2aW4UDfGXT7V9VKNc/6
         BtJnObwYVBWCwfrGnd0ZVrG3aRfkvblTSapvP6PebJ/dXXqPgpsn5IFalQEBzxtKscXQ
         pi+PFpdkt7tsoCdNs0usUXO/jFSMaeCBRXhDA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718358463; x=1718963263;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=z1rMhTdFN2tBYyOnOGtD6mGUNPVmMbr8TyhzjgcH7oU=;
        b=sMNNVkr0Jp3XZhMVSuacG3wfgJRjeYh1tXNJwj1HqItX3Fsy5uNTQqyOWe6894yDyc
         8YtTSk4gu37tzq9+0QanTdQbdvQAi6OFnU+L0AmjXF5rLpp6BMpjMHNQCsCO4g1rk0Gu
         ZsG/KuPVHXbBxgUjEga64okRNVn0xAmfesQIQGR3UW4VEqDMZid+WsuK6p96bqsLK1jJ
         rbWFzsZ/5bDBCz6p/zknxq3s/O6IhuFahXG4V5qzP7aK0x+AhJ16yA7lcdcOdHMvAjTH
         J5L30sfXalkc1y9RV07lTLONNBHr33lFn6sDqrYDxiPXF0e58eHBM1m+nsY7QjBNp6F+
         QYeQ==
X-Forwarded-Encrypted: i=1; AJvYcCX+MrIdaQ+DF4lDSHDK5lw0xwf0uZUZASc5SmnZBAvBES0VX54UlDeFeh7cHjIl1eTTy9vSPlYscsJjRNO5F5pDTYZ2LGGq4SUY3up0WkQ=
X-Gm-Message-State: AOJu0YykFbv/mUy7Amk9M0ifJaf2frmsiQ5AvRAqYZmKTEUfWNfVoRHY
	7k+Otj9mqfY8Hr2Ebhs7mKKGPgSZUZY5nBGt+MX4CMECsE+6N2M6KqytHj1jj7o=
X-Google-Smtp-Source: AGHT+IFWmWrZg58+zkZC0cJPU/UHnJw2XD90HQPdydJnh3ua2xkLmI8onQx5FiAR5JAwkNMwDkKOnA==
X-Received: by 2002:ac8:5786:0:b0:440:4eb7:293f with SMTP id d75a77b69052e-442168a80d0mr27054771cf.12.1718358462786;
        Fri, 14 Jun 2024 02:47:42 -0700 (PDT)
Message-ID: <fa62c314-424e-4e5b-9046-3a2e1eba654e@citrix.com>
Date: Fri, 14 Jun 2024 10:47:39 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v12 5/8] xen/riscv: add minimal stuff to mm.h to build
 full Xen
To: "Oleksii K." <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
 <d00b86f41ef2c7d928a28dadd8c34fb845f23d0a.1717008161.git.oleksii.kurochko@gmail.com>
 <70128dba-498f-4d85-8507-bb1621182754@citrix.com>
 <7721c1b4eb0ea76cca5460264954d40d639499b7.camel@gmail.com>
 <e80e30c9-6558-4b70-ab2f-18c34c359dae@citrix.com>
 <1b3b389156ad924f00af8af1d173b89fc533050e.camel@gmail.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <1b3b389156ad924f00af8af1d173b89fc533050e.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 11/06/2024 7:23 pm, Oleksii K. wrote:
> On Tue, 2024-06-11 at 16:53 +0100, Andrew Cooper wrote:
>> On 30/05/2024 7:22 pm, Oleksii K. wrote:
>>> On Thu, 2024-05-30 at 18:23 +0100, Andrew Cooper wrote:
>>>> On 29/05/2024 8:55 pm, Oleksii Kurochko wrote:
>>>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>>> This patch looks like it can go in independently?  Or does it
>>>> depend
>>>> on
>>>> having bitops.h working in practice?
>>>>
>>>> However, one very strong suggestion...
>>>>
>>>>
>>>>> diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
>>>>> index 07c7a0abba..cc4a07a71c 100644
>>>>> --- a/xen/arch/riscv/include/asm/mm.h
>>>>> +++ b/xen/arch/riscv/include/asm/mm.h
>>>>> @@ -3,11 +3,246 @@
>>>>> <snip>
>>>>> +/* PDX of the first page in the frame table. */
>>>>> +extern unsigned long frametable_base_pdx;
>>>>> +
>>>>> +/* Convert between machine frame numbers and page-info structures. */
>>>>> +#define mfn_to_page(mfn)                                            \
>>>>> +    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
>>>>> +#define page_to_mfn(pg)                                             \
>>>>> +    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
>>>> Do yourself a favour and don't introduce frametable_base_pdx to begin
>>>> with.
>>>>
>>>> Every RISC-V board I can find has things starting from 0 in
>>>> physical
>>>> address space, with RAM starting immediately after.
>>> I checked Linux kernel and grep there:
>>>    [ok@fedora linux-aia]$ grep -Rni "memory@" arch/riscv/boot/dts/ --exclude "*.tmp" -I
>>>    arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi:33: memory@40000000 {
>>>    arch/riscv/boot/dts/starfive/jh7100-common.dtsi:28: memory@80000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-sev-kit.dts:49: ddrc_cache: memory@1000000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:33: ddrc_cache_lo: memory@80000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:37: ddrc_cache_hi: memory@1040000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:34: ddrc_cache_lo: memory@80000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:40: ddrc_cache_hi: memory@1000000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:22: ddrc_cache_lo: memory@80000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:27: ddrc_cache_hi: memory@1000000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:57: ddrc_cache_lo: memory@80000000 {
>>>    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:63: ddrc_cache_hi: memory@1040000000 {
>>>    arch/riscv/boot/dts/thead/th1520-beaglev-ahead.dts:32: memory@0 {
>>>    arch/riscv/boot/dts/thead/th1520-lichee-module-4a.dtsi:14: memory@0 {
>>>    arch/riscv/boot/dts/sophgo/cv1800b-milkv-duo.dts:26: memory@80000000 {
>>>    arch/riscv/boot/dts/sophgo/cv1812h.dtsi:12: memory@80000000 {
>>>    arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts:26: memory@80000000 {
>>>    arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts:25: memory@80000000 {
>>>    arch/riscv/boot/dts/canaan/k210.dtsi:82: sram: memory@80000000 {
>>> Based on that, the majority of boards supported by the Linux kernel
>>> have RAM starting at a non-zero physical address. Am I confusing
>>> something?
>>>
>>>
>>>> Taking the microchip board as an example, RAM actually starts at
>>>> 0x8000000,
>>> Today we had a conversation with an engineer from SiFive on the xen-devel
>>> channel, and he mentioned that they are using "starfive visionfive2 and
>>> sifive unleashed platforms", which, based on the grep above, have RAM
>>> not at address 0.
>>>
>>> Also, QEMU uses 0x8000_0000.
>>>
>>>>  which means that having frametable_base_pdx and assuming it
>>>> does get set to 0x8000 (which isn't even a certainty, given that
>>>> I
>>>> think
>>>> you'll need struct pages covering the PLICs), then what you are
>>>> trading
>>>> off is:
>>>>
>>>> * Saving 32k of virtual address space only (no need to even allocate
>>>> memory for this range of the frametable), by
>>>> * Having an extra memory load and add/sub in every page <-> mfn
>>>> conversion, which is a screaming hotpath all over Xen.
>>>>
>>>> It's a terribly short-sighted tradeoff.
>>>>
>>>> 32k of VA space might be worth saving in a 32bit build (I
>>>> personally
>>>> wouldn't - especially as there's no need to share Xen's VA space
>>>> with
>>>> guests, given no PV guests on ARM/RISC-V), but it's absolutely not
>>>> at all in a 64bit build with TBs of VA space available.
>>>>
>>>> Even if we do find a board with the first interesting thing in
>>>> the
>>>> frametable starting sufficiently away from 0 that it might be
>>>> worth
>>>> considering this slide, then it should still be Kconfig-able in a
>>>> similar way to PDX_COMPRESSION.
>>> I find your tradeoffs reasonable, but I don't understand how it will
>>> work if RAM does not start from 0, as the frametable address and the
>>> RAM address are linked.
>>> I tried to look at the PDX_COMPRESSION config and couldn't find any
>>> "slide" there. Could you please clarify this for me?
>>> Would using this "slide" help to avoid the tradeoffs mentioned above?
>>>
>>> One more question: if we decide to go without frametable_base_pdx,
>>> would it be sufficient to simply remove mentions of it from the
>>> code (
>>> at least, for now )?
>> There is a relationship between system/host physical addresses (what
>> Xen
>> called maddr/mfn), and the frametable.  The frametable has one entry
>> per
>> mfn.
>>
>> In the simplest case, there's a 1:1 relationship, i.e.
>> frametable[0]
>> = maddr(0), frametable[1] = maddr(4k) etc.  This is very simple, and
>> very easy to calculate (page_to_mfn()/mfn_to_page()).
>>
>> The frametable is one big array.  It starts at a compile-time fixed
>> address, and needs to be long enough to cover everything interesting
>> in
>> memory.  Therefore it potentially takes a large amount of virtual
>> address space.
>>
>> However, only interesting maddrs need to have data in the frametable,
>> so
>> it's fine for the backing RAM to be sparsely allocated/mapped in the
>> frametable virtual addresses.
>>
>> For 64bit, that's really all you need, because there's always far
>> more
>> virtual address space than physical RAM in the system, even when
>> you're
>> looking at TB-scale giant servers.
>>
>>
>> For 32bit, virtual address space is a limited resource.  (Also to an
>> extent, 64bit x86 with PV guests because we give 98% of the virtual
>> address space to the guest kernel to use.)
>>
>> There are two tricks to reduce the virtual address space used, but
>> they
>> both cost performance in fastpaths.
>>
>> 1) PDX Compression.
>>
>> PDX compression makes a non-linear mfn <-> maddr mapping.  This is for
>> the use case where you've got multiple RAM banks separated by a large
>> distance (and evenly spaced); you can then "compress" a single range of
>> zeros out of the middle of the system/host physical address space.
>>
>> The cost is that all page <-> mfn conversions need to read two masks
>> and
>> a shift-count from variables in memory, to split/shift/recombine the
>> address bits.
>>
>> 2) A slide, which is frametable_base_pdx in this context.
>>
>> When there's a big gap between 0 and the start of something
>> interesting,
>> you could chop out that range by just subtracting base_pdx.  What
>> qualifies as "big" is subjective, but Qemu starting at 128M certainly
>> does not qualify as big enough to warrant frametable_base_pdx.
>>
>> This is less expensive than PDX compression.  It only adds one memory
>> read to the fastpath, but it also doesn't save as much virtual
>> address
>> space as PDX compression.
>>
>>
>> When virtual address space is a major constraint (32 bit builds),
>> both
>> of these techniques are worth doing.  But when there's no constraint
>> on
>> virtual address space (64 bit builds), there's no reason to use
>> either;
>> and the performance will definitely improve as a result.
> Thanks for such a good explanation.
>
> For RISC-V we have the PDX config disabled, as I haven't seen multiple RAM
> banks on boards which have the hypervisor extension. Thereby mfn_to_pdx()
> and pdx_to_mfn() are no-ops. The same goes for frametable_base_pdx: with
> PDX disabled, it is just an offset (or a slide).
>
> IIUC, you meant that it makes sense to base the frametable at address 0
> rather than at the RAM start address, so that we don't need this
> +-frametable_base_pdx. The price for that is losing VA space for the
> range from 0 to the RAM start address.
>
> Right now, we are trying to support three boards with the following RAM
> addresses:
> 1. 0x8000_0000 - QEMU and the SiFive board
> 2. 0x40_0000_0000 - the Microchip board
>
> So if we base the frametable at 0 (not at the RAM start) we will lose:
> 1. 0x8_0000 (number of page entries to cover the range [0, 0x8000_0000])
> * 64 (size of struct page_info) = 32 MB
> 2. 0x400_0000 (number of page entries to cover the range [0,
> 0x40_0000_0000]) * 64 (size of struct page_info) = 4 GB
>
> In terms of available virtual address space for RV64, we can consider
> both options acceptable.

For Qemu and SiFive, 32M is definitely not worth worrying about.

I personally wouldn't worry about Microchip either.  That's 4G of 1T VA
space (assuming Sv39).

> [OPTION 1] If we accepting of loosing 4 Gb of VA then we could
> implement mfn_to_page() and page_to_mfn() in the following way:
> ```
>    diff --git a/xen/arch/riscv/include/asm/mm.h
>    b/xen/arch/riscv/include/asm/mm.h
>    index cc4a07a71c..fdac7e0646 100644
>    --- a/xen/arch/riscv/include/asm/mm.h
>    +++ b/xen/arch/riscv/include/asm/mm.h
>    @@ -107,14 +107,11 @@ struct page_info
>    
>     #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
>    
>    -/* PDX of the first page in the frame table. */
>    -extern unsigned long frametable_base_pdx;
>    -
>     /* Convert between machine frame numbers and page-info structures.
> */
>     #define mfn_to_page(mfn)                                          
> \
>    -    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
>    +    (frame_table + mfn))
>     #define page_to_mfn(pg)                                           
> \
>    -    pdx_to_mfn((unsigned long)((pg) - frame_table) +
>    frametable_base_pdx)
>    +    ((unsigned long)((pg) - frame_table))
>    
>     static inline void *page_to_virt(const struct page_info *pg)
>     {
>    diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
>    index 9c0fd80588..8f6dbdc699 100644
>    --- a/xen/arch/riscv/mm.c
>    +++ b/xen/arch/riscv/mm.c
>    @@ -15,7 +15,6 @@
>     #include <asm/page.h>
>     #include <asm/processor.h>
>    
>    -unsigned long __ro_after_init frametable_base_pdx;
>     unsigned long __ro_after_init frametable_virt_end;
>    
>     struct mmu_desc {
> ```

I firmly recommend option 1, especially at this point.

If specific boards really have a problem with losing 4G of VA space,
then option 2 can be added easily at a later point.

That said, I'd think carefully before doing option 2.  Even subtracting a
constant - which is far better than subtracting a variable - is still
extra overhead in fastpaths.  Option 2 needs careful consideration on a
board-by-board basis.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 10:15:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 10:15:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740589.1147702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3xS-0006en-Ig; Fri, 14 Jun 2024 10:14:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740589.1147702; Fri, 14 Jun 2024 10:14:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI3xS-0006eg-FQ; Fri, 14 Jun 2024 10:14:58 +0000
Received: by outflank-mailman (input) for mailman id 740589;
 Fri, 14 Jun 2024 10:14:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sI3xR-0006ea-2C
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 10:14:57 +0000
Received: from mail-qk1-x733.google.com (mail-qk1-x733.google.com
 [2607:f8b0:4864:20::733])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f252b340-2a36-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 12:14:56 +0200 (CEST)
Received: by mail-qk1-x733.google.com with SMTP id
 af79cd13be357-7955f3d4516so275636885a.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 03:14:56 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798aaecce85sm131324785a.41.2024.06.14.03.14.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 03:14:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f252b340-2a36-11ef-90a3-e314d9c70b13
Message-ID: <8fb21b45-c803-4d37-8df8-3a1afa677ef7@citrix.com>
Date: Fri, 14 Jun 2024 11:14:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
 <f51b2240-03da-4aee-8972-a72b53916ce1@citrix.com>
 <e493035c-2954-418e-96fb-add1577df59f@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <e493035c-2954-418e-96fb-add1577df59f@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14/06/2024 7:27 am, Jan Beulich wrote:
> On 13.06.2024 18:17, Andrew Cooper wrote:
>> On 13/06/2024 9:19 am, Jan Beulich wrote:
>>> Intel CPUs have a MSR bit to limit CPUID enumeration to leaf two. If
>>> this bit is set by the BIOS then CPUID evaluation does not work when
>>> data from any leaf greater than two is needed; early_cpu_init() in
>>> particular wants to collect leaf 7 data.
>>>
>>> Cure this by unlocking CPUID right before evaluating anything which
>>> depends on the maximum CPUID leaf being greater than two.
>>>
>>> Inspired by (and description cloned from) Linux commit 0c2f6d04619e
>>> ("x86/topology/intel: Unlock CPUID before evaluating anything").
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> While I couldn't spot anything, it kind of feels as if I'm overlooking
>>> further places where we might be inspecting in particular leaf 7 yet
>>> earlier.
>>>
>>> No Fixes: tag(s), as imo it would be too many that would want
>>> enumerating.
>> I also saw that go by, but concluded that Xen doesn't need it, hence
>> I left it alone.
>>
>> The truth is that only the BSP needs it.  APs sort it out in the
>> trampoline via trampoline_misc_enable_off, because they need to clear
>> XD_DISABLE prior to enabling paging, so we should be taking it out of
>> early_init_intel().
> Except for the (odd) case also mentioned to Roger, where the BSP might have
> the bit clear but some (or all) AP(s) have it set.

Fine I suppose.  It's a single MSR adjustment once per CPU.

>
>> But, we don't have an early BSP-only early hook, and I'm not overwhelmed
>> at the idea of exporting it from intel.c
>>
>> I was intending to leave it alone until I can burn this whole
>> infrastructure to the ground and make it work nicely with policies, but
>> that's not a job for this point in the release...
> This last part reads like the rest of your reply isn't an objection to me
> putting this in with Roger's R-b, but it would be nice if you could
> confirm this understanding of mine. Without that last part, especially
> the 2nd-from-last paragraph, it certainly reads a little like an objection.

I'm -1 to this generally.  It's churn without fixing anything AFAICT,
and is moving in a direction which will need undoing in the future.

But, because it doesn't fix anything, I don't think it's appropriate to
be tagged for 4.19 even if you and Roger feel strongly enough to put it
into 4.20.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 11:12:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 11:12:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740608.1147712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI4r8-0006uu-Mo; Fri, 14 Jun 2024 11:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740608.1147712; Fri, 14 Jun 2024 11:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI4r8-0006un-Iy; Fri, 14 Jun 2024 11:12:30 +0000
Received: by outflank-mailman (input) for mailman id 740608;
 Fri, 14 Jun 2024 11:12:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI4r6-0006uh-NB
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 11:12:28 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fb99dce2-2a3e-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 13:12:27 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id
 4fb4d7f45d1cf-57c73a3b3d7so2205189a12.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 04:12:27 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb741e911sm2137500a12.78.2024.06.14.04.12.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 04:12:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb99dce2-2a3e-11ef-90a3-e314d9c70b13
Message-ID: <d3f9ae64-fb85-47b2-bb69-153c7734a0a3@suse.com>
Date: Fri, 14 Jun 2024 13:12:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
 <f51b2240-03da-4aee-8972-a72b53916ce1@citrix.com>
 <e493035c-2954-418e-96fb-add1577df59f@suse.com>
 <8fb21b45-c803-4d37-8df8-3a1afa677ef7@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <8fb21b45-c803-4d37-8df8-3a1afa677ef7@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14.06.2024 12:14, Andrew Cooper wrote:
> On 14/06/2024 7:27 am, Jan Beulich wrote:
>> On 13.06.2024 18:17, Andrew Cooper wrote:
>>> On 13/06/2024 9:19 am, Jan Beulich wrote:
>>>> Intel CPUs have a MSR bit to limit CPUID enumeration to leaf two. If
>>>> this bit is set by the BIOS then CPUID evaluation does not work when
>>>> data from any leaf greater than two is needed; early_cpu_init() in
>>>> particular wants to collect leaf 7 data.
>>>>
>>>> Cure this by unlocking CPUID right before evaluating anything which
>>>> depends on the maximum CPUID leaf being greater than two.
>>>>
>>>> Inspired by (and description cloned from) Linux commit 0c2f6d04619e
>>>> ("x86/topology/intel: Unlock CPUID before evaluating anything").
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> While I couldn't spot anything, it kind of feels as if I'm overlooking
>>>> further places where we might be inspecting in particular leaf 7 yet
>>>> earlier.
>>>>
>>>> No Fixes: tag(s), as imo it would be too many that would want
>>>> enumerating.
>>> I also saw that go by, but concluded that Xen doesn't need it, hence
>>> I left it alone.
>>>
>>> The truth is that only the BSP needs it.  APs sort it out in the
>>> trampoline via trampoline_misc_enable_off, because they need to clear
>>> XD_DISABLE prior to enabling paging, so we should be taking it out of
>>> early_init_intel().
>> Except for the (odd) case also mentioned to Roger, where the BSP might have
>> the bit clear but some (or all) AP(s) have it set.
> 
> Fine I suppose.  It's a single MSR adjustment once per CPU.
> 
>>
>>> But, we don't have an early BSP-only early hook, and I'm not overwhelmed
>>> at the idea of exporting it from intel.c
>>>
>>> I was intending to leave it alone until I can burn this whole
>>> infrastructure to the ground and make it work nicely with policies, but
>>> that's not a job for this point in the release...
>> This last part reads like the rest of your reply isn't an objection to me
>> putting this in with Roger's R-b, but it would be nice if you could
>> confirm this understanding of mine. Without that last part, especially
>> the 2nd-from-last paragraph, it certainly reads a little like an objection.
> 
> I'm -1 to this generally.  It's churn without fixing anything AFAICT,

How so? We clearly do the adjustment too late right now for the BSP.
All the leaf-7 stuff added to early_cpu_init() (iirc in part in the course
of speculation work) is useless on a system where firmware has set that flag.

Jan

> and is moving in a direction which will need undoing in the future.
> 
> But, because it doesn't fix anything, I don't think it's appropriate to
> be tagged as 4.19 even if you and roger feel strongly enough to put it
> into 4.20.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 11:53:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 11:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740619.1147723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI5UN-0004R7-NW; Fri, 14 Jun 2024 11:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740619.1147723; Fri, 14 Jun 2024 11:53:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI5UN-0004R0-IV; Fri, 14 Jun 2024 11:53:03 +0000
Received: by outflank-mailman (input) for mailman id 740619;
 Fri, 14 Jun 2024 11:53:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VLs9=NQ=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sI5UM-0004Qu-HI
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 11:53:02 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a6b3d0e5-2a44-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 13:53:01 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6ef8e62935so277906566b.3
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 04:53:01 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f9c858sm175400266b.206.2024.06.14.04.52.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 04:53:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6b3d0e5-2a44-11ef-90a3-e314d9c70b13
Message-ID: <0aa934d9f4bdc8ebfa832aa56e2fe9659236441d.camel@gmail.com>
Subject: Re: [PATCH v3 0/3] x86/irq: fixes for CPU hot{,un}plug
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Date: Fri, 14 Jun 2024 13:52:59 +0200
In-Reply-To: <ZmvxBDomxxBjOYEK@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
	 <ZmvxBDomxxBjOYEK@macbook>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Fri, 2024-06-14 at 09:28 +0200, Roger Pau Monné wrote:
> Sorry, forgot to add the for-4.19 tag and Cc Oleksii.
> 
> Since we have taken the start of the series, we might as well take the
> remaining patches (if other x86 maintainers agree) and attempt to
> hopefully fix all the interrupt issues with CPU hotplug/unplug.
> 
> FTR: there are further issues when doing CPU hotplug/unplug from a PVH
> dom0, but those are out of the scope for 4.19, as I haven't even
> started to diagnose what's going on.
And did these issues exist before the current patch series was introduced?

~ Oleksii
> 
> Thanks, Roger.
> 
> On Thu, Jun 13, 2024 at 06:56:14PM +0200, Roger Pau Monne wrote:
> > Hello,
> > 
> > The following series aims to fix interrupt handling when doing CPU
> > plug/unplug operations.  Without this series, running:
> > 
> > cpus=`xl info max_cpu_id`
> > while [ 1 ]; do
> >     for i in `seq 1 $cpus`; do
> >         xen-hptool cpu-offline $i;
> >         xen-hptool cpu-online $i;
> >     done
> > done
> > 
> > quite quickly results in interrupts getting lost and "No irq handler for
> > vector" messages on the Xen console.  Drivers in dom0 also start getting
> > interrupt timeouts and the system becomes unusable.
> > 
> > After applying the series, running the loop overnight still results in a
> > fully usable system: no "No irq handler for vector" messages at all, no
> > interrupt losses reported by dom0.  Tested with x2apic-mode={mixed,cluster}.
> > 
> > I've attempted to document all the code as well as I could; interrupt
> > handling has some unexpected corner cases that are hard to diagnose and
> > reason about.
> > 
> > Some XenRT testing is underway to ensure no breakages.
> > 
> > Thanks, Roger.
> > 
> > Roger Pau Monne (3):
> >   x86/irq: deal with old_cpu_mask for interrupts in movement in
> >     fixup_irqs()
> >   x86/irq: handle moving interrupts in _assign_irq_vector()
> >   x86/irq: forward pending interrupts to new destination in fixup_irqs()
> > 
> >  xen/arch/x86/include/asm/apic.h |   5 +
> >  xen/arch/x86/irq.c              | 163 +++++++++++++++++++++++++-------
> >  2 files changed, 132 insertions(+), 36 deletions(-)
> > 
> > -- 
> > 2.45.2
> > 



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 12:18:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 12:18:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740630.1147747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI5sk-0008Gt-Fv; Fri, 14 Jun 2024 12:18:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740630.1147747; Fri, 14 Jun 2024 12:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI5sk-0008EX-6S; Fri, 14 Jun 2024 12:18:14 +0000
Received: by outflank-mailman (input) for mailman id 740630;
 Fri, 14 Jun 2024 12:18:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ENa6=NQ=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sI5si-00086k-PC
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 12:18:12 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 29bcb582-2a48-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 14:18:10 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a6f1cf00b3aso331831966b.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 05:18:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29bcb582-2a48-11ef-b4bb-af5377834399
MIME-Version: 1.0
From: Kelly Choi <kelly.choi@cloud.com>
Date: Fri, 14 Jun 2024 13:17:33 +0100
Message-ID: <CAO-mL=xajGrkz7x+SFtz8U=N56TWY81N=2qsSwW0CnJeGJMaUQ@mail.gmail.com>
Subject: Xen Summit talks live on YouTube
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org, 
	xen-announce@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000c77ed6061ad89b36"

--000000000000c77ed6061ad89b36
Content-Type: text/plain; charset="UTF-8"

Hi everyone,

We had a great few days filled with discussions and talks during the Xen
Summit in Lisbon.

These are now available for you to watch on YouTube!
https://www.youtube.com/playlist?list=PLQMQQsKgvLntZiKoELFs22Mtk-tBNNOMJ

Many thanks,
Kelly Choi

Community Manager
Xen Project

--000000000000c77ed6061ad89b36
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi everyone,<div><br></div><div>We had a great few days fi=
lled with discussions and talks during the Xen Summit in Lisbon.=C2=A0</div=
><div><br></div><div>These are now available=C2=A0for you to watch on YouTu=
be!=C2=A0</div><div><a href=3D"https://www.youtube.com/playlist?list=3DPLQM=
QQsKgvLntZiKoELFs22Mtk-tBNNOMJ" class=3D"gmail-linkified" target=3D"_blank"=
 rel=3D"noreferrer noopener" style=3D"box-sizing:border-box;background-imag=
e:initial;background-position:0px 0px;background-size:initial;background-re=
peat:initial;background-origin:initial;background-clip:initial;margin-botto=
m:0px">https://www.youtube.com/playlist?list=3DPLQMQQsKgvLntZiKoELFs22Mtk-t=
BNNOMJ</a><br></div><div><br></div><div><div><div dir=3D"ltr" class=3D"gmai=
l_signature" data-smartmail=3D"gmail_signature"><div dir=3D"ltr"><div>Many =
thanks,</div><div>Kelly Choi</div><div><br></div><div><div style=3D"color:r=
gb(136,136,136)">Community Manager</div><div style=3D"color:rgb(136,136,136=
)">Xen Project=C2=A0<br></div></div></div></div></div></div></div>

--000000000000c77ed6061ad89b36--


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 12:33:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 12:33:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740677.1147770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI67k-0004UT-Ls; Fri, 14 Jun 2024 12:33:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740677.1147770; Fri, 14 Jun 2024 12:33:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI67k-0004UM-Ho; Fri, 14 Jun 2024 12:33:44 +0000
Received: by outflank-mailman (input) for mailman id 740677;
 Fri, 14 Jun 2024 12:33:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9HBj=NQ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sI67j-0004UG-83
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 12:33:43 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 54b3ee53-2a4a-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 14:33:41 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-421eab59723so16761625e9.3
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 05:33:41 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42286eef9c1sm97139085e9.7.2024.06.14.05.33.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 05:33:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54b3ee53-2a4a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718368420; x=1718973220; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=UY+0Q1cbhl8LMp+0qpAzN+wviM6VWHTtrc+iPGIfLN4=;
        b=R6CYcfDbGY0KvRJbRMJckYgE3giT9KnYQeZA06St2/lp9VEd7xyl9Hhbozv8GETWhB
         vwoguDPQMswfZw3rGHCw6holTYv4l386Ao0vWWqvIwOty5fQHpVPLAMZ5tt6QP+NQr13
         GDBdmuO3xDoiIDkMQcOV9pk39lO8s8j63dHXE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718368420; x=1718973220;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=UY+0Q1cbhl8LMp+0qpAzN+wviM6VWHTtrc+iPGIfLN4=;
        b=CSve/9CRuWT5NKts52LYsJZgJBZ2lnj7lfOwXawJrMGPASlIaNzPzP/9E2YthS7qEn
         bRcMeWDVnvKUk3c7m2yzTH87vhNcudKBfisqXj9DQrdfQHzozFwZAmXVJa/r0W4zrnBa
         nb1OLMmgTc5tjS9gHZ9KvEoTMCFX19PpvXZL1kpsFiZ5Cq7CXi3Dx6NS+Y0FQuge4ueK
         x6XRo3JIQSn3UWxYuHq59+r8FnpzDBic23FuRb+PZ/QbGcEiSDwdeWceTYZSmtCc67Mb
         9gOluG0dF1boDtCHCOUWhi6GZXEeTRfb5cZKWQ5md9oRlNf5CNoG32tua23LHKaKwp0K
         Lgew==
X-Gm-Message-State: AOJu0YxCMpR0Qj2ljF44M+YVKO/MTNVJ95dFPBZq7n8NvV/qjlfAEJGl
	hltMm3LQZAbYSaSW2R4OlJWr3gmL+rBOzayWtl20y76Nn1ppFmdLVBg3+uZJ5xg=
X-Google-Smtp-Source: AGHT+IFjRVqm27N9vvYzPCnFKysglqKWN/tSaeY921zK7mrVIi+Me9jWpQHn/sicD+CHV0TqPyIWgw==
X-Received: by 2002:a05:600c:a09:b0:422:648d:bdf1 with SMTP id 5b1f17b1804b1-423048491aemr22330025e9.34.1718368420418;
        Fri, 14 Jun 2024 05:33:40 -0700 (PDT)
Date: Fri, 14 Jun 2024 14:33:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: "Oleksii K." <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 0/3] x86/irq: fixes for CPU hot{,un}plug
Message-ID: <Zmw4ox1anCgbUTxs@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <ZmvxBDomxxBjOYEK@macbook>
 <0aa934d9f4bdc8ebfa832aa56e2fe9659236441d.camel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0aa934d9f4bdc8ebfa832aa56e2fe9659236441d.camel@gmail.com>

On Fri, Jun 14, 2024 at 01:52:59PM +0200, Oleksii K. wrote:
> On Fri, 2024-06-14 at 09:28 +0200, Roger Pau Monné wrote:
> > Sorry, forgot to add the for-4.19 tag and Cc Oleksii.
> > 
> > Since we have taken the start of the series, we might as well take
> > the
> > remaining patches (if other x86 maintainers agree) and attempt to
> > hopefully fix all the interrupt issues with CPU hotplug/unplug.
> > 
> > FTR: there are further issues when doing CPU hotplug/unplug from a
> > PVH
> > dom0, but those are out of the scope for 4.19, as I haven't even
> > started to diagnose what's going on.
> And did these issues exist before the current patch series was introduced?

Sure, the issues with PVH dom0 cpu hotplug/unplug are additional to
the ones fixed here.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 12:50:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 12:50:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740684.1147779 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI6NT-0007Sx-VI; Fri, 14 Jun 2024 12:49:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740684.1147779; Fri, 14 Jun 2024 12:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI6NT-0007Sq-Se; Fri, 14 Jun 2024 12:49:59 +0000
Received: by outflank-mailman (input) for mailman id 740684;
 Fri, 14 Jun 2024 12:49:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sI6NS-0007Sk-77
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 12:49:58 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 99bd14cd-2a4c-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 14:49:56 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id
 4fb4d7f45d1cf-57c75464e77so2532270a12.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 05:49:56 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da3fb9sm182654866b.30.2024.06.14.05.49.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 05:49:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99bd14cd-2a4c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718369395; x=1718974195; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=cNg/dPncJ5qZGG9Ab9DyeK/zvBXVRY+iIi96hwU1hbU=;
        b=QGDM8uCQX+fNYTLoU2+KRIfW4bEjr5agWC56Up997pK7hxLMnaJkNwIND/bzIeuInq
         wpi85T6tloteTf3/UDZgdOWenylSHC2AhMpR5PzaR0mITDXl6qPtzk3VPhib9ZLcnXv+
         9S1qnZ5X3cf9Vw1u+ctntJcgP/3Az5hREqy0w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718369395; x=1718974195;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=cNg/dPncJ5qZGG9Ab9DyeK/zvBXVRY+iIi96hwU1hbU=;
        b=FpHVu4747bAY7EGy6cIbt312lA2oyl3G6ib+i8L0vSx5Fj0pszCSTz20GF7MzCQnJs
         lJO4Cvu6hfFIZ12XJOspQTr5cH+emoWmJBca0WOdvjwwYAlyC1W8c5d0KRkbXHNlQf3h
         OQInCyul0Xj7s9M+Z2NyhHKdxZl4qVnyBWoxECDXWpe6HkgF4rey1yXpxMAGWOdF7XSA
         RXWmjZUFtr63JMYRclQDK/duZDbK+FPrUomeAkWEOMNOyIcp0bIgPL3IaKzmIKAFxryX
         5Y4NPZVW1nbt2VdMMR9LeEvzDFLc5pJ6oCY5bHvgF1onKvqIaB80UqYtNU0F480oEbhP
         2TwQ==
X-Gm-Message-State: AOJu0YzuBbMNLmyhuGqE+kOmedqE5fw0vNZIoQBKfGHacgAwLeHKQEUP
	+tmbzkJDnGNO21Jqb9fiRyKdvbi1T/Qm4hhYK3CFtRX1psVdTyc76APU5q7aIfcjZ7tqpC/yhzr
	1XsA=
X-Google-Smtp-Source: AGHT+IFD57JmEK2X990R98Iafbehf5kBDNmxRpZ2N7QCaNurdAUZnGfrr26o+tj5qZg6s4PfZpVukA==
X-Received: by 2002:a17:906:2f81:b0:a6f:57f1:cebe with SMTP id a640c23a62f3a-a6f60cefd96mr194337966b.5.1718369394581;
        Fri, 14 Jun 2024 05:49:54 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>
Subject: [PATCH for-4.19] xen/arch: Centralise __read_mostly and __ro_after_init
Date: Fri, 14 Jun 2024 13:49:50 +0100
Message-Id: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

These being in cache.h is inherited from Linux, but is an inappropriate
location to live.

__read_mostly is an optimisation related to data placement in order to avoid
having shared data in cachelines that are likely to be written to, but it
really is just a section of the linked image separating data by usage
patterns; it has nothing to do with cache sizes or flushing logic.

Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
arch/cache.h, and has literally nothing whatsoever to do with caches.

Move the definitions into xen/sections.h, which in particular means that
RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
to provide short descriptions of what these are used for.

For now, leave TODO comments next to the other identical definitions.  It
turns out that unpicking cache.h is more complicated than it appears because a
number of files use it for transitive dependencies.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: Shawn Anastasio <sanastasio@raptorengineering.com>

For 4.19.  This is to help the RISC-V "full build of Xen" effort without
introducing a pattern that's going to require extra effort to undo after the
fact.
---
 xen/arch/arm/include/asm/cache.h |  1 +
 xen/arch/ppc/include/asm/cache.h |  1 +
 xen/arch/x86/include/asm/cache.h |  1 +
 xen/include/xen/cache.h          |  1 +
 xen/include/xen/sections.h       | 21 +++++++++++++++++++++
 5 files changed, 25 insertions(+)

diff --git a/xen/arch/arm/include/asm/cache.h b/xen/arch/arm/include/asm/cache.h
index 240b6ae0eaa3..029b2896fb3e 100644
--- a/xen/arch/arm/include/asm/cache.h
+++ b/xen/arch/arm/include/asm/cache.h
@@ -6,6 +6,7 @@
 #define L1_CACHE_SHIFT  (CONFIG_ARM_L1_CACHE_SHIFT)
 #define L1_CACHE_BYTES  (1 << L1_CACHE_SHIFT)
 
+/* TODO: Phase out the use of this via cache.h */
 #define __read_mostly __section(".data.read_mostly")
 
 #endif
diff --git a/xen/arch/ppc/include/asm/cache.h b/xen/arch/ppc/include/asm/cache.h
index 0d7323d7892f..13c0bf3242c8 100644
--- a/xen/arch/ppc/include/asm/cache.h
+++ b/xen/arch/ppc/include/asm/cache.h
@@ -3,6 +3,7 @@
 #ifndef _ASM_PPC_CACHE_H
 #define _ASM_PPC_CACHE_H
 
+/* TODO: Phase out the use of this via cache.h */
 #define __read_mostly __section(".data.read_mostly")
 
 #endif /* _ASM_PPC_CACHE_H */
diff --git a/xen/arch/x86/include/asm/cache.h b/xen/arch/x86/include/asm/cache.h
index e4770efb22b9..956c05493e23 100644
--- a/xen/arch/x86/include/asm/cache.h
+++ b/xen/arch/x86/include/asm/cache.h
@@ -9,6 +9,7 @@
 #define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
 #define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
 
+/* TODO: Phase out the use of this via cache.h */
 #define __read_mostly __section(".data.read_mostly")
 
 #ifndef __ASSEMBLY__
diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
index f52a0aedf768..55456823c543 100644
--- a/xen/include/xen/cache.h
+++ b/xen/include/xen/cache.h
@@ -15,6 +15,7 @@
 #define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
 #endif
 
+/* TODO: Phase out the use of this via cache.h */
 #define __ro_after_init __section(".data.ro_after_init")
 
 #endif /* __LINUX_CACHE_H */
diff --git a/xen/include/xen/sections.h b/xen/include/xen/sections.h
index b6cb5604c285..6d4db2b38f0f 100644
--- a/xen/include/xen/sections.h
+++ b/xen/include/xen/sections.h
@@ -3,9 +3,30 @@
 #ifndef __XEN_SECTIONS_H__
 #define __XEN_SECTIONS_H__
 
+#include <xen/compiler.h>
+
 /* SAF-0-safe */
 extern char __init_begin[], __init_end[];
 
+/*
+ * Some data is expected to be written very rarely (if at all).
+ *
+ * For performance reasons is it helpful to group such data in the build, to
+ * avoid the linker placing it adjacent to often-written data.
+ */
+#define __read_mostly __section(".data.read_mostly")
+
+/*
+ * Some data should be chosen during boot and be immutable thereafter.
+ *
+ * Variables annotated with __ro_after_init will become read-only after boot
+ * and suffer a runtime access fault if modified.
+ *
+ * For architectures/platforms which haven't implemented support, these
+ * variables will be treated as regular mutable data.
+ */
+#define __ro_after_init __section(".data.ro_after_init")
+
 #endif /* !__XEN_SECTIONS_H__ */
 /*
  * Local variables:

base-commit: 8b4243a9b560c89bb259db5a27832c253d4bebc7
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 13:39:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 13:39:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740705.1147789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI792-0007VX-F3; Fri, 14 Jun 2024 13:39:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740705.1147789; Fri, 14 Jun 2024 13:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI792-0007VQ-C5; Fri, 14 Jun 2024 13:39:08 +0000
Received: by outflank-mailman (input) for mailman id 740705;
 Fri, 14 Jun 2024 13:39:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5IQt=NQ=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sI790-0007VK-SV
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 13:39:06 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 77edd372-2a53-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 15:39:05 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a6265d48ec3so313959066b.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 06:39:05 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f6a1f2cb9sm49174166b.17.2024.06.14.06.39.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 06:39:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77edd372-2a53-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718372345; x=1718977145; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=pyC+ifg+PK0asFFx28j34au92Zo5AJ40YDNHpmuSqBs=;
        b=OMOVUIdn0Q0R2dFRgmE5xCPWNZxLy8KNlSugdcdqTitxqV5hXYbbnJWgL7gIujSCNj
         HUrz6h9kILQU7avyvc7fOYJDzls69J6bOCBp4Mz3dH4RQnUaelrL3u1HbamiRrASQgGX
         irn9EJIxwYoiHrrZicrFiiZNwJrU7KgBvocol1E8WYra+UXU1CmUQ3YHTuQshbFSEemd
         vqmbcYfu5f47DLofnjZhCzStG7CofNx0zwVGI9APsWULpgoLDFMk6ejZXh5sJYcrhrXl
         P3lrYdwcoo4E3HQj+sYPGYQuY0BwRz8ln+S70b6V6HopWI7OAI+C+Kf5YOZvxDGjOLxi
         Jm5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718372345; x=1718977145;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=pyC+ifg+PK0asFFx28j34au92Zo5AJ40YDNHpmuSqBs=;
        b=DhbTEcwWdgDcEPf5XkyBx0O2bR4zYJamainOmznn2TNzAQvJe2AtedYbaUPYQSceyj
         P0v5xRacowekEzTBNNRihpb9sOrazZ0UlNHPH8O3+gd2JS8uG9GQWfuG0BHBf8U7ayKr
         W49YeXbKR1Q0/GmnCDXasz8Sko4wNZEDMNacAlFl6CP0WPvyOQX2AD7NUOLFrArTfH3S
         Eg1xcf/qLtupHny1bVei+RRs5+TjJ/hLzO7NyAhVmY9Kz2DyhBQWI0kYJA+3G3Pj7Nin
         cggEQhyjHCEeVm0WD5a5grrSki0fYhx344oGc986OHPzhQwua1vu8ajy0qD0me8EOOVl
         7QtQ==
X-Forwarded-Encrypted: i=1; AJvYcCVpTmlLQi99NHeC6BOsPkNQNAEVZorT6kPXqYV8yX+R435XjlauIevWbdWRRktJbctfnzJBRF2yNw316GE3/cB4KAXaSo/MrP6ib85A05s=
X-Gm-Message-State: AOJu0Ywe8zExFBSLqW9UixttinvsvPxkBoqA3ehCnhvChnQvMQsXUGQR
	IWVFZqbVCAElILmEYvbJOsRojWS19HozqdcAnAnUgLkzNMkQLeMVScd9ELfCVQ==
X-Google-Smtp-Source: AGHT+IFIzVdqwwmK4lxQiTjjf4jPfajmGHrhqOHg3ce8G3EEWYRC3rl/fPzzUnaqFGrqmWd+YQU67w==
X-Received: by 2002:a17:906:2885:b0:a6e:f6a1:324d with SMTP id a640c23a62f3a-a6f60de1c6fmr150137266b.69.1718372345011;
        Fri, 14 Jun 2024 06:39:05 -0700 (PDT)
Message-ID: <0d7bcd4b-b09a-4ce1-9042-2e8b916f5fcb@suse.com>
Date: Fri, 14 Jun 2024 15:39:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arch: Centralise __read_mostly and
 __ro_after_init
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 14.06.2024 14:49, Andrew Cooper wrote:
> These being in cache.h is inherited from Linux, but is an inappropriate
> location to live.
> 
> __read_mostly is an optimisation related to data placement in order to avoid
> having shared data in cachelines that are likely to be written to, but it
> really is just a section of the linked image separating data by usage
> patterns; it has nothing to do with cache sizes or flushing logic.
> 
> Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
> arch/cache.h, and has literally nothing whatsoever to do with caches.
> 
> Move the definitions into xen/sections.h, which in particular means that
> RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
> to provide short descriptions of what these are used for.
> 
> For now, leave TODO comments next to the other identical definitions.  It
> turns out that unpicking cache.h is more complicated than it appears because a
> number of files use it for transitive dependencies.

I don't particularly mind this approach, so ...

> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

Yet this (or a variant thereof) is precisely what I wouldn't have wanted to
do myself, as it leaves things in a garbled state and, if I'm not mistaken,
introduces new MISRA violations because of the redundant #define-s.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 13:51:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 13:51:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740714.1147799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7Kg-0002Fm-Fz; Fri, 14 Jun 2024 13:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740714.1147799; Fri, 14 Jun 2024 13:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7Kg-0002Ff-DU; Fri, 14 Jun 2024 13:51:10 +0000
Received: by outflank-mailman (input) for mailman id 740714;
 Fri, 14 Jun 2024 13:51:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9HBj=NQ=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sI7Kf-0002FW-UP
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 13:51:09 +0000
Received: from mail-qk1-x731.google.com (mail-qk1-x731.google.com
 [2607:f8b0:4864:20::731])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 252b5ffc-2a55-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 15:51:06 +0200 (CEST)
Received: by mail-qk1-x731.google.com with SMTP id
 af79cd13be357-7955ddc6516so141691585a.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 06:51:06 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5c20f53sm18354606d6.52.2024.06.14.06.51.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 06:51:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 252b5ffc-2a55-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718373065; x=1718977865; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=lL/roLVeJrxRc9BOdQ3ATJcgWgV8aNLf03wtQeihneo=;
        b=U1TnUvn7hlXjYqObVYha4Ac4ss6KSUU/FFPB8ybxDfT9nWdofqpNUt6tzWQCOTPMdk
         bvITgHQRj4Qd1saaAepH6QxUn75r9NtnfgIFpk4IiatIWAPftdTrzV+quJAdHZdOYP1o
         cZg1D7/h+j4LiC0YrVeowcSbAN9JgWW35MmUk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718373065; x=1718977865;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=lL/roLVeJrxRc9BOdQ3ATJcgWgV8aNLf03wtQeihneo=;
        b=bI5upXs9OsLaNfdU6OHz7miwnHbq9Tge7s7bTTKSiw+uETQngfBpUQOirb5ef41xUu
         G7cecSLMS2HuLTKkxa9Oe8cSOy6WcAOUuxXX5LIVKIUpUCe0K+pWZolM6dEVI2BEGMTw
         5FizjMBIF0fZy2jVP2NNfQ89y4pNo+GfbVeCtHhRtXKHVi5r5vRjIhAv5mhqJUNzQxex
         9R14zIqSa4YvVFp+GqOOmO0NAX34iwFyI6InpUGEooXO1950thZc03v60tYxrRqhVmx7
         Vf7Gjfjn06fIpNzDH/Sx5S+4uO6mSjiRp0KpwLERzrt/5gYw47hXFdDmoQu4zQw7JjQ5
         G//A==
X-Gm-Message-State: AOJu0Yz1Ub4KqM1ETV+TSZnKtyN8pi8VYanPpU4YgAmA664zbqqaiAOX
	TtWVNyw75yRJsJRi7y5uNbb8aG4LCR4Sim9wQ3eomqKWedVanYdzgQviOnzD34k=
X-Google-Smtp-Source: AGHT+IEVb6mR2a1WL/ir6xYvS9r7GSEHw+uF2sOMFs3/1AWICfiujuV+pf+dOsFZEFxAEBcNMFZbOQ==
X-Received: by 2002:a05:6214:2a2:b0:6ae:2da4:fd74 with SMTP id 6a1803df08f44-6b2afc6efcfmr26644376d6.6.1718373064891;
        Fri, 14 Jun 2024 06:51:04 -0700 (PDT)
Date: Fri, 14 Jun 2024 15:51:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>
Subject: Re: [PATCH for-4.19] xen/arch: Centralise __read_mostly and
 __ro_after_init
Message-ID: <ZmxKxob_5LZHvCaa@macbook>
References: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240614124950.1557058-1-andrew.cooper3@citrix.com>

On Fri, Jun 14, 2024 at 01:49:50PM +0100, Andrew Cooper wrote:
> These being in cache.h is inherited from Linux, but is an inappropriate
> location to live.
> 
> __read_mostly is an optimisation related to data placement in order to avoid
> having shared data in cachelines that are likely to be written to, but it
> really is just a section of the linked image separating data by usage
> patterns; it has nothing to do with cache sizes or flushing logic.
> 
> Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
> arch/cache.h, and has literally nothing whatsoever to do with caches.
> 
> Move the definitions into xen/sections.h, which in particular means that
> RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
> to provide short descriptions of what these are used for.
> 
> For now, leave TODO comments next to the other identical definitions.  It
> turns out that unpicking cache.h is more complicated than it appears because a
> number of files use it for transitive dependencies.

I assume that including sections.h from cache.h (in the meantime)
creates a circular header dependency?

> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: Shawn Anastasio <sanastasio@raptorengineering.com>
> 
> For 4.19.  This is to help the RISC-V "full build of Xen" effort without
> introducing a pattern that's going to require extra effort to undo after the
> fact.
> ---
>  xen/arch/arm/include/asm/cache.h |  1 +
>  xen/arch/ppc/include/asm/cache.h |  1 +
>  xen/arch/x86/include/asm/cache.h |  1 +
>  xen/include/xen/cache.h          |  1 +
>  xen/include/xen/sections.h       | 21 +++++++++++++++++++++
>  5 files changed, 25 insertions(+)
> 
> diff --git a/xen/arch/arm/include/asm/cache.h b/xen/arch/arm/include/asm/cache.h
> index 240b6ae0eaa3..029b2896fb3e 100644
> --- a/xen/arch/arm/include/asm/cache.h
> +++ b/xen/arch/arm/include/asm/cache.h
> @@ -6,6 +6,7 @@
>  #define L1_CACHE_SHIFT  (CONFIG_ARM_L1_CACHE_SHIFT)
>  #define L1_CACHE_BYTES  (1 << L1_CACHE_SHIFT)
>  
> +/* TODO: Phase out the use of this via cache.h */
>  #define __read_mostly __section(".data.read_mostly")
>  
>  #endif
> diff --git a/xen/arch/ppc/include/asm/cache.h b/xen/arch/ppc/include/asm/cache.h
> index 0d7323d7892f..13c0bf3242c8 100644
> --- a/xen/arch/ppc/include/asm/cache.h
> +++ b/xen/arch/ppc/include/asm/cache.h
> @@ -3,6 +3,7 @@
>  #ifndef _ASM_PPC_CACHE_H
>  #define _ASM_PPC_CACHE_H
>  
> +/* TODO: Phase out the use of this via cache.h */
>  #define __read_mostly __section(".data.read_mostly")
>  
>  #endif /* _ASM_PPC_CACHE_H */
> diff --git a/xen/arch/x86/include/asm/cache.h b/xen/arch/x86/include/asm/cache.h
> index e4770efb22b9..956c05493e23 100644
> --- a/xen/arch/x86/include/asm/cache.h
> +++ b/xen/arch/x86/include/asm/cache.h
> @@ -9,6 +9,7 @@
>  #define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
>  #define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
>  
> +/* TODO: Phase out the use of this via cache.h */
>  #define __read_mostly __section(".data.read_mostly")
>  
>  #ifndef __ASSEMBLY__
> diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
> index f52a0aedf768..55456823c543 100644
> --- a/xen/include/xen/cache.h
> +++ b/xen/include/xen/cache.h
> @@ -15,6 +15,7 @@
>  #define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
>  #endif
>  
> +/* TODO: Phase out the use of this via cache.h */
>  #define __ro_after_init __section(".data.ro_after_init")
>  
>  #endif /* __LINUX_CACHE_H */
> diff --git a/xen/include/xen/sections.h b/xen/include/xen/sections.h
> index b6cb5604c285..6d4db2b38f0f 100644
> --- a/xen/include/xen/sections.h
> +++ b/xen/include/xen/sections.h
> @@ -3,9 +3,30 @@
>  #ifndef __XEN_SECTIONS_H__
>  #define __XEN_SECTIONS_H__
>  
> +#include <xen/compiler.h>
> +
>  /* SAF-0-safe */
>  extern char __init_begin[], __init_end[];
>  
> +/*
> + * Some data is expected to be written very rarely (if at all).
> + *
> + * For performance reasons it is helpful to group such data in the build, to
> + * avoid the linker placing it adjacent to often-written data.
> + */
> +#define __read_mostly __section(".data.read_mostly")
> +
> +/*
> + * Some data should be chosen during boot and be immutable thereafter.
> + *
> + * Variables annotated with __ro_after_init will become read-only after boot
> + * and suffer a runtime access fault if modified.
> + *
> + * For architectures/platforms which haven't implemented support, these
> + * variables will be treated as regular mutable data.
> + */
> +#define __ro_after_init __section(".data.ro_after_init")

Is it fine for MISRA to have possibly two identical defines in the
same translation unit?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 13:56:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 13:56:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740721.1147810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7Q2-0002tr-61; Fri, 14 Jun 2024 13:56:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740721.1147810; Fri, 14 Jun 2024 13:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7Q2-0002tk-2l; Fri, 14 Jun 2024 13:56:42 +0000
Received: by outflank-mailman (input) for mailman id 740721;
 Fri, 14 Jun 2024 13:56:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sI7Q1-0002te-13
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 13:56:41 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ec628a70-2a55-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 15:56:39 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id
 2adb3069b0e04-52bc29c79fdso3092222e87.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 06:56:39 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db5bbcsm186315566b.82.2024.06.14.06.56.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 06:56:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec628a70-2a55-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718373399; x=1718978199; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=QjN60CHXWEs5C3gxYG/+9+0tbqixkJhfEtFYsb6OZO0=;
        b=mPkS332nJOrdZWM7Dfjb67nSllE/bWFycCwmA+qnvMj+e3CuzJvPBkVjzR+xHwBuwh
         r+udQDomqW/txyeeVn9kMJN01WcTjLqIwaNlAZ8hoYwia8FMrEklHK4KLMmDHIFDDp8z
         x6+kZ0MKKjn3JVTebPZc6kNZZLalQWm7SPw90=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718373399; x=1718978199;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=QjN60CHXWEs5C3gxYG/+9+0tbqixkJhfEtFYsb6OZO0=;
        b=tozX/q6wkWYhEo3HKFnkxkgRij4P91/+2W4ANLEMGT+EDJ4kxEpa8E+l3ACjyLbnNj
         MHuomAXc2v7loB+ZOTsbiGfUB44EeOOdIKhiEf/O6z47khzzYo5xRdwQp9Bf8SZWIhXH
         8vTvVJd2h+G81373anIjvX8fdwdrfazpUzo2/dt2wBBIrLXStUbxS18wvPZQu4IPenCP
         dHmfbvx/NwADsyTm43kxmFdnHkAmopH5NwJKXDRaLrSwN/5NKUSrA1hEEnJ2yWZ2Sgr/
         gBApQCbKZP6of/YsGzcrd36Pegk/yEVJx/IyL311GbJE6t2TarfIqjD77gPvVhuX5gJc
         oQ5A==
X-Forwarded-Encrypted: i=1; AJvYcCVzhfBMRJBXLjjGEMr179MknU9emTD6lNkktn2ML7EssVtS0gG6oBGVJZpU4m69GcjBwee9JMxsZpowboFrkyGFpJaG7BC3W0auYQ2vEns=
X-Gm-Message-State: AOJu0YxgBiaLVBzFVPKUKPspxwZJw2MR22/mV6InmfGpSdnJze78ZOek
	RdQpIwsqLwpXWeBUhn4dq38x1UobCGYarLMliQKumselmEhvCmgzeWSvuCPx7gI=
X-Google-Smtp-Source: AGHT+IEp1XVBpuZjiSiq1ZKqSI7AJWEQdVC9olJ+QflF/zR4KZZXS23VqR3uoJ65kjrOCicRkhxUmg==
X-Received: by 2002:a05:6512:3f27:b0:52c:9ae0:beed with SMTP id 2adb3069b0e04-52ca6e92a3cmr2751951e87.52.1718373399276;
        Fri, 14 Jun 2024 06:56:39 -0700 (PDT)
Message-ID: <71cbc846-527d-45f0-bd0b-df3f0294e7cd@citrix.com>
Date: Fri, 14 Jun 2024 14:56:38 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at boot
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-3-andrew.cooper3@citrix.com>
 <22e0473a-aca8-4645-a3f4-21a9d9e0ebd3@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <22e0473a-aca8-4645-a3f4-21a9d9e0ebd3@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23/05/2024 4:34 pm, Jan Beulich wrote:
> On 23.05.2024 13:16, Andrew Cooper wrote:
>> Right now, xstate_ctxt_size() performs a cross-check of size with CPUID in for
>> every call.  This is expensive, being used for domain create/migrate, as well
>> as to service certain guest CPUID instructions.
>>
>> Instead, arrange to check the sizes once at boot.  See the code comments for
>> details.  Right now, it just checks hardware against the algorithm
>> expectations.  Later patches will add further cross-checking.
>>
>> Introduce the missing X86_XCR0_* and X86_XSS_* constants, and a couple of
>> missing CPUID bits.  This is to maximise coverage in the sanity check, even if
>> we don't expect to use/virtualise some of these features any time soon.  Leave
>> HDC and HWP alone for now.  We don't have CPUID bits for them stored nicely.
> Since you say "the missing", ...
>
>> --- a/xen/arch/x86/include/asm/x86-defns.h
>> +++ b/xen/arch/x86/include/asm/x86-defns.h
>> @@ -77,7 +77,7 @@
>>  #define X86_CR4_PKS        0x01000000 /* Protection Key Supervisor */
>>  
>>  /*
>> - * XSTATE component flags in XCR0
>> + * XSTATE component flags in XCR0 | MSR_XSS
>>   */
>>  #define X86_XCR0_FP_POS           0
>>  #define X86_XCR0_FP               (1ULL << X86_XCR0_FP_POS)
>> @@ -95,11 +95,34 @@
>>  #define X86_XCR0_ZMM              (1ULL << X86_XCR0_ZMM_POS)
>>  #define X86_XCR0_HI_ZMM_POS       7
>>  #define X86_XCR0_HI_ZMM           (1ULL << X86_XCR0_HI_ZMM_POS)
>> +#define X86_XSS_PROC_TRACE        (_AC(1, ULL) <<  8)
>>  #define X86_XCR0_PKRU_POS         9
>>  #define X86_XCR0_PKRU             (1ULL << X86_XCR0_PKRU_POS)
>> +#define X86_XSS_PASID             (_AC(1, ULL) << 10)
>> +#define X86_XSS_CET_U             (_AC(1, ULL) << 11)
>> +#define X86_XSS_CET_S             (_AC(1, ULL) << 12)
>> +#define X86_XSS_HDC               (_AC(1, ULL) << 13)
>> +#define X86_XSS_UINTR             (_AC(1, ULL) << 14)
>> +#define X86_XSS_LBR               (_AC(1, ULL) << 15)
>> +#define X86_XSS_HWP               (_AC(1, ULL) << 16)
>> +#define X86_XCR0_TILE_CFG         (_AC(1, ULL) << 17)
>> +#define X86_XCR0_TILE_DATA        (_AC(1, ULL) << 18)
> ... I'm wondering if you deliberately left out APX (bit 19).

It was deliberate.  APX isn't in the SDM yet, so in principle is still
subject to change.

I've tweaked the commit message to avoid using the word 'missing'.

> Since you're re-doing some of what I have long had in patches already,
> I'd also like to ask whether the last underscores each in the two AMX
> names really are useful in your opinion. While rebasing isn't going
> to be difficult either way, it would be yet simpler with
> X86_XCR0_TILECFG and X86_XCR0_TILEDATA, as I've had it in my patches
> for over 3 years.

I'm torn here.  I don't want to deliberately make things harder for you,
but I would really prefer that we use the more legible form...
>> --- a/xen/arch/x86/xstate.c
>> +++ b/xen/arch/x86/xstate.c
>> @@ -604,9 +604,156 @@ static bool valid_xcr0(uint64_t xcr0)
>>      if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
>>          return false;
>>  
>> +    /* TILE_CFG and TILE_DATA must be the same. */
>> +    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
>> +        return false;
>> +
>>      return true;
>>  }
>>  
>> +struct xcheck_state {
>> +    uint64_t states;
>> +    uint32_t uncomp_size;
>> +    uint32_t comp_size;
>> +};
>> +
>> +static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
>> +{
>> +    uint32_t hw_size;
>> +
>> +    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
>> +
>> +    BUG_ON(s->states & new); /* States only increase. */
>> +    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
>> +    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
>> +    BUG_ON((new & X86_XCR0_STATES) &&
>> +           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
>> +
>> +    s->states |= new;
>> +    if ( new & X86_XCR0_STATES )
>> +    {
>> +        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
>> +            BUG();
>> +    }
>> +    else
>> +        set_msr_xss(s->states & X86_XSS_STATES);
>> +
>> +    /*
>> +     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill in
>> +     * prior holes in the state area, so we check that the size doesn't
>> +     * decrease.
>> +     */
>> +    hw_size = cpuid_count_ebx(0xd, 0);
>> +
>> +    if ( hw_size < s->uncomp_size )
>> +        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
>> +              s->states, &new, hw_size, s->uncomp_size);
>> +
>> +    s->uncomp_size = hw_size;
>> +
>> +    /*
>> +     * Check the compressed size, if available.  All components strictly
>> +     * appear in index order.  In principle there are no holes, but some
>> +     * components have their base address 64-byte aligned for efficiency
>> +     * reasons (e.g. AMX-TILE) and there are other components small enough to
>> +     * fit in the gap (e.g. PKRU) without increasing the overall length.
>> +     */
>> +    hw_size = cpuid_count_ebx(0xd, 1);
>> +
>> +    if ( cpu_has_xsavec )
>> +    {
>> +        if ( hw_size < s->comp_size )
>> +            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x < prev size %#x\n",
>> +                  s->states, &new, hw_size, s->comp_size);
>> +
>> +        s->comp_size = hw_size;
>> +    }
>> +    else
>> +        BUG_ON(hw_size); /* Compressed size reported, but no XSAVEC ? */
> I'm not quite happy with this being fatal to booting. Maybe just WARN_ON()?

It's going to trigger on every pass.  I've reworked it to be an
open-coded WARN_ONCE() (as we don't have this construct yet), but it's
ended up as a plain WARN().

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 13:57:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 13:57:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740723.1147819 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7QL-0003Ed-Ch; Fri, 14 Jun 2024 13:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740723.1147819; Fri, 14 Jun 2024 13:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7QL-0003EU-9w; Fri, 14 Jun 2024 13:57:01 +0000
Received: by outflank-mailman (input) for mailman id 740723;
 Fri, 14 Jun 2024 13:56:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sI7QJ-0003Dt-ME
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 13:56:59 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f6f19e8a-2a55-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 15:56:57 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57c6011d75dso2634342a12.3
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 06:56:57 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb7439606sm2311690a12.90.2024.06.14.06.56.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 06:56:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6f19e8a-2a55-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718373417; x=1718978217; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=TiWABf3QR31YPZBIEqyXpYVGJAdZuRLQ3bCJAb3zXF4=;
        b=SQ2r+DYy06ePahsmisFSX1ffG8ZG7ehCfpYS1TjZdSqAWkqdAjzexpY1yrxAFw0AeS
         pj0esoeYsPNECYNoa9Ii9ZFlpezfSIveECWXfKuffO5LAIiNr5uAduKgTBptJSS96CNN
         uTv1PTB7GXDiIUtB+PE41HXx0gjYr+Vqw2MD4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718373417; x=1718978217;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TiWABf3QR31YPZBIEqyXpYVGJAdZuRLQ3bCJAb3zXF4=;
        b=wmWh2fqZdvW9+6PO8CoAiyJroPHhPS88M2BAtE1+lYCHhfgrsUVaTywnxEueJLw5Y3
         O9n02n+711g4QTro605AAVjzgCjuHM4AyYLM4myqU5M3LIkHANSzlHSRfqn/8LQPXKhs
         nm+qGUMH8Avck95f6kHOu76N64pyQqIyeeP5fiGFo2TFQrv6GJ3dMiDScCgCjEpn7X4G
         IcZGHD8Ez0h8OmZEPQr7+f1lesDMD2uCdxviLa7ZitsaMLWa1c4vliDph32hnPERuH4X
         uYhI0uqEWn4sk9JnrfjXzIctS36LTgWcl6vu8sp/f70BkntL/GV8djLeuPDbHt5QRYSG
         wD2A==
X-Forwarded-Encrypted: i=1; AJvYcCXiefsaXrtLC1TBGnbUxo+FeiAUlvq3uUQXwGvEznmdHYgJf9cYzrtcmT5QlNToH55Q03YG8sU3gAcQm1t45NA1cApa/1UiTiqme0ys+Xk=
X-Gm-Message-State: AOJu0YwV4KbD7IYV7zJx9qzL/CJ2GPOL/AgBcedsGKmAvRDoAB4y9m/t
	0ais5Dix1qzeDoRhjyV+KFPd23KzinJMus3FWFImBz4xq+rrR27GVnZ1r0A0uc8=
X-Google-Smtp-Source: AGHT+IHUvLNEtk8+RCwmbFPacJJjegh2Voyl2SEQ9OW97g3s+D0z7azwFwGPhwmvLM1ivNp1PiQ9eQ==
X-Received: by 2002:a50:96d2:0:b0:57c:5f74:d8ff with SMTP id 4fb4d7f45d1cf-57cbd68e730mr2322462a12.21.1718373417169;
        Fri, 14 Jun 2024 06:56:57 -0700 (PDT)
Message-ID: <ef40e586-b1cd-4620-9d64-22fc2786c072@citrix.com>
Date: Fri, 14 Jun 2024 14:56:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 4/7] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-5-andrew.cooper3@citrix.com>
 <b0d92d89-5ca7-4870-8118-139a47057a88@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <b0d92d89-5ca7-4870-8118-139a47057a88@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23/05/2024 5:09 pm, Jan Beulich wrote:
> On 23.05.2024 13:16, Andrew Cooper wrote:
>> @@ -611,6 +587,40 @@ static bool valid_xcr0(uint64_t xcr0)
>>      return true;
>>  }
>>  
>> +unsigned int xstate_uncompressed_size(uint64_t xcr0)
>> +{
>> +    unsigned int size = XSTATE_AREA_MIN_SIZE, i;
>> +
>> +    ASSERT((xcr0 & ~X86_XCR0_STATES) == 0);
> I'm puzzled by the combination of this assertion and ...
>
>> +    if ( xcr0 == xfeature_mask )
>> +        return xsave_cntxt_size;
> ... this conditional return. Yes, right now we don't support/use any XSS
> components, but without any comment the assertion looks overly restrictive
> to me.

The ASSERT() is to cover a bug I found during testing.

It is a hard error to pass in non-XCR0 states.  XSS states do not exist
in an uncompressed image, and have a base of 0, which ends up hitting a
later assertion.

This snippet with xfeature_mask is just code motion from
xstate_ctxt_size(), expressed as an addition because of how the diff came
out.  It was there to save a double XCR0 write in a perceived common case.

But, your AMX series makes both xfeature_mask and xsave_cntxt_size bogus,
as there is no longer a uniform size of the save area, so I can probably
get away with simply dropping it here.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 14:07:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 14:07:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740733.1147829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7Zz-0005j7-9i; Fri, 14 Jun 2024 14:06:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740733.1147829; Fri, 14 Jun 2024 14:06:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7Zz-0005j0-71; Fri, 14 Jun 2024 14:06:59 +0000
Received: by outflank-mailman (input) for mailman id 740733;
 Fri, 14 Jun 2024 14:06:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sI7Zy-0005iu-4h
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 14:06:58 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5bfd33a9-2a57-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 16:06:57 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2ec002caf3eso36321931fa.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 07:06:57 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da40f6sm188129666b.43.2024.06.14.07.06.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 07:06:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5bfd33a9-2a57-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718374016; x=1718978816; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=GG+LmC7JCivuLSPlMRF8/UrM2XQvSMmEr23dsq/XCts=;
        b=g9D6z41rD55BLBvRk+HvWdPPeguk7ai9cjzqSGZotEHVc98Jyebv17y1Co73+bk813
         8eAzcu7MBgvYHHYGJN40h4Ody0/1qYXLN3uEQYYMZPHSkxm9+awJblypecVuH4Gzpt5C
         QMOeRSKjFqcs1CcC3ikPp5l2Q7nu3ukyvRoBE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718374016; x=1718978816;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GG+LmC7JCivuLSPlMRF8/UrM2XQvSMmEr23dsq/XCts=;
        b=n0cloHCabaCo4t1MgMP7g0OVkz0Bxi9OLZ0j7cQknMyV2bfe83DI3O0e8j220bPniD
         j61MIUM/tytFxmaHedHRYoyKthD6HLh2vKPemF5LC8w8uqasOgRlXqw8VUIDeRVyEl2o
         eugUUcwsItBjURguxhROOnS8aZ+l97zITbsJSBkTW4koewzJLBy234rvDAWGRHd90vmj
         du4esKDqB6iVAcbgFDuT90SnJuj+5nbMcr+QqNNNDbYCce7W2HClpQvJme2Fqs9iJ9Rt
         FzbxMjVmGYvboaj/4n1YuqHTQUAIhynk5zgo7A7YwV9y7cYF7myzSj2nB5g9LQdcP1jA
         e2LQ==
X-Forwarded-Encrypted: i=1; AJvYcCXNnbGDd4WxS7NThVKADfuJ1dx3yjweAJ8+h/jMt40r1/veCIpYBKJIsfw76/Ih/5BadEUcU0qpyagXhDW16GyvyFbQYcc+9D1h1vWu/Aw=
X-Gm-Message-State: AOJu0YwdWMAKjHPRm14FIGwmv1ceTXlL2wbKYTc1hBW1MWpKVyn4YmD1
	fprfw72ci4FDu2GTLxKuv1IuattcSdNp0X+xQaW8KBjjL4rkCFRA01jVHQu3NsI=
X-Google-Smtp-Source: AGHT+IHkmnkjurRVetym/WDZPXA+ZsFtgqHA1M1O5kWuGSQUyXbMYFpjVxVDT40XkMZ6ilPvaHmpXQ==
X-Received: by 2002:a05:6512:549:b0:521:cc8a:46dd with SMTP id 2adb3069b0e04-52ca6e56e2dmr2216246e87.11.1718374016080;
        Fri, 14 Jun 2024 07:06:56 -0700 (PDT)
Message-ID: <49313683-fb34-4f6f-a41c-e02ea6837cfd@citrix.com>
Date: Fri, 14 Jun 2024 15:06:55 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 6/7] x86/cpuid: Fix handling of XSAVE dynamic leaves
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-7-andrew.cooper3@citrix.com>
 <9e6803c6-3d83-4e5c-a7bd-b8b844eec66d@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <9e6803c6-3d83-4e5c-a7bd-b8b844eec66d@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23/05/2024 5:16 pm, Jan Beulich wrote:
> On 23.05.2024 13:16, Andrew Cooper wrote:
>> First, if XSAVE is available in hardware but not visible to the guest, the
>> dynamic leaves shouldn't be filled in.
>>
>> Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
>> host/guest state automatically, but there is provision for "host only" bits to
>> be set, so the implications are still accurate.
>>
>> Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
>> check it at boot.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> Irrespective ...
>
>> v3:
>>  * Adjust commit message about !XSAVE guests
>>  * Rebase over boot time cross check
>>  * Use raw policy
> ... it should probably have occurred to me earlier on to ask: Why raw policy?
> Isn't the host one the more appropriate one to use for any kind of internal
> decisions?

State information is identical in all policies.  It's the ABI of the
X{SAVE,RSTOR}* instructions.

Beyond that, consistency.

xstate_uncompressed_size() does strictly need to be the raw policy,
because it is used by recalculate_xstate() to calculate the host policy.

xstate_compressed_size() doesn't have the same restriction, but should
use the same source of data.

Finally, xstate_{un,}compressed_size() aren't really tied to a choice of
features in the first place.  They shouldn't be limited to the
host_policy's subset of active features.
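For readers unfamiliar with the ABI being discussed: the compressed (XSAVEC/XSAVES) layout packs only the enabled extended components after the 512-byte legacy region and 64-byte header, with each component's size and 64-byte-alignment requirement reported by CPUID leaf 0xDh. A minimal Python sketch of that calculation (purely illustrative; not Xen's actual C implementation, and the `components` table here is a hypothetical stand-in for data read via CPUID):

```python
XSTATE_AREA_MIN_SIZE = 512 + 64  # legacy x87/SSE region + XSAVE header

def xstate_compressed_size(xstates, components):
    """Compute the compressed XSAVE area size for an enabled-feature bitmap.

    xstates:    bitmap of enabled state components (bits 0/1 are the
                legacy x87/SSE states, already covered by the fixed region).
    components: {bit: (size_bytes, align64)} as CPUID.0xD[bit] would report
                in EAX (size) and ECX bit 1 (64-byte alignment required).
    """
    size = XSTATE_AREA_MIN_SIZE
    for bit in range(2, 64):
        if not (xstates >> bit) & 1:
            continue
        comp_size, align64 = components[bit]
        if align64:
            size = (size + 63) & ~63  # round up to the next 64-byte boundary
        size += comp_size
    return size
```

Note that nothing in this calculation depends on which features a particular guest sees enabled, which is consistent with the point above that the function describes instruction-ABI layout rather than any per-policy feature choice.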

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 14:14:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 14:14:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740741.1147840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7h7-0007zw-49; Fri, 14 Jun 2024 14:14:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740741.1147840; Fri, 14 Jun 2024 14:14:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI7h7-0007zp-10; Fri, 14 Jun 2024 14:14:21 +0000
Received: by outflank-mailman (input) for mailman id 740741;
 Fri, 14 Jun 2024 14:14:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI7h6-0007za-7s; Fri, 14 Jun 2024 14:14:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI7h6-0006U7-1p; Fri, 14 Jun 2024 14:14:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI7h5-0008Vj-NG; Fri, 14 Jun 2024 14:14:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sI7h5-0003Rd-Mc; Fri, 14 Jun 2024 14:14:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hP6MeUhjpn+UnLzhEPXyo4ok/ORyqO3y5K9Oi58QKIg=; b=Y3YHcnvkAmeEA9NjSMBO1CDPZL
	LRruTSMZSa4bVNbp0PX2Y2tXKoYtiyoR6BBaSeNpN0lj3dazTbR6ZernhvL79X94NrvuPsv9hp8X9
	4auirdjzlljHOOGdtSSL/AGW98qLzl/lumjvUIKl8hKTNfyAjQKKJdOOWDId4zay1Nog=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186344-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186344: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4fdd8d75566fdad06667a79ec0ce6f43cc466c54
X-Osstest-Versions-That:
    xen=4fdd8d75566fdad06667a79ec0ce6f43cc466c54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 14:14:19 +0000

flight 186344 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186344/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186341
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186341
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186341
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186341
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186341
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186341
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4fdd8d75566fdad06667a79ec0ce6f43cc466c54
baseline version:
 xen                  4fdd8d75566fdad06667a79ec0ce6f43cc466c54

Last test of basis   186344  2024-06-14 05:48:54 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 14:50:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 14:50:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740752.1147850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8G7-0006Z5-P7; Fri, 14 Jun 2024 14:50:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740752.1147850; Fri, 14 Jun 2024 14:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8G7-0006Yy-La; Fri, 14 Jun 2024 14:50:31 +0000
Received: by outflank-mailman (input) for mailman id 740752;
 Fri, 14 Jun 2024 14:50:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8G7-0006Yo-4m; Fri, 14 Jun 2024 14:50:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8G6-00075u-O5; Fri, 14 Jun 2024 14:50:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8G6-0000z6-Bh; Fri, 14 Jun 2024 14:50:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8G6-0005OL-BJ; Fri, 14 Jun 2024 14:50:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FvgZ0SqOqdWDS6jKgkxc9Dte+rOf1uWbHaMaP5WV7Rs=; b=2FQY8dZEI+7HEAUvtzjw1ZFZCJ
	AH3uNTRkwO6t73iBeg4/D5/0tLgZqagG5GOFRwMtSVKd5idXEiOHKveqUSRkgq3UAwzQGwLcDEeC0
	d9oAZ1s7LLkhUP8Kf/QkVLi7hvdDdAbHnjNuPdZNeV2L1eIbYNQXmCzsXLyP3zcPPrY8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186349-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186349: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
X-Osstest-Versions-That:
    xen=4fdd8d75566fdad06667a79ec0ce6f43cc466c54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 14:50:30 +0000

flight 186349 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186349/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7
baseline version:
 xen                  4fdd8d75566fdad06667a79ec0ce6f43cc466c54

Last test of basis   186337  2024-06-13 16:02:07 Z    0 days
Testing same since   186349  2024-06-14 10:03:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Luca Fancellu <luca.fancellu@arm.com>
  Penny Zheng <penny.zheng@arm.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4fdd8d7556..8b4243a9b5  8b4243a9b560c89bb259db5a27832c253d4bebc7 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 14:54:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 14:54:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740758.1147859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8KE-00079i-7v; Fri, 14 Jun 2024 14:54:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740758.1147859; Fri, 14 Jun 2024 14:54:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8KE-00079b-5C; Fri, 14 Jun 2024 14:54:46 +0000
Received: by outflank-mailman (input) for mailman id 740758;
 Fri, 14 Jun 2024 14:54:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CnHe=NQ=linux.intel.com=ilpo.jarvinen@srs-se1.protection.inumbo.net>)
 id 1sI8KC-00079V-Iu
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 14:54:44 +0000
Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.18])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06393e07-2a5e-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 16:54:41 +0200 (CEST)
Received: from orviesa001.jf.intel.com ([10.64.159.141])
 by fmvoesa112.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 14 Jun 2024 07:54:38 -0700
Received: from ijarvine-desk1.ger.corp.intel.com (HELO localhost)
 ([10.245.247.222])
 by smtpauth.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 14 Jun 2024 07:54:34 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06393e07-2a5e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1718376881; x=1749912881;
  h=from:date:to:cc:subject:in-reply-to:message-id:
   references:mime-version;
  bh=OzFYerauOBqolouxgkZVxe0lW7TOcqR6/DYM51DUUMo=;
  b=jhpDin8pJDILbLFyeduAPVcXYlpuqOdZ42aVIxsV4tln9ZaI5S+ezyGQ
   BPqVESkeUDiULDHQpKjQTbkr/JgiWItCRHRJp1D9EcXzoLortn1AaZWnd
   GjDihkoPjbUVd6D3EKQBhvBeDUgHJub8M5sKKO1RCSOjpNrFyhXtsIcC/
   9BLhAvMDTx/4MPP8PI3W3HFsIWtq8RXSxyV/YRsfcjPtMZgGLAs+D08u6
   RvT+kNoiYyf8SAud99HzBcK0KvipJjCTOsBhSjidltk6+S4SorPz3gtSq
   9sPnMzbx99wgcGJu9QuANeCoyVNSEG8BIjKyGyNHodx2ehClv9OwXKN5B
   g==;
X-CSE-ConnectionGUID: 1RxbpxxOROOjRFgEDYyqFw==
X-CSE-MsgGUID: hrli3pnWQZaTDSJNtc/mRQ==
X-IronPort-AV: E=McAfee;i="6700,10204,11103"; a="14997206"
X-IronPort-AV: E=Sophos;i="6.08,238,1712646000"; 
   d="scan'208";a="14997206"
X-CSE-ConnectionGUID: +PcahY8mTLKwdfa1YAsVXw==
X-CSE-MsgGUID: Wz+fHa1oRq2VAKHfFneYkQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.08,238,1712646000"; 
   d="scan'208";a="78002414"
From: =?UTF-8?q?Ilpo=20J=C3=A4rvinen?= <ilpo.jarvinen@linux.intel.com>
Date: Fri, 14 Jun 2024 17:54:30 +0300 (EEST)
To: Li zeming <zeming@nfschina.com>
cc: jgross@suse.com, bhelgaas@google.com, tglx@linutronix.de, mingo@redhat.com, 
    bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org, 
    xen-devel@lists.xenproject.org, linux-pci@vger.kernel.org, 
    linux-kernel@vger.kernel.org
Subject: =?ISO-8859-7?Q?Re=3A_=5BPATCH=5D_x86=3A_pci=3A_xen=3A_Remove_?=
 =?ISO-8859-7?Q?unnecessary_=A10=A2_values_from_ret?=
In-Reply-To: <20240612092406.39007-1-zeming@nfschina.com>
Message-ID: <b1c91d7e-9701-c93c-d336-3729be33f67e@linux.intel.com>
References: <20240612092406.39007-1-zeming@nfschina.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 12 Jun 2024, Li zeming wrote:

> ret is assigned first, so it does not need to initialize the assignment.

While the patch seems fine, this description and the shortlog are
confusing.

-- 
 i.

> Signed-off-by: Li zeming <zeming@nfschina.com>
> ---
>  arch/x86/pci/xen.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
> index 652cd53e77f6..67cb9dc9b2e7 100644
> --- a/arch/x86/pci/xen.c
> +++ b/arch/x86/pci/xen.c
> @@ -267,7 +267,7 @@ static bool __read_mostly pci_seg_supported = true;
>  
>  static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>  {
> -	int ret = 0;
> +	int ret;
>  	struct msi_desc *msidesc;
>  
>  	msi_for_each_desc(msidesc, &dev->dev, MSI_DESC_NOTASSOCIATED) {
> @@ -353,7 +353,7 @@ static int xen_initdom_setup_msi_irqs(struct pci_dev *dev, int nvec, int type)
>  
>  bool xen_initdom_restore_msi(struct pci_dev *dev)
>  {
> -	int ret = 0;
> +	int ret;
>  
>  	if (!xen_initial_domain())
>  		return true;
> 



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 14:55:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 14:55:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740762.1147870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8L8-0007fA-H6; Fri, 14 Jun 2024 14:55:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740762.1147870; Fri, 14 Jun 2024 14:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8L8-0007f3-Dl; Fri, 14 Jun 2024 14:55:42 +0000
Received: by outflank-mailman (input) for mailman id 740762;
 Fri, 14 Jun 2024 14:55:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VLs9=NQ=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sI8L7-0007ev-5Y
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 14:55:41 +0000
Received: from mail-ed1-x531.google.com (mail-ed1-x531.google.com
 [2a00:1450:4864:20::531])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 29edae3b-2a5e-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 16:55:39 +0200 (CEST)
Received: by mail-ed1-x531.google.com with SMTP id
 4fb4d7f45d1cf-57c7681ccf3so2586088a12.2
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 07:55:39 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f6be36a5fsm36308966b.58.2024.06.14.07.55.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 07:55:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29edae3b-2a5e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718376938; x=1718981738; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=1FtZ8fNt1ygCbnKlIwd80G2l/ldpa2huJAp4utij7+8=;
        b=d5oXYlVGZEq0qI3AjByKHVyD1cRUWKm4pEToJdBpEc3u5A7Y4yhpPKhF9YftKdF+Nt
         IrAASPriDfRTB1eJWJxM32MbDl7VtOvfrCuQHKuNtJDjmqDdIUqfSEuhHi63Fg0Lz/1z
         rUw2apFSVv21e0NQGHAy8gHQMZrMbL7ia1ocjJgKPjeSNBZ69ggrFntHnG0mem6G4b5F
         7TRNKtmK0V6vjOJ4Q+OyMSW0C/98zgN8Aw2Ht1n3nPgwY6sL7c1L6d9tgoaLQ8EAox0y
         pqBwuism1iSAgeCYt/8sAo5q3kLSlzzPi9E/pTQjXBFViNh1e4uMO0yr48Jr/3y6VuS+
         ya+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718376938; x=1718981738;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=1FtZ8fNt1ygCbnKlIwd80G2l/ldpa2huJAp4utij7+8=;
        b=Zx7LaagrWvu+DyA4LVUE0VckSEaMxpBZPiQ5Wk8xpP2ex6M2crR7BSY0UHMD+AE0nx
         L3IN1kI2KbL02Q2B51kmAmocmDyJmkEYYuxjNgXm+hUrzHTZfF5oQez7zMGKa7BB7DXb
         YMz4UGv03jrVkuJhwGTzw5hFYzR/nxKUzSLIf+pzUgUSYckBkikAu1o3MY/+eYLvJIrU
         P2ja+rrsLE79HYYvdkiAOLA9UWMrPCEwTn+saTXc/+Ce0ZIxzyYoEqUdSOwkP+BGSOr9
         dqvA/wjg5UJhKGpe01PBhRp5IsC0UQheBH/Jd1k05bm63h4Y838Avk5kbC56b7bkFHHm
         vLCQ==
X-Forwarded-Encrypted: i=1; AJvYcCX89T89q50bpNHa0SCRXnp6+OVFAFrRcoZBnDCoa9DntJ6wvX56c9eQQ6iXmkbX2gF04B7jnFx7+mcELziyxe4HMKqsB1LrwGn9Bodye/U=
X-Gm-Message-State: AOJu0Yzf0/3IWph3WMM5IdVxXF6JHWPL2T3tv5bZhl9NgsJGXLDAI7DE
	O44oQ5ctmrxtlf8yOmuttsgk7KB4baLZlEjNG9ilo1xJIQYKr8CE
X-Google-Smtp-Source: AGHT+IHrXgV+Ny+mHqU/1+NEuHskkxrW79qH9AsV24behMi7w64iv9vCxH7muxMy+EpSSsA/9IBRiA==
X-Received: by 2002:a17:906:22d7:b0:a6f:4d6b:c779 with SMTP id a640c23a62f3a-a6f60dc8007mr192538466b.52.1718376938264;
        Fri, 14 Jun 2024 07:55:38 -0700 (PDT)
Message-ID: <3bbb1acf921f7c600148e4d18ea713d7826aa8c1.camel@gmail.com>
Subject: Re: [PATCH for-4.19] xen/arch: Centralise __read_mostly and
 __ro_after_init
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel
 <michal.orzel@amd.com>, Shawn Anastasio <sanastasio@raptorengineering.com>
Date: Fri, 14 Jun 2024 16:55:37 +0200
In-Reply-To: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
References: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Fri, 2024-06-14 at 13:49 +0100, Andrew Cooper wrote:
> These being in cache.h is inherited from Linux, but is an
> inappropriate
> location to live.
>=20
> __read_mostly is an optimisation related to data placement in order
> to avoid
> having shared data in cachelines that are likely to be written to,
> but it
> really is just a section of the linked image separating data by usage
> patterns; it has nothing to do with cache sizes or flushing logic.
>=20
> Worse, __ro_after_init was only in xen/cache.h because __read_mostly
> was in
> arch/cache.h, and has literally nothing whatsoever to do with caches.
>=20
> Move the definitions into xen/sections.h, which in particular means
> that
> RISC-V doesn't need to repeat the problematic pattern.=C2=A0 Take the
> opportunity
> to provide short descriptions of what these are used for.
>=20
> For now, leave TODO comments next to the other identical
> definitions.=C2=A0 It
> turns out that unpicking cache.h is more complicated than it appears
> because a
> number of files use it for transitive dependencies.
>=20
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monn=C3=A9 <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: Shawn Anastasio <sanastasio@raptorengineering.com>
>=20
> For 4.19.=C2=A0 This is to help the RISC-V "full build of Xen" effort
> without
> introducing a pattern that's going to require extra effort to undo
> after the
> fact.
> ---
> =C2=A0xen/arch/arm/include/asm/cache.h |=C2=A0 1 +
> =C2=A0xen/arch/ppc/include/asm/cache.h |=C2=A0 1 +
> =C2=A0xen/arch/x86/include/asm/cache.h |=C2=A0 1 +
> =C2=A0xen/include/xen/cache.h=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0 |=C2=A0 1 +
> =C2=A0xen/include/xen/sections.h=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 | 21=
 +++++++++++++++++++++
> =C2=A05 files changed, 25 insertions(+)
>=20
> diff --git a/xen/arch/arm/include/asm/cache.h
> b/xen/arch/arm/include/asm/cache.h
> index 240b6ae0eaa3..029b2896fb3e 100644
> --- a/xen/arch/arm/include/asm/cache.h
> +++ b/xen/arch/arm/include/asm/cache.h
> @@ -6,6 +6,7 @@
> =C2=A0#define L1_CACHE_SHIFT=C2=A0 (CONFIG_ARM_L1_CACHE_SHIFT)
> =C2=A0#define L1_CACHE_BYTES=C2=A0 (1 << L1_CACHE_SHIFT)
> =C2=A0
> +/* TODO: Phase out the use of this via cache.h */
> =C2=A0#define __read_mostly __section(".data.read_mostly")
> =C2=A0
> =C2=A0#endif
> diff --git a/xen/arch/ppc/include/asm/cache.h
> b/xen/arch/ppc/include/asm/cache.h
> index 0d7323d7892f..13c0bf3242c8 100644
> --- a/xen/arch/ppc/include/asm/cache.h
> +++ b/xen/arch/ppc/include/asm/cache.h
> @@ -3,6 +3,7 @@
> =C2=A0#ifndef _ASM_PPC_CACHE_H
> =C2=A0#define _ASM_PPC_CACHE_H
> =C2=A0
> +/* TODO: Phase out the use of this via cache.h */
> =C2=A0#define __read_mostly __section(".data.read_mostly")
> =C2=A0
> =C2=A0#endif /* _ASM_PPC_CACHE_H */
> diff --git a/xen/arch/x86/include/asm/cache.h
> b/xen/arch/x86/include/asm/cache.h
> index e4770efb22b9..956c05493e23 100644
> --- a/xen/arch/x86/include/asm/cache.h
> +++ b/xen/arch/x86/include/asm/cache.h
> @@ -9,6 +9,7 @@
> =C2=A0#define L1_CACHE_SHIFT	(CONFIG_X86_L1_CACHE_SHIFT)
> =C2=A0#define L1_CACHE_BYTES	(1 << L1_CACHE_SHIFT)
> =C2=A0
> +/* TODO: Phase out the use of this via cache.h */
> =C2=A0#define __read_mostly __section(".data.read_mostly")
> =C2=A0
> =C2=A0#ifndef __ASSEMBLY__
> diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
> index f52a0aedf768..55456823c543 100644
> --- a/xen/include/xen/cache.h
> +++ b/xen/include/xen/cache.h
> @@ -15,6 +15,7 @@
> =C2=A0#define __cacheline_aligned
> __attribute__((__aligned__(SMP_CACHE_BYTES)))
> =C2=A0#endif
> =C2=A0
> +/* TODO: Phase out the use of this via cache.h */
> =C2=A0#define __ro_after_init __section(".data.ro_after_init")
> =C2=A0
> =C2=A0#endif /* __LINUX_CACHE_H */
> diff --git a/xen/include/xen/sections.h b/xen/include/xen/sections.h
> index b6cb5604c285..6d4db2b38f0f 100644
> --- a/xen/include/xen/sections.h
> +++ b/xen/include/xen/sections.h
> @@ -3,9 +3,30 @@
> =C2=A0#ifndef __XEN_SECTIONS_H__
> =C2=A0#define __XEN_SECTIONS_H__
> =C2=A0
> +#include <xen/compiler.h>
> +
> =C2=A0/* SAF-0-safe */
> =C2=A0extern char __init_begin[], __init_end[];
> =C2=A0
> +/*
> + * Some data is expected to be written very rarely (if at all).
> + *
> + * For performance reasons it is helpful to group such data in the
> build, to
> + * avoid the linker placing it adjacent to often-written data.
> + */
> +#define __read_mostly __section(".data.read_mostly")
> +
> +/*
> + * Some data should be chosen during boot and be immutable
> thereafter.
> + *
> + * Variables annotated with __ro_after_init will become read-only
> after boot
> + * and suffer a runtime access fault if modified.
> + *
> + * For architectures/platforms which haven't implemented support,
> these
> + * variables will be treated as regular mutable data.
> + */
> +#define __ro_after_init __section(".data.ro_after_init")
> +
> =C2=A0#endif /* !__XEN_SECTIONS_H__ */
> =C2=A0/*
> =C2=A0 * Local variables:
>=20
> base-commit: 8b4243a9b560c89bb259db5a27832c253d4bebc7



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 15:30:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 15:30:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740781.1147879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8sm-0006d7-6T; Fri, 14 Jun 2024 15:30:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740781.1147879; Fri, 14 Jun 2024 15:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI8sm-0006d0-3u; Fri, 14 Jun 2024 15:30:28 +0000
Received: by outflank-mailman (input) for mailman id 740781;
 Fri, 14 Jun 2024 15:30:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8sk-0006cq-Nj; Fri, 14 Jun 2024 15:30:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8sk-0007rl-I7; Fri, 14 Jun 2024 15:30:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8sk-00027A-BM; Fri, 14 Jun 2024 15:30:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sI8sk-0004N2-Ak; Fri, 14 Jun 2024 15:30:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aRkbbZNvo4L+FG8kbhD38NfOvh93qyESAdH61muZZMo=; b=HY4KD0MkN0nJpJnIvbeLDgQoq5
	XvL2KvVBZxZ6f+NQToCVScaWaJJobtEjvq4xuLEO8r8kFkwJM7CG71Gy/CuNJ2poY1LLF7q0QbO7X
	InB9yrplJOu4e5g4Jo1EeuzgZZarG903akiO65QDENinoQ9k26t4nOioM6XeUILefvX8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186352-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186352: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5e776299a2604b336a947e68593012ab2cc16eb4
X-Osstest-Versions-That:
    ovmf=ce91687a1b2d4e03b01abb474b4665629776f588
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 15:30:26 +0000

flight 186352 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186352/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5e776299a2604b336a947e68593012ab2cc16eb4
baseline version:
 ovmf                 ce91687a1b2d4e03b01abb474b4665629776f588

Last test of basis   186346  2024-06-14 07:13:40 Z    0 days
Testing same since   186352  2024-06-14 13:42:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ce91687a1b..5e776299a2  5e776299a2604b336a947e68593012ab2cc16eb4 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 16:13:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 16:13:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740799.1147890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9Y1-0005UL-Aq; Fri, 14 Jun 2024 16:13:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740799.1147890; Fri, 14 Jun 2024 16:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9Y1-0005UE-75; Fri, 14 Jun 2024 16:13:05 +0000
Received: by outflank-mailman (input) for mailman id 740799;
 Fri, 14 Jun 2024 16:13:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ldK3=NQ=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sI9Y0-0005U8-8E
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 16:13:04 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f99b2bfb-2a68-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 18:13:02 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-79-46-197-197.retail.telecomitalia.it [79.46.197.197])
 by support.bugseng.com (Postfix) with ESMTPSA id E03DE4EE0756;
 Fri, 14 Jun 2024 18:13:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f99b2bfb-2a68-11ef-90a3-e314d9c70b13
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 0/2] Address violations of MISRA C:2012 Rule 5.3
Date: Fri, 14 Jun 2024 18:12:24 +0200
Message-Id: <cover.1718380780.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This addresses violations of MISRA C:2012 Rule 5.3, which states the
following: An identifier declared in an inner scope shall not hide an
identifier declared in an outer scope.

This series modifies x86/mm.c and x86/e820.c, which contained instances
of local variable names shadowing a global variable; these patches remove
those occurrences, bringing the files into partial compliance with MISRA
C:2012 Rule 5.3.

No functional change.

Alessandro Zucchelli (2):
  x86/mm address violations of MISRA C:2012 Rule 5.3
  x86/e820 address violations of MISRA C:2012 Rule 5.3

 xen/arch/x86/e820.c | 74 ++++++++++++++++++++++-----------------------
 xen/arch/x86/mm.c   | 12 ++++----
 2 files changed, 43 insertions(+), 43 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 16:13:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 16:13:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740801.1147899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9YE-0005ka-Gp; Fri, 14 Jun 2024 16:13:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740801.1147899; Fri, 14 Jun 2024 16:13:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9YE-0005kT-Da; Fri, 14 Jun 2024 16:13:18 +0000
Received: by outflank-mailman (input) for mailman id 740801;
 Fri, 14 Jun 2024 16:13:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ldK3=NQ=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sI9YD-0005jz-6e
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 16:13:17 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 013e1e08-2a69-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 18:13:15 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-79-46-197-197.retail.telecomitalia.it [79.46.197.197])
 by support.bugseng.com (Postfix) with ESMTPSA id 369AF4EE0757;
 Fri, 14 Jun 2024 18:13:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 013e1e08-2a69-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 1/2] x86/mm address violations of MISRA C:2012 Rule 5.3
Date: Fri, 14 Jun 2024 18:12:25 +0200
Message-Id: <80cb7054b82f55f11159faf5f10bfacf44758be0.1718380780.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718380780.git.alessandro.zucchelli@bugseng.com>
References: <cover.1718380780.git.alessandro.zucchelli@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This addresses violations of MISRA C:2012 Rule 5.3, which states the
following: An identifier declared in an inner scope shall not hide an
identifier declared in an outer scope.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 xen/arch/x86/mm.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 5471b6b1f2..720d56e103 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4703,7 +4703,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     {
         struct xen_foreign_memory_map fmap;
         struct domain *d;
-        struct e820entry *map;
+        struct e820entry *e;
 
         if ( copy_from_guest(&fmap, arg, 1) )
             return -EFAULT;
@@ -4722,23 +4722,23 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return rc;
         }
 
-        map = xmalloc_array(e820entry_t, fmap.map.nr_entries);
-        if ( map == NULL )
+        e = xmalloc_array(e820entry_t, fmap.map.nr_entries);
+        if ( e == NULL )
         {
             rcu_unlock_domain(d);
             return -ENOMEM;
         }
 
-        if ( copy_from_guest(map, fmap.map.buffer, fmap.map.nr_entries) )
+        if ( copy_from_guest(e, fmap.map.buffer, fmap.map.nr_entries) )
         {
-            xfree(map);
+            xfree(e);
             rcu_unlock_domain(d);
             return -EFAULT;
         }
 
         spin_lock(&d->arch.e820_lock);
         xfree(d->arch.e820);
-        d->arch.e820 = map;
+        d->arch.e820 = e;
         d->arch.nr_e820 = fmap.map.nr_entries;
         spin_unlock(&d->arch.e820_lock);
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 16:13:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 16:13:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740806.1147909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9YP-00068S-OH; Fri, 14 Jun 2024 16:13:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740806.1147909; Fri, 14 Jun 2024 16:13:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9YP-00068H-LV; Fri, 14 Jun 2024 16:13:29 +0000
Received: by outflank-mailman (input) for mailman id 740806;
 Fri, 14 Jun 2024 16:13:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ldK3=NQ=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sI9YO-0005jz-WC
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 16:13:28 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 085fe45b-2a69-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 18:13:27 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-79-46-197-197.retail.telecomitalia.it [79.46.197.197])
 by support.bugseng.com (Postfix) with ESMTPSA id 3516B4EE0756;
 Fri, 14 Jun 2024 18:13:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 085fe45b-2a69-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 2/2] x86/e820: address violations of MISRA C:2012 Rule 5.3
Date: Fri, 14 Jun 2024 18:12:26 +0200
Message-Id: <1a02a5af6c2a737bc814610d4cc684ad4a00b8dc.1718380780.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718380780.git.alessandro.zucchelli@bugseng.com>
References: <cover.1718380780.git.alessandro.zucchelli@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This addresses violations of MISRA C:2012 Rule 5.3, which states the
following: "An identifier declared in an inner scope shall not hide an
identifier declared in an outer scope."

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 xen/arch/x86/e820.c | 74 ++++++++++++++++++++++-----------------------
 1 file changed, 37 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/e820.c b/xen/arch/x86/e820.c
index 6a3ce7e0a0..3726823e88 100644
--- a/xen/arch/x86/e820.c
+++ b/xen/arch/x86/e820.c
@@ -593,79 +593,79 @@ int __init e820_add_range(uint64_t s, uint64_t e, uint32_t type)
 }
 
 int __init e820_change_range_type(
-    struct e820map *e820, uint64_t s, uint64_t e,
+    struct e820map *map, uint64_t s, uint64_t e,
     uint32_t orig_type, uint32_t new_type)
 {
     uint64_t rs = 0, re = 0;
     unsigned int i;
 
-    for ( i = 0; i < e820->nr_map; i++ )
+    for ( i = 0; i < map->nr_map; i++ )
     {
         /* Have we found the e820 region that includes the specified range? */
-        rs = e820->map[i].addr;
-        re = rs + e820->map[i].size;
+        rs = map->map[i].addr;
+        re = rs + map->map[i].size;
         if ( (s >= rs) && (e <= re) )
             break;
     }
 
-    if ( (i == e820->nr_map) || (e820->map[i].type != orig_type) )
+    if ( (i == map->nr_map) || (map->map[i].type != orig_type) )
         return 0;
 
     if ( (s == rs) && (e == re) )
     {
-        e820->map[i].type = new_type;
+        map->map[i].type = new_type;
     }
     else if ( (s == rs) || (e == re) )
     {
-        if ( (e820->nr_map + 1) > ARRAY_SIZE(e820->map) )
+        if ( (map->nr_map + 1) > ARRAY_SIZE(map->map) )
             goto overflow;
 
-        memmove(&e820->map[i+1], &e820->map[i],
-                (e820->nr_map-i) * sizeof(e820->map[0]));
-        e820->nr_map++;
+        memmove(&map->map[i+1], &map->map[i],
+                (map->nr_map-i) * sizeof(map->map[0]));
+        map->nr_map++;
 
         if ( s == rs )
         {
-            e820->map[i].size = e - s;
-            e820->map[i].type = new_type;
-            e820->map[i+1].addr = e;
-            e820->map[i+1].size = re - e;
+            map->map[i].size = e - s;
+            map->map[i].type = new_type;
+            map->map[i+1].addr = e;
+            map->map[i+1].size = re - e;
         }
         else
         {
-            e820->map[i].size = s - rs;
-            e820->map[i+1].addr = s;
-            e820->map[i+1].size = e - s;
-            e820->map[i+1].type = new_type;
+            map->map[i].size = s - rs;
+            map->map[i+1].addr = s;
+            map->map[i+1].size = e - s;
+            map->map[i+1].type = new_type;
         }
     }
     else
     {
-        if ( (e820->nr_map + 2) > ARRAY_SIZE(e820->map) )
+        if ( (map->nr_map + 2) > ARRAY_SIZE(map->map) )
             goto overflow;
 
-        memmove(&e820->map[i+2], &e820->map[i],
-                (e820->nr_map-i) * sizeof(e820->map[0]));
-        e820->nr_map += 2;
+        memmove(&map->map[i+2], &map->map[i],
+                (map->nr_map-i) * sizeof(map->map[0]));
+        map->nr_map += 2;
 
-        e820->map[i].size = s - rs;
-        e820->map[i+1].addr = s;
-        e820->map[i+1].size = e - s;
-        e820->map[i+1].type = new_type;
-        e820->map[i+2].addr = e;
-        e820->map[i+2].size = re - e;
+        map->map[i].size = s - rs;
+        map->map[i+1].addr = s;
+        map->map[i+1].size = e - s;
+        map->map[i+1].type = new_type;
+        map->map[i+2].addr = e;
+        map->map[i+2].size = re - e;
     }
 
     /* Finally, look for any opportunities to merge adjacent e820 entries. */
-    for ( i = 0; i < (e820->nr_map - 1); i++ )
+    for ( i = 0; i < (map->nr_map - 1); i++ )
     {
-        if ( (e820->map[i].type != e820->map[i+1].type) ||
-             ((e820->map[i].addr + e820->map[i].size) != e820->map[i+1].addr) )
+        if ( (map->map[i].type != map->map[i+1].type) ||
+             ((map->map[i].addr + map->map[i].size) != map->map[i+1].addr) )
             continue;
-        e820->map[i].size += e820->map[i+1].size;
-        memmove(&e820->map[i+1], &e820->map[i+2],
-                (e820->nr_map-i-2) * sizeof(e820->map[0]));
-        e820->nr_map--;
+        map->map[i].size += map->map[i+1].size;
+        memmove(&map->map[i+1], &map->map[i+2],
+                (map->nr_map-i-2) * sizeof(map->map[0]));
+        map->nr_map--;
         i--;
     }
 
@@ -678,9 +678,9 @@ int __init e820_change_range_type(
 }
 
 /* Set E820_RAM area (@s,@e) as RESERVED in specified e820 map. */
-int __init reserve_e820_ram(struct e820map *e820, uint64_t s, uint64_t e)
+int __init reserve_e820_ram(struct e820map *map, uint64_t s, uint64_t e)
 {
-    return e820_change_range_type(e820, s, e, E820_RAM, E820_RESERVED);
+    return e820_change_range_type(map, s, e, E820_RAM, E820_RESERVED);
 }
 
 unsigned long __init init_e820(const char *str, struct e820map *raw)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 14 16:23:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 16:23:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740822.1147919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9iM-000062-Kl; Fri, 14 Jun 2024 16:23:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740822.1147919; Fri, 14 Jun 2024 16:23:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sI9iM-00005u-IB; Fri, 14 Jun 2024 16:23:46 +0000
Received: by outflank-mailman (input) for mailman id 740822;
 Fri, 14 Jun 2024 16:23:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tF+V=NQ=kernel.dk=axboe@srs-se1.protection.inumbo.net>)
 id 1sI9iL-00005o-6b
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 16:23:45 +0000
Received: from mail-pj1-x102d.google.com (mail-pj1-x102d.google.com
 [2607:f8b0:4864:20::102d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 778ff5f8-2a6a-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 18:23:44 +0200 (CEST)
Received: by mail-pj1-x102d.google.com with SMTP id
 98e67ed59e1d1-2c4b8d8b8e0so399739a91.2
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 09:23:44 -0700 (PDT)
Received: from [127.0.0.1] ([198.8.77.157]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c4c46701absm4112038a91.40.2024.06.14.09.23.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 Jun 2024 09:23:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 778ff5f8-2a6a-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=kernel-dk.20230601.gappssmtp.com; s=20230601; t=1718382222; x=1718987022; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:date:message-id:subject
         :references:in-reply-to:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qOsdq+Rd2jD/oYTlQL4Vcr47c5ELyE1/1jwJx8ny+zc=;
        b=QRHyidpvoYHPYzOT5jaop4RQQPIbozT0KvgMXZXJ+C557BnVBX0uSfbNhp5FI+J7C1
         bI7/8umNBaqDdK7VhY/gyyKdCgn5jHIevphVR+CpF0TvL+VjTGAYNiPWEipiD9nFu4ql
         WHFIuYj+xg4wJKdzbQjJdGresTFoWva/TApleyhTBhZlSt1PAVgTx5B7DaTu/SkrBpUh
         f96ZKbmUuS/egB/oGCoSfu0koF2F2N4XwlO1fngmxM5a3bCr4WTP6kCT1UwxxeGTUyJ5
         7QpNOkseoABV+asQNSqmLOi6WWwzrHwdigrKYPteQKLUV1C3zp7fkXQ9lcQ1ym489ApZ
         oLWg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718382222; x=1718987022;
        h=content-transfer-encoding:mime-version:date:message-id:subject
         :references:in-reply-to:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=qOsdq+Rd2jD/oYTlQL4Vcr47c5ELyE1/1jwJx8ny+zc=;
        b=wUS+RtlTZ2OPQZia/ih2wbSpXKrgqq53xnoXhHmZd/W8MapG9ijwNbfjEsAJXHcdZv
         cyhM9T5iwIiiovVl2V1KvdNX/qi49iztFyZfZvXquUxHO4sBnSmRIfvtsK09NNwTN3og
         CzOqCKMLFK+wtahke39yn+JSf/sXcuruQc72XTEKz88tj7aAPVbcVcWtBFtausaKn5PZ
         4X26+g0nDxrO4g37WsTW2MeU/EN/nvN0t81IkQk6ecWxoAaB5BAF59bOjK3hoY1gnjGv
         njN2cqvLzWo7fdyS9ap3oGFMcFLFZejL7gqI8nznSBD+fnSqUHVfnbC0rbYXXu4KcwzH
         tatw==
X-Forwarded-Encrypted: i=1; AJvYcCUoEYPS58DAab3ZQafE8yHhEtE7k33Et/bnPTnUUd11SXpIK0N/vuh/Z6F+Wlj+ZtWswQ4CQKD25FXvKrGExOI4WeXauZcvahIOhGmQrtw=
X-Gm-Message-State: AOJu0YxwdSf5kpyftrtoHdQR2KXPS5s4lc78mly3ZmFAiTqdtDmLf5HZ
	qCyBAMI7/Xmh8XVciN/N2BPqpD2gyVQmIp82nF45NniFbncu9jqQno88TMUrvg8=
X-Google-Smtp-Source: AGHT+IFzZlkOeh7ub8gHV6v3rJ7EdBnks7HtbNFvAppAM0NMs6uCGvh3Ey3eFD0Th7n1FGRILE+Uqw==
X-Received: by 2002:a17:90a:d313:b0:2c4:da09:e29 with SMTP id 98e67ed59e1d1-2c4dbd431cbmr3209949a91.3.1718382222558;
        Fri, 14 Jun 2024 09:23:42 -0700 (PDT)
From: Jens Axboe <axboe@kernel.dk>
To: "Martin K. Petersen" <martin.petersen@oracle.com>, 
 Christoph Hellwig <hch@lst.de>
Cc: Richard Weinberger <richard@nod.at>, 
 Anton Ivanov <anton.ivanov@cambridgegreys.com>, 
 Johannes Berg <johannes@sipsolutions.net>, 
 Josef Bacik <josef@toxicpanda.com>, Ilya Dryomov <idryomov@gmail.com>, 
 Dongsheng Yang <dongsheng.yang@easystack.cn>, 
 =?utf-8?q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
 linux-um@lists.infradead.org, linux-block@vger.kernel.org, 
 nbd@other.debian.org, ceph-devel@vger.kernel.org, 
 xen-devel@lists.xenproject.org, linux-scsi@vger.kernel.org
In-Reply-To: <20240531074837.1648501-1-hch@lst.de>
References: <20240531074837.1648501-1-hch@lst.de>
Subject: Re: convert the SCSI ULDs to the atomic queue limits API v2
Message-Id: <171838222101.240089.17677804682941719694.b4-ty@kernel.dk>
Date: Fri, 14 Jun 2024 10:23:41 -0600
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: b4 0.14.0-rc0


On Fri, 31 May 2024 09:47:55 +0200, Christoph Hellwig wrote:
> this series converts the SCSI upper level drivers to the atomic queue
> limits API.
> 
> The first patch is a bug fix for ubd that later patches depend on and
> might be worth picking up for 6.10.
> 
> The second patch changes the max_sectors calculation to take the optimal
> I/O size into account so that sd, nbd and rbd don't have to mess with
> the user max_sector value.  I'd love to see a careful review from the
> nbd and rbd maintainers for this one!
> 
> [...]

Applied, thanks!

[01/14] ubd: refactor the interrupt handler
        commit: 5db755fbb1a0de4a4cfd5d5edfaa19853b9c56e6
[02/14] ubd: untagle discard vs write zeroes not support handling
        commit: 31ade7d4fdcf382beb8cb229a1f5d77e0f239672
[03/14] rbd: increase io_opt again
        commit: a00d4bfce7c6d7fa4712b8133ec195c9bd142ae6
[04/14] block: take io_opt and io_min into account for max_sectors
        commit: a23634644afc2f7c1bac98776440a1f3b161819e
[05/14] sd: simplify the ZBC case in provisioning_mode_store
        commit: b3491b0db165c0cbe25874da66d97652c03db654
[06/14] sd: add a sd_disable_discard helper
        commit: b0dadb86a90bd5a7b723f9d3a6cf69da9b596496
[07/14] sd: add a sd_disable_write_same helper
        commit: 9972b8ce0d4ba373901bdd1e15e4de58fcd7f662
[08/14] sd: simplify the disable case in sd_config_discard
        commit: d15b9bd42cd3b2077812d4bf32f532a9bd5c4914
[09/14] sd: factor out a sd_discard_mode helper
        commit: f1e8185fc12c699c3abf4f39b1ff5d7793da3a95
[10/14] sd: cleanup zoned queue limits initialization
        commit: 9c1d339a1bf45f4d3a2e77bbf24b0ec51f02551c
[11/14] sd: convert to the atomic queue limits API
        commit: 804e498e0496d889090f32f812b5ce1a4f2aa63e
[12/14] sr: convert to the atomic queue limits API
        commit: 969f17e10f5b732c05186ee0126c8a08166d0cda
[13/14] block: remove unused queue limits API
        commit: 1652b0bafeaa8281ca9a805d81e13d7647bd2f44
[14/14] block: add special APIs for run-time disabling of discard and friends
        commit: 73e3715ed14844067c5c598e72777641004a7f60

Best regards,
-- 
Jens Axboe





From xen-devel-bounces@lists.xenproject.org Fri Jun 14 16:45:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 16:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740828.1147930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIA2p-0003jo-5R; Fri, 14 Jun 2024 16:44:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740828.1147930; Fri, 14 Jun 2024 16:44:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIA2p-0003jh-2O; Fri, 14 Jun 2024 16:44:55 +0000
Received: by outflank-mailman (input) for mailman id 740828;
 Fri, 14 Jun 2024 16:44:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuT0=NQ=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sIA2n-0003jb-Is
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 16:44:53 +0000
Received: from fhigh6-smtp.messagingengine.com
 (fhigh6-smtp.messagingengine.com [103.168.172.157])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69d62ad6-2a6d-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 18:44:49 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 2350C1140107;
 Fri, 14 Jun 2024 12:44:48 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Fri, 14 Jun 2024 12:44:48 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 14 Jun 2024 12:44:46 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69d62ad6-2a6d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718383488;
	 x=1718469888; bh=O2BEZck2Ho0YRHHcW7T5XMJ8DmyJB8VHc7Ut7VyrFs4=; b=
	M3Ttco2IUQ0lHETGVV/LMduBzATwcGM+0A4DKR+gVmtXDd8vlQIVOm02P5DifIWJ
	eR+/AwuS3Db1fxFdJeCgdIb2Zrjyd4zUIc/rYEDgE7Qx7t+KLZZIat9LgRwx9YQJ
	/iRl0k2XfgGd39+VaiIIFpzwor81U4PRudAUWNlbFD4Rfq+3Oze2KaG6rlJCmzua
	cH0L3DJieftNhTbdwIl00C6vYxplcPxuUDcsNHAjExxSpCll8ZkfoyHrb1jYFTmj
	DrPXULiB6txSMqCPTBleyJT6hwXM3C9qasRkD3VYjKVlGMGav7slSyWTXak5AOGb
	VXANElysgg9H9SWJQ6s0Jw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718383488; x=1718469888; bh=O2BEZck2Ho0YRHHcW7T5XMJ8DmyJ
	B8VHc7Ut7VyrFs4=; b=RjyBcaUzGa510Y+gYlMmUuSGbVrcd/G/aLGs1DW5EtVA
	kB1AWMeaW4gX/Q34ueMSzNUdquhBJ6YyNvkdMXDJM1DT+1Kp6TgvPiOVLm3dzV3m
	CPSdVyre14GK818k3vEYwIOiu6NMO2u7HqnMsF9Y7ylw97Frm0fGY8ahBKRrQumy
	diD8wN8MnDdtfkQanr9p136sI3V5ZImn1y7N+c6fq2NAvgONgt4oOoZVZPYOGM44
	9Tq5DBEOQP7I28DJ+gTINaq/piwpqwl5oVvvM3f2G/S0PGPYt2u4DsZT1ed3ETfU
	qDfUHOfFPc1cpBDNgz6zhJpJ6We7nD6O1ddhNAznQg==
X-ME-Sender: <xms:f3NsZoq3CqOY4BULHgydhgb8MR7nQ3gqV0xQ4GbyPmWYtE0xgqUg7A>
    <xme:f3NsZuo9pDMAA6krABrgWgLmBL2eT7-XIkOXjos8FhV8CzGP4dvcL9IyB7VvYpnli
    lKYENybq61VGpQ>
X-ME-Received: <xmr:f3NsZtPwpoNbYh1herJ0fjlvj1ua9oE5H2-BMZJIXNSmT6lVq0mR1PsY8cPG_9ndo2USGmwt_I2LFGw6XEEw8r2hStOatbA4HcX9TB_UkvNG11IS>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeduledguddtgecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpedvjeetgeekhfetudfhgfetffeg
    fffguddvgffhffeifeeikeektdehgeetheffleenucevlhhushhtvghrufhiiigvpedtne
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:f3NsZv6H2UvWmK2Cvt7b2EiMiwXrbnr6NmSlRsrlbF8X-UWU-47qdQ>
    <xmx:f3NsZn7fOJpYSd69AtS9upXguXDQ_3j4fF0U6lbrH56y9X4fKRGP4Q>
    <xmx:f3NsZvj4EJBUK_dewf69S8UXc0sg69mz4M7bEaL53-09NFJi2lvzng>
    <xmx:f3NsZh5jJp7GqcDvpw7GIEYkYAeOkyQE2B2DFB6OhJv1eVWMPhbJNw>
    <xmx:gHNsZksTHRE7hWLrCUCAg-N-hElsPgZWd4qZP-7KO-gmnAD5OKnbcjNl>
Feedback-ID: iac594737:Fastmail
Date: Fri, 14 Jun 2024 12:44:42 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <Zmxze4a0PZbwcLSb@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="yMmMNAtPZdFRNM6Y"
Content-Disposition: inline
In-Reply-To: <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>


--yMmMNAtPZdFRNM6Y
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 14 Jun 2024 12:44:42 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen

On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> On 14.06.2024 09:21, Roger Pau Monné wrote:
> > On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> >> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> >>> GPU acceleration requires that pageable host memory be able to be mapped
> >>> into a guest.
> >>
> >> I'm sure it was explained in the session, which sadly I couldn't attend.
> >> I've been asking Ray and Xenia the same before, but I'm afraid it still
> >> hasn't become clear to me why this is a _requirement_. After all that's
> >> against what we're doing elsewhere (i.e. so far it has always been
> >> guest memory that's mapped in the host). I can appreciate that it might
> >> be more difficult to implement, but avoiding to violate this fundamental
> >> (kind of) rule might be worth the price (and would avoid other
> >> complexities, of which there may be lurking more than what you enumerate
> >> below).
> >
> > My limited understanding (please someone correct me if wrong) is that
> > the GPU buffer (or context I think it's also called?) is always
> > allocated from dom0 (the owner of the GPU).  The underlying memory
> > addresses of such buffer needs to be mapped into the guest.  The
> > buffer backing memory might be GPU MMIO from the device BAR(s) or
> > system RAM, and such buffer can be paged by the dom0 kernel at any
> > time (iow: changing the backing memory from MMIO to RAM or vice
> > versa).  Also, the buffer must be contiguous in physical address
> > space.
>
> This last one in particular would of course be a severe restriction.
> Yet: There's an IOMMU involved, isn't there?

On x86 there is.  On Arm there might or might not be.  There are
non-embedded systems (such as Apple silicon) where the GPU is not behind
an IOMMU, for performance reasons IIUC.

> > I'm not sure it's possible to ensure that when using system RAM such
> > memory comes from the guest rather than the host, as it would likely
> > require some very intrusive hooks into the kernel logic, and
> > negotiation with the guest to allocate the requested amount of
> > memory and hand it over to dom0.  If the maximum size of the buffer is
> > known in advance maybe dom0 can negotiate with the guest to allocate
> > such a region and grant it access to dom0 at driver attachment time.
>
> Besides the thought of transiently converting RAM to kind-of-MMIO, this
> makes me think of another possible option: Could Dom0 transfer ownership
> of the RAM that wants mapping in the guest (remotely resembling
> grant-transfer)? Would require the guest to have ballooned down enough
> first, of course. (In both cases it would certainly need working out how
> the conversion / transfer back could be made work safely and reasonably
> cleanly.)
>
> Jan

The kernel driver needs to be able to reclaim the memory at any time.
My understanding is that this is used to migrate memory between VRAM and
system RAM.  It might also be used for other purposes.

-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--yMmMNAtPZdFRNM6Y
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZsc3sACgkQsoi1X/+c
IsFmlA//aHkzdFRW8EExFchfLYhsYOwY9lIAWxNiDNuNfup2CCsdQjKsY/+9mILv
PXpUxrhOqYNzTYiV7Rl8CWXIS9VM7BzzmU7qDuZ75cG5AyGGkX/jW2450QIhnsYn
oOuuBJApysERb5WVRxpQNrsEqPje1VBE/pnC5bZon1tWitN4CsihAtmkB/xZ+o42
GEr9QPWbA6U2bgr7vDcy2LZ510wGZ6NZsSHpAl4nwby1wrCL1uP7E4QhuRcKju/4
ovD/LkCzysCmD4T4kWjS11AtUI69vEzEpN2H0EtKQzxNprY3wkLQwtruks0bFCr1
eN9feAYllXF0nGzjlbxqRiaCUyuWiJXD82gjBKqWKs/4r78IfJ/cYzgYYsW+K8uC
nNoJTmKzU24/bw0DsoQdIPcWTYDs99kA0yjKgpzh3OHPZh9rzc5DR/PRC4UHs0ag
/zD4H4iEIPCA5o3fYHbHmspTHbv2cJBRuV8zkDYsnHFDwrMduaA0goRZqADlSUe+
GbUT/nkWnUvRbMFLG7ScA8UcDNDxnM6rffQWK4hE4Cnce89CKKJv1312qvW63u7T
0+AF8d2bWtHcDl+oQUVLTYBVqyY3Se52/c56wm5rS7L+svCElXX+QVFingfaY7h1
GoyNTZOGY47DJaAcuajsiIHKRBL6F8QAveRI4ouFMjthUPVOR6Q=
=WkhK
-----END PGP SIGNATURE-----

--yMmMNAtPZdFRNM6Y--


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 17:07:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 17:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740839.1147940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIAOr-0007Ve-1u; Fri, 14 Jun 2024 17:07:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740839.1147940; Fri, 14 Jun 2024 17:07:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIAOq-0007Ur-U6; Fri, 14 Jun 2024 17:07:40 +0000
Received: by outflank-mailman (input) for mailman id 740839;
 Fri, 14 Jun 2024 17:07:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sIAOp-0007Uc-0y
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 17:07:39 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 992297b1-2a70-11ef-b4bb-af5377834399;
 Fri, 14 Jun 2024 19:07:37 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a68b41ef3f6so281274766b.1
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 10:07:37 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da32c3sm207057266b.13.2024.06.14.10.07.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 10:07:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 992297b1-2a70-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718384855; x=1718989655; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:autocrypt:subject:from:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=xWI1hpd63xiNM8gEgVvcx+MNCHuqUWCo1glSOM5ZNN4=;
        b=RwKXJTOBfGgQfCwgdkvKIJ7VyRTEmWXkBroWIJcmTi4j5bSN4ziHfWNDFSOqV2zgge
         f0DOQ2Gsma0CoNC61nL7w40huQB+jXeQPMrNJI60t/K1WMQvoGdLvg3LtZlXDIICZOur
         AjT0dPdR6vCMuDVxdiSud1xgIMpJoW2duue8Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718384855; x=1718989655;
        h=content-transfer-encoding:cc:autocrypt:subject:from:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=xWI1hpd63xiNM8gEgVvcx+MNCHuqUWCo1glSOM5ZNN4=;
        b=B9o46qsIC++lQZOIPNTmIVJoQpd8YbG1Dd1Cof23wx43zhU0nVAusS9WUGO0lJdqMQ
         AExnJ70W9Nir6jSasgrlsDqFAIJ3P5d1fZzTRCouKfIRZx7GtT7cn1C5aGEqOq7m9vNX
         VK2qVc6rcEG18iYBsbDZc91cKRzQWSP1qgeUwDalHDFGcmKOHzOXBmwPD/oFRwBnOKiy
         KT7f/sMxneMRrM8xObC5LjOoOAMXWEBdTEv9T4UeuKnD59o+fwISvQUM0SZ2DIYj7ydd
         DyU8fqz9R+wprVCKj/wOJXxANHw8aCfFvCEJorEthQO8e2hDjitgamzCTe3K0hP6BzD8
         u5Gw==
X-Gm-Message-State: AOJu0YxvVGAVcATh3iOiTfHDv1uxC08Gi+ThufanUWpuMRy1jXW+QdgS
	rPkRV7IyMFiEi9kc6TOLDG6FevQ5D6FhR5W4LejLdUYVYF2QLhihotzIBtMuK0LxWE+GtIe05TE
	ywXs=
X-Google-Smtp-Source: AGHT+IF4lCmjQVG0V/v6a78HxywKIWXL9U0EfU4lnYPISoLvyREwf9bpwOZPBTgPznrRyUVbKfOscQ==
X-Received: by 2002:a17:906:b74c:b0:a68:e161:b765 with SMTP id a640c23a62f3a-a6f60d295eamr206911366b.29.1718384855513;
        Fri, 14 Jun 2024 10:07:35 -0700 (PDT)
Message-ID: <46abec6c-ebe9-4426-865e-5513107949be@citrix.com>
Date: Fri, 14 Jun 2024 18:07:33 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-GB
To: xen-devel <xen-devel@lists.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: for_each_set_bit() clean-up (API RFC)
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

More fallout from looking at the code generation...

for_each_set_bit() forces its bitmap parameter out into memory.  For an
arbitrarily sized bitmap, this is fine - and likely preferable, as it's
in memory to begin with.

However, more than half the current users of for_each_set_bit() are
operating over a single int/long, and this too is spilled to the
stack.  Worse, x86 seems to be the only architecture which tries (but
not very well) to optimise find_{first,next}_bit() for GPR-sized
quantities, meaning that for_each_set_bit() hides 2 backing function calls.

The ARM (v)GIC code in particular suffers horribly because of this.

We also have several interesting opencoded forms:
* evtchn_check_pollers() is a (preprocessor identical) opencoding.
* hvm_emulate_writeback() is equivalent.
* for_each_vp() exists just to hardcode a constant and swap the other
two parameters.

and several other forms which I think could be expressed more cleanly
as for_each_set_bit().

We also have the while()/ffs() forms, which are "just" for_each_set_bit(),
and some even manage not to spill their main variable to memory.


I want to get to a position where there is one clear API to use, one
which the compiler will handle nicely.  Xen's code generation will
definitely improve as a consequence.


Sadly, transforming the ideal while()/ffs() form into a for() loop is a
bit tricky.  This works:

for ( unsigned int v = (val), (bit);
      v;
      v &= v - 1 )
if ( 1 )
{
    (bit) = ffs(v) - 1;
    goto body;
}
else
    body:

which is a C metaprogramming trick borrowed from PuTTY to make:

for_each_BLAH ( bit, val )
{
    // nice loop body
}

work, while having the ffs() calculated logically within the loop body.

The first issue I expect people to have with the above is the raw 'body'
label, although with a macro that can be fixed using body_ ## __COUNTER__.

A full example is https://godbolt.org/z/oMGfah696 although a real
example in Xen is going to have to be variadic for at least ffs() and
ffsl().


Now, from an API point of view, it would be lovely if we could make a
single for_each_set_bit() which covers both cases, and while I can
distinguish the two forms by whether there are 2 or 3 args, I expect
MISRA is going to have a fit at that.  There's also a difference in the
scope of 'bit', and in whether modifications to 'val' in the loop body
take effect on the loop condition (they don't, because a copy is taken).

So I expect everyone is going to want a new API to use here.  But what
to call it?

More than half of the callers in Xen really want the GPR form, so we
could introduce a new bitmap_for_each_set_bit(), move all the callers
over, then introduce a "new" for_each_set_bit() which is only of the GPR
form.

Or does anyone want to suggest an alternative name?

Thoughts?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 17:55:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 17:55:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740851.1147949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIB96-00075x-Fm; Fri, 14 Jun 2024 17:55:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740851.1147949; Fri, 14 Jun 2024 17:55:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIB96-00075q-DF; Fri, 14 Jun 2024 17:55:28 +0000
Received: by outflank-mailman (input) for mailman id 740851;
 Fri, 14 Jun 2024 17:55:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuT0=NQ=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sIB95-00075k-L6
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 17:55:27 +0000
Received: from fhigh7-smtp.messagingengine.com
 (fhigh7-smtp.messagingengine.com [103.168.172.158])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 462b4e70-2a77-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 19:55:24 +0200 (CEST)
Received: from compute2.internal (compute2.nyi.internal [10.202.2.46])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 405F9114010A;
 Fri, 14 Jun 2024 13:55:23 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute2.internal (MEProxy); Fri, 14 Jun 2024 13:55:23 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 14 Jun 2024 13:55:22 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 462b4e70-2a77-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718387723;
	 x=1718474123; bh=9L2NVQxYD5hYWwvVR+Bo+TDhcY3MpXNKTx8JCGqHKlA=; b=
	eVbLuIuT/ZecNUm89mQxxpsbkWtZCH7z+la/zQ8tCsACE7/Wsbumt1udJqFyqM0k
	OkV9hmoytnDEt/cCYL60XJac9kAF2aSl7zYXc3ZCqP304fzaYtY6cd1MT7zFKPRR
	f+p4JeIRAjgk3hY0cWP2UYysZUSHVBRkvK8+cNhccNeZSWf0bNyRo9WLJZvFuSrA
	q34LKzHZB2HgX1KvQcF/AEYk4FLjD0wFKwXp7ZvKydcNGDyYSIa0bV8cATopUMKs
	cZv9pwBx3IeQ084sPHuWI+wZfNlFCwVlraF4f7C5C2ywISTDJu3xyO6vSwHeIiiP
	RyF8ym8+BMDGHHeAcY807w==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718387723; x=1718474123; bh=9L2NVQxYD5hYWwvVR+Bo+TDhcY3M
	pXNKTx8JCGqHKlA=; b=S1IGOjHzzWZ+o5FrXoJ5v1wVeHZdGyNHoISFhlSKQCGV
	Xchgk5vc3PS5iD03rMdCPJdo7/lcVmylEWMSB8B4P1f8nwc1iqG2CCxjmD+bfnPr
	PPxaaGjS5pPSqD1vi2UR4WEPhM9Ihw+m7gLZCE2T4qT7XKFzjzMHmj3hNh6vMRyP
	9uG5BO2+bR5rgsLrP0CesLhN8Yw4gPEOX0OXssgrWQPQLh4W0YraJsB45pp44Sv+
	U7JXjDlBB3enLKPN6dch55FLj5jbyXlWsY3GGjoz92/4dzAIRK2wCU2UN69wwnQQ
	oEGfEGNvH0/jokU28c1C8uFa8UMdq3CBPl6HnALrdg==
X-ME-Sender: <xms:CoRsZo5MkbmpcaDn-A3DH9pjx7gup2aF5C4yeQ7rdAdOpGiJzA1Wpw>
    <xme:CoRsZp5Ig4wcM4QBQdxtZBBP2QkJe4adr_-sxTnK-Nh9-3rcO_3ALwai4lrRpyXZz
    o1VY8T2RB68tl4>
X-ME-Received: <xmr:CoRsZnc3LLTtFq3b-J6LzZ67Ebe0-9U2DtFx59xGZ6rHkIBtFtB54JKJS6cp9x13tsPlNqeTVAjjF5I6D64jrNvW5JwVzewy62RIljNOBkCXt_rI>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeduledguddukecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpedvjeetgeekhfetudfhgfetffeg
    fffguddvgffhffeifeeikeektdehgeetheffleenucevlhhushhtvghrufhiiigvpedtne
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:CoRsZtLzPwliKxfjQOoeX3pqvEHQ74p9SDVtw6cSGOBt0zB5LF1vRA>
    <xmx:CoRsZsJAJYPWP2ZF0mNNRk-xNhU-3y9RN2tN7aL16z_-pFCV8ccrHQ>
    <xmx:CoRsZux6yb7qKW-lXW0_lhOj7k84bUhckw0YcFayN0sxY1_nrjHd-g>
    <xmx:CoRsZgLwZqzImE2teHViE7JCGPhavd4Oai8DCemU8gmthK8SKLwHqg>
    <xmx:C4RsZk_xQtxZecSN_36VA5hgylEufkRpXI20-awZVlPK7F7ZZ0sMW9Qc>
Feedback-ID: iac594737:Fastmail
Date: Fri, 14 Jun 2024 13:55:19 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZmyECbWrPxU-rUVv@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="Xsnp8/rOO0bVMLnv"
Content-Disposition: inline
In-Reply-To: <ZmvvlF0gpqFB7UC9@macbook>


--Xsnp8/rOO0bVMLnv
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 14 Jun 2024 13:55:19 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen

On Fri, Jun 14, 2024 at 09:21:56AM +0200, Roger Pau Monné wrote:
> On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> > On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > > GPU acceleration requires that pageable host memory be able to be
> > > mapped into a guest.
> >
> > I'm sure it was explained in the session, which sadly I couldn't attend.
> > I've been asking Ray and Xenia the same before, but I'm afraid it still
> > hasn't become clear to me why this is a _requirement_. After all that's
> > against what we're doing elsewhere (i.e. so far it has always been
> > guest memory that's mapped in the host). I can appreciate that it might
> > be more difficult to implement, but avoiding a violation of this fundamental
> > (kind of) rule might be worth the price (and would avoid other
> > complexities, of which there may be lurking more than what you enumerate
> > below).
>
> My limited understanding (please someone correct me if wrong) is that
> the GPU buffer (or context I think it's also called?) is always
> allocated from dom0 (the owner of the GPU).

A GPU context is a GPU address space.  It's the GPU equivalent of a CPU
process.  I don't believe that the same context can be used by more than
one userspace process (though I could be wrong), but the same userspace
process can create and use as many contexts as it wants.

> The underlying memory
> addresses of such a buffer need to be mapped into the guest.  The
> buffer backing memory might be GPU MMIO from the device BAR(s) or
> system RAM, and such a buffer can be paged by the dom0 kernel at any
> time (iow: changing the backing memory from MMIO to RAM or vice
> versa).  Also, the buffer must be contiguous in physical address
> space.
>
> I'm not sure it's possible to ensure that when using system RAM such
> memory comes from the guest rather than the host, as it would likely
> require some very intrusive hooks into the kernel logic, and
> negotiation with the guest to allocate the requested amount of
> memory and hand it over to dom0.  If the maximum size of the buffer is
> known in advance maybe dom0 can negotiate with the guest to allocate
> such a region and grant it access to dom0 at driver attachment time.

I don't think there is a useful maximum size known.  There may be a
limit, but it would be around 4GiB or more, which is far too high to
reserve physical memory for up front.

> One aspect where I'm still lacking clarity is how the process of
> allocating and assigning a GPU buffer to a guest is performed (I
> think this is the key to how GPU VirtIO native contexts work?).

The buffer is allocated by the GPU driver in response to an ioctl() made
by the userspace server process.  If the buffer needs to be accessed by
the guest CPU (not all do), it is mapped into part of an emulated PCI
BAR for access by the guest.  This mailing list thread is about making
that possible.

> Another question I have: are guests expected to have a single GPU
> buffer, or can they have multiple GPU buffers simultaneously
> allocated?

I believe there is only one emulated BAR, but this is very large (GiBs)
and sparsely populated.  There can be many GPU buffers mapped into the
BAR.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--Xsnp8/rOO0bVMLnv
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZshAgACgkQsoi1X/+c
IsHPYA/+MEmp9PkinbHFL3NMuwl19VvxMGbayIsrz/jYo350GAAXbTkH3KfZgBue
H2HeB7uTrmsB/hd0Y1Vzf2DEV9hPmUsRBOgT6ymxahpUIDvkkx+ewPLxDxy3rLjr
qchWMDcNvFHFPJf45b9CAdmYsMetmpPQiZOJ6FEPJP0/xdB9Ntu6UUEQ2IRZe2z0
sKjSB+sfHXBIaQv9sFcv0r2K/jvfFPmp8AbTtBnmfPCZ9BrTyL82XTHLwNP9nqk3
4LfQwg2VrXLhf6+E8hSRpbyRAMakd1X3UTkYatIGcjZc6Ji5g29Xs5qJhwZvy3yS
UzxsDfFXaduYTWFVkfybLg+X/0GiUBuRcKeAnVgxO7+HIyUUgkKqYSaIYyH2yWNG
yz/5Cmzc5FQ6pznPk3TAyFYUcZyl2Jxmx4c79B0HqWA21e91mMVX7O4uxHReXP+S
BTYVSgUAgcCBmx46gDlZUWlqTIqQ0tY5baZJRqDbbuBcvCKlehZUhfVOCtPdxIlP
iRFPeM1NFb5Jun6G5Pk/knfuw8VJoPIbiJxPFCxlUTqBL4aepqVfV3rsZlVM/RIv
7Eut6UlkqwUKtAXtRD6q6XpkPg2Rx2iPoLfmT8gDkIWUKLeyXR1DDI67OEzXgZfy
h2FGQkrzaZB5f1scNTzPIdKvOwgQiZoUmNnSARX11j9WuGGKwkM=
=3lLp
-----END PGP SIGNATURE-----

--Xsnp8/rOO0bVMLnv--


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 18:02:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 18:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740856.1147960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIBFR-0000l9-50; Fri, 14 Jun 2024 18:02:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740856.1147960; Fri, 14 Jun 2024 18:02:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIBFR-0000l2-2N; Fri, 14 Jun 2024 18:02:01 +0000
Received: by outflank-mailman (input) for mailman id 740856;
 Fri, 14 Jun 2024 18:02:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gphc=NQ=suse.com=dfaggioli@srs-se1.protection.inumbo.net>)
 id 1sIBFQ-0000kw-A3
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 18:02:00 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de
 [2a07:de40:b251:101:10:150:64:2])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 314b1943-2a78-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 20:01:58 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E167220706;
 Fri, 14 Jun 2024 18:01:57 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 759DB13AB1;
 Fri, 14 Jun 2024 18:01:57 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id qljFGZWFbGZ9DgAAD6G6ig
 (envelope-from <dfaggioli@suse.com>); Fri, 14 Jun 2024 18:01:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 314b1943-2a78-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1718388118; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=f5yQKxc2z8485EXDsp/1/Qd8BMEe9RTqKKHEO0vlaUE=;
	b=lbJ9/vxAaaAdrGI4qnO/Ae5dWEadX51mY6kIqEeUtPjON1/D7D5s7pAyg3SpqbWir5wCke
	q6eNBzZDuAIsJecTzFur6LawADx+0gh41PwsKsiEqdWHrahjNRLXGz2RxZQR1xpGNEQpwq
	ZiiLldozE5JUKoNDImLKDFa/VQpug2o=
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1718388117; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=f5yQKxc2z8485EXDsp/1/Qd8BMEe9RTqKKHEO0vlaUE=;
	b=UJp9WRKNtN3jaQwzE/MOsAna9qJifnIu2ZJGj2uzzfX5G/W+X75GJ1ucFmJSZQ3cIA3fY6
	R1h1WObTfWRSpsoGq3E77Df2u3PWaR1jlwUj9xHleFDPpfx2cxZRahf3hOtVD0QQpNn1cp
	Y94fmeEa6DDuCf3mGGfzKZ5wCeWio3w=
Message-ID: <4711906a11110f0933162e81008d3eb33011e92e.camel@suse.com>
Subject: Re: [PATCH] MAINTAINERS: add me as scheduer maintainer
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall
 <julien@xen.org>,  Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Juergen Gross <jgross@suse.com>
Date: Fri, 14 Jun 2024 20:01:56 +0200
In-Reply-To: <1fe65c97-6aea-452d-99c3-d9da018b33f7@suse.com>
References: <20240606054745.23555-1-jgross@suse.com>
	 <20240606054745.23555-2-jgross@suse.com>
	 <1fe65c97-6aea-452d-99c3-d9da018b33f7@suse.com>
Autocrypt: addr=dfaggioli@suse.com; prefer-encrypt=mutual;
 keydata=mQINBFcqIZ4BEADwW0E1y+J8FG0kGAA0y5UqenJaGp9B6gpm6aAAVkKYBDreeasOb/LQ7
 OqYHbJpkEjDsEwS9K1/iCTtjSO02Klk0vW4T1rlRbjgtyCevHUwINQhYnwREWOkogeTAcrT+2tq/x
 Sxl/sR73vgLtMSqYXsIY7Pqxbi9CF7irfA8A2gGvToXrQw7C6jlFJa+l1gGYclA9bc7TSJzIzTui9
 z4oA6R8Ygrl8ugf69vd9hxGavqvz4vRARAxFgucPs00Aj0WnUTzRuUAF7VHp4VZ56Z0I2gv0M2YVJ
 YjTw+5YbgjzL92T8xPnyZ8q+DjiCDP+v2h//j3BOHtOWnkBmDFpYjix+JuV5J/Ig9icyMo67WrkTG
 7sK4wI28QLQMdoaZrYVA1mkYTWBCpWNbVAjMCS5vPKQVGh32OGsZ6qSMuGiynwDu5ksIDX16kx74a
 gtF3stSM8BVOYJWaGbmMiMogd0lswYQU6Wx8Z5osMvbFLc+CQnavJqhg/UnqDvZ6TyWir5NJ3Wo+Y
 Qh22bW0zchpWeLrXelH5UxNGK/dM26/7gKzKe8T9SUIxaxpawHcpPBB35W4Xwg94bcSQeS5KN3Swb
 lj+C2FkPu40KZ2gV+STkmxyWbUamQPf0Q5M8ih1cSopOwvsG14i5V8PqFH/JBbJUlrCOD6ZDdBStI
 eTLnuwrxYMjGQARAQABtDVEYXJpbyBGYWdnaW9saSAoY29ycG9yYXRlIGVtYWlsKSA8ZGZhZ2dpb2
 xpQHN1c2UuY29tPokCUgQTAQIAPAIbAwYLCQgHAwIGFQgCCQoLBBYCAwECHgECF4AWIQRLmyw6PdW
 GvRY+c4sWQniJpbhz7gUCXHiV1gIZAQAKCRAWQniJpbhz7uX6D/oCWVhNZe7PQfLxbGIPVaf2yMQM
 1zlUA62Xegv7dA1me5NbEcbGwJ0NvwcM6DLIxnVTbSMMA5M04flSFmrvjMVO6E8a9y9N+o27WS2sn
 hZUufqj9LUf9KLWS/aRlnyWBGeg0ut9LUfLx874CEuHwJM/rjSzXTNKap2YD8zd9S1JTDZ8gUismo
 d+TTh70r6xzibgZklcupECDgp2iwRUAqoEfj3rTqDFkVyySFH1OiP4NYx5TcivwkUML3UKedzdz3Z
 eANbdV2XpNGGWMoccRlJBgIhHJURm1TNPkXSTzEHzZkNE740ygQhMUu9zM8RoyQ09sR7a/z7EESPb
 4xitPqnbYd0EoLnZOquW2qjnM1xrULNbMATW3bYmWGtpjWpl6VY2caVy9DCgEimvlQLTkj0cAF6Cz
 /ZNj7xvN26ZdOch+ji9dDoPJBzjUfNZwEYsCc4l3wXmBnLZmF8kUZEtEOEECkP7nbNc2r+HUN1Zzs
 +DOmaWjniR7b65qShIDdvI3T/jd1sG59snXGUcIDu2MuARHMY0AiHaZHAAOnUu8317oPgVHepVkff
 i9wLkZtcv++aeU/OGZkgyCcX49wCLmUdgK2z2GJnT4QIKHKzpeVl3vx4bH0uZI6Zvv7qtZbZ+3Bqd
 5c/H1C9LbK/zbJAvu+yOcLQ00VW+SMPVaE1CHRIperQ5RGFyaW8gRmFnZ2lvbGkgKHBlcnNvbmFsI
 GVtYWlsKSA8ZGFyaW8uZmFnZ2lvbGlAbGludXguaXQ+iQJOBBMBCAA4AhsDBQsJCAcCBhUKCQgLAg
 QWAgMBAh4BAheAFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAlx4ldUACgkQFkJ4iaW4c+6Z3g/+N3/
 dMZAjEEnBqhHr28Dg5OoQGxCt209zj50gTGIw09J0Dzg+tPILAC5IZzjGlEuQI4015N3bJpz56N2g
 IjT1B0Rxh+HMd+4wKz/TZ+rUHgwhIfBei9jDzlqD4Z+hSnIpPN3mqQ7as4RdBmC0WhFKY/BB4V/ED
 yZfXzCJAKvysQFIsf3i0DJo1CC8hZK588dyAbB62Qh6ookOhfdTmEapcSHFjfd0osJiHo4+3kJP53
 HxNPvIWyxrbznrfVg6cHJOKKx5yowWYe4cEJcCLYCAy9UjGmTDEl5Rwq8J9kihQpGCtA2ivEcmIpj
 59JeQ5sv1IRcwamSxgylWvJR+Om3nz2Ma3334GdaIaeyb/dR9lyxB2fiBB8V6Avo+oJQniWqXxyJ0
 HhZkRBOTX7LtSzQFOnYKXz2mWRkZpclmztX3BqctB0Z/K1cm2KIcm+MBUqjLZeprfhFS9f3WCYOOS
 SLRvYRVSwXw8ImJYHqWbePQYD8LeAJ7Hs0kqhd/CtUDyUrwtwzzKRs/8wVSRCLHLTZiSZua8N1Tqo
 5M4t6wSeENALB2kFLEmlgApTghCj51kWpTzysL9RgREoKSgdsqwfzaQlZH490H1WIu1zedsdaigeJ
 7G6UIVWjTOwK59s1pEyrtz/gZWJUOJh77MspoF/mUjSXm6W9YAQu0pahk4KdbZKW0M0RhcmlvIEZh
 Z2dpb2xpIChwZXJzb25hbCBlbWFpbCkgPHJhaXN0bGluQGxpbnV4Lml0PokCTgQTAQIAOAIbAwIeA
 QIXgAULCQgHAwUVCgkICwUWAgMBABYhBEubLDo91Ya9Fj5zixZCeImluHPuBQJceJXFAAoJEBZCeI
 mluHPuiZUQAN4FY5DlI11sTYcdG1VyLYgE76mek5ItP0ZblcSF0INr6O9jn3zWEgyr6pFzSIXu81W
 W2o6UJEeb5wJlbte00Oxlgwshg3q1/Zd5MshtAjGGcCvnnffrcyrbyi6cuj/KwvRQFGsaT3getrf5
 LqIuC/HJgd+4k+S3Y2qOjq6qPZLG3I58F/K+SjFFeoX2CJvZEKPuMf51TvrBWQMK7qAf0nCG0noyt
 Zpbm+lCcHdJmoQZozn0e+4ENLduDe8c4Fsi2Fgjvuc250mC8avBidX6M+ONJrJTW2iSiqaLrp7FzS
 5f6SzRS7hKw9USmG7p30PFP+u2eBXfcriaIttlXgRcfQWZhd6c432wcssUlW1ykiqHBeElK0W3XD5
 5RahdJwLnX2ycToXAYp1afOAk8l2WKP1euXxNAN+toXpFRZpJDoebFHVuBKzff5F9yaF6cN65FZrU
 UZeT/6UlQj7aEsRorozZpzJN2f/fa97PSR99+pOAmoAIs52tME4QTNExHCZJFvQTI2GxrFQV8qTfo
 7ZswjXDui84NbUhlYnGH3Qk/iMKWfCGt2GyGpWQFV14u2sstHIKIRIj7EmL2tEoQGaySvN9HAnNfr
 W1Sd/zkzr6Wy+sYTOABgkxOtwb/aVfVVnl1PhMiQfTXTvsX9m6e4ZXTxh+pnJgyx58PG1haeGDTGJ
 etDJEYXJpbyBGYWdnaW9saSAoZ29vZ2xlIElEKSA8cmFpc3RsaW4uZGZAZ21haWwuY29tPokCTgQT
 AQIAOAIbAwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgBYhBEubLDo91Ya9Fj5zixZCeImluHPuBQJce
 JW4AAoJEBZCeImluHPuAVUQANDzlRpfMMUtVvVQLtYIm06rJQhbjwd8UE1Yq5pwxfVUYHm5JmvDI9
 ugOl9gAo6O29Cfrmc7Om8x3ewBAjQymNCHMq+MYPNqyVZVfSMH9CEg8/btGhm4IdvjXkqTtX2uZLq
 jJ5tHGxYuUbeL7uQBIFgxEpvXuHlg6mixcpyah+pYmmt0LnCCyj2f4iTZXuGXLKvayskCO6+2s++j
 F5f2HbBGe0ZkwjNbbCvxbhnX9YdYVvWEMRxBVxEsN1+n+MlvNkWp/sfBddsS8v1FpoLg2uUvJMhxi
 RoqxZCHYK1q/Obn5dWfN5inq6GUp205MESiV8NbwFYxI5H+r3OqWhb2OcQDiBlepJ3PJzKrZEr+6M
 YwWu36/XGqFFz7rxD48+QdlUFi8CpPCw2hMAzap3e2QwmkPlSQqtANKXs89M2Gc88dkwAi+L/DX30
 aFiMx6KcJkD6Up15N2x6FZh9VT45C9xPa4/IFcNpswn9Tngyi7wR7bvY3/daeuSw6pzUARZ9IC6rR
 xVqf92gykLEfcIWGpYlKDmnKKMTSgGBycNwk6nzhfa3VLAtxrNfG6bvzwXTQE9UBOC+8Ogu+BUvbH
 lA9+B1pkThQLyo4biSYbvcUNsOqYtugWW3gy2ogAHHcRXiFxxz5hKdkVwCeQteIPaTeMiZckuktpC
 8ioAT//C1pmVpvtDxEYXJpbyBGYWdnaW9saSAoY29ycG9yYXRlIGVtYWlsKSA8ZGFyaW8uZmFnZ2l
 vbGlAY2l0cml4LmNvbT6JAjYEMAECACAFAlnqAncZHSBObyBsb25nZXIgd29ya2luZyB0aGVyZQAK
 CRAWQniJpbhz7rEeD/4s3ewT5VjgFTJGA3e3xRkh4Qz3Ri8mDZeyrwWw4dr5vZnAZMAG+NTaQMYLt
 cKg5DUsRBNGHUL5ZH70sBPYFMG2Fg4eddRVewC9cJ6sJBh97u8RXueBhu8GDinMkJZitnrCHR8mEK
 g8szWHIqM/ohsPp2FbUdsqqky1XGYNDdKHIMMQpEYVgBKWKFMDq08nzFrJrGeRgg1Gdsa9JoE9/rM
 pcwwnoy5z0Bvij0u8PoSp+aBJAgGWJPu+abJghc2V3sRR/vyZyPPNZKJyirPqXy2ZQVYrMM/jFsJs
 I2POz8uEq5v4lf5MnJZNas785F4klpzi+6LaIBVtNm6l8ANU8Ad+RKsgoMnAx46ClYYCJmC2luzIo
 4hxD5fDyCQOGSxp6S1ONbbxg5N/XsD4yuJ+ORzO/6BylBArRo7c2qHACD9qvu1VXIQn9/IbxznGOl
 CRv4xAD2mGzom/umsTpTWus4pjo3G1/f/rkK4PYI8Kxsfi+WPD986deQLScMQM5hYAb26apvjv9w0
 XYLQWY6cQKvquMVTdb5bIxddgr35PLdUd3DZUtOAmm1pdveD2EyerECOLp03MZXRO4J818to/tCCd
 XA3l2Osx6i9443aTew/QlG1qp7kWk24ZP1pgMSSuEaFmdcmeLdk0VKVevW3g5GzlS+FTdhuMz8WgV
 fkAJ0OEQQ==
Organization: SUSE Software Solutions Italy S.r.l.
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-2g5c8jxA3WMaDPexoAxR"
User-Agent: Evolution 3.52.2 (by Flathub.org) 
MIME-Version: 1.0
X-Spam-Score: -6.39
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-6.39 / 50.00];
	BAYES_HAM(-2.99)[99.97%];
	SIGNED_PGP(-2.00)[];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	MIME_GOOD(-0.20)[multipart/signed,text/plain];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	HAS_ORG_HEADER(0.00)[];
	MIME_TRACE(0.00)[0:+,1:+,2:~];
	RCPT_COUNT_SEVEN(0.00)[7];
	ARC_NA(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	TO_DN_SOME(0.00)[];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	RCVD_COUNT_TWO(0.00)[2];
	DKIM_SIGNED(0.00)[suse.com:s=susede1];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:email,suse.com:url,imap1.dmz-prg2.suse.org:helo,about.me:url]


--=-2g5c8jxA3WMaDPexoAxR
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2024-06-10 at 13:35 +0200, Jan Beulich wrote:
> George, Dario,
>
> On 06.06.2024 07:47, Juergen Gross wrote:
> > I've been active in the scheduling code for many years now. Add
> > me as a maintainer.
> >
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> > ---
> >  MAINTAINERS | 1 +
> >  1 file changed, 1 insertion(+)
> >
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 6ba7d2765f..cc40c0be9d 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -490,6 +490,7 @@ F:	xen/common/sched/rt.c
> >  SCHEDULING
> >  M:	George Dunlap <george.dunlap@citrix.com>
> >  M:	Dario Faggioli <dfaggioli@suse.com>
> > +M:	Juergen Gross <jgross@suse.com>
> >  S:	Supported
> >  F:	xen/common/sched/
>
> no matter what get-maintainers.pl may say for changes here, I think
> it's
> largely on the two of you to approve this addition.
>
Well, I for sure could not approve more, and I'm super happy to
provide my:

Acked-by: Dario Faggioli <dfaggioli@suse.com>

And thanks a lot again, Juergen, for offering help!

Regards,
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-2g5c8jxA3WMaDPexoAxR
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmZshZQACgkQFkJ4iaW4
c+7ilhAA2t987hUoNNUO0eCXiYcLIPkKgPRixQAo9dAwgiReBQCcNB7NUS845hA8
XVfLkizBPbqCuNicSPH1Kla0e6XMAxk383BfWPPyFCgZKwEGc5f+A9hqcR5HqOVh
xhkOxolFgiyAA5Ffg/XReIhAR2KbIar9G/Nq4xIjwO85b7/1CQyxBDqS+SlAeD9r
y9fxyer9Pveq728uiSQkRwr7EnrFF+GFPeVVaHmynZN7UKx+VC9i17Daymu8rywh
wHQgYbdymD5Dot2h/KO3NAH49lRe7LGdhSktCpiSJpDWi3UD1RySVJQieZAZ4nI9
NJH2sVMySRrzXcBMq4hp1Ul3VGj50zFNBL+UDH6Rij5lNcyj2/wzSenBlMxLYYR0
KKynPOBxJavpz2BW5BlTn2j51F3O8oCTUZuzoC549f/7RGMvTeEtel7KGA9hlK0j
O3eumCpEF6ucnT+UkkcHASNenVF1bawZAfqaRZN/EWLdW7zTakUTqFF0+dN+NIAA
tVu0PXOPSybsho9kZ9tzDbhJVh6mQKDa+RjoQ+TtxCUyTpJxDAkd3jjRGElBaseL
dfnDocW/8cTPLMC7LS3huV3CH0qH8vR58xwcu/aH+ydLt8ysshVIfCB76Ilnu0EQ
0IqvoclqJfqlCvkFULqtJGxPV9wDbyKwqai8js7CsVo5DMMf+VA=
=6D/y
-----END PGP SIGNATURE-----

--=-2g5c8jxA3WMaDPexoAxR--


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 18:26:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 18:26:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740864.1147970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIBd9-0004di-4x; Fri, 14 Jun 2024 18:26:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740864.1147970; Fri, 14 Jun 2024 18:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIBd9-0004db-1u; Fri, 14 Jun 2024 18:26:31 +0000
Received: by outflank-mailman (input) for mailman id 740864;
 Fri, 14 Jun 2024 18:26:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J/js=NQ=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sIBd7-0004dV-98
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 18:26:29 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9cba849c-2a7b-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 20:26:27 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id
 a640c23a62f3a-a6f3efa1cc7so608747966b.0
 for <xen-devel@lists.xenproject.org>; Fri, 14 Jun 2024 11:26:27 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f57ed951dsm203075166b.196.2024.06.14.11.26.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 Jun 2024 11:26:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cba849c-2a7b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718389587; x=1718994387; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=/vhdDHCYDXHupSWeUUjAzlXBqri4hWTvUpp5oPp4bJk=;
        b=GsQ9ZL6lFXSJuT/rB3uFyuMmQYfnfeLAJ8746kji5eoLq0OuAmwoQi2Fl3B1NZo4Ed
         Qf4jIhPC8o0fMzVl3uocB7WW4CJbbw0eCtUd3xsdZfkvyqVDESJNPZs3EKbHSHRXYtyZ
         PRwDGiotfnNatzDSjW1LcwWDh5pDTbjoglECY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718389587; x=1718994387;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/vhdDHCYDXHupSWeUUjAzlXBqri4hWTvUpp5oPp4bJk=;
        b=DYoWl4jDiUmaDaDDgAkuRvtEQPodoonSyqXYKJS0ZSxdkYTV67dyePgO6pWXI7LOSm
         VA8kT+MPiH5MJOZZHUGhXX+lysDXdhL5HZtYoLHdOjUiVeh4sJXyqagNm2E0lURFsdAR
         ZfFbl1NmnhG70oz4Ns4UfmE9YRFHBJ7gdwq0NNAvMgMl2MWx6rJeh5HK7m4lqlwOXfcB
         79Z7xAKXPFzZ4yGb5KsyF254/gK22R8dwtOlcjsuCnxhM84R71CWlMYn9nXp33Go2VKp
         EfboWCXjTUCFVdN4rfaBRcJn9eAgkvkLjZ0tYwO6nRRtQaaII9C9ngm6TFiIjRfXF7ID
         kzQQ==
X-Forwarded-Encrypted: i=1; AJvYcCXi2vV3i6rTGaC0OASzTt1nLV4JCYjuZNXQJ1bzTtQBC8r4CekyUvs1trX/oXq5tDwcV9Cccea72ZvrjddfQdouqCdAqLzdY5LjnXmmt2A=
X-Gm-Message-State: AOJu0YyS1XLdR2TKDrtCDCcqruqaGNzze+dDMQkRLjIIGyxwg/1Ca+Y3
	CXnk6PTXR43PE7KoZ/kF4fLwckJx7yfLiQhpnwvMLVEzeCzrkhOpumSo8WoBzEg=
X-Google-Smtp-Source: AGHT+IGjw7iJhFWbp2AFZ8V+x8TxhiqugyqAYidc6yx3dW7aHYKqtSgj0VtseWAlQZYwFStJxtWd+g==
X-Received: by 2002:a17:906:1d03:b0:a6f:2380:3a32 with SMTP id a640c23a62f3a-a6f52415714mr458483166b.21.1718389586272;
        Fri, 14 Jun 2024 11:26:26 -0700 (PDT)
Message-ID: <6b4ed926-8934-4660-98c7-1856d0c90878@citrix.com>
Date: Fri, 14 Jun 2024 19:26:25 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 3/7] x86/boot: Collect the Raw CPU Policy earlier on boot
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-4-andrew.cooper3@citrix.com>
 <8245f0ce-2964-4ecb-a31d-3e182a6d3e0b@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <8245f0ce-2964-4ecb-a31d-3e182a6d3e0b@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 23/05/2024 4:44 pm, Jan Beulich wrote:
> On 23.05.2024 13:16, Andrew Cooper wrote:
>> This is a tangle, but it's a small step in the right direction.
>>
>> xstate_init() is shortly going to want data from the Raw policy.
>> calculate_raw_cpu_policy() is sufficiently separate from the other policies to
>> be safe to do.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Would you mind taking a look at
> https://lists.xen.org/archives/html/xen-devel/2021-04/msg01335.html
> to make clear (to me at least) to what extent we can perhaps find common
> ground on what wants doing when? (Of course the local version I have has been
> constantly re-based, so some of the function names would have changed from
> what's visible there.)

That's been covered several times, at least in part.

I want to eventually move the host policy too, but I'm not willing to
compound the mess we've currently got just to do it earlier.  It's just
creating even more obstacles to doing it nicely.

Nothing in this series needs to use (or indeed should use) the host policy.

The same is true of your AMX series.  You're (correctly) breaking the
uniform allocation size and (when policy selection is ordered WRT vCPU
creation, as discussed) it becomes solely dependent on the guest policy.

xsave.c really has no legitimate use for the host policy once the
uniform allocation size aspect has gone away.


>> --- a/xen/arch/x86/cpu-policy.c
>> +++ b/xen/arch/x86/cpu-policy.c
>> @@ -845,7 +845,6 @@ static void __init calculate_hvm_def_policy(void)
>>  
>>  void __init init_guest_cpu_policies(void)
>>  {
>> -    calculate_raw_cpu_policy();
>>      calculate_host_policy();
>>  
>>      if ( IS_ENABLED(CONFIG_PV) )
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -1888,7 +1888,9 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
>>  
>>      tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
>>  
>> -    identify_cpu(&boot_cpu_data);
>> +    calculate_raw_cpu_policy(); /* Needs microcode.  No other dependencies. */
>> +
>> +    identify_cpu(&boot_cpu_data); /* Needs microcode and raw policy. */
> You don't introduce any dependency on raw policy here, and there cannot possibly
> have been such a dependency before (unless there was a bug somewhere). Therefore
> I consider this latter comment misleading at this point.

It's made true by the next patch, and I'm not inclined to split the
comment across two patches which are going to be committed together as a
unit.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 18:26:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 18:26:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740866.1147980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIBdZ-00054W-EE; Fri, 14 Jun 2024 18:26:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740866.1147980; Fri, 14 Jun 2024 18:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIBdZ-00054P-BQ; Fri, 14 Jun 2024 18:26:57 +0000
Received: by outflank-mailman (input) for mailman id 740866;
 Fri, 14 Jun 2024 18:26:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIBdY-000542-8U; Fri, 14 Jun 2024 18:26:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIBdY-0003gU-0b; Fri, 14 Jun 2024 18:26:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIBdX-0001SD-Mu; Fri, 14 Jun 2024 18:26:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIBdX-0003Ap-MT; Fri, 14 Jun 2024 18:26:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iei8TS+2E9hithlY1dKeOFvZoSX+jbfoPaksHdCtf6E=; b=rogoBjFFbykSyfgyFaHWDDlh6V
	x72hqLGxDcHwn2pvI1B7M3fD6auphni0jfjh6JLcybD0jf/tk/Inv9mfyQ+yimkpiFvtzq7Wfn8fo
	bJng9BbkdwyjpVyjz2dNFpVyKqKbYgbuVvxillAYgGGYxiALSS4QRI9hMb5NL+HRh0DM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186345: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-armhf:xen-build:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d20f6b3d747c36889b7ce75ee369182af3decb6b
X-Osstest-Versions-That:
    linux=2ef5971ff345d3c000873725db555085e0131961
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 Jun 2024 18:26:55 +0000

flight 186345 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186345/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build      fail in 186342 REGR. vs. 186314

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186342

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2   1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-xl-raw       1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-libvirt      1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-xl-qcow2     1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-xl           1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-examine      1 build-check(1)           blocked in 186342 n/a
 build-armhf-libvirt           1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-libvirt-vhd  1 build-check(1)           blocked in 186342 n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)          blocked in 186342 n/a
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186314
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186314
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186314
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186314
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186314
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                d20f6b3d747c36889b7ce75ee369182af3decb6b
baseline version:
 linux                2ef5971ff345d3c000873725db555085e0131961

Last test of basis   186314  2024-06-12 00:10:33 Z    2 days
Failing since        186324  2024-06-12 17:12:12 Z    2 days    4 attempts
Testing same since   186342  2024-06-13 21:12:16 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Csókás, Bence" <csokas.bence@prolan.hu>
  Aleksandr Mishin <amishin@t-argos.ru>
  Andrei Vagin <avagin@gmail.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Borislav Petkov (AMD) <bp@alien8.de>
  Chen Hanxiao <chenhx.fnst@fujitsu.com>
  Csókás, Bence <csokas.bence@prolan.hu>
  David S. Miller <davem@davemloft.net>
  David Wei <dw@davidwei.uk>
  Davide Ornaghi <d.ornaghi97@gmail.com>
  Dmitry Mastykin <mastichi@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Gal Pressman <gal@nvidia.com>
  Geliang Tang <geliang@kernel.org>
  Hongbo Li <lihongbo22@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Beulich <jbeulich@suse.com>
  Jan Kara <jack@suse.cz>
  Jie Wang <wangjie125@huawei.com>
  Jijie Shao <shaojijie@huawei.com>
  Johannes Berg <johannes.berg@intel.com>
  Joshua Washington <joshwash@google.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kory Maincent <kory.maincent@bootlin.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lion Ackermann <nnamrec@gmail.com>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  Michael Chan <michael.chan@broadcom.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Naama Meir <naamax.meir@linux.intel.com>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nikolay Aleksandrov <razor@blackwall.org>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Olga Kornievskaia <kolga@netapp.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Pauli Virtanen <pav@iki.fi>
  Petr Pavlu <petr.pavlu@suse.com>
  Rao Shoaib <Rao.Shoaib@oracle.com>
  Rob Herring <robh@kernel.org>
  Ron Economos <re@w6rz.net>
  Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Sagar Cheluvegowda <quic_scheluve@quicinc.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sasha Neftin <sasha.neftin@intel.com>
  Scott Mayhew <smayhew@redhat.com>
  Taehee Yoo <ap420073@gmail.com>
  Tariq Toukan <tariqt@nvidia.com>
  Thorsten Scherer <t.scherer@eckelmann.de>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Udit Kumar <u-kumar1@ti.com>
  Wadim Egorov <w.egorov@phytec.de>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  Yonglong Liu <liuyonglong@huawei.com>
  YonglongLi <liyonglong@chinatelecom.cn>
  Ziwei Xiao <ziweixiao@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1891 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 14 20:57:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 Jun 2024 20:57:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740893.1147990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIDyY-00028k-4r; Fri, 14 Jun 2024 20:56:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740893.1147990; Fri, 14 Jun 2024 20:56:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIDyY-00028d-1p; Fri, 14 Jun 2024 20:56:46 +0000
Received: by outflank-mailman (input) for mailman id 740893;
 Fri, 14 Jun 2024 20:56:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tuT0=NQ=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sIDyW-00028X-7x
 for xen-devel@lists.xenproject.org; Fri, 14 Jun 2024 20:56:44 +0000
Received: from fout2-smtp.messagingengine.com (fout2-smtp.messagingengine.com
 [103.168.172.145]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 992aee7a-2a90-11ef-90a3-e314d9c70b13;
 Fri, 14 Jun 2024 22:56:41 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailfout.nyi.internal (Postfix) with ESMTP id 0738013800FD;
 Fri, 14 Jun 2024 16:56:40 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Fri, 14 Jun 2024 16:56:40 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 14 Jun 2024 16:56:38 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 992aee7a-2a90-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718398600;
	 x=1718485000; bh=GZPfKZ/Rz0jT5WuRTnq+r2l0cpFH+3QRn9r5BMxT+Ts=; b=
	Mt5B9ogHpW1dhxnaLK59J5ig5ZJWDOk5db/jq4tUMQ2ox52KOTKQy9UaBLS2wbjm
	nkBMZc898CGrcZ8GZH+5xShl5vVXNhfXGgDG+kshuVj8CABS668Y9fAvnozR2D5o
	RQTFzeVdRNN2p7M94tWuF3Z+/MUhUM7kGb7+TA/+xYsbrocOQ65DvmLBp0DJRyPN
	W+b2k5h+qxWSJKdTbl5V6EBg8AJB76OIYKZRSZhcFhgKOpYxleH19qqgjK5oqk1H
	dbbMbljKMvaJgRx+4EdxA/54UVvflhru/lVIh+dj8dStLrKNd96wcXtML5U0Mr04
	NPJkJPfNVAiFVAdK5B6JNw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718398600; x=1718485000; bh=GZPfKZ/Rz0jT5WuRTnq+r2l0cpFH
	+3QRn9r5BMxT+Ts=; b=EmkugCjsSEP+9Srmhwxh5eneGc8BRY3mw0rwtq+IcWn5
	xGI4qvdP0wYo6NnjH5hv9TblYpHDvz9abkg27jqZTGGacuCKni8v7948B/nxXzRy
	N62Ywnomj/pTCqhDlhodcnPlK/RKh+z+AGcKTuo6itasQ9Tyci2CoU2h8ZTNqTZA
	3D5W+OklbQbsmykljZ8dODJP0MtEtq5mbe7o5J3aPPe7P8mzU5gXwaBAtxCziV0x
	juNV8OIyF4EY6z+JGR2sCz1AeS4IpUVeYsP7/4/dxnU60tbngOiS7A0G7j3+1Rpa
	jwW3ok4A6ZbAUzewgI7JP8KJDOnZ5yhMH6DGkvKR2g==
X-ME-Sender: <xms:h65sZk72XJs0HKW8UQatexwDrD9GGPjJBa5iNXdy0DwHvuAuPLqhxA>
    <xme:h65sZl4Pvaefasuz-sShVLrEkuLAnTtnKRDpmgDezzJot1_jqKIZE2_IhJvPvfZZI
    nsT9BI77jItKjA>
X-ME-Received: <xmr:h65sZjdXVu0QxA9Uh-Lst5YVCtPfLxcrqM01twdBF1iaJzts0B3s-AfVfZ1T4YiDWZvAlovtk9H1FHnpnp-5vNojjDlP7Vl0BNuZLok2E2Qi85vL>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeduledgudehgecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtderredttddvnecuhfhrohhmpeffvghm
    ihcuofgrrhhivgcuqfgsvghnohhurhcuoeguvghmihesihhnvhhishhisghlvghthhhinh
    hgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeduieelfeeutedvleehueetffej
    geejgeffkeelveeuleeukeejjeduffetjeekteenucevlhhushhtvghrufhiiigvpedtne
    curfgrrhgrmhepmhgrihhlfhhrohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhg
    shhlrggsrdgtohhm
X-ME-Proxy: <xmx:h65sZpL4tf3cQXEmmpwHCVckqTZF-n6MXtAa3ubaKoOua9KbTfpEGA>
    <xmx:h65sZoLGqYKy_CrKTcv4NpEnXKPqUZEUdQgR0FRJjpxMhoGyptwomg>
    <xmx:h65sZqztYGeYvgGkgnwFpw63dIst23dyXgZZEJ7PqR1lAtPitTlDSg>
    <xmx:h65sZsJL-877f7QIWuPZNoy6m3SGo9FQYgV_yeNehhy_7S4cxrxyow>
    <xmx:h65sZlD99sM47HY8pBaFvYzpFUjTkEH3q7vghxPlhdmzjzheYv5WUy0o>
Feedback-ID: iac594737:Fastmail
Date: Fri, 14 Jun 2024 16:56:35 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Direct Rendering Infrastructure development <dri-devel@lists.freedesktop.org>,
	Daniel Vetter <daniel@ffwll.ch>, David Airlie <airlied@gmail.com>,
	Rob Clark <robdclark@gmail.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZmyuhRQlHxo2JXPu@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="Gd/4I9Ey0Xn0VJx2"
Content-Disposition: inline
In-Reply-To: <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>


--Gd/4I9Ey0Xn0VJx2
Content-Type: text/plain; protected-headers=v1; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 14 Jun 2024 16:56:35 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Direct Rendering Infrastructure development <dri-devel@lists.freedesktop.org>,
	Daniel Vetter <daniel@ffwll.ch>, David Airlie <airlied@gmail.com>,
	Rob Clark <robdclark@gmail.com>
Subject: Re: Design session notes: GPU acceleration in Xen

On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > GPU acceleration requires that pageable host memory be able to be mapped
> > into a guest.
>
> I'm sure it was explained in the session, which sadly I couldn't attend.
> I've been asking Ray and Xenia the same before, but I'm afraid it still
> hasn't become clear to me why this is a _requirement_. After all that's
> against what we're doing elsewhere (i.e. so far it has always been
> guest memory that's mapped in the host). I can appreciate that it might
> be more difficult to implement, but avoiding to violate this fundamental
> (kind of) rule might be worth the price (and would avoid other
> complexities, of which there may be lurking more than what you enumerate
> below).

The GPU driver knows how to allocate buffers that are usable by the GPU.
On a discrete GPU, these buffers will generally be in VRAM, rather than
in system RAM, because access to system RAM requires going through the
PCI bus (slow).  However, VRAM is a limited resource, so the driver will
migrate pages between VRAM and system RAM as needed.  During the
migration, a guest that tries to access the pages must block until the
migration is complete.
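The blocking rule described above can be sketched as a small simulation. This is illustrative Python, not real driver code; the class and method names are invented for the sketch. The point is only the invariant: an access to a buffer must wait out any in-flight VRAM/system-RAM migration.

```python
# Illustrative model (all names hypothetical) of the rule above: a guest
# access to a GPU buffer blocks while the driver migrates its pages
# between VRAM and system RAM.
import threading

class GpuBuffer:
    def __init__(self):
        self.location = "VRAM"       # where the pages currently live
        self.migrating = False
        self.cond = threading.Condition()

    def access(self):
        """A guest access: waits until any in-flight migration finishes."""
        with self.cond:
            while self.migrating:
                self.cond.wait()
            return self.location

    def migrate(self, dest):
        """Driver-initiated migration, e.g. VRAM -> system RAM under
        memory pressure."""
        with self.cond:
            self.migrating = True
        # ... copy pages and update mappings here ...
        with self.cond:
            self.location = dest
            self.migrating = False
            self.cond.notify_all()

buf = GpuBuffer()
buf.migrate("system RAM")
assert buf.access() == "system RAM"
```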

Some GPU drivers support accessing externally provided memory.  This is
called "userptr", and is supported by i915 and amdgpu.  However, it
appears that some other drivers (such as MSM) do not support it, and
since GPUs with VRAM need to be supported anyway, Xen still needs to
support GPU driver-allocated memory.

I also CCd dri-devel@lists.freedesktop.org and the general GPU driver
maintainers in Linux in case they can give a better answer, as well as
Rob Clark who invented native contexts.

> >  This requires changes to all of the Xen hypervisor, Linux
> > kernel, and userspace device model.
> >
> > ### Goals
> >
> >  - Allow any userspace pages to be mapped into a guest.
> >  - Support deprivileged operation: this API must not be usable for
> >    privilege escalation.
> >  - Use MMU notifiers to ensure safety with respect to use-after-free.
> >
> > ### Hypervisor changes
> >=20
> > There are at least two Xen changes required:
> >
> > 1. Add a new flag to IOREQ that means "retry this instruction".
> >
> >    An IOREQ server can set this flag after having successfully handled a
> >    page fault.  It is expected that the IOREQ server has successfully
> >    mapped a page into the guest at the location of the fault.
> >    Otherwise, the same fault will likely happen again.
>
> Were there any thoughts on how to prevent this becoming an infinite loop?
> I.e. how to (a) guarantee forward progress in the guest and (b) deal with
> misbehaving IOREQ servers?

Guaranteeing forward progress is up to the IOREQ server.  If the IOREQ
server misbehaves, an infinite loop is possible, but the CPU time used
by it should be charged to the IOREQ server, so this isn't a
vulnerability.
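The fault/retry interaction discussed above can be sketched as follows. This is a hypothetical Python model, not the real ABI: the flag encoding and function names are invented, and the retry bound exists only so the simulation terminates; in the actual proposal the hypervisor imposes no bound and forward progress is the IOREQ server's responsibility.

```python
# Hypothetical model of the proposed "retry this instruction" IOREQ flag.
# A well-behaved server maps a page at the fault address before asking
# for a retry; a misbehaving server can loop forever, burning only its
# own CPU time (as noted above, this is not a vulnerability).

def handle_guest_fault(gpa, ioreq_server, mapped_pages, max_retries=16):
    """Simulate the hypervisor re-executing a faulting guest access
    while the IOREQ server keeps requesting retries.  max_retries is an
    artifact of this model, not part of the proposal."""
    for _ in range(max_retries):
        if gpa in mapped_pages:
            return "completed"        # the retried access now succeeds
        resp = ioreq_server(gpa, mapped_pages)
        if resp != "retry":
            return "failed"
        # hypervisor re-executes the instruction; loop back to the check
    return "gave up"

def good_server(gpa, mapped_pages):
    mapped_pages.add(gpa)             # map a page at the fault location
    return "retry"

def bad_server(gpa, mapped_pages):
    return "retry"                    # never maps anything: would loop forever

assert handle_guest_fault(0x1000, good_server, set()) == "completed"
assert handle_guest_fault(0x1000, bad_server, set()) == "gave up"
```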

> > 2. Add support for `XEN_DOMCTL_memory_mapping` to use system RAM, not
> >    just IOMEM.  Mappings made with `XEN_DOMCTL_memory_mapping` are
> >    guaranteed to be able to be successfully revoked with
> >    `XEN_DOMCTL_memory_mapping`, so all operations that would create
> >    extra references to the mapped memory must be forbidden.  These
> >    include, but may not be limited to:
> >
> >    1. Granting the pages to the same or other domains.
> >    2. Mapping into another domain using `XEN_DOMCTL_memory_mapping`.
> >    3. Another domain accessing the pages using the foreign memory APIs,
> >       unless it is privileged over the domain that owns the pages.
>
> All of which may call for actually converting the memory to kind-of-MMIO,
> with a means to later convert it back.

Would this support the case where the mapping domain is not fully
privileged, and where it might be a PV guest?
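The revocation guarantee quoted earlier (that mappings made with `XEN_DOMCTL_memory_mapping` must always be revocable, so operations creating extra references are forbidden) can be modeled as a simple admission check. This Python sketch is purely illustrative; the field and function names are invented and do not correspond to real Xen structures.

```python
# Illustrative model (names invented) of the rule that a page may only be
# mapped via the proposed XEN_DOMCTL_memory_mapping extension if no extra
# reference exists that could make later revocation fail.

class Page:
    def __init__(self):
        self.granted = False        # granted to some domain
        self.mapped_into = set()    # domains holding such a mapping
        self.foreign_refs = 0       # unprivileged foreign-memory references

def can_map(page, dom):
    """Refuse the mapping if any forbidden extra reference exists."""
    if page.granted:
        return False                # rule 1: page was granted out
    if page.mapped_into and page.mapped_into != {dom}:
        return False                # rule 2: mapped into another domain
    if page.foreign_refs:
        return False                # rule 3: unprivileged foreign mappings
    return True

def do_map(page, dom):
    if not can_map(page, dom):
        raise PermissionError("extra references would block revocation")
    page.mapped_into.add(dom)

p = Page()
do_map(p, "domU1")                  # no extra references: allowed
p2 = Page()
p2.granted = True
try:
    do_map(p2, "domU1")             # granted page: must be refused
    refused = False
except PermissionError:
    refused = True
assert refused
```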
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--Gd/4I9Ey0Xn0VJx2
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZsroUACgkQsoi1X/+c
IsEFXQ//QMybLNE90OiAvTVes1tBRcRaHkgZqnorLYTtL0spBy81VQCcOYtw1M8m
OcWvJrV0xhbCg82ALyiw4IWvPMwqNg17x5S+WAE4pNMhr7zbTzxGYtT/QS2wXeZ3
/MA7AidErZtH6qHuC5uyXvjI7FIh67M7AiSUnjT14sygwB4s9JTc1qqIwBK8DoH9
ll5mWBrh4Dez/Tf+Z5oJptCBv86+jP6/kopJymasvVAV3NFjngm7/mbEOrk41sHp
616ZpooGaNYqUW19ilIsVnW1vgcmL1N9icFIskgSTPrvto2MZynwEzJFhN5NBfBi
HiVrMk2XZUsQX19r0kx2vOh3iWD3ou4WRKOCYaaR7Y96PTKi+uQO1RPo+iUm/DHo
0kkd1om1OgI5fV+qLszMpY93GE+fs3GHn1B/qtGc1SRtqxu6V9Rjoj+wMr40sI6L
k/p9dqQAT+XwbdFupCtTA34pVHUcJKhmckchObddbVrzfsQ+iM+om9jORy5nmose
hcAzXtSap8gwVMjEU/t8zzMeNgcNXQn9nzP2LmSygfHqOLxDrrbeB6KuSUirVRgy
t4GE9zBKy+zU0umR7RJ8zIOPj3PApAmzdIGdzNv7m0vtu5IMiDcbl4N/JzZrkl11
NvufwCaM2kV1/g6JXYNM+46fOgWPUeQFscfiBG3nm8aR8mxpjP4=
=l9yO
-----END PGP SIGNATURE-----

--Gd/4I9Ey0Xn0VJx2--


From xen-devel-bounces@lists.xenproject.org Sat Jun 15 00:12:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Jun 2024 00:12:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740927.1148027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIH22-0004LF-Ey; Sat, 15 Jun 2024 00:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740927.1148027; Sat, 15 Jun 2024 00:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIH22-0004L8-CE; Sat, 15 Jun 2024 00:12:34 +0000
Received: by outflank-mailman (input) for mailman id 740927;
 Sat, 15 Jun 2024 00:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIH21-0004Ku-5Z; Sat, 15 Jun 2024 00:12:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIH21-00027T-40; Sat, 15 Jun 2024 00:12:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIH20-000590-Oi; Sat, 15 Jun 2024 00:12:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIH20-00072K-O9; Sat, 15 Jun 2024 00:12:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VQ+ggEWiFPGSdoQXlwBD5JFIBZ3SsFaV4BbWiBWpiGs=; b=VuZJ7km0oCZHs1fUli6FJFTyb8
	tSb7EIJ+ZKYNlZZGMJAHyhrNxZqLtwhYpjgkrpH7L7TD2wgWUy/8C2snOXtm0OcDeB7AQCJZIdoXw
	Px0bVxn/3R0xgUBUqNYPq6YEEWFmojnWEHycMoElc3viqt0fVdMBoq/2MThqTgsszd6A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186353-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186353: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-arndale:host-ping-check-xen:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
X-Osstest-Versions-That:
    xen=4fdd8d75566fdad06667a79ec0ce6f43cc466c54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 00:12:32 +0000

flight 186353 xen-unstable real [real]
flight 186355 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186353/
http://logs.test-lab.xenproject.org/osstest/logs/186355/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale  10 host-ping-check-xen fail pass in 186355-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186355 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186355 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186344
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186344
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186344
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186344
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186344
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186344
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7
baseline version:
 xen                  4fdd8d75566fdad06667a79ec0ce6f43cc466c54

Last test of basis   186344  2024-06-14 05:48:54 Z    0 days
Testing same since   186353  2024-06-14 15:10:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Luca Fancellu <luca.fancellu@arm.com>
  Penny Zheng <penny.zheng@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4fdd8d7556..8b4243a9b5  8b4243a9b560c89bb259db5a27832c253d4bebc7 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 15 04:57:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Jun 2024 04:57:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740954.1148038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sILSy-0007Bl-IB; Sat, 15 Jun 2024 04:56:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740954.1148038; Sat, 15 Jun 2024 04:56:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sILSy-0007Be-Es; Sat, 15 Jun 2024 04:56:40 +0000
Received: by outflank-mailman (input) for mailman id 740954;
 Sat, 15 Jun 2024 04:56:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sILSx-0007BU-8E; Sat, 15 Jun 2024 04:56:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sILSx-0006ua-2u; Sat, 15 Jun 2024 04:56:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sILSw-0005yf-Ms; Sat, 15 Jun 2024 04:56:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sILSw-0001ZM-MP; Sat, 15 Jun 2024 04:56:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tRiNO/yfWOvNXwX3F2TwhtkKnyRVuoKqdg+v7R67zb8=; b=5NoR/3rT+AYs9QyJCjVYuKqgZG
	tSY3yP72T4SpAbuAWfDhZq5SBoeHnFgBzKtHgSJljSpzKXWmkxJ2KPTPAuoONTXzdZ9ICCDPY4ion
	5MHi0+gsvfXafMm3wTkE22bFBtx4AsaNUdjqTLkYyACb4rlD3pffjf+zeTDVorCXqehc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186354-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186354: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=0cac73eb3875f6ecb6105e533218dba1868d04c9
X-Osstest-Versions-That:
    linux=2ef5971ff345d3c000873725db555085e0131961
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 04:56:38 +0000

flight 186354 linux-linus real [real]
flight 186357 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186354/
http://logs.test-lab.xenproject.org/osstest/logs/186357/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail pass in 186357-retest
 test-armhf-armhf-examine      8 reboot              fail pass in 186357-retest
 test-armhf-armhf-xl-multivcpu  8 xen-boot           fail pass in 186357-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186357-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186357 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186357 never pass
 test-armhf-armhf-xl-raw       8 xen-boot                     fail  like 186314
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186314
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186314
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186314
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                0cac73eb3875f6ecb6105e533218dba1868d04c9
baseline version:
 linux                2ef5971ff345d3c000873725db555085e0131961

Last test of basis   186314  2024-06-12 00:10:33 Z    3 days
Failing since        186324  2024-06-12 17:12:12 Z    2 days    5 attempts
Testing same since   186354  2024-06-14 18:42:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Csókás, Bence" <csokas.bence@prolan.hu>
  Aleksandr Mishin <amishin@t-argos.ru>
  Andrei Vagin <avagin@gmail.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Borislav Petkov (AMD) <bp@alien8.de>
  Chen Hanxiao <chenhx.fnst@fujitsu.com>
  Csókás, Bence <csokas.bence@prolan.hu>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  David S. Miller <davem@davemloft.net>
  David Wei <dw@davidwei.uk>
  Davide Ornaghi <d.ornaghi97@gmail.com>
  Dmitry Mastykin <mastichi@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Gal Pressman <gal@nvidia.com>
  Geliang Tang <geliang@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hongbo Li <lihongbo22@huawei.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Beulich <jbeulich@suse.com>
  Jan Kara <jack@suse.cz>
  Jie Wang <wangjie125@huawei.com>
  Jijie Shao <shaojijie@huawei.com>
  Johan Hovold <johan+linaro@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  Joshua Washington <joshwash@google.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kory Maincent <kory.maincent@bootlin.com>
  Laura Nao <laura.nao@collabora.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lion Ackermann <nnamrec@gmail.com>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  Michael Chan <michael.chan@broadcom.com>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Naama Meir <naamax.meir@linux.intel.com>
  Neal Cardwell <ncardwell@google.com>
  NeilBrown <neilb@suse.de>
  Nikolay Aleksandrov <razor@blackwall.org>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Olga Kornievskaia <kolga@netapp.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Pauli Virtanen <pav@iki.fi>
  Petr Pavlu <petr.pavlu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rao Shoaib <Rao.Shoaib@oracle.com>
  Rob Herring <robh@kernel.org>
  Ron Economos <re@w6rz.net>
  Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Sagar Cheluvegowda <quic_scheluve@quicinc.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sasha Neftin <sasha.neftin@intel.com>
  Scott Mayhew <smayhew@redhat.com>
  Taehee Yoo <ap420073@gmail.com>
  Tariq Toukan <tariqt@nvidia.com>
  Thorsten Scherer <t.scherer@eckelmann.de>
  Tibor Billes <tbilles@gmx.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Udit Kumar <u-kumar1@ti.com>
  VitaliiT <vitaly.torshyn@gmail.com>
  Wadim Egorov <w.egorov@phytec.de>
  Xi Ruoyao <xry111@xry111.site>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  Yonglong Liu <liuyonglong@huawei.com>
  YonglongLi <liyonglong@chinatelecom.cn>
  Ziwei Xiao <ziweixiao@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   2ef5971ff345..0cac73eb3875  0cac73eb3875f6ecb6105e533218dba1868d04c9 -> tested/linux-linus
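
The `ignoredHook` hints above appear because git skips any hook file that is present but not executable. A minimal sketch of the two remedies git itself suggests, demonstrated in a throwaway bare repository (the `mktemp` demo repo is illustrative, not the actual xenbits repository):

```shell
# Create a disposable bare repository to demonstrate both fixes
tmp=$(mktemp -d)
git init --bare "$tmp/demo.git" >/dev/null

# Option 1: give the hook the executable bit so git actually runs it
printf '#!/bin/sh\nexit 0\n' > "$tmp/demo.git/hooks/update"
chmod +x "$tmp/demo.git/hooks/update"

# Option 2: if the hooks are intentionally left non-executable,
# silence the advice message instead
git -C "$tmp/demo.git" config advice.ignoredHook false
git -C "$tmp/demo.git" config advice.ignoredHook   # prints: false
```

Either change would make the hints disappear from future push transcripts; only option 1 causes the hooks to execute.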


From xen-devel-bounces@lists.xenproject.org Sat Jun 15 05:35:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Jun 2024 05:35:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740966.1148048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIM4a-0004iv-Lq; Sat, 15 Jun 2024 05:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740966.1148048; Sat, 15 Jun 2024 05:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIM4a-0004io-Hf; Sat, 15 Jun 2024 05:35:32 +0000
Received: by outflank-mailman (input) for mailman id 740966;
 Sat, 15 Jun 2024 05:35:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIM4a-0004ie-4y; Sat, 15 Jun 2024 05:35:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIM4Z-0007sM-NS; Sat, 15 Jun 2024 05:35:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIM4Z-0007F5-Do; Sat, 15 Jun 2024 05:35:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIM4Z-0004cv-DD; Sat, 15 Jun 2024 05:35:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uuP59QluP5Vrl1hiIGwaKUkyJ2rwf1H29ny0CS0tpb8=; b=GS7pSqXDURVVlgr7Ik8JTpxf97
	8yfI4p/xh6+3NzXb1aSCbvyyeJ+fZ4UYYI3YqOcPSb5TUCbYaI0PpjVJZwjt5LImW9QwBOaTAV+bN
	DKGdCcQb8Rbyl7+39JAyjFP3jA1w3NRXuXeocU12i4LI5VhNcKyBTZEf69xqHMMi7LpM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186358-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186358: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cf323e2839ce260fde43487baae205527dee1b2f
X-Osstest-Versions-That:
    ovmf=5e776299a2604b336a947e68593012ab2cc16eb4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 05:35:31 +0000

flight 186358 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186358/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cf323e2839ce260fde43487baae205527dee1b2f
baseline version:
 ovmf                 5e776299a2604b336a947e68593012ab2cc16eb4

Last test of basis   186352  2024-06-14 13:42:52 Z    0 days
Testing same since   186358  2024-06-15 04:11:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Leif Lindholm <quic_llindhol@quicinc.com>
  Pierre Gondois <pierre.gondois@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5e776299a2..cf323e2839  cf323e2839ce260fde43487baae205527dee1b2f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 15 08:42:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Jun 2024 08:42:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.740999.1148058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIOzP-0004qA-EH; Sat, 15 Jun 2024 08:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 740999.1148058; Sat, 15 Jun 2024 08:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIOzP-0004q3-BP; Sat, 15 Jun 2024 08:42:23 +0000
Received: by outflank-mailman (input) for mailman id 740999;
 Sat, 15 Jun 2024 08:42:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIOzO-0004pt-BO; Sat, 15 Jun 2024 08:42:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIOzO-0003i9-6r; Sat, 15 Jun 2024 08:42:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIOzN-00085n-Vj; Sat, 15 Jun 2024 08:42:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIOzN-00063U-V4; Sat, 15 Jun 2024 08:42:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xsRh3sMNlVCgB4e4HnDL0laZKJqwKR3it2ZdYsqMwXQ=; b=Q2ycVoIda/cWh1eKm9Em4a5+Lg
	yvey8f0l9IOztSaKA5QSmVdWPA+T3ROnRAY4IFN9/vTKvY3ozjj15JnvSENyAxfBIijPQYqsfkzkJ
	vZ7L5fWM8/UcL36qhCpd07m6cbLAUXJL2SuZX3NcERey0KWj6UaUXvPY3K11lSVGSdVw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186356-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186356: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
X-Osstest-Versions-That:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 08:42:21 +0000

flight 186356 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186356/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186353
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186353
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186353
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186353
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186353
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186353
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7
baseline version:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7

Last test of basis   186356  2024-06-15 01:53:38 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat Jun 15 09:37:00 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186360-mainreport@xen.org>
Subject: [ovmf test] 186360: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=a84876ba283176eb683dc84274bc6c66faffc7a0
X-Osstest-Versions-That:
    ovmf=cf323e2839ce260fde43487baae205527dee1b2f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 09:36:51 +0000

flight 186360 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186360/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a84876ba283176eb683dc84274bc6c66faffc7a0
baseline version:
 ovmf                 cf323e2839ce260fde43487baae205527dee1b2f

Last test of basis   186358  2024-06-15 04:11:23 Z    0 days
Testing same since   186360  2024-06-15 07:42:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ross Lagerwall <ross.lagerwall@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cf323e2839..a84876ba28  a84876ba283176eb683dc84274bc6c66faffc7a0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 15 11:16:37 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186361-mainreport@xen.org>
Subject: [ovmf test] 186361: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d8095b36abc521970dd930449a8ae8ddc431314c
X-Osstest-Versions-That:
    ovmf=a84876ba283176eb683dc84274bc6c66faffc7a0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 11:16:21 +0000

flight 186361 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186361/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d8095b36abc521970dd930449a8ae8ddc431314c
baseline version:
 ovmf                 a84876ba283176eb683dc84274bc6c66faffc7a0

Last test of basis   186360  2024-06-15 07:42:57 Z    0 days
Testing same since   186361  2024-06-15 09:42:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a84876ba28..d8095b36ab  d8095b36abc521970dd930449a8ae8ddc431314c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 15 14:01:49 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186359-mainreport@xen.org>
Subject: [linux-linus test] 186359: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=44ef20baed8edcb1799bec1e7ad2debbc93eedd8
X-Osstest-Versions-That:
    linux=0cac73eb3875f6ecb6105e533218dba1868d04c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 14:01:25 +0000

flight 186359 linux-linus real [real]
flight 186364 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186359/
http://logs.test-lab.xenproject.org/osstest/logs/186364/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   8 xen-boot            fail pass in 186364-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186364 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186364 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186354
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186354
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186354
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186354
 test-armhf-armhf-examine      8 reboot                       fail  like 186354
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186354
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186354
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                44ef20baed8edcb1799bec1e7ad2debbc93eedd8
baseline version:
 linux                0cac73eb3875f6ecb6105e533218dba1868d04c9

Last test of basis   186354  2024-06-14 18:42:32 Z    0 days
Testing same since   186359  2024-06-15 05:00:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Miotk <adam.miotk@arm.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Amjad Ouled-Ameur <amjad.ouled-ameur@arm.com>
  Andrzej Hajda <andrzej.hajda@intel.com>
  Anuj Gupta <anuj20.g@samsung.com>
  Arunpravin Paneer Selvam <Arunpravin.PaneerSelvam@amd.com>
  Bart Van Assche <bvanassche@acm.org>
  Breno Leitao <leitao@debian.org>
  Can Guo <quic_cang@quicinc.com>
  Chengming Zhou <chengming.zhou@linux.dev>
  Christoph Hellwig <hch@lst.de>
  Chunguang Xu <chunguang.xu@shopee.com>
  Cyril Hrubis <chrubis@suse.cz>
  Damien Le Moal <dlemoal@kernel.org>
  Daniel Wagner <dwagner@suse.de>
  Danilo Krummrich <dakr@redhat.com>
  Dave Airlie <airlied@redhat.com>
  Dimitri Sivanich <sivanich@hpe.com>
  Douglas Anderson <dianders@chromium.org>
  Dr. David Alan Gilbert <linux@treblig.org>
  Fei Shao <fshao@chromium.org>
  Friedrich Weber <f.weber@proxmox.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Hans de Goede <hdegoede@redhat.com>
  Heiko Carstens <hca@linux.ibm.com>
  Inki Dae <inki.dae@samsung.com>
  Jani Nikula <jani.nikula@intel.com>
  Jens Axboe <axboe@kernel.dk>
  Joerg Roedel <jroedel@suse.de>
  Kanchan Joshi <joshi.k@samsung.com>
  Keith Busch <kbusch@kernel.org>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liviu Dudau <liviu.dudau@arm.com>
  Lucas De Marchi <lucas.demarchi@intel.com>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Matthew Brost <matthew.brost@intel.com>
  Maxime Ripard <mripard@kernel.org>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Oded Gabbay <ogabbay@kernel.org>
  Pavel Begunkov <asml.silence@gmail.com>
  pengfuyuan <pengfuyuan@kylinos.cn>
  Pierre Tomon <pierretom+12@ik.me>
  Riana Tauro <riana.tauro@intel.com>
  Richard Gong <richard.gong@amd.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Su Hui <suhui@nfschina.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Tobias Jakobi <tjakobi@math.uni-bielefeld.de>
  Vasily Gorbik <gor@linux.ibm.com>
  Vasily Khoruzhick <anarsoul@gmail.com>
  Venkat Rao Bagalkote <venkat88@linux.vnet.ibm.com>
  Yi Zhang <yi.zhang@redhat.com>
  Ziqi Chen <quic_ziqichen@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   0cac73eb3875..44ef20baed8e  44ef20baed8edcb1799bec1e7ad2debbc93eedd8 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat Jun 15 14:04:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 Jun 2024 14:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741326.1148097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIU1U-0005xT-3o; Sat, 15 Jun 2024 14:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741326.1148097; Sat, 15 Jun 2024 14:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIU1U-0005xM-1B; Sat, 15 Jun 2024 14:04:52 +0000
Received: by outflank-mailman (input) for mailman id 741326;
 Sat, 15 Jun 2024 14:04:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIU1S-0005xA-HW; Sat, 15 Jun 2024 14:04:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIU1S-0002c8-Gf; Sat, 15 Jun 2024 14:04:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIU1S-0007zP-AA; Sat, 15 Jun 2024 14:04:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIU1S-00067k-9l; Sat, 15 Jun 2024 14:04:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fPDC4eftMxjn4Uc9v9aqVwHbIkl8abeqvvRznbQCb6U=; b=MvKYbgIZedghF47YNbRvnkH0cI
	6uohOkRrXWsDV+O8NAPc5k75IuBgR4dglCNLA7fk3Txqfql1/hC18IsipLMvRk+BrFHsOM9ScYAdD
	pmhPU/ukSIoVAuubyQY5Wj2fNZ2hkRN41DHWoXxSBp+HnTD/tYo1uFfl61jm0zoRT9OM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186362-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186362: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=aa99d36be9ad68d8d0a99896332a9b5da10cf343
X-Osstest-Versions-That:
    ovmf=d8095b36abc521970dd930449a8ae8ddc431314c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 Jun 2024 14:04:50 +0000

flight 186362 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186362/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 aa99d36be9ad68d8d0a99896332a9b5da10cf343
baseline version:
 ovmf                 d8095b36abc521970dd930449a8ae8ddc431314c

Last test of basis   186361  2024-06-15 09:42:56 Z    0 days
Testing same since   186362  2024-06-15 11:42:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d8095b36ab..aa99d36be9  aa99d36be9ad68d8d0a99896332a9b5da10cf343 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun Jun 16 07:04:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Jun 2024 07:04:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741469.1148109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIjw3-0000vG-Lu; Sun, 16 Jun 2024 07:04:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741469.1148109; Sun, 16 Jun 2024 07:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIjw3-0000v9-Hc; Sun, 16 Jun 2024 07:04:19 +0000
Received: by outflank-mailman (input) for mailman id 741469;
 Sun, 16 Jun 2024 07:04:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIjw2-0000uz-Qb; Sun, 16 Jun 2024 07:04:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIjw2-0004hH-HO; Sun, 16 Jun 2024 07:04:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIjw2-0005vT-56; Sun, 16 Jun 2024 07:04:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIjw2-00045X-4T; Sun, 16 Jun 2024 07:04:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sLtpVDx1oJqbxoSe+DHmMrT1Zlhuwyl/Xpt+TftxUIY=; b=e0p6mgN7xx5GaITbCbOg8EydE2
	OsZGtsKfAvqRP5MrOCccQkRqLIN2I/MuGQqosjOZFXeVBV5OiyIwkjuNpiHbFKq+ZuNb63oKk6hx7
	1s7FmEuGFJcJL9y0Y/fDf4ci+YoL2bR79njGm5V72yE8q9PaPdwPTUR1y5F0sU6Ikebs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186365-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186365: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=08a6b55aa0c66b1c4f6ff35402c971420335b11c
X-Osstest-Versions-That:
    linux=44ef20baed8edcb1799bec1e7ad2debbc93eedd8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Jun 2024 07:04:18 +0000

flight 186365 linux-linus real [real]
flight 186368 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186365/
http://logs.test-lab.xenproject.org/osstest/logs/186368/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186368-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186359
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186359
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186359
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186359
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186359
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186359
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                08a6b55aa0c66b1c4f6ff35402c971420335b11c
baseline version:
 linux                44ef20baed8edcb1799bec1e7ad2debbc93eedd8

Last test of basis   186359  2024-06-15 05:00:30 Z    1 days
Testing same since   186365  2024-06-15 18:43:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ben Segall <bsegall@google.com>
  Benjamin Segall <bsegall@google.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Dave Hansen <dave.hansen@linux.intel.com>
  David Gow <davidgow@google.com>
  Kees Cook <kees@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Oleg Nesterov <oleg@redhat.com>
  Thomas Gleixner <tglx@linutronix.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   44ef20baed8e..08a6b55aa0c6  08a6b55aa0c66b1c4f6ff35402c971420335b11c -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun Jun 16 09:48:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Jun 2024 09:48:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741495.1148117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sImUV-0001Vo-BY; Sun, 16 Jun 2024 09:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741495.1148117; Sun, 16 Jun 2024 09:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sImUV-0001Vh-8e; Sun, 16 Jun 2024 09:48:03 +0000
Received: by outflank-mailman (input) for mailman id 741495;
 Sun, 16 Jun 2024 09:48:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sImUT-0001VF-9o; Sun, 16 Jun 2024 09:48:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sImUS-0008AI-Um; Sun, 16 Jun 2024 09:48:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sImUS-0003Zc-Ji; Sun, 16 Jun 2024 09:48:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sImUS-0000eh-JG; Sun, 16 Jun 2024 09:48:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tPIrIQM7GR9E7tnXK9YFqRovEZPV0MF8edJoQ1EnVEY=; b=cwHDaUw56TGv0/OUU9qK1gPhIU
	za/y4RTv74y4h1dNzfwfYU82t7rbSaiQ+JNPDjYenF/R38hpFLgFloCqMjIIlgnZaJ3v9SJpdOetb
	CSHCySvOw9g1/JBYzR1d3TVEMf7RsFw+nEAmbqWg4/ILOmKxhbhwVByALJvc6zfSVDHw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186367-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186367: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
X-Osstest-Versions-That:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Jun 2024 09:48:00 +0000

flight 186367 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186367/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186356
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186356
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186356
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186356
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186356
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186356
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7
baseline version:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7

Last test of basis   186367  2024-06-16 01:55:33 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    




Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 16 15:20:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Jun 2024 15:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741518.1148127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIrfv-0002Ox-VY; Sun, 16 Jun 2024 15:20:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741518.1148127; Sun, 16 Jun 2024 15:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIrfv-0002Oq-T5; Sun, 16 Jun 2024 15:20:11 +0000
Received: by outflank-mailman (input) for mailman id 741518;
 Sun, 16 Jun 2024 15:20:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIrfu-0002Og-7q; Sun, 16 Jun 2024 15:20:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIrft-0005hY-Rz; Sun, 16 Jun 2024 15:20:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIrft-00080u-GS; Sun, 16 Jun 2024 15:20:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIrft-0004lZ-G4; Sun, 16 Jun 2024 15:20:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J0mzuqUDasDgIHEUHcW7RHg+/Hu1xZb37ZhF9mENsuI=; b=zSTzn6lUj2jwFexShO/dp/ZUgY
	6tb6Mj9zH9IimxbTO+gd52fNafEphjuKazuVSCZkHgj1GCK5Rf1bsbDTJkUiKJWn5It9hVTauZzhY
	TjMsyyzPG71cTM4HaDgbjCR/+PKWcbx1/GhCjdmpPSOMLO1SJRJo3zVMx7OlsQN6Bprs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186369-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186369: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a3e18a540541325a8c8848171f71e0d45ad30b2c
X-Osstest-Versions-That:
    linux=08a6b55aa0c66b1c4f6ff35402c971420335b11c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Jun 2024 15:20:09 +0000

flight 186369 linux-linus real [real]
flight 186371 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186369/
http://logs.test-lab.xenproject.org/osstest/logs/186371/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   8 xen-boot            fail pass in 186371-retest
 test-armhf-armhf-xl-qcow2     8 xen-boot            fail pass in 186371-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186371 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186371 never pass
 test-armhf-armhf-xl-qcow2   14 migrate-support-check fail in 186371 never pass
 test-armhf-armhf-xl-qcow2 15 saverestore-support-check fail in 186371 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186365
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186365
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186365
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186365
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186365
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186365
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a3e18a540541325a8c8848171f71e0d45ad30b2c
baseline version:
 linux                08a6b55aa0c66b1c4f6ff35402c971420335b11c

Last test of basis   186365  2024-06-15 18:43:41 Z    0 days
Testing same since   186369  2024-06-16 07:09:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chandan Babu R <chandanbabu@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Namjae Jeon <linkinjeon@kernel.org>
  Steve French <stfrench@microsoft.com>
  Wengang Wang <wen.gang.wang@oracle.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   08a6b55aa0c6..a3e18a540541  a3e18a540541325a8c8848171f71e0d45ad30b2c -> tested/linux-linus
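[Editorial note, not part of the original report: the "hook was ignored" hints above come from git itself, which skips any hook script whose executable bit is unset. A minimal sketch of the mechanism, using a stand-in hook file in a temporary directory (paths are illustrative, not the actual xenbits repository layout):]

```shell
# git refuses to run a hook unless its executable bit is set; this is
# what produces the "hook was ignored" advice seen in the push output.
hookdir=$(mktemp -d)
hook="$hookdir/update"
printf '#!/bin/sh\nexit 0\n' > "$hook"

# Freshly created file: no executable bit, so git would ignore it.
if test -x "$hook"; then echo "executable"; else echo "not executable"; fi

# Setting +x is the fix; alternatively the advice itself can be
# silenced with: git config advice.ignoredHook false
chmod +x "$hook"
if test -x "$hook"; then echo "executable"; else echo "not executable"; fi

rm -rf "$hookdir"
```

As the hint text notes, `git config advice.ignoredHook false` only suppresses the warning; the hook still does not run until it is made executable.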


From xen-devel-bounces@lists.xenproject.org Sun Jun 16 20:40:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Jun 2024 20:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741528.1148138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIwg4-0005Kv-9d; Sun, 16 Jun 2024 20:40:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741528.1148138; Sun, 16 Jun 2024 20:40:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIwg4-0005Ko-62; Sun, 16 Jun 2024 20:40:40 +0000
Received: by outflank-mailman (input) for mailman id 741528;
 Sun, 16 Jun 2024 20:40:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIwg2-0005Kd-K1; Sun, 16 Jun 2024 20:40:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIwg2-0003oM-6r; Sun, 16 Jun 2024 20:40:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sIwg1-0001qP-TP; Sun, 16 Jun 2024 20:40:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sIwg1-00028s-Sz; Sun, 16 Jun 2024 20:40:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xy2ntVYtBJhsY4cl86V7/6//7NVXTRRoulmvnUy71Zk=; b=XBD5MtRKtOr34M7JXP6d//l1sd
	VpuqPccvE0BqKHvMAgoQINEayv9jJ3ZCdRIgmLCV+w1lJ88XyvqX1l5/GdGQ+BsxUkd3dMBVmBp05
	r3+1AvHsIXcVghu/hi+w23p1UEa/ppfWK/IdU/Hn+O7Sf4mDjlSW8oN7oBr8dYXGVKes=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186370-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-6.1 test] 186370: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-6.1:test-armhf-armhf-xl-multivcpu:host-ping-check-xen:fail:heisenbug
    linux-6.1:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=eb44d83053d66372327e69145e8d2fa7400a4991
X-Osstest-Versions-That:
    linux=ae9f2a70d69e9c840ee1eda201f09662ca7e2038
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 Jun 2024 20:40:37 +0000

flight 186370 linux-6.1 real [real]
flight 186373 linux-6.1 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186370/
http://logs.test-lab.xenproject.org/osstest/logs/186373/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-multivcpu 10 host-ping-check-xen fail pass in 186373-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186373 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186373 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186320
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186320
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186320
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186320
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186320
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186320
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                eb44d83053d66372327e69145e8d2fa7400a4991
baseline version:
 linux                ae9f2a70d69e9c840ee1eda201f09662ca7e2038

Last test of basis   186320  2024-06-12 09:12:33 Z    4 days
Testing same since   186370  2024-06-16 12:15:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Potapenko <glider@google.com>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Andreas Larsson <andreas@gaisler.com>
  Andrew Morton <akpm@linux-foundation.org>
  Animesh Manna <animesh.manna@intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Arnd Bergmann <arnd@arndb.de>
  Avri Altman <avri.altman@wdc.com>
  Baokun Li <libaokun1@huawei.com>
  Bitterblue Smith <rtl8821cerfe2@gmail.com>
  Bjorn Andersson <andersson@kernel.org>
  Bob Zhou <bob.zhou@amd.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Brian Johannesmeyer <bjohannesmeyer@gmail.com>
  Cai Xinchen <caixinchen1@huawei.com>
  Chaitanya Kumar Borah <chaitanya.kumar.borah@intel.com>
  Chao Yu <chao@kernel.org>
  Christian Brauner <brauner@kernel.org>
  Christian König <christian.koenig@amd.com>
  Christoffer Sandberg <cs@tuxedo.de>
  Coly Li <colyli@suse.de>
  Damien Le Moal <dlemoal@kernel.org>
  Dan Gora <dan.gora@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Thompson <daniel.thompson@linaro.org>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dhananjay Ugwekar <Dhananjay.Ugwekar@amd.com>
  Dominique Martinet <asmadeus@codewreck.org>
  Elliot Berman <quic_eberman@quicinc.com> # sm8650-qrd
  Enzo Matsumiya <ematsumiya@suse.de>
  Eric Dumazet <edumazet@google.com>
  Fan Yu <fan.yu9@zte.com.cn>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Frank Li <Frank.Li@nxp.com>
  Frank van der Linden <fvdl@google.com>
  Gautham R. Shenoy <gautham.shenoy@amd.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Haorong Lu <ancientmodern4@gmail.com>
  Harald Freudenberger <freude@linux.ibm.com>
  Heiko Carstens <hca@linux.ibm.com>
  Helge Deller <deller@gmx.de>
  Helge Deller <deller@kernel.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Ingo Molnar <mingo@kernel.org>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Johan Hovold <johan+linaro@kernel.org>
  John David Anglin <dave.anglin@bell.net>
  Jon Hunter <jonathanh@nvidia.com>
  Jorge Ramirez-Ortiz <jorge@foundries.io>
  Judith Mendez <jm@ti.com>
  Justin Stitt <justinstitt@google.com>
  Kai Vehmanen <kai.vehmanen@intel.com>
  Konrad Dybcio <konrad.dybcio@linaro.org>
  Krzysztof Kozlowski <krzk@kernel.org>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Lee Jones <lee@kernel.org>
  Li Ma <li.ma@amd.com>
  Liam R. Howlett <Liam.Howlett@oracle.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marc Dionne <marc.dionne@auristor.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc Zyngier <maz@kernel.org>
  Mario Limonciello <mario.limonciello@amd.com>
  Marius Fleischer <fleischermarius@gmail.com>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mat Martineau <martineau@kernel.org>
  Mateusz Jończyk <mat.jonczyk@o2.pl>
  Matthew Mirvish <matthew@mm12.xyz>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  Maulik Shah <quic_mkshah@quicinc.com>
  Mauro Carvalho Chehab <mchehab@kernel.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Miguel Ojeda <ojeda@kernel.org>
  Mike Gilbert <floppym@gentoo.org>
  Nathan Chancellor <nathan@kernel.org>
  Nick Bowler <nbowler@draconx.ca>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Oliver Upton <oliver.upton@linux.dev>
  Omar Sandoval <osandov@fb.com>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paul E. McKenney <paulmck@kernel.org>
  Pavel Machek (CIP) <pavel@denx.de>
  Peng Zhang <zhangpeng.00@bytedance.com>
  Peter Jung <ptr1337@cachyos.org>
  Peter Schneider <pschneider1968@googlemail.com>
  Ping-Ke Shih <pkshih@realtek.com>
  Puranjay Mohan <puranjay@kernel.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Roman Gushchin <roman.gushchin@linux.dev>
  Ron Economos <re@w6rz.net>
  Ryan Roberts <ryan.roberts@arm.com>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Salvatore Bonaccorso <carnil@debian.org>
  Sam Ravnborg <sam@ravnborg.org>
  SeongJae Park <sj@kernel.org>
  Sergey Shtylyov <s.shtylyov@omp.ru>
  Shradha Gupta <shradhagupta@linux.microsoft.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sidhartha Kumar <sidhartha.kumar@oracle.com>
  Song Liu <song@kernel.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Steve French <stfrench@microsoft.com>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vitaly Chikunov <vt@altlinux.org>
  Werner Sembach <wse@tuxedocomputers.com>
  Wim Van Sebroeck <wim@linux-watchdog.org>
  xu xin <xu.xin16@zte.com.cn>
  Yang Xiwen <forbidden405@outlook.com>
  Yu Kuai <yukuai3@huawei.com>
  Zheyu Ma <zheyuma97@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   ae9f2a70d69e..eb44d83053d6  eb44d83053d66372327e69145e8d2fa7400a4991 -> tested/linux-6.1


From xen-devel-bounces@lists.xenproject.org Sun Jun 16 23:01:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 Jun 2024 23:01:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741539.1148148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIysC-0003JU-UG; Sun, 16 Jun 2024 23:01:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741539.1148148; Sun, 16 Jun 2024 23:01:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sIysC-0003JN-RL; Sun, 16 Jun 2024 23:01:20 +0000
Received: by outflank-mailman (input) for mailman id 741539;
 Sun, 16 Jun 2024 23:01:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SkS0=NS=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sIysA-0003JH-UB
 for xen-devel@lists.xenproject.org; Sun, 16 Jun 2024 23:01:19 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53b15056-2c34-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 01:01:15 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 522C7CE0E5D;
 Sun, 16 Jun 2024 23:01:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2CAA0C2BD10;
 Sun, 16 Jun 2024 23:01:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53b15056-2c34-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718578869;
	bh=K48GiByJehCmQjwi20pmfnU+WsCyr4S6WxHNS+fcYIA=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=gczAGXT2JgNBboue0aH6XDCajaGt0Tn9T6RNwUC8NdBoTEgjtTbwpmUfCPkjFQ9wM
	 MM968k+DIaMHgn73yxqDo9X2C3EJ5/755Y+99nXMZx497l32DWuxyKEgkUS1Mfauzi
	 zlSFlVj6tWpctoPTJ+wOY0xCMexSj4HtxdjXI3+rZfPjj6bie6Cukz6u7sVo5A7bEU
	 9rKTuHOjY9Tr5Ii5b3jprMmIgspdDlLslR4qSbgMPqGu86/2ypBmGpS+m4Lh9tPaWA
	 jbUd1obFjKTKufNm0BQAftChaDznmaK4Iq8S3q+7ZRNQrBkFu3/tvjG2xwv4gQQlB4
	 JHm9Ls++1gm6g==
Message-ID: <5a697233-0611-459d-b889-2e0133bbb541@kernel.org>
Date: Mon, 17 Jun 2024 08:01:04 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-3-hch@lst.de>
 <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
 <20240613093918.GA27629@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240613093918.GA27629@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/13/24 18:39, Christoph Hellwig wrote:
> On Tue, Jun 11, 2024 at 02:51:24PM +0900, Damien Le Moal wrote:
>>> +	if (sdkp->device->type == TYPE_ZBC)
>>
>> Nit: use sd_is_zoned() here ?
> 
> Actually - is there much point in even keeping sd_is_zoned now that the
> host-aware support is removed?  Just open-coding the type check isn't
> any more code, and is probably easier to follow.

Removing this helper is fine by me. There are only 2 call sites in sd.c, and
some of the 4 calls in sd_zbc.c are not really needed:
1) The call in sd_zbc_print_zones() is not needed at all since this function is
called only for a zoned drive, from sd_zbc_revalidate_zones().
2) The calls in sd_zbc_report_zones() and sd_zbc_cmnd_checks() are probably
useless as these are called only for zoned drives in the first place. The checks
would be useful only for passthrough commands, but then we do not really care
about those, and the user will get a failure anyway if they try to issue ZBC
commands to a non-ZBC drive.
3) That leaves only the call in sd_zbc_read_zones(), but that check can probably
be moved to sd.c to conditionally call sd_zbc_read_zones().

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 00:39:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 00:39:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741546.1148158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ0OQ-000684-Ar; Mon, 17 Jun 2024 00:38:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741546.1148158; Mon, 17 Jun 2024 00:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ0OQ-00067x-8C; Mon, 17 Jun 2024 00:38:42 +0000
Received: by outflank-mailman (input) for mailman id 741546;
 Mon, 17 Jun 2024 00:38:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Jjg=NT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sJ0OP-00067r-3U
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 00:38:41 +0000
Received: from fout2-smtp.messagingengine.com (fout2-smtp.messagingengine.com
 [103.168.172.145]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef10fbf6-2c41-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 02:38:37 +0200 (CEST)
Received: from compute7.internal (compute7.nyi.internal [10.202.2.48])
 by mailfout.nyi.internal (Postfix) with ESMTP id 32A8613800DF;
 Sun, 16 Jun 2024 20:38:36 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute7.internal (MEProxy); Sun, 16 Jun 2024 20:38:36 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Sun,
 16 Jun 2024 20:38:35 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef10fbf6-2c41-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718584716;
	 x=1718671116; bh=buHh/g/ZcgkgzTrs9IOuh36gYSiy5F9j6JuWTipj/KU=; b=
	EKhVMkDZArlB22SdjBx2VDzLwk8U7Q32ZAul4dP18xR/pSp6ad+LDWt28ccs/p4O
	nbUk+3G282VeEPa/oX6+hQoAesAUHxau0ird7NaLAMOzR5pHs9NFK/YuODjb0q0/
	BH24uAhwNsml2icQv+X/IcDC107eJJgaZlkwd8w0arojS72rOrkASqmDSvmIcrUX
	s0tJDkKOUFXPMmvZq3LIbF1LB0sWno0+p0DJIWWHfB1+y3nrvK7Euntl6vHZkfMR
	Ec1+G8Rl+KZbrZU0ZzZq0F0SWnFTnOgFWJj0EmYLNRdGuXWTg5fEouv0YxF2CB6s
	A7ukuKyAiCgCC9tA1qbxKg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718584716; x=1718671116; bh=buHh/g/ZcgkgzTrs9IOuh36gYSiy
	5F9j6JuWTipj/KU=; b=FBy+aty2AcBZ5nMX/8ylG79aNLHk7L3DxwkUapEvepL+
	ddysaDcQRfyFQVFYNk/SFcZw5kZhWaKB3DrvXFm9oA52xRv9uEMVOMStYBcNLWlV
	zFopOYolSoqW5w5rVJckKJkWEo4Lu8UA4Av3sCRcSLhJta53RUgCB/IUcy5YLqPY
	DlhSzbSO5VIe6eQAcZfC3qOtD0Ub1DyiQxP2CCu2CPNuBScyEbmur0edsTvAUS2z
	djICsqGrpfvb/pJNFb1C108gqjj/dn0lz15X6WUj7Vl+/bFWWblCr4uIz1MsMSvE
	0GSclkgKA4URAguh0sCruzd37I4sZHkJY1WXx/fMBQ==
X-ME-Sender: <xms:i4VvZjA1jHBKLSN9wOtRhpc3S1kNV4vzybekT1DtsTVoUuwJerDrqg>
    <xme:i4VvZpjhsdOA6HovByvYSH_xQr_ReYKZSELL5Ptap6_bpv9FoJDgEObv52MoWt7lj
    3SqRhue2_XKQEc>
X-ME-Received: <xmr:i4VvZun167JVpdkErYf-UHbfkQHeHk4zqRe12r2ZsIZQlM3WuoWNgTrOj7aTT6GJToCf9iSKxTih7pYjVBFlySNg9kp51edKrBaqxjyP14ft-Esh>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedvgedgfeejucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepvdejteegkefhteduhffgteffgeff
    gfduvdfghfffieefieekkedtheegteehffelnecuvehluhhsthgvrhfuihiivgeptdenuc
    frrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhs
    lhgrsgdrtghomh
X-ME-Proxy: <xmx:i4VvZlx2FjNBBDq_gssfpWZj_9nas5fJEeoZR4NZ_2DtRWTwlyaTzQ>
    <xmx:i4VvZoTR-g82EzePmGoYNbKHEbFyZxiF55y8XsBds-mWgzv0lhcuig>
    <xmx:i4VvZoZpnkQaHBcAVEXOvsHKsXgm0aIanIbh8a7GObJZt2j-g4R-3g>
    <xmx:i4VvZpTjZnj4u3OstGpmPuql78rruO3CfDhm0gfiinCNtEFLKy-8Jw>
    <xmx:jIVvZnFGMXHB4Xij1uSW9_fyC6WigPn2-Mk_PTvzWtXOLDXhbjb3LVYc>
Feedback-ID: iac594737:Fastmail
Date: Sun, 16 Jun 2024 20:38:19 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <Zm-FidjSK3mOieSC@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="XjN9Y7EgndXGttCA"
Content-Disposition: inline
In-Reply-To: <ZmwByZnn5vKcVLKI@macbook>


--XjN9Y7EgndXGttCA
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sun, 16 Jun 2024 20:38:19 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen

On Fri, Jun 14, 2024 at 10:39:37AM +0200, Roger Pau Monné wrote:
> On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> > On 14.06.2024 09:21, Roger Pau Monné wrote:
> > > On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> > >> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > >>> GPU acceleration requires that pageable host memory be able to be mapped
> > >>> into a guest.
> > >>
> > >> I'm sure it was explained in the session, which sadly I couldn't attend.
> > >> I've been asking Ray and Xenia the same before, but I'm afraid it still
> > >> hasn't become clear to me why this is a _requirement_. After all that's
> > >> against what we're doing elsewhere (i.e. so far it has always been
> > >> guest memory that's mapped in the host). I can appreciate that it might
> > >> be more difficult to implement, but avoiding to violate this fundamental
> > >> (kind of) rule might be worth the price (and would avoid other
> > >> complexities, of which there may be lurking more than what you enumerate
> > >> below).
> > >
> > > My limited understanding (please someone correct me if wrong) is that
> > > the GPU buffer (or context I think it's also called?) is always
> > > allocated from dom0 (the owner of the GPU).  The underlying memory
> > > addresses of such buffer need to be mapped into the guest.  The
> > > buffer backing memory might be GPU MMIO from the device BAR(s) or
> > > system RAM, and such buffer can be paged by the dom0 kernel at any
> > > time (iow: changing the backing memory from MMIO to RAM or vice
> > > versa).  Also, the buffer must be contiguous in physical address
> > > space.
> >
> > This last one in particular would of course be a severe restriction.
> > Yet: There's an IOMMU involved, isn't there?
>
> Yup, IIRC that's why Ray said it was much easier for them to
> support VirtIO GPUs from a PVH dom0 rather than a classic PV one.
>
> It might be easier to implement from a classic PV dom0 if there's
> pv-iommu support, so that dom0 can create its own contiguous memory
> buffers from the device PoV.

What makes PVH an improvement here?  I thought PV dom0 uses an identity
mapping for the IOMMU, while a PVH dom0 uses an IOMMU that mirrors the
dom0 second-stage page tables.  In both cases, the device physical
addresses are identical to dom0's physical addresses.

PV is terrible for many reasons, so I'm okay with focusing on PVH dom0,
but I'd like to know why there is a difference.

> > > I'm not sure it's possible to ensure that when using system RAM such
> > > memory comes from the guest rather than the host, as it would likely
> > > require some very intrusive hooks into the kernel logic, and
> > > negotiation with the guest to allocate the requested amount of
> > > memory and hand it over to dom0.  If the maximum size of the buffer is
> > > known in advance maybe dom0 can negotiate with the guest to allocate
> > > such a region and grant it access to dom0 at driver attachment time.
> >
> > Besides the thought of transiently converting RAM to kind-of-MMIO, this
>
> As a note here, changing the type to MMIO would likely involve
> modifying the EPT/NPT tables to propagate the new type.  On a PVH dom0
> this would likely involve shattering superpages in order to set the
> correct memory types.
>
> Depending on how often and how random those system RAM changes are
> necessary this could also create contention on the p2m lock.
>
> > makes me think of another possible option: Could Dom0 transfer ownership
> > of the RAM that wants mapping in the guest (remotely resembling
> > grant-transfer)? Would require the guest to have ballooned down enough
> > first, of course. (In both cases it would certainly need working out how
> > the conversion / transfer back could be made work safely and reasonably
> > cleanly.)
>
> Maybe.  The fact the guest needs to balloon down that amount of memory
> seems weird to me, as from the guest PoV that mapped memory is
> MMIO-like and not system RAM.

I don't like it either.  Furthermore, this would require changes to the
virtio-GPU driver in the guest, which I'd prefer to avoid.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
Invisible Things Lab

--XjN9Y7EgndXGttCA
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZvhYkACgkQsoi1X/+c
IsFL3w/+LBsUSGk/DJ2vcKSLsZTx6RXXBzVUVEy/cQ9Irx+j6O2zo+Eia0NSFhNf
PjZ0MgbveQNIznnIhlhPA1HzTmiLXyggr3aqPfneRLnxApf/FJZAetzTKZq3Lpxx
Rh8k7MTFwrjEqw1C/XFsDP3wvnhpnVbEv/s1l4kbodc0CnPLQ8uPo0bCt19t7TGZ
Y+0vG6cAQ3dXwmuf39P14CsxqhnAWroqPmOdoQVb4VKJIfLBTMHh5QrzhrNWZbQd
tWxeSk4unq1j6crV5Zpy6FaBKX1mIq498FhxYFcpTnToCrknSK8j33ObDG3WFEPd
/k4HYJk0CTwv2XEKvL9mhPwBB7JzuAGPuNbgECtUiRGWQai2nVFPmeKld1qgqab7
37LfSnoT3wE64yQIngCc02xdT+zHdxLfMOy4Mzp+cTRl9qbA1W0GFbDF7PyMDUHk
1VM7f3Ij9mbSJjNf6NergKUmIC3ALDq7jnVxDAKQKRxtM3oZSYUNpvHpZnv8SVng
SkXShQcVnXFqxAE1iay6cCe7QEutyVTifcpVlp31+1DHxYb6b8xbxC7exPUn2vbB
DWCvbdJFQpBZLHEXbVTNbMPGbtvxJbStNx+ooYguRMkGfsJ9oB6+t3zahDxV+vl2
wIvP2obafKGq9wJYA2byyc4i11PqgomKGsTRl2n8vE3Hrz7JzcU=
=KfvV
-----END PGP SIGNATURE-----

--XjN9Y7EgndXGttCA--


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 02:47:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 02:47:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741559.1148168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ2OT-00037L-Sf; Mon, 17 Jun 2024 02:46:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741559.1148168; Mon, 17 Jun 2024 02:46:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ2OT-00037E-P7; Mon, 17 Jun 2024 02:46:53 +0000
Received: by outflank-mailman (input) for mailman id 741559;
 Mon, 17 Jun 2024 02:46:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ2OS-000374-OV; Mon, 17 Jun 2024 02:46:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ2OS-000280-EB; Mon, 17 Jun 2024 02:46:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ2OS-0003Tz-2n; Mon, 17 Jun 2024 02:46:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ2OS-0000uZ-2D; Mon, 17 Jun 2024 02:46:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KcDjFrg59t2/S006il4jY4imyRY2QkGJC9vbILNOp0w=; b=dUtBFeCLw41iwV8QTswCutZbdj
	8D8qvU2pf+SggmDbGytWERr1lZWO3jmDSjbWUVCVD4CH3TcKWXtkdriEBdPL+XM47UAw/8zFaiTB1
	LdeIIk9BA+46KuOKcvRaHVFI3ILPvn9WbGkWlA8vzlsLugWJq7TVnD7XlgUgbYmgGyzM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186372-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186372: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-raw:debian-di-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b5beaa44747bddbabb338377340244f56465cd7d
X-Osstest-Versions-That:
    linux=a3e18a540541325a8c8848171f71e0d45ad30b2c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 02:46:52 +0000

flight 186372 linux-linus real [real]
flight 186374 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186372/
http://logs.test-lab.xenproject.org/osstest/logs/186374/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-examine      8 reboot              fail pass in 186374-retest
 test-armhf-armhf-xl-raw      12 debian-di-install   fail pass in 186374-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186369

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186374 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186374 never pass
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186369
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186369
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186369
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186369
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186369
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186369
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186369
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b5beaa44747bddbabb338377340244f56465cd7d
baseline version:
 linux                a3e18a540541325a8c8848171f71e0d45ad30b2c

Last test of basis   186369  2024-06-16 07:09:37 Z    0 days
Testing same since   186372  2024-06-16 18:43:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Rafael J. Wysocki" <rafael@kernel.org>
  "Rob Herring (Arm)" <robh@kernel.org>
  Aapo Vienamo <aapo.vienamo@linux.intel.com>
  Adam Rizkalla <ajarizzo@gmail.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Amit Sunil Dhamne <amitsd@google.com>
  Andrey Konovalov <andreyknvl@gmail.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Angel Iglesias <ang.iglesiasg@gmail.com>
  Angelo Dureghello <adureghello@baylibre.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Damien Le Moal <dlemoal@kernel.org>
  David Lechner <dlechner@baylibre.com>
  Dirk Behme <dirk.behme@de.bosch.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dmitry Vyukov <dvyukov@google.com>
  Doug Brown <doug@schmorgal.com>
  Douglas Anderson <dianders@chromium.org>
  Dumitru Ceclan <dumitru.ceclan@analog.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Harshit Mogalapalli <harshit.m.mogalapalli@oracle.com>
  Hector Martin <marcan@marcan.st>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Hugo Villeneuve <hvilleneuve@dimonoff.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jason Chen <jason.z.chen@intel.com>
  Jean-Baptiste Maneyrol <jean-baptiste.maneyrol@tdk.com>
  Jiri Slaby <jirislaby@kernel.org>
  Johan Hovold <johan+linaro@kernel.org>
  John Ernberg <john.ernberg@actia.se>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Kuangyi Chiang <ki.chiang65@gmail.com>
  Kyle Tso <kyletso@google.com>
  Lee Jones <lee@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lukas Wunner <lukas@wunner.de>
  Marc Ferland <marc.ferland@sonatest.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Niklas Cassel <cassel@kernel.org>
  Peter Chen <peter.chen@kernel.org>
  Pierre Tomon <pierretom+12@ik.me>
  Rob Herring (Arm) <robh@kernel.org>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Shichao Lai <shichaorai@gmail.com>
  Stefan Wahren <wahrenst@gmx.net>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Weißschuh <linux@weissschuh.net>
  Tomas Winkler <tomas.winkler@intel.com>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadym Krevs <vkrevs@yahoo.com>
  Vasileios Amoiridis <vassilisamir@gmail.com>
  Vincent Mailhol <mailhol.vincent@wanadoo.fr>
  Wentong Wu <wentong.wu@intel.com>
  Yazen Ghannam <yazen.ghannam@amd.com>
  Yongzhi Liu <hyperlyzcs@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a3e18a540541..b5beaa44747b  b5beaa44747bddbabb338377340244f56465cd7d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 03:27:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 03:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741570.1148178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ31d-0008Mw-Q5; Mon, 17 Jun 2024 03:27:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741570.1148178; Mon, 17 Jun 2024 03:27:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ31d-0008Mp-NK; Mon, 17 Jun 2024 03:27:21 +0000
Received: by outflank-mailman (input) for mailman id 741570;
 Mon, 17 Jun 2024 03:27:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ31c-0008Mf-G1; Mon, 17 Jun 2024 03:27:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ31c-0003IE-9O; Mon, 17 Jun 2024 03:27:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ31c-0004Sf-0m; Mon, 17 Jun 2024 03:27:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ31c-0001sH-0O; Mon, 17 Jun 2024 03:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VHztLn89YGOacChiQgYRx5abaNG9rCIzc+mKwbW+o9Q=; b=CVPvlS6WzZbcxwz4ki/jQgqS89
	FDxJ+B9BGzZC47JLk1huBJZq1vweUQN72V5zcSjfWyiH+sm2ZkN1yfYjGV2s2TONDkLF9PLqdQm3i
	ctSDrtD1lUN6VgShoyWQt6tW72I3DLWnEcN4Vb5DEH+mlVItbldLhtZ/BbZOLURR18mk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186375-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186375: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=a7dbd2ac7b359644b4961b027d711893132cdb00
X-Osstest-Versions-That:
    ovmf=aa99d36be9ad68d8d0a99896332a9b5da10cf343
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 03:27:20 +0000

flight 186375 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186375/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a7dbd2ac7b359644b4961b027d711893132cdb00
baseline version:
 ovmf                 aa99d36be9ad68d8d0a99896332a9b5da10cf343

Last test of basis   186362  2024-06-15 11:42:45 Z    1 days
Testing same since   186375  2024-06-17 01:41:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wenxing Hou <wenxing.hou@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   aa99d36be9..a7dbd2ac7b  a7dbd2ac7b359644b4961b027d711893132cdb00 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 04:54:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 04:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741579.1148188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ4Nb-0001P2-Jz; Mon, 17 Jun 2024 04:54:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741579.1148188; Mon, 17 Jun 2024 04:54:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ4Nb-0001Ov-GZ; Mon, 17 Jun 2024 04:54:07 +0000
Received: by outflank-mailman (input) for mailman id 741579;
 Mon, 17 Jun 2024 04:54:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SiCA=NT=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sJ4NZ-0001Op-M2
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 04:54:05 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ccc3cc8-2c65-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 06:54:01 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id EB40668B05; Mon, 17 Jun 2024 06:53:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ccc3cc8-2c65-11ef-b4bb-af5377834399
Date: Mon, 17 Jun 2024 06:53:56 +0200
From: Christoph Hellwig <hch@lst.de>
To: Damien Le Moal <dlemoal@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
Message-ID: <20240617045356.GA16277@lst.de>
References: <20240611051929.513387-1-hch@lst.de> <20240611051929.513387-3-hch@lst.de> <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org> <20240613093918.GA27629@lst.de> <5a697233-0611-459d-b889-2e0133bbb541@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5a697233-0611-459d-b889-2e0133bbb541@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 17, 2024 at 08:01:04AM +0900, Damien Le Moal wrote:
> On 6/13/24 18:39, Christoph Hellwig wrote:
> > On Tue, Jun 11, 2024 at 02:51:24PM +0900, Damien Le Moal wrote:
> >>> +	if (sdkp->device->type == TYPE_ZBC)
> >>
> >> Nit: use sd_is_zoned() here ?
> > 
> > Actually - is there much point in even keeping sd_is_zoned now that
> > the host-aware support is removed?  Just open coding the type check
> > isn't any more code, and probably easier to follow.
> 
> Removing this helper is fine by me.

FYI, I removed it yesterday, but haven't done many of the cleanups
suggested here.  We should probably do those in a follow-up, including
removing the !ZBC check in sd_zbc_check_zoned_characteristics.



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:04:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741589.1148197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5T0-0000ty-DJ; Mon, 17 Jun 2024 06:03:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741589.1148197; Mon, 17 Jun 2024 06:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5T0-0000tr-Al; Mon, 17 Jun 2024 06:03:46 +0000
Received: by outflank-mailman (input) for mailman id 741589;
 Mon, 17 Jun 2024 06:03:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOKI=NT=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sJ5Sz-0000tk-AB
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:03:45 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 58cf3145-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:03:43 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 2D34ACE0E51;
 Mon, 17 Jun 2024 06:03:40 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 30A36C2BD10;
 Mon, 17 Jun 2024 06:03:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58cf3145-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718604219;
	bh=m0CKH+FHwbkUOr+sSrcVkW5ESZFXp5kIa1//vjrk4yE=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=ZHL4hpDIzus9MORWkAbgDJINc4LVHx93lSfKixSkm6WuLxyXO8kkYM44k+wcOnr5y
	 tkFWuqAKw955Q/rvEo4d1MS5n/NwL4TdN4ssl8cV7ZAZLbnGhDKwhC73QRhgwv+NZq
	 4MHo4ahMA2gZVzsFtyj1OAL5yJiTS0PIsFOfjpr6Q4Iz0vizRFUgFcQcYzSr+vYOlU
	 9makxthgJ0C1BEmV0I9GbD23VHSGMav9AQ29gAEuujqqpGZAlwMM1nxM2wawtGe3Za
	 Rfe2pF85RFEJcZaeOU4UkFt4BeHCBkbgQEQvS0KKa82U3uwq4hheAWgp20LXEFXakj
	 o5wuVRw+MEylQ==
Message-ID: <bf52121f-38f2-4789-b545-7c6ed0fe55b2@kernel.org>
Date: Mon, 17 Jun 2024 15:03:33 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 02/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240611051929.513387-1-hch@lst.de>
 <20240611051929.513387-3-hch@lst.de>
 <40ca8052-6ac1-4c1b-8c39-b0a7948839f8@kernel.org>
 <20240613093918.GA27629@lst.de>
 <5a697233-0611-459d-b889-2e0133bbb541@kernel.org>
 <20240617045356.GA16277@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240617045356.GA16277@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/17/24 13:53, Christoph Hellwig wrote:
> On Mon, Jun 17, 2024 at 08:01:04AM +0900, Damien Le Moal wrote:
>> On 6/13/24 18:39, Christoph Hellwig wrote:
>>> On Tue, Jun 11, 2024 at 02:51:24PM +0900, Damien Le Moal wrote:
>>>>> +	if (sdkp->device->type == TYPE_ZBC)
>>>>
>>>> Nit: use sd_is_zoned() here ?
>>>
>>> Actually - is there much point in even keeping sd_is_zoned now that
>>> the host-aware support is removed?  Just open coding the type check
>>> isn't any more code, and probably easier to follow.
>>
>> Removing this helper is fine by me.
> 
> FYI, I removed it yesterday, but haven't done many of the cleanups
> suggested here.  We should probably do those in a follow-up, including
> removing the !ZBC check in sd_zbc_check_zoned_characteristics.

OK. I will send that once your series is queued.

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:05:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:05:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741595.1148213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V8-0001T5-0G; Mon, 17 Jun 2024 06:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741595.1148213; Mon, 17 Jun 2024 06:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V7-0001SY-SE; Mon, 17 Jun 2024 06:05:57 +0000
Received: by outflank-mailman (input) for mailman id 741595;
 Mon, 17 Jun 2024 06:05:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5V6-0001PY-KM
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:05:56 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5bb2015-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:05:51 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Un-00000009IBT-13tu; Mon, 17 Jun 2024 06:05:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5bb2015-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=qPBJslfqWjz6ccntprxT8r9b8bc2Nh8I2X0y5YXiLrY=; b=H7sPQj5uSpQ8xoiZtGJBid1xZu
	if6mAqA0Jfn3igEw6WdszQJfz5gH3lyZMnstvAzV3P6serQmkAQyiADHD9wWOOPOF9HP4DBW1GR2g
	jgbtirHJj+0wShDJc2fQyh1RCasnfNjxJOqpGUt9TfDBEEIsM7YbcF1ehsywJSRiXEtWqa5IAPivq
	Whnv97Fge0UeW/m//VhN8O0fP1mMPRjqdJRnrVxH4yr4HWBwZjknPZIFb7Mpe61CRsUHpOZfdP1no
	4LhjEos8Iy6NCRKQSI8NlGYeodCulXCzsT3f2+L1F9TfNW2bq7y/erGTZtNK0CymRxBgerAU8FCd5
	7vMi+W6Q==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 01/26] xen-blkfront: don't disable cache flushes when they fail
Date: Mon, 17 Jun 2024 08:04:28 +0200
Message-ID: <20240617060532.127975-2-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

blkfront has always had a robust negotiation protocol for detecting a
write cache.  Stop simply disabling cache flushes in the block layer, as
the flags handling is moving to the atomic queue limits API, which needs
user context to freeze the queue.  Instead handle the case of cleared
feature flags inside blkfront.  This removes old debug code that checked
for such a mismatch, which was previously impossible to hit, including
the check for passthrough requests that blkfront never used in the
first place.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/xen-blkfront.c | 44 +++++++++++++++++++-----------------
 1 file changed, 23 insertions(+), 21 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 9b4ec3e4908cce..851b03844edd13 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -788,6 +788,11 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 			 * A barrier request a superset of FUA, so we can
 			 * implement it the same way.  (It's also a FLUSH+FUA,
 			 * since it is guaranteed ordered WRT previous writes.)
+			 *
+			 * Note that we can end up here with a FUA write and
+			 * the flags cleared.  This happens when the flag was
+			 * run-time disabled after a failing I/O, and we'll
+			 * simply submit it as a normal write.
 			 */
 			if (info->feature_flush && info->feature_fua)
 				ring_req->operation =
@@ -795,8 +800,6 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 			else if (info->feature_flush)
 				ring_req->operation =
 					BLKIF_OP_FLUSH_DISKCACHE;
-			else
-				ring_req->operation = 0;
 		}
 		ring_req->u.rw.nr_segments = num_grant;
 		if (unlikely(require_extra_req)) {
@@ -887,16 +890,6 @@ static inline void flush_requests(struct blkfront_ring_info *rinfo)
 		notify_remote_via_irq(rinfo->irq);
 }
 
-static inline bool blkif_request_flush_invalid(struct request *req,
-					       struct blkfront_info *info)
-{
-	return (blk_rq_is_passthrough(req) ||
-		((req_op(req) == REQ_OP_FLUSH) &&
-		 !info->feature_flush) ||
-		((req->cmd_flags & REQ_FUA) &&
-		 !info->feature_fua));
-}
-
 static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
 			  const struct blk_mq_queue_data *qd)
 {
@@ -908,12 +901,22 @@ static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
 	rinfo = get_rinfo(info, qid);
 	blk_mq_start_request(qd->rq);
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
-	if (RING_FULL(&rinfo->ring))
-		goto out_busy;
 
-	if (blkif_request_flush_invalid(qd->rq, rinfo->dev_info))
-		goto out_err;
+	/*
+	 * Check if the backend actually supports flushes.
+	 *
+	 * While the block layer won't send us flushes if we don't claim to
+	 * support them, the Xen protocol allows the backend to revoke support
+	 * at any time.  That is of course a really bad idea and dangerous, but
+	 * has been allowed for 10+ years.  In that case we simply clear the
+	 * flags, and directly return here for an empty flush and ignore the
+	 * FUA flag later on.
+	 */
+	if (unlikely(req_op(qd->rq) == REQ_OP_FLUSH && !info->feature_flush))
+		goto complete;
 
+	if (RING_FULL(&rinfo->ring))
+		goto out_busy;
 	if (blkif_queue_request(qd->rq, rinfo))
 		goto out_busy;
 
@@ -921,14 +924,14 @@ static blk_status_t blkif_queue_rq(struct blk_mq_hw_ctx *hctx,
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 	return BLK_STS_OK;
 
-out_err:
-	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
-	return BLK_STS_IOERR;
-
 out_busy:
 	blk_mq_stop_hw_queue(hctx);
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 	return BLK_STS_DEV_RESOURCE;
+complete:
+	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+	blk_mq_end_request(qd->rq, BLK_STS_OK);
+	return BLK_STS_OK;
 }
 
 static void blkif_complete_rq(struct request *rq)
@@ -1627,7 +1630,6 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 					blkif_req(req)->error = BLK_STS_OK;
 				info->feature_fua = 0;
 				info->feature_flush = 0;
-				xlvbd_flush(info);
 			}
 			fallthrough;
 		case BLKIF_OP_READ:
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:05:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:05:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741594.1148207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V7-0001Pq-Oz; Mon, 17 Jun 2024 06:05:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741594.1148207; Mon, 17 Jun 2024 06:05:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V7-0001Pj-Lb; Mon, 17 Jun 2024 06:05:57 +0000
Received: by outflank-mailman (input) for mailman id 741594;
 Mon, 17 Jun 2024 06:05:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5V5-0001PY-Hw
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:05:56 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5997ad4-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:05:51 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Uk-00000009IBH-3zsu; Mon, 17 Jun 2024 06:05:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5997ad4-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=w5I58mRKj5AU6Yqi3NMuzFTOUn3zWDih3UgJBo1+OXs=; b=xHDYpaffUakf5PuOWBpPooKbf/
	9TfbzNtEmDZrZ632SAc4SRfUuVJxNjnIE5gNHB7SDcAJ13WXLVDplEzCkwbhxr8th8FA1AGmh61+R
	g24j1QTav3xOiX0eITNyRi5Mz2E8AK3xQKNReoItBJuLvJZGhaoHXPsRavxTQyGXej16tbWpW9fkb
	MiE3u8l6QDmidGewrW4mjUWDPWBwBicf+CUbBFJdBIXJq2Vt/dKLWmd1vyB4Oh0xvZA38NfBT/pij
	A+DvQuobhBHBxhKADMh8M+nXHtE0Qh4gwc7U1XvGwmsXXCd4Sd1vVhVl6cw/4+q61JkWAXyCyIYQD
	kLWvV1DA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: move features flags into queue_limits v2
Date: Mon, 17 Jun 2024 08:04:27 +0200
Message-ID: <20240617060532.127975-1-hch@lst.de>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Hi all,

this is the third and last major series to convert settings to
queue_limits for this merge window.  After a bunch of prep patches to
get various drivers in shape, it moves all the queue_flags that specify
driver-controlled features into the queue limits, so that they can be
set atomically and are kept separate from the blk-mq internal flags.

Note that I've only Cc'ed the maintainers for drivers with non-mechanical
changes as the Cc list is already huge.

This series sits on top of the for-6.11/block-limits branch.

A git tree is available here:

    git://git.infradead.org/users/hch/block.git block-limit-flags

Gitweb:

    http://git.infradead.org/?p=users/hch/block.git;a=shortlog;h=refs/heads/block-limit-flags


Changes since v1:
 - fix an inverted condition
 - fix the runtime flush disable in xen-blkfront
 - remove sd_is_zoned entirely
 - use SECTOR_SIZE in a few more places
 - fix REQ_NOWAIT disabling for dm targets that don't support it
 - fix typos
 - reword various commit logs

Diffstat:
 Documentation/block/writeback_cache_control.rst |   67 ++++----
 arch/m68k/emu/nfblock.c                         |    1 
 arch/um/drivers/ubd_kern.c                      |    3 
 arch/xtensa/platforms/iss/simdisk.c             |    5 
 block/blk-core.c                                |    7 
 block/blk-flush.c                               |   36 ++--
 block/blk-mq-debugfs.c                          |   13 -
 block/blk-mq.c                                  |   42 +++--
 block/blk-settings.c                            |   46 ++----
 block/blk-sysfs.c                               |  118 ++++++++-------
 block/blk-wbt.c                                 |    4 
 block/blk.h                                     |    2 
 drivers/block/amiflop.c                         |    5 
 drivers/block/aoe/aoeblk.c                      |    1 
 drivers/block/ataflop.c                         |    5 
 drivers/block/brd.c                             |    6 
 drivers/block/drbd/drbd_main.c                  |    6 
 drivers/block/floppy.c                          |    3 
 drivers/block/loop.c                            |   79 ++++------
 drivers/block/mtip32xx/mtip32xx.c               |    2 
 drivers/block/n64cart.c                         |    2 
 drivers/block/nbd.c                             |   24 +--
 drivers/block/null_blk/main.c                   |   13 -
 drivers/block/null_blk/zoned.c                  |    3 
 drivers/block/pktcdvd.c                         |    1 
 drivers/block/ps3disk.c                         |    8 -
 drivers/block/rbd.c                             |   12 -
 drivers/block/rnbd/rnbd-clt.c                   |   14 -
 drivers/block/sunvdc.c                          |    1 
 drivers/block/swim.c                            |    5 
 drivers/block/swim3.c                           |    5 
 drivers/block/ublk_drv.c                        |   21 +-
 drivers/block/virtio_blk.c                      |   37 ++--
 drivers/block/xen-blkfront.c                    |   53 +++---
 drivers/block/zram/zram_drv.c                   |    6 
 drivers/cdrom/gdrom.c                           |    1 
 drivers/md/bcache/super.c                       |    9 -
 drivers/md/dm-table.c                           |  183 +++++-------------------
 drivers/md/dm-zone.c                            |    2 
 drivers/md/dm-zoned-target.c                    |    2 
 drivers/md/dm.c                                 |   13 -
 drivers/md/md.c                                 |   40 -----
 drivers/md/raid5.c                              |    6 
 drivers/mmc/core/block.c                        |   42 ++---
 drivers/mmc/core/queue.c                        |   20 +-
 drivers/mmc/core/queue.h                        |    3 
 drivers/mtd/mtd_blkdevs.c                       |    9 -
 drivers/nvdimm/btt.c                            |    4 
 drivers/nvdimm/pmem.c                           |   14 -
 drivers/nvme/host/core.c                        |   33 ++--
 drivers/nvme/host/multipath.c                   |   24 ---
 drivers/nvme/host/zns.c                         |    3 
 drivers/s390/block/dasd_genhd.c                 |    1 
 drivers/s390/block/dcssblk.c                    |    2 
 drivers/s390/block/scm_blk.c                    |    5 
 drivers/scsi/iscsi_tcp.c                        |    8 -
 drivers/scsi/scsi_lib.c                         |    5 
 drivers/scsi/sd.c                               |   66 +++-----
 drivers/scsi/sd.h                               |    5 
 drivers/scsi/sd_zbc.c                           |   25 +--
 include/linux/blkdev.h                          |  119 ++++++++++-----
 61 files changed, 572 insertions(+), 728 deletions(-)


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:05:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:05:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741596.1148228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V9-0001tE-7u; Mon, 17 Jun 2024 06:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741596.1148228; Mon, 17 Jun 2024 06:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V9-0001t3-3P; Mon, 17 Jun 2024 06:05:59 +0000
Received: by outflank-mailman (input) for mailman id 741596;
 Mon, 17 Jun 2024 06:05:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5V8-0001Pt-1Z
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:05:58 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a75ff241-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:05:56 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Uq-00000009ICC-1LaY; Mon, 17 Jun 2024 06:05:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a75ff241-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=ycWXVfvI3dK1V2c7ItJhOtlEZyidcuD5U4kmQV06Zcw=; b=heSiK92MA+vKxOlE6EXLUE6On9
	7Zsrfed6mixhHUmpt2lI2hu+R9PUTTsRGr+PRYFt5uXNG0UIYBrn9VmJfN6V1jYlw7asNk6CVoOBU
	YMQZ5v5qc/XznfcoioTiECosf4MFH5HDIv7Cj5h+JFJSKwpa7i2h51gd5s+Y9Ij5OY2EZKJPkol2z
	GZKkuFfNKqqcAFsjjROAuUPyXjCegaRrQ/oIDlzuNmaO4lmNXJZ7btXZ9PiHz0mAQpUHcVdZKbRVM
	mbYcXnOsz6UsPwLxDp90icMVaOUjBPKmLSWISWsqvPA4jGbHvkH6Dob+66AoF0vVMDfse4q8/+/o9
	YheA9vxQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Bart Van Assche <bvanassche@acm.org>,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>
Subject: [PATCH 02/26] sd: remove sd_is_zoned
Date: Mon, 17 Jun 2024 08:04:29 +0200
Message-ID: <20240617060532.127975-3-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Since commit 7437bb73f087 ("block: remove support for the host aware zone
model"), only ZBC devices expose a zoned access model.  sd_is_zoned is
used to check for that, and should thus return false for host-aware
devices.

Replace the helper with a simple open-coded TYPE_ZBC check to fix this.

Fixes: 7437bb73f087 ("block: remove support for the host aware zone model")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 drivers/scsi/sd.c     |  6 +-----
 drivers/scsi/sd.h     |  5 -----
 drivers/scsi/sd_zbc.c | 13 ++++---------
 3 files changed, 5 insertions(+), 19 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index e01393ed42076b..664523048ce819 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -466,10 +466,6 @@ provisioning_mode_store(struct device *dev, struct device_attribute *attr,
 	if (sdp->type != TYPE_DISK)
 		return -EINVAL;
 
-	/* ignore the provisioning mode for ZBC devices */
-	if (sd_is_zoned(sdkp))
-		return count;
-
 	mode = sysfs_match_string(lbp_mode, buf);
 	if (mode < 0)
 		return -EINVAL;
@@ -2288,7 +2284,7 @@ static int sd_done(struct scsi_cmnd *SCpnt)
 	}
 
  out:
-	if (sd_is_zoned(sdkp))
+	if (sdkp->device->type == TYPE_ZBC)
 		good_bytes = sd_zbc_complete(SCpnt, good_bytes, &sshdr);
 
 	SCSI_LOG_HLCOMPLETE(1, scmd_printk(KERN_INFO, SCpnt,
diff --git a/drivers/scsi/sd.h b/drivers/scsi/sd.h
index 726f1613f6cb56..7603b3c67b233f 100644
--- a/drivers/scsi/sd.h
+++ b/drivers/scsi/sd.h
@@ -222,11 +222,6 @@ static inline sector_t sectors_to_logical(struct scsi_device *sdev, sector_t sec
 
 void sd_dif_config_host(struct scsi_disk *sdkp, struct queue_limits *lim);
 
-static inline int sd_is_zoned(struct scsi_disk *sdkp)
-{
-	return sdkp->zoned == 1 || sdkp->device->type == TYPE_ZBC;
-}
-
 #ifdef CONFIG_BLK_DEV_ZONED
 
 int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index f685838d9ed214..8cc9c025017961 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -232,7 +232,7 @@ int sd_zbc_report_zones(struct gendisk *disk, sector_t sector,
 	int zone_idx = 0;
 	int ret;
 
-	if (!sd_is_zoned(sdkp))
+	if (sdkp->device->type != TYPE_ZBC)
 		/* Not a zoned device */
 		return -EOPNOTSUPP;
 
@@ -300,7 +300,7 @@ static blk_status_t sd_zbc_cmnd_checks(struct scsi_cmnd *cmd)
 	struct scsi_disk *sdkp = scsi_disk(rq->q->disk);
 	sector_t sector = blk_rq_pos(rq);
 
-	if (!sd_is_zoned(sdkp))
+	if (sdkp->device->type != TYPE_ZBC)
 		/* Not a zoned device */
 		return BLK_STS_IOERR;
 
@@ -521,7 +521,7 @@ static int sd_zbc_check_capacity(struct scsi_disk *sdkp, unsigned char *buf,
 
 static void sd_zbc_print_zones(struct scsi_disk *sdkp)
 {
-	if (!sd_is_zoned(sdkp) || !sdkp->capacity)
+	if (sdkp->device->type != TYPE_ZBC || !sdkp->capacity)
 		return;
 
 	if (sdkp->capacity & (sdkp->zone_info.zone_blocks - 1))
@@ -598,13 +598,8 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	u32 zone_blocks = 0;
 	int ret;
 
-	if (!sd_is_zoned(sdkp)) {
-		/*
-		 * Device managed or normal SCSI disk, no special handling
-		 * required.
-		 */
+	if (sdkp->device->type != TYPE_ZBC)
 		return 0;
-	}
 
 	/* READ16/WRITE16/SYNC16 is mandatory for ZBC devices */
 	sdkp->device->use_16_for_rw = 1;
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:05:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:05:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741597.1148234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V9-0001x4-LY; Mon, 17 Jun 2024 06:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741597.1148234; Mon, 17 Jun 2024 06:05:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V9-0001wb-EL; Mon, 17 Jun 2024 06:05:59 +0000
Received: by outflank-mailman (input) for mailman id 741597;
 Mon, 17 Jun 2024 06:05:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5V8-0001PY-7h
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:05:58 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a89ce27f-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:05:56 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Ut-00000009IDo-122H; Mon, 17 Jun 2024 06:05:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a89ce27f-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=/82cXfoepsjrCLF9yDw8vVInGtt4GPJk52iCZL5X8kA=; b=ra+HvJbCl/BtNyU8RdnS7dRs7s
	O6LBCVDzAMTBjJ+U6/bFUEuKA1u/YpNavXuPiwpaBM2SvFLhrKWKHkSQ1M540dkcyGLo3pmL4QWJT
	2xSmzNE7W7m4yw0S2HfM7DQTNcHjn8OweF4ejpZoPKnI8CvJqF+ZHuFDHDsv9QS4UHWRRHcXw2kRy
	z7JVpylVvNbvCmM4EsAoVdSB5CR3lL9qt7ZSteQka766DrvKfN1np9moQ7J819shVf1699J46ehpx
	PD4BrRSVAZLsmBEWIZpR/Z72M/MhYkKkIRLJsriHO0Q08q8JjUZXaAqIqmk51INY1TifT3fqBt9OT
	c8ga2Wbw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 03/26] sd: move zone limits setup out of sd_read_block_characteristics
Date: Mon, 17 Jun 2024 08:04:30 +0200
Message-ID: <20240617060532.127975-4-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the bit of code that sets up the zoned flag and the write granularity
into sd_zbc_read_zones, so that it sits with the rest of the zoned limits
setup.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/scsi/sd.c     | 21 +--------------------
 drivers/scsi/sd_zbc.c |  9 +++++++++
 2 files changed, 10 insertions(+), 20 deletions(-)

diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 664523048ce819..66f7d1e3429c86 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3312,29 +3312,10 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
 	}
 
-
-#ifdef CONFIG_BLK_DEV_ZONED /* sd_probe rejects ZBD devices early otherwise */
-	if (sdkp->device->type == TYPE_ZBC) {
-		lim->zoned = true;
-
-		/*
-		 * Per ZBC and ZAC specifications, writes in sequential write
-		 * required zones of host-managed devices must be aligned to
-		 * the device physical block size.
-		 */
-		lim->zone_write_granularity = sdkp->physical_block_size;
-	} else {
-		/*
-		 * Host-aware devices are treated as conventional.
-		 */
-		lim->zoned = false;
-	}
-#endif /* CONFIG_BLK_DEV_ZONED */
-
 	if (!sdkp->first_scan)
 		return;
 
-	if (lim->zoned)
+	if (sdkp->device->type == TYPE_ZBC)
 		sd_printk(KERN_NOTICE, sdkp, "Host-managed zoned block device\n");
 	else if (sdkp->zoned == 1)
 		sd_printk(KERN_NOTICE, sdkp, "Host-aware SMR disk used as regular disk\n");
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 8cc9c025017961..360ec980499529 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -601,6 +601,15 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	if (sdkp->device->type != TYPE_ZBC)
 		return 0;
 
+	lim->zoned = true;
+
+	/*
+	 * Per ZBC and ZAC specifications, writes in sequential write required
+	 * zones of host-managed devices must be aligned to the device physical
+	 * block size.
+	 */
+	lim->zone_write_granularity = sdkp->physical_block_size;
+
 	/* READ16/WRITE16/SYNC16 is mandatory for ZBC devices */
 	sdkp->device->use_16_for_rw = 1;
 	sdkp->device->use_10_for_rw = 0;
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:06:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:06:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741598.1148241 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VA-00027I-7k; Mon, 17 Jun 2024 06:06:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741598.1148241; Mon, 17 Jun 2024 06:06:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5V9-00025T-Us; Mon, 17 Jun 2024 06:05:59 +0000
Received: by outflank-mailman (input) for mailman id 741598;
 Mon, 17 Jun 2024 06:05:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5V9-0001Pt-1j
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:05:59 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a9e71ddf-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:05:58 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Uw-00000009IFz-1jkT; Mon, 17 Jun 2024 06:05:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9e71ddf-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=cnf/orgh7i7JukB+LIEJmlRlmvoKMDy94ZJrgGiWZn4=; b=oscIs4jcLQ/H/pIAIEcWRH1UuJ
	5LK/eSZ6W2JZDMCo6nnFviBb4mblJSzVsLNO6B8HAOMmp+qeHzTjN84X8LIW4TpX/7W7LZvbQP5qY
	KZx1IFTMn4rG9QGAEP/ThwOA5gLnIpjk10jpqIzyjW2xAxk1Q75bvwh2qR37kybtmDDuOl9LWL/2L
	5iBEGRVoKvGz072Rd8EEYyPo/8bJMPddmN9YCXh6tUKUU6wy9u20KOoHYnVdWP+UVcDmz9hWB2YH9
	5G7EkdlW/vrVtcRRBnJu/EV4+SB6/p1HGwYyTkPiwfojrOSi/XCbUhunIOn419hT1qlQ33GLsMNSm
	jEJV60cA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>,
	Bart Van Assche <bvanassche@acm.org>
Subject: [PATCH 04/26] loop: stop using loop_reconfigure_limits in __loop_clr_fd
Date: Mon, 17 Jun 2024 08:04:31 +0200
Message-ID: <20240617060532.127975-5-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

__loop_clr_fd wants to clear all settings on the device.  Prepare for
moving more settings into the block limits by open-coding
loop_reconfigure_limits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/loop.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 93780f41646b75..fd671028fa8554 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -1133,6 +1133,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 
 static void __loop_clr_fd(struct loop_device *lo, bool release)
 {
+	struct queue_limits lim;
 	struct file *filp;
 	gfp_t gfp = lo->old_gfp_mask;
 
@@ -1156,7 +1157,14 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	lo->lo_offset = 0;
 	lo->lo_sizelimit = 0;
 	memset(lo->lo_file_name, 0, LO_NAME_SIZE);
-	loop_reconfigure_limits(lo, 512, false);
+
+	/* reset the block size to the default */
+	lim = queue_limits_start_update(lo->lo_queue);
+	lim.logical_block_size = SECTOR_SIZE;
+	lim.physical_block_size = SECTOR_SIZE;
+	lim.io_min = SECTOR_SIZE;
+	queue_limits_commit_update(lo->lo_queue, &lim);
+
 	invalidate_disk(lo->lo_disk);
 	loop_sysfs_exit(lo);
 	/* let user-space know about this change */
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:06:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:06:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741601.1148258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VF-0002ks-C3; Mon, 17 Jun 2024 06:06:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741601.1148258; Mon, 17 Jun 2024 06:06:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VF-0002ki-8i; Mon, 17 Jun 2024 06:06:05 +0000
Received: by outflank-mailman (input) for mailman id 741601;
 Mon, 17 Jun 2024 06:06:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5VE-0001Pt-Bb
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:04 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acec1dc8-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:06:03 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Uz-00000009IIF-1FWI; Mon, 17 Jun 2024 06:05:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acec1dc8-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=meZuXUdMqnfJKNa8LwE+49M6ExyQLw1VDUKKaUAtY/k=; b=kXJ7ztHFcc1IoznmhgthEbCn2W
	e1Zl0vH8pJJ+B1epWy41WO2zQGJBi7XGUy+4DCl9YDXMqTyH3anZVULv8jW+BHuwQ85BKjVgecb4w
	hAGHoW9h4WP4v1QYat3Jf10IP4aUS7o6JZkLXgjO/8HcnWdwF4gdFcc7AKgkddhvpiw4wY1zPxXr8
	3mk8eCEPPsZL/pesCm8st49vLZsZi08TeEmgYQj7DQG8b6JLAICjztbtSkI7ROD2ePH/Irbhf7bQ+
	3BFsrlrwOKGuxd2zhlnwTsZISXTPAagKKPg7HdaG9zcqs83W9ES5qTSx7HYqlSblbPpRJLwJC6SUc
	Bt14RRKQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>,
	Bart Van Assche <bvanassche@acm.org>
Subject: [PATCH 05/26] loop: always update discard settings in loop_reconfigure_limits
Date: Mon, 17 Jun 2024 08:04:32 +0200
Message-ID: <20240617060532.127975-6-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Simplify loop_reconfigure_limits by always updating the discard limits.
This adds a little more work to loop_set_block_size, but doesn't change
the outcome as the discard flag won't change.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/loop.c | 10 ++++------
 1 file changed, 4 insertions(+), 6 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index fd671028fa8554..ce197cbea5f434 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -975,8 +975,7 @@ loop_set_status_from_info(struct loop_device *lo,
 	return 0;
 }
 
-static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize,
-		bool update_discard_settings)
+static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 {
 	struct queue_limits lim;
 
@@ -984,8 +983,7 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize,
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
-	if (update_discard_settings)
-		loop_config_discard(lo, &lim);
+	loop_config_discard(lo, &lim);
 	return queue_limits_commit_update(lo->lo_queue, &lim);
 }
 
@@ -1086,7 +1084,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	else
 		bsize = 512;
 
-	error = loop_reconfigure_limits(lo, bsize, true);
+	error = loop_reconfigure_limits(lo, bsize);
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
 
@@ -1496,7 +1494,7 @@ static int loop_set_block_size(struct loop_device *lo, unsigned long arg)
 	invalidate_bdev(lo->lo_device);
 
 	blk_mq_freeze_queue(lo->lo_queue);
-	err = loop_reconfigure_limits(lo, arg, false);
+	err = loop_reconfigure_limits(lo, arg);
 	loop_update_dio(lo);
 	blk_mq_unfreeze_queue(lo->lo_queue);
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:06:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:06:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741609.1148268 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VM-0003M5-Jp; Mon, 17 Jun 2024 06:06:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741609.1148268; Mon, 17 Jun 2024 06:06:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VM-0003LN-Fm; Mon, 17 Jun 2024 06:06:12 +0000
Received: by outflank-mailman (input) for mailman id 741609;
 Mon, 17 Jun 2024 06:06:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5VL-0001PY-74
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:11 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aef71d6d-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:06:07 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5V2-00000009IL0-1TcV; Mon, 17 Jun 2024 06:05:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aef71d6d-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=EPlnKHc6KVQAGCnJTSwoLIbiGwof+yF2y8+P5dJEJJ0=; b=llkTt4L32bZ6iyFMLz9Re3O+um
	iBesDRlpJplF5nHKNSh9FM0jpEe2/c66MfT8Bt/I1nw3hw6WP3McM1ajYl6Y1akvjTAolpU25D5JY
	zxQH4i7zPokPqm5UkzIl4783o8GnN2m4K/RS5AtmFVCYJcpZ3vZt5g+K47MvjTQm5CtyFVe/fT2ke
	rYC4/lp/vaeglMrAYZeyvuSQ+AfGZLWPNk1KjYLCo1XrUC4Y0ScTkPZVT9290YdJiktz1OhCGdE8A
	9KDp18wzMDrbO5CYMJwbFfExWY8OcnNQBZ7aJNtMQI7HjejZThxD8rHbzNSixEEcMAQtF1g5wpqRb
	QL0nnnug==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>,
	Bart Van Assche <bvanassche@acm.org>
Subject: [PATCH 06/26] loop: regularize upgrading the block size for direct I/O
Date: Mon, 17 Jun 2024 08:04:33 +0200
Message-ID: <20240617060532.127975-7-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

The LOOP_CONFIGURE path automatically upgrades the block size to that
of the underlying file for O_DIRECT file descriptors, but the
LOOP_SET_BLOCK_SIZE path does not.  Fix this by lifting the code to
pick the block size into common code.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/loop.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index ce197cbea5f434..eea3e4919e356e 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -975,10 +975,24 @@ loop_set_status_from_info(struct loop_device *lo,
 	return 0;
 }
 
+static unsigned short loop_default_blocksize(struct loop_device *lo,
+		struct block_device *backing_bdev)
+{
+	/* In case of direct I/O, match underlying block size */
+	if ((lo->lo_backing_file->f_flags & O_DIRECT) && backing_bdev)
+		return bdev_logical_block_size(backing_bdev);
+	return SECTOR_SIZE;
+}
+
 static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 {
+	struct file *file = lo->lo_backing_file;
+	struct inode *inode = file->f_mapping->host;
 	struct queue_limits lim;
 
+	if (!bsize)
+		bsize = loop_default_blocksize(lo, inode->i_sb->s_bdev);
+
 	lim = queue_limits_start_update(lo->lo_queue);
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
@@ -997,7 +1011,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	int error;
 	loff_t size;
 	bool partscan;
-	unsigned short bsize;
 	bool is_loop;
 
 	if (!file)
@@ -1076,15 +1089,7 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	if (!(lo->lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
 		blk_queue_write_cache(lo->lo_queue, true, false);
 
-	if (config->block_size)
-		bsize = config->block_size;
-	else if ((lo->lo_backing_file->f_flags & O_DIRECT) && inode->i_sb->s_bdev)
-		/* In case of direct I/O, match underlying block size */
-		bsize = bdev_logical_block_size(inode->i_sb->s_bdev);
-	else
-		bsize = 512;
-
-	error = loop_reconfigure_limits(lo, bsize);
+	error = loop_reconfigure_limits(lo, config->block_size);
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:06:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:06:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741612.1148278 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VO-0003qn-TB; Mon, 17 Jun 2024 06:06:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741612.1148278; Mon, 17 Jun 2024 06:06:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VO-0003qg-Pr; Mon, 17 Jun 2024 06:06:14 +0000
Received: by outflank-mailman (input) for mailman id 741612;
 Mon, 17 Jun 2024 06:06:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5VM-0001PY-Vv
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:12 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b152ca2a-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:06:11 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5V5-00000009IO0-2EQA; Mon, 17 Jun 2024 06:05:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b152ca2a-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=Oo/vqAiMbhGGIUiVbZp/kGzW4LeWY27HMYwjVP6TG+4=; b=n9i+YZUfzxRGa/5geWOPJ7qmNM
	VGAksmvvlnyjAcGKbffm0w8YFWj/mx8ZeNfKbo6XukT1i4z/7qqCv7AqN6cGKNzTZRqs/Rlz98kse
	B60tDUVnvOqPTZ+j07UJh958Lv8xgGd4u21/L50kdMEfOjqnyL+uFLGe/y8Jv2z/tzRoXpfQiVdrY
	jeQ4b6Snlu8NDUy4NjmqAdaKi/3dfCdJChMAKWZrLWqfJnBAbbSVrAz3dOz942P3jNCFUIPZAuF0V
	lyFA/GeDUiQcFhHunnGHw4c5QrxgmJmSg34SATZmhO2AM5VBka2mQq5u28GalsDqRmL602GT6Qes+
	Nt4+DKVg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Hannes Reinecke <hare@suse.de>,
	Bart Van Assche <bvanassche@acm.org>
Subject: [PATCH 07/26] loop: also use the default block size from an underlying block device
Date: Mon, 17 Jun 2024 08:04:34 +0200
Message-ID: <20240617060532.127975-8-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Fix the code in loop_reconfigure_limits that picks a default block size for
O_DIRECT file descriptors so that it also works when the loop device sits
directly on top of a block device, and not just on a regular file on a
block device based file system.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/loop.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index eea3e4919e356e..6a4826708a3acf 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -988,10 +988,16 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 {
 	struct file *file = lo->lo_backing_file;
 	struct inode *inode = file->f_mapping->host;
+	struct block_device *backing_bdev = NULL;
 	struct queue_limits lim;
 
+	if (S_ISBLK(inode->i_mode))
+		backing_bdev = I_BDEV(inode);
+	else if (inode->i_sb->s_bdev)
+		backing_bdev = inode->i_sb->s_bdev;
+
 	if (!bsize)
-		bsize = loop_default_blocksize(lo, inode->i_sb->s_bdev);
+		bsize = loop_default_blocksize(lo, backing_bdev);
 
 	lim = queue_limits_start_update(lo->lo_queue);
 	lim.logical_block_size = bsize;
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:06:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:06:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741621.1148288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VU-0004NL-5Z; Mon, 17 Jun 2024 06:06:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741621.1148288; Mon, 17 Jun 2024 06:06:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5VU-0004N8-24; Mon, 17 Jun 2024 06:06:20 +0000
Received: by outflank-mailman (input) for mailman id 741621;
 Mon, 17 Jun 2024 06:06:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5VS-0001Pt-OC
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:18 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b49594e4-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:06:17 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5V8-00000009IPz-27ba; Mon, 17 Jun 2024 06:05:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b49594e4-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=jO/BbvgdYshCVZegwApTYZew4tnox1zjQOC4cbYeSu8=; b=aj7ZQCaJ/Dw/+h4IVcq1v/5m21
	sd6Y2Jb9o7hr0/IPKveOSVb06koCuObSZOOJJS6BaHZrJMVlgKKzavHrMLF/sMnjznKhVuicLMLke
	v9g+0MseSnUCixJ8sOmO6tut3X5QCQCkwOdnVaTVM0xP9Q8KYLnN68GkSujOXDXYgpoq6COw9lhFA
	AsajI2zI/HE5waax+l/bJjII8Ex/A1pXxfAuQwIIdOSj6FGB/F8zL+vLgbpFDG3rcgB9IVB40hwrn
	Nu/xx2wm6ebX4Y16PCW+jpxcNvz7wCipxzDDnXo4ON8dGg3rMzr5qGXsWWvROR2prjhZAamPxmBfY
	vLU/nybQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>,
	Bart Van Assche <bvanassche@acm.org>
Subject: [PATCH 08/26] loop: fold loop_update_rotational into loop_reconfigure_limits
Date: Mon, 17 Jun 2024 08:04:35 +0200
Message-ID: <20240617060532.127975-9-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

This prepares for moving the rotational flag into the queue_limits and
also fixes it for the case where the loop device is backed by a block
device.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
---
 drivers/block/loop.c | 23 ++++-------------------
 1 file changed, 4 insertions(+), 19 deletions(-)

diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 6a4826708a3acf..8991de8fb1bb0b 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -916,24 +916,6 @@ static void loop_free_idle_workers_timer(struct timer_list *timer)
 	return loop_free_idle_workers(lo, false);
 }
 
-static void loop_update_rotational(struct loop_device *lo)
-{
-	struct file *file = lo->lo_backing_file;
-	struct inode *file_inode = file->f_mapping->host;
-	struct block_device *file_bdev = file_inode->i_sb->s_bdev;
-	struct request_queue *q = lo->lo_queue;
-	bool nonrot = true;
-
-	/* not all filesystems (e.g. tmpfs) have a sb->s_bdev */
-	if (file_bdev)
-		nonrot = bdev_nonrot(file_bdev);
-
-	if (nonrot)
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
-}
-
 /**
  * loop_set_status_from_info - configure device from loop_info
  * @lo: struct loop_device to configure
@@ -1003,6 +985,10 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
+	if (!backing_bdev || bdev_nonrot(backing_bdev))
+		blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
+	else
+		blk_queue_flag_clear(QUEUE_FLAG_NONROT, lo->lo_queue);
 	loop_config_discard(lo, &lim);
 	return queue_limits_commit_update(lo->lo_queue, &lim);
 }
@@ -1099,7 +1085,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
 
-	loop_update_rotational(lo);
 	loop_update_dio(lo);
 	loop_sysfs_init(lo);
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741645.1148298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5We-00067Z-Jk; Mon, 17 Jun 2024 06:07:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741645.1148298; Mon, 17 Jun 2024 06:07:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5We-00067S-Gf; Mon, 17 Jun 2024 06:07:32 +0000
Received: by outflank-mailman (input) for mailman id 741645;
 Mon, 17 Jun 2024 06:07:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5Vn-0001Pt-TT
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:40 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c060a554-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:06:36 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5VN-00000009Ido-0ak5; Mon, 17 Jun 2024 06:06:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c060a554-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=p+y6DLBmaoPHJSKZ6ySR3UadwvRBQCQMlMyEMuymwXE=; b=ysOy9WJQKAWIPSeH6IW2WS+Wh3
	DKZgl5MmtWQyR6QPABZuuwJWYI7EJjwYJzlJ6ojaxyf+WwX/rZGWfsG9Fuhh05zt+FZDS+mMhGCpW
	X1HpDovz2K7KbZ6tPom3o/Lhr1Q2akjdZ+bRmzUHI4qs6e0cbQWJSGiqk+g/NkNJzyHrRBcEVtj6P
	4LZgM/afbxmToYMHGiQivHXV2idnpWToshKR45s0pTplQCjPFPIijN+yQCRCMelIj+m9JW3eweMXr
	RyqQc4lkyy0mDQMVmuYPlEGmDhJsQ8p31/51CyAgSlqt6M6Mf9wPVh9IQL25IvIVAOG0bYBYt3zuS
	clTcy1kA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Ulf Hansson <ulf.hansson@linaro.org>
Subject: [PATCH 13/26] block: move cache control settings out of queue->flags
Date: Mon, 17 Jun 2024 08:04:40 +0200
Message-ID: <20240617060532.127975-14-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the cache control settings into the queue_limits so that the flags
can be set atomically with the device queue frozen.

Add new features and flags fields for the driver-set flags, and internal
(usually sysfs-controlled) flags in the block layer.  Note that we'll
eventually remove enough fields from queue_limits to bring it back to the
previous size.

The disable flag is inverted compared to the previous meaning, which
means it now survives a rescan, similar to the max_sectors and
max_discard_sectors user limits.

The FLUSH and FUA flags are now inherited by blk_stack_limits, which
simplifies the code in dm a lot, but also causes a slight behavior
change in that dm-switch and dm-unstripe now advertise a write cache
despite setting num_flush_bios to 0.  The I/O path will handle this
gracefully, but as far as I can tell the lack of num_flush_bios
and thus flush support is a pre-existing data integrity bug in those
targets that really needs fixing, after which a non-zero num_flush_bios
should be required in dm for targets that map to underlying devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Acked-by: Ulf Hansson <ulf.hansson@linaro.org> [mmc]
---
 .../block/writeback_cache_control.rst         | 67 +++++++++++--------
 arch/um/drivers/ubd_kern.c                    |  2 +-
 block/blk-core.c                              |  2 +-
 block/blk-flush.c                             |  9 ++-
 block/blk-mq-debugfs.c                        |  2 -
 block/blk-settings.c                          | 29 ++------
 block/blk-sysfs.c                             | 29 +++++---
 block/blk-wbt.c                               |  4 +-
 drivers/block/drbd/drbd_main.c                |  2 +-
 drivers/block/loop.c                          |  9 +--
 drivers/block/nbd.c                           | 14 ++--
 drivers/block/null_blk/main.c                 | 12 ++--
 drivers/block/ps3disk.c                       |  7 +-
 drivers/block/rnbd/rnbd-clt.c                 | 10 +--
 drivers/block/ublk_drv.c                      |  8 ++-
 drivers/block/virtio_blk.c                    | 20 ++++--
 drivers/block/xen-blkfront.c                  |  8 ++-
 drivers/md/bcache/super.c                     |  7 +-
 drivers/md/dm-table.c                         | 39 +++--------
 drivers/md/md.c                               |  8 ++-
 drivers/mmc/core/block.c                      | 42 ++++++------
 drivers/mmc/core/queue.c                      | 12 ++--
 drivers/mmc/core/queue.h                      |  3 +-
 drivers/mtd/mtd_blkdevs.c                     |  5 +-
 drivers/nvdimm/pmem.c                         |  4 +-
 drivers/nvme/host/core.c                      |  7 +-
 drivers/nvme/host/multipath.c                 |  6 --
 drivers/scsi/sd.c                             | 28 +++++---
 include/linux/blkdev.h                        | 38 +++++++++--
 29 files changed, 227 insertions(+), 206 deletions(-)

diff --git a/Documentation/block/writeback_cache_control.rst b/Documentation/block/writeback_cache_control.rst
index b208488d0aae85..c575e08beda8e3 100644
--- a/Documentation/block/writeback_cache_control.rst
+++ b/Documentation/block/writeback_cache_control.rst
@@ -46,41 +46,50 @@ worry if the underlying devices need any explicit cache flushing and how
 the Forced Unit Access is implemented.  The REQ_PREFLUSH and REQ_FUA flags
 may both be set on a single bio.
 
+Feature settings for block drivers
+----------------------------------
 
-Implementation details for bio based block drivers
---------------------------------------------------------------
+For devices that do not support volatile write caches there is no driver
+support required, the block layer completes empty REQ_PREFLUSH requests before
+entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
+requests that have a payload.
 
-These drivers will always see the REQ_PREFLUSH and REQ_FUA bits as they sit
-directly below the submit_bio interface.  For remapping drivers the REQ_FUA
-bits need to be propagated to underlying devices, and a global flush needs
-to be implemented for bios with the REQ_PREFLUSH bit set.  For real device
-drivers that do not have a volatile cache the REQ_PREFLUSH and REQ_FUA bits
-on non-empty bios can simply be ignored, and REQ_PREFLUSH requests without
-data can be completed successfully without doing any work.  Drivers for
-devices with volatile caches need to implement the support for these
-flags themselves without any help from the block layer.
+For devices with volatile write caches the driver needs to tell the block layer
+that it supports flushing caches by setting the
 
+   BLK_FEAT_WRITE_CACHE
 
-Implementation details for request_fn based block drivers
----------------------------------------------------------
+flag in the queue_limits features field.  For devices that also support the FUA
+bit the block layer needs to be told to pass on the REQ_FUA bit by also setting
+the
 
-For devices that do not support volatile write caches there is no driver
-support required, the block layer completes empty REQ_PREFLUSH requests before
-entering the driver and strips off the REQ_PREFLUSH and REQ_FUA bits from
-requests that have a payload.  For devices with volatile write caches the
-driver needs to tell the block layer that it supports flushing caches by
-doing::
+   BLK_FEAT_FUA
+
+flag in the features field of the queue_limits structure.
+
+Implementation details for bio based block drivers
+--------------------------------------------------
+
+For bio based drivers the REQ_PREFLUSH and REQ_FUA bits are simply passed on
+to the driver if the driver sets the BLK_FEAT_WRITE_CACHE flag, and the driver
+needs to handle them.
+
+*NOTE*: The REQ_FUA bit also gets passed on when the BLK_FEAT_FUA flag is
+_not_ set.  Any bio based driver that sets BLK_FEAT_WRITE_CACHE also needs to
+handle REQ_FUA.
 
-	blk_queue_write_cache(sdkp->disk->queue, true, false);
+For remapping drivers the REQ_FUA bits need to be propagated to underlying
+devices, and a global flush needs to be implemented for bios with the
+REQ_PREFLUSH bit set.
 
-and handle empty REQ_OP_FLUSH requests in its prep_fn/request_fn.  Note that
-REQ_PREFLUSH requests with a payload are automatically turned into a sequence
-of an empty REQ_OP_FLUSH request followed by the actual write by the block
-layer.  For devices that also support the FUA bit the block layer needs
-to be told to pass through the REQ_FUA bit using::
+Implementation details for blk-mq drivers
+-----------------------------------------
 
-	blk_queue_write_cache(sdkp->disk->queue, true, true);
+When the BLK_FEAT_WRITE_CACHE flag is set, REQ_OP_WRITE | REQ_PREFLUSH requests
+with a payload are automatically turned into a sequence of a REQ_OP_FLUSH
+request followed by the actual write by the block layer.
 
-and the driver must handle write requests that have the REQ_FUA bit set
-in prep_fn/request_fn.  If the FUA bit is not natively supported the block
-layer turns it into an empty REQ_OP_FLUSH request after the actual write.
+When the BLK_FEAT_FUA flag is set, the REQ_FUA bit is simply passed on for the
+REQ_OP_WRITE request; otherwise a REQ_OP_FLUSH request is sent by the block
+layer after the completion of the write request for bio submissions with the
+REQ_FUA bit set.
diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index cdcb75a68989dd..19e01691ea0ea7 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -835,6 +835,7 @@ static int ubd_add(int n, char **error_out)
 	struct queue_limits lim = {
 		.max_segments		= MAX_SG,
 		.seg_boundary_mask	= PAGE_SIZE - 1,
+		.features		= BLK_FEAT_WRITE_CACHE,
 	};
 	struct gendisk *disk;
 	int err = 0;
@@ -882,7 +883,6 @@ static int ubd_add(int n, char **error_out)
 	}
 
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
-	blk_queue_write_cache(disk->queue, true, false);
 	disk->major = UBD_MAJOR;
 	disk->first_minor = n << UBD_SHIFT;
 	disk->minors = 1 << UBD_SHIFT;
diff --git a/block/blk-core.c b/block/blk-core.c
index 82c3ae22d76d88..2b45a4df9a1aa1 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -782,7 +782,7 @@ void submit_bio_noacct(struct bio *bio)
 		if (WARN_ON_ONCE(bio_op(bio) != REQ_OP_WRITE &&
 				 bio_op(bio) != REQ_OP_ZONE_APPEND))
 			goto end_io;
-		if (!test_bit(QUEUE_FLAG_WC, &q->queue_flags)) {
+		if (!bdev_write_cache(bdev)) {
 			bio->bi_opf &= ~(REQ_PREFLUSH | REQ_FUA);
 			if (!bio_sectors(bio)) {
 				status = BLK_STS_OK;
diff --git a/block/blk-flush.c b/block/blk-flush.c
index 2234f8b3fc05f2..30b9d5033a2b85 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -381,8 +381,8 @@ static void blk_rq_init_flush(struct request *rq)
 bool blk_insert_flush(struct request *rq)
 {
 	struct request_queue *q = rq->q;
-	unsigned long fflags = q->queue_flags;	/* may change, cache */
 	struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
+	bool supports_fua = q->limits.features & BLK_FEAT_FUA;
 	unsigned int policy = 0;
 
 	/* FLUSH/FUA request must never be merged */
@@ -394,11 +394,10 @@ bool blk_insert_flush(struct request *rq)
 	/*
 	 * Check which flushes we need to sequence for this operation.
 	 */
-	if (fflags & (1UL << QUEUE_FLAG_WC)) {
+	if (blk_queue_write_cache(q)) {
 		if (rq->cmd_flags & REQ_PREFLUSH)
 			policy |= REQ_FSEQ_PREFLUSH;
-		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
-		    (rq->cmd_flags & REQ_FUA))
+		if ((rq->cmd_flags & REQ_FUA) && !supports_fua)
 			policy |= REQ_FSEQ_POSTFLUSH;
 	}
 
@@ -407,7 +406,7 @@ bool blk_insert_flush(struct request *rq)
 	 * REQ_PREFLUSH and FUA for the driver.
 	 */
 	rq->cmd_flags &= ~REQ_PREFLUSH;
-	if (!(fflags & (1UL << QUEUE_FLAG_FUA)))
+	if (!supports_fua)
 		rq->cmd_flags &= ~REQ_FUA;
 
 	/*
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 770c0c2b72faaa..e8b9db7c30c455 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -93,8 +93,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(INIT_DONE),
 	QUEUE_FLAG_NAME(STABLE_WRITES),
 	QUEUE_FLAG_NAME(POLL),
-	QUEUE_FLAG_NAME(WC),
-	QUEUE_FLAG_NAME(FUA),
 	QUEUE_FLAG_NAME(DAX),
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
diff --git a/block/blk-settings.c b/block/blk-settings.c
index f11c8676eb4c67..536ee202fcdccb 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -261,6 +261,9 @@ static int blk_validate_limits(struct queue_limits *lim)
 		lim->misaligned = 0;
 	}
 
+	if (!(lim->features & BLK_FEAT_WRITE_CACHE))
+		lim->features &= ~BLK_FEAT_FUA;
+
 	err = blk_validate_integrity_limits(lim);
 	if (err)
 		return err;
@@ -454,6 +457,8 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 {
 	unsigned int top, bottom, alignment, ret = 0;
 
+	t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
+
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_user_sectors = min_not_zero(t->max_user_sectors,
 			b->max_user_sectors);
@@ -711,30 +716,6 @@ void blk_set_queue_depth(struct request_queue *q, unsigned int depth)
 }
 EXPORT_SYMBOL(blk_set_queue_depth);
 
-/**
- * blk_queue_write_cache - configure queue's write cache
- * @q:		the request queue for the device
- * @wc:		write back cache on or off
- * @fua:	device supports FUA writes, if true
- *
- * Tell the block layer about the write cache of @q.
- */
-void blk_queue_write_cache(struct request_queue *q, bool wc, bool fua)
-{
-	if (wc) {
-		blk_queue_flag_set(QUEUE_FLAG_HW_WC, q);
-		blk_queue_flag_set(QUEUE_FLAG_WC, q);
-	} else {
-		blk_queue_flag_clear(QUEUE_FLAG_HW_WC, q);
-		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
-	}
-	if (fua)
-		blk_queue_flag_set(QUEUE_FLAG_FUA, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_FUA, q);
-}
-EXPORT_SYMBOL_GPL(blk_queue_write_cache);
-
 int bdev_alignment_offset(struct block_device *bdev)
 {
 	struct request_queue *q = bdev_get_queue(bdev);
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 5c787965b7d09e..4f524c1d5e08bd 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -423,32 +423,41 @@ static ssize_t queue_io_timeout_store(struct request_queue *q, const char *page,
 
 static ssize_t queue_wc_show(struct request_queue *q, char *page)
 {
-	if (test_bit(QUEUE_FLAG_WC, &q->queue_flags))
-		return sprintf(page, "write back\n");
-
-	return sprintf(page, "write through\n");
+	if (q->limits.features & BLK_FLAGS_WRITE_CACHE_DISABLED)
+		return sprintf(page, "write through\n");
+	return sprintf(page, "write back\n");
 }
 
 static ssize_t queue_wc_store(struct request_queue *q, const char *page,
 			      size_t count)
 {
+	struct queue_limits lim;
+	bool disable;
+	int err;
+
 	if (!strncmp(page, "write back", 10)) {
-		if (!test_bit(QUEUE_FLAG_HW_WC, &q->queue_flags))
-			return -EINVAL;
-		blk_queue_flag_set(QUEUE_FLAG_WC, q);
+		disable = false;
 	} else if (!strncmp(page, "write through", 13) ||
-		 !strncmp(page, "none", 4)) {
-		blk_queue_flag_clear(QUEUE_FLAG_WC, q);
+		   !strncmp(page, "none", 4)) {
+		disable = true;
 	} else {
 		return -EINVAL;
 	}
 
+	lim = queue_limits_start_update(q);
+	if (disable)
+		lim.flags |= BLK_FLAGS_WRITE_CACHE_DISABLED;
+	else
+		lim.flags &= ~BLK_FLAGS_WRITE_CACHE_DISABLED;
+	err = queue_limits_commit_update(q, &lim);
+	if (err)
+		return err;
 	return count;
 }
 
 static ssize_t queue_fua_show(struct request_queue *q, char *page)
 {
-	return sprintf(page, "%u\n", test_bit(QUEUE_FLAG_FUA, &q->queue_flags));
+	return sprintf(page, "%u\n", !!(q->limits.features & BLK_FEAT_FUA));
 }
 
 static ssize_t queue_dax_show(struct request_queue *q, char *page)
diff --git a/block/blk-wbt.c b/block/blk-wbt.c
index 64472134dd26df..1a5e4b049ecd1d 100644
--- a/block/blk-wbt.c
+++ b/block/blk-wbt.c
@@ -206,8 +206,8 @@ static void wbt_rqw_done(struct rq_wb *rwb, struct rq_wait *rqw,
 	 */
 	if (wb_acct & WBT_DISCARD)
 		limit = rwb->wb_background;
-	else if (test_bit(QUEUE_FLAG_WC, &rwb->rqos.disk->queue->queue_flags) &&
-	         !wb_recent_wait(rwb))
+	else if (blk_queue_write_cache(rwb->rqos.disk->queue) &&
+		 !wb_recent_wait(rwb))
 		limit = 0;
 	else
 		limit = rwb->wb_normal;
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 113b441d4d3670..bf42a46781fa21 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2697,6 +2697,7 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 		 * connect.
 		 */
 		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
 	};
 
 	device = minor_to_device(minor);
@@ -2736,7 +2737,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	disk->private_data = device;
 
 	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
-	blk_queue_write_cache(disk->queue, true, true);
 
 	device->md_io.page = alloc_page(GFP_KERNEL);
 	if (!device->md_io.page)
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 8991de8fb1bb0b..08d0fc7f17b701 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -985,6 +985,9 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
+	lim.features &= ~BLK_FEAT_WRITE_CACHE;
+	if (file->f_op->fsync && !(lo->lo_flags & LO_FLAGS_READ_ONLY))
+		lim.features |= BLK_FEAT_WRITE_CACHE;
 	if (!backing_bdev || bdev_nonrot(backing_bdev))
 		blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
 	else
@@ -1078,9 +1081,6 @@ static int loop_configure(struct loop_device *lo, blk_mode_t mode,
 	lo->old_gfp_mask = mapping_gfp_mask(mapping);
 	mapping_set_gfp_mask(mapping, lo->old_gfp_mask & ~(__GFP_IO|__GFP_FS));
 
-	if (!(lo->lo_flags & LO_FLAGS_READ_ONLY) && file->f_op->fsync)
-		blk_queue_write_cache(lo->lo_queue, true, false);
-
 	error = loop_reconfigure_limits(lo, config->block_size);
 	if (WARN_ON_ONCE(error))
 		goto out_unlock;
@@ -1131,9 +1131,6 @@ static void __loop_clr_fd(struct loop_device *lo, bool release)
 	struct file *filp;
 	gfp_t gfp = lo->old_gfp_mask;
 
-	if (test_bit(QUEUE_FLAG_WC, &lo->lo_queue->queue_flags))
-		blk_queue_write_cache(lo->lo_queue, false, false);
-
 	/*
 	 * Freeze the request queue when unbinding on a live file descriptor and
 	 * thus an open device.  When called from ->release we are guaranteed
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index 44b8c671921e5c..cb1c86a6a3fb9d 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -342,12 +342,14 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		lim.max_hw_discard_sectors = UINT_MAX;
 	else
 		lim.max_hw_discard_sectors = 0;
-	if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH))
-		blk_queue_write_cache(nbd->disk->queue, false, false);
-	else if (nbd->config->flags & NBD_FLAG_SEND_FUA)
-		blk_queue_write_cache(nbd->disk->queue, true, true);
-	else
-		blk_queue_write_cache(nbd->disk->queue, true, false);
+	if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH)) {
+		lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
+	} else if (nbd->config->flags & NBD_FLAG_SEND_FUA) {
+		lim.features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
+	} else {
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		lim.features &= ~BLK_FEAT_FUA;
+	}
 	lim.logical_block_size = blksize;
 	lim.physical_block_size = blksize;
 	error = queue_limits_commit_update(nbd->disk->queue, &lim);
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 75f189e42f885d..21f9d256e88402 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1928,6 +1928,13 @@ static int null_add_dev(struct nullb_device *dev)
 			goto out_cleanup_tags;
 	}
 
+	if (dev->cache_size > 0) {
+		set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		if (dev->fua)
+			lim.features |= BLK_FEAT_FUA;
+	}
+
 	nullb->disk = blk_mq_alloc_disk(nullb->tag_set, &lim, nullb);
 	if (IS_ERR(nullb->disk)) {
 		rv = PTR_ERR(nullb->disk);
@@ -1940,11 +1947,6 @@ static int null_add_dev(struct nullb_device *dev)
 		nullb_setup_bwtimer(nullb);
 	}
 
-	if (dev->cache_size > 0) {
-		set_bit(NULLB_DEV_FL_CACHE, &nullb->dev->flags);
-		blk_queue_write_cache(nullb->q, true, dev->fua);
-	}
-
 	nullb->q->queuedata = nullb;
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
 
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index b810ac0a5c4b97..8b73cf459b5937 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -388,9 +388,8 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 		.max_segments		= -1,
 		.max_segment_size	= dev->bounce_size,
 		.dma_alignment		= dev->blk_size - 1,
+		.features		= BLK_FEAT_WRITE_CACHE,
 	};
-
-	struct request_queue *queue;
 	struct gendisk *gendisk;
 
 	if (dev->blk_size < 512) {
@@ -447,10 +446,6 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 		goto fail_free_tag_set;
 	}
 
-	queue = gendisk->queue;
-
-	blk_queue_write_cache(queue, true, false);
-
 	priv->gendisk = gendisk;
 	gendisk->major = ps3disk_major;
 	gendisk->first_minor = devidx * PS3DISK_MINORS;
diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index b7ffe03c61606d..02c4b173182719 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1389,6 +1389,12 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev,
 			le32_to_cpu(rsp->max_discard_sectors);
 	}
 
+	if (rsp->cache_policy & RNBD_WRITEBACK) {
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		if (rsp->cache_policy & RNBD_FUA)
+			lim.features |= BLK_FEAT_FUA;
+	}
+
 	dev->gd = blk_mq_alloc_disk(&dev->sess->tag_set, &lim, dev);
 	if (IS_ERR(dev->gd))
 		return PTR_ERR(dev->gd);
@@ -1397,10 +1403,6 @@ static int rnbd_client_setup_device(struct rnbd_clt_dev *dev,
 
 	blk_queue_flag_set(QUEUE_FLAG_SAME_COMP, dev->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SAME_FORCE, dev->queue);
-	blk_queue_write_cache(dev->queue,
-			      !!(rsp->cache_policy & RNBD_WRITEBACK),
-			      !!(rsp->cache_policy & RNBD_FUA));
-
 	return rnbd_clt_setup_gen_disk(dev, rsp, idx);
 }
 
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 4e159948c912c2..e45c65c1848d31 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -487,8 +487,6 @@ static void ublk_dev_param_basic_apply(struct ublk_device *ub)
 	struct request_queue *q = ub->ub_disk->queue;
 	const struct ublk_param_basic *p = &ub->params.basic;
 
-	blk_queue_write_cache(q, p->attrs & UBLK_ATTR_VOLATILE_CACHE,
-			p->attrs & UBLK_ATTR_FUA);
 	if (p->attrs & UBLK_ATTR_ROTATIONAL)
 		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
 	else
@@ -2210,6 +2208,12 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 		lim.max_zone_append_sectors = p->max_zone_append_sectors;
 	}
 
+	if (ub->params.basic.attrs & UBLK_ATTR_VOLATILE_CACHE) {
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+		if (ub->params.basic.attrs & UBLK_ATTR_FUA)
+			lim.features |= BLK_FEAT_FUA;
+	}
+
 	if (wait_for_completion_interruptible(&ub->completion) != 0)
 		return -EINTR;
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 378b241911ca87..b1a3c293528519 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1100,6 +1100,7 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 	struct gendisk *disk = dev_to_disk(dev);
 	struct virtio_blk *vblk = disk->private_data;
 	struct virtio_device *vdev = vblk->vdev;
+	struct queue_limits lim;
 	int i;
 
 	BUG_ON(!virtio_has_feature(vblk->vdev, VIRTIO_BLK_F_CONFIG_WCE));
@@ -1108,7 +1109,17 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 		return i;
 
 	virtio_cwrite8(vdev, offsetof(struct virtio_blk_config, wce), i);
-	blk_queue_write_cache(disk->queue, virtblk_get_cache_mode(vdev), false);
+
+	lim = queue_limits_start_update(disk->queue);
+	if (virtblk_get_cache_mode(vdev))
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+	else
+		lim.features &= ~BLK_FEAT_WRITE_CACHE;
+	blk_mq_freeze_queue(disk->queue);
+	i = queue_limits_commit_update(disk->queue, &lim);
+	blk_mq_unfreeze_queue(disk->queue);
+	if (i)
+		return i;
 	return count;
 }
 
@@ -1504,6 +1515,9 @@ static int virtblk_probe(struct virtio_device *vdev)
 	if (err)
 		goto out_free_tags;
 
+	if (virtblk_get_cache_mode(vdev))
+		lim.features |= BLK_FEAT_WRITE_CACHE;
+
 	vblk->disk = blk_mq_alloc_disk(&vblk->tag_set, &lim, vblk);
 	if (IS_ERR(vblk->disk)) {
 		err = PTR_ERR(vblk->disk);
@@ -1519,10 +1533,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 	vblk->disk->fops = &virtblk_fops;
 	vblk->index = index;
 
-	/* configure queue flush support */
-	blk_queue_write_cache(vblk->disk->queue, virtblk_get_cache_mode(vdev),
-			false);
-
 	/* If disk is read-only in the host, the guest should obey */
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
 		set_disk_ro(vblk->disk, 1);
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 851b03844edd13..9aafce3e5987bf 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -959,6 +959,12 @@ static void blkif_set_queue_limits(const struct blkfront_info *info,
 			lim->max_secure_erase_sectors = UINT_MAX;
 	}
 
+	if (info->feature_flush) {
+		lim->features |= BLK_FEAT_WRITE_CACHE;
+		if (info->feature_fua)
+			lim->features |= BLK_FEAT_FUA;
+	}
+
 	/* Hard sector size and max sectors impersonate the equiv. hardware. */
 	lim->logical_block_size = info->sector_size;
 	lim->physical_block_size = info->physical_sector_size;
@@ -987,8 +993,6 @@ static const char *flush_info(struct blkfront_info *info)
 
 static void xlvbd_flush(struct blkfront_info *info)
 {
-	blk_queue_write_cache(info->rq, info->feature_flush ? true : false,
-			      info->feature_fua ? true : false);
 	pr_info("blkfront: %s: %s %s %s %s %s %s %s\n",
 		info->gd->disk_name, flush_info(info),
 		"persistent grants:", info->feature_persistent ?
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index 4d11fc664cb0b8..cb6595c8b5514e 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -897,7 +897,6 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 		sector_t sectors, struct block_device *cached_bdev,
 		const struct block_device_operations *ops)
 {
-	struct request_queue *q;
 	const size_t max_stripes = min_t(size_t, INT_MAX,
 					 SIZE_MAX / sizeof(atomic_t));
 	struct queue_limits lim = {
@@ -909,6 +908,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 		.io_min			= block_size,
 		.logical_block_size	= block_size,
 		.physical_block_size	= block_size,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
 	};
 	uint64_t n;
 	int idx;
@@ -975,12 +975,7 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 	d->disk->fops		= ops;
 	d->disk->private_data	= d;
 
-	q = d->disk->queue;
-
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, d->disk->queue);
-
-	blk_queue_write_cache(q, true, true);
-
 	return 0;
 
 out_bioset_exit:
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index fd789eeb62d943..03abdae646829c 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1686,34 +1686,16 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	return validate_hardware_logical_block_alignment(t, limits);
 }
 
-static int device_flush_capable(struct dm_target *ti, struct dm_dev *dev,
-				sector_t start, sector_t len, void *data)
-{
-	unsigned long flush = (unsigned long) data;
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return (q->queue_flags & flush);
-}
-
-static bool dm_table_supports_flush(struct dm_table *t, unsigned long flush)
+/*
+ * Check if a target requires flush support even if none of the underlying
+ * devices need it (e.g. to persist target-specific metadata).
+ */
+static bool dm_table_supports_flush(struct dm_table *t)
 {
-	/*
-	 * Require at least one underlying device to support flushes.
-	 * t->devices includes internal dm devices such as mirror logs
-	 * so we need to use iterate_devices here, which targets
-	 * supporting flushes must provide.
-	 */
 	for (unsigned int i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
-		if (!ti->num_flush_bios)
-			continue;
-
-		if (ti->flush_supported)
-			return true;
-
-		if (ti->type->iterate_devices &&
-		    ti->type->iterate_devices(ti, device_flush_capable, (void *) flush))
+		if (ti->num_flush_bios && ti->flush_supported)
 			return true;
 	}
 
@@ -1855,7 +1837,6 @@ static int device_requires_stable_pages(struct dm_target *ti,
 int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 			      struct queue_limits *limits)
 {
-	bool wc = false, fua = false;
 	int r;
 
 	if (dm_table_supports_nowait(t))
@@ -1876,12 +1857,8 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (!dm_table_supports_secure_erase(t))
 		limits->max_secure_erase_sectors = 0;
 
-	if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_WC))) {
-		wc = true;
-		if (dm_table_supports_flush(t, (1UL << QUEUE_FLAG_FUA)))
-			fua = true;
-	}
-	blk_queue_write_cache(q, wc, fua);
+	if (dm_table_supports_flush(t))
+		limits->features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
 
 	if (dm_table_supports_dax(t, device_not_dax_capable)) {
 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 67ece2cd725f50..2f4c5d1755d857 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5785,7 +5785,10 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int partitioned;
 	int shift;
 	int unit;
-	int error ;
+	int error;
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
+	};
 
 	/*
 	 * Wait for any previous instance of this device to be completely
@@ -5825,7 +5828,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 		 */
 		mddev->hold_active = UNTIL_STOP;
 
-	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
+	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		error = PTR_ERR(disk);
 		goto out_free_mddev;
@@ -5843,7 +5846,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	disk->fops = &md_fops;
 	disk->private_data = mddev;
 
-	blk_queue_write_cache(disk->queue, true, true);
 	disk->events |= DISK_EVENT_MEDIA_CHANGE;
 	mddev->gendisk = disk;
 	error = add_disk(disk);
diff --git a/drivers/mmc/core/block.c b/drivers/mmc/core/block.c
index 367509b5b6466c..2c9963248fcbd6 100644
--- a/drivers/mmc/core/block.c
+++ b/drivers/mmc/core/block.c
@@ -2466,8 +2466,7 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	struct mmc_blk_data *md;
 	int devidx, ret;
 	char cap_str[10];
-	bool cache_enabled = false;
-	bool fua_enabled = false;
+	unsigned int features = 0;
 
 	devidx = ida_alloc_max(&mmc_blk_ida, max_devices - 1, GFP_KERNEL);
 	if (devidx < 0) {
@@ -2499,7 +2498,24 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 	 */
 	md->read_only = mmc_blk_readonly(card);
 
-	md->disk = mmc_init_queue(&md->queue, card);
+	if (mmc_host_cmd23(card->host)) {
+		if ((mmc_card_mmc(card) &&
+		     card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
+		    (mmc_card_sd(card) &&
+		     card->scr.cmds & SD_SCR_CMD23_SUPPORT))
+			md->flags |= MMC_BLK_CMD23;
+	}
+
+	if (md->flags & MMC_BLK_CMD23 &&
+	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
+	     card->ext_csd.rel_sectors)) {
+		md->flags |= MMC_BLK_REL_WR;
+		features |= (BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
+	} else if (mmc_cache_enabled(card->host)) {
+		features |= BLK_FEAT_WRITE_CACHE;
+	}
+
+	md->disk = mmc_init_queue(&md->queue, card, features);
 	if (IS_ERR(md->disk)) {
 		ret = PTR_ERR(md->disk);
 		goto err_kfree;
@@ -2539,26 +2555,6 @@ static struct mmc_blk_data *mmc_blk_alloc_req(struct mmc_card *card,
 
 	set_capacity(md->disk, size);
 
-	if (mmc_host_cmd23(card->host)) {
-		if ((mmc_card_mmc(card) &&
-		     card->csd.mmca_vsn >= CSD_SPEC_VER_3) ||
-		    (mmc_card_sd(card) &&
-		     card->scr.cmds & SD_SCR_CMD23_SUPPORT))
-			md->flags |= MMC_BLK_CMD23;
-	}
-
-	if (md->flags & MMC_BLK_CMD23 &&
-	    ((card->ext_csd.rel_param & EXT_CSD_WR_REL_PARAM_EN) ||
-	     card->ext_csd.rel_sectors)) {
-		md->flags |= MMC_BLK_REL_WR;
-		fua_enabled = true;
-		cache_enabled = true;
-	}
-	if (mmc_cache_enabled(card->host))
-		cache_enabled  = true;
-
-	blk_queue_write_cache(md->queue.queue, cache_enabled, fua_enabled);
-
 	string_get_size((u64)size, 512, STRING_UNITS_2,
 			cap_str, sizeof(cap_str));
 	pr_info("%s: %s %s %s%s\n",
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 241cdc2b2a2a3b..97ff993d31570c 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -344,10 +344,12 @@ static const struct blk_mq_ops mmc_mq_ops = {
 };
 
 static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
-		struct mmc_card *card)
+		struct mmc_card *card, unsigned int features)
 {
 	struct mmc_host *host = card->host;
-	struct queue_limits lim = { };
+	struct queue_limits lim = {
+		.features		= features,
+	};
 	struct gendisk *disk;
 
 	if (mmc_can_erase(card))
@@ -413,10 +415,12 @@ static inline bool mmc_merge_capable(struct mmc_host *host)
  * mmc_init_queue - initialise a queue structure.
  * @mq: mmc queue
  * @card: mmc card to attach this queue
+ * @features: block layer features (BLK_FEAT_*)
  *
  * Initialise a MMC card request queue.
  */
-struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
+struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
+		unsigned int features)
 {
 	struct mmc_host *host = card->host;
 	struct gendisk *disk;
@@ -460,7 +464,7 @@ struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card)
 		return ERR_PTR(ret);
 		
 
-	disk = mmc_alloc_disk(mq, card);
+	disk = mmc_alloc_disk(mq, card, features);
 	if (IS_ERR(disk))
 		blk_mq_free_tag_set(&mq->tag_set);
 	return disk;
diff --git a/drivers/mmc/core/queue.h b/drivers/mmc/core/queue.h
index 9ade3bcbb714e4..1498840a4ea008 100644
--- a/drivers/mmc/core/queue.h
+++ b/drivers/mmc/core/queue.h
@@ -94,7 +94,8 @@ struct mmc_queue {
 	struct work_struct	complete_work;
 };
 
-struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card);
+struct gendisk *mmc_init_queue(struct mmc_queue *mq, struct mmc_card *card,
+		unsigned int features);
 extern void mmc_cleanup_queue(struct mmc_queue *);
 extern void mmc_queue_suspend(struct mmc_queue *);
 extern void mmc_queue_resume(struct mmc_queue *);
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 3caa0717d46c01..1b9f57f231e8be 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -336,6 +336,8 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	lim.logical_block_size = tr->blksize;
 	if (tr->discard)
 		lim.max_hw_discard_sectors = UINT_MAX;
+	if (tr->flush)
+		lim.features |= BLK_FEAT_WRITE_CACHE;
 
 	/* Create gendisk */
 	gd = blk_mq_alloc_disk(new->tag_set, &lim, new);
@@ -373,9 +375,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	spin_lock_init(&new->queue_lock);
 	INIT_LIST_HEAD(&new->rq_list);
 
-	if (tr->flush)
-		blk_queue_write_cache(new->rq, true, false);
-
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
 
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 598fe2e89bda45..aff818469c114c 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -455,6 +455,7 @@ static int pmem_attach_disk(struct device *dev,
 		.logical_block_size	= pmem_sector_size(ndns),
 		.physical_block_size	= PAGE_SIZE,
 		.max_hw_sectors		= UINT_MAX,
+		.features		= BLK_FEAT_WRITE_CACHE,
 	};
 	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
@@ -495,6 +496,8 @@ static int pmem_attach_disk(struct device *dev,
 		dev_warn(dev, "unable to guarantee persistence of writes\n");
 		fua = 0;
 	}
+	if (fua)
+		lim.features |= BLK_FEAT_FUA;
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
 				dev_name(&ndns->dev))) {
@@ -543,7 +546,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	blk_queue_write_cache(q, true, fua);
 	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
 	if (pmem->pfn_flags & PFN_MAP)
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5a673fa5cb2612..9fc5e36fe2e55e 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2056,7 +2056,6 @@ static int nvme_update_ns_info_generic(struct nvme_ns *ns,
 static int nvme_update_ns_info_block(struct nvme_ns *ns,
 		struct nvme_ns_info *info)
 {
-	bool vwc = ns->ctrl->vwc & NVME_CTRL_VWC_PRESENT;
 	struct queue_limits lim;
 	struct nvme_id_ns_nvm *nvm = NULL;
 	struct nvme_zone_info zi = {};
@@ -2106,6 +2105,11 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 	    ns->head->ids.csi == NVME_CSI_ZNS)
 		nvme_update_zone_info(ns, &lim, &zi);
 
+	if (ns->ctrl->vwc & NVME_CTRL_VWC_PRESENT)
+		lim.features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
+	else
+		lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
+
 	/*
 	 * Register a metadata profile for PI, or the plain non-integrity NVMe
 	 * metadata masquerading as Type 0 if supported, otherwise reject block
@@ -2132,7 +2136,6 @@ static int nvme_update_ns_info_block(struct nvme_ns *ns,
 	if ((id->dlfeat & 0x7) == 0x1 && (id->dlfeat & (1 << 3)))
 		ns->head->features |= NVME_NS_DEAC;
 	set_disk_ro(ns->disk, nvme_ns_is_readonly(ns, info));
-	blk_queue_write_cache(ns->disk->queue, vwc, vwc);
 	set_bit(NVME_NS_READY, &ns->flags);
 	blk_mq_unfreeze_queue(ns->disk->queue);
 
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 12c59db02539e5..3d0e23a0a4ddd8 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -521,7 +521,6 @@ static void nvme_requeue_work(struct work_struct *work)
 int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 {
 	struct queue_limits lim;
-	bool vwc = false;
 
 	mutex_init(&head->lock);
 	bio_list_init(&head->requeue_list);
@@ -562,11 +561,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	if (ctrl->tagset->nr_maps > HCTX_TYPE_POLL &&
 	    ctrl->tagset->map[HCTX_TYPE_POLL].nr_queues)
 		blk_queue_flag_set(QUEUE_FLAG_POLL, head->disk->queue);
-
-	/* we need to propagate up the VMC settings */
-	if (ctrl->vwc & NVME_CTRL_VWC_PRESENT)
-		vwc = true;
-	blk_queue_write_cache(head->disk->queue, vwc, vwc);
 	return 0;
 }
 
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index 66f7d1e3429c86..d8ee4a4d4a6283 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -120,17 +120,18 @@ static const char *sd_cache_types[] = {
 	"write back, no read (daft)"
 };
 
-static void sd_set_flush_flag(struct scsi_disk *sdkp)
+static void sd_set_flush_flag(struct scsi_disk *sdkp,
+		struct queue_limits *lim)
 {
-	bool wc = false, fua = false;
-
 	if (sdkp->WCE) {
-		wc = true;
+		lim->features |= BLK_FEAT_WRITE_CACHE;
 		if (sdkp->DPOFUA)
-			fua = true;
+			lim->features |= BLK_FEAT_FUA;
+		else
+			lim->features &= ~BLK_FEAT_FUA;
+	} else {
+		lim->features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA);
 	}
-
-	blk_queue_write_cache(sdkp->disk->queue, wc, fua);
 }
 
 static ssize_t
@@ -168,9 +169,18 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 	wce = (ct & 0x02) && !sdkp->write_prot ? 1 : 0;
 
 	if (sdkp->cache_override) {
+		struct queue_limits lim;
+
 		sdkp->WCE = wce;
 		sdkp->RCD = rcd;
-		sd_set_flush_flag(sdkp);
+
+		lim = queue_limits_start_update(sdkp->disk->queue);
+		sd_set_flush_flag(sdkp, &lim);
+		blk_mq_freeze_queue(sdkp->disk->queue);
+		ret = queue_limits_commit_update(sdkp->disk->queue, &lim);
+		blk_mq_unfreeze_queue(sdkp->disk->queue);
+		if (ret)
+			return ret;
 		return count;
 	}
 
@@ -3663,7 +3673,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 	 * We now have all cache related info, determine how we deal
 	 * with flush requests.
 	 */
-	sd_set_flush_flag(sdkp);
+	sd_set_flush_flag(sdkp, &lim);
 
 	/* Initial block count limit based on CDB TRANSFER LENGTH field size. */
 	dev_max = sdp->use_16_for_rw ? SD_MAX_XFER_BLOCKS : SD_DEF_XFER_BLOCKS;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 0c247a71688561..acdfe5122faa44 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -282,6 +282,28 @@ static inline bool blk_op_is_passthrough(blk_opf_t op)
 	return op == REQ_OP_DRV_IN || op == REQ_OP_DRV_OUT;
 }
 
+/* flags set by the driver in queue_limits.features */
+enum {
+	/* supports a volatile write cache */
+	BLK_FEAT_WRITE_CACHE			= (1u << 0),
+
+	/* supports passing on the FUA bit */
+	BLK_FEAT_FUA				= (1u << 1),
+};
+
+/*
+ * Flags automatically inherited when stacking limits.
+ */
+#define BLK_FEAT_INHERIT_MASK \
+	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA)
+
+
+/* internal flags in queue_limits.flags */
+enum {
+	/* do not send FLUSH or FUA command despite advertised write cache */
+	BLK_FLAGS_WRITE_CACHE_DISABLED		= (1u << 31),
+};
+
 /*
  * BLK_BOUNCE_NONE:	never bounce (default)
  * BLK_BOUNCE_HIGH:	bounce all highmem pages
@@ -292,6 +314,8 @@ enum blk_bounce {
 };
 
 struct queue_limits {
+	unsigned int		features;
+	unsigned int		flags;
 	enum blk_bounce		bounce;
 	unsigned long		seg_boundary_mask;
 	unsigned long		virt_boundary_mask;
@@ -536,12 +560,9 @@ struct request_queue {
 #define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
-#define QUEUE_FLAG_HW_WC	13	/* Write back caching supported */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
 #define QUEUE_FLAG_STABLE_WRITES 15	/* don't modify blks until WB is done */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
-#define QUEUE_FLAG_WC		17	/* Write back caching */
-#define QUEUE_FLAG_FUA		18	/* device supports FUA writes */
 #define QUEUE_FLAG_DAX		19	/* device supports DAX */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
@@ -951,7 +972,6 @@ void queue_limits_stack_bdev(struct queue_limits *t, struct block_device *bdev,
 		sector_t offset, const char *pfx);
 extern void blk_queue_update_dma_pad(struct request_queue *, unsigned int);
 extern void blk_queue_rq_timeout(struct request_queue *, unsigned int);
-extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fua);
 
 struct blk_independent_access_ranges *
 disk_alloc_independent_access_ranges(struct gendisk *disk, int nr_ia_ranges);
@@ -1304,14 +1324,20 @@ static inline bool bdev_stable_writes(struct block_device *bdev)
 	return test_bit(QUEUE_FLAG_STABLE_WRITES, &q->queue_flags);
 }
 
+static inline bool blk_queue_write_cache(struct request_queue *q)
+{
+	return (q->limits.features & BLK_FEAT_WRITE_CACHE) &&
+		!(q->limits.flags & BLK_FLAGS_WRITE_CACHE_DISABLED);
+}
+
 static inline bool bdev_write_cache(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_WC, &bdev_get_queue(bdev)->queue_flags);
+	return blk_queue_write_cache(bdev_get_queue(bdev));
 }
 
 static inline bool bdev_fua(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_FUA, &bdev_get_queue(bdev)->queue_flags);
+	return bdev_get_queue(bdev)->limits.features & BLK_FEAT_FUA;
 }
 
 static inline bool bdev_nowait(struct block_device *bdev)
-- 
2.43.0
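The conversions above all share one pattern: drivers stop calling blk_queue_write_cache() after allocation and instead declare BLK_FEAT_WRITE_CACHE / BLK_FEAT_FUA in queue_limits.features before the disk is created, while the sysfs "write through" override lives in the separate flags word. A minimal userspace sketch of the resulting predicate logic (local stand-ins, not the kernel headers):

```c
#include <assert.h>

/*
 * Local stand-ins for the patch's BLK_FEAT_* / BLK_FLAGS_* constants and
 * struct queue_limits; illustrative only, not the kernel definitions.
 */
#define FEAT_WRITE_CACHE	(1u << 0)	/* volatile write cache present */
#define FEAT_FUA		(1u << 1)	/* device honours the FUA bit */

#define FLAG_WRITE_CACHE_DISABLED (1u << 31)	/* sysfs "write through" */

struct limits {
	unsigned int features;	/* set by the driver before disk allocation */
	unsigned int flags;	/* internal state, e.g. cache disabled */
};

/* same shape as the new blk_queue_write_cache() helper in the patch */
static int write_cache_enabled(const struct limits *lim)
{
	return (lim->features & FEAT_WRITE_CACHE) &&
	       !(lim->flags & FLAG_WRITE_CACHE_DISABLED);
}

/* same shape as the reworked bdev_fua() */
static int fua_enabled(const struct limits *lim)
{
	return (lim->features & FEAT_FUA) != 0;
}
```

A driver like pmem then just initialises `.features = BLK_FEAT_WRITE_CACHE` in its limits (adding BLK_FEAT_FUA when persistence is guaranteed), and the cached/FUA state can only change together with a frozen-queue limits update.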



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:33 2024
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 26/26] block: move the bounce flag into the features field
Date: Mon, 17 Jun 2024 08:04:53 +0200
Message-ID: <20240617060532.127975-27-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the bounce flag into the features field to reclaim a little bit of
space.
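The effect on the bounce check can be sketched with local stand-ins (illustrative only, not the kernel headers): instead of comparing an `enum blk_bounce` field, blk_queue_may_bounce() now tests a feature bit.

```c
#include <assert.h>

/* Local stand-in for BLK_FEAT_BOUNCE_HIGH; illustrative only. */
#define FEAT_BOUNCE_HIGH (1u << 14)

struct limits {
	unsigned int features;
};

/*
 * Sketch of the reworked blk_queue_may_bounce(): a feature-bit test
 * rather than an enum comparison.  The real helper additionally
 * requires CONFIG_BOUNCE and the actual presence of highmem, which is
 * omitted here.
 */
static int may_bounce(const struct limits *lim)
{
	return (lim->features & FEAT_BOUNCE_HIGH) != 0;
}
```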

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-settings.c    | 1 -
 block/blk.h             | 2 +-
 drivers/scsi/scsi_lib.c | 2 +-
 include/linux/blkdev.h  | 6 ++++--
 4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 96e07f24bd9aa1..d0e9096f93ca8a 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -479,7 +479,6 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 					b->max_write_zeroes_sectors);
 	t->max_zone_append_sectors = min(queue_limits_max_zone_append_sectors(t),
 					 queue_limits_max_zone_append_sectors(b));
-	t->bounce = max(t->bounce, b->bounce);
 
 	t->seg_boundary_mask = min_not_zero(t->seg_boundary_mask,
 					    b->seg_boundary_mask);
diff --git a/block/blk.h b/block/blk.h
index 79e8d5d4fe0caf..fa32f7fad5d7e6 100644
--- a/block/blk.h
+++ b/block/blk.h
@@ -394,7 +394,7 @@ struct bio *__blk_queue_bounce(struct bio *bio, struct request_queue *q);
 static inline bool blk_queue_may_bounce(struct request_queue *q)
 {
 	return IS_ENABLED(CONFIG_BOUNCE) &&
-		q->limits.bounce == BLK_BOUNCE_HIGH &&
+		(q->limits.features & BLK_FEAT_BOUNCE_HIGH) &&
 		max_low_pfn >= max_pfn;
 }
 
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index 54f771ec8cfb5e..e2f7bfb2b9e450 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -1986,7 +1986,7 @@ void scsi_init_limits(struct Scsi_Host *shost, struct queue_limits *lim)
 		shost->dma_alignment, dma_get_cache_alignment() - 1);
 
 	if (shost->no_highmem)
-		lim->bounce = BLK_BOUNCE_HIGH;
+		lim->features |= BLK_FEAT_BOUNCE_HIGH;
 
 	dma_set_seg_boundary(dev, shost->dma_boundary);
 	dma_set_max_seg_size(dev, shost->max_segment_size);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 2c433ebf6f2030..e96ba7b97288d2 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -325,6 +325,9 @@ enum {
 
 	/* skip this queue in blk_mq_(un)quiesce_tagset */
 	BLK_FEAT_SKIP_TAGSET_QUIESCE		= (1u << 13),
+
+	/* bounce all highmem pages */
+	BLK_FEAT_BOUNCE_HIGH			= (1u << 14),
 };
 
 /*
@@ -332,7 +335,7 @@ enum {
  */
 #define BLK_FEAT_INHERIT_MASK \
 	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL | \
-	 BLK_FEAT_STABLE_WRITES | BLK_FEAT_ZONED)
+	 BLK_FEAT_STABLE_WRITES | BLK_FEAT_ZONED | BLK_FEAT_BOUNCE_HIGH)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -352,7 +355,6 @@ enum blk_bounce {
 struct queue_limits {
 	unsigned int		features;
 	unsigned int		flags;
-	enum blk_bounce		bounce;
 	unsigned long		seg_boundary_mask;
 	unsigned long		virt_boundary_mask;
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:33 2024
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 14/26] block: move the nonrot flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:41 +0200
Message-ID: <20240617060532.127975-15-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the nonrot flag into the queue_limits feature field so that it can
be set atomically with the queue frozen.

Take this opportunity to invert the default: queues are now non-rotational
unless the driver opts into rotational, which matches the polarity of the
sysfs interface.

For z2ram, ps3vram, the two memstick drivers, ubiblock and dcssblk the new
rotational flag is not set, as they clearly are not rotational, even
though this is a behavior change.  Some other drivers unconditionally set
the rotational flag to keep the existing behavior, as they can arguably
still be used on rotational devices even if that is probably not their
main use today (e.g. virtio_blk and drbd).

The flag is automatically inherited in blk_stack_limits matching the
existing behavior in dm and md.
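The inheritance mentioned above can be sketched with local stand-ins (the bit values are assumptions; only the masking pattern mirrors the series): when limits are stacked, the inheritable feature bits of a bottom device are OR'd into the top device's features.

```c
#include <assert.h>

/* Local stand-ins; bit values are assumptions, not the kernel headers. */
#define FEAT_WRITE_CACHE	(1u << 0)
#define FEAT_FUA		(1u << 1)
#define FEAT_ROTATIONAL		(1u << 2)

/* bits copied from a bottom device when stacking, per the commit text */
#define FEAT_INHERIT_MASK (FEAT_WRITE_CACHE | FEAT_FUA | FEAT_ROTATIONAL)

/*
 * Sketch of how blk_stack_limits() propagates inheritable feature bits:
 * a stacked (dm/md) device picks up e.g. the rotational bit if a bottom
 * device has it set, while non-inheritable bits are masked off.
 */
static unsigned int stack_features(unsigned int top, unsigned int bottom)
{
	return top | (bottom & FEAT_INHERIT_MASK);
}
```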

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 arch/m68k/emu/nfblock.c             |  1 +
 arch/um/drivers/ubd_kern.c          |  1 -
 arch/xtensa/platforms/iss/simdisk.c |  5 +++-
 block/blk-mq-debugfs.c              |  1 -
 block/blk-sysfs.c                   | 39 ++++++++++++++++++++++++++---
 drivers/block/amiflop.c             |  5 +++-
 drivers/block/aoe/aoeblk.c          |  1 +
 drivers/block/ataflop.c             |  5 +++-
 drivers/block/brd.c                 |  2 --
 drivers/block/drbd/drbd_main.c      |  3 ++-
 drivers/block/floppy.c              |  3 ++-
 drivers/block/loop.c                |  8 +++---
 drivers/block/mtip32xx/mtip32xx.c   |  1 -
 drivers/block/n64cart.c             |  2 --
 drivers/block/nbd.c                 |  5 ----
 drivers/block/null_blk/main.c       |  1 -
 drivers/block/pktcdvd.c             |  1 +
 drivers/block/ps3disk.c             |  3 ++-
 drivers/block/rbd.c                 |  3 ---
 drivers/block/rnbd/rnbd-clt.c       |  4 ---
 drivers/block/sunvdc.c              |  1 +
 drivers/block/swim.c                |  5 +++-
 drivers/block/swim3.c               |  5 +++-
 drivers/block/ublk_drv.c            |  9 +++----
 drivers/block/virtio_blk.c          |  4 ++-
 drivers/block/xen-blkfront.c        |  1 -
 drivers/block/zram/zram_drv.c       |  2 --
 drivers/cdrom/gdrom.c               |  1 +
 drivers/md/bcache/super.c           |  2 --
 drivers/md/dm-table.c               | 12 ---------
 drivers/md/md.c                     | 13 ----------
 drivers/mmc/core/queue.c            |  1 -
 drivers/mtd/mtd_blkdevs.c           |  1 -
 drivers/nvdimm/btt.c                |  1 -
 drivers/nvdimm/pmem.c               |  1 -
 drivers/nvme/host/core.c            |  1 -
 drivers/nvme/host/multipath.c       |  1 -
 drivers/s390/block/dasd_genhd.c     |  1 -
 drivers/s390/block/scm_blk.c        |  1 -
 drivers/scsi/sd.c                   |  4 +--
 include/linux/blkdev.h              | 10 ++++----
 41 files changed, 83 insertions(+), 88 deletions(-)

diff --git a/arch/m68k/emu/nfblock.c b/arch/m68k/emu/nfblock.c
index 642fb80c5c4e31..8eea7ef9115146 100644
--- a/arch/m68k/emu/nfblock.c
+++ b/arch/m68k/emu/nfblock.c
@@ -98,6 +98,7 @@ static int __init nfhd_init_one(int id, u32 blocks, u32 bsize)
 {
 	struct queue_limits lim = {
 		.logical_block_size	= bsize,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	struct nfhd_device *dev;
 	int dev_id = id - NFHD_DEV_OFFSET;
diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
index 19e01691ea0ea7..9f1e76ddda5a26 100644
--- a/arch/um/drivers/ubd_kern.c
+++ b/arch/um/drivers/ubd_kern.c
@@ -882,7 +882,6 @@ static int ubd_add(int n, char **error_out)
 		goto out_cleanup_tags;
 	}
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
 	disk->major = UBD_MAJOR;
 	disk->first_minor = n << UBD_SHIFT;
 	disk->minors = 1 << UBD_SHIFT;
diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c
index defc67909a9c74..d6d2b533a5744d 100644
--- a/arch/xtensa/platforms/iss/simdisk.c
+++ b/arch/xtensa/platforms/iss/simdisk.c
@@ -263,6 +263,9 @@ static const struct proc_ops simdisk_proc_ops = {
 static int __init simdisk_setup(struct simdisk *dev, int which,
 		struct proc_dir_entry *procdir)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	char tmp[2] = { '0' + which, 0 };
 	int err;
 
@@ -271,7 +274,7 @@ static int __init simdisk_setup(struct simdisk *dev, int which,
 	spin_lock_init(&dev->lock);
 	dev->users = 0;
 
-	dev->gd = blk_alloc_disk(NULL, NUMA_NO_NODE);
+	dev->gd = blk_alloc_disk(&lim, NUMA_NO_NODE);
 	if (IS_ERR(dev->gd)) {
 		err = PTR_ERR(dev->gd);
 		goto out;
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index e8b9db7c30c455..4d0e62ec88f033 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -84,7 +84,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(NOMERGES),
 	QUEUE_FLAG_NAME(SAME_COMP),
 	QUEUE_FLAG_NAME(FAIL_IO),
-	QUEUE_FLAG_NAME(NONROT),
 	QUEUE_FLAG_NAME(IO_STAT),
 	QUEUE_FLAG_NAME(NOXMERGES),
 	QUEUE_FLAG_NAME(ADD_RANDOM),
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 4f524c1d5e08bd..637ed3bbbfb46f 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -263,6 +263,39 @@ static ssize_t queue_dma_alignment_show(struct request_queue *q, char *page)
 	return queue_var_show(queue_dma_alignment(q), page);
 }
 
+static ssize_t queue_feature_store(struct request_queue *q, const char *page,
+		size_t count, unsigned int feature)
+{
+	struct queue_limits lim;
+	unsigned long val;
+	ssize_t ret;
+
+	ret = queue_var_store(&val, page, count);
+	if (ret < 0)
+		return ret;
+
+	lim = queue_limits_start_update(q);
+	if (val)
+		lim.features |= feature;
+	else
+		lim.features &= ~feature;
+	ret = queue_limits_commit_update(q, &lim);
+	if (ret)
+		return ret;
+	return count;
+}
+
+#define QUEUE_SYSFS_FEATURE(_name, _feature)				 \
+static ssize_t queue_##_name##_show(struct request_queue *q, char *page) \
+{									 \
+	return sprintf(page, "%u\n", !!(q->limits.features & _feature)); \
+}									 \
+static ssize_t queue_##_name##_store(struct request_queue *q,		 \
+		const char *page, size_t count)				 \
+{									 \
+	return queue_feature_store(q, page, count, _feature);		 \
+}
+
 #define QUEUE_SYSFS_BIT_FNS(name, flag, neg)				\
 static ssize_t								\
 queue_##name##_show(struct request_queue *q, char *page)		\
@@ -289,7 +322,7 @@ queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
 	return ret;							\
 }
 
-QUEUE_SYSFS_BIT_FNS(nonrot, NONROT, 1);
+QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
 QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
@@ -526,7 +559,7 @@ static struct queue_sysfs_entry queue_hw_sector_size_entry = {
 	.show = queue_logical_block_size_show,
 };
 
-QUEUE_RW_ENTRY(queue_nonrot, "rotational");
+QUEUE_RW_ENTRY(queue_rotational, "rotational");
 QUEUE_RW_ENTRY(queue_iostats, "iostats");
 QUEUE_RW_ENTRY(queue_random, "add_random");
 QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
@@ -624,7 +657,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_write_zeroes_max_entry.attr,
 	&queue_zone_append_max_entry.attr,
 	&queue_zone_write_granularity_entry.attr,
-	&queue_nonrot_entry.attr,
+	&queue_rotational_entry.attr,
 	&queue_zoned_entry.attr,
 	&queue_nr_zones_entry.attr,
 	&queue_max_open_zones_entry.attr,
diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
index a25414228e4741..ff45701f7a5e31 100644
--- a/drivers/block/amiflop.c
+++ b/drivers/block/amiflop.c
@@ -1776,10 +1776,13 @@ static const struct blk_mq_ops amiflop_mq_ops = {
 
 static int fd_alloc_disk(int drive, int system)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	struct gendisk *disk;
 	int err;
 
-	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, &lim, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
index b6dac8cee70fe1..2028795ec61cbb 100644
--- a/drivers/block/aoe/aoeblk.c
+++ b/drivers/block/aoe/aoeblk.c
@@ -337,6 +337,7 @@ aoeblk_gdalloc(void *vp)
 	struct queue_limits lim = {
 		.max_hw_sectors		= aoe_maxsectors,
 		.io_opt			= SZ_2M,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	ulong flags;
 	int late = 0;
diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
index cacc4ba942a814..4ee10a742bdb93 100644
--- a/drivers/block/ataflop.c
+++ b/drivers/block/ataflop.c
@@ -1992,9 +1992,12 @@ static const struct blk_mq_ops ataflop_mq_ops = {
 
 static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	struct gendisk *disk;
 
-	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
+	disk = blk_mq_alloc_disk(&unit[drive].tag_set, &lim, NULL);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
 
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index 558d8e67056608..b25dc463b5e3a6 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -366,8 +366,6 @@ static int brd_alloc(int i)
 	strscpy(disk->disk_name, buf, DISK_NAME_LEN);
 	set_capacity(disk, rd_size * 2);
 	
-	/* Tell the block layer that this is not a rotational device */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
 	err = add_disk(disk);
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index bf42a46781fa21..2ef29a47807550 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2697,7 +2697,8 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 		 * connect.
 		 */
 		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
-		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+					  BLK_FEAT_ROTATIONAL,
 	};
 
 	device = minor_to_device(minor);
diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
index 25c9d85667f1a2..6d7f7df97c3a6c 100644
--- a/drivers/block/floppy.c
+++ b/drivers/block/floppy.c
@@ -4516,7 +4516,8 @@ static bool floppy_available(int drive)
 static int floppy_alloc_disk(unsigned int drive, unsigned int type)
 {
 	struct queue_limits lim = {
-		.max_hw_sectors = 64,
+		.max_hw_sectors		= 64,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	struct gendisk *disk;
 
diff --git a/drivers/block/loop.c b/drivers/block/loop.c
index 08d0fc7f17b701..86b5d956dc4e02 100644
--- a/drivers/block/loop.c
+++ b/drivers/block/loop.c
@@ -985,13 +985,11 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
 	lim.logical_block_size = bsize;
 	lim.physical_block_size = bsize;
 	lim.io_min = bsize;
-	lim.features &= ~BLK_FEAT_WRITE_CACHE;
+	lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_ROTATIONAL);
 	if (file->f_op->fsync && !(lo->lo_flags & LO_FLAGS_READ_ONLY))
 		lim.features |= BLK_FEAT_WRITE_CACHE;
-	if (!backing_bdev || bdev_nonrot(backing_bdev))
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, lo->lo_queue);
+	if (backing_bdev && !bdev_nonrot(backing_bdev))
+		lim.features |= BLK_FEAT_ROTATIONAL;
 	loop_config_discard(lo, &lim);
 	return queue_limits_commit_update(lo->lo_queue, &lim);
 }
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 43a187609ef794..1dbbf72659d549 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3485,7 +3485,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 		goto start_service_thread;
 
 	/* Set device limits. */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, dd->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, dd->queue);
 	dma_set_max_seg_size(&dd->pdev->dev, 0x400000);
 
diff --git a/drivers/block/n64cart.c b/drivers/block/n64cart.c
index 27b2187e7a6d55..b9fdeff31cafdf 100644
--- a/drivers/block/n64cart.c
+++ b/drivers/block/n64cart.c
@@ -150,8 +150,6 @@ static int __init n64cart_probe(struct platform_device *pdev)
 	set_capacity(disk, size >> SECTOR_SHIFT);
 	set_disk_ro(disk, 1);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
-
 	err = add_disk(disk);
 	if (err)
 		goto out_cleanup_disk;
diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index cb1c86a6a3fb9d..6cddf5baffe02a 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -1867,11 +1867,6 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
 		goto out_err_disk;
 	}
 
-	/*
-	 * Tell the block layer that we are not a rotational device
-	 */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
-
 	mutex_init(&nbd->config_lock);
 	refcount_set(&nbd->config_refs, 0);
 	/*
diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
index 21f9d256e88402..83a4ebe4763ae5 100644
--- a/drivers/block/null_blk/main.c
+++ b/drivers/block/null_blk/main.c
@@ -1948,7 +1948,6 @@ static int null_add_dev(struct nullb_device *dev)
 	}
 
 	nullb->q->queuedata = nullb;
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
 
 	rv = ida_alloc(&nullb_indexes, GFP_KERNEL);
 	if (rv < 0)
diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
index 8a2ce80700109d..7cece5884b9c67 100644
--- a/drivers/block/pktcdvd.c
+++ b/drivers/block/pktcdvd.c
@@ -2622,6 +2622,7 @@ static int pkt_setup_dev(dev_t dev, dev_t* pkt_dev)
 	struct queue_limits lim = {
 		.max_hw_sectors		= PACKET_MAX_SECTORS,
 		.logical_block_size	= CD_FRAMESIZE,
+		.features		= BLK_FEAT_ROTATIONAL,
 	};
 	int idx;
 	int ret = -ENOMEM;
diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
index 8b73cf459b5937..ff45ed76646957 100644
--- a/drivers/block/ps3disk.c
+++ b/drivers/block/ps3disk.c
@@ -388,7 +388,8 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
 		.max_segments		= -1,
 		.max_segment_size	= dev->bounce_size,
 		.dma_alignment		= dev->blk_size - 1,
-		.features		= BLK_FEAT_WRITE_CACHE,
+		.features		= BLK_FEAT_WRITE_CACHE |
+					  BLK_FEAT_ROTATIONAL,
 	};
 	struct gendisk *gendisk;
 
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index 22ad704f81d8b9..ec1f1c7d4275cd 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4997,9 +4997,6 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 	disk->fops = &rbd_bd_ops;
 	disk->private_data = rbd_dev;
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-	/* QUEUE_FLAG_ADD_RANDOM is off by default for blk-mq */
-
 	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
 
diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
index 02c4b173182719..4918b0f68b46cd 100644
--- a/drivers/block/rnbd/rnbd-clt.c
+++ b/drivers/block/rnbd/rnbd-clt.c
@@ -1352,10 +1352,6 @@ static int rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev,
 	if (dev->access_mode == RNBD_ACCESS_RO)
 		set_disk_ro(dev->gd, true);
 
-	/*
-	 * Network device does not need rotational
-	 */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, dev->queue);
 	err = add_disk(dev->gd);
 	if (err)
 		put_disk(dev->gd);
diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
index 5286cb8e0824d1..2d38331ee66793 100644
--- a/drivers/block/sunvdc.c
+++ b/drivers/block/sunvdc.c
@@ -791,6 +791,7 @@ static int probe_disk(struct vdc_port *port)
 		.seg_boundary_mask		= PAGE_SIZE - 1,
 		.max_segment_size		= PAGE_SIZE,
 		.max_segments			= port->ring_cookies,
+		.features			= BLK_FEAT_ROTATIONAL,
 	};
 	struct request_queue *q;
 	struct gendisk *g;
diff --git a/drivers/block/swim.c b/drivers/block/swim.c
index 6731678f3a41db..126f151c4f2cf0 100644
--- a/drivers/block/swim.c
+++ b/drivers/block/swim.c
@@ -787,6 +787,9 @@ static void swim_cleanup_floppy_disk(struct floppy_state *fs)
 
 static int swim_floppy_init(struct swim_priv *swd)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	int err;
 	int drive;
 	struct swim __iomem *base = swd->base;
@@ -820,7 +823,7 @@ static int swim_floppy_init(struct swim_priv *swd)
 			goto exit_put_disks;
 
 		swd->unit[drive].disk =
-			blk_mq_alloc_disk(&swd->unit[drive].tag_set, NULL,
+			blk_mq_alloc_disk(&swd->unit[drive].tag_set, &lim,
 					  &swd->unit[drive]);
 		if (IS_ERR(swd->unit[drive].disk)) {
 			blk_mq_free_tag_set(&swd->unit[drive].tag_set);
diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
index a04756ac778ee8..90be1017f7bfcd 100644
--- a/drivers/block/swim3.c
+++ b/drivers/block/swim3.c
@@ -1189,6 +1189,9 @@ static int swim3_add_device(struct macio_dev *mdev, int index)
 static int swim3_attach(struct macio_dev *mdev,
 			const struct of_device_id *match)
 {
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	struct floppy_state *fs;
 	struct gendisk *disk;
 	int rc;
@@ -1210,7 +1213,7 @@ static int swim3_attach(struct macio_dev *mdev,
 	if (rc)
 		goto out_unregister;
 
-	disk = blk_mq_alloc_disk(&fs->tag_set, NULL, fs);
+	disk = blk_mq_alloc_disk(&fs->tag_set, &lim, fs);
 	if (IS_ERR(disk)) {
 		rc = PTR_ERR(disk);
 		goto out_free_tag_set;
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index e45c65c1848d31..4fcde099935868 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -484,14 +484,8 @@ static inline unsigned ublk_pos_to_tag(loff_t pos)
 
 static void ublk_dev_param_basic_apply(struct ublk_device *ub)
 {
-	struct request_queue *q = ub->ub_disk->queue;
 	const struct ublk_param_basic *p = &ub->params.basic;
 
-	if (p->attrs & UBLK_ATTR_ROTATIONAL)
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
-	else
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-
 	if (p->attrs & UBLK_ATTR_READ_ONLY)
 		set_disk_ro(ub->ub_disk, true);
 
@@ -2214,6 +2208,9 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 			lim.features |= BLK_FEAT_FUA;
 	}
 
+	if (ub->params.basic.attrs & UBLK_ATTR_ROTATIONAL)
+		lim.features |= BLK_FEAT_ROTATIONAL;
+
 	if (wait_for_completion_interruptible(&ub->completion) != 0)
 		return -EINTR;
 
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index b1a3c293528519..13a2f24f176628 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1451,7 +1451,9 @@ static int virtblk_read_limits(struct virtio_blk *vblk,
 static int virtblk_probe(struct virtio_device *vdev)
 {
 	struct virtio_blk *vblk;
-	struct queue_limits lim = { };
+	struct queue_limits lim = {
+		.features		= BLK_FEAT_ROTATIONAL,
+	};
 	int err, index;
 	unsigned int queue_depth;
 
diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 9aafce3e5987bf..fa3a2ba525458b 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1146,7 +1146,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 		err = PTR_ERR(gd);
 		goto out_free_tag_set;
 	}
-	blk_queue_flag_set(QUEUE_FLAG_VIRT, gd->queue);
 
 	strcpy(gd->disk_name, DEV_NAME);
 	ptr = encode_disk_name(gd->disk_name + sizeof(DEV_NAME) - 1, offset);
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index 3acd7006ad2ccd..aad840fc7e18e3 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2245,8 +2245,6 @@ static int zram_add(void)
 
 	/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
 	set_capacity(zram->disk, 0);
-	/* zram devices sort of resembles non-rotational disks */
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, zram->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, zram->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);
 	ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
index eefdd422ad8e9f..71cfe7a85913c4 100644
--- a/drivers/cdrom/gdrom.c
+++ b/drivers/cdrom/gdrom.c
@@ -744,6 +744,7 @@ static int probe_gdrom(struct platform_device *devptr)
 		.max_segments			= 1,
 		/* set a large max size to get most from DMA */
 		.max_segment_size		= 0x40000,
+		.features			= BLK_FEAT_ROTATIONAL,
 	};
 	int err;
 
diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
index cb6595c8b5514e..baa364eedd0051 100644
--- a/drivers/md/bcache/super.c
+++ b/drivers/md/bcache/super.c
@@ -974,8 +974,6 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
 	d->disk->minors		= BCACHE_MINORS;
 	d->disk->fops		= ops;
 	d->disk->private_data	= d;
-
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, d->disk->queue);
 	return 0;
 
 out_bioset_exit:
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 03abdae646829c..c062af32970934 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1716,12 +1716,6 @@ static int device_dax_write_cache_enabled(struct dm_target *ti,
 	return false;
 }
 
-static int device_is_rotational(struct dm_target *ti, struct dm_dev *dev,
-				sector_t start, sector_t len, void *data)
-{
-	return !bdev_nonrot(dev->bdev);
-}
-
 static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
 			     sector_t start, sector_t len, void *data)
 {
@@ -1870,12 +1864,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
 		dax_write_cache(t->md->dax_dev, true);
 
-	/* Ensure that all underlying devices are non-rotational. */
-	if (dm_table_any_dev_attr(t, device_is_rotational, NULL))
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
-	else
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-
 	/*
 	 * Some devices don't use blk_integrity but still want stable pages
 	 * because they do their own checksumming.
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 2f4c5d1755d857..c23423c51fb7c2 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -6151,20 +6151,7 @@ int md_run(struct mddev *mddev)
 
 	if (!mddev_is_dm(mddev)) {
 		struct request_queue *q = mddev->gendisk->queue;
-		bool nonrot = true;
 
-		rdev_for_each(rdev, mddev) {
-			if (rdev->raid_disk >= 0 && !bdev_nonrot(rdev->bdev)) {
-				nonrot = false;
-				break;
-			}
-		}
-		if (mddev->degraded)
-			nonrot = false;
-		if (nonrot)
-			blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
-		else
-			blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
 		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, q);
 
 		/* Set the NOWAIT flags if all underlying devices support it */
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 97ff993d31570c..b4f62fa845864c 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -387,7 +387,6 @@ static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, mq->queue);
 	blk_queue_rq_timeout(mq->queue, 60 * HZ);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
 
 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index 1b9f57f231e8be..bf8369ce7ddf1d 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -375,7 +375,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	spin_lock_init(&new->queue_lock);
 	INIT_LIST_HEAD(&new->rq_list);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
 
 	gd->queue = new->rq;
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index c5f8451b494d6c..e474afa8e9f68d 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1518,7 +1518,6 @@ static int btt_blk_init(struct btt *btt)
 	btt->btt_disk->fops = &btt_fops;
 	btt->btt_disk->private_data = btt;
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, btt->btt_disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, btt->btt_disk->queue);
 
 	set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index aff818469c114c..501cf226df0187 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -546,7 +546,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
 	if (pmem->pfn_flags & PFN_MAP)
 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 9fc5e36fe2e55e..0d753fe71f35b0 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3744,7 +3744,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 	if (ctrl->opts && ctrl->opts->data_digest)
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
 	if (ctrl->ops->supports_pci_p2pdma &&
 	    ctrl->ops->supports_pci_p2pdma(ctrl))
 		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 3d0e23a0a4ddd8..58c13304e558e0 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -549,7 +549,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	sprintf(head->disk->disk_name, "nvme%dn%d",
 			ctrl->subsys->instance, head->instance);
 
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, head->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_IO_STAT, head->disk->queue);
 	/*
diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
index 4533dd055ca8e3..1aa426b1deddc7 100644
--- a/drivers/s390/block/dasd_genhd.c
+++ b/drivers/s390/block/dasd_genhd.c
@@ -68,7 +68,6 @@ int dasd_gendisk_alloc(struct dasd_block *block)
 		blk_mq_free_tag_set(&block->tag_set);
 		return PTR_ERR(gdp);
 	}
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, gdp->queue);
 
 	/* Initialize gendisk structure. */
 	gdp->major = DASD_MAJOR;
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index 1d456a5a3bfb8e..2e2309fa9a0b34 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -475,7 +475,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 		goto out_tag;
 	}
 	rq = bdev->rq = bdev->gendisk->queue;
-	blk_queue_flag_set(QUEUE_FLAG_NONROT, rq);
 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, rq);
 
 	bdev->gendisk->private_data = scmdev;
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index d8ee4a4d4a6283..a42c3c45e86830 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3318,7 +3318,7 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 	rcu_read_unlock();
 
 	if (rot == 1) {
-		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
+		lim->features &= ~BLK_FEAT_ROTATIONAL;
 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
 	}
 
@@ -3646,7 +3646,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 		 * cause this to be updated correctly and any device which
 		 * doesn't support it should be treated as rotational.
 		 */
-		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
+		lim.features |= BLK_FEAT_ROTATIONAL;
 		blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q);
 
 		if (scsi_device_supports_vpd(sdp)) {
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index acdfe5122faa44..988e3248cffeb7 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -289,14 +289,16 @@ enum {
 
 	/* supports passing on the FUA bit */
 	BLK_FEAT_FUA				= (1u << 1),
+
+	/* rotational device (hard drive or floppy) */
+	BLK_FEAT_ROTATIONAL			= (1u << 2),
 };
 
 /*
  * Flags automatically inherited when stacking limits.
  */
 #define BLK_FEAT_INHERIT_MASK \
-	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA)
-
+	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -553,8 +555,6 @@ struct request_queue {
 #define QUEUE_FLAG_NOMERGES     3	/* disable merge attempts */
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
-#define QUEUE_FLAG_NONROT	6	/* non-rotational device (SSD) */
-#define QUEUE_FLAG_VIRT		QUEUE_FLAG_NONROT /* paravirt device */
 #define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
 #define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
@@ -589,7 +589,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_nomerges(q)	test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
 #define blk_queue_noxmerges(q)	\
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
-#define blk_queue_nonrot(q)	test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
+#define blk_queue_nonrot(q)	(!((q)->limits.features & BLK_FEAT_ROTATIONAL))
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
 #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741651.1148313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wf-0006KE-GP; Mon, 17 Jun 2024 06:07:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741651.1148313; Mon, 17 Jun 2024 06:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wf-0006Hp-8z; Mon, 17 Jun 2024 06:07:33 +0000
Received: by outflank-mailman (input) for mailman id 741651;
 Mon, 17 Jun 2024 06:07:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5Ve-0001Pt-Rh
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:30 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc67f9ad-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:06:29 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5VJ-00000009IaG-2UjF; Mon, 17 Jun 2024 06:06:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc67f9ad-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=sVPSgMGllCuuf3AHxeXTZ5uQtffNoK6cimLoRUqfpII=; b=VioNQoDuEPZXCsKlMJ8bqEiq0T
	kRBW+7G7WzsxTJ4OBci76q6USGBDDQiOfibYrttMiSM6xgfALh4eJYkdqxFkKpa0fvjzH6PCnCyq+
	ZKJWcyQ559x7OvgyNRyO76m6ErDBT2VgUaabGfSLSwYnZibkII3KKAqxNLZYVz1hAY37asOxO9J3I
	O0+z/zCRcV2GKzvDH6kzspoBq8lHRUzFkF2FhVrUNIE0TFqG5TiD/1VJMeLWPvEK26w6rEkb+t/4R
	Peq6bWwz+eDuzNivW/LlWb9gU264Y0QgY5uwQWqqOfawz68inb+qcMNDkfGNqakoL9YeDx9VdO556
	pwRWtwmA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Bart Van Assche <bvanassche@acm.org>,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 12/26] block: remove blk_flush_policy
Date: Mon, 17 Jun 2024 08:04:39 +0200
Message-ID: <20240617060532.127975-13-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Fold blk_flush_policy into the only caller to prepare for pending changes
to it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 block/blk-flush.c | 33 +++++++++++++++------------------
 1 file changed, 15 insertions(+), 18 deletions(-)

diff --git a/block/blk-flush.c b/block/blk-flush.c
index c17cf8ed8113db..2234f8b3fc05f2 100644
--- a/block/blk-flush.c
+++ b/block/blk-flush.c
@@ -100,23 +100,6 @@ blk_get_flush_queue(struct request_queue *q, struct blk_mq_ctx *ctx)
 	return blk_mq_map_queue(q, REQ_OP_FLUSH, ctx)->fq;
 }
 
-static unsigned int blk_flush_policy(unsigned long fflags, struct request *rq)
-{
-	unsigned int policy = 0;
-
-	if (blk_rq_sectors(rq))
-		policy |= REQ_FSEQ_DATA;
-
-	if (fflags & (1UL << QUEUE_FLAG_WC)) {
-		if (rq->cmd_flags & REQ_PREFLUSH)
-			policy |= REQ_FSEQ_PREFLUSH;
-		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
-		    (rq->cmd_flags & REQ_FUA))
-			policy |= REQ_FSEQ_POSTFLUSH;
-	}
-	return policy;
-}
-
 static unsigned int blk_flush_cur_seq(struct request *rq)
 {
 	return 1 << ffz(rq->flush.seq);
@@ -399,12 +382,26 @@ bool blk_insert_flush(struct request *rq)
 {
 	struct request_queue *q = rq->q;
 	unsigned long fflags = q->queue_flags;	/* may change, cache */
-	unsigned int policy = blk_flush_policy(fflags, rq);
 	struct blk_flush_queue *fq = blk_get_flush_queue(q, rq->mq_ctx);
+	unsigned int policy = 0;
 
 	/* FLUSH/FUA request must never be merged */
 	WARN_ON_ONCE(rq->bio != rq->biotail);
 
+	if (blk_rq_sectors(rq))
+		policy |= REQ_FSEQ_DATA;
+
+	/*
+	 * Check which flushes we need to sequence for this operation.
+	 */
+	if (fflags & (1UL << QUEUE_FLAG_WC)) {
+		if (rq->cmd_flags & REQ_PREFLUSH)
+			policy |= REQ_FSEQ_PREFLUSH;
+		if (!(fflags & (1UL << QUEUE_FLAG_FUA)) &&
+		    (rq->cmd_flags & REQ_FUA))
+			policy |= REQ_FSEQ_POSTFLUSH;
+	}
+
 	/*
 	 * @policy now records what operations need to be done.  Adjust
 	 * REQ_PREFLUSH and FUA for the driver.
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741653.1148321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wg-0006VI-43; Mon, 17 Jun 2024 06:07:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741653.1148321; Mon, 17 Jun 2024 06:07:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wf-0006T7-QM; Mon, 17 Jun 2024 06:07:33 +0000
Received: by outflank-mailman (input) for mailman id 741653;
 Mon, 17 Jun 2024 06:07:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5WE-0001Pt-AZ
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:07:06 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1bb2b65-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:07:05 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Vv-00000009JAE-3Sf5; Mon, 17 Jun 2024 06:06:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1bb2b65-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=vn3qKVXlb3dlpaG1FcP/uUTFvMHCHs0BCPN12d+72uQ=; b=18/16Uiu0l0OVFyCZgOUXI2UsQ
	7OOWoO5104rZ5Q2N8+ViZ3o5DjJxOFVBX+DeTfk5Fu9T9peg/YzVw5MYKf75BMff7Rn2YsPuZRF7y
	Qn3+12Mn+XP5Kc+BUIHvcaOICBzIdGaC/GndpZkR9W1BpcHpbEK3suPTPv+DvSo1wSPu4trKeehaU
	Guot6zH5s3ev2zWqGmquOst4ebK8LrA0s+OXaU5AWBWdnA8RKnDPMViBpuBdYwmfTGtHxnwM1bpVU
	OdPBxcnm4lfgYfk5G1R4T5Kunio2RCvH36OEuh70hsWl738wSVw6ey0RVXhYgcn7dlGrjbNNKbC3U
	9ZIBPwag==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 23/26] block: move the zone_resetall flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:50 +0200
Message-ID: <20240617060532.127975-24-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the zone_resetall flag into the queue_limits feature field so that
it can be set atomically with the queue frozen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq-debugfs.c         | 1 -
 drivers/block/null_blk/zoned.c | 3 +--
 drivers/block/ublk_drv.c       | 4 +---
 drivers/block/virtio_blk.c     | 3 +--
 drivers/nvme/host/zns.c        | 3 +--
 drivers/scsi/sd_zbc.c          | 5 +----
 include/linux/blkdev.h         | 6 ++++--
 7 files changed, 9 insertions(+), 16 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 3a21527913840d..f2fd72f4414ae8 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -91,7 +91,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
 	QUEUE_FLAG_NAME(PCI_P2PDMA),
-	QUEUE_FLAG_NAME(ZONE_RESETALL),
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
 	QUEUE_FLAG_NAME(SQ_SCHED),
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index ca8e739e76b981..b42c00f1313254 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -158,7 +158,7 @@ int null_init_zoned_dev(struct nullb_device *dev,
 		sector += dev->zone_size_sects;
 	}
 
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 	lim->chunk_sectors = dev->zone_size_sects;
 	lim->max_zone_append_sectors = dev->zone_append_max_sectors;
 	lim->max_open_zones = dev->zone_max_open;
@@ -171,7 +171,6 @@ int null_register_zoned_dev(struct nullb *nullb)
 	struct request_queue *q = nullb->q;
 	struct gendisk *disk = nullb->disk;
 
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
 	disk->nr_zones = bdev_nr_zones(disk->part0);
 
 	pr_info("%s: using %s zone append\n",
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 69c16018cbb19a..4fdff13fc23b8a 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -248,8 +248,6 @@ static int ublk_dev_param_zoned_validate(const struct ublk_device *ub)
 
 static void ublk_dev_param_zoned_apply(struct ublk_device *ub)
 {
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, ub->ub_disk->queue);
-
 	ub->ub_disk->nr_zones = ublk_get_nr_zones(ub);
 }
 
@@ -2196,7 +2194,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 		if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
 			return -EOPNOTSUPP;
 
-		lim.features |= BLK_FEAT_ZONED;
+		lim.features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 		lim.max_active_zones = p->max_active_zones;
 		lim.max_open_zones =  p->max_open_zones;
 		lim.max_zone_append_sectors = p->max_zone_append_sectors;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index cea45b296f8bec..6c64a67ab9c901 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -728,7 +728,7 @@ static int virtblk_read_zoned_limits(struct virtio_blk *vblk,
 
 	dev_dbg(&vdev->dev, "probing host-managed zoned device\n");
 
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 
 	virtio_cread(vdev, struct virtio_blk_config,
 		     zoned.max_open_zones, &v);
@@ -1548,7 +1548,6 @@ static int virtblk_probe(struct virtio_device *vdev)
 	 */
 	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
 	    (lim.features & BLK_FEAT_ZONED)) {
-		blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, vblk->disk->queue);
 		err = blk_revalidate_disk_zones(vblk->disk);
 		if (err)
 			goto out_cleanup_disk;
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index 06f2417aa50de7..99bb89c2495ae3 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -108,13 +108,12 @@ int nvme_query_zone_info(struct nvme_ns *ns, unsigned lbaf,
 void nvme_update_zone_info(struct nvme_ns *ns, struct queue_limits *lim,
 		struct nvme_zone_info *zi)
 {
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 	lim->max_open_zones = zi->max_open_zones;
 	lim->max_active_zones = zi->max_active_zones;
 	lim->max_zone_append_sectors = ns->ctrl->max_zone_append;
 	lim->chunk_sectors = ns->head->zsze =
 		nvme_lba_to_sect(ns->head, zi->zone_size);
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, ns->queue);
 }
 
 static void *nvme_zns_alloc_report_buffer(struct nvme_ns *ns,
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index d3f84665946ec4..f7067afac79c14 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -592,8 +592,6 @@ int sd_zbc_revalidate_zones(struct scsi_disk *sdkp)
 int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 		u8 buf[SD_BUF_SIZE])
 {
-	struct gendisk *disk = sdkp->disk;
-	struct request_queue *q = disk->queue;
 	unsigned int nr_zones;
 	u32 zone_blocks = 0;
 	int ret;
@@ -601,7 +599,7 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	if (sdkp->device->type != TYPE_ZBC)
 		return 0;
 
-	lim->features |= BLK_FEAT_ZONED;
+	lim->features |= BLK_FEAT_ZONED | BLK_FEAT_ZONE_RESETALL;
 
 	/*
 	 * Per ZBC and ZAC specifications, writes in sequential write required
@@ -630,7 +628,6 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	sdkp->early_zone_info.zone_blocks = zone_blocks;
 
 	/* The drive satisfies the kernel restrictions: set it up */
-	blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, q);
 	if (sdkp->zones_max_open == U32_MAX)
 		lim->max_open_zones = 0;
 	else
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index bdc30c1fb1b57b..1077cb8d8fd808 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -316,6 +316,9 @@ enum {
 
 	/* is a zoned device */
 	BLK_FEAT_ZONED				= (1u << 10),
+
+	/* supports Zone Reset All */
+	BLK_FEAT_ZONE_RESETALL			= (1u << 11),
 };
 
 /*
@@ -586,7 +589,6 @@ struct request_queue {
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
 #define QUEUE_FLAG_PCI_P2PDMA	25	/* device supports PCI p2p requests */
-#define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
@@ -607,7 +609,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_nonrot(q)	((q)->limits.features & BLK_FEAT_ROTATIONAL)
 #define blk_queue_io_stat(q)	((q)->limits.features & BLK_FEAT_IO_STAT)
 #define blk_queue_zone_resetall(q)	\
-	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
+	((q)->limits.features & BLK_FEAT_ZONE_RESETALL)
 #define blk_queue_dax(q)	((q)->limits.features & BLK_FEAT_DAX)
 #define blk_queue_pci_p2pdma(q)	\
 	test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741654.1148331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wh-0006nq-3T; Mon, 17 Jun 2024 06:07:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741654.1148331; Mon, 17 Jun 2024 06:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wg-0006l6-Rh; Mon, 17 Jun 2024 06:07:34 +0000
Received: by outflank-mailman (input) for mailman id 741654;
 Mon, 17 Jun 2024 06:07:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5WE-0001PY-9n
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:07:06 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d0f844a7-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:07:04 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Vs-00000009J7N-36FO; Mon, 17 Jun 2024 06:06:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0f844a7-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=j3/q6++/T0UiysNKT6ebBR3awbgnCQMkBzrPFvHZKQg=; b=NJZCojfZFjIHjdeThEB5UEO5Nk
	QNC7VhBPWkRcqgkZKxl1pMve2IPbjVKHCTiWE0oKqZkGQr382Hsf+pdh3k2aOSBzxX8pT1Od1LR0p
	ULQFxibinwOrILLy4a8KYEfVWq/uqIsJQzmH/PwZ78+6OG82/CKXLijYXDFRpJ8c0KYO0YwHgQ9La
	K6+cfGf+zXrSVcf79qtnNY1fK2RQPhSPeFvB9iVrCfamm6IUC5lHWTbx3xeT4q/cfnpVGmpwMJfhK
	UzcfgZYbTigV2cdi9g61aNGRbJgATnWpjl0PbssfRrhfVySkFp+s2qp339sm8xlTjxogm8On27Zy6
	FTsiRxhA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 22/26] block: move the zoned flag into the features field
Date: Mon, 17 Jun 2024 08:04:49 +0200
Message-ID: <20240617060532.127975-23-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the zoned flag into the features field to reclaim a little
bit of space.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-settings.c           |  5 ++---
 drivers/block/null_blk/zoned.c |  2 +-
 drivers/block/ublk_drv.c       |  2 +-
 drivers/block/virtio_blk.c     |  5 +++--
 drivers/md/dm-table.c          | 11 ++++++-----
 drivers/md/dm-zone.c           |  2 +-
 drivers/md/dm-zoned-target.c   |  2 +-
 drivers/nvme/host/zns.c        |  2 +-
 drivers/scsi/sd_zbc.c          |  2 +-
 include/linux/blkdev.h         |  9 ++++++---
 10 files changed, 23 insertions(+), 19 deletions(-)

diff --git a/block/blk-settings.c b/block/blk-settings.c
index 026ba68d829856..96e07f24bd9aa1 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -68,7 +68,7 @@ static void blk_apply_bdi_limits(struct backing_dev_info *bdi,
 
 static int blk_validate_zoned_limits(struct queue_limits *lim)
 {
-	if (!lim->zoned) {
+	if (!(lim->features & BLK_FEAT_ZONED)) {
 		if (WARN_ON_ONCE(lim->max_open_zones) ||
 		    WARN_ON_ONCE(lim->max_active_zones) ||
 		    WARN_ON_ONCE(lim->zone_write_granularity) ||
@@ -602,8 +602,7 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 						   b->max_secure_erase_sectors);
 	t->zone_write_granularity = max(t->zone_write_granularity,
 					b->zone_write_granularity);
-	t->zoned = max(t->zoned, b->zoned);
-	if (!t->zoned) {
+	if (!(t->features & BLK_FEAT_ZONED)) {
 		t->zone_write_granularity = 0;
 		t->max_zone_append_sectors = 0;
 	}
diff --git a/drivers/block/null_blk/zoned.c b/drivers/block/null_blk/zoned.c
index f118d304f31080..ca8e739e76b981 100644
--- a/drivers/block/null_blk/zoned.c
+++ b/drivers/block/null_blk/zoned.c
@@ -158,7 +158,7 @@ int null_init_zoned_dev(struct nullb_device *dev,
 		sector += dev->zone_size_sects;
 	}
 
-	lim->zoned = true;
+	lim->features |= BLK_FEAT_ZONED;
 	lim->chunk_sectors = dev->zone_size_sects;
 	lim->max_zone_append_sectors = dev->zone_append_max_sectors;
 	lim->max_open_zones = dev->zone_max_open;
diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
index 4fcde099935868..69c16018cbb19a 100644
--- a/drivers/block/ublk_drv.c
+++ b/drivers/block/ublk_drv.c
@@ -2196,7 +2196,7 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
 		if (!IS_ENABLED(CONFIG_BLK_DEV_ZONED))
 			return -EOPNOTSUPP;
 
-		lim.zoned = true;
+		lim.features |= BLK_FEAT_ZONED;
 		lim.max_active_zones = p->max_active_zones;
 		lim.max_open_zones =  p->max_open_zones;
 		lim.max_zone_append_sectors = p->max_zone_append_sectors;
diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 13a2f24f176628..cea45b296f8bec 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -728,7 +728,7 @@ static int virtblk_read_zoned_limits(struct virtio_blk *vblk,
 
 	dev_dbg(&vdev->dev, "probing host-managed zoned device\n");
 
-	lim->zoned = true;
+	lim->features |= BLK_FEAT_ZONED;
 
 	virtio_cread(vdev, struct virtio_blk_config,
 		     zoned.max_open_zones, &v);
@@ -1546,7 +1546,8 @@ static int virtblk_probe(struct virtio_device *vdev)
 	 * All steps that follow use the VQs therefore they need to be
 	 * placed after the virtio_device_ready() call above.
 	 */
-	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && lim.zoned) {
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+	    (lim.features & BLK_FEAT_ZONED)) {
 		blk_queue_flag_set(QUEUE_FLAG_ZONE_RESETALL, vblk->disk->queue);
 		err = blk_revalidate_disk_zones(vblk->disk);
 		if (err)
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index ca1f136575cff4..df6313c3fe6ba4 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1605,12 +1605,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
 		ti->type->iterate_devices(ti, dm_set_device_limits,
 					  &ti_limits);
 
-		if (!zoned && ti_limits.zoned) {
+		if (!zoned && (ti_limits.features & BLK_FEAT_ZONED)) {
 			/*
 			 * After stacking all limits, validate all devices
 			 * in table support this zoned model and zone sectors.
 			 */
-			zoned = ti_limits.zoned;
+			zoned = (ti_limits.features & BLK_FEAT_ZONED);
 			zone_sectors = ti_limits.chunk_sectors;
 		}
 
@@ -1658,12 +1658,12 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	 *   zoned model on host-managed zoned block devices.
 	 * BUT...
 	 */
-	if (limits->zoned) {
+	if (limits->features & BLK_FEAT_ZONED) {
 		/*
 		 * ...IF the above limits stacking determined a zoned model
 		 * validate that all of the table's devices conform to it.
 		 */
-		zoned = limits->zoned;
+		zoned = limits->features & BLK_FEAT_ZONED;
 		zone_sectors = limits->chunk_sectors;
 	}
 	if (validate_hardware_zoned(t, zoned, zone_sectors))
@@ -1834,7 +1834,8 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	 * For a zoned target, setup the zones related queue attributes
 	 * and resources necessary for zone append emulation if necessary.
 	 */
-	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) && limits->zoned) {
+	if (IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+	    (limits->features & BLK_FEAT_ZONED)) {
 		r = dm_set_zones_restrictions(t, q, limits);
 		if (r)
 			return r;
diff --git a/drivers/md/dm-zone.c b/drivers/md/dm-zone.c
index 5d66d916730efa..88d313229b43ff 100644
--- a/drivers/md/dm-zone.c
+++ b/drivers/md/dm-zone.c
@@ -263,7 +263,7 @@ int dm_set_zones_restrictions(struct dm_table *t, struct request_queue *q,
 	if (nr_conv_zones >= ret) {
 		lim->max_open_zones = 0;
 		lim->max_active_zones = 0;
-		lim->zoned = false;
+		lim->features &= ~BLK_FEAT_ZONED;
 		clear_bit(DMF_EMULATE_ZONE_APPEND, &md->flags);
 		disk->nr_zones = 0;
 		return 0;
diff --git a/drivers/md/dm-zoned-target.c b/drivers/md/dm-zoned-target.c
index 12236e6f46f39c..cd0ee144973f9f 100644
--- a/drivers/md/dm-zoned-target.c
+++ b/drivers/md/dm-zoned-target.c
@@ -1009,7 +1009,7 @@ static void dmz_io_hints(struct dm_target *ti, struct queue_limits *limits)
 	limits->max_sectors = chunk_sectors;
 
 	/* We are exposing a drive-managed zoned block device */
-	limits->zoned = false;
+	limits->features &= ~BLK_FEAT_ZONED;
 }
 
 /*
diff --git a/drivers/nvme/host/zns.c b/drivers/nvme/host/zns.c
index 77aa0f440a6d2a..06f2417aa50de7 100644
--- a/drivers/nvme/host/zns.c
+++ b/drivers/nvme/host/zns.c
@@ -108,7 +108,7 @@ int nvme_query_zone_info(struct nvme_ns *ns, unsigned lbaf,
 void nvme_update_zone_info(struct nvme_ns *ns, struct queue_limits *lim,
 		struct nvme_zone_info *zi)
 {
-	lim->zoned = 1;
+	lim->features |= BLK_FEAT_ZONED;
 	lim->max_open_zones = zi->max_open_zones;
 	lim->max_active_zones = zi->max_active_zones;
 	lim->max_zone_append_sectors = ns->ctrl->max_zone_append;
diff --git a/drivers/scsi/sd_zbc.c b/drivers/scsi/sd_zbc.c
index 360ec980499529..d3f84665946ec4 100644
--- a/drivers/scsi/sd_zbc.c
+++ b/drivers/scsi/sd_zbc.c
@@ -601,7 +601,7 @@ int sd_zbc_read_zones(struct scsi_disk *sdkp, struct queue_limits *lim,
 	if (sdkp->device->type != TYPE_ZBC)
 		return 0;
 
-	lim->zoned = true;
+	lim->features |= BLK_FEAT_ZONED;
 
 	/*
 	 * Per ZBC and ZAC specifications, writes in sequential write required
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cd27b66cbacc00..bdc30c1fb1b57b 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -313,6 +313,9 @@ enum {
 
 	/* supports I/O polling */
 	BLK_FEAT_POLL				= (1u << 9),
+
+	/* is a zoned device */
+	BLK_FEAT_ZONED				= (1u << 10),
 };
 
 /*
@@ -320,7 +323,7 @@ enum {
  */
 #define BLK_FEAT_INHERIT_MASK \
 	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL | \
-	 BLK_FEAT_STABLE_WRITES)
+	 BLK_FEAT_STABLE_WRITES | BLK_FEAT_ZONED)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -372,7 +375,6 @@ struct queue_limits {
 	unsigned char		misaligned;
 	unsigned char		discard_misaligned;
 	unsigned char		raid_partial_stripes_expensive;
-	bool			zoned;
 	unsigned int		max_open_zones;
 	unsigned int		max_active_zones;
 
@@ -654,7 +656,8 @@ static inline enum rpm_status queue_rpm_status(struct request_queue *q)
 
 static inline bool blk_queue_is_zoned(struct request_queue *q)
 {
-	return IS_ENABLED(CONFIG_BLK_DEV_ZONED) && q->limits.zoned;
+	return IS_ENABLED(CONFIG_BLK_DEV_ZONED) &&
+		(q->limits.features & BLK_FEAT_ZONED);
 }
 
 #ifdef CONFIG_BLK_DEV_ZONED
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741656.1148341 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wh-0006ub-Q4; Mon, 17 Jun 2024 06:07:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741656.1148341; Mon, 17 Jun 2024 06:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wh-0006t3-9T; Mon, 17 Jun 2024 06:07:35 +0000
Received: by outflank-mailman (input) for mailman id 741656;
 Mon, 17 Jun 2024 06:07:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5Vd-0001PY-OV
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:29 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb0229e6-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:06:28 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5VG-00000009IXq-2cbZ; Mon, 17 Jun 2024 06:06:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb0229e6-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=nxrpabn3Yr7lfkqLutYlwz8cLBay3S4a7er01YqK4F0=; b=3GbbfR6oFpvEzbS2oqb0oMYMb+
	vqbV+2hZnIKX5lmR5QgZlLvnc0rJ2iit5xTHKYaGb9SJqJcZlOfTA6zcbvntPb1Mru5TE9/q6H45n
	19Qfjq1hppB4GOL4exJE6AwoSdINllOz9afFax9HKV1yIlpLy6TmyrnAQy5qwQMQNr7wNK53aH1xH
	SISmmh7ZH7/J6+77ZtbQLY1fpW/7mGqQvnVPmK8FpFL6vWGEFgZQxWrh6KnYBifPtspD+lbza+mC3
	/LOD/BEvCMU6Az9Z7rUSmmuvgHWX5bBPf/2gvUdUSDM49z0i7Rb2ZQwMZLDJBbnCqWBn8gW4Op6iX
	HXdocZug==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Bart Van Assche <bvanassche@acm.org>,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 11/26] block: freeze the queue in queue_attr_store
Date: Mon, 17 Jun 2024 08:04:38 +0200
Message-ID: <20240617060532.127975-12-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

queue_attr_store updates attributes used to control generating I/O, and
can cause malformed bios if changed with I/O in flight.  Freeze the queue
in common code instead of adding it to almost every attribute.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 block/blk-mq.c    | 5 +++--
 block/blk-sysfs.c | 9 ++-------
 2 files changed, 5 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 0d4cd39c3d25da..58b0d6c7cc34d6 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4631,13 +4631,15 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 	int ret;
 	unsigned long i;
 
+	if (WARN_ON_ONCE(!q->mq_freeze_depth))
+		return -EINVAL;
+
 	if (!set)
 		return -EINVAL;
 
 	if (q->nr_requests == nr)
 		return 0;
 
-	blk_mq_freeze_queue(q);
 	blk_mq_quiesce_queue(q);
 
 	ret = 0;
@@ -4671,7 +4673,6 @@ int blk_mq_update_nr_requests(struct request_queue *q, unsigned int nr)
 	}
 
 	blk_mq_unquiesce_queue(q);
-	blk_mq_unfreeze_queue(q);
 
 	return ret;
 }
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index f0f9314ab65c61..5c787965b7d09e 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -189,12 +189,9 @@ static ssize_t queue_discard_max_store(struct request_queue *q,
 	if ((max_discard_bytes >> SECTOR_SHIFT) > UINT_MAX)
 		return -EINVAL;
 
-	blk_mq_freeze_queue(q);
 	lim = queue_limits_start_update(q);
 	lim.max_user_discard_sectors = max_discard_bytes >> SECTOR_SHIFT;
 	err = queue_limits_commit_update(q, &lim);
-	blk_mq_unfreeze_queue(q);
-
 	if (err)
 		return err;
 	return ret;
@@ -241,11 +238,9 @@ queue_max_sectors_store(struct request_queue *q, const char *page, size_t count)
 	if (ret < 0)
 		return ret;
 
-	blk_mq_freeze_queue(q);
 	lim = queue_limits_start_update(q);
 	lim.max_user_sectors = max_sectors_kb << 1;
 	err = queue_limits_commit_update(q, &lim);
-	blk_mq_unfreeze_queue(q);
 	if (err)
 		return err;
 	return ret;
@@ -585,13 +580,11 @@ static ssize_t queue_wb_lat_store(struct request_queue *q, const char *page,
 	 * ends up either enabling or disabling wbt completely. We can't
 	 * have IO inflight if that happens.
 	 */
-	blk_mq_freeze_queue(q);
 	blk_mq_quiesce_queue(q);
 
 	wbt_set_min_lat(q, val);
 
 	blk_mq_unquiesce_queue(q);
-	blk_mq_unfreeze_queue(q);
 
 	return count;
 }
@@ -722,9 +715,11 @@ queue_attr_store(struct kobject *kobj, struct attribute *attr,
 	if (!entry->store)
 		return -EIO;
 
+	blk_mq_freeze_queue(q);
 	mutex_lock(&q->sysfs_lock);
 	res = entry->store(q, page, length);
 	mutex_unlock(&q->sysfs_lock);
+	blk_mq_unfreeze_queue(q);
 	return res;
 }
 
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741659.1148345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wi-00079j-7a; Mon, 17 Jun 2024 06:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741659.1148345; Mon, 17 Jun 2024 06:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wh-000789-W9; Mon, 17 Jun 2024 06:07:35 +0000
Received: by outflank-mailman (input) for mailman id 741659;
 Mon, 17 Jun 2024 06:07:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5Vf-0001Pt-Rk
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:31 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc2e9d4f-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:06:30 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5VD-00000009IVS-3gIx; Mon, 17 Jun 2024 06:06:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc2e9d4f-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=Ehwi9XwmCWVOUn2mUiA9tAgE4ehpoZw+qtTIhW3bZ98=; b=b5fuAR9TpDmNCNYT10KjVNVORL
	bhlzMRDqbc+gBpQCMf6xKCzaCmt81etF37Hfc5GNaF+0Z28NBjv3vPGGuIk0Xyt6kTEE+NWbGeZyZ
	h6Q/t7+7IbfjYXP/wOi6yFmM/Aljo7LuWZejVUJJbH+B4KzYBh42aNhJ1xQxtuTsR8htCxIZdw4ig
	e/dc4UJumZkiGeO2ijwxW0xoScDAwbsnVI/3mm4hsHagmG67EpCXpTQtwwznBA4GBXM0HSRKedrwi
	t/TMBViLWu9lh4AmI0TQMwSzBaVPIx9UgQuB8lyYpIaamAqcXrWYoFio8OG8eE2Rrj/s4+u6fInAv
	0AhJ0xvA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Bart Van Assche <bvanassche@acm.org>,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>
Subject: [PATCH 10/26] nbd: move setting the cache control flags to __nbd_set_size
Date: Mon, 17 Jun 2024 08:04:37 +0200
Message-ID: <20240617060532.127975-11-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move setting the cache control flags into __nbd_set_size in preparation
for moving these flags into the queue_limits structure.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Josef Bacik <josef@toxicpanda.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
---
 drivers/block/nbd.c | 17 +++++++----------
 1 file changed, 7 insertions(+), 10 deletions(-)

diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
index ad887d614d5b3f..44b8c671921e5c 100644
--- a/drivers/block/nbd.c
+++ b/drivers/block/nbd.c
@@ -342,6 +342,12 @@ static int __nbd_set_size(struct nbd_device *nbd, loff_t bytesize,
 		lim.max_hw_discard_sectors = UINT_MAX;
 	else
 		lim.max_hw_discard_sectors = 0;
+	if (!(nbd->config->flags & NBD_FLAG_SEND_FLUSH))
+		blk_queue_write_cache(nbd->disk->queue, false, false);
+	else if (nbd->config->flags & NBD_FLAG_SEND_FUA)
+		blk_queue_write_cache(nbd->disk->queue, true, true);
+	else
+		blk_queue_write_cache(nbd->disk->queue, true, false);
 	lim.logical_block_size = blksize;
 	lim.physical_block_size = blksize;
 	error = queue_limits_commit_update(nbd->disk->queue, &lim);
@@ -1286,19 +1292,10 @@ static void nbd_bdev_reset(struct nbd_device *nbd)
 
 static void nbd_parse_flags(struct nbd_device *nbd)
 {
-	struct nbd_config *config = nbd->config;
-	if (config->flags & NBD_FLAG_READ_ONLY)
+	if (nbd->config->flags & NBD_FLAG_READ_ONLY)
 		set_disk_ro(nbd->disk, true);
 	else
 		set_disk_ro(nbd->disk, false);
-	if (config->flags & NBD_FLAG_SEND_FLUSH) {
-		if (config->flags & NBD_FLAG_SEND_FUA)
-			blk_queue_write_cache(nbd->disk->queue, true, true);
-		else
-			blk_queue_write_cache(nbd->disk->queue, true, false);
-	}
-	else
-		blk_queue_write_cache(nbd->disk->queue, false, false);
 }
 
 static void send_disconnects(struct nbd_device *nbd)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741660.1148367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wk-0007oZ-Dq; Mon, 17 Jun 2024 06:07:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741660.1148367; Mon, 17 Jun 2024 06:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wk-0007jE-5U; Mon, 17 Jun 2024 06:07:38 +0000
Received: by outflank-mailman (input) for mailman id 741660;
 Mon, 17 Jun 2024 06:07:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5VV-0001Pt-Ux
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:21 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b7389522-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:06:21 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5VB-00000009ISa-1G1W; Mon, 17 Jun 2024 06:06:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7389522-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=yUS2iT/5Ns+DBWa4sOqPiiTpW5do+ruhfRfUiDEShWo=; b=guhcPlS/XgOuKYq4PA9jwEJ1Mm
	wjGQambf6XhIV/WMxs6kFqYaaHdFwvX6RWdpisscMw/wZ4WZsA+kBdl3+w11wI64QzozuUw29fCzF
	6bcDqzUSjDf9S0c+iNFHOFpOaM4r3xPA7y9i2B22SLB/iHL8jg8OiIaZ5p0BiZ12ZoIHUAP/V5Phy
	CaqhmT48hLkZcOQmzg39tPBxUdTSF6LnbixjqSNUesHltqV6UJpGD66vOEwdq7R391N+2LJtveGqS
	OXm/2K7ni7fxsGeEBERanjYsCQ59EcXOvmPmxbR6wdKf8nAs0DNy0rPkfkuV/Q20bN324P+kRTZvf
	Ns6wRmFQ==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Bart Van Assche <bvanassche@acm.org>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>,
	Johannes Thumshirn <johannes.thumshirn@wdc.com>
Subject: [PATCH 09/26] virtio_blk: remove virtblk_update_cache_mode
Date: Mon, 17 Jun 2024 08:04:36 +0200
Message-ID: <20240617060532.127975-10-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

virtblk_update_cache_mode boils down to a single call to
blk_queue_write_cache.  Remove it in preparation for moving the cache
control flags into the queue_limits.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Bart Van Assche <bvanassche@acm.org>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
Reviewed-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>
---
 drivers/block/virtio_blk.c | 13 +++----------
 1 file changed, 3 insertions(+), 10 deletions(-)

diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
index 2351f411fa4680..378b241911ca87 100644
--- a/drivers/block/virtio_blk.c
+++ b/drivers/block/virtio_blk.c
@@ -1089,14 +1089,6 @@ static int virtblk_get_cache_mode(struct virtio_device *vdev)
 	return writeback;
 }
 
-static void virtblk_update_cache_mode(struct virtio_device *vdev)
-{
-	u8 writeback = virtblk_get_cache_mode(vdev);
-	struct virtio_blk *vblk = vdev->priv;
-
-	blk_queue_write_cache(vblk->disk->queue, writeback, false);
-}
-
 static const char *const virtblk_cache_types[] = {
 	"write through", "write back"
 };
@@ -1116,7 +1108,7 @@ cache_type_store(struct device *dev, struct device_attribute *attr,
 		return i;
 
 	virtio_cwrite8(vdev, offsetof(struct virtio_blk_config, wce), i);
-	virtblk_update_cache_mode(vdev);
+	blk_queue_write_cache(disk->queue, virtblk_get_cache_mode(vdev), false);
 	return count;
 }
 
@@ -1528,7 +1520,8 @@ static int virtblk_probe(struct virtio_device *vdev)
 	vblk->index = index;
 
 	/* configure queue flush support */
-	virtblk_update_cache_mode(vdev);
+	blk_queue_write_cache(vblk->disk->queue, virtblk_get_cache_mode(vdev),
+			false);
 
 	/* If disk is read-only in the host, the guest should obey */
 	if (virtio_has_feature(vdev, VIRTIO_BLK_F_RO))
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741664.1148375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wl-0007wn-6I; Mon, 17 Jun 2024 06:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741664.1148375; Mon, 17 Jun 2024 06:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wk-0007sV-Lj; Mon, 17 Jun 2024 06:07:38 +0000
Received: by outflank-mailman (input) for mailman id 741664;
 Mon, 17 Jun 2024 06:07:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5Vs-0001PY-SS
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:44 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3e04773-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:06:42 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5VU-00000009Ikw-1npw; Mon, 17 Jun 2024 06:06:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3e04773-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=an4ub3zkwPlcLTvgEigO1mSuXIiK/vRkYP+NZODSx0U=; b=q/Cuss6R18jq5v15OqdMHV3ABj
	R8SupY98GcjcMqsCeSW1966m+r26wHuRrEjISpMkvs3DUw+/FNsKCtwG2PftoAdiH/+6P9tRrtxB+
	SKUg+0wfA6EdnA2mMdQL56r3DswsDDe9eUSaXoXR606jdytdTn12W8sy6sZn5UFiFkWTHIsa0JFpO
	Epz407mT+TMDCnapa1pMIYfJkOLYtoG2rKTMyJvIovPxh0Pc74EOaTIf9n4ci5kRLLmByV7H/2OCK
	i4eJpnFR1CLNvWHeyVy7NOXpmUDt9fINWr9swHiQSC4ofjGtlxakgAUW/1X5sYP27JJ91YGl8oVOt
	HUr6WRFg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 15/26] block: move the add_random flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:42 +0200
Message-ID: <20240617060532.127975-16-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the add_random flag into the queue_limits feature field so that it
can be set atomically with the queue frozen.

Note that this also removes the dm code that cleared the flag based on
the underlying devices; that code was unreachable, as dm devices always
start out with the flag clear.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq-debugfs.c            |  1 -
 block/blk-sysfs.c                 |  6 +++---
 drivers/block/mtip32xx/mtip32xx.c |  1 -
 drivers/md/dm-table.c             | 18 ------------------
 drivers/mmc/core/queue.c          |  2 --
 drivers/mtd/mtd_blkdevs.c         |  3 ---
 drivers/s390/block/scm_blk.c      |  4 ----
 drivers/scsi/scsi_lib.c           |  3 +--
 drivers/scsi/sd.c                 | 11 +++--------
 include/linux/blkdev.h            |  5 +++--
 10 files changed, 10 insertions(+), 44 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 4d0e62ec88f033..6b7edb50bfd3fa 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -86,7 +86,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(FAIL_IO),
 	QUEUE_FLAG_NAME(IO_STAT),
 	QUEUE_FLAG_NAME(NOXMERGES),
-	QUEUE_FLAG_NAME(ADD_RANDOM),
 	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 637ed3bbbfb46f..9174aca3b85526 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -323,7 +323,7 @@ queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
 }
 
 QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
-QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
+QUEUE_SYSFS_FEATURE(add_random, BLK_FEAT_ADD_RANDOM)
 QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
 #undef QUEUE_SYSFS_BIT_FNS
@@ -561,7 +561,7 @@ static struct queue_sysfs_entry queue_hw_sector_size_entry = {
 
 QUEUE_RW_ENTRY(queue_rotational, "rotational");
 QUEUE_RW_ENTRY(queue_iostats, "iostats");
-QUEUE_RW_ENTRY(queue_random, "add_random");
+QUEUE_RW_ENTRY(queue_add_random, "add_random");
 QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
 
 #ifdef CONFIG_BLK_WBT
@@ -665,7 +665,7 @@ static struct attribute *queue_attrs[] = {
 	&queue_nomerges_entry.attr,
 	&queue_iostats_entry.attr,
 	&queue_stable_writes_entry.attr,
-	&queue_random_entry.attr,
+	&queue_add_random_entry.attr,
 	&queue_poll_entry.attr,
 	&queue_wc_entry.attr,
 	&queue_fua_entry.attr,
diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
index 1dbbf72659d549..c6ef0546ffc9d2 100644
--- a/drivers/block/mtip32xx/mtip32xx.c
+++ b/drivers/block/mtip32xx/mtip32xx.c
@@ -3485,7 +3485,6 @@ static int mtip_block_initialize(struct driver_data *dd)
 		goto start_service_thread;
 
 	/* Set device limits. */
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, dd->queue);
 	dma_set_max_seg_size(&dd->pdev->dev, 0x400000);
 
 	/* Set the capacity of the device in 512 byte sectors. */
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index c062af32970934..0a3838e45affd4 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1716,14 +1716,6 @@ static int device_dax_write_cache_enabled(struct dm_target *ti,
 	return false;
 }
 
-static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
-			     sector_t start, sector_t len, void *data)
-{
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return !blk_queue_add_random(q);
-}
-
 static int device_not_write_zeroes_capable(struct dm_target *ti, struct dm_dev *dev,
 					   sector_t start, sector_t len, void *data)
 {
@@ -1876,16 +1868,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	else
 		blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
 
-	/*
-	 * Determine whether or not this queue's I/O timings contribute
-	 * to the entropy pool, Only request-based targets use this.
-	 * Clear QUEUE_FLAG_ADD_RANDOM if any underlying device does not
-	 * have it set.
-	 */
-	if (blk_queue_add_random(q) &&
-	    dm_table_any_dev_attr(t, device_is_not_random, NULL))
-		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
-
 	/*
 	 * For a zoned target, setup the zones related queue attributes
 	 * and resources necessary for zone append emulation if necessary.
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index b4f62fa845864c..da00904d4a3c7e 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -387,8 +387,6 @@ static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, mq->queue);
 	blk_queue_rq_timeout(mq->queue, 60 * HZ);
 
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
-
 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
 
 	INIT_WORK(&mq->recovery_work, mmc_mq_recovery_handler);
diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
index bf8369ce7ddf1d..47ead84407cdcf 100644
--- a/drivers/mtd/mtd_blkdevs.c
+++ b/drivers/mtd/mtd_blkdevs.c
@@ -374,9 +374,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
 	/* Create the request queue */
 	spin_lock_init(&new->queue_lock);
 	INIT_LIST_HEAD(&new->rq_list);
-
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
-
 	gd->queue = new->rq;
 
 	if (new->readonly)
diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
index 2e2309fa9a0b34..3fcfe029db1b3a 100644
--- a/drivers/s390/block/scm_blk.c
+++ b/drivers/s390/block/scm_blk.c
@@ -439,7 +439,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 		.logical_block_size	= 1 << 12,
 	};
 	unsigned int devindex;
-	struct request_queue *rq;
 	int len, ret;
 
 	lim.max_segments = min(scmdev->nr_max_block,
@@ -474,9 +473,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
 		ret = PTR_ERR(bdev->gendisk);
 		goto out_tag;
 	}
-	rq = bdev->rq = bdev->gendisk->queue;
-	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, rq);
-
 	bdev->gendisk->private_data = scmdev;
 	bdev->gendisk->fops = &scm_blk_devops;
 	bdev->gendisk->major = scm_major;
diff --git a/drivers/scsi/scsi_lib.c b/drivers/scsi/scsi_lib.c
index ec39acc986d6ec..54f771ec8cfb5e 100644
--- a/drivers/scsi/scsi_lib.c
+++ b/drivers/scsi/scsi_lib.c
@@ -631,8 +631,7 @@ static bool scsi_end_request(struct request *req, blk_status_t error,
 	if (blk_update_request(req, error, bytes))
 		return true;
 
-	// XXX:
-	if (blk_queue_add_random(q))
+	if (q->limits.features & BLK_FEAT_ADD_RANDOM)
 		add_disk_randomness(req->q->disk);
 
 	WARN_ON_ONCE(!blk_rq_is_passthrough(req) &&
diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
index a42c3c45e86830..a27f1c7f1b61d5 100644
--- a/drivers/scsi/sd.c
+++ b/drivers/scsi/sd.c
@@ -3301,7 +3301,6 @@ static void sd_read_block_limits_ext(struct scsi_disk *sdkp)
 static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 		struct queue_limits *lim)
 {
-	struct request_queue *q = sdkp->disk->queue;
 	struct scsi_vpd *vpd;
 	u16 rot;
 
@@ -3317,10 +3316,8 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
 	sdkp->zoned = (vpd->data[8] >> 4) & 3;
 	rcu_read_unlock();
 
-	if (rot == 1) {
-		lim->features &= ~BLK_FEAT_ROTATIONAL;
-		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
-	}
+	if (rot == 1)
+		lim->features &= ~(BLK_FEAT_ROTATIONAL | BLK_FEAT_ADD_RANDOM);
 
 	if (!sdkp->first_scan)
 		return;
@@ -3599,7 +3596,6 @@ static int sd_revalidate_disk(struct gendisk *disk)
 {
 	struct scsi_disk *sdkp = scsi_disk(disk);
 	struct scsi_device *sdp = sdkp->device;
-	struct request_queue *q = sdkp->disk->queue;
 	sector_t old_capacity = sdkp->capacity;
 	struct queue_limits lim;
 	unsigned char *buffer;
@@ -3646,8 +3642,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
 		 * cause this to be updated correctly and any device which
 		 * doesn't support it should be treated as rotational.
 		 */
-		lim.features |= BLK_FEAT_ROTATIONAL;
-		blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q);
+		lim.features |= (BLK_FEAT_ROTATIONAL | BLK_FEAT_ADD_RANDOM);
 
 		if (scsi_device_supports_vpd(sdp)) {
 			sd_read_block_provisioning(sdkp);
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 988e3248cffeb7..cf1bbf566b2bcd 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -292,6 +292,9 @@ enum {
 
 	/* rotational device (hard drive or floppy) */
 	BLK_FEAT_ROTATIONAL			= (1u << 2),
+
+	/* contributes to the random number pool */
+	BLK_FEAT_ADD_RANDOM			= (1u << 3),
 };
 
 /*
@@ -557,7 +560,6 @@ struct request_queue {
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
 #define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
-#define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
@@ -591,7 +593,6 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
 #define blk_queue_nonrot(q)	((q)->limits.features & BLK_FEAT_ROTATIONAL)
 #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
-#define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741665.1148383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wl-00086f-Vt; Mon, 17 Jun 2024 06:07:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741665.1148383; Mon, 17 Jun 2024 06:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wl-00084B-EJ; Mon, 17 Jun 2024 06:07:39 +0000
Received: by outflank-mailman (input) for mailman id 741665;
 Mon, 17 Jun 2024 06:07:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5W0-0001PY-CS
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:52 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c8d20ade-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:06:50 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Vj-00000009Iyb-0AOH; Mon, 17 Jun 2024 06:06:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8d20ade-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=0+0kfVN+poIOW7TyJ8IvGWts9ay/jCSMmWoFEtfaqiY=; b=Oj54QgNypOFvWLQYtpWWZ3K7Hv
	KgPgK4WxEoascCvZcw9c+tBv6JmauG/P7miwUJZx8LBvJCegwhU9q7k+3k1w0iXy4SRvv2Tg4xB51
	7vuQL68zzMzWtOyhf/u+r5WP9Ym8bv9d3V2MWiIYDKd4VfDKTC2n/uo12lFVpLpdYCu8I1D1y3Wtr
	tXpk97mEvfifvogKN2+UehfnLHSyV+hF6LXFm6AV/ITy88LzszK4+hqzZRtbUUVXl6sup5+O6tdzU
	8ohmOhTkk9eBOJYsHheov19MtZGMAQNzgEXX0DPteBjiV/h0VVbcxkNsl2tyw0+1Y0rMVrTjx3qJX
	Fb9W59iA==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 19/26] block: move the nowait flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:46 +0200
Message-ID: <20240617060532.127975-20-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the nowait flag into the queue_limits feature field so that it can
be set atomically with the queue frozen.

Stacking drivers are simplified: they can now simply set the flag, and
blk_stack_limits will clear it if any of the underlying devices does
not support it.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c        |  1 -
 block/blk-mq.c                |  2 +-
 block/blk-settings.c          |  9 +++++++++
 drivers/block/brd.c           |  4 ++--
 drivers/md/dm-table.c         | 18 +++---------------
 drivers/md/md.c               | 18 +-----------------
 drivers/nvme/host/multipath.c |  3 +--
 include/linux/blkdev.h        |  9 +++++----
 8 files changed, 22 insertions(+), 42 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 957774e40b1d0c..62b132e9a9ce3b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -96,7 +96,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(ZONE_RESETALL),
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
-	QUEUE_FLAG_NAME(NOWAIT),
 	QUEUE_FLAG_NAME(SQ_SCHED),
 	QUEUE_FLAG_NAME(SKIP_TAGSET_QUIESCE),
 };
diff --git a/block/blk-mq.c b/block/blk-mq.c
index cf67dc13f7dd4c..43235acc87505f 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4118,7 +4118,7 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 
 	if (!lim)
 		lim = &default_lim;
-	lim->features |= BLK_FEAT_IO_STAT;
+	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
 
 	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 536ee202fcdccb..bf4622c19b5c09 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -459,6 +459,15 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 
 	t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
 
+	/*
+	 * BLK_FEAT_NOWAIT needs to be supported both by the stacking driver
+	 * and all underlying devices.  The stacking driver sets the flag
+	 * before stacking the limits, and this will clear the flag if any
+	 * of the underlying devices does not support it.
+	 */
+	if (!(b->features & BLK_FEAT_NOWAIT))
+		t->features &= ~BLK_FEAT_NOWAIT;
+
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_user_sectors = min_not_zero(t->max_user_sectors,
 			b->max_user_sectors);
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index d77deb571dbd06..a300645cd9d4a5 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -335,7 +335,8 @@ static int brd_alloc(int i)
 		.max_hw_discard_sectors	= UINT_MAX,
 		.max_discard_segments	= 1,
 		.discard_granularity	= PAGE_SIZE,
-		.features		= BLK_FEAT_SYNCHRONOUS,
+		.features		= BLK_FEAT_SYNCHRONOUS |
+					  BLK_FEAT_NOWAIT,
 	};
 
 	list_for_each_entry(brd, &brd_devices, brd_list)
@@ -367,7 +368,6 @@ static int brd_alloc(int i)
 	strscpy(disk->disk_name, buf, DISK_NAME_LEN);
 	set_capacity(disk, rd_size * 2);
 	
-	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
 	err = add_disk(disk);
 	if (err)
 		goto out_cleanup_disk;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index aaf379cb15d91f..84d636712c7284 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -582,7 +582,7 @@ int dm_split_args(int *argc, char ***argvp, char *input)
 static void dm_set_stacking_limits(struct queue_limits *limits)
 {
 	blk_set_stacking_limits(limits);
-	limits->features |= BLK_FEAT_IO_STAT;
+	limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
 }
 
 /*
@@ -1746,12 +1746,6 @@ static bool dm_table_supports_write_zeroes(struct dm_table *t)
 	return true;
 }
 
-static int device_not_nowait_capable(struct dm_target *ti, struct dm_dev *dev,
-				     sector_t start, sector_t len, void *data)
-{
-	return !bdev_nowait(dev->bdev);
-}
-
 static bool dm_table_supports_nowait(struct dm_table *t)
 {
 	for (unsigned int i = 0; i < t->num_targets; i++) {
@@ -1759,10 +1753,6 @@ static bool dm_table_supports_nowait(struct dm_table *t)
 
 		if (!dm_target_supports_nowait(ti->type))
 			return false;
-
-		if (!ti->type->iterate_devices ||
-		    ti->type->iterate_devices(ti, device_not_nowait_capable, NULL))
-			return false;
 	}
 
 	return true;
@@ -1824,10 +1814,8 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 {
 	int r;
 
-	if (dm_table_supports_nowait(t))
-		blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, q);
+	if (!dm_table_supports_nowait(t))
+		limits->features &= ~BLK_FEAT_NOWAIT;
 
 	if (!dm_table_supports_discards(t)) {
 		limits->max_hw_discard_sectors = 0;
diff --git a/drivers/md/md.c b/drivers/md/md.c
index 8db0db8d5a27ac..f1c7d4f281c521 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5788,7 +5788,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int error;
 	struct queue_limits lim = {
 		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-					  BLK_FEAT_IO_STAT,
+					  BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT,
 	};
 
 	/*
@@ -6150,13 +6150,6 @@ int md_run(struct mddev *mddev)
 		}
 	}
 
-	if (!mddev_is_dm(mddev)) {
-		struct request_queue *q = mddev->gendisk->queue;
-
-		/* Set the NOWAIT flags if all underlying devices support it */
-		if (nowait)
-			blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
-	}
 	if (pers->sync_request) {
 		if (mddev->kobj.sd &&
 		    sysfs_create_group(&mddev->kobj, &md_redundancy_group))
@@ -7115,15 +7108,6 @@ static int hot_add_disk(struct mddev *mddev, dev_t dev)
 	set_bit(MD_SB_CHANGE_DEVS, &mddev->sb_flags);
 	if (!mddev->thread)
 		md_update_sb(mddev, 1);
-	/*
-	 * If the new disk does not support REQ_NOWAIT,
-	 * disable on the whole MD.
-	 */
-	if (!bdev_nowait(rdev->bdev)) {
-		pr_info("%s: Disabling nowait because %pg does not support nowait\n",
-			mdname(mddev), rdev->bdev);
-		blk_queue_flag_clear(QUEUE_FLAG_NOWAIT, mddev->gendisk->queue);
-	}
 	/*
 	 * Kick recovery, maybe this spare has to be added to the
 	 * array immediately.
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 173796f2ddea9f..61a162c9cf4e6c 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -538,7 +538,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 
 	blk_set_stacking_limits(&lim);
 	lim.dma_alignment = 3;
-	lim.features |= BLK_FEAT_IO_STAT;
+	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
 	if (head->ids.csi != NVME_CSI_ZNS)
 		lim.max_zone_append_sectors = 0;
 
@@ -550,7 +550,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	sprintf(head->disk->disk_name, "nvme%dn%d",
 			ctrl->subsys->instance, head->instance);
 
-	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
 	/*
 	 * This assumes all controllers that refer to a namespace either
 	 * support poll queues or not.  That is not a strict guarantee,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cee7b44a142513..f3d4519d609d95 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -304,6 +304,9 @@ enum {
 
 	/* always completes in submit context */
 	BLK_FEAT_SYNCHRONOUS			= (1u << 6),
+
+	/* supports REQ_NOWAIT */
+	BLK_FEAT_NOWAIT				= (1u << 7),
 };
 
 /*
@@ -580,12 +583,10 @@ struct request_queue {
 #define QUEUE_FLAG_ZONE_RESETALL 26	/* supports Zone Reset All */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
-#define QUEUE_FLAG_NOWAIT       29	/* device supports NOWAIT */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
 #define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
 
-#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_SAME_COMP) |	\
-				 (1UL << QUEUE_FLAG_NOWAIT))
+#define QUEUE_FLAG_MQ_DEFAULT	(1UL << QUEUE_FLAG_SAME_COMP)
 
 void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
 void blk_queue_flag_clear(unsigned int flag, struct request_queue *q);
@@ -1348,7 +1349,7 @@ static inline bool bdev_fua(struct block_device *bdev)
 
 static inline bool bdev_nowait(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_NOWAIT, &bdev_get_queue(bdev)->queue_flags);
+	return bdev->bd_disk->queue->limits.features & BLK_FEAT_NOWAIT;
 }
 
 static inline bool bdev_is_zoned(struct block_device *bdev)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:41 2024
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 25/26] block: move the skip_tagset_quiesce flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:52 +0200
Message-ID: <20240617060532.127975-26-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the skip_tagset_quiesce flag into the queue_limits feature field so
that it can be set atomically with the queue frozen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq-debugfs.c   | 1 -
 drivers/nvme/host/core.c | 8 +++++---
 include/linux/blkdev.h   | 6 ++++--
 3 files changed, 9 insertions(+), 6 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 8b5a68861c119b..344f9e503bdb32 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -93,7 +93,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
 	QUEUE_FLAG_NAME(SQ_SCHED),
-	QUEUE_FLAG_NAME(SKIP_TAGSET_QUIESCE),
 };
 #undef QUEUE_FLAG_NAME
 
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 31e752e8d632cd..bf410d10b12006 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -4489,13 +4489,15 @@ int nvme_alloc_io_tag_set(struct nvme_ctrl *ctrl, struct blk_mq_tag_set *set,
 		return ret;
 
 	if (ctrl->ops->flags & NVME_F_FABRICS) {
-		ctrl->connect_q = blk_mq_alloc_queue(set, NULL, NULL);
+		struct queue_limits lim = {
+			.features	= BLK_FEAT_SKIP_TAGSET_QUIESCE,
+		};
+
+		ctrl->connect_q = blk_mq_alloc_queue(set, &lim, NULL);
         	if (IS_ERR(ctrl->connect_q)) {
 			ret = PTR_ERR(ctrl->connect_q);
 			goto out_free_tag_set;
 		}
-		blk_queue_flag_set(QUEUE_FLAG_SKIP_TAGSET_QUIESCE,
-				   ctrl->connect_q);
 	}
 
 	ctrl->tagset = set;
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index ab0f7dfba556eb..2c433ebf6f2030 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -322,6 +322,9 @@ enum {
 
 	/* supports PCI(e) p2p requests */
 	BLK_FEAT_PCI_P2PDMA			= (1u << 12),
+
+	/* skip this queue in blk_mq_(un)quiesce_tagset */
+	BLK_FEAT_SKIP_TAGSET_QUIESCE		= (1u << 13),
 };
 
 /*
@@ -594,7 +597,6 @@ struct request_queue {
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
-#define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
 
 #define QUEUE_FLAG_MQ_DEFAULT	(1UL << QUEUE_FLAG_SAME_COMP)
 
@@ -629,7 +631,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_registered(q)	test_bit(QUEUE_FLAG_REGISTERED, &(q)->queue_flags)
 #define blk_queue_sq_sched(q)	test_bit(QUEUE_FLAG_SQ_SCHED, &(q)->queue_flags)
 #define blk_queue_skip_tagset_quiesce(q) \
-	test_bit(QUEUE_FLAG_SKIP_TAGSET_QUIESCE, &(q)->queue_flags)
+	((q)->limits.features & BLK_FEAT_SKIP_TAGSET_QUIESCE)
 
 extern void blk_set_pm_only(struct request_queue *q);
 extern void blk_clear_pm_only(struct request_queue *q);
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:42 2024
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 20/26] block: move the dax flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:47 +0200
Message-ID: <20240617060532.127975-21-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the dax flag into the queue_limits feature field so that it can be
set atomically with the queue frozen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq-debugfs.c       | 1 -
 drivers/md/dm-table.c        | 4 ++--
 drivers/nvdimm/pmem.c        | 7 ++-----
 drivers/s390/block/dcssblk.c | 2 +-
 include/linux/blkdev.h       | 6 ++++--
 5 files changed, 9 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 62b132e9a9ce3b..f4fa820251ce83 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -88,7 +88,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
 	QUEUE_FLAG_NAME(POLL),
-	QUEUE_FLAG_NAME(DAX),
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 84d636712c7284..e44697037e86f4 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1834,11 +1834,11 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 		limits->features |= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA;
 
 	if (dm_table_supports_dax(t, device_not_dax_capable)) {
-		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
+		limits->features |= BLK_FEAT_DAX;
 		if (dm_table_supports_dax(t, device_not_dax_synchronous_capable))
 			set_dax_synchronous(t->md->dax_dev);
 	} else
-		blk_queue_flag_clear(QUEUE_FLAG_DAX, q);
+		limits->features &= ~BLK_FEAT_DAX;
 
 	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
 		dax_write_cache(t->md->dax_dev, true);
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index b821dcf018f6ae..1dd74c969d5a09 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -465,7 +465,6 @@ static int pmem_attach_disk(struct device *dev,
 	struct dax_device *dax_dev;
 	struct nd_pfn_sb *pfn_sb;
 	struct pmem_device *pmem;
-	struct request_queue *q;
 	struct gendisk *disk;
 	void *addr;
 	int rc;
@@ -499,6 +498,8 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	if (fua)
 		lim.features |= BLK_FEAT_FUA;
+	if (is_nd_pfn(dev))
+		lim.features |= BLK_FEAT_DAX;
 
 	if (!devm_request_mem_region(dev, res->start, resource_size(res),
 				dev_name(&ndns->dev))) {
@@ -509,7 +510,6 @@ static int pmem_attach_disk(struct device *dev,
 	disk = blk_alloc_disk(&lim, nid);
 	if (IS_ERR(disk))
 		return PTR_ERR(disk);
-	q = disk->queue;
 
 	pmem->disk = disk;
 	pmem->pgmap.owner = pmem;
@@ -547,9 +547,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	if (pmem->pfn_flags & PFN_MAP)
-		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
-
 	disk->fops		= &pmem_fops;
 	disk->private_data	= pmem;
 	nvdimm_namespace_disk_name(ndns, disk->disk_name);
diff --git a/drivers/s390/block/dcssblk.c b/drivers/s390/block/dcssblk.c
index 6d1689a2717e5f..d5a5d11ae0dcdf 100644
--- a/drivers/s390/block/dcssblk.c
+++ b/drivers/s390/block/dcssblk.c
@@ -548,6 +548,7 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char
 {
 	struct queue_limits lim = {
 		.logical_block_size	= 4096,
+		.features		= BLK_FEAT_DAX,
 	};
 	int rc, i, j, num_of_segments;
 	struct dcssblk_dev_info *dev_info;
@@ -643,7 +644,6 @@ dcssblk_add_store(struct device *dev, struct device_attribute *attr, const char
 	dev_info->gd->fops = &dcssblk_devops;
 	dev_info->gd->private_data = dev_info;
 	dev_info->gd->flags |= GENHD_FL_NO_PART;
-	blk_queue_flag_set(QUEUE_FLAG_DAX, dev_info->gd->queue);
 
 	seg_byte_size = (dev_info->end - dev_info->start + 1);
 	set_capacity(dev_info->gd, seg_byte_size >> 9); // size in sectors
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index f3d4519d609d95..7022e06a3dd9a3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -307,6 +307,9 @@ enum {
 
 	/* supports REQ_NOWAIT */
 	BLK_FEAT_NOWAIT				= (1u << 7),
+
+	/* supports DAX */
+	BLK_FEAT_DAX				= (1u << 8),
 };
 
 /*
@@ -575,7 +578,6 @@ struct request_queue {
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
-#define QUEUE_FLAG_DAX		19	/* device supports DAX */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
@@ -602,7 +604,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_io_stat(q)	((q)->limits.features & BLK_FEAT_IO_STAT)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
-#define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
+#define blk_queue_dax(q)	((q)->limits.features & BLK_FEAT_DAX)
 #define blk_queue_pci_p2pdma(q)	\
 	test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:43 2024
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 17/26]  block: move the stable_writes flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:44 +0200
Message-ID: <20240617060532.127975-18-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the stable_writes flag into the queue_limits feature field so that
it can be set atomically with the queue frozen.

The flag is now inherited by blk_stack_limits, which greatly simplifies
the code in dm, and fixes md, which previously did not pass on the flag
set on lower devices.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq-debugfs.c         |  1 -
 block/blk-sysfs.c              | 29 +----------------------------
 drivers/block/drbd/drbd_main.c |  5 ++---
 drivers/block/rbd.c            |  9 +++------
 drivers/block/zram/zram_drv.c  |  2 +-
 drivers/md/dm-table.c          | 19 -------------------
 drivers/md/raid5.c             |  6 ++++--
 drivers/mmc/core/queue.c       |  5 +++--
 drivers/nvme/host/core.c       |  9 +++++----
 drivers/nvme/host/multipath.c  |  4 ----
 drivers/scsi/iscsi_tcp.c       |  8 ++++----
 include/linux/blkdev.h         |  9 ++++++---
 12 files changed, 29 insertions(+), 77 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index cbe99444ed1a54..eb73f1d348e5a9 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -88,7 +88,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
-	QUEUE_FLAG_NAME(STABLE_WRITES),
 	QUEUE_FLAG_NAME(POLL),
 	QUEUE_FLAG_NAME(DAX),
 	QUEUE_FLAG_NAME(STATS),
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 6f58530fb3c08e..cde525724831ef 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -296,37 +296,10 @@ static ssize_t queue_##_name##_store(struct request_queue *q,		 \
 	return queue_feature_store(q, page, count, _feature);		 \
 }
 
-#define QUEUE_SYSFS_BIT_FNS(name, flag, neg)				\
-static ssize_t								\
-queue_##name##_show(struct request_queue *q, char *page)		\
-{									\
-	int bit;							\
-	bit = test_bit(QUEUE_FLAG_##flag, &q->queue_flags);		\
-	return queue_var_show(neg ? !bit : bit, page);			\
-}									\
-static ssize_t								\
-queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
-{									\
-	unsigned long val;						\
-	ssize_t ret;							\
-	ret = queue_var_store(&val, page, count);			\
-	if (ret < 0)							\
-		 return ret;						\
-	if (neg)							\
-		val = !val;						\
-									\
-	if (val)							\
-		blk_queue_flag_set(QUEUE_FLAG_##flag, q);		\
-	else								\
-		blk_queue_flag_clear(QUEUE_FLAG_##flag, q);		\
-	return ret;							\
-}
-
 QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
 QUEUE_SYSFS_FEATURE(add_random, BLK_FEAT_ADD_RANDOM)
 QUEUE_SYSFS_FEATURE(iostats, BLK_FEAT_IO_STAT)
-QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
-#undef QUEUE_SYSFS_BIT_FNS
+QUEUE_SYSFS_FEATURE(stable_writes, BLK_FEAT_STABLE_WRITES);
 
 static ssize_t queue_zoned_show(struct request_queue *q, char *page)
 {
diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
index 2ef29a47807550..f92673f05c7abc 100644
--- a/drivers/block/drbd/drbd_main.c
+++ b/drivers/block/drbd/drbd_main.c
@@ -2698,7 +2698,8 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 		 */
 		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
 		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-					  BLK_FEAT_ROTATIONAL,
+					  BLK_FEAT_ROTATIONAL |
+					  BLK_FEAT_STABLE_WRITES,
 	};
 
 	device = minor_to_device(minor);
@@ -2737,8 +2738,6 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
 	sprintf(disk->disk_name, "drbd%d", minor);
 	disk->private_data = device;
 
-	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, disk->queue);
-
 	device->md_io.page = alloc_page(GFP_KERNEL);
 	if (!device->md_io.page)
 		goto out_no_io_page;
diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
index ec1f1c7d4275cd..008e850555f41a 100644
--- a/drivers/block/rbd.c
+++ b/drivers/block/rbd.c
@@ -4949,7 +4949,6 @@ static const struct blk_mq_ops rbd_mq_ops = {
 static int rbd_init_disk(struct rbd_device *rbd_dev)
 {
 	struct gendisk *disk;
-	struct request_queue *q;
 	unsigned int objset_bytes =
 	    rbd_dev->layout.object_size * rbd_dev->layout.stripe_count;
 	struct queue_limits lim = {
@@ -4979,12 +4978,14 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 		lim.max_write_zeroes_sectors = objset_bytes >> SECTOR_SHIFT;
 	}
 
+	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
+		lim.features |= BLK_FEAT_STABLE_WRITES;
+
 	disk = blk_mq_alloc_disk(&rbd_dev->tag_set, &lim, rbd_dev);
 	if (IS_ERR(disk)) {
 		err = PTR_ERR(disk);
 		goto out_tag_set;
 	}
-	q = disk->queue;
 
 	snprintf(disk->disk_name, sizeof(disk->disk_name), RBD_DRV_NAME "%d",
 		 rbd_dev->dev_id);
@@ -4996,10 +4997,6 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
 		disk->minors = RBD_MINORS_PER_MAJOR;
 	disk->fops = &rbd_bd_ops;
 	disk->private_data = rbd_dev;
-
-	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
-
 	rbd_dev->disk = disk;
 
 	return 0;
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index aad840fc7e18e3..f8f1b5b54795ac 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2208,6 +2208,7 @@ static int zram_add(void)
 #if ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE
 		.max_write_zeroes_sectors	= UINT_MAX,
 #endif
+		.features			= BLK_FEAT_STABLE_WRITES,
 	};
 	struct zram *zram;
 	int ret, device_id;
@@ -2246,7 +2247,6 @@ static int zram_add(void)
 	/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
 	set_capacity(zram->disk, 0);
 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, zram->disk->queue);
-	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);
 	ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
 	if (ret)
 		goto out_cleanup_disk;
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 5d5431e531aea9..aaf379cb15d91f 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -1819,13 +1819,6 @@ static bool dm_table_supports_secure_erase(struct dm_table *t)
 	return true;
 }
 
-static int device_requires_stable_pages(struct dm_target *ti,
-					struct dm_dev *dev, sector_t start,
-					sector_t len, void *data)
-{
-	return bdev_stable_writes(dev->bdev);
-}
-
 int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 			      struct queue_limits *limits)
 {
@@ -1862,18 +1855,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
 		dax_write_cache(t->md->dax_dev, true);
 
-	/*
-	 * Some devices don't use blk_integrity but still want stable pages
-	 * because they do their own checksumming.
-	 * If any underlying device requires stable pages, a table must require
-	 * them as well.  Only targets that support iterate_devices are considered:
-	 * don't want error, zero, etc to require stable pages.
-	 */
-	if (dm_table_any_dev_attr(t, device_requires_stable_pages, NULL))
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
-
 	/*
 	 * For a zoned target, setup the zones related queue attributes
 	 * and resources necessary for zone append emulation if necessary.
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 675c68fa6c6403..e875763d69917d 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7082,12 +7082,14 @@ raid5_store_skip_copy(struct mddev *mddev, const char *page, size_t len)
 		err = -ENODEV;
 	else if (new != conf->skip_copy) {
 		struct request_queue *q = mddev->gendisk->queue;
+		struct queue_limits lim = queue_limits_start_update(q);
 
 		conf->skip_copy = new;
 		if (new)
-			blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
+			lim.features |= BLK_FEAT_STABLE_WRITES;
 		else
-			blk_queue_flag_clear(QUEUE_FLAG_STABLE_WRITES, q);
+			lim.features &= ~BLK_FEAT_STABLE_WRITES;
+		err = queue_limits_commit_update(q, &lim);
 	}
 	mddev_unlock_and_resume(mddev);
 	return err ?: len;
diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index da00904d4a3c7e..d0b3ca8a11f071 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -378,13 +378,14 @@ static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
 		lim.max_segments = host->max_segs;
 	}
 
+	if (mmc_host_is_spi(host) && host->use_spi_crc)
+		lim.features |= BLK_FEAT_STABLE_WRITES;
+
 	disk = blk_mq_alloc_disk(&mq->tag_set, &lim, mq);
 	if (IS_ERR(disk))
 		return disk;
 	mq->queue = disk->queue;
 
-	if (mmc_host_is_spi(host) && host->use_spi_crc)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, mq->queue);
 	blk_queue_rq_timeout(mq->queue, 60 * HZ);
 
 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 0d753fe71f35b0..5ecf762d7c8837 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3724,6 +3724,7 @@ static void nvme_ns_add_to_ctrl_list(struct nvme_ns *ns)
 
 static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 {
+	struct queue_limits lim = { };
 	struct nvme_ns *ns;
 	struct gendisk *disk;
 	int node = ctrl->numa_node;
@@ -3732,7 +3733,10 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 	if (!ns)
 		return;
 
-	disk = blk_mq_alloc_disk(ctrl->tagset, NULL, ns);
+	if (ctrl->opts && ctrl->opts->data_digest)
+		lim.features |= BLK_FEAT_STABLE_WRITES;
+
+	disk = blk_mq_alloc_disk(ctrl->tagset, &lim, ns);
 	if (IS_ERR(disk))
 		goto out_free_ns;
 	disk->fops = &nvme_bdev_ops;
@@ -3741,9 +3745,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 	ns->disk = disk;
 	ns->queue = disk->queue;
 
-	if (ctrl->opts && ctrl->opts->data_digest)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue);
-
 	if (ctrl->ops->supports_pci_p2pdma &&
 	    ctrl->ops->supports_pci_p2pdma(ctrl))
 		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index eea727cfa9e67d..173796f2ddea9f 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -868,10 +868,6 @@ void nvme_mpath_add_disk(struct nvme_ns *ns, __le32 anagrpid)
 		nvme_mpath_set_live(ns);
 	}
 
-	if (test_bit(QUEUE_FLAG_STABLE_WRITES, &ns->queue->queue_flags) &&
-	    ns->head->disk)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
-				   ns->head->disk->queue);
 #ifdef CONFIG_BLK_DEV_ZONED
 	if (blk_queue_is_zoned(ns->queue) && ns->head->disk)
 		ns->head->disk->nr_zones = ns->disk->nr_zones;
diff --git a/drivers/scsi/iscsi_tcp.c b/drivers/scsi/iscsi_tcp.c
index 60688f18fac6f7..c708e105963833 100644
--- a/drivers/scsi/iscsi_tcp.c
+++ b/drivers/scsi/iscsi_tcp.c
@@ -1057,15 +1057,15 @@ static umode_t iscsi_sw_tcp_attr_is_visible(int param_type, int param)
 	return 0;
 }
 
-static int iscsi_sw_tcp_slave_configure(struct scsi_device *sdev)
+static int iscsi_sw_tcp_device_configure(struct scsi_device *sdev,
+		struct queue_limits *lim)
 {
 	struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(sdev->host);
 	struct iscsi_session *session = tcp_sw_host->session;
 	struct iscsi_conn *conn = session->leadconn;
 
 	if (conn->datadgst_en)
-		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES,
-				   sdev->request_queue);
+		lim->features |= BLK_FEAT_STABLE_WRITES;
 	return 0;
 }
 
@@ -1083,7 +1083,7 @@ static const struct scsi_host_template iscsi_sw_tcp_sht = {
 	.eh_device_reset_handler= iscsi_eh_device_reset,
 	.eh_target_reset_handler = iscsi_eh_recover_target,
 	.dma_boundary		= PAGE_SIZE - 1,
-	.slave_configure        = iscsi_sw_tcp_slave_configure,
+	.device_configure	= iscsi_sw_tcp_device_configure,
 	.proc_name		= "iscsi_tcp",
 	.this_id		= -1,
 	.track_queue_depth	= 1,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 5fafb2f95fd1a3..8936eb6ba60956 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -298,13 +298,17 @@ enum {
 
 	/* do disk/partitions IO accounting */
 	BLK_FEAT_IO_STAT			= (1u << 4),
+
+	/* don't modify data until writeback is done */
+	BLK_FEAT_STABLE_WRITES			= (1u << 5),
 };
 
 /*
  * Flags automatically inherited when stacking limits.
  */
 #define BLK_FEAT_INHERIT_MASK \
-	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL)
+	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL | \
+	 BLK_FEAT_STABLE_WRITES)
 
 /* internal flags in queue_limits.flags */
 enum {
@@ -565,7 +569,6 @@ struct request_queue {
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
-#define QUEUE_FLAG_STABLE_WRITES 15	/* don't modify blks until WB is done */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
 #define QUEUE_FLAG_DAX		19	/* device supports DAX */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
@@ -1323,7 +1326,7 @@ static inline bool bdev_stable_writes(struct block_device *bdev)
 	if (IS_ENABLED(CONFIG_BLK_DEV_INTEGRITY) &&
 	    q->limits.integrity.csum_type != BLK_INTEGRITY_CSUM_NONE)
 		return true;
-	return test_bit(QUEUE_FLAG_STABLE_WRITES, &q->queue_flags);
+	return q->limits.features & BLK_FEAT_STABLE_WRITES;
 }
 
 static inline bool blk_queue_write_cache(struct request_queue *q)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741670.1148419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Ws-00012L-3V; Mon, 17 Jun 2024 06:07:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741670.1148419; Mon, 17 Jun 2024 06:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wr-0000xi-67; Mon, 17 Jun 2024 06:07:45 +0000
Received: by outflank-mailman (input) for mailman id 741670;
 Mon, 17 Jun 2024 06:07:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5W0-0001Pt-C2
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:52 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c90eff1c-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:06:51 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Vf-00000009IvN-27mo; Mon, 17 Jun 2024 06:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c90eff1c-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=SK8RArAP4avqHje0rxA9RB84wXcITze6EgEVeCJ1QVg=; b=m2S75Z6VqYUv9e3AmIV+VpIIju
	9x+ecVjQtPCYeC14SeVPXe758QJJrRrxuC1E0edGmHIp5l8bDg0A9K4szx4dnpYDZeCqhRHqFrGy/
	U8OTlUIak8J0hW+1Ye+ZNpSRUmCHBc6sAzrnIApyPEvryh9UesnysmyYL4L7g6E6L/CoihwNSCtOS
	sjJRfMZTU4nKJRCzb91EvdK/zSEVeI+ybWZPpy2rdejb2886pIbCBATAcW5Ptlyamg5X3ckHSZLdN
	t5aiwmkK8giGNJjeoF/uCW65SttZ0F9QLwXZjq81nlQgVO9Fwf3ZFbgnszfFWbjmxCSn4SSydGqkZ
	Hr7/oFPg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 18/26] block: move the synchronous flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:45 +0200
Message-ID: <20240617060532.127975-19-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the synchronous flag into the queue_limits feature field so that it
can be set atomically with the queue frozen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq-debugfs.c        | 1 -
 drivers/block/brd.c           | 2 +-
 drivers/block/zram/zram_drv.c | 4 ++--
 drivers/nvdimm/btt.c          | 3 +--
 drivers/nvdimm/pmem.c         | 4 ++--
 include/linux/blkdev.h        | 7 ++++---
 6 files changed, 10 insertions(+), 11 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index eb73f1d348e5a9..957774e40b1d0c 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -85,7 +85,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(SAME_COMP),
 	QUEUE_FLAG_NAME(FAIL_IO),
 	QUEUE_FLAG_NAME(NOXMERGES),
-	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
 	QUEUE_FLAG_NAME(POLL),
diff --git a/drivers/block/brd.c b/drivers/block/brd.c
index b25dc463b5e3a6..d77deb571dbd06 100644
--- a/drivers/block/brd.c
+++ b/drivers/block/brd.c
@@ -335,6 +335,7 @@ static int brd_alloc(int i)
 		.max_hw_discard_sectors	= UINT_MAX,
 		.max_discard_segments	= 1,
 		.discard_granularity	= PAGE_SIZE,
+		.features		= BLK_FEAT_SYNCHRONOUS,
 	};
 
 	list_for_each_entry(brd, &brd_devices, brd_list)
@@ -366,7 +367,6 @@ static int brd_alloc(int i)
 	strscpy(disk->disk_name, buf, DISK_NAME_LEN);
 	set_capacity(disk, rd_size * 2);
 	
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, disk->queue);
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
 	err = add_disk(disk);
 	if (err)
diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
index f8f1b5b54795ac..efcb8d9d274c31 100644
--- a/drivers/block/zram/zram_drv.c
+++ b/drivers/block/zram/zram_drv.c
@@ -2208,7 +2208,8 @@ static int zram_add(void)
 #if ZRAM_LOGICAL_BLOCK_SIZE == PAGE_SIZE
 		.max_write_zeroes_sectors	= UINT_MAX,
 #endif
-		.features			= BLK_FEAT_STABLE_WRITES,
+		.features			= BLK_FEAT_STABLE_WRITES |
+						  BLK_FEAT_SYNCHRONOUS,
 	};
 	struct zram *zram;
 	int ret, device_id;
@@ -2246,7 +2247,6 @@ static int zram_add(void)
 
 	/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
 	set_capacity(zram->disk, 0);
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, zram->disk->queue);
 	ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
 	if (ret)
 		goto out_cleanup_disk;
diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
index e474afa8e9f68d..e79c06d65bb77b 100644
--- a/drivers/nvdimm/btt.c
+++ b/drivers/nvdimm/btt.c
@@ -1501,6 +1501,7 @@ static int btt_blk_init(struct btt *btt)
 		.logical_block_size	= btt->sector_size,
 		.max_hw_sectors		= UINT_MAX,
 		.max_integrity_segments	= 1,
+		.features		= BLK_FEAT_SYNCHRONOUS,
 	};
 	int rc;
 
@@ -1518,8 +1519,6 @@ static int btt_blk_init(struct btt *btt)
 	btt->btt_disk->fops = &btt_fops;
 	btt->btt_disk->private_data = btt;
 
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, btt->btt_disk->queue);
-
 	set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
 	rc = device_add_disk(&btt->nd_btt->dev, btt->btt_disk, NULL);
 	if (rc)
diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
index 501cf226df0187..b821dcf018f6ae 100644
--- a/drivers/nvdimm/pmem.c
+++ b/drivers/nvdimm/pmem.c
@@ -455,7 +455,8 @@ static int pmem_attach_disk(struct device *dev,
 		.logical_block_size	= pmem_sector_size(ndns),
 		.physical_block_size	= PAGE_SIZE,
 		.max_hw_sectors		= UINT_MAX,
-		.features		= BLK_FEAT_WRITE_CACHE,
+		.features		= BLK_FEAT_WRITE_CACHE |
+					  BLK_FEAT_SYNCHRONOUS,
 	};
 	int nid = dev_to_node(dev), fua;
 	struct resource *res = &nsio->res;
@@ -546,7 +547,6 @@ static int pmem_attach_disk(struct device *dev,
 	}
 	pmem->virt_addr = addr;
 
-	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
 	if (pmem->pfn_flags & PFN_MAP)
 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 8936eb6ba60956..cee7b44a142513 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -301,6 +301,9 @@ enum {
 
 	/* don't modify data until writeback is done */
 	BLK_FEAT_STABLE_WRITES			= (1u << 5),
+
+	/* always completes in submit context */
+	BLK_FEAT_SYNCHRONOUS			= (1u << 6),
 };
 
 /*
@@ -566,7 +569,6 @@ struct request_queue {
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
-#define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
 #define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
@@ -1315,8 +1317,7 @@ static inline bool bdev_nonrot(struct block_device *bdev)
 
 static inline bool bdev_synchronous(struct block_device *bdev)
 {
-	return test_bit(QUEUE_FLAG_SYNCHRONOUS,
-			&bdev_get_queue(bdev)->queue_flags);
+	return bdev->bd_disk->queue->limits.features & BLK_FEAT_SYNCHRONOUS;
 }
 
 static inline bool bdev_stable_writes(struct block_device *bdev)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741679.1148432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wv-0001fe-9I; Mon, 17 Jun 2024 06:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741679.1148432; Mon, 17 Jun 2024 06:07:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wu-0001c1-5c; Mon, 17 Jun 2024 06:07:48 +0000
Received: by outflank-mailman (input) for mailman id 741679;
 Mon, 17 Jun 2024 06:07:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5Vx-0001PY-1u
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:06:49 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c694172b-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:06:47 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5VY-00000009Ioc-0t7t; Mon, 17 Jun 2024 06:06:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c694172b-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=IGFuRNJe3zSieqxGyxM6PYAJhOBv/hR1F0ZI4UGAMCk=; b=hOOD1FUEkKABhqQfeW6yEt2Z9t
	1gZluRT8ki+NtUsLsGFCguf4ieQVV25DGxIDtaoF0oOgqPyx30rGJAooHoaEP5p0Tj7WeqkBunq3m
	RkpJhnN7p9TaiO1R/y7mD0hgyqdXhiMvdLv9NnDQbuE8qNW4813zVzamRdIVFk7+ruGsDA3kOSuAI
	DJ0BXWEYRize7AyGdIqI/M2j1wLSQ3h/4XXEeuNpQ+C6goC0Cef7BGa1mi7HxrarciYQth5UZKECl
	S8tvHyXG3+8gOwRcgdvfVVTgT7Stlw3CEzfR585WUfCCyIWxA+56GT15fZU0qnKBDczDisvPBOUMI
	nfyxjK1w==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org
Subject: [PATCH 16/26] block: move the io_stat flag setting to queue_limits
Date: Mon, 17 Jun 2024 08:04:43 +0200
Message-ID: <20240617060532.127975-17-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the io_stat flag into the queue_limits feature field so that it can
be set atomically with the queue frozen.

Simplify md and dm to set the flag unconditionally instead of going out
of their way to avoid setting a simple flag in cases where it is already
set by other means, which was a bit pointless.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 block/blk-mq-debugfs.c        |  1 -
 block/blk-mq.c                |  6 +++++-
 block/blk-sysfs.c             |  2 +-
 drivers/md/dm-table.c         | 12 +++++++++---
 drivers/md/dm.c               | 13 +++----------
 drivers/md/md.c               |  5 ++---
 drivers/nvme/host/multipath.c |  2 +-
 include/linux/blkdev.h        |  9 +++++----
 8 files changed, 26 insertions(+), 24 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index 6b7edb50bfd3fa..cbe99444ed1a54 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -84,7 +84,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(NOMERGES),
 	QUEUE_FLAG_NAME(SAME_COMP),
 	QUEUE_FLAG_NAME(FAIL_IO),
-	QUEUE_FLAG_NAME(IO_STAT),
 	QUEUE_FLAG_NAME(NOXMERGES),
 	QUEUE_FLAG_NAME(SYNCHRONOUS),
 	QUEUE_FLAG_NAME(SAME_FORCE),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 58b0d6c7cc34d6..cf67dc13f7dd4c 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4116,7 +4116,11 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 	struct request_queue *q;
 	int ret;
 
-	q = blk_alloc_queue(lim ? lim : &default_lim, set->numa_node);
+	if (!lim)
+		lim = &default_lim;
+	lim->features |= BLK_FEAT_IO_STAT;
+
+	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
 		return q;
 	q->queuedata = queuedata;
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index 9174aca3b85526..6f58530fb3c08e 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -324,7 +324,7 @@ queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
 
 QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
 QUEUE_SYSFS_FEATURE(add_random, BLK_FEAT_ADD_RANDOM)
-QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
+QUEUE_SYSFS_FEATURE(iostats, BLK_FEAT_IO_STAT)
 QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
 #undef QUEUE_SYSFS_BIT_FNS
 
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index 0a3838e45affd4..5d5431e531aea9 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -579,6 +579,12 @@ int dm_split_args(int *argc, char ***argvp, char *input)
 	return 0;
 }
 
+static void dm_set_stacking_limits(struct queue_limits *limits)
+{
+	blk_set_stacking_limits(limits);
+	limits->features |= BLK_FEAT_IO_STAT;
+}
+
 /*
  * Impose necessary and sufficient conditions on a devices's table such
  * that any incoming bio which respects its logical_block_size can be
@@ -617,7 +623,7 @@ static int validate_hardware_logical_block_alignment(struct dm_table *t,
 	for (i = 0; i < t->num_targets; i++) {
 		ti = dm_table_get_target(t, i);
 
-		blk_set_stacking_limits(&ti_limits);
+		dm_set_stacking_limits(&ti_limits);
 
 		/* combine all target devices' limits */
 		if (ti->type->iterate_devices)
@@ -1591,7 +1597,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	unsigned int zone_sectors = 0;
 	bool zoned = false;
 
-	blk_set_stacking_limits(limits);
+	dm_set_stacking_limits(limits);
 
 	t->integrity_supported = true;
 	for (unsigned int i = 0; i < t->num_targets; i++) {
@@ -1604,7 +1610,7 @@ int dm_calculate_queue_limits(struct dm_table *t,
 	for (unsigned int i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
-		blk_set_stacking_limits(&ti_limits);
+		dm_set_stacking_limits(&ti_limits);
 
 		if (!ti->type->iterate_devices) {
 			/* Set I/O hints portion of queue limits */
diff --git a/drivers/md/dm.c b/drivers/md/dm.c
index 13037d6a6f62a2..8a976cee448bed 100644
--- a/drivers/md/dm.c
+++ b/drivers/md/dm.c
@@ -2386,22 +2386,15 @@ int dm_setup_md_queue(struct mapped_device *md, struct dm_table *t)
 	struct table_device *td;
 	int r;
 
-	switch (type) {
-	case DM_TYPE_REQUEST_BASED:
+	WARN_ON_ONCE(type == DM_TYPE_NONE);
+
+	if (type == DM_TYPE_REQUEST_BASED) {
 		md->disk->fops = &dm_rq_blk_dops;
 		r = dm_mq_init_request_queue(md, t);
 		if (r) {
 			DMERR("Cannot initialize queue for request-based dm mapped device");
 			return r;
 		}
-		break;
-	case DM_TYPE_BIO_BASED:
-	case DM_TYPE_DAX_BIO_BASED:
-		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, md->queue);
-		break;
-	case DM_TYPE_NONE:
-		WARN_ON_ONCE(true);
-		break;
 	}
 
 	r = dm_calculate_queue_limits(t, &limits);
diff --git a/drivers/md/md.c b/drivers/md/md.c
index c23423c51fb7c2..8db0db8d5a27ac 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5787,7 +5787,8 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int unit;
 	int error;
 	struct queue_limits lim = {
-		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
+		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+					  BLK_FEAT_IO_STAT,
 	};
 
 	/*
@@ -6152,8 +6153,6 @@ int md_run(struct mddev *mddev)
 	if (!mddev_is_dm(mddev)) {
 		struct request_queue *q = mddev->gendisk->queue;
 
-		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, q);
-
 		/* Set the NOWAIT flags if all underlying devices support it */
 		if (nowait)
 			blk_queue_flag_set(QUEUE_FLAG_NOWAIT, q);
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 58c13304e558e0..eea727cfa9e67d 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -538,6 +538,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 
 	blk_set_stacking_limits(&lim);
 	lim.dma_alignment = 3;
+	lim.features |= BLK_FEAT_IO_STAT;
 	if (head->ids.csi != NVME_CSI_ZNS)
 		lim.max_zone_append_sectors = 0;
 
@@ -550,7 +551,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 			ctrl->subsys->instance, head->instance);
 
 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
-	blk_queue_flag_set(QUEUE_FLAG_IO_STAT, head->disk->queue);
 	/*
 	 * This assumes all controllers that refer to a namespace either
 	 * support poll queues or not.  That is not a strict guarantee,
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index cf1bbf566b2bcd..5fafb2f95fd1a3 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -295,6 +295,9 @@ enum {
 
 	/* contributes to the random number pool */
 	BLK_FEAT_ADD_RANDOM			= (1u << 3),
+
+	/* do disk/partitions IO accounting */
+	BLK_FEAT_IO_STAT			= (1u << 4),
 };
 
 /*
@@ -558,7 +561,6 @@ struct request_queue {
 #define QUEUE_FLAG_NOMERGES     3	/* disable merge attempts */
 #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
 #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
-#define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
 #define QUEUE_FLAG_SYNCHRONOUS	11	/* always completes in submit context */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
@@ -577,8 +579,7 @@ struct request_queue {
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
 #define QUEUE_FLAG_SKIP_TAGSET_QUIESCE	31 /* quiesce_tagset skip the queue*/
 
-#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_IO_STAT) |		\
-				 (1UL << QUEUE_FLAG_SAME_COMP) |	\
+#define QUEUE_FLAG_MQ_DEFAULT	((1UL << QUEUE_FLAG_SAME_COMP) |	\
 				 (1UL << QUEUE_FLAG_NOWAIT))
 
 void blk_queue_flag_set(unsigned int flag, struct request_queue *q);
@@ -592,7 +593,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_noxmerges(q)	\
 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
 #define blk_queue_nonrot(q)	((q)->limits.features & BLK_FEAT_ROTATIONAL)
-#define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
+#define blk_queue_io_stat(q)	((q)->limits.features & BLK_FEAT_IO_STAT)
 #define blk_queue_zone_resetall(q)	\
 	test_bit(QUEUE_FLAG_ZONE_RESETALL, &(q)->queue_flags)
 #define blk_queue_dax(q)	test_bit(QUEUE_FLAG_DAX, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741691.1148450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5X0-00038J-3V; Mon, 17 Jun 2024 06:07:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741691.1148450; Mon, 17 Jun 2024 06:07:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5Wz-000330-CT; Mon, 17 Jun 2024 06:07:53 +0000
Received: by outflank-mailman (input) for mailman id 741691;
 Mon, 17 Jun 2024 06:07:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5W9-0001Pt-RS
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:07:01 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cee0160f-2c6f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:07:00 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Vp-00000009J4S-1rbZ; Mon, 17 Jun 2024 06:06:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cee0160f-2c6f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=XDMmYQEhqfxG2aJiRL/4ZYDAujH/dkiKXCLXQktKzVU=; b=Mw/amKwikts8mPqEzitq6S7gSt
	SqUiszPIuksF9x0en8yoe/G92F4AAVDiQml0cDIv/Bs4SWaYMa4ZjPmLHxhxZmae04xvteiFICg4q
	uFeMrOx1jCTsbVi93WFy17asl+ugvr0swUM7/onpistyMzuFHwhr67Tb4gAPx3xZEJxWeOFbhkyy0
	eDdqkSIQYDTyAvjjOdDfDOhG8hJrCetKbzdTk8hn5jTMBtvclvfbmsu6urUsWKk5lUJVNpJTt4bTA
	4t55FwPr4cOtwfysvFUJ3BFhJ4GHXOkytdQUNpzS2Mlt39fBxfePUyoTnd9rZjlIih0sPH3kMopQ8
	Hp4NGWUw==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 21/26] block: move the poll flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:48 +0200
Message-ID: <20240617060532.127975-22-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the poll flag into the queue_limits feature field so that it can
be set atomically with the queue frozen.

Stacking drivers are simplified in that they can now simply set the
flag, and blk_stack_limits() will clear it if any of the underlying
devices does not support the feature.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-core.c              |  5 ++--
 block/blk-mq-debugfs.c        |  1 -
 block/blk-mq.c                | 31 +++++++++++---------
 block/blk-settings.c          | 10 ++++---
 block/blk-sysfs.c             |  4 +--
 drivers/md/dm-table.c         | 54 +++++++++--------------------------
 drivers/nvme/host/multipath.c | 12 +-------
 include/linux/blkdev.h        |  4 ++-
 8 files changed, 45 insertions(+), 76 deletions(-)

diff --git a/block/blk-core.c b/block/blk-core.c
index 2b45a4df9a1aa1..8d9fbd353fc7fc 100644
--- a/block/blk-core.c
+++ b/block/blk-core.c
@@ -791,7 +791,7 @@ void submit_bio_noacct(struct bio *bio)
 		}
 	}
 
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!(q->limits.features & BLK_FEAT_POLL))
 		bio_clear_polled(bio);
 
 	switch (bio_op(bio)) {
@@ -915,8 +915,7 @@ int bio_poll(struct bio *bio, struct io_comp_batch *iob, unsigned int flags)
 		return 0;
 
 	q = bdev_get_queue(bdev);
-	if (cookie == BLK_QC_T_NONE ||
-	    !test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (cookie == BLK_QC_T_NONE || !(q->limits.features & BLK_FEAT_POLL))
 		return 0;
 
 	blk_flush_plug(current->plug, false);
diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index f4fa820251ce83..3a21527913840d 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -87,7 +87,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(NOXMERGES),
 	QUEUE_FLAG_NAME(SAME_FORCE),
 	QUEUE_FLAG_NAME(INIT_DONE),
-	QUEUE_FLAG_NAME(POLL),
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
diff --git a/block/blk-mq.c b/block/blk-mq.c
index 43235acc87505f..e2b9710ddc5ad1 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -4109,6 +4109,12 @@ void blk_mq_release(struct request_queue *q)
 	blk_mq_sysfs_deinit(q);
 }
 
+static bool blk_mq_can_poll(struct blk_mq_tag_set *set)
+{
+	return set->nr_maps > HCTX_TYPE_POLL &&
+		set->map[HCTX_TYPE_POLL].nr_queues;
+}
+
 struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 		struct queue_limits *lim, void *queuedata)
 {
@@ -4119,6 +4125,8 @@ struct request_queue *blk_mq_alloc_queue(struct blk_mq_tag_set *set,
 	if (!lim)
 		lim = &default_lim;
 	lim->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+	if (blk_mq_can_poll(set))
+		lim->features |= BLK_FEAT_POLL;
 
 	q = blk_alloc_queue(lim, set->numa_node);
 	if (IS_ERR(q))
@@ -4273,17 +4281,6 @@ static void blk_mq_realloc_hw_ctxs(struct blk_mq_tag_set *set,
 	mutex_unlock(&q->sysfs_lock);
 }
 
-static void blk_mq_update_poll_flag(struct request_queue *q)
-{
-	struct blk_mq_tag_set *set = q->tag_set;
-
-	if (set->nr_maps > HCTX_TYPE_POLL &&
-	    set->map[HCTX_TYPE_POLL].nr_queues)
-		blk_queue_flag_set(QUEUE_FLAG_POLL, q);
-	else
-		blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
-}
-
 int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 		struct request_queue *q)
 {
@@ -4311,7 +4308,6 @@ int blk_mq_init_allocated_queue(struct blk_mq_tag_set *set,
 	q->tag_set = set;
 
 	q->queue_flags |= QUEUE_FLAG_MQ_DEFAULT;
-	blk_mq_update_poll_flag(q);
 
 	INIT_DELAYED_WORK(&q->requeue_work, blk_mq_requeue_work);
 	INIT_LIST_HEAD(&q->flush_list);
@@ -4798,8 +4794,10 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 fallback:
 	blk_mq_update_queue_map(set);
 	list_for_each_entry(q, &set->tag_list, tag_set_list) {
+		struct queue_limits lim;
+
 		blk_mq_realloc_hw_ctxs(set, q);
-		blk_mq_update_poll_flag(q);
+
 		if (q->nr_hw_queues != set->nr_hw_queues) {
 			int i = prev_nr_hw_queues;
 
@@ -4811,6 +4809,13 @@ static void __blk_mq_update_nr_hw_queues(struct blk_mq_tag_set *set,
 			set->nr_hw_queues = prev_nr_hw_queues;
 			goto fallback;
 		}
+		lim = queue_limits_start_update(q);
+		if (blk_mq_can_poll(set))
+			lim.features |= BLK_FEAT_POLL;
+		else
+			lim.features &= ~BLK_FEAT_POLL;
+		if (queue_limits_commit_update(q, &lim) < 0)
+			pr_warn("updating the poll flag failed\n");
 		blk_mq_map_swqueue(q);
 	}
 
diff --git a/block/blk-settings.c b/block/blk-settings.c
index bf4622c19b5c09..026ba68d829856 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -460,13 +460,15 @@ int blk_stack_limits(struct queue_limits *t, struct queue_limits *b,
 	t->features |= (b->features & BLK_FEAT_INHERIT_MASK);
 
 	/*
-	 * BLK_FEAT_NOWAIT needs to be supported both by the stacking driver
-	 * and all underlying devices.  The stacking driver sets the flag
-	 * before stacking the limits, and this will clear the flag if any
-	 * of the underlying devices does not support it.
+	 * BLK_FEAT_NOWAIT and BLK_FEAT_POLL need to be supported both by the
+	 * stacking driver and all underlying devices.  The stacking driver sets
+	 * the flags before stacking the limits, and this will clear the flags
+	 * if any of the underlying devices does not support it.
 	 */
 	if (!(b->features & BLK_FEAT_NOWAIT))
 		t->features &= ~BLK_FEAT_NOWAIT;
+	if (!(b->features & BLK_FEAT_POLL))
+		t->features &= ~BLK_FEAT_POLL;
 
 	t->max_sectors = min_not_zero(t->max_sectors, b->max_sectors);
 	t->max_user_sectors = min_not_zero(t->max_user_sectors,
diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
index cde525724831ef..da4e96d686f91e 100644
--- a/block/blk-sysfs.c
+++ b/block/blk-sysfs.c
@@ -394,13 +394,13 @@ static ssize_t queue_poll_delay_store(struct request_queue *q, const char *page,
 
 static ssize_t queue_poll_show(struct request_queue *q, char *page)
 {
-	return queue_var_show(test_bit(QUEUE_FLAG_POLL, &q->queue_flags), page);
+	return queue_var_show(q->limits.features & BLK_FEAT_POLL, page);
 }
 
 static ssize_t queue_poll_store(struct request_queue *q, const char *page,
 				size_t count)
 {
-	if (!test_bit(QUEUE_FLAG_POLL, &q->queue_flags))
+	if (!(q->limits.features & BLK_FEAT_POLL))
 		return -EINVAL;
 	pr_info_ratelimited("writes to the poll attribute are ignored.\n");
 	pr_info_ratelimited("please use driver specific parameters instead.\n");
diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
index e44697037e86f4..ca1f136575cff4 100644
--- a/drivers/md/dm-table.c
+++ b/drivers/md/dm-table.c
@@ -582,7 +582,7 @@ int dm_split_args(int *argc, char ***argvp, char *input)
 static void dm_set_stacking_limits(struct queue_limits *limits)
 {
 	blk_set_stacking_limits(limits);
-	limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+	limits->features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
 }
 
 /*
@@ -1024,14 +1024,13 @@ bool dm_table_request_based(struct dm_table *t)
 	return __table_type_request_based(dm_table_get_type(t));
 }
 
-static bool dm_table_supports_poll(struct dm_table *t);
-
 static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *md)
 {
 	enum dm_queue_mode type = dm_table_get_type(t);
 	unsigned int per_io_data_size = 0, front_pad, io_front_pad;
 	unsigned int min_pool_size = 0, pool_size;
 	struct dm_md_mempools *pools;
+	unsigned int bioset_flags = 0;
 
 	if (unlikely(type == DM_TYPE_NONE)) {
 		DMERR("no table type is set, can't allocate mempools");
@@ -1048,6 +1047,9 @@ static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *
 		goto init_bs;
 	}
 
+	if (md->queue->limits.features & BLK_FEAT_POLL)
+		bioset_flags |= BIOSET_PERCPU_CACHE;
+
 	for (unsigned int i = 0; i < t->num_targets; i++) {
 		struct dm_target *ti = dm_table_get_target(t, i);
 
@@ -1060,8 +1062,7 @@ static int dm_table_alloc_md_mempools(struct dm_table *t, struct mapped_device *
 
 	io_front_pad = roundup(per_io_data_size,
 		__alignof__(struct dm_io)) + DM_IO_BIO_OFFSET;
-	if (bioset_init(&pools->io_bs, pool_size, io_front_pad,
-			dm_table_supports_poll(t) ? BIOSET_PERCPU_CACHE : 0))
+	if (bioset_init(&pools->io_bs, pool_size, io_front_pad, bioset_flags))
 		goto out_free_pools;
 	if (t->integrity_supported &&
 	    bioset_integrity_create(&pools->io_bs, pool_size))
@@ -1404,14 +1405,6 @@ struct dm_target *dm_table_find_target(struct dm_table *t, sector_t sector)
 	return &t->targets[(KEYS_PER_NODE * n) + k];
 }
 
-static int device_not_poll_capable(struct dm_target *ti, struct dm_dev *dev,
-				   sector_t start, sector_t len, void *data)
-{
-	struct request_queue *q = bdev_get_queue(dev->bdev);
-
-	return !test_bit(QUEUE_FLAG_POLL, &q->queue_flags);
-}
-
 /*
  * type->iterate_devices() should be called when the sanity check needs to
  * iterate and check all underlying data devices. iterate_devices() will
@@ -1459,19 +1452,6 @@ static int count_device(struct dm_target *ti, struct dm_dev *dev,
 	return 0;
 }
 
-static bool dm_table_supports_poll(struct dm_table *t)
-{
-	for (unsigned int i = 0; i < t->num_targets; i++) {
-		struct dm_target *ti = dm_table_get_target(t, i);
-
-		if (!ti->type->iterate_devices ||
-		    ti->type->iterate_devices(ti, device_not_poll_capable, NULL))
-			return false;
-	}
-
-	return true;
-}
-
 /*
  * Check whether a table has no data devices attached using each
  * target's iterate_devices method.
@@ -1817,6 +1797,13 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 	if (!dm_table_supports_nowait(t))
 		limits->features &= ~BLK_FEAT_NOWAIT;
 
+	/*
+	 * The current polling implementation does not support request based
+	 * stacking.
+	 */
+	if (!__table_type_bio_based(t->type))
+		limits->features &= ~BLK_FEAT_POLL;
+
 	if (!dm_table_supports_discards(t)) {
 		limits->max_hw_discard_sectors = 0;
 		limits->discard_granularity = 0;
@@ -1858,21 +1845,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
 		return r;
 
 	dm_update_crypto_profile(q, t);
-
-	/*
-	 * Check for request-based device is left to
-	 * dm_mq_init_request_queue()->blk_mq_init_allocated_queue().
-	 *
-	 * For bio-based device, only set QUEUE_FLAG_POLL when all
-	 * underlying devices supporting polling.
-	 */
-	if (__table_type_bio_based(t->type)) {
-		if (dm_table_supports_poll(t))
-			blk_queue_flag_set(QUEUE_FLAG_POLL, q);
-		else
-			blk_queue_flag_clear(QUEUE_FLAG_POLL, q);
-	}
-
 	return 0;
 }
 
diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
index 61a162c9cf4e6c..4933194d00e592 100644
--- a/drivers/nvme/host/multipath.c
+++ b/drivers/nvme/host/multipath.c
@@ -538,7 +538,7 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 
 	blk_set_stacking_limits(&lim);
 	lim.dma_alignment = 3;
-	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+	lim.features |= BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT | BLK_FEAT_POLL;
 	if (head->ids.csi != NVME_CSI_ZNS)
 		lim.max_zone_append_sectors = 0;
 
@@ -549,16 +549,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
 	head->disk->private_data = head;
 	sprintf(head->disk->disk_name, "nvme%dn%d",
 			ctrl->subsys->instance, head->instance);
-
-	/*
-	 * This assumes all controllers that refer to a namespace either
-	 * support poll queues or not.  That is not a strict guarantee,
-	 * but if the assumption is wrong the effect is only suboptimal
-	 * performance but not correctness problem.
-	 */
-	if (ctrl->tagset->nr_maps > HCTX_TYPE_POLL &&
-	    ctrl->tagset->map[HCTX_TYPE_POLL].nr_queues)
-		blk_queue_flag_set(QUEUE_FLAG_POLL, head->disk->queue);
 	return 0;
 }
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 7022e06a3dd9a3..cd27b66cbacc00 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -310,6 +310,9 @@ enum {
 
 	/* supports DAX */
 	BLK_FEAT_DAX				= (1u << 8),
+
+	/* supports I/O polling */
+	BLK_FEAT_POLL				= (1u << 9),
 };
 
 /*
@@ -577,7 +580,6 @@ struct request_queue {
 #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
 #define QUEUE_FLAG_SAME_FORCE	12	/* force complete on same CPU */
 #define QUEUE_FLAG_INIT_DONE	14	/* queue is initialized */
-#define QUEUE_FLAG_POLL		16	/* IO polling enabled if set */
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:07:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:07:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741696.1148462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5X2-0003i5-CR; Mon, 17 Jun 2024 06:07:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741696.1148462; Mon, 17 Jun 2024 06:07:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5X1-0003Zy-FO; Mon, 17 Jun 2024 06:07:55 +0000
Received: by outflank-mailman (input) for mailman id 741696;
 Mon, 17 Jun 2024 06:07:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JvID=NT=bombadil.srs.infradead.org=BATV+625ba2f6da96caf54eae+7603+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sJ5WI-0001PY-PL
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:07:10 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d3e11e8b-2c6f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 08:07:09 +0200 (CEST)
Received: from [91.187.204.140] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sJ5Vz-00000009JDZ-3yN0; Mon, 17 Jun 2024 06:06:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3e11e8b-2c6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	MIME-Version:References:In-Reply-To:Message-ID:Date:Subject:Cc:To:From:Sender
	:Reply-To:Content-Type:Content-ID:Content-Description;
	bh=7MyimiDl8CyyRZjDerM0QPE1ddqN8Ffub56VTDkefrE=; b=D7Jdi/b5u5YIKYXgqc4/3LEBFA
	LKfANyIFztkmFX98zyoFi9RVS4HLUN2iq5m5JvNCmyV8Oc7DjofO6I13EWboYU4npYW4tMy2rc0oQ
	Js3C42X0I5opn/qlCwdSjZuiS0IbEv+ljEflX5e+PQ0zwm1KSnaf/XDyzUC8NuG8xJSNOEkHXtE93
	IYhu+4Puq8fiLPZS8LmhhzMmPdyeSVCXfrUVA+wLwowE8Wb4jS9YT6zO58Cbp+617ZhMBOY5K+7BC
	0sBJDVWhbHZF0cErQZSa7W23yNH17zr1RSgqozvRNtpF0nU0/C67E7Lgu6VsU/32MXhUzF4KqYnXL
	fShUgDcg==;
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	=?UTF-8?q?Christoph=20B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>,
	Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>,
	Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com,
	nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: [PATCH 24/26] block: move the pci_p2pdma flag to queue_limits
Date: Mon, 17 Jun 2024 08:04:51 +0200
Message-ID: <20240617060532.127975-25-hch@lst.de>
X-Mailer: git-send-email 2.43.0
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Move the pci_p2pdma flag into the queue_limits feature field so that it
can be set atomically with the queue frozen.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
---
 block/blk-mq-debugfs.c   | 1 -
 drivers/nvme/host/core.c | 8 +++-----
 include/linux/blkdev.h   | 7 ++++---
 3 files changed, 7 insertions(+), 9 deletions(-)

diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
index f2fd72f4414ae8..8b5a68861c119b 100644
--- a/block/blk-mq-debugfs.c
+++ b/block/blk-mq-debugfs.c
@@ -90,7 +90,6 @@ static const char *const blk_queue_flag_name[] = {
 	QUEUE_FLAG_NAME(STATS),
 	QUEUE_FLAG_NAME(REGISTERED),
 	QUEUE_FLAG_NAME(QUIESCED),
-	QUEUE_FLAG_NAME(PCI_P2PDMA),
 	QUEUE_FLAG_NAME(RQ_ALLOC_TIME),
 	QUEUE_FLAG_NAME(HCTX_ACTIVE),
 	QUEUE_FLAG_NAME(SQ_SCHED),
diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index 5ecf762d7c8837..31e752e8d632cd 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -3735,6 +3735,9 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 
 	if (ctrl->opts && ctrl->opts->data_digest)
 		lim.features |= BLK_FEAT_STABLE_WRITES;
+	if (ctrl->ops->supports_pci_p2pdma &&
+	    ctrl->ops->supports_pci_p2pdma(ctrl))
+		lim.features |= BLK_FEAT_PCI_P2PDMA;
 
 	disk = blk_mq_alloc_disk(ctrl->tagset, &lim, ns);
 	if (IS_ERR(disk))
@@ -3744,11 +3747,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
 
 	ns->disk = disk;
 	ns->queue = disk->queue;
-
-	if (ctrl->ops->supports_pci_p2pdma &&
-	    ctrl->ops->supports_pci_p2pdma(ctrl))
-		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
-
 	ns->ctrl = ctrl;
 	kref_init(&ns->kref);
 
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1077cb8d8fd808..ab0f7dfba556eb 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -319,6 +319,9 @@ enum {
 
 	/* supports Zone Reset All */
 	BLK_FEAT_ZONE_RESETALL			= (1u << 11),
+
+	/* supports PCI(e) p2p requests */
+	BLK_FEAT_PCI_P2PDMA			= (1u << 12),
 };
 
 /*
@@ -588,7 +591,6 @@ struct request_queue {
 #define QUEUE_FLAG_STATS	20	/* track IO start and completion times */
 #define QUEUE_FLAG_REGISTERED	22	/* queue has been registered to a disk */
 #define QUEUE_FLAG_QUIESCED	24	/* queue has been quiesced */
-#define QUEUE_FLAG_PCI_P2PDMA	25	/* device supports PCI p2p requests */
 #define QUEUE_FLAG_RQ_ALLOC_TIME 27	/* record rq->alloc_time_ns */
 #define QUEUE_FLAG_HCTX_ACTIVE	28	/* at least one blk-mq hctx is active */
 #define QUEUE_FLAG_SQ_SCHED     30	/* single queue style io dispatch */
@@ -611,8 +613,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
 #define blk_queue_zone_resetall(q)	\
 	((q)->limits.features & BLK_FEAT_ZONE_RESETALL)
 #define blk_queue_dax(q)	((q)->limits.features & BLK_FEAT_DAX)
-#define blk_queue_pci_p2pdma(q)	\
-	test_bit(QUEUE_FLAG_PCI_P2PDMA, &(q)->queue_flags)
+#define blk_queue_pci_p2pdma(q)	((q)->limits.features & BLK_FEAT_PCI_P2PDMA)
 #ifdef CONFIG_BLK_RQ_ALLOC_TIME
 #define blk_queue_rq_alloc_time(q)	\
 	test_bit(QUEUE_FLAG_RQ_ALLOC_TIME, &(q)->queue_flags)
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:12:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:12:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741756.1148477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5bi-0000Ji-EZ; Mon, 17 Jun 2024 06:12:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741756.1148477; Mon, 17 Jun 2024 06:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5bi-0000Jb-C2; Mon, 17 Jun 2024 06:12:46 +0000
Received: by outflank-mailman (input) for mailman id 741756;
 Mon, 17 Jun 2024 06:12:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOKI=NT=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sJ5bh-0000JV-UN
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:12:45 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 992bd30a-2c70-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:12:44 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id C4E92CE0FD5;
 Mon, 17 Jun 2024 06:12:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 722E5C2BD10;
 Mon, 17 Jun 2024 06:12:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 992bd30a-2c70-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718604757;
	bh=LrLZKWXkRakpWI4/ueXizJXAwKTR991d/slvg583xhg=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=nFz/S8PqJ0x5s3Ny5I2evBmJDaO2EaUIuiWjAdtRmCQ/A6QpvJbLmjC0Fw7IXR1VT
	 uSNeYADsaTVxJipaBcqAsdKPSVVmorISGDB8ghzkIoBVgWELX0vnHVqN5Xf3bgHIW/
	 Jd7IHk4xFBOBn5HxiZsiAifRO0y8cGSStiGE0NqqwYZj3htvKjZ9JxGjZyZWwJADfE
	 WIUuoyvXMEqafkiqyCnfGPuXY7GxfWhCoyjXAFv1xdeTp+IRRcD16/M5uHMReQAzrM
	 yLcLncP795V2wNVqCHkdUBLru1ejkUmD3WFgvqGfKNm1zjEea2esQdxNT2nTdM/0yi
	 xUG82No+arhQQ==
Message-ID: <e4ce83ca-160f-4dd9-984a-842b6cd2b5c0@kernel.org>
Date: Mon, 17 Jun 2024 15:12:31 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 03/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-4-hch@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240617060532.127975-4-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/17/24 15:04, Christoph Hellwig wrote:
> Move a bit of code that sets up the zone flag and the write granularity
> into sd_zbc_read_zones to be with the rest of the zoned limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Looks good.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:13:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:13:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741761.1148487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5ce-0000rO-N8; Mon, 17 Jun 2024 06:13:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741761.1148487; Mon, 17 Jun 2024 06:13:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5ce-0000rH-KP; Mon, 17 Jun 2024 06:13:44 +0000
Received: by outflank-mailman (input) for mailman id 741761;
 Mon, 17 Jun 2024 06:13:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOKI=NT=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sJ5ce-0000rB-7O
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:13:44 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bc4f4f6d-2c70-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:13:40 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 79463CE0FD1;
 Mon, 17 Jun 2024 06:13:36 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id E26F1C2BD10;
 Mon, 17 Jun 2024 06:13:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc4f4f6d-2c70-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718604815;
	bh=i+c+RpVRjeaE7WIb31u0YOButoWpoMUXbmMH2ED9KM8=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=NGDwf0zXEmzo9+7Yx4d9cGXV793JqUi91piwr9DLQr9gvYLzjssulS3nd0FDXod4o
	 1NiEaNf9DKv91oATngwE6MlpuTxufePFFAUPAiGTMSZ8Iyqb43pbMY/AWpT0yfgG9x
	 h7tK/rGWoSYR5lk30x+/zxQfjGaL8mJv2SH7mMjlqBwSC3JpHWjWgqv3xYEnVpUZ3T
	 GLm2uelJLLxYyxbZLJP4lmiyxvgd2hfNI8V/LV1wIStwKjwJDXjuh7JmDm0LI0dTdZ
	 pEsb3rbnrZU0AByYnkj2ZCWuHVBzn8LtTQfnCUTAC98K23Vza+t2P0C82aHVfqiJXZ
	 7lE9UMorz5Ztg==
Message-ID: <72e2cebc-a748-4e39-8783-440a82cd40c1@kernel.org>
Date: Mon, 17 Jun 2024 15:13:29 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 06/26] loop: regularize upgrading the block size for
 direct I/O
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Hannes Reinecke <hare@suse.de>, Bart Van Assche <bvanassche@acm.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-7-hch@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240617060532.127975-7-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/17/24 15:04, Christoph Hellwig wrote:
> The LOOP_CONFIGURE path automatically upgrades the block size to that
> of the underlying file for O_DIRECT file descriptors, but the
> LOOP_SET_BLOCK_SIZE path does not.  Fix this by lifting the code to
> pick the block size into common code.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>

Looks good to me.

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
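As a rough sketch of the behavior the commit message describes — one common helper that picks the block size for both LOOP_CONFIGURE and LOOP_SET_BLOCK_SIZE — the logic could look like this (names and the 512-byte fallback are illustrative assumptions, not the actual kernel symbols):

```c
/* Hypothetical sketch of the lifted block-size choice: for O_DIRECT
 * backing files, never go below the backing store's block size, since
 * direct I/O must be aligned to it.  Names are illustrative only. */
unsigned int loop_default_blocksize(unsigned int requested,
                                    unsigned int backing_bsize,
                                    int use_dio)
{
	/* O_DIRECT: upgrade to the backing block size if it is larger. */
	if (use_dio && backing_bsize > requested)
		return backing_bsize;
	/* Otherwise honor the request, falling back to 512 bytes. */
	return requested ? requested : 512;
}
```

With such a helper shared by both ioctl paths, LOOP_SET_BLOCK_SIZE gets the same O_DIRECT upgrade that LOOP_CONFIGURE already performed.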

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:14:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:14:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741766.1148497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5dO-0001Nd-0a; Mon, 17 Jun 2024 06:14:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741766.1148497; Mon, 17 Jun 2024 06:14:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5dN-0001NW-UM; Mon, 17 Jun 2024 06:14:29 +0000
Received: by outflank-mailman (input) for mailman id 741766;
 Mon, 17 Jun 2024 06:14:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOKI=NT=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sJ5dN-0000rB-Ar
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:14:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id da05dd61-2c70-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:14:28 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id E064A61127;
 Mon, 17 Jun 2024 06:14:26 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id EAD93C2BD10;
 Mon, 17 Jun 2024 06:14:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da05dd61-2c70-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718604866;
	bh=0lJUnETFqPY6tW6TrXFvmKZbVynd6dSdhAW9BC6m+kE=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=KOFdmVOJ7h8N9C7m0i5IH5j0T08893gjt00viGEXx0ITrjnEVSc7/n9x0PQNWhoCT
	 R0G81HXchAoQj63QO9zRTuNrWvr+9t+JPXFgDrlIxVtX306FULncwLvEYKw7dkrJFz
	 S6g5cy4QX7ofX49F8f+GhBjuuHoOe+F7NO0rxdIU4Xo0KJqwPt8S0uxmuFJz89cUcE
	 b/qPpB1pTQXoj1rhBApU3rTIJGnWH/qdRZJ9WOKQ+vU7PAj3CbnFFbcYvIj/VggV6B
	 QjSaWBl2euqvrvUDeNk514rvS2oC47UJWpHbvgG7uFw1q8j7/c3YYf4Afd7F+Ekvnv
	 W3vwVHRQv5iRw==
Message-ID: <d7b45e0b-68a9-4612-861a-7f192fbe6f84@kernel.org>
Date: Mon, 17 Jun 2024 15:14:21 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 07/26] loop: also use the default block size from an
 underlying block device
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Hannes Reinecke <hare@suse.de>, Bart Van Assche <bvanassche@acm.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-8-hch@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240617060532.127975-8-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/17/24 15:04, Christoph Hellwig wrote:
> Fix the code in loop_reconfigure_limits to pick a default block size for
> O_DIRECT file descriptors to also work when the loop device sits on top
> of a block device and not just on a regular file on a block device based
> file system.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
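The fix described above amounts to sourcing the default block size from the backing store's own geometry in both cases. A minimal sketch of that decision (parameter names are illustrative; in the kernel the block-device case would use something like bdev_logical_block_size()):

```c
/* Illustrative only: the default block size should come from the
 * underlying block device when the loop device sits directly on one,
 * and from the backing filesystem's block size otherwise. */
unsigned int backing_blocksize(int is_block_device,
                               unsigned int bdev_logical_bs,
                               unsigned int fs_blocksize)
{
	if (is_block_device)
		return bdev_logical_bs;	/* loop on a raw block device */
	return fs_blocksize;		/* loop on a regular file */
}
```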

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:23:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:23:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741845.1148508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5mI-0003LQ-Sb; Mon, 17 Jun 2024 06:23:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741845.1148508; Mon, 17 Jun 2024 06:23:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5mI-0003LJ-PM; Mon, 17 Jun 2024 06:23:42 +0000
Received: by outflank-mailman (input) for mailman id 741845;
 Mon, 17 Jun 2024 06:23:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOKI=NT=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sJ5mH-0003LD-SR
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:23:41 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 22033120-2c72-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:23:40 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 93B9FCE0FF0;
 Mon, 17 Jun 2024 06:23:33 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6B0EEC4AF1D;
 Mon, 17 Jun 2024 06:23:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22033120-2c72-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718605412;
	bh=uMvSBiMCv5pxii/WRMe23vNNUZcfAqUN7YYlV2bv5Ko=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=BLJEJwmMsCdQfpgeuSDFFhnTeYFrqXQ3GLsKdxLZUdhrqVCdlccETSs3reVgf76Rb
	 4NtHvzGyiVxgePozGjF6t5kslEhri7LHZl0DvN4BR4smuIdi3fWqPb/I1vVcfCYeYi
	 fiVbKgy6qBR19yAJJ+QQiLpd0oScLx6sTzGkD0QZlko71H6Yz0ku2d3RgXA7QaUtXr
	 GCCeiq+Ax8cIg0E+JxaozXjwtykL41yfJ3EoBVzh7JOK3qIJv+ADxMbH5/c2pw1ot4
	 Uz1JoQGCWSohEy6+TWZP70VoRi8sStV1FCojepppq975Nhv9oB4B67NtvIDhRuNUUc
	 t0DawHP78ZCMQ==
Message-ID: <3247433c-b356-425c-a888-8f7904351a2f@kernel.org>
Date: Mon, 17 Jun 2024 15:23:27 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 13/26] block: move cache control settings out of
 queue->flags
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Ulf Hansson <ulf.hansson@linaro.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-14-hch@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240617060532.127975-14-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/17/24 15:04, Christoph Hellwig wrote:
> Move the cache control settings into the queue_limits so that the flags
> can be set atomically with the device queue frozen.
> 
> Add new features and flags field for the driver set flags, and internal
> (usually sysfs-controlled) flags in the block layer.  Note that we'll
> eventually remove enough fields from queue_limits to bring it back to the
> previous size.
> 
> The disable flag is inverted compared to the previous meaning, which
> means it now survives a rescan, similar to the max_sectors and
> max_discard_sectors user limits.
> 
> The FLUSH and FUA flags are now inherited by blk_stack_limits, which
> simplified the code in dm a lot, but also causes a slight behavior
> change in that dm-switch and dm-unstripe now advertise a write cache
> despite setting num_flush_bios to 0.  The I/O path will handle this
> gracefully, but as far as I can tell the lack of num_flush_bios
> and thus flush support is a pre-existing data integrity bug in those
> targets that really needs fixing, after which a non-zero num_flush_bios
> should be required in dm for targets that map to underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Acked-by: Ulf Hansson <ulf.hansson@linaro.org> [mmc]

A few nits below. With these fixed,

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>

> +Implementation details for bio based block drivers
> +--------------------------------------------------
> +
> +For bio based drivers the REQ_PREFLUSH and REQ_FUA bit are simplify passed on

...bit are simplify... -> ...bits are simply...

> +to the driver if the drivers sets the BLK_FEAT_WRITE_CACHE flag and the drivers
> +needs to handle them.

s/drivers/driver (2 times)

> -and the driver must handle write requests that have the REQ_FUA bit set
> -in prep_fn/request_fn.  If the FUA bit is not natively supported the block
> -layer turns it into an empty REQ_OP_FLUSH request after the actual write.
> +When the BLK_FEAT_FUA flags is set, the REQ_FUA bit simplify passed on for the

s/bit simplify/bit is simply

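The inverted disable flag mentioned in the commit message can be sketched as follows: the driver advertises the cache as a feature bit, the user-controlled disable lives in a separate flag, and the effective state is the combination of the two. The flag is user state rather than driver state, which is why it survives a rescan. Macro names below are approximations, not verified against mainline:

```c
/* Sketch of the inverted write-cache disable flag.  The feature bit
 * is driver-owned (reset on rescan); the disable flag is user-owned
 * (survives a rescan), so the effective state is their combination. */
#define BLK_FEAT_WRITE_CACHE		(1u << 0)	/* volatile cache present */
#define BLK_FEAT_FUA			(1u << 1)	/* native REQ_FUA support */
#define BLK_FLAG_WRITE_CACHE_DISABLED	(1u << 0)	/* sysfs-controlled */

int writeback_cache_enabled(unsigned int features, unsigned int flags)
{
	return (features & BLK_FEAT_WRITE_CACHE) &&
	       !(flags & BLK_FLAG_WRITE_CACHE_DISABLED);
}
```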

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:25:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:25:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741851.1148518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5ny-0003s5-7e; Mon, 17 Jun 2024 06:25:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741851.1148518; Mon, 17 Jun 2024 06:25:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5ny-0003ry-43; Mon, 17 Jun 2024 06:25:26 +0000
Received: by outflank-mailman (input) for mailman id 741851;
 Mon, 17 Jun 2024 06:25:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOKI=NT=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sJ5nw-0003rs-VQ
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:25:24 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 60a7607a-2c72-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:25:24 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 3B2916114E;
 Mon, 17 Jun 2024 06:25:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 64B3DC2BD10;
 Mon, 17 Jun 2024 06:25:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60a7607a-2c72-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718605521;
	bh=b4C7oWXm6AiByOywGcEEC3GnqcHjFi/qcKpmuC4GSl4=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=LEnK17zFqe+hZOSDUj/tOYJPTbrEr/iyN4UNL0SmIILaSKuqTSJ4sjLsh83tbNp98
	 xMaGs3z4bHthSiIVUGgn6p6k1Hp4OFwjqTvBViQ5r26ZmMaTUT3kdJYmUDeoTb8K+4
	 jT0vLs8Spfa7yPWU+yf5YKwUzdEIRo48lJNdqGhIlmZTb6k7761bMjs2rcyd0Hfqhj
	 Vpk+ileeXJ565Q23YPjftwlXpdj9Xgx1bQhDcK36W/6YmID+8NvWJsHJ8cIkaSJCt0
	 a65W2bmG0g0iV3m+rrO4imip0sJi5o8S5QSdLa4VJ1pKGoVgY8bkQ22MoITZxj7f6K
	 IsTYJ2fK4dlcQ==
Message-ID: <dd4ca62c-fc36-4439-a1ad-c55250f8d1a8@kernel.org>
Date: Mon, 17 Jun 2024 15:25:16 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 16/26] block: move the io_stat flag setting to
 queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-17-hch@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240617060532.127975-17-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/17/24 15:04, Christoph Hellwig wrote:
> Move the io_stat flag into the queue_limits feature field so that it can
> be set atomically with the queue frozen.
> 
> Simplify md and dm to set the flag unconditionally instead of avoiding
> setting a simple flag for cases where it already is set by other means,
> which is a bit pointless.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
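The simplification described in the commit message works because setting a feature bit is idempotent: md and dm can set io_stat unconditionally and nothing changes when another layer already set it. A trivial sketch (macro name and value are illustrative):

```c
/* Setting the io_stat feature bit is idempotent, so stacking drivers
 * can set it unconditionally instead of checking first. */
#define BLK_FEAT_IO_STAT	(1u << 3)	/* illustrative value */

unsigned int enable_io_stat(unsigned int features)
{
	return features | BLK_FEAT_IO_STAT;
}
```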

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 06:26:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 06:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741861.1148528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5p6-0004Sa-JD; Mon, 17 Jun 2024 06:26:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741861.1148528; Mon, 17 Jun 2024 06:26:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ5p6-0004ST-GW; Mon, 17 Jun 2024 06:26:36 +0000
Received: by outflank-mailman (input) for mailman id 741861;
 Mon, 17 Jun 2024 06:26:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sOKI=NT=kernel.org=dlemoal@srs-se1.protection.inumbo.net>)
 id 1sJ5p5-0003rs-Gm
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 06:26:35 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 88fa8dfc-2c72-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 08:26:34 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id D5232CE0E70;
 Mon, 17 Jun 2024 06:26:29 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 897D7C2BD10;
 Mon, 17 Jun 2024 06:26:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88fa8dfc-2c72-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718605589;
	bh=5ukl1My1jMqhSyTqZEFqdGlI+72qAecy5V2ti5Kc46U=;
	h=Date:Subject:To:Cc:References:From:In-Reply-To:From;
	b=OVa4VXa4gPyN4KTWboLud3pwf0TJbbk/rQ5BM4+Ufc5q6edplugfrWWUEhonGkfMR
	 QHtsT5neG8mJ3vHyVK4COzgFtMwuBSA3Ltr1Kmgg0P9h/SDaPK/4D4vixnSs7AOdoo
	 HP6agM24471d/vJb4Hh8SFe0zewqIqWCGc3aXNawn13hq27CQJOtkzwEi0TWYKBp4l
	 c0TLeE0D0RKiKV3rNJyqOIzjVOG/87Bt7edT59MQLl5UfWjsDRSx6vZMzrYnvW4QJA
	 /RoDuAqhbb/WQma0tOFHLAJHwURkZ+Z0rxy3rGxoovWXWM4rnyYIVDHULrocVjTDvQ
	 k66Zct4kG1uEw==
Message-ID: <c9871979-de72-49ca-879b-5f2bd773d517@kernel.org>
Date: Mon, 17 Jun 2024 15:26:23 +0900
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 19/26] block: move the nowait flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-20-hch@lst.de>
From: Damien Le Moal <dlemoal@kernel.org>
Content-Language: en-US
Organization: Western Digital Research
In-Reply-To: <20240617060532.127975-20-hch@lst.de>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/17/24 15:04, Christoph Hellwig wrote:
> Move the nowait flag into the queue_limits feature field so that it can
> be set atomically with the queue frozen.
> 
> Stacking drivers are simplified in that they now can simply set the
> flag, and blk_stack_limits will clear it when the feature is not
> supported by any of the underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
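The stacking behavior described above can be sketched as: the stacking driver sets the nowait feature optimistically, and the limits-stacking step clears it whenever one of the underlying devices lacks it, so the stacked device only advertises nowait if every component supports it. Names and bit values are illustrative assumptions:

```c
/* Sketch of nowait feature stacking: clear the bit on the stacked
 * device as soon as one underlying device does not support it. */
#define BLK_FEAT_NOWAIT	(1u << 2)	/* illustrative value */

unsigned int stack_nowait(unsigned int stacked_features,
			  unsigned int lower_features)
{
	if (!(lower_features & BLK_FEAT_NOWAIT))
		stacked_features &= ~BLK_FEAT_NOWAIT;
	return stacked_features;
}
```

Calling this once per underlying device reproduces the "supported only if all components support it" semantics.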

-- 
Damien Le Moal
Western Digital Research



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 07:34:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 07:34:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741870.1148537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ6sC-0006yf-CS; Mon, 17 Jun 2024 07:33:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741870.1148537; Mon, 17 Jun 2024 07:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ6sC-0006yY-9a; Mon, 17 Jun 2024 07:33:52 +0000
Received: by outflank-mailman (input) for mailman id 741870;
 Mon, 17 Jun 2024 07:33:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5eX2=NT=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJ6sB-0006yS-K7
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 07:33:51 +0000
Received: from mail-qv1-xf32.google.com (mail-qv1-xf32.google.com
 [2607:f8b0:4864:20::f32])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id efdac342-2c7b-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 09:33:49 +0200 (CEST)
Received: by mail-qv1-xf32.google.com with SMTP id
 6a1803df08f44-6ad8243dba8so20663076d6.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 00:33:49 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5ee0346sm52576626d6.115.2024.06.17.00.33.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 00:33:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efdac342-2c7b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718609628; x=1719214428; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=T09eC7D20hXn+Rq7lSZU/rf2mdJcpGS5V6pgfj1k5hw=;
        b=jl3UiyG7nWVcGy0qgencj95903YymhxqN9BNt3mkSNUCyHRNKyQ0pZw+BFvSLDZLAA
         maZI9ilhVq9WVnb3T3oOU71PNob4H6amAnDqNOAxaOxZGdszsG9uARvmwfkSoymuyCGm
         iiVcGYVLpjWUu2uED1gW4n+mdOiT9kzP6OAzQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718609628; x=1719214428;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=T09eC7D20hXn+Rq7lSZU/rf2mdJcpGS5V6pgfj1k5hw=;
        b=RVEdnngQTNWMSmOUB8KgpcgLZ90BmjWHtttX/rVZ1achu4upAJV47aYltFjOeTqlIl
         wB1KgGzhLOPWA3+nfwKM4XKKWgNGhSUvyq1ZX55I2RZwRNMLnEWcIvYlO8QoO9oZONK1
         /YIjFxJBDfD+v7cLR1Ti6OUgOhJDx5H1ue+gWZn322uCg05EScSKeXWA9EDxCd7i1i4E
         8KFHQ2dxdPATV50O9S2u18jiNFlashCLzNr9MHLgNoQH4V/g0ud8Amx0Z/XOhtJSM12n
         Zwd8nquY4MYc8gBsXnMBd1BJnEA7ujLng15fi6GtEdJB46bE0e2gxeVgtXBD6wWcbDIC
         tH4A==
X-Forwarded-Encrypted: i=1; AJvYcCWuNTrzmOw5CBgapBjyejTZP2TrK+fdtOaRgoUgnfjPhUz7jGQ3dRAqinuhdSWS2T6MD1KX6wCRoSGTWOokZHgknV141hXOTzjEdHipFTc=
X-Gm-Message-State: AOJu0YzjsS5FhuzmhsRUZqIyXDnJYnfIWMYh3PXYXSv3ETc9sshkpB5E
	SVlBO5VFp3HeX0jj+aIENINgKsSK2GIWwnV61q7+XXrgkd684rbhNUs6D4EvGRc=
X-Google-Smtp-Source: AGHT+IFm1MIuIWK5yQCGkvPUJH+fdVPXSwahtFdf8QCjx0ph9UV26HUskEZhGKFxeKdxYddmaklG2w==
X-Received: by 2002:a0c:c48b:0:b0:6b2:b05d:aa78 with SMTP id 6a1803df08f44-6b2b05dacddmr94394926d6.9.1718609628256;
        Mon, 17 Jun 2024 00:33:48 -0700 (PDT)
Date: Mon, 17 Jun 2024 09:33:42 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Elias El Yandouzi <eliasely@amazon.com>, julien@xen.org,
	pdurrant@amazon.com, dwmw@amazon.com,
	Hongyan Xia <hongyxia@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Julien Grall <jgrall@amazon.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH V3 (resend) 01/19] x86: Create per-domain mapping of
 guest_root_pt
Message-ID: <Zm_m1shUlyt_KvBJ@macbook>
References: <20240513134046.82605-1-eliasely@amazon.com>
 <20240513134046.82605-2-eliasely@amazon.com>
 <dd145c67-8e3e-4b15-94f7-c7cd1f127d45@suse.com>
 <bda3386e-26c5-4efd-b7ad-00f3643523fa@amazon.com>
 <b50d0a83-fab4-4f59-bf4d-5c5593923f34@suse.com>
 <1ad9ccce-c02b-46c0-8fea-10b35b574cb8@amazon.com>
 <71f7b9c8-43f9-4703-b6e3-8b3fe8b740c0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <71f7b9c8-43f9-4703-b6e3-8b3fe8b740c0@suse.com>

On Fri, Jun 14, 2024 at 08:23:30AM +0200, Jan Beulich wrote:
> On 13.06.2024 18:31, Elias El Yandouzi wrote:
> > On 16/05/2024 08:17, Jan Beulich wrote:
> >> On 15.05.2024 20:25, Elias El Yandouzi wrote:
> >>> However, I noticed quite a weird bug while doing some testing. I may
> >>> need your expertise to find the root cause.
> >>
> >> Looks like you've overflowed the dom0 kernel stack, most likely because
> >> of recurring nested exceptions.
> >>
> >>> In the case where I have more vCPUs than pCPUs (and let's consider we
> >>> have one pCPU for two vCPUs), I noticed that I would always get a page
> >>> fault in dom0 kernel (5.10.0-13-amd64) at the exact same location. I did
> >>> a bit of investigation but I couldn't come to a clear conclusion.
> >>> Looking at the stack trace [1], I have the feeling the crash occurs in a
> >>> loop or a recursive call.
> >>>
> >>> I tried to identify where the crash occurred using addr2line:
> >>>
> >>>   > addr2line -e vmlinux-5.10.0-29-amd64 0xffffffff810218a0
> >>> debian/build/build_amd64_none_amd64/arch/x86/xen/mmu_pv.c:880
> >>>
> >>> It turns out to point on the closing bracket of the function
> >>> xen_mm_unpin_all()[2].
> >>>
> >>> I thought the crash could happen while returning from the function in
> >>> the assembly epilogue but the output of objdump doesn't even show the
> >>> address.
> >>>
> >>> The only theory I could think of was that because we only have one pCPU,
> >>> we may never execute one of the two vCPUs, and never set up the mapping
> >>> to the guest_root_pt in write_ptbase(), hence the page fault. This is
> >>> just a random theory; I couldn't find any hint suggesting it would be
> >>> the case though. Any idea how I could debug this?
> >>
> >> I guess you want to instrument Xen enough to catch the top level fault (or
> >> the 2nd from top, depending on where the nesting actually starts) to see
> >> why that happens. Quite likely some guest mapping isn't set up properly.
> >>
> > 
> > Julien helped me with this one and I believe we have identified the 
> > problem.
> > 
> > As you've suggested, I wrote the mapping of the guest root PT in our
> > per-domain section, root_pt_l1tab, within the write_ptbase() function,
> > as we'd always be in the case v == current, and switch_cr3_cr4() would
> > always flush the local TLB.
> > 
> > However, there exists a path, in toggle_guest_mode(), where we could
> > call update_cr3()/make_cr3() without calling write_ptbase(), and hence
> > not maintain the mappings properly. Instead, toggle_guest_mode() has a
> > partly open-coded version of write_ptbase().
> > 
> > Would you rather see the mappings written in make_cr3(), or in
> > toggle_guest_mode() within its partly open-coded version of write_ptbase()?
> 
> Likely the latter, but that's hard to tell without seeing the resulting
> code.

There's already a special case for XPTI in toggle_guest_mode() to deal
exactly with that AFAICT.  Maybe it would be better if write_ptbase()
could be made suitable for use in _toggle_guest_pt() instead of
directly calling write_cr3(), as we could then avoid having to pile
open-coded bodges in toggle_guest_mode() and/or _toggle_guest_pt().
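
FWIW, a pseudocode-only sketch of the shape I mean (the helper name is
made up, and none of this is tested):

```c
/* Pseudocode sketch, not a patch: factor the per-domain mapping
 * maintenance out of write_ptbase() into a helper that
 * _toggle_guest_pt() can also call, instead of open-coding bits of
 * write_ptbase() there (or in toggle_guest_mode()). */
static void sync_guest_root_pt_mapping(struct vcpu *v)
{
    /* (Re)write the root_pt_l1tab slot for v's guest root page table,
     * exactly as write_ptbase() would do it. */
}

static void _toggle_guest_pt(void)
{
    struct vcpu *v = current;

    v->arch.flags ^= TF_kernel_mode;
    update_cr3(v);

    /* Keep the per-domain mapping in sync before switching page
     * tables, rather than calling write_cr3() directly. */
    sync_guest_root_pt_mapping(v);
    write_cr3(v->arch.cr3);
}
```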

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 07:46:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 07:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741877.1148547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ74X-0000XE-EI; Mon, 17 Jun 2024 07:46:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741877.1148547; Mon, 17 Jun 2024 07:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ74X-0000X7-BM; Mon, 17 Jun 2024 07:46:37 +0000
Received: by outflank-mailman (input) for mailman id 741877;
 Mon, 17 Jun 2024 07:46:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5eX2=NT=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJ74V-0000X0-Ox
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 07:46:35 +0000
Received: from mail-yb1-xb32.google.com (mail-yb1-xb32.google.com
 [2607:f8b0:4864:20::b32])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b7d2cedb-2c7d-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 09:46:34 +0200 (CEST)
Received: by mail-yb1-xb32.google.com with SMTP id
 3f1490d57ef6-dfde5ae0aaeso4206402276.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 00:46:34 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2d21f0b56sm14986726d6.46.2024.06.17.00.46.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 00:46:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7d2cedb-2c7d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718610393; x=1719215193; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=hxR5Q/aAsURbng3JCie17ZZs34XkIhMPgViBzaUo2nE=;
        b=LxKcqopwKUEHV+x43yeBMjtamGHIJk4n56VX9RJFcOyS9TjIJjJ8Rln8IxG+1TR0Cg
         //V93ckH1dCWhGFaxjcwCvYrHO9ynDjTZAvG1r+wAyqx5zJoSu9b1T8oqW79OiF7ljdd
         PZE4YP5GG5IPvYEz6TGbDWIUXVFevKnDWb9SE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718610393; x=1719215193;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=hxR5Q/aAsURbng3JCie17ZZs34XkIhMPgViBzaUo2nE=;
        b=dksqoZZBK4KYYZdmfyZdpJ+YLka7437iFAd/7asi16iUG7IQ5IXL/NqYOIVuqATnEI
         0CVv8Zs8zm/7EutK5wfKc9y0ssil6bQbAp/rSitxcEfjOLxhC/OwnrLkaJYEN6ikLYJI
         yHiW3Rml2BQgJ0hqi3T3yWX3JLHaK+jjX3SgskSnfJJ5x3rwObkJbzsUKun9kZufnzKP
         GS/Yi13Doifx7TqGSTshlFtNVezlOQFjjbEqN0f0XCEE1Apzoo1HAYOQjr8TjRJr7Emu
         FSmW7aBroWyyrkOeaWLogyLB2TJ+zlmUlWJ169IFZqIj1poP4dyGko7HuFzeQsZU2tDN
         Hg9w==
X-Forwarded-Encrypted: i=1; AJvYcCV77AiQ2u/DYVLgJUaPOW0vaIDIiDdA58rnxrPt/M3KerSO/A0G4bwCpczu5gDhWbteDauIlRiquGUoM+KU2ZbrEGbfzmuBc+5h+MJN1s0=
X-Gm-Message-State: AOJu0Yww5rBQT28XV3Q5LSNGc35Q+DFCTPjfl63STirjkrMIuHKNRYIA
	c3voajaS1VyadT07yXR/F/81dNYll5lFaV0grEZlywNrA+5zpbId3VDAXQ/RiKk=
X-Google-Smtp-Source: AGHT+IH67by9M8/qsH8Fh5/VKPd8wzAE03X+Kkx644weipILcFFSTwYhcvoMoypbKMPBP7V1s3Bx5g==
X-Received: by 2002:a25:ab87:0:b0:dfb:42d:bb32 with SMTP id 3f1490d57ef6-dff153ca962mr8554670276.30.1718610393259;
        Mon, 17 Jun 2024 00:46:33 -0700 (PDT)
Date: Mon, 17 Jun 2024 09:46:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <Zm_p1QvoZcjQ4gBa@macbook>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <Zm-FidjSK3mOieSC@itl-email>

On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> On Fri, Jun 14, 2024 at 10:39:37AM +0200, Roger Pau Monné wrote:
> > On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> > > On 14.06.2024 09:21, Roger Pau Monné wrote:
> > > > On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> > > >> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > > >>> GPU acceleration requires that pageable host memory be mappable
> > > >>> into a guest.
> > > >>
> > > >> I'm sure it was explained in the session, which sadly I couldn't attend.
> > > >> I've asked Ray and Xenia the same before, but I'm afraid it still
> > > >> hasn't become clear to me why this is a _requirement_. After all, it's
> > > >> against what we're doing elsewhere (i.e. so far it has always been
> > > >> guest memory that's mapped in the host). I can appreciate that it might
> > > >> be more difficult to implement, but avoiding a violation of this
> > > >> fundamental (kind of) rule might be worth the price (and would avoid other
> > > >> complexities, of which more may be lurking than what you enumerate
> > > >> below).
> > > > 
> > > > My limited understanding (please someone correct me if wrong) is that
> > > > the GPU buffer (or context I think it's also called?) is always
> > > > allocated from dom0 (the owner of the GPU).  The underlying memory
> > > > addresses of such a buffer need to be mapped into the guest.  The
> > > > buffer backing memory might be GPU MMIO from the device BAR(s) or
> > > > system RAM, and such a buffer can be paged by the dom0 kernel at any
> > > > time (iow: changing the backing memory from MMIO to RAM or vice
> > > > versa).  Also, the buffer must be contiguous in physical address
> > > > space.
> > > 
> > > This last one in particular would of course be a severe restriction.
> > > Yet: There's an IOMMU involved, isn't there?
> > 
> > Yup, IIRC that's why Ray said it was much easier for them to
> > support VirtIO GPUs from a PVH dom0 rather than a classic PV one.
> > 
> > It might be easier to implement from a classic PV dom0 if there's
> > pv-iommu support, so that dom0 can create its own contiguous memory
> > buffers from the device PoV.
> 
> What makes PVH an improvement here?  I thought PV dom0 uses an identity
> mapping for the IOMMU, while a PVH dom0 uses an IOMMU that mirrors the
> dom0 second-stage page tables.

Indeed, hence finding a physically contiguous buffer on classic PV is
way more complicated, because the IOMMU identity maps mfns, and the PV
address space can be completely scattered.

OTOH, on PVH the IOMMU page tables are the same as the second-stage
translation, and hence the physical address space is way more compact
(as it would be on native).

> In both cases, the device physical
> addresses are identical to dom0’s physical addresses.

Yes, but a PV dom0 physical address space can be very scattered.

IIRC there's a hypercall to request physically contiguous memory for
PV, but you don't want to use that every time you allocate a
buffer (not sure it would support the sizes needed by the GPU
anyway).

> PV is terrible for many reasons, so I’m okay with focusing on PVH dom0,
> but I’d like to know why there is a difference.
> 
> > > > I'm not sure it's possible to ensure that when using system RAM such
> > > > memory comes from the guest rather than the host, as it would likely
> > > > require some very intrusive hooks into the kernel logic, and
> > > > negotiation with the guest to allocate the requested amount of
> > > > memory and hand it over to dom0.  If the maximum size of the buffer is
> > > > known in advance maybe dom0 can negotiate with the guest to allocate
> > > > such a region and grant it access to dom0 at driver attachment time.
> > > 
> > > Besides the thought of transiently converting RAM to kind-of-MMIO, this
> > 
> > As a note here, changing the type to MMIO would likely involve
> > modifying the EPT/NPT tables to propagate the new type.  On a PVH dom0
> > this would likely involve shattering superpages in order to set the
> > correct memory types.
> > 
> > Depending on how often and how random those system RAM changes are
> > necessary this could also create contention on the p2m lock.
> > 
> > > makes me think of another possible option: Could Dom0 transfer ownership
> > > of the RAM that wants mapping in the guest (remotely resembling
> > > grant-transfer)? Would require the guest to have ballooned down enough
> > > first, of course. (In both cases it would certainly need working out how
> > > the conversion / transfer back could be made work safely and reasonably
> > > cleanly.)
> > 
> > Maybe.  The fact that the guest needs to balloon down that amount of memory
> > seems weird to me, as from the guest PoV that mapped memory is
> > MMIO-like and not system RAM.
> 
> I don’t like it either.  Furthermore, this would require changes to the
> virtio-GPU driver in the guest, which I’d prefer to avoid.

IMO it would be helpful if you (or someone) could write the full
specification of how VirtIO GPU is supposed to work right now (with
the KVM model, I assume?), as it would be a good starting point for
providing suggestions about how to make it work (or adapt it) on Xen.

I don't think the high level layers on top of VirtIO GPU are relevant,
but it's important to understand the protocol between the VirtIO GPU
front and back ends.

So far I've only had scattered conversations about what's needed, but
no formal write-up of how this is supposed to work.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 07:56:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 07:56:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741884.1148557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ7DX-0002ZW-71; Mon, 17 Jun 2024 07:55:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741884.1148557; Mon, 17 Jun 2024 07:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ7DX-0002ZP-42; Mon, 17 Jun 2024 07:55:55 +0000
Received: by outflank-mailman (input) for mailman id 741884;
 Mon, 17 Jun 2024 07:55:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5eX2=NT=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJ7DW-0002ZH-62
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 07:55:54 +0000
Received: from mail-qt1-x833.google.com (mail-qt1-x833.google.com
 [2607:f8b0:4864:20::833])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04e9f5dd-2c7f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 09:55:53 +0200 (CEST)
Received: by mail-qt1-x833.google.com with SMTP id
 d75a77b69052e-4405743ac19so35246121cf.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 00:55:53 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-441ef4eefaesm44094171cf.21.2024.06.17.00.55.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 00:55:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04e9f5dd-2c7f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718610952; x=1719215752; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=6I7HjTVD0FYGpGJyzdWr9lYyHGbtAeqv6wNa85E5Tlk=;
        b=VAY7IA5xPYNIzp6FaeaQ6lDcLUKWwE5y75LHtZcNMNJ8UN5SP6BggRKeYWjslOF12X
         SSbW1b9SCtVB3w82RZFhoF+IfbznIvhJKHuNW/lwS4guMIYpHQsb6NW+Lyxxi6YKfAQ0
         ZFBpz94mJFh5eaw6kTiiE+Gm8UeZYlP3sjkf8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718610952; x=1719215752;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=6I7HjTVD0FYGpGJyzdWr9lYyHGbtAeqv6wNa85E5Tlk=;
        b=RYWZP5/W/hYq7y64YQ9EDc9kfJIkjFyru4QfgZBiv3lFF8UCK058nDGzIcf0sSCRWV
         ypGFIejLF/vBNvYgZa09aQSYX63Xa0tIJ2xmIaHU3okUKeHlDS99Bbh8zNxshQuWRKmI
         aNzxzk+jG67rUAj6UPYGq5oRhasIV9bDNLnF3P4rhV0nAqf6/Z1rNxexnof+8DZAh5gG
         Q0FgBNnazUdM8GVREU0VpNOmksnpo/PzEZVyR3DNJ6EPvUchW2PhquVhUZ+IkIas9dKB
         ecARA/FgZn/TbBAPWqerZ/fQss5BWNyly83XFxkmpxoGYC78jVe6nwvQLuRTS4o2iAHI
         Mpng==
X-Forwarded-Encrypted: i=1; AJvYcCXIosLs4U1Jk+lh6lqzoXNDWNa/h7y/JlAWoIAzZDeZfpc1H+l58PPj9r4Wu9cIKKpX31DBZApFQn1SlLcJWe/qQK+Ol2LmoubMlnH4gL4=
X-Gm-Message-State: AOJu0YwgNJVUcWXkQJc4q3/DpJrm2KfCxOmUCS6YiXM01JoLE7Uyg174
	L4nfnHVm0J4sS+Xlf+O9cxZ93srrOux9Axhr87V9b1DCuxnAUiZ/dfIvM/0SZcA=
X-Google-Smtp-Source: AGHT+IFxBb+fxu+zFP9/mPYH5nZSCdE6S/NGqQ55TdkfvSsNQ6okiuS1Itv2iysUUW0wZGmJIGDYbg==
X-Received: by 2002:ac8:5e11:0:b0:440:10be:3ecf with SMTP id d75a77b69052e-4417ac402c0mr199666251cf.22.1718610952095;
        Mon, 17 Jun 2024 00:55:52 -0700 (PDT)
Date: Mon, 17 Jun 2024 09:55:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?utf-8?Q?B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>, Alasdair Kergon <agk@redhat.com>,
	Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: [PATCH 01/26] xen-blkfront: don't disable cache flushes when
 they fail
Message-ID: <Zm_sBInagtSkOZtg@macbook>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-2-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240617060532.127975-2-hch@lst.de>

On Mon, Jun 17, 2024 at 08:04:28AM +0200, Christoph Hellwig wrote:
> blkfront always had a robust negotiation protocol for detecting a write
> cache.  Stop simply disabling cache flushes in the block layer as the
> flags handling is moving to the atomic queue limits API that needs
> user context to freeze the queue for that.  Instead handle the case
> of the feature flags cleared inside of blkfront.  This removes old
> debug code to check for such a mismatch which was previously impossible
> to hit, including the check for passthrough requests that blkfront
> never used to start with.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:40:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:40:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741896.1148567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ7ty-0001o0-Ez; Mon, 17 Jun 2024 08:39:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741896.1148567; Mon, 17 Jun 2024 08:39:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ7ty-0001nt-CT; Mon, 17 Jun 2024 08:39:46 +0000
Received: by outflank-mailman (input) for mailman id 741896;
 Mon, 17 Jun 2024 08:39:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Yk4M=NT=cloud.com=christian.lindig@srs-se1.protection.inumbo.net>)
 id 1sJ7tw-0001nn-HZ
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:39:44 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 225bb6de-2c85-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 10:39:42 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id
 4fb4d7f45d1cf-57ccd1111aeso1738480a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 01:39:39 -0700 (PDT)
Received: from smtpclient.apple ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cbb05b465sm5550010a12.18.2024.06.17.01.39.37
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 17 Jun 2024 01:39:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 225bb6de-2c85-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718613578; x=1719218378; darn=lists.xenproject.org;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Mugjmrlf67djGV/fxLe8luEpsefrPNOLtNy50E6bwxg=;
        b=cKZ1TXy/GZDAICt44p+ehGNhtXB9/laXRcZ04Sq/p4yVLtAf1PsxSiEOVIVeSZ7xbA
         utd+SE6qcTml7b30/v53PZS3/XjR5lufRDl2GDa67TwcZMpSk1k6aeVMNVDoFBPtgsj+
         sa4e7bD+SUfczAAwyv98J0iMzRqpT/1z4/RZI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718613578; x=1719218378;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Mugjmrlf67djGV/fxLe8luEpsefrPNOLtNy50E6bwxg=;
        b=Vnz4QGZ1rYRhNC9xByUCSoXZC5WioAgIIXO8TK6qLPGV8CTJRUShCupEGIUypZYhLU
         iVYjy9nvxjvKP8/GQnvIdk64IzU6DOb8SGrzLfZqeKSBChNbpIuB/Gs8LyKxp62mlSwS
         ATguNgtaXDbsrQN1OpcLF00UuoAR3ZfSz3s8uv519wg+K4o/KomRSgJWzLpFbkI0Q6f3
         Pv+1/S8r9+pL8YZ4vZu/QxJ6DwMAMZSkrpNTGBCHubovq5kM1WW+jzJ+O95XW5UvRzPL
         H7HG827nSgcB3GMutWkqkOtJ4oTb0uE7cSPupE4uEWGuRqmFKwwL82v6uKwb3kWxkZcJ
         LJlw==
X-Forwarded-Encrypted: i=1; AJvYcCUBhQm62Txto31+JS/q+NyapMJLyNqaa/MDnpdklK33aedw9huPB08z5JPyM/tuG3tfgiqAfshROwPRF8f76fGoJX0zSTO+LXK7uMx19+o=
X-Gm-Message-State: AOJu0Yzza4nGcTIZiUWVpXm54gwFCty3E0DBegXOa9XwFREPNr++/oDL
	SosDiqn7n3O5rEH4LiuBsiC5nTIUSXvfl5PRiHi92WHrPDgdJ4TVUe7BDLLT4Rk=
X-Google-Smtp-Source: AGHT+IE49XWB+Mgt1gTXGnih61wtOUA32pZiLFXyBDTp7KuBSH7u24HdwKdObO7fODBLy5fk8mdS8A==
X-Received: by 2002:a50:9b18:0:b0:57a:2e93:fe80 with SMTP id 4fb4d7f45d1cf-57cbd66349bmr5800234a12.18.1718613578571;
        Mon, 17 Jun 2024 01:39:38 -0700 (PDT)
Content-Type: text/plain;
	charset=utf-8
Mime-Version: 1.0 (Mac OS X Mail 16.0 \(3774.500.171.1.1\))
Subject: Re: [PATCH for-4.19? v6 3/9] xen: Refactor altp2m options into a
 structured format
From: Christian Lindig <christian.lindig@cloud.com>
In-Reply-To: <217202e9-608f-4788-b689-8140567e4485@suse.com>
Date: Mon, 17 Jun 2024 09:39:26 +0100
Cc: =?utf-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>,
 Anthony PERARD <anthony@xenproject.org>,
 Juergen Gross <jgross@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Christian Lindig <christian.lindig@citrix.com>,
 David Scott <dave@recoil.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Julien Grall <jgrall@amazon.com>,
 xen-devel@lists.xenproject.org
Content-Transfer-Encoding: quoted-printable
Message-Id: <91079292-EE0F-4A85-A86D-9649CBBF529D@cloud.com>
References: <cover.1718038855.git.w1benny@gmail.com>
 <dcf08c40e37072e18e5e878df8778ce459897bdc.1718038855.git.w1benny@gmail.com>
 <8787608f-f3b0-4fb3-95ee-98050cf95182@suse.com>
 <CAKBKdXiiZdz70nWx7kqp2S5RdbRsku+qtn6z9DBk44LZOgp3Qw@mail.gmail.com>
 <217202e9-608f-4788-b689-8140567e4485@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: Apple Mail (2.3774.500.171.1.1)



> On 11 Jun 2024, at 10:14, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 11.06.2024 10:00, Petr Beneš wrote:
>> On Tue, Jun 11, 2024 at 8:41 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>
>>> On 10.06.2024 19:10, Petr Beneš wrote:
>>>> From: Petr Beneš <w1benny@gmail.com>
>>>>
>>>> Encapsulate the altp2m options within a struct. This change is preparatory
>>>> and sets the groundwork for introducing additional parameter in subsequent
>>>> commit.
>>>>
>>>> Signed-off-by: Petr Beneš <w1benny@gmail.com>
>>>> Acked-by: Julien Grall <jgrall@amazon.com> # arm
>>>> Reviewed-by: Jan Beulich <jbeulich@suse.com> # hypervisor
>>>
>>> Looks like you lost Christian's ack for ...
>>>
>>>> ---
>>>> tools/libs/light/libxl_create.c     | 6 +++---
>>>> tools/ocaml/libs/xc/xenctrl_stubs.c | 4 +++-
>>>
>>> ... the adjustment of this file?
>>
>> In the cover email, Christian only acked:
>>
>>> tools/ocaml/libs/xc/xenctrl.ml       |   2 +
>>> tools/ocaml/libs/xc/xenctrl.mli      |   2 +
>>> tools/ocaml/libs/xc/xenctrl_stubs.c  |  40 +++++++---
>
> Right, but above I was talking about the last of these three files.
>
> Jan

Consider all of this Acked by me. I think this email-based workflow, when we are going through several iterations, is quite a burden.

— C


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:47:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:47:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741906.1148577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ81T-0003M1-BD; Mon, 17 Jun 2024 08:47:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741906.1148577; Mon, 17 Jun 2024 08:47:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ81T-0003Lu-8Z; Mon, 17 Jun 2024 08:47:31 +0000
Received: by outflank-mailman (input) for mailman id 741906;
 Mon, 17 Jun 2024 08:47:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ81S-0003Lg-2L; Mon, 17 Jun 2024 08:47:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ81R-00024q-Rd; Mon, 17 Jun 2024 08:47:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ81R-0002uX-I8; Mon, 17 Jun 2024 08:47:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ81R-0001zu-Hj; Mon, 17 Jun 2024 08:47:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MBEyKBbza6xfVxWumRyipwJv27+L3yNZxStTKhwy66Y=; b=qEp5qJ5Qekj+DOAQlsXlrC2gSZ
	cIKRlSNGBrOXZ534hVBnyKZKP0q/LL/l6bcLtbuHCIC9sHr1Fx8b9/jPE8FNpSCxInGz4tLIaW0rP
	2Z/T/qVBc5I7oJeY3n2R/Gk/RDMQgyvftY4/hEQ57l2axdIN/PuG3pYD+qtWlEsOITuU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186376-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186376: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
X-Osstest-Versions-That:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 08:47:29 +0000

flight 186376 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186376/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                   fail pass in 186367

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 186367 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 186367 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186367
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186367
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186367
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186367
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186367
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186367
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7
baseline version:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7

Last test of basis   186376  2024-06-17 01:51:57 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:57:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741916.1148587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B7-0005G7-7E; Mon, 17 Jun 2024 08:57:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741916.1148587; Mon, 17 Jun 2024 08:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B7-0005G0-4K; Mon, 17 Jun 2024 08:57:29 +0000
Received: by outflank-mailman (input) for mailman id 741916;
 Mon, 17 Jun 2024 08:57:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UpZp=NT=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJ8B5-0005Fj-RB
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:57:27 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9dcac429-2c87-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 10:57:25 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 510F64EE0738;
 Mon, 17 Jun 2024 10:57:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dcac429-2c87-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 0/6][RESEND] address violations of MISRA C Rule 20.7
Date: Mon, 17 Jun 2024 10:57:12 +0200
Message-Id: <cover.1718378539.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

this series addresses several violations of Rule 20.7, as well as a
small cleanup of the ECLAIR integration scripts: the affected parts do
not influence the current behaviour, but were mistakenly part of the
upstream configuration.

Note that even after applying this series the rule has a few leftover
violations, most of them in x86 code in xen/arch/x86/include/asm/msi.h.
I did send a patch [1] to deal with those, limited to addressing the
MISRA violations, but in the end it was dropped in favour of a more
general cleanup of the file that was agreed upon, which is why those
changes are not included here.

[1] https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/

Changes in v2:
- refactor patch 4 to deviate the pattern, instead of fixing the violations
- The series has been resent because I forgot to properly Cc the mailing list

Nicola Vetrini (6):
  automation/eclair: address violations of MISRA C Rule 20.7
  xen/self-tests: address violations of MISRA rule 20.7
  xen/guest_access: address violations of MISRA rule 20.7
  automation/eclair_analysis: address violations of MISRA C Rule 20.7
  x86/irq: address violations of MISRA C Rule 20.7
  automation/eclair_analysis: clean ECLAIR configuration scripts

 automation/eclair_analysis/ECLAIR/analyze.sh     |  3 +--
 automation/eclair_analysis/ECLAIR/deviations.ecl | 14 ++++++++++++--
 docs/misra/deviations.rst                        |  3 ++-
 xen/include/xen/guest_access.h                   |  4 ++--
 xen/include/xen/irq.h                            |  2 +-
 xen/include/xen/self-tests.h                     |  8 ++++----
 6 files changed, 22 insertions(+), 12 deletions(-)

-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:57:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741918.1148602 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B8-0005X4-LO; Mon, 17 Jun 2024 08:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741918.1148602; Mon, 17 Jun 2024 08:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B8-0005WY-HS; Mon, 17 Jun 2024 08:57:30 +0000
Received: by outflank-mailman (input) for mailman id 741918;
 Mon, 17 Jun 2024 08:57:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UpZp=NT=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJ8B6-0005Fp-HW
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:57:28 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e0686b7-2c87-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 10:57:25 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 08EF34EE0757;
 Mon, 17 Jun 2024 10:57:24 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e0686b7-2c87-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [XEN PATCH v2 1/6][RESEND] automation/eclair: address violations of MISRA C Rule 20.7
Date: Mon, 17 Jun 2024 10:57:13 +0200
Message-Id: <af4b0512eb52be99e37c9c670f98967ca15c68ac.1718378539.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718378539.git.nicola.vetrini@bugseng.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses".

The helper macro bitmap_switch has parameters that cannot be
parenthesized to comply with the rule, as doing so would break its
functionality. Moreover, the risk of misuse due to developer confusion
is deemed not substantial enough to warrant a more involved refactor,
so the macro is deviated for this rule.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661d1..c2698e7074aa 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -463,6 +463,14 @@ of this macro do not lead to developer confusion, and can thus be deviated."
 -config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(^count_args_$))))"}
 -doc_end
 
+-doc_begin="The arguments of the bitmap_switch macro can't be parenthesized as
+the rule would require, without breaking the functionality of the macro. This is
+a specialized local helper macro only used within the bitmap.h header, so it is
+less likely to lead to developer confusion and it is deemed better to deviate it."
+-file_tag+={xen_bitmap_h, "^xen/include/xen/bitmap\\.h$"}
+-config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(loc(file(xen_bitmap_h))&&^bitmap_switch$))))"}
+-doc_end
+
 -doc_begin="Uses of variadic macros that have one of their arguments defined as
 a macro and used within the body for both ordinary parameter expansion and as an
 operand to the # or ## operators have a behavior that is well-understood and
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:57:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741919.1148610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B9-0005ef-1f; Mon, 17 Jun 2024 08:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741919.1148610; Mon, 17 Jun 2024 08:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B8-0005df-SM; Mon, 17 Jun 2024 08:57:30 +0000
Received: by outflank-mailman (input) for mailman id 741919;
 Mon, 17 Jun 2024 08:57:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UpZp=NT=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJ8B7-0005Fp-5t
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:57:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9e641d19-2c87-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 10:57:26 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 833654EE075B;
 Mon, 17 Jun 2024 10:57:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e641d19-2c87-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 2/6][RESEND] xen/self-tests: address violations of MISRA rule 20.7
Date: Mon, 17 Jun 2024 10:57:14 +0200
Message-Id: <2679cc27038689373d2d7e8abd62255bcd1b86f3.1718378539.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718378539.git.nicola.vetrini@bugseng.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
In this case the use of parentheses can detect misuses of the COMPILE_CHECK
macro for the fn argument that happen to pass the compile-time check
(see e.g. https://godbolt.org/z/n4zTdz595).

An alternative would be to deviate these macros, but since they are
used to check the correctness of other code, it seemed better to
further ensure that all usages of the macros are safe.
---
 xen/include/xen/self-tests.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/include/xen/self-tests.h b/xen/include/xen/self-tests.h
index 42a4cc4d17fe..58484fe5a8ae 100644
--- a/xen/include/xen/self-tests.h
+++ b/xen/include/xen/self-tests.h
@@ -19,11 +19,11 @@
 #if !defined(CONFIG_CC_IS_CLANG) || CONFIG_CLANG_VERSION >= 80000
 #define COMPILE_CHECK(fn, val, res)                                     \
     do {                                                                \
-        typeof(fn(val)) real = fn(val);                                 \
+        typeof((fn)(val)) real = (fn)(val);                             \
                                                                         \
         if ( !__builtin_constant_p(real) )                              \
             asm ( ".error \"'" STR(fn(val)) "' not compile-time constant\"" ); \
-        else if ( real != res )                                         \
+        else if ( real != (res) )                                       \
             asm ( ".error \"Compile time check '" STR(fn(val) == res) "' failed\"" ); \
     } while ( 0 )
 #else
@@ -37,9 +37,9 @@
  */
 #define RUNTIME_CHECK(fn, val, res)                     \
     do {                                                \
-        typeof(fn(val)) real = fn(HIDE(val));           \
+        typeof((fn)(val)) real = (fn)(HIDE(val));       \
                                                         \
-        if ( real != res )                              \
+        if ( real != (res) )                            \
             panic("%s: %s(%s) expected %u, got %u\n",   \
                   __func__, #fn, #val, real, res);      \
     } while ( 0 )
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:57:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741922.1148635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8BA-00065T-FI; Mon, 17 Jun 2024 08:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741922.1148635; Mon, 17 Jun 2024 08:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8BA-00064K-1s; Mon, 17 Jun 2024 08:57:32 +0000
Received: by outflank-mailman (input) for mailman id 741922;
 Mon, 17 Jun 2024 08:57:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UpZp=NT=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJ8B9-0005Fp-6F
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:57:31 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9fb5934f-2c87-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 10:57:28 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id DE8B74EE0739;
 Mon, 17 Jun 2024 10:57:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9fb5934f-2c87-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [XEN PATCH v2 6/6][RESEND] automation/eclair_analysis: clean ECLAIR configuration scripts
Date: Mon, 17 Jun 2024 10:57:18 +0200
Message-Id: <120e7e4579b931c08d28d0a96848af1df7a07f7d.1718378539.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718378539.git.nicola.vetrini@bugseng.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove an unused option, which was already ignored, from the ECLAIR
integration scripts, and make the help texts consistent with the rest
of the scripts.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/analyze.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/analyze.sh b/automation/eclair_analysis/ECLAIR/analyze.sh
index 0ea5520c93a6..e96456c3c18d 100755
--- a/automation/eclair_analysis/ECLAIR/analyze.sh
+++ b/automation/eclair_analysis/ECLAIR/analyze.sh
@@ -11,7 +11,7 @@ fatal() {
 }
 
 usage() {
-  fatal "Usage: ${script_name} <ARM64|X86_64> <Set0|Set1|Set2|Set3>"
+  fatal "Usage: ${script_name} <ARM64|X86_64> <accepted|monitored>"
 }
 
 if [[ $# -ne 2 ]]; then
@@ -40,7 +40,6 @@ ECLAIR_REPORT_LOG=${ECLAIR_OUTPUT_DIR}/REPORT.log
 if [[ "$1" = "X86_64" ]]; then
   export CROSS_COMPILE=
   export XEN_TARGET_ARCH=x86_64
-  EXTRA_ECLAIR_ENV_OPTIONS=-disable=MC3R1.R20.7
 elif [[ "$1" = "ARM64" ]]; then
   export CROSS_COMPILE=aarch64-linux-gnu-
   export XEN_TARGET_ARCH=arm64
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:57:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741917.1148598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B8-0005UZ-EB; Mon, 17 Jun 2024 08:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741917.1148598; Mon, 17 Jun 2024 08:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B8-0005US-BM; Mon, 17 Jun 2024 08:57:30 +0000
Received: by outflank-mailman (input) for mailman id 741917;
 Mon, 17 Jun 2024 08:57:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UpZp=NT=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJ8B6-0005Fj-Fy
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:57:28 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ebb93e2-2c87-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 10:57:26 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 257F54EE073D;
 Mon, 17 Jun 2024 10:57:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ebb93e2-2c87-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 3/6][RESEND] xen/guest_access: address violations of MISRA rule 20.7
Date: Mon, 17 Jun 2024 10:57:15 +0200
Message-Id: <9168dac5b70f919403370844a6a3041781b82501.1718378539.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718378539.git.nicola.vetrini@bugseng.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/include/xen/guest_access.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index af33ae3ab652..8bd2a124e823 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -49,9 +49,9 @@
     ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)(ptr) })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)(ptr) })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:57:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741921.1148625 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B9-0005yf-T4; Mon, 17 Jun 2024 08:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741921.1148625; Mon, 17 Jun 2024 08:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B9-0005tw-Mv; Mon, 17 Jun 2024 08:57:31 +0000
Received: by outflank-mailman (input) for mailman id 741921;
 Mon, 17 Jun 2024 08:57:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UpZp=NT=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJ8B8-0005Fp-5w
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:57:30 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9f1aff13-2c87-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 10:57:27 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id B22044EE0759;
 Mon, 17 Jun 2024 10:57:26 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f1aff13-2c87-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 4/6][RESEND] automation/eclair_analysis: address violations of MISRA C Rule 20.7
Date: Mon, 17 Jun 2024 10:57:16 +0200
Message-Id: <dfebde9cc657f2669df60b08ca34352288e082ab.1718378539.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718378539.git.nicola.vetrini@bugseng.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses".

The local helpers GRP2 and XADD in the x86 emulator use their first
argument as the constant expression for a case label. This pattern
is deviated project-wide, because it is very unlikely to confuse
developers or to result in the wrong control flow being carried out.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
Changes in v2:
- Introduce a deviation instead of adding parentheses
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++--
 docs/misra/deviations.rst                        | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index c2698e7074aa..fc248641dc78 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -428,13 +428,15 @@ unexpected result when the structure is given as argument to a sizeof() operator
 
 -doc_begin="Code violating Rule 20.7 is safe when macro parameters are used: (1)
 as function arguments; (2) as macro arguments; (3) as array indices; (4) as lhs
-in assignments; (5) as initializers, possibly designated, in initalizer lists."
+in assignments; (5) as initializers, possibly designated, in initalizer lists;
+(6) as the constant expression in a switch clause label."
 -config=MC3R1.R20.7,expansion_context=
 {safe, "context(__call_expr_arg_contexts)"},
 {safe, "left_right(^[(,\\[]$,^[),\\]]$)"},
 {safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(array_subscript_expr), subscript)))"},
 {safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(operator(assign), lhs)))"},
-{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(init_list_expr||designated_init_expr), init)))"}
+{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(init_list_expr||designated_init_expr), init)))"},
+{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(case_stmt), lower||upper)))"}
 -doc_end
 
 -doc_begin="Violations involving the __config_enabled macros cannot be fixed without
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44ac9..be2cc6bf03eb 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -376,7 +376,8 @@ Deviations related to MISRA C:2012 Rules:
        (2) as macro arguments;
        (3) as array indices;
        (4) as lhs in assignments;
-       (5) as initializers, possibly designated, in initalizer lists.
+       (5) as initializers, possibly designated, in initalizer lists;
+       (6) as constant expressions of switch case labels.
      - Tagged as `safe` for ECLAIR.
 
    * - R20.7
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 08:57:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 08:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741920.1148616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B9-0005jd-Bu; Mon, 17 Jun 2024 08:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741920.1148616; Mon, 17 Jun 2024 08:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8B9-0005gd-4P; Mon, 17 Jun 2024 08:57:31 +0000
Received: by outflank-mailman (input) for mailman id 741920;
 Mon, 17 Jun 2024 08:57:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UpZp=NT=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJ8B7-0005Fj-H6
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 08:57:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f6f1f56-2c87-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 10:57:28 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.194])
 by support.bugseng.com (Postfix) with ESMTPSA id 564B14EE075C;
 Mon, 17 Jun 2024 10:57:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f6f1f56-2c87-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 5/6][RESEND] x86/irq: address violations of MISRA C Rule 20.7
Date: Mon, 17 Jun 2024 10:57:17 +0200
Message-Id: <58b102b7c73c342aa4e2f54b0ab5188944d97450.1718378539.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718378539.git.nicola.vetrini@bugseng.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Note that the rule does not apply to f because that parameter
is not used as an expression in the macro, but rather as an identifier.
---
 xen/include/xen/irq.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index adf33547d25f..0401f06c4fca 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -178,7 +178,7 @@ extern struct pirq *pirq_get_info(struct domain *d, int pirq);
 
 #define pirq_field(d, p, f, def) ({ \
     const struct pirq *__pi = pirq_info(d, p); \
-    __pi ? __pi->f : def; \
+    __pi ? __pi->f : (def); \
 })
 #define pirq_to_evtchn(d, pirq) pirq_field(d, pirq, evtchn, 0)
 #define pirq_masked(d, pirq) pirq_field(d, pirq, masked, 0)
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:01:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:01:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741965.1148658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8EZ-0001t9-Q1; Mon, 17 Jun 2024 09:01:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741965.1148658; Mon, 17 Jun 2024 09:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8EZ-0001t2-NT; Mon, 17 Jun 2024 09:01:03 +0000
Received: by outflank-mailman (input) for mailman id 741965;
 Mon, 17 Jun 2024 09:01:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d9pI=NT=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJ8EY-0001sw-5j
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:01:02 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02on2062e.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ca5da22-2c88-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 11:00:59 +0200 (CEST)
Received: from MN2PR05CA0038.namprd05.prod.outlook.com (2603:10b6:208:236::7)
 by DM4PR12MB6448.namprd12.prod.outlook.com (2603:10b6:8:8a::7) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.7677.30; Mon, 17 Jun 2024 09:00:54 +0000
Received: from BN2PEPF00004FBD.namprd04.prod.outlook.com
 (2603:10b6:208:236:cafe::3b) by MN2PR05CA0038.outlook.office365.com
 (2603:10b6:208:236::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31 via Frontend
 Transport; Mon, 17 Jun 2024 09:00:54 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN2PEPF00004FBD.mail.protection.outlook.com (10.167.243.183) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Mon, 17 Jun 2024 09:00:53 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 17 Jun
 2024 04:00:49 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ca5da22-2c88-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ds5qX/pDnqm+WVNxQWzROzIioytddIOISiNUQiblHAER9h9hG2VU4VQlsDHVFbXEg7we+s3G1i14XuYd/M/iavhKemNqLucf7a4RD4fKUSkw1Ajlq+Gwym6nfoeHdPE54KeQI4fgA552mCKvrzYcuth9zjnFWuotZKdi5xDWvN/vabiqpD+Eej9yThVFMrYl56FHtSIhZ9AHWFbhfth9EqsxI57AoWZpybQgO8HjNm9qDxShuUKXV0r2fNxU1PnqxcfR29TR8H+rAmIQ4esSlz+gBDDAKM02I4wnOym0Cqn5jJAYb7/tmhPamIboOnpE69YqImv6Q9kH1lmLrI2v0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aRnZWkW3LWREEN24Qa5QueIVNYyNzOrYvc9aw6GF67U=;
 b=ZmjurPeqwImLPjLIyg6+kYgxK/eB8ucLs03q+B4QegLGPdz5LhKXcBjQRvM+8ATDJayVEAopzUIW18AyN35bJ53iot6o7xaX1KAlEb8FlONGJfylQ38SQKOnHzaqTrXTs6f9AX+WGEb/VrrRwkBpQYOJu4jFdeErCXhiAaKSy9A67E+roXVIQXHYdEF4fBUCvavpDQ78Srbh3HzOrXa57DStwQkG5a762q+zx/strdjXPv1D+QAMQ5kfBHrSngHVGEcY2kartnLhl0mhk/P9/u3J9jqzBXXEK3YDDqOYOQwbf2keunply6BOa1n1N7LUA3DEXZz93ZL79qzZ65Z0iw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aRnZWkW3LWREEN24Qa5QueIVNYyNzOrYvc9aw6GF67U=;
 b=ojEDZwpKtYRXW+2u1i5w/sJbo9sdnoSkqxufQeQc7VXEyI32NPTvjTZaezho9bLKUWzz9+dF/ep9RA+2S2kR6eco+xWtmnkwGVGfkWKi1E4RqEUHV8sHx2cyrviOF0pYvqn1oO+ZMaVDpYjFOd2/dYac+bK3lrI3Nus3hkCGVek=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v10 0/5] Support device passthrough when dom0 is PVH on Xen
Date: Mon, 17 Jun 2024 17:00:30 +0800
Message-ID: <20240617090035.839640-1-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN2PEPF00004FBD:EE_|DM4PR12MB6448:EE_
X-MS-Office365-Filtering-Correlation-Id: 42fc5b07-49ba-4443-8e54-08dc8eabfe70
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|36860700010|376011|7416011|82310400023|1800799021;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?dEXP2ace+cBW+kT49p2olWkcNW09mGD9eTFJ4HifVmGj7k3aX4D4B8ryvGf2?=
 =?us-ascii?Q?vhU/F73Z19N4ZlOg/HUSsQXDBgXK2D5JPIP/w9iblQCHCoy09SbvZfPFAy/Z?=
 =?us-ascii?Q?+JXHYp3O62Li9jBiEIsK39pX2gJXEAbPLyMa347fs2RKJOgWYOjfDK2pzgfM?=
 =?us-ascii?Q?GFtMM5vjIw7rpORLRfQiIF4kg4yf2o6QRshfCOx7GdiMqCkZsqSRkGq0ELaQ?=
 =?us-ascii?Q?3VODhvfoacVBcIpbx4OcXRqOZDRt8ZZBN43aJFzj+1wS3+FAnT5R3/l14Uc4?=
 =?us-ascii?Q?Wbnn6QbwR4ogFsmLBHhEo12QK2z56VV7BroXtolfjW7YEA6Yg24pdzeyFJla?=
 =?us-ascii?Q?FMspuMHKj6bGNKdiDJLq5Fqy6jeTelV9ZCIrSTgjX315rBrCFRzOrl1DxMDx?=
 =?us-ascii?Q?5Ywt0lMj+I9eZa4iDarH2BW42RpmLNlsmXm3XM6fcxw8uekLrDCFStixwHXz?=
 =?us-ascii?Q?IftiYN0h62e1/bjGmbKqPQwshiVFFZ2+9YM8OgF+Y+G9OBm3yg9MKxYigz+u?=
 =?us-ascii?Q?/xiyHWbmSZYSRiDQ4LIzvkEVLD9UMijXS8/d7ZdkUwQB91RFhbpg7vOOTCh2?=
 =?us-ascii?Q?ylgnwXqdP8wcBL/OTZ0FQkXn/OD7Dnc0Pwrs/SLPTXyff8NdXrmpHJ589psV?=
 =?us-ascii?Q?jqqunfkNZpRt1rfeoXa0AK0DZbqFRih0Dd7wsgfQuPzjO5qKalo5i6Tb7TFI?=
 =?us-ascii?Q?mOZ6K1p8txY2vaF2CqSZLp2KxDrd9wdqKzrBwlrTmLTUF3NAy7eeJ2hdEKG2?=
 =?us-ascii?Q?vUZCt1pOjK5WWB/dt0bMYypQoQUcIHtwTHzDDYgKlx94Owsb1ZhjEreUQB32?=
 =?us-ascii?Q?VUF7PxzsP/bmSdAf9jV4jQPtw7kToIelzmnRM64v4SVga3xzJjTrW9994vVc?=
 =?us-ascii?Q?mSy2ckpl+Tkwo2d24mW5Pss1XNWlE/KjZr9b6c/nCBntIfzWmYCc9Oln/afB?=
 =?us-ascii?Q?C69sPwYFmznAZNbfNAPHfQ6/BtW2+ERHPFgsjhNXtLXa+8RKgQVJsJQPKuyu?=
 =?us-ascii?Q?6SQN38Bueogn7uSmRD2qW6HoCbCbpzBEwZfrkwvUb5BBGzJQfn7VsXF5aFXG?=
 =?us-ascii?Q?ZurhMaUupMG5erDAOXZ5/+wBLToeKXyL57oPVGJ5RIGYnlWcnC69q1DRSemf?=
 =?us-ascii?Q?pTzBSad3jFMesCb3KCypj0f0ICab3kcvA+YCDJ5U0RT7OAD8zeyVyZI40Y57?=
 =?us-ascii?Q?tQIIa19lvpI59U9B2lv/C+6vuk8pTA4A9dPA6SSptAUj+IMJWedifB/Zsep0?=
 =?us-ascii?Q?jaAqHl7QxpWvn16gZo/c6+rwlj3hcI4V/MgKF5JLIiwORp6lO9Gz05V0yEeY?=
 =?us-ascii?Q?oHzA2O57VMXab46unicDfjLpk+i/AIi+JuIjQXSESwSSAyTUEeqI+KX9hl6n?=
 =?us-ascii?Q?Yomjp6c=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(36860700010)(376011)(7416011)(82310400023)(1800799021);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2024 09:00:53.8327
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 42fc5b07-49ba-4443-8e54-08dc8eabfe70
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN2PEPF00004FBD.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6448

Hi All,
This is the v10 series to support device passthrough when dom0 is PVH.
v9->v10 changes:
* patch#2: Indent the comments above PHYSDEVOP_map_pirq according to the code style.
* patch#3: Modified the description in the commit message, changing "it calls" to "it will need to call",
           indicating that there will be new code on the kernel side that will call PHYSDEVOP_setup_gsi.
           Also added an explanation of why the interrupt of a passthrough device does not work if the gsi
           is not registered.
* patch#4: Added a define for CONFIG_X86 in tools/libs/light/Makefile to isolate x86 code in libxl_pci.c.
* patch#5: Modified the commit message to further describe the purpose of adding XEN_DOMCTL_gsi_permission.
           Deleted pci_device_set_gsi and called XEN_DOMCTL_gsi_permission directly in pci_add_dm_done.
           Added a check that the padding field of XEN_DOMCTL_gsi_permission is all zeros, and used currd
           instead of current->domain.
           In the function gsi_2_irq, used apic_pin_2_gsi_irq instead of the original new code, and added
           error handling for irq 0.
           Deleted the extra blank lines above and below the struct xen_domctl_gsi_permission definition.
All patches carry the following signature chain:
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I am the author.
Signed-off-by: Huang Rui <ray.huang@amd.com> means Rui first sent them upstream.
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I am continuing the upstream work.


Best regards,
Jiqian Chen



v8->v9 changes:
* patch#1: Move pcidevs_unlock below write_lock, and remove "ASSERT(pcidevs_locked());"
           from vpci_reset_device_state;
           Add pci_device_state_reset_type to distinguish the reset types.
* patch#2: Add a comment above PHYSDEVOP_map_pirq to describe why this hypercall is needed.
           Change "!is_pv_domain(d)" to "is_hvm_domain(d)", and "map.domid == DOMID_SELF" to
           "d == current->domain".
* patch#3: Remove the check of PHYSDEVOP_setup_gsi, since the same check already exists below. Although their
           return values are different, this difference is acceptable for the sake of code consistency:
           if ( !is_hardware_domain(currd) )
               return -ENOSYS;
           break;
* patch#5: Change the commit message to describe more why we need this new hypercall.
           Add comment above "if ( is_pv_domain(current->domain) || has_pirq(current->domain) )" to explain
           why we need this check.
           Add gsi_2_irq to transform gsi to irq, instead of considering gsi == irq.
           Add explicit padding to struct xen_domctl_gsi_permission.


v7->v8 changes:
* patch#2: Add the domid check (domid == DOMID_SELF) to prevent self-mapping when the guest doesn't use pirq.
           That check was missed in the previous version.
* patch#4: Due to changes in how the gsi is obtained on the kernel side, add a new function that gets the gsi
           from the sbdf of the pci device.
* patch#5: Remove the parameter "is_gsi"; when a gsi exists, use a new function pci_device_set_gsi in
           pci_add_dm_done to do map_pirq and grant permission. This makes the code logic more intuitive.


v6->v7 changes:
* patch#4: Due to changes in how the gsi is obtained on the kernel side, add a new function to get the gsi
           from the irq, instead of from the gsi sysfs.
* patch#5: Fix the issue with variable usage, rc->r.


v5->v6 changes:
* patch#1: Add Reviewed-by from Stefano and Stewart. Rebase the code and change the old functions
           vpci_remove_device and vpci_add_handlers to vpci_deassign_device and vpci_assign_device.
* patch#2: Add Reviewed-by from Stefano.
* patch#3: Remove unnecessary "ASSERT(!has_pirq(currd));"
* patch#4: Fix some coding style issues under the tools directory.
* patch#5: Modified some variable names and code logic to make the code easier to understand: use gsi by
           default, and stay compatible with older kernel versions that continue to use irq.


v4->v5 changes:
* patch#1: Add a pci_lock wrapper around vpci_reset_device_state.
* patch#2: Move the check of self map_pirq to physdev.c, change it to check whether the caller has
           the PIRQ flag, and just break for PHYSDEVOP_(un)map_pirq in hvm_physdev_op.
* patch#3: Return -EOPNOTSUPP instead, and use ASSERT(!has_pirq(currd));
* patch#4: Was patch#5 in v4, because patch#5 in v5 depends on it. Also add errno handling and the
           Reviewed-by from Stefano.
* patch#5: Was patch#4 in v4. New implementation adding the hypercall XEN_DOMCTL_gsi_permission to
           grant gsi.


v3->v4 changes:
* patch#1: Change the comment of PHYSDEVOP_pci_device_state_reset; move printings behind pcidevs_unlock.
* patch#2: Add a check to prevent PVH self-mapping.
* patch#3: New patch: the addition of PHYSDEVOP_setup_gsi for PVH is split out as a separate patch.
* patch#4: New patch to solve the map_pirq problem of PVH dom0: use gsi to grant irq permission in
           XEN_DOMCTL_irq_permission.
* patch#5: To be compatible with previous kernel versions, when there is no gsi sysfs, still use irq.
v4 link:
https://lore.kernel.org/xen-devel/20240105070920.350113-1-Jiqian.Chen@amd.com/T/#t

v2->v3 changes:
* patch#1: Move the content out of pci_reset_device_state and delete pci_reset_device_state; add
           the xsm_resource_setup_pci check for PHYSDEVOP_pci_device_state_reset; add a
           description for PHYSDEVOP_pci_device_state_reset.
* patch#2: Due to changes in the second kernel-side patch (which now does setup_gsi and map_pirq
           when assigning a device for passthrough), add PHYSDEVOP_setup_gsi for PVH dom0; we also
           need to support self-mapping.
* patch#3: Due to changes in the second kernel-side patch (which adds a new sysfs entry for the
           gsi instead of a new syscall), read the gsi number from the gsi sysfs.
v3 link:
https://lore.kernel.org/xen-devel/20231210164009.1551147-1-Jiqian.Chen@amd.com/T/#t

v2 link:
https://lore.kernel.org/xen-devel/20231124104136.3263722-1-Jiqian.Chen@amd.com/T/#t
Below is the description of the v2 cover letter:
This series of patches is the v2 of the implementation of passthrough when dom0 is PVH on Xen.
We sent v1 upstream before, but v1 had many problems and we received lots of suggestions.
I will introduce all the issues that these patches try to fix and the differences between v1 and v2.

Issues we encountered:
1. pci_stub failed to write the bar of a passthrough device.
Problem: when we run "sudo xl pci-assignable-add <sbdf>" to assign a device, pci_stub calls
pcistub_init_device() -> pci_restore_state() -> pci_restore_config_space() ->
pci_restore_config_space_range() -> pci_restore_config_dword() -> pci_write_config_dword(); the
pci config write traps into bar_write() in Xen, but bar->enabled was set earlier, so the write is
not allowed; then when QEMU configures the passthrough device in xen_pt_realize(), it gets invalid
bar values.

Reason: we don't tell vPCI that the device has been reset, so the current cached state in
pdev->vpci is all out of date and differs from the real device state.

Solution: to solve this problem, the first kernel patch (xen/pci: Add xen_reset_device_state
function) and the first Xen patch (xen/vpci: Clear all vpci status of device) add a new hypercall
to reset the state stored in vPCI when the state of the real device has changed.
Thanks to Roger for the suggestion for this v2; it is different from
v1 (https://lore.kernel.org/xen-devel/20230312075455.450187-3-ray.huang@amd.com/), which simply
allowed domU to write the pci bar and did not comply with the design principles of vPCI.

2. Failed to do PHYSDEVOP_map_pirq when dom0 is PVH.
Problem: an HVM domU does PHYSDEVOP_map_pirq for a passthrough device by using the gsi. See
xen_pt_realize->xc_physdev_map_pirq and pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq
then calls into Xen, but in hvm_physdev_op(), PHYSDEVOP_map_pirq is not allowed.

Reason: in hvm_physdev_op(), the variable "currd" is PVH dom0, and PVH has no X86_EMU_USE_PIRQ
flag, so it fails the has_pirq check.

Solution: I think we need to allow PHYSDEVOP_map_pirq when "currd" is dom0 (at present, PVH dom0).
The second Xen patch (x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0) allows PVH dom0 to do
PHYSDEVOP_map_pirq. This v2 patch is better than v1, which simply removed the has_pirq check
(xen https://lore.kernel.org/xen-devel/20230312075455.450187-4-ray.huang@amd.com/).

3. The gsi of a passthrough device is not unmasked.
 3.1 Failed to check the permission of the pirq.
 3.2 The gsi of the passthrough device was not registered in PVH dom0.

Problem:
3.1 The callback function pci_add_dm_done() is called when qemu configures a passthrough device
for domU. This function calls xc_domain_irq_permission() -> pirq_access_permitted() to check
whether the gsi has a corresponding mapping in dom0. It didn't, so the check failed. See
XEN_DOMCTL_irq_permission->pirq_access_permitted: "current" is PVH dom0 and the returned irq is 0.
3.2 It is possible for a gsi (iow: a vIO-APIC pin) to never get registered in PVH dom0, because
PVH's devices use MSI(-X) interrupts. However, the IO-APIC pin must be configured for it to be
mappable into a domU.

Reason: after examining the code, I found that "map_pirq" and "register_gsi" are done in
vioapic_write_redirent->vioapic_hwdom_map_gsi when the gsi (aka the ioapic pin) is unmasked in
PVH dom0. So both problems boil down to the gsi of a passthrough device never being unmasked.

Solution: to solve these problems, the second kernel patch (xen/pvh: Unmask irq for passthrough
device in PVH dom0) calls unmask_irq() when we assign a device for passthrough, so that
passthrough devices get a gsi mapping in PVH dom0 and the gsi can be registered. This v2 patch is
different from
v1 (kernel https://lore.kernel.org/xen-devel/20230312120157.452859-5-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-5-ray.huang@amd.com/):
v1 performed "map_pirq" and "register_gsi" on all pci devices in PVH dom0, which is unnecessary
and may cause multiple registrations.

4. Failed to map a pirq for the gsi.
Problem: qemu calls xc_physdev_map_pirq() to map a passthrough device's gsi to a pirq in
xen_pt_realize(), but it fails.

Reason: according to the implementation of xc_physdev_map_pirq(), it needs the gsi rather than the
irq, but qemu passes the irq and treats it as the gsi; the irq is read from
/sys/bus/pci/devices/xxxx:xx:xx.x/irq in xen_host_pci_device_get(). But the gsi number is not
necessarily equal to the irq. On PVH dom0, when an irq is allocated for a gsi in
acpi_register_gsi_ioapic(), the allocation is dynamic and first-come, first-served. If you debug
the kernel code (see __irq_alloc_descs), you will find that irq numbers are allocated in
increasing order, but the gsis are not requested in order: gsi 38 may come before gsi 28, so
gsi 38 gets a smaller irq number than gsi 28, and then gsi != irq.

Solution: we can record the relation between gsi and irq, and then do a translation when userspace
(qemu) wants the gsi. The third kernel patch (xen/privcmd: Add new syscall to get gsi from irq)
records all the relations in acpi_register_gsi_xen_pvh() when dom0 initializes pci devices, and
provides a syscall for userspace to get the gsi from the irq. The third Xen patch (tools: Add new
function to get gsi from irq) adds a new function xc_physdev_gsi_from_irq() to call the new
syscall added on the kernel side. Userspace can then use that function to get the gsi, and
xc_physdev_map_pirq() succeeds. This v2 patch is the same as
v1 (kernel https://lore.kernel.org/xen-devel/20230312120157.452859-6-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-6-ray.huang@amd.com/).

As for the v2 qemu patch, it only changes an included header file; the rest is similar to
v1 (qemu https://lore.kernel.org/xen-devel/20230312092244.451465-19-ray.huang@amd.com/): just call
xc_physdev_gsi_from_irq() to get the gsi from the irq.

Jiqian Chen (5):
  xen/vpci: Clear all vpci status of device
  x86/pvh: Allow (un)map_pirq when dom0 is PVH
  x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
  tools: Add new function to get gsi from dev
  domctl: Add XEN_DOMCTL_gsi_permission to grant gsi

 tools/include/xen-sys/Linux/privcmd.h |   7 ++
 tools/include/xencall.h               |   2 +
 tools/include/xenctrl.h               |   7 ++
 tools/libs/call/core.c                |   5 ++
 tools/libs/call/libxencall.map        |   2 +
 tools/libs/call/linux.c               |  15 ++++
 tools/libs/call/private.h             |   9 +++
 tools/libs/ctrl/xc_domain.c           |  15 ++++
 tools/libs/ctrl/xc_physdev.c          |   4 +
 tools/libs/light/Makefile             |   2 +-
 tools/libs/light/libxl_pci.c          | 105 ++++++++++++++++++++++++--
 xen/arch/x86/domctl.c                 |  43 +++++++++++
 xen/arch/x86/hvm/hypercall.c          |   8 ++
 xen/arch/x86/include/asm/io_apic.h    |   2 +
 xen/arch/x86/io_apic.c                |  17 +++++
 xen/arch/x86/mpparse.c                |   3 +-
 xen/arch/x86/physdev.c                |  14 ++++
 xen/drivers/pci/physdev.c             |  43 +++++++++++
 xen/drivers/vpci/vpci.c               |   9 +++
 xen/include/public/domctl.h           |   8 ++
 xen/include/public/physdev.h          |   7 ++
 xen/include/xen/pci.h                 |  16 ++++
 xen/include/xen/vpci.h                |   6 ++
 xen/xsm/flask/hooks.c                 |   1 +
 24 files changed, 341 insertions(+), 9 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:01:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:01:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741966.1148668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Ee-00029J-6I; Mon, 17 Jun 2024 09:01:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741966.1148668; Mon, 17 Jun 2024 09:01:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Ee-00029B-2i; Mon, 17 Jun 2024 09:01:08 +0000
Received: by outflank-mailman (input) for mailman id 741966;
 Mon, 17 Jun 2024 09:01:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d9pI=NT=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJ8Ec-0001sw-GB
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:01:06 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20613.outbound.protection.outlook.com
 [2a01:111:f403:2009::613])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ffbf5d6-2c88-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 11:01:04 +0200 (CEST)
Received: from MN2PR01CA0008.prod.exchangelabs.com (2603:10b6:208:10c::21) by
 PH0PR12MB7094.namprd12.prod.outlook.com (2603:10b6:510:21d::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Mon, 17 Jun
 2024 09:00:59 +0000
Received: from BN2PEPF00004FC1.namprd04.prod.outlook.com
 (2603:10b6:208:10c:cafe::a6) by MN2PR01CA0008.outlook.office365.com
 (2603:10b6:208:10c::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.26 via Frontend
 Transport; Mon, 17 Jun 2024 09:00:59 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN2PEPF00004FC1.mail.protection.outlook.com (10.167.243.187) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Mon, 17 Jun 2024 09:00:58 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 17 Jun
 2024 04:00:53 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ffbf5d6-2c88-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NRjLJhaBRyDq+5cnyIt3BhRt5KVluu2NxTVNPUGwJlKw1InukZqCBywBXC18BwEFkrXFGbCC3g6cNQnWxFrehSETN3mT1Irb/yd2x3sqXuZWi+9fz7AGOq2gei8/tsIjSPExMas+ys7AsqBMw3DLkSq1pNaAUnur7/nK8K0tO5X/zJJR8U0P04goAgKeu6P8efTmWjLV9luOhudDkuI7rluIDwqwCFb92n904MOk9xT9zFbWRaIn1pGcwjwjT8IlXt2sfyBT0AwzPktnjGTI7URhe2LdXqeY7roaotSorspQKDsXP99YZ4mS5FhgeVMXXZSmysD6R0UunMtT3FkUVA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AjVitUbIkh6X7zuVbHNzZlFo2M2H+RQ3FnoFSkKO2gs=;
 b=A1Ntup4jSkAYzLUqnGCMumzFl18Au7QEguno3hubW9WvG08qwk9kxbp4MjyunommKBCtNEaWS9+m6f/1ECbkvFpPtFFPqic6juGzrMbdfSQCQsCtf7WeDf8K+th4u/lNsY0vOFxJhu4u6g6O6MIWr7IvByZk9Gl++RIdQ2KMYokjaZ2AtQELoA3Sd5Xr1OKXt3XibJnVHIJWSctB9h0r0jsz/q9VqyEJTMbkHJCsO4IQ4d4kEmruu/eRcK0IkQTUIiinnRfOb7EvkdCuX7jUyRKuR9mlAB8N67bKKOaTydjXVxDC5OHc1Ecqjb3wKubVfW9rzx+q2JSRP46Kn4vEhw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AjVitUbIkh6X7zuVbHNzZlFo2M2H+RQ3FnoFSkKO2gs=;
 b=BcUStHgYHssKuWcHCfU9KTKx6PCYFsjKSoQtlXIbRSY7wzWpwmN67sBtBf6e9rUE9GMDULa9NSwF2an9Mne87OeJMHlPPDR2yOPNz86s2+RnqIEBIg4/NK+QtUZmLTyoWNqBGylyw8k0XdZVYqIRbCHt/FSYHB26byivzDzd19Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>, Stewart Hildebrand <stewart.hildebrand@amd.com>
Subject: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
Date: Mon, 17 Jun 2024 17:00:31 +0800
Message-ID: <20240617090035.839640-2-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617090035.839640-1-Jiqian.Chen@amd.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN2PEPF00004FC1:EE_|PH0PR12MB7094:EE_
X-MS-Office365-Filtering-Correlation-Id: ff1566a5-fe34-4277-0ece-08dc8eac0147
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|36860700010|376011|7416011|82310400023|1800799021;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?6RPTYonk2L2i5ysRafTrcPq/HKcR17Hqdf30Osu6YVx/iTiMrl+P/PvTnsxp?=
 =?us-ascii?Q?hz1dAKZMjIM5KVL05ZfQ9qOW42+Ctk++V/nFKj+g3Sh/UU51sr467VcPSE3n?=
 =?us-ascii?Q?redB53M8i/azOd4bpYSwW+R9z8yaM7Q2D0dXuI8+FQmjtJJ52SV6B7ZnDkvo?=
 =?us-ascii?Q?8IUCZEDoW2SL2+4MoDF3JH03kaDbW70rFzhClGaOGryaW5DYjoALNseXkv2j?=
 =?us-ascii?Q?Hbo4yp5/ymn/ZiSAOj2sFbmo2EBSD3JkIaR6dsjB+IxRpiMLl1umufhjso+k?=
 =?us-ascii?Q?L8KUg72MTL5a5hTBo/rsNTHrNXLl7kAY4jvff7Ef/k8Z6Yfo8+S2GHYqFdVF?=
 =?us-ascii?Q?7AtNZYLrPWLK8L0jU2qY2SFhPCNvobYSEtyXyphZMOG2CEmymXbMeeky7wuY?=
 =?us-ascii?Q?0wjbHq6NYTgUNfp2j2qhtUMTiE5Sw5GUueT7HHiK/uPDpPlrAg37eZu6gN8a?=
 =?us-ascii?Q?YmTM6VOcP8B4CerjzehvK4/effPMfR6mSFMBkXQ5TuGVloR08EoZdh26yNPv?=
 =?us-ascii?Q?Y1ONvK5gZPn6iqQjd6HEC7CWIeNdUWRijchu28Sb/Hr+3KBSzG+Zwp1PwK0M?=
 =?us-ascii?Q?+X+YtPwnA6fx8/7eGsf4k4QHYmImEFpMU3Dq/ax24i6AFu7kKKx7L+LY6GF9?=
 =?us-ascii?Q?xBeLGNWtG1Pb77/HWNyaZYXy8SjGxTumul6qf24CuoS23vX+P+ZR6OzTCabO?=
 =?us-ascii?Q?uvyzQAwlCecMs0c4VgiCJHJK7P1d0Rp4FSzihxTQwHyxjtgsNb0eyWwSc+K1?=
 =?us-ascii?Q?qrr3FI+xGh4fxe5snwi57fWXNRw7GQ52E/tUnqpqh28zTRT9wnvdGR9rS+HU?=
 =?us-ascii?Q?r1pTnUaNfzFMPSYEE1eqepQB9TpATjdXCmMiFcAWJM55C8+Z0Ui7mBB5wgBJ?=
 =?us-ascii?Q?6DZV7PIbQOZUVhNlgv6i4mvk98Rcz2bDArcVCv3mXA1yQvETPkiIcKpccBdj?=
 =?us-ascii?Q?KQNTejNJlFBQ8DInENo6K8asotXrfNi1/OargzsYSA9v1RZrwv9KjuikBIFu?=
 =?us-ascii?Q?jWPJVWvseVD/qo2gj769/CZ9w80dzPJjAegfVQwbP6kS5Dy5lBxiczu43su8?=
 =?us-ascii?Q?/HyC1jicS7PjxOce/9NluqtuZMlKqrrfEthfmPx2+m4tKllC/XxMgt3GiFdb?=
 =?us-ascii?Q?H5NL/31QRYUucxYovScjUHEiMou9j1McSOD2XoyP7BLzMYpysoQrAS3+fxPo?=
 =?us-ascii?Q?wWcjsboTrEkdiHg4YFGyn7vxC/SUhNsVdoEOhNEY+VYcXTv3RyEhBK2hAmbT?=
 =?us-ascii?Q?VOOJJjalO5k6hm6rMLjv/lNnfdgMO4VKt1B+Zq4vwNfztsOaMoL9iKIjAY8D?=
 =?us-ascii?Q?Kje67HxwbfgDgUteR/Ry2oIBJVGW0m41Yrb6DaRzvhWkZlOT4oJulS/7XT3L?=
 =?us-ascii?Q?PGo0Cmna1MNAnPSVaNfYp1LOn4XotpSjNA41c7s61QsksHMzrA=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(36860700010)(376011)(7416011)(82310400023)(1800799021);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2024 09:00:58.6095
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ff1566a5-fe34-4277-0ece-08dc8eac0147
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN2PEPF00004FC1.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR12MB7094

When a device has been reset on the dom0 side, the vPCI on the
Xen side doesn't get any notification, so the cached state in
vPCI is all out of date compared with the real device state.
To solve that problem, add a new hypercall to clear all vPCI
device state. When a device is reset on the dom0 side, dom0
can call this hypercall to notify vPCI.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/hypercall.c |  1 +
 xen/drivers/pci/physdev.c    | 43 ++++++++++++++++++++++++++++++++++++
 xen/drivers/vpci/vpci.c      |  9 ++++++++
 xen/include/public/physdev.h |  7 ++++++
 xen/include/xen/pci.h        | 16 ++++++++++++++
 xen/include/xen/vpci.h       |  6 +++++
 6 files changed, 82 insertions(+)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 7fb3136f0c7c..0fab670a4871 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -83,6 +83,7 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_pci_mmcfg_reserved:
     case PHYSDEVOP_pci_device_add:
     case PHYSDEVOP_pci_device_remove:
+    case PHYSDEVOP_pci_device_state_reset:
     case PHYSDEVOP_dbgp_op:
         if ( !is_hardware_domain(currd) )
             return -ENOSYS;
diff --git a/xen/drivers/pci/physdev.c b/xen/drivers/pci/physdev.c
index 42db3e6d133c..1cce508a73b1 100644
--- a/xen/drivers/pci/physdev.c
+++ b/xen/drivers/pci/physdev.c
@@ -2,11 +2,17 @@
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
+#include <xen/vpci.h>
 
 #ifndef COMPAT
 typedef long ret_t;
 #endif
 
+static const struct pci_device_state_reset_method
+                    pci_device_state_reset_methods[] = {
+    [ DEVICE_RESET_FLR ].reset_fn = vpci_reset_device_state,
+};
+
 ret_t pci_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     ret_t ret;
@@ -67,6 +73,43 @@ ret_t pci_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case PHYSDEVOP_pci_device_state_reset: {
+        struct pci_device_state_reset dev_reset;
+        struct physdev_pci_device *dev;
+        struct pci_dev *pdev;
+        pci_sbdf_t sbdf;
+
+        if ( !is_pci_passthrough_enabled() )
+            return -EOPNOTSUPP;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(&dev_reset, arg, 1) != 0 )
+            break;
+        dev = &dev_reset.dev;
+        sbdf = PCI_SBDF(dev->seg, dev->bus, dev->devfn);
+
+        ret = xsm_resource_setup_pci(XSM_PRIV, sbdf.sbdf);
+        if ( ret )
+            break;
+
+        pcidevs_lock();
+        pdev = pci_get_pdev(NULL, sbdf);
+        if ( !pdev )
+        {
+            pcidevs_unlock();
+            ret = -ENODEV;
+            break;
+        }
+
+        write_lock(&pdev->domain->pci_lock);
+        pcidevs_unlock();
+        ret = pci_device_state_reset_methods[dev_reset.reset_type].reset_fn(pdev);
+        write_unlock(&pdev->domain->pci_lock);
+        if ( ret )
+            printk(XENLOG_ERR "%pp: failed to reset vPCI device state\n", &sbdf);
+        break;
+    }
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 1e6aa5d799b9..ff67c2550ccb 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -172,6 +172,15 @@ int vpci_assign_device(struct pci_dev *pdev)
 
     return rc;
 }
+
+int vpci_reset_device_state(struct pci_dev *pdev)
+{
+    ASSERT(rw_is_write_locked(&pdev->domain->pci_lock));
+
+    vpci_deassign_device(pdev);
+    return vpci_assign_device(pdev);
+}
+
 #endif /* __XEN__ */
 
 static int vpci_register_cmp(const struct vpci_register *r1,
diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index f0c0d4727c0b..a71da5892e5f 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -296,6 +296,13 @@ DEFINE_XEN_GUEST_HANDLE(physdev_pci_device_add_t);
  */
 #define PHYSDEVOP_prepare_msix          30
 #define PHYSDEVOP_release_msix          31
+/*
+ * Notify the hypervisor that a PCI device has been reset, so that any
+ * internally cached state is regenerated.  Should be called after any
+ * device reset performed by the hardware domain.
+ */
+#define PHYSDEVOP_pci_device_state_reset 32
+
 struct physdev_pci_device {
     /* IN */
     uint16_t seg;
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 63e49f0117e9..376981f9da98 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -156,6 +156,22 @@ struct pci_dev {
     struct vpci *vpci;
 };
 
+struct pci_device_state_reset_method {
+    int (*reset_fn)(struct pci_dev *pdev);
+};
+
+enum pci_device_state_reset_type {
+    DEVICE_RESET_FLR,
+    DEVICE_RESET_COLD,
+    DEVICE_RESET_WARM,
+    DEVICE_RESET_HOT,
+};
+
+struct pci_device_state_reset {
+    struct physdev_pci_device dev;
+    enum pci_device_state_reset_type reset_type;
+};
+
 #define for_each_pdev(domain, pdev) \
     list_for_each_entry(pdev, &(domain)->pdev_list, domain_list)
 
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index da8d0f41e6f4..b230fd374de5 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -38,6 +38,7 @@ int __must_check vpci_assign_device(struct pci_dev *pdev);
 
 /* Remove all handlers and free vpci related structures. */
 void vpci_deassign_device(struct pci_dev *pdev);
+int __must_check vpci_reset_device_state(struct pci_dev *pdev);
 
 /* Add/remove a register handler. */
 int __must_check vpci_add_register_mask(struct vpci *vpci,
@@ -282,6 +283,11 @@ static inline int vpci_assign_device(struct pci_dev *pdev)
 
 static inline void vpci_deassign_device(struct pci_dev *pdev) { }
 
+static inline int __must_check vpci_reset_device_state(struct pci_dev *pdev)
+{
+    return 0;
+}
+
 static inline void vpci_dump_msi(void) { }
 
 static inline uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg,
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:01:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:01:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741968.1148678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Ei-0002RV-Cb; Mon, 17 Jun 2024 09:01:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741968.1148678; Mon, 17 Jun 2024 09:01:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Ei-0002RK-9F; Mon, 17 Jun 2024 09:01:12 +0000
Received: by outflank-mailman (input) for mailman id 741968;
 Mon, 17 Jun 2024 09:01:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d9pI=NT=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJ8Eg-0002Pn-Id
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:01:10 +0000
Received: from NAM11-CO1-obe.outbound.protection.outlook.com
 (mail-co1nam11on20600.outbound.protection.outlook.com
 [2a01:111:f403:2416::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 234e7e86-2c88-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 11:01:09 +0200 (CEST)
Received: from BN8PR15CA0029.namprd15.prod.outlook.com (2603:10b6:408:c0::42)
 by PH8PR12MB6700.namprd12.prod.outlook.com (2603:10b6:510:1cf::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Mon, 17 Jun
 2024 09:01:04 +0000
Received: from BN2PEPF00004FBB.namprd04.prod.outlook.com
 (2603:10b6:408:c0:cafe::cf) by BN8PR15CA0029.outlook.office365.com
 (2603:10b6:408:c0::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30 via Frontend
 Transport; Mon, 17 Jun 2024 09:01:03 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN2PEPF00004FBB.mail.protection.outlook.com (10.167.243.181) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Mon, 17 Jun 2024 09:01:03 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 17 Jun
 2024 04:00:58 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 234e7e86-2c88-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=X/OItcKfDUgqlBRD20gRODIS1iIFZuRh0Q6vziAgTLz0HI4mlt39d5M02P+nRQJj8v4ndDS0Go4MmPL2wmwI6gKMFOF+n+6NDy4FpjTCAzHfLOBmfdAgZ8oPtyQI6zR2IAF8DdaRxp2Xt59wKJ3gsjARmtYcMMS4HeiVjXt4dV68bL3RoZF0v6opywgIIxhHmsiyAX9MJ01PvEWKymoUIZlDhJf69MVa9ROk+wdR5slKHHL/WU/q0CIUyr95ptlv9h/JxKZ8w8N/SzE3sTbarCHMOJQ+IX4vOKZufidsNwn5vRE+zijXQoMmnsBgW9M6KyOtVSh7jLHXfyN91MDpUQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=eHlhfR+6LFbHZD8B8gyJwfo61Uh/eaul+uFJfc005VE=;
 b=Pb5PS5IbJkTBHHY7EtLfvtYNnn0MEY1frGtttA95AOymU87g97e74p52TLjDQ1+nqnltyV5RRBQEIwRBti24QkDcGhhO7zo3zucF4cSmgYVFOMZG51bssFQ8UIwXAytiOn3yBsShIe4FqrTLB2ge1/m2diyKkAtTfP67ekoLDBBxFV6HPsAhx1CQU0BQTVqhUnlIwpDJLA4aakO4VUX5eh4mlkrTG04h3u/hmhODyFZIvU2ZY9f4ogY7d/CLtjmW+0XA9MlC454sm+++a23aQ8YJdtMvH9EKo+4rmleJZd0C23OZTNEFbqGe7mrOBFSZuTbUyinqm62T5icJmmp1bg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eHlhfR+6LFbHZD8B8gyJwfo61Uh/eaul+uFJfc005VE=;
 b=EW8hSIoz1Ou0bFy8+jXpFmAEcuW+DTbGn933va3B1gGrWZL9VnZaKZ9mspm5x5HFytV1O26aQO8dmvV9EOfgKGMy4Q7+xqLZCeAt7N8C+D2xtKrvOCk8E3Hb5/ItHypJ7UA8xOL+QZx8PVP9zzbC/t2ydx9Ge6S1R0F+vRbZQ4Q=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v10 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Date: Mon, 17 Jun 2024 17:00:32 +0800
Message-ID: <20240617090035.839640-3-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617090035.839640-1-Jiqian.Chen@amd.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN2PEPF00004FBB:EE_|PH8PR12MB6700:EE_
X-MS-Office365-Filtering-Correlation-Id: 71057a8e-bc9d-4bdc-e12d-08dc8eac03f6
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|36860700010|7416011|376011|1800799021|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?5RhCespujpkWCvaWJ2NqD1E6z/focBXAwMAf8d5BTs8cvO/aypjG7ZEagf97?=
 =?us-ascii?Q?Xv3UM3h4U6be0hZwWhxSi8BHtEMkMVR5N7MrVWQiDcvof7SRGJybD9irbKZL?=
 =?us-ascii?Q?nDiGZJ41XXZQWqSHgeiU2m2AwAupOO8H1rsWXJo3rsm6Y7uHCkfuCa8ZF9LD?=
 =?us-ascii?Q?By1Xz4GtEESsSvOVENAikmv5/xtt2OCeC9QoVZ3rhimgApnQxsh8/N0/Z81E?=
 =?us-ascii?Q?uTn13z64OSf7jHkSYnU9d1mUGsZwtaPbUBPJVxnAEQ9hJzql5fY7c7vR1R1K?=
 =?us-ascii?Q?UbCPg/JcOYZm42NFbmkIX43ixDvMuDkwb178iKs26tnpizXLA7XI89vyVwsp?=
 =?us-ascii?Q?5Y3BQiTrfiMdlLds2PSOQhEbWeTH4PcNn/CFWQ3HBruQSRNEYXvD3fqRPL/j?=
 =?us-ascii?Q?jrrY+t4G3/o/U0c+3P2W8w/A6rB+42uNXTxL4QmTmRxGu4W7H7ZK8YMtSDpm?=
 =?us-ascii?Q?oYa9Cbr4kc3IBOQ6OJcdz6Jcz0XZ8gRK4QYcLXJW5SQiedJCd4FTNijqYo6Q?=
 =?us-ascii?Q?+ARFePPP76R0QhFSRMn9SM3NzMxM5S6G+rJGbeq0m/2/Yoa3gE8B5rfH7lBV?=
 =?us-ascii?Q?QcXRnxtxcFQOD83U0EbkIP0xajxBoNRjEJdOjQZdWaDISRX9W8uMGL6XouvM?=
 =?us-ascii?Q?E7JttXcTGLRPVa3kaEqjPWyalm0swRDxByjPF+50bOwkC7DVMdWnoOFfyU3i?=
 =?us-ascii?Q?fQx13hLDy24tDVB+47AXA1nvLqkbi3+l/cOwAWxowXFDg2wiwfbru/2tW2NR?=
 =?us-ascii?Q?sfMyqceJNorwyS0iuEWmKqw9CtKyzhra1Kdj03cV6EzjTqHp0e+XlOiMy4zR?=
 =?us-ascii?Q?H1lHIIYgMa1gwmwB4sMK/Rl9gavZxTfUfGHyipUv11OVLK6ja9pkdqqgLSeA?=
 =?us-ascii?Q?R2X6fHppaF8yVry9WEZRXejlBtruXyV2AyZMqp/e/lvmzK/OUgki9DzRzTiE?=
 =?us-ascii?Q?L5ZR5ftb5NGp/j3nK11Qpx74ctXnJfMju3jQUUzjyHqAFSGDPniYWHZiG1O7?=
 =?us-ascii?Q?sZy+pY/8oHkZBsOLXJunSfxB6c604HKhef6I7qCJQWa+OqrGuPY5gff7QSxg?=
 =?us-ascii?Q?lovg4sjBiU+apM6/tDZJ9G0uC9hNexSwx8lWHL2rk4u4LjYLDwsV9a4zsPwY?=
 =?us-ascii?Q?EhS6ImspflTvor1TSGB/YwB8FRXUKZGtWyyFj5F3Ubvn+vbfHgfk+ZrBYopQ?=
 =?us-ascii?Q?00i1w04GIoN0BsTk/kKM0xAdPWz+PgRXPinOOefRC4rt+9/vbdHrG6aTC4DL?=
 =?us-ascii?Q?bhBv7etykrVS+0fbwvCZqptWRLDPG7STVdqSp5+i065HsbPRo/apTMtrTViW?=
 =?us-ascii?Q?CnA2ovufRvmJ4jI01uJ9I39q056hX3r6PuW9rJMXSAT52zQhUNXIpuDH3OFk?=
 =?us-ascii?Q?/JlBkmVbUDD1A+VtjA2lqYBSGhEy87b+B4XklfiPYcOzR/scFg=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(36860700010)(7416011)(376011)(1800799021)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2024 09:01:03.1173
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 71057a8e-bc9d-4bdc-e12d-08dc8eac03f6
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN2PEPF00004FBB.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB6700

When running Xen with a PVH dom0 and an HVM domU, the toolstack maps a pirq
for a passthrough device using its GSI; see the QEMU path
xen_pt_realize->xc_physdev_map_pirq and the libxl path
pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls into
Xen, but hvm_physdev_op does not allow PHYSDEVOP_map_pirq: currd is the PVH
dom0, which lacks the X86_EMU_USE_PIRQ flag, so the hypercall fails the
has_pirq() check.

So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
PHYSDEVOP_unmap_pirq so the failure path can unmap the pirq again. Also add
a new check to prevent a domain from mapping a pirq onto itself when it has
no X86_EMU_USE_PIRQ flag.

With this, a domU that has the X86_EMU_USE_PIRQ flag can successfully map
pirqs for passthrough devices even when dom0 lacks that flag.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/hypercall.c |  6 ++++++
 xen/arch/x86/physdev.c       | 14 ++++++++++++++
 2 files changed, 20 insertions(+)
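The self-map guard added in physdev.c below can be modelled as a small
standalone predicate. This is only a sketch: `struct dom` and its fields are
illustrative stand-ins, not Xen's actual `struct domain` or flag accessors.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-in for the bits of Xen's struct domain the check
 * looks at (not the real type). */
struct dom {
    int id;
    bool is_hvm;   /* HVM (or PVH) domain */
    bool has_pirq; /* X86_EMU_USE_PIRQ is set */
};

/* Model of the guard added to do_physdev_op(): refuse the (un)map
 * when an HVM/PVH domain without X86_EMU_USE_PIRQ targets itself. */
static bool self_map_forbidden(const struct dom *d, const struct dom *currd)
{
    return d->is_hvm && !d->has_pirq && d->id == currd->id;
}
```

In this model, a PVH dom0 managing a domU passes the check, while a PVH
domain targeting itself is refused with -EOPNOTSUPP in the real code.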

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 0fab670a4871..03ada3c880bd 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -71,8 +71,14 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     switch ( cmd )
     {
+        /*
+         * Only permitted for management of other domains.
+         * Further restrictions are enforced in do_physdev_op().
+         */
     case PHYSDEVOP_map_pirq:
     case PHYSDEVOP_unmap_pirq:
+        break;
+
     case PHYSDEVOP_eoi:
     case PHYSDEVOP_irq_status_query:
     case PHYSDEVOP_get_free_pirq:
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index d6dd622952a9..f38cc22c872e 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -323,6 +323,13 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( !d )
             break;
 
+        /* Prevent self-map when currd has no X86_EMU_USE_PIRQ flag */
+        if ( is_hvm_domain(d) && !has_pirq(d) && d == currd )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+
         ret = physdev_map_pirq(d, map.type, &map.index, &map.pirq, &msi);
 
         rcu_unlock_domain(d);
@@ -346,6 +353,13 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( !d )
             break;
 
+        /* Prevent self-unmap when currd has no X86_EMU_USE_PIRQ flag */
+        if ( is_hvm_domain(d) && !has_pirq(d) && d == currd )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+
         ret = physdev_unmap_pirq(d, unmap.pirq);
 
         rcu_unlock_domain(d);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:01:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:01:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741971.1148688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8El-0002jv-L2; Mon, 17 Jun 2024 09:01:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741971.1148688; Mon, 17 Jun 2024 09:01:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8El-0002jk-I5; Mon, 17 Jun 2024 09:01:15 +0000
Received: by outflank-mailman (input) for mailman id 741971;
 Mon, 17 Jun 2024 09:01:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d9pI=NT=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJ8Ek-0001sw-D7
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:01:14 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20627.outbound.protection.outlook.com
 [2a01:111:f403:200a::627])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24a51101-2c88-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 11:01:12 +0200 (CEST)
Received: from BN0PR02CA0026.namprd02.prod.outlook.com (2603:10b6:408:e4::31)
 by SA0PR12MB4367.namprd12.prod.outlook.com (2603:10b6:806:94::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Mon, 17 Jun
 2024 09:01:07 +0000
Received: from BN2PEPF00004FC0.namprd04.prod.outlook.com
 (2603:10b6:408:e4:cafe::bb) by BN0PR02CA0026.outlook.office365.com
 (2603:10b6:408:e4::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30 via Frontend
 Transport; Mon, 17 Jun 2024 09:01:07 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN2PEPF00004FC0.mail.protection.outlook.com (10.167.243.186) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Mon, 17 Jun 2024 09:01:06 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 17 Jun
 2024 04:01:01 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24a51101-2c88-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QeMfdHsjwibjuPvX1wgZ9PxctMhCa4ReTBMejBfgwoVlZxMQXL8Zo67T53Cqyu8Z0ULz4HvfmYkkEKlE7BBncIV9srlrhriT6+0Kq+94v7lVR1K3Doxk8H4K+WziI5k0p3H3o0uUJQg+iYCWfzcATmuKeJgUlZh2KrRDfDqGO9m4dSGYfwY34UjSLrD3jGOMH0rMlqMsCX4od+ZGmLv61W6wAJfZ27klVr8w2mH21kqihMnQz9lv9/5UShvAP+OUykjD4K7ax8L6wyDvVazY/Tr7SbER+BShA2rOZzmQNX3pF5NEuxqraXHROODw+NtJoDpG49ov1mhazE5FvR94tw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zMKlHsXkyAhfZlOofMMtN4ROHc5mALfPC96QGGMH/X0=;
 b=D+qbV5/5zV6If3j4IjOOxAfVVDNm2UcbU1QnUVjlA+eFe/Nih6Q7DmIwEmd7Tj15IoTTLBmZw3mrJZG01q2nci4hXzCVUtDH/vCTkmvwP+OOSOrLtOKV38HiGFlELs254xhqKv5SnHs0DsN8j3AAlIadkPLcg7basvY0UNzqu1JrrRcLoSCMppMtrQYXU9Z8rBIofn/VGs6E6Gt8xR374k09ddzabuitT/7h12x/Qvz4gFdimVQqwrDXkrARxJiWUR6sxtcmVs4/Y1dgMMOKE9O5pyWwzr/oIxHdvQP8u2DDoeFttvC1XX1TwcJ9JbBQIj/VuLfL5srBl5DlYN6ICw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zMKlHsXkyAhfZlOofMMtN4ROHc5mALfPC96QGGMH/X0=;
 b=ymdY+jdRRjXAe0XqOFFZvr8oj605DJ+xVlYjCjf4InUOs2Tlh+BJOYL5CUJfU4bfoRJ7PsREijfIMVYVvRPGBBhB9iNYGuAwo69t2z7TkI8qPlKFJ5mkBTpPhd8SmT/d+alLMbq+KZGME6q+umLR16/XvczoTYkGkcFV8+/CiKo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Date: Mon, 17 Jun 2024 17:00:33 +0800
Message-ID: <20240617090035.839640-4-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617090035.839640-1-Jiqian.Chen@amd.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN2PEPF00004FC0:EE_|SA0PR12MB4367:EE_
X-MS-Office365-Filtering-Correlation-Id: 00028aba-4365-41da-ba50-08dc8eac0645
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|36860700010|376011|7416011|1800799021|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?XB/KUpauiID7TSc8Cb+YVhE2fzelwsvdqeM05eRg6810sCCcW40gKuMuIxmd?=
 =?us-ascii?Q?J+SyxlaFKbiUS9fcaKfqt1Y3ZCKthCidCkzRR+49MPHM0lSA6uRf8mjNDTVL?=
 =?us-ascii?Q?Szw03Rzqpj6fZsvwsDMCqIa+bQllKtBEKNErlKKEvpMezBxKyQRIbEboPd4p?=
 =?us-ascii?Q?wTcLQarXkuqqZEG7nmbZk2rIk44iv0qNYMTIOseT/n+DuEj0iQm2IWnr9NIW?=
 =?us-ascii?Q?0kMbhqM7+K541PkQAWuq57+btHANVrEA+6QJpprOBJDTgt8k35EImN+B3iCn?=
 =?us-ascii?Q?J6DAKNYfyyfidftHub+HffclVOL4MIWvwZW6tyPQrDxjBz1Htf0UhBcAXJQR?=
 =?us-ascii?Q?+8AFrsrZQXuKCqdiykZXDX92aPJJeXdUE3swWqn1p4rjlJZGKySVWwLXI0eK?=
 =?us-ascii?Q?/jeQQ4gf4ZO98PTP2ygHuwfDJGfXnL8cXkLTV06Vc7gE0b6SmgDNLlSDUg9V?=
 =?us-ascii?Q?QkgBBVOrxTscK/nsYLt7fO6+ZmXlprSZ7kBjeedAo3r3BwxzY114AvBDuuQ9?=
 =?us-ascii?Q?6TmTQEioRJLLyTG5J7o93RXUIk9VzCyYunDC3SVevqFz0gq8RfjIvXzOxeX/?=
 =?us-ascii?Q?wmmbgKxMu4mnd25883sVdsvJdWHG26w+S62GA9jiX07EMZ6w62VnxKmEXsbg?=
 =?us-ascii?Q?5sGXUTMEwW8Uk5DHc8wPRNpiDf6bUNLxvkL6EGoxJnHWJmRwPuEKA3VS7f/J?=
 =?us-ascii?Q?9nRniSib8s+nCJfC4LyaHOptiKxlgL4vh0gd4SaxnFLYlxDJi6eTtUu4BjTp?=
 =?us-ascii?Q?iJfYVwh8/jRLNPZC6/RTnJRqlAlIWg9O17HQyBi+fSFLrkMcTUI9Kyg6XYLl?=
 =?us-ascii?Q?NG0GvKH8u9mLHw75HEf8uqA7LclTaRo8lIRJxNGXYUZtEwSBDR+AZPV/1k5N?=
 =?us-ascii?Q?Zc+499iUN0PXDacKmwSWJnjYziOF83BSp/yASfA4PilifxWy6jZWSepga2eE?=
 =?us-ascii?Q?GwUKe9czStcDoSjx24di75NTRiW+J8o5D8o7HTaQnAkDSVxJanhJ5OdRmLuj?=
 =?us-ascii?Q?LvtY3oulRj7IinBPiW2OPlqcQ2U2sBeVskNmGw10KaggqyTEAkaGEZzMs3mB?=
 =?us-ascii?Q?CuxQ/kLQslYsyZUM2SW4H7p0+NsHuir4oGxpf0PxPmkYVeX3C44VqS/IByIr?=
 =?us-ascii?Q?86uN2eHoFTSj9WBrqFiQGeZ/PujLK/3iY1yUbbsJCcxzVd1+pd/v+Q61B4LJ?=
 =?us-ascii?Q?FE1KVpKMHo5Zr4VG7PnO5896SfAH/qxNz8xR05bsE/memBAfZEmOmezUhU6G?=
 =?us-ascii?Q?TxFz/KA3N5hi3TPX0bCMCcHnN6p04J3cNKKtK29iI9Mvq9lRHIrNC+pgnIEf?=
 =?us-ascii?Q?HqS5xYcbA6H/4l3aOIHhBCBlGzfyO0FruBgY/AzNCmT7wea4saOuM3w+PMzk?=
 =?us-ascii?Q?5vTA5vs=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(36860700010)(376011)(7416011)(1800799021)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2024 09:01:06.9738
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 00028aba-4365-41da-ba50-08dc8eac0645
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN2PEPF00004FC0.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4367

The GSI of a passthrough device must be configured before the device can be
mapped into an HVM domU. But when dom0 is PVH, GSIs are not registered: the
APIC, pin and IRQ information never gets added to the irq_2_pin list and the
irq_desc handler is never set, so when a device is passed through, setting
the IOAPIC affinity and vector fails.

To fix this, new code on the Linux kernel side will need to call
PHYSDEVOP_setup_gsi to register the GSI of a passthrough device when dom0
is PVH.

So, add PHYSDEVOP_setup_gsi to hvm_physdev_op for this purpose.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
The Linux kernel code that will invoke this hypercall is here:
https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/
---
 xen/arch/x86/hvm/hypercall.c | 1 +
 1 file changed, 1 insertion(+)
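For reference, the argument the PVH dom0 kernel would pass with
PHYSDEVOP_setup_gsi can be sketched as below. The struct mirrors the
declaration in Xen's public physdev.h (reproduced here for illustration);
`make_setup_gsi()` and its trigger/polarity choices are assumptions for the
example, not code from this series.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Layout of the PHYSDEVOP_setup_gsi argument, as declared in Xen's
 * public/physdev.h (reproduced here so the sketch is self-contained). */
struct physdev_setup_gsi {
    int gsi;            /* GSI to set up */
    uint8_t triggering; /* 0: edge, 1: level */
    uint8_t polarity;   /* 0: active-high, 1: active-low */
};

/* Sketch: fill the argument for a level-triggered, active-low GSI,
 * as typical for a PCI INTx line, before issuing the hypercall. */
static struct physdev_setup_gsi make_setup_gsi(int gsi)
{
    struct physdev_setup_gsi s;

    memset(&s, 0, sizeof(s));
    s.gsi = gsi;
    s.triggering = 1; /* level */
    s.polarity = 1;   /* active low */
    return s;
}
```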

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 03ada3c880bd..cfe82d0f96ed 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -86,6 +86,7 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -ENOSYS;
         break;
 
+    case PHYSDEVOP_setup_gsi:
     case PHYSDEVOP_pci_mmcfg_reserved:
     case PHYSDEVOP_pci_device_add:
     case PHYSDEVOP_pci_device_remove:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:01:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:01:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.741974.1148698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Er-00037B-1P; Mon, 17 Jun 2024 09:01:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 741974.1148698; Mon, 17 Jun 2024 09:01:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Eq-00036w-Sl; Mon, 17 Jun 2024 09:01:20 +0000
Received: by outflank-mailman (input) for mailman id 741974;
 Mon, 17 Jun 2024 09:01:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d9pI=NT=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJ8Ep-0001sw-AX
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:01:19 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2062c.outbound.protection.outlook.com
 [2a01:111:f400:7e88::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2774648a-2c88-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 11:01:17 +0200 (CEST)
Received: from BN8PR15CA0035.namprd15.prod.outlook.com (2603:10b6:408:c0::48)
 by PH7PR12MB9151.namprd12.prod.outlook.com (2603:10b6:510:2e9::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.23; Mon, 17 Jun
 2024 09:01:11 +0000
Received: from BN2PEPF00004FBB.namprd04.prod.outlook.com
 (2603:10b6:408:c0:cafe::d8) by BN8PR15CA0035.outlook.office365.com
 (2603:10b6:408:c0::48) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31 via Frontend
 Transport; Mon, 17 Jun 2024 09:01:11 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN2PEPF00004FBB.mail.protection.outlook.com (10.167.243.181) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Mon, 17 Jun 2024 09:01:10 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 17 Jun
 2024 04:01:06 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2774648a-2c88-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AbDf4dbs+m8vUYVmDh/8Q4rhbo2G30UwUNNWeW7jCcrmY0XdDIhi4w42YXyEmkfZNxTy/PbnDbNODM1+TiT+75ac5Tyfgx0YogqDcOXPQfHmPKBST+1001mrx5P/ICNc5oMALuw1XlH5VJtenWcUIMZsuyIH/1cVnnqta4ak/GtdCqJClNc8SlOVFNKvgPBc2l86YrRKrjzavFYLSaRbNNJZLn7vcr9WcDTNTp+wXYHCzPE7Qz1RB21AijXl3lcwAjF2JgcR1doW3UhNGFWhrkbeVucvweP/EC/qG5if3mTmCEmBO34KQWKKWoBsUuRO2oKbZbRLZssMoLxi76cl/Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=O8s3U4eyGcJsdb4tSa9ckkziPh9Oq4xAV7y+FvlnDtE=;
 b=ZICHO0KcrhZJQxPjDVcEMXya01HEcPnp0xQyaCpveFdsm2Ulo7EoQOiqW/BBnBt32umGHU5ES6sTzu4wX7sXkCbcvM2n18rYWObd/LfNnJAAr5IBRiRfnESeCtNNa8PpeK5wdzRrqsiCL2gXaC3QM7EAdjV3jDsXO1DmdLaN0hYX52G3JIrQgw7p+BmyJmtsxLG+iDICLyN10ZaEG44Rt7eU8lSAiKIA+NPaaZ+17H78Eeb+wau3l5J0tDGhXceNK0TaQwJ/oU5lCIzsAErO/mY8WezxE5zK11OeSyaFHfmrnRmOeQNXpkAySLOFvhGzBamhTPt04Ny13rGUgm3CIw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=O8s3U4eyGcJsdb4tSa9ckkziPh9Oq4xAV7y+FvlnDtE=;
 b=jwj5bqeiTXDKukKhCTw+be+IJj15hOH513xBgzOS2FgFRlunNVsCnzjt4LdBLhP+ZU6jWJQ025j1Aiwukhq/VWo7wCU9EoCrGv1/qPBuAZYd0BTWi9WZYvXlICvYIVhI5QzI9AdgfBhOOI7FkuT5zPyuvgGKi76/vwiT/y1eIkA=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Date: Mon, 17 Jun 2024 17:00:34 +0800
Message-ID: <20240617090035.839640-5-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617090035.839640-1-Jiqian.Chen@amd.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB04.amd.com (10.181.40.145) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN2PEPF00004FBB:EE_|PH7PR12MB9151:EE_
X-MS-Office365-Filtering-Correlation-Id: 2ec7969c-ea83-48c8-e2fe-08dc8eac08aa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|7416011|36860700010|376011|1800799021|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?OmEa7K1oE+a0C8ZBPzcMAzGV2n54AbTgYjjgSV69MV5FScvnebwT44K3bBug?=
 =?us-ascii?Q?qKSGeVx/+ZqxIKfSCTFSv4f5l8a2su+XB9E7tGp2apWlMMTmgsWCBOuHVR3V?=
 =?us-ascii?Q?tXl2phSLJmABf6PCnZQCthWwzoPyllhlxYGshEQ5qN6J6BSOu6ATLvHyi1dj?=
 =?us-ascii?Q?Au+DuNXNfJ/5IyXuCjYVn+R/KjbW5v4V1GrfjE4Dmm0ygMz3KQMcwz8NT419?=
 =?us-ascii?Q?w4HLTJTwabsKpSsfQuG83BBJCe7m3AxwwnnacxHqdDQLJKmBP9CgGNxGZYEV?=
 =?us-ascii?Q?RpokbQE/uByXpLMn93am0nqYEq1T9Rea4Wq3pbH4LD8QG328cDTTnq6Yzpp9?=
 =?us-ascii?Q?+nUSAP9ayGxqK8aC/FXSRIaAW5WlPi/rvRauQdbrt3V1+xbRSAuGTN33emt8?=
 =?us-ascii?Q?Hr64xFbqFZ87uAXg05Ve6BUfOk39uexKwZ7pITKHoTynDcqquffA2Ven6jxY?=
 =?us-ascii?Q?rMg/1UwWyldvkw6JcbDmTH0qW6rdTqDUFHHdi7ROYq5ZBOdBRrz8sSlJxuU/?=
 =?us-ascii?Q?7v8akwX30hbxoVMD1Ty4CSmQ1GO+flrmnLQlqhr83dGwLXmZvXd2CNie58aF?=
 =?us-ascii?Q?vppQ54YGFjsNuKqFv4IB6ImOqrFcEN0E6UBYuiQ++E7oyEo3pBk49Cu52KtE?=
 =?us-ascii?Q?ciNVz3BMAMJN3DcxdOA0j0MV8WSu3eEp3CjShb4OUw+TAnUVoQmL02YoXfKF?=
 =?us-ascii?Q?Rosq9ARWYMk3RB8O/zROlmr1qN4NTiZqZtclHv3/Tn7/aQqQmWRruu0Ummzc?=
 =?us-ascii?Q?rGg+FsHXf7Uq6VyfiBDW1ffN5/RRoVRQ8AzFBxQD1ljUhu3NczKdPcn8kIrx?=
 =?us-ascii?Q?CdT5bx6fhpJ+ecusVTQ+bFthxmgOi0vA9T3OjXTLU2y2LjPp+Cksx2BbAzuA?=
 =?us-ascii?Q?et2vbLvnz9Xb+Eh9kM/jYwttetZJXUeAbMPypK1e7EsYide0o8+SUZsgQ5YS?=
 =?us-ascii?Q?I/RNKoz8cB8IxhcIPtBrtzoh7q8rSD1CXjzTsurg492iPpi8eZf7vhyeT7UU?=
 =?us-ascii?Q?UVqTV0ZEDTmcrzqeezyjs0UPCMkW49jB6ztOid1nmLo4H6OuEbkC/pY+dP4S?=
 =?us-ascii?Q?X1d5xTb/5JYix/brtYwqBBfcyjYSBW9BRWU0Qw5+4Ou5QKIbHez4H4yKTpqH?=
 =?us-ascii?Q?kotURZIH2Wsfqe42I+ch0iJ7l9NQQlC9zwOVmPdKVWq6Pu9RaOF4ntS4zEVZ?=
 =?us-ascii?Q?6bu9KM+gSTb7ReAsxTYx8z+lX48p/lMo3UGYsn5TOlIrLH1STagV5mkDObb2?=
 =?us-ascii?Q?LrILnus2fAAFUdDQ5bCvPr4wLPK4bHpYr0FwR0VIDkSYXIooyMBKMB6YVMah?=
 =?us-ascii?Q?qsDB5BgWNWHdbV3eK6EtZiYPR6V7ApfqPO86AkmUB4CIAdWW2ZAaEmfYnaQ0?=
 =?us-ascii?Q?qvAQMRZtVpVQulSGXOPBhQFozruXlmTh7r6UVs/1Ikijryjm0A=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(7416011)(36860700010)(376011)(1800799021)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 Jun 2024 09:01:10.9922
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2ec7969c-ea83-48c8-e2fe-08dc8eac08aa
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN2PEPF00004FBB.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB9151

A PVH dom0 uses the Linux interrupt mechanism, which allocates the IRQ for
a GSI dynamically, on a first-come, first-served basis. IRQ numbers are
handed out in ascending order, but GSIs are not necessarily requested in
ascending order (GSI 38 may be requested before GSI 28), so an IRQ number
does not always equal its GSI number. When a device is passed through, QEMU
uses the GSI number for the pirq mapping (see
xen_pt_realize->xc_physdev_map_pirq), but what it actually reads from
/sys/bus/pci/devices/<sbdf>/irq is the IRQ number, so the mapping fails.
And the current code provides no way for userspace to obtain the GSI.

Fix this by adding a new function to get the GSI, and call it before
xc_physdev_(un)map_pirq.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Chen Jiqian <Jiqian.Chen@amd.com>
---
RFC: this needs review, and must wait for the corresponding third patch on the Linux kernel side to be merged.
---
 tools/include/xen-sys/Linux/privcmd.h |  7 +++++
 tools/include/xencall.h               |  2 ++
 tools/include/xenctrl.h               |  2 ++
 tools/libs/call/core.c                |  5 ++++
 tools/libs/call/libxencall.map        |  2 ++
 tools/libs/call/linux.c               | 15 +++++++++++
 tools/libs/call/private.h             |  9 +++++++
 tools/libs/ctrl/xc_physdev.c          |  4 +++
 tools/libs/light/Makefile             |  2 +-
 tools/libs/light/libxl_pci.c          | 38 +++++++++++++++++++++++++++
 10 files changed, 85 insertions(+), 1 deletion(-)
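The new xc_physdev_gsi_from_dev() below takes the device's SBDF packed into
a uint32_t. A caller-side helper for building that value can be sketched as
follows; the bit layout assumed here is the usual PCI SBDF encoding (segment
in bits 31-16, bus in 15-8, device in 7-3, function in 2-0), which should be
confirmed against the kernel side of this series.

```c
#include <assert.h>
#include <stdint.h>

/* Pack a PCI segment/bus/device/function into a 32-bit SBDF value
 * (layout assumed: seg[31:16] bus[15:8] dev[7:3] fn[2:0]). */
static uint32_t pci_sbdf(uint16_t seg, uint8_t bus, uint8_t dev, uint8_t fn)
{
    return ((uint32_t)seg << 16) | ((uint32_t)bus << 8) |
           (((uint32_t)dev & 0x1f) << 3) | ((uint32_t)fn & 0x7);
}
```

With that, the intended call sequence would be roughly: build the SBDF for
the passthrough device, call xc_physdev_gsi_from_dev() to obtain the real
GSI, then pass that GSI (rather than the sysfs IRQ number) to
xc_physdev_map_pirq().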

diff --git a/tools/include/xen-sys/Linux/privcmd.h b/tools/include/xen-sys/Linux/privcmd.h
index bc60e8fd55eb..977f1a058797 100644
--- a/tools/include/xen-sys/Linux/privcmd.h
+++ b/tools/include/xen-sys/Linux/privcmd.h
@@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
 	__u64 addr;
 } privcmd_mmap_resource_t;
 
+typedef struct privcmd_gsi_from_dev {
+	__u32 sbdf;
+	int gsi;
+} privcmd_gsi_from_dev_t;
+
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
@@ -114,6 +119,8 @@ typedef struct privcmd_mmap_resource {
 	_IOC(_IOC_NONE, 'P', 6, sizeof(domid_t))
 #define IOCTL_PRIVCMD_MMAP_RESOURCE				\
 	_IOC(_IOC_NONE, 'P', 7, sizeof(privcmd_mmap_resource_t))
+#define IOCTL_PRIVCMD_GSI_FROM_DEV				\
+	_IOC(_IOC_NONE, 'P', 10, sizeof(privcmd_gsi_from_dev_t))
 #define IOCTL_PRIVCMD_UNIMPLEMENTED				\
 	_IOC(_IOC_NONE, 'P', 0xFF, 0)
 
diff --git a/tools/include/xencall.h b/tools/include/xencall.h
index fc95ed0fe58e..750aab070323 100644
--- a/tools/include/xencall.h
+++ b/tools/include/xencall.h
@@ -113,6 +113,8 @@ int xencall5(xencall_handle *xcall, unsigned int op,
              uint64_t arg1, uint64_t arg2, uint64_t arg3,
              uint64_t arg4, uint64_t arg5);
 
+int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf);
+
 /* Variant(s) of the above, as needed, returning "long" instead of "int". */
 long xencall2L(xencall_handle *xcall, unsigned int op,
                uint64_t arg1, uint64_t arg2);
diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 9ceca0cffc2f..a0381f74d24b 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1641,6 +1641,8 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
                           uint32_t domid,
                           int pirq);
 
+int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf);
+
 /*
  *  LOGGING AND ERROR REPORTING
  */
diff --git a/tools/libs/call/core.c b/tools/libs/call/core.c
index 02c4f8e1aefa..6dae50c9a6ba 100644
--- a/tools/libs/call/core.c
+++ b/tools/libs/call/core.c
@@ -173,6 +173,11 @@ int xencall5(xencall_handle *xcall, unsigned int op,
     return osdep_hypercall(xcall, &call);
 }
 
+int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf)
+{
+    return osdep_oscall(xcall, sbdf);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/call/libxencall.map b/tools/libs/call/libxencall.map
index d18a3174e9dc..b92a0b5dc12c 100644
--- a/tools/libs/call/libxencall.map
+++ b/tools/libs/call/libxencall.map
@@ -10,6 +10,8 @@ VERS_1.0 {
 		xencall4;
 		xencall5;
 
+		xen_oscall_gsi_from_dev;
+
 		xencall_alloc_buffer;
 		xencall_free_buffer;
 		xencall_alloc_buffer_pages;
diff --git a/tools/libs/call/linux.c b/tools/libs/call/linux.c
index 6d588e6bea8f..92c740e176f2 100644
--- a/tools/libs/call/linux.c
+++ b/tools/libs/call/linux.c
@@ -85,6 +85,21 @@ long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
     return ioctl(xcall->fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
 }
 
+int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
+{
+    privcmd_gsi_from_dev_t dev_gsi = {
+        .sbdf = sbdf,
+        .gsi = -1,
+    };
+
+    if (ioctl(xcall->fd, IOCTL_PRIVCMD_GSI_FROM_DEV, &dev_gsi)) {
+        PERROR("failed to get gsi from dev");
+        return -1;
+    }
+
+    return dev_gsi.gsi;
+}
+
 static void *alloc_pages_bufdev(xencall_handle *xcall, size_t npages)
 {
     void *p;
diff --git a/tools/libs/call/private.h b/tools/libs/call/private.h
index 9c3aa432efe2..cd6eb5a3e66f 100644
--- a/tools/libs/call/private.h
+++ b/tools/libs/call/private.h
@@ -57,6 +57,15 @@ int osdep_xencall_close(xencall_handle *xcall);
 
 long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
 
+#if defined(__linux__)
+int osdep_oscall(xencall_handle *xcall, unsigned int sbdf);
+#else
+static inline int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
+{
+    return -1;
+}
+#endif
+
 void *osdep_alloc_pages(xencall_handle *xcall, size_t nr_pages);
 void osdep_free_pages(xencall_handle *xcall, void *p, size_t nr_pages);
 
diff --git a/tools/libs/ctrl/xc_physdev.c b/tools/libs/ctrl/xc_physdev.c
index 460a8e779ce8..c1458f3a38b5 100644
--- a/tools/libs/ctrl/xc_physdev.c
+++ b/tools/libs/ctrl/xc_physdev.c
@@ -111,3 +111,7 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
     return rc;
 }
 
+int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf)
+{
+    return xen_oscall_gsi_from_dev(xch->xcall, sbdf);
+}
diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
index 37e4d1670986..6b616d5ee9b6 100644
--- a/tools/libs/light/Makefile
+++ b/tools/libs/light/Makefile
@@ -40,7 +40,7 @@ OBJS-$(CONFIG_X86) += $(ACPI_OBJS)
 
 CFLAGS += -Wno-format-zero-length -Wmissing-declarations -Wformat-nonliteral
 
-CFLAGS-$(CONFIG_X86) += -DCONFIG_PCI_SUPP_LEGACY_IRQ
+CFLAGS-$(CONFIG_X86) += -DCONFIG_PCI_SUPP_LEGACY_IRQ -DCONFIG_X86
 
 OBJS-$(CONFIG_X86) += libxl_cpuid.o
 OBJS-$(CONFIG_X86) += libxl_x86.o
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 96cb4da0794e..376f91759ac6 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
 #endif
 }
 
+#define PCI_DEVID(bus, devfn)\
+            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
+
+#define PCI_SBDF(seg, bus, devfn) \
+            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
+
 static void pci_add_dm_done(libxl__egc *egc,
                             pci_add_state *pas,
                             int rc)
@@ -1418,6 +1424,10 @@ static void pci_add_dm_done(libxl__egc *egc,
     unsigned long long start, end, flags, size;
     int irq, i;
     int r;
+#ifdef CONFIG_X86
+    int gsi;
+    uint32_t sbdf;
+#endif
     uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
     uint32_t domainid = domid;
     bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
@@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
         goto out_no_irq;
     }
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
+#ifdef CONFIG_X86
+        sbdf = PCI_SBDF(pci->domain, pci->bus,
+                        (PCI_DEVFN(pci->dev, pci->func)));
+        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
+        /*
+         * Older kernels may not support this interface; on failure
+         * keep using irq, and on success use the returned gsi.
+         */
+        if (gsi > 0) {
+            irq = gsi;
+        }
+#endif
         r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
         if (r < 0) {
             LOGED(ERROR, domainid, "xc_physdev_map_pirq irq=%d (error=%d)",
@@ -2172,6 +2194,10 @@ static void pci_remove_detached(libxl__egc *egc,
     int  irq = 0, i, stubdomid = 0;
     const char *sysfs_path;
     FILE *f;
+#ifdef CONFIG_X86
+    int gsi;
+    uint32_t sbdf;
+#endif
     uint32_t domainid = prs->domid;
     bool isstubdom;
 
@@ -2239,6 +2265,18 @@ skip_bar:
     }
 
     if ((fscanf(f, "%u", &irq) == 1) && irq) {
+#ifdef CONFIG_X86
+        sbdf = PCI_SBDF(pci->domain, pci->bus,
+                        (PCI_DEVFN(pci->dev, pci->func)));
+        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
+        /*
+         * Older kernels may not support this interface; on failure
+         * keep using irq, and on success use the returned gsi.
+         */
+        if (gsi > 0) {
+            irq = gsi;
+        }
+#endif
         rc = xc_physdev_unmap_pirq(ctx->xch, domid, irq);
         if (rc < 0) {
             /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:01:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:01:23 +0000
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to grant gsi
Date: Mon, 17 Jun 2024 17:00:35 +0800
Message-ID: <20240617090035.839640-6-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240617090035.839640-1-Jiqian.Chen@amd.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Some domain types, such as PVH, don't have PIRQs and don't issue
PHYSDEVOP_map_pirq for each GSI. When passing a device through to a
guest on a PVH dom0, the call path
pci_add_dm_done->XEN_DOMCTL_irq_permission fails in
domain_pirq_to_irq, because for PVH there is no mapping between GSI,
PIRQ and IRQ on the Xen side. What's more, the current hypercall
XEN_DOMCTL_irq_permission requires a PIRQ to be passed in, which is
not suitable for a dom0 that has no PIRQs.

So, add a new hypercall, XEN_DOMCTL_gsi_permission, to grant a domU
permission to the IRQ (translated from a GSI) when dom0 has no PIRQs.
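The toolstack-side decision this patch introduces can be sketched as
follows. This is an illustrative model, not the libxl code itself: the
flag bit positions and the helper names are stand-ins, not the real
values from the Xen public headers, and only the SBDF packing mirrors
the PCI_DEVID/PCI_SBDF macros added to libxl_pci.c.

```python
# Sketch of the new grant path: when dom0 is an HVM-style guest (e.g.
# PVH) without emulated PIRQ support and a valid GSI is known, the
# toolstack uses the new XEN_DOMCTL_gsi_permission; otherwise it falls
# back to the existing XEN_DOMCTL_irq_permission.

# Bit positions here are illustrative only, not the real header values.
XEN_DOMINF_hvm_guest = 1 << 0
XEN_X86_EMU_USE_PIRQ = 1 << 1

def pci_sbdf(seg, bus, devfn):
    """Mirror of the PCI_DEVID/PCI_SBDF macros from libxl_pci.c."""
    devid = ((bus & 0xff) << 8) | (devfn & 0xff)
    return (seg << 16) | devid

def choose_permission_call(dom0_flags, dom0_emulation_flags, gsi):
    """Return which domctl the toolstack would use for this device."""
    if (dom0_flags & XEN_DOMINF_hvm_guest
            and not (dom0_emulation_flags & XEN_X86_EMU_USE_PIRQ)
            and gsi > 0):
        return "XEN_DOMCTL_gsi_permission"
    return "XEN_DOMCTL_irq_permission"
```

For example, a device at 0000:03:05.0 on a PVH dom0 (HVM flag set, no
emulated PIRQs) with a known GSI would take the new gsi_permission
path, while a classic PV dom0 keeps using irq_permission.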

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
RFC: this still needs review, and must wait for the corresponding third patch on the Linux kernel side to be merged.
---
 tools/include/xenctrl.h            |  5 +++
 tools/libs/ctrl/xc_domain.c        | 15 +++++++
 tools/libs/light/libxl_pci.c       | 67 +++++++++++++++++++++++++++---
 xen/arch/x86/domctl.c              | 43 +++++++++++++++++++
 xen/arch/x86/include/asm/io_apic.h |  2 +
 xen/arch/x86/io_apic.c             | 17 ++++++++
 xen/arch/x86/mpparse.c             |  3 +-
 xen/include/public/domctl.h        |  8 ++++
 xen/xsm/flask/hooks.c              |  1 +
 9 files changed, 153 insertions(+), 8 deletions(-)
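On the hypervisor side, the newly exported gsi_2_irq() resolves a GSI
to an IRQ by locating the IO-APIC whose GSI range covers it, deriving
the pin as the offset from that IO-APIC's GSI base, and then mapping
(apic, pin) to an IRQ. The logic can be sketched as below; the routing
table and the pin-to-IRQ mapping are illustrative mocks standing in
for mp_ioapic_routing[] and apic_pin_2_gsi_irq().

```python
# Sketch of the gsi_2_irq() lookup exported from io_apic.c.

EINVAL = 22

# Stand-in for mp_ioapic_routing[]: (gsi_base, gsi_end) per IO-APIC.
IOAPIC_ROUTING = [(0, 23), (24, 55)]

def mp_find_ioapic(gsi):
    """Find the IO-APIC whose GSI range contains gsi, or -1."""
    for ioapic, (base, end) in enumerate(IOAPIC_ROUTING):
        if base <= gsi <= end:
            return ioapic
    return -1

def apic_pin_2_gsi_irq(ioapic, pin):
    # Mock pin->irq mapping: identity with the GSI number in this sketch.
    return IOAPIC_ROUTING[ioapic][0] + pin

def gsi_2_irq(gsi):
    ioapic = mp_find_ioapic(gsi)
    if ioapic < 0:
        return -EINVAL

    pin = gsi - IOAPIC_ROUTING[ioapic][0]

    irq = apic_pin_2_gsi_irq(ioapic, pin)
    if irq <= 0:
        return -EINVAL

    return irq
```

As in the real code, a GSI outside every IO-APIC's range, or one that
resolves to a non-positive IRQ, yields -EINVAL.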

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index a0381f74d24b..f3feb6848e25 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1382,6 +1382,11 @@ int xc_domain_irq_permission(xc_interface *xch,
                              uint32_t pirq,
                              bool allow_access);
 
+int xc_domain_gsi_permission(xc_interface *xch,
+                             uint32_t domid,
+                             uint32_t gsi,
+                             bool allow_access);
+
 int xc_domain_iomem_permission(xc_interface *xch,
                                uint32_t domid,
                                unsigned long first_mfn,
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index f2d9d14b4d9f..8540e84fda93 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1394,6 +1394,21 @@ int xc_domain_irq_permission(xc_interface *xch,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_gsi_permission(xc_interface *xch,
+                             uint32_t domid,
+                             uint32_t gsi,
+                             bool allow_access)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_gsi_permission,
+        .domain = domid,
+        .u.gsi_permission.gsi = gsi,
+        .u.gsi_permission.allow_access = allow_access,
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
 int xc_domain_iomem_permission(xc_interface *xch,
                                uint32_t domid,
                                unsigned long first_mfn,
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 376f91759ac6..f027f22c0028 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -1431,6 +1431,9 @@ static void pci_add_dm_done(libxl__egc *egc,
     uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
     uint32_t domainid = domid;
     bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
+#ifdef CONFIG_X86
+    xc_domaininfo_t info;
+#endif
 
     /* Convenience aliases */
     bool starting = pas->starting;
@@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
             rc = ERROR_FAIL;
             goto out;
         }
-        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
+#ifdef CONFIG_X86
+        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
+        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
         if (r < 0) {
-            LOGED(ERROR, domainid,
-                  "xc_domain_irq_permission irq=%d (error=%d)", irq, r);
+            LOGED(ERROR, domainid, "getdomaininfo failed (error=%d)", errno);
             fclose(f);
             rc = ERROR_FAIL;
             goto out;
         }
+        if (info.flags & XEN_DOMINF_hvm_guest &&
+            !(info.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ) &&
+            gsi > 0) {
+            r = xc_domain_gsi_permission(ctx->xch, domid, gsi, 1);
+            if (r < 0) {
+                LOGED(ERROR, domainid,
+                    "xc_domain_gsi_permission gsi=%d (error=%d)", gsi, errno);
+                fclose(f);
+                rc = ERROR_FAIL;
+                goto out;
+            }
+        }
+        else
+#endif
+        {
+            r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
+            if (r < 0) {
+                LOGED(ERROR, domainid,
+                    "xc_domain_irq_permission irq=%d (error=%d)", irq, errno);
+                fclose(f);
+                rc = ERROR_FAIL;
+                goto out;
+            }
+        }
     }
     fclose(f);
 
@@ -2200,6 +2228,10 @@ static void pci_remove_detached(libxl__egc *egc,
 #endif
     uint32_t domainid = prs->domid;
     bool isstubdom;
+#ifdef CONFIG_X86
+    int r;
+    xc_domaininfo_t info;
+#endif
 
     /* Convenience aliases */
     libxl_device_pci *const pci = &prs->pci;
@@ -2287,9 +2319,32 @@ skip_bar:
              */
             LOGED(ERROR, domid, "xc_physdev_unmap_pirq irq=%d", irq);
         }
-        rc = xc_domain_irq_permission(ctx->xch, domid, irq, 0);
-        if (rc < 0) {
-            LOGED(ERROR, domid, "xc_domain_irq_permission irq=%d", irq);
+#ifdef CONFIG_X86
+        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
+        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
+        if (r < 0) {
+            LOGED(ERROR, domid, "getdomaininfo failed (error=%d)", errno);
+            fclose(f);
+            rc = ERROR_FAIL;
+            goto skip_legacy_irq;
+        }
+        if (info.flags & XEN_DOMINF_hvm_guest &&
+            !(info.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ) &&
+            gsi > 0) {
+            r = xc_domain_gsi_permission(ctx->xch, domid, gsi, 0);
+            if (r < 0) {
+                LOGED(ERROR, domid,
+                    "xc_domain_gsi_permission gsi=%d (error=%d)", gsi, errno);
+                rc = ERROR_FAIL;
+            }
+        }
+        else
+#endif
+        {
+            rc = xc_domain_irq_permission(ctx->xch, domid, irq, 0);
+            if (rc < 0) {
+                LOGED(ERROR, domid, "xc_domain_irq_permission irq=%d", irq);
+            }
         }
     }
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 335aedf46d03..6b465bbc6ec0 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -36,6 +36,7 @@
 #include <asm/xstate.h>
 #include <asm/psr.h>
 #include <asm/cpu-policy.h>
+#include <asm/io_apic.h>
 
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
@@ -237,6 +238,48 @@ long arch_do_domctl(
         break;
     }
 
+    case XEN_DOMCTL_gsi_permission:
+    {
+        unsigned int gsi = domctl->u.gsi_permission.gsi;
+        int irq;
+        bool allow = domctl->u.gsi_permission.allow_access;
+
+        /* Check all pads are zero */
+        ret = -EINVAL;
+        for ( i = 0;
+              i < sizeof(domctl->u.gsi_permission.pad) /
+                  sizeof(domctl->u.gsi_permission.pad[0]);
+              ++i )
+            if ( domctl->u.gsi_permission.pad[i] )
+                goto out;
+
+        /*
+         * If the current domain is PV or has PIRQs, it has a mapping of
+         * gsi, pirq and irq, so XEN_DOMCTL_irq_permission should be used
+         * to grant irq permission.
+         */
+        ret = -EOPNOTSUPP;
+        if ( is_pv_domain(currd) || has_pirq(currd) )
+            goto out;
+
+        ret = -EINVAL;
+        if ( gsi >= nr_irqs_gsi || (irq = gsi_2_irq(gsi)) < 0 )
+            goto out;
+
+        ret = -EPERM;
+        if ( !irq_access_permitted(currd, irq) ||
+             xsm_irq_permission(XSM_HOOK, d, irq, allow) )
+            goto out;
+
+        if ( allow )
+            ret = irq_permit_access(d, irq);
+        else
+            ret = irq_deny_access(d, irq);
+
+    out:
+        break;
+    }
+
     case XEN_DOMCTL_getpageframeinfo3:
     {
         unsigned int num = domctl->u.getpageframeinfo3.num;
diff --git a/xen/arch/x86/include/asm/io_apic.h b/xen/arch/x86/include/asm/io_apic.h
index 78268ea8f666..7e86d8337758 100644
--- a/xen/arch/x86/include/asm/io_apic.h
+++ b/xen/arch/x86/include/asm/io_apic.h
@@ -213,5 +213,7 @@ unsigned highest_gsi(void);
 
 int ioapic_guest_read( unsigned long physbase, unsigned int reg, u32 *pval);
 int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val);
+int mp_find_ioapic(int gsi);
+int gsi_2_irq(int gsi);
 
 #endif
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index b48a64246548..23845c8cb11f 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -955,6 +955,23 @@ static int pin_2_irq(int idx, int apic, int pin)
     return irq;
 }
 
+int gsi_2_irq(int gsi)
+{
+    int ioapic, pin, irq;
+
+    ioapic = mp_find_ioapic(gsi);
+    if ( ioapic < 0 )
+        return -EINVAL;
+
+    pin = gsi - io_apic_gsi_base(ioapic);
+
+    irq = apic_pin_2_gsi_irq(ioapic, pin);
+    if ( irq <= 0 )
+        return -EINVAL;
+
+    return irq;
+}
+
 static inline int IO_APIC_irq_trigger(int irq)
 {
     int apic, idx, pin;
diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449c6..c95da0de5770 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -841,8 +841,7 @@ static struct mp_ioapic_routing {
 } mp_ioapic_routing[MAX_IO_APICS];
 
 
-static int mp_find_ioapic (
-	int			gsi)
+int mp_find_ioapic(int gsi)
 {
 	unsigned int		i;
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 2a49fe46ce25..f7ae8b19d27d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -464,6 +464,12 @@ struct xen_domctl_irq_permission {
     uint8_t pad[3];
 };
 
+/* XEN_DOMCTL_gsi_permission */
+struct xen_domctl_gsi_permission {
+    uint32_t gsi;
+    uint8_t allow_access;    /* flag to specify enable/disable of x86 gsi access */
+    uint8_t pad[3];
+};
 
 /* XEN_DOMCTL_iomem_permission */
 struct xen_domctl_iomem_permission {
@@ -1306,6 +1312,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_get_paging_mempool_size       85
 #define XEN_DOMCTL_set_paging_mempool_size       86
 #define XEN_DOMCTL_dt_overlay                    87
+#define XEN_DOMCTL_gsi_permission                88
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1328,6 +1335,7 @@ struct xen_domctl {
         struct xen_domctl_setdomainhandle   setdomainhandle;
         struct xen_domctl_setdebugging      setdebugging;
         struct xen_domctl_irq_permission    irq_permission;
+        struct xen_domctl_gsi_permission    gsi_permission;
         struct xen_domctl_iomem_permission  iomem_permission;
         struct xen_domctl_ioport_permission ioport_permission;
         struct xen_domctl_hypercall_init    hypercall_init;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 5e88c71b8e22..a5b134c91101 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -685,6 +685,7 @@ static int cf_check flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_shadow_op:
     case XEN_DOMCTL_ioport_permission:
     case XEN_DOMCTL_ioport_mapping:
+    case XEN_DOMCTL_gsi_permission:
 #endif
 #ifdef CONFIG_HAS_PASSTHROUGH
     /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:06:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:06:09 +0000
Message-ID: <b4b14998-8b17-4a22-9c1d-427be61c06a9@suse.com>
Date: Mon, 17 Jun 2024 11:05:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, dri-devel@lists.freedesktop.org,
 Daniel Vetter <daniel@ffwll.ch>, David Airlie <airlied@gmail.com>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com> <Zmxxbk-xWp9AjqIB@itl-email>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Zmxxbk-xWp9AjqIB@itl-email>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 14.06.2024 18:35, Demi Marie Obenour wrote:
> On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
>> On 13.06.2024 20:43, Demi Marie Obenour wrote:
>>> 2. Add support for `XEN_DOMCTL_memory_mapping` to use system RAM, not
>>>    just IOMEM.  Mappings made with `XEN_DOMCTL_memory_mapping` are
>>>    guaranteed to be revocable with
>>>    `XEN_DOMCTL_memory_mapping`, so all operations that would create
>>>    extra references to the mapped memory must be forbidden.  These
>>>    include, but may not be limited to:
>>>
>>>    1. Granting the pages to the same or other domains.
>>>    2. Mapping into another domain using `XEN_DOMCTL_memory_mapping`.
>>>    3. Another domain accessing the pages using the foreign memory APIs,
>>>       unless it is privileged over the domain that owns the pages.
>>
>> All of which may call for actually converting the memory to kind-of-MMIO,
>> with a means to later convert it back.
> 
> Would this support the case where the mapping domain is not fully
> privileged, and where it might be a PV guest?

I suppose that should be a goal.

Jan
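Jan's suggestion of transiently converting RAM to "kind-of-MMIO" can be pictured as a per-page state machine: conversion is only legal when no extra references exist, reference-taking operations are forbidden while converted, and therefore revocation is guaranteed to succeed. The following is a toy model only; the names (`page_model`, `convert_to_mmio`, etc.) are illustrative and not Xen code:

```c
#include <stdbool.h>

/* Toy model of the "kind-of-MMIO" idea: a system-RAM page mapped into a
 * guest via XEN_DOMCTL_memory_mapping is transiently treated like MMIO,
 * and converted back on revocation.  Hypothetical names, not Xen code. */

enum page_state { PS_RAM, PS_KIND_OF_MMIO };

struct page_model {
    enum page_state state;
    unsigned int extra_refs;   /* grants, foreign mappings, ... */
};

/* Conversion must fail if anything still holds an extra reference,
 * since that would make later revocation unsafe. */
static bool convert_to_mmio(struct page_model *pg)
{
    if (pg->state != PS_RAM || pg->extra_refs != 0)
        return false;
    pg->state = PS_KIND_OF_MMIO;
    return true;
}

/* While kind-of-MMIO, operations that would take extra references
 * (granting, a second XEN_DOMCTL_memory_mapping, foreign mappings by
 * non-privileged domains) are forbidden. */
static bool take_extra_ref(struct page_model *pg)
{
    if (pg->state == PS_KIND_OF_MMIO)
        return false;
    pg->extra_refs++;
    return true;
}

/* Revocation converts back; it always succeeds because no extra
 * references could have been taken in the meantime. */
static bool convert_back_to_ram(struct page_model *pg)
{
    if (pg->state != PS_KIND_OF_MMIO)
        return false;
    pg->state = PS_RAM;
    return true;
}
```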


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:08:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:08:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742015.1148728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8LI-0005x2-Bh; Mon, 17 Jun 2024 09:08:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742015.1148728; Mon, 17 Jun 2024 09:08:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8LI-0005wv-93; Mon, 17 Jun 2024 09:08:00 +0000
Received: by outflank-mailman (input) for mailman id 742015;
 Mon, 17 Jun 2024 09:07:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJ8LH-0005wo-Mw
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:07:59 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16ac8145-2c89-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 11:07:57 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a6f7720e6e8so120212966b.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 02:07:57 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f6fb084f5sm299310766b.163.2024.06.17.02.07.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 02:07:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16ac8145-2c89-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718615277; x=1719220077; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=wmCUQhRrp+U/l5cgP6xim2f8AZbvbWBWfJybaQLpqlg=;
        b=Bf3RxvfHMyX1ttpc2GJl4f2zwy54y/n0HNybQqKq/5js4wO/iuNq0MCdhkD9oqG/0t
         OkzVicV2Vu2BjvFHpO7u1r/QOPvxDxaH61UfJbJbcqIyzFjSee/9BLm4R9fEywzzLJMz
         hvzNTC2WZBX5RmP55rIlB6gMeEkvV1IJHNLWIlw8lGjoWojbftolEVLNdomFB5K6JvML
         3TVJqBNCxcAIdcHuh++Q/R2ldxXO+8dA4X/Jo4qV45U6lFgmLyl+Os1/YxadAyjal7W8
         yAWK26ZkoWz0GqN19FhRQTC85Ra2MDGZoNkZaDZ2C0f4sLBv90AgUVySbNz53uJlEsLE
         SKjg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718615277; x=1719220077;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wmCUQhRrp+U/l5cgP6xim2f8AZbvbWBWfJybaQLpqlg=;
        b=xUtGY/m9bx+5O48t/bLngPt98QfzLY6ys5zhB9rTsyG6Gj3C3LwWBSQ/o84PBSQWTJ
         FRo2wZ0PvyLLCTyELsWvE0gni/mzQdAzSR+WG0CwWXQAG6EITrd6f5ngshltsA005uQN
         I2W5Ij+dwnM6BZE57w/BqQH7xGUqJGCxfd0QkNNF/iwXffwV8+S5OrCqQPbptJaanH2X
         FxyvaqSDv7YYiDOlY9kojsPR8FvFyeb6m4ZlJdj7imWk7ImhDXjpVt2AdL48BKpHFMer
         UWIq2btfZqklnS60+jivPOk3N5do7cuR11jksTgKOJgvbBKa0V6iBdPtjamPQeamMEtR
         Cv5Q==
X-Received: by 2002:a17:906:a0c:b0:a6f:d57:aedc with SMTP id a640c23a62f3a-a6f60dc50eamr754825366b.57.1718615276836;
        Mon, 17 Jun 2024 02:07:56 -0700 (PDT)
Message-ID: <10c2ab19-e2b7-4f5f-ae73-213e0194bb8e@suse.com>
Date: Mon, 17 Jun 2024 11:07:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com> <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com> <Zmxze4a0PZbwcLSb@itl-email>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Zmxze4a0PZbwcLSb@itl-email>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14.06.2024 18:44, Demi Marie Obenour wrote:
> On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
>> On 14.06.2024 09:21, Roger Pau Monné wrote:
>>> I'm not sure it's possible to ensure that when using system RAM such
>>> memory comes from the guest rather than the host, as it would likely
>>> require some very intrusive hooks into the kernel logic, and
>>> negotiation with the guest to allocate the requested amount of
>>> memory and hand it over to dom0.  If the maximum size of the buffer is
>>> known in advance maybe dom0 can negotiate with the guest to allocate
>>> such a region and grant it access to dom0 at driver attachment time.
>>
>> Besides the thought of transiently converting RAM to kind-of-MMIO, this
>> makes me think of another possible option: Could Dom0 transfer ownership
>> of the RAM that wants mapping in the guest (remotely resembling
>> grant-transfer)? Would require the guest to have ballooned down enough
>> first, of course. (In both cases it would certainly need working out how
>> the conversion / transfer back could be made to work safely and reasonably
>> cleanly.)
> 
> The kernel driver needs to be able to reclaim the memory at any time.
> My understanding is that this is used to migrate memory between VRAM and
> system RAM.  It might also be used for other purposes.

Except: How would the kernel driver reclaim the memory when it's mapped
by a DomU?

Jan
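The ownership-transfer idea discussed above, with its precondition that the guest has ballooned down first, amounts to an accounting check against the receiving domain's allocation headroom (in Xen terms, its tot_pages versus max_pages). A minimal sketch of that check, as a hypothetical model rather than the hypervisor's actual code:

```c
#include <stdbool.h>

/* Toy model of grant-transfer-like page handover from dom0 to a guest.
 * Field names echo Xen's struct domain accounting (tot_pages/max_pages),
 * but this is an illustrative sketch, not Xen code. */

struct dom_model {
    unsigned long tot_pages;   /* pages currently owned */
    unsigned long max_pages;   /* allocation ceiling */
};

/* Transfer one page of ownership.  The receiving guest must have
 * ballooned down enough that it has headroom below its ceiling;
 * otherwise the transfer is refused. */
static bool transfer_page(struct dom_model *from, struct dom_model *to)
{
    if (from->tot_pages == 0)
        return false;
    if (to->tot_pages >= to->max_pages)
        return false;          /* guest has not ballooned down enough */
    from->tot_pages--;
    to->tot_pages++;
    return true;
}
```

Jan's follow-up question is exactly about the reverse direction: a transfer back (reclaim) would additionally need a safe way to pull the page out from under the guest, which this accounting alone does not provide.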


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:13:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:13:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742041.1148739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Qo-0008FF-02; Mon, 17 Jun 2024 09:13:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742041.1148739; Mon, 17 Jun 2024 09:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8Qn-0008F8-SH; Mon, 17 Jun 2024 09:13:41 +0000
Received: by outflank-mailman (input) for mailman id 742041;
 Mon, 17 Jun 2024 09:13:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJ8Qm-0008Es-NO
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:13:40 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e2db5e74-2c89-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 11:13:40 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6f7b785a01so148881966b.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 02:13:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da3f95sm493443566b.35.2024.06.17.02.13.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 02:13:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2db5e74-2c89-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718615619; x=1719220419; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=4yd9cS+9NgACeGGqFmVFjNC48jcTsi3TlckJVh/Dcno=;
        b=H2r8BZdniQ9triOvFCfaIrntv1WEvu2Dh2BTgId6+HqcvEeS1xELvlADjBVPOQYRjw
         31zFumBBjX1JXut8aGCtCxWjtCpdU3uWqKWYYJvv0Zbwv2GZeMRk3i3MussZKRC1GH20
         eVQD6xEallzgtdPRPyaagz8JO1KDWH2gnJg8rQbG4+QooexbYOwkgtZVW1r0cHcNgr90
         lQdrEkz0No8PX1kyTglAcJfeRMgMIzw28gESfwNJEJkDNXZVoSvFG7RygooKvFUBnMvx
         WXLDZTEJ2J31bnZJHmzHDR4KjVcL3RpmHViZdi02SI+wgtaG6smbt7EpMn1zGsfMY1RH
         ZyqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718615619; x=1719220419;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4yd9cS+9NgACeGGqFmVFjNC48jcTsi3TlckJVh/Dcno=;
        b=QH+Rc9EzKp7lQDq6g/aN0AvUfQt1PdeehComV62rnUHkoeQ6ZQ+5bBVeIeBY1aCy0X
         ZNFPDuJ3Q0Zn3tiA6Lhf08dj0x92H2ARdwHddQC/uVbIQLJg1RJwTFYmeiQ89eiVMVsC
         eJ30T+VjUAtClBNM4421UWtKl3itqEDirCSj7z9D+kQPwhXP2cV2rRTuUxmAxwWcnqHd
         u+G407etWCoC8hUhfdOVrpcPpP8x+b02jVNRAqJRqX/NKRqLX6uWhDysve5xbihHUlvX
         E3Zm8ySpYKPXrecd/XFD1gPMMIpgGmyu/usqX4Eo1j+/kCVNxBZpfk1DQxZVulcre2oJ
         pHLw==
X-Received: by 2002:a17:906:22db:b0:a6f:49b1:dec5 with SMTP id a640c23a62f3a-a6f60d421d0mr507882666b.46.1718615619460;
        Mon, 17 Jun 2024 02:13:39 -0700 (PDT)
Message-ID: <7898be60-3126-456a-ac0e-b7dbd26e0764@suse.com>
Date: Mon, 17 Jun 2024 11:13:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com> <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com> <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <Zm-FidjSK3mOieSC@itl-email>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.06.2024 02:38, Demi Marie Obenour wrote:
> On Fri, Jun 14, 2024 at 10:39:37AM +0200, Roger Pau Monné wrote:
>> On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
>>> On 14.06.2024 09:21, Roger Pau Monné wrote:
>>>> On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
>>>>> On 13.06.2024 20:43, Demi Marie Obenour wrote:
>>>>>> GPU acceleration requires that pageable host memory be mappable
>>>>>> into a guest.
>>>>>
>>>>> I'm sure it was explained in the session, which sadly I couldn't attend.
>>>>> I've been asking Ray and Xenia the same before, but I'm afraid it still
>>>>> hasn't become clear to me why this is a _requirement_. After all that's
>>>>> against what we're doing elsewhere (i.e. so far it has always been
>>>>> guest memory that's mapped in the host). I can appreciate that it might
>>>>> be more difficult to implement, but avoiding a violation of this fundamental
>>>>> (kind of) rule might be worth the price (and would avoid other
>>>>> complexities, of which there may be lurking more than what you enumerate
>>>>> below).
>>>>
>>>> My limited understanding (please someone correct me if wrong) is that
>>>> the GPU buffer (or context I think it's also called?) is always
>>>> allocated from dom0 (the owner of the GPU).  The underlying memory
>>>> addresses of such a buffer need to be mapped into the guest.  The
>>>> buffer backing memory might be GPU MMIO from the device BAR(s) or
>>>> system RAM, and such a buffer can be paged by the dom0 kernel at any
>>>> time (iow: changing the backing memory from MMIO to RAM or vice
>>>> versa).  Also, the buffer must be contiguous in physical address
>>>> space.
>>>
>>> This last one in particular would of course be a severe restriction.
>>> Yet: There's an IOMMU involved, isn't there?
>>
>> Yup, IIRC that's why Ray said it was much easier for them to
>> support VirtIO GPUs from a PVH dom0 rather than a classic PV one.
>>
>> It might be easier to implement from a classic PV dom0 if there's
>> pv-iommu support, so that dom0 can create its own contiguous memory
>> buffers from the device PoV.
> 
> What makes PVH an improvement here?  I thought PV dom0 uses an identity
> mapping for the IOMMU,

True, but see how Roger mentioned PV IOMMU (which would allow a domain
to move away from this identity mapping).

Jan

> while a PVH dom0 uses an IOMMU that mirrors the
> dom0 second-stage page tables.  In both cases, the device physical
> addresses are identical to dom0’s physical addresses.
> 
> PV is terrible for many reasons, so I’m okay with focusing on PVH dom0,
> but I’d like to know why there is a difference.
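The distinction Roger raises, an identity IOMMU mapping versus a PV-IOMMU interface that lets dom0 build device-contiguous buffers out of scattered host pages, can be illustrated with a toy device-frame to machine-frame table. This is an illustrative model with made-up names, not Xen's IOMMU code:

```c
#include <stddef.h>

/* Toy dfn->mfn table.  With an identity mapping (what a PV or PVH dom0
 * effectively gets today), device frame numbers equal host frame
 * numbers, so a device-contiguous buffer needs host-contiguous memory.
 * A PV-IOMMU-style interface would let dom0 map scattered host frames
 * at a contiguous range of device addresses instead. */

#define TABLE_SIZE 16

struct iommu_model {
    unsigned long dfn_to_mfn[TABLE_SIZE];  /* device frame -> machine frame */
};

/* Identity mapping: dfn == mfn for every entry. */
static void setup_identity(struct iommu_model *io)
{
    for (size_t i = 0; i < TABLE_SIZE; i++)
        io->dfn_to_mfn[i] = i;
}

/* PV-IOMMU-style mapping: scattered mfns appear contiguous to the
 * device starting at dfn_base.  (Bounds checking omitted for brevity;
 * callers must keep dfn_base + count <= TABLE_SIZE.) */
static void map_contiguous(struct iommu_model *io, unsigned long dfn_base,
                           const unsigned long *mfns, size_t count)
{
    for (size_t i = 0; i < count; i++)
        io->dfn_to_mfn[dfn_base + i] = mfns[i];
}
```

Under the identity scheme both PV and PVH dom0 see device addresses equal to their own physical addresses, which is consistent with Demi's observation; the practical difference would only appear once a non-identity (PV-IOMMU) mapping is available.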



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:15:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:15:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742047.1148747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8SJ-0000Kl-94; Mon, 17 Jun 2024 09:15:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742047.1148747; Mon, 17 Jun 2024 09:15:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8SJ-0000Ke-6C; Mon, 17 Jun 2024 09:15:15 +0000
Received: by outflank-mailman (input) for mailman id 742047;
 Mon, 17 Jun 2024 09:15:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d9pI=NT=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJ8SH-0000KX-KJ
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:15:13 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on2061b.outbound.protection.outlook.com
 [2a01:111:f403:200a::61b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1865535f-2c8a-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 11:15:11 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by MN2PR12MB4095.namprd12.prod.outlook.com (2603:10b6:208:1d1::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Mon, 17 Jun
 2024 09:15:06 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Mon, 17 Jun 2024
 09:15:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1865535f-2c8a-11ef-b4bb-af5377834399
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H9Jyk+7wOtcRvWRjFs9/khAmI4dLKk+Jy3McANjw0MA=;
 b=n+VXnCVhjY+Jz0a9DMxOrpNSc100rPoXFuI4jbwS8xmP7FflGYJOj4BFeW+EgPSxHgmQXTwOxBt1qKx3MZXw33272vF3hDffTO6LlKRQJBHmVEFw+xkie3cO1FlAZ1OIA9CHc2h/5g/TwgfZ7cu5o9+82YD5KdbbUJWhN/7V/K8=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: "Daniel P . Smith" <dpsmith@apertussolutions.com>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, Juergen
 Gross <jgross@suse.com>, "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
	"Huang, Ray" <Ray.Huang@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index: AQHawJTqXyg3PMEiVUiLHPQ86aZkYrHMMTaA
Date: Mon, 17 Jun 2024 09:15:06 +0000
Message-ID:
 <BL1PR12MB58493F7AE460AF3388F6741DE7CD2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
In-Reply-To: <20240617090035.839640-6-Jiqian.Chen@amd.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|MN2PR12MB4095:EE_
x-ms-office365-filtering-correlation-id: 9ea5339e-9232-4afc-c5d5-08dc8eadfa7d
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <CBE5ACA2B658DC4494F2D31409BE3992@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ea5339e-9232-4afc-c5d5-08dc8eadfa7d
X-MS-Exchange-CrossTenant-originalarrivaltime: 17 Jun 2024 09:15:06.2372
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: j1ZKrQZZ5k/LFOdlnzQL6zWxe78mIvsLVi4M3Rhs1LkN2nIGPJKN+6hDSyziju7DCwZrSQDFbRCHEX0KfhdfFw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4095

Hi Daniel,

On 2024/6/17 17:00, Jiqian Chen wrote:
> Some type of domain don't have PIRQs, like PVH, it doesn't do
> PHYSDEVOP_map_pirq for each gsi. When passthrough a device
> to guest base on PVH dom0, callstack
> pci_add_dm_done->XEN_DOMCTL_irq_permission will fail at function
> domain_pirq_to_irq, because PVH has no mapping of gsi, pirq and
> irq on Xen side.
> What's more, current hypercall XEN_DOMCTL_irq_permission requires
> passing in pirq, it is not suitable for dom0 that doesn't have
> PIRQs.
> 
> So, add a new hypercall XEN_DOMCTL_gsi_permission to grant the
> permission of irq(translate from gsi) to dumU when dom0 has no
> PIRQs.
> 
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
> ---
> RFC: it needs review and needs to wait for the corresponding third patch on linux kernel side to be merged.
> ---
>  tools/include/xenctrl.h            |  5 +++
>  tools/libs/ctrl/xc_domain.c        | 15 +++++++
>  tools/libs/light/libxl_pci.c       | 67 +++++++++++++++++++++++++++---
>  xen/arch/x86/domctl.c              | 43 +++++++++++++++++++
>  xen/arch/x86/include/asm/io_apic.h |  2 +
>  xen/arch/x86/io_apic.c             | 17 ++++++++
>  xen/arch/x86/mpparse.c             |  3 +-
>  xen/include/public/domctl.h        |  8 ++++
>  xen/xsm/flask/hooks.c              |  1 +
>  9 files changed, 153 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index a0381f74d24b..f3feb6848e25 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1382,6 +1382,11 @@ int xc_domain_irq_permission(xc_interface *xch,
>                                uint32_t pirq,
>                                bool allow_access);
>  
> +int xc_domain_gsi_permission(xc_interface *xch,
> +                             uint32_t domid,
> +                             uint32_t gsi,
> +                             bool allow_access);
> +
>  int xc_domain_iomem_permission(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long first_mfn,
> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> index f2d9d14b4d9f..8540e84fda93 100644
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -1394,6 +1394,21 @@ int xc_domain_irq_permission(xc_interface *xch,
>      return do_domctl(xch, &domctl);
>  }
>  
> +int xc_domain_gsi_permission(xc_interface *xch,
> +                             uint32_t domid,
> +                             uint32_t gsi,
> +                             bool allow_access)
> +{
> +    struct xen_domctl domctl = {
> +        .cmd = XEN_DOMCTL_gsi_permission,
> +        .domain = domid,
> +        .u.gsi_permission.gsi = gsi,
> +        .u.gsi_permission.allow_access = allow_access,
> +    };
> +
> +    return do_domctl(xch, &domctl);
> +}
> +
>  int xc_domain_iomem_permission(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long first_mfn,
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 376f91759ac6..f027f22c0028 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1431,6 +1431,9 @@ static void pci_add_dm_done(libxl__egc *egc,
>      uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
>      uint32_t domainid = domid;
>      bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
> +#ifdef CONFIG_X86
> +    xc_domaininfo_t info;
> +#endif
>  
>      /* Convenience aliases */
>      bool starting = pas->starting;
> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>              rc = ERROR_FAIL;
>              goto out;
>          }
> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
> +#ifdef CONFIG_X86
> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>          if (r < 0) {
> -            LOGED(ERROR, domainid,
> -                  "xc_domain_irq_permission irq=%d (error=%d)", irq, r);
> +            LOGED(ERROR, domainid, "getdomaininfo failed (error=%d)", errno);
>              fclose(f);
>              rc = ERROR_FAIL;
>              goto out;
>          }
> +        if (info.flags & XEN_DOMINF_hvm_guest &&
> +            !(info.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ) &&
> +            gsi > 0) {
> +            r = xc_domain_gsi_permission(ctx->xch, domid, gsi, 1);
> +            if (r < 0) {
> +                LOGED(ERROR, domainid,
> +                    "xc_domain_gsi_permission gsi=%d (error=%d)", gsi, errno);
> +                fclose(f);
> +                rc = ERROR_FAIL;
> +                goto out;
> +            }
> +        }
> +        else
> +#endif
> +        {
> +            r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
> +            if (r < 0) {
> +                LOGED(ERROR, domainid,
> +                    "xc_domain_irq_permission irq=%d (error=%d)", irq, errno);
> +                fclose(f);
> +                rc = ERROR_FAIL;
> +                goto out;
> +            }
> +        }
>      }
>      fclose(f);
>  
> @@ -2200,6 +2228,10 @@ static void pci_remove_detached(libxl__egc *egc,
>  #endif
>      uint32_t domainid = prs->domid;
>      bool isstubdom;
> +#ifdef CONFIG_X86
> +    int r;
> +    xc_domaininfo_t info;
> +#endif
>  
>      /* Convenience aliases */
>      libxl_device_pci *const pci = &prs->pci;
> @@ -2287,9 +2319,32 @@ skip_bar:
>               */
>              LOGED(ERROR, domid, "xc_physdev_unmap_pirq irq=%d", irq);
>          }
> -        rc = xc_domain_irq_permission(ctx->xch, domid, irq, 0);
> -        if (rc < 0) {
> -            LOGED(ERROR, domid, "xc_domain_irq_permission irq=%d", irq);
> +#ifdef CONFIG_X86
> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
> +        if (r < 0) {
> +            LOGED(ERROR, domid, "getdomaininfo failed (error=%d)", errno);
> +            fclose(f);
> +            rc = ERROR_FAIL;
> +            goto skip_legacy_irq;
> +        }
> +        if (info.flags & XEN_DOMINF_hvm_guest &&
> +            !(info.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ) &&
> +            gsi > 0) {
> +            r = xc_domain_gsi_permission(ctx->xch, domid, gsi, 0);
> +            if (r < 0) {
> +                LOGED(ERROR, domid,
> +                    "xc_domain_gsi_permission gsi=%d (error=%d)", gsi, errno);
> +                rc = ERROR_FAIL;
> +            }
> +        }
> +        else
> +#endif
> +        {
> +            rc = xc_domain_irq_permission(ctx->xch, domid, irq, 0);
> +            if (rc < 0) {
> +                LOGED(ERROR, domid, "xc_domain_irq_permission irq=%d", irq);
> +            }
>          }
>      }
>  
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 335aedf46d03..6b465bbc6ec0 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -36,6 +36,7 @@
>  #include <asm/xstate.h>
>  #include <asm/psr.h>
>  #include <asm/cpu-policy.h>
> +#include <asm/io_apic.h>
>  
>  static int update_domain_cpu_policy(struct domain *d,
>                                      xen_domctl_cpu_policy_t *xdpc)
> @@ -237,6 +238,48 @@ long arch_do_domctl(
>          break;
>      }
>  
> +    case XEN_DOMCTL_gsi_permission:
> +    {
> +        unsigned int gsi = domctl->u.gsi_permission.gsi;
> +        int irq;
> +        bool allow = domctl->u.gsi_permission.allow_access;
> +
> +        /* Check all pads are zero */
> +        ret = -EINVAL;
> +        for ( i = 0;
> +              i < sizeof(domctl->u.gsi_permission.pad) /
> +                  sizeof(domctl->u.gsi_permission.pad[0]);
> +              ++i )
> +            if ( domctl->u.gsi_permission.pad[i] )
> +                goto out;
> +
> +        /*
> +         * If current domain is PV or it has PIRQ flag, it has a mapping
> +         * of gsi, pirq and irq, so it should use XEN_DOMCTL_irq_permission
> +         * to grant irq permission.
> +         */
> +        ret = -EOPNOTSUPP;
> +        if ( is_pv_domain(currd) || has_pirq(currd) )
> +            goto out;
> +
> +        ret = -EINVAL;
> +        if ( gsi >= nr_irqs_gsi || (irq = gsi_2_irq(gsi)) < 0 )
> +            goto out;
> +
> +        ret = -EPERM;
> +        if ( !irq_access_permitted(currd, irq) ||
> +             xsm_irq_permission(XSM_HOOK, d, irq, allow) )
Copy the question of Jan from the previous version to here:
Is it okay to issue the XSM check using the translated value, not the one that was originally passed into the hypercall?

> +            goto out;
> +
> +        if ( allow )
> +            ret = irq_permit_access(d, irq);
> +        else
> +            ret = irq_deny_access(d, irq);
> +
> +    out:
> +        break;
> +    }
> +
>      case XEN_DOMCTL_getpageframeinfo3:
>      {
>          unsigned int num = domctl->u.getpageframeinfo3.num;
> diff --git a/xen/arch/x86/include/asm/io_apic.h b/xen/arch/x86/include/asm/io_apic.h
> index 78268ea8f666..7e86d8337758 100644
> --- a/xen/arch/x86/include/asm/io_apic.h
> +++ b/xen/arch/x86/include/asm/io_apic.h
> @@ -213,5 +213,7 @@ unsigned highest_gsi(void);
>  
>  int ioapic_guest_read( unsigned long physbase, unsigned int reg, u32 *pval);
>  int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val);
> +int mp_find_ioapic(int gsi);
> +int gsi_2_irq(int gsi);
>  
>  #endif
> diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
> index b48a64246548..23845c8cb11f 100644
> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -955,6 +955,23 @@ static int pin_2_irq(int idx, int apic, int pin)
>      return irq;
>  }
>  
> +int gsi_2_irq(int gsi)
> +{
> +    int ioapic, pin, irq;
> +
> +    ioapic = mp_find_ioapic(gsi);
> +    if ( ioapic < 0 )
> +        return -EINVAL;
> +
> +    pin = gsi - io_apic_gsi_base(ioapic);
> +
> +    irq = apic_pin_2_gsi_irq(ioapic, pin);
> +    if ( irq <= 0 )
> +        return -EINVAL;
> +
> +    return irq;
> +}
> +
>  static inline int IO_APIC_irq_trigger(int irq)
>  {
>      int apic, idx, pin;
> diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
> index d8ccab2449c6..c95da0de5770 100644
> --- a/xen/arch/x86/mpparse.c
> +++ b/xen/arch/x86/mpparse.c
> @@ -841,8 +841,7 @@ static struct mp_ioapic_routing {
>  } mp_ioapic_routing[MAX_IO_APICS];
>  
>  
> -static int mp_find_ioapic (
> -	int			gsi)
> +int mp_find_ioapic(int gsi)
>  {
>  	unsigned int		i;
>  
> diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
> index 2a49fe46ce25..f7ae8b19d27d 100644
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -464,6 +464,12 @@ struct xen_domctl_irq_permission {
>      uint8_t pad[3];
>  };
>  
> +/* XEN_DOMCTL_gsi_permission */
> +struct xen_domctl_gsi_permission {
> +    uint32_t gsi;
> +    uint8_t allow_access;    /* flag to specify enable/disable of x86 gsi access */
> +    uint8_t pad[3];
> +};
>  
>  /* XEN_DOMCTL_iomem_permission */
>  struct xen_domctl_iomem_permission {
> @@ -1306,6 +1312,7 @@ struct xen_domctl {
>  #define XEN_DOMCTL_get_paging_mempool_size       85
>  #define XEN_DOMCTL_set_paging_mempool_size       86
>  #define XEN_DOMCTL_dt_overlay                    87
> +#define XEN_DOMCTL_gsi_permission                88
>  #define XEN_DOMCTL_gdbsx_guestmemio            1000
>  #define XEN_DOMCTL_gdbsx_pausevcpu             1001
>  #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
> @@ -1328,6 +1335,7 @@ struct xen_domctl {
>          struct xen_domctl_setdomainhandle   setdomainhandle;
>          struct xen_domctl_setdebugging      setdebugging;
>          struct xen_domctl_irq_permission    irq_permission;
> +        struct xen_domctl_gsi_permission    gsi_permission;
>          struct xen_domctl_iomem_permission  iomem_permission;
>          struct xen_domctl_ioport_permission ioport_permission;
>          struct xen_domctl_hypercall_init    hypercall_init;
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 5e88c71b8e22..a5b134c91101 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -685,6 +685,7 @@ static int cf_check flask_domctl(struct domain *d, int cmd)
>      case XEN_DOMCTL_shadow_op:
>      case XEN_DOMCTL_ioport_permission:
>      case XEN_DOMCTL_ioport_mapping:
> +    case XEN_DOMCTL_gsi_permission:
>  #endif
>  #ifdef CONFIG_HAS_PASSTHROUGH
>      /*

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:50:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:50:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742061.1148758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8zv-00076M-13; Mon, 17 Jun 2024 09:49:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742061.1148758; Mon, 17 Jun 2024 09:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ8zu-00076F-Tj; Mon, 17 Jun 2024 09:49:58 +0000
Received: by outflank-mailman (input) for mailman id 742061;
 Mon, 17 Jun 2024 09:49:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YyRo=NT=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sJ8zt-000769-PJ
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:49:57 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f396bfb3-2c8e-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 11:49:55 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.163.161.70])
 by support.bugseng.com (Postfix) with ESMTPSA id EB7404EE0738;
 Mon, 17 Jun 2024 11:49:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f396bfb3-2c8e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Juergen Gross <jgross@suse.com>
Subject: [XEN PATCH] xen: add explicit comment to identify notifier patterns
Date: Mon, 17 Jun 2024 11:49:46 +0200
Message-Id: <96a1b98d7831154c58d39b85071b9670de94aed0.1718617636.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 16.4 states that every `switch' statement shall have a
`default' label and, as part of the default case, a statement or a
comment prior to the terminating break statement.

This patch addresses some violations of the rule related to the
"notifier pattern": a frequently used pattern whereby only a few
values are handled by the switch statement and nothing needs to be
done for the others, so the default case is deliberately empty.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/arm/cpuerrata.c            | 1 +
 xen/arch/arm/gic.c                  | 1 +
 xen/arch/arm/irq.c                  | 4 ++++
 xen/arch/arm/mmu/p2m.c              | 1 +
 xen/arch/arm/percpu.c               | 1 +
 xen/arch/arm/smpboot.c              | 1 +
 xen/arch/arm/time.c                 | 1 +
 xen/arch/arm/vgic-v3-its.c          | 2 ++
 xen/arch/x86/cpu/mcheck/mce.c       | 4 ++++
 xen/arch/x86/genapic/x2apic.c       | 3 +++
 xen/arch/x86/hvm/hvm.c              | 1 +
 xen/arch/x86/nmi.c                  | 1 +
 xen/arch/x86/percpu.c               | 3 +++
 xen/arch/x86/psr.c                  | 3 +++
 xen/arch/x86/smpboot.c              | 3 +++
 xen/common/rcupdate.c               | 1 +
 xen/common/sched/core.c             | 1 +
 xen/common/sched/cpupool.c          | 1 +
 xen/common/spinlock.c               | 1 +
 xen/common/tasklet.c                | 1 +
 xen/common/timer.c                  | 1 +
 xen/drivers/cpufreq/cpufreq.c       | 1 +
 xen/drivers/passthrough/x86/hvm.c   | 3 +++
 xen/drivers/passthrough/x86/iommu.c | 3 +++
 24 files changed, 43 insertions(+)

diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 2b7101ea25..69c30aecd8 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -730,6 +730,7 @@ static int cpu_errata_callback(struct notifier_block *nfb,
         rc = enable_nonboot_cpu_caps(arm_errata);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 3eaf670fd7..dc5408a456 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -463,6 +463,7 @@ static int cpu_gic_callback(struct notifier_block *nfb,
         release_irq(gic_hw_ops->info->maintenance_irq, NULL);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index c60502444c..61ca6f5b87 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -127,6 +127,10 @@ static int cpu_callback(struct notifier_block *nfb, unsigned long action,
             printk(XENLOG_ERR "Unable to allocate local IRQ for CPU%u\n",
                    cpu);
         break;
+
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c
index 1725cca649..bf7c66155d 100644
--- a/xen/arch/arm/mmu/p2m.c
+++ b/xen/arch/arm/mmu/p2m.c
@@ -1839,6 +1839,7 @@ static int cpu_virt_paging_callback(struct notifier_block *nfb,
         setup_virt_paging_one(NULL);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/percpu.c b/xen/arch/arm/percpu.c
index 87fe960330..81f91f05bb 100644
--- a/xen/arch/arm/percpu.c
+++ b/xen/arch/arm/percpu.c
@@ -66,6 +66,7 @@ static int cpu_percpu_callback(
         free_percpu_area(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 04e363088d..3d481e59f9 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -591,6 +591,7 @@ static int cpu_smpboot_callback(struct notifier_block *nfb,
         remove_cpu_sibling_map(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index e74d30d258..27cbfae874 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -382,6 +382,7 @@ static int cpu_time_callback(struct notifier_block *nfb,
         deinit_timer_interrupt();
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 70b5aeb822..a33ff64ff2 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -1194,6 +1194,7 @@ static void sanitize_its_base_reg(uint64_t *reg)
         r |= GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
@@ -1206,6 +1207,7 @@ static void sanitize_its_base_reg(uint64_t *reg)
         r |= GIC_BASER_CACHE_RaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 32c1b2756b..222b174bbb 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -722,6 +722,10 @@ static int cf_check cpu_callback(
         if ( park_offline_cpus )
             cpu_bank_free(cpu);
         break;
+
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index 371dd100c7..d271102f9f 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -238,6 +238,9 @@ static int cf_check update_clusterinfo(
         }
         FREE_CPUMASK_VAR(per_cpu(scratch_mask, cpu));
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(err);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8334ab1711..00c360cf24 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -123,6 +123,7 @@ static int cf_check cpu_callback(
         alternative_vcall(hvm_funcs.cpu_dead, cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index 9793fa2316..105efa5a71 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -434,6 +434,7 @@ static int cf_check cpu_nmi_callback(
         kill_timer(&per_cpu(nmi_timer, cpu));
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/x86/percpu.c b/xen/arch/x86/percpu.c
index 3205eacea6..627b56b9f3 100644
--- a/xen/arch/x86/percpu.c
+++ b/xen/arch/x86/percpu.c
@@ -84,6 +84,9 @@ static int cf_check cpu_percpu_callback(
         if ( park_offline_cpus )
             free_percpu_area(cpu);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/psr.c b/xen/arch/x86/psr.c
index 0b9631ac44..e76b129e6c 100644
--- a/xen/arch/x86/psr.c
+++ b/xen/arch/x86/psr.c
@@ -1661,6 +1661,9 @@ static int cf_check cpu_callback(
     case CPU_DEAD:
         psr_cpu_fini(cpu);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 8aa621533f..5b9b196d58 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1134,6 +1134,9 @@ static int cf_check cpu_smpboot_callback(
     case CPU_REMOVE:
         cpu_smpboot_free(cpu, true);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 212a99acd8..0fe4097544 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -657,6 +657,7 @@ static int cf_check cpu_callback(
         rcu_offline_cpu(&this_cpu(rcu_data), &rcu_ctrlblk, rdp);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d84b65f197..dffa1ef476 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2907,6 +2907,7 @@ static int cf_check cpu_schedule_callback(
         cpu_schedule_down(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index ad8f608462..c7117f4243 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1073,6 +1073,7 @@ static int cf_check cpu_callback(
         cpupool_cpu_remove_forced(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 28c6e9d3ac..bf082478db 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -55,6 +55,7 @@ static int cf_check cpu_lockdebug_callback(struct notifier_block *nfb,
         break;
 
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/tasklet.c b/xen/common/tasklet.c
index 4c8d87a338..879b1f0d80 100644
--- a/xen/common/tasklet.c
+++ b/xen/common/tasklet.c
@@ -232,6 +232,7 @@ static int cf_check cpu_callback(
         migrate_tasklets_from_cpu(cpu, &per_cpu(softirq_tasklet_list, cpu));
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/timer.c b/xen/common/timer.c
index a21798b76f..60e9a1493e 100644
--- a/xen/common/timer.c
+++ b/xen/common/timer.c
@@ -677,6 +677,7 @@ static int cf_check cpu_callback(
         break;
 
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 8659ad3aee..9584b55398 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -682,6 +682,7 @@ static int cf_check cpu_callback(
         (void)cpufreq_del_cpu(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index d3627e4af7..e5b6be4794 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -1122,6 +1122,9 @@ static int cf_check cpu_callback(
          */
         ASSERT(list_empty(&per_cpu(dpci_list, cpu)));
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return NOTIFY_DONE;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cc0062b027..f0c84eeb85 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -749,6 +749,9 @@ static int cf_check cpu_callback(
         if ( !page_list_empty(list) )
             tasklet_schedule(tasklet);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return NOTIFY_DONE;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 09:55:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 09:55:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742069.1148768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ94m-00008D-Ig; Mon, 17 Jun 2024 09:55:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742069.1148768; Mon, 17 Jun 2024 09:55:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ94m-000081-Fd; Mon, 17 Jun 2024 09:55:00 +0000
Received: by outflank-mailman (input) for mailman id 742069;
 Mon, 17 Jun 2024 09:54:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJ94l-00007r-62
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 09:54:59 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7dae6a7-2c8f-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 11:54:58 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6ef8bf500dso458507266b.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 02:54:58 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f788f46c6sm250503766b.23.2024.06.17.02.54.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 02:54:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7dae6a7-2c8f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718618097; x=1719222897; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=iIl2yFaxdeXaDq0qccBc21ksOLX5sCwwn6NGjrpQZt0=;
        b=ZvSiKi3X5cf23PFFxnJMwXPJ4ZsZxPoQJJMEEpFA+YT/AFlVCeyjCS3jNLLXPM8wg4
         flV+E20TiYsVUL3iWXwxlwDdadYlvtl4Lw7n6FxbKX08HqhMcYIJqXkKqser5HqNmpRj
         ztGXyi1VP2Yd30fKljXpApTSWxZvjxmc2JBNjJbNqJxbqifeiRLrnPlVgiOpQqpl2JyF
         g/6hImUgUxJW0xPcHAMOh/dgw8aGuSzIjluoo2wMXeSxMf8kXqfzXSrD9j5rK8G/CsgG
         TVTJvoiID30jTPCsmygo5g1Z4isei0j4TTGohLXiEtpulO6J+yeQaLJ3jOaIBSavsgpb
         G51Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718618097; x=1719222897;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iIl2yFaxdeXaDq0qccBc21ksOLX5sCwwn6NGjrpQZt0=;
        b=VZr4UucuoTc8RKcdEnv9atEuLzmWSQbLwm050YtFlX0EJKKVf0y9/uTG6EtCjN7L40
         IGgKAESE8qAa6hrzp23B5A++DLfnTUlcMZCSLcLhphaH6eUxE6l7KBo88t04lYR/TFnq
         eP3ha0rd4ob1+6rgesw1QT98RuOsRFfu39CcUnRgD6S+Er7dsSG4TGQf+8fQueNyjrLq
         k/gfQoVZ//bESMA4fu3pE/OLy7qg/JVu9CJLKdG3jkOUy1GO31mvSCS0mhfjeoxUJ6ZH
         ZHCh1uMMgL+0Br4tyPguIGPTMEo8CQ5Gy58UuoP5BKY94Ip91Fib8FqS7BCuLp4TM7mQ
         XpCw==
X-Forwarded-Encrypted: i=1; AJvYcCV5HNdQSzs9cfL+ctYEkCywVrT9e53a5o6UU2CvogWK9UoUIDsre3y0zG6Ga7vQi9ius2SCXy9pBEdJVkDS+O0BvbRoI595UxGAHTD4b8Y=
X-Gm-Message-State: AOJu0YwqRpspQusE+N2cF8yrnFpqbtFTpeTWGd+D9d8LJ9sTmR6qaijM
	2MArVDLXIwvVldPop6MDASg0UrbjtgjFuZkG2yAJJPPwDd/3/FdQMhGFeAjFAw==
X-Google-Smtp-Source: AGHT+IGeiIVY5wYnvycHwP612zDj+dwhFoNT1Kk60PSCKabrhzvCJkIPUFtQaAkyGURTkJ+1jYH+Ww==
X-Received: by 2002:a17:906:4953:b0:a6f:df9:6da4 with SMTP id a640c23a62f3a-a6f60d42940mr700807866b.44.1718618097478;
        Mon, 17 Jun 2024 02:54:57 -0700 (PDT)
Message-ID: <5e2b6c55-8d6d-4153-8632-a6608cd41012@suse.com>
Date: Mon, 17 Jun 2024 11:54:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: for_each_set_bit() clean-up (API RFC)
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Michal Orzel <michal.orzel@amd.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <46abec6c-ebe9-4426-865e-5513107949be@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <46abec6c-ebe9-4426-865e-5513107949be@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14.06.2024 19:07, Andrew Cooper wrote:
> More fallout from looking at the code generation...
> 
> for_each_set_bit() forces its bitmap parameter out into memory.  For an
> arbitrarily sized bitmap, this is fine - and likely preferable, as it's
> in memory to begin with.
> 
> However, more than half the current users of for_each_set_bit() are
> operating over a single int/long, and this too is spilled to the
> stack.  Worse, x86 seems to be the only architecture which tries (but
> not very well) to optimise find_{first,next}_bit() for GPR-sized
> quantities, meaning that for_each_set_bit() hides 2 backing function calls.
> 
> The ARM (v)GIC code in particular suffers horribly because of this.
> 
> We also have several interesting opencoded forms:
> * evtchn_check_pollers() is a (preprocessor identical) opencoding.
> * hvm_emulate_writeback() is equivalent.
> * for_each_vp() exists just to hardcode a constant and swap the other
> two parameters.
> 
> and several other forms which I think could be expressed more cleanly
> as for_each_set_bit().

I agree.

> We also have the while()/ffs() forms which are "just" for_each_set_bit()
> and some even manage to not spill their main variable to memory.
> 
> 
> I want to get to a position where there is one clear API to use, and
> that the compiler will handle nicely.  Xen's code generation will
> definitely improve as a consequence.
> 
> 
> Sadly, transforming the ideal while()/ffs() form into a for() loop is a
> bit tricky.  This works:
> 
> for ( unsigned int v = (val), (bit);
>       v;
>       v &= v - 1 )
> if ( 1 )
> {
>     (bit) = ffs(v) - 1;
>     goto body;
> }
> else
>     body:
> 
> which is a C metaprogramming trick borrowed from PuTTY to make:
> 
> for_each_BLAH ( bit, val )
> {
>     // nice loop body
> }
> 
> work, while having the ffs() calculated logically within the loop body.

What's wrong with

#define for_each_set_bit(iter, val) \
    for ( unsigned int v_ = (val), iter; \
          v_ && ((iter) = ffs(v_) - 1, true); \
          v_ &= v_ - 1 )

? I'll admit though that it's likely a matter of taste which one is
"uglier". Yet I'd be in favor of avoiding the scope trickery.

> The first issue I expect people to have with the above is the raw 'body'
> label, although with a macro that can be fixed using body_ ## __COUNTER__.
> 
> A full example is https://godbolt.org/z/oMGfah696 although a real
> example in Xen is going to have to be variadic for at least ffs() and
> ffsl().

How would variadic-ness help with this? Unless we play some type
trickery (like typeof((val) + 0U), thus yielding at least an unsigned,
but an unsigned long if the incoming value is such, followed by a
compile-time conditional operator to select between ffs() and ffsl()),
I don't think we'd get away with just a single construct for both the
int and long (for Arm32: long long) cases.

> Now, from an API point of view, it would be lovely if we could make a
> single for_each_set_bit() which covers both cases, and while I can
> distinguish the two forms by whether there are 2 or 3 args,

With the 3-argument form specifying the number of bits in the 3rd arg?
I'd fear such mixed uses may end up confusing.

> I expect
> MISRA is going to have a fit at that.  Also there's a difference based
> on the scope of 'bit' and also whether modifications to 'val' in the
> loop body take effect on the loop condition (they don't because a copy
> is taken).
> 
> So I expect everyone is going to want a new API to use here.  But what
> to call it?
> 
> More than half of the callers in Xen really want the GPR form, so we
> could introduce a new bitmap_for_each_set_bit(), move all the callers
> over, then introduce a "new" for_each_set_bit() which is only of the GPR
> form.
> 
> Or does anyone want to suggest an alternative name?

I'd be okay-ish with those, maybe with slight shortening to bitmap_for_each()
or bitmap_for_each_set().

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:02:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:02:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742077.1148778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9By-0002RC-9S; Mon, 17 Jun 2024 10:02:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742077.1148778; Mon, 17 Jun 2024 10:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9By-0002R5-66; Mon, 17 Jun 2024 10:02:26 +0000
Received: by outflank-mailman (input) for mailman id 742077;
 Mon, 17 Jun 2024 10:02:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ9Bx-0002Qv-5k; Mon, 17 Jun 2024 10:02:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ9Bw-0003Vq-Qw; Mon, 17 Jun 2024 10:02:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ9Bw-0004t3-Ie; Mon, 17 Jun 2024 10:02:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJ9Bw-0001Fh-IA; Mon, 17 Jun 2024 10:02:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OWcs7Xfyyl7eA1dCQVjBrpItv2PWDtTeaZfigoRdXqA=; b=NxBk9Uu4MEU3iufA35psJKGIWE
	lHpF0FbokwsvlGrQk/9IILJwJy0BzAard/n4DPu4OGrIUujhLm2wfwCyhgpyETAdij2XS4OTgeSyI
	neXO1T1pgA4szfbOXZlNgDRAdv4Jc6eXUxOLoF6He9heMHMgDUnIrysOrLd3yV8NZKHI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186378-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186378: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=587100a95d7bfddc60bc5699ae0cca45914f1d81
X-Osstest-Versions-That:
    ovmf=a7dbd2ac7b359644b4961b027d711893132cdb00
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 10:02:24 +0000

flight 186378 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186378/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 587100a95d7bfddc60bc5699ae0cca45914f1d81
baseline version:
 ovmf                 a7dbd2ac7b359644b4961b027d711893132cdb00

Last test of basis   186375  2024-06-17 01:41:11 Z    0 days
Testing same since   186378  2024-06-17 08:14:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  xieyuanh <yuanhao.xie@intel.com>
  Yuanhao Xie <yuanhao.xie@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a7dbd2ac7b..587100a95d  587100a95d7bfddc60bc5699ae0cca45914f1d81 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:03:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:03:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742086.1148788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9D5-0002zK-Iy; Mon, 17 Jun 2024 10:03:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742086.1148788; Mon, 17 Jun 2024 10:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9D5-0002zD-Fu; Mon, 17 Jun 2024 10:03:35 +0000
Received: by outflank-mailman (input) for mailman id 742086;
 Mon, 17 Jun 2024 10:03:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9D4-0002z7-Hl
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:03:34 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de
 [2a07:de40:b251:101:10:150:64:2])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id da2858f3-2c90-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:03:32 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 71D2B5FE68;
 Mon, 17 Jun 2024 10:03:31 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id C1D5F139AB;
 Mon, 17 Jun 2024 10:03:30 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id AoLhLvIJcGaVAQAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:03:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da2858f3-2c90-11ef-b4bb-af5377834399
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <9e1764de-0080-4b8f-a705-de4016a55f5a@suse.de>
Date: Mon, 17 Jun 2024 12:03:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 01/26] xen-blkfront: don't disable cache flushes when they
 fail
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-2-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-2-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Queue-Id: 71D2B5FE68
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Action: no action
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> blkfront always had a robust negotiation protocol for detecting a write
> cache.  Stop simply disabling cache flushes in the block layer as the
> flags handling is moving to the atomic queue limits API that needs
> user context to freeze the queue for that.  Instead handle the case
> of the feature flags cleared inside of blkfront.  This removes old
> debug code to check for such a mismatch which was previously impossible
> to hit, including the check for passthrough requests that blkfront
> never used to start with.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/block/xen-blkfront.c | 44 +++++++++++++++++++-----------------
>   1 file changed, 23 insertions(+), 21 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:03:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742087.1148798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9DB-0003Gg-Sr; Mon, 17 Jun 2024 10:03:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742087.1148798; Mon, 17 Jun 2024 10:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9DB-0003GZ-Pw; Mon, 17 Jun 2024 10:03:41 +0000
Received: by outflank-mailman (input) for mailman id 742087;
 Mon, 17 Jun 2024 10:03:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJ9D9-0002z7-OO
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:03:39 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ddca29e7-2c90-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:03:38 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6efae34c83so519669366b.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 03:03:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f42ddfsm504024966b.171.2024.06.17.03.03.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 03:03:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddca29e7-2c90-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718618617; x=1719223417; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=nSMDGcOXBS91cZZPSxa6N5d2Ejw8Vm3ipq+/3jGSiY4=;
        b=F2b8ZMysvaVMd+mGfzY8wFEpMbgHq69leJEJNV1YpRaoDzhzhD2f5jUMKFmg+LtIRY
         4ersN3mEyd8yLTYegjkGk6hGzAfGe2yJHpe9xyDiQpaeE+Y4c0C0q1dkYVgkxiySVXkN
         Cj6wHZ7CJP1vw71/7uPlkNrIzyLc28y4l95CWJpCI5f5AEhaNAifSKk92tlXe2CtaAAS
         kNfebTk0wUVyDR6q3b+c7wX2U2Txgf0TXIKjW2Kkcyk0YWsRIUgAeB8ecaKD44nPSwHw
         XiLAsLyTCZlrzIOE6g/MN4R5mhpy65NoE1oW4gTra29I2N+Yj3GmBByICBs2x0v2663B
         GS0A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718618617; x=1719223417;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nSMDGcOXBS91cZZPSxa6N5d2Ejw8Vm3ipq+/3jGSiY4=;
        b=jo0UzEZkCteFgiCetskkBvZ1hxD4BaCFA8qKB13cDgpuTHZbz21gUhtboQaeFK3cve
         MFvHen9uvRqzvP52w27ivp8dP88Pk9qAN3RnQMqY4jxMTzzbxFoILiKF/327TO/lj+i4
         +0ix6xa5zFuo0bCG193POtmSLcX2aw20i6BqZAL1XFnzazlF4hYzZauhRfnpmUFU51xg
         VcaNgQQCP9NsHu/1ThsrUaip8NApXCSgKI65V5Nt39F9ntmj1mNZ94xCvTK5fZbIDDgf
         pK1dwCKxW8anSZWy/PpAQar+REBli3hbGuMi3Uh5YQs8dgmdpyq6EcA1uZHIi4Mx4m7x
         TZEg==
X-Forwarded-Encrypted: i=1; AJvYcCWVYHLy1RCEAxhf9e6cNTT+gmY6s3wgCqf3qL/mKVJehZNROI0b4GnS9URP03atAVGJ2IYidi+marjtXcwXb2zpoidGMY5vFOwLOQe4TC8=
X-Gm-Message-State: AOJu0Ywc0EUPEYTz6XPLENuj59q9fCrPsLlg7xxixY3JikWgp2YhvyED
	bf8C29/M1MN0bofInoBs2pOYyD8TECcyGuZ80rDdlWYRvuHftm54wKVAWruGeg==
X-Google-Smtp-Source: AGHT+IEZSkyFbOsWkPvyzLY/9hXwOgu70r6KcgOnVUoASsxro+kK7R4gjX+m+Gb+5w7N7E66DHPeRw==
X-Received: by 2002:a17:906:dfe9:b0:a6f:4de6:79f with SMTP id a640c23a62f3a-a6f60d45630mr580378066b.40.1718618617335;
        Mon, 17 Jun 2024 03:03:37 -0700 (PDT)
Message-ID: <058a6cc6-bf84-4140-a3fb-12ec648536b0@suse.com>
Date: Mon, 17 Jun 2024 12:03:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] xen: add explicit comment to identify notifier
 patterns
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <96a1b98d7831154c58d39b85071b9670de94aed0.1718617636.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <96a1b98d7831154c58d39b85071b9670de94aed0.1718617636.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 11:49, Federico Serafini wrote:
> MISRA C Rule 16.4 states that every `switch' statement shall have a
> `default' label, and a statement or a comment prior to the
> terminating break statement.
> 
> This patch addresses some violations of the rule related to the
> "notifier pattern": a frequently-used pattern whereby only a few values
> are handled by the switch statement and nothing should be done for
> others (nothing to do in the default case).
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

I guess I shouldn't outright NAK this, but I certainly won't ack it. This
is precisely the purely mechanical change that in earlier discussions some
(including me) have indicated isn't going to help safety. However, if
others want to ack something purely mechanical like this, then my minimal
requirement would be that somewhere it is spelled out what falls under
the "notifier pattern".

> ---
>  xen/arch/arm/cpuerrata.c            | 1 +
>  xen/arch/arm/gic.c                  | 1 +
>  xen/arch/arm/irq.c                  | 4 ++++

gic-v3-lpi.c has a similar instance, yet you don't adjust that. This may
be because that possibly is the one where it was previously indicated that
it may in fact be a mistake that the dying/dead case isn't handled, but
then at the very least I'd have expected that you explicitly mention cases
where the adjustment is (deliberately) not made.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:04:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742098.1148808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9Dk-000426-3v; Mon, 17 Jun 2024 10:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742098.1148808; Mon, 17 Jun 2024 10:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9Dk-00041z-0t; Mon, 17 Jun 2024 10:04:16 +0000
Received: by outflank-mailman (input) for mailman id 742098;
 Mon, 17 Jun 2024 10:04:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9Dj-0002z7-5F
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:04:15 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2b3b23e-2c90-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:04:13 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C806B5FE69;
 Mon, 17 Jun 2024 10:04:12 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 20824139AB;
 Mon, 17 Jun 2024 10:04:12 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id vXUeBxwKcGbQAQAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:04:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2b3b23e-2c90-11ef-b4bb-af5377834399
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <114579f8-74e1-416f-a808-420ec8314882@suse.de>
Date: Mon, 17 Jun 2024 12:04:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 03/26] sd: move zone limits setup out of
 sd_read_block_characteristics
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-4-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-4-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Queue-Id: C806B5FE69
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Action: no action
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move a bit of code that sets up the zone flag and the write granularity
> into sd_zbc_read_zones to be with the rest of the zoned limits.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   drivers/scsi/sd.c     | 21 +--------------------
>   drivers/scsi/sd_zbc.c |  9 +++++++++
>   2 files changed, 10 insertions(+), 20 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:08:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:08:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742107.1148817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9Hb-0004pk-J1; Mon, 17 Jun 2024 10:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742107.1148817; Mon, 17 Jun 2024 10:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9Hb-0004pd-GF; Mon, 17 Jun 2024 10:08:15 +0000
Received: by outflank-mailman (input) for mailman id 742107;
 Mon, 17 Jun 2024 10:08:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJ9Ha-0004pX-5U
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:08:14 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 812178b5-2c91-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:08:12 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a6f7720e6e8so127957966b.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 03:08:12 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db6808sm496939266b.81.2024.06.17.03.08.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 03:08:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 812178b5-2c91-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718618891; x=1719223691; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=sUrf39DcEODG4w8OGMJJJVPos55kiRh6t8fckRHa0Ac=;
        b=BudNOL/yfVTCYafJeLDOGdyQAoJnz4ohtkCQfj9xk4NZ2/DeMGfl06RxZ+Zz2yMfNt
         N+eFgmt5XwXc5q6NSLjLjNa+36VwQUSkRqV8/NOo7XFZ06TW4PwC0p7JAqZkG+QpTPWu
         nM8UgDN2cQjKfXJ1nNyPs+uxF3zSmGkKS3RNwNvJAWu4rSlIg+oD8zjuW999xUkCXPg9
         7x73gE/Wwd5EJx5bDM+isSnL02lfddtzpIGptXOTkpRDaXkhdoOV2Q2Gre37U7ejcFgg
         HZQpvaE3HnPAwetluVn0ha/JVihb58kBxj6dUSyxhLDQGjNWQI/HwF/YmjY98vituj2R
         CzCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718618891; x=1719223691;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sUrf39DcEODG4w8OGMJJJVPos55kiRh6t8fckRHa0Ac=;
        b=HS5F+IHbklbTd3nTqj71p6ThePZwm0Z2YPYdqnN1E+X/X17ZfNfbBMJG9P8TjRFEe0
         gwMJOkEMKIAYBnGczAdtCxYtYj1vLurLFHyo4Pki2U87iT8k7gJ02Tn7izH2JAsZgeC8
         dgy1SobiXBJ0A90YtsTQZKz+uEShjfOC9JBbSuyvxkU/+3wSH6/YEfm2UQKZX/PKtekl
         Ij5gF80agO2Us33KmDEVTBYCJ+sOw5AmiNJAY5mRg/On1gxj15FVIaARge7FAGRMC5+r
         nYJNbzvPIeRco5b/IKJ/iX3Rf3aaocDv0ujfVL6qAv0VSQwzNGZTZaiaZ8QZ8ZmGLLre
         JkLw==
X-Forwarded-Encrypted: i=1; AJvYcCXpXtrpnk0sIp7xSiSL+H9qLiND8IVtQwjpTzB5lSfddm55VCmFnqW6GAc0EEJLu6491tODNoLJc41PGBr5rfU9g8Hl75OWMf6KSWQziaQ=
X-Gm-Message-State: AOJu0YwpGLLw1RJT2aEa5FohHUSd9vA0D8fdBePUm5+dY2pI+Z9XTn0Q
	fTTy4lNFMyiAQO4nT1VSj7zk2fW4VkVRVZnKvZTm8uUCMXLggN0/dQwLjm2sZvlykjmJzjp4qkQ
	=
X-Google-Smtp-Source: AGHT+IEsI0rgoiyVdgUUUyeoAEm404fm7RJlZVJxQSeqVFWsDUFEoN24VR8coBrkEpHiWGaLXmmnLA==
X-Received: by 2002:a17:906:756:b0:a6f:1df1:1ef2 with SMTP id a640c23a62f3a-a6f60dc517fmr719388866b.47.1718618891542;
        Mon, 17 Jun 2024 03:08:11 -0700 (PDT)
Message-ID: <bd174018-e243-400c-845f-68b08d976d1f@suse.com>
Date: Mon, 17 Jun 2024 12:08:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at boot
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-3-andrew.cooper3@citrix.com>
 <22e0473a-aca8-4645-a3f4-21a9d9e0ebd3@suse.com>
 <71cbc846-527d-45f0-bd0b-df3f0294e7cd@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <71cbc846-527d-45f0-bd0b-df3f0294e7cd@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14.06.2024 15:56, Andrew Cooper wrote:
> On 23/05/2024 4:34 pm, Jan Beulich wrote:
>> On 23.05.2024 13:16, Andrew Cooper wrote:
>>> Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
>>> every call.  This is expensive, being used for domain create/migrate, as well
>>> as to service certain guest CPUID instructions.
>>>
>>> Instead, arrange to check the sizes once at boot.  See the code comments for
>>> details.  Right now, it just checks hardware against the algorithm
>>> expectations.  Later patches will add further cross-checking.
>>>
>>> Introduce the missing X86_XCR0_* and X86_XSS_* constants, and a couple of
>>> missing CPUID bits.  This is to maximise coverage in the sanity check, even if
>>> we don't expect to use/virtualise some of these features any time soon.  Leave
>>> HDC and HWP alone for now.  We don't have CPUID bits from them stored nicely.
>> Since you say "the missing", ...
>>
>>> --- a/xen/arch/x86/include/asm/x86-defns.h
>>> +++ b/xen/arch/x86/include/asm/x86-defns.h
>>> @@ -77,7 +77,7 @@
>>>  #define X86_CR4_PKS        0x01000000 /* Protection Key Supervisor */
>>>  
>>>  /*
>>> - * XSTATE component flags in XCR0
>>> + * XSTATE component flags in XCR0 | MSR_XSS
>>>   */
>>>  #define X86_XCR0_FP_POS           0
>>>  #define X86_XCR0_FP               (1ULL << X86_XCR0_FP_POS)
>>> @@ -95,11 +95,34 @@
>>>  #define X86_XCR0_ZMM              (1ULL << X86_XCR0_ZMM_POS)
>>>  #define X86_XCR0_HI_ZMM_POS       7
>>>  #define X86_XCR0_HI_ZMM           (1ULL << X86_XCR0_HI_ZMM_POS)
>>> +#define X86_XSS_PROC_TRACE        (_AC(1, ULL) <<  8)
>>>  #define X86_XCR0_PKRU_POS         9
>>>  #define X86_XCR0_PKRU             (1ULL << X86_XCR0_PKRU_POS)
>>> +#define X86_XSS_PASID             (_AC(1, ULL) << 10)
>>> +#define X86_XSS_CET_U             (_AC(1, ULL) << 11)
>>> +#define X86_XSS_CET_S             (_AC(1, ULL) << 12)
>>> +#define X86_XSS_HDC               (_AC(1, ULL) << 13)
>>> +#define X86_XSS_UINTR             (_AC(1, ULL) << 14)
>>> +#define X86_XSS_LBR               (_AC(1, ULL) << 15)
>>> +#define X86_XSS_HWP               (_AC(1, ULL) << 16)
>>> +#define X86_XCR0_TILE_CFG         (_AC(1, ULL) << 17)
>>> +#define X86_XCR0_TILE_DATA        (_AC(1, ULL) << 18)
>> ... I'm wondering if you deliberately left out APX (bit 19).
> 
> It was deliberate.  APX isn't in the SDM yet, so in principle is still
> subject to change.
> 
> I've tweaked the commit message to avoid using the word 'missing'.

Thanks.
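
As an aside, the component-bit constants in the quoted hunk are plain
single-bit masks (_AC(1, ULL) reduces to 1ULL in C code), so their
composition can be checked in isolation. A minimal standalone sketch,
reproducing a few of the values from the hunk; the combined AMX mask at the
end is added purely for illustration and is not part of the patch:

```c
#include <assert.h>

/* Values reproduced from the quoted hunk, with _AC(1, ULL) expanded. */
#define X86_XSS_CET_U       (1ULL << 11)
#define X86_XSS_CET_S       (1ULL << 12)
#define X86_XCR0_TILE_CFG   (1ULL << 17)
#define X86_XCR0_TILE_DATA  (1ULL << 18)

/* Illustrative only: AMX needs both tile components enabled together. */
#define X86_XCR0_AMX_DEMO   (X86_XCR0_TILE_CFG | X86_XCR0_TILE_DATA)
```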

>> Since you're re-doing some of what I have long had in patches already,
>> I'd also like to ask whether the last underscores each in the two AMX
>> names really are useful in your opinion. While rebasing isn't going
>> to be difficult either way, it would be yet simpler with
>> X86_XCR0_TILECFG and X86_XCR0_TILEDATA, as I've had it in my patches
>> for over 3 years.
> 
> I'm torn here.  I don't want to deliberately make things harder for you,
> but I would really prefer that we use the more legible form...

Well, okay, so be it then.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:26:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:26:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742118.1148828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9Yg-0008Rm-Vp; Mon, 17 Jun 2024 10:25:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742118.1148828; Mon, 17 Jun 2024 10:25:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9Yg-0008Rf-Rt; Mon, 17 Jun 2024 10:25:54 +0000
Received: by outflank-mailman (input) for mailman id 742118;
 Mon, 17 Jun 2024 10:25:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJ9Yf-0008RZ-TH
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:25:53 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f8809015-2c93-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:25:51 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57c83100c5fso4561565a12.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 03:25:51 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72da6fdsm6229678a12.31.2024.06.17.03.25.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 03:25:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8809015-2c93-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718619951; x=1719224751; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=0GprUjYlawhnXFHZSE6K5QIBPrw0qkzh+mD69PrN5Lg=;
        b=dpc35s5JNL/D4Xz0XpKH4EMJCKY3d6eLQ+o0AXvOk13Z2lbl/lmnwsi8ST43qWpHoH
         i65YmEcAjOmiC7tsE9icFFLsHg0KyQcCMQNY5gJr3+SVu3d6ANC8jNCK8u8QB0607bEI
         VqIEBWUUYGXSm1XhS3KhAL6HGJ0Q/iWFj4BTxsHeyeQIEt58uVgzo3GP1H5mSDz4b/Jf
         RhQScfiIvj5wHUPucpT2sE5NrZ1hIzLlRd1q+cAqP0JeRcPjjs/MUUjHdzrmvpT4h9T/
         WNpzVUbwDCxpCdXLuLrWeHrfhmOshKfnOgcOOW84ZxvwF7ygu3JXsefKSU8RD08TqZc8
         a/9Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718619951; x=1719224751;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0GprUjYlawhnXFHZSE6K5QIBPrw0qkzh+mD69PrN5Lg=;
        b=n8l20YVoETHnxyHRLvTtQItxRdLzJ/bircQI27z7FsFsjB8YQI2qEDl98mcKTwCmwz
         XsdvmB47JrJPVtR302+v2CIO8XVo2zrZ+khct+QsUR6IpwbK4B4QRnGvOqJHfsTFhDoF
         mPim7pYs/X09OAltpFcgmZUR4HPJx+7IZuDgW60NldDvlEXHQMLleAOpog0m8DUmEjlL
         lOvIiTnZh0oyS9K1yz4prDxn+CuIjpzYLo6cHbCsHWwhcxKDl1htvwcGAm2VSMWWnPl3
         HD9+FJq2QT4zRAhNvs0m83DowAqeRD5h0Ifz9Lfbxi54O+YttsQkOwjyzdqWYVc66xQW
         cNJg==
X-Forwarded-Encrypted: i=1; AJvYcCVz5A30MZBA5J2jfa2iZZV9KZdi3KwxZOBo+6HpjcnZeCXL3otT+QM5+HzUPjPU14xPfjMWnjHVK29ztAbCXm4+5gumKt3S89J8w+VDcWE=
X-Gm-Message-State: AOJu0Yxd6WGouw28WqnisFg600td6xEFzEHal2yhsM0dvbXe87EDnR8z
	kdv8j51//+xjMP9JMaV7nW/dpPuHkBPEG1kmWlj7y3VZ64Y7duG3EEC9BAmGsg==
X-Google-Smtp-Source: AGHT+IGdhSocCvf3TX/yE7jXrTPFBei84T/Aqf1HbH8d8eq4JGfBaU43OFBKGnphMBWvtooeJoEtbA==
X-Received: by 2002:a50:c051:0:b0:57c:7413:a6e0 with SMTP id 4fb4d7f45d1cf-57cbd674310mr5608784a12.2.1718619950710;
        Mon, 17 Jun 2024 03:25:50 -0700 (PDT)
Message-ID: <cc1b52d8-163f-443c-8418-aba1c7d77ecb@suse.com>
Date: Mon, 17 Jun 2024 12:25:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 3/7] x86/boot: Collect the Raw CPU Policy earlier on boot
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-4-andrew.cooper3@citrix.com>
 <8245f0ce-2964-4ecb-a31d-3e182a6d3e0b@suse.com>
 <6b4ed926-8934-4660-98c7-1856d0c90878@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <6b4ed926-8934-4660-98c7-1856d0c90878@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14.06.2024 20:26, Andrew Cooper wrote:
> On 23/05/2024 4:44 pm, Jan Beulich wrote:
>> On 23.05.2024 13:16, Andrew Cooper wrote:
>>> This is a tangle, but it's a small step in the right direction.
>>>
>>> xstate_init() is shortly going to want data from the Raw policy.
>>> calculate_raw_cpu_policy() is sufficiently separate from the other policies to
>>> be safe to do.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Would you mind taking a look at
>> https://lists.xen.org/archives/html/xen-devel/2021-04/msg01335.html
>> to make clear (to me at least) to what extent we can perhaps find common
>> ground
>> on what wants doing when? (Of course the local version I have has been
>> constantly re-based, so some of the function names would have changed from
>> what's visible there.)
> 
> That's been covered several times, at least in part.
> 
> I want to eventually move the host policy too, but I'm not willing to
> compound the mess we've currently got just to do it earlier.  It's just
> creating even more obstacles to doing it nicely.
> 
> Nothing in this series needs (or indeed should) use the host policy.

Hmm, I'm confused: you talk about host policy here, ...

> The same is true of your AMX series.  You're (correctly) breaking the
> uniform allocation size and (when policy selection is ordered WRT vCPU
> creation, as discussed) it becomes solely dependent on the guest policy.

... then guest policy, and ...

> xsave.c really has no legitimate use for the host policy once the
> uniform allocation size aspect has gone away.

... then host policy again. Whereas my patch switches to using the raw
policy, simply to eliminate redundant data. And your patch here is about
collecting raw policy earlier, too, for that to become usable by
xstate_init(). Differences between your and my variants are when exactly
raw policy collection happens, and that mine _additionally_ calculates
host policy a first time right after having calculated the raw one. My
patch specifically does not use the host policy in xstate_init(), nor in
the two new macros that are being introduced.

In the end it sounds like all you object to is my patch calculating the
host policy (a first time) earlier, too. As the description there says,
a subsequent change in that series needs this movement anyway. If some
suitable replacement for that dependency exists, I'm sure that early
calculation could be left out of the patch referenced above, if that's
indeed the sole concern.

>>> --- a/xen/arch/x86/cpu-policy.c
>>> +++ b/xen/arch/x86/cpu-policy.c
>>> @@ -845,7 +845,6 @@ static void __init calculate_hvm_def_policy(void)
>>>  
>>>  void __init init_guest_cpu_policies(void)
>>>  {
>>> -    calculate_raw_cpu_policy();
>>>      calculate_host_policy();
>>>  
>>>      if ( IS_ENABLED(CONFIG_PV) )
>>> --- a/xen/arch/x86/setup.c
>>> +++ b/xen/arch/x86/setup.c
>>> @@ -1888,7 +1888,9 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
>>>  
>>>      tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
>>>  
>>> -    identify_cpu(&boot_cpu_data);
>>> +    calculate_raw_cpu_policy(); /* Needs microcode.  No other dependenices. */
>>> +
>>> +    identify_cpu(&boot_cpu_data); /* Needs microcode and raw policy. */
>> You don't introduce any dependency on raw policy here, and there cannot possibly
>> have been such a dependency before (unless there was a bug somewhere). Therefore
>> I consider this latter comment misleading at this point.
> 
> It's made true by the next patch, and I'm not inclined to split the
> comment across two patches which are going to be committed together in a
> unit.

Which is fine, so long as this is then not done silently, leaving it to
reviewers to notice (or not). IOW please: Just mention anomalies like this
in half a sentence in the description. (Committing as a unit is also an
uncertain thing, as long as that's not put forth as a strict requirement
somewhere. We do partial commits of series all the time, after all.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:36:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:36:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742127.1148838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9ia-00022n-Vt; Mon, 17 Jun 2024 10:36:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742127.1148838; Mon, 17 Jun 2024 10:36:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9ia-00022g-TE; Mon, 17 Jun 2024 10:36:08 +0000
Received: by outflank-mailman (input) for mailman id 742127;
 Mon, 17 Jun 2024 10:36:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9iZ-00022a-Fn
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:36:07 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 65ae90f7-2c95-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:36:04 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 75F3B5FECC;
 Mon, 17 Jun 2024 10:36:03 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id D6842139AB;
 Mon, 17 Jun 2024 10:36:02 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id AHKOMJIRcGYTDAAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:36:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65ae90f7-2c95-11ef-b4bb-af5377834399
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <c91d77a0-eec5-4af0-b3dd-bc2724108fc9@suse.de>
Date: Mon, 17 Jun 2024 12:36:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 13/26] block: move cache control settings out of
 queue->flags
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Ulf Hansson <ulf.hansson@linaro.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-14-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-14-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 
X-Rspamd-Queue-Id: 75F3B5FECC
X-Rspamd-Action: no action
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the cache control settings into the queue_limits so that the flags
> can be set atomically with the device queue frozen.
> 
> Add new features and flags field for the driver set flags, and internal
> (usually sysfs-controlled) flags in the block layer.  Note that we'll
> eventually remove enough fields from queue_limits to bring it back to the
> previous size.
> 
> The disable flag is inverted compared to the previous meaning, which
> means it now survives a rescan, similar to the max_sectors and
> max_discard_sectors user limits.
> 
> The FLUSH and FUA flags are now inherited by blk_stack_limits, which
> simplifies the code in dm a lot, but also causes a slight behavior
> change in that dm-switch and dm-unstripe now advertise a write cache
> despite setting num_flush_bios to 0.  The I/O path will handle this
> gracefully, but as far as I can tell the lack of num_flush_bios
> and thus flush support is a pre-existing data integrity bug in those
> targets that really needs fixing, after which a non-zero num_flush_bios
> should be required in dm for targets that map to underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Acked-by: Ulf Hansson <ulf.hansson@linaro.org> [mmc]
> ---
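
The atomic-update pattern the patch describes can be sketched in plain C. This is a userspace model with simplified stand-in names (struct and flag names below are illustrative, not the kernel implementation): capabilities and sysfs-controlled state live in a limits structure that is swapped in as a whole while the queue is frozen, rather than being toggled as individual queue flags.

```c
#include <assert.h>
#include <stdbool.h>

/* Userspace sketch, not kernel code: feature bits are driver-set
 * capabilities, flag bits are internal (sysfs-controlled) state. */
#define FEAT_WRITE_CACHE (1u << 0)   /* device has a volatile write cache */
#define FEAT_FUA         (1u << 1)   /* device honors Force Unit Access */
#define FLAG_WC_DISABLED (1u << 0)   /* inverted-polarity disable flag, so
                                      * it survives a rescan of features */

struct queue_limits {
    unsigned int features;  /* driver-set capabilities */
    unsigned int flags;     /* internal / sysfs-controlled state */
};

struct queue {
    bool frozen;
    struct queue_limits limits;
};

/* All limits change in one step while no I/O is in flight. */
static void commit_limits(struct queue *q, const struct queue_limits *lim)
{
    q->frozen = true;       /* freeze: wait for outstanding I/O (elided) */
    q->limits = *lim;       /* single swap, atomic w.r.t. the I/O path */
    q->frozen = false;      /* unfreeze */
}

static bool write_cache_enabled(const struct queue *q)
{
    return (q->limits.features & FEAT_WRITE_CACHE) &&
           !(q->limits.flags & FLAG_WC_DISABLED);
}
```

Note how the inverted disable flag lets a rescan rewrite `features` without losing the user's disable request, mirroring the behavior change described above.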

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:36:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:36:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742133.1148848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9jG-0002Xx-75; Mon, 17 Jun 2024 10:36:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742133.1148848; Mon, 17 Jun 2024 10:36:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9jG-0002Xq-4b; Mon, 17 Jun 2024 10:36:50 +0000
Received: by outflank-mailman (input) for mailman id 742133;
 Mon, 17 Jun 2024 10:36:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9jF-0002IM-BY
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:36:49 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 801ec71e-2c95-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 12:36:48 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id CE20A38037;
 Mon, 17 Jun 2024 10:36:47 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 34E77139AB;
 Mon, 17 Jun 2024 10:36:47 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id oblHDL8RcGZPDAAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:36:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 801ec71e-2c95-11ef-90a3-e314d9c70b13
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <0f819ed5-9549-4edf-98b3-19eed8558dfe@suse.de>
Date: Mon, 17 Jun 2024 12:36:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 14/26] block: move the nonrot flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-15-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-15-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 
X-Rspamd-Queue-Id: CE20A38037
X-Rspamd-Action: no action
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the nonrot flag into the queue_limits feature field so that it can
> be set atomically with the queue frozen.
> 
> Use the chance to switch to defaulting to non-rotational and require
> the driver to opt into rotational, which matches the polarity of the
> sysfs interface.
> 
> For z2ram, ps3vram, the two memstick drivers, ubiblock and dcssblk the
> new rotational flag is not set, as they clearly are not rotational,
> despite this being a behavior change.  Some other drivers unconditionally
> set the rotational flag to keep the existing behavior, as they arguably
> can still be used on rotational devices even if that is probably not
> their main use today (e.g. virtio_blk and drbd).
> 
> The flag is automatically inherited in blk_stack_limits matching the
> existing behavior in dm and md.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
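
The polarity flip described above can be sketched in a few lines of C (simplified stand-in names, not the kernel API): a zero-initialized limits structure is already non-rotational, so only drivers for rotational media have to opt in, matching the sysfs "nonrot" reporting.

```c
#include <assert.h>

/* Sketch, not kernel code: old behavior defaulted to rotational and made
 * most drivers clear a flag; the new default is non-rotational. */
#define FEAT_ROTATIONAL (1u << 0)

struct queue_limits {
    unsigned int features;
};

/* Matches the polarity of the sysfs interface: report non-rotational
 * unless the driver opted into the rotational feature. */
static int is_nonrot(const struct queue_limits *lim)
{
    return !(lim->features & FEAT_ROTATIONAL);
}
```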

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:38:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742140.1148858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9kv-0003QZ-Hh; Mon, 17 Jun 2024 10:38:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742140.1148858; Mon, 17 Jun 2024 10:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9kv-0003QS-EX; Mon, 17 Jun 2024 10:38:33 +0000
Received: by outflank-mailman (input) for mailman id 742140;
 Mon, 17 Jun 2024 10:38:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9ku-0003QG-73
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:38:32 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bcbcc5d2-2c95-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:38:30 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D55E65FCD9;
 Mon, 17 Jun 2024 10:38:29 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 337B2139AB;
 Mon, 17 Jun 2024 10:38:29 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id WyrSCyUScGbNDAAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:38:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcbcc5d2-2c95-11ef-b4bb-af5377834399
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <74df67d6-3d02-4987-becb-eebf60492d26@suse.de>
Date: Mon, 17 Jun 2024 12:38:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 15/26] block: move the add_random flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-16-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-16-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 
X-Rspamd-Queue-Id: D55E65FCD9
X-Rspamd-Action: no action
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the add_random flag into the queue_limits feature field so that it
> can be set atomically with the queue frozen.
> 
> Note that this also removes code from dm to clear the flag based on
> the underlying devices, which can't be reached as dm devices will
> always start out without the flag set.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq-debugfs.c            |  1 -
>   block/blk-sysfs.c                 |  6 +++---
>   drivers/block/mtip32xx/mtip32xx.c |  1 -
>   drivers/md/dm-table.c             | 18 ------------------
>   drivers/mmc/core/queue.c          |  2 --
>   drivers/mtd/mtd_blkdevs.c         |  3 ---
>   drivers/s390/block/scm_blk.c      |  4 ----
>   drivers/scsi/scsi_lib.c           |  3 +--
>   drivers/scsi/sd.c                 | 11 +++--------
>   include/linux/blkdev.h            |  5 +++--
>   10 files changed, 10 insertions(+), 44 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:38:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:38:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742145.1148867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9lL-0003zj-Ov; Mon, 17 Jun 2024 10:38:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742145.1148867; Mon, 17 Jun 2024 10:38:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9lL-0003zc-MB; Mon, 17 Jun 2024 10:38:59 +0000
Received: by outflank-mailman (input) for mailman id 742145;
 Mon, 17 Jun 2024 10:38:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9lL-0003QG-0o
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:38:59 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ccfd7b44-2c95-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:38:57 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 205F75FED3;
 Mon, 17 Jun 2024 10:38:57 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 62CE8139AB;
 Mon, 17 Jun 2024 10:38:56 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id HLO9FEAScGb3DAAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:38:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccfd7b44-2c95-11ef-b4bb-af5377834399
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <5380a984-013a-4a63-9d95-7d8eec0b45c7@suse.de>
Date: Mon, 17 Jun 2024 12:38:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 16/26] block: move the io_stat flag setting to
 queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-17-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-17-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Queue-Id: 205F75FED3
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Action: no action
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the io_stat flag into the queue_limits feature field so that it can
> be set atomically with the queue frozen.
> 
> Simplify md and dm to set the flag unconditionally instead of avoiding
> setting a simple flag for cases where it already is set by other means,
> which is a bit pointless.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-mq-debugfs.c        |  1 -
>   block/blk-mq.c                |  6 +++++-
>   block/blk-sysfs.c             |  2 +-
>   drivers/md/dm-table.c         | 12 +++++++++---
>   drivers/md/dm.c               | 13 +++----------
>   drivers/md/md.c               |  5 ++---
>   drivers/nvme/host/multipath.c |  2 +-
>   include/linux/blkdev.h        |  9 +++++----
>   8 files changed, 26 insertions(+), 24 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:40:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:40:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742153.1148878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9mQ-0005MG-1v; Mon, 17 Jun 2024 10:40:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742153.1148878; Mon, 17 Jun 2024 10:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9mP-0005M9-VU; Mon, 17 Jun 2024 10:40:05 +0000
Received: by outflank-mailman (input) for mailman id 742153;
 Mon, 17 Jun 2024 10:40:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9mO-00051n-7C
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:40:04 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f43032f5-2c95-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 12:40:03 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9E7663803B;
 Mon, 17 Jun 2024 10:40:01 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 1B20313AAA;
 Mon, 17 Jun 2024 10:40:01 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id qgSdBYEScGZBDQAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:40:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f43032f5-2c95-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620802; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Udw7UxW9RD3hNhptJltcmuBZ5/E3Nt3KJ3oFtf1o28Y=;
	b=oOv3IjDDx2wwA/r3rxn0NrWSiorUlzu/LZiijLg8zoFMNTIfIM4KACkJhGKNr5tgCgDFN2
	swyfAOkUz04diI4ZMqo8O9CfOPpwu5I9Mbz5BfhGU7kk82GiUoe9lOYe/FmJo0YDcXy45h
	p6sRFsU8MYD8T30FAykkFiDl4pSJT1Y=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620802;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Udw7UxW9RD3hNhptJltcmuBZ5/E3Nt3KJ3oFtf1o28Y=;
	b=Ot8p2izY0JtiMVs7/UuRj91DhFSpICfcz8ETan8iR+o2zqUCceOag3Og+ELTgEFRWMxfbJ
	H8kXANgn0V48g2BA==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620801; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Udw7UxW9RD3hNhptJltcmuBZ5/E3Nt3KJ3oFtf1o28Y=;
	b=Qm6g/5pLH7+rQF9CQcnhh4VfHY9JM1VLyrzusyShCKdjoL8ke6Y5p2a8ZvdRK9QO5njFgF
	YkCNGSzlCWSjguEvmCBuyak35LtQXw/NkOXCaYD3xb3wZfhtp026a65y7DMmmBLXFaq8EC
	tC3POtul8QOovRe401BsrfyBEimwH3I=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620801;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Udw7UxW9RD3hNhptJltcmuBZ5/E3Nt3KJ3oFtf1o28Y=;
	b=I5XoAx519a0gdMtOMGCbku8aSmfojCQpJBFdmrQf3R2hUK40sd4Fa4cUr6ieELRADq/qgE
	4hoN05xE6nA/YkBQ==
Message-ID: <293c79ef-43fe-4a08-8e2d-54aa61d04d43@suse.de>
Date: Mon, 17 Jun 2024 12:40:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 17/26] block: move the stable_writes flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-18-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-18-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Score: -4.29
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-4.29 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	ARC_NA(0.00)[];
	MIME_TRACE(0.00)[0:+];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	RCPT_COUNT_TWELVE(0.00)[38];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	TO_DN_SOME(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.de:email,imap1.dmz-prg2.suse.org:helo,lst.de:email]

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the stable_writes flag into the queue_limits feature field so that
> it can be set atomically with the queue frozen.
> 
> The flag is now inherited by blk_stack_limits, which greatly simplifies
> the code in dm, and fixes md, which previously did not pass on the flag
> set on lower devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq-debugfs.c         |  1 -
>   block/blk-sysfs.c              | 29 +----------------------------
>   drivers/block/drbd/drbd_main.c |  5 ++---
>   drivers/block/rbd.c            |  9 +++------
>   drivers/block/zram/zram_drv.c  |  2 +-
>   drivers/md/dm-table.c          | 19 -------------------
>   drivers/md/raid5.c             |  6 ++++--
>   drivers/mmc/core/queue.c       |  5 +++--
>   drivers/nvme/host/core.c       |  9 +++++----
>   drivers/nvme/host/multipath.c  |  4 ----
>   drivers/scsi/iscsi_tcp.c       |  8 ++++----
>   include/linux/blkdev.h         |  9 ++++++---
>   12 files changed, 29 insertions(+), 77 deletions(-)
> 
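
The inheritance behavior mentioned above can be modeled in a short C sketch (simplified names, not the kernel's actual blk_stack_limits): when a stacking driver such as dm or md combines the limits of its underlying devices, an inheritable feature like stable writes is propagated upward if any lower device sets it.

```c
#include <assert.h>

/* Sketch, not kernel code: stacking drivers OR inheritable feature bits
 * from each underlying device into the top-level limits. */
#define FEAT_STABLE_WRITES (1u << 0)

struct queue_limits {
    unsigned int features;
};

/* The top-level device must honor stable writes if any lower device
 * requires them, so the feature is accumulated with a bitwise OR. */
static void stack_limits(struct queue_limits *top,
                         const struct queue_limits *bottom)
{
    top->features |= bottom->features & FEAT_STABLE_WRITES;
}
```

Under this model, md picks the flag up automatically from its members, which is the fix the commit message refers to.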
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:40:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:40:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742160.1148888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9nA-00061C-EO; Mon, 17 Jun 2024 10:40:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742160.1148888; Mon, 17 Jun 2024 10:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9nA-000615-AJ; Mon, 17 Jun 2024 10:40:52 +0000
Received: by outflank-mailman (input) for mailman id 742160;
 Mon, 17 Jun 2024 10:40:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9n8-0005nb-Cd
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:40:50 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f4f0dd9-2c96-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:40:48 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 502A33803D;
 Mon, 17 Jun 2024 10:40:48 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 74D3713AAA;
 Mon, 17 Jun 2024 10:40:47 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id TLFtGK8ScGaFDQAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:40:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f4f0dd9-2c96-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620848; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YRZ+MD2RD7DoqN1zKcgy+HFLwesZ3pPlf+aKbLWxjwU=;
	b=g6i+HE7UdDx9sU/po4lLP5ccgANLhcybh16BiI+67VqBuwRwDVonJPFzOMCDdBfpbCNKBC
	MD1s1VKGcMU3aCdrY17XV1IfskxdUXEg0QjRAJZi8F4DBktSipA264j6zDbr9/7MInSii3
	xnuF2R+5yQTsAoOY/MXu1e+Bwa+aCec=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620848;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=YRZ+MD2RD7DoqN1zKcgy+HFLwesZ3pPlf+aKbLWxjwU=;
	b=DC8Hya9o4RrY68KluA6Q1U2drYPRvGd5eOn0Qpqja/t1DhzewBZPUwXN8T5m7vch3lMroJ
	rOjFgOFCF0RfD+BA==
Authentication-Results: smtp-out1.suse.de;
	none
Message-ID: <82e013f1-9029-460d-8a71-a64fd8ee58d0@suse.de>
Date: Mon, 17 Jun 2024 12:40:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 18/26] block: move the synchronous flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-19-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-19-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Score: -8.29
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[38];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,lst.de:email]

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the synchronous flag into the queue_limits feature field so that it
> can be set atomically with the queue frozen.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq-debugfs.c        | 1 -
>   drivers/block/brd.c           | 2 +-
>   drivers/block/zram/zram_drv.c | 4 ++--
>   drivers/nvdimm/btt.c          | 3 +--
>   drivers/nvdimm/pmem.c         | 4 ++--
>   include/linux/blkdev.h        | 7 ++++---
>   6 files changed, 10 insertions(+), 11 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:41:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:41:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742170.1148899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9nk-0006Xi-O7; Mon, 17 Jun 2024 10:41:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742170.1148899; Mon, 17 Jun 2024 10:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9nk-0006Xb-Ji; Mon, 17 Jun 2024 10:41:28 +0000
Received: by outflank-mailman (input) for mailman id 742170;
 Mon, 17 Jun 2024 10:41:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9nj-0005nb-5P
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:41:27 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2543b379-2c96-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:41:25 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id EDEFF3803D;
 Mon, 17 Jun 2024 10:41:24 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 7B5A113AAA;
 Mon, 17 Jun 2024 10:41:24 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 5susHdQScGauDQAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:41:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2543b379-2c96-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620885; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jgtG6m+m1FRqVquXEHPrn3wmZaPvf3OoaYY0kuxzjZY=;
	b=IeFJAWsH75KCYIyv8/qhBGrl5836YIwBOLXQ3jGs9AGu4tCLccHXdaOiEmZNwYilYVnXvA
	SRmfeAJG/mJlmng7doCTF2JrMj/j1v1AtSvyaeFMA6zC9Uo8Apn9VwECptdlpKPI1o/2oN
	26YOZad+Gw/DJeftHbAa4kTM/y28MNI=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620885;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jgtG6m+m1FRqVquXEHPrn3wmZaPvf3OoaYY0kuxzjZY=;
	b=uBuChXRm9UIyUB63kLl7vU6i6aEesxd1N6Ygk5vJm28e2KE/5WoIJeYPC8888eXD55Tt/v
	dimXmhyYdWeO0iCQ==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620884; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jgtG6m+m1FRqVquXEHPrn3wmZaPvf3OoaYY0kuxzjZY=;
	b=tk56hY7XbD+wgCVvAVC5TAAMNdWYTqZxaVADpw5mpu/Frmc+Y8zzf8x9gaIUWi7pHO5rjZ
	Tm01WQr3GPaAKNEgrckaVdo8Qm+/hXaj53GmO1gpM2xNwupMPVgSsIQq7PJCLZa0lbR+XN
	ddtS7T4F7YuLIEsh0v7LCAvj96Hq13s=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620884;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jgtG6m+m1FRqVquXEHPrn3wmZaPvf3OoaYY0kuxzjZY=;
	b=uAuSy9hbCb4YbUBJdXvIQbCOSketcC7FumfbAwU1hmYHwd+Jx1h140JOgS5gMLtaFdjOqr
	qYR/OoHcbBQkZAAA==
Message-ID: <24322288-fd9f-4f49-9a94-e2aaf97bb700@suse.de>
Date: Mon, 17 Jun 2024 12:41:24 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 19/26] block: move the nowait flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-20-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-20-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[37];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,lst.de:email]
X-Spam-Flag: NO
X-Spam-Score: -8.29
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the nowait flag into the queue_limits feature field so that it can
> be set atomically with the queue frozen.
> 
> Stacking drivers are simplified in that they can now simply set the
> flag, and blk_stack_limits will clear it when the feature is not
> supported by any of the underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>   block/blk-mq-debugfs.c        |  1 -
>   block/blk-mq.c                |  2 +-
>   block/blk-settings.c          |  9 +++++++++
>   drivers/block/brd.c           |  4 ++--
>   drivers/md/dm-table.c         | 18 +++---------------
>   drivers/md/md.c               | 18 +-----------------
>   drivers/nvme/host/multipath.c |  3 +--
>   include/linux/blkdev.h        |  9 +++++----
>   8 files changed, 22 insertions(+), 42 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:42:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:42:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742175.1148907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9oJ-000729-Ub; Mon, 17 Jun 2024 10:42:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742175.1148907; Mon, 17 Jun 2024 10:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9oJ-000722-Rj; Mon, 17 Jun 2024 10:42:03 +0000
Received: by outflank-mailman (input) for mailman id 742175;
 Mon, 17 Jun 2024 10:42:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9oJ-00071w-D5
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:42:03 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3ab186dc-2c96-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:42:01 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E7E405FEE9;
 Mon, 17 Jun 2024 10:42:00 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 84C9013AAA;
 Mon, 17 Jun 2024 10:42:00 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id bAPLHPgScGbeDQAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:42:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ab186dc-2c96-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620921; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ei0H6kpwHCrAVar/RhH6iZ8w6j4VD0fXtJ6uoOiivc4=;
	b=0aaHE0nDv3sFsKVo4RUNdgi1Gj4Z+MuxUZnX8NnbX4LzirK+9YxDEKeKTIvvb3Wd8UZxyQ
	pL02vk2jca8kXflRM+INPCSpkiMwMWo7zucPCyoXwRKjtHW7Td94vfcVrUznX/Wrnql6uU
	nbRjt2sHYPvoTYF1vldXy3sMrqyJ5ns=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620921;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ei0H6kpwHCrAVar/RhH6iZ8w6j4VD0fXtJ6uoOiivc4=;
	b=u/mMPT+20sxE/ovrpfH8HyTUV2cuE+R8tuWdBd19R5MUKmu37GlwIbXmeKfQurMrJvkHuR
	ct5GSA7YBBDQI5Ag==
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620920; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ei0H6kpwHCrAVar/RhH6iZ8w6j4VD0fXtJ6uoOiivc4=;
	b=xTEWwfzFhRWHcOXnCi0toiBdIbekgACZlSGjefUOBapq+sr5vsf4vQJQ1gCTgtm0E9xveU
	accWTuwIKPqiaNL6bALixViV7AYgXFVqL7fo2rJJx59ox9JSCXpgx4591/71uuV/xfscUt
	sB86pC4PSbMmA80wDdJRXCgbv9+51b4=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620920;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ei0H6kpwHCrAVar/RhH6iZ8w6j4VD0fXtJ6uoOiivc4=;
	b=qkTWtDj2KUeKlE0sNEFFUQfU+KX8KlrCG6SQVbv5pLDtOUA6x3/nFzgU57ULpz5g8bO5y2
	Ph8SFQJlrPTNjiAw==
Message-ID: <c33258a8-6ad2-4b8b-bd38-90c08d366c2c@suse.de>
Date: Mon, 17 Jun 2024 12:42:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 20/26] block: move the dax flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-21-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-21-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spamd-Result: default: False [-4.29 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[38];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	DBL_BLOCKED_OPENRESOLVER(0.00)[lst.de:email,imap1.dmz-prg2.suse.org:helo]
X-Spam-Flag: NO
X-Spam-Score: -4.29
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the dax flag into the queue_limits feature field so that it can be
> set atomically with the queue frozen.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq-debugfs.c       | 1 -
>   drivers/md/dm-table.c        | 4 ++--
>   drivers/nvdimm/pmem.c        | 7 ++-----
>   drivers/s390/block/dcssblk.c | 2 +-
>   include/linux/blkdev.h       | 6 ++++--
>   5 files changed, 9 insertions(+), 11 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:42:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742180.1148919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9or-0007ac-7D; Mon, 17 Jun 2024 10:42:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742180.1148919; Mon, 17 Jun 2024 10:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9or-0007aV-31; Mon, 17 Jun 2024 10:42:37 +0000
Received: by outflank-mailman (input) for mailman id 742180;
 Mon, 17 Jun 2024 10:42:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9op-0007aH-U3
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:42:35 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4ea91c22-2c96-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 12:42:35 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id AB3C95FEE7;
 Mon, 17 Jun 2024 10:42:34 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id D7F6813AAA;
 Mon, 17 Jun 2024 10:42:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id MDskNBkTcGb/DQAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:42:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ea91c22-2c96-11ef-90a3-e314d9c70b13
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <10acf40f-b4e0-40d1-ab6c-5c2baa178362@suse.de>
Date: Mon, 17 Jun 2024 12:42:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 21/26] block: move the poll flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-22-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-22-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 
X-Rspamd-Queue-Id: AB3C95FEE7
X-Rspamd-Action: no action
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the poll flag into the queue_limits feature field so that it can
> be set atomically with the queue frozen.
> 
> Stacking drivers are simplified in that they can now simply set the
> flag, and blk_stack_limits will clear it when the feature is not
> supported by any of the underlying devices.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-core.c              |  5 ++--
>   block/blk-mq-debugfs.c        |  1 -
>   block/blk-mq.c                | 31 +++++++++++---------
>   block/blk-settings.c          | 10 ++++---
>   block/blk-sysfs.c             |  4 +--
>   drivers/md/dm-table.c         | 54 +++++++++--------------------------
>   drivers/nvme/host/multipath.c | 12 +-------
>   include/linux/blkdev.h        |  4 ++-
>   8 files changed, 45 insertions(+), 76 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:43:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742186.1148927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9pP-00088L-Ea; Mon, 17 Jun 2024 10:43:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742186.1148927; Mon, 17 Jun 2024 10:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9pP-00088E-Bc; Mon, 17 Jun 2024 10:43:11 +0000
Received: by outflank-mailman (input) for mailman id 742186;
 Mon, 17 Jun 2024 10:43:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9pO-0007aH-6j
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:43:10 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de
 [2a07:de40:b251:101:10:150:64:2])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6325101d-2c96-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 12:43:09 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D41D35FEE9;
 Mon, 17 Jun 2024 10:43:08 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 23BEA13AAA;
 Mon, 17 Jun 2024 10:43:08 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id HfgcCDwTcGYwDgAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:43:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6325101d-2c96-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620989; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eS6cZAP8PrCUOUN3SU0P3EiUgWRFs6k3X4SwgCCySaI=;
	b=cmmSBPs25p4H4sEPwGAIYUiX2uwegnyNhIlwPRzQIpdoOZGj4JzaP3T8DaODLiPWw9mUSz
	s5RMqwmmPfI36ETol7BRoAzdTd/vWZnvteESUa9wQj/IFwaYT5ru7oXBeGITPuMeg/aVw4
	ZM+TU4wok4o8O2S9lA/KuJze2r5v9p0=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620989;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eS6cZAP8PrCUOUN3SU0P3EiUgWRFs6k3X4SwgCCySaI=;
	b=B92gbOLVQD6eAgdjn6Bzl4NFN7iNb0ySX50WmsPod9PQCFGJEWc7mxy6VV5cJV6T7cKH+y
	aEdGgm/8XNL9s8Cw==
Authentication-Results: smtp-out2.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718620988; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eS6cZAP8PrCUOUN3SU0P3EiUgWRFs6k3X4SwgCCySaI=;
	b=TiRVzeY4b8BgXR1AVgVsNj7xraN6gx3lVPXLO5ki0L3pyeW2PafUroYNXEvKyBSl6Ggs44
	O8c3ELPSkDfeSso6KRmUefFHpezgWAKZGNzsEQ4v1MvPsuF7iSYFfz+y7Q05o2mR/FYIfg
	IE68WQfGWmDYf2Hn74jpq+bs83fGxNo=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718620988;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eS6cZAP8PrCUOUN3SU0P3EiUgWRFs6k3X4SwgCCySaI=;
	b=t5CaYJdw/5jN1yYQGG3K3dgg+uYjoVndVeOMh4ZeKdHOePhVDEm+C37+u16j5art4bCLkv
	h7mou1Tr9IUL9RCg==
Message-ID: <23e0aac6-9af5-468f-a7d1-a331fe06c3a3@suse.de>
Date: Mon, 17 Jun 2024 12:43:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 22/26] block: move the zoned flag into the features field
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-23-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-23-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Score: -4.29
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-4.29 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[38];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,lst.de:email]

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the zoned flags into the features field to reclaim a little
> bit of space.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-settings.c           |  5 ++---
>   drivers/block/null_blk/zoned.c |  2 +-
>   drivers/block/ublk_drv.c       |  2 +-
>   drivers/block/virtio_blk.c     |  5 +++--
>   drivers/md/dm-table.c          | 11 ++++++-----
>   drivers/md/dm-zone.c           |  2 +-
>   drivers/md/dm-zoned-target.c   |  2 +-
>   drivers/nvme/host/zns.c        |  2 +-
>   drivers/scsi/sd_zbc.c          |  2 +-
>   include/linux/blkdev.h         |  9 ++++++---
>   10 files changed, 23 insertions(+), 19 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:43:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:43:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742193.1148938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9pv-0000Gc-QL; Mon, 17 Jun 2024 10:43:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742193.1148938; Mon, 17 Jun 2024 10:43:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9pv-0000GT-NP; Mon, 17 Jun 2024 10:43:43 +0000
Received: by outflank-mailman (input) for mailman id 742193;
 Mon, 17 Jun 2024 10:43:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9pt-0000G1-RE
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:43:41 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75f0b4d0-2c96-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 12:43:40 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 443995FEED;
 Mon, 17 Jun 2024 10:43:39 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 887F713AAA;
 Mon, 17 Jun 2024 10:43:38 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id tnC5IFoTcGZeDgAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:43:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75f0b4d0-2c96-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718621020; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IgK6RwhDUFxF8Iz2VUACB/sUS3MvYSpSBSy5xHHsOn8=;
	b=OYjxGhYyyQb45Cjpg2jj8Zng/KAeprTWq6By5kQsloM/+bjnB7MwfD1io5KsfcQWClfe8Q
	ux4gGhoADB3s5tZgNTTntmvUH+f2q37co3qYOXxa6m6ujU43jiPcjZo/iEjZBFTlDBcQVo
	3DDBlyYbqz5tCIxw6ctlNGuPk/57+1c=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718621020;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IgK6RwhDUFxF8Iz2VUACB/sUS3MvYSpSBSy5xHHsOn8=;
	b=vdHwFKFiaDsyrYLtzzb8VwXA1CKpHrjMPYXmpV6QfQj4yBBHTn74T2tcS3xdvNFziKYD/u
	5J7jvec2TpKhULAQ==
Authentication-Results: smtp-out2.suse.de;
	dkim=pass header.d=suse.de header.s=susede2_rsa header.b="mSpeQay/";
	dkim=pass header.d=suse.de header.s=susede2_ed25519 header.b=fz3pdYP1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718621019; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IgK6RwhDUFxF8Iz2VUACB/sUS3MvYSpSBSy5xHHsOn8=;
	b=mSpeQay/kNxbILiyXm2KQHqRDB0WEjJiP5+B0drP2AZJWzFOVLnMiWZSsPqTzNBteAAmcW
	oTHliSiV/s9mkHVBlKNRNcRAHWRM0376bOT0hhQfyd+gG5a95e7fhNg/oNNK2IJIt+sq23
	/bLdmEKEkrqqeWS3ZSGAJi8G3s95o/k=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718621019;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IgK6RwhDUFxF8Iz2VUACB/sUS3MvYSpSBSy5xHHsOn8=;
	b=fz3pdYP1xeuHSh+k87nz0CaWb9INbdnXjbTEwidcznuyNlrZEoTrB7Betlso9m6oK80ky4
	xDmnArveZJIbKADw==
Message-ID: <7b159e24-2926-4b3a-b204-5601b4098d32@suse.de>
Date: Mon, 17 Jun 2024 12:43:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 23/26] block: move the zone_resetall flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-24-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-24-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spamd-Result: default: False [-4.50 / 50.00];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	R_DKIM_ALLOW(-0.20)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	MX_GOOD(-0.01)[];
	RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from];
	RCPT_COUNT_TWELVE(0.00)[38];
	SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	MIME_TRACE(0.00)[0:+];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	ARC_NA(0.00)[];
	FROM_HAS_DN(0.00)[];
	TO_DN_SOME(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	FROM_EQ_ENVFROM(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received];
	R_RATELIMIT(0.00)[to_ip_from(RL7i91f5w8duz44ptrxmeazktf)];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	DKIM_TRACE(0.00)[suse.de:+];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,imap1.dmz-prg2.suse.org:rdns,lst.de:email]
X-Rspamd-Action: no action
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Queue-Id: 443995FEED
X-Spam-Flag: NO
X-Spam-Score: -4.50
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the zone_resetall flag into the queue_limits feature field so that
> it can be set atomically with the queue frozen.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq-debugfs.c         | 1 -
>   drivers/block/null_blk/zoned.c | 3 +--
>   drivers/block/ublk_drv.c       | 4 +---
>   drivers/block/virtio_blk.c     | 3 +--
>   drivers/nvme/host/zns.c        | 3 +--
>   drivers/scsi/sd_zbc.c          | 5 +----
>   include/linux/blkdev.h         | 6 ++++--
>   7 files changed, 9 insertions(+), 16 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:44:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:44:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742198.1148948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9qQ-0000oG-2w; Mon, 17 Jun 2024 10:44:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742198.1148948; Mon, 17 Jun 2024 10:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9qP-0000o9-V4; Mon, 17 Jun 2024 10:44:13 +0000
Received: by outflank-mailman (input) for mailman id 742198;
 Mon, 17 Jun 2024 10:44:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9qO-0000iv-31
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:44:12 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 879e6597-2c96-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:44:10 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 23BCE5FEF1;
 Mon, 17 Jun 2024 10:44:10 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id B5A2F13AAA;
 Mon, 17 Jun 2024 10:44:09 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 70jqK3kTcGZ+DgAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:44:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 879e6597-2c96-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718621050; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=siM3UuCseWBh/CBe/7d9FrlzrcJvhVMSOhKdgRyXxso=;
	b=s7paDZZVBRNdcFAQ16oQKsSkpX+26Uxtw+kewokSLFpaj5yQFHpEN78Tv/osc/OrBycvE5
	RCiR/PIm1PLSmpFRynIYOOclrcytpyHDo4HXtMPeBL5e6qpUe4Ouce4OyAZL2ULaHETYqu
	NCBoB90PdZM66VpPn1emRpRri3j4i6I=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718621050;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=siM3UuCseWBh/CBe/7d9FrlzrcJvhVMSOhKdgRyXxso=;
	b=E5BZWKGRbvowhF34WQLyeDkMaCHWluLJK85qeAVsoDqrwhv9wbmLzMQ4EfADgpjAT5kIr7
	7cDoWPqSUh7WLzDQ==
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <ee0a4868-93d1-4a58-9495-ddb4c63fba6e@suse.de>
Date: Mon, 17 Jun 2024 12:44:09 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 24/26] block: move the pci_p2pdma flag to queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-25-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-25-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spam-Score: -8.29
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[38];
	MID_RHS_MATCH_FROM(0.00)[];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_HAS_DN(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	FROM_EQ_ENVFROM(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.de:email,imap1.dmz-prg2.suse.org:helo,lst.de:email]

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the pci_p2pdma flag into the queue_limits feature field so that it
> can be set atomically with the queue frozen.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq-debugfs.c   | 1 -
>   drivers/nvme/host/core.c | 8 +++-----
>   include/linux/blkdev.h   | 7 ++++---
>   3 files changed, 7 insertions(+), 9 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:45:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:45:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742204.1148958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9rD-0001Ny-BA; Mon, 17 Jun 2024 10:45:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742204.1148958; Mon, 17 Jun 2024 10:45:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9rD-0001Nr-7B; Mon, 17 Jun 2024 10:45:03 +0000
Received: by outflank-mailman (input) for mailman id 742204;
 Mon, 17 Jun 2024 10:45:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9rC-00013f-Qp
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:45:02 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de
 [2a07:de40:b251:101:10:150:64:2])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a66df06c-2c96-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 12:45:02 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id ED0C85FEF2;
 Mon, 17 Jun 2024 10:45:01 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 470E513AAA;
 Mon, 17 Jun 2024 10:45:01 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id /B9dD60TcGbdDgAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:45:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a66df06c-2c96-11ef-90a3-e314d9c70b13
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <36bcda92-7c4c-41e6-978e-fb749b66607f@suse.de>
Date: Mon, 17 Jun 2024 12:45:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 25/26] block: move the skip_tagset_quiesce flag to
 queue_limits
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-26-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-26-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Spamd-Result: default: False [-4.00 / 50.00];
	REPLY(-4.00)[]
X-Rspamd-Queue-Id: ED0C85FEF2
X-Rspamd-Server: rspamd2.dmz-prg2.suse.org
X-Rspamd-Pre-Result: action=no action;
	module=replies;
	Message is reply to one we originated
X-Rspamd-Action: no action
X-Spam-Flag: NO
X-Spam-Score: -4.00
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the skip_tagset_quiesce flag into the queue_limits feature field so
> that it can be set atomically with the queue frozen.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-mq-debugfs.c   | 1 -
>   drivers/nvme/host/core.c | 8 +++++---
>   include/linux/blkdev.h   | 6 ++++--
>   3 files changed, 9 insertions(+), 6 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 10:45:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 10:45:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742208.1148968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9rl-0001tw-K5; Mon, 17 Jun 2024 10:45:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742208.1148968; Mon, 17 Jun 2024 10:45:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJ9rl-0001tp-GF; Mon, 17 Jun 2024 10:45:37 +0000
Received: by outflank-mailman (input) for mailman id 742208;
 Mon, 17 Jun 2024 10:45:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qWef=NT=suse.de=hare@srs-se1.protection.inumbo.net>)
 id 1sJ9rk-0001td-4x
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 10:45:36 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de
 [2a07:de40:b251:101:10:150:64:1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b97c72f6-2c96-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 12:45:34 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id CCF9C3804B;
 Mon, 17 Jun 2024 10:45:33 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 6C97B13AAA;
 Mon, 17 Jun 2024 10:45:33 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id xJQVGs0TcGYTDwAAD6G6ig
 (envelope-from <hare@suse.de>); Mon, 17 Jun 2024 10:45:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b97c72f6-2c96-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718621133; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=slP8wzcVm+UAH6cgQCtZYK/eQ7AbIkDnYf3KLR9axr8=;
	b=cHdgn9W4G9J0bE9J3jdrIA5h9r80guqMNBb5W5GdFASLapAtRmAO+S/TDk3J9mxlXYKQ3p
	XeyPNYiHIsartGEh8VHnpDVPl9PlSbixXhfd1N8WS6mi9euOFHAttCLDL2I6PiAjR9dt4c
	b8iVdUv2vEMMALrToXYEjc2bLtRNGCk=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718621133;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=slP8wzcVm+UAH6cgQCtZYK/eQ7AbIkDnYf3KLR9axr8=;
	b=j6x1ql3oipn4My7hzsZomUyOPKAHtZ3GuatVGjA1bA/HdSpLPFhhpg/i+I1E4ZCYoxxHNa
	kb4ijVWkbYZKZPAA==
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.de; s=susede2_rsa;
	t=1718621133; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=slP8wzcVm+UAH6cgQCtZYK/eQ7AbIkDnYf3KLR9axr8=;
	b=cHdgn9W4G9J0bE9J3jdrIA5h9r80guqMNBb5W5GdFASLapAtRmAO+S/TDk3J9mxlXYKQ3p
	XeyPNYiHIsartGEh8VHnpDVPl9PlSbixXhfd1N8WS6mi9euOFHAttCLDL2I6PiAjR9dt4c
	b8iVdUv2vEMMALrToXYEjc2bLtRNGCk=
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=suse.de;
	s=susede2_ed25519; t=1718621133;
	h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=slP8wzcVm+UAH6cgQCtZYK/eQ7AbIkDnYf3KLR9axr8=;
	b=j6x1ql3oipn4My7hzsZomUyOPKAHtZ3GuatVGjA1bA/HdSpLPFhhpg/i+I1E4ZCYoxxHNa
	kb4ijVWkbYZKZPAA==
Message-ID: <94db71a8-75ef-4490-a28a-aea26f6dd945@suse.de>
Date: Mon, 17 Jun 2024 12:45:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 26/26] block: move the bounce flag into the features field
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-27-hch@lst.de>
Content-Language: en-US
From: Hannes Reinecke <hare@suse.de>
In-Reply-To: <20240617060532.127975-27-hch@lst.de>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-Spamd-Result: default: False [-8.29 / 50.00];
	REPLY(-4.00)[];
	BAYES_HAM(-3.00)[100.00%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	DKIM_SIGNED(0.00)[suse.de:s=susede2_rsa,suse.de:s=susede2_ed25519];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	RCPT_COUNT_TWELVE(0.00)[38];
	MIME_TRACE(0.00)[0:+];
	ARC_NA(0.00)[];
	FROM_HAS_DN(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	RCVD_COUNT_TWO(0.00)[2];
	FROM_EQ_ENVFROM(0.00)[];
	TO_DN_SOME(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	R_RATELIMIT(0.00)[to_ip_from(RLex1noz7jcsrkfdtgx8bqesde)];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,lst.de:email,suse.de:email]
X-Spam-Flag: NO
X-Spam-Score: -8.29
X-Spam-Level: 

On 6/17/24 08:04, Christoph Hellwig wrote:
> Move the bounce flag into the features field to reclaim a little bit of
> space.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
>   block/blk-settings.c    | 1 -
>   block/blk.h             | 2 +-
>   drivers/scsi/scsi_lib.c | 2 +-
>   include/linux/blkdev.h  | 6 ++++--
>   4 files changed, 6 insertions(+), 5 deletions(-)
> 
Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
-- 
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@suse.de                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 11:19:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 11:19:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742226.1148978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJAO2-0000o9-1R; Mon, 17 Jun 2024 11:18:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742226.1148978; Mon, 17 Jun 2024 11:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJAO1-0000o2-Ub; Mon, 17 Jun 2024 11:18:57 +0000
Received: by outflank-mailman (input) for mailman id 742226;
 Mon, 17 Jun 2024 11:18:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gp4O=NT=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJAO0-0000nw-BZ
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 11:18:56 +0000
Received: from mail-vk1-xa2d.google.com (mail-vk1-xa2d.google.com
 [2607:f8b0:4864:20::a2d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 61200927-2c9b-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 13:18:54 +0200 (CEST)
Received: by mail-vk1-xa2d.google.com with SMTP id
 71dfb90a1353d-4ed0b3c6a76so1210536e0c.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 04:18:54 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abc02e72sm420421485a.96.2024.06.17.04.18.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 04:18:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61200927-2c9b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718623133; x=1719227933; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=aD/kHSHDd1AmHQBBqxIXCmE4l2MuaIQ5nOaIbQOtiMU=;
        b=ggSMDROCp6jUnbf3ULjBzZruTcTQPLCJT7zon/sLllBSIW3RGGvBtEavFA4Amky+qf
         DCLXx9n0VPgZtaTDTvzDZcs0ArDzBP9lAU4ClOljcb0M48Z2O7j7HnNuGrIsU71RRLDI
         A6fCiFQbGdlrfTXaQKp9ZIJtvvpCwesUjDSIE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718623133; x=1719227933;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aD/kHSHDd1AmHQBBqxIXCmE4l2MuaIQ5nOaIbQOtiMU=;
        b=LxuTEQ/Q82eGYiQaJDAeEkq90rC8/G+Fve8EOgbcYnNitcVIZPRS9eNDWN/RTCxxQe
         5uoLFBHuWtioYRf5jU7kl5QmVuUsqH2tfy5V0cXG3ltqxQd7O3EN6jAJ/5sBDKsZK3Dw
         nLL5OxpqGb8D9G0o3Brrroc43cas6YhR+wklxKPxIfAqd+Q7gnhCTUKqP8uRpv34gX94
         D4p6TOF20DDkz3UGHlKPzTHBi30hPs2QUYatjsIPw8assheocMR2fNSFotwapgbZZH5O
         PqgNkSvhljzxmL9/tN1tjaV9+IIkj3bebL7UqlmhWO7bIB3d8Dv0Hh3i3o+R/4kgHM9M
         zAFg==
X-Forwarded-Encrypted: i=1; AJvYcCXiLlWcyCw14CLjavmKvUB2Uk47Ugv8EvhLWrJgzw2I5CWvq6D90258+TfebQ2DVQVSppeT9G+guVY4uqX3gru4lDo5KTITm/dqR01Jess=
X-Gm-Message-State: AOJu0Ywc29XlOoQE02ucvuURvYYJMYBfRbr371bWPDtuttssUZsn4jGA
	hxfHFhoUsWS4qxBk+W/eb4GthqkcOl7ck3FBvx9aeb/I9nwJqoTpqCUaaAUVCME=
X-Google-Smtp-Source: AGHT+IGqQGKg93phNICfTYyaI/i8v/ia8Qu0b/LtSpc/0xgA82aGOkgtYqxlG6z59o1AoEbTTG6Vtg==
X-Received: by 2002:a05:6122:310d:b0:4e4:e9db:6b10 with SMTP id 71dfb90a1353d-4ee3db8b0f0mr8755637e0c.2.1718623132710;
        Mon, 17 Jun 2024 04:18:52 -0700 (PDT)
Message-ID: <5746769a-6868-48ed-a58b-f8f75f17965a@citrix.com>
Date: Mon, 17 Jun 2024 12:18:48 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: for_each_set_bit() clean-up (API RFC)
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Michal Orzel <michal.orzel@amd.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <46abec6c-ebe9-4426-865e-5513107949be@citrix.com>
 <5e2b6c55-8d6d-4153-8632-a6608cd41012@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <5e2b6c55-8d6d-4153-8632-a6608cd41012@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17/06/2024 10:54 am, Jan Beulich wrote:
> On 14.06.2024 19:07, Andrew Cooper wrote:
>> More fallout from looking at the code generation...
>>
>> for_each_set_bit() forces its bitmap parameter out into memory.  For an
>> arbitrary-sized bitmap, this is fine - and likely preferable, as it's
>> in memory to begin with.
>>
>> However, more than half the current users of for_each_set_bit() are
>> operating over a single int/long, and this too is spilled to the
>> stack.  Worse, x86 seems to be the only architecture which tries (but
>> not very well) to optimise find_{first,next}_bit() for GPR-sized
>> quantities, meaning that for_each_set_bit() hides 2 backing function calls.
>>
>> The ARM (v)GIC code in particular suffers horribly because of this.
>>
>> We also have several interesting opencoded forms:
>> * evtchn_check_pollers() is a (preprocessor identical) opencoding.
>> * hvm_emulate_writeback() is equivalent.
>> * for_each_vp() exists just to hardcode a constant and swap the other
>> two parameters.
>>
>> and several other forms which I think could be expressed more cleanly
>> as for_each_set_bit().
> I agree.
>
>> We also have the while()/ffs() forms which are "just" for_each_set_bit()
>> and some even manage to not spill their main variable to memory.
>>
>>
>> I want to get to a position where there is one clear API to use, and
>> that the compiler will handle nicely.  Xen's code generation will
>> definitely improve as a consequence.
>>
>>
>> Sadly, transforming the ideal while()/ffs() form into a for() loop is a
>> bit tricky.  This works:
>>
>> for ( unsigned int v = (val), (bit);
>>       v;
>>       v &= v - 1 )
>> if ( 1 )
>> {
>>     (bit) = ffs(v) - 1;
>>     goto body;
>> }
>> else
>>     body:
>>
>> which is a C metaprogramming trick borrowed from PuTTY to make:
>>
>> for_each_BLAH ( bit, val )
>> {
>>     // nice loop body
>> }
>>
>> work, while having the ffs() calculated logically within the loop body.
> What's wrong with
>
> #define for_each_set_bit(iter, val) \
>     for ( unsigned int v_ = (val), iter; \
>           v_ && ((iter) = ffs(v_) - 1, true); \
>           v_ &= v_ - 1 )
>
> ? I'll admit though that it's likely a matter of taste which one is
> "uglier". Yet I'd be in favor of avoiding the scope trickery.

Oh, of course.

FWIW, Frediano pointed out a form without the goto, but this is better
still.

It's certainly less bad than having to also explain the metaprogramming
to get scope working nicely.


>> The first issue I expect people to have with the above is the raw 'body'
>> label, although with a macro that can be fixed using body_ ## __COUNTER__.
>>
>> A full example is https://godbolt.org/z/oMGfah696 although a real
>> example in Xen is going to have to be variadic for at least ffs() and
>> ffsl().
> How would variadic-ness help with this? Unless we play some type
> trickery (like typeof((val) + 0U), thus yielding at least an unsigned,
> but an unsigned long if the incoming value is such, followed by a
> compile-time conditional operator to select between ffs() and ffsl()),
> I don't think we'd get away with just a single construct for both the
> int and long (for Arm32: long long) cases.

Ideally we want _Generic() but we can't have that yet.

In lieu of that, I was thinking of a chain of __builtin_choose_expr()
instantiating variants for int/long/long-long.

The complication is that we need a double for() loop for the
long/long-long cases in order to declare the iterator as strictly
unsigned int. Without this, a CLTQ gets emitted to extend the result of
the ffs*() call. This https://godbolt.org/z/Px88EWdPb ought to do. I'll
probably end up expressing that as __for_each_set_bit() taking in a type
derived from typeof(), and an ffs*() variant to use.
>> Now, from an API point of view, it would be lovely if we could make a
>> single for_each_set_bit() which covers both cases, and while I can
>> distinguish the two forms by whether there are 2 or 3 args,
> With the 3-argument form specifying the number of bits in the 3rd arg?
> I'd fear such mixed uses may end up confusing.
>
>> I expect
>> MISRA is going to have a fit at that.  Also there's a difference based
>> on the scope of 'bit' and also whether modifications to 'val' in the
>> loop body take effect on the loop condition (they don't because a copy
>> is taken).
>>
>> So I expect everyone is going to want a new API to use here.  But what
>> to call it?
>>
>> More than half of the callers in Xen really want the GPR form, so we
>> could introduce a new bitmap_for_each_set_bit(), move all the callers
>> over, then introduce a "new" for_each_set_bit() which is only of the GPR
>> form.
>>
>> Or does anyone want to suggest an alternative name?
> I'd be okay-ish with those, maybe with slight shortening to bitmap_for_each()
> or bitmap_for_each_set().

Let's go with bitmap_for_each().  While we've got some examples looking
for the first clear bit, I don't believe we've got any examples wanting
to loop over all clear bits.

Are you happy repurposing for_each_set_bit() to be a 2-argument macro
operating strictly on a single GPR?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 11:22:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 11:22:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742235.1148989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJARg-0002UF-Kg; Mon, 17 Jun 2024 11:22:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742235.1148989; Mon, 17 Jun 2024 11:22:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJARg-0002U8-GC; Mon, 17 Jun 2024 11:22:44 +0000
Received: by outflank-mailman (input) for mailman id 742235;
 Mon, 17 Jun 2024 11:22:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJARe-0002S7-Tx
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 11:22:42 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e8a91a34-2c9b-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 13:22:40 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a6f8ebbd268so52559666b.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 04:22:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f6b2b04f5sm356461566b.192.2024.06.17.04.22.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 04:22:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e8a91a34-2c9b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718623360; x=1719228160; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=WeL/w+Lw/tfH+gPjLo/DZiUR7+US6eJWedp8tMKIQJI=;
        b=OtSdzJ8g5WSK8ejvStNXssSWXRuEoj3vgRUYUq95ddYmP7tVxqILxt2fdlm9kukrfj
         HdDlgOgtkfcc8aYI10bHghFoftcdYsO1fqd/MLFXk4Jf1LPcTJbUz/qVz8SHB3m7yZJH
         90UVUWbTvsM+8mdZNaxXP+9GRMwdnJARrkRSkJu0opgsU3QIwknQBUxF+4bWp4SazMmD
         HtGyLhmPSKwG1b5i0HjEOqi9RvflTl6Hqg5gdD8cGe+AwPpxNZ/5rfZKRGPNwqw0Lxz3
         oyCi8H/Ox8Bg/5YmVNMt65LcyPVtZa2AOUS8F4Dc4bcV957DHg7u4g3BjUSwTrTUswDr
         Twag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718623360; x=1719228160;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=WeL/w+Lw/tfH+gPjLo/DZiUR7+US6eJWedp8tMKIQJI=;
        b=ee2yqlZDkYA8y7ItEoXaO2nqTej49BoHOWW7AGfRga3YFmI3p5Sq26HXmKD5LNHzFw
         reDZYpnuOFCi57cc3g8UAVK3rAk1dGUDfjt99XdyIASWy3cLyEcx0MEWj7A7CS6pQ5qH
         5TntvWYHSfZYS65OUC0KfB/iWsR7oR1Hp3zR3sK405O4jhKgpbghRM8fHVJS/Hke2eqM
         NNft0rbq2x0a5d+O2q2pbctI94+jFZNeQNtslSBMNP9wNkpt0nocNU6LlGu2v502wJST
         cCvJQGdxzIZeDqyVQz81U7qYMa7lsCoej7Pf7wZ9nCrIaC9dc9x7JFniAM3RGMkB33B9
         R2Xw==
X-Gm-Message-State: AOJu0Yz0jXxKVIIJl2V3UoVgoPhUz2xH2lM1+potmwntsEFiEETtrWUd
	qzoiDi0HNBKTvcGxwslLk14bPCx4bgxX79+mfsp74jl+IsztfgfDkT5OyQ3ZIffF4bq5qy1PY4c
	=
X-Google-Smtp-Source: AGHT+IFmKpg4saMv7jrHZDM9iJOTh0sI61WnQiO+PFfwP+GFiamq2KErJ+Xpzud/CEsoLiWUiqgm/w==
X-Received: by 2002:a17:907:6e87:b0:a6f:7707:b846 with SMTP id a640c23a62f3a-a6f7707b91bmr479569966b.15.1718623360213;
        Mon, 17 Jun 2024 04:22:40 -0700 (PDT)
Message-ID: <a5a8a016-2107-46fb-896b-2baaf66566d4@suse.com>
Date: Mon, 17 Jun 2024 13:22:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: ACPI NVS range conflicting with Dom0 page tables (or kernel image)
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello,

while it feels like we had a similar situation before, I can't seem to
find traces thereof, or any associated (Linux) commits.

With

(XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x4000000
...
(XEN)  Dom0 alloc.:   0000000440000000->0000000448000000 (619175 pages to be allocated)
...
(XEN)  Loaded kernel: ffffffff81000000->ffffffff84000000

the kernel occupies the space from 16Mb to 64Mb in the initial allocation.
Page tables come (almost) directly above:

(XEN)  Page tables:   ffffffff84001000->ffffffff84026000

I.e. they're just above the 64Mb boundary. Yet sadly in the host E820 map
there is

(XEN)  [0000000004000000, 0000000004009fff] (ACPI NVS)

i.e. a non-RAM range starting at 64Mb. The kernel (currently) won't tolerate
such an overlap (nor one overlapping the kernel image, e.g. if a
sufficiently larger kernel was used on the machine in question). Yet with its
fundamental goal of making its E820 match the host one, I'm also having
trouble thinking of possible solutions / workarounds. I certainly do not see
Xen trying to cover for this, as the E820 map re-arrangement is purely a
kernel-side decision (forward-ported kernels got away without it, and what
e.g. the BSDs do is entirely unknown to me).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 11:50:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 11:50:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742242.1148997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJAsu-0000xQ-Nj; Mon, 17 Jun 2024 11:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742242.1148997; Mon, 17 Jun 2024 11:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJAsu-0000xJ-Kn; Mon, 17 Jun 2024 11:50:52 +0000
Received: by outflank-mailman (input) for mailman id 742242;
 Mon, 17 Jun 2024 11:50:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJAst-0000vv-Gs
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 11:50:51 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d5d5cacd-2c9f-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 13:50:47 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id
 4fb4d7f45d1cf-57c7ec8f1fcso4930869a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 04:50:47 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72da790sm6287464a12.35.2024.06.17.04.50.45
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 04:50:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5d5cacd-2c9f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718625047; x=1719229847; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=FxqEqOYubojEKCidWitbRztrsng+kGn7n5ru+wAjWu4=;
        b=UeH2DP0+BPzRBBz5e1NqKT57mOAlZgDADp2h0tkB49xis9L5AtFrmnWXrj+PhSg5yb
         +wvoRPYCefmmneeKmqqB0n3kWsVwLN4ifDXNUNpV1wEcY2hNGPRndwapryjWi+vlgz1I
         Q4RqF56osLbD/ecfJLwL3gDSFUO6Zy/llIVaJTqCQxnNCVFif/sznIsUJYig3byTbFzx
         2zmOU9wVdKjAHXVc/QdlWYsFSy5pQ7gQUED7S2bcrXQUGSy6GMA7UqZh9NrPhgtVR2yH
         i3Sj8EVQyBhHEyc6E5dq/LmV1jLHeVQzYHaq0TKc47o9g5Q+FwKsmRBn4twSWUankeJ/
         taeg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718625047; x=1719229847;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FxqEqOYubojEKCidWitbRztrsng+kGn7n5ru+wAjWu4=;
        b=spMpM25RH71DlX141tLSAZ/sgM53RABH9utVroFqAIl2qFjmXf8aCMk+ecgtY5wNO/
         EjSvBhtthJLE2ZwNe7vGTtoZkG0Kha/m/46IGl8JWeoCmZ+cO+3EN5qHTYDMmWx3Kaki
         AvZ4J9yVDp1S9q/FY9jN0DnFTjo+44VnhrXYxQ73TmInmN/iotpPTqrWuZxDJ3/Brthn
         3/PuHSyisFliA755ggfz0LUQoI/NLKljg19aVXlBcGGzNtYyqFZyrGT59a6sPMQyMJBJ
         ii4aMJEDILopk9xRUX3/vJccU4EieVYlLnjb6sywwNaRKTeGAKdZV8aSHjFVepdHVmM2
         J+Ag==
X-Forwarded-Encrypted: i=1; AJvYcCXW+xV5r3q6k6XeM/HmT4npzBl3fppNtANBl3fVoyVbDedkoCf6q3HxjC/2Hkn9zB75OpSWxhlcTOiukIFUA1vfc3n8bX78fmnXtZbVXv0=
X-Gm-Message-State: AOJu0YyCbAYhrZYgBCG+/ZPrBY7zjcwi2T2ozSU+8keFKdEh7lWX36rL
	X4GIHhD9p68q/Hfab4o57UWCk2mWSMordUKkdxC52CG8hTZ6VW2IzPnGnFACng==
X-Google-Smtp-Source: AGHT+IGF953aUAHGv06+vvTlpgMejmOWWOK64EY+jPjgESVI4Xnt1Qp/n0nOHfCDwKobPPMhniOUiQ==
X-Received: by 2002:a50:d59b:0:b0:57c:6d58:e1f1 with SMTP id 4fb4d7f45d1cf-57cbd65246emr6125379a12.9.1718625046534;
        Mon, 17 Jun 2024 04:50:46 -0700 (PDT)
Message-ID: <a47db485-52df-42a6-a045-0d695b74c3eb@suse.com>
Date: Mon, 17 Jun 2024 13:50:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: for_each_set_bit() clean-up (API RFC)
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Michal Orzel <michal.orzel@amd.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <46abec6c-ebe9-4426-865e-5513107949be@citrix.com>
 <5e2b6c55-8d6d-4153-8632-a6608cd41012@suse.com>
 <5746769a-6868-48ed-a58b-f8f75f17965a@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <5746769a-6868-48ed-a58b-f8f75f17965a@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.06.2024 13:18, Andrew Cooper wrote:
> On 17/06/2024 10:54 am, Jan Beulich wrote:
>> On 14.06.2024 19:07, Andrew Cooper wrote:
>>> More fallout from looking at the code generation...
>>>
>>> for_each_set_bit() forces its bitmap parameter out into memory.  For an
>>> arbitrary-sized bitmap, this is fine - and likely preferable, as it's
>>> in memory to begin with.
>>>
>>> However, more than half the current users of for_each_set_bit() are
>>> operating over a single int/long, and this too is spilled to the
>>> stack.  Worse, x86 seems to be the only architecture which tries (but
>>> not very well) to optimise find_{first,next}_bit() for GPR-sized
>>> quantities, meaning that for_each_set_bit() hides 2 backing function calls.
>>>
>>> The ARM (v)GIC code in particular suffers horribly because of this.
>>>
>>> We also have several interesting opencoded forms:
>>> * evtchn_check_pollers() is a (preprocessor identical) opencoding.
>>> * hvm_emulate_writeback() is equivalent.
>>> * for_each_vp() exists just to hardcode a constant and swap the other
>>> two parameters.
>>>
>>> and several others forms which I think could be expressed more cleanly
>>> as for_each_set_bit().
>> I agree.
>>
>>> We also have the while()/ffs() forms which are "just" for_each_set_bit()
>>> and some even manage to not spill their main variable to memory.
>>>
>>>
>>> I want to get to a position where there is one clear API to use, and
>>> that the compiler will handle nicely.  Xen's code generation will
>>> definitely improve as a consequence.
>>>
>>>
>>> Sadly, transforming the ideal while()/ffs() form into a for() loop is a
>>> bit tricky.  This works:
>>>
>>> for ( unsigned int v = (val), (bit);
>>>       v;
>>>       v &= v - 1 )
>>> if ( 1 )
>>> {
>>>     (bit) = ffs(v) - 1;
>>>     goto body;
>>> }
>>> else
>>>     body:
>>>
>>> which is a C metaprogramming trick borrowed from PuTTY to make:
>>>
>>> for_each_BLAH ( bit, val )
>>> {
>>>     // nice loop body
>>> }
>>>
>>> work, while having the ffs() calculated logically within the loop body.
>> What's wrong with
>>
>> #define for_each_set_bit(iter, val) \
>>     for ( unsigned int v_ = (val), iter; \
>>           v_ && ((iter) = ffs(v_) - 1, true); \
>>           v_ &= v_ - 1 )
>>
>> ? I'll admit though that it's likely a matter of taste which one is
>> "uglier". Yet I'd be in favor of avoiding the scope trickery.
> 
> Oh, of course.
> 
> FWIW, Frediano pointed out a form without the goto, but this is better
> still.
> 
> It's certainly less bad than having to also explain the metaprogramming
> to get scope working nicely.
> 
> 
>>> The first issue I expect people to have with the above is the raw 'body'
>>> label, although with a macro that can be fixed using body_ ## __COUNTER__.
>>>
>>> A full example is https://godbolt.org/z/oMGfah696 although a real
>>> example in Xen is going to have to be variadic for at least ffs() and
>>> ffsl().
>> How would variadic-ness help with this? Unless we play some type
>> trickery (like typeof((val) + 0U), thus yielding at least an unsigned,
>> but an unsigned long if the incoming value is such, followed by a
>> compile-time conditional operator to select between ffs() and ffsl()),
>> I don't think we'd get away with just a single construct for both the
>> int and long (for Arm32: long long) cases.
> 
> Ideally we want _Generic() but we can't have that yet.
> 
> In lieu of that, I was thinking a chain of __builtin_choose_expr()
> instantiating variants for int/long/long-long.

Is __builtin_choose_expr() going to be a win over the conditional operator?
All forms of ffs*() return unsigned int, so the main difference between the
two is not interesting here.

> The complication is that we need a double for() loop for the
> long/long-long in order to declare the iterator as strictly unsigned
> int.  Without this, a CLTQ gets emitted to extend the result of the ffs*()
> call. This https://godbolt.org/z/Px88EWdPb ought to do. I'll probably
> end up expressing that as __for_each_set_bit() taking in a type derived
> from typeof(), and an ffs*() variant to use.

The double for isn't very nice, but what can you do. If it's to be kept as
a helper, then for for_each_set_bit_uint() I'd suggest, however, avoiding
typeof() and using unsigned int directly. I'm not sure, though, that helper
constructs are really going to be needed, other than perhaps to aid
readability.

>>> Now, from an API point of view, it would be lovely if we could make a
>>> single for_each_set_bit() which covers both cases, and while I can
>>> distinguish the two forms by whether there are 2 or 3 args,
>> With the 3-argument form specifying the number of bits in the 3rd arg?
>> I'd fear such mixed uses may end up confusing.
>>
>>> I expect
>>> MISRA is going to have a fit at that.  Also there's a difference based
>>> on the scope of 'bit' and also whether modifications to 'val' in the
>>> loop body take effect on the loop condition (they don't because a copy
>>> is taken).
>>>
>>> So I expect everyone is going to want a new API to use here.  But what
>>> to call it?
>>>
>>> More than half of the callers in Xen really want the GPR form, so we
>>> could introduce a new bitmap_for_each_set_bit(), move all the callers
>>> over, then introduce a "new" for_each_set_bit() which is only of the GPR
>>> form.
>>>
>>> Or does anyone want to suggest an alternative name?
>> I'd be okay-ish with those, maybe with slight shortening to bitmap_for_each()
>> or bitmap_for_each_set().
> 
> Let's go with bitmap_for_each().  While we've got some examples looking
> for the first clear bit, I don't believe we've got any examples wanting
> to loop over all clear bits.
> 
> Are you happy repurposing for_each_set_bit() to be a 2-argument macro
> operating strictly on a single GPR?

As to re-purposing - yes, I think so. The construct is semantically different
enough from what we have right now that, during backporting, I believe the
compiler will flag any unconverted uses.

The "single GPR" aspect worries me, though: I think we want it to be "scalar",
permitting Arm32 to also operate on long long / uint64_t. Eventually, if need
be, we could even introduce a uint128_t form (backed by a suitable ffs128()).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 12:35:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 12:35:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742251.1149007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJBZO-0000v8-0C; Mon, 17 Jun 2024 12:34:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742251.1149007; Mon, 17 Jun 2024 12:34:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJBZN-0000v1-U0; Mon, 17 Jun 2024 12:34:45 +0000
Received: by outflank-mailman (input) for mailman id 742251;
 Mon, 17 Jun 2024 12:34:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YyRo=NT=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sJBZM-0000uv-ED
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 12:34:44 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f7e595a2-2ca5-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 14:34:41 +0200 (CEST)
Received: from [10.176.134.52] (unknown [160.78.253.153])
 by support.bugseng.com (Postfix) with ESMTPSA id D36284EE0738;
 Mon, 17 Jun 2024 14:34:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7e595a2-2ca5-11ef-b4bb-af5377834399
Message-ID: <cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com>
Date: Mon, 17 Jun 2024 14:34:38 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] xen: add explicit comment to identify notifier
 patterns
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <96a1b98d7831154c58d39b85071b9670de94aed0.1718617636.git.federico.serafini@bugseng.com>
 <058a6cc6-bf84-4140-a3fb-12ec648536b0@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <058a6cc6-bf84-4140-a3fb-12ec648536b0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 17/06/24 12:03, Jan Beulich wrote:
> On 17.06.2024 11:49, Federico Serafini wrote:
>> MISRA C Rule 16.4 states that every `switch' statement shall have a
>> `default' label, and a statement or a comment prior to the
>> terminating break statement.
>>
>> This patch addresses some violations of the rule related to the
>> "notifier pattern": a frequently-used pattern whereby only a few values
>> are handled by the switch statement and nothing should be done for
>> others (nothing to do in the default case).
>>
>> No functional change.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> I guess I shouldn't outright NAK this, but I certainly won't ack it. This
> is precisely the purely mechanical change that in earlier discussions some
> (including me) have indicated isn't going to help safety. However, if
> others want to ack something purely mechanical like this, then my minimal
> requirement would be that somewhere it is spelled out what falls under
> 
>> ---
>>   xen/arch/arm/cpuerrata.c            | 1 +
>>   xen/arch/arm/gic.c                  | 1 +
>>   xen/arch/arm/irq.c                  | 4 ++++
> 
> gic-v3-lpi.c has a similar instance, yet you don't adjust that. This may
> be because that possibly is the one where it was previously indicated that
> it may in fact be a mistake that the dying/dead case isn't handled, but
> then at the very least I'd have expected that you explicitly mention cases
> where the adjustment is (deliberately) not made.

I made the changes according to the instructions in docs/misra/rules.rst
for Rule 16.4, and I touched only the files having unjustified
violations of the rule.
The violation triggered by gic-v3-lpi.c is currently deviated with the
following justification: "A switch statement with a single switch clause
and no default label may be used in place of an equivalent if statement
if it is considered to improve readability."
However, I agree with you that gic-v3.c should also be made consistent
with the others.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:04:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742261.1149018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJC2M-0006Zd-6U; Mon, 17 Jun 2024 13:04:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742261.1149018; Mon, 17 Jun 2024 13:04:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJC2M-0006ZW-3l; Mon, 17 Jun 2024 13:04:42 +0000
Received: by outflank-mailman (input) for mailman id 742261;
 Mon, 17 Jun 2024 13:04:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJC2L-0006ZM-3v; Mon, 17 Jun 2024 13:04:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJC2K-0006ju-Ma; Mon, 17 Jun 2024 13:04:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJC2K-0000i9-As; Mon, 17 Jun 2024 13:04:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJC2K-0003Kd-AM; Mon, 17 Jun 2024 13:04:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7FxdBd7m5UQjr/hapXaVXzVgvEb4J6qdt1PksCJ7xqk=; b=HQlEAqjMPDAhqULfYrNOnlp7ll
	2CxuKTjsyuZOa3MaA3HzhakPDiXazDs2NMHXKXJ0pvHl2d+NNynCw2kC34q72rQmnriluAGQ6Xui5
	vqBXfd1WPT79MrE/owXwivOU8E7dfSRFaveBC5C5fBk3u2SY31k6PK5rIc77uG7BEf1g=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186377-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186377: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6ba59ff4227927d3a8530fc2973b80e94b54d58f
X-Osstest-Versions-That:
    linux=b5beaa44747bddbabb338377340244f56465cd7d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 13:04:40 +0000

flight 186377 linux-linus real [real]
flight 186380 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186377/
http://logs.test-lab.xenproject.org/osstest/logs/186380/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-qcow2     8 xen-boot                 fail REGR. vs. 186372

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186372
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186372
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186372
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6ba59ff4227927d3a8530fc2973b80e94b54d58f
baseline version:
 linux                b5beaa44747bddbabb338377340244f56465cd7d

Last test of basis   186372  2024-06-16 18:43:40 Z    0 days
Testing same since   186377  2024-06-17 02:49:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andi Shyti <andi.shyti@kernel.org>
  Helge Deller <deller@gmx.de>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jean Delvare <jdelvare@suse.de>
  John David Anglin <dave.anglin@bell.net>
  John David Anglin <dave@parisc-linux.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 6ba59ff4227927d3a8530fc2973b80e94b54d58f
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Jun 16 13:40:16 2024 -0700

    Linux 6.10-rc4

commit 6456c4256d1cf1591634b39e58bced37539d35b1
Merge: 4301487e6b25 72d95924ee35
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Jun 16 11:50:16 2024 -0700

    Merge tag 'parisc-for-6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux
    
    Pull parisc fix from Helge Deller:
     "On parisc we have suffered for years from random segfaults which
      seem to have been triggered by cache inconsistencies. Those
      segfaults happened more often on machines with PA8800 and PA8900 CPUs,
      which have much bigger caches than the earlier machines.
    
      Dave Anglin has worked over the last few weeks to fix this bug. His
      patch has been successfully tested by various people on various
      machines and with various kernels (6.6, 6.8 and 6.9), and the debian
      buildd servers haven't shown a single random segfault with this patch.
    
      Since the cache handling has been reworked, the patch is slightly
      bigger than I would like at this stage, but the greatly improved
      stability IMHO justifies the inclusion now"
    
    * tag 'parisc-for-6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux:
      parisc: Try to fix random segmentation faults in package builds

commit 4301487e6b25276e0270a7547150e0304da2ba78
Merge: b5beaa44747b 7e9bb0cb50fe
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Jun 16 11:37:38 2024 -0700

    Merge tag 'i2c-for-6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux
    
    Pull i2c fixes from Wolfram Sang:
     "Two fixes to correctly report i2c functionality, ensuring that
      I2C_FUNC_SLAVE is reported when a device operates solely as a slave
      interface"
    
    * tag 'i2c-for-6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/wsa/linux:
      i2c: designware: Fix the functionality flags of the slave-only interface
      i2c: at91: Fix the functionality flags of the slave-only interface

commit 7e9bb0cb50fec5d287749a58de5bb32220881b46
Merge: 83a7eefedc9b cbf3fb5b29e9
Author: Wolfram Sang <wsa+renesas@sang-engineering.com>
Date:   Sun Jun 16 12:48:30 2024 +0200

    Merge tag 'i2c-host-fixes-6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/andi.shyti/linux into i2c/for-current
    
    Two fixes from Jean aim to correctly report i2c functionality,
    specifically ensuring that I2C_FUNC_SLAVE is reported when a
    device operates solely as a slave interface.

commit cbf3fb5b29e99e3689d63a88c3cddbffa1b8de99
Author: Jean Delvare <jdelvare@suse.de>
Date:   Fri May 31 11:17:48 2024 +0200

    i2c: designware: Fix the functionality flags of the slave-only interface
    
    When an I2C adapter acts only as a slave, it should not claim to
    support I2C master capabilities.
    
    Fixes: 5b6d721b266a ("i2c: designware: enable SLAVE in platform module")
    Signed-off-by: Jean Delvare <jdelvare@suse.de>
    Cc: Luis Oliveira <lolivei@synopsys.com>
    Cc: Jarkko Nikula <jarkko.nikula@linux.intel.com>
    Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Cc: Mika Westerberg <mika.westerberg@linux.intel.com>
    Cc: Jan Dabros <jsd@semihalf.com>
    Cc: Andi Shyti <andi.shyti@kernel.org>
    Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
    Acked-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
    Tested-by: Jarkko Nikula <jarkko.nikula@linux.intel.com>
    Signed-off-by: Andi Shyti <andi.shyti@kernel.org>

commit d6d5645e5fc1233a7ba950de4a72981c394a2557
Author: Jean Delvare <jdelvare@suse.de>
Date:   Fri May 31 11:19:14 2024 +0200

    i2c: at91: Fix the functionality flags of the slave-only interface
    
    When an I2C adapter acts only as a slave, it should not claim to
    support I2C master capabilities.
    
    Fixes: 9d3ca54b550c ("i2c: at91: added slave mode support")
    Signed-off-by: Jean Delvare <jdelvare@suse.de>
    Cc: Juergen Fitschen <me@jue.yt>
    Cc: Ludovic Desroches <ludovic.desroches@microchip.com>
    Cc: Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
    Cc: Andi Shyti <andi.shyti@kernel.org>
    Cc: Nicolas Ferre <nicolas.ferre@microchip.com>
    Cc: Alexandre Belloni <alexandre.belloni@bootlin.com>
    Cc: Claudiu Beznea <claudiu.beznea@tuxon.dev>
    Signed-off-by: Andi Shyti <andi.shyti@kernel.org>

commit 72d95924ee35c8cd16ef52f912483ee938a34d49
Author: John David Anglin <dave@parisc-linux.org>
Date:   Mon Jun 10 18:47:07 2024 +0000

    parisc: Try to fix random segmentation faults in package builds
    
    PA-RISC systems with PA8800 and PA8900 processors have had problems
    with random segmentation faults for many years.  Systems with earlier
    processors are much more stable.
    
    Systems with PA8800 and PA8900 processors have a large L2 cache which
    needs per page flushing for decent performance when a large range is
    flushed. The combined cache in these systems is also more sensitive to
    non-equivalent aliases than the caches in earlier systems.
    
    The majority of random segmentation faults that I have looked at
    appear to be memory corruption in memory allocated using mmap and
    malloc.
    
    My first attempt at fixing the random faults didn't work. On
    reviewing the cache code, I realized that there were two issues
    which the existing code didn't handle correctly. Both relate
    to cache move-in. Another issue is that the present bit in PTEs
    is racy.
    
    1) PA-RISC caches have a mind of their own and they can speculatively
    load data and instructions for a page as long as there is an entry in
    the TLB for the page which allows move-in. TLBs are local to each
    CPU. Thus, the TLB entry for a page must be purged before flushing
    the page. This is particularly important on SMP systems.
    
    In some of the flush routines, the flush was performed first and the
    TLB entry purged afterwards, because the flush routine needed the
    TLB entry to do the flush.
    
    2) My initial approach to fixing the random faults was to try to use
    flush_cache_page_if_present for all flush operations.
    This actually made things worse and led to a couple of hardware
    lockups. It finally dawned on me that some lines weren't being
    flushed because the pte check code was racy. This resulted in
    random inequivalent mappings to physical pages.
    
    The __flush_cache_page tmpalias flush sets up its own TLB entry
    and it doesn't need the existing TLB entry. As long as we can find
    the pte pointer for the vm page, we can get the pfn and physical
    address of the page. We can also purge the TLB entry for the page
    before doing the flush. Further, __flush_cache_page uses a special
    TLB entry that inhibits cache move-in.
    
    When switching page mappings, we need to ensure that lines are
    removed from the cache.  It is not sufficient to just flush the
    lines to memory as they may come back.
    
    This made it clear that we needed to implement all the required
    flush operations using tmpalias routines. This includes flushes
    for user and kernel pages.
    
    After modifying the code to use tmpalias flushes, it became clear
    that the random segmentation faults were not fully resolved. The
    frequency of faults was worse on systems with a 64 MB L2 (PA8900)
    and systems with more CPUs (rp4440).
    
    The warning that I added to flush_cache_page_if_present to detect
    pages that couldn't be flushed triggered frequently on some systems.
    
    Helge and I looked at the pages that couldn't be flushed and found
    that the PTE had either been cleared or was for a swap page. Ignoring pages
    that were swapped out seemed okay but pages with cleared PTEs seemed
    problematic.
    
    I looked at routines related to pte_clear and noticed ptep_clear_flush.
    The default implementation just flushes the TLB entry. However, it was
    obvious that on parisc we need to flush the cache page as well. If
    we don't flush the cache page, stale lines will be left in the cache
    and cause random corruption. Once a PTE is cleared, there is no way
    to find the physical address associated with the PTE and flush the
    associated page at a later time.
    
    I implemented an updated change with a parisc specific version of
    ptep_clear_flush. It fixed the random data corruption on Helge's rp4440
    and rp3440, as well as on my c8000.
    
    At this point, I realized that I could restore the code where we only
    flush in flush_cache_page_if_present if the page has been accessed.
    However, for this, we also need to flush the cache when the accessed
    bit is cleared in ptep_clear_flush_young to keep things synchronized.
    The default implementation only flushes the TLB entry.
    
    Other changes in this version are:
    
    1) Implement parisc specific version of ptep_get. It's identical to
    default but needed in arch/parisc/include/asm/pgtable.h.
    2) Revise parisc implementation of ptep_test_and_clear_young to use
    ptep_get (READ_ONCE).
    3) Drop parisc implementation of ptep_get_and_clear. We can use default.
    4) Revise flush_kernel_vmap_range and invalidate_kernel_vmap_range to
    use full data cache flush.
    5) Move flush_cache_vmap and flush_cache_vunmap to cache.c. Handle
    VM_IOREMAP case in flush_cache_vmap.
    
    At this time, I don't know whether it is better to always flush when
    the PTE present bit is set or when both the accessed and present bits
    are set. The latter saves flushing pages that haven't been accessed,
    but we need to flush in ptep_clear_flush_young. It also needs a page
    table lookup to find the PTE pointer. The lpa instruction only needs
    a page table lookup when the PTE entry isn't in the TLB.
    
    We don't atomically handle setting and clearing the _PAGE_ACCESSED bit.
    If we miss an update, we may miss a flush and the cache may get corrupted.
    Whether the current code is effectively atomic depends on process control.
    
    When CONFIG_FLUSH_PAGE_ACCESSED is set to zero, the page will eventually
    be flushed when the PTE is cleared or in flush_cache_page_if_present. The
    _PAGE_ACCESSED bit is not used, so the problem is avoided.
    
    The flush method can be selected using the CONFIG_FLUSH_PAGE_ACCESSED
    define in cache.c. The default is 0. I didn't see a large difference
    in performance.
    
    Signed-off-by: John David Anglin <dave.anglin@bell.net>
    Cc: <stable@vger.kernel.org> # v6.6+
    Signed-off-by: Helge Deller <deller@gmx.de>


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:18:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:18:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742272.1149028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCG2-0000ay-Ef; Mon, 17 Jun 2024 13:18:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742272.1149028; Mon, 17 Jun 2024 13:18:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCG2-0000ar-C3; Mon, 17 Jun 2024 13:18:50 +0000
Received: by outflank-mailman (input) for mailman id 742272;
 Mon, 17 Jun 2024 13:18:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJCG0-0000al-Gl
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:18:48 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1fa33631-2cac-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 15:18:45 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-57c7681ccf3so5059607a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 06:18:45 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72da156sm6378848a12.22.2024.06.17.06.18.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 06:18:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fa33631-2cac-11ef-b4bb-af5377834399
Message-ID: <f92fc38b-aba9-4f8f-b95c-4723049523d0@suse.com>
Date: Mon, 17 Jun 2024 15:18:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 for-4.19 1/3] x86/irq: deal with old_cpu_mask for
 interrupts in movement in fixup_irqs()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-2-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240613165617.42538-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.06.2024 18:56, Roger Pau Monne wrote:
> Given the current logic it's possible for ->arch.old_cpu_mask to get out of
> sync: if a CPU set in old_cpu_mask is offlined and then onlined
> again without old_cpu_mask having been updated the data in the mask will no
> longer be accurate, as when brought back online the CPU will no longer have
> old_vector configured to handle the old interrupt source.
> 
> If there's an interrupt movement in progress, and the to be offlined CPU (which
> is the call context) is in the old_cpu_mask clear it and update the mask, so it
> doesn't contain stale data.

Perhaps a comma before "clear" might further help reading. Happy to
add while committing.

> Note that when the system is going down fixup_irqs() will be called by
> smp_send_stop() from CPU 0 with a mask with only CPU 0 on it, effectively
> asking to move all interrupts to the current caller (CPU 0) which is the only
> CPU to remain online.  In that case we don't care to migrate interrupts that
> are in the process of being moved, as it's likely we won't be able to move all
> interrupts to CPU 0 due to vector shortage anyway.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:31:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:31:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742279.1149039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCS7-0003W0-Hr; Mon, 17 Jun 2024 13:31:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742279.1149039; Mon, 17 Jun 2024 13:31:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCS7-0003Vt-DD; Mon, 17 Jun 2024 13:31:19 +0000
Received: by outflank-mailman (input) for mailman id 742279;
 Mon, 17 Jun 2024 13:31:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJCS6-0003VU-4H
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:31:18 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df37b9a9-2cad-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 15:31:15 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6f1c4800easo537460466b.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 06:31:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ed359bsm517544966b.137.2024.06.17.06.31.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 06:31:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df37b9a9-2cad-11ef-b4bb-af5377834399
Message-ID: <f263d178-3a06-4c65-a6c0-a77f85c559b6@suse.com>
Date: Mon, 17 Jun 2024 15:31:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 for-4.19 2/3] x86/irq: handle moving interrupts in
 _assign_irq_vector()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-3-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240613165617.42538-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.06.2024 18:56, Roger Pau Monne wrote:
> Currently there's logic in fixup_irqs() that attempts to prevent
> _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> interrupts from the CPUs not present in the input mask.  The current logic in
> fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> 
> Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> to deal with interrupts that have either move_{in_progress,cleanup_count} set
> and no remaining online CPUs in ->arch.cpu_mask.
> 
> If _assign_irq_vector() is requested to move an interrupt in the state
> described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> CPUs that could be used as fallback, and if that's the case do move the
> interrupt back to the previous destination.  Note this is easier because the
> vector hasn't been released yet, so there's no need to allocate and setup a new
> vector on the destination.
> 
> Due to the logic in fixup_irqs() that clears offline CPUs from
> ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> shouldn't be possible to get into _assign_irq_vector() with
> ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> ->arch.old_cpu_mask.
> 
> However, if ->arch.move_{in_progress,cleanup_count} is set and the interrupt
> has also changed affinity, it's possible that the members of
> ->arch.old_cpu_mask are no longer part of the affinity set.  In that case,
> move the interrupt to a different CPU that is part of the provided mask and
> keep the current ->arch.old_{cpu_mask,vector} for the pending interrupt
> movement to be completed.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -544,7 +544,58 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
>      }
>  
>      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> -        return -EAGAIN;
> +    {
> +        /*
> +         * If the current destination is online refuse to shuffle.  Retry after
> +         * the in-progress movement has finished.
> +         */
> +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> +            return -EAGAIN;
> +
> +        /*
> +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> +         * ->arch.old_cpu_mask.
> +         */
> +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> +
> +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> +        {
> +            /*
> +             * Fallback to the old destination if moving is in progress and the
> +             * current destination is to be offlined.  This is only possible if
> +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> +             * in the 'mask' parameter.
> +             */
> +            desc->arch.vector = desc->arch.old_vector;

I'm a little puzzled that you use desc->arch.old_vector here, but ...

> +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> +
> +            /* Undo any possibly done cleanup. */
> +            for_each_cpu(cpu, desc->arch.cpu_mask)
> +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
> +
> +            /* Cancel the pending move and release the current vector. */
> +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> +            cpumask_clear(desc->arch.old_cpu_mask);
> +            desc->arch.move_in_progress = 0;
> +            desc->arch.move_cleanup_count = 0;
> +            if ( desc->arch.used_vectors )
> +            {
> +                ASSERT(test_bit(old_vector, desc->arch.used_vectors));
> +                clear_bit(old_vector, desc->arch.used_vectors);

... old_vector here. Since we have the latter, uniformly using it might
be more consistent. I realize though that irq_to_vector() has cases where
it wouldn't return desc->arch.old_vector; I think, however, that in those
cases we can't make it here. Still I'm not going to insist on making the
adjustment. Happy to make it though while committing, should you agree.

Also I'm not happy to see another instance of this pattern appear. In
x86-specific code this is inefficient, as {set,clear}_bit resolve to the
same insn as test_and_{set,clear}_bit(). Therefore imo more efficient
would be

                if (!test_and_clear_bit(old_vector, desc->arch.used_vectors))
                    ASSERT_UNREACHABLE();

(and then the two if()s folded).

I've been meaning to propose a patch to the other similar sites ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:36:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:36:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742285.1149047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCXA-00046H-0i; Mon, 17 Jun 2024 13:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742285.1149047; Mon, 17 Jun 2024 13:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCX9-00046A-U7; Mon, 17 Jun 2024 13:36:31 +0000
Received: by outflank-mailman (input) for mailman id 742285;
 Mon, 17 Jun 2024 13:36:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9TOf=NT=bounce.vates.tech=bounce-md_30504962.66703bdb.v1-596ddebcd27e4a80866745c63510b597@srs-se1.protection.inumbo.net>)
 id 1sJCX8-000464-FW
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:36:30 +0000
Received: from mail177-18.suw61.mandrillapp.com
 (mail177-18.suw61.mandrillapp.com [198.2.177.18])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 991a80fa-2cae-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 15:36:28 +0200 (CEST)
Received: from pmta14.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail177-18.suw61.mandrillapp.com (Mailchimp) with ESMTP id
 4W2rWb0ZFYzCf9KH4
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 13:36:27 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 596ddebcd27e4a80866745c63510b597; Mon, 17 Jun 2024 13:36:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 991a80fa-2cae-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718631387; x=1718891887;
	bh=ANoB2hVLGG/+0YUoUyUuDSxhyaDSfD0W7DFkZ5zOFpI=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=WV7YF7M8wSLvI1P7Rwq1vgjWkhIOQD+PDZtKFfG+2thNNZeoDgBMzEDSJ5kpzXTpx
	 4Pc8KI3BLOGx/MkgGyNCuiwYiOSQmbcpwo6NP1zX9qtWpp/5TicGsuJ+0CZexP5tAj
	 BHAwIrphRTaSBqMAxwsL80/tZO1dQ/sQNyoKmuXOcSgX7RufGEiW/m3AH+u918yTaL
	 Xj2dHQeYRN8mQQ9d7i9gcC5GgImQOL81vQ1wpZsJ6FIWAe4M8Jvg5J4Y1YKI9jJzTg
	 WVDpIr3WmKg+BWOLwa3m7zE1bVJz1W0Pm3pxwRH66ShuJlTJuNKOKHTZdsNf394iSr
	 zCPXUsTqxROeQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718631387; x=1718891887; i=teddy.astie@vates.tech;
	bh=ANoB2hVLGG/+0YUoUyUuDSxhyaDSfD0W7DFkZ5zOFpI=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=vZYWjYSF7bPtrzdw7uEFljrrEWqYsXzWG1x/bq5zkvfGbs1lewOxLqFD5EnHS3Yhg
	 92lhKeshbjIUZrSaSGvl0iLvTkyo/TypSCJp6pNJg9/wbOcUziSt6b9Mt3GbnF+eJ3
	 /ItfQxxwqpy2qHyYDAgFTLzQC47rzcI2q3h2Dub++8NYiccaPOimS77Rvrnx7RjNkR
	 cKHO9IhQaGTIKACjwpUVWGph4Kz/LIhueO399QXzUPFDON48RbBQQO3X/fAVNS8fxc
	 7aU1MqwVwWztIyx/RnV2u+y60FoleBV7FEDA13BruCfMAkypLeYhT3qTrEaxXtQ3dA
	 o/G8tkfC6PmXw==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?Re:=20[RFC=20PATCH]=20iommu/xen:=20Add=20Xen=20PV-IOMMU=20driver?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718631384460
Message-Id: <eceaa7e7-d07f-4a41-b39a-0b32be6724ae@vates.tech>
To: Jan Beulich <jbeulich@suse.com>
Cc: Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, Robin Murphy <robin.murphy@arm.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org, iommu@lists.linux.dev
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech> <8b0151a8-2293-409a-8469-d9e73cf561a3@suse.com>
In-Reply-To: <8b0151a8-2293-409a-8469-d9e73cf561a3@suse.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.596ddebcd27e4a80866745c63510b597?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240617:md
Date: Mon, 17 Jun 2024 13:36:27 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On 13/06/2024 16:32, Jan Beulich wrote:
> On 13.06.2024 15:50, Teddy Astie wrote:
>> @@ -214,6 +215,38 @@ struct xen_add_to_physmap_range {
>>   };
>>   DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap_range);
>>   
>> +/*
>> + * With some legacy devices, certain guest-physical addresses cannot safely
>> + * be used for other purposes, e.g. to map guest RAM.  This hypercall
>> + * enumerates those regions so the toolstack can avoid using them.
>> + */
>> +#define XENMEM_reserved_device_memory_map   27
>> +struct xen_reserved_device_memory {
>> +    xen_pfn_t start_pfn;
>> +    xen_ulong_t nr_pages;
>> +};
>> +DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory);
>> +
>> +struct xen_reserved_device_memory_map {
>> +#define XENMEM_RDM_ALL 1 /* Request all regions (ignore dev union). */
>> +    /* IN */
>> +    uint32_t flags;
>> +    /*
>> +     * IN/OUT
>> +     *
>> +     * Gets set to the required number of entries when too low,
>> +     * signaled by error code -ERANGE.
>> +     */
>> +    unsigned int nr_entries;
>> +    /* OUT */
>> +    GUEST_HANDLE(xen_reserved_device_memory) buffer;
>> +    /* IN */
>> +    union {
>> +        struct physdev_pci_device pci;
>> +    } dev;
>> +};
>> +DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory_map);
> 
> This is a tools-only (i.e. unstable) sub-function in Xen; even the comment
> at the top says "toolstack". It is therefore not suitable for use in a
> kernel.
> 
IMO this comment actually describes how the toolstack uses the 
hypercall, but I don't think it is actually reserved for toolstack use.
Or maybe we should allow the kernel to use this hypercall as well.

>> +
>> +/*
>> + * Local variables:
>> + * mode: C
>> + * c-file-style: "BSD"
>> + * c-basic-offset: 4
>> + * tab-width: 4
>> + * indent-tabs-mode: nil
>> + * End:
>> + */
>> \ No newline at end of file
> 
> Nit: I'm pretty sure you want to avoid this.
> 
Indeed.

Teddy


Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:41:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:41:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742292.1149058 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCbm-00067K-I1; Mon, 17 Jun 2024 13:41:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742292.1149058; Mon, 17 Jun 2024 13:41:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCbm-00067D-Ed; Mon, 17 Jun 2024 13:41:18 +0000
Received: by outflank-mailman (input) for mailman id 742292;
 Mon, 17 Jun 2024 13:41:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJCbl-000677-DT
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:41:17 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 449743ff-2caf-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 15:41:15 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6ef8bf500dso484785666b.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 06:41:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f427a5sm509162366b.180.2024.06.17.06.41.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 06:41:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 449743ff-2caf-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718631675; x=1719236475; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=3RrjWoN4LNZjIiRgyoaKWvDVkeibzlX7t9ypKotessc=;
        b=WoBLhAgzcf6rW4oaDUoNyMjESrMESomcraYSAOrFQFg+Fo4SFtWpL9vd+K2XHDmOIV
         Dqn1IACAAl5b1s+93Xreg6/Nl35NSc3HBeR6gS1V1xMlYSzGGfWJB5amRfXeNF3y224i
         K7GgwAq4EDj9d5oKF8/bqELqUEdufDRssZzeEEaHQrgQL+C9B9j7XbXyR7ob7WCleyPr
         sRNely8MviG5ygmoFaIX545bU6IZTO23yqDXi16vOgbZdq3GX/7E7aA8SWA1DGe83wFN
         UV2T8HCVzxMe0r3CO2mCp8TE0o1YWrqe2YfaEEu8BVLt5jLxY4vMmC/FSFP05BFMopaF
         fegQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718631675; x=1719236475;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3RrjWoN4LNZjIiRgyoaKWvDVkeibzlX7t9ypKotessc=;
        b=cWeyFg1t3C9ENoWQaU/BOdn7g38WoTkBRNexEswXQVoL5iNanm4KgKd2wwuvKLWWw/
         A2SkROPTAUKV2Rrn9noOQZocrozh1mmyd7eZozETH8EBRbTNU3rAD9hfy+jD4Z5lS8eG
         +csokis2DRK2QjnJkaODOFcUYfoYDKhCmTKgEKyYOpE3mBYx7XQXJPIVufOseZ9UJTtG
         /Jl7Mdf94VSYfQaWE2HXQ/5u6xB7Aio2i3L2WMWih+jGUasGmjwvNskQQX+7GfqFFWVU
         hUKtrLLeYCLg0QkZGhFLE9JzSd0JzAy1tVu39iWMGvTb4c7m05yamNZrIkW+UOQWh22P
         TLNg==
X-Forwarded-Encrypted: i=1; AJvYcCUzTbk283M4szMwAz8kjQEH2mHSmYPnGpVBsKP8pQqj/NU5V/P/mCmB8hfAX76a4u1hhv/+/RWAH6X3iIWatgQgiczI/+CCPjIsw/rGDLs=
X-Gm-Message-State: AOJu0Yyuc81HPKdqma6ojEt8ecILV5qusDMJAtq0plKve+F8gmKCQ2Yy
	sWILEwN5XsWLkP+dBFtgOCBpbcZC9w8mnzegzNqbWUvr0GDDCITeFs40f+mpTw==
X-Google-Smtp-Source: AGHT+IEAxQrUW+f8PTtqixOQ6f0lZv/OBjKQw8OEEcA2Ik3oz9d5W+dBQnROFgHipoNOML18n8TzWQ==
X-Received: by 2002:a17:907:7f26:b0:a6f:50ae:e09 with SMTP id a640c23a62f3a-a6f60cefc4emr912807066b.4.1718631674759;
        Mon, 17 Jun 2024 06:41:14 -0700 (PDT)
Message-ID: <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com>
Date: Mon, 17 Jun 2024 15:41:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240613165617.42538-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 13.06.2024 18:56, Roger Pau Monne wrote:
> fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.  Given
> the CPU is to become offline, the normal migration logic used by Xen where the
> vector in the previous target(s) is left configured until the interrupt is
> received on the new destination is not suitable.
> 
> Instead attempt to do as much as possible in order to prevent losing
> interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
> currently the case)

Except (again) for smp_send_stop().

> attempt to forward pending vectors when interrupts that
> target the current CPU are migrated to a different destination.
> 
> Additionally, for interrupts that have already been moved from the current CPU
> prior to the call to fixup_irqs() but that haven't been delivered to the new
> destination (iow: interrupts with move_in_progress set and the current CPU set
> in ->arch.old_cpu_mask) also check whether the previous vector is pending and
> forward it to the new destination.
> 
> This allows us to remove the window with interrupts enabled at the bottom of
> fixup_irqs().  Such a window wasn't safe anyway: references to the CPU to become
> offline are removed from interrupts masks, but the per-CPU vector_irq[] array
> is not updated to reflect those changes (as the CPU is going offline anyway).
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>[...]
> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>          if ( desc->handler->disable )
>              desc->handler->disable(desc);
>  
> +        /*
> +         * If the current CPU is going offline and is (one of) the target(s) of
> +         * the interrupt, signal to check whether there are any pending vectors
> +         * to be handled in the local APIC after the interrupt has been moved.
> +         */
> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
> +            check_irr = true;
> +
>          if ( desc->handler->set_affinity )
>              desc->handler->set_affinity(desc, affinity);
>          else if ( !(warned++) )
>              set_affinity = false;
>  
> +        if ( check_irr && apic_irr_read(vector) )
> +            /*
> +             * Forward pending interrupt to the new destination, this CPU is
> +             * going offline and otherwise the interrupt would be lost.
> +             */
> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> +                          desc->arch.vector);

Hmm, IRR may become set right after the IRR read (unlike in the other cases,
where new IRQs ought to be surfacing only at the new destination). Doesn't
this want moving ...

>          if ( desc->handler->enable )
>              desc->handler->enable(desc);

... past the actual affinity change?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:42:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742300.1149067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCdG-0006jH-09; Mon, 17 Jun 2024 13:42:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742300.1149067; Mon, 17 Jun 2024 13:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCdF-0006jA-T2; Mon, 17 Jun 2024 13:42:49 +0000
Received: by outflank-mailman (input) for mailman id 742300;
 Mon, 17 Jun 2024 13:42:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJCdE-0006it-5T; Mon, 17 Jun 2024 13:42:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJCdD-0007XR-Nx; Mon, 17 Jun 2024 13:42:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJCdD-0001cd-GX; Mon, 17 Jun 2024 13:42:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJCdD-0007Gf-Fx; Mon, 17 Jun 2024 13:42:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hK92EAs0X1M1EyxPaYYEhxxXGX4zOCOJM3b1PRHhzwI=; b=ugMMKvI12rCDmXgbDWr58h3rgP
	rAuc0bvgGP+Bkim7l0QOa/JwBEHFQJnqw42PNj92BRosOTXxm6+kIhrKWdUwSCSR3ZAtu+P/SUlGO
	C/GuBubQlK0sK+pSDD/ojeTLQv4Ps2GB3jAqXkJbs6CKV8F1RxQ8l2w+Les2IjWvoJiw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186379-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186379: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=9fc61309bf56aa7863e36b8f418a49ca6d8364d0
X-Osstest-Versions-That:
    ovmf=587100a95d7bfddc60bc5699ae0cca45914f1d81
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 13:42:47 +0000

flight 186379 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186379/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9fc61309bf56aa7863e36b8f418a49ca6d8364d0
baseline version:
 ovmf                 587100a95d7bfddc60bc5699ae0cca45914f1d81

Last test of basis   186378  2024-06-17 08:14:56 Z    0 days
Testing same since   186379  2024-06-17 12:12:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jeff Brasen <jbrasen@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   587100a95d..9fc61309bf  9fc61309bf56aa7863e36b8f418a49ca6d8364d0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:43:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742305.1149077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCdp-0007CO-8S; Mon, 17 Jun 2024 13:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742305.1149077; Mon, 17 Jun 2024 13:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCdp-0007CF-5l; Mon, 17 Jun 2024 13:43:25 +0000
Received: by outflank-mailman (input) for mailman id 742305;
 Mon, 17 Jun 2024 13:43:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJCdn-0007By-S4
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:43:23 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9051e7a2-2caf-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 15:43:22 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-57c7ec8f1fcso5100269a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 06:43:22 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb741e6c8sm6463910a12.77.2024.06.17.06.43.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 06:43:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9051e7a2-2caf-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718631802; x=1719236602; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=APk91TXp5Ugv3ZDi4Fkr4LxFk9bAvdubbKJ1lckRAt4=;
        b=CCzrsXjtQABeew0tJwd8epPy+pYtOuwl3rzT0sFsOx42By5Mh2bRRm4DzzGram+nyt
         HJHIkxtQWWL3VsknWwTajZXoUo9F5sNXnRbDpRZ2tTJ6e4kSVhxRLmXuPDVhO1wsqrZF
         t5mpQDoon+YBFZijJ8b8TSmXAxKfFD0VhJv/AKMrd60C3vs1ItDyNtflsyraO6p1O/41
         sXEUcjQCyb3hHKlAuEBlDi0WGlZkygwmlO5JoaqfZKGDiox7ADEyuKgExo6k4yjqoSSi
         br0o5tu7wVSAK0PUSb9Z0uGfGiQMwnpADjDDPvNmOpkyYfJXMrkfvy8G5xeaarlZs6mL
         OJ7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718631802; x=1719236602;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=APk91TXp5Ugv3ZDi4Fkr4LxFk9bAvdubbKJ1lckRAt4=;
        b=SvUw9bHteMuxqJ/j9/if4YD3siSI423UuwbPERw7yZCD+pMQzJ9DYgl5NC2uC4Y49O
         lEhviPNk/r8ULnpLN2N2mLoeaUAHuxaM023YkldBhYUjObtOr6yxZ8PssbX2vUa11vEZ
         /Zslv2wOFz/Itd1sNlsEp5xkdzsunxuAoVkQgCXGWNFO9mcwNoXVex0YeAZijJn6Zfd9
         dnDEGrL2QBvGn3LbHt3sURCwr8kymhdxXvKloedJ6M47auYFvsJExUmEiYWp/JUukkS4
         pI7FqNDBwGy6bd/urmYobs+KmaNIofPd1FyLICJZpFvyPtJ3jZ8EPmb+CIR/5x0pNGpP
         hZAw==
X-Forwarded-Encrypted: i=1; AJvYcCV6o1SFV5/tlPLepikAVS0taIIZ6UJ6Pd035cTGIuk6pflESrS5/5G4KQLsbepQTKH0KhUFWhtuTURmvOeQRJm+cQRAP451OCRnCv8jRLs=
X-Gm-Message-State: AOJu0YwXXe8Wx4LU/lFKlNhdS1Rg2MVZeA4NGFZl2jDeBC/ddhSBjAEE
	6c/I/3kymQ6wU12iWL6mIWUNx8vwW+GAB/5/r4YQOi2sbhmFVTyoirHP33Ohu5VJ47lW61DDnGQ
	=
X-Google-Smtp-Source: AGHT+IGDT6FgU1fHBZu/3F+toBLz1JtLYQGomy3PSpECSQUUTJMxPHs+ZBmMyNrLpOhhhJMUu54eAA==
X-Received: by 2002:a50:9997:0:b0:57a:3273:e648 with SMTP id 4fb4d7f45d1cf-57cbd6767admr5570285a12.18.1718631801918;
        Mon, 17 Jun 2024 06:43:21 -0700 (PDT)
Message-ID: <af39bac8-bb0d-415b-8cb3-79f3d5369d9d@suse.com>
Date: Mon, 17 Jun 2024 15:43:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH] iommu/xen: Add Xen PV-IOMMU driver
To: Teddy Astie <teddy.astie@vates.tech>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 Robin Murphy <robin.murphy@arm.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
 <8b0151a8-2293-409a-8469-d9e73cf561a3@suse.com>
 <eceaa7e7-d07f-4a41-b39a-0b32be6724ae@vates.tech>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <eceaa7e7-d07f-4a41-b39a-0b32be6724ae@vates.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.06.2024 15:36, Teddy Astie wrote:
> Le 13/06/2024 à 16:32, Jan Beulich a écrit :
>> On 13.06.2024 15:50, Teddy Astie wrote:
>>> @@ -214,6 +215,38 @@ struct xen_add_to_physmap_range {
>>>   };
>>>   DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap_range);
>>>   
>>> +/*
>>> + * With some legacy devices, certain guest-physical addresses cannot safely
>>> + * be used for other purposes, e.g. to map guest RAM.  This hypercall
>>> + * enumerates those regions so the toolstack can avoid using them.
>>> + */
>>> +#define XENMEM_reserved_device_memory_map   27
>>> +struct xen_reserved_device_memory {
>>> +    xen_pfn_t start_pfn;
>>> +    xen_ulong_t nr_pages;
>>> +};
>>> +DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory);
>>> +
>>> +struct xen_reserved_device_memory_map {
>>> +#define XENMEM_RDM_ALL 1 /* Request all regions (ignore dev union). */
>>> +    /* IN */
>>> +    uint32_t flags;
>>> +    /*
>>> +     * IN/OUT
>>> +     *
>>> +     * Gets set to the required number of entries when too low,
>>> +     * signaled by error code -ERANGE.
>>> +     */
>>> +    unsigned int nr_entries;
>>> +    /* OUT */
>>> +    GUEST_HANDLE(xen_reserved_device_memory) buffer;
>>> +    /* IN */
>>> +    union {
>>> +        struct physdev_pci_device pci;
>>> +    } dev;
>>> +};
>>> +DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory_map);
>>
>> This is a tools-only (i.e. unstable) sub-function in Xen; even the comment
>> at the top says "toolstack". It is therefore not suitable for use in a
>> kernel.
>>
> IMO this comment actually describes how the toolstack uses the 
> hypercall, but I don't think it is actually reserved for toolstack use.

Well, the canonical version of the header is quite explicit about this,
by having the definition in a __XEN__ || __XEN_TOOLS__ section.

> Or maybe we should allow the kernel to use this hypercall as well.

That's an option to consider.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:50:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:50:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742317.1149087 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCkH-0000te-UT; Mon, 17 Jun 2024 13:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742317.1149087; Mon, 17 Jun 2024 13:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCkH-0000sy-RI; Mon, 17 Jun 2024 13:50:05 +0000
Received: by outflank-mailman (input) for mailman id 742317;
 Mon, 17 Jun 2024 13:50:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJCkG-0000c8-N4
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:50:04 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7f76c117-2cb0-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 15:50:03 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id
 a640c23a62f3a-a6f0c3d0792so506468966b.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 06:50:03 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f8cc20663sm63773266b.190.2024.06.17.06.50.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 06:50:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f76c117-2cb0-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718632203; x=1719237003; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=rdJOBROw5bhO0Zz4HbPSKOcX74L1n40vY0872unV42E=;
        b=LShG6x6WZ6K33izXPML/jh1wcx/lj/fCjYHugWWUzQzMkqPPaMVZh/BhAqBbGY0yeV
         jNWleDeIxD2XXOMoLPSkqIKdbqN6mPFRNzvS2LiM54E6WuEgDDMSts9G+kRaAWOs71iO
         qCXWrMr0/0Xq+iHVO3rdkBo/WzJgWcCEQQkDHIZ9AFJf6A0tmW6M3/lAqL+08lv6JRMj
         Szgnb1GAx5ntrgXLr4zZnL8WePRMpw+pQXxWyQtp9+VEAg1rSOuzeTaA5vG0V+eTx8aB
         MHAscXUvXp+6OhCHiKTqIO0IAersMb749XZ/JkVgbl3X36GKF5MWNg/Vb55XEYniAuSf
         TPUw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718632203; x=1719237003;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=rdJOBROw5bhO0Zz4HbPSKOcX74L1n40vY0872unV42E=;
        b=Y0m715l5c0RDMYpoFkOkLuiNCU9rZg7z2o53uCboodZW/upa/ptllLJ7IDkR1si6/s
         5Mau769rM4BZ4sOdMlJTjMTt+N+cC+ZoUbq7Cml0bRoFy+2ClI7SM0Qk/pljIcFVnvza
         2Eehn1ML2xLi3d0/Wa35WiIiHPhQC/1WiWtEIU1oRiPgNZ2su6ncqSsmWW6YTP772gQr
         GdAL7hmxn2mcNnA9W9W6o3Foi9xjGYvkXP28uJexmSJMghWPUQWm8w8YoFEiVykaLB1a
         fPDGH6ZgZ0zyC6B0lQPHRrprTVRWMqLrsOczbn19OqSoyVBJ72+HiajzBuBG22V/CeuO
         aNpg==
X-Forwarded-Encrypted: i=1; AJvYcCXTvCFgmiP5c+EDYl+PRFuJLIxLyKe9dNKfF8TLSmQbQ3wDnzXMXCS3iy/946XuEmhsj2GrDUM+MtoI9UNu9D9lBPp8/rtsMPMSXmkVXpk=
X-Gm-Message-State: AOJu0YyHjkt31W9pRX57VAGa7R9TkrQfvP6LU40Vegyh9sw9KbR0OQIj
	h38qjT2ILXzX0oDkYmNDnKZhCpjQ+rBU6QRG5ND4SXe6ACTo1SbmSXUrWyxcyQ==
X-Google-Smtp-Source: AGHT+IFMWu4uzYtgWVDz8CbPsWhYEpEuXpOWdPQkpf89U4M+xWQTyv8JSCj/cxCucokftGG71Wgwtg==
X-Received: by 2002:a17:907:7f26:b0:a6f:50ae:e09 with SMTP id a640c23a62f3a-a6f60cefc4emr914686166b.4.1718632203009;
        Mon, 17 Jun 2024 06:50:03 -0700 (PDT)
Message-ID: <7e656934-617d-4385-9da9-658c07aeed69@suse.com>
Date: Mon, 17 Jun 2024 15:50:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] public/sysctl: address violations of MISRA C: 2012 Rule
 7.3
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <a68e796048912c816bc8320416024a60290f33e7.1718290222.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <a68e796048912c816bc8320416024a60290f33e7.1718290222.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 13.06.2024 16:58, Alessandro Zucchelli wrote:
> This addresses violations of MISRA C:2012 Rule 7.3 which states the
> following: The lowercase character `l' shall not be used in a literal
> suffix.
> 
> Moreover, changed 'u' suffixes to 'U' for better readability next to
> the 'L's.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:54:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:54:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742324.1149097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCou-0001d7-EW; Mon, 17 Jun 2024 13:54:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742324.1149097; Mon, 17 Jun 2024 13:54:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCou-0001d0-Bm; Mon, 17 Jun 2024 13:54:52 +0000
Received: by outflank-mailman (input) for mailman id 742324;
 Mon, 17 Jun 2024 13:54:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJCot-0001cu-Fz
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:54:51 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 29ac6007-2cb1-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 15:54:49 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6f253a06caso519600266b.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 06:54:49 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f99774sm518419066b.203.2024.06.17.06.54.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 06:54:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29ac6007-2cb1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718632489; x=1719237289; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=k930QoTtYQ0UG35AdGJNKynRKWLkbZDpAZ2hemTOnVg=;
        b=OMbwaGer30UuNMbkXw6ovLA1DzaGyyK+wQ51xLHgFom4vyt0tEQ/Ld/j52/rxRYM/w
         oR6Iq/GwJgME9LT7nQ2jzvTVdyHY1v3JkqJvHFz5jsolzJtoyC0Xi0Wzvas2eMMPTNq8
         MM2tnnK7cvxDcDCjyZx5MOzVkyHDBMwFr9OKCk4Ni3tMnAKGnqtL6V+OhiXPhP95cJ/J
         L8xdGe+WAL1m0j0AW0pZYCvmSBtdF50D3ciFIDudiaOZfb8zXOTsmdzfOFdiJJOZL+Wf
         /TrLVvEmcszQw383PBNRInDCkflhEkbjXToUfu72cURHGM8SrkBSNmiw08aexY6PKEHu
         RqJw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718632489; x=1719237289;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=k930QoTtYQ0UG35AdGJNKynRKWLkbZDpAZ2hemTOnVg=;
        b=vQqwTTZEhcJeDgvF7j4filb1RzTapAMpFuAG3DbDP36w2Lg6+mXIQIMbNPb4/D1aL8
         IFmoFSIRxBERMjfY+vR1c+cct9/ddIFdQRKXlGXdvuVsIelCUx6h7UwIPlTFl+wpmoym
         +kz+gzqJNswq0/nk0tIcUrsyofmx8FCudA/ItqelZUDg1Jhc2a8QHqEDNmO/hEKD2Ny9
         +iGqPLhKJ3rWdj8jMuCQrP1PlNKfFaXn8pC8Wekbt8vcmckGDb+tZ5f2r5liMChoYJkJ
         rzIzHwOUMiQjPm9gObaHSeLofzJauoEWsO7gukgSD0M7nQ2S0RlI7VnT30PqqC0l0arp
         BUog==
X-Forwarded-Encrypted: i=1; AJvYcCXHagEqWhohxsSW6MUxQDsKUA4Bq2VQ/YP12gvXQVxOTACQMPhw5hECMiuuC1fS/2u+TV88cwyZ+lVd+L3TAd9MFDDy0JZse39Eun98jvM=
X-Gm-Message-State: AOJu0Yy1K+DtvR5dPn9W/IAsJU0OIaXYuu/tPj6j8NniQuk/Zg0gJli3
	WRDt0drX2n0eicJrnVHp24NB+2GJEfCIFbqvAPHZi3pOO4kIedsCafoaAS0Wcw==
X-Google-Smtp-Source: AGHT+IFTEy9iULiT3ZDAYfFjjClER6MqGNwkFuqib/y0OY+n0yDlALd0L5BKvMoYEFZQfhXyJLUblw==
X-Received: by 2002:a17:906:ae5a:b0:a6f:21e8:ad06 with SMTP id a640c23a62f3a-a6f60d3780amr856627666b.20.1718632488640;
        Mon, 17 Jun 2024 06:54:48 -0700 (PDT)
Message-ID: <b9848aab-dd12-49f4-a1e0-a1d622359c8d@suse.com>
Date: Mon, 17 Jun 2024 15:54:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] x86/mm address violations of MISRA C:2012 Rule 5.3
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1718380780.git.alessandro.zucchelli@bugseng.com>
 <80cb7054b82f55f11159faf5f10bfacf44758be0.1718380780.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <80cb7054b82f55f11159faf5f10bfacf44758be0.1718380780.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 14.06.2024 18:12, Alessandro Zucchelli wrote:
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -4703,7 +4703,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>      {
>          struct xen_foreign_memory_map fmap;
>          struct domain *d;
> -        struct e820entry *map;
> +        struct e820entry *e;

What version of the tree is this against? The variable in my copy is named
"e820", and it is only then that I could see what the conflict actually is.
I can't see any conflict with anything named "map". Saying what the actual
conflict is imo also ought to be part of the description.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 13:59:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 13:59:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742331.1149107 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCtE-000302-UO; Mon, 17 Jun 2024 13:59:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742331.1149107; Mon, 17 Jun 2024 13:59:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCtE-0002zv-Re; Mon, 17 Jun 2024 13:59:20 +0000
Received: by outflank-mailman (input) for mailman id 742331;
 Mon, 17 Jun 2024 13:59:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJCtD-0002zL-SC
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 13:59:19 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ca05eda7-2cb1-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 15:59:18 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57cb9a370ddso4935592a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 06:59:18 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f427a5sm510584866b.180.2024.06.17.06.59.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 06:59:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca05eda7-2cb1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718632758; x=1719237558; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=S6FTxIfyV3G3v0Xf00XwxNaEQwEy+XPadyrqAOm4T8U=;
        b=CHajNYEJQ2ecPdvfLA4Ka3J7kMukDCKIIACIg69m35yFT2DFOv+8mpBSHlG2dUVrkG
         N+87jvo4i0GxjMDNTrqPnZtX8AO63pCiP54nPnmtYtEmq3PZzgFC1pnlABeSJRwvWxQ8
         +iWx476/LndMS8//+yXxW3iGNnknJl9lGgFWDQuwBhkVis3426LB3eERb4m3Pg1RI9du
         8ZOlYVytpjcIYzTOzYhoWoo04ZTaAZPGwSw5954XjZgyTVaNWsVMyHELnuFMtJ1r9nHU
         lNp/Gzg5RtxvQv0REc3xOxERD08Wh8FkHDxZVRMd8b3qJadvXUaiMi4j4ZgYHfvgV2tw
         iodg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718632758; x=1719237558;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=S6FTxIfyV3G3v0Xf00XwxNaEQwEy+XPadyrqAOm4T8U=;
        b=P5Mjbg/6Aq8bw5iQcXVBgykbXffCy93r/zQT0CYkf5iavctTOAy/otrIeLyy2FsTC7
         dkqJ/c6BP+/3Ip6c6rbXZ0OHxe39QL873ozp3emLq+5pbPOH18pLoWA7tQn9qEz+TwJU
         87JUI0LM8LiGXrjn2zWFmDBOoBe7TqQpR4ZqpCiIyBRcqv+flRf/RH0Mk3DkMIC7VQ2Y
         8uXenSU4SOpa2ZVkaoda8nehpIhvfXFGItlUEbluIKaj/ZOM8HkE+kYlkMoMaIUqNajY
         DIFYvNXfFX+kDqbRNROwERu5CNgDWOhsOmod7pDNiaUfbqUHMAMJt+ZDiiTsajKIHGiW
         KhmA==
X-Forwarded-Encrypted: i=1; AJvYcCWP+4pbmmSZxWJzHuAxPACURAy+MQMWCTmvCW5bFIhuCR2D2FX9RJD3mZBj3hKVa86be3Y0549ShohXHLW8aXgUsO4KZMiioe9ily0RV80=
X-Gm-Message-State: AOJu0YzbbPD95G1x8AoE6coxykIh2YYax5vEMr/EfqhxL6GWeZZTrEDZ
	XX4TyYiIU2LNGe6zmIg31StRbff9JuhzSSw/y9syTZ6GfpQBDH40Bb5mUhj5uA==
X-Google-Smtp-Source: AGHT+IFuWRNI4WU/0NI91wSTK7aC91vjaBJFxZ5g6WrCGLQ4YILFcumXere8oGHySDRkGTodqwkxBA==
X-Received: by 2002:a17:906:b858:b0:a68:ece7:8db5 with SMTP id a640c23a62f3a-a6f60d2c9e0mr615190066b.31.1718632757612;
        Mon, 17 Jun 2024 06:59:17 -0700 (PDT)
Message-ID: <fe216906-da96-4815-8a85-bf3428f380fa@suse.com>
Date: Mon, 17 Jun 2024 15:59:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] x86/e820 address violations of MISRA C:2012 Rule 5.3
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1718380780.git.alessandro.zucchelli@bugseng.com>
 <1a02a5af6c2a737bc814610d4cc684ad4a00b8dc.1718380780.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <1a02a5af6c2a737bc814610d4cc684ad4a00b8dc.1718380780.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 14.06.2024 18:12, Alessandro Zucchelli wrote:
> --- a/xen/arch/x86/e820.c
> +++ b/xen/arch/x86/e820.c
> @@ -593,79 +593,79 @@ int __init e820_add_range(uint64_t s, uint64_t e, uint32_t type)
>  }
>  
>  int __init e820_change_range_type(
> -    struct e820map *e820, uint64_t s, uint64_t e,
> +    struct e820map *map, uint64_t s, uint64_t e,
>      uint32_t orig_type, uint32_t new_type)
>  {
>      uint64_t rs = 0, re = 0;
>      unsigned int i;
>  
> -    for ( i = 0; i < e820->nr_map; i++ )
> +    for ( i = 0; i < map->nr_map; i++ )
>      {
>          /* Have we found the e820 region that includes the specified range? */
> -        rs = e820->map[i].addr;
> -        re = rs + e820->map[i].size;
> +        rs = map->map[i].addr;

I'm not overly happy with the many instances of map->map that we're now
gaining, but perhaps that's about as good as it can get. Hence
Acked-by: Jan Beulich <jbeulich@suse.com>

As mentioned for patch 1, please remember though to actually describe
what the conflict is in patches like this one. In this case, unless there
ends up being a need to submit another version, I'll try to remember to
add half a sentence while committing.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:03:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742339.1149117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCwy-0004Zl-HK; Mon, 17 Jun 2024 14:03:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742339.1149117; Mon, 17 Jun 2024 14:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJCwy-0004Ze-Eq; Mon, 17 Jun 2024 14:03:12 +0000
Received: by outflank-mailman (input) for mailman id 742339;
 Mon, 17 Jun 2024 14:03:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BQ9S=NT=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sJCwx-0004ZY-3Q
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:03:11 +0000
Received: from fhigh8-smtp.messagingengine.com
 (fhigh8-smtp.messagingengine.com [103.168.172.159])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 52b3a46b-2cb2-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 16:03:08 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id EB1C111401EE;
 Mon, 17 Jun 2024 10:03:06 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Mon, 17 Jun 2024 10:03:06 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 17 Jun 2024 10:03:04 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52b3a46b-2cb2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718632986;
	 x=1718719386; bh=W1xHVkhqMM4DNKJlUR7Gv2638u+H1uW6AQE6QMPxhP0=; b=
	nDAjA2Mrbtmg4xxQSBIiPDzOvrLWix3dGe6xwqWOnEA1RJ3V3B6uT1wJvuGNOGim
	sn0le6VQK8tfq8q89ujyEzjr/0C1OCrEAzKmBEZi1ASbzSN6e4RwzPv8BUIflsJ1
	KpVm1FrnaZu7JIgJ78BlYhoS/7HkbgJUKgBqX+8mbyvYSDYhRHNyf+nSgb7Hoalo
	8R416BL59QgBJTx0jfwY3Mp5iEi1B6Umq17r5N+PVpViza2RFilnz0wudGFcm9U+
	ihedKCjtcYX7qM7ZBA0YwOsIW6CO5HAoojEOz1k/4eaF+jB6NnQ34F30cEgDlk00
	vB7raztyYosnwcE6+/s0Ww==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718632986; x=1718719386; bh=W1xHVkhqMM4DNKJlUR7Gv2638u+H
	1uW6AQE6QMPxhP0=; b=j9RpCEZ+9CdirtB661/un0oh+AlpwrDi4KGLU6kIP89z
	iRCcV1y1LBioY/GkjAxUAC2sU5aKhTcjuc/cXTV9cOqWqkOrRhi3MXb+OCxynpB/
	/DWrRBn8zG1c+FXzEVPmo03tDs5zU1Rbi75tigTRK0DI+zZGKKBcnfH8pRRNAb2h
	GeE6XDRN9akOhW4wu+dKh459TCc2lApgJKWIiRsSN2Hc/o0yxlIyG7703KyXvumq
	aHQL+z1ZcJEz2wst/Z7T8QIq7qWnfn/D9TieXivST2zJQFFZc73NRAyhyOK2cEiJ
	w+xXY52uzN5Tg3+L6DibshPz7zoHDfAS7VtsLxEfhA==
X-ME-Sender: <xms:GkJwZsrTAZF5huh53Q33UFqihuWq3W6rgTYe_3xs-gDfI6ZMO1WF1Q>
    <xme:GkJwZirFPBS1rTIm83Ux9lDgDvtqNeXO9VywFPSHmcfM235h3EgZuGOUtiWmkE3X5
    3iCzrGe-steKg>
X-ME-Received: <xmr:GkJwZhMHHF0PvG46picaDQ7bggiLp5l5MZxTpZkjZOIc2mHFwmyQfJm3kxY>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedvhedgjeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteeu
    geefhfffvdffteelledtleekvdelhfekhefghefhhfehheduiedtgeehjeelnecuffhomh
    grihhnpehkvghrnhgvlhdrohhrghdpghhithhhuhgsrdgtohhmnecuvehluhhsthgvrhfu
    ihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:GkJwZj6BsRexTvs1u01UwVM70czF85e_l5FEwW9fq-uEIz3o-Ltf7w>
    <xmx:GkJwZr5_HbbRf59uqTKh9GSDH58sByAXF06jMMgjXaunBW1M3qxPCw>
    <xmx:GkJwZji0YhOep0XBi8_vMe2FmkMWSL-qd36HhPHVWsxIgeleuncTOg>
    <xmx:GkJwZl6dLhVlbHkVt28SBbxXpl1PFlAeATl_2-rC6tqOMOMrQXcYuA>
    <xmx:GkJwZrGGZQGnaGJEQIeRA3-PimkNkyHCO3cjNoJ2elouJ8DtGvGhcsDJ>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 17 Jun 2024 16:03:01 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: Re: ACPI NVS range conflicting with Dom0 page tables (or kernel
 image)
Message-ID: <ZnBCFgHltVqj2FDh@mail-itl>
References: <a5a8a016-2107-46fb-896b-2baaf66566d4@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="UkDnYqeCInIhjvEd"
Content-Disposition: inline
In-Reply-To: <a5a8a016-2107-46fb-896b-2baaf66566d4@suse.com>


--UkDnYqeCInIhjvEd
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 17 Jun 2024 16:03:01 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: Re: ACPI NVS range conflicting with Dom0 page tables (or kernel
 image)

On Mon, Jun 17, 2024 at 01:22:37PM +0200, Jan Beulich wrote:
> Hello,
>
> while it feels like we had a similar situation before, I can't seem to
> find traces thereof, or associated (Linux) commits.

Is it some AMD Threadripper system by any chance? Previous thread on this
issue:
https://lore.kernel.org/xen-devel/CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com/

> With
>
> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x4000000
> ...
> (XEN)  Dom0 alloc.:   0000000440000000->0000000448000000 (619175 pages to be allocated)
> ...
> (XEN)  Loaded kernel: ffffffff81000000->ffffffff84000000
>
> the kernel occupies the space from 16Mb to 64Mb in the initial allocation.
> Page tables come (almost) directly above:
>
> (XEN)  Page tables:   ffffffff84001000->ffffffff84026000
>
> I.e. they're just above the 64Mb boundary. Yet sadly in the host E820 map
> there is
>
> (XEN)  [0000000004000000, 0000000004009fff] (ACPI NVS)
>
> i.e. a non-RAM range starting at 64Mb. The kernel (currently) won't
> tolerate such an overlap (also if it was overlapping the kernel image,
> e.g. if a sufficiently larger kernel was used on the machine in
> question). Yet with its fundamental goal of making its E820 match the
> host one, I'm also in trouble thinking of possible solutions /
> workarounds. I certainly do not see Xen trying to cover for this, as the
> E820 map re-arrangement is purely a kernel-side decision (forward ported
> kernels got away without it, and what e.g. the BSDs do is entirely
> unknown to me).

In Qubes we have worked around the issue by moving the kernel lower
(CONFIG_PHYSICAL_START=0x200000):
https://github.com/QubesOS/qubes-linux-kernel/commit/3e8be4ac1682370977d4d0dc1d782c428d860282

Far from ideal, but it gets the system bootable...

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--UkDnYqeCInIhjvEd
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZwQhYACgkQ24/THMrX
1ywSVwf/fyMEu68K8SLXaLRbbLDBTExW/bTQw8LZQNZt32UXgRtPc6aYtw0YVgJP
73KDFeHAJIwS7/douOaLJQVKpHQTbTN5wK+BTOIct5dXcekWD/Q2AnhrAxoOVse/
B5BF8m+pBjrb1IVxS1uDGvjsvqYNPQRv2HttnP7niXz5s7FxWnP4z8PUla59vQW7
cThUv1gwIgxy+aQxf8R2vFFovgt73IEH2686UAjYwDrmmnCi+fOPnQuKZTu7W/Sq
aYt3n/TB6Em82DY5WOGTy4XOQQrRZWL8EDiUb6MizvMY7CJ0oVn2YKKU8/A/UHiQ
iO4C+ufb4PNxJ+HX5sc2Atw0NCXZRA==
=RfCu
-----END PGP SIGNATURE-----

--UkDnYqeCInIhjvEd--


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:13:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:13:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742348.1149127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJD6u-0006ts-DH; Mon, 17 Jun 2024 14:13:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742348.1149127; Mon, 17 Jun 2024 14:13:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJD6u-0006tl-AU; Mon, 17 Jun 2024 14:13:28 +0000
Received: by outflank-mailman (input) for mailman id 742348;
 Mon, 17 Jun 2024 14:13:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sWYw=NT=cloud.com=frediano.ziglio@srs-se1.protection.inumbo.net>)
 id 1sJD6t-0006tf-Uq
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:13:27 +0000
Received: from mail-qv1-xf2f.google.com (mail-qv1-xf2f.google.com
 [2607:f8b0:4864:20::f2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c2e18e92-2cb3-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 16:13:26 +0200 (CEST)
Received: by mail-qv1-xf2f.google.com with SMTP id
 6a1803df08f44-6b065d12e81so21662076d6.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:13:26 -0700 (PDT)
Received: from fziglio-xenia-fedora.eng.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5eb4811sm55124386d6.77.2024.06.17.07.13.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 07:13:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2e18e92-2cb3-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718633604; x=1719238404; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=AEPYDVeLkU0IonjTu34unuOh82lmo92YGof/pNWGY/U=;
        b=lhKDFu2VUexnKdnHioAzF28P4ER/bXf/wEUOFglvTaquUeAJ3eGuUogrrguFnhze53
         3mAgmm21tNR7VdufUNDbHVQDRfJjlrxz74JqYGv3JqTx80O3E6WYGG5zHRFL54Uy/4Qb
         aOxfVIjT0iagwUxIy4uq4TZE8xqhC8w9K0VL0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718633604; x=1719238404;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=AEPYDVeLkU0IonjTu34unuOh82lmo92YGof/pNWGY/U=;
        b=Xgy+mwvS5DPiyob6yfP72mJpFBSQzPUvOSA2LbWmCarpmbuMF/ZIMO4Y3+y/IYabdS
         8kZZ5rmB/jdSAutOtYhtgOJ7smQPR47avlJP+/F7VolQjtk5JKmeDPfH4E33JH41Xm6W
         z07PnsJ9lz0lFWBdz1W5+DD5eGZ8+xWD6nocXe2rXeR5EdUMWdZL4PWMQY7A6wyjYMk3
         bbV+mLqw/xOzhZhK/jCkxPWQHHTrLjgOljKfrv4SRr4qlL2SzRHNKpVsAkKT43qvBybZ
         Nrw1OCN3fgyOKdUHHhmA7/8OuC2D1e/ZYJWq1OAJjLT82MXc81KoX24Dr2+O4t7MVrU8
         DUnA==
X-Gm-Message-State: AOJu0YzfAL/7c49glaDcLVReI2nsaQ81l5rEsr8QyP6wEkYNW1jhd8Ni
	oH7uIstfILYI0Ty/E+lcynlwaDMUj0kOtKCkqSrXmrzSYDkZe51rWnHENaG1EPrNQGiq/ZvgoYH
	Ko2K1eg==
X-Google-Smtp-Source: AGHT+IGng51gHJ++fAhRU0HYTBiDRSVfUV9r9cDVWxS6lcJMbyubX5iCAI3jBFKyZg6BMsISJA1PAA==
X-Received: by 2002:a0c:e6ca:0:b0:6b2:c888:6c7 with SMTP id 6a1803df08f44-6b2c888089emr48248246d6.7.1718633604500;
        Mon, 17 Jun 2024 07:13:24 -0700 (PDT)
From: Frediano Ziglio <frediano.ziglio@cloud.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Borislav Petkov <bp@alien8.de>,
	Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Frediano Ziglio <frediano.ziglio@cloud.com>
Subject: [PATCH] x86/xen/time: Reduce Xen timer tick
Date: Mon, 17 Jun 2024 15:13:03 +0100
Message-ID: <20240617141303.53857-1-frediano.ziglio@cloud.com>
X-Mailer: git-send-email 2.45.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current timer tick causes some deadlines to be missed.
The high constant value was probably due to an old bug in the
Xen timer implementation, which caused errors if the deadline
was in the future. This was fixed in Xen commit:
19c6cbd90965 xen/vcpu: ignore VCPU_SSHOTTMR_future

Signed-off-by: Frediano Ziglio <frediano.ziglio@cloud.com>
---
 arch/x86/xen/time.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 52fa5609b7f6..ce30b8d3efe7 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -30,7 +30,7 @@
 #include "xen-ops.h"
 
 /* Minimum amount of time until next clock event fires */
-#define TIMER_SLOP	100000
+#define TIMER_SLOP	1000
 
 static u64 xen_sched_clock_offset __read_mostly;
 
-- 
2.45.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:17:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:17:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742353.1149137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDAV-0007Rm-Sj; Mon, 17 Jun 2024 14:17:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742353.1149137; Mon, 17 Jun 2024 14:17:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDAV-0007Rf-Q2; Mon, 17 Jun 2024 14:17:11 +0000
Received: by outflank-mailman (input) for mailman id 742353;
 Mon, 17 Jun 2024 14:17:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJDAU-0007RZ-Jl
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:17:10 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 487508ba-2cb4-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 16:17:09 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a689ad8d1f6so542892166b.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:17:09 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db6182sm519650166b.51.2024.06.17.07.17.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 07:17:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 487508ba-2cb4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718633829; x=1719238629; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ZPxgaFWrmJdTu1iQU8AcpoZT59Gp9X82wVc51hl10nw=;
        b=CU/kzgMNC0iZ6BP7azHq5GaegFUXE2zjpPBGgSFqRjhngLvZq8YdaVj0N9IIJxwpjZ
         nFi5CDvC7A1HM0Sdvfy5BpK0wipD6G3MRbskz2LVWeSuhI/PpCPvIjq/OeAZR2RGajM4
         7s5eifNbpX50xiVavPYrGT7slqesJQsHmpWcfeG9c2hMrgmF7GqlV4CeE4r78iU4mBiD
         1jQ/EyUp0GLAm97TQYsw+BWtYh+4F+g3Qey3kJuW9RRgRfYrzbHavuz9nu8/f2z57NAS
         kRE1vq3htDn0GYWp9yRAVHksSaLQI0Yjf3xwIqzpom6hVw21Y5UmcQw7iC92kdw94VHe
         cYkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718633829; x=1719238629;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ZPxgaFWrmJdTu1iQU8AcpoZT59Gp9X82wVc51hl10nw=;
        b=lXdIMc8E125doo6IAbqH8/vtQrycI0QXztN2u+M2Nnj5pcI5eoxk9+U1SlOJa2F6wn
         iDw1r3K01bXnp+Pa0XY2Vico7hgmKRwbeuaUSH76V6/pFCwdHAUGVQm7sdqcxL0QbXUF
         FSJhZLOheo/esdjKkFNYTNt0zgxywniilNXkR7g/DKanm54WMfTOXeIXe1pAgZ45pxcH
         +/r6VVORH7vCiLd4RxyctWND+JI7SctqSZhrWBu+DSq4kEt1ui6AGlGID+45titVgzJA
         ArhSmSsaBaNi5um7dHnT2uXHCCDfZQTSFDer6omiW6L15dNFADsFRuHeg+rpMgERfGgc
         NFWg==
X-Forwarded-Encrypted: i=1; AJvYcCW4HOFdstt/P8RQ0mFlBA7FG/1J+th9pKqZ7G1VXvgfbeMxjrttCB/gkYR7G2OYW9AVK+6Q26qdkjHK8Z2swuIWyuAIt5HCuMfure/ON6M=
X-Gm-Message-State: AOJu0Yw370Ixu749QfT3p+V/X97zzq6aqlHwEREeEa0lCv5c8zgdZlo4
	VexQYQL/HrZp3a/+m064Z9d5GVGPAIWvpKPYoDhIpaqf0wilqGeLMnwPDGpWuw==
X-Google-Smtp-Source: AGHT+IHWuQ1nSiZHYXYcpJ0bA4xZaaG0szUD6Ej+x71cUy6b9hWeTTCBudD5sCBJ9b2L9X1SDAQNTg==
X-Received: by 2002:a17:906:656:b0:a6f:481f:77eb with SMTP id a640c23a62f3a-a6f60d1e0f6mr633010466b.20.1718633828783;
        Mon, 17 Jun 2024 07:17:08 -0700 (PDT)
Message-ID: <4e2accc2-e81d-450a-af2d-38884455de9c@suse.com>
Date: Mon, 17 Jun 2024 16:17:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-2-Jiqian.Chen@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240617090035.839640-2-Jiqian.Chen@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 11:00, Jiqian Chen wrote:
> --- a/xen/drivers/pci/physdev.c
> +++ b/xen/drivers/pci/physdev.c
> @@ -2,11 +2,17 @@
>  #include <xen/guest_access.h>
>  #include <xen/hypercall.h>
>  #include <xen/init.h>
> +#include <xen/vpci.h>
>  
>  #ifndef COMPAT
>  typedef long ret_t;
>  #endif
>  
> +static const struct pci_device_state_reset_method
> +                    pci_device_state_reset_methods[] = {
> +    [ DEVICE_RESET_FLR ].reset_fn = vpci_reset_device_state,
> +};

What about the other three DEVICE_RESET_*? In particular ...

> @@ -67,6 +73,43 @@ ret_t pci_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>      }
>  
> +    case PHYSDEVOP_pci_device_state_reset: {
> +        struct pci_device_state_reset dev_reset;
> +        struct physdev_pci_device *dev;
> +        struct pci_dev *pdev;
> +        pci_sbdf_t sbdf;
> +
> +        if ( !is_pci_passthrough_enabled() )
> +            return -EOPNOTSUPP;
> +
> +        ret = -EFAULT;
> +        if ( copy_from_guest(&dev_reset, arg, 1) != 0 )
> +            break;
> +        dev = &dev_reset.dev;
> +        sbdf = PCI_SBDF(dev->seg, dev->bus, dev->devfn);
> +
> +        ret = xsm_resource_setup_pci(XSM_PRIV, sbdf.sbdf);
> +        if ( ret )
> +            break;
> +
> +        pcidevs_lock();
> +        pdev = pci_get_pdev(NULL, sbdf);
> +        if ( !pdev )
> +        {
> +            pcidevs_unlock();
> +            ret = -ENODEV;
> +            break;
> +        }
> +
> +        write_lock(&pdev->domain->pci_lock);
> +        pcidevs_unlock();
> +        ret = pci_device_state_reset_methods[dev_reset.reset_type].reset_fn(pdev);

... you're setting this up for calling NULL. In fact there's also no bounds
check for the array index.

Also, a nit (further up): opening curly braces for a new scope go onto
their own line. Then again, I notice that apparently _all_ other instances
in this file are doing it the wrong way, too.

Finally, is the "dev" local variable really needed? It effectively hides that
PCI_SBDF() is invoked on the hypercall arguments.

> +        write_unlock(&pdev->domain->pci_lock);
> +        if ( ret )
> +            printk(XENLOG_ERR "%pp: failed to reset vPCI device state\n", &sbdf);

Maybe downgrade to dprintk()? The caller ought to handle the error anyway.

> --- a/xen/drivers/vpci/vpci.c
> +++ b/xen/drivers/vpci/vpci.c
> @@ -172,6 +172,15 @@ int vpci_assign_device(struct pci_dev *pdev)
>  
>      return rc;
>  }
> +
> +int vpci_reset_device_state(struct pci_dev *pdev)

As a target of an indirect call this needs to be annotated cf_check (both
here and in the declaration, unlike __must_check, which is sufficient to
have on just the declaration).

> --- a/xen/include/xen/pci.h
> +++ b/xen/include/xen/pci.h
> @@ -156,6 +156,22 @@ struct pci_dev {
>      struct vpci *vpci;
>  };
>  
> +struct pci_device_state_reset_method {
> +    int (*reset_fn)(struct pci_dev *pdev);
> +};
> +
> +enum pci_device_state_reset_type {
> +    DEVICE_RESET_FLR,
> +    DEVICE_RESET_COLD,
> +    DEVICE_RESET_WARM,
> +    DEVICE_RESET_HOT,
> +};
> +
> +struct pci_device_state_reset {
> +    struct physdev_pci_device dev;
> +    enum pci_device_state_reset_type reset_type;
> +};

This is the struct to use as hypercall argument. How can it live outside of
any public header? Also, when moving it there, beware that you should not
use enum-s there. Only handles and fixed-width types are permitted.

> --- a/xen/include/xen/vpci.h
> +++ b/xen/include/xen/vpci.h
> @@ -38,6 +38,7 @@ int __must_check vpci_assign_device(struct pci_dev *pdev);
>  
>  /* Remove all handlers and free vpci related structures. */
>  void vpci_deassign_device(struct pci_dev *pdev);
> +int __must_check vpci_reset_device_state(struct pci_dev *pdev);

What's the purpose of this __must_check, when the sole caller is calling
this through a function pointer, which isn't similarly annotated?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:22:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:22:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742361.1149147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDFb-0001ME-EO; Mon, 17 Jun 2024 14:22:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742361.1149147; Mon, 17 Jun 2024 14:22:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDFb-0001M7-Bt; Mon, 17 Jun 2024 14:22:27 +0000
Received: by outflank-mailman (input) for mailman id 742361;
 Mon, 17 Jun 2024 14:22:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJDFa-0001M1-3C
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:22:26 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04aa5163-2cb5-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 16:22:25 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-57cbc66a0a6so475679a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:22:25 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cf709b49dsm76051a12.49.2024.06.17.07.22.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 07:22:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04aa5163-2cb5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718634145; x=1719238945; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=RDYlK32PcEbzjDkBUw1Lslht69kdYDtkHTsgamegvR4=;
        b=QdijaweahB0/Bd8sFP/XpqVj8bgplMplUdYc7ryY2br//8oGuokXielPcFP/Fha6MP
         eNyWPbN61a6Xj6XR/Gs2oSC/5dd/Qj4MPbzk3D2iP+K4fjjLBW9/lLmt+QIsJcN/5boH
         sWMZplN5T8V4hrk3WWB0Cm+/nXbzWLBIyTq+nbH9NW0dOkOP+34+vuuc9mV2+FpBoN3w
         GHtvNxj/+OLiixmnbwUVw/tHyS6AtnIFc5MuX4iHWdaM8uIE4jlrQ0y6QuHnJme2k26F
         PI1rHPLxrDAvyWQoowwmzHhdXX1iS2wS7XgFzNvIfV2aAzUHkUjikRa1E53FVk6B14PX
         mq2g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718634145; x=1719238945;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RDYlK32PcEbzjDkBUw1Lslht69kdYDtkHTsgamegvR4=;
        b=h/8bCh8DB+p6rUyo8oJDJ/ozE/vHXUeChmKFjVFoaNo6TwErq9koiTxqQG2e+KzQGF
         EH8PtwWZK9HrjPQEaJB5zbRkvoXRTqxKw+EM1iXiVFwwx/R/pHI2R27KrkrorbILG3xE
         kN7sa3HFfni+nR9VlWFE+zQ8x+fpAPKqdGCKgDwykICqybfWwf+vlacnvF0xuJ3AzP6O
         gHwFr3t5PEtY56FZJ8Z08ix+F47WnC0aogMilYJ/blmzNk4OP8zUGMkxn0PhftRreOTZ
         X/FIm6Vjuxh6kZ24WA1d1iA0d6B50jLhChP+2oWBptU9fgZ4pXclUSuK2aiwCqK5Kxxf
         n0lQ==
X-Forwarded-Encrypted: i=1; AJvYcCWrOpTgsyhe9bU+kA3JPd99Kmg9Duvw3zI5w4x/RYSuNdN64NBdSKBhqYKhpqIZtkTrhvEeWHZpCS2r/Ei0p8yAtq5D9/ps6dRTp6L2q5Q=
X-Gm-Message-State: AOJu0Yxg6NG5Nwpq1y6CYaJMA7UkyM/puU/S9ItRH4temMLYp5vdwwbj
	1a2rIBns+ffq/pKRPqen+mgdjdlCuIh3yrSO8A+xYKjeFt5noLqBwHUShyo6Rw==
X-Google-Smtp-Source: AGHT+IGTdnNmx/lrvgSy2nQrknFrK5oVSLM37bT6s12vjnAcrOUwqPbxH/dj19ONIPn97Cv2O4Ww2A==
X-Received: by 2002:aa7:cb87:0:b0:57c:a796:ed6e with SMTP id 4fb4d7f45d1cf-57cb4bb0d49mr9430677a12.6.1718634144499;
        Mon, 17 Jun 2024 07:22:24 -0700 (PDT)
Message-ID: <2fe6ef97-84f2-4bf4-870b-b0bb580fa38f@suse.com>
Date: Mon, 17 Jun 2024 16:22:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] x86/xen/time: Reduce Xen timer tick
To: Frediano Ziglio <frediano.ziglio@cloud.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>, x86@kernel.org,
 Dave Hansen <dave.hansen@linux.intel.com>, Borislav Petkov <bp@alien8.de>,
 Ingo Molnar <mingo@redhat.com>, Thomas Gleixner <tglx@linutronix.de>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20240617141303.53857-1-frediano.ziglio@cloud.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240617141303.53857-1-frediano.ziglio@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 16:13, Frediano Ziglio wrote:
> Current timer tick is causing some deadline to fail.
> The current high value constant was probably due to an old
> bug in the Xen timer implementation causing errors if the
> deadline was in the future.
> This was fixed in Xen commit:
> 19c6cbd90965 xen/vcpu: ignore VCPU_SSHOTTMR_future

And then newer kernels are no longer reliably usable on Xen older than
this?

> --- a/arch/x86/xen/time.c
> +++ b/arch/x86/xen/time.c
> @@ -30,7 +30,7 @@
>  #include "xen-ops.h"
>  
>  /* Minimum amount of time until next clock event fires */
> -#define TIMER_SLOP	100000
> +#define TIMER_SLOP	1000

It may just be my lack of knowledge of present-day Linux time handling,
but the change of a value with this name (and thus commented) doesn't
directly relate to the "timer tick" rate. Could you maybe help me see
the connection?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:25:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:25:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742366.1149157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDIn-0001vj-SN; Mon, 17 Jun 2024 14:25:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742366.1149157; Mon, 17 Jun 2024 14:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDIn-0001vc-Pq; Mon, 17 Jun 2024 14:25:45 +0000
Received: by outflank-mailman (input) for mailman id 742366;
 Mon, 17 Jun 2024 14:25:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJDIm-0001vW-J7
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:25:44 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ae2a795-2cb5-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 16:25:43 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a63359aaaa6so665460466b.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:25:43 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f8176eea5sm178998766b.88.2024.06.17.07.25.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 07:25:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ae2a795-2cb5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718634343; x=1719239143; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=QKUxaFYJiLg9Ph/VfOVu53T0ux5v0DrUMuEEkKaFwe4=;
        b=Cw+uSzZDw8g7jhIKbeQDP/uSYxPCFWUwD11XxzHzD93h1SBUM2bfhxx7q0+HBQjSmB
         QSlEXFDACmrFRxUevuoiyPVJsOaVJ9YklHCPFYv5vIBWgDDjvQH+0JGvdTOyaezxfU/k
         KNcNBXsJ7Z3bu2QYFXBrpiGviU7Fi5g8QglS92hJ6Y58Yfcg1cF/nPQUc8a2oRxQTMcT
         Zprg9zvT3x7Wgd6LYDuXJASSwTj0jvTf0knWbjHiNRKmthO8ZoX743XygpSQQAdRActy
         q9fhfrtJDSy19bBsPqFqnlDL3DFsiAEEek/gZvbTygWuNU6z5dy83In5TmNvbzP2fhL9
         cJAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718634343; x=1719239143;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=QKUxaFYJiLg9Ph/VfOVu53T0ux5v0DrUMuEEkKaFwe4=;
        b=Zc+IutychFcUxnUQH8a0IB/DYKHcfoJAx0AAFC3wYouxWH5Fe8CJBSc5AutXpBfV5x
         xiac+kkg5+1ZocgpdfaW184XksKZzBJwFtemIohvNQvyafV1rlJ3kQhcFzM4Cj19c9JS
         W4kz5hnAHz0ffRXMDUoA2ukiu3NopqfVslOCPEJKH/C1R6HGuF9ys/cZ/PuVJVuO/5jh
         p7j6VPwqyN2ktxq6HrfPPNkWhqYCJSVbch/47xJkqmRrsGRCv39mWrwwkbMu2t1kNu+5
         QB47MqUrNwR+/d478xFl2mIRM5b4bnEPVD6Z5E5B7dbY3pxU0r9ylIAHgwZa+Uy+fKcw
         /VaA==
X-Gm-Message-State: AOJu0Yxhd2vgEO3FRCo2GCgxv8hMpEMtD8D8j0K1YM4dUswWgziohvKr
	lwzyVR3WQ9gDTuDC5Opz1CfSJ3KmuHXG6oSYKaS03FmNsPLuw3BPjNJUi2XQJA==
X-Google-Smtp-Source: AGHT+IGCRyB/yokdEe2X6n7ZLojfXcD6MbiY45JVgMP9N+wJUoaqOs5hTl81gNYNcF3RIn1k8yEKKQ==
X-Received: by 2002:a17:907:3f03:b0:a6f:5fe2:56e9 with SMTP id a640c23a62f3a-a6f60d2b9a7mr842972766b.17.1718634342984;
        Mon, 17 Jun 2024 07:25:42 -0700 (PDT)
Message-ID: <bd2eb947-7fca-4f1a-bf43-addccdda35a0@suse.com>
Date: Mon, 17 Jun 2024 16:25:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: ACPI NVS range conflicting with Dom0 page tables (or kernel
 image)
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Juergen Gross <jgross@suse.com>
References: <a5a8a016-2107-46fb-896b-2baaf66566d4@suse.com>
 <ZnBCFgHltVqj2FDh@mail-itl>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZnBCFgHltVqj2FDh@mail-itl>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.06.2024 16:03, Marek Marczykowski-Górecki wrote:
> On Mon, Jun 17, 2024 at 01:22:37PM +0200, Jan Beulich wrote:
>> Hello,
>>
>> while it feels like we had a similar situation before, I can't seem to
>> find traces thereof, or associated (Linux) commits.
> 
> Is it some AMD Threadripper system by a chance?

It's an AMD system in any event, yes. I don't have all the details on it.

> Previous thread on this issue:
> https://lore.kernel.org/xen-devel/CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com/

Ah yes, that's probably the one I was vaguely remembering. There it was the
kernel image that the E820 conflicted with. Yet ...

>> With
>>
>> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x4000000
>> ...
>> (XEN)  Dom0 alloc.:   0000000440000000->0000000448000000 (619175 pages to be allocated)
>> ...
>> (XEN)  Loaded kernel: ffffffff81000000->ffffffff84000000
>>
>> the kernel occupies the space from 16Mb to 64Mb in the initial allocation.
>> Page tables come (almost) directly above:
>>
>> (XEN)  Page tables:   ffffffff84001000->ffffffff84026000
>>
>> I.e. they're just above the 64Mb boundary. Yet sadly in the host E820 map
>> there is
>>
>> (XEN)  [0000000004000000, 0000000004009fff] (ACPI NVS)
>>
>> i.e. a non-RAM range starting at 64Mb. The kernel (currently) won't tolerate
>> such an overlap (also if it was overlapping the kernel image, e.g. if a
>> sufficiently larger kernel was used on the machine in question). Yet with its
>> fundamental goal of making its E820 match the host one I'm also in trouble
>> thinking of possible solutions / workarounds. I certainly do not see Xen
>> trying to cover for this, as the E820 map re-arrangement is purely a kernel
>> side decision (forward ported kernels got away without, and what e.g. the
>> BSDs do is entirely unknown to me).
> 
> In Qubes we have worked around the issue by moving the kernel lower
> (CONFIG_PHYSICAL_START=0x200000):
> https://github.com/QubesOS/qubes-linux-kernel/commit/3e8be4ac1682370977d4d0dc1d782c428d860282
> 
> Far from ideal, but gets it bootable...

... as you say, it's a workaround for particular systems, but not generally
dealing with the underlying issue. This explains why I couldn't find any
patch(es), though.

Jan
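The conflict described above can be checked with a little range arithmetic. This is only a sketch: the addresses come from the quoted Xen boot log, and the physical placement of the page tables (0x4001000) is inferred from the virtual addresses under the usual direct kernel mapping, so treat that as an assumption.

```python
# Addresses from the Xen boot log quoted in the thread; ranges are [start, end).
KERNEL_IMAGE = (0x1000000, 0x4000000)   # 16 MiB .. 64 MiB
PAGE_TABLES  = (0x4001000, 0x4026000)   # just above the 64 MiB boundary (assumed phys)
ACPI_NVS     = (0x4000000, 0x400a000)   # host E820 [0x4000000, 0x4009fff]

def overlaps(a_start, a_end, b_start, b_end):
    # Two half-open ranges intersect iff each starts before the other ends.
    return a_start < b_end and b_start < a_end

# The page tables run into the ACPI NVS range; the kernel image itself
# stops exactly at the 64 MiB boundary and so, on this machine, does not.
print(overlaps(*PAGE_TABLES, *ACPI_NVS))    # -> True
print(overlaps(*KERNEL_IMAGE, *ACPI_NVS))   # -> False
```

A sufficiently larger kernel image would push KERNEL_IMAGE's end past 0x4000000 and make the first kind of overlap apply to the image as well, as noted in the thread.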


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:35:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742376.1149167 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDRw-0004PW-Ph; Mon, 17 Jun 2024 14:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742376.1149167; Mon, 17 Jun 2024 14:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDRw-0004PP-N2; Mon, 17 Jun 2024 14:35:12 +0000
Received: by outflank-mailman (input) for mailman id 742376;
 Mon, 17 Jun 2024 14:35:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=e5mD=NT=kernel.org=kbusch@srs-se1.protection.inumbo.net>)
 id 1sJDRv-0004PJ-Ci
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:35:11 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cb541614-2cb6-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 16:35:09 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 7CA6C60F73;
 Mon, 17 Jun 2024 14:35:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D7346C2BD10;
 Mon, 17 Jun 2024 14:35:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb541614-2cb6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718634907;
	bh=zR4PmzG8eM11uEea9QT6dpcyWJEpQ/sobqtYRuY7p2o=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=eD5n/nvWPH0mROLi1mYqjLG+6CkIkCxCEDr7KHYv5mCJ6cJ3axvL3iUR8FSfF/NAJ
	 aWF+hMFp26+MYq12L40CvD0LxAzWJTFIpFvJFlT/13JJ1EJCWWsc/XXuMYInx3fYPk
	 PVqFFZp9qinzyqQ1wbJp1uQUpFSEgZjA7/AJ/D87BnMG71EbVAfG4CFQOyKDe6kjS2
	 0qjhzv59DCAwJqh60nhssiSotn3a1GiLXJljndQUFTrzdMkRj7pO46aG6KaDjyXOpw
	 MXJnwDrZxtrM68cj9z93ToW3XbCMjuJFfQuHw/qbhAPx1AOR5+hjUoIbpzcdxhC3Vq
	 UbiPTmWDLBjCA==
Date: Mon, 17 Jun 2024 08:35:02 -0600
From: Keith Busch <kbusch@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 26/26] block: move the bounce flag into the features field
Message-ID: <ZnBJlix63Fj_G1px@kbusch-mbp.dhcp.thefacebook.com>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-27-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240617060532.127975-27-hch@lst.de>

On Mon, Jun 17, 2024 at 08:04:53AM +0200, Christoph Hellwig wrote:
> @@ -352,7 +355,6 @@ enum blk_bounce {

No more users of "enum blk_bounce" after this, so you can delete that
too.

>  struct queue_limits {
>  	unsigned int		features;
>  	unsigned int		flags;
> -	enum blk_bounce		bounce;


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:37:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742381.1149178 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDTn-0004w0-4H; Mon, 17 Jun 2024 14:37:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742381.1149178; Mon, 17 Jun 2024 14:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDTn-0004vt-1j; Mon, 17 Jun 2024 14:37:07 +0000
Received: by outflank-mailman (input) for mailman id 742381;
 Mon, 17 Jun 2024 14:37:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5eX2=NT=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJDTm-0004vn-2S
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:37:06 +0000
Received: from mail-qk1-x72c.google.com (mail-qk1-x72c.google.com
 [2607:f8b0:4864:20::72c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 10e3588a-2cb7-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 16:37:05 +0200 (CEST)
Received: by mail-qk1-x72c.google.com with SMTP id
 af79cd13be357-795502843ccso254935385a.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:37:05 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abc02e72sm433891885a.96.2024.06.17.07.37.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 07:37:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10e3588a-2cb7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718635024; x=1719239824; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=XvP4+t8l416VgzjO1ZXGZmV/a/Zd7bc8WzT6PYRtni8=;
        b=SlJLXfOAZ35kHqarAP9x+qLWE2vL6+CL+TkHKyerplrPWTLTLazJ8sW1BALQfMkolY
         2UiHJyMj6GH3wUVG5hQx9K7wdzbcUPeBV6L6FeRI7axkefDB14lkoJ575jJ0zGdrOt5q
         flwkzDLAJ9SGsmluqIi/updmzSZkvwvxRg3KE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718635024; x=1719239824;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XvP4+t8l416VgzjO1ZXGZmV/a/Zd7bc8WzT6PYRtni8=;
        b=CE1mYJ439P/EdC2PAIKJZq+Ff/lP31458qr5McYzHoYay+6ZwDQo3uFDOURKhablDD
         +0t5zvoUp8mlRr27Yn4JYNKXmtXUj3VGRUrC9C1Dutd22bu+VD35XVqZWDqXd6DOtG8c
         QIAB54ELU0ElvckakGy1xnkqko6tu8VXXHqYEicvvp8K1QYujCNBMgaQwTGiMg9A0itq
         tSTsF93qlvIsHLmdQCo/t7dtKdXeVcRHZFRlR7uMgxUzE8rUa08UD81fHKw2ehq5598K
         iRyw9DAuWkpqFUPnU8+Kwdj5YsgHxKqUTsrNu7BEswbF8u6S/mT6Uv26KbIRIBe5cMJE
         7jHg==
X-Forwarded-Encrypted: i=1; AJvYcCX0rZ45qXiydFWiDfxZp9D+IAc4JD6UL9KChZiTki6TpTkP5ewBrcVmuvrUvaf5mns6zSBR0tuYHthxu6f2AAXGr40m7+2rhzV5ay74I6w=
X-Gm-Message-State: AOJu0YxADBaOwSgXPP0rcdluASaA7nFbeZlD+D2YM2wZeMbxikWtj05E
	Cf0vZtGai9y5ZIq80hHEVoyjnSP5Mp9cyEVHhWREW2cBJKXjy3yP0se2fwEtmwE=
X-Google-Smtp-Source: AGHT+IHQDMfoJ/LOZJYFoBhJ4TbHf9tXcpQcYRrBKsc6jJQgkvkhyZr/hctT8019B9VlxZwrZ62Uqw==
X-Received: by 2002:a05:620a:4507:b0:795:4e89:53b2 with SMTP id af79cd13be357-798d26a8ce3mr1215726185a.70.1718635023959;
        Mon, 17 Jun 2024 07:37:03 -0700 (PDT)
Date: Mon, 17 Jun 2024 16:37:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Frediano Ziglio <frediano.ziglio@cloud.com>,
	"H. Peter Anvin" <hpa@zytor.com>, x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Borislav Petkov <bp@alien8.de>, Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] x86/xen/time: Reduce Xen timer tick
Message-ID: <ZnBKDRWi_2cO6WbA@macbook>
References: <20240617141303.53857-1-frediano.ziglio@cloud.com>
 <2fe6ef97-84f2-4bf4-870b-b0bb580fa38f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <2fe6ef97-84f2-4bf4-870b-b0bb580fa38f@suse.com>

On Mon, Jun 17, 2024 at 04:22:21PM +0200, Jan Beulich wrote:
> On 17.06.2024 16:13, Frediano Ziglio wrote:
> > Current timer tick is causing some deadline to fail.
> > The current high value constant was probably due to an old
> > bug in the Xen timer implementation causing errors if the
> > deadline was in the future.
> > This was fixed in Xen commit:
> > 19c6cbd90965 xen/vcpu: ignore VCPU_SSHOTTMR_future
> 
> And then newer kernels are no longer reliably usable on Xen older than
> this?

I think this should reference the Linux commit that removed the usage
of VCPU_SSHOTTMR_future on Linux itself, not the change that makes Xen
ignore the flag.

> > --- a/arch/x86/xen/time.c
> > +++ b/arch/x86/xen/time.c
> > @@ -30,7 +30,7 @@
> >  #include "xen-ops.h"
> >  
> >  /* Minimum amount of time until next clock event fires */
> > -#define TIMER_SLOP	100000
> > +#define TIMER_SLOP	1000
> 
> It may just be my lack of knowledge of nowadays' Linux time handling,
> but the change of a value with this name, and commented this way,
> doesn't directly relate to the "timer tick" rate. Could you maybe
> help me see the connection?

The TIMER_SLOP define is used in the min_delta_{ns,ticks} fields, and I
think this is wrong.

The min_delta_ns for the Xen timer is 1ns.  If Linux needs a greater
min delta than what the timer interface supports, it should be handled
in the generic timer code, not open coded at the definition of
possibly each timer implementation.

Thanks, Roger.
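The effect being discussed can be illustrated with a small sketch of the clamping the clockevents core applies to a programmed expiry (this is a simplified stand-in, not the actual kernel code): any deadline closer than min_delta_ns is pushed out to min_delta_ns, so a large TIMER_SLOP inflates short deadlines.

```python
def program_next_event(delta_ns, min_delta_ns):
    # Simplified model of the clockevents clamp: a requested expiry
    # closer than min_delta_ns is deferred to min_delta_ns.
    return max(delta_ns, min_delta_ns)

# With the old TIMER_SLOP of 100000 ns, a 5 us deadline slips to 100 us:
print(program_next_event(5_000, 100_000))   # -> 100000
# With the proposed 1000 ns slop, the 5 us deadline is honoured:
print(program_next_event(5_000, 1_000))     # -> 5000
```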


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:41:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742386.1149187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDXg-0007Go-Hx; Mon, 17 Jun 2024 14:41:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742386.1149187; Mon, 17 Jun 2024 14:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDXg-0007Gh-F6; Mon, 17 Jun 2024 14:41:08 +0000
Received: by outflank-mailman (input) for mailman id 742386;
 Mon, 17 Jun 2024 14:41:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <anthony@xenproject.org>) id 1sJDXf-0007Gb-K5
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:41:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <anthony@xenproject.org>)
 id 1sJDXf-0000Fc-1u; Mon, 17 Jun 2024 14:41:07 +0000
Received: from lfbn-gre-1-246-234.w90-112.abo.wanadoo.fr ([90.112.203.234]
 helo=l14.home) by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <anthony@xenproject.org>)
 id 1sJDXe-0003hN-NF; Mon, 17 Jun 2024 14:41:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:
	Message-Id:Date:Subject:Cc:To:From;
	bh=PuC0rrmLqc+ryhwI3RIlUNKMT7nejiT7aD4KbDKVFzk=; b=HSfljoc5uhBLD9x1pLkduKHPQ+
	gEHWqCYCLOUcKDmamdaIAlAzkRdwGfBa8i7cfm58YwWilPOjNXZk8ZfPSbs7KeNhoEtCM4GJI6PXw
	uklLIQybxhZGn01WHmJ8+oALmUQZaPEmtdBbpBtU9mHnKcRukXk1Vc/O0dQkyiVMWePU=;
From: Anthony PERARD <anthony@xenproject.org>
To: xen-devel@lists.xenproject.org
Cc: Luca Fancellu <luca.fancellu@arm.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@vates.tech>
Subject: [OSSTEST PATCH] preseed_base: Use "keep" NIC NamePolicy when "force-mac-address"
Date: Mon, 17 Jun 2024 16:40:51 +0200
Message-Id: <20240617144051.29547-1-anthony@xenproject.org>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Anthony PERARD <anthony.perard@vates.tech>

We have a few machines (arndale-*) that have a nic without a mac
address, so the kernel assigns a random one. For those there's a flag
"force-mac-address" which tells osstest to make the machine change
the mac address to a predefined one at boot. This normally tells the
systemd rules not to use the mac address to rename the network
interface, as it is a temporary mac, but that doesn't always work.
(Machines installed by osstest should use the "mac" NamePolicy
otherwise, since 367166c32329 ("preseed_base, ts-host-install: Change
NIC NamePolicy to "mac"")).

Often on the "linux-linus" branch, i.e. with a more recent version of
Linux, the network interface sometimes gets renamed with the "mac"
NamePolicy, which breaks networking. These are the kernel messages when
the rename happens:

> usb 1-3.2.4: new high-speed USB device number 4 using exynos-ehci
> asix 1-3.2.4:1.0 (unnamed net_device) (uninitialized): invalid hw address, using random
> asix 1-3.2.4:1.0 (unnamed net_device) (uninitialized): PHY [usb-001:004:10] driver [Asix Electronics AX88772A] (irq=POLL)
> asix 1-3.2.4:1.0 eth0: register 'asix' at usb-12110000.usb-3.2.4, ASIX AX88772 USB 2.0 Ethernet, 06:85:e5:95:f0:7c
> usbcore: registered new device driver onboard-usb-dev
> usb 1-3.2.4: USB disconnect, device number 4
> asix 1-3.2.4:1.0 eth0: unregister 'asix' usb-12110000.usb-3.2.4, ASIX AX88772 USB 2.0 Ethernet
> hub 1-3.2:1.0: USB hub found
> hub 1-3.2:1.0: 4 ports detected
> hub 1-3.2:1.0: USB hub found
> hub 1-3.2:1.0: 4 ports detected
> usb 1-3.2.4: new high-speed USB device number 5 using exynos-ehci
> asix 1-3.2.4:1.0 (unnamed net_device) (uninitialized): PHY [usb-001:005:10] driver [Asix Electronics AX88772A] (irq=POLL)
> Asix Electronics AX88772A usb-001:005:10: attached PHY driver (mii_bus:phy_addr=usb-001:005:10, irq=POLL)
> asix 1-3.2.4:1.0 eth0: register 'asix' at usb-12110000.usb-3.2.4, ASIX AX88772 USB 2.0 Ethernet, 06:85:e5:95:f0:7c
> asix 1-3.2.4:1.0 enx0685e595f07c: renamed from eth0

The "xenbr0" bridge is set up to use "eth0", because that was the name
of the nic during setup, so with a new name for the main interface the
bridge doesn't work.

In order to avoid the issue, we will use the NamePolicy "keep" when
there is a "force-mac-address" flag, which keeps the original name of
the interface (eth0). That flag only works if there's a single
network interface, so we can expect "eth0" to always be the same
interface.
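The selection just described can be sketched as follows (a Python stand-in for the Perl logic in the patch; the contents of the "mac" fallback file mirror what 90-eth-mac-policy.link is expected to contain and are an assumption here):

```python
def link_file(force_mac_address):
    """Return (filename, contents) of the systemd .link file to install.

    Hosts with the force-mac-address flag keep the kernel-assigned name
    (eth0); all others are renamed based on their MAC address.
    """
    policy = "keep" if force_mac_address else "mac"
    name = ("70-eth-keep-policy.link" if force_mac_address
            else "90-eth-mac-policy.link")
    contents = (
        "[Match]\n"
        "Type=ether\n"
        "Driver=!vif\n"      # exclude Xen backend vif interfaces
        "[Link]\n"
        f"NamePolicy={policy}\n"
    )
    return name, contents

print(link_file(True)[0])   # -> 70-eth-keep-policy.link
```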

Even if the problem has so far shown up only at runtime after rebooting
under Xen (which is fixed by a change in preseed_base()), we will also
add the policy change to the installer (change in ts-host-install), to
be future proof.

(The filename of the policy is chosen so that it applies before the
"73-usb-net-by-mac.link" installed on the system.)

Signed-off-by: Anthony PERARD <anthony.perard@vates.tech>
---

Notes:
    CCing people mostly FYI rather than for review.
    
    I would wait until the release of Xen before pushing this, as the issue
    doesn't prevent progress of the xen-unstable branch; it just slows down
    linux-linus a bit, with maybe some unnecessary retries.
    
    I did run it, with configs which hopefully replicate the linux-linus
    and xen-unstable branches:
    
    linux-linus:
        http://logs.test-lab.xenproject.org/osstest/logs/186363/
        no regression
    
    xen-unstable:
        http://logs.test-lab.xenproject.org/osstest/logs/186366/
        Just one regression (test-amd64-amd64-qemuu-freebsd12-amd64), but it
        isn't caused by the new patch.

 Osstest/Debian.pm | 14 +++++++++++++-
 ts-host-install   | 16 +++++++++++++++-
 2 files changed, 28 insertions(+), 2 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 3545f3fd..d974fea5 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -972,7 +972,19 @@ END
         # is going to be added to dom0's initrd, which is used by some guests
         # (created with ts-debian-install).
         preseed_hook_installscript($ho, $sfx,
-            '/usr/lib/base-installer.d/', '05ifnamepolicy', <<'END');
+            '/usr/lib/base-installer.d/', '05ifnamepolicy',
+            $ho->{Flags}{'force-mac-address'} ? <<'END' : <<'END');
+#!/bin/sh -e
+linkfile=/target/etc/systemd/network/70-eth-keep-policy.link
+mkdir -p `dirname $linkfile`
+cat > $linkfile <<EOF
+[Match]
+Type=ether
+Driver=!vif
+[Link]
+NamePolicy=keep
+EOF
+END
 #!/bin/sh -e
 linkfile=/target/etc/systemd/network/90-eth-mac-policy.link
 mkdir -p `dirname $linkfile`
diff --git a/ts-host-install b/ts-host-install
index 0b6aaeea..fbbfeecc 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -248,7 +248,21 @@ END
     print CANARY "\n# - canary - came via initramfs\n" or die $!;
     close CANARY or die $!;
 
-    if ($ho->{Suite} !~ m/lenny|squeeze|wheezy|jessie|stretch|buster/) {
+    if ($ho->{Flags}{'force-mac-address'}) {
+        # When we have to set a MAC address, make sure that the interface keeps
+        # the original name that the kernel gives, "eth0". There should only be
+        # one interface in the system in this case, so no risk of mixup.
+        system_checked(qw(mkdir -p --), "$initrd_overlay.d/lib/systemd/network");
+        file_simple_write_contents
+            ("$initrd_overlay.d/lib/systemd/network/70-eth-keep-policy.link",
+                <<END);
+[Match]
+Type=ether
+Driver=!vif
+[Link]
+NamePolicy=keep
+END
+    } elsif ($ho->{Suite} !~ m/lenny|squeeze|wheezy|jessie|stretch|buster/) {
         # Switch to more predictale nic name based on mac address, instead of the
         # policy "onboard" which can try to set the same name ("eno1") to two
         # differents nic, or "slot". New names are "enx$mac".
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:45:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742393.1149198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDbd-0007sB-2N; Mon, 17 Jun 2024 14:45:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742393.1149198; Mon, 17 Jun 2024 14:45:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDbc-0007s4-Uz; Mon, 17 Jun 2024 14:45:12 +0000
Received: by outflank-mailman (input) for mailman id 742393;
 Mon, 17 Jun 2024 14:45:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJDbb-0007ry-Mv
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:45:11 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3258e1fe-2cb8-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 16:45:10 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57cb9a370ddso5012252a12.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:45:10 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cddba1a17sm2242771a12.43.2024.06.17.07.45.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 07:45:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3258e1fe-2cb8-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718635510; x=1719240310; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Q+2Mxxt+USuoP3e0fzSGmT0itw6+be4kBqbL/ExF9EM=;
        b=PxvKgy2D5XFI3MgGLGdi5mb14GFJdXeksCQ8c1v0mFSwjIHDkLqHUrTiBVOuCjslTY
         iCuziWBAtOTLbA9cZe5WnOx9MYXywpkxItTt56tKFjnmNz4iI5O7fBe2dMjiDSSVIoKv
         v7omOvHP9XJIedHsnaojc2tbMwSCibwFQoXcw+GoNSBRNRv4L8nRWDhuWkNEO6v2ogLZ
         JlBaizUPNkiX7QPuBA42IG8XBtztTMDK45ke3iTFmocW2HXdgtnWdqSHtL8KFJnJ47sL
         iarZVmMKQ9ve+8ShmRHOSMa6HVMHBu8EBt+PHD48IMDH5LIaBSHDB+3+UvxoU9N5KM1g
         ivow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718635510; x=1719240310;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Q+2Mxxt+USuoP3e0fzSGmT0itw6+be4kBqbL/ExF9EM=;
        b=A5Kg1p6KZt2vfm6wrLAEg09iwbhNllAVTd1H4WcDv4oYKrmHP6MoaUJxeNokvMZM4l
         qGJasgZjQ5xooYDwYGkReNuZ2+1s9BfOIEoem1vUC5XCdsVsT9ftBOtd8jhMSNMxh1dX
         hMj7wOLp+7m2iF+lX6HkZAzNzO3e8bg3veUYETRybUbdD3BiyX+zA0UWKgY1je5t9Jrw
         mJ67P9wzoK7ZxSp1er2V5/0z9ETazYZVS8Hh7rGzie9wXVVWyEB+ZIUGj/+Ryxrha7FN
         QZ06mlFGe6qnSktJqmTDLesnLXY3QqFa4xj8aoXgQsPxGYWBioPcmdNm3ikD//K4MGhE
         p0mg==
X-Forwarded-Encrypted: i=1; AJvYcCV4T5+ehNWwnbK0+uamDNgtp2Ng5DjfsZRMzKdTvnpi/X/EJxM9x7RFQg2TqE87fNYpjGimsl3NdhtoT7kB2eo7eGnGol0pQFAq94fiGeQ=
X-Gm-Message-State: AOJu0YxkazKlU9DWBIwUVzkwmdFkiYlOsX5EOBivam+9hXoT31sAvuzj
	oJH2Oky6V5mNMLqf3sWu5Uub1KEv5q6CQRj/hsU/GwAJhxsDc4uaIi5Xnvqq1g==
X-Google-Smtp-Source: AGHT+IHa4kr1TAl8Ve1dyaQtTALk+ruPgSzgrbBHfwymQyL5eUP+XT9odskuo6BKFD3ebIbEoVp4Mw==
X-Received: by 2002:a50:cdc2:0:b0:578:649e:e63e with SMTP id 4fb4d7f45d1cf-57cbd66c9bdmr5918321a12.16.1718635506047;
        Mon, 17 Jun 2024 07:45:06 -0700 (PDT)
Message-ID: <cb9910cd-7045-4c0d-a7cf-2bcf36e30cb2@suse.com>
Date: Mon, 17 Jun 2024 16:45:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-3-Jiqian.Chen@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240617090035.839640-3-Jiqian.Chen@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 11:00, Jiqian Chen wrote:
> If Xen runs with a PVH dom0 and an HVM domU, the HVM domain will map
> a pirq for a passthrough device using its GSI; see the qemu code
> xen_pt_realize->xc_physdev_map_pirq and the libxl code
> pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls
> into Xen, but in hvm_physdev_op PHYSDEVOP_map_pirq is not allowed,
> because currd is the PVH dom0 and PVH has no X86_EMU_USE_PIRQ flag,
> so it fails the has_pirq check.
> 
> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
> PHYSDEVOP_unmap_pirq for the failed path to unmap pirq.

Why "failed path"? Isn't unmapping also part of normal device removal
from a guest?

> And
> add a new check to prevent self-mapping when the subject domain has
> no PIRQ flag.

You still talk of only self mapping, and the code also still does only
that. As pointed out before: Why would you allow mapping into a PVH
DomU? IOW what purpose do the "d == currd" checks have?

> This way, a domU with the PIRQ flag can successfully map
> pirqs for passthrough devices even when dom0 has no PIRQ flag.

There's still a description problem here. Much like the first sentence,
this last one also says that the guest would itself map the pIRQ. In
which case there would still not be any reason to expose the sub-
functions to Dom0.

Jan
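
The gating being debated above can be modelled in isolation. The following standalone C sketch captures only the decision logic implied by the discussion (self-map needs the PIRQ flag; dom0 may map on behalf of a flag-carrying subject domain) — the struct, field names, and helper are illustrative stand-ins, not Xen's actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative stand-ins for Xen's domain state -- not the real structures. */
struct domain {
    bool is_hardware_domain;  /* true for dom0 */
    bool has_pirq;            /* X86_EMU_USE_PIRQ set for this domain */
};

/*
 * Model of the gating under discussion: an HVM caller may (un)map a pirq
 * onto itself only if it has the PIRQ flag; the hardware domain (dom0) may
 * additionally map pirqs on behalf of a subject domain that has the flag.
 */
static bool physdev_pirq_op_allowed(const struct domain *currd,
                                    const struct domain *subject)
{
    if (currd == subject)               /* self-map needs the PIRQ flag */
        return currd->has_pirq;
    return currd->is_hardware_domain && subject->has_pirq;
}
```

Under this model a PVH dom0 (no PIRQ flag itself) can still issue PHYSDEVOP_map_pirq for an HVM guest that has the flag, while a flag-less self-map is rejected — the distinction Jan's "d == currd" question is probing.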


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:52:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:52:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742403.1149211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDiL-0001rZ-QW; Mon, 17 Jun 2024 14:52:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742403.1149211; Mon, 17 Jun 2024 14:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDiL-0001rS-N0; Mon, 17 Jun 2024 14:52:09 +0000
Received: by outflank-mailman (input) for mailman id 742403;
 Mon, 17 Jun 2024 14:52:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJDiK-0001rK-S2
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:52:08 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2b451679-2cb9-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 16:52:07 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6f0c3d0792so513924966b.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:52:07 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f86dfdb22sm145574166b.77.2024.06.17.07.52.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 07:52:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b451679-2cb9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718635927; x=1719240727; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Yx1wzW7/Kzm5E6sX8YV5e9fXpW4+X/mgyYQf3ioxS0A=;
        b=UKgOVL+Q4vXG/W7bcKG7ibTpWEH7eAPtG/T6owYIzWfmaBHO1HB2GBdRhiWQkg+tsT
         fWXILR89+V0mgUKejni0cCit9lmjhVv9lIHc9RX11NEaQ0nfNQj2KAYDQccReabqaQxU
         JHJg95PsUzIuRMstvDRc6YSoh3JmuoJ5nF+YE4+UP1KHAMn0N08Pv6dVeF/oq77y1CBI
         69EIsxow68IjYOncnzDz3oogrU24k3uVoNS/rIR1WMJfXd7JBLSm42mY20ovpQ3GjHEA
         W9OgIX07HYkHS2hPrWyp0t9yM/aTQYMpFw0G9vMOx/iNybsholPsUx5vKSr9x+chMbl7
         muNw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718635927; x=1719240727;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Yx1wzW7/Kzm5E6sX8YV5e9fXpW4+X/mgyYQf3ioxS0A=;
        b=IEGJOt3WI0S0lYmmC86Y6K7wdjGSe9daXYt4Zbuy6EmpXsI3+1ZA3fTKliiNMDkBRW
         w/kiE6pOm2C8WlK5hCrw+vcEqpQ6cZzXsfknLzRJP8+2kJu7vteDEBtq9bSQ3yOHt1We
         5YFLtS8ROuDWqoHE2Vp7DWilg06RwhDOFQr42fvqe88mP7Vg50yMdfLdWPzDT0EjMoZo
         /Z0rLQIDykP5KxB//PslVvvOJl2mgGHZVEoMroRL6ePZFTJSs077lR7B+bjSipvhVH6K
         uW9LT/oxmfjuE8gNvt0xjpT+Tpl15WVRy/66m5wYVsrh7KfHmhQYglzaRoqQPf5y+wPN
         lBAQ==
X-Forwarded-Encrypted: i=1; AJvYcCXNHKbXyGM2zOFnF9ZX3vuMLvFr2UUx4lTA2BGx3BpM4yEPbSRDNAhpFQEf/jMiqR4woeLU17UQ0YCc2PiJ2MePbWhMeYdVxnBosyoPm7k=
X-Gm-Message-State: AOJu0YzhL3M/rCpMpcjSLvAWlyQMwJHj3MRGjQi8RXU/jMq9i6zf1XhP
	dtgoEeH9S/iVTglQom77swDnA+D38RynbionCtkuQ5/7EBkThImPbovBRCMhRg==
X-Google-Smtp-Source: AGHT+IG1UzbLqhgaIVDrRJYaVEaXsK7EJ0KpVTqfnJy3uR24H6rrcN+skomr16Yp7jlSFWOReBD8Xw==
X-Received: by 2002:a17:906:528a:b0:a6f:501c:5da8 with SMTP id a640c23a62f3a-a6f60d27e07mr754006966b.22.1718635927304;
        Mon, 17 Jun 2024 07:52:07 -0700 (PDT)
Message-ID: <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
Date: Mon, 17 Jun 2024 16:52:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240617090035.839640-4-Jiqian.Chen@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 11:00, Jiqian Chen wrote:
> The GSI of a passthrough device must be configured for the device to
> be mappable into an HVM domU.
> But when dom0 is PVH, the GSIs don't get registered, so the APIC,
> pin, and IRQ info is not added to the irq_2_pin list and the
> irq_desc handler is not set; then, when passing through a device,
> setting the IO-APIC affinity and vector will fail.
> 
> To fix the above problem, new code on the Linux kernel side will
> need to call PHYSDEVOP_setup_gsi for passthrough devices to
> register the GSI when dom0 is PVH.
> 
> So, add PHYSDEVOP_setup_gsi to hvm_physdev_op for the above
> purpose.
> 
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> The link to the Linux kernel code that will call this hypercall is:
> https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/

One of my v9 comments was addressed, thanks. Repeating the other, unaddressed
one here:
"As to GSIs not being registered: If that's not a problem for Dom0's own
 operation, I think it'll also want/need explaining why what is sufficient for
 Dom0 alone isn't sufficient when pass-through comes into play."

Jan
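
For context, PHYSDEVOP_setup_gsi takes a struct physdev_setup_gsi (GSI number, triggering, polarity) as declared in Xen's public/physdev.h. The sketch below models only the argument marshalling a kernel-side caller would do, with a recording stub in place of the real hypercall entry point; the stub and the wrapper function name are illustrative, not taken from the actual Linux patch:

```c
#include <assert.h>
#include <stdint.h>

#define PHYSDEVOP_setup_gsi 21 /* value from Xen's public/physdev.h */

/* Argument layout as in Xen's public interface. */
struct physdev_setup_gsi {
    int gsi;            /* IN: GSI to set up */
    uint8_t triggering; /* IN: 0 = edge, 1 = level */
    uint8_t polarity;   /* IN: 0 = active high, 1 = active low */
};

/* Recording stub standing in for the real HYPERVISOR_physdev_op. */
static int last_cmd;
static struct physdev_setup_gsi last_arg;

static int hypervisor_physdev_op_stub(int cmd, void *arg)
{
    last_cmd = cmd;
    last_arg = *(struct physdev_setup_gsi *)arg;
    return 0;
}

/* What a PVH-dom0 kernel might do before passing a device through. */
static int setup_gsi_for_passthrough(int gsi, uint8_t trig, uint8_t pol)
{
    struct physdev_setup_gsi s = { .gsi = gsi, .triggering = trig,
                                   .polarity = pol };
    return hypervisor_physdev_op_stub(PHYSDEVOP_setup_gsi, &s);
}
```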


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 14:57:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 14:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742413.1149228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDng-0002cy-VE; Mon, 17 Jun 2024 14:57:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742413.1149228; Mon, 17 Jun 2024 14:57:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDng-0002cm-PU; Mon, 17 Jun 2024 14:57:40 +0000
Received: by outflank-mailman (input) for mailman id 742413;
 Mon, 17 Jun 2024 14:57:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MAK4=NT=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sJDng-0002Xy-5P
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 14:57:40 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id efc4343c-2cb9-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 16:57:37 +0200 (CEST)
Received: by mail-ed1-x530.google.com with SMTP id
 4fb4d7f45d1cf-57c7ec8f1fcso5216677a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 07:57:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efc4343c-2cb9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718636257; x=1719241057; darn=lists.xenproject.org;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=86RGoK/LDSuEJM28NytYmGjmR2Si4KO8R4io5TDshLc=;
        b=Vw4k54V0/DPTdamFs23VB1sg3ijxrN7W0dQvEWo8KFIJPt3QTnwdHfN3ZjJteR9/MK
         flmueD65LYXWggjR84jo5XRx5ObyLIUzIrPnby1cxFuciSVxm+Tn34ePXNd2axCLzU9P
         xVRguTZON8DIskK7bQ+EWqAXqYOiiWmM1sXcM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718636257; x=1719241057;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=86RGoK/LDSuEJM28NytYmGjmR2Si4KO8R4io5TDshLc=;
        b=LAqEB5xSb0KGsoxU2TJ6RaMEJW9Mq9XX6rmTP/jq7oHl+q50C1cYm7BYc/frdnsTtG
         aMgBCq2Oe5/trslWT1OqBxut9E2DVFXU0xpYGHFyPuwbhfYVlRpAtjkkq2DNrcOLzPCA
         hY0mjEiyVncAjVn3VD2I24YZPyr4g6ojP5p3e85focWV75KpHX7+Pv/rXKORl9c4jAq1
         hskqvbFWyhsIim5dxZzxsV6BbUc4q9TlPekUKta4H90dCjU2R0/MKvmw2m0MFFdOskwS
         5EH/cHNClvMQSujrClgMaPIBzx2d+eu6KMnovx2C0nyfLr6FFp8PKRQfCZ2p2fg/KwlW
         hG7g==
X-Gm-Message-State: AOJu0YyFVYMmSBdthft7icbL7GDMkuAbeAp0OPqPtm91HgUrtI7ypSxX
	V+1teBeU/Fhe3C4dQilSe48O8EWYCk0c/Y3a795ZNQtDjFnOmUc0ESq7Tjw1TJRr43/NzkRjsam
	hCZ3xaqVBzxb8bj8MipK8kWL4uPTypjhF7n0epXNuPcDOroNWQLo=
X-Google-Smtp-Source: AGHT+IGGRM6D9hTWHjtjGlMUp6snqCC/84qOFHQ1BwiqlZGWGsK8A6Qfje7lcdpAij0pjJa1LKLf6vVAXVQx2NhurFM=
X-Received: by 2002:a50:d70a:0:b0:57c:6b49:af5 with SMTP id
 4fb4d7f45d1cf-57cbd69e833mr7462718a12.29.1718636256601; Mon, 17 Jun 2024
 07:57:36 -0700 (PDT)
MIME-Version: 1.0
From: Kelly Choi <kelly.choi@cloud.com>
Date: Mon, 17 Jun 2024 15:57:01 +0100
Message-ID: <CAO-mL=zsrUQypSBRyV18dMZUMM8s3BJBQQ_t7wH9XJYq+NfHDg@mail.gmail.com>
Subject: [Volunteers needed] - Host a local Xen Meetup
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org, 
	xen-announce@lists.xenproject.org
Content-Type: multipart/alternative; boundary="00000000000092bdac061b172f67"

--00000000000092bdac061b172f67
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello Xen Community!

I'm on the lookout for volunteers who are willing to host some local Xen
events or workshops.

The goal of our meetups is to connect our community members. Meetups allow
members (that's you) to network, collaborate, and have a pulse on their
local network. We welcome professionals at all stages of their careers.

Some examples:

   - Hosting a local Xen pub social/meetup in your area
   - Hosting a workshop at a University
   - Hosting a design session
   - Introducing the community to open source and what the Xen Project
   is about
   - Running a joint local meetup with other open-source projects (e.g.
   safety)

I want to get involved, what do I need to do?

   - Contact me (community manager) with some ideas and locations. Many
   companies are open to hosting their office as a meetup space.
   - We'll work closely together to set up an agenda and I'll support with
   promotional activities
   - Gather some interest
   - Submit the budget you would require (e.g. £200 for some pizza during
   the workshop/event)
   - Let us know if you need some swag for the event
   - Set up a date and start inviting your local community
   - All we ask is that you write up a small event summary/take some photos

Many thanks,
Kelly Choi

Community Manager
Xen Project

--00000000000092bdac061b172f67--


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 15:08:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 15:08:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742448.1149258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDxV-0006k2-27; Mon, 17 Jun 2024 15:07:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742448.1149258; Mon, 17 Jun 2024 15:07:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJDxU-0006jv-Vd; Mon, 17 Jun 2024 15:07:48 +0000
Received: by outflank-mailman (input) for mailman id 742448;
 Mon, 17 Jun 2024 15:07:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Jjg=NT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sJDxT-0006jp-Mb
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 15:07:47 +0000
Received: from fhigh1-smtp.messagingengine.com
 (fhigh1-smtp.messagingengine.com [103.168.172.152])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5827efd0-2cbb-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 17:07:44 +0200 (CEST)
Received: from compute5.internal (compute5.nyi.internal [10.202.2.45])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 9F21611401D7;
 Mon, 17 Jun 2024 11:07:41 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute5.internal (MEProxy); Mon, 17 Jun 2024 11:07:41 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 17 Jun 2024 11:07:40 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5827efd0-2cbb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718636861;
	 x=1718723261; bh=zCVsZVjhO+hN9MCUoE86QP37zx9/iqxqHFwDyh/DIpM=; b=
	FPSHjFNgKZlB7kmpd+SSTxnpPxLhHrkZ3behBc6ElGxpuYJK5oodeXPkhWlblC/f
	s2Iu8wxWE9088NQw7vF6ATAcaHIjury195Q8x5tBTzuDjaS7kR8IuZ1p1P0jt7pz
	txQDlA6VLgp8vpq6VL/aszxRRC42/Ml6JCP4e5DVk6M/wp8+Vfhi6f5xnFqQcsb1
	pzwfnnjyY4SNcfPd3JQsq8JXhRznrta1yiuwyBtDynrFFpOvtz/p3v2Urcn6V2A0
	fdJn75K1f3Mbps4IYrhwjm+2B4I/diVgPDIPzFguFmCtNFuF6PcK4Isxc68P2OuI
	H1ErpHl1UpmS5qa8hqCxFg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718636861; x=1718723261; bh=zCVsZVjhO+hN9MCUoE86QP37zx9/
	iqxqHFwDyh/DIpM=; b=bYCjz7FTwR1d0I7tJl1/0WO/WVQ1CdApK//MRQjW/Mk3
	xlFW0QPnjUIsAphW32LsZdaHJ/otc9JgN7rp8aZ+Z3yessZ8chUOnqOGUVmgo9so
	K6dzVL2BBvNj3lCAFacpW4I0uCeB7b0tggm/jafWA+wQW6cmiXZmlAgwZL1pX2QV
	nLxqjdcFt6S4OfyEFqlOqfCdAtufzN6jl1aL1jDt5nKmNCgHWOz7EIqE5XisBJnI
	5Sb0d3RTXyItAreh9p9hAL9TvqBmruaLuXHLSHe4f08kV8p75KbrnelCkpHSrkza
	Lh6goJiDcXaybW0ZF9ggLr+cvw8lJug398aPqhV31Q==
X-ME-Sender: <xms:PFFwZjnt_sYQFRQsRe4cmFAyE9-a3nqJFZiIMevSMF5i2awBC6btYw>
    <xme:PFFwZm1TLiqeVaLRkvkKP4AJZaccdoNvsTQWInOLs9KWjsIu3SXb9fogcpiV8MYH3
    164pH04SrqMW3s>
X-ME-Received: <xmr:PFFwZprLifrWvpwLtbvlcQTk8PdHtuYCjMInR7lWHtqzzFsi2Q-0iq8vNbgsBgG0dDoiSv_RVj1_NZpdE1OZXBc_vbYgP1gtshT97OnvFo09QDYk>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedvhedgkeegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteekuddvfedtheethedttdfgleet
    ffeghfduieefffdutdejtddvkeeifefhuddunecuffhomhgrihhnpehorghsihhsqdhoph
    gvnhdrohhrghenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhr
    ohhmpeguvghmihesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:PVFwZrl4c_xZ08tTlAAUICNqQ-c5xvtfB6yhuVMSIvdT-DYQ6PQtUA>
    <xmx:PVFwZh1hcDAiW7FyrVM_Ey8a58p3dDzG7gf2YS78YorSVU_WLKIv-g>
    <xmx:PVFwZqsDvMRUgg8xk6orCXMP5WwmAq3wrVoVbJ6g3n4Whdr-H8hTtg>
    <xmx:PVFwZlXDms1JG9MjR58cDR-wYFM5uTkfCQ7BgakeWr0afcaFIVXvKw>
    <xmx:PVFwZroS1g22U3UHAJ6VdNsh3KHFWYgwStsbXmQ1r8YrFaHJr3TWLITU>
Feedback-ID: iac594737:Fastmail
Date: Mon, 17 Jun 2024 11:07:25 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZnBRO7hlCy2_HgwW@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
 <Zm_p1QvoZcjQ4gBa@macbook>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="HOhfUfe+b94jgfrk"
Content-Disposition: inline
In-Reply-To: <Zm_p1QvoZcjQ4gBa@macbook>


--HOhfUfe+b94jgfrk
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 17 Jun 2024 11:07:25 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen

On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monn=C3=A9 wrote:
> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> > On Fri, Jun 14, 2024 at 10:39:37AM +0200, Roger Pau Monn=C3=A9 wrote:
> > > On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> > > > On 14.06.2024 09:21, Roger Pau Monn=C3=A9 wrote:
> > > > > On Fri, Jun 14, 2024 at 08:38:51AM +0200, Jan Beulich wrote:
> > > > >> On 13.06.2024 20:43, Demi Marie Obenour wrote:
> > > > >>> GPU acceleration requires that pageable host memory be able to =
be mapped
> > > > >>> into a guest.
> > > > >>
> > > > >> I'm sure it was explained in the session, which sadly I couldn't=
 attend.
> > > > >> I've been asking Ray and Xenia the same before, but I'm afraid i=
t still
> > > > >> hasn't become clear to me why this is a _requirement_. After all=
 that's
> > > > >> against what we're doing elsewhere (i.e. so far it has always be=
en
> > > > >> guest memory that's mapped in the host). I can appreciate that i=
t might
> > > > >> be more difficult to implement, but avoiding to violate this fun=
damental
> > > > >> (kind of) rule might be worth the price (and would avoid other
> > > > >> complexities, of which there may be lurking more than what you e=
numerate
> > > > >> below).
> > > > >=20
> > > > > My limited understanding (please someone correct me if wrong) is =
that
> > > > > the GPU buffer (or context I think it's also called?) is always
> > > > > allocated from dom0 (the owner of the GPU).  The underling memory
> > > > > addresses of such buffer needs to be mapped into the guest.  The
> > > > > buffer backing memory might be GPU MMIO from the device BAR(s) or
> > > > > system RAM, and such buffer can be paged by the dom0 kernel at any
> > > > > time (iow: changing the backing memory from MMIO to RAM or vice
> > > > > versa).  Also, the buffer must be contiguous in physical address
> > > > > space.
> > > >=20
> > > > This last one in particular would of course be a severe restriction.
> > > > Yet: There's an IOMMU involved, isn't there?
> > >=20
> > > Yup, IIRC that's why Ray said it was much more easier for them to
> > > support VirtIO GPUs from a PVH dom0 rather than classic PV one.
> > >=20
> > > It might be easier to implement from a classic PV dom0 if there's
> > > pv-iommu support, so that dom0 can create it's own contiguous memory
> > > buffers from the device PoV.
> >=20
> > What makes PVH an improvement here?  I thought PV dom0 uses an identity
> > mapping for the IOMMU, while a PVH dom0 uses an IOMMU that mirrors the
> > dom0 second-stage page tables.
>=20
> Indeed, hence finding a physically contiguous buffer on classic PV is
> way more complicated, because the IOMMU identity maps mfns, and the PV
> address space can be completely scattered.
>=20
> OTOH, on PVH the IOMMU page tables are the same as the second stage
> translation, and hence the physical address is way more compact (as it
> would be on native).

Ah, _that_ is what I missed.  I didn't realize that the physical address
space of PV guests was so scattered.

> > In both cases, the device physical
> > addresses are identical to dom0's physical addresses.
>
> Yes, but a PV dom0 physical address space can be very scattered.
>
> IIRC there's a hypercall to request physically contiguous memory for
> PV, but you don't want to be using that every time you allocate a
> buffer (not sure it would support the sizes needed by the GPU
> anyway).

That makes sense, thanks!

> > PV is terrible for many reasons, so I'm okay with focusing on PVH dom0,
> > but I'd like to know why there is a difference.
> >
> > > > > I'm not sure it's possible to ensure that when using system RAM such
> > > > > memory comes from the guest rather than the host, as it would likely
> > > > > require some very intrusive hooks into the kernel logic, and
> > > > > negotiation with the guest to allocate the requested amount of
> > > > > memory and hand it over to dom0.  If the maximum size of the buffer is
> > > > > known in advance maybe dom0 can negotiate with the guest to allocate
> > > > > such a region and grant it access to dom0 at driver attachment time.
> > > >
> > > > Besides the thought of transiently converting RAM to kind-of-MMIO, this
> > >
> > > As a note here, changing the type to MMIO would likely involve
> > > modifying the EPT/NPT tables to propagate the new type.  On a PVH dom0
> > > this would likely involve shattering superpages in order to set the
> > > correct memory types.
> > >
> > > Depending on how often and how random those system RAM changes are
> > > necessary this could also create contention on the p2m lock.
> > >
> > > > makes me think of another possible option: Could Dom0 transfer ownership
> > > > of the RAM that wants mapping in the guest (remotely resembling
> > > > grant-transfer)? Would require the guest to have ballooned down enough
> > > > first, of course. (In both cases it would certainly need working out how
> > > > the conversion / transfer back could be made to work safely and reasonably
> > > > cleanly.)
> > >
> > > Maybe.  The fact the guest needs to balloon down that amount of memory
> > > seems weird to me, as from the guest PoV that mapped memory is
> > > MMIO-like and not system RAM.
> >
> > I don't like it either.  Furthermore, this would require changes to the
> > virtio-GPU driver in the guest, which I'd prefer to avoid.
>
> IMO it would be helpful if you (or someone) could write the full
> specification of how VirtIO GPU is supposed to work right now (with
> the KVM model I assume?) as it would be a good starting point to
> provide suggestions about how to make it work (or adapt it) on Xen.
>
> I don't think the high level layers on top of VirtIO GPU are relevant,
> but it's important to understand the protocol between the VirtIO GPU
> front and back ends.

virtio-GPU is part of the OASIS VirtIO standard [1].

[1]: https://docs.oasis-open.org/virtio/virtio/v1.3/virtio-v1.3.html

> So far I've only had scattered conversations about what's needed, but not
> a formal write-up of how this is supposed to work.

My understanding is that mapping GPU buffers into guests ("blob
resources" in virtio-GPU terms) is the only part of virtio-GPU that
didn't just work.  Furthermore, any solution that uses Linux's
kernel-mode GPU driver on the host will have the same requirements.
I don't consider writing a bespoke GPU driver that uses caller-allocated
buffers to be a reasonable solution that can support many GPU models.
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 15:10:39 2024
Message-ID: <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
Date: Mon, 17 Jun 2024 17:10:32 +0200
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
From: Jan Beulich <jbeulich@suse.com>
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
In-Reply-To: <20240617090035.839640-5-Jiqian.Chen@amd.com>

On 17.06.2024 11:00, Jiqian Chen wrote:
> In a PVH dom0, Linux's normal local interrupt machinery is used: when
> an IRQ is allocated for a GSI, the allocation is dynamic and
> first-come, first-served.  IRQ numbers are handed out in ascending
> order, but GSIs are not necessarily requested in ascending order
> (GSI 38 may be requested before GSI 28), so an IRQ number need not
> equal its GSI number.

Hmm, see my earlier explanations on patch 5: GSI and IRQ generally aren't
the same anyway. Therefore this part of the description, while not wrong,
is at least at risk of misleading people.

> --- a/tools/include/xen-sys/Linux/privcmd.h
> +++ b/tools/include/xen-sys/Linux/privcmd.h
> @@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
>  	__u64 addr;
>  } privcmd_mmap_resource_t;
>  
> +typedef struct privcmd_gsi_from_dev {
> +	__u32 sbdf;

That's PCI-centric, without struct and IOCTL names reflecting this fact.

> +	int gsi;

Is "int" legitimate to use here? Doesn't this want to similarly be __u32?

> --- a/tools/include/xencall.h
> +++ b/tools/include/xencall.h
> @@ -113,6 +113,8 @@ int xencall5(xencall_handle *xcall, unsigned int op,
>               uint64_t arg1, uint64_t arg2, uint64_t arg3,
>               uint64_t arg4, uint64_t arg5);
>  
> +int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf);

Hmm, something (by name at least) OS-specific being in the public header
and ...

> --- a/tools/libs/call/libxencall.map
> +++ b/tools/libs/call/libxencall.map
> @@ -10,6 +10,8 @@ VERS_1.0 {
>  		xencall4;
>  		xencall5;
>  
> +		xen_oscall_gsi_from_dev;

... map file. I'm not sure things are intended to be this way.

> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>  #endif
>  }
>  
> +#define PCI_DEVID(bus, devfn)\
> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
> +
> +#define PCI_SBDF(seg, bus, devfn) \
> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))

I'm not a maintainer of this file; if I were, I'd ask that for readability's
sake all excess parentheses be dropped from these.

> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>          goto out_no_irq;
>      }
>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
> +#ifdef CONFIG_X86
> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
> +                        (PCI_DEVFN(pci->dev, pci->func)));
> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
> +        /*
> +         * Old kernel version may not support this function,

Just kernel?

> +         * so if fail, keep using irq; if success, use gsi
> +         */
> +        if (gsi > 0) {
> +            irq = gsi;

I'm still puzzled by this, when by now I think we've sufficiently clarified
that IRQs and GSIs use two distinct numbering spaces.

Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
the assumption that it'll fail. What if we decide to make the functionality
available there, too (if only for informational purposes, or for
consistency)? Suddenly your fallback logic wouldn't work anymore, and
you'd call ...

> +        }
> +#endif
>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);

... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
you strictly want to avoid the call on PV Dom0.

Also for PVH Dom0: I don't think I've seen changes to the hypercall
handling, yet. How can that be when GSI and IRQ aren't the same, and hence
incoming GSI would need translating to IRQ somewhere? I can once again only
assume all your testing was done with IRQs whose numbers happened to match
their GSI numbers. (The difference, imo, would also need calling out in the
public header, where the respective interface struct(s) is/are defined.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 15:18:03 2024
Message-ID: <ZnBTn6FXXOpnBJCb@itl-email>
Date: Mon, 17 Jun 2024 11:17:49 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
 Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
In-Reply-To: <10c2ab19-e2b7-4f5f-ae73-213e0194bb8e@suse.com>


On Mon, Jun 17, 2024 at 11:07:54AM +0200, Jan Beulich wrote:
> On 14.06.2024 18:44, Demi Marie Obenour wrote:
> > On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> >> On 14.06.2024 09:21, Roger Pau Monné wrote:
> >>> I'm not sure it's possible to ensure that when using system RAM such
> >>> memory comes from the guest rather than the host, as it would likely
> >>> require some very intrusive hooks into the kernel logic, and
> >>> negotiation with the guest to allocate the requested amount of
> >>> memory and hand it over to dom0.  If the maximum size of the buffer is
> >>> known in advance maybe dom0 can negotiate with the guest to allocate
> >>> such a region and grant it access to dom0 at driver attachment time.
> >>
> >> Besides the thought of transiently converting RAM to kind-of-MMIO, this
> >> makes me think of another possible option: Could Dom0 transfer ownership
> >> of the RAM that wants mapping in the guest (remotely resembling
> >> grant-transfer)? Would require the guest to have ballooned down enough
> >> first, of course. (In both cases it would certainly need working out how
> >> the conversion / transfer back could be made to work safely and reasonably
> >> cleanly.)
> >
> > The kernel driver needs to be able to reclaim the memory at any time.
> > My understanding is that this is used to migrate memory between VRAM and
> > system RAM.  It might also be used for other purposes.
>
> Except: How would the kernel driver reclaim the memory when it's mapped
> by a DomU?

The Xen driver in dom0 will register for MMU notifier callbacks.  When
the kernel driver reclaims the memory, the Xen driver will be notified,
and it will issue a hypercall that tells Xen to remove the memory from
the DomU's address space.  Subsequent accesses to the pages will trigger
a stage 2 translation fault that is handled by an IOREQ server.

For I/O memory, this is already possible via XEN_DOMCTL_memory_mapping.
The proposal in this thread is to make this possible for system RAM as
well.
--
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 15:32:37 2024
Message-ID: <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
Date: Mon, 17 Jun 2024 17:32:21 +0200
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
From: Jan Beulich <jbeulich@suse.com>
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Stewart Hildebrand <Stewart.Hildebrand@amd.com>,
 Huang Rui <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
In-Reply-To: <20240617090035.839640-6-Jiqian.Chen@amd.com>

On 17.06.2024 11:00, Jiqian Chen wrote:
> Some types of domain, such as PVH, don't have PIRQs and don't do
> PHYSDEVOP_map_pirq for each GSI. When passing through a device
> to a guest based on a PVH dom0, the call chain
> pci_add_dm_done->XEN_DOMCTL_irq_permission fails in the function
> domain_pirq_to_irq, because PVH has no mapping between GSI, PIRQ
> and IRQ on the Xen side.
> What's more, the current hypercall XEN_DOMCTL_irq_permission
> requires passing in a PIRQ, which is not suitable for a dom0 that
> doesn't have PIRQs.
> 
> So, add a new hypercall XEN_DOMCTL_gsi_permission to grant the
> permission of an IRQ (translated from a GSI) to a domU when dom0
> has no PIRQs.
> 
> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
> Signed-off-by: Huang Rui <ray.huang@amd.com>
> ---
> RFC: this needs review and must wait for the corresponding third patch on the Linux kernel side to be merged.
> ---
>  tools/include/xenctrl.h            |  5 +++
>  tools/libs/ctrl/xc_domain.c        | 15 +++++++
>  tools/libs/light/libxl_pci.c       | 67 +++++++++++++++++++++++++++---
>  xen/arch/x86/domctl.c              | 43 +++++++++++++++++++
>  xen/arch/x86/include/asm/io_apic.h |  2 +
>  xen/arch/x86/io_apic.c             | 17 ++++++++
>  xen/arch/x86/mpparse.c             |  3 +-
>  xen/include/public/domctl.h        |  8 ++++
>  xen/xsm/flask/hooks.c              |  1 +
>  9 files changed, 153 insertions(+), 8 deletions(-)
> 
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index a0381f74d24b..f3feb6848e25 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1382,6 +1382,11 @@ int xc_domain_irq_permission(xc_interface *xch,
>                               uint32_t pirq,
>                               bool allow_access);
>  
> +int xc_domain_gsi_permission(xc_interface *xch,
> +                             uint32_t domid,
> +                             uint32_t gsi,
> +                             bool allow_access);
> +
>  int xc_domain_iomem_permission(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long first_mfn,
> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
> index f2d9d14b4d9f..8540e84fda93 100644
> --- a/tools/libs/ctrl/xc_domain.c
> +++ b/tools/libs/ctrl/xc_domain.c
> @@ -1394,6 +1394,21 @@ int xc_domain_irq_permission(xc_interface *xch,
>      return do_domctl(xch, &domctl);
>  }
>  
> +int xc_domain_gsi_permission(xc_interface *xch,
> +                             uint32_t domid,
> +                             uint32_t gsi,
> +                             bool allow_access)
> +{
> +    struct xen_domctl domctl = {
> +        .cmd = XEN_DOMCTL_gsi_permission,
> +        .domain = domid,
> +        .u.gsi_permission.gsi = gsi,
> +        .u.gsi_permission.allow_access = allow_access,
> +    };
> +
> +    return do_domctl(xch, &domctl);
> +}
> +
>  int xc_domain_iomem_permission(xc_interface *xch,
>                                 uint32_t domid,
>                                 unsigned long first_mfn,
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 376f91759ac6..f027f22c0028 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1431,6 +1431,9 @@ static void pci_add_dm_done(libxl__egc *egc,
>      uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
>      uint32_t domainid = domid;
>      bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
> +#ifdef CONFIG_X86
> +    xc_domaininfo_t info;
> +#endif
>  
>      /* Convenience aliases */
>      bool starting = pas->starting;
> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>              rc = ERROR_FAIL;
>              goto out;
>          }
> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
> +#ifdef CONFIG_X86
> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);

The hard-coded 0 imposes limitations. Ideally you would use DOMID_SELF, but
I didn't check whether that can be used with the underlying hypercall(s).
Otherwise you want to pass the actual domid of the local domain here.

>          if (r < 0) {
> -            LOGED(ERROR, domainid,
> -                  "xc_domain_irq_permission irq=%d (error=%d)", irq, r);
> +            LOGED(ERROR, domainid, "getdomaininfo failed (error=%d)", errno);
>              fclose(f);
>              rc = ERROR_FAIL;
>              goto out;
>          }
> +        if (info.flags & XEN_DOMINF_hvm_guest &&

You want to parenthesize the & here.

> +            !(info.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ) &&
> +            gsi > 0) {

So if gsi < 0, a failure of xc_domain_getinfo_single() would needlessly result
in the failure of this function?

> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -36,6 +36,7 @@
>  #include <asm/xstate.h>
>  #include <asm/psr.h>
>  #include <asm/cpu-policy.h>
> +#include <asm/io_apic.h>
>  
>  static int update_domain_cpu_policy(struct domain *d,
>                                      xen_domctl_cpu_policy_t *xdpc)
> @@ -237,6 +238,48 @@ long arch_do_domctl(
>          break;
>      }
>  
> +    case XEN_DOMCTL_gsi_permission:
> +    {
> +        unsigned int gsi = domctl->u.gsi_permission.gsi;
> +        int irq;
> +        bool allow = domctl->u.gsi_permission.allow_access;

See my earlier comments on this conversion of 8 bits into just one.

> +        /* Check all pads are zero */
> +        ret = -EINVAL;
> +        for ( i = 0;
> +              i < sizeof(domctl->u.gsi_permission.pad) /
> +                  sizeof(domctl->u.gsi_permission.pad[0]);

Please don't open-code ARRAY_SIZE().

> +              ++i )
> +            if ( domctl->u.gsi_permission.pad[i] )
> +                goto out;
> +
> +        /*
> +         * If current domain is PV or it has PIRQ flag, it has a mapping
> +         * of gsi, pirq and irq, so it should use XEN_DOMCTL_irq_permission
> +         * to grant irq permission.
> +         */
> +        ret = -EOPNOTSUPP;
> +        if ( is_pv_domain(currd) || has_pirq(currd) )
> +            goto out;

I'm curious what other x86 maintainers think: I for one would not impose such
an artificial restriction.

> +        ret = -EINVAL;
> +        if ( gsi >= nr_irqs_gsi || (irq = gsi_2_irq(gsi)) < 0 )
> +            goto out;
> +
> +        ret = -EPERM;
> +        if ( !irq_access_permitted(currd, irq) ||
> +             xsm_irq_permission(XSM_HOOK, d, irq, allow) )
> +            goto out;
> +
> +        if ( allow )
> +            ret = irq_permit_access(d, irq);
> +        else
> +            ret = irq_deny_access(d, irq);
> +
> +    out:

Please use a less generic name for such a label local to just one case
block. However, with ..

> +        break;

.. this being all that's done here: Why have a label in the first place?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 15:34:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 15:34:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742493.1149299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJENO-0006YN-Q1; Mon, 17 Jun 2024 15:34:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742493.1149299; Mon, 17 Jun 2024 15:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJENO-0006YG-N3; Mon, 17 Jun 2024 15:34:34 +0000
Received: by outflank-mailman (input) for mailman id 742493;
 Mon, 17 Jun 2024 15:34:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gp4O=NT=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJENN-0006YA-U0
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 15:34:33 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b37d18c-2cbf-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 17:34:11 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id
 a640c23a62f3a-a6efae34c83so561540466b.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 08:34:11 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb7438a18sm6500989a12.82.2024.06.17.08.34.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 08:34:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b37d18c-2cbf-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718638450; x=1719243250; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Kau6RjO1EJJYBkiTGtGs2LEHEzcR7Sfp+HbdphasrAU=;
        b=rdgsZd5teoNPA/j/tMNk1pOF7jVbyts5hjis7W33M8qxwrz7dknUKCxTzoLYi8afbn
         4Zmmk9UB0Gcnui6mdkFQ+jEjizcIYLxEC5u3biwiXEAxRKDxYwFJulpFGVd3o/+giv/F
         fiSiVVn/Aeby7BeBq2oO8BPoV5SpZZYYIdXtQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718638450; x=1719243250;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Kau6RjO1EJJYBkiTGtGs2LEHEzcR7Sfp+HbdphasrAU=;
        b=gR96O18VjGJYd/YWWxIUNWGwD04XEF6+X2aJiKGQ2pwKBXsULlfNG5DoicpDAV8pvC
         rYw/09q6n2hJ+IBP0P886IfINSBn3WTAkwR3ht86Iu5KGyYQtNby0qqj3UBQlrxaa7cF
         1lHPWMPX1ebJFfaoQAnf9AC2lmAY6JNn7bsG3dSExyACWL2GAIPNKPBXaNU3zp2ajDit
         sU7F54ErHAGioyVG3J1BN1PMpaODVyJ45f2mtsFFNM1emVndfVDd7mJNlgn2TPtEQdcl
         YpxvytK+ECSLP/ixaWSGy2ssfwzPDO8RQmYcgap2ncVTIiq7cFoRwhbwgDiM+O9Epsjq
         rGLw==
X-Forwarded-Encrypted: i=1; AJvYcCVp9Z6BJy0X4unASyFgsRjeFkShers4vFqGl36Fh5OIGMPb6GIT38Gt30WXTWmuC2YKDLYHdxZ7AO4utncOFz0MbQcr7FhlnPBeAYNgono=
X-Gm-Message-State: AOJu0Yzo3StLoRqU1OCfkvYqFCDlNmfCx+Ifj/NNA5Ws480z5qoHJGHt
	RTpV4AwZFsA7xSNpIqB8IcZvmZmhB+jm12e3AUYgMG+LeSKHgxAba3xMCWtRnpI=
X-Google-Smtp-Source: AGHT+IHOuRP1n1WsIiXO/pAVmXfGzUoNYbcVUQBHo+wA+r1uywdzBQymQfjmzIOxsXjULY/L1xWyDA==
X-Received: by 2002:a17:907:d312:b0:a6f:7d7e:ab0c with SMTP id a640c23a62f3a-a6f7d7eabd9mr328638166b.64.1718638450620;
        Mon, 17 Jun 2024 08:34:10 -0700 (PDT)
Message-ID: <a65a83be-1236-4699-8124-c0bd809c4b97@citrix.com>
Date: Mon, 17 Jun 2024 16:34:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [OSSTEST PATCH] preseed_base: Use "keep" NIC NamePolicy when
 "force-mac-address"
To: Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
Cc: Luca Fancellu <luca.fancellu@arm.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Anthony PERARD <anthony.perard@vates.tech>
References: <20240617144051.29547-1-anthony@xenproject.org>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <20240617144051.29547-1-anthony@xenproject.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17/06/2024 3:40 pm, Anthony PERARD wrote:
> diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
> index 3545f3fd..d974fea5 100644
> --- a/Osstest/Debian.pm
> +++ b/Osstest/Debian.pm
> @@ -972,7 +972,19 @@ END
>          # is going to be added to dom0's initrd, which is used by some guests
>          # (created with ts-debian-install).
>          preseed_hook_installscript($ho, $sfx,
> -            '/usr/lib/base-installer.d/', '05ifnamepolicy', <<'END');
> +            '/usr/lib/base-installer.d/', '05ifnamepolicy',
> +            $ho->{Flags}{'force-mac-address'} ? <<'END' : <<'END');

The conditional looks suspicious if both options are <<'END'.

Doesn't this just write 70-eth-keep-policy.link unconditionally?

~Andrew

> +#!/bin/sh -e
> +linkfile=/target/etc/systemd/network/70-eth-keep-policy.link
> +mkdir -p `dirname $linkfile`
> +cat > $linkfile <<EOF
> +[Match]
> +Type=ether
> +Driver=!vif
> +[Link]
> +NamePolicy=keep
> +EOF
> +END
>  #!/bin/sh -e
>  linkfile=/target/etc/systemd/network/90-eth-mac-policy.link
>  mkdir -p `dirname $linkfile`
>


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 15:39:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 15:39:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742501.1149309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJESA-0007eV-DD; Mon, 17 Jun 2024 15:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742501.1149309; Mon, 17 Jun 2024 15:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJESA-0007eO-AF; Mon, 17 Jun 2024 15:39:30 +0000
Received: by outflank-mailman (input) for mailman id 742501;
 Mon, 17 Jun 2024 15:39:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E9YK=NT=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJES9-0007eI-0p
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 15:39:29 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c7097d10-2cbf-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 17:39:26 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6f1dc06298so540934066b.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 08:39:26 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f416dfsm523948266b.164.2024.06.17.08.39.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 08:39:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7097d10-2cbf-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718638766; x=1719243566; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=nxjO6maD7OtXVQwEr6cHb5BiQfSIKF4D4k7lDp7g2J4=;
        b=Rm8Azo7WyXg5/s8fzLbnMRnRcjEjTzFeqm0dtSWOx8FUc/ggDFLID/fzaPvgKVg8mH
         dWrqzEDFHkyN+7YXEo1YkEmG2eh2/Cj3kfKiqRGCEDJonzUKqGJ88emefIfpfIswikw9
         L1aAX8FGADdGSZjdN9jxrJdwG2VBcXWkBy9MoaZgNbyX98wkBw9ba7/oYELM96KFiqB/
         aHn4hTg2r3EmwwMJw3rhbmH2GpEPz/jwpEqVSwEAOLhRjlqUYyK21NBbfzeaky4KzsV1
         hcjceewvGTkOA2WNMdPXkrHCDZpz6AuHn2YEPs2Z8POa4CeqJXqTsmfv606/E+3qiPLk
         4nkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718638766; x=1719243566;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nxjO6maD7OtXVQwEr6cHb5BiQfSIKF4D4k7lDp7g2J4=;
        b=Ax9XF3gOzd7hhRY6QJpEaLEA3NETi7YSgj88CH6O8dpmEnmD9JA3nF9VRXMr2gpThc
         7MUuywVget0LCHj6uTAbpDe0BZn/AgQV/bWOIETsNd2Drr2P2c/WlEFEtX1Djl8n9py2
         SlyfRK57I7ZT7xlM6sPezgMrpVHbu1Jmx9E5Jdu+OFYbB5SwXx/b1fIX7KsOpkSb0heP
         ItJ96gC9uTPSgKPUUtkxGonkhujLvjw7Udmbg4Cqwu055hz1Y97gxnjBh0etN4RiaHyo
         oQZQL/fYfiPyuJkS1xonHBwmlfIbA0K8OAJ91dulQkv6i2Hi2jcr3zGtTigsbd6cJfOA
         K+CQ==
X-Forwarded-Encrypted: i=1; AJvYcCUGM1Ik9jqX5+V/I9YM13qEyDllmr0HTgVdzAr8VsLz0eTOn7s9fDoFvymxn/rSs4mcbltvBSTJg29KB6ib/FoG+wMF3ghQ3MJND6nNlVw=
X-Gm-Message-State: AOJu0Yy4KbQXuZnIE1NP5wJZ6eeH0ikGCrQP1v2TM88KPn4YzZnu1d9k
	D7J2l8Eff4svuOFKVGS8ZSjBOIuYautlZJU9zaZ70nJwj0ltMhiQ4MQHtUehSw==
X-Google-Smtp-Source: AGHT+IEwoljcJ8iwGoaglCl8O6rt85NZ5FdDBP8eiQ+QI9Er+gS5jILL4G/xyvxjAGDIqWdFWL6wVw==
X-Received: by 2002:a17:907:8b8d:b0:a6f:64cc:ca2e with SMTP id a640c23a62f3a-a6f64cccb75mr854515566b.44.1718638765660;
        Mon, 17 Jun 2024 08:39:25 -0700 (PDT)
Message-ID: <acad1669-ceaf-463b-ad1c-4e290ccccb23@suse.com>
Date: Mon, 17 Jun 2024 17:39:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com> <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com> <Zmxze4a0PZbwcLSb@itl-email>
 <10c2ab19-e2b7-4f5f-ae73-213e0194bb8e@suse.com> <ZnBTn6FXXOpnBJCb@itl-email>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZnBTn6FXXOpnBJCb@itl-email>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.06.2024 17:17, Demi Marie Obenour wrote:
> On Mon, Jun 17, 2024 at 11:07:54AM +0200, Jan Beulich wrote:
>> On 14.06.2024 18:44, Demi Marie Obenour wrote:
>>> On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
>>>> On 14.06.2024 09:21, Roger Pau Monné wrote:
>>>>> I'm not sure it's possible to ensure that when using system RAM such
>>>>> memory comes from the guest rather than the host, as it would likely
>>>>> require some very intrusive hooks into the kernel logic, and
>>>>> negotiation with the guest to allocate the requested amount of
>>>>> memory and hand it over to dom0.  If the maximum size of the buffer is
>>>>> known in advance maybe dom0 can negotiate with the guest to allocate
>>>>> such a region and grant it access to dom0 at driver attachment time.
>>>>
>>>> Besides the thought of transiently converting RAM to kind-of-MMIO, this
>>>> makes me think of another possible option: Could Dom0 transfer ownership
>>>> of the RAM that wants mapping in the guest (remotely resembling
>>>> grant-transfer)? Would require the guest to have ballooned down enough
>>>> first, of course. (In both cases it would certainly need working out how
>>>> the conversion / transfer back could be made work safely and reasonably
>>>> cleanly.)
>>>
>>> The kernel driver needs to be able to reclaim the memory at any time.
>>> My understanding is that this is used to migrate memory between VRAM and
>>> system RAM.  It might also be used for other purposes.
>>
>> Except: How would the kernel driver reclaim the memory when it's mapped
>> by a DomU?
> 
> The Xen driver in dom0 will register for MMU notifier callbacks.  When
> the kernel driver reclaims the memory, the Xen driver will be notified,
> and it will issue a hypercall that tells Xen to remove the memory from
> the DomU's address space.  Subsequent accesses to the pages will trigger
> a stage 2 translation fault that is handled by an IOREQ server.

And such an ioreq server, which I assume isn't going to run in the Dom0
kernel, will then also need keeping up-to-date on holes in the (virtual)
BAR. Oh well ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 16:02:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 16:02:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742510.1149318 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJEoh-0005Qr-RN; Mon, 17 Jun 2024 16:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742510.1149318; Mon, 17 Jun 2024 16:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJEoh-0005Qk-Oh; Mon, 17 Jun 2024 16:02:47 +0000
Received: by outflank-mailman (input) for mailman id 742510;
 Mon, 17 Jun 2024 16:02:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Jjg=NT=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sJEog-0005Qe-2Y
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 16:02:46 +0000
Received: from fout4-smtp.messagingengine.com (fout4-smtp.messagingengine.com
 [103.168.172.147]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0771ac38-2cc3-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 18:02:43 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailfout.nyi.internal (Postfix) with ESMTP id 2A4EE1380257;
 Mon, 17 Jun 2024 12:02:42 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Mon, 17 Jun 2024 12:02:42 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 17 Jun 2024 12:02:40 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0771ac38-2cc3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718640162;
	 x=1718726562; bh=Acb1tUk4yc+J2PZ0FemvMFzaNS+v52Y0cg8sMBNp2wE=; b=
	iSZqrenb4ST6kXzdZzoQwJe0r6xhzP02F9VO1I77btueZcTZ3JKmrq8lwrcbWFKF
	7OUkl6mDQG/l+jq5qUxfmu/FOPh+RCC1kpDPsNzQuA8VssyCfjNjFjX7OVMOUFHM
	JSePBxDwHVw5yGiO8uUJeNSULLjMQQ6Eptpcu6Dx0i0/vx7zpG6l00HpU6ndj8Oc
	osgOQNfKzNFDgNtoos/QUT8aDaXcjhWDTITU3JKW+0JLfxbXlnzZlAS9MW3tpP9d
	w4jB2dgTPshElOWwSnLGik1E26Egdyf/gU8bFUQu6pX2/Nem4hliDd+YL+r1z2gZ
	LukpnaEFkN+trw+TPQHzTw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718640162; x=1718726562; bh=Acb1tUk4yc+J2PZ0FemvMFzaNS+v
	52Y0cg8sMBNp2wE=; b=AAeT6JA9hGrQbWQxtQtBC520Wj4m0rTax5dXp99J89ir
	fKP9hbHVe7wGdUcMquF0Kvyh705RsB1T2l5OqOuZGdTgE67f/a6Gtc0ICPQlJvhk
	Styt/FK40++lw+VrjeXdD3MBcIFsmeR7geK8ChW5qrV3jYQoY6d9Ntcfm0zEWIZN
	+zQvUlPHtpkwy6THbgbC0EJFcR0dawvD8lUh8FF54CfrbQGE0Rk1xNzRJY+OiR1m
	R2ItP3kTNN900ZTx0er90csTe/HeiLAUq+RLKTRDTv33jD4DI/H4L9PsQq04zxtq
	oGhriM+S2bAlXyDSz91SUeIrYyWRoKVorm3ZTj+/0g==
X-ME-Sender: <xms:IV5wZuIKpk1g4YQpQq7CulvfwU0LB9rR8Tgqdy6LVrvbbhRDj6mU-A>
    <xme:IV5wZmLKx4l0w2ABbNDRB5-WEsBRF1uJmMUVOL3xQBpsuEjQeuhGpZVt-PWd4G5A_
    3QCzb01ml8Ss6U>
X-ME-Received: <xmr:IV5wZuuPna_0O4-WLf5JJ6edG1bikaguozK6FGKDpgbmrZqf8oSL8JjDobvzXVvFYWsdZdyjc87chfVLmAbhz0GAqUCtm1TrTZxGL4nPGrLEtBLo>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedvhedgleeiucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepvdejteegkefhteduhffgteffgeff
    gfduvdfghfffieefieekkedtheegteehffelnecuvehluhhsthgvrhfuihiivgeptdenuc
    frrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhs
    lhgrsgdrtghomh
X-ME-Proxy: <xmx:IV5wZjbV6hC4p2Cj1xMQPrh2zYljeX5gshb79G9WbU8na1TFMMKLSQ>
    <xmx:IV5wZlYQl82D6IZxyHkbmltMDmT3aSm9hKpaaFIbTqOUnoOW68nuoA>
    <xmx:IV5wZvDULWZRQkRZslo1UE5HLeX9Mz8_0Qr5ccAJeIaA3mCmRw1K5A>
    <xmx:IV5wZrbbsI6fuFa7H8k0eQ-OwhpA6y77tr6GLazlQELTIQuqru4l4Q>
    <xmx:Il5wZqPsE13fmFglEQZXmEK6gtnQZHGULqUbGcgIkxRr9e8Fsl_sIjat>
Feedback-ID: iac594737:Fastmail
Date: Mon, 17 Jun 2024 12:02:36 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZnBeH_SXvFgwCsEZ@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <Zmxze4a0PZbwcLSb@itl-email>
 <10c2ab19-e2b7-4f5f-ae73-213e0194bb8e@suse.com>
 <ZnBTn6FXXOpnBJCb@itl-email>
 <acad1669-ceaf-463b-ad1c-4e290ccccb23@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="eSYgnFjd2QIBSMRA"
Content-Disposition: inline
In-Reply-To: <acad1669-ceaf-463b-ad1c-4e290ccccb23@suse.com>


--eSYgnFjd2QIBSMRA
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 17 Jun 2024 12:02:36 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xenia Ragiadakou <burzalodowa@gmail.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen

On Mon, Jun 17, 2024 at 05:39:23PM +0200, Jan Beulich wrote:
> On 17.06.2024 17:17, Demi Marie Obenour wrote:
> > On Mon, Jun 17, 2024 at 11:07:54AM +0200, Jan Beulich wrote:
> >> On 14.06.2024 18:44, Demi Marie Obenour wrote:
> >>> On Fri, Jun 14, 2024 at 10:12:40AM +0200, Jan Beulich wrote:
> >>>>> On 14.06.2024 09:21, Roger Pau Monné wrote:
> >>>>> I'm not sure it's possible to ensure that when using system RAM such
> >>>>> memory comes from the guest rather than the host, as it would likely
> >>>>> require some very intrusive hooks into the kernel logic, and
> >>>>> negotiation with the guest to allocate the requested amount of
> >>>>> memory and hand it over to dom0.  If the maximum size of the buffer is
> >>>>> known in advance maybe dom0 can negotiate with the guest to allocate
> >>>>> such a region and grant it access to dom0 at driver attachment time.
> >>>>
> >>>> Besides the thought of transiently converting RAM to kind-of-MMIO, this
> >>>> makes me think of another possible option: Could Dom0 transfer ownership
> >>>> of the RAM that wants mapping in the guest (remotely resembling
> >>>> grant-transfer)? Would require the guest to have ballooned down enough
> >>>> first, of course. (In both cases it would certainly need working out how
> >>>> the conversion / transfer back could be made work safely and reasonably
> >>>> cleanly.)
> >>>
> >>> The kernel driver needs to be able to reclaim the memory at any time.
> >>> My understanding is that this is used to migrate memory between VRAM and
> >>> system RAM.  It might also be used for other purposes.
> >>
> >> Except: How would the kernel driver reclaim the memory when it's mapped
> >> by a DomU?
> >
> > The Xen driver in dom0 will register for MMU notifier callbacks.  When
> > the kernel driver reclaims the memory, the Xen driver will be notified,
> > and it will issue a hypercall that tells Xen to remove the memory from
> > the DomU's address space.  Subsequent accesses to the pages will trigger
> > a stage 2 translation fault that is handled by an IOREQ server.
>
> And such an ioreq server, which I assume isn't going to run in the Dom0
> kernel, will then also need keeping up-to-date on holes in the (virtual)
> BAR. Oh well ...

My initial plan was that it _would_ run in the dom0 kernel, because this
results in a cleaner userspace API.  Ultimately I think it is best to go
with whichever approach keeps the kernel code simpler, but I'm not sure.
-- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--eSYgnFjd2QIBSMRA
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZwXh8ACgkQsoi1X/+c
IsFHIw/9GlnPySED9eXtWpbMH9pwdE7KxE/VNsD1QMO3KSyTpHlVTahtHhOrWJ+B
s1le+k6OHjzNap8/c8R9GZMncAvvbhfOhbufnw5Je0nvfOr1XUqIbex22fQZSg8y
7OosYrMu9F9t8WOQwkca2Oge7/4MMgKDWf+20xFHjHbpujUptofWUXTWRCK6k7jA
+awLoT1Kg71ZFxGLo2n1sTS56gi0H2K7oXPE9MGdk/VGVhRiA/s+6TVYeJx09Oyv
ueNj00UciYcez3PfWP+5qc2ziAW3XRo1ZKgnIfxgMdzb5t2dorQJGu65ADckDJ+z
hnfYLy3VSBqgwUhHxacf2na/BsMadFRRJR1Urtb5zWcJ9URJ/YNgX1/Yle5BxpfD
xhfEdUAx2VeCQ/h2Tjr3KweGvmkeaNZ5QjPE7npiXXl7Yz1h/TUhZSYlQ0vEeW0q
mM1CRRFRdSkbdTpYXZSjZ2EWiUher+OGkmsT6ylfcnSEMY00jpXsmoKyINmbKv69
oV7lM0AiJY1l2gQHzxJvnLni8bHHhxkUZD591L6yyKU4hRsbZvRt7L1UM6W3zyg7
tUuZDGygDDygRCy4Sk89xFrXuLrp2cgjQlhQQpb1iHHrc+2HglqTHVg8hq8mce9y
YvdcdTNSkwKIUNAnbxa1lLDneJ83xHOnU5IoC2da7bqlypRgneE=
=OO/s
-----END PGP SIGNATURE-----

--eSYgnFjd2QIBSMRA--


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 16:35:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 16:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742519.1149329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJFJw-0002ON-4I; Mon, 17 Jun 2024 16:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742519.1149329; Mon, 17 Jun 2024 16:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJFJw-0002OG-0e; Mon, 17 Jun 2024 16:35:04 +0000
Received: by outflank-mailman (input) for mailman id 742519;
 Mon, 17 Jun 2024 16:35:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJFJu-0002O6-HF; Mon, 17 Jun 2024 16:35:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJFJu-0002kH-As; Mon, 17 Jun 2024 16:35:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJFJu-0000Vx-08; Mon, 17 Jun 2024 16:35:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJFJt-0000YT-Vo; Mon, 17 Jun 2024 16:35:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DnubjkE1ogvk2LVLk9bop5nDr48EGPR41PdUTrHyR6c=; b=JBHBiqP/Q/YRLy81AYRVAbSBPZ
	pjbfQc0sWxhAXK2DP5TZzveahBdgcVUlRneZIQIoT1Ase48+hy/Kzsx86NPSpCeKFZ9ocBbl9DYn5
	DtU6q9UtWIL9W0vkTi55yYk0/lhSq5wyoWpJ8p37qbOOD0LzOez2gqBGdFGvTDuoSgBs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186382-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186382: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=128513afcdfa77e94c9637e643898e61c8218e34
X-Osstest-Versions-That:
    ovmf=9fc61309bf56aa7863e36b8f418a49ca6d8364d0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 16:35:01 +0000

flight 186382 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186382/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 128513afcdfa77e94c9637e643898e61c8218e34
baseline version:
 ovmf                 9fc61309bf56aa7863e36b8f418a49ca6d8364d0

Last test of basis   186379  2024-06-17 12:12:58 Z    0 days
Testing same since   186382  2024-06-17 14:13:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dun Tan <dun.tan@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9fc61309bf..128513afcd  128513afcdfa77e94c9637e643898e61c8218e34 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:31:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 17:31:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742528.1149338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGBy-0002Be-NP; Mon, 17 Jun 2024 17:30:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742528.1149338; Mon, 17 Jun 2024 17:30:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGBy-0002BX-Ke; Mon, 17 Jun 2024 17:30:54 +0000
Received: by outflank-mailman (input) for mailman id 742528;
 Mon, 17 Jun 2024 17:30:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gp4O=NT=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJGBx-0002BP-5p
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 17:30:53 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 568b5465-2ccf-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 19:30:51 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a6cb130027aso310217666b.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 10:30:49 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cbc852bf5sm5901534a12.58.2024.06.17.10.30.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 Jun 2024 10:30:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 568b5465-2ccf-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718645449; x=1719250249; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=7/xARnIxZ3vDTJqd5Pdnj0Kwhy1Z2i2+RNe91WeCq8E=;
        b=aWjgYv/HEtQZA+u1M4MAl6841naHexWK19re2hPUVV8EyFZ3wp+/WSVSxML2c5fc0d
         h71Ativj9IoFBCfk3tysSZe1Qt88bxyBO73ZdlteeMQIATnSdUqYToE6dnEq3zzcu7+m
         XFiFZUPlkEYpjc4R8otPjCxtPllyEV8Qw/ccg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718645449; x=1719250249;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7/xARnIxZ3vDTJqd5Pdnj0Kwhy1Z2i2+RNe91WeCq8E=;
        b=UGEJ456YVIpxjYaVGuOAi/UBVP4vQSo4mdl46bwbfwowkue0hmoRgXs6/IAxP0Otfi
         EwYMXKPWOilHM/Ej61lSqMW3T94ZdUQJ4A0wfBVuKcg3/Y/gUTGzr9ApvrIYFiYNwozQ
         ZtmB6XGMGTWMSch6y80A57Wg0DIglo9uAuPuGMiPXaMLNzqeznfH9T9FYchtWaMIT0MC
         Hwx/ygxeYYv1H1+ziBUvVflwirBxOiOqMaBiHavwSP30WoJ98n3ub30AkzcPDc0HPFvf
         ImfVbxzSyAFe8pggyFF4nEQ8cGEUVQiVBbuhUBLDlu+Hrp5A+liHZJ/f6m5DizvWUtGl
         xpag==
X-Forwarded-Encrypted: i=1; AJvYcCU7WVUF5XgyuF/3dsshgVHTSREWPRDwpGj7Pw92uADt8Ei2sESfsp9CDO/LahAEhXmVbE3qQe7jtFmWtkxo7hoc8/Z7/eEt3nXNrPWOXI0=
X-Gm-Message-State: AOJu0Yx+pDPs/k9SR8ty3wpOxGNGZD0HIXtX4Zkn8PhibVoOc+48sdLR
	O5FYO0nyEycTeB1N5uOnEQTk6Em7vSLcyltjBnRrelP+Fr28PfbiKXoklknnxIU=
X-Google-Smtp-Source: AGHT+IFLx2aAgn/ANZseNFV/XUwfiU9hY7C3sp5f7YhaRPYMxrhjPZE/yHw/qz/F3OgAMjraiouDLA==
X-Received: by 2002:a50:c056:0:b0:57c:6338:328a with SMTP id 4fb4d7f45d1cf-57cbd6c7589mr9672923a12.30.1718645448831;
        Mon, 17 Jun 2024 10:30:48 -0700 (PDT)
Message-ID: <af1419dd-c9bf-4f15-b2dd-c3883030d4f4@citrix.com>
Date: Mon, 17 Jun 2024 18:30:48 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 3/7] x86/boot: Collect the Raw CPU Policy earlier on boot
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-4-andrew.cooper3@citrix.com>
 <8245f0ce-2964-4ecb-a31d-3e182a6d3e0b@suse.com>
 <6b4ed926-8934-4660-98c7-1856d0c90878@citrix.com>
 <cc1b52d8-163f-443c-8418-aba1c7d77ecb@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <cc1b52d8-163f-443c-8418-aba1c7d77ecb@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17/06/2024 11:25 am, Jan Beulich wrote:
> On 14.06.2024 20:26, Andrew Cooper wrote:
>> On 23/05/2024 4:44 pm, Jan Beulich wrote:
>>> On 23.05.2024 13:16, Andrew Cooper wrote:
>>>> This is a tangle, but it's a small step in the right direction.
>>>>
>>>> xstate_init() is shortly going to want data from the Raw policy.
>>>> calculate_raw_cpu_policy() is sufficiently separate from the other policies to
>>>> be safe to do.
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> Would you mind taking a look at
>>> https://lists.xen.org/archives/html/xen-devel/2021-04/msg01335.html
>>> to make clear (to me at least) in how far we can perhaps find common grounds
>>> on what wants doing when? (Of course the local version I have has been
>>> constantly re-based, so some of the function names would have changed from
>>> what's visible there.)
>> That's been covered several times, at least in part.
>>
>> I want to eventually move the host policy too, but I'm not willing to
>> compound the mess we've currently got just to do it earlier.  It's just
>> creating even more obstacles to doing it nicely.
>>
>> Nothing in this series needs (or indeed should) use the host policy.
> Hmm, I'm irritated: You talk about host policy here, ...
>
>> The same is true of your AMX series.  You're (correctly) breaking the
>> uniform allocation size and (when policy selection is ordered WRT vCPU
>> creation, as discussed) it becomes solely depend on the guest policy.
> ... then guest policy, and ...
>
>> xsave.c really has no legitimate use for the host policy once the
>> uniform allocation size aspect has gone away.
> ... then host policy again.

Yes.  Notice how host policy is always referred to in the negative.

The raw policy should be used for everything pertaining to the
instruction ABI itself, and the guest policy for sizing information.

> Whereas my patch switches to using the raw
> policy, simply to eliminate redundant data.

Except it doesn't.  The latest posted version of your series contains:

-static u32 __read_mostly xsave_cntxt_size;
+#define xsave_cntxt_size (host_cpuid_policy.xstate.max_size | 0)

and you've even stated that I should have acked this patch simply on its
age.

I acked the patches that were good, and you committed them at the
time.  Then I put together this series to fix the latent bugs
which you were making less latent; this series really is the same one
discussed back then, and really does have some 2020/2021 author dates in it.


It is, AFAICT, not safe to move the calculation of the host policy as
early as you did, without arranging for setup_{force,clear}_cap() to
edit the host policy synchronously.  Recalculating a second time later
isn't good enough, because you've created an asymmetry for most of boot
between two pieces of state which are supposed to be in sync, and that
you're intentionally starting to use.  So yes - while I do intend to
eventually make the host policy usable that early too, I'm really not
happy doing so in a manner that has "ticking timebomb" written all over it.

As to xsave_cntxt_size, it can (and should) be eliminated entirely when
xstate_alloc_save_area() can use the guest policy to size the allocation.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 17:39:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742543.1149414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKM-0004cy-6Q; Mon, 17 Jun 2024 17:39:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742543.1149414; Mon, 17 Jun 2024 17:39:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKM-0004az-0p; Mon, 17 Jun 2024 17:39:34 +0000
Received: by outflank-mailman (input) for mailman id 742543;
 Mon, 17 Jun 2024 17:39:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gp4O=NT=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJGKK-00036g-LI
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 17:39:32 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8cdfb16e-2cd0-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 19:39:30 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6f0dc80ab9so704391266b.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 10:39:30 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da4496sm532010666b.8.2024.06.17.10.39.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 10:39:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8cdfb16e-2cd0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718645969; x=1719250769; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=91rqmVIJvtgwrFeMK2tOHbISsSTuJ9u05BoupIBlW0Q=;
        b=dlPLAIWROcYXZC/TEGChqyb2bI0PIbZX5zYiAJg2wLV5M1qOWO+0+Dtzs17Bz4ceVJ
         GAH7ORlU8qPN0DF3trGnF0CQBHdrVg92sIKoH0OkEmvDT4YUWMij3BKa3t9F+yjQlW+n
         Kae2NElpdL9uOvbHI3Cla4uvQl9ce0uzYZOlc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718645969; x=1719250769;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=91rqmVIJvtgwrFeMK2tOHbISsSTuJ9u05BoupIBlW0Q=;
        b=dBs8nasJTnogFK/JdKecSy5zreTkaJPU2WVFTEV2PobpYsbzOkzO0a7A5JEIV+iooR
         56sfkGvRJrXswo6OZM0HzdnVUOKAfxPXh1sykjUXOhPC1aFnJSWQFs0ZrhsjtgcEMIKl
         VWhxCKUO0mU8owFwQba+i3DwDCj14gquxyf4phXFKj0mZPnPmofw6J2spi3JYeP1gH6n
         A34wYfmAONa9nWUaUX/JDYKZS3jsvQibhr9MGARqknprSz7/aV9b7wzDdKBV0PB1cjMR
         AQLn/VLBupyft3M2ID4ooWpFi5p1uzaOH8nqQcO7EebjVffJja/ijV/SXtZ6jq6H+TlI
         /ZEg==
X-Gm-Message-State: AOJu0Yxl1hj+PztDiRAdiFY40bnT9tDNKYMNfaSZtdAzdBGMQqvQLApm
	6loSr//tXt4dPCJEw0wq8ZdAy0iU6smW9l7lL5AwqVCqXMw9PvFhNXWgy+d+rSps/fk93GuAqCc
	76Kk=
X-Google-Smtp-Source: AGHT+IGzn5ZkK1YZ3Abyw+STSNcbhHiblepXrKhd7oNDvDgw69jiQUs3j43NO0ZVmfS9SARRXyaAxQ==
X-Received: by 2002:a17:907:d312:b0:a6f:668b:3434 with SMTP id a640c23a62f3a-a6f668b35f7mr806562166b.31.1718645969387;
        Mon, 17 Jun 2024 10:39:29 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 7/7] x86/defns: Clean up X86_{XCR0,XSS}_* constants
Date: Mon, 17 Jun 2024 18:39:21 +0100
Message-Id: <20240617173921.1755439-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

With the exception of one case in read_bndcfgu() which can use ilog2(),
the *_POS defines are unused.  Drop them.

X86_XCR0_X87 is the name used by both the SDM and APM, rather than
X86_XCR0_FP.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v3:
 * New
---
 xen/arch/x86/i387.c                  |  2 +-
 xen/arch/x86/include/asm/x86-defns.h | 32 ++++++++++------------------
 xen/arch/x86/include/asm/xstate.h    |  4 ++--
 xen/arch/x86/xstate.c                | 18 ++++++++--------
 4 files changed, 23 insertions(+), 33 deletions(-)

diff --git a/xen/arch/x86/i387.c b/xen/arch/x86/i387.c
index 7a4297cc921e..fcdee10a6e69 100644
--- a/xen/arch/x86/i387.c
+++ b/xen/arch/x86/i387.c
@@ -369,7 +369,7 @@ void vcpu_setup_fpu(struct vcpu *v, struct xsave_struct *xsave_area,
         {
             v->arch.xsave_area->xsave_hdr.xstate_bv &= ~XSTATE_FP_SSE;
             if ( fcw_default != FCW_DEFAULT )
-                v->arch.xsave_area->xsave_hdr.xstate_bv |= X86_XCR0_FP;
+                v->arch.xsave_area->xsave_hdr.xstate_bv |= X86_XCR0_X87;
         }
     }
 
diff --git a/xen/arch/x86/include/asm/x86-defns.h b/xen/arch/x86/include/asm/x86-defns.h
index d7602ab225c4..3bcdbaccd3aa 100644
--- a/xen/arch/x86/include/asm/x86-defns.h
+++ b/xen/arch/x86/include/asm/x86-defns.h
@@ -79,25 +79,16 @@
 /*
  * XSTATE component flags in XCR0 | MSR_XSS
  */
-#define X86_XCR0_FP_POS           0
-#define X86_XCR0_FP               (1ULL << X86_XCR0_FP_POS)
-#define X86_XCR0_SSE_POS          1
-#define X86_XCR0_SSE              (1ULL << X86_XCR0_SSE_POS)
-#define X86_XCR0_YMM_POS          2
-#define X86_XCR0_YMM              (1ULL << X86_XCR0_YMM_POS)
-#define X86_XCR0_BNDREGS_POS      3
-#define X86_XCR0_BNDREGS          (1ULL << X86_XCR0_BNDREGS_POS)
-#define X86_XCR0_BNDCSR_POS       4
-#define X86_XCR0_BNDCSR           (1ULL << X86_XCR0_BNDCSR_POS)
-#define X86_XCR0_OPMASK_POS       5
-#define X86_XCR0_OPMASK           (1ULL << X86_XCR0_OPMASK_POS)
-#define X86_XCR0_ZMM_POS          6
-#define X86_XCR0_ZMM              (1ULL << X86_XCR0_ZMM_POS)
-#define X86_XCR0_HI_ZMM_POS       7
-#define X86_XCR0_HI_ZMM           (1ULL << X86_XCR0_HI_ZMM_POS)
+#define X86_XCR0_X87              (_AC(1, ULL) <<  0)
+#define X86_XCR0_SSE              (_AC(1, ULL) <<  1)
+#define X86_XCR0_YMM              (_AC(1, ULL) <<  2)
+#define X86_XCR0_BNDREGS          (_AC(1, ULL) <<  3)
+#define X86_XCR0_BNDCSR           (_AC(1, ULL) <<  4)
+#define X86_XCR0_OPMASK           (_AC(1, ULL) <<  5)
+#define X86_XCR0_ZMM              (_AC(1, ULL) <<  6)
+#define X86_XCR0_HI_ZMM           (_AC(1, ULL) <<  7)
 #define X86_XSS_PROC_TRACE        (_AC(1, ULL) <<  8)
-#define X86_XCR0_PKRU_POS         9
-#define X86_XCR0_PKRU             (1ULL << X86_XCR0_PKRU_POS)
+#define X86_XCR0_PKRU             (_AC(1, ULL) <<  9)
 #define X86_XSS_PASID             (_AC(1, ULL) << 10)
 #define X86_XSS_CET_U             (_AC(1, ULL) << 11)
 #define X86_XSS_CET_S             (_AC(1, ULL) << 12)
@@ -107,11 +98,10 @@
 #define X86_XSS_HWP               (_AC(1, ULL) << 16)
 #define X86_XCR0_TILE_CFG         (_AC(1, ULL) << 17)
 #define X86_XCR0_TILE_DATA        (_AC(1, ULL) << 18)
-#define X86_XCR0_LWP_POS          62
-#define X86_XCR0_LWP              (1ULL << X86_XCR0_LWP_POS)
+#define X86_XCR0_LWP              (_AC(1, ULL) << 62)
 
 #define X86_XCR0_STATES                                                 \
-    (X86_XCR0_FP | X86_XCR0_SSE | X86_XCR0_YMM | X86_XCR0_BNDREGS |     \
+    (X86_XCR0_X87 | X86_XCR0_SSE | X86_XCR0_YMM | X86_XCR0_BNDREGS |    \
      X86_XCR0_BNDCSR | X86_XCR0_OPMASK | X86_XCR0_ZMM |                 \
      X86_XCR0_HI_ZMM | X86_XCR0_PKRU | X86_XCR0_TILE_CFG |              \
      X86_XCR0_TILE_DATA |                                               \
diff --git a/xen/arch/x86/include/asm/xstate.h b/xen/arch/x86/include/asm/xstate.h
index da1d89d2f416..f4a8e5f814a0 100644
--- a/xen/arch/x86/include/asm/xstate.h
+++ b/xen/arch/x86/include/asm/xstate.h
@@ -29,8 +29,8 @@ extern uint32_t mxcsr_mask;
 #define XSAVE_HDR_OFFSET          FXSAVE_SIZE
 #define XSTATE_AREA_MIN_SIZE      (FXSAVE_SIZE + XSAVE_HDR_SIZE)
 
-#define XSTATE_FP_SSE  (X86_XCR0_FP | X86_XCR0_SSE)
-#define XCNTXT_MASK    (X86_XCR0_FP | X86_XCR0_SSE | X86_XCR0_YMM | \
+#define XSTATE_FP_SSE  (X86_XCR0_X87 | X86_XCR0_SSE)
+#define XCNTXT_MASK    (X86_XCR0_X87 | X86_XCR0_SSE | X86_XCR0_YMM | \
                         X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM | \
                         XSTATE_NONLAZY)
 
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 31bf2dc95f57..2acd02449dba 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -313,7 +313,7 @@ void xsave(struct vcpu *v, uint64_t mask)
                            "=m" (*ptr), \
                            "a" (lmask), "d" (hmask), "D" (ptr))
 
-    if ( fip_width == 8 || !(mask & X86_XCR0_FP) )
+    if ( fip_width == 8 || !(mask & X86_XCR0_X87) )
     {
         XSAVE("0x48,");
     }
@@ -366,7 +366,7 @@ void xsave(struct vcpu *v, uint64_t mask)
             fip_width = 8;
     }
 #undef XSAVE
-    if ( mask & X86_XCR0_FP )
+    if ( mask & X86_XCR0_X87 )
         ptr->fpu_sse.x[FPU_WORD_SIZE_OFFSET] = fip_width;
 }
 
@@ -558,7 +558,7 @@ void xstate_free_save_area(struct vcpu *v)
 static bool valid_xcr0(uint64_t xcr0)
 {
     /* FP must be unconditionally set. */
-    if ( !(xcr0 & X86_XCR0_FP) )
+    if ( !(xcr0 & X86_XCR0_X87) )
         return false;
 
     /* YMM depends on SSE. */
@@ -597,7 +597,7 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
     if ( xcr0 == 0 )
         return 0;
 
-    if ( xcr0 <= (X86_XCR0_SSE | X86_XCR0_FP) )
+    if ( xcr0 <= (X86_XCR0_SSE | X86_XCR0_X87) )
         return size;
 
     /*
@@ -605,7 +605,7 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
      * maximum offset+size.  Some states (e.g. LWP, APX_F) are out-of-order
      * with respect to their index.
      */
-    xcr0 &= ~(X86_XCR0_SSE | X86_XCR0_FP);
+    xcr0 &= ~(X86_XCR0_SSE | X86_XCR0_X87);
     for_each_set_bit ( i, &xcr0, 63 )
     {
         const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
@@ -626,14 +626,14 @@ unsigned int xstate_compressed_size(uint64_t xstates)
     if ( xstates == 0 )
         return 0;
 
-    if ( xstates <= (X86_XCR0_SSE | X86_XCR0_FP) )
+    if ( xstates <= (X86_XCR0_SSE | X86_XCR0_X87) )
         return size;
 
     /*
      * For the compressed size, every non-legacy component matters.  Some
      * components require aligning to 64 first.
      */
-    xstates &= ~(X86_XCR0_SSE | X86_XCR0_FP);
+    xstates &= ~(X86_XCR0_SSE | X86_XCR0_X87);
     for_each_set_bit ( i, &xstates, 63 )
     {
         const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
@@ -756,7 +756,7 @@ static void __init noinline xstate_check_sizes(void)
      * layout compatibility with Intel and having a knock-on effect on all
      * subsequent states.
      */
-    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
+    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_X87);
 
     if ( cpu_has_avx )
         check_new_xstate(&s, X86_XCR0_YMM);
@@ -1008,7 +1008,7 @@ uint64_t read_bndcfgu(void)
               : "=m" (*xstate)
               : "a" (X86_XCR0_BNDCSR), "d" (0), "D" (xstate) );
 
-        bndcsr = (void *)xstate + xstate_offsets[X86_XCR0_BNDCSR_POS];
+        bndcsr = (void *)xstate + xstate_offsets[ilog2(X86_XCR0_BNDCSR)];
     }
 
     if ( cr0 & X86_CR0_TS )
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 17:39:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742542.1149408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKL-0004V6-Ij; Mon, 17 Jun 2024 17:39:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742542.1149408; Mon, 17 Jun 2024 17:39:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKL-0004UF-Dw; Mon, 17 Jun 2024 17:39:33 +0000
Received: by outflank-mailman (input) for mailman id 742542;
 Mon, 17 Jun 2024 17:39:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gp4O=NT=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJGKJ-00036g-LD
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 17:39:31 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8c77ddd1-2cd0-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 19:39:29 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id
 a640c23a62f3a-a6f176c5c10so549351266b.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 10:39:29 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da4496sm532010666b.8.2024.06.17.10.39.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 10:39:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c77ddd1-2cd0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718645968; x=1719250768; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iaQNeWXVPJOBqi4Jz3ws3Uw0UDtY4Ak6EMg/xM7DVZo=;
        b=VrFGtfmSSLW8DVJ5vvTBz2Ob+XtmWvZSEd+RK64X477iC0lyGg6e8MOS/E/x1tFjL8
         vBZ+VV4H2fmzcDJ3m0IJJic2VpzyGLF4v87hqB8V6gqv7Ke7bWUm9LfLX6H2QBR9sY9K
         qz0VkGNmcAZox1QtsBWdCcajvFyb5OTTaosdE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718645968; x=1719250768;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iaQNeWXVPJOBqi4Jz3ws3Uw0UDtY4Ak6EMg/xM7DVZo=;
        b=lWTsItn18jIuVvLUXWJdvK8FVMNg+abRB24p3G5pyfdFRMXW5Q06JAnF4bZVZ8EdHH
         UQUf2aNgsjTKorAwnScTsIbLbsxtNugB4aFEjGJJi00LbL7x1Rjd7YWU4imRSug7zDfq
         k2YkXiPhjvJvYEqIQtkuupXHa1lTTkjViCvmGhM7tRowjuCGzIdhfGPDhFSgL142ue1F
         A1HX96teIC9qCk83ylPKj4aaJIDpk54dJO6rd4pOH5Qt9WgkeKnG5fsjfR5CIaOE1XmY
         ZHmMBJTeR8qIo71HnuGMGvYZgoOTXjc1+0TPrSq7DodVVO/nVwEVFhAJZQvXnPaumqio
         OPMQ==
X-Gm-Message-State: AOJu0YyVFnU7tWum9jhfb4wyW9VSasNcuyw+y3nRAY9pLwahK+IRMuHV
	bC8Xn1Knl8SIoluP/2PLel9UzA2SdoaKT8SdJL//QnZpggxlAzyUEQjRYCkYMgl/LIIbwFmWcyW
	KO1U=
X-Google-Smtp-Source: AGHT+IGKrdBfI9Nm+aHZ6UkUA5/JNpbfFSVehqAWMLC2Ni5M753x4xlQAZiw4qCHY7guT4Ih8WAYvg==
X-Received: by 2002:a17:906:2c0d:b0:a64:a091:91f2 with SMTP id a640c23a62f3a-a6f60d40c99mr732397266b.37.1718645968592;
        Mon, 17 Jun 2024 10:39:28 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 6/7] x86/cpuid: Fix handling of XSAVE dynamic leaves
Date: Mon, 17 Jun 2024 18:39:20 +0100
Message-Id: <20240617173921.1755439-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

First, if XSAVE is available in hardware but not visible to the guest, the
dynamic leaves shouldn't be filled in.

Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
host/guest state automatically, but there is provision for "host only" bits to
be set, so the implications are still accurate.

Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
check it at boot.
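The accumulation that xstate_compressed_size() performs can be sketched in
isolation.  The model below is illustrative only: the component table is a
hypothetical stand-in for the real per-component data (which Xen reads from
raw_cpu_policy.xstate.comp[], populated from CPUID leaf 0xD), and the sizes
used are made up.  Legacy x87/SSE state lives in the fixed 576-byte region;
every other enabled component is appended in index order, 64-byte aligned
when the component requires it:

```c
#include <stdint.h>

/* Hypothetical component table entry: not the real Xen structure. */
struct comp { uint32_t size; int align; };

#define XSTATE_AREA_MIN_SIZE 576 /* FXSAVE (512) + XSAVE header (64) */

static uint32_t compressed_size(uint64_t xstates, const struct comp *tbl)
{
    uint32_t size = XSTATE_AREA_MIN_SIZE;

    if ( !xstates )
        return 0;

    xstates &= ~(uint64_t)3; /* Drop the two legacy (x87/SSE) bits. */

    for ( unsigned int i = 0; i < 63; ++i )
    {
        if ( !(xstates & (1ULL << i)) )
            continue;

        if ( tbl[i].align )
            size = (size + 63) & ~63u; /* ROUNDUP(size, 64) */

        size += tbl[i].size;
    }

    return size;
}
```

The real helper additionally asserts that each enabled component has a
nonzero size, mirroring the uncompressed-size path.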

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v3:
 * Adjust commit message about !XSAVE guests
 * Rebase over boot time cross check
 * Use raw policy
v4:
 * Drop the TODO comment.  The CPUID path is always liable to pass 0 here.
 * ASSERT() a nonzero c->size like we do in the uncompressed helper.
---
 xen/arch/x86/cpuid.c              | 24 +++++++--------------
 xen/arch/x86/include/asm/xstate.h |  1 +
 xen/arch/x86/xstate.c             | 36 +++++++++++++++++++++++++++++++
 3 files changed, 45 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 7a38e032146a..a822e80c7ea7 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -330,23 +330,15 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     case XSTATE_CPUID:
         switch ( subleaf )
         {
-        case 1:
-            if ( !p->xstate.xsavec && !p->xstate.xsaves )
-                break;
-
-            /*
-             * TODO: Figure out what to do for XSS state.  VT-x manages host
-             * vs guest MSR_XSS automatically, so as soon as we start
-             * supporting any XSS states, the wrong XSS will be in context.
-             */
-            BUILD_BUG_ON(XSTATE_XSAVES_ONLY != 0);
-            fallthrough;
         case 0:
-            /*
-             * Read CPUID[0xD,0/1].EBX from hardware.  They vary with enabled
-             * XSTATE, and appropriate XCR0|XSS are in context.
-             */
-            res->b = cpuid_count_ebx(leaf, subleaf);
+            if ( p->basic.xsave )
+                res->b = xstate_uncompressed_size(v->arch.xcr0);
+            break;
+
+        case 1:
+            if ( p->xstate.xsavec )
+                res->b = xstate_compressed_size(v->arch.xcr0 |
+                                                v->arch.msrs->xss.raw);
             break;
         }
         break;
diff --git a/xen/arch/x86/include/asm/xstate.h b/xen/arch/x86/include/asm/xstate.h
index bfb66dd766b6..da1d89d2f416 100644
--- a/xen/arch/x86/include/asm/xstate.h
+++ b/xen/arch/x86/include/asm/xstate.h
@@ -109,6 +109,7 @@ void xstate_free_save_area(struct vcpu *v);
 int xstate_alloc_save_area(struct vcpu *v);
 void xstate_init(struct cpuinfo_x86 *c);
 unsigned int xstate_uncompressed_size(uint64_t xcr0);
+unsigned int xstate_compressed_size(uint64_t xstates);
 
 static inline uint64_t xgetbv(unsigned int index)
 {
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 8edc4792a8fd..31bf2dc95f57 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -619,6 +619,36 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
     return size;
 }
 
+unsigned int xstate_compressed_size(uint64_t xstates)
+{
+    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
+
+    if ( xstates == 0 )
+        return 0;
+
+    if ( xstates <= (X86_XCR0_SSE | X86_XCR0_FP) )
+        return size;
+
+    /*
+     * For the compressed size, every non-legacy component matters.  Some
+     * components require aligning to 64 first.
+     */
+    xstates &= ~(X86_XCR0_SSE | X86_XCR0_FP);
+    for_each_set_bit ( i, &xstates, 63 )
+    {
+        const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
+
+        ASSERT(c->size);
+
+        if ( c->align )
+            size = ROUNDUP(size, 64);
+
+        size += c->size;
+    }
+
+    return size;
+}
+
 struct xcheck_state {
     uint64_t states;
     uint32_t uncomp_size;
@@ -681,6 +711,12 @@ static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
                   s->states, &new, hw_size, s->comp_size);
 
         s->comp_size = hw_size;
+
+        xen_size = xstate_compressed_size(s->states);
+
+        if ( xen_size != hw_size )
+            panic("XSTATE 0x%016"PRIx64", compressed hw size %#x != xen size %#x\n",
+                  s->states, hw_size, xen_size);
     }
     else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
     {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 17:39:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742538.1149361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKI-0003EG-8o; Mon, 17 Jun 2024 17:39:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742538.1149361; Mon, 17 Jun 2024 17:39:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKH-0003Bx-W6; Mon, 17 Jun 2024 17:39:29 +0000
Received: by outflank-mailman (input) for mailman id 742538;
 Mon, 17 Jun 2024 17:39:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gp4O=NT=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJGKH-00036g-3H
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 17:39:29 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8a6e9f41-2cd0-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 19:39:26 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6f958a3a69so7951566b.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 10:39:26 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da4496sm532010666b.8.2024.06.17.10.39.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 10:39:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a6e9f41-2cd0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718645965; x=1719250765; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Z5V14Q8Gcx1y36spQ2JEJ7cbqPfjGOYxmj1ldZEYhTQ=;
        b=M9Owe0bbw1/pBNPLQBAKvhPY1TDE7qQhmLlkFBcM3ye+wsgSP9ER2N6e/BA1bYoSaA
         77ZHH2Xg4lZrr++2TtMiHQufGaoDoCVJLMFRNjOhbxTZSNjSVf6guwdqU/ye6m+DI0ns
         GI8U4OBygkj+2Ly4h2ylZjN7cPN21SIdiuhJ0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718645965; x=1719250765;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Z5V14Q8Gcx1y36spQ2JEJ7cbqPfjGOYxmj1ldZEYhTQ=;
        b=OXWTiFkKQMP2wqZOOSq85H/fpHqz/NxLAqsoovNAXGzP/DuExvCvZLCISc2U6j176G
         Yk5zXa8wuJOEnv5w8KDaPwYwg/IIul4VLxdrR4b9+/7p1nUwtCVIwtyMaXoVRuXme1of
         wbmKoVxLxOShrB3xyXL1TRBWf5hcGlMGIJMihj+631S/f1fsme8qywICrFecOc3TpQ6L
         AU79K+Fxe1ccfRRLBEvrFErbpwKgtCyTW+bIe5HzZr7f28hRQ6dQvbNpGUihYB9HxnjY
         nc1btGf6mQE9WQRnlnBVLAKJOiCLD6XbL1N28086IvkesrOIiIQyUQilNz/JT7u+5JTn
         t7NA==
X-Gm-Message-State: AOJu0YzCW5RGUf2CzdSYTrV/2PFiJrzcEWT7gUFpJ8iAxT9ll/5+uk+G
	OZeaIyCE+zNv2FSeW/giXiodzlk+Y/5howRDl/UwEsorIkirrRyq2bDsx5aKlQqBcrYc9iuDNRg
	5PeM=
X-Google-Smtp-Source: AGHT+IHDPj2YJF/xoswS4Hxe1qyfmkAv45GDhCV3GJGVWSh9zGuNHazvSlL1zN/PexzvEbPHklSOoQ==
X-Received: by 2002:a17:906:b756:b0:a6e:fad9:6dbb with SMTP id a640c23a62f3a-a6f60bcbb01mr710081566b.0.1718645965045;
        Mon, 17 Jun 2024 10:39:25 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 1/7] x86/xstate: Fix initialisation of XSS cache
Date: Mon, 17 Jun 2024 18:39:15 +0100
Message-Id: <20240617173921.1755439-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
hardware register.

While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
get_msr_xss() to return the invalid value, and logic of the form:

    old = get_msr_xss();
    set_msr_xss(new);
    ...
    set_msr_xss(old);

to try to restore said invalid value.

The architecturally invalid value must be purged from the cache, meaning the
hardware register must be written at least once.  This in turn highlights that
the invalid value must only be used in the case that the hardware register is
available.
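The failure mode can be modelled with a few lines of C.  This is a toy
stand-in for the real per-CPU cache and WRMSR, with invented names, but it
shows why an architecturally invalid sentinel must be flushed by at least
one real hardware write before get_msr_xss() is trusted:

```c
#include <stdint.h>

static uint64_t hw_xss;                 /* stands in for the real MSR */
static uint64_t cached_xss = ~0ULL;     /* sentinel: "cache invalid" */

/* Write the MSR only when the value differs from the cached one. */
static void set_msr_xss(uint64_t val)
{
    if ( val != cached_xss )
    {
        hw_xss = val;
        cached_xss = val;
    }
}

static uint64_t get_msr_xss(void)
{
    return cached_xss;
}
```

Until something calls set_msr_xss() with a real value, get_msr_xss()
returns the sentinel, and a save/modify/restore sequence writes ~0 into
hardware; hence the patch writes MSR_XSS once in xstate_init(), and only
when XSAVES is available.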

Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v3:
 * Split out of later patch
---
 xen/arch/x86/xstate.c | 18 +++++++++++-------
 1 file changed, 11 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 99cedb4f5e24..75788147966a 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -641,13 +641,6 @@ void xstate_init(struct cpuinfo_x86 *c)
         return;
     }
 
-    /*
-     * Zap the cached values to make set_xcr0() and set_msr_xss() really
-     * write it.
-     */
-    this_cpu(xcr0) = 0;
-    this_cpu(xss) = ~0;
-
     cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
     feature_mask = (((u64)edx << 32) | eax) & XCNTXT_MASK;
     BUG_ON(!valid_xcr0(feature_mask));
@@ -657,8 +650,19 @@ void xstate_init(struct cpuinfo_x86 *c)
      * Set CR4_OSXSAVE and run "cpuid" to get xsave_cntxt_size.
      */
     set_in_cr4(X86_CR4_OSXSAVE);
+
+    /*
+     * Zap the cached values to make set_xcr0() and set_msr_xss() really write
+     * the hardware register.
+     */
+    this_cpu(xcr0) = 0;
     if ( !set_xcr0(feature_mask) )
         BUG();
+    if ( cpu_has_xsaves )
+    {
+        this_cpu(xss) = ~0;
+        set_msr_xss(0);
+    }
 
     if ( bsp )
     {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 17:39:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742540.1149385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKJ-0003qq-PA; Mon, 17 Jun 2024 17:39:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742540.1149385; Mon, 17 Jun 2024 17:39:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJGKJ-0003pT-HZ; Mon, 17 Jun 2024 17:39:31 +0000
Received: by outflank-mailman (input) for mailman id 742540;
 Mon, 17 Jun 2024 17:39:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gp4O=NT=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJGKH-00036g-L8
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 17:39:29 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8b0748c8-2cd0-11ef-b4bb-af5377834399;
 Mon, 17 Jun 2024 19:39:27 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id
 a640c23a62f3a-a6f936d2ba1so36321466b.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 Jun 2024 10:39:27 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da4496sm532010666b.8.2024.06.17.10.39.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 Jun 2024 10:39:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b0748c8-2cd0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718645966; x=1719250766; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Zhl3ukCaOtB54z1wi1NLzU/jmU9SjtnheNpVm9jxFJQ=;
        b=QDCbE3XBxwBmCDbPdgqBaGh0tGF2vut0nRt3/l35Xh9dsR5M3sS+YoFU3THwsTYaOY
         ghyI6xKoEBvUtoiJqvBV+1Z7HpKSxxz4u4vTcuMH5ZU3ttlXWBMnvY6lNP7W8e6RIY6/
         CIQmBAhF+9LiuYxMkHzTRVrftEMN58dHQ6NEs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718645966; x=1719250766;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Zhl3ukCaOtB54z1wi1NLzU/jmU9SjtnheNpVm9jxFJQ=;
        b=Lgltc1MlhFgFx8zdRfCbuD3HJLUB/H8yYE8kE5aQ4MeaozTmSWysgFmAM2qufPycDp
         Cgk4H5Blk2lI14LY1zr7/B16rqsE81WSMlb13sR86msZLEVKUJCk9QyBZ9dcy5cxizVw
         aF+gbfgXuDVEVqSuX32nGIX76MS6+tbViCttXzRigxrKdXjA2u+uIP8AxTlKERwdeSJm
         OHBUT8pvhAprCuFDKUkN8xFJfaZ6UVGnFDzEPJZoLVxAOA4Li+elCSf8R+dukYF1LTvF
         c4BoKAJlw1L1b0SZ4Ovqgo9iMI9OUISHZfKRKULxaoxPm32Oqmn2k3Y8qHgQuxFtt8xe
         4c0Q==
X-Gm-Message-State: AOJu0YypyJkAFOB2xzd+w/fcVEO2whxF8Y9pILQjjGT43fGGwubBhuld
	0V8ozmzAiOftw65WAaBwkMS4rqtxGhlQ3s6ZLLkB8SjoChyUcma6pITaUm135IVZsoAv+yTNGgy
	XnvE=
X-Google-Smtp-Source: AGHT+IE2ePwbkm/Jzh5+1PD5kPs6RXA3hTe5nwJ2ydBXKARDAIfPc8oUxJZ52aObkYo78bSbB9bKjw==
X-Received: by 2002:a17:906:a847:b0:a6f:46f1:5434 with SMTP id a640c23a62f3a-a6f94c047famr24028366b.6.1718645966154;
        Mon, 17 Jun 2024 10:39:26 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at boot
Date: Mon, 17 Jun 2024 18:39:16 +0100
Message-Id: <20240617173921.1755439-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
every call.  This is expensive, being used for domain create/migrate, as well
as to service certain guest CPUID instructions.

Instead, arrange to check the sizes once at boot.  See the code comments for
details.  Right now, it just checks hardware against the algorithm
expectations.  Later patches will add further cross-checking.

Introduce more X86_XCR0_* and X86_XSS_* constants for CPUID bits.  This is to
maximise coverage in the sanity check, even if we don't expect to
use/virtualise some of these features any time soon.  Leave HDC and HWP alone
for now; we don't have CPUID bits for them stored nicely.

Only perform the cross-checks when SELF_TESTS are active.  It's only
developers or new hardware that are liable to trip these checks, and Xen at least
tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
don't want to be tickling in the general case.
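The monotonicity argument behind the cross-check can be sketched on its own.
In the model below, fake_cpuid_size() is a made-up stand-in for reading
CPUID[0xD].EBX after updating XCR0/MSR_XSS, with invented component sizes;
the real check_new_xstate() also validates the compressed size and panics
rather than returning an error:

```c
#include <stdint.h>

/* Toy CPUID[0xD,0].EBX: 576 legacy bytes, plus 256 for a YMM-like state
 * (bit 2) and 8192 for a TILE_DATA-like state (bit 18).  Numbers are
 * illustrative only. */
static uint32_t fake_cpuid_size(uint64_t states)
{
    uint32_t size = 576;

    if ( states & (1ULL << 2) )
        size += 256;
    if ( states & (1ULL << 18) )
        size += 8192;

    return size;
}

struct xcheck_state {
    uint64_t states;
    uint32_t uncomp_size;
};

/* Enable 'new' on top of the already-checked states; sizes may only grow,
 * because components only extend (or fill holes in) the layout.  Returns
 * 0 on success, -1 where the real code would BUG()/panic(). */
static int check_new_xstate(struct xcheck_state *s, uint64_t new)
{
    uint32_t hw_size;

    if ( s->states & new )      /* states only ever increase */
        return -1;

    s->states |= new;
    hw_size = fake_cpuid_size(s->states);

    if ( hw_size < s->uncomp_size )
        return -1;

    s->uncomp_size = hw_size;
    return 0;
}
```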

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v3:
 * New
v4:
 * Rebase over CONFIG_SELF_TESTS
 * Swap one BUG_ON() for a WARN()

On Sapphire Rapids with the whole series inc diagnostics, we get this pattern:

  (XEN) *** check_new_xstate(, 0x00000003)
  (XEN) *** check_new_xstate(, 0x00000004)
  (XEN) *** check_new_xstate(, 0x000000e0)
  (XEN) *** check_new_xstate(, 0x00000200)
  (XEN) *** check_new_xstate(, 0x00060000)
  (XEN) *** check_new_xstate(, 0x00000100)
  (XEN) *** check_new_xstate(, 0x00000400)
  (XEN) *** check_new_xstate(, 0x00000800)
  (XEN) *** check_new_xstate(, 0x00001000)
  (XEN) *** check_new_xstate(, 0x00004000)
  (XEN) *** check_new_xstate(, 0x00008000)

and on Genoa, this pattern:

  (XEN) *** check_new_xstate(, 0x00000003)
  (XEN) *** check_new_xstate(, 0x00000004)
  (XEN) *** check_new_xstate(, 0x000000e0)
  (XEN) *** check_new_xstate(, 0x00000200)
  (XEN) *** check_new_xstate(, 0x00000800)
  (XEN) *** check_new_xstate(, 0x00001000)
---
 xen/arch/x86/include/asm/x86-defns.h        |  25 +++-
 xen/arch/x86/xstate.c                       | 158 ++++++++++++++++++++
 xen/include/public/arch-x86/cpufeatureset.h |   3 +
 3 files changed, 185 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/x86-defns.h b/xen/arch/x86/include/asm/x86-defns.h
index 48d7a3b7af45..d7602ab225c4 100644
--- a/xen/arch/x86/include/asm/x86-defns.h
+++ b/xen/arch/x86/include/asm/x86-defns.h
@@ -77,7 +77,7 @@
 #define X86_CR4_PKS        0x01000000 /* Protection Key Supervisor */
 
 /*
- * XSTATE component flags in XCR0
+ * XSTATE component flags in XCR0 | MSR_XSS
  */
 #define X86_XCR0_FP_POS           0
 #define X86_XCR0_FP               (1ULL << X86_XCR0_FP_POS)
@@ -95,11 +95,34 @@
 #define X86_XCR0_ZMM              (1ULL << X86_XCR0_ZMM_POS)
 #define X86_XCR0_HI_ZMM_POS       7
 #define X86_XCR0_HI_ZMM           (1ULL << X86_XCR0_HI_ZMM_POS)
+#define X86_XSS_PROC_TRACE        (_AC(1, ULL) <<  8)
 #define X86_XCR0_PKRU_POS         9
 #define X86_XCR0_PKRU             (1ULL << X86_XCR0_PKRU_POS)
+#define X86_XSS_PASID             (_AC(1, ULL) << 10)
+#define X86_XSS_CET_U             (_AC(1, ULL) << 11)
+#define X86_XSS_CET_S             (_AC(1, ULL) << 12)
+#define X86_XSS_HDC               (_AC(1, ULL) << 13)
+#define X86_XSS_UINTR             (_AC(1, ULL) << 14)
+#define X86_XSS_LBR               (_AC(1, ULL) << 15)
+#define X86_XSS_HWP               (_AC(1, ULL) << 16)
+#define X86_XCR0_TILE_CFG         (_AC(1, ULL) << 17)
+#define X86_XCR0_TILE_DATA        (_AC(1, ULL) << 18)
 #define X86_XCR0_LWP_POS          62
 #define X86_XCR0_LWP              (1ULL << X86_XCR0_LWP_POS)
 
+#define X86_XCR0_STATES                                                 \
+    (X86_XCR0_FP | X86_XCR0_SSE | X86_XCR0_YMM | X86_XCR0_BNDREGS |     \
+     X86_XCR0_BNDCSR | X86_XCR0_OPMASK | X86_XCR0_ZMM |                 \
+     X86_XCR0_HI_ZMM | X86_XCR0_PKRU | X86_XCR0_TILE_CFG |              \
+     X86_XCR0_TILE_DATA |                                               \
+     X86_XCR0_LWP)
+
+#define X86_XSS_STATES                                                  \
+    (X86_XSS_PROC_TRACE | X86_XSS_PASID | X86_XSS_CET_U |               \
+     X86_XSS_CET_S | X86_XSS_HDC | X86_XSS_UINTR | X86_XSS_LBR |        \
+     X86_XSS_HWP |                                                      \
+     0)
+
 /*
  * Debug status flags in DR6.
  *
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 75788147966a..650206d9d2b6 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -604,9 +604,164 @@ static bool valid_xcr0(uint64_t xcr0)
     if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
         return false;
 
+    /* TILECFG and TILEDATA must be the same. */
+    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
+        return false;
+
     return true;
 }
 
+struct xcheck_state {
+    uint64_t states;
+    uint32_t uncomp_size;
+    uint32_t comp_size;
+};
+
+static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
+{
+    uint32_t hw_size;
+
+    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
+
+    BUG_ON(s->states & new); /* States only increase. */
+    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
+    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
+    BUG_ON((new & X86_XCR0_STATES) &&
+           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
+
+    s->states |= new;
+    if ( new & X86_XCR0_STATES )
+    {
+        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
+            BUG();
+    }
+    else
+        set_msr_xss(s->states & X86_XSS_STATES);
+
+    /*
+     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill in
+     * prior holes in the state area, so we check that the size doesn't
+     * decrease.
+     */
+    hw_size = cpuid_count_ebx(0xd, 0);
+
+    if ( hw_size < s->uncomp_size )
+        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
+              s->states, &new, hw_size, s->uncomp_size);
+
+    s->uncomp_size = hw_size;
+
+    /*
+     * Check the compressed size, if available.  All components strictly
+     * appear in index order.  In principle there are no holes, but some
+     * components have their base address 64-byte aligned for efficiency
+     * reasons (e.g. AMX-TILE) and there are other components small enough to
+     * fit in the gap (e.g. PKRU) without increasing the overall length.
+     */
+    hw_size = cpuid_count_ebx(0xd, 1);
+
+    if ( cpu_has_xsavec )
+    {
+        if ( hw_size < s->comp_size )
+            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x < prev size %#x\n",
+                  s->states, &new, hw_size, s->comp_size);
+
+        s->comp_size = hw_size;
+    }
+    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
+    {
+        static bool once;
+
+        if ( !once )
+        {
+            WARN();
+            once = true;
+        }
+    }
+}
+
+/*
+ * The {un,}compressed XSTATE sizes are reported by dynamic CPUID values, based
+ * on the current %XCR0 and MSR_XSS values.  The exact layout is also feature
+ * and vendor specific.  Cross-check Xen's understanding against real hardware
+ * on boot.
+ *
+ * Testing every combination is prohibitive, so we use a partial approach.
+ * Starting with nothing active, we add new XSTATEs and check that the CPUID
+ * dynamic values never decrease.
+ */
+static void __init noinline xstate_check_sizes(void)
+{
+    uint64_t old_xcr0 = get_xcr0();
+    uint64_t old_xss = get_msr_xss();
+    struct xcheck_state s = {};
+
+    /*
+     * User XSTATEs, increasing by index.
+     *
+     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
+     * AMD introduced LWP in Fam15h, following immediately on from YMM.  Intel
+     * left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in Skylake.
+     * AMD removed LWP in Fam17h, putting PKRU in the same space, breaking
+     * layout compatibility with Intel and having a knock-on effect on all
+     * subsequent states.
+     */
+    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
+
+    if ( cpu_has_avx )
+        check_new_xstate(&s, X86_XCR0_YMM);
+
+    if ( cpu_has_mpx )
+        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
+
+    if ( cpu_has_avx512f )
+        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
+
+    if ( cpu_has_pku )
+        check_new_xstate(&s, X86_XCR0_PKRU);
+
+    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
+        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
+
+    if ( boot_cpu_has(X86_FEATURE_LWP) )
+        check_new_xstate(&s, X86_XCR0_LWP);
+
+    /*
+     * Supervisor XSTATEs, increasing by index.
+     *
+     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
+     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
+     * introduced in Skylake.
+     */
+    if ( cpu_has_xsaves )
+    {
+        if ( cpu_has_proc_trace )
+            check_new_xstate(&s, X86_XSS_PROC_TRACE);
+
+        if ( boot_cpu_has(X86_FEATURE_ENQCMD) )
+            check_new_xstate(&s, X86_XSS_PASID);
+
+        if ( boot_cpu_has(X86_FEATURE_CET_SS) ||
+             boot_cpu_has(X86_FEATURE_CET_IBT) )
+        {
+            check_new_xstate(&s, X86_XSS_CET_U);
+            check_new_xstate(&s, X86_XSS_CET_S);
+        }
+
+        if ( boot_cpu_has(X86_FEATURE_UINTR) )
+            check_new_xstate(&s, X86_XSS_UINTR);
+
+        if ( boot_cpu_has(X86_FEATURE_ARCH_LBR) )
+            check_new_xstate(&s, X86_XSS_LBR);
+    }
+
+    /* Restore old state now the test is done. */
+    if ( !set_xcr0(old_xcr0) )
+        BUG();
+    if ( cpu_has_xsaves )
+        set_msr_xss(old_xss);
+}
+
 /* Collect the information of processor's extended state */
 void xstate_init(struct cpuinfo_x86 *c)
 {
@@ -683,6 +838,9 @@ void xstate_init(struct cpuinfo_x86 *c)
 
     if ( setup_xstate_features(bsp) && bsp )
         BUG();
+
+    if ( IS_ENABLED(CONFIG_SELF_TESTS) && bsp )
+        xstate_check_sizes();
 }
 
 int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 6627453e3985..d9eba5e9a714 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -266,6 +266,7 @@ XEN_CPUFEATURE(IBPB_RET,      8*32+30) /*A  IBPB clears RSB/RAS too. */
 XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /*A  AVX512 Neural Network Instructions */
 XEN_CPUFEATURE(AVX512_4FMAPS, 9*32+ 3) /*A  AVX512 Multiply Accumulation Single Precision */
 XEN_CPUFEATURE(FSRM,          9*32+ 4) /*A  Fast Short REP MOVS */
+XEN_CPUFEATURE(UINTR,         9*32+ 5) /*   User-mode Interrupts */
 XEN_CPUFEATURE(AVX512_VP2INTERSECT, 9*32+8) /*a  VP2INTERSECT{D,Q} insns */
 XEN_CPUFEATURE(SRBDS_CTRL,    9*32+ 9) /*   MSR_MCU_OPT_CTRL and RNGDS_MITG_DIS. */
 XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /*!A| VERW clears microarchitectural buffers */
@@ -274,8 +275,10 @@ XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*A  SERIALIZE insn */
 XEN_CPUFEATURE(HYBRID,        9*32+15) /*   Heterogeneous platform */
 XEN_CPUFEATURE(TSXLDTRK,      9*32+16) /*a  TSX load tracking suspend/resume insns */
+XEN_CPUFEATURE(ARCH_LBR,      9*32+19) /*   Architectural Last Branch Record */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
 XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*A  AVX512 FP16 instructions */
+XEN_CPUFEATURE(AMX_TILE,      9*32+24) /*   AMX Tile architecture */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v4 0/7] x86/xstate: Fixes to size calculations
Date: Mon, 17 Jun 2024 18:39:14 +0100
Message-Id: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Only minor changes in v4 vs v3.  See patches for details.

The end result has been tested across the entire XenServer hardware lab.  This
found several false assumptions about how the dynamic sizes behave.

Patches 1 and 6 are the main bugfixes from this series.  There's still lots
more work to do in order to get AMX and/or CET working, but this at least gets
a 4-year-old series off my plate.

Andrew Cooper (7):
  x86/xstate: Fix initialisation of XSS cache
  x86/xstate: Cross-check dynamic XSTATE sizes at boot
  x86/boot: Collect the Raw CPU Policy earlier on boot
  x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
  x86/cpu-policy: Simplify recalculate_xstate()
  x86/cpuid: Fix handling of XSAVE dynamic leaves
  x86/defns: Clean up X86_{XCR0,XSS}_* constants

 xen/arch/x86/cpu-policy.c                   |  56 ++--
 xen/arch/x86/cpuid.c                        |  24 +-
 xen/arch/x86/domctl.c                       |   2 +-
 xen/arch/x86/hvm/hvm.c                      |   2 +-
 xen/arch/x86/i387.c                         |   2 +-
 xen/arch/x86/include/asm/x86-defns.h        |  55 ++--
 xen/arch/x86/include/asm/xstate.h           |   8 +-
 xen/arch/x86/setup.c                        |   4 +-
 xen/arch/x86/xstate.c                       | 294 +++++++++++++++++---
 xen/include/public/arch-x86/cpufeatureset.h |   3 +
 xen/include/xen/lib/x86/cpu-policy.h        |   2 +-
 11 files changed, 330 insertions(+), 122 deletions(-)

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 5/7] x86/cpu-policy: Simplify recalculate_xstate()
Date: Mon, 17 Jun 2024 18:39:19 +0100
Message-Id: <20240617173921.1755439-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Make use of xstate_uncompressed_size() helper rather than maintaining the
running calculation while accumulating feature components.

The rest of the CPUID data can come direct from the raw cpu policy.  All
per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
instructions.

Use for_each_set_bit() rather than opencoding a slightly awkward version of
it.  Mask the attributes in ecx down based on the visible features.  This
isn't actually necessary for any components or attributes defined at the time
of writing (up to AMX), but is added out of an abundance of caution.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v2:
 * Tie ALIGN64 to xsavec rather than xsaves.
v3:
 * Tweak commit message.
---
 xen/arch/x86/cpu-policy.c         | 55 +++++++++++--------------------
 xen/arch/x86/include/asm/xstate.h |  1 +
 2 files changed, 21 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 5b66f002df05..304dc20cfab8 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -193,8 +193,7 @@ static void sanitise_featureset(uint32_t *fs)
 static void recalculate_xstate(struct cpu_policy *p)
 {
     uint64_t xstates = XSTATE_FP_SSE;
-    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
-    unsigned int i, Da1 = p->xstate.Da1;
+    unsigned int i, ecx_mask = 0, Da1 = p->xstate.Da1;
 
     /*
      * The Da1 leaf is the only piece of information preserved in the common
@@ -206,61 +205,47 @@ static void recalculate_xstate(struct cpu_policy *p)
         return;
 
     if ( p->basic.avx )
-    {
         xstates |= X86_XCR0_YMM;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_YMM_POS] +
-                          xstate_sizes[X86_XCR0_YMM_POS]);
-    }
 
     if ( p->feat.mpx )
-    {
         xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
-                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
-    }
 
     if ( p->feat.avx512f )
-    {
         xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
-                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
-    }
 
     if ( p->feat.pku )
-    {
         xstates |= X86_XCR0_PKRU;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_PKRU_POS] +
-                          xstate_sizes[X86_XCR0_PKRU_POS]);
-    }
 
-    p->xstate.max_size  =  xstate_size;
+    /* Subleaf 0 */
+    p->xstate.max_size =
+        xstate_uncompressed_size(xstates & ~XSTATE_XSAVES_ONLY);
     p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
     p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
 
+    /* Subleaf 1 */
     p->xstate.Da1 = Da1;
+    if ( p->xstate.xsavec )
+        ecx_mask |= XSTATE_ALIGN64;
+
     if ( p->xstate.xsaves )
     {
+        ecx_mask |= XSTATE_XSS;
         p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
         p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
     }
-    else
-        xstates &= ~XSTATE_XSAVES_ONLY;
 
-    for ( i = 2; i < min(63UL, ARRAY_SIZE(p->xstate.comp)); ++i )
+    /* Subleafs 2+ */
+    xstates &= ~XSTATE_FP_SSE;
+    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
+    for_each_set_bit ( i, &xstates, 63 )
     {
-        uint64_t curr_xstate = 1UL << i;
-
-        if ( !(xstates & curr_xstate) )
-            continue;
-
-        p->xstate.comp[i].size   = xstate_sizes[i];
-        p->xstate.comp[i].offset = xstate_offsets[i];
-        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
-        p->xstate.comp[i].align  = curr_xstate & xstate_align;
+        /*
+         * Pass through size (eax) and offset (ebx) directly.  Visibility of
+         * attributes in ecx is limited by the visible features in Da1.
+         */
+        p->xstate.raw[i].a = raw_cpu_policy.xstate.raw[i].a;
+        p->xstate.raw[i].b = raw_cpu_policy.xstate.raw[i].b;
+        p->xstate.raw[i].c = raw_cpu_policy.xstate.raw[i].c & ecx_mask;
     }
 }
 
diff --git a/xen/arch/x86/include/asm/xstate.h b/xen/arch/x86/include/asm/xstate.h
index f5115199d4f9..bfb66dd766b6 100644
--- a/xen/arch/x86/include/asm/xstate.h
+++ b/xen/arch/x86/include/asm/xstate.h
@@ -40,6 +40,7 @@ extern uint32_t mxcsr_mask;
 #define XSTATE_XSAVES_ONLY         0
 #define XSTATE_COMPACTION_ENABLED  (1ULL << 63)
 
+#define XSTATE_XSS     (1U << 0)
 #define XSTATE_ALIGN64 (1U << 1)
 
 extern u64 xfeature_mask;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 4/7] x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
Date: Mon, 17 Jun 2024 18:39:18 +0100
Message-Id: <20240617173921.1755439-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We're soon going to need a compressed helper of the same form.

The size of the uncompressed image depends on the single element with the
largest offset + size.  Sadly this isn't always the element with the largest
index.

Name the per-xstate-component cpu_policy structure, for legibility of the logic
in xstate_uncompressed_size().  Cross-check with hardware during boot, and
remove hw_uncompressed_size().

This means that the migration paths don't need to mess with XCR0 just to
sanity check the buffer size.  It also means we can drop the "fastpath" check
against xfeature_mask (there to skip some XCR0 writes); this path is going to
be dead logic the moment Xen starts using supervisor states itself.

The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
prior to setup_xstate_features() on the BSP.

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v2:
 * Scan all features.  LWP/APX_F are out-of-order.
v3:
 * Rebase over boot time check.
 * Use the raw CPU policy.
v4:
 * Explain the ASSERT() checking X86_XCR0_STATES.
 * Drop the xfeature_mask check.
 * Drop the comment about 0, because we'll probably never be able to clean up
   the use from the CPUID path.
---
 xen/arch/x86/domctl.c                |  2 +-
 xen/arch/x86/hvm/hvm.c               |  2 +-
 xen/arch/x86/include/asm/xstate.h    |  2 +-
 xen/arch/x86/xstate.c                | 76 ++++++++++++++++------------
 xen/include/xen/lib/x86/cpu-policy.h |  2 +-
 5 files changed, 49 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 335aedf46d03..9190e11faaa3 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -833,7 +833,7 @@ long arch_do_domctl(
         uint32_t offset = 0;
 
 #define PV_XSAVE_HDR_SIZE (2 * sizeof(uint64_t))
-#define PV_XSAVE_SIZE(xcr0) (PV_XSAVE_HDR_SIZE + xstate_ctxt_size(xcr0))
+#define PV_XSAVE_SIZE(xcr0) (PV_XSAVE_HDR_SIZE + xstate_uncompressed_size(xcr0))
 
         ret = -ESRCH;
         if ( (evc->vcpu >= d->max_vcpus) ||
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8334ab171110..7f4b627b1f5f 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1206,7 +1206,7 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, NULL, hvm_load_cpu_ctxt, 1,
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
                                            save_area) + \
-                                  xstate_ctxt_size(xcr0))
+                                  xstate_uncompressed_size(xcr0))
 
 static int cf_check hvm_save_cpu_xsave_states(
     struct vcpu *v, hvm_domain_context_t *h)
diff --git a/xen/arch/x86/include/asm/xstate.h b/xen/arch/x86/include/asm/xstate.h
index c08c267884f0..f5115199d4f9 100644
--- a/xen/arch/x86/include/asm/xstate.h
+++ b/xen/arch/x86/include/asm/xstate.h
@@ -107,7 +107,7 @@ void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size);
 void xstate_free_save_area(struct vcpu *v);
 int xstate_alloc_save_area(struct vcpu *v);
 void xstate_init(struct cpuinfo_x86 *c);
-unsigned int xstate_ctxt_size(u64 xcr0);
+unsigned int xstate_uncompressed_size(uint64_t xcr0);
 
 static inline uint64_t xgetbv(unsigned int index)
 {
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 650206d9d2b6..8edc4792a8fd 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -8,6 +8,8 @@
 #include <xen/param.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
+
+#include <asm/cpu-policy.h>
 #include <asm/current.h>
 #include <asm/processor.h>
 #include <asm/i387.h>
@@ -183,7 +185,7 @@ void expand_xsave_states(const struct vcpu *v, void *dest, unsigned int size)
     /* Check there is state to serialise (i.e. at least an XSAVE_HDR) */
     BUG_ON(!v->arch.xcr0_accum);
     /* Check there is the correct room to decompress into. */
-    BUG_ON(size != xstate_ctxt_size(v->arch.xcr0_accum));
+    BUG_ON(size != xstate_uncompressed_size(v->arch.xcr0_accum));
 
     if ( !(xstate->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) )
     {
@@ -245,7 +247,7 @@ void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size)
     u64 xstate_bv, valid;
 
     BUG_ON(!v->arch.xcr0_accum);
-    BUG_ON(size != xstate_ctxt_size(v->arch.xcr0_accum));
+    BUG_ON(size != xstate_uncompressed_size(v->arch.xcr0_accum));
     ASSERT(!xsave_area_compressed(src));
 
     xstate_bv = ((const struct xsave_struct *)src)->xsave_hdr.xstate_bv;
@@ -553,32 +555,6 @@ void xstate_free_save_area(struct vcpu *v)
     v->arch.xsave_area = NULL;
 }
 
-static unsigned int hw_uncompressed_size(uint64_t xcr0)
-{
-    u64 act_xcr0 = get_xcr0();
-    unsigned int size;
-    bool ok = set_xcr0(xcr0);
-
-    ASSERT(ok);
-    size = cpuid_count_ebx(XSTATE_CPUID, 0);
-    ok = set_xcr0(act_xcr0);
-    ASSERT(ok);
-
-    return size;
-}
-
-/* Fastpath for common xstate size requests, avoiding reloads of xcr0. */
-unsigned int xstate_ctxt_size(u64 xcr0)
-{
-    if ( xcr0 == xfeature_mask )
-        return xsave_cntxt_size;
-
-    if ( xcr0 == 0 ) /* TODO: clean up paths passing 0 in here. */
-        return 0;
-
-    return hw_uncompressed_size(xcr0);
-}
-
 static bool valid_xcr0(uint64_t xcr0)
 {
     /* FP must be unconditionally set. */
@@ -611,6 +587,38 @@ static bool valid_xcr0(uint64_t xcr0)
     return true;
 }
 
+unsigned int xstate_uncompressed_size(uint64_t xcr0)
+{
+    unsigned int size = XSTATE_AREA_MIN_SIZE, i;
+
+    /* Non-XCR0 states don't exist in an uncompressed image. */
+    ASSERT((xcr0 & ~X86_XCR0_STATES) == 0);
+
+    if ( xcr0 == 0 )
+        return 0;
+
+    if ( xcr0 <= (X86_XCR0_SSE | X86_XCR0_FP) )
+        return size;
+
+    /*
+     * For the non-legacy states, search all active states and find the
+     * maximum offset+size.  Some states (e.g. LWP, APX_F) are out-of-order
+     * with respect to their index.
+     */
+    xcr0 &= ~(X86_XCR0_SSE | X86_XCR0_FP);
+    for_each_set_bit ( i, &xcr0, 63 )
+    {
+        const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
+        unsigned int s = c->offset + c->size;
+
+        ASSERT(c->offset && c->size);
+
+        size = max(size, s);
+    }
+
+    return size;
+}
+
 struct xcheck_state {
     uint64_t states;
     uint32_t uncomp_size;
@@ -619,7 +627,7 @@ struct xcheck_state {
 
 static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
 {
-    uint32_t hw_size;
+    uint32_t hw_size, xen_size;
 
     BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
 
@@ -651,6 +659,12 @@ static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
 
     s->uncomp_size = hw_size;
 
+    xen_size = xstate_uncompressed_size(s->states & X86_XCR0_STATES);
+
+    if ( xen_size != hw_size )
+        panic("XSTATE 0x%016"PRIx64", uncompressed hw size %#x != xen size %#x\n",
+              s->states, hw_size, xen_size);
+
     /*
      * Check the compressed size, if available.  All components strictly
      * appear in index order.  In principle there are no holes, but some
@@ -826,14 +840,14 @@ void xstate_init(struct cpuinfo_x86 *c)
          * xsave_cntxt_size is the max size required by enabled features.
          * We know FP/SSE and YMM about eax, and nothing about edx at present.
          */
-        xsave_cntxt_size = hw_uncompressed_size(feature_mask);
+        xsave_cntxt_size = cpuid_count_ebx(0xd, 0);
         printk("xstate: size: %#x and states: %#"PRIx64"\n",
                xsave_cntxt_size, xfeature_mask);
     }
     else
     {
         BUG_ON(xfeature_mask != feature_mask);
-        BUG_ON(xsave_cntxt_size != hw_uncompressed_size(feature_mask));
+        BUG_ON(xsave_cntxt_size != cpuid_count_ebx(0xd, 0));
     }
 
     if ( setup_xstate_features(bsp) && bsp )
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index d5e447e9dc06..d26012c6da78 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -248,7 +248,7 @@ struct cpu_policy
         };
 
         /* Per-component common state.  Valid for i >= 2. */
-        struct {
+        struct xstate_component {
             uint32_t size, offset;
             bool xss:1, align:1;
             uint32_t _res_d;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:39:35 2024
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH v4 3/7] x86/boot: Collect the Raw CPU Policy earlier on boot
Date: Mon, 17 Jun 2024 18:39:17 +0100
Message-Id: <20240617173921.1755439-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This is a tangle, but it's a small step in the right direction.

In the following change, xstate_init() is going to start using the Raw policy.

calculate_raw_cpu_policy() is sufficiently separate from the calculation of
the other policies that it can safely be moved like this.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

This is necessary for the forthcoming xstate_{un,}compressed_size() to perform
boot-time sanity checks on state components which aren't fully enabled yet.  I
decided that doing this was better than extending the xstate_{offsets,sizes}[]
logic that we're intending to retire in due course.

v3:
 * New.
v4:
 * Adjust commit message a little.
---
 xen/arch/x86/cpu-policy.c | 1 -
 xen/arch/x86/setup.c      | 4 +++-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index b96f4ee55cc4..5b66f002df05 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -845,7 +845,6 @@ static void __init calculate_hvm_def_policy(void)
 
 void __init init_guest_cpu_policies(void)
 {
-    calculate_raw_cpu_policy();
     calculate_host_policy();
 
     if ( IS_ENABLED(CONFIG_PV) )
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index dd51e68dbe5b..eee20bb1753c 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1888,7 +1888,9 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
 
     tsx_init(); /* Needs microcode.  May change HLE/RTM feature bits. */
 
-    identify_cpu(&boot_cpu_data);
+    calculate_raw_cpu_policy(); /* Needs microcode.  No other dependencies. */
+
+    identify_cpu(&boot_cpu_data); /* Needs microcode and raw policy. */
 
     set_in_cr4(X86_CR4_OSFXSR | X86_CR4_OSXMMEXCPT);
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 17:55:32 2024
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH] xen/ubsan: Fix UB in type_descriptor declaration
Date: Mon, 17 Jun 2024 18:55:21 +0100
Message-Id: <20240617175521.1766698-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

struct type_descriptor is arranged with a NUL-terminated string following the
kind/info fields.

The only reason this doesn't trip UBSAN's own detection (on more modern
compilers at least) is that struct type_descriptor is only referenced in
suppressed regions.

Switch the declaration to be a real flexible member.  No functional change.

Fixes: 00fcf4dd8eb4 ("xen/ubsan: Import ubsan implementation from Linux 4.13")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

For 4.19, and for backport to all reasonable versions.  This bug deserves some
kind of irony award.
---
 xen/common/ubsan/ubsan.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/ubsan/ubsan.h b/xen/common/ubsan/ubsan.h
index a3159040fefb..3db42e75b138 100644
--- a/xen/common/ubsan/ubsan.h
+++ b/xen/common/ubsan/ubsan.h
@@ -10,7 +10,7 @@ enum {
 struct type_descriptor {
 	u16 type_kind;
 	u16 type_info;
-	char type_name[1];
+	char type_name[];
 };
 
 struct source_location {

base-commit: 8b4243a9b560c89bb259db5a27832c253d4bebc7
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Mon Jun 17 19:11:11 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186381-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186381: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 Jun 2024 19:10:53 +0000

flight 186381 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186381/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-qcow2     8 xen-boot         fail in 186377 pass in 186381
 test-armhf-armhf-xl-raw       8 xen-boot                   fail pass in 186377
 test-armhf-armhf-xl-multivcpu  8 xen-boot                  fail pass in 186377

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2   8 xen-boot            fail in 186377 like 186372
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186377 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186377 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186377 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186377 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186372
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186372
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186372
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6ba59ff4227927d3a8530fc2973b80e94b54d58f
baseline version:
 linux                b5beaa44747bddbabb338377340244f56465cd7d

Last test of basis   186372  2024-06-16 18:43:40 Z    1 days
Testing same since   186377  2024-06-17 02:49:50 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andi Shyti <andi.shyti@kernel.org>
  Helge Deller <deller@gmx.de>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jean Delvare <jdelvare@suse.de>
  John David Anglin <dave.anglin@bell.net>
  John David Anglin <dave@parisc-linux.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   b5beaa44747b..6ba59ff42279  6ba59ff4227927d3a8530fc2973b80e94b54d58f -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Jun 17 20:46:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 Jun 2024 20:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742616.1149449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJJFA-0000ec-E6; Mon, 17 Jun 2024 20:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742616.1149449; Mon, 17 Jun 2024 20:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJJFA-0000eV-AL; Mon, 17 Jun 2024 20:46:24 +0000
Received: by outflank-mailman (input) for mailman id 742616;
 Mon, 17 Jun 2024 20:46:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BQ9S=NT=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sJJF9-0000e9-Ib
 for xen-devel@lists.xenproject.org; Mon, 17 Jun 2024 20:46:23 +0000
Received: from fout6-smtp.messagingengine.com (fout6-smtp.messagingengine.com
 [103.168.172.149]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a64ba5e1-2cea-11ef-90a3-e314d9c70b13;
 Mon, 17 Jun 2024 22:46:20 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailfout.nyi.internal (Postfix) with ESMTP id BD62813800E2;
 Mon, 17 Jun 2024 16:46:18 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Mon, 17 Jun 2024 16:46:18 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 17 Jun 2024 16:46:15 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a64ba5e1-2cea-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1718657178;
	 x=1718743578; bh=+KIGKt3L+2xGLRMAjC07BGnH/n0I2J6DSDMjl2wShMw=; b=
	f9B3pVOAOSQ3slBOgF5ZqBkD7Yz9+nMxVf6RSRnUTkPpP4rXVgZTVQfPLynzXAVM
	SpRG/GF4sUy9wc6FkIXknPaag/cqsAW+SUZVOpJ0OrFhjAduuLw4SyXc5NIFg49c
	zMe17Uf8C5Dsy6e/1KJrrLoYtpOUOWuHWvwlLjJ7MRbi+YfqaBhghol/wXLA1AIU
	K6R3TlfNwmd/LQ0YfiiF9RHD1pqu5l3ZSr1EHipYfLua1SCp1baGVNhvpxsuJuq5
	UVT9vZgBFbdhzwVILCjEGj6IxxxLGTtn4rnTTKM+fmZV9u0b6IKoC/UjtMZanU0v
	2ZCdZex4TOzhKuiwjbuaZA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm1; t=1718657178; x=1718743578; bh=+KIGKt3L+2xGLRMAjC07BGnH/n0I
	2J6DSDMjl2wShMw=; b=nONttP2/Irx7cZ9/mn6X4T5mlolhyFXaSbzRhC9D6siV
	h0PLUKO7135lLLilsIViBboyeKCJcSXiSgtiaXkSurflYOiLtscI5iQWs3NcC00u
	xj36aoO0gXUyxtTtng69uRywEqHlnGUvN6tOoHR4up9Pewv+P3pPr0yaVEHDJQyz
	XXARIaworkn4eqAWS6wvmf3CAOKkWD7sMobKmUX4oOCoOtAlIofYiU9wXHzgbauY
	H0pFlxKMMC3osrrPLxPEwXVJdlUsMSSAto44yYDUU9WD0qywh2P7/PiOGgTMPmwd
	+qKLAigEK3e/mXxtzY7pRVa78PZTvEKMkiYbBfCNag==
X-ME-Sender: <xms:mqBwZjsU9oi0g7ZOJzofngkZt5LnJtm6aTOSPg-3x2W4AStomO6O_Q>
    <xme:mqBwZke7TX_YdMuDOHZ1C3-49b0p3VDC1d_wpKwP0NtLG6ZqZo_TL4ZizQpBJ1IHx
    o6clkuB1HAsKw>
X-ME-Received: <xmr:mqBwZmzfLjlkmlQa4XUqjvnDHIeImmsVSditHvFgn0iPE0t8tSw9IuIEfR8>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedvhedgudehfecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvvefukfhfgggtuggjsehgtdorredttdejnecuhfhrohhmpeforghr
    vghkucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesih
    hnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpedt
    tedvhfekudetvdelffeguedtkeethfethfffhfefteeghfeigeelvddttdektdenucevlh
    hushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrhgv
    khesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:mqBwZiOVI1oq8Hb6dsrU0f8U3AwyIM-QFm0HUUIBlhJ9jLBZhQoJNg>
    <xmx:mqBwZj-2q2sHejWxfd7dSZqWaF-lcNcUKR9cCMcPE7xoZFsN67zIyA>
    <xmx:mqBwZiXP00JcyCPhOFHLc7khZJH9UQbDG5W6IR7Saw-ZPP7Q9a4pAA>
    <xmx:mqBwZkcCBD-xtkDwrZWIfdCJufPTmU3FnGFQrkmdQKE-DFkNDhg78A>
    <xmx:mqBwZjz-zKgHlDLARfdfPnVU0YWUJ62kWHn384uNi9nPkosymHoL0D6w>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 17 Jun 2024 22:46:13 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZnCglhYlXmRPBZXE@mail-itl>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
 <Zm_p1QvoZcjQ4gBa@macbook>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="yIH0d/1pE7yrflk9"
Content-Disposition: inline
In-Reply-To: <Zm_p1QvoZcjQ4gBa@macbook>


--yIH0d/1pE7yrflk9
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 17 Jun 2024 22:46:13 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Design session notes: GPU acceleration in Xen

On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> > In both cases, the device physical
> > addresses are identical to dom0’s physical addresses.
> 
> Yes, but a PV dom0 physical address space can be very scattered.
> 
> IIRC there's a hypercall to request physically contiguous memory for
> PV, but you don't want to be using that every time you allocate a
> buffer (not sure it would support the sizes needed by the GPU
> anyway).

Indeed, that isn't going to fly. In older Qubes versions we had a PV
sys-net with PCI passthrough for a network card. After some uptime it
was basically impossible to restart it and still have enough contiguous
memory for the network driver, at least not without shutting down a lot
of other things to free more memory, and that was for _much_ smaller
buffers, like 2M or 4M.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--yIH0d/1pE7yrflk9
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZwoJYACgkQ24/THMrX
1yxKvwf9Guw4aqpy5v+VhCgQ95lk7MKJmp79thklrahi/8YDVryBuwVd+hfJnCCK
HGyoahndaV0k2kZK77lkDtayP1Vyeg0WCWeLDr8hgIykdaupN4A/9Ep0iiTpDyn1
1e/Isroc9Wxt7/1HPqDOSVc8hU5hI2ccTVuNdyUUzK5Ps3rVpINIr7xSdY3NNYTn
5fKYnGrGb7fJjczCgPlGwvK2wDiLKeI+gC8NRQrS3QeLsEowEckMDMFmzhymfqDA
/RKfGj9hGf7ehhKZ8Zr/jV7qztUtWXRnMg4Ozp4CVs87fPFWjp/Kg/eW7vJrr7ov
/3QAzzB5RpII+3AH0O6XUIO5cpGIgw==
=/1j1
-----END PGP SIGNATURE-----

--yIH0d/1pE7yrflk9--


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 00:57:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 00:57:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742626.1149458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNA1-0002Z2-Sy; Tue, 18 Jun 2024 00:57:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742626.1149458; Tue, 18 Jun 2024 00:57:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNA1-0002Yv-Pf; Tue, 18 Jun 2024 00:57:21 +0000
Received: by outflank-mailman (input) for mailman id 742626;
 Tue, 18 Jun 2024 00:57:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6aKm=NU=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sJNA0-0002Yo-UM
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 00:57:21 +0000
Received: from fout1-smtp.messagingengine.com (fout1-smtp.messagingengine.com
 [103.168.172.144]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b51c1a15-2d0d-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 02:57:17 +0200 (CEST)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailfout.nyi.internal (Postfix) with ESMTP id 231161380803;
 Mon, 17 Jun 2024 20:57:16 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Mon, 17 Jun 2024 20:57:16 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 17 Jun 2024 20:57:15 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b51c1a15-2d0d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:subject
	:subject:to:to; s=fm1; t=1718672236; x=1718758636; bh=HthqcoOdBh
	/JK0eSSjRZiYaRguDJqed9JCqeCaWqaF4=; b=X2GIm8MqFYIjY9hChpzwMV4a4B
	DWt59AMHg/VJVG5y6Rco+PbdySHeM2Lll9EyLn8Emgy+Xp6DitfgwPgCzd+gVLYb
	F6jsrrdqRU3EvhJ0hm1Woeaf1LredHqeNf07J4wGsm0QAoO/muAZlGZsP5wX8I+9
	aNv99soLhwE9RyeG4Yf9N3ouE/p5f2umX2kt7+ToY6UTqM46adpi9UGR0cy93ihY
	58WWkfx/XQIPtbJIvO9HVdwRQf10fyc2mhOXND8GjFADik1+jnJqVTmJZr9T7Sdg
	k8entB0Px4f1q5QAl8KyxN7FE9nxnCH//OltmltW1Fd1vbnYteroYkJ4jP0Q==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm1; t=1718672236; x=
	1718758636; bh=HthqcoOdBh/JK0eSSjRZiYaRguDJqed9JCqeCaWqaF4=; b=D
	lh01JSRa/LB9EMZLvlsYZUNBJvMpYIQd0NzMbdDnYJWzb16IdWTIf1DJhtBRWq8r
	KmOUmt0QWK29vQsgSHcBUF6Y5NgjynqkzPqkgSyCzZ9GPmHIzLVT9Vpjed6RxNs5
	K111QEiyRVPZ3C/uw2ch/hIOEjSYCqlcfurxsvKBb1lzPK1k9XZzlNGqSQVu1OzT
	ZME6D3a+lGnzuIdvO/Bp4EsI7lF1M8BaMqq8kDUE8MG68/wgFzT+7qKj2K7dFXXo
	/pOXz6BeEM8/932vNt+vYti+JnMr32YAys3J5PvYf5EHfChzaBGhe+1ZUursERhd
	LM4x9moCS/SUNFKZqsvIg==
X-ME-Sender: <xms:a9twZnLk5vFSlfFivWahqkY2MeN_LCZojpsCHXIgPPADF3Pp6dK3Rg>
    <xme:a9twZrLMmy1ERlFVL5GlVKrIy4sbTBcWVQq0uTDzpQsW3AJ13A-JtPqvseHDdKv7t
    UDvD98EY9ufHX0>
X-ME-Received: <xmr:a9twZvtrtmwRbDJNhyKQQAN8tmdOyP2h8njU1A_ls6XzuqQZrWsq3ZwpO5YWCqh4E0u94gS_euu9DzSWtht5OlWfmfj2uGCswpsFS54OpEap30OG>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedviedggeduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtgfgjgesthekredttddtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepudfgieetueeuueeihefhfeetudfh
    iefgteekuedvgfeuhffggeegfedvkeegkeeinecuvehluhhsthgvrhfuihiivgeptdenuc
    frrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhs
    lhgrsgdrtghomh
X-ME-Proxy: <xmx:a9twZgbo9qjBtr9aI7AfebSQQDY9_2brye7Hpa0BU0M6Cn3CtZOF6A>
    <xmx:a9twZubyYRtVeubFjkmi5KoX0QJHGa_J_Elx-ok0GDtuh0VmToViIQ>
    <xmx:a9twZkCs3Ylc0k4DNivo_f7_bWw-ECMwhdHLYLwd7w0CZOYf9aiGKQ>
    <xmx:a9twZsZowZl3KsmY6OV45_JeW-pA187fQwaUXx4uSLixI46sY37M7Q>
    <xmx:bNtwZkkWTlRIUvyeWN5yg9jy0bap-acAklEI19GGuLNXmRX8h6Pcwqw0>
Feedback-ID: iac594737:Fastmail
Date: Mon, 17 Jun 2024 20:57:14 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Direct Rendering Infrastructure development <dri-devel@lists.freedesktop.org>,
	Christian =?utf-8?B?S8O2bmln?= <christian.koenig@amd.com>,
	Qubes OS Development Mailing List <qubes-devel@googlegroups.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZnDbaply6KaBUKJb@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
 <Zm_p1QvoZcjQ4gBa@macbook>
 <ZnCglhYlXmRPBZXE@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; x-action=pgp-signed
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZnCglhYlXmRPBZXE@mail-itl>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On Mon, Jun 17, 2024 at 10:46:13PM +0200, Marek Marczykowski-Górecki wrote:
> On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
> > On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> > > In both cases, the device physical
> > > addresses are identical to dom0’s physical addresses.
> > 
> > Yes, but a PV dom0 physical address space can be very scattered.
> > 
> > IIRC there's a hypercall to request physically contiguous memory for
> > PV, but you don't want to be using that every time you allocate a
> > buffer (not sure it would support the sizes needed by the GPU
> > anyway).
> 
> Indeed, that isn't going to fly. In older Qubes versions we had a PV
> sys-net with PCI passthrough for a network card. After some uptime it
> was basically impossible to restart it and still have enough contiguous
> memory for the network driver, at least not without shutting down a lot
> of other things to free more memory, and that was for _much_ smaller
> buffers, like 2M or 4M.

Ouch!  That makes me wonder if all GPU drivers actually need physically
contiguous buffers, or if it is (as I suspect) driver-specific.  CCing
Christian König, who has mentioned issues in this area.

Given the recent progress on PVH dom0, is it reasonable to assume that
PVH dom0 will be ready in time for R4.3, and that therefore Qubes OS
doesn't need to worry about this problem on x86?
- -- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZw22kACgkQsoi1X/+c
IsGqtA/+INEbVP6pjKoMOJStaXajIvx19hJFU5HJQT0FBe4u2VXd3wfhK5gbJ90P
NrlE3Lstzper0qBG7Lt8lt4DAcL9Q3Ml9d8M0K7z6VYIKPqiu2Wh/P25HD7r+Adn
L2AMwnKUHtC02LJpT1Cjt/acKU3En9TMd35RhCNf4K+c9Swodtea3iOo7GzgQjNA
TFMAYiiIlhwQIvThrVlcKktCMZajvhudxwfZTd3EfUkIQbMtc/ydkeqL92nV9Fg4
uz+AEeDDNhCGsEjrFUFTXKnXc/28jpVIc4mXyGW+x4dginRjrjRVmtNrnz/1wO+S
X/xVUVnvLoTUXI+dKI9y5XmobVAJzLNZaEOEfnKePj5zA2ayRfnWybPBjzJuU+S4
wKevyBDlTuOdgtOT9nktd+qzXBQYtreEu8f+t9sEezURpVU/oOyrVn7Ui0RMtZID
W3sXJH3NfVb3mWCsYOMpJyzb5VYfYR5PWN6Ggln/CHvfLTDI8TKdaO41INkXLlTC
fA1cXVSKPn/VX9LRIFcQ81v9MGBAFkDX4Mf7z7xodi9Qopj+o2Yw66g5vLrPxPCH
asJSdnrnaZAtZSsbEhY4uV5+4QLD0dyNUqj+HxRlODFwhpDyervCikfp0MoSsWmT
qFvFHkiSqkx7E33QaVjmcGmFv4eWTVunYxW0j8tWnpWQLNLfPzY=
=H5gN
-----END PGP SIGNATURE-----


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 01:11:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 01:11:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742633.1149469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNNd-0004TO-2M; Tue, 18 Jun 2024 01:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742633.1149469; Tue, 18 Jun 2024 01:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNNc-0004TH-VT; Tue, 18 Jun 2024 01:11:24 +0000
Received: by outflank-mailman (input) for mailman id 742633;
 Tue, 18 Jun 2024 01:11:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gCZ4=NU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sJNNb-0004TB-OZ
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 01:11:23 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7a743e4-2d0f-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 03:11:19 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id BBD2FCE1258;
 Tue, 18 Jun 2024 01:11:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CCE68C2BD10;
 Tue, 18 Jun 2024 01:11:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7a743e4-2d0f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718673068;
	bh=OQ22/uCThoNFdDtdfZSE4e8hGRjjMHUTDSLgy3euapY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZRpIQ7M2hk3uim0382qQiJ/gJ6QL468G0IRq0Ugv7WvJY+Yya4rTO97SgwXPOuaxZ
	 WmX51j/8KLDwyxmqDkNtVMOiUdOPd5xuy4BbPVWjeNFuq8PkIX8l6asNgeW4v1dSU9
	 LMqQUQNe4K0Jb5Lzf789ypdM7aJHF46tc3Nsa20qM7XK6D1CgRFXprcfjVPoOOsGqf
	 gXlIKlScbnez8iKqrjSm/j5Jlohv9fUAOklSyRu++g0NoaoIz9fe8oSwSXps3X2czZ
	 ots62YI54RNJmhaJyj3xs4vEZVlIIDbWS7WN605TcoFZKtRU3XgmOqQj4fKPKBUaoT
	 C++EuS3APBjCA==
Date: Mon, 17 Jun 2024 18:11:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [XEN PATCH for-4.19 v2 2/3] automation/eclair_analysis: address
 remaining violations of MISRA C Rule 20.12
In-Reply-To: <4ea119f84e075ebcdfe2669527826c269a454d0e.1717790683.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406171810590.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1717790683.git.nicola.vetrini@bugseng.com> <4ea119f84e075ebcdfe2669527826c269a454d0e.1717790683.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 7 Jun 2024, Nicola Vetrini wrote:
> The DEFINE macro in asm-offsets.c (for all architectures) still generates
> violations despite the file(s) being excluded from compliance, because its
> expansion sometimes refers to entities in non-excluded files.
> These corner cases are deviated by the configuration.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 447c1e6661d1..e2653f77eb2c 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -483,6 +483,12 @@ leads to a violation of the Rule are deviated."
>  -config=MC3R1.R20.12,macros+={deliberate, "name(GENERATE_CASE)&&loc(file(deliberate_generate_case))"}
>  -doc_end
> 
> +-doc_begin="The macro DEFINE is defined and used in excluded files asm-offsets.c.
> +This may still cause violations if entities outside these files are referred to
> +in the expansion."
> +-config=MC3R1.R20.12,macros+={deliberate, "name(DEFINE)&&loc(file(asm_offsets))"}
> +-doc_end
> +
>  #
>  # Series 21.
>  #
> --
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 01:17:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 01:17:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742640.1149479 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNTA-000544-Le; Tue, 18 Jun 2024 01:17:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742640.1149479; Tue, 18 Jun 2024 01:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNTA-00053x-Ig; Tue, 18 Jun 2024 01:17:08 +0000
Received: by outflank-mailman (input) for mailman id 742640;
 Tue, 18 Jun 2024 01:17:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gCZ4=NU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sJNT9-00053r-0X
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 01:17:07 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7821bcfa-2d10-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 03:17:03 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 9303961244;
 Tue, 18 Jun 2024 01:17:02 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D9AA5C2BD10;
 Tue, 18 Jun 2024 01:17:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7821bcfa-2d10-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718673422;
	bh=eAiEM6R2ju/hh0m68CvMCvF8ojNqOd5VppqdXgBHb+0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QrBuj16T40GeGBvvtDXXUQhilQHlfywNzzCZDNA8jpOHvaJjvpeHlcTGE1jslDE6J
	 o9qCf6c5//S4fp7Xl7ZAqjgmjW0ddOLYazcULge62bBH4p9TzDPcbqO0W9pEtCjR+U
	 7kdZkHSWXCs4jPGqXqJOuVQnPq/3bpctbbD0AfvOQj/tZMdL1TT5h2vwF4eITVL+sZ
	 oUrf7uv6KjFfgS3AvZeBbPgOAhixV/6q/mvkDse2exKJFOuPUvVdI0WjLsGNflXzUJ
	 nLaSPOUJb06mhRGHsZ+pkaIf6ZxC/ir446BHAO3v+mbP8GfzXYfVF0JQIeVOgO9JIU
	 0qKHi5yhfHdtA==
Date: Mon, 17 Jun 2024 18:16:59 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: Jan Beulich <jbeulich@suse.com>, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>, 
    xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH for-4.19 v2 3/3] automation/eclair_analysis: add more
 clean MISRA guidelines
In-Reply-To: <e199bad317efee793a995523d6d10eac@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406171816370.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1717790683.git.nicola.vetrini@bugseng.com> <42645b41cf9d2d8b5ef72f0b171989711edb00a1.1717790683.git.nicola.vetrini@bugseng.com> <0cae0e19-8512-40e0-9ef2-6e91069779ec@suse.com> <e199bad317efee793a995523d6d10eac@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 10 Jun 2024, Nicola Vetrini wrote:
> On 2024-06-10 09:43, Jan Beulich wrote:
> > On 07.06.2024 22:13, Nicola Vetrini wrote:
> > > Rules 20.9, 20.12 and 14.4 are now clean on ARM and x86, so they are added
> > > to the list of clean guidelines.
> > 
> > Why is 20.9 being mentioned here when ...
> > 
> > > --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
> > > +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
> > > @@ -60,6 +60,7 @@ MC3R1.R11.7||
> > >  MC3R1.R11.9||
> > >  MC3R1.R12.5||
> > >  MC3R1.R14.1||
> > > +MC3R1.R14.4||
> > >  MC3R1.R16.7||
> > >  MC3R1.R17.1||
> > >  MC3R1.R17.3||
> > > @@ -73,6 +74,7 @@ MC3R1.R20.4||
> > >  MC3R1.R20.6||
> > >  MC3R1.R20.9||
> > >  MC3R1.R20.11||
> > > +MC3R1.R20.12||
> > >  MC3R1.R20.13||
> > >  MC3R1.R20.14||
> > >  MC3R1.R21.3||
> > 
> > ... nothing changes in its regard?
> > 
> 
> Right, it should be removed from the message.

I fixed the commit message, acked the patch, and committed it.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 01:39:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 01:39:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742651.1149497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNoj-0008RC-DK; Tue, 18 Jun 2024 01:39:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742651.1149497; Tue, 18 Jun 2024 01:39:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNoj-0008R5-9l; Tue, 18 Jun 2024 01:39:25 +0000
Received: by outflank-mailman (input) for mailman id 742651;
 Tue, 18 Jun 2024 01:39:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iLq=NU=nvidia.com=chaitanyak@srs-se1.protection.inumbo.net>)
 id 1sJNoi-0008Qz-8C
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 01:39:24 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2062d.outbound.protection.outlook.com
 [2a01:111:f403:2418::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 95901594-2d13-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 03:39:22 +0200 (CEST)
Received: from LV3PR12MB9404.namprd12.prod.outlook.com (2603:10b6:408:219::9)
 by SJ2PR12MB8884.namprd12.prod.outlook.com (2603:10b6:a03:547::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Tue, 18 Jun
 2024 01:39:16 +0000
Received: from LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b]) by LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b%5]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 01:39:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95901594-2d13-11ef-90a3-e314d9c70b13
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;
 dkim=pass header.d=nvidia.com; arc=none
From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Geert Uytterhoeven <geert@linux-m68k.org>, Richard Weinberger
	<richard@nod.at>, Philipp Reisner <philipp.reisner@linbit.com>, Lars
 Ellenberg <lars.ellenberg@linbit.com>,
	=?utf-8?B?Q2hyaXN0b3BoIELDtmhtd2FsZGVy?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Alasdair Kergon
	<agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka
	<mpatocka@redhat.com>, Song Liu <song@kernel.org>, Yu Kuai
	<yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "linux-m68k@lists.linux-m68k.org"
	<linux-m68k@lists.linux-m68k.org>, "linux-um@lists.infradead.org"
	<linux-um@lists.infradead.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"dm-devel@lists.linux.dev" <dm-devel@lists.linux.dev>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-s390@vger.kernel.org" <linux-s390@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>, Bart Van Assche
	<bvanassche@acm.org>, Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke
	<hare@suse.de>, Johannes Thumshirn <johannes.thumshirn@wdc.com>
Subject: Re: [PATCH 02/26] sd: remove sd_is_zoned
Thread-Topic: [PATCH 02/26] sd: remove sd_is_zoned
Thread-Index: AQHawHyGRqOeyk9mTUyH0jN7gvO2LrHMv68A
Date: Tue, 18 Jun 2024 01:39:16 +0000
Message-ID: <117628c1-7378-4195-89d1-d5df7d3d22b2@nvidia.com>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-3-hch@lst.de>
In-Reply-To: <20240617060532.127975-3-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla Thunderbird
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=nvidia.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <BA6754379EC35C49B5EFA1BCA5963450@namprd12.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: Nvidia.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: LV3PR12MB9404.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a24f19e2-a3bb-447b-5f5e-08dc8f377743
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 01:39:16.6435
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 43083d15-7273-40c1-b7db-39efd9ccc17a
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 70ANE6Zvl450ORuJDVNcemcJUkoaM3ff48ZcnrnAVEtgTHWVtCloarSJ3+wrSx5yarVIxAT7WJVryThOXcYA5g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8884

On 6/16/24 23:04, Christoph Hellwig wrote:
> Since commit 7437bb73f087 ("block: remove support for the host aware zone
> model"), only ZBC devices expose a zoned access model.  sd_is_zoned is
> used to check for that and thus return false for host aware devices.
>
> Replace the helper with the simple open coded TYPE_ZBC check to fix this.
>
> Fixes: 7437bb73f087 ("block: remove support for the host aware zone model")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>


Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 01:40:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 01:40:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742657.1149506 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNpP-0001Ps-OH; Tue, 18 Jun 2024 01:40:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742657.1149506; Tue, 18 Jun 2024 01:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNpP-0001Pl-LZ; Tue, 18 Jun 2024 01:40:07 +0000
Received: by outflank-mailman (input) for mailman id 742657;
 Tue, 18 Jun 2024 01:40:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iLq=NU=nvidia.com=chaitanyak@srs-se1.protection.inumbo.net>)
 id 1sJNpN-0008Qz-NA
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 01:40:05 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2060e.outbound.protection.outlook.com
 [2a01:111:f403:2418::60e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aeff0e65-2d13-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 03:40:05 +0200 (CEST)
Received: from LV3PR12MB9404.namprd12.prod.outlook.com (2603:10b6:408:219::9)
 by SJ2PR12MB8884.namprd12.prod.outlook.com (2603:10b6:a03:547::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Tue, 18 Jun
 2024 01:40:00 +0000
Received: from LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b]) by LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b%5]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 01:39:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aeff0e65-2d13-11ef-90a3-e314d9c70b13
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;
 dkim=pass header.d=nvidia.com; arc=none
From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Geert Uytterhoeven <geert@linux-m68k.org>, Richard Weinberger
	<richard@nod.at>, Philipp Reisner <philipp.reisner@linbit.com>, Lars
 Ellenberg <lars.ellenberg@linbit.com>,
	=?utf-8?B?Q2hyaXN0b3BoIELDtmhtd2FsZGVy?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Alasdair Kergon
	<agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka
	<mpatocka@redhat.com>, Song Liu <song@kernel.org>, Yu Kuai
	<yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "linux-m68k@lists.linux-m68k.org"
	<linux-m68k@lists.linux-m68k.org>, "linux-um@lists.infradead.org"
	<linux-um@lists.infradead.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"dm-devel@lists.linux.dev" <dm-devel@lists.linux.dev>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-s390@vger.kernel.org" <linux-s390@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>, Bart Van Assche
	<bvanassche@acm.org>, Stefan Hajnoczi <stefanha@redhat.com>, Damien Le Moal
	<dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>, Johannes Thumshirn
	<johannes.thumshirn@wdc.com>
Subject: Re: [PATCH 09/26] virtio_blk: remove virtblk_update_cache_mode
Thread-Topic: [PATCH 09/26] virtio_blk: remove virtblk_update_cache_mode
Thread-Index: AQHawHzWgIhsYEQ2yk+R6EXm0n/PkrHMv+EA
Date: Tue, 18 Jun 2024 01:39:59 +0000
Message-ID: <9485740f-56a2-45ae-9dea-4ee89ba9d937@nvidia.com>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-10-hch@lst.de>
In-Reply-To: <20240617060532.127975-10-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla Thunderbird
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=nvidia.com;
Content-Type: text/plain; charset="utf-8"
Content-ID: <C4BE9555634EB14E9154B298C7604805@namprd12.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: Nvidia.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: LV3PR12MB9404.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1959c64a-710d-4577-02b2-08dc8f3790ae
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 01:39:59.2712
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 43083d15-7273-40c1-b7db-39efd9ccc17a
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: xM0U1IDiJDmLjeW3IkomIb+gOMdS8yym0GH2AoZB03MuGNqhrEUxcxbdpwol8wts+jy2p0YsN9ZCPHVd/MbkFw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8884

On 6/16/24 23:04, Christoph Hellwig wrote:
> virtblk_update_cache_mode boils down to a single call to
> blk_queue_write_cache.  Remove it in preparation for moving the cache
> control flags into the queue_limits.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> Reviewed-by: Johannes Thumshirn <johannes.thumshirn@wdc.com>

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 01:41:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 01:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742665.1149517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNr0-0001xs-2p; Tue, 18 Jun 2024 01:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742665.1149517; Tue, 18 Jun 2024 01:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNqz-0001xl-Vy; Tue, 18 Jun 2024 01:41:45 +0000
Received: by outflank-mailman (input) for mailman id 742665;
 Tue, 18 Jun 2024 01:41:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iLq=NU=nvidia.com=chaitanyak@srs-se1.protection.inumbo.net>)
 id 1sJNqz-0001xf-0D
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 01:41:45 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20624.outbound.protection.outlook.com
 [2a01:111:f403:2412::624])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e906eca9-2d13-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 03:41:42 +0200 (CEST)
Received: from LV3PR12MB9404.namprd12.prod.outlook.com (2603:10b6:408:219::9)
 by LV3PR12MB9143.namprd12.prod.outlook.com (2603:10b6:408:19e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Tue, 18 Jun
 2024 01:41:36 +0000
Received: from LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b]) by LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b%5]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 01:41:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e906eca9-2d13-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XAAiTyEijMTi1pqi6OQZ9E7fo/FrAMCRmGyOhMSoQ9T0szq603SqkEbmZ3abOt5/cdvPvM93TUuAk078DCHKPtZrZB+6zedhGfql7HDA8HXqQME08nP76PuxRsvJybHu+y6LVUarqtFX72aLCiZ52r2mMBS8TUSzV/2yKSLjEJKYXDyIIAbiuZulPdjNRc47TShOtLJKfJoh7/bpX81PCMJ3g5KTOcAMotLp5FrP/blYUzTtDTA9LMtFaRoU8GeML6Qkd8ROVsRsaM2DiG5Te6f2MqpmS4QpiQWbSTMSe2BO51MNIYG4aBqPDhUIQCL1AssHt34v6+mOTUs0Sdy28Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=p+4N1S30rum21qkQT06ApcKDgx92UXru4uX9+/mhiOU=;
 b=VxkNS3sCVR7rNGAOGkWfAvVTZQ5YgLErIftcgFOK6exMbD6PKHWpMeapBzaV73ltwRV1MJV9JvzUEa238InZF/fmCv0Iy4nrkgof6Ov0/S+eF2QFTbaNmhATrhQtvryzcwMMlh5Af1C6eiMAPbK3mJpxS6KoEphm3iBDzsH69tuxwWkT7A9FNxRwiURexpdsAH2QOYvACdaS6Nta3tNpIAZM6AOTKNkMONdjphxoiydOqvTfTLV4PQOHNEO/w9kobZ1nyz1cpK4/ciX64t8/vtJGlUnRcXzVsBiQQC2NotEYBHrxaGVIe39AKJq8k3E6mWFUX6S7FLKGD2Q4zbmKfQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;
 dkim=pass header.d=nvidia.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=p+4N1S30rum21qkQT06ApcKDgx92UXru4uX9+/mhiOU=;
 b=m1c+7rH9cmDdMCNSxBcnPUD/on6ZNGVQ1KRzDnAt/sk23sKsRIvrJ/TrLRg5TaF5oTqx0Hs39BnWhisunIvqRmuYCaEm/Rsl3nsrQb+DfgZ0UXJ+VPZox1MFV7Se6nxLAiL9OOCTmkx7TXByw5IC9xNNtBOTiVbbSt7KGJVUsAQWkN8zHy4XKuIAD4k7skHsWCOlSCGaIj3jGAkRPwCQurVpAzqjsMqo7wUAfKA9UpbffVtZVggzRAef5tm13cDTwQkacw3b7Xv2XEdfbCG1xR8B3vAT4zrxXTo5WdNAJcLWI+38SLCYUNaK5sIjD7gdxEtKNdjzo/9bADT9DMGIjA==
From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Geert Uytterhoeven <geert@linux-m68k.org>, Richard Weinberger
	<richard@nod.at>, Philipp Reisner <philipp.reisner@linbit.com>, Lars
 Ellenberg <lars.ellenberg@linbit.com>,
	=?utf-8?B?Q2hyaXN0b3BoIELDtmhtd2FsZGVy?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Alasdair Kergon
	<agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka
	<mpatocka@redhat.com>, Song Liu <song@kernel.org>, Yu Kuai
	<yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "linux-m68k@lists.linux-m68k.org"
	<linux-m68k@lists.linux-m68k.org>, "linux-um@lists.infradead.org"
	<linux-um@lists.infradead.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"dm-devel@lists.linux.dev" <dm-devel@lists.linux.dev>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-s390@vger.kernel.org" <linux-s390@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>, Bart Van Assche
	<bvanassche@acm.org>, Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke
	<hare@suse.de>
Subject: Re: [PATCH 11/26] block: freeze the queue in queue_attr_store
Thread-Topic: [PATCH 11/26] block: freeze the queue in queue_attr_store
Thread-Index: AQHawHztZf+US5g9EkKLFc+wTLOrGbHMwFUA
Date: Tue, 18 Jun 2024 01:41:36 +0000
Message-ID: <d0d47735-ca4d-4744-bb8a-607d59c67315@nvidia.com>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-12-hch@lst.de>
In-Reply-To: <20240617060532.127975-12-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla Thunderbird
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=nvidia.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: LV3PR12MB9404:EE_|LV3PR12MB9143:EE_
x-ms-office365-filtering-correlation-id: 4c152403-08d1-47d7-1138-08dc8f37ca9c
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|376011|7416011|1800799021|366013|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:LV3PR12MB9404.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(376011)(7416011)(1800799021)(366013)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <F0E393A774C60C469B0101BC1E571406@namprd12.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: Nvidia.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: LV3PR12MB9404.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4c152403-08d1-47d7-1138-08dc8f37ca9c
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 01:41:36.4521
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 43083d15-7273-40c1-b7db-39efd9ccc17a
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: WRLy+uAERJ1P0hWsb9WWiPJU0Nl7a1j1bdEqUClZSeVr7v3i2pq/EX+cPq+Qu3jBm86xyl8mWbPMkGpOs1cK5A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV3PR12MB9143

On 6/16/24 23:04, Christoph Hellwig wrote:
> queue_attr_store updates attributes used to control generating I/O, and
> can cause malformed bios if changed with I/O in flight.  Freeze the queue
> in common code instead of adding it to almost every attribute.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> Reviewed-by: Hannes Reinecke <hare@suse.de>

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck
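
[Editor's illustration] The patch under review here ([PATCH 11/26] block: freeze the queue in queue_attr_store) moves queue freezing into the common queue_attr_store path so that attributes controlling I/O generation cannot change while I/O is in flight. A rough, hedged Python model of that freeze-around-update idea follows; ModelQueue and its method names are hypothetical stand-ins, not the kernel API (the real mechanism is blk_mq_freeze_queue()/blk_mq_unfreeze_queue()), and a single mutex stands in for the kernel's per-queue freeze refcounting:

```python
import threading


class ModelQueue:
    """Toy stand-in for a block request queue: attribute updates are
    serialized against I/O submission, mimicking queue freezing."""

    def __init__(self, max_sectors=255):
        # Held exclusively while the queue is "frozen" for an update.
        self._frozen = threading.Lock()
        self.max_sectors = max_sectors

    def attr_store(self, name, value):
        # Common store path: freeze once here, instead of duplicating
        # the freeze/unfreeze pair in every attribute handler.
        with self._frozen:
            setattr(self, name, value)

    def submit_io(self):
        # Submission reads the attribute under the same lock, so it can
        # never observe a half-updated limit mid-submission.
        with self._frozen:
            return self.max_sectors
```

In this model, q.attr_store("max_sectors", 512) cannot interleave with q.submit_io(), which is the property the commit message wants for sysfs attribute stores.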


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 01:42:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 01:42:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742669.1149527 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNrZ-0002RY-BY; Tue, 18 Jun 2024 01:42:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742669.1149527; Tue, 18 Jun 2024 01:42:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJNrZ-0002RR-7Q; Tue, 18 Jun 2024 01:42:21 +0000
Received: by outflank-mailman (input) for mailman id 742669;
 Tue, 18 Jun 2024 01:42:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2iLq=NU=nvidia.com=chaitanyak@srs-se1.protection.inumbo.net>)
 id 1sJNrY-0002Q8-4E
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 01:42:20 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20614.outbound.protection.outlook.com
 [2a01:111:f403:2412::614])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe41b149-2d13-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 03:42:18 +0200 (CEST)
Received: from LV3PR12MB9404.namprd12.prod.outlook.com (2603:10b6:408:219::9)
 by LV3PR12MB9143.namprd12.prod.outlook.com (2603:10b6:408:19e::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Tue, 18 Jun
 2024 01:42:13 +0000
Received: from LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b]) by LV3PR12MB9404.namprd12.prod.outlook.com
 ([fe80::57ac:82e6:1ec5:f40b%5]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 01:42:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe41b149-2d13-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cz6BEx4CprwjUxnx0wQGmK6ABnqQWoGDgcRu/HJcZVFncWnK6kkJMxQwJgzNmkGBgJDxs1C5pfHJIJlJNxLkKUoHRhe55E9KBNgN4T8l3DdIRdurQsiEsoDU/8yCvyXv23fPA0C5hoQIfAPsbdLbOMaG0Vp1c1umIALwICbm7LsGSEyqcgarrbIB+KRK46DDD6isw4/YQcIKk+QvaLzvn/YsaxUxwmP9cMUGcdyCMTnUnoIkRxn8t2oIu+I5q5XFgUCKJXDVUOq9O842RKlRfO/liTF57I5HbpvEOBejQwm3kQhlcvnsheVMTbxxhgYbh1x8KvpyqFFJGcg5w3m/mA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AvqBCrHLxrSkVI4/ya++JVzMGTBvamOk/ZlCrkTQ4C8=;
 b=iUD50X4pidXQFI7huAOAJe87IqTyF/Ec9wz+31dsQNX1kwzoDQwj20XfhW3EwHvi+aHGlWohHu4c9v6YkpknXy7zcopHJfGnCLMIqyKdZmxW2GOCO3ZQh+vUdfy1Dj1JIunF++ZK/4x6Z3h+Xkt1rI6mN/OkoxoujCMs+TYq1xsdJiLHUuDJodIzcaOlGiOwOTcdZFoRXz66OwfAhxkJi/5GY0bwUfrU22Vf0dLd9vhBHjnpj8fRAB2dcYDqSJQXIhOuWj4SE4PflMDWQtVEUG5xsCzJ9bH0D7sYzTfgPaJZ5CbZjjAbBMlAjwUYWqWWChvyqyBpkvqdMc3gtg2CTw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nvidia.com; dmarc=pass action=none header.from=nvidia.com;
 dkim=pass header.d=nvidia.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=Nvidia.com;
 s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AvqBCrHLxrSkVI4/ya++JVzMGTBvamOk/ZlCrkTQ4C8=;
 b=Kvw9/6byNcgTDwCWHNJ8GC7jvqIfi3Ip1Pk/zNEco5FhrEdCCo15nbxxXFeZVMEIqEVwn5h3OCG4k8T7SMW72s6r/MD3i3OdZo7Fs2PG+lJoACJMG0Sl6ro0JsTSO7OaQXovekeU3RfCe8URm2SAbsB2G3sQ17t1oPiXUYi1ajWBqSi2yOKH/M3Q3k8Jp1EV+bGDZ6Q4mg+tldO0d3ugw9IU1/dnD7+Vpx4fCYJKfldOKXD3qpZ859mRFvvCGUlEtVQU87Z3Sc53cm6zfBidI3VyJkqFWr1jGgngBPEknDtk612To35RVbnnUvcaT5U3jM3ZhTIErcNTXZ+etRhkAg==
From: Chaitanya Kulkarni <chaitanyak@nvidia.com>
To: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>
CC: Geert Uytterhoeven <geert@linux-m68k.org>, Richard Weinberger
	<richard@nod.at>, Philipp Reisner <philipp.reisner@linbit.com>, Lars
 Ellenberg <lars.ellenberg@linbit.com>,
	=?utf-8?B?Q2hyaXN0b3BoIELDtmhtd2FsZGVy?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, "Michael
 S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Alasdair Kergon
	<agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, Mikulas Patocka
	<mpatocka@redhat.com>, Song Liu <song@kernel.org>, Yu Kuai
	<yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>, "Martin K.
 Petersen" <martin.petersen@oracle.com>, "linux-m68k@lists.linux-m68k.org"
	<linux-m68k@lists.linux-m68k.org>, "linux-um@lists.infradead.org"
	<linux-um@lists.infradead.org>, "drbd-dev@lists.linbit.com"
	<drbd-dev@lists.linbit.com>, "nbd@other.debian.org" <nbd@other.debian.org>,
	"linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"ceph-devel@vger.kernel.org" <ceph-devel@vger.kernel.org>,
	"virtualization@lists.linux.dev" <virtualization@lists.linux.dev>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"linux-bcache@vger.kernel.org" <linux-bcache@vger.kernel.org>,
	"dm-devel@lists.linux.dev" <dm-devel@lists.linux.dev>,
	"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
	"linux-mmc@vger.kernel.org" <linux-mmc@vger.kernel.org>,
	"linux-mtd@lists.infradead.org" <linux-mtd@lists.infradead.org>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>,
	"linux-nvme@lists.infradead.org" <linux-nvme@lists.infradead.org>,
	"linux-s390@vger.kernel.org" <linux-s390@vger.kernel.org>,
	"linux-scsi@vger.kernel.org" <linux-scsi@vger.kernel.org>,
	"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>, Bart Van Assche
	<bvanassche@acm.org>, Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke
	<hare@suse.de>
Subject: Re: [PATCH 12/26] block: remove blk_flush_policy
Thread-Topic: [PATCH 12/26] block: remove blk_flush_policy
Thread-Index: AQHawHz28nZoEIRw00+RUjxGUtmA5LHMwIGA
Date: Tue, 18 Jun 2024 01:42:13 +0000
Message-ID: <1060c01c-febc-40d0-95ad-0be879c05545@nvidia.com>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-13-hch@lst.de>
In-Reply-To: <20240617060532.127975-13-hch@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Mozilla Thunderbird
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=nvidia.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: LV3PR12MB9404:EE_|LV3PR12MB9143:EE_
x-ms-office365-filtering-correlation-id: 52b9fe8e-a740-4958-c050-08dc8f37e0ba
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|376011|7416011|1800799021|366013|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:LV3PR12MB9404.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(376011)(7416011)(1800799021)(366013)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <4C038F52BEFB8F419B64CC739CDBAAE2@namprd12.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: Nvidia.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: LV3PR12MB9404.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 52b9fe8e-a740-4958-c050-08dc8f37e0ba
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 01:42:13.5830
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 43083d15-7273-40c1-b7db-39efd9ccc17a
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: o6WqYsOoZkowW1dtrOjCSGbhd3lBCoC6MtH/Y9tz3WCPrl1k9cYw2gR+RauMyHkz5ngZPKhagSHPfMHoDmZi+Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV3PR12MB9143

On 6/16/24 23:04, Christoph Hellwig wrote:
> Fold blk_flush_policy into the only caller to prepare for pending changes
> to it.
>
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Bart Van Assche <bvanassche@acm.org>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> Reviewed-by: Hannes Reinecke <hare@suse.de>
> ---

Looks good.

Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>

-ck


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 03:53:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 03:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742681.1149537 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJPtv-0006xr-E0; Tue, 18 Jun 2024 03:52:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742681.1149537; Tue, 18 Jun 2024 03:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJPtv-0006xk-As; Tue, 18 Jun 2024 03:52:55 +0000
Received: by outflank-mailman (input) for mailman id 742681;
 Tue, 18 Jun 2024 03:52:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJPtt-0006xa-Na; Tue, 18 Jun 2024 03:52:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJPtt-0006ar-9n; Tue, 18 Jun 2024 03:52:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJPts-0004T7-Vj; Tue, 18 Jun 2024 03:52:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJPts-0002iz-VD; Tue, 18 Jun 2024 03:52:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vYt+jrnthf05SUWCS6FkJOxGcMAjJzK8flqgSY0+66U=; b=0XCObriCTGQaev/YW/pyOwtAQV
	2aaR2AD3uxc7sbBjd82TQME8Qzkp+ZqpeQWiLf7QgC4m1Lf4/Hr0Uf22MxRG8Ef4xj2ciwSxzO01B
	idh6jyCOVBArzdWQw4vtXu7oukMXeVRKKiXbvEibgd3X0AS+PMSPWIuazB13GvBLo90k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186385-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186385: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6226e74900d7c106c7c86b878dc6779cfdb20c2b
X-Osstest-Versions-That:
    linux=6ba59ff4227927d3a8530fc2973b80e94b54d58f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 03:52:52 +0000

flight 186385 linux-linus real [real]
flight 186388 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186385/
http://logs.test-lab.xenproject.org/osstest/logs/186388/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1   8 xen-boot            fail pass in 186388-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186388 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186388 never pass
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186377
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186381
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186381
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186381
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186381
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186381
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186381
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6226e74900d7c106c7c86b878dc6779cfdb20c2b
baseline version:
 linux                6ba59ff4227927d3a8530fc2973b80e94b54d58f

Last test of basis   186381  2024-06-17 13:12:17 Z    0 days
Testing same since   186385  2024-06-17 19:43:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aditya Nagesh <adityanagesh@linux.microsoft.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Kelley <mhklinux@outlook.com>
  Saurabh Sengar <ssengar@linux.microsoft.com>
  Wei Liu <wei.liu@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   6ba59ff42279..6226e74900d7  6226e74900d7c106c7c86b878dc6779cfdb20c2b -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 06:25:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 06:25:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742705.1149546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSHh-0001Yx-IW; Tue, 18 Jun 2024 06:25:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742705.1149546; Tue, 18 Jun 2024 06:25:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSHh-0001Yq-FM; Tue, 18 Jun 2024 06:25:37 +0000
Received: by outflank-mailman (input) for mailman id 742705;
 Tue, 18 Jun 2024 06:25:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kZif=NU=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJSHf-0001Yk-HR
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 06:25:35 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on2061b.outbound.protection.outlook.com
 [2a01:111:f403:240a::61b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90871517-2d3b-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 08:25:33 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by DS7PR12MB6144.namprd12.prod.outlook.com (2603:10b6:8:98::8) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.7677.30; Tue, 18 Jun 2024 06:25:29 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 06:25:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90871517-2d3b-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kv6u6JXxI9WdGBplVaHiB8zDK4/6QDVEKVY5CuWLk6HjUKCF/ILJ3KY8tfB0FyVo+HtVWhZySr2A4+0SeY1tETNuYjgSNkFgdQCEj5/JjTgN/j0WUrGU5aEZcXxh+CxdIVQrjfyVTldRFJPxogrQNsE0NfWFV1lkKg1/K8gGlS6Glg5nVTuFBU7Pcr4nj902muf+TL1r8CiNfzp2AYChpl0jISCx4wKsE4525VnHG5l7FTbw0SOOrB6QGVApBhkd5WioKJSBwI4uCPv/kiq96G/jV2JUxZQBxxvmA7gferwktHVVcIkhx/s6Il/DZnew/LtElO3UVLcIff5RbNvwuA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aF/u2HYjaNDBqyK+8tQIhNMCEVfbij6OHSG2EI0sfOw=;
 b=ckAi/pkk3C8qZ5VsGODkDrgn4TG4a6wl7FmzIIXUVJZmVOaKlg55sSmENALcWZCB1lGKpM0EFtlnJS9Ruv8nXqHjzRsXw2oVvKmZKVLr7JBEogOXOOP42iktdQHyRtOwVT/32CjAGG9q/dinCb+Q010DHbPnurAnOL0NmyeuFeFjwd/AG4GOLNheOUFav916enhyLK9wKxEO3APLlbb8bHtNqYfrI9Hw2VuuWh6auyQ3Pa912mh2PPls50/Tk8UJHgSzougKkfH82K8GoUjYhRAMaMf5u1Jc9LX5tJfYpboLqN2Txv95CZXoIX0yFev0fDEQYoUb1jNZTfaLQg18Pw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aF/u2HYjaNDBqyK+8tQIhNMCEVfbij6OHSG2EI0sfOw=;
 b=C9jTkCq1HBXFKx5Z7O7DZ4UDJCErYbpOwZt2coYINuZrFnQfEgDMGHqi0Deao5U90liND9BJwy6A0Rs0Jq3o0U7KcXebAEERnDywNBpOYWY9cGFhc4mOsZJYHAE90V04z2gDXinWFnc7P9SP8+fmsxeXQYNHCnoolAXA4ESBl1M=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
Thread-Topic: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
Thread-Index: AQHawJTga4AgQ2zhd0+hWXdwZesfOrHMAOYAgAGQCAA=
Date: Tue, 18 Jun 2024 06:25:28 +0000
Message-ID:
 <BL1PR12MB58499527CFA36446EAD3FCE0E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-2-Jiqian.Chen@amd.com>
 <4e2accc2-e81d-450a-af2d-38884455de9c@suse.com>
In-Reply-To: <4e2accc2-e81d-450a-af2d-38884455de9c@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|DS7PR12MB6144:EE_
x-ms-office365-filtering-correlation-id: 2f789970-0eab-49d9-d350-08dc8f5f72b7
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <4F7DEA5FB5B5544D9091BE68FD21122C@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2f789970-0eab-49d9-d350-08dc8f5f72b7
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 06:25:28.8866
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: HHYz98x98tnGdrTL7M7zN7Cc6rYQhHA712gWdIANzHZHMIWo1nqySxdVddIuIYPQMhd88mAaqHPGrBi9jdUKPQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB6144

On 2024/6/17 22:17, Jan Beulich wrote:
> On 17.06.2024 11:00, Jiqian Chen wrote:
>> --- a/xen/drivers/pci/physdev.c
>> +++ b/xen/drivers/pci/physdev.c
>> @@ -2,11 +2,17 @@
>>  #include <xen/guest_access.h>
>>  #include <xen/hypercall.h>
>>  #include <xen/init.h>
>> +#include <xen/vpci.h>
>>  
>>  #ifndef COMPAT
>>  typedef long ret_t;
>>  #endif
>>  
>> +static const struct pci_device_state_reset_method
>> +                    pci_device_state_reset_methods[] = {
>> +    [ DEVICE_RESET_FLR ].reset_fn = vpci_reset_device_state,
>> +};
> 
> What about the other three DEVICE_RESET_*? In particular ...
I don't know how to implement the other three types of reset.
This is designed as a framework so that the corresponding processing functions can be added later if necessary. Do I need to set them to NULL pointers in this array?
Does this form conform to your previous suggestion of using only one hypercall to handle all types of resets?

> 
>> @@ -67,6 +73,43 @@ ret_t pci_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>>          break;
>>      }
>>  
>> +    case PHYSDEVOP_pci_device_state_reset: {
>> +        struct pci_device_state_reset dev_reset;
>> +        struct physdev_pci_device *dev;
>> +        struct pci_dev *pdev;
>> +        pci_sbdf_t sbdf;
>> +
>> +        if ( !is_pci_passthrough_enabled() )
>> +            return -EOPNOTSUPP;
>> +
>> +        ret = -EFAULT;
>> +        if ( copy_from_guest(&dev_reset, arg, 1) != 0 )
>> +            break;
>> +        dev = &dev_reset.dev;
>> +        sbdf = PCI_SBDF(dev->seg, dev->bus, dev->devfn);
>> +
>> +        ret = xsm_resource_setup_pci(XSM_PRIV, sbdf.sbdf);
>> +        if ( ret )
>> +            break;
>> +
>> +        pcidevs_lock();
>> +        pdev = pci_get_pdev(NULL, sbdf);
>> +        if ( !pdev )
>> +        {
>> +            pcidevs_unlock();
>> +            ret = -ENODEV;
>> +            break;
>> +        }
>> +
>> +        write_lock(&pdev->domain->pci_lock);
>> +        pcidevs_unlock();
>> +        ret = pci_device_state_reset_methods[dev_reset.reset_type].reset_fn(pdev);
> 
> ... you're setting this up for calling NULL. In fact there's also no bounds
> check for the array index.
Oh, right. I will add checks in the next version.

> 
> Also, nit (further up): Opening curly braces for a new scope go onto their
> own line. Then again I notice that apparently _all_ other instances in this
> file are doing it the wrong way, too.
OK, will change in the next version.
Do I need to change the other instances in this patch?

> 
> Finally, is the "dev" local variable really needed? It effectively hides that
> PCI_SBDF() is invoked on the hypercall arguments.
Will remove "dev" in the next version.

> 
>> +        write_unlock(&pdev->domain->pci_lock);
>> +        if ( ret )
>> +            printk(XENLOG_ERR "%pp: failed to reset vPCI device state\n", &sbdf);
> 
> Maybe downgrade to dprintk()? The caller ought to handle the error anyway.
Will downgrade in the next version.

> 
>> --- a/xen/drivers/vpci/vpci.c
>> +++ b/xen/drivers/vpci/vpci.c
>> @@ -172,6 +172,15 @@ int vpci_assign_device(struct pci_dev *pdev)
>>  
>>      return rc;
>>  }
>> +
>> +int vpci_reset_device_state(struct pci_dev *pdev)
> 
> As a target of an indirect call this needs to be annotated cf_check (both
> here and in the declaration, unlike __must_check, which is sufficient to
> have on just the declaration).
OK, will add cf_check in the next version.

> 
>> --- a/xen/include/xen/pci.h
>> +++ b/xen/include/xen/pci.h
>> @@ -156,6 +156,22 @@ struct pci_dev {
>>      struct vpci *vpci;
>>  };
>>  
>> +struct pci_device_state_reset_method {
>> +    int (*reset_fn)(struct pci_dev *pdev);
>> +};
>> +
>> +enum pci_device_state_reset_type {
>> +    DEVICE_RESET_FLR,
>> +    DEVICE_RESET_COLD,
>> +    DEVICE_RESET_WARM,
>> +    DEVICE_RESET_HOT,
>> +};
>> +
>> +struct pci_device_state_reset {
>> +    struct physdev_pci_device dev;
>> +    enum pci_device_state_reset_type reset_type;
>> +};
> 
> This is the struct to use as hypercall argument. How can it live outside of
> any public header? Also, when moving it there, beware that you should not
> use enum-s there. Only handles and fixed-width types are permitted.
Yes, I put them there before, but an enum is not permitted.
Then, do you have another suggested type to distinguish the different types of resets, since an enum can't work in the public header?

> 
>> --- a/xen/include/xen/vpci.h
>> +++ b/xen/include/xen/vpci.h
>> @@ -38,6 +38,7 @@ int __must_check vpci_assign_device(struct pci_dev *pdev);
>>  
>>  /* Remove all handlers and free vpci related structures. */
>>  void vpci_deassign_device(struct pci_dev *pdev);
>> +int __must_check vpci_reset_device_state(struct pci_dev *pdev);
> 
> What's the purpose of this __must_check, when the sole caller is calling
> this through a function pointer, which isn't similarly annotated?
This is what I added before introducing the function pointers, but after modifying the implementation it was not taken into account.
I will remove __must_check and change it to cf_check, per your comment above.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 06:33:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 06:33:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742716.1149557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSPj-0003LX-DE; Tue, 18 Jun 2024 06:33:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742716.1149557; Tue, 18 Jun 2024 06:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSPj-0003LQ-9l; Tue, 18 Jun 2024 06:33:55 +0000
Received: by outflank-mailman (input) for mailman id 742716;
 Tue, 18 Jun 2024 06:33:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R1NC=NU=amd.com=Christian.Koenig@srs-se1.protection.inumbo.net>)
 id 1sJSPi-0003LK-7A
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 06:33:54 +0000
Received: from NAM11-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam11on20612.outbound.protection.outlook.com
 [2a01:111:f403:2414::612])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ba90a821-2d3c-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 08:33:53 +0200 (CEST)
Received: from SJ0PR12MB5673.namprd12.prod.outlook.com (2603:10b6:a03:42b::13)
 by DM4PR12MB6566.namprd12.prod.outlook.com (2603:10b6:8:8d::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Tue, 18 Jun
 2024 06:33:47 +0000
Received: from SJ0PR12MB5673.namprd12.prod.outlook.com
 ([fe80::ec7a:dd71:9d6c:3062]) by SJ0PR12MB5673.namprd12.prod.outlook.com
 ([fe80::ec7a:dd71:9d6c:3062%2]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 06:33:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba90a821-2d3c-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XzLmQO2vANzlkqIshnofyBhtA0y5AIiJxWYy9yUM+TxQoZqFpWwgVXUkJN9q3zVUsEcwDQwgbLW8uaSby2jTpxeYamB4Z1Y4Puz75zuacwUVq/HLzexlOAzAYhZwq6Sw1wJyJzRY+znlk15UXzTVocVseX1TcnioPheAVhxEiiYGQzq/v9ghRCYU2MJst1tsW5YRTCcBlf7qdbHMuvcXxwWy7IwPfxCHVZz6C7zrHr5c8gSSpj+/zQlt06Bt6iM3iT1lawMo1xusxj4uCpqiO62FUAjmWTe1aGTsTbsDNmL5qp1Tgd3tKECweWiLiLRqbXhskqqSs8R5OPDKd4vdBQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Q+/pLV36q1F0ZCqIFBAoYGsfYfHZZGIWtg7sDNv5GgA=;
 b=NOW5bJrllBryMAodAYSTEMPC+SArfNtZ71psT3bEer3Y0Y12Kq1fmn9WV5BUoZc8WDg7eCYScHzGnk1K4Nh1S/TdDkh00T4MTKPV1DxhNy/wSF1tcguauT/hBp+yNeYpzcAnKK2+heVPN05QxkHYHry/V0bHIrzd1NDm+AgLVOOKK4jHCHNJTl+/bJDxtI/xsDm++mIyk/4twp5qBJxqtGn+I8S8sTXqI+F9VSfBug8hPMCQGnMFUHWU4oGPUt45aOdYvsf/iNwhix6X4OmX9lavpsFda81t/lEc4ph4dUteq2t5NIM+teRHCoesP3hUgTw3Jb48vPCWPMCHkZvIOg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Q+/pLV36q1F0ZCqIFBAoYGsfYfHZZGIWtg7sDNv5GgA=;
 b=q1tVTr+0TOKrt0iSPoKHe6HcRh9jPERfehSc0cGiGtIZ4LYSO0tqxg1tun3PHwW4gF4qOZlrort3rg/vrNOtCZB7zzbsIzxueQxhHXBsksGptvlGw5NNu3W1zlp21rtlvHIn/D4uyoXK8s6+nkE9mosZEJWT81nOEPNuuMNedpI=
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
Message-ID: <0b00c8f9-fb79-4b11-ae22-931205653203@amd.com>
Date: Tue, 18 Jun 2024 08:33:38 +0200
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, Xenia Ragiadakou
 <burzalodowa@gmail.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Direct Rendering Infrastructure development
 <dri-devel@lists.freedesktop.org>,
 Qubes OS Development Mailing List <qubes-devel@googlegroups.com>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com> <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com> <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email> <Zm_p1QvoZcjQ4gBa@macbook>
 <ZnCglhYlXmRPBZXE@mail-itl> <ZnDbaply6KaBUKJb@itl-email>
Content-Language: en-US
From: =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>
In-Reply-To: <ZnDbaply6KaBUKJb@itl-email>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR0P281CA0064.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:49::8) To SJ0PR12MB5673.namprd12.prod.outlook.com
 (2603:10b6:a03:42b::13)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ0PR12MB5673:EE_|DM4PR12MB6566:EE_
X-MS-Office365-Filtering-Correlation-Id: e4355196-0b44-4b39-cce1-08dc8f609bed
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230037|1800799021|376011|366013;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e4355196-0b44-4b39-cce1-08dc8f609bed
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR12MB5673.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 Jun 2024 06:33:47.7877
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dbdHAEG6s0S0mMFAqsF3EduaDZ5XlNI/vvYBjteK21lkmJlRJPeGTd4YmXrxhKxi
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6566

Am 18.06.24 um 02:57 schrieb Demi Marie Obenour:
> On Mon, Jun 17, 2024 at 10:46:13PM +0200, Marek Marczykowski-Górecki wrote:
> > On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
> >> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> >>> In both cases, the device physical
> >>> addresses are identical to dom0’s physical addresses.
> >>
> >> Yes, but a PV dom0 physical address space can be very scattered.
> >>
> >> IIRC there's an hypercall to request physically contiguous memory for
> >> PV, but you don't want to be using that every time you allocate a
> >> buffer (not sure it would support the sizes needed by the GPU
> >> anyway).
>
> > Indeed that isn't going to fly. In older Qubes versions we had PV
> > sys-net with PCI passthrough for a network card. After some uptime it
> > was basically impossible to restart and still have enough contiguous
> > memory for a network driver, and there it was about _much_ smaller
> > buffers, like 2M or 4M. At least not without shutting down a lot more
> > things to free some more memory.
>
> Ouch!  That makes me wonder if all GPU drivers actually need physically
> contiguous buffers, or if it is (as I suspect) driver-specific. CCing
> Christian König who has mentioned issues in this area.

Well, GPUs don't need physically contiguous memory to function, but if
they only get 4k pages to work with, it means a quite large (up to 30%)
performance penalty.

So scattering memory like you described is probably a very bad idea if
you want any halfway decent performance.

Regards,
Christian.

>
> Given the recent progress on PVH dom0, is it reasonable to assume that
> PVH dom0 will be ready in time for R4.3, and that therefore Qubes OS
> doesn't need to worry about this problem on x86?



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 06:49:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 06:49:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742723.1149567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSf5-0005a1-Mt; Tue, 18 Jun 2024 06:49:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742723.1149567; Tue, 18 Jun 2024 06:49:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSf5-0005Zu-KJ; Tue, 18 Jun 2024 06:49:47 +0000
Received: by outflank-mailman (input) for mailman id 742723;
 Tue, 18 Jun 2024 06:49:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kZif=NU=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJSf4-0005Zo-PY
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 06:49:46 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f403:2417::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f213d70b-2d3e-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 08:49:45 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by SJ1PR12MB6265.namprd12.prod.outlook.com (2603:10b6:a03:458::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Tue, 18 Jun
 2024 06:49:42 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 06:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f213d70b-2d3e-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=akOmFUnMh9zIUyNTxQZb++H4M0PzBXIEDcUOu3Xk0TK+ZRrS/aUGBX2VGE8k5dibIHz8ZXVK9jeGAas95NSQcSb68Z+9A36rTnUwTVhthqt8/T7O4WLi3iqFxyvRM6vYUYgxpfrRVxSYAw87XWH82inKGkdFmoqWzIYnIzwhXZ1wIU2chpz4SJHSfpTYs3xIZchzKET2XIm1f1GVt/EVTGKYpAGUJvHQHKZALYE4/s1EnKD5ztu75zTPVTfZD3ZodF5sMg88q8SuDiysium7uzEVhUrVJfZirL9iwWk2iPbckLok8Zb7+cfYAf8L8L2rOJwVz0XhqArSefnz3wvs5A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=7Z317kdvVsYYquI0dJeuWg9+8nI6pwxFMuT/MhT7h7M=;
 b=eoMmcfRumMy4HIPoF/hgWt95aIuB2nWWiekqSM7G4810cJD45h6U264AUZUOzq5diLkz/7yfbNDnZqPL02yziRB5Ba7SbzLxUqddCGhcOVfQCWnsDoVG5DMlHmFddCimAZ2JMoBgO0ldQjy6836wWdpiC8HE8xoBUI+L4K4XzjGaTAJhwyJ4cKc9/iDAnAv+KuqjzchGAOHJnXFIwSYkLg9NNd68Z+fZThKPaLC8pUXkQVtLC93wZWVhZIS9nvYLGV76iy5aO/iKZ28+Wr9+NJFp97QH6oBOlG5CrftQ0IwFiqIpTwjslD0AgejdkNQPylpGr/+sUyJQHSVcYXmiKA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7Z317kdvVsYYquI0dJeuWg9+8nI6pwxFMuT/MhT7h7M=;
 b=pHO9/VmuVUnBtcv0X6PN37PGN6H62bhIGPQlapsR8UCCpgEPjxiT08rkr6iTu+liceKLe+FaxJILmeoXYmGf6Hw4BzXS1a12PghleASHcaRBX2/dFblP8imn7zLzk8vI7s+ndriAN+6ax3zLbBNJW9Tz8ovey99Z9eXRsFXpFqM=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Topic: [XEN PATCH v10 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Index: AQHawJTj1WGIKHaz7U+FK+dQX59GpLHMCLWAgAGOVIA=
Date: Tue, 18 Jun 2024 06:49:42 +0000
Message-ID:
 <BL1PR12MB5849FC16D91FADD5B7D30A63E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-3-Jiqian.Chen@amd.com>
 <cb9910cd-7045-4c0d-a7cf-2bcf36e30cb2@suse.com>
In-Reply-To: <cb9910cd-7045-4c0d-a7cf-2bcf36e30cb2@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|SJ1PR12MB6265:EE_
x-ms-office365-filtering-correlation-id: e5a653d3-6788-4102-d49e-08dc8f62d4e2
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|366013|7416011|1800799021|376011|38070700015;
Content-Type: text/plain; charset="utf-8"
Content-ID: <2688986108B4454EA3C0123B9EEC9382@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e5a653d3-6788-4102-d49e-08dc8f62d4e2
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 06:49:42.0468
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: ElsbhQtxZBQy4lN+ShCv+8wTCSxK6qPM3WYLWjoXcfgUX5iH4h+Iut5ZWPOhPQb59Bq2njwK9Do2WNeIs84hwg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ1PR12MB6265

On 2024/6/17 22:45, Jan Beulich wrote:
> On 17.06.2024 11:00, Jiqian Chen wrote:
>> If run Xen with PVH dom0 and hvm domU, hvm will map a pirq for
>> a passthrough device by using gsi, see qemu code
>> xen_pt_realize->xc_physdev_map_pirq and libxl code
>> pci_add_dm_done->xc_physdev_map_pirq. Then xc_physdev_map_pirq
>> will call into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
>> is not allowed because currd is PVH dom0 and PVH has no
>> X86_EMU_USE_PIRQ flag, it will fail at has_pirq check.
>>
>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
>> PHYSDEVOP_unmap_pirq for the failed path to unmap pirq.
> 
> Why "failed path"? Isn't unmapping also part of normal device removal
> from a guest?
Yes, both. I will change it to also say "allow PHYSDEVOP_unmap_pirq for the device removal path to unmap pirq".

> 
>> And
>> add a new check to prevent self map when subject domain has no
>> PIRQ flag.
> 
> You still talk of only self mapping, and the code also still does only
> that. As pointed out before: Why would you allow mapping into a PVH
> DomU? IOW what purpose do the "d == currd" checks have?
The check I added has two purposes. First, I need to allow this case:
Dom0 (without PIRQ) + DomU (with PIRQ), because the original code just does (!has_pirq(currd)), which will cause map_pirq to fail in this case.
Second, I need to disallow self-mapping:
a DomU (without PIRQ) doing map_pirq; the "d==currd" means currd is the subject domain itself.

Emmm, I think I know your concern.
Do you mean I need to
" Prevent map_pirq when currd has no X86_EMU_USE_PIRQ flag "
instead of
" Prevent self-map when currd has no X86_EMU_USE_PIRQ flag ",
so I need to remove "d==currd", right?

> 
>> So that domU with PIRQ flag can success to map pirq for
>> passthrough devices even dom0 has no PIRQ flag.
> 
> There's still a description problem here. Much like the first sentence,
> this last one also says that the guest would itself map the pIRQ. In
> which case there would still not be any reason to expose the sub-
> functions to Dom0.
If I change it to " So that the interrupt of a passthrough device can successfully be mapped to a pirq for a domU with the PIRQ flag when dom0 is PVH. ",
is that OK?

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 06:57:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 06:57:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742730.1149578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSmh-00078I-GH; Tue, 18 Jun 2024 06:57:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742730.1149578; Tue, 18 Jun 2024 06:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSmh-00078B-BG; Tue, 18 Jun 2024 06:57:39 +0000
Received: by outflank-mailman (input) for mailman id 742730;
 Tue, 18 Jun 2024 06:57:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kZif=NU=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJSmf-000785-Kk
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 06:57:37 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on20615.outbound.protection.outlook.com
 [2a01:111:f403:2407::615])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a4f6130-2d40-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 08:57:35 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by SA3PR12MB9090.namprd12.prod.outlook.com (2603:10b6:806:397::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Tue, 18 Jun
 2024 06:57:31 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 06:57:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a4f6130-2d40-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HTA25Hzd9l5Hx/6fi9sxYi5PEhiGWYMHqzL4HtUfvyCMKgdS1KMN+gPAZeaiUQYsZjenpIQDqOoGzNgcwKN517/hl+SW/8dr+yo1Ejp3VVXYIINlqpaw8TnNm29e+AmfvEC3jCrSsZyp/YJrTQlMRDZHrUpsw35RuZtu+d1K76ov7etBrpCd4wRKM5G6qITX4I1B1kMLHbY8VTpltQFNOtjEU3H3RCzdmvX0rPJJHylORsXf5ctjw1+ArI402RuZSGNzIsKe9QvbNoyStxgVrupgXaPWX0tivQe/fFhwgBYzXq3YInIPo2jMpep29QXPE2G4isGINwhPIhiXf43TRA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=5uEM2O6gE80Wmbd/IF65Tij8x9OmILiYI9NuqFdZjSo=;
 b=eXabIw9wqdAEy6yg4R0PilguvIwt8dsdvxzAg8L/CpB7vDm4YaJRT5IZ9E6+LapiugZGGpFRv5njbD245DiB0wkL9usY83PKRW5u1rgHGFe0hwXm37RHzxFV0ctnQIBtHRrbr3gYKe55h6MGOPlqDDkjgLFD8dSkeyHiuwS6ks2+FajktlpRaDl0yVXwPQ0EmrLGPsDt3OFyODNREfyRHUkNOS9e2nT+W5YyndhotRB06sHVlZgXePFEloNTLqo9CkHo6dqpSgc+38ZlHa1TgT4oz69JEkCjHAHuTQNphjeL1IynvXJuTu7iIbte5t8kHxs7Jm1/r+hWoMhTqIOkRA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5uEM2O6gE80Wmbd/IF65Tij8x9OmILiYI9NuqFdZjSo=;
 b=5PIhD/eg9e3He+s5V0hAtPuFxGV4vokVScoBsnJRbVzJ8kZLundclvMYSIcHf/zOsEf+3ty8Z57jvhD1oinvkS8DfugJydWRnRLjsUqlAX6nCg5/deQ7ug4mLdPdkNkSr8L2RAKaJXXuIU/MTxVgmrrnNrTDlTldbGCZ/Gz2wCQ=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Thread-Topic: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH
 dom0
Thread-Index: AQHawJTljFn+TFlFw0Cz/HTewbBwKbHMCqyAgAGSNQA=
Date: Tue, 18 Jun 2024 06:57:30 +0000
Message-ID:
 <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
 <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
In-Reply-To: <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|SA3PR12MB9090:EE_
x-ms-office365-filtering-correlation-id: da3f8d04-104d-4630-e25f-08dc8f63ec65
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|7416011|376011|1800799021|366013|38070700015;
Content-Type: text/plain; charset="utf-8"
Content-ID: <2162767E76750F499F1961F560B50422@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: da3f8d04-104d-4630-e25f-08dc8f63ec65
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 06:57:31.0126
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: J0WvbGT0cL7c+JHQOjBulLDmUDI+5DZHK7qg/Xlau9nxhN6OKNBBcp/lYz830faHIciecD9okJjanmA1q1ouWw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB9090

T24gMjAyNC82LzE3IDIyOjUyLCBKYW4gQmV1bGljaCB3cm90ZToNCj4gT24gMTcuMDYuMjAyNCAx
MTowMCwgSmlxaWFuIENoZW4gd3JvdGU6DQo+PiBUaGUgZ3NpIG9mIGEgcGFzc3Rocm91Z2ggZGV2
aWNlIG11c3QgYmUgY29uZmlndXJlZCBmb3IgaXQgdG8gYmUNCj4+IGFibGUgdG8gYmUgbWFwcGVk
IGludG8gYSBodm0gZG9tVS4NCj4+IEJ1dCBXaGVuIGRvbTAgaXMgUFZILCB0aGUgZ3NpcyBkb24n
dCBnZXQgcmVnaXN0ZXJlZCwgaXQgY2F1c2VzDQo+PiB0aGUgaW5mbyBvZiBhcGljLCBwaW4gYW5k
IGlycSBub3QgYmUgYWRkZWQgaW50byBpcnFfMl9waW4gbGlzdCwNCj4+IGFuZCB0aGUgaGFuZGxl
ciBvZiBpcnFfZGVzYyBpcyBub3Qgc2V0LCB0aGVuIHdoZW4gcGFzc3Rocm91Z2ggYQ0KPj4gZGV2
aWNlLCBzZXR0aW5nIGlvYXBpYyBhZmZpbml0eSBhbmQgdmVjdG9yIHdpbGwgZmFpbC4NCj4+DQo+
PiBUbyBmaXggYWJvdmUgcHJvYmxlbSwgb24gTGludXgga2VybmVsIHNpZGUsIGEgbmV3IGNvZGUg
d2lsbA0KPj4gbmVlZCB0byBjYWxsIFBIWVNERVZPUF9zZXR1cF9nc2kgZm9yIHBhc3N0aHJvdWdo
IGRldmljZXMgdG8NCj4+IHJlZ2lzdGVyIGdzaSB3aGVuIGRvbTAgaXMgUFZILg0KPj4NCj4+IFNv
LCBhZGQgUEhZU0RFVk9QX3NldHVwX2dzaSBpbnRvIGh2bV9waHlzZGV2X29wIGZvciBhYm92ZQ0K
Pj4gcHVycG9zZS4NCj4+DQo+PiBTaWduZWQtb2ZmLWJ5OiBKaXFpYW4gQ2hlbiA8SmlxaWFuLkNo
ZW5AYW1kLmNvbT4NCj4+IFNpZ25lZC1vZmYtYnk6IEh1YW5nIFJ1aSA8cmF5Lmh1YW5nQGFtZC5j
b20+DQo+PiBTaWduZWQtb2ZmLWJ5OiBKaXFpYW4gQ2hlbiA8SmlxaWFuLkNoZW5AYW1kLmNvbT4N
Cj4+IC0tLQ0KPj4gVGhlIGNvZGUgbGluayB0aGF0IHdpbGwgY2FsbCB0aGlzIGh5cGVyY2FsbCBv
biBsaW51eCBrZXJuZWwgc2lkZSBpcyBhcyBmb2xsb3dzOg0KPj4gaHR0cHM6Ly9sb3JlLmtlcm5l
bC5vcmcveGVuLWRldmVsLzIwMjQwNjA3MDc1MTA5LjEyNjI3Ny0zLUppcWlhbi5DaGVuQGFtZC5j
b20vDQo+IA0KPiBPbmUgb2YgbXkgdjkgY29tbWVudHMgd2FzIGFkZHJlc3NlZCwgdGhhbmtzLiBS
ZXBlYXRpbmcgdGhlIG90aGVyLCB1bmFkZHJlc3NlZA0KPiBvbmUgaGVyZToNCj4gIkFzIHRvIEdT
SXMgbm90IGJlaW5nIHJlZ2lzdGVyZWQ6IElmIHRoYXQncyBub3QgYSBwcm9ibGVtIGZvciBEb20w
J3Mgb3duDQo+ICBvcGVyYXRpb24sIEkgdGhpbmsgaXQnbGwgYWxzbyB3YW50L25lZWQgZXhwbGFp
bmluZyB3aHkgd2hhdCBpcyBzdWZmaWNpZW50IGZvcg0KPiAgRG9tMCBhbG9uZSBpc24ndCBzdWZm
aWNpZW50IHdoZW4gcGFzcy10aHJvdWdoIGNvbWVzIGludG8gcGxheS4iDQpJIGhhdmUgbW9kaWZp
ZWQgdGhlIGNvbW1pdCBtZXNzYWdlIHRvIGRlc2NyaWJlIHdoeSBHU0lzIGFyZSBub3QgcmVnaXN0
ZXJlZCBjYW4gY2F1c2UgcGFzc3Rocm91Z2ggbm90IHdvcmssIGFjY29yZGluZyB0byB0aGlzIHY5
IGNvbW1lbnQuDQoiIGl0IGNhdXNlcyB0aGUgaW5mbyBvZiBhcGljLCBwaW4gYW5kIGlycSBub3Qg
YmUgYWRkZWQgaW50byBpcnFfMl9waW4gbGlzdCwgYW5kIHRoZSBoYW5kbGVyIG9mIGlycV9kZXNj
IGlzIG5vdCBzZXQsIHRoZW4gd2hlbiBwYXNzdGhyb3VnaCBhIGRldmljZSwgc2V0dGluZyBpb2Fw
aWMgYWZmaW5pdHkgYW5kIHZlY3RvciB3aWxsIGZhaWwuIg0KV2hhdCBkZXNjcmlwdGlvbiBkbyB5
b3Ugd2FudCBtZSB0byBhZGQ/DQoNCj4gDQo+IEphbg0KPiANCg0KLS0gDQpCZXN0IHJlZ2FyZHMs
DQpKaXFpYW4gQ2hlbi4NCg==


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 06:58:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 06:58:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742732.1149586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSn4-0007mN-OK; Tue, 18 Jun 2024 06:58:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742732.1149586; Tue, 18 Jun 2024 06:58:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSn4-0007mG-Lj; Tue, 18 Jun 2024 06:58:02 +0000
Received: by outflank-mailman (input) for mailman id 742732;
 Tue, 18 Jun 2024 06:58:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJSn2-0007ki-Qa; Tue, 18 Jun 2024 06:58:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJSn2-0002DH-5r; Tue, 18 Jun 2024 06:58:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJSn1-0005sq-UU; Tue, 18 Jun 2024 06:57:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJSn1-00069G-UB; Tue, 18 Jun 2024 06:57:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=F1djtTEvLudFrmRYavdUJ2JMTZqO76Wxl95+2Avm42I=; b=zVIoD6CqAEesqI2Hk+GOJxCVYw
	4jXKWewC4ETattkX2et7mTAC/YvPQnH0WG0BYOu204JPJHINw2u1BQb32ifR6TjWbkB4pQW4T3oRH
	q5VegDI5xMnH/NIU/jZ3rdo1RviJIf2kshyPwID6V6g2ovHEW6cL/aIO6o8ICJusrzz8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186387-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186387: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=77b1ed1d02d082c457924a695e8dde7076285271
X-Osstest-Versions-That:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 06:57:59 +0000

flight 186387 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186387/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  77b1ed1d02d082c457924a695e8dde7076285271
baseline version:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7

Last test of basis   186349  2024-06-14 10:03:57 Z    3 days
Testing same since   186387  2024-06-18 02:00:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nicola Vetrini <nicola.vetrini@bugseng.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8b4243a9b5..77b1ed1d02  77b1ed1d02d082c457924a695e8dde7076285271 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 07:04:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 07:04:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742743.1149597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJStG-00016v-EI; Tue, 18 Jun 2024 07:04:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742743.1149597; Tue, 18 Jun 2024 07:04:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJStG-00016o-B0; Tue, 18 Jun 2024 07:04:26 +0000
Received: by outflank-mailman (input) for mailman id 742743;
 Tue, 18 Jun 2024 07:04:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJStF-00016i-8U
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 07:04:25 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe3bd8cc-2d40-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 09:04:23 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6f1dc06298so605253866b.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 00:04:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f756ee325sm364933566b.90.2024.06.18.00.04.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 00:04:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe3bd8cc-2d40-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718694263; x=1719299063; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=aZiI4CDsGmd5XRSgfqrLfHXNyPqcl3hxH0aNxpnOtko=;
        b=bRfy5uYPUJmIBOL9ViyvLnUMo3S/J134X0YOsoifTOmGJvLUaZcRnwUc9uJ9LOQmrq
         Fmm0h/V+05mH9kKTqRrw8wYwqCdFtxa2OLqmn1bVx8eggNMjrSIaWhYKPpnooCdbMK+v
         ZZKSAu0Ws3fpc0fcqtIaCZQONHTZORfY6N7sgY0qd+UaM/H5F3qkjpaeHlvvzM1S+dku
         cePEn6jwYvAtbatwPzrXgHZscPGwZVMZi2HTUm+RuhgmW06VMeoTTfK8HumACl94Lf16
         8OCOMRWWVn4UFcdQUfsAebeBzh4Ppmsy0ZkQhAzyoRd9lGY8HwhAinUBzoEejfDBn9sc
         Y38A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718694263; x=1719299063;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aZiI4CDsGmd5XRSgfqrLfHXNyPqcl3hxH0aNxpnOtko=;
        b=wNhxWMWoYW/toG5m9Wlr5BKB97j6sZWmXz5tPbrffeMR4L3GP5I8W5n7HvL2JeJLgp
         WIFvKSNC/3McgVRt0bFTGw+DHVzzPmL0ODpENlBHwq74qdDn0atKCE06fibkYLdMOPFD
         KrCnmcEcZG3IU8odxYvaDIK8UVClM703Ja2S9yrxBZF6upCGqFh+J4ZSIcjFTDsL4JwN
         8/vR5lephYSKo8rlBKTKQrosPHu1Mtf4cjOPnh7iIj4j3ZebSGaUbwesGOltNiLMLQsh
         Gkvoa6Bo8DOdK1Y+BXtYFIylll1lz4YBlQXz35LmZz+3QGnVusFuNhNX8dm90OTDSyUl
         zl4w==
X-Forwarded-Encrypted: i=1; AJvYcCW6yCDxby8FjkPF5MIvElpI2yEBm/hiqjrKUiXs0KlLV0Gkk/5sE6woKi4I3sRcda2lVABjoYo/EeBhKlFiPihLJRFZ4ZXbPVt3aUj6pDc=
X-Gm-Message-State: AOJu0YzDVlBy3D7QPNAmMd/cRNrTgWxGVNSgRPzk8X9auHt4xwZaeUdd
	k32c8GD0JGZtV0b68HnNx/lLYK0YqO3dG/XhVdZNJrVNKxLmoy9ZgB17L4qCyA==
X-Google-Smtp-Source: AGHT+IH2S5tn/jQwfkl8rTT2t3xFY9XCMvMCoIt5K0s8lYzmUpwWzODMSlbH5oPB4tdbJAIgfZrqTQ==
X-Received: by 2002:a17:906:4312:b0:a6f:1839:ed40 with SMTP id a640c23a62f3a-a6f60de267emr714277266b.73.1718694263206;
        Tue, 18 Jun 2024 00:04:23 -0700 (PDT)
Message-ID: <06b0f91a-bb5c-4053-a462-def054c68201@suse.com>
Date: Tue, 18 Jun 2024 09:04:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 3/7] x86/boot: Collect the Raw CPU Policy earlier on boot
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240523111627.28896-1-andrew.cooper3@citrix.com>
 <20240523111627.28896-4-andrew.cooper3@citrix.com>
 <8245f0ce-2964-4ecb-a31d-3e182a6d3e0b@suse.com>
 <6b4ed926-8934-4660-98c7-1856d0c90878@citrix.com>
 <cc1b52d8-163f-443c-8418-aba1c7d77ecb@suse.com>
 <af1419dd-c9bf-4f15-b2dd-c3883030d4f4@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <af1419dd-c9bf-4f15-b2dd-c3883030d4f4@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 17.06.2024 19:30, Andrew Cooper wrote:
> On 17/06/2024 11:25 am, Jan Beulich wrote:
>> On 14.06.2024 20:26, Andrew Cooper wrote:
>>> On 23/05/2024 4:44 pm, Jan Beulich wrote:
>>>> On 23.05.2024 13:16, Andrew Cooper wrote:
>>>>> This is a tangle, but it's a small step in the right direction.
>>>>>
>>>>> xstate_init() is shortly going to want data from the Raw policy.
>>>>> calculate_raw_cpu_policy() is sufficiently separate from the other policies to
>>>>> be safe to do.
>>>>>
>>>>> No functional change.
>>>>>
>>>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Would you mind taking a look at
>>>> https://lists.xen.org/archives/html/xen-devel/2021-04/msg01335.html
>>>> to make clear (to me at least) in how far we can perhaps find common grounds
>>>> on what wants doing when? (Of course the local version I have has been
>>>> constantly re-based, so some of the function names would have changed from
>>>> what's visible there.)
>>> That's been covered several times, at least in part.
>>>
>>> I want to eventually move the host policy too, but I'm not willing to
>>> compound the mess we've currently got just to do it earlier.  It's just
>>> creating even more obstacles to doing it nicely.
>>>
>>> Nothing in this series needs (or indeed should) use the host policy.
>> Hmm, I'm irritated: You talk about host policy here, ...
>>
>>> The same is true of your AMX series.  You're (correctly) breaking the
>>> uniform allocation size and (when policy selection is ordered WRT vCPU
>>> creation, as discussed) it becomes solely dependent on the guest policy.
>> ... then guest policy, and ...
>>
>>> xsave.c really has no legitimate use for the host policy once the
>>> uniform allocation size aspect has gone away.
>> ... then host policy again.
> 
> Yes.  Notice how host policy is always referred to in the negative.
> 
> The raw policy should be used for everything pertaining to the
> instruction ABI itself, and the guest policy for sizing information.
> 
>> Whereas my patch switches to using the raw
>> policy, simply to eliminate redundant data.
> 
> Except it doesn't.  The latest posted version of your series contains:
> 
> -static u32 __read_mostly xsave_cntxt_size;
> +#define xsave_cntxt_size (host_cpuid_policy.xstate.max_size | 0)

That's "x86/xstate: replace xsave_cntxt_size and drop XCNTXT_MASK", but
what I referenced in the context here is "x86/xstate: drop
xstate_offsets[] and xstate_sizes[]". If the use of the host policy there
is wrong in your opinion, then can you please reply there to clarify
what's wrong with the justification for doing so? (See also below.)

> and you've even stated that I should have acked this patch simply on its
> age.

Well, it's not quite "simply on its age". Part of me thinks that our
rules should permit things to go in when no-one looked at them
despite reminders, but the other part of me knows that then we'll move
back to what we had in the old days, where all kinds of (subtle or
supposedly obvious) breakage would be introduced. So it's more like
"considering its age, it really should have had replies". Anyway, there
was now a vague promise that this series is going to be helped to make
progress in the 4.20 cycle.

> I acked the patches that were good, and you did commit them at the
> time.  Then I put together this series to fix the latent bugs
> which you were making less latent; this series really is the same one
> discussed back then, and really does have some 2020/2021 author dates in it.
> 
> 
> It is, AFAICT, not safe to move the calculation of the host policy as
> early as you did, without arranging for setup_{force,clear}_cap() to
> edit the host policy synchronously.  Recalculating a second time later
> isn't good enough, because you've created an asymmetry for most of boot
> between two pieces of state which are supposed to be in sync, and that
> you're intentionally starting to use.  So yes - while I do intend to
> eventually make the host policy usable that early too, I'm really not
> happy doing so in a manner that has "ticking timebomb" written all over it.

That's a fair concern. My counter argument is that right now the host
policy can't possibly be used ahead of when it's calculated prior to
that patch (i.e. ahead of the 2nd calculation after the patch). Yes,
care will need to be taken (for the time being) to, between the two
points, only access parts which can't change again during the 2nd
calculation.

Considering the plans to invoke identify_cpu() earlier, it simply didn't
appear necessary to, transiently, change setup_{force,clear}_cap().

> As to xsave_cntxt_size, it can (and should) be eliminated entirely when
> xstate_alloc_save_area() can use the guest policy to size the allocation.

Again I'm confused, I'm afraid: xsave_cntxt_size isn't really used for
much guest-ish right now; its main use is in xstate_init(). Hence why
that other patch is replacing it by use of the host policy. Perhaps
you're simply suggesting that its present uses aren't quite right then;
as you say, the sole truly guest-related use in xstate_alloc_save_area()
is there only because right now we need to over-estimate the eventual
size needed. For that purpose, host policy is the right thing to use, imo.
xsave_cntxt_size going away altogether at some point is no reason to,
transiently, switch it from being an actual variable to referencing the
host policy. It's just that if it can be made to go away earlier, that part
of that other patch could then simply be dropped.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 07:07:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 07:07:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742749.1149606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSwW-0001ny-RS; Tue, 18 Jun 2024 07:07:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742749.1149606; Tue, 18 Jun 2024 07:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJSwW-0001nr-On; Tue, 18 Jun 2024 07:07:48 +0000
Received: by outflank-mailman (input) for mailman id 742749;
 Tue, 18 Jun 2024 07:07:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJSwW-0001l8-6U
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 07:07:48 +0000
Received: from mail-ej1-x635.google.com (mail-ej1-x635.google.com
 [2a00:1450:4864:20::635])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 75a30120-2d41-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 09:07:44 +0200 (CEST)
Received: by mail-ej1-x635.google.com with SMTP id
 a640c23a62f3a-a6f0dc80ab9so768436866b.2
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 00:07:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ecdd63sm588842366b.124.2024.06.18.00.07.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 00:07:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75a30120-2d41-11ef-b4bb-af5377834399
Message-ID: <cc6ca257-4a1d-405d-97e3-f6ef1834435c@suse.com>
Date: Tue, 18 Jun 2024 09:07:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/ubsan: Fix UB in type_descriptor declaration
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240617175521.1766698-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240617175521.1766698-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 19:55, Andrew Cooper wrote:
> struct type_descriptor is arranged with a NUL terminated string following the
> kind/info fields.
> 
> The only reason this doesn't trip UBSAN detection itself (on more modern
> compilers at least) is because struct type_descriptor is only referenced in
> suppressed regions.
> 
> Switch the declaration to be a real flexible member.  No functional change.
> 
> Fixes: 00fcf4dd8eb4 ("xen/ubsan: Import ubsan implementation from Linux 4.13")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Jun 18 07:39:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 07:39:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742759.1149616 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJTQT-0006bg-0B; Tue, 18 Jun 2024 07:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742759.1149616; Tue, 18 Jun 2024 07:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJTQS-0006bZ-Tk; Tue, 18 Jun 2024 07:38:44 +0000
Received: by outflank-mailman (input) for mailman id 742759;
 Tue, 18 Jun 2024 07:38:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJTQS-0006bP-3D; Tue, 18 Jun 2024 07:38:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJTQR-0002xj-QE; Tue, 18 Jun 2024 07:38:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJTQR-0008Bi-BQ; Tue, 18 Jun 2024 07:38:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJTQR-0002Xd-Aw; Tue, 18 Jun 2024 07:38:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186391-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186391: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=537a81ae81622d65052184b281e8b2754d0b5313
X-Osstest-Versions-That:
    ovmf=128513afcdfa77e94c9637e643898e61c8218e34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 07:38:43 +0000

flight 186391 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186391/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 537a81ae81622d65052184b281e8b2754d0b5313
baseline version:
 ovmf                 128513afcdfa77e94c9637e643898e61c8218e34

Last test of basis   186382  2024-06-17 14:13:00 Z    0 days
Testing same since   186391  2024-06-18 06:13:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Paul Grimes <paul.grimes@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   128513afcd..537a81ae81  537a81ae81622d65052184b281e8b2754d0b5313 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:07:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 08:07:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742772.1149626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJTrs-0003CV-56; Tue, 18 Jun 2024 08:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742772.1149626; Tue, 18 Jun 2024 08:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJTrs-0003CO-2e; Tue, 18 Jun 2024 08:07:04 +0000
Received: by outflank-mailman (input) for mailman id 742772;
 Tue, 18 Jun 2024 08:07:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oyK7=NU=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sJTrr-0003CI-7V
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 08:07:03 +0000
Received: from mail-lf1-x131.google.com (mail-lf1-x131.google.com
 [2a00:1450:4864:20::131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be6c9d74-2d49-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 10:07:02 +0200 (CEST)
Received: by mail-lf1-x131.google.com with SMTP id
 2adb3069b0e04-52c84a21c62so5735482e87.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 01:07:02 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ca2872427sm1455326e87.166.2024.06.18.01.07.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 01:07:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be6c9d74-2d49-11ef-90a3-e314d9c70b13
Message-ID: <69eb7670c15d31cad3d9cac919a69a5e85f04ce0.camel@gmail.com>
Subject: Re: [PATCH] xen/ubsan: Fix UB in type_descriptor declaration
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich
 <JBeulich@suse.com>,  Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>
Date: Tue, 18 Jun 2024 10:07:00 +0200
In-Reply-To: <20240617175521.1766698-1-andrew.cooper3@citrix.com>
References: <20240617175521.1766698-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Mon, 2024-06-17 at 18:55 +0100, Andrew Cooper wrote:
> struct type_descriptor is arranged with a NUL terminated string
Should it be NULL instead of NUL?

> following the
> kind/info fields.
> 
> The only reason this doesn't trip UBSAN detection itself (on more
> modern
> compilers at least) is because struct type_descriptor is only
> referenced in
> suppressed regions.
> 
> Switch the declaration to be a real flexible member.  No functional
> change.
> 
> Fixes: 00fcf4dd8eb4 ("xen/ubsan: Import ubsan implementation from
> Linux 4.13")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> ---
> CC: George Dunlap <George.Dunlap@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> For 4.19, and for backport to all reasonable versions.  This bug
> deserves some
> kind of irony award.
> ---
>  xen/common/ubsan/ubsan.h | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/ubsan/ubsan.h b/xen/common/ubsan/ubsan.h
> index a3159040fefb..3db42e75b138 100644
> --- a/xen/common/ubsan/ubsan.h
> +++ b/xen/common/ubsan/ubsan.h
> @@ -10,7 +10,7 @@ enum {
>  struct type_descriptor {
>  	u16 type_kind;
>  	u16 type_info;
> -	char type_name[1];
> +	char type_name[];
>  };
>  
>  struct source_location {
> 
> base-commit: 8b4243a9b560c89bb259db5a27832c253d4bebc7



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:10:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 08:10:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742780.1149637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJTuy-0004yx-OY; Tue, 18 Jun 2024 08:10:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742780.1149637; Tue, 18 Jun 2024 08:10:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJTuy-0004yq-Ll; Tue, 18 Jun 2024 08:10:16 +0000
Received: by outflank-mailman (input) for mailman id 742780;
 Tue, 18 Jun 2024 08:10:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kZif=NU=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJTuw-00046j-Os
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 08:10:14 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20618.outbound.protection.outlook.com
 [2a01:111:f400:7e88::618])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2fbcacbd-2d4a-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 10:10:13 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by MW4PR12MB7014.namprd12.prod.outlook.com (2603:10b6:303:218::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Tue, 18 Jun
 2024 08:10:08 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 08:10:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fbcacbd-2d4a-11ef-90a3-e314d9c70b13
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Topic: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Index: AQHawJToSxHkdXzldkCDZE5kihITT7HMD9QAgAGTOgA=
Date: Tue, 18 Jun 2024 08:10:08 +0000
Message-ID:
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
In-Reply-To: <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <9FAAB5392264144B9ED578F409D9B605@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 2024/6/17 23:10, Jan Beulich wrote:
> On 17.06.2024 11:00, Jiqian Chen wrote:
>> In PVH dom0, it uses the linux local interrupt mechanism,
>> when it allocs irq for a gsi, it is dynamic, and follow
>> the principle of applying first, distributing first. And
>> irq number is alloced from small to large, but the applying
>> gsi number is not, may gsi 38 comes before gsi 28, that
>> causes the irq number is not equal with the gsi number.
> 
> Hmm, see my earlier explanations on patch 5: GSI and IRQ generally aren't
> the same anyway. Therefore this part of the description, while not wrong,
> is at least at risk of misleading people.
OK, I will change it to say "irq is not the same as gsi".
> 
>> --- a/tools/include/xen-sys/Linux/privcmd.h
>> +++ b/tools/include/xen-sys/Linux/privcmd.h
>> @@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
>>  	__u64 addr;
>>  } privcmd_mmap_resource_t;
>>  
>> +typedef struct privcmd_gsi_from_dev {
>> +	__u32 sbdf;
> 
> That's PCI-centric, without struct and IOCTL names reflecting this fact.
So, change to privcmd_gsi_from_pcidev?

> 
>> +	int gsi;
> 
> Is "int" legitimate to use here? Doesn't this want to similarly be __u32?
I want to set gsi to negative if there is no record of this translation.

> 
>> --- a/tools/include/xencall.h
>> +++ b/tools/include/xencall.h
>> @@ -113,6 +113,8 @@ int xencall5(xencall_handle *xcall, unsigned int op,
>>               uint64_t arg1, uint64_t arg2, uint64_t arg3,
>>               uint64_t arg4, uint64_t arg5);
>>  
>> +int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf);
> 
> Hmm, something (by name at least) OS-specific being in the public header
> and ...
> 
>> --- a/tools/libs/call/libxencall.map
>> +++ b/tools/libs/call/libxencall.map
>> @@ -10,6 +10,8 @@ VERS_1.0 {
>>  		xencall4;
>>  		xencall5;
>>  
>> +		xen_oscall_gsi_from_dev;
> 
> ... map file. I'm not sure things are intended to be this way.
Let's see the other maintainers' opinions.

> 
>> --- a/tools/libs/light/libxl_pci.c
>> +++ b/tools/libs/light/libxl_pci.c
>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>  #endif
>>  }
>>  
>> +#define PCI_DEVID(bus, devfn)\
>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>> +
>> +#define PCI_SBDF(seg, bus, devfn) \
>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
> 
> I'm not a maintainer of this file; if I were, I'd ask that for readability's
> sake all excess parentheses be dropped from these.
Isn't it a coding requirement to enclose each element in parentheses in the macro definition?
It seems other files also do this. See tools/libs/light/libxl_internal.h

> 
>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>          goto out_no_irq;
>>      }
>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>> +#ifdef CONFIG_X86
>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>> +        /*
>> +         * Old kernel version may not support this function,
> 
> Just kernel?
Yes, xc_physdev_gsi_from_dev depends on the function implemented on the Linux kernel side.
> 
>> +         * so if fail, keep using irq; if success, use gsi
>> +         */
>> +        if (gsi > 0) {
>> +            irq = gsi;
> 
> I'm still puzzled by this, when by now I think we've sufficiently clarified
> that IRQs and GSIs use two distinct numbering spaces.
> 
> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
> the assumption that it'll fail. What if we decide to make the functionality
> available there, too (if only for informational purposes, or for
> consistency)? Suddenly your fallback logic wouldn't work anymore, and
> you'd call ...
> 
>> +        }
>> +#endif
>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
> 
> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
> you strictly want to avoid the call on PV Dom0.
> 
> Also for PVH Dom0: I don't think I've seen changes to the hypercall
> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
> incoming GSI would need translating to IRQ somewhere? I can once again only
> assume all your testing was done with IRQs whose numbers happened to match
> their GSI numbers. (The difference, imo, would also need calling out in the
> public header, where the respective interface struct(s) is/are defined.)
I feel like you missed out on many of the previous discussions.
Without my changes, the original code uses irq (read from the file /sys/bus/pci/devices/<sbdf>/irq) to do xc_physdev_map_pirq,
but xc_physdev_map_pirq requires passing in gsi instead of irq, so we need to use gsi whether dom0 is PV or PVH; the original code is wrong.
Just because, by chance, the irq value in the Linux kernel of a PV dom0 is equal to the gsi value, there was no problem with the original PV passthrough.
But that doesn't hold when using PVH, so passthrough failed.
With my changes, I get gsi through the function xc_physdev_gsi_from_dev first, and to be compatible with old kernel versions (if the ioctl
IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented) I still need to use the irq number, so I need to check the result
of gsi: if gsi > 0, IOCTL_PRIVCMD_GSI_FROM_DEV is implemented and I can use the right one, gsi; otherwise I keep using the wrong one, irq.

And regarding the implementation of the ioctl IOCTL_PRIVCMD_GSI_FROM_DEV, it doesn't have any Xen hypercall handling changes; all of its processing logic is on the kernel side.
I know, so you might want to say, "Then you shouldn't put this in Xen's code." But this concern was discussed in previous versions, and since the PCI maintainer disallowed adding a
gsi sysfs entry on the Linux kernel side, I had to do it this way.
Roger, Stefano and Juergen may know more about this part.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:11:47 2024
Message-ID: <f51438cf61c8b7e8fd5444dbea69f732cdd66ac0.camel@gmail.com>
Subject: Re: [PATCH v3 for-4.19 1/3] x86/irq: deal with old_cpu_mask for
 interrupts in movement in fixup_irqs()
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Date: Tue, 18 Jun 2024 10:11:40 +0200
In-Reply-To: <f92fc38b-aba9-4f8f-b95c-4723049523d0@suse.com>
References: <20240613165617.42538-1-roger.pau@citrix.com>
	 <20240613165617.42538-2-roger.pau@citrix.com>
	 <f92fc38b-aba9-4f8f-b95c-4723049523d0@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Mon, 2024-06-17 at 15:18 +0200, Jan Beulich wrote:
> On 13.06.2024 18:56, Roger Pau Monne wrote:
> > Given the current logic it's possible for ->arch.old_cpu_mask to get out of
> > sync: if a CPU set in old_cpu_mask is offlined and then onlined again
> > without old_cpu_mask having been updated the data in the mask will no
> > longer be accurate, as when brought back online the CPU will no longer have
> > old_vector configured to handle the old interrupt source.
> 
> > If there's an interrupt movement in progress, and the to be offlined CPU (which
> > is the call context) is in the old_cpu_mask clear it and update the mask, so it
> > doesn't contain stale data.
> 
> Perhaps a comma before "clear" might further help reading. Happy to
> add while committing.
> 
> > Note that when the system is going down fixup_irqs() will be called by
> > smp_send_stop() from CPU 0 with a mask with only CPU 0 on it, effectively
> > asking to move all interrupts to the current caller (CPU 0) which is the only
> > CPU to remain online.  In that case we don't care to migrate interrupts that
> > are in the process of being moved, as it's likely we won't be able to move all
> > interrupts to CPU 0 due to vector shortage anyway.
> 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:16:27 2024
Message-ID: <a997863f9b4cde8e4c5858c456ab458d82553ffe.camel@gmail.com>
Subject: Re: [PATCH v3 for-4.19 2/3] x86/irq: handle moving interrupts in
 _assign_irq_vector()
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Date: Tue, 18 Jun 2024 10:16:18 +0200
In-Reply-To: <f263d178-3a06-4c65-a6c0-a77f85c559b6@suse.com>
References: <20240613165617.42538-1-roger.pau@citrix.com>
	 <20240613165617.42538-3-roger.pau@citrix.com>
	 <f263d178-3a06-4c65-a6c0-a77f85c559b6@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Mon, 2024-06-17 at 15:31 +0200, Jan Beulich wrote:
> On 13.06.2024 18:56, Roger Pau Monne wrote:
> > Currently there's logic in fixup_irqs() that attempts to prevent
> > _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> > interrupts from the CPUs not present in the input mask.  The current logic in
> > fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> > move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> 
> > Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> > _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> > to deal with interrupts that have either move_{in_progress,cleanup_count} set
> > and no remaining online CPUs in ->arch.cpu_mask.
> 
> > If _assign_irq_vector() is requested to move an interrupt in the state
> > described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> > CPUs that could be used as fallback, and if that's the case do move the
> > interrupt back to the previous destination.  Note this is easier because the
> > vector hasn't been released yet, so there's no need to allocate and setup a new
> > vector on the destination.
> 
> > Due to the logic in fixup_irqs() that clears offline CPUs from
> > ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> > shouldn't be possible to get into _assign_irq_vector() with
> > ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> > ->arch.old_cpu_mask.
> 
> > However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> > also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> > longer part of the affinity set, move the interrupt to a different CPU part of
> > the provided mask and keep the current ->arch.old_{cpu_mask,vector} for the
> > pending interrupt movement to be completed.
> 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
Release-acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> 
> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -544,7 +544,58 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >      }
> >  
> >      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> > -        return -EAGAIN;
> > +    {
> > +        /*
> > +         * If the current destination is online refuse to shuffle.  Retry after
> > +         * the in-progress movement has finished.
> > +         */
> > +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> > +            return -EAGAIN;
> > +
> > +        /*
> > +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> > +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> > +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> > +         * ->arch.old_cpu_mask.
> > +         */
> > +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> > +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> > +
> > +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> > +        {
> > +            /*
> > +             * Fallback to the old destination if moving is in progress and the
> > +             * current destination is to be offlined.  This is only possible if
> > +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> > +             * in the 'mask' parameter.
> > +             */
> > +            desc->arch.vector = desc->arch.old_vector;
> 
> I'm a little puzzled that you use desc->arch.old_vector here, but ...
> 
> > +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> > +
> > +            /* Undo any possibly done cleanup. */
> > +            for_each_cpu(cpu, desc->arch.cpu_mask)
> > +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
> > +
> > +            /* Cancel the pending move and release the current vector. */
> > +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> > +            cpumask_clear(desc->arch.old_cpu_mask);
> > +            desc->arch.move_in_progress = 0;
> > +            desc->arch.move_cleanup_count = 0;
> > +            if ( desc->arch.used_vectors )
> > +            {
> > +                ASSERT(test_bit(old_vector, desc->arch.used_vectors));
> > +                clear_bit(old_vector, desc->arch.used_vectors);
> 
> ... old_vector here. Since we have the latter, uniformly using it might
> be more consistent. I realize though that irq_to_vector() has cases where
> it wouldn't return desc->arch.old_vector; I think, however, that in those
> cases we can't make it here. Still I'm not going to insist on making the
> adjustment. Happy to make it though while committing, should you agree.
> 
> Also I'm not happy to see another instance of this pattern appear. In
> x86-specific code this is inefficient, as {set,clear}_bit resolve to the
> same insn as test_and_{set,clear}_bit(). Therefore imo more efficient
> would be
> 
>                 if (!test_and_clear_bit(old_vector, desc->arch.used_vectors))
>                     ASSERT_UNREACHABLE();
> 
> (and then the two if()s folded).
> 
> I've been meaning to propose a patch to the other similar sites ...
> 
> Jan



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:21:41 2024
Message-ID: <0b6350f2d8cfe8f2f8f24f1f2de601f3db529335.camel@gmail.com>
Subject: Re: [PATCH v12 5/8] xen/riscv: add minimal stuff to mm.h to build
 full Xen
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, George
 Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Andrew Cooper
 <andrew.cooper3@citrix.com>
Date: Tue, 18 Jun 2024 10:21:30 +0200
In-Reply-To: <fa62c314-424e-4e5b-9046-3a2e1eba654e@citrix.com>
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
	 <d00b86f41ef2c7d928a28dadd8c34fb845f23d0a.1717008161.git.oleksii.kurochko@gmail.com>
	 <70128dba-498f-4d85-8507-bb1621182754@citrix.com>
	 <7721c1b4eb0ea76cca5460264954d40d639499b7.camel@gmail.com>
	 <e80e30c9-6558-4b70-ab2f-18c34c359dae@citrix.com>
	 <1b3b389156ad924f00af8af1d173b89fc533050e.camel@gmail.com>
	 <fa62c314-424e-4e5b-9046-3a2e1eba654e@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Fri, 2024-06-14 at 10:47 +0100, Andrew Cooper wrote:
> On 11/06/2024 7:23 pm, Oleksii K. wrote:
> > On Tue, 2024-06-11 at 16:53 +0100, Andrew Cooper wrote:
> > > On 30/05/2024 7:22 pm, Oleksii K. wrote:
> > > > On Thu, 2024-05-30 at 18:23 +0100, Andrew Cooper wrote:
> > > > > On 29/05/2024 8:55 pm, Oleksii Kurochko wrote:
> > > > > > Signed-off-by: Oleksii Kurochko
> > > > > > <oleksii.kurochko@gmail.com>
> > > > > > Acked-by: Jan Beulich <jbeulich@suse.com>
> > > > > This patch looks like it can go in independently?  Or does it
> > > > > depend on having bitops.h working in practice?
> > > > > 
> > > > > However, one very strong suggestion...
> > > > > 
> > > > > 
> > > > > > diff --git a/xen/arch/riscv/include/asm/mm.h
> > > > > > b/xen/arch/riscv/include/asm/mm.h
> > > > > > index 07c7a0abba..cc4a07a71c 100644
> > > > > > --- a/xen/arch/riscv/include/asm/mm.h
> > > > > > +++ b/xen/arch/riscv/include/asm/mm.h
> > > > > > @@ -3,11 +3,246 @@
> > > > > > <snip>
> > > > > > +/* PDX of the first page in the frame table. */
> > > > > > +extern unsigned long frametable_base_pdx;
> > > > > > +
> > > > > > +/* Convert between machine frame numbers and page-info
> > > > > > structures. */
> > > > > > +#define mfn_to_page(mfn)                                      \
> > > > > > +    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
> > > > > > +#define page_to_mfn(pg)                                       \
> > > > > > +    pdx_to_mfn((unsigned long)((pg) - frame_table) +
> > > > > > frametable_base_pdx)
> > > > > Do yourself a favour and not introduce frametable_base_pdx to
> > > > > begin with.
> > > > > 
> > > > > Every RISC-V board I can find has things starting from 0 in
> > > > > physical address space, with RAM starting immediately after.
> > > > I checked the Linux kernel and grepped there:
> > > >    [ok@fedora linux-aia]$ grep -Rni "memory@" arch/riscv/boot/dts/ --exclude "*.tmp" -I
> > > >    arch/riscv/boot/dts/starfive/jh7110-starfive-visionfive-2.dtsi:33:     memory@40000000 {
> > > >    arch/riscv/boot/dts/starfive/jh7100-common.dtsi:28:     memory@80000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-sev-kit.dts:49:      ddrc_cache: memory@1000000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:33:   ddrc_cache_lo: memory@80000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-m100pfsevp.dts:37:   ddrc_cache_hi: memory@1040000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:34:      ddrc_cache_lo: memory@80000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-tysom-m.dts:40:      ddrc_cache_hi: memory@1000000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:22:   ddrc_cache_lo: memory@80000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-polarberry.dts:27:   ddrc_cache_hi: memory@1000000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:57:   ddrc_cache_lo: memory@80000000 {
> > > >    arch/riscv/boot/dts/microchip/mpfs-icicle-kit.dts:63:   ddrc_cache_hi: memory@1040000000 {
> > > >    arch/riscv/boot/dts/thead/th1520-beaglev-ahead.dts:32:  memory@0 {
> > > >    arch/riscv/boot/dts/thead/th1520-lichee-module-4a.dtsi:14:      memory@0 {
> > > >    arch/riscv/boot/dts/sophgo/cv1800b-milkv-duo.dts:26:    memory@80000000 {
> > > >    arch/riscv/boot/dts/sophgo/cv1812h.dtsi:12:     memory@80000000 {
> > > >    arch/riscv/boot/dts/sifive/hifive-unmatched-a00.dts:26: memory@80000000 {
> > > >    arch/riscv/boot/dts/sifive/hifive-unleashed-a00.dts:25: memory@80000000 {
> > > >    arch/riscv/boot/dts/canaan/k210.dtsi:82:        sram: memory@80000000 {
> > > > 
> > > > And based on that, the majority of boards supported by the Linux
> > > > kernel have RAM starting not from 0 in physical address space. Am I
> > > > confusing something?
> > > > 
> > > > > Taking the microchip board as an example, RAM actually starts
> > > > > at 0x8000000,
> > > > Today we had a conversation with a guy from SiFive in the xen-devel
> > > > channel, and he mentioned that they are using "starfive visionfive2
> > > > and sifive unleashed platforms", which based on the grep above have
> > > > RAM not at address 0.
> > > > 
> > > > Also, QEMU uses 0x8000000.
> > > > 
> > > > > which means that having frametable_base_pdx and assuming it
> > > > > does get set to 0x8000 (which isn't even a certainty, given that
> > > > > I think you'll need struct pages covering the PLICs), then what
> > > > > you are trading off is:
> > > > > 
> > > > > * Saving 32k of virtual address space only (no need to even
> > > > > allocate memory for this range of the frametable), by
> > > > > * Having an extra memory load and add/sub in every page <-> mfn
> > > > > conversion, which is a screaming hotpath all over Xen.
> > > > > 
> > > > > It's a terribly short-sighted tradeoff.
> > > > > 
> > > > > 32k of VA space might be worth saving in a 32bit build (I
> > > > > personally wouldn't - especially as there's no need to share
> > > > > Xen's VA space with guests, given no PV guests on ARM/RISC-V),
> > > > > but it's absolutely not at all in a 64bit build with TB of VA
> > > > > space available.
> > > > > 
> > > > > Even if we do find a board with the first interesting thing in
> > > > > the frametable starting sufficiently away from 0 that it might
> > > > > be worth considering this slide, then it should still be
> > > > > Kconfig-able in a similar way to PDX_COMPRESSION.
> > > > I find your tradeoffs reasonable, but I don't understand how it
> > > > will work if RAM does not start from 0, as the frametable address
> > > > and RAM address are linked.
> > > > I tried to look at the PDX_COMPRESSION config and couldn't find
> > > > any "slide" there. Could you please clarify this for me?
> > > > If we used this "slide", would it help to avoid the tradeoffs
> > > > mentioned above?
> > > > 
> > > > One more question: if we decide to go without frametable_base_pdx,
> > > > would it be sufficient to simply remove mentions of it from the
> > > > code ( at least, for now )?
> > > There is a relationship between system/host physical addresses
> > > (what Xen calls maddr/mfn), and the frametable.  The frametable has
> > > one entry per mfn.
> > > 
> > > In the most simple case, there's a 1:1 relationship, i.e.
> > > frametable[0] = maddr(0), frametable[1] = maddr(4k) etc.  This is
> > > very simple, and very easy to calculate
> > > (page_to_mfn()/mfn_to_page()).
> > > 
> > > The frametable is one big array.  It starts at a compile-time fixed
> > > address, and needs to be long enough to cover everything interesting
> > > in memory.  Therefore it potentially takes a large amount of virtual
> > > address space.
> > > 
> > > However, only interesting maddrs need to have data in the
> > > frametable, so it's fine for the backing RAM to be sparsely
> > > allocated/mapped in the frametable virtual addresses.
> > > 
> > > For 64bit, that's really all you need, because there's always far
> > > more virtual address space than physical RAM in the system, even
> > > when you're looking at TB-scale giant servers.
> > > 
> > > 
> > > For 32bit, virtual address space is a limited resource.  (Also to
> > > an extent, 64bit x86 with PV guests, because we give 98% of the
> > > virtual address space to the guest kernel to use.)
> > > 
> > > There are two tricks to reduce the virtual address space used, but
> > > they both cost performance in fastpaths.
> > > 
> > > 1) PDX Compression.
> > > 
> > > PDX compression makes a non-linear mfn <-> maddr mapping.  This is
> > > for a usecase where you've got multiple RAM banks which are
> > > separated by a large distance (and evenly spaced); then you can
> > > "compress" a single range of 0's out of the middle of the
> > > system/host physical address.
> > > 
> > > The cost is that all page <-> mfn conversions need to read two
> > > masks and a shift-count from variables in memory, to
> > > split/shift/recombine the address bits.
> > > 
> > > 2) A slide, which is frametable_base_pdx in this context.
> > > 
> > > When there's a big gap between 0 and the start of something
> > > interesting, you can chop out that range by just subtracting
> > > base_pdx.  What qualifies as "big" is subjective, but Qemu starting
> > > at 128M certainly does not qualify as big enough to warrant
> > > frametable_base_pdx.
> > > 
> > > This is less expensive than PDX compression.  It only adds one
> > > memory read to the fastpath, but it also doesn't save as much
> > > virtual address space as PDX compression.
> > > 
> > > 
> > > When virtual address space is a major constraint (32 bit builds),
> > > both of these techniques are worth doing.  But when there's no
> > > constraint on virtual address space (64 bit builds), there's no
> > > reason to use either; and the performance will definitely improve
> > > as a result.
> > Thanks for such a good explanation.
> > 
> > For RISC-V we have the PDX config disabled, as I haven't seen
> > multiple RAM banks on boards which have the hypervisor extension.
> > Thereby mfn_to_pdx() and pdx_to_mfn() do nothing. The same for
> > frametable_base_pdx: with PDX disabled, it is just an offset ( or a
> > slide ).
> > 
> > IIUC, you meant that it makes sense to map the frametable not to the
> > address where RAM starts, but to 0, so then we don't need this
> > +-frametable_base_pdx. The price for that is losing VA space for the
> > range from 0 to the RAM start address.
> > 
> > Right now, we are trying to support 3 boards with the following RAM
> > addresses:
> > 1. 0x8000_0000 - QEMU and SiFive board
> > 2. 0x40_0000_0000 - Microchip board
> > 
> > So if we map the frametable to 0 ( not RAM start ) we will lose:
> > 1. 0x8000_0 ( number of page entries to cover range [0, 0x8000_0000] )
> > * 64 ( size of struct page_info ) = 32 MB
> > 2. 0x40_0000_0 ( number of page entries to cover range [0,
> > 0x40_0000_0000] ) * 64 ( size of struct page_info ) = 4 GB
> > 
> > In terms of available virtual address space for RV-64 we can consider
> > both options as acceptable.
> 
> For Qemu and SiFive, 32M is definitely not worth worrying about.
> 
> I personally wouldn't worry about Microchip either.  That's 4G of 1T
> VA space (assuming Sv39).
> 
> > [OPTION 1] If we accept losing 4 GB of VA, then we could implement
> > mfn_to_page() and page_to_mfn() in the following way:
> > ```
> >    diff --git a/xen/arch/riscv/include/asm/mm.h
> >    b/xen/arch/riscv/include/asm/mm.h
> >    index cc4a07a71c..fdac7e0646 100644
> >    --- a/xen/arch/riscv/include/asm/mm.h
> >    +++ b/xen/arch/riscv/include/asm/mm.h
> >    @@ -107,14 +107,11 @@ struct page_info
> >    
> >     #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
> >    
> >    -/* PDX of the first page in the frame table. */
> >    -extern unsigned long frametable_base_pdx;
> >    -
> >     /* Convert between machine frame numbers and page-info structures. */
> >     #define mfn_to_page(mfn)                                          \
> >    -    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
> >    +    (frame_table + (mfn))
> >     #define page_to_mfn(pg)                                           \
> >    -    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
> >    +    ((unsigned long)((pg) - frame_table))
> >    
> >     static inline void *page_to_virt(const struct page_info *pg)
> >     {
> >    diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
> >    index 9c0fd80588..8f6dbdc699 100644
> >    --- a/xen/arch/riscv/mm.c
> >    +++ b/xen/arch/riscv/mm.c
> >    @@ -15,7 +15,7 @@
> >     #include <asm/page.h>
> >     #include <asm/processor.h>
> >    
> >    -unsigned long __ro_after_init frametable_base_pdx;
> >     unsigned long __ro_after_init frametable_virt_end;
> >    
> >     struct mmu_desc {
> > ```
> 
> I firmly recommend option 1, especially at this point.
Jan, as you gave your Ack before, would you mind mfn_to_page() and
page_to_mfn() being defined as mentioned above ( Option 1 )?

~ Oleksii

> 
> If specific boards really have a problem with losing 4G of VA space,
> then option 2 can be added easily at a later point.
> 
> That said, I'd think carefully about doing option 2.  Even subtracting
> a constant - which is far better than subtracting a variable - is
> still extra overhead in fastpaths.  Option 2 needs careful
> consideration on a board-by-board case.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:24:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 08:24:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742809.1149677 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJU8N-000094-HX; Tue, 18 Jun 2024 08:24:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742809.1149677; Tue, 18 Jun 2024 08:24:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJU8N-00008x-Dd; Tue, 18 Jun 2024 08:24:07 +0000
Received: by outflank-mailman (input) for mailman id 742809;
 Tue, 18 Jun 2024 08:24:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kZif=NU=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJU8L-00008n-MJ
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 08:24:05 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2061f.outbound.protection.outlook.com
 [2a01:111:f403:2418::61f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1f12d825-2d4c-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 10:24:04 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by PH7PR12MB6882.namprd12.prod.outlook.com (2603:10b6:510:1b8::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Tue, 18 Jun
 2024 08:24:00 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Tue, 18 Jun 2024
 08:23:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f12d825-2d4c-11ef-90a3-e314d9c70b13
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index: AQHawJTqXyg3PMEiVUiLHPQ86aZkYrHMFeyAgAGdTwA=
Date: Tue, 18 Jun 2024 08:23:59 +0000
Message-ID:
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
In-Reply-To: <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|PH7PR12MB6882:EE_
x-ms-office365-filtering-correlation-id: cda5de20-a29e-40dc-ed42-08dc8f7000e1
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <755B514EA0994F4BAA9E12A76F387F1A@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cda5de20-a29e-40dc-ed42-08dc8f7000e1
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 Jun 2024 08:23:59.3417
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: uS/k5RF0EkPpfGur3sLIr0dlFpLNl9MQFj/DBaQT8CpgV3nG9pXn4Qjbqi/HbChnEZJ3WwZE06rbspaTBiD9gw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH7PR12MB6882

On 2024/6/17 23:32, Jan Beulich wrote:
> On 17.06.2024 11:00, Jiqian Chen wrote:
>> Some type of domain don't have PIRQs, like PVH, it doesn't do
>> PHYSDEVOP_map_pirq for each gsi. When passthrough a device
>> to guest base on PVH dom0, callstack
>> pci_add_dm_done->XEN_DOMCTL_irq_permission will fail at function
>> domain_pirq_to_irq, because PVH has no mapping of gsi, pirq and
>> irq on Xen side.
>> What's more, current hypercall XEN_DOMCTL_irq_permission requires
>> passing in pirq, it is not suitable for dom0 that doesn't have
>> PIRQs.
>>
>> So, add a new hypercall XEN_DOMCTL_gsi_permission to grant the
>> permission of irq(translate from gsi) to dumU when dom0 has no
>> PIRQs.
>>
>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>> ---
>> RFC: it needs review and needs to wait for the corresponding third patch on linux kernel side to be merged.
>> ---
>>  tools/include/xenctrl.h            |  5 +++
>>  tools/libs/ctrl/xc_domain.c        | 15 +++++++
>>  tools/libs/light/libxl_pci.c       | 67 +++++++++++++++++++++++++++---
>>  xen/arch/x86/domctl.c              | 43 +++++++++++++++++++
>>  xen/arch/x86/include/asm/io_apic.h |  2 +
>>  xen/arch/x86/io_apic.c             | 17 ++++++++
>>  xen/arch/x86/mpparse.c             |  3 +-
>>  xen/include/public/domctl.h        |  8 ++++
>>  xen/xsm/flask/hooks.c              |  1 +
>>  9 files changed, 153 insertions(+), 8 deletions(-)
>>
>> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
>> index a0381f74d24b..f3feb6848e25 100644
>> --- a/tools/include/xenctrl.h
>> +++ b/tools/include/xenctrl.h
>> @@ -1382,6 +1382,11 @@ int xc_domain_irq_permission(xc_interface *xch,
>>                               uint32_t pirq,
>>                               bool allow_access);
>>  
>> +int xc_domain_gsi_permission(xc_interface *xch,
>> +                             uint32_t domid,
>> +                             uint32_t gsi,
>> +                             bool allow_access);
>> +
>>  int xc_domain_iomem_permission(xc_interface *xch,
>>                                 uint32_t domid,
>>                                 unsigned long first_mfn,
>> diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
>> index f2d9d14b4d9f..8540e84fda93 100644
>> --- a/tools/libs/ctrl/xc_domain.c
>> +++ b/tools/libs/ctrl/xc_domain.c
>> @@ -1394,6 +1394,21 @@ int xc_domain_irq_permission(xc_interface *xch,
>>      return do_domctl(xch, &domctl);
>>  }
>>  
>> +int xc_domain_gsi_permission(xc_interface *xch,
>> +                             uint32_t domid,
>> +                             uint32_t gsi,
>> +                             bool allow_access)
>> +{
>> +    struct xen_domctl domctl = {
>> +        .cmd = XEN_DOMCTL_gsi_permission,
>> +        .domain = domid,
>> +        .u.gsi_permission.gsi = gsi,
>> +        .u.gsi_permission.allow_access = allow_access,
>> +    };
>> +
>> +    return do_domctl(xch, &domctl);
>> +}
>> +
>>  int xc_domain_iomem_p
ZXJtaXNzaW9uKHhjX2ludGVyZmFjZSAqeGNoLA0KPj4gICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICB1aW50MzJfdCBkb21pZCwNCj4+ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgdW5zaWduZWQgbG9uZyBmaXJzdF9tZm4sDQo+PiBkaWZmIC0tZ2l0IGEvdG9vbHMvbGlicy9s
aWdodC9saWJ4bF9wY2kuYyBiL3Rvb2xzL2xpYnMvbGlnaHQvbGlieGxfcGNpLmMNCj4+IGluZGV4
IDM3NmY5MTc1OWFjNi4uZjAyN2YyMmMwMDI4IDEwMDY0NA0KPj4gLS0tIGEvdG9vbHMvbGlicy9s
aWdodC9saWJ4bF9wY2kuYw0KPj4gKysrIGIvdG9vbHMvbGlicy9saWdodC9saWJ4bF9wY2kuYw0K
Pj4gQEAgLTE0MzEsNiArMTQzMSw5IEBAIHN0YXRpYyB2b2lkIHBjaV9hZGRfZG1fZG9uZShsaWJ4
bF9fZWdjICplZ2MsDQo+PiAgICAgIHVpbnQzMl90IGZsYWcgPSBYRU5fRE9NQ1RMX0RFVl9SRE1f
UkVMQVhFRDsNCj4+ICAgICAgdWludDMyX3QgZG9tYWluaWQgPSBkb21pZDsNCj4+ICAgICAgYm9v
bCBpc3N0dWJkb20gPSBsaWJ4bF9pc19zdHViZG9tKGN0eCwgZG9taWQsICZkb21haW5pZCk7DQo+
PiArI2lmZGVmIENPTkZJR19YODYNCj4+ICsgICAgeGNfZG9tYWluaW5mb190IGluZm87DQo+PiAr
I2VuZGlmDQo+PiAgDQo+PiAgICAgIC8qIENvbnZlbmllbmNlIGFsaWFzZXMgKi8NCj4+ICAgICAg
Ym9vbCBzdGFydGluZyA9IHBhcy0+c3RhcnRpbmc7DQo+PiBAQCAtMTUxNiwxNCArMTUxOSwzOSBA
QCBzdGF0aWMgdm9pZCBwY2lfYWRkX2RtX2RvbmUobGlieGxfX2VnYyAqZWdjLA0KPj4gICAgICAg
ICAgICAgIHJjID0gRVJST1JfRkFJTDsNCj4+ICAgICAgICAgICAgICBnb3RvIG91dDsNCj4+ICAg
ICAgICAgIH0NCj4+IC0gICAgICAgIHIgPSB4Y19kb21haW5faXJxX3Blcm1pc3Npb24oY3R4LT54
Y2gsIGRvbWlkLCBpcnEsIDEpOw0KPj4gKyNpZmRlZiBDT05GSUdfWDg2DQo+PiArICAgICAgICAv
KiBJZiBkb20wIGRvZXNuJ3QgaGF2ZSBQSVJRcywgbmVlZCB0byB1c2UgeGNfZG9tYWluX2dzaV9w
ZXJtaXNzaW9uICovDQo+PiArICAgICAgICByID0geGNfZG9tYWluX2dldGluZm9fc2luZ2xlKGN0
eC0+eGNoLCAwLCAmaW5mbyk7DQo+IA0KPiBIYXJkLWNvZGVkIDAgaXMgaW1wb3NpbmcgbGltaXRh
dGlvbnMuIElkZWFsbHkgeW91IHdvdWxkIHVzZSBET01JRF9TRUxGLCBidXQNCj4gSSBkaWRuJ3Qg
Y2hlY2sgaWYgdGhhdCBjYW4gYmUgdXNlZCB3aXRoIHRoZSB1bmRlcmx5aW5nIGh5cGVyY2FsbChz
KS4gT3RoZXJ3aXNlDQo+IHlvdSB3YW50IHRvIHBhc3MgdGhlIGFjdHVhbCBkb21pZCBvZiB0aGUg
bG9jYWwgZG9tYWluIGhlcmUuDQpCdXQgdGhlIGFjdGlvbiBvZiBncmFudGluZyBwZXJtaXNzaW9u
IGlzIGZyb20gZG9tMCB0byBkb21VLCB3aGF0IEkgbmVlZCB0byBnZXQgaXMgdGhlIGluZm9tYXRp
b24gb2YgZG9tMCwNClRoZSBhY3R1YWwgZG9taWQgaGVyZSBpcyBkb21VJ3MgaWQgSSB0aGluaywg
aXQgaXMgbm90IHVzZWZ1bC4NCg0KPiANCj4+ICAgICAgICAgIGlmIChyIDwgMCkgew0KPj4gLSAg
ICAgICAgICAgIExPR0VEKEVSUk9SLCBkb21haW5pZCwNCj4+IC0gICAgICAgICAgICAgICAgICAi
eGNfZG9tYWluX2lycV9wZXJtaXNzaW9uIGlycT0lZCAoZXJyb3I9JWQpIiwgaXJxLCByKTsNCj4+
ICsgICAgICAgICAgICBMT0dFRChFUlJPUiwgZG9tYWluaWQsICJnZXRkb21haW5pbmZvIGZhaWxl
ZCAoZXJyb3I9JWQpIiwgZXJybm8pOw0KPj4gICAgICAgICAgICAgIGZjbG9zZShmKTsNCj4+ICAg
ICAgICAgICAgICByYyA9IEVSUk9SX0ZBSUw7DQo+PiAgICAgICAgICAgICAgZ290byBvdXQ7DQo+
PiAgICAgICAgICB9DQo+PiArICAgICAgICBpZiAoaW5mby5mbGFncyAmIFhFTl9ET01JTkZfaHZt
X2d1ZXN0ICYmDQo+IA0KPiBZb3Ugd2FudCB0byBwYXJlbnRoZXNpemUgdGhlICYgaGVyZS4NCldp
bGwgY2hhbmdlIGluIG5leHQgdmVyc2lvbi4NCg0KPiANCj4+ICsgICAgICAgICAgICAhKGluZm8u
YXJjaF9jb25maWcuZW11bGF0aW9uX2ZsYWdzICYgWEVOX1g4Nl9FTVVfVVNFX1BJUlEpICYmDQo+
PiArICAgICAgICAgICAgZ3NpID4gMCkgew0KPiANCj4gU28gaWYgZ3NpIDwgMCBmYWlsdXJlIG9m
IHhjX2RvbWFpbl9nZXRpbmZvX3NpbmdsZSgpIHdvdWxkIG5lZWRsZXNzbHkgcmVzdWx0DQo+IGlu
IGZhaWx1cmUgb2YgdGhpcyBmdW5jdGlvbj8NCkluIG5leHQgdmVyc2lvbiwgdGhlIGp1ZGdtZW50
IG9mIGdzaT4wIHdpbGwgYmUgcGxhY2VkIGluIHRoZSBuZXh0IGxheWVyIG9mIGlmLCBhbmQgdGhl
biB0aGUgZXJyb3Igd2lsbCBiZSBoYW5kbGVkLg0KDQo+IA0KPj4gLS0tIGEveGVuL2FyY2gveDg2
L2RvbWN0bC5jDQo+PiArKysgYi94ZW4vYXJjaC94ODYvZG9tY3RsLmMNCj4+IEBAIC0zNiw2ICsz
Niw3IEBADQo+PiAgI2luY2x1ZGUgPGFzbS94c3RhdGUuaD4NCj4+ICAjaW5jbHVkZSA8YXNtL3Bz
ci5oPg0KPj4gICNpbmNsdWRlIDxhc20vY3B1LXBvbGljeS5oPg0KPj4gKyNpbmNsdWRlIDxhc20v
aW9fYXBpYy5oPg0KPj4gIA0KPj4gIHN0YXRpYyBpbnQgdXBkYXRlX2RvbWFpbl9jcHVfcG9saWN5
KHN0cnVjdCBkb21haW4gKmQsDQo+PiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgeGVuX2RvbWN0bF9jcHVfcG9saWN5X3QgKnhkcGMpDQo+PiBAQCAtMjM3LDYgKzIzOCw0OCBA
QCBsb25nIGFyY2hfZG9fZG9tY3RsKA0KPj4gICAgICAgICAgYnJlYWs7DQo+PiAgICAgIH0NCj4+
ICANCj4+ICsgICAgY2FzZSBYRU5fRE9NQ1RMX2dzaV9wZXJtaXNzaW9uOg0KPj4gKyAgICB7DQo+
PiArICAgICAgICB1bnNpZ25lZCBpbnQgZ3NpID0gZG9tY3RsLT51LmdzaV9wZXJtaXNzaW9uLmdz
aTsNCj4+ICsgICAgICAgIGludCBpcnE7DQo+PiArICAgICAgICBib29sIGFsbG93ID0gZG9tY3Rs
LT51LmdzaV9wZXJtaXNzaW9uLmFsbG93X2FjY2VzczsNCj4gDQo+IFNlZSBteSBlYXJsaWVyIGNv
bW1lbnRzIG9uIHRoaXMgY29udmVyc2lvbiBvZiA4IGJpdHMgaW50byBqdXN0IG9uZS4NCkRvIHlv
dSBtZWFuIHRoYXQgSSBuZWVkIHRvIGNoZWNrIGFsbG93X2FjY2VzcyBpcyA+PSAwPw0KQnV0IGFs
bG93X2FjY2VzcyBpcyB1OCwgaXQgY2FuJ3QgYmUgbmVnYXRpdmUuDQoNCj4gDQo+PiArICAgICAg
ICAvKiBDaGVjayBhbGwgcGFkcyBhcmUgemVybyAqLw0KPj4gKyAgICAgICAgcmV0ID0gLUVJTlZB
TDsNCj4+ICsgICAgICAgIGZvciAoIGkgPSAwOw0KPj4gKyAgICAgICAgICAgICAgaSA8IHNpemVv
Zihkb21jdGwtPnUuZ3NpX3Blcm1pc3Npb24ucGFkKSAvDQo+PiArICAgICAgICAgICAgICAgICAg
c2l6ZW9mKGRvbWN0bC0+dS5nc2lfcGVybWlzc2lvbi5wYWRbMF0pOw0KPiANCj4gUGxlYXNlIGRv
bid0IG9wZW4tY29kZSBBUlJBWV9TSVpFKCkuDQpXaWxsIGNoYW5nZSBpbiBuZXh0IHZlcnNpb24u
DQoNCj4gDQo+PiArICAgICAgICAgICAgICArK2kgKQ0KPj4gKyAgICAgICAgICAgIGlmICggZG9t
Y3RsLT51LmdzaV9wZXJtaXNzaW9uLnBhZFtpXSApDQo+PiArICAgICAgICAgICAgICAgIGdvdG8g
b3V0Ow0KPj4gKw0KPj4gKyAgICAgICAgLyoNCj4+ICsgICAgICAgICAqIElmIGN1cnJlbnQgZG9t
YWluIGlzIFBWIG9yIGl0IGhhcyBQSVJRIGZsYWcsIGl0IGhhcyBhIG1hcHBpbmcNCj4+ICsgICAg
ICAgICAqIG9mIGdzaSwgcGlycSBhbmQgaXJxLCBzbyBpdCBzaG91bGQgdXNlIFhFTl9ET01DVExf
aXJxX3Blcm1pc3Npb24NCj4+ICsgICAgICAgICAqIHRvIGdyYW50IGlycSBwZXJtaXNzaW9uLg0K
Pj4gKyAgICAgICAgICovDQo+PiArICAgICAgICByZXQgPSAtRU9QTk9UU1VQUDsNCj4+ICsgICAg
ICAgIGlmICggaXNfcHZfZG9tYWluKGN1cnJkKSB8fCBoYXNfcGlycShjdXJyZCkgKQ0KPj4gKyAg
ICAgICAgICAgIGdvdG8gb3V0Ow0KPiANCj4gSSdtIGN1cmlvdXMgd2hhdCBvdGhlciB4ODYgbWFp
bnRhaW5lcnMgdGhpbms6IEkgZm9yIG9uZSB3b3VsZCBub3QgaW1wb3NlIHN1Y2gNCj4gYW4gYXJ0
aWZpY2lhbCByZXN0cmljdGlvbi4NCkVtbW0sIHNvIEkgbmVlZCB0byByZW1vdmUgdGhpcyBjaGVj
ay4NCg0KPiANCj4+ICsgICAgICAgIHJldCA9IC1FSU5WQUw7DQo+PiArICAgICAgICBpZiAoIGdz
aSA+PSBucl9pcnFzX2dzaSB8fCAoaXJxID0gZ3NpXzJfaXJxKGdzaSkpIDwgMCApDQo+PiArICAg
ICAgICAgICAgZ290byBvdXQ7DQo+PiArDQo+PiArICAgICAgICByZXQgPSAtRVBFUk07DQo+PiAr
ICAgICAgICBpZiAoICFpcnFfYWNjZXNzX3Blcm1pdHRlZChjdXJyZCwgaXJxKSB8fA0KPj4gKyAg
ICAgICAgICAgICB4c21faXJxX3Blcm1pc3Npb24oWFNNX0hPT0ssIGQsIGlycSwgYWxsb3cpICkN
Cj4+ICsgICAgICAgICAgICBnb3RvIG91dDsNCj4+ICsNCj4+ICsgICAgICAgIGlmICggYWxsb3cg
KQ0KPj4gKyAgICAgICAgICAgIHJldCA9IGlycV9wZXJtaXRfYWNjZXNzKGQsIGlycSk7DQo+PiAr
ICAgICAgICBlbHNlDQo+PiArICAgICAgICAgICAgcmV0ID0gaXJxX2RlbnlfYWNjZXNzKGQsIGly
cSk7DQo+PiArDQo+PiArICAgIG91dDoNCj4gDQo+IFBsZWFzZSB1c2UgYSBsZXNzIGdlbmVyaWMg
bmFtZSBmb3Igc3VjaCBhIGxhYmVsIGxvY2FsIHRvIGp1c3Qgb25lIGNhc2UNCj4gYmxvY2suIEhv
d2V2ZXIsIHdpdGggLi4NCk9rLCB3aWxsIGNoYW5nZSBpbiBuZXh0IHZlcnNpb24uDQoNCj4gDQo+
PiArICAgICAgICBicmVhazsNCj4gDQo+IC4uIHRoaXMgYmVpbmcgYWxsIHRoYXQncyBkb25lIGhl
cmU6IFdoeSBoYXZlIGEgbGFiZWwgaW4gdGhlIGZpcnN0IHBsYWNlPw0KPiANCj4gSmFuDQoNCi0t
IA0KQmVzdCByZWdhcmRzLA0KSmlxaWFuIENoZW4uDQo=


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:26:23 2024
Message-ID: <dfae4cdc-d445-4e1e-83f4-3b1ca790beb7@suse.com>
Date: Tue, 18 Jun 2024 10:26:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v12 5/8] xen/riscv: add minimal stuff to mm.h to build
 full Xen
To: "Oleksii K." <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1717008161.git.oleksii.kurochko@gmail.com>
 <d00b86f41ef2c7d928a28dadd8c34fb845f23d0a.1717008161.git.oleksii.kurochko@gmail.com>
 <70128dba-498f-4d85-8507-bb1621182754@citrix.com>
 <7721c1b4eb0ea76cca5460264954d40d639499b7.camel@gmail.com>
 <e80e30c9-6558-4b70-ab2f-18c34c359dae@citrix.com>
 <1b3b389156ad924f00af8af1d173b89fc533050e.camel@gmail.com>
 <fa62c314-424e-4e5b-9046-3a2e1eba654e@citrix.com>
 <0b6350f2d8cfe8f2f8f24f1f2de601f3db529335.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <0b6350f2d8cfe8f2f8f24f1f2de601f3db529335.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18.06.2024 10:21, Oleksii K. wrote:
> On Fri, 2024-06-14 at 10:47 +0100, Andrew Cooper wrote:
>> On 11/06/2024 7:23 pm, Oleksii K. wrote:
>>> [OPTION 1] If we accept losing 4 GB of VA, then we could
>>> implement mfn_to_page() and page_to_mfn() in the following way:
>>> ```
>>>    diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
>>>    index cc4a07a71c..fdac7e0646 100644
>>>    --- a/xen/arch/riscv/include/asm/mm.h
>>>    +++ b/xen/arch/riscv/include/asm/mm.h
>>>    @@ -107,14 +107,11 @@ struct page_info
>>>    
>>>     #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
>>>    
>>>    -/* PDX of the first page in the frame table. */
>>>    -extern unsigned long frametable_base_pdx;
>>>    -
>>>     /* Convert between machine frame numbers and page-info structures. */
>>>     #define mfn_to_page(mfn)                                          \
>>>    -    (frame_table + (mfn_to_pdx(mfn) - frametable_base_pdx))
>>>    +    (frame_table + mfn)
>>>     #define page_to_mfn(pg)                                           \
>>>    -    pdx_to_mfn((unsigned long)((pg) - frame_table) + frametable_base_pdx)
>>>    +    ((unsigned long)((pg) - frame_table))
>>>    
>>>     static inline void *page_to_virt(const struct page_info *pg)
>>>     {
>>>    diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
>>>    index 9c0fd80588..8f6dbdc699 100644
>>>    --- a/xen/arch/riscv/mm.c
>>>    +++ b/xen/arch/riscv/mm.c
>>>    @@ -15,7 +15,7 @@
>>>     #include <asm/page.h>
>>>     #include <asm/processor.h>
>>>    
>>>    -unsigned long __ro_after_init frametable_base_pdx;
>>>     unsigned long __ro_after_init frametable_virt_end;
>>>    
>>>     struct mmu_desc {
>>> ```
>>
>> I firmly recommend option 1, especially at this point.
> Jan, as you gave your Ack before, do you mind defining
> mfn_to_page() and page_to_mfn() as mentioned above (Option 1)?

No, I don't mind. And please feel free to keep my ack if no other significant
changes are made.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:33:16 2024
Message-ID: <f8381d8b-0ad2-4766-8a53-d1ee44ea7e05@suse.com>
Date: Tue, 18 Jun 2024 10:33:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-2-Jiqian.Chen@amd.com>
 <4e2accc2-e81d-450a-af2d-38884455de9c@suse.com>
 <BL1PR12MB58499527CFA36446EAD3FCE0E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58499527CFA36446EAD3FCE0E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.06.2024 08:25, Chen, Jiqian wrote:
> On 2024/6/17 22:17, Jan Beulich wrote:
>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>> --- a/xen/drivers/pci/physdev.c
>>> +++ b/xen/drivers/pci/physdev.c
>>> @@ -2,11 +2,17 @@
>>>  #include <xen/guest_access.h>
>>>  #include <xen/hypercall.h>
>>>  #include <xen/init.h>
>>> +#include <xen/vpci.h>
>>>  
>>>  #ifndef COMPAT
>>>  typedef long ret_t;
>>>  #endif
>>>  
>>> +static const struct pci_device_state_reset_method
>>> +                    pci_device_state_reset_methods[] = {
>>> +    [ DEVICE_RESET_FLR ].reset_fn = vpci_reset_device_state,
>>> +};
>>
>> What about the other three DEVICE_RESET_*? In particular ...
> I don't know how to implement the other three types of reset.
> This is designed as a dispatch table so that corresponding handler functions can be added later if necessary. Do I need to set them to NULL pointers in this array?

No.

> Does this form conform to your previous suggestion of using only one hypercall to handle all types of resets?

Yes, at least in principle. Question here is: To be on the safe side,
wouldn't we better reset state for all forms of reset, leaving possible
relaxation of that for later? At which point the function wouldn't need
calling indirectly, and instead would be passed the reset type as an
argument.

>> Also, nit (further up): Opening figure braces for a new scope go onto their
>> own line. Then again I notice that apparently _all_ other instances in this
>> file are doing it the wrong way, too.
> OK, will change in next version.
> Do I need to change them in this patch?

No.

>>> --- a/xen/drivers/vpci/vpci.c
>>> +++ b/xen/drivers/vpci/vpci.c
>>> @@ -172,6 +172,15 @@ int vpci_assign_device(struct pci_dev *pdev)
>>>  
>>>      return rc;
>>>  }
>>> +
>>> +int vpci_reset_device_state(struct pci_dev *pdev)
>>
>> As a target of an indirect call this needs to be annotated cf_check (both
>> here and in the declaration, unlike __must_check, which is sufficient to
>> have on just the declaration).
> OK, will add cf_check in next version.

Which may not be necessary if you go the route suggested above.

>>> --- a/xen/include/xen/pci.h
>>> +++ b/xen/include/xen/pci.h
>>> @@ -156,6 +156,22 @@ struct pci_dev {
>>>      struct vpci *vpci;
>>>  };
>>>  
>>> +struct pci_device_state_reset_method {
>>> +    int (*reset_fn)(struct pci_dev *pdev);
>>> +};
>>> +
>>> +enum pci_device_state_reset_type {
>>> +    DEVICE_RESET_FLR,
>>> +    DEVICE_RESET_COLD,
>>> +    DEVICE_RESET_WARM,
>>> +    DEVICE_RESET_HOT,
>>> +};
>>> +
>>> +struct pci_device_state_reset {
>>> +    struct physdev_pci_device dev;
>>> +    enum pci_device_state_reset_type reset_type;
>>> +};
>>
>> This is the struct to use as hypercall argument. How can it live outside of
>> any public header? Also, when moving it there, beware that you should not
>> use enum-s there. Only handles and fixed-width types are permitted.
> Yes, I put them there before, but enum is not permitted.
> Then, do you have another suggested type to distinguish the different types of resets, given that enum can't work in the public header?

Do like we do everywhere else: Use a set of #define-s.

>>> --- a/xen/include/xen/vpci.h
>>> +++ b/xen/include/xen/vpci.h
>>> @@ -38,6 +38,7 @@ int __must_check vpci_assign_device(struct pci_dev *pdev);
>>>  
>>>  /* Remove all handlers and free vpci related structures. */
>>>  void vpci_deassign_device(struct pci_dev *pdev);
>>> +int __must_check vpci_reset_device_state(struct pci_dev *pdev);
>>
>> What's the purpose of this __must_check, when the sole caller is calling
>> this through a function pointer, which isn't similarly annotated?
> This is what I added before introducing function pointers, but after modifying the implementation, it was not taken into account.
> I will remove __must_check

Why remove? Is it relevant for the return value to be checked? Or if it
isn't, why would there be a return value?

Jan

> and change to cf_check, according to your above comment.



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:37:36 2024
X-Received: by 2002:a05:6870:b011:b0:254:a89e:acca with SMTP id
 586e51a60fabf-2584258a3d9mr12682841fac.0.1718699850938; Tue, 18 Jun 2024
 01:37:30 -0700 (PDT)
MIME-Version: 1.0
References: <20240617141303.53857-1-frediano.ziglio@cloud.com>
 <2fe6ef97-84f2-4bf4-870b-b0bb580fa38f@suse.com> <ZnBKDRWi_2cO6WbA@macbook>
In-Reply-To: <ZnBKDRWi_2cO6WbA@macbook>
From: Frediano Ziglio <frediano.ziglio@cloud.com>
Date: Tue, 18 Jun 2024 09:37:08 +0100
Message-ID: <CACHz=Zg4Zoyr4KNeig4yDDNUxvV325beJEyT-L-K0a+FHp7oDg@mail.gmail.com>
Subject: Re: [PATCH] x86/xen/time: Reduce Xen timer tick
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, "H. Peter Anvin" <hpa@zytor.com>, x86@kernel.org, 
	Dave Hansen <dave.hansen@linux.intel.com>, Borislav Petkov <bp@alien8.de>, 
	Ingo Molnar <mingo@redhat.com>, Thomas Gleixner <tglx@linutronix.de>, 
	Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, 
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 17, 2024 at 3:37 PM Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Mon, Jun 17, 2024 at 04:22:21PM +0200, Jan Beulich wrote:
> > On 17.06.2024 16:13, Frediano Ziglio wrote:
> > > The current timer tick is causing some deadlines to fail.
> > > The current high-value constant was probably due to an old
> > > bug in the Xen timer implementation causing errors if the
> > > deadline was in the future.
> > > This was fixed in Xen commit:
> > > 19c6cbd90965 xen/vcpu: ignore VCPU_SSHOTTMR_future
> >
> > And then newer kernels are no longer reliably usable on Xen older than
> > this?
>
> I think this should reference the Linux commit that removed the usage
> of VCPU_SSHOTTMR_future on Linux itself, not the change that makes Xen
> ignore the flag.
>

Yes, the Linux kernel stopped using this flag in 2016 with commit
c06b6d70feb32d28f04ba37aa3df17973fd37b6b ("xen/x86: don't lose event
interrupts"); I'll add it to the commit message.

> > > --- a/arch/x86/xen/time.c
> > > +++ b/arch/x86/xen/time.c
> > > @@ -30,7 +30,7 @@
> > >  #include "xen-ops.h"
> > >
> > >  /* Minimum amount of time until next clock event fires */
> > > -#define TIMER_SLOP 100000
> > > +#define TIMER_SLOP 1000
> >
> > It may just be my lack of knowledge of nowadays's Linux time
> > handling, but the change of a value with this name, and commented
> > thus, doesn't directly relate to the "timer tick" rate. Could you
> > maybe help me see the connection?
>
> The TIMER_SLOP define is used in min_delta_{ns,ticks} field, and I
> think this is wrong.
>
> The min_delta_ns for the Xen timer is 1ns.  If Linux needs some
> greater min delta than what the timer interface supports it should be
> handled in the generic timer code, not open coded at the definition of
> possibly each timer implementation.
>

I think this is done to limit the potential event handling frequency;
elsewhere in the timer code (in kernel/time/clockevents.c) there's a
comment saying "Deltas less than 1usec are pointless noise".
I think it's hard for software to reach such a high frequency, so I
didn't propose 1ns.
What are you suggesting? Putting in 1ns and seeing what happens? Is
there any proper test code for this?

> Thanks, Roger.

Frediano


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:39:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 08:39:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742843.1149716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUMj-0004Zu-IB; Tue, 18 Jun 2024 08:38:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742843.1149716; Tue, 18 Jun 2024 08:38:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUMj-0004Zl-FD; Tue, 18 Jun 2024 08:38:57 +0000
Received: by outflank-mailman (input) for mailman id 742843;
 Tue, 18 Jun 2024 08:38:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJUMi-0004Zf-Ai
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 08:38:56 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 31f59772-2d4e-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 10:38:54 +0200 (CEST)
Received: by mail-ed1-x530.google.com with SMTP id
 4fb4d7f45d1cf-57ca578ce8dso5935816a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 01:38:54 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f9806csm598447266b.195.2024.06.18.01.38.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 01:38:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31f59772-2d4e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718699934; x=1719304734; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=AnZWs13gONBrlER4YdTUIiLsTrQi0YoR2Ni4Wt7t7Hs=;
        b=NznerrHi/GjD8W0HNJ8oTgxM6E+qVHfx/PhgZqrsxB7gdEMFbD4DRADoNFTikaS6zZ
         jeePtvKIxe55hzmnS4P3JWXxHF22DkDnQT8ob8L98kJYkfrsJWPoKJaKvvg3z2aqgvQ+
         vV5NPIcV2o+sBJRcw2Au0F/Q3PivwWYEQI7Ocp28HLIiSKNk6pBFMBBct851MEhuz4M5
         NxEZiAO2SY7+y21dJUJJ8qC3HYZLF6Q8wSd0S+DKUxAcUh1MhTbmSbzlw895vfsGOEQY
         ON2/sro6e4ID2Bl4BqbUHv6EEN0M00BGghuw4UCcC18ph56qqlDtgFrKV5Mk215s344J
         nyVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718699934; x=1719304734;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AnZWs13gONBrlER4YdTUIiLsTrQi0YoR2Ni4Wt7t7Hs=;
        b=UsvDiv4zMcPfaHIwc19XjXqj2nC6u+UoCkQNG6PIekeU6hw7PDgDso5tX/hQyVeu1e
         I+NihVB3RHiWi5XQ5qxmTnTSPYlNOpxE9iVFgQH6hoShW7KZlH/VyI6Vd3SWu8h3nKpG
         RZjLG1FHfL06axXX6AEeu3+mhI8z/qm78yZlpsCzyumNma2WIYs5JBgUzbp1iaOXKE/4
         +Yr+Zzl1xYHWG4vT5UAvPnfcwAbEM60cm/I2iTx0B38jEmrjOvgftMoVTaI0f4ZO9Mf+
         2ZpS1smXWxf/qyA3N1rgasY6S019YbhrG9odHbJpDj0u6r38c2k5P/p89CkpDxIq5k6U
         qnUw==
X-Forwarded-Encrypted: i=1; AJvYcCW+Pp5+fUhf4OwLg3WIFA72ZdLHaSdzbwdUGuXhN+VJyC800eXAM0NUYXxy5P/6kxISXnPsZ3veYPi1V60RzzZPg4+cXiJp/Aas7pe5gcM=
X-Gm-Message-State: AOJu0YwFKE/miY9gCOPPGRUEHRGTOhP70Xdmi8ICB0GciU67lvVgDR3U
	W77B3UQ/o2qhb5LtqCmDjQmvqbouTcB8TXKomHt6Rvlk5PbgZP7Sl+Z4FFOsKQ==
X-Google-Smtp-Source: AGHT+IFg1le6jF5017yZVjJTJvP5H6IzgVTsawT7JR4KZm/VHhY2dFfTNXcUyY+cAOIHNnOzMG7uvQ==
X-Received: by 2002:a17:907:a64b:b0:a6f:8a1b:9964 with SMTP id a640c23a62f3a-a6f8a1ba21fmr323140766b.57.1718699933558;
        Tue, 18 Jun 2024 01:38:53 -0700 (PDT)
Message-ID: <c41031f9-bf3f-472e-82be-c1efea07c343@suse.com>
Date: Tue, 18 Jun 2024 10:38:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-3-Jiqian.Chen@amd.com>
 <cb9910cd-7045-4c0d-a7cf-2bcf36e30cb2@suse.com>
 <BL1PR12MB5849FC16D91FADD5B7D30A63E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849FC16D91FADD5B7D30A63E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.06.2024 08:49, Chen, Jiqian wrote:
> On 2024/6/17 22:45, Jan Beulich wrote:
>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>> If run Xen with PVH dom0 and hvm domU, hvm will map a pirq for
>>> a passthrough device by using gsi, see qemu code
>>> xen_pt_realize->xc_physdev_map_pirq and libxl code
>>> pci_add_dm_done->xc_physdev_map_pirq. Then xc_physdev_map_pirq
>>> will call into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
>>> is not allowed because currd is PVH dom0 and PVH has no
>>> X86_EMU_USE_PIRQ flag, it will fail at has_pirq check.
>>>
>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
>>> PHYSDEVOP_unmap_pirq for the failed path to unmap pirq.
>>
>> Why "failed path"? Isn't unmapping also part of normal device removal
>> from a guest?
> Yes, both. I will change it to also "allow PHYSDEVOP_unmap_pirq for the device removal path to unmap pirq".
> 
>>
>>> And
>>> add a new check to prevent self map when subject domain has no
>>> PIRQ flag.
>>
>> You still talk of only self mapping, and the code also still does only
>> that. As pointed out before: Why would you allow mapping into a PVH
>> DomU? IOW what purpose do the "d == currd" checks have?
> The check I added has two purposes. First, I need to allow this case:
> Dom0 (without PIRQ) + DomU (with PIRQ), because the original code just does (!has_pirq(currd)), which causes map_pirq to fail in this case.
> Second, I need to disallow self-mapping:
> a DomU (without PIRQ) doing map_pirq; the "d==currd" means currd is the subject domain itself.
> 
> Emmm, I think I know what your concern is.
> Do you mean I need to
> " Prevent map_pirq when currd has no X86_EMU_USE_PIRQ flag "
> instead of
> " Prevent self-map when currd has no X86_EMU_USE_PIRQ flag ",

No. What I mean is that I continue to fail to see why you mention "currd".
IOW it would be more like "prevent mapping when the subject domain has no
X86_EMU_USE_PIRQ" (which, as a specific sub-case, includes self-mapping
if the caller specifies DOMID_SELF for the subject domain).

> so I need to remove "d==currd", right?

Removing this check is what I'm after, yes. Yet that's not in sync with
either of the two quoted sentences above.

>>> So that domU with PIRQ flag can success to map pirq for
>>> passthrough devices even dom0 has no PIRQ flag.
>>
>> There's still a description problem here. Much like the first sentence,
>> this last one also says that the guest would itself map the pIRQ. In
>> which case there would still not be any reason to expose the sub-
>> functions to Dom0.
> If change to " So that the interrupt of a passthrough device can success to be mapped to pirq for domU with PIRQ flag when dom0 is PVH.",
> Is it OK?

Kind of, yes. "can be successfully mapped" is one of the various possibilities
of making this read a little more smoothly.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:56:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 08:56:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742853.1149727 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUd9-0007nx-0g; Tue, 18 Jun 2024 08:55:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742853.1149727; Tue, 18 Jun 2024 08:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUd8-0007nq-Tq; Tue, 18 Jun 2024 08:55:54 +0000
Received: by outflank-mailman (input) for mailman id 742853;
 Tue, 18 Jun 2024 08:55:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJUd7-0007nk-Ic
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 08:55:53 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90cdfadc-2d50-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 10:55:52 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57cad452f8bso5804702a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 01:55:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb7439070sm7547280a12.85.2024.06.18.01.55.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 01:55:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90cdfadc-2d50-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718700952; x=1719305752; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=EdwxUBMtJqpxDgmXryYAvIazZRX1Hy+h3AFFbKAnUKM=;
        b=fekdAqaNhi1KC00MJGVkcDL7Jg6dlEc4uZaFwRrJUJehdp2AAP59ehBLjjKI1iHuRY
         C/KwBbkTsUIHtKnRqSZix6qxcR+AVcb2NYkl0vYZQXozs5t3QMHcmYmj62m1uV3U5QHD
         bmG1qyQLQTIklN1AdwzqvTL8zJ6XF5s0AsbsVfz0Zn8fjgXNDPtwn316XFa5Fq0Uw2DL
         mc1x1X19BQ6bWsX7KnEqrg8LZWyjrSvjlaH8dDINIxGVR8JqybHdQZGqsPNLNuzOmh+D
         7UxZME6X7tj9+1umWtToX2zWsO+Gm5WG/kd+WrUSIf5SpIh/HjgSQXyONtuha6naa09D
         ubIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718700952; x=1719305752;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=EdwxUBMtJqpxDgmXryYAvIazZRX1Hy+h3AFFbKAnUKM=;
        b=YNbm5wDibO6BqJZK6iIyi7X6ip1Gmnx/LXPXVdpZtG9hYlgVR7Mgb8BfoG0BcOq5zb
         rwYXosCQdMpZyUV51uPnidCOtNwMdsqEJ0hOts2ZBwfJHMERLpUSN8CGqAcbYrELUJml
         j1O9UhDMRadzFG7ao3cfX5BxQwvmSSlz0SbChNpxoaR6aQaqD8tF5sVzxqfcpn0kL9nx
         N18rw+YwhS2f13ases9yiZ3hdTVAAdovGOjLfTGUsiTF9/0PIvu+ymE2OU4LXjFc7AaB
         vwp8FXmaWkATxPkLTiV/xl8jofr0o3IYkqOhpA6CmX0s4jw9uvO7npCYwSy0dByEIY1v
         uADw==
X-Forwarded-Encrypted: i=1; AJvYcCXR4DqmBy4TEUCb8fdO9SqHIdcpdIKyvnDbTFYBBerrOzM9XCcyA3wmkaxxHdDRXY0Cu7wa21sCgwKqAsp0+6eu2ZVhXj6sTxTn1F4P714=
X-Gm-Message-State: AOJu0YwnKbWQTuYSuhKktTmguH3BbVIcZYI76WQJyDLXYlbIj7Uiq9sU
	2U5OzLttG1WNbQDO5ktan4i0KtyGFtief0+P8nBIvWgd3b+3DE9YiFu46QXyZw==
X-Google-Smtp-Source: AGHT+IE9l6dW2GadYgqgSFh0k5pgmlZtlxz6ZwK8aMMxQ0wdOzPGNjCWUSeEbjS70dr8oUqb5LrqWg==
X-Received: by 2002:a50:ed82:0:b0:57c:bf3b:76f5 with SMTP id 4fb4d7f45d1cf-57cbf3b78acmr7326488a12.35.1718700951644;
        Tue, 18 Jun 2024 01:55:51 -0700 (PDT)
Message-ID: <b923a32e-3c22-4e7a-8844-b33322ef8ad1@suse.com>
Date: Tue, 18 Jun 2024 10:55:49 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
 <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
 <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.06.2024 08:57, Chen, Jiqian wrote:
> On 2024/6/17 22:52, Jan Beulich wrote:
>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>> The gsi of a passthrough device must be configured for it to be
>>> able to be mapped into a hvm domU.
>>> But When dom0 is PVH, the gsis don't get registered, it causes
>>> the info of apic, pin and irq not be added into irq_2_pin list,
>>> and the handler of irq_desc is not set, then when passthrough a
>>> device, setting ioapic affinity and vector will fail.
>>>
>>> To fix above problem, on Linux kernel side, a new code will
>>> need to call PHYSDEVOP_setup_gsi for passthrough devices to
>>> register gsi when dom0 is PVH.
>>>
>>> So, add PHYSDEVOP_setup_gsi into hvm_physdev_op for above
>>> purpose.
>>>
>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>> ---
>>> The code link that will call this hypercall on linux kernel side is as follows:
>>> https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/
>>
>> One of my v9 comments was addressed, thanks. Repeating the other, unaddressed
>> one here:
>> "As to GSIs not being registered: If that's not a problem for Dom0's own
>>  operation, I think it'll also want/need explaining why what is sufficient for
>>  Dom0 alone isn't sufficient when pass-through comes into play."
> I have modified the commit message to describe why GSIs not being registered can cause passthrough to not work, according to this v9 comment.
> " it causes the info of apic, pin and irq not be added into irq_2_pin list, and the handler of irq_desc is not set, then when passthrough a device, setting ioapic affinity and vector will fail."
> What description do you want me to add?

What I'd first like to have clarification on (i.e. before putting it in
the description one way or another): How come Dom0 alone gets away fine
without making the call, yet for passthrough-to-DomU it's needed? Is it
perhaps that it just so happened that for Dom0 things have been working
on systems where it was tested, but the call should in principle have been
there in this case, too [1]? That (to me at least) would make quite a
difference for both this patch's description and us accepting it.

Jan

[1] Alternative e.g. being that because of other actions PVH Dom0 takes,
like the IO-APIC RTE programming it does for IRQs it wants to use for
itself, the necessary information is already suitably conveyed to Xen in
that case. In such a case imo it's relevant to mention in the description.
Not the least because iirc the pciback driver sets up a fake IRQ handler
in such cases, which ought to lead to similar IO-APIC RTE programming, at
which point the question would again arise why the hypercall needs
exposing.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:57:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 08:57:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742859.1149737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUee-0008Jh-9a; Tue, 18 Jun 2024 08:57:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742859.1149737; Tue, 18 Jun 2024 08:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUee-0008Ja-6r; Tue, 18 Jun 2024 08:57:28 +0000
Received: by outflank-mailman (input) for mailman id 742859;
 Tue, 18 Jun 2024 08:57:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUed-0008JO-4n; Tue, 18 Jun 2024 08:57:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUec-0004tG-N3; Tue, 18 Jun 2024 08:57:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUec-0002Ac-Ch; Tue, 18 Jun 2024 08:57:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUec-0000Ju-C7; Tue, 18 Jun 2024 08:57:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kxTk9dmf4jmoSyl/wj/iJTHJtnB0i4DyJplpC9xNbs0=; b=Pbcp3zhJTBGPMNfqjSv5iTLTZU
	ICtB//B+XHmIHdxhXbzLJJLo42zKyWKGhO2zm3jBTsBY+xJ0XaisygxsCmp/MJhBQrZLMal1CJzT1
	Twjrz+uuaT8X1CGiH8dXrGUy+ObrdPA+IO6InaLumHe34eRY/0Bvyvxgm3I4NgypaCts=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186386-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186386: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
X-Osstest-Versions-That:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 08:57:26 +0000

flight 186386 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186386/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186376
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186376
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186376
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186376
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186376
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186376
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7
baseline version:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7

Last test of basis   186386  2024-06-18 01:53:35 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 08:57:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 08:57:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742862.1149746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUer-0000Fx-I8; Tue, 18 Jun 2024 08:57:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742862.1149746; Tue, 18 Jun 2024 08:57:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUer-0000Fo-FX; Tue, 18 Jun 2024 08:57:41 +0000
Received: by outflank-mailman (input) for mailman id 742862;
 Tue, 18 Jun 2024 08:57:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TBP2=NU=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJUer-0000DG-0s
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 08:57:41 +0000
Received: from mail-qk1-x734.google.com (mail-qk1-x734.google.com
 [2607:f8b0:4864:20::734])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d0ba5ae3-2d50-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 10:57:40 +0200 (CEST)
Received: by mail-qk1-x734.google.com with SMTP id
 af79cd13be357-7953f1dcb01so449730685a.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 01:57:40 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abe6b4dfsm506192185a.125.2024.06.18.01.57.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 01:57:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0ba5ae3-2d50-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718701059; x=1719305859; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=1MJWeAXBDDEpLuGkGIY36gY2Jr0ZSeY/lNnVY9HPv1s=;
        b=fXovXeN2AsU06pmGEuJ8It8mRgdPfybM/gRMHZLxA7up9M3vAec2u+gOm0p7xMXZ5E
         OTLk78VU+Qbsh72M2qpe+8jLAfS/FADmmfPaQvc0wak93b/e+fupcFf3dSrxCELvNMOX
         j9wpPdE+xWlkvjzHv6GVToPM/CbpwLxYEDLOY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718701059; x=1719305859;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=1MJWeAXBDDEpLuGkGIY36gY2Jr0ZSeY/lNnVY9HPv1s=;
        b=FKe/UI1QN9oOOVaDTgiDMZdxogS2y7EyurekNCFKZbTKQihkNj5UtSMxiftLXQqolD
         7ySKM9asBA6uL5fkK9LdBX71fOYx+79RdgJqDcVPD+y/s2C6h3z/II8mFvWmyDJEhA/W
         IYgZhS736LTCXPqF4J2a/jc9tIfUGlK1AGFnOFHgezmhmerTXwx5zMZMv04vxIIcyv/V
         4hZhZrsab/I6njDojaOVIv/YL6fhGqxuaEZ7+48EykocsZd/fjyWEbsUTKsc5nJ/TFMZ
         +C0aL9s1H1DvOz9wJgXANMgBh8RK70mnBtkHxlgueVPr9Ir4Z1+CPv+k4eCSHOsnJ1/7
         tpgw==
X-Forwarded-Encrypted: i=1; AJvYcCVb/cBHLG1aL912Ff811wJ9UHIUptJmCsKP9O3ICrfVCbIV/yDaE6r02FXtoWEPxv8/uDRMIkXiDBqWBff8dgN6CEVfB2o4yb75txD0luc=
X-Gm-Message-State: AOJu0YyFj2lx7Ihtgm9JDMFMIK2gWpfkt2ZwFGvjK07j4fgLdCFwroR2
	D1FitG+gIAugwN7cFr2/n6Jyjd/GqalgERNxRn9jn8QWX1fYyYjlpPjbOuTTFH0=
X-Google-Smtp-Source: AGHT+IFECmvJp1ez2eLa+dpQfO6eX8nUMX4FYtnOHPp9ThmxkkpraoTUSlv3dCTdEWRDY19HCOSrbQ==
X-Received: by 2002:a05:620a:4005:b0:797:cfb3:155f with SMTP id af79cd13be357-798d240afdamr1497813685a.27.1718701058859;
        Tue, 18 Jun 2024 01:57:38 -0700 (PDT)
Date: Tue, 18 Jun 2024 10:57:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Frediano Ziglio <frediano.ziglio@cloud.com>
Cc: Jan Beulich <jbeulich@suse.com>, "H. Peter Anvin" <hpa@zytor.com>,
	x86@kernel.org, Dave Hansen <dave.hansen@linux.intel.com>,
	Borislav Petkov <bp@alien8.de>, Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] x86/xen/time: Reduce Xen timer tick
Message-ID: <ZnFL_0ihWiI7Yaf0@macbook>
References: <20240617141303.53857-1-frediano.ziglio@cloud.com>
 <2fe6ef97-84f2-4bf4-870b-b0bb580fa38f@suse.com>
 <ZnBKDRWi_2cO6WbA@macbook>
 <CACHz=Zg4Zoyr4KNeig4yDDNUxvV325beJEyT-L-K0a+FHp7oDg@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CACHz=Zg4Zoyr4KNeig4yDDNUxvV325beJEyT-L-K0a+FHp7oDg@mail.gmail.com>

On Tue, Jun 18, 2024 at 09:37:08AM +0100, Frediano Ziglio wrote:
> On Mon, Jun 17, 2024 at 3:37 PM Roger Pau Monné <roger.pau@citrix.com> wrote:
> >
> > On Mon, Jun 17, 2024 at 04:22:21PM +0200, Jan Beulich wrote:
> > > On 17.06.2024 16:13, Frediano Ziglio wrote:
> > > > The current timer tick is causing some deadlines to be missed.
> > > > The current high-value constant was probably due to an old
> > > > bug in the Xen timer implementation that caused errors if the
> > > > deadline was in the future.
> > > > This was fixed in Xen commit:
> > > > 19c6cbd90965 xen/vcpu: ignore VCPU_SSHOTTMR_future
> > >
> > > And then newer kernels are no longer reliably usable on Xen older than
> > > this?
> >
> > I think this should reference the Linux commit that removed the usage
> > of VCPU_SSHOTTMR_future on Linux itself, not the change that makes Xen
> > ignore the flag.
> >
> 
> Yes, the Linux kernel stopped using this flag in 2016, with commit
> c06b6d70feb32d28f04ba37aa3df17973fd37b6b ("xen/x86: don't lose event
> interrupts"); I'll add it to the commit message.
> 
> > > > --- a/arch/x86/xen/time.c
> > > > +++ b/arch/x86/xen/time.c
> > > > @@ -30,7 +30,7 @@
> > > >  #include "xen-ops.h"
> > > >
> > > >  /* Minimum amount of time until next clock event fires */
> > > > -#define TIMER_SLOP 100000
> > > > +#define TIMER_SLOP 1000
> > >
> > > It may just be my lack of knowledge of today's Linux
> > > time handling, but the change of a value with this name, and
> > > commented this way, doesn't directly relate to the "timer tick" rate.
> > > Could you maybe help me see the connection?
> >
> > The TIMER_SLOP define is used in the min_delta_{ns,ticks} fields, and I
> > think this is wrong.
> >
> > The min_delta_ns for the Xen timer is 1ns.  If Linux needs a
> > greater min delta than what the timer interface supports, it should be
> > handled in the generic timer code, not open-coded in the definition of
> > possibly every timer implementation.
> >
> 
> I think this is done to reduce the potential event handling frequency;
> elsewhere in the timer code (kernel/time/clockevents.c) there's a
> comment: "Deltas less than 1usec are pointless noise".

Then why does the interface allow for timers having a resolution of up
to 1ns?

> I think it's hard for software to need a frequency that high, so I
> didn't propose 1ns.
> What are you suggesting? To use 1ns and see what happens? Is there any
> proper test code for this?

The Xen timer interface has a resolution of 1ns, and the Linux
structures that describe timers also support a 1ns resolution.  I can
perfectly understand that deltas of 1ns make no sense, but given how
the Xen timer works those won't be a problem.  The interrupt will get
injected strictly after the hypercall to set up the timer, because by
the time Xen processes the delta it will most likely have already
expired.

Forcing every timer to set up a minimal delta of 1usec is pointless.
It either needs to be done in the generic code, or the interface to
register timers needs to be adjusted to allow for a minimum resolution
of 1usec.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 09:01:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 09:01:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742875.1149756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUiN-0002dj-49; Tue, 18 Jun 2024 09:01:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742875.1149756; Tue, 18 Jun 2024 09:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUiN-0002dc-1H; Tue, 18 Jun 2024 09:01:19 +0000
Received: by outflank-mailman (input) for mailman id 742875;
 Tue, 18 Jun 2024 09:01:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUiL-0002dS-Q8; Tue, 18 Jun 2024 09:01:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUiL-00050f-Ou; Tue, 18 Jun 2024 09:01:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUiL-0002Fl-HT; Tue, 18 Jun 2024 09:01:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJUiL-0001TF-Gw; Tue, 18 Jun 2024 09:01:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AGE+PRkppbK2F4u7+oFDD3iffPb/U5gUv8mULAg8Zmc=; b=q8Sp0osSs+b4J8Rlmoge3Qc9AE
	E9oPwJ1vofWnlUtPSpEHRqv+o8DEs24EAVmTIyckuzBJ5hGmKb4RvxcLFbREyaBf605ILqnq65qJ7
	CY+uXHvXTfrt7H71NbjcTF9ia1F1vx8u6KYY+S7dLQFyxWS8wDLNaktlIiHUsh06zQMs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186392-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186392: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=176b9d41f8f71c7572dab615a8d5259dd2cbc02a
X-Osstest-Versions-That:
    ovmf=537a81ae81622d65052184b281e8b2754d0b5313
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 09:01:17 +0000

flight 186392 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186392/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 176b9d41f8f71c7572dab615a8d5259dd2cbc02a
baseline version:
 ovmf                 537a81ae81622d65052184b281e8b2754d0b5313

Last test of basis   186391  2024-06-18 06:13:04 Z    0 days
Testing same since   186392  2024-06-18 07:42:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Zhihao Li <zhihao.li@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   537a81ae81..176b9d41f8  176b9d41f8f71c7572dab615a8d5259dd2cbc02a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 09:14:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 09:14:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742892.1149766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUui-0004wR-7D; Tue, 18 Jun 2024 09:14:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742892.1149766; Tue, 18 Jun 2024 09:14:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJUui-0004wK-4d; Tue, 18 Jun 2024 09:14:04 +0000
Received: by outflank-mailman (input) for mailman id 742892;
 Tue, 18 Jun 2024 09:14:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJUuh-0004wE-60
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 09:14:03 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19c59955-2d53-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 11:14:01 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57c83100bd6so6350650a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 02:14:01 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ed357asm601906766b.125.2024.06.18.02.13.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 02:14:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19c59955-2d53-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718702040; x=1719306840; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=z8kss9qGBQgb51Jq9Iz0lX8khDkUj6+Ajw4D7ZRyHpM=;
        b=SXXsg+Z8r8BUZOQsGTdBQ8LkijGfmElLZtvxO4cX3akTuuKGLj8VHxuu4vgBRO4e2k
         KgVh9xptV/U7g7kUqwbgaT+lIl7KAk/sAt5FP6FPV9qs9BmtQPgXd969y2Vdub9H+uuN
         rvbqGNA90taAyhLHUU05De5mq4mVgNfnisUN7Hp6j3TNvZM2+Z62zzZ6rPYtoqkInzqC
         HaHcM7S56zrCQ/bkn/zrMZkt1r/Mc9+RJf1oaNO+A5TFTo0UFelV9SLeZ5AXqSgVz4cC
         Jk/yDt8aBvVqdS8icNF+0gt+uWFjedbfyYno2YECDy39365X4ientmM6zV8gmpsaAaOc
         VCXw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718702040; x=1719306840;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=z8kss9qGBQgb51Jq9Iz0lX8khDkUj6+Ajw4D7ZRyHpM=;
        b=vO+dwY/kKe/cGVgTisXjobNkQMP5jzTiziQbiKK0R6fIUsQkDUAc7wBkoQhWUwyjgB
         95zVOcyARxt+c19mSgsZVR8gFKNw/gKx3RDYu4opAIp2CXVagcCi90hD2MTWakRF71TR
         iBcJSbssPGWzEi6JJyn6pUODute9LusMp5g/9zWSDlYbd8yMxBXOfrcCWOh08rHKWFtI
         ZGrFheOyUJ85gEAs/rFVYZR8+Uw+uYhknRIu61YKkUFaQNM5NdJtd9xktRM4z0D/TIrD
         WPBh5OHKGObnBmXWifnguuPqetosR6+cWfaAsD0iOHirzLFar9LxzBcrC2PrkJynwhmz
         KSew==
X-Forwarded-Encrypted: i=1; AJvYcCXkTvBYf4bUcJZVvlRaWNSX5vo5Z1BV47nOczh7eb4+mNU32A7ep9KdcJifi2TFUNY1xQ47WcknGhsGw0nVWo+1lxZDJZ0b9ygkmCWhAN0=
X-Gm-Message-State: AOJu0Yzc26Do9rdr41K3im+eZJm3qzTvDK/Q06CBzWBaY6yCIzV3+esV
	FayupMunZ9YXndtnnNOv194ug2RXh6S4oNgNAYwpz1o7YfulZo0pxdllu5fZKA==
X-Google-Smtp-Source: AGHT+IF9koj9b1SfYi+gReJwPgiOMjOZFRPdDCJuI1FNSfDA1eonowDxUU2QlcuquR88GLJhT9H4oQ==
X-Received: by 2002:a17:906:37d9:b0:a68:bfd7:b0f3 with SMTP id a640c23a62f3a-a6f60d2975bmr777644966b.21.1718702040414;
        Tue, 18 Jun 2024 02:14:00 -0700 (PDT)
Message-ID: <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
Date: Tue, 18 Jun 2024 11:13:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.06.2024 10:10, Chen, Jiqian wrote:
> On 2024/6/17 23:10, Jan Beulich wrote:
>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>> --- a/tools/include/xen-sys/Linux/privcmd.h
>>> +++ b/tools/include/xen-sys/Linux/privcmd.h
>>> @@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
>>>  	__u64 addr;
>>>  } privcmd_mmap_resource_t;
>>>  
>>> +typedef struct privcmd_gsi_from_dev {
>>> +	__u32 sbdf;
>>
>> That's PCI-centric, without struct and IOCTL names reflecting this fact.
> So, change to privcmd_gsi_from_pcidev ?

That's what I'd suggest, yes. But remember that it's the kernel maintainers
who have the ultimate say here, as here you're only making a copy of what
the canonical header (in the kernel tree) is going to have.

>>> +	int gsi;
>>
>> Is "int" legitimate to use here? Doesn't this want to similarly be __u32?
> I want to set gsi to a negative value if there is no record of this translation.

There are surely more explicit ways to signal that case?
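To illustrate one such more explicit way: keep the field unsigned and report "no translation" through the call's return value rather than a negative gsi. The sketch below is purely illustrative; gsi_from_pcidev() and the error value are made-up stand-ins, not the actual privcmd interface.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative sketch only: instead of making "gsi" signed so that a
 * negative value can mean "no record", keep the field __u32-like and
 * signal the missing translation via the return code (the way an ioctl
 * would return -ENOENT).  gsi_from_pcidev() is a hypothetical name.
 */
#define NO_GSI_MAPPING (-2)  /* stand-in for -ENOENT */

static int gsi_from_pcidev(uint32_t sbdf, uint32_t *gsi)
{
    /* Toy lookup: pretend only sbdf 0x0300 has a recorded GSI. */
    if (sbdf == 0x0300) {
        *gsi = 28;
        return 0;
    }
    return NO_GSI_MAPPING;   /* no record: reported out of band */
}
```

This keeps all 32 bits of the gsi field available for GSI numbers while still letting callers distinguish the "no record" case.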

>>> --- a/tools/libs/light/libxl_pci.c
>>> +++ b/tools/libs/light/libxl_pci.c
>>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>>  #endif
>>>  }
>>>  
>>> +#define PCI_DEVID(bus, devfn)\
>>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>>> +
>>> +#define PCI_SBDF(seg, bus, devfn) \
>>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>
>> I'm not a maintainer of this file; if I were, I'd ask that for readability's
>> sake all excess parentheses be dropped from these.
> Isn't it a coding requirement to enclose each macro parameter in parentheses in the macro definition?
> It seems other files also do this. See tools/libs/light/libxl_internal.h

As said, I'm not a maintainer of this code. Yet while I'm aware that libxl
has its own CODING_STYLE, I can't spot anything in it about excessive use of
parentheses.
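For comparison, here is a sketch of what the two macros from the patch look like with only the parentheses that operator precedence actually requires. Behaviour is unchanged; this only illustrates the readability point, and is not a replacement patch.

```c
#include <assert.h>
#include <stdint.h>

/*
 * The PCI_DEVID/PCI_SBDF macros with redundant parentheses dropped.
 * A cast binds tighter than <<, and << binds tighter than |, so the
 * expressions still group as ((bus << 8) | (devfn & 0xff)) and
 * ((seg << 16) | devid).  Each macro argument that feeds an operator
 * keeps its own parentheses, as usual macro hygiene requires.
 */
#define PCI_DEVID(bus, devfn) \
            ((uint16_t)(bus) << 8 | ((devfn) & 0xff))

#define PCI_SBDF(seg, bus, devfn) \
            ((uint32_t)(seg) << 16 | PCI_DEVID(bus, devfn))
```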

>>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>          goto out_no_irq;
>>>      }
>>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>>> +#ifdef CONFIG_X86
>>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>> +        /*
>>> +         * Old kernel version may not support this function,
>>
>> Just kernel?
> Yes, xc_physdev_gsi_from_dev depends on the corresponding function being implemented on the Linux kernel side.

Okay, and when the kernel supports it but the underlying hypervisor doesn't
support what the kernel wants to use in order to fulfill the request, all
is fine? (See also below for what may be needed in the hypervisor, even if
this IOCTL would be satisfied by the kernel without needing to interact with
the hypervisor.)

>>> +         * so if fail, keep using irq; if success, use gsi
>>> +         */
>>> +        if (gsi > 0) {
>>> +            irq = gsi;
>>
>> I'm still puzzled by this, when by now I think we've sufficiently clarified
>> that IRQs and GSIs use two distinct numbering spaces.
>>
>> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
>> the assumption that it'll fail. What if we decide to make the functionality
>> available there, too (if only for informational purposes, or for
>> consistency)? Suddenly your fallback logic wouldn't work anymore, and
>> you'd call ...
>>
>>> +        }
>>> +#endif
>>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>
>> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
>> you strictly want to avoid the call on PV Dom0.
>>
>> Also for PVH Dom0: I don't think I've seen changes to the hypercall
>> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
>> incoming GSI would need translating to IRQ somewhere? I can once again only
>> assume all your testing was done with IRQs whose numbers happened to match
>> their GSI numbers. (The difference, imo, would also need calling out in the
>> public header, where the respective interface struct(s) is/are defined.)
> I feel like you missed out on many of the previous discussions.
> Without my changes, the original code uses the irq (read from /sys/bus/pci/devices/<sbdf>/irq) to call xc_physdev_map_pirq,
> but xc_physdev_map_pirq requires a gsi rather than an irq, so we need to use the gsi whether dom0 is PV or PVH; the original code is therefore wrong.
> It is only by chance that the irq value in the Linux kernel of PV dom0 equals the gsi value, which is why the original PV passthrough happened to work.
> That does not hold under PVH, so passthrough failed there.
> With my changes, I first obtain the gsi through xc_physdev_gsi_from_dev. To stay compatible with old kernel versions (where the ioctl
> IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented), I still need to fall back to the irq number, so I check the result:
> if gsi > 0, IOCTL_PRIVCMD_GSI_FROM_DEV is implemented and I can use the right value (the gsi); otherwise I keep using the wrong one (the irq).

I understand all of this, to a (I think) sufficient degree at least. Yet what
you're effectively proposing (without explicitly saying so) is that e.g.
struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
to make (simply because the caller may have no way to express things in pIRQ
terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
would be necessary. In fact that field is presently marked as "IN or OUT";
when re-purposed to take a GSI on input, it may end up being necessary to pass
back the pIRQ (in the subject domain's numbering space). Or alternatively it
may be necessary to add yet another sub-function so the GSI can be translated
to the corresponding pIRQ for the domain that's going to use the IRQ, for that
then to be passed into PHYSDEVOP_map_pirq.

> As for the implementation of the ioctl IOCTL_PRIVCMD_GSI_FROM_DEV, it doesn't involve any Xen hypercall handling changes; all of its processing logic is on the kernel side.
> I know you might want to say, "Then you shouldn't put this in Xen's code." But this concern was discussed in previous versions, and since the PCI maintainer disallowed adding
> a gsi sysfs attribute on the Linux kernel side, I had to do it this way.

Right, but this is a separate aspect (which we simply need to live with on
the Xen side).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 09:23:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 09:23:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742902.1149776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJV3Y-0007HB-1K; Tue, 18 Jun 2024 09:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742902.1149776; Tue, 18 Jun 2024 09:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJV3X-0007H4-Uv; Tue, 18 Jun 2024 09:23:11 +0000
Received: by outflank-mailman (input) for mailman id 742902;
 Tue, 18 Jun 2024 09:23:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJV3X-0007Gy-0O
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 09:23:11 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 60123bef-2d54-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 11:23:09 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-57c83100c5fso5652297a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 02:23:08 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f63f7e9d8sm492780866b.182.2024.06.18.02.23.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 02:23:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 60123bef-2d54-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718702588; x=1719307388; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=X2cOGxP+OyaxgX906gQx67pblQNDp8zrUXWHTlRxoMk=;
        b=gPQXa9wq81PW27PVQRsSxpAZEpRUTuDXQ3kqxW6i9gXYdxkykTkK3iCQdXQSefyLrw
         985PwbH8uGpFhqIi7Qt8I8elHHxhNaMxkf9i0footZcXGK7FRUaaWcNKnkycVhvh2jMo
         L+bRjOIeRila8qx+Xtkfv0veuGu0tGrqDmEifdErjFN6ZjOEz1D3IJgODR+nppgqENNc
         Sk2eOG86O9Rr39wG4/6FChVQrceD28XmdxadWdQPaaz35KzT4lqm6l3OuolIByvkXiBT
         oT+j95BOMJY7Q90+qsxk99+CZgSmZhY4DyXBOoW3KIw/cxb82f9DGxq08Zl2t0JDKemo
         x5Xg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718702588; x=1719307388;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=X2cOGxP+OyaxgX906gQx67pblQNDp8zrUXWHTlRxoMk=;
        b=wLxxbtSi64Oek448wmpOVUL+LdLOBhoDzmu7HNWlh5P8HEI/vCFp4dC++SSKDloPBz
         mOOiF6UgVPvwuGkkW/XIEoFzqDsVE1TKtfjWNr5SQR4uAvvr1IAq7vnpwU+QmHLUERmy
         dKkRNbiK5MYKfJngQsEYZ366RzjKaBcC5tp7D+SyUVakGfEJNG3t+WBmlo+vDYHn6pHn
         Ee95V/sQcSZqTTpmnCsrihpXmYB7SOUDqjW5zVDvhMZ5F/ECHgmpc4RJD5/gy3f8IaKp
         DipH/Aky1JKHQTW1NRF2MlU7pyRTE1aeVve+laHrM7Eo1wCDx04cjxiuaY9fiP5Spi1C
         N98g==
X-Forwarded-Encrypted: i=1; AJvYcCVP1W66XkgyuC4DnJwKfmHwixul2aasGI+YhG3GR1vovPwNxVSt37OTX71hQTAwTVbQuSd3vpx2uz03WxkL9nkx4gFvsyU4S3cRdWwQwFc=
X-Gm-Message-State: AOJu0YyH7dG8wLIp2pi+Wu5/oMAgKK1DXLbfzp2aUcy+vbbDHfyXObJ6
	53U8yyxs13SuXv+FgtG9PXGXUABXEkPw+5t317pWBnhk+Uah9YpE/C2JwJWgFQ==
X-Google-Smtp-Source: AGHT+IGNSpRQzR/k4AoPEJne+cDcAACxwW6b/CiQ/k1ex8Yt/fwQx+LLgfytepi24DUW4cYisMRmhQ==
X-Received: by 2002:a17:906:4883:b0:a6f:13f0:c8e9 with SMTP id a640c23a62f3a-a6f60dc54acmr733896766b.53.1718702587876;
        Tue, 18 Jun 2024 02:23:07 -0700 (PDT)
Message-ID: <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
Date: Tue, 18 Jun 2024 11:23:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.06.2024 10:23, Chen, Jiqian wrote:
> On 2024/6/17 23:32, Jan Beulich wrote:
>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>              rc = ERROR_FAIL;
>>>              goto out;
>>>          }
>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
>>> +#ifdef CONFIG_X86
>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>>
>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF, but
>> I didn't check if that can be used with the underlying hypercall(s). Otherwise
>> you want to pass the actual domid of the local domain here.
> But the act of granting permission goes from dom0 to domU; what I need to get is the information about dom0.
> The actual domid here is the domU's id, I think, so it is not useful.

Note how I said DOMID_SELF and "local domain". There's no talk of using the
DomU's domid. But what you apparently neglect is the fact that the hardware
domain isn't necessarily Dom0 (see CONFIG_LATE_HWDOM in the hypervisor).
While benign in most cases, this is relevant when it comes to referencing
the hardware domain by domid. And it is the hardware domain which is going
to drive the device re-assignment, as that domain is the one in possession of
all the devices not yet assigned to any DomU.

>>> @@ -237,6 +238,48 @@ long arch_do_domctl(
>>>          break;
>>>      }
>>>  
>>> +    case XEN_DOMCTL_gsi_permission:
>>> +    {
>>> +        unsigned int gsi = domctl->u.gsi_permission.gsi;
>>> +        int irq;
>>> +        bool allow = domctl->u.gsi_permission.allow_access;
>>
>> See my earlier comments on this conversion of 8 bits into just one.
> Do you mean that I need to check that allow_access is >= 0?
> But allow_access is a u8; it can't be negative.

Right. What I can only re-iterate from my earlier comments is that you
want to check for 0 or 1 (which can be viewed as looking at just the low bit),
rejecting everything else. Only this way could we, down the road,
assign meaning to the other bits without risking breaking existing
callers. That's the same as the requirement to check that padding fields
are zero.
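A minimal sketch of the check being asked for (the error value and function name are illustrative stand-ins; the real domctl handler would use its own error convention):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: accept only 0 or 1 in the 8-bit allow_access field and reject
 * everything else, so the remaining seven bits stay reserved and can be
 * assigned meaning later without breaking existing callers.
 */
static int check_allow_access(uint8_t allow_access)
{
    if (allow_access & ~1u)   /* any bit other than bit 0 set? */
        return -22;           /* stand-in for -EINVAL */
    return allow_access;      /* 0 = revoke, 1 = grant */
}
```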

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 09:42:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 09:42:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742910.1149786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVMV-0002bd-IJ; Tue, 18 Jun 2024 09:42:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742910.1149786; Tue, 18 Jun 2024 09:42:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVMV-0002bW-FW; Tue, 18 Jun 2024 09:42:47 +0000
Received: by outflank-mailman (input) for mailman id 742910;
 Tue, 18 Jun 2024 09:42:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x1Qc=NU=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1sJVMU-0002bQ-6c
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 09:42:46 +0000
Received: from mail-ot1-x335.google.com (mail-ot1-x335.google.com
 [2607:f8b0:4864:20::335])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c6c19f9-2d57-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 11:42:44 +0200 (CEST)
Received: by mail-ot1-x335.google.com with SMTP id
 46e09a7af769-6fb840d8ffdso2493323a34.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 02:42:44 -0700 (PDT)
Received: from localhost ([122.172.82.13]) by smtp.gmail.com with ESMTPSA id
 41be03b00d2f7-6fedf0bef16sm7753098a12.43.2024.06.18.02.42.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 02:42:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c6c19f9-2d57-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718703763; x=1719308563; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=8UALesfbksrHHNtNwGYa4E+DNFkRXivhYiDbx6+KbJc=;
        b=S+74Pnur6hpPfzhn/qtKmLoZ3J1fVWoLtYJQkVpqpm8Rsz+XPBfu/OiEHwUt81k307
         nrIWbLVS9z43ZtMP8KPs0fn3a8/dZQsxRZHFxqcbEhSZ1PpMod4j+dqN5BhSFRuppqUu
         qyCF+v9maudBj38cK1xCcNOXZc2jxb4/7uCfJ7fTF7IIbYarQOSELxqg8BJgwEyvv29a
         JEvl0pUgEtnn0GdRGE3bYEXxZX5SqQYQEkMNbtZWKQD3SSIpoaXGI+CM9W4Q7ujOw5wl
         6Rel2CVqsrQYkhESv5kjmhbHIsB15CopEK1xYttYptzOmmE9fVRUP3N4V/1Vm4KLa1Tq
         NO+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718703763; x=1719308563;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=8UALesfbksrHHNtNwGYa4E+DNFkRXivhYiDbx6+KbJc=;
        b=UALrsZawBzkdHSuxS+TP/mtYkcV0fei8N7BbkKZMthClE8ULJ7+JgJrQc9kJ8maUEz
         PNbZ9FDdqEa5YwHgwa2gYnG2G51A2tAOkRCCG6vPbyhC34inDsuQHe9si88JBnclPhvK
         GkLLXAqObPCz90HlY2/N5tWHMN5Qq4AEQxBJX7x9ITAliRZ46y0DTin71SheaaZPOnWJ
         7U5wi7BK0y3lAsUaF2loGdRktgok26cnYtMvi1218uesFYg8vEeJmD9DG2iAdhHtvR77
         mBXZOJFAshFUzv/491AAVHvpuSiVSjL2E78RWe21morCPikomIRXuZzkFdR5y5PZJtRZ
         buMQ==
X-Forwarded-Encrypted: i=1; AJvYcCW3byWCaxn3dVyGkdu4ofaKYkEYGKSuNoEDrZ6ikYUMNLMUVUFWdJTg8EmnqzKqkBKrsVCqxB7Gt3wpYV9rC/4pQmMHP07vyvcO5A7sMpY=
X-Gm-Message-State: AOJu0Yx2Gpe4gbU3hCCaNa4Y0ZSE5ia4qjg6AxbO/YG22VgfPVX0aNr2
	Q36lXSnQLQRxHwtNL3+/tYlJYJq/wtFcPeE5DER4n3E+txvwMF/CtvLshLnY/GQ=
X-Google-Smtp-Source: AGHT+IHduV0HGFrVmQSxIvryeJgCIEd+h/riwq126F2u6x4F+za92Uiu45uPCTH4ThXJnQkPLkaziw==
X-Received: by 2002:a9d:4d91:0:b0:6fa:81b:d4f8 with SMTP id 46e09a7af769-6fb9364ecd9mr12749877a34.1.1718703762611;
        Tue, 18 Jun 2024 02:42:42 -0700 (PDT)
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Manos Pitsidianakis <manos.pitsidianakis@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/2] xen: privcmd: Switch from mutex to spinlock for irqfds
Date: Tue, 18 Jun 2024 15:12:28 +0530
Message-Id: <a66d7a7a9001424d432f52a9fc3931a1f345464f.1718703669.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.31.1.272.g89b43f80a514
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

irqfd_wakeup() receives EPOLLHUP when it is called from
eventfd_release() by way of wake_up_poll(&ctx->wqh, EPOLLHUP), which
runs under spin_lock_irqsave(). We can't take a mutex in that context,
as doing so would lead to a deadlock.

Fix it by switching over to a spin lock.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/xen/privcmd.c | 26 +++++++++++++++-----------
 1 file changed, 15 insertions(+), 11 deletions(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 67dfa4778864..5ceb6c56cf3e 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -13,7 +13,6 @@
 #include <linux/file.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
-#include <linux/mutex.h>
 #include <linux/poll.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
@@ -845,7 +844,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
 #ifdef CONFIG_XEN_PRIVCMD_EVENTFD
 /* Irqfd support */
 static struct workqueue_struct *irqfd_cleanup_wq;
-static DEFINE_MUTEX(irqfds_lock);
+static DEFINE_SPINLOCK(irqfds_lock);
 static LIST_HEAD(irqfds_list);
 
 struct privcmd_kernel_irqfd {
@@ -909,9 +908,11 @@ irqfd_wakeup(wait_queue_entry_t *wait, unsigned int mode, int sync, void *key)
 		irqfd_inject(kirqfd);
 
 	if (flags & EPOLLHUP) {
-		mutex_lock(&irqfds_lock);
+		unsigned long flags;
+
+		spin_lock_irqsave(&irqfds_lock, flags);
 		irqfd_deactivate(kirqfd);
-		mutex_unlock(&irqfds_lock);
+		spin_unlock_irqrestore(&irqfds_lock, flags);
 	}
 
 	return 0;
@@ -929,6 +930,7 @@ irqfd_poll_func(struct file *file, wait_queue_head_t *wqh, poll_table *pt)
 static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
 {
 	struct privcmd_kernel_irqfd *kirqfd, *tmp;
+	unsigned long flags;
 	__poll_t events;
 	struct fd f;
 	void *dm_op;
@@ -968,18 +970,18 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
 	init_waitqueue_func_entry(&kirqfd->wait, irqfd_wakeup);
 	init_poll_funcptr(&kirqfd->pt, irqfd_poll_func);
 
-	mutex_lock(&irqfds_lock);
+	spin_lock_irqsave(&irqfds_lock, flags);
 
 	list_for_each_entry(tmp, &irqfds_list, list) {
 		if (kirqfd->eventfd == tmp->eventfd) {
 			ret = -EBUSY;
-			mutex_unlock(&irqfds_lock);
+			spin_unlock_irqrestore(&irqfds_lock, flags);
 			goto error_eventfd;
 		}
 	}
 
 	list_add_tail(&kirqfd->list, &irqfds_list);
-	mutex_unlock(&irqfds_lock);
+	spin_unlock_irqrestore(&irqfds_lock, flags);
 
 	/*
 	 * Check if there was an event already pending on the eventfd before we
@@ -1011,12 +1013,13 @@ static int privcmd_irqfd_deassign(struct privcmd_irqfd *irqfd)
 {
 	struct privcmd_kernel_irqfd *kirqfd;
 	struct eventfd_ctx *eventfd;
+	unsigned long flags;
 
 	eventfd = eventfd_ctx_fdget(irqfd->fd);
 	if (IS_ERR(eventfd))
 		return PTR_ERR(eventfd);
 
-	mutex_lock(&irqfds_lock);
+	spin_lock_irqsave(&irqfds_lock, flags);
 
 	list_for_each_entry(kirqfd, &irqfds_list, list) {
 		if (kirqfd->eventfd == eventfd) {
@@ -1025,7 +1028,7 @@ static int privcmd_irqfd_deassign(struct privcmd_irqfd *irqfd)
 		}
 	}
 
-	mutex_unlock(&irqfds_lock);
+	spin_unlock_irqrestore(&irqfds_lock, flags);
 
 	eventfd_ctx_put(eventfd);
 
@@ -1073,13 +1076,14 @@ static int privcmd_irqfd_init(void)
 static void privcmd_irqfd_exit(void)
 {
 	struct privcmd_kernel_irqfd *kirqfd, *tmp;
+	unsigned long flags;
 
-	mutex_lock(&irqfds_lock);
+	spin_lock_irqsave(&irqfds_lock, flags);
 
 	list_for_each_entry_safe(kirqfd, tmp, &irqfds_list, list)
 		irqfd_deactivate(kirqfd);
 
-	mutex_unlock(&irqfds_lock);
+	spin_unlock_irqrestore(&irqfds_lock, flags);
 
 	destroy_workqueue(irqfd_cleanup_wq);
 }
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 09:42:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 09:42:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742911.1149797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVMY-0002qH-Sk; Tue, 18 Jun 2024 09:42:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742911.1149797; Tue, 18 Jun 2024 09:42:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVMY-0002qA-Pf; Tue, 18 Jun 2024 09:42:50 +0000
Received: by outflank-mailman (input) for mailman id 742911;
 Tue, 18 Jun 2024 09:42:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x1Qc=NU=linaro.org=viresh.kumar@srs-se1.protection.inumbo.net>)
 id 1sJVMX-0002bQ-HV
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 09:42:49 +0000
Received: from mail-pf1-x429.google.com (mail-pf1-x429.google.com
 [2607:f8b0:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1e94caf9-2d57-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 11:42:47 +0200 (CEST)
Received: by mail-pf1-x429.google.com with SMTP id
 d2e1a72fcca58-705fff50de2so1318800b3a.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 02:42:47 -0700 (PDT)
Received: from localhost ([122.172.82.13]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705ccb92c89sm8582018b3a.213.2024.06.18.02.42.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 02:42:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e94caf9-2d57-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718703766; x=1719308566; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jSBvd6UdPXxQAYikzAX9WnO+l3sKpqabC3P6H43CjNw=;
        b=RLTRwGd40dE3Q8y3kuTkx25H0711nLMlpxztcrfr3F/bbJSK0Eb8yjXunbbbAqb8ZU
         lv6b6SFthbg7ESD2PnYUO8WSVylKYT3HdgdGkuGZ09AZUMOFw3p8tvzEU7skT7pW1dK1
         CjdQR9YaboaFXq0DqA7v6zldvfB4o8ddgr0dGshCsBSFzlMzMtSxpHfJdHPiDlAdRKKY
         cZMFTBf9Kf8Tu+IX/bAC3K7ADNFqS3ypPTlk9qQ8i1f+ILbMfjIutsrv/OUWxQuJPSlY
         eEeMO//O7fCeDAfRwzCjQsPgcHmTFEKnrn3jLkeUPKWSxqZESYxMOr+rD4izooWFeAt3
         zZ0g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718703766; x=1719308566;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jSBvd6UdPXxQAYikzAX9WnO+l3sKpqabC3P6H43CjNw=;
        b=q93f5cRtSMTOknJjqiM3w9Tu+9eXNTS/zyW+esjyCiZKPEoart6OBrbbzbndTzFJRO
         yevkSX3ZK9oJebt1B1awEfYXpW1k28lfRLbXnLqLnKc2ccwCNIC8h1PJwLfn3w230aYg
         9raEwBqKXk/3ialWKEUsvQIJF+EQCtC3zHVAhPRsTB9Qf20l/e/Wq+bYX/bQTuSdvoqi
         41LN8dtFb7YHwUMtKnQhTHYUlZHPEodKXZyj/7HfHzGbuJ6DkdIx4iHoT3S8Rf/4WRRP
         K1Pyj47p+PE8cWt8xdbibmEpojkdxbXpb2PZ0fbD5AQ2p3WoD8SlV8XeZCPQWNYWFhSW
         zPvw==
X-Forwarded-Encrypted: i=1; AJvYcCX/IimIETHjuQeQCf2mIX5FqrPT+EzRaZ4Dh+Xa0yiAqFGFqIPYenriA8RA6wy3HZVPmgsf3XSxJ3/S0vy8KMSayTarE4cJxa5fpSbEnIA=
X-Gm-Message-State: AOJu0Ywz5H/QwOsCBIwVuR9afmEaCOkt2IW1xPHtkk33aiOfkn/uNMj+
	QZ2YYFRzzK9J3z2IN1mt6JcNO2zsi1bQucNiQ3az3eYhNbEhkVGDzMqBnX3a6jA=
X-Google-Smtp-Source: AGHT+IHcl/2RoXVQ+71aHNYVjn3NoTwhv2qPyWAbcWs0ParO44xZQCCVzutKlPjbF2/gE9SyhHgTKQ==
X-Received: by 2002:a05:6a00:4b47:b0:706:1bb3:fb1d with SMTP id d2e1a72fcca58-7061bb3fc6amr3055052b3a.7.1718703766091;
        Tue, 18 Jun 2024 02:42:46 -0700 (PDT)
From: Viresh Kumar <viresh.kumar@linaro.org>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Viresh Kumar <viresh.kumar@linaro.org>,
	Vincent Guittot <vincent.guittot@linaro.org>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Manos Pitsidianakis <manos.pitsidianakis@linaro.org>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Al Viro <viro@zeniv.linux.org.uk>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] xen: privcmd: Fix possible access to a freed kirqfd instance
Date: Tue, 18 Jun 2024 15:12:29 +0530
Message-Id: <9e884af1f1f842eacbb7afc5672c8feb4dea7f3f.1718703669.git.viresh.kumar@linaro.org>
X-Mailer: git-send-email 2.31.1.272.g89b43f80a514
In-Reply-To: <a66d7a7a9001424d432f52a9fc3931a1f345464f.1718703669.git.viresh.kumar@linaro.org>
References: <a66d7a7a9001424d432f52a9fc3931a1f345464f.1718703669.git.viresh.kumar@linaro.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Nothing prevents simultaneous ioctl calls to privcmd_irqfd_assign() and
privcmd_irqfd_deassign(). If that happens, a kirqfd created and added to
irqfds_list by privcmd_irqfd_assign() may be removed by another thread
executing privcmd_irqfd_deassign() while the former is still using it
after dropping the locks.

This can lead to an already-freed kirqfd instance being accessed,
causing a kernel oops.

Use SRCU locking to prevent this, as is done in the KVM irqfd
implementation.

Reported-by: Al Viro <viro@zeniv.linux.org.uk>
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org>
---
 drivers/xen/privcmd.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index 5ceb6c56cf3e..041774750e52 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -16,6 +16,7 @@
 #include <linux/poll.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/srcu.h>
 #include <linux/string.h>
 #include <linux/workqueue.h>
 #include <linux/errno.h>
@@ -845,6 +846,7 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
 /* Irqfd support */
 static struct workqueue_struct *irqfd_cleanup_wq;
 static DEFINE_SPINLOCK(irqfds_lock);
+DEFINE_STATIC_SRCU(irqfds_srcu);
 static LIST_HEAD(irqfds_list);
 
 struct privcmd_kernel_irqfd {
@@ -872,6 +874,9 @@ static void irqfd_shutdown(struct work_struct *work)
 		container_of(work, struct privcmd_kernel_irqfd, shutdown);
 	u64 cnt;
 
+	/* Make sure irqfd has been initialized in assign path */
+	synchronize_srcu(&irqfds_srcu);
+
 	eventfd_ctx_remove_wait_queue(kirqfd->eventfd, &kirqfd->wait, &cnt);
 	eventfd_ctx_put(kirqfd->eventfd);
 	kfree(kirqfd);
@@ -934,7 +939,7 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
 	__poll_t events;
 	struct fd f;
 	void *dm_op;
-	int ret;
+	int ret, idx;
 
 	kirqfd = kzalloc(sizeof(*kirqfd) + irqfd->size, GFP_KERNEL);
 	if (!kirqfd)
@@ -980,6 +985,7 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
 		}
 	}
 
+	idx = srcu_read_lock(&irqfds_srcu);
 	list_add_tail(&kirqfd->list, &irqfds_list);
 	spin_unlock_irqrestore(&irqfds_lock, flags);
 
@@ -991,6 +997,8 @@ static int privcmd_irqfd_assign(struct privcmd_irqfd *irqfd)
 	if (events & EPOLLIN)
 		irqfd_inject(kirqfd);
 
+	srcu_read_unlock(&irqfds_srcu, idx);
+
 	/*
 	 * Do not drop the file until the kirqfd is fully initialized, otherwise
 	 * we might race against the EPOLLHUP.
-- 
2.31.1.272.g89b43f80a514



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 09:45:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 09:45:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742926.1149807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVOe-0003iM-7K; Tue, 18 Jun 2024 09:45:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742926.1149807; Tue, 18 Jun 2024 09:45:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVOe-0003iF-4B; Tue, 18 Jun 2024 09:45:00 +0000
Received: by outflank-mailman (input) for mailman id 742926;
 Tue, 18 Jun 2024 09:44:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qMuf=NU=bounce.vates.tech=bounce-md_30504962.66715718.v1-20c3e0d591fb42a98919864b1479226f@srs-se1.protection.inumbo.net>)
 id 1sJVOc-0003i1-Uj
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 09:44:59 +0000
Received: from mail177-18.suw61.mandrillapp.com
 (mail177-18.suw61.mandrillapp.com [198.2.177.18])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6be7a28b-2d57-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 11:44:57 +0200 (CEST)
Received: from pmta14.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail177-18.suw61.mandrillapp.com (Mailchimp) with ESMTP id
 4W3ML01GbXzCf9KCN
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 09:44:56 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 20c3e0d591fb42a98919864b1479226f; Tue, 18 Jun 2024 09:44:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6be7a28b-2d57-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718703896; x=1718964396;
	bh=Ogh1Twxwluua6JbFcStBvILAn7Dr8nm1RzNhVradvDc=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=eW2IGoW2cgOMx5cb94n1/+CrXR9xjb36LjglpP5rXG0PgM/t21INLIeJviwt4moFD
	 GKQWrJy9k/cx16MrnzN8ZiNsizGVJhjzBYpU3uLCbbTwmCZ/zzskbxr07uQncG1QPr
	 8Q3WVsrgagsmNvVND71WCQuF+Jx0JqxZOi3YV8O/rVsWM8N9oIppkO4afJqEfyLmwF
	 JmIqOpAG7yJJOVaXohoT7C//JYIWiNeSfqwTvlYgs+8suCzwD8+UUAO2CiJqTIuxFz
	 tEKW/XUu6tc7s3qZVOGADmz56U2X7vjoNso2cuUW9l11BuFbWZ+mUkPrCNF0NPmEIe
	 C2cybaBmR38Fg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718703896; x=1718964396; i=anthony.perard@vates.tech;
	bh=Ogh1Twxwluua6JbFcStBvILAn7Dr8nm1RzNhVradvDc=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=FZcTzsJ/MY1N3/2UOn9UK50g0TK+V6aBZu7n/ouxl+iWk0g86RhHWh8iYq2yQ6xPd
	 7A5BZEYHlbEC5tIiw0uuSVk+O8WIUZWpThYEEkMOx+t2G/VudnZISGysZrMp26Yn2h
	 11SMMn0nZErrHL41vNlt7MQaNzPOW6Zf561NB+YzjysCkRzri7MW70CP1NbtsiNM0D
	 UCUas1kvqpsumiFaEhZW/MVSUJzp7XS0GxQMqvHLc4nK+uHBszbdQzlXHqw/BtFGI7
	 37yLTZvi6E2p0lRse5uKBHE7KwITmg8+5JscYKu+qRTpROHjpJ02kOXQ+AkR/8wEzf
	 Q68IlkNw/hGMA==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[OSSTEST=20PATCH]=20preseed=5Fbase:=20Use=20"keep"=20NIC=20NamePolicy=20when=20"force-mac-address"?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718703894789
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org, Luca Fancellu <luca.fancellu@arm.com>, Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Message-Id: <ZnFXFjeakYBmBHSB@l14>
References: <20240617144051.29547-1-anthony@xenproject.org> <a65a83be-1236-4699-8124-c0bd809c4b97@citrix.com>
In-Reply-To: <a65a83be-1236-4699-8124-c0bd809c4b97@citrix.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.20c3e0d591fb42a98919864b1479226f?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240618:md
Date: Tue, 18 Jun 2024 09:44:56 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Mon, Jun 17, 2024 at 04:34:09PM +0100, Andrew Cooper wrote:
> On 17/06/2024 3:40 pm, Anthony PERARD wrote:
> > diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
> > index 3545f3fd..d974fea5 100644
> > --- a/Osstest/Debian.pm
> > +++ b/Osstest/Debian.pm
> > @@ -972,7 +972,19 @@ END
> >          # is going to be added to dom0's initrd, which is used by some guests
> >          # (created with ts-debian-install).
> >          preseed_hook_installscript($ho, $sfx,
> > -            '/usr/lib/base-installer.d/', '05ifnamepolicy', <<'END');
> > +            '/usr/lib/base-installer.d/', '05ifnamepolicy',
> > +            $ho->{Flags}{'force-mac-address'} ? <<'END' : <<'END');
> 
> The conditional looks suspicious if both options are <<'END'.

That works fine; this pattern is already used in a few places in osstest,
like here:
https://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=ts-host-install;h=0b6aaeeae228551064618abfa624321992a2eb2d;hb=HEAD#l240
    >  $ho->{Flags}{'force-mac-address'} ? <<END : <<END);

Or even here:
https://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=ts-xen-build;h=c294a51eafc26e53b5417529b943224902870acf;hb=HEAD#l173
    > buildcmd_stamped_logged(600, 'xen', 'configure', <<END,<<END,<<END);

> Doesn't this just write 70-eth-keep-policy.link unconditionally?

I've checked that on a different host: the "mac" name policy is used
as expected, so the file "70-eth-keep-policy.link" isn't created on that
host.

Cheers,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 09:55:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 09:55:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742935.1149817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVYp-00062h-5P; Tue, 18 Jun 2024 09:55:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742935.1149817; Tue, 18 Jun 2024 09:55:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVYp-00062a-1V; Tue, 18 Jun 2024 09:55:31 +0000
Received: by outflank-mailman (input) for mailman id 742935;
 Tue, 18 Jun 2024 09:55:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=veLi=NU=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sJVYn-00062U-LB
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 09:55:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e3a454af-2d58-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 11:55:27 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id 711AA4EE0738;
 Tue, 18 Jun 2024 11:55:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3a454af-2d58-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] automation/eclair_analysis: deviate common/unlzo.c for MISRA Rule 7.3
Date: Tue, 18 Jun 2024 11:54:49 +0200
Message-Id: <20342a68627d5fe7c85c50f64e9300e9a587974b.1718704260.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The file contains violations of Rule 7.3, which states the following: The
lowercase character `l' shall not be used in a literal suffix.

This file defines a non-compliant constant used in a macro expanded in a
non-excluded file, so this deviation is needed in order to avoid
a spurious violation involving both files.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661..90413c5458 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -139,6 +139,12 @@ particular use of it done in xen_mk_ulong."
 -config=MC3R1.R7.2,reports+={deliberate,"any_area(any_loc(macro(name(BUILD_BUG_ON))))"}
 -doc_end
 
+-doc_begin="Violations in files that maintainers have asked to not modify in the
+context of R7.3."
+-file_tag+={adopted_r7_3,"^xen/common/unlzo\\.c$"}
+-config=MC3R1.R7.3,reports+={deliberate,"any_area(any_loc(file(adopted_r7_3)))"}
+-doc_end
+
 -doc_begin="Allow pointers of non-character type as long as the pointee is
 const-qualified."
 -config=MC3R1.R7.4,same_pointee=false
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 10:06:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 10:06:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742945.1149826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVix-0008HM-18; Tue, 18 Jun 2024 10:05:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742945.1149826; Tue, 18 Jun 2024 10:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJViw-0008HF-US; Tue, 18 Jun 2024 10:05:58 +0000
Received: by outflank-mailman (input) for mailman id 742945;
 Tue, 18 Jun 2024 10:05:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJViv-0008H9-4V
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 10:05:57 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a26e3c0-2d5a-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 12:05:55 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-57c8353d8d0so6988497a12.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 03:05:55 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cbdfe1428sm6474302a12.27.2024.06.18.03.05.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 03:05:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a26e3c0-2d5a-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718705155; x=1719309955; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=nEQwruvM+KC5haEB6+QlUQopBtHXFRwc47OmNmDxcnM=;
        b=ZVWeyygKBFfQO9/8OS8GKt07VubUc/AX/5PY0v91gDvXheW8jwQw2YCncQrgWKLoSL
         X3d6xY9XRgYBQB24aVciT8AZZrPOCR35l/hV4KJ5v2TnmXm7AcpfIArA6lTJLjA12kTA
         1a3jivCcRM4yVTK5tz1mIg5rOhx2/i19Qvj+t3ojo2rqlOoG1mMDRxHdVij2l3b1zhNr
         btFiIZZxU+qF5/U+JaFI5UhwITDreRpRm9LKBDP7UJZtPgWLc+UWzekxtVSfGXHbQIxs
         o1Q2wl5e4tZAaqvmAC+wBFLMHMxsyGCEM6jr67w3e6e2Zzxt9kehL1H1oqpg7rd5qrBy
         1LBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718705155; x=1719309955;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nEQwruvM+KC5haEB6+QlUQopBtHXFRwc47OmNmDxcnM=;
        b=JQic4iCext6zb1lj63wjuCeoUgnZhQYNXJccNK9a4qe2YD5g3l431yf1B2j5AksDy6
         jLpYm5X1Cs85aKZsfmabRaXqXu1fL7VASUHvTWAOpFMaF6IS1PeVQY85OyKa9AH7cDSB
         d+PMwlIvNPeMxgndEvlbpD/CssVMBmZlkVezKvk8FAyAa5buyL4NnrbNPFjRiFFF3YeT
         IgFaE4pTUsMzJjeo4UvBgrkIE7EDFszZnJUb9t22plUt/yhO1bo8lC2AwlAMF+CzE59/
         +ndiX/m167qhhUI52xdO2BoHFf8KAiD7Vu7F/Plm8C9RI2IdsNA2d9wez9ni/FPATvT3
         OHKg==
X-Forwarded-Encrypted: i=1; AJvYcCV5Xy6YfwpFqaSsMEvCEOyMNCXNB3DpoMs+Vsr/q3CX74Hh9EID3l6keOJTW3rT3qWBgadjju8s52pupMVGOrb8NDFr+0y/FzuJPiE8rMA=
X-Gm-Message-State: AOJu0YzFllOCnhoddiLxmlrpNjNIMo0rMNX5WYXwyLDRaZ7bt93J/BBi
	l+7DQomOa8LA8VCCyDGtDwGAzj4p9798YX8UVpJ3/bNMgFUYnbxp1kJxCe9XdA==
X-Google-Smtp-Source: AGHT+IFX2oXBn9E+GQlj5WaLL56zRiqX9OyxdXPifbxESZfOKBZeVkpJ0qsKF8kBjprtR3n49csNYw==
X-Received: by 2002:a50:9b04:0:b0:578:60a6:7c69 with SMTP id 4fb4d7f45d1cf-57cbd69e7a0mr8551338a12.30.1718705154868;
        Tue, 18 Jun 2024 03:05:54 -0700 (PDT)
Message-ID: <be1baa64-ba01-49ce-a59e-53f3bef1cda0@suse.com>
Date: Tue, 18 Jun 2024 12:05:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at
 boot
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
 <20240617173921.1755439-3-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240617173921.1755439-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 19:39, Andrew Cooper wrote:
> Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
> every call.  This is expensive, being used for domain create/migrate, as well
> as to service certain guest CPUID instructions.
> 
> Instead, arrange to check the sizes once at boot.  See the code comments for
> details.  Right now, it just checks hardware against the algorithm
> expectations.  Later patches will add further cross-checking.
> 
> Introduce more X86_XCR0_* and X86_XSS_* constants for CPUID bits.  This is to
> maximise coverage in the sanity check, even if we don't expect to
> use/virtualise some of these features any time soon.  Leave HDC and HWP alone
> for now; we don't have CPUID bits from them stored nicely.
> 
> Only perform the cross-checks when SELF_TESTS are active.  It's only
> developers or new hardware liable to trip these checks, and Xen at least
> tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
> don't want to be tickling in the general case.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I may certainly give R-b on the patch as it is, but I have a few questions
first:

> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -604,9 +604,164 @@ static bool valid_xcr0(uint64_t xcr0)
>      if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
>          return false;
>  
> +    /* TILECFG and TILEDATA must be the same. */
> +    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
> +        return false;
> +
>      return true;
>  }
>  
> +struct xcheck_state {
> +    uint64_t states;
> +    uint32_t uncomp_size;
> +    uint32_t comp_size;
> +};
> +
> +static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
> +{
> +    uint32_t hw_size;
> +
> +    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
> +
> +    BUG_ON(s->states & new); /* States only increase. */
> +    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
> +    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
> +    BUG_ON((new & X86_XCR0_STATES) &&
> +           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
> +
> +    s->states |= new;
> +    if ( new & X86_XCR0_STATES )
> +    {
> +        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
> +            BUG();
> +    }
> +    else
> +        set_msr_xss(s->states & X86_XSS_STATES);
> +
> +    /*
> +     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill in
> +     * prior holes in the state area, so we check that the size doesn't
> +     * decrease.
> +     */
> +    hw_size = cpuid_count_ebx(0xd, 0);

Going forward, do we mean to get rid of XSTATE_CPUID? Else imo it should be
used here (and again below).

> +    if ( hw_size < s->uncomp_size )
> +        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
> +              s->states, &new, hw_size, s->uncomp_size);
> +
> +    s->uncomp_size = hw_size;

Since XSS state doesn't affect uncompressed layout, this looks like largely
dead code for that case. Did you consider moving this into the if() above?
Alternatively, should the comparison use == when dealing with XSS bits?

> +    /*
> +     * Check the compressed size, if available.  All components strictly
> +     * appear in index order.  In principle there are no holes, but some
> +     * components have their base address 64-byte aligned for efficiency
> +     * reasons (e.g. AMX-TILE) and there are other components small enough to
> +     * fit in the gap (e.g. PKRU) without increasing the overall length.
> +     */
> +    hw_size = cpuid_count_ebx(0xd, 1);
> +
> +    if ( cpu_has_xsavec )
> +    {
> +        if ( hw_size < s->comp_size )
> +            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x < prev size %#x\n",
> +                  s->states, &new, hw_size, s->comp_size);

Unlike for the uncompressed size, can't it be <= here, since - as the
comment says - components appear in strict index order, and no component
has zero size?

> +        s->comp_size = hw_size;
> +    }
> +    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
> +    {
> +        static bool once;
> +
> +        if ( !once )
> +        {
> +            WARN();
> +            once = true;
> +        }
> +    }
> +}
> +
> +/*
> + * The {un,}compressed XSTATE sizes are reported by dynamic CPUID value, based
> + * on the current %XCR0 and MSR_XSS values.  The exact layout is also feature
> + * and vendor specific.  Cross-check Xen's understanding against real hardware
> + * on boot.
> + *
> + * Testing every combination is prohibitive, so we use a partial approach.
> + * Starting with nothing active, we add new XSTATEs and check that the CPUID
> + * dynamic values never decrease.
> + */
> +static void __init noinline xstate_check_sizes(void)
> +{
> +    uint64_t old_xcr0 = get_xcr0();
> +    uint64_t old_xss = get_msr_xss();
> +    struct xcheck_state s = {};
> +
> +    /*
> +     * User XSTATEs, increasing by index.
> +     *
> +     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
> +     * AMD introduced LWP in Fam15h, following immediately on from YMM.  Intel
> +     * left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in Skylake.
> +     * AMD removed LWP in Fam17h, putting PKRU in the same space, breaking
> +     * layout compatibility with Intel and having a knock-on effect on all
> +     * subsequent states.
> +     */
> +    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
> +
> +    if ( cpu_has_avx )
> +        check_new_xstate(&s, X86_XCR0_YMM);
> +
> +    if ( cpu_has_mpx )
> +        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
> +
> +    if ( cpu_has_avx512f )
> +        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
> +
> +    if ( cpu_has_pku )
> +        check_new_xstate(&s, X86_XCR0_PKRU);
> +
> +    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
> +        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
> +
> +    if ( boot_cpu_has(X86_FEATURE_LWP) )
> +        check_new_xstate(&s, X86_XCR0_LWP);
> +
> +    /*
> +     * Supervisor XSTATEs, increasing by index.
> +     *
> +     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
> +     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
> +     * introduced in Skylake.
> +     */
> +    if ( cpu_has_xsaves )
> +    {
> +        if ( cpu_has_proc_trace )
> +            check_new_xstate(&s, X86_XSS_PROC_TRACE);
> +
> +        if ( boot_cpu_has(X86_FEATURE_ENQCMD) )
> +            check_new_xstate(&s, X86_XSS_PASID);
> +
> +        if ( boot_cpu_has(X86_FEATURE_CET_SS) ||
> +             boot_cpu_has(X86_FEATURE_CET_IBT) )
> +        {
> +            check_new_xstate(&s, X86_XSS_CET_U);
> +            check_new_xstate(&s, X86_XSS_CET_S);
> +        }
> +
> +        if ( boot_cpu_has(X86_FEATURE_UINTR) )
> +            check_new_xstate(&s, X86_XSS_UINTR);
> +
> +        if ( boot_cpu_has(X86_FEATURE_ARCH_LBR) )
> +            check_new_xstate(&s, X86_XSS_LBR);
> +    }

In principle compressed state checking could be extended to also verify
the offsets are strictly increasing. That, however, would require
interleaving XCR0 and XSS checks, strictly by index. Did you consider (and
then discard) doing so?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 10:09:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 10:09:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742952.1149836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVmD-0000kb-Ev; Tue, 18 Jun 2024 10:09:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742952.1149836; Tue, 18 Jun 2024 10:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVmD-0000kU-CL; Tue, 18 Jun 2024 10:09:21 +0000
Received: by outflank-mailman (input) for mailman id 742952;
 Tue, 18 Jun 2024 10:09:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJVmC-0000kJ-NE
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 10:09:20 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d318c4d2-2d5a-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 12:09:18 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57cc1c00b97so4077101a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 03:09:18 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db61a3sm599205466b.85.2024.06.18.03.09.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 03:09:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d318c4d2-2d5a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718705358; x=1719310158; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=/po8wEE6YUYLe87jzJ8jnbUrBoqdLLA+3OtQ8amLD9Y=;
        b=drKX0aL9DKtQsYIT3UmgSunCmg8ZjD5Cv5Ihze6JJsGSLdqQXtkNWQHIU5DpNLTWxm
         re6yYtcaMYQqIuzCrKyo4ZMN6Pv86iPknkNB6ZTBiP6WBA7x94ZPTUQRkGQAR+wCqikJ
         V/kKFAnTx7oox9/2GwVxk02XrZaYHHoCdMh/qT7YZsyCaTbJzdhU7KvZRybjk1erDeDi
         Gbng1VSxkUyUkbWSPUCzKllTvBT1fXL0ZqlJ0Ob4gO7x5rrG4umhH03bvKmxxf4zBPf+
         AZaeOeUC2MSh/tej8GcECtqpuBzaihukhzeE6UPaTiJZlR8J9M2+ST4hk9cWw+WejfYf
         Yqiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718705358; x=1719310158;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/po8wEE6YUYLe87jzJ8jnbUrBoqdLLA+3OtQ8amLD9Y=;
        b=MRU9AAiinUMJTNCVEf6M7DgEngLiAh7bJ4/Z5zvZ87wwTIhQce4Gl30r9TrZPnKo32
         +dvjuf2eBZ3aoYuItJ0IqzP5rsjPzLLG7WBYIBG3u7oGaEQ5HLRLlCB2cC9lAkqTOomV
         XiGyg6IWCruR393Qt0WKErEo1W8DIS5HEE0MlYvsUR0/gjnBVRWECw5eilMlQYj4N7ot
         8X9MZBYLZqYEMN14RjD1vyM2taJNTSbDVCCPCYh4iJFpTTwgUm0AQPMN3qyKv+LwCtO/
         Yi99MJmKnERZRq+iuUict4fPo26jbL4N2fH8bPiHcGgxuQgDIsIXzTj25adXGA+vYgkX
         byNQ==
X-Forwarded-Encrypted: i=1; AJvYcCV1Hzs5/kMgMNi7akB/mSMUyX8IbKuyCg3ry91r+LwiCmgqWqZR+BmhUus5cHbIkD5Y/OIxBIZ8+DbPWWRI1zGDpPcmoh6/Tud/BasdS0M=
X-Gm-Message-State: AOJu0YyOQUbwc5AKrdpomWQv66X8gdujrfNPDVfSftNBOMckba16N0U9
	AzGELQS7HM1tSXMWOztFbVWVqcNb9NHMsQFMFIh1d7nN7nXRh1mCDmXfv91iziA1t1jRfhXdxcQ
	=
X-Google-Smtp-Source: AGHT+IGLl9tZMK/x7+Px3RA8WsGFEHIGZ3NkHoxybOxs0UbfCLse4TMCiqwg8VNzjTnBidpSA4ppeA==
X-Received: by 2002:a17:906:f2da:b0:a6f:5562:167 with SMTP id a640c23a62f3a-a6f60d42710mr776464666b.38.1718705357939;
        Tue, 18 Jun 2024 03:09:17 -0700 (PDT)
Message-ID: <63d11da5-4a5a-4354-ab57-67fbb7110f45@suse.com>
Date: Tue, 18 Jun 2024 12:09:16 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] automation/eclair_analysis: deviate common/unlzo.c for
 MISRA Rule 7.3
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20342a68627d5fe7c85c50f64e9300e9a587974b.1718704260.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20342a68627d5fe7c85c50f64e9300e9a587974b.1718704260.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.06.2024 11:54, Alessandro Zucchelli wrote:
> The file contains violations of Rule 7.3, which states the following: The
> lowercase character `l' shall not be used in a literal suffix.
> 
> This file defines a non-compliant constant used in a macro expanded in a
> non-excluded file, so this deviation is needed in order to avoid
> a spurious violation involving both files.

Imo it would be nice to be specific in such cases: Which constant? And
which macro expanded in which file?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 10:17:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 10:17:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742963.1149848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVuT-0002dN-Ed; Tue, 18 Jun 2024 10:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742963.1149848; Tue, 18 Jun 2024 10:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJVuT-0002dG-9j; Tue, 18 Jun 2024 10:17:53 +0000
Received: by outflank-mailman (input) for mailman id 742963;
 Tue, 18 Jun 2024 10:17:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJVuS-0002by-Kn
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 10:17:52 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 04398e3f-2d5c-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 12:17:50 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57a30dbdb7fso9312296a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 03:17:50 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cbb05b465sm7017004a12.18.2024.06.18.03.17.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 03:17:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04398e3f-2d5c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718705870; x=1719310670; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=mmse11iwk+dOuhhetfXQZRuoY9UK/MzzpAZaloY4n94=;
        b=e8+WRRSiu41PWNGVFgG6Zqo4yI7TF9WrqlsZ8nLP9NpCl7rb8ZtTbWRPo10DF/abLE
         vyqKsViYMdqbXdZUzzxEqAkYWWDtZq7cYQI4YtsfJTGpa0s4Jzs0tOdJ5TUs8Nd9gvDy
         VU+ikW6jJw0I53eEsAbXD4CPVb2xnFh6j5woCvxgI+XADDQge4EXc1D/YqAGnIL81Nll
         seRDlqjlTxnX55pb1nTS3TM2Gs69COfJxN58BRJ3f5AVGr/NqY2boleEVWuvCdmaVG/G
         Wg/X/799UqlgCfoUtzcf8LoJTUVPGfny6cOo2zOSH3JvtctjMD6rfu/SmL8ncoQ4qZF8
         k0rg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718705870; x=1719310670;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mmse11iwk+dOuhhetfXQZRuoY9UK/MzzpAZaloY4n94=;
        b=tiyGJ91L1iXoZDnO5e1iaYukocF7A59QShMImVUBHbADNRKYnBZCsxzns21007ER1t
         jhtBTK3i3K62/aJ1aBqDqgB/X3DJhLwZ/5acpZL7wAVrn+Qtm+8VE7kX0czlQFxZs6ah
         Tb46+UqIrEfzAtQxXJx641rGHzIzugEhNZ0GhyVIBSUinasU9+L4f6SloFftDTx+kjuz
         64SvY2WzMrCvwI/9JfkaKghbPdkRP4uNkVjQa1guS3ypX3wGOgyRrsh6NUYVByJKXVga
         hSTiTs1kRgiaQfzDy3S39/kA/9Vv8Nko63zPiUNA01KPyZVvIVQKuzAtnRtnAIFETZia
         gF6Q==
X-Forwarded-Encrypted: i=1; AJvYcCW61txxcIpcbi5aLs0P95H5m+Sk6pvd5UUoWuVBtOuNet0Ol2zaJwsipheVikW/A/2hscholPLcv8E+t2ZVsb0jLlSsUayHrO2oV4A2gxs=
X-Gm-Message-State: AOJu0YwYDb6Er9V6gjseON5mY7o/QuucjMCQWQvG9abftdX2x6aaVGvL
	gvh4u3nU0VdOLH93U8bpHbFSKbCmjOqgkXXDODFHdBRHD69qrjvfi8LGMMZpww==
X-Google-Smtp-Source: AGHT+IHnINw1V0tHbmiq3APKrRqreJIbHSt0C1d2QVxCbPFa79Pygjv5nj5vh0FBWCe+s8dhyOhvMQ==
X-Received: by 2002:a50:cd01:0:b0:57c:6832:7b2e with SMTP id 4fb4d7f45d1cf-57cbd4d96admr8996861a12.0.1718705869892;
        Tue, 18 Jun 2024 03:17:49 -0700 (PDT)
Message-ID: <9c1aa55f-f5e9-45a4-8df1-358db981bdfd@suse.com>
Date: Tue, 18 Jun 2024 12:17:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 3/7] x86/boot: Collect the Raw CPU Policy earlier on
 boot
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
 <20240617173921.1755439-4-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240617173921.1755439-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 19:39, Andrew Cooper wrote:
> This is a tangle, but it's a small step in the right direction.
> 
> In the following change, xstate_init() is going to start using the Raw policy.
> 
> calculate_raw_cpu_policy() is sufficiently separate from the other policies to
> safely move like this.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Jun 18 10:24:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 10:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742968.1149857 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJW0j-0004Wp-2k; Tue, 18 Jun 2024 10:24:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742968.1149857; Tue, 18 Jun 2024 10:24:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJW0i-0004Wi-Vt; Tue, 18 Jun 2024 10:24:20 +0000
Received: by outflank-mailman (input) for mailman id 742968;
 Tue, 18 Jun 2024 10:24:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJW0h-0004WX-8A
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 10:24:19 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb6cb674-2d5c-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 12:24:18 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57cc7e85b4bso1921413a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 03:24:18 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb741e725sm7470119a12.69.2024.06.18.03.24.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 03:24:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb6cb674-2d5c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718706258; x=1719311058; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=jhKvdBHeU4NEIyccg9DwoIAuB5VXN7v/wbrCReFFQak=;
        b=Hc5A0fBom70Ac3L+uft3hIm81ge176z7l65Z4eTICxBAnfrFumKRmpm3MkePHxoPkr
         lR8eZ7j5AoB/QJwKK9Qwrc9jRpov43xsWvX8bCyKZ2bh4ZHB1W3qlxm3cIfs4nWI+Zhg
         +Iz8v4aTIrwhB162b7Pih/mTSKqOZiFHumpbrVY54qwgyM1cj8ohnmPJBB/irefmSuS0
         RNo5T6brFx9NlOtU9InKtkR5rjkX2HZair7bKEPsvhXy6WkX7EHHnQYViklVTaBJ9of3
         45C0yzP5gddrS0XRGD1BAGsjfLQaLRFL8r0zHUt9O0Nh28Edw2nh7rOXXaNiVWDuQIFB
         XyBA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718706258; x=1719311058;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jhKvdBHeU4NEIyccg9DwoIAuB5VXN7v/wbrCReFFQak=;
        b=C9i4sjMsBzxVH26r0B5HYXhBOyNTkE0MR3CI4mamYbYliPOGeCMgCd4jRj304Y+RTW
         9Qu/317j/nS03dgF0nFkPWyPEk49SihfqAx0T8nydtb4+qeQEQZSFyGWsTiOQc8N0LVB
         BjZIKG5u9DF28Yfvl0NRBzZKRAdyfcyadySmgj2cpNPuOIeiPbGtCga46H05yXcbZFDk
         jbXIntpNPiJiiX3BZSjdQWrGjZ+zpoVEWT+nuoASPIOog+cq08Fh7v1jHe3jI5VnQyFG
         4bXdSOV3Rh33E9TGodTKEuVRg97S4CVchle1h+xf9Y1xvKZJKpx71bs0nWkCzamKn72A
         As0g==
X-Forwarded-Encrypted: i=1; AJvYcCXEeP8a+zCbQtLMINY80CoIk4V9YFvLAdoaejcWEYbJao4Rtf08awObV9/21VxirhyXNJ3L5v9bB0e0xZNR9LsxbVbjmvhq33Q4FnnWZeY=
X-Gm-Message-State: AOJu0YxRAhHuc6d6MHC0BY0026pNt8jo42jxJ6kuXKUWvFtUVGns4Ebe
	3dtPDjNHI1pW1lGNqLrJ1wVCQXGvNKNOzSTL/xcW2jwSwTdDYLiZegbacb/jsA4Nud/xwTfhVOE
	=
X-Google-Smtp-Source: AGHT+IFoWqcvbozpXxvQNqxhIE1z1TSZsg7NwPsNrrU5fsLyspLnzRNQT0koo4krqHA8yioj2dFYrw==
X-Received: by 2002:a05:6402:890:b0:57c:fb41:eba2 with SMTP id 4fb4d7f45d1cf-57cfb41ed0fmr948248a12.1.1718706257726;
        Tue, 18 Jun 2024 03:24:17 -0700 (PDT)
Message-ID: <2b0b993d-5b8c-4a02-b2a2-1a7aa69a41c0@suse.com>
Date: Tue, 18 Jun 2024 12:24:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 4/7] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
 <20240617173921.1755439-5-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240617173921.1755439-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 17.06.2024 19:39, Andrew Cooper wrote:
> We're soon going to need a compressed helper of the same form.
> 
> The size of the uncompressed image depends on the single element with the
> largest offset + size.  Sadly this isn't always the element with the largest
> index.
> 
> Name the per-xstate-component cpu_policy structure, for legibility of the logic
> in xstate_uncompressed_size().  Cross-check with hardware during boot, and
> remove hw_uncompressed_size().
> 
> This means that the migration paths don't need to mess with XCR0 just to
> sanity check the buffer size.  It also means we can drop the "fastpath" check
> against xfeature_mask (there to skip some XCR0 writes); this path is going to
> be dead logic the moment Xen starts using supervisor states itself.
> 
> The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
> be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
> prior to setup_xstate_features() on the BSP.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Albeit possibly also subject to use of XSTATE_CPUID, as per the question on
the earlier patch.
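
As an aside, the offset+size point above can be illustrated with a minimal
sketch (hypothetical component layout values, not the actual cpu_policy
fields):

```c
#include <stdint.h>

/* Hypothetical per-component layout, shaped like what CPUID leaf 0xD
 * reports: each XSAVE component has an offset and a size. */
struct comp { uint32_t offset, size; };

/* The uncompressed XSAVE image size is set by whichever enabled component
 * ends last, i.e. max(offset + size) - not by the highest-indexed one. */
static uint32_t uncompressed_size(const struct comp *c, unsigned int n)
{
    uint32_t max_end = 512 + 64; /* legacy region + XSAVE header */
    unsigned int i;

    for ( i = 0; i < n; i++ )
        if ( c[i].offset + c[i].size > max_end )
            max_end = c[i].offset + c[i].size;

    return max_end;
}
```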

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 11:03:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 11:03:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742983.1149867 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWc1-0002AF-KC; Tue, 18 Jun 2024 11:02:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742983.1149867; Tue, 18 Jun 2024 11:02:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWc1-0002A8-Gs; Tue, 18 Jun 2024 11:02:53 +0000
Received: by outflank-mailman (input) for mailman id 742983;
 Tue, 18 Jun 2024 11:02:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ol3=NU=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1sJWc0-0002A2-0k
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 11:02:52 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4de4933f-2d62-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 13:02:51 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a6f09eaf420so623618366b.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 04:02:51 -0700 (PDT)
Received: from [192.168.69.100] ([176.187.212.55])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ed357asm610564366b.125.2024.06.18.04.02.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 04:02:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4de4933f-2d62-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718708570; x=1719313370; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=xcgha9EMwGFy4GfTwIT+dAmK/I6ZbD/6U8eKBQbvJxE=;
        b=euCUT5BA+n/shshXZwJCKoSPPnSu5EqXscw/0sSa4PLyzQ94cW5qvBtyPdW1t7iBHH
         zI3iy0Pjf/tnB5si9H7r3HajUQ/h435MThHb0Myw771A6v78ZERdPLsHfuTDlElzd+1Y
         tQqkIQp5EAV0XD2whPlrSgt9k5B8lztpyTHLR5MX6S9WPXeWSxqwGyHLkbAfORCgJEGC
         4vf+iyaG0U+nzrtq1RpcKXkvYF+wv+0YNoiMmjRsqCcdv/88MascGo/QF8fQpWi62kwF
         y6jUjpJpoTdmmHWNyE5f002/LfBtkqfUENDE66kFk2HQG5XpeXMtE/vsLh3jcXD9bQIV
         KMBQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718708570; x=1719313370;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=xcgha9EMwGFy4GfTwIT+dAmK/I6ZbD/6U8eKBQbvJxE=;
        b=cbjLi1mAiZoMqP6QBD4whB3Nrq6rosooN1MoCs7dF303lepg95VCX8O9vQm/WHlnwZ
         Ow0P4dgNYBqnj9rHwFqdMZEYIBu/GWr8Zk4sqrrUGQxwKA4C9Ho9sNyW69Cw5yL6w6dB
         5pER7z/wBVSR9Gjnfwj0oF2AswVphFg2BAr7B6Oj2wZH3tpT6YEDPdXfvOcuiQeoB4Dk
         DM2MuiI1o9cSdykIt2QbPMGVtNs/XnV5cb6Ua6fEpWjW1J0xThQFhvQp+fI8m+QBLl30
         Y1P2mj4z00RYisPGPD+EpWHfM6w57WMRARFxVceyBTFEso7mgFnt9CP9zaMdIC/AC8D/
         Cabg==
X-Forwarded-Encrypted: i=1; AJvYcCW+bd1TuSFLOCayp5YCE3hTWyXvDu0BHiIt+BBCSqJxrXQ8c6XjGYYn6GC1o07dQJxc0WOyevUUAHFjpNymxyAl0/3fDIJF3RfDalDcGtg=
X-Gm-Message-State: AOJu0YwrdyOldv65iZukQmZVS0LJRrkWalG2wYYQn6MrQ8oQ6YTzgRSK
	uFaDi/TOptG+GCHCshdxYVDc01guU3A3+qV2BXlkNvBveUkAKgsNK3uf7tgYKOM=
X-Google-Smtp-Source: AGHT+IGZiB4TOusLWVX3ERf9zLJVoY4WNAK84tTxY7ZKm/36Wvr4gkTfJHbEnLonioHVYesTa7/8zw==
X-Received: by 2002:a17:907:c283:b0:a6f:815f:16d8 with SMTP id a640c23a62f3a-a6f815f17ccmr459067366b.7.1718708570402;
        Tue, 18 Jun 2024 04:02:50 -0700 (PDT)
Message-ID: <aba83790-fec2-498d-a0b6-4ea01a1893fb@linaro.org>
Date: Tue, 18 Jun 2024 13:02:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 2/3] ui+display: rename is_placeholder() ->
 surface_is_placeholder()
To: Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Marc-Andr=C3=A9_Lureau?= <marcandre.lureau@redhat.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 Anthony PERARD <anthony@xenproject.org>
References: <20240605131444.797896-1-kraxel@redhat.com>
 <20240605131444.797896-3-kraxel@redhat.com>
Content-Language: en-US
From: =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>
In-Reply-To: <20240605131444.797896-3-kraxel@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 5/6/24 15:14, Gerd Hoffmann wrote:
> No functional change.
> 
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>   include/ui/surface.h | 2 +-
>   ui/console.c         | 2 +-
>   ui/sdl2-2d.c         | 2 +-
>   ui/sdl2-gl.c         | 2 +-
>   4 files changed, 4 insertions(+), 4 deletions(-)

Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 11:04:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 11:04:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742987.1149877 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWdH-0002f0-Ua; Tue, 18 Jun 2024 11:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742987.1149877; Tue, 18 Jun 2024 11:04:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWdH-0002et-RR; Tue, 18 Jun 2024 11:04:11 +0000
Received: by outflank-mailman (input) for mailman id 742987;
 Tue, 18 Jun 2024 11:04:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=An5i=NU=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJWdG-0002ea-DF
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 11:04:10 +0000
Received: from mail-qv1-xf2b.google.com (mail-qv1-xf2b.google.com
 [2607:f8b0:4864:20::f2b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7c623ba4-2d62-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 13:04:09 +0200 (CEST)
Received: by mail-qv1-xf2b.google.com with SMTP id
 6a1803df08f44-6b07e641535so27812846d6.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 04:04:09 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5bf2879sm65400806d6.21.2024.06.18.04.04.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 04:04:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c623ba4-2d62-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718708648; x=1719313448; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ubvMTyLxpTIVR9o/ecLoRMfB8L0MpN0HMxm8gMBgCmg=;
        b=ICl3SbDQRZq6S5uWZSHKDpMyHr38xkj9jUgX4qAjKEYBeF12oK0RtKHXz+bOupvTd7
         LNboXqr3ALlFQ3Dq41wqNkKKIZr7bROB+6++U9J0zhEiTlEM9SpJQ3MWGqGPpX/5C80A
         URagLbgmqciQUk8obgM/7Ucy0wfiZUuVzAi3g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718708648; x=1719313448;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ubvMTyLxpTIVR9o/ecLoRMfB8L0MpN0HMxm8gMBgCmg=;
        b=NcW5B0/GwkXFEuHHMSqVebLsAG9tLuTI5GYSPDVVnHhtAIRBbXrSJJCjkIdO9C2gSP
         R8LeXI1GjKkFdOzBMd9EhAD4B2+OFVdnUnc9SMWieIfvazZKTKkv1XC540ti8J21GIkl
         nZqvB6yh1wnUGVKtlJB/lgxJ+FeaBQI4r86J7hu6G1nIfMxDELVEvg9bJUJfnWCSGvGf
         B4pOSPowf9DYQEEpSFBocTkFaUTegZShiXRyd4FE3Y4yfCuwlM7F/bID69tSQ9Zz0bcP
         eV2kKECWUGQyO0/gmFNHLWda0ZG6E0Ih65JZDHIrjosXv/SvLRFZoBlJbU3MjTbsQGOS
         szYw==
X-Forwarded-Encrypted: i=1; AJvYcCXn0TflYaDXq0cRK8EeCqFpyFzI68yakm4+f4CQCUYb/8uwQfAtpxsoDK/AwjbXN835usrYCeQHH32vVUtmZkRYgdf/vqvUANMm3zZw2js=
X-Gm-Message-State: AOJu0Yz1OdyhHPDFowyxH+UpJ/6iBVRsFHYArEviVC2WdzKGPEWfeh29
	kfIX4Lji1nzF1sJ9hOUwZTDhztdZYH49vOdRMt8EU0kSxdJx1EenuB0w4g+GnGo=
X-Google-Smtp-Source: AGHT+IE6E/NG17tJsTbKKPvEa5gIaI35Erc7JsSpzhJKtgDBDc+mwDks8dkbbgR/37yJ/M/yvZVHDA==
X-Received: by 2002:a0c:f2d0:0:b0:6b4:fde8:868b with SMTP id 6a1803df08f44-6b4fde88991mr5513296d6.50.1718708648262;
        Tue, 18 Jun 2024 04:04:08 -0700 (PDT)
Message-ID: <bb768c00-0a91-4e47-91b4-21ec31a71f13@citrix.com>
Date: Tue, 18 Jun 2024 12:04:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [OSSTEST PATCH] preseed_base: Use "keep" NIC NamePolicy when
 "force-mac-address"
To: Anthony PERARD <anthony.perard@vates.tech>
Cc: Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org,
 Luca Fancellu <luca.fancellu@arm.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>
References: <20240617144051.29547-1-anthony@xenproject.org>
 <a65a83be-1236-4699-8124-c0bd809c4b97@citrix.com> <ZnFXFjeakYBmBHSB@l14>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <ZnFXFjeakYBmBHSB@l14>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18/06/2024 10:44 am, Anthony PERARD wrote:
> On Mon, Jun 17, 2024 at 04:34:09PM +0100, Andrew Cooper wrote:
>> On 17/06/2024 3:40 pm, Anthony PERARD wrote:
>>> diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
>>> index 3545f3fd..d974fea5 100644
>>> --- a/Osstest/Debian.pm
>>> +++ b/Osstest/Debian.pm
>>> @@ -972,7 +972,19 @@ END
>>>          # is going to be added to dom0's initrd, which is used by some guests
>>>          # (created with ts-debian-install).
>>>          preseed_hook_installscript($ho, $sfx,
>>> -            '/usr/lib/base-installer.d/', '05ifnamepolicy', <<'END');
>>> +            '/usr/lib/base-installer.d/', '05ifnamepolicy',
>>> +            $ho->{Flags}{'force-mac-address'} ? <<'END' : <<'END');
>> The conditional looks suspicious if both options are <<'END'.
> That works fine; this pattern is already used in a few places in
> osstest, like here:
> https://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=ts-host-install;h=0b6aaeeae228551064618abfa624321992a2eb2d;hb=HEAD#l240
>     >  $ho->{Flags}{'force-mac-address'} ? <<END : <<END);
>
> Or even here:
> https://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=ts-xen-build;h=c294a51eafc26e53b5417529b943224902870acf;hb=HEAD#l173
>     > buildcmd_stamped_logged(600, 'xen', 'configure', <<END,<<END,<<END);
>
>> Doesn't this just write 70-eth-keep-policy.link unconditionally?
> I've checked that on a different host, and the "mac" name policy is
> used as expected, so the file "70-eth-keep-policy.link" isn't created
> on that host.

This is horrifying.  Given a construct which specifically lets you
choose a semantically meaningful name, using END for all options is rude.

Despite the pre-existing antipatterns, it would be better to turn this
one into:

$ho->{Flags}{'force-mac-address'} ? <<'END_KEEP' : <<'END_MAC');

which gives the reader some chance of spotting that there are two
adjacent scripts and we're choosing one of them.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 11:08:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 11:08:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.742995.1149886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWhp-0003er-FA; Tue, 18 Jun 2024 11:08:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 742995.1149886; Tue, 18 Jun 2024 11:08:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWhp-0003ek-CJ; Tue, 18 Jun 2024 11:08:53 +0000
Received: by outflank-mailman (input) for mailman id 742995;
 Tue, 18 Jun 2024 11:08:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2ol3=NU=linaro.org=philmd@srs-se1.protection.inumbo.net>)
 id 1sJWho-0003ee-6I
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 11:08:52 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 24ab607b-2d63-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 13:08:51 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-57cb9efd8d1so3364548a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 04:08:51 -0700 (PDT)
Received: from [192.168.69.100] ([176.187.212.55])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56da4486sm607939266b.1.2024.06.18.04.08.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 04:08:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24ab607b-2d63-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718708931; x=1719313731; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=TUa1inIX9KdwJHKeGFbDHGO0EDxxcgeVNNKp5ffKjYY=;
        b=Pf+dce7dXMNoVrgUQWLwSoCuWxN2zYnfhvFuU1uvsoTwJXRL/8TtnQrsN5uctLsaaY
         Lkch4H6ojtkrMxnAsXfFaTsHUo0zxfn+sSI5G6frkThK5cnuFDjnxmrKbQBt7XOlpdcn
         bvv1DVH14GjPOVGR0LSt0ui9Wn74/yk/U+Z2oWu7cpB7Bhbs7alsHJ5zJwYce/PlhNFN
         gcFme9SFkBYQ6nw5UoQMRlwg4xe/98FTq+NVw+a98NQXruPee6zmFQzsaROrJL8z90Wl
         j1ptqUGHkPPngk17mxWssUsSUJKtJCFNfh+MEMyNIa6F1+5rcdjvdoemeFYIEqUpnP8N
         zUGQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718708931; x=1719313731;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=TUa1inIX9KdwJHKeGFbDHGO0EDxxcgeVNNKp5ffKjYY=;
        b=VrdAGxUt7UBtIBJNaLeesWd74XGh4iQtStCoOXayAdWrRk8pHwWBprBEzmoAZUk+By
         kP87YEhNDcgRQETuXhYi7gOTF5tyRGRT49eLBYTFJ9A4PfCSV0cdO1j3RAwcpP9SyTyz
         1/+WtnIs1C9OX7n759Msqx0jobKx55LlZe9xIMlp5sQBK6yBYJ4tGfg6Cj60vXF+nMlq
         zbSwJ13nqmmn0AQwrsJ4eCWZiOYFIqA2btb0aHFi8XeOZMOXz0tw9WN2EU43YDCDx73L
         2IZ8681sZQA2lZ2ZqHdO4mqWTg6fVAcJYE0AmrxzsIVF6zklNp3N8NuJg/6zpHVlw2Wn
         bmOA==
X-Forwarded-Encrypted: i=1; AJvYcCVtrTzkP6S3R/RlzFQtXSdFJtrxFtEfWA9fWVecAZpZ2c9o4QZVU+7o+xM3ETTTSX458ac/WDth+ta6sWku6c2hODmkQ9S9cNPAFpCDL1w=
X-Gm-Message-State: AOJu0YxQD72QdteWprpkDd2EYOoiipVLfHnvdPw9XR+5ZMfd5GL+HPeD
	bFnlyDM62lawF0IQxjkNNCOsk8nNDjqEkyyiMrJNlyVcJ06gS+vPOIQEehOFfEc=
X-Google-Smtp-Source: AGHT+IEWOyL7yTFsmsSyir5tQhxbluhVNQhJn74o5D642Kwf7mxM1ofGsp9yGSyxaAEYq/J2aUUHkg==
X-Received: by 2002:a17:906:b318:b0:a6f:259d:9a5f with SMTP id a640c23a62f3a-a6f9508e83cmr133167066b.35.1718708930655;
        Tue, 18 Jun 2024 04:08:50 -0700 (PDT)
Message-ID: <01fd70a0-30e3-45aa-ac95-ce36e593a264@linaro.org>
Date: Tue, 18 Jun 2024 13:08:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 0/3] stdvga: fix screen blanking
To: Gerd Hoffmann <kraxel@redhat.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Marc-Andr=C3=A9_Lureau?= <marcandre.lureau@redhat.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 Anthony PERARD <anthony@xenproject.org>
References: <20240605131444.797896-1-kraxel@redhat.com>
Content-Language: en-US
From: =?UTF-8?Q?Philippe_Mathieu-Daud=C3=A9?= <philmd@linaro.org>
In-Reply-To: <20240605131444.797896-1-kraxel@redhat.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 5/6/24 15:14, Gerd Hoffmann wrote:

> Gerd Hoffmann (3):
>    stdvga: fix screen blanking
>    ui+display: rename is_placeholder() -> surface_is_placeholder()
>    ui+display: rename is_buffer_shared() -> surface_is_allocated()

Since Marc-André reviewed, I'm queuing this series, thanks!


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 11:15:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 11:15:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743004.1149897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWo2-0005C2-7u; Tue, 18 Jun 2024 11:15:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743004.1149897; Tue, 18 Jun 2024 11:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWo2-0005Bv-47; Tue, 18 Jun 2024 11:15:18 +0000
Received: by outflank-mailman (input) for mailman id 743004;
 Tue, 18 Jun 2024 11:15:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TBP2=NU=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJWo1-0005Bp-DD
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 11:15:17 +0000
Received: from mail-qt1-x82c.google.com (mail-qt1-x82c.google.com
 [2607:f8b0:4864:20::82c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 098b7f81-2d64-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 13:15:15 +0200 (CEST)
Received: by mail-qt1-x82c.google.com with SMTP id
 d75a77b69052e-4421c014b95so30913461cf.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 04:15:15 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-441ef3d88f9sm54879211cf.12.2024.06.18.04.15.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 04:15:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 098b7f81-2d64-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718709315; x=1719314115; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=fCWxaXWRj+zULFoQ4vnFkM0t4dQAaomXJTUwJyMFBMM=;
        b=Fyk/Cqp44jJ/Bjuk3p6pzAbVjeI40mN8ApmpCy6IxMWYxC750huUK2RB1gTIPkPn7w
         k2vuWzcYkFLpudiyTPzYdqD6l0qfKLNkslmYQuPDggLGp4SZ83dkr5y5J8L30qu+98PL
         pf8RwBO5EgLl8V+F0YjFNAAMP+jrdmaltE9Yc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718709315; x=1719314115;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=fCWxaXWRj+zULFoQ4vnFkM0t4dQAaomXJTUwJyMFBMM=;
        b=QOsmIYpKgFSWcHGDQ3+gg0HCcQhf1IJIQWKIkoLWMt74Cz6LGSD+eSkGcbxwn0Z3SP
         sCEGjLQrPBAd6p/qkPLpPo8BSAD7J8RfMNHiRAg/nFK0uAt+on8PXSUFnoo2ojC1/+c+
         X5mYZpz9UinHEveVicbv59K5lmd3njc1douUm1mWotMHKw7yQKSk/VEbYScm2IUy+ybB
         5VsQZPKa0tKRZF6pOlMYrCbqj2rnMriUdbnnmXEK0XEo1WKLWlVDHYIsxG/lxDx7Pv1N
         /Ad3Oy2aH/3w0c7Y5cGo3c2oCHNDfQrAqGnTB5JAV4As+nZFpZkF42tGg87sgHkIy2n7
         zA4g==
X-Forwarded-Encrypted: i=1; AJvYcCWgLqzEqe73vuTkC260I8Iy1pYck+fO2Y1DI52/42FEV2q/vy2CpAjej6fkWy3HMqBRG8mhXFTXBF9xbxWTfOcuDjz8k4X+hpY/auNHrig=
X-Gm-Message-State: AOJu0YxT2QodvpqsvQiwnWhQtJG6xuMG9uW89s7LQUfs0loAJIFek/Tk
	86jpq5P1Y6xF3MxNfu4IdEs/Ew3ogYSGWSqXnEYJr2VqHXmZqFqcdTDDqz7is70=
X-Google-Smtp-Source: AGHT+IE9+HMwRLeqBgzYSPw7aYJMXz74PpS4QhlqOkG1uQZnNB/HDKYWZIltdJ1RKhjRU+hnvfk43w==
X-Received: by 2002:ac8:5d02:0:b0:43a:c0c7:a218 with SMTP id d75a77b69052e-4449bc9c025mr40537571cf.33.1718709313678;
        Tue, 18 Jun 2024 04:15:13 -0700 (PDT)
Date: Tue, 18 Jun 2024 13:15:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v3 for-4.19 1/3] x86/irq: deal with old_cpu_mask for
 interrupts in movement in fixup_irqs()
Message-ID: <ZnFsP9Xt4e1cQsCA@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-2-roger.pau@citrix.com>
 <f92fc38b-aba9-4f8f-b95c-4723049523d0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f92fc38b-aba9-4f8f-b95c-4723049523d0@suse.com>

On Mon, Jun 17, 2024 at 03:18:42PM +0200, Jan Beulich wrote:
> On 13.06.2024 18:56, Roger Pau Monne wrote:
> > Given the current logic it's possible for ->arch.old_cpu_mask to get out of
> > sync: if a CPU set in old_cpu_mask is offlined and then onlined
> > again without old_cpu_mask having been updated the data in the mask will no
> > longer be accurate, as when brought back online the CPU will no longer have
> > old_vector configured to handle the old interrupt source.
> > 
> > If there's an interrupt movement in progress, and the to be offlined CPU (which
> > is the call context) is in the old_cpu_mask clear it and update the mask, so it
> > doesn't contain stale data.
> 
> Perhaps a comma before "clear" might further help reading. Happy to
> add while committing.

Maybe, I'm trying to think of other ways to word the sentence in order
to make it simpler, but I'm out of ideas.

> > Note that when the system is going down fixup_irqs() will be called by
> > smp_send_stop() from CPU 0 with a mask with only CPU 0 on it, effectively
> > asking to move all interrupts to the current caller (CPU 0) which is the only
> > CPU to remain online.  In that case we don't care to migrate interrupts that
> > are in the process of being moved, as it's likely we won't be able to move all
> > interrupts to CPU 0 due to vector shortage anyway.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 11:22:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 11:22:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743013.1149906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWvH-0007F6-Uf; Tue, 18 Jun 2024 11:22:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743013.1149906; Tue, 18 Jun 2024 11:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJWvH-0007Ez-RZ; Tue, 18 Jun 2024 11:22:47 +0000
Received: by outflank-mailman (input) for mailman id 743013;
 Tue, 18 Jun 2024 11:22:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TBP2=NU=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJWvH-0007Et-0n
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 11:22:47 +0000
Received: from mail-oi1-x22e.google.com (mail-oi1-x22e.google.com
 [2607:f8b0:4864:20::22e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 15d3659b-2d65-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 13:22:46 +0200 (CEST)
Received: by mail-oi1-x22e.google.com with SMTP id
 5614622812f47-3d21f7cc6c0so2655126b6e.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 04:22:46 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798b73fb4dcsm508664285a.100.2024.06.18.04.22.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 04:22:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15d3659b-2d65-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718709765; x=1719314565; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=uMdWOYDCF4L/JS5yOW7ALJCaoP82n/+gWVij8fGbdXo=;
        b=mrLcSs3lUoAhbxBBIlCRcJJAY/X9OKqEDxEmmV+IRdVwyKCAnGDPgPzw9E9HYtRUx+
         TixHSR695GjBFhDiAO6cUnZPtEkJ3AvwYouOQM9pOmQBKh/8UKa8L64rHaJtd4Rzr7fo
         OzugpNRSg4uyNIKzVmTVEAY/7KyZrNaAJnFIw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718709765; x=1719314565;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=uMdWOYDCF4L/JS5yOW7ALJCaoP82n/+gWVij8fGbdXo=;
        b=pH4o2yG4kjDthfqEq3+9w5oRU+DGbJKFT/wc16CUF+KEn2TdT0W+jWSGQxzetH/q/j
         64S9KbgnK5CbmeWBBL2K+rkSHQVVuZrErUXdAH8G65Y3SfPvdMvAeM+NfimREV67z3Ee
         QOuNqc6gPxMSpQg5P9VTDYaGBqEyZX0tiCvTPk/CwtU0n/zKMmf/uvYar4wpt39E24Pw
         Dx0epHOjdsUmPGwBapbDlqQ6NBkm9EqyC7rAIbj7E498WEZW89cWJcX96US6w3+SzKNY
         hih5SemYftFU7BCElNwNI4NGtXdbZIvvfKPL6NCGTLRQsyLPvHf5KxPNas6aHKMqvSuo
         64Xg==
X-Forwarded-Encrypted: i=1; AJvYcCURV6Twd6RnD071jiz84OqwVOuYUAfv80ejKKfp4nmHArGovHmRI6AeXJ776dYY2LQHzINa30tzB0A1FLOKcDmplyNlsBMKvPfOu2PxLQ8=
X-Gm-Message-State: AOJu0Yy9cxMDXgghWdcxZAjkMSL092FsbcLXnsjucy+wCeujrsiq+qYw
	+y8hvMnw4ermeytgWQ6CvKXHdlSmsiDoKu9zjqqZxNIfqoRxjQ7Rv2HFGvQ4Q6I=
X-Google-Smtp-Source: AGHT+IF6X8Iri1KEMEYghMRXGEQYzcU7kYUwTy0wk0hNb4VrEnztHGaVY5yJviz4YKaixjkW9uwZbg==
X-Received: by 2002:a05:6808:3028:b0:3d2:236d:6b57 with SMTP id 5614622812f47-3d24e928004mr12029217b6e.27.1718709764568;
        Tue, 18 Jun 2024 04:22:44 -0700 (PDT)
Date: Tue, 18 Jun 2024 13:22:42 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH v3 for-4.19 2/3] x86/irq: handle moving interrupts in
 _assign_irq_vector()
Message-ID: <ZnFuAoP-FNiNfcKd@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-3-roger.pau@citrix.com>
 <f263d178-3a06-4c65-a6c0-a77f85c559b6@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f263d178-3a06-4c65-a6c0-a77f85c559b6@suse.com>

On Mon, Jun 17, 2024 at 03:31:13PM +0200, Jan Beulich wrote:
> On 13.06.2024 18:56, Roger Pau Monne wrote:
> > Currently there's logic in fixup_irqs() that attempts to prevent
> > _assign_irq_vector() from failing, as fixup_irqs() is required to evacuate all
> > interrupts from the CPUs not present in the input mask.  The current logic in
> > fixup_irqs() is incomplete, as it doesn't deal with interrupts that have
> > move_cleanup_count > 0 and a non-empty ->arch.old_cpu_mask field.
> > 
> > Instead of attempting to fixup the interrupt descriptor in fixup_irqs() so that
> > _assign_irq_vector() cannot fail, introduce logic in _assign_irq_vector()
> > to deal with interrupts that have either move_{in_progress,cleanup_count} set
> > and no remaining online CPUs in ->arch.cpu_mask.
> > 
> > If _assign_irq_vector() is requested to move an interrupt in the state
> > described above, first attempt to see if ->arch.old_cpu_mask contains any valid
> > CPUs that could be used as fallback, and if that's the case do move the
> > interrupt back to the previous destination.  Note this is easier because the
> > vector hasn't been released yet, so there's no need to allocate and setup a new
> > vector on the destination.
> > 
> > Due to the logic in fixup_irqs() that clears offline CPUs from
> > ->arch.old_cpu_mask (and releases the old vector if the mask becomes empty) it
> > shouldn't be possible to get into _assign_irq_vector() with
> > ->arch.move_{in_progress,cleanup_count} set but no online CPUs in
> > ->arch.old_cpu_mask.
> > 
> > However if ->arch.move_{in_progress,cleanup_count} is set and the interrupt has
> > also changed affinity, it's possible the members of ->arch.old_cpu_mask are no
> > longer part of the affinity set, move the interrupt to a different CPU part of
> > the provided mask and keep the current ->arch.old_{cpu_mask,vector} for the
> > pending interrupt movement to be completed.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> > --- a/xen/arch/x86/irq.c
> > +++ b/xen/arch/x86/irq.c
> > @@ -544,7 +544,58 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
> >      }
> >  
> >      if ( desc->arch.move_in_progress || desc->arch.move_cleanup_count )
> > -        return -EAGAIN;
> > +    {
> > +        /*
> > +         * If the current destination is online refuse to shuffle.  Retry after
> > +         * the in-progress movement has finished.
> > +         */
> > +        if ( cpumask_intersects(desc->arch.cpu_mask, &cpu_online_map) )
> > +            return -EAGAIN;
> > +
> > +        /*
> > +         * Due to the logic in fixup_irqs() that clears offlined CPUs from
> > +         * ->arch.old_cpu_mask it shouldn't be possible to get here with
> > +         * ->arch.move_{in_progress,cleanup_count} set and no online CPUs in
> > +         * ->arch.old_cpu_mask.
> > +         */
> > +        ASSERT(valid_irq_vector(desc->arch.old_vector));
> > +        ASSERT(cpumask_intersects(desc->arch.old_cpu_mask, &cpu_online_map));
> > +
> > +        if ( cpumask_intersects(desc->arch.old_cpu_mask, mask) )
> > +        {
> > +            /*
> > +             * Fallback to the old destination if moving is in progress and the
> > +             * current destination is to be offlined.  This is only possible if
> > +             * the CPUs in old_cpu_mask intersect with the affinity mask passed
> > +             * in the 'mask' parameter.
> > +             */
> > +            desc->arch.vector = desc->arch.old_vector;
> 
> I'm a little puzzled that you use desc->arch.old_vector here, but ...

old_vector can't be used here, as old_vector == desc->arch.vector at
this point.

The name of the variable is IMO a bit misleading, as its value only
becomes the old_vector once a new vector is assigned.  It would be
more appropriate to name it current_vector.

> > +            cpumask_and(desc->arch.cpu_mask, desc->arch.old_cpu_mask, mask);
> > +
> > +            /* Undo any possibly done cleanup. */
> > +            for_each_cpu(cpu, desc->arch.cpu_mask)
> > +                per_cpu(vector_irq, cpu)[desc->arch.vector] = irq;
> > +
> > +            /* Cancel the pending move and release the current vector. */
> > +            desc->arch.old_vector = IRQ_VECTOR_UNASSIGNED;
> > +            cpumask_clear(desc->arch.old_cpu_mask);
> > +            desc->arch.move_in_progress = 0;
> > +            desc->arch.move_cleanup_count = 0;
> > +            if ( desc->arch.used_vectors )
> > +            {
> > +                ASSERT(test_bit(old_vector, desc->arch.used_vectors));
> > +                clear_bit(old_vector, desc->arch.used_vectors);
> 
> ... old_vector here. Since we have the latter, uniformly using it might
> be more consistent.

Keep in mind that old_vector is a cache of the value of
desc->arch.vector at the start of the function.

> I realize though that irq_to_vector() has cases where
> it wouldn't return desc->arch.old_vector; I think, however, that in those
> case we can't make it here. Still I'm not going to insist on making the
> adjustment. Happy to make it though while committing, should you agree.
> 
> Also I'm not happy to see another instance of this pattern appear. In
> x86-specific code this is inefficient, as {set,clear}_bit resolve to the
> same insn as test_and_{set,clear}_bit(). Therefore imo more efficient
> would be
> 
>                 if (!test_and_clear_bit(old_vector, desc->arch.used_vectors))
>                     ASSERT_UNREACHABLE();
> 
> (and then the two if()s folded).
> 
> I've been meaning to propose a patch to the other similar sites ...

Oh, indeed.  IIRC I've copied this pattern from somewhere else without
noticing.  I'm happy for you to fold the test_and_clear_bit() call
into the parent if condition.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 11:31:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 11:31:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743021.1149916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJX3I-0000tD-Lb; Tue, 18 Jun 2024 11:31:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743021.1149916; Tue, 18 Jun 2024 11:31:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJX3I-0000t6-J7; Tue, 18 Jun 2024 11:31:04 +0000
Received: by outflank-mailman (input) for mailman id 743021;
 Tue, 18 Jun 2024 11:31:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TBP2=NU=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJX3G-0000t0-TT
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 11:31:02 +0000
Received: from mail-yw1-x112b.google.com (mail-yw1-x112b.google.com
 [2607:f8b0:4864:20::112b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3bed4ed4-2d66-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 13:30:59 +0200 (CEST)
Received: by mail-yw1-x112b.google.com with SMTP id
 00721157ae682-6326f1647f6so36809177b3.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 04:30:59 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5eb4a9csm65426756d6.80.2024.06.18.04.30.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 04:30:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3bed4ed4-2d66-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718710258; x=1719315058; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=jfDsGWpu8kmMigXDssbe2VS7M/If25548mxXexQvdaY=;
        b=KjB/rSIo+GMI3bg6l4n8AAGLOjLgFgAeWvXucmopJTCctOUkPy66Vv87pWRhRdBkeb
         3k/L9tw6pSUBd//Br7WW1Ra9fyeW3Z2W4A1mWNgW66x9C9H6lThcLns2If534xjtwarg
         V8Ao2DundyyY1wmP8TIgO9nQrR6DF4ztaF0F4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718710258; x=1719315058;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=jfDsGWpu8kmMigXDssbe2VS7M/If25548mxXexQvdaY=;
        b=u4FkNtKN5xD4wx1OHOc60bsWMBLyySqQoM6NPueEc+tA1m6mJsdDdkhINB1lzxbhSx
         ZFlMnPxlDKjm5fmAfJyP5GQsGUM1mIpEekabN91/teZ/5isa/cktjxZ4L8CQcMgTw4Ip
         M8JqxButK7ILsmZov7AARDx16Y3tLSTbhmTMTKQXX+q0bfDhgawnB7KYyu8AJ3UDhMxU
         7ONBttVPjHvwVAikGQbqHmZI7CP0HHH8nP9tBetO8Ixel52nq672ZujhTqb9ndN3Ziu3
         PMs+wGIbPQF6RFgwDPKKo09XYpTb15kzWDNjQMcJQ07fRLhoghRMFkqSZeA2PwKEG5mN
         7zGw==
X-Forwarded-Encrypted: i=1; AJvYcCXIyKS3Ql4AugBuyAtU6F3pS0Vg9EzcSoegHX8GVjJ5BcJHfwzVgi9KvlCymkgm5oNHVHvXjI6a7zwOJ5Hswn9yFiuBFPXdemwVXhV43UY=
X-Gm-Message-State: AOJu0Yx3sIW72g5Mjdvt4JCoCZcBkPMfTcq080/MpSvK4MqrBuIJS/N2
	l8CYIF5RKNkJ+nO4WsUzHPTNsPNW+wsODq780h0FjvNG7vCX0rz461+sFJGlgdgqZasmr49tqkZ
	K
X-Google-Smtp-Source: AGHT+IFNljBgGvji7v9Ju+ECIIS8KF+0wNUSyIiZazzVLjN3HK+Uc+rPTfxXuiJWkxQX5vQ4Y4pmKQ==
X-Received: by 2002:a81:7bc6:0:b0:62f:eab8:7a09 with SMTP id 00721157ae682-63224ef2521mr113565647b3.44.1718710256601;
        Tue, 18 Jun 2024 04:30:56 -0700 (PDT)
Date: Tue, 18 Jun 2024 13:30:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
Message-ID: <ZnFv7b4YNjeRXj6-@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com>

On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
> On 13.06.2024 18:56, Roger Pau Monne wrote:
> > fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.  Given
> > the CPU is to become offline, the normal migration logic used by Xen where the
> > vector in the previous target(s) is left configured until the interrupt is
> > received on the new destination is not suitable.
> > 
> > Instead attempt to do as much as possible in order to prevent losing
> > interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
> > currently the case)
> 
> Except (again) for smp_send_stop().

I guess I didn't word this properly; the point I was trying to make
is that, in the context of a CPU unplug, fixup_irqs() is always
called from the CPU that's going offline.

What about:

"If fixup_irqs() is called from the CPU to be offlined (as is
currently the case for CPU hot unplug) ..."

> > attempt to forward pending vectors when interrupts that
> > target the current CPU are migrated to a different destination.
> > 
> > Additionally, for interrupts that have already been moved from the current CPU
> > prior to the call to fixup_irqs() but that haven't been delivered to the new
> > destination (iow: interrupts with move_in_progress set and the current CPU set
> > in ->arch.old_cpu_mask) also check whether the previous vector is pending and
> > forward it to the new destination.
> > 
> > This allows us to remove the window with interrupts enabled at the bottom of
> > fixup_irqs().  Such a window wasn't safe anyway: references to the CPU to become
> > offline are removed from interrupts masks, but the per-CPU vector_irq[] array
> > is not updated to reflect those changes (as the CPU is going offline anyway).
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> >[...]
> > @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >          if ( desc->handler->disable )
> >              desc->handler->disable(desc);
> >  
> > +        /*
> > +         * If the current CPU is going offline and is (one of) the target(s) of
> > +         * the interrupt, signal to check whether there are any pending vectors
> > +         * to be handled in the local APIC after the interrupt has been moved.
> > +         */
> > +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
> > +            check_irr = true;
> > +
> >          if ( desc->handler->set_affinity )
> >              desc->handler->set_affinity(desc, affinity);
> >          else if ( !(warned++) )
> >              set_affinity = false;
> >  
> > +        if ( check_irr && apic_irr_read(vector) )
> > +            /*
> > +             * Forward pending interrupt to the new destination, this CPU is
> > +             * going offline and otherwise the interrupt would be lost.
> > +             */
> > +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> > +                          desc->arch.vector);
> 
> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
> where new IRQs ought to be surfacing only at the new destination). Doesn't
> this want moving ...
> 
> >          if ( desc->handler->enable )
> >              desc->handler->enable(desc);
> 
> ... past the actual affinity change?

Hm, but the ->enable() hook just unmasks the interrupt; the actual
affinity change is done in ->set_affinity(), and hence after the call
to ->set_affinity() no further interrupts should be delivered to the
CPU, regardless of whether the source is masked?

Or is it possible for the device/interrupt controller to not switch to
use the new destination until the interrupt is unmasked, and hence
could have pending masked interrupts still using the old destination?
IIRC, for MSI-X the device is required to update the destination
target once the entry is unmasked.

I'm happy to move it after the ->enable() hook, but would like to
better understand why this is required.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 12:46:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 12:46:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743050.1149927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYEN-0002vu-8m; Tue, 18 Jun 2024 12:46:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743050.1149927; Tue, 18 Jun 2024 12:46:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYEN-0002vn-4q; Tue, 18 Jun 2024 12:46:35 +0000
Received: by outflank-mailman (input) for mailman id 743050;
 Tue, 18 Jun 2024 12:46:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=An5i=NU=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJYEM-0002vh-DP
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 12:46:34 +0000
Received: from mail-ot1-x332.google.com (mail-ot1-x332.google.com
 [2607:f8b0:4864:20::332])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c99451bf-2d70-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 14:46:32 +0200 (CEST)
Received: by mail-ot1-x332.google.com with SMTP id
 46e09a7af769-6f98178ceb3so2951878a34.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 05:46:32 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5eb4dc1sm65847196d6.100.2024.06.18.05.46.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 05:46:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c99451bf-2d70-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718714791; x=1719319591; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Q3inkv2VK9BTIRLUHw7TDus7E6YrNOM+cOrHUXeW0n0=;
        b=MfNCAygjlqcRtFqTlaAzRWbatrBilLsG3dEZxeeDA9ZC4/KYZ0VA10D1BbnGHWgQ4m
         aWT5VBmg1oNTBiujzvocooT7y7j1UGD1xVE6JjbXR4YPZvh9zFWYu6k86kkiLZBPdnIP
         sgPOy3UQacAwUNAGpbLxoiNtAoqSVlmNa5M38=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718714791; x=1719319591;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Q3inkv2VK9BTIRLUHw7TDus7E6YrNOM+cOrHUXeW0n0=;
        b=SXNcVAoOVIMDHQu/tyD86O3xSVF4l0yxub1u04bmtU/UaBpbPWt82/mXTHwiSwg4TA
         n8cW1N+3bO2vhFVMtwGPrZ6sVjkQNopMMepMH4Q9/iWNsZvS0OvrXJOkYII/KBk+5c+0
         bE3cnI3pwIENwITbJh22tYYe8iJzZVtYhoBpN8rQiVk5v8HAL7St3Ywr5Pn7wOnmfmf7
         P5zooHxTWWs2w/HRLhclXBPpeMSgefClDPEkvrcnIkdS0dDGC3PQ2PWBEngKdG8H1q9N
         IyJcXUY3Rl3ilJTfvKOMUpIVF/Ps5DOS+TPEx/kkkxLmftSiDLlYfn6f59Yoi61czlSt
         pNsQ==
X-Forwarded-Encrypted: i=1; AJvYcCVXHieeD6lYJrliikwlHK3P8qwdXR6NXDNMQZM63TgI+eQH+jtwpXScSOQLFpXBmWO2ldVGgwHR6zJ09u3EPApvK3dbzRUrI4xuWe4Yggg=
X-Gm-Message-State: AOJu0Yze8AaDN2Vz5iaxPQE4xmpbNm0HR/sr2ng6Z4UYg9NHo4A25PEl
	ORO5YmMUmv0L/hebxxnoawq/36gE65gA/BstRsafx7AumYwdqTtdQnEqpJpyTLU=
X-Google-Smtp-Source: AGHT+IEkKpl7JK/kqiyPl56NU53+vw7BNCCoorwn1aek/kN8SdxFdptHiz8mzmGwr2qQNcrYO795SA==
X-Received: by 2002:a05:6830:3495:b0:6fc:3491:877 with SMTP id 46e09a7af769-6fc34910b59mr10570275a34.26.1718714790668;
        Tue, 18 Jun 2024 05:46:30 -0700 (PDT)
Message-ID: <9d43b148-2cb6-45c8-a84c-0e51219669fa@citrix.com>
Date: Tue, 18 Jun 2024 13:46:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19?] x86/Intel: unlock CPUID earlier for the BSP
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <82277592-ea96-47c8-a991-7afd97d7a7bc@suse.com>
 <f51b2240-03da-4aee-8972-a72b53916ce1@citrix.com>
 <e493035c-2954-418e-96fb-add1577df59f@suse.com>
 <8fb21b45-c803-4d37-8df8-3a1afa677ef7@citrix.com>
 <d3f9ae64-fb85-47b2-bb69-153c7734a0a3@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <d3f9ae64-fb85-47b2-bb69-153c7734a0a3@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 14/06/2024 12:12 pm, Jan Beulich wrote:
> On 14.06.2024 12:14, Andrew Cooper wrote:
>> On 14/06/2024 7:27 am, Jan Beulich wrote:
>>> On 13.06.2024 18:17, Andrew Cooper wrote:
>>>> On 13/06/2024 9:19 am, Jan Beulich wrote:
>>>>> Intel CPUs have an MSR bit to limit CPUID enumeration to leaf two. If
>>>>> this bit is set by the BIOS then CPUID evaluation does not work when
>>>>> data from any leaf greater than two is needed; early_cpu_init() in
>>>>> particular wants to collect leaf 7 data.
>>>>>
>>>>> Cure this by unlocking CPUID right before evaluating anything which
>>>>> depends on the maximum CPUID leaf being greater than two.
>>>>>
>>>>> Inspired by (and description cloned from) Linux commit 0c2f6d04619e
>>>>> ("x86/topology/intel: Unlock CPUID before evaluating anything").
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>> ---
>>>>> While I couldn't spot anything, it kind of feels as if I'm overlooking
>>>>> further places where we might be inspecting in particular leaf 7 yet
>>>>> earlier.
>>>>>
>>>>> No Fixes: tag(s), as imo it would be too many that would want
>>>>> enumerating.
>>>> I also saw that go by, but concluded that Xen doesn't need it, hence why
>>>> I left it alone.
>>>>
>>>> The truth is that only the BSP needs it.  APs sort it out in the
>>>> trampoline via trampoline_misc_enable_off, because they need to clear
>>>> XD_DISABLE prior to enabling paging, so we should be taking it out of
>>>> early_init_intel().
>>> Except for the (odd) case also mentioned to Roger, where the BSP might have
>>> the bit clear but some (or all) AP(s) have it set.
>> Fine I suppose.  It's a single MSR adjustment once per CPU.
>>
>>>> But, we don't have an early BSP-only hook, and I'm not overwhelmed
>>>> at the idea of exporting it from intel.c
>>>>
>>>> I was intending to leave it alone until I can burn this whole
>>>> infrastructure to the ground and make it work nicely with policies, but
>>>> that's not a job for this point in the release...
>>> This last part reads like the rest of your reply isn't an objection to me
>>> putting this in with Roger's R-b, but it would be nice if you could
>>> confirm this understanding of mine. Without this last part, it (especially
>>> the 2nd-from-last paragraph) certainly reads a little like an objection.
>> I'm -1 to this generally.  It's churn without fixing anything AFAICT,
> How so? We clearly do the adjustment too late right now for the BSP.
> All the leaf-7 stuff added to early_cpu_init() (iirc in part in the course
> of speculation work) is useless on a system where firmware set that flag.

After discussing this at the x86 maintainers meeting, I agree that it is
fixing a potential bug.

So, while I really dislike the patch, it's the right approach in the
short term, and should go in.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 13:01:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 13:01:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743057.1149937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYSG-0005ks-D1; Tue, 18 Jun 2024 13:00:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743057.1149937; Tue, 18 Jun 2024 13:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYSG-0005kl-9c; Tue, 18 Jun 2024 13:00:56 +0000
Received: by outflank-mailman (input) for mailman id 743057;
 Tue, 18 Jun 2024 13:00:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=An5i=NU=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJYSE-0005kf-PT
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 13:00:54 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cb00a2fd-2d72-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 15:00:52 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id
 4fb4d7f45d1cf-57cc1c00b97so4308117a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 06:00:52 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72e9dd7sm7676040a12.57.2024.06.18.06.00.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 06:00:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb00a2fd-2d72-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718715652; x=1719320452; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=LygUhwAafTLJDGyqQwpjFQew1zv8q0DsIF8+8q2BsUI=;
        b=gVYdeyipkZeCt2nAK25FMnnSttKNRVzLkF1cqLP6016I9K+8K/HTcivVwR/o7p+ZAo
         v9WfsG8ck1FpuyQik0H5+kTUix6NfTj1CVgaUeZvgsgEkqwttCd5xryyz5olzhydnOZY
         OH68reV1rBYI7Mz9PPKFpZTACj53kE7+bH8Dw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718715652; x=1719320452;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=LygUhwAafTLJDGyqQwpjFQew1zv8q0DsIF8+8q2BsUI=;
        b=J/Ky4spyz9SoOvhCqG1EWV1lg7Gw3UFHhhtPzAGWPStzMGm9YlmhJHlGBBU+4h5RHU
         Qcd7T3scR5O2pSvkm8cG9151M2p3hhrB1Daa5i0CA/jfv+xuZAQZ//nK3MpveVlvWHVj
         NLpaXJKhW6EfiaiiTaF21nnPlvHCmLZcVSBl7qs0S/TqgqK+P3jkMKcS0sgaIbTjZmkL
         v+KM7Z4fQx8CFC/DhvAWWp8a+WHoXzADwDijDqEKJ3TD15M2NoVZROwATt1BkF8GMrVr
         616jU1tn9ag6tRG1meD589MPqNV7qpNnGFMFc6nb6NVtwV3cOClfHoSmHA8+SNmbUJ6Z
         inmw==
X-Gm-Message-State: AOJu0Ywf8PTkW+b1+e9gNx5SfzNdp+rX7WxZVr2Sf703JOA/ZOM2mqKi
	mAcZcYydJAHGRjHJF9gXw1xbLB4lwJjI7S2xysjfaaP4aFvsft11Wdt6nOH09VMYkyEdy0Lf51Z
	DjJ4=
X-Google-Smtp-Source: AGHT+IGEvmiTw/b4hOdDD0Xg67JH9GVOpj2zIuAqSChp4G0k/ytPJ1q5jrFjsIOpvoeCXfTM6Bw/+g==
X-Received: by 2002:a50:c350:0:b0:57d:4b7:a8d8 with SMTP id 4fb4d7f45d1cf-57d04b7b554mr253597a12.25.1718715651641;
        Tue, 18 Jun 2024 06:00:51 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Roberto Bagnara <roberto.bagnara@bugseng.com>,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	"consulting @ bugseng . com" <consulting@bugseng.com>
Subject: [PATCH for-4.19] xen/irq: Address MISRA Rule 8.3 violation
Date: Tue, 18 Jun 2024 14:00:48 +0100
Message-Id: <20240618130048.1768639-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When centralising irq_ack_none(), different architectures had different names
for the parameter of irq_ack_none().  As its type is struct irq_desc *, it
should be named desc.  Make this consistent.

No functional change.

Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: Roberto Bagnara <roberto.bagnara@bugseng.com>
CC: Nicola Vetrini <nicola.vetrini@bugseng.com>
CC: consulting@bugseng.com <consulting@bugseng.com>

Request for 4.19.  This was an accidental regression in a recent cleanup
patch, and the fix is just a rename - it's no functional change.
---
 xen/arch/arm/irq.c    | 4 ++--
 xen/include/xen/irq.h | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index c60502444ccf..6b89f64fd194 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -31,9 +31,9 @@ struct irq_guest
     unsigned int virq;
 };
 
-void irq_ack_none(struct irq_desc *irq)
+void irq_ack_none(struct irq_desc *desc)
 {
-    printk("unexpected IRQ trap at irq %02x\n", irq->irq);
+    printk("unexpected IRQ trap at irq %02x\n", desc->irq);
 }
 
 void irq_end_none(struct irq_desc *irq)
diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index adf33547d25f..580ae37e7428 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -134,7 +134,7 @@ void cf_check irq_actor_none(struct irq_desc *desc);
  * irq_ack_none() must be provided by the architecture.
  * irq_end_none() is optional, and opted into using a define.
  */
-void cf_check irq_ack_none(struct irq_desc *irq);
+void cf_check irq_ack_none(struct irq_desc *desc);
 
 /*
  * Per-cpu interrupted context register state - the inner-most interrupt frame

base-commit: 8b4243a9b560c89bb259db5a27832c253d4bebc7
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 13:06:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 13:06:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743063.1149947 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYXL-0006LB-Vf; Tue, 18 Jun 2024 13:06:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743063.1149947; Tue, 18 Jun 2024 13:06:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYXL-0006L4-Rl; Tue, 18 Jun 2024 13:06:11 +0000
Received: by outflank-mailman (input) for mailman id 743063;
 Tue, 18 Jun 2024 13:06:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJYXK-0006Ky-Qt
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 13:06:10 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86d122be-2d73-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 15:06:07 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57c60b13a56so6465838a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 06:06:07 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cbdfe1428sm6639758a12.27.2024.06.18.06.06.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 06:06:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86d122be-2d73-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718715967; x=1719320767; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=sgJyzSQXGbKHDq4saDfIHt0tUyvz/5gPNImLn2CE/rQ=;
        b=b3FL3S3EczRdlY8OiX5QrgHmhrDmurqk1JTtqRKdBqD8kawyn0CuJP1Tk6Ri3uAmQ7
         GO2f6/W4X2qtswU1DsB4KSpO0MK5s1EJ31WYylEseKKAzH+/tQwEIb1AEt4Ni+wZlRHj
         VqLmOkCV7SF1JNLtWQ+0r55AJJhioD/qmePJNmW9Ayv/z/Fv8A3ZduCLJ0fjvv7k2ZhU
         LSbm/TDLsmfJ38GnbhWBKV4YBzM5Z5RogsYaMlJ5YVXscaAIds5TCMYUZyJgjNw+rx7I
         zWKcF5rViNIUHnR4RdPHLIeyv5r53AIaGSULuaNEIae43wNTnT6L82j8FQRfEo6rv3cE
         adoA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718715967; x=1719320767;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sgJyzSQXGbKHDq4saDfIHt0tUyvz/5gPNImLn2CE/rQ=;
        b=J7GsjCpLAJr3lTEj3s+ZpcMInSDKt9I+PdDFmXwx4iKJAgUUvfAicnI/2dVoBGTyKz
         AkcBvQaaq3oAqZsi7zMkAxbA7aI81jkCh34NLrkbY59AkCBqUW6Kac70pYUgpMG+AAcN
         HMm8o6o4FyF4p0+WUdXKWMifK3fcqF4hLvP0i2JHXDsC8cNpZ8AOhD76XLNX2/avqSBd
         Bf90MM/TqRvgTYDGIRK18BcGmaJeOQauYihwnWDfDRIKpUV5IufCAO6VGMZk9v3a24gJ
         0+AvuLAj7ZztoTMi9BrM1tq6CvCXlWrRWSVCi8KjyNOg66KFxCxYmKPekrumk7TTF599
         MeSw==
X-Forwarded-Encrypted: i=1; AJvYcCWDdU+U5cShWi5gXsojUpild+1ZLHUKC2HgLRbX2PPJuPunU84hy7SavaftwaueP+olCZjWBf4aT46dFISn261mRX5ymCu6eyMbJnm0JMA=
X-Gm-Message-State: AOJu0YxW/sh8ZMlhgvrr865tPllx0ioFFjI8RyUcLgjvCFmHtB4fOV9u
	47GM1ovnX8dthGuHg1zkO0gHuocPujn8X8RldixO9CzijEhIcYbh1gce/5GDMw==
X-Google-Smtp-Source: AGHT+IGA8Ry1ScgoxdyLV52Q75jeK69dhaTRNJxZvTTAECJYuRH7WZDv8j+iy0uWCxk9Fqj1QJWWUQ==
X-Received: by 2002:a50:d65a:0:b0:572:1589:eb98 with SMTP id 4fb4d7f45d1cf-57cbd6641dfmr7808584a12.12.1718715967302;
        Tue, 18 Jun 2024 06:06:07 -0700 (PDT)
Message-ID: <06ce4039-9a03-415b-83b0-9df8dfe8bded@suse.com>
Date: Tue, 18 Jun 2024 15:06:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/irq: Address MISRA Rule 8.3 violation
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Roberto Bagnara <roberto.bagnara@bugseng.com>,
 Nicola Vetrini <nicola.vetrini@bugseng.com>,
 "consulting @ bugseng . com" <consulting@bugseng.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240618130048.1768639-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240618130048.1768639-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 18.06.2024 15:00, Andrew Cooper wrote:
> When centralising irq_ack_none(), different architectures had different names
> for the parameter of irq_ack_none().  As its type is struct irq_desc *, it
> should be named desc.  Make this consistent.
> 
> No functional change.
> 
> Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Tue Jun 18 13:12:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 13:12:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743071.1149957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYdS-00082u-Ik; Tue, 18 Jun 2024 13:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743071.1149957; Tue, 18 Jun 2024 13:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYdS-00082n-Fe; Tue, 18 Jun 2024 13:12:30 +0000
Received: by outflank-mailman (input) for mailman id 743071;
 Tue, 18 Jun 2024 13:12:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sJYdQ-00082f-Vs
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 13:12:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJYdQ-0001HP-0y; Tue, 18 Jun 2024 13:12:28 +0000
Received: from [15.248.2.238] (helo=[10.24.67.30])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJYdP-0002zp-Ny; Tue, 18 Jun 2024 13:12:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=gJaxuMfvp89J1+GtLdqn/VuAte4tCKIphQyPNu8UHlg=; b=Ar8GNMwg756LB8ngFVvPU8HDxW
	PBpFx/uH8Vk44okjJoMRWdy0mwst9zCdgstga8iOnM73ZT26GJ+ZKAFuvkpC3HjqM4T/9jwcKdVps
	Kcm0mzUKicwuQWwXhnmhnn6xdRmPyaATmq2sao9OcKm/E5ep1KmqqKXNgpyv0s/Diqoo=;
Message-ID: <f18706b7-de22-4c4a-a830-043e8fc7056b@xen.org>
Date: Tue, 18 Jun 2024 14:12:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/irq: Address MISRA Rule 8.3 violation
Content-Language: en-GB
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich
 <JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Roberto Bagnara <roberto.bagnara@bugseng.com>,
 Nicola Vetrini <nicola.vetrini@bugseng.com>,
 "consulting @ bugseng . com" <consulting@bugseng.com>
References: <20240618130048.1768639-1-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240618130048.1768639-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Andrew,

On 18/06/2024 14:00, Andrew Cooper wrote:
> When centralising irq_ack_none(), different architectures had different names
> for the parameter of irq_ack_none().  As its type is struct irq_desc *, it
> should be named desc.  Make this consistent.
> 
> No functional change.
> 
> Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 13:14:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 13:14:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743076.1149966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYf6-0000Ay-SI; Tue, 18 Jun 2024 13:14:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743076.1149966; Tue, 18 Jun 2024 13:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYf6-0000Ar-Pd; Tue, 18 Jun 2024 13:14:12 +0000
Received: by outflank-mailman (input) for mailman id 743076;
 Tue, 18 Jun 2024 13:14:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJYf5-0000Ag-HB; Tue, 18 Jun 2024 13:14:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJYf5-0001LH-GK; Tue, 18 Jun 2024 13:14:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJYf5-0003l2-9B; Tue, 18 Jun 2024 13:14:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJYf5-0002iW-8b; Tue, 18 Jun 2024 13:14:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tKJfor7VtybYBktKn6/7mI11ujD8+KApXHwaPKq/I9Q=; b=Y9dyUWJDrl1I52a3CyZpswvg2a
	B6cdS3mZj5TXimE2fCqGxo3JrjWkyp8L9H08bal3d2xNl+e6jH4AK5FqMcmKVR83kKT5maElYIdKP
	MuCNZJcRvvoAM3QGBxYMi3EaSp1MP6SBHYT1Es3Bvyxp3GjE9KmH9Oc8pZ9foT+Jy6pA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186394-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186394: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b0c5781671f322472836ff25ee11242f59aa9945
X-Osstest-Versions-That:
    ovmf=176b9d41f8f71c7572dab615a8d5259dd2cbc02a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 13:14:11 +0000

flight 186394 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186394/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b0c5781671f322472836ff25ee11242f59aa9945
baseline version:
 ovmf                 176b9d41f8f71c7572dab615a8d5259dd2cbc02a

Last test of basis   186392  2024-06-18 07:42:57 Z    0 days
Testing same since   186394  2024-06-18 11:42:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   176b9d41f8..b0c5781671  b0c5781671f322472836ff25ee11242f59aa9945 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 13:24:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 13:24:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743094.1149989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYow-0002Be-VF; Tue, 18 Jun 2024 13:24:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743094.1149989; Tue, 18 Jun 2024 13:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYow-0002BX-SK; Tue, 18 Jun 2024 13:24:22 +0000
Received: by outflank-mailman (input) for mailman id 743094;
 Tue, 18 Jun 2024 13:24:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=An5i=NU=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJYow-0002BR-2k
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 13:24:22 +0000
Received: from mail-qk1-x72c.google.com (mail-qk1-x72c.google.com
 [2607:f8b0:4864:20::72c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 119eae25-2d76-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 15:24:21 +0200 (CEST)
Received: by mail-qk1-x72c.google.com with SMTP id
 af79cd13be357-7955841fddaso442602185a.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 06:24:21 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-444a743c42esm20831cf.79.2024.06.18.06.24.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 06:24:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 119eae25-2d76-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718717059; x=1719321859; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=lcFO7b8bjtTpIe4yE5niynAMbFBLfap9jFo4QsJAN/0=;
        b=rzOnbGrqjYosBIAlchDjBVscsXFLlSmg+99p9OsQ9L6owDYBTJqzMuvT9Qu8F+P2Cp
         wG4RT55hpo0CzXKsajt9WcllkUUHGq3DflmXu2jysX8dMEjirPBO2nt/jUp+CBwq9nfg
         JLOt8sK68UEb/7yFLIEzjxyX9FEZu5MIi0dHE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718717059; x=1719321859;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lcFO7b8bjtTpIe4yE5niynAMbFBLfap9jFo4QsJAN/0=;
        b=vGfV3eWAKd5Xua4+A0z3ksJTTOTGaI9eivPEmTn+Y/CP2IzZPj0gP6Jop9+Bi55uxO
         +5oYEh/G3/OQzG1lpOMjhzb5g4gK/Oynt4gWjFLwN3tAzX+BySXc0GfsRXKGxb5EvDIj
         x5fY5YsEp90xtXEZKEdy7t69ostR7VcBVtJi8ZCIvhOXC8RqGKhkm/hidEtTaNJHLW/l
         YYPog8X8Pn2BXly1tIyDn4v2mQdwMKVaifR30e+Q78yQ2Px9j0Gryn6JdxXfK03MgiP8
         wbXLrTS59y3QWNCW3lJ+VaM9xKJ3ebEPtZFK0ckIOk0XwwwCSb/bs119MYhfyYKB4uqz
         se3A==
X-Forwarded-Encrypted: i=1; AJvYcCVUIu42eABmLTfMCaayI9vu8zVUkIKmNFrobFFqa1GijiAeadESr+Ma84HpqC7+cX2yV25C05fbWbJV6ZCg/Cfd+yE5+taVGYDSpeaMnIw=
X-Gm-Message-State: AOJu0YwWcAMTrlX0VZBlD7ZGA5yQbd1KJRMqDWiGuKHlFEknp3u/BVo9
	I6ALqiwfPYspD3IUq5CR3oEIm4NbrCxajFbHPTTO0YktY1JSk8b1I7c2Qv18ej0=
X-Google-Smtp-Source: AGHT+IG1BkgppzIm2PDNhGWLdLvFwmDNrCGqrixYYe6BUwlMKwgZsGdfZQfN4SnLjai8q1AtbP6MUg==
X-Received: by 2002:a05:620a:4507:b0:795:5f15:f9e8 with SMTP id af79cd13be357-79ba7738442mr379307885a.31.1718717059160;
        Tue, 18 Jun 2024 06:24:19 -0700 (PDT)
Message-ID: <13a5650d-8623-4fec-9383-ef04023257ff@citrix.com>
Date: Tue, 18 Jun 2024 14:24:16 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] avoid UB in guest handle arithmetic
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <227fbeda-1690-4158-8404-53b4236c0235@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <227fbeda-1690-4158-8404-53b4236c0235@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19/03/2024 1:26 pm, Jan Beulich wrote:
> At least XENMEM_memory_exchange can have huge values passed in the
> nr_extents and nr_exchanged fields. Adding such values to pointers can
> overflow, resulting in UB. Cast respective pointers to "unsigned long"
> while at the same time making the necessary multiplication explicit.
> Remaining arithmetic is, despite there possibly being mathematical
> overflow, okay as per the C99 spec: "A computation involving unsigned
> operands can never overflow, because a result that cannot be represented
> by the resulting unsigned integer type is reduced modulo the number that
> is one greater than the largest value that can be represented by the
> resulting type." The overflow that we need to guard against is checked
> for in array_access_ok().
>
> Note that in / down from array_access_ok() the address value is only
> ever cast to "unsigned long" anyway, which is why in the invocation from
> guest_handle_subrange_okay() the value doesn't need casting back to
> pointer type.
>
> In compat grant table code change two guest_handle_add_offset() to avoid
> passing in negative offsets.
>
> Since {,__}clear_guest_offset() need touching anyway, also deal with
> another (latent) issue there: They were losing the handle type, i.e. the
> size of the individual objects accessed. Luckily the few users we
> presently have all pass char or uint8 handles.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

There wants to be a xen: prefix in the subject.

But as for the UB aspect, I've checked that this does resolve the
failure identified by the XSA-212 XTF test.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>

CC'ing Oleksii as this wants to go into 4.19.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 13:34:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 13:34:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743101.1149999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYyp-0003yj-PX; Tue, 18 Jun 2024 13:34:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743101.1149999; Tue, 18 Jun 2024 13:34:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJYyp-0003yc-MQ; Tue, 18 Jun 2024 13:34:35 +0000
Received: by outflank-mailman (input) for mailman id 743101;
 Tue, 18 Jun 2024 13:34:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=An5i=NU=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJYyo-0003yG-8Q
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 13:34:34 +0000
Received: from mail-oo1-xc36.google.com (mail-oo1-xc36.google.com
 [2607:f8b0:4864:20::c36])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e38a50c-2d77-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 15:34:32 +0200 (CEST)
Received: by mail-oo1-xc36.google.com with SMTP id
 006d021491bc7-5b9f9e7176eso2662481eaf.2
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 06:34:32 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79a30cff770sm354577085a.3.2024.06.18.06.34.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 06:34:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e38a50c-2d77-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718717671; x=1719322471; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=CLnvtNLWT1SPBUo52qyhrmzGb9JB0iifrNjgzCoMNic=;
        b=en7liiBEy1smoqmgRsHCS2VlJlWtQLIPKM1jR0HfqvcBB1WxGRHw/yEZvMzCZE8Yeh
         +fsYyYbR5X1cUsGWH4T2CJ1VO+Joe8Cz1uqORtW5G+YhPwgWXtCiNZYHvZ6uqlOxUM4k
         Hm60cxn/FPQzTThJWahVxboGavsDdNVL/h3Wc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718717671; x=1719322471;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CLnvtNLWT1SPBUo52qyhrmzGb9JB0iifrNjgzCoMNic=;
        b=V+SqG+Sw7ht8gJpc3PRzLdVgDe/dpsVtaI0l0HEPkr02pvVvzqHLVhw+OFYrosVCot
         oT0OR18XJKPXk4+WA/R2sVeJSC2GJq559yDXvNDejWPdveyEiqks4lgWdr9/yWrJPztI
         W6u1KayO14g+hJpfLtwUOp9bX9P4CrkUEzSkLISsaXs7CByZhUrRSl5yhOopcvgtRMbR
         MPN1geoeieId2XIJvGW89AZzWfbec19YAdjoGRyN1lh2oypmSb/vNKNR/qjFi9uBtsBZ
         0xSotkLdTVNAZmTe6cozBTUu/4fUpDZ+Jo9uyzcW7Tb1DsGyUxeRIvmh3EzWn4Q8mCFT
         IzBg==
X-Forwarded-Encrypted: i=1; AJvYcCXuzRsgQSmq1xeGmSqkZfIbnr7HGwMxReBmHbe6qHcC0gjyu7C+e6/rP8vHUVlTC47sSMykQGgLtXkJiEhtuCSmq8OqvzsbPfDfnHQ4TbI=
X-Gm-Message-State: AOJu0YxQg08rDW29qfF9bTykozbSzK+PvviB0TxQ9axfGpcYYiovCHTE
	UggEYF8Xg6JrNELEZKTwa1ncwkjBRZWzj8rlXLUoymIqUIcZiyBgyOwARvbqa2E=
X-Google-Smtp-Source: AGHT+IEM+fSDtrYFlzMy06hw4NwpN+EtOpDDyINpMTDLMyWAHuW6AhqNv3U4PfgW7SAypOtBzu5XIA==
X-Received: by 2002:a05:6358:5390:b0:19f:4c1b:f0e4 with SMTP id e5c5f4694b2df-19fb500d7a3mr1515333955d.21.1718717670790;
        Tue, 18 Jun 2024 06:34:30 -0700 (PDT)
Message-ID: <a14df3e5-4b70-474c-b55a-a06147c18b25@citrix.com>
Date: Tue, 18 Jun 2024 14:34:28 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/ubsan: Fix UB in type_descriptor declaration
To: "Oleksii K." <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich
 <JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>
References: <20240617175521.1766698-1-andrew.cooper3@citrix.com>
 <69eb7670c15d31cad3d9cac919a69a5e85f04ce0.camel@gmail.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <69eb7670c15d31cad3d9cac919a69a5e85f04ce0.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18/06/2024 9:07 am, Oleksii K. wrote:
> On Mon, 2024-06-17 at 18:55 +0100, Andrew Cooper wrote:
>> struct type_descriptor is arranged with a NUL terminated string
> Should it be NULL instead of NUL?

NULL and NUL can be used interchangeably; they're different spellings
for the same thing.

In the ASCII spec, the character with value 0 is spelt NUL.

>
>> following the
>> kind/info fields.
>>
>> The only reason this doesn't trip UBSAN detection itself (on more
>> modern
>> compilers at least) is because struct type_descriptor is only
>> referenced in
>> suppressed regions.
>>
>> Switch the declaration to be a real flexible member.  No functional
>> change.
>>
>> Fixes: 00fcf4dd8eb4 ("xen/ubsan: Import ubsan implementation from
>> Linux 4.13")
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 14:12:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 14:12:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743115.1150012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJZZO-000123-I8; Tue, 18 Jun 2024 14:12:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743115.1150012; Tue, 18 Jun 2024 14:12:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJZZO-00011w-Fc; Tue, 18 Jun 2024 14:12:22 +0000
Received: by outflank-mailman (input) for mailman id 743115;
 Tue, 18 Jun 2024 14:12:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6aKm=NU=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sJZZM-00011q-SY
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 14:12:21 +0000
Received: from wfhigh8-smtp.messagingengine.com
 (wfhigh8-smtp.messagingengine.com [64.147.123.159])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c4495bf9-2d7c-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 16:12:19 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailfhigh.west.internal (Postfix) with ESMTP id 7FD29180007B;
 Tue, 18 Jun 2024 10:12:14 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 18 Jun 2024 10:12:15 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 18 Jun 2024 10:12:12 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4495bf9-2d7c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:subject
	:subject:to:to; s=fm1; t=1718719934; x=1718806334; bh=PUyO7rEqEy
	UMRxDgGs+JjU2AQ3HsAsI5ZlgH+/pmHr4=; b=h/Fp34FRivK2MGFipBl5dFBabT
	wGT7vNPIn5LkChuZpJMnTaCqrbxDeWXeOWjuti+G//juPpNz0FTfQs7YMQ7SQLX5
	Yg0oGkyQwmnSeLVfTYFMfZjG79dLVHHHswpb3AEz3q5j7ca1sEfHPQQQX8ZFSVaH
	47pE+bLFrN5hXJy9NizLY/b1Odi4CFJjBeqm6oUlu6T8Q593iMkhiwt9sgnbFoJl
	4xZck/in6AAG44nhd36FzjBV95Rs/Ghp7hvrYO7dfk4MFukNTpJuaueDTQyoXChe
	txqRzesA9pv1h8TVpp+U2zmprDT/VXROs/AXENwMywYFFW2B9Ku48o4sqgcw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1718719934; x=
	1718806334; bh=PUyO7rEqEyUMRxDgGs+JjU2AQ3HsAsI5ZlgH+/pmHr4=; b=P
	y1JfzbVT/7Bl/qHNtB95zZ6JPVF0FIOLMHKJZBvn9P0Tdfmv4q4vR1XYx3WxzRxp
	qFmKzvioFVU9bY57DevQuRQU2mPBLNOLMQMCQSIZ60ZaDs1mE40ViA0y0YW5kjBe
	rEZn9E8nv+D6M/9rzXZvfDUhMqRXvrvbneMNJP9Vhv0c7H4Go2qIKSjZQtDztO51
	c381Ir4hbdkxH6wrkWZ2JWYtgs9JPA0mBqVf3Hdm7837e4r700/evzMw8VnxKOTb
	6uuGMKkdEKGDuIC7ZkRJLbVlllMeQQ7PjUMwmxNH50Y7ffwrBd2auRQZ45GK2yl9
	bu4+CZQQuA114nZavUYYQ==
X-ME-Sender: <xms:vZVxZhgPqeKTTQIt9vuoV5faWXgMpHSRmnDgKn7Y4XbisKFZDiOM3A>
    <xme:vZVxZmAZoWcnL2ec8lbGk8jKF-ldjb1DJAZTymEE-beuk_DvC6oKNDWSHPycHPHIA
    ybaBmaMh0XrfLQ>
X-ME-Received: <xmr:vZVxZhFmRHhxafYg5s0QzU2z95sTKltlFTQ_MbYsxBL0FzlFxqDOWl0gGy1VJExF9vh0HR2Ii7bNRYajCrH6NV5UcwEIcw66-8-6YCcqSOtOBaba>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedvkedggeegucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtgfgjgesthekredttddtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepudfgieetueeuueeihefhfeetudfh
    iefgteekuedvgfeuhffggeegfedvkeegkeeinecuvehluhhsthgvrhfuihiivgeptdenuc
    frrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhs
    lhgrsgdrtghomh
X-ME-Proxy: <xmx:vZVxZmQnYsMF8uzLv4z65V6PgrYt9naVdkzZrC9LC0L_Uz3sV6y3qA>
    <xmx:vZVxZuxhoqSr1HGwzXO9nJ5F1OwQ77Ak9SSEZQtpQh0tsS20HAgLxA>
    <xmx:vZVxZs5VlGsOOnk1imAZxYYZV51vo2OCPOwWeCOacCzAwS9E0dxFMw>
    <xmx:vZVxZjwuxUOn8g82LmiWiCOvpDx4ct9LLoGnZp6QAE7pNaglyRSjRw>
    <xmx:vpVxZifpn9YmOCRm3CdF8lKxLmbq9rqr8gYS8I4BG6YZYUHuO56iS8XI>
Feedback-ID: iac594737:Fastmail
Date: Tue, 18 Jun 2024 10:12:11 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Christian =?utf-8?B?S8O2bmln?= <christian.koenig@amd.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Direct Rendering Infrastructure development <dri-devel@lists.freedesktop.org>,
	Qubes OS Development Mailing List <qubes-devel@googlegroups.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZnGVu9TjHKiEqxsu@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
 <Zm_p1QvoZcjQ4gBa@macbook>
 <ZnCglhYlXmRPBZXE@mail-itl>
 <ZnDbaply6KaBUKJb@itl-email>
 <0b00c8f9-fb79-4b11-ae22-931205653203@amd.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; x-action=pgp-signed
Content-Transfer-Encoding: 8bit
In-Reply-To: <0b00c8f9-fb79-4b11-ae22-931205653203@amd.com>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On Tue, Jun 18, 2024 at 08:33:38AM +0200, Christian König wrote:
> Am 18.06.24 um 02:57 schrieb Demi Marie Obenour:
> > On Mon, Jun 17, 2024 at 10:46:13PM +0200, Marek Marczykowski-Górecki
> > wrote:
> > > On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
> > >> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> > >>> In both cases, the device physical
> > >>> addresses are identical to dom0’s physical addresses.
> > >>
> > >> Yes, but a PV dom0 physical address space can be very scattered.
> > >>
> > >> IIRC there's a hypercall to request physically contiguous memory for
> > >> PV, but you don't want to be using that every time you allocate a
> > >> buffer (not sure it would support the sizes needed by the GPU
> > >> anyway).
> > 
> > > Indeed that isn't going to fly. In older Qubes versions we had PV
> > > sys-net with PCI passthrough for a network card. After some uptime it
> > > was basically impossible to restart and still have enough contiguous
> > > memory for a network driver, and there it was about _much_ smaller
> > > buffers, like 2M or 4M. At least not without shutting down a lot more
> > > things to free some more memory.
> > 
> > Ouch!  That makes me wonder if all GPU drivers actually need physically
> > contiguous buffers, or if it is (as I suspect) driver-specific. CCing
> > Christian König who has mentioned issues in this area.
> 
> Well GPUs don't need physically contiguous memory to function, but if they
> only get 4k pages to work with it means a quite large (up to 30%)
> performance penalty.

The status quo is "no GPU acceleration at all", so 70% of bare metal
performance would be amazing right now.  However, the implementation
should not preclude eliminating this performance penalty in the future.

What size pages do GPUs need for good performance?  Is it the same as
CPU huge pages?  PV dom0 doesn't get huge pages at all, but PVH and HVM
guests do, and the goal is to move away from PV guests as they have lots
of unrelated problems.

> So scattering memory like you described is probably a very bad idea if you
> want any halfway decent performance.

For an initial prototype a 30% performance penalty is acceptable, but
it's good to know that memory fragmentation needs to be avoided.

> Regards,
> Christian

Thanks for the prompt response!
- -- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZxlbsACgkQsoi1X/+c
IsG+WhAA00y83cU94MMJCuDMqTCSOgJraPchvQHLBuMIB0cJkIbVxhA2T4yuvVZy
Bzg/oVvWJH8B+p47HHo6uyjoPoeO659q8Hyea6zT8yMrKhiwOF8UxFRyxakdYHRs
l793sCwUtMFwkJdsfacTSKjL6sMktWhicvOqX4rA/SIVpwzZh1auFjAIrZ2BENb/
YIRH18Dfl2iEOA2W3TQTNiaqLeT2qtYspDVVLuUeAe7OAFCJVSkeMpAPPR15jCzm
Ou0HP6JP2jH6h7Shd09ns+3UvQK4xaygpvEsj+BwpXPf2CDNgypKHezqgF1WMzCc
HGXK1deGXE35XNH4EL5jgRlF7FmLT54CXuMpPIGbfNWbT2fvpoS2tyrdQPHxwgr8
lqqqfjugZ9qzbqA4v/m+v0cKFclMvSYL8Rzn+tbz8kAFf7VTglypY55RIIStdnSZ
sLYStA6qv8Mcu4NHYvdGeatTS26XR72X+dB5ApTn4dLLttnzbXMAyqDSTys28XQb
jeHnh1uTOLChODJHu5prHJ6bN0MxmISwFuot58gW/iI0spyihRhPNjZ/6E/7BpIm
8AGiT+p96dvaymLB5k6dqj5ruqVPP8HLBibB8zafzJn3JIJpjCZm9HM5YcO7xMQ2
92ZNZ/XOswah+0s6MyWDCsU8jKnhQ87ESnB4JItI5skKj+001Jg=
=ddxn
-----END PGP SIGNATURE-----


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 14:24:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 14:24:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743124.1150023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJZkd-0002sU-Hw; Tue, 18 Jun 2024 14:23:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743124.1150023; Tue, 18 Jun 2024 14:23:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJZkd-0002sN-F0; Tue, 18 Jun 2024 14:23:59 +0000
Received: by outflank-mailman (input) for mailman id 743124;
 Tue, 18 Jun 2024 14:23:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJZkc-0002sD-5I; Tue, 18 Jun 2024 14:23:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJZkc-0002dS-12; Tue, 18 Jun 2024 14:23:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJZkb-0005ds-Mm; Tue, 18 Jun 2024 14:23:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJZkb-0000K6-MM; Tue, 18 Jun 2024 14:23:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p336wnOcRfHwF49Et89EgYFW6sKuHkoEsUZHv8CeaSI=; b=v2Xdl1A+7GoJUXqfmGnLm9TJsa
	G+D6aOfd+AoVd7T0QEvrUO1ZfO45osUQCbR0zc9Si18x4kLva5A3Ao2R3Evia4KoB6WRC5l5ka+jh
	2TJwC+YLqR/O5EkLIaAMU8WhVkbPcBiwk8zuwLXEj8QVFNrtAbvaN73L//700P2+sGDs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186390-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186390: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=3fff8c91b059ec450a1c94c8fced685a7de19d42
X-Osstest-Versions-That:
    libvirt=991c324fae2cfca8d592437ffc386171d343c836
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 14:23:57 +0000

flight 186390 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186390/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186343
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              3fff8c91b059ec450a1c94c8fced685a7de19d42
baseline version:
 libvirt              991c324fae2cfca8d592437ffc386171d343c836

Last test of basis   186343  2024-06-14 04:22:22 Z    4 days
Testing same since   186390  2024-06-18 04:18:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Daniel P. Berrangé <berrange@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   991c324fae..3fff8c91b0  3fff8c91b059ec450a1c94c8fced685a7de19d42 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 14:34:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 14:34:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743136.1150033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJZvC-0004hY-HL; Tue, 18 Jun 2024 14:34:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743136.1150033; Tue, 18 Jun 2024 14:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJZvC-0004hR-EE; Tue, 18 Jun 2024 14:34:54 +0000
Received: by outflank-mailman (input) for mailman id 743136;
 Tue, 18 Jun 2024 14:34:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJZvC-0004hL-2j
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 14:34:54 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eca9e5a3-2d7f-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 16:34:52 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6f177b78dcso702281666b.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 07:34:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f427b0sm620412066b.156.2024.06.18.07.34.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 07:34:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eca9e5a3-2d7f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718721292; x=1719326092; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=uKLUkESMTrOsQqMMfXqsg15PVib2Z0WuGw7Z25qzMog=;
        b=J0BY/IiVwusI8NKYMytM4etaR/qODUbY8ZZsNBzV2jWYB0koR19ojX5DposGp4aJ1m
         UY/ZQJjxeN2lSNvVgEBXo5AHFidO6s02cnuppStSJa4vifEgfV3V0L3GiEOwu2j28OgO
         IS7zNo0w98EgGPX1K0tb3FaNuJGgtRQmciUKWA6f4VGXPLmCMbtUbetHT6iUqh8OLARr
         QdezO7ojErHV8whuQhCeNSKtZm8+m2+8NqSME3DEOKeDDE06qePGOots6+oC+xWK43zh
         WGkDFgtQd5td3RzHjBfxuJrLkPond8HZLah/uWn1kRMPu3eqXbx1QtkY9Pa0/+3YXhAY
         fuuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718721292; x=1719326092;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=uKLUkESMTrOsQqMMfXqsg15PVib2Z0WuGw7Z25qzMog=;
        b=vgQFouqhezzVPl/RNdZtKQmRTKhCR/Uz0YzFgne81lcZm/dJZcEWTPsEPi/aQY9PYn
         67EC98/XpGQpBFA/v3jeO8ZZleJRooP0GR7CwF1fwHA5WF4VrQBXA27SjvIhSRr9Yklv
         CVFBTkS4czynq2AwMvsEj3oHucKH4+4Ls3kQAzmiyL9SEh1SgJs7+usW9MeL8CBzxjL+
         jqMtVMc0QtT49kCsl+yzGEieYzXpVMLbkbjuUuA4ETO6EHFYkC5lCcKUSfYyzcFaxEa+
         p4I9Fvc5qgr3LHNvHcfo/ioGhQpjqi0hQ80WCnxxKtlEpOP30gCmA7EliaWzbHbkBOOV
         C3xQ==
X-Forwarded-Encrypted: i=1; AJvYcCXk+nAtuo0Gz17qY5SHtRZnL85LiCpbPgLoPuA4TtZFc9MKuRfus0NDjdN8eFcp19JWniXhjKSrNhw80gvgsiE6JkpPNSuOhS/Za+zkhRM=
X-Gm-Message-State: AOJu0YyBO0qhJbr1GfGs+xjTM9KxP/p8pg6qnUKePNccqWzz2nrV8MAM
	k7ja/pd8rkqLw9Fo9OSw/MtKXOX8EZY5t/14vQuFmheJpuzB/jkDC9MT+vMLULZzMTYRogwp2gQ
	=
X-Google-Smtp-Source: AGHT+IE64b/WygIA61duSYkXqUQ0/f+T9xlJVymO7iOt8QFLUW2tJxGYxCXiA3BIo/chAqnT5dCF9w==
X-Received: by 2002:a17:907:11c3:b0:a6e:fe98:946c with SMTP id a640c23a62f3a-a6f60d1d1demr815192266b.23.1718721292253;
        Tue, 18 Jun 2024 07:34:52 -0700 (PDT)
Message-ID: <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com>
Date: Tue, 18 Jun 2024 16:34:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com> <ZnFv7b4YNjeRXj6-@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZnFv7b4YNjeRXj6-@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18.06.2024 13:30, Roger Pau Monné wrote:
> On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
>> On 13.06.2024 18:56, Roger Pau Monne wrote:
>>> fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.  Given
>>> the CPU is to become offline, the normal migration logic used by Xen where the
>>> vector in the previous target(s) is left configured until the interrupt is
>>> received on the new destination is not suitable.
>>>
>>> Instead attempt to do as much as possible in order to prevent losing
>>> interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
>>> currently the case)
>>
>> Except (again) for smp_send_stop().
> 
> I guess I haven't worded this properly; the point I was trying to make
> is that, in the context of a CPU unplug, fixup_irqs() is always called
> from the CPU that's going offline.
> 
> What about:
> 
> "If fixup_irqs() is called from the CPU to be offlined (as is
> currently the case for CPU hot unplug) ..."

Sounds good to me.

>>> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>>>          if ( desc->handler->disable )
>>>              desc->handler->disable(desc);
>>>  
>>> +        /*
>>> +         * If the current CPU is going offline and is (one of) the target(s) of
>>> +         * the interrupt, signal to check whether there are any pending vectors
>>> +         * to be handled in the local APIC after the interrupt has been moved.
>>> +         */
>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
>>> +            check_irr = true;
>>> +
>>>          if ( desc->handler->set_affinity )
>>>              desc->handler->set_affinity(desc, affinity);
>>>          else if ( !(warned++) )
>>>              set_affinity = false;
>>>  
>>> +        if ( check_irr && apic_irr_read(vector) )
>>> +            /*
>>> +             * Forward pending interrupt to the new destination, this CPU is
>>> +             * going offline and otherwise the interrupt would be lost.
>>> +             */
>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>> +                          desc->arch.vector);
>>
>> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
>> where new IRQs ought to be surfacing only at the new destination). Doesn't
>> this want moving ...
>>
>>>          if ( desc->handler->enable )
>>>              desc->handler->enable(desc);
>>
>> ... past the actual affinity change?
> 
> Hm, but the ->enable() hook is just unmasking the interrupt, the
> actual affinity change is done in ->set_affinity(), and hence after
> the call to ->set_affinity() no further interrupts should be delivered
> to the CPU regardless of whether the source is masked?
> 
> Or is it possible for the device/interrupt controller to not switch to
> use the new destination until the interrupt is unmasked, and hence
> could have pending masked interrupts still using the old destination?
> IIRC for MSI-X it's required that the device updates the destination
> target once the entry is unmasked.

That's all not relevant here, I think. IRR gets set when an interrupt is
signaled, no matter whether it's masked. It's its handling which the
masking would prevent, i.e. the "moving" of the set bit from IRR to ISR.
Plus we run with IRQs off here anyway if I'm not mistaken, so no
interrupt can be delivered to the local CPU. IOW whatever IRR bits it
has set (including ones becoming set between the IRR read and the actual
vector change), those would never be serviced. Hence the reading of the
bit ought to occur after the vector change: It's only then that we know
the IRR bit corresponding to the old vector can't become set anymore.

And even then we're assuming that no interrupt signals might still be
"on their way" from the IO-APIC or a posted MSI message write by a
device to the LAPIC (I have no idea how to properly fence that, or
whether there are guarantees for this to never occur).

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 14:44:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 14:44:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743144.1150043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJa3x-0006Xg-BQ; Tue, 18 Jun 2024 14:43:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743144.1150043; Tue, 18 Jun 2024 14:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJa3x-0006XZ-8U; Tue, 18 Jun 2024 14:43:57 +0000
Received: by outflank-mailman (input) for mailman id 743144;
 Tue, 18 Jun 2024 14:43:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TBP2=NU=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJa3w-0006XT-AT
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 14:43:56 +0000
Received: from mail-qk1-x72e.google.com (mail-qk1-x72e.google.com
 [2607:f8b0:4864:20::72e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2ef8ed43-2d81-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 16:43:54 +0200 (CEST)
Received: by mail-qk1-x72e.google.com with SMTP id
 af79cd13be357-7955af79812so303153585a.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 07:43:54 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abc0cf98sm523962485a.80.2024.06.18.07.43.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 07:43:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ef8ed43-2d81-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718721833; x=1719326633; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=H1+SS+CgyscfT+wjW8/qeAG6YffPFvdXGu7/EUf5Kw8=;
        b=LGiFohsrFy67YAQKwk1TnA+RIR3StQiu6cgWv0iDAA78s5DySfwCKH/e2Ajf7pe9DF
         7eiFXPl2fvol9UE3yN0itLkeMtR4b5nMbhQ/5NC1ZCOgJSr7vOSUKXXXkPjCHb1NBSBD
         cZ+8KiXbwIIyq7aCmg3Q+PpxI7CbV9LqlJuPY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718721833; x=1719326633;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=H1+SS+CgyscfT+wjW8/qeAG6YffPFvdXGu7/EUf5Kw8=;
        b=DNw7VOAnOUx9k0xgiTeYOlt0K6RPXJ3z2a3Fl0ycafJ1GYvflYSCNHFHe7lsTtqhx1
         04w2zUIOC1lytQ0aTCgOaUSoX4CViOCw6Kq7ThH2MgoC3u+ftq9Pt62dyq6d8xtZgWVm
         +iPKIEsu0TJ6OF36KvWkPIUPef/RhP/qzeAaz6u5oV5HOujfS5ld/HHggCwmQ3to02Rw
         DwPK7NGLORSURbnn3Y1s1Z5FMuPNDaXfT/HClRgLPpvJ8HDT38KtAsIn9uSOsF3OliYW
         aoh8fYKO1wDnReGJmyejXyj//ssSn6DEOxqLSKhGfUSg9slbldbqbwPqMMWYeA40UtJZ
         jpkg==
X-Forwarded-Encrypted: i=1; AJvYcCWD+lLHRXa9TjASWuMUwYl4dGH1psolLyAK84+a0PttbWswjXdAeXMOepBR3YDMgeFySEL1emRQ8F/7/+lTd5P4SqFCmkhHgEdgmhCuDV4=
X-Gm-Message-State: AOJu0YwaSsw2KRprEBYgw5Fmo0vUtQHLibse9mRJ9j9XKYKcWHVNKzJM
	gh2f5KJkPXnwajHd4fz4+rxZ8ffDxhlLPQFcE4Tyrnn0srX1NXMuG821HK+W2gk=
X-Google-Smtp-Source: AGHT+IE1YlRuWEi0vQfDOPjPfHR2PbvrEzE38XoxBE6yNs4Wq+SxKec9MZIMG60RC/XOLzcX9esWng==
X-Received: by 2002:a05:620a:2a11:b0:797:ee31:c39d with SMTP id af79cd13be357-798d23f0d1dmr1496021585a.11.1718721832764;
        Tue, 18 Jun 2024 07:43:52 -0700 (PDT)
Date: Tue, 18 Jun 2024 16:43:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Demi Marie Obenour <demi@invisiblethingslab.com>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Direct Rendering Infrastructure development <dri-devel@lists.freedesktop.org>,
	Christian =?utf-8?B?S8O2bmln?= <christian.koenig@amd.com>,
	Qubes OS Development Mailing List <qubes-devel@googlegroups.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZnGdJoCtbIrf4-dW@macbook>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
 <Zm_p1QvoZcjQ4gBa@macbook>
 <ZnCglhYlXmRPBZXE@mail-itl>
 <ZnDbaply6KaBUKJb@itl-email>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <ZnDbaply6KaBUKJb@itl-email>

On Mon, Jun 17, 2024 at 08:57:14PM -0400, Demi Marie Obenour wrote:
> Given the recent progress on PVH dom0, is it reasonable to assume that
> PVH dom0 will be ready in time for R4.3, and that therefore Qubes OS
> doesn't need to worry about this problem on x86?

PVH dom0 will only be ready (whatever ready means in your use-case)
when people test and fix the issues; otherwise it will stay in the
same limbo it's currently in.

I guess the main blocker for Qubes testing it more aggressively is
the lack of PCI passthrough support?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 14:50:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 14:50:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743155.1150052 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaAG-0008I3-0I; Tue, 18 Jun 2024 14:50:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743155.1150052; Tue, 18 Jun 2024 14:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaAF-0008Hw-Te; Tue, 18 Jun 2024 14:50:27 +0000
Received: by outflank-mailman (input) for mailman id 743155;
 Tue, 18 Jun 2024 14:50:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TBP2=NU=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJaAE-0008Hq-VU
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 14:50:26 +0000
Received: from mail-qt1-x834.google.com (mail-qt1-x834.google.com
 [2607:f8b0:4864:20::834])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 17dc6cc9-2d82-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 16:50:24 +0200 (CEST)
Received: by mail-qt1-x834.google.com with SMTP id
 d75a77b69052e-44051a92f37so45034151cf.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 07:50:24 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-441f310c9f3sm55928931cf.94.2024.06.18.07.50.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 07:50:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17dc6cc9-2d82-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718722223; x=1719327023; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=sHm6Lq/dBzn2nekR8fHZVMB6KXyV9j+XN5zY9w7Xlfg=;
        b=VNqT4sGG9UY8uX3P/WMYuqgHoRd8eRkCc+HC/e24tQ61SGLqs5EDiWQDQS8iSOkRbk
         zTDQNXiieF+bJcyiHMScG3mcQfMCkt+m5TBx62v8kI8rjbxIBxzM2uYN3d4YkbFxYTx3
         +oYuk2a6cpbA/hdKeElvkJLm5W6zkSAFQVxD0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718722223; x=1719327023;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=sHm6Lq/dBzn2nekR8fHZVMB6KXyV9j+XN5zY9w7Xlfg=;
        b=H0pY8rniu1XPF8J3RHnI9AthtETjdQ96zn+AVCkJrP2bPNFBECGFWFJD3QjJzStyl4
         4nPeTMEdV0oDox0/ei+MzFjbtGXHsvMkf/PbVKCyTQJjLH5OmM0LRZMtm3nOTxHA4qlX
         iHL7PFjXaPMNxM89gjUaEfJJ6RhozxWfageriAgPYaH9u2po9oaqRk4MzKalSe3BXFRp
         0Iwo8wInj1GnouIqRdqOzABk5vtSffCcKTYVI8+ziVVddy9dsdGEoLkAjMDYQ6m2dIU8
         T0aL5SZBY0RemGU78Qwha0SXty4TFisMpouNANlsL9fRte9Ev6rg0+gUh3jEbpJT2EKM
         yhAg==
X-Forwarded-Encrypted: i=1; AJvYcCUD5qaiQDA+pbDHm65WN5FHScs0vpTOSRbGeXJmEwP9UbdqUoU7LUDPVD1YqPWK4UkGIu/Ccsoh2RCufCwTm/i7WFROsx5qdE8lzEtrtOw=
X-Gm-Message-State: AOJu0Yzd7RGKOs48n80mSYEoTOmdYXATBgd8P/z8eMQMDTysN5LpRG6b
	Gw+4LBs9q345GYpe+hx2XKLjBh0ynvgmdjKp/hEor4BpPMQrZREk+ZX0u/P/sUTp4yibhUcI0Sa
	s
X-Google-Smtp-Source: AGHT+IFzZ3DO7sBqc/aNKF7cs90d7dtPBY2NGr/Um6aP1QEh6UQq7+HshwlxMEwgs75pOYiBQjWG+g==
X-Received: by 2002:a05:622a:1a8b:b0:440:57ca:5b6b with SMTP id d75a77b69052e-4449bb37ca4mr51502951cf.18.1718722223417;
        Tue, 18 Jun 2024 07:50:23 -0700 (PDT)
Date: Tue, 18 Jun 2024 16:50:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
Message-ID: <ZnGerbiI7P9PHPmK@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com>
 <ZnFv7b4YNjeRXj6-@macbook>
 <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com>

On Tue, Jun 18, 2024 at 04:34:50PM +0200, Jan Beulich wrote:
> On 18.06.2024 13:30, Roger Pau Monné wrote:
> > On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
> >> On 13.06.2024 18:56, Roger Pau Monne wrote:
> >>> fixup_irqs() is used to evacuate interrupts from to be offlined CPUs.  Given
> >>> the CPU is to become offline, the normal migration logic used by Xen where the
> >>> vector in the previous target(s) is left configured until the interrupt is
> >>> received on the new destination is not suitable.
> >>>
> >>> Instead attempt to do as much as possible in order to prevent losing
> >>> interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
> >>> currently the case)
> >>
> >> Except (again) for smp_send_stop().
> > 
> > I guess I haven't worded this properly, the point I was trying to make
> > is that in the context of a CPU unplug fixup_irqs() is always called
> > from the CPU that's going offline.
> > 
> > What about:
> > 
> > "If fixup_irqs() is called from the CPU to be offlined (as is
> > currently the case for CPU hot unplug) ..."
> 
> Sounds good to me.
> 
> >>> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >>>          if ( desc->handler->disable )
> >>>              desc->handler->disable(desc);
> >>>  
> >>> +        /*
> >>> +         * If the current CPU is going offline and is (one of) the target(s) of
> >>> +         * the interrupt, signal to check whether there are any pending vectors
> >>> +         * to be handled in the local APIC after the interrupt has been moved.
> >>> +         */
> >>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
> >>> +            check_irr = true;
> >>> +
> >>>          if ( desc->handler->set_affinity )
> >>>              desc->handler->set_affinity(desc, affinity);
> >>>          else if ( !(warned++) )
> >>>              set_affinity = false;
> >>>  
> >>> +        if ( check_irr && apic_irr_read(vector) )
> >>> +            /*
> >>> +             * Forward pending interrupt to the new destination, this CPU is
> >>> +             * going offline and otherwise the interrupt would be lost.
> >>> +             */
> >>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> >>> +                          desc->arch.vector);
> >>
> >> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
> >> where new IRQs ought to be surfacing only at the new destination). Doesn't
> >> this want moving ...
> >>
> >>>          if ( desc->handler->enable )
> >>>              desc->handler->enable(desc);
> >>
> >> ... past the actual affinity change?
> > 
> > Hm, but the ->enable() hook is just unmasking the interrupt, the
> > actual affinity change is done in ->set_affinity(), and hence after
> > the call to ->set_affinity() no further interrupts should be delivered
> > to the CPU regardless of whether the source is masked?
> > 
> > Or is it possible for the device/interrupt controller to not switch to
> > use the new destination until the interrupt is unmasked, and hence
> > could have pending masked interrupts still using the old destination?
> > IIRC For MSI-X it's required that the device updates the destination
> > target once the entry is unmasked.
> 
> That's all not relevant here, I think. IRR gets set when an interrupt is
> signaled, no matter whether it's masked.

I'm kind of lost here: what does signaling mean in this context?

I would expect the interrupt vector to not get set in IRR if the MSI-X
entry is masked, as at that point the state of the address/data fields
might not be consistent (that's the whole point of masking it, right?).

> It's its handling which the
> masking would prevent, i.e. the "moving" of the set bit from IRR to ISR.

My understanding was that the masking would prevent the message write to
the APIC from happening, and hence no vector should get set in IRR.

> Plus we run with IRQs off here anyway if I'm not mistaken, so no
> interrupt can be delivered to the local CPU. IOW whatever IRR bits it
> has set (including ones becoming set between the IRR read and the actual
> vector change), those would never be serviced. Hence the reading of the
> bit ought to occur after the vector change: It's only then that we know
> the IRR bit corresponding to the old vector can't become set anymore.

Right, and the vector change happens in ->set_affinity(), not
->enable().  See for example set_msi_affinity() and its call to
write_msi_msg(); that's where the vector gets changed.

> And even then we're assuming that no interrupt signals might still be
> "on their way" from the IO-APIC or a posted MSI message write by a
> device to the LAPIC (I have no idea how to properly fence that, or
> whether there are guarantees for this to never occur).

Yeah, I'd expect those to have completed in the window between the
write of the new vector/destination and the reading of IRR.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 14:55:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 14:55:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743164.1150062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaEx-0000SI-GN; Tue, 18 Jun 2024 14:55:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743164.1150062; Tue, 18 Jun 2024 14:55:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaEx-0000SB-Dx; Tue, 18 Jun 2024 14:55:19 +0000
Received: by outflank-mailman (input) for mailman id 743164;
 Tue, 18 Jun 2024 14:55:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=14Hy=NU=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sJaEv-0000S0-VV
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 14:55:18 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c572b778-2d82-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 16:55:15 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cdbso38630031fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 07:55:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c572b778-2d82-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718722515; x=1719327315; darn=lists.xenproject.org;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=wcypt+L8Iba4SmY3eU311zAlxCRR2frtdOHy8LxL6r8=;
        b=Ajh4NxOAPbAsLPrtaoxWhusAiMplyqy+Im3lrDs1LEHuMHMsRfZK9R9YHRDSsrxakl
         Hr0YnSO1W8gXoqjiSWNzCWJsr66hVZJiCGnuNMki3vJHgU4UEliYM3+r1Q3Ed5WsJSt0
         Q7CTEZSYazaS6pfLcCgO8G+YhN50GUnyaUXFQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718722515; x=1719327315;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=wcypt+L8Iba4SmY3eU311zAlxCRR2frtdOHy8LxL6r8=;
        b=IwxJqztBTiBOXnktnwLiZAx4Xxb5E7NHwlDDX/vx+bkeIv7LTnfCvsXUEuNETMUJsw
         vbXAu2xLzoDaVGY7BTnDgpa5RfeWGqLY25pJ9yi6kicYNu+9Nw2PwD3B4XcA7R6h0YwX
         tk9mrbpMFaUP8vYzmlk4o3kJn28DBR3V+MvHtV2o58YkL12TPy83jKvS4Zr2dBtZiwro
         zuN9WqP/Ft9xJLi6Z9G83DW5nt/g2SE987ox0Zxr6eoacBr53dpDlu84YKT7K/DBCWal
         ZE1o9cuF8rZO+3I1oiVvTFHX9lccjJxYBhJUmY/VhHkozG0t0VeF9RKGlZad4d/TwLtR
         OEhw==
X-Gm-Message-State: AOJu0YwZP1iavI9zTCZ/3rKJtwcczwXkjJwAELPvHLGlhLK+905mdqIG
	ZjMtIKKke6N/EmeRTuUwquMp/u7AiEdhcUp0NAgFJSMukoTIlfft1q9rp0FQt3++K+nWG4DeCgA
	jR4zdvoTHcf4sZjVpfeqDTgb+4z3giKwNiHptb9z/ezbV0P1D
X-Google-Smtp-Source: AGHT+IGzopj7oemB//TivOoZF912xlaDY5ennoTWHjo8YTk+Kka4rMdhBBsC+IlVCBuJTxnLLYPNcZn3tJ2JSkXVl6o=
X-Received: by 2002:a2e:b0d5:0:b0:2eb:f31e:9e7b with SMTP id
 38308e7fff4ca-2ec3ceb7cdfmr678021fa.14.1718722514254; Tue, 18 Jun 2024
 07:55:14 -0700 (PDT)
MIME-Version: 1.0
From: Kelly Choi <kelly.choi@cloud.com>
Date: Tue, 18 Jun 2024 15:54:38 +0100
Message-ID: <CAO-mL=wNrPZOBmbEHRg2y=LA0G0Ge-+pnfF65pn3qXKGMdwdGg@mail.gmail.com>
Subject: Community call recording - June 2024
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000ee18c7061b2b4497"

--000000000000ee18c7061b2b4497
Content-Type: text/plain; charset="UTF-8"

Hi all,

The June community call recording has been uploaded:
https://youtu.be/cJyX6FLK4iU

This has also been saved in the Cryptpad file.
https://cryptpad.fr/pad/#/2/pad/view/bFelqwYToFejOhnu1bInhJ7sJwPGqW55gOpWx+VJ0GU/

Many thanks,
Kelly Choi

Community Manager
Xen Project

--000000000000ee18c7061b2b4497
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr"><div>Hi all,<div><br></div><div>The June community=C2=A0ca=
ll=C2=A0<span>recording</span>=C2=A0has been uploaded:<br><a href=3D"https:=
//youtu.be/cJyX6FLK4iU" target=3D"_blank">https://youtu.be/cJyX6FLK4iU</a>=
=C2=A0<br></div><div><br clear=3D"all"><div><div dir=3D"ltr" class=3D"gmail=
_signature"><div dir=3D"ltr"><div>This has also been saved in the Cryptpad =
file.</div><div><a href=3D"https://cryptpad.fr/pad/#/2/pad/view/bFelqwYToFe=
jOhnu1bInhJ7sJwPGqW55gOpWx+VJ0GU/" target=3D"_blank">https://cryptpad.fr/pa=
d/#/2/pad/view/bFelqwYToFejOhnu1bInhJ7sJwPGqW55gOpWx+VJ0GU/</a><br></div><d=
iv><br></div></div></div></div></div><div><div dir=3D"ltr" class=3D"gmail_s=
ignature" data-smartmail=3D"gmail_signature"><div dir=3D"ltr"><div>Many tha=
nks,</div><div>Kelly Choi</div><div><br></div><div><div style=3D"color:rgb(=
136,136,136)">Community Manager</div><div style=3D"color:rgb(136,136,136)">=
Xen Project=C2=A0<br></div></div></div></div></div></div></div>

--000000000000ee18c7061b2b4497--


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 14:56:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 14:56:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743182.1150089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaFu-0001ht-55; Tue, 18 Jun 2024 14:56:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743182.1150089; Tue, 18 Jun 2024 14:56:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaFu-0001hm-20; Tue, 18 Jun 2024 14:56:18 +0000
Received: by outflank-mailman (input) for mailman id 743182;
 Tue, 18 Jun 2024 14:56:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6aKm=NU=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sJaFs-0001HG-US
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 14:56:16 +0000
Received: from wfhigh8-smtp.messagingengine.com
 (wfhigh8-smtp.messagingengine.com [64.147.123.159])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e81a4260-2d82-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 16:56:14 +0200 (CEST)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailfhigh.west.internal (Postfix) with ESMTP id A785E18000A9;
 Tue, 18 Jun 2024 10:56:11 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 18 Jun 2024 10:56:12 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Tue,
 18 Jun 2024 10:56:10 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e81a4260-2d82-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:from:from:in-reply-to
	:in-reply-to:message-id:mime-version:references:reply-to:subject
	:subject:to:to; s=fm1; t=1718722571; x=1718808971; bh=B/9otjLkie
	tO2fQCE6ReuUJAVy9niNui+hiS7LRBurQ=; b=ZONOJ/lNizyfsswCdIfSk06Gka
	JYd8df2F1ociOHyeR0tytFhnWtKxx1ZPax38WnMEW75xHrR/8Lrbm4qAHL7zAZX/
	UOWjWCGW1NRtC/3FBcXE2zbqv0XHF/bPJgy8B8tuYGkb3+/h1lzJFp8fYFnM8lHE
	NjYd69oq7BI64IvOM8uiaj9knds124i0Zn+OBmAHqnP0jD0hS9S1FzS1PSx5dMlN
	AyU+1EyVFaVo2ykCCCFYJlKthoIjUUs5RTvGS2o5gp98yvjJMd8a7Sr3mEfdlGO3
	nBBg7p2a7+Opcr3R7LOnuuUO0B6PMkcBYhn+TY653F+ay6b4oJBrJY24msyg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-transfer-encoding
	:content-type:content-type:date:date:feedback-id:feedback-id
	:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to:x-me-proxy:x-me-proxy
	:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=1718722571; x=
	1718808971; bh=B/9otjLkietO2fQCE6ReuUJAVy9niNui+hiS7LRBurQ=; b=q
	aWXfqHpLNarTGwwmrWErHHm7K2stP6oKJiOMZjF/LAXnt3nm+oCj1x+e6bsRPHMG
	9DYlP4/1K7pog+ljiCW8Yu8DKpIJTkY+yRynvwOxO3WJJ8ZRqRoErSNU95gUN5lh
	jHcdbYNT0bCGkk5//NoSZmiZstQP7FhKvuBUCk58MlJCisT8urDdw+ye5CM+hRYE
	8pNipuDRTZzxWkNUS3TeGfMnQRjx8S6lWBMS4VgsHkHb9eKVI1TJGOYN5bavoSpY
	IPYvdgPAmwX1nQFVi+87Kzo/yuUsdoHEpk8i9L6wUYOMrZD57Ets1XUMXJSOhG0a
	LVIYlvXkTQJBMh/+CS72A==
X-ME-Sender: <xms:CqBxZpg1JcepDtk3uRyQMJsbKSfqk95sLaOsAWLcuDLx6-rWEJo9zw>
    <xme:CqBxZuB9FLkLQx2rPlI38AYQhs9mRPIX4LAWsD9cC7NJ5l6dmwkHq8uj4oAR_V-9a
    EVbGfUGHUQgFio>
X-ME-Received: <xmr:CqBxZpEwsXg947WqA4MCeb_0VnmwiKZQBm1xCHnLuhR8Ve_nDQSJigwmOBNMxMGtGAP4FkIJ9vAwm8nMN1eIkTtop5_ztwilnDjFhyM1dzkG_UCl>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfedvkedghedvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtgfgjgesthekredttddtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepudfgieetueeuueeihefhfeetudfh
    iefgteekuedvgfeuhffggeegfedvkeegkeeinecuvehluhhsthgvrhfuihiivgeptdenuc
    frrghrrghmpehmrghilhhfrhhomhepuggvmhhisehinhhvihhsihgslhgvthhhihhnghhs
    lhgrsgdrtghomh
X-ME-Proxy: <xmx:CqBxZuRfo13XizYGaJXgEoTmW9jOfh3m-4sYDJz2qfbXg6s64-F1qA>
    <xmx:CqBxZmxwKgH6YfWjGMh-7d2xvU1NS2sCkLcEzd1zXFYlUD7V8qlnkg>
    <xmx:CqBxZk6CiWxAcDV65Tho1DzJf9Y_YXJxmIZxI57_8tmB2NiNoYL1Vw>
    <xmx:CqBxZrwgJPhtTrp3D4Ko_VHI2ztBbVoZ-UZVBOUBSrwYX9r9ktq43w>
    <xmx:C6BxZqdBoB0i4K6tUhUm_JchZ2kqQWYdHvOFn86GBf6URi1M_t_Vty6G>
Feedback-ID: iac594737:Fastmail
Date: Tue, 18 Jun 2024 10:56:09 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Jan Beulich <jbeulich@suse.com>,
	Xenia Ragiadakou <burzalodowa@gmail.com>,
	Ray Huang <ray.huang@amd.com>,
	Xen developer discussion <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Direct Rendering Infrastructure development <dri-devel@lists.freedesktop.org>,
	Christian =?utf-8?B?S8O2bmln?= <christian.koenig@amd.com>,
	Qubes OS Development Mailing List <qubes-devel@googlegroups.com>
Subject: Re: Design session notes: GPU acceleration in Xen
Message-ID: <ZnGgCawl4RcXA2W9@itl-email>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email>
 <Zm_p1QvoZcjQ4gBa@macbook>
 <ZnCglhYlXmRPBZXE@mail-itl>
 <ZnDbaply6KaBUKJb@itl-email>
 <ZnGdJoCtbIrf4-dW@macbook>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; x-action=pgp-signed
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZnGdJoCtbIrf4-dW@macbook>

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

On Tue, Jun 18, 2024 at 04:43:50PM +0200, Roger Pau Monné wrote:
> On Mon, Jun 17, 2024 at 08:57:14PM -0400, Demi Marie Obenour wrote:
> > Given the recent progress on PVH dom0, is it reasonable to assume that
> > PVH dom0 will be ready in time for R4.3, and that therefore Qubes OS
> > doesn't need to worry about this problem on x86?
> 
> PVH dom0 will only be ready (whatever ready means in your use-case)
> when people test and fix the issues, otherwise it would stay in the
> same limbo it's currently in.
> 
> I guess the main blocker for Qubes is the lack of PCI passthrough
> support in order to test it more aggressively?

I suspect so, though Marek would need to confirm.
- -- 
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab
-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZxoAgACgkQsoi1X/+c
IsH7QhAAjZiGKHUUE62xWcI4bxz/ebW6hS9eMEgRPpd9NSOt2slf5NBdGnKtYj1y
mCE+hpyBS3ZKD+4ERbJ4U6K/MrwXaUHc/gqwnRB+rrrKevP/oy/+mI8z8OPrGSc0
0ZCu3AfNKk5Bohf15IMtiqKkk+tsztLTfjgso7lJ1sK1wobdf8Ps97shdbCrnjlI
QlHIXWtYIJse4UKR1aZ1eZ/dggLKOyye3ukF6OSet8tLWbG258wdhRDwC57So5nI
xZdZayCpbixhFQLxbSy+L5lbEVTaq7Ymkoca33Fhn6kFtxzXv/gBoHz+nZBiqVZG
6fSQrIxr0MgDvQRzEvh90fnIDcAQtqRuvDJvB3jjkHjkQzuWsOpZycJytSEfisCw
//Z/T7DsbE581T9sBCpoZ4a7k89zsnZfT2MK7pypPL+spxtVTK2man6Us8mdEj85
5d+f3MGaoHQBPAbn5eoSWCzJCmdDBHIvMnIrxvvx+ZyD74nv4v8OMfUeMbDK8jz0
Z4LKG+cF0hc9pl/DlewrvP3spuw/a3KyxeKZBPKiZmArxuUbiuarbowauBT+YmgT
KTkWs/hL2VIq2+kX82DckABvroIhDm/YVF4miX4WIJMhoiEE0+zB35Gjyw19QvXr
+WviUWvA3a6icPzCUz2tIZlBabQ3fcgD/+IWVuuDv+7x9Kwy088=
=U2jm
-----END PGP SIGNATURE-----


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 15:22:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 15:22:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743209.1150100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaen-0007Ub-8t; Tue, 18 Jun 2024 15:22:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743209.1150100; Tue, 18 Jun 2024 15:22:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJaen-0007UU-42; Tue, 18 Jun 2024 15:22:01 +0000
Received: by outflank-mailman (input) for mailman id 743209;
 Tue, 18 Jun 2024 15:21:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=An5i=NU=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJael-0007TB-Fs
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 15:21:59 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 80a76a43-2d86-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 17:21:58 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6e349c0f2bso728905666b.2
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 08:21:58 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f9982asm622875966b.202.2024.06.18.08.21.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 08:21:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80a76a43-2d86-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718724117; x=1719328917; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=3A5BvofJ84SXtt2DL854SMDMW7ui6tD+mZ1gzbNf5vw=;
        b=NVfnJYliB1iaZRvyXlAe3+nkI4BcdfmXZCqZnnyZknKVga//bAZPMOfxKfH1oLW5ar
         FsCa0VRokMjb0/uboy/y1oUXIVOAQqMMKTZiwzTJHTw0A8sNVLoHCP9y9clDFXkKdtO8
         N+6+eV/NuM9+KRTQ2dI/Aj1QfEkXFoYGvCLVs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718724117; x=1719328917;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3A5BvofJ84SXtt2DL854SMDMW7ui6tD+mZ1gzbNf5vw=;
        b=NRb9fJYbaJu59Gdu9HAl//4YrsiJo+mMrQJe8JeDURNzmqCsjiLvTIOR67bxyzFizu
         ZTs7ZLK3AlEqH38quOVgVtNxDi9/HVEBrLkik2HfdeUpQBu86/3L/gEp7AjJm4S5Tcxl
         /NpiXbQiuMmrY5Nh88HBLnt74cuNTRex2342sxqYmMVUlvOzRi48vzz8N/hFpVXRUIyi
         Nto+epn1FvZo0bJIXUS5dhiyrl6rUUitU3qMh/ovTE8mrLZfgUVOZ0GAADLif3dIZqc+
         n727dWrhgBpQ7RvPbfp1Pj9EcrmnxoJr19og+NTSmLjas+ofFOExS+sIORmPDcW4mhHc
         m/EA==
X-Forwarded-Encrypted: i=1; AJvYcCVvCmWL5loZx54pQRxsBEHkJRiSVyoL9Ay16zAxLItA/4eETvbJjFpfZj2o54ir2oI7B5nhS7Xj7NzPsqwYIS8P5WnhfooyC2oT3EOV+NY=
X-Gm-Message-State: AOJu0YwQpobVDY6vh4f5V3FXdSPS9G+o+xG9wrHayFNAdeLmnYETYhhZ
	mWMEXTcMwMPW0/BBwdNTncszbbHWX/LvgSj0T62UMzYWId55Q4ixtZ2aPS2/TcPt640zCWbxjl0
	+1AE=
X-Google-Smtp-Source: AGHT+IGmnrdD5PWrFqSM0ZfAqFb8ANg9tdBb1PrWStrA5ZElm9f0HIPYflNyJhZg+kJGEfeyxfxpbA==
X-Received: by 2002:a17:906:b2c8:b0:a6f:5723:fb11 with SMTP id a640c23a62f3a-a6f60dc50bcmr815917266b.58.1718724117299;
        Tue, 18 Jun 2024 08:21:57 -0700 (PDT)
Message-ID: <973b1d2c-c27c-4fb8-92b1-dfaefdeda7e2@citrix.com>
Date: Tue, 18 Jun 2024 16:21:56 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at
 boot
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
 <20240617173921.1755439-3-andrew.cooper3@citrix.com>
 <be1baa64-ba01-49ce-a59e-53f3bef1cda0@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <be1baa64-ba01-49ce-a59e-53f3bef1cda0@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18/06/2024 11:05 am, Jan Beulich wrote:
> On 17.06.2024 19:39, Andrew Cooper wrote:
>> Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
>> every call.  This is expensive, being used for domain create/migrate, as well
>> as to service certain guest CPUID instructions.
>>
>> Instead, arrange to check the sizes once at boot.  See the code comments for
>> details.  Right now, it just checks hardware against the algorithm
>> expectations.  Later patches will add further cross-checking.
>>
>> Introduce more X86_XCR0_* and X86_XSS_* constants for CPUID bits.  This is to
>> maximise coverage in the sanity check, even if we don't expect to
>> use/virtualise some of these features any time soon.  Leave HDC and HWP alone
>> for now; we don't have CPUID bits for them stored nicely.
>>
>> Only perform the cross-checks when SELF_TESTS are active.  Only developers
>> or new hardware are liable to trip these checks, and Xen at least tracks
>> "maximum value ever seen in xcr0" for the lifetime of the VM, which we don't
>> want to be tickling in the general case.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> I may certainly give R-b on the patch as it is, but I have a few questions
> first:
>
>> --- a/xen/arch/x86/xstate.c
>> +++ b/xen/arch/x86/xstate.c
>> @@ -604,9 +604,164 @@ static bool valid_xcr0(uint64_t xcr0)
>>      if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
>>          return false;
>>  
>> +    /* TILECFG and TILEDATA must be the same. */
>> +    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
>> +        return false;
>> +
>>      return true;
>>  }
>>  
>> +struct xcheck_state {
>> +    uint64_t states;
>> +    uint32_t uncomp_size;
>> +    uint32_t comp_size;
>> +};
>> +
>> +static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
>> +{
>> +    uint32_t hw_size;
>> +
>> +    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
>> +
>> +    BUG_ON(s->states & new); /* States only increase. */
>> +    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
>> +    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
>> +    BUG_ON((new & X86_XCR0_STATES) &&
>> +           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
>> +
>> +    s->states |= new;
>> +    if ( new & X86_XCR0_STATES )
>> +    {
>> +        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
>> +            BUG();
>> +    }
>> +    else
>> +        set_msr_xss(s->states & X86_XSS_STATES);
>> +
>> +    /*
>> +     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill in
>> +     * prior holes in the state area, so we check that the size doesn't
>> +     * decrease.
>> +     */
>> +    hw_size = cpuid_count_ebx(0xd, 0);
> Going forward, do we mean to get rid of XSTATE_CPUID? Else imo it should be
> used here (and again below).

All documentation about CPUID, from the vendors and from secondary
sources, is written in terms of numerals, not names.

XSTATE_CPUID is less meaningful than 0xd, and I would prefer to phase it
out.


>> +    if ( hw_size < s->uncomp_size )
>> +        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
>> +              s->states, &new, hw_size, s->uncomp_size);
>> +
>> +    s->uncomp_size = hw_size;
> Since XSS state doesn't affect uncompressed layout, this looks like largely
> dead code for that case. Did you consider moving this into the if() above?

If that were a printk() rather than a panic(), then having the
assignment in the if() would be wrong.

So while it doesn't really matter given the way the logic is currently
written, moving the check into the if() would be more code and would
interfere with manual debugging.

> Alternatively, should the comparison use == when dealing with XSS bits?

Hmm.  We probably can make this check work, given that we ascend through
the user states first, then the supervisor states.

Although I'd need to rerun such a change through the entire hardware
lab.  There have been enough unexpected surprises with "obvious" changes
already.

>> +    /*
>> +     * Check the compressed size, if available.  All components strictly
>> +     * appear in index order.  In principle there are no holes, but some
>> +     * components have their base address 64-byte aligned for efficiency
>> +     * reasons (e.g. AMX-TILE) and there are other components small enough to
>> +     * fit in the gap (e.g. PKRU) without increasing the overall length.
>> +     */
>> +    hw_size = cpuid_count_ebx(0xd, 1);
>> +
>> +    if ( cpu_has_xsavec )
>> +    {
>> +        if ( hw_size < s->comp_size )
>> +            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x < prev size %#x\n",
>> +                  s->states, &new, hw_size, s->comp_size);
> Unlike for uncompressed size, can't it be <= here, for - as the comment
> says - it being strictly index order, and no component having zero size?

The first version of this patch did have <=, and it really failed on SPR.

When you activate AMX first, then PKRU next, PKRU really does fit in the
alignment hole, and the overall compressed size is the same.
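As an illustration of that effect, here is a minimal sketch of how a
compacted-area size falls out of the active RFBM bits.  The component
indices, sizes, and alignment flags below are made up for the example (not
values from any real CPU): a small unaligned component can land in the
padding inserted before a later 64-byte-aligned component, leaving the
total unchanged.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical component table: index, size, and 64-byte alignment flag. */
struct component { unsigned int idx; uint32_t size; int align64; };

static const struct component demo[] = {
    {  5,   40, 0 }, /* Ends at a non-64-byte boundary. */
    {  9,    8, 0 }, /* Small PKRU-like component. */
    { 17,   64, 1 }, /* 64-byte aligned => padding before it. */
    { 18, 8192, 1 },
};

/*
 * Compacted-area size for a given RFBM: the 576-byte legacy region +
 * XSAVE header, then each active component in index order, aligning the
 * base to 64 bytes where flagged.
 */
static uint32_t compressed_size(uint64_t rfbm)
{
    uint32_t off = 512 + 64;
    unsigned int i;

    for ( i = 0; i < sizeof(demo) / sizeof(demo[0]); i++ )
    {
        if ( !(rfbm & (1ULL << demo[i].idx)) )
            continue;
        if ( demo[i].align64 )
            off = (off + 63) & ~63u;
        off += demo[i].size;
    }

    return off;
}
```

With these numbers, enabling the 8-byte component at index 9 does not
change the total: it fits entirely in the padding inserted before the
aligned component at index 17, which is why a strict `<` (rather than
`<=`) comparison is needed in the check above.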


It's a consequence of doing all the user states first, then all the
supervisor states.  I did have them strictly in index order to begin
with, but then hit the enumeration issue on Broadwell and reworked
xstate_check_sizes() to have a single common cpu_has_xsaves check around
all the supervisor states.

I could potentially undo that, but the consequence would be needing a
cpu_has_xsaves check in every call passing in X86_XSS_*.
>> +        s->comp_size = hw_size;
>> +    }
>> +    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
>> +    {
>> +        static bool once;
>> +
>> +        if ( !once )
>> +        {
>> +            WARN();
>> +            once = true;
>> +        }
>> +    }
>> +}
>> +
>> +/*
>> + * The {un,}compressed XSTATE sizes are reported by a dynamic CPUID value, based
>> + * on the current %XCR0 and MSR_XSS values.  The exact layout is also feature
>> + * and vendor specific.  Cross-check Xen's understanding against real hardware
>> + * on boot.
>> + *
>> + * Testing every combination is prohibitive, so we use a partial approach.
>> + * Starting with nothing active, we add new XSTATEs and check that the CPUID
>> + * dynamic values never decrease.
>> + */
>> +static void __init noinline xstate_check_sizes(void)
>> +{
>> +    uint64_t old_xcr0 = get_xcr0();
>> +    uint64_t old_xss = get_msr_xss();
>> +    struct xcheck_state s = {};
>> +
>> +    /*
>> +     * User XSTATEs, increasing by index.
>> +     *
>> +     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
>> +     * AMD introduced LWP in Fam15h, following immediately on from YMM.  Intel
>> +     * left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in Skylake.
>> +     * AMD removed LWP in Fam17h, putting PKRU in the same space, breaking
>> +     * layout compatibility with Intel and having a knock-on effect on all
>> +     * subsequent states.
>> +     */
>> +    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
>> +
>> +    if ( cpu_has_avx )
>> +        check_new_xstate(&s, X86_XCR0_YMM);
>> +
>> +    if ( cpu_has_mpx )
>> +        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
>> +
>> +    if ( cpu_has_avx512f )
>> +        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
>> +
>> +    if ( cpu_has_pku )
>> +        check_new_xstate(&s, X86_XCR0_PKRU);
>> +
>> +    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
>> +        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
>> +
>> +    if ( boot_cpu_has(X86_FEATURE_LWP) )
>> +        check_new_xstate(&s, X86_XCR0_LWP);
>> +
>> +    /*
>> +     * Supervisor XSTATEs, increasing by index.
>> +     *
>> +     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
>> +     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
>> +     * introduced in Skylake.
>> +     */
>> +    if ( cpu_has_xsaves )
>> +    {
>> +        if ( cpu_has_proc_trace )
>> +            check_new_xstate(&s, X86_XSS_PROC_TRACE);
>> +
>> +        if ( boot_cpu_has(X86_FEATURE_ENQCMD) )
>> +            check_new_xstate(&s, X86_XSS_PASID);
>> +
>> +        if ( boot_cpu_has(X86_FEATURE_CET_SS) ||
>> +             boot_cpu_has(X86_FEATURE_CET_IBT) )
>> +        {
>> +            check_new_xstate(&s, X86_XSS_CET_U);
>> +            check_new_xstate(&s, X86_XSS_CET_S);
>> +        }
>> +
>> +        if ( boot_cpu_has(X86_FEATURE_UINTR) )
>> +            check_new_xstate(&s, X86_XSS_UINTR);
>> +
>> +        if ( boot_cpu_has(X86_FEATURE_ARCH_LBR) )
>> +            check_new_xstate(&s, X86_XSS_LBR);
>> +    }
> In principle compressed state checking could be extended to also verify
> the offsets are strictly increasing. That, however, would require to
> interleave XCR0 and XSS checks, strictly by index. Did you consider (and
> then discard) doing so?

What offsets are you referring to?

Compressed images have no offset information.  Every "row" which has
ecx.xss set has ebx (offset) reported as 0.  The offset information for
the user rows is only applicable to uncompressed images.

The layout of a compressed image is a strict function of RFBM derived
from component sizes alone.
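A sketch of that property, again with hypothetical component sizes and
alignment flags (nothing here is taken from real CPUID data): a
component's offset in a compacted image is derived purely from the
lower-indexed active bits of RFBM, so setting an earlier state bit shifts
everything after it.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-component sizes and 64-byte alignment flags. */
struct comp { unsigned int idx; uint32_t size; int align64; };

static const struct comp tbl[] = {
    {  2, 256, 0 }, /* YMM-like. */
    {  9,   8, 0 }, /* PKRU-like. */
    { 17,  64, 1 }, /* 64-byte aligned. */
};

/*
 * Offset of component 'idx' in a compacted image: walk the lower-indexed
 * active components, accumulating sizes and applying alignment, starting
 * after the 576-byte legacy region + XSAVE header.
 */
static uint32_t comp_offset(uint64_t rfbm, unsigned int idx)
{
    uint32_t off = 512 + 64;
    unsigned int i;

    for ( i = 0; i < sizeof(tbl) / sizeof(tbl[0]); i++ )
    {
        if ( !(rfbm & (1ULL << tbl[i].idx)) )
            continue;
        if ( tbl[i].align64 )
            off = (off + 63) & ~63u;
        if ( tbl[i].idx == idx )
            return off;
        off += tbl[i].size;
    }

    return 0; /* Component not active in RFBM. */
}
```

With these numbers, component 17 sits at offset 832 when only bit 2
precedes it, and moves to 896 once bit 9 is also set: same component
sizes, different RFBM, different layout.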

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 15:47:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 15:47:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743216.1150108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJb3F-00030l-4m; Tue, 18 Jun 2024 15:47:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743216.1150108; Tue, 18 Jun 2024 15:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJb3F-00030e-1s; Tue, 18 Jun 2024 15:47:17 +0000
Received: by outflank-mailman (input) for mailman id 743216;
 Tue, 18 Jun 2024 15:47:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJb3D-00030T-Jh; Tue, 18 Jun 2024 15:47:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJb3D-00044s-3w; Tue, 18 Jun 2024 15:47:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJb3C-0007li-Rk; Tue, 18 Jun 2024 15:47:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJb3C-0002az-RH; Tue, 18 Jun 2024 15:47:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=R2K8uhSqpd1+1uLWMRE0dPFY/Stb+88sQe+bX+KqgQc=; b=pbxfd9oBAxg2MDPD7JvrXVi0fa
	zjbtbl+L2WlMDszlTArrkCSDQ8Mbx+doF6nGOeWx72/Pi9A/c+YLpHh18jNT3UxYLrxMVW+Zbz7Ff
	aFda7vhyy538UXeDKnwMMAf5rBR7nXg3RGJp+8SFsYxwE0VyfWipnxZ2NJNXZOCpYbAY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186389-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186389: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=14d7c92f8df9c0964ae6f8b813c1b3ac38120825
X-Osstest-Versions-That:
    linux=6226e74900d7c106c7c86b878dc6779cfdb20c2b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 15:47:14 +0000

flight 186389 linux-linus real [real]
flight 186395 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186389/
http://logs.test-lab.xenproject.org/osstest/logs/186395/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 186385

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 186395-retest
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail pass in 186395-retest
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186395-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186385

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186395 like 186385
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186395 never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186395 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186395 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186385
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                14d7c92f8df9c0964ae6f8b813c1b3ac38120825
baseline version:
 linux                6226e74900d7c106c7c86b878dc6779cfdb20c2b

Last test of basis   186385  2024-06-17 19:43:39 Z    0 days
Testing same since   186389  2024-06-18 03:58:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandr Nogikh <nogikh@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@gmail.com>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Christian Göttsche <cgzones@googlemail.com>
  Christian Schrefl <chrisi.schrefl@gmail.com>
  Chuck Lever <chuck.lever@oracle.com>
  David Hildenbrand <david@redhat.com>
  Hugh Dickins <hughd@google.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jeff Xu <jeffxu@chromium.org>
  Joseph Qi <joseph.qi@linux.alibaba.com>
  Kees Cook <kees@kernel.org>
  Kefeng Wang <wangkefeng.wang@huawei.com>
  Lance Yang <ioworker0@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lorenzo Stoakes <lstoakes@gmail.com>
  Mark Brown <broonie@kernel.org>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Oleg Nesterov <oleg@redhat.com>
  Peter Oberparleiter <oberpar@linux.ibm.com>
  Peter Xu <peterx@redhat.com>
  Rafael Aquini <aquini@redhat.com>
  Ran Xiaokai <ran.xiaokai@zte.com.cn>
  Suren Baghdasaryan <surenb@google.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Fu <fuweid89@gmail.com>
  Yury Norov <yury.norov@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 963 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 16:21:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 16:21:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743230.1150119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJbaO-0001Vk-R3; Tue, 18 Jun 2024 16:21:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743230.1150119; Tue, 18 Jun 2024 16:21:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJbaO-0001Vd-NE; Tue, 18 Jun 2024 16:21:32 +0000
Received: by outflank-mailman (input) for mailman id 743230;
 Tue, 18 Jun 2024 16:21:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJbaO-0001VX-7x
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 16:21:32 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d2438740-2d8e-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 18:21:30 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6f177b78dcso718903566b.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 09:21:30 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f8186b686sm289176966b.7.2024.06.18.09.21.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 09:21:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2438740-2d8e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718727690; x=1719332490; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=65l+y+VMGKICQfKg8KgEhKW8NBQP4Iy2j7XrYTn75M0=;
        b=UYOxfgZnwXJIaGBITUPqKvcX8Vfs/jmfm7B/Eq14JHHm4l367ZAF/d6N0YcjJ8m2kh
         4GhdwW1oAjg1X3tdHzZoeQ3SXNiss4ePTh+SlK8F33CXg8Q4kXrdjQ/nSX2tyyAlGwE8
         09ZETYrxaF86/dllgoAi3iyqsHAskRk1QzC4JSfFkVPs8msBSEkHFw5M4gPk5QmoK8uE
         uxTjt6p8DRNgkxZkq7IXcwYJ5c/7PkecJwQVSjnizg9H5vt3qrub0oohDvIWiGF595/9
         gee31R9UK5b9n1FoG/PMV7b3ryzHMnSX+ofaCbsuRxGG9QJ7IoYZopXKH9YKNeydIzBx
         y4aw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718727690; x=1719332490;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=65l+y+VMGKICQfKg8KgEhKW8NBQP4Iy2j7XrYTn75M0=;
        b=NGAhfN7VyS/+CBt552leNHNj2JR0YggkPPMS44i0ETc6OHaHv0yQovHjensM8dJHmz
         0kR0NzAy1eZsstyfaJvLBJ9LGdVrjNGAGaX71KluLIY08+nP+1ZoNIgNHEgoy5qeZoP7
         pQ97sg1NyeK3y46rnvwCMGAop9i4G4IwCc6ItGrMnrLcwP+7FT0is/fMliicq8UL5d8N
         vTYeErZ80+9JiEoHSzJ3uvC1KSC7LZECTKLzDTDvsffZgcxVOGAe3N/ffMq6mvzL2v7J
         wKtuv5MR75Cgs7IJm9YShy0ZDeKz9gTlU4e4ibtZA7M6adSZeIh9oFyRUZNco/qWaqrk
         YzZw==
X-Forwarded-Encrypted: i=1; AJvYcCVkcforXq3GabIT9TdfRRDqsP4j4JD69LxKj4BybZX2jh3qGFzcUH0LL6Kcv/FxrMN1iRkZN/usz3ShdDpL4Oa2O9LWFkyYs5KXSIlRik4=
X-Gm-Message-State: AOJu0YwE4EsbX0PdbMKH8R6tLkr64fy1j9R8xVi1aSH8OfMTs80EEi6f
	812LhpwFT73T1noexqoRDRRKXQYHZNbr7kHuyEf4h5WBPNimkhOVLUXbevdtVA==
X-Google-Smtp-Source: AGHT+IF4jWLGi0RBFY9Gll5lfUokxykgs+9rVIy5xj2yKvRAMZlZuMtcC/CYX2YsuGh8NiU/BxB94w==
X-Received: by 2002:a17:907:8b95:b0:a6e:fccc:e4a with SMTP id a640c23a62f3a-a6fab060dd2mr2007966b.0.1718727690174;
        Tue, 18 Jun 2024 09:21:30 -0700 (PDT)
Message-ID: <fd503d44-c1db-4aba-bb3d-9478cbdfe56d@suse.com>
Date: Tue, 18 Jun 2024 18:21:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at
 boot
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
 <20240617173921.1755439-3-andrew.cooper3@citrix.com>
 <be1baa64-ba01-49ce-a59e-53f3bef1cda0@suse.com>
 <973b1d2c-c27c-4fb8-92b1-dfaefdeda7e2@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <973b1d2c-c27c-4fb8-92b1-dfaefdeda7e2@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18.06.2024 17:21, Andrew Cooper wrote:
> On 18/06/2024 11:05 am, Jan Beulich wrote:
>> On 17.06.2024 19:39, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/xstate.c
>>> +++ b/xen/arch/x86/xstate.c
>>> @@ -604,9 +604,164 @@ static bool valid_xcr0(uint64_t xcr0)
>>>      if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
>>>          return false;
>>>  
>>> +    /* TILECFG and TILEDATA must be the same. */
>>> +    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
>>> +        return false;
>>> +
>>>      return true;
>>>  }
>>>  
>>> +struct xcheck_state {
>>> +    uint64_t states;
>>> +    uint32_t uncomp_size;
>>> +    uint32_t comp_size;
>>> +};
>>> +
>>> +static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
>>> +{
>>> +    uint32_t hw_size;
>>> +
>>> +    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
>>> +
>>> +    BUG_ON(s->states & new); /* States only increase. */
>>> +    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
>>> +    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
>>> +    BUG_ON((new & X86_XCR0_STATES) &&
>>> +           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
>>> +
>>> +    s->states |= new;
>>> +    if ( new & X86_XCR0_STATES )
>>> +    {
>>> +        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
>>> +            BUG();
>>> +    }
>>> +    else
>>> +        set_msr_xss(s->states & X86_XSS_STATES);
>>> +
>>> +    /*
>>> +     * Check the uncompressed size.  Some XSTATEs are out-of-order and fill in
>>> +     * prior holes in the state area, so we check that the size doesn't
>>> +     * decrease.
>>> +     */
>>> +    hw_size = cpuid_count_ebx(0xd, 0);
>> Going forward, do we mean to get rid of XSTATE_CPUID? Else imo it should be
>> used here (and again below).
> 
> All documentation about CPUID, from the vendors and from secondary
> sources, is written in terms of numerals, not names.
> 
> XSTATE_CPUID is less meaningful than 0xd, and I would prefer to phase it
> out.

Fair enough; hence my asking.

>>> +    if ( hw_size < s->uncomp_size )
>>> +        panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
>>> +              s->states, &new, hw_size, s->uncomp_size);
>>> +
>>> +    s->uncomp_size = hw_size;
>> Since XSS state doesn't affect uncompressed layout, this looks like largely
>> dead code for that case. Did you consider moving this into the if() above?
> 
> If that were a printk() rather than a panic(), then having the
> assignment in the if() would be wrong.

Would it? For an XSS component the uncompressed size isn't supposed to
change. (Or else doing an == check, as per below, wouldn't be an option.)

> So while it doesn't really matter given the way the logic is currently
> written, it's more code, and interferes with manual debugging to move
> the check into the if().
> 
>> Alternatively, should the comparison use == when dealing with XSS bits?
> 
> Hmm.  We probably can make this check work, given that we ascend through
> user states first, and the supervisor states second.
> 
> Although I'd need to rerun such a change through the entire hardware
> lab.  There have been enough unexpected surprises with "obvious" changes
> already.

Well, we can of course decide to go with what you have for now, and then
see about tightening the check. I fear though that doing so may then be
forgotten ...

>>> +    /*
>>> +     * Check the compressed size, if available.  All components strictly
>>> +     * appear in index order.  In principle there are no holes, but some
>>> +     * components have their base address 64-byte aligned for efficiency
>>> +     * reasons (e.g. AMX-TILE) and there are other components small enough to
>>> +     * fit in the gap (e.g. PKRU) without increasing the overall length.
>>> +     */
>>> +    hw_size = cpuid_count_ebx(0xd, 1);
>>> +
>>> +    if ( cpu_has_xsavec )
>>> +    {
>>> +        if ( hw_size < s->comp_size )
>>> +            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x < prev size %#x\n",
>>> +                  s->states, &new, hw_size, s->comp_size);
>> Unlike for uncompressed size, can't it be <= here, for - as the comment
>> says - it being strictly index order, and no component having zero size?
> 
> The first version of this patch did have <=, and it really failed on SPR.
> 
> When you activate AMX first, then PKRU next, PKRU really does fit in the
> alignment hole, and the overall compressed size is the same.
> 
> 
> It's a consequence of doing all the user states first, then all the
> supervisor states second.  I did have them strictly in index order to
> begin with, but then hit the enumeration issue on Broadwell and reworked
> xstate_check_sizes() to have a single common cpu_has_xsaves around all
> supervisor states.
> 
> I could potentially undo that, but the consequence is needing an
> cpu_has_xsaves in every call passing in X86_XSS_*.

Just to say as much: to me, doing things strictly in order would feel more
"natural" (even if, as per below, the other reason that made it seem
desirable has really evaporated). Other than the need for cpu_has_xsaves,
is there a particular reason you split things as all XCR0 first, then all
XSS? (I'm a little worried anyway that this code will need updating for
each new component addition. Yet "automating" this won't work, because of
the need for the cpu_has_* checks.)

>>> +        s->comp_size = hw_size;
>>> +    }
>>> +    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
>>> +    {
>>> +        static bool once;
>>> +
>>> +        if ( !once )
>>> +        {
>>> +            WARN();
>>> +            once = true;
>>> +        }
>>> +    }
>>> +}
>>> +
>>> +/*
>>> + * The {un,}compressed XSTATE sizes are reported by dynamic CPUID value, based
>>> + * on the current %XCR0 and MSR_XSS values.  The exact layout is also feature
>>> + * and vendor specific.  Cross-check Xen's understanding against real hardware
>>> + * on boot.
>>> + *
>>> + * Testing every combination is prohibitive, so we use a partial approach.
>>> + * Starting with nothing active, we add new XSTATEs and check that the CPUID
>>> + * dynamic values never decrease.
>>> + */
>>> +static void __init noinline xstate_check_sizes(void)
>>> +{
>>> +    uint64_t old_xcr0 = get_xcr0();
>>> +    uint64_t old_xss = get_msr_xss();
>>> +    struct xcheck_state s = {};
>>> +
>>> +    /*
>>> +     * User XSTATEs, increasing by index.
>>> +     *
>>> +     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
>>> +     * AMD introduced LWP in Fam15h, following immediately on from YMM.  Intel
>>> +     * left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in Skylake.
>>> +     * AMD removed LWP in Fam17h, putting PKRU in the same space, breaking
>>> +     * layout compatibility with Intel and having a knock-on effect on all
>>> +     * subsequent states.
>>> +     */
>>> +    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
>>> +
>>> +    if ( cpu_has_avx )
>>> +        check_new_xstate(&s, X86_XCR0_YMM);
>>> +
>>> +    if ( cpu_has_mpx )
>>> +        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
>>> +
>>> +    if ( cpu_has_avx512f )
>>> +        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
>>> +
>>> +    if ( cpu_has_pku )
>>> +        check_new_xstate(&s, X86_XCR0_PKRU);
>>> +
>>> +    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
>>> +        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
>>> +
>>> +    if ( boot_cpu_has(X86_FEATURE_LWP) )
>>> +        check_new_xstate(&s, X86_XCR0_LWP);
>>> +
>>> +    /*
>>> +     * Supervisor XSTATEs, increasing by index.
>>> +     *
>>> +     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
>>> +     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
>>> +     * introduced in Skylake.
>>> +     */
>>> +    if ( cpu_has_xsaves )
>>> +    {
>>> +        if ( cpu_has_proc_trace )
>>> +            check_new_xstate(&s, X86_XSS_PROC_TRACE);
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_ENQCMD) )
>>> +            check_new_xstate(&s, X86_XSS_PASID);
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_CET_SS) ||
>>> +             boot_cpu_has(X86_FEATURE_CET_IBT) )
>>> +        {
>>> +            check_new_xstate(&s, X86_XSS_CET_U);
>>> +            check_new_xstate(&s, X86_XSS_CET_S);
>>> +        }
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_UINTR) )
>>> +            check_new_xstate(&s, X86_XSS_UINTR);
>>> +
>>> +        if ( boot_cpu_has(X86_FEATURE_ARCH_LBR) )
>>> +            check_new_xstate(&s, X86_XSS_LBR);
>>> +    }
>> In principle compressed state checking could be extended to also verify
>> the offsets are strictly increasing. That, however, would require to
>> interleave XCR0 and XSS checks, strictly by index. Did you consider (and
>> then discard) doing so?
> 
> What offsets are you referring to?
> 
> Compressed images have no offset information.  Every "row" which has
> ecx.xss set has ebx (offset) reported as 0.  The offset information for
> the user rows are only applicable for uncompressed images.

Hmm, right, nothing to compare our calculations against. And for the
compressed form the (calculated) offsets aren't any different from the
previous component's accumulated size.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 16:30:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 16:30:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743239.1150129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJbj3-0003PK-Iq; Tue, 18 Jun 2024 16:30:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743239.1150129; Tue, 18 Jun 2024 16:30:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJbj3-0003PD-Fn; Tue, 18 Jun 2024 16:30:29 +0000
Received: by outflank-mailman (input) for mailman id 743239;
 Tue, 18 Jun 2024 16:30:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MUlf=NU=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJbj2-0003P7-5v
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 16:30:28 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 10c1aed5-2d90-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 18:30:25 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a6f09eaf420so674700566b.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 09:30:25 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ed379esm638596466b.139.2024.06.18.09.30.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 Jun 2024 09:30:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10c1aed5-2d90-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718728225; x=1719333025; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=8D4675Yda6h51MngzOhKZL5G8jAVqgVVqWF7KX4GrlE=;
        b=EAWVoJO5V/3GPzbTlLsBj2EA/s/m5gFm+xiMdb4FbrouOo+AHzbqViJbTz7Di2HUma
         mB/ifmvzfdWWKM2PEbtQmRM7B2jM2SjTGKw0jQbvOb4K+j+WC76zTOr8dipTA1TfTDEc
         w2pTtFNTBN2U7g3HOqEE0IJgDbSxm09Gm3X7dTLo6g20aHLXbSJy2yW/R4m2QbcR0obQ
         LyLSQ8rcKAPFsxvuny4zJ/Z3YSSLKcR6gYpxRLOgOq4mpReiyWpHzcGDmECBfgFk1TS8
         rqbkRqqfcf2Q1cFFZPOLhu3BY0sDW8nO/6TSk4B16HEZKNv4H44yjDHnA1ByagFl5dxa
         IHAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718728225; x=1719333025;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8D4675Yda6h51MngzOhKZL5G8jAVqgVVqWF7KX4GrlE=;
        b=TkeXdiBLdfy0x2zheMSUVdFXZBHHCtkoje6DA7e54JYZCHgl0j0XxVZcLJ0rSfPHKp
         97GEabqkRIhoO5u3JDqGo9OIuWePfdhthjEX0cqV4oht34atqTV7rWlx8Pnf+da4zk5u
         i2jdVltxtNvbXxFTcgYZMSYkiRZ+8lZBjL9TMHKCSyawsXzL2X+kY61HuDcq+ht9L8oE
         FQEd6cBgi43tiRwM8Mfbx4i6DiELgPeiKoBu7FmXv1Zcsfag8H38s/pAureuDuD0ufCp
         yuOYI4iHY91u+pLsKqHNhtVAHvVzCEFxpWYd6LHviHrMFeqSbhWBHSi5MEnqkvqw3J5p
         +EiQ==
X-Forwarded-Encrypted: i=1; AJvYcCUefVhoAx4whek/jbysAJ01l3zhVsce9zJjR36P2aDPbfGEjUeH5RK3JU/tfg76DyF0/1hot9e0dfzyfSGnpvS3bduM3juUFvzFzMVIXXk=
X-Gm-Message-State: AOJu0Yzrsd39nx4ExSMDbzd/XDhoM33UBU9Hf2CaBSEUYXk45SqjopcZ
	RANAxb4t7ymyDaqDpIEM7CKisc0wGH006i/LdUm2tByMR8SEFG3jWVVe5JoRqHgX/WUkTIv/6Jk
	=
X-Google-Smtp-Source: AGHT+IG/6Z3mn/HqpO8HF/xrAvsJsBP+shInRSSpHlhRWCpB/4TGkIbUzimaT5T5aS8YAOFsNsodnA==
X-Received: by 2002:a17:907:d30d:b0:a6f:5202:3dad with SMTP id a640c23a62f3a-a6fab77a1afmr625866b.55.1718728224634;
        Tue, 18 Jun 2024 09:30:24 -0700 (PDT)
Message-ID: <ba89126f-715d-498e-81e1-2ed105ac2d1c@suse.com>
Date: Tue, 18 Jun 2024 18:30:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com> <ZnFv7b4YNjeRXj6-@macbook>
 <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com> <ZnGerbiI7P9PHPmK@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZnGerbiI7P9PHPmK@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18.06.2024 16:50, Roger Pau Monné wrote:
> On Tue, Jun 18, 2024 at 04:34:50PM +0200, Jan Beulich wrote:
>> On 18.06.2024 13:30, Roger Pau Monné wrote:
>>> On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
>>>> On 13.06.2024 18:56, Roger Pau Monne wrote:
>>>>> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>>>>>          if ( desc->handler->disable )
>>>>>              desc->handler->disable(desc);
>>>>>  
>>>>> +        /*
>>>>> +         * If the current CPU is going offline and is (one of) the target(s) of
>>>>> +         * the interrupt, signal to check whether there are any pending vectors
>>>>> +         * to be handled in the local APIC after the interrupt has been moved.
>>>>> +         */
>>>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
>>>>> +            check_irr = true;
>>>>> +
>>>>>          if ( desc->handler->set_affinity )
>>>>>              desc->handler->set_affinity(desc, affinity);
>>>>>          else if ( !(warned++) )
>>>>>              set_affinity = false;
>>>>>  
>>>>> +        if ( check_irr && apic_irr_read(vector) )
>>>>> +            /*
>>>>> +             * Forward pending interrupt to the new destination, this CPU is
>>>>> +             * going offline and otherwise the interrupt would be lost.
>>>>> +             */
>>>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>>>> +                          desc->arch.vector);
>>>>
>>>> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
>>>> where new IRQs ought to be surfacing only at the new destination). Doesn't
>>>> this want moving ...
>>>>
>>>>>          if ( desc->handler->enable )
>>>>>              desc->handler->enable(desc);
>>>>
>>>> ... past the actual affinity change?
>>>
>>> Hm, but the ->enable() hook is just unmasking the interrupt, the
>>> actual affinity change is done in ->set_affinity(), and hence after
>>> the call to ->set_affinity() no further interrupts should be delivered
>>> to the CPU regardless of whether the source is masked?
>>>
>>> Or is it possible for the device/interrupt controller to not switch to
>>> use the new destination until the interrupt is unmasked, and hence
>>> could have pending masked interrupts still using the old destination?
>>> IIRC For MSI-X it's required that the device updates the destination
>>> target once the entry is unmasked.
>>
>> That's all not relevant here, I think. IRR gets set when an interrupt is
>> signaled, no matter whether it's masked.
> 
> I'm kind of lost here, what does signaling mean in this context?
> 
> I would expect the interrupt vector to not get set in IRR if the MSI-X
> entry is masked, as at that point the state of the address/data fields
> might not be consistent (that's the whole point of masking it, right?)
> 
>> It's its handling which the
>> masking would prevent, i.e. the "moving" of the set bit from IRR to ISR.
> 
> My understanding was that the masking would prevent the message write to
> the APIC from happening, and hence no vector should get set in IRR.

Hmm, yes, looks like I was confused. The masking is at the source side
(IO-APIC RTE, MSI-X entry, or - if supported - in the MSI capability).
So the sole case to worry about is MSI without mask-bit support then.

>> Plus we run with IRQs off here anyway if I'm not mistaken, so no
>> interrupt can be delivered to the local CPU. IOW whatever IRR bits it
>> has set (including ones becoming set between the IRR read and the actual
>> vector change), those would never be serviced. Hence the reading of the
>> bit ought to occur after the vector change: It's only then that we know
>> the IRR bit corresponding to the old vector can't become set anymore.
> 
> Right, and the vector change happens in ->set_affinity(), not
> ->enable().  See for example set_msi_affinity() and the
> write_msi_msg(), that's where the vector gets changed.
> 
>> And even then we're assuming that no interrupt signals might still be
>> "on their way" from the IO-APIC or a posted MSI message write by a
>> device to the LAPIC (I have no idea how to properly fence that, or
>> whether there are guarantees for this to never occur).
> 
> Yeah, those I expect would be completed in the window between the
> write of the new vector/destination and the reading of IRR.

Except we have no idea about the latencies.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 16:59:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 16:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743253.1150139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJcBP-00074p-RW; Tue, 18 Jun 2024 16:59:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743253.1150139; Tue, 18 Jun 2024 16:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJcBP-00074i-OC; Tue, 18 Jun 2024 16:59:47 +0000
Received: by outflank-mailman (input) for mailman id 743253;
 Tue, 18 Jun 2024 16:59:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJcBO-00074Y-7y; Tue, 18 Jun 2024 16:59:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJcBN-0005mk-VE; Tue, 18 Jun 2024 16:59:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJcBN-0001BV-N1; Tue, 18 Jun 2024 16:59:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJcBN-0004Gv-MV; Tue, 18 Jun 2024 16:59:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BHocqoz+i15sGRgpltDp2gqvvBTyJe3Fogb4yn8qNLA=; b=DmWlv09YtFpVJTpdl51OlxPsEn
	5O4eb0kMeASwYKCuNz5TEoxxIXME1i/MkGuFbfbVrGUcI4zpG5w0VZ8UKlD2hisj/m4k8W8rgg1Zd
	fQCoI+9/9y4dhQ9J+fou//vc5aLES1SlAnvfL3R9kz+pq4ENcHeaWppqLGivGO/m1aws=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186397-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186397: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=bfda27ddc89502190c79f74fc20cb81458d58449
X-Osstest-Versions-That:
    ovmf=b0c5781671f322472836ff25ee11242f59aa9945
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 16:59:45 +0000

flight 186397 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186397/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 bfda27ddc89502190c79f74fc20cb81458d58449
baseline version:
 ovmf                 b0c5781671f322472836ff25ee11242f59aa9945

Last test of basis   186394  2024-06-18 11:42:54 Z    0 days
Testing same since   186397  2024-06-18 15:12:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chao Li <lichao@loongson.cn>
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b0c5781671..bfda27ddc8  bfda27ddc89502190c79f74fc20cb81458d58449 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 17:54:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 17:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743270.1150149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJd26-0006Ai-Ot; Tue, 18 Jun 2024 17:54:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743270.1150149; Tue, 18 Jun 2024 17:54:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJd26-0006Ab-MD; Tue, 18 Jun 2024 17:54:14 +0000
Received: by outflank-mailman (input) for mailman id 743270;
 Tue, 18 Jun 2024 17:54:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJd25-0006AR-OU; Tue, 18 Jun 2024 17:54:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJd25-00079X-9c; Tue, 18 Jun 2024 17:54:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJd25-0003Zy-0p; Tue, 18 Jun 2024 17:54:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJd25-0005oL-0F; Tue, 18 Jun 2024 17:54:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NRB5jNrLOZWJQXLON3rRRB1rQ6L3PrlyLB+OjyA4klw=; b=otBZkgtTyO1LNQY8p2tl6QVAX1
	VN/EwjYEdBhUmfADdP8rkhX26bNec8/JiqrLrJyzmDle6sfrMi9klO3i4RAIZE+T79wLL7t42R8eo
	+voNwUfQL3tqM5NSx1H8cUvefC3VRZIxMjH0/GVYoX14hu7j1KdchFGSVSvdUpWPFDAo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186396-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186396: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bd59af99700f075d06a6d47a16f777c9519928e0
X-Osstest-Versions-That:
    xen=77b1ed1d02d082c457924a695e8dde7076285271
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 17:54:13 +0000

flight 186396 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186396/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bd59af99700f075d06a6d47a16f777c9519928e0
baseline version:
 xen                  77b1ed1d02d082c457924a695e8dde7076285271

Last test of basis   186387  2024-06-18 02:00:22 Z    0 days
Testing same since   186396  2024-06-18 14:00:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   77b1ed1d02..bd59af9970  bd59af99700f075d06a6d47a16f777c9519928e0 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 18:05:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 18:05:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743280.1150158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJdD3-0007vC-Lj; Tue, 18 Jun 2024 18:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743280.1150158; Tue, 18 Jun 2024 18:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJdD3-0007v5-JB; Tue, 18 Jun 2024 18:05:33 +0000
Received: by outflank-mailman (input) for mailman id 743280;
 Tue, 18 Jun 2024 18:05:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Kzm=NU=linaro.org=manos.pitsidianakis@srs-se1.protection.inumbo.net>)
 id 1sJdD2-0007uz-5e
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 18:05:32 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5845ba12-2d9d-11ef-90a3-e314d9c70b13;
 Tue, 18 Jun 2024 20:05:30 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2eaa89464a3so65091071fa.3
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 11:05:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5845ba12-2d9d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google; t=1718733928; x=1719338728; darn=lists.xenproject.org;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=1ZzHD8q6NG2XTuCLsc1kImC2M7CKDyg/5o6KXtlMNfw=;
        b=q6sQoVA30xYOrdxJHOM/dgl/3a3IUW8orZFiclceUSBGxObDRljPfmg5fm0UFnSGWa
         6i/BE6qskFvQM32YmW4XCMvcSm3rnuMmE3+enU+1rLfRkX7/GXBqHZdYjt8L598V+u9R
         PzdWDkzNc6S6BenqUGcsQFYXMnKyW2CYYZPr9Xu9y2xiYG4Q4d7CulB1iZ7pCzjmBEOl
         kDOBGvkhZaGqAtSbdYmuyzPorjA+MR1EMsuBvkqokqQgQj5iNpo1c7EXqNQZJO3X9wCX
         2N4wLI9JucjE8Fw9CNF8kOFwpRCMqiyOXTjRNgXbNwp7KiyY8vq6C4xB9AkEZWd32klF
         9MLQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718733928; x=1719338728;
        h=cc:to:subject:message-id:date:from:in-reply-to:references
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=1ZzHD8q6NG2XTuCLsc1kImC2M7CKDyg/5o6KXtlMNfw=;
        b=odvc5bjcwzEar9Ox6Z8K8efCTM41LHa2hcz/0R8H/UWLzKjtzltlFL3EWRe74ELZxO
         jyB7KEtfr29WeZ/zvZqj0ZLp/ZBSCScvkB6cPg/JTRJyuEFGVKN/bPevXNbrq2ZAwbfG
         HZ+WTNKOeAlCzQAHhLHpl+zQZhgaiqyEnK4dcBGVB5H1qTEctnpiIN8pXx2NrwoyFRfE
         ze3h97yNI0ng1O3vuujujmyLa5V60jyHXmXw4j7A1whVUQw5bt9kaK1k98XUOozDi2K6
         cC+YSYeGMtcpFQbuXYxNVQbXzir/HTsMxAOEhaSpD+y8VXZIHwtayreS07uxFoXBbP7Z
         EYmg==
X-Gm-Message-State: AOJu0YzuZVI53tcAoRr91bO/OkqwqtRJGPeKUobeY4DHJDDFC2FBxxPO
	NJp4N1fVSLnKil3nnSV1p9B1xwyjC2E2XDHN1Db8OP929ENuD4G8TGNuFH83iCkmLAzQ5xnFuUV
	SUZZgkO97Tyu2NGTPMGYt01JIxosDrPHdgAyzMQ==
X-Google-Smtp-Source: AGHT+IHk5dbPZ8iZrA/+lMaJQGdCAunIqeOItEfvxohsFWwWQYvVpFVhPrhMHd8O46kTWLrogwJWnliwwzfdg+oYwBM=
X-Received: by 2002:a2e:b0c9:0:b0:2eb:68d0:f1cc with SMTP id
 38308e7fff4ca-2ec3cfe125emr3746561fa.43.1718733928140; Tue, 18 Jun 2024
 11:05:28 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1712837961.git.manos.pitsidianakis@linaro.org>
 <334b5a46e31dbf3e8114e9ea8bafd92cf060f2af.1712837961.git.manos.pitsidianakis@linaro.org>
 <86e7327e-0a3c-4e50-bd62-720e986efdaa@perard>
In-Reply-To: <86e7327e-0a3c-4e50-bd62-720e986efdaa@perard>
From: Manos Pitsidianakis <manos.pitsidianakis@linaro.org>
Date: Tue, 18 Jun 2024 21:05:15 +0300
Message-ID: <CAAjaMXYX9x5U1Anyk03__VRMvBYLUimAx=mHoKojmgz6SEhkRQ@mail.gmail.com>
Subject: Re: [RFC PATCH v2 1/2] libs/light: add device model start timeout env var
To: Anthony PERARD <anthony.perard@cloud.com>
Cc: "open list:X86 Xen CPUs" <xen-devel@lists.xenproject.org>, Juergen Gross <jgross@suse.com>
Content-Type: multipart/alternative; boundary="000000000000403b33061b2dedcf"

--000000000000403b33061b2dedcf
Content-Type: text/plain; charset="UTF-8"

Hello Anthony, thank you very much for your review comments! They are very
thorough and knowledgeable.

I unfortunately only just saw them, I don't know why I missed them back in
April but doesn't matter. I am not going to continue work on these patches
due to circumstances changing for me, if the community deems them useful
with your corrections applied I hope someone else picks them up in the
future.

Thanks again,
--
Manos Pitsidianakis, Virtualization Engineer at Linaro

--000000000000403b33061b2dedcf
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"auto">Hello=C2=A0Anthony, thank you very much for your review c=
omments! They are very thorough and knowledgeable.<div dir=3D"auto"><br></d=
iv><div dir=3D"auto">I unfortunately only just saw them, I don&#39;t know w=
hy I missed them back in April but doesn&#39;t matter. I am not going to co=
ntinue work on these patches due to circumstances changing for me, if the c=
ommunity deems them useful with your corrections applied I hope someone els=
e picks them up in the future.</div><div dir=3D"auto"><br></div><div dir=3D=
"auto">Thanks again,</div><div dir=3D"auto">--</div><div dir=3D"auto">Manos=
 Pitsidianakis, Virtualization Engineer at Linaro</div><div dir=3D"auto"><b=
r></div></div>

--000000000000403b33061b2dedcf--


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 18:31:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 18:31:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743288.1150168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJdcE-0003Wr-JN; Tue, 18 Jun 2024 18:31:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743288.1150168; Tue, 18 Jun 2024 18:31:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJdcE-0003Wk-Ft; Tue, 18 Jun 2024 18:31:34 +0000
Received: by outflank-mailman (input) for mailman id 743288;
 Tue, 18 Jun 2024 18:31:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=An5i=NU=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJdcD-0003We-8S
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 18:31:33 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fb93cb68-2da0-11ef-b4bb-af5377834399;
 Tue, 18 Jun 2024 20:31:31 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id
 a640c23a62f3a-a63359aaacaso915433766b.1
 for <xen-devel@lists.xenproject.org>; Tue, 18 Jun 2024 11:31:31 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56fa41cbsm637414266b.225.2024.06.18.11.31.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 18 Jun 2024 11:31:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb93cb68-2da0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718735490; x=1719340290; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=rjVrqNbP8JDAWM4YAylBjML9Ao9iqvaZbmYG98FXKbU=;
        b=EjT9DoL8hlAWP+n/oAYkB0DebU9/52FqFDIgZHFBg9sz2ko5IaWgW+Ui0JDk6WNJBD
         vymMnEF4FxfKVV9BodDQx8f40QFVhZxbFz5d8WIP7tE+CM35gaZ/O5yJB5bwllEyLXRk
         1RJc7zA/6UImsca3+rLBAd1m+sSH1h4H/C2Tk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718735490; x=1719340290;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=rjVrqNbP8JDAWM4YAylBjML9Ao9iqvaZbmYG98FXKbU=;
        b=NI3/VYVc2etY62ykfEnDNfyYxqbmGobm6Q8CVwtBq8EyV8TRLAlARjF8eDHm5G2WkZ
         TAV806O14ISvzC31aecqgy35KAGGUH+lU1SG0amNkTgBSJcKobuZ+O6wICR/RokUgfnL
         BLoCNi5my2iI9g3Q5QyB6WbaeHRVy3J37ZNpBM0XDU/NEMzyT9bHN3q9Hfu/onJqyfBx
         s4GvFs1HniYRkhCaEyIa879hqzjxE5EitO1cAunagGmF/54L0uqsMeZywj1/Lx3xGocq
         gJ+ApdNfq7s1vtm2YRbmop6Mh9/X0/PE+Q0IFJ1aaAKXEwoW91L6WA4tpK7fWULgwQRd
         Awzw==
X-Gm-Message-State: AOJu0YxoKtDUzpHZgD0lcQ+Y9cXZtdLD9FUKb5MksmTq6gE8sMuFREl5
	7YZdrDn8A0u5AUuTzsxD1Y6Ff2Kv5AqTIDtU7LdckMVYwORLP+jM7lT7Fdt8lOYDmj7ZvJqYzcW
	P3uA=
X-Google-Smtp-Source: AGHT+IEsnxVZ0LAM2ciHaE95xLl55lbzkuG+XsBpj3CGebNqe2SSGACfTWmgaGvnVe923BTR5NNDeA==
X-Received: by 2002:a17:907:dac:b0:a6e:fccb:7146 with SMTP id a640c23a62f3a-a6fab615b9dmr26720166b.23.1718735490379;
        Tue, 18 Jun 2024 11:31:30 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH] AMD/IOMMU: Improve register_iommu_exclusion_range()
Date: Tue, 18 Jun 2024 19:31:28 +0100
Message-Id: <20240618183128.1981751-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

 * Use 64bit accesses instead of 32bit accesses
 * Simplify the constant names
 * Pull base into a local variable to avoid it being reloaded because of the
   memory clobber in writeq().

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>

RFC.  This is my proposed way of cleaning up the whole IOMMU file.  The
diffstat speaks for itself.

I've finally found the bit in the AMD IOMMU spec which says 64bit accesses are
permitted:

  3.4 IOMMU MMIO Registers:

  Software access to IOMMU registers may not be larger than 64 bits. Accesses
  must be aligned to the size of the access and the size in bytes must be a
  power of two. Software may use accesses as small as one byte.

If we want to further simplify the logic, we could reject non-page-aligned
base/limits when parsing IVRS.

Also, these registers don't exist in newer AMD systems:

  When the system is SNP-enabled, the contents of the Exclusion range base
  address field are locked and re-purposed as the Completion store base
  address field. This contains bits [51:12] of the 4Kbyte-aligned base address
  that defines the starting address range that host COMPLETION_WAIT stores may
  target

I take this to mean the writes are discarded.
---
 xen/drivers/passthrough/amd/iommu-defs.h | 20 +++---------
 xen/drivers/passthrough/amd/iommu_init.c | 41 ++++++------------------
 2 files changed, 14 insertions(+), 47 deletions(-)

diff --git a/xen/drivers/passthrough/amd/iommu-defs.h b/xen/drivers/passthrough/amd/iommu-defs.h
index c145248f9af1..9cf509b1f78b 100644
--- a/xen/drivers/passthrough/amd/iommu-defs.h
+++ b/xen/drivers/passthrough/amd/iommu-defs.h
@@ -338,22 +338,10 @@ union amd_iommu_control {
 };
 
 /* Exclusion Register */
-#define IOMMU_EXCLUSION_BASE_LOW_OFFSET		0x20
-#define IOMMU_EXCLUSION_BASE_HIGH_OFFSET	0x24
-#define IOMMU_EXCLUSION_LIMIT_LOW_OFFSET	0x28
-#define IOMMU_EXCLUSION_LIMIT_HIGH_OFFSET	0x2C
-#define IOMMU_EXCLUSION_BASE_LOW_MASK		0xFFFFF000U
-#define IOMMU_EXCLUSION_BASE_LOW_SHIFT		12
-#define IOMMU_EXCLUSION_BASE_HIGH_MASK		0xFFFFFFFFU
-#define IOMMU_EXCLUSION_BASE_HIGH_SHIFT		0
-#define IOMMU_EXCLUSION_RANGE_ENABLE_MASK	0x00000001U
-#define IOMMU_EXCLUSION_RANGE_ENABLE_SHIFT	0
-#define IOMMU_EXCLUSION_ALLOW_ALL_MASK		0x00000002U
-#define IOMMU_EXCLUSION_ALLOW_ALL_SHIFT		1
-#define IOMMU_EXCLUSION_LIMIT_LOW_MASK		0xFFFFF000U
-#define IOMMU_EXCLUSION_LIMIT_LOW_SHIFT		12
-#define IOMMU_EXCLUSION_LIMIT_HIGH_MASK		0xFFFFFFFFU
-#define IOMMU_EXCLUSION_LIMIT_HIGH_SHIFT	0
+#define IOMMU_MMIO_EXCLUSION_BASE           0x20
+#define   EXCLUSION_RANGE_ENABLE            (1 << 0)
+#define   EXCLUSION_ALLOW_ALL               (1 << 1)
+#define IOMMU_MMIO_EXCLUSION_LIMIT          0x28
 
 /* Extended Feature Register */
 #define IOMMU_EXT_FEATURE_MMIO_OFFSET                   0x30
diff --git a/xen/drivers/passthrough/amd/iommu_init.c b/xen/drivers/passthrough/amd/iommu_init.c
index 6c0dc2d5cb69..bcf1903e716e 100644
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -223,40 +223,19 @@ static void set_iommu_command_buffer_control(struct amd_iommu *iommu,
 
 static void register_iommu_exclusion_range(struct amd_iommu *iommu)
 {
-    u32 addr_lo, addr_hi;
-    u32 entry;
-
-    addr_lo = iommu->exclusion_limit;
-    addr_hi = iommu->exclusion_limit >> 32;
-
-    set_field_in_reg_u32((u32)addr_hi, 0,
-                         IOMMU_EXCLUSION_LIMIT_HIGH_MASK,
-                         IOMMU_EXCLUSION_LIMIT_HIGH_SHIFT, &entry);
-    writel(entry, iommu->mmio_base+IOMMU_EXCLUSION_LIMIT_HIGH_OFFSET);
-
-    set_field_in_reg_u32((u32)addr_lo >> PAGE_SHIFT, 0,
-                         IOMMU_EXCLUSION_LIMIT_LOW_MASK,
-                         IOMMU_EXCLUSION_LIMIT_LOW_SHIFT, &entry);
-    writel(entry, iommu->mmio_base+IOMMU_EXCLUSION_LIMIT_LOW_OFFSET);
-
-    addr_lo = iommu->exclusion_base & DMA_32BIT_MASK;
-    addr_hi = iommu->exclusion_base >> 32;
+    void *__iomem base = iommu->mmio_base;
+    uint64_t val;
 
-    entry = 0;
-    iommu_set_addr_hi_to_reg(&entry, addr_hi);
-    writel(entry, iommu->mmio_base+IOMMU_EXCLUSION_BASE_HIGH_OFFSET);
-
-    entry = 0;
-    iommu_set_addr_lo_to_reg(&entry, addr_lo >> PAGE_SHIFT);
+    /* Exclusion Limit */
+    val = iommu->exclusion_limit & PAGE_MASK;
+    writeq(val, base + IOMMU_MMIO_EXCLUSION_LIMIT);
 
-    set_field_in_reg_u32(iommu->exclusion_allow_all, entry,
-                         IOMMU_EXCLUSION_ALLOW_ALL_MASK,
-                         IOMMU_EXCLUSION_ALLOW_ALL_SHIFT, &entry);
+    /* Exclusion Base, inc control bits. */
+    val = ((iommu->exclusion_base & PAGE_MASK) |
+           (iommu->exclusion_allow_all ? EXCLUSION_ALLOW_ALL : 0) |
+           (iommu->exclusion_enable    ? EXCLUSION_RANGE_ENABLE : 0));
 
-    set_field_in_reg_u32(iommu->exclusion_enable, entry,
-                         IOMMU_EXCLUSION_RANGE_ENABLE_MASK,
-                         IOMMU_EXCLUSION_RANGE_ENABLE_SHIFT, &entry);
-    writel(entry, iommu->mmio_base+IOMMU_EXCLUSION_BASE_LOW_OFFSET);
+    writeq(val, base + IOMMU_MMIO_EXCLUSION_BASE);
 }
 
 static void cf_check set_iommu_event_log_control(
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 18 20:51:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 20:51:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743301.1150180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJfnB-0002j4-BQ; Tue, 18 Jun 2024 20:51:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743301.1150180; Tue, 18 Jun 2024 20:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJfnB-0002iw-5G; Tue, 18 Jun 2024 20:51:01 +0000
Received: by outflank-mailman (input) for mailman id 743301;
 Tue, 18 Jun 2024 20:51:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJfnA-0002im-H3; Tue, 18 Jun 2024 20:51:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJfn9-0002D2-Mp; Tue, 18 Jun 2024 20:50:59 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJfn9-00021g-CU; Tue, 18 Jun 2024 20:50:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJfn9-0006YF-C4; Tue, 18 Jun 2024 20:50:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iShEJbxRB4TCONkFIgFVwwryzE0yOGA/teM9Q+1YS44=; b=gzc7zvxWrfYu84c88eZlBazH8Z
	Ge6NWGZjMczhda+LapGm+DgNr7lZ/UoBk7OKZWlHMCoABktJ/ahlRPBzRAfa4tuwvT0o+E997j1uO
	bTKZTwciX2aYjNURlvEVjmT24exXhZahbjcmxvaiKqVfkDPXiG5MHo4wu127ESc91sg8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186393-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186393: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:guest-start:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=77b1ed1d02d082c457924a695e8dde7076285271
X-Osstest-Versions-That:
    xen=8b4243a9b560c89bb259db5a27832c253d4bebc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 20:50:59 +0000

flight 186393 xen-unstable real [real]
flight 186400 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186393/
http://logs.test-lab.xenproject.org/osstest/logs/186400/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-dom0pvh-xl-amd 14 guest-start      fail pass in 186400-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186386
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186386
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186386
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186386
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186386
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186386
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  77b1ed1d02d082c457924a695e8dde7076285271
baseline version:
 xen                  8b4243a9b560c89bb259db5a27832c253d4bebc7

Last test of basis   186386  2024-06-18 01:53:35 Z    0 days
Testing same since   186393  2024-06-18 09:10:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nicola Vetrini <nicola.vetrini@bugseng.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8b4243a9b5..77b1ed1d02  77b1ed1d02d082c457924a695e8dde7076285271 -> master


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 21:40:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 21:40:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743316.1150197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJgYV-0007aK-2x; Tue, 18 Jun 2024 21:39:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743316.1150197; Tue, 18 Jun 2024 21:39:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJgYU-0007aD-W2; Tue, 18 Jun 2024 21:39:54 +0000
Received: by outflank-mailman (input) for mailman id 743316;
 Tue, 18 Jun 2024 21:39:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJgYT-0007a3-Py; Tue, 18 Jun 2024 21:39:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJgYT-00033b-I3; Tue, 18 Jun 2024 21:39:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJgYT-0003Me-8k; Tue, 18 Jun 2024 21:39:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJgYT-0000bc-8F; Tue, 18 Jun 2024 21:39:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=orW34woupyhSLZynTNM1gAMSndXNrGnCY9MSTKLeSTQ=; b=BKXHReNrkp/y6bUOsKrmkc8lIs
	DRcAwLoxj8+JyScrSBzxkWiaXk1YzgNG1x2CSlpCMZ04qPYNZv/0l3owlnvO6YgLHf1yXql5h6bWP
	kpVFKSV3APf7j2Bugmk0DjVkkt2C8ROPQ+O6jGANG6ozr1Fq2a3quSPDznDNqiSKPssA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186399-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186399: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ffce430d2b65d508a1604dc986ba16db3583943d
X-Osstest-Versions-That:
    ovmf=bfda27ddc89502190c79f74fc20cb81458d58449
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 21:39:53 +0000

flight 186399 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186399/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ffce430d2b65d508a1604dc986ba16db3583943d
baseline version:
 ovmf                 bfda27ddc89502190c79f74fc20cb81458d58449

Last test of basis   186397  2024-06-18 15:12:52 Z    0 days
Testing same since   186399  2024-06-18 19:43:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Corvin Köhne <c.koehne@beckhoff.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bfda27ddc8..ffce430d2b  ffce430d2b65d508a1604dc986ba16db3583943d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 23:12:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 23:12:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743327.1150207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJhzU-0001BD-DM; Tue, 18 Jun 2024 23:11:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743327.1150207; Tue, 18 Jun 2024 23:11:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJhzU-0001B5-AN; Tue, 18 Jun 2024 23:11:52 +0000
Received: by outflank-mailman (input) for mailman id 743327;
 Tue, 18 Jun 2024 23:11:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gCZ4=NU=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sJhzS-0001Ax-4b
 for xen-devel@lists.xenproject.org; Tue, 18 Jun 2024 23:11:50 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 217383e7-2dc8-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 01:11:47 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 0291BCE1BD2;
 Tue, 18 Jun 2024 23:11:44 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DA36DC3277B;
 Tue, 18 Jun 2024 23:11:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 217383e7-2dc8-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718752303;
	bh=7USpFt4OqIKRqBf2KY/DprGxHV1VmAEb2Wjbt9eDtu8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=QF61WvPE15eKCUr5MxuyhW1yMo6/xt/0E5yrDha/IkiyHRbDGEBBhAjVD/2szTLSs
	 rEhiO3ukYHbO+DO4hAJRl6cW4MkPl+xEozQMDvwRuqiQ+xGX9fu5AKtg8+45Vyw278
	 WK3fWWLNavAuX1WSL3mVvghxCNp78t/WsQANUI5lBxlcDy3gV/ccJnCJ48UAC1wxut
	 TneH+9mPxwy8FWlRNLhPEH0bLvftNPsUmjLKXilBQTpdzqEdHHMO52M/Gk0+U0IFgE
	 l280Wn0H0hf3JssyDBNjsIK4icuZZ9hv/xj5vJCiEUal7zc6FX7qK1sltzaBXrs055
	 ZPZ2u/ayyy6qQ==
Date: Tue, 18 Jun 2024 16:11:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
cc: Andrew Cooper <andrew.cooper3@citrix.com>, 
    Xen-devel <xen-devel@lists.xenproject.org>, 
    Jan Beulich <JBeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>, 
    Shawn Anastasio <sanastasio@raptorengineering.com>
Subject: Re: [PATCH for-4.19] xen/arch: Centralise __read_mostly and
 __ro_after_init
In-Reply-To: <ZmxKxob_5LZHvCaa@macbook>
Message-ID: <alpine.DEB.2.22.394.2406181611190.2572888@ubuntu-linux-20-04-desktop>
References: <20240614124950.1557058-1-andrew.cooper3@citrix.com> <ZmxKxob_5LZHvCaa@macbook>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1100158092-1718752303=:2572888"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1100158092-1718752303=:2572888
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Fri, 13 Jun 2024, Roger Pau Monné wrote:
> On Fri, Jun 14, 2024 at 01:49:50PM +0100, Andrew Cooper wrote:
> > These being in cache.h is inherited from Linux, but is an inappropriate
> > location to live.
> > 
> > __read_mostly is an optimisation related to data placement in order to avoid
> > having shared data in cachelines that are likely to be written to, but it
> > really is just a section of the linked image separating data by usage
> > patterns; it has nothing to do with cache sizes or flushing logic.
> > 
> > Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
> > arch/cache.h, and has literally nothing whatsoever to do with caches.
> > 
> > Move the definitions into xen/sections.h, which in particular means that
> > RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
> > to provide short descriptions of what these are used for.
> > 
> > For now, leave TODO comments next to the other identical definitions.  It
> > turns out that unpicking cache.h is more complicated than it appears because a
> > number of files use it for transitive dependencies.
> 
> I assume that including sections.h from cache.h (in the meantime)
> creates a circular header dependency?

Assuming this patch doesn't introduce ECLAIR regressions:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
--8323329-1100158092-1718752303=:2572888--


From xen-devel-bounces@lists.xenproject.org Tue Jun 18 23:52:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 Jun 2024 23:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743334.1150217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJicJ-0006CL-CB; Tue, 18 Jun 2024 23:51:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743334.1150217; Tue, 18 Jun 2024 23:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJicJ-0006CE-7v; Tue, 18 Jun 2024 23:51:59 +0000
Received: by outflank-mailman (input) for mailman id 743334;
 Tue, 18 Jun 2024 23:51:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJicH-0006C4-SC; Tue, 18 Jun 2024 23:51:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJicH-0005LX-HD; Tue, 18 Jun 2024 23:51:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJicH-0001iw-7K; Tue, 18 Jun 2024 23:51:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJicH-0006l4-6l; Tue, 18 Jun 2024 23:51:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vBWcF2h2wA78dl4aFgGjuAhkN+3/aPpGZDG8J/vRDRI=; b=RusA9+0xOm5eLRmqoE47C1MiQz
	REwEggCtZd5RM+mFe9Y5HHkkyQSnREKSqY/XtrP6/YlC31MuIQLpNlQLPlj+p5w6VKF63/Xy5S65q
	ZEVSA+/U5Gq76m61j2Gcc4oyl0rJJExqEKBIia53PbTK8irJH6+cnftisD/30X6gHIl4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186402-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186402: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=c1d1910be6e04a8b1a73090cf2881fb698947a6e
X-Osstest-Versions-That:
    ovmf=ffce430d2b65d508a1604dc986ba16db3583943d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 Jun 2024 23:51:57 +0000

flight 186402 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186402/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c1d1910be6e04a8b1a73090cf2881fb698947a6e
baseline version:
 ovmf                 ffce430d2b65d508a1604dc986ba16db3583943d

Last test of basis   186399  2024-06-18 19:43:01 Z    0 days
Testing same since   186402  2024-06-18 21:41:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ffce430d2b..c1d1910be6  c1d1910be6e04a8b1a73090cf2881fb698947a6e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 00:38:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 00:38:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743346.1150227 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJjKw-0003EO-Kt; Wed, 19 Jun 2024 00:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743346.1150227; Wed, 19 Jun 2024 00:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJjKw-0003EH-HG; Wed, 19 Jun 2024 00:38:06 +0000
Received: by outflank-mailman (input) for mailman id 743346;
 Wed, 19 Jun 2024 00:38:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zUHp=NV=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sJjKu-0003EB-Hi
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 00:38:04 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2f2f7cb2-2dd4-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 02:38:03 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 6043161908;
 Wed, 19 Jun 2024 00:38:01 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B73A7C3277B;
 Wed, 19 Jun 2024 00:37:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f2f7cb2-2dd4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718757481;
	bh=yBrrMoQUpHToHslhYSOqlIBGUK4AP3EOd3KMNC+pt6w=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=fZlpiDG9oKXg7HfsERuZOo9sVKKwbymQWJS++SPiojT5dnYF1/sakM6S1XyTyX0vj
	 1LCTev0qohtO/znan/POAhNBJDZ5btnFaJ96mjQwk8EYPcsaZ+gE5oJKNOExqER132
	 qFruw7CKQDmEQSJQ6GYZ81b1bMY3XXyzAimmAyfDwUKbBa2tn/lS96prOWk0LhRs5V
	 OdNgAc1J+ttVys8qEja23KQHkhvWem3pvmWvRoM53Z8bj92M/NGtTmois8kLh8TEZN
	 7KXE6MetE8D+unmCany2e0BqR8ongab8HYsAkFbwomDkoMKrb2vkXwTkrwY7ZPPamB
	 5etIvzvUdi5vQ==
Date: Tue, 18 Jun 2024 17:37:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: "Oleksii K." <oleksii.kurochko@gmail.com>
cc: Julien Grall <julien@xen.org>, 
    Stefano Stabellini <stefano.stabellini@amd.com>, 
    xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    bertrand.marquis@arm.com, michal.orzel@amd.com, Volodymyr_Babchuk@epam.com, 
    Henry Wang <xin.wang2@amd.com>, Alec Kwapis <alec.kwapis@medtronic.com>, 
    "Daniel P . Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v4 2/4] xen/arm: Alloc XenStore page for Dom0less DomUs
 from hypervisor
In-Reply-To: <b9c8e762af9ca04d9194fdaa0379f2fe9096af29.camel@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2406181734140.2572888@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2405241552240.2557291@ubuntu-linux-20-04-desktop>  <20240524225522.2878481-2-stefano.stabellini@amd.com>  <697aadfd-a8c1-4f1b-8806-6a5acbf343ba@xen.org> <b9c8e762af9ca04d9194fdaa0379f2fe9096af29.camel@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 27 May 2024, Oleksii K. wrote:
> > I don't think it is a big problem if this is not merged for the code 
> > freeze as this is technically a bug fix.
>
> Agree, this is not a problem, as it still looks to me like a bug fix.
> 
> ~ Oleksii

Hi Oleksii, this version of the series was already fully acked with minor
nits, and you gave the go-ahead for this release as it is a bug fix. Due
to two weeks of travel I only managed to commit the series now; sorry for
the delay.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 02:43:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 02:43:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743364.1150250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJlHU-00007W-Uv; Wed, 19 Jun 2024 02:42:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743364.1150250; Wed, 19 Jun 2024 02:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJlHU-00007N-QQ; Wed, 19 Jun 2024 02:42:40 +0000
Received: by outflank-mailman (input) for mailman id 743364;
 Wed, 19 Jun 2024 02:42:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlHT-00007C-Jn; Wed, 19 Jun 2024 02:42:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlHT-0008Jv-8q; Wed, 19 Jun 2024 02:42:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlHS-0006Oo-Tc; Wed, 19 Jun 2024 02:42:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlHS-00011X-TA; Wed, 19 Jun 2024 02:42:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dmyGMjBTnf1UFWByWBYBLlfKRUR0NBWFMzQovALb5NA=; b=oLZ8ZOXggPBEgQU7C6RNEPxu47
	DC6U7KrSyssN41glD0lal/e0qJAAAMW0+gsNaGmvAS0JwwefQWRoe72kVyp7tCFW78nlXFE65LBdN
	QIJwNcITDjUVSFjMGO2EAymKDTaFRsjAS5tkOiUb2sw13DPnwwgCCNKL0jkj12vxBXPo=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186398-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186398: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=46d1907d1caaaaa422ae814c52065f243caa010a
X-Osstest-Versions-That:
    linux=6226e74900d7c106c7c86b878dc6779cfdb20c2b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 02:42:38 +0000

flight 186398 linux-linus real [real]
flight 186403 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186398/
http://logs.test-lab.xenproject.org/osstest/logs/186403/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 186403-retest
 test-armhf-armhf-examine      8 reboot              fail pass in 186403-retest
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail pass in 186403-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186403-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186385

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186403 like 186385
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186403 never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186403 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186403 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186385
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186385
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                46d1907d1caaaaa422ae814c52065f243caa010a
baseline version:
 linux                6226e74900d7c106c7c86b878dc6779cfdb20c2b

Last test of basis   186385  2024-06-17 19:43:39 Z    1 days
Failing since        186389  2024-06-18 03:58:06 Z    0 days    2 attempts
Testing same since   186398  2024-06-18 16:13:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandr Nogikh <nogikh@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@gmail.com>
  Ard Biesheuvel <ardb@kernel.org>
  Ashish Kalra <Ashish.Kalra@amd.com>
  Baolin Wang <baolin.wang@linux.alibaba.com>
  Christian Göttsche <cgzones@googlemail.com>
  Christian Schrefl <chrisi.schrefl@gmail.com>
  Chuck Lever <chuck.lever@oracle.com>
  David Hildenbrand <david@redhat.com>
  GUO Zihua <guozihua@huawei.com>
  Hugh Dickins <hughd@google.com>
  Jean-Philippe Brucker <jean-philippe@linaro.org>
  Jeff Xu <jeffxu@chromium.org>
  John Johansen <john.johansen@canonical.com>
  Joseph Qi <joseph.qi@linux.alibaba.com>
  Kees Cook <kees@kernel.org>
  Kefeng Wang <wangkefeng.wang@huawei.com>
  Lance Yang <ioworker0@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lorenzo Stoakes <lstoakes@gmail.com>
  Mark Brown <broonie@kernel.org>
  Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com>
  Oleg Nesterov <oleg@redhat.com>
  Paul Moore <paul@paul-moore.com>
  Peter Oberparleiter <oberpar@linux.ibm.com>
  Peter Xu <peterx@redhat.com>
  Rafael Aquini <aquini@redhat.com>
  Ran Xiaokai <ran.xiaokai@zte.com.cn>
  Suren Baghdasaryan <surenb@google.com>
  Vlastimil Babka <vbabka@suse.cz>
  Waiman Long <longman@redhat.com>
  Wei Fu <fuweid89@gmail.com>
  Yury Norov <yury.norov@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   6226e74900d7..46d1907d1caa  46d1907d1caaaaa422ae814c52065f243caa010a -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 03:14:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 03:14:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743375.1150259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJllm-0003mJ-AO; Wed, 19 Jun 2024 03:13:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743375.1150259; Wed, 19 Jun 2024 03:13:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJllm-0003mC-6z; Wed, 19 Jun 2024 03:13:58 +0000
Received: by outflank-mailman (input) for mailman id 743375;
 Wed, 19 Jun 2024 03:13:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlll-0003m2-Kq; Wed, 19 Jun 2024 03:13:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlll-0000Pk-EQ; Wed, 19 Jun 2024 03:13:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlll-0007Hs-5e; Wed, 19 Jun 2024 03:13:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJlll-00041B-5E; Wed, 19 Jun 2024 03:13:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hyatbNoXIHvQ42m7f2HdDhB288E4++v/6h5SjP7xlEE=; b=ZoXCeq5HSnFd5WnALaFLe+THcz
	USSY34S1TUT/ivKJCtujJMrE9FtJKfy43/41XaDmzBJJDTCyIotD4Vgj1aWZeLKfU4nZnoVhVuhVc
	9i/illfY/yDEfpx4PBxN4pngkuRhaRO6fG8WpueU2BxWReTauaOMhwIXbA5Wz6GVug3k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186405-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186405: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=26a30abdd0f7fe5a9d2421cba6efe9397185ad98
X-Osstest-Versions-That:
    ovmf=c1d1910be6e04a8b1a73090cf2881fb698947a6e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 03:13:57 +0000

flight 186405 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186405/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 26a30abdd0f7fe5a9d2421cba6efe9397185ad98
baseline version:
 ovmf                 c1d1910be6e04a8b1a73090cf2881fb698947a6e

Last test of basis   186402  2024-06-18 21:41:10 Z    0 days
Testing same since   186405  2024-06-19 01:11:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nhi Pham <nhi@os.amperecomputing.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c1d1910be6..26a30abdd0  26a30abdd0f7fe5a9d2421cba6efe9397185ad98 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 03:40:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 03:40:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743385.1150270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJmB0-0006UU-8p; Wed, 19 Jun 2024 03:40:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743385.1150270; Wed, 19 Jun 2024 03:40:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJmB0-0006U6-3X; Wed, 19 Jun 2024 03:40:02 +0000
Received: by outflank-mailman (input) for mailman id 743385;
 Wed, 19 Jun 2024 03:40:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WeeD=NV=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJmAy-0006N9-Tz
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 03:40:01 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20601.outbound.protection.outlook.com
 [2a01:111:f403:2415::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 99b727c2-2ded-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 05:39:59 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by BL3PR12MB6641.namprd12.prod.outlook.com (2603:10b6:208:38d::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Wed, 19 Jun
 2024 03:39:55 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Wed, 19 Jun 2024
 03:39:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99b727c2-2ded-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PH/4BNwtSh53qtQTA+noW2eYfSU8tMTFPzVy3u2fyHqOJHhC9uZ0T8II3ZTrDGginvILFWHUopaEvqy27z7MxfP7dKyXSqxpfXF8TnhE7sZNtiI4gVyA00Uc7ZIIybIE4tHyBqUCyJZtjwdcX1NpU+2JGG13lppw3HjvlPDyzIVEjMrZnrdU7rogW7JV8uEwpnv2EIB0QIHRxJ9sTdoqMKv2GIOUsMPowAYKWuTaLqPF2BAA3HlnWb3oL1aDmq9MQmt7X4Wk/ZeMehuiykrWJWH56P1+nn20sL3TFaEZs8TqjEH2QWp7eJl2xNZGrjQ6tRRHxJdmEbNc53jghgzu7Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=UHwP6RvVoC1Qe2a9MsmnLxvwXeRhc2OA9H5xZ9TTw9c=;
 b=LiNMiaXfbERub4aXsDsCtwaygadXK3x6rxw8ha39LetnBmvKNKsEHezE4Pre5Ys+BjWN4/bSDQfrATzhAf2g6N/C9ZWYjK7/6EL7IHwoHcOSXqov/JHWY7m2xuWCem5kY/z+aWETo5GDSIh4VneRtmo5pSz6NAPTIvHRuU4elzbIpC3obeTPjwNt2bKQnQfBvbT2UY+3fU8KVQT7bhcmtBK1H2FN2UEM+IRFXa1Zks9dtozAbuBqlA+NuJ7vbYwRgY9zlO2XzWVeyLIYErdL1LcerDnAc4cdUuN5k7LS7/qTnxZBHXxnK2e2ETjCx07LIHQMWCC1CFSxgX3YoyZrww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UHwP6RvVoC1Qe2a9MsmnLxvwXeRhc2OA9H5xZ9TTw9c=;
 b=MfwP18PyOD4peqWTp5LKXy3m2ipVkVWe7GNibD/QMz8efeGDEcz0JANo8RljWICt//iD5h1m5G9INVuDaxKX+YYUa9eK74i9RHVGUb0BP2xKk5rxPRu0Lt4XNwGQ6zk8T01Y2sDEaxnqI7Zc3Z4uFd8UHVOjjGMsMNy+uZ0JcWg=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
Thread-Topic: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
Thread-Index: AQHawJTga4AgQ2zhd0+hWXdwZesfOrHMAOYAgAGQCAD//6IuAIABxSoA
Date: Wed, 19 Jun 2024 03:39:55 +0000
Message-ID:
 <BL1PR12MB5849E84E58725FB947CCECD8E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-2-Jiqian.Chen@amd.com>
 <4e2accc2-e81d-450a-af2d-38884455de9c@suse.com>
 <BL1PR12MB58499527CFA36446EAD3FCE0E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f8381d8b-0ad2-4766-8a53-d1ee44ea7e05@suse.com>
In-Reply-To: <f8381d8b-0ad2-4766-8a53-d1ee44ea7e05@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|BL3PR12MB6641:EE_
x-ms-office365-filtering-correlation-id: 2cf8d7da-9826-4778-408d-08dc90117c3c
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|366013|7416011|376011|1800799021|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(366013)(7416011)(376011)(1800799021)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <17B104F7D7CD5D4EAD67D251F96424CD@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2cf8d7da-9826-4778-408d-08dc90117c3c
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jun 2024 03:39:55.2664
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: rjmfN5GJgBiIvsHsgYILkb1nT0GmjiygD88d6rqKHQPdw9llAQWJnIYowTul2yekU+Qg70wYYxV5SI5YpIerAQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL3PR12MB6641

On 2024/6/18 16:33, Jan Beulich wrote:
> On 18.06.2024 08:25, Chen, Jiqian wrote:
>> On 2024/6/17 22:17, Jan Beulich wrote:
>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>> --- a/xen/drivers/pci/physdev.c
>>>> +++ b/xen/drivers/pci/physdev.c
>>>> @@ -2,11 +2,17 @@
>>>>  #include <xen/guest_access.h>
>>>>  #include <xen/hypercall.h>
>>>>  #include <xen/init.h>
>>>> +#include <xen/vpci.h>
>>>>  
>>>>  #ifndef COMPAT
>>>>  typedef long ret_t;
>>>>  #endif
>>>>  
>>>> +static const struct pci_device_state_reset_method
>>>> +                    pci_device_state_reset_methods[] = {
>>>> +    [ DEVICE_RESET_FLR ].reset_fn = vpci_reset_device_state,
>>>> +};
>>>
>>> What about the other three DEVICE_RESET_*? In particular ...
>> I don't know how to implement the other three types of reset.
>> This is a design form so that corresponding processing functions can be added later if necessary. Do I need to set them to NULL pointers in this array?
> 
> No.
> 
>> Does this form conform to your previous suggestion of using only one hypercall to handle all types of resets?
> 
> Yes, at least in principle. Question here is: To be on the safe side,
> wouldn't we better reset state for all forms of reset, leaving possible
> relaxation of that for later? At which point the function wouldn't need
> calling indirectly, and instead would be passed the reset type as an
> argument.
If I understood correctly, next version should be?
Use macros to represent different reset types.
Add switch cases in PHYSDEVOP_pci_device_state_reset to handle different reset functions.
Add reset_type as a function parameter to vpci_reset_device_state for possible future use.

+    case PHYSDEVOP_pci_device_state_reset:
+    {
+        struct pci_device_state_reset dev_reset;
+        struct pci_dev *pdev;
+        pci_sbdf_t sbdf;
+
+        if ( !is_pci_passthrough_enabled() )
+            return -EOPNOTSUPP;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(&dev_reset, arg, 1) != 0 )
+            break;
+
+        sbdf = PCI_SBDF(dev_reset.dev.seg,
+                        dev_reset.dev.bus,
+                        dev_reset.dev.devfn);
+
+        ret = xsm_resource_setup_pci(XSM_PRIV, sbdf.sbdf);
+        if ( ret )
+            break;
+
+        pcidevs_lock();
+        pdev = pci_get_pdev(NULL, sbdf);
+        if ( !pdev )
+        {
+            pcidevs_unlock();
+            ret = -ENODEV;
+            break;
+        }
+
+        write_lock(&pdev->domain->pci_lock);
+        pcidevs_unlock();
+        /* Implement FLR, other reset types may be implemented in future */
+        switch ( dev_reset.reset_type )
+        {
+        case PCI_DEVICE_STATE_RESET_COLD:
+        case PCI_DEVICE_STATE_RESET_WARM:
+        case PCI_DEVICE_STATE_RESET_HOT:
+        case PCI_DEVICE_STATE_RESET_FLR:
+            ret = vpci_reset_device_state(pdev, dev_reset.reset_type);
+            break;
+        }
+        write_unlock(&pdev->domain->pci_lock);
+
+        if ( ret )
+            dprintk(XENLOG_ERR,
+                    "%pp: failed to reset vPCI device state\n", &sbdf);
+        break;
+    }

> 
>>> Also, nit (further up): Opening figure braces for a new scope go onto their
>> OK, will change in next version.
>>> own line. Then again I notice that apparently _all_ other instances in this
>>> file are doing it the wrong way, too.
>> Do I need to change them in this patch?
> 
> No.
> 
>>>> --- a/xen/drivers/vpci/vpci.c
>>>> +++ b/xen/drivers/vpci/vpci.c
>>>> @@ -172,6 +172,15 @@ int vpci_assign_device(struct pci_dev *pdev)
>>>>  
>>>>      return rc;
>>>>  }
>>>> +
>>>> +int vpci_reset_device_state(struct pci_dev *pdev)
>>>
>>> As a target of an indirect call this needs to be annotated cf_check (both
>>> here and in the declaration, unlike __must_check, which is sufficient to
>>> have on just the declaration).
>> OK, will add cf_check in next version.
> 
> Which may not be necessary if you go the route suggested above.
> 
>>>> --- a/xen/include/xen/pci.h
>>>> +++ b/xen/include/xen/pci.h
>>>> @@ -156,6 +156,22 @@ struct pci_dev {
>>>>      struct vpci *vpci;
>>>>  };
>>>>  
>>>> +struct pci_device_state_reset_method {
>>>> +    int (*reset_fn)(struct pci_dev *pdev);
>>>> +};
>>>> +
>>>> +enum pci_device_state_reset_type {
>>>> +    DEVICE_RESET_FLR,
>>>> +    DEVICE_RESET_COLD,
>>>> +    DEVICE_RESET_WARM,
>>>> +    DEVICE_RESET_HOT,
>>>> +};
>>>> +
>>>> +struct pci_device_state_reset {
>>>> +    struct physdev_pci_device dev;
>>>> +    enum pci_device_state_reset_type reset_type;
>>>> +};
>>>
>>> This is the struct to use as hypercall argument. How can it live outside of
>>> any public header? Also, when moving it there, beware that you should not
>>> use enum-s there. Only handles and fixed-width types are permitted.
>> Yes, I put them there before, but enum is not permitted.
>> Then, do you have other suggested type to use to distinguish different types of resets, because enum can't work in the public header?
> 
> Do like we do everywhere else: Use a set of #define-s.
> 
>>>> --- a/xen/include/xen/vpci.h
>>>> +++ b/xen/include/xen/vpci.h
>>>> @@ -38,6 +38,7 @@ int __must_check vpci_assign_device(struct pci_dev *pdev);
>>>>  
>>>>  /* Remove all handlers and free vpci related structures. */
>>>>  void vpci_deassign_device(struct pci_dev *pdev);
>>>> +int __must_check vpci_reset_device_state(struct pci_dev *pdev);
>>>
>>> What's the purpose of this __must_check, when the sole caller is calling
>>> this through a function pointer, which isn't similarly annotated?
>> This is what I added before introducing function pointers, but after modifying the implementation, it was not taken into account.
>> I will remove __must_check
> 
> Why remove? Is it relevant for the return value to be checked? Or if it
> isn't, why would there be a return value?
> 
> Jan
> 
>> and change to cf_check, according to your above comment.
> 

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 03:54:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 03:54:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743392.1150279 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJmOH-0000Un-Aq; Wed, 19 Jun 2024 03:53:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743392.1150279; Wed, 19 Jun 2024 03:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJmOH-0000Ug-85; Wed, 19 Jun 2024 03:53:45 +0000
Received: by outflank-mailman (input) for mailman id 743392;
 Wed, 19 Jun 2024 03:53:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJmOF-0000UD-MC; Wed, 19 Jun 2024 03:53:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJmOF-0001HK-El; Wed, 19 Jun 2024 03:53:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJmOF-0008M8-41; Wed, 19 Jun 2024 03:53:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJmOF-0003q8-3c; Wed, 19 Jun 2024 03:53:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l/keBvUxs6mMkorAYbIE9iYg/2sb/xWeS1qa+Aons4o=; b=O5D5ILqLKMKdhHDW4jH8flKCkE
	OfPhviSEgxUoVUrWdoJw53TTAsSw5C2UxYexWSBBjuf5AMw5VzsB0s8uGkF/fn91WzcEdHtGSSlaP
	FhTwto3nIUCRZifW40pDjlbPaifxKoBKd3CO0sQFnQUWHICFm3aPEdUb3ab6Pa5kXv8c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186404-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186404: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=53c5c99e8744495395c1274595d6ca55947d1d6a
X-Osstest-Versions-That:
    xen=bd59af99700f075d06a6d47a16f777c9519928e0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 03:53:43 +0000

flight 186404 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186404/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  53c5c99e8744495395c1274595d6ca55947d1d6a
baseline version:
 xen                  bd59af99700f075d06a6d47a16f777c9519928e0

Last test of basis   186396  2024-06-18 14:00:23 Z    0 days
Testing same since   186404  2024-06-19 01:00:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@vates.tech>
  Henry Wang <xin.wang2@amd.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   bd59af9970..53c5c99e87  53c5c99e8744495395c1274595d6ca55947d1d6a -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 05:35:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 05:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743401.1150288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJnye-0004H1-Az; Wed, 19 Jun 2024 05:35:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743401.1150288; Wed, 19 Jun 2024 05:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJnye-0004Gu-8O; Wed, 19 Jun 2024 05:35:24 +0000
Received: by outflank-mailman (input) for mailman id 743401;
 Wed, 19 Jun 2024 05:35:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WeeD=NV=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJnyd-0004Go-K2
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 05:35:23 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2062f.outbound.protection.outlook.com
 [2a01:111:f403:2009::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b67ea022-2dfd-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 07:35:19 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by SJ2PR12MB8062.namprd12.prod.outlook.com (2603:10b6:a03:4c8::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Wed, 19 Jun
 2024 05:35:13 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Wed, 19 Jun 2024
 05:35:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b67ea022-2dfd-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hTUN2cFhEsz4SI+0crkDGxaXLq6OtAewgtEegpS8COX2lkIbKHQegBoK6WyPEIh3ZwIe2fs2/0ESznaFIZ6vh78bpBN0F5k5nldNjGLfMwB4oeMBXNIwo7YgPZNxdZZDX9Mr8WdHur0vNMcWHXI4+r1XhEjSzTiDc2rWAKfEqFF0j8QuZRzBGOrhHcattUlWXjaCuKfulLqn+7URLxrOEaFnhpYVKwQUYSvW8NHLzsACAgh+8ofDBcl/TCO+H8j0JLVUG0oscX5x8ySz1XUVOFEUBd0vX26znWOxZsGPznD+EPhKp6WyD3q4mVoAUHKWYvQVYhlb/v0XdSHuRqjigQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=CzMA9lz9iA2SZWyeRQmXvLtV+c5nHm2p5SDzuMVJGM4=;
 b=BWFagARvK9kZf5vwLUxohHoUJSce4G2P1KYZprlWLZmfy1JvKc33FDcANKiZ1t7elFi1Ouw4MXfP7ltH/CeTnrRYhCFkN+WpVgXa7ZT0FUbR8exH00G0nmzwrF+tlWDKbv1dNBedUn//4jLL7QzASUAnhs/lhzp5o/JFkaJPwNSYLE64rHWjUsAzERc+UtnREabu3vf8BMpdMWPA8bUgT2PMiTLYUxlfWM8qw3FtmEES2kcpGBeFLlSaG3sxz43x6GLLtJUQRqIbzH1UNSuRqxe7NWCP0RX846JNHOsqo0+/VK8PjQAYAzXhC6WSjpfsTrgwcUAz7xPJISGtCxXPCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CzMA9lz9iA2SZWyeRQmXvLtV+c5nHm2p5SDzuMVJGM4=;
 b=n2KX4LsorOgDXgm4KyYBxwU54lfK5SA5FCthhBZfPD0U2ZzG7FvWGB6LuzoBAjbWTGnLRFoeh4UGZamDk8/BvGXWmvB0+ulZUn2qeuMeNzzsJd4Uk4Ckx09gaEnRnNjR/GHbVBK7VswPrJjNsMrA+WKM0eylluf3TWFCrvlxUnI=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Topic: [XEN PATCH v10 2/5] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Thread-Index: AQHawJTj1WGIKHaz7U+FK+dQX59GpLHMCLWAgAGOVID//52wgIAB5DMA
Date: Wed, 19 Jun 2024 05:35:13 +0000
Message-ID:
 <BL1PR12MB58494E4BAEDA35E24329432CE7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-3-Jiqian.Chen@amd.com>
 <cb9910cd-7045-4c0d-a7cf-2bcf36e30cb2@suse.com>
 <BL1PR12MB5849FC16D91FADD5B7D30A63E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <c41031f9-bf3f-472e-82be-c1efea07c343@suse.com>
In-Reply-To: <c41031f9-bf3f-472e-82be-c1efea07c343@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|SJ2PR12MB8062:EE_
x-ms-office365-filtering-correlation-id: 4200681d-8699-4553-d860-08dc902197b1
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|376011|7416011|1800799021|366013|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(376011)(7416011)(1800799021)(366013)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <2C803258AA12A647BB381ABCB163D2F3@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4200681d-8699-4553-d860-08dc902197b1
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jun 2024 05:35:13.2485
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: EQToiEf8Y1SpL89jRpuoDDvGEenIA176lzoMUD5n/Hp5V6vInwK7ALIwnjDGHDyJQvto7ZamxQv2JI6FsiAZGQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ2PR12MB8062

On 2024/6/18 16:38, Jan Beulich wrote:
> On 18.06.2024 08:49, Chen, Jiqian wrote:
>> On 2024/6/17 22:45, Jan Beulich wrote:
>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>> If run Xen with PVH dom0 and hvm domU, hvm will map a pirq for
>>>> a passthrough device by using gsi, see qemu code
>>>> xen_pt_realize->xc_physdev_map_pirq and libxl code
>>>> pci_add_dm_done->xc_physdev_map_pirq. Then xc_physdev_map_pirq
>>>> will call into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq
>>>> is not allowed because currd is PVH dom0 and PVH has no
>>>> X86_EMU_USE_PIRQ flag, it will fail at has_pirq check.
>>>>
>>>> So, allow PHYSDEVOP_map_pirq when dom0 is PVH and also allow
>>>> PHYSDEVOP_unmap_pirq for the failed path to unmap pirq.
>>>
>>> Why "failed path"? Isn't unmapping also part of normal device removal
>>> from a guest?
>> Yes, both. I will change to also "allow PHYSDEVOP_unmap_pirq for the device removal path to unmap pirq".
>>
>>>
>>>> And
>>>> add a new check to prevent self map when subject domain has no
>>>> PIRQ flag.
>>>
>>> You still talk of only self mapping, and the code also still does only
>>> that. As pointed out before: Why would you allow mapping into a PVH
>>> DomU? IOW what purpose do the "d == currd" checks have?
>> The checking I added has two purpose, first is I need to allow this case:
>> Dom0(without PIRQ) + DomU(with PIRQ), because the original code just do (!has_pirq(currd)) will cause map_pirq fail in this case.
>> Second I need to disallow self-mapping:
>> DomU(without PIRQ) do map_pirq, the "d==currd" means the currd is the subject domain itself.
>>
>> Emmm, I think I know what's your concern.
>> Do you mean I need to
>> " Prevent map_pirq when currd has no X86_EMU_USE_PIRQ flag "
>> instead of
>> " Prevent self-map when currd has no X86_EMU_USE_PIRQ flag ",
> 
> No. What I mean is that I continue to fail to see why you mention "currd".
> IOW it would be more like "prevent mapping when the subject domain has no
> X86_EMU_USE_PIRQ" (which, as a specific sub-case, includes self-mapping
> if the caller specifies DOMID_SELF for the subject domain).
Oh, I see, not only to prevent self-mapping, but if the subject domain has no PIRQs, we should reject, self-mapping is just the one sub case.

> 
>> so I need to remove "d==currd", right?
> 
> Removing this check is what I'm after, yes. Yet that's not in sync with
> either of the two quoted sentences above.
> 
>>>> So that domU with PIRQ flag can success to map pirq for
>>>> passthrough devices even dom0 has no PIRQ flag.
>>>
>>> There's still a description problem here. Much like the first sentence,
>>> this last one also says that the guest would itself map the pIRQ. In
>>> which case there would still not be any reason to expose the sub-
>>> functions to Dom0.
>> If change to " So that the interrupt of a passthrough device can success to be mapped to pirq for domU with PIRQ flag when dom0 is PVH.",
>> Is it OK?
> 
> Kind of, yes. "can be successfully mapped" is one of the various possibilities
> of making this read a little more smoothly.
OK.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 05:56:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 05:56:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743419.1150315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJoIj-0007Bp-3c; Wed, 19 Jun 2024 05:56:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743419.1150315; Wed, 19 Jun 2024 05:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJoIj-0007Bi-0H; Wed, 19 Jun 2024 05:56:09 +0000
Received: by outflank-mailman (input) for mailman id 743419;
 Wed, 19 Jun 2024 05:56:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJoIh-0007BY-Pq; Wed, 19 Jun 2024 05:56:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJoIh-0003or-OF; Wed, 19 Jun 2024 05:56:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJoIh-0006Co-Dx; Wed, 19 Jun 2024 05:56:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJoIh-0005kw-DT; Wed, 19 Jun 2024 05:56:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186401-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186401: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=bd59af99700f075d06a6d47a16f777c9519928e0
X-Osstest-Versions-That:
    xen=77b1ed1d02d082c457924a695e8dde7076285271
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 05:56:07 +0000

flight 186401 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186401/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186393
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186393
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186393
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186393
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186393
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186393
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  bd59af99700f075d06a6d47a16f777c9519928e0
baseline version:
 xen                  77b1ed1d02d082c457924a695e8dde7076285271

Last test of basis   186393  2024-06-18 09:10:23 Z    0 days
Testing same since   186401  2024-06-18 21:08:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   77b1ed1d02..bd59af9970  bd59af99700f075d06a6d47a16f777c9519928e0 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 06:47:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 06:47:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743430.1150325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJp63-0004cY-II; Wed, 19 Jun 2024 06:47:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743430.1150325; Wed, 19 Jun 2024 06:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJp63-0004cR-FR; Wed, 19 Jun 2024 06:47:07 +0000
Received: by outflank-mailman (input) for mailman id 743430;
 Wed, 19 Jun 2024 06:47:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5jEq=NV=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sJp62-0004cL-3e
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 06:47:06 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on2060e.outbound.protection.outlook.com
 [2a01:111:f403:2407::60e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb582e12-2e07-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 08:47:01 +0200 (CEST)
Received: from SJ0PR05CA0036.namprd05.prod.outlook.com (2603:10b6:a03:33f::11)
 by PH0PR12MB8773.namprd12.prod.outlook.com (2603:10b6:510:28d::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Wed, 19 Jun
 2024 06:46:58 +0000
Received: from SJ5PEPF000001EF.namprd05.prod.outlook.com
 (2603:10b6:a03:33f:cafe::7d) by SJ0PR05CA0036.outlook.office365.com
 (2603:10b6:a03:33f::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.32 via Frontend
 Transport; Wed, 19 Jun 2024 06:46:58 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SJ5PEPF000001EF.mail.protection.outlook.com (10.167.242.203) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Wed, 19 Jun 2024 06:46:57 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 19 Jun
 2024 01:46:56 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2507.39
 via Frontend Transport; Wed, 19 Jun 2024 01:46:55 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb582e12-2e07-11ef-b4bb-af5377834399
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used uninitialized" build failure
Date: Wed, 19 Jun 2024 08:46:52 +0200
Message-ID: <20240619064652.18266-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Building Xen with CONFIG_STATIC_SHM=y results in a build failure:

arch/arm/static-shmem.c: In function 'process_shm':
arch/arm/static-shmem.c:327:41: error: 'gbase' may be used uninitialized [-Werror=maybe-uninitialized]
  327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase) )
arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
  305 |         paddr_t gbase, pbase, psize;

This is because commit cb1ddafdc573 added a check referencing the
gbase/pbase variables before they are assigned a value. Fix it by moving
the check after the assignments.

Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be direct-mapped for direct-mapped domains")
Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Rationale for 4.19: this patch fixes a build failure reported by CI:
https://gitlab.com/xen-project/xen/-/jobs/7131807878
---
 xen/arch/arm/static-shmem.c | 13 +++++++------
 1 file changed, 7 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index c434b96e6204..cd48d2896b7e 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -324,12 +324,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
             printk("%pd: static shared memory bank not found: '%s'", d, shm_id);
             return -ENOENT;
         }
-        if ( is_domain_direct_mapped(d) && (pbase != gbase) )
-        {
-            printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
-                   d, pbase, gbase);
-            return -EINVAL;
-        }
 
         pbase = boot_shm_bank->start;
         psize = boot_shm_bank->size;
@@ -353,6 +347,13 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
             /* guest phys address is after host phys address */
             gbase = dt_read_paddr(cells + addr_cells, addr_cells);
 
+            if ( is_domain_direct_mapped(d) && (pbase != gbase) )
+            {
+                printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
+                       d, pbase, gbase);
+                return -EINVAL;
+            }
+
             for ( i = 0; i < PFN_DOWN(psize); i++ )
                 if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
                 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:03:13 2024
Message-ID: <896c7cde-4117-47a9-9b23-66876365713c@suse.com>
Date: Wed, 19 Jun 2024 09:02:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 1/5] xen/vpci: Clear all vpci status of device
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-2-Jiqian.Chen@amd.com>
 <4e2accc2-e81d-450a-af2d-38884455de9c@suse.com>
 <BL1PR12MB58499527CFA36446EAD3FCE0E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <f8381d8b-0ad2-4766-8a53-d1ee44ea7e05@suse.com>
 <BL1PR12MB5849E84E58725FB947CCECD8E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB5849E84E58725FB947CCECD8E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 05:39, Chen, Jiqian wrote:
> On 2024/6/18 16:33, Jan Beulich wrote:
>> On 18.06.2024 08:25, Chen, Jiqian wrote:
>>> On 2024/6/17 22:17, Jan Beulich wrote:
>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>> --- a/xen/drivers/pci/physdev.c
>>>>> +++ b/xen/drivers/pci/physdev.c
>>>>> @@ -2,11 +2,17 @@
>>>>>  #include <xen/guest_access.h>
>>>>>  #include <xen/hypercall.h>
>>>>>  #include <xen/init.h>
>>>>> +#include <xen/vpci.h>
>>>>>  
>>>>>  #ifndef COMPAT
>>>>>  typedef long ret_t;
>>>>>  #endif
>>>>>  
>>>>> +static const struct pci_device_state_reset_method
>>>>> +                    pci_device_state_reset_methods[] = {
>>>>> +    [ DEVICE_RESET_FLR ].reset_fn = vpci_reset_device_state,
>>>>> +};
>>>>
>>>> What about the other three DEVICE_RESET_*? In particular ...
>>> I don't know how to implement the other three types of reset.
>>> This is a design placeholder so that corresponding handler functions can be added later if necessary. Do I need to set them to NULL pointers in this array?
>>
>> No.
>>
>>> Does this form conform to your previous suggestion of using only one hypercall to handle all types of resets?
>>
>> Yes, at least in principle. Question here is: To be on the safe side,
>> wouldn't we better reset state for all forms of reset, leaving possible
>> relaxation of that for later? At which point the function wouldn't need
>> calling indirectly, and instead would be passed the reset type as an
>> argument.
> If I understood correctly, the next version should:
> Use macros to represent the different reset types.
> Add switch cases in PHYSDEVOP_pci_device_state_reset to dispatch the different reset functions.
> Add reset_type as a function parameter to vpci_reset_device_state for possible future use.
> 
> +    case PHYSDEVOP_pci_device_state_reset:
> +    {
> +        struct pci_device_state_reset dev_reset;
> +        struct pci_dev *pdev;
> +        pci_sbdf_t sbdf;
> +
> +        if ( !is_pci_passthrough_enabled() )
> +            return -EOPNOTSUPP;
> +
> +        ret = -EFAULT;
> +        if ( copy_from_guest(&dev_reset, arg, 1) != 0 )
> +            break;
> +
> +        sbdf = PCI_SBDF(dev_reset.dev.seg,
> +                        dev_reset.dev.bus,
> +                        dev_reset.dev.devfn);
> +
> +        ret = xsm_resource_setup_pci(XSM_PRIV, sbdf.sbdf);
> +        if ( ret )
> +            break;
> +
> +        pcidevs_lock();
> +        pdev = pci_get_pdev(NULL, sbdf);
> +        if ( !pdev )
> +        {
> +            pcidevs_unlock();
> +            ret = -ENODEV;
> +            break;
> +        }
> +
> +        write_lock(&pdev->domain->pci_lock);
> +        pcidevs_unlock();
> +        /* Implement FLR, other reset types may be implemented in future */
> +        switch ( dev_reset.reset_type )
> +        {
> +        case PCI_DEVICE_STATE_RESET_COLD:
> +        case PCI_DEVICE_STATE_RESET_WARM:
> +        case PCI_DEVICE_STATE_RESET_HOT:
> +        case PCI_DEVICE_STATE_RESET_FLR:
> +            ret = vpci_reset_device_state(pdev, dev_reset.reset_type);
> +            break;
> +        }

If you use a switch() here, then there wants to be a default case returning
e.g. -EOPNOTSUPP or -EINVAL. Else the switch wants dropping. I'm not sure
which one's better in this specific case; I'm only slightly leaning towards
the former.

In any event the comment (if any) wants to reflect what the actual code does.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:05:52 2024
Message-ID: <ecc96961b27d45ac6a5ca7a761f4d8c801c26d5e.camel@gmail.com>
Subject: Re: [PATCH for-4.19] avoid UB in guest handle arithmetic
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,  "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
  Stefano Stabellini <sstabellini@kernel.org>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Date: Wed, 19 Jun 2024 09:05:45 +0200
In-Reply-To: <13a5650d-8623-4fec-9383-ef04023257ff@citrix.com>
References: <227fbeda-1690-4158-8404-53b4236c0235@suse.com>
	 <13a5650d-8623-4fec-9383-ef04023257ff@citrix.com>

On Tue, 2024-06-18 at 14:24 +0100, Andrew Cooper wrote:
> On 19/03/2024 1:26 pm, Jan Beulich wrote:
> > At least XENMEM_memory_exchange can have huge values passed in the
> > nr_extents and nr_exchanged fields. Adding such values to pointers
> > can
> > overflow, resulting in UB. Cast respective pointers to "unsigned
> > long"
> > while at the same time making the necessary multiplication
> > explicit.
> > Remaining arithmetic is, despite there possibly being mathematical
> > overflow, okay as per the C99 spec: "A computation involving
> > unsigned
> > operands can never overflow, because a result that cannot be
> > represented
> > by the resulting unsigned integer type is reduced modulo the number
> > that
> > is one greater than the largest value that can be represented by
> > the
> > resulting type." The overflow that we need to guard against is
> > checked
> > for in array_access_ok().
> >
> > Note that in / down from array_access_ok() the address value is
> > only
> > ever cast to "unsigned long" anyway, which is why in the invocation
> > from
> > guest_handle_subrange_okay() the value doesn't need casting back to
> > pointer type.
> >
> > In compat grant table code change two guest_handle_add_offset() to
> > avoid
> > passing in negative offsets.
> >
> > Since {,__}clear_guest_offset() need touching anyway, also deal
> > with
> > another (latent) issue there: They were losing the handle type,
> > i.e. the
> > size of the individual objects accessed. Luckily the few users we
> > presently have all pass char or uint8 handles.
> >
> > Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> There wants to be a xen: prefix in the subject.
> 
> But as for the UB aspect, I've checked that this does resolve the
> failure identified by the XSA-212 XTF test.
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> CC'ing Oleksii as this wants to go into 4.19.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:05:55 2024
Date: Wed, 19 Jun 2024 09:05:48 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
Message-ID: <ZnKDTEX_eGz2sS4K@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com>
 <ZnFv7b4YNjeRXj6-@macbook>
 <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com>
 <ZnGerbiI7P9PHPmK@macbook>
 <ba89126f-715d-498e-81e1-2ed105ac2d1c@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ba89126f-715d-498e-81e1-2ed105ac2d1c@suse.com>

On Tue, Jun 18, 2024 at 06:30:22PM +0200, Jan Beulich wrote:
> On 18.06.2024 16:50, Roger Pau Monné wrote:
> > On Tue, Jun 18, 2024 at 04:34:50PM +0200, Jan Beulich wrote:
> >> On 18.06.2024 13:30, Roger Pau Monné wrote:
> >>> On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
> >>>> On 13.06.2024 18:56, Roger Pau Monne wrote:
> >>>>> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >>>>>          if ( desc->handler->disable )
> >>>>>              desc->handler->disable(desc);
> >>>>>  
> >>>>> +        /*
> >>>>> +         * If the current CPU is going offline and is (one of) the target(s) of
> >>>>> +         * the interrupt, signal to check whether there are any pending vectors
> >>>>> +         * to be handled in the local APIC after the interrupt has been moved.
> >>>>> +         */
> >>>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
> >>>>> +            check_irr = true;
> >>>>> +
> >>>>>          if ( desc->handler->set_affinity )
> >>>>>              desc->handler->set_affinity(desc, affinity);
> >>>>>          else if ( !(warned++) )
> >>>>>              set_affinity = false;
> >>>>>  
> >>>>> +        if ( check_irr && apic_irr_read(vector) )
> >>>>> +            /*
> >>>>> +             * Forward pending interrupt to the new destination, this CPU is
> >>>>> +             * going offline and otherwise the interrupt would be lost.
> >>>>> +             */
> >>>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> >>>>> +                          desc->arch.vector);
> >>>>
> >>>> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
> >>>> where new IRQs ought to be surfacing only at the new destination). Doesn't
> >>>> this want moving ...
> >>>>
> >>>>>          if ( desc->handler->enable )
> >>>>>              desc->handler->enable(desc);
> >>>>
> >>>> ... past the actual affinity change?
> >>>
> >>> Hm, but the ->enable() hook is just unmasking the interrupt, the
> >>> actual affinity change is done in ->set_affinity(), and hence after
> >>> the call to ->set_affinity() no further interrupts should be delivered
> >>> to the CPU regardless of whether the source is masked?
> >>>
> >>> Or is it possible for the device/interrupt controller to not switch to
> >>> use the new destination until the interrupt is unmasked, and hence
> >>> could have pending masked interrupts still using the old destination?
> >>> IIRC For MSI-X it's required that the device updates the destination
> >>> target once the entry is unmasked.
> >>
> >> That's all not relevant here, I think. IRR gets set when an interrupt is
> >> signaled, no matter whether it's masked.
> > 
> > I'm kind of lost here, what does signaling mean in this context?
> > 
> > I would expect the interrupt vector to not get set in IRR if the MSI-X
> > entry is masked, as at that point the state of the address/data fields
> > might not be consistent (that's the whole point of masking it right?)
> > 
> >> It's its handling which the
> >> masking would prevent, i.e. the "moving" of the set bit from IRR to ISR.
> > 
> > My understanding was that the masking would prevent the message write to
> > the APIC from happening, and hence no vector should get set in IRR.
> 
> Hmm, yes, looks like I was confused. The masking is at the source side
> (IO-APIC RTE, MSI-X entry, or - if supported - in the MSI capability).
> So the sole case to worry about is MSI without mask-bit support then.

Yeah, and for MSI without masking-bit support it doesn't matter whether
we do the IRR check before or after the ->enable() hook, as that's a
no-op in that case.  The write to the MSI address/data fields has already been
done, and hence the issue would be exclusively with draining any
in-flight writes to the APIC doorbell (what you mention below).

> >> Plus we run with IRQs off here anyway if I'm not mistaken, so no
> >> interrupt can be delivered to the local CPU. IOW whatever IRR bits it
> >> has set (including ones becoming set between the IRR read and the actual
> >> vector change), those would never be serviced. Hence the reading of the
> >> bit ought to occur after the vector change: It's only then that we know
> >> the IRR bit corresponding to the old vector can't become set anymore.
> > 
> > Right, and the vector change happens in ->set_affinity(), not
> > ->enable().  See for example set_msi_affinity() and the
> > write_msi_msg(), that's where the vector gets changed.
> > 
> >> And even then we're assuming that no interrupt signals might still be
> >> "on their way" from the IO-APIC or a posted MSI message write by a
> >> device to the LAPIC (I have no idea how to properly fence that, or
> >> whether there are guarantees for this to never occur).
> > 
> > Yeah, those I expect would be completed in the window between the
> > write of the new vector/destination and the reading of IRR.
> 
> Except we have no idea on the latencies.

There isn't much else we can do, though.  Even the current approach,
where we add the 1ms window at the end of the shuffling, could still
suffer from this issue because we don't know the latencies.  IOW: I
don't think this is any worse than what we do today.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:06:35 2024
Message-ID: <72d00f8df6d6682c3b9163108b340e0cdd665151.camel@gmail.com>
Subject: Re: [PATCH for-4.19] xen/irq: Address MISRA Rule 8.3 violation
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich
 <JBeulich@suse.com>,  Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
 Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel
 <michal.orzel@amd.com>, Roberto Bagnara <roberto.bagnara@bugseng.com>,
 Nicola Vetrini <nicola.vetrini@bugseng.com>,  "consulting @ bugseng . com"
 <consulting@bugseng.com>
Date: Wed, 19 Jun 2024 09:06:31 +0200
In-Reply-To: <20240618130048.1768639-1-andrew.cooper3@citrix.com>
References: <20240618130048.1768639-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Tue, 2024-06-18 at 14:00 +0100, Andrew Cooper wrote:
> When centralising irq_ack_none(), different architectures had
> different names
> for the parameter of irq_ack_none().  As its type is struct irq_desc *, it
> should be named desc.  Make this consistent.
>
> No functional change.
>
> Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> ---
> CC: George Dunlap <George.Dunlap@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
> CC: Bertrand Marquis <bertrand.marquis@arm.com>
> CC: Michal Orzel <michal.orzel@amd.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: Roberto Bagnara <roberto.bagnara@bugseng.com>
> CC: Nicola Vetrini <nicola.vetrini@bugseng.com>
> CC: consulting@bugseng.com <consulting@bugseng.com>
>
> Request for 4.19.  This was an accidental regression in a recent cleanup
> patch, and the fix is just a rename - it's no functional change.
> ---
>  xen/arch/arm/irq.c    | 4 ++--
>  xen/include/xen/irq.h | 2 +-
>  2 files changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
> index c60502444ccf..6b89f64fd194 100644
> --- a/xen/arch/arm/irq.c
> +++ b/xen/arch/arm/irq.c
> @@ -31,9 +31,9 @@ struct irq_guest
>      unsigned int virq;
>  };
>  
> -void irq_ack_none(struct irq_desc *irq)
> +void irq_ack_none(struct irq_desc *desc)
>  {
> -    printk("unexpected IRQ trap at irq %02x\n", irq->irq);
> +    printk("unexpected IRQ trap at irq %02x\n", desc->irq);
>  }
>  
>  void irq_end_none(struct irq_desc *irq)
> diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
> index adf33547d25f..580ae37e7428 100644
> --- a/xen/include/xen/irq.h
> +++ b/xen/include/xen/irq.h
> @@ -134,7 +134,7 @@ void cf_check irq_actor_none(struct irq_desc *desc);
>   * irq_ack_none() must be provided by the architecture.
>   * irq_end_none() is optional, and opted into using a define.
>   */
> -void cf_check irq_ack_none(struct irq_desc *irq);
> +void cf_check irq_ack_none(struct irq_desc *desc);
>  
>  /*
>   * Per-cpu interrupted context register state - the inner-most interrupt frame
>
> base-commit: 8b4243a9b560c89bb259db5a27832c253d4bebc7



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:25:02 2024
Message-ID: <541885b6-fd09-4531-8ae9-8e57e504c1b3@suse.com>
Date: Wed, 19 Jun 2024 09:24:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com> <ZnFv7b4YNjeRXj6-@macbook>
 <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com> <ZnGerbiI7P9PHPmK@macbook>
 <ba89126f-715d-498e-81e1-2ed105ac2d1c@suse.com> <ZnKDTEX_eGz2sS4K@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZnKDTEX_eGz2sS4K@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.06.2024 09:05, Roger Pau Monné wrote:
> On Tue, Jun 18, 2024 at 06:30:22PM +0200, Jan Beulich wrote:
>> On 18.06.2024 16:50, Roger Pau Monné wrote:
>>> On Tue, Jun 18, 2024 at 04:34:50PM +0200, Jan Beulich wrote:
>>>> On 18.06.2024 13:30, Roger Pau Monné wrote:
>>>>> On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
>>>>>> On 13.06.2024 18:56, Roger Pau Monne wrote:
>>>>>>> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>>>>>>>          if ( desc->handler->disable )
>>>>>>>              desc->handler->disable(desc);
>>>>>>>  
>>>>>>> +        /*
>>>>>>> +         * If the current CPU is going offline and is (one of) the target(s) of
>>>>>>> +         * the interrupt, signal to check whether there are any pending vectors
>>>>>>> +         * to be handled in the local APIC after the interrupt has been moved.
>>>>>>> +         */
>>>>>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
>>>>>>> +            check_irr = true;
>>>>>>> +
>>>>>>>          if ( desc->handler->set_affinity )
>>>>>>>              desc->handler->set_affinity(desc, affinity);
>>>>>>>          else if ( !(warned++) )
>>>>>>>              set_affinity = false;
>>>>>>>  
>>>>>>> +        if ( check_irr && apic_irr_read(vector) )
>>>>>>> +            /*
>>>>>>> +             * Forward pending interrupt to the new destination, this CPU is
>>>>>>> +             * going offline and otherwise the interrupt would be lost.
>>>>>>> +             */
>>>>>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>>>>>> +                          desc->arch.vector);
>>>>>>
>>>>>> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
>>>>>> where new IRQs ought to be surfacing only at the new destination). Doesn't
>>>>>> this want moving ...
>>>>>>
>>>>>>>          if ( desc->handler->enable )
>>>>>>>              desc->handler->enable(desc);
>>>>>>
>>>>>> ... past the actual affinity change?
>>>>>
>>>>> Hm, but the ->enable() hook is just unmasking the interrupt, the
>>>>> actual affinity change is done in ->set_affinity(), and hence after
>>>>> the call to ->set_affinity() no further interrupts should be delivered
>>>>> to the CPU regardless of whether the source is masked?
>>>>>
>>>>> Or is it possible for the device/interrupt controller to not switch to
>>>>> use the new destination until the interrupt is unmasked, and hence
>>>>> could have pending masked interrupts still using the old destination?
> >>>>> IIRC for MSI-X it's required that the device update the destination
> >>>>> target once the entry is unmasked.
>>>>
>>>> That's all not relevant here, I think. IRR gets set when an interrupt is
>>>> signaled, no matter whether it's masked.
>>>
>>> I'm kind of lost here, what does signaling mean in this context?
>>>
>>> I would expect the interrupt vector to not get set in IRR if the MSI-X
>>> entry is masked, as at that point the state of the address/data fields
>>> might not be consistent (that's the whole point of masking it right?)
>>>
>>>> It's its handling which the
>>>> masking would prevent, i.e. the "moving" of the set bit from IRR to ISR.
>>>
>>> My understanding was that the masking would prevent the message write to
>>> the APIC from happening, and hence no vector should get set in IRR.
>>
>> Hmm, yes, looks like I was confused. The masking is at the source side
>> (IO-APIC RTE, MSI-X entry, or - if supported - in the MSI capability).
>> So the sole case to worry about is MSI without mask-bit support then.
> 
> Yeah, and for MSI without masking-bit support it doesn't matter whether
> we do the IRR check before or after the ->enable() hook, as that's a
> no-op in that case.  The write to the MSI address/data fields has already been
> done, and hence the issue would be exclusively with draining any
> in-flight writes to the APIC doorbell (what you mention below).

Except that both here ...

>>>> Plus we run with IRQs off here anyway if I'm not mistaken, so no
>>>> interrupt can be delivered to the local CPU. IOW whatever IRR bits it
>>>> has set (including ones becoming set between the IRR read and the actual
>>>> vector change), those would never be serviced. Hence the reading of the
>>>> bit ought to occur after the vector change: It's only then that we know
>>>> the IRR bit corresponding to the old vector can't become set anymore.
>>>
>>> Right, and the vector change happens in ->set_affinity(), not
>>> ->enable().  See for example set_msi_affinity() and the
>>> write_msi_msg(), that's where the vector gets changed.
>>>
>>>> And even then we're assuming that no interrupt signals might still be
>>>> "on their way" from the IO-APIC or a posted MSI message write by a
>>>> device to the LAPIC (I have no idea how to properly fence that, or
>>>> whether there are guarantees for this to never occur).
>>>
>>> Yeah, those I expect would be completed in the window between the
>>> write of the new vector/destination and the reading of IRR.
>>
>> Except we have no idea on the latencies.
> 
> There isn't much else we can do, though.  Even the current approach,
> where we add the 1ms window at the end of the shuffling, could still
> suffer from this issue because we don't know the latencies.  IOW: I
> don't think this is any worse than what we do today.

... and here, the later we read IRR, the better the chances we don't miss
anything. Even the no-op ->enable() isn't a no-op execution-wise. In fact
it (quite pointlessly[1]) is an indirect call to irq_enable_none(). I'm
actually inclined to suggest that we try to even further delay the IRR
read, certainly past the cpumask_copy(), maybe even past the spin_unlock()
(latching CPU and vector into local variables, along with the latching of
->affinity that's already there).

Jan

[1] While back when that was written the main goal probably was to avoid
conditionals on what may be deemed fast paths, I wonder whether nowadays
the main goal wouldn't be to avoid indirect calls when we (pretty) easily
can.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:31:40 2024
Message-ID: <13ee25a3-91ef-48da-a58d-f4b972fe0c4f@amd.com>
Date: Wed, 19 Jun 2024 09:31:20 +0200
User-Agent: Mozilla Thunderbird
Subject: Re: Design session notes: GPU acceleration in Xen
To: Demi Marie Obenour <demi@invisiblethingslab.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>,
 "Pelloux-prayer, Pierre-eric" <Pierre-eric.Pelloux-prayer@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>, Xenia Ragiadakou
 <burzalodowa@gmail.com>, Ray Huang <ray.huang@amd.com>,
 Xen developer discussion <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Direct Rendering Infrastructure development
 <dri-devel@lists.freedesktop.org>,
 Qubes OS Development Mailing List <qubes-devel@googlegroups.com>
References: <Zms9tjtg06kKtI_8@itl-email>
 <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com> <ZmvvlF0gpqFB7UC9@macbook>
 <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com> <ZmwByZnn5vKcVLKI@macbook>
 <Zm-FidjSK3mOieSC@itl-email> <Zm_p1QvoZcjQ4gBa@macbook>
 <ZnCglhYlXmRPBZXE@mail-itl> <ZnDbaply6KaBUKJb@itl-email>
 <0b00c8f9-fb79-4b11-ae22-931205653203@amd.com> <ZnGVu9TjHKiEqxsu@itl-email>
Content-Language: en-US
From: =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>
In-Reply-To: <ZnGVu9TjHKiEqxsu@itl-email>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: FR4P281CA0433.DEUP281.PROD.OUTLOOK.COM
 (2603:10a6:d10:d1::15) To PH7PR12MB5685.namprd12.prod.outlook.com
 (2603:10b6:510:13c::22)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: PH7PR12MB5685:EE_|CY5PR12MB9056:EE_
X-MS-Office365-Filtering-Correlation-Id: e089b7c7-0750-4ff1-a076-08dc9031d3f9
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230037|366013|376011|1800799021;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e089b7c7-0750-4ff1-a076-08dc9031d3f9
X-MS-Exchange-CrossTenant-AuthSource: PH7PR12MB5685.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jun 2024 07:31:26.8979
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3D47eGQRecCqqSP5E4Bz3QnEc0yG4Dq/KLZSJeyv+T6Il29etBPBp8QA6+4Ff4jV
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY5PR12MB9056

Am 18.06.24 um 16:12 schrieb Demi Marie Obenour:
> On Tue, Jun 18, 2024 at 08:33:38AM +0200, Christian König wrote:
> > Am 18.06.24 um 02:57 schrieb Demi Marie Obenour:
> >> On Mon, Jun 17, 2024 at 10:46:13PM +0200, Marek Marczykowski-Górecki
> >> wrote:
> >>> On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
> >>>> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> >>>>> In both cases, the device physical
> >>>>> addresses are identical to dom0’s physical addresses.
> >>>>
> >>>> Yes, but a PV dom0 physical address space can be very scattered.
> >>>>
> >>>> IIRC there's an hypercall to request physically contiguous memory for
> >>>> PV, but you don't want to be using that every time you allocate a
> >>>> buffer (not sure it would support the sizes needed by the GPU
> >>>> anyway).
> >>
> >>> Indeed that isn't going to fly. In older Qubes versions we had PV
> >>> sys-net with PCI passthrough for a network card. After some uptime it
> >>> was basically impossible to restart and still have enough contiguous
> >>> memory for a network driver, and there it was a matter of _much_ smaller
> >>> buffers, like 2M or 4M. At least not without shutting down a lot more
> >>> things to free some more memory.
> >>
> >> Ouch!  That makes me wonder if all GPU drivers actually need physically
> >> contiguous buffers, or if it is (as I suspect) driver-specific. CCing
> >> Christian König who has mentioned issues in this area.
>
> > Well GPUs don't need physically contiguous memory to function, but if they
> > only get 4k pages to work with it means a quite large (up to 30%)
> > performance penalty.
>
> The status quo is "no GPU acceleration at all", so 70% of bare metal
> performance would be amazing right now.

Well, AMD uses the native context approach in Xen, which delivers over 
90% of bare-metal performance.

Pierre-Eric can tell you more, but we certainly have GPU solutions in 
production with Xen which would suffer greatly if the underlying memory 
were fragmented like this.

>   However, the implementation
> should not preclude eliminating this performance penalty in the future.
>
> What size pages do GPUs need for good performance?  Is it the same as
> CPU huge pages?

2MiB are usually sufficient.

Regards,
Christian.

>   PV dom0 doesn't get huge pages at all, but PVH and HVM
> guests do, and the goal is to move away from PV guests as they have lots
> of unrelated problems.
>
> > So scattering memory like you described is probably a very bad idea if
> > you want any halfway decent performance.
>
> For an initial prototype a 30% performance penalty is acceptable, but
> it's good to know that memory fragmentation needs to be avoided.
>
> > Regards,
> > Christian
>
> Thanks for the prompt response!



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:45:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 07:45:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743488.1150397 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq0M-0007pr-5z; Wed, 19 Jun 2024 07:45:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743488.1150397; Wed, 19 Jun 2024 07:45:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq0M-0007pk-2z; Wed, 19 Jun 2024 07:45:18 +0000
Received: by outflank-mailman (input) for mailman id 743488;
 Wed, 19 Jun 2024 07:45:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJq0L-0007pa-Bo; Wed, 19 Jun 2024 07:45:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJq0L-0005ms-3o; Wed, 19 Jun 2024 07:45:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJq0K-0002MU-TT; Wed, 19 Jun 2024 07:45:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJq0K-0003sp-T2; Wed, 19 Jun 2024 07:45:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tiT3bnPByNWuRwXBI/VDQh6Ee8P0lTXmFKwaSdUNJ/I=; b=N5VhkO2Z6yEn5RNWf4eRAiWmgb
	W8WmPYonu9ZV4ABz+/L5Do+pbRNCFZMhWt32rmQS72ZRwnh97pL3Fo5gtZ6UoJ1bJw2EThFUeHTei
	y9PYXDtOPAne3f1iT6eKgeEK0V1a5oQcKPnvgo8ZkSfL13N8Ior8NBIqrwPxm97bBjC0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186408-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186408: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4d4f56992460c039d0cfe48c394c2e07aecf1d22
X-Osstest-Versions-That:
    ovmf=26a30abdd0f7fe5a9d2421cba6efe9397185ad98
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 07:45:16 +0000

flight 186408 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186408/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4d4f56992460c039d0cfe48c394c2e07aecf1d22
baseline version:
 ovmf                 26a30abdd0f7fe5a9d2421cba6efe9397185ad98

Last test of basis   186405  2024-06-19 01:11:08 Z    0 days
Testing same since   186408  2024-06-19 05:41:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dhaval <dhaval@rivosinc.com>
  Dhaval Sharma <dhaval@rivosinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   26a30abdd0..4d4f569924  4d4f56992460c039d0cfe48c394c2e07aecf1d22 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:46:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 07:46:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743495.1150408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq13-0008LE-F1; Wed, 19 Jun 2024 07:46:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743495.1150408; Wed, 19 Jun 2024 07:46:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq13-0008L7-C5; Wed, 19 Jun 2024 07:46:01 +0000
Received: by outflank-mailman (input) for mailman id 743495;
 Wed, 19 Jun 2024 07:45:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJq11-0008I3-Ss
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 07:45:59 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f7ec7549-2e0f-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 09:45:59 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9bb8so45860671fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 00:45:59 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9a38630basm17964895ad.241.2024.06.19.00.45.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 00:45:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7ec7549-2e0f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718783158; x=1719387958; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=lzLLyw5Kel/bTpyH9ybDnCL6QQy48ent4QQeIdhBcgc=;
        b=Uw0P0bN0sBS/T4hNkFLtVVvtzp2zTmB6VyHfaIRZPI7NHlPRFj4L0ZKMK8W5r62quE
         qtwU07FN01awDFWN5ECLsGUW1jU6IrCThePML+GERmI4J9piFBrAQuQQH+oIQiJZxLjf
         XPL8VgRrc7Wvrw6xWNUga1OsL6EsV9gcNJzuwstpdWNUtzQ/TlAisTVI5j+TE90jmZqy
         fPs9yQBg2zSQPGbCsK2OtzetJjTqinbR/LuTEmbUvhcCp6LByphaRJxRieSGYW1YoiEv
         NuSWASCNWmklv/H2fergZtTtM2CSzk/GwbdJY24xNhD8V9dHaLNvwZmS3SnIzAn04O4b
         xR3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718783158; x=1719387958;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lzLLyw5Kel/bTpyH9ybDnCL6QQy48ent4QQeIdhBcgc=;
        b=MM7w1x1xYxciUJlLwMUMlZgj2/9CcKK5KxbzGZKXAu/JMrGz7+umDtNg1V6M4IkkgE
         RDoezlKXDxiaD0aLfEcuNB6JmR4gZ3PG2VH6CtDHzQrhm18pr8mlaOly7RJufzE6WfgK
         U3LMZP01Qt046XlWaKPTxbt6UALa1IVm+E4sji7R+gBubPHVikMULxT2AMG6nKSBev9s
         LdhdFuYR7ndf9/HcefeWBgFMn2sUkNF8dnnqb6qDnJiDaTW2q+pAjw+p7lVfGVMLuS5O
         1ajeTA5020hGeJPTpTguVSYL5gI7UMWus+XhAg1MnQLHFtgyncOBLTeIdPgxWcDbvces
         Phfg==
X-Forwarded-Encrypted: i=1; AJvYcCUZSZGG/q/xUUvWcP3y/aMwHPNTeR14VeVs9rYQY9/mZDHkl+PFNrKMWyyR9lu7CM9gxDwoVnRh21uODASrYu3hzFaTXIS4heBcredtWd0=
X-Gm-Message-State: AOJu0YxVU9BTnJ73WaealFc74zBtSvekZ5rLjuRZnSMfVSZ5lGrlsLd7
	Dlzt84KDdQIa9t9mNC1Ukon8z7J6Q7j3O1vwcx7NCnE+b/Rph4xp78B+FdW+3A==
X-Google-Smtp-Source: AGHT+IHMM+dSkEp0aK4T+B0BBT8WTz23Gj8jHZIFxnytX6o5aKfY5/uiPjw3dDYivXrT4cy5fM2Ugw==
X-Received: by 2002:a2e:b0e8:0:b0:2ec:41b3:f0f1 with SMTP id 38308e7fff4ca-2ec41b3f1dcmr1740261fa.39.1718783158579;
        Wed, 19 Jun 2024 00:45:58 -0700 (PDT)
Message-ID: <052cccac-8c8f-4555-953c-2bd9de460f2a@suse.com>
Date: Wed, 19 Jun 2024 09:45:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] AMD/IOMMU: Improve register_iommu_exclusion_range()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240618183128.1981751-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240618183128.1981751-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18.06.2024 20:31, Andrew Cooper wrote:
>  * Use 64bit accesses instead of 32bit accesses
>  * Simplify the constant names
>  * Pull base into a local variable to avoid it being reloaded because of the
>    memory clobber in writeq().
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> RFC.  This is my proposed way of cleaning up the whole IOMMU file.  The
> diffstat speaks for itself.

Absolutely.

> I've finally found the bit in the AMD IOMMU spec which says 64bit accesses are
> permitted:
> 
>   3.4 IOMMU MMIO Registers:
> 
>   Software access to IOMMU registers may not be larger than 64 bits. Accesses
>   must be aligned to the size of the access and the size in bytes must be a
>   power of two. Software may use accesses as small as one byte.

I take it that the use of 32-bit writes was because of the past need to
also work in a 32-bit hypervisor, not because of perceived restrictions
in the spec.

> --- a/xen/drivers/passthrough/amd/iommu-defs.h
> +++ b/xen/drivers/passthrough/amd/iommu-defs.h
> @@ -338,22 +338,10 @@ union amd_iommu_control {
>  };
>  
>  /* Exclusion Register */
> -#define IOMMU_EXCLUSION_BASE_LOW_OFFSET		0x20
> -#define IOMMU_EXCLUSION_BASE_HIGH_OFFSET	0x24
> -#define IOMMU_EXCLUSION_LIMIT_LOW_OFFSET	0x28
> -#define IOMMU_EXCLUSION_LIMIT_HIGH_OFFSET	0x2C
> -#define IOMMU_EXCLUSION_BASE_LOW_MASK		0xFFFFF000U
> -#define IOMMU_EXCLUSION_BASE_LOW_SHIFT		12
> -#define IOMMU_EXCLUSION_BASE_HIGH_MASK		0xFFFFFFFFU
> -#define IOMMU_EXCLUSION_BASE_HIGH_SHIFT		0
> -#define IOMMU_EXCLUSION_RANGE_ENABLE_MASK	0x00000001U
> -#define IOMMU_EXCLUSION_RANGE_ENABLE_SHIFT	0
> -#define IOMMU_EXCLUSION_ALLOW_ALL_MASK		0x00000002U
> -#define IOMMU_EXCLUSION_ALLOW_ALL_SHIFT		1
> -#define IOMMU_EXCLUSION_LIMIT_LOW_MASK		0xFFFFF000U
> -#define IOMMU_EXCLUSION_LIMIT_LOW_SHIFT		12
> -#define IOMMU_EXCLUSION_LIMIT_HIGH_MASK		0xFFFFFFFFU
> -#define IOMMU_EXCLUSION_LIMIT_HIGH_SHIFT	0
> +#define IOMMU_MMIO_EXCLUSION_BASE           0x20
> +#define   EXCLUSION_RANGE_ENABLE            (1 << 0)
> +#define   EXCLUSION_ALLOW_ALL               (1 << 1)
> +#define IOMMU_MMIO_EXCLUSION_LIMIT          0x28

Just one question here: Previously you suggested we switch to bitfields
for anything like this, and we've already done so with e.g.
union amd_iommu_control and union amd_iommu_ext_features. IOW I wonder
if we wouldn't better strive to be consistent in this regard. Or if not,
what the (written or unwritten) guidelines are when to use which
approach.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:48:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 07:48:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743503.1150417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq3H-0000mZ-QI; Wed, 19 Jun 2024 07:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743503.1150417; Wed, 19 Jun 2024 07:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq3H-0000mS-Nf; Wed, 19 Jun 2024 07:48:19 +0000
Received: by outflank-mailman (input) for mailman id 743503;
 Wed, 19 Jun 2024 07:48:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJq3G-0000jd-ND
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 07:48:18 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4a1285b0-2e10-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 09:48:17 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cfeso58119971fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 00:48:17 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e5ba93sm110589115ad.3.2024.06.19.00.48.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 00:48:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a1285b0-2e10-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718783296; x=1719388096; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=StLvSMkQR150txbL7SSXc/unofls6wsIRe1r4HbSewc=;
        b=NPpkVnuZLR4HejQgf0h0KmU5a3XzeoeS9kXIo1N/XI8GP1nxfg7pwWsOC/d7/yhMJj
         mZWCRjljb8KHlPO8kRSbpzG7ojHB08RNojqR+hhDBS7QEWRVNCJ8d50UqO9xxR0Iw2Wg
         QBY++9XzLkFsO49591SVqi8CBiaVzny0cVZAKXyatMd8XvzOw2+tarig2PkScybYlUBI
         9IhTvhEyFj2jUevzKKZC5JqQHfVdE9cvzCkIv5LOgrtZ3yTFfJ1beeAUn3oBv0JxZgge
         es0tmAkUPKHYxe53keaVq2+uc9JeV8xyk3LiuBujAq1j3zNoJVu6JWv3pqnkg9j6tOgs
         z2RA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718783296; x=1719388096;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=StLvSMkQR150txbL7SSXc/unofls6wsIRe1r4HbSewc=;
        b=B9ASOgkj8CZDa6+Ff94qsBirDXFc+ZilWUAusbamz50MsMhYMjOTz00KE6VPiOXhJt
         tsjlm+qI2xWsRYiI4ZdANVjfNt89AaFajjckcqqiM34ihC1MqUso7eaitzeCdI+7Vzj5
         hAbWPhnN90Hyj9Mze7vY+xpNV+pUEL1qk1+7lUCbyKETL7CAtkfmdtyCei9LIAE6L6Xt
         iKR2iKlaftpttqCuzZDILTIwP5S4VSgIFYgRtwR9+6mt5vd7kXZv8zAactRaBz+XFZ9e
         F+8Uf12G5pK0k2TpxIQVfjRhYhpMqZ3B+TRqbvK+0N0TBUsXdryWqzAhwgFtNcN5gZCv
         r7Rw==
X-Forwarded-Encrypted: i=1; AJvYcCUXU/NQIY/Zh7kD/CQSDE0DG+2EfU0m/JcEZf8zUlDuyY5SM9fFb3+wcRkeyiLGV1hTJQzKKKsf6CoVwdVoRbe3r6hKZkVznExJ5+SnPzs=
X-Gm-Message-State: AOJu0Yy47YzbJW0XTydmM+JZXwxGSgad9Aej+yW8JrAE4eWSUHXQzvG3
	oR7eoBkp5HDO5rLrsqqBIlz9z3nO1RKk06sE80H9yPTjPdPTPw76VU0jQa87ThhChWdZHDCUgA4
	=
X-Google-Smtp-Source: AGHT+IH/Xr80OVPuSzjIMqke2a4h6TKZ2KeIKHbHPSwGQoKrQ+IWqKZ2THFflevpyrFDwauWNH+J7Q==
X-Received: by 2002:a2e:9659:0:b0:2eb:850d:a53d with SMTP id 38308e7fff4ca-2ec3ceb6a22mr11558991fa.16.1718783296527;
        Wed, 19 Jun 2024 00:48:16 -0700 (PDT)
Message-ID: <4cdd42b7-3751-484c-80c4-4bf321f1bcc0@suse.com>
Date: Wed, 19 Jun 2024 09:48:10 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] AMD/IOMMU: Improve register_iommu_exclusion_range()
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240618183128.1981751-1-andrew.cooper3@citrix.com>
 <052cccac-8c8f-4555-953c-2bd9de460f2a@suse.com>
Content-Language: en-US
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <052cccac-8c8f-4555-953c-2bd9de460f2a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 09:45, Jan Beulich wrote:
> On 18.06.2024 20:31, Andrew Cooper wrote:
>> I've finally found the bit in the AMD IOMMU spec which says 64bit accesses are
>> permitted:
>>
>>   3.4 IOMMU MMIO Registers:
>>
>>   Software access to IOMMU registers may not be larger than 64 bits. Accesses
>>   must be aligned to the size of the access and the size in bytes must be a
>>   power of two. Software may use accesses as small as one byte.
> 
> I take it that the use of 32-bit writes was because of the past need
> also work in a 32-bit hypervisor, not because of perceived restrictions
> by the spec.

In fact it looks like we're already halfway through converting to writeq().

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 07:53:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 07:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743515.1150427 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq88-0002Xp-Ej; Wed, 19 Jun 2024 07:53:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743515.1150427; Wed, 19 Jun 2024 07:53:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJq88-0002Xi-Bz; Wed, 19 Jun 2024 07:53:20 +0000
Received: by outflank-mailman (input) for mailman id 743515;
 Wed, 19 Jun 2024 07:53:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WeeD=NV=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJq86-0002Xc-UB
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 07:53:19 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20631.outbound.protection.outlook.com
 [2a01:111:f403:2418::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fbd6aef8-2e10-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 09:53:16 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by PH8PR12MB7279.namprd12.prod.outlook.com (2603:10b6:510:221::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Wed, 19 Jun
 2024 07:53:12 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Wed, 19 Jun 2024
 07:53:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbd6aef8-2e10-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YK/rtaxXeOYC1FeOvH4zbTpezthSiXffkIdXc8r+lKQAcGxjMQouHEOUT24bAaXopgRXsCu4qcBs+HWJrGlwzLUqd2egnp18Ncw5SOcxVAN0nTLQWkCg7ER+LT1HVTCnpAJhANqoSSckgtiDl0yBh2U9fuqHlZ2hx1VmLMv+Tj3gz4mSx4LStqXqV2g5saaEo59cLRDAqSFNKCFGz6tAte7zsEm9/b5QG0yOTy3Yk8FqgWx25jH45K9LVhtcomaf5uXX/b+oNjnf154mo6lVghJxWc247PO/DFy2b0X/8NDw6rSiwwKyFsSFo7E5j4mtaL9ABKMMm/vFUE8QcieMfQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=owIIR3BIiKRD4oy3Mex6AAapkXTJc9SwHKUdDHpBSRc=;
 b=A2fnynodjr9pgU0s0B2Rth+NuM8yv3zk4JEn2SiAL6/dkMRQyMFi9ThJsssL616PIgYpR1u3S0ubyy7LbN8sV8fOweGFksja175ZCxSksaTGMH7fvYgWdR2oT7C677B3AN1ykkdUsyvT1p8w4TK1KhyqVTeMf8n5c1C94vithCEQSnKrMesRX2x88jO9N940Qv6Rms+6dFFvG5rWONM/ZtYfbng6mb1yaWgmotN9eSRVw0zeuj1bxdc9MgODvGiE1zhLVEGDiB0u7+dMhGQhANYH9srXOih3yx/b7k9M+Hs527tLcYEDDXLoJcf7THP21E1zpxNYEmiwbGjeSN2Hig==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=owIIR3BIiKRD4oy3Mex6AAapkXTJc9SwHKUdDHpBSRc=;
 b=qmIp6YYG7Cgu9S3JomzfbHhm/75r/4CuJHfyArILUp+4/cxTFKE4Z4rwSeJresfDl46S43Tdt/ipJfYRheZDV3IM5pbK8sC43xb5HbM1QvY9GvcRso5CqcNdIBdo+byIlu1aBb4FZoPgywfo1qhZKuzF5VmDNJTjdsTH7auyrIM=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Thread-Topic: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH
 dom0
Thread-Index: AQHawJTljFn+TFlFw0Cz/HTewbBwKbHMCqyAgAGSNQD//5yWgIAB+7GA
Date: Wed, 19 Jun 2024 07:53:11 +0000
Message-ID:
 <BL1PR12MB5849861E424724C6E9DE3859E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
 <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
 <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b923a32e-3c22-4e7a-8844-b33322ef8ad1@suse.com>
In-Reply-To: <b923a32e-3c22-4e7a-8844-b33322ef8ad1@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|PH8PR12MB7279:EE_
x-ms-office365-filtering-correlation-id: a9900639-2623-4016-8f80-08dc9034dda5
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <469B569D8493A8489EA98EB4B45D1676@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a9900639-2623-4016-8f80-08dc9034dda5
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jun 2024 07:53:11.0989
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: NuIsTzYb8QD5fpzPjqGG5EnlZD6GaqLw4LqiJQAsQwW1OJKwEloYGZrcbApyIz6oayp9bYCM6VTtAx+PHeGrWA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR12MB7279

On 2024/6/18 16:55, Jan Beulich wrote:
> On 18.06.2024 08:57, Chen, Jiqian wrote:
>> On 2024/6/17 22:52, Jan Beulich wrote:
>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>> The gsi of a passthrough device must be configured for it to be
>>>> able to be mapped into a hvm domU.
>>>> But When dom0 is PVH, the gsis don't get registered, it causes
>>>> the info of apic, pin and irq not be added into irq_2_pin list,
>>>> and the handler of irq_desc is not set, then when passthrough a
>>>> device, setting ioapic affinity and vector will fail.
>>>>
>>>> To fix above problem, on Linux kernel side, a new code will
>>>> need to call PHYSDEVOP_setup_gsi for passthrough devices to
>>>> register gsi when dom0 is PVH.
>>>>
>>>> So, add PHYSDEVOP_setup_gsi into hvm_physdev_op for above
>>>> purpose.
>>>>
>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>> ---
>>>> The code link that will call this hypercall on linux kernel side is as follows:
>>>> https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/
>>>
>>> One of my v9 comments was addressed, thanks. Repeating the other, unaddressed
>>> one here:
>>> "As to GSIs not being registered: If that's not a problem for Dom0's own
>>>  operation, I think it'll also want/need explaining why what is sufficient for
>>>  Dom0 alone isn't sufficient when pass-through comes into play."
>> I have modified the commit message to describe why GSIs are not registered can cause passthrough not work, according to this v9 comment.
>> " it causes the info of apic, pin and irq not be added into irq_2_pin list, and the handler of irq_desc is not set, then when passthrough a device, setting ioapic affinity and vector will fail."
>> What description do you want me to add?
> 
> What I'd first like to have clarification on (i.e. before putting it in
> the description one way or another): How come Dom0 alone gets away fine
> without making the call, yet for passthrough-to-DomU it's needed? Is it
> perhaps that it just so happened that for Dom0 things have been working
> on systems where it was tested, but the call should in principle have been
> there in this case, too [1]? That (to me at least) would make quite a
> difference for both this patch's description and us accepting it.
Oh, I think I know what your concern is now. Thanks.
First question: why does the GSI of a normal device work on PVH dom0?
Because when a driver is probed for a normal device, the Linux kernel calls pci_device_probe-> request_threaded_irq-> irq_startup-> __unmask_ioapic-> io_apic_write, which traps into the Xen side: hvmemul_do_io-> hvm_io_intercept-> hvm_process_io_intercept-> vioapic_write_indirect-> vioapic_hwdom_map_gsi-> mp_register_gsi. That is how the GSI gets registered.
Second question: why doesn't the GSI of a passthrough device work on PVH dom0?
Because when a device is assigned for passthrough, pciback is used to probe it and pcistub_probe is called, but nowhere in the call stack of pcistub_probe is the GSI unmasked. On the Xen side, vioapic_hwdom_map_gsi-> mp_register_gsi is only reached when the GSI is unmasked, so the GSI never gets registered for the passthrough device.

> 
> Jan
> 
> [1] Alternative e.g. being that because of other actions PVH Dom0 takes,
> like the IO-APIC RTE programming it does for IRQs it wants to use for
> itself, the necessary information is already suitably conveyed to Xen in
> that case. In such a case imo it's relevant to mention in the description.
> Not the least because iirc the pciback driver sets up a fake IRQ handler
> in such cases, which ought to lead to similar IO-APIC RTE programming, at
> which point the question would again arise why the hypercall needs
> exposing.

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 08:07:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 08:07:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743526.1150438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJqLZ-0005Gf-Pr; Wed, 19 Jun 2024 08:07:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743526.1150438; Wed, 19 Jun 2024 08:07:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJqLZ-0005GY-N9; Wed, 19 Jun 2024 08:07:13 +0000
Received: by outflank-mailman (input) for mailman id 743526;
 Wed, 19 Jun 2024 08:07:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJqLZ-0005GS-9i
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 08:07:13 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id edeb8ac2-2e12-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 10:07:10 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so110652351fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 01:07:10 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705ccb71222sm10452006b3a.170.2024.06.19.01.07.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 01:07:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: edeb8ac2-2e12-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718784430; x=1719389230; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=LvIVWdHXMjTRVFC3PH5sqaUdWLXhKSmTDzCF0JH3tjQ=;
        b=Rq4+hA/PeZiT+6tTvbMs7CcKhKneTHEi6i/TWoywDly4TljwskH52b17r1yh/bDCdH
         XLJhhUhyrjdZHp3f0HpPHvhRGtxnZ1KlZq3dQNmoftyzOyuO1IAZJZYIKdKLmPe2cq4L
         dIaJ7V5Iv8CqH+CUdfmv3/Z804MUej71zjlChriSfyuLMZOHYyG7TS/mMRny4+eHGkDY
         ZsFSP43Vl9uy6vcXruubnJhPTXa8EQeH83baVEZRSy+s1DAODy6/BjTBrZkoYQnX6zW4
         Kqhaa3XZ/bh1AMwb45l/ZY0JaKTKqvkN7uAz/tWiyOW39ZyBpfHU1dPoKi9molk2OQrN
         rjUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718784430; x=1719389230;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LvIVWdHXMjTRVFC3PH5sqaUdWLXhKSmTDzCF0JH3tjQ=;
        b=JEcMjBP+BQ+I5RYbx3PujF9cIwWaLwr+6dT9Il8G05dKJ87iqbiasXAmtp/b3CZaqn
         f6OkqkBi5yljZpLQvNehfmN1GjrrsZl00zSfnno9FypDp1tuSH4sExW8oXLlUbSd0ujU
         YKlRuhlxy5HFZxprAlF0IDdtMSG+P44IYZcjuTgYXtL6+0REYZd8GDLIx+Yb2hedAtGI
         eM0zpJDdopEUMUl68PUD1x3Nq6zZkZDh0GGto+McCovCpkVMy5n3LHaugnfVGy21ycT4
         yl/s2I7E/Ug46QKTVurtLlK2DjIfBebqIJEoH9iq9EtNSERSXpfjoUykLLYGW+n2XKZ6
         M4EQ==
X-Forwarded-Encrypted: i=1; AJvYcCVmEimRVtWD/WfFO6SOmDBC4OAtiYk3tpb5az9kIC0JPiK0dMvXBMZ+6T8nLgJ7s31XZPA+cJiv1pCluvMJWBH+luDVtvXrbqowds0ICMg=
X-Gm-Message-State: AOJu0YxCA+KhYrC0uIbjtekWJUxmsD3tsEOIYJKCt9RTelYdG957J+VX
	A991hlxdJ+NvOGBgk4nTzy5tiz9yAw9HbEz3+j0yG5GNoQn5RGtm9Izx+0RHsQ==
X-Google-Smtp-Source: AGHT+IFjpN/N3gMT4cDulPmwwrddkiASwahASZJlxfZEWVS0D4EDFNHwR76OxBkAzNUG7gnkz9uB+g==
X-Received: by 2002:a2e:9c99:0:b0:2db:a9c9:4c5e with SMTP id 38308e7fff4ca-2ec3cec42a5mr15039631fa.21.1718784430249;
        Wed, 19 Jun 2024 01:07:10 -0700 (PDT)
Message-ID: <ff66c7aa-585f-4d30-9f4f-e520226825bc@suse.com>
Date: Wed, 19 Jun 2024 10:06:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
 <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
 <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b923a32e-3c22-4e7a-8844-b33322ef8ad1@suse.com>
 <BL1PR12MB5849861E424724C6E9DE3859E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849861E424724C6E9DE3859E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 09:53, Chen, Jiqian wrote:
> On 2024/6/18 16:55, Jan Beulich wrote:
>> On 18.06.2024 08:57, Chen, Jiqian wrote:
>>> On 2024/6/17 22:52, Jan Beulich wrote:
>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>> The gsi of a passthrough device must be configured for it to be
>>>>> able to be mapped into a hvm domU.
>>>>> But When dom0 is PVH, the gsis don't get registered, it causes
>>>>> the info of apic, pin and irq not be added into irq_2_pin list,
>>>>> and the handler of irq_desc is not set, then when passthrough a
>>>>> device, setting ioapic affinity and vector will fail.
>>>>>
>>>>> To fix above problem, on Linux kernel side, a new code will
>>>>> need to call PHYSDEVOP_setup_gsi for passthrough devices to
>>>>> register gsi when dom0 is PVH.
>>>>>
>>>>> So, add PHYSDEVOP_setup_gsi into hvm_physdev_op for above
>>>>> purpose.
>>>>>
>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>> ---
>>>>> The code link that will call this hypercall on linux kernel side is as follows:
>>>>> https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/
>>>>
>>>> One of my v9 comments was addressed, thanks. Repeating the other, unaddressed
>>>> one here:
>>>> "As to GSIs not being registered: If that's not a problem for Dom0's own
>>>>  operation, I think it'll also want/need explaining why what is sufficient for
>>>>  Dom0 alone isn't sufficient when pass-through comes into play."
>>> I have modified the commit message to describe why GSIs are not registered can cause passthrough not work, according to this v9 comment.
>>> " it causes the info of apic, pin and irq not be added into irq_2_pin list, and the handler of irq_desc is not set, then when passthrough a device, setting ioapic affinity and vector will fail."
>>> What description do you want me to add?
>>
>> What I'd first like to have clarification on (i.e. before putting it in
>> the description one way or another): How come Dom0 alone gets away fine
>> without making the call, yet for passthrough-to-DomU it's needed? Is it
>> perhaps that it just so happened that for Dom0 things have been working
>> on systems where it was tested, but the call should in principle have been
>> there in this case, too [1]? That (to me at least) would make quite a
>> difference for both this patch's description and us accepting it.
> Oh, I think I know what your concern is now. Thanks.
> First question: why does the GSI of a normal device work on PVH dom0?
> Because when a driver is probed for a normal device, the Linux kernel calls pci_device_probe-> request_threaded_irq-> irq_startup-> __unmask_ioapic-> io_apic_write, which traps into the Xen side: hvmemul_do_io-> hvm_io_intercept-> hvm_process_io_intercept-> vioapic_write_indirect-> vioapic_hwdom_map_gsi-> mp_register_gsi. That is how the GSI gets registered.
> Second question: why doesn't the GSI of a passthrough device work on PVH dom0?
> Because when a device is assigned for passthrough, pciback is used to probe it and pcistub_probe is called, but nowhere in the call stack of pcistub_probe is the GSI unmasked. On the Xen side, vioapic_hwdom_map_gsi-> mp_register_gsi is only reached when the GSI is unmasked, so the GSI never gets registered for the passthrough device.

And why exactly would the fake IRQ handler not be set up by pciback? Its
setting up ought to lead to those same IO-APIC RTE writes that Xen
intercepts.

In any event, imo a summary of the above wants to be part of the patch
description.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 08:32:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 08:32:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743535.1150448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJqkJ-0001nR-NN; Wed, 19 Jun 2024 08:32:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743535.1150448; Wed, 19 Jun 2024 08:32:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJqkJ-0001nK-Ki; Wed, 19 Jun 2024 08:32:47 +0000
Received: by outflank-mailman (input) for mailman id 743535;
 Wed, 19 Jun 2024 08:32:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ri0R=NV=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJqkI-0001my-MU
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 08:32:46 +0000
Received: from mail-ua1-x929.google.com (mail-ua1-x929.google.com
 [2607:f8b0:4864:20::929])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7f4f118a-2e16-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 10:32:43 +0200 (CEST)
Received: by mail-ua1-x929.google.com with SMTP id
 a1e0cc1a2514c-80b841b1b80so1609165241.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 01:32:43 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abe47f2csm589470185a.116.2024.06.19.01.32.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 01:32:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f4f118a-2e16-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718785963; x=1719390763; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=e6g/72tlSXRmAUn+a7LaDTmwLDTSr/tE+RieH9sAXL4=;
        b=fzrAxYeke537d3BIRJuoW5mWaTmPdxRHNaP7RbTkTvP7qekh8TRgz3RqizUkVqCvc2
         SGFpRWQdUAuiEP0DiixHtD52zWVt+1Uyy3A4fNOu5gT9EKvUJeaG9aXTdd23ncoFFcPB
         2qGDMlQRG1qe0xwP0XQ8DaFUSlDpqvxZa9nI8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718785963; x=1719390763;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=e6g/72tlSXRmAUn+a7LaDTmwLDTSr/tE+RieH9sAXL4=;
        b=F5NEv+69J+xr7NWxFB8HG7jJxc/HE9Cd/swZ0rEo+ekdjUb6dNqiJP07K+HAooXVRc
         jFIUsXHg9FWEJrvXU9NAeFe9A166rOH+Ld5guMt/VtkD1nP3xk7mgg8lPGFo4YlQHH0W
         rAu1CJeSrWtCSpIR3sMB6NnOdaUPcPHoFpjLmQB+xO7cJLViRD2sQv/kQ4JJB6+o3Yx5
         QyA6UfsaGLTy+npfDeVTZNeXSmLwgj4FfsnBJJQZHGBPWKinkq12Gh2daSCgcC6cmnyN
         UB0LGf4brG/zHpUUa9uEMbkdDWkh6HzCal5fOTimfzhNCTJ7Z54sIUZwcg/4TisKcjej
         M69A==
X-Forwarded-Encrypted: i=1; AJvYcCWozpbfq6YsV/ngP55KyvLN/AB5ivvm3VPHk5IuvKksVSRrM53k9sGFmI7jdrCFk3wyuPkU0BePVLMtV6aovHS7en7vIxYAKkCFHcGE6lo=
X-Gm-Message-State: AOJu0YwFfDwDE1kI4z2ftPf9Lln3uuSXEXKazuRu7qZYwWzmiuuqiLX1
	9KtcCkxnQjxAda5tuivOQrqvKYdLw/EyoVpDiNQ+dtu1asTkEshAVc54AClT2Q2ymJJfqhZOrSE
	r
X-Google-Smtp-Source: AGHT+IH78Qasgtu/d4If44RwNmd72vZQom8OpZpM7CIl1mGl2R9xTG+nrciAu7MGojEZdElHYwZEYg==
X-Received: by 2002:a67:ff07:0:b0:48c:368c:3673 with SMTP id ada2fe7eead31-48f13097569mr2063437137.28.1718785962613;
        Wed, 19 Jun 2024 01:32:42 -0700 (PDT)
Date: Wed, 19 Jun 2024 10:32:40 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
Message-ID: <ZnKXqDxlG2d2MohM@macbook>
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com>
 <ZnFv7b4YNjeRXj6-@macbook>
 <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com>
 <ZnGerbiI7P9PHPmK@macbook>
 <ba89126f-715d-498e-81e1-2ed105ac2d1c@suse.com>
 <ZnKDTEX_eGz2sS4K@macbook>
 <541885b6-fd09-4531-8ae9-8e57e504c1b3@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <541885b6-fd09-4531-8ae9-8e57e504c1b3@suse.com>

On Wed, Jun 19, 2024 at 09:24:41AM +0200, Jan Beulich wrote:
> On 19.06.2024 09:05, Roger Pau Monné wrote:
> > On Tue, Jun 18, 2024 at 06:30:22PM +0200, Jan Beulich wrote:
> >> On 18.06.2024 16:50, Roger Pau Monné wrote:
> >>> On Tue, Jun 18, 2024 at 04:34:50PM +0200, Jan Beulich wrote:
> >>>> On 18.06.2024 13:30, Roger Pau Monné wrote:
> >>>>> On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
> >>>>>> On 13.06.2024 18:56, Roger Pau Monne wrote:
> >>>>>>> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
> >>>>>>>          if ( desc->handler->disable )
> >>>>>>>              desc->handler->disable(desc);
> >>>>>>>  
> >>>>>>> +        /*
> >>>>>>> +         * If the current CPU is going offline and is (one of) the target(s) of
> >>>>>>> +         * the interrupt, signal to check whether there are any pending vectors
> >>>>>>> +         * to be handled in the local APIC after the interrupt has been moved.
> >>>>>>> +         */
> >>>>>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
> >>>>>>> +            check_irr = true;
> >>>>>>> +
> >>>>>>>          if ( desc->handler->set_affinity )
> >>>>>>>              desc->handler->set_affinity(desc, affinity);
> >>>>>>>          else if ( !(warned++) )
> >>>>>>>              set_affinity = false;
> >>>>>>>  
> >>>>>>> +        if ( check_irr && apic_irr_read(vector) )
> >>>>>>> +            /*
> >>>>>>> +             * Forward pending interrupt to the new destination, this CPU is
> >>>>>>> +             * going offline and otherwise the interrupt would be lost.
> >>>>>>> +             */
> >>>>>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
> >>>>>>> +                          desc->arch.vector);
> >>>>>>
> >>>>>> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
> >>>>>> where new IRQs ought to be surfacing only at the new destination). Doesn't
> >>>>>> this want moving ...
> >>>>>>
> >>>>>>>          if ( desc->handler->enable )
> >>>>>>>              desc->handler->enable(desc);
> >>>>>>
> >>>>>> ... past the actual affinity change?
> >>>>>
> >>>>> Hm, but the ->enable() hook is just unmasking the interrupt, the
> >>>>> actual affinity change is done in ->set_affinity(), and hence after
> >>>>> the call to ->set_affinity() no further interrupts should be delivered
> >>>>> to the CPU regardless of whether the source is masked?
> >>>>>
> >>>>> Or is it possible for the device/interrupt controller to not switch to
> >>>>> use the new destination until the interrupt is unmasked, and hence
> >>>>> could have pending masked interrupts still using the old destination?
> >>>>> IIRC For MSI-X it's required that the device updates the destination
> >>>>> target once the entry is unmasked.
> >>>>
> >>>> That's all not relevant here, I think. IRR gets set when an interrupt is
> >>>> signaled, no matter whether it's masked.
> >>>
> >>> I'm kind of lost here, what does signaling mean in this context?
> >>>
> >>> I would expect the interrupt vector to not get set in IRR if the MSI-X
> >>> entry is masked, as at that point the state of the address/data fields
> >>> might not be consistent (that's the whole point of masking it right?)
> >>>
> >>>> It's its handling which the
> >>>> masking would prevent, i.e. the "moving" of the set bit from IRR to ISR.
> >>>
> >>> My understanding was that the masking would prevent the message write to
> >>> the APIC from happening, and hence no vector should get set in IRR.
> >>
> >> Hmm, yes, looks like I was confused. The masking is at the source side
> >> (IO-APIC RTE, MSI-X entry, or - if supported - in the MSI capability).
> >> So the sole case to worry about is MSI without mask-bit support then.
> > 
> > Yeah, and for MSI without masking bit support we don't care doing the
> > IRR check before or after the ->enable() hook, as that's a no-op in
> > that case.  The write to the MSI address/data fields has already been
> > done, and hence the issue would be exclusively with draining any
> > in-flight writes to the APIC doorbell (what you mention below).
> 
> Except that both here ...
> 
> >>>> Plus we run with IRQs off here anyway if I'm not mistaken, so no
> >>>> interrupt can be delivered to the local CPU. IOW whatever IRR bits it
> >>>> has set (including ones becoming set between the IRR read and the actual
> >>>> vector change), those would never be serviced. Hence the reading of the
> >>>> bit ought to occur after the vector change: It's only then that we know
> >>>> the IRR bit corresponding to the old vector can't become set anymore.
> >>>
> >>> Right, and the vector change happens in ->set_affinity(), not
> >>> ->enable().  See for example set_msi_affinity() and the
> >>> write_msi_msg(), that's where the vector gets changed.
> >>>
> >>>> And even then we're assuming that no interrupt signals might still be
> >>>> "on their way" from the IO-APIC or a posted MSI message write by a
> >>>> device to the LAPIC (I have no idea how to properly fence that, or
> >>>> whether there are guarantees for this to never occur).
> >>>
> >>> Yeah, those I expect would be completed in the window between the
> >>> write of the new vector/destination and the reading of IRR.
> >>
> >> Except we have no idea on the latencies.
> > 
> > There isn't much else we can do? Even the current case where we add
> > the 1ms window at the end of the shuffling could still suffer from
> > this issue because we don't know the latencies.  IOW: I don't think
> > this is any worse than the current approach.
> 
> ... and here, the later we read IRR, the better the chances we don't miss
> anything. Even the no-op ->enable() isn't a no-op execution-wise. In fact
> it (quite pointlessly[1]) is an indirect call to irq_enable_none(). I'm
> actually inclined to suggest that we try to even further delay the IRR
> read, certainly past the cpumask_copy(), maybe even past the spin_unlock()
> (latching CPU and vector into local variables, along with the latching of
> ->affinity that's already there).

Moving past cpumask_copy() would be OK.  Moving past the spin_unlock()
I'm not so sure.  Isn't it possible that once the desc is unlocked the
interrupt starts another movement, and hence by the time the forwarded
vector is injected the irq desc has already moved to yet a different
CPU?

I don't think this is realistic given how small the window between the
spin_unlock() and the injection of the vector is, but it could in
principle happen.

If you are fine with moving past the cpumask_copy() but before the
spin_unlock() I can post an updated version with that.
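
For concreteness, the tail of the move being discussed (re-program
affinity/vector first, and only afterwards check IRR and forward any
stale pending interrupt) can be sketched with a small mock. This is not
the Xen code: mock_irr_read() and mock_send_ipi() are hypothetical
stand-ins for apic_irr_read() and send_IPI_mask(), and only the
forwarding decision is modeled:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NR_CPUS 2

/* Mock per-CPU IRR: one bit per vector (the real IRR is 256 bits). */
static uint64_t mock_irr[NR_CPUS];
static int ipi_dest = -1, ipi_vector = -1;

/* Stand-in for apic_irr_read() on the offlining CPU. */
static bool mock_irr_read(int cpu, int vector)
{
    return mock_irr[cpu] & (UINT64_C(1) << vector);
}

/* Stand-in for send_IPI_mask(cpumask_of(dest), vector). */
static void mock_send_ipi(int dest, int vector)
{
    ipi_dest = dest;
    ipi_vector = vector;
}

/*
 * Sketch of the tail of the interrupt move: by the time IRR is read,
 * the vector/destination have already been re-programmed, so a bit
 * still set at the old vector can only be a stale pending interrupt
 * that must be forwarded to the new destination or it would be lost
 * once this CPU goes offline.
 */
static bool forward_if_pending(int old_cpu, int old_vector,
                               int new_cpu, int new_vector)
{
    if ( mock_irr_read(old_cpu, old_vector) )
    {
        mock_send_ipi(new_cpu, new_vector);
        return true;
    }
    return false;
}
```

The later the IRR read happens relative to the re-programming, the
smaller the chance of missing a bit that becomes set in between.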

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 08:51:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 08:51:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743546.1150458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJr2Y-00058k-6X; Wed, 19 Jun 2024 08:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743546.1150458; Wed, 19 Jun 2024 08:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJr2Y-00058d-2j; Wed, 19 Jun 2024 08:51:38 +0000
Received: by outflank-mailman (input) for mailman id 743546;
 Wed, 19 Jun 2024 08:51:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WeeD=NV=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJr2X-00058X-48
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 08:51:37 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20601.outbound.protection.outlook.com
 [2a01:111:f403:2415::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 20182000-2e19-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 10:51:33 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by DM4PR12MB6615.namprd12.prod.outlook.com (2603:10b6:8:8d::9) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.7698.19; Wed, 19 Jun 2024 08:51:29 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Wed, 19 Jun 2024
 08:51:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20182000-2e19-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ncQGy5IJ2lv4/cGeJnyDXdvhZS/0I7DWC98gQLDiBZF1ae5gjBAwteudkXUEdAR3u0tbBw5o5TjvVSwNOnVBRvgW8+0wciJnvt/tcajvHrI2JOF6a3ZaFdaRH2sjaUX8lRbALiukLYPM8qMtVvhvUk/gxxLslQo2BhSXtY8QKP7OsbLGe1aR7x41G5A4iQTFp6ZMTst9gFxCTnhr0vK8Cuuw17ZCEnemg649djZyTfKAbO2Z1CWsa8OvpinuNIYu8PGWGwHfGxEHSfhqoJ/DlZe3fTuTWPiJoQYXN5E0WvcBUWK1//fueccvkEtJfd8WpWj9jAzUfPD9bv7QxxKN8g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ltP34zi2iMUPd73dc8bcCNS1bU6oRElESZnrL+pukuo=;
 b=Cq4/tNWQC9jmEw6EIqBsMSZpx25AteBtA3Wl9/oNOB+VfXWIMKeWPEZ+drd4T5PCOVAw2AfhnAIAMqEivKtKgedyuul/ut1YymI9Daqv0lKs2lDp+EGMDaU7IVy3dtvE8oJz118NMgfS3dEmut4JoehiSisiiloS5kxPqI/N5z/NOxFf00FsvCkCS8vYuVpuoLJVvjtAV0y4m+vJiwrTuzwTeG8sZAIEiT8xdWwD8vpYDdi45JDg13RO+6nVHwORJCRM9YCTWr8qnesMdrn32DKQfwk9Kker56pL3WeeGSWuv+zaagLJKyK7nDvAyH3bLFwDG2YvKiwEztsBCKnFpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ltP34zi2iMUPd73dc8bcCNS1bU6oRElESZnrL+pukuo=;
 b=ARBFveaN/GWIjkxoGa1dGuuIVN3wgOey+IxwsQZw71ISE+UZm1vUsM9GLP7OoSFToycLUrIungiIav2vqfT8JYqJFGhmZgWIelI4JtDUJmRtyQ2KcMSJHK/3OzZW3R5avs9a2W6qBj6WNkbSeOCqVM7GkVxqR45HhrLwR5EJo88=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Thread-Topic: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH
 dom0
Thread-Index:
 AQHawJTljFn+TFlFw0Cz/HTewbBwKbHMCqyAgAGSNQD//5yWgIAB+7GA//+I/4CAAJBIgA==
Date: Wed, 19 Jun 2024 08:51:29 +0000
Message-ID:
 <BL1PR12MB58498905D185C6A720276D1BE7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
 <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
 <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b923a32e-3c22-4e7a-8844-b33322ef8ad1@suse.com>
 <BL1PR12MB5849861E424724C6E9DE3859E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ff66c7aa-585f-4d30-9f4f-e520226825bc@suse.com>
In-Reply-To: <ff66c7aa-585f-4d30-9f4f-e520226825bc@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|DM4PR12MB6615:EE_
x-ms-office365-filtering-correlation-id: a35af60c-f4ed-4b05-58ec-08dc903d02ab
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|1800799021|7416011|376011|366013|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(1800799021)(7416011)(376011)(366013)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <8167D8BAD87C5540A296A3A650962898@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a35af60c-f4ed-4b05-58ec-08dc903d02ab
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jun 2024 08:51:29.1927
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: i9yvRmcV5F3ajqzKElrBaR2+C2VTIxnHe9Ar6lSLLbdJnV0YIsps/UIgCWsP6kQhtnv4KSCG4l1BruZF45gjIg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB6615

On 2024/6/19 16:06, Jan Beulich wrote:
> On 19.06.2024 09:53, Chen, Jiqian wrote:
>> On 2024/6/18 16:55, Jan Beulich wrote:
>>> On 18.06.2024 08:57, Chen, Jiqian wrote:
>>>> On 2024/6/17 22:52, Jan Beulich wrote:
>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>> The gsi of a passthrough device must be configured for it to be
>>>>>> able to be mapped into a hvm domU.
>>>>>> But When dom0 is PVH, the gsis don't get registered, it causes
>>>>>> the info of apic, pin and irq not be added into irq_2_pin list,
>>>>>> and the handler of irq_desc is not set, then when passthrough a
>>>>>> device, setting ioapic affinity and vector will fail.
>>>>>>
>>>>>> To fix above problem, on Linux kernel side, a new code will
>>>>>> need to call PHYSDEVOP_setup_gsi for passthrough devices to
>>>>>> register gsi when dom0 is PVH.
>>>>>>
>>>>>> So, add PHYSDEVOP_setup_gsi into hvm_physdev_op for above
>>>>>> purpose.
>>>>>>
>>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>>> ---
>>>>>> The code link that will call this hypercall on linux kernel side is as follows:
>>>>>> https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/
>>>>>
>>>>> One of my v9 comments was addressed, thanks. Repeating the other, unaddressed
>>>>> one here:
>>>>> "As to GSIs not being registered: If that's not a problem for Dom0's own
>>>>>  operation, I think it'll also want/need explaining why what is sufficient for
>>>>>  Dom0 alone isn't sufficient when pass-through comes into play."
>>>> I have modified the commit message to describe why GSIs are not registered can cause passthrough not work, according to this v9 comment.
>>>> " it causes the info of apic, pin and irq not be added into irq_2_pin list, and the handler of irq_desc is not set, then when passthrough a device, setting ioapic affinity and vector will fail."
>>>> What description do you want me to add?
>>>
>>> What I'd first like to have clarification on (i.e. before putting it in
>>> the description one way or another): How come Dom0 alone gets away fine
>>> without making the call, yet for passthrough-to-DomU it's needed? Is it
>>> perhaps that it just so happened that for Dom0 things have been working
>>> on systems where it was tested, but the call should in principle have been
>>> there in this case, too [1]? That (to me at least) would make quite a
>>> difference for both this patch's description and us accepting it.
>> Oh, I think I know what's your concern now. Thanks.
>> First question, why gsi of device can work on PVH dom0:
>> Because when probe a driver to a normal device, it will call linux kernel side:pci_device_probe-> request_threaded_irq-> irq_startup-> __unmask_ioapic-> io_apic_write, then trap into xen side hvmemul_do_io-> hvm_io_intercept-> hvm_process_io_intercept-> vioapic_write_indirect-> vioapic_hwdom_map_gsi-> mp_register_gsi. So that the gsi can be registered.
>> Second question, why gsi of passthrough can't work on PVH dom0:
>> Because when assign a device to be passthrough, it uses pciback to probe the device, and it calls pcistub_probe, but in all callstack of pcistub_probe, it doesn't unmask the gsi, and we can see on Xen side, the function vioapic_hwdom_map_gsi-> mp_register_gsi will be called only when the gsi is unmasked, so that the gsi can't work for passthrough device.
> 
> And why exactly would the fake IRQ handler not be set up by pciback? Its
> setting up ought to lead to those same IO-APIC RTE writes that Xen
> intercepts.
Because isr_on is not set: when xen_pcibk_control_isr is called, it
returns early due to "!dev_data->isr_on", so the fake IRQ handler isn't
installed.
And it seems isr_on is set through the driver sysfs node
"irq_handler_state" for a level-triggered device that is to be shared
with a guest while the IRQ is shared with the initial domain.

> 
> In any event, imo a summary of the above wants to be part of the patch
> description.
OK, I will add this to the commit message in the next version.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 09:02:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 09:02:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743555.1150468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrCh-00072v-1v; Wed, 19 Jun 2024 09:02:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743555.1150468; Wed, 19 Jun 2024 09:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrCg-00072o-V5; Wed, 19 Jun 2024 09:02:06 +0000
Received: by outflank-mailman (input) for mailman id 743555;
 Wed, 19 Jun 2024 09:02:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Pjt=NV=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1sJrCf-00072i-7f
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 09:02:05 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur04on20621.outbound.protection.outlook.com
 [2a01:111:f403:2611::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 986ac806-2e1a-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 11:02:03 +0200 (CEST)
Received: from AS9PR06CA0575.eurprd06.prod.outlook.com (2603:10a6:20b:486::6)
 by DU0PR08MB9876.eurprd08.prod.outlook.com (2603:10a6:10:424::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.29; Wed, 19 Jun
 2024 09:01:59 +0000
Received: from AM2PEPF0001C714.eurprd05.prod.outlook.com
 (2603:10a6:20b:486:cafe::32) by AS9PR06CA0575.outlook.office365.com
 (2603:10a6:20b:486::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.32 via Frontend
 Transport; Wed, 19 Jun 2024 09:01:59 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM2PEPF0001C714.mail.protection.outlook.com (10.167.16.184) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Wed, 19 Jun 2024 09:01:59 +0000
Received: ("Tessian outbound 43598937069a:v339");
 Wed, 19 Jun 2024 09:01:59 +0000
Received: from 043f9545d4df.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5E3E8F6D-809F-4F3C-B4BA-D2494AAF358E.1; 
 Wed, 19 Jun 2024 09:01:52 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 043f9545d4df.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Jun 2024 09:01:52 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com (2603:10a6:10:25a::24)
 by AS4PR08MB7456.eurprd08.prod.outlook.com (2603:10a6:20b:4e7::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.25; Wed, 19 Jun
 2024 09:01:50 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a]) by DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a%6]) with mapi id 15.20.7698.017; Wed, 19 Jun 2024
 09:01:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 986ac806-2e1a-11ef-90a3-e314d9c70b13
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=gEsJ6G2y9HNJVn5SfkkJvIJyQjf6CFt5s5hf8dUFcgsOC3HyVmokWfoZ4myla4bD0pfGQ0maqgrE38WHSrIly90mzOSJihXHqCoAbEeEPX4XU1BSplhCkKasSRcT6x/hmKb0U1X6gg0Bh0Qd6CNElL9ZlKMU0zWnabWGVcHekLQkYSVSwVcXbHY3g/kaQFoav4P0JQTn7yaUL3R6+7LY0r7Yjrio98QzolHsX8Mko6QK3HgJXBY1JFbIvlvKyT6ZM0Bd3VFbq3Lfhmh8LBJ1FtOMJNduv4ksn9Ps6qWgyvsiOhCOi9Jdevc9QZatnlLcYQ97/9NbkkAnTeMyEDxquA==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=17QQVv9ErczK1oQcsp8u/TK83FSQzoafjA3Kjd6F1bA=;
 b=C216jVECgkdSqQ2x5ArORrTUOy07kEs6VzvOQYPmI09lSCSLYxUsCr51xeHURVhULJX6ZC6XdXtFggU9BbtFfCpDr+gMoZqKkCPqxZ4/LogYEo28uOrp0roJ4HuSkwoNlu8lniWBrT/WENk4hbFaaRmFiMei5qVtwbdzFpZ/7BhXRb740myLhPnRz8Q6SsjWyp/qkOKfHwQ9Hdns1vJhiEpSw1U11tsH+8DfgBpzYbwAvz9ZS3hV11yICKYbl12Q8REmj/Ko5Lwg5JfCzwdfAeGlRaGgjv6yz2xfkseZ9TgZkXF9dr35Xet4YBhx6pjsYeIMjnhMeV9lJ+NzyrExYQ==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=17QQVv9ErczK1oQcsp8u/TK83FSQzoafjA3Kjd6F1bA=;
 b=ejd0EhwbZYTN5w5EUQ8B/KYCiuB7XoDUoYaSfDSWpi95I2rU+axXyjjhg7U3dYq5j83TD1Z+J7sO7v9IYYlPZnzVHgNfz2DAeUyHnvf0DFY8bb0pERM2FAo+RLklkxG4r6/p7aB4T4D7DmE+81jNieGx0y+0KHtvyPxpCrywTMo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: cf73b05ed890de4c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W/psGmLUNgkQ+terNqBgJomWx4FVlDbK/g36hVKT2L8yqyDvK6gP62ETBP/QFsS4hlWzVjDTB6cnxj2G5RQouJFWXeC0JXA+NY7Y9cwcrO7MYbdOQneTle+ZLZVBTReiSQdEcvf4SOTyHLveglj8T9ZZqc/kWOK7MmooD5i/HqqFgD3C/n7Gm3zycqO2lLKXIFqFM384MMO81D87rWHgzXfb+87eKvBiWCRrMQwraeLhBr46+70rELg2wLfYIEAv3uiVwx12Fi7ojoPCeGUjNlXPX0ti9QVf66E5469gHLUyAZ2v3CDkzyIpbhgZO7S6Exne/x6HjWTPQ1CguptWCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=17QQVv9ErczK1oQcsp8u/TK83FSQzoafjA3Kjd6F1bA=;
 b=FOUk6oZERtFlu5wJ4IydJCe8aIuK0PeulXBqs9ekzTYjckFz1CV7TBYB56f0Z7Bu5CRLKvTAtHsy74PHMWgDWJVmS5QR3qP5+wabODjd/HmQW291BZgCklcUVs7RO7LaZU1b0JicR/4lxgab7tt9gTOxOorUg4KGgA8UFqR0JgcUWjUtsSSBEkMjw5f4vEVx73nv/GzbgA1LXEemLIUSyMfJeJHrEv8mMnmLUTnkc5TTT2tFoJiTci2wHivnLZ1wjgz/ox8RQ07YJjYBqzPZPxq3UbqmyinS+lRE+1CwT1sIJhXqCjRy4A2w4L6GtVZEM3QZm1Kaf4qroxdUK57DXA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=17QQVv9ErczK1oQcsp8u/TK83FSQzoafjA3Kjd6F1bA=;
 b=ejd0EhwbZYTN5w5EUQ8B/KYCiuB7XoDUoYaSfDSWpi95I2rU+axXyjjhg7U3dYq5j83TD1Z+J7sO7v9IYYlPZnzVHgNfz2DAeUyHnvf0DFY8bb0pERM2FAo+RLklkxG4r6/p7aB4T4D7DmE+81jNieGx0y+0KHtvyPxpCrywTMo=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
Thread-Topic: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
Thread-Index: AQHawhSD1mJnwmtkoE+LLMyZ3OtgsbHOym6A
Date: Wed, 19 Jun 2024 09:01:50 +0000
Message-ID: <81B3426E-D92F-4EF2-955C-5375635DE1F9@arm.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
In-Reply-To: <20240619064652.18266-1-michal.orzel@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	DB9PR08MB6588:EE_|AS4PR08MB7456:EE_|AM2PEPF0001C714:EE_|DU0PR08MB9876:EE_
X-MS-Office365-Filtering-Correlation-Id: acc9ffe7-9360-435d-1fb0-08dc903e7a57
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230037|1800799021|376011|366013|38070700015;
X-Microsoft-Antispam-Message-Info-Original:
 =?us-ascii?Q?IPPpDYXd1qky6bqRiXd1YXEAAPSPcUcyLymYBiiTGvuXERlrzqTYdtHkcKCX?=
 =?us-ascii?Q?hV9tpLggBCxNbj3KFkZalH0MkKH7bVqUGJElw9Puj8WLhq34DUh8vlRIN6gp?=
 =?us-ascii?Q?GNkzNPJ5BKi2QnXRK/niqsU4uemFZHHb5C9CZwleImr0DnadfcXIxhAvPgGO?=
 =?us-ascii?Q?6+WsYAPpZFBgGiUvUbUsJHK1BadMTChyVGwAea8ydc+LOOakovVbwjegvddE?=
 =?us-ascii?Q?1gPfgx931fV3eEW2hQtvvzEzoHp6Pui/w46K01jarTBkgEPjVucBNw8yXtn6?=
 =?us-ascii?Q?g0I9/9XiAt1IudEGsqiMh2ZA25uWrMmScFNY9q5Z978aBdbDAkeYS+MAcWVN?=
 =?us-ascii?Q?7/cGWlQBILXxFZYig97U1zGol+dSVmvnFQnmnl0H5VzChz1UQVJ+2sTIGe1d?=
 =?us-ascii?Q?CMxmMEg30P9nINfLpraOFkMaqpBLuXcYdS+9ORA1ihNpXnJAHDDShG+IX6ad?=
 =?us-ascii?Q?NlRYiM94T9LCov2PGX44sN2c7R0WnJVCEuBhk6bs2ebNj97TWQuxG15S+LwO?=
 =?us-ascii?Q?qDnEItpIOCvHYuTRmJkaMsOVSgGl2D8ggvzf2Q2rkl4jt/htFGjYqTvhJmSO?=
 =?us-ascii?Q?FzlQf0rN6ZnkZb2NxWvcAvtQrMGEAa4b+ccrUBnK3jn50h5mCoeRkukRFuoC?=
 =?us-ascii?Q?0xdJWgcCBiYvOEkkaLEvRmd8cV0TQ4BnXr38BGqxAV7Jgh314Mi4LCDyA3Q7?=
 =?us-ascii?Q?yVrrCZpzSi71YS/Z33NCrM7Bxqw4m+VENIYtbVEObvYx95qjHSmL3iqDFvyB?=
 =?us-ascii?Q?vi8+Fl+YBp4M15q8KcuUKkqMnFJg4uHlxRrSxmokrqyHriR5J+kXdE9nGKKG?=
 =?us-ascii?Q?2RQ5rTSsqZ85ygoN8Is9ChKGP9BR8gbbL3+CaTTfM89FEntJkdjvWsX6+QSJ?=
 =?us-ascii?Q?ahCyz31RK6sb3UMQmxVhDnB9FrG9OeAO555lniuJJKdjCQl9h88B06b0V8m3?=
 =?us-ascii?Q?uQr6pyQks6Bf2tMLTsAkP6GQaGcGcdUc0TbS+4z7+ppNWWI55HBvo/8dfEa4?=
 =?us-ascii?Q?bguJhcKVLfJjLxUffiMbGv0JjLr/QtwfTCnSykdNzyULRhohupKlirY2LJjH?=
 =?us-ascii?Q?5sR/1HQUEFHbWwPb5kum+mruyh/acXf1NRVnFWIlO/0AaLjKVb9KCZKtwO2t?=
 =?us-ascii?Q?Pq0p03l43IE8CWXcMNrC4AE6TTpDAAUurkHg4QI2ICeZ4yVLqANDwpnbZOTW?=
 =?us-ascii?Q?gh9TX+Cw1YIz8nc0tfilqJgAXhXgwGc5Rmhc76UVeQETXunGRBmmaX9EoohF?=
 =?us-ascii?Q?Fws3TDXSrJU/xM3v2WDtrss9egnTEokVuuE2vqW/DQkN/87hVhSNV7ylWhim?=
 =?us-ascii?Q?1oF49qM1oWthtbWhc8UEBNbm?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR08MB6588.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(1800799021)(376011)(366013)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3574B2439F756540A8B759FD2DF87CA4@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS4PR08MB7456
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM2PEPF0001C714.eurprd05.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1dca6199-2edd-4de6-aa22-08dc903e74eb
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|35042699019|34020700013|376011|1800799021|36860700010|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?eByf2QhGDno41niZxGnxWdULxY3kr91SNjG8jgbCSijz31WmBLipBC8AXf45?=
 =?us-ascii?Q?ISMN6IB+efpRZPB4MB6tLLnCHA5he3N75q61v3mFZRx9xU5LgOysYSaThtjT?=
 =?us-ascii?Q?s6ej5bBp3NROzPU/j8Ayw9CtCQPfTH0x3/i3weJpI/uVzxvjDCumf+O6R8X1?=
 =?us-ascii?Q?61Yt80l83/n+6TzYzlC1FQ+U5Ms0Cg/D71RRlxuG6+5q6FkPu4+VEHjAyOO+?=
 =?us-ascii?Q?rEpTd/qzDKJeN9sHSqDZdw/SZcjRvoFJT1QgVIx0xnqV8fTdplmDxHN00DNn?=
 =?us-ascii?Q?y+MaF+erqAqoJqKvFluKcP9PkkjxbAPjW72UnHXRQndzstl5H/Hm5LG01KoL?=
 =?us-ascii?Q?40cALchrII3g59hIMvNPk2sMISnZjeQ+Ui7mO536XrEFAtB7zrNRXae5aA/B?=
 =?us-ascii?Q?v6gSPBTaVe7bFW5U70yyroHiTHHnIAgU5TTleuMPCEnNG7mFGMxLm4nDs+qr?=
 =?us-ascii?Q?ua1iZg18puG9hrfDemQ6N821w+oLOP+6HmaITYwLFpMITun55cjMPAn1Ukfy?=
 =?us-ascii?Q?xUtVZT8cF0RLn/LLzy/kMlXnB42GLTKnsEtM+sn3nRPVtRsHkUb/DX3fYBhE?=
 =?us-ascii?Q?piTVaUEw5NrRwEvZwDCG0vdvRvbIaYs06qyMFLp00eYOv+qT2JvcExaHYhy9?=
 =?us-ascii?Q?ZDa8mTVj+ddOMWwk/jXHvYbawDY6xDfKcihb/ztcQ4/2f6fxQKp057WnCdC3?=
 =?us-ascii?Q?vGOh20qEE9fNLzHFL7OGwT/x56Ohg+Lw1O8RDHzy6+m1rgWrd35uOOnFLsK7?=
 =?us-ascii?Q?18tdoHX4Mb7o6IJIRwE0XNY9etgyrj/lh6p9rQP/0krj/eMCprjSlIfkzYSz?=
 =?us-ascii?Q?gvKCCHZhDPWvU61fKYiuKkWJhbedRtbUy2EHRwPIbM9AvgODlqpFXCl9IZ0m?=
 =?us-ascii?Q?PfQSwnbYsb13MHHBQjtl2r0YVegAbCJf3qKdw2VrGY6DEA+vJ2FWE4dRBktM?=
 =?us-ascii?Q?hWUz8lXE9YcjKmPR5GtGiNXCQkD5dxw1GqzDnm67c/fiuDj3hs7Idwn1ZcOd?=
 =?us-ascii?Q?uVBtBpZOMTOROj2MZlq6liPATODxo71sHiVPZ9yyu+3Gas1h5/IMnhAQhFwZ?=
 =?us-ascii?Q?68OGS0liQiZgNuSrNPGBVgOdgM99eluh2bXm4oP/VBUA4TzOvV5jwpGsR0A7?=
 =?us-ascii?Q?FCseXevNFUIUS+q0iAnpBXrT5rOHPkm7PLOKEpOQT1Xe+L3CLDcbi28lhJ8J?=
 =?us-ascii?Q?ZsNGZdI4OjywY7FZKD/ESW4cuYQeGFLmaTbqicPKlDRkljgU3+F+mtRKXITz?=
 =?us-ascii?Q?JFwRTVZgVNOeCCIEzg+x0s92bw4EhR0eieBBOXVp5u2lbP2JawoiT6egnqz2?=
 =?us-ascii?Q?RTjtqOKtWsiV93yTcOgCn5qaUf4Ot6t145fzij1VOaoYKQ=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230037)(35042699019)(34020700013)(376011)(1800799021)(36860700010)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jun 2024 09:01:59.4045
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: acc9ffe7-9360-435d-1fb0-08dc903e7a57
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM2PEPF0001C714.eurprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DU0PR08MB9876

Hi Michal,

> On 19 Jun 2024, at 08:46, Michal Orzel <michal.orzel@amd.com> wrote:
>=20
> Building Xen with CONFIG_STATIC_SHM=3Dy results in a build failure:
>=20
> arch/arm/static-shmem.c: In function 'process_shm':
> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used uninitialized =
[-Werror=3Dmaybe-uninitialized]
>  327 |         if ( is_domain_direct_mapped(d) && (pbase !=3D gbase) )
> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>  305 |         paddr_t gbase, pbase, psize;
>=20
> This is because the commit cb1ddafdc573 adds a check referencing
> gbase/pbase variables which were not yet assigned a value. Fix it.
>=20
> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be direct=
-mapped for direct-mapped domains")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand

> ---
> Rationale for 4.19: this patch fixes a build failure reported by CI:
> https://gitlab.com/xen-project/xen/-/jobs/7131807878
> ---
> xen/arch/arm/static-shmem.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>=20
> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
> index c434b96e6204..cd48d2896b7e 100644
> --- a/xen/arch/arm/static-shmem.c
> +++ b/xen/arch/arm/static-shmem.c
> @@ -324,12 +324,6 @@ int __init process_shm(struct domain *d, struct kern=
el_info *kinfo,
>             printk("%pd: static shared memory bank not found: '%s'", d, s=
hm_id);
>             return -ENOENT;
>         }
> -        if ( is_domain_direct_mapped(d) && (pbase !=3D gbase) )
> -        {
> -            printk("%pd: physical address 0x%"PRIpaddr" and guest addres=
s 0x%"PRIpaddr" are not direct-mapped.\n",
> -                   d, pbase, gbase);
> -            return -EINVAL;
> -        }
>=20
>         pbase =3D boot_shm_bank->start;
>         psize =3D boot_shm_bank->size;
> @@ -353,6 +347,13 @@ int __init process_shm(struct domain *d, struct kern=
el_info *kinfo,
>             /* guest phys address is after host phys address */
>             gbase =3D dt_read_paddr(cells + addr_cells, addr_cells);
>=20
> +            if ( is_domain_direct_mapped(d) && (pbase !=3D gbase) )
> +            {
> +                printk("%pd: physical address 0x%"PRIpaddr" and guest ad=
dress 0x%"PRIpaddr" are not direct-mapped.\n",
> +                       d, pbase, gbase);
> +                return -EINVAL;
> +            }
> +
>             for ( i =3D 0; i < PFN_DOWN(psize); i++ )
>                 if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
>                 {
> --=20
> 2.25.1
>=20



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 09:02:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 09:02:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743561.1150478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrDP-0007aX-Ds; Wed, 19 Jun 2024 09:02:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743561.1150478; Wed, 19 Jun 2024 09:02:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrDP-0007aQ-BD; Wed, 19 Jun 2024 09:02:51 +0000
Received: by outflank-mailman (input) for mailman id 743561;
 Wed, 19 Jun 2024 09:02:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Pjt=NV=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1sJrDO-0007Qe-Pw
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 09:02:50 +0000
Received: from EUR03-AM7-obe.outbound.protection.outlook.com
 (mail-am7eur03on20601.outbound.protection.outlook.com
 [2a01:111:f403:260e::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b386900e-2e1a-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 11:02:49 +0200 (CEST)
Received: from DUZPR01CA0332.eurprd01.prod.exchangelabs.com
 (2603:10a6:10:4b8::18) by AS8PR08MB6614.eurprd08.prod.outlook.com
 (2603:10a6:20b:338::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Wed, 19 Jun
 2024 09:02:46 +0000
Received: from DB1PEPF000509E7.eurprd03.prod.outlook.com
 (2603:10a6:10:4b8:cafe::48) by DUZPR01CA0332.outlook.office365.com
 (2603:10a6:10:4b8::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.32 via Frontend
 Transport; Wed, 19 Jun 2024 09:02:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB1PEPF000509E7.mail.protection.outlook.com (10.167.242.57) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Wed, 19 Jun 2024 09:02:45 +0000
Received: ("Tessian outbound 01519d7ebfdb:v339");
 Wed, 19 Jun 2024 09:02:45 +0000
Received: from 50a9dff24f94.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1D5C608D-3008-4564-9A2C-FC66A6B72883.1; 
 Wed, 19 Jun 2024 09:02:39 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 50a9dff24f94.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 Jun 2024 09:02:39 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com (2603:10a6:10:25a::24)
 by DB9PR08MB9537.eurprd08.prod.outlook.com (2603:10a6:10:459::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Wed, 19 Jun
 2024 09:02:35 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a]) by DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a%6]) with mapi id 15.20.7698.017; Wed, 19 Jun 2024
 09:02:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b386900e-2e1a-11ef-b4bb-af5377834399
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Df/cpKb00GgJeb7wMCAyoCtgxFw78VB8dPAFA9+LLEio4PaUNlMT/yaXIQERZvhEnz9hUEI5X7D4vaR9EnzRGAXvc4ZEX01MZySJUlW4+Y+ocLIRDbOcgaQunq6Hozj01mFsYPYTgvOL/Mtb4tYZ3Do27/4xErlcCXCHgD/3EErgyjKOk8K2x7Z2VZaO6QnXaM4TrlM7HHarSee60eKPcGcvkfOMjm+sEK18/kud+Bome7ZFA7IF/ZyvmqWfXvacdktS+a0DewIlInnM6KH4mTUGTy8TiAF0DZMkh1XQTLQ/y2u8TEaAS8ZIL+BhRNOFDqK8hQYWZYvQyJgvmT0+Xg==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Dg8wFKGvi0pJ6iFSBlBGiM/4Rp2i3FxBb1Sx8wo+uGA=;
 b=WEO3q9QBLlkz2k/0VdE1eupLrmLWDQvq4Ir+BjBiGOmdG7eyJr4/ZPGDB2FSFF2TqDCmPRtTmQF6I3AJyAznRLrV2EUBbHQRayy3XEcTFQIJpyu2xl8MbfOulKTAQL824F65O2dUj9MGFBGFsgAD7XXYDLRzT+8Mn2dDsNyfgKNe0NA3ve2Ar2z08Q9KJvnHBGH565UwWRVBy/vULzI4KhwJnfg3L/IsLLrqBmYTFMLJpGAnyL7qeumkxBUZHpO6LpNvJWwCTX29CLuy/9c5Umnk8xk83wraXFqeebXyrB10vyzedQt2pt/VA8fAvn0n4g9f7URVzTSRaX2LJ0FHUw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Dg8wFKGvi0pJ6iFSBlBGiM/4Rp2i3FxBb1Sx8wo+uGA=;
 b=dL7BtiPJZA3lHrylTwL2wU263RsLwO2oC2WBVLqr806SEJTeEy9aN9tzpzYYdFFD13knpnhf3sYY/ZBk1GSQn3OL0lJklwFui4NXTIaT/abI+bovF5w1860OxK8kvWanQZ4EF98zQHdWUn7BRoZ8dO/i2WUPCaOhXAtxcoxBJUo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: f2aa740b65bad843
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cbVVfGK9mvbr+hFDApZAdyTdvICPFvLVGg3tJNVpV2dcLgUFJjTJkxuQggryn1YRbB3HHTXKfcw8Bh6v4frp13okupsdIzGHWh+YxgYaqUvUSEYOjIxXj4V8TEFNw2+UwWAPEKOzXAHs06yfuYpU5iazjpzQEtp41IuTXe44EXgCHoAFdqWiizCZ+k2IB7lKkKRIFmrLV+4WkexVIeqxT8aiaBmwPEmy0QL8KkVuJ0XJlQDO0d4jQGRA4SSoyjAEN+hyLjtIV3jDWUtL+k5ZR2ILLhGwspDbVzXrmMtU/T9NupBma+9K5bFj3IYY2lL6393tanoWV0xjVpgaWDjVmg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Dg8wFKGvi0pJ6iFSBlBGiM/4Rp2i3FxBb1Sx8wo+uGA=;
 b=G1cJH2IZ4BbjlrhSXEg8A3Ve/mv3B8mbwsjBxuCyvkwSWzxZIM54e/wd5yeRJZQthl5m8GyGtjkrBHPz9yWZdJ6aP6CgB2MJoKr/APCSkbLB2oPB/Bkhg4JFD2l0IyOUFSVgk32ndIdnksvMMgfE1uEN61fruniYV9wrLXAYWs0PUl51jXqm4qDXeooYc8fHk8u5v7yiS1cDJyRwCwNvJYEjW0Olh1lyPA4RRkJnkwvh40UakUw+PNqOqNX+Z+x95OOMBJ3TiLroZAXFDNcpStlpAfWsrFnWWRp3iNKZ6oFFrapI7qd+DieMILJdcBkI/AyjDghO7SX/+hHBqBDMQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Dg8wFKGvi0pJ6iFSBlBGiM/4Rp2i3FxBb1Sx8wo+uGA=;
 b=dL7BtiPJZA3lHrylTwL2wU263RsLwO2oC2WBVLqr806SEJTeEy9aN9tzpzYYdFFD13knpnhf3sYY/ZBk1GSQn3OL0lJklwFui4NXTIaT/abI+bovF5w1860OxK8kvWanQZ4EF98zQHdWUn7BRoZ8dO/i2WUPCaOhXAtxcoxBJUo=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, Oleksii <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
Thread-Topic: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
Thread-Index: AQHawhSD1mJnwmtkoE+LLMyZ3OtgsbHOyqWA
Date: Wed, 19 Jun 2024 09:02:35 +0000
Message-ID: <8C571FCD-3EAF-40B5-8694-625880176F8B@arm.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
In-Reply-To: <20240619064652.18266-1-michal.orzel@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	DB9PR08MB6588:EE_|DB9PR08MB9537:EE_|DB1PEPF000509E7:EE_|AS8PR08MB6614:EE_
X-MS-Office365-Filtering-Correlation-Id: 74fa9972-87eb-485a-6eac-08dc903e95d7
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230037|1800799021|376011|366013|38070700015;
X-Microsoft-Antispam-Message-Info-Original:
 =?us-ascii?Q?s0H8eU5HiWkzRs5WH68numXuG2hsSzL+4QOpmz8h3vL9TWHS23C3IKuI19pi?=
 =?us-ascii?Q?22QsdI+t4r8NXmZumXxvH+nALAEg9ByvKPGa3EPp/Zygia+C3I8IcFel8DIf?=
 =?us-ascii?Q?uciUJtFZn0gZYzKQMh7SijqRBg+tGrTbHBrAled3iwaaXQBdI4wQm4oZ5YKw?=
 =?us-ascii?Q?pYfuPhhgsga2O6gMnS3s9MGPWLZy+rl9M8d1A96LQUoXOlEKuD4Iy+iXdn7h?=
 =?us-ascii?Q?13wJn9j1bNfHHBewqSelPE0TvXJzNRutTzzy2LvaL7zna47a6Q9TpdVwXHS2?=
 =?us-ascii?Q?+J0XllpBpMKbx3wcu++vPLhy4hkMzrMIFH19RxPpz2UagrRBshxyOTBSOA58?=
 =?us-ascii?Q?Xa9zl3gsnxkpZTuaJAGr25bZanHdOe3QxrnKTCuw+Gx2ioQHVm6qUq60zNj4?=
 =?us-ascii?Q?h5mnJs1yp55hlFNRo6qZk26oYtdU7TNXeFD8jRBELjUG4s3WdOta35XxjoN4?=
 =?us-ascii?Q?1zkyNnPKYH2umgMVZVcQJk6Hay/hXkm7Rcj7IW+74iB7JEGW5M9On0uvXV9A?=
 =?us-ascii?Q?Nz81u0UXqaIN+VK7ZdlLfyQ7cyURK6wZwIYqt+PCurIJPiWDdZsEHAuwYBOK?=
 =?us-ascii?Q?WziHQGtOlOinXmtDoZgiSbWW7Z0SUgmPcUNJAwzkJeO5DyM+SU0l5uwQUecf?=
 =?us-ascii?Q?uKzP+aWW/FnvTsGslzt+4BuB8HlkNK8nCQhbgQeIA+WE3iNWiXalAO6AVpu7?=
 =?us-ascii?Q?Wfc1FmKfHpQ1DRJjqmjnnyUbtDLRZUdOQxMdHpQpRBiqWixB0Us8aSpoE7M7?=
 =?us-ascii?Q?AYX2K5VX600nYWAuq5xnvXDfQ+HGQp1RMF+yAChS+MkCWfNXLexVH4o6vhil?=
 =?us-ascii?Q?qMq9bF6X1Lsq00SsF2LDplFwRZ8ncAY2oP1gW9WWpQv/wH4GbJoGzyfAuscf?=
 =?us-ascii?Q?eDQLHXE587/KyR5JAWFDbqOT04Fa+Ct2b4KAJTRT/saLOwkdCt+KM+zxP63X?=
 =?us-ascii?Q?YgJry1ks39nIAgk1mjbbgPNa313edQxJuLRSJsFDq7lROlnNDFj/wLH/Slgg?=
 =?us-ascii?Q?s91rPDQAP1Zlmt4cFSASxrP/HQSOi5wObBRugBSzqnD30BxsKi7/u43je1Nw?=
 =?us-ascii?Q?9W+sBz9eVG3CMs1lLHm/Dr1/fk7mXOxmLnGBcAi/ObB1V3v+1UFdGWxMBex5?=
 =?us-ascii?Q?H8yv+0aDi81Xp5GFLtFzatRcnFiYJV0h2xspvjz3ZA3AeVh+F0VXhJIlhkbK?=
 =?us-ascii?Q?AUq76VyX3Mg8D68Ojn3LRW2bxQncKClnhJmt6rSDvor/h6tsUW0DL9wsunWj?=
 =?us-ascii?Q?XpUQVmpDAcki8WgrWzcMUdlG9n9tZZF0+PVvuLl2D1uZBoc4Q2t14+DXbFKt?=
 =?us-ascii?Q?9CDME8gBfl99GiueHfc8e6bu?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR08MB6588.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(1800799021)(376011)(366013)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <99F336B890451043A54434852F1BBE0A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB9PR08MB9537
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB1PEPF000509E7.eurprd03.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	8af0bcd0-f2a6-4608-dd6a-08dc903e900c
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|34020700013|82310400023|1800799021|35042699019|36860700010|376011;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?Mr4LQ8vF8sQdwhg7lMDqMxvDBXqu9LshdjGXo3c0HnXCBtcNoM71pJ1dXTV9?=
 =?us-ascii?Q?qGxhvW5Bdte1AgFrfXKRBbM0kmMxtiPVN/Ny/jZRXquDTMr/8aAXZ3dpXyt0?=
 =?us-ascii?Q?3BtaMmDN/bX+xyShi402gBeJ3NQSDh3gKuXyi+bbon6xi8bO/6CwN4TysbDq?=
 =?us-ascii?Q?LXvb1LSBBDcPGBeOlFZ6deP/KyuWexzRIay/9oJGvCfcADDn7iYZq2SjN8Pe?=
 =?us-ascii?Q?x9FuIQsPfrRG0HvVL9kBmoH2EREt1p0YvUAG0xuNptn+NfNEAU4HiFdBYLHA?=
 =?us-ascii?Q?EFKZWiDgNsCNh9diVwu9cI4lwm9zjgHw3MJ1VkEgGlWdeFUF66YHXkV6HqXp?=
 =?us-ascii?Q?GRqmnkeScH52rwy5TtDOBKa3fCRW3so0bgWKehbbvO1mMKjiiYdzesPglvgB?=
 =?us-ascii?Q?BR5mnDMb0v4/4APeAqV/pXrxYam9k01nBZSVkuJ9nowHN0ePnqs41JVuCex4?=
 =?us-ascii?Q?NdNZbdSZwE2u0I71STtaiFi3clmt4PQ7K1SEBWQSnmPh3lIozxx8JlBpzHY7?=
 =?us-ascii?Q?Vv15s7POVfY0xy+OaclfytzvEu1oMRDVrPYxTEhLvz9VXeiWfat2+hApXM+B?=
 =?us-ascii?Q?Dyx2gvRuPLXSMIzdwIzUqEyueLpdGEPBjRdvt99nn4ZNs2VV1kWerJk/IIAz?=
 =?us-ascii?Q?bmudxE+aTzj+v5ydGV76db8KkAVr6xY1FP2Ln0t+yfmCvuWDr7cMcYvTczcX?=
 =?us-ascii?Q?56WsmMLaHYaCcTk91GX48DyJxDlC3qwBZvh9l2RDVXc7CEGt7+OrdCcwAghR?=
 =?us-ascii?Q?/8L1bXyHHgWDd7D42UL7xD+vmTGn51LH0E1ZoRVQbuDn/Vcs0TJIHViu/qei?=
 =?us-ascii?Q?hyt61RUpb1BqqRjkdo/4LkcUNvZmq/F3jy4SNjvwqvh/CVeREtfktx6xjC05?=
 =?us-ascii?Q?awfV3yGaO3/JCKHiaZCMheCS8xlVzqZd5GLx7d3GuGC/UBeRhO19JcFQWTMe?=
 =?us-ascii?Q?dRQOG7M4/NcPQPeBgZtPkvvB3en1edvlaXl78P8pmUD2TSIYaQvT1iIzPPMJ?=
 =?us-ascii?Q?jATTQR7xOmZT/tWOGWZEjVxEnGiSOLq0DJ320x2+9b4wLrajx+BKX0Z88Neg?=
 =?us-ascii?Q?rpm5rsGRAtoARoysLufsQgfW6+phHlUEQn0tyK6F7HAKBEjEwFuxUaSaLJ6q?=
 =?us-ascii?Q?2qtr/jWveQRlC7flYPdM0yisp78OkYrvHcb+mFouKUTWgdvToFQ6zd7yFngf?=
 =?us-ascii?Q?1XtEJ9ln0aigAlBDxUPXUuiybayME1nsaqNBngK2eDw3OP0hOzUR2nBx3qj/?=
 =?us-ascii?Q?JFDG1gmvvLEttxlm0KbxToRpYef/rhdtWfqt/+Y1DCyiOnf+ez6T517W5AVK?=
 =?us-ascii?Q?XVm+BQU6zz+7++bcimXJcBnb98QSCJh+X+EMeZcbskZvDQ=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230037)(34020700013)(82310400023)(1800799021)(35042699019)(36860700010)(376011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jun 2024 09:02:45.5907
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 74fa9972-87eb-485a-6eac-08dc903e95d7
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB1PEPF000509E7.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6614

Hi,

Adding Oleksii for Release ack.

Cheers
Bertrand

> On 19 Jun 2024, at 08:46, Michal Orzel <michal.orzel@amd.com> wrote:
>=20
> Building Xen with CONFIG_STATIC_SHM=3Dy results in a build failure:
>=20
> arch/arm/static-shmem.c: In function 'process_shm':
> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used uninitialized =
[-Werror=3Dmaybe-uninitialized]
>  327 |         if ( is_domain_direct_mapped(d) && (pbase !=3D gbase) )
> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>  305 |         paddr_t gbase, pbase, psize;
>=20
> This is because the commit cb1ddafdc573 adds a check referencing
> gbase/pbase variables which were not yet assigned a value. Fix it.
>=20
> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be direct=
-mapped for direct-mapped domains")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> Rationale for 4.19: this patch fixes a build failure reported by CI:
> https://gitlab.com/xen-project/xen/-/jobs/7131807878
> ---
> xen/arch/arm/static-shmem.c | 13 +++++++------
> 1 file changed, 7 insertions(+), 6 deletions(-)
>=20
> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
> index c434b96e6204..cd48d2896b7e 100644
> --- a/xen/arch/arm/static-shmem.c
> +++ b/xen/arch/arm/static-shmem.c
> @@ -324,12 +324,6 @@ int __init process_shm(struct domain *d, struct kern=
el_info *kinfo,
>             printk("%pd: static shared memory bank not found: '%s'", d, s=
hm_id);
>             return -ENOENT;
>         }
> -        if ( is_domain_direct_mapped(d) && (pbase !=3D gbase) )
> -        {
> -            printk("%pd: physical address 0x%"PRIpaddr" and guest addres=
s 0x%"PRIpaddr" are not direct-mapped.\n",
> -                   d, pbase, gbase);
> -            return -EINVAL;
> -        }
>=20
>         pbase =3D boot_shm_bank->start;
>         psize =3D boot_shm_bank->size;
> @@ -353,6 +347,13 @@ int __init process_shm(struct domain *d, struct kern=
el_info *kinfo,
>             /* guest phys address is after host phys address */
>             gbase =3D dt_read_paddr(cells + addr_cells, addr_cells);
>=20
> +            if ( is_domain_direct_mapped(d) && (pbase !=3D gbase) )
> +            {
> +                printk("%pd: physical address 0x%"PRIpaddr" and guest ad=
dress 0x%"PRIpaddr" are not direct-mapped.\n",
> +                       d, pbase, gbase);
> +                return -EINVAL;
> +            }
> +
>             for ( i =3D 0; i < PFN_DOWN(psize); i++ )
>                 if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
>                 {
> --=20
> 2.25.1
>=20



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 09:10:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 09:10:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743569.1150488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrL1-00018v-50; Wed, 19 Jun 2024 09:10:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743569.1150488; Wed, 19 Jun 2024 09:10:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrL1-00018o-2R; Wed, 19 Jun 2024 09:10:43 +0000
Received: by outflank-mailman (input) for mailman id 743569;
 Wed, 19 Jun 2024 09:10:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJrL0-00018i-Fz
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 09:10:42 +0000
Received: from mail-wr1-x435.google.com (mail-wr1-x435.google.com
 [2a00:1450:4864:20::435])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ccc1149a-2e1b-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 11:10:40 +0200 (CEST)
Received: by mail-wr1-x435.google.com with SMTP id
 ffacd0b85a97d-362f62ae4c5so420667f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 02:10:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e6e416sm111787995ad.72.2024.06.19.02.10.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 02:10:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccc1149a-2e1b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718788240; x=1719393040; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=LZteM++zhl/XcAL1n0Kz5PBw4DY8KmQSRp3n7hs6qaM=;
        b=A4A2jVh076Tocecz1lXP2ZJcN2RV9uhYnw4VTdZVpyqqkWFRacMJM6hiTzKf9vTvrl
         36v4bHP6Jy9IYv8hRZR02ut0tFZTfIbArsZ6NOtEEJJaLY+SwxbPxAkowzWhV9V2AtH1
         OaVawS20ctGSoNCw1yyPF+K3WUr7Gv4uxgmjwMiW0nnLIoyuP8yFB6IPfErx/rh4VR9a
         enqVQqqc5EWPPZWjwsW3gWi/JzgIpI/i1sv2NEiqdM7qtsuOOiQmI/v+xnXZIsMrQzgZ
         I9YdATO8ARxhbAw3F5OrCX4sDDaNJk4G/f7TESra/AKI2PdN56/qyN8JBx/c5Pbzgfzo
         Be3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718788240; x=1719393040;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LZteM++zhl/XcAL1n0Kz5PBw4DY8KmQSRp3n7hs6qaM=;
        b=AqftQt1sj1RJSMs+w0Oo4M1FuHOMXFerbnf62RoQ3RhVsviEPrwwFrxjk2fxSfB0sC
         kwyf80NvII9bxYsDvC4kRzkke3eS3MrQSmy+yh0T1Dzqn9EaypRtXYzZfJUVoa1kMrkM
         maN8xRTYnVpEnUBgJgeyDd/UNWE34c5husD3+Ox6WNnRPK8Tx3IRDnhM2C0Ae80UJyQD
         Nj/fEVZYRDvwNSKKyLyHdF1FKBOXQopbpxDxNGcgFCZ/NZCYv9n/bHebGC+RrnnDGXWG
         gbEVutjO5sr3dyAeDX6CZ2xya9XNcS3jiX9OJHOOH+koi+CqM13VOtA6QYjPHZQyvHoF
         WOAw==
X-Forwarded-Encrypted: i=1; AJvYcCU8bdJbISzCXEuRf4keV2spZYMI1pMmLReaq7PhV5ChkwPVrOw0+VApRPSqjjYOLK0eqHIsSd/HuJlimjR1/7qHnRQIhJncDemkU+rxjho=
X-Gm-Message-State: AOJu0YypiZhsg7zoDG9sEYVMwO9SpvrvKlPv9Nw7GnvuQ1Fv+slIfFuH
	C2vkWlgz5qyMoXXj7HJEy8NFdgK6S6h++rfdv4YljYyYTOAZ+uON8/S4y9+zPw==
X-Google-Smtp-Source: AGHT+IE9q9cEIrS3dGVoDqsFTgkEardDM0EkTGatYUKlCa+ZlBYpzs7Yq/RAIusO3TukqqkbZi1W7Q==
X-Received: by 2002:a05:6000:1a54:b0:363:1c9d:d853 with SMTP id ffacd0b85a97d-3631c9dd9damr1718912f8f.32.1718788240005;
        Wed, 19 Jun 2024 02:10:40 -0700 (PDT)
Message-ID: <ac3e58ab-ba3b-4fd9-b676-d9f515a7711c@suse.com>
Date: Wed, 19 Jun 2024 11:10:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v3 3/3] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
References: <20240613165617.42538-1-roger.pau@citrix.com>
 <20240613165617.42538-4-roger.pau@citrix.com>
 <e3912334-4dbe-40e9-aed4-8b47e1570cc7@suse.com> <ZnFv7b4YNjeRXj6-@macbook>
 <2f388d0a-c9b5-409a-b622-5dfeb3093e82@suse.com> <ZnGerbiI7P9PHPmK@macbook>
 <ba89126f-715d-498e-81e1-2ed105ac2d1c@suse.com> <ZnKDTEX_eGz2sS4K@macbook>
 <541885b6-fd09-4531-8ae9-8e57e504c1b3@suse.com> <ZnKXqDxlG2d2MohM@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZnKXqDxlG2d2MohM@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.06.2024 10:32, Roger Pau Monné wrote:
> On Wed, Jun 19, 2024 at 09:24:41AM +0200, Jan Beulich wrote:
>> On 19.06.2024 09:05, Roger Pau Monné wrote:
>>> On Tue, Jun 18, 2024 at 06:30:22PM +0200, Jan Beulich wrote:
>>>> On 18.06.2024 16:50, Roger Pau Monné wrote:
>>>>> On Tue, Jun 18, 2024 at 04:34:50PM +0200, Jan Beulich wrote:
>>>>>> On 18.06.2024 13:30, Roger Pau Monné wrote:
>>>>>>> On Mon, Jun 17, 2024 at 03:41:12PM +0200, Jan Beulich wrote:
>>>>>>>> On 13.06.2024 18:56, Roger Pau Monne wrote:
>>>>>>>>> @@ -2686,11 +2705,27 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
>>>>>>>>>          if ( desc->handler->disable )
>>>>>>>>>              desc->handler->disable(desc);
>>>>>>>>>  
>>>>>>>>> +        /*
>>>>>>>>> +         * If the current CPU is going offline and is (one of) the target(s) of
>>>>>>>>> +         * the interrupt, signal to check whether there are any pending vectors
>>>>>>>>> +         * to be handled in the local APIC after the interrupt has been moved.
>>>>>>>>> +         */
>>>>>>>>> +        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
>>>>>>>>> +            check_irr = true;
>>>>>>>>> +
>>>>>>>>>          if ( desc->handler->set_affinity )
>>>>>>>>>              desc->handler->set_affinity(desc, affinity);
>>>>>>>>>          else if ( !(warned++) )
>>>>>>>>>              set_affinity = false;
>>>>>>>>>  
>>>>>>>>> +        if ( check_irr && apic_irr_read(vector) )
>>>>>>>>> +            /*
>>>>>>>>> +             * Forward pending interrupt to the new destination, this CPU is
>>>>>>>>> +             * going offline and otherwise the interrupt would be lost.
>>>>>>>>> +             */
>>>>>>>>> +            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
>>>>>>>>> +                          desc->arch.vector);
>>>>>>>>
>>>>>>>> Hmm, IRR may become set right after the IRR read (unlike in the other cases,
>>>>>>>> where new IRQs ought to be surfacing only at the new destination). Doesn't
>>>>>>>> this want moving ...
>>>>>>>>
>>>>>>>>>          if ( desc->handler->enable )
>>>>>>>>>              desc->handler->enable(desc);
>>>>>>>>
>>>>>>>> ... past the actual affinity change?
>>>>>>>
>>>>>>> Hm, but the ->enable() hook is just unmasking the interrupt, the
>>>>>>> actual affinity change is done in ->set_affinity(), and hence after
>>>>>>> the call to ->set_affinity() no further interrupts should be delivered
>>>>>>> to the CPU regardless of whether the source is masked?
>>>>>>>
>>>>>>> Or is it possible for the device/interrupt controller to not switch to
>>>>>>> use the new destination until the interrupt is unmasked, and hence
>>>>>>> could have pending masked interrupts still using the old destination?
>>>>>>> IIRC For MSI-X it's required that the device updates the destination
>>>>>>> target once the entry is unmasked.
>>>>>>
>>>>>> That's all not relevant here, I think. IRR gets set when an interrupt is
>>>>>> signaled, no matter whether it's masked.
>>>>>
>>>>> I'm kind of lost here, what does signaling mean in this context?
>>>>>
>>>>> I would expect the interrupt vector to not get set in IRR if the MSI-X
>>>>> entry is masked, as at that point the state of the address/data fields
>>>>> might not be consistent (that's the whole point of masking it right?)
>>>>>
>>>>>> It's its handling which the
>>>>>> masking would prevent, i.e. the "moving" of the set bit from IRR to ISR.
>>>>>
>>>>> My understanding was that the masking would prevent the message write to
>>>>> the APIC from happening, and hence no vector should get set in IRR.
>>>>
>>>> Hmm, yes, looks like I was confused. The masking is at the source side
>>>> (IO-APIC RTE, MSI-X entry, or - if supported - in the MSI capability).
>>>> So the sole case to worry about is MSI without mask-bit support then.
>>>
>>> Yeah, and for MSI without masking bit support we don't care doing the
>>> IRR check before or after the ->enable() hook, as that's a no-op in
>>> that case.  The write to the MSI address/data fields has already been
>>> done, and hence the issue would be exclusively with draining any
>>> in-flight writes to the APIC doorbell (what you mention below).
>>
>> Except that both here ...
>>
>>>>>> Plus we run with IRQs off here anyway if I'm not mistaken, so no
>>>>>> interrupt can be delivered to the local CPU. IOW whatever IRR bits it
>>>>>> has set (including ones becoming set between the IRR read and the actual
>>>>>> vector change), those would never be serviced. Hence the reading of the
>>>>>> bit ought to occur after the vector change: It's only then that we know
>>>>>> the IRR bit corresponding to the old vector can't become set anymore.
>>>>>
>>>>> Right, and the vector change happens in ->set_affinity(), not
>>>>> ->enable().  See for example set_msi_affinity() and the
>>>>> write_msi_msg(), that's where the vector gets changed.
>>>>>
>>>>>> And even then we're assuming that no interrupt signals might still be
>>>>>> "on their way" from the IO-APIC or a posted MSI message write by a
>>>>>> device to the LAPIC (I have no idea how to properly fence that, or
>>>>>> whether there are guarantees for this to never occur).
>>>>>
>>>>> Yeah, those I expect would be completed in the window between the
>>>>> write of the new vector/destination and the reading of IRR.
>>>>
>>>> Except we have no idea on the latencies.
>>>
>>> There isn't much else we can do? Even the current case where we add
>>> the 1ms window at the end of the shuffling could still suffer from
>>> this issue because we don't know the latencies.  IOW: I don't think
>>> this is any worse than the current approach.
>>
>> ... and here, the later we read IRR, the better the chances we don't miss
>> anything. Even the no-op ->enable() isn't a no-op execution-wise. In fact
>> it (quite pointlessly[1]) is an indirect call to irq_enable_none(). I'm
>> actually inclined to suggest that we try to even further delay the IRR
>> read, certainly past the cpumask_copy(), maybe even past the spin_unlock()
>> (latching CPU and vector into local variables, along with the latching of
>> ->affinity that's already there).
> 
> Moving past cpumask_copy() would be OK.  Moving past the spin_unlock()
> I'm not so sure.  Isn't it possible that once the desc is unlocked the
> interrupt starts another movement, and hence by the time the forwarded
> vector is injected the irq desc has already moved to yet a different
> CPU?
> 
> I don't think this is realistic given the window between the
> spin_unlock() and the injection of the vector, but could be
> possible.

Hmm, yes, in principle this is possible, if we ignore the fact that bringing
down a CPU is done in stop-machine context (i.e. IRQs are off everywhere,
making the initiation of further movement impossible). But perhaps indeed
better not to bake in dependencies on that.

> If you are fine with moving past the cpumask_copy() but before the
> spin_unlock() I can post an updated version with that.

Yes, please do.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 09:29:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 09:29:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743580.1150497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrdF-0003Rj-Kh; Wed, 19 Jun 2024 09:29:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743580.1150497; Wed, 19 Jun 2024 09:29:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrdF-0003Rc-I8; Wed, 19 Jun 2024 09:29:33 +0000
Received: by outflank-mailman (input) for mailman id 743580;
 Wed, 19 Jun 2024 09:29:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mh+g=NV=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sJrdE-0003RW-L8
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 09:29:32 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6e642c78-2e1e-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 11:29:31 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.161.166.217])
 by support.bugseng.com (Postfix) with ESMTPSA id A5CE74EE0738;
 Wed, 19 Jun 2024 11:29:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e642c78-2e1e-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Juergen Gross <jgross@suse.com>
Subject: [XEN PATCH v2] xen: add explicit comment to identify notifier patterns
Date: Wed, 19 Jun 2024 11:29:22 +0200
Message-Id: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 16.4 states that every `switch' statement shall have a
`default' label, and that a statement or a comment shall appear prior to
the terminating break statement.

This patch addresses some violations of the rule related to the
"notifier pattern": a frequently-used pattern whereby only a few values
are handled by the switch statement and nothing should be done for the
others, so the default case is intentionally empty.

Note that for function mwait_idle_cpu_init() in
xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
not added: unlike the other functions covered by this patch, its
default label has a return statement and hence does not violate Rule 16.4.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v2:
as Jan pointed out, in the v1 some patterns were not explicitly identified
(https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).

This version adds the /* Notifier pattern. */ to all the pattern present in
the Xen codebase except for mwait_idle_cpu_init().
---
 xen/arch/arm/cpuerrata.c                     | 1 +
 xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
 xen/arch/arm/gic.c                           | 1 +
 xen/arch/arm/irq.c                           | 4 ++++
 xen/arch/arm/mmu/p2m.c                       | 1 +
 xen/arch/arm/percpu.c                        | 1 +
 xen/arch/arm/smpboot.c                       | 1 +
 xen/arch/arm/time.c                          | 1 +
 xen/arch/arm/vgic-v3-its.c                   | 2 ++
 xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
 xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
 xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
 xen/arch/x86/genapic/x2apic.c                | 3 +++
 xen/arch/x86/hvm/hvm.c                       | 1 +
 xen/arch/x86/nmi.c                           | 1 +
 xen/arch/x86/percpu.c                        | 3 +++
 xen/arch/x86/psr.c                           | 3 +++
 xen/arch/x86/smpboot.c                       | 3 +++
 xen/common/kexec.c                           | 1 +
 xen/common/rcupdate.c                        | 1 +
 xen/common/sched/core.c                      | 1 +
 xen/common/sched/cpupool.c                   | 1 +
 xen/common/spinlock.c                        | 1 +
 xen/common/tasklet.c                         | 1 +
 xen/common/timer.c                           | 1 +
 xen/drivers/cpufreq/cpufreq.c                | 1 +
 xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
 xen/drivers/passthrough/x86/hvm.c            | 3 +++
 xen/drivers/passthrough/x86/iommu.c          | 3 +++
 29 files changed, 59 insertions(+)

diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
index 2b7101ea25..69c30aecd8 100644
--- a/xen/arch/arm/cpuerrata.c
+++ b/xen/arch/arm/cpuerrata.c
@@ -730,6 +730,7 @@ static int cpu_errata_callback(struct notifier_block *nfb,
         rc = enable_nonboot_cpu_caps(arm_errata);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index eb0a5535e4..4c2bd35403 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -389,6 +389,10 @@ static int cpu_callback(struct notifier_block *nfb, unsigned long action,
             printk(XENLOG_ERR "Unable to allocate the pendtable for CPU%lu\n",
                    cpu);
         break;
+
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/arm/gic.c b/xen/arch/arm/gic.c
index 3eaf670fd7..dc5408a456 100644
--- a/xen/arch/arm/gic.c
+++ b/xen/arch/arm/gic.c
@@ -463,6 +463,7 @@ static int cpu_gic_callback(struct notifier_block *nfb,
         release_irq(gic_hw_ops->info->maintenance_irq, NULL);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/irq.c b/xen/arch/arm/irq.c
index c60502444c..61ca6f5b87 100644
--- a/xen/arch/arm/irq.c
+++ b/xen/arch/arm/irq.c
@@ -127,6 +127,10 @@ static int cpu_callback(struct notifier_block *nfb, unsigned long action,
             printk(XENLOG_ERR "Unable to allocate local IRQ for CPU%u\n",
                    cpu);
         break;
+
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/arm/mmu/p2m.c b/xen/arch/arm/mmu/p2m.c
index 1725cca649..bf7c66155d 100644
--- a/xen/arch/arm/mmu/p2m.c
+++ b/xen/arch/arm/mmu/p2m.c
@@ -1839,6 +1839,7 @@ static int cpu_virt_paging_callback(struct notifier_block *nfb,
         setup_virt_paging_one(NULL);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/percpu.c b/xen/arch/arm/percpu.c
index 87fe960330..81f91f05bb 100644
--- a/xen/arch/arm/percpu.c
+++ b/xen/arch/arm/percpu.c
@@ -66,6 +66,7 @@ static int cpu_percpu_callback(
         free_percpu_area(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/smpboot.c b/xen/arch/arm/smpboot.c
index 04e363088d..3d481e59f9 100644
--- a/xen/arch/arm/smpboot.c
+++ b/xen/arch/arm/smpboot.c
@@ -591,6 +591,7 @@ static int cpu_smpboot_callback(struct notifier_block *nfb,
         remove_cpu_sibling_map(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index e74d30d258..27cbfae874 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -382,6 +382,7 @@ static int cpu_time_callback(struct notifier_block *nfb,
         deinit_timer_interrupt();
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/arm/vgic-v3-its.c b/xen/arch/arm/vgic-v3-its.c
index 70b5aeb822..a33ff64ff2 100644
--- a/xen/arch/arm/vgic-v3-its.c
+++ b/xen/arch/arm/vgic-v3-its.c
@@ -1194,6 +1194,7 @@ static void sanitize_its_base_reg(uint64_t *reg)
         r |= GIC_BASER_InnerShareable << GITS_BASER_SHAREABILITY_SHIFT;
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
@@ -1206,6 +1207,7 @@ static void sanitize_its_base_reg(uint64_t *reg)
         r |= GIC_BASER_CACHE_RaWb << GITS_BASER_INNER_CACHEABILITY_SHIFT;
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 57ac984790..f0af32e578 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -1661,6 +1661,10 @@ static int cf_check cpu_callback(
              processor_powers[cpu] )
             amd_cpuidle_init(processor_powers[cpu]);
         break;
+
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 32c1b2756b..222b174bbb 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -722,6 +722,10 @@ static int cf_check cpu_callback(
         if ( park_offline_cpus )
             cpu_bank_free(cpu);
         break;
+
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index dd812f4b8a..be82ea4631 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -949,6 +949,10 @@ static int cf_check cpu_callback(
         cpu_mcheck_distribute_cmci();
         cpu_mcabank_free(cpu);
         break;
+
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/genapic/x2apic.c b/xen/arch/x86/genapic/x2apic.c
index 371dd100c7..d271102f9f 100644
--- a/xen/arch/x86/genapic/x2apic.c
+++ b/xen/arch/x86/genapic/x2apic.c
@@ -238,6 +238,9 @@ static int cf_check update_clusterinfo(
         }
         FREE_CPUMASK_VAR(per_cpu(scratch_mask, cpu));
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(err);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 8334ab1711..00c360cf24 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -123,6 +123,7 @@ static int cf_check cpu_callback(
         alternative_vcall(hvm_funcs.cpu_dead, cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index 9793fa2316..105efa5a71 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -434,6 +434,7 @@ static int cf_check cpu_nmi_callback(
         kill_timer(&per_cpu(nmi_timer, cpu));
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/arch/x86/percpu.c b/xen/arch/x86/percpu.c
index 3205eacea6..627b56b9f3 100644
--- a/xen/arch/x86/percpu.c
+++ b/xen/arch/x86/percpu.c
@@ -84,6 +84,9 @@ static int cf_check cpu_percpu_callback(
         if ( park_offline_cpus )
             free_percpu_area(cpu);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/psr.c b/xen/arch/x86/psr.c
index 0b9631ac44..e76b129e6c 100644
--- a/xen/arch/x86/psr.c
+++ b/xen/arch/x86/psr.c
@@ -1661,6 +1661,9 @@ static int cf_check cpu_callback(
     case CPU_DEAD:
         psr_cpu_fini(cpu);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 8aa621533f..5b9b196d58 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1134,6 +1134,9 @@ static int cf_check cpu_smpboot_callback(
     case CPU_REMOVE:
         cpu_smpboot_free(cpu, true);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return notifier_from_errno(rc);
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 84fe8c3597..96883cdc70 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -549,6 +549,7 @@ static int cf_check cpu_callback(
         kexec_init_cpu_notes(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
     return NOTIFY_DONE;
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 212a99acd8..0fe4097544 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -657,6 +657,7 @@ static int cf_check cpu_callback(
         rcu_offline_cpu(&this_cpu(rcu_data), &rcu_ctrlblk, rdp);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d84b65f197..dffa1ef476 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2907,6 +2907,7 @@ static int cf_check cpu_schedule_callback(
         cpu_schedule_down(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index ad8f608462..c7117f4243 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -1073,6 +1073,7 @@ static int cf_check cpu_callback(
         cpupool_cpu_remove_forced(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 28c6e9d3ac..bf082478db 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -55,6 +55,7 @@ static int cf_check cpu_lockdebug_callback(struct notifier_block *nfb,
         break;
 
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/tasklet.c b/xen/common/tasklet.c
index 4c8d87a338..879b1f0d80 100644
--- a/xen/common/tasklet.c
+++ b/xen/common/tasklet.c
@@ -232,6 +232,7 @@ static int cf_check cpu_callback(
         migrate_tasklets_from_cpu(cpu, &per_cpu(softirq_tasklet_list, cpu));
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/common/timer.c b/xen/common/timer.c
index a21798b76f..60e9a1493e 100644
--- a/xen/common/timer.c
+++ b/xen/common/timer.c
@@ -677,6 +677,7 @@ static int cf_check cpu_callback(
         break;
 
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 8659ad3aee..9584b55398 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -682,6 +682,7 @@ static int cf_check cpu_callback(
         (void)cpufreq_del_cpu(cpu);
         break;
     default:
+        /* Notifier pattern. */
         break;
     }
 
diff --git a/xen/drivers/cpufreq/cpufreq_misc_governors.c b/xen/drivers/cpufreq/cpufreq_misc_governors.c
index 0327fad23b..464b267a17 100644
--- a/xen/drivers/cpufreq/cpufreq_misc_governors.c
+++ b/xen/drivers/cpufreq/cpufreq_misc_governors.c
@@ -101,6 +101,9 @@ static int cf_check cpufreq_userspace_cpu_callback(
     case CPU_UP_PREPARE:
         per_cpu(cpu_set_freq, cpu) = userspace_cmdline_freq;
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return NOTIFY_DONE;
diff --git a/xen/drivers/passthrough/x86/hvm.c b/xen/drivers/passthrough/x86/hvm.c
index d3627e4af7..e5b6be4794 100644
--- a/xen/drivers/passthrough/x86/hvm.c
+++ b/xen/drivers/passthrough/x86/hvm.c
@@ -1122,6 +1122,9 @@ static int cf_check cpu_callback(
          */
         ASSERT(list_empty(&per_cpu(dpci_list, cpu)));
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return NOTIFY_DONE;
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cc0062b027..f0c84eeb85 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -749,6 +749,9 @@ static int cf_check cpu_callback(
         if ( !page_list_empty(list) )
             tasklet_schedule(tasklet);
         break;
+    default:
+        /* Notifier pattern. */
+        break;
     }
 
     return NOTIFY_DONE;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 09:49:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 09:49:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743592.1150508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrwS-0006WZ-5s; Wed, 19 Jun 2024 09:49:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743592.1150508; Wed, 19 Jun 2024 09:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJrwS-0006WS-28; Wed, 19 Jun 2024 09:49:24 +0000
Received: by outflank-mailman (input) for mailman id 743592;
 Wed, 19 Jun 2024 09:49:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJrwR-0006WM-0N
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 09:49:23 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 343c6228-2e21-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 11:49:21 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2ebe6495aedso60503111fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 02:49:21 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f85cb6bee6sm106183835ad.236.2024.06.19.02.49.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 02:49:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 343c6228-2e21-11ef-90a3-e314d9c70b13
X-Received: by 2002:a2e:9eda:0:b0:2ec:1682:b4dc with SMTP id 38308e7fff4ca-2ec3ceb70d4mr13702861fa.10.1718790561158;
        Wed, 19 Jun 2024 02:49:21 -0700 (PDT)
Message-ID: <c83474d9-8c72-412a-92eb-222088a0bf3a@suse.com>
Date: Wed, 19 Jun 2024 11:49:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
 <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
 <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b923a32e-3c22-4e7a-8844-b33322ef8ad1@suse.com>
 <BL1PR12MB5849861E424724C6E9DE3859E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ff66c7aa-585f-4d30-9f4f-e520226825bc@suse.com>
 <BL1PR12MB58498905D185C6A720276D1BE7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58498905D185C6A720276D1BE7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 10:51, Chen, Jiqian wrote:
> On 2024/6/19 16:06, Jan Beulich wrote:
>> On 19.06.2024 09:53, Chen, Jiqian wrote:
>>> On 2024/6/18 16:55, Jan Beulich wrote:
>>>> On 18.06.2024 08:57, Chen, Jiqian wrote:
>>>>> On 2024/6/17 22:52, Jan Beulich wrote:
>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>> The GSI of a passthrough device must be configured for it to be
>>>>>>> able to be mapped into an HVM domU.
>>>>>>> But when dom0 is PVH, the GSIs don't get registered, which causes
>>>>>>> the APIC, pin, and IRQ info not to be added to the irq_2_pin list,
>>>>>>> and the irq_desc handler not to be set; then, when passing through
>>>>>>> a device, setting the IO-APIC affinity and vector will fail.
>>>>>>>
>>>>>>> To fix the above problem, on the Linux kernel side, new code will
>>>>>>> need to call PHYSDEVOP_setup_gsi for passthrough devices to
>>>>>>> register the GSI when dom0 is PVH.
>>>>>>>
>>>>>>> So, add PHYSDEVOP_setup_gsi into hvm_physdev_op for the above
>>>>>>> purpose.
>>>>>>>
>>>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>>>> ---
>>>>>>> The link to the code that will call this hypercall on the Linux kernel side is:
>>>>>>> https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/
>>>>>>
>>>>>> One of my v9 comments was addressed, thanks. Repeating the other, unaddressed
>>>>>> one here:
>>>>>> "As to GSIs not being registered: If that's not a problem for Dom0's own
>>>>>>  operation, I think it'll also want/need explaining why what is sufficient for
>>>>>>  Dom0 alone isn't sufficient when pass-through comes into play."
>>>>> I have modified the commit message, according to this v9 comment, to describe why GSIs not being registered causes passthrough to fail:
>>>>> "it causes the APIC, pin, and IRQ info not to be added to the irq_2_pin list, and the irq_desc handler not to be set; then, when passing through a device, setting the IO-APIC affinity and vector will fail."
>>>>> What description do you want me to add?
>>>>
>>>> What I'd first like to have clarification on (i.e. before putting it in
>>>> the description one way or another): How come Dom0 alone gets away fine
>>>> without making the call, yet for passthrough-to-DomU it's needed? Is it
>>>> perhaps that it just so happened that for Dom0 things have been working
>>>> on systems where it was tested, but the call should in principle have been
>>>> there in this case, too [1]? That (to me at least) would make quite a
>>>> difference for both this patch's description and us accepting it.
>>> Oh, I think I know what your concern is now. Thanks.
>>> First question: why does a device's GSI work on PVH dom0?
>>> Because when a driver is probed for a normal device, the Linux kernel side calls pci_device_probe -> request_threaded_irq -> irq_startup -> __unmask_ioapic -> io_apic_write, which traps into the Xen side: hvmemul_do_io -> hvm_io_intercept -> hvm_process_io_intercept -> vioapic_write_indirect -> vioapic_hwdom_map_gsi -> mp_register_gsi. So the GSI gets registered.
>>> Second question: why doesn't the GSI of a passthrough device work on PVH dom0?
>>> Because when a device is assigned for passthrough, pciback is used to probe it, which calls pcistub_probe; but nowhere in pcistub_probe's call stack is the GSI unmasked. On the Xen side, vioapic_hwdom_map_gsi -> mp_register_gsi is called only when the GSI is unmasked, so the GSI never gets registered for a passthrough device.
>>
>> And why exactly would the fake IRQ handler not be set up by pciback?
>> Setting it up ought to lead to the same IO-APIC RTE writes that Xen
>> intercepts.
> Because isr_on is not set, when xen_pcibk_control_isr() is called it returns early due to "!dev_data->isr_on", so the fake IRQ handler isn't installed.

I'm afraid I don't follow you here. Quoting from the function:

	enable =  dev_data->enable_intx;

	/* Asked to disable, but ISR isn't running */
	if (!enable && !dev_data->isr_on)
		return;

I.e. we bail if the request was to _disable_ and there is no ISR.

I can't exclude though that command_write()'s logic to set ->enable_intx
is insufficient. But in the common case one would surely expect at least
one of PCI_COMMAND_MEMORY and PCI_COMMAND_IO to be set first by a guest.
IOW at some point I'd expect xen_pcibk_control_isr() to be called with
the second argument 0 and with ->enable_intx set.

> And it seems isr_on is set through the driver's sysfs "irq_handler_state" attribute, for a level-triggered device that is to be shared with a guest while the IRQ is shared with the initial domain.

The sysfs interface is, according to my reading of the description
of the commit introducing it, merely for debugging / recovery purposes.
(It also looks to me as if this was partly broken: If one would use it,
thus clearing ->isr_on, a subsequent disable request would take exactly
that early bailing path quoted above, with nothing removing the IRQ
handler.)

That description also talks about both an edge vs level distinction in
behavior and one for shared vs non-shared, but neither in that commit
nor in the present code can I spot any respective checks. Otherwise I could
understand that there are cases where the necessary information isn't
propagated to Xen.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 09:59:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 09:59:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743600.1150517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJs5p-0008MD-09; Wed, 19 Jun 2024 09:59:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743600.1150517; Wed, 19 Jun 2024 09:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJs5o-0008M6-TD; Wed, 19 Jun 2024 09:59:04 +0000
Received: by outflank-mailman (input) for mailman id 743600;
 Wed, 19 Jun 2024 09:59:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ri0R=NV=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sJs5n-0008M0-D5
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 09:59:03 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8dcb7ace-2e22-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 11:59:02 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id
 af79cd13be357-7965d034cedso408170685a.3
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 02:59:02 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b4ff61dbb4sm11091936d6.16.2024.06.19.02.58.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 02:58:59 -0700 (PDT)
X-Inumbo-ID: 8dcb7ace-2e22-11ef-90a3-e314d9c70b13
X-Received: by 2002:a0c:9d0b:0:b0:6b0:66d3:3b50 with SMTP id 6a1803df08f44-6b501ea53f4mr24694446d6.51.1718791139925;
        Wed, 19 Jun 2024 02:58:59 -0700 (PDT)
From: Roger Pau Monne <roger.pau@citrix.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v4] x86/irq: forward pending interrupts to new destination in fixup_irqs()
Date: Wed, 19 Jun 2024 11:58:33 +0200
Message-ID: <20240619095833.76271-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.45.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

fixup_irqs() is used to evacuate interrupts from CPUs that are to be offlined.
Given that the CPU is going offline, Xen's normal migration logic, where the
vector on the previous target(s) is left configured until the interrupt is
received on the new destination, is not suitable.

Instead, attempt to do as much as possible in order to prevent losing
interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
currently the case for CPU hot unplug) attempt to forward pending vectors when
interrupts that target the current CPU are migrated to a different destination.

Additionally, for interrupts that have already been moved from the current CPU
prior to the call to fixup_irqs() but that haven't been delivered to the new
destination (iow: interrupts with move_in_progress set and the current CPU set
in ->arch.old_cpu_mask) also check whether the previous vector is pending and
forward it to the new destination.

This allows us to remove the window with interrupts enabled at the bottom of
fixup_irqs().  Such a window wasn't safe anyway: references to the CPU going
offline are removed from interrupt masks, but the per-CPU vector_irq[] array
is not updated to reflect those changes (as the CPU is going offline anyway).

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Move the IRR check after the cpumask_copy().

Changes since v2:
 - Remove interrupt enabled window from fixup_irqs().
 - Adjust comments and commit message.

Changes since v1:
 - Rename to apic_irr_read().
---
 xen/arch/x86/include/asm/apic.h |  5 ++++
 xen/arch/x86/irq.c              | 46 ++++++++++++++++++++++++++++-----
 2 files changed, 45 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/include/asm/apic.h b/xen/arch/x86/include/asm/apic.h
index d1cb001fb4ab..7bd66dc6e151 100644
--- a/xen/arch/x86/include/asm/apic.h
+++ b/xen/arch/x86/include/asm/apic.h
@@ -132,6 +132,11 @@ static inline bool apic_isr_read(uint8_t vector)
             (vector & 0x1f)) & 1;
 }
 
+static inline bool apic_irr_read(unsigned int vector)
+{
+    return apic_read(APIC_IRR + (vector / 32 * 0x10)) & (1U << (vector % 32));
+}
+
 static inline u32 get_apic_id(void)
 {
     u32 id = apic_read(APIC_ID);
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index d7f15c38af22..9a611c79e024 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2591,7 +2591,7 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
     for ( irq = 0; irq < nr_irqs; irq++ )
     {
-        bool break_affinity = false, set_affinity = true;
+        bool break_affinity = false, set_affinity = true, check_irr = false;
         unsigned int vector, cpu = smp_processor_id();
         cpumask_t *affinity = this_cpu(scratch_cpumask);
 
@@ -2644,6 +2644,25 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
              !cpu_online(cpu) &&
              cpumask_test_cpu(cpu, desc->arch.old_cpu_mask) )
         {
+            /*
+             * This to be offlined CPU was the target of an interrupt that's
+             * been moved, and the new destination target hasn't yet
+             * acknowledged any interrupt from it.
+             *
+             * We know the interrupt is configured to target the new CPU at
+             * this point, so we can check IRR for any pending vectors and
+             * forward them to the new destination.
+             *
+             * Note that for the other case of an interrupt movement being in
+             * progress (move_cleanup_count being non-zero) we know the new
+             * destination has already acked at least one interrupt from this
+             * source, and hence there's no need to forward any stale
+             * interrupts.
+             */
+            if ( apic_irr_read(desc->arch.old_vector) )
+                send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                              desc->arch.vector);
+
             /*
              * This CPU is going offline, remove it from ->arch.old_cpu_mask
              * and possibly release the old vector if the old mask becomes
@@ -2684,6 +2703,14 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
         if ( desc->handler->disable )
             desc->handler->disable(desc);
 
+        /*
+         * If the current CPU is going offline and is (one of) the target(s) of
+         * the interrupt, signal to check whether there are any pending vectors
+         * to be handled in the local APIC after the interrupt has been moved.
+         */
+        if ( !cpu_online(cpu) && cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
+            check_irr = true;
+
         if ( desc->handler->set_affinity )
             desc->handler->set_affinity(desc, affinity);
         else if ( !(warned++) )
@@ -2694,6 +2721,18 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
 
         cpumask_copy(affinity, desc->affinity);
 
+        if ( check_irr && apic_irr_read(vector) )
+            /*
+             * Forward pending interrupt to the new destination, this CPU is
+             * going offline and otherwise the interrupt would be lost.
+             *
+             * Do the IRR check as late as possible before releasing the irq
+             * desc in order for any in-flight interrupts to be delivered to
+             * the lapic.
+             */
+            send_IPI_mask(cpumask_of(cpumask_any(desc->arch.cpu_mask)),
+                          desc->arch.vector);
+
         spin_unlock(&desc->lock);
 
         if ( !verbose )
@@ -2705,11 +2744,6 @@ void fixup_irqs(const cpumask_t *mask, bool verbose)
             printk("Broke affinity for IRQ%u, new: {%*pbl}\n",
                    irq, CPUMASK_PR(affinity));
     }
-
-    /* That doesn't seem sufficient.  Give it 1ms. */
-    local_irq_enable();
-    mdelay(1);
-    local_irq_disable();
 }
 
 void fixup_eoi(void)
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 10:10:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 10:10:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743612.1150528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJsH9-0002jJ-Uv; Wed, 19 Jun 2024 10:10:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743612.1150528; Wed, 19 Jun 2024 10:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJsH9-0002j7-S6; Wed, 19 Jun 2024 10:10:47 +0000
Received: by outflank-mailman (input) for mailman id 743612;
 Wed, 19 Jun 2024 10:10:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WeeD=NV=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sJsH8-0002iv-IX
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 10:10:46 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2060e.outbound.protection.outlook.com
 [2a01:111:f400:7e88::60e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3070454c-2e24-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 12:10:45 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by CY8PR12MB7193.namprd12.prod.outlook.com (2603:10b6:930:5b::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Wed, 19 Jun
 2024 10:10:41 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7677.030; Wed, 19 Jun 2024
 10:10:41 +0000
X-Inumbo-ID: 3070454c-2e24-11ef-90a3-e314d9c70b13
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Thread-Topic: [XEN PATCH v10 3/5] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH
 dom0
Thread-Index:
 AQHawJTljFn+TFlFw0Cz/HTewbBwKbHMCqyAgAGSNQD//5yWgIAB+7GA//+I/4CAAJBIgP//jEaAABEfFIA=
Date: Wed, 19 Jun 2024 10:10:41 +0000
Message-ID:
 <BL1PR12MB58497CC7608B40987BB63C77E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-4-Jiqian.Chen@amd.com>
 <ed36b376-a5f0-457b-8a1e-61104c26ffce@suse.com>
 <BL1PR12MB5849FE3A4897DF166159B906E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b923a32e-3c22-4e7a-8844-b33322ef8ad1@suse.com>
 <BL1PR12MB5849861E424724C6E9DE3859E7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ff66c7aa-585f-4d30-9f4f-e520226825bc@suse.com>
 <BL1PR12MB58498905D185C6A720276D1BE7CF2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <c83474d9-8c72-412a-92eb-222088a0bf3a@suse.com>
In-Reply-To: <c83474d9-8c72-412a-92eb-222088a0bf3a@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7677.026)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|CY8PR12MB7193:EE_
x-ms-office365-filtering-correlation-id: f8783d6f-26d7-40d1-2b35-08dc90481311
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1EE31C53AE922149B7E4BFD61410E4E9@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f8783d6f-26d7-40d1-2b35-08dc90481311
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 Jun 2024 10:10:41.1174
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: c+oiYMScLW9Ld/aHg+y5t8Jz7uV+9YF6CyZ83HPUe1XmCFe0mIYTNuJ5Ze5Hq0TELVSSi1cjWGfpFUvRteuEYQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY8PR12MB7193

On 2024/6/19 17:49, Jan Beulich wrote:
> On 19.06.2024 10:51, Chen, Jiqian wrote:
>> On 2024/6/19 16:06, Jan Beulich wrote:
>>> On 19.06.2024 09:53, Chen, Jiqian wrote:
>>>> On 2024/6/18 16:55, Jan Beulich wrote:
>>>>> On 18.06.2024 08:57, Chen, Jiqian wrote:
>>>>>> On 2024/6/17 22:52, Jan Beulich wrote:
>>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>>> The gsi of a passthrough device must be configured for it to be
>>>>>>>> able to be mapped into a hvm domU.
>>>>>>>> But When dom0 is PVH, the gsis don't get registered, it causes
>>>>>>>> the info of apic, pin and irq not be added into irq_2_pin list,
>>>>>>>> and the handler of irq_desc is not set, then when passthrough a
>>>>>>>> device, setting ioapic affinity and vector will fail.
>>>>>>>>
>>>>>>>> To fix above problem, on Linux kernel side, a new code will
>>>>>>>> need to call PHYSDEVOP_setup_gsi for passthrough devices to
>>>>>>>> register gsi when dom0 is PVH.
>>>>>>>>
>>>>>>>> So, add PHYSDEVOP_setup_gsi into hvm_physdev_op for above
>>>>>>>> purpose.
>>>>>>>>
>>>>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>>>>> Signed-off-by: Huang Rui <ray.huang@amd.com>
>>>>>>>> Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
>>>>>>>> ---
>>>>>>>> The code link that will call this hypercall on linux kernel side is as follows:
>>>>>>>> https://lore.kernel.org/xen-devel/20240607075109.126277-3-Jiqian.Chen@amd.com/
>>>>>>>
>>>>>>> One of my v9 comments was addressed, thanks. Repeating the other, unaddressed
>>>>>>> one here:
>>>>>>> "As to GSIs not being registered: If that's not a problem for Dom0's own
>>>>>>>  operation, I think it'll also want/need explaining why what is sufficient for
>>>>>>>  Dom0 alone isn't sufficient when pass-through comes into play."
>>>>>> I have modified the commit message to describe why GSIs are not registered can cause passthrough not work, according to this v9 comment.
>>>>>> " it causes the info of apic, pin and irq not be added into irq_2_pin list, and the handler of irq_desc is not set, then when passthrough a device, setting ioapic affinity and vector will fail."
>>>>>> What description do you want me to add?
>>>>>
>>>>> What I'd first like to have clarification on (i.e. before putting it in
>>>>> the description one way or another): How come Dom0 alone gets away fine
>>>>> without making the call, yet for passthrough-to-DomU it's needed? Is it
>>>>> perhaps that it just so happened that for Dom0 things have been working
>>>>> on systems where it was tested, but the call should in principle have been
>>>>> there in this case, too [1]? That (to me at least) would make quite a
>>>>> difference for both this patch's description and us accepting it.
>>>> Oh, I think I know what's your concern now. Thanks.
>>>> First question, why gsi of device can work on PVH dom0:
>>>> Because when probe a driver to a normal device, it will call linux kernel side:pci_device_probe-> request_threaded_irq-> irq_startup-> __unmask_ioapic-> io_apic_write, then trap into xen side hvmemul_do_io-> hvm_io_intercept-> hvm_process_io_intercept-> vioapic_write_indirect-> vioapic_hwdom_map_gsi-> mp_register_gsi. So that the gsi can be registered.
>>>> Second question, why gsi of passthrough can't work on PVH dom0:
>>>> Because when assign a device to be passthrough, it uses pciback to probe the device, and it calls pcistub_probe, but in all callstack of pcistub_probe, it doesn't unmask the gsi, and we can see on Xen side, the function vioapic_hwdom_map_gsi-> mp_register_gsi will be called only when the gsi is unmasked, so that the gsi can't work for passthrough device.
>>>
>>> And why exactly would the fake IRQ handler not be set up by pciback? Its
>>> setting up ought to lead to those same IO-APIC RTE writes that Xen
>>> intercepts.
>> Because isr_on is not set, when xen_pcibk_control_isr is called, it will return due to " !dev_data->isr_on". So that fake IRQ handler aren't installed.
> 
> I'm afraid I don't follow you here. Quoting from the function:
> 
> 	enable =  dev_data->enable_intx;
> 
> 	/* Asked to disable, but ISR isn't runnig */
> 	if (!enable && !dev_data->isr_on)
> 		return;
> 
> I.e. we bail if the request was to _disable_ and there is no ISR.
I mean, after debugging the pcistub_probe callstack:
pcistub_seize-> pcistub_init_device-> xen_pcibk_reset_device-> xen_pcibk_control_isr(dev, 1 /*reset device*/)
and in xen_pcibk_control_isr code:
	if (reset) {
		dev_data->enable_intx = 0;
		dev_data->ack_intr = 0;
	}
	enable =  dev_data->enable_intx;

	/* Asked to disable, but ISR isn't runnig */
	if (!enable && !dev_data->isr_on)
		return;
"reset" is 1, so "dev_data->enable_intx" is set to 0, then "enable" is 0, and then we arrive at the second "if" check: "enable" is 0 and "dev_data->isr_on" is also 0, so it returns here.
It can't reach the following code to install the irq handler.

> 
> I can't exclude though that command_write()'s logic to set ->enable_intx
> is insufficient. But in the common case one would surely expect at least
> one of PCI_COMMAND_MEMORY and PCI_COMMAND_IO to be set first by a guest.
> IOW at some point I'd expect xen_pcibk_control_isr() to be called with
> the second argument 0 and with ->enable_intx set.
I only see xen_pcibk_control_isr(dev, 0) is called in xen_pcibk_do_one_op, but xen_pcibk_do_one_op is not called during assigning a device for passthrough.

> 
>> And it seems isr_on is set through driver sysfs " irq_handler_state" for a level device that is to be shared with guest and the IRQ is shared with the initial domain.
> 
> The sysfs interface is, according to my reading of the description
> of the commit introducing it, merely for debugging / recovery purposes.
> (It also looks to me as if this was partly broken: If one would use it,
> thus clearing ->isr_on, a subsequent disable request would take exactly
> that early bailing path quoted above, with nothing removing the IRQ
> handler.)
> 
> That description also talks about both an edge vs level distinction in
> behavior and one for shared vs non-shared, but neither in that commit
> nor in present code I can spot any respective checks. Otherwise I could
> understand that there are cases where the necessary information isn't
> propagated to Xen.
> 
> Jan

-- 
Best regards,
Jiqian Chen.
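[Editorial note] The xen_pcibk_control_isr() behaviour discussed in this thread can be modelled in a few lines of C. This is a simplified sketch of the flow described, not the actual kernel function; the struct and the helper name are illustrative only.

```c
#include <assert.h>
#include <stdbool.h>

/* Simplified model of the two flags involved (not the kernel struct). */
struct dev_data {
    bool isr_on;      /* fake IRQ handler currently installed */
    bool enable_intx; /* guest asked for INTx delivery */
};

/*
 * Sketch of the xen_pcibk_control_isr() flow: on the reset path
 * enable_intx is forced to 0, so with isr_on also 0 the function
 * bails out before the point where the IRQ handler would be
 * installed.  Returns whether a handler is installed afterwards.
 */
static bool control_isr_model(struct dev_data *dev_data, bool reset)
{
    if (reset)
        dev_data->enable_intx = false;

    bool enable = dev_data->enable_intx;

    /* Asked to disable, but ISR isn't running: nothing to do. */
    if (!enable && !dev_data->isr_on)
        return dev_data->isr_on;

    dev_data->isr_on = enable; /* (un)install the handler */
    return dev_data->isr_on;
}
```

On the pcistub_probe path (reset == true) this always takes the early return, matching the observation above that the fake handler is never installed during device assignment.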


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 10:40:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 10:40:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743622.1150538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJsjx-00073J-7W; Wed, 19 Jun 2024 10:40:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743622.1150538; Wed, 19 Jun 2024 10:40:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJsjx-00073C-4Q; Wed, 19 Jun 2024 10:40:33 +0000
Received: by outflank-mailman (input) for mailman id 743622;
 Wed, 19 Jun 2024 10:40:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jAuG=NV=cloud.com=frediano.ziglio@srs-se1.protection.inumbo.net>)
 id 1sJsjv-000736-US
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 10:40:32 +0000
Received: from mail-qk1-x72d.google.com (mail-qk1-x72d.google.com
 [2607:f8b0:4864:20::72d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59371bf3-2e28-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 12:40:31 +0200 (CEST)
Received: by mail-qk1-x72d.google.com with SMTP id
 af79cd13be357-795ca45c54cso346103085a.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 03:40:30 -0700 (PDT)
Received: from fziglio-xenia-fedora.eng.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abc06e93sm595417985a.82.2024.06.19.03.40.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 03:40:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59371bf3-2e28-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718793629; x=1719398429; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=p95lb1ctHFp3T74sDBuumFZbHTUwbXXyA3hdbJYiDoU=;
        b=QYHNfRWa77VHvGYm48mqjXTC+/3MyjD2TJ4DBvXkCeAqZQuXyF047/hr7UxCV0Lxdt
         mnu6jqzFETvHdfpTcy0xBgIP70Ehr48V1reJLNtHI1Cvkp92WAiWAoNsdEoD/UsX4i/G
         gd4hz1jS7KAfMvDELctwAB2bY5yF8SdfWug0U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718793629; x=1719398429;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=p95lb1ctHFp3T74sDBuumFZbHTUwbXXyA3hdbJYiDoU=;
        b=vOCqn1g1OPaTvxiDkyq8vEMKulmvBlbvgX0O+K8ROmJwsfRB1m5yNXVG9my0gvV189
         6c+oxhi8BppG8UEEP0Y58TQdKfiBB/D4vyh5YFSPtAuwxGsmWwzV58RCPmjEUFL65fKy
         u95xLaNF6QmameHOROxnngj3SQCUDjFysOejpJhPFP+qCfzSYLDo7e4okkzdSBk/9RUe
         KXEZag+RXBOYxyJwo9Qi4zcIbiOMHzxxnD0+7s1ncxMQVFxvp4k9cO+MWjM2ukgg3jnC
         vfFHA/p2l1v52Fz0DclleRSKO6dZbt8OE+aCG7UBQbEC+kwSeGNW+5+ELZDlEXhhY+7g
         Qwhw==
X-Gm-Message-State: AOJu0YxDlbV1ylKCXVMN6x0fmS4xrFbZXmP0I5qFMeDjmbOe5NezY0UZ
	jhrr8kjdeOXgVVh+kML2EWJRItapzxCX+V96b6o/C9HssmwQPDPOOY8eQk8Yvs4gFOIgOCYgaJB
	RYFhEeA==
X-Google-Smtp-Source: AGHT+IHxMvPNCyIKkLf/mcijHPuzQSLiYwyl/idVDNGL3pZUbs0K2RBLMulv8SBRXWBQPgj1vIs+sQ==
X-Received: by 2002:a05:620a:4510:b0:795:5120:97ac with SMTP id af79cd13be357-79bb3ebd69fmr203624885a.53.1718793629451;
        Wed, 19 Jun 2024 03:40:29 -0700 (PDT)
From: Frediano Ziglio <frediano.ziglio@cloud.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>,
	x86@kernel.org,
	Dave Hansen <dave.hansen@linux.intel.com>,
	Borislav Petkov <bp@alien8.de>,
	Ingo Molnar <mingo@redhat.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Frediano Ziglio <frediano.ziglio@cloud.com>
Subject: [PATCH v2] x86/xen/time: Reduce Xen timer tick
Date: Wed, 19 Jun 2024 11:40:15 +0100
Message-ID: <20240619104015.30477-1-frediano.ziglio@cloud.com>
X-Mailer: git-send-email 2.45.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current timer tick is causing some deadlines to be missed.
The high value of this constant was probably due to an old
bug in the Xen timer implementation, which caused errors if the
deadline was in the future.

This was fixed in Xen commit:
19c6cbd90965 xen/vcpu: ignore VCPU_SSHOTTMR_future

Usage of VCPU_SSHOTTMR_future in Linux kernel was removed by:
c06b6d70feb3 xen/x86: don't lose event interrupts

Signed-off-by: Frediano Ziglio <frediano.ziglio@cloud.com>
---
 arch/x86/xen/time.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Changes since v1:
- Update commit message;
- reduce delay.

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 52fa5609b7f6..96521b1874ac 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -30,7 +30,7 @@
 #include "xen-ops.h"
 
 /* Minimum amount of time until next clock event fires */
-#define TIMER_SLOP	100000
+#define TIMER_SLOP	1
 
 static u64 xen_sched_clock_offset __read_mostly;
 
-- 
2.45.1
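[Editorial note] For context, TIMER_SLOP acts as a minimum delta for the clock event device: any requested deadline closer than the slop is pushed out to at least that far in the future. A sketch of that clamping follows; it is illustrative only, not the kernel's code, and clamp_delta_ns is a hypothetical helper.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical helper illustrating a clockevents-style minimum-delta
 * clamp.  With a slop of 100000 ns every programmed deadline lands at
 * least 100us in the future; with a slop of 1 ns short deadlines pass
 * through essentially unchanged.
 */
static uint64_t clamp_delta_ns(uint64_t requested_ns, uint64_t slop_ns)
{
    return requested_ns < slop_ns ? slop_ns : requested_ns;
}
```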



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 10:47:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 10:47:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743628.1150547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJsqC-0007eF-Qk; Wed, 19 Jun 2024 10:47:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743628.1150547; Wed, 19 Jun 2024 10:47:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJsqC-0007e8-Nd; Wed, 19 Jun 2024 10:47:00 +0000
Received: by outflank-mailman (input) for mailman id 743628;
 Wed, 19 Jun 2024 10:46:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RXUT=NV=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJsqB-0007e2-BA
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 10:46:59 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 405f893b-2e29-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 12:46:58 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-57cc30eaf0aso2991805a12.2
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 03:46:58 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cddba1b0esm4047201a12.84.2024.06.19.03.46.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 03:46:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 405f893b-2e29-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718794017; x=1719398817; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=thYM6ixVZqhiOCoRaVF9HqB0k4fwsvIvSlZrhWdNBS8=;
        b=itp+oyxSqOSw3kDXoOw5CMFL+ej4EL5av3Ido5RfELPDMKDlMVgtF2nvQgYIOmXDZU
         GWagWMHo31ZWZ8gcp30ZI98+OPVloVTmW1LjtNs809Mh/vVF7Rwjv4B940/C1bWMwpTj
         hqDVYj/jrqdWwgcdYALiOjb5QyhWbaCE3bxks=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718794017; x=1719398817;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=thYM6ixVZqhiOCoRaVF9HqB0k4fwsvIvSlZrhWdNBS8=;
        b=pWniORgw1RnocG49T9mZEfFGZB5y+GTsSwkOq3aeXwa6e942+ZqGap23r20m0lXnup
         4607PE+1+cs1mFd6oSWodw2aeRTmISEjJ0J+2VT2XNvso4aSewkpzj/jzr+M3aOsbCVU
         NCb2qpi2M6gyVVT3LIm2hmv69UAEAcdNcvKhhgLWctM5B4LiroZNYiHYPMEile+7Fq8G
         UTgXxXSr9hlSGFtdU5VafevekyVDJLD7HCxSkCijYTzOlpvgS0b3Qjlx2IxsJifKo2Wq
         p37EX7qhyOF+1kU5zHO583nfkBoskhxrrBpCfxI7Q90ZF5TxwsG9ORDNJ1SxiZh0FgBz
         m3nQ==
X-Gm-Message-State: AOJu0YyTLN7xbI4TSnn6e9+Km2QDXeDysBLhaY7JbI6h14E3RsElUboO
	Aa7WVWIz/rIq87qSd929kkyVY1o6IwCK34MvADrP7+mO2qCJC0mKyk7niIB7x5VmJEbxcZzI0F8
	YPyo=
X-Google-Smtp-Source: AGHT+IHtYDfwEcsnkecUMl/9SIviRGhIImvGhsFAXx2SW0nUslFn+YZQXNeAwxvYdR0VMPR9vlOilw==
X-Received: by 2002:a50:96cf:0:b0:57c:5874:4f5c with SMTP id 4fb4d7f45d1cf-57d07ea857fmr1536615a12.32.1718794017280;
        Wed, 19 Jun 2024 03:46:57 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v4.5 2/7] x86/xstate: Cross-check dynamic XSTATE sizes at boot
Date: Wed, 19 Jun 2024 11:46:55 +0100
Message-Id: <20240619104655.2401441-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240617173921.1755439-3-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-3-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
every call.  This is expensive, being used for domain create/migrate, as well
as to service certain guest CPUID instructions.

Instead, arrange to check the sizes once at boot.  See the code comments for
details.  Right now, it just checks hardware against the algorithm
expectations.  Later patches will cross-check Xen's XSTATE calculations too.

Introduce more X86_XCR0_* and X86_XSS_* constants and CPUID bits.  This is to
maximise coverage in the sanity check, even if we don't expect to
use/virtualise some of these features any time soon.  Leave HDC and HWP alone
for now; we don't have CPUID bits for them stored nicely.

Only perform the cross-checks when SELF_TESTS are active.  It's only
developers or new hardware that are liable to trip these checks, and Xen at
least tracks the "maximum value ever seen in xcr0" for the lifetime of the VM,
which we don't want to be tickling in the general case.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v3:
 * New
v4:
 * Rebase over CONFIG_SELF_TESTS
 * Swap one BUG_ON() for a WARN()

v4.5:
 * Reorder xstate_check_sizes() to strictly increase by index.  In turn this
   strengthens the compressed check to "size always increases".
 * For new supervisor states, check that the uncompressed size doesn't change.

On Sapphire Rapids with the whole series inc diagnostics, we get this pattern:

  (XEN) *** check_new_xstate(, 0x00000003)
  (XEN) *** check_new_xstate(, 0x00000004)
  (XEN) *** check_new_xstate(, 0x000000e0)
  (XEN) *** check_new_xstate(, 0x00000100)
  (XEN) *** check_new_xstate(, 0x00000200)
  (XEN) *** check_new_xstate(, 0x00000400)
  (XEN) *** check_new_xstate(, 0x00000800)
  (XEN) *** check_new_xstate(, 0x00001000)
  (XEN) *** check_new_xstate(, 0x00004000)
  (XEN) *** check_new_xstate(, 0x00008000)
  (XEN) *** check_new_xstate(, 0x00060000)

and on Genoa, this pattern:

  (Xen) *** check_new_xstate(, 0x00000003)
  (Xen) *** check_new_xstate(, 0x00000004)
  (Xen) *** check_new_xstate(, 0x000000e0)
  (Xen) *** check_new_xstate(, 0x00000200)
  (Xen) *** check_new_xstate(, 0x00000800)
  (Xen) *** check_new_xstate(, 0x00001000)
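[Editorial note] The invariant the patch enforces can be summarised in a small standalone model; this is a sketch under the rules stated in the commit message, not Xen code, and sizes_ok is a hypothetical helper. As components are enabled in index order, the compressed size must strictly increase, the uncompressed size must stay unchanged for supervisor states, and it must never shrink for user states.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* CPUID-reported sizes before/after enabling one more component. */
struct xsizes {
    uint32_t uncomp; /* CPUID.0xD[0].ebx */
    uint32_t comp;   /* CPUID.0xD[1].ebx */
};

/* Hypothetical helper modelling the checks check_new_xstate() performs. */
static bool sizes_ok(struct xsizes prev, struct xsizes cur, bool supervisor)
{
    /* Compressed images accumulate in index order: size strictly grows. */
    if (cur.comp <= prev.comp)
        return false;

    if (supervisor)
        /* Supervisor states don't exist in an uncompressed image. */
        return cur.uncomp == prev.uncomp;

    /* User states may fill prior holes, so only require no shrink. */
    return cur.uncomp >= prev.uncomp;
}
```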
---
 xen/arch/x86/include/asm/x86-defns.h        |  25 ++-
 xen/arch/x86/xstate.c                       | 170 ++++++++++++++++++++
 xen/include/public/arch-x86/cpufeatureset.h |   3 +
 3 files changed, 197 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/x86-defns.h b/xen/arch/x86/include/asm/x86-defns.h
index 48d7a3b7af45..d7602ab225c4 100644
--- a/xen/arch/x86/include/asm/x86-defns.h
+++ b/xen/arch/x86/include/asm/x86-defns.h
@@ -77,7 +77,7 @@
 #define X86_CR4_PKS        0x01000000 /* Protection Key Supervisor */
 
 /*
- * XSTATE component flags in XCR0
+ * XSTATE component flags in XCR0 | MSR_XSS
  */
 #define X86_XCR0_FP_POS           0
 #define X86_XCR0_FP               (1ULL << X86_XCR0_FP_POS)
@@ -95,11 +95,34 @@
 #define X86_XCR0_ZMM              (1ULL << X86_XCR0_ZMM_POS)
 #define X86_XCR0_HI_ZMM_POS       7
 #define X86_XCR0_HI_ZMM           (1ULL << X86_XCR0_HI_ZMM_POS)
+#define X86_XSS_PROC_TRACE        (_AC(1, ULL) <<  8)
 #define X86_XCR0_PKRU_POS         9
 #define X86_XCR0_PKRU             (1ULL << X86_XCR0_PKRU_POS)
+#define X86_XSS_PASID             (_AC(1, ULL) << 10)
+#define X86_XSS_CET_U             (_AC(1, ULL) << 11)
+#define X86_XSS_CET_S             (_AC(1, ULL) << 12)
+#define X86_XSS_HDC               (_AC(1, ULL) << 13)
+#define X86_XSS_UINTR             (_AC(1, ULL) << 14)
+#define X86_XSS_LBR               (_AC(1, ULL) << 15)
+#define X86_XSS_HWP               (_AC(1, ULL) << 16)
+#define X86_XCR0_TILE_CFG         (_AC(1, ULL) << 17)
+#define X86_XCR0_TILE_DATA        (_AC(1, ULL) << 18)
 #define X86_XCR0_LWP_POS          62
 #define X86_XCR0_LWP              (1ULL << X86_XCR0_LWP_POS)
 
+#define X86_XCR0_STATES                                                 \
+    (X86_XCR0_FP | X86_XCR0_SSE | X86_XCR0_YMM | X86_XCR0_BNDREGS |     \
+     X86_XCR0_BNDCSR | X86_XCR0_OPMASK | X86_XCR0_ZMM |                 \
+     X86_XCR0_HI_ZMM | X86_XCR0_PKRU | X86_XCR0_TILE_CFG |              \
+     X86_XCR0_TILE_DATA |                                               \
+     X86_XCR0_LWP)
+
+#define X86_XSS_STATES                                                  \
+    (X86_XSS_PROC_TRACE | X86_XSS_PASID | X86_XSS_CET_U |               \
+     X86_XSS_CET_S | X86_XSS_HDC | X86_XSS_UINTR | X86_XSS_LBR |        \
+     X86_XSS_HWP |                                                      \
+     0)
+
 /*
  * Debug status flags in DR6.
  *
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 75788147966a..408d9dd10897 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -604,9 +604,176 @@ static bool valid_xcr0(uint64_t xcr0)
     if ( !(xcr0 & X86_XCR0_BNDREGS) != !(xcr0 & X86_XCR0_BNDCSR) )
         return false;
 
+    /* TILECFG and TILEDATA must be the same. */
+    if ( !(xcr0 & X86_XCR0_TILE_CFG) != !(xcr0 & X86_XCR0_TILE_DATA) )
+        return false;
+
     return true;
 }
 
+struct xcheck_state {
+    uint64_t states;
+    uint32_t uncomp_size;
+    uint32_t comp_size;
+};
+
+static void __init check_new_xstate(struct xcheck_state *s, uint64_t new)
+{
+    uint32_t hw_size;
+
+    BUILD_BUG_ON(X86_XCR0_STATES & X86_XSS_STATES);
+
+    BUG_ON(new <= s->states); /* States strictly increase by index. */
+    BUG_ON(s->states & new);  /* States only accumulate. */
+    BUG_ON(!valid_xcr0(s->states | new)); /* Xen thinks it's a good value. */
+    BUG_ON(new & ~(X86_XCR0_STATES | X86_XSS_STATES)); /* Known state. */
+    BUG_ON((new & X86_XCR0_STATES) &&
+           (new & X86_XSS_STATES)); /* User or supervisor, not both. */
+
+    s->states |= new;
+    if ( new & X86_XCR0_STATES )
+    {
+        if ( !set_xcr0(s->states & X86_XCR0_STATES) )
+            BUG();
+    }
+    else
+        set_msr_xss(s->states & X86_XSS_STATES);
+
+    /*
+     * Check the uncompressed size.  First ask hardware.
+     */
+    hw_size = cpuid_count_ebx(0xd, 0);
+
+    if ( new & X86_XSS_STATES )
+    {
+        /*
+         * Supervisor states don't exist in an uncompressed image, so check
+         * that the uncompressed size doesn't change.  Otherwise...
+         */
+        if ( hw_size != s->uncomp_size )
+            panic("XSTATE 0x%016"PRIx64", new sup bits {%63pbl}, uncompressed hw size %#x != prev size %#x\n",
+                  s->states, &new, hw_size, s->uncomp_size);
+    }
+    else
+    {
+        /*
+         * ... some user XSTATEs are out-of-order and fill in prior holes.
+         * The best check we make is that the size never decreases.
+         */
+        if ( hw_size < s->uncomp_size )
+            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, uncompressed hw size %#x < prev size %#x\n",
+                  s->states, &new, hw_size, s->uncomp_size);
+    }
+
+    s->uncomp_size = hw_size;
+
+    /*
+     * Check the compressed size, if available.
+     */
+    hw_size = cpuid_count_ebx(0xd, 1);
+
+    if ( cpu_has_xsavec )
+    {
+        /*
+         * All components strictly appear in index order, irrespective of
+         * whether they're user or supervisor.  As each component also has
+         * non-zero size, the accumulated size should strictly increase.
+         */
+        if ( hw_size <= s->comp_size )
+            panic("XSTATE 0x%016"PRIx64", new bits {%63pbl}, compressed hw size %#x <= prev size %#x\n",
+                  s->states, &new, hw_size, s->comp_size);
+
+        s->comp_size = hw_size;
+    }
+    else if ( hw_size ) /* Compressed size reported, but no XSAVEC ? */
+    {
+        static bool once;
+
+        if ( !once )
+        {
+            WARN();
+            once = true;
+        }
+    }
+}
+
+/*
+ * The {un,}compressed XSTATE sizes are reported by dynamic CPUID value, based
+ * on the current %XCR0 and MSR_XSS values.  The exact layout is also feature
+ * and vendor specific.  Cross-check Xen's understanding against real hardware
+ * on boot.
+ *
+ * Testing every combination is prohibitive, so we use a partial approach.
+ * Starting with nothing active, we add new XSTATEs and check that the CPUID
+ * dynamic values never decreases.
+ */
+static void __init noinline xstate_check_sizes(void)
+{
+    uint64_t old_xcr0 = get_xcr0();
+    uint64_t old_xss = get_msr_xss();
+    struct xcheck_state s = {};
+
+    /*
+     * User and supervisor XSTATEs, increasing by index.
+     *
+     * Chronologically, Intel and AMD had identical layouts for AVX (YMM).
+     * AMD introduced LWP in Fam15h, following immediately on from YMM.  Intel
+     * left an LWP-shaped hole when adding MPX (BND{CSR,REGS}) in Skylake.
+     * AMD removed LWP in Fam17h, putting PKRU in the same space, breaking
+     * layout compatibility with Intel and having a knock-on effect on all
+     * subsequent states.
+     */
+    check_new_xstate(&s, X86_XCR0_SSE | X86_XCR0_FP);
+
+    if ( cpu_has_avx )
+        check_new_xstate(&s, X86_XCR0_YMM);
+
+    if ( cpu_has_mpx )
+        check_new_xstate(&s, X86_XCR0_BNDCSR | X86_XCR0_BNDREGS);
+
+    if ( cpu_has_avx512f )
+        check_new_xstate(&s, X86_XCR0_HI_ZMM | X86_XCR0_ZMM | X86_XCR0_OPMASK);
+
+    /*
+     * Intel Broadwell has Processor Trace but no XSAVES.  There doesn't
+     * appear to have been a new enumeration when X86_XSS_PROC_TRACE was
+     * introduced in Skylake.
+     */
+    if ( cpu_has_xsaves && cpu_has_proc_trace )
+        check_new_xstate(&s, X86_XSS_PROC_TRACE);
+
+    if ( cpu_has_pku )
+        check_new_xstate(&s, X86_XCR0_PKRU);
+
+    if ( cpu_has_xsaves && boot_cpu_has(X86_FEATURE_ENQCMD) )
+        check_new_xstate(&s, X86_XSS_PASID);
+
+    if ( cpu_has_xsaves && (boot_cpu_has(X86_FEATURE_CET_SS) ||
+                            boot_cpu_has(X86_FEATURE_CET_IBT)) )
+    {
+        check_new_xstate(&s, X86_XSS_CET_U);
+        check_new_xstate(&s, X86_XSS_CET_S);
+    }
+
+    if ( cpu_has_xsaves && boot_cpu_has(X86_FEATURE_UINTR) )
+        check_new_xstate(&s, X86_XSS_UINTR);
+
+    if ( cpu_has_xsaves && boot_cpu_has(X86_FEATURE_ARCH_LBR) )
+        check_new_xstate(&s, X86_XSS_LBR);
+
+    if ( boot_cpu_has(X86_FEATURE_AMX_TILE) )
+        check_new_xstate(&s, X86_XCR0_TILE_DATA | X86_XCR0_TILE_CFG);
+
+    if ( boot_cpu_has(X86_FEATURE_LWP) )
+        check_new_xstate(&s, X86_XCR0_LWP);
+
+    /* Restore old state now the test is done. */
+    if ( !set_xcr0(old_xcr0) )
+        BUG();
+    if ( cpu_has_xsaves )
+        set_msr_xss(old_xss);
+}
+
 /* Collect the information of processor's extended state */
 void xstate_init(struct cpuinfo_x86 *c)
 {
@@ -683,6 +850,9 @@ void xstate_init(struct cpuinfo_x86 *c)
 
     if ( setup_xstate_features(bsp) && bsp )
         BUG();
+
+    if ( IS_ENABLED(CONFIG_SELF_TESTS) && bsp )
+        xstate_check_sizes();
 }
 
 int validate_xstate(const struct domain *d, uint64_t xcr0, uint64_t xcr0_accum,
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index 6627453e3985..d9eba5e9a714 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -266,6 +266,7 @@ XEN_CPUFEATURE(IBPB_RET,      8*32+30) /*A  IBPB clears RSB/RAS too. */
 XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /*A  AVX512 Neural Network Instructions */
 XEN_CPUFEATURE(AVX512_4FMAPS, 9*32+ 3) /*A  AVX512 Multiply Accumulation Single Precision */
 XEN_CPUFEATURE(FSRM,          9*32+ 4) /*A  Fast Short REP MOVS */
+XEN_CPUFEATURE(UINTR,         9*32+ 5) /*   User-mode Interrupts */
 XEN_CPUFEATURE(AVX512_VP2INTERSECT, 9*32+8) /*a  VP2INTERSECT{D,Q} insns */
 XEN_CPUFEATURE(SRBDS_CTRL,    9*32+ 9) /*   MSR_MCU_OPT_CTRL and RNGDS_MITG_DIS. */
 XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /*!A| VERW clears microarchitectural buffers */
@@ -274,8 +275,10 @@ XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */
 XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*A  SERIALIZE insn */
 XEN_CPUFEATURE(HYBRID,        9*32+15) /*   Heterogeneous platform */
 XEN_CPUFEATURE(TSXLDTRK,      9*32+16) /*a  TSX load tracking suspend/resume insns */
+XEN_CPUFEATURE(ARCH_LBR,      9*32+19) /*   Architectural Last Branch Record */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
 XEN_CPUFEATURE(AVX512_FP16,   9*32+23) /*A  AVX512 FP16 instructions */
+XEN_CPUFEATURE(AMX_TILE,      9*32+24) /*   AMX Tile architecture */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */
 XEN_CPUFEATURE(L1D_FLUSH,     9*32+28) /*S  MSR_FLUSH_CMD and L1D flush. */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:17:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:17:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743637.1150557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtJZ-0003lD-1k; Wed, 19 Jun 2024 11:17:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743637.1150557; Wed, 19 Jun 2024 11:17:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtJY-0003l6-VP; Wed, 19 Jun 2024 11:17:20 +0000
Received: by outflank-mailman (input) for mailman id 743637;
 Wed, 19 Jun 2024 11:17:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sJtJX-0003l0-5q
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:17:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtJW-0001Wm-54; Wed, 19 Jun 2024 11:17:18 +0000
Received: from [15.248.3.90] (helo=[10.24.67.26])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtJV-0000aC-SE; Wed, 19 Jun 2024 11:17:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=DePuSFDo5PFV7BU9iez3rbp5lUMHS9SCGFYrsUgwyEo=; b=3ZSRr1YXubQKyQPeRQCc9RwFJq
	VeFCR82Fslo4kP4lmQpwkE7dOZyRM6+b7M50EXWqFvJPgbuoWLka0xSlGuVDjpPvmJ0rFQqletCvG
	aq2fKSjr2ZeF3P0h69pT2Fh3ELseRN8XJI8Wy9CgujBwG0YeIkaetFOFIrnX9jfCHQUE=;
Message-ID: <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
Date: Wed, 19 Jun 2024 12:17:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
Content-Language: en-GB
To: Federico Serafini <federico.serafini@bugseng.com>,
 xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Federico,

On 19/06/2024 10:29, Federico Serafini wrote:
> MISRA C Rule 16.4 states that every `switch' statement shall have a
> `default' label and a statement or a comment prior to the
> terminating break statement.
> 
> This patch addresses some violations of the rule related to the
> "notifier pattern": a frequently-used pattern whereby only a few values
> are handled by the switch statement and nothing should be done for
> others (nothing to do in the default case).
> 
> Note that for function mwait_idle_cpu_init() in
> xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
> not added: unlike the other functions covered in this patch, its
> default label has a return statement that does not violate Rule 16.4.
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
> Changes in v2:
> as Jan pointed out, in the v1 some patterns were not explicitly identified
> (https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).
> 
> This version adds the /* Notifier pattern. */ comment to all the patterns
> present in the Xen codebase except for mwait_idle_cpu_init().
> ---
>   xen/arch/arm/cpuerrata.c                     | 1 +
>   xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
>   xen/arch/arm/gic.c                           | 1 +
>   xen/arch/arm/irq.c                           | 4 ++++
>   xen/arch/arm/mmu/p2m.c                       | 1 +
>   xen/arch/arm/percpu.c                        | 1 +
>   xen/arch/arm/smpboot.c                       | 1 +
>   xen/arch/arm/time.c                          | 1 +
>   xen/arch/arm/vgic-v3-its.c                   | 2 ++
>   xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
>   xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
>   xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
>   xen/arch/x86/genapic/x2apic.c                | 3 +++
>   xen/arch/x86/hvm/hvm.c                       | 1 +
>   xen/arch/x86/nmi.c                           | 1 +
>   xen/arch/x86/percpu.c                        | 3 +++
>   xen/arch/x86/psr.c                           | 3 +++
>   xen/arch/x86/smpboot.c                       | 3 +++
>   xen/common/kexec.c                           | 1 +
>   xen/common/rcupdate.c                        | 1 +
>   xen/common/sched/core.c                      | 1 +
>   xen/common/sched/cpupool.c                   | 1 +
>   xen/common/spinlock.c                        | 1 +
>   xen/common/tasklet.c                         | 1 +
>   xen/common/timer.c                           | 1 +
>   xen/drivers/cpufreq/cpufreq.c                | 1 +
>   xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
>   xen/drivers/passthrough/x86/hvm.c            | 3 +++
>   xen/drivers/passthrough/x86/iommu.c          | 3 +++
>   29 files changed, 59 insertions(+)
> 
> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> index 2b7101ea25..69c30aecd8 100644
> --- a/xen/arch/arm/cpuerrata.c
> +++ b/xen/arch/arm/cpuerrata.c
> @@ -730,6 +730,7 @@ static int cpu_errata_callback(struct notifier_block *nfb,
>           rc = enable_nonboot_cpu_caps(arm_errata);
>           break;
>       default:
> +        /* Notifier pattern. */
Without looking at the commit message (which may not be trivial to find 
once this is committed), it is not clear to me what this comment is 
supposed to mean. Will there be a longer explanation in the MISRA doc? 
Should this be a SAF-* comment?

>           break;
>       }
>   
> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> index eb0a5535e4..4c2bd35403 100644
> --- a/xen/arch/arm/gic-v3-lpi.c
> +++ b/xen/arch/arm/gic-v3-lpi.c
> @@ -389,6 +389,10 @@ static int cpu_callback(struct notifier_block *nfb, unsigned long action,
>               printk(XENLOG_ERR "Unable to allocate the pendtable for CPU%lu\n",
>                      cpu);
>           break;
> +
> +    default:
> +        /* Notifier pattern. */
> +        break;

Skimming through v1, I see it was pointed out that gic-v3-lpi may be missing some cases.

Let me start by saying that I understand this patch is not technically 
changing anything. However, it gives us an opportunity to check the 
notifier pattern.

Has anyone done any proper investigation? If so, what was the outcome? 
If not, have we identified someone to do it?

The same question applies to each place where you add "default".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:19:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:19:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743643.1150568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtM1-0004cG-Fi; Wed, 19 Jun 2024 11:19:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743643.1150568; Wed, 19 Jun 2024 11:19:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtM1-0004c9-Bi; Wed, 19 Jun 2024 11:19:53 +0000
Received: by outflank-mailman (input) for mailman id 743643;
 Wed, 19 Jun 2024 11:19:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hxBI=NV=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sJtLz-0004c3-TB
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:19:51 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d8393033-2e2d-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 13:19:51 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a6fb341a7f2so67405266b.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 04:19:51 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56ed3564sm655882566b.104.2024.06.19.04.19.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 04:19:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8393033-2e2d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718795990; x=1719400790; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Wy8Nw+xYFwxnaJMcrgGgPVXtf+542e3QCucupMXdJrM=;
        b=mEgAKUlXZtnKNQJPSdzsQjnNkaiK2+K9VlT3vEKkuTdn7i2clewMz7xJ/buJGGI0j+
         WkxN9v37VFdHyh4uupLFjE6HJzAD9oMSKGOVZL6u5TaI3yBNXO/tOTOgO72pD/BGJeA0
         Z2WkctyqpF76Q0N9juypTqXBRNYpSnCpM/mZhQEKFMS3bZHndkTlbJDV+bP/dTCUgw2i
         Lf1/p6UV5y0jBhAFZkYtGb1/GhYjrANIho8qN603XoGBRZ73fymxkZU60JUl9ViIgrlT
         10bdk8ILLmHIQ7LowL7MpaFxneMMiZRFxr7DcdBBVnz/BfEjBAYKij9Zqfj1IRUlDfFp
         RNCA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718795990; x=1719400790;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Wy8Nw+xYFwxnaJMcrgGgPVXtf+542e3QCucupMXdJrM=;
        b=A28Rc1V1cTkXI3FGGPpTsMjvenXmn8q26sxyfP4oKQ8Vqq1ZG4kw2OeuOsEOecAIt/
         xrtY4L/HypozA4OFIpmXrXW/Aqa/Zs/5AJvpiPmPaNl7GBaDNDbIun14Eb7IwULS2bTM
         nOCwsmm1O6RG73rh0ZwcqcsUkHMLrTEzSooQGBx0ztM6WSf+1nUcXktB+ouEwQ6V2LLS
         O+LFjIBFcqjTb4qVoGageewmAOzt8kCxSxZ5xCHxmZnQV60cpYTy4dM+z3/wiZ7E8FAr
         R5woJgkn3uveLHcbVCLR43LyAjYQYifbV2VJEnnau9vQ4lxMuMmR4ALJxdYvHYAdJqqj
         6zQw==
X-Gm-Message-State: AOJu0Yytj2Am8urPXqprEjgZ2S53O50K17r1p8A0RB+XnHM+KxjTQwqL
	wdIDLYMbVirYOFPfZKPzMUhMtyETajZJxPJbzUoK4DC1CTzkJ0TC
X-Google-Smtp-Source: AGHT+IGROfozAU4rmfYZDBfT4Qq854mddFasaL0W6dRLNiTtakeKHUgrJ0mes0h4kmwUlsjpBiWSug==
X-Received: by 2002:a17:906:4a82:b0:a6f:9da3:69a1 with SMTP id a640c23a62f3a-a6fab779005mr135733166b.47.1718795990013;
        Wed, 19 Jun 2024 04:19:50 -0700 (PDT)
Message-ID: <691f0bebe10e09f8fb46d0816fa20c61a9d9d3aa.camel@gmail.com>
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, Michal Orzel
	 <michal.orzel@amd.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	 <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Volodymyr Babchuk
	 <Volodymyr_Babchuk@epam.com>
Date: Wed, 19 Jun 2024 13:19:49 +0200
In-Reply-To: <8C571FCD-3EAF-40B5-8694-625880176F8B@arm.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
	 <8C571FCD-3EAF-40B5-8694-625880176F8B@arm.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

Hi, 
On Wed, 2024-06-19 at 09:02 +0000, Bertrand Marquis wrote:
> Hi,
> 
> Adding Oleksii for Release ack.
> 
> Cheers
> Bertrand
> 
> > On 19 Jun 2024, at 08:46, Michal Orzel <michal.orzel@amd.com>
> > wrote:
> > 
> > Building Xen with CONFIG_STATIC_SHM=y results in a build failure:
> > 
> > arch/arm/static-shmem.c: In function 'process_shm':
> > arch/arm/static-shmem.c:327:41: error: 'gbase' may be used
> > uninitialized [-Werror=maybe-uninitialized]
> >  327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase)
> > )
> > arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
> >  305 |         paddr_t gbase, pbase, psize;
> > 
> > This is because the commit cb1ddafdc573 adds a check referencing
> > gbase/pbase variables which were not yet assigned a value. Fix it.
> > 
> > Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be
> > direct-mapped for direct-mapped domains")
> > Signed-off-by: Michal Orzel <michal.orzel@amd.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> > ---
> > Rationale for 4.19: this patch fixes a build failure reported by
> > CI:
> > https://gitlab.com/xen-project/xen/-/jobs/7131807878
> > ---
> > xen/arch/arm/static-shmem.c | 13 +++++++------
> > 1 file changed, 7 insertions(+), 6 deletions(-)
> > 
> > diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-
> > shmem.c
> > index c434b96e6204..cd48d2896b7e 100644
> > --- a/xen/arch/arm/static-shmem.c
> > +++ b/xen/arch/arm/static-shmem.c
> > @@ -324,12 +324,6 @@ int __init process_shm(struct domain *d,
> > struct kernel_info *kinfo,
> >            printk("%pd: static shared memory bank not found:
> > '%s'", d, shm_id);
> >            return -ENOENT;
> >        }
> > -       if ( is_domain_direct_mapped(d) && (pbase != gbase) )
> > -       {
> > -           printk("%pd: physical address 0x%"PRIpaddr" and guest
> > address 0x%"PRIpaddr" are not direct-mapped.\n",
> > -                  d, pbase, gbase);
> > -           return -EINVAL;
> > -       }
> > 
> >        pbase = boot_shm_bank->start;
> >        psize = boot_shm_bank->size;
> > @@ -353,6 +347,13 @@ int __init process_shm(struct domain *d,
> > struct kernel_info *kinfo,
> >            /* guest phys address is after host phys address */
> >            gbase = dt_read_paddr(cells + addr_cells, addr_cells);
> > 
> > +           if ( is_domain_direct_mapped(d) && (pbase != gbase) )
> > +           {
> > +               printk("%pd: physical address 0x%"PRIpaddr" and
> > guest address 0x%"PRIpaddr" are not direct-mapped.\n",
> > +                      d, pbase, gbase);
> > +               return -EINVAL;
> > +           }
> > +
> >            for ( i = 0; i < PFN_DOWN(psize); i++ )
> >                if ( !mfn_valid(mfn_add(maddr_to_mfn(pbase), i)) )
> >                {
> > -- 
> > 2.25.1
> > 
> 




From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:21:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:21:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743650.1150579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtNV-00064e-UK; Wed, 19 Jun 2024 11:21:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743650.1150579; Wed, 19 Jun 2024 11:21:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtNV-00064X-QM; Wed, 19 Jun 2024 11:21:25 +0000
Received: by outflank-mailman (input) for mailman id 743650;
 Wed, 19 Jun 2024 11:21:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sJtNV-00064P-Ib
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:21:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtNU-0001be-T4; Wed, 19 Jun 2024 11:21:24 +0000
Received: from [15.248.3.90] (helo=[10.24.67.26])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtNU-0000i6-Mi; Wed, 19 Jun 2024 11:21:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	References:Cc:To:From:Subject:MIME-Version:Date:Message-ID;
	bh=oHKOYPulBdFKe4ZGiYuTL4IDCQS1Zi7HVgsipZMR0XE=; b=F30FsomMYnNVgM0vJkFb6Kb0Az
	G+wioPVD4PIruzgl1bBDM0YgoyxHTJCIrYvEaRGu4i5WQ4oMDiCQRNXYvJ4s2F+liuXoZVJHgkWlX
	1TIi1c37fUaCHluuJF5o6JIQayh2xodRYYJ6WkF36rLLdisXLRFScC5B1fVq5ENvEGSI=;
Message-ID: <1fc8524f-8766-4eee-9b27-0eacd04097d4@xen.org>
Date: Wed, 19 Jun 2024 12:21:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
Content-Language: en-GB
From: Julien Grall <julien@xen.org>
To: Federico Serafini <federico.serafini@bugseng.com>,
 xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
 <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
In-Reply-To: <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit



On 19/06/2024 12:17, Julien Grall wrote:
> Hi Federico,
> 
> On 19/06/2024 10:29, Federico Serafini wrote:
>> MISRA C Rule 16.4 states that every `switch' statement shall have a
>> `default' label and a statement or a comment prior to the
>> terminating break statement.
>>
>> This patch addresses some violations of the rule related to the
>> "notifier pattern": a frequently-used pattern whereby only a few values
>> are handled by the switch statement and nothing should be done for
>> others (nothing to do in the default case).
>>
>> Note that for function mwait_idle_cpu_init() in
>> xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
>> not added: unlike the other functions covered in this patch, its
>> default label has a return statement that does not violate Rule 
>> 16.4.
>>
>> No functional change.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>> ---
>> Changes in v2:
>> as Jan pointed out, in the v1 some patterns were not explicitly 
>> identified
>> (https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).
>>
>> This version adds the /* Notifier pattern. */ comment to all the 
>> patterns present in
>> the Xen codebase except for mwait_idle_cpu_init().
>> ---
>>   xen/arch/arm/cpuerrata.c                     | 1 +
>>   xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
>>   xen/arch/arm/gic.c                           | 1 +
>>   xen/arch/arm/irq.c                           | 4 ++++
>>   xen/arch/arm/mmu/p2m.c                       | 1 +
>>   xen/arch/arm/percpu.c                        | 1 +
>>   xen/arch/arm/smpboot.c                       | 1 +
>>   xen/arch/arm/time.c                          | 1 +
>>   xen/arch/arm/vgic-v3-its.c                   | 2 ++
>>   xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
>>   xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
>>   xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
>>   xen/arch/x86/genapic/x2apic.c                | 3 +++
>>   xen/arch/x86/hvm/hvm.c                       | 1 +
>>   xen/arch/x86/nmi.c                           | 1 +
>>   xen/arch/x86/percpu.c                        | 3 +++
>>   xen/arch/x86/psr.c                           | 3 +++
>>   xen/arch/x86/smpboot.c                       | 3 +++
>>   xen/common/kexec.c                           | 1 +
>>   xen/common/rcupdate.c                        | 1 +
>>   xen/common/sched/core.c                      | 1 +
>>   xen/common/sched/cpupool.c                   | 1 +
>>   xen/common/spinlock.c                        | 1 +
>>   xen/common/tasklet.c                         | 1 +
>>   xen/common/timer.c                           | 1 +
>>   xen/drivers/cpufreq/cpufreq.c                | 1 +
>>   xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
>>   xen/drivers/passthrough/x86/hvm.c            | 3 +++
>>   xen/drivers/passthrough/x86/iommu.c          | 3 +++
>>   29 files changed, 59 insertions(+)
>>
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 2b7101ea25..69c30aecd8 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -730,6 +730,7 @@ static int cpu_errata_callback(struct 
>> notifier_block *nfb,
>>           rc = enable_nonboot_cpu_caps(arm_errata);
>>           break;
>>       default:
>> +        /* Notifier pattern. */
> Without looking at the commit message (which may not be trivial to find 
> once this is committed), it is not clear to me what this comment is 
> supposed to mean. Will there be a longer explanation in the MISRA doc? 
> Should this be a SAF-* comment?

Please ignore this comment. Just found it in the rules.rst.

> 
>>           break;
>>       }
>> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
>> index eb0a5535e4..4c2bd35403 100644
>> --- a/xen/arch/arm/gic-v3-lpi.c
>> +++ b/xen/arch/arm/gic-v3-lpi.c
>> @@ -389,6 +389,10 @@ static int cpu_callback(struct notifier_block 
>> *nfb, unsigned long action,
>>               printk(XENLOG_ERR "Unable to allocate the pendtable for 
>> CPU%lu\n",
>>                      cpu);
>>           break;
>> +
>> +    default:
>> +        /* Notifier pattern. */
>> +        break;
> 
> Skimming through v1, I see it was pointed out that gic-v3-lpi may be 
> missing some cases.
> 
> Let me start by saying that I understand this patch is not technically 
> changing anything. However, it gives us an opportunity to check the 
> notifier pattern.
> 
> Has anyone done any proper investigation? If so, what was the outcome? 
> If not, have we identified someone to do it?
> 
> The same question applies to each place where you add "default".
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:31:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:31:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743660.1150588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtWo-00084G-O6; Wed, 19 Jun 2024 11:31:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743660.1150588; Wed, 19 Jun 2024 11:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtWo-000849-LY; Wed, 19 Jun 2024 11:31:02 +0000
Received: by outflank-mailman (input) for mailman id 743660;
 Wed, 19 Jun 2024 11:31:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hxBI=NV=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sJtWn-000843-Rp
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:31:01 +0000
Received: from mail-wm1-x329.google.com (mail-wm1-x329.google.com
 [2a00:1450:4864:20::329])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66c83239-2e2f-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 13:30:59 +0200 (CEST)
Received: by mail-wm1-x329.google.com with SMTP id
 5b1f17b1804b1-4218180a122so46504475e9.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 04:30:59 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-3607509c707sm17009042f8f.36.2024.06.19.04.30.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 04:30:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66c83239-2e2f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718796659; x=1719401459; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=2OQcV8wPm6FM6pib18mi19TvpIhUIlJRckjnpmOo1Yo=;
        b=H5XCTu4Z7IJHKnhqyUOXcyQ1lW6CYnYr1uCbvR8tAfm0lxnKP0r230Ybx7SL5QDNm9
         2yLhrJSya0HdO/P/pgUeNQyLg6rBewOJqbTSL37+yS6zTB/OdwpRfk2W1HTPP72XQJl2
         ooflseCF56DMM2iqWgwVIZt8Xi1e9U+9jjKO/tQMy4PNMTsZGc4LuB+Fdo1QlisT3a4A
         EcA54vFEeIglS+LnGlIx/sKBayXxhhjGgtfwl8Dux7J6sEhBLla7XtuNjzgIYk0Ki/fD
         g1klr6X5aoXydY6UXqmwQEjGa7A2iIAWivipy2iOf8/xJ8RLyEqT9rtfXTndVpYJqDKc
         rmzg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718796659; x=1719401459;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=2OQcV8wPm6FM6pib18mi19TvpIhUIlJRckjnpmOo1Yo=;
        b=DJZwbDrD35HzmwISlHHhayDv3vNQTSqEGifTJIt5CChCStVQWzmuMpl2UwLqPGWe6v
         hzH7bqwS64fkwD21iB5/1ddlxhR6h0JGis+SsGXx1Rflq4i+wtdTtAHwegprF0ELHDKx
         IMToKXTtOfXz7UPQrUfZeM8yy79J+AeE9L1b2bw2JP2o4cagcZifrdhvAud5tQT2XTIW
         hrxBc90zn1h/GpLXR8ejp0SsOCTBXomvICMYne4hMxehv9LSLSTfd+inMiYHaP6qkp62
         4FumwlEv84+zjhCa9RcrkqParudpARpdvUyEXOghQC3hpgilgVOuPNE/ySUIl9cdoMg/
         +NNw==
X-Forwarded-Encrypted: i=1; AJvYcCWFdOUJPo4RIF4y5k+SknDE6XeYuxpu2Na9ETIXhg8V7BlacmfNaN94iXvor9n1h6Qs7d4Q/F/1uSZqlDpzAk+OQZCMFG/6vhNW7+ZUpuc=
X-Gm-Message-State: AOJu0Yzl+DcqL9tLQdnPDlXhhRSw+d3QkiiIYW+XjBkxs0+DKZKzYaHd
	2sTMONbNmFXdIzy9/x32PGc0iIX9ty+RqPR5yyK0lP46CwpLh3rF
X-Google-Smtp-Source: AGHT+IEESA6Zs9Q9brwIwCSKoZHiAO8OgyglAiGCQfnM+vzfu1dJ7v8az8f/Lwan2tnP3Jhzn4yRTA==
X-Received: by 2002:a5d:63c7:0:b0:362:c971:d97e with SMTP id ffacd0b85a97d-363171e241bmr1730637f8f.4.1718796658650;
        Wed, 19 Jun 2024 04:30:58 -0700 (PDT)
Message-ID: <97fb98c6763a9a56875240359919e7713225f53f.camel@gmail.com>
Subject: Re: [PATCH for-4.19 v4 0/7] x86/xstate: Fixes to size calculations
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Date: Wed, 19 Jun 2024 13:30:57 +0200
In-Reply-To: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
References: <20240617173921.1755439-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40)
MIME-Version: 1.0

On Mon, 2024-06-17 at 18:39 +0100, Andrew Cooper wrote:
> Only minor changes in v4 vs v3.  See patches for details.
>
> The end result has been tested across the entire XenServer hardware
> lab.  This
> found several false assumptions about how the dynamic sizes behave.
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
>
> Patches 1 and 6 are the main bugfixes from this series.  There's still
> lots more
> work to do in order to get AMX and/or CET working, but this is at
> least a 4-yo
> series finally off my plate.
>
> Andrew Cooper (7):
>   x86/xstate: Fix initialisation of XSS cache
>   x86/xstate: Cross-check dynamic XSTATE sizes at boot
>   x86/boot: Collect the Raw CPU Policy earlier on boot
>   x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
>   x86/cpu-policy: Simplify recalculate_xstate()
>   x86/cpuid: Fix handling of XSAVE dynamic leaves
>   x86/defns: Clean up X86_{XCR0,XSS}_* constants
>
>  xen/arch/x86/cpu-policy.c                   |  56 ++--
>  xen/arch/x86/cpuid.c                        |  24 +-
>  xen/arch/x86/domctl.c                       |   2 +-
>  xen/arch/x86/hvm/hvm.c                      |   2 +-
>  xen/arch/x86/i387.c                         |   2 +-
>  xen/arch/x86/include/asm/x86-defns.h        |  55 ++--
>  xen/arch/x86/include/asm/xstate.h           |   8 +-
>  xen/arch/x86/setup.c                        |   4 +-
>  xen/arch/x86/xstate.c                       | 294 +++++++++++++++++-
> --
>  xen/include/public/arch-x86/cpufeatureset.h |   3 +
>  xen/include/xen/lib/x86/cpu-policy.h        |   2 +-
>  11 files changed, 330 insertions(+), 122 deletions(-)
>



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:37:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:37:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743667.1150597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtd3-0000EK-BQ; Wed, 19 Jun 2024 11:37:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743667.1150597; Wed, 19 Jun 2024 11:37:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtd3-0000ED-8X; Wed, 19 Jun 2024 11:37:29 +0000
Received: by outflank-mailman (input) for mailman id 743667;
 Wed, 19 Jun 2024 11:37:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sJtd1-0000E7-Ua
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:37:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtcz-0001rP-8H; Wed, 19 Jun 2024 11:37:25 +0000
Received: from [15.248.3.90] (helo=[10.24.67.26])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtcy-0001i5-VN; Wed, 19 Jun 2024 11:37:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=KvT4XPGLOy9LcTkyKXD0mINhG2TBgfi8ASfYPerfDss=; b=MwiBrOXYZvHufb0BVM9sT2A6RC
	wgdaS2DziV8rcS5iIR4KJ0rz1MWJhKWiwkeYmQRyH1P/yMuzjKiMfsJOkzmeobs8RznQYA9vLk5XJ
	uHJQth2mevyzDVyLvsGWMDUHb4tbjgr8lm2o0ofEAVweMQRQLqmTQCHbJ5u6iE7iUqsw=;
Message-ID: <4213a7b9-b2f8-4e3d-a63a-e8b7b53fd642@xen.org>
Date: Wed, 19 Jun 2024 12:37:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
Content-Language: en-GB
To: "Oleksii K." <oleksii.kurochko@gmail.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
 <8C571FCD-3EAF-40B5-8694-625880176F8B@arm.com>
 <691f0bebe10e09f8fb46d0816fa20c61a9d9d3aa.camel@gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <691f0bebe10e09f8fb46d0816fa20c61a9d9d3aa.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi,

On 19/06/2024 12:19, Oleksii K. wrote:
> Hi,
> On Wed, 2024-06-19 at 09:02 +0000, Bertrand Marquis wrote:
>> Hi,
>>
>> Adding Oleksii for Release ack.
>>
>> Cheers
>> Bertrand
>>
>>> On 19 Jun 2024, at 08:46, Michal Orzel <michal.orzel@amd.com>
>>> wrote:
>>>
>>> Building Xen with CONFIG_STATIC_SHM=y results in a build failure:
>>>
>>> arch/arm/static-shmem.c: In function 'process_shm':
>>> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used
>>> uninitialized [-Werror=maybe-uninitialized]
>>>   327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase)
>>> )
>>> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>>>   305 |         paddr_t gbase, pbase, psize;
>>>
>>> This is because the commit cb1ddafdc573 adds a check referencing
>>> gbase/pbase variables which were not yet assigned a value. Fix it.
>>>
>>> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be
>>> direct-mapped for direct-mapped domains")
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
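
For readers not following the thread, a minimal hypothetical C reduction of the warning quoted above may help. The names here (lookup_base, have_prop, the address constant) are invented for illustration and are not taken from the patch; the point is only the shape of the bug and of the fix:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;
#define INVALID_PADDR (~(paddr_t)0)

/*
 * Without the initialiser, 'base' would be assigned only on some paths
 * yet read afterwards, which is what GCC's -Werror=maybe-uninitialized
 * flags in process_shm().  Giving the variable a defined value at
 * declaration (or moving the check after the assignment) resolves it.
 */
static paddr_t lookup_base(int have_prop)
{
    paddr_t base = INVALID_PADDR; /* fix: initialise before any use */

    if ( have_prop )
        base = 0x40000000;        /* normally parsed from the device tree */

    return base;                  /* defined even when the property is absent */
}
```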

I have committed to unblock the CI. But I have some questions on the 
approach. I will ask them separately.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:44:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:44:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743675.1150611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtk6-0002IS-2M; Wed, 19 Jun 2024 11:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743675.1150611; Wed, 19 Jun 2024 11:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtk5-0002IL-W6; Wed, 19 Jun 2024 11:44:45 +0000
Received: by outflank-mailman (input) for mailman id 743675;
 Wed, 19 Jun 2024 11:44:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5jEq=NV=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sJtk4-0002IA-FZ
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:44:44 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f403:2417::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 506743ab-2e31-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 13:44:41 +0200 (CEST)
Received: from SJ0PR05CA0196.namprd05.prod.outlook.com (2603:10b6:a03:330::21)
 by DS0PR12MB6463.namprd12.prod.outlook.com (2603:10b6:8:c5::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.33; Wed, 19 Jun
 2024 11:44:36 +0000
Received: from SJ5PEPF000001E8.namprd05.prod.outlook.com
 (2603:10b6:a03:330:cafe::7a) by SJ0PR05CA0196.outlook.office365.com
 (2603:10b6:a03:330::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.32 via Frontend
 Transport; Wed, 19 Jun 2024 11:44:35 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SJ5PEPF000001E8.mail.protection.outlook.com (10.167.242.196) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Wed, 19 Jun 2024 11:44:34 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 19 Jun
 2024 06:44:34 -0500
Received: from [10.252.147.188] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39 via Frontend
 Transport; Wed, 19 Jun 2024 06:44:32 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 506743ab-2e31-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cPsCyo/hTSKFLicXB9z+ddsRZgpsDYHKLh8wwc6QJ0ZvxpH1MeEuL95gsICqpuSeutEa4SwYkYikX1qLEY9VNOZRMIz8XbpTA79MmTCqcBFP/la5HccSstpqkyeB/GeUZcLo83Nnl6MQyUEGZK3JQsuvoJvh79U1gCCmuRwbVqKfa10vYDltSMQNKXXUv4tIl0aDGOLK6JxxDByZt73xwRjlF+eDYP6TfBtksxrr6FcwwLM4URCdEm7eeYOzdvDHwKkvafYb2vgrj4ZQxXefoZIf7FPav2LiZAbMs50ElWAuLXl8qznDL/sHdrLvyqRMXl73LyBtB/j/VEfaq9eJzw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=leMPA/wXFypHYSB6ShGF1QFdxJ3xOOpa+MO6Y5fM4CM=;
 b=XVSIR/GY7AD9kOK86MFjVjy/44wrx+BsSJ3OZ13L+m7Uyu4v1vUi9HAUqFs4M052gFhJ24F9Dus+Al1cFaRccNRQ0wTGZUlmsJAA+545XMgyFOrquwC/LUmWKiuxf2iqL4VoDNX7j/b+lGdm/mUqyMdb7bH9h+mo6eCabR8h1gcawzlItfxkOCStEP9900aa1OuSdvm7RBgGJ4aHwFN781pDvwBq62XJpK39zx9l+y8aMBGHFG/hPWLzw6TqM0LxsPB5FeO5K/kYmf/b19B7MbkbUVU/UTVdcMLMGaK6UOjn/PXx1pbp0qag3gD/HJvdmrhrvKGyx2zUKcZt7ZBvXg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=leMPA/wXFypHYSB6ShGF1QFdxJ3xOOpa+MO6Y5fM4CM=;
 b=v5T6uc3FCa30yiUFeTbzyA4YXTYk1/8n0XaypSZ3Jdde/uMwkzRwomooNNoS4PpYe52PfedNFT/PVCf9E8LcGUwFga9Lsq/I6rwhLcV271TPMjvhB1K64adosFdWbQhdD+Jg6lxukOlgk9OKlYQDgS9jhtY0hLj5OZIvfTh5dkk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <66a5a0ec-b3df-41a7-9ba6-955259c7a45b@amd.com>
Date: Wed, 19 Jun 2024 13:44:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
To: Julien Grall <julien@xen.org>, Oleksii K. <oleksii.kurochko@gmail.com>,
	Bertrand Marquis <Bertrand.Marquis@arm.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
 <8C571FCD-3EAF-40B5-8694-625880176F8B@arm.com>
 <691f0bebe10e09f8fb46d0816fa20c61a9d9d3aa.camel@gmail.com>
 <4213a7b9-b2f8-4e3d-a63a-e8b7b53fd642@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <4213a7b9-b2f8-4e3d-a63a-e8b7b53fd642@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Received-SPF: None (SATLEXMB04.amd.com: michal.orzel@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ5PEPF000001E8:EE_|DS0PR12MB6463:EE_
X-MS-Office365-Filtering-Correlation-Id: 2715500c-178c-4744-b895-08dc90553118
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|36860700010|376011|1800799021|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?utf-8?B?dG43Q1ovSmE4dVJYSkdLcTZoZnBNVGJDU0tBeXA4UXJsTHBvK0l1Z25xZ2xU?=
 =?utf-8?B?VEJ1ZldaZ2VVS2N6UHdpTlhCS3lCa1FJQWtBa0doWEFJVlphWk1EbU9ub3h4?=
 =?utf-8?B?SUVqbTRGUXlGL2U2Umdoc3R5WG1TU3phSXRYVUlQYy9URGR3Y243UDBqL096?=
 =?utf-8?B?QjY4WXl3cFVpZmlPZ01YQUJLeitubU95MkYvaFd0MzdOemo5K1g0N0VucEJy?=
 =?utf-8?B?OTh6YjVCYkZ0SUplcnZocDZYeXBKSVF1MjRtUitRcXpKRzB2QmczdVFic1Ny?=
 =?utf-8?B?UVpwbStrbUJQS0VyN3BoRThmckkvYUNEMU13UHA0ZzVoeEpwK3M0M25scjds?=
 =?utf-8?B?RTFoSTROamdKVjZ6YVZPbitGdEkwMDRFK3VabXp4MEwyczRwTldkOVJSMjBx?=
 =?utf-8?B?d0xPVnQxVXZvbXNsQXJCbUcxV3d5dEdRL2xPUTJkWTkvL3h0NzFGanQ0c0hl?=
 =?utf-8?B?TndjY3loTUJvdFBvQmdjcEtsajNjVmZITDlZK1NyN3Bkb3pTOERKK2xKODNV?=
 =?utf-8?B?d0toR2FGQitPYUtqTnh3Q1NmTnhiWi9FRkxIYnZUcUsxM2RRRUdlNU5QdE1s?=
 =?utf-8?B?a1BBZTI2TVdoOXYwaHBOaGFCa0IyYXdlekxHdUdXWHh3SURuaVRvVnlqNmNk?=
 =?utf-8?B?K3Q2ZEJuWm13b2t0ZWdramNOWFV2MFpiN2RlV1lhNndncC9TNGhJYUlpZ2di?=
 =?utf-8?B?UEdRTUd1SytyUWhNS1p5SnBPR1RvaE5EME9wNitWMVFiNGpHRVBDOUVJdEhK?=
 =?utf-8?B?MENRcFZ5MkdUM1UvSmhialFqaWh0L2hlV0huYlJwSUlLalVGWEF0KzJ2cFhv?=
 =?utf-8?B?ajZEK21PY1VteUhTa3pWK3B1UlgyNVZFd2c5WktCcjNmb2FCTHpCNmdBSTUr?=
 =?utf-8?B?dXZsV1h2c0h1SC9MYUh5ZHlGOFVMWTRiY1QrZjEvTTRlTXdRNTMwYjc4V3NY?=
 =?utf-8?B?MzN6R3ZYcmgrV3FYSXY4RW1lY3Z5VEI4d1pvbGtOMk9YbVd5bFd3eXVoL2FG?=
 =?utf-8?B?MENxUHo5bEMzYUFOTTJJM1RZNnIvbEY1VU1XYlM3V04wcXRWUXdVT1BhNGtC?=
 =?utf-8?B?cVN3L00zaWJwT2VSTXQwcDVqOGhjZFo2M3daRUVUWFF5alNSODN2eVNUb3NX?=
 =?utf-8?B?c3ZoZVd0UnM3cFA5b0JBdmp3ejVNQmMwY3ZwaTRMRzRmTVpkQlFNLytkYzNq?=
 =?utf-8?B?cW9hUDJ3enBnclNZL1E1emlaalN6NThMald2eTRHalpreVhVM1ZSQzRmU1ZC?=
 =?utf-8?B?NFM0QTBpcU9oUXY1U1h4QWNOZE00MlZBQlJFeWJkZXhqN1IzQkx0ZklRZHE0?=
 =?utf-8?B?cjBwWDBmS2RTaHl0TGFnclhXdllycGVJQ0g1MDFtQ0kvWjk2L3pqaWFmdVFI?=
 =?utf-8?B?V0U4M0dSdDIzaXNOMjlLeFBHcmFMOUZVbzhOalhROVRSaVFHeVdvOVlIVERy?=
 =?utf-8?B?dHNObDB2MDFJV3JlT1JRdlVtOHY3SFRWTHlJTGZ1Yis2WGJPcjJOdUxpWlVF?=
 =?utf-8?B?d0JoMVVaL0E2WGZUeUd5MjZnWWRpZHlQVFk5dkkzcFhZVHBubnpVQVRWbDFh?=
 =?utf-8?B?NGJzeEJWRVNqTGlKSzMxOVR3S2creEU5OVdWLzdiYjVPaDhOWC9DMEtzamsv?=
 =?utf-8?B?RHdyN2dpMWRMbWZhR0U2a1Y4REQzTzNBdmM5cExkZzFlSllKc1lhOW53OUNF?=
 =?utf-8?B?dlZVMkVTNXBhQ1FpbmhFdDNwTFE0emIrc1hTRzJaSTlSN1lrdkFkVFlnU0ZY?=
 =?utf-8?B?eEdMcFl2UWJMR2lyYmxxTjJMWHF1ZFo3ODFvRkg2b2dOMTBIK2ZoTFhPQXFs?=
 =?utf-8?Q?vEI0VtcMJp9fCSnyLoHwF6ltezV+rRmFQgLVg=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(36860700010)(376011)(1800799021)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jun 2024 11:44:34.8685
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2715500c-178c-4744-b895-08dc90553118
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SJ5PEPF000001E8.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB6463

Hi Julien,

On 19/06/2024 13:37, Julien Grall wrote:
> 
> 
> Hi,
> 
> On 19/06/2024 12:19, Oleksii K. wrote:
>> Hi,
>> On Wed, 2024-06-19 at 09:02 +0000, Bertrand Marquis wrote:
>>> Hi,
>>>
>>> Adding Oleksii for Release ack.
>>>
>>> Cheers
>>> Bertrand
>>>
>>>> On 19 Jun 2024, at 08:46, Michal Orzel <michal.orzel@amd.com>
>>>> wrote:
>>>>
>>>> Building Xen with CONFIG_STATIC_SHM=y results in a build failure:
>>>>
>>>> arch/arm/static-shmem.c: In function 'process_shm':
>>>> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used
>>>> uninitialized [-Werror=maybe-uninitialized]
>>>>   327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase)
>>>> )
>>>> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>>>>   305 |         paddr_t gbase, pbase, psize;
>>>>
>>>> This is because the commit cb1ddafdc573 adds a check referencing
>>>> gbase/pbase variables which were not yet assigned a value. Fix it.
>>>>
>>>> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be
>>>> direct-mapped for direct-mapped domains")
>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> I have committed to unblock the CI. But I have some questions on the
> approach. I will ask them separately.
The CI failures seen in:
https://gitlab.com/xen-project/xen/-/pipelines/1338067978
are due to two issues. This patch solves the first one. The other is related to Henry's
xenstore series, which has been merged into mainline and, without a corresponding Linux
patch, causes a regression: all the dom0less PV tests fail. We will need to revert the
xenstore patches for now.

~Michal


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:47:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:47:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743683.1150621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtmi-0002qx-FF; Wed, 19 Jun 2024 11:47:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743683.1150621; Wed, 19 Jun 2024 11:47:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtmi-0002qq-CK; Wed, 19 Jun 2024 11:47:28 +0000
Received: by outflank-mailman (input) for mailman id 743683;
 Wed, 19 Jun 2024 11:47:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJtmh-0002qi-4u
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:47:27 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b1340f5d-2e31-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 13:47:23 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-42179dafd6bso5525375e9.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 04:47:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705ccb6af5csm10510053b3a.162.2024.06.19.04.47.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 04:47:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1340f5d-2e31-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718797643; x=1719402443; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=eR1Qy5b+hMB6zfS57BU7m95BnwCTtWaUOxZagZ5scDw=;
        b=JPCOomNWx0NjOVAaYJoxge2db6408oD6xtHPTlrGZrDZmZtvCjCgfS8O8ytGg2HhCI
         meTLU5HXG7luvamxKtlU/4QHTbJAY0zdTwynGRVjjtwbjszoZIq3+6cAifqLo6MU70ga
         u+0JuYFpxjE/hz5uTj9Jb0J/QYGjtMNoEQwGjH6gcwyCFd5Z/FCh9fCCzFOTilX653MO
         1ih0kg6yt9ACTSw5Un1hSGExTf1Y0yTufB382MeHRlOmxPs7u/0Booxk+pNUccFOuald
         bFTUTWOaZ/Ki7A37SbzjnQTiEoIG5gSItFRIeuGysaPvSiziMrLfN0GUCegBnXRR8u6V
         uQ0g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718797643; x=1719402443;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=eR1Qy5b+hMB6zfS57BU7m95BnwCTtWaUOxZagZ5scDw=;
        b=LPkJYar+gQH/345YltaibGrWZ2W2INQSvKAd1XQiO+7buvsziinFr0BiehgtT8QvXg
         zC0Wz6ry/l9WetCllUH5MT8Bn7kZRXWTR4Fke4xcYfzBDTngADoHwmIMw9PMtKobM8r0
         7az3iAOUiAdoaTLMi/WyVWpMZ17O+lYTrsbui3ROI8neEo/t6ohTUdxvGaSzD06qw0h/
         /ooEEt2rHsPYMK7VVewVwE0aix4AeRJ3cfTd6G912zIaYSG6OtnAPCUs0PwkJhXlEHxZ
         Hu2ju9sywTxeRJdqQvPUh5CwQglCatjJtw9JxwigcCW8xy1CHKRTC11URuQgs6XRcLte
         gbRQ==
X-Forwarded-Encrypted: i=1; AJvYcCW/+dzGE7dsVOPfwe2yBMgkHoRpIYgi0OAm5dDpUXVEpfYQGqfJgskFZ7JQZrs9Mi3gBgJpdr5oCe5YL/oQNdD6aF28sPvna0CrYK13bb8=
X-Gm-Message-State: AOJu0YzZPkxrjI9itEHu+GL3giomAlUOQ+R682LgV48Xs8VVepkaF2uX
	IuRB4rwthSJJBLPyIrHRchfyOVvvhXUOnIlZD+1Lz8JBk0rhdAKfLofjiCz+Qw==
X-Google-Smtp-Source: AGHT+IEIq0KAw0PsQ8n8c3lbusWJDF4vFIMDkFYOTWbJTmy55aMHXI+opVdGDM2ap2tmRgUQm26Dgg==
X-Received: by 2002:adf:f4d1:0:b0:362:1d6c:b867 with SMTP id ffacd0b85a97d-3621d6cb9c6mr5186489f8f.3.1718797642732;
        Wed, 19 Jun 2024 04:47:22 -0700 (PDT)
Message-ID: <cae88fd0-95d6-4b8d-a6d4-b297082f44fd@suse.com>
Date: Wed, 19 Jun 2024 13:47:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
To: Julien Grall <julien@xen.org>,
 Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
 <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
 <1fc8524f-8766-4eee-9b27-0eacd04097d4@xen.org>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <1fc8524f-8766-4eee-9b27-0eacd04097d4@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.06.2024 13:21, Julien Grall wrote:
> 
> 
> On 19/06/2024 12:17, Julien Grall wrote:
>> Hi Federico,
>>
>> On 19/06/2024 10:29, Federico Serafini wrote:
>>> MISRA C Rule 16.4 states that every `switch' statement shall have a
>>> `default' label and a statement or a comment prior to the
>>> terminating break statement.
>>>
>>> This patch addresses some violations of the rule related to the
>>> "notifier pattern": a frequently-used pattern whereby only a few values
>>> are handled by the switch statement and nothing should be done for
>>> others (nothing to do in the default case).
>>>
>>> Note that for function mwait_idle_cpu_init() in
>>> xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
>>> not added: unlike the other functions covered in this patch,
>>> the default label has a return statement, which does not violate
>>> Rule 16.4.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>>> ---
>>> Changes in v2:
>>> as Jan pointed out, in the v1 some patterns were not explicitly 
>>> identified
>>> (https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).
>>>
>>> This version adds the /* Notifier pattern. */ comment to all the
>>> patterns present in the Xen codebase except for
>>> mwait_idle_cpu_init().
>>> ---
>>>   xen/arch/arm/cpuerrata.c                     | 1 +
>>>   xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
>>>   xen/arch/arm/gic.c                           | 1 +
>>>   xen/arch/arm/irq.c                           | 4 ++++
>>>   xen/arch/arm/mmu/p2m.c                       | 1 +
>>>   xen/arch/arm/percpu.c                        | 1 +
>>>   xen/arch/arm/smpboot.c                       | 1 +
>>>   xen/arch/arm/time.c                          | 1 +
>>>   xen/arch/arm/vgic-v3-its.c                   | 2 ++
>>>   xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
>>>   xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
>>>   xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
>>>   xen/arch/x86/genapic/x2apic.c                | 3 +++
>>>   xen/arch/x86/hvm/hvm.c                       | 1 +
>>>   xen/arch/x86/nmi.c                           | 1 +
>>>   xen/arch/x86/percpu.c                        | 3 +++
>>>   xen/arch/x86/psr.c                           | 3 +++
>>>   xen/arch/x86/smpboot.c                       | 3 +++
>>>   xen/common/kexec.c                           | 1 +
>>>   xen/common/rcupdate.c                        | 1 +
>>>   xen/common/sched/core.c                      | 1 +
>>>   xen/common/sched/cpupool.c                   | 1 +
>>>   xen/common/spinlock.c                        | 1 +
>>>   xen/common/tasklet.c                         | 1 +
>>>   xen/common/timer.c                           | 1 +
>>>   xen/drivers/cpufreq/cpufreq.c                | 1 +
>>>   xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
>>>   xen/drivers/passthrough/x86/hvm.c            | 3 +++
>>>   xen/drivers/passthrough/x86/iommu.c          | 3 +++
>>>   29 files changed, 59 insertions(+)
>>>
>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>>> index 2b7101ea25..69c30aecd8 100644
>>> --- a/xen/arch/arm/cpuerrata.c
>>> +++ b/xen/arch/arm/cpuerrata.c
>>> @@ -730,6 +730,7 @@ static int cpu_errata_callback(struct 
>>> notifier_block *nfb,
>>>           rc = enable_nonboot_cpu_caps(arm_errata);
>>>           break;
>>>       default:
>>> +        /* Notifier pattern. */
>> Without looking at the commit message (which may not be trivial when 
>> committed), it is not clear to me what this is supposed to mean. Will 
>> there be a longer explanation in the MISRA doc? Should this be a SAF-* 
>> comment?
> 
> Please ignore this comment. Just found it in the rules.rst.

Except that there it is only an example (and such an example could even
be replaced at any time). Already on the previous version I had asked
that some explanation be added as to what this means and under what
circumstances it is legitimate to add (kind of related to a later part
of the earlier reply of yours).

Jan
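
For context, a minimal sketch of the notifier pattern under discussion. The callback and action values below are hypothetical stand-ins for Xen's CPU_* notifier events; the real sites are in the files listed in the patch:

```c
#include <assert.h>

/* Hypothetical action values standing in for Xen's CPU_* notifier events. */
enum { CPU_UP_PREPARE = 1, CPU_DEAD = 2, CPU_ONLINE = 3 };

/*
 * Only a few actions need handling; every other value is deliberately
 * ignored.  The comment in the default case supplies the "statement or
 * comment prior to the terminating break" that MISRA C Rule 16.4 asks
 * for, documenting that doing nothing is intentional.
 */
static int cpu_callback(unsigned long action)
{
    int handled = 0;

    switch ( action )
    {
    case CPU_UP_PREPARE:
    case CPU_DEAD:
        handled = 1;
        break;

    default:
        /* Notifier pattern. */
        break;
    }

    return handled;
}
```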


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:49:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:49:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743691.1150631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtoJ-000458-TK; Wed, 19 Jun 2024 11:49:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743691.1150631; Wed, 19 Jun 2024 11:49:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtoJ-000451-Qn; Wed, 19 Jun 2024 11:49:07 +0000
Received: by outflank-mailman (input) for mailman id 743691;
 Wed, 19 Jun 2024 11:49:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sJtoJ-00044v-0c
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:49:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtoH-00023M-Ci; Wed, 19 Jun 2024 11:49:05 +0000
Received: from [15.248.3.90] (helo=[10.24.67.26])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtoH-0002Jf-3h; Wed, 19 Jun 2024 11:49:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=UKkAlB3SFJ3I+sCkQThCeI/yDYr2MaKlnfJQ/w+dC+4=; b=1n5iyvtP+2We7sKT4gek0+GV4v
	AOUkGOMOtFy6ywFazgqSgEMheCcY+xDRJQ3is4vRhhFzQXTkjHRDMDVgGBKPTh2Cohw21calRl2v5
	7x7nCHa5gBwhYaDGM8UnyicICJdbR/QqdhT/LaJcLdPH2eDHDT7mTVRJ/wj731K/UiV4=;
Message-ID: <fb6809b3-ee14-4baa-b6fa-bd2171d61c4b@xen.org>
Date: Wed, 19 Jun 2024 12:49:02 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 2/4] xen/arm: Alloc XenStore page for Dom0less DomUs
 from hypervisor
Content-Language: en-GB
To: Stefano Stabellini <sstabellini@kernel.org>,
 "Oleksii K." <oleksii.kurochko@gmail.com>
Cc: Stefano Stabellini <stefano.stabellini@amd.com>,
 xen-devel@lists.xenproject.org, bertrand.marquis@arm.com,
 michal.orzel@amd.com, Volodymyr_Babchuk@epam.com,
 Henry Wang <xin.wang2@amd.com>, Alec Kwapis <alec.kwapis@medtronic.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>
References: <alpine.DEB.2.22.394.2405241552240.2557291@ubuntu-linux-20-04-desktop>
 <20240524225522.2878481-2-stefano.stabellini@amd.com>
 <697aadfd-a8c1-4f1b-8806-6a5acbf343ba@xen.org>
 <b9c8e762af9ca04d9194fdaa0379f2fe9096af29.camel@gmail.com>
 <alpine.DEB.2.22.394.2406181734140.2572888@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2406181734140.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 19/06/2024 01:37, Stefano Stabellini wrote:
> On Mon, 27 May 2024, Oleksii K. wrote:
>>> I don't think it is a big problem if this is not merged for the code
>>> freeze as this is technically a bug fix.
>>
>> Agree, this is not a problem as it still looks to me like a bug fix.
>>
>> ~ Oleksii
> 
> Hi Oleksii, this version of the series was already fully acked with minor
> nits, and you gave the go-ahead for this release as it is a bug fix. Due
> to two weeks of travel I only managed to commit the series now; sorry for
> the delay.

Unfortunately this series is breaking the GitLab CI [1]. I understand the 
go-ahead was given two weeks ago, but as we are now past the code freeze, 
I feel we should have had a pros/cons e-mail to assess whether it was 
worth the risk to merge it.

Now to the issues: I vaguely recall this series didn't require any 
change in Linux. Did I miss anything? If not, why are we breaking Linux?

For now, I will revert this series. Once we root-cause the issue, we can 
re-assess whether the fix should be applied for 4.19.

Cheers,

[1] https://gitlab.com/xen-project/xen/-/pipelines/1338067978

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:51:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:51:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743698.1150641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtqw-0005VX-9J; Wed, 19 Jun 2024 11:51:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743698.1150641; Wed, 19 Jun 2024 11:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtqw-0005VQ-6d; Wed, 19 Jun 2024 11:51:50 +0000
Received: by outflank-mailman (input) for mailman id 743698;
 Wed, 19 Jun 2024 11:51:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJtqu-0005UZ-U4
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:51:48 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f184a9a-2e32-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 13:51:48 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ebed33cb65so72884071fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 04:51:48 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e7bf76sm114392635ad.105.2024.06.19.04.51.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 04:51:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f184a9a-2e32-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718797908; x=1719402708; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=H+HsZknUf0FqyjXfhHWAcMQ3h/sOvKR89K3syadBBGc=;
        b=HDLXsUkH9Q+z4ghVpnhrcCPziyIPTfcOxvPowcmEbZdekiSheYeRW8HGOglLGhTlPP
         r6wcOpd8eg21woEsKlcfSiJhtA+UX4BMNuWWi4YnNsUaUBiPT+Nl8l/Cz3A+uluq1iKR
         riTqMPl0r3w8qOgm5GfUO+Q0JUbCcboPG0L+Znv23pp4Y5JSUAF22mI0F4rYd3kMkBx7
         /DQvQN1eiI/yeV4prb/6iXPR2IjRS0z3ihuUCZwmqDGzWDXsryDngYE7nlSZvuonwHZ6
         d52zb7jbBi9Cgu5LQCrVVGsAHQmstavyU+JEu23p9Vsj65GaidrAa/jlLFoGDnYEKYwD
         c+AQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718797908; x=1719402708;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=H+HsZknUf0FqyjXfhHWAcMQ3h/sOvKR89K3syadBBGc=;
        b=DVQ0uvSrEAGEv5nRTMocpVBREGLRn23aBA0K6rHP8Z9YifRrtbEx215GJ+9Xnv1Ocw
         vnYUrACMfR9eaES1i8aITEcFnLAAxvv887Ty3JiZqnRkumYaB8wOTPPEXZRa0gGshenx
         ssb+vs3/ezPhSwc8Z/J5Rkeysweds+KIJjSFG3bCd5jnI5EPHTfMMtcxwm5ZYvpv6Ijv
         FYd2g0ltUMPHXC6zs5wtj09E47O5tFnv05mezYL/bpx73+eIbgkH8t72xUBIrUBo6loi
         hPrPC0MhEjowYTmrv2nHZhHbnV/hyxIPC5a/U7Id235FwSih0Uqs/oIoUDw2kNHXX96m
         Duig==
X-Forwarded-Encrypted: i=1; AJvYcCX7Gqbl9tijrNZPiqbJZob79jfIMZyekDqi9asohx5inn+0UFSioAvvKT8F/kq0F91v7aGO1xvY1XxdsHHJYdE2/ObfcOdVJZbEsPH14UA=
X-Gm-Message-State: AOJu0YwOn28tW5qxMZ/Sh0MQeXmICCFGjDcgN7OkO/3kUI98SAriXDlj
	3s2g0pjKhOy+j04UZwW9N/00aqr221jVPWxl/6O8KLvpietcw6qycWSCpzbKlg==
X-Google-Smtp-Source: AGHT+IE9X96w2hPNyDPa5XqSfMQ1fJ3GKa88BJnFarEpgipsO3lWZrGU47Y9zDiyM840QVDK4eYbxA==
X-Received: by 2002:a2e:9e05:0:b0:2e1:2169:a5cc with SMTP id 38308e7fff4ca-2ec3cec0576mr17373461fa.15.1718797907683;
        Wed, 19 Jun 2024 04:51:47 -0700 (PDT)
Message-ID: <b5767e63-a2ec-4593-b2b1-e6e8aab29b8c@suse.com>
Date: Wed, 19 Jun 2024 13:51:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v4.5 2/7] x86/xstate: Cross-check dynamic XSTATE
 sizes at boot
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240617173921.1755439-3-andrew.cooper3@citrix.com>
 <20240619104655.2401441-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240619104655.2401441-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 12:46, Andrew Cooper wrote:
> Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
> every call.  This is expensive, being used for domain create/migrate, as well
> as to service certain guest CPUID instructions.
> 
> Instead, arrange to check the sizes once at boot.  See the code comments for
> details.  Right now, it just checks hardware against the algorithm
> expectations.  Later patches will cross-check Xen's XSTATE calculations too.
> 
> Introduce more X86_XCR0_* and X86_XSS_* constants for CPUID bits.  This is to
> maximise coverage in the sanity check, even if we don't expect to
> use/virtualise some of these features any time soon.  Leave HDC and HWP alone
> for now; we don't have CPUID bits for them stored nicely.
> 
> Only perform the cross-checks when SELF_TESTS are active.  It's only
> developers or new hardware that are liable to trip these checks, and Xen at
> least tracks "maximum value ever seen in xcr0" for the lifetime of the VM,
> which we don't want to be tickling in the general case.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:53:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:53:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743703.1150651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtsX-00063b-Io; Wed, 19 Jun 2024 11:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743703.1150651; Wed, 19 Jun 2024 11:53:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtsX-00063U-GG; Wed, 19 Jun 2024 11:53:29 +0000
Received: by outflank-mailman (input) for mailman id 743703;
 Wed, 19 Jun 2024 11:53:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJtsW-00063N-6T
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:53:28 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8a1d236e-2e32-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 13:53:27 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2ebed33cb65so72898821fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 04:53:27 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e6dfb7sm114587185ad.64.2024.06.19.04.53.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 04:53:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a1d236e-2e32-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718798007; x=1719402807; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=6bJEvoC3DUAVTtWayZw28l8y94R8V/LKRoLGpWRNzeA=;
        b=MASs2Iffy1ZeVNhhh72L9Tj6zWcrKmDbsMpKfHZ38kJ+k+Y5gtS3MBr51BXw+ERpqG
         bs5xrer4gSPuyEorxurVczjhwL86F4E1/XWa4sVlAsCnXK9qCve4CDOpgDw/BBHhprGT
         p8rgk3MYo1VpBJeTRwh4U+Xs0F2/32aVzgdo18TnZ3mc6TapnMOBv4SCq+5eRKXFtQVH
         biqsGW5rggMd7ORpfP/No9bLJN4LNpyKr0OhpMvo8zPlOIBRI+iR4Fqio2j5zHjktN4i
         ubt/UntCEeZdShNlVljazZYN728BjhuIrky3oJ3Vt1aPXZVELKM4mPM/wX1TxJ1kd4jT
         Eb3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718798007; x=1719402807;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6bJEvoC3DUAVTtWayZw28l8y94R8V/LKRoLGpWRNzeA=;
        b=pYKxBhJKPFOYkj7Q3t5qgXbb5VQADh3duPN+k9WwkTXbN4PIRvGA5shsVa4BLDGzZG
         tcYQ/ftG/Nzpdgvl57r4lUivCWHhlGXsO9DsWe4NX13e9wQAFu1BYWUtl3rZNliAsj4e
         UwpqO6Pa9IIH6WZp8pg6VjOiIh7tN5+uOLkAFZ7Cu6Voi8vI/d9Wr3QSUgQzjP3Ht0mo
         ukS5GuXNtPRjq4CZHRZUgJ/J7hqC62SLO0M608gftQoxC5wWfmUTVci/dTRQ8qpsue6h
         O7f17eodFcyJwbMn6yzEVFY2r4JZ2l1IXvYjaKU4BqLvOkAGJ849V3RQVn2lzvOyucs/
         4bNw==
X-Forwarded-Encrypted: i=1; AJvYcCW3MPfSLE2SdLEuJFMh7+IW3PKcpvSbYJTQwIbMfse4W8oPUxQ6qMVE6dFp11ImlZ8WggAdbYiRkB9wehj9I1mUJwANxTQmhFgjuoLzq/c=
X-Gm-Message-State: AOJu0YzXOOjRKJ00aHg3Uq6Ri9vHUo0iyib9KX2SN3IkRtDdZZ0Q2Erd
	SpndFgVrkliyejRd6eT1HcL2fFt52tD7YqK5N1u8nKbCwQF6/N0K90YJlDRAQA==
X-Google-Smtp-Source: AGHT+IGL0COs6RQQ7wQ2cAUQqSAKCZ5B+3aE45wZ88NCw/R5cgUeCb0rLRtHiJhvvG7wo3zIzK0GEA==
X-Received: by 2002:a2e:9c99:0:b0:2ea:df2e:428c with SMTP id 38308e7fff4ca-2ec3cffc542mr15692731fa.49.1718798006769;
        Wed, 19 Jun 2024 04:53:26 -0700 (PDT)
Message-ID: <0b54d032-a473-4f3e-8284-b9fe63cbf26a@suse.com>
Date: Wed, 19 Jun 2024 13:53:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v4] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
References: <20240619095833.76271-1-roger.pau@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240619095833.76271-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.06.2024 11:58, Roger Pau Monne wrote:
> fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.  Given
> the CPU is to become offline, the normal migration logic used by Xen, where
> the vector on the previous target(s) is left configured until the interrupt
> is received on the new destination, is not suitable.
> 
> Instead attempt to do as much as possible in order to prevent losing
> interrupts.  If fixup_irqs() is called from the CPU to be offlined (as is
> currently the case for CPU hot unplug), attempt to forward pending vectors
> when interrupts that target the current CPU are migrated to a different
> destination.
> 
> Additionally, for interrupts that have already been moved from the current
> CPU prior to the call to fixup_irqs() but that haven't been delivered to the
> new destination (iow: interrupts with move_in_progress set and the current
> CPU set in ->arch.old_cpu_mask), also check whether the previous vector is
> pending and forward it to the new destination.
> 
> This allows us to remove the window with interrupts enabled at the bottom of
> fixup_irqs().  Such a window wasn't safe anyway: references to the CPU to
> become offline are removed from interrupt masks, but the per-CPU
> vector_irq[] array is not updated to reflect those changes (as the CPU is
> going offline anyway).
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:55:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743713.1150670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtuu-0006i6-24; Wed, 19 Jun 2024 11:55:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743713.1150670; Wed, 19 Jun 2024 11:55:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtut-0006hz-VP; Wed, 19 Jun 2024 11:55:55 +0000
Received: by outflank-mailman (input) for mailman id 743713;
 Wed, 19 Jun 2024 11:55:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sJtus-0006hp-CB
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:55:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtus-0002BY-1E; Wed, 19 Jun 2024 11:55:54 +0000
Received: from [15.248.3.90] (helo=[10.24.67.26])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJtur-0002mN-Rm; Wed, 19 Jun 2024 11:55:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=bbJoz8TGW9TgcOVF8vrqpPTN2DwFiD5nhzK1T3jLmzU=; b=1d83U0tUalBNNtHxMVlvNwUkOr
	40qt5llSYyv73qVngg1Bq1NxJl8jtxyL5PABm1ygjFAg1P/k8lk90kRYUYEAe9TI/ahv9ILKJwnpu
	SaLfgB6bp/NtIJvMqoQUF6RXD++gzCG5GnT2jYlGFzbXiVGaFbj62euze3GRQYzqU0T4=;
Message-ID: <82790448-dd2f-4299-ae3d-938080ee5e19@xen.org>
Date: Wed, 19 Jun 2024 12:55:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240619064652.18266-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 19/06/2024 07:46, Michal Orzel wrote:
> Building Xen with CONFIG_STATIC_SHM=y results in a build failure:
> 
> arch/arm/static-shmem.c: In function 'process_shm':
> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used uninitialized [-Werror=maybe-uninitialized]
>    327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase) )
> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>    305 |         paddr_t gbase, pbase, psize;
> 
> This is because commit cb1ddafdc573 added a check referencing the
> gbase/pbase variables before they were assigned a value. Fix it.
> 
> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be direct-mapped for direct-mapped domains")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> Rationale for 4.19: this patch fixes a build failure reported by CI:
> https://gitlab.com/xen-project/xen/-/jobs/7131807878
> ---
>   xen/arch/arm/static-shmem.c | 13 +++++++------
>   1 file changed, 7 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
> index c434b96e6204..cd48d2896b7e 100644
> --- a/xen/arch/arm/static-shmem.c
> +++ b/xen/arch/arm/static-shmem.c
> @@ -324,12 +324,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>               printk("%pd: static shared memory bank not found: '%s'", d, shm_id);
>               return -ENOENT;
>           }
> -        if ( is_domain_direct_mapped(d) && (pbase != gbase) )
> -        {
> -            printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
> -                   d, pbase, gbase);
> -            return -EINVAL;
> -        }
>   
>           pbase = boot_shm_bank->start;
>           psize = boot_shm_bank->size;
> @@ -353,6 +347,13 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>               /* guest phys address is after host phys address */
>               gbase = dt_read_paddr(cells + addr_cells, addr_cells);
>   
> +            if ( is_domain_direct_mapped(d) && (pbase != gbase) )
> +            {
> +                printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
> +                       d, pbase, gbase);
> +                return -EINVAL;
> +            }
> +

Before this patch, the check was done globally. I guess the intention 
was for it to cover both parts of the "if". But now you only have it 
when "paddr" is specified in the DT.

 From a brief look at the code, I can't figure out why we don't need a 
similar check on the else path. Is this because it is guaranteed that 
paddr == gaddr?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 11:58:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 11:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743723.1150680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtx0-0007R5-D0; Wed, 19 Jun 2024 11:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743723.1150680; Wed, 19 Jun 2024 11:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJtx0-0007Qy-9n; Wed, 19 Jun 2024 11:58:06 +0000
Received: by outflank-mailman (input) for mailman id 743723;
 Wed, 19 Jun 2024 11:58:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJtwz-0007Qs-4Y
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 11:58:05 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2e7670d8-2e33-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 13:58:03 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2ebec2f11b7so64178201fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 04:58:03 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855f0b705sm115017995ad.213.2024.06.19.04.57.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 04:58:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e7670d8-2e33-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718798282; x=1719403082; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=7js8CSmtozYgUq5gAEvDke0ZuNKLOKBar2LduVigS+U=;
        b=Z6wWpu1MP1YZaRC5uAhEBfAaK5kff84jzIrppT9GmSS9+dfPd+4BjAqsLL1tU9eZxq
         PY0GVbbe+UqEwmBoFmrL/BwHd56oiwVUMWYQDvc8N5PfZr1Ssy4FLNlB3L2sgi9VacFD
         zJTFbVVO5DEX9/lztPuSgYk9OVN9707vzpB9u0bNfOmEyXsVa9gPw+/bC7Ug9paUEZkT
         tZ7rBzwyv6da+Gz2esJ0yUYpDS5okk7FITJxgsC7wtbA0s/9Pa96aQ71UxQLZxR+CfY0
         VwpxSyQvB19xNCfgKzgIHabiKFFYy/4bFm4JnMwrvW/aSyUojJLkHNd6L3Z9ukVKQNvA
         7x2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718798282; x=1719403082;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7js8CSmtozYgUq5gAEvDke0ZuNKLOKBar2LduVigS+U=;
        b=Wzp+qNyLn4TlBVmjK94vF5g1ZFBn4zhfeWPYXDY4a02NFliE0ezMEd6hjyvxcSyILb
         N/YMqlei4nG+D9wTuodxndJJGBvNCGOm05OJCaRzrXTd5pHrkIGFMRx7cwhkGrCiFS2V
         Ia1pMyikYa8k31wZ7ZoAB8j7mvrF6xCHf9Syg6FV1J6EhO8VOJ8LirUoncbp2wsuxKFC
         /2FwK0PMIRbohT8sdUqtSzBy82COkFV2KNG5hSCjJqIO4jbz71gDdPfCTltFzrvGWgX8
         MZKD1Z55fIunxKQVCpLGCCahrbW/JAfRXvjTjIQky9H9Mrv4w1T2HIAe3GSMPZVCRaxp
         SABA==
X-Forwarded-Encrypted: i=1; AJvYcCUJJk7EJvaRYyU9kh46r5g426kkLGdNHmVTJzVVZK7ep1UaK5d2L/97ZkGByCBHEnvew6k8tO4u9soig5jL7oPWlZuq9M7G0cxwBe8Dyvo=
X-Gm-Message-State: AOJu0Yy/rh5estDdE1yBmEJBRUFrPen35108fhJOgYQd11VvpaMZX3I+
	zdAy+LFdpJl2Un4uBEDTbelJgJSiAMqEQE7t9LK9SNmlr+XOWxaG2WLvLtqOyw==
X-Google-Smtp-Source: AGHT+IFheFwHv0oYp3HK6+0nVL6YynBxO2mgnmhLzvHXscUefps3jvp3qNsqzpxsnidtjOjL2GHGAw==
X-Received: by 2002:a2e:b1c2:0:b0:2eb:e47c:6d2a with SMTP id 38308e7fff4ca-2ec3cecc2abmr14668881fa.28.1718798282354;
        Wed, 19 Jun 2024 04:58:02 -0700 (PDT)
Message-ID: <a8fd0504-23b7-473a-9056-6b51c20e6468@suse.com>
Date: Wed, 19 Jun 2024 13:57:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19?] tools/libs/light: Fix nic->vlan memory
 allocation
To: Leigh Brown <leigh@solinno.co.uk>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Anthony Perard <anthony@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Jason Andryuk <jandryuk@gmail.com>, Jason Andryuk <jason.andryuk@amd.com>,
 Xen Devel <xen-devel@lists.xenproject.org>
References: <20240520164400.15740-1-leigh@solinno.co.uk>
 <c600e5e8-d169-417c-bc02-d33e84dca0fb@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <c600e5e8-d169-417c-bc02-d33e84dca0fb@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.05.2024 19:08, Jason Andryuk wrote:
> On 2024-05-20 12:44, Leigh Brown wrote:
>> After the following commit:
>> 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to libxl_device_nic")
>> xl list -l aborts with a double free error if a domain has at least
>> one vif defined:
>>
>>    $ sudo xl list -l
>>    free(): double free detected in tcache 2
>>    Aborted
>>
>> Originally, the vlan field was called vid and was defined as an integer.
>> It was appropriate to call libxl__xs_read_checked() with gc passed as
>> the string data was copied to a different variable.  However, the final
>> version uses a string data type and the call should have been changed
>> to use NOGC instead of gc to allow that data to live past the gc
>> controlled lifetime, in line with the other string fields.
>>
>> This patch makes the change to pass NOGC instead of gc and moves the
>> new code to be next to the other string fields (fixing a couple of
>> errant tabs along the way), as recommended by Jason.
>>
>> Fixes: 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to libxl_device_nic")
>> Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
> 
> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>

I notice this wasn't Cc-ed to the maintainer, which is likely why there
hasn't been an ack yet. Anthony, any thoughts?

Further, at this point, bug fix or not, it would likely also need a release
ack. Oleksii, thoughts?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 12:04:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 12:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743734.1150690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu3J-0001Kk-AQ; Wed, 19 Jun 2024 12:04:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743734.1150690; Wed, 19 Jun 2024 12:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu3J-0001Kd-7Y; Wed, 19 Jun 2024 12:04:37 +0000
Received: by outflank-mailman (input) for mailman id 743734;
 Wed, 19 Jun 2024 12:04:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJu3I-0001KX-1U
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 12:04:36 +0000
Received: from mail-lj1-x229.google.com (mail-lj1-x229.google.com
 [2a00:1450:4864:20::229])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1784952d-2e34-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 14:04:34 +0200 (CEST)
Received: by mail-lj1-x229.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so115193861fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 05:04:34 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e5be4bsm115284975ad.19.2024.06.19.05.04.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 05:04:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1784952d-2e34-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718798673; x=1719403473; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=A4ae1nad0OEe4iAybnrKh8rkxZpqVuSSq7udvGtoVcA=;
        b=A5cWhMA4+x/WnEPB7WN9MyVND+szmzdlb4o9+6WqW6rv8gJ/UP1ctE70Wo8PWLGfbH
         cpcwxs7z4E3I+ahRGUvmaa04ir0cI/hFUoSlGBindGTNInsU+nmPrwh+3xK9f8jzFOSd
         BD7buSVnY7AxDnJE3sf9V9NODhackjzMtGETUywzaJoV89v/E+niYW/Y3YHoG6fN6smP
         jKOnFMpSaRFgfnfCMpbxlXFXri1zR5qKoGClu0dLU7qxqstsk8hw4O5+u431isYB7cI0
         itnw2d2SGqDz1aBIpbaqARVjYq4jWVhoqSUNbex2NIXM9KrM27zorFqFmPO+M+JYU/Lj
         h8ig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718798673; x=1719403473;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=A4ae1nad0OEe4iAybnrKh8rkxZpqVuSSq7udvGtoVcA=;
        b=H6kX9iP5ml/zOdhBMEUS1M0XbKW7f2/Fhiod2z8YL3FV6jLoAlDmI7v6+pRIiLW3YB
         6ys10j+zcGfYwo6/6nDsMtoBIoSfv4yxoghvTOV1X6dG2ipa9BRlI5y5Ec3CXE/acFqY
         0dKnwFDujIAz7tKDccPH4/vd33ugWFtmPFJZ5eQEHKl0thIsXuEVwzWiV0nCqumLGigf
         +SCFbvR5yw5p74R6GbtHHId4ylsduqsFk/9p1gJoL9h8v+HGjjmgryMoI2/FxvmFuOTd
         WsguYt4r3EdnS+YvmD8B4Lrfc7AF2AhDPmdhJGiJ9ZK5CXsq0Id1YRp4JMGgxHdf4ASO
         0wiw==
X-Forwarded-Encrypted: i=1; AJvYcCW477F/wQhFFzKUrt0V5ra2droGaWXvbEadMIaLYq4PdjhXXY8VF0t6IY4gY/RIzPTY0JaicAVKXbgLRHV163ZfHCaRf70f8O9KY9hZVWs=
X-Gm-Message-State: AOJu0YzpV7eSNE1CDWCIg3vUPft2U1tH2cXEfdVxsNcHBAYyyhgmnWnm
	hteh2nlbrux9uZ0cJbDWNw+3YooyLhIWD2d3+yFK63urJwt2m+AhKG7UCrRmiQ==
X-Google-Smtp-Source: AGHT+IH/qitZD0GhN2iiMju5DZO40IQT2LZpqLrw94Zh0xhUiP1MRBfJM0jFTzyRN2zFusHNjpIggQ==
X-Received: by 2002:a2e:87c7:0:b0:2ec:4267:191e with SMTP id 38308e7fff4ca-2ec426722e2mr4311631fa.18.1718798673468;
        Wed, 19 Jun 2024 05:04:33 -0700 (PDT)
Message-ID: <14188e5a-a641-4351-80b3-f69969c4ddba@suse.com>
Date: Wed, 19 Jun 2024 14:04:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] loadpolicy: Verifies memory allocation during policy
 loading
To: yskelg@gmail.com, "Daniel P. Smith" <dpsmith@apertussolutions.com>
Cc: Austin Kim <austindh.kim@gmail.com>, shjy180909@gmail.com,
 xen-devel@lists.xenproject.org, Anthony PERARD <anthony@xenproject.org>
References: <20240527125438.66349-1-yskelg@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240527125438.66349-1-yskelg@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.05.2024 14:54, yskelg@gmail.com wrote:
> --- a/tools/flask/utils/loadpolicy.c
> +++ b/tools/flask/utils/loadpolicy.c
> @@ -58,6 +58,11 @@ int main (int argCnt, const char *args[])
>      }
>  
>      polMemCp = malloc(info.st_size);
> +    if (!polMemCp) {
> +        fprintf(stderr, "Error occurred allocating %ld bytes\n", info.st_size);
> +        ret = -ENOMEM;

I don't think -ENOMEM is valid to use here; see the neighboring code.
Nevertheless, it is correct that a check is needed here.

As to %ld - is that portably usable with an off_t value?

In any event, Daniel, really your turn to review / ack. I'm looking at this
merely because I found this and another bugfix still sit in waiting-for-ack
state.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 12:06:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 12:06:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743741.1150700 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu56-0001rw-Jq; Wed, 19 Jun 2024 12:06:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743741.1150700; Wed, 19 Jun 2024 12:06:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu56-0001rp-Ge; Wed, 19 Jun 2024 12:06:28 +0000
Received: by outflank-mailman (input) for mailman id 743741;
 Wed, 19 Jun 2024 12:06:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJu55-0001rT-0G; Wed, 19 Jun 2024 12:06:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJu54-0002RF-HM; Wed, 19 Jun 2024 12:06:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJu54-0002rm-3Y; Wed, 19 Jun 2024 12:06:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJu54-0005DP-30; Wed, 19 Jun 2024 12:06:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l9cJXViQIBFUU8VLzNWCTzrHS57ecg3NcC+5ZT1ASmg=; b=BDAPmiLSg4IPL/LbR9LqNYfM41
	IRtL0Ko4RN2Tj5tS2aNBINVyE0EKsT04eWBmidEMUggGbWqZKZOYD+ULvyeRWEu8kVFMoi08Wmcsw
	boo1rvi9L4ssbifJOdXtRVvtfXCCICs02M16Y/Su/c/mrUco7xXJ4ALvNJUOtro2xJEU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186406-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186406: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=92e5605a199efbaee59fb19e15d6cc2103a04ec2
X-Osstest-Versions-That:
    linux=46d1907d1caaaaa422ae814c52065f243caa010a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 12:06:26 +0000

flight 186406 linux-linus real [real]
flight 186410 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186406/
http://logs.test-lab.xenproject.org/osstest/logs/186410/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-arndale   8 xen-boot            fail pass in 186410-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186410 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186410 never pass
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 186398
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186398
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186398
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186398
 test-armhf-armhf-examine      8 reboot                       fail  like 186398
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 186398
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186398
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186398
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                92e5605a199efbaee59fb19e15d6cc2103a04ec2
baseline version:
 linux                46d1907d1caaaaa422ae814c52065f243caa010a

Last test of basis   186398  2024-06-18 16:13:58 Z    0 days
Testing same since   186406  2024-06-19 02:45:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Amer Al Shanawany <amer.shanawany@gmail.com>
  John Hubbard <jhubbard@nvidia.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Shuah Khan <skhan@linuxfoundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   46d1907d1caa..92e5605a199e  92e5605a199efbaee59fb19e15d6cc2103a04ec2 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 12:06:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 12:06:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743746.1150710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu5P-0002IS-Rm; Wed, 19 Jun 2024 12:06:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743746.1150710; Wed, 19 Jun 2024 12:06:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu5P-0002IL-Op; Wed, 19 Jun 2024 12:06:47 +0000
Received: by outflank-mailman (input) for mailman id 743746;
 Wed, 19 Jun 2024 12:06:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5jEq=NV=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sJu5O-0002AW-Eo
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 12:06:46 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20600.outbound.protection.outlook.com
 [2a01:111:f403:2417::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 65252f33-2e34-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 14:06:45 +0200 (CEST)
Received: from PH7P221CA0001.NAMP221.PROD.OUTLOOK.COM (2603:10b6:510:32a::7)
 by IA1PR12MB8263.namprd12.prod.outlook.com (2603:10b6:208:3f8::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Wed, 19 Jun
 2024 12:06:34 +0000
Received: from CY4PEPF0000EE35.namprd05.prod.outlook.com
 (2603:10b6:510:32a:cafe::26) by PH7P221CA0001.outlook.office365.com
 (2603:10b6:510:32a::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.33 via Frontend
 Transport; Wed, 19 Jun 2024 12:06:34 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000EE35.mail.protection.outlook.com (10.167.242.41) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Wed, 19 Jun 2024 12:06:33 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 19 Jun
 2024 07:06:30 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 19 Jun
 2024 07:06:30 -0500
Received: from [10.252.147.188] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39 via Frontend
 Transport; Wed, 19 Jun 2024 07:06:29 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65252f33-2e34-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Fc0GhekVQZU5qYJyRs/SUr4Xu0dEViyzikumWRe8q3mgxyGyTcMJjBylFNEtUZZL24Cxx5vMfLJxWpEWDkJl2HbQ/qgcIGjO56YPdg+sN0AfTBTqjYZ+DxOu/2TrvoZjBWLbLLBeZivM7jIx0k7hbMCBtz+EWVM+vcXhCZQovhye4eLqjRjykmSs/+nRhg1MNbFICwBEtbgwn/2+K5DrllQ0nTPF8tSPKiFKFUAEvKbWsMmPYyCoGwsPjERwf8D7S6zeMF1JkwSsf5lnNoaLPQGvRMvpx4unbOdzUsR/pbyLDB4YiiQixwbPqfEuhN8pqf0ti/rtGYu+PcUbXayfmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+RtgDtAPbd+xYchkX/zQp44H7l0HbIK1trRzkvXzwJI=;
 b=GCIMNP5dE6In674IXqlyodZ2loZ3MYpSpU3fineVLhujYliB6fyVmYBBO+HUH8oJgKYHWum/PTfcWEUu59tkdWRyPIEhzMCH11xKmuQZSvwSyD1GdvGGhF2Gl8gz4e3GVJbMFqSUzGDT0BlX7gbRoLb2dfYRyq7wE15KLoSwMNi42l1o6R66z+1gzB2ZiT5KmsrjytEjRUrVdgf+OOovGtBrvMkN9++X+xVS8rzHTP2q9TwEGg/NlsvJpMh0HTZ1lKngxGldZDhoDUM7dk1r1sA6iWMFLGtZ9ctZP7fn02tDG/GZmbTlT3+ntpAAFu0An1saP407hUMlXDdFU6W76w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=+RtgDtAPbd+xYchkX/zQp44H7l0HbIK1trRzkvXzwJI=;
 b=mKvEUJlazXtVpgCaUIBiAgyiEBhDn0bWCT0qkSb5SR6mh2wISj41PnLsY8vq9QTEeCiz8dkJNbb4zlvDimd0Yf5Cqp0rspWquFpGwmhHzh8ZcsOdC3hFNmH3brBSqhoj1QCmuliW5xQoYACSfey23yN1dxShVqVPnDpNLsIqp3U=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <99fb367a-7ceb-4769-8120-a06474e98fb3@amd.com>
Date: Wed, 19 Jun 2024 14:06:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
 <82790448-dd2f-4299-ae3d-938080ee5e19@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <82790448-dd2f-4299-ae3d-938080ee5e19@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000EE35:EE_|IA1PR12MB8263:EE_
X-MS-Office365-Filtering-Correlation-Id: 9ec3a82c-0a13-4de6-f7e6-08dc90584324
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 Jun 2024 12:06:33.6535
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9ec3a82c-0a13-4de6-f7e6-08dc90584324
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000EE35.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB8263

Hi Julien,

On 19/06/2024 13:55, Julien Grall wrote:
> 
> 
> Hi Michal,
> 
> On 19/06/2024 07:46, Michal Orzel wrote:
>> Building Xen with CONFIG_STATIC_SHM=y results in a build failure:
>>
>> arch/arm/static-shmem.c: In function 'process_shm':
>> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used uninitialized [-Werror=maybe-uninitialized]
>>    327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>>    305 |         paddr_t gbase, pbase, psize;
>>
>> This is because the commit cb1ddafdc573 adds a check referencing
>> gbase/pbase variables which were not yet assigned a value. Fix it.
>>
>> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be direct-mapped for direct-mapped domains")
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>> Rationale for 4.19: this patch fixes a build failure reported by CI:
>> https://gitlab.com/xen-project/xen/-/jobs/7131807878
>> ---
>>   xen/arch/arm/static-shmem.c | 13 +++++++------
>>   1 file changed, 7 insertions(+), 6 deletions(-)
>>
>> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
>> index c434b96e6204..cd48d2896b7e 100644
>> --- a/xen/arch/arm/static-shmem.c
>> +++ b/xen/arch/arm/static-shmem.c
>> @@ -324,12 +324,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>               printk("%pd: static shared memory bank not found: '%s'", d, shm_id);
>>               return -ENOENT;
>>           }
>> -        if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>> -        {
>> -            printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
>> -                   d, pbase, gbase);
>> -            return -EINVAL;
>> -        }
>>
>>           pbase = boot_shm_bank->start;
>>           psize = boot_shm_bank->size;
>> @@ -353,6 +347,13 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>               /* guest phys address is after host phys address */
>>               gbase = dt_read_paddr(cells + addr_cells, addr_cells);
>>
>> +            if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>> +            {
>> +                printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
>> +                       d, pbase, gbase);
>> +                return -EINVAL;
>> +            }
>> +
> 
> Before this patch, the check was done globally. I guess the intention
> was for it to cover both parts of the "if". But now, you only have it
> when "paddr" is specified in the DT.
> 
> From a brief look at the code, I can't figure out why we don't need a
> similar check on the else path. Is this because it is guaranteed that
> paddr == gaddr?
The reason why I added this check only in the first case is what the doc states.
It says that if a domain is 1:1 mapped, the shmem should also be 1:1, i.e. pbase == gbase.
In the else case, pbase is omitted, so a user cannot know and has no guarantee what the
backing physical address will be. Thus, my reading of the doc is that for 1:1 guests the
user needs to specify pbase == gbase.

~Michal



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 12:07:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 12:07:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743755.1150719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu5t-0002tS-6W; Wed, 19 Jun 2024 12:07:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743755.1150719; Wed, 19 Jun 2024 12:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJu5t-0002tL-41; Wed, 19 Jun 2024 12:07:17 +0000
Received: by outflank-mailman (input) for mailman id 743755;
 Wed, 19 Jun 2024 12:07:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJu5r-0001r1-HB
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 12:07:15 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 76c7e2a2-2e34-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 14:07:13 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cdbso47218841fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 05:07:13 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c709daa42fsm1640971a91.0.2024.06.19.05.07.08
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 05:07:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76c7e2a2-2e34-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718798833; x=1719403633; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=nKnF4xZRu1H+OC8kbromdYm/WrmdId7lQrk1LdUmNIA=;
        b=GH1SqoP6uxT80NDzDn9+XDxwQul2H89qBgFFHdHSRTq4oM06Pd2eaVs/bLzZN4IRg0
         k32H3+Cwtcc9I5kO/KGwdLHKJ4kLlvyoeXywSfW4JSOAXuzXAUu8+lxpW2pOVfiNxIX1
         iN3mUbVQDFWDfxJOfkzGvlrOWQenld1KKGWyULoBf4eKPEG7WztAEXN9dyaOEKTdNQiR
         TY+EoPkEZC/uUvwXu8yEPFUJ1baPvGyDEbYfwWa7vWmlrVU30h6Me37z/ZMAUINyorVK
         ayV4Q6gDk7IJhO6W1d9VgiLnMtBcol1DvF0lF+2PTv/PyYS+TGN+w3L0gm37L8qMfxaY
         3Xaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718798833; x=1719403633;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nKnF4xZRu1H+OC8kbromdYm/WrmdId7lQrk1LdUmNIA=;
        b=t1RhimlIiNJIYV9btoz3Yjv1s6xG0Hz4XwnN897D2R1us/ycPbo/BKEyffzpJDIRhg
         TCAzB7a9acCnlfhZf9BEKCSiIh0J3oMtYNmqh5rg1D2PwIpzwSjkEUHZYwHUAzbFvEfq
         G+LOmRxFtqf7fUEbmSHDsrlpHQmByfldjBCzB3BR1MetADI28CpBoVGtzfAkdDP3JVLM
         9LLqGbSphSTyzqx+p9DI/jr0pQwDSHPqLLicJ8Iuc6z+/202pU79Z/jJQ4Yy72HXb7o3
         9VfoTzoEb9W0zp9PujikyXD0bSwAJUvyorU1PRM3F0yd8iJ3+9GxiUONPosQ2tv+5JP6
         pJSA==
X-Gm-Message-State: AOJu0Ywf71RVXtNzA69bHhjB9CYNZ5tWglHoJB17rEPAWzS9uTYB0TCk
	iSwTfkiHXST8+FmSg1d439tU0e6u9bY49AUbXK6ItfxS5/TJsDTbC0fMaGN7ow==
X-Google-Smtp-Source: AGHT+IEJWz7W+oSkKluCyJaGfVLWDakSZxCbhJCULRf6Amuo4yHhB8dDaIuOoQsbgw7IbXbcOQvw7w==
X-Received: by 2002:a2e:a787:0:b0:2ec:3f82:f9a2 with SMTP id 38308e7fff4ca-2ec3f82f9dfmr13055611fa.0.1718798831367;
        Wed, 19 Jun 2024 05:07:11 -0700 (PDT)
Message-ID: <7de20763-b9bc-4dfc-b250-8f83c42e9e16@suse.com>
Date: Wed, 19 Jun 2024 14:07:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] hotplug: Restore block-tap phy compatibility
To: Anthony PERARD <anthony@xenproject.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: xen-devel@lists.xenproject.org, Jason Andryuk <jason.andryuk@amd.com>,
 Jason Andryuk <jandryuk@gmail.com>
References: <20240516022212.5034-1-jandryuk@gmail.com>
 <64083e01-edf1-4395-a9d7-82e82d220de7@suse.com>
 <9678073f-82d5-4402-b5a0-e24985c1446b@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <9678073f-82d5-4402-b5a0-e24985c1446b@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 16.05.2024 15:52, Jason Andryuk wrote:
> On 2024-05-16 03:41, Jan Beulich wrote:
>> On 16.05.2024 04:22, Jason Andryuk wrote:
>>> From: Jason Andryuk <jason.andryuk@amd.com>
>>>
>>> From: Jason Andryuk <jason.andryuk@amd.com>
>>
>> Two identical From: (also in another patch of yours, while in yet another one
>> you have two _different_ ones, when only one will survive into the eventual
>> commit anyway)?
> 
> Sorry about that.  Since I was sending from my gmail account, I thought
> I needed explicit From: lines to ensure the authorship was listed with
> amd.com.  I generated the patches with `git format-patch --from` to get
> the explicit From: lines, and then sent them with `git send-email`.  The
> send-email step then inserted the additional lines.  I guess it added
> the amd.com From: since I had changed to that address in .gitconfig.
> 
>>> backendtype=phy using the blktap kernel module needs to use write_dev,
>>> but tapback can't support that.  tapback should perform better, but make
>>> the script compatible with the old kernel module again.
>>>
>>> Signed-off-by: Jason Andryuk <jason.andryuk@amd.com>
>>
>> Should there be a Fixes: tag here?
> 
> That makes sense.
> 
> Fixes: 76a484193d ("hotplug: Update block-tap")

Surely this wants going into 4.19? Thus - Anthony, Oleksii?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 12:34:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 12:34:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743789.1150769 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJuW7-0000qG-K9; Wed, 19 Jun 2024 12:34:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743789.1150769; Wed, 19 Jun 2024 12:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJuW7-0000q9-Hh; Wed, 19 Jun 2024 12:34:23 +0000
Received: by outflank-mailman (input) for mailman id 743789;
 Wed, 19 Jun 2024 12:34:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sJuW6-0000q3-Ea
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 12:34:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJuW5-00035u-UX; Wed, 19 Jun 2024 12:34:21 +0000
Received: from [15.248.3.90] (helo=[10.24.67.26])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sJuW5-0005Ss-NX; Wed, 19 Jun 2024 12:34:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=KgQ6dB12ouTGXDm+tZ/XwVHqmjS9BEjxZMeNYcchX5A=; b=ek1qGwTNyji1AMYiYzYOPy16SM
	lIKPVf9VtH0+KlhYlR1QVLGeVWxVjqOnXiS+Gj9T0XAnQIMQUzUJ4lvrN3fFAV5GIm4+yyNKgvvDj
	x1Ch0VpFpvP3GDhvv/0fq5gITW91EEMjP3GaDMfSUUA4ppGo0PvpU5jfF5k4UKXSO91w=;
Message-ID: <7bffdbeb-0219-4ec7-a70f-a9fa55cd6b5e@xen.org>
Date: Wed, 19 Jun 2024 13:34:19 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
 <82790448-dd2f-4299-ae3d-938080ee5e19@xen.org>
 <99fb367a-7ceb-4769-8120-a06474e98fb3@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <99fb367a-7ceb-4769-8120-a06474e98fb3@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 19/06/2024 13:06, Michal Orzel wrote:
> Hi Julien,
> 
> On 19/06/2024 13:55, Julien Grall wrote:
>>
>>
>> Hi Michal,
>>
>> On 19/06/2024 07:46, Michal Orzel wrote:
>>> Building Xen with CONFIG_STATIC_SHM=y results in a build failure:
>>>
>>> arch/arm/static-shmem.c: In function 'process_shm':
>>> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used uninitialized [-Werror=maybe-uninitialized]
>>>     327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>>> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>>>     305 |         paddr_t gbase, pbase, psize;
>>>
>>> This is because the commit cb1ddafdc573 adds a check referencing
>>> gbase/pbase variables which were not yet assigned a value. Fix it.
>>>
>>> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be direct-mapped for direct-mapped domains")
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>> ---
>>> Rationale for 4.19: this patch fixes a build failure reported by CI:
>>> https://gitlab.com/xen-project/xen/-/jobs/7131807878
>>> ---
>>>    xen/arch/arm/static-shmem.c | 13 +++++++------
>>>    1 file changed, 7 insertions(+), 6 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
>>> index c434b96e6204..cd48d2896b7e 100644
>>> --- a/xen/arch/arm/static-shmem.c
>>> +++ b/xen/arch/arm/static-shmem.c
>>> @@ -324,12 +324,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>>                printk("%pd: static shared memory bank not found: '%s'", d, shm_id);
>>>                return -ENOENT;
>>>            }
>>> -        if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>>> -        {
>>> -            printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
>>> -                   d, pbase, gbase);
>>> -            return -EINVAL;
>>> -        }
>>>
>>>            pbase = boot_shm_bank->start;
>>>            psize = boot_shm_bank->size;
>>> @@ -353,6 +347,13 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>>                /* guest phys address is after host phys address */
>>>                gbase = dt_read_paddr(cells + addr_cells, addr_cells);
>>>
>>> +            if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>>> +            {
>>> +                printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
>>> +                       d, pbase, gbase);
>>> +                return -EINVAL;
>>> +            }
>>> +
>>
>> Before this patch, the check was done globally. I guess the intention
>> was for it to cover both parts of the "if". But now, you only have it
>> when "paddr" is specified in the DT.
>>
>> From a brief look at the code, I can't figure out why we don't need a
>> similar check on the else path. Is this because it is guaranteed that
>> paddr == gaddr?
> The reason why I added this check only in the first case is what the doc states.
> It says that if a domain is 1:1 mapped, the shmem should also be 1:1, i.e. pbase == gbase.
> In the else case, pbase is omitted, so a user cannot know and has no guarantee what
> the backing physical address will be.

The property "direct-map" has the following definition:

"- direct-map

     Only available when statically allocated memory is used for the domain.
     An empty property to request the memory of the domain to be
     direct-map (guest physical address == physical address).
"

So I think it would be fair for someone to interpret it as meaning the 
shared memory would also be 1:1 mapped.

> Thus, my reading of the doc is that for 1:1 guests the user needs to specify pbase == gbase.

See above; I think this is not 100% clear. I am concerned that someone 
may try to use the version where only the guest address is specified.

It would likely be hard for them to realize that the extended regions 
would not work properly. So something needs to be done.

I don't have any preference on how to address this. It could simply be a 
check in Xen requiring that both "gaddr" and "paddr" are specified for a 
direct-mapped domain.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 12:57:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 12:57:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743800.1150780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJusl-0004d0-Dl; Wed, 19 Jun 2024 12:57:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743800.1150780; Wed, 19 Jun 2024 12:57:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJusl-0004ct-AV; Wed, 19 Jun 2024 12:57:47 +0000
Received: by outflank-mailman (input) for mailman id 743800;
 Wed, 19 Jun 2024 12:57:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KjC9=NV=bounce.vates.tech=bounce-md_30504962.6672d5c6.v1-73edaa175ba94638a1fbddc1cbefd527@srs-se1.protection.inumbo.net>)
 id 1sJusj-0004aL-Cv
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 12:57:45 +0000
Received: from mail177-18.suw61.mandrillapp.com
 (mail177-18.suw61.mandrillapp.com [198.2.177.18])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8470a2a6-2e3b-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 14:57:43 +0200 (CEST)
Received: from pmta14.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail177-18.suw61.mandrillapp.com (Mailchimp) with ESMTP id
 4W43Yy46kSzCf9KJW
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 12:57:42 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 73edaa175ba94638a1fbddc1cbefd527; Wed, 19 Jun 2024 12:57:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8470a2a6-2e3b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718801862; x=1719062362;
	bh=QAebLaPD8+cupYVRUnIU5BBiafDK5MduiHxYSEAK7j8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=1ip7cld0l3PvvJ2DBuacNyUixIU4aHWFy07QAujXMeQ+xNy7dm8iuq6gIlUTTBrgJ
	 7bWQVPp5bpPSMFipQvRNCnVy+Ry5PR2g5d8wJFNPE3I95lUIxaJF62YNq3BU7KmSdX
	 kPUxdKNVucWT1mjp9vnPdgvCyb6R4Jr9FAR65Yv1/6l36wPwp4yB/z5oQ6VwhMIaJp
	 DC7GQ4l8CBFADvFHhhEd6alx17lprVaEdGbICaTafvOV3YJbS2aSd3/tD5UHT3z3l5
	 frGdoTje0VHn+JSXMviELDu6LZNQnAxPXf0lWD4pwMSStRoi48MPEEuHCAHvpgFk52
	 NPUt+0YJKtHZg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718801862; x=1719062362; i=anthony.perard@vates.tech;
	bh=QAebLaPD8+cupYVRUnIU5BBiafDK5MduiHxYSEAK7j8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=UZuO3lvtfRZ6jcWHx2QKFa3whcv/gcM7hkEdh33iINrgD7PTSQguSf6dRxCexpN6B
	 FJcGiSdHf6Xbsy15ft4lKx7Ak6ieBtcobXXp8VhCu5Yk2Kkej9gVGCl8S4GBkzhD5H
	 uJptj8AYB8H66ozRePlr5O+TEjByi7zEL0a+eosi4WPZmlZAYqnMxHfiz8Zg9ORftx
	 u39137D50c5MKiCMMg9DXXIbkdOVTWAk107jxaMhbNxaaYy4VWjCH079yETr7bIQpp
	 lmCO+T2CF3hG3O1R0LQyB13cx7n1KVKF0Zmn6dDEWNsmPjbhSgM1pUK1kkI7rqvO7p
	 K2GlVGjbT4J9Q==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH]=20tools/libs/light:=20Fix=20nic->vlan=20memory=20allocation?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718801860969
To: Jason Andryuk <jason.andryuk@amd.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Leigh Brown <leigh@solinno.co.uk>, Xen Devel <xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Jason Andryuk <jandryuk@gmail.com>, Jan Beulich <jbeulich@suse.com>
Message-Id: <ZnLVxB2XuWL9UKWI@l14>
References: <20240520164400.15740-1-leigh@solinno.co.uk> <c600e5e8-d169-417c-bc02-d33e84dca0fb@amd.com>
In-Reply-To: <c600e5e8-d169-417c-bc02-d33e84dca0fb@amd.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.73edaa175ba94638a1fbddc1cbefd527?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240619:md
Date: Wed, 19 Jun 2024 12:57:42 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Mon, May 20, 2024 at 01:08:03PM -0400, Jason Andryuk wrote:
> On 2024-05-20 12:44, Leigh Brown wrote:
> > After the following commit:
> > 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to libxl_device_nic")
> > xl list -l aborts with a double free error if a domain has at least
> > one vif defined:
> > 
> >    $ sudo xl list -l
> >    free(): double free detected in tcache 2
> >    Aborted
> > 
> > Originally, the vlan field was called vid and was defined as an integer.
> > It was appropriate to call libxl__xs_read_checked() with gc passed as
> > the string data was copied to a different variable.  However, the final
> > version uses a string data type and the call should have been changed
> > to use NOGC instead of gc to allow that data to live past the gc
> > controlled lifetime, in line with the other string fields.
> > 
> > This patch makes the change to pass NOGC instead of gc and moves the
> > new code to be next to the other string fields (fixing a couple of
> > errant tabs along the way), as recommended by Jason.
> > 
> > Fixes: 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to libxl_device_nic")
> > Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
> 
> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 13:24:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 13:24:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743809.1150789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJvI3-0000lJ-CC; Wed, 19 Jun 2024 13:23:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743809.1150789; Wed, 19 Jun 2024 13:23:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJvI3-0000lC-9I; Wed, 19 Jun 2024 13:23:55 +0000
Received: by outflank-mailman (input) for mailman id 743809;
 Wed, 19 Jun 2024 13:23:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JaCz=NV=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sJvI1-0000l6-8w
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 13:23:53 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2aa99688-2e3f-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 15:23:50 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 632A64EE0738;
 Wed, 19 Jun 2024 15:23:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2aa99688-2e3f-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Wed, 19 Jun 2024 15:23:50 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Xen Devel <xen-devel@lists.xenproject.org>
Cc: Jbeulich <jbeulich@suse.com>, Andrew Cooper3
 <andrew.cooper3@citrix.com>, Roger Pau <roger.pau@citrix.com>, Consulting
 <consulting@bugseng.com>
Subject: MISRA C Rule 5.3 violation - shadowing in mctelem.c
Message-ID: <f351f904fab43f88396b3ae1b5d64e95@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

Hi all,

I was looking at the shadowing due to the struct identifier and the 
local variables "mctctl" in x86/cpu/mcheck/mctelem.c (see [1], the 
second report). This kind of shadowing seems very intentional, and the 
initial naive approach I devised was to simply rename the local 
variables.
This, however, results in build breakages, as sometimes the shadowed 
name seems to be used for accessing the global struct (unless I'm 
missing something), and as a result changing the name of the locals is 
not possible, at least not without further modifications to this file, 
which aren't obvious to me.

It would be really helpful if you could point me to a way to either:
- avoid the shadowing in some manner that does not occur to me at the
moment; or
- deviate this file, as many similar files in x86/cpu are already
deviated.

What's your opinion on this?

Thanks,
   Nicola

[1] 
https://saas.eclairit.com:3787/fs/var/local/eclair/XEN.ecdf/ECLAIR_normal/staging/X86_64-BUGSENG/latest/PROJECT.ecd;/by_service/MC3R1.R5.3.html

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 13:42:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 13:42:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743819.1150799 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJva9-0003aw-Qt; Wed, 19 Jun 2024 13:42:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743819.1150799; Wed, 19 Jun 2024 13:42:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJva9-0003ap-O9; Wed, 19 Jun 2024 13:42:37 +0000
Received: by outflank-mailman (input) for mailman id 743819;
 Wed, 19 Jun 2024 13:42:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhGR=NV=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sJva9-0003aj-4g
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 13:42:37 +0000
Received: from mail-wr1-x436.google.com (mail-wr1-x436.google.com
 [2a00:1450:4864:20::436])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c96ccc5a-2e41-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 15:42:36 +0200 (CEST)
Received: by mail-wr1-x436.google.com with SMTP id
 ffacd0b85a97d-3632a6437d7so638095f8f.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 06:42:36 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c7a2ae3f36sm2835110a91.13.2024.06.19.06.42.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 06:42:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c96ccc5a-2e41-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718804555; x=1719409355; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=et1j6vvmM1PMcLpImEW2C1YU37Ft6lKHqo+EnZSq9Yw=;
        b=gd2KbCYORAuR5FrwbrC7NYKgHzQXbD1WQ/u4bh/dVNeC0CzYGCGx3AolRKRXpM1VAw
         Yp50Sf1DL2nGJpMmB2foEGvqGo9fizuh012Kt6Ut5M+nnAWXYMKJHXSrzsWJdsR084Tv
         SeIbynOnz+eVqMuVQ4fOvb5ML0Y5itnX4LunOEt9sB1zr5R2msefj/9jkFzf0/S3sFTA
         ZFoWinxrR8GJf3BGw3eXo2EzIJzaddDoCeAsKu77OXjQ+Dwc1cxXpq76swTpnMnRgTKA
         MPdFBena74f+thaSDy7D4MxIMBav0n0P9sVjaZQnVHwoZJvs8HBKuUxYgw0jNpu8Bdmn
         1+5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718804555; x=1719409355;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=et1j6vvmM1PMcLpImEW2C1YU37Ft6lKHqo+EnZSq9Yw=;
        b=mZbJm8xMctSXyHbxu7ZjLIqQdcjymPqwp8WbHWRmAJlU6/TJ+m+9IA4yYuR9XdJ/3E
         0yiTc7dLgqnW28z3Stf/hRk/l92taR5ZS869l6hCqtT32P+i/6TsEiaWqHQQvaC2asZM
         1TKnVONclyfJ3ZfVpzExXR8jLX2fVnSukDKaCHjbdYEXDttuGfz6KG9RaW8Pf+tvXe7k
         MoUIm9/6nKcj+Lh3kL0CvTb5547flIg1yGgz3NJptk8TVZfsJCmfgZKsqIwdZMA+1Pqf
         jjfTDXl86s4WQuc3sHPM+YSUnZOUR00PeiKa2JU/ZNdBksUvJkMCNfKlsdzY8wkI90Tu
         0uqQ==
X-Forwarded-Encrypted: i=1; AJvYcCUaUuPyc3QMNmJMInKUWhK2ack4MpzhJUKIQC02QoOPs0eFEDS0tovVHspLQli7M50fB3eLqlLx/l2vACkhXSe+tmWmBXqwfh//EdkDthA=
X-Gm-Message-State: AOJu0Yz6XNC81mFNXYN9JWq4zXiohegjt3VorRb3389ZwuFMmZjo6h3D
	3Y9jxJdPhe2HQaUzrXuTXjsqzTyKpvMsncu3DEBTOS8U7nO9hBk9QZnmo3BIdA==
X-Google-Smtp-Source: AGHT+IEPH4GnVL/N9nbRQiPuJI+QqGbYUtMdakW8VkrtB6klUMim4f0o8qqdyTPOaEQB3DYdicTgrQ==
X-Received: by 2002:a5d:6d4c:0:b0:360:7856:fa62 with SMTP id ffacd0b85a97d-3609ea6c58emr6029379f8f.15.1718804555442;
        Wed, 19 Jun 2024 06:42:35 -0700 (PDT)
Message-ID: <f14f15ad-7d16-4bec-9edc-82956ccc7bb4@suse.com>
Date: Wed, 19 Jun 2024 15:42:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: MISRA C Rule 5.3 violation - shadowing in mctelem.c
To: Nicola Vetrini <nicola.vetrini@bugseng.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Simone Ballarin <simone.ballarin@bugseng.com>
Cc: Andrew Cooper3 <andrew.cooper3@citrix.com>,
 Roger Pau <roger.pau@citrix.com>, Consulting <consulting@bugseng.com>,
 Xen Devel <xen-devel@lists.xenproject.org>
References: <f351f904fab43f88396b3ae1b5d64e95@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <f351f904fab43f88396b3ae1b5d64e95@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 15:23, Nicola Vetrini wrote:
> I was looking at the shadowing due to the struct identifier and the 
> local variables "mctctl" in x86/cpu/mcheck/mctelem.c (see [1], the 
> second report). This kind of shadowing seems very intentional, and the 
> initial naive approach I devised was to simply rename the local 
> variables.
> This, however, results in build breakages, as sometimes the shadowed 
> name seems to be used for accessing the global struct (unless I'm 
> missing something), and as a result changing the name of the locals is 
> not possible, at least not without further modifications to this file, 
> which aren't obvious to me.
> 
> It would be really helpful if you could point me to a way to either:
> - avoid the shadowing in some manner that does not occur to me at the
> moment; or

Could you please be more specific about the issues you encountered? I
hope you don't expect everyone reading this request of yours to (try to)
redo what you did. The only thing I could vaguely guess is that maybe
you went a little too far with the renaming. Plus, just from looking at
the grep output, did you try to simply move down the file scope variable?
It looks like all shadowing instances are ahead of any uses of the
variable (but I may easily be overlooking an important line contradicting
that pattern).

> - deviate this file, as many similar files in x86/cpu are already 
> deviated.

I question the presence of these in those files. They were apparently all
added when the files were introduced, and said commit - from Simone, acked
by Stefano - came with no justification at all.

Jan
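Jan's suggestion works because C scoping is positional: a local can only hide a file-scope identifier that is declared above it. A minimal sketch under that assumption (deliberately simplified names, not the actual mctelem.c code):

```c
/* A local named like a file-scope variable. Because no file-scope
 * `mctctl` is visible at this point, nothing is hidden here, so
 * MISRA C Rule 5.3 is not violated. */
static int helper(void)
{
    int mctctl = 1;
    return mctctl;
}

/* Declaring the file-scope variable *after* every function that uses a
 * like-named local means the locals never hide it; the rule is
 * satisfied without renaming anything. */
static int mctctl = 2;

static int reader(void)
{
    return mctctl;      /* refers to the file-scope variable */
}
```

Whether this reordering is possible in mctelem.c depends on whether any function using a like-named local also needs the file-scope variable, which is the pattern Jan asks to be checked.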


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 14:18:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 14:18:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743830.1150814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJw9A-0007rb-L2; Wed, 19 Jun 2024 14:18:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743830.1150814; Wed, 19 Jun 2024 14:18:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJw9A-0007rU-GK; Wed, 19 Jun 2024 14:18:48 +0000
Received: by outflank-mailman (input) for mailman id 743830;
 Wed, 19 Jun 2024 14:18:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cBaG=NV=kernel.dk=axboe@srs-se1.protection.inumbo.net>)
 id 1sJw99-0007rO-9M
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 14:18:47 +0000
Received: from mail-pl1-x62c.google.com (mail-pl1-x62c.google.com
 [2607:f8b0:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d5b7e7b8-2e46-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 16:18:45 +0200 (CEST)
Received: by mail-pl1-x62c.google.com with SMTP id
 d9443c01a7336-1f60a17e3e7so5885765ad.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 07:18:45 -0700 (PDT)
Received: from [127.0.0.1] ([198.8.77.157]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705ccb3d2e8sm10689218b3a.107.2024.06.19.07.18.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 07:18:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5b7e7b8-2e46-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=kernel-dk.20230601.gappssmtp.com; s=20230601; t=1718806723; x=1719411523; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:date:message-id:subject
         :references:in-reply-to:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kVFpcyqMw7891fhn8oME2l6O88u8sGFZorCmmUmii2A=;
        b=j6x2HJukA+LwGKqbgYl84QIq24A1eDKXXBzFPF9DO74yParh0zGhAOTe+Hhwh7W4Mm
         iwhMQMRA/vlBX70Vyw5DUREK2j0GHaPj+3XJBKnpSVrpVtb89IqGhARz++q7zegXUgSj
         dTiMSZdOTv+L0E+mPVB+EcWNghrtT2mntr4/cSMTgIdc1ewWPE7HL4Aq34SOyUr+IWlG
         Y6Fnb1lPoJ/gjg/TkoOYtOuJsoGWzGZ8IPSNE3bWWjd/E7phVYZJQn6z5qaFr5NfN65z
         509hjc/CtlZbFSBlzCrDMwyilYjNoEdSQbXb8A6Lf6NDl8dp3L+CZ8jvKKtDUWFsOtef
         pQqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718806723; x=1719411523;
        h=content-transfer-encoding:mime-version:date:message-id:subject
         :references:in-reply-to:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=kVFpcyqMw7891fhn8oME2l6O88u8sGFZorCmmUmii2A=;
        b=l/DhbOCQzAgm3yWmWIT+HM+UdJ6cDd9jOpYjkc1n74tRYyz8dj6WJmIsbIuzQqlIpC
         3l8JoaqVS5FX55h+QtB5cc4NR1WwsdASuJKl1AvZK7x6282PLJIuuC4rrgeD6SIs43ok
         uud1j1J30iYXN5/7LOXY44Hz007LL7usmVnfNx9QGf1C1rgSfd95fw/EWW5dHtinl++b
         UbZ/TskLnGDUAW6X/BO3zaCPhI531ZknkflvRrBoN5Xd6Zk5x4IwAzn32LHU2sZlQLdM
         G2+7lFHXif6MQYBPJq2CSdY05AFIXdzHHRPm6DkAL3fY+iXMfnNvhoNquJywepB6/4J6
         CCHA==
X-Forwarded-Encrypted: i=1; AJvYcCVMQFHcbaxiOtyufyttsvxFX5FOCgW+KJUZ+UBFWa8TlOohVw2KI7chKnehN0+EeMVm/AyPI69Gt+1GGlMC/1PNhCWNG2BLb7hpD6x4LhA=
X-Gm-Message-State: AOJu0YwwaRvxbfXOb5+j6oLhx6rSGC8A24wKoxDXsMsC/i6jQ5wF2RWY
	L0jfNcx5JuYfHCqa0N2s+JWT1l5Xg7+ycknX0QOaukOylpeFdbKSdOHtl3KKKJI=
X-Google-Smtp-Source: AGHT+IEWR0naxsjqG6PjL5n5tEp1Ci8rJdxP3J1CbAApgRxLYFkmM+R0aL8qHW5Lv8+tjuuvrYAkhg==
X-Received: by 2002:a05:6a20:3ca0:b0:1b6:fadd:8862 with SMTP id adf61e73a8af0-1bcbb8ce3e2mr2590711637.6.1718806723107;
        Wed, 19 Jun 2024 07:18:43 -0700 (PDT)
From: Jens Axboe <axboe@kernel.dk>
To: Christoph Hellwig <hch@lst.de>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>, 
 Richard Weinberger <richard@nod.at>, 
 Philipp Reisner <philipp.reisner@linbit.com>, 
 Lars Ellenberg <lars.ellenberg@linbit.com>, 
 =?utf-8?q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>, 
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>, 
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>, 
 =?utf-8?q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>, 
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>, 
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>, 
 "Martin K. Petersen" <martin.petersen@oracle.com>, 
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org, 
 drbd-dev@lists.linbit.com, nbd@other.debian.org, 
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org, 
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org, 
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev, 
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org, 
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev, 
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org, 
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
In-Reply-To: <20240617060532.127975-1-hch@lst.de>
References: <20240617060532.127975-1-hch@lst.de>
Subject: Re: move features flags into queue_limits v2
Message-Id: <171880672048.115609.5962725096227627176.b4-ty@kernel.dk>
Date: Wed, 19 Jun 2024 08:18:40 -0600
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: b4 0.14.0


On Mon, 17 Jun 2024 08:04:27 +0200, Christoph Hellwig wrote:
> this is the third and last major series to convert settings to
> queue_limits for this merge window.  After a bunch of prep patches to
> get various drivers in shape, it moves all the queue_flags that specify
> driver controlled features into the queue limits so that they can be
> set atomically and are separated from the blk-mq internal flags.
> 
> Note that I've only Cc'ed the maintainers for drivers with non-mechanical
> changes as the Cc list is already huge.
> 
> [...]

Applied, thanks!

[01/26] xen-blkfront: don't disable cache flushes when they fail
        commit: dd9300e9eaeeb212f77ffeb72d1d8756107f1f1f
[02/26] sd: remove sd_is_zoned
        commit: be60e7700e6df1e16a2f60f45bece08e6140a46d
[03/26] sd: move zone limits setup out of sd_read_block_characteristics
        commit: 308ad58af49d6c4c3b7a36b98972cc9db4d7b36a
[04/26] loop: stop using loop_reconfigure_limits in __loop_clr_fd
        commit: c9055b44abe60da69aa4ee4fdcb78ee7fe733335
[05/26] loop: always update discard settings in loop_reconfigure_limits
        commit: ae0d40ff49642651f969883ef9fc79d69c1632d7
[06/26] loop: regularize upgrading the block size for direct I/O
        commit: a17ece76bcfe7b86327b19cae1652d7c62068a30
[07/26] loop: also use the default block size from an underlying block device
        commit: 4ce37fe0938b02b7b947029c40b72d76a22a3882
[08/26] loop: fold loop_update_rotational into loop_reconfigure_limits
        commit: 97dd4a43d69b74a114be466d6887e257971adfe9
[09/26] virtio_blk: remove virtblk_update_cache_mode
        commit: bbe5c84122b35c37f2706872fe34da66f0854b56
[10/26] nbd: move setting the cache control flags to __nbd_set_size
        commit: 6b377787a306253111404325aee98005b361e59a
[11/26] block: freeze the queue in queue_attr_store
        commit: af2814149883e2c1851866ea2afcd8eadc040f79
[12/26] block: remove blk_flush_policy
        commit: 70905f8706b62113ae32c8df721384ff6ffb6c6a
[13/26] block: move cache control settings out of queue->flags
        commit: 1122c0c1cc71f740fa4d5f14f239194e06a1d5e7
[14/26] block: move the nonrot flag to queue_limits
        commit: bd4a633b6f7c3c6b6ebc1a07317643270e751a94
[15/26] block: move the add_random flag to queue_limits
        commit: 39a9f1c334f9f27b3b3e6d0005c10ed667268346
[16/26] block: move the io_stat flag setting to queue_limits
        commit: cdb2497918cc2929691408bac87b58433b45b6d3
[17/26] block: move the stable_writes flag to queue_limits
        commit: 1a02f3a73f8c670eddeb44bf52a75ae7f67cfc11
[18/26] block: move the synchronous flag to queue_limits
        commit: aadd5c59c910427c0464c217d5ed588ff14e2502
[19/26] block: move the nowait flag to queue_limits
        commit: f76af42f8bf13d2620084f305f01691de9238fc7
[20/26] block: move the dax flag to queue_limits
        commit: f467fee48da4500786e145489787b37adae317c3
[21/26] block: move the poll flag to queue_limits
        commit: 8023e144f9d6e35f8786937e2f0c2fea0aba6dbc
[22/26] block: move the zoned flag into the features field
        commit: b1fc937a55f5735b98d9dceae5bb6ba262501f56
[23/26] block: move the zone_resetall flag to queue_limits
        commit: a52758a39768f441e468a41da6c15a59d6d6011a
[24/26] block: move the pci_p2pdma flag to queue_limits
        commit: 9c1e42e3c876c66796eda23e79836a4d92613a61
[25/26] block: move the skip_tagset_quiesce flag to queue_limits
        commit: 8c8f5c85b20d0a7dc0ab9b2a17318130d69ceb5a
[26/26] block: move the bounce flag into the features field
        commit: 339d3948c07b4aa2940aeb874294a7d6782cec16

Best regards,
-- 
Jens Axboe





From xen-devel-bounces@lists.xenproject.org Wed Jun 19 14:21:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 14:21:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743836.1150823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwBf-0000rd-Vb; Wed, 19 Jun 2024 14:21:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743836.1150823; Wed, 19 Jun 2024 14:21:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwBf-0000rW-Sn; Wed, 19 Jun 2024 14:21:23 +0000
Received: by outflank-mailman (input) for mailman id 743836;
 Wed, 19 Jun 2024 14:21:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cBaG=NV=kernel.dk=axboe@srs-se1.protection.inumbo.net>)
 id 1sJwBe-0000rK-42
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 14:21:22 +0000
Received: from mail-pg1-x533.google.com (mail-pg1-x533.google.com
 [2607:f8b0:4864:20::533])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 324fb57a-2e47-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 16:21:20 +0200 (CEST)
Received: by mail-pg1-x533.google.com with SMTP id
 41be03b00d2f7-7119502613bso87121a12.2
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 07:21:20 -0700 (PDT)
Received: from [192.168.1.150] ([198.8.77.157])
 by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705ccb6b110sm10739431b3a.153.2024.06.19.07.21.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 07:21:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 324fb57a-2e47-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=kernel-dk.20230601.gappssmtp.com; s=20230601; t=1718806878; x=1719411678; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:content-language:references
         :cc:to:from:subject:user-agent:mime-version:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=TrxAc+Uu30dWHZppC8CN0wQM7I31e2d3aLis3drkvvE=;
        b=bY+UHX/2zRIjrVvW40Mnc/ieysEDD91I1SaSAGd9LoP7sDKihsZfQO8EYmYXxyhbrg
         JIUv/E3wSanGNRFy7VD7nBDVmUfFllGIspm/2n2mkyXeR3LXTQuoxVcxsOkRJ26FutKo
         3zia1zTJpbIcxJUZv9GUFRIqAbW2NQJ7xQDLc4un30dPk15/3/zfHaUs647XIxEMiz7g
         gVSfHu5CWUn6gHejngvakANbPnD3MaYD9wsSBJX241/fV7qZXmJuN4qwaw/5aJfHsT34
         Jaz1PiU6n8utr8/DA97MQ7sJxoyTrm81fZlUA8hDy4OXw+0FMWKbBbzHEntoRAojcFmD
         YVNw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718806878; x=1719411678;
        h=content-transfer-encoding:in-reply-to:content-language:references
         :cc:to:from:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=TrxAc+Uu30dWHZppC8CN0wQM7I31e2d3aLis3drkvvE=;
        b=liOl2UXldQ5pvKi+jyvMvZdAfh/JSn31H7GwUqZvtygD0dgEr/Rq+gBWMW3I9uOkPp
         Xj3LJ8Rk3KhUuU74oLTjA1wMUo7KZ7O4YLOr+A68L2HkEp/fmZ/IDaLVX7fE2mJWZc/v
         TCXFezAzbO/WmIuNVKVnnHntAvC/Fk90dfbkFT8AD/yT35nzzRgu4XZq/Ze/wMgd8Hg9
         Mq6vG6/SEXBC3vWR9k4xzB+vg0PELAb/EXfuH9GU+MYumCW9UYt78+t1svTrzrRW9rmT
         P03jGyTe4TdwBd1HS6Ks7OL2rPnvleCVfTyzN7UWGrQpqJQapb8p7aNhhl9+G9KI8BMv
         MSxA==
X-Forwarded-Encrypted: i=1; AJvYcCUcsIV4mcKZtpuakXY/Cj2lLpC/n6tI6SzorWmrpHZlNWOCFynvQLlaFH4dbyUTSMVYX6j4ODK9lSW89/GiCQZW6ih033Mw+3u4e1V/gsM=
X-Gm-Message-State: AOJu0YwqE17j6kvTYBFebscwoEc6/khG0Jq5wWUDr+qR4gCrfE1tSxf+
	UbX2hQYY5g7IJ0mT2MDv6lUlILcK8Gl1j3GH2ap3zSQ4ZkZKvoemqqNKPVBOdLU=
X-Google-Smtp-Source: AGHT+IGlNYahWungPdY/VG0D2i+/kOzAqG/9vIEbq77BhnEO6pEhdeI5ihGxkaZBKQBmSE54SEc3pA==
X-Received: by 2002:a05:6a21:33a5:b0:1b8:622a:cf7c with SMTP id adf61e73a8af0-1bcbb727134mr2697727637.3.1718806878609;
        Wed, 19 Jun 2024 07:21:18 -0700 (PDT)
Message-ID: <e8e718ca-7d3a-4bce-b88a-3bcbe1fa32b0@kernel.dk>
Date: Wed, 19 Jun 2024 08:21:14 -0600
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: move features flags into queue_limits v2
From: Jens Axboe <axboe@kernel.dk>
To: Christoph Hellwig <hch@lst.de>
Cc: Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?UTF-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>, Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com, nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
References: <20240617060532.127975-1-hch@lst.de>
 <171880672048.115609.5962725096227627176.b4-ty@kernel.dk>
Content-Language: en-US
In-Reply-To: <171880672048.115609.5962725096227627176.b4-ty@kernel.dk>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/19/24 8:18 AM, Jens Axboe wrote:
> 
> On Mon, 17 Jun 2024 08:04:27 +0200, Christoph Hellwig wrote:
>> this is the third and last major series to convert settings to
>> queue_limits for this merge window.  After a bunch of prep patches to
>> get various drivers in shape, it moves all the queue_flags that specify
>> driver controlled features into the queue limits so that they can be
>> set atomically and are separated from the blk-mq internal flags.
>>
>> Note that I've only Cc'ed the maintainers for drivers with non-mechanical
>> changes as the Cc list is already huge.
>>
>> [...]
> 
> Applied, thanks!

Please check for-6.11/block, as I pulled in the changes to the main
block branch and that threw some merge conflicts mostly due to Damien's
changes in for-6.11/block. While fixing those up, I also came across
oddities like:

(limits->features & limits->features & BLK_FEAT_ZONED)) {

which don't make much sense and hence I changed them to

(limits->features & BLK_FEAT_ZONED)) {

-- 
Jens Axboe



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 14:23:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 14:23:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743843.1150833 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwE0-0001R4-Ad; Wed, 19 Jun 2024 14:23:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743843.1150833; Wed, 19 Jun 2024 14:23:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwE0-0001Qx-81; Wed, 19 Jun 2024 14:23:48 +0000
Received: by outflank-mailman (input) for mailman id 743843;
 Wed, 19 Jun 2024 14:23:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=A3PD=NV=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sJwDy-0001QS-L4
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 14:23:46 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 88e9b699-2e47-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 16:23:45 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id D67B368AFE; Wed, 19 Jun 2024 16:23:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88e9b699-2e47-11ef-b4bb-af5377834399
Date: Wed, 19 Jun 2024 16:23:40 +0200
From: Christoph Hellwig <hch@lst.de>
To: Jens Axboe <axboe@kernel.dk>
Cc: Christoph Hellwig <hch@lst.de>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org
Subject: Re: move features flags into queue_limits v2
Message-ID: <20240619142340.GA32100@lst.de>
References: <20240617060532.127975-1-hch@lst.de> <171880672048.115609.5962725096227627176.b4-ty@kernel.dk> <e8e718ca-7d3a-4bce-b88a-3bcbe1fa32b0@kernel.dk>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e8e718ca-7d3a-4bce-b88a-3bcbe1fa32b0@kernel.dk>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 19, 2024 at 08:21:14AM -0600, Jens Axboe wrote:
> Please check for-6.11/block, as I pulled in the changes to the main
> block branch and that threw some merge conflicts mostly due to Damien's
> changes in for-6.11/block. While fixing those up, I also came across
> oddities like:
> 
> (limits->features & limits->features & BLK_FEAT_ZONED)) {
> 
> which don't make much sense and hence I changed them to
> 
> (limits->features & BLK_FEAT_ZONED)) {

Yeah.  The above is harmless but of course completely pointless.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 14:32:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 14:32:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743849.1150842 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwMR-00034t-2K; Wed, 19 Jun 2024 14:32:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743849.1150842; Wed, 19 Jun 2024 14:32:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwMQ-00034m-Vw; Wed, 19 Jun 2024 14:32:30 +0000
Received: by outflank-mailman (input) for mailman id 743849;
 Wed, 19 Jun 2024 14:32:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=krMZ=NV=apertussolutions.com=dpsmith@srs-se1.protection.inumbo.net>)
 id 1sJwMP-00034g-Qk
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 14:32:29 +0000
Received: from sender4-of-o52.zoho.com (sender4-of-o52.zoho.com
 [136.143.188.52]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c00407f1-2e48-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 16:32:27 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1718807536876432.54109202998006;
 Wed, 19 Jun 2024 07:32:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c00407f1-2e48-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; t=1718807539; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=dW9sxLzN2jZBL5Q2e63z84lkmyzvH/zzDgw0Zbus7D7kq1KBoqjDeMkH2ofNiph/dukCoj/QoXWv2be9x6OhRI68SZWRBQ4GAXwR2dH+z6N6lFhaIigHr3rxxeyWgR8ByvENgZuC7LosfCZNWuc4avLHT1WKtwq4fgitOvnzjrQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1718807539; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=/ohNcDv7EIzpISmKimehjsW0lwQc3YgFkUscaIh9NG0=; 
	b=IXrdxmx/zcVw0MS8w8dQCyimp2TylwyH0SK96MufFZpraXo76tqDXH2/NTsN29Wos1GecbK8BRjv8w1ZKIBV0AHLfVsLNQ5kj7jLqnHdml/hGlzcmvUcEHtvarTcfnISUUIbQamTwS49iBjo8tfdJv1bsh8zbP9CVl1zZmTUvo0=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1718807539;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Message-ID:Date:Date:MIME-Version:Subject:Subject:To:To:Cc:Cc:References:From:From:In-Reply-To:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=/ohNcDv7EIzpISmKimehjsW0lwQc3YgFkUscaIh9NG0=;
	b=N53pzzj0BXGVZLyIv1cO+doX0zluuVG1xhUXYaGMPKpy7gzu5QGDEEw+MLZXS2Ud
	zGFTsLXeOznGMoJ0N5j9R6tCAfNRDtysy6crk1tMJFc3aECJjZVBVOBD5KlP8h3HWoO
	Ll7ppadUtGLroDM8JLBc5IutGgDPqsSTfiPkWDhM=
Message-ID: <34971a8f-a3ab-42c4-b96a-59a43c62db85@apertussolutions.com>
Date: Wed, 19 Jun 2024 10:32:14 -0400
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] loadpolicy: Verifies memory allocation during policy
 loading
Content-Language: en-US
To: Jan Beulich <jbeulich@suse.com>, yskelg@gmail.com
Cc: Austin Kim <austindh.kim@gmail.com>, shjy180909@gmail.com,
 xen-devel@lists.xenproject.org, Anthony PERARD <anthony@xenproject.org>
References: <20240527125438.66349-1-yskelg@gmail.com>
 <14188e5a-a641-4351-80b3-f69969c4ddba@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Autocrypt: addr=dpsmith@apertussolutions.com; keydata=
 xsJuBFYrueARCACPWL3r2bCSI6TrkIE/aRzj4ksFYPzLkJbWLZGBRlv7HQLvs6i/K4y/b4fs
 JDq5eL4e9BdfdnZm/b+K+Gweyc0Px2poDWwKVTFFRgxKWq9R7McwNnvuZ4nyXJBVn7PTEn/Z
 G7D08iZg94ZsnUdeXfgYdJrqmdiWA6iX9u84ARHUtb0K4r5WpLUMcQ8PVmnv1vVrs/3Wy/Rb
 foxebZNWxgUiSx+d02e3Ad0aEIur1SYXXv71mqKwyi/40CBSHq2jk9eF6zmEhaoFi5+MMMgX
 X0i+fcBkvmT0N88W4yCtHhHQds+RDbTPLGm8NBVJb7R5zbJmuQX7ADBVuNYIU8hx3dF3AQCm
 601w0oZJ0jGOV1vXQgHqZYJGHg5wuImhzhZJCRESIwf+PJxik7TJOgBicko1hUVOxJBZxoe0
 x+/SO6tn+s8wKlR1Yxy8gYN9ZRqV2I83JsWZbBXMG1kLzV0SAfk/wq0PAppA1VzrQ3JqXg7T
 MZ3tFgxvxkYqUP11tO2vrgys+InkZAfjBVMjqXWHokyQPpihUaW0a8mr40w9Qui6DoJj7+Gg
 DtDWDZ7Zcn2hoyrypuht88rUuh1JuGYD434Q6qwQjUDlY+4lgrUxKdMD8R7JJWt38MNlTWvy
 rMVscvZUNc7gxcmnFUn41NPSKqzp4DDRbmf37Iz/fL7i01y7IGFTXaYaF3nEACyIUTr/xxi+
 MD1FVtEtJncZNkRn7WBcVFGKMAf+NEeaeQdGYQ6mGgk++i/vJZxkrC/a9ZXme7BhWRP485U5
 sXpFoGjdpMn4VlC7TFk2qsnJi3yF0pXCKVRy1ukEls8o+4PF2JiKrtkCrWCimB6jxGPIG3lk
 3SuKVS/din3RHz+7Sr1lXWFcGYDENmPd/jTwr1A1FiHrSj+u21hnJEHi8eTa9029F1KRfocp
 ig+k0zUEKmFPDabpanI323O5Tahsy7hwf2WOQwTDLvQ+eqQu40wbb6NocmCNFjtRhNZWGKJS
 b5GrGDGu/No5U6w73adighEuNcCSNBsLyUe48CE0uTO7eAL6Vd+2k28ezi6XY4Y0mgASJslb
 NwW54LzSSM0uRGFuaWVsIFAuIFNtaXRoIDxkcHNtaXRoQGFwZXJ0dXNzb2x1dGlvbnMuY29t
 PsJ6BBMRCAAiBQJWK7ngAhsjBgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRBTc6WbYpR8
 KrQ9AP94+xjtFfJ8gj5c7PVx06Zv9rcmFUqQspZ5wSEkvxOuQQEAg6qEsPYegI7iByLVzNEg
 7B7fUG7pqWIfMqFwFghYhQzOwU0EViu54BAIAL6MXXNlrJ5tRUf+KMBtVz1LJQZRt/uxWrCb
 T06nZjnbp2UcceuYNbISOVHGXTzu38r55YzpkEA8eURQf+5hjtvlrOiHxvpD+Z6WcpV6rrMB
 kcAKWiZTQihW2HoGgVB3gwG9dCh+n0X5OzliAMiGK2a5iqnIZi3o0SeW6aME94bSkTkuj6/7
 OmH9KAzK8UnlhfkoMg3tXW8L6/5CGn2VyrjbB/rcrbIR4mCQ+yCUlocuOjFCJhBd10AG1IcX
 OXUa/ux+/OAV9S5mkr5Fh3kQxYCTcTRt8RY7+of9RGBk10txi94dXiU2SjPbassvagvu/hEi
 twNHms8rpkSJIeeq0/cAAwUH/jV3tXpaYubwcL2tkk5ggL9Do+/Yo2WPzXmbp8vDiJPCvSJW
 rz2NrYkd/RoX+42DGqjfu8Y04F9XehN1zZAFmCDUqBMa4tEJ7kOT1FKJTqzNVcgeKNBGcT7q
 27+wsqbAerM4A0X/F/ctjYcKwNtXck1Bmd/T8kiw2IgyeOC+cjyTOSwKJr2gCwZXGi5g+2V8
 NhJ8n72ISPnOh5KCMoAJXmCF+SYaJ6hIIFARmnuessCIGw4ylCRIU/TiXK94soilx5aCqb1z
 ke943EIUts9CmFAHt8cNPYOPRd20pPu4VFNBuT4fv9Ys0iv0XGCEP+sos7/pgJ3gV3pCOric
 p15jV4PCYQQYEQgACQUCViu54AIbDAAKCRBTc6WbYpR8Khu7AP9NJrBUn94C/3PeNbtQlEGZ
 NV46Mx5HF0P27lH3sFpNrwD/dVdZ5PCnHQYBZ287ZxVfVr4Zuxjo5yJbRjT93Hl0vMY=
In-Reply-To: <14188e5a-a641-4351-80b3-f69969c4ddba@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 6/19/24 08:04, Jan Beulich wrote:
> On 27.05.2024 14:54, yskelg@gmail.com wrote:
>> --- a/tools/flask/utils/loadpolicy.c
>> +++ b/tools/flask/utils/loadpolicy.c
>> @@ -58,6 +58,11 @@ int main (int argCnt, const char *args[])
>>       }
>>   
>>       polMemCp = malloc(info.st_size);
>> +    if (!polMemCp) {
>> +        fprintf(stderr, "Error occurred allocating %ld bytes\n", info.st_size);
>> +        ret = -ENOMEM;
> 
> I don't think -ENOMEM is valid to use here. See neighboring code. Nevertheless
> it is correct that a check should be here.
> 
> As to %ld - is that portably usable with an off_t value?
> 
> In any event, Daniel, really your turn to review / ack. I'm looking at this
> merely because I found this and another bugfix still sit in waiting-for-ack
> state.

I saw this but was on the fence about whether it really required my ack, 
since it was more of a toolstack code fix than an XSM-relevant change.

With that said, and to expand on Jan's comment regarding ENOMEM, the 
utility does not currently differentiate main's return code. Unless the 
tools maintainer wants to start changing this, I would suggest setting 
ret to -1.

As to the '%ld', and in line with Jan's first comment, perhaps you might 
consider just reporting `strerror(errno)`, as the other error-handling 
checks do. NB: errno will most likely be set to ENOMEM, so by doing this 
you will still end up reporting that ENOMEM occurred, as you were 
attempting to do by returning it in `ret`. Additionally, you then won't 
have to deal with portability concerns over off_t.

V/r,
Daniel P. Smith




From xen-devel-bounces@lists.xenproject.org Wed Jun 19 14:33:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 14:33:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743855.1150853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwNK-0003aG-Bx; Wed, 19 Jun 2024 14:33:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743855.1150853; Wed, 19 Jun 2024 14:33:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwNK-0003a9-84; Wed, 19 Jun 2024 14:33:26 +0000
Received: by outflank-mailman (input) for mailman id 743855;
 Wed, 19 Jun 2024 14:33:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pMzu=NV=bounce.vates.tech=bounce-md_30504962.6672ec31.v1-db5fcd6941aa43b2ac53c0d0b3e3427f@srs-se1.protection.inumbo.net>)
 id 1sJwNJ-0003Zi-41
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 14:33:25 +0000
Received: from mail187-11.suw11.mandrillapp.com
 (mail187-11.suw11.mandrillapp.com [198.2.187.11])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e197545d-2e48-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 16:33:23 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-11.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W45hK710SzLfHHmf
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 14:33:21 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 db5fcd6941aa43b2ac53c0d0b3e3427f; Wed, 19 Jun 2024 14:33:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e197545d-2e48-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718807602; x=1719068102;
	bh=dhkIawe/P1gt74JqwmytP5ttHEZSPuBPJ9PBNk5H03E=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=axTUdQ0IP/1hAyNT/eSXYTwYMckNXZRwBzc044DtL56XKUHNcpliOWSp5Xm6E5qlB
	 QYiYHzqjGxcIfwrlzbbzlPVa20j+QEYTp/ggCHyL9n17QN7sg43B1PZIpm5TmKW/qG
	 +hjNaMcmaaIis3i8VialnYSUmQyqmC4nBGuqzP3tDf8TteLJKGcr0zbLWMHrTiZqDO
	 ahbWhjxwH112V84epMaFOKP5QStHr7K1m1GzPcqB3Lj5N6DRmI4RT9eC22NWD/KNzA
	 68/IMlxKPc2vK77st2FmHupqzoUYX7i5MAIVUxe887mtLb6pRw7lIP7216v5E6d3YD
	 yJV8nABZgrp2A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718807602; x=1719068102; i=anthony.perard@vates.tech;
	bh=dhkIawe/P1gt74JqwmytP5ttHEZSPuBPJ9PBNk5H03E=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=ZNe8OFMfBKO4L3/gmD9sXRiEr3BUYl35tjAg8lD08H0pWnRsz7OqyrsGaPSeUyQ5m
	 /I5hfivaxPSh/ZFs3h/v9Vk0AJ3dzV8O7FRef/xLYSCoaApi3MOxSxIdOx8wpnD8JO
	 2XF+5v+iFktBEauHXl3HKfzwrsc8JlZ7H8kGO6a/Tz2b2Bw5f7w+Yf8be23YEeqGcM
	 LJBHjXGz+zgvdN5ZLNEBWECf9Lk3mhpXwFx9E871vKpu8LLgP4lkyzN/hb4On5YLlC
	 F+xT9YbWC7U3dP31sli/Yv1sYBCmimrw+vnka1TL1MUleXkLxetWUdK6nV8jmjxUJf
	 Ve8tzA8Q3P69A==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20for-4.19]=20hotplug:=20Restore=20block-tap=20phy=20compatibility?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718807600709
To: Jan Beulich <jbeulich@suse.com>
Cc: Anthony PERARD <anthony@xenproject.org>, Oleksii Kurochko <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org, Jason Andryuk <jason.andryuk@amd.com>, Jason Andryuk <jandryuk@gmail.com>
Message-Id: <ZnLsMOQ3zt4W855q@l14>
References: <20240516022212.5034-1-jandryuk@gmail.com> <64083e01-edf1-4395-a9d7-82e82d220de7@suse.com> <9678073f-82d5-4402-b5a0-e24985c1446b@amd.com> <7de20763-b9bc-4dfc-b250-8f83c42e9e16@suse.com>
In-Reply-To: <7de20763-b9bc-4dfc-b250-8f83c42e9e16@suse.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.db5fcd6941aa43b2ac53c0d0b3e3427f?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240619:md
Date: Wed, 19 Jun 2024 14:33:21 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Wed, Jun 19, 2024 at 02:07:04PM +0200, Jan Beulich wrote:
> On 16.05.2024 15:52, Jason Andryuk wrote:
> > On 2024-05-16 03:41, Jan Beulich wrote:
> >> On 16.05.2024 04:22, Jason Andryuk wrote:
> >>> From: Jason Andryuk <jason.andryuk@amd.com>
> >>>
> >>> From: Jason Andryuk <jason.andryuk@amd.com>
> >>
> >> Two identical From: (also in another patch of yours, while in yet another one
> >> you have two _different_ ones, when only one will survive into the eventual
> >> commit anyway)?
> > 
> > Sorry about that.  Since I was sending from my gmail account, I thought 
> > I needed explicit From: lines to ensure the authorship was listed w/ 
> > amd.com.  I generated the patches with `git format-patch --from`, to get 
> > the explicit From: lines, and then sent with `git send-email`.  The 
> > send-email step then inserted the additional lines.  I guess it added 
> >  From amd.com since I had changed to that address in .gitconfig.
> > 
> >>> backendtype=phy using the blktap kernel module needs to use write_dev,
> >>> but tapback can't support that.  tapback should perform better, but make
> >>> the script compatible with the old kernel module again.
> >>>
> >>> Signed-off-by: Jason Andryuk <jason.andryuk@amd.com>
> >>
> >> Should there be a Fixes: tag here?
> > 
> > That makes sense.
> > 
> > Fixes: 76a484193d ("hotplug: Update block-tap")
> 
> Surely this wants going into 4.19? Thus - Anthony, Oleksii?

Yes, I think so.

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 
Anthony Perard | Vates XCP-ng Developer
XCP-ng & Xen Orchestra - Vates solutions
web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 14:52:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 14:52:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743868.1150862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwg0-0006q2-SE; Wed, 19 Jun 2024 14:52:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743868.1150862; Wed, 19 Jun 2024 14:52:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwg0-0006pv-Pi; Wed, 19 Jun 2024 14:52:44 +0000
Received: by outflank-mailman (input) for mailman id 743868;
 Wed, 19 Jun 2024 14:52:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwfz-0006pl-Lq; Wed, 19 Jun 2024 14:52:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwfz-0005hU-DN; Wed, 19 Jun 2024 14:52:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwfz-00074B-4y; Wed, 19 Jun 2024 14:52:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwfz-0003uX-4P; Wed, 19 Jun 2024 14:52:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Fkjh/JkNlvn1hydJCmz5377R2WePZ0Cr6SjYC5ZCkWU=; b=TF068Q17jV8VbYgPfbXBTRcsXd
	UXSnFEsyyWw8EHrBYKAkWsnPj2NndzAciCVmhUlT8pz3CGOxdOsxqnCFn79t16tH5c+gyeyd7GuPH
	XgU421QCdueO6Gz5yMOn0DLaciEisHzzL65j+A3ouMspVkFRovE7BhnBA0T3aXXWNdw8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186411-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186411: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
X-Osstest-Versions-That:
    xen=53c5c99e8744495395c1274595d6ca55947d1d6a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 14:52:43 +0000

flight 186411 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186411/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd
baseline version:
 xen                  53c5c99e8744495395c1274595d6ca55947d1d6a

Last test of basis   186404  2024-06-19 01:00:23 Z    0 days
Testing same since   186411  2024-06-19 12:00:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   53c5c99e87..efa6e9f15b  efa6e9f15ba943d154e8d7b29384581915b2aacd -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 15:05:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 15:05:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743877.1150873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwsY-0000CA-UL; Wed, 19 Jun 2024 15:05:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743877.1150873; Wed, 19 Jun 2024 15:05:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJwsY-0000C3-R9; Wed, 19 Jun 2024 15:05:42 +0000
Received: by outflank-mailman (input) for mailman id 743877;
 Wed, 19 Jun 2024 15:05:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwsY-0000Bt-3a; Wed, 19 Jun 2024 15:05:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwsX-0005x0-Kf; Wed, 19 Jun 2024 15:05:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwsX-0007Nc-AF; Wed, 19 Jun 2024 15:05:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJwsX-0001ea-9s; Wed, 19 Jun 2024 15:05:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PYspLgfKdtMWRKCn3DmOH4k02TqC7meexxBxHSHm1vU=; b=Bt0YmjQzaeFQcsgk+xQ2GZ/56o
	uuoSWca99QRU9Ngb6mmPPrfV6vuzGjuHDoAq/PSpqPDnj/SjxOhMbhQFVXwBNuEUWXA5qCB+eee5A
	CdwnQFameeBPwt5cMePrUQ4mTihYZ60/N58TM0SKbhyhoXzm1ntbl6nKHWy6vvHyQ7P8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186407-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186407: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=2b199ad3f1a69bd6de1b2a1fc7d5bd31817dcb11
X-Osstest-Versions-That:
    libvirt=3fff8c91b059ec450a1c94c8fced685a7de19d42
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 15:05:41 +0000

flight 186407 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186407/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186390
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              2b199ad3f1a69bd6de1b2a1fc7d5bd31817dcb11
baseline version:
 libvirt              3fff8c91b059ec450a1c94c8fced685a7de19d42

Last test of basis   186390  2024-06-18 04:18:45 Z    1 days
Testing same since   186407  2024-06-19 04:22:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Julis <ajulis@redhat.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   3fff8c91b0..2b199ad3f1  2b199ad3f1a69bd6de1b2a1fc7d5bd31817dcb11 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 15:16:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 15:16:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743890.1150883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJx2d-0001vU-TF; Wed, 19 Jun 2024 15:16:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743890.1150883; Wed, 19 Jun 2024 15:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJx2d-0001vN-Pz; Wed, 19 Jun 2024 15:16:07 +0000
Received: by outflank-mailman (input) for mailman id 743890;
 Wed, 19 Jun 2024 15:16:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RXUT=NV=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJx2c-0001vH-Jt
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 15:16:06 +0000
Received: from mail-qk1-x729.google.com (mail-qk1-x729.google.com
 [2607:f8b0:4864:20::729])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d656f3e0-2e4e-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 17:16:01 +0200 (CEST)
Received: by mail-qk1-x729.google.com with SMTP id
 af79cd13be357-7955aaf8006so506752985a.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 08:16:01 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5eca9c1sm78714396d6.39.2024.06.19.08.15.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 08:15:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d656f3e0-2e4e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718810160; x=1719414960; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ItYjzeLz9pCHlYwoslznxmKtrPfREFj+z5aB8KdzW/Y=;
        b=lgiP6/3JCNw2f6h1RxrOoMvpxYKHv+EzvDrLZY4sFjdsaL3hzgTaYlVHJZGDnrNHe3
         3nKQ1VJnV7+EBLC7wo6O1uSoxqSxThwKNFnpMb6QiCs0anid6J7tANPuCqLagjrGd9px
         KVqO/GrvCoFfS5FCPVJF7zA6sbX7SOOmdtXao=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718810160; x=1719414960;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=ItYjzeLz9pCHlYwoslznxmKtrPfREFj+z5aB8KdzW/Y=;
        b=jZaemUCDEKowJ2HrH88HmytcyVQB6nfcShu6QbV12UDkhTfvM8mZGeM0kVk+pvNL+K
         O0qx7TcoSqZha3Yyn64Lka2O2OtVdZG+s+6VoFPCbO64jN4Qmn8GtJsxb5Z2CibEtXRN
         zRKUTwXG5FoLnrPADuvV/vNTyR4J827xpmdl7QkmwcPqiVEfgLGhrCy+tz13Y3RwAaVZ
         9hYIHzbojUZuci/USQHXW3qzBh0yZ5W4S/G+tAs907b9fc88mconh+0Dx6XN2e/BjY1z
         3zxJo7A4qmH4QiSaLVHlIBsHA0ekxkv+vt5Rez2t/H817BzwgMDrQJimTpdt8WfYXVRO
         Ve1g==
X-Gm-Message-State: AOJu0Ywzs5a6fZhCyDcv1VXoBrgJnchUQaD/KojO36SjUqKyUndSMOrC
	neumHdqZfqcDTiNH5GaRAdBffp1QObHA5XiR8IsMnkq6xBRwVySxiR8uegxuLJr8unSt0QAwDQg
	9yW8=
X-Google-Smtp-Source: AGHT+IHSvRA+epUNuVPlhYXYEHKgwy49psFLjn7oHwsv/w8BP10LOTc1yhrGfai1ndv/hKONw/pN5Q==
X-Received: by 2002:a05:6214:caa:b0:6b2:bad8:b72b with SMTP id 6a1803df08f44-6b501e3939fmr38892276d6.28.1718810159939;
        Wed, 19 Jun 2024 08:15:59 -0700 (PDT)
Message-ID: <87424aa5-758b-42c6-a777-14af385d1203@cloud.com>
Date: Wed, 19 Jun 2024 16:15:57 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: xen | Failed pipeline for staging | 43d5c5d5
To: xen-devel <xen-devel@lists.xenproject.org>
References: <6672eede44602_2e3c548bc2022ad@gitlab-sidekiq-catchall-v2-649cf875c5-bssqj.mail>
Content-Language: en-GB
Cc: Jan Beulich <jbeulich@suse.com>
From: Andrew Cooper <andrew.cooper@cloud.com>
Autocrypt: addr=andrew.cooper@cloud.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSdBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyQGNsb3VkLmNvbT7CwY4EEwEIADgWIQTPNUlbfqb3Dqd9IXZl
 w/kGpdefoAUCY9sQuAIbAwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAAKCRBlw/kGpdefoFJ5
 D/9+nAjHPxmjH7Vv423dYvz+smJghc8/DZ+tICR/EjU+bwp896sR3KQSHlo7HJzU0zcRGjwt
 vrfKnWYCkpLn8rEJuQCXbuHplS2ATrTC1jkqytUC/VhVDOVvnrjYL9R5jRdzxqjOCJjaN2OI
 in+zkIX9D9uZs6MdtUTcvUm7RkHF/nYkxfvrkVVBXWt3bJ7b9oPXV6myq9KBp/S5n9ZJLeva
 mn78RUJRKDweKUwXQhGcz8E0tvWrvCE8KG5xQ3vlwF2TXHzo/FImMHSqhQ0dN4KtadaI+JaF
 T1bF5F6UGAoUMv9lf8Y0qVXSLV4NjBwLhJcrJ5OgyVihk/LTFZiqGHBlJ7+RgXNhWn/mDF4D
 pPiJwcp40mD89c3eGhkQ6sP/HIYLr7kpZKFEHLTN+xO45KRPHKKeSBR99xuC3Lgo6qnV6f+V
 FZprHMfYQ1Obzma62gHtgjpg0w1Vs4JpE9HJMKZx6I8KH7GeCw1xhQP5o+rvag7YZOiGa3y8
 dbMiz+1LQwRxWVHJJNF54uScgFC2TO0gyVzkf64X7PfhaRhTH5Ie0ukVY7NZ4sDKURNeaZ0v
 h/TgFxqiJOmFG1EnXETOoB4RPZEYpPHTJXovx8ie0b2Hoje+STJzOgonyiTGTEB2bys4UJyp
 42fZjcJ2w3PkRhm3VrflU1Lvz1pw9k8wEjWOcs7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnl
 qZjzz2vyrYfz7bkPtXaGb9H4Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lK
 DSoAsvt8w82tIlP/EbmRbDVn7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/Q
 zV9f07DJswuda1JH3/qvYu0pvjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjt
 w7KsZ4ygXYrsm/oCBiVW/OgUg/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP
 +wi6y+TnuAL36UtK/uFyEuPywwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48
 Z27ZUUZd2Dr6O/n8poQHbaTd6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y
 0fsJT5RgvefAIFfHBg7fTY/ikBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W
 9CxSyQ0qmZ2bXsLQYRj2xqd1bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N
 7VZtSfEJeRV04unbsKVXWZAkuAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRF
 cMf7xJaL9xXedMcAEQEAAcLBXwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/
 er8EA7g23HMxYWd3FXHThrVQHgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznE
 ns5EAAXEbITrgKWXDDUWGYxdpnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10P
 Z3mZD4Xu9kU2IXYmuW+e5KCAvTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJ
 TGnVxZZSCyLUO83sh6OZhJkkb9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJv
 fPKpNzGD8XWR5HHF0NLIJhgg4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4U
 ePdFLfhKyRdSXuvY3AHJd4CP4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6
 yPbxT7CwRS5dcQPzUiuHLK9invjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZe
 rSm4xaYegEFusyhbZrI0U9tJB8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tv
 G3euCklmkn9Sl9IAKFu29RSod5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2
 nf3Bbzx81wZK14JdDDHUX2Rs6+ahAA==
In-Reply-To: <6672eede44602_2e3c548bc2022ad@gitlab-sidekiq-catchall-v2-649cf875c5-bssqj.mail>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19/06/2024 3:44 pm, GitLab wrote:
> GitLab
> ✖ 	Pipeline #1338876222 has failed!
> 
>  
> Project 	xen-project <https://gitlab.com/xen-project> / xen
> <https://gitlab.com/xen-project/xen>
> Branch 	
> 	staging <https://gitlab.com/xen-project/xen/-/commits/staging>
> 
> Commit 	
> 	43d5c5d5
> <https://gitlab.com/xen-project/xen/-/commit/43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e>
> 
> xen: avoid UB in guest handle arithmetic At le...
> Commit Author 	
> 	Jan Beulich <https://gitlab.com/jbeulich>
> 
>  
> Pipeline #1338876222
> <https://gitlab.com/xen-project/xen/-/pipelines/1338876222> triggered by
> 		Ganis <https://gitlab.com/ganis>
> 
> had 1 failed job
> Failed job
> ✖ 	build
> 
> 	debian-bookworm-gcc-arm32-debug-randconfig
> <https://gitlab.com/xen-project/xen/-/jobs/7136417308>

This is:

In file included from common/livepatch.c:9:
common/livepatch.c: In function 'livepatch_list':
./include/xen/guest_access.h:130:25: error: cast to pointer from integer
of different size [-Werror=int-to-pointer-cast]
  130 |     __raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
      |                         ^
common/livepatch.c:1283:18: note: in expansion of macro
'__copy_to_guest_offset'
 1283 |             if ( __copy_to_guest_offset(list->name, name_offset,
      |                  ^~~~~~~~~~~~~~~~~~~~~~
./include/xen/guest_access.h:130:25: error: cast to pointer from integer
of different size [-Werror=int-to-pointer-cast]
  130 |     __raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
      |                         ^
common/livepatch.c:1287:17: note: in expansion of macro
'__copy_to_guest_offset'
 1287 |                 __copy_to_guest_offset(list->metadata,
metadata_offset,
      |                 ^~~~~~~~~~~~~~~~~~~~~~


The problem is that (off) is of type uint64_t, so

   (const void *)(s_ + (off) * sizeof(*_d))

ends up performing a uint64_t -> uint32_t down-conversion on arm32.

This wants to use the _p() macro, which takes care of casting through
(unsigned long) on its way to a pointer.  I'll do a patch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 15:46:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 15:46:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743901.1150893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJxW3-0006CH-0y; Wed, 19 Jun 2024 15:46:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743901.1150893; Wed, 19 Jun 2024 15:46:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJxW2-0006CA-UW; Wed, 19 Jun 2024 15:46:30 +0000
Received: by outflank-mailman (input) for mailman id 743901;
 Wed, 19 Jun 2024 15:46:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Hkwz=NV=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sJxW2-0006C4-2U
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 15:46:30 +0000
Received: from mail-oi1-x22f.google.com (mail-oi1-x22f.google.com
 [2607:f8b0:4864:20::22f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1575b600-2e53-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 17:46:25 +0200 (CEST)
Received: by mail-oi1-x22f.google.com with SMTP id
 5614622812f47-3c9cc681ee0so3172472b6e.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 08:46:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1575b600-2e53-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718811984; x=1719416784; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=t6i79TBptLGYHvgct8a+nK5k8kULH1VcVm+mKgbhLiQ=;
        b=dsIVVhr5xXAdJWm6jefZ+NeYxTqvPcI9p5EsoGpLkmjBhwd+mDJR4aNgsYwabkMD1K
         4TmH2kVFm6zinrUm5vdgDvfabIYp7tWuTlMd+bKaAstJYUf2U8iQTNTlldYS8ztSTYxH
         yCXcZbtSzDjsjUtwCFi1Dqz/yf3Nw2QUmGVQKMu9qVQIdVYgGsbxwYUyLIDID3aOahTd
         Ds87NdQ32yuHrYnvNWslHQRVx+6OD3RGweMOPOJLuisq3I+AT8+IFQ0UZufImlZVJN/u
         VjesL6zGEMMQfM5tw/7/l4DtkMQU+Ju4bEXm9zKKzsr6MHHKlkP6ViktWNq+meRUy4Fw
         T/sg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718811984; x=1719416784;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=t6i79TBptLGYHvgct8a+nK5k8kULH1VcVm+mKgbhLiQ=;
        b=SLJhgpFXwZ4Bk07IHT8/CfJz4+ITY3To6wG7SBARk1Yxs3TUb80uXMMJLLMy8jUQoQ
         Er/9M+h04dM9do8L/zXZ1HZzMJ8usY2c+THIXGFUcN3EvBmkUks8cEOyKfUShnC8edzg
         K7lxUT86DejUIf3gZF9BXzQvk1R/P/CHeRmiCjWdMwRYAAcJrg29ZAxEP0xIg/oi3UPJ
         C1kx/2CWs4ttkqsDMmGymRMXj+FklnnTjqKwiWfXHZ8DxCmbT/G2ofRM1YF4uvUdd9LQ
         zjsNzjsz6HYJMeRxcT8hYRlvpMFmXF+g8tatoUpS90R2QLz6crWVNRFN4KeBPQbwqovQ
         lE0w==
X-Forwarded-Encrypted: i=1; AJvYcCWQtSWj7bJhP5HxGCJrOjKAINwikqJPCg9Ba4KvAHhZ7GGcuYlXz06LAJ1Ns9m/cR1eh7O+JU0dJwkQi/coRF1/xeZ9i4QD3dX0rxuelAM=
X-Gm-Message-State: AOJu0YzbHMcUAdZES2tlIBgAQCprgsU/iWLv4qpANQOM6ukbiWXQclVz
	M/sDZ4baCjPh4+wvdG6yDjEwnXPNZ0/Uh6A+ZJHIzku2HaXipucExt5DvDT5KzKxIowKX6SoaQf
	hMdcr+Wi0RGmAIjR0Auag3aUXqIk=
X-Google-Smtp-Source: AGHT+IFduTD6szkPAuGBJD+MLbhP/M27BP//P7PnlUhalJm8qvZOP6Sk+wglvNwj0fEYD3dzIw98+ddByD5L8Ay6nss=
X-Received: by 2002:a05:6870:524f:b0:24f:d3ce:fe92 with SMTP id
 586e51a60fabf-25c94980cc1mr3421271fac.14.1718811984200; Wed, 19 Jun 2024
 08:46:24 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1718038855.git.w1benny@gmail.com> <fee20e24a94cb29dea81631a6b775933d1151da4.1718038855.git.w1benny@gmail.com>
 <4a49fe9b-66fd-4a32-ad01-14ed4c5fc34c@suse.com>
In-Reply-To: <4a49fe9b-66fd-4a32-ad01-14ed4c5fc34c@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Wed, 19 Jun 2024 17:46:13 +0200
Message-ID: <CAKBKdXgUKYoJfB1mG+6JSaV=jWpmRmS1UbQ6N4JNZ774rP_PoQ@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v6 6/9] xen: Make the maximum number of altp2m
 views configurable for x86
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, Jun 13, 2024 at 2:03 PM Jan Beulich <jbeulich@suse.com> wrote:
> > @@ -510,13 +526,13 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
> >      mfn_t mfn;
> >      int rc = -EINVAL;
> >
> > -    if ( idx >=  min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
> > +    if ( idx >= d->nr_altp2m ||
> >           d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
>
> This ends up being suspicious: The range check is against a value different
> from what is passed to array_index_nospec(). The two weren't the same
> before either, but there the range check was more strict (which now isn't
> visible anymore, even though I think it would still be true). Imo this
> wants a comment, or an assertion effectively taking the place of a comment.

> Since they're all "is this slot populated" checks, maybe we want
> an is_altp2m_eptp_valid() helper?

Let me see if I understand correctly. You're suggesting the condition
should be replaced with something like this? (Also, I would suggest the
name altp2m_is_eptp_valid(), since it's consistent with, e.g.,
p2m_is_altp2m().)

static inline bool altp2m_is_eptp_valid(const struct domain *d,
                                        unsigned int idx)
{
    /*
     * EPTP index is correlated with altp2m index and should not exceed
     * d->nr_altp2m.
     */
    ASSERT(idx < d->nr_altp2m);

    return idx < MAX_EPTP &&
        d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] !=
        mfn_x(INVALID_MFN);
}

Note that in the codebase there are also very similar checks, but
again without array_index_nospec. For instance, in the
p2m_altp2m_propagate_change() function (which is called fairly
frequently):

int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
                                mfn_t mfn, unsigned int page_order,
                                p2m_type_t p2mt, p2m_access_t p2ma)
{
    struct p2m_domain *p2m;
    unsigned int i;
    unsigned int reset_count = 0;
    unsigned int last_reset_idx = ~0;
    int ret = 0;

    if ( !altp2m_active(d) )
        return 0;

    altp2m_list_lock(d);

    for ( i = 0; i < d->nr_altp2m; i++ )
    {
        p2m_type_t t;
        p2m_access_t a;

        /*
         * XXX this could be replaced with altp2m_is_eptp_valid(), but
         * based on previous review remarks it would introduce an
         * unnecessary performance hit.  So, should these occurrences be
         * left unchanged?
         */
        if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
            continue;

       ...

There are more instances of this, which reopens the issue from the
previous conversation: should I introduce a helper that is used in some
places (where _nospec is needed) but not elsewhere?

P.


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 15:47:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 15:47:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743897.1150902 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJxXN-0006rW-Dw; Wed, 19 Jun 2024 15:47:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743897.1150902; Wed, 19 Jun 2024 15:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJxXN-0006rP-B9; Wed, 19 Jun 2024 15:47:53 +0000
Received: by outflank-mailman (input) for mailman id 743897;
 Wed, 19 Jun 2024 15:23:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VYNS=NV=gmail.com=fernandez.simon@srs-se1.protection.inumbo.net>)
 id 1sJx9i-0003h9-Er
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 15:23:26 +0000
Received: from mail-wm1-x32e.google.com (mail-wm1-x32e.google.com
 [2a00:1450:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id de664e95-2e4f-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 17:23:24 +0200 (CEST)
Received: by mail-wm1-x32e.google.com with SMTP id
 5b1f17b1804b1-424720e73e0so16284395e9.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 08:23:24 -0700 (PDT)
Received: from [10.14.0.2] ([178.239.163.102])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-422f641f522sm230634805e9.48.2024.06.19.08.23.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 19 Jun 2024 08:23:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de664e95-2e4f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718810603; x=1719415403; darn=lists.xenproject.org;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=HSVLHkgMSUqxWcbc1T+fsQsit0Pgp/IVlzKOv2OSI7w=;
        b=UhxsPU5VdRkA3F1K3WkBeE8EHKBqUDhCP5axwoOof2atQxTQSDG8GK9it+PND+QJXu
         rngXeuQLU1g41HbBj+6P2u64uddaGM4301xyqV2IASHyVscmYD5it/rVlX6+VRDhsYQG
         BNYvyhEjg0SfeZ8XozsefSHRa3A10vGUZyKtWtCZJbIZPiTIonG7vY3NS35htZlXYZA9
         KNM2c0IirdRVX3LEqG1eIIVnVkiUGJGSEMusZnYizAjFfJCkVmCsS8y/+/JjvdQgKdHS
         K2qXj4442H1FcwR79+7Cl+Cmu47qwpv+g2eFRnbUDbfpN0DigykfL+AJhrEEVqoCzk6Z
         jU/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718810603; x=1719415403;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=HSVLHkgMSUqxWcbc1T+fsQsit0Pgp/IVlzKOv2OSI7w=;
        b=sgzpHOjblIq0fwP5G5klg1bqB2PQo91nycs8ktLQ4Ra3n5FzQvpIH+WOGw0+PLaC6y
         eqMxUlRfaiXPSmadcYvPPisKs39hCZVH0qPk3rqMf7UgJGj0RQCXu/C4gcTeLViYd/Fv
         yzwgDqO+dI5M7PIg9r4bLjS7zCL/GYeuGCS7fD5wrH7PNlDUiBAmNKAMRrpy9LOihB9o
         lK/NN+7iXlSF6AB63jPchT5KPGLrJMwpPR2ajtov9ESxWEJMbvnD6lG18c3h4ksDXf5U
         tOAR51ipsY3pvhSxmWlmhM4LJq2l4V1fBUYLJXIrZ9mVW5rwszXcxDYYfQwaaxrJuRel
         dzxQ==
X-Forwarded-Encrypted: i=1; AJvYcCVN46RWHmtGymzAEysn1s3z9u6QiShJCGooaRL/R5HB2Oxp0do6FqxGQykrq9qh0JwQqE3v71ur63dJQpa90pOHPRTxHyC9lrTe0Z56ods=
X-Gm-Message-State: AOJu0YxQfiPBo8km62LR/0AExV7951mRuC0TqXqXZ6F7liKQbmslF3og
	pY8rtowLDVE8q4eZanXmPf67HV6B/4SyZZC9Iw9LnmyASCAOhtru
X-Google-Smtp-Source: AGHT+IG+MXhlW+QFpM6Rnupka8napxwJxRH0oCI9mys5+mxI/fFZ68OpTovznOA60BtwebCs2O4yVw==
X-Received: by 2002:a05:600c:6a9a:b0:422:50d7:ff0 with SMTP id 5b1f17b1804b1-4247507a6f6mr19093275e9.3.1718810603126;
        Wed, 19 Jun 2024 08:23:23 -0700 (PDT)
Content-Type: text/plain;
	charset=us-ascii
Mime-Version: 1.0 (Mac OS X Mail 13.4 \(3608.120.23.2.7\))
Subject: Re: [PATCH 14/26] block: move the nonrot flag to queue_limits
From: Simon Fernandez <fernandez.simon@gmail.com>
In-Reply-To: <20240617060532.127975-15-hch@lst.de>
Date: Wed, 19 Jun 2024 16:23:52 +0100
Cc: Jens Axboe <axboe@kernel.dk>,
 Geert Uytterhoeven <geert@linux-m68k.org>,
 Richard Weinberger <richard@nod.at>,
 Philipp Reisner <philipp.reisner@linbit.com>,
 Lars Ellenberg <lars.ellenberg@linbit.com>,
 =?utf-8?Q?Christoph_B=C3=B6hmwalder?= <christoph.boehmwalder@linbit.com>,
 Josef Bacik <josef@toxicpanda.com>,
 Ming Lei <ming.lei@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>,
 =?utf-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alasdair Kergon <agk@redhat.com>,
 Mike Snitzer <snitzer@kernel.org>,
 Mikulas Patocka <mpatocka@redhat.com>,
 Song Liu <song@kernel.org>,
 Yu Kuai <yukuai3@huawei.com>,
 Vineeth Vijayan <vneethv@linux.ibm.com>,
 "Martin K. Petersen" <martin.petersen@oracle.com>,
 linux-m68k@lists.linux-m68k.org,
 linux-um@lists.infradead.org,
 drbd-dev@lists.linbit.com,
 nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org,
 ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org,
 dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org,
 linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org,
 nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org,
 linux-block@vger.kernel.org,
 Damien Le Moal <dlemoal@kernel.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <312DD24A-7AB5-4FAC-8880-EA80056CFC44@gmail.com>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-15-hch@lst.de>
To: Christoph Hellwig <hch@lst.de>
X-Mailer: Apple Mail (2.3608.120.23.2.7)

Hi folks, how can I unsubscribe from this group?
Thanks in advance.
S

> On 17 Jun 2024, at 07:04, Christoph Hellwig <hch@lst.de> wrote:
>
> Move the nonrot flag into the queue_limits feature field so that it can
> be set atomically with the queue frozen.
>
> Use the chance to switch to defaulting to non-rotational and require
> the driver to opt into rotational, which matches the polarity of the
> sysfs interface.
>
> For the z2ram, ps3vram, 2x memstick, ubiblock and dcssblk the new
> rotational flag is not set as they clearly are not rotational despite
> this being a behavior change.  There are some other drivers that
> unconditionally set the rotational flag to keep the existing behavior
> as they arguably can be used on rotational devices even if that is
> probably not their main use today (e.g. virtio_blk and drbd).
>
> The flag is automatically inherited in blk_stack_limits matching the
> existing behavior in dm and md.
>=20
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Reviewed-by: Damien Le Moal <dlemoal@kernel.org>
> ---
> arch/m68k/emu/nfblock.c             |  1 +
> arch/um/drivers/ubd_kern.c          |  1 -
> arch/xtensa/platforms/iss/simdisk.c |  5 +++-
> block/blk-mq-debugfs.c              |  1 -
> block/blk-sysfs.c                   | 39 ++++++++++++++++++++++++++---
> drivers/block/amiflop.c             |  5 +++-
> drivers/block/aoe/aoeblk.c          |  1 +
> drivers/block/ataflop.c             |  5 +++-
> drivers/block/brd.c                 |  2 --
> drivers/block/drbd/drbd_main.c      |  3 ++-
> drivers/block/floppy.c              |  3 ++-
> drivers/block/loop.c                |  8 +++---
> drivers/block/mtip32xx/mtip32xx.c   |  1 -
> drivers/block/n64cart.c             |  2 --
> drivers/block/nbd.c                 |  5 ----
> drivers/block/null_blk/main.c       |  1 -
> drivers/block/pktcdvd.c             |  1 +
> drivers/block/ps3disk.c             |  3 ++-
> drivers/block/rbd.c                 |  3 ---
> drivers/block/rnbd/rnbd-clt.c       |  4 ---
> drivers/block/sunvdc.c              |  1 +
> drivers/block/swim.c                |  5 +++-
> drivers/block/swim3.c               |  5 +++-
> drivers/block/ublk_drv.c            |  9 +++----
> drivers/block/virtio_blk.c          |  4 ++-
> drivers/block/xen-blkfront.c        |  1 -
> drivers/block/zram/zram_drv.c       |  2 --
> drivers/cdrom/gdrom.c               |  1 +
> drivers/md/bcache/super.c           |  2 --
> drivers/md/dm-table.c               | 12 ---------
> drivers/md/md.c                     | 13 ----------
> drivers/mmc/core/queue.c            |  1 -
> drivers/mtd/mtd_blkdevs.c           |  1 -
> drivers/nvdimm/btt.c                |  1 -
> drivers/nvdimm/pmem.c               |  1 -
> drivers/nvme/host/core.c            |  1 -
> drivers/nvme/host/multipath.c       |  1 -
> drivers/s390/block/dasd_genhd.c     |  1 -
> drivers/s390/block/scm_blk.c        |  1 -
> drivers/scsi/sd.c                   |  4 +--
> include/linux/blkdev.h              | 10 ++++----
> 41 files changed, 83 insertions(+), 88 deletions(-)
> 
> diff --git a/arch/m68k/emu/nfblock.c b/arch/m68k/emu/nfblock.c
> index 642fb80c5c4e31..8eea7ef9115146 100644
> --- a/arch/m68k/emu/nfblock.c
> +++ b/arch/m68k/emu/nfblock.c
> @@ -98,6 +98,7 @@ static int __init nfhd_init_one(int id, u32 blocks, u32 bsize)
> {
> 	struct queue_limits lim = {
> 		.logical_block_size	= bsize,
> +		.features		= BLK_FEAT_ROTATIONAL,
> 	};
> 	struct nfhd_device *dev;
> 	int dev_id = id - NFHD_DEV_OFFSET;
> diff --git a/arch/um/drivers/ubd_kern.c b/arch/um/drivers/ubd_kern.c
> index 19e01691ea0ea7..9f1e76ddda5a26 100644
> --- a/arch/um/drivers/ubd_kern.c
> +++ b/arch/um/drivers/ubd_kern.c
> @@ -882,7 +882,6 @@ static int ubd_add(int n, char **error_out)
> 		goto out_cleanup_tags;
> 	}
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
> 	disk->major = UBD_MAJOR;
> 	disk->first_minor = n << UBD_SHIFT;
> 	disk->minors = 1 << UBD_SHIFT;
> diff --git a/arch/xtensa/platforms/iss/simdisk.c b/arch/xtensa/platforms/iss/simdisk.c
> index defc67909a9c74..d6d2b533a5744d 100644
> --- a/arch/xtensa/platforms/iss/simdisk.c
> +++ b/arch/xtensa/platforms/iss/simdisk.c
> @@ -263,6 +263,9 @@ static const struct proc_ops simdisk_proc_ops = {
> static int __init simdisk_setup(struct simdisk *dev, int which,
> 		struct proc_dir_entry *procdir)
> {
> +	struct queue_limits lim = {
> +		.features		= BLK_FEAT_ROTATIONAL,
> +	};
> 	char tmp[2] = { '0' + which, 0 };
> 	int err;
> 
> @@ -271,7 +274,7 @@ static int __init simdisk_setup(struct simdisk *dev, int which,
> 	spin_lock_init(&dev->lock);
> 	dev->users = 0;
> 
> -	dev->gd = blk_alloc_disk(NULL, NUMA_NO_NODE);
> +	dev->gd = blk_alloc_disk(&lim, NUMA_NO_NODE);
> 	if (IS_ERR(dev->gd)) {
> 		err = PTR_ERR(dev->gd);
> 		goto out;
> diff --git a/block/blk-mq-debugfs.c b/block/blk-mq-debugfs.c
> index e8b9db7c30c455..4d0e62ec88f033 100644
> --- a/block/blk-mq-debugfs.c
> +++ b/block/blk-mq-debugfs.c
> @@ -84,7 +84,6 @@ static const char *const blk_queue_flag_name[] = {
> 	QUEUE_FLAG_NAME(NOMERGES),
> 	QUEUE_FLAG_NAME(SAME_COMP),
> 	QUEUE_FLAG_NAME(FAIL_IO),
> -	QUEUE_FLAG_NAME(NONROT),
> 	QUEUE_FLAG_NAME(IO_STAT),
> 	QUEUE_FLAG_NAME(NOXMERGES),
> 	QUEUE_FLAG_NAME(ADD_RANDOM),
> diff --git a/block/blk-sysfs.c b/block/blk-sysfs.c
> index 4f524c1d5e08bd..637ed3bbbfb46f 100644
> --- a/block/blk-sysfs.c
> +++ b/block/blk-sysfs.c
> @@ -263,6 +263,39 @@ static ssize_t queue_dma_alignment_show(struct request_queue *q, char *page)
> 	return queue_var_show(queue_dma_alignment(q), page);
> }
> 
> +static ssize_t queue_feature_store(struct request_queue *q, const char *page,
> +		size_t count, unsigned int feature)
> +{
> +	struct queue_limits lim;
> +	unsigned long val;
> +	ssize_t ret;
> +
> +	ret = queue_var_store(&val, page, count);
> +	if (ret < 0)
> +		return ret;
> +
> +	lim = queue_limits_start_update(q);
> +	if (val)
> +		lim.features |= feature;
> +	else
> +		lim.features &= ~feature;
> +	ret = queue_limits_commit_update(q, &lim);
> +	if (ret)
> +		return ret;
> +	return count;
> +}
> +
> +#define QUEUE_SYSFS_FEATURE(_name, _feature)				\
> +static ssize_t queue_##_name##_show(struct request_queue *q, char *page) \
> +{									\
> +	return sprintf(page, "%u\n", !!(q->limits.features & _feature)); \
> +}									\
> +static ssize_t queue_##_name##_store(struct request_queue *q,		\
> +		const char *page, size_t count)				\
> +{									\
> +	return queue_feature_store(q, page, count, _feature);		\
> +}
> +
> #define QUEUE_SYSFS_BIT_FNS(name, flag, neg)				\
> static ssize_t								\
> queue_##name##_show(struct request_queue *q, char *page)		\
> @@ -289,7 +322,7 @@ queue_##name##_store(struct request_queue *q, const char *page, size_t count) \
> 	return ret;							\
> }
> 
> -QUEUE_SYSFS_BIT_FNS(nonrot, NONROT, 1);
> +QUEUE_SYSFS_FEATURE(rotational, BLK_FEAT_ROTATIONAL)
> QUEUE_SYSFS_BIT_FNS(random, ADD_RANDOM, 0);
> QUEUE_SYSFS_BIT_FNS(iostats, IO_STAT, 0);
> QUEUE_SYSFS_BIT_FNS(stable_writes, STABLE_WRITES, 0);
> @@ -526,7 +559,7 @@ static struct queue_sysfs_entry queue_hw_sector_size_entry = {
> 	.show = queue_logical_block_size_show,
> };
> 
> -QUEUE_RW_ENTRY(queue_nonrot, "rotational");
> +QUEUE_RW_ENTRY(queue_rotational, "rotational");
> QUEUE_RW_ENTRY(queue_iostats, "iostats");
> QUEUE_RW_ENTRY(queue_random, "add_random");
> QUEUE_RW_ENTRY(queue_stable_writes, "stable_writes");
> @@ -624,7 +657,7 @@ static struct attribute *queue_attrs[] = {
> 	&queue_write_zeroes_max_entry.attr,
> 	&queue_zone_append_max_entry.attr,
> 	&queue_zone_write_granularity_entry.attr,
> -	&queue_nonrot_entry.attr,
> +	&queue_rotational_entry.attr,
> 	&queue_zoned_entry.attr,
> 	&queue_nr_zones_entry.attr,
> 	&queue_max_open_zones_entry.attr,
> diff --git a/drivers/block/amiflop.c b/drivers/block/amiflop.c
> index a25414228e4741..ff45701f7a5e31 100644
> --- a/drivers/block/amiflop.c
> +++ b/drivers/block/amiflop.c
> @@ -1776,10 +1776,13 @@ static const struct blk_mq_ops amiflop_mq_ops = {
> 
> static int fd_alloc_disk(int drive, int system)
> {
> +	struct queue_limits lim = {
> +		.features		= BLK_FEAT_ROTATIONAL,
> +	};
> 	struct gendisk *disk;
> 	int err;
> 
> -	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
> +	disk = blk_mq_alloc_disk(&unit[drive].tag_set, &lim, NULL);
> 	if (IS_ERR(disk))
> 		return PTR_ERR(disk);
> 
> diff --git a/drivers/block/aoe/aoeblk.c b/drivers/block/aoe/aoeblk.c
> index b6dac8cee70fe1..2028795ec61cbb 100644
> --- a/drivers/block/aoe/aoeblk.c
> +++ b/drivers/block/aoe/aoeblk.c
> @@ -337,6 +337,7 @@ aoeblk_gdalloc(void *vp)
> 	struct queue_limits lim = {
> 		.max_hw_sectors		= aoe_maxsectors,
> 		.io_opt			= SZ_2M,
> +		.features		= BLK_FEAT_ROTATIONAL,
> 	};
> 	ulong flags;
> 	int late = 0;
> diff --git a/drivers/block/ataflop.c b/drivers/block/ataflop.c
> index cacc4ba942a814..4ee10a742bdb93 100644
> --- a/drivers/block/ataflop.c
> +++ b/drivers/block/ataflop.c
> @@ -1992,9 +1992,12 @@ static const struct blk_mq_ops ataflop_mq_ops = {
> 
> static int ataflop_alloc_disk(unsigned int drive, unsigned int type)
> {
> +	struct queue_limits lim = {
> +		.features		= BLK_FEAT_ROTATIONAL,
> +	};
> 	struct gendisk *disk;
> 
> -	disk = blk_mq_alloc_disk(&unit[drive].tag_set, NULL, NULL);
> +	disk = blk_mq_alloc_disk(&unit[drive].tag_set, &lim, NULL);
> 	if (IS_ERR(disk))
> 		return PTR_ERR(disk);
> 
> diff --git a/drivers/block/brd.c b/drivers/block/brd.c
> index 558d8e67056608..b25dc463b5e3a6 100644
> --- a/drivers/block/brd.c
> +++ b/drivers/block/brd.c
> @@ -366,8 +366,6 @@ static int brd_alloc(int i)
> 	strscpy(disk->disk_name, buf, DISK_NAME_LEN);
> 	set_capacity(disk, rd_size * 2);
> 	
> -	/* Tell the block layer that this is not a rotational device */
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
> 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, disk->queue);
> 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, disk->queue);
> 	err = add_disk(disk);
> diff --git a/drivers/block/drbd/drbd_main.c b/drivers/block/drbd/drbd_main.c
> index bf42a46781fa21..2ef29a47807550 100644
> --- a/drivers/block/drbd/drbd_main.c
> +++ b/drivers/block/drbd/drbd_main.c
> @@ -2697,7 +2697,8 @@ enum drbd_ret_code drbd_create_device(struct drbd_config_context *adm_ctx, unsig
> 		 * connect.
> 		 */
> 		.max_hw_sectors		= DRBD_MAX_BIO_SIZE_SAFE >> 8,
> -		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA,
> +		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
> +					  BLK_FEAT_ROTATIONAL,
> 	};
> 
> 	device = minor_to_device(minor);
> diff --git a/drivers/block/floppy.c b/drivers/block/floppy.c
> index 25c9d85667f1a2..6d7f7df97c3a6c 100644
> --- a/drivers/block/floppy.c
> +++ b/drivers/block/floppy.c
> @@ -4516,7 +4516,8 @@ static bool floppy_available(int drive)
> static int floppy_alloc_disk(unsigned int drive, unsigned int type)
> {
> 	struct queue_limits lim = {
> -		.max_hw_sectors = 64,
> +		.max_hw_sectors		= 64,
> +		.features		= BLK_FEAT_ROTATIONAL,
> 	};
> 	struct gendisk *disk;
> 
> diff --git a/drivers/block/loop.c b/drivers/block/loop.c
> index 08d0fc7f17b701..86b5d956dc4e02 100644
> --- a/drivers/block/loop.c
> +++ b/drivers/block/loop.c
> @@ -985,13 +985,11 @@ static int loop_reconfigure_limits(struct loop_device *lo, unsigned short bsize)
> 	lim.logical_block_size = bsize;
> 	lim.physical_block_size = bsize;
> 	lim.io_min = bsize;
> -	lim.features &= ~BLK_FEAT_WRITE_CACHE;
> +	lim.features &= ~(BLK_FEAT_WRITE_CACHE | BLK_FEAT_ROTATIONAL);
> 	if (file->f_op->fsync && !(lo->lo_flags & LO_FLAGS_READ_ONLY))
> 		lim.features |= BLK_FEAT_WRITE_CACHE;
> -	if (!backing_bdev || bdev_nonrot(backing_bdev))
> -		blk_queue_flag_set(QUEUE_FLAG_NONROT, lo->lo_queue);
> -	else
> -		blk_queue_flag_clear(QUEUE_FLAG_NONROT, lo->lo_queue);
> +	if (backing_bdev && !bdev_nonrot(backing_bdev))
> +		lim.features |= BLK_FEAT_ROTATIONAL;
> 	loop_config_discard(lo, &lim);
> 	return queue_limits_commit_update(lo->lo_queue, &lim);
> }
> diff --git a/drivers/block/mtip32xx/mtip32xx.c b/drivers/block/mtip32xx/mtip32xx.c
> index 43a187609ef794..1dbbf72659d549 100644
> --- a/drivers/block/mtip32xx/mtip32xx.c
> +++ b/drivers/block/mtip32xx/mtip32xx.c
> @@ -3485,7 +3485,6 @@ static int mtip_block_initialize(struct driver_data *dd)
> 		goto start_service_thread;
> 
> 	/* Set device limits. */
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, dd->queue);
> 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, dd->queue);
> 	dma_set_max_seg_size(&dd->pdev->dev, 0x400000);
> 
> diff --git a/drivers/block/n64cart.c b/drivers/block/n64cart.c
> index 27b2187e7a6d55..b9fdeff31cafdf 100644
> --- a/drivers/block/n64cart.c
> +++ b/drivers/block/n64cart.c
> @@ -150,8 +150,6 @@ static int __init n64cart_probe(struct platform_device *pdev)
> 	set_capacity(disk, size >> SECTOR_SHIFT);
> 	set_disk_ro(disk, 1);
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
> -
> 	err = add_disk(disk);
> 	if (err)
> 		goto out_cleanup_disk;
> diff --git a/drivers/block/nbd.c b/drivers/block/nbd.c
> index cb1c86a6a3fb9d..6cddf5baffe02a 100644
> --- a/drivers/block/nbd.c
> +++ b/drivers/block/nbd.c
> @@ -1867,11 +1867,6 @@ static struct nbd_device *nbd_dev_add(int index, unsigned int refs)
> 		goto out_err_disk;
> 	}
> 
> -	/*
> -	 * Tell the block layer that we are not a rotational device
> -	 */
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, disk->queue);
> -
> 	mutex_init(&nbd->config_lock);
> 	refcount_set(&nbd->config_refs, 0);
> 	/*
> diff --git a/drivers/block/null_blk/main.c b/drivers/block/null_blk/main.c
> index 21f9d256e88402..83a4ebe4763ae5 100644
> --- a/drivers/block/null_blk/main.c
> +++ b/drivers/block/null_blk/main.c
> @@ -1948,7 +1948,6 @@ static int null_add_dev(struct nullb_device *dev)
> 	}
> 
> 	nullb->q->queuedata = nullb;
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, nullb->q);
> 
> 	rv = ida_alloc(&nullb_indexes, GFP_KERNEL);
> 	if (rv < 0)
> diff --git a/drivers/block/pktcdvd.c b/drivers/block/pktcdvd.c
> index 8a2ce80700109d..7cece5884b9c67 100644
> --- a/drivers/block/pktcdvd.c
> +++ b/drivers/block/pktcdvd.c
> @@ -2622,6 +2622,7 @@ static int pkt_setup_dev(dev_t dev, dev_t* pkt_dev)
> 	struct queue_limits lim = {
> 		.max_hw_sectors		= PACKET_MAX_SECTORS,
> 		.logical_block_size	= CD_FRAMESIZE,
> +		.features		= BLK_FEAT_ROTATIONAL,
> 	};
> 	int idx;
> 	int ret = -ENOMEM;
> diff --git a/drivers/block/ps3disk.c b/drivers/block/ps3disk.c
> index 8b73cf459b5937..ff45ed76646957 100644
> --- a/drivers/block/ps3disk.c
> +++ b/drivers/block/ps3disk.c
> @@ -388,7 +388,8 @@ static int ps3disk_probe(struct ps3_system_bus_device *_dev)
> 		.max_segments		= -1,
> 		.max_segment_size	= dev->bounce_size,
> 		.dma_alignment		= dev->blk_size - 1,
> -		.features		= BLK_FEAT_WRITE_CACHE,
> +		.features		= BLK_FEAT_WRITE_CACHE |
> +					  BLK_FEAT_ROTATIONAL,
> 	};
> 	struct gendisk *gendisk;
> 
> diff --git a/drivers/block/rbd.c b/drivers/block/rbd.c
> index 22ad704f81d8b9..ec1f1c7d4275cd 100644
> --- a/drivers/block/rbd.c
> +++ b/drivers/block/rbd.c
> @@ -4997,9 +4997,6 @@ static int rbd_init_disk(struct rbd_device *rbd_dev)
> 	disk->fops = &rbd_bd_ops;
> 	disk->private_data = rbd_dev;
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> -	/* QUEUE_FLAG_ADD_RANDOM is off by default for blk-mq */
> -
> 	if (!ceph_test_opt(rbd_dev->rbd_client->client, NOCRC))
> 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, q);
> 
> diff --git a/drivers/block/rnbd/rnbd-clt.c b/drivers/block/rnbd/rnbd-clt.c
> index 02c4b173182719..4918b0f68b46cd 100644
> --- a/drivers/block/rnbd/rnbd-clt.c
> +++ b/drivers/block/rnbd/rnbd-clt.c
> @@ -1352,10 +1352,6 @@ static int rnbd_clt_setup_gen_disk(struct rnbd_clt_dev *dev,
> 	if (dev->access_mode == RNBD_ACCESS_RO)
> 		set_disk_ro(dev->gd, true);
> 
> -	/*
> -	 * Network device does not need rotational
> -	 */
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, dev->queue);
> 	err = add_disk(dev->gd);
> 	if (err)
> 		put_disk(dev->gd);
> diff --git a/drivers/block/sunvdc.c b/drivers/block/sunvdc.c
> index 5286cb8e0824d1..2d38331ee66793 100644
> --- a/drivers/block/sunvdc.c
> +++ b/drivers/block/sunvdc.c
> @@ -791,6 +791,7 @@ static int probe_disk(struct vdc_port *port)
> 		.seg_boundary_mask		= PAGE_SIZE - 1,
> 		.max_segment_size		= PAGE_SIZE,
> 		.max_segments			= port->ring_cookies,
> +		.features			= BLK_FEAT_ROTATIONAL,
> 	};
> 	struct request_queue *q;
> 	struct gendisk *g;
> diff --git a/drivers/block/swim.c b/drivers/block/swim.c
> index 6731678f3a41db..126f151c4f2cf0 100644
> --- a/drivers/block/swim.c
> +++ b/drivers/block/swim.c
> @@ -787,6 +787,9 @@ static void swim_cleanup_floppy_disk(struct floppy_state *fs)
> 
> static int swim_floppy_init(struct swim_priv *swd)
> {
> +	struct queue_limits lim = {
> +		.features		= BLK_FEAT_ROTATIONAL,
> +	};
> 	int err;
> 	int drive;
> 	struct swim __iomem *base = swd->base;
> @@ -820,7 +823,7 @@ static int swim_floppy_init(struct swim_priv *swd)
> 			goto exit_put_disks;
> 
> 		swd->unit[drive].disk =
> -			blk_mq_alloc_disk(&swd->unit[drive].tag_set, NULL,
> +			blk_mq_alloc_disk(&swd->unit[drive].tag_set, &lim,
> 					  &swd->unit[drive]);
> 		if (IS_ERR(swd->unit[drive].disk)) {
> 			blk_mq_free_tag_set(&swd->unit[drive].tag_set);
> diff --git a/drivers/block/swim3.c b/drivers/block/swim3.c
> index a04756ac778ee8..90be1017f7bfcd 100644
> --- a/drivers/block/swim3.c
> +++ b/drivers/block/swim3.c
> @@ -1189,6 +1189,9 @@ static int swim3_add_device(struct macio_dev *mdev, int index)
> static int swim3_attach(struct macio_dev *mdev,
> 			const struct of_device_id *match)
> {
> +	struct queue_limits lim = {
> +		.features		= BLK_FEAT_ROTATIONAL,
> +	};
> 	struct floppy_state *fs;
> 	struct gendisk *disk;
> 	int rc;
> @@ -1210,7 +1213,7 @@ static int swim3_attach(struct macio_dev *mdev,
> 	if (rc)
> 		goto out_unregister;
> 
> -	disk = blk_mq_alloc_disk(&fs->tag_set, NULL, fs);
> +	disk = blk_mq_alloc_disk(&fs->tag_set, &lim, fs);
> 	if (IS_ERR(disk)) {
> 		rc = PTR_ERR(disk);
> 		goto out_free_tag_set;
> diff --git a/drivers/block/ublk_drv.c b/drivers/block/ublk_drv.c
> index e45c65c1848d31..4fcde099935868 100644
> --- a/drivers/block/ublk_drv.c
> +++ b/drivers/block/ublk_drv.c
> @@ -484,14 +484,8 @@ static inline unsigned ublk_pos_to_tag(loff_t pos)
> 
> static void ublk_dev_param_basic_apply(struct ublk_device *ub)
> {
> -	struct request_queue *q = ub->ub_disk->queue;
> 	const struct ublk_param_basic *p = &ub->params.basic;
> 
> -	if (p->attrs & UBLK_ATTR_ROTATIONAL)
> -		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
> -	else
> -		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> -
> 	if (p->attrs & UBLK_ATTR_READ_ONLY)
> 		set_disk_ro(ub->ub_disk, true);
> 
> @@ -2214,6 +2208,9 @@ static int ublk_ctrl_start_dev(struct ublk_device *ub, struct io_uring_cmd *cmd)
> 			lim.features |= BLK_FEAT_FUA;
> 	}
> 
> +	if (ub->params.basic.attrs & UBLK_ATTR_ROTATIONAL)
> +		lim.features |= BLK_FEAT_ROTATIONAL;
> +
> 	if (wait_for_completion_interruptible(&ub->completion) != 0)
> 		return -EINTR;
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index b1a3c293528519..13a2f24f176628 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -1451,7 +1451,9 @@ static int virtblk_read_limits(struct virtio_blk *vblk,
> static int virtblk_probe(struct virtio_device *vdev)
> {
> 	struct virtio_blk *vblk;
> -	struct queue_limits lim = { };
> +	struct queue_limits lim = {
> +		.features		= BLK_FEAT_ROTATIONAL,
> +	};
> 	int err, index;
> 	unsigned int queue_depth;
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 9aafce3e5987bf..fa3a2ba525458b 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -1146,7 +1146,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
> 		err = PTR_ERR(gd);
> 		goto out_free_tag_set;
> 	}
> -	blk_queue_flag_set(QUEUE_FLAG_VIRT, gd->queue);
> 
> 	strcpy(gd->disk_name, DEV_NAME);
> 	ptr = encode_disk_name(gd->disk_name + sizeof(DEV_NAME) - 1, offset);
> diff --git a/drivers/block/zram/zram_drv.c b/drivers/block/zram/zram_drv.c
> index 3acd7006ad2ccd..aad840fc7e18e3 100644
> --- a/drivers/block/zram/zram_drv.c
> +++ b/drivers/block/zram/zram_drv.c
> @@ -2245,8 +2245,6 @@ static int zram_add(void)
> 
> 	/* Actual capacity set using sysfs (/sys/block/zram<id>/disksize */
> 	set_capacity(zram->disk, 0);
> -	/* zram devices sort of resembles non-rotational disks */
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, zram->disk->queue);
> 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, zram->disk->queue);
> 	blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, zram->disk->queue);
> 	ret = device_add_disk(NULL, zram->disk, zram_disk_groups);
> diff --git a/drivers/cdrom/gdrom.c b/drivers/cdrom/gdrom.c
> index eefdd422ad8e9f..71cfe7a85913c4 100644
> --- a/drivers/cdrom/gdrom.c
> +++ b/drivers/cdrom/gdrom.c
> @@ -744,6 +744,7 @@ static int probe_gdrom(struct platform_device *devptr)
> 		.max_segments			= 1,
> 		/* set a large max size to get most from DMA */
> 		.max_segment_size		= 0x40000,
> +		.features			= BLK_FEAT_ROTATIONAL,
> 	};
> 	int err;
> 
> diff --git a/drivers/md/bcache/super.c b/drivers/md/bcache/super.c
> index cb6595c8b5514e..baa364eedd0051 100644
> --- a/drivers/md/bcache/super.c
> +++ b/drivers/md/bcache/super.c
> @@ -974,8 +974,6 @@ static int bcache_device_init(struct bcache_device *d, unsigned int block_size,
> 	d->disk->minors		= BCACHE_MINORS;
> 	d->disk->fops		= ops;
> 	d->disk->private_data	= d;
> -
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, d->disk->queue);
> 	return 0;
> 
> out_bioset_exit:
> diff --git a/drivers/md/dm-table.c b/drivers/md/dm-table.c
> index 03abdae646829c..c062af32970934 100644
> --- a/drivers/md/dm-table.c
> +++ b/drivers/md/dm-table.c
> @@ -1716,12 +1716,6 @@ static int device_dax_write_cache_enabled(struct dm_target *ti,
> 	return false;
> }
> 
> -static int device_is_rotational(struct dm_target *ti, struct dm_dev *dev,
> -				sector_t start, sector_t len, void *data)
> -{
> -	return !bdev_nonrot(dev->bdev);
> -}
> -
> static int device_is_not_random(struct dm_target *ti, struct dm_dev *dev,
> 			     sector_t start, sector_t len, void *data)
> {
> @@ -1870,12 +1864,6 @@ int dm_table_set_restrictions(struct dm_table *t, struct request_queue *q,
> 	if (dm_table_any_dev_attr(t, device_dax_write_cache_enabled, NULL))
> 		dax_write_cache(t->md->dax_dev, true);
> 
> -	/* Ensure that all underlying devices are non-rotational. */
> -	if (dm_table_any_dev_attr(t, device_is_rotational, NULL))
> -		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
> -	else
> -		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> -
> 	/*
> 	 * Some devices don't use blk_integrity but still want stable pages
> 	 * because they do their own checksumming.
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 2f4c5d1755d857..c23423c51fb7c2 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -6151,20 +6151,7 @@ int md_run(struct mddev *mddev)
> 
> 	if (!mddev_is_dm(mddev)) {
> 		struct request_queue *q = mddev->gendisk->queue;
> -		bool nonrot = true;
> 
> -		rdev_for_each(rdev, mddev) {
> -			if (rdev->raid_disk >= 0 && !bdev_nonrot(rdev->bdev)) {
> -				nonrot = false;
> -				break;
> -			}
> -		}
> -		if (mddev->degraded)
> -			nonrot = false;
> -		if (nonrot)
> -			blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> -		else
> -			blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
> 		blk_queue_flag_set(QUEUE_FLAG_IO_STAT, q);
> 
> 		/* Set the NOWAIT flags if all underlying devices support it */
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 97ff993d31570c..b4f62fa845864c 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -387,7 +387,6 @@ static struct gendisk *mmc_alloc_disk(struct mmc_queue *mq,
> 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, mq->queue);
> 	blk_queue_rq_timeout(mq->queue, 60 * HZ);
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
> 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
> 
> 	dma_set_max_seg_size(mmc_dev(host), queue_max_segment_size(mq->queue));
> diff --git a/drivers/mtd/mtd_blkdevs.c b/drivers/mtd/mtd_blkdevs.c
> index 1b9f57f231e8be..bf8369ce7ddf1d 100644
> --- a/drivers/mtd/mtd_blkdevs.c
> +++ b/drivers/mtd/mtd_blkdevs.c
> @@ -375,7 +375,6 @@ int add_mtd_blktrans_dev(struct mtd_blktrans_dev *new)
> 	spin_lock_init(&new->queue_lock);
> 	INIT_LIST_HEAD(&new->rq_list);
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, new->rq);
> 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, new->rq);
> 
> 	gd->queue = new->rq;
> diff --git a/drivers/nvdimm/btt.c b/drivers/nvdimm/btt.c
> index c5f8451b494d6c..e474afa8e9f68d 100644
> --- a/drivers/nvdimm/btt.c
> +++ b/drivers/nvdimm/btt.c
> @@ -1518,7 +1518,6 @@ static int btt_blk_init(struct btt *btt)
> 	btt->btt_disk->fops = &btt_fops;
> 	btt->btt_disk->private_data = btt;
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, btt->btt_disk->queue);
> 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, btt->btt_disk->queue);
> 
> 	set_capacity(btt->btt_disk, btt->nlba * btt->sector_size >> 9);
> diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c
> index aff818469c114c..501cf226df0187 100644
> --- a/drivers/nvdimm/pmem.c
> +++ b/drivers/nvdimm/pmem.c
> @@ -546,7 +546,6 @@ static int pmem_attach_disk(struct device *dev,
> 	}
> 	pmem->virt_addr = addr;
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> 	blk_queue_flag_set(QUEUE_FLAG_SYNCHRONOUS, q);
> 	if (pmem->pfn_flags & PFN_MAP)
> 		blk_queue_flag_set(QUEUE_FLAG_DAX, q);
> diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
> index 9fc5e36fe2e55e..0d753fe71f35b0 100644
> --- a/drivers/nvme/host/core.c
> +++ b/drivers/nvme/host/core.c
> @@ -3744,7 +3744,6 @@ static void nvme_alloc_ns(struct nvme_ctrl *ctrl, struct nvme_ns_info *info)
> 	if (ctrl->opts && ctrl->opts->data_digest)
> 		blk_queue_flag_set(QUEUE_FLAG_STABLE_WRITES, ns->queue);
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, ns->queue);
> 	if (ctrl->ops->supports_pci_p2pdma &&
> 	    ctrl->ops->supports_pci_p2pdma(ctrl))
> 		blk_queue_flag_set(QUEUE_FLAG_PCI_P2PDMA, ns->queue);
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 3d0e23a0a4ddd8..58c13304e558e0 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -549,7 +549,6 @@ int nvme_mpath_alloc_disk(struct nvme_ctrl *ctrl, struct nvme_ns_head *head)
> 	sprintf(head->disk->disk_name, "nvme%dn%d",
> 			ctrl->subsys->instance, head->instance);
> 
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, head->disk->queue);
> 	blk_queue_flag_set(QUEUE_FLAG_NOWAIT, head->disk->queue);
> 	blk_queue_flag_set(QUEUE_FLAG_IO_STAT, head->disk->queue);
> 	/*
> diff --git a/drivers/s390/block/dasd_genhd.c b/drivers/s390/block/dasd_genhd.c
> index 4533dd055ca8e3..1aa426b1deddc7 100644
> --- a/drivers/s390/block/dasd_genhd.c
> +++ b/drivers/s390/block/dasd_genhd.c
> @@ -68,7 +68,6 @@ int dasd_gendisk_alloc(struct dasd_block *block)
> 		blk_mq_free_tag_set(&block->tag_set);
> 		return PTR_ERR(gdp);
> 	}
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, gdp->queue);
> 
> 	/* Initialize gendisk structure. */
> 	gdp->major = DASD_MAJOR;
> diff --git a/drivers/s390/block/scm_blk.c b/drivers/s390/block/scm_blk.c
> index 1d456a5a3bfb8e..2e2309fa9a0b34 100644
> --- a/drivers/s390/block/scm_blk.c
> +++ b/drivers/s390/block/scm_blk.c
> @@ -475,7 +475,6 @@ int scm_blk_dev_setup(struct scm_blk_dev *bdev, struct scm_device *scmdev)
> 		goto out_tag;
> 	}
> 	rq = bdev->rq = bdev->gendisk->queue;
> -	blk_queue_flag_set(QUEUE_FLAG_NONROT, rq);
> 	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, rq);
> 
> 	bdev->gendisk->private_data = scmdev;
> diff --git a/drivers/scsi/sd.c b/drivers/scsi/sd.c
> index d8ee4a4d4a6283..a42c3c45e86830 100644
> --- a/drivers/scsi/sd.c
> +++ b/drivers/scsi/sd.c
> @@ -3318,7 +3318,7 @@ static void sd_read_block_characteristics(struct scsi_disk *sdkp,
> 	rcu_read_unlock();
> 
> 	if (rot == 1) {
> -		blk_queue_flag_set(QUEUE_FLAG_NONROT, q);
> +		lim->features &= ~BLK_FEAT_ROTATIONAL;
> 		blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, q);
> 	}
> 
> @@ -3646,7 +3646,7 @@ static int sd_revalidate_disk(struct gendisk *disk)
> 		 * cause this to be updated correctly and any device which
> 		 * doesn't support it should be treated as rotational.
> 		 */
> -		blk_queue_flag_clear(QUEUE_FLAG_NONROT, q);
> +		lim.features |= BLK_FEAT_ROTATIONAL;
> 		blk_queue_flag_set(QUEUE_FLAG_ADD_RANDOM, q);
> 
> 		if (scsi_device_supports_vpd(sdp)) {
> diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
> index acdfe5122faa44..988e3248cffeb7 100644
> --- a/include/linux/blkdev.h
> +++ b/include/linux/blkdev.h
> @@ -289,14 +289,16 @@ enum {
> 
> 	/* supports passing on the FUA bit */
> 	BLK_FEAT_FUA				= (1u << 1),
> +
> +	/* rotational device (hard drive or floppy) */
> +	BLK_FEAT_ROTATIONAL			= (1u << 2),
> };
> 
> /*
>  * Flags automatically inherited when stacking limits.
>  */
> #define BLK_FEAT_INHERIT_MASK \
> -	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA)
> -
> +	(BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA | BLK_FEAT_ROTATIONAL)
> 
> /* internal flags in queue_limits.flags */
> enum {
> @@ -553,8 +555,6 @@ struct request_queue {
> #define QUEUE_FLAG_NOMERGES     3	/* disable merge attempts */
> #define QUEUE_FLAG_SAME_COMP	4	/* complete on same CPU-group */
> #define QUEUE_FLAG_FAIL_IO	5	/* fake timeout */
> -#define QUEUE_FLAG_NONROT	6	/* non-rotational device (SSD) */
> -#define QUEUE_FLAG_VIRT		QUEUE_FLAG_NONROT /* paravirt device */
> #define QUEUE_FLAG_IO_STAT	7	/* do disk/partitions IO accounting */
> #define QUEUE_FLAG_NOXMERGES	9	/* No extended merges */
> #define QUEUE_FLAG_ADD_RANDOM	10	/* Contributes to random pool */
> @@ -589,7 +589,7 @@ bool blk_queue_flag_test_and_set(unsigned int flag, struct request_queue *q);
> #define blk_queue_nomerges(q)	test_bit(QUEUE_FLAG_NOMERGES, &(q)->queue_flags)
> #define blk_queue_noxmerges(q)	\
> 	test_bit(QUEUE_FLAG_NOXMERGES, &(q)->queue_flags)
> -#define blk_queue_nonrot(q)	test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
> +#define blk_queue_nonrot(q)	(!((q)->limits.features & BLK_FEAT_ROTATIONAL))
> #define blk_queue_io_stat(q)	test_bit(QUEUE_FLAG_IO_STAT, &(q)->queue_flags)
> #define blk_queue_add_random(q)	test_bit(QUEUE_FLAG_ADD_RANDOM, &(q)->queue_flags)
> #define blk_queue_zone_resetall(q)	\
> -- 
> 2.43.0
> 



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 16:22:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 16:22:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743919.1150913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJy53-0004ue-3k; Wed, 19 Jun 2024 16:22:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743919.1150913; Wed, 19 Jun 2024 16:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJy53-0004uX-16; Wed, 19 Jun 2024 16:22:41 +0000
Received: by outflank-mailman (input) for mailman id 743919;
 Wed, 19 Jun 2024 16:22:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RXUT=NV=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJy52-0004uR-BV
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 16:22:40 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 24c1aea9-2e58-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 18:22:38 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6f13dddf7eso840296666b.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 09:22:38 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f5b5ce0c2sm648594366b.78.2024.06.19.09.22.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 Jun 2024 09:22:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24c1aea9-2e58-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718814157; x=1719418957; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=XV0Ay7uy+lO597QczLYAkB7Vw1GqCFAkG1JuCj/ewig=;
        b=EllZJz68c30BxZHwFPv6jlX6oRtjf09yY9g6t+bZuLNtLYKGxrgHOz2L1DKbiFhjuy
         c8BMfP+dZbdv4cnJGrKKMqEFA7gyxpjOEIIm4rAjlN41iCANjnM7ya+GGbbbnW6wZhWy
         kv18UJQCWDlBjgmxuhf66IdfmMd8heDWA6aoE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718814157; x=1719418957;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XV0Ay7uy+lO597QczLYAkB7Vw1GqCFAkG1JuCj/ewig=;
        b=B1aTUe32i/TI3czPjNlD26nzICpKd8x3ETEI5KMBvnX44kjhO1sQNfOb2jQ/0Cgr13
         3xQWppfSJn0b+I+JtR8ejwDJRIHZ71vnNzZnv/AD35MNsuBUuf45yrOglDfTXirGcxyh
         TdpSu18NhLcOZ8As/99NsDsnkG5HY4ZcOo2lJLZ+l98t/MlAGZeSRwY/q2vNmh6x9nJ/
         +r/xPlOm/3GDgqIOxL70hUZKOfsxUJBz4cf3DDHEw1FRJHxnYBoEfvWtAs/yeP40kWyo
         LwWSrMimMg6nGMGdSBBcuC37laEIZs8GQQ9fY7qkXiG+8jRImcS1BzlIurSerTPcNV+0
         s1ug==
X-Forwarded-Encrypted: i=1; AJvYcCVNm5GvjXHgAmCu5/SOI4gle7udS5Bs0LV/XzsSIbwWcYspLsxOWTF+l2MC8y/HnjelMqizvqgG54AkxuHSDZ51lr9UYbK99IHgJmcDNOg=
X-Gm-Message-State: AOJu0YzWo8J5WGvwcCQWIP5htizdHneWn2uSGADE8mUXkdGzyFcdbPBk
	t99NV3HhoAClBYggrAtKrQKBQfMjhfX2ph+nzpAQVzJ0ICQ4+4habrpQiWU1+asdvoAlxL07RUG
	CGkM=
X-Google-Smtp-Source: AGHT+IG9oxd4/VrXzrbr2pH/jB5dAT1nPp1MMfb1+ypNdoYoVUSD4vUJ0emZ9iDCOgIVqmBWYGgX7g==
X-Received: by 2002:a17:906:40cd:b0:a59:dd90:baeb with SMTP id a640c23a62f3a-a6fab778cdbmr181720066b.48.1718814157513;
        Wed, 19 Jun 2024 09:22:37 -0700 (PDT)
Message-ID: <9186bb9f-384d-426a-b3d3-40c00236be27@citrix.com>
Date: Wed, 19 Jun 2024 17:22:36 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] AMD/IOMMU: Improve register_iommu_exclusion_range()
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240618183128.1981751-1-andrew.cooper3@citrix.com>
 <052cccac-8c8f-4555-953c-2bd9de460f2a@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <052cccac-8c8f-4555-953c-2bd9de460f2a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19/06/2024 8:45 am, Jan Beulich wrote:
> On 18.06.2024 20:31, Andrew Cooper wrote:
>>  * Use 64bit accesses instead of 32bit accesses
>>  * Simplify the constant names
>>  * Pull base into a local variable to avoid it being reloaded because of the
>>    memory clobber in writeq().
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>
>> RFC.  This is my proposed way of cleaning up the whole IOMMU file.  The
>> diffstat speaks for itself.
> Absolutely.
>
>> I've finally found the bit in the AMD IOMMU spec which says 64bit accesses are
>> permitted:
>>
>>   3.4 IOMMU MMIO Registers:
>>
>>   Software access to IOMMU registers may not be larger than 64 bits. Accesses
>>   must be aligned to the size of the access and the size in bytes must be a
>>   power of two. Software may use accesses as small as one byte.
> I take it that the use of 32-bit writes was because of the past need to
> also work in a 32-bit hypervisor, not because of perceived restrictions
> in the spec.

I recall having problems getting writeq() acked in the past, even after
we'd dropped 32bit.

But this is the first time that I've positively found anything in the
spec saying that 64bit accesses are ok.
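As a quick illustration, the quoted rule (no wider than 64 bits, power-of-two size, aligned to the access size) boils down to a couple of bit tests.  This is a standalone sketch; `iommu_mmio_access_ok()` is a made-up name and does not exist in Xen:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Encode the access rules quoted from AMD IOMMU spec section 3.4:
 * accesses may be 1 to 8 bytes wide, must be a power-of-two size,
 * and must be aligned to their own size.  Illustrative only.
 */
static bool iommu_mmio_access_ok(uint64_t offset, unsigned int bytes)
{
    if ( bytes == 0 || bytes > 8 )       /* no wider than 64 bits */
        return false;
    if ( bytes & (bytes - 1) )           /* size must be a power of two */
        return false;
    return !(offset & (bytes - 1));      /* aligned to the access size */
}
```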

>
>> --- a/xen/drivers/passthrough/amd/iommu-defs.h
>> +++ b/xen/drivers/passthrough/amd/iommu-defs.h
>> @@ -338,22 +338,10 @@ union amd_iommu_control {
>>  };
>>  
>>  /* Exclusion Register */
>> -#define IOMMU_EXCLUSION_BASE_LOW_OFFSET		0x20
>> -#define IOMMU_EXCLUSION_BASE_HIGH_OFFSET	0x24
>> -#define IOMMU_EXCLUSION_LIMIT_LOW_OFFSET	0x28
>> -#define IOMMU_EXCLUSION_LIMIT_HIGH_OFFSET	0x2C
>> -#define IOMMU_EXCLUSION_BASE_LOW_MASK		0xFFFFF000U
>> -#define IOMMU_EXCLUSION_BASE_LOW_SHIFT		12
>> -#define IOMMU_EXCLUSION_BASE_HIGH_MASK		0xFFFFFFFFU
>> -#define IOMMU_EXCLUSION_BASE_HIGH_SHIFT		0
>> -#define IOMMU_EXCLUSION_RANGE_ENABLE_MASK	0x00000001U
>> -#define IOMMU_EXCLUSION_RANGE_ENABLE_SHIFT	0
>> -#define IOMMU_EXCLUSION_ALLOW_ALL_MASK		0x00000002U
>> -#define IOMMU_EXCLUSION_ALLOW_ALL_SHIFT		1
>> -#define IOMMU_EXCLUSION_LIMIT_LOW_MASK		0xFFFFF000U
>> -#define IOMMU_EXCLUSION_LIMIT_LOW_SHIFT		12
>> -#define IOMMU_EXCLUSION_LIMIT_HIGH_MASK		0xFFFFFFFFU
>> -#define IOMMU_EXCLUSION_LIMIT_HIGH_SHIFT	0
>> +#define IOMMU_MMIO_EXCLUSION_BASE           0x20
>> +#define   EXCLUSION_RANGE_ENABLE            (1 << 0)
>> +#define   EXCLUSION_ALLOW_ALL               (1 << 1)
>> +#define IOMMU_MMIO_EXCLUSION_LIMIT          0x28
> Just one question here: Previously you suggested we switch to bitfields
> for anything like this, and we've already done so with e.g.
> union amd_iommu_control and union amd_iommu_ext_features. IOW I wonder
> if we wouldn't better strive to be consistent in this regard. Or if not,
> what the (written or unwritten) guidelines are when to use which
> approach.

We've got two very different kinds of things here.

The device table/etc are in-memory WB data structures which we're
interpreting and editing routinely.  They're completely full of bits and
small fields, and letting the compiler do the hard work there is
preferable; certainly in terms of legibility.

This example is an MMIO register (in a BAR on the IOMMU PCI device, even
though we find the address in the IVRS).  We set it up once at boot and
don't touch it afterwards.

So while we could make a struct for it, we'd still need to get it into a
form that we can writeq(), and that's more code than the single case
where we need to put two metadata bits into an address.
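For illustration, a compile-and-run sketch of the shape this takes with the constants from the patch.  `writeq()` and the BAR are mocked here, and `set_exclusion_range()` is a made-up name, not Xen's actual `register_iommu_exclusion_range()`:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Register layout from the patch under discussion. */
#define IOMMU_MMIO_EXCLUSION_BASE   0x20
#define   EXCLUSION_RANGE_ENABLE    (1 << 0)
#define   EXCLUSION_ALLOW_ALL       (1 << 1)
#define IOMMU_MMIO_EXCLUSION_LIMIT  0x28

/* Mock of Xen's writeq(): store a 64-bit value into a fake BAR. */
static uint8_t fake_bar[0x1000];
static void writeq(uint64_t val, void *addr)
{
    memcpy(addr, &val, sizeof(val));
}

/*
 * Shape of the setup after the cleanup: one 64-bit write per register,
 * with the two metadata bits folded into the base address.
 */
static void set_exclusion_range(void *base, uint64_t start, uint64_t limit)
{
    writeq(start | EXCLUSION_RANGE_ENABLE | EXCLUSION_ALLOW_ALL,
           (uint8_t *)base + IOMMU_MMIO_EXCLUSION_BASE);
    writeq(limit, (uint8_t *)base + IOMMU_MMIO_EXCLUSION_LIMIT);
}
```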

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 16:30:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 16:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743926.1150923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyCF-0006ne-Qs; Wed, 19 Jun 2024 16:30:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743926.1150923; Wed, 19 Jun 2024 16:30:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyCF-0006nX-OD; Wed, 19 Jun 2024 16:30:07 +0000
Received: by outflank-mailman (input) for mailman id 743926;
 Wed, 19 Jun 2024 16:30:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ucR9=NV=ziepe.ca=jgg@srs-se1.protection.inumbo.net>)
 id 1sJyCE-0006iH-62
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 16:30:06 +0000
Received: from mail-qk1-x72d.google.com (mail-qk1-x72d.google.com
 [2607:f8b0:4864:20::72d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2e1719ca-2e59-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 18:30:03 +0200 (CEST)
Received: by mail-qk1-x72d.google.com with SMTP id
 af79cd13be357-797f222c9f9so375414385a.3
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 09:30:03 -0700 (PDT)
Received: from ziepe.ca
 (hlfxns017vw-142-68-80-239.dhcp-dynamic.fibreop.ns.bellaliant.net.
 [142.68.80.239]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5ee8bc0sm79593276d6.121.2024.06.19.09.30.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 09:30:01 -0700 (PDT)
Received: from jgg by wakko with local (Exim 4.95)
 (envelope-from <jgg@ziepe.ca>) id 1sJyC8-0059Sy-QH;
 Wed, 19 Jun 2024 13:30:00 -0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e1719ca-2e59-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ziepe.ca; s=google; t=1718814602; x=1719419402; darn=lists.xenproject.org;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:from:to:cc:subject:date:message-id:reply-to;
        bh=3c7XeNS34MogaG1zO0Gxm1rH0neHp06/RmAfCCHTdTs=;
        b=VxuUqWdULIrVLjBXv76NDuX2GesmZVDlc3xW9WM80HfAJOIhuB7YTWgggl1WJfCcex
         /7WRTSAcRU7JqKA84Dn5RrnjAcJJdOEeSXFLAJxrm5Jr6EwzV5qUHfd5Wo8Vwha+8y+g
         cjtPgpYbJFKG32SGRh3gX+ARSyRsWaqTpf7IXXKzla5txjyCiDJcpG2jIlQK0SoxCBbR
         0Ngqt7ql55nGO7ufO5HQa/3AXh9C140WpeACqsHx0zkZhyKuFltlMmQirkqQsTmE2Wow
         c0GqWRzaNsZHp/vDH+OQUP5OBS8LQ3TZyD9P+vmRtTTAmxdE9yH8DOIQFW6VWzcZtDry
         7pvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718814602; x=1719419402;
        h=in-reply-to:content-disposition:mime-version:references:message-id
         :subject:cc:to:from:date:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3c7XeNS34MogaG1zO0Gxm1rH0neHp06/RmAfCCHTdTs=;
        b=D9PzISYXCvf429oySpuCmdpgPjY9hIe7zMo3arsmaoZ3Vl96R67omh02q0mTG6Jhc+
         I0Q5zIRb4EQ6kUmT7y0U1hSVEFoJ15FHU8pkqbk8rmOQRsj+LtcbnUXAEBGaeiQiXUwr
         4impiwQ+bVoOq0ZtI6IQUstsDvypYZMT2D8tQuj0I3vkBWPtySMc2F9Johvtm84L0jXD
         TxEiR3Zhe++ZNXclSvP6+VZB+mONQqiMxK4u70g4uQOmsMLLip+aXZX0Zj0adzLGp3wy
         te3QS94VKxtgxfGh7CGNxItNsm9oMaCwSQKvUOzGfA7j3jq4RxLdQ/WCz+0RDHpTmCnu
         SC4g==
X-Gm-Message-State: AOJu0YwhJWh5pqdYr+xgtoJBaBDj47zqi/2O6uZW26sOmJGDLLWP1ShI
	uwU8lEUfjgeazAqpAR7+eonXoUb5xnTUK9lGnTA12XNcJeo53mAAl9ZlMq2emoMvxyAHpYf1Ah+
	p
X-Google-Smtp-Source: AGHT+IFNd2KMs9vwOpIBWhDkSpGzT3UnUk3IdJB0sc4eGBGCnxklfZldC9FeJXialSmZ2UnfepoWDA==
X-Received: by 2002:ad4:4484:0:b0:6b5:6a1:f899 with SMTP id 6a1803df08f44-6b506a1fbf6mr19249466d6.10.1718814602415;
        Wed, 19 Jun 2024 09:30:02 -0700 (PDT)
Date: Wed, 19 Jun 2024 13:30:00 -0300
From: Jason Gunthorpe <jgg@ziepe.ca>
To: Teddy Astie <teddy.astie@vates.tech>
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH] iommu/xen: Add Xen PV-IOMMU driver
Message-ID: <20240619163000.GK791043@ziepe.ca>
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>

On Thu, Jun 13, 2024 at 01:50:22PM +0000, Teddy Astie wrote:

> +struct iommu_domain *xen_iommu_domain_alloc(unsigned type)
> +{
> +	struct xen_iommu_domain *domain;
> +	u16 ctx_no;
> +	int ret;
> +
> +	if (type & IOMMU_DOMAIN_IDENTITY) {
> +		/* use default domain */
> +		ctx_no = 0;

Please use the new ops, domain_alloc_paging and the static identity domain.

> +static struct iommu_group *xen_iommu_device_group(struct device *dev)
> +{
> +	if (!dev_is_pci(dev))
> +		return ERR_PTR(-ENODEV);
> +

device_group is only called after probe_device.  Since you already
exclude !pci during probe, there is no need for this wrapper; just set
the op directly to pci_device_group.

> +static void xen_iommu_release_device(struct device *dev)
> +{
> +	int ret;
> +	struct pci_dev *pdev;
> +	struct pv_iommu_op op = {
> +		.subop_id = IOMMUOP_reattach_device,
> +		.flags = 0,
> +		.ctx_no = 0 /* reattach device back to default context */
> +	};

Consider whether you can use release_domain for this; I think this is
probably BLOCKED domain behavior.

> +	if (!dev_is_pci(dev))
> +		return;

No op is ever called on a non-probed device; remove all these checks.

> +static int xen_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
> +							   phys_addr_t paddr, size_t pgsize, size_t pgcount,
> +							   int prot, gfp_t gfp, size_t *mapped)
> +{
> +	size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
> +	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
> +	struct pv_iommu_op op = {
> +		.subop_id = IOMMUOP_map_pages,
> +		.flags = 0,
> +		.ctx_no = dom->ctx_no
> +	};
> +	/* NOTE: paddr is actually bound to pfn, not gfn */
> +	uint64_t pfn = addr_to_pfn(paddr);
> +	uint64_t dfn = addr_to_pfn(iova);
> +	int ret = 0;
> +
> +	if (WARN(!dom->ctx_no, "Tried to map page to default context"))
> +		return -EINVAL;

A paging domain should be the only domain whose ops have a populated
map, so this should be made impossible by construction.
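To sketch what "impossible by construction" means here: if identity and paging domains get separate ops tables, only the paging ops carry a map callback at all.  This is a toy model with made-up names (`toy_domain_ops` etc.), not the real struct iommu_domain_ops:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Toy model: identity and paging domains get separate ops tables, and
 * only the paging ops carry a map callback, so an identity domain can
 * never reach a map path and no runtime ctx_no check is needed.
 */
struct toy_domain_ops {
    int (*map_pages)(unsigned long iova, unsigned long paddr, size_t count);
};

static int toy_map_pages(unsigned long iova, unsigned long paddr, size_t count)
{
    (void)iova; (void)paddr; (void)count;
    return 0;   /* would issue the PV-IOMMU map hypercall */
}

/* Identity domain: no map op by construction. */
static const struct toy_domain_ops identity_ops = { .map_pages = NULL };

/* Paging domain: the only ops with a populated map. */
static const struct toy_domain_ops paging_ops = { .map_pages = toy_map_pages };
```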

Jason


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 16:31:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 16:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743931.1150932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyDB-0007J9-3L; Wed, 19 Jun 2024 16:31:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743931.1150932; Wed, 19 Jun 2024 16:31:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyDB-0007J2-0p; Wed, 19 Jun 2024 16:31:05 +0000
Received: by outflank-mailman (input) for mailman id 743931;
 Wed, 19 Jun 2024 16:31:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RXUT=NV=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sJyD9-0007IR-Uk
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 16:31:03 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 51b6ff07-2e59-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 18:31:03 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-57d1782679fso786922a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 09:31:03 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72cdfc4sm8468996a12.19.2024.06.19.09.31.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 09:31:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51b6ff07-2e59-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718814662; x=1719419462; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=eNRqIMjqDxOHB6YZkGrW+54h7mw46IrFeNy59xj3dpo=;
        b=M7hKZOBO7JcfPJIh6JIVbbNZT/XHlGxZGqEhjsY5Y05NfiN/bP0su80St5wX0ZgSGJ
         +UEFuh0vKLs3KC+jzMKxiQsrQY4q/1Dk9k1JoEM64LGxB9Stg2BmLp9an9834WDesQu5
         mXncwU7Lv05qo/RzaL4v6qrk3ra1b8ec5CSYI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718814662; x=1719419462;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=eNRqIMjqDxOHB6YZkGrW+54h7mw46IrFeNy59xj3dpo=;
        b=rYCWVAOVSgoEciYXX5g1H8ce4YdXO+Jl02V+x/khjb/sUAQpRDkkyiypXQP3scWUs3
         N8GNLUR6qBQWDlakp9i+e3uIaa3aPfaHeMnfJFdXzJ5vBk93LH3ypqvAddn/ygo2k4mb
         +iJgrRCvqyD5WvYvP0NtUQYI95dfD6jGMKPqaTTUGElh3cbkCyUlHc4CFMA6isNaV2e2
         V7dPRr1rMbXF4BWMfUVj4PRgkPBTZrr+0Bh44TxmUdEr0vqXCVMJPpWXhXOIkqtXlhAg
         v9g8VaDErx2+O3YF4K7qq7GZQ27izZS1V/UMz4jSbo12Sc8tX8+C7f8+Cx5PAX1S91iI
         OyLQ==
X-Gm-Message-State: AOJu0YyDxy3FqXIlbUGZ35KjiwgCR1mPS9BlMy7O3afIEaJGLnoDRV/u
	R5xXfIPf1HHn2RSf51qU1gXLKifIE9Ngv3rhY+PDr7htHnkbPzKvhjCuR2siqeRSZYLZ0FzmQI4
	I+TM=
X-Google-Smtp-Source: AGHT+IEgA4/HyMhc+2wKZZn6cXI1BIGalLsPk0Gby+OcIz04oDGrUjOq1SQIC2Hlmrzn0Usi6pTxeg==
X-Received: by 2002:a50:d50a:0:b0:57c:628d:f0eb with SMTP id 4fb4d7f45d1cf-57d07e6f85emr1809324a12.24.1718814661884;
        Wed, 19 Jun 2024 09:31:01 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH] xen/guest_access: Fix accessors for 32bit builds of Xen
Date: Wed, 19 Jun 2024 17:31:00 +0100
Message-Id: <20240619163100.2556555-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Gitlab CI reports an ARM32 randconfig failure as follows:

  In file included from common/livepatch.c:9:
  common/livepatch.c: In function ‘livepatch_list’:
  ./include/xen/guest_access.h:130:25: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    130 |     __raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
        |                         ^
  common/livepatch.c:1283:18: note: in expansion of macro ‘__copy_to_guest_offset’
   1283 |             if ( __copy_to_guest_offset(list->name, name_offset,
        |                  ^~~~~~~~~~~~~~~~~~~~~~
  ./include/xen/guest_access.h:130:25: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
    130 |     __raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
        |                         ^
  common/livepatch.c:1287:17: note: in expansion of macro ‘__copy_to_guest_offset’
   1287 |                 __copy_to_guest_offset(list->metadata, metadata_offset,
        |                 ^~~~~~~~~~~~~~~~~~~~~~

This isn't specific to ARM32; it's LIVEPATCH on any 32bit build of Xen.

Both name_offset and metadata_offset are uint64_t, meaning that the
expression:

  (d_ + (off) * sizeof(*(hnd).p))

gets promoted to uint64_t, and is too wide to cast back to a pointer in 32bit
builds.  The expression needs casting through (unsigned long) before it can be
cast to (void *).

Xen has the _p() wrapper to do precisely this.
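That promotion can be seen in a standalone sketch.  The `_p()` below is a local stand-in with the same shape as Xen's wrapper, and `guest_ptr_at()` is a made-up name for illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Local stand-in with the same shape as Xen's _p(): squeeze the value
 * through unsigned long (pointer-sized on both 32bit and 64bit builds)
 * before converting to a pointer. */
#define _p(x) ((void *)(unsigned long)(x))

/*
 * With a uint64_t offset, the usual arithmetic conversions promote the
 * whole expression to uint64_t even though d_ is unsigned long, so a
 * direct (void *)(d_ + off * size) triggers -Wint-to-pointer-cast on
 * 32bit builds.  Casting through unsigned long first is fine.
 */
static void *guest_ptr_at(unsigned long d_, uint64_t off, size_t size)
{
    return _p(d_ + off * size);
}
```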

No functional change.

Link: https://gitlab.com/xen-project/xen/-/jobs/7136417308
Fixes: 43d5c5d5f70b ("xen: avoid UB in guest handle arithmetic")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

For 4.19, as this was found by CI.
---
 xen/include/xen/guest_access.h | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index 96dbef2e0251..969813762c25 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -65,7 +65,7 @@
     /* Check that the handle is not for a const type */ \
     void *__maybe_unused _t = (hnd).p;                  \
     (void)((hnd).p == _s);                              \
-    raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
+    raw_copy_to_guest(_p(d_ + (off) * sizeof(*_s)),     \
                       _s, (nr) * sizeof(*_s));          \
 })
 
@@ -75,7 +75,7 @@
  */
 #define clear_guest_offset(hnd, off, nr) ({             \
     unsigned long d_ = (unsigned long)(hnd).p;          \
-    raw_clear_guest((void *)(d_ + (off) * sizeof(*(hnd).p)), \
+    raw_clear_guest(_p(d_ + (off) * sizeof(*(hnd).p)),  \
                     (nr) * sizeof(*(hnd).p));           \
 })
 
@@ -87,7 +87,7 @@
     unsigned long s_ = (unsigned long)(hnd).p;          \
     typeof(*(ptr)) *_d = (ptr);                         \
     raw_copy_from_guest(_d,                             \
-                        (const void *)(s_ + (off) * sizeof(*_d)), \
+                        _p(s_ + (off) * sizeof(*_d)),   \
                         (nr) * sizeof(*_d));            \
 })
 
@@ -127,13 +127,13 @@
     /* Check that the handle is not for a const type */     \
     void *__maybe_unused _t = (hnd).p;                      \
     (void)((hnd).p == _s);                                  \
-    __raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
+    __raw_copy_to_guest(_p(d_ + (off) * sizeof(*_s)),       \
                       _s, (nr) * sizeof(*_s));              \
 })
 
 #define __clear_guest_offset(hnd, off, nr) ({               \
     unsigned long d_ = (unsigned long)(hnd).p;              \
-    __raw_clear_guest((void *)(d_ + (off) * sizeof(*(hnd).p)), \
+    __raw_clear_guest(_p(d_ + (off) * sizeof(*(hnd).p)),    \
                       (nr) * sizeof(*(hnd).p));             \
 })
 
@@ -141,7 +141,7 @@
     unsigned long s_ = (unsigned long)(hnd).p;                  \
     typeof(*(ptr)) *_d = (ptr);                                 \
     __raw_copy_from_guest(_d,                                   \
-                          (const void *)(s_ + (off) * sizeof(*_d)), \
+                          _p(s_ + (off) * sizeof(*_d)),         \
                           (nr) * sizeof(*_d));                  \
 })
 

base-commit: 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 16:56:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 16:56:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743942.1150943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyc8-0002Uq-2m; Wed, 19 Jun 2024 16:56:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743942.1150943; Wed, 19 Jun 2024 16:56:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyc7-0002Uj-W3; Wed, 19 Jun 2024 16:56:51 +0000
Received: by outflank-mailman (input) for mailman id 743942;
 Wed, 19 Jun 2024 16:56:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=i/NX=NV=gmail.com=alexdeucher@srs-se1.protection.inumbo.net>)
 id 1sJyc6-0002Ud-8n
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 16:56:50 +0000
Received: from mail-pj1-x1035.google.com (mail-pj1-x1035.google.com
 [2607:f8b0:4864:20::1035])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eab655e0-2e5c-11ef-90a3-e314d9c70b13;
 Wed, 19 Jun 2024 18:56:49 +0200 (CEST)
Received: by mail-pj1-x1035.google.com with SMTP id
 98e67ed59e1d1-2c7dbdc83bfso43288a91.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 09:56:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eab655e0-2e5c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718816207; x=1719421007; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=S2pOiKZCM6AEv9fRPfAqr24dsxT32FXEwY2yo/xeCPU=;
        b=e/H5QM9FmpfOOeg60BIutvS/SwmPC537/DjWIt+yOliJmrNrKIcqwRudCjN9ihiFk8
         64hNcUFejn/avhlbYuqRQs1PZeKzohmKXxA7MG0Yuo6UMf6IgN663/tSD5FxEDWdenvl
         zz4aBK4/nIBjhJta1xCdAs8eQxYqNxnHYyIREgr1hdLSasM+WOfbv6XJFaKZzDSpzHu1
         ASZW26lhqKQMSj0fzOCqROuEuwcyrvXwhBO4XWEvhL7+butvlNyqscxlf/Ysi/V0NL/r
         6oI3S/Mg0gle1KnpapHkqArrV/FTmrOodFoNCP8GjllArtapTXoRbLBP3gFeR/Jypnd5
         mN5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718816207; x=1719421007;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=S2pOiKZCM6AEv9fRPfAqr24dsxT32FXEwY2yo/xeCPU=;
        b=lhhulAum42kjr+6o606rN3KLBUP5FamxfLVsqPROjnPzinuLS7h+WUl1UiS17Csuoy
         haOku5xRH04oX+3Ch9gq/ohZhduEqULuIpfesH7tiPWOwxgBpb8SaU6myU0qf8LviSuU
         17oN3wGb2GZUP5Es0M0EPAEmaNsi9ovmjpPPLP0sgOBONqmRadZ4LPc4lrSpGF1g/y24
         oi8lW3+dxWnclEdlb/R/VDUmv9FNhaxdWPwhsDxbnSSln1ztb7OfbzNfNdXuzdPmJKtC
         CggGynQSxtsgiACrmVv7mgKxMjww+ncWjkXDvxCPzs0RRV0SPxf/rYXehvZ4LeGoCLJu
         wTww==
X-Forwarded-Encrypted: i=1; AJvYcCXbmaD9Wz/ydQ6eN+pcp2FxBjF8D6IEgOI49JuaUOo69zhZXiTsTiVGrL5rOXQJTXGylbS34bl7sZ/2Aj4RQSBaVsrfN/ebVibCKFoSBcg=
X-Gm-Message-State: AOJu0YxjKiXKAfZp/yrQ1qdg2xK9X/TYuI8W3uvyB+llJuCw0Q+lzUDX
	eWcqiAO3AAQbe7s4HSI77OXBBTel+y7Z9RiBKZCdinLMpaw/Amwa69KAUAz8/NOFYFD9gcvdbdF
	6cWISbBQ+wrMFzeWQlLHULhehH6Y=
X-Google-Smtp-Source: AGHT+IERyacgj62eO2mHHlRd0wPullKaaMd7l6eTgV3VKINVH5yxaU0nlHl6jCJTt+Z3UyH6vlmBZQUH4XKAyUIMX9A=
X-Received: by 2002:a17:90a:fd0a:b0:2c4:e5f4:a20e with SMTP id
 98e67ed59e1d1-2c7b5d8ac46mr2856698a91.44.1718816207374; Wed, 19 Jun 2024
 09:56:47 -0700 (PDT)
MIME-Version: 1.0
References: <Zms9tjtg06kKtI_8@itl-email> <440d6444-3b02-4756-a4fa-02aae3b24b14@suse.com>
 <ZmvvlF0gpqFB7UC9@macbook> <af1f966b-b28f-4a14-b932-3f1523adeff0@suse.com>
 <ZmwByZnn5vKcVLKI@macbook> <Zm-FidjSK3mOieSC@itl-email> <Zm_p1QvoZcjQ4gBa@macbook>
 <ZnCglhYlXmRPBZXE@mail-itl> <ZnDbaply6KaBUKJb@itl-email> <0b00c8f9-fb79-4b11-ae22-931205653203@amd.com>
 <ZnGVu9TjHKiEqxsu@itl-email> <13ee25a3-91ef-48da-a58d-f4b972fe0c4f@amd.com>
In-Reply-To: <13ee25a3-91ef-48da-a58d-f4b972fe0c4f@amd.com>
From: Alex Deucher <alexdeucher@gmail.com>
Date: Wed, 19 Jun 2024 12:56:35 -0400
Message-ID: <CADnq5_N9-Db3+=JB1UrcZZho9psU-mnXz0xnatYJ+oDL24_A7g@mail.gmail.com>
Subject: Re: Design session notes: GPU acceleration in Xen
To: =?UTF-8?Q?Christian_K=C3=B6nig?= <christian.koenig@amd.com>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>, 
	=?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?= <marmarek@invisiblethingslab.com>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	"Pelloux-prayer, Pierre-eric" <Pierre-eric.Pelloux-prayer@amd.com>, Jan Beulich <jbeulich@suse.com>, 
	Xenia Ragiadakou <burzalodowa@gmail.com>, Ray Huang <ray.huang@amd.com>, 
	Xen developer discussion <xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	Direct Rendering Infrastructure development <dri-devel@lists.freedesktop.org>, 
	Qubes OS Development Mailing List <qubes-devel@googlegroups.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Jun 19, 2024 at 12:27 PM Christian König
<christian.koenig@amd.com> wrote:
>
> On 18.06.24 at 16:12, Demi Marie Obenour wrote:
> > On Tue, Jun 18, 2024 at 08:33:38AM +0200, Christian König wrote:
> > > On 18.06.24 at 02:57, Demi Marie Obenour wrote:
> > >> On Mon, Jun 17, 2024 at 10:46:13PM +0200, Marek Marczykowski-Górecki
> > >> wrote:
> > >>> On Mon, Jun 17, 2024 at 09:46:29AM +0200, Roger Pau Monné wrote:
> > >>>> On Sun, Jun 16, 2024 at 08:38:19PM -0400, Demi Marie Obenour wrote:
> > >>>>> In both cases, the device physical
> > >>>>> addresses are identical to dom0's physical addresses.
> > >>>>
> > >>>> Yes, but a PV dom0 physical address space can be very scattered.
> > >>>>
> > >>>> IIRC there's a hypercall to request physically contiguous memory for
> > >>>> PV, but you don't want to be using that every time you allocate a
> > >>>> buffer (not sure it would support the sizes needed by the GPU
> > >>>> anyway).
> > >>
> > >>> Indeed that isn't going to fly. In older Qubes versions we had PV
> > >>> sys-net with PCI passthrough for a network card. After some uptime it
> > >>> was basically impossible to restart and still have enough contiguous
> > >>> memory for a network driver, and there it was about _much_ smaller
> > >>> buffers, like 2M or 4M. At least not without shutting down a lot more
> > >>> things to free some more memory.
> > >>
> > >> Ouch!  That makes me wonder if all GPU drivers actually need physically
> > >> contiguous buffers, or if it is (as I suspect) driver-specific. CCing
> > >> Christian König who has mentioned issues in this area.
> >
> > > Well, GPUs don't need physically contiguous memory to function, but if
> > > they only get 4k pages to work with it means a quite large (up to 30%)
> > > performance penalty.
> >
> > The status quo is "no GPU acceleration at all", so 70% of bare metal
> > performance would be amazing right now.
>
> Well, AMD uses the native context approach in Xen, which delivers over
> 90% of bare metal performance.
>
> Pierre-Eric can tell you more, but we certainly have GPU solutions in
> production with Xen which would suffer greatly if the underlying
> memory were fragmented like this.
>
> >   However, the implementation
> > should not preclude eliminating this performance penalty in the future.
> >
> > What size pages do GPUs need for good performance?  Is it the same as
> > CPU huge pages?
>
> 2MiB are usually sufficient.

Larger pages are helpful for both system memory and VRAM, but it's
more important for VRAM.

Alex

>
> Regards,
> Christian.
>
> >   PV dom0 doesn't get huge pages at all, but PVH and HVM
> > guests do, and the goal is to move away from PV guests as they have lots
> > of unrelated problems.
> >
> > > So scattering memory like you described is probably a very bad idea
> > > if you want any halfway decent performance.
> >
> > For an initial prototype a 30% performance penalty is acceptable, but
> > it's good to know that memory fragmentation needs to be avoided.
> >
> > > Regards,
> > > Christian
> >
> > Thanks for the prompt response!
>


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 17:09:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 17:09:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743952.1150963 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyoM-00054r-EC; Wed, 19 Jun 2024 17:09:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743952.1150963; Wed, 19 Jun 2024 17:09:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyoM-00054k-A0; Wed, 19 Jun 2024 17:09:30 +0000
Received: by outflank-mailman (input) for mailman id 743952;
 Wed, 19 Jun 2024 17:09:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vpHL=NV=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sJyoL-0004ov-BR
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 17:09:29 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id af83f3af-2e5e-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 19:09:28 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id 715DF4EE0739;
 Wed, 19 Jun 2024 19:09:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af83f3af-2e5e-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 1/2] automation/eclair_analysis: deviate MISRA C Rule 21.2
Date: Wed, 19 Jun 2024 19:09:09 +0200
Message-Id: <5b8364528a9ece8fec9f0e70bee81c2ea94c1820.1718816397.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rule 21.2 reports identifiers reserved for the C and POSIX standard
libraries. All of Xen's translation units are compiled with the option
-nostdinc, which guarantees that these libraries are not used; a
justification is therefore provided for allowing uses of such
identifiers in the project.
Builtins starting with "__builtin_" still remain available.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 447c1e6661..9fa9a7f01c 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -487,6 +487,17 @@ leads to a violation of the Rule are deviated."
 # Series 21.
 #
 
+-doc_begin="Rules 21.1 and 21.2 report identifiers reserved for the C and POSIX
+standard libraries: if these libraries are not used, there is no reason to avoid
+such identifiers. All of Xen's translation units are compiled with the option
+-nostdinc, which guarantees that these libraries are not used. Some compilers
+could perform optimizations using built-in functions: this risk is partially
+addressed by the compilation option -fno-builtin. Builtins starting with
+\"__builtin_\" still remain available."
+-config=MC3R1.R21.1,macros={safe, "!^__builtin_.*$"}
+-config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}
+-doc_end
+
 -doc_begin="Xen does not use the functions provided by the Standard Library, but
 implements a set of functions that share the same names as their Standard Library equivalent.
 The implementation of these functions is available in source form, so the undefined, unspecified
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 17:09:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 17:09:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743951.1150953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyoI-0004p8-6M; Wed, 19 Jun 2024 17:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743951.1150953; Wed, 19 Jun 2024 17:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyoI-0004p1-2z; Wed, 19 Jun 2024 17:09:26 +0000
Received: by outflank-mailman (input) for mailman id 743951;
 Wed, 19 Jun 2024 17:09:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vpHL=NV=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sJyoG-0004ov-UA
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 17:09:24 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ac499f95-2e5e-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 19:09:22 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id B02384EE0738;
 Wed, 19 Jun 2024 19:09:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac499f95-2e5e-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 0/2] automation/eclair_analysis: deviate MISRA C Rule 21.2
Date: Wed, 19 Jun 2024 19:09:08 +0200
Message-Id: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series aims to address several violations of Rule 21.2, which states the
following: A reserved identifier or reserved macro name shall not be declared.
The series contains two patches: one changes x86/APIC, which used an identifier
starting with '__'; the second deviates all reserved identifiers with the
exception of those starting with "__builtin_", which still remain available.

Alessandro Zucchelli (1):
  automation/eclair_analysis: deviate MISRA C Rule 21.2

Nicola Vetrini (1):
  x86/APIC: address violation of MISRA C Rule 21.2

 automation/eclair_analysis/ECLAIR/deviations.ecl | 11 +++++++++++
 xen/arch/x86/apic.c                              |  4 ++--
 2 files changed, 13 insertions(+), 2 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 17:09:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 17:09:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743953.1150973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyoR-0005OL-Kt; Wed, 19 Jun 2024 17:09:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743953.1150973; Wed, 19 Jun 2024 17:09:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJyoR-0005O7-Hr; Wed, 19 Jun 2024 17:09:35 +0000
Received: by outflank-mailman (input) for mailman id 743953;
 Wed, 19 Jun 2024 17:09:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vpHL=NV=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sJyoQ-0004ov-Ce
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 17:09:34 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b28c2bee-2e5e-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 19:09:33 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id 8A0484EE073D;
 Wed, 19 Jun 2024 19:09:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b28c2bee-2e5e-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 2/2] x86/APIC: address violation of MISRA C Rule 21.2
Date: Wed, 19 Jun 2024 19:09:10 +0200
Message-Id: <4a31cfc5e8d4e2c5e159ca4d67ac477feb000073.1718816397.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Nicola Vetrini <nicola.vetrini@bugseng.com>

The rule disallows declaring an identifier reserved by the C standard.
All identifiers starting with '__' are reserved for any use, so the label
can be renamed in order to avoid the violation.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 xen/arch/x86/apic.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/apic.c b/xen/arch/x86/apic.c
index 6567af685a..2a60e6fe26 100644
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -925,7 +925,7 @@ void __init init_apic_mappings(void)
     unsigned long apic_phys;
 
     if ( x2apic_enabled )
-        goto __next;
+        goto next;
     /*
      * If no local APIC can be found then set up a fake all
      * zeroes page to simulate the local APIC and another
@@ -941,7 +941,7 @@ void __init init_apic_mappings(void)
     apic_printk(APIC_VERBOSE, "mapped APIC to %08Lx (%08lx)\n", APIC_BASE,
                 apic_phys);
 
-__next:
+next:
     /*
      * Fetch the APIC ID of the BSP in case we have a
      * default configuration (or the MP table is broken).
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 17:35:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 17:35:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743979.1150983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJzDa-0001nc-KZ; Wed, 19 Jun 2024 17:35:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743979.1150983; Wed, 19 Jun 2024 17:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sJzDa-0001nV-HE; Wed, 19 Jun 2024 17:35:34 +0000
Received: by outflank-mailman (input) for mailman id 743979;
 Wed, 19 Jun 2024 17:35:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJzDY-0001nL-Jq; Wed, 19 Jun 2024 17:35:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJzDY-00016n-EF; Wed, 19 Jun 2024 17:35:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sJzDY-0002UC-4h; Wed, 19 Jun 2024 17:35:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sJzDY-0002ja-4D; Wed, 19 Jun 2024 17:35:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HCSUHkgiroxHhol2VW/R3WqrLwsT4ZYQ7rWmr34pdNU=; b=E1vDC4py+YF7P7Nrj3c3UKl7/g
	MNkphDxmNLBLW3fjEW3qVHw5a2TzxM9nrJ2KSlXM4LDF43DZaJYbvFo90aV+AeZ2kH0W++8zAljN9
	Xca0kbhYxyjng1rPDRQaJVVP8wbuNX69D8QWCajTq8rLKeJtm2cICqR64SQohNJbEj8Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186412-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186412: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 17:35:32 +0000

flight 186412 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186412/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186411

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    0 days
Testing same since   186412  2024-06-19 15:03:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 19 14:11:07 2024 +0200

    xen: avoid UB in guest handle arithmetic
    
    At least XENMEM_memory_exchange can have huge values passed in the
    nr_extents and nr_exchanged fields. Adding such values to pointers can
    overflow, resulting in UB. Cast respective pointers to "unsigned long"
    while at the same time making the necessary multiplication explicit.
    Remaining arithmetic is, despite there possibly being mathematical
    overflow, okay as per the C99 spec: "A computation involving unsigned
    operands can never overflow, because a result that cannot be represented
    by the resulting unsigned integer type is reduced modulo the number that
    is one greater than the largest value that can be represented by the
    resulting type." The overflow that we need to guard against is checked
    for in array_access_ok().
    
    Note that in / down from array_access_ok() the address value is only
    ever cast to "unsigned long" anyway, which is why in the invocation from
    guest_handle_subrange_okay() the value doesn't need casting back to
    pointer type.
    
    In compat grant table code change two guest_handle_add_offset() to avoid
    passing in negative offsets.
    
    Since {,__}clear_guest_offset() need touching anyway, also deal with
    another (latent) issue there: They were losing the handle type, i.e. the
    size of the individual objects accessed. Luckily the few users we
    presently have all pass char or uint8 handles.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 267122a24c499d26278ab2dbdfb46ebcaaf38474
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 16:14:36 2021 +0100

    x86/defns: Clean up X86_{XCR0,XSS}_* constants
    
    With the exception of one case in read_bndcfgu() which can use ilog2(),
    the *_POS defines are unused.  Drop them.
    
    X86_XCR0_X87 is the name used by both the SDM and APM, rather than
    X86_XCR0_FP.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 71cacfb035f4a78ee10970dc38a3baa04d387451
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpuid: Fix handling of XSAVE dynamic leaves
    
    First, if XSAVE is available in hardware but not visible to the guest, the
    dynamic leaves shouldn't be filled in.
    
    Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
    host/guest state automatically, but there is provision for "host only" bits to
    be set, so the implications are still accurate.
    
    Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
    check it at boot.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit fdb7e77fea4cb1c98dc51dd891a47f7e94612ad4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpu-policy: Simplify recalculate_xstate()
    
    Make use of xstate_uncompressed_size() helper rather than maintaining the
    running calculation while accumulating feature components.
    
    The rest of the CPUID data can come direct from the raw cpu policy.  All
    per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
    instructions.
    
    Use for_each_set_bit() rather than opencoding a slightly awkward version of
    it.  Mask the attributes in ecx down based on the visible features.  This
    isn't actually necessary for any components or attributes defined at the time
    of writing (up to AMX), but is added out of an abundance of caution.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit df09dfb94de66f7523837c050616a382aa2c7d17
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
    
    We're soon going to need a compressed helper of the same form.
    
    The size of the uncompressed image depends on the single element with the
    largest offset + size.  Sadly this isn't always the element with the largest
    index.
    
    Name the per-xstate-component cpu_policy structure, for legibility of the logic
    in xstate_uncompressed_size().  Cross-check with hardware during boot, and
    remove hw_uncompressed_size().
    
    This means that the migration paths don't need to mess with XCR0 just to
    sanity check the buffer size.  It also means we can drop the "fastpath" check
    against xfeature_mask (there to skip some XCR0 writes); this path is going to
    be dead logic the moment Xen starts using supervisor states itself.
    
    The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
    be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
    prior to setup_xstate_features() on the BSP.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit a09022a09e1a79b3f9574993993bfad803b32596
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 23 00:55:34 2024 +0100

    x86/boot: Collect the Raw CPU Policy earlier on boot
    
    This is a tangle, but it's a small step in the right direction.
    
    In the following change, xstate_init() is going to start using the Raw policy.
    
    calculate_raw_cpu_policy() is sufficiently separate from the other policies to
    safely move like this.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit d31a111940de5431c8bf465b1d38b89f1130a24b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 21 17:56:57 2020 +0000

    x86/xstate: Cross-check dynamic XSTATE sizes at boot
    
    Right now, xstate_ctxt_size() performs a cross-check of size with CPUID in for
    every call.  This is expensive, being used for domain create/migrate, as well
    as to service certain guest CPUID instructions.
    
    Instead, arrange to check the sizes once at boot.  See the code comments for
    details.  Right now, it just checks hardware against the algorithm
    expectations.  Later patches will cross-check Xen's XSTATE calculations too.
    
    Introduce more X86_XCR0_* and X86_XSS_* constants and CPUID bits.  This is to
    maximise coverage in the sanity check, even if we don't expect to
    use/virtualise some of these features any time soon.  Leave HDC and HWP alone
    for now; we don't have CPUID bits for them stored nicely.
    
    Only perform the cross-checks when SELF_TESTS are active.  It's only
    developers or new hardware liable to trip these checks, and Xen at least
    tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
    don't want to be tickling in the general case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 9e6dbbe8bf400aacb99009ddffa91d2a0c312b39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 17:23:54 2024 +0100

    x86/xstate: Fix initialisation of XSS cache
    
    The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
    values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
    hardware register.
    
    While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
    get_msr_xss() to return the invalid value, and logic of the form:
    
        old = get_msr_xss();
        set_msr_xss(new);
        ...
        set_msr_xss(old);
    
    to try and restore said invalid value.
    
    The architecturally invalid value must be purged from the cache, meaning the
    hardware register must be written at least once.  This in turn highlights that
    the invalid value must only be used in the case that the hardware register is
    available.
    
    Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit aba98c8d671bd290e978ec154d0baf042e093a65
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 14 13:05:40 2024 +0100

    xen/arch: Centralise __read_mostly and __ro_after_init
    
    Having these live in cache.h is inherited from Linux, but cache.h is not a
    terribly appropriate location for them.
    
    __read_mostly is an optimisation related to data placement in order to avoid
    having shared data in cachelines that are likely to be written to, but it
    really is just a section of the linked image separating data by usage
    patterns; it has nothing to do with cache sizes or flushing logic.
    
    Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
    arch/cache.h, and has literally nothing whatsoever to do with caches.
    
    Move the definitions into xen/sections.h, which in particular means that
    RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
    to provide a short description of what these are used for.
    
    For now, leave TODO comments next to the other identical definitions.  It
    turns out that unpicking cache.h is more complicated than it appears because a
    number of files use it for transitive dependencies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>
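
    As the message above notes, __read_mostly is purely a data-placement
    annotation.  A hedged sketch of how such macros are typically defined;
    the section names and the example variable are illustrative, not Xen's
    exact definitions:

```c
#include <assert.h>

/* Place the variable in a dedicated section; the linker script then
 * groups all such data away from frequently-written cachelines. */
#define __read_mostly    __attribute__((__section__(".data.read_mostly")))
#define __ro_after_init  __attribute__((__section__(".data.ro_after_init")))

/* Hypothetical example: read on every scheduling decision, written once
 * during boot. */
static unsigned int sched_granularity __read_mostly = 1;

unsigned int get_granularity(void)
{
    return sched_granularity;
}
```

    The attribute changes nothing about the variable's semantics, only which
    output section it lands in, which is why the commit argues it has nothing
    to do with cache sizes or flushing logic.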

commit 82f480944718d9e8340a6ac1af41ece7851115bf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jun 18 13:48:35 2024 +0100

    xen/irq: Address MISRA Rule 8.3 violation
    
    When centralising irq_ack_none(), different architectures had different names
    for the parameter.  As its type is struct irq_desc *, it should be named
    desc.  Make this consistent.
    
    No functional change.
    
    Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)
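
MISRA C:2012 Rule 8.3 requires all declarations of a function to use the same
parameter names and type qualifiers.  A minimal illustrative example of the
compliant pattern the commit above converges on (the struct body here is a
placeholder, not Xen's real irq_desc):

```c
#include <assert.h>

struct irq_desc { int irq; };

/* Compliant: the declaration and the definition both name the
 * struct irq_desc * parameter "desc".  Rule 8.3 is violated if, say,
 * the prototype used "irqd" while the definition used "desc". */
void irq_ack_none(struct irq_desc *desc);

void irq_ack_none(struct irq_desc *desc)
{
    (void)desc;   /* nothing to acknowledge for a spurious IRQ */
}
```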


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 19:10:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 19:10:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743989.1150996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK0gh-0004Ld-Qb; Wed, 19 Jun 2024 19:09:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743989.1150996; Wed, 19 Jun 2024 19:09:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK0gh-0004LW-Mc; Wed, 19 Jun 2024 19:09:43 +0000
Received: by outflank-mailman (input) for mailman id 743989;
 Wed, 19 Jun 2024 19:09:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK0gg-0004LK-AQ; Wed, 19 Jun 2024 19:09:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK0gg-0002ll-3S; Wed, 19 Jun 2024 19:09:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK0gf-0005hM-Ii; Wed, 19 Jun 2024 19:09:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sK0gf-0008Pv-IH; Wed, 19 Jun 2024 19:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a5gKJ1TOziVmTr51c9P1/AD6Dbfm6IOuakJEIXTLUMs=; b=FuczJIy9/+Qk8pCH/T0Vnliwum
	xMRmCFe5fXBgUKzwtjKQjvIsyEt7nGp/ISEiXSTIAwDbYWsTtcuN6WJI09ClDCylzioiUO7Knq1yB
	wwF5aqQSSD3Oxo/j9Y9EQ/kntRWfPMiQi8oyK69/mWxlpHbt2YgUjwqIT/4yCGR5+ZZE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186409-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186409: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=53c5c99e8744495395c1274595d6ca55947d1d6a
X-Osstest-Versions-That:
    xen=bd59af99700f075d06a6d47a16f777c9519928e0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 19:09:41 +0000

flight 186409 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186409/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186401
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186401
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186401
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186401
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186401
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186401
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  53c5c99e8744495395c1274595d6ca55947d1d6a
baseline version:
 xen                  bd59af99700f075d06a6d47a16f777c9519928e0

Last test of basis   186401  2024-06-18 21:08:51 Z    0 days
Testing same since   186409  2024-06-19 05:58:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@vates.tech>
  Henry Wang <xin.wang2@amd.com>
  Michal Orzel <michal.orzel@amd.com>
  Stefano Stabellini <stefano.stabellini@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   bd59af9970..53c5c99e87  53c5c99e8744495395c1274595d6ca55947d1d6a -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 19:11:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 19:11:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.743999.1151009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK0i0-0005nk-9S; Wed, 19 Jun 2024 19:11:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 743999.1151009; Wed, 19 Jun 2024 19:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK0i0-0005nd-6s; Wed, 19 Jun 2024 19:11:04 +0000
Received: by outflank-mailman (input) for mailman id 743999;
 Wed, 19 Jun 2024 19:11:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RXUT=NV=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sK0hy-0005mQ-CH
 for xen-devel@lists.xenproject.org; Wed, 19 Jun 2024 19:11:02 +0000
Received: from mail-lf1-x12b.google.com (mail-lf1-x12b.google.com
 [2a00:1450:4864:20::12b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aa128d28-2e6f-11ef-b4bb-af5377834399;
 Wed, 19 Jun 2024 21:11:00 +0200 (CEST)
Received: by mail-lf1-x12b.google.com with SMTP id
 2adb3069b0e04-52bc274f438so185839e87.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 Jun 2024 12:11:00 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56f99774sm691492166b.203.2024.06.19.12.10.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 19 Jun 2024 12:10:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa128d28-2e6f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718824259; x=1719429059; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=wVqgK+Gsjfrd6Xlaw9wBeTvUFp2FnlECp4adM+DnPBY=;
        b=UG+Ty89dX/EOOX3dkYHOpcO8mT85JqPtc/Qca/RiAXiC+5udSlK+S/IH+6Oj+KVHpc
         cEQ4f8k0TrL3cKqaXWBb3MUZmggwgOlVQauKtt7O87kCiSZHWHAHKBJSzmtN9xAF4/oR
         GLpxX3hQ+Yw4WQKROw/VjyegN5+dqyj/xTG1I=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718824259; x=1719429059;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=wVqgK+Gsjfrd6Xlaw9wBeTvUFp2FnlECp4adM+DnPBY=;
        b=pCTLpVUIlaI7lM3EZMPlMMe/S1IQaqaAFjTMBBsOeiZUNiNj+PVqARqxA/jS5Yv9tw
         InXKKyndrvULQTEleUgA/UgjVukn7UVt/ag2JIWy/rkPgGGUyiSD4SLvx4HL8aPi7GPW
         vwQJwSp7WRN+zKgQnhyNTNK+ANX6EbHNHDA3ufUoHJUsUDGXIDLa5c7AX1t8BXQyjbeM
         29T8rO46152gyqJ/fpSOs4475L3ln1+mQ3laGH3IYvGDUIgQNP2f9YlSRtkq3wsahuiY
         snpfb4j40QTqo6lPda98+2apOFCyeSKPk/exKOGivtfFByCc1LPiZOVtME8Kx9xunO5e
         rzVQ==
X-Gm-Message-State: AOJu0Ywt67wYN7bWwjAnAsSobPzcIuJDyJXU3MC1ihJnao2ejSuvLxEo
	NZAEPx+uGuj5+mvnEBHOJqjlgxgJtcp4wzKNqc+pt8GrjAKoxWHCLa+qfAMOK2xAkE1jtM5XcWp
	mYfU=
X-Google-Smtp-Source: AGHT+IHt18fgs/Ory06X+WGbDJjlVz2q4bgmxXq0/Xj8XMdGvLtWnNxubHbyAdiphZFRsUarEjt1mA==
X-Received: by 2002:a05:6512:3f0e:b0:52c:9fd6:1b8a with SMTP id 2adb3069b0e04-52ccaa595c8mr3089490e87.8.1718824258866;
        Wed, 19 Jun 2024 12:10:58 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v2] x86/spec-ctrl: Support for SRSO_US_NO and SRSO_MSR_FIX
Date: Wed, 19 Jun 2024 20:10:57 +0100
Message-Id: <20240619191057.2588693-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

AMD have updated the SRSO whitepaper[1] with further information.

There's a new SRSO_U/S_NO enumeration saying that SRSO attacks can't cross the
user/supervisor boundary.  i.e. Xen doesn't need to use IBPB-on-entry for PV.

There's also a new SRSO_MSR_FIX identifying that the BP_SPEC_REDUCE bit is
available in MSR_BP_CFG.  When set, SRSO attacks can't cross the host/guest
boundary.  i.e. Xen doesn't need to use IBPB-on-entry for HVM.

Extend ibpb_calculations() to account for these when calculating
opt_ibpb_entry_{pv,hvm} defaults.  Add a bp-spec-reduce option to control the
use of BP_SPEC_REDUCE, but activate it by default.

Because MSR_BP_CFG is core-scoped with a race condition updating it, repurpose
amd_check_erratum_1485() into amd_check_bp_cfg() and calculate all updates at
once.

Advertise SRSO_U/S_NO to guests whenever possible, as it allows the guest
kernel to skip SRSO protections too.  This is easy for HVM guests, but hard
for PV guests, as both the guest userspace and kernel operate in CPL3.  After
discussing with AMD, it is believed to be safe to advertise SRSO_U/S_NO to PV
guests when BP_SPEC_REDUCE is active.

Fix a typo in SRSO_NO's comment.

[1] https://www.amd.com/content/dam/amd/en/documents/corporate/cr/speculative-return-stack-overflow-whitepaper.pdf
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v2:
 * Add "for HVM guests" to xen-command-line.pandoc
 * Print details on boot
 * Don't advertise SRSO_US_NO to PV guests if BP_SPEC_REDUCE isn't active.

For 4.19.  This should be no functional change on current hardware.  On
forthcoming hardware, it avoids the substantial perf hits which were necessary
to protect against the SRSO speculative vulnerability.
---
 docs/misc/xen-command-line.pandoc           |  9 +++-
 xen/arch/x86/cpu-policy.c                   | 19 ++++++++
 xen/arch/x86/cpu/amd.c                      | 29 +++++++++---
 xen/arch/x86/include/asm/msr-index.h        |  1 +
 xen/arch/x86/include/asm/spec_ctrl.h        |  1 +
 xen/arch/x86/spec_ctrl.c                    | 49 ++++++++++++++++-----
 xen/include/public/arch-x86/cpufeatureset.h |  4 +-
 7 files changed, 92 insertions(+), 20 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index 1dea7431fab6..88beb64525d5 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2390,7 +2390,7 @@ By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
 >              {ibrs,ibpb,ssbd,psfd,
 >              eager-fpu,l1d-flush,branch-harden,srb-lock,
 >              unpriv-mmio,gds-mit,div-scrub,lock-harden,
->              bhi-dis-s}=<bool> ]`
+>              bhi-dis-s,bp-spec-reduce}=<bool> ]`
 
 Controls for speculative execution sidechannel mitigations.  By default, Xen
 will pick the most appropriate mitigations based on compiled in support,
@@ -2539,6 +2539,13 @@ boolean can be used to force or prevent Xen from using speculation barriers to
 protect lock critical regions.  This mitigation won't be engaged by default,
 and needs to be explicitly enabled on the command line.
 
+On hardware supporting SRSO_MSR_FIX, the `bp-spec-reduce=` option can be used
+to force or prevent Xen from using MSR_BP_CFG.BP_SPEC_REDUCE to mitigate the
+SRSO (Speculative Return Stack Overflow) vulnerability.  Xen will use
+bp-spec-reduce when available, as it is preferable to using `ibpb-entry=hvm`
+to mitigate SRSO for HVM guests, and because it is a necessary prerequisite in
+order to advertise SRSO_U/S_NO to PV guests.
+
 ### sync_console
 > `= <boolean>`
 
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 304dc20cfab8..fd32fe333384 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -14,6 +14,7 @@
 #include <asm/msr-index.h>
 #include <asm/paging.h>
 #include <asm/setup.h>
+#include <asm/spec_ctrl.h>
 #include <asm/xstate.h>
 
 struct cpu_policy __read_mostly       raw_cpu_policy;
@@ -605,6 +606,24 @@ static void __init calculate_pv_max_policy(void)
         __clear_bit(X86_FEATURE_IBRS, fs);
     }
 
+    /*
+     * SRSO_U/S_NO means that the CPU is not vulnerable to SRSO attacks across
+     * the User (CPL3)/Supervisor (CPL<3) boundary.  However the PV64
+     * user/kernel boundary is CPL3 on both sides, so it won't convey the
+     * meaning that a PV kernel expects.
+     *
+     * PV32 guests are explicitly unsupported WRT speculative safety, so are
+     * ignored to avoid complicating the logic.
+     *
+     * After discussions with AMD, it is believed to be safe to offer
+     * SRSO_US_NO to PV guests when BP_SPEC_REDUCE is active.
+     *
+     * If BP_SPEC_REDUCE isn't active, remove SRSO_U/S_NO from the PV max
+     * policy, which will cause it to filter out of PV default too.
+     */
+    if ( !boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) || !opt_bp_spec_reduce )
+        __clear_bit(X86_FEATURE_SRSO_US_NO, fs);
+
     guest_common_max_feature_adjustments(fs);
     guest_common_feature_adjustments(fs);
 
diff --git a/xen/arch/x86/cpu/amd.c b/xen/arch/x86/cpu/amd.c
index ab92333673b9..5213dfff601d 100644
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -1009,16 +1009,33 @@ static void cf_check fam17_disable_c6(void *arg)
 	wrmsrl(MSR_AMD_CSTATE_CFG, val & mask);
 }
 
-static void amd_check_erratum_1485(void)
+static void amd_check_bp_cfg(void)
 {
-	uint64_t val, chickenbit = (1 << 5);
+	uint64_t val, new = 0;
 
-	if (cpu_has_hypervisor || boot_cpu_data.x86 != 0x19 || !is_zen4_uarch())
+	/*
+	 * AMD Erratum #1485.  Set bit 5, as instructed.
+	 */
+	if (!cpu_has_hypervisor && boot_cpu_data.x86 == 0x19 && is_zen4_uarch())
+		new |= (1 << 5);
+
+	/*
+	 * On hardware supporting SRSO_MSR_FIX, activate BP_SPEC_REDUCE by
+	 * default.  This lets us do two things:
+         *
+         * 1) Avoid IBPB-on-entry to mitigate SRSO attacks from HVM guests.
+         * 2) Lets us advertise SRSO_US_NO to PV guests.
+	 */
+	if (boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) && opt_bp_spec_reduce)
+		new |= BP_CFG_SPEC_REDUCE;
+
+	/* Avoid reading BP_CFG if we don't intend to change anything. */
+	if (!new)
 		return;
 
 	rdmsrl(MSR_AMD64_BP_CFG, val);
 
-	if (val & chickenbit)
+	if ((val & new) == new)
 		return;
 
 	/*
@@ -1027,7 +1044,7 @@ static void amd_check_erratum_1485(void)
 	 * same time before the chickenbit is set. It's benign because the
 	 * value being written is the same on both.
 	 */
-	wrmsrl(MSR_AMD64_BP_CFG, val | chickenbit);
+	wrmsrl(MSR_AMD64_BP_CFG, val | new);
 }
 
 static void cf_check init_amd(struct cpuinfo_x86 *c)
@@ -1297,7 +1314,7 @@ static void cf_check init_amd(struct cpuinfo_x86 *c)
 		disable_c1_ramping();
 
 	amd_check_zenbleed();
-	amd_check_erratum_1485();
+	amd_check_bp_cfg();
 
 	if (fam17_c6_disabled)
 		fam17_disable_c6(NULL);
diff --git a/xen/arch/x86/include/asm/msr-index.h b/xen/arch/x86/include/asm/msr-index.h
index 9cdb5b262566..83fbf4135c6b 100644
--- a/xen/arch/x86/include/asm/msr-index.h
+++ b/xen/arch/x86/include/asm/msr-index.h
@@ -412,6 +412,7 @@
 #define AMD64_DE_CFG_LFENCE_SERIALISE	(_AC(1, ULL) << 1)
 #define MSR_AMD64_EX_CFG		0xc001102cU
 #define MSR_AMD64_BP_CFG		0xc001102eU
+#define  BP_CFG_SPEC_REDUCE		(_AC(1, ULL) << 4)
 #define MSR_AMD64_DE_CFG2		0xc00110e3U
 
 #define MSR_AMD64_DR0_ADDRESS_MASK	0xc0011027U
diff --git a/xen/arch/x86/include/asm/spec_ctrl.h b/xen/arch/x86/include/asm/spec_ctrl.h
index 72347ef2b959..077225418956 100644
--- a/xen/arch/x86/include/asm/spec_ctrl.h
+++ b/xen/arch/x86/include/asm/spec_ctrl.h
@@ -90,6 +90,7 @@ extern int8_t opt_xpti_hwdom, opt_xpti_domu;
 
 extern bool cpu_has_bug_l1tf;
 extern int8_t opt_pv_l1tf_hwdom, opt_pv_l1tf_domu;
+extern bool opt_bp_spec_reduce;
 
 /*
  * The L1D address mask, which might be wider than reported in CPUID, and the
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index 40f6ae017010..7aabb65ba028 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -83,6 +83,7 @@ static bool __initdata opt_unpriv_mmio;
 static bool __ro_after_init opt_verw_mmio;
 static int8_t __initdata opt_gds_mit = -1;
 static int8_t __initdata opt_div_scrub = -1;
+bool __ro_after_init opt_bp_spec_reduce = true;
 
 static int __init cf_check parse_spec_ctrl(const char *s)
 {
@@ -143,6 +144,7 @@ static int __init cf_check parse_spec_ctrl(const char *s)
             opt_unpriv_mmio = false;
             opt_gds_mit = 0;
             opt_div_scrub = 0;
+            opt_bp_spec_reduce = false;
         }
         else if ( val > 0 )
             rc = -EINVAL;
@@ -363,6 +365,8 @@ static int __init cf_check parse_spec_ctrl(const char *s)
             opt_gds_mit = val;
         else if ( (val = parse_boolean("div-scrub", s, ss)) >= 0 )
             opt_div_scrub = val;
+        else if ( (val = parse_boolean("bp-spec-reduce", s, ss)) >= 0 )
+            opt_bp_spec_reduce = val;
         else
             rc = -EINVAL;
 
@@ -505,7 +509,7 @@ static void __init print_details(enum ind_thunk thunk)
      * Hardware read-only information, stating immunity to certain issues, or
      * suggestions of which mitigation to use.
      */
-    printk("  Hardware hints:%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware hints:%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (caps & ARCH_CAPS_RDCL_NO)                        ? " RDCL_NO"        : "",
            (caps & ARCH_CAPS_EIBRS)                          ? " EIBRS"          : "",
            (caps & ARCH_CAPS_RSBA)                           ? " RSBA"           : "",
@@ -529,10 +533,11 @@ static void __init print_details(enum ind_thunk thunk)
            (e8b  & cpufeat_mask(X86_FEATURE_BTC_NO))         ? " BTC_NO"         : "",
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB_RET))       ? " IBPB_RET"       : "",
            (e21a & cpufeat_mask(X86_FEATURE_IBPB_BRTYPE))    ? " IBPB_BRTYPE"    : "",
-           (e21a & cpufeat_mask(X86_FEATURE_SRSO_NO))        ? " SRSO_NO"        : "");
+           (e21a & cpufeat_mask(X86_FEATURE_SRSO_NO))        ? " SRSO_NO"        : "",
+           (e21a & cpufeat_mask(X86_FEATURE_SRSO_US_NO))     ? " SRSO_US_NO"     : "");
 
     /* Hardware features which need driving to mitigate issues. */
-    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
+    printk("  Hardware features:%s%s%s%s%s%s%s%s%s%s%s%s%s%s%s\n",
            (e8b  & cpufeat_mask(X86_FEATURE_IBPB)) ||
            (_7d0 & cpufeat_mask(X86_FEATURE_IBRSB))          ? " IBPB"           : "",
            (e8b  & cpufeat_mask(X86_FEATURE_IBRS)) ||
@@ -551,7 +556,8 @@ static void __init print_details(enum ind_thunk thunk)
            (caps & ARCH_CAPS_FB_CLEAR_CTRL)                  ? " FB_CLEAR_CTRL"  : "",
            (caps & ARCH_CAPS_GDS_CTRL)                       ? " GDS_CTRL"       : "",
            (caps & ARCH_CAPS_RFDS_CLEAR)                     ? " RFDS_CLEAR"     : "",
-           (e21a & cpufeat_mask(X86_FEATURE_SBPB))           ? " SBPB"           : "");
+           (e21a & cpufeat_mask(X86_FEATURE_SBPB))           ? " SBPB"           : "",
+           (e21a & cpufeat_mask(X86_FEATURE_SRSO_MSR_FIX))   ? " SRSO_MSR_FIX"   : "");
 
     /* Compiled-in support which pertains to mitigations. */
     if ( IS_ENABLED(CONFIG_INDIRECT_THUNK) || IS_ENABLED(CONFIG_SHADOW_PAGING) ||
@@ -1120,7 +1126,7 @@ static void __init div_calculations(bool hw_smt_enabled)
 
 static void __init ibpb_calculations(void)
 {
-    bool def_ibpb_entry = false;
+    bool def_ibpb_entry_pv = false, def_ibpb_entry_hvm = false;
 
     /* Check we have hardware IBPB support before using it... */
     if ( !boot_cpu_has(X86_FEATURE_IBRSB) && !boot_cpu_has(X86_FEATURE_IBPB) )
@@ -1145,22 +1151,41 @@ static void __init ibpb_calculations(void)
          * Confusion.  Mitigate with IBPB-on-entry.
          */
         if ( !boot_cpu_has(X86_FEATURE_BTC_NO) )
-            def_ibpb_entry = true;
+            def_ibpb_entry_pv = def_ibpb_entry_hvm = true;
 
         /*
-         * Further to BTC, Zen3/4 CPUs suffer from Speculative Return Stack
-         * Overflow in most configurations.  Mitigate with IBPB-on-entry if we
-         * have the microcode that makes this an effective option.
+         * Further to BTC, Zen3 and later CPUs suffer from Speculative Return
+         * Stack Overflow in most configurations.  Mitigate with IBPB-on-entry
+         * if we have the microcode that makes this an effective option,
+         * except where there are other mitigating factors available.
          */
         if ( !boot_cpu_has(X86_FEATURE_SRSO_NO) &&
              boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) )
-            def_ibpb_entry = true;
+        {
+            /*
+             * SRSO_U/S_NO is a subset of SRSO_NO, identifying that SRSO isn't
+             * possible across the user/supervisor boundary.  We only need to
+             * use IBPB-on-entry for PV guests on hardware which doesn't
+             * enumerate SRSO_US_NO.
+             */
+            if ( !boot_cpu_has(X86_FEATURE_SRSO_US_NO) )
+                def_ibpb_entry_pv = true;
+
+            /*
+             * SRSO_MSR_FIX enumerates that we can use MSR_BP_CFG.SPEC_REDUCE
+             * to mitigate SRSO across the host/guest boundary.  We only need
+             * to use IBPB-on-entry for HVM guests if we haven't enabled this
+             * control.
+             */
+            if ( !boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) || !opt_bp_spec_reduce )
+                def_ibpb_entry_hvm = true;
+        }
     }
 
     if ( opt_ibpb_entry_pv == -1 )
-        opt_ibpb_entry_pv = IS_ENABLED(CONFIG_PV) && def_ibpb_entry;
+        opt_ibpb_entry_pv = IS_ENABLED(CONFIG_PV) && def_ibpb_entry_pv;
     if ( opt_ibpb_entry_hvm == -1 )
-        opt_ibpb_entry_hvm = IS_ENABLED(CONFIG_HVM) && def_ibpb_entry;
+        opt_ibpb_entry_hvm = IS_ENABLED(CONFIG_HVM) && def_ibpb_entry_hvm;
 
     if ( opt_ibpb_entry_pv )
     {
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index d9eba5e9a714..9c98e4992861 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -312,7 +312,9 @@ XEN_CPUFEATURE(FSRSC,              11*32+19) /*A  Fast Short REP SCASB */
 XEN_CPUFEATURE(AMD_PREFETCHI,      11*32+20) /*A  PREFETCHIT{0,1} Instructions */
 XEN_CPUFEATURE(SBPB,               11*32+27) /*A  Selective Branch Predictor Barrier */
 XEN_CPUFEATURE(IBPB_BRTYPE,        11*32+28) /*A  IBPB flushes Branch Type predictions too */
-XEN_CPUFEATURE(SRSO_NO,            11*32+29) /*A  Hardware not vulenrable to Speculative Return Stack Overflow */
+XEN_CPUFEATURE(SRSO_NO,            11*32+29) /*A  Hardware not vulnerable to Speculative Return Stack Overflow */
+XEN_CPUFEATURE(SRSO_US_NO,         11*32+30) /*A! Hardware not vulnerable to SRSO across the User/Supervisor boundary */
+XEN_CPUFEATURE(SRSO_MSR_FIX,       11*32+31) /*   MSR_BP_CFG.BP_SPEC_REDUCE available */
 
 /* Intel-defined CPU features, CPUID level 0x00000007:1.ebx, word 12 */
 XEN_CPUFEATURE(INTEL_PPIN,         12*32+ 0) /*   Protected Processor Inventory Number */

base-commit: 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
-- 
2.39.2
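The amd_check_bp_cfg() hunk in the patch above follows a common MSR read-modify-write shape: accumulate the bits to set, skip the MSR read entirely when nothing is wanted, and skip the write when all wanted bits are already present. A standalone sketch of that flow, where msr_read()/msr_write() and the write counter are hypothetical stand-ins for rdmsrl()/wrmsrl(), added purely for illustration:

```c
#include <stdint.h>

#define BP_CFG_SPEC_REDUCE (UINT64_C(1) << 4)   /* as in msr-index.h */
#define ERRATUM_1485_BIT   (UINT64_C(1) << 5)   /* "set bit 5, as instructed" */

static uint64_t fake_bp_cfg;      /* stands in for MSR_AMD64_BP_CFG */
static unsigned int writes;       /* counts writes, to show the skip paths */

static uint64_t msr_read(void)      { return fake_bp_cfg; }
static void msr_write(uint64_t v)   { fake_bp_cfg = v; writes++; }

static void check_bp_cfg(int want_erratum_fix, int want_spec_reduce)
{
    uint64_t val, new = 0;

    if (want_erratum_fix)
        new |= ERRATUM_1485_BIT;
    if (want_spec_reduce)
        new |= BP_CFG_SPEC_REDUCE;

    /* Avoid reading BP_CFG if we don't intend to change anything. */
    if (!new)
        return;

    val = msr_read();

    /* Avoid the write if every wanted bit is already set. */
    if ((val & new) == new)
        return;

    msr_write(val | new);
}
```

Note the write is val | new, not just new: other bits in BP_CFG are preserved, which is also what makes the cross-thread race described in the patch comment benign (both threads write the same value).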



From xen-devel-bounces@lists.xenproject.org Wed Jun 19 20:08:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 20:08:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744017.1151028 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK1bL-0003cv-FN; Wed, 19 Jun 2024 20:08:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744017.1151028; Wed, 19 Jun 2024 20:08:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK1bL-0003co-BW; Wed, 19 Jun 2024 20:08:15 +0000
Received: by outflank-mailman (input) for mailman id 744017;
 Wed, 19 Jun 2024 20:08:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK1bK-0003ce-1l; Wed, 19 Jun 2024 20:08:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK1bJ-0003s2-TI; Wed, 19 Jun 2024 20:08:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK1bJ-0008Fr-Jz; Wed, 19 Jun 2024 20:08:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sK1bJ-0001eD-JR; Wed, 19 Jun 2024 20:08:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aQU9pkyYiXcDD4f+4WZjRqUAq4SjYkSjtSNdiguYD3k=; b=aANmEfg4d/lqnCsxJgxijucB7+
	FLThyxZ2D9KLd1aLipq58UQYBBmqx5anQmcMc2OvbG9bnTmGHHSGpvnR6dJ6AzGzF7pkRn6b9w9gd
	M9ATxpnSXUX5CQdhvAlOK3GgqjRMzai9nSkg6+4X2UDuIh0TAPV4LTCvEfyIoQPEwSzI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186414-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186414: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=95e220e95d6237e21f7c0b83fca02d56b9327c4a
X-Osstest-Versions-That:
    ovmf=4d4f56992460c039d0cfe48c394c2e07aecf1d22
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 20:08:13 +0000

flight 186414 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186414/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 95e220e95d6237e21f7c0b83fca02d56b9327c4a
baseline version:
 ovmf                 4d4f56992460c039d0cfe48c394c2e07aecf1d22

Last test of basis   186408  2024-06-19 05:41:10 Z    0 days
Testing same since   186414  2024-06-19 17:42:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4d4f569924..95e220e95d  95e220e95d6237e21f7c0b83fca02d56b9327c4a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 20:34:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 20:34:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744026.1151037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK20E-00077e-Dg; Wed, 19 Jun 2024 20:33:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744026.1151037; Wed, 19 Jun 2024 20:33:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK20E-00077X-Ax; Wed, 19 Jun 2024 20:33:58 +0000
Received: by outflank-mailman (input) for mailman id 744026;
 Wed, 19 Jun 2024 20:33:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK20D-00077L-5F; Wed, 19 Jun 2024 20:33:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK20D-0004P1-0D; Wed, 19 Jun 2024 20:33:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK20C-0000od-OF; Wed, 19 Jun 2024 20:33:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sK20C-0007ej-Nn; Wed, 19 Jun 2024 20:33:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9bP/DhnQL6HSLCFIOXm3mW2xgSVWJV/kY/xfFkXNU00=; b=sK4uxGtLe6xf8WkYfffUBvC2Xw
	+WFSfaWSSc9EEE2+nzVdH+OqsrVcxq+hJA5TFeCc2lan0vnh72wRzW4PyHnfBeG8yhJpSBZ7pPCmf
	DOQ5n3ReDg8T8DSrfvW5M5Q8m6kxRLNi5/eSSL4oVbsQ0UwZF6gMN7yHQgMUEd8L65pA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186416-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186416: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 20:33:56 +0000

flight 186416 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186416/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186411

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    0 days
Testing same since   186412  2024-06-19 15:03:58 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 19 14:11:07 2024 +0200

    xen: avoid UB in guest handle arithmetic
    
    At least XENMEM_memory_exchange can have huge values passed in the
    nr_extents and nr_exchanged fields. Adding such values to pointers can
    overflow, resulting in UB. Cast respective pointers to "unsigned long"
    while at the same time making the necessary multiplication explicit.
    Remaining arithmetic is, despite there possibly being mathematical
    overflow, okay as per the C99 spec: "A computation involving unsigned
    operands can never overflow, because a result that cannot be represented
    by the resulting unsigned integer type is reduced modulo the number that
    is one greater than the largest value that can be represented by the
    resulting type." The overflow that we need to guard against is checked
    for in array_access_ok().
    
    Note that in / down from array_access_ok() the address value is only
    ever cast to "unsigned long" anyway, which is why in the invocation from
    guest_handle_subrange_okay() the value doesn't need casting back to
    pointer type.
    
    In compat grant table code change two guest_handle_add_offset() to avoid
    passing in negative offsets.
    
    Since {,__}clear_guest_offset() need touching anyway, also deal with
    another (latent) issue there: They were losing the handle type, i.e. the
    size of the individual objects accessed. Luckily the few users we
    presently have all pass char or uint8 handles.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 267122a24c499d26278ab2dbdfb46ebcaaf38474
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 16:14:36 2021 +0100

    x86/defns: Clean up X86_{XCR0,XSS}_* constants
    
    With the exception of one case in read_bndcfgu() which can use ilog2(),
    the *_POS defines are unused.  Drop them.
    
    X86_XCR0_X87 is the name used by both the SDM and APM, rather than
    X86_XCR0_FP.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 71cacfb035f4a78ee10970dc38a3baa04d387451
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpuid: Fix handling of XSAVE dynamic leaves
    
    First, if XSAVE is available in hardware but not visible to the guest, the
    dynamic leaves shouldn't be filled in.
    
    Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
    host/guest state automatically, but there is provision for "host only" bits to
    be set, so the implications are still accurate.
    
    Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
    check it at boot.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit fdb7e77fea4cb1c98dc51dd891a47f7e94612ad4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpu-policy: Simplify recalculate_xstate()
    
    Make use of xstate_uncompressed_size() helper rather than maintaining the
    running calculation while accumulating feature components.
    
    The rest of the CPUID data can come direct from the raw cpu policy.  All
    per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
    instructions.
    
    Use for_each_set_bit() rather than opencoding a slightly awkward version of
    it.  Mask the attributes in ecx down based on the visible features.  This
    isn't actually necessary for any components or attributes defined at the time
    of writing (up to AMX), but is added out of an abundance of caution.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
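    The for_each_set_bit() idiom adopted above visits only the set bits of a
    mask by isolating and clearing the lowest one each iteration, instead of
    scanning every bit position.  Xen's real macro differs in form; this is
    an illustrative sketch only (relies on the GCC/Clang __builtin_ctzll):

```c
/* Iterate 'bit' over the set bit positions of 'mask', using 'tmp' as scratch. */
#define for_each_set_bit_sketch(bit, tmp, mask)                   \
    for ((tmp) = (mask);                                          \
         (tmp) && ((bit) = __builtin_ctzll(tmp), 1);              \
         (tmp) &= (tmp) - 1)            /* clear lowest set bit */

static unsigned int sum_set_bits(unsigned long long mask)
{
    unsigned long long tmp;
    unsigned int bit, sum = 0;

    for_each_set_bit_sketch(bit, tmp, mask)
        sum += bit;                     /* e.g. accumulate component indices */

    return sum;
}
```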

commit df09dfb94de66f7523837c050616a382aa2c7d17
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
    
    We're soon going to need a compressed helper of the same form.
    
    The size of the uncompressed image depends on the single element with the
    largest offset + size.  Sadly this isn't always the element with the largest
    index.
    
    Name the per-xstate-component cpu_policy structure, for legibility of the
    logic in xstate_uncompressed_size().  Cross-check with hardware during
    boot, and remove hw_uncompressed_size().
    remove hw_uncompressed_size().
    
    This means that the migration paths don't need to mess with XCR0 just to
    sanity check the buffer size.  It also means we can drop the "fastpath" check
    against xfeature_mask (there to skip some XCR0 writes); this path is going to
    be dead logic the moment Xen starts using supervisor states itself.
    
    The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
    be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
    prior to setup_xstate_features() on the BSP.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
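    The sizing rule described above can be sketched as a scan for the enabled
    component with the largest offset + size (which need not be the component
    with the largest index).  The offsets and sizes below are invented for
    illustration, not real xstate component data:

```c
#include <stdint.h>

struct xstate_comp {
    uint32_t offset, size;
};

static uint32_t uncompressed_size(const struct xstate_comp *comp,
                                  unsigned int nr, uint64_t enabled_mask)
{
    uint32_t max_end = 512 + 64;        /* legacy region + XSAVE header */
    unsigned int i;

    for (i = 0; i < nr; i++)
        if (((enabled_mask >> i) & 1) &&
            (comp[i].offset + comp[i].size > max_end))
            max_end = comp[i].offset + comp[i].size;

    return max_end;
}
```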

commit a09022a09e1a79b3f9574993993bfad803b32596
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 23 00:55:34 2024 +0100

    x86/boot: Collect the Raw CPU Policy earlier on boot
    
    This is a tangle, but it's a small step in the right direction.
    
    In the following change, xstate_init() is going to start using the Raw policy.
    
    calculate_raw_cpu_policy() is sufficiently separate from the other policies to
    safely move like this.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit d31a111940de5431c8bf465b1d38b89f1130a24b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 21 17:56:57 2020 +0000

    x86/xstate: Cross-check dynamic XSTATE sizes at boot
    
    Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
    every call.  This is expensive, being used for domain create/migrate, as well
    as to service certain guest CPUID instructions.
    
    Instead, arrange to check the sizes once at boot.  See the code comments for
    details.  Right now, it just checks hardware against the algorithm
    expectations.  Later patches will cross-check Xen's XSTATE calculations too.
    
    Introduce more X86_XCR0_* and X86_XSS_* constants CPUID bits.  This is to
    maximise coverage in the sanity check, even if we don't expect to
    use/virtualise some of these features any time soon.  Leave HDC and HWP alone
    for now; we don't have CPUID bits from them stored nicely.
    
    Only perform the cross-checks when SELF_TESTS are active.  It's only
    developers or new hardware liable to trip these checks, and Xen at least
    tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
    don't want to be tickling in the general case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 9e6dbbe8bf400aacb99009ddffa91d2a0c312b39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 17:23:54 2024 +0100

    x86/xstate: Fix initialisation of XSS cache
    
    The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
    values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
    hardware register.
    
    While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
    get_msr_xss() to return the invalid value, and logic of the form:
    
        old = get_msr_xss();
        set_msr_xss(new);
        ...
        set_msr_xss(old);
    
    to try and restore said invalid value.
    
    The architecturally invalid value must be purged from the cache, meaning the
    hardware register must be written at least once.  This in turn highlights that
    the invalid value must only be used in the case that the hardware register is
    available.
    
    Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
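    The caching pattern the fix above restores can be sketched as follows:
    the cache is seeded with an architecturally invalid sentinel so the
    first set_msr_xss() is guaranteed to reach hardware and purge it; the
    bug was that get_msr_xss() could hand the sentinel back before any
    write had done so.  All names here are illustrative stubs, not Xen's
    actual implementation:

```c
#include <stdint.h>

#define XSS_INVALID UINT64_MAX    /* value no caller can legitimately write */

static uint64_t cached_xss = XSS_INVALID;
static uint64_t hw_xss;           /* stands in for the MSR itself */
static unsigned int hw_writes;    /* counts hardware writes */

static void set_msr_xss(uint64_t val)
{
    /* Lazy update: only touch hardware when the value actually changes. */
    if (cached_xss != val) {
        hw_xss = val;
        hw_writes++;
        cached_xss = val;
    }
}

static uint64_t get_msr_xss(void)
{
    return cached_xss;
}
```

The invariant is that the sentinel must be written over exactly once before get_msr_xss() is trusted; a save/modify/restore sequence built on get_msr_xss() is only safe after that point.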

commit aba98c8d671bd290e978ec154d0baf042e093a65
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 14 13:05:40 2024 +0100

    xen/arch: Centralise __read_mostly and __ro_after_init
    
    Having these live in cache.h is inherited from Linux, but cache.h is not
    a terribly appropriate location for them.
    
    __read_mostly is an optimisation related to data placement in order to avoid
    having shared data in cachelines that are likely to be written to, but it
    really is just a section of the linked image separating data by usage
    patterns; it has nothing to do with cache sizes or flushing logic.
    
    Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
    arch/cache.h, and has literally nothing whatsoever to do with caches.
    
    Move the definitions into xen/sections.h, which in particular means that
    RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
    to provide a short descriptions of what these are used for.
    
    For now, leave TODO comments next to the other identical definitions.  It
    turns out that unpicking cache.h is more complicated than it appears because a
    number of files use it for transitive dependencies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 82f480944718d9e8340a6ac1af41ece7851115bf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jun 18 13:48:35 2024 +0100

    xen/irq: Address MISRA Rule 8.3 violation
    
    When centralising irq_ack_none(), different architectures had different names
    for the parameter.  As its type is struct irq_desc *, it should be named
    desc.  Make this consistent.
    
    No functional change.
    
    Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed Jun 19 23:11:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 Jun 2024 23:11:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744047.1151054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK4Ss-000754-1J; Wed, 19 Jun 2024 23:11:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744047.1151054; Wed, 19 Jun 2024 23:11:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK4Sr-00074x-UY; Wed, 19 Jun 2024 23:11:41 +0000
Received: by outflank-mailman (input) for mailman id 744047;
 Wed, 19 Jun 2024 23:11:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK4Sq-00074n-Hi; Wed, 19 Jun 2024 23:11:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK4Sq-0007YJ-9x; Wed, 19 Jun 2024 23:11:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK4Sq-0007EQ-2x; Wed, 19 Jun 2024 23:11:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sK4Sq-0006je-2T; Wed, 19 Jun 2024 23:11:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JPDcjtzlCExcGMs2bFQrcrCo3Z4NETFfGRzPPMU0HSg=; b=U77AtvUYQIcYFpPa5qMF8S10xP
	0PBlXzmHI5UdVJyZF6eytajnczZbwhvF+w0dJkHauHVPeyXvdSwatEtd/aoEOu6kKMFFL/jB3pzFR
	33YDnOknPmc0WrdZrd2qFGqsBP/0VCQdeTiYZYPfil23B0CS5ZMREH+93qVzoDRh7KgA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186418-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186418: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 Jun 2024 23:11:40 +0000

flight 186418 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186418/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186411

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    0 days
Testing same since   186412  2024-06-19 15:03:58 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 19 14:11:07 2024 +0200

    xen: avoid UB in guest handle arithmetic
    
    At least XENMEM_memory_exchange can have huge values passed in the
    nr_extents and nr_exchanged fields. Adding such values to pointers can
    overflow, resulting in UB. Cast respective pointers to "unsigned long"
    while at the same time making the necessary multiplication explicit.
    Remaining arithmetic is, despite there possibly being mathematical
    overflow, okay as per the C99 spec: "A computation involving unsigned
    operands can never overflow, because a result that cannot be represented
    by the resulting unsigned integer type is reduced modulo the number that
    is one greater than the largest value that can be represented by the
    resulting type." The overflow that we need to guard against is checked
    for in array_access_ok().
    
    Note that in / down from array_access_ok() the address value is only
    ever cast to "unsigned long" anyway, which is why in the invocation from
    guest_handle_subrange_okay() the value doesn't need casting back to
    pointer type.
    
    In compat grant table code change two guest_handle_add_offset() to avoid
    passing in negative offsets.
    
    Since {,__}clear_guest_offset() need touching anyway, also deal with
    another (latent) issue there: They were losing the handle type, i.e. the
    size of the individual objects accessed. Luckily the few users we
    presently have all pass char or uint8 handles.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 267122a24c499d26278ab2dbdfb46ebcaaf38474
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 16:14:36 2021 +0100

    x86/defns: Clean up X86_{XCR0,XSS}_* constants
    
    With the exception of one case in read_bndcfgu() which can use ilog2(),
    the *_POS defines are unused.  Drop them.
    
    X86_XCR0_X87 is the name used by both the SDM and APM, rather than
    X86_XCR0_FP.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 71cacfb035f4a78ee10970dc38a3baa04d387451
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpuid: Fix handling of XSAVE dynamic leaves
    
    First, if XSAVE is available in hardware but not visible to the guest, the
    dynamic leaves shouldn't be filled in.
    
    Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
    host/guest state automatically, but there is provision for "host only" bits to
    be set, so the implications are still accurate.
    
    Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
    check it at boot.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit fdb7e77fea4cb1c98dc51dd891a47f7e94612ad4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpu-policy: Simplify recalculate_xstate()
    
    Make use of xstate_uncompressed_size() helper rather than maintaining the
    running calculation while accumulating feature components.
    
    The rest of the CPUID data can come direct from the raw cpu policy.  All
    per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
    instructions.
    
    Use for_each_set_bit() rather than opencoding a slightly awkward version of
    it.  Mask the attributes in ecx down based on the visible features.  This
    isn't actually necessary for any components or attributes defined at the time
    of writing (up to AMX), but is added out of an abundance of caution.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit df09dfb94de66f7523837c050616a382aa2c7d17
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
    
    We're soon going to need a compressed helper of the same form.
    
    The size of the uncompressed image depends on the single element with the
    largest offset + size.  Sadly this isn't always the element with the largest
    index.
    
    Name the per-xstate-component cpu_policy structure, for legibility of the logic
    in xstate_uncompressed_size().  Cross-check with hardware during boot, and
    remove hw_uncompressed_size().
    
    This means that the migration paths don't need to mess with XCR0 just to
    sanity check the buffer size.  It also means we can drop the "fastpath" check
    against xfeature_mask (there to skip some XCR0 writes); this path is going to
    be dead logic the moment Xen starts using supervisor states itself.
    
    The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
    be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
    prior to setup_xstate_features() on the BSP.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit a09022a09e1a79b3f9574993993bfad803b32596
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 23 00:55:34 2024 +0100

    x86/boot: Collect the Raw CPU Policy earlier on boot
    
    This is a tangle, but it's a small step in the right direction.
    
    In the following change, xstate_init() is going to start using the Raw policy.
    
    calculate_raw_cpu_policy() is sufficiently separate from the other policies to
    safely move like this.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit d31a111940de5431c8bf465b1d38b89f1130a24b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 21 17:56:57 2020 +0000

    x86/xstate: Cross-check dynamic XSTATE sizes at boot
    
    Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
    every call.  This is expensive, being used for domain create/migrate, as well
    as to service certain guest CPUID instructions.
    
    Instead, arrange to check the sizes once at boot.  See the code comments for
    details.  Right now, it just checks hardware against the algorithm
    expectations.  Later patches will cross-check Xen's XSTATE calculations too.
    
    Introduce more X86_XCR0_* and X86_XSS_* constants for CPUID bits.  This is to
    maximise coverage in the sanity check, even if we don't expect to
    use/virtualise some of these features any time soon.  Leave HDC and HWP alone
    for now; we don't have CPUID bits from them stored nicely.
    
    Only perform the cross-checks when SELF_TESTS are active.  It's only
    developers or new hardware liable to trip these checks, and Xen at least
    tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
    don't want to be tickling in the general case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 9e6dbbe8bf400aacb99009ddffa91d2a0c312b39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 17:23:54 2024 +0100

    x86/xstate: Fix initialisation of XSS cache
    
    The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
    values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
    hardware register.
    
    While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
    get_msr_xss() to return the invalid value, and logic of the form:
    
        old = get_msr_xss();
        set_msr_xss(new);
        ...
        set_msr_xss(old);
    
    to try and restore said invalid value.
    
    The architecturally invalid value must be purged from the cache, meaning the
    hardware register must be written at least once.  This in turn highlights that
    the invalid value must only be used in the case that the hardware register is
    available.
    
    Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit aba98c8d671bd290e978ec154d0baf042e093a65
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 14 13:05:40 2024 +0100

    xen/arch: Centralise __read_mostly and __ro_after_init
    
    Their presence in cache.h is inherited from Linux, but cache.h is not a
    terribly appropriate location for them to live.
    
    __read_mostly is an optimisation related to data placement in order to avoid
    having shared data in cachelines that are likely to be written to, but it
    really is just a section of the linked image separating data by usage
    patterns; it has nothing to do with cache sizes or flushing logic.
    
    Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
    arch/cache.h, and has literally nothing whatsoever to do with caches.
    
    Move the definitions into xen/sections.h, which in particular means that
    RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
    to provide a short description of what these are used for.
    
    For now, leave TODO comments next to the other identical definitions.  It
    turns out that unpicking cache.h is more complicated than it appears because a
    number of files use it for transitive dependencies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 82f480944718d9e8340a6ac1af41ece7851115bf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jun 18 13:48:35 2024 +0100

    xen/irq: Address MISRA Rule 8.3 violation
    
    When centralising irq_ack_none(), different architectures had different names
    for the parameter.  As its type is struct irq_desc *, it should be named
    desc.  Make this consistent.
    
    No functional change.
    
    Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 00:35:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 00:35:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744061.1151064 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK5lb-0007sI-9I; Thu, 20 Jun 2024 00:35:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744061.1151064; Thu, 20 Jun 2024 00:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK5lb-0007sB-6K; Thu, 20 Jun 2024 00:35:07 +0000
Received: by outflank-mailman (input) for mailman id 744061;
 Thu, 20 Jun 2024 00:35:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QH02=NW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sK5la-0007s5-Hc
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 00:35:06 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id efa6b6ff-2e9c-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 02:35:05 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 880D562032;
 Thu, 20 Jun 2024 00:35:03 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C444EC2BBFC;
 Thu, 20 Jun 2024 00:35:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efa6b6ff-2e9c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718843703;
	bh=/i83Z4IVxiWl8LKZpTXrkGw55w5Mz0KJLsQVWjxe3Zk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=EGOD7Y81Uc+jRbQXXxiZ0j12rSx5kWJCLWK74FdVjcl/mLhWxzW88c772nprFev7D
	 BbeOZdmH4eMGVM1gTcPSd2JOT/AZuATNlLpzbvt/wmI+rYYqjAHJqh+UcceZ8+AKPA
	 n4D/pZlGaqq/uOq0fwHTjjJzWYjLvgzA61fUGDZz2/lRQo9wkuZYsqQWg8sWsLSTwk
	 yTOFKM73ypBscnkat5ug+KMp1/o9nxq5OD2H7Pm/xO4Z1QXfP1kfiNm5ofIHNB2I9L
	 3A7xZ80wQM8nj7ntu62n0ufsfR0lygGCtCTa9bIDRtrltrgwpWdhYAiZxk5gouWJMN
	 7JDQvEIFAcG0Q==
Date: Wed, 19 Jun 2024 17:35:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    "Oleksii K." <oleksii.kurochko@gmail.com>, 
    Stefano Stabellini <stefano.stabellini@amd.com>, 
    xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, 
    michal.orzel@amd.com, Volodymyr_Babchuk@epam.com, 
    Henry Wang <xin.wang2@amd.com>, Alec Kwapis <alec.kwapis@medtronic.com>, 
    "Daniel P . Smith" <dpsmith@apertussolutions.com>
Subject: Re: [PATCH v4 2/4] xen/arm: Alloc XenStore page for Dom0less DomUs
 from hypervisor
In-Reply-To: <fb6809b3-ee14-4baa-b6fa-bd2171d61c4b@xen.org>
Message-ID: <alpine.DEB.2.22.394.2406191734530.2572888@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2405241552240.2557291@ubuntu-linux-20-04-desktop> <20240524225522.2878481-2-stefano.stabellini@amd.com> <697aadfd-a8c1-4f1b-8806-6a5acbf343ba@xen.org> <b9c8e762af9ca04d9194fdaa0379f2fe9096af29.camel@gmail.com>
 <alpine.DEB.2.22.394.2406181734140.2572888@ubuntu-linux-20-04-desktop> <fb6809b3-ee14-4baa-b6fa-bd2171d61c4b@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 18 Jun 2024, Julien Grall wrote:
> Hi Stefano,
> 
> On 19/06/2024 01:37, Stefano Stabellini wrote:
> > On Mon, 27 May 2024, Oleksii K. wrote:
> > > > I don't think it is a big problem if this is not merged for the code
> > > > freeze as this is technically a bug fix.
> > > 
> > > Agree, this is not a problem as it still looks to me like a bug fix.
> > > 
> > > ~ Oleksii
> > 
> > Hi Oleksii, this version of the series was already all acked with minor
> > NITs and you gave the go-ahead for this release as it is a bug fix. Due
> > to 2 weeks of travels I only managed to commit the series now, sorry for
> > the delay.
> 
> Unfortunately this series is breaking gitlab CI [1]. I understand the go ahead
> was given two weeks ago, but as we are now past the code freeze, I feel we
> should have had a pros/cons e-mail to assess whether it was worth the risk to
> merge it.
> 
> Now to the issues, I vaguely recall this series didn't require any change in
> Linux. Did I miss anything? If so, why are we breaking Linux?
> 
> For now, I will revert this series. Once we root cause the issue, we can
> re-assess whether the fix should be applied for 4.19.

Hi Julien,

Thanks for acting quickly on this. Reverting the series was the right
thing to do, and I apologize for not waiting for the gitlab-ci results
before committing it.

Like you, my understanding was also that there were no Linux changes
required. I think this will need a deeper investigation that we can do
after the 4.19 release.

Thanks again for unblocking the tree quickly.

- Stefano


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 01:18:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 01:18:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744075.1151080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6RE-0003bR-F4; Thu, 20 Jun 2024 01:18:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744075.1151080; Thu, 20 Jun 2024 01:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6RE-0003bK-Bg; Thu, 20 Jun 2024 01:18:08 +0000
Received: by outflank-mailman (input) for mailman id 744075;
 Thu, 20 Jun 2024 01:18:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QH02=NW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sK6RD-0003bE-LM
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 01:18:07 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f03f2b0b-2ea2-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 03:18:04 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id B3A14CE0C71;
 Thu, 20 Jun 2024 01:17:57 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 15877C2BBFC;
 Thu, 20 Jun 2024 01:17:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f03f2b0b-2ea2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718846277;
	bh=0GIpkZY7aVbZFmVOo2ALtB7mwHup3p4Fzi8lCLKrJZ0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ktk5ljKyzxkWooIJRKiYE25vTcn+a1lGZOFpRKZ+7qocpHcZcEfqbsQA0iuG7qVxt
	 e2rLtZX3h3PlC9u/O482N/bCVraJibdbX7rfCtkuRavBxX26vQPWSvEw4jJmF3j1RF
	 8T2T0WJ4ybMo4rff3PFYkctOaxq6li79tKk/eq38Mh+3vUIohuE7gAJAcQLZEl5r2e
	 96cpb2lytph1SCY1+15S1RSia1FE0NimPrd+pGrLDdaCZFRjgqcA0sGTIrj00SQnmD
	 akUJu9CSJGS5ntlaGpneHnI74H1xqUI9/lfAALEYht7SFcNBwDDHg+F3yZINfyxugL
	 hSc5vduc0N3Pw==
Date: Wed, 19 Jun 2024 18:17:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH] automation/eclair_analysis: deviate common/unlzo.c for
 MISRA Rule 7.3
In-Reply-To: <63d11da5-4a5a-4354-ab57-67fbb7110f45@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406191817310.2572888@ubuntu-linux-20-04-desktop>
References: <20342a68627d5fe7c85c50f64e9300e9a587974b.1718704260.git.alessandro.zucchelli@bugseng.com> <63d11da5-4a5a-4354-ab57-67fbb7110f45@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 18 Jun 2024, Jan Beulich wrote:
> On 18.06.2024 11:54, Alessandro Zucchelli wrote:
> > The file contains violations of Rule 7.3, which states the following: The
> > lowercase character `l' shall not be used in a literal suffix.
> > 
> > This file defines a non-compliant constant used in a macro expanded in a
> > non-excluded file, so this deviation is needed in order to avoid
> > a spurious violation involving both files.
> 
> Imo it would be nice to be specific in such cases: Which constant? And
> which macro expanded in which file?

Hi Alessandro,
if you give me the details, I could add it to the commit message on commit


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 01:19:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 01:19:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744080.1151089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6Sx-00047g-Ow; Thu, 20 Jun 2024 01:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744080.1151089; Thu, 20 Jun 2024 01:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6Sx-00047Z-MR; Thu, 20 Jun 2024 01:19:55 +0000
Received: by outflank-mailman (input) for mailman id 744080;
 Thu, 20 Jun 2024 01:19:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QH02=NW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sK6Sw-00047T-Eh
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 01:19:54 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2ff5f6a4-2ea3-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 03:19:51 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 57355CE22D5;
 Thu, 20 Jun 2024 01:19:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6A117C2BBFC;
 Thu, 20 Jun 2024 01:19:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ff5f6a4-2ea3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718846387;
	bh=EzZvJ9mPSUbsnyIM6nx3X6riwBwPu5AH2cnwOyP5G0c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=THQ8gN0LXsxETcobc63ILcC7moNNSvYLCTqV6rtAqRLlIIxPU3g2Ps8nBWfG6OfCV
	 EU7pEqnkxZQ3ixa+x2Z09AIS8u8qO/pPWzZLuPbRS7rMarwBBKkKbxM0O/glSUAPxF
	 gUI4bq15IiIIKGXAls0W2vmqXrnb3bgobJ27VvsDJs6m1xuIXbG7I2mxbpu0luRmES
	 yrBLuPdHkkL8EQskg7j62m6RDOXPYcJ7itiH/MfNx/AFmMsjNXR0w22EPbfv/jmNGi
	 IEH3/4I/bmOVqa9xqduj8DsRiYsjT9/OLoFBLRL8UaPwxS83sous3jA3yiBJ/o/X2s
	 Sw/oLDUycvS1A==
Date: Wed, 19 Jun 2024 18:19:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH v3] automation/eclair: add deviation for MISRA C Rule
 17.7
In-Reply-To: <b571bd05955ab9967a44517c9947545a2a530f01.1718354974.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406191819370.2572888@ubuntu-linux-20-04-desktop>
References: <b571bd05955ab9967a44517c9947545a2a530f01.1718354974.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 14 Jun 2024, Federico Serafini wrote:
> Update ECLAIR configuration to deviate some cases where not using
> the return value of a function is not dangerous.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> Changes in v3:
> - removed unwanted underscores;
> - grammar fixed;
> - do not constrain to the first actual argument.
> Changes in v2:
> - do not deviate strlcpy and strlcat.
> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 4 ++++
>  docs/misra/deviations.rst                        | 9 +++++++++
>  2 files changed, 13 insertions(+)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 447c1e6661..97281082a8 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -413,6 +413,10 @@ explicit comment indicating the fallthrough intention is present."
>  -config=MC3R1.R17.1,macros+={hide , "^va_(arg|start|copy|end)$"}
>  -doc_end
>  
> +-doc_begin="Not using the return value of a function does not endanger safety if it coincides with an actual argument."
> +-config=MC3R1.R17.7,calls+={safe, "any()", "decl(name(__builtin_memcpy||__builtin_memmove||__builtin_memset||cpumask_check))"}
> +-doc_end
> +
>  #
>  # Series 18.
>  #
> diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> index 36959aa44a..f3abe31eb5 100644
> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -364,6 +364,15 @@ Deviations related to MISRA C:2012 Rules:
>         by `stdarg.h`.
>       - Tagged as `deliberate` for ECLAIR.
>  
> +   * - R17.7
> +     - Not using the return value of a function does not endanger safety if it
> +       coincides with an actual argument.
> +     - Tagged as `safe` for ECLAIR. Such functions are:
> +         - __builtin_memcpy()
> +         - __builtin_memmove()
> +         - __builtin_memset()
> +         - cpumask_check()
> +
>     * - R20.4
>       - The override of the keyword \"inline\" in xen/compiler.h is present so
>         that section contents checks pass when the compiler chooses not to
> -- 
> 2.34.1
> 
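The rationale of the deviation above can be seen directly in standard C: memcpy() and friends return their first argument, so discarding the return value cannot lose information the caller does not already hold. A minimal sketch (checked_copy() is a hypothetical helper for illustration, not Xen code):

```c
#include <assert.h>
#include <string.h>

/* memcpy() hands its first argument back unchanged, which is the
 * property the Rule 17.7 deviation relies on: the caller already
 * holds dst, so ignoring the return value is harmless. */
static void checked_copy(char *dst, const char *src, size_t n)
{
    void *ret = memcpy(dst, src, n);

    assert(ret == dst);      /* the return value merely echoes dst */

    /* The deviated form: return value intentionally unused. */
    memcpy(dst, src, n);
}
```

The same reasoning does not hold for strlcpy()/strlcat(), whose return value (the would-be length) carries truncation information, hence their removal from the deviation in v2.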


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 01:21:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 01:21:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744090.1151100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6Un-0005cJ-7H; Thu, 20 Jun 2024 01:21:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744090.1151100; Thu, 20 Jun 2024 01:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6Un-0005cC-4f; Thu, 20 Jun 2024 01:21:49 +0000
Received: by outflank-mailman (input) for mailman id 744090;
 Thu, 20 Jun 2024 01:21:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QH02=NW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sK6Ul-0005bM-Vf
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 01:21:47 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 75a9fb6f-2ea3-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 03:21:47 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 6DFF361F6C;
 Thu, 20 Jun 2024 01:21:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DBF30C2BBFC;
 Thu, 20 Jun 2024 01:21:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75a9fb6f-2ea3-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718846505;
	bh=kE4el7/ZjYG+ytPzfW9NHzPa61t4FYQ0j7jmDYRgeW4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=J0Q2Na73Ej3VEPX/8RaH1gl9KbJRdJ9wawnD5KgWIIEVtGiZDMibHrB5cerjWYArD
	 nN2BWfTmX4qMYNDLqhRsY3ppqwfJJTDfYOTSPr8mpf5ZHfWntEoyoNjGNqYn0qHP+x
	 /KrrzOIAi1ZGBNQf9E+VdJHcuCvManG4Urpmf3xQU8DZ5290oOoFCJ8yPkgn2GsmkC
	 N3owY6dRF0zeFsWZZuCLuDDo3bFDzFJWxAQ7huIGIV57oCPpjA/ZNJIeFLJxOQVDq9
	 IBpN1s37Sii0D0y9H81cl5yS/wsBBD7x5ZXztIHDb7v/oqtOUxnyn+qgo7mByXQB03
	 G42hu6hXW/2QA==
Date: Wed, 19 Jun 2024 18:21:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH v3] automation/eclair: extend existing deviations of
 MISRA C Rule 16.3
In-Reply-To: <71a69d25e7889ed6e8546b5cd18d423006d69ceb.1718356683.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406191821310.2572888@ubuntu-linux-20-04-desktop>
References: <71a69d25e7889ed6e8546b5cd18d423006d69ceb.1718356683.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 14 Jun 2024, Federico Serafini wrote:
> Update ECLAIR configuration to deviate more cases where an
> unintentional fallthrough cannot happen.
> 
> Add Rule 16.3 to the monitored set and tag it as clean for arm.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
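For context, Rule 16.3 requires every switch-clause to end in an unconditional break (or equivalent), with any deliberate fallthrough marked by an explicit comment, which is the pattern such deviations recognise. A minimal sketch (classify() is a hypothetical example, not Xen code):

```c
#include <assert.h>

/* Rule 16.3-compliant switch shapes: grouped labels with no statement
 * between them, an explicitly commented deliberate fallthrough, and a
 * break terminating every clause. */
static int classify(int c)
{
    int kind = 0;

    switch ( c )
    {
    case ' ':
    case '\t':        /* grouped case labels: no fallthrough violation */
        kind = 0;
        break;

    case '\n':
        kind = 1;
        /* fallthrough: '\n' is deliberately also counted below */
    case '\0':
        kind += 1;
        break;

    default:
        kind = 3;
        break;
    }

    return kind;
}
```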


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 01:28:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 01:28:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744095.1151110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6bV-0006TO-TN; Thu, 20 Jun 2024 01:28:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744095.1151110; Thu, 20 Jun 2024 01:28:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK6bV-0006TH-Qc; Thu, 20 Jun 2024 01:28:45 +0000
Received: by outflank-mailman (input) for mailman id 744095;
 Thu, 20 Jun 2024 01:28:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QH02=NW=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sK6bV-0006TB-Em
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 01:28:45 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6d7f88b8-2ea4-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 03:28:43 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 44AEC60D3D;
 Thu, 20 Jun 2024 01:28:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D2FB3C2BBFC;
 Thu, 20 Jun 2024 01:28:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d7f88b8-2ea4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718846920;
	bh=0CKP7fRiUs61hYYKmr8CP6ra9KYZdf2+Na1Po2OCoFA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Pky8cFDPHKjPosyv4FtqaX95BwQqpSXKlaqHjMd3OWQUCTPQo6MbUnIshSMvPyAGY
	 mI/fHFx9zegoG6K1WushW0xNq3U8rTFmG1jw874pOqLVCtoZcxjlRAALr70G6AWrOz
	 W42nFlCdg6qrGJfIPK3QE2npNZZIhMpLljlnWTAy2j+5sjPD0n+X5NzHRwQ3p5zhBA
	 X38OH8xEbf9nbY2MDUnSLFmHrD6qRNd42TGrKFAsavpfBtTKEUDi0vkQb39V4y/Io4
	 AhjNQ0qRkaG1rNqrILRdA6dnA/BqQVDzk2v3v45k4VhrqZz382l4EHHrGoPV7rhakw
	 w56VEDk9PX2xw==
Date: Wed, 19 Jun 2024 18:28:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [XEN PATCH 4/5] automation/eclair_analysis: address remaining
 violations of MISRA C Rule 20.12
In-Reply-To: <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406191828311.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1717236930.git.nicola.vetrini@bugseng.com> <ba7e17494f0bb167fe48f7fe0a69fabc1c3f5d1a.1717236930.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 1 Jun 2024, Nicola Vetrini wrote:
> The DEFINE macro in asm-offsets.c (for all architectures) still generates
> violations despite the file(s) being excluded from compliance, due to the
> fact that in its expansion it sometimes refers to entities in non-excluded files.
> These corner cases are deviated by the configuration.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>
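For context, Rule 20.12 concerns a macro parameter used both as an operand of # or ## and as an ordinary expression, as happens in asm-offsets' DEFINE(). A minimal illustration (NAME_AND_VALUE() is a hypothetical macro, not the actual DEFINE()): the stringised use sees the literal argument, while the ordinary use sees its macro expansion.

```c
#include <assert.h>
#include <string.h>

#define VALUE 42

/* "sym" is used both stringised (#sym, argument NOT macro-expanded)
 * and as a plain expression ((sym), argument IS macro-expanded), so
 * the two uses observe different things -- the situation Rule 20.12
 * flags. */
#define NAME_AND_VALUE(sym) { #sym, (sym) }

struct entry { const char *name; int val; };

static const struct entry e = NAME_AND_VALUE(VALUE);
```

The #sym operand yields the spelling "VALUE", while (sym) expands to 42, demonstrating why the rule treats mixed uses as suspect even when, as here, the behaviour is intended.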


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 02:01:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 02:01:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744103.1151120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK76d-0003fi-8x; Thu, 20 Jun 2024 02:00:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744103.1151120; Thu, 20 Jun 2024 02:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK76d-0003fb-5q; Thu, 20 Jun 2024 02:00:55 +0000
Received: by outflank-mailman (input) for mailman id 744103;
 Thu, 20 Jun 2024 02:00:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK76c-0003fR-51; Thu, 20 Jun 2024 02:00:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK76b-0002n7-PM; Thu, 20 Jun 2024 02:00:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK76b-0005bD-Bz; Thu, 20 Jun 2024 02:00:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sK76b-00052Q-Bd; Thu, 20 Jun 2024 02:00:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WKtVsefQ5uQvuHYTWQPr+hwVXuI24rRuw35DnVcArzc=; b=Iw956kl9glPGzcW0KxFyT5gq0g
	YM1YKlL/aj29gSWwPqb3ngGfgsmrMMnh9rUI4bw/1P68NUm17d/pWaVfO2nYu+gYmKh/a+nDfw9D9
	PGXGUY4nKy1/PKRgZ/Yu86qR7idCgOM940Oi46kcirp6yvK80iHTpBkh9IsHsy84i8hc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186420-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186420: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 02:00:53 +0000

flight 186420 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186420/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186411

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    0 days
Testing same since   186412  2024-06-19 15:03:58 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 19 14:11:07 2024 +0200

    xen: avoid UB in guest handle arithmetic
    
    At least XENMEM_memory_exchange can have huge values passed in the
    nr_extents and nr_exchanged fields. Adding such values to pointers can
    overflow, resulting in UB. Cast respective pointers to "unsigned long"
    while at the same time making the necessary multiplication explicit.
    Remaining arithmetic is, despite there possibly being mathematical
    overflow, okay as per the C99 spec: "A computation involving unsigned
    operands can never overflow, because a result that cannot be represented
    by the resulting unsigned integer type is reduced modulo the number that
    is one greater than the largest value that can be represented by the
    resulting type." The overflow that we need to guard against is checked
    for in array_access_ok().
    
    Note that in / down from array_access_ok() the address value is only
    ever cast to "unsigned long" anyway, which is why in the invocation from
    guest_handle_subrange_okay() the value doesn't need casting back to
    pointer type.
    
    In compat grant table code change two guest_handle_add_offset() to avoid
    passing in negative offsets.
    
    Since {,__}clear_guest_offset() need touching anyway, also deal with
    another (latent) issue there: They were losing the handle type, i.e. the
    size of the individual objects accessed. Luckily the few users we
    presently have all pass char or uint8 handles.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>
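The C99 wording quoted in this commit can be demonstrated directly: unsigned arithmetic wraps modulo 2^N and is never UB, unlike pointer arithmetic past the end of an object. A minimal sketch of the cast-then-add pattern (advance() is a hypothetical helper, not Xen's actual code; the real bounds check is done separately, by array_access_ok() in Xen):

```c
#include <assert.h>
#include <limits.h>

/* Per C99 6.2.5, "a computation involving unsigned operands can never
 * overflow": the result is reduced modulo 2^N.  Adding a huge offset
 * to a pointer, by contrast, is UB, which is why guest handle pointers
 * are cast to unsigned long before the explicit multiplication. */
static unsigned long advance(unsigned long base, unsigned long count,
                             unsigned long size)
{
    return base + count * size;   /* well-defined modular arithmetic */
}
```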

commit 267122a24c499d26278ab2dbdfb46ebcaaf38474
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 16:14:36 2021 +0100

    x86/defns: Clean up X86_{XCR0,XSS}_* constants
    
    With the exception of one case in read_bndcfgu() which can use ilog2(),
    the *_POS defines are unused.  Drop them.
    
    X86_XCR0_X87 is the name used by both the SDM and APM, rather than
    X86_XCR0_FP.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 71cacfb035f4a78ee10970dc38a3baa04d387451
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpuid: Fix handling of XSAVE dynamic leaves
    
    First, if XSAVE is available in hardware but not visible to the guest, the
    dynamic leaves shouldn't be filled in.
    
    Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
    host/guest state automatically, but there is provision for "host only" bits to
    be set, so the implications are still accurate.
    
    Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
    check it at boot.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit fdb7e77fea4cb1c98dc51dd891a47f7e94612ad4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpu-policy: Simplify recalculate_xstate()
    
    Make use of xstate_uncompressed_size() helper rather than maintaining the
    running calculation while accumulating feature components.
    
    The rest of the CPUID data can come direct from the raw cpu policy.  All
    per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
    instructions.
    
    Use for_each_set_bit() rather than opencoding a slightly awkward version of
    it.  Mask the attributes in ecx down based on the visible features.  This
    isn't actually necessary for any components or attributes defined at the time
    of writing (up to AMX), but is added out of an abundance of caution.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit df09dfb94de66f7523837c050616a382aa2c7d17
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
    
    We're soon going to need a compressed helper of the same form.
    
    The size of the uncompressed image depends on the single element with the
    largest offset + size.  Sadly this isn't always the element with the largest
    index.
    
    Name the per-xstate-component cpu_policy structure, for legibility of the logic
    in xstate_uncompressed_size().  Cross-check with hardware during boot, and
    remove hw_uncompressed_size().
    
    This means that the migration paths don't need to mess with XCR0 just to
    sanity check the buffer size.  It also means we can drop the "fastpath" check
    against xfeature_mask (there to skip some XCR0 writes); this path is going to
    be dead logic the moment Xen starts using supervisor states itself.
    
    The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
    be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
    prior to setup_xstate_features() on the BSP.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit a09022a09e1a79b3f9574993993bfad803b32596
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 23 00:55:34 2024 +0100

    x86/boot: Collect the Raw CPU Policy earlier on boot
    
    This is a tangle, but it's a small step in the right direction.
    
    In the following change, xstate_init() is going to start using the Raw policy.
    
    calculate_raw_cpu_policy() is sufficiently separate from the other policies to
    safely move like this.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit d31a111940de5431c8bf465b1d38b89f1130a24b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 21 17:56:57 2020 +0000

    x86/xstate: Cross-check dynamic XSTATE sizes at boot
    
    Right now, xstate_ctxt_size() performs a cross-check of size with CPUID in for
    every call.  This is expensive, being used for domain create/migrate, as well
    as to service certain guest CPUID instructions.
    
    Instead, arrange to check the sizes once at boot.  See the code comments for
    details.  Right now, it just checks hardware against the algorithm
    expectations.  Later patches will cross-check Xen's XSTATE calculations too.
    
    Introduce more X86_XCR0_* and X86_XSS_* constants CPUID bits.  This is to
    maximise coverage in the sanity check, even if we don't expect to
    use/virtualise some of these features any time soon.  Leave HDC and HWP alone
    for now; we don't have CPUID bits for them stored nicely.
    
    Only perform the cross-checks when SELF_TESTS are active.  It's only
    developers or new hardware liable to trip these checks, and Xen at least
    tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
    don't want to be tickling in the general case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 9e6dbbe8bf400aacb99009ddffa91d2a0c312b39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 17:23:54 2024 +0100

    x86/xstate: Fix initialisation of XSS cache
    
    The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
    values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
    hardware register.
    
    While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
    get_msr_xss() to return the invalid value, and logic of the form:
    
        old = get_msr_xss();
        set_msr_xss(new);
        ...
        set_msr_xss(old);
    
    to try and restore said invalid value.
    
    The architecturally invalid value must be purged from the cache, meaning the
    hardware register must be written at least once.  This in turn highlights that
    the invalid value must only be used in the case that the hardware register is
    available.
    
    Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit aba98c8d671bd290e978ec154d0baf042e093a65
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 14 13:05:40 2024 +0100

    xen/arch: Centralise __read_mostly and __ro_after_init
    
    These living in cache.h is inherited from Linux, but cache.h is not a terribly
    appropriate location for them to live.
    
    __read_mostly is an optimisation related to data placement in order to avoid
    having shared data in cachelines that are likely to be written to, but it
    really is just a section of the linked image separating data by usage
    patterns; it has nothing to do with cache sizes or flushing logic.
    
    Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
    arch/cache.h, and has literally nothing whatsoever to do with caches.
    
    Move the definitions into xen/sections.h, which in particular means that
    RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
    to provide short descriptions of what these are used for.
    
    For now, leave TODO comments next to the other identical definitions.  It
    turns out that unpicking cache.h is more complicated than it appears because a
    number of files use it for transitive dependencies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 82f480944718d9e8340a6ac1af41ece7851115bf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jun 18 13:48:35 2024 +0100

    xen/irq: Address MISRA Rule 8.3 violation
    
    When centralising irq_ack_none(), different architectures had different names
    for the parameter.  As its type is struct irq_desc *, it should be named
    desc.  Make this consistent.
    
    No functional change.
    
    Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 03:46:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 03:46:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744121.1151135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK8kK-0006se-Lv; Thu, 20 Jun 2024 03:46:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744121.1151135; Thu, 20 Jun 2024 03:46:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK8kK-0006sX-JN; Thu, 20 Jun 2024 03:46:00 +0000
Received: by outflank-mailman (input) for mailman id 744121;
 Thu, 20 Jun 2024 03:45:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8kI-0006s4-Jw; Thu, 20 Jun 2024 03:45:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8kI-0004rK-4p; Thu, 20 Jun 2024 03:45:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8kH-0008FC-RT; Thu, 20 Jun 2024 03:45:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8kH-0003G4-Qr; Thu, 20 Jun 2024 03:45:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+Akyi2ebykoqDX5UrGj/H55qpaMFo95gBf9tbup1rek=; b=40iay0Svaqf+pLBx5dzStGKrXz
	fPga4b4n/cvOeOK4QFKHXFQlIJJ/fofTqlzvCG5KkdpzJJVuxssGPelXx6ag16J5Y/I/Ng6Vbnesi
	+q3iH8E3cREGJ5vgPD7QliJDSdGNcYdcBCPQd1/LhRnf1TRg3DORfU/8Nl1yC6/OKKgk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186413-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186413: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e5b3efbe1ab1793bb49ae07d56d0973267e65112
X-Osstest-Versions-That:
    linux=92e5605a199efbaee59fb19e15d6cc2103a04ec2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 03:45:57 +0000

flight 186413 linux-linus real [real]
flight 186424 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186413/
http://logs.test-lab.xenproject.org/osstest/logs/186424/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-raw       8 xen-boot                 fail REGR. vs. 186406

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 186406
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186406
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186406
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186406
 test-armhf-armhf-examine      8 reboot                       fail  like 186406
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186406
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186406
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e5b3efbe1ab1793bb49ae07d56d0973267e65112
baseline version:
 linux                92e5605a199efbaee59fb19e15d6cc2103a04ec2

Last test of basis   186406  2024-06-19 02:45:48 Z    1 days
Testing same since   186413  2024-06-19 17:42:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Marangi <ansuelsmth@gmail.com>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Martin Schiller <ms@dev.tdt.de>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit e5b3efbe1ab1793bb49ae07d56d0973267e65112
Merge: 6785e3cc09f1 3572bd5689b0
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed Jun 19 10:29:49 2024 -0700

    Merge tag 'probes-fixes-v6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
    
    Pull probes fix from Masami Hiramatsu:
    
     - Restrict gen-API tests for synthetic and kprobe events to only be
       built as modules, as they generate dynamic events that cannot be
       removed, causing ftracetest and startup selftests to fail
    
    * tag 'probes-fixes-v6.10-rc4' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
      tracing: Build event generation tests only as modules

commit 6785e3cc09f149c42ce70eb92736d68c0db64684
Merge: 92e5605a199e 6e5aee08bd25
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed Jun 19 10:19:41 2024 -0700

    Merge tag 'mips-fixes_6.10_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux
    
    Pull MIPS fixes from Thomas Bogendoerfer:
    
     - fix for BCM6538 boards
    
     - fix RB532 PCI workaround
    
    * tag 'mips-fixes_6.10_1' of git://git.kernel.org/pub/scm/linux/kernel/git/mips/linux:
      Revert "MIPS: pci: lantiq: restore reset gpio polarity"
      mips: bmips: BCM6358: make sure CBR is correctly set
      MIPS: pci: lantiq: restore reset gpio polarity
      MIPS: Routerboard 532: Fix vendor retry check code

commit 6e5aee08bd2517397c9572243a816664f2ead547
Author: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Date:   Thu Jun 13 10:17:09 2024 +0200

    Revert "MIPS: pci: lantiq: restore reset gpio polarity"
    
    This reverts commit 277a0363120276645ae598d8d5fea7265e076ae9.
    
    While it fixes old boards with broken DTs, this change breaks newer
    ones with correct gpio polarity annotations.
    
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>

commit 3572bd5689b0812b161b40279e39ca5b66d73e88
Author: Masami Hiramatsu (Google) <mhiramat@kernel.org>
Date:   Tue Jun 11 22:30:37 2024 +0900

    tracing: Build event generation tests only as modules
    
    The kprobes and synth event generation test modules add events and lock
    (get a reference on) those event files in their module init functions,
    and unlock and delete them in their module exit functions, because they
    are designed to be used as modules.

    If those tests are built in instead, the events are left locked in the
    kernel and can never be removed. This causes the kprobe event self-test
    failure below.
    
    [   97.349708] ------------[ cut here ]------------
    [   97.353453] WARNING: CPU: 3 PID: 1 at kernel/trace/trace_kprobe.c:2133 kprobe_trace_self_tests_init+0x3f1/0x480
    [   97.357106] Modules linked in:
    [   97.358488] CPU: 3 PID: 1 Comm: swapper/0 Not tainted 6.9.0-g699646734ab5-dirty #14
    [   97.361556] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 1.15.0-1 04/01/2014
    [   97.363880] RIP: 0010:kprobe_trace_self_tests_init+0x3f1/0x480
    [   97.365538] Code: a8 24 08 82 e9 ae fd ff ff 90 0f 0b 90 48 c7 c7 e5 aa 0b 82 e9 ee fc ff ff 90 0f 0b 90 48 c7 c7 2d 61 06 82 e9 8e fd ff ff 90 <0f> 0b 90 48 c7 c7 33 0b 0c 82 89 c6 e8 6e 03 1f ff 41 ff c7 e9 90
    [   97.370429] RSP: 0000:ffffc90000013b50 EFLAGS: 00010286
    [   97.371852] RAX: 00000000fffffff0 RBX: ffff888005919c00 RCX: 0000000000000000
    [   97.373829] RDX: ffff888003f40000 RSI: ffffffff8236a598 RDI: ffff888003f40a68
    [   97.375715] RBP: 0000000000000000 R08: 0000000000000001 R09: 0000000000000000
    [   97.377675] R10: ffffffff811c9ae5 R11: ffffffff8120c4e0 R12: 0000000000000000
    [   97.379591] R13: 0000000000000001 R14: 0000000000000015 R15: 0000000000000000
    [   97.381536] FS:  0000000000000000(0000) GS:ffff88807dcc0000(0000) knlGS:0000000000000000
    [   97.383813] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [   97.385449] CR2: 0000000000000000 CR3: 0000000002244000 CR4: 00000000000006b0
    [   97.387347] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [   97.389277] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
    [   97.391196] Call Trace:
    [   97.391967]  <TASK>
    [   97.392647]  ? __warn+0xcc/0x180
    [   97.393640]  ? kprobe_trace_self_tests_init+0x3f1/0x480
    [   97.395181]  ? report_bug+0xbd/0x150
    [   97.396234]  ? handle_bug+0x3e/0x60
    [   97.397311]  ? exc_invalid_op+0x1a/0x50
    [   97.398434]  ? asm_exc_invalid_op+0x1a/0x20
    [   97.399652]  ? trace_kprobe_is_busy+0x20/0x20
    [   97.400904]  ? tracing_reset_all_online_cpus+0x15/0x90
    [   97.402304]  ? kprobe_trace_self_tests_init+0x3f1/0x480
    [   97.403773]  ? init_kprobe_trace+0x50/0x50
    [   97.404972]  do_one_initcall+0x112/0x240
    [   97.406113]  do_initcall_level+0x95/0xb0
    [   97.407286]  ? kernel_init+0x1a/0x1a0
    [   97.408401]  do_initcalls+0x3f/0x70
    [   97.409452]  kernel_init_freeable+0x16f/0x1e0
    [   97.410662]  ? rest_init+0x1f0/0x1f0
    [   97.411738]  kernel_init+0x1a/0x1a0
    [   97.412788]  ret_from_fork+0x39/0x50
    [   97.413817]  ? rest_init+0x1f0/0x1f0
    [   97.414844]  ret_from_fork_asm+0x11/0x20
    [   97.416285]  </TASK>
    [   97.417134] irq event stamp: 13437323
    [   97.418376] hardirqs last  enabled at (13437337): [<ffffffff8110bc0c>] console_unlock+0x11c/0x150
    [   97.421285] hardirqs last disabled at (13437370): [<ffffffff8110bbf1>] console_unlock+0x101/0x150
    [   97.423838] softirqs last  enabled at (13437366): [<ffffffff8108e17f>] handle_softirqs+0x23f/0x2a0
    [   97.426450] softirqs last disabled at (13437393): [<ffffffff8108e346>] __irq_exit_rcu+0x66/0xd0
    [   97.428850] ---[ end trace 0000000000000000 ]---
    
    Also, since the dynamic_events file cannot be cleaned up, ftracetest
    fails too.
    
    To avoid these issues, build these tests only as modules.
    
    Link: https://lore.kernel.org/all/171811263754.85078.5877446624311852525.stgit@devnote2/
    
    Fixes: 9fe41efaca08 ("tracing: Add synth event generation test module")
    Fixes: 64836248dda2 ("tracing: Add kprobe event command generation test module")
    Signed-off-by: Masami Hiramatsu (Google) <mhiramat@kernel.org>
    Reviewed-by: Steven Rostedt (Google) <rostedt@goodmis.org>
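
The fix described above restricts the test Kconfig symbols to modular builds. A sketch of the Kconfig idiom (symbol names and prompt text are illustrative; see kernel/trace/Kconfig for the real entries):

```
config SYNTH_EVENT_GEN_TEST
	tristate "Test module for synthetic event generation"
	depends on SYNTHETIC_EVENTS
	depends on m
```

With `depends on m`, the option can only be `m` or unset, so the test can never be built into the kernel image and its exit path is always reachable.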

commit ce5cdd3b05216b704a704f466fb4c2dff3778caf
Author: Christian Marangi <ansuelsmth@gmail.com>
Date:   Tue Jun 11 13:35:33 2024 +0200

    mips: bmips: BCM6358: make sure CBR is correctly set
    
    It was discovered that some devices have the CBR address set to 0,
    causing a kernel panic when arch_sync_dma_for_cpu_all is called.

    This was noticed in a situation where the system is booted from TP1:
    BMIPS_GET_CBR() returns 0 instead of a valid address, while
    !!(read_c0_brcm_cmt_local() & (1 << 31)) does not fail.

    The current checks for whether the RAC flush should be disabled are
    not enough; hence, also check whether CBR is a valid address.
    
    Fixes: ab327f8acdf8 ("mips: bmips: BCM6358: disable RAC flush for TP1")
    Signed-off-by: Christian Marangi <ansuelsmth@gmail.com>
    Acked-by: Florian Fainelli <florian.fainelli@broadcom.com>
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
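
The guard this commit adds amounts to a trivial validity check. A self-contained sketch (BMIPS_GET_CBR() and the real RAC handling live in arch/mips/bmips; the helper name below is illustrative): a CBR of 0 is treated as invalid so the RAC flush path is skipped instead of dereferencing address 0.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative check: a CBR (Core Base Register) address of 0 means the
 * register was never set up, so any RAC flush through it would fault. */
static bool cbr_is_valid(uintptr_t cbr)
{
    return cbr != 0;
}
```

Callers that previously relied only on the CMT-local bit would additionally test `cbr_is_valid(...)` before touching the RAC registers.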

commit 277a0363120276645ae598d8d5fea7265e076ae9
Author: Martin Schiller <ms@dev.tdt.de>
Date:   Fri Jun 7 11:04:00 2024 +0200

    MIPS: pci: lantiq: restore reset gpio polarity
    
    Commit 90c2d2eb7ab5 ("MIPS: pci: lantiq: switch to using gpiod API") not
    only switched to the gpiod API, but also inverted / changed the polarity
    of the GPIO.
    
    According to the PCI specification, the RST# pin is an active-low
    signal. However, most of the device trees that have been widely used for
    a long time (mainly in the OpenWrt project) define this GPIO as
    active-high and the old driver code inverted the signal internally.
    
    Apparently there are boards where the reset gpio must actually be
    operated inverted. For this reason, we cannot use the
    GPIOD_OUT_LOW/HIGH flags for initialization. Instead, we must
    explicitly set the gpio to value 1 so that any "GPIO_ACTIVE_LOW" flag
    that may have been set is taken into account.
    
    In order to remain compatible with all these existing device trees, we
    should therefore keep the logic as it was before the commit.
    
    Fixes: 90c2d2eb7ab5 ("MIPS: pci: lantiq: switch to using gpiod API")
    Cc: stable@vger.kernel.org
    Signed-off-by: Martin Schiller <ms@dev.tdt.de>
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
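
A minimal model of the gpiod polarity semantics the commit relies on (this illustrates GPIO_ACTIVE_LOW behaviour only, not the lantiq driver code): setting a logical value of 1 drives the physical line low when the device tree marks the GPIO active-low, which is why the driver sets the gpio to 1 explicitly rather than baking a fixed level in via GPIOD_OUT_LOW/HIGH at request time.

```c
#include <stdbool.h>

/* Map a logical gpiod value to the physical line level: with
 * GPIO_ACTIVE_LOW set in the DT, logical 1 ("asserted") is physical low. */
static bool physical_level(bool logical, bool active_low)
{
    return active_low ? !logical : logical;
}
```

So the same `set value to 1` call asserts reset correctly on both active-high and active-low boards, as long as the DT flag matches the wiring.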

commit ae9daffd9028f2500c9ac1517e46d4f2b57efb80
Author: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
Date:   Wed May 8 15:07:00 2024 +0300

    MIPS: Routerboard 532: Fix vendor retry check code
    
    read_config_dword() contains a strange condition that checks ret
    against a number of values. The ret variable, however, is always zero,
    because config_access() never returns anything else. Thus, the retry
    is always taken until the number of tries is exceeded.
    
    The code looks like it wants to check *val instead of ret to see if the
    read gave an error response.
    
    Fixes: 73b4390fb234 ("[MIPS] Routerboard 532: Support for base system")
    Signed-off-by: Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
    Signed-off-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
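
The bug pattern can be shown with a small self-contained sketch (constants and helpers below are stand-ins, not the actual arch/mips/rb532 code): since config_access() always returns zero, the retry decision has to be made on the value that was actually read back.

```c
#include <stdint.h>

#define PCI_CFG_RETRIES 2

/* Stub mirroring the behaviour the commit describes: config_access()
 * unconditionally returns 0, so its return value carries no error info.
 * A failed config read shows up as an all-ones data pattern instead. */
static int config_access(uint32_t *val)
{
    *val = 0xFFFFFFFFu;
    return 0;
}

/* Corrected retry loop: inspect *val (the data read), not the
 * always-zero return value, to decide whether to retry. */
static int read_config_dword(uint32_t *val)
{
    for (int tries = 0; tries < PCI_CFG_RETRIES; tries++) {
        config_access(val);
        if (*val != 0xFFFFFFFFu)  /* not an error pattern: success */
            return 0;
    }
    return -1;                    /* retries exhausted */
}
```

With the original `if (ret ...)` checks, this loop could never detect an error and simply burned through its retries.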


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 03:58:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 03:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744133.1151146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK8wJ-000064-Ug; Thu, 20 Jun 2024 03:58:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744133.1151146; Thu, 20 Jun 2024 03:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sK8wJ-00005w-R1; Thu, 20 Jun 2024 03:58:23 +0000
Received: by outflank-mailman (input) for mailman id 744133;
 Thu, 20 Jun 2024 03:58:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8wI-00005m-P5; Thu, 20 Jun 2024 03:58:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8wI-00055L-Hz; Thu, 20 Jun 2024 03:58:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8wI-00009q-7p; Thu, 20 Jun 2024 03:58:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sK8wI-0000pi-7D; Thu, 20 Jun 2024 03:58:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GPUodwrxAPFHA3rS/HJkaiVIbV/ODpohGbPR7eg5qVM=; b=lgzxEEjOGYROipgYjeF6pu/ZI2
	QwYzyh0YJwfm8fg8cIdoFRVoHmBxkCrj1jjrixE9TtY/cGFJE/kIp2UXJ0vBYkYwSDgrnFboNnyYT
	3d60MdmSMfjL24XgcvmwTBLh2a4DwY5gW1oi4CMmq9Vg5DSXuDwriNZZ1qefzZA9NOCQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186422-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186422: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=57a890fd03356350a1b7a2a0064c8118f44e9958
X-Osstest-Versions-That:
    ovmf=95e220e95d6237e21f7c0b83fca02d56b9327c4a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 03:58:22 +0000

flight 186422 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186422/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 57a890fd03356350a1b7a2a0064c8118f44e9958
baseline version:
 ovmf                 95e220e95d6237e21f7c0b83fca02d56b9327c4a

Last test of basis   186414  2024-06-19 17:42:49 Z    0 days
Testing same since   186422  2024-06-20 01:56:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   95e220e95d..57a890fd03  57a890fd03356350a1b7a2a0064c8118f44e9958 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 05:20:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 05:20:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744146.1151161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKADW-0001hk-FK; Thu, 20 Jun 2024 05:20:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744146.1151161; Thu, 20 Jun 2024 05:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKADW-0001hd-CR; Thu, 20 Jun 2024 05:20:14 +0000
Received: by outflank-mailman (input) for mailman id 744146;
 Thu, 20 Jun 2024 05:20:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKADV-0001hT-7w; Thu, 20 Jun 2024 05:20:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKADU-00078o-UU; Thu, 20 Jun 2024 05:20:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKADU-0003y2-Mm; Thu, 20 Jun 2024 05:20:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKADU-0001wz-MD; Thu, 20 Jun 2024 05:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eyNp5aUakBcvM28PFWiFwU+6N10rSxfCYCjhvH/3WCg=; b=M/p6iUSu2XqLN3OXOi7wl1/i5J
	uuQQj21eiOJ5x5iu+mWveZa5xTaqys7XxnLffqeWpaIeSIc7uGbEFuX7xPYEOYSWrAUI97xFSHKL8
	GInq0Aev0uNi+TieNq6uRV/SytYWWfZPhnxJ1zNJ2j9D/pCx2O3E5DSRFTlLjfukpQVE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186425-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186425: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 05:20:12 +0000

flight 186425 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186425/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186411

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    0 days
Testing same since   186412  2024-06-19 15:03:58 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 19 14:11:07 2024 +0200

    xen: avoid UB in guest handle arithmetic
    
    At least XENMEM_memory_exchange can have huge values passed in the
    nr_extents and nr_exchanged fields. Adding such values to pointers can
    overflow, resulting in UB. Cast respective pointers to "unsigned long"
    while at the same time making the necessary multiplication explicit.
    Remaining arithmetic is, despite there possibly being mathematical
    overflow, okay as per the C99 spec: "A computation involving unsigned
    operands can never overflow, because a result that cannot be represented
    by the resulting unsigned integer type is reduced modulo the number that
    is one greater than the largest value that can be represented by the
    resulting type." The overflow that we need to guard against is checked
    for in array_access_ok().
    
    Note that in / down from array_access_ok() the address value is only
    ever cast to "unsigned long" anyway, which is why in the invocation from
    guest_handle_subrange_okay() the value doesn't need casting back to
    pointer type.
    
    In compat grant table code change two guest_handle_add_offset() to avoid
    passing in negative offsets.
    
    Since {,__}clear_guest_offset() need touching anyway, also deal with
    another (latent) issue there: They were losing the handle type, i.e. the
    size of the individual objects accessed. Luckily the few users we
    presently have all pass char or uint8 handles.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>
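
    [Editor's note: a minimal standalone sketch (not Xen code; names are
    illustrative) of the C99 rule the commit message quotes: unsigned
    arithmetic wraps modulo 2^N and is never UB, whereas pointer
    arithmetic that leaves the underlying object is UB, which is why the
    patch computes guest handle addresses in "unsigned long".]

    ```c
    #include <assert.h>
    #include <limits.h>

    int main(void)
    {
        /* Unsigned overflow is defined: reduced modulo ULONG_MAX + 1. */
        unsigned long base = ULONG_MAX - 1;
        unsigned long sum = base + 4;   /* wraps, no UB */
        assert(sum == 2);

        /* Pointer + huge offset would be UB if it left the object, so
         * compute the address as an integer and convert at the end. */
        char buf[16];
        unsigned long addr = (unsigned long)buf + 8 * sizeof(char);
        assert((char *)addr == buf + 8);
        return 0;
    }
    ```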

commit 267122a24c499d26278ab2dbdfb46ebcaaf38474
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 16:14:36 2021 +0100

    x86/defns: Clean up X86_{XCR0,XSS}_* constants
    
    With the exception of one case in read_bndcfgu() which can use ilog2(),
    the *_POS defines are unused.  Drop them.
    
    X86_XCR0_X87 is the name used by both the SDM and APM, rather than
    X86_XCR0_FP.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 71cacfb035f4a78ee10970dc38a3baa04d387451
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpuid: Fix handling of XSAVE dynamic leaves
    
    First, if XSAVE is available in hardware but not visible to the guest, the
    dynamic leaves shouldn't be filled in.
    
    Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
    host/guest state automatically, but there is provision for "host only" bits to
    be set, so the implications are still accurate.
    
    Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
    check it at boot.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit fdb7e77fea4cb1c98dc51dd891a47f7e94612ad4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpu-policy: Simplify recalculate_xstate()
    
    Make use of xstate_uncompressed_size() helper rather than maintaining the
    running calculation while accumulating feature components.
    
    The rest of the CPUID data can come direct from the raw cpu policy.  All
    per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
    instructions.
    
    Use for_each_set_bit() rather than opencoding a slightly awkward version of
    it.  Mask the attributes in ecx down based on the visible features.  This
    isn't actually necessary for any components or attributes defined at the time
    of writing (up to AMX), but is added out of an abundance of caution.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit df09dfb94de66f7523837c050616a382aa2c7d17
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
    
    We're soon going to need a compressed helper of the same form.
    
    The size of the uncompressed image depends on the single element with the
    largest offset + size.  Sadly this isn't always the element with the largest
    index.
    
    Name the per-xstate-component cpu_policy structure, for legibility of the logic
    in xstate_uncompressed_size().  Cross-check with hardware during boot, and
    remove hw_uncompressed_size().
    
    This means that the migration paths don't need to mess with XCR0 just to
    sanity check the buffer size.  It also means we can drop the "fastpath" check
    against xfeature_mask (there to skip some XCR0 writes); this path is going to
    be dead logic the moment Xen starts using supervisor states itself.
    
    The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
    be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
    prior to setup_xstate_features() on the BSP.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit a09022a09e1a79b3f9574993993bfad803b32596
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 23 00:55:34 2024 +0100

    x86/boot: Collect the Raw CPU Policy earlier on boot
    
    This is a tangle, but it's a small step in the right direction.
    
    In the following change, xstate_init() is going to start using the Raw policy.
    
    calculate_raw_cpu_policy() is sufficiently separate from the other policies to
    safely move like this.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit d31a111940de5431c8bf465b1d38b89f1130a24b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 21 17:56:57 2020 +0000

    x86/xstate: Cross-check dynamic XSTATE sizes at boot
    
    Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
    every call.  This is expensive, being used for domain create/migrate, as well
    as to service certain guest CPUID instructions.
    
    Instead, arrange to check the sizes once at boot.  See the code comments for
    details.  Right now, it just checks hardware against the algorithm
    expectations.  Later patches will cross-check Xen's XSTATE calculations too.
    
    Introduce more X86_XCR0_* and X86_XSS_* constants and CPUID bits.  This is to
    maximise coverage in the sanity check, even if we don't expect to
    use/virtualise some of these features any time soon.  Leave HDC and HWP alone
    for now; we don't have CPUID bits from them stored nicely.
    
    Only perform the cross-checks when SELF_TESTS are active.  It's only
    developers or new hardware liable to trip these checks, and Xen at least
    tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
    don't want to be tickling in the general case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 9e6dbbe8bf400aacb99009ddffa91d2a0c312b39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 17:23:54 2024 +0100

    x86/xstate: Fix initialisation of XSS cache
    
    The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
    values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
    hardware register.
    
    While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
    get_msr_xss() to return the invalid value, and logic of the form:
    
        old = get_msr_xss();
        set_msr_xss(new);
        ...
        set_msr_xss(old);
    
    to try and restore said invalid value.
    
    The architecturally invalid value must be purged from the cache, meaning the
    hardware register must be written at least once.  This in turn highlights that
    the invalid value must only be used in the case that the hardware register is
    available.
    
    Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
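
    [Editor's note: a sketch of the write-through caching pattern this
    fix concerns, with hypothetical names standing in for set_msr_xss()
    and this_cpu(xss).  The point is that a sentinel seeded into the
    cache only works if the first real write is guaranteed to purge it.]

    ```c
    #include <assert.h>
    #include <stdint.h>

    #define XSS_INVALID UINT64_MAX  /* architecturally impossible value */

    static uint64_t cached_xss = XSS_INVALID;  /* per-CPU cache */
    static uint64_t hw_xss;                    /* stands in for MSR_XSS */
    static int hw_writes;

    static void set_msr_xss(uint64_t val)
    {
        if ( cached_xss == val )    /* cache hit: skip the WRMSR */
            return;
        hw_xss = val;               /* "hardware" write */
        hw_writes++;
        cached_xss = val;
    }

    int main(void)
    {
        set_msr_xss(0);             /* purges the sentinel: must hit hw */
        set_msr_xss(0);             /* cache hit: no second write */
        assert(hw_writes == 1 && hw_xss == 0);

        /* The bug fixed above: if nothing ever wrote the register,
         * get_msr_xss() would hand callers XSS_INVALID, which a
         * save/restore pair would then replay into hardware. */
        return 0;
    }
    ```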

commit aba98c8d671bd290e978ec154d0baf042e093a65
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 14 13:05:40 2024 +0100

    xen/arch: Centralise __read_mostly and __ro_after_init
    
    Their living in cache.h is inherited from Linux, but cache.h is not a terribly
    appropriate location for them to live.
    
    __read_mostly is an optimisation related to data placement in order to avoid
    having shared data in cachelines that are likely to be written to, but it
    really is just a section of the linked image separating data by usage
    patterns; it has nothing to do with cache sizes or flushing logic.
    
    Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
    arch/cache.h, and has literally nothing whatsoever to do with caches.
    
    Move the definitions into xen/sections.h, which in particular means that
    RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
    to provide a short description of what these are used for.
    
    For now, leave TODO comments next to the other identical definitions.  It
    turns out that unpicking cache.h is more complicated than it appears because a
    number of files use it for transitive dependencies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 82f480944718d9e8340a6ac1af41ece7851115bf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jun 18 13:48:35 2024 +0100

    xen/irq: Address MISRA Rule 8.3 violation
    
    When centralising irq_ack_none(), different architectures had different names
    for the parameter.  As its type is struct irq_desc *, it should be named
    desc.  Make this consistent.
    
    No functional change.
    
    Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:00:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 07:00:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744178.1151172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKBm9-0003hN-0g; Thu, 20 Jun 2024 07:00:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744178.1151172; Thu, 20 Jun 2024 07:00:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKBm8-0003gp-U0; Thu, 20 Jun 2024 07:00:04 +0000
Received: by outflank-mailman (input) for mailman id 744178;
 Thu, 20 Jun 2024 07:00:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKBm7-0003Mg-EZ
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 07:00:03 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b68fa577-2ed2-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 09:00:01 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id
 2adb3069b0e04-52bc035a7ccso532680e87.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 00:00:01 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7063348ab67sm2306960b3a.66.2024.06.19.23.59.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 00:00:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b68fa577-2ed2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718866801; x=1719471601; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Oc9cQ/GcKk5FRUciRDd9D/g4Z2yaCdrLK01VtJraBPc=;
        b=S1XcTQLY/+zP/nH2VEZbizbFKhhFfVZF+mn5UaHqZSY3AWI/PnOJYTdl4AxnWwhJMZ
         lmur1Iq2Cms0KOqLsSCNR1gp0OE+sF3dq2p7WSfiJpvZINpFyNy+yNqWHNCtcnzH4guW
         P24Xudax0dmxXXu50KqRzqaLiU09yi+cRht5BnDlT2/HNHywB77BX8ajCL6llPQQYBRK
         7VzUcMeNBQp6CjzhEeqQ+u1DtiR+UBmPbfkMyUOP1N9U6nmjp3Hx3QTnhU08AGB8eS17
         sEuQSWNVpJEIKltkHuTJaZfCoZDd2Kif7T2TjxNsQLKepttgHiBa3nB902xBd1A4iYDt
         4UZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718866801; x=1719471601;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Oc9cQ/GcKk5FRUciRDd9D/g4Z2yaCdrLK01VtJraBPc=;
        b=Pb6w5NhO4MLNq6H7sCRCOZasK0xTv2LjZjICsSOj1hDbdyTktCa3y1RqIFuP8nm3Vw
         pRzrc0u3xGI8/c/KAXPhWjJSV5o9Mbpi+fuhl0Py4mRrK5H+aMCYg4s21i74Oxfo3aio
         mt6vA6VLfdM7y/Ppro4lSm4iOaTDPaHatbUOlipmusmPzMQtgbt1dt9hNMuZlZac5RqI
         Uv5ksUrVzqLhCjbB5qLGNNcVR7EnbIbb95H88zLCO6aCLCpSH2SOxyUbUDciCWWoGrme
         FfhUSB1mcfU4L8OiB+cWjbWEKkwzvUJ0Fr436X+OYeFg4toocVGJx0RdFoEatrMRj3cu
         DxyQ==
X-Forwarded-Encrypted: i=1; AJvYcCVi6NgebfhvjBW1eXZsGm8s6ICeq6yBnkwXyWliZ+lTnZlCVlxJKPEeEGhJLP645GoouIovJR2wyzpn2eScl2rFxYpREWgQ0Paj/oCDA00=
X-Gm-Message-State: AOJu0YyxOqQ3z3yAU6V1u/M/5esJrnK7HW1H954Xd9oAzoFqaaSeBoj2
	bLLmucvbQn4yYcZsdnT7pAV6sZ+dPjAIF6k71CItoJMD3O/TJLndGhnZvZJdKQ==
X-Google-Smtp-Source: AGHT+IE0hulWbwf4yVkfW/zSomLOcUZ46NcYQkXOAHHYJhgkCz7K4wVG9rHkNO6iXzsvKoaDs24PWw==
X-Received: by 2002:a2e:914b:0:b0:2ec:174b:75bf with SMTP id 38308e7fff4ca-2ec3ce9f6c0mr28788051fa.4.1718866800778;
        Thu, 20 Jun 2024 00:00:00 -0700 (PDT)
Message-ID: <ac007fbe-3324-4b7a-a7b9-0ff32c3131bc@suse.com>
Date: Thu, 20 Jun 2024 08:59:54 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/guest_access: Fix accessors for 32bit builds of Xen
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240619163100.2556555-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240619163100.2556555-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.06.2024 18:31, Andrew Cooper wrote:
> Gitlab CI reports an ARM32 randconfig failure as follows:
> 
>   In file included from common/livepatch.c:9:
>   common/livepatch.c: In function ‘livepatch_list’:
>   ./include/xen/guest_access.h:130:25: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
>     130 |     __raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
>         |                         ^
>   common/livepatch.c:1283:18: note: in expansion of macro ‘__copy_to_guest_offset’
>    1283 |             if ( __copy_to_guest_offset(list->name, name_offset,
>         |                  ^~~~~~~~~~~~~~~~~~~~~~
>   ./include/xen/guest_access.h:130:25: error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
>     130 |     __raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
>         |                         ^
>   common/livepatch.c:1287:17: note: in expansion of macro ‘__copy_to_guest_offset’
>    1287 |                 __copy_to_guest_offset(list->metadata, metadata_offset,
>         |                 ^~~~~~~~~~~~~~~~~~~~~~
> 
> This isn't specific to ARM32; it's LIVEPATCH on any 32bit build of Xen.
> 
> Both name_offset and metadata_offset are uint64_t, meaning that the
> expression:
> 
>   (d_ + (off) * sizeof(*(hnd).p)
> 
> gets promoted to uint64_t, and is too wide to cast back to a pointer in 32bit
> builds.  The expression needs casting through (unsigned long) before it can be
> cast to (void *).

I disagree. Instead I'd like to raise the question why these two local variables
are uint64_t in the first place. They accumulate buffer size, and hence ought to
have been size_t from the beginning. I'll make an alternative patch (first making
sure I test livepatch building not only for x86 and arm64).

> @@ -65,7 +65,7 @@
>      /* Check that the handle is not for a const type */ \
>      void *__maybe_unused _t = (hnd).p;                  \
>      (void)((hnd).p == _s);                              \
> -    raw_copy_to_guest((void *)(d_ + (off) * sizeof(*_s)), \
> +    raw_copy_to_guest(_p(d_ + (off) * sizeof(*_s)),     \

I also disagree, from a more abstract perspective, with using _p() like this.
We'd rather keep this as straightforward as possible, to keep down the risk of
hiding bugs by excess casting.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:03:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 07:03:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744184.1151182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKBpL-0004Yz-Ew; Thu, 20 Jun 2024 07:03:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744184.1151182; Thu, 20 Jun 2024 07:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKBpL-0004Ys-Bo; Thu, 20 Jun 2024 07:03:23 +0000
Received: by outflank-mailman (input) for mailman id 744184;
 Thu, 20 Jun 2024 07:03:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F3B6=NW=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sKBpJ-0004Yl-Lw
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 07:03:21 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20606.outbound.protection.outlook.com
 [2a01:111:f403:2408::606])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2c6473d5-2ed3-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 09:03:20 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by DS7PR12MB8231.namprd12.prod.outlook.com (2603:10b6:8:db::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Thu, 20 Jun
 2024 07:03:16 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Thu, 20 Jun 2024
 07:03:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c6473d5-2ed3-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cby54vvKPo6Zh1+0kfsfKTYBl2wEl9EMRu4mny/ICXhyv4yXHPbkHhrzIlCGlAD0yNqnfq4MVj7Os/c56F2cUsq74aFJwRpVclMb7r2At81XiIa2rpVQSPvBzj0e/x22SAqYRr3fYUh+PEFrNKSJ5029eUqVrF+59F8ljlFdklqcJdZw7JPfQsMXrL2T3mv++7p3qFD6vzZCb7DHIT1QxmWu8D8By9bxyblqsSZSSml6Wzhxh1ffPFSIqbMFBmnpAV5qqjXXu1nW/CYCWLE23qLWQKa/XGxmTInw6L7D3pILygxu22nIjYIav7it5MVPCIxDeXBV9jm9g3xHrVtnBw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=ynlm9TnUOxuZ8OJUp1x9VljZupy4oxRLKp3UZv/IYSs=;
 b=dI7D0Mrp5niAnemW+05hd6RGdTnLDJ6KZpT14nfGcySmox4RAQ0iBrf8VqdRmN7F72aYuymtunjc3X1UoGl4rsKkkFlk69luojlG6AaUth0jfjkzQxZCJzcQn5/Uxo4j6lchXuO4QBhTJEWz6uz6KiaSce3PgcJzEmPxC9yj5GJsiauzmIjxFoHUj77/8LXpMoAdClvL0wYkgBoMCtD0ED0H3FwQ/ihPGOAD2o48vbXp1q/62oEy7G1NxY++BmCwEKuY4JYENdt+1/gCYlVam8lA3RGOXP64SkwuGH81Bd+s7Bu90YkSkD1mObqgH6HhkRWJnKWkrMAKKGZZW0Uz6A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ynlm9TnUOxuZ8OJUp1x9VljZupy4oxRLKp3UZv/IYSs=;
 b=TWiaAY97AGC8J6Z3X0MYzhFkkF6o2135OLniq2S8ujVMhaugtS1pXy2nZZqykpo4r/CQzhHsOFESs7l4GLtadeOiFI6ZbRg8lBLdaTFfRinr4YNu0Kf1Y6N1hRR2sTbD9dBE+qP+5+ONn1Uw1pvL3Og5yq/EaB1a2XZc3Pc3qDU=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Topic: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Index: AQHawJToSxHkdXzldkCDZE5kihITT7HMD9QAgAGTOgD//5t7AIADeOiA
Date: Thu, 20 Jun 2024 07:03:15 +0000
Message-ID:
 <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
In-Reply-To: <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7698.013)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|DS7PR12MB8231:EE_
x-ms-office365-filtering-correlation-id: b89f7d3e-7c07-49eb-ad66-08dc90f70e67
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|376011|1800799021|7416011|366013|38070700015;
Content-Type: text/plain; charset="utf-8"
Content-ID: <F0C564795D1A4446A21E0550967D455A@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b89f7d3e-7c07-49eb-ad66-08dc90f70e67
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jun 2024 07:03:15.2278
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: satREuysa0aidf1QCVLFrX2L4EN78hcmhVoy5xJ7KX2Gi4nPLC/dnaPvz4+P09wPhmqpfH7QG1R0CqYX4h4naA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR12MB8231

On 2024/6/18 17:13, Jan Beulich wrote:
> On 18.06.2024 10:10, Chen, Jiqian wrote:
>> On 2024/6/17 23:10, Jan Beulich wrote:
>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>> --- a/tools/include/xen-sys/Linux/privcmd.h
>>>> +++ b/tools/include/xen-sys/Linux/privcmd.h
>>>> @@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
>>>>  	__u64 addr;
>>>>  } privcmd_mmap_resource_t;
>>>>  
>>>> +typedef struct privcmd_gsi_from_dev {
>>>> +	__u32 sbdf;
>>>
>>> That's PCI-centric, without struct and IOCTL names reflecting this fact.
>> So, change to privcmd_gsi_from_pcidev ?
> 
> That's what I'd suggest, yes. But remember that it's the kernel maintainers
> who have the ultimate say here, as here you're only making a copy of what
> the canonical header (in the kernel tree) is going to have.
OK, then let's wait for the corresponding patch on the kernel side to be accepted first.
> 
>>>> +	int gsi;
>>>
>>> Is "int" legitimate to use here? Doesn't this want to similarly be __u32?
>> I want to set gsi to negative if there is no record of this translation.
> 
> There are surely more explicit ways to signal that case?
Maybe; I will think about the implementation on the kernel side again.
> 
>>>> --- a/tools/libs/light/libxl_pci.c
>>>> +++ b/tools/libs/light/libxl_pci.c
>>>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>>>  #endif
>>>>  }
>>>>  
>>>> +#define PCI_DEVID(bus, devfn)\
>>>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>>>> +
>>>> +#define PCI_SBDF(seg, bus, devfn) \
>>>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>>
>>> I'm not a maintainer of this file; if I were, I'd ask that for readability's
>>> sake all excess parentheses be dropped from these.
>> Isn't it a coding requirement to enclose each element in parentheses in the macro definition?
>> It seems other files also do this. See tools/libs/light/libxl_internal.h
> 
> As said, I'm not a maintainer of this code. Yet while I'm aware that libxl
> has its own CODING_STYLE, I can't spot anything towards excessive use of
> parentheses there.
So, which of the parentheses do you think are excessive?
> 
>>>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>          goto out_no_irq;
>>>>      }
>>>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>>>> +#ifdef CONFIG_X86
>>>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>>>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>>>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>>> +        /*
>>>> +         * Old kernel version may not support this function,
>>>
>>> Just kernel?
>> Yes, xc_physdev_gsi_from_dev depends on the function implemented on linux kernel side.
> 
> Okay, and when the kernel supports it but the underlying hypervisor doesn't
> support what the kernel wants to use in order to fulfill the request, all
I don't know which things you mean the hypervisor doesn't support,
because xc_physdev_gsi_from_dev only gets the GSI of a PCI device from its SBDF information,
and that relationship is known only in dom0, not in the Xen hypervisor.

> is fine? (See also below for what may be needed in the hypervisor, even if
You mean xc_physdev_map_pirq needs the GSI?

> this IOCTL would be satisfied by the kernel without needing to interact with
> the hypervisor.)
> 
>>>> +         * so if fail, keep using irq; if success, use gsi
>>>> +         */
>>>> +        if (gsi > 0) {
>>>> +            irq = gsi;
>>>
>>> I'm still puzzled by this, when by now I think we've sufficiently clarified
>>> that IRQs and GSIs use two distinct numbering spaces.
>>>
>>> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
>>> the assumption that it'll fail. What if we decide to make the functionality
>>> available there, too (if only for informational purposes, or for
>>> consistency)? Suddenly you're fallback logic wouldn't work anymore, and
>>> you'd call ...
>>>
>>>> +        }
>>>> +#endif
>>>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>>
>>> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
>>> you strictly want to avoid the call on PV Dom0.
>>>
>>> Also for PVH Dom0: I don't think I've seen changes to the hypercall
>>> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
>>> incoming GSI would need translating to IRQ somewhere? I can once again only
>>> assume all your testing was done with IRQs whose numbers happened to match
>>> their GSI numbers. (The difference, imo, would also need calling out in the
>>> public header, where the respective interface struct(s) is/are defined.)
>> I feel like you missed out on many of the previous discussions.
>> Without my changes, the original codes use irq (read from file /sys/bus/pci/devices/<sbdf>/irq) to do xc_physdev_map_pirq,
>> but xc_physdev_map_pirq require passing into gsi instead of irq, so we need to use gsi whether dom0 is PV or PVH, so for the original codes, they are wrong.
>> Just because by chance, the irq value in the Linux kernel of pv dom0 is equal to the gsi value, so there was no problem with the original pv passthrough.
>> But not when using PVH, so passthrough failed.
>> With my changes, I got gsi through function xc_physdev_gsi_from_dev firstly, and to be compatible with old kernel versions(if the ioctl
>> IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented), I still need to use the irq number, so I need to check the result
>> of gsi, if gsi > 0 means IOCTL_PRIVCMD_GSI_FROM_DEV is implemented I can use the right one gsi, otherwise keep using wrong one irq.
> 
> I understand all of this, to a (I think) sufficient degree at least. Yet what
> you're effectively proposing (without explicitly saying so) is that e.g.
> struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
> number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
> to make (simply because the caller may have no way to express things in pIRQ
> terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
> would be necessary. In fact that field is presently marked as "IN or OUT";
> when re-purposed to take a GSI on input, it may end up being necessary to pass
> back the pIRQ (in the subject domain's numbering space). Or alternatively it
> may be necessary to add yet another sub-function so the GSI can be translated
> to the corresponding pIRQ for the domain that's going to use the IRQ, for that
> then to be passed into PHYSDEVOP_map_pirq.
If I understood correctly, you have two concerns about this patch:
First, when dom0 is PV, I should not use xc_physdev_gsi_from_dev to get the GSI for xc_physdev_map_pirq; I should keep the original code that uses the IRQ.
Second, when dom0 is PVH, I get the GSI, but I should not pass it as the fourth parameter of xc_physdev_map_pirq; I should create a new local variable pirq = -1 and pass that in instead.

> 
>> And regarding to the implementation of ioctl IOCTL_PRIVCMD_GSI_FROM_DEV, it doesn't have any xen heypercall handling changes, all of its processing logic is on the kernel side.
>> I know, so you might want to say, "Then you shouldn't put this in xen's code." But this concern was discussed in previous versions, and since the pci maintainer disallowed to add
>> gsi sysfs on linux kernel side, I had to do so.
> 
> Right, but this is a separate aspect (which we simply need to live with on
> the Xen side).
> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:14:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 07:14:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744200.1151204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKBzh-0006O7-JA; Thu, 20 Jun 2024 07:14:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744200.1151204; Thu, 20 Jun 2024 07:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKBzh-0006O0-GF; Thu, 20 Jun 2024 07:14:05 +0000
Received: by outflank-mailman (input) for mailman id 744200;
 Thu, 20 Jun 2024 07:14:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKBzg-0006Nq-FO; Thu, 20 Jun 2024 07:14:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKBzg-0000hH-Da; Thu, 20 Jun 2024 07:14:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKBzg-0000IT-2l; Thu, 20 Jun 2024 07:14:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKBzg-0007ag-2K; Thu, 20 Jun 2024 07:14:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=96FKly/Yk0TGGFPQSzzEgXqmWNuv0AToNVgw3u9sGOM=; b=V/VZkPvJFO/CmXf9gYc1FWo9xj
	sll6ERtzv3Pky5HT6DJDBI20JAokHJU45IZM9oFHA+uxFzdumNekwLUSdpJ6CL+/QIK8UfnE16Y2s
	XFvh9fe3A/cCTyepWtS4CebJpVWJ7BKeTekeGohu+Nf7X8YmXANO98QYcKogFqq5ishU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186417-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186417: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
X-Osstest-Versions-That:
    xen=53c5c99e8744495395c1274595d6ca55947d1d6a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 07:14:04 +0000

flight 186417 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186417/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186409
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186409
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186409
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186409
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186409
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186409
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd
baseline version:
 xen                  53c5c99e8744495395c1274595d6ca55947d1d6a

Last test of basis   186409  2024-06-19 05:58:35 Z    1 days
Testing same since   186417  2024-06-19 19:40:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   53c5c99e87..efa6e9f15b  efa6e9f15ba943d154e8d7b29384581915b2aacd -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:16:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 07:16:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744207.1151214 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKC1c-0006vQ-VL; Thu, 20 Jun 2024 07:16:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744207.1151214; Thu, 20 Jun 2024 07:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKC1c-0006vJ-SV; Thu, 20 Jun 2024 07:16:04 +0000
Received: by outflank-mailman (input) for mailman id 744207;
 Thu, 20 Jun 2024 07:16:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKC1b-0006vD-SK
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 07:16:03 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f2ff7c48-2ed4-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 09:16:01 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2ebe0a81dc8so6088351fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 00:16:01 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f98f901bebsm47366065ad.270.2024.06.20.00.15.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 00:16:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2ff7c48-2ed4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718867761; x=1719472561; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=UzY7zpKwp+2V7MnSg8866k/lTCL+uP60ivG63flXUp0=;
        b=Goeh1T1noIt8GblXUbYFSM0yeTa/GkyP0E+rCgNvevIfHP9ve8UtXHA31AMHjTAAQ9
         6pg4fdMFQY4rWPaEGoQ2KB9UiWOU6iJXHljejhVzUjY7z/MPp2WDeitU84ku9YRRVObq
         8mUiEnxF3DiNfJnsPrBuGYvjOzcNK57/bvf7sIaX8nS4o7OeNzRpP4tpFcgEtyLObX26
         M0C3mz+QWR5Q5+qvoNixPPqyscLxKEoSWtb9259IEmPTVf3rFSLa8Zyq+cNeRSYkERQU
         pZgJ8/dON1UEWdNRHR8LT5MF+bEZN+8tb3yfuxmQUpI640TOnYiScYaHscxelRhCFBrQ
         ZFGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718867761; x=1719472561;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=UzY7zpKwp+2V7MnSg8866k/lTCL+uP60ivG63flXUp0=;
        b=TzfgEwj5YRQnwXnQSqMMFB6SrxPPgNFPChXMK/UJO5XA/gzl8FSJRnzX0GSK1gwIwH
         VTYg2uT9cgHazgE5vFkyzfUnG+rP0k+ot1ZW8mjeens2wchfbJ+oYEFlJwUXYWPh7HuB
         0f/tUWGBylE2+j1rD+Hv9dvCgePZaknrErl/QoxVYjBIvU6mvvE78oAZ9YxnAncJ+q77
         gHZjWZqXKKP5qPALYHn63C/GKinHEJep17/yzZtnR0Zff3GQq+p1wzUenn6gLwqhuZkn
         jCHGoKbH2KYMEGHNgJR7au/0B5liwLmp0F0vQCCigDtxQuowl++EOnU9hI943l9UBiHG
         iXhw==
X-Gm-Message-State: AOJu0YxfTPxT34/Dt8GpgIDdZsB8WdmVfw6ILuWbujrCMx4ksL5Mvz2g
	ToCzy9TcEl4IDGptRptPCmuDiC7JrbQaHTym2y1HGonI9w8WwzSWwn2JCTbqUqhAKW4gXipxWIk
	=
X-Google-Smtp-Source: AGHT+IFDm6fLNOgm7wnljcX2/bhvMfMim2CP8gqUWv1J9qZ69rJa+phOyMeXII1C6Otbxsfh8fmB5A==
X-Received: by 2002:a2e:a26c:0:b0:2ec:3daa:f0b4 with SMTP id 38308e7fff4ca-2ec3daaf51bmr29655061fa.12.1718867761177;
        Thu, 20 Jun 2024 00:16:01 -0700 (PDT)
Message-ID: <a4d780fd-90c2-405e-be21-c323a22a78c6@suse.com>
Date: Thu, 20 Jun 2024 09:15:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Ross Lagerwall <ross.lagerwall@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.19] livepatch: use appropriate type for buffer offset
 variables
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

As made apparent by the last of the commits referenced below, using a
fixed-width type for such purposes is not only against ./CODING_STYLE,
but can lead to actual issues. Switch to using size_t instead, which
also allows the calculations to be lighter-weight in 32-bit builds.

No functional change for 64-bit builds.

Link: https://gitlab.com/xen-project/xen/-/jobs/7136417308
Fixes: b145b4a39c13 ("livepatch: Handle arbitrary size names with the list operation")
Fixes: 5083e0ff939d ("livepatch: Add metadata runtime retrieval mechanism")
Fixes: 43d5c5d5f70b ("xen: avoid UB in guest handle arithmetic")
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/common/livepatch.c
+++ b/xen/common/livepatch.c
@@ -1252,7 +1252,7 @@ static int livepatch_list(struct xen_sys
     list->metadata_total_size = 0;
     if ( list->nr )
     {
-        uint64_t name_offset = 0, metadata_offset = 0;
+        size_t name_offset = 0, metadata_offset = 0;
 
         list_for_each_entry( data, &payload_list, list )
         {
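For readers following along outside the Xen tree, the effect of the type
change can be sketched standalone: with size_t the per-payload offset
accumulation compiles to native-width arithmetic on 32-bit targets, where a
uint64_t offset would force 64-bit adds. The struct and function names below
are illustrative stand-ins, not the real livepatch_list() internals.

```c
#include <stddef.h>
#include <string.h>

/* Illustrative stand-in for a payload entry; the real struct payload in
 * xen/common/livepatch.c looks different. */
struct entry {
    const char *name;
};

/*
 * Accumulate a copy offset the way livepatch_list() does.  Using size_t
 * keeps the arithmetic at native pointer width, so 32-bit builds avoid
 * the 64-bit additions that a uint64_t offset would require.
 */
static size_t total_name_bytes(const struct entry *e, size_t n)
{
    size_t name_offset = 0;

    for ( size_t i = 0; i < n; i++ )
        name_offset += strlen(e[i].name) + 1; /* +1 for the NUL terminator */

    return name_offset;
}
```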


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:20:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 07:20:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744214.1151223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKC5m-000067-FM; Thu, 20 Jun 2024 07:20:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744214.1151223; Thu, 20 Jun 2024 07:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKC5m-00005z-CZ; Thu, 20 Jun 2024 07:20:22 +0000
Received: by outflank-mailman (input) for mailman id 744214;
 Thu, 20 Jun 2024 07:20:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKC5l-00005t-8C
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 07:20:21 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8c7b77dd-2ed5-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 09:20:19 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ebe0a81dc8so6131901fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 00:20:19 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f85c1d44basm124613255ad.76.2024.06.20.00.20.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 00:20:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c7b77dd-2ed5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718868019; x=1719472819; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gkBlXg8IqU9OAf5pLZvGyigOMLuUb64DYUG3mYOyoA0=;
        b=C9MgSsEtuzeEP0eU913B5uo/VgEVbBJwH4kZDugD4b62VZXF8cW+gdWarPCwscsKVH
         k2r+BWfKk0JwgxaxZUjrljoAXa1Sb8mVflsId4qSirRD0/sgTU3RlOK3Zmnza3mFkdjt
         /6SqqpbXPVyhzfj5+8eFvgFz+QDU22CkpMCIEwyjx8RkXMVFXXbkhxM1AfVsjU4SJYfL
         N7XKeZF94bHmiQw+JcBl9eKSgcoF1LVpRMeFZ3P6marnI6s9WqPHDC8dIndHTLG0huDy
         80U+f2ZpL6Y+XtO6h3bv8r/GNFlJHRg0NPHMZki/P5o4XU/kKOoneAvRWZZiEEN6nTFl
         WKLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718868019; x=1719472819;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gkBlXg8IqU9OAf5pLZvGyigOMLuUb64DYUG3mYOyoA0=;
        b=cUW6xxBl5GgHuMnnkcETQFQFziKSM2TrLt7Er6YZ9qCvhKo4KsB2fZQ6ArvFbeNgbr
         tucgt9Kf7o7lN25IEL5H5T7xKOM9FIUbeY2V/RfNK07BdoKmRyFkQA/kODDgraYzi6GS
         7NdIt0VnfRjeEUhZVrC8o6UvLbr1hACwHITUOaSWXISjssa4ava3MpOeaCPBip/q6YrY
         TVy5eDhpl6ArN8Xri4e0IrXXn58TBO1XmtumKoo9OOEiyazQ0+n6eUHL5YWtAZgZjI2o
         YNIhMc2zuAdtMRXUZNoLZbEQ4mLDdp9hv1F4FM1FNHYq2TNhWuwzAy/Hzjgb08Hdv6uD
         wJbg==
X-Forwarded-Encrypted: i=1; AJvYcCU7+ajY7xJBZl/IK3TSX2fxop2tmqDYpvNSjbJYOImwgYn/Tutlmf+WR/Exh+i7MSHyS5mUp20Z4W7pmoBqoz38qrBHwK5M69/+FQw5U88=
X-Gm-Message-State: AOJu0YzTtOJbBgTufBzPnFVdQFIuq/iS/iXoQA+psgQpefM2kM51vuFN
	IwN/g7Y1Px8WQ3Iviq/HzU2BnjcE0NPoTWe+PQ1DrsABCNc9EBLzVoGPYsNa1g==
X-Google-Smtp-Source: AGHT+IHGVtm/UAErBntgn/h0UvaW/bLLLOCUODlXAoWkP6pKgHWGO/ZWf1srYG0AuN7RSCwkdGT2TQ==
X-Received: by 2002:a2e:8041:0:b0:2ec:403e:6314 with SMTP id 38308e7fff4ca-2ec403e645emr23634851fa.3.1718868018648;
        Thu, 20 Jun 2024 00:20:18 -0700 (PDT)
Message-ID: <3e550b43-38af-4c4f-a0b4-59e7e2fa181c@suse.com>
Date: Thu, 20 Jun 2024 09:20:12 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] AMD/IOMMU: Improve register_iommu_exclusion_range()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240618183128.1981751-1-andrew.cooper3@citrix.com>
 <052cccac-8c8f-4555-953c-2bd9de460f2a@suse.com>
 <9186bb9f-384d-426a-b3d3-40c00236be27@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9186bb9f-384d-426a-b3d3-40c00236be27@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.06.2024 18:22, Andrew Cooper wrote:
> On 19/06/2024 8:45 am, Jan Beulich wrote:
>> On 18.06.2024 20:31, Andrew Cooper wrote:
>>>  * Use 64bit accesses instead of 32bit accesses
>>>  * Simplify the constant names
>>>  * Pull base into a local variable to avoid it being reloaded because of the
>>>    memory clobber in writeq().
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>>
>>> RFC.  This is my proposed way of cleaning up the whole IOMMU file.  The
>>> diffstat speaks for itself.
>> Absolutely.
>>
>>> I've finally found the bit in the AMD IOMMU spec which says 64bit accesses are
>>> permitted:
>>>
>>>   3.4 IOMMU MMIO Registers:
>>>
>>>   Software access to IOMMU registers may not be larger than 64 bits. Accesses
>>>   must be aligned to the size of the access and the size in bytes must be a
>>>   power of two. Software may use accesses as small as one byte.
>> I take it that the use of 32-bit writes was because of the past need
>> to also work in a 32-bit hypervisor, not because of perceived
>> restrictions by the spec.
> 
> I recall having problems getting writeq() acked in the past, even after
> we'd dropped 32bit.

That's odd, as per my subsequent reply.

> But this is the first time that I've positively found anything in the
> spec saying that 64bit accesses are ok.
> 
>>> --- a/xen/drivers/passthrough/amd/iommu-defs.h
>>> +++ b/xen/drivers/passthrough/amd/iommu-defs.h
>>> @@ -338,22 +338,10 @@ union amd_iommu_control {
>>>  };
>>>  
>>>  /* Exclusion Register */
>>> -#define IOMMU_EXCLUSION_BASE_LOW_OFFSET		0x20
>>> -#define IOMMU_EXCLUSION_BASE_HIGH_OFFSET	0x24
>>> -#define IOMMU_EXCLUSION_LIMIT_LOW_OFFSET	0x28
>>> -#define IOMMU_EXCLUSION_LIMIT_HIGH_OFFSET	0x2C
>>> -#define IOMMU_EXCLUSION_BASE_LOW_MASK		0xFFFFF000U
>>> -#define IOMMU_EXCLUSION_BASE_LOW_SHIFT		12
>>> -#define IOMMU_EXCLUSION_BASE_HIGH_MASK		0xFFFFFFFFU
>>> -#define IOMMU_EXCLUSION_BASE_HIGH_SHIFT		0
>>> -#define IOMMU_EXCLUSION_RANGE_ENABLE_MASK	0x00000001U
>>> -#define IOMMU_EXCLUSION_RANGE_ENABLE_SHIFT	0
>>> -#define IOMMU_EXCLUSION_ALLOW_ALL_MASK		0x00000002U
>>> -#define IOMMU_EXCLUSION_ALLOW_ALL_SHIFT		1
>>> -#define IOMMU_EXCLUSION_LIMIT_LOW_MASK		0xFFFFF000U
>>> -#define IOMMU_EXCLUSION_LIMIT_LOW_SHIFT		12
>>> -#define IOMMU_EXCLUSION_LIMIT_HIGH_MASK		0xFFFFFFFFU
>>> -#define IOMMU_EXCLUSION_LIMIT_HIGH_SHIFT	0
>>> +#define IOMMU_MMIO_EXCLUSION_BASE           0x20
>>> +#define   EXCLUSION_RANGE_ENABLE            (1 << 0)
>>> +#define   EXCLUSION_ALLOW_ALL               (1 << 1)
>>> +#define IOMMU_MMIO_EXCLUSION_LIMIT          0x28
>> Just one question here: Previously you suggested we switch to bitfields
>> for anything like this, and we've already done so with e.g.
>> union amd_iommu_control and union amd_iommu_ext_features. IOW I wonder
>> if we wouldn't better strive to be consistent in this regard. Or if not,
>> what the (written or unwritten) guidelines are when to use which
>> approach.
> 
> We've got two very different kinds of things here.
> 
> The device table/etc are in-memory WB datastructure which we're
> interpreting and editing routinely.  It's completely full of bits and
> small fields, and letting the compiler do the hard work there is
> preferable; certainly in terms of legibility.

And it was specifically not the DTE I used as example in my reply, ...

> This example is an MMIO register (in a bar on the IOMMU PCI device, even
> though we find the address in the IVRS).  We set it up once at boot and
> don't touch it afterwards.

... but other MMIO registers.

> So while we could make a struct for it, we'd still need to get it into a
> form that we can writeq(), and that's more code than the single case
> where we need to put two metadata bits into an address.

See those other examples, which are usable with writeq() by way of their
"raw" fields.

Jan
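The "raw field" pattern Jan refers to (as used by e.g. union
amd_iommu_control) can be sketched roughly as follows. The field layout below
is purely illustrative, not the actual exclusion-range register: the bitfield
view keeps construction legible, while the .raw member yields a single value
suitable for one 64-bit write such as writeq().

```c
#include <stdint.h>

/*
 * Sketch of the "raw field" pattern: a bitfield view for legible field
 * access plus a .raw member for a single 64-bit MMIO write.  The layout
 * here is illustrative only, not the real exclusion-range register.
 * (Bitfield ordering as implemented by GCC/Clang on little-endian.)
 */
union exclusion_base {
    uint64_t raw;
    struct {
        uint64_t enable    : 1;   /* bit 0: enable the exclusion range */
        uint64_t allow_all : 1;   /* bit 1: allow-all semantics */
        uint64_t reserved  : 10;
        uint64_t base_pfn  : 52;  /* bits 63:12: page frame of the base */
    };
};

/* Build the value once; the caller would then hand it to writeq(). */
static uint64_t make_exclusion_base(uint64_t pfn, int enable, int allow_all)
{
    union exclusion_base ctl = { .raw = 0 };

    ctl.enable = !!enable;
    ctl.allow_all = !!allow_all;
    ctl.base_pfn = pfn;

    return ctl.raw;
}
```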


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:25:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 07:25:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744222.1151235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKCAX-0000mJ-4m; Thu, 20 Jun 2024 07:25:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744222.1151235; Thu, 20 Jun 2024 07:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKCAX-0000mC-0j; Thu, 20 Jun 2024 07:25:17 +0000
Received: by outflank-mailman (input) for mailman id 744222;
 Thu, 20 Jun 2024 07:25:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKCAV-0000m6-GO
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 07:25:15 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3c487ca3-2ed6-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 09:25:14 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ec002caf3eso7004131fa.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 00:25:14 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705ccb6b4fasm11749796b3a.150.2024.06.20.00.25.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 00:25:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c487ca3-2ed6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718868313; x=1719473113; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=xJ/co182WcYNS1OnFxdpqFtoByqFD684VmQ5KeMq7uQ=;
        b=MBDX7YklnAci5FAeogeLQ5OvA/3TFuLYN6fwOCixmVAHQMFfuJ9vFdZ95f+SoZ4Puz
         4AjbLGj6n9QHN9YKnO2XQwq5XFDRJ8D38HBM/yVY6L/DtHGDg2EW9ZlrWWzMYUMfoCz3
         FOCYGqx1azJPlzq85n1SI3+Gjxh06C+PHSkTVb5q5c2EM+jYbc3foswSL8fHixHfyp+6
         z0v8tp+dj3nx5+dcATKLxsceST5sNmAsIQ7rllfyFsyER6OPLN7uAIrAfAE7egOYJfgG
         lfd6qChGniLO0oTaIg9PbRdMeLe4c+wZnC3qNQVUqtroQIWp9KI/bTenqxyIiBHOaV8f
         mrKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718868313; x=1719473113;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xJ/co182WcYNS1OnFxdpqFtoByqFD684VmQ5KeMq7uQ=;
        b=iFfeU0Z5HjSRrN0+KK/RGp18ZijVn7VxvLsTIIda40/ZXFGHYUEbsFZmFSJi79MaVu
         pHlbpcaDc33Ko+JgHXSL7Orpl/0+wOYBCQiXjJHmOwVstnXG/z2O++WWRuLZ9n/r1D4c
         GzWQ4XKozHVMt7GByNb3TYPPW7xsbC7LNtHsiLf7syJQ4NMZosbQfv7VA/1xKOXlauXG
         6mPbUsBeY5GJKcPUW9TexTZuMBkVTZwu8UWaVTQAbqu7WkwaC3g9NGMiXU7ibY3QOQl+
         i9nhbuhOSrbjIoP+JAGYb/WHDWe+q6/6WpSaLguDRNvStxIIql3V4nXj16hd8hB+Yopb
         1T2Q==
X-Forwarded-Encrypted: i=1; AJvYcCUoEoHx4R7Kr2r22jxDKUgDl2tXlzTLXvWam/zhv/GVZzMwyfQW9ZCIhBbRdaYcwetpOni6i4tKx/S06kaTTnRWLrGJLmBt7dDBtgqj8Hw=
X-Gm-Message-State: AOJu0Ywg7GKuELSF9+JecZkvQYXMOUi0fgzPtFmeLcS/vrpJHGav81BR
	z6NdNTbWysysYNktD4nS73FvSWVvBm4Aq63bahzWDummWuG2UtjcJ/B8bLNVmQ==
X-Google-Smtp-Source: AGHT+IFJx7TyLrlGOX0d4JxNqPQTuMz0lvzdHLm0D3lIPxOkj4v4IJZOTPs3KAFSSVH3lG3KBuS+Yw==
X-Received: by 2002:a2e:95cf:0:b0:2ec:5cf:565e with SMTP id 38308e7fff4ca-2ec3cec04c8mr36576311fa.12.1718868313537;
        Thu, 20 Jun 2024 00:25:13 -0700 (PDT)
Message-ID: <4cf14abe-881e-4328-9083-bd04afd6b307@suse.com>
Date: Thu, 20 Jun 2024 09:25:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v6 6/9] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1718038855.git.w1benny@gmail.com>
 <fee20e24a94cb29dea81631a6b775933d1151da4.1718038855.git.w1benny@gmail.com>
 <4a49fe9b-66fd-4a32-ad01-14ed4c5fc34c@suse.com>
 <CAKBKdXgUKYoJfB1mG+6JSaV=jWpmRmS1UbQ6N4JNZ774rP_PoQ@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CAKBKdXgUKYoJfB1mG+6JSaV=jWpmRmS1UbQ6N4JNZ774rP_PoQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 19.06.2024 17:46, Petr Beneš wrote:
> On Thu, Jun 13, 2024 at 2:03 PM Jan Beulich <jbeulich@suse.com> wrote:
>>> @@ -510,13 +526,13 @@ int p2m_change_altp2m_gfn(struct domain *d, unsigned int idx,
>>>      mfn_t mfn;
>>>      int rc = -EINVAL;
>>>
>>> -    if ( idx >=  min(ARRAY_SIZE(d->arch.altp2m_p2m), MAX_EPTP) ||
>>> +    if ( idx >= d->nr_altp2m ||
>>>           d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] ==
>>
>> This ends up being suspicious: The range check is against a value different
>> from what is passed to array_index_nospec(). The two weren't the same
>> before either, but there the range check was more strict (which now isn't
>> visible anymore, even though I think it would still be true). Imo this
>> wants a comment, or an assertion effectively taking the place of a comment.
> 
>> Since they're all "is this slot populated" checks, maybe we want
>> an is_altp2m_eptp_valid() helper?
> 
> Let me see if I understand correctly. You're suggesting the condition
> should be replaced with something like this? (Also, I would suggest
> altp2m_is_eptp_valid() name, since it's consistent e.g. with
> p2m_is_altp2m().)
> 
> static inline bool altp2m_is_eptp_valid(const struct domain *d,
>                                         unsigned int idx)
> {
>     /*
>      * EPTP index is correlated with altp2m index and should not exceed
>      * d->nr_altp2m.
>      */
>     assert(idx < d->nr_altp2m);
> 
>     return idx < MAX_EPTP &&
>         d->arch.altp2m_eptp[array_index_nospec(idx, MAX_EPTP)] !=
>         mfn_x(INVALID_MFN);
> }

Not exactly. You may not assert on idx. The assertion, if any, wants to
check d->nr_altp2m against MAX_EPTP.

> Note that in the codebase there are also very similar checks, but
> again without array_index_nospec. For instance, in the
> p2m_altp2m_propagate_change() function (which is called fairly
> frequently):
> 
> int p2m_altp2m_propagate_change(struct domain *d, gfn_t gfn,
>                                 mfn_t mfn, unsigned int page_order,
>                                 p2m_type_t p2mt, p2m_access_t p2ma)
> {
>     struct p2m_domain *p2m;
>     unsigned int i;
>     unsigned int reset_count = 0;
>     unsigned int last_reset_idx = ~0;
>     int ret = 0;
> 
>     if ( !altp2m_active(d) )
>         return 0;
> 
>     altp2m_list_lock(d);
> 
>     for ( i = 0; i < d->nr_altp2m; i++ )
>     {
>         p2m_type_t t;
>         p2m_access_t a;
> 
>         // XXX this could be replaced with altp2m_is_eptp_valid(), but
>         // based on previous review remarks it would introduce an
>         // unnecessary perf hit. So, should these occurrences be left
>         // unchanged?
>         if ( d->arch.altp2m_eptp[i] == mfn_x(INVALID_MFN) )
>             continue;
> 
>        ...
> 
> There are more instances of this. Which re-opens again the issue from
> previous conversation: should I introduce a function which will be
> used in some cases (where _nospec is used) and not used elsewhere?

You're again comparing cases where we control the index (in the loop) with
cases where we don't (hypercall inputs).

Jan
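A rough standalone sketch of the helper with the assertion placed as
suggested above, i.e. checking the configured limit against MAX_EPTP rather
than asserting on a possibly guest-supplied idx. Names and bounds are
illustrative, and plain masking stands in for array_index_nospec() to keep
the sketch self-contained.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_EPTP    512                 /* illustrative bound */
#define INVALID_MFN (~0UL)

/* Minimal stand-in for the relevant domain state. */
struct domain_sketch {
    unsigned int nr_altp2m;
    unsigned long altp2m_eptp[MAX_EPTP];
};

/*
 * Assert on the configured limit, not on idx: idx may come straight from
 * a hypercall, so it is range-checked and (in the real code) laundered
 * with array_index_nospec(); masking stands in for that here.
 */
static bool altp2m_is_eptp_valid(const struct domain_sketch *d,
                                 unsigned int idx)
{
    assert(d->nr_altp2m <= MAX_EPTP);

    return idx < d->nr_altp2m &&
           d->altp2m_eptp[idx & (MAX_EPTP - 1)] != INVALID_MFN;
}
```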


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:26:42 2024
Message-ID: <bed9d557-66b7-4711-80f7-a85c28e57f6c@amd.com>
Date: Thu, 20 Jun 2024 09:26:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: fix "gbase/pbase used
 uninitialized" build failure
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20240619064652.18266-1-michal.orzel@amd.com>
 <82790448-dd2f-4299-ae3d-938080ee5e19@xen.org>
 <99fb367a-7ceb-4769-8120-a06474e98fb3@amd.com>
 <7bffdbeb-0219-4ec7-a70f-a9fa55cd6b5e@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <7bffdbeb-0219-4ec7-a70f-a9fa55cd6b5e@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 19/06/2024 14:34, Julien Grall wrote:
> 
> 
> On 19/06/2024 13:06, Michal Orzel wrote:
>> Hi Julien,
>>
>> On 19/06/2024 13:55, Julien Grall wrote:
>>>
>>>
>>> Hi Michal,
>>>
>>> On 19/06/2024 07:46, Michal Orzel wrote:
>>>> Building Xen with CONFIG_STATIC_SHM=y results in a build failure:
>>>>
>>>> arch/arm/static-shmem.c: In function 'process_shm':
>>>> arch/arm/static-shmem.c:327:41: error: 'gbase' may be used uninitialized [-Werror=maybe-uninitialized]
>>>>     327 |         if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>>>> arch/arm/static-shmem.c:305:17: note: 'gbase' was declared here
>>>>     305 |         paddr_t gbase, pbase, psize;
>>>>
>>>> This is because the commit cb1ddafdc573 adds a check referencing
>>>> gbase/pbase variables which were not yet assigned a value. Fix it.
>>>>
>>>> Fixes: cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem should be direct-mapped for direct-mapped domains")
>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>> ---
>>>> Rationale for 4.19: this patch fixes a build failure reported by CI:
>>>> https://gitlab.com/xen-project/xen/-/jobs/7131807878
>>>> ---
>>>>    xen/arch/arm/static-shmem.c | 13 +++++++------
>>>>    1 file changed, 7 insertions(+), 6 deletions(-)
>>>>
>>>> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
>>>> index c434b96e6204..cd48d2896b7e 100644
>>>> --- a/xen/arch/arm/static-shmem.c
>>>> +++ b/xen/arch/arm/static-shmem.c
>>>> @@ -324,12 +324,6 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>>>                printk("%pd: static shared memory bank not found: '%s'", d, shm_id);
>>>>                return -ENOENT;
>>>>            }
>>>> -        if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>>>> -        {
>>>> -            printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
>>>> -                   d, pbase, gbase);
>>>> -            return -EINVAL;
>>>> -        }
>>>>
>>>>            pbase = boot_shm_bank->start;
>>>>            psize = boot_shm_bank->size;
>>>> @@ -353,6 +347,13 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
>>>>                /* guest phys address is after host phys address */
>>>>                gbase = dt_read_paddr(cells + addr_cells, addr_cells);
>>>>
>>>> +            if ( is_domain_direct_mapped(d) && (pbase != gbase) )
>>>> +            {
>>>> +                printk("%pd: physical address 0x%"PRIpaddr" and guest address 0x%"PRIpaddr" are not direct-mapped.\n",
>>>> +                       d, pbase, gbase);
>>>> +                return -EINVAL;
>>>> +            }
>>>> +
>>>
>>> Before this patch, the check was global. I guess the intention was for
>>> it to cover both parts of the "if". But now, you only have it when
>>> "paddr" is specified in the DT.
>>>
>>>   From a brief look at the code, I can't figure out why we don't need a
>>> similar check on the else path. Is this because it is guaranteed that
>>> paddr == gaddr?
>> The reason why I added this check only in the first case is what the doc states.
>> It says that if a domain is 1:1, the shmem should also be 1:1, i.e. pbase == gbase. In the else
>> case, pbase is omitted, so a user cannot know, and has no guarantee of, what the backing physical address will be.
> 
> The property "direct-map" has the following definition:
> 
> "- direct-map
> 
>      Only available when statically allocated memory is used for the domain.
>      An empty property to request the memory of the domain to be
>      direct-map (guest physical address == physical address).
> "
> 
> So I think it would be fair for someone to interpret it as shared memory
> would also be 1:1 mapped.
> 
>> Thus, reading this doc makes me feel that for 1:1 guests a user needs to specify pbase == gbase.
> 
> See above, I think this is not 100% clear. I am concerned that someone
> may try to use the version where only the guest address is specified.
> 
> It would likely be hard to realize that the extended regions would not
> work properly. So something needs to be done.
> 
> I don't have any preference on how to address it. It could simply be a
> check in Xen to require that both "gaddr" and "paddr" are specified for
> direct-mapped domains.
Fair enough. I can add a check for 1:1 in the else case to return an error with a message that
host and guest physical addresses must be supplied for direct-mapped domains. Would we consider it for 4.19?
In my opinion, yes, as it would remove the possibility of feature misuse.

~Michal
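The validation Michal proposes could look roughly like the following. This is a standalone sketch with simplified stand-in types (paddr_t reduced to uint64_t, the domain's direct-map status passed as a plain bool, the "paddr present in DT" state as another bool); the function name check_shm_direct_map is hypothetical, not the actual Xen patch:

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

#define EINVAL 22

/*
 * Sketch of the proposed rule: for a direct-mapped (1:1) domain, the
 * host physical address ("paddr") must be supplied in the DT node,
 * and it must equal the guest physical address ("gaddr").
 */
static int check_shm_direct_map(bool direct_mapped, bool pbase_supplied,
                                paddr_t pbase, paddr_t gbase)
{
    if ( !direct_mapped )
        return 0;

    /* Else branch of process_shm(): no "paddr" in the DT node. */
    if ( !pbase_supplied )
        return -EINVAL;

    /* Existing check: addresses must match for direct-mapped domains. */
    if ( pbase != gbase )
        return -EINVAL;

    return 0;
}
```

Rejecting the paddr-less form outright, as sketched here, closes the ambiguity Julien raises: a user can no longer assume Xen will pick a 1:1 backing address on their behalf.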


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:30:28 2024
Message-ID: <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com>
Date: Thu, 20 Jun 2024 09:30:21 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com, Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
 <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 19/06/24 13:17, Julien Grall wrote:
> Hi Federico,
> 
> On 19/06/2024 10:29, Federico Serafini wrote:
>> MISRA C Rule 16.4 states that every `switch' statement shall have a
>> `default' label and a statement or a comment prior to the
>> terminating break statement.
>>
>> This patch addresses some violations of the rule related to the
>> "notifier pattern": a frequently-used pattern whereby only a few values
>> are handled by the switch statement and nothing should be done for
>> others (nothing to do in the default case).
>>
>> Note that for function mwait_idle_cpu_init() in
>> xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
>> not added: unlike the other functions covered in this patch, its
>> default label has a return statement, which does not violate Rule
>> 16.4.
>>
>> No functional change.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>> ---
>> Changes in v2:
>> as Jan pointed out, in v1 some patterns were not explicitly identified
>> (https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).
>>
>> This version adds the /* Notifier pattern. */ comment to all the
>> patterns present in the Xen codebase except for mwait_idle_cpu_init().
>> ---
>>   xen/arch/arm/cpuerrata.c                     | 1 +
>>   xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
>>   xen/arch/arm/gic.c                           | 1 +
>>   xen/arch/arm/irq.c                           | 4 ++++
>>   xen/arch/arm/mmu/p2m.c                       | 1 +
>>   xen/arch/arm/percpu.c                        | 1 +
>>   xen/arch/arm/smpboot.c                       | 1 +
>>   xen/arch/arm/time.c                          | 1 +
>>   xen/arch/arm/vgic-v3-its.c                   | 2 ++
>>   xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
>>   xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
>>   xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
>>   xen/arch/x86/genapic/x2apic.c                | 3 +++
>>   xen/arch/x86/hvm/hvm.c                       | 1 +
>>   xen/arch/x86/nmi.c                           | 1 +
>>   xen/arch/x86/percpu.c                        | 3 +++
>>   xen/arch/x86/psr.c                           | 3 +++
>>   xen/arch/x86/smpboot.c                       | 3 +++
>>   xen/common/kexec.c                           | 1 +
>>   xen/common/rcupdate.c                        | 1 +
>>   xen/common/sched/core.c                      | 1 +
>>   xen/common/sched/cpupool.c                   | 1 +
>>   xen/common/spinlock.c                        | 1 +
>>   xen/common/tasklet.c                         | 1 +
>>   xen/common/timer.c                           | 1 +
>>   xen/drivers/cpufreq/cpufreq.c                | 1 +
>>   xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
>>   xen/drivers/passthrough/x86/hvm.c            | 3 +++
>>   xen/drivers/passthrough/x86/iommu.c          | 3 +++
>>   29 files changed, 59 insertions(+)
>>
>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>> index 2b7101ea25..69c30aecd8 100644
>> --- a/xen/arch/arm/cpuerrata.c
>> +++ b/xen/arch/arm/cpuerrata.c
>> @@ -730,6 +730,7 @@ static int cpu_errata_callback(struct 
>> notifier_block *nfb,
>>           rc = enable_nonboot_cpu_caps(arm_errata);
>>           break;
>>       default:
>> +        /* Notifier pattern. */
> Without looking at the commit message (which may not be trivial when 
> committed), it is not clear to me what this is supposed to mean. Will 
> there be a longer explanation in the MISRA doc? Should this be a SAF-* 
> comment?
> 
>>           break;
>>       }
>> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
>> index eb0a5535e4..4c2bd35403 100644
>> --- a/xen/arch/arm/gic-v3-lpi.c
>> +++ b/xen/arch/arm/gic-v3-lpi.c
>> @@ -389,6 +389,10 @@ static int cpu_callback(struct notifier_block 
>> *nfb, unsigned long action,
>>               printk(XENLOG_ERR "Unable to allocate the pendtable for 
>> CPU%lu\n",
>>                      cpu);
>>           break;
>> +
>> +    default:
>> +        /* Notifier pattern. */
>> +        break;
> 
> Skimming through v1, it was pointed out that gic-v3-lpi may miss some 
> cases.
> 
> Let me start by saying that I understand this patch is technically not
> changing anything. However, it gives us an opportunity to check the
> notifier pattern.
> 
> Has anyone done any proper investigation? If so, what was the outcome? 
> If not, have we identified someone to do it?
> 
> The same question will apply for place where you add "default".

Yes, I also think this could be an opportunity to check the pattern,
but no one has yet been identified to do this.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)
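The pattern being annotated reduces to the following standalone sketch. The action names and values here are simplified stand-ins for Xen's CPU notifier constants, and cpu_callback is a generic placeholder:

```c
/* Simplified stand-ins for Xen's CPU notifier actions. */
enum cpu_action { CPU_UP_PREPARE = 1, CPU_ONLINE = 2, CPU_DEAD = 3 };

/*
 * "Notifier pattern": only a few actions are of interest; all others
 * deliberately do nothing. The comment before the terminating break
 * documents that intent, satisfying MISRA C Rule 16.4's requirement
 * for a statement or comment in the default case.
 */
static int cpu_callback(unsigned long action)
{
    int rc = 0;

    switch ( action )
    {
    case CPU_UP_PREPARE:
        rc = 1; /* e.g. allocate per-CPU resources */
        break;

    default:
        /* Notifier pattern. */
        break;
    }

    return rc;
}
```

Julien's concern is precisely that the empty default may hide genuinely unhandled actions, which is why he asks whether each occurrence has been audited rather than merely annotated.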


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 07:43:50 2024
Message-ID: <099beaac-ed1f-459b-8c2b-42b325f8e4a4@suse.com>
Date: Thu, 20 Jun 2024 09:43:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
 <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 09:03, Chen, Jiqian wrote:
> On 2024/6/18 17:13, Jan Beulich wrote:
>> On 18.06.2024 10:10, Chen, Jiqian wrote:
>>> On 2024/6/17 23:10, Jan Beulich wrote:
>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>> --- a/tools/libs/light/libxl_pci.c
>>>>> +++ b/tools/libs/light/libxl_pci.c
>>>>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>>>>  #endif
>>>>>  }
>>>>>  
>>>>> +#define PCI_DEVID(bus, devfn)\
>>>>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>>>>> +
>>>>> +#define PCI_SBDF(seg, bus, devfn) \
>>>>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>>>
>>>> I'm not a maintainer of this file; if I were, I'd ask that for readability's
>>>> sake all excess parentheses be dropped from these.
>>> Isn't it a coding requirement to enclose each macro parameter in parentheses in the macro definition?
>>> It seems other files also do this. See tools/libs/light/libxl_internal.h
>>
>> As said, I'm not a maintainer of this code. Yet while I'm aware that libxl
>> has its own CODING_STYLE, I can't spot anything there about excessive use
>> of parentheses.
> So, which parentheses do you think are excessive?

#define PCI_DEVID(bus, devfn)\
            (((uint16_t)(bus) << 8) | ((devfn) & 0xff))

#define PCI_SBDF(seg, bus, devfn) \
            (((uint32_t)(seg) << 16) | PCI_DEVID(bus, devfn))

>>>>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>          goto out_no_irq;
>>>>>      }
>>>>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>>>>> +#ifdef CONFIG_X86
>>>>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>>>>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>>>>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>>>> +        /*
>>>>> +         * Old kernel version may not support this function,
>>>>
>>>> Just kernel?
>>> Yes, xc_physdev_gsi_from_dev depends on the function implemented on the Linux kernel side.
>>
>> Okay, and when the kernel supports it but the underlying hypervisor doesn't
>> support what the kernel wants to use in order to fulfill the request, all
> I don't know which things you mean that the hypervisor doesn't support,
> because xc_physdev_gsi_from_dev gets the GSI of a PCI device from its SBDF
> information, and that relationship can be obtained only in dom0, not in the
> Xen hypervisor.
> 
>> is fine? (See also below for what may be needed in the hypervisor, even if
> You mean xc_physdev_map_pirq needs gsi?

I'd put it slightly differently: You arrange for that function to now take a
GSI when the caller is PVH. But yes, the function, when used with
MAP_PIRQ_TYPE_GSI, clearly expects a GSI as input (see also below).

>> this IOCTL would be satisfied by the kernel without needing to interact with
>> the hypervisor.)
>>
>>>>> +         * so if fail, keep using irq; if success, use gsi
>>>>> +         */
>>>>> +        if (gsi > 0) {
>>>>> +            irq = gsi;
>>>>
>>>> I'm still puzzled by this, when by now I think we've sufficiently clarified
>>>> that IRQs and GSIs use two distinct numbering spaces.
>>>>
>>>> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
>>>> the assumption that it'll fail. What if we decide to make the functionality
>>>> available there, too (if only for informational purposes, or for
>>>> consistency)? Suddenly your fallback logic wouldn't work anymore, and
>>>> you'd call ...
>>>>
>>>>> +        }
>>>>> +#endif
>>>>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>>>
>>>> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
>>>> you strictly want to avoid the call on PV Dom0.
>>>>
>>>> Also for PVH Dom0: I don't think I've seen changes to the hypercall
>>>> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
>>>> incoming GSI would need translating to IRQ somewhere? I can once again only
>>>> assume all your testing was done with IRQs whose numbers happened to match
>>>> their GSI numbers. (The difference, imo, would also need calling out in the
>>>> public header, where the respective interface struct(s) is/are defined.)
>>> I feel like you missed out on many of the previous discussions.
>>> Without my changes, the original code uses the IRQ (read from /sys/bus/pci/devices/<sbdf>/irq) for xc_physdev_map_pirq,
>>> but xc_physdev_map_pirq requires a GSI rather than an IRQ, so we need to use the GSI whether dom0 is PV or PVH; in that respect the original code is wrong.
>>> It only worked by chance: in a PV dom0 the Linux kernel's IRQ value happens to equal the GSI value, so PV passthrough saw no problem.
>>> That doesn't hold for PVH, so passthrough failed there.
>>> With my changes, I first obtain the GSI via xc_physdev_gsi_from_dev. To stay compatible with old kernel versions (where the ioctl
>>> IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented), I still need the IRQ number as a fallback, so I check the result:
>>> if gsi > 0, IOCTL_PRIVCMD_GSI_FROM_DEV is implemented and I can use the correct GSI; otherwise I keep using the (wrong) IRQ.
>>
>> I understand all of this, to a (I think) sufficient degree at least. Yet what
>> you're effectively proposing (without explicitly saying so) is that e.g.
>> struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
>> number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
>> to make (simply because the caller may have no way to express things in pIRQ
>> terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
>> would be necessary. In fact that field is presently marked as "IN or OUT";
>> when re-purposed to take a GSI on input, it may end up being necessary to pass
>> back the pIRQ (in the subject domain's numbering space). Or alternatively it
>> may be necessary to add yet another sub-function so the GSI can be translated
>> to the corresponding pIRQ for the domain that's going to use the IRQ, for that
>> then to be passed into PHYSDEVOP_map_pirq.
> If I understood correctly, your concerns about this patch are two:
> First, when dom0 is PV, I should not use xc_physdev_gsi_from_dev to get a GSI for xc_physdev_map_pirq; I should keep the original code that uses the IRQ.

Yes.

> Second, when dom0 is PVH, I get the GSI, but I should not pass it as the fourth parameter of xc_physdev_map_pirq; I should create a new local variable pirq = -1 and pass that in instead.

I think so, yes. You also may need to record the output value, so you can later
use it for unmap. xc_physdev_map_pirq() may also need adjusting, as right now
it wouldn't put a negative incoming *pirq into the .pirq field. I actually
wonder if that's even correct right now, i.e. independent of your change.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 08:05:31 2024
References: <a4d780fd-90c2-405e-be21-c323a22a78c6@suse.com>
In-Reply-To: <a4d780fd-90c2-405e-be21-c323a22a78c6@suse.com>
From: Ross Lagerwall <ross.lagerwall@citrix.com>
Date: Thu, 20 Jun 2024 09:04:57 +0100
Message-ID: <CAG7k0ErGHynwYmxWuftUT=yFF0Zrttx0JEAjh3bDzPVzM_MgzA@mail.gmail.com>
Subject: Re: [PATCH for-4.19] livepatch: use appropriate type for buffer
 offset variables
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Oleksii Kurochko <oleksii.kurochko@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>

On Thu, Jun 20, 2024 at 8:16 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> As was made noticeable by the last of the commits referenced below,
> using a fixed-size type for such purposes is not only against
> ./CODING_STYLE, but can lead to actual issues. Switch to using size_t
> instead, thus also allowing calculations to be lighter-weight in 32-bit
> builds.
>
> No functional change for 64-bit builds.
>
> Link: https://gitlab.com/xen-project/xen/-/jobs/7136417308
> Fixes: b145b4a39c13 ("livepatch: Handle arbitrary size names with the list operation")
> Fixes: 5083e0ff939d ("livepatch: Add metadata runtime retrieval mechanism")
> Fixes: 43d5c5d5f70b ("xen: avoid UB in guest handle arithmetic")
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>

Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>

Thanks


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 08:09:46 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186429-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 186429: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 08:09:42 +0000

flight 186429 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186429/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186411

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    0 days
Testing same since   186412  2024-06-19 15:03:58 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 19 14:11:07 2024 +0200

    xen: avoid UB in guest handle arithmetic
    
    At least XENMEM_memory_exchange can have huge values passed in the
    nr_extents and nr_exchanged fields. Adding such values to pointers can
    overflow, resulting in UB. Cast respective pointers to "unsigned long"
    while at the same time making the necessary multiplication explicit.
    The remaining arithmetic is, despite there possibly being mathematical
    overflow, okay as per the C99 spec: "A computation involving unsigned
    operands can never overflow, because a result that cannot be represented
    by the resulting unsigned integer type is reduced modulo the number that
    is one greater than the largest value that can be represented by the
    resulting type." The overflow that we need to guard against is checked
    for in array_access_ok().
    
    Note that in / down from array_access_ok() the address value is only
    ever cast to "unsigned long" anyway, which is why in the invocation from
    guest_handle_subrange_okay() the value doesn't need casting back to
    pointer type.
    
    In compat grant table code change two guest_handle_add_offset() to avoid
    passing in negative offsets.
    
    Since {,__}clear_guest_offset() need touching anyway, also deal with
    another (latent) issue there: They were losing the handle type, i.e. the
    size of the individual objects accessed. Luckily the few users we
    presently have all pass char or uint8 handles.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 267122a24c499d26278ab2dbdfb46ebcaaf38474
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 16:14:36 2021 +0100

    x86/defns: Clean up X86_{XCR0,XSS}_* constants
    
    With the exception of one case in read_bndcfgu() which can use ilog2(),
    the *_POS defines are unused.  Drop them.
    
    X86_XCR0_X87 is the name used by both the SDM and APM, rather than
    X86_XCR0_FP.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 71cacfb035f4a78ee10970dc38a3baa04d387451
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpuid: Fix handling of XSAVE dynamic leaves
    
    First, if XSAVE is available in hardware but not visible to the guest, the
    dynamic leaves shouldn't be filled in.
    
    Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
    host/guest state automatically, but there is provision for "host only" bits to
    be set, so the implications are still accurate.
    
    Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
    check it at boot.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit fdb7e77fea4cb1c98dc51dd891a47f7e94612ad4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpu-policy: Simplify recalculate_xstate()
    
    Make use of xstate_uncompressed_size() helper rather than maintaining the
    running calculation while accumulating feature components.
    
    The rest of the CPUID data can come direct from the raw cpu policy.  All
    per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
    instructions.
    
    Use for_each_set_bit() rather than opencoding a slightly awkward version of
    it.  Mask the attributes in ecx down based on the visible features.  This
    isn't actually necessary for any components or attributes defined at the time
    of writing (up to AMX), but is added out of an abundance of caution.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit df09dfb94de66f7523837c050616a382aa2c7d17
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
    
    We're soon going to need a compressed helper of the same form.
    
    The size of the uncompressed image depends on the single element with the
    largest offset + size.  Sadly this isn't always the element with the largest
    index.
    
    Name the per-xstate-component cpu_policy structure, for legibility of the logic
    in xstate_uncompressed_size().  Cross-check with hardware during boot, and
    remove hw_uncompressed_size().
    
    This means that the migration paths don't need to mess with XCR0 just to
    sanity check the buffer size.  It also means we can drop the "fastpath" check
    against xfeature_mask (there to skip some XCR0 writes); this path is going to
    be dead logic the moment Xen starts using supervisor states itself.
    
    The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
    be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
    prior to setup_xstate_features() on the BSP.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit a09022a09e1a79b3f9574993993bfad803b32596
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 23 00:55:34 2024 +0100

    x86/boot: Collect the Raw CPU Policy earlier on boot
    
    This is a tangle, but it's a small step in the right direction.
    
    In the following change, xstate_init() is going to start using the Raw policy.
    
    calculate_raw_cpu_policy() is sufficiently separate from the other policies to
    safely move like this.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit d31a111940de5431c8bf465b1d38b89f1130a24b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 21 17:56:57 2020 +0000

    x86/xstate: Cross-check dynamic XSTATE sizes at boot
    
    Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
    every call.  This is expensive, being used for domain create/migrate, as well
    as to service certain guest CPUID instructions.
    
    Instead, arrange to check the sizes once at boot.  See the code comments for
    details.  Right now, it just checks hardware against the algorithm
    expectations.  Later patches will cross-check Xen's XSTATE calculations too.
    
    Introduce more X86_XCR0_* and X86_XSS_* constants for CPUID bits.  This is to
    maximise coverage in the sanity check, even if we don't expect to
    use/virtualise some of these features any time soon.  Leave HDC and HWP alone
    for now; we don't have CPUID bits from them stored nicely.
    
    Only perform the cross-checks when SELF_TESTS are active.  It's only
    developers or new hardware liable to trip these checks, and Xen at least
    tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
    don't want to be tickling in the general case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 9e6dbbe8bf400aacb99009ddffa91d2a0c312b39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 17:23:54 2024 +0100

    x86/xstate: Fix initialisation of XSS cache
    
    The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
    values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
    hardware register.
    
    While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
    get_msr_xss() to return the invalid value, and logic of the form:
    
        old = get_msr_xss();
        set_msr_xss(new);
        ...
        set_msr_xss(old);
    
    to try and restore said invalid value.
    
    The architecturally invalid value must be purged from the cache, meaning the
    hardware register must be written at least once.  This in turn highlights that
    the invalid value must only be used in the case that the hardware register is
    available.
    
    Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit aba98c8d671bd290e978ec154d0baf042e093a65
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 14 13:05:40 2024 +0100

    xen/arch: Centralise __read_mostly and __ro_after_init
    
    Their living in cache.h is inherited from Linux, but cache.h is not a terribly
    appropriate location for them.
    
    __read_mostly is an optimisation related to data placement in order to avoid
    having shared data in cachelines that are likely to be written to, but it
    really is just a section of the linked image separating data by usage
    patterns; it has nothing to do with cache sizes or flushing logic.
    
    Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
    arch/cache.h, and has literally nothing whatsoever to do with caches.
    
    Move the definitions into xen/sections.h, which in particular means that
    RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
    to provide short descriptions of what these are used for.
    
    For now, leave TODO comments next to the other identical definitions.  It
    turns out that unpicking cache.h is more complicated than it appears because a
    number of files use it for transitive dependencies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 82f480944718d9e8340a6ac1af41ece7851115bf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jun 18 13:48:35 2024 +0100

    xen/irq: Address MISRA Rule 8.3 violation
    
    When centralising irq_ack_none(), different architectures had different names
    for the parameter.  As its type is struct irq_desc *, it should be named
    desc.  Make this consistent.
    
    No functional change.
    
    Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 08:39:20 2024
Message-ID: <8510895d-70fb-4fce-adfa-ac5638b4ae3c@suse.com>
Date: Thu, 20 Jun 2024 10:39:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v2] x86/spec-ctrl: Support for SRSO_US_NO and
 SRSO_MSR_FIX
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240619191057.2588693-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240619191057.2588693-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 21:10, Andrew Cooper wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -2390,7 +2390,7 @@ By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
>  >              {ibrs,ibpb,ssbd,psfd,
>  >              eager-fpu,l1d-flush,branch-harden,srb-lock,
>  >              unpriv-mmio,gds-mit,div-scrub,lock-harden,
> ->              bhi-dis-s}=<bool> ]`
> +>              bhi-dis-s,bp-spec-reduce}=<bool> ]`
>  
>  Controls for speculative execution sidechannel mitigations.  By default, Xen
>  will pick the most appropriate mitigations based on compiled in support,
> @@ -2539,6 +2539,13 @@ boolean can be used to force or prevent Xen from using speculation barriers to
>  protect lock critical regions.  This mitigation won't be engaged by default,
>  and needs to be explicitly enabled on the command line.
>  
> +On hardware supporting SRSO_MSR_FIX, the `bp-spec-reduce=` option can be used
> +to force or prevent Xen from using MSR_BP_CFG.BP_SPEC_REDUCE to mitigate the
> +SRSO (Speculative Return Stack Overflow) vulnerability.

Upon my request to add "... against HVM guests" here you replied "Ok.", yet
the change wasn't made? A changelog entry even says you added this, so
perhaps it was simply lost in a refresh?

It also didn't really become clear to me whether your reply to my request
for adding "AMD" early in the sentence boiled down to anything more than a
"yes, perhaps".

> @@ -605,6 +606,24 @@ static void __init calculate_pv_max_policy(void)
>          __clear_bit(X86_FEATURE_IBRS, fs);
>      }
>  
> +    /*
> +     * SRSO_U/S_NO means that the CPU is not vulnerable to SRSO attacks across
> +     * the User (CPL3)/Supervisor (CPL<3) boundary.  However the PV64
> +     * user/kernel boundary is CPL3 on both sides, so it won't convey the
> +     * meaning that a PV kernel expects.
> +     *
> +     * PV32 guests are explicitly unsupported WRT speculative safety, so are
> +     * ignored to avoid complicating the logic.
> +     *
> +     * After discussions with AMD, it is believed to be safe to offer
> +     * SRSO_US_NO to PV guests when BP_SPEC_REDUCE is active.

IOW that specific behavior is not tied to #VMEXIT / VMRUN, and also isn't
going to be in future hardware?

> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -1009,16 +1009,33 @@ static void cf_check fam17_disable_c6(void *arg)
>  	wrmsrl(MSR_AMD_CSTATE_CFG, val & mask);
>  }
>  
> -static void amd_check_erratum_1485(void)
> +static void amd_check_bp_cfg(void)
>  {
> -	uint64_t val, chickenbit = (1 << 5);
> +	uint64_t val, new = 0;
>  
> -	if (cpu_has_hypervisor || boot_cpu_data.x86 != 0x19 || !is_zen4_uarch())
> +	/*
> +	 * AMD Erratum #1485.  Set bit 5, as instructed.
> +	 */
> +	if (!cpu_has_hypervisor && boot_cpu_data.x86 == 0x19 && is_zen4_uarch())
> +		new |= (1 << 5);
> +
> +	/*
> +	 * On hardware supporting SRSO_MSR_FIX, activate BP_SPEC_REDUCE by
> +	 * default.  This lets us do two things:
> +         *
> +         * 1) Avoid IBPB-on-entry to mitigate SRSO attacks from HVM guests.
> +         * 2) Lets us advertise SRSO_US_NO to PV guests.
> +	 */
> +	if (boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) && opt_bp_spec_reduce)
> +		new |= BP_CFG_SPEC_REDUCE;
> +
> +	/* Avoid reading BP_CFG if we don't intend to change anything. */
> +	if (!new)
>  		return;
>  
>  	rdmsrl(MSR_AMD64_BP_CFG, val);
>  
> -	if (val & chickenbit)
> +	if ((val & new) == new)
>  		return;

You explained that you want to avoid making this more complex, when I asked
about tweaking this to also deal with us possibly clearing flags. I'm okay
with that, but I did ask that you add at least half a sentence to clarify
this for future readers (which might include myself).
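To spell out for future readers what the simplification amounts to: the
function only ever sets bits, never clears them. A stand-alone sketch of
that pattern (illustrative only; fake_bp_cfg and apply_bp_cfg are stand-ins,
not the actual Xen symbols or MSR accessors):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-in for MSR_AMD64_BP_CFG; not a real MSR access. */
static uint64_t fake_bp_cfg;

static void apply_bp_cfg(uint64_t new)
{
    uint64_t val;

    if (!new)               /* nothing to set: skip the MSR read */
        return;

    val = fake_bp_cfg;      /* rdmsrl() in the real code */

    if ((val & new) == new) /* every wanted bit already set */
        return;

    /* wrmsrl() in the real code.  Note the set-only semantics: bits are
     * never cleared here; clearing would need extra mask handling that
     * this function deliberately doesn't have.
     */
    fake_bp_cfg = val | new;
}
```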

> @@ -1145,22 +1151,41 @@ static void __init ibpb_calculations(void)
>           * Confusion.  Mitigate with IBPB-on-entry.
>           */
>          if ( !boot_cpu_has(X86_FEATURE_BTC_NO) )
> -            def_ibpb_entry = true;
> +            def_ibpb_entry_pv = def_ibpb_entry_hvm = true;
>  
>          /*
> -         * Further to BTC, Zen3/4 CPUs suffer from Speculative Return Stack
> -         * Overflow in most configurations.  Mitigate with IBPB-on-entry if we
> -         * have the microcode that makes this an effective option.
> +         * Further to BTC, Zen3 and later CPUs suffer from Speculative Return
> +         * Stack Overflow in most configurations.  Mitigate with IBPB-on-entry
> +         * if we have the microcode that makes this an effective option,
> +         * except where there are other mitigating factors available.
>           */
>          if ( !boot_cpu_has(X86_FEATURE_SRSO_NO) &&
>               boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) )
> -            def_ibpb_entry = true;
> +        {
> +            /*
> +             * SRSO_U/S_NO is a subset of SRSO_NO, identifying that SRSO isn't
> +             * possible across the user/supervisor boundary.  We only need to
> +             * use IBPB-on-entry for PV guests on hardware which doesn't
> +             * enumerate SRSO_US_NO.
> +             */
> +            if ( !boot_cpu_has(X86_FEATURE_SRSO_US_NO) )
> +                def_ibpb_entry_pv = true;

To my PV32 related comment here you said "..., we might as well do our best".
Yet nothing has changed here? Then again, thinking about it once more, trying
to help PV32 would apparently mean splitting def_ibpb_entry_pv and hence, via
opt_ibpb_entry_pv, X86_FEATURE_IBPB_ENTRY_PV (and perhaps yet more items). I
guess the resulting complexity then simply isn't worth it.

However, as an unrelated aspect: According to the respective part of the
comment you add to calculate_pv_max_policy(), do we still need the IBPB when
BP_SPEC_REDUCE is active?

> +            /*
> +             * SRSO_MSR_FIX enumerates that we can use MSR_BP_CFG.SPEC_REDUCE
> +             * to mitigate SRSO across the host/guest boundary.  We only need
> +             * to use IBPB-on-entry for HVM guests if we haven't enabled this
> +             * control.
> +             */
> +            if ( !boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) || !opt_bp_spec_reduce )
> +                def_ibpb_entry_hvm = true;

Here and elsewhere, wouldn't conditionals become simpler if you (early on)
cleared opt_bp_spec_reduce when !boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX)?
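I.e. something along these lines (untested, stand-alone sketch with
stand-in variables and function names rather than the actual Xen symbols):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) and the
 * command-line option; names here are illustrative only.
 */
static bool cpu_has_srso_msr_fix;
static bool opt_bp_spec_reduce = true;  /* default: on */

/* Clearing the option early, once, when the feature isn't there... */
static void normalise_bp_spec_reduce(void)
{
    if ( !cpu_has_srso_msr_fix )
        opt_bp_spec_reduce = false;
}

/* ...lets every later test collapse from two conditions to one. */
static bool need_ibpb_entry_hvm(void)
{
    return !opt_bp_spec_reduce;  /* was: !has_srso_msr_fix || !opt */
}
```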

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -312,7 +312,9 @@ XEN_CPUFEATURE(FSRSC,              11*32+19) /*A  Fast Short REP SCASB */
>  XEN_CPUFEATURE(AMD_PREFETCHI,      11*32+20) /*A  PREFETCHIT{0,1} Instructions */
>  XEN_CPUFEATURE(SBPB,               11*32+27) /*A  Selective Branch Predictor Barrier */
>  XEN_CPUFEATURE(IBPB_BRTYPE,        11*32+28) /*A  IBPB flushes Branch Type predictions too */
> -XEN_CPUFEATURE(SRSO_NO,            11*32+29) /*A  Hardware not vulenrable to Speculative Return Stack Overflow */
> +XEN_CPUFEATURE(SRSO_NO,            11*32+29) /*A  Hardware not vulnerable to Speculative Return Stack Overflow */
> +XEN_CPUFEATURE(SRSO_US_NO,         11*32+30) /*A! Hardware not vulnerable to SRSO across the User/Supervisor boundary */

Nit: Elsewhere we have ! first, and I think that's preferable, to avoid
confusion with | (which normally comes last).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:08:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:08:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744300.1151309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKDmH-0001b7-15; Thu, 20 Jun 2024 09:08:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744300.1151309; Thu, 20 Jun 2024 09:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKDmG-0001b0-Ur; Thu, 20 Jun 2024 09:08:20 +0000
Received: by outflank-mailman (input) for mailman id 744300;
 Thu, 20 Jun 2024 09:08:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DyN3=NW=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sKDmF-0001au-Ii
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:08:19 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a1c8d708-2ee4-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 11:08:17 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id 30ED64EE0738;
 Thu, 20 Jun 2024 11:08:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a1c8d708-2ee4-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] automation/eclair_analysis: deviate and|or|xor for MISRA C Rule 21.2
Date: Thu, 20 Jun 2024 11:07:36 +0200
Message-Id: <b89e106649e3d0ecb41baadb49dc09c54b7563ec.1718873635.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rule 21.2 reports identifiers reserved for the C and POSIX standard
libraries: or, and, and xor are reserved identifiers because they constitute
alternative spellings of the corresponding operators; however, Xen doesn't
use the standard library headers, so there is no risk of overlap.

This addresses violations arising from x86_emulate/x86_emulate.c, where
labels named or, and, and xor appear.
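For reference, a stand-alone illustration (not the Xen code) of why such
labels are legal C: and, or, and xor are <iso646.h> macros rather than
keywords, so with no standard headers included they are ordinary
identifiers:

```c
#include <assert.h>

/* Toy dispatcher with labels spelled like the x86_emulate.c ones. */
static int apply_op(int op, int a, int b)
{
    int r;

    switch ( op )
    {
    case 0: goto and;
    case 1: goto or;
    default: goto xor;
    }

 and: r = a & b; goto done;
 or:  r = a | b; goto done;
 xor: r = a ^ b;
 done:
    return r;
}
```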

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 9fa9a7f01c..069519e380 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -498,6 +498,12 @@ still remain available."
 -config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}
 -doc_end
 
+-doc_begin="or, and and xor are reserved identifiers because they constitute alternate
+spellings for the corresponding operators.
+However, Xen doesn't use standard library headers, so there is no risk of overlap."
+-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor)$)))"}
+-doc_end
+
 -doc_begin="Xen does not use the functions provided by the Standard Library, but
 implements a set of functions that share the same names as their Standard Library equivalent.
 The implementation of these functions is available in source form, so the undefined, unspecified
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:11:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:11:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744306.1151320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKDp7-00030X-EX; Thu, 20 Jun 2024 09:11:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744306.1151320; Thu, 20 Jun 2024 09:11:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKDp7-00030Q-BQ; Thu, 20 Jun 2024 09:11:17 +0000
Received: by outflank-mailman (input) for mailman id 744306;
 Thu, 20 Jun 2024 09:11:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKDp6-00030I-Cg
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:11:16 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0bb73a9a-2ee5-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 11:11:15 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2e95a75a90eso5932261fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 02:11:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9aa0589a7sm33788545ad.159.2024.06.20.02.11.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 02:11:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bb73a9a-2ee5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718874675; x=1719479475; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KJLIla7Hi8D3/VqgY3rNh7e9s8feuuJzjDlTKt+1ktw=;
        b=FQhxC+VB97gzmvmY05HbwGciiSttT8wPqIy0bZunWSlKG1lBKcTdT5JSE7GkT9mpng
         xPLez+1BLqacotPHDNjt2Kufl0dvUiM+tONfWUKBl8zUb4MJzSLh8xXFsi3nlhNqfwPO
         34SlD4g7fUcpq//Omj/wQrRLIpUiXUiUHom9Xn70599jDf9y4yxxDp8Y6Odbnmfdzu1K
         N7bEtbPxiXpz2vSf+gYOhx3jOSBIDl7Z5iEsHqgmfKPUsG/rnMHE4bIbpZgRmcmeRV9u
         vCd5bLRshxoBqIe0vMY46PiB7Y/V8bdaSlv0jRAtGT+SFEbASErY/K5QPBkHV8On3Bmc
         /fhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718874675; x=1719479475;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KJLIla7Hi8D3/VqgY3rNh7e9s8feuuJzjDlTKt+1ktw=;
        b=UV1ksBXSUnJUFnH/yF5tw19aGXZj49jsjTNNYGxEkAtPboHTEQFmE1XVLDV74vhlda
         7QdLDcHpmO4EXbF2IxY3t/z3osYPYij0DzvbBOQmgOfj/l7qQ3y+vaz49EnoCiOfr49+
         BGYYgDYX8OtT9ZzP/NedqqTVjw1G2fwVlZKjju9YpEdhNVpayKQvjXtKiVD6AuoAcvAo
         UyWrvJizk7svvYi/5NEJ3Ouq6ZSW84VVC+58YVrvJFcoVHQGU6BTOx67naa8CWReWG0M
         aU/hqdtahXua5yOVqCo9QgBN4ZAJO/GPR3M9hHSgV5E2XrUDH6NkXdeHw9CaJn4X+Cwb
         x3Lw==
X-Forwarded-Encrypted: i=1; AJvYcCVC0lo0GAJ5IIiFLxk6qdsMBIdBsEsL0RDfR/vtl5vefsCgTDA9tFxjVrIVUVgQ/1xtK7lGdVWGWFyQYbXjZZaIB9qfvpJ5oqL0Mg3yNQQ=
X-Gm-Message-State: AOJu0YxbIOp5U9KSQtnslv2NF+FFbAWzUh9cHGwKeOi2CAAkyIYBbXRF
	oQz0s4fSqNKgTVWhvs54NGLR7qKVIPU64mmDPIyseaxsea7gj/JE7b76d1QF3Q==
X-Google-Smtp-Source: AGHT+IGmBabIZwC+o1P7ltsvkNzZBXF2yyin05UHybtl71k/F2gcNXkuO5YHk1e0ltSJ3BeWlIBiwA==
X-Received: by 2002:a2e:7d14:0:b0:2ec:174b:75bb with SMTP id 38308e7fff4ca-2ec3cea1b81mr28023181fa.28.1718874674762;
        Thu, 20 Jun 2024 02:11:14 -0700 (PDT)
Message-ID: <02ee9a03-c5b9-4250-960d-e9a2762605c8@suse.com>
Date: Thu, 20 Jun 2024 11:11:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] automation/eclair_analysis: deviate MISRA C Rule 21.2
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
 <5b8364528a9ece8fec9f0e70bee81c2ea94c1820.1718816397.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <5b8364528a9ece8fec9f0e70bee81c2ea94c1820.1718816397.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 19:09, Alessandro Zucchelli wrote:
> Rule 21.2 reports identifiers reserved for the C and POSIX standard
> libraries: all xen's translation units are compiled with option
> -nostdinc, this guarantees that these libraries are not used, therefore
> a justification is provided for allowing uses of such identifiers in
> the project.
> Builtins starting with "__builtin_" still remain available.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 447c1e6661..9fa9a7f01c 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -487,6 +487,17 @@ leads to a violation of the Rule are deviated."
>  # Series 21.
>  #
>  
> +-doc_begin="Rules 21.1 and 21.2 report identifiers reserved for the C and POSIX
> +standard libraries: if these libraries are not used there is no reason to avoid such
> +identifiers. All xen's translation units are compiled with option -nostdinc,
> +this guarantees that these libraries are not used. Some compilers could perform
> +optimization using built-in functions: this risk is partially addressed by
> +using the compilation option -fno-builtin. Builtins starting with \"__builtin_\"
> +still remain available."

While the sub-section "Reserved Identifiers" is part of Section 7,
"Library", close coordination is needed between the library and the
compiler, which only together form an "implementation". Therefore any
use of identifiers beginning with two underscores or beginning with an
underscore and an upper case letter is at risk of colliding not only
with a particular library implementation (which we don't use), but
also with a particular compiler implementation (which we cannot avoid
using). How can we permit use of such potentially problematic
identifiers?

Further, as to the rule mentioning file scope identifiers: Why is that?
The text in the C99 specification does not preclude their use; it merely
restricts what they may be used for. Why does MISRA go yet further?

Finally, why "partially addressed"? What part is unaddressed?

> +-config=MC3R1.R21.1,macros={safe , "!^__builtin_$" }
> +-config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}

First: Why the differences in = vs += and in the absence vs presence of .*?

Second: The rules, according to my understanding, are about us defining
or declaring such identifiers, not about us using them. There shouldn't
be any #define, #undef, or declaration (let alone definition) of such
entities. All we do is _use_ them, e.g. as the expansion of #define-s.
Thus: Why is a deviation needed here in the first place? Then again,
maybe I'm reading this wrong; in particular the leading ! may be some
form of negation.
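To make the declare-vs-use distinction concrete (a purely hypothetical
example, not Xen code): the first declaration below is what the rules
target, while the macro merely _uses_ a compiler-provided reserved
identifier:

```c
#include <assert.h>

/* A *declaration* of a reserved identifier - this is what Rules
 * 21.1/21.2 are about, and what shouldn't appear in the first place:
 */
int __my_reserved_counter;   /* hypothetical name; would be a violation */

/* Mere *use* of an implementation-provided reserved identifier,
 * here via a macro expanding to a GCC/Clang builtin:
 */
#define likely(x) __builtin_expect(!!(x), 1)

static int classify(int x)
{
    if ( likely(x > 0) )
        return 1;
    return 0;
}
```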

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:15:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:15:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744313.1151329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKDtM-0003nM-UN; Thu, 20 Jun 2024 09:15:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744313.1151329; Thu, 20 Jun 2024 09:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKDtM-0003nF-Rt; Thu, 20 Jun 2024 09:15:40 +0000
Received: by outflank-mailman (input) for mailman id 744313;
 Thu, 20 Jun 2024 09:15:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKDtL-0003n8-Q8
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:15:39 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8d877d6-2ee5-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 11:15:38 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2eaea28868dso7758291fa.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 02:15:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e70efdsm132891975ad.89.2024.06.20.02.15.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 02:15:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8d877d6-2ee5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718874938; x=1719479738; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ecg+8Do6KBVsgEIgZxE1KcnUiQXqJiCZhnqnv1stNf0=;
        b=eaWz1UTqOW59e9kyvyFZ5r+ExlUZ7gogr1g4KtoXcZNNspsG8PQutN4sNQHcxJt7YW
         mt9i+21LzZxt0CfhoCniAQAvKa6mr1UpkumGpXDAV0W3b6x/TRfDN9NmdBeYU5uLC9SF
         JUmwX2FI52LEwI1J5LEqz4XvyKUEhgqz6WGkX4pV/9wTdzaypeLdV9u6yTH2OStb9WZO
         Lj+Pr7bd6vUspXzqM+sil7Wn9rKIyfXogewe+ulTx7Eda0FZnlZ1wv9OML/mww3sJIHx
         s9c6/vVb7hPOeSxIbQFyPXfNNXaoLLTOOzBUbHuDaf1d/bHu2yE45e9SC2+TKGgmbvnS
         nu9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718874938; x=1719479738;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ecg+8Do6KBVsgEIgZxE1KcnUiQXqJiCZhnqnv1stNf0=;
        b=bHazm15b0Z5sf2yRM7gCJ72tu+m03RBaxNVnbw8vKS8vbiszHYAFaXInyZKZALYGzf
         LmB4VNpwrgYJBP8sPynLeKDJ1Eu9+3fNiBsAYKnMAOIYItjxXbSTnTnLR8yFY7Vi+Rj/
         XIrw9ZtcY+H4ybnjxbhn8+KlmMtHMilVZQWJRrXnCFDw1tgsDzURdx88dYHJsq2zNQL3
         pTjjhcKGguihmP6Y3qUNMKj6ZpUGEe5n4cPWrLHRKCeShdTZjHwqsasPvNp89+UG2YKd
         C0jzmZUFluU7KMjac3WOdsnkeoJsUt40EQ37b1meKX1h47GaJeqU7joS/KGsxYhZt/CA
         Go0Q==
X-Forwarded-Encrypted: i=1; AJvYcCX77Vxu8MSR8V9P4xHlKJc6dD0j0zliQZ3ZO8wUDxoVqgckM1OXtxa5+bFD2HX52Xl1KVrbULKRWjWRMO6n8WSiGNeh19m9wtgkqe0c2Ro=
X-Gm-Message-State: AOJu0YxSOirohcEgozd9o9rSaFIHauORfm02d5fUU9QiOAaqYFLlCqbJ
	ygtxnXPcE9No0Cu2IhKChZql3GbLNhLyBPePOJ7qFKsM+xoXv/LpvBy2Qu26tA==
X-Google-Smtp-Source: AGHT+IEZZTqA4XN1zfyUOX/h0sovw2/SA8OqqPUndSyq639G4VrMUMJKqqMZXb60u3WmG/NROm3muQ==
X-Received: by 2002:a2e:9e16:0:b0:2ea:e74c:40a2 with SMTP id 38308e7fff4ca-2ec3cec1131mr35950061fa.20.1718874938455;
        Thu, 20 Jun 2024 02:15:38 -0700 (PDT)
Message-ID: <4189674e-af5e-4ffd-9558-bf0f1bc3338d@suse.com>
Date: Thu, 20 Jun 2024 11:15:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] x86/APIC: address violation of MISRA C Rule 21.2
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Nicola Vetrini <nicola.vetrini@bugseng.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
 <4a31cfc5e8d4e2c5e159ca4d67ac477feb000073.1718816397.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <4a31cfc5e8d4e2c5e159ca4d67ac477feb000073.1718816397.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 19:09, Alessandro Zucchelli wrote:
> From: Nicola Vetrini <nicola.vetrini@bugseng.com>
> 
> The rule disallows the usage of an identifier reserved by the C standard.
> All identifiers starting with '__' are reserved for any use, so the label
> can be renamed in order to avoid the violation.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

While the code change is certainly okay (with a cosmetic remark below),
your sending a change Nicola made requires, aiui, your own S-o-b as well.

> @@ -941,7 +941,7 @@ void __init init_apic_mappings(void)
>      apic_printk(APIC_VERBOSE, "mapped APIC to %08Lx (%08lx)\n", APIC_BASE,
>                  apic_phys);
>  
> -__next:
> +next:

While touching this (or any) label, can you please also make sure that
from now on it respects this section of ./CODING_STYLE (part of the
section "Indentation")?

"Due to the behavior of GNU diffutils "diff -p", labels should be
 indented by at least one blank.  Non-case labels inside switch() bodies
 are preferred to be indented the same as the block's case labels."

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:26:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:26:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744324.1151339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKE41-0005h3-0T; Thu, 20 Jun 2024 09:26:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744324.1151339; Thu, 20 Jun 2024 09:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKE40-0005gw-Tv; Thu, 20 Jun 2024 09:26:40 +0000
Received: by outflank-mailman (input) for mailman id 744324;
 Thu, 20 Jun 2024 09:26:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKE3z-0005gq-1t
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:26:39 +0000
Received: from mail-wm1-x32d.google.com (mail-wm1-x32d.google.com
 [2a00:1450:4864:20::32d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 30f4e296-2ee7-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 11:26:36 +0200 (CEST)
Received: by mail-wm1-x32d.google.com with SMTP id
 5b1f17b1804b1-42108856c33so10995105e9.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 02:26:36 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705cc966d12sm12021451b3a.60.2024.06.20.02.26.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 02:26:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30f4e296-2ee7-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718875596; x=1719480396; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ukSYQxV8WYuiBktk/w2rk6jVRIf9YinMQmE2j/ELZuU=;
        b=XeVueRZCQCUP4NcIPeXecNbAHVSF56StLZAR/JiogGkC1ZKFtXEb8FJZhKcly8ttLh
         SmyNAbeHk8OwGIAzmtD+Y+IRo7c/eGIti+abNPsaYIrm/gso5YB3qpRjAdPxavwuCwqZ
         QFrFTHjkrJJMoTB4R6u7jJ5XCGqkD7eX0Y7QntKrfr4QYlHToRITo4yecJYT4BIduP4S
         QTmBd82m4re18sajBvf/S9QVBBDRXuucTbLYfJfBu2dhRkr6cqxY0BZ45d0arxNLWxNk
         G5YQ4Oujibsv7g5RrwQLo0ZIHlVoRaAWAo/1EMg8M0SrFkdldHjW9e83jODuQqi0rb1R
         gpNQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718875596; x=1719480396;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ukSYQxV8WYuiBktk/w2rk6jVRIf9YinMQmE2j/ELZuU=;
        b=BuVV9QNUZS+rE3DxtPkHrkklX5BgLMFtRI1N/a3PgBMyIzps58Pg2b3BzeWGQln9ib
         iv3x8Dzp9U0MO41YgVOZgVVooIdzBJJQ9ImZXte2ffn4W2/jnQaCF95/dThrnH+G7RR+
         14NKCpDwnWKsbtdh5xK+7+YitFX5xz7SQEWScW6CqONt/aR8cuuag2dThNDDAjRhjm8v
         +TYgajtfbeE3qE3hgAN/99Znv8Ken0Yh8qwyiAg9k0+VByUNhBENZzm9kDUgbmtL2Icw
         ewO1Ccy9OP+1UD8Mk/RajHmTENEJYxD7QHxFOC/7JpA9jQsSG/O54ZUVPPMKzAsXBJXN
         6Gng==
X-Forwarded-Encrypted: i=1; AJvYcCWBuZdk7FOdHMUUeIVffkO8uS6/pFnnH4wJFIwQicdTiA3rGfc5DRUZLC7MQEmIMteVCrokvkLozJKjcaVN9QHG0wZzxDtJPg6EVhncPHY=
X-Gm-Message-State: AOJu0Yy6F1q5czaWMGAAcDC23XzZLYc8zKID45rNIQM7lX5qYMvZGZNF
	AIB8swbwaa+mNlpMx/gZf38hfviCXKHKIwapuwKS4IIrZQOIkGWlrwdoBASxjIzmJi+T0rcNdRI
	=
X-Google-Smtp-Source: AGHT+IH9m/T7KSOA68Av1ymbJtklP9ULXBtYuXZXXq2GmUiqKeVbMeECEZOtelTXoEkbU2RfjYc9EA==
X-Received: by 2002:adf:fa81:0:b0:362:4679:b5a with SMTP id ffacd0b85a97d-3624688530emr6014052f8f.16.1718875596051;
        Thu, 20 Jun 2024 02:26:36 -0700 (PDT)
Message-ID: <5f486b5a-aba1-41f6-9e24-16ad3acd67bd@suse.com>
Date: Thu, 20 Jun 2024 11:26:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] automation/eclair_analysis: deviate and|or|xor for MISRA
 C Rule 21.2
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <b89e106649e3d0ecb41baadb49dc09c54b7563ec.1718873635.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b89e106649e3d0ecb41baadb49dc09c54b7563ec.1718873635.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 11:07, Alessandro Zucchelli wrote:
> Rule 21.2 reports identifiers reserved for the C and POSIX standard
> libraries: or, and and xor are reserved identifiers because they constitute
> alternate spellings for the corresponding operators; however Xen doesn't
> use standard library headers, so there is no risk of overlap.

This is iso646.h aiui, which imo would be good to mention here, just
to avoid people needing to go hunt for where this is coming from.

> This addresses violations arising from x86_emulate/x86_emulate.c, where
> label statements named as or, and and xor appear.

So a deviation purely by present uses, even ...

> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -498,6 +498,12 @@ still remain available."
>  -config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}
>  -doc_end
>  
> +-doc_begin="or, and and xor are reserved identifiers because they constitute alternate
> +spellings for the corresponding operators.
> +However, Xen doesn't use standard library headers, so there is no risk of overlap."
> +-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor)$)))"}
> +-doc_end

... constrained to just labels. Why would we do that? Why can't we deviate
them all (or at least all that are plausible to potentially use somewhere,
which imo would include at least "not" as well), and no matter what
syntactical element they would be used as?

Besides, just as a remark: Specifically when used as label names, there's
no risk at all, I'm inclined to say. If iso646.h existed in Xen and was
included in such a source file, the compiler would choke on the result.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:40:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:40:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744333.1151350 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEHR-000082-5D; Thu, 20 Jun 2024 09:40:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744333.1151350; Thu, 20 Jun 2024 09:40:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEHR-00007v-2D; Thu, 20 Jun 2024 09:40:33 +0000
Received: by outflank-mailman (input) for mailman id 744333;
 Thu, 20 Jun 2024 09:40:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F3B6=NW=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sKEHQ-00007p-Gs
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:40:32 +0000
Received: from NAM02-DM3-obe.outbound.protection.outlook.com
 (mail-dm3nam02on20601.outbound.protection.outlook.com
 [2a01:111:f403:2405::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2108ba7d-2ee9-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 11:40:29 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by DM4PR12MB5819.namprd12.prod.outlook.com (2603:10b6:8:63::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.21; Thu, 20 Jun
 2024 09:40:26 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Thu, 20 Jun 2024
 09:40:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2108ba7d-2ee9-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VelB6SPeycDl1UuXY/XXND0vOydbwdLugDBeyU/5mUm06iv4o6o65z/BGp9lqCV0YjOyeXGmGR5siTzy9X81oSAA51DWIFbNbw3b1YKcNCQwwU7KmHcA0vxJ2j8IgpKG4swy1BpmJE6zHsppXRByizfvDzXrnmlMI4ztMUaMV9BeEjHyb/gdBF4yqMvJCNY4PhDIjFEjqBa9bY6FWWkhiND53D8Mp41aKyWZ5a8OnMmH6DB3RF3Ao/td1vC8QpGR4hmb7qImAQcqh03Jt/DaDrkhjpKm1UU9xfnUdzUxInrmphCLE5bkfevaaHAX2fYG1H+ojBkY4LyTr67xZXAoXw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=XC5nzhJNZXlpnr5jo9kN5ci/J7Al/DdElf+vDCVtvKs=;
 b=iO0U4Ak1WzHFKRkwW5GXhGjzsb3h8HScyoOn3om2b6GUFliMxnwIob1npf+uJNDA0/lF6BBU4534GHhz208LmH2Dt4iuj+ehjzVsYyOUe5CPqD5BGVqMobYk71x74qmpC3efcP6ar1vuDEiYZLeMhTWYb4nCOSkV3G+0rNRFeiPfzf+DHEz2NwUUk0+dMj1WxqdjMNNE9QvAYt0FEDRt5VJivmNnDNNtoDoe8+nbToZCNuMQkbNMp3kEeIrGq5aLEXZf2BdD2t4su+EpWEwwIp6fuJQtb6Uvlx0XzWwBfVydre0fjaJT19zGmd+JLPz4gxExo2s9roesHQ1Z1wzB8A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XC5nzhJNZXlpnr5jo9kN5ci/J7Al/DdElf+vDCVtvKs=;
 b=eWUuNAz59cieg3GsJB1JdWxSBsmuvk1cfmvcpvp+bWaSYEr1FZa2Q7HxeZ8vg9o1T5sJROYrluZqbT/eqSdo6keno3tCWlLkzR2hjF74FC7QIz/1fDOXPdMgT85cMu1gRfVV6k+VKi6ua0TXbnv/jEI7B9W/1Ir8AhE2tuSQIas=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index: AQHawJTqXyg3PMEiVUiLHPQ86aZkYrHMFeyAgAGdTwD//43bAIADi4UA
Date: Thu, 20 Jun 2024 09:40:25 +0000
Message-ID:
 <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
In-Reply-To: <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7698.013)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|DM4PR12MB5819:EE_
x-ms-office365-filtering-correlation-id: 2e7716ba-1daa-42bd-e720-08dc910d037c
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|366013|7416011|376011|1800799021|38070700015;
x-microsoft-antispam-message-info:
 =?utf-8?B?d2dsbGhYQWJUWDFvL3RRcUhPQVhDK3hHQnl5TDQrcTk0OHRORGJ3bkgxbUFk?=
 =?utf-8?B?ZW5yaERTeVh2bjlJcWRieHpmdHpmVUFEMFlqNDZHZGpHOWxCZmlVZWZySnR6?=
 =?utf-8?B?YTZ5ZWlUOU52cmZ5amxuRk1JT2czZi91a3pBYnVvMFEzTWg3VFlXb3I0alpw?=
 =?utf-8?B?MW1WOU52MmVnRXJtMHdXVWNaWEp0NjJmdGo2cytvQnRWU2dHTGZkNGlwanB6?=
 =?utf-8?B?VUtBeEhybnJFclM0eC8xQXA3SHNUeFZNUEgwRmV1MUFtekpWeVRMVDhKQ0xs?=
 =?utf-8?B?ZXFpdmgwTFp1LzBCalU5MUE3TXNDUFJDZzhSa3NzZTViNXBXdkRCQVdFQ0dV?=
 =?utf-8?B?a3RZUXZWWnYrd0dFUnBFTEcwb04rQnE0M1dYZHdvMWJhZ0s3cXdCZ0ZlejNI?=
 =?utf-8?B?Ti9CNzlPVXF3TlFqeTREbVU0NytrTHgyY0g4cERHOFozYTgxdzVCb0JoT1Rt?=
 =?utf-8?B?OHBaNHhNa1pvR2N5WDZnVUlXeFhHWUw0L1pTbysrRytkT1hDZlR4U0lleWxE?=
 =?utf-8?B?WEpMTllManlwaGZXcnQzM3N0cTVBem1DV2FTUVAwUXA0RzBjNmhqZlVrbE9B?=
 =?utf-8?B?YlZDamZxWlJ0NEJpSTJWSEtlRDhUekRuYUd4WHVnbTh5S2hiTCtISHc4THR4?=
 =?utf-8?B?dGlNcG1Rc2ZpSDJWTEtvMWVDUThZMHRlZkVEMjFIcUptdzVWMUpSQnV1NkxJ?=
 =?utf-8?B?L25lK1pZMWFUNGQ2VFUzeHE1cmt0U1BUc092UlpsQjlTclh0bXdheWlpQ0N6?=
 =?utf-8?B?OW8vSHJtb0JZNzBZRHQ1WG1WSHg3bjIxK053MFZhUXlNdllKTmZhMXROc3RY?=
 =?utf-8?B?MmZkdnBjTzJIYUhBTHVXNnJtSVdwQnNzK0czcTVOK3NHdFNjZU9vL3c3aE8v?=
 =?utf-8?B?YUc1Z0ppTWdnUGNIU0NObUJqcVRlWFZuNmVkREdYYVY0NDBkbWYxaHFjUlN2?=
 =?utf-8?B?bDlvYWtnVHoyanIxTUZISHZ2OXR4WU4rakRwb2FRME56R2xaWjEzZ1VpbVUw?=
 =?utf-8?B?UkFsU3VPcGY2K1Z5Q0U3VTdWblFCQjkyOXMzUkxNSkhRa1dka3MrTUpZRTJ3?=
 =?utf-8?B?enY4K2JQZHZmVmpCbzB6OXdrdGNRZ1YwT3UyQVFWVHFwL2UwSm1sSm1ySUNu?=
 =?utf-8?B?NlJqYmtLYzdnS1BpeUd6K2JJTXcxUG1CaGpRR1FGS2kyWEZQRjByRUM4NHpL?=
 =?utf-8?B?Q1lsYTcrbTQzMjZtZW9jdnY0QkNidkJSWDZaaEk0UUkxQWFDMG5rczVNM1hJ?=
 =?utf-8?B?UGVIS0ovNlJmeHg4M2luN0xObDlCU2pTeXZBSi9KejNacDVVcEVKQ21ZUkxD?=
 =?utf-8?B?OTJvdkVTTU8zVG5LK0w4dUhScnNUUDIyaklTR2JmbnlTZlZGVkxMY3IveE9p?=
 =?utf-8?B?b2ZsSVZ6L0N6K1lQQjh1VVRWWDV3bnpwMHRyVVMzS3gvUXNQcVZiTy9BNlE0?=
 =?utf-8?B?SG1GYVJPN1lzK0hXM2JFeXBwY21BMDFhbWdDMzFtS0ZKNUJpK2NTNVY3ZWRa?=
 =?utf-8?B?ekYwRUd5aHVEOXlQOC9Hc1BqdjFSOWJ0QlNoY0Z0MUR4UWx0L0RQQWhkS2pz?=
 =?utf-8?B?YlRnM2hVNmNkci8rZnQvK0hMWnBEdWZ3VzBDcmJobExIZ1pYK3VmTVNxSmdF?=
 =?utf-8?B?blVmU3BNbHZxa0Nxc1ZIQzdDSW03anoyMklOMFA3cWdqaHJ0VVRBTEVRY1Jt?=
 =?utf-8?B?S09LbFB5QlhoS1RnZHY0dktjRkpVenNwWmtReUZVWlZlMDc4UWxNeDRMZGUv?=
 =?utf-8?B?YnpiTWl0UmF4RkFpRHVUcUVKK0xNN2ZDUHM2dnZiTDNqUGFmVzFDb0FJOVZu?=
 =?utf-8?Q?efcFT8lz/1XFrers6EOfYXa9GT4ZDUK9568Cc=3D?=
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(366013)(7416011)(376011)(1800799021)(38070700015);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?utf-8?B?YWhMZHhvc2hjeWxkMFIrazNWWlJ4UFkxdG5IdWlMVHpLdlJnSHMvL1J1TzFT?=
 =?utf-8?B?anpPZzN4NXVHNDdZemM4NVdlTS8rbHhHNDE2QlV1VTQrZjZtTDczSmhPTlNP?=
 =?utf-8?B?MDFqZlVCSnZLNUZjRXZteVc5QTdkcXJKN3FSS0xXeTE1bHByWW5KTWNXT2Q2?=
 =?utf-8?B?Slc4dzZhQVlhbVlLWXBvWDh5ZmN3WHNWQWJaaDNLSUpWVmZMTThXL1dQa1k2?=
 =?utf-8?B?VzJPRFJVaVk2bzlPZWJ5NlIwRjl3MFJRU2VoNGRySXorc0xBemJQd0krc3Ev?=
 =?utf-8?B?eklTZElNWFZEN29jU1E1RThnanJtVkErVk5zTUtYS0VMZzhmVyt5eStKUldw?=
 =?utf-8?B?Y1NKT1ZhU0JmL0haaEZFSDlMK2NzUjc1TlZrTVFPMkpuUHJXL3BMRGZldjJ2?=
 =?utf-8?B?YlcrdkdEQ1JrelZZYkNLWkZueEtadCtMbE9Nd1VFVzQ0SHJucEZCakZHNmVT?=
 =?utf-8?B?ZC9kTFBOQU1HQ0IrNFZ1QytYSFhhV2E0Rzd6eFg3dmNuN1dDeXFLUmNHRjJK?=
 =?utf-8?B?cFRLN0pnK3dqMms1Q0xJVUhsZjVGZTFoSWJhdmNIYmJDRXEyYXhDRStCdTN4?=
 =?utf-8?B?aXcxR0dxNHE3VWx1MkFvVHZCVFFNbW42bjZHSjJqeEJkRmk0ZHNJdlFLNW1y?=
 =?utf-8?B?aXpJK3dEbm1CRkxvQVlsOHZkdW1Qam9yU3RLWExPSHU3cXpVSkpodExiY3ZR?=
 =?utf-8?B?eVVpUGNhdkV1VDhBSzEveVdNUnMrdEN4ZEoxeVJmMGZZS0U1QkJIdjUvMkNB?=
 =?utf-8?B?VThrR2hZZ1BXSkQvaWQyUGwrdmVtRHg4YVVnRVZmTGVac2RSSkFmMys1bndP?=
 =?utf-8?B?cGNrMXpwWk9YWHRFYkxmYkUrajNqUnhxNjZCb2R1QjJFWXk0QURlQ0M0TEtI?=
 =?utf-8?B?dU9iN2pGUThZR1dDVFlYM1ZuaXZWbjNud2RIY3QwcnNsRkhoY2lidFozaWpM?=
 =?utf-8?B?MjgyYWE0MEdIcEdoT2ozWnhXK2M5bjZEMUpaelJoNHNIZWZjS2dZdVZQQy8v?=
 =?utf-8?B?a3dqQVR5aHkxeTdjYkhiUFpVUEVRWWw5QU5kcnA2TkJiZFJmRE9OQTg0V1di?=
 =?utf-8?B?UVQ5alo2MmRzaCs0TzFETm5BQlI5NisxSHdUNHdsdGZvL21rakNlRmFqR2Y4?=
 =?utf-8?B?UUpkeDNtcCtOR3RTeHNKYTJ1QjlMbUNBeGJJaWhQOW5EckNYenBzQ3VaQXp4?=
 =?utf-8?B?dWNZeEo0eU5hL3NqWXcrQXg5RGxRY2NiN3Q2K3NTVW9NVUNKbVoxY01vaFlJ?=
 =?utf-8?B?WXkvQTBTVDROYU05cWJicnhwd2pCU1Iya3pNRE1QWHI4SkIwOWtvZnhaclJH?=
 =?utf-8?B?WTZJYzNRQWU0eVcydmMxZW5TaE9nZlh5dm0ycXRCNldHbUg5NksxWXhjaUV5?=
 =?utf-8?B?RHVzYksvZUwxeWVickxOR29Wc2FYaE5ucGZlbVRrVzJ0N2pvbk1hQ1NYdVJ3?=
 =?utf-8?B?cXpJN2lObUV5aEZITXc4WHluZ25KSHhoR1NLR1VpcTlGR3ErRGRiczZFS3cz?=
 =?utf-8?B?ZWFHeTdJc3FjZmJLL3F3NnlzMXM1MEU3VkRhQVFjMEtpVzRHOUJ4VmVjNXg5?=
 =?utf-8?B?dDFCOEMrU1dacGhpUkgvbXZXY1pyOU5kZWNjTkpaR29VNERGTVc0VHh2K0RY?=
 =?utf-8?B?L09UVUZxMVl2NjJQVXFHZUxhbEZIUUY3emFudy9QVWZXdTZKYXA3blVhRWF5?=
 =?utf-8?B?cXZtQ21FNk1Yc1dSRk50TFFORytKNnJiQXIwdDAxaDV6ZVdUYmllbkVhTGEx?=
 =?utf-8?B?SUNxWDRZR2ZSZ2lXSjdjc2FrenBRUXUxdnFQQTBPMHpWNjBGbEFZTGx2Q0xn?=
 =?utf-8?B?UHMvejA0TkYwenJabEZlUDJuVVU2c3JKWVJtWjNNVEtXV210QmJXRUNrdHJW?=
 =?utf-8?B?anZ0MFNxU0MxMUNTMEFQaVpLWnFKRVVEWWxIU2hhVUpuck5SOHhjN2pBOVFl?=
 =?utf-8?B?L1lqVXBvaWZqMTBvekdROTIwVElKTWhNNU5iYnc4OTd6SmR4WUhWVTIvZnJQ?=
 =?utf-8?B?RWVickhZb3dTNUxlNXBJWnp4TUMzUVZWQmVKd3V2cE9obCtmMk9TS0M0YlV0?=
 =?utf-8?B?OVRlKzIvWXgyL1Z5NEtxYzJkRE1KMy9VdXpPRFdZSU1IeUZRMWFsSVUySVQw?=
 =?utf-8?Q?1RdE=3D?=
Content-Type: text/plain; charset="utf-8"
Content-ID: <76993DA5CD32A44EB0215BF18FC0B0CF@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 2e7716ba-1daa-42bd-e720-08dc910d037c
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jun 2024 09:40:25.8426
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: UxZ2PBi1FBVb38rcUInBBktGQ0CnxolffDevQ4/SCjqkJPPBZsAUP/tpDTCmlZFhe4KBjO98UhC/WluwCkBAzg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5819

On 2024/6/18 17:23, Jan Beulich wrote:
> On 18.06.2024 10:23, Chen, Jiqian wrote:
>> On 2024/6/17 23:32, Jan Beulich wrote:
>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>              rc = ERROR_FAIL;
>>>>              goto out;
>>>>          }
>>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
>>>> +#ifdef CONFIG_X86
>>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
>>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>>>
>>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF, but
>>> I didn't check if that can be used with the underlying hypercall(s). Otherwise
From the commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf, DOMID_SELF is not allowed for XEN_DOMCTL_getdomaininfo.
And now XEN_DOMCTL_getdomaininfo gets domain through rcu_lock_domain_by_id.

>>> you want to pass the actual domid of the local domain here.
What is the local domain here?
What is method for me to get its domid?

>> But the action of granting permission is from dom0 to domU, what I need to get is the infomation of dom0,
>> The actual domid here is domU's id I think, it is not useful.
> 
> Note how I said DOMID_SELF and "local domain". There's no talk of using the
> DomU's domid. But what you apparently neglect is the fact that the hardware
> domain isn't necessarily Dom0 (see CONFIG_LATE_HWDOM in the hypervisor).
> While benign in most cases, this is relevant when it comes to referencing
> the hardware domain by domid. And it is the hardware domain which is going
> to drive the device re-assignment, as that domain is who's in possession of
> all the devices not yet assigned to any DomU.
OK, I need to get the information of hardware domain here?

> 
>>>> @@ -237,6 +238,48 @@ long arch_do_domctl(
>>>>          break;
>>>>      }
>>>> 
>>>> +    case XEN_DOMCTL_gsi_permission:
>>>> +    {
>>>> +        unsigned int gsi = domctl->u.gsi_permission.gsi;
>>>> +        int irq;
>>>> +        bool allow = domctl->u.gsi_permission.allow_access;
>>>
>>> See my earlier comments on this conversion of 8 bits into just one.
>> Do you mean that I need to check allow_access is >= 0?
>> But allow_access is u8, it can't be negative.
> 
> Right. What I can only re-iterate from earlier commenting is that you
> want to check for 0 or 1 (can be viewed as looking at just the low bit),
> rejecting everything else. It is only this way that down the road we
> could assign meaning to the other bits, without risking to break existing
> callers. That's the same as the requirement to check padding fields to be
> zero.
OK, I will add check the other bit is zero except the lowest one bit.

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:44:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:44:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744338.1151361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEKu-0000to-Ki; Thu, 20 Jun 2024 09:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744338.1151361; Thu, 20 Jun 2024 09:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEKu-0000th-GZ; Thu, 20 Jun 2024 09:44:08 +0000
Received: by outflank-mailman (input) for mailman id 744338;
 Thu, 20 Jun 2024 09:44:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TjK6=NW=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sKEKt-0000tb-Dq
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:44:07 +0000
Received: from mail-lf1-x135.google.com (mail-lf1-x135.google.com
 [2a00:1450:4864:20::135])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a228226e-2ee9-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 11:44:05 +0200 (CEST)
Received: by mail-lf1-x135.google.com with SMTP id
 2adb3069b0e04-52c89d6b4adso612073e87.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 02:44:05 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ca2889d38sm1989510e87.303.2024.06.20.02.44.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Jun 2024 02:44:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a228226e-2ee9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718876645; x=1719481445; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=N9xVWL/x/lX73QedKCfBzagjNx6/zXPfp05AhbQOiMY=;
        b=NcGjxo7iVfcAWV1Hoy2MNsp0E+Xo0/yYJ6AMF3cjnSv8jn/Y7sYLmQxo+rslHVRiTi
         e89Nkr2+9baHL6x0R8M77ggUISo2DRgnpoKspYmx03AFK8dW2UY478+L4npUG87rA2ap
         kjdepwo2gurxpIo2OL+wB5a5y0zc8ByeaStsJCmoCeHUtpUw4V1bFlaDMYflpGy3mbv9
         aK3q84ZZ2wNC0uGncTm6okzttFLhV4NFTxWOszk9neS4YLw6pIpBZi2EhvDnCYvyNhC5
         cHG6HOXRAYQjg7I1SoEWlS9EonnYpLJ+7EvVUjQtk99gt1qhVF5vhIEvDH1RKPO/VlID
         U6yg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718876645; x=1719481445;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=N9xVWL/x/lX73QedKCfBzagjNx6/zXPfp05AhbQOiMY=;
        b=nRRqRp1XF9O8AtMBGU8pWlmsfg9NrWXWVJJMYBapsXv/371QI97DM49YCw/g6OnblF
         W/BA8iT5pOggAr1wmuepNCZVfEqvZn9RZbIkQNBZqcZnq22PqiG7pzzwIdCABcny4/jg
         DHsyBIUvhz1IeaqigOkXtZ5cvRLVUUOEG3FYW5gnxtbm+ZieATenlo+55wRz8ZnvFpx7
         eTtAAqkFAJe5YI0fRdeVFLr7WejRDa/e2ARW7KC9tjhwxCZFU6hmclPJ/BnsZVITrZwI
         9EVT4s8dp5bF9WW59DBlFeuXE8Hams4hd6B5cU7/2l/51Sl1yxxByv3KeT8etKVel+Rs
         oVwA==
X-Forwarded-Encrypted: i=1; AJvYcCVdgoU1XEOqPzaQSLYuVvSlu2+XN/RCQvgKCBPuLei+dZT2f+f9mn8inPnMIT4DTD0fORmroTQrbCwKNFfGjc3sR9Jo6FF2gj0Uq+G8wUw=
X-Gm-Message-State: AOJu0Yxp+JiurSvBKkeLdTwf4W1yDjS7BxqmQR4zmeUsEiFhm2y2vZt/
	IirWJhMyj74qlNRHo8A0z6pE5ATWsgIAQgeUxl4r9/9G2K4a+Jua
X-Google-Smtp-Source: AGHT+IG3SDWR509lgY5kuX8YASxACu9X/zy50TSWsHxqRAzHh4x+PppCWM/Ic8dDztiEGf/sBpV0JQ==
X-Received: by 2002:a05:6512:e94:b0:52c:8339:d09c with SMTP id 2adb3069b0e04-52ccaa56a39mr3428272e87.3.1718876644770;
        Thu, 20 Jun 2024 02:44:04 -0700 (PDT)
Message-ID: <79ea8b2dbddc69468865caaff7590dc5e836046f.camel@gmail.com>
Subject: Re: [PATCH for-4.19 v4] x86/irq: forward pending interrupts to new
 destination in fixup_irqs()
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Date: Thu, 20 Jun 2024 11:44:04 +0200
In-Reply-To: <0b54d032-a473-4f3e-8284-b9fe63cbf26a@suse.com>
References: <20240619095833.76271-1-roger.pau@citrix.com>
	 <0b54d032-a473-4f3e-8284-b9fe63cbf26a@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Wed, 2024-06-19 at 13:53 +0200, Jan Beulich wrote:
> On 19.06.2024 11:58, Roger Pau Monne wrote:
> > fixup_irqs() is used to evacuate interrupts from to-be-offlined CPUs.
> > Given the CPU is to become offline, the normal migration logic used by
> > Xen, where the vector in the previous target(s) is left configured
> > until the interrupt is received on the new destination, is not
> > suitable.
> > 
> > Instead attempt to do as much as possible in order to prevent losing
> > interrupts.  If fixup_irqs() is called from the CPU to be offlined (as
> > is currently the case for CPU hot unplug) attempt to forward pending
> > vectors when interrupts that target the current CPU are migrated to a
> > different destination.
> > 
> > Additionally, for interrupts that have already been moved from the
> > current CPU prior to the call to fixup_irqs() but that haven't been
> > delivered to the new destination (iow: interrupts with
> > move_in_progress set and the current CPU set in ->arch.old_cpu_mask)
> > also check whether the previous vector is pending and forward it to
> > the new destination.
> > 
> > This allows us to remove the window with interrupts enabled at the
> > bottom of fixup_irqs().  Such a window wasn't safe anyway: references
> > to the CPU to become offline are removed from interrupt masks, but the
> > per-CPU vector_irq[] array is not updated to reflect those changes (as
> > the CPU is going offline anyway).
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
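[Archive editor's note: the forwarding step described in the commit message above can be pictured with a toy model. This is hypothetical and heavily simplified — the real Xen code inspects the local APIC's IRR and resends via IPI; all names below are invented for illustration.]

```c
#include <stdint.h>
#include <stdbool.h>

/* Toy model: a CPU's IRR (Interrupt Request Register) as a 256-bit bitmap.
 * If the old vector is still pending on the CPU being offlined, move the
 * pending state to the interrupt's new vector on the new destination. */

#define NR_VECTORS 256

struct cpu_irr {
    uint32_t bits[NR_VECTORS / 32];
};

static bool irr_test(const struct cpu_irr *irr, unsigned int vec)
{
    return irr->bits[vec / 32] & (1u << (vec % 32));
}

static void irr_clear(struct cpu_irr *irr, unsigned int vec)
{
    irr->bits[vec / 32] &= ~(1u << (vec % 32));
}

static void irr_set(struct cpu_irr *irr, unsigned int vec)
{
    irr->bits[vec / 32] |= 1u << (vec % 32);
}

/* Returns true if a pending interrupt was forwarded. */
static bool forward_pending(struct cpu_irr *old_cpu, unsigned int old_vec,
                            struct cpu_irr *new_cpu, unsigned int new_vec)
{
    if (!irr_test(old_cpu, old_vec))
        return false;          /* nothing pending: nothing to do */
    irr_clear(old_cpu, old_vec);
    irr_set(new_cpu, new_vec); /* real code would send an IPI instead */
    return true;
}
```

The same check-and-forward is applied both to interrupts targeting the offlining CPU and to interrupts with move_in_progress still referencing it via ->arch.old_cpu_mask.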


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:47:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744345.1151370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKENd-0001UY-05; Thu, 20 Jun 2024 09:46:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744345.1151370; Thu, 20 Jun 2024 09:46:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKENc-0001UR-Th; Thu, 20 Jun 2024 09:46:56 +0000
Received: by outflank-mailman (input) for mailman id 744345;
 Thu, 20 Jun 2024 09:46:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TjK6=NW=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sKENb-0001UL-DU
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:46:55 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 06949549-2eea-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 11:46:54 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-52bc29c79fdso634782e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 02:46:54 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ca2825691sm1974044e87.38.2024.06.20.02.46.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Jun 2024 02:46:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06949549-2eea-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718876813; x=1719481613; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=IVmOJAJmhlZfiLEh6c/Zy5kvqzQCEEcbE7PFhfaHutA=;
        b=CajdyX/aG7nWI5bdu7ESnIxAbqZ3eYRJcx2fFy1zEuLj2xpg4BcRSYbKPFZAev2kA0
         SsaHYpze6ThRpenOFEXd1ecSj0LD33IfUpn20b2G2QS/gTvjYMokiJSbSIBaih6ZZwzm
         fIL0gVyXiDlI4o+NGIXxZwikFTPuLA/zltSY7pBC7D9gbeJYaP0+c8362bzOf5WPIgGL
         Z8qUXNls8UCV3P0c9BKrp6C60mAwrmpuDjuC2GTiFP6QN0ES/8idX9M2U/ZrRWtyER1N
         jXULhDVWm/9TtOTOWVuVQlRisc9tbFCswf3bDPs0TxAzff6EbmzSylQGtE8QdlqnQKei
         TSuQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718876813; x=1719481613;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=IVmOJAJmhlZfiLEh6c/Zy5kvqzQCEEcbE7PFhfaHutA=;
        b=B2QJ8SSykeQLjejytY79CVwZxKALrmtxPk0yJ/oveG7j4g+MXme7olCSUV0VZAWKRZ
         UGWJ+5AayM1k9fwYMolT2cTzTREkupqL7Re+rme85mJZOMF6xF92GJCNgmzXU8YapAmE
         O3+gTg/LLZAu+kPZyGcYCXzw2J5aTe54QRovlaHWtwqtDUA9ZxLoCCAOPXLeICd1WRPu
         hQ9j8RVU874dvykeA+V03KaGXctlqoPZiKFXJakzzuGSrwsaSpbZdqfPP/Wy/LnoZY6X
         s/VmUAWvhZyqt+wGEKwBTxfFKwk00p6f72yqUhltpUcLy4Ikm71AmZzo+RRyf9dE6vcR
         ej7Q==
X-Gm-Message-State: AOJu0YwqQpAWnFhVuqHOenyA4eKUmEmaPavf1rGetgO5ZOja8hSPHPYU
	bZW9wkSIZSa/o5FQWn5A1YpQKuzDUWMaF9UPOHIolEshlasu3jaJ
X-Google-Smtp-Source: AGHT+IE4mnsB4r42pO+MWOUEopECXY+51GQ6m/hDpFQDF8SZUpNOkuFoNm88259EKINTBNc2VcQnXw==
X-Received: by 2002:a05:6512:280e:b0:52c:8b69:e039 with SMTP id 2adb3069b0e04-52ccaa2a937mr4288878e87.4.1718876813295;
        Thu, 20 Jun 2024 02:46:53 -0700 (PDT)
Message-ID: <e1da057746fe4724659b094cf4cd0bc0cc95c48c.camel@gmail.com>
Subject: Re: [PATCH for-4.19] livepatch: use appropriate type for buffer
 offset variables
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Ross Lagerwall <ross.lagerwall@citrix.com>, Jan Beulich
 <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
 Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Date: Thu, 20 Jun 2024 11:46:52 +0200
In-Reply-To: <CAG7k0ErGHynwYmxWuftUT=yFF0Zrttx0JEAjh3bDzPVzM_MgzA@mail.gmail.com>
References: <a4d780fd-90c2-405e-be21-c323a22a78c6@suse.com>
	 <CAG7k0ErGHynwYmxWuftUT=yFF0Zrttx0JEAjh3bDzPVzM_MgzA@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Thu, 2024-06-20 at 09:04 +0100, Ross Lagerwall wrote:
> On Thu, Jun 20, 2024 at 8:16 AM Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > As was made noticeable by the last of the commits referenced below,
> > using a fixed-size type for such purposes is not only against
> > ./CODING_STYLE, but can lead to actual issues. Switch to using size_t
> > instead, thus also allowing calculations to be lighter-weight in
> > 32-bit builds.
> > 
> > No functional change for 64-bit builds.
> > 
> > Link: https://gitlab.com/xen-project/xen/-/jobs/7136417308
> > Fixes: b145b4a39c13 ("livepatch: Handle arbitrary size names with
> > the list operation")
> > Fixes: 5083e0ff939d ("livepatch: Add metadata runtime retrieval
> > mechanism")
> > Fixes: 43d5c5d5f70b ("xen: avoid UB in guest handle arithmetic")
> > Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> 
> Reviewed-by: Ross Lagerwall <ross.lagerwall@citrix.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
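[Archive editor's note: the point about preferring size_t for buffer offsets can be sketched as follows. This is an invented illustration, not the livepatch code the patch actually touches.]

```c
#include <stddef.h>
#include <string.h>

/* Accumulating a buffer offset in a fixed-width type (e.g. uint32_t) can
 * truncate or wrap independently of the platform's object-size range;
 * size_t by definition spans any valid object size, and on 32-bit builds
 * avoids widening arithmetic. */

static size_t pack_names(char *buf, size_t buf_len,
                         const char *const *names, size_t count)
{
    size_t off = 0;  /* size_t, not uint32_t: matches buf_len's range */

    for (size_t i = 0; i < count; i++) {
        size_t len = strlen(names[i]) + 1;  /* include the NUL */
        if (len > buf_len - off)
            break;                          /* would overrun the buffer */
        memcpy(buf + off, names[i], len);
        off += len;
    }
    return off;  /* total bytes written */
}
```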


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:49:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:49:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744356.1151380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEQL-0002Zo-GQ; Thu, 20 Jun 2024 09:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744356.1151380; Thu, 20 Jun 2024 09:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEQL-0002Zh-D3; Thu, 20 Jun 2024 09:49:45 +0000
Received: by outflank-mailman (input) for mailman id 744356;
 Thu, 20 Jun 2024 09:49:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TjK6=NW=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sKEQJ-0002ZZ-II
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:49:43 +0000
Received: from mail-lf1-x12e.google.com (mail-lf1-x12e.google.com
 [2a00:1450:4864:20::12e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b0c861b-2eea-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 11:49:42 +0200 (CEST)
Received: by mail-lf1-x12e.google.com with SMTP id
 2adb3069b0e04-52c84a21b8cso622591e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 02:49:42 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ca2872305sm1993975e87.168.2024.06.20.02.49.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Jun 2024 02:49:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b0c861b-2eea-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718876982; x=1719481782; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=jpwNq3EKrDRG+VWcZwBrJhjWEVq3entQLxkRUn3IUj8=;
        b=B6OOttWEt9Z6tnvhVHE3gM8+6zKMrYxjZJPwT6Aip2PTpm/rYu4zG902yk7xvZ88qy
         YkG8WqSpGIN+ytj/rIbWtWDaAgQDaMwyBQhunaqtXk0HexWnfE5x8eWbYlA6Mh0aX18b
         xgQBJbFeLgobwIfO2G5in/seX0sw0tHuEIOXN6sGZXRiEwpKQ7bN+C8Wd291rBFi4lJp
         dnL977+tua2Z/tVzD0Q6wvbfnT/EWuVG3TTMNGx2wHrqqv9tqmidjyHNMswdcNCAPZIz
         XF51wyhBrbw2uf4G/Fsd9e2RRYwxjgDqgRwEFz3DCUcRuTUWlPWv5H9vSQA8wJpjPbuU
         stVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718876982; x=1719481782;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=jpwNq3EKrDRG+VWcZwBrJhjWEVq3entQLxkRUn3IUj8=;
        b=uuvWTEdqi/Gtf06ySU6YZ/nN9Bdd3f0hgJq4BtWE0QQispWqD2vhyBHwffMKsALvlZ
         /bH78WgOGj55UTwfbBxzqLGPFBBmj47dTFBV5VLS67wXt3GFrFwBryA4kTuacN/sJKIS
         9Neza+TvbQRp+Xs5K4Cqj3i1UCrUSx8Y4mQz9Mn1CK1X05M1yJZMW5Z/OdljxaZ7k9G7
         7fErQ0FynbLQQl/b7FoGJv8OXhsmT2VCgeJqgKvyHqfqYHBHazQ13qrqu+jqlFseCZYd
         w+BDb970Yryk6+uU6oi/JtHAWXwtmhTMZaEPGdzmoT1o2LnRZNqIbV/roR2YmfjPus9h
         bAGg==
X-Forwarded-Encrypted: i=1; AJvYcCUkWqlbAFHb7aenvtn/pTMIk57LKJG9ceJr66uXLRXXflgJgE1ZIheRWRcRE5dlY9oqd3Mazc2g1AYPtbjh5f58OIatfQcKeLEkLtCtdjM=
X-Gm-Message-State: AOJu0Yx/YFRtJo7872LUDRWkeuU0R8S7j7/ushiSO3q3FBzseDCEzMlC
	JddJlBCtWqHEhXUQVGFTzgX04Nf4bYJFBixEpHYAmIxQildvkD0K
X-Google-Smtp-Source: AGHT+IFVQGy75AgyQKRu+PPqCwD+heHmRJnRnFg8jVCLYmN6muhQ9vZDwxO527BuprsGNF0GdK2cVg==
X-Received: by 2002:a19:7418:0:b0:52b:c29e:704d with SMTP id 2adb3069b0e04-52cc47ed3f9mr1458619e87.17.1718876981888;
        Thu, 20 Jun 2024 02:49:41 -0700 (PDT)
Message-ID: <6f93c9a4e661013ae00581959cdc7445a7c6025a.camel@gmail.com>
Subject: Re: [PATCH for-4.19] hotplug: Restore block-tap phy compatibility
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Anthony PERARD <anthony.perard@vates.tech>, Jan Beulich
 <jbeulich@suse.com>
Cc: Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org,
  Jason Andryuk <jason.andryuk@amd.com>, Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 20 Jun 2024 11:49:41 +0200
In-Reply-To: <ZnLsMOQ3zt4W855q@l14>
References: <20240516022212.5034-1-jandryuk@gmail.com>
	 <64083e01-edf1-4395-a9d7-82e82d220de7@suse.com>
	 <9678073f-82d5-4402-b5a0-e24985c1446b@amd.com>
	 <7de20763-b9bc-4dfc-b250-8f83c42e9e16@suse.com> <ZnLsMOQ3zt4W855q@l14>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Wed, 2024-06-19 at 14:33 +0000, Anthony PERARD wrote:
> On Wed, Jun 19, 2024 at 02:07:04PM +0200, Jan Beulich wrote:
> > On 16.05.2024 15:52, Jason Andryuk wrote:
> > > On 2024-05-16 03:41, Jan Beulich wrote:
> > > > On 16.05.2024 04:22, Jason Andryuk wrote:
> > > > > From: Jason Andryuk <jason.andryuk@amd.com>
> > > > > 
> > > > > From: Jason Andryuk <jason.andryuk@amd.com>
> > > > 
> > > > Two identical From: (also in another patch of yours, while in
> > > > yet another one you have two _different_ ones, when only one
> > > > will survive into the eventual commit anyway)?
> > > 
> > > Sorry about that.  Since I was sending from my gmail account, I
> > > thought I needed explicit From: lines to ensure the authorship was
> > > listed w/ amd.com.  I generated the patches with
> > > `git format-patch --from`, to get the explicit From: lines, and
> > > then sent with `git send-email`.  The send-email step then
> > > inserted the additional lines.  I guess it added From amd.com
> > > since I had changed to that address in .gitconfig.
> > > 
> > > > > backendtype=phy using the blktap kernel module needs to use
> > > > > write_dev, but tapback can't support that.  tapback should
> > > > > perform better, but make the script compatible with the old
> > > > > kernel module again.
> > > > > 
> > > > > Signed-off-by: Jason Andryuk <jason.andryuk@amd.com>
> > > > 
> > > > Should there be a Fixes: tag here?
> > > 
> > > That makes sense.
> > > 
> > > Fixes: 76a484193d ("hotplug: Update block-tap")
> > 
> > Surely this wants going into 4.19? Thus - Anthony, Oleksii?
> 
> Yes, I think so.
> 
> Acked-by: Anthony PERARD <anthony.perard@vates.tech>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 09:55:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 09:55:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744364.1151389 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEW6-00042Q-2o; Thu, 20 Jun 2024 09:55:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744364.1151389; Thu, 20 Jun 2024 09:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEW6-00042J-04; Thu, 20 Jun 2024 09:55:42 +0000
Received: by outflank-mailman (input) for mailman id 744364;
 Thu, 20 Jun 2024 09:55:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TjK6=NW=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sKEW4-00042D-LV
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 09:55:40 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3fec0094-2eeb-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 11:55:39 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a6f09b457fdso68171566b.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 02:55:39 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56db5c3dsm751289866b.55.2024.06.20.02.55.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Jun 2024 02:55:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3fec0094-2eeb-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718877339; x=1719482139; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=/Wp4kjnDRYcqsWBVpwT0Uxh7oOecFrkJm7uLP4XBRmk=;
        b=YoAxv1yGuFk7THVL5gljNIj70dQWJNSN2L4orbmPQvgqrJN8N0t12ZzdhVUPx6R8Ep
         52x4h8rioNc2XyHwEYMTTTmZLV75MAcm+me0xy/5R8+U3BPw4YPG2HngCmUHLJBQPcm9
         ay9rJonbzKGqWVY4siVU8pvtu66tIT34nCP1YiC5kEK94lDisOwNcBmFTnRsq4fxCo+E
         s2C4hW/0wHgtobvqJ1YpM5kTjeqr5Eb2GqdHeqjZmMwUZ/fFP1EJJSR+uZCwDIC82Kgx
         348S52NAjBJo4fmcFerkHwZsl0enbYZhnizPO+2J8nkJg01FDLnttYdavl2jW9gJ47cL
         dPzg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718877339; x=1719482139;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=/Wp4kjnDRYcqsWBVpwT0Uxh7oOecFrkJm7uLP4XBRmk=;
        b=U/DDf5+TVIvCG3TgarObOEBDvUz7k+sVy433GDCZ7tt2mGmV+Ab4RP6sQfItoNU+0W
         7ZimScUl1zGhv+zwTdX9CXIl3hxV9OYV4p0V1X5BghtbO5Ve+vTeebaSi4XDcQimM0/u
         FK/ZoG4r0PfdeUg7+nNtnwINvT97hEE32QYd+L8khRubIihtT0IZIRVaLYQL2yp4Wu+R
         DZSa3ziztqP8t3sWiBAQi1RdfI5nh3SBZ5y188QZVpIdylxS6Xbe4QWNCiG38cf0xgNS
         9VD8+GK7+m5D4TbeF0/ckQ4SK2+61Mc5doCW9YD79i1/IJjKnIodd+YPmm3sJLlafLdm
         wuQg==
X-Forwarded-Encrypted: i=1; AJvYcCVPXXSmSJlsE89pOS6KGvCxL8o7A23bkTdPt7u4st9l6RYzCywZTkXFTIFQATi7WYZPYFs9XJgrHVR/+pZt0o4GkAxw8cC+EVi0aFTy928=
X-Gm-Message-State: AOJu0YzTwoX67l5KssHmUHA3K2WbDYk5DyH34HLk6KykfQEdygLCKl8F
	xUq2NokLZ+PCQ2h0+vD9UJrZ7lzyS+R0QDoipKBde6jU3l0fdK/A4YHMAQ==
X-Google-Smtp-Source: AGHT+IFBjOvRXMiCaHQhHN5fYmlm9v1h2WC/eUP/667F2pi5Q9ihWG0QiLsPc3YfvnB8NFIXJofr1g==
X-Received: by 2002:a17:906:2a18:b0:a6f:4de8:f490 with SMTP id a640c23a62f3a-a6fab6184d9mr305731066b.24.1718877339021;
        Thu, 20 Jun 2024 02:55:39 -0700 (PDT)
Message-ID: <9bd68869a560cd405cbcce481b5721637671a996.camel@gmail.com>
Subject: Re: [PATCH for-4.19?] tools/libs/light: Fix nic->vlan memory
 allocation
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Leigh Brown <leigh@solinno.co.uk>, 
 Anthony Perard <anthony@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jason Andryuk
	 <jandryuk@gmail.com>, Jason Andryuk <jason.andryuk@amd.com>, Xen Devel
	 <xen-devel@lists.xenproject.org>
Date: Thu, 20 Jun 2024 11:55:37 +0200
In-Reply-To: <a8fd0504-23b7-473a-9056-6b51c20e6468@suse.com>
References: <20240520164400.15740-1-leigh@solinno.co.uk>
	 <c600e5e8-d169-417c-bc02-d33e84dca0fb@amd.com>
	 <a8fd0504-23b7-473a-9056-6b51c20e6468@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40) 
MIME-Version: 1.0

On Wed, 2024-06-19 at 13:57 +0200, Jan Beulich wrote:
> On 20.05.2024 19:08, Jason Andryuk wrote:
> > On 2024-05-20 12:44, Leigh Brown wrote:
> > > After the following commit:
> > > 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to
> > > libxl_device_nic")
> > > xl list -l aborts with a double free error if a domain has at
> > > least one vif defined:
> > > 
> > >    $ sudo xl list -l
> > >    free(): double free detected in tcache 2
> > >    Aborted
> > > 
> > > Originally, the vlan field was called vid and was defined as an
> > > integer.  It was appropriate to call libxl__xs_read_checked() with
> > > gc passed as the string data was copied to a different variable.
> > > However, the final version uses a string data type and the call
> > > should have been changed to use NOGC instead of gc to allow that
> > > data to live past the gc controlled lifetime, in line with the
> > > other string fields.
> > > 
> > > This patch makes the change to pass NOGC instead of gc and moves
> > > the new code to be next to the other string fields (fixing a
> > > couple of errant tabs along the way), as recommended by Jason.
> > > 
> > > Fixes: 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to
> > > libxl_device_nic")
> > > Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
> > 
> > Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
> 
> I notice this wasn't Cc-ed to the maintainer, which likely is the
> reason for there not having been an ack yet. Anthony, any thoughts?
> 
> Further at this point, bug fix or not, it would likely also need a
> release ack. Oleksii, thoughts?
It seems to me that it is a bug fix, so it should be in the release:
Release-acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 10:03:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 10:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744373.1151400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEdD-0006Ay-Qv; Thu, 20 Jun 2024 10:03:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744373.1151400; Thu, 20 Jun 2024 10:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEdD-0006Ar-My; Thu, 20 Jun 2024 10:03:03 +0000
Received: by outflank-mailman (input) for mailman id 744373;
 Thu, 20 Jun 2024 10:03:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKEdD-0006AV-79
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 10:03:03 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4703056e-2eec-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 12:03:01 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cfeso6802621fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 03:03:01 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9c2560b9csm16685705ad.224.2024.06.20.03.02.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 03:03:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4703056e-2eec-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718877780; x=1719482580; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=cow1dyunLsKcbbFUcMDyyuUU2arBYKLGUfIwHegxd7M=;
        b=IRvYTnzD36xjKUqwjiLEZ4RKE2RPjYDHAu3bJl7OuLnHng9UZm86JWozZGrSob7hOM
         hg/r/A5NiRqQmvlJbg+6aKdI3xHD7QuhEo8+etoUMEc7wJzUsVUB6/eNYbbJEqZRwq+z
         3BbuwBgaytUdxKd0yOUD6qnojH05KPA225gVE4KNPxS825EUwHLMCM0qP+YLERQ6KGUh
         6pO+5DLlyjjZz0nE90k3DiKutch7Fv+JDIfGcmgdGO6tbIlE3Wfxk55apR6g+Q9QaRUe
         kQNduXWRXZ1iBQszRKSqKfDPHS/qkOm5fBgQFncfrwFWCMos/DImpQNX4GaPl4UbLeZy
         gYgw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718877780; x=1719482580;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=cow1dyunLsKcbbFUcMDyyuUU2arBYKLGUfIwHegxd7M=;
        b=roOO5uwpNugmnJvbjcVsL/TiGsk0Km9yRirBy0g+iYOfrvOU8jrHs+hqJ9vZSGwIXD
         lRR0SEzpqC/J2K6xsJptvDAZ4bllxJcu3jObapoOoJRl6FstB7lCwt0flh1/0YebAiRg
         AV2O4p+Y8PdCKUcq+s1h0IjvECkRvOwVWLSSRV1YEYyy1DZ1QuBuwkPCotkErglvLvpZ
         aFP2Mb6H7xAJjHqle9f/5BOOhWXTsdK16xEmQKfkv+8o1hYGADPGbsKyLaxT5xOJkbO0
         exDltHxC7GJfyp0OaClyklLElMlnKt8yDs/s2WoB5+Am+7xKsOUV6r5wn5Xcbk8ffNKK
         ep9g==
X-Forwarded-Encrypted: i=1; AJvYcCUwB+Iph94OthTD4wIOs/TXKj7dbssrq0FpUWO1MUO+4KE1qNXvs6um3KAb8xeYWvhT2xoITc07oYQTHt2TjorUuXogXL1CQ/UtVY54z1g=
X-Gm-Message-State: AOJu0YwAx9e/H2XHNwdt4FJDcWsnLDbO4Ig1dwLBG3R2VQY2kvW+FH1i
	jreGpfCwjTpf6X1O/Jxgiwjoq8TKJp3xesrnnNJXTGC5Rs2PyXTVZEkzb8nq+Q==
X-Google-Smtp-Source: AGHT+IEcMIm7+Uc0QVu6WxcvSehHFwynkTCRGetU+DRoPS087TYmMpvPbXyIpIwSK3uRzOf88AWdtw==
X-Received: by 2002:a2e:87c2:0:b0:2ec:488c:cfe5 with SMTP id 38308e7fff4ca-2ec488cd173mr5989071fa.8.1718877780582;
        Thu, 20 Jun 2024 03:03:00 -0700 (PDT)
Message-ID: <fb3c9a3b-8630-4c25-9013-720535eba322@suse.com>
Date: Thu, 20 Jun 2024 12:02:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] tools/libs/light: Fix nic->vlan memory allocation
To: Anthony PERARD <anthony.perard@vates.tech>
Cc: Leigh Brown <leigh@solinno.co.uk>,
 Xen Devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jason Andryuk
 <jandryuk@gmail.com>, Jason Andryuk <jason.andryuk@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240520164400.15740-1-leigh@solinno.co.uk>
 <c600e5e8-d169-417c-bc02-d33e84dca0fb@amd.com> <ZnLVxB2XuWL9UKWI@l14>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZnLVxB2XuWL9UKWI@l14>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 19.06.2024 14:57, Anthony PERARD wrote:
> On Mon, May 20, 2024 at 01:08:03PM -0400, Jason Andryuk wrote:
>> On 2024-05-20 12:44, Leigh Brown wrote:
>>> After the following commit:
>>> 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to libxl_device_nic")
>>> xl list -l aborts with a double free error if a domain has at least
>>> one vif defined:
>>>
>>>    $ sudo xl list -l
>>>    free(): double free detected in tcache 2
>>>    Aborted
>>>
>>> Originally, the vlan field was called vid and was defined as an integer.
>>> It was appropriate to call libxl__xs_read_checked() with gc passed as
>>> the string data was copied to a different variable.  However, the final
>>> version uses a string data type and the call should have been changed
>>> to use NOGC instead of gc to allow that data to live past the gc
>>> controlled lifetime, in line with the other string fields.
>>>
>>> This patch makes the change to pass NOGC instead of gc and moves the
>>> new code to be next to the other string fields (fixing a couple of
>>> errant tabs along the way), as recommended by Jason.
>>>
>>> Fixes: 3bc14e4fa4b9 ("tools/libs/light: Add vlan field to libxl_device_nic")
>>> Signed-off-by: Leigh Brown <leigh@solinno.co.uk>
>>
>> Reviewed-by: Jason Andryuk <jason.andryuk@amd.com>
> 
> Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Btw, at the example of this: are you meaning to update ./MAINTAINERS with
that new email address of yours? Strictly speaking, I think for Acked-by: to
actually fulfill its purpose (and for R-b to have its normally implied
meaning of "ack" when coming from a maintainer), it probably ought to match
the corresponding entry in ./MAINTAINERS.

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 10:23:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 10:23:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744391.1151429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEx6-0001ZH-QJ; Thu, 20 Jun 2024 10:23:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744391.1151429; Thu, 20 Jun 2024 10:23:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKEx6-0001ZA-NO; Thu, 20 Jun 2024 10:23:36 +0000
Received: by outflank-mailman (input) for mailman id 744391;
 Thu, 20 Jun 2024 10:23:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F3B6=NW=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sKEx5-0001Z3-OQ
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 10:23:35 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2061c.outbound.protection.outlook.com
 [2a01:111:f403:2412::61c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 25380066-2eef-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 12:23:34 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by CH3PR12MB7762.namprd12.prod.outlook.com (2603:10b6:610:151::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.34; Thu, 20 Jun
 2024 10:23:29 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Thu, 20 Jun 2024
 10:23:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25380066-2eef-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=U2Lcb4ldGKzfblAOsIfwnLDB+rADTWRx0WXWZYeKgGfIx5mQX/trrEzeIe8d2/T2X0jFyTM6cRguSn3Ga6nX+4Ik1HKA5mzTJA9m4nhONcoF3VQP0FGXkaKA8L7ZJtoKmpdnSuwQerUN9yVSkNQkmhiY7N1yaCRmG33/l+hTvhOXmZHFtxNXyesBnOji2Gm3XDKewiyBcA20BjCksArH4JP3r9yQQrJ3Vo4Lu9/+6c7aQ2Drz8X/qOcaUsgnetQ438sol5K4B8Ry6zOSMGh8cjWD0z54ykEAseIZvaWpveukx1DjLm0K/rYzOXNuT9rtgP7sOnD7hwD78dNFjwvJdg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LR7W2A7mNb3N8Yx/URwMgAiAS+NDVjP4+5CZq5y4040=;
 b=TT3kb/HGlM5oVkZz1+MH/OLCIPXVvziFfHS1us9BByOIzqvTzUDehyVaVHQRAuI9Uq93HRU9tB4ErSqWrar2PjQ4fVebCr4V6TJHSOjdsXhWK71F3p5+wz8/lv1XdvOxWK4j7lmGYSDdwV7BuSQ/s0dTXAFZWqPeaWw4rh9RYhsSlr+mPeDsVBHoVxKQd6xttCuE59r4CrX3L2erQyWyea3h7L3xECHMQLQA1arPX4E7DiDiDt8NcOBS4pCWfUuOmatY6unHm8voN+EeHpqp1VhZmEnm9+JB1YsX31hJkcYrIqsRt/uiyBdwBz7HZXAX1lLWTr2ipEt2scwoBs9oeA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LR7W2A7mNb3N8Yx/URwMgAiAS+NDVjP4+5CZq5y4040=;
 b=zypIxd9qMl47UdAmAdYF1iCl8KPG+gvjEEE1bAz7eKcShmKKnILPumZJuIhlQTY6HBVUt7/39yRMm9M9Z225rcM8hKEjoEsT0cFNfigYhE77OW+N4FfYMXW379swk06Quz9RO3GMY93SF+vhpjrg8ysaRKV/TiqAllgPeen2zX0=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Topic: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Index:
 AQHawJToSxHkdXzldkCDZE5kihITT7HMD9QAgAGTOgD//5t7AIADeOiA//+SewCAAKx5AA==
Date: Thu, 20 Jun 2024 10:23:29 +0000
Message-ID:
 <BL1PR12MB5849366A442BE6C4C192ABB0E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
 <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <099beaac-ed1f-459b-8c2b-42b325f8e4a4@suse.com>
In-Reply-To: <099beaac-ed1f-459b-8c2b-42b325f8e4a4@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7698.013)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|CH3PR12MB7762:EE_
x-ms-office365-filtering-correlation-id: fc66c377-4aeb-4d08-7c62-08dc9113079d
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|1800799021|7416011|366013|376011|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(1800799021)(7416011)(366013)(376011)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <0568CFC1C6D6A44D8A469E96E82D30D3@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fc66c377-4aeb-4d08-7c62-08dc9113079d
X-MS-Exchange-CrossTenant-originalarrivaltime: 20 Jun 2024 10:23:29.7297
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pIKFE8BVCrYcmDDXrku/nf5TCKJGXcI53NdfZJG14ornJ/0y36rNHf09sGm3AWUl09oUtGme8VIGoJnnj09C6w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7762

T24gMjAyNC82LzIwIDE1OjQzLCBKYW4gQmV1bGljaCB3cm90ZToNCj4gT24gMjAuMDYuMjAyNCAw
OTowMywgQ2hlbiwgSmlxaWFuIHdyb3RlOg0KPj4gT24gMjAyNC82LzE4IDE3OjEzLCBKYW4gQmV1
bGljaCB3cm90ZToNCj4+PiBPbiAxOC4wNi4yMDI0IDEwOjEwLCBDaGVuLCBKaXFpYW4gd3JvdGU6
DQo+Pj4+IE9uIDIwMjQvNi8xNyAyMzoxMCwgSmFuIEJldWxpY2ggd3JvdGU6DQo+Pj4+PiBPbiAx
Ny4wNi4yMDI0IDExOjAwLCBKaXFpYW4gQ2hlbiB3cm90ZToNCj4+Pj4+PiAtLS0gYS90b29scy9s
aWJzL2xpZ2h0L2xpYnhsX3BjaS5jDQo+Pj4+Pj4gKysrIGIvdG9vbHMvbGlicy9saWdodC9saWJ4
bF9wY2kuYw0KPj4+Pj4+IEBAIC0xNDA2LDYgKzE0MDYsMTIgQEAgc3RhdGljIGJvb2wgcGNpX3N1
cHBfbGVnYWN5X2lycSh2b2lkKQ0KPj4+Pj4+ICAjZW5kaWYNCj4+Pj4+PiAgfQ0KPj4+Pj4+ICAN
Cj4+Pj4+PiArI2RlZmluZSBQQ0lfREVWSUQoYnVzLCBkZXZmbilcDQo+Pj4+Pj4gKyAgICAgICAg
ICAgICgoKCh1aW50MTZfdCkoYnVzKSkgPDwgOCkgfCAoKGRldmZuKSAmIDB4ZmYpKQ0KPj4+Pj4+
ICsNCj4+Pj4+PiArI2RlZmluZSBQQ0lfU0JERihzZWcsIGJ1cywgZGV2Zm4pIFwNCj4+Pj4+PiAr
ICAgICAgICAgICAgKCgoKHVpbnQzMl90KShzZWcpKSA8PCAxNikgfCAoUENJX0RFVklEKGJ1cywg
ZGV2Zm4pKSkNCj4+Pj4+DQo+Pj4+PiBJJ20gbm90IGEgbWFpbnRhaW5lciBvZiB0aGlzIGZpbGU7
IGlmIEkgd2VyZSwgSSdkIGFzayB0aGF0IGZvciByZWFkYWJpbGl0eSdzDQo+Pj4+PiBzYWtlIGFs
bCBleGNlc3MgcGFyZW50aGVzZXMgYmUgZHJvcHBlZCBmcm9tIHRoZXNlLg0KPj4+PiBJc24ndCBp
dCBhIGNvZGluZyByZXF1aXJlbWVudCB0byBlbmNsb3NlIGVhY2ggZWxlbWVudCBpbiBwYXJlbnRo
ZXNlcyBpbiB0aGUgbWFjcm8gZGVmaW5pdGlvbj8NCj4+Pj4gSXQgc2VlbXMgb3RoZXIgZmlsZXMg
YWxzbyBkbyB0aGlzLiBTZWUgdG9vbHMvbGlicy9saWdodC9saWJ4bF9pbnRlcm5hbC5oDQo+Pj4N
Cj4+PiBBcyBzYWlkLCBJJ20gbm90IGEgbWFpbnRhaW5lciBvZiB0aGlzIGNvZGUuIFlldCB3aGls
ZSBJJ20gYXdhcmUgdGhhdCBsaWJ4bA0KPj4+IGhhcyBpdHMgb3duIENPRElOR19TVFlMRSwgSSBj
YW4ndCBzcG90IGFueXRoaW5nIHRvd2FyZHMgZXhjZXNzaXZlIHVzZSBvZg0KPj4+IHBhcmVudGhl
c2VzIHRoZXJlLg0KPj4gU28sIHdoaWNoIHBhcmVudGhlc2VzIGRvIHlvdSB0aGluayBhcmUgZXhj
ZXNzaXZlIHVzZT8NCj4gDQo+ICNkZWZpbmUgUENJX0RFVklEKGJ1cywgZGV2Zm4pXA0KPiAgICAg
ICAgICAgICAoKCh1aW50MTZfdCkoYnVzKSA8PCA4KSB8ICgoZGV2Zm4pICYgMHhmZikpDQo+IA0K
PiAjZGVmaW5lIFBDSV9TQkRGKHNlZywgYnVzLCBkZXZmbikgXA0KPiAgICAgICAgICAgICAoKCh1
aW50MzJfdCkoc2VnKSA8PCAxNikgfCBQQ0lfREVWSUQoYnVzLCBkZXZmbikpDQpUaGFua3MsIHdp
bGwgY2hhbmdlIGluIG5leHQgdmVyc2lvbi4NCg0KPiANCj4+Pj4+PiBAQCAtMTQ4Niw2ICsxNDk2
LDE4IEBAIHN0YXRpYyB2b2lkIHBjaV9hZGRfZG1fZG9uZShsaWJ4bF9fZWdjICplZ2MsDQo+Pj4+
Pj4gICAgICAgICAgZ290byBvdXRfbm9faXJxOw0KPj4+Pj4+ICAgICAgfQ0KPj4+Pj4+ICAgICAg
aWYgKChmc2NhbmYoZiwgIiV1IiwgJmlycSkgPT0gMSkgJiYgaXJxKSB7DQo+Pj4+Pj4gKyNpZmRl
ZiBDT05GSUdfWDg2DQo+Pj4+Pj4gKyAgICAgICAgc2JkZiA9IFBDSV9TQkRGKHBjaS0+ZG9tYWlu
LCBwY2ktPmJ1cywNCj4+Pj4+PiArICAgICAgICAgICAgICAgICAgICAgICAgKFBDSV9ERVZGTihw
Y2ktPmRldiwgcGNpLT5mdW5jKSkpOw0KPj4+Pj4+ICsgICAgICAgIGdzaSA9IHhjX3BoeXNkZXZf
Z3NpX2Zyb21fZGV2KGN0eC0+eGNoLCBzYmRmKTsNCj4+Pj4+PiArICAgICAgICAvKg0KPj4+Pj4+
ICsgICAgICAgICAqIE9sZCBrZXJuZWwgdmVyc2lvbiBtYXkgbm90IHN1cHBvcnQgdGhpcyBmdW5j
dGlvbiwNCj4+Pj4+DQo+Pj4+PiBKdXN0IGtlcm5lbD8NCj4+Pj4gWWVzLCB4Y19waHlzZGV2X2dz
aV9mcm9tX2RldiBkZXBlbmRzIG9uIHRoZSBmdW5jdGlvbiBpbXBsZW1lbnRlZCBvbiBsaW51eCBr
ZXJuZWwgc2lkZS4NCj4+Pg0KPj4+IE9rYXksIGFuZCB3aGVuIHRoZSBrZXJuZWwgc3VwcG9ydHMg
aXQgYnV0IHRoZSB1bmRlcmx5aW5nIGh5cGVydmlzb3IgZG9lc24ndA0KPj4+IHN1cHBvcnQgd2hh
dCB0aGUga2VybmVsIHdhbnRzIHRvIHVzZSBpbiBvcmRlciB0byBmdWxmaWxsIHRoZSByZXF1ZXN0
LCBhbGwNCj4+IEkgZG9uJ3Qga25vdyB3aGF0IHRoaW5ncyB5b3UgbWVudGlvbmVkIGh5cGVydmlz
b3IgZG9lc24ndCBzdXBwb3J0IGFyZSwNCj4+IGJlY2F1c2UgeGNfcGh5c2Rldl9nc2lfZnJvbV9k
ZXYgaXMgdG8gZ2V0IHRoZSBnc2kgb2YgcGNpZGV2IHRocm91Z2ggc2JkZiBpbmZvcm1hdGlvbiwN
Cj4+IHRoYXQgcmVsYXRpb25zaGlwIGNhbiBiZSBnb3Qgb25seSBpbiBkb20wIGluc3RlYWQgb2Yg
WGVuIGh5cGVydmlzb3IuDQo+Pg0KPj4+IGlzIGZpbmU/IChTZWUgYWxzbyBiZWxvdyBmb3Igd2hh
dCBtYXkgYmUgbmVlZGVkIGluIHRoZSBoeXBlcnZpc29yLCBldmVuIGlmDQo+PiBZb3UgbWVhbiB4
Y19waHlzZGV2X21hcF9waXJxIG5lZWRzIGdzaT8NCj4gDQo+IEknZCBwdXQgaXQgc2xpZ2h0bHkg
ZGlmZmVyZW50bHk6IFlvdSBhcnJhbmdlIGZvciB0aGF0IGZ1bmN0aW9uIHRvIG5vdyB0YWtlIGEN
Cj4gR1NJIHdoZW4gdGhlIGNhbGxlciBpcyBQVkguIEJ1dCB5ZXMsIHRoZSBmdW5jdGlvbiwgd2hl
biB1c2VkIHdpdGgNCj4gTUFQX1BJUlFfVFlQRV9HU0ksIGNsZWFybHkgZXhwZWN0cyBhIEdTSSBh
cyBpbnB1dCAoc2VlIGFsc28gYmVsb3cpLg0KPiANCj4+PiB0aGlzIElPQ1RMIHdvdWxkIGJlIHNh
dGlzZmllZCBieSB0aGUga2VybmVsIHdpdGhvdXQgbmVlZGluZyB0byBpbnRlcmFjdCB3aXRoDQo+
Pj4gdGhlIGh5cGVydmlzb3IuKQ0KPj4+DQo+Pj4+Pj4gKyAgICAgICAgICogc28gaWYgZmFpbCwg
a2VlcCB1c2luZyBpcnE7IGlmIHN1Y2Nlc3MsIHVzZSBnc2kNCj4+Pj4+PiArICAgICAgICAgKi8N
Cj4+Pj4+PiArICAgICAgICBpZiAoZ3NpID4gMCkgew0KPj4+Pj4+ICsgICAgICAgICAgICBpcnEg
PSBnc2k7DQo+Pj4+Pg0KPj4+Pj4gSSdtIHN0aWxsIHB1enpsZWQgYnkgdGhpcywgd2hlbiBieSBu
b3cgSSB0aGluayB3ZSd2ZSBzdWZmaWNpZW50bHkgY2xhcmlmaWVkDQo+Pj4+PiB0aGF0IElSUXMg
YW5kIEdTSXMgdXNlIHR3byBkaXN0aW5jdCBudW1iZXJpbmcgc3BhY2VzLg0KPj4+Pj4NCj4+Pj4+
IEFsc28sIGFzIHByZXZpb3VzbHkgaW5kaWNhdGVkLCB5b3UgY2FsbCB0aGlzIGZvciBQViBEb20w
IGFzIHdlbGwuIEFpdWkgb24NCj4+Pj4+IHRoZSBhc3N1bXB0aW9uIHRoYXQgaXQnbGwgZmFpbC4g
V2hhdCBpZiB3ZSBkZWNpZGUgdG8gbWFrZSB0aGUgZnVuY3Rpb25hbGl0eQ0KPj4+Pj4gYXZhaWxh
YmxlIHRoZXJlLCB0b28gKGlmIG9ubHkgZm9yIGluZm9ybWF0aW9uYWwgcHVycG9zZXMsIG9yIGZv
cg0KPj4+Pj4gY29uc2lzdGVuY3kpPyBTdWRkZW5seSB5b3UncmUgZmFsbGJhY2sgbG9naWMgd291
bGRuJ3Qgd29yayBhbnltb3JlLCBhbmQNCj4+Pj4+IHlvdSdkIGNhbGwgLi4uDQo+Pj4+Pg0KPj4+
Pj4+ICsgICAgICAgIH0NCj4+Pj4+PiArI2VuZGlmDQo+Pj4+Pj4gICAgICAgICAgciA9IHhjX3Bo
eXNkZXZfbWFwX3BpcnEoY3R4LT54Y2gsIGRvbWlkLCBpcnEsICZpcnEpOw0KPj4+Pj4NCj4+Pj4+
IC4uLiB0aGUgZnVuY3Rpb24gd2l0aCBhIEdTSSB3aGVuIGEgcElSUSBpcyBtZWFudC4gSW1vLCBh
cyBzdWdnZXN0ZWQgYmVmb3JlLA0KPj4+Pj4geW91IHN0cmljdGx5IHdhbnQgdG8gYXZvaWQgdGhl
IGNhbGwgb24gUFYgRG9tMC4NCj4+Pj4+DQo+Pj4+PiBBbHNvIGZvciBQVkggRG9tMDogSSBkb24n
dCB0aGluayBJJ3ZlIHNlZW4gY2hhbmdlcyB0byB0aGUgaHlwZXJjYWxsDQo+Pj4+PiBoYW5kbGlu
ZywgeWV0LiBIb3cgY2FuIHRoYXQgYmUgd2hlbiBHU0kgYW5kIElSUSBhcmVuJ3QgdGhlIHNhbWUs
IGFuZCBoZW5jZQ0KPj4+Pj4gaW5jb21pbmcgR1NJIHdvdWxkIG5lZWQgdHJhbnNsYXRpbmcgdG8g
SVJRIHNvbWV3aGVyZT8gSSBjYW4gb25jZSBhZ2FpbiBvbmx5DQo+Pj4+PiBhc3N1bWUgYWxsIHlv
dXIgdGVzdGluZyB3YXMgZG9uZSB3aXRoIElSUXMgd2hvc2UgbnVtYmVycyBoYXBwZW5lZCB0byBt
YXRjaA0KPj4+Pj4gdGhlaXIgR1NJIG51bWJlcnMuIChUaGUgZGlmZmVyZW5jZSwgaW1vLCB3b3Vs
ZCBhbHNvIG5lZWQgY2FsbGluZyBvdXQgaW4gdGhlDQo+Pj4+PiBwdWJsaWMgaGVhZGVyLCB3aGVy
ZSB0aGUgcmVzcGVjdGl2ZSBpbnRlcmZhY2Ugc3RydWN0KHMpIGlzL2FyZSBkZWZpbmVkLikNCj4+
Pj4gSSBmZWVsIGxpa2UgeW91IG1pc3NlZCBvdXQgb24gbWFueSBvZiB0aGUgcHJldmlvdXMgZGlz
Y3Vzc2lvbnMuDQo+Pj4+IFdpdGhvdXQgbXkgY2hhbmdlcywgdGhlIG9yaWdpbmFsIGNvZGVzIHVz
ZSBpcnEgKHJlYWQgZnJvbSBmaWxlIC9zeXMvYnVzL3BjaS9kZXZpY2VzLzxzYmRmPi9pcnEpIHRv
IGRvIHhjX3BoeXNkZXZfbWFwX3BpcnEsDQo+Pj4+IGJ1dCB4Y19waHlzZGV2X21hcF9waXJxIHJl
cXVpcmUgcGFzc2luZyBpbnRvIGdzaSBpbnN0ZWFkIG9mIGlycSwgc28gd2UgbmVlZCB0byB1c2Ug
Z3NpIHdoZXRoZXIgZG9tMCBpcyBQViBvciBQVkgsIHNvIGZvciB0aGUgb3JpZ2luYWwgY29kZXMs
IHRoZXkgYXJlIHdyb25nLg0KPj4+PiBKdXN0IGJlY2F1c2UgYnkgY2hhbmNlLCB0aGUgaXJxIHZh
bHVlIGluIHRoZSBMaW51eCBrZXJuZWwgb2YgcHYgZG9tMCBpcyBlcXVhbCB0byB0aGUgZ3NpIHZh
bHVlLCBzbyB0aGVyZSB3YXMgbm8gcHJvYmxlbSB3aXRoIHRoZSBvcmlnaW5hbCBwdiBwYXNzdGhy
b3VnaC4NCj4+Pj4gQnV0IG5vdCB3aGVuIHVzaW5nIFBWSCwgc28gcGFzc3Rocm91Z2ggZmFpbGVk
Lg0KPj4+PiBXaXRoIG15IGNoYW5nZXMsIEkgZ290IGdzaSB0aHJvdWdoIGZ1bmN0aW9uIHhjX3Bo
eXNkZXZfZ3NpX2Zyb21fZGV2IGZpcnN0bHksIGFuZCB0byBiZSBjb21wYXRpYmxlIHdpdGggb2xk
IGtlcm5lbCB2ZXJzaW9ucyhpZiB0aGUgaW9jdGwNCj4+Pj4gSU9DVExfUFJJVkNNRF9HU0lfRlJP
TV9ERVYgaXMgbm90IGltcGxlbWVudGVkKSwgSSBzdGlsbCBuZWVkIHRvIHVzZSB0aGUgaXJxIG51
bWJlciwgc28gSSBuZWVkIHRvIGNoZWNrIHRoZSByZXN1bHQNCj4+Pj4gb2YgZ3NpLCBpZiBnc2kg
PiAwIG1lYW5zIElPQ1RMX1BSSVZDTURfR1NJX0ZST01fREVWIGlzIGltcGxlbWVudGVkIEkgY2Fu
IHVzZSB0aGUgcmlnaHQgb25lIGdzaSwgb3RoZXJ3aXNlIGtlZXAgdXNpbmcgd3Jvbmcgb25lIGly
cS4gDQo+Pj4NCj4+PiBJIHVuZGVyc3RhbmQgYWxsIG9mIHRoaXMsIHRvIGEgKEkgdGhpbmspIHN1
ZmZpY2llbnQgZGVncmVlIGF0IGxlYXN0LiBZZXQgd2hhdA0KPj4+IHlvdSdyZSBlZmZlY3RpdmVs
eSBwcm9wb3NpbmcgKHdpdGhvdXQgZXhwbGljaXRseSBzYXlpbmcgc28pIGlzIHRoYXQgZS5nLg0K
Pj4+IHN0cnVjdCBwaHlzZGV2X21hcF9waXJxJ3MgcGlycSBmaWVsZCBzdWRkZW5seSBtYXkgbm8g
bG9uZ2VyIGhvbGQgYSBwSVJRDQo+Pj4gbnVtYmVyLCBidXQgKHdoZW4gdGhlIGNhbGxlciBpcyBQ
VkgpIGEgR1NJLiBUaGlzIG1heSBiZSBhIG5lY2Vzc2FyeSBhZGp1c3RtZW50DQo+Pj4gdG8gbWFr
ZSAoc2ltcGx5IGJlY2F1c2UgdGhlIGNhbGxlciBtYXkgaGF2ZSBubyB3YXkgdG8gZXhwcmVzcyB0
aGluZ3MgaW4gcElSUQ0KPj4+IHRlcm1zKSwgYnV0IHRoZW4gc3VpdGFibGUgYWRqdXN0bWVudHMg
dG8gdGhlIGhhbmRsaW5nIG9mIFBIWVNERVZPUF9tYXBfcGlycQ0KPj4+IHdvdWxkIGJlIG5lY2Vz
c2FyeS4gSW4gZmFjdCB0aGF0IGZpZWxkIGlzIHByZXNlbnRseSBtYXJrZWQgYXMgIklOIG9yIE9V
VCI7DQo+Pj4gd2hlbiByZS1wdXJwb3NlZCB0byB0YWtlIGEgR1NJIG9uIGlucHV0LCBpdCBtYXkg
ZW5kIHVwIGJlaW5nIG5lY2Vzc2FyeSB0byBwYXNzDQo+Pj4gYmFjayB0aGUgcElSUSAoaW4gdGhl
IHN1YmplY3QgZG9tYWluJ3MgbnVtYmVyaW5nIHNwYWNlKS4gT3IgYWx0ZXJuYXRpdmVseSBpdA0K
Pj4+IG1heSBiZSBuZWNlc3NhcnkgdG8gYWRkIHlldCBhbm90aGVyIHN1Yi1mdW5jdGlvbiBzbyB0
aGUgR1NJIGNhbiBiZSB0cmFuc2xhdGVkDQo+Pj4gdG8gdGhlIGNvcnJlc3BvbmRpbmcgcElSUSBm
b3IgdGhlIGRvbWFpbiB0aGF0J3MgZ29pbmcgdG8gdXNlIHRoZSBJUlEsIGZvciB0aGF0DQo+Pj4g
dGhlbiB0byBiZSBwYXNzZWQgaW50byBQSFlTREVWT1BfbWFwX3BpcnEuDQo+PiBJZiBJIHVuZGVy
c3Rvb2QgY29ycmVjdGx5LCB5b3VyIGNvbmNlcm5zIGFib3V0IHRoaXMgcGF0Y2ggYXJlIHR3bzoN
Cj4+IEZpcnN0LCB3aGVuIGRvbTAgaXMgUFYsIEkgc2hvdWxkIG5vdCB1c2UgeGNfcGh5c2Rldl9n
c2lfZnJvbV9kZXYgdG8gZ2V0IGdzaSB0byBkbyB4Y19waHlzZGV2X21hcF9waXJxLCBJIHNob3Vs
ZCBrZWVwIHRoZSBvcmlnaW5hbCBjb2RlIHRoYXQgdXNlIGlycS4NCj4gDQo+IFllcy4NCk9LLCBJ
IGNhbiBjaGFuZ2UgdG8gZG8gdGhpcy4NCkJ1dCBJIHN0aWxsIGhhdmUgYSBjb25jZXJuOg0KQWx0
aG91Z2ggd2l0aG91dCBteSBjaGFuZ2VzLCBwYXNzdGhyb3VnaCBjYW4gd29yayBub3cgd2hlbiBk
b20wIGlzIFBWLg0KQW5kIHlvdSBhbHNvIGFncmVlIHRoYXQ6IGZvciB4Y19waHlzZGV2X21hcF9w
aXJxLCB3aGVuIHVzZSB3aXRoIE1BUF9QSVJRX1RZUEVfR1NJLCBpdCBleHBlY3RzIGEgR1NJIGFz
IGlucHV0Lg0KSXNuJ3QgaXQgYSB3cm9uZyBmb3IgUFYgZG9tMCB0byBwYXNzIGlycSBpbj8gV2h5
IGRvbid0IHdlIHVzZSBnc2kgYXMgaXQgc2hvdWxkIGJlIHVzZWQsIHNpbmNlIHdlIGhhdmUgYSBm
dW5jdGlvbiB0byBnZXQgZ3NpIG5vdz8NCg0KPiANCj4+IFNlY29uZCwgd2hlbiBkb20wIGlzIFBW
SCwgSSBnZXQgdGhlIGdzaSwgYnV0IEkgc2hvdWxkIG5vdCBwYXNzIGdzaSBhcyB0aGUgZm91cnRo
IHBhcmFtZXRlciBvZiB4Y19waHlzZGV2X21hcF9waXJxLCBJIHNob3VsZCBjcmVhdGUgYSBuZXcg
bG9jYWwgcGFyYW1ldGVyIHBpcnE9LTEsIGFuZCBwYXNzIGl0IGluLg0KPiANCj4gSSB0aGluayBz
bywgeWVzLiBZb3UgYWxzbyBtYXkgbmVlZCB0byByZWNvcmQgdGhlIG91dHB1dCB2YWx1ZSwgc28g
eW91IGNhbiBsYXRlcg0KPiB1c2UgaXQgZm9yIHVubWFwLiB4Y19waHlzZGV2X21hcF9waXJxKCkg
bWF5IGFsc28gbmVlZCBhZGp1c3RpbmcsIGFzIHJpZ2h0IG5vdw0KPiBpdCB3b3VsZG4ndCBwdXQg
YSBuZWdhdGl2ZSBpbmNvbWluZyAqcGlycSBpbnRvIHRoZSAucGlycSBmaWVsZC4gDQp4Y19waHlz
ZGV2X21hcF9waXJxJ3MgbG9naWMgaXMgaWYgd2UgcGFzcyBhIG5lZ2F0aXZlIGluLCBpdCBzZXRz
ICpwaXJxIHRvIGluZGV4KGdzaSkuDQpJcyBpdHMgbG9naWMgcmlnaHQ/IElmIG5vdCBob3cgZG8g
d2UgY2hhbmdlIGl0Pw0KDQo+IEkgYWN0dWFsbHkgd29uZGVyIGlmIHRoYXQncyBldmVuIGNvcnJl
Y3QgcmlnaHQgbm93LCBpLmUuIGluZGVwZW5kZW50IG9mIHlvdXIgY2hhbmdlLg0KRXZlbiB3aXRo
b3V0IG15IGNoYW5nZXMsIHBhc3N0aHJvdWdoIGNhbiB3b3JrIGZvciBQViBkb20wLCBub3QgZm9y
IFBWSCBkb20wLg0KQWNjb3JkaW5nIHRvIHRoZSBsb2dpYyBvZiBoeXBlcmNhbGwgUEhZU0RFVk9Q
X21hcF9waXJxLA0KaWYgcGlycSBpcyAtMSwgaXQgY2FsbHMgcGh5c2Rldl9tYXBfcGlycS0+IGFs
bG9jYXRlX2FuZF9tYXBfZ3NpX3BpcnEtPiBhbGxvY2F0ZV9waXJxIC0+IGdldF9mcmVlX3BpcnEg
dG8gZ2V0IHBpcnEuDQpJZiBwaXJxIGlzIHNldCB0byBwb3NpdGl2ZSBiZWZvcmUgY2FsbGluZyBo
eXBlcmNhbGwsIGl0IHNldCBwaXJxIHRvIGl0cyBvd24gdmFsdWUgaW4gYWxsb2NhdGVfcGlycS4N
Cg0KPiANCj4gSmFuDQoNCi0tIA0KQmVzdCByZWdhcmRzLA0KSmlxaWFuIENoZW4uDQo=


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 10:38:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 10:38:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744400.1151439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKFB2-0003mg-4A; Thu, 20 Jun 2024 10:38:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744400.1151439; Thu, 20 Jun 2024 10:38:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKFB2-0003mZ-1G; Thu, 20 Jun 2024 10:38:00 +0000
Received: by outflank-mailman (input) for mailman id 744400;
 Thu, 20 Jun 2024 10:37:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKFAz-0003mS-Q8
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 10:37:57 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2785b7cf-2ef1-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 12:37:56 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cdbso6705721fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 03:37:56 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705cc9257e0sm12078749b3a.29.2024.06.20.03.37.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 03:37:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2785b7cf-2ef1-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718879875; x=1719484675; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=2aHqXuO00w7ji2jtcKC8/O8t1tpR10+HdA38yi/DVN4=;
        b=dWwpQps69tcx5NzHDDZaPXkYJvqoD1q/iGSlUh89j0dlXo97ZznbwI/yDG3ezFUJd9
         OoQYTe8eTkJEsUoyTm3mabQdFoY+C7OiJnrxh1XbUUMdC3qAZlztcRPgK5+khfJCGYnA
         rTvEBBaqDipkAJ9aID4ORJx3LkTWAxqXCF+O2EeH/iglqkZtwj/M4dAJ/5vnCWf5yJQZ
         +XcvvztFuRyy0zIwShjtLYZcPVW/nGgQFd+eqXA2heSyvytwclF4rncrSwvbre7BGE9l
         O16XaLRNYiW0ggaEP+Fz4pysngRLVN3fFp34jLmaZCryKQO23Uez6Fpu8+biPAHYkPKj
         RBVg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718879875; x=1719484675;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2aHqXuO00w7ji2jtcKC8/O8t1tpR10+HdA38yi/DVN4=;
        b=iAHIDj1p6XF7CV1vdE10bf6RoaklS63AWryYGmuqp7zb95P4sbQIf6ghp30rIHrwNQ
         Y4XYzhaiKPgIUVCHawz901px5/cf79M1Xe4yPkZp/HwQPcjIImQpy3moJf9ngImXVO35
         Zw+9yf679O5wGY6QsxPEmn2j65UZHgHK11J/6jYej/fT4Zjqw8NpHxUbyGZ8MbO0rJgN
         Z3FYnEsYk8qX269O6QFzYVeJCKbRC8E0K83or4dSDjfASpavayJnwkBlwhx5mz1pw3Ol
         NRY8i7gBb1LA2dI9vHqxlUzQhJb4pJROJ+GdRNpeN1wfGp6VJMquiqF8sBNHa4LyMPlN
         0Y2g==
X-Forwarded-Encrypted: i=1; AJvYcCVSMQ3HZu6QtV6rpFKcqzpD5GvbvDYItgwgragSy7md9/0tpfZ/hRFwINLsRlaMiNQZh7rHrkuPjwDVEKmCFrOXKMWtvgC3I8vK/YIPA3o=
X-Gm-Message-State: AOJu0YyBJAF5LKoOsG0Z29DsWtioJf4GEuDdgndaHw+sx/t17ZKsTQFl
	PP2iMIWN0zNbSJy3rdxUPh2kcGPGJES7+1Y4a7mA+tx+eDx0ERMo5JNF2jPO2w==
X-Google-Smtp-Source: AGHT+IGltburmZWyUjsaqo4pb6m/mWOQpg4dJtTge6jZyX55qnjRHnF4R/9GWWhLqFlAX+w1useOOQ==
X-Received: by 2002:a2e:9604:0:b0:2ec:3cdf:6401 with SMTP id 38308e7fff4ca-2ec3ceb7ceamr33950031fa.17.1718879875239;
        Thu, 20 Jun 2024 03:37:55 -0700 (PDT)
Message-ID: <3352736b-e7bc-40d0-ac1f-e58de188c93c@suse.com>
Date: Thu, 20 Jun 2024 12:37:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
 <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <099beaac-ed1f-459b-8c2b-42b325f8e4a4@suse.com>
 <BL1PR12MB5849366A442BE6C4C192ABB0E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849366A442BE6C4C192ABB0E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 12:23, Chen, Jiqian wrote:
> On 2024/6/20 15:43, Jan Beulich wrote:
>> On 20.06.2024 09:03, Chen, Jiqian wrote:
>>> On 2024/6/18 17:13, Jan Beulich wrote:
>>>> On 18.06.2024 10:10, Chen, Jiqian wrote:
>>>>> On 2024/6/17 23:10, Jan Beulich wrote:
>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>> --- a/tools/libs/light/libxl_pci.c
>>>>>>> +++ b/tools/libs/light/libxl_pci.c
>>>>>>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>>>>>>  #endif
>>>>>>>  }
>>>>>>>  
>>>>>>> +#define PCI_DEVID(bus, devfn)\
>>>>>>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>>>>>>> +
>>>>>>> +#define PCI_SBDF(seg, bus, devfn) \
>>>>>>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>>>>>
>>>>>> I'm not a maintainer of this file; if I were, I'd ask that for readability's
>>>>>> sake all excess parentheses be dropped from these.
>>>>> Isn't it a coding requirement to enclose each element in parentheses in the macro definition?
>>>>> It seems other files also do this. See tools/libs/light/libxl_internal.h
>>>>
>>>> As said, I'm not a maintainer of this code. Yet while I'm aware that libxl
>>>> has its own CODING_STYLE, I can't spot anything towards excessive use of
>>>> parentheses there.
>>> So, which parentheses do you think are excessive use?
>>
>> #define PCI_DEVID(bus, devfn)\
>>             (((uint16_t)(bus) << 8) | ((devfn) & 0xff))
>>
>> #define PCI_SBDF(seg, bus, devfn) \
>>             (((uint32_t)(seg) << 16) | PCI_DEVID(bus, devfn))
> Thanks, will change in next version.
> 
>>
>>>>>>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>>>          goto out_no_irq;
>>>>>>>      }
>>>>>>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>>>>>>> +#ifdef CONFIG_X86
>>>>>>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>>>>>>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>>>>>>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>>>>>> +        /*
>>>>>>> +         * Old kernel version may not support this function,
>>>>>>
>>>>>> Just kernel?
>>>>> Yes, xc_physdev_gsi_from_dev depends on the corresponding function being implemented on the Linux kernel side.
>>>>
>>>> Okay, and when the kernel supports it but the underlying hypervisor doesn't
>>>> support what the kernel wants to use in order to fulfill the request, all
>>> I don't know which hypervisor-side support you are referring to,
>>> because xc_physdev_gsi_from_dev only looks up the GSI of a PCI device from its SBDF information,
>>> and that mapping is available only in dom0, not in the Xen hypervisor.
>>>
>>>> is fine? (See also below for what may be needed in the hypervisor, even if
>>> You mean xc_physdev_map_pirq needs gsi?
>>
>> I'd put it slightly differently: You arrange for that function to now take a
>> GSI when the caller is PVH. But yes, the function, when used with
>> MAP_PIRQ_TYPE_GSI, clearly expects a GSI as input (see also below).
>>
>>>> this IOCTL would be satisfied by the kernel without needing to interact with
>>>> the hypervisor.)
>>>>
>>>>>>> +         * so if fail, keep using irq; if success, use gsi
>>>>>>> +         */
>>>>>>> +        if (gsi > 0) {
>>>>>>> +            irq = gsi;
>>>>>>
>>>>>> I'm still puzzled by this, when by now I think we've sufficiently clarified
>>>>>> that IRQs and GSIs use two distinct numbering spaces.
>>>>>>
>>>>>> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
>>>>>> the assumption that it'll fail. What if we decide to make the functionality
>>>>>> available there, too (if only for informational purposes, or for
>>>>>> consistency)? Suddenly your fallback logic wouldn't work anymore, and
>>>>>> you'd call ...
>>>>>>
>>>>>>> +        }
>>>>>>> +#endif
>>>>>>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>>>>>
>>>>>> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
>>>>>> you strictly want to avoid the call on PV Dom0.
>>>>>>
>>>>>> Also for PVH Dom0: I don't think I've seen changes to the hypercall
>>>>>> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
>>>>>> incoming GSI would need translating to IRQ somewhere? I can once again only
>>>>>> assume all your testing was done with IRQs whose numbers happened to match
>>>>>> their GSI numbers. (The difference, imo, would also need calling out in the
>>>>>> public header, where the respective interface struct(s) is/are defined.)
>>>>> I feel like you missed out on many of the previous discussions.
>>>>> Without my changes, the original code uses the irq (read from /sys/bus/pci/devices/<sbdf>/irq) to do xc_physdev_map_pirq,
>>>>> but xc_physdev_map_pirq requires a GSI rather than an IRQ as input, so we need to use the GSI whether dom0 is PV or PVH; the original code is therefore wrong.
>>>>> It is only by chance that the IRQ value in the Linux kernel of a PV dom0 equals the GSI value, which is why the original PV passthrough had no problem.
>>>>> That doesn't hold for PVH, so passthrough failed there.
>>>>> With my changes, I first obtain the GSI via xc_physdev_gsi_from_dev; to stay compatible with old kernel versions (where the ioctl
>>>>> IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented), I still need to fall back to the IRQ number. Hence I check the result:
>>>>> gsi > 0 means IOCTL_PRIVCMD_GSI_FROM_DEV is implemented and I can use the correct value (the GSI); otherwise I keep using the wrong one (the IRQ).
>>>>
>>>> I understand all of this, to a (I think) sufficient degree at least. Yet what
>>>> you're effectively proposing (without explicitly saying so) is that e.g.
>>>> struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
>>>> number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
>>>> to make (simply because the caller may have no way to express things in pIRQ
>>>> terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
>>>> would be necessary. In fact that field is presently marked as "IN or OUT";
>>>> when re-purposed to take a GSI on input, it may end up being necessary to pass
>>>> back the pIRQ (in the subject domain's numbering space). Or alternatively it
>>>> may be necessary to add yet another sub-function so the GSI can be translated
>>>> to the corresponding pIRQ for the domain that's going to use the IRQ, for that
>>>> then to be passed into PHYSDEVOP_map_pirq.
>>> If I understood correctly, your concerns about this patch are two:
>>> First, when dom0 is PV, I should not use xc_physdev_gsi_from_dev to get the GSI for xc_physdev_map_pirq; I should keep the original code that uses the IRQ.
>>
>> Yes.
> OK, I can change to do this.
> But I still have a concern:
> Although without my changes, passthrough does work when dom0 is PV.
> And you also agree that xc_physdev_map_pirq, when used with MAP_PIRQ_TYPE_GSI, expects a GSI as input.
> Isn't it wrong for PV dom0 to pass an IRQ in? Why don't we use the GSI as intended, now that we have a function to obtain it?

Indeed this and ...

>>> Second, when dom0 is PVH, I get the GSI, but I should not pass it as the fourth parameter of xc_physdev_map_pirq; instead I should create a new local variable pirq = -1 and pass that in.
>>
>> I think so, yes. You also may need to record the output value, so you can later
>> use it for unmap. xc_physdev_map_pirq() may also need adjusting, as right now
>> it wouldn't put a negative incoming *pirq into the .pirq field. 
> xc_physdev_map_pirq's logic is that if we pass a negative value in, it sets *pirq to the index (the GSI).
> Is that logic right? If not, how should we change it?

... this matches ...

>> I actually wonder if that's even correct right now, i.e. independent of your change.

... the remark here.

> Even without my changes, passthrough works for PV dom0, but not for PVH dom0.

In the common case. I fear no-one ever tried for a device with an IRQ that
has a source override specified in ACPI.

> According to the logic of the PHYSDEVOP_map_pirq hypercall,
> if pirq is -1, it goes through physdev_map_pirq -> allocate_and_map_gsi_pirq -> allocate_pirq -> get_free_pirq to obtain a pirq.
> If pirq is set to a positive value before the hypercall, allocate_pirq takes that value as-is.

Which is what looks wrong to me. The question is why it was done this way in the
first place.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 10:43:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 10:43:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744410.1151451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKFFs-0005aK-N7; Thu, 20 Jun 2024 10:43:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744410.1151451; Thu, 20 Jun 2024 10:43:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKFFs-0005aD-KY; Thu, 20 Jun 2024 10:43:00 +0000
Received: by outflank-mailman (input) for mailman id 744410;
 Thu, 20 Jun 2024 10:42:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKFFr-0005a6-Ii
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 10:42:59 +0000
Received: from mail-wr1-x42a.google.com (mail-wr1-x42a.google.com
 [2a00:1450:4864:20::42a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id db7bfd2b-2ef1-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 12:42:57 +0200 (CEST)
Received: by mail-wr1-x42a.google.com with SMTP id
 ffacd0b85a97d-35f2c9e23d3so1200111f8f.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 03:42:57 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e55e54sm134141535ad.23.2024.06.20.03.42.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 03:42:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db7bfd2b-2ef1-11ef-90a3-e314d9c70b13
Message-ID: <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com>
Date: Thu, 20 Jun 2024 12:42:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>,
 Anthony PERARD <anthony@xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
 <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 11:40, Chen, Jiqian wrote:
> On 2024/6/18 17:23, Jan Beulich wrote:
>> On 18.06.2024 10:23, Chen, Jiqian wrote:
>>> On 2024/6/17 23:32, Jan Beulich wrote:
>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>              rc = ERROR_FAIL;
>>>>>              goto out;
>>>>>          }
>>>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
>>>>> +#ifdef CONFIG_X86
>>>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
>>>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>>>>
>>>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF, but
>>>> I didn't check if that can be used with the underlying hypercall(s). Otherwise
> Since commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf, DOMID_SELF is not allowed for XEN_DOMCTL_getdomaininfo.
> And XEN_DOMCTL_getdomaininfo now looks the domain up via rcu_lock_domain_by_id.
> 
>>>> you want to pass the actual domid of the local domain here.
> What is the local domain here?

The domain your code is running in.

> What method can I use to get its domid?

I hope there's an available function in one of the libraries to do that.
But I wouldn't even know what to look for; that's a question to (primarily)
Anthony then, who sadly continues to be our only tool stack maintainer.

Alternatively we could maybe enable XEN_DOMCTL_getdomaininfo to permit
DOMID_SELF.

>>> But the action of granting permission is from dom0 to domU; what I need to get is the information about dom0.
>>> The actual domid here is the domU's id, I think, so it is not useful.
>>
>> Note how I said DOMID_SELF and "local domain". There's no talk of using the
>> DomU's domid. But what you apparently neglect is the fact that the hardware
>> domain isn't necessarily Dom0 (see CONFIG_LATE_HWDOM in the hypervisor).
>> While benign in most cases, this is relevant when it comes to referencing
>> the hardware domain by domid. And it is the hardware domain which is going
>> to drive the device re-assignment, as that domain is who's in possession of
>> all the devices not yet assigned to any DomU.
> OK, I need to get the information of hardware domain here?

Right, with (for this purpose) "hardware domain" == "local domain".

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 11:15:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 11:15:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744424.1151467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKFki-00026S-31; Thu, 20 Jun 2024 11:14:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744424.1151467; Thu, 20 Jun 2024 11:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKFki-00026L-0X; Thu, 20 Jun 2024 11:14:52 +0000
Received: by outflank-mailman (input) for mailman id 744424;
 Thu, 20 Jun 2024 11:14:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WCyn=NW=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKFkg-00026F-MB
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 11:14:50 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4f0b15d8-2ef6-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 13:14:49 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id
 2adb3069b0e04-52cc5d5179aso1150344e87.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 04:14:49 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d26bddab3sm229612a12.2.2024.06.20.04.14.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Jun 2024 04:14:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f0b15d8-2ef6-11ef-90a3-e314d9c70b13
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>
Subject: [PATCH for-4.18] x86/cpuid: Fix handling of XSAVE dynamic leaves
Date: Thu, 20 Jun 2024 12:14:38 +0100
Message-Id: <20240620111438.2666922-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

[ This is a minimal backport of commit 71cacfb035f4 ("x86/cpuid: Fix handling
  of XSAVE dynamic leaves") to fix the bugs without depending on large rework of
  XSTATE handling in Xen 4.19 ]

First, if XSAVE is available in hardware but not visible to the guest, the
dynamic leaves shouldn't be filled in.

Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
host/guest state automatically, but there is provision for "host only" bits to
be set, so the implications are still accurate.

In Xen 4.18, no XSS states are supported, so it's safe to keep deferring to
real hardware.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
(cherry picked from commit 71cacfb035f4a78ee10970dc38a3baa04d387451)
---
CC: Jan Beulich <JBeulich@suse.com>
---
 xen/arch/x86/cpuid.c | 30 +++++++++++++-----------------
 1 file changed, 13 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 455a09b2dd22..f6fd6cc6b32c 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -330,24 +330,20 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     case XSTATE_CPUID:
         switch ( subleaf )
         {
-        case 1:
-            if ( p->xstate.xsavec || p->xstate.xsaves )
-            {
-                /*
-                 * TODO: Figure out what to do for XSS state.  VT-x manages
-                 * host vs guest MSR_XSS automatically, so as soon as we start
-                 * supporting any XSS states, the wrong XSS will be in
-                 * context.
-                 */
-                BUILD_BUG_ON(XSTATE_XSAVES_ONLY != 0);
-
-                /*
-                 * Read CPUID[0xD,0/1].EBX from hardware.  They vary with
-                 * enabled XSTATE, and appropraite XCR0|XSS are in context.
-                 */
+            /*
+             * Read CPUID[0xd,0/1].EBX from hardware.  They vary with enabled
+             * XSTATE, and the appropriate XCR0 is in context.
+             */
         case 0:
-                res->b = cpuid_count_ebx(leaf, subleaf);
-            }
+            if ( p->basic.xsave )
+                res->b = cpuid_count_ebx(0xd, 0);
+            break;
+
+        case 1:
+            /* This only works because Xen doesn't support XSS states yet. */
+            BUILD_BUG_ON(XSTATE_XSAVES_ONLY != 0);
+            if ( p->xstate.xsavec )
+                res->b = cpuid_count_ebx(0xd, 1);
             break;
         }
         break;

base-commit: 01f7a3c792241d348a4e454a30afdf6c0d6cd71c
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 12:11:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 12:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744442.1151477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKGdA-0002Eh-7r; Thu, 20 Jun 2024 12:11:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744442.1151477; Thu, 20 Jun 2024 12:11:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKGdA-0002Ea-5G; Thu, 20 Jun 2024 12:11:08 +0000
Received: by outflank-mailman (input) for mailman id 744442;
 Thu, 20 Jun 2024 12:11:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKGd9-0002EP-1s
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 12:11:07 +0000
Received: from mail-wm1-x333.google.com (mail-wm1-x333.google.com
 [2a00:1450:4864:20::333])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 298d46f9-2efe-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 14:11:03 +0200 (CEST)
Received: by mail-wm1-x333.google.com with SMTP id
 5b1f17b1804b1-42108856c33so12016195e9.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 05:11:02 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855f1a4fdsm135405765ad.233.2024.06.20.05.10.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 05:11:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 298d46f9-2efe-11ef-b4bb-af5377834399
Message-ID: <4b8d9881-48b4-4600-897d-da8684b10132@suse.com>
Date: Thu, 20 Jun 2024 14:10:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.18] x86/cpuid: Fix handling of XSAVE dynamic leaves
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20240620111438.2666922-1-andrew.cooper3@citrix.com>
Content-Language: en-US
Cc: Xen-devel <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240620111438.2666922-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 13:14, Andrew Cooper wrote:
> [ This is a minimal backport of commit 71cacfb035f4 ("x86/cpuid: Fix handling
>   of XSAVE dynamic leaves") to fix the bugs without depending on large rework of
>   XSTATE handling in Xen 4.19 ]
> 
> First, if XSAVE is available in hardware but not visible to the guest, the
> dynamic leaves shouldn't be filled in.
> 
> Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
> host/guest state automatically, but there is provision for "host only" bits to
> be set, so the implications are still accurate.
> 
> In Xen 4.18, no XSS states are supported, so it's safe to keep deferring to
> real hardware.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

I'll try to not forget to include this in my next backporting batch. Thanks
for getting this sorted so quickly.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 12:29:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 12:29:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744452.1151489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKGun-00049V-JB; Thu, 20 Jun 2024 12:29:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744452.1151489; Thu, 20 Jun 2024 12:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKGun-00049O-Ek; Thu, 20 Jun 2024 12:29:21 +0000
Received: by outflank-mailman (input) for mailman id 744452;
 Thu, 20 Jun 2024 12:29:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WCyn=NW=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKGul-00049D-U4
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 12:29:19 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6cba318-2f00-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 14:29:18 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57d20d89748so749858a12.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 05:29:18 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d2376ccb3sm696836a12.74.2024.06.20.05.29.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 05:29:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6cba318-2f00-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718886558; x=1719491358; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=nXwHOZ/PDUZeUlO/aEowJKfxsvzpMKteBsTpJA/SLCY=;
        b=cmI9sXxhtSbwEHrIZepNFoSDp1xa/uppqqa6DoJXVfkJATr6fSuW5JIm5MtUpWFgNo
         DkelAD0MJVWwg2JHrqm8e3XWv4ZD/zL0ai/AfjASuidfIHJ208NpvSA69mX2k75RADXJ
         lNigb/254Nyo+X5NfMEmXBxKjoHQ9O0RBs4oc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718886558; x=1719491358;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nXwHOZ/PDUZeUlO/aEowJKfxsvzpMKteBsTpJA/SLCY=;
        b=ELxVIMuUTr4mIjpHEIc2zsaTiyRg2Aepl7zIwG+zjzYbkiZxyNPquVYDL/EMjvEjfL
         9jkO7tKiI1amyVMXuuaxd1kMhSMQgZem+BJj3o490CiuyLXnYvjETUSDhopMk/EDPgr3
         wfGNPBJ+cnPtPINAqBpA+lu2DZrh+yGaGCqbEQkuMkLF18mNEUsxxFX1Hs3tOqrOoddT
         0e4G7xqwCZubz93Fs/1W0Lz1dUgKM8xlHtEp/AEQB06OEZjptXbSf6IBxoBUG7RU9F8S
         z6vtX3Kk90cnK2t0RBN/YVPZwqXbaU+jLR33rYnmorwLTz1QuLXJclxbTwxn4NbJIMOY
         nU+Q==
X-Gm-Message-State: AOJu0YxXBiQHWgyLDZlyuitJ1TCOzWXNayyzT1Eg+tWFsRHSHxjEpnLo
	P1y/KAaCZcd+0juyH2kXjKU06zC6sxBAC6GdieAQJ6KeDbSl0MANGZNUPneq/Vs=
X-Google-Smtp-Source: AGHT+IFMg46g6J8I5fKSoMJjQ4tNuG1avYq0+ygm4sofWn9UPfamr+IcpHIW+hPlmGkYUTEkqc4g3A==
X-Received: by 2002:a50:d543:0:b0:57c:a7dc:9d6e with SMTP id 4fb4d7f45d1cf-57d07ee31d0mr3665339a12.39.1718886558149;
        Thu, 20 Jun 2024 05:29:18 -0700 (PDT)
Message-ID: <fa700afd-c9dc-4bae-bf9f-0b2504b6252b@citrix.com>
Date: Thu, 20 Jun 2024 13:28:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.18] x86/cpuid: Fix handling of XSAVE dynamic leaves
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>
References: <20240620111438.2666922-1-andrew.cooper3@citrix.com>
 <4b8d9881-48b4-4600-897d-da8684b10132@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <4b8d9881-48b4-4600-897d-da8684b10132@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 20/06/2024 1:10 pm, Jan Beulich wrote:
> On 20.06.2024 13:14, Andrew Cooper wrote:
>> [ This is a minimal backport of commit 71cacfb035f4 ("x86/cpuid: Fix handling
>>   of XSAVE dynamic leaves") to fix the bugs without depending on large rework of
>>   XSTATE handling in Xen 4.19 ]
>>
>> First, if XSAVE is available in hardware but not visible to the guest, the
>> dynamic leaves shouldn't be filled in.
>>
>> Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
>> host/guest state automatically, but there is provision for "host only" bits to
>> be set, so the implications are still accurate.
>>
>> In Xen 4.18, no XSS states are supported, so it's safe to keep deferring to
>> real hardware.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> I'll try to not forget to include this in my next backporting batch. Thanks
> for getting this sorted so quickly.

No problem.  If you're intending to commit this, could you fix:

  "depending on large" => "depending on the large"

in the first sentence please.  Only just spotted it.

~Andrew
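The behaviour being fixed can be illustrated with a small, self-contained sketch (all names and structures here are invented for illustration and are not Xen's actual code): CPUID leaf 0xD's dynamic subleaves report the current XSAVE area size, and the point of the patch is that they must stay zero when XSAVE is not visible to the guest, rather than reflecting host state.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a guest CPUID policy (not Xen's real structures). */
struct policy {
    bool xsave_visible;  /* Is XSAVE advertised to the guest? */
    uint64_t xcr0;       /* Guest's currently enabled XSTATE features. */
};

/*
 * Toy size calculation: the legacy region plus XSAVE header always cost
 * 576 bytes; as an illustrative example, enabling bit 2 (AVX state) adds
 * a further 256 bytes.  Real sizes come from hardware CPUID queries.
 */
static uint32_t xstate_size(uint64_t xcr0)
{
    uint32_t size = 512 + 64;       /* Legacy region + XSAVE header. */

    if (xcr0 & (1ull << 2))
        size += 256;                /* AVX state (illustrative size). */

    return size;
}

/*
 * Fill a dynamic leaf 0xD value.  The backported fix: if XSAVE is not
 * visible to the guest, the dynamic leaves are not filled in at all,
 * so the value stays zero instead of leaking host-derived state.
 */
static uint32_t leaf_d_dynamic_size(const struct policy *p)
{
    if (!p->xsave_visible)
        return 0;                   /* Dynamic leaves left unfilled. */

    return xstate_size(p->xcr0);
}
```

Under these assumptions, a guest without XSAVE sees zero regardless of its nominal XCR0 contents, while an XSAVE-visible guest sees a size tracking its enabled features.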


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 12:50:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 12:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744460.1151498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHFW-0007o0-4t; Thu, 20 Jun 2024 12:50:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744460.1151498; Thu, 20 Jun 2024 12:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHFW-0007nt-0t; Thu, 20 Jun 2024 12:50:46 +0000
Received: by outflank-mailman (input) for mailman id 744460;
 Thu, 20 Jun 2024 12:50:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKHFV-0007nn-LP
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 12:50:45 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b489d3e1-2f03-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 14:50:43 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 724A54EE0738;
 Thu, 20 Jun 2024 14:50:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b489d3e1-2f03-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH] automation/eclair: add deviations of MISRA C Rule 5.5
Date: Thu, 20 Jun 2024 14:50:34 +0200
Message-Id: <dbd34e37b5d757ff7ae2a7318ad12b159970604c.1718887298.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 5.5 states that "Identifiers shall be distinct from macro
names".

Update ECLAIR configuration to deviate:
- macros expanding to their own name;
- clashes between macros and non-callable entities;
- clashes related to the selection of specific implementations of string
  handling functions.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 .../eclair_analysis/ECLAIR/deviations.ecl     | 16 ++++++++++++++
 docs/misra/deviations.rst                     | 21 +++++++++++++++++++
 2 files changed, 37 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index e2653f77eb..9ad0e1f90a 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -90,6 +90,22 @@ conform to the directive."
 -config=MC3R1.R5.3,reports+={safe, "any_area(any_loc(any_exp(macro(^read_debugreg$))&&any_exp(macro(^write_debugreg$))))"}
 -doc_end
 
+-doc_begin="Macros expanding to their own identifier (e.g., \"#define x x\") are deliberate."
+-config=MC3R1.R5.5,reports+={deliberate, "all_area(macro(same_id_body())||!macro(!same_id_body()))"}
+-doc_end
+
+-doc_begin="There is no clash between function-like macros and non-callable entities."
+-config=MC3R1.R5.5,reports+={deliberate, "all_area(macro(function_like())||decl(any()))&&all_area(macro(any())||!decl(kind(function))&&!decl(__function_pointer_decls))"}
+-doc_end
+
+-doc_begin="Clashes between function names and macros are deliberate for string handling functions since some architectures may want to use their own arch-specific implementation."
+-config=MC3R1.R5.5,reports+={deliberate, "all_area(all_loc(file(^xen/arch/x86/string\\.c|xen/include/xen/string\\.h|xen/lib/.*$)))"}
+-doc_end
+
+-doc_begin="In libelf, clashes between macros and function names are deliberate and needed to prevent the use of undecorated versions of memcpy, memset and memmove."
+-config=MC3R1.R5.5,reports+={deliberate, "any_area(decl(kind(function))||any_loc(macro(name(memcpy||memset||memmove))))&&any_area(any_loc(file(^xen/common/libelf/libelf-private\\.h$)))"}
+-doc_end
+
 -doc_begin="The type \"ret_t\" is deliberately defined multiple times,
 depending on the guest."
 -config=MC3R1.R5.6,reports+={deliberate,"any_area(any_loc(text(^.*ret_t.*$)))"}
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..446c758c11 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -98,6 +98,27 @@ Deviations related to MISRA C:2012 Rules:
          - __emulate_2op and __emulate_2op_nobyte
          - read_debugreg and write_debugreg
 
+   * - R5.5
+     - Macros expanding to their own name are allowed.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R5.5
+     - Clashes between names of function-like macros and identifiers of
+       non-callable entities are allowed.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R5.5
+     - Clashes between function names and macros are deliberate for string
+       handling functions since some architectures may want to use their own
+       arch-specific implementation.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R5.5
+     - In libelf, clashes between macros and function names are deliberate and
+       needed to prevent the use of undecorated versions of memcpy, memset and
+       memmove.
+     - Tagged as `deliberate` for ECLAIR.
+
    * - R5.6
      - The type ret_t is deliberately defined multiple times depending on the
        type of guest to service.
-- 
2.34.1
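Two of the deviated Rule 5.5 patterns can be shown in a short sketch (identifiers below are invented for illustration, not taken from the Xen tree): a macro expanding to its own name, commonly used so that #ifdef checks can detect that a facility is provided, and a function-like macro redirecting a string-handling function to an arch-specific implementation.

```c
#include <assert.h>
#include <string.h>

/*
 * 1) A macro expanding to its own identifier.  The macro and the
 *    function share a name, which trips Rule 5.5, but the expansion
 *    is the identifier itself, so uses behave exactly like a plain
 *    function call.
 */
#define cpu_relax cpu_relax
static inline void cpu_relax(void)
{
    /* e.g. a pause/yield hint on real hardware; a no-op here. */
}

/*
 * 2) A function-like macro clashing with a standard function name,
 *    selecting a hypothetical arch-specific implementation.  Callers
 *    keep writing memcpy(); the macro routes them to the override.
 */
static void *arch_memcpy(void *d, const void *s, size_t n)
{
    return memcpy(d, s, n);   /* Stand-in for an optimised routine. */
}
#define memcpy(d, s, n) arch_memcpy(d, s, n)
```

In both cases the clash is intentional, which is why the configuration tags the reports as `deliberate` rather than fixing the names apart.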



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:09:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744471.1151508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHXn-0001Ic-GY; Thu, 20 Jun 2024 13:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744471.1151508; Thu, 20 Jun 2024 13:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHXn-0001IV-Cz; Thu, 20 Jun 2024 13:09:39 +0000
Received: by outflank-mailman (input) for mailman id 744471;
 Thu, 20 Jun 2024 13:09:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHXm-0001IL-4G; Thu, 20 Jun 2024 13:09:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHXl-0007ZI-SH; Thu, 20 Jun 2024 13:09:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHXl-0004T1-Gx; Thu, 20 Jun 2024 13:09:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHXl-0006c4-GU; Thu, 20 Jun 2024 13:09:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/PcnEy3aCX8/tkEBI6aCxNss8op5GYNEUtkr5aXxM6Y=; b=sfUCzaXtGDoXyEj/WRjwgc8SQn
	RzLS5yj/iIEJ3nV6tMtXMVxcTpXxgUC/mQAXuS2Jfu4RWkkPurPVTAZz7cC0LjaoVZ/iFPNXa8MUe
	5gzi8uf/8AD029KDkPXWnmXqlZ70oXYKBUTXk6aYI1NnzK2IVbXKSl+ZtVvdCjlN4bKI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186426-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186426: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e5b3efbe1ab1793bb49ae07d56d0973267e65112
X-Osstest-Versions-That:
    linux=92e5605a199efbaee59fb19e15d6cc2103a04ec2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 13:09:37 +0000

flight 186426 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186426/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-raw       8 xen-boot         fail in 186413 pass in 186426
 test-armhf-armhf-xl-credit2   8 xen-boot                   fail pass in 186413

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186406
 test-armhf-armhf-libvirt      8 xen-boot            fail in 186413 like 186406
 test-armhf-armhf-examine      8 reboot              fail in 186413 like 186406
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186413 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186413 never pass
 test-armhf-armhf-xl-arndale 15 migrate-support-check fail in 186413 never pass
 test-armhf-armhf-xl-arndale 16 saverestore-support-check fail in 186413 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186406
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186406
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186406
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186406
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186406
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186406
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e5b3efbe1ab1793bb49ae07d56d0973267e65112
baseline version:
 linux                92e5605a199efbaee59fb19e15d6cc2103a04ec2

Last test of basis   186406  2024-06-19 02:45:48 Z    1 days
Testing same since   186413  2024-06-19 17:42:05 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Marangi <ansuelsmth@gmail.com>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Martin Schiller <ms@dev.tdt.de>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   92e5605a199e..e5b3efbe1ab1  e5b3efbe1ab1793bb49ae07d56d0973267e65112 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:19:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:19:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744482.1151518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHhV-0002vD-HL; Thu, 20 Jun 2024 13:19:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744482.1151518; Thu, 20 Jun 2024 13:19:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHhV-0002v6-Ed; Thu, 20 Jun 2024 13:19:41 +0000
Received: by outflank-mailman (input) for mailman id 744482;
 Thu, 20 Jun 2024 13:19:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DyN3=NW=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sKHhU-0002v0-79
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:19:40 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id beee9dc0-2f07-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 15:19:39 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 48A504EE0738;
 Thu, 20 Jun 2024 15:19:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: beee9dc0-2f07-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Thu, 20 Jun 2024 15:19:38 +0200
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>,
 xen-devel@lists.xenproject.org
Subject: Re: [PATCH] automation/eclair_analysis: deviate common/unlzo.c for
 MISRA Rule 7.3
Reply-To: alessandro.zucchelli@bugseng.com
Mail-Reply-To: alessandro.zucchelli@bugseng.com
In-Reply-To: <alpine.DEB.2.22.394.2406191817310.2572888@ubuntu-linux-20-04-desktop>
References: <20342a68627d5fe7c85c50f64e9300e9a587974b.1718704260.git.alessandro.zucchelli@bugseng.com>
 <63d11da5-4a5a-4354-ab57-67fbb7110f45@suse.com>
 <alpine.DEB.2.22.394.2406191817310.2572888@ubuntu-linux-20-04-desktop>
Message-ID: <566b0cb9b718b5719a6b497b36e90ab4@bugseng.com>
X-Sender: alessandro.zucchelli@bugseng.com
Organization: BUGSENG Srl
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-20 03:17, Stefano Stabellini wrote:
> On Tue, 18 Jun 2024, Jan Beulich wrote:
>> On 18.06.2024 11:54, Alessandro Zucchelli wrote:
>> > The file contains violations of Rule 7.3, which states the following: The
>> > lowercase character `l' shall not be used in a literal suffix.
>> >
>> > This file defines a non-compliant constant used in a macro expanded in a
>> > non-excluded file, so this deviation is needed in order to avoid
>> > a spurious violation involving both files.
>> 
>> Imo it would be nice to be specific in such cases: Which constant? And
>> which macro expanded in which file?
> 
> Hi Alessandro,
> if you give me the details, I could add it to the commit message on 
> commit

Hi,

The file common/unlzo.c defines the non-compliant constant LZO_BLOCK_SIZE
as "256*1024l" (note the lowercase 'l'). The file itself is excluded from
ECLAIR analysis, but the constant is used in the macro xmalloc_bytes,
which is expanded in the non-excluded file include/xen/xmalloc.h, so this
configuration is needed to exclude the resulting spurious violations.

-- 
Alessandro Zucchelli, B.Sc.

Software Engineer, BUGSENG (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:26:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:26:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744488.1151528 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHoB-0004S9-7Z; Thu, 20 Jun 2024 13:26:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744488.1151528; Thu, 20 Jun 2024 13:26:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKHoB-0004S2-3Q; Thu, 20 Jun 2024 13:26:35 +0000
Received: by outflank-mailman (input) for mailman id 744488;
 Thu, 20 Jun 2024 13:26:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHoA-0004Rs-90; Thu, 20 Jun 2024 13:26:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHoA-0007pf-2K; Thu, 20 Jun 2024 13:26:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHo9-0004u4-NP; Thu, 20 Jun 2024 13:26:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKHo9-0005VN-Mq; Thu, 20 Jun 2024 13:26:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gJiLAhjcgWwp94W+zi8prV0t8pbWOrF9hE/vtVpPLbo=; b=sKPgbeAY4XVXLgdhIevuFKNnOW
	2v+1yTCMd7cUtZcqhHlAQ6d1YPGY655/9oUllQuPOHUWFR2Eb0mWaNiZiy4voT6v8p/Xyg/IkUJR6
	q4L/6jCGjig+1ojt5+oAGU3Fqzw64ogQQnOvcrE/Oy+4Y73y5jnK7aJu1jn8GJ13IB8I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186432-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186432: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 13:26:33 +0000

flight 186432 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186432/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186411

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    1 days
Testing same since   186412  2024-06-19 15:03:58 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 43d5c5d5f70b3f5419e7ef30399d23adf6ddfa8e
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed Jun 19 14:11:07 2024 +0200

    xen: avoid UB in guest handle arithmetic
    
    At least XENMEM_memory_exchange can have huge values passed in the
    nr_extents and nr_exchanged fields. Adding such values to pointers can
    overflow, resulting in UB. Cast respective pointers to "unsigned long"
    while at the same time making the necessary multiplication explicit.
    Remaining arithmetic is, despite there possibly being mathematical
    overflow, okay as per the C99 spec: "A computation involving unsigned
    operands can never overflow, because a result that cannot be represented
    by the resulting unsigned integer type is reduced modulo the number that
    is one greater than the largest value that can be represented by the
    resulting type." The overflow that we need to guard against is checked
    for in array_access_ok().
    
    Note that in / down from array_access_ok() the address value is only
    ever cast to "unsigned long" anyway, which is why in the invocation from
    guest_handle_subrange_okay() the value doesn't need casting back to
    pointer type.
    
    In compat grant table code change two guest_handle_add_offset() to avoid
    passing in negative offsets.
    
    Since {,__}clear_guest_offset() need touching anyway, also deal with
    another (latent) issue there: They were losing the handle type, i.e. the
    size of the individual objects accessed. Luckily the few users we
    presently have all pass char or uint8 handles.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 267122a24c499d26278ab2dbdfb46ebcaaf38474
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 16:14:36 2021 +0100

    x86/defns: Clean up X86_{XCR0,XSS}_* constants
    
    With the exception of one case in read_bndcfgu() which can use ilog2(),
    the *_POS defines are unused.  Drop them.
    
    X86_XCR0_X87 is the name used by both the SDM and APM, rather than
    X86_XCR0_FP.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 71cacfb035f4a78ee10970dc38a3baa04d387451
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpuid: Fix handling of XSAVE dynamic leaves
    
    First, if XSAVE is available in hardware but not visible to the guest, the
    dynamic leaves shouldn't be filled in.
    
    Second, the comment concerning XSS state is wrong.  VT-x doesn't manage
    host/guest state automatically, but there is provision for "host only" bits to
    be set, so the implications are still accurate.
    
    Introduce xstate_compressed_size() to mirror the uncompressed one.  Cross
    check it at boot.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit fdb7e77fea4cb1c98dc51dd891a47f7e94612ad4
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/cpu-policy: Simplify recalculate_xstate()
    
    Make use of xstate_uncompressed_size() helper rather than maintaining the
    running calculation while accumulating feature components.
    
    The rest of the CPUID data can come direct from the raw cpu policy.  All
    per-component data form an ABI through the behaviour of the X{SAVE,RSTOR}*
    instructions.
    
    Use for_each_set_bit() rather than opencoding a slightly awkward version of
    it.  Mask the attributes in ecx down based on the visible features.  This
    isn't actually necessary for any components or attributes defined at the time
    of writing (up to AMX), but is added out of an abundance of caution.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit df09dfb94de66f7523837c050616a382aa2c7d17
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Apr 30 20:17:55 2021 +0100

    x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
    
    We're soon going to need a compressed helper of the same form.
    
    The size of the uncompressed image depends on the single element with the
    largest offset + size.  Sadly this isn't always the element with the largest
    index.
    
    Name the per-xstate-component cpu_policy structure, for legibility of the logic
    in xstate_uncompressed_size().  Cross-check with hardware during boot, and
    remove hw_uncompressed_size().
    
    This means that the migration paths don't need to mess with XCR0 just to
    sanity check the buffer size.  It also means we can drop the "fastpath" check
    against xfeature_mask (there to skip some XCR0 writes); this path is going to
    be dead logic the moment Xen starts using supervisor states itself.
    
    The users of hw_uncompressed_size() in xstate_init() can (and indeed need to)
    be replaced with CPUID instructions.  They run with feature_mask in XCR0, and
    prior to setup_xstate_features() on the BSP.
    
    No practical change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit a09022a09e1a79b3f9574993993bfad803b32596
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 23 00:55:34 2024 +0100

    x86/boot: Collect the Raw CPU Policy earlier on boot
    
    This is a tangle, but it's a small step in the right direction.
    
    In the following change, xstate_init() is going to start using the Raw policy.
    
    calculate_raw_cpu_policy() is sufficiently separate from the other policies to
    safely move like this.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit d31a111940de5431c8bf465b1d38b89f1130a24b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Feb 21 17:56:57 2020 +0000

    x86/xstate: Cross-check dynamic XSTATE sizes at boot
    
    Right now, xstate_ctxt_size() performs a cross-check of size with CPUID for
    every call.  This is expensive, being used for domain create/migrate, as well
    as to service certain guest CPUID instructions.
    
    Instead, arrange to check the sizes once at boot.  See the code comments for
    details.  Right now, it just checks hardware against the algorithm
    expectations.  Later patches will cross-check Xen's XSTATE calculations too.
    
    Introduce more X86_XCR0_* and X86_XSS_* constants CPUID bits.  This is to
    maximise coverage in the sanity check, even if we don't expect to
    use/virtualise some of these features any time soon.  Leave HDC and HWP alone
    for now; we don't have CPUID bits from them stored nicely.
    
    Only perform the cross-checks when SELF_TESTS are active.  It's only
    developers or new hardware liable to trip these checks, and Xen at least
    tracks "maximum value ever seen in xcr0" for the lifetime of the VM, which we
    don't want to be tickling in the general case.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 9e6dbbe8bf400aacb99009ddffa91d2a0c312b39
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 17:23:54 2024 +0100

    x86/xstate: Fix initialisation of XSS cache
    
    The clobbering of this_cpu(xcr0) and this_cpu(xss) to architecturally invalid
    values is to force the subsequent set_xcr0() and set_msr_xss() to reload the
    hardware register.
    
    While XCR0 is reloaded in xstate_init(), MSR_XSS isn't.  This causes
    get_msr_xss() to return the invalid value, and logic of the form:
    
        old = get_msr_xss();
        set_msr_xss(new);
        ...
        set_msr_xss(old);
    
    to try and restore said invalid value.
    
    The architecturally invalid value must be purged from the cache, meaning the
    hardware register must be written at least once.  This in turn highlights that
    the invalid value must only be used in the case that the hardware register is
    available.
    
    Fixes: f7f4a523927f ("x86/xstate: reset cached register values on resume")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit aba98c8d671bd290e978ec154d0baf042e093a65
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 14 13:05:40 2024 +0100

    xen/arch: Centralise __read_mostly and __ro_after_init
    
    Having these live in cache.h is inherited from Linux, but cache.h is not a
    terribly appropriate location for them.
    
    __read_mostly is an optimisation related to data placement in order to avoid
    having shared data in cachelines that are likely to be written to, but it
    really is just a section of the linked image separating data by usage
    patterns; it has nothing to do with cache sizes or flushing logic.
    
    Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
    arch/cache.h, and has literally nothing whatsoever to do with caches.
    
    Move the definitions into xen/sections.h, which in particular means that
    RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
    to provide a short description of what these are used for.
    
    For now, leave TODO comments next to the other identical definitions.  It
    turns out that unpicking cache.h is more complicated than it appears because a
    number of files use it for transitive dependencies.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 82f480944718d9e8340a6ac1af41ece7851115bf
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Jun 18 13:48:35 2024 +0100

    xen/irq: Address MISRA Rule 8.3 violation
    
    When centralising irq_ack_none(), different architectures had different names
    for the parameter.  As its type is struct irq_desc *, it should be named
    desc.  Make this consistent.
    
    No functional change.
    
    Fixes: 8aeda4a241ab ("arch/irq: Make irq_ack_none() mandatory")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744508.1151559 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008Mh-VF; Thu, 20 Jun 2024 13:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744508.1151559; Thu, 20 Jun 2024 13:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008Jo-Pq; Thu, 20 Jun 2024 13:51:58 +0000
Received: by outflank-mailman (input) for mailman id 744508;
 Thu, 20 Jun 2024 13:51:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICk-0008AF-0s
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:58 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 42782e75-2f0c-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 15:51:57 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id CCA504EE073D;
 Thu, 20 Jun 2024 15:51:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42782e75-2f0c-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 06/13] x86/mce: address violations of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:40 +0200
Message-Id: <e418ddee2ff65e12e0e4e45d54acc2fa1b752d4b.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of MISRA C Rule 16.3:
"An unconditional `break' statement shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpu/mcheck/mce_amd.c   | 1 +
 xen/arch/x86/cpu/mcheck/mce_intel.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.c b/xen/arch/x86/cpu/mcheck/mce_amd.c
index 3318b8204f..4f06a3153b 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.c
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.c
@@ -201,6 +201,7 @@ static void mcequirk_amd_apply(enum mcequirk_amd_flags flags)
 
     default:
         ASSERT(flags == MCEQUIRK_NONE);
+        break;
     }
 }
 
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index dd812f4b8a..9574dedbfd 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -896,6 +896,8 @@ static void intel_init_ppin(const struct cpuinfo_x86 *c)
             ppin_msr = 0;
         else if ( c == &boot_cpu_data )
             ppin_msr = MSR_PPIN;
+
+        break;
     }
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744509.1151581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICm-0000bC-CM; Thu, 20 Jun 2024 13:52:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744509.1151581; Thu, 20 Jun 2024 13:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICm-0000Zq-8k; Thu, 20 Jun 2024 13:52:00 +0000
Received: by outflank-mailman (input) for mailman id 744509;
 Thu, 20 Jun 2024 13:51:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICk-0008AK-Ry
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:58 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 41d3483f-2f0c-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 15:51:56 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id C30204EE0754;
 Thu, 20 Jun 2024 15:51:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41d3483f-2f0c-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 04/13] x86/vpmu: address violations of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:38 +0200
Message-Id: <2b08ef10fb4907255688d5fb38b5928e972407e9.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of MISRA C Rule
16.3: "An unconditional `break' statement shall terminate every
switch-clause".
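
As a minimal standalone sketch (not code from this patch), a switch that satisfies Rule 16.3 ends every clause, including the final default, with an unconditional break:

```c
#include <assert.h>

/* Hypothetical example: a MISRA C Rule 16.3 compliant switch.
 * Every switch-clause, including the last one and the default,
 * ends with an unconditional break. */
static int classify(int op)
{
    int ret;

    switch ( op )
    {
    case 0:
        ret = 10;
        break;
    case 1:
        ret = 20;
        break;
    default:
        ret = -1;
        break;  /* required by Rule 16.3 even in the last clause */
    }

    return ret;
}
```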

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpu/vpmu.c       | 3 +++
 xen/arch/x86/cpu/vpmu_intel.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index a7bc0cd1fc..b2ba999412 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -663,6 +663,8 @@ long do_xenpmu_op(
 
         if ( pmu_params.version.maj != XENPMU_VER_MAJ )
             return -EINVAL;
+
+        break;
     }
 
     switch ( op )
@@ -776,6 +778,7 @@ long do_xenpmu_op(
 
     default:
         ret = -EINVAL;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index cd414165df..46f3ff86e7 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -713,6 +713,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             break;
         default:
             rdmsrl(msr, *msr_content);
+            break;
         }
     }
     else if ( msr == MSR_IA32_MISC_ENABLE )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744505.1151544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008Ah-8F; Thu, 20 Jun 2024 13:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744505.1151544; Thu, 20 Jun 2024 13:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008Aa-4i; Thu, 20 Jun 2024 13:51:58 +0000
Received: by outflank-mailman (input) for mailman id 744505;
 Thu, 20 Jun 2024 13:51:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICi-0008AF-Ot
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:56 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4185b786-2f0c-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 15:51:55 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 384184EE0739;
 Thu, 20 Jun 2024 15:51:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4185b786-2f0c-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 03/13] x86/domctl: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:37 +0200
Message-Id: <a9a7eefc36c74bc16d7ce8ce10be188974667379.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of MISRA C Rule
16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/domctl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9190e11faa..68b5b46d1a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -517,6 +517,7 @@ long arch_do_domctl(
 
         default:
             ret = -ENOSYS;
+            break;
         }
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744512.1151597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICn-0000sg-Cb; Thu, 20 Jun 2024 13:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744512.1151597; Thu, 20 Jun 2024 13:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICn-0000pI-6p; Thu, 20 Jun 2024 13:52:01 +0000
Received: by outflank-mailman (input) for mailman id 744512;
 Thu, 20 Jun 2024 13:51:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICl-0008AK-Rz
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:59 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42289b38-2f0c-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 15:51:56 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 497F44EE0739;
 Thu, 20 Jun 2024 15:51:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42289b38-2f0c-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 05/13] x86/traps: address violations of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:39 +0200
Message-Id: <dc14fa77a9e38c70d12f3abd531e64afd12eef50.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add break statements or the pseudo keyword fallthrough to address
violations of MISRA C Rule 16.3: "An unconditional `break' statement
shall terminate every switch-clause".
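
Where the fall through is intentional, the pseudo keyword marks it explicitly. A self-contained sketch follows; the macro definition is an assumption for a standalone build (Xen defines its own in its headers), and nmi_action() is a hypothetical stand-in for the NMI option handling touched by this patch:

```c
#include <assert.h>

/* Assumed definition for a standalone build; Xen/Linux define
 * the fallthrough pseudo keyword similarly in their headers. */
#if defined(__GNUC__) && __GNUC__ >= 7
#define fallthrough __attribute__((__fallthrough__))
#else
#define fallthrough do {} while (0)
#endif

static int nmi_action(char opt)
{
    int reported = 0;

    switch ( opt )
    {
    case 'd':           /* 'dom0': report, then ignore */
        reported = 1;
        fallthrough;    /* deliberate fall through into 'i' */
    case 'i':           /* 'ignore' */
        break;
    default:            /* 'fatal' */
        reported = -1;
        break;
    }

    return reported;
}
```

With the pseudo keyword, the compiler (-Wimplicit-fallthrough) and static analyzers both recognize the fall through as intended.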

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/traps.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 9906e874d5..cbcec3fafb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 }
 
@@ -1748,6 +1749,7 @@ static void io_check_error(const struct cpu_user_regs *regs)
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_io_error);
+        fallthrough;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
@@ -1768,6 +1770,7 @@ static void unknown_nmi_error(const struct cpu_user_regs *regs,
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_unknown);
+        fallthrough;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744507.1151552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008FQ-Lv; Thu, 20 Jun 2024 13:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744507.1151552; Thu, 20 Jun 2024 13:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008EJ-HW; Thu, 20 Jun 2024 13:51:58 +0000
Received: by outflank-mailman (input) for mailman id 744507;
 Thu, 20 Jun 2024 13:51:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICj-0008AK-Rw
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:57 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 412d62f4-2f0c-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 15:51:55 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id A051E4EE073D;
 Thu, 20 Jun 2024 15:51:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 412d62f4-2f0c-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 02/13] x86/cpuid: use fallthrough pseudo keyword
Date: Thu, 20 Jun 2024 15:51:36 +0200
Message-Id: <23fa8f1061b894d4dd121bc9c2fcaf2c3ad312f3.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current comment making the fallthrough intention explicit does
not follow the agreed syntax: replace it with the pseudo keyword.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpuid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index a822e80c7e..2a777436ee 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -97,9 +97,8 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         if ( is_viridian_domain(d) )
             return cpuid_viridian_leaves(v, leaf, subleaf, res);
 
+        fallthrough;
         /*
-         * Fallthrough.
-         *
          * Intel reserve up until 0x4fffffff for hypervisor use.  AMD reserve
          * only until 0x400000ff, but we already use double that.
          */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744506.1151549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008CZ-GZ; Thu, 20 Jun 2024 13:51:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744506.1151549; Thu, 20 Jun 2024 13:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICk-0008CB-BM; Thu, 20 Jun 2024 13:51:58 +0000
Received: by outflank-mailman (input) for mailman id 744506;
 Thu, 20 Jun 2024 13:51:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICj-0008AK-6R
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:57 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 40fe713c-2f0c-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 15:51:55 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 1B2714EE0738;
 Thu, 20 Jun 2024 15:51:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40fe713c-2f0c-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 01/13] automation/eclair: consider also hyphenated fall-through
Date: Thu, 20 Jun 2024 15:51:35 +0200
Message-Id: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration:
- allow deviations of MISRA C Rule 16.3 using different versions of
  the hyphenated fall-through comment;
- search for the comment up to 2 lines after the last statement.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 docs/misra/deviations.rst                        | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index b8f9155267..b99a6b8a92 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -400,7 +400,7 @@ safe."
 -doc_end
 
 -doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
--config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
+-config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all[ -]?through.? \\*/.*$,0..2))))"}
 -doc_end
 
 -doc_begin="Switch statements having a controlling expression of enum type deliberately do not have a default case: gcc -Wall enables -Wswitch which warns (and breaks the build as we use -Werror) if one of the enum labels is missing from the switch."
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 41cdfbe5f5..411e1fed3d 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -353,6 +353,10 @@ Deviations related to MISRA C:2012 Rules:
        However, the use of such comments in new code is deprecated:
        the pseudo-keyword "fallthrough" shall be used.
      - Tagged as `safe` for ECLAIR. The accepted comments are:
+         - /\* fall-through \*/
+         - /\* fall-through. \*/
+         - /\* Fall-through \*/
+         - /\* Fall-through. \*/
          - /\* fall through \*/
          - /\* fall through. \*/
          - /\* fallthrough \*/
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744510.1151587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICm-0000g4-Mk; Thu, 20 Jun 2024 13:52:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744510.1151587; Thu, 20 Jun 2024 13:52:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICm-0000eg-Id; Thu, 20 Jun 2024 13:52:00 +0000
Received: by outflank-mailman (input) for mailman id 744510;
 Thu, 20 Jun 2024 13:51:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICk-0008AF-VW
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:58 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4314fd0d-2f0c-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 15:51:58 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id DFE794EE073D;
 Thu, 20 Jun 2024 15:51:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4314fd0d-2f0c-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 08/13] x86/vpt: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:42 +0200
Message-Id: <5b9dbf5595a0aa8d7454dcd7be9fae3a12bf2630.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the pseudo keyword fallthrough to meet the requirements for
deviating a violation of MISRA C Rule 16.3 ("An unconditional
`break' statement shall terminate every switch-clause").

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vpt.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index e1d6845a28..c76a9a272b 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -121,6 +121,8 @@ static int pt_irq_masked(struct periodic_time *pt)
     }
 
     /* Fallthrough to check if the interrupt is masked on the IO APIC. */
+    fallthrough;
+
     case PTSRC_ioapic:
     {
         int mask = vioapic_get_mask(v->domain, gsi);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744511.1151594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICn-0000mM-3t; Thu, 20 Jun 2024 13:52:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744511.1151594; Thu, 20 Jun 2024 13:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICm-0000ko-Sw; Thu, 20 Jun 2024 13:52:00 +0000
Received: by outflank-mailman (input) for mailman id 744511;
 Thu, 20 Jun 2024 13:51:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICl-0008AF-G5
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:51:59 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 436bee80-2f0c-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 15:51:59 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 6075A4EE0754;
 Thu, 20 Jun 2024 15:51:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 436bee80-2f0c-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 09/13] x86/mm: add defensive return
Date: Thu, 20 Jun 2024 15:51:43 +0200
Message-Id: <282da9981826370ca839503477fb515af14ff4b5.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a defensive return statement at the end of an unreachable
default case. Besides improving safety, this meets the requirements
for deviating a violation of MISRA C Rule 16.3.
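
The pattern can be sketched standalone as follows; page_ref_ok() is a hypothetical stand-in (not Xen's get_page_from_l1e()), and plain assert(0) stands in for ASSERT_UNREACHABLE():

```c
#include <assert.h>

/* Hypothetical sketch of a defensive return after an unreachable
 * default case: even in builds where the assertion is compiled out
 * (NDEBUG), the function still returns a safe value instead of
 * falling out of the switch with no return. */
static int page_ref_ok(int type)
{
    switch ( type )
    {
    case 0:
    case 1:
        return 1;
    default:
        assert(0);      /* stand-in for ASSERT_UNREACHABLE() */
        return 0;       /* defensive return; also terminates the clause */
    }
}
```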

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/mm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 648d6dd475..2e19ced15e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -916,6 +916,7 @@ get_page_from_l1e(
                 return 0;
             default:
                 ASSERT_UNREACHABLE();
+                return 0;
             }
         }
         else if ( l1f & _PAGE_RW )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744513.1151617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICp-0001PU-40; Thu, 20 Jun 2024 13:52:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744513.1151617; Thu, 20 Jun 2024 13:52:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICo-0001Ns-Ol; Thu, 20 Jun 2024 13:52:02 +0000
Received: by outflank-mailman (input) for mailman id 744513;
 Thu, 20 Jun 2024 13:52:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICm-0008AK-S6
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:52:00 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42cf13a2-2f0c-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 15:51:58 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 619DB4EE0738;
 Thu, 20 Jun 2024 15:51:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42cf13a2-2f0c-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 07/13] x86/hvm: address violations of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:41 +0200
Message-Id: <ee3c18c1a4cf6836d3d5e991908c6ae4ebda6b74.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 16.3 states that "An unconditional `break' statement shall
terminate every switch-clause".

Add the pseudo keyword fallthrough and missing break statements
to address violations of the rule.

As a defensive measure, return -EOPNOTSUPP in case an unreachable return
statement is reached.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/emulate.c   | 3 +++
 xen/arch/x86/hvm/hvm.c       | 6 ++++++
 xen/arch/x86/hvm/hypercall.c | 1 +
 xen/arch/x86/hvm/irq.c       | 1 +
 4 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 02e378365b..6d0fba9285 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 
     if ( hvmemul_ctxt->ctxt.retire.singlestep )
@@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
         /* fallthrough */
     default:
         hvm_emulate_writeback(&ctxt);
+        break;
     }
 
     return rc;
@@ -2803,6 +2805,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     default:
         ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
         rc = hvm_emulate_one(&ctx, VIO_no_completion);
+        break;
     }
 
     switch ( rc )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f4b627b1f..c263e562ff 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4919,6 +4919,8 @@ static int do_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
  out:
@@ -5020,6 +5022,8 @@ static int compat_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
     return rc;
@@ -5283,6 +5287,8 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
          * %cs and %tr are unconditionally present.  SVM ignores these present
          * bits and will happily run without them set.
          */
+        fallthrough;
+
     case x86_seg_cs:
         reg->p = 1;
         break;
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 7fb3136f0c..2271afe02a 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     case 8:
         eax = regs->rax;
         /* Fallthrough to permission check. */
+        fallthrough;
     case 4:
     case 2:
         if ( currd->arch.monitor.guest_request_userspace_enabled &&
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 210cebb0e6..1eab44defc 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -282,6 +282,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
             __hvm_pci_intx_assert(d, pdev, pintx);
         else
             __hvm_pci_intx_deassert(d, pdev, pintx);
+        break;
     default:
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744514.1151623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICp-0001V6-MM; Thu, 20 Jun 2024 13:52:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744514.1151623; Thu, 20 Jun 2024 13:52:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICp-0001TU-9t; Thu, 20 Jun 2024 13:52:03 +0000
Received: by outflank-mailman (input) for mailman id 744514;
 Thu, 20 Jun 2024 13:52:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICn-0008AF-Hp
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:52:01 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44a2e37d-2f0c-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 15:52:01 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 7671F4EE0754;
 Thu, 20 Jun 2024 15:52:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44a2e37d-2f0c-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 12/13] x86/vPIC: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:46 +0200
Message-Id: <22c3d314f769b683241f8bdabf006225714aa07a.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the pseudo-keyword fallthrough to meet the requirements to deviate
a violation of MISRA C Rule 16.3: "An unconditional `break' statement
shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vpic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index 7c3b5c7254..6427b08086 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -309,6 +309,7 @@ static void vpic_ioport_write(
             if ( !(vpic->init_state & 8) )
                 break; /* CASCADE mode: wait for write to ICW3. */
             /* SNGL mode: fall through (no ICW3). */
+            fallthrough;
         case 2:
             /* ICW3 */
             vpic->init_state++;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744515.1151626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICq-0001e6-3f; Thu, 20 Jun 2024 13:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744515.1151626; Thu, 20 Jun 2024 13:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICp-0001bU-Q1; Thu, 20 Jun 2024 13:52:03 +0000
Received: by outflank-mailman (input) for mailman id 744515;
 Thu, 20 Jun 2024 13:52:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICn-0008AK-S7
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:52:01 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43d5c31b-2f0c-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 15:51:59 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 063B24EE073D;
 Thu, 20 Jun 2024 15:51:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43d5c31b-2f0c-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 10/13] x86/mpparse: address a violation of MISRA C Rule
Date: Thu, 20 Jun 2024 15:51:44 +0200
Message-Id: <317de680730ffebaae490be5841a7d413c420a54.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of MISRA C Rule
16.3: "An unconditional `break' statement shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/mpparse.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449..306d8ed97a 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -544,6 +544,7 @@ static inline void __init construct_default_ISA_mptable(int mpc_default_type)
 		case 4:
 		case 7:
 			memcpy(bus.mpc_bustype, "MCA   ", 6);
+			break;
 	}
 	MP_bus_info(&bus);
 	if (mpc_default_type > 4) {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744516.1151637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICq-0001nO-RF; Thu, 20 Jun 2024 13:52:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744516.1151637; Thu, 20 Jun 2024 13:52:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICq-0001kv-Dc; Thu, 20 Jun 2024 13:52:04 +0000
Received: by outflank-mailman (input) for mailman id 744516;
 Thu, 20 Jun 2024 13:52:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICo-0008AF-18
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:52:02 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 44ec8317-2f0c-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 15:52:01 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 098864EE0738;
 Thu, 20 Jun 2024 15:52:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44ec8317-2f0c-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 13/13] x86/vlapic: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:47 +0200
Message-Id: <b37ab35baab67cc147d5f21e15234adc08cd7af9.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statement to address a violation of MISRA C
Rule 16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vlapic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9cfc82666a..2ec9594271 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -367,6 +367,7 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
         gdprintk(XENLOG_ERR, "TODO: unsupported delivery mode in ICR %x\n",
                  icr_low);
         domain_crash(v->domain);
+        break;
     }
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 13:52:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 13:52:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744517.1151644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICr-00020w-NL; Thu, 20 Jun 2024 13:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744517.1151644; Thu, 20 Jun 2024 13:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKICr-0001vf-3A; Thu, 20 Jun 2024 13:52:05 +0000
Received: by outflank-mailman (input) for mailman id 744517;
 Thu, 20 Jun 2024 13:52:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKICo-0008AK-ST
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 13:52:02 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 444fefa2-2f0c-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 15:52:00 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id E5AC14EE0739;
 Thu, 20 Jun 2024 15:51:59 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 444fefa2-2f0c-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>
Subject: [XEN PATCH 11/13] x86/pmtimer: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 15:51:45 +0200
Message-Id: <3cdd3de1e1d91fb9fbef173f3f1a3470e7dadbca.1718890095.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
References: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718890095.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statement to address a violation of MISRA C Rule 16.3
("An unconditional `break' statement shall terminate every
switch-clause").

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/pmtimer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 97099ac305..87a7a01c9f 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -185,6 +185,7 @@ static int cf_check handle_evt_io(
                 gdprintk(XENLOG_WARNING, 
                          "Bad ACPI PM register write: %x bytes (%x) at %x\n", 
                          bytes, *val, port);
+                break;
             }
         }
         /* Fix up the SCI state to match the new register state */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744609.1151684 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN2-00083w-SA; Thu, 20 Jun 2024 14:02:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744609.1151684; Thu, 20 Jun 2024 14:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN2-00083n-OA; Thu, 20 Jun 2024 14:02:36 +0000
Received: by outflank-mailman (input) for mailman id 744609;
 Thu, 20 Jun 2024 14:02:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN1-0007p2-2p
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:35 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bda636d8-2f0d-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:02:33 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 7F0E14EE0755;
 Thu, 20 Jun 2024 16:02:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bda636d8-2f0d-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH 01/13] automation/eclair: consider also hyphenated fall-through
Date: Thu, 20 Jun 2024 16:02:12 +0200
Message-Id: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update the ECLAIR configuration to deviate MISRA C Rule 16.3 also for
hyphenated versions of the fall-through comment, and search for the
comment up to 2 lines after the last statement.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 docs/misra/deviations.rst                        | 4 ++++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index b8f9155267..b99a6b8a92 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -400,7 +400,7 @@ safe."
 -doc_end
 
 -doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
--config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
+-config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all[ -]?through.? \\*/.*$,0..2))))"}
 -doc_end
 
 -doc_begin="Switch statements having a controlling expression of enum type deliberately do not have a default case: gcc -Wall enables -Wswitch which warns (and breaks the build as we use -Werror) if one of the enum labels is missing from the switch."
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 41cdfbe5f5..411e1fed3d 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -353,6 +353,10 @@ Deviations related to MISRA C:2012 Rules:
        However, the use of such comments in new code is deprecated:
        the pseudo-keyword "fallthrough" shall be used.
      - Tagged as `safe` for ECLAIR. The accepted comments are:
+         - /\* fall-through \*/
+         - /\* Fall-through. \*/
+         - /\* Fall-through \*/
+         - /\* fall-through. \*/
          - /\* fall through \*/
          - /\* fall through. \*/
          - /\* fallthrough \*/
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744611.1151704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN4-00005m-8X; Thu, 20 Jun 2024 14:02:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744611.1151704; Thu, 20 Jun 2024 14:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN4-00005e-5Z; Thu, 20 Jun 2024 14:02:38 +0000
Received: by outflank-mailman (input) for mailman id 744611;
 Thu, 20 Jun 2024 14:02:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN2-0007p2-3t
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:36 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be6ff81b-2f0d-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:02:35 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 2DA334EE0754;
 Thu, 20 Jun 2024 16:02:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be6ff81b-2f0d-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 03/13] x86/domctl: add missing break statement
Date: Thu, 20 Jun 2024 16:02:14 +0200
Message-Id: <a9a7eefc36c74bc16d7ce8ce10be188974667379.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statement to address a violation of MISRA C Rule 16.3.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/domctl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9190e11faa..68b5b46d1a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -517,6 +517,7 @@ long arch_do_domctl(
 
         default:
             ret = -ENOSYS;
+            break;
         }
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744608.1151674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN1-0007pP-GW; Thu, 20 Jun 2024 14:02:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744608.1151674; Thu, 20 Jun 2024 14:02:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN1-0007pI-D0; Thu, 20 Jun 2024 14:02:35 +0000
Received: by outflank-mailman (input) for mailman id 744608;
 Thu, 20 Jun 2024 14:02:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN0-0007p3-Ga
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:34 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bce82612-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:32 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id DC19F4EE0738;
 Thu, 20 Jun 2024 16:02:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bce82612-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 00/13] x86: address violations of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 16:02:11 +0200
Message-Id: <cover.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch series addresses violations of MISRA C Rule 16.3 and updates
the ECLAIR configuration to also consider hyphenated fall-through
comments as deviations from the rule.

Federico Serafini (13):
  automation/eclair: consider also hyphenated fall-through
  x86/cpuid: use fallthrough pseudo keyword
  x86/domctl: add missing break statement
  x86/vpmu: address violations of MISRA C Rule 16.3
  x86/traps: use fallthrough pseudo keyword
  x86/mce: add missing break statements
  x86/hvm: address violations of MISRA C Rule 16.3
  x86/vpt: address a violation of MISRA C Rule 16.3
  x86/mm: add defensive return
  x86/mpparse: add break statement
  x86/pmtimer: address a violation of MISRA C Rule 16.3
  x86/vPIC: address a violation of MISRA C Rule 16.3
  x86/vlapic: address a violation of MISRA C Rule 16.3

 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 docs/misra/deviations.rst                        | 4 ++++
 xen/arch/x86/cpu/mcheck/mce_amd.c                | 1 +
 xen/arch/x86/cpu/mcheck/mce_intel.c              | 2 ++
 xen/arch/x86/cpu/vpmu.c                          | 3 +++
 xen/arch/x86/cpu/vpmu_intel.c                    | 1 +
 xen/arch/x86/cpuid.c                             | 3 +--
 xen/arch/x86/domctl.c                            | 1 +
 xen/arch/x86/hvm/emulate.c                       | 3 +++
 xen/arch/x86/hvm/hvm.c                           | 6 ++++++
 xen/arch/x86/hvm/hypercall.c                     | 1 +
 xen/arch/x86/hvm/irq.c                           | 1 +
 xen/arch/x86/hvm/pmtimer.c                       | 1 +
 xen/arch/x86/hvm/vlapic.c                        | 1 +
 xen/arch/x86/hvm/vpic.c                          | 1 +
 xen/arch/x86/hvm/vpt.c                           | 2 ++
 xen/arch/x86/mm.c                                | 1 +
 xen/arch/x86/mpparse.c                           | 1 +
 xen/arch/x86/traps.c                             | 3 +++
 19 files changed, 35 insertions(+), 3 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744610.1151689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN3-00086s-4e; Thu, 20 Jun 2024 14:02:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744610.1151689; Thu, 20 Jun 2024 14:02:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN2-00086K-VT; Thu, 20 Jun 2024 14:02:36 +0000
Received: by outflank-mailman (input) for mailman id 744610;
 Thu, 20 Jun 2024 14:02:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN1-0007p2-9v
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:35 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id be091fc4-2f0d-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:02:34 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 881EC4EE0756;
 Thu, 20 Jun 2024 16:02:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be091fc4-2f0d-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 02/13] x86/cpuid: use fallthrough pseudo keyword
Date: Thu, 20 Jun 2024 16:02:13 +0200
Message-Id: <23fa8f1061b894d4dd121bc9c2fcaf2c3ad312f3.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current comment making the fall-through intention explicit does
not follow the agreed syntax: replace it with the pseudo-keyword.

No functional change.
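For readers unfamiliar with the convention, the pattern the patch adopts can be sketched as follows. This is a minimal illustration, not Xen code: the `fallthrough` macro definition and the `classify()` function are hypothetical stand-ins (in the Xen tree the pseudo keyword is defined once, centrally, so supported compilers see the fallthrough attribute rather than a comment).

```c
#include <assert.h>

/*
 * Hypothetical stand-in for the pseudo keyword: on GCC >= 7 it expands
 * to the fallthrough statement attribute, which silences
 * -Wimplicit-fallthrough at this point; elsewhere it is a no-op.
 */
#if defined(__GNUC__) && __GNUC__ >= 7
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while ( 0 )
#endif

/* Toy classifier: leaves 1 and 2 deliberately share the second step. */
static int classify(int leaf)
{
    int rc = 0;

    switch ( leaf )
    {
    case 1:
        rc += 10;
        fallthrough;    /* intentional: leaf 1 also gets leaf 2's work */
    case 2:
        rc += 1;
        break;
    default:
        rc = -1;
        break;
    }

    return rc;
}
```

The point of the pseudo keyword over a `/* Fallthrough */` comment is that the compiler (and static analyzers) can verify it: a comment is invisible to diagnostics, whereas the attribute is checked.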

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpuid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index a822e80c7e..2a777436ee 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -97,9 +97,8 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         if ( is_viridian_domain(d) )
             return cpuid_viridian_leaves(v, leaf, subleaf, res);
 
+        fallthrough;
         /*
-         * Fallthrough.
-         *
          * Intel reserve up until 0x4fffffff for hypervisor use.  AMD reserve
          * only until 0x400000ff, but we already use double that.
          */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744612.1151709 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN4-000080-Kq; Thu, 20 Jun 2024 14:02:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744612.1151709; Thu, 20 Jun 2024 14:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN4-00007c-Do; Thu, 20 Jun 2024 14:02:38 +0000
Received: by outflank-mailman (input) for mailman id 744612;
 Thu, 20 Jun 2024 14:02:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN3-0007p3-82
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:37 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id beec8c11-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:35 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id D86744EE0756;
 Thu, 20 Jun 2024 16:02:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: beec8c11-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 04/13] x86/vpmu: address violations of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 16:02:15 +0200
Message-Id: <2b08ef10fb4907255688d5fb38b5928e972407e9.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of MISRA C Rule
16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.
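The shape of the fix can be sketched as below. The function and values are illustrative, not taken from vpmu.c: the point is only that Rule 16.3 requires every switch-clause, including the final one before the closing brace, to end in an unconditional `break`, even where control would fall out of the switch anyway.

```c
#include <assert.h>

/* Minimal sketch of the Rule 16.3 fix pattern (names are hypothetical). */
static int op_result(int op)
{
    int ret = 0;

    switch ( op )
    {
    case 0:
        ret = 1;
        break;
    default:
        ret = -22;  /* -EINVAL-style error; the value is illustrative */
        break;      /* required by Rule 16.3 even in the last clause */
    }

    return ret;
}
```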

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpu/vpmu.c       | 3 +++
 xen/arch/x86/cpu/vpmu_intel.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index a7bc0cd1fc..b2ba999412 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -663,6 +663,8 @@ long do_xenpmu_op(
 
         if ( pmu_params.version.maj != XENPMU_VER_MAJ )
             return -EINVAL;
+
+        break;
     }
 
     switch ( op )
@@ -776,6 +778,7 @@ long do_xenpmu_op(
 
     default:
         ret = -EINVAL;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index cd414165df..46f3ff86e7 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -713,6 +713,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             break;
         default:
             rdmsrl(msr, *msr_content);
+            break;
         }
     }
     else if ( msr == MSR_IA32_MISC_ENABLE )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744613.1151724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN5-0000aQ-QF; Thu, 20 Jun 2024 14:02:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744613.1151724; Thu, 20 Jun 2024 14:02:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN5-0000Zu-LK; Thu, 20 Jun 2024 14:02:39 +0000
Received: by outflank-mailman (input) for mailman id 744613;
 Thu, 20 Jun 2024 14:02:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN4-0007p3-9N
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:38 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bf5b4856-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:36 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 9DE464EE0754;
 Thu, 20 Jun 2024 16:02:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf5b4856-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 05/13] x86/traps: use fallthrough pseudo keyword
Date: Thu, 20 Jun 2024 16:02:16 +0200
Message-Id: <dc14fa77a9e38c70d12f3abd531e64afd12eef50.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add break statements or the fallthrough pseudo keyword to address
violations of MISRA C Rule 16.3.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/traps.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 9906e874d5..cbcec3fafb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 }
 
@@ -1748,6 +1749,7 @@ static void io_check_error(const struct cpu_user_regs *regs)
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_io_error);
+        fallthrough;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
@@ -1768,6 +1770,7 @@ static void unknown_nmi_error(const struct cpu_user_regs *regs,
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_unknown);
+        fallthrough;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744614.1151734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN7-0000sy-2o; Thu, 20 Jun 2024 14:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744614.1151734; Thu, 20 Jun 2024 14:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN6-0000sr-WF; Thu, 20 Jun 2024 14:02:41 +0000
Received: by outflank-mailman (input) for mailman id 744614;
 Thu, 20 Jun 2024 14:02:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN5-0007p2-4n
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:39 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c0ac3d35-2f0d-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:02:38 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id CC3CC4EE0756;
 Thu, 20 Jun 2024 16:02:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0ac3d35-2f0d-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 08/13] x86/vpt: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 16:02:19 +0200
Message-Id: <5b9dbf5595a0aa8d7454dcd7be9fae3a12bf2630.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add pseudo keyword fallthrough to meet the requirements to deviate
a violation of MISRA C Rule 16.3 ("An unconditional `break'
statement shall terminate every switch-clause").

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vpt.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index e1d6845a28..c76a9a272b 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -121,6 +121,8 @@ static int pt_irq_masked(struct periodic_time *pt)
     }
 
     /* Fallthrough to check if the interrupt is masked on the IO APIC. */
+    fallthrough;
+
     case PTSRC_ioapic:
     {
         int mask = vioapic_get_mask(v->domain, gsi);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744615.1151739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN7-0000wI-HZ; Thu, 20 Jun 2024 14:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744615.1151739; Thu, 20 Jun 2024 14:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN7-0000vj-7u; Thu, 20 Jun 2024 14:02:41 +0000
Received: by outflank-mailman (input) for mailman id 744615;
 Thu, 20 Jun 2024 14:02:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN5-0007p3-89
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:39 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfc834f3-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:37 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 597F64EE0757;
 Thu, 20 Jun 2024 16:02:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfc834f3-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 06/13] x86/mce: add missing break statements
Date: Thu, 20 Jun 2024 16:02:17 +0200
Message-Id: <e418ddee2ff65e12e0e4e45d54acc2fa1b752d4b.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of MISRA C Rule 16.3.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpu/mcheck/mce_amd.c   | 1 +
 xen/arch/x86/cpu/mcheck/mce_intel.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.c b/xen/arch/x86/cpu/mcheck/mce_amd.c
index 3318b8204f..4f06a3153b 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.c
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.c
@@ -201,6 +201,7 @@ static void mcequirk_amd_apply(enum mcequirk_amd_flags flags)
 
     default:
         ASSERT(flags == MCEQUIRK_NONE);
+        break;
     }
 }
 
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index dd812f4b8a..9574dedbfd 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -896,6 +896,8 @@ static void intel_init_ppin(const struct cpuinfo_x86 *c)
             ppin_msr = 0;
         else if ( c == &boot_cpu_data )
             ppin_msr = MSR_PPIN;
+
+        break;
     }
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744617.1151744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN7-0000zs-Re; Thu, 20 Jun 2024 14:02:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744617.1151744; Thu, 20 Jun 2024 14:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN7-0000z2-HM; Thu, 20 Jun 2024 14:02:41 +0000
Received: by outflank-mailman (input) for mailman id 744617;
 Thu, 20 Jun 2024 14:02:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN6-0007p3-8U
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:40 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c0367b87-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:37 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 1C7C14EE0758;
 Thu, 20 Jun 2024 16:02:37 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0367b87-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 07/13] x86/hvm: address violations of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 16:02:18 +0200
Message-Id: <ee3c18c1a4cf6836d3d5e991908c6ae4ebda6b74.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 16.3 states that "An unconditional `break' statement shall
terminate every switch-clause".

Add the fallthrough pseudo keyword and missing break statements
to address violations of the rule.

As a defensive measure, return -EOPNOTSUPP in case a supposedly
unreachable default case is reached.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/emulate.c   | 3 +++
 xen/arch/x86/hvm/hvm.c       | 6 ++++++
 xen/arch/x86/hvm/hypercall.c | 1 +
 xen/arch/x86/hvm/irq.c       | 1 +
 4 files changed, 11 insertions(+)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 02e378365b..6d0fba9285 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 
     if ( hvmemul_ctxt->ctxt.retire.singlestep )
@@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
         /* fallthrough */
     default:
         hvm_emulate_writeback(&ctxt);
+        break;
     }
 
     return rc;
@@ -2803,6 +2805,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     default:
         ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
         rc = hvm_emulate_one(&ctx, VIO_no_completion);
+        break;
     }
 
     switch ( rc )
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f4b627b1f..c263e562ff 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4919,6 +4919,8 @@ static int do_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
  out:
@@ -5020,6 +5022,8 @@ static int compat_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
     return rc;
@@ -5283,6 +5287,8 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
          * %cs and %tr are unconditionally present.  SVM ignores these present
          * bits and will happily run without them set.
          */
+        fallthrough;
+
     case x86_seg_cs:
         reg->p = 1;
         break;
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 7fb3136f0c..2271afe02a 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     case 8:
         eax = regs->rax;
         /* Fallthrough to permission check. */
+        fallthrough;
     case 4:
     case 2:
         if ( currd->arch.monitor.guest_request_userspace_enabled &&
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 210cebb0e6..1eab44defc 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -282,6 +282,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
             __hvm_pci_intx_assert(d, pdev, pintx);
         else
             __hvm_pci_intx_deassert(d, pdev, pintx);
+        break;
     default:
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744618.1151756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN9-0001QE-75; Thu, 20 Jun 2024 14:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744618.1151756; Thu, 20 Jun 2024 14:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN8-0001PU-UZ; Thu, 20 Jun 2024 14:02:42 +0000
Received: by outflank-mailman (input) for mailman id 744618;
 Thu, 20 Jun 2024 14:02:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN7-0007p3-8W
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:41 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1323561-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:39 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 9B80D4EE0755;
 Thu, 20 Jun 2024 16:02:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1323561-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 09/13] x86/mm: add defensive return
Date: Thu, 20 Jun 2024 16:02:20 +0200
Message-Id: <282da9981826370ca839503477fb515af14ff4b5.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a defensive return statement at the end of an unreachable
default case. Besides improving safety, this meets the requirements
to deviate a violation of MISRA C Rule 16.3.
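The "defensive return" pattern the patch applies can be sketched as follows. This is an illustration under assumed names (`page_ok()` and its cases are hypothetical, not the get_page_from_l1e() logic): the default case is believed unreachable, but still returns a safe failure value so that a logic error cannot fall out of the switch with an unhandled state in release builds.

```c
#include <assert.h>

static int page_ok(int type)
{
    switch ( type )
    {
    case 1:
        return 1;   /* known-good type */
    case 2:
        return 0;   /* known-bad type */
    default:
        /* In Xen, ASSERT_UNREACHABLE() would fire here in debug builds. */
        return 0;   /* defensive: fail safe in release builds */
    }
}
```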

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/mm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 648d6dd475..2e19ced15e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -916,6 +916,7 @@ get_page_from_l1e(
                 return 0;
             default:
                 ASSERT_UNREACHABLE();
+                return 0;
             }
         }
         else if ( l1f & _PAGE_RW )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744619.1151764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN9-0001Up-Pm; Thu, 20 Jun 2024 14:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744619.1151764; Thu, 20 Jun 2024 14:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIN9-0001Tf-B0; Thu, 20 Jun 2024 14:02:43 +0000
Received: by outflank-mailman (input) for mailman id 744619;
 Thu, 20 Jun 2024 14:02:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN7-0007p2-Lk
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:41 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c22d24bd-2f0d-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:02:41 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 57AD74EE0757;
 Thu, 20 Jun 2024 16:02:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c22d24bd-2f0d-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 11/13] x86/pmtimer: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 16:02:22 +0200
Message-Id: <3cdd3de1e1d91fb9fbef173f3f1a3470e7dadbca.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statement to address a violation of MISRA C Rule 16.3
("An unconditional `break' statement shall terminate every
switch-clause").

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/pmtimer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 97099ac305..87a7a01c9f 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -185,6 +185,7 @@ static int cf_check handle_evt_io(
                 gdprintk(XENLOG_WARNING, 
                          "Bad ACPI PM register write: %x bytes (%x) at %x\n", 
                          bytes, *val, port);
+                break;
             }
         }
         /* Fix up the SCI state to match the new register state */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744622.1151777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKINB-00023a-Qe; Thu, 20 Jun 2024 14:02:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744622.1151777; Thu, 20 Jun 2024 14:02:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKINB-00021i-FT; Thu, 20 Jun 2024 14:02:45 +0000
Received: by outflank-mailman (input) for mailman id 744622;
 Thu, 20 Jun 2024 14:02:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN9-0007p2-5T
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:43 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c2a66b2c-2f0d-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:02:42 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 26FD04EE0754;
 Thu, 20 Jun 2024 16:02:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2a66b2c-2f0d-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 12/13] x86/vPIC: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 16:02:23 +0200
Message-Id: <22c3d314f769b683241f8bdabf006225714aa07a.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the pseudo-keyword fallthrough to meet the requirements to deviate
a violation of MISRA C Rule 16.3: "An unconditional `break' statement
shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vpic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index 7c3b5c7254..6427b08086 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -309,6 +309,7 @@ static void vpic_ioport_write(
             if ( !(vpic->init_state & 8) )
                 break; /* CASCADE mode: wait for write to ICW3. */
             /* SNGL mode: fall through (no ICW3). */
+            fallthrough;
         case 2:
             /* ICW3 */
             vpic->init_state++;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:02:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:02:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744623.1151784 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKINC-00029i-BH; Thu, 20 Jun 2024 14:02:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744623.1151784; Thu, 20 Jun 2024 14:02:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKINB-000285-Vy; Thu, 20 Jun 2024 14:02:45 +0000
Received: by outflank-mailman (input) for mailman id 744623;
 Thu, 20 Jun 2024 14:02:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIN9-0007p3-8w
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:43 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1b879de-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:40 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id 772614EE0756;
 Thu, 20 Jun 2024 16:02:39 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1b879de-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 10/13] x86/mpparse: add break statement
Date: Thu, 20 Jun 2024 16:02:21 +0200
Message-Id: <317de680730ffebaae490be5841a7d413c420a54.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of MISRA C Rule
16.3.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/mpparse.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449..306d8ed97a 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -544,6 +544,7 @@ static inline void __init construct_default_ISA_mptable(int mpc_default_type)
 		case 4:
 		case 7:
 			memcpy(bus.mpc_bustype, "MCA   ", 6);
+			break;
 	}
 	MP_bus_info(&bus);
 	if (mpc_default_type > 4) {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:08:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:08:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744703.1151804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKISN-0006XD-R6; Thu, 20 Jun 2024 14:08:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744703.1151804; Thu, 20 Jun 2024 14:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKISN-0006X6-OS; Thu, 20 Jun 2024 14:08:07 +0000
Received: by outflank-mailman (input) for mailman id 744703;
 Thu, 20 Jun 2024 14:08:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKINC-0007p3-9N
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:02:46 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c30e5dbd-2f0d-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:02:42 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.165.219])
 by support.bugseng.com (Postfix) with ESMTPSA id EA6BE4EE0758;
 Thu, 20 Jun 2024 16:02:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c30e5dbd-2f0d-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH 13/13] x86/vlapic: address a violation of MISRA C Rule 16.3
Date: Thu, 20 Jun 2024 16:02:24 +0200
Message-Id: <b37ab35baab67cc147d5f21e15234adc08cd7af9.1718892030.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1718892030.git.federico.serafini@bugseng.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of MISRA C
Rule 16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vlapic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9cfc82666a..2ec9594271 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -367,6 +367,7 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
         gdprintk(XENLOG_ERR, "TODO: unsupported delivery mode in ICR %x\n",
                  icr_low);
         domain_crash(v->domain);
+        break;
     }
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:10:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:10:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744714.1151813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIUj-0007yr-6e; Thu, 20 Jun 2024 14:10:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744714.1151813; Thu, 20 Jun 2024 14:10:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIUj-0007yk-3l; Thu, 20 Jun 2024 14:10:33 +0000
Received: by outflank-mailman (input) for mailman id 744714;
 Thu, 20 Jun 2024 14:10:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKIUh-000761-Jv
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:10:31 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d9a0b82b-2f0e-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:10:30 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec1ac1aed2so10336541fa.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 07:10:29 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705cc96eb9bsm12374902b3a.81.2024.06.20.07.10.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 07:10:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9a0b82b-2f0e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718892629; x=1719497429; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=dLibOHro7nMGYz0pO+wN0jLNcwMVfXyUcOvWCE38Pc0=;
        b=grahzZp34ABYdguIPLdhxEAMVmtJAUFEPyMjwkGxee4w+HbkIOgDitaxtRiCszHc9c
         hmunpyl5x6+AH3a4znmtmPf1JBsOCBaBzr4HBoi2sxurOZp6z+TeRBsoyA5KNnBllv4F
         fRqK7zXxwFxZcvQWdBAppxKQgfMInTMJCpXhLXF7NJXXzegv0M9NR7WX1nk5PdhlA/6B
         S6d9jT/NghSi+h9zIFoTjbV3DljYeglZVf/s5OUAVn3sxvEAlq8gx8UECWZyhucDRPeA
         HncvLFh9yMUBqfKAYX/git4mhyl+qcl7DRpdMUrx9Juhvl+EntH+uOxsRjtRiwT1LyS4
         WWsw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718892629; x=1719497429;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=dLibOHro7nMGYz0pO+wN0jLNcwMVfXyUcOvWCE38Pc0=;
        b=oXL+fChY4KNALJX34Sa0wgezWY6meKfdJje4QBM1M2W79sIuRUOBEYjFMxpoot0jr4
         RsSAwjQ/WChZYUfaSevcscmijzB1SzX7FEh8O3amWzJS18cTCpkWuflmPoEW6Frig839
         kBGl2OZyZf4PSK4PpWu8ymvW9rT+cpwu3YTKqX13SSn4i991Hw9TSH961p4Dtn0Gtfqw
         gol3CYZCSeJy/vZ+q3imr8QeUWO4vP+U9Gcu0DtbSJA5s3X1bUGN10CEPaGpBNzioM2/
         h6skuYCGVrhNQbuLd8DYp016wZ4XfCxMX77HyJb+24mWT5dbJs5r9k0otVayoJYpF6qB
         Trsw==
X-Forwarded-Encrypted: i=1; AJvYcCVovUqzTQPp6WMSxL8cru8LZ+ouE5KofaImFjxSHd4QKXbRBnev1rafyzVpHUDZboRcxI9xfTIfydPxj9N3xe+wpv7Fu/uRX6th1rkpFgM=
X-Gm-Message-State: AOJu0YybrOwUXTLQbMhszyek7h2mFtF0A0jKB3TioAOOqhB3LI8TfGWh
	U9kYE2MDb8MWMB6ltORG/CnS6alyTf1Qp+R6tbKY0mfNR2EyFfuWhkRSOYM54nMxeT+vY4WEUK8
	=
X-Google-Smtp-Source: AGHT+IHEMzVIzcgPw6U1pmHJXoDYZChmalJOuVfmV+Y1WOZbLbi3uQGQeFV3VSlqtkMmgYRmRys6Hw==
X-Received: by 2002:a05:651c:90:b0:2eb:e329:5508 with SMTP id 38308e7fff4ca-2ec3ced5bd4mr33211951fa.27.1718892629260;
        Thu, 20 Jun 2024 07:10:29 -0700 (PDT)
Message-ID: <c6a2fb51-1ffe-4d13-9894-5ca3169c392e@suse.com>
Date: Thu, 20 Jun 2024 16:10:22 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] automation/eclair_analysis: deviate common/unlzo.c for
 MISRA Rule 7.3
To: alessandro.zucchelli@bugseng.com,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
References: <20342a68627d5fe7c85c50f64e9300e9a587974b.1718704260.git.alessandro.zucchelli@bugseng.com>
 <63d11da5-4a5a-4354-ab57-67fbb7110f45@suse.com>
 <alpine.DEB.2.22.394.2406191817310.2572888@ubuntu-linux-20-04-desktop>
 <566b0cb9b718b5719a6b497b36e90ab4@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <566b0cb9b718b5719a6b497b36e90ab4@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 15:19, Alessandro Zucchelli wrote:
> On 2024-06-20 03:17, Stefano Stabellini wrote:
>> On Tue, 18 Jun 2024, Jan Beulich wrote:
>>> On 18.06.2024 11:54, Alessandro Zucchelli wrote:
>>>> The file contains violations of Rule 7.3 which states as following: The
>>>> lowercase character `l' shall not be used in a literal suffix.
>>>>
>>>> This file defines a non-compliant constant used in a macro expanded in a
>>>> non-excluded file, so this deviation is needed in order to avoid
>>>> a spurious violation involving both files.
>>>
>>> Imo it would be nice to be specific in such cases: Which constant? And
>>> which macro expanded in which file?
>>
>> Hi Alessandro,
>> if you give me the details, I could add it to the commit message on 
>> commit
> 
> The file common/unlzo.c defines the non-compliant constant
> LZO_BLOCK_SIZE as "256*1024l" (note the 'l'). The file itself is
> excluded from the ECLAIR analysis, but the constant is used in the
> macro xmalloc_bytes, which is expanded in the non-excluded file
> include/xen/xmalloc.h; these specific violations therefore need this
> configuration in order to be excluded.

Would it perhaps be easier to swap the l for an L?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:15:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:15:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744722.1151824 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIZs-000095-OP; Thu, 20 Jun 2024 14:15:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744722.1151824; Thu, 20 Jun 2024 14:15:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIZs-00008y-LT; Thu, 20 Jun 2024 14:15:52 +0000
Received: by outflank-mailman (input) for mailman id 744722;
 Thu, 20 Jun 2024 14:15:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKIZr-00008s-8p
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:15:51 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 97fec7b2-2f0f-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:15:49 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ec002caeb3so11286991fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 07:15:49 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-705ccb90a52sm12368294b3a.210.2024.06.20.07.15.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 07:15:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97fec7b2-2f0f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718892949; x=1719497749; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gbU2ee9jF4hkOpmKc2Nxm5GvPvRgK+PQP3h7UAYtCUw=;
        b=Jjt9DRB3i5z/d+oLh9yc3nJPHFnPHh4WaMGUaE0FMNHt2JD7w0E7yIB8IQyeKJkMpc
         ri1qdQkhFyxpRiWYEu/it9YJ9l7SrmypQqFDJMF9dShSOfeKvUDAgwywvR9nk3WMTXwq
         nWGkJdaQETemZOFp6jwLeLemZE/6TC36QBWy4WUr/Ka1unigmhB/GxgRmvMIUFngZjAQ
         THKWqqUhAwLann8Wv494onFN2qGBkWgKkvMm/xbrdqTiJAHh6LOZ8rOzigGfED/h/wjF
         s0tcsXTG5fGVa4MfuwyE+u9HXL4sIoIvW1eCYVaHLm3OCOoNmMxkvP04JLZLoGO361U6
         hFYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718892949; x=1719497749;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gbU2ee9jF4hkOpmKc2Nxm5GvPvRgK+PQP3h7UAYtCUw=;
        b=goPBVul9OQFruHYcxPmQDKtFQQPpJ8gCmtHHB8//BIgkKGGdMvjaDv3ZCirFe0+jlT
         zjxe+V2EdNbuLMCKi4fx5eJE7RI2KjH9wE/meeiOTivtoeP+R9HChLltMNXCr5/mdnpT
         9BSgx6pFByWWSPXZ+n5rBd8DKo5l+0B66ODRhXtBMoJBFMIhMcJnO6U5sXubXzUMDbfb
         Kdi1SvLVEKRGDa1r6NE3dCCwsfXQVv9ZC50KIk9ApNbKS3h0rws9+C8qO60ja+pEAPdc
         PAyykk/r9/tTsMrUlpg5WI8MOL/ea16Pn0mYIAnJJtcEBB5CGoojbqjNO8/NiZE4paWh
         dp3Q==
X-Forwarded-Encrypted: i=1; AJvYcCWauhRzHsMVcQguT95WNlO6SUMcdbGfFas7yYoimDsCgODMjXC5O/J2G522kM8Nsjxv82FFuFnQwx2rzkpQgfcdmrV/9U8pt/zH1u9sLBY=
X-Gm-Message-State: AOJu0YyjYulhrDfITdTeVsF/xfBDkNkOu5TpUoagRY9Sze6ZdX9Fc56K
	SwjAbYS7CkBwW2p2DZhmr2dyscZsQBVTbXCfsc+Kda7GL+kNxvhkifK73N3PjQ==
X-Google-Smtp-Source: AGHT+IFQMown054xx4S8PYhqYplypnRCeXdPHxjGhE7O2HwUfzD0YTHbv1q9suBSt2iOabYpiw0Xjg==
X-Received: by 2002:a2e:8811:0:b0:2eb:e9cf:e179 with SMTP id 38308e7fff4ca-2ec3ce9419fmr38769801fa.21.1718892948758;
        Thu, 20 Jun 2024 07:15:48 -0700 (PDT)
Message-ID: <af656be1-3950-45f3-9da9-f377d9e6396c@suse.com>
Date: Thu, 20 Jun 2024 16:15:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 01/13] automation/eclair: consider also hyphenated
 fall-through
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1718892030.git.federico.serafini@bugseng.com>
 <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718892030.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718892030.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 16:02, Federico Serafini wrote:
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -400,7 +400,7 @@ safe."
>  -doc_end
>  
>  -doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
> --config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
> +-config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all[ -]?through.? \\*/.*$,0..2))))"}

It is a regex, isn't it? Doesn't the period also need escaping (or enclosing
in square brackets)? (I realize it was like this before, but still.)

> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -353,6 +353,10 @@ Deviations related to MISRA C:2012 Rules:
>         However, the use of such comments in new code is deprecated:
>         the pseudo-keyword "fallthrough" shall be used.
>       - Tagged as `safe` for ECLAIR. The accepted comments are:
> +         - /\* fall-through \*/
> +         - /\* Fall-through. \*/
> +         - /\* Fall-through \*/
> +         - /\* fall-through. \*/
>           - /\* fall through \*/
>           - /\* fall through. \*/
>           - /\* fallthrough \*/

Nit: Can the capital-F and non-capital-f variants please be next to each other?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:22:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:22:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744731.1151834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIgL-0001qD-EO; Thu, 20 Jun 2024 14:22:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744731.1151834; Thu, 20 Jun 2024 14:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIgL-0001q6-9g; Thu, 20 Jun 2024 14:22:33 +0000
Received: by outflank-mailman (input) for mailman id 744731;
 Thu, 20 Jun 2024 14:22:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKIgJ-0001pz-52
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:22:31 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 860bd7a9-2f10-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:22:28 +0200 (CEST)
Received: from [192.168.1.113] (93-36-220-117.ip62.fastwebnet.it
 [93.36.220.117])
 by support.bugseng.com (Postfix) with ESMTPSA id EFB504EE0738;
 Thu, 20 Jun 2024 16:22:27 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 860bd7a9-2f10-11ef-b4bb-af5377834399
Message-ID: <e1e2d30c-6dc1-420b-aa4f-456cba6fb605@bugseng.com>
Date: Thu, 20 Jun 2024 16:22:27 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 01/13] automation/eclair: consider also hyphenated
 fall-through
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1718892030.git.federico.serafini@bugseng.com>
 <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718892030.git.federico.serafini@bugseng.com>
 <af656be1-3950-45f3-9da9-f377d9e6396c@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <af656be1-3950-45f3-9da9-f377d9e6396c@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 20/06/24 16:15, Jan Beulich wrote:
> On 20.06.2024 16:02, Federico Serafini wrote:
>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> @@ -400,7 +400,7 @@ safe."
>>   -doc_end
>>   
>>   -doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
>> --config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
>> +-config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all[ -]?through.? \\*/.*$,0..2))))"}
> 
> It is a regex, isn't it? Doesn't the period also need escaping (or enclosing
> in square brackets)? (I realize it was like this before, but still.)

Yes, thanks for noticing.

> 
>> --- a/docs/misra/deviations.rst
>> +++ b/docs/misra/deviations.rst
>> @@ -353,6 +353,10 @@ Deviations related to MISRA C:2012 Rules:
>>          However, the use of such comments in new code is deprecated:
>>          the pseudo-keyword "fallthrough" shall be used.
>>        - Tagged as `safe` for ECLAIR. The accepted comments are:
>> +         - /\* fall-through \*/
>> +         - /\* Fall-through. \*/
>> +         - /\* Fall-through \*/
>> +         - /\* fall-through. \*/
>>            - /\* fall through \*/
>>            - /\* fall through. \*/
>>            - /\* fallthrough \*/
> 
> Nit: Can the capital-F and non-capital-f variants please be next to each other?

Ok.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:32:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:32:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744742.1151843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIpf-0003ZK-B1; Thu, 20 Jun 2024 14:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744742.1151843; Thu, 20 Jun 2024 14:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIpf-0003ZD-88; Thu, 20 Jun 2024 14:32:11 +0000
Received: by outflank-mailman (input) for mailman id 744742;
 Thu, 20 Jun 2024 14:32:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gwum=NW=cloud.com=matthew.barnes@srs-se1.protection.inumbo.net>)
 id 1sKIpe-0003Z7-1h
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:32:10 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df2d6674-2f11-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 16:32:07 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a63359aaaa6so136499466b.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 07:32:07 -0700 (PDT)
Received: from EMEAENGAAD91498.citrite.net ([2.223.45.79])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6f56fa674bsm777411866b.210.2024.06.20.07.32.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Jun 2024 07:32:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df2d6674-2f11-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718893927; x=1719498727; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=nFWNbUX04u1jqwokJYULDxHiHNUVzv4LdA47biAV6MA=;
        b=ke4T+h+oUgGeIvr7OE0WUgKPjGEd+AYRDVfJmQfp2ZXr+CS201h9Wu8wli6jTxJFsu
         yQ59rMcJ1ccCFyMGvA3wo7G5D/7AxnTAuxJVofXxy083yifv6kHHN8cLOXzk4pQRdLsa
         UfaCMXLoJJZZJgrQHVvmXvSYLaoXHr0NifEJQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718893927; x=1719498727;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=nFWNbUX04u1jqwokJYULDxHiHNUVzv4LdA47biAV6MA=;
        b=bIC69fI7r4zPDXn63xcBth1b8/hgLef2rMUJTmEcF5OqvoX/8LQSvvjgY5QxjfVhqU
         5EQgm3bdV6MRkh9d/Yrr5mDPx6m/wLpdJEUHUmDHrvjxdJ5uj8YcL9PsvXj83/kWj6JL
         afYflaVNAqpj5AvKZ1SRO0FTDHcBJtFtHIGbBg1R5+A8POL+wfP/Ix3Jlv9skYjdzZX8
         jJ6LTLZ+BE2R9bJ4/6cF+5K2522EvnsxuPxzEZwYsOrxTQabtLfxPMiL67eqKe74szb/
         1bt9e5mpZ6yIR8pl8rODt06gL6uTU+gzB4tbh1IIn78kxfCjJMmyX1T+YLSzR7Yv48nu
         mnfw==
X-Gm-Message-State: AOJu0YxYCvSe94X2BQaZUEcWjYtCVBs2rQ4UT6VPrCOq5feM72a9R+e2
	1ulipNO5PxSFrZ6+GinXqinHcLUMNvaAapCujKqc/Ddhbxivu4IO8E77EdxyswgFPPLmQRdx2PU
	1
X-Google-Smtp-Source: AGHT+IEUvcK+vo1hmj3l9vNJmeQfFb4RFY9Whm35bcrYkvWEO5wNS6pCDXzbJ9W5oFCO55RCC6QQ5w==
X-Received: by 2002:a17:907:590:b0:a6f:b717:cb6 with SMTP id a640c23a62f3a-a6fb7171592mr206928466b.53.1718893927009;
        Thu, 20 Jun 2024 07:32:07 -0700 (PDT)
From: Matthew Barnes <matthew.barnes@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Matthew Barnes <matthew.barnes@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH] x86/apic: Fix signing in left bitshift
Date: Thu, 20 Jun 2024 15:31:54 +0100
Message-Id: <6fe6d88c0e07348d3e08fd51863402827126ecb0.1718893590.git.matthew.barnes@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There exists a bitshift in the IOAPIC code where a signed integer may be
shifted to the left by up to 31 bits. Shifting a 1 into the sign bit is
undefined behaviour, and can cause faults in xtf tests such as
pv64-pv-iopl~hypercall.

Fix this by changing the shifted constant from signed to unsigned.

Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
---
 xen/arch/x86/io_apic.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index b48a64246548..ae88b1b898fe 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1756,7 +1756,7 @@ static void cf_check end_level_ioapic_irq_new(struct irq_desc *desc, u8 vector)
          !io_apic_level_ack_pending(desc->irq) )
         move_native_irq(desc);
 
-    if (!(v & (1 << (i & 0x1f)))) {
+    if (!(v & (1U << (i & 0x1f)))) {
         spin_lock(&ioapic_lock);
         __mask_IO_APIC_irq(desc->irq);
         __edge_IO_APIC_irq(desc->irq);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:39:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:39:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744749.1151853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIwJ-0004Pi-0F; Thu, 20 Jun 2024 14:39:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744749.1151853; Thu, 20 Jun 2024 14:39:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIwI-0004Pb-Tu; Thu, 20 Jun 2024 14:39:02 +0000
Received: by outflank-mailman (input) for mailman id 744749;
 Thu, 20 Jun 2024 14:39:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nkum=NW=bounce.vates.tech=bounce-md_30504962.66743f02.v1-7d6c54d1c8194bb4a381d2d9a2e31000@srs-se1.protection.inumbo.net>)
 id 1sKIwH-0004PV-Vp
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:39:02 +0000
Received: from mail134-3.atl141.mandrillapp.com
 (mail134-3.atl141.mandrillapp.com [198.2.134.3])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d4e7ea7f-2f12-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:39:00 +0200 (CEST)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-3.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4W4jmL27RlzDRJ1X7
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 14:38:58 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 7d6c54d1c8194bb4a381d2d9a2e31000; Thu, 20 Jun 2024 14:38:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d4e7ea7f-2f12-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718894338; x=1719154838;
	bh=91Tg+JtM111mPgte8fdOPT6iTIZ3vjTubIw9uyh/OOc=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=HGnS6WsYz55Lc6pJDF9/jNaUGaxeAzkTuKH+P8x0Cuow+9o29HkRQQObBkV9nl4Na
	 gkJixEmJGJhCYUjmjIHN1fHYS8Hkq1xtpHWjYnCwNHS0RpR3Bp+dJbvojF2e/JRa1g
	 CVo/ybv+Q2w2RaTmoIQgKZ0qey3KNvGoqedPXaBp3MPIUvvdoT72XlvD/e+EGIHinw
	 fC0h2OHBG2lE5g2yV70CsvOJWpQFgiA//Wh5d3zwNpgyRbc2UdztUT0QjmshnOCxYZ
	 h81S0tLTPTYc/iD5n+pvV6t/cyDRkIHPEvNh3wPeJZLzvfnkVfje9gIeDhaDp5dg0X
	 +/BGQCz5UOxIQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718894338; x=1719154838; i=anthony.perard@vates.tech;
	bh=91Tg+JtM111mPgte8fdOPT6iTIZ3vjTubIw9uyh/OOc=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=ZkSMRWiU4uTh1HxpEDSIZn/Ox2YBU8zP1n6vXd6c0CtuST4c3FJyuddd5qq4uSd8R
	 4exJx5IAa5Fc6mBhNUUWWWSJVbZMzvAhcYKYzJfWXlJi3oYZJfpUJsIC37j/QtgCNv
	 We+dSAzstjVwZVw2Xfb7T+I3Ml0cy/Vp2EAHzbf4febzoBkDIcOyjCEyJsBROTQBDL
	 0PoPSq/y5hgKgbTCkXNBKqDQsg4P1k/sl9rSAQ1TCXEjjLuiJTEeUNxzEPLgeEfjqb
	 +tvoYUN02Rr7TVlyqEh7IxVzi32QoxEzndwqCDeU5pfGjSby9vw/czLA1Z76zarVpT
	 /AAyfPyllOIZA==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[XEN=20PATCH=20v10=204/5]=20tools:=20Add=20new=20function=20to=20get=20gsi=20from=20dev?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718894336030
To: Jiqian Chen <Jiqian.Chen@amd.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>, Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui <Ray.Huang@amd.com>
Message-Id: <ZnQ+/y/AGyasDGHY@l14>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com> <20240617090035.839640-5-Jiqian.Chen@amd.com>
In-Reply-To: <20240617090035.839640-5-Jiqian.Chen@amd.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.7d6c54d1c8194bb4a381d2d9a2e31000?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240620:md
Date: Thu, 20 Jun 2024 14:38:58 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Mon, Jun 17, 2024 at 05:00:34PM +0800, Jiqian Chen wrote:
> diff --git a/tools/include/xencall.h b/tools/include/xencall.h
> index fc95ed0fe58e..750aab070323 100644
> --- a/tools/include/xencall.h
> +++ b/tools/include/xencall.h
> @@ -113,6 +113,8 @@ int xencall5(xencall_handle *xcall, unsigned int op,
>               uint64_t arg1, uint64_t arg2, uint64_t arg3,
>               uint64_t arg4, uint64_t arg5);
>  
> +int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf);

I don't think that's an appropriate library for this new feature:
libxencall is a generic library for making hypercalls.

>  /* Variant(s) of the above, as needed, returning "long" instead of "int". */
>  long xencall2L(xencall_handle *xcall, unsigned int op,
>                 uint64_t arg1, uint64_t arg2);
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 9ceca0cffc2f..a0381f74d24b 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -1641,6 +1641,8 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
>                            uint32_t domid,
>                            int pirq);
>  
> +int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf);
> +
>  /*
>   *  LOGGING AND ERROR REPORTING
>   */
> diff --git a/tools/libs/call/core.c b/tools/libs/call/core.c
> index 02c4f8e1aefa..6dae50c9a6ba 100644
> --- a/tools/libs/call/core.c
> +++ b/tools/libs/call/core.c
> @@ -173,6 +173,11 @@ int xencall5(xencall_handle *xcall, unsigned int op,
>      return osdep_hypercall(xcall, &call);
>  }
>  
> +int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf)
> +{
> +    return osdep_oscall(xcall, sbdf);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libs/call/libxencall.map b/tools/libs/call/libxencall.map
> index d18a3174e9dc..b92a0b5dc12c 100644
> --- a/tools/libs/call/libxencall.map
> +++ b/tools/libs/call/libxencall.map
> @@ -10,6 +10,8 @@ VERS_1.0 {
>  		xencall4;
>  		xencall5;
>  
> +		xen_oscall_gsi_from_dev;

FYI, never change an already released version of a library; this would add
a new function to libxencall.1.0. Instead, when adding a new function
to a library that is supposed to be stable (they have a *.map file in
Xen's case), add it to a new section, which would be VERS_1.4 in this
case. But libxencall isn't appropriate for this new function anyway, so
this is just for future reference.
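For illustration, a new stable symbol would get its own version node in the
map file; a hypothetical sketch (the function name and the VERS_1.3
predecessor node are assumptions, not taken from the actual libxencall.map):

```
VERS_1.4 {
	global:
		xencall_some_new_function;
} VERS_1.3;
```

The new node inherits from the previous one, so consumers linked against
the existing versions keep working while new consumers bind to VERS_1.4.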

>  		xencall_alloc_buffer;
>  		xencall_free_buffer;
>  		xencall_alloc_buffer_pages;
> diff --git a/tools/libs/call/linux.c b/tools/libs/call/linux.c
> index 6d588e6bea8f..92c740e176f2 100644
> --- a/tools/libs/call/linux.c
> +++ b/tools/libs/call/linux.c
> @@ -85,6 +85,21 @@ long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
>      return ioctl(xcall->fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
>  }
>  
> +int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
> +{
> +    privcmd_gsi_from_dev_t dev_gsi = {
> +        .sbdf = sbdf,
> +        .gsi = -1,
> +    };
> +
> +    if (ioctl(xcall->fd, IOCTL_PRIVCMD_GSI_FROM_DEV, &dev_gsi)) {

Looks like libxencall is only for hypercalls, and so I don't think
it's the right place to introduce another ioctl() call.

> +        PERROR("failed to get gsi from dev");
> +        return -1;
> +    }
> +
> +    return dev_gsi.gsi;
> +}
> +
>  static void *alloc_pages_bufdev(xencall_handle *xcall, size_t npages)
>  {
>      void *p;
> diff --git a/tools/libs/call/private.h b/tools/libs/call/private.h
> index 9c3aa432efe2..cd6eb5a3e66f 100644
> --- a/tools/libs/call/private.h
> +++ b/tools/libs/call/private.h
> @@ -57,6 +57,15 @@ int osdep_xencall_close(xencall_handle *xcall);
>  
>  long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
>  
> +#if defined(__linux__)
> +int osdep_oscall(xencall_handle *xcall, unsigned int sbdf);
> +#else
> +static inline int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
> +{
> +    return -1;
> +}
> +#endif
> +
>  void *osdep_alloc_pages(xencall_handle *xcall, size_t nr_pages);
>  void osdep_free_pages(xencall_handle *xcall, void *p, size_t nr_pages);
>  
> diff --git a/tools/libs/ctrl/xc_physdev.c b/tools/libs/ctrl/xc_physdev.c
> index 460a8e779ce8..c1458f3a38b5 100644
> --- a/tools/libs/ctrl/xc_physdev.c
> +++ b/tools/libs/ctrl/xc_physdev.c
> @@ -111,3 +111,7 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
>      return rc;
>  }
>  
> +int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf)
> +{

I'm not sure if this is the best place for this new function, but I
can't find another one, so that will do.

> +    return xen_oscall_gsi_from_dev(xch->xcall, sbdf);
> +}
> diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
> index 37e4d1670986..6b616d5ee9b6 100644
> --- a/tools/libs/light/Makefile
> +++ b/tools/libs/light/Makefile
> @@ -40,7 +40,7 @@ OBJS-$(CONFIG_X86) += $(ACPI_OBJS)
>  
>  CFLAGS += -Wno-format-zero-length -Wmissing-declarations -Wformat-nonliteral
>  
> -CFLAGS-$(CONFIG_X86) += -DCONFIG_PCI_SUPP_LEGACY_IRQ
> +CFLAGS-$(CONFIG_X86) += -DCONFIG_PCI_SUPP_LEGACY_IRQ -DCONFIG_X86
>  
>  OBJS-$(CONFIG_X86) += libxl_cpuid.o
>  OBJS-$(CONFIG_X86) += libxl_x86.o
> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
> index 96cb4da0794e..376f91759ac6 100644
> --- a/tools/libs/light/libxl_pci.c
> +++ b/tools/libs/light/libxl_pci.c
> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>  #endif
>  }
>  
> +#define PCI_DEVID(bus, devfn)\
> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
> +
> +#define PCI_SBDF(seg, bus, devfn) \
> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
> +
>  static void pci_add_dm_done(libxl__egc *egc,
>                              pci_add_state *pas,
>                              int rc)
> @@ -1418,6 +1424,10 @@ static void pci_add_dm_done(libxl__egc *egc,
>      unsigned long long start, end, flags, size;
>      int irq, i;
>      int r;
> +#ifdef CONFIG_X86
> +    int gsi;
> +    uint32_t sbdf;
> +#endif
>      uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
>      uint32_t domainid = domid;
>      bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>          goto out_no_irq;
>      }
>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
> +#ifdef CONFIG_X86

Could you avoid these #ifdef, and move the new arch specific code (and
maybe existing code) into libxl_x86.c ? There's already examples of arch
specific code.

> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
> +                        (PCI_DEVFN(pci->dev, pci->func)));
> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
> +        /*
> +         * Old kernel version may not support this function,
> +         * so if fail, keep using irq; if success, use gsi
> +         */
> +        if (gsi > 0) {
> +            irq = gsi;
> +        }
> +#endif
>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>          if (r < 0) {
>              LOGED(ERROR, domainid, "xc_physdev_map_pirq irq=%d (error=%d)",
> @@ -2172,6 +2194,10 @@ static void pci_remove_detached(libxl__egc *egc,
>      int  irq = 0, i, stubdomid = 0;
>      const char *sysfs_path;
>      FILE *f;
> +#ifdef CONFIG_X86
> +    int gsi;
> +    uint32_t sbdf;
> +#endif
>      uint32_t domainid = prs->domid;
>      bool isstubdom;
>  
> @@ -2239,6 +2265,18 @@ skip_bar:
>      }
>  
>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
> +#ifdef CONFIG_X86
> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
> +                        (PCI_DEVFN(pci->dev, pci->func)));
> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
> +        /*
> +         * Old kernel version may not support this function,
> +         * so if fail, keep using irq; if success, use gsi
> +         */
> +        if (gsi > 0) {
> +            irq = gsi;
> +        }
> +#endif
>          rc = xc_physdev_unmap_pirq(ctx->xch, domid, irq);
>          if (rc < 0) {
>              /*

Thanks,

-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:39:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:39:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744750.1151864 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIwW-0004iw-8B; Thu, 20 Jun 2024 14:39:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744750.1151864; Thu, 20 Jun 2024 14:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKIwW-0004in-5L; Thu, 20 Jun 2024 14:39:16 +0000
Received: by outflank-mailman (input) for mailman id 744750;
 Thu, 20 Jun 2024 14:39:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sKIwV-0004iN-9W
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:39:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sKIwU-0000le-O2; Thu, 20 Jun 2024 14:39:14 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.0.211])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sKIwU-0007Zu-Hp; Thu, 20 Jun 2024 14:39:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=jhEcHgIlZfdJss6WFCMhzkj5kMkSAoudV6UuxVlBjlY=; b=u1UqwounVdmEKudk8K1HxU3cwB
	7mdCXHiT864a/nri04fjkOIiJO5do2JcB6+zKVaxPgF2HrS0mkP3mKpOapKXa19lw+D5BnvZvyZ7h
	SbDHd1bHh8Uz1G8BIF9nW+qUB6i802P0vTi2+iBhpJQ3ItfQoYeW+HUa9CBjE5e7UaWg=;
Message-ID: <846a944c-13a6-4b38-915a-d29136a90c19@xen.org>
Date: Thu, 20 Jun 2024 15:39:12 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 01/13] automation/eclair: consider also hypened
 fall-through
Content-Language: en-GB
To: Federico Serafini <federico.serafini@bugseng.com>,
 xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
 <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718892030.git.federico.serafini@bugseng.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718892030.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 20/06/2024 15:02, Federico Serafini wrote:
> Update ECLAIR configuration to deviate MISRA C Rule 16.3
> using different versions of hyphenated fall-through comments, and search
> for the comment up to 2 lines after the last statement.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
>   automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
>   docs/misra/deviations.rst                        | 4 ++++
>   2 files changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index b8f9155267..b99a6b8a92 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -400,7 +400,7 @@ safe."
>   -doc_end
>   
>   -doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
> --config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
> +-config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all[ -]?through.? \\*/.*$,0..2))))"}
>   -doc_end
>   
>   -doc_begin="Switch statements having a controlling expression of enum type deliberately do not have a default case: gcc -Wall enables -Wswitch which warns (and breaks the build as we use -Werror) if one of the enum labels is missing from the switch."
> diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> index 41cdfbe5f5..411e1fed3d 100644
> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -353,6 +353,10 @@ Deviations related to MISRA C:2012 Rules:
>          However, the use of such comments in new code is deprecated:
>          the pseudo-keyword "fallthrough" shall be used.
>        - Tagged as `safe` for ECLAIR. The accepted comments are:
> +         - /\* fall-through \*/
> +         - /\* Fall-through. \*/
> +         - /\* Fall-through \*/
> +         - /\* fall-through. \*/

How many places use the 4 above? The reason I am asking is that I am not 
particularly happy to add yet another set of variants.

I would rather we settled on a single comment and therefore converted 
them to the pseudo-keyword.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 14:58:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 14:58:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744772.1151879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJFT-00080M-Rm; Thu, 20 Jun 2024 14:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744772.1151879; Thu, 20 Jun 2024 14:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJFT-00080F-OS; Thu, 20 Jun 2024 14:58:51 +0000
Received: by outflank-mailman (input) for mailman id 744772;
 Thu, 20 Jun 2024 14:58:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fPla=NW=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKJFS-000806-II
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 14:58:50 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 99c5e908-2f15-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 16:58:49 +0200 (CEST)
Received: from [192.168.1.113] (93-36-220-117.ip62.fastwebnet.it
 [93.36.220.117])
 by support.bugseng.com (Postfix) with ESMTPSA id 8581B4EE0738;
 Thu, 20 Jun 2024 16:58:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99c5e908-2f15-11ef-90a3-e314d9c70b13
Message-ID: <7f1256d8-6db4-4320-9bd4-c89ab5e68ce6@bugseng.com>
Date: Thu, 20 Jun 2024 16:58:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH 01/13] automation/eclair: consider also hypened
 fall-through
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
References: <cover.1718892030.git.federico.serafini@bugseng.com>
 <10af9145252a2f5c31ea0f13cbb67cbe76a8ba3a.1718892030.git.federico.serafini@bugseng.com>
 <846a944c-13a6-4b38-915a-d29136a90c19@xen.org>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <846a944c-13a6-4b38-915a-d29136a90c19@xen.org>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 20/06/24 16:39, Julien Grall wrote:
> Hi,
> 
> On 20/06/2024 15:02, Federico Serafini wrote:
>> Update ECLAIR configuration to deviate MISRA C Rule 16.3
>> using different versions of hyphenated fall-through comments, and search
>> for the comment up to 2 lines after the last statement.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>> ---
>>   automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
>>   docs/misra/deviations.rst                        | 4 ++++
>>   2 files changed, 5 insertions(+), 1 deletion(-)
>>
>> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl 
>> b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> index b8f9155267..b99a6b8a92 100644
>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> @@ -400,7 +400,7 @@ safe."
>>   -doc_end
>>   -doc_begin="Switch clauses ending with an explicit comment 
>> indicating the fallthrough intention are safe."
>> --config=MC3R1.R16.3,reports+={safe, 
>> "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? 
>> \\*/.*$,0..1))))"}
>> +-config=MC3R1.R16.3,reports+={safe, 
>> "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all[ -]?through.? 
>> \\*/.*$,0..2))))"}
>>   -doc_end
>>   -doc_begin="Switch statements having a controlling expression of 
>> enum type deliberately do not have a default case: gcc -Wall enables 
>> -Wswitch which warns (and breaks the build as we use -Werror) if one 
>> of the enum labels is missing from the switch."
>> diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
>> index 41cdfbe5f5..411e1fed3d 100644
>> --- a/docs/misra/deviations.rst
>> +++ b/docs/misra/deviations.rst
>> @@ -353,6 +353,10 @@ Deviations related to MISRA C:2012 Rules:
>>          However, the use of such comments in new code is deprecated:
>>          the pseudo-keyword "fallthrough" shall be used.
>>        - Tagged as `safe` for ECLAIR. The accepted comments are:
>> +         - /\* fall-through \*/
>> +         - /\* Fall-through. \*/
>> +         - /\* Fall-through \*/
>> +         - /\* fall-through. \*/
> 
> How many places use the 4 above? The reason I am asking is that I am not 
> particularly happy to add yet another set of variants.
> 
> I would rather we settled on a single comment and therefore converted 
> them to the pseudo-keyword.

7 occurrences, and 3 of them are in xen/arch/x86/hvm/emulate.c.
PATCH 07/13 modifies emulate.c anyway, so I could remove them in a v2.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 15:04:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 15:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744779.1151890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJKT-00013v-EM; Thu, 20 Jun 2024 15:04:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744779.1151890; Thu, 20 Jun 2024 15:04:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJKT-00013o-Ak; Thu, 20 Jun 2024 15:04:01 +0000
Received: by outflank-mailman (input) for mailman id 744779;
 Thu, 20 Jun 2024 15:04:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKJKS-00013e-Cz; Thu, 20 Jun 2024 15:04:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKJKS-0001EX-1M; Thu, 20 Jun 2024 15:04:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKJKR-00079d-LD; Thu, 20 Jun 2024 15:03:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKJKR-00083O-Kh; Thu, 20 Jun 2024 15:03:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186427-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186427: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=43d2edc08f5a7b5e1d0c3464580113e069d1efa7
X-Osstest-Versions-That:
    libvirt=2b199ad3f1a69bd6de1b2a1fc7d5bd31817dcb11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 15:03:59 +0000

flight 186427 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186427/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186407
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              43d2edc08f5a7b5e1d0c3464580113e069d1efa7
baseline version:
 libvirt              2b199ad3f1a69bd6de1b2a1fc7d5bd31817dcb11

Last test of basis   186407  2024-06-19 04:22:27 Z    1 days
Testing same since   186427  2024-06-20 04:18:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Julis <ajulis@redhat.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Martin Kletzander <mkletzan@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Swapnil Ingle <swapnil.ingle@nutanix.com>
  Weblate <noreply-mt-weblate@weblate.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   2b199ad3f1..43d2edc08f  43d2edc08f5a7b5e1d0c3464580113e069d1efa7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 15:16:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 15:16:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744791.1151899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJVy-0002sr-In; Thu, 20 Jun 2024 15:15:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744791.1151899; Thu, 20 Jun 2024 15:15:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJVy-0002sk-Fv; Thu, 20 Jun 2024 15:15:54 +0000
Received: by outflank-mailman (input) for mailman id 744791;
 Thu, 20 Jun 2024 15:15:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKJVx-0002se-F8
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 15:15:53 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fb93a3fb-2f17-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 17:15:52 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2eaafda3b5cso10888091fa.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 08:15:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 41be03b00d2f7-6fedcf36c22sm11347360a12.9.2024.06.20.08.15.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 08:15:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb93a3fb-2f17-11ef-90a3-e314d9c70b13
Message-ID: <b2af04fd-1a7c-4e17-9683-b00c11521a24@suse.com>
Date: Thu, 20 Jun 2024 17:15:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] x86/apic: Fix signing in left bitshift
To: Matthew Barnes <matthew.barnes@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <6fe6d88c0e07348d3e08fd51863402827126ecb0.1718893590.git.matthew.barnes@cloud.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6fe6d88c0e07348d3e08fd51863402827126ecb0.1718893590.git.matthew.barnes@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 16:31, Matthew Barnes wrote:
> There exists a bitshift in the IOAPIC code where a signed integer is
> shifted to the left by at most 31 bits. This is undefined behaviour,

s/at most/up to/ maybe?

> and can cause faults in xtf tests such as pv64-pv-iopl~hypercall.
> 
> This patch fixes this by changing the integer from signed to unsigned.
> 
> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
> ---
>  xen/arch/x86/io_apic.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -1756,7 +1756,7 @@ static void cf_check end_level_ioapic_irq_new(struct irq_desc *desc, u8 vector)
>           !io_apic_level_ack_pending(desc->irq) )
>          move_native_irq(desc);
>  
> -    if (!(v & (1 << (i & 0x1f)))) {
> +    if (!(v & (1U << (i & 0x1f)))) {
>          spin_lock(&ioapic_lock);
>          __mask_IO_APIC_irq(desc->irq);
>          __edge_IO_APIC_irq(desc->irq);

For one, can you please also take care of the similar issue in
mask_and_ack_level_ioapic_irq()? And then here and there, can you please
also address the style issue(s) on the line(s) you're touching? In both
cases it will want to be

    if ( !(v & (1U << (i & 0x1f))) )
    {

thus bringing both fully into proper Xen style afaics. Then:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 15:16:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 15:16:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744796.1151909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJWl-0003Ln-Qq; Thu, 20 Jun 2024 15:16:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744796.1151909; Thu, 20 Jun 2024 15:16:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJWl-0003Lg-OA; Thu, 20 Jun 2024 15:16:43 +0000
Received: by outflank-mailman (input) for mailman id 744796;
 Thu, 20 Jun 2024 15:16:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WCyn=NW=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKJWk-0002se-Ht
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 15:16:42 +0000
Received: from mail-qk1-x731.google.com (mail-qk1-x731.google.com
 [2607:f8b0:4864:20::731])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 18ccb68b-2f18-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 17:16:42 +0200 (CEST)
Received: by mail-qk1-x731.google.com with SMTP id
 af79cd13be357-795ca45c54cso57665285a.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 08:16:41 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b2a5c466bfsm88189746d6.68.2024.06.20.08.16.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 08:16:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18ccb68b-2f18-11ef-90a3-e314d9c70b13
Message-ID: <96e8edb4-f9a8-46bf-a99c-cb458b0cb3f0@citrix.com>
Date: Thu, 20 Jun 2024 16:16:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN for-4.19 PATCH] x86/apic: Fix signing in left bitshift
To: Matthew Barnes <matthew.barnes@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <6fe6d88c0e07348d3e08fd51863402827126ecb0.1718893590.git.matthew.barnes@cloud.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <6fe6d88c0e07348d3e08fd51863402827126ecb0.1718893590.git.matthew.barnes@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 20/06/2024 3:31 pm, Matthew Barnes wrote:
> There exists a bitshift in the IOAPIC code where a signed integer is
> shifted to the left by at most 31 bits. This is undefined behaviour,
> and can cause faults in xtf tests such as pv64-pv-iopl~hypercall.
>
> This patch fixes this by changing the integer from signed to unsigned.
>
> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>

The code change itself is fine, but I'm going to recommend some
adjustments to the commit message.

It's "x86/ioapic";  apic implies the Local APIC, which lives in apic.c and
is distinct from the IO-APIC.  The subject would be clearer as "Fix signed
shift in end_level_ioapic_irq_new()".

The XTF test has nothing to do with this, so it shouldn't be mentioned like
this.  The UBSAN failure was in an interrupt handler, and it was pure
chance that it triggered while pv64-pv-iopl~hypercall was the test being
run.

I'm happy to fix all of that up on commit.

CC Oleksii for 4.19.  This is low risk, and found during testing with
UBSAN active.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 15:35:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 15:35:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744805.1151920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJob-0006d7-8u; Thu, 20 Jun 2024 15:35:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744805.1151920; Thu, 20 Jun 2024 15:35:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJob-0006d0-6F; Thu, 20 Jun 2024 15:35:09 +0000
Received: by outflank-mailman (input) for mailman id 744805;
 Thu, 20 Jun 2024 15:35:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKJoa-0006cu-3k
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 15:35:08 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id aad41696-2f1a-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 17:35:05 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2e95a75a90eso10402831fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 08:35:05 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855f5e72asm138794955ad.308.2024.06.20.08.35.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 08:35:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aad41696-2f1a-11ef-b4bb-af5377834399
Message-ID: <42a8061a-b626-443a-ad42-0e05b043c6c7@suse.com>
Date: Thu, 20 Jun 2024 17:34:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.19?] libelf: avoid UB in elf_xen_feature_{get,set}()
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

When the left shift amount is up to 31, the shifted quantity wants to be
of unsigned int (or wider) type.

While there, also adjust types: get doesn't alter the array and returns a
boolean, while both don't really accept negative "nr". Drop a stray
blank each as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Really I wonder why these exist at all; they're effectively test_bit()
and __set_bit() in hypervisor terms, and iirc something like that exists
in the tool stack as well.

--- a/xen/include/xen/libelf.h
+++ b/xen/include/xen/libelf.h
@@ -445,13 +445,13 @@ struct elf_dom_parms {
     uint64_t virt_kend;
 };
 
-static inline void elf_xen_feature_set(int nr, uint32_t * addr)
+static inline void elf_xen_feature_set(unsigned int nr, uint32_t *addr)
 {
-    addr[nr >> 5] |= 1 << (nr & 31);
+    addr[nr >> 5] |= 1U << (nr & 31);
 }
-static inline int elf_xen_feature_get(int nr, uint32_t * addr)
+static inline bool elf_xen_feature_get(unsigned int nr, const uint32_t *addr)
 {
-    return !!(addr[nr >> 5] & (1 << (nr & 31)));
+    return addr[nr >> 5] & (1U << (nr & 31));
 }
 
 elf_errorstatus elf_xen_parse_features(const char *features,


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 15:37:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 15:37:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744811.1151930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJqs-00079m-KF; Thu, 20 Jun 2024 15:37:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744811.1151930; Thu, 20 Jun 2024 15:37:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJqs-00079f-Gz; Thu, 20 Jun 2024 15:37:30 +0000
Received: by outflank-mailman (input) for mailman id 744811;
 Thu, 20 Jun 2024 15:37:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gwum=NW=cloud.com=matthew.barnes@srs-se1.protection.inumbo.net>)
 id 1sKJqr-00079V-DI
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 15:37:29 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id feb1685e-2f1a-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 17:37:26 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57d07f07a27so1108146a12.3
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 08:37:26 -0700 (PDT)
Received: from EMEAENGAAD91498.citrite.net ([2.223.45.79])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57cb72cdf52sm9852778a12.8.2024.06.20.08.37.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 20 Jun 2024 08:37:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: feb1685e-2f1a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718897845; x=1719502645; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=KC6rl9EXhuFpskqMgYdFWCq/EEUOO/iNi0wsiHqBJLw=;
        b=RXgOIF93hswGJyYrwymE4ZKfZjCMzj5W7hEavNPCCsjUjddxRqg0s7Xk+gximoEiyT
         8Rw8qvx6+CZBPU8OG4QttBiU8THNVjryysl/7oI+mKRa848JSrzw//A6eoyPi8xrny3b
         SmCKg7Wi/i9sasvvMuT5y5MCaoIjxQ9u/TwIE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718897845; x=1719502645;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=KC6rl9EXhuFpskqMgYdFWCq/EEUOO/iNi0wsiHqBJLw=;
        b=Ayho/P64d7jtM7zT/8WSW02TpbYSPhgrSqLXDxb3zsXlwHK3RJJBEUDNP3UPGGkvpO
         vGCjCbl679BLDKJtziq2vOhlSiwtIR9eCXlYdWBoNG1nabsqL7XRV1ByhDfrw7trsFtS
         0nbSOo712vAsjOdBpiiGsJqlXApIuCmJJlzC4bjtXKzVNuTs4OnFS7mGjtfiPTDvnCiX
         E24TBLLSLsjU6JUCf+Datka0dvAiHCrI55cFNlq6U9/AGXyTs4uy06m2hdpGIYFv/Bup
         6KG6A28OTk+PidihefbyO0H4VWIOCIhg7Lo3Me1krfNR9aVMcwFPXfj2muvG0gNnfgnh
         lI4g==
X-Gm-Message-State: AOJu0Ywms+TH4cQDerslJat8Q9L2N4XNPbkwYfHMYZz+aiiCt0+9hRR6
	m4sE/QJcOYFYEYDSdiDBkLL27vIMQ/cgWOJcWABGXZCW2khOgxVEHys3uybssqx8VLHZye98x6b
	y
X-Google-Smtp-Source: AGHT+IHv9xbMjgfvHAw8tHesSqBG900ed11MhqqkeJFss7E4gUcRTNAh1AjCwGk1Z9JbKEHeJrTjbg==
X-Received: by 2002:a50:cd1e:0:b0:57a:1e0a:379f with SMTP id 4fb4d7f45d1cf-57d18005efbmr2193314a12.16.1718897845342;
        Thu, 20 Jun 2024 08:37:25 -0700 (PDT)
From: Matthew Barnes <matthew.barnes@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Matthew Barnes <matthew.barnes@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH v2] x86/apic: Fix signed shift in io_apic.c
Date: Thu, 20 Jun 2024 16:36:46 +0100
Message-Id: <d71b732050d4fff3208205b3117ac5164f889a63.1718897157.git.matthew.barnes@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There exist bitshifts in the IOAPIC code where signed integers are
shifted left by up to 31 bits, which is undefined behaviour.

This patch fixes this by changing the integers from signed to unsigned.

Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in v2:
- Correct signed shifting in mask_and_ack_level_ioapic_irq()
- Adjust bracket spacing to uphold Xen style
- Improve commit message
---
 xen/arch/x86/io_apic.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index b48a64246548..d9070601a26f 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -1692,7 +1692,7 @@ static void cf_check mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
        !io_apic_level_ack_pending(desc->irq))
         move_masked_irq(desc);
 
-    if ( !(v & (1 << (i & 0x1f))) ) {
+    if ( !(v & (1U << (i & 0x1f))) ) {
         spin_lock(&ioapic_lock);
         __edge_IO_APIC_irq(desc->irq);
         __level_IO_APIC_irq(desc->irq);
@@ -1756,7 +1756,7 @@ static void cf_check end_level_ioapic_irq_new(struct irq_desc *desc, u8 vector)
          !io_apic_level_ack_pending(desc->irq) )
         move_native_irq(desc);
 
-    if (!(v & (1 << (i & 0x1f)))) {
+    if ( !(v & (1U << (i & 0x1f))) ) {
         spin_lock(&ioapic_lock);
         __mask_IO_APIC_irq(desc->irq);
         __edge_IO_APIC_irq(desc->irq);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 20 15:40:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 15:40:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744818.1151940 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJte-0000YK-V1; Thu, 20 Jun 2024 15:40:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744818.1151940; Thu, 20 Jun 2024 15:40:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKJte-0000YD-S0; Thu, 20 Jun 2024 15:40:22 +0000
Received: by outflank-mailman (input) for mailman id 744818;
 Thu, 20 Jun 2024 15:40:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Zqic=NW=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKJte-0000Y7-6v
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 15:40:22 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 666f7353-2f1b-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 17:40:20 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2ec3f875e68so12268481fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 08:40:20 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f855e55c96sm138161645ad.61.2024.06.20.08.40.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 08:40:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 666f7353-2f1b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718898020; x=1719502820; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=m9F4c/pM6flecAt2LJBnHe8wCWZmuXw9df5Kr9BrsDA=;
        b=Lj6mI5hNEitTU9kLWmirwX8JlZ2Lqnf1y0DUSlp4iVmuxYulU3R2BSiImv0jt/R6qK
         yeYhw/sAV+/k0qYfxG1rn5nLOQp1Btgrk0oB5hFOuKlW4aUBNoh8TguFc6pWH7g2KoKn
         qAjz4qRZbLEz5BP6jl3XME+DgcUPIe226yGxqpx6g9jnH9a562VK+UpOQVyNr6UTbRg/
         uGx6KDVsFrLEJCHPsqz0TDIdZ5J3lG5fKeOJe34DXH93litedUfpqBKCTfCt9xWPpRsn
         o77h5qh3aVmO9hnmNUlskn4FlnevukCT7vbaX7u1IUP8MNrPMPqbiRawuzC7ihcrDosS
         ztbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718898020; x=1719502820;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=m9F4c/pM6flecAt2LJBnHe8wCWZmuXw9df5Kr9BrsDA=;
        b=iHtYi9t+nfqS6S07MidNQgyxFah3TyfYtL0LFm3BezQW6BEt3GiyAEqZnqXpBwftQe
         adf3OYq+bP/SJfOVZ0pHpcXnUNAhI3fETduP9PA+T7yAsXMuSdCBw+feyyTEhaT3C8MW
         Oa29ABKjR4PKK4hrbZL2Kyy+aTZKQgoFbbQcCTiKZ9fhl3OHkJczusawdAr0xj0HaL3e
         a+G5hz8J96rKpgospY+yio7X3iLh+F4Wis19MVG9F5sjvf9jDTYspDoLaCS793tKAkTt
         uJjMYqIBkX2eYMNB5m8PQ6ep5gebI1dZ1cyZoorEsPiUJyfKb8TFaBVIsy6nAkvEtlFv
         knjg==
X-Forwarded-Encrypted: i=1; AJvYcCVTP+vQpV9dGTBMAfQT2Uay+p4ySSB+yGA22DHHBzN9ieH3q4sdEFJqvLFUmyH0sBP/OgSqnp4BWJHFVg1dVX6zbp8VQw77OD5xdqohleY=
X-Gm-Message-State: AOJu0YyAdmZKKOWeot6y+C6bo01qC+JdGZs2QIepINNzSY5klGhIjkpv
	2Mm1pqHKFje6+NnR0TVD9juoQm3JRIeLCiEFtiWMonrXS8DTnHSdIhO+cGz88zuVJGbYrs/RzWA
	=
X-Google-Smtp-Source: AGHT+IEuelYUXGv29bpabZkiEtVI5SiG4sjDRzM+8SqmVHPFD2OFljKlRplm311OvrPYFFfzn/7qEQ==
X-Received: by 2002:a2e:9612:0:b0:2ec:fb:b469 with SMTP id 38308e7fff4ca-2ec3cfd6de3mr35743761fa.41.1718898019629;
        Thu, 20 Jun 2024 08:40:19 -0700 (PDT)
Message-ID: <fd1f5348-ab90-45ec-a363-2adccfb4feda@suse.com>
Date: Thu, 20 Jun 2024 17:40:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH for-4.19? v2] x86/apic: Fix signed shift in io_apic.c
To: Matthew Barnes <matthew.barnes@cloud.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <d71b732050d4fff3208205b3117ac5164f889a63.1718897157.git.matthew.barnes@cloud.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <d71b732050d4fff3208205b3117ac5164f889a63.1718897157.git.matthew.barnes@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20.06.2024 17:36, Matthew Barnes wrote:
> There exist bitshifts in the IOAPIC code where signed integers are
> shifted left by up to 31 bits, which is undefined behaviour.
> 
> This patch fixes this by changing the integers from signed to unsigned.
> 
> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Only almost, ...

> ---
> Changes in v2:
> - Correct signed shifting in mask_and_ack_level_ioapic_irq()
> - Adjust bracket spacing to uphold Xen style

... as that was only half of what I had asked for. The other half was ...

> --- a/xen/arch/x86/io_apic.c
> +++ b/xen/arch/x86/io_apic.c
> @@ -1692,7 +1692,7 @@ static void cf_check mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
>         !io_apic_level_ack_pending(desc->irq))
>          move_masked_irq(desc);
>  
> -    if ( !(v & (1 << (i & 0x1f))) ) {
> +    if ( !(v & (1U << (i & 0x1f))) ) {
>          spin_lock(&ioapic_lock);
>          __edge_IO_APIC_irq(desc->irq);
>          __level_IO_APIC_irq(desc->irq);
> @@ -1756,7 +1756,7 @@ static void cf_check end_level_ioapic_irq_new(struct irq_desc *desc, u8 vector)
>           !io_apic_level_ack_pending(desc->irq) )
>          move_native_irq(desc);
>  
> -    if (!(v & (1 << (i & 0x1f)))) {
> +    if ( !(v & (1U << (i & 0x1f))) ) {
>          spin_lock(&ioapic_lock);
>          __mask_IO_APIC_irq(desc->irq);
>          __edge_IO_APIC_irq(desc->irq);

... to put each opening curly brace on its own line. I guess Andrew or
I will do that while committing then.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 16:07:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 16:07:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744834.1151949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKKK3-0004eV-0L; Thu, 20 Jun 2024 16:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744834.1151949; Thu, 20 Jun 2024 16:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKKK2-0004eO-TH; Thu, 20 Jun 2024 16:07:38 +0000
Received: by outflank-mailman (input) for mailman id 744834;
 Thu, 20 Jun 2024 16:07:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WCyn=NW=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKKK2-0004eI-Gm
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 16:07:38 +0000
Received: from mail-qt1-x833.google.com (mail-qt1-x833.google.com
 [2607:f8b0:4864:20::833])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 355405e7-2f1f-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 18:07:36 +0200 (CEST)
Received: by mail-qt1-x833.google.com with SMTP id
 d75a77b69052e-440608f5ce4so4833251cf.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 09:07:36 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b4ff055c4dsm24707956d6.145.2024.06.20.09.07.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 09:07:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 355405e7-2f1f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718899655; x=1719504455; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=e5DNSfTpIlpL20Op/1nr5uyB8uvBjdprHl6B39gdYoM=;
        b=Ps+RrF9F0eohu1w8RnnTiq70QUEAib6blGoc0ldRTiQOWtF8gE5mcvc8+ffThtw48o
         i+U1slTvB3eCFaFXFr6FvDStiw99AXsrnewncdO7ya2Vmf4rb9VGOgTzyfUdAcdl3HhQ
         j2iIdpw7sqL6OdPA0skOULD7nEiiXW9O9IxTE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718899655; x=1719504455;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=e5DNSfTpIlpL20Op/1nr5uyB8uvBjdprHl6B39gdYoM=;
        b=sYAlTG3NxJ2YRaVeQx0cdXNGmsl73gR/L5TiVRdupc92azT7xaza8PF2nnYBXNAcPR
         9Yz52kjltd0pe/KT5FmWDnFtnK1D9ixOP5BSiQibMOGpk3TLPZdlAVXQjSfqYbxunuhq
         UBmfbrcNP73eZ7GQHbD1owbW0l1Pc/cOn9zsKXELUPD8IVMdEspZe2YV2cqvfFA2WI5q
         umcYkdNnDypkzzILV6bL/zEgLmPQw5+2M1+7D7nSMFTo6XkWr0sjPkEJiACBfU94CJzS
         cFtASx078EHAlJag4mMtZPEQvsZNKuLGl1UFKLx+yIpj/SdLZJOqhxC+Js6e5rjoLvLd
         JsKw==
X-Forwarded-Encrypted: i=1; AJvYcCXDRKuYJ3U7Lp8gzWxKkXjScorQyS22cCibVTVbBff/uxKD9sbtnKR7O/qDbhTGCmrYM4WpNxl42we7QnBAEFsrZhULZPx9UXOX4hocEvs=
X-Gm-Message-State: AOJu0Yxzn33EhheZM4Hf5IbApIDYT/+D4Ry3/kvV+ejEgF4wC7Zc5P09
	06y7iZtTTZQfa3Q4wLuOaSY7FZLd09lHiZK2hhgDdD4tkyqIFaaosuA+/ja9Z4NTOCtfKt0kCBU
	/l30=
X-Google-Smtp-Source: AGHT+IHEX7YKD3XlkZP8YPVxQQNwLTPHOeqIE3qocBvUfkAg4IX2KeoaLnz8hLYLYLgsuZEKWfLpXw==
X-Received: by 2002:a0c:e1cf:0:b0:6b0:77da:3ab6 with SMTP id 6a1803df08f44-6b501df8eb9mr64731446d6.12.1718899655156;
        Thu, 20 Jun 2024 09:07:35 -0700 (PDT)
Message-ID: <40b33e27-663e-460b-8253-0f5b98fe7f23@citrix.com>
Date: Thu, 20 Jun 2024 17:07:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19?] libelf: avoid UB in elf_xen_feature_{get,set}()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <42a8061a-b626-443a-ad42-0e05b043c6c7@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <42a8061a-b626-443a-ad42-0e05b043c6c7@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 20/06/2024 4:34 pm, Jan Beulich wrote:
> When the left shift amount is up to 31, the shifted quantity wants to be
> of unsigned int (or wider) type.
>
> While there, also adjust types: get doesn't alter the array and returns a
> boolean, while both don't really accept negative "nr". Drop a stray
> blank each as well.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

+1 for 4.19.

> ---
> Really I wonder why these exist at all; they're effectively test_bit()
> and __set_bit() in hypervisor terms, and iirc something like that exists
> in the tool stack as well.

The toolstack has tools/libs/ctrl/xc_bitops.h, but they're not
API-compatible with Xen's.

They're long-granular rather than int-granular, have swapped arguments,
and are non-LOCKed.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 16:09:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 16:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744837.1151959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKKLr-0005EG-B1; Thu, 20 Jun 2024 16:09:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744837.1151959; Thu, 20 Jun 2024 16:09:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKKLr-0005E9-8Y; Thu, 20 Jun 2024 16:09:31 +0000
Received: by outflank-mailman (input) for mailman id 744837;
 Thu, 20 Jun 2024 16:09:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WCyn=NW=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKKLp-0005Dt-SB
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 16:09:29 +0000
Received: from mail-qk1-x734.google.com (mail-qk1-x734.google.com
 [2607:f8b0:4864:20::734])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 77e954b4-2f1f-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 18:09:28 +0200 (CEST)
Received: by mail-qk1-x734.google.com with SMTP id
 af79cd13be357-7955dfce860so71640685a.2
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 09:09:28 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-798abc0385asm705981585a.78.2024.06.20.09.09.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 09:09:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77e954b4-2f1f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718899767; x=1719504567; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=+6+fH4A4Ln3v5AqgVzgQYLuAu4mEm4UFrPxU4D4twZM=;
        b=px3ypPvG475gNdMWSU49WcaoPLMs3SMMA/ViowlW/Dh8PB4ozR7TPBOuja4jbJcRh1
         RBEd0Am5t6Pp2ZWlDLxp5L9ISxX7Z3dRarNx4rNgeuAF2pZVPjKqcTLZgYj4B501sPIE
         ZzflNAJUyKLljSb6VFpdy/hmK13oslqyXrjK4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718899767; x=1719504567;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+6+fH4A4Ln3v5AqgVzgQYLuAu4mEm4UFrPxU4D4twZM=;
        b=naZTy72FIxIjITnVWTeer8C0cciG/by4GnLmC7HrjxZ786RvTQ+od/eoMQEz2aUBue
         cPFwUpqO/F43p4D+WOryR1kTZ2yanoQSBojIH8xZtokbl3aFYUvRIqGPLq6aJCTrT8J4
         9x13KJjld2/PP/u5jiG1ZJgiT0ugLs5KPARrp4gVNBFc+QDVP2iqJmFSQRCElH6DviD+
         5C3YyTddtZ/dRbe7D7ZG5U/hptG+DEuYO2zYDLS6mW6NHCA6p5xZ1WqIzItuHN6K69SL
         qDKlK66vBak4Wgx7wDWLC9pquWV4gwu5MYeXQ67Kw6lDtOiTBMvQ1dBAuq+x9dT56RSk
         bcdA==
X-Forwarded-Encrypted: i=1; AJvYcCWDyAzURRkHOo/IDrtKekDBD9MKFnLGt6si2nKBdf8kacslxEab6+ZOSmZXXFWELEx+89MQIk1DO0Wpa186Dfdqorz/YxyNXRLbqPIYdIQ=
X-Gm-Message-State: AOJu0Yy7zwuhtdOjP7HGSNf1nnJ6e3FMTgk0qMrq/ZoGNUV+d5FF0BKB
	s5DoMlMQe7o8NTzBRYzZvfAreNLsO4kGZw8rAYguIYnvo8zSx5xvsWZ3QYyMoAg=
X-Google-Smtp-Source: AGHT+IFQDIcd8TOE2hHPR7mxsnptymEKf5RPTE5+f3PsWDXAhNUC0hyjg6ZDC2RUvaf+R2ehvdLVAw==
X-Received: by 2002:a05:620a:3184:b0:795:5c3e:eb4a with SMTP id af79cd13be357-79bb3e2f459mr704549785a.26.1718899766749;
        Thu, 20 Jun 2024 09:09:26 -0700 (PDT)
Message-ID: <282cefa2-b07c-421f-9dc2-045206a1f894@citrix.com>
Date: Thu, 20 Jun 2024 17:09:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH for-4.19? v2] x86/apic: Fix signed shift in io_apic.c
To: Jan Beulich <jbeulich@suse.com>, Matthew Barnes <matthew.barnes@cloud.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <d71b732050d4fff3208205b3117ac5164f889a63.1718897157.git.matthew.barnes@cloud.com>
 <fd1f5348-ab90-45ec-a363-2adccfb4feda@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <fd1f5348-ab90-45ec-a363-2adccfb4feda@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 20/06/2024 4:40 pm, Jan Beulich wrote:
> On 20.06.2024 17:36, Matthew Barnes wrote:
>> There exist bit shifts in the IOAPIC code where signed integers are
>> shifted to the left by up to 31 bits, which is undefined behaviour.
>>
>> This patch fixes this by changing the integers from signed to unsigned.
>>
>> Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> Only almost, ...
>
>> ---
>> Changes in v2:
>> - Correct signed shifting in mask_and_ack_level_ioapic_irq()
>> - Adjust bracket spacing to uphold Xen style
> ... as that was only half of what I had asked for. The other half was ...
>
>> --- a/xen/arch/x86/io_apic.c
>> +++ b/xen/arch/x86/io_apic.c
>> @@ -1692,7 +1692,7 @@ static void cf_check mask_and_ack_level_ioapic_irq(struct irq_desc *desc)
>>         !io_apic_level_ack_pending(desc->irq))
>>          move_masked_irq(desc);
>>  
>> -    if ( !(v & (1 << (i & 0x1f))) ) {
>> +    if ( !(v & (1U << (i & 0x1f))) ) {
>>          spin_lock(&ioapic_lock);
>>          __edge_IO_APIC_irq(desc->irq);
>>          __level_IO_APIC_irq(desc->irq);
>> @@ -1756,7 +1756,7 @@ static void cf_check end_level_ioapic_irq_new(struct irq_desc *desc, u8 vector)
>>           !io_apic_level_ack_pending(desc->irq) )
>>          move_native_irq(desc);
>>  
>> -    if (!(v & (1 << (i & 0x1f)))) {
>> +    if ( !(v & (1U << (i & 0x1f))) ) {
>>          spin_lock(&ioapic_lock);
>>          __mask_IO_APIC_irq(desc->irq);
>>          __edge_IO_APIC_irq(desc->irq);
> ... to put each opening curly brace on its own line. I guess Andrew or
> I will do that while committing then.

Yeah.  That can be fixed on commit.

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 16:40:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 16:40:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744847.1151970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKKpb-0002E6-KU; Thu, 20 Jun 2024 16:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744847.1151970; Thu, 20 Jun 2024 16:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKKpb-0002Dz-HM; Thu, 20 Jun 2024 16:40:15 +0000
Received: by outflank-mailman (input) for mailman id 744847;
 Thu, 20 Jun 2024 16:40:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rnTP=NW=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sKKpZ-0002Dt-NH
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 16:40:13 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c35c52c2-2f23-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 18:40:12 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 8ABCA4EE0738;
 Thu, 20 Jun 2024 18:40:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c35c52c2-2f23-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Thu, 20 Jun 2024 18:40:11 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Simone Ballarin
 <simone.ballarin@bugseng.com>, Andrew Cooper3 <andrew.cooper3@citrix.com>,
 Roger Pau <roger.pau@citrix.com>, Consulting <consulting@bugseng.com>, Xen
 Devel <xen-devel@lists.xenproject.org>
Subject: Re: MISRA C Rule 5.3 violation - shadowing in mctelem.c
In-Reply-To: <f14f15ad-7d16-4bec-9edc-82956ccc7bb4@suse.com>
References: <f351f904fab43f88396b3ae1b5d64e95@bugseng.com>
 <f14f15ad-7d16-4bec-9edc-82956ccc7bb4@suse.com>
Message-ID: <93cfbed5db97faf7953a9a79461b669a@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-19 15:42, Jan Beulich wrote:
> On 19.06.2024 15:23, Nicola Vetrini wrote:
>> I was looking at the shadowing due to the struct identifier and the
>> local variables "mctctl" in x86/cpu/mcheck/mctelem.c (see [1], the
>> second report). This kind of shadowing seems very intentional, and the
>> initial naive approach I devised was to simply rename the local
>> variables.
>> This, however, results in build breakages, as sometimes the shadowed
>> name seems to be used for accessing the global struct (unless I'm
>> missing something), and as a result changing the name of the locals is
>> not possible, at least not without further modifications to this file,
>> which aren't obvious to me.
>> 
>> It would be really helpful if you could suggest how to either:
>> - avoid the shadowing in some way that does not occur to me at the
>> moment;
> 
> Could you please be more specific about the issues you encountered? I
> hope you don't expect everyone reading this request of yours to (try to)
> redo what you did. The only thing I could vaguely guess is that maybe
> you went a little too far with the renaming. Plus, just from looking at
> the grep output, did you try to simply move down the file scope variable?
> It looks like all shadowing instances are ahead of any uses of the
> variable (but I may easily be overlooking an important line contradicting
> that pattern).
> 

I think I found a way to refactor it without breaking the build, though 
I'm not sure whether it preserves the semantics of the code. I will send 
an RFC patch. Sorry for the noise.

>> - deviate this file, as many similar files in x86/cpu are already
>> deviated.
> 
> I question the presence of these in those files. They were apparently all
> added when the files were introduced, and said commit - from Simone, acked
> by Stefano - came with no justification at all.
> 
> Jan

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 17:14:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 17:14:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744862.1151986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKLN3-0006tw-64; Thu, 20 Jun 2024 17:14:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744862.1151986; Thu, 20 Jun 2024 17:14:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKLN3-0006tp-2e; Thu, 20 Jun 2024 17:14:49 +0000
Received: by outflank-mailman (input) for mailman id 744862;
 Thu, 20 Jun 2024 17:14:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKLN1-0006td-2X; Thu, 20 Jun 2024 17:14:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKLN1-00041l-1c; Thu, 20 Jun 2024 17:14:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKLN0-0001qY-Pt; Thu, 20 Jun 2024 17:14:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKLN0-0006PR-PO; Thu, 20 Jun 2024 17:14:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o3IAdy8ZovLuXZaEXBXLVaQ3NgIdge7byh32p5can1M=; b=WfSX9b4Bkx8EywrivYmcANnFYy
	kGgkAYfYUeTn3WKva61pW2zfxlBJ/YvAdINCfXq7mfMVvXFk6OmpB6BeO2TT0rvHKl4yn/Kzrzjs4
	I00e3MzG3XblvmeWHF+rOUbn0rzWHjApg/JDxdh7DFrq6jhSawYffiUYoXOx1xM7BLWw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186435-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186435: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=62071a1c16c4dbe765491e58e456fd3a19b33298
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 17:14:46 +0000

flight 186435 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186435/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  62071a1c16c4dbe765491e58e456fd3a19b33298
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186411  2024-06-19 12:00:22 Z    1 days
Failing since        186412  2024-06-19 15:03:58 Z    1 days    8 attempts
Testing same since   186435  2024-06-20 14:00:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@vates.tech>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jason.andryuk@amd.com>
  Julien Grall <jgrall@amazon.com>
  Leigh Brown <leigh@solinno.co.uk>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   efa6e9f15b..62071a1c16  62071a1c16c4dbe765491e58e456fd3a19b33298 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 17:22:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 17:22:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744871.1151996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKLUN-0008Vb-UQ; Thu, 20 Jun 2024 17:22:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744871.1151996; Thu, 20 Jun 2024 17:22:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKLUN-0008VU-RV; Thu, 20 Jun 2024 17:22:23 +0000
Received: by outflank-mailman (input) for mailman id 744871;
 Thu, 20 Jun 2024 17:22:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zX/I=NW=raptorengineering.com=sanastasio@srs-se1.protection.inumbo.net>)
 id 1sKLUN-0008VO-8u
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 17:22:23 +0000
Received: from raptorengineering.com (mail.raptorengineering.com
 [23.155.224.40]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a5846fa4-2f29-11ef-b4bb-af5377834399;
 Thu, 20 Jun 2024 19:22:20 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by mail.rptsys.com (Postfix) with ESMTP id 40A378286B5C;
 Thu, 20 Jun 2024 12:22:18 -0500 (CDT)
Received: from mail.rptsys.com ([127.0.0.1])
 by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id Z-KWaynxdQCl; Thu, 20 Jun 2024 12:22:17 -0500 (CDT)
Received: from localhost (localhost [127.0.0.1])
 by mail.rptsys.com (Postfix) with ESMTP id 8AD7882870EC;
 Thu, 20 Jun 2024 12:22:17 -0500 (CDT)
Received: from mail.rptsys.com ([127.0.0.1])
 by localhost (vali.starlink.edu [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id QB0WaMTuvcmS; Thu, 20 Jun 2024 12:22:17 -0500 (CDT)
Received: from [10.11.0.2] (5.edge.rptsys.com [23.155.224.38])
 by mail.rptsys.com (Postfix) with ESMTPSA id 1035F8286B5C;
 Thu, 20 Jun 2024 12:22:16 -0500 (CDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5846fa4-2f29-11ef-b4bb-af5377834399
DKIM-Filter: OpenDKIM Filter v2.10.3 mail.rptsys.com 8AD7882870EC
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
	d=raptorengineering.com; s=B8E824E6-0BE2-11E6-931D-288C65937AAD;
	t=1718904137; bh=xtuUYe8Jk0KeSbFjN2K+KMrzPH6CZWP6yg0zJhSZ/7A=;
	h=Message-ID:Date:MIME-Version:To:From;
	b=k3yrrYinYIEnUb954iOytL9//k57bYGflYFw7arGZ+fFMSNlr+UApHaYueTkxRkj/
	 cF0bfO8h7fz2tZGwt+QJCr/+rZ+tw7Rwj8nZOeLf59q4fSTIGyHCHcVl7jG4UF56tl
	 92ngIfUj13OziYb9lNYvQS2o4Us8IO5bFqEzDLIc=
X-Virus-Scanned: amavisd-new at rptsys.com
Message-ID: <0460ce5d-0fcc-4f9d-9548-6e86bfb8bc4b@raptorengineering.com>
Date: Thu, 20 Jun 2024 12:22:16 -0500
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arch: Centralise __read_mostly and
 __ro_after_init
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Shawn Anastasio <sanastasio@raptorengineering.com>
In-Reply-To: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 6/14/24 7:49 AM, Andrew Cooper wrote:
> These definitions living in cache.h is inherited from Linux, but cache.h
> is an inappropriate location for them.
> 
> __read_mostly is an optimisation related to data placement in order to avoid
> having shared data in cachelines that are likely to be written to, but it
> really is just a section of the linked image separating data by usage
> patterns; it has nothing to do with cache sizes or flushing logic.
> 
> Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
> arch/cache.h, and has literally nothing whatsoever to do with caches.
> 
> Move the definitions into xen/sections.h, which in particular means that
> RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
> to provide a short description of what these are used for.
> 
> For now, leave TODO comments next to the other identical definitions.  It
> turns out that unpicking cache.h is more complicated than it appears because a
> number of files use it for transitive dependencies.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

This seems like a reasonable approach, and removing usage of the old
cache.h __read_mostly from the PPC tree should be a relatively simple
follow-up patch from my end.

Acked-by: Shawn Anastasio <sanastasio@raptorengineering.com>

Thanks,
Shawn


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 17:25:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 17:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744879.1152007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKLX1-0000iL-Fe; Thu, 20 Jun 2024 17:25:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744879.1152007; Thu, 20 Jun 2024 17:25:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKLX1-0000iE-B8; Thu, 20 Jun 2024 17:25:07 +0000
Received: by outflank-mailman (input) for mailman id 744879;
 Thu, 20 Jun 2024 17:25:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WCyn=NW=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKLWz-0000i8-V4
 for xen-devel@lists.xenproject.org; Thu, 20 Jun 2024 17:25:05 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0848e5ae-2f2a-11ef-90a3-e314d9c70b13;
 Thu, 20 Jun 2024 19:25:04 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-52bc121fb1eso1307790e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 10:25:04 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4247d0c5485sm33668255e9.21.2024.06.20.10.25.03
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 10:25:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0848e5ae-2f2a-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718904304; x=1719509104; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=+srNgqOpyZPQeIGgcEP91HDSa9FPf0wXz9vM3R4iSmc=;
        b=k3BKAzpopEbJCv0J8sEqX0r1ukY6dNZnRNuKNs3xaUeHBRZ3o+pAiAMm7l/rvN4uLU
         AZHgmzRp7LBn9SRznUuCW1yyPzKDAmwUFj2UsKsxoU6wJnHvVisF43FD6DvwagoLzC4Y
         Vlpl8ltYAMz907KxuSXUvuj5LDfx+GEyG738A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718904304; x=1719509104;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=+srNgqOpyZPQeIGgcEP91HDSa9FPf0wXz9vM3R4iSmc=;
        b=maeXMKetUrZvFNpknouX2AsBnQaZm7r+QVackqG+AgcEH2uzyobztRYVAU44H/GzAD
         1q569Mgk8YAAxQTTquwSadbo+mQ9XbaTGgoIKu7MklAmOxCxv1zPiohXUgA59/qbZDyp
         jvSH2Hy3vMc0g5Un4yMGL1TlJUPcFOT5MRfi9a3hyFUVDbcBjinzhJlva2vuHJ7wgD3x
         6VsQeTK3njyMqUiwccGvdOB5BhSnt+GR4W9QhCoRDRhF2D7Jsx+9YBNpJfY83dDtOu4t
         dv/0cjJUM83R+PwUyecPxL73uky59iW3b8gpbmEYLX5CLLFX76kTrPpB8mK6gY6EvdOl
         7OsQ==
X-Forwarded-Encrypted: i=1; AJvYcCVj0OdbPGzhpkQfsh/M8efJnejsm37qYb6LrfQfI4/pMJnz+Rry2ReCkMRSIdZ+5pce4I8ctS0IgavUf6FEttbeKcdHkrHqfFvzuc53Dy0=
X-Gm-Message-State: AOJu0YyV46Cp3J5QaBCGsBGQDKRaiR9FHAiYfbmuLFOl6Jax0ASFKkbv
	khn9z4B8xEs6Bd9wwXjwRi5clMvKIV4XD7QQ79JvymciG6/tg7P8jRwq1LZHL3Q=
X-Google-Smtp-Source: AGHT+IFq5xtRCyUb3CtNFRYFi8vTVoJcX4HmYiVQNDSTx3XuVVLwMH9XqPeSj6QHeJnhLl5InZjHCg==
X-Received: by 2002:a05:6512:ac2:b0:52c:8fe4:b153 with SMTP id 2adb3069b0e04-52ccaa376aemr4415579e87.32.1718904304162;
        Thu, 20 Jun 2024 10:25:04 -0700 (PDT)
Message-ID: <3574b89c-d13e-4624-9fd9-4ff641eee80e@citrix.com>
Date: Thu, 20 Jun 2024 18:25:03 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arch: Centralise __read_mostly and
 __ro_after_init
To: Shawn Anastasio <sanastasio@raptorengineering.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240614124950.1557058-1-andrew.cooper3@citrix.com>
 <0460ce5d-0fcc-4f9d-9548-6e86bfb8bc4b@raptorengineering.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <0460ce5d-0fcc-4f9d-9548-6e86bfb8bc4b@raptorengineering.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 20/06/2024 6:22 pm, Shawn Anastasio wrote:
> On 6/14/24 7:49 AM, Andrew Cooper wrote:
>> These being in cache.h is inherited from Linux, but is an inappropriate
>> location to live.
>>
>> __read_mostly is an optimisation related to data placement in order to avoid
>> having shared data in cachelines that are likely to be written to, but it
>> really is just a section of the linked image separating data by usage
>> patterns; it has nothing to do with cache sizes or flushing logic.
>>
>> Worse, __ro_after_init was only in xen/cache.h because __read_mostly was in
>> arch/cache.h, and has literally nothing whatsoever to do with caches.
>>
>> Move the definitions into xen/sections.h, which in particular means that
>> RISC-V doesn't need to repeat the problematic pattern.  Take the opportunity
>> to provide short descriptions of what these are used for.
>>
>> For now, leave TODO comments next to the other identical definitions.  It
>> turns out that unpicking cache.h is more complicated than it appears because a
>> number of files use it for transitive dependencies.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> This seems like a reasonable approach, and removing usage of the old
> cache.h __read_mostly from the PPC tree should be a relatively simple
> follow up patch from my end.
>
> Acked-by: Shawn Anastasio <sanastasio@raptorengineering.com>

Thanks.

And funnily enough, I have a patch doing that which I'm just about to
post, because RISC-V needs exactly the same treatment.  x86 and ARM are
a different story.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu Jun 20 20:19:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 Jun 2024 20:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744902.1152018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKOF1-0002V4-1v; Thu, 20 Jun 2024 20:18:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744902.1152018; Thu, 20 Jun 2024 20:18:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKOF0-0002Ux-Vd; Thu, 20 Jun 2024 20:18:42 +0000
Received: by outflank-mailman (input) for mailman id 744902;
 Thu, 20 Jun 2024 20:18:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKOEz-0002Un-E5; Thu, 20 Jun 2024 20:18:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKOEy-00075y-Vu; Thu, 20 Jun 2024 20:18:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKOEy-0006Pr-GL; Thu, 20 Jun 2024 20:18:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKOEy-0001qV-Fu; Thu, 20 Jun 2024 20:18:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5xUvFO86yxRU12MyV2WV2WjKib72BhONfEPcxPE7KEE=; b=HIkc43HMF3DvnxDSREe6n/2gS4
	2A+mxEdkUWFsKsXHNl3WrYEwaOjH5pDp5878re9QpJnpzzCcCXTVrI182yy306NyMIf6w0B+md5el
	05wy0x2NcEBI5so/y/vOB7RvJ1ffGHd0ttAgxTp97J4OFqosbfHMabSW+O2qPAHDG4rg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186430-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186430: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 Jun 2024 20:18:40 +0000

flight 186430 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186430/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186417
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186417
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186417
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186417
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186417
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186417
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186430  2024-06-20 07:18:11 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 00:18:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 00:18:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744921.1152029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKRz5-0002vD-73; Fri, 21 Jun 2024 00:18:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744921.1152029; Fri, 21 Jun 2024 00:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKRz5-0002v6-3h; Fri, 21 Jun 2024 00:18:31 +0000
Received: by outflank-mailman (input) for mailman id 744921;
 Fri, 21 Jun 2024 00:18:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKRz2-0002uz-KO
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 00:18:29 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c5f30e1e-2f63-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 02:18:25 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id BCB56619D9;
 Fri, 21 Jun 2024 00:18:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 561C0C2BD10;
 Fri, 21 Jun 2024 00:18:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5f30e1e-2f63-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718929103;
	bh=1USuxWu7b9A6qXib0MYnNhePTbZqLfTkORHQ+Mmycks=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=a21mVf57u5xXnXJRYuhqsZ6SG0mUkSOGDmbN0xE57FCrOfInDM1osxe8gcv5QSkP/
	 8M5ncplykX3cMuobHw2ea/BTo1YsMeMDG9yX6GEZ3dJ6LSjrcPnvGoTWyxsOLkDcMN
	 7LR84hiSuhChXXgzUasXy8OOqC1GmDkrS1fzSyyPfXGuOl9a8oJ0iJAyVSHHtfNcJw
	 af7IFu+XB0r7X5Tr8QRZVRmF1vGfLuUqDaioPQyg1rQAFaRlU1BIfQwWwaVIV08r6M
	 hCSdMUssE1rsXueCv6E3Ws/GLK5+Mhk4DXPVxUqgsJtBiGnE2ZAgdTKGffSk7suofJ
	 0A0NcLc8KJYYA==
Date: Thu, 20 Jun 2024 17:18:20 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [XEN PATCH v2 1/6][RESEND] automation/eclair: address violations
 of MISRA C Rule 20.7
In-Reply-To: <af4b0512eb52be99e37c9c670f98967ca15c68ac.1718378539.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406201718140.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com> <af4b0512eb52be99e37c9c670f98967ca15c68ac.1718378539.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 16 Jun 2024, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses".
> 
> The helper macro bitmap_switch has parameters that cannot be parenthesized
> in order to comply with the rule, as that would break its functionality.
> Moreover, the risk of misuse due to developer confusion is deemed not
> substantial enough to warrant a more involved refactor, thus the macro
> is deviated for this rule.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

If possible, I would prefer we used a SAF in-code comment deviation. If
that doesn't work for any reason this patch is fine and I'd ack it.


> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 447c1e6661d1..c2698e7074aa 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -463,6 +463,14 @@ of this macro do not lead to developer confusion, and can thus be deviated."
>  -config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(^count_args_$))))"}
>  -doc_end
>  
> +-doc_begin="The arguments of the bitmap_switch macro can't be parenthesized as
> +the rule would require, without breaking the functionality of the macro. This is
> +a specialized local helper macro only used within the bitmap.h header, so it is
> +less likely to lead to developer confusion and it is deemed better to deviate it."
> +-file_tag+={xen_bitmap_h, "^xen/include/xen/bitmap\\.h$"}
> +-config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(loc(file(xen_bitmap_h))&&^bitmap_switch$))))"}
> +-doc_end
> +
>  -doc_begin="Uses of variadic macros that have one of their arguments defined as
>  a macro and used within the body for both ordinary parameter expansion and as an
>  operand to the # or ## operators have a behavior that is well-understood and
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 00:21:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 00:21:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744928.1152038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKS1b-0004NF-Mr; Fri, 21 Jun 2024 00:21:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744928.1152038; Fri, 21 Jun 2024 00:21:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKS1b-0004N8-K6; Fri, 21 Jun 2024 00:21:07 +0000
Received: by outflank-mailman (input) for mailman id 744928;
 Fri, 21 Jun 2024 00:21:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKS1a-0004N2-VM
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 00:21:06 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 23ee70dd-2f64-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 02:21:04 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 138D9CE288B;
 Fri, 21 Jun 2024 00:20:59 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BEB34C2BD10;
 Fri, 21 Jun 2024 00:20:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23ee70dd-2f64-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718929258;
	bh=dVVM6JKb1/aCPj/T6phPl1iizQPNNYqbkm38shN8dbg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Buu5e/fIKc6cj2gwYojE2KWw1BXxRnPo5/hBlN8qhAFCCoT1mNKejL9xhsd61isJH
	 JlI7zWOBePp+NXDqpJ/PSJxzqUUb5Dfply4RzIydyqVKtflaGdBkx4quPoP07QJsjc
	 TfYe1Wl9/UuvhG7ZKQo9uBgeAlbrqMsICIUck35td78WSfJMeqlaey6JNuJCeztESs
	 IeuE4pKrKdUDDhpT/Ic8BY1yXkRlf1+UrI89CXyqPtTSMwZFC17d8fbtQEjHRHAazZ
	 00X0EKpocFz8oZgCpbIMP5HRopv6YBx+mc3EFnsRrnMkNVKAVW0uPns5gmdEoKBim6
	 JI1ob+U6Arsug==
Date: Thu, 20 Jun 2024 17:20:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH v2 4/6][RESEND] automation/eclair_analysis: address
 violations of MISRA C Rule 20.7
In-Reply-To: <dfebde9cc657f2669df60b08ca34352288e082ab.1718378539.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406201720460.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com> <dfebde9cc657f2669df60b08ca34352288e082ab.1718378539.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses".
> 
> The local helpers GRP2 and XADD in the x86 emulator use their first
> argument as the constant expression for a case label. This pattern
> is deviated project-wide, because it is very unlikely to induce
> developer confusion and result in the wrong control flow being
> carried out.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 00:21:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 00:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744930.1152049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKS1w-0004oF-Ut; Fri, 21 Jun 2024 00:21:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744930.1152049; Fri, 21 Jun 2024 00:21:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKS1w-0004o8-RX; Fri, 21 Jun 2024 00:21:28 +0000
Received: by outflank-mailman (input) for mailman id 744930;
 Fri, 21 Jun 2024 00:21:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=drXV=NX=amd.com=VictorM.Lira@srs-se1.protection.inumbo.net>)
 id 1sKS1v-0004nl-BO
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 00:21:27 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on2061a.outbound.protection.outlook.com
 [2a01:111:f403:2009::61a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 31818b8b-2f64-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 02:21:25 +0200 (CEST)
Received: from BN9PR03CA0085.namprd03.prod.outlook.com (2603:10b6:408:fc::30)
 by SA0PR12MB4496.namprd12.prod.outlook.com (2603:10b6:806:9b::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Fri, 21 Jun
 2024 00:21:22 +0000
Received: from BN3PEPF0000B06D.namprd21.prod.outlook.com
 (2603:10b6:408:fc:cafe::c1) by BN9PR03CA0085.outlook.office365.com
 (2603:10b6:408:fc::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.33 via Frontend
 Transport; Fri, 21 Jun 2024 00:21:21 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 BN3PEPF0000B06D.mail.protection.outlook.com (10.167.243.72) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7719.0 via Frontend Transport; Fri, 21 Jun 2024 00:21:20 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Thu, 20 Jun
 2024 19:21:20 -0500
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Thu, 20 Jun
 2024 19:21:20 -0500
Received: from xsjwoods50.xilinx.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39 via Frontend
 Transport; Thu, 20 Jun 2024 19:21:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31818b8b-2f64-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Zk61pXsG9FqDEl5FulA1Vz3elwRIHugBhKHZ+6umFwl3ZvlMgVC94DnC7+IDQygWF2aZ9dB/7+cFzhXk70b8wYFvsWtiwX9UCYb01CbxQ/+bk1E+seftq3Kwgx9lLwMZfloR+SiwnhCHf5yNq6i0vAtC5PA6126i9gh0WbRGOHkImYNTljgdykV2cBNkWghUi9SCmPgahg01UfIqSWRTLWCSN3vE9nPERx7K5v/O7fGM7i2uyGFMhFGe6P69Tj6fiUVviDhAaYOak396BOKUzqXqoxXAn568mshq1NCtl0zHI7Io8wztlYXrE1qdNaG3f6X/qhlW3pjPDMrhjdxgQw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vpxELYzvYWyx31e2HQRfcOug61WECZXw5QfrjJ9qZVA=;
 b=K/ZN8uNo8mbmbq2oEMx6OL1sY9GyNEQFcbpVGMt4YqIdPVFgln4EdQtUBOyQaQw3NNu1tkSf1hQfpElPwzWhxXnhdlz+hIBHm4X/sePZ8NbP/DrORflbehYUom1XgfgeHs6HOQsHdda/hAqHDELUlbAa4Z0C+wyCONqbsyl0j7SymlE7JxEPzYBv0JXaiqi1m8uvXA23aCFrat01hLwUtzQepUe2P+8K5ekp1w9UtPKN7DrOu3GDRctOQj09VXK8D5/6zJlogm/xbPl8FnHTnNF3ebqGlcmToeWTC94/9v+1m3ZUyBByfR5ASKC01wDpP1qtC2JIIuUelRlHer49xQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vpxELYzvYWyx31e2HQRfcOug61WECZXw5QfrjJ9qZVA=;
 b=TrNtBvwAelRayBUKjiF+BJJfqWnga73uKVZaiVnWaZiwJToIc30CNibx5aMhR3pHBT1rR7MmYaodtx1NB56jrvCWm9fuxQ/ehE3c7wW+qUrqZR0S6fjT02h+DJZ7tccinTMZ9B28GooilF8Ciquxld3njT288Tt7UPHtSQQDH5w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: <victorm.lira@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Victor Lira <victorm.lira@amd.com>, "Stewart
 Hildebrand" <stewart.hildebrand@amd.com>, George Dunlap
	<george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH] common/sched: address a violation of MISRA C Rule 8.8
Date: Thu, 20 Jun 2024 17:20:30 -0700
Message-ID: <5b6dfc7571bd76b5546d3881bd660a4e7a745409.1718928467.git.victorm.lira@amd.com>
X-Mailer: git-send-email 2.37.6
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Received-SPF: None (SATLEXMB05.amd.com: victorm.lira@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN3PEPF0000B06D:EE_|SA0PR12MB4496:EE_
X-MS-Office365-Filtering-Correlation-Id: 463a6245-b74d-41e6-dcab-08dc9188138d
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|376011|82310400023|36860700010|1800799021;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?Vl1tuuKaOnLtnLRBHLWWaaToJPoAuGjzctZeMoFntLsklB9g+JsBbbBI7KzY?=
 =?us-ascii?Q?V2GMCVNaSj2FO3vukHoCZPn5OjUs6f8OQLy4Mfto3xO81TjDDpeFumURRH1o?=
 =?us-ascii?Q?dEaTqSGav7Qr+by5rLFucWBlH3ETaJDjxr5Y2N7sMT4R8jxM87zA5PKvG0Ax?=
 =?us-ascii?Q?JUrdS/LnFjjSfnRfAx6NMwN/LU/tyBtzpzLem+nV5nqlYx+ff/vc5zR+Jq7H?=
 =?us-ascii?Q?FZ3z9h/FOblUYDje7+JTf/yJGu/NiZwze2XhmN3Yi6REHjuTsXmHV2Z0eZ9O?=
 =?us-ascii?Q?/L1QY726VCKNl6aFsv9Je9boZJgKTIckNYKATq8qcSEc5ZcjhCEQ21Hfc+63?=
 =?us-ascii?Q?1CEDJpMgTbGh2q2u+YS0/Y1fnT1ujtabeD8PfncGcVLULPFGNL4WC8AVPxax?=
 =?us-ascii?Q?r49MkQB+CDc69LRZWJZt4Qs9VQWq48lYUx/CPJwXpoL1IkAus6JkyMHyuYGP?=
 =?us-ascii?Q?6x0e8s0v44x15V9qRsjjw7l7Xe4BvOAY2pJOC0+mFlEkNzB28Y815dPPDjOL?=
 =?us-ascii?Q?ds0lC0HoXsVJK7QxbYm5hULrP2DsFrLegT0smZZia/yi0y/RVSizW6f2vJ9P?=
 =?us-ascii?Q?i/vHpT9MjsUDKfS2VkJft0ENLKUT2qcy1vXwQQx3X+ttdF5IA6ICdqO71D1x?=
 =?us-ascii?Q?YJscLXE2u9bBNO0qrDKy8q84oYjhmVrYiK9zhPAAH97L4JlaCJXaeDsyDdEu?=
 =?us-ascii?Q?NOABYQkKMoi1ZGSJltwHjLGJWEqBC5YugEuMBU2rajPh7LXXTRpaeOIX0VMJ?=
 =?us-ascii?Q?jKAVr9Pm0sKTiaZAB4fN6Wk4q5fP8+9Z5Qn+gB5obsNGZ1qhaWgRjNbgcSom?=
 =?us-ascii?Q?Th2saXx929V7JPG5Coj3I6LlYfURW0GM+3LyBCbghn4woV0IyJdWztE7fK7/?=
 =?us-ascii?Q?tW6p9uQS6W/E4P2a7k6cuslPrmwRGzVJtLWo+AdRQlp5JTJ4qEFFH3BnxMVG?=
 =?us-ascii?Q?XBPM0Ctqs8npZ44wYH+iZpKFn0ockvn7BNqB+mQWbhE+REZ43rzR8MX0lFG/?=
 =?us-ascii?Q?qeI4akBmYl251z63mbTfKohxPwEz15khgCpBwuaqLZI8IXGKT1ceLETH9Ywg?=
 =?us-ascii?Q?qFlbctvd0taSXM8tvS5Vjrkf+QXi8dH3bePpg0TTcyegppPQwmAtHoJ1uU3Z?=
 =?us-ascii?Q?mJQ85PmfpvOhwM7wfLaIpWR4ftvkg3jk3DdLr6CtLV5BKTiHjSC1b4KUJK4e?=
 =?us-ascii?Q?bRarxS8oUHkUI6Jwzg/iIf8gfoAm7kfsmevUC5Xws8vcFLdls9gXvrA/tN7t?=
 =?us-ascii?Q?nbJ+PSD4Fnp4QrobIKowKFeP2ggEfRNoTWZDZk2u558+rg5PnoLICZSIB9CS?=
 =?us-ascii?Q?CDXFXvY6MKxJHKPojTds7CBs7kYeXNn9vXurHUB0ktqEnI9+DNixUe8Z09UC?=
 =?us-ascii?Q?zuHzWJmkZ77UMziBvyVecsZb5migU+8gMbr9rjMF5Zc0KdAOqQ=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(376011)(82310400023)(36860700010)(1800799021);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2024 00:21:20.8786
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 463a6245-b74d-41e6-dcab-08dc9188138d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN3PEPF0000B06D.namprd21.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR12MB4496

From: Victor Lira <victorm.lira@amd.com>

MISRA C Rule 8.8 states: "The static storage class specifier shall be
used in all declarations of objects and functions that have internal
linkage".

Address this by adding the static specifier to both the forward
declaration and the definition of burn_credits().
No functional change.

Reported-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
Signed-off-by: Victor Lira <victorm.lira@amd.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
---
 xen/common/sched/credit2.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 685929c290..10a32bd160 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -1476,7 +1476,7 @@ static inline void runq_remove(struct csched2_unit *svc)
     list_del_init(&svc->runq_elem);
 }
 
-void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
+static void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
                   s_time_t now);
 
 static inline void
@@ -1855,7 +1855,7 @@ static void reset_credit(int cpu, s_time_t now, struct csched2_unit *snext)
     /* No need to resort runqueue, as everyone's order should be the same. */
 }
 
-void burn_credits(struct csched2_runqueue_data *rqd,
+static void burn_credits(struct csched2_runqueue_data *rqd,
                   struct csched2_unit *svc, s_time_t now)
 {
     s_time_t delta;
-- 
2.37.6



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 00:22:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 00:22:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744940.1152059 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKS33-0005QR-9C; Fri, 21 Jun 2024 00:22:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744940.1152059; Fri, 21 Jun 2024 00:22:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKS33-0005QK-4x; Fri, 21 Jun 2024 00:22:37 +0000
Received: by outflank-mailman (input) for mailman id 744940;
 Fri, 21 Jun 2024 00:22:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKS31-0005Q8-9f
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 00:22:35 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5929fb50-2f64-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 02:22:32 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 2D6D9620A2;
 Fri, 21 Jun 2024 00:22:31 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AAC81C2BD10;
 Fri, 21 Jun 2024 00:22:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5929fb50-2f64-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718929350;
	bh=Et+fvGblFJjieSt8EwlBNkOFHYccy8U5mpnF5U2edsg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=meFuUnmW7aniIkmV0xuZINLXRmKUGGBx18b+NedGmwf1nBA+LMKZJwLK4nduwy6hP
	 O7GHgiLVN9AmTB/F2EIwgUUBALjYOYvO+kFC0gGo7vikugp0o4wvJktr09mBoQ+5bJ
	 SBDPTns54/5E7bSyI9B2HG74t4hkFrwlbrk505IOLDfuiMdNHOhtXtCh/MxvSlzZcp
	 G7XTQzw3Z/hdKTE8a/skr3jVTV4b3qwNhIAvjzHNi5DT2l4m8sF4J+kFDYYhCE9elf
	 37W2+AvYw/fyPY6JHqs/HWQ0hEwZnvpcfabZ5W879u/eW+kVFQ5IyukpWHiyivPFkj
	 qOwRFueQqVBcA==
Date: Thu, 20 Jun 2024 17:22:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH] automation/eclair: add deviations of MISRA C Rule
 5.5
In-Reply-To: <dbd34e37b5d757ff7ae2a7318ad12b159970604c.1718887298.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406201722100.2572888@ubuntu-linux-20-04-desktop>
References: <dbd34e37b5d757ff7ae2a7318ad12b159970604c.1718887298.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 20 Jun 2024, Federico Serafini wrote:
> MISRA C Rule 5.5 states that "Identifiers shall be distinct from macro
> names".
> 
> Update ECLAIR configuration to deviate:
> - macros expanding to their own name;
> - clashes between macros and non-callable entities;
> - clashes related to the selection of specific implementations of string
>   handling functions.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 00:44:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 00:44:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744950.1152069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSNp-0000Ax-SX; Fri, 21 Jun 2024 00:44:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744950.1152069; Fri, 21 Jun 2024 00:44:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSNp-0000Aq-PE; Fri, 21 Jun 2024 00:44:05 +0000
Received: by outflank-mailman (input) for mailman id 744950;
 Fri, 21 Jun 2024 00:44:04 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKSNo-0000Ak-4v
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 00:44:04 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 59109a24-2f67-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 02:44:02 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-4218314a6c7so13524195e9.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 17:43:59 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-4247d21243esm44433695e9.43.2024.06.20.17.43.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 17:43:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59109a24-2f67-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718930639; x=1719535439; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=bo947nRLDAF2ABDm4JTcNha5gtcpqq+bEoananWC350=;
        b=ZU3TN1WKTm1RH+NdJVSieSx1uwQAzBzz04darhnsJZonZgOZG1f2OHTHatrMoNFwfH
         I/R++WBnWSa8ndQgX+DS4EUg5weV9pOR+RilDxhVVvFPBcnICXgHfxpdCMjUgezsJ91Y
         kLqqYmHAeRO5a33aR5JQeRyS9jE8RacLbIP4k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718930639; x=1719535439;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bo947nRLDAF2ABDm4JTcNha5gtcpqq+bEoananWC350=;
        b=sTkvEpj2UN6oTjPdvNC3WdvBUGJaN2mA628Cd7Ntmko1qW/oJ3IuGeefbyHHyv2U/g
         Tx/6Wr0kCHSwf/XKENyXwf9ToM3vOiT0uXW0iYs4rindj6/GVuVyF7JdDFmWG6sDMKFg
         14cLE0wfZignZCEnaKB1Yu41M5U0Q5sE6+7qjXX4ya6QzIP8POgIHOFrrTAsYGZN6nFC
         VNAZMZJzQNdPDvdRafYmYm3Xj7fIgvQONU0ALhvRju5kCJXed+h0zyf4sjinqdTQz/z1
         OXtpBad4rI6firB8Jz72rB9SmziMTaaF7pA1G5NVQC15fPu+lZLjduvzPlVW4ZDhupAw
         w+6w==
X-Forwarded-Encrypted: i=1; AJvYcCUq8bb2S3Se2QEly9svQm6H22EjcSkLr/0gVDl3pH5dijwKI9ij5iMnzmJ9vnNDHzT9qj7Ndy7wES4d5m2619ldFJWh/NG6S9Zon7qw/vA=
X-Gm-Message-State: AOJu0Yxcu/+kr3z1BkcoEnFsSJ5tQfIpEqugEBjVPQHqunQfjRP3fQI1
	6TcQMwFo1X0YP6hzQZurpgFv2s8XZaQ9dDeNFBiGr/V12jLH9FzqsbH6dMNYwyo=
X-Google-Smtp-Source: AGHT+IEr871tadqB2g5Ut8b+tm1UxSt+2073Qc/diNSmEXiVoE6pG9OBLhOsasViDoWcAxww7aykVg==
X-Received: by 2002:a7b:cd07:0:b0:422:370a:ca57 with SMTP id 5b1f17b1804b1-4247529bd22mr50689465e9.36.1718930638977;
        Thu, 20 Jun 2024 17:43:58 -0700 (PDT)
Message-ID: <120a749d-a62a-4eb1-be2b-2c2299f849d6@citrix.com>
Date: Fri, 21 Jun 2024 01:43:58 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] common/sched: address a violation of MISRA C Rule 8.8
To: victorm.lira@amd.com, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, Stewart Hildebrand <stewart.hildebrand@amd.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
 <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
References: <5b6dfc7571bd76b5546d3881bd660a4e7a745409.1718928467.git.victorm.lira@amd.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <5b6dfc7571bd76b5546d3881bd660a4e7a745409.1718928467.git.victorm.lira@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21/06/2024 1:20 am, victorm.lira@amd.com wrote:
> diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
> index 685929c290..10a32bd160 100644
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -1476,7 +1476,7 @@ static inline void runq_remove(struct csched2_unit *svc)
>      list_del_init(&svc->runq_elem);
>  }
>  
> -void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
> +static void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
>                    s_time_t now);
>  
>  static inline void
> @@ -1855,7 +1855,7 @@ static void reset_credit(int cpu, s_time_t now, struct csched2_unit *snext)
>      /* No need to resort runqueue, as everyone's order should be the same. */
>  }
>  
> -void burn_credits(struct csched2_runqueue_data *rqd,
> +static void burn_credits(struct csched2_runqueue_data *rqd,
>                    struct csched2_unit *svc, s_time_t now)

Thank you for the patch.  By and large it's fine, but for both of these
hunks, please re-indent the following line too, so the parameter list
remains aligned in the eventual code.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 00:50:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 00:50:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744957.1152079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSTd-0001sp-G6; Fri, 21 Jun 2024 00:50:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744957.1152079; Fri, 21 Jun 2024 00:50:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSTd-0001sO-Bo; Fri, 21 Jun 2024 00:50:05 +0000
Received: by outflank-mailman (input) for mailman id 744957;
 Fri, 21 Jun 2024 00:50:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKSTc-0001c8-56
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 00:50:04 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2f041a59-2f68-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 02:50:01 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 58740CE2345;
 Fri, 21 Jun 2024 00:49:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B1E8DC2BD10;
 Fri, 21 Jun 2024 00:49:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f041a59-2f68-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718930994;
	bh=uXJaFNeEv2qJH37zXRCpN8maT264MuP1N06mzvDgZko=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PNNfFP58Ahs12BFLzcy0N73EtrV6be+MG21opFBB56seWL86yZp9FY5ZIxTwF+cDv
	 emEo55fFddbzbDZVt3iBWCmYJfuZtoe6FpAsIO/fEeX3JU4SivTSoS4mF6Gbm43lnV
	 WFMXdIVAzZfUK9j6Jhd02AwIZjjPIA9bkwepKOdM00KN9qVfiF6Zqgbgfw+DIpN+Bv
	 t8sEOhVMilvKkePeFEBIRkfZYPJu9ptEZHekHHgvnTxxBeI4uXmvzweE99//fo65kh
	 WKK8aSPoev/nmxTeEkPWg6TfIxatHCw30677r4+q+eQU73f05XC6KoTU0dKyBTChi9
	 b0Uvrrzneytsg==
Date: Thu, 20 Jun 2024 17:49:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH] automation/eclair_analysis: deviate and|or|xor for MISRA
 C Rule 21.2
In-Reply-To: <5f486b5a-aba1-41f6-9e24-16ad3acd67bd@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406201746540.2572888@ubuntu-linux-20-04-desktop>
References: <b89e106649e3d0ecb41baadb49dc09c54b7563ec.1718873635.git.alessandro.zucchelli@bugseng.com> <5f486b5a-aba1-41f6-9e24-16ad3acd67bd@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 20 Jun 2024, Jan Beulich wrote:
> On 20.06.2024 11:07, Alessandro Zucchelli wrote:
> > Rule 21.2 reports identifiers reserved for the C and POSIX standard
> > libraries: or, and, and xor are reserved identifiers because they
> > constitute alternate spellings for the corresponding operators;
> > however, Xen doesn't use standard library headers, so there is no
> > risk of overlap.
> 
> This is iso646.h aiui, which imo would be good to mention here, just
> to avoid people needing to go hunt for where this is coming from.
> 
> > This addresses violations arising from x86_emulate/x86_emulate.c,
> > where labels named or, and, and xor appear.
> 
> So a deviation purely by present uses, even ...
> 
> > --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > @@ -498,6 +498,12 @@ still remain available."
> >  -config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}
> >  -doc_end
> >  
> > +-doc_begin="or, and and xor are reserved identifiers because they constitute alternate
> > +spellings for the corresponding operators.
> > +However, Xen doesn't use standard library headers, so there is no risk of overlap."
> > +-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor)$)))"}
> > +-doc_end
> 
> ... constrained to just labels. Why would we do that? Why can't we deviate
> them all (or at least all that are plausible to potentially use somewhere,
> which imo would include at least "not" as well), and no matter what
> syntactical element they would be used as?
> 
> Besides, just as a remark: Specifically when used as label names, there's
> no risk at all, I'm inclined to say. If iso646.h existed in Xen and was
> included in such a source file, the compiler would choke on the result.

I agree with Jan about adding "not", but I would only deviate these
identifiers when used as label names. As Jan says, when used as label
names there is no risk at all; other uses are less clear-cut, and I'd
prefer to avoid deviating them.

Looking at this patch, it already applies not just to x86_emulate but
everywhere, so the only improvement would be to add "not" to the list:
or|and|xor|not

I consider it nice-to-have rather than must-have, so:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 01:03:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 01:03:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744967.1152089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSg7-0003Ep-KL; Fri, 21 Jun 2024 01:02:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744967.1152089; Fri, 21 Jun 2024 01:02:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSg7-0003Ei-Gr; Fri, 21 Jun 2024 01:02:59 +0000
Received: by outflank-mailman (input) for mailman id 744967;
 Fri, 21 Jun 2024 01:02:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKSg6-0003Ec-Ra
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 01:02:58 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fec733fc-2f69-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 03:02:57 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id AD1B062234;
 Fri, 21 Jun 2024 01:02:55 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7056CC4AF07;
 Fri, 21 Jun 2024 01:02:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fec733fc-2f69-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718931775;
	bh=DfcSr8Bx1doQEh/9BMVK86RhBRiiqnKnr9/JhJ1hEPA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=tKlKPKAQE6H9ZATLfTLddhcux/6stgGRerXMc70U4TICqoEbswbXiwTBAIiPlWSoy
	 E0L01qIBquEAAZ6PgeyRb429YRkQvUDdJIc7cseKwc+5HpJNXsce9g2y09zDvGGoWU
	 JzVJXE5TBXbCiGaxBNxCeQ/Ye4uep5Ltm1deeNtV7VyViiOus2f83pE0xALiaZVq/J
	 ayu5Hqwgod9UGRt5ZN46x0jSNuy0OfZ3fhxmnAT/HXxuJZlRuDBIkEJ4kkpz250yOc
	 KMYdsHHLnnydv9oj5mVSX8NsFFxSV7QwkrP8VJfxqV0v6uqaIGUq4eiboREi0gXtbX
	 wTR8UM4rpib4w==
Date: Thu, 20 Jun 2024 18:02:53 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/2] automation/eclair_analysis: deviate MISRA C Rule
 21.2
In-Reply-To: <02ee9a03-c5b9-4250-960d-e9a2762605c8@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406201758490.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com> <5b8364528a9ece8fec9f0e70bee81c2ea94c1820.1718816397.git.alessandro.zucchelli@bugseng.com> <02ee9a03-c5b9-4250-960d-e9a2762605c8@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 20 Jun 2024, Jan Beulich wrote:
> On 19.06.2024 19:09, Alessandro Zucchelli wrote:
> > Rule 21.2 reports identifiers reserved for the C and POSIX standard
> > libraries: all xen's translation units are compiled with option
> > -nostdinc, this guarantees that these libraries are not used, therefore
> > a justification is provided for allowing uses of such identifiers in
> > the project.
> > Builtins starting with "__builtin_" still remain available.
> > 
> > No functional change.
> > 
> > Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> > ---
> >  automation/eclair_analysis/ECLAIR/deviations.ecl | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > index 447c1e6661..9fa9a7f01c 100644
> > --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > @@ -487,6 +487,17 @@ leads to a violation of the Rule are deviated."
> >  # Series 21.
> >  #
> >  
> > +-doc_begin="Rules 21.1 and 21.2 report identifiers reserved for the C and POSIX
> > +standard libraries: if these libraries are not used there is no reason to avoid such
> > +identifiers. All xen's translation units are compiled with option -nostdinc,
> > +this guarantees that these libraries are not used. Some compilers could perform
> > +optimization using built-in functions: this risk is partially addressed by
> > +using the compilation option -fno-builtin. Builtins starting with \"__builtin_\"
> > +still remain available."
> 
> While the sub-section "Reserved Identifiers" is part of Section 7,
> "Library", close coordination is needed between the library and the
> compiler, which only together form an "implementation". Therefore any
> use of identifiers beginning with two underscores or beginning with an
> underscore and an upper case letter is at risk of colliding not only
> with a particular library implementation (which we don't use), but
> also of such with a particular compiler implementation (which we cannot
> avoid to use). How can we permit use of such potentially problematic
> identifiers?

Alternative question: is there a way we can check whether there is a
clash of some sort between a compiler implementation of something and a
macro or identifier we have in Xen? An error or a warning from the
compiler, for instance? That could be an easy way to prove we are safe.

Also, can we use the fact that the compiler we use is the same compiler
used to compile Linux, and that Linux makes extensive use of identifiers
and macros starting with underscores, as one of the reasons for being
safe from clashes?


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 01:04:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 01:04:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744973.1152099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSho-0003lf-VU; Fri, 21 Jun 2024 01:04:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744973.1152099; Fri, 21 Jun 2024 01:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSho-0003lY-SM; Fri, 21 Jun 2024 01:04:44 +0000
Received: by outflank-mailman (input) for mailman id 744973;
 Fri, 21 Jun 2024 01:04:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKSho-0003lS-1n
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 01:04:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3dcfad8c-2f6a-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 03:04:43 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id D61E0620A2;
 Fri, 21 Jun 2024 01:04:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A1C71C2BD10;
 Fri, 21 Jun 2024 01:04:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3dcfad8c-2f6a-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718931881;
	bh=zlaX0qpCKKW4KMtuMQeR0LBSBDo9WEky2UALNOsvfKg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mO5Jj2Joro3jctAJj3oH6DbBXf7V+jL2gIHnZpcHo433ph2j16B+Y7fTO4o6qPCTs
	 jnecYdKgE0tqcNdWQzpF0LGZcA5La45J5gUw262/f/qSHo6dYYSdfH7y/1ZAuzHdTi
	 XwsKk9nnZCzvnDWtvDuth5ANlr26HakBHLs0dNpfJtY1zL4jEA47SSb9BHKMDJbK0f
	 0qgdDWx9sh24JUYmeMky1ImggeinfLvIPtl9ia3KqP8NCIm4gAt9jB+zCYk9I0Q1x2
	 uetbVjAkJSu8HM5wQ7jpHPpxN2PLfHhyEq/pwk3SgZBOo78wyGHtsGZeQuF0uiEOYN
	 YAeMjxuLttz5A==
Date: Thu, 20 Jun 2024 18:04:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: alessandro.zucchelli@bugseng.com, 
    Stefano Stabellini <sstabellini@kernel.org>, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH] automation/eclair_analysis: deviate common/unlzo.c for
 MISRA Rule 7.3
In-Reply-To: <c6a2fb51-1ffe-4d13-9894-5ca3169c392e@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406201804350.2572888@ubuntu-linux-20-04-desktop>
References: <20342a68627d5fe7c85c50f64e9300e9a587974b.1718704260.git.alessandro.zucchelli@bugseng.com> <63d11da5-4a5a-4354-ab57-67fbb7110f45@suse.com> <alpine.DEB.2.22.394.2406191817310.2572888@ubuntu-linux-20-04-desktop> <566b0cb9b718b5719a6b497b36e90ab4@bugseng.com>
 <c6a2fb51-1ffe-4d13-9894-5ca3169c392e@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 20 Jun 2024, Jan Beulich wrote:
> On 20.06.2024 15:19, Alessandro Zucchelli wrote:
> > On 2024-06-20 03:17, Stefano Stabellini wrote:
> >> On Tue, 18 Jun 2024, Jan Beulich wrote:
> >>> On 18.06.2024 11:54, Alessandro Zucchelli wrote:
> >>>> The file contains violations of Rule 7.3 which states as following: The
> >>>> lowercase character `l' shall not be used in a literal suffix.
> >>>>
> >>>> This file defines a non-compliant constant used in a macro expanded in a
> >>>> non-excluded file, so this deviation is needed in order to avoid
> >>>> a spurious violation involving both files.
> >>>
> >>> Imo it would be nice to be specific in such cases: Which constant? And
> >>> which macro expanded in which file?
> >>
> >> Hi Alessandro,
> >> if you give me the details, I could add it to the commit message on 
> >> commit
> > 
> > The file common/unlzo.c defines the non-compliant constant
> > LZO_BLOCK_SIZE as "256*1024l" (note the 'l'). This file is excluded
> > from ECLAIR analysis, but the constant is used in macro
> > xmalloc_bytes, expanded in the non-excluded file
> > include/xen/xmalloc.h, so these specific violations need this
> > configuration in order to be excluded.
> 
> Would it perhaps be easier to swap the l for an L?

+1


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 01:10:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 01:10:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744981.1152108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSn0-0005bu-HR; Fri, 21 Jun 2024 01:10:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744981.1152108; Fri, 21 Jun 2024 01:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSn0-0005bn-Ec; Fri, 21 Jun 2024 01:10:06 +0000
Received: by outflank-mailman (input) for mailman id 744981;
 Fri, 21 Jun 2024 01:10:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKSmz-0005VR-6J
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 01:10:05 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fbf96f0b-2f6a-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 03:10:03 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 49FC2CE0956;
 Fri, 21 Jun 2024 01:09:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id DCBB2C2BD10;
 Fri, 21 Jun 2024 01:09:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbf96f0b-2f6a-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718932197;
	bh=fvLedT6Y/NaMWeO/6E8toVexe19/hwyphiFEwY6kirk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rsvWMrT99N7r5at3p2rVIpNS7EFJd2i5Nt6AIeml/9ml3HJwM+wn0kVszVeeXb47N
	 oUj0O3zPUZlTtQ6II68zjSEZv/gs3fC2sbfC/TiLFgOBofc7CWobOsyzYIh6D5iweM
	 5mYNSJsojHf0PyOVtGaLTUVNCpHRbAp9yxwwx1z9DWYuknRuY6qeVk7AaXEWkYSGvz
	 S560xQXhADy1j9ZJlG3Ox6RrAiofSg0Fus7izbgNvp7gnK/gf+W8QCJIFIuUA2CSbx
	 j+MnbTkAAJiC2nT75iJ3IWlCt3U2j/so8dkxgQIXflcK1XIsQdD6Fh/sRZ60lTVdZO
	 +LQKvmqXchwkg==
Date: Thu, 20 Jun 2024 18:09:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Federico Serafini <federico.serafini@bugseng.com>, consulting@bugseng.com, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>, 
    xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH] xen: add explicit comment to identify notifier
 patterns
In-Reply-To: <058a6cc6-bf84-4140-a3fb-12ec648536b0@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406201808190.2572888@ubuntu-linux-20-04-desktop>
References: <96a1b98d7831154c58d39b85071b9670de94aed0.1718617636.git.federico.serafini@bugseng.com> <058a6cc6-bf84-4140-a3fb-12ec648536b0@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 17 Jun 2024, Jan Beulich wrote:
> On 17.06.2024 11:49, Federico Serafini wrote:
> > MISRA C Rule 16.4 states that every `switch' statement shall have a
> > `default' label and a statement or a comment prior to the
> > terminating break statement.
> > 
> > This patch addresses some violations of the rule related to the
> > "notifier pattern": a frequently-used pattern whereby only a few values
> > are handled by the switch statement and nothing should be done for
> > others (nothing to do in the default case).
> > 
> > No functional change.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> I guess I shouldn't outright NAK this, but I certainly won't ack it. This
> is precisely the purely mechanical change that in earlier discussions some
> (including me) have indicated isn't going to help safety. However, if
> others want to ack something purely mechanical like this, then my minimal
> requirement would be that somewhere it is spelled out what falls under

I know there is a new version of this patch on xen-devel, but I just
wanted to add that although this patch taken on its own does not improve
safety, it does so indirectly: it enables us to get down to zero
violations of Rule 16.4 and then make the rule blocking in the gitlab-ci
ECLAIR job.


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 01:13:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 01:13:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744989.1152119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSqC-0006Hm-13; Fri, 21 Jun 2024 01:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744989.1152119; Fri, 21 Jun 2024 01:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKSqB-0006Hf-TR; Fri, 21 Jun 2024 01:13:23 +0000
Received: by outflank-mailman (input) for mailman id 744989;
 Fri, 21 Jun 2024 01:13:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKSqA-0006HY-Gu
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 01:13:22 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 71ea2fc9-2f6b-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 03:13:19 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 949BC6209A;
 Fri, 21 Jun 2024 01:13:18 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 94EDDC2BD10;
 Fri, 21 Jun 2024 01:13:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71ea2fc9-2f6b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718932398;
	bh=ePFeEBnPEPjTSSw7qv3aMmOiSOi+5sGXRAcMnVXYzBM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=NdL+t2PTpgkKn6L2ELDDNBQ2bIW6U8HCexcJ7hW1Jpe+14Cq0zQaxqqOQuZFISBaI
	 bohW5b+EJ5Ju/slJ+XUW7CX589dlWiXX8auFkIe/Yqf12H0eVBBojpgEeUFTlYddsx
	 eZR3RfV/hgPY1uNsM/Nru/onXQ4Q7/rYbxA3wqPHxWaUAVRFxlhpibWqwLq8VqHxdB
	 ge/MqNmw7YGokMCeNHcVI4oJH52zalyYLVeTLnonxMag11pjJ5qrnqyKUVuaKtlEeB
	 lEnFv8CQWEEyBc6ycBTPMHolX7Wv7XrhKfCiZU6Ysy1MjwnV0cCHL/aBXvc/oMrHbm
	 qYQjaHvQXdkVA==
Date: Thu, 20 Jun 2024 18:13:15 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org, 
    consulting@bugseng.com, Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
In-Reply-To: <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406201811200.2572888@ubuntu-linux-20-04-desktop>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com> <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org> <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1903832196-1718932398=:2572888"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1903832196-1718932398=:2572888
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Thu, 20 Jun 2024, Federico Serafini wrote:
> On 19/06/24 13:17, Julien Grall wrote:
> > Hi Federico,
> > 
> > On 19/06/2024 10:29, Federico Serafini wrote:
> > > MISRA C Rule 16.4 states that every `switch' statement shall have a
> > > `default' label and a statement or a comment prior to the
> > > terminating break statement.
> > > 
> > > This patch addresses some violations of the rule related to the
> > > "notifier pattern": a frequently-used pattern whereby only a few values
> > > are handled by the switch statement and nothing should be done for
> > > others (nothing to do in the default case).
> > > 
> > > Note that for function mwait_idle_cpu_init() in
> > > xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
> > > not added: differently from the other functions covered in this patch,
> > > the default label has a return statement that does not violate Rule 16.4.
> > > 
> > > No functional change.
> > > 
> > > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> > > ---
> > > Changes in v2:
> > > as Jan pointed out, in the v1 some patterns were not explicitly identified
> > > (https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).
> > > 
> > > This version adds the /* Notifier pattern. */ comment to all the
> > > patterns present in the Xen codebase except for
> > > mwait_idle_cpu_init().
> > > ---
> > >   xen/arch/arm/cpuerrata.c                     | 1 +
> > >   xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
> > >   xen/arch/arm/gic.c                           | 1 +
> > >   xen/arch/arm/irq.c                           | 4 ++++
> > >   xen/arch/arm/mmu/p2m.c                       | 1 +
> > >   xen/arch/arm/percpu.c                        | 1 +
> > >   xen/arch/arm/smpboot.c                       | 1 +
> > >   xen/arch/arm/time.c                          | 1 +
> > >   xen/arch/arm/vgic-v3-its.c                   | 2 ++
> > >   xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
> > >   xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
> > >   xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
> > >   xen/arch/x86/genapic/x2apic.c                | 3 +++
> > >   xen/arch/x86/hvm/hvm.c                       | 1 +
> > >   xen/arch/x86/nmi.c                           | 1 +
> > >   xen/arch/x86/percpu.c                        | 3 +++
> > >   xen/arch/x86/psr.c                           | 3 +++
> > >   xen/arch/x86/smpboot.c                       | 3 +++
> > >   xen/common/kexec.c                           | 1 +
> > >   xen/common/rcupdate.c                        | 1 +
> > >   xen/common/sched/core.c                      | 1 +
> > >   xen/common/sched/cpupool.c                   | 1 +
> > >   xen/common/spinlock.c                        | 1 +
> > >   xen/common/tasklet.c                         | 1 +
> > >   xen/common/timer.c                           | 1 +
> > >   xen/drivers/cpufreq/cpufreq.c                | 1 +
> > >   xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
> > >   xen/drivers/passthrough/x86/hvm.c            | 3 +++
> > >   xen/drivers/passthrough/x86/iommu.c          | 3 +++
> > >   29 files changed, 59 insertions(+)
> > > 
> > > diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> > > index 2b7101ea25..69c30aecd8 100644
> > > --- a/xen/arch/arm/cpuerrata.c
> > > +++ b/xen/arch/arm/cpuerrata.c
> > > @@ -730,6 +730,7 @@ static int cpu_errata_callback(struct notifier_block
> > > *nfb,
> > >           rc = enable_nonboot_cpu_caps(arm_errata);
> > >           break;
> > >       default:
> > > +        /* Notifier pattern. */
> > Without looking at the commit message (which may not be trivial when
> > committed), it is not clear to me what this is supposed to mean. Will there
> > be a longer explanation in the MISRA doc? Should this be a SAF-* comment?
> > 
> > >           break;
> > >       }
> > > diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> > > index eb0a5535e4..4c2bd35403 100644
> > > --- a/xen/arch/arm/gic-v3-lpi.c
> > > +++ b/xen/arch/arm/gic-v3-lpi.c
> > > @@ -389,6 +389,10 @@ static int cpu_callback(struct notifier_block *nfb,
> > > unsigned long action,
> > >               printk(XENLOG_ERR "Unable to allocate the pendtable for
> > > CPU%lu\n",
> > >                      cpu);
> > >           break;
> > > +
> > > +    default:
> > > +        /* Notifier pattern. */
> > > +        break;
> > 
> > Skimming through v1, it was pointed out that gic-v3-lpi may miss some cases.
> > 
> > Let me start with that I understand this patch is technically not changing
> > anything. However, it gives us an opportunity to check the notifier pattern.
> > 
> > Has anyone done any proper investigation? If so, what was the outcome? If
> > not, have we identified someone to do it?
> > 
> > The same question will apply for place where you add "default".
> 
> Yes, I also think this could be an opportunity to check the pattern
> but no one has yet been identified to do this.

I don't think I understand Julien's question and/or your answer.

Is the question whether someone has done an analysis to make sure this
patch covers all notifier patterns in the Xen codebase?

If so, I expect that you have done an analysis simply by basing this
patch on the 16.4 violations reported by ECLAIR?
--8323329-1903832196-1718932398=:2572888--


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 01:24:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 01:24:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.744998.1152128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKT10-0008Lk-UC; Fri, 21 Jun 2024 01:24:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 744998.1152128; Fri, 21 Jun 2024 01:24:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKT10-0008Ld-RW; Fri, 21 Jun 2024 01:24:34 +0000
Received: by outflank-mailman (input) for mailman id 744998;
 Fri, 21 Jun 2024 01:24:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKT0z-0008LX-Qd
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 01:24:33 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ff480a33-2f6c-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 03:24:30 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id D401FCE2AC1;
 Fri, 21 Jun 2024 01:24:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 069E0C2BD10;
 Fri, 21 Jun 2024 01:24:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff480a33-2f6c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1718933062;
	bh=39d+RSqkVI/IFmqsOyIDeJfWRVBK+BQo0Pl8gVMAu7g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ehPspm3OqwGGWGhgf9iUmKtj/4OmXh5QrtJeBaCNasgRrv/K6tQ4D9QeHBRhmRSiG
	 As5JciXFePd2IRkI+vLcqUoskQkrc819deL8tj85Mdjgpf8BR7RCztwaKjejSOkV/z
	 yIhKr1iSPJThYbzNsgsSTUXCaKtxYmU4Kg44qtaP6S5W/GwyyAviuIh2nWcmqlLl26
	 he5MnAxK+UtAFBycZ+KNlMrBvP3oxpanv0OOplsNSXQuf07cqSgpsSeQ90dQps+m5E
	 jdfiHBAhgK+NV28whOWpTekQGNZQhkMj6OpZRkj3/fu7pbAFlWCYUKKephPUUsdnEP
	 C8yXQnd6gqaTg==
Date: Thu, 20 Jun 2024 18:24:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [XEN PATCH v2 6/6][RESEND] automation/eclair_analysis: clean
 ECLAIR configuration scripts
In-Reply-To: <120e7e4579b931c08d28d0a96848af1df7a07f7d.1718378539.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406201824090.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com> <120e7e4579b931c08d28d0a96848af1df7a07f7d.1718378539.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> Remove from the ECLAIR integration scripts an unused option, which
> was already ignored, and make the help texts consistent
> with the rest of the scripts.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/eclair_analysis/ECLAIR/analyze.sh | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/analyze.sh b/automation/eclair_analysis/ECLAIR/analyze.sh
> index 0ea5520c93a6..e96456c3c18d 100755
> --- a/automation/eclair_analysis/ECLAIR/analyze.sh
> +++ b/automation/eclair_analysis/ECLAIR/analyze.sh
> @@ -11,7 +11,7 @@ fatal() {
>  }
>  
>  usage() {
> -  fatal "Usage: ${script_name} <ARM64|X86_64> <Set0|Set1|Set2|Set3>"
> +  fatal "Usage: ${script_name} <ARM64|X86_64> <accepted|monitored>"
>  }
>  
>  if [[ $# -ne 2 ]]; then
> @@ -40,7 +40,6 @@ ECLAIR_REPORT_LOG=${ECLAIR_OUTPUT_DIR}/REPORT.log
>  if [[ "$1" = "X86_64" ]]; then
>    export CROSS_COMPILE=
>    export XEN_TARGET_ARCH=x86_64
> -  EXTRA_ECLAIR_ENV_OPTIONS=-disable=MC3R1.R20.7
>  elif [[ "$1" = "ARM64" ]]; then
>    export CROSS_COMPILE=aarch64-linux-gnu-
>    export XEN_TARGET_ARCH=arm64
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 04:45:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 04:45:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745013.1152140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKW8k-00077o-HJ; Fri, 21 Jun 2024 04:44:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745013.1152140; Fri, 21 Jun 2024 04:44:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKW8k-00077h-D0; Fri, 21 Jun 2024 04:44:46 +0000
Received: by outflank-mailman (input) for mailman id 745013;
 Fri, 21 Jun 2024 04:44:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKW8j-00077X-4f; Fri, 21 Jun 2024 04:44:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKW8i-0007iY-RO; Fri, 21 Jun 2024 04:44:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKW8i-0000eG-Aq; Fri, 21 Jun 2024 04:44:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKW8i-0007ZT-4S; Fri, 21 Jun 2024 04:44:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x7QguehUAENjcU4ture4L91pA4BaYk2jvwLZtLRg158=; b=zI+UhvRgr4HJNdU7mhuJlp6WSe
	frKBiT08EbUSMSXKk7IA5nBUbA4MIhWNC02vEm1oqVjkc8ZCGB7+rBQrHB8eJ1Prx7NKIn4WCHyKh
	n9Rf0TokWjMtAbGuZtGM5/cj/AAniavHhdIB0diNQeVx/WY3VTFX9PUvso7W4kGtk3UU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186437-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186437: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=50736169ecc8387247fe6a00932852ce7b057083
X-Osstest-Versions-That:
    linux=e5b3efbe1ab1793bb49ae07d56d0973267e65112
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Jun 2024 04:44:44 +0000

flight 186437 linux-linus real [real]
flight 186439 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186437/
http://logs.test-lab.xenproject.org/osstest/logs/186439/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-qcow2     8 xen-boot            fail pass in 186439-retest
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail pass in 186439-retest
 test-armhf-armhf-xl-credit1   8 xen-boot            fail pass in 186439-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186439-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186426

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186439 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186439 never pass
 test-armhf-armhf-xl-qcow2   14 migrate-support-check fail in 186439 never pass
 test-armhf-armhf-xl-qcow2 15 saverestore-support-check fail in 186439 never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186439 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186439 never pass
 test-armhf-armhf-examine      8 reboot                       fail  like 186413
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186426
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186426
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186426
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186426
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186426
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186426
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186426
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                50736169ecc8387247fe6a00932852ce7b057083
baseline version:
 linux                e5b3efbe1ab1793bb49ae07d56d0973267e65112

Last test of basis   186426  2024-06-20 03:49:07 Z    1 days
Testing same since   186437  2024-06-20 20:10:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Conole <aconole@redhat.com>
  Adrian Moreno <amorenoz@redhat.com>
  Ajrat Makhmutov <rauty@altlinux.org>
  Ajrat Makhmutov <rautyrauty@gmail.com>
  Alexei Starovoitov <ast@kernel.org>
  Andre Przywara <andre.przywara@arm.com>
  Andy Chi <andy.chi@canonical.com>
  Antje Miederhöfer <a.miederhoefer@gmx.de>
  Aryan Srivastava <aryan.srivastava@alliedtelesis.co.nz>
  Ayala Beker <ayala.beker@intel.com>
  Boris Burkov <boris@bur.io>
  Cyrus Lien <cyrus.lien@canonical.com>
  Dan Carpenter <dan.carpenter@linaro.org>
  Daniel Borkmann <daniel@iogearbox.net>
  David Ruth <druth@chromium.org>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dmitry Antipov <dmantipov@yandex.ru>
  Dmitry Safonov <0x7f454c46@gmail.com>
  Dustin L. Howett <dustin@howett.net>
  Edson Juliano Drosdeck <edson.drosdeck@gmail.com>
  En-Wei Wu <en-wei.wu@canonical.com>
  Eric Dumazet <edumazet@google.com>
  Florian Westphal <fw@strlen.de>
  Gavrilov Ilia <Ilia.Gavrilov@infotecs.ru>
  Geetha sowjanya <gakula@marvell.com>
  Gergely Meszaros <meszaros.gergely97@gmail.com>
  Heng Qi <hengqi@linux.alibaba.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Ilya Maximets <i.maximets@ovn.org>
  Jakub Kicinski <kuba@kernel.org>
  Jason Wang <jasowang@redhat.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jianguo Wu <wujianguo@chinatelecom.cn>
  Jiri Olsa <jolsa@kernel.org>
  Jiri Pirko <jiri@nvidia.com>
  Johannes Berg <johannes.berg@intel.com>
  Johannes Thumshirn <johannes.thumshirn@wdc.com>
  John Fastabend <john.fastabend@gmail.com>
  Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <kees@kernel.org>
  Kenton Groombridge <concord@gentoo.org>
  Lee Jones <lee@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Maciej Żenczykowski <maze@google.com>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Neal Cardwell <ncardwell@google.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Neukum <oneukum@suse.com>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paul Greenwalt <paul.greenwalt@intel.com>
  Pavan Chebbi <pavan.chebbi@broadcom.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
  Remi Pommarel <repk@triplefau.lt>
  Shaul Triebitz <shaul.triebitz@intel.com>
  Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>
  Simon Horman <horms@kernel.org>
  Simon Trimmer <simont@opensource.cirrus.com>
  Stanislav Fomichev <sdf@google.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Stefan Wahren <wahrenst@gmx.net>
  Sujai Buvaneswaran <sujai.buvaneswaran@intel.com>
  Takashi Iwai <tiwai@suse.de>
  Tony Ambardar <tony.ambardar@gmail.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Wojciech Drewek <wojciech.drewek@intel.com>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  Xin Long <lucien.xin@gmail.com>
  Yongqin Liu <yongqin.liu@linaro.org>
  Yuchung Cheng <ycheng@google.com>
  Yue Haibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   e5b3efbe1ab1..50736169ecc8  50736169ecc8387247fe6a00932852ce7b057083 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 05:09:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 05:09:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745024.1152150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKWWH-0001pH-Fk; Fri, 21 Jun 2024 05:09:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745024.1152150; Fri, 21 Jun 2024 05:09:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKWWH-0001pA-BL; Fri, 21 Jun 2024 05:09:05 +0000
Received: by outflank-mailman (input) for mailman id 745024;
 Fri, 21 Jun 2024 05:09:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKWWF-0001p0-VQ; Fri, 21 Jun 2024 05:09:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKWWF-0008PT-J6; Fri, 21 Jun 2024 05:09:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKWWF-0001KY-7B; Fri, 21 Jun 2024 05:09:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKWWF-0005DX-6N; Fri, 21 Jun 2024 05:09:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ypzTpfqVw8br1oCHpmjju9HD2dMEFBT9dEn/bEakAkY=; b=T+20berCyaDFMwXqHCLW2FzE6c
	l86iw1d9ObAnsIPFTGGW113sPmByt0xDcRGVC0m5CRYrHtlobGvJyawZI4Igw6dSaCu1K/fCtfLKI
	7ripPfXL1Oe2v76rzKiDBTgYPH7Hh6HkoY7WlnBX76t4lBU0LgWFfxvG2DPgIWvtQkFA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186440-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186440: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d512bd31293c7f2aeef9b60fb6f112d0e90adff3
X-Osstest-Versions-That:
    ovmf=57a890fd03356350a1b7a2a0064c8118f44e9958
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Jun 2024 05:09:03 +0000

flight 186440 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186440/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d512bd31293c7f2aeef9b60fb6f112d0e90adff3
baseline version:
 ovmf                 57a890fd03356350a1b7a2a0064c8118f44e9958

Last test of basis   186422  2024-06-20 01:56:09 Z    1 days
Testing same since   186440  2024-06-21 03:11:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  xieyuanh <yuanhao.xie@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   57a890fd03..d512bd3129  d512bd31293c7f2aeef9b60fb6f112d0e90adff3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 06:17:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 06:17:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745034.1152158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKXaJ-00014m-AQ; Fri, 21 Jun 2024 06:17:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745034.1152158; Fri, 21 Jun 2024 06:17:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKXaJ-00014f-7W; Fri, 21 Jun 2024 06:17:19 +0000
Received: by outflank-mailman (input) for mailman id 745034;
 Fri, 21 Jun 2024 06:17:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=18Fz=NX=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKXaI-00014Z-2w
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 06:17:18 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e7db64ca-2f95-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 08:17:16 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ec1620a956so18366861fa.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 23:17:16 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706512549f1sm631391b3a.122.2024.06.20.23.17.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 23:17:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7db64ca-2f95-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718950635; x=1719555435; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=rqC3zgvUttbWYziRuiKxsu06lEb9STwqNvJSXaDHAIk=;
        b=NSBQWUi7eB2l3k8+P0/NiqwEXswXaU3dGwETB+GhvS7l38IvhfjR55ldhTc6GF8Slp
         m1oEpyVjRnR1X1SYa1KuhXhBQo8BXf0IMvO5zKLeL+FB1IU0zaGCqbt66/dWx8npUiz+
         ucaAJAOq9P9X+KiqRJf29WHVo4YxmIariQuAEKvqip7znAB62YHBdJTtR61T6nZgUPR4
         PJwdVoD9qALvNXmmxdvmddUI/6bRRSyW2ocYLXPgH708KuYH6XqGVnIs2GTgyv3GVPY5
         0oQJr0NqCjVJj/S3jcirfcdgmV9tqvECGLzqqVV2q+rnQXt849+bNiRMSiNwCC8vkeBm
         ZYlA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718950635; x=1719555435;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=rqC3zgvUttbWYziRuiKxsu06lEb9STwqNvJSXaDHAIk=;
        b=Zj7Kc105DVvdUsuKsvpyd4SzXry3U7GwhWXRXCddhi7cIJMfSekYzYzoiS6tewPliX
         KNtL0EqJt68zXjfEvHXEb5ZqWs00GLhSRtPMU6U8tyOfZ0qsrej94LpHE9f2/Gly2YHm
         DPCPOd0ePuxEw9F8Iw42hIUgEdjW9ZX08l4NPKiDuAuWYMKGGK/BfX9fpBRH658AUg2Q
         WBnO9Im7W1co+4kGbu7NoW86lJyAgRq95dKCB8zWW2V6Qi8GitOrHxw5a5btqt8aCami
         q3aGoXIPXPlk7Y9fL7OQ/mCHMTYTi+Ga9Oehf+sjApMfQXvlFTNK02UIX4SGVUcqXIsf
         GybA==
X-Forwarded-Encrypted: i=1; AJvYcCVxYeFL9QE7XBQyLgQL9iwkq44H4Sp9ojcR4zygkt+TBkpe6vLjwdqy0Xmz4I9PRMlMjPspx62m/2B5Skzu92+8EuP0xBF8BdW/frJzOYE=
X-Gm-Message-State: AOJu0Yw28nhkOsOKWlaPLMJKBqJJSR5eit1Vi5WSHozTtUzeholTW8si
	H6XfcQ69r3z45bA27I4S8qt38Qvbkm+j+Wym4eFJ96XHemEE1WhU8Nu+t2BlWPOHD6ommkSdP/U
	=
X-Google-Smtp-Source: AGHT+IHmzxsofj+tqEa9rW41ypM1AckdKzp7Fu/1jdhR/QH4se2eDZPcNk14fEB7o93EDfURJVQ1KQ==
X-Received: by 2002:a2e:9dc1:0:b0:2ec:4d48:75ec with SMTP id 38308e7fff4ca-2ec4d487811mr5514001fa.22.1718950635238;
        Thu, 20 Jun 2024 23:17:15 -0700 (PDT)
Message-ID: <650b7946-ddb5-4428-b6d9-d8f6e0b0f8b9@suse.com>
Date: Fri, 21 Jun 2024 08:17:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] automation/eclair_analysis: deviate MISRA C Rule 21.2
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
 consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
 <5b8364528a9ece8fec9f0e70bee81c2ea94c1820.1718816397.git.alessandro.zucchelli@bugseng.com>
 <02ee9a03-c5b9-4250-960d-e9a2762605c8@suse.com>
 <alpine.DEB.2.22.394.2406201758490.2572888@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406201758490.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 03:02, Stefano Stabellini wrote:
> On Thu, 20 Jun 2024, Jan Beulich wrote:
>> On 19.06.2024 19:09, Alessandro Zucchelli wrote:
>>> Rule 21.2 reports identifiers reserved for the C and POSIX standard
>>> libraries: all of Xen's translation units are compiled with the option
>>> -nostdinc, which guarantees that these libraries are not used; therefore
>>> a justification is provided for allowing uses of such identifiers in
>>> the project.
>>> Builtins starting with "__builtin_" still remain available.
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
>>> ---
>>>  automation/eclair_analysis/ECLAIR/deviations.ecl | 11 +++++++++++
>>>  1 file changed, 11 insertions(+)
>>>
>>> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>> index 447c1e6661..9fa9a7f01c 100644
>>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>> @@ -487,6 +487,17 @@ leads to a violation of the Rule are deviated."
>>>  # Series 21.
>>>  #
>>>  
>>> +-doc_begin="Rules 21.1 and 21.2 report identifiers reserved for the C and POSIX
>>> +standard libraries: if these libraries are not used there is no reason to avoid such
>>> +identifiers. All xen's translation units are compiled with option -nostdinc,
>>> +this guarantees that these libraries are not used. Some compilers could perform
>>> +optimization using built-in functions: this risk is partially addressed by
>>> +using the compilation option -fno-builtin. Builtins starting with \"__builtin_\"
>>> +still remain available."
>>
>> While the sub-section "Reserved Identifiers" is part of Section 7,
>> "Library", close coordination is needed between the library and the
>> compiler, which only together form an "implementation". Therefore any
>> use of identifiers beginning with two underscores or beginning with an
>> underscore and an upper case letter is at risk of colliding not only
>> with a particular library implementation (which we don't use), but
>> also with a particular compiler implementation (which we cannot
>> avoid using). How can we permit use of such potentially problematic
>> identifiers?
> 
> Alternative question: is there a way we can check if there is a clash of
> some sort between a compiler implementation of something and a MACRO or
> identifier we have in Xen? An error or a warning from the compiler for
> instance? That could be an easy way to prove we are safe.

Well. I think the compiler warns by default when re-#define-ing a
previously #define-d (by the compiler or by us) symbol, so on that
side we ought to be safe at any given point in time, yet we're still
latently unsafe (with respect to compilers introducing new pre-defines).
For built-in declarations, though, there's nothing I'm aware of that
would indicate collisions.

> Also, can we use the fact that the compiler we use is the same compiler
> used to compile Linux, and Linux makes extensive use of identifiers and
> macros starting with underscores as one of the reason for being safe
> from clashes?

I think we could, but I don't think we should.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 06:19:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 06:19:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745040.1152168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKXce-0001dm-M7; Fri, 21 Jun 2024 06:19:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745040.1152168; Fri, 21 Jun 2024 06:19:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKXce-0001df-JY; Fri, 21 Jun 2024 06:19:44 +0000
Received: by outflank-mailman (input) for mailman id 745040;
 Fri, 21 Jun 2024 06:19:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=18Fz=NX=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKXcc-0001dX-Gy
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 06:19:42 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3e415274-2f96-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 08:19:41 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec4eefbaf1so1957511fa.1
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 23:19:41 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706511b1be5sm657812b3a.95.2024.06.20.23.19.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 23:19:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e415274-2f96-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718950780; x=1719555580; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=I+4uXk9pjJydHspDN4TTT8selrL52KkIcp4Wjx8MLnE=;
        b=TvpwYplXZG44eJoenlgqe6yIx/PDcRTVO4r7ZBW/tHLRVfmLFkpE/Mh5EJAGH9cFSK
         Zbar2Bc2KbqNMviuKXWbRG9NM6lj4Da2UAJAOXgcJrIbOl5X868/IgtBHy92gvuVC0FG
         eQDajh7rGwWFCt7pXOQFJFQtTBLGg3wKT7I4YgWy6+0hmGT9mcmIZruVBQvaxTSyyuRr
         agspCTgmuOKXPVWXMPOQBZTLLfYU0I1+LNKtCvq9VMB2q49QBZ9ZjFn5TUv5vMmXX0Nu
         tQGEBr/vWQ272J9F2qt+GCNgzsKe1gRdLQUj/SMRQAmdR2+/NBAvTanqpAJO8FinxGjI
         AhGA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718950780; x=1719555580;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=I+4uXk9pjJydHspDN4TTT8selrL52KkIcp4Wjx8MLnE=;
        b=w+8iv6gOTVXssyZZMmFjmsz7ehJpn+ndsf4+Ts9WdzWeK3NKXuFQmm6CvvTJdKz7Yx
         7K85y1bq0zg18ZDLOijR7GQLGpW6v1lEvWIU9v0u2sw3kzKJU3Tsy47GRYA6pZYyofHu
         UDNNnryzeMmJ/cnMaG9oOe/u4Veoj7qTWi3AtrJPXv1W0MLa2plS1nc5kI0CRPjAOUAA
         pKVtgASO2E17Eb2oi5cLYP52/LDjovkre6efFV/qswGkLhfg1irgxiSKyatbqIDuJKiJ
         ebkep2UPE/2oTDgXvpVC0epS4+2dUKDRk+ADFyB7tVwmfsTWIMxZFvvrxr15rN2aumeX
         lK0g==
X-Forwarded-Encrypted: i=1; AJvYcCXnKobvtP91KHwFyfua0bgv36d9mknxyXc4Xe2gtBqPs0pVfW+dsrHGdobOoLdf6DAstVKe0OuVRrlYSzLWE4PYvjhtA/X/jr00voSpoMI=
X-Gm-Message-State: AOJu0Yxcc52uIBtQXVuNOGq4QKoZAin36etnr/BCmSCa2jefesH2Kdwk
	M1GN5zJMeBWg9Ru3vnMVkzbJKsee2UX1t2d2raQR5O5u2BzrBbN5uQDIX13kyw==
X-Google-Smtp-Source: AGHT+IHq8CTioOjEVgVaSbuv/pYjO5WJmSeHlxx6adgruOJtTNi5j3X75rwHIgK8h89syuwL0Dr7Yw==
X-Received: by 2002:a05:651c:1a1e:b0:2ec:3d74:88cc with SMTP id 38308e7fff4ca-2ec3d748a19mr50593391fa.20.1718950780389;
        Thu, 20 Jun 2024 23:19:40 -0700 (PDT)
Message-ID: <0e3f33b7-d4ce-4146-86d4-19a9d24ff6a3@suse.com>
Date: Fri, 21 Jun 2024 08:19:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] xen: add explicit comment to identify notifier
 patterns
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Federico Serafini <federico.serafini@bugseng.com>,
 consulting@bugseng.com, Julien Grall <julien@xen.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <96a1b98d7831154c58d39b85071b9670de94aed0.1718617636.git.federico.serafini@bugseng.com>
 <058a6cc6-bf84-4140-a3fb-12ec648536b0@suse.com>
 <alpine.DEB.2.22.394.2406201808190.2572888@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406201808190.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 03:09, Stefano Stabellini wrote:
> On Mon, 17 Jun 2024, Jan Beulich wrote:
>> On 17.06.2024 11:49, Federico Serafini wrote:
>>> MISRA C Rule 16.4 states that every `switch' statement shall have a
>>> `default' label and a statement or a comment prior to the
>>> terminating break statement.
>>>
>>> This patch addresses some violations of the rule related to the
>>> "notifier pattern": a frequently-used pattern whereby only a few values
>>> are handled by the switch statement and nothing should be done for
>>> others (nothing to do in the default case).
>>>
>>> No functional change.
>>>
>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>>
>> I guess I shouldn't outright NAK this, but I certainly won't ack it. This
>> is precisely the purely mechanical change that in earlier discussions some
>> (including me) have indicated isn't going to help safety. However, if
>> others want to ack something purely mechanical like this, then my minimal
>> requirement would be that somewhere it is spelled out what falls under
> 
> I know there is a new version of this patch on xen-devel but I just
> wanted to add that although it is true that this patch, taken on its
> own, does not improve safety, it does improve safety overall because it
> enables us to get down to zero violations of Rule 16.4 and then make
> Rule 16.4 blocking in the gitlab-ci ECLAIR job.

And it moves out of everyone's sight (unless one specifically greps for
the pattern) the fact that there is a certain number of sites where we're
not really sure.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 06:30:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 06:30:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745049.1152179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKXnH-0004Fa-Mn; Fri, 21 Jun 2024 06:30:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745049.1152179; Fri, 21 Jun 2024 06:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKXnH-0004FJ-Is; Fri, 21 Jun 2024 06:30:43 +0000
Received: by outflank-mailman (input) for mailman id 745049;
 Fri, 21 Jun 2024 06:30:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=18Fz=NX=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sKXnG-0004FD-Sq
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 06:30:42 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c84fce6a-2f97-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 08:30:41 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ebeefb9a7fso18791291fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 20 Jun 2024 23:30:41 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3d51easm6610875ad.202.2024.06.20.23.30.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 20 Jun 2024 23:30:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c84fce6a-2f97-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1718951441; x=1719556241; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Do55SX1Xtly8OcIn10bRIQgkWMrsF5SLlmDVtW8D6y0=;
        b=fCIv4zTRq/IP4BORhL/GRYnNWyIcPUCZxeY9kSFnZ0qUx0qXQtpD+Ek1MrrGeCess0
         y5W9TwehbcCJLC33yNASlQdejv7YR1o5g1azk8U8xnk6gABMXuScHB9zunEfWpK0IGLz
         g0nsDw9kQaRTTJmOnXcRjrwRLEmPhlZTUzeiPFLchFXQQlehF2wSOoRD2FePM2U5pz5U
         4iQF14MOxrjt3yIkInrWC80AET0TgtYiGa3l+8qbaxjYeFkj6XCVfBgtukU144qZREQL
         +kvfR8PK1QtyJY8E/p6kvC2R/vx4OzxERmgEny3Z9HG8+Qw601K9jkBC5rLu2/IvK2qG
         OArg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718951441; x=1719556241;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Do55SX1Xtly8OcIn10bRIQgkWMrsF5SLlmDVtW8D6y0=;
        b=OrH4TRDhu1AAKKXVx4WWltLwGPRDur2SHWyzeZjfQIfsxq/8OrZ+L+wJLrTUs40+0A
         6TrjTEvYytgHTNKlGp7fQ5KlZRZDvarGKLTthww4Of3iS2J103Ma0FCSlEw6Ggs1j0aF
         UDnakLAdo9RVSN1eSSW/dzh4L860gUNITnOBq9YksrGKYV0jksfxKJ+80U6N3iCCFmSu
         GRwPb+Ow3E9gikEHxw3NDYQLoVnHOna3f2w60gLbL4pPdkG8BljKf6I6WwHBDavR/FnA
         3sUVLmn2A5F25GjUgE0nTm74IJUH+Q/pzbqkv6tR2snUj7K5uVGVagwCAxyClho2CQbR
         AQ4A==
X-Forwarded-Encrypted: i=1; AJvYcCUq/y84XRuGO6z0LQMKJJavttsa5zm+JaPJglsruddWJGWU8HeaUh/+I4FTLGVoBVhREwjvfiZzG2kXDwXS3A6Hf7phnpZn/YTDI+OkgSw=
X-Gm-Message-State: AOJu0YwuS2BjQGUSjwUJ4wCRpO199gWAjkOqhUQecoY5FbCJ2dKRl3uX
	2lFMaGe8iYsO3d76rGcSvpf/iHq/oc2FwF+XDEfQEFPeJBXGB/0AThqU0vfxww==
X-Google-Smtp-Source: AGHT+IHnAtC17luwHsZRMvQcl/dKMlDhBYwHhohXh9nkdd3HJuHyHFqWF3hcJmLWLwYGphMnkCrQUQ==
X-Received: by 2002:a2e:9acd:0:b0:2ec:4aac:8fd8 with SMTP id 38308e7fff4ca-2ec4aac9280mr14890471fa.1.1718951441373;
        Thu, 20 Jun 2024 23:30:41 -0700 (PDT)
Message-ID: <aa343c91-d705-457b-8e67-8924c8252e5c@suse.com>
Date: Fri, 21 Jun 2024 08:30:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] common/sched: address a violation of MISRA C Rule 8.8
To: victorm.lira@amd.com
Cc: sstabellini@kernel.org, Stewart Hildebrand <stewart.hildebrand@amd.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
 <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <5b6dfc7571bd76b5546d3881bd660a4e7a745409.1718928467.git.victorm.lira@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <5b6dfc7571bd76b5546d3881bd660a4e7a745409.1718928467.git.victorm.lira@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 02:20, victorm.lira@amd.com wrote:
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -1476,7 +1476,7 @@ static inline void runq_remove(struct csched2_unit *svc)
>      list_del_init(&svc->runq_elem);
>  }
>  
> -void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
> +static void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
>                    s_time_t now);

On top of Andrew's comment, please also obey line length restrictions.
This thus needs re-wrapping, to either

static void burn_credits(struct csched2_runqueue_data *rqd,
                         struct csched2_unit *svc, s_time_t now);

(then matching the function definition) or

static void burn_credits(
    struct csched2_runqueue_data *rqd,
    struct csched2_unit *svc,
    s_time_t now);

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 07:47:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 07:47:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745086.1152245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKYzZ-0005dH-O6; Fri, 21 Jun 2024 07:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745086.1152245; Fri, 21 Jun 2024 07:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKYzZ-0005dA-LJ; Fri, 21 Jun 2024 07:47:29 +0000
Received: by outflank-mailman (input) for mailman id 745086;
 Fri, 21 Jun 2024 07:47:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKYzY-0005d0-Gd; Fri, 21 Jun 2024 07:47:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKYzY-0003Me-Eb; Fri, 21 Jun 2024 07:47:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKYzY-0005PA-0x; Fri, 21 Jun 2024 07:47:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKYzY-00080J-0O; Fri, 21 Jun 2024 07:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7fXxJQ7Ct6cwBDiC6SuXDrJ0U49+l93l1F+go0YKIwA=; b=fVIeiuyRodvbWR2Lsr9bvL9Jzj
	/8U0qRtgtGXqX8w3NkT92VurwVGkiym1A67q0KoGt1PaLNdK6/i83mdu3vVoI1ZVFuVuEpkSULfgf
	N06KNtFOtUGdD1Gbe2i52+kDmMCDCtVi6M2MscYqp8koBa1sQ6mw/mbhXP+Ji72ConYg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186438-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186438: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=62071a1c16c4dbe765491e58e456fd3a19b33298
X-Osstest-Versions-That:
    xen=efa6e9f15ba943d154e8d7b29384581915b2aacd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Jun 2024 07:47:28 +0000

flight 186438 xen-unstable real [real]
flight 186442 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186438/
http://logs.test-lab.xenproject.org/osstest/logs/186442/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-raw       8 xen-boot            fail pass in 186442-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186442 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186442 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186430
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186430
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186430
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186430
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186430
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186430
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  62071a1c16c4dbe765491e58e456fd3a19b33298
baseline version:
 xen                  efa6e9f15ba943d154e8d7b29384581915b2aacd

Last test of basis   186430  2024-06-20 07:18:11 Z    1 days
Testing same since   186438  2024-06-20 20:41:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@vates.tech>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jason.andryuk@amd.com>
  Julien Grall <jgrall@amazon.com>
  Leigh Brown <leigh@solinno.co.uk>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   efa6e9f15b..62071a1c16  62071a1c16c4dbe765491e58e456fd3a19b33298 -> master


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 08:16:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 08:16:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745105.1152260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKZR6-0001u9-8Z; Fri, 21 Jun 2024 08:15:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745105.1152260; Fri, 21 Jun 2024 08:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKZR6-0001u2-5G; Fri, 21 Jun 2024 08:15:56 +0000
Received: by outflank-mailman (input) for mailman id 745105;
 Fri, 21 Jun 2024 08:15:54 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YYc3=NX=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sKZR4-0001tv-Gs
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 08:15:54 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20609.outbound.protection.outlook.com
 [2a01:111:f403:2418::609])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78e7581d-2fa6-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 10:15:52 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by CYYPR12MB8703.namprd12.prod.outlook.com (2603:10b6:930:c4::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.22; Fri, 21 Jun
 2024 08:15:46 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Fri, 21 Jun 2024
 08:15:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78e7581d-2fa6-11ef-90a3-e314d9c70b13
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
	"Daniel P. Smith" <dpsmith@apertussolutions.com>,
	"Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
	"Huang, Ray" <Ray.Huang@amd.com>,
	xen-devel@lists.xenproject.org, "Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Topic: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Index:
 AQHawJToSxHkdXzldkCDZE5kihITT7HMD9QAgAGTOgD//5t7AIADeOiA//+SewCAAKx5AP//hDUAAD2nHAA=
Date: Fri, 21 Jun 2024 08:15:46 +0000
Message-ID:
 <BL1PR12MB5849D6943FCB12613A844F33E7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
 <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <099beaac-ed1f-459b-8c2b-42b325f8e4a4@suse.com>
 <BL1PR12MB5849366A442BE6C4C192ABB0E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <3352736b-e7bc-40d0-ac1f-e58de188c93c@suse.com>
In-Reply-To: <3352736b-e7bc-40d0-ac1f-e58de188c93c@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7698.013)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|CYYPR12MB8703:EE_
x-ms-office365-filtering-correlation-id: 8c54ca89-8df0-46e8-5ccf-08dc91ca5a1b
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="utf-8"
Content-ID: <1959805BBD69334E88AF21A96832DE3B@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8c54ca89-8df0-46e8-5ccf-08dc91ca5a1b
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Jun 2024 08:15:46.0721
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: VOCfZWWo6Xyx+dKP+NdKKc7gSdMsdVQzT0EtlCEqgNfUY2Es8khhggwFWlFcHifo6k7tYaUkqsk4SlZ6XvL4mQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CYYPR12MB8703

T24gMjAyNC82LzIwIDE4OjM3LCBKYW4gQmV1bGljaCB3cm90ZToNCj4gT24gMjAuMDYuMjAyNCAx
MjoyMywgQ2hlbiwgSmlxaWFuIHdyb3RlOg0KPj4gT24gMjAyNC82LzIwIDE1OjQzLCBKYW4gQmV1
bGljaCB3cm90ZToNCj4+PiBPbiAyMC4wNi4yMDI0IDA5OjAzLCBDaGVuLCBKaXFpYW4gd3JvdGU6
DQo+Pj4+IE9uIDIwMjQvNi8xOCAxNzoxMywgSmFuIEJldWxpY2ggd3JvdGU6DQo+Pj4+PiBPbiAx
OC4wNi4yMDI0IDEwOjEwLCBDaGVuLCBKaXFpYW4gd3JvdGU6DQo+Pj4+Pj4gT24gMjAyNC82LzE3
IDIzOjEwLCBKYW4gQmV1bGljaCB3cm90ZToNCj4+Pj4+Pj4gT24gMTcuMDYuMjAyNCAxMTowMCwg
SmlxaWFuIENoZW4gd3JvdGU6DQo+Pj4+Pj4+PiAtLS0gYS90b29scy9saWJzL2xpZ2h0L2xpYnhs
X3BjaS5jDQo+Pj4+Pj4+PiArKysgYi90b29scy9saWJzL2xpZ2h0L2xpYnhsX3BjaS5jDQo+Pj4+
Pj4+PiBAQCAtMTQwNiw2ICsxNDA2LDEyIEBAIHN0YXRpYyBib29sIHBjaV9zdXBwX2xlZ2FjeV9p
cnEodm9pZCkNCj4+Pj4+Pj4+ICAjZW5kaWYNCj4+Pj4+Pj4+ICB9DQo+Pj4+Pj4+PiAgDQo+Pj4+
Pj4+PiArI2RlZmluZSBQQ0lfREVWSUQoYnVzLCBkZXZmbilcDQo+Pj4+Pj4+PiArICAgICAgICAg
ICAgKCgoKHVpbnQxNl90KShidXMpKSA8PCA4KSB8ICgoZGV2Zm4pICYgMHhmZikpDQo+Pj4+Pj4+
PiArDQo+Pj4+Pj4+PiArI2RlZmluZSBQQ0lfU0JERihzZWcsIGJ1cywgZGV2Zm4pIFwNCj4+Pj4+
Pj4+ICsgICAgICAgICAgICAoKCgodWludDMyX3QpKHNlZykpIDw8IDE2KSB8IChQQ0lfREVWSUQo
YnVzLCBkZXZmbikpKQ0KPj4+Pj4+Pg0KPj4+Pj4+PiBJJ20gbm90IGEgbWFpbnRhaW5lciBvZiB0
aGlzIGZpbGU7IGlmIEkgd2VyZSwgSSdkIGFzayB0aGF0IGZvciByZWFkYWJpbGl0eSdzDQo+Pj4+
Pj4+IHNha2UgYWxsIGV4Y2VzcyBwYXJlbnRoZXNlcyBiZSBkcm9wcGVkIGZyb20gdGhlc2UuDQo+
Pj4+Pj4gSXNuJ3QgaXQgYSBjb2RpbmcgcmVxdWlyZW1lbnQgdG8gZW5jbG9zZSBlYWNoIGVsZW1l
bnQgaW4gcGFyZW50aGVzZXMgaW4gdGhlIG1hY3JvIGRlZmluaXRpb24/DQo+Pj4+Pj4gSXQgc2Vl
bXMgb3RoZXIgZmlsZXMgYWxzbyBkbyB0aGlzLiBTZWUgdG9vbHMvbGlicy9saWdodC9saWJ4bF9p
bnRlcm5hbC5oDQo+Pj4+Pg0KPj4+Pj4gQXMgc2FpZCwgSSdtIG5vdCBhIG1haW50YWluZXIgb2Yg
dGhpcyBjb2RlLiBZZXQgd2hpbGUgSSdtIGF3YXJlIHRoYXQgbGlieGwNCj4+Pj4+IGhhcyBpdHMg
b3duIENPRElOR19TVFlMRSwgSSBjYW4ndCBzcG90IGFueXRoaW5nIHRvd2FyZHMgZXhjZXNzaXZl
IHVzZSBvZg0KPj4+Pj4gcGFyZW50aGVzZXMgdGhlcmUuDQo+Pj4+IFNvLCB3aGljaCBwYXJlbnRo
ZXNlcyBkbyB5b3UgdGhpbmsgYXJlIGV4Y2Vzc2l2ZSB1c2U/DQo+Pj4NCj4+PiAjZGVmaW5lIFBD
SV9ERVZJRChidXMsIGRldmZuKVwNCj4+PiAgICAgICAgICAgICAoKCh1aW50MTZfdCkoYnVzKSA8
PCA4KSB8ICgoZGV2Zm4pICYgMHhmZikpDQo+Pj4NCj4+PiAjZGVmaW5lIFBDSV9TQkRGKHNlZywg
YnVzLCBkZXZmbikgXA0KPj4+ICAgICAgICAgICAgICgoKHVpbnQzMl90KShzZWcpIDw8IDE2KSB8
IFBDSV9ERVZJRChidXMsIGRldmZuKSkNCj4+IFRoYW5rcywgd2lsbCBjaGFuZ2UgaW4gbmV4dCB2
ZXJzaW9uLg0KPj4NCj4+Pg0KPj4+Pj4+Pj4gQEAgLTE0ODYsNiArMTQ5NiwxOCBAQCBzdGF0aWMg
dm9pZCBwY2lfYWRkX2RtX2RvbmUobGlieGxfX2VnYyAqZWdjLA0KPj4+Pj4+Pj4gICAgICAgICAg
Z290byBvdXRfbm9faXJxOw0KPj4+Pj4+Pj4gICAgICB9DQo+Pj4+Pj4+PiAgICAgIGlmICgoZnNj
YW5mKGYsICIldSIsICZpcnEpID09IDEpICYmIGlycSkgew0KPj4+Pj4+Pj4gKyNpZmRlZiBDT05G
SUdfWDg2DQo+Pj4+Pj4+PiArICAgICAgICBzYmRmID0gUENJX1NCREYocGNpLT5kb21haW4sIHBj
aS0+YnVzLA0KPj4+Pj4+Pj4gKyAgICAgICAgICAgICAgICAgICAgICAgIChQQ0lfREVWRk4ocGNp
LT5kZXYsIHBjaS0+ZnVuYykpKTsNCj4+Pj4+Pj4+ICsgICAgICAgIGdzaSA9IHhjX3BoeXNkZXZf
Z3NpX2Zyb21fZGV2KGN0eC0+eGNoLCBzYmRmKTsNCj4+Pj4+Pj4+ICsgICAgICAgIC8qDQo+Pj4+
Pj4+PiArICAgICAgICAgKiBPbGQga2VybmVsIHZlcnNpb24gbWF5IG5vdCBzdXBwb3J0IHRoaXMg
ZnVuY3Rpb24sDQo+Pj4+Pj4+DQo+Pj4+Pj4+IEp1c3Qga2VybmVsPw0KPj4+Pj4+IFllcywgeGNf
cGh5c2Rldl9nc2lfZnJvbV9kZXYgZGVwZW5kcyBvbiB0aGUgZnVuY3Rpb24gaW1wbGVtZW50ZWQg
b24gbGludXgga2VybmVsIHNpZGUuDQo+Pj4+Pg0KPj4+Pj4gT2theSwgYW5kIHdoZW4gdGhlIGtl
cm5lbCBzdXBwb3J0cyBpdCBidXQgdGhlIHVuZGVybHlpbmcgaHlwZXJ2aXNvciBkb2Vzbid0DQo+
Pj4+PiBzdXBwb3J0IHdoYXQgdGhlIGtlcm5lbCB3YW50cyB0byB1c2UgaW4gb3JkZXIgdG8gZnVs
ZmlsbCB0aGUgcmVxdWVzdCwgYWxsDQo+Pj4+IEkgZG9uJ3Qga25vdyB3aGF0IHRoaW5ncyB5b3Ug
bWVudGlvbmVkIGh5cGVydmlzb3IgZG9lc24ndCBzdXBwb3J0IGFyZSwNCj4+Pj4gYmVjYXVzZSB4
Y19waHlzZGV2X2dzaV9mcm9tX2RldiBpcyB0byBnZXQgdGhlIGdzaSBvZiBwY2lkZXYgdGhyb3Vn
aCBzYmRmIGluZm9ybWF0aW9uLA0KPj4+PiB0aGF0IHJlbGF0aW9uc2hpcCBjYW4gYmUgZ290IG9u
bHkgaW4gZG9tMCBpbnN0ZWFkIG9mIFhlbiBoeXBlcnZpc29yLg0KPj4+Pg0KPj4+Pj4gaXMgZmlu
ZT8gKFNlZSBhbHNvIGJlbG93IGZvciB3aGF0IG1heSBiZSBuZWVkZWQgaW4gdGhlIGh5cGVydmlz
b3IsIGV2ZW4gaWYNCj4+Pj4gWW91IG1lYW4geGNfcGh5c2Rldl9tYXBfcGlycSBuZWVkcyBnc2k/
DQo+Pj4NCj4+PiBJJ2QgcHV0IGl0IHNsaWdodGx5IGRpZmZlcmVudGx5OiBZb3UgYXJyYW5nZSBm
b3IgdGhhdCBmdW5jdGlvbiB0byBub3cgdGFrZSBhDQo+Pj4gR1NJIHdoZW4gdGhlIGNhbGxlciBp
cyBQVkguIEJ1dCB5ZXMsIHRoZSBmdW5jdGlvbiwgd2hlbiB1c2VkIHdpdGgNCj4+PiBNQVBfUElS
UV9UWVBFX0dTSSwgY2xlYXJseSBleHBlY3RzIGEgR1NJIGFzIGlucHV0IChzZWUgYWxzbyBiZWxv
dykuDQo+Pj4NCj4+Pj4+IHRoaXMgSU9DVEwgd291bGQgYmUgc2F0aXNmaWVkIGJ5IHRoZSBrZXJu
ZWwgd2l0aG91dCBuZWVkaW5nIHRvIGludGVyYWN0IHdpdGgNCj4+Pj4+IHRoZSBoeXBlcnZpc29y
LikNCj4+Pj4+DQo+Pj4+Pj4+PiArICAgICAgICAgKiBzbyBpZiBmYWlsLCBrZWVwIHVzaW5nIGly
cTsgaWYgc3VjY2VzcywgdXNlIGdzaQ0KPj4+Pj4+Pj4gKyAgICAgICAgICovDQo+Pj4+Pj4+PiAr
ICAgICAgICBpZiAoZ3NpID4gMCkgew0KPj4+Pj4+Pj4gKyAgICAgICAgICAgIGlycSA9IGdzaTsN
Cj4+Pj4+Pj4NCj4+Pj4+Pj4gSSdtIHN0aWxsIHB1enpsZWQgYnkgdGhpcywgd2hlbiBieSBub3cg
SSB0aGluayB3ZSd2ZSBzdWZmaWNpZW50bHkgY2xhcmlmaWVkDQo+Pj4+Pj4+IHRoYXQgSVJRcyBh
bmQgR1NJcyB1c2UgdHdvIGRpc3RpbmN0IG51bWJlcmluZyBzcGFjZXMuDQo+Pj4+Pj4+DQo+Pj4+
Pj4+IEFsc28sIGFzIHByZXZpb3VzbHkgaW5kaWNhdGVkLCB5b3UgY2FsbCB0aGlzIGZvciBQViBE
b20wIGFzIHdlbGwuIEFpdWkgb24NCj4+Pj4+Pj4gdGhlIGFzc3VtcHRpb24gdGhhdCBpdCdsbCBm
YWlsLiBXaGF0IGlmIHdlIGRlY2lkZSB0byBtYWtlIHRoZSBmdW5jdGlvbmFsaXR5DQo+Pj4+Pj4+
IGF2YWlsYWJsZSB0aGVyZSwgdG9vIChpZiBvbmx5IGZvciBpbmZvcm1hdGlvbmFsIHB1cnBvc2Vz
LCBvciBmb3INCj4+Pj4+Pj4gY29uc2lzdGVuY3kpPyBTdWRkZW5seSB5b3UncmUgZmFsbGJhY2sg
bG9naWMgd291bGRuJ3Qgd29yayBhbnltb3JlLCBhbmQNCj4+Pj4+Pj4geW91J2QgY2FsbCAuLi4N
Cj4+Pj4+Pj4NCj4+Pj4+Pj4+ICsgICAgICAgIH0NCj4+Pj4+Pj4+ICsjZW5kaWYNCj4+Pj4+Pj4+
ICAgICAgICAgIHIgPSB4Y19waHlzZGV2X21hcF9waXJxKGN0eC0+eGNoLCBkb21pZCwgaXJxLCAm
aXJxKTsNCj4+Pj4+Pj4NCj4+Pj4+Pj4gLi4uIHRoZSBmdW5jdGlvbiB3aXRoIGEgR1NJIHdoZW4g
YSBwSVJRIGlzIG1lYW50LiBJbW8sIGFzIHN1Z2dlc3RlZCBiZWZvcmUsDQo+Pj4+Pj4+IHlvdSBz
dHJpY3RseSB3YW50IHRvIGF2b2lkIHRoZSBjYWxsIG9uIFBWIERvbTAuDQo+Pj4+Pj4+DQo+Pj4+
Pj4+IEFsc28gZm9yIFBWSCBEb20wOiBJIGRvbid0IHRoaW5rIEkndmUgc2VlbiBjaGFuZ2VzIHRv
IHRoZSBoeXBlcmNhbGwNCj4+Pj4+Pj4gaGFuZGxpbmcsIHlldC4gSG93IGNhbiB0aGF0IGJlIHdo
ZW4gR1NJIGFuZCBJUlEgYXJlbid0IHRoZSBzYW1lLCBhbmQgaGVuY2UNCj4+Pj4+Pj4gaW5jb21p
bmcgR1NJIHdvdWxkIG5lZWQgdHJhbnNsYXRpbmcgdG8gSVJRIHNvbWV3aGVyZT8gSSBjYW4gb25j
ZSBhZ2FpbiBvbmx5DQo+Pj4+Pj4+IGFzc3VtZSBhbGwgeW91ciB0ZXN0aW5nIHdhcyBkb25lIHdp
dGggSVJRcyB3aG9zZSBudW1iZXJzIGhhcHBlbmVkIHRvIG1hdGNoDQo+Pj4+Pj4+IHRoZWlyIEdT
SSBudW1iZXJzLiAoVGhlIGRpZmZlcmVuY2UsIGltbywgd291bGQgYWxzbyBuZWVkIGNhbGxpbmcg
b3V0IGluIHRoZQ0KPj4+Pj4+PiBwdWJsaWMgaGVhZGVyLCB3aGVyZSB0aGUgcmVzcGVjdGl2ZSBp
bnRlcmZhY2Ugc3RydWN0KHMpIGlzL2FyZSBkZWZpbmVkLikNCj4+Pj4+PiBJIGZlZWwgbGlrZSB5
b3UgbWlzc2VkIG91dCBvbiBtYW55IG9mIHRoZSBwcmV2aW91cyBkaXNjdXNzaW9ucy4NCj4+Pj4+
PiBXaXRob3V0IG15IGNoYW5nZXMsIHRoZSBvcmlnaW5hbCBjb2RlcyB1c2UgaXJxIChyZWFkIGZy
b20gZmlsZSAvc3lzL2J1cy9wY2kvZGV2aWNlcy88c2JkZj4vaXJxKSB0byBkbyB4Y19waHlzZGV2
X21hcF9waXJxLA0KPj4+Pj4+IGJ1dCB4Y19waHlzZGV2X21hcF9waXJxIHJlcXVpcmUgcGFzc2lu
ZyBpbnRvIGdzaSBpbnN0ZWFkIG9mIGlycSwgc28gd2UgbmVlZCB0byB1c2UgZ3NpIHdoZXRoZXIg
ZG9tMCBpcyBQViBvciBQVkgsIHNvIGZvciB0aGUgb3JpZ2luYWwgY29kZXMsIHRoZXkgYXJlIHdy
b25nLg0KPj4+Pj4+IEp1c3QgYmVjYXVzZSBieSBjaGFuY2UsIHRoZSBpcnEgdmFsdWUgaW4gdGhl
IExpbnV4IGtlcm5lbCBvZiBwdiBkb20wIGlzIGVxdWFsIHRvIHRoZSBnc2kgdmFsdWUsIHNvIHRo
ZXJlIHdhcyBubyBwcm9ibGVtIHdpdGggdGhlIG9yaWdpbmFsIHB2IHBhc3N0aHJvdWdoLg0KPj4+
Pj4+IEJ1dCBub3Qgd2hlbiB1c2luZyBQVkgsIHNvIHBhc3N0aHJvdWdoIGZhaWxlZC4NCj4+Pj4+
PiBXaXRoIG15IGNoYW5nZXMsIEkgZ290IGdzaSB0aHJvdWdoIGZ1bmN0aW9uIHhjX3BoeXNkZXZf
Z3NpX2Zyb21fZGV2IGZpcnN0bHksIGFuZCB0byBiZSBjb21wYXRpYmxlIHdpdGggb2xkIGtlcm5l
bCB2ZXJzaW9ucyhpZiB0aGUgaW9jdGwNCj4+Pj4+PiBJT0NUTF9QUklWQ01EX0dTSV9GUk9NX0RF
ViBpcyBub3QgaW1wbGVtZW50ZWQpLCBJIHN0aWxsIG5lZWQgdG8gdXNlIHRoZSBpcnEgbnVtYmVy
LCBzbyBJIG5lZWQgdG8gY2hlY2sgdGhlIHJlc3VsdA0KPj4+Pj4+IG9mIGdzaSwgaWYgZ3NpID4g
MCBtZWFucyBJT0NUTF9QUklWQ01EX0dTSV9GUk9NX0RFViBpcyBpbXBsZW1lbnRlZCBJIGNhbiB1
c2UgdGhlIHJpZ2h0IG9uZSBnc2ksIG90aGVyd2lzZSBrZWVwIHVzaW5nIHdyb25nIG9uZSBpcnEu
>>>>>
>>>>> I understand all of this, to a (I think) sufficient degree at least. Yet what
>>>>> you're effectively proposing (without explicitly saying so) is that e.g.
>>>>> struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
>>>>> number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
>>>>> to make (simply because the caller may have no way to express things in pIRQ
>>>>> terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
>>>>> would be necessary. In fact that field is presently marked as "IN or OUT";
>>>>> when re-purposed to take a GSI on input, it may end up being necessary to pass
>>>>> back the pIRQ (in the subject domain's numbering space). Or alternatively it
>>>>> may be necessary to add yet another sub-function so the GSI can be translated
>>>>> to the corresponding pIRQ for the domain that's going to use the IRQ, for that
>>>>> then to be passed into PHYSDEVOP_map_pirq.
>>>> If I understood correctly, your concerns about this patch are two:
>>>> First, when dom0 is PV, I should not use xc_physdev_gsi_from_dev to get gsi to do xc_physdev_map_pirq, I should keep the original code that use irq.
>>>
>>> Yes.
>> OK, I can change to do this.
>> But I still have a concern:
>> Although without my changes, passthrough can work now when dom0 is PV.
>> And you also agree that: for xc_physdev_map_pirq, when use with MAP_PIRQ_TYPE_GSI, it expects a GSI as input.
>> Isn't it a wrong for PV dom0 to pass irq in? Why don't we use gsi as it should be used, since we have a function to get gsi now?
> 
> Indeed this and ...
> 
>>>> Second, when dom0 is PVH, I get the gsi, but I should not pass gsi as the fourth parameter of xc_physdev_map_pirq, I should create a new local parameter pirq=-1, and pass it in.
>>>
>>> I think so, yes. You also may need to record the output value, so you can later
>>> use it for unmap. xc_physdev_map_pirq() may also need adjusting, as right now
>>> it wouldn't put a negative incoming *pirq into the .pirq field.
>> xc_physdev_map_pirq's logic is if we pass a negative in, it sets *pirq to index(gsi).
>> Is its logic right? If not how do we change it?
> 
> ... this matches ...
> 
>>> I actually wonder if that's even correct right now, i.e. independent of your change.
> 
> ... the remark here.
So, what should I do as the next step?
If we assume the logic of xc_physdev_map_pirq and PHYSDEVOP_map_pirq is right,
I think what I did is right: both PV and PVH dom0 should pass gsi into xc_physdev_map_pirq.
By the way, I found that xc_physdev_map_pirq hasn't supported a negative pirq since your commit 934a5253d932b6f67fe40fc48975a2b0117e4cce; do you remember why?
>> Even without my changes, passthrough can work for PV dom0, not for PVH dom0.
> 
> In the common case. I fear no-one ever tried for a device with an IRQ that
> has a source override specified in ACPI.
> 
>> According to the logic of hypercall PHYSDEVOP_map_pirq,
>> if pirq is -1, it calls physdev_map_pirq-> allocate_and_map_gsi_pirq-> allocate_pirq -> get_free_pirq to get pirq.
>> If pirq is set to positive before calling hypercall, it set pirq to its own value in allocate_pirq.
> 
> Which is what looks wrong to me. Question is what it was done this way in the
> first place.
> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 08:21:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 08:21:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745112.1152270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKZW1-0003Uj-Pu; Fri, 21 Jun 2024 08:21:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745112.1152270; Fri, 21 Jun 2024 08:21:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKZW1-0003Uc-MV; Fri, 21 Jun 2024 08:21:01 +0000
Received: by outflank-mailman (input) for mailman id 745112;
 Fri, 21 Jun 2024 08:21:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YYc3=NX=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sKZW0-0003UW-MB
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 08:21:00 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20601.outbound.protection.outlook.com
 [2a01:111:f403:2417::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2f81a30e-2fa7-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 10:20:58 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by IA1PR12MB8080.namprd12.prod.outlook.com (2603:10b6:208:3fd::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Fri, 21 Jun
 2024 08:20:55 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Fri, 21 Jun 2024
 08:20:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f81a30e-2fa7-11ef-b4bb-af5377834399
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Anthony PERARD <anthony@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
	Keir Fraser <keir@xensource.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
	<jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	"Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray"
	<Ray.Huang@amd.com>, "Chen, Jiqian" <Jiqian.Chen@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHawJTqXyg3PMEiVUiLHPQ86aZkYrHMFeyAgAGdTwD//43bAIADi4UA//+vaICAAe+IgA==
Date: Fri, 21 Jun 2024 08:20:55 +0000
Message-ID:
 <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
 <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com>
In-Reply-To: <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <301EC31FE2250247BF7BF4B5FF9B4F79@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0

On 2024/6/20 18:42, Jan Beulich wrote:
> On 20.06.2024 11:40, Chen, Jiqian wrote:
>> On 2024/6/18 17:23, Jan Beulich wrote:
>>> On 18.06.2024 10:23, Chen, Jiqian wrote:
>>>> On 2024/6/17 23:32, Jan Beulich wrote:
>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>>              rc = ERROR_FAIL;
>>>>>>              goto out;
>>>>>>          }
>>>>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
>>>>>> +#ifdef CONFIG_X86
>>>>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
>>>>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>>>>>
>>>>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF, but
>>>>> I didn't check if that can be used with the underlying hypercall(s). Otherwise
>> From the commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf, DOMID_SELF is not allowed for XEN_DOMCTL_getdomaininfo.
>> And now XEN_DOMCTL_getdomaininfo gets domain through rcu_lock_domain_by_id.
>>
>>>>> you want to pass the actual domid of the local domain here.
>> What is the local domain here?
> 
> The domain your code is running in.
> 
>> What is method for me to get its domid?
> 
> I hope there's an available function in one of the libraries to do that.
I didn't find a related function.
Hi Anthony, do you know?

> But I wouldn't even know what to look for; that's a question to (primarily)
> Anthony then, who sadly continues to be our only tool stack maintainer.
> 
> Alternatively we could maybe enable XEN_DOMCTL_getdomaininfo to permit
> DOMID_SELF.
It hasn't permitted DOMID_SELF since the commit below. Would it still have the same problem if DOMID_SELF were permitted?

commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf
Author: kfraser@localhost.localdomain <kfraser@localhost.localdomain>
Date:   Tue Aug 14 09:56:46 2007 +0100

    xen: Do not accept DOMID_SELF as input to DOMCTL_getdomaininfo.
    This was screwing up callers that loop on getdomaininfo(), if there
    was a domain with domid DOMID_FIRST_RESERVED-1 (== DOMID_SELF-1).
    They would see DOMID_SELF-1, then look up DOMID_SELF, which has domid
    0 of course, and then start their domain-finding loop all over again!
    Found by Kouya Shimura <kouya@jp.fujitsu.com>. Thanks!
    Signed-off-by: Keir Fraser <keir@xensource.com>

diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index 09a1e84d98e0..5d29667b7c3d 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -463,19 +463,13 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
     case XEN_DOMCTL_getdomaininfo:
     {
         struct domain *d;
-        domid_t dom;
-
-        dom = op->domain;
-        if ( dom == DOMID_SELF )
-            dom = current->domain->domain_id;
+        domid_t dom = op->domain;

         rcu_read_lock(&domlist_read_lock);

         for_each_domain ( d )
-        {
             if ( d->domain_id >= dom )
                 break;
-        }

         if ( d == NULL )
         {
> 
>>>> But the action of granting permission is from dom0 to domU, what I need to get is the infomation of dom0,
>>>> The actual domid here is domU's id I think, it is not useful.
>>>
>>> Note how I said DOMID_SELF and "local domain". There's no talk of using the
>>> DomU's domid. But what you apparently neglect is the fact that the hardware
>>> domain isn't necessarily Dom0 (see CONFIG_LATE_HWDOM in the hypervisor).
>>> While benign in most cases, this is relevant when it comes to referencing
>>> the hardware domain by domid. And it is the hardware domain which is going
>>> to drive the device re-assignment, as that domain is who's in possession of
>>> all the devices not yet assigned to any DomU.
>> OK, I need to get the information of hardware domain here?
> 
> Right, with (for this purpose) "hardware domain" == "local domain".
> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 08:34:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 08:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745123.1152280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKZiu-0005Ln-10; Fri, 21 Jun 2024 08:34:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745123.1152280; Fri, 21 Jun 2024 08:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKZit-0005Lg-UX; Fri, 21 Jun 2024 08:34:19 +0000
Received: by outflank-mailman (input) for mailman id 745123;
 Fri, 21 Jun 2024 08:34:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YYc3=NX=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sKZis-0005LH-Mz
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 08:34:18 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20611.outbound.protection.outlook.com
 [2a01:111:f403:2408::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0aae2e2a-2fa9-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 10:34:16 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by MW4PR12MB7384.namprd12.prod.outlook.com (2603:10b6:303:22b::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Fri, 21 Jun
 2024 08:34:11 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Fri, 21 Jun 2024
 08:34:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0aae2e2a-2fa9-11ef-90a3-e314d9c70b13
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Anthony PERARD <anthony.perard@vates.tech>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jan
 Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Topic: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Index: AQHawJToSxHkdXzldkCDZE5kihITT7HQvf4AgAGvDgA=
Date: Fri, 21 Jun 2024 08:34:11 +0000
Message-ID:
 <BL1PR12MB5849AB68A6D6593A464D4D7EE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com> <ZnQ+/y/AGyasDGHY@l14>
In-Reply-To: <ZnQ+/y/AGyasDGHY@l14>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <129BC12041823646B8ED437F5A10DE95@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f0f19fce-887f-4f6f-83db-08dc91cced00
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 Jun 2024 08:34:11.5263
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: uGZBHUGk8hdV5Sg4XXwRc6hrfFqWSYlG97z5d5I8+XdEMJxL2B1A5XV5SahIWvuYZsvJCLMfcIs6jCsQTarhkw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MW4PR12MB7384

On 2024/6/20 22:38, Anthony PERARD wrote:
> On Mon, Jun 17, 2024 at 05:00:34PM +0800, Jiqian Chen wrote:
>> diff --git a/tools/include/xencall.h b/tools/include/xencall.h
>> index fc95ed0fe58e..750aab070323 100644
>> --- a/tools/include/xencall.h
>> +++ b/tools/include/xencall.h
>> @@ -113,6 +113,8 @@ int xencall5(xencall_handle *xcall, unsigned int op,
>>               uint64_t arg1, uint64_t arg2, uint64_t arg3,
>>               uint64_t arg4, uint64_t arg5);
>>  
>> +int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf);
> 
> I don't think that's an appropriate library for this new feature.
> libxencall is a generic lib to make hypercalls.
Do you have a suggested place to put this new function?
This new function gets the GSI of a PCI device; it only depends on the
dom0 kernel and doesn't need to interact with the hypervisor.

> 
>>  /* Variant(s) of the above, as needed, returning "long" instead of "int". */
>>  long xencall2L(xencall_handle *xcall, unsigned int op,
>>                 uint64_t arg1, uint64_t arg2);
>> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
>> index 9ceca0cffc2f..a0381f74d24b 100644
>> --- a/tools/include/xenctrl.h
>> +++ b/tools/include/xenctrl.h
>> @@ -1641,6 +1641,8 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
>>                           uint32_t domid,
>>                           int pirq);
>>  
>> +int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf);
>> +
>>  /*
>>   *  LOGGING AND ERROR REPORTING
>>   */
>> diff --git a/tools/libs/call/core.c b/tools/libs/call/core.c
>> index 02c4f8e1aefa..6dae50c9a6ba 100644
>> --- a/tools/libs/call/core.c
>> +++ b/tools/libs/call/core.c
>> @@ -173,6 +173,11 @@ int xencall5(xencall_handle *xcall, unsigned int op,
>>      return osdep_hypercall(xcall, &call);
>>  }
>>  
>> +int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf)
>> +{
>> +    return osdep_oscall(xcall, sbdf);
>> +}
>> +
>>  /*
>>   * Local variables:
>>   * mode: C
>> diff --git a/tools/libs/call/libxencall.map b/tools/libs/call/libxencall.map
>> index d18a3174e9dc..b92a0b5dc12c 100644
>> --- a/tools/libs/call/libxencall.map
>> +++ b/tools/libs/call/libxencall.map
>> @@ -10,6 +10,8 @@ VERS_1.0 {
>>  		xencall4;
>>  		xencall5;
>>  
>> +		xen_oscall_gsi_from_dev;
> 
> FYI, never change an already released version of a library; this would add
> a new function to libxencall.1.0. Instead, when adding a new function
> to a library that is supposed to be stable (they have a *.map file in
> the xen case), add it to a new section, which would be VERS_1.4 in this case.
> But libxencall isn't appropriate for this new function, so just for
> future reference.
> 
>>  		xencall_alloc_buffer;
>>  		xencall_free_buffer;
>>  		xencall_alloc_buffer_pages;
>> diff --git a/tools/libs/call/linux.c b/tools/libs/call/linux.c
>> index 6d588e6bea8f..92c740e176f2 100644
>> --- a/tools/libs/call/linux.c
>> +++ b/tools/libs/call/linux.c
>> @@ -85,6 +85,21 @@ long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall)
>>      return ioctl(xcall->fd, IOCTL_PRIVCMD_HYPERCALL, hypercall);
>>  }
>>  
>> +int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
>> +{
>> +    privcmd_gsi_from_dev_t dev_gsi = {
>> +        .sbdf = sbdf,
>> +        .gsi = -1,
>> +    };
>> +
>> +    if (ioctl(xcall->fd, IOCTL_PRIVCMD_GSI_FROM_DEV, &dev_gsi)) {
> 
> Looks like libxencall is only for hypercalls, and so I don't think
> it's the right place to introduce another ioctl() call.
It seems IOCTL_PRIVCMD_HYPERCALL is for hypercalls.
What I do here is introduce a new call into the privcmd fd.
Maybe I can open "/dev/xen/privcmd" directly, so that I don't have to
add the *_oscal function.

> 
>> +        PERROR("failed to get gsi from dev");
>> +        return -1;
>> +    }
>> +
>> +    return dev_gsi.gsi;
>> +}
>> +
>>  static void *alloc_pages_bufdev(xencall_handle *xcall, size_t npages)
>>  {
>>      void *p;
>> diff --git a/tools/libs/call/private.h b/tools/libs/call/private.h
>> index 9c3aa432efe2..cd6eb5a3e66f 100644
>> --- a/tools/libs/call/private.h
>> +++ b/tools/libs/call/private.h
>> @@ -57,6 +57,15 @@ int osdep_xencall_close(xencall_handle *xcall);
>>  
>>  long osdep_hypercall(xencall_handle *xcall, privcmd_hypercall_t *hypercall);
>>  
>> +#if defined(__linux__)
>> +int osdep_oscall(xencall_handle *xcall, unsigned int sbdf);
>> +#else
>> +static inline int osdep_oscall(xencall_handle *xcall, unsigned int sbdf)
>> +{
>> +    return -1;
>> +}
>> +#endif
>> +
>>  void *osdep_alloc_pages(xencall_handle *xcall, size_t nr_pages);
>>  void osdep_free_pages(xencall_handle *xcall, void *p, size_t nr_pages);
>>  
>> diff --git a/tools/libs/ctrl/xc_physdev.c b/tools/libs/ctrl/xc_physdev.c
>> index 460a8e779ce8..c1458f3a38b5 100644
>> --- a/tools/libs/ctrl/xc_physdev.c
>> +++ b/tools/libs/ctrl/xc_physdev.c
>> @@ -111,3 +111,7 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
>>      return rc;
>>  }
>>  
>> +int xc_physdev_gsi_from_dev(xc_interface *xch, uint32_t sbdf)
>> +{
> 
> I'm not sure if this is the best place for this new function, but I
> can't find another one, so that will do.
Thanks.

> 
>> +    return xen_oscall_gsi_from_dev(xch->xcall, sbdf);
>> +}
>> diff --git a/tools/libs/light/Makefile b/tools/libs/light/Makefile
>> index 37e4d1670986..6b616d5ee9b6 100644
>> --- a/tools/libs/light/Makefile
>> +++ b/tools/libs/light/Makefile
>> @@ -40,7 +40,7 @@ OBJS-$(CONFIG_X86) += $(ACPI_OBJS)
>>  
>>  CFLAGS += -Wno-format-zero-length -Wmissing-declarations -Wformat-nonliteral
>>  
>> -CFLAGS-$(CONFIG_X86) += -DCONFIG_PCI_SUPP_LEGACY_IRQ
>> +CFLAGS-$(CONFIG_X86) += -DCONFIG_PCI_SUPP_LEGACY_IRQ -DCONFIG_X86
>>  
>>  OBJS-$(CONFIG_X86) += libxl_cpuid.o
>>  OBJS-$(CONFIG_X86) += libxl_x86.o
>> diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
>> index 96cb4da0794e..376f91759ac6 100644
>> --- a/tools/libs/light/libxl_pci.c
>> +++ b/tools/libs/light/libxl_pci.c
>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>  #endif
>>  }
>>  
>> +#define PCI_DEVID(bus, devfn)\
>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>> +
>> +#define PCI_SBDF(seg, bus, devfn) \
>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>> +
>>  static void pci_add_dm_done(libxl__egc *egc,
>>                              pci_add_state *pas,
>>                              int rc)
>> @@ -1418,6 +1424,10 @@ static void pci_add_dm_done(libxl__egc *egc,
>>      unsigned long long start, end, flags, size;
>>      int irq, i;
>>      int r;
>> +#ifdef CONFIG_X86
>> +    int gsi;
>> +    uint32_t sbdf;
>> +#endif
>>      uint32_t flag = XEN_DOMCTL_DEV_RDM_RELAXED;
>>      uint32_t domainid = domid;
>>      bool isstubdom = libxl_is_stubdom(ctx, domid, &domainid);
>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>          goto out_no_irq;
>>      }
>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>> +#ifdef CONFIG_X86
> 
> Could you avoid these #ifdefs, and move the new arch-specific code (and
> maybe existing code) into libxl_x86.c ? There are already examples of
> arch-specific code.
OK, will do in next version.

> 
>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>> +        /*
>> +         * Old kernel version may not support this function,
>> +         * so if fail, keep using irq; if success, use gsi
>> +         */
>> +        if (gsi > 0) {
>> +            irq = gsi;
>> +        }
>> +#endif
>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>          if (r < 0) {
>>              LOGED(ERROR, domainid, "xc_physdev_map_pirq irq=%d (error=%d)",
>> @@ -2172,6 +2194,10 @@ static void pci_remove_detached(libxl__egc *egc,
>>      int  irq = 0, i, stubdomid = 0;
>>      const char *sysfs_path;
>>      FILE *f;
>> +#ifdef CONFIG_X86
>> +    int gsi;
>> +    uint32_t sbdf;
>> +#endif
>>      uint32_t domainid = prs->domid;
>>      bool isstubdom;
>>  
>> @@ -2239,6 +2265,18 @@ skip_bar:
>>      }
>>  
>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>> +#ifdef CONFIG_X86
>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>> +        /*
>> +         * Old kernel version may not support this function,
>> +         * so if fail, keep using irq; if success, use gsi
>> +         */
>> +        if (gsi > 0) {
>> +            irq = gsi;
>> +        }
>> +#endif
>>          rc = xc_physdev_unmap_pirq(ctx->xch, domid, irq);
>>          if (rc < 0) {
>>              /*
> 
> Thanks,
> 

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:02:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:02:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745137.1152289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKaA0-0001CQ-8P; Fri, 21 Jun 2024 09:02:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745137.1152289; Fri, 21 Jun 2024 09:02:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKaA0-0001CJ-5l; Fri, 21 Jun 2024 09:02:20 +0000
Received: by outflank-mailman (input) for mailman id 745137;
 Fri, 21 Jun 2024 09:02:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BNSN=NX=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sKa9y-0001CD-PA
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 09:02:18 +0000
Received: from mail-oo1-xc35.google.com (mail-oo1-xc35.google.com
 [2607:f8b0:4864:20::c35])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f4e25dfd-2fac-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 11:02:16 +0200 (CEST)
Received: by mail-oo1-xc35.google.com with SMTP id
 006d021491bc7-5c19d338401so872949eaf.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 02:02:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4e25dfd-2fac-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718960535; x=1719565335; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=McodRIC87u9bhDhJyxOpy6e8jfj9BZt6jkX1PhJdSRE=;
        b=Km31+jwQObFmGhErrku5mFJNKbGmP+3g4C1FrAGTUJFQ2TLqvOkiY7gsbhF6Oz323b
         Hh+EyiEPRVFu5M5nxPy557cFqY4ruCbke5gux9tLB3+aB1xPXPrjIGXfKuV6rJqiKi9g
         5Wya37A6gm0kd5v//sQEntBBAFrh69cdV32ng=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718960535; x=1719565335;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=McodRIC87u9bhDhJyxOpy6e8jfj9BZt6jkX1PhJdSRE=;
        b=PnP2lfNgC9o2WsfEWcqrFz2daVseawU39AUUquqVDt/mVe5IGewFwJCPmkhph75iEz
         EwzT/xyen0K1d7kQR8DlGm0km804m9rxI8aPmtSGvrTBs/vXbWdH+T7RQBUfy20GWwYP
         nYs48Bch3m9XIY/zLi5ozRUidM52N2pnFgKh6z5magaL4GTJCNZRcIvzk6xNqLMzCenI
         /eW5ZlN84YdPqErq72FvO4TzbhFN9sP5fVnyqCaXjeMuPmt/YPARIjDthkA4TJKelWQV
         ktPID7oO83Hto7rbpl8SIu6cVT3lwjarM/m2Vifq1e9sgEp/MMVZ6FfC2Kx8NKRf4JLh
         AMDw==
X-Gm-Message-State: AOJu0YyDcNo18zIfx9HwTTuTxoPmN2CnEae2nb1NJpqTJscwshDzjwaR
	tHCwMvOtWO6ng47pc7LhCoers32EDeTjKPiWhvYw9x8n0Z7WfCjgSxf/baD6qL/VUWJdUj9P/L2
	hBEGArx4kbzyKMqXqZdRv4CiEgkUWOHMrQcS4gA==
X-Google-Smtp-Source: AGHT+IGua3W/8XFYTCTmCz9RNjwwGAwWMI76YhNGQhSVHJedDS0RVnsu0VVpvQMJaOpak8hypVPHYXDxN3iSTRz1Bgs=
X-Received: by 2002:a05:6870:b28a:b0:254:96ec:bc44 with SMTP id
 586e51a60fabf-25c94a1f0ddmr8347130fac.28.1718960535449; Fri, 21 Jun 2024
 02:02:15 -0700 (PDT)
MIME-Version: 1.0
References: <5b6dfc7571bd76b5546d3881bd660a4e7a745409.1718928467.git.victorm.lira@amd.com>
In-Reply-To: <5b6dfc7571bd76b5546d3881bd660a4e7a745409.1718928467.git.victorm.lira@amd.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Fri, 21 Jun 2024 10:02:04 +0100
Message-ID: <CA+zSX=bCUUP8AVeLy9fMjSSqEHGnpJCUnGUgGT1kZ-i6eiWZXg@mail.gmail.com>
Subject: Re: [XEN PATCH] common/sched: address a violation of MISRA C Rule 8.8
To: victorm.lira@amd.com
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
	Stewart Hildebrand <stewart.hildebrand@amd.com>, Dario Faggioli <dfaggioli@suse.com>, 
	Juergen Gross <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, Jun 21, 2024 at 1:21 AM <victorm.lira@amd.com> wrote:
>
> From: Victor Lira <victorm.lira@amd.com>
>
> Rule 8.8: "The static storage class specifier shall be used in all
> declarations of objects and functions that have internal linkage"
>
> This patch fixes this by adding the static specifier.
> No functional changes.
>
> Reported-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> Signed-off-by: Victor Lira <victorm.lira@amd.com>

With the changes noted already:

Acked-by: George Dunlap <george.dunlap@cloud.com>


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:08:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:08:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745142.1152299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKaFw-00021h-P9; Fri, 21 Jun 2024 09:08:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745142.1152299; Fri, 21 Jun 2024 09:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKaFw-00021a-Mg; Fri, 21 Jun 2024 09:08:28 +0000
Received: by outflank-mailman (input) for mailman id 745142;
 Fri, 21 Jun 2024 09:08:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKaFv-00021O-DW; Fri, 21 Jun 2024 09:08:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKaFv-0005HB-0x; Fri, 21 Jun 2024 09:08:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKaFu-0008Hw-Og; Fri, 21 Jun 2024 09:08:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKaFu-0008Uh-Nj; Fri, 21 Jun 2024 09:08:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Rs7PiOksJqdVboV7xJeU5Tp6GL1gTDpUWbTtjYPyDrQ=; b=hzHxZauw9qV62XuQfZDoFEZZMy
	wQNQpPQrcOE/2MhUhf68bA3HNYCdgbZi2vE4+RtYXdkq+YQTnD4g8YuM/k2pM8VmCF8XCCCkdnCwg
	Q90pDNgQX8/6US54ADFBbp0mqHQ32e8u8Ydcg6ovUPxAvU8DdwJy4SP78PERwlU20Rm4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186443-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186443: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=be38c01da2dd949e0a6f8bceeb88d2e19c8c65f7
X-Osstest-Versions-That:
    ovmf=d512bd31293c7f2aeef9b60fb6f112d0e90adff3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Jun 2024 09:08:26 +0000

flight 186443 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186443/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 be38c01da2dd949e0a6f8bceeb88d2e19c8c65f7
baseline version:
 ovmf                 d512bd31293c7f2aeef9b60fb6f112d0e90adff3

Last test of basis   186440  2024-06-21 03:11:13 Z    0 days
Testing same since   186443  2024-06-21 07:11:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Mike Maslenkin <mike.maslenkin@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d512bd3129..be38c01da2  be38c01da2dd949e0a6f8bceeb88d2e19c8c65f7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:22:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:22:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745156.1152309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKaTK-0004oh-1c; Fri, 21 Jun 2024 09:22:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745156.1152309; Fri, 21 Jun 2024 09:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKaTJ-0004oa-VB; Fri, 21 Jun 2024 09:22:17 +0000
Received: by outflank-mailman (input) for mailman id 745156;
 Fri, 21 Jun 2024 09:22:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CZKQ=NX=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sKaTI-0004oU-HR
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 09:22:16 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on2062f.outbound.protection.outlook.com
 [2a01:111:f400:7e88::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bed154dc-2faf-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 11:22:15 +0200 (CEST)
Received: from DM6PR18CA0006.namprd18.prod.outlook.com (2603:10b6:5:15b::19)
 by LV3PR12MB9330.namprd12.prod.outlook.com (2603:10b6:408:217::11) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.31; Fri, 21 Jun
 2024 09:22:10 +0000
Received: from CY4PEPF0000EDD5.namprd03.prod.outlook.com
 (2603:10b6:5:15b:cafe::d3) by DM6PR18CA0006.outlook.office365.com
 (2603:10b6:5:15b::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.36 via Frontend
 Transport; Fri, 21 Jun 2024 09:22:10 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CY4PEPF0000EDD5.mail.protection.outlook.com (10.167.241.201) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Fri, 21 Jun 2024 09:22:09 +0000
Received: from SATLEXMB04.amd.com (10.181.40.145) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Fri, 21 Jun
 2024 04:22:08 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB04.amd.com (10.181.40.145) with Microsoft SMTP Server id 15.1.2507.39
 via Frontend Transport; Fri, 21 Jun 2024 04:22:07 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bed154dc-2faf-11ef-90a3-e314d9c70b13
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=FraBI1i4Bnal+sGdEA8zJKnwWAqHj5QXtcKpWhWQ7Yw=;
 b=il/dM67VQu7FDe6WZH93MSrDJApQqcZafMvO5W8JkJcsNoxXagmqJmw6pxe0uamFstGUUjOHX5Rii+YMCqqkVExW2moxemuAArr5LwXA713mRw1sGWuTKfWBKLHnOy/7bbCzRDYq2WgXQPeYFcQyyUcXybfS0+TBsveNjQoX5fs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19] xen/arm: static-shmem: request host address to be specified for 1:1 domains
Date: Fri, 21 Jun 2024 11:22:05 +0200
Message-ID: <20240621092205.30602-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Received-SPF: None (SATLEXMB03.amd.com: michal.orzel@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000EDD5:EE_|LV3PR12MB9330:EE_
X-MS-Office365-Filtering-Correlation-Id: 1bde3ba6-5869-4b26-8169-08dc91d3a08c
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2024 09:22:09.6207
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 1bde3ba6-5869-4b26-8169-08dc91d3a08c
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000EDD5.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV3PR12MB9330

As a follow-up to commit cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem
should be direct-mapped for direct-mapped domains"), add a check
requiring that both host and guest physical addresses be supplied for
direct-mapped domains. Otherwise, return an error to prevent unwanted
behavior.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
Reasoning for 4.19:
this hardens the code to prevent feature misuse and unwanted behavior.
---
 xen/arch/arm/static-shmem.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-shmem.c
index cd48d2896b7e..aa80756c3ca5 100644
--- a/xen/arch/arm/static-shmem.c
+++ b/xen/arch/arm/static-shmem.c
@@ -378,6 +378,13 @@ int __init process_shm(struct domain *d, struct kernel_info *kinfo,
             const struct membank *alloc_bank =
                 find_shm_bank_by_id(get_shmem_heap_banks(), shm_id);
 
+            if ( is_domain_direct_mapped(d) )
+            {
+                printk("%pd: host and guest physical address must be supplied for direct-mapped domains\n",
+                       d);
+                return -EINVAL;
+            }
+
             /* guest phys address is right at the beginning */
             gbase = dt_read_paddr(cells, addr_cells);
 
-- 
2.25.1
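For context, the check above guards the dom0less static shared-memory device-tree binding. The fragment below is an illustrative sketch only (node names, addresses, and sizes are made up, not taken from the patch) of a direct-mapped (1:1) domain whose `xen,shared-mem` property carries host address, guest address, and size, as the new check requires:

```dts
/* Illustrative fragment only; addresses and sizes are hypothetical. */
domU1 {
    compatible = "xen,domain";
    direct-map;

    domU1-shared-mem@70000000 {
        compatible = "xen,domain-shared-memory-v1";
        role = "owner";
        xen,shm-id = "my-shared-mem-0";
        /* <host paddr, guest paddr, size>: for a direct-mapped domain
         * the host address must be supplied (it cannot be left to the
         * heap allocator), or process_shm() now returns -EINVAL. */
        xen,shared-mem = <0x70000000 0x70000000 0x10000000>;
    };
};
```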



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:36:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:36:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745166.1152320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKagq-0006gf-53; Fri, 21 Jun 2024 09:36:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745166.1152320; Fri, 21 Jun 2024 09:36:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKagq-0006gY-2F; Fri, 21 Jun 2024 09:36:16 +0000
Received: by outflank-mailman (input) for mailman id 745166;
 Fri, 21 Jun 2024 09:36:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MgEJ=NX=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sKago-0006gS-4z
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 09:36:14 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b041b104-2fb1-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 11:36:08 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec4a35baa7so10085921fa.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 02:36:08 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 38308e7fff4ca-2ec4d7e7906sm1583581fa.121.2024.06.21.02.36.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 02:36:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b041b104-2fb1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718962568; x=1719567368; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=bWcM+UkH1QZJVe9SfjY7/mAy/7Ki82Qc5EYhPE77GJY=;
        b=V9OA+2ZOtEPARRPeoQtkBXpzNKXIyjml+ZnA4zsCdTRXNb7N3dY49KP9oYbGN3//7j
         Hr0SE9F4SLuFNG+4NpcP0qYzwwGLlkOqCw0QG4J5i9yJ6Cmjkyoee3+s+9GLFMBHJrIO
         1/ccHqBzxA7Ed8CvS2fnaVzgAO3+e+DUIIzzpqV7WIJqbrAdlSkojp83km/VS5danCnW
         78KlYvqrimNEriqZHht7akOldwdPxQzHVQAlbl21Mc7weeuODmhI5n/SfFF4/ZmQlNxi
         yLunvLUixu+hXNOVRNIu0R0aDtg8fzuTSA+mk5JHZgYVSKBpthO5X8LUh+fJaxNVKIyu
         J+ew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718962568; x=1719567368;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=bWcM+UkH1QZJVe9SfjY7/mAy/7Ki82Qc5EYhPE77GJY=;
        b=smAsNMdVWQkN6RFYmtjzo6MvB2qODeVMc3D6rqzPkd6JjRy84euGMCG2o79WKr762y
         LJoiYFZ4oLOUdds5BMqtAqPEJeeXHGFpBX14BfEoMYbod/R/PYGyjv6+eA5BoF9EAb/U
         PwEMQLzoOWfMX//SErNQSt4D2hnDDsQOeknQsoA7sXURqsB/9F7wBstbcM6s/1unIdIs
         qVP2gv/WQeqFM7uORbZW0CITVDvoHY1NcfpDIgzYhyAqvzFvZOiQWueIbj0iKWShIYGX
         7O27ZGIEmS7+zd9cE9MHGVkCPXipI9H8g6wrJHntjQCA6eGMBPgDKzlJ89ejSuP7YTcV
         T3Sg==
X-Forwarded-Encrypted: i=1; AJvYcCVoNMXBb1H/n0UnjcizEATarqlxiEbBLL2tCGsGJOum5kDerB9z19SsWhMhwyA8eG9FxRZXiUjBAJb8T2YZimp61ivjnN0QeNM0lsvQIC0=
X-Gm-Message-State: AOJu0YyBc4bwz59irAU/u23uGIMcR+2YQMb/oxdlIkNChLl6gsfylo/U
	P4QyipC+QQyhBEoSj/iOwu6KJuRgWu+XidRYOMbg3COC7j/6rNTb
X-Google-Smtp-Source: AGHT+IHVOYG4LMIM8F9GxMCiyWZnN/t1cx95rRcKDlJy9AY96yVa8Un9IG2Jiv5zwYOIQmuv7EEI1A==
X-Received: by 2002:a2e:9e94:0:b0:2ec:4064:18e6 with SMTP id 38308e7fff4ca-2ec40641b04mr20615941fa.5.1718962567486;
        Fri, 21 Jun 2024 02:36:07 -0700 (PDT)
Message-ID: <13a90dee254ed5994dab7454cc744a1b16e34e97.camel@gmail.com>
Subject: Re: [XEN for-4.19 PATCH] x86/apic: Fix signing in left bitshift
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Matthew Barnes
	 <matthew.barnes@cloud.com>, Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Date: Fri, 21 Jun 2024 11:36:06 +0200
In-Reply-To: <96e8edb4-f9a8-46bf-a99c-cb458b0cb3f0@citrix.com>
References: 
	<6fe6d88c0e07348d3e08fd51863402827126ecb0.1718893590.git.matthew.barnes@cloud.com>
	 <96e8edb4-f9a8-46bf-a99c-cb458b0cb3f0@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40)
MIME-Version: 1.0

On Thu, 2024-06-20 at 16:16 +0100, Andrew Cooper wrote:
> On 20/06/2024 3:31 pm, Matthew Barnes wrote:
> > There exists a bitshift in the IOAPIC code where a signed integer is
> > shifted to the left by at most 31 bits. This is undefined behaviour,
> > and can cause faults in xtf tests such as pv64-pv-iopl~hypercall.
> >
> > This patch fixes this by changing the integer from signed to
> > unsigned.
> >
> > Signed-off-by: Matthew Barnes <matthew.barnes@cloud.com>
>
> The code change itself is fine, but I'm going to recommend some
> adjustments to the commit message.
>
> It's "x86/ioapic"; apic implies the Local APIC, which is apic.c and
> distinct from the IO-APIC. The subject would be clearer as "Fix signed
> shift in end_level_ioapic_irq_new()".
>
> The XTF test has nothing to do with this, so shouldn't be mentioned
> like this. The UBSAN failure was in an interrupt handler, and it was
> complete chance that it triggered while pv64-pv-iopl~hypercall was the
> test being run.
>
> I'm happy to fix all of that up on commit.
>
> CC Oleksii for 4.19. This is low risk, and found during testing with
> UBSAN active.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:37:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:37:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745171.1152329 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKahh-0007Bd-ET; Fri, 21 Jun 2024 09:37:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745171.1152329; Fri, 21 Jun 2024 09:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKahh-0007BW-Ac; Fri, 21 Jun 2024 09:37:09 +0000
Received: by outflank-mailman (input) for mailman id 745171;
 Fri, 21 Jun 2024 09:37:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MgEJ=NX=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sKahg-0007Af-1M
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 09:37:08 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d3612082-2fb1-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 11:37:07 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id
 2adb3069b0e04-52caebc6137so1841365e87.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 02:37:07 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52cd63b441csm146566e87.33.2024.06.21.02.37.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 02:37:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3612082-2fb1-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718962627; x=1719567427; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=PLXGmdtvLhHKkU/pDpI+VMRmDdENy4k3JuMXbMA+T98=;
        b=lxbktCFnY/uw+ZOmNajnnfTGAg6Pj3+zUTxlGguc4T0Pg7XLF5kW6tMJkjZpgyua7j
         dzWHhxeJPtH5XhUKlu3TH70LqLbZ4AytQugCPgmnMLko7DPoADnzGXOW9DkrNhWkiXXx
         XpYYvXkpVSOd2MQ9QYGY9dzU2EkQWZFDBsjKbGrmb++lvHDcElxVO45Ye1Sc9hdxuaxV
         DImjJ5Mc6aXe5Mu432CinfT5EHpez4N9UDBzWhdSxpCAsnGIG+x/WgQlaYv7cYZHReZX
         OCzFoHdygR9bPyaxWarsmZQyFchkdwgGNoxzVMJUpzYtTG9Rrh7Nvaqbtn2oW4JnrAoX
         lmhA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718962627; x=1719567427;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=PLXGmdtvLhHKkU/pDpI+VMRmDdENy4k3JuMXbMA+T98=;
        b=ThvCGGaNr9VhAlHjMyndow2NpUgFthwXMT7xlDMZa97nx6KpJpH07eU4S3QXMhcIGd
         FIXb4lD6DJqu83aVqZTWMG4SGU8i8YW4uC5vPBFDSiwlT9gM8gfgZNQWlhFeEYNGJwXa
         zA/Rz+eHRJewUHGP+hg7z21R+L0MUeE9PUzvwThLaV2LSkQLs/n7WM8rs3Ro7JQjdMUA
         Xt9jaBL1wPEIN3htI8modTzD8i25MAotwRYfMH4IcxuPlfuwtAq3avnNm2ASp2X4BOKH
         khQyhLVlZ0tpCpP1DbVcqV7MqjyfrBzUPZ36J2Nxc/3Pdk4nn0tHFkYwH5qeDyGvVrO5
         eeww==
X-Forwarded-Encrypted: i=1; AJvYcCW1pJ4rushk/roQgDrkKHSr/GpWmHApFvnjRNxzz6ql9GZ9rKYtTAtZLui+CbJatcs/aciJXPHq1/IpLGYiBf8eDLcFEXa05uqjTArIyqM=
X-Gm-Message-State: AOJu0YzFEjEy2vwYnKHxLro4fuKPt5W9dbLELYdmEI6pplrn4w7lp3OH
	erHAhF2jWSKkHp4pHh/+aCrdeox8HQilvgeXwobzW1Selo+qDwcsYORP7w==
X-Google-Smtp-Source: AGHT+IGI/wRUMKo1PgAPUQRH1M2iZOxMb1SC0yeSUOKQbygSfIJ9n44cilNThGO+hsjX7ySezl1jgQ==
X-Received: by 2002:ac2:5ecb:0:b0:52c:9383:4c16 with SMTP id 2adb3069b0e04-52ccaa5d4b2mr4507559e87.22.1718962626680;
        Fri, 21 Jun 2024 02:37:06 -0700 (PDT)
Message-ID: <70d4179eed3198a977e147e29d21ff6998eeb29f.camel@gmail.com>
Subject: Re: [PATCH for-4.19?] libelf: avoid UB in
 elf_xen_feature_{get,set}()
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,  "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
  Stefano Stabellini <sstabellini@kernel.org>
Date: Fri, 21 Jun 2024 11:37:05 +0200
In-Reply-To: <40b33e27-663e-460b-8253-0f5b98fe7f23@citrix.com>
References: <42a8061a-b626-443a-ad42-0e05b043c6c7@suse.com>
	 <40b33e27-663e-460b-8253-0f5b98fe7f23@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40)
MIME-Version: 1.0

On Thu, 2024-06-20 at 17:07 +0100, Andrew Cooper wrote:
> On 20/06/2024 4:34 pm, Jan Beulich wrote:
> > When the left shift amount is up to 31, the shifted quantity wants
> > to be of unsigned int (or wider) type.
> >
> > While there also adjust types: get doesn't alter the array and
> > returns a boolean, while both don't really accept negative "nr".
> > Drop a stray blank each as well.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
>
> +1 for 4.19.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

>
> > ---
> > Really I wonder why these exist at all; they're effectively
> > test_bit() and __set_bit() in hypervisor terms, and iirc something
> > like that exists in the tool stack as well.
>
> The toolstack has tools/libs/ctrl/xc_bitops.h but they're not API
> compatible with Xen.
>
> They're long-granular rather than int-granular, have swapped
> arguments, and are non-LOCKed.
>
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:37:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:37:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745176.1152340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKai8-0007jV-Li; Fri, 21 Jun 2024 09:37:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745176.1152340; Fri, 21 Jun 2024 09:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKai8-0007jO-JA; Fri, 21 Jun 2024 09:37:36 +0000
Received: by outflank-mailman (input) for mailman id 745176;
 Fri, 21 Jun 2024 09:37:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKai7-0007hg-RS; Fri, 21 Jun 2024 09:37:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKai7-0005kq-N8; Fri, 21 Jun 2024 09:37:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKai7-0001NL-Dw; Fri, 21 Jun 2024 09:37:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKai7-0007An-DN; Fri, 21 Jun 2024 09:37:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8dfQqPJxOmyiWRjvD1yzody+vyTUvU/49k48BrY38vM=; b=zeP2B9u/stdR3V6lEDIqS9JvBH
	Q6WqF2mvExVHxyJDQw8oDs7qukM3o7VnIYEuc0YRl4dlB5TOQNg0GTxDgT6KllobOQUrTZH6nkTuX
	OKzwJERr5qPy63hDk6poNQvB/3sxRu0uAFxgA28QbMWH7ynLNl2me+x+o/ZV9PLDenGY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186441-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186441: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=e6b94cba7ee1ea0ab3a49ebdd2520c4a6259a013
X-Osstest-Versions-That:
    libvirt=43d2edc08f5a7b5e1d0c3464580113e069d1efa7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Jun 2024 09:37:35 +0000

flight 186441 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186441/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186427
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              e6b94cba7ee1ea0ab3a49ebdd2520c4a6259a013
baseline version:
 libvirt              43d2edc08f5a7b5e1d0c3464580113e069d1efa7

Last test of basis   186427  2024-06-20 04:18:46 Z    1 days
Testing same since   186441  2024-06-21 04:20:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   43d2edc08f..e6b94cba7e  e6b94cba7ee1ea0ab3a49ebdd2520c4a6259a013 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:38:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:38:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745185.1152349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKajJ-00009t-Vn; Fri, 21 Jun 2024 09:38:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745185.1152349; Fri, 21 Jun 2024 09:38:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKajJ-00009k-TH; Fri, 21 Jun 2024 09:38:49 +0000
Received: by outflank-mailman (input) for mailman id 745185;
 Fri, 21 Jun 2024 09:38:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MgEJ=NX=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sKajI-00009e-Ny
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 09:38:48 +0000
Received: from mail-lf1-x12f.google.com (mail-lf1-x12f.google.com
 [2a00:1450:4864:20::12f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f0556d5-2fb2-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 11:38:47 +0200 (CEST)
Received: by mail-lf1-x12f.google.com with SMTP id
 2adb3069b0e04-52c94cf4c9bso2286550e87.2
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 02:38:47 -0700 (PDT)
Received: from [192.168.219.221] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52cda743110sm43690e87.201.2024.06.21.02.38.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 02:38:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f0556d5-2fb2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718962727; x=1719567527; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=penVdCFQ4sC++G2qfQuDyrB4tANgUxqYAXUDBN2/v9w=;
        b=lok9+a4oNpGt0yqVJKJYgcoJi2zDYO2h5VME7KbnsL/+c61tnPHL2WoCQH03EWY854
         Na0fPmzFIZY6YmvxmqC4m6c/jCRyYQikiTulknwRQ/uo+weaQWR4y/t35IwN8d5SKrir
         m4RqpcO4DGY4xNTR8SQ7EnkCM2QPQbfRgES8oxA901teV2KeeCBchOwhJ4ARXfhVKNx9
         ZKc1Nhi9CQ/CtuWkGbJ5NvxUxWaKi205X2Q35AXSAh+B6mqaAk25ycZg/u16/Nxlf8Wz
         G1qw2dLcV+dkPmxlFfxWWx7I17StYUH1c/HiZ4GfNNkTxKUQJOskYo1bnsz35lk9VPv6
         ItDA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718962727; x=1719567527;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=penVdCFQ4sC++G2qfQuDyrB4tANgUxqYAXUDBN2/v9w=;
        b=ZAguyKkuMXmbTeeIU5aAj2QqeYoavWbxZvI6fKZXMx+Vww0NlorY1wIWay4KBpe0NR
         hg9H6TaKHaFHmi4/DT9sebcKZGb6G7ZxRiqUaa2oE6vsu/XTRBuscT1VH5aE85UuGF3A
         eHa8p+auKkVRaA4pplmHcObmcgRX289pQwEg5V77Q16qtWnXf3oB8ROppka7xqcmg/SW
         SdiawjKsqsnH3vmw66iA4sTVMrlUlHvzESNLaUqovfmS2RPAbS5vU9yCKhmi7WBxvHli
         9SoS0F+Rd8v6PLG26ZD8jyiXCu7CSvXV9Z9p6+/0RCfcaRyvXVCo9xMXrxtfg4ETCnwv
         OxwQ==
X-Forwarded-Encrypted: i=1; AJvYcCXUrHvZacO7gyIqYLWPuGzSVDyLYCZFB4WuyUlE8GFWemIMG10C1dSNxos1fHbuhbuAk9DyITHePcw2S7vIXZrh8IRLPPq0gBE8AylOYU4=
X-Gm-Message-State: AOJu0YziPSvxlooZcIbA6Ln0u7qNfiXzahXtu3vxJ4oQfCRn5UrCJitb
	FhaJeRn6yaLzcP1cMt87DjFJtnuZUyjDQBb9eRtB9D0waizm8TCp
X-Google-Smtp-Source: AGHT+IH1i4kbje+DslaSrUM6HBz8lfQ5+dYTjbfaAO6pD42p33DBIs65vCg7Biq7lJYxY6pvJN7kxw==
X-Received: by 2002:ac2:4dac:0:b0:52c:dacf:e5ac with SMTP id 2adb3069b0e04-52cdacfe7e9mr177368e87.54.1718962726820;
        Fri, 21 Jun 2024 02:38:46 -0700 (PDT)
Message-ID: <0d890819b8f8616df7592d39bdd97b117c51c643.camel@gmail.com>
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: request host address to
 be specified for 1:1 domains
From: "Oleksii K." <oleksii.kurochko@gmail.com>
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>,  Bertrand Marquis <bertrand.marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Date: Fri, 21 Jun 2024 11:38:45 +0200
In-Reply-To: <20240621092205.30602-1-michal.orzel@amd.com>
References: <20240621092205.30602-1-michal.orzel@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.1 (3.52.1-1.fc40)
MIME-Version: 1.0

On Fri, 2024-06-21 at 11:22 +0200, Michal Orzel wrote:
> As a follow up to commit cb1ddafdc573 ("xen/arm/static-shmem: Static-
> shmem
> should be direct-mapped for direct-mapped domains") add a check to
> request that both host and guest physical address must be supplied
> for
> direct mapped domains. Otherwise return an error to prevent unwanted
> behavior.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> Reasoning for 4.19:
> this is hardening the code to prevent a feature misuse and unwanted
> behavior.
> ---
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
>  xen/arch/arm/static-shmem.c | 7 +++++++
>  1 file changed, 7 insertions(+)
> 
> diff --git a/xen/arch/arm/static-shmem.c b/xen/arch/arm/static-
> shmem.c
> index cd48d2896b7e..aa80756c3ca5 100644
> --- a/xen/arch/arm/static-shmem.c
> +++ b/xen/arch/arm/static-shmem.c
> @@ -378,6 +378,13 @@ int __init process_shm(struct domain *d, struct
> kernel_info *kinfo,
>              const struct membank *alloc_bank =
>                  find_shm_bank_by_id(get_shmem_heap_banks(), shm_id);
> 
> +            if ( is_domain_direct_mapped(d) )
> +            {
> +                printk("%pd: host and guest physical address must be
> supplied for direct-mapped domains\n",
> +                       d);
> +                return -EINVAL;
> +            }
> +
>              /* guest phys address is right at the beginning */
>              gbase = dt_read_paddr(cells, addr_cells);
> 



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:51:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:51:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745204.1152365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKavG-00036E-5k; Fri, 21 Jun 2024 09:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745204.1152365; Fri, 21 Jun 2024 09:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKavG-000367-3E; Fri, 21 Jun 2024 09:51:10 +0000
Received: by outflank-mailman (input) for mailman id 745204;
 Fri, 21 Jun 2024 09:51:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UDhT=NX=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sKavE-00035y-As
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 09:51:08 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c6c7413d-2fb3-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 11:51:05 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.202])
 by support.bugseng.com (Postfix) with ESMTPSA id A00AB4EE0738;
 Fri, 21 Jun 2024 11:51:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6c7413d-2fb3-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Nicola Vetrini <nicola.vetrini@bugseng.com>
Subject: [RFC XEN PATCH] x86/mctelem: address violations of MISRA C: 2012 Rule 5.3
Date: Fri, 21 Jun 2024 11:50:59 +0200
Message-Id: <79eb2f12e521f96a53dd166eb7db485bb3d9d067.1718962824.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>

This addresses violations of MISRA C:2012 Rule 5.3, which states the
following: an identifier declared in an inner scope shall not hide an
identifier declared in an outer scope. In this case the shadowing is between
local variables "mctctl" and the file-scope static struct variable with the
same name.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
RFC because I'm not 100% sure the semantics of the code are preserved.
I think so, and it passes gitlab pipelines [1], but there may be some missing
information.

[1] https://gitlab.com/xen-project/people/bugseng/xen/-/pipelines/134025883
---
 xen/arch/x86/cpu/mcheck/mctelem.c | 22 +++++++++++-----------
 1 file changed, 11 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
index b8d0368a7d37..df1a31bffb61 100644
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -168,14 +168,14 @@ static void mctelem_xchg_head(struct mctelem_ent **headp,
 void mctelem_defer(mctelem_cookie_t cookie, bool lmce)
 {
 	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
-	struct mc_telem_cpu_ctl *mctctl = &this_cpu(mctctl);
+	struct mc_telem_cpu_ctl *mctctl_cpu = &this_cpu(mctctl);
 
-	ASSERT(mctctl->pending == NULL || mctctl->lmce_pending == NULL);
+	ASSERT(mctctl_cpu->pending == NULL || mctctl_cpu->lmce_pending == NULL);
 
-	if (mctctl->pending)
-		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
+	if (mctctl_cpu->pending)
+		mctelem_xchg_head(&mctctl_cpu->pending, &tep->mcte_next, tep);
 	else if (lmce)
-		mctelem_xchg_head(&mctctl->lmce_pending, &tep->mcte_next, tep);
+		mctelem_xchg_head(&mctctl_cpu->lmce_pending, &tep->mcte_next, tep);
 	else {
 		/*
 		 * LMCE is supported on Skylake-server and later CPUs, on
@@ -186,10 +186,10 @@ void mctelem_defer(mctelem_cookie_t cookie, bool lmce)
 		 * moment. As a result, the following two exchanges together
 		 * can be treated as atomic.
 		 */
-		if (mctctl->lmce_pending)
-			mctelem_xchg_head(&mctctl->lmce_pending,
-					  &mctctl->pending, NULL);
-		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
+		if (mctctl_cpu->lmce_pending)
+			mctelem_xchg_head(&mctctl_cpu->lmce_pending,
+					  &mctctl_cpu->pending, NULL);
+		mctelem_xchg_head(&mctctl_cpu->pending, &tep->mcte_next, tep);
 	}
 }
 
@@ -213,7 +213,7 @@ void mctelem_process_deferred(unsigned int cpu,
 {
 	struct mctelem_ent *tep;
 	struct mctelem_ent *head, *prev;
-	struct mc_telem_cpu_ctl *mctctl = &per_cpu(mctctl, cpu);
+	struct mc_telem_cpu_ctl *mctctl_cpu = &per_cpu(mctctl, cpu);
 	int ret;
 
 	/*
@@ -232,7 +232,7 @@ void mctelem_process_deferred(unsigned int cpu,
 	 * Any MC# occurring after the following atomic exchange will be
 	 * handled by another round of MCE softirq.
 	 */
-	mctelem_xchg_head(lmce ? &mctctl->lmce_pending : &mctctl->pending,
+	mctelem_xchg_head(lmce ? &mctctl_cpu->lmce_pending : &mctctl_cpu->pending,
 			  &this_cpu(mctctl.processing), NULL);
 
 	head = this_cpu(mctctl.processing);
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 09:54:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 09:54:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745212.1152375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKayn-0003gE-K8; Fri, 21 Jun 2024 09:54:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745212.1152375; Fri, 21 Jun 2024 09:54:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKayn-0003g7-HY; Fri, 21 Jun 2024 09:54:49 +0000
Received: by outflank-mailman (input) for mailman id 745212;
 Fri, 21 Jun 2024 09:54:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/NIW=NX=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKayl-0003g1-RS
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 09:54:47 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4a2507c9-2fb4-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 11:54:45 +0200 (CEST)
Received: from [192.168.1.113] (93-36-220-117.ip62.fastwebnet.it
 [93.36.220.117])
 by support.bugseng.com (Postfix) with ESMTPSA id D75F24EE0738;
 Fri, 21 Jun 2024 11:54:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a2507c9-2fb4-11ef-b4bb-af5377834399
Message-ID: <bce5eae2-973d-4d69-bee1-09f9f09dd011@bugseng.com>
Date: Fri, 21 Jun 2024 11:54:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org,
 consulting@bugseng.com, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
 <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
 <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com>
 <alpine.DEB.2.22.394.2406201811200.2572888@ubuntu-linux-20-04-desktop>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <alpine.DEB.2.22.394.2406201811200.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 21/06/24 03:13, Stefano Stabellini wrote:
> On Thu, 20 Jun 2024, Federico Serafini wrote:
>> On 19/06/24 13:17, Julien Grall wrote:
>>> Hi Federico,
>>>
>>> On 19/06/2024 10:29, Federico Serafini wrote:
>>>> MISRA C Rule 16.4 states that every `switch' statement shall have a
>>>> `default' label and a statement or a comment prior to the
>>>> terminating break statement.
>>>>
>>>> This patch addresses some violations of the rule related to the
>>>> "notifier pattern": a frequently-used pattern whereby only a few values
>>>> are handled by the switch statement and nothing should be done for
>>>> others (nothing to do in the default case).
>>>>
>>>> Note that for function mwait_idle_cpu_init() in
>>>> xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
>>>> not added: differently from the other functions covered in this patch,
>>>> the default label has a return statement that does not violate Rule 16.4.
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>>>> ---
>>>> Changes in v2:
>>>> as Jan pointed out, in v1 some patterns were not explicitly identified
>>>> (https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).
>>>>
>>>> This version adds the /* Notifier pattern. */ comment to all the patterns
>>>> present in
>>>> the Xen codebase except for mwait_idle_cpu_init().
>>>> ---
>>>>    xen/arch/arm/cpuerrata.c                     | 1 +
>>>>    xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
>>>>    xen/arch/arm/gic.c                           | 1 +
>>>>    xen/arch/arm/irq.c                           | 4 ++++
>>>>    xen/arch/arm/mmu/p2m.c                       | 1 +
>>>>    xen/arch/arm/percpu.c                        | 1 +
>>>>    xen/arch/arm/smpboot.c                       | 1 +
>>>>    xen/arch/arm/time.c                          | 1 +
>>>>    xen/arch/arm/vgic-v3-its.c                   | 2 ++
>>>>    xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
>>>>    xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
>>>>    xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
>>>>    xen/arch/x86/genapic/x2apic.c                | 3 +++
>>>>    xen/arch/x86/hvm/hvm.c                       | 1 +
>>>>    xen/arch/x86/nmi.c                           | 1 +
>>>>    xen/arch/x86/percpu.c                        | 3 +++
>>>>    xen/arch/x86/psr.c                           | 3 +++
>>>>    xen/arch/x86/smpboot.c                       | 3 +++
>>>>    xen/common/kexec.c                           | 1 +
>>>>    xen/common/rcupdate.c                        | 1 +
>>>>    xen/common/sched/core.c                      | 1 +
>>>>    xen/common/sched/cpupool.c                   | 1 +
>>>>    xen/common/spinlock.c                        | 1 +
>>>>    xen/common/tasklet.c                         | 1 +
>>>>    xen/common/timer.c                           | 1 +
>>>>    xen/drivers/cpufreq/cpufreq.c                | 1 +
>>>>    xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
>>>>    xen/drivers/passthrough/x86/hvm.c            | 3 +++
>>>>    xen/drivers/passthrough/x86/iommu.c          | 3 +++
>>>>    29 files changed, 59 insertions(+)
>>>>
>>>> diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
>>>> index 2b7101ea25..69c30aecd8 100644
>>>> --- a/xen/arch/arm/cpuerrata.c
>>>> +++ b/xen/arch/arm/cpuerrata.c
>>>> @@ -730,6 +730,7 @@ static int cpu_errata_callback(struct notifier_block
>>>> *nfb,
>>>>            rc = enable_nonboot_cpu_caps(arm_errata);
>>>>            break;
>>>>        default:
>>>> +        /* Notifier pattern. */
>>> Without looking at the commit message (which may not be trivial when
>>> committed), it is not clear to me what this is supposed to mean. Will there
>>> be a longer explanation in the MISRA doc? Should this be a SAF-* comment?
>>>
>>>>            break;
>>>>        }
>>>> diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
>>>> index eb0a5535e4..4c2bd35403 100644
>>>> --- a/xen/arch/arm/gic-v3-lpi.c
>>>> +++ b/xen/arch/arm/gic-v3-lpi.c
>>>> @@ -389,6 +389,10 @@ static int cpu_callback(struct notifier_block *nfb,
>>>> unsigned long action,
>>>>                printk(XENLOG_ERR "Unable to allocate the pendtable for
>>>> CPU%lu\n",
>>>>                       cpu);
>>>>            break;
>>>> +
>>>> +    default:
>>>> +        /* Notifier pattern. */
>>>> +        break;
>>>
>>> Skimming through v1, it was pointed out that gic-v3-lpi may miss some cases.
>>>
>>> Let me start with that I understand this patch is technically not changing
>>> anything. However, it gives us an opportunity to check the notifier pattern.
>>>
>>> Has anyone done any proper investigation? If so, what was the outcome? If
>>> not, have we identified someone to do it?
>>>
>>> The same question will apply for place where you add "default".
>>
>> Yes, I also think this could be an opportunity to check the pattern
>> but no one has yet been identified to do this.
> 
> I don't think I understand Julien's question and/or your answer.
> 
> Is the question whether someone has done an analysis to make sure this
> patch covers all notifier patterns in the xen codebase?

I think Jan and Julien's concerns are about the fact that my patch
takes for granted that all the switch statements are doing the right
thing: someone should investigate the notifier patterns to confirm that
they are handling the different cases correctly.

> 
> If so, I expect that you have done an analysis simply by basing this
> patch on the 16.4 violations reported by ECLAIR?

The previous version of the patch was based only on the reports of
ECLAIR but Jan said "you left out some patterns, why?".

So, this version of the patch adds the comment for all the notifier
patterns I found using git grep "struct notifier_block \*"
(a superset of the ones reported by ECLAIR because some of them are in
files excluded from the analysis or deviated).

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 10:40:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 10:40:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745229.1152386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKbgv-000288-V5; Fri, 21 Jun 2024 10:40:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745229.1152386; Fri, 21 Jun 2024 10:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKbgv-000281-SH; Fri, 21 Jun 2024 10:40:25 +0000
Received: by outflank-mailman (input) for mailman id 745229;
 Fri, 21 Jun 2024 10:40:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gfdn=NX=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sKbgv-00027v-If
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 10:40:25 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id aa9780d8-2fba-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 12:40:24 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id D914B4EE0738;
 Fri, 21 Jun 2024 12:40:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa9780d8-2fba-11ef-90a3-e314d9c70b13
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2] automation/eclair_analysis: deviate and|or|xor|not for MISRA C Rule 21.2
Date: Fri, 21 Jun 2024 12:40:12 +0200
Message-Id: <7b05a537b094598b98b92d0869d16402648fb6f5.1718964932.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rule 21.2 reports identifiers reserved for the C and POSIX standard
libraries: or, and, not and xor are reserved identifiers because they
constitute alternate spellings for the corresponding operators (they are
defined as macros by iso646.h); however Xen doesn't use standard library
headers, so there is no risk of overlap.

This addresses violations arising from x86_emulate/x86_emulate.c, where
labeled statements named or, and, and xor appear.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from v1:
Added deviation for 'not' identifier.
Added explanation of where these identifiers are defined, specifically in the
'iso646.h' file of the Standard Library.
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 069519e380..14c7afb39e 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -501,7 +501,7 @@ still remain available."
 -doc_begin="or, and and xor are reserved identifiers because they constitute alternate
 spellings for the corresponding operators (they are defined as macros by iso646.h).
 However, Xen doesn't use standard library headers, so there is no risk of overlap."
--config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor)$)))"}
+-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor|not)$)))"}
 -doc_end
 
 -doc_begin="Xen does not use the functions provided by the Standard Library, but
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 11:40:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 11:40:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745241.1152396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKcdD-0000j0-9S; Fri, 21 Jun 2024 11:40:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745241.1152396; Fri, 21 Jun 2024 11:40:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKcdD-0000it-5E; Fri, 21 Jun 2024 11:40:39 +0000
Received: by outflank-mailman (input) for mailman id 745241;
 Fri, 21 Jun 2024 11:40:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Iemj=NX=gmail.com=w1benny@srs-se1.protection.inumbo.net>)
 id 1sKcdC-0000in-41
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 11:40:38 +0000
Received: from mail-oo1-xc33.google.com (mail-oo1-xc33.google.com
 [2607:f8b0:4864:20::c33])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 12eb76cc-2fc3-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 13:40:36 +0200 (CEST)
Received: by mail-oo1-xc33.google.com with SMTP id
 006d021491bc7-5b53bb4bebaso1037816eaf.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 04:40:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12eb76cc-2fc3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1718970035; x=1719574835; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1yaHhB7H1P0V+QSwciRSnhP8rzF3ubJ+4eGBfElpPIo=;
        b=KtnzNnRdHOZkJkWpjjnY8tcTd3FYD9qAuCA+LKqtoZ0GrA+w/1AYlJ2agnW2eAkIVh
         9dcKHuKrjvaqoO/Syb4hNbl0rlemGyXCt6SCG1xShJFm/R/GIGAy2SOMY0lgidolUiLe
         VvweHnIHIWkw37XiGUElQ+qieroozyl58sQ8M5vNd+J+js+odK9rbQdZRT3Cudb5BfQM
         PPsQP1Dyq/UA8Fdd0x0b+doAjdF/VQv8EoqMocbApOwhFDbROqTtBSvhhTOp/Eyi75aP
         jRJJaozu8K9mO6KO+z63rGP390+3YlYHoGdEafvceDHixuEK+B8nfXlfXHrGfjq0R6ai
         9dRw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718970035; x=1719574835;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1yaHhB7H1P0V+QSwciRSnhP8rzF3ubJ+4eGBfElpPIo=;
        b=paT9Ntpr15/JLYMWuQyN1ZMOnBoe8LFwBRGWVZLUesvwV5XZaTa4MvOVrs8Q/88sAo
         hIkUkZ+QeUuqJ7fkzk9+IWc5/o+Idm61HrzmDJ85p9oWz6ZTCk2i9buZ/F6rurUG8cwP
         M94JVkhFf8I12vf+blj1G5bHtwiXnpzYQ/SUDKTg9zptTvsltYFTKbnJ2+QxGvwb7ARp
         IWGjgNdXoTnsrFsM3SqI285OIkIRJE7peTYt7UM6l9/H1xVHm6iy9EP2PnZyld19jcW7
         Gw3NrxeQ9QBM0mXHuIxyxyTY4KOSTGgatjaAZNEfip2ToVNQzE2pmtsm8am4c16Pv0XX
         aHFw==
X-Forwarded-Encrypted: i=1; AJvYcCUJSKZAAnRX64dn63umaxpMhCXh7FyNxVECO+YbRTd68J+C1+eD2E5p2pul+19hOZRr3JUNNLKAPV5Tk6o+lQ8Xfxk0FvRaeUtI1mccy/g=
X-Gm-Message-State: AOJu0YwJ7P+JBXi/iCruu9YBIAIVAZv0YgzFsgaY/R8yOdADiP1cRY9r
	yeOdoyNCsdjJTgktfja7pIv4YecBpygkXEcQa1gfnznooIdNZWmOXnxyiW1iUaLGO0UNP38Zwn9
	Qtk7MeYt8ikxYwwLHmOFiqccRt7c=
X-Google-Smtp-Source: AGHT+IE200lHfcn6EP/q0c3YeC+W7wRpR+wFUbZbYgI3JVeO7Hkv89H4Cc6NYsDDEiK4/jCgHn1bWm1iCreJKCBDmtM=
X-Received: by 2002:a05:6870:160d:b0:25c:bc3f:f924 with SMTP id
 586e51a60fabf-25cbc3ffa32mr4466368fac.35.1718970034771; Fri, 21 Jun 2024
 04:40:34 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1718038855.git.w1benny@gmail.com> <fee20e24a94cb29dea81631a6b775933d1151da4.1718038855.git.w1benny@gmail.com>
 <4a49fe9b-66fd-4a32-ad01-14ed4c5fc34c@suse.com> <CAKBKdXgUKYoJfB1mG+6JSaV=jWpmRmS1UbQ6N4JNZ774rP_PoQ@mail.gmail.com>
 <4cf14abe-881e-4328-9083-bd04afd6b307@suse.com>
In-Reply-To: <4cf14abe-881e-4328-9083-bd04afd6b307@suse.com>
From: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Date: Fri, 21 Jun 2024 13:40:24 +0200
Message-ID: <CAKBKdXg5jCJwW2n2rkWS1aeHTN4X7-z-STft8n4xJ59JCTDhYA@mail.gmail.com>
Subject: Re: [PATCH for-4.19? v6 6/9] xen: Make the maximum number of altp2m
 views configurable for x86
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
	Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel <michal.orzel@amd.com>, 
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Thu, Jun 20, 2024 at 9:25 AM Jan Beulich <jbeulich@suse.com> wrote:
> Not exactly. You may not assert on idx. The assertion, if any, wants to
> check d->nr_altp2m against MAX_EPTP.

In addition to the check in arch_sanitize_domain? As a safeguard?

> You're again comparing cases where we control the index (in the loop) with
> cases where we don't (hypercall inputs).

So, replacing strictly the occurrences where we don't control the
index, and leaving everything else as is. Okay.

P.


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 13:41:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 13:41:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745264.1152405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKeVj-0005bJ-RF; Fri, 21 Jun 2024 13:41:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745264.1152405; Fri, 21 Jun 2024 13:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKeVj-0005bC-Ob; Fri, 21 Jun 2024 13:41:03 +0000
Received: by outflank-mailman (input) for mailman id 745264;
 Fri, 21 Jun 2024 13:41:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gfdn=NX=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sKeVi-0005b6-P6
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 13:41:02 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e563ec7a-2fd3-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 15:41:00 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id A38BA4EE0738;
 Fri, 21 Jun 2024 15:40:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e563ec7a-2fd3-11ef-90a3-e314d9c70b13
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2] common/unlzo: address violation of MISRA C Rule 7.3
Date: Fri, 21 Jun 2024 15:40:47 +0200
Message-Id: <847f9b715b3c8e2ba0637fdd79111f4f828389c6.1718976211.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This addresses a violation of MISRA C:2012 Rule 7.3, which states the
following: the lowercase character `l' shall not be used in a literal
suffix.

The file common/unlzo.c defined the non-compliant constant
LZO_BLOCK_SIZE using a lowercase 'l' suffix; it is now defined as
'256*1024L'.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
Changes from v1:
Instead of deviating the Rule 7.3 reports for common/unlzo.c, they are
addressed by changing the non-compliant definition of LZO_BLOCK_SIZE.
---
 xen/common/unlzo.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/unlzo.c b/xen/common/unlzo.c
index bdcefa95b3..acb8dff600 100644
--- a/xen/common/unlzo.c
+++ b/xen/common/unlzo.c
@@ -52,7 +52,7 @@ static inline u32 get_unaligned_be32(const void *p)
 static const unsigned char lzop_magic[] = {
 	0x89, 0x4c, 0x5a, 0x4f, 0x00, 0x0d, 0x0a, 0x1a, 0x0a };
 
-#define LZO_BLOCK_SIZE        (256*1024l)
+#define LZO_BLOCK_SIZE        (256*1024L)
 #define HEADER_HAS_FILTER      0x00000800L
 #define HEADER_SIZE_MIN       (9 + 7     + 4 + 8     + 1       + 4)
 #define HEADER_SIZE_MAX       (9 + 7 + 1 + 8 + 8 + 4 + 1 + 255 + 4)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 13:47:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 13:47:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745271.1152416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKebT-0006C3-FE; Fri, 21 Jun 2024 13:46:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745271.1152416; Fri, 21 Jun 2024 13:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKebT-0006Bw-CS; Fri, 21 Jun 2024 13:46:59 +0000
Received: by outflank-mailman (input) for mailman id 745271;
 Fri, 21 Jun 2024 13:46:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKebR-0006Bm-Om; Fri, 21 Jun 2024 13:46:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKebR-0002DR-HE; Fri, 21 Jun 2024 13:46:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKebR-00011R-1t; Fri, 21 Jun 2024 13:46:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKebR-0007WQ-1H; Fri, 21 Jun 2024 13:46:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N/r7sXFmnVLlYc9Mi/lb3mqT1kvoquUHFT/hVA19QME=; b=jQKW2jZylygO09mmwdJ23H7nGp
	Qz6bwJO6VQBza7R5Q7XKui7HF33Oud4UcaFI/DF72lOywhwKnJe8fBdR+kzh5tkM3yX2yLZ4aTOdr
	Xa4Hcxcmaw4GrDcu/kyA5fY12jrrJuDcb3NyYefNJDb0bM30OYqd/aMR2Ntab5edP1bE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186444-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186444: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=62071a1c16c4dbe765491e58e456fd3a19b33298
X-Osstest-Versions-That:
    xen=62071a1c16c4dbe765491e58e456fd3a19b33298
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Jun 2024 13:46:57 +0000

flight 186444 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186444/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-raw       8 xen-boot         fail in 186438 pass in 186444
 test-armhf-armhf-libvirt-vhd 17 guest-start/debian.repeat  fail pass in 186438

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186438
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186438
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186438
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186438
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186438
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186438
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  62071a1c16c4dbe765491e58e456fd3a19b33298
baseline version:
 xen                  62071a1c16c4dbe765491e58e456fd3a19b33298

Last test of basis   186444  2024-06-21 07:49:49 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 15:09:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 15:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745308.1152434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKft3-0006iX-8b; Fri, 21 Jun 2024 15:09:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745308.1152434; Fri, 21 Jun 2024 15:09:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKft3-0006iQ-5r; Fri, 21 Jun 2024 15:09:13 +0000
Received: by outflank-mailman (input) for mailman id 745308;
 Fri, 21 Jun 2024 15:09:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JBkP=NX=bounce.vates.tech=bounce-md_30504962.66759794.v1-7d137cfc1204456185aac7c8658aefb3@srs-se1.protection.inumbo.net>)
 id 1sKft1-0006iK-Op
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 15:09:11 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 35d78739-2fe0-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 17:09:10 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W5LNh3HgHz5QkLqV
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 15:09:08 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 7d137cfc1204456185aac7c8658aefb3; Fri, 21 Jun 2024 15:09:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35d78739-2fe0-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718982548; x=1719243048;
	bh=lIV/U0ezGsXVmOjqbkafB5BjfIUquJ6q+7BRog7b6qo=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=gnYPzEPAuHv5IFdfKKkh/CwhCdlAdKKm0Wz8adR7g9EyQug2oplChvNVVVLyi/ZNR
	 qpyCUa7W4lJlVzsfcKXxnCgwbt03J6ZbvFa3bLp7ZwQo7mMVmgC8BQfcbAvAPhUJap
	 HkurX0avjtzxSL/iJ02Ed87OdDvZRerd4AGwNALFxoS5kDI7Zh9wMt85tvn3esiou9
	 L7n0oUUdZH3ahi8tU2dsL9RCqCdWDu4rajY5L2tFdWZ6C46I5kcAbnl2XWRI5K2QTc
	 THT1AzBQ71mdeEvcaNlPBOFA8VH23K7JWDqBJD4kMgBeE+KHRYw3kxKsXav/7w7Bte
	 YrvPkD1+9UdUw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718982548; x=1719243048; i=teddy.astie@vates.tech;
	bh=lIV/U0ezGsXVmOjqbkafB5BjfIUquJ6q+7BRog7b6qo=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=GV1bbXFF1LgGXDJHgJlI0dsXBzt/XQ29AGxzHN17K2TafFRPAAJBhpiDOILQcgrUf
	 HDhLw8cp6y1FlB/W523p8v6GxkVgVViFcm89+Do0MovrU0XixOXPX1i3vua/scx2k6
	 Yuz1k13lPAX95jlAEmNj9MiyTZ7gbXPSC0Uqgf16VqEwIiPTTMB0fIx2lDOXpdzstt
	 +UFeqt40y0z6eqJ5q7grjOkeTKhA0qakRt2rWQlVDiy2UE2IAPOHsFyVTgjmQBsam/
	 SLyRBnkgzaRgv8dtVf+Qowotmgkx/QDJXUU/g4u98kfcd11h7p6WQortJNVYe6U32p
	 XSogVqNrxSIPA==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?Re:=20[RFC=20PATCH]=20iommu/xen:=20Add=20Xen=20PV-IOMMU=20driver?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718982546557
Message-Id: <750967b7-252f-4523-872f-64b79358c97c@vates.tech>
To: Jason Gunthorpe <jgg@ziepe.ca>
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux.dev, Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, Robin Murphy <robin.murphy@arm.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech> <20240619163000.GK791043@ziepe.ca>
In-Reply-To: <20240619163000.GK791043@ziepe.ca>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.7d137cfc1204456185aac7c8658aefb3?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240621:md
Date: Fri, 21 Jun 2024 15:09:08 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello Jason,

On 19/06/2024 at 18:30, Jason Gunthorpe wrote:
> On Thu, Jun 13, 2024 at 01:50:22PM +0000, Teddy Astie wrote:
> 
>> +struct iommu_domain *xen_iommu_domain_alloc(unsigned type)
>> +{
>> +	struct xen_iommu_domain *domain;
>> +	u16 ctx_no;
>> +	int ret;
>> +
>> +	if (type & IOMMU_DOMAIN_IDENTITY) {
>> +		/* use default domain */
>> +		ctx_no = 0;
> 
> Please use the new ops, domain_alloc_paging and the static identity domain.

Yes, in v2 I will use this newer interface.

I have a question on this new interface: is it valid not to have an 
identity domain (with the "default domain" being blocking)? In the 
current implementation it doesn't really matter, but at some point we 
may want to allow not having it (thus making this driver mandatory).

> 
>> +static struct iommu_group *xen_iommu_device_group(struct device *dev)
>> +{
>> +	if (!dev_is_pci(dev))
>> +		return ERR_PTR(-ENODEV);
>> +
> 
> device_group is only called after probe_device, since you already
> exclude !pci during probe there is no need for this wrapper, just set
> the op directly to pci_device_group.
> 
>> +	if (!dev_is_pci(dev))
>> +		return;
> 
> No op is ever called on a non-probed device, remove all these checks.
>
> 
> A paging domain should be the only domain ops that have a populated
> map so this should be made impossible by construction
Makes sense, will remove these redundant checks in v2.

> 
>> +static void xen_iommu_release_device(struct device *dev)
>> +{
>> +	int ret;
>> +	struct pci_dev *pdev;
>> +	struct pv_iommu_op op = {
>> +		.subop_id = IOMMUOP_reattach_device,
>> +		.flags = 0,
>> +		.ctx_no = 0 /* reattach device back to default context */
>> +	};
> 
> Consider if you can use release_domain for this, I think this is
> probably a BLOCKED domain behavior.

The goal is to put all devices back where they were at the beginning 
(the default "context"), which is what release_domain appears to do. 
Will use it for v2.

> 
> Jason

Teddy


Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 15:10:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 15:10:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745312.1152450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKfu2-00089U-Ve; Fri, 21 Jun 2024 15:10:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745312.1152450; Fri, 21 Jun 2024 15:10:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKfu2-00088V-Pi; Fri, 21 Jun 2024 15:10:14 +0000
Received: by outflank-mailman (input) for mailman id 745312;
 Fri, 21 Jun 2024 15:10:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J4PK=NX=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sKfu1-00084S-K0
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 15:10:13 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5a32d3db-2fe0-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 17:10:10 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-57d26a4ee65so1878722a12.2
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 08:10:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a32d3db-2fe0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1718982610; x=1719587410; darn=lists.xenproject.org;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=wPb2b2fJ2ZScQtWz48fle1fCrQ89op+wO4Sa/WlrB0w=;
        b=T0Xz2TodYGW4U9S/QwCG29gQ2QoaGVRi9lpikPrMtr4uHsv1XPw2GN+PGFQFzAfhIt
         zUbvHuBeCPilikCH8ngJidlKSaeGU1hvzNWlgNQxwu7EyHSY1eCafLlK0f3ghwlPtYmp
         D+79uzvmAb9iGizPz8qDlWEfb/ozebDB/hD5U=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718982610; x=1719587410;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=wPb2b2fJ2ZScQtWz48fle1fCrQ89op+wO4Sa/WlrB0w=;
        b=FIpIotgJT+pnKInYizXTuyqmb5O95/SJgKBrmpH8wKyaEjBbuFhCOgaLhvF3coWeIT
         ussLioG7hfy2riRXNZNmklHwZb/W/Wn4cClyfS2tOrJHBUjAH1H1DqI9Vqh9bNJYaIN8
         pvouBae/qRbMJqA1WKKpk+ca9PT3IKhtrHmR0yd+4xMdbskpasAVWM8MHCtJqc3aKhFG
         gUPht22dLvQIMXUzkj2XybXlW1Mvz6A6f+S8qUpUwdKfI0F72wk4U7X2AHh9DrWl+xXb
         OJ/ybi8HmQHJU3sFWiXMIQhEpypkHNH05JqMwssE0zf7HA9w0ffNqw9UgEah0LU/gf8s
         9MIQ==
X-Gm-Message-State: AOJu0YxOezEOMxX5vwUo8RKUVflW3mcTf/w8YUdki7h6ZJhTaMgIDo9T
	cFmckL7HquHKGfgD0VMD7i0ss6nv0KmMjZXXFF4lkUGgaeIS4cA7xugsr7NPQg+pg0IfOtypVso
	WtPH7egpqwGM82mQyMLgBkskW35lgCV4J2dRSFuYpC5r4JR74H4rSWw==
X-Google-Smtp-Source: AGHT+IELR8CKOEb0x/QUQK99GFfcyf2b+1UOWkK6oTJQ9nsCc5yGxnx6vmgo9+lQqLOY3XuGA6EtaYbEWKErc+pznl0=
X-Received: by 2002:a05:6402:1d07:b0:57d:f74:c4af with SMTP id
 4fb4d7f45d1cf-57d0f74c4c2mr4686529a12.1.1718982609502; Fri, 21 Jun 2024
 08:10:09 -0700 (PDT)
MIME-Version: 1.0
From: Kelly Choi <kelly.choi@cloud.com>
Date: Fri, 21 Jun 2024 16:09:33 +0100
Message-ID: <CAO-mL=ySoxhyXf71Qyob+3N1xuvvynjbEa8O7ERv8XeV0ouDCA@mail.gmail.com>
Subject: Check out our Xen Summit highlights video!
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org, 
	xen-announce@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000d09ee1061b67d3d7"

--000000000000d09ee1061b67d3d7
Content-Type: text/plain; charset="UTF-8"

Hi everyone,

Check out our Xen Summit highlights video on YouTube:
https://www.youtube.com/watch?v=qZcCCm_PaHs&ab_channel=TheXenProject

What a great reminder of how important our summits are, and of the
community behind them!

Many thanks,
Kelly Choi

Community Manager
Xen Project

--000000000000d09ee1061b67d3d7
Content-Type: text/html; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

<div dir=3D"ltr">Hi everyone,<div><br></div><div>Check out our Xen Summit h=
ighlights video on YouTube:</div><div><a href=3D"https://www.youtube.com/wa=
tch?v=3DqZcCCm_PaHs&amp;ab_channel=3DTheXenProject">https://www.youtube.com=
/watch?v=3DqZcCCm_PaHs&amp;ab_channel=3DTheXenProject</a>=C2=A0</div><div><=
br></div><div>What a great reminder of how important our summits are, and t=
he community behind it!</div><div><br clear=3D"all"><div><div dir=3D"ltr" c=
lass=3D"gmail_signature" data-smartmail=3D"gmail_signature"><div dir=3D"ltr=
"><div>Many thanks,</div><div>Kelly Choi</div><div><br></div><div><div styl=
e=3D"color:rgb(136,136,136)">Community Manager</div><div style=3D"color:rgb=
(136,136,136)">Xen Project=C2=A0<br></div></div></div></div></div></div></d=
iv>

--000000000000d09ee1061b67d3d7--


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 15:33:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 15:33:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745353.1152482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKgFw-00056Z-4K; Fri, 21 Jun 2024 15:32:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745353.1152482; Fri, 21 Jun 2024 15:32:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKgFw-00056S-1R; Fri, 21 Jun 2024 15:32:52 +0000
Received: by outflank-mailman (input) for mailman id 745353;
 Fri, 21 Jun 2024 15:32:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/NIW=NX=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKgFu-00056M-Sj
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 15:32:50 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 839ee2c3-2fe3-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 17:32:48 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.208.228.129])
 by support.bugseng.com (Postfix) with ESMTPSA id B12B24EE0738;
 Fri, 21 Jun 2024 17:32:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 839ee2c3-2fe3-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH] automation/eclair: add more guidelines to the monitored set
Date: Fri, 21 Jun 2024 17:32:41 +0200
Message-Id: <f03398504405689413521de1675a33e50cdbc30b.1718983858.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add more accepted guidelines to the monitored set to check them at each
commit.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/monitored.ecl | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/monitored.ecl b/automation/eclair_analysis/ECLAIR/monitored.ecl
index 4daecb0c83..9ffaebbdc3 100644
--- a/automation/eclair_analysis/ECLAIR/monitored.ecl
+++ b/automation/eclair_analysis/ECLAIR/monitored.ecl
@@ -18,10 +18,13 @@
 -enable=MC3R1.R12.5
 -enable=MC3R1.R1.3
 -enable=MC3R1.R13.6
+-enable=MC3R1.R13.1
 -enable=MC3R1.R1.4
 -enable=MC3R1.R14.1
 -enable=MC3R1.R14.4
 -enable=MC3R1.R16.2
+-enable=MC3R1.R16.3
+-enable=MC3R1.R16.4
 -enable=MC3R1.R16.6
 -enable=MC3R1.R16.7
 -enable=MC3R1.R17.1
@@ -34,6 +37,7 @@
 -enable=MC3R1.R20.13
 -enable=MC3R1.R20.14
 -enable=MC3R1.R20.4
+-enable=MC3R1.R20.7
 -enable=MC3R1.R20.9
 -enable=MC3R1.R2.1
 -enable=MC3R1.R21.10
@@ -58,6 +62,7 @@
 -enable=MC3R1.R5.2
 -enable=MC3R1.R5.3
 -enable=MC3R1.R5.4
+-enable=MC3R1.R5.5
 -enable=MC3R1.R5.6
 -enable=MC3R1.R6.1
 -enable=MC3R1.R6.2
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 15:38:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 15:38:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745362.1152491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKgLZ-0005te-OF; Fri, 21 Jun 2024 15:38:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745362.1152491; Fri, 21 Jun 2024 15:38:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKgLZ-0005tX-Kk; Fri, 21 Jun 2024 15:38:41 +0000
Received: by outflank-mailman (input) for mailman id 745362;
 Fri, 21 Jun 2024 15:38:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r4xA=NX=bounce.vates.tech=bounce-md_30504962.66759e7c.v1-d3f6f7b1faa24e30b72ac5e9274a2fa9@srs-se1.protection.inumbo.net>)
 id 1sKgLY-0005tR-Mk
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 15:38:40 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53ad00aa-2fe4-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 17:38:38 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W5M2h74pNz5QkLfk
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 15:38:36 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 d3f6f7b1faa24e30b72ac5e9274a2fa9; Fri, 21 Jun 2024 15:38:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53ad00aa-2fe4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718984317; x=1719244817;
	bh=hfad+2rNS0mrqDfEWBN4M8U5znJnUAVi7JfVgoUoF+I=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=yUzDNOdtL7LHsHK9HBhzkzL6I+VotskM+QeWA7BjPLtPzCE90JfFgXDi8yvsXUONJ
	 Ds7p4/eyXr3yeJTsSUeuJLtEQEafOTnYT0qqlAyhM4YcsKoUQHtaJnFo+jArZu3AYc
	 zuKjiZorLb7hq9IcD7UJWIN0lp+xiqBylwPE1K8cAPgP6DADHD7nv6gyt4NP86+bOM
	 yV540uT14v0CWYW7KcDldtsEkYKu/ySkBfl1/UiCIULhooa8HBhW9+JbOGtNX1cEQr
	 oADDMXCXBSNxh1kuoLVWAkeFPZra9iY34SyAdOJe3htDT05k8J6xFTTE4P7cQpuHey
	 5vYM4qpiY1KKQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718984317; x=1719244817; i=anthony.perard@vates.tech;
	bh=hfad+2rNS0mrqDfEWBN4M8U5znJnUAVi7JfVgoUoF+I=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=WzQhxveU/iYCGfzQ4WbIykGFnavUMQDgjbpVq9besL6n+AUldf9ki3JxtOlBqDtdU
	 VZYK8y2vfBDzOlVCWJ394TSzJSKs9jz4CGBSHxDtc5/YA1sN7uMuzSP35z8OM1Q7Y5
	 jA7moR46TQsKJ4atAyHZ95C8lmbkB6gPneOztk0zPybNxGMKZJc16CpxIzFYcVeAwL
	 TmnAgV3/pFWAT/1xWjj65NJRi00o/JXmvDMuJh7Y/OY7+St2fRQ9QiebOypOzTFYwx
	 0EI7GbRVqlwUskiI5IqRsfG0x0tvKqdutPnpGfKMpsGJsecZmHu18lFalIcV9FRvuz
	 BZOfu84+acN6g==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[XEN=20PATCH]=20tools/misc:=20xen-hvmcrash:=20Inject=20#DF=20instead=20of=20overwriting=20RIP?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718984316003
To: Matthew Barnes <matthew.barnes@cloud.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Anthony PERARD <anthony@xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Message-Id: <ZnWee2hUmG42n/W7@l14>
References: <27f4397093d92b53f89d625d682bd4b7145b65d8.1717426439.git.matthew.barnes@cloud.com>
In-Reply-To: <27f4397093d92b53f89d625d682bd4b7145b65d8.1717426439.git.matthew.barnes@cloud.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.d3f6f7b1faa24e30b72ac5e9274a2fa9?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240621:md
Date: Fri, 21 Jun 2024 15:38:36 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Mon, Jun 03, 2024 at 03:59:18PM +0100, Matthew Barnes wrote:
> diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
> index 1d058fa40a47..8ef1beb388f8 100644
> --- a/tools/misc/xen-hvmcrash.c
> +++ b/tools/misc/xen-hvmcrash.c
> @@ -38,22 +38,21 @@
>  #include <sys/stat.h>
>  #include <arpa/inet.h>
>
> +#define XC_WANT_COMPAT_DEVICEMODEL_API
>  #include <xenctrl.h>
>  #include <xen/xen.h>
>  #include <xen/domctl.h>
>  #include <xen/hvm/save.h>

There are lots of headers that aren't used by the new code and can be
removed. (There were probably way too many headers included when this
utility was introduced.)

> +    for (vcpu_id = 0; vcpu_id <= dominfo.max_vcpu_id; vcpu_id++) {
> +        printf("Injecting #DF to vcpu ID #%d...\n", vcpu_id);
> +        ret = xc_hvm_inject_trap(xch, domid, vcpu_id,
> +                                X86_ABORT_DF,

In the definition of xendevicemodel_inject_event(), the comment says to
look at "enum x86_event_type" for the possible event "type" values, but
there's no "#DF" type. Can we add this new one there before using it?
(It's going to be something like X86_EVENTTYPE_*.)

> +                                XEN_DMOP_EVENT_hw_exc, 0,
> +                                NULL, NULL);

The new code doesn't build: those "NULL" arguments aren't integers.

> +        if (ret < 0) {
> +            fprintf(stderr, "Could not inject #DF to vcpu ID #%d\n", vcpu_id);
> +            perror("xc_hvm_inject_trap");

Are you meant to print two error lines when there's an error? You can
also use strerror() to convert an "errno" value to a string.

Also, perror() might print an error from fprintf() instead, if that
call failed and overwrote errno.
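One way to address both points is to fold the message into a single line
built with strerror(), reading the saved errno before any other call can
clobber it. A minimal sketch — format_inject_error() is a hypothetical
helper name, not something from the patch:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch: build one self-contained error line instead of pairing
 * fprintf() with perror().  strerror() is applied to a saved errno
 * value, so a failing fprintf() cannot clobber it first.
 */
static int format_inject_error(char *buf, size_t len, int vcpu_id, int err)
{
	return snprintf(buf, len, "Could not inject #DF to vcpu ID #%d: %s",
			vcpu_id, strerror(err));
}
```

The caller would capture errno right after the failing call returns, pass
it in, and emit the buffer with a single fprintf(stderr, "%s\n", buf).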

Are you meant to keep injecting into the other vCPUs even if one has
failed? Should `xen-hvmcrash` return success when it failed to inject
the double fault into all vCPUs?
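A sketch of one possible answer to both questions (the names are
hypothetical, not taken from the patch): keep iterating over the
remaining vCPUs on failure, but remember that something failed and
reflect it in the exit code. try_inject() stands in for the real
xc_hvm_inject_trap() call:

```c
#include <assert.h>
#include <stdio.h>

/* Keep attempting the remaining vCPUs when one injection fails, and
 * report overall failure through the return value so main() can exit
 * non-zero.  try_inject() is a stand-in for the real injection call. */
static int inject_all(int max_vcpu_id, int (*try_inject)(int vcpu_id))
{
	int vcpu_id, failures = 0;

	for (vcpu_id = 0; vcpu_id <= max_vcpu_id; vcpu_id++) {
		if (try_inject(vcpu_id) < 0) {
			fprintf(stderr, "injection failed for vcpu %d\n", vcpu_id);
			failures++;
		}
	}

	return failures ? 1 : 0;
}

/* Stand-in that fails for odd vCPU IDs, for demonstration only. */
static int fail_on_odd(int vcpu_id)
{
	return (vcpu_id & 1) ? -1 : 0;
}
```

Continuing past a per-vCPU failure maximizes the chance the guest still
crashes, while the non-zero exit code keeps the failure visible to
scripts.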

Thanks,

-- 

Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 16:09:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 16:09:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745385.1152502 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKgop-0002yO-1M; Fri, 21 Jun 2024 16:08:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745385.1152502; Fri, 21 Jun 2024 16:08:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKgoo-0002yH-UP; Fri, 21 Jun 2024 16:08:54 +0000
Received: by outflank-mailman (input) for mailman id 745385;
 Fri, 21 Jun 2024 16:08:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BRiO=NX=bounce.vates.tech=bounce-md_30504962.6675a590.v1-5926d64768ab4ac794063f0d056f783f@srs-se1.protection.inumbo.net>)
 id 1sKgon-0002wx-El
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 16:08:53 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8bdcac4f-2fe8-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 18:08:50 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W5MjX4ZP0z5QkLm5
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 16:08:48 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 5926d64768ab4ac794063f0d056f783f; Fri, 21 Jun 2024 16:08:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8bdcac4f-2fe8-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718986128; x=1719246628;
	bh=rk8qCgsNjgkmAX/pojUQbJPkI3R/bP9yagRBiEKnV0s=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=ZWvlm09KloYruoqr5ZxISN1mpApznROnS2qaMjWkjWUgl2WZ6geGeCyHk/WQgutNP
	 Hi0wtebQDwX9ZrYf2Ul9RI3IJRw6PmEOVxqx1gQk0AM24nMat8AQI3/RyZiIaW0x6t
	 F9Y3U72nU0NoMGXGXHUC7FRTGDHEVtr+U/XdIy7Bn4ZcIHDWe4WTQENLH7wYUo6c4M
	 wAAxRFqTM682bOfnu7iDxdBsx7EBRU+oSeUrFtRfVqYh8Ro9Bak/7aJQv7ZKXw9ntR
	 Mn7ro0m/lMNHPjPIv1zrwAtrpzNB9kPxUAUCBkX+qlwBXRdRXRoPEXu4swrN8EBDxa
	 NzMjccKh82OCg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718986128; x=1719246628; i=teddy.astie@vates.tech;
	bh=rk8qCgsNjgkmAX/pojUQbJPkI3R/bP9yagRBiEKnV0s=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=wXBHRiB2j/zmp7/N/IUbIJSCxYxvhsk+nuichfdQ2K6pPG534V3ANh6bsRCZeKkpG
	 IUB/+0XAwWlBNnc8uQNq55iH6coDn4tbvif8gEZBc0iMQY8EVC0Vz4N39HSNZltLcK
	 /6REDWjuOoBf40k7ge1t67+LGc9Ti/2FXWuswHjiHsG+XaULSfC1OrCkVRSmd9pZ/9
	 BSt+T+mA7GpQBVH1p7S77pBDnifd1x4jpKAFNTzUQvRfoZLPRSbkWfVXZUiyB7jXoM
	 2+dGht5vmSZ73RU23HJV4HfOvDQhgkNN0dCB2SJ3Wt52/renVsMnwugx30Q8xxqPEC
	 mqJikToK/KBpQ==
From: TSnake41 <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20PATCH=20v2]=20iommu/xen:=20Add=20Xen=20PV-IOMMU=20driver?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718986126719
To: xen-devel@lists.xenproject.org, iommu@lists.linux.dev
Cc: Teddy Astie <teddy.astie@vates.tech>, Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, Robin Murphy <robin.murphy@arm.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.5926d64768ab4ac794063f0d056f783f?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240621:md
Date: Fri, 21 Jun 2024 16:08:48 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

From: Teddy Astie <teddy.astie@vates.tech>

In the context of Xen, Linux runs as Dom0 and doesn't have access to the
machine IOMMU. However, an IOMMU is mandatory for some kernel features
such as VFIO or DMA protection.

We added a paravirtualized IOMMU to Xen, exposed through the iommu_op
hypercall, to allow Dom0 to implement such features. This commit introduces
a new IOMMU driver that uses this new hypercall interface.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changes since v1:
* formatting changes
* applied Jan Beulich's proposed changes: removed vim notes at the end of
pv-iommu.h
* applied Jason Gunthorpe's proposed changes: use new ops and remove
redundant checks
---
 arch/x86/include/asm/xen/hypercall.h |   6 +
 drivers/iommu/Kconfig                |   9 +
 drivers/iommu/Makefile               |   1 +
 drivers/iommu/xen-iommu.c            | 489 +++++++++++++++++++++++++++
 include/xen/interface/memory.h       |  33 ++
 include/xen/interface/pv-iommu.h     | 104 ++++++
 include/xen/interface/xen.h          |   1 +
 7 files changed, 643 insertions(+)
 create mode 100644 drivers/iommu/xen-iommu.c
 create mode 100644 include/xen/interface/pv-iommu.h

diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
index a2dd24947eb8..6b1857f27c14 100644
--- a/arch/x86/include/asm/xen/hypercall.h
+++ b/arch/x86/include/asm/xen/hypercall.h
@@ -490,6 +490,12 @@ HYPERVISOR_xenpmu_op(unsigned int op, void *arg)
 	return _hypercall2(int, xenpmu_op, op, arg);
 }
 
+static inline int
+HYPERVISOR_iommu_op(void *arg)
+{
+	return _hypercall1(int, iommu_op, arg);
+}
+
 static inline int
 HYPERVISOR_dm_op(
 	domid_t dom, unsigned int nr_bufs, struct xen_dm_op_buf *bufs)
diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
index 0af39bbbe3a3..242cefac77c9 100644
--- a/drivers/iommu/Kconfig
+++ b/drivers/iommu/Kconfig
@@ -480,6 +480,15 @@ config VIRTIO_IOMMU
 
 	  Say Y here if you intend to run this kernel as a guest.
 
+config XEN_IOMMU
+	bool "Xen IOMMU driver"
+	depends on XEN_DOM0
+	select IOMMU_API
+	help
+		Xen PV-IOMMU driver for Dom0.
+
+		Say Y here if you intend to run this kernel as Xen Dom0.
+
 config SPRD_IOMMU
 	tristate "Unisoc IOMMU Support"
 	depends on ARCH_SPRD || COMPILE_TEST
diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
index 542760d963ec..393afe22c901 100644
--- a/drivers/iommu/Makefile
+++ b/drivers/iommu/Makefile
@@ -30,3 +30,4 @@ obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
 obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o
 obj-$(CONFIG_SPRD_IOMMU) += sprd-iommu.o
 obj-$(CONFIG_APPLE_DART) += apple-dart.o
+obj-$(CONFIG_XEN_IOMMU) += xen-iommu.o
\ No newline at end of file
diff --git a/drivers/iommu/xen-iommu.c b/drivers/iommu/xen-iommu.c
new file mode 100644
index 000000000000..b765445d27cd
--- /dev/null
+++ b/drivers/iommu/xen-iommu.c
@@ -0,0 +1,489 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Xen PV-IOMMU driver.
+ *
+ * Copyright (C) 2024 Vates SAS
+ *
+ * Author: Teddy Astie <teddy.astie@vates.tech>
+ *
+ */
+
+#define pr_fmt(fmt)	"xen-iommu: " fmt
+
+#include <linux/kernel.h>
+#include <linux/init.h>
+#include <linux/types.h>
+#include <linux/iommu.h>
+#include <linux/dma-map-ops.h>
+#include <linux/pci.h>
+#include <linux/list.h>
+#include <linux/string.h>
+#include <linux/device/driver.h>
+#include <linux/slab.h>
+#include <linux/err.h>
+#include <linux/printk.h>
+#include <linux/stddef.h>
+#include <linux/spinlock.h>
+#include <linux/minmax.h>
+#include <linux/string.h>
+#include <asm/iommu.h>
+
+#include <xen/xen.h>
+#include <xen/page.h>
+#include <xen/interface/memory.h>
+#include <xen/interface/physdev.h>
+#include <xen/interface/pv-iommu.h>
+#include <asm/xen/hypercall.h>
+#include <asm/xen/page.h>
+
+MODULE_DESCRIPTION("Xen IOMMU driver");
+MODULE_AUTHOR("Teddy Astie <teddy.astie@vates.tech>");
+MODULE_LICENSE("GPL");
+
+#define MSI_RANGE_START		(0xfee00000)
+#define MSI_RANGE_END		(0xfeefffff)
+
+#define XEN_IOMMU_PGSIZES       (0x1000)
+
+struct xen_iommu_domain {
+	struct iommu_domain domain;
+
+	u16 ctx_no; /* Xen PV-IOMMU context number */
+};
+
+static struct iommu_device xen_iommu_device;
+
+static uint32_t max_nr_pages;
+static uint64_t max_iova_addr;
+
+static spinlock_t lock;
+
+static inline struct xen_iommu_domain *to_xen_iommu_domain(struct iommu_domain *dom)
+{
+	return container_of(dom, struct xen_iommu_domain, domain);
+}
+
+static inline u64 addr_to_pfn(u64 addr)
+{
+	return addr >> 12;
+}
+
+static inline u64 pfn_to_addr(u64 pfn)
+{
+	return pfn << 12;
+}
+
+bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
+{
+	switch (cap) {
+	case IOMMU_CAP_CACHE_COHERENCY:
+		return true;
+
+	default:
+		return false;
+	}
+}
+
+struct iommu_domain *xen_iommu_domain_alloc_paging(struct device *dev)
+{
+	struct xen_iommu_domain *domain;
+	int ret;
+
+	struct pv_iommu_op op = {
+		.ctx_no = 0,
+		.flags = 0,
+		.subop_id = IOMMUOP_alloc_context
+	};
+
+	ret = HYPERVISOR_iommu_op(&op);
+
+	if (ret) {
+		pr_err("Unable to create Xen IOMMU context (%d)", ret);
+		return ERR_PTR(ret);
+	}
+
+	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
+
+	domain->ctx_no = op.ctx_no;
+
+	domain->domain.geometry.aperture_start = 0;
+	domain->domain.geometry.aperture_end = max_iova_addr;
+	domain->domain.geometry.force_aperture = true;
+
+	return &domain->domain;
+}
+
+static struct iommu_device *xen_iommu_probe_device(struct device *dev)
+{
+	if (!dev_is_pci(dev))
+		return ERR_PTR(-ENODEV);
+
+	return &xen_iommu_device;
+}
+
+static void xen_iommu_probe_finalize(struct device *dev)
+{
+	set_dma_ops(dev, NULL);
+	iommu_setup_dma_ops(dev, 0, max_iova_addr);
+}
+
+static int xen_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
+			       phys_addr_t paddr, size_t pgsize, size_t pgcount,
+			       int prot, gfp_t gfp, size_t *mapped)
+{
+	size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_map_pages,
+		.flags = 0,
+		.ctx_no = dom->ctx_no
+	};
+	/* NOTE: paddr is actually bound to pfn, not gfn */
+	uint64_t pfn = addr_to_pfn(paddr);
+	uint64_t dfn = addr_to_pfn(iova);
+	int ret = 0;
+
+	//pr_info("Mapping to %lx %zu %zu paddr %x\n", iova, pgsize, pgcount, paddr);
+
+	if (prot & IOMMU_READ)
+		op.flags |= IOMMU_OP_readable;
+
+	if (prot & IOMMU_WRITE)
+		op.flags |= IOMMU_OP_writeable;
+
+	while (xen_pg_count) {
+		size_t to_map = min(xen_pg_count, max_nr_pages);
+		uint64_t gfn = pfn_to_gfn(pfn);
+
+		//pr_info("Mapping %lx-%lx at %lx-%lx\n", gfn, gfn + to_map - 1, dfn, dfn + to_map - 1);
+
+		op.map_pages.gfn = gfn;
+		op.map_pages.dfn = dfn;
+
+		op.map_pages.nr_pages = to_map;
+
+		ret = HYPERVISOR_iommu_op(&op);
+
+		//pr_info("map_pages.mapped = %u\n", op.map_pages.mapped);
+
+		if (mapped)
+			*mapped += XEN_PAGE_SIZE * op.map_pages.mapped;
+
+		if (ret)
+			break;
+
+		xen_pg_count -= to_map;
+
+		pfn += to_map;
+		dfn += to_map;
+	}
+
+	return ret;
+}
+
+static size_t xen_iommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
+				    size_t pgsize, size_t pgcount,
+				    struct iommu_iotlb_gather *iotlb_gather)
+{
+	size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_unmap_pages,
+		.ctx_no = dom->ctx_no,
+		.flags = 0,
+	};
+	uint64_t dfn = addr_to_pfn(iova);
+	int ret = 0;
+
+	if (WARN(!dom->ctx_no, "Tried to unmap page to default context"))
+		return -EINVAL;
+
+	while (xen_pg_count) {
+		size_t to_unmap = min(xen_pg_count, max_nr_pages);
+
+		//pr_info("Unmapping %lx-%lx\n", dfn, dfn + to_unmap - 1);
+
+		op.unmap_pages.dfn = dfn;
+		op.unmap_pages.nr_pages = to_unmap;
+
+		ret = HYPERVISOR_iommu_op(&op);
+
+		if (ret)
+			pr_warn("Unmap failure (%lx-%lx)\n", dfn, dfn + to_unmap - 1);
+
+		xen_pg_count -= to_unmap;
+
+		dfn += to_unmap;
+	}
+
+	return pgcount * pgsize;
+}
+
+int xen_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
+{
+	struct pci_dev *pdev;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_reattach_device,
+		.flags = 0,
+		.ctx_no = dom->ctx_no,
+	};
+
+	pdev = to_pci_dev(dev);
+
+	op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
+	op.reattach_device.dev.bus = pdev->bus->number;
+	op.reattach_device.dev.devfn = pdev->devfn;
+
+	return HYPERVISOR_iommu_op(&op);
+}
+
+static void xen_iommu_free(struct iommu_domain *domain)
+{
+	int ret;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+
+	if (dom->ctx_no != 0) {
+		struct pv_iommu_op op = {
+			.ctx_no = dom->ctx_no,
+			.flags = 0,
+			.subop_id = IOMMUOP_free_context
+		};
+
+		ret = HYPERVISOR_iommu_op(&op);
+
+		if (ret)
+			pr_err("Context %hu destruction failure\n", dom->ctx_no);
+	}
+
+	kfree(domain);
+}
+
+static phys_addr_t xen_iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
+{
+	int ret;
+	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
+
+	struct pv_iommu_op op = {
+		.ctx_no = dom->ctx_no,
+		.flags = 0,
+		.subop_id = IOMMUOP_lookup_page,
+	};
+
+	op.lookup_page.dfn = addr_to_pfn(iova);
+
+	ret = HYPERVISOR_iommu_op(&op);
+
+	if (ret)
+		return 0;
+
+	phys_addr_t page_addr = pfn_to_addr(gfn_to_pfn(op.lookup_page.gfn));
+
+	/* Consider non-aligned iova */
+	return page_addr + (iova & 0xFFF);
+}
+
+static void xen_iommu_get_resv_regions(struct device *dev, struct list_head *head)
+{
+	struct iommu_resv_region *reg;
+	struct xen_reserved_device_memory *entries;
+	struct xen_reserved_device_memory_map map;
+	struct pci_dev *pdev;
+	int ret, i;
+
+	pdev = to_pci_dev(dev);
+
+	reg = iommu_alloc_resv_region(MSI_RANGE_START,
+				      MSI_RANGE_END - MSI_RANGE_START + 1,
+				      0, IOMMU_RESV_MSI, GFP_KERNEL);
+
+	if (!reg)
+		return;
+
+	list_add_tail(&reg->list, head);
+
+	/* Map xen-specific entries */
+
+	/* First, get number of entries to map */
+	map.buffer = NULL;
+	map.nr_entries = 0;
+	map.flags = 0;
+
+	map.dev.pci.seg = pci_domain_nr(pdev->bus);
+	map.dev.pci.bus = pdev->bus->number;
+	map.dev.pci.devfn = pdev->devfn;
+
+	ret = HYPERVISOR_memory_op(XENMEM_reserved_device_memory_map, &map);
+
+	if (ret == 0)
+		/* No reserved region, nothing to do */
+		return;
+
+	if (ret != -ENOBUFS) {
+		pr_err("Unable to get reserved region count (%d)\n", ret);
+		return;
+	}
+
+	/* Assume a reasonable number of entries, otherwise, something is probably wrong */
+	if (WARN_ON(map.nr_entries > 256))
+		pr_warn("Xen reporting many reserved regions (%u)\n", map.nr_entries);
+
+	/* And finally get actual mappings */
+	entries = kcalloc(map.nr_entries, sizeof(struct xen_reserved_device_memory),
+					  GFP_KERNEL);
+
+	if (!entries) {
+		pr_err("No memory for map entries\n");
+		return;
+	}
+
+	map.buffer = entries;
+
+	ret = HYPERVISOR_memory_op(XENMEM_reserved_device_memory_map, &map);
+
+	if (ret != 0) {
+		pr_err("Unable to get reserved regions (%d)\n", ret);
+		kfree(entries);
+		return;
+	}
+
+	for (i = 0; i < map.nr_entries; i++) {
+		struct xen_reserved_device_memory entry = entries[i];
+
+		reg = iommu_alloc_resv_region(pfn_to_addr(entry.start_pfn),
+					      pfn_to_addr(entry.nr_pages),
+					      0, IOMMU_RESV_RESERVED, GFP_KERNEL);
+
+		if (!reg)
+			break;
+
+		list_add_tail(&reg->list, head);
+	}
+
+	kfree(entries);
+}
+
+static int default_domain_attach_dev(struct iommu_domain *domain,
+				     struct device *dev)
+{
+	int ret;
+	struct pci_dev *pdev;
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_reattach_device,
+		.flags = 0,
+		.ctx_no = 0 /* reattach device back to default context */
+	};
+
+	pdev = to_pci_dev(dev);
+
+	op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
+	op.reattach_device.dev.bus = pdev->bus->number;
+	op.reattach_device.dev.devfn = pdev->devfn;
+
+	ret = HYPERVISOR_iommu_op(&op);
+
+	if (ret)
+		pr_warn("Unable to release device %p\n", &op.reattach_device.dev);
+
+	return ret;
+}
+
+static struct iommu_domain default_domain = {
+	.ops = &(const struct iommu_domain_ops){
+		.attach_dev = default_domain_attach_dev
+	}
+};
+
+static struct iommu_ops xen_iommu_ops = {
+	.identity_domain = &default_domain,
+	.release_domain = &default_domain,
+	.capable = xen_iommu_capable,
+	.domain_alloc_paging = xen_iommu_domain_alloc_paging,
+	.probe_device = xen_iommu_probe_device,
+	.probe_finalize = xen_iommu_probe_finalize,
+	.device_group = pci_device_group,
+	.get_resv_regions = xen_iommu_get_resv_regions,
+	.pgsize_bitmap = XEN_IOMMU_PGSIZES,
+	.default_domain_ops = &(const struct iommu_domain_ops) {
+		.map_pages = xen_iommu_map_pages,
+		.unmap_pages = xen_iommu_unmap_pages,
+		.attach_dev = xen_iommu_attach_dev,
+		.iova_to_phys = xen_iommu_iova_to_phys,
+		.free = xen_iommu_free,
+	},
+};
+
+int __init xen_iommu_init(void)
+{
+	int ret;
+	struct pv_iommu_op op = {
+		.subop_id = IOMMUOP_query_capabilities
+	};
+
+	if (!xen_domain())
+		return -ENODEV;
+
+	/* Check if iommu_op is supported */
+	if (HYPERVISOR_iommu_op(&op) == -ENOSYS)
+		return -ENODEV; /* No Xen IOMMU hardware */
+
+	pr_info("Initialising Xen IOMMU driver\n");
+	pr_info("max_nr_pages=%d\n", op.cap.max_nr_pages);
+	pr_info("max_ctx_no=%d\n", op.cap.max_ctx_no);
+	pr_info("max_iova_addr=%llx\n", op.cap.max_iova_addr);
+
+	if (op.cap.max_ctx_no == 0) {
+		pr_err("Unable to use IOMMU PV driver (no context available)\n");
+		return -ENOTSUPP; /* Unable to use IOMMU PV ? */
+	}
+
+	if (xen_domain_type == XEN_PV_DOMAIN)
+		/* TODO: In a PV domain, due to the existing pfn-gfn mapping we need to
+		 * consider that under certain circumstances, we have:
+		 *   pfn_to_gfn(x + 1) != pfn_to_gfn(x) + 1
+		 *
+		 * In these cases, we would want to split the subop into several calls
+		 * (only doing the grouped operation when the mapping is actually contiguous).
+		 * Only the map operation would be affected, as unmap uses dfn, which
+		 * doesn't have this kind of mapping.
+		 *
+		 * Force single-page operations to work around this issue for now.
+		 */
+		max_nr_pages = 1;
+	else
+		/* With HVM domains, pfn_to_gfn is identity, there is no issue regarding this. */
+		max_nr_pages = op.cap.max_nr_pages;
+
+	max_iova_addr = op.cap.max_iova_addr;
+
+	spin_lock_init(&lock);
+
+	ret = iommu_device_sysfs_add(&xen_iommu_device, NULL, NULL, "xen-iommu");
+	if (ret) {
+		pr_err("Unable to add Xen IOMMU sysfs\n");
+		return ret;
+	}
+
+	ret = iommu_device_register(&xen_iommu_device, &xen_iommu_ops, NULL);
+	if (ret) {
+		pr_err("Unable to register Xen IOMMU device %d\n", ret);
+		iommu_device_sysfs_remove(&xen_iommu_device);
+		return ret;
+	}
+
+	/* swiotlb is redundant when IOMMU is active. */
+	x86_swiotlb_enable = false;
+
+	return 0;
+}
+
+void __exit xen_iommu_fini(void)
+{
+	pr_info("Unregistering Xen IOMMU driver\n");
+
+	iommu_device_unregister(&xen_iommu_device);
+	iommu_device_sysfs_remove(&xen_iommu_device);
+}
+
+module_init(xen_iommu_init);
+module_exit(xen_iommu_fini);
diff --git a/include/xen/interface/memory.h b/include/xen/interface/memory.h
index 1a371a825c55..c860acaf4b0e 100644
--- a/include/xen/interface/memory.h
+++ b/include/xen/interface/memory.h
@@ -10,6 +10,7 @@
 #ifndef __XEN_PUBLIC_MEMORY_H__
 #define __XEN_PUBLIC_MEMORY_H__
 
+#include <xen/interface/physdev.h>
 #include <linux/spinlock.h>
 
 /*
@@ -214,6 +215,38 @@ struct xen_add_to_physmap_range {
 };
 DEFINE_GUEST_HANDLE_STRUCT(xen_add_to_physmap_range);
 
+/*
+ * With some legacy devices, certain guest-physical addresses cannot safely
+ * be used for other purposes, e.g. to map guest RAM.  This hypercall
+ * enumerates those regions so the toolstack can avoid using them.
+ */
+#define XENMEM_reserved_device_memory_map   27
+struct xen_reserved_device_memory {
+    xen_pfn_t start_pfn;
+    xen_ulong_t nr_pages;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory);
+
+struct xen_reserved_device_memory_map {
+#define XENMEM_RDM_ALL 1 /* Request all regions (ignore dev union). */
+    /* IN */
+    uint32_t flags;
+    /*
+     * IN/OUT
+     *
+     * Gets set to the required number of entries when too low,
+     * signaled by error code -ERANGE.
+     */
+    unsigned int nr_entries;
+    /* OUT */
+    GUEST_HANDLE(xen_reserved_device_memory) buffer;
+    /* IN */
+    union {
+        struct physdev_pci_device pci;
+    } dev;
+};
+DEFINE_GUEST_HANDLE_STRUCT(xen_reserved_device_memory_map);
+
 /*
  * Returns the pseudo-physical memory map as it was when the domain
  * was started (specified by XENMEM_set_memory_map).
diff --git a/include/xen/interface/pv-iommu.h b/include/xen/interface/pv-iommu.h
new file mode 100644
index 000000000000..8a8d366e5f4c
--- /dev/null
+++ b/include/xen/interface/pv-iommu.h
@@ -0,0 +1,104 @@
+/* SPDX-License-Identifier: MIT */
+/******************************************************************************
+ * pv-iommu.h
+ *
+ * Paravirtualized IOMMU driver interface.
+ *
+ * Copyright (c) 2024 Teddy Astie <teddy.astie@vates.tech>
+ */
+
+#ifndef __XEN_PUBLIC_PV_IOMMU_H__
+#define __XEN_PUBLIC_PV_IOMMU_H__
+
+#include "xen.h"
+#include "physdev.h"
+
+#define IOMMU_DEFAULT_CONTEXT (0)
+
+/**
+ * Query PV-IOMMU capabilities for this domain.
+ */
+#define IOMMUOP_query_capabilities    1
+
+/**
+ * Allocate an IOMMU context; the new context handle is written to ctx_no.
+ */
+#define IOMMUOP_alloc_context         2
+
+/**
+ * Destroy an IOMMU context.
+ * All devices attached to this context are reattached to the default context.
+ *
+ * The default context (0) cannot be destroyed.
+ */
+#define IOMMUOP_free_context          3
+
+/**
+ * Reattach a device to another IOMMU context.
+ */
+#define IOMMUOP_reattach_device       4
+
+#define IOMMUOP_map_pages             5
+#define IOMMUOP_unmap_pages           6
+
+/**
+ * Get the GFN associated with a specific DFN.
+ */
+#define IOMMUOP_lookup_page           7
+
+struct pv_iommu_op {
+    uint16_t subop_id;
+    uint16_t ctx_no;
+
+/**
+ * Create a context that is cloned from default.
+ * The new context will be populated with 1:1 mappings covering the entire guest memory.
+ */
+#define IOMMU_CREATE_clone (1 << 0)
+
+#define IOMMU_OP_readable (1 << 0)
+#define IOMMU_OP_writeable (1 << 1)
+    uint32_t flags;
+
+    union {
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+            /* Number of pages to map */
+            uint32_t nr_pages;
+            /* Number of pages actually mapped after sub-op */
+            uint32_t mapped;
+        } map_pages;
+
+        struct {
+            uint64_t dfn;
+            /* Number of pages to unmap */
+            uint32_t nr_pages;
+            /* Number of pages actually unmapped after sub-op */
+            uint32_t unmapped;
+        } unmap_pages;
+
+        struct {
+            struct physdev_pci_device dev;
+        } reattach_device;
+
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+        } lookup_page;
+
+        struct {
+            /* Maximum number of IOMMU contexts this domain can use. */
+            uint16_t max_ctx_no;
+            /* Maximum number of pages that can be modified in a single map/unmap operation. */
+            uint32_t max_nr_pages;
+            /* Maximum device address (iova) that the guest can use for mappings. */
+            uint64_t max_iova_addr;
+        } cap;
+    };
+};
+
+typedef struct pv_iommu_op pv_iommu_op_t;
+DEFINE_GUEST_HANDLE_STRUCT(pv_iommu_op_t);
+
+#endif
diff --git a/include/xen/interface/xen.h b/include/xen/interface/xen.h
index 0ca23eca2a9c..8b1daf3fecc6 100644
--- a/include/xen/interface/xen.h
+++ b/include/xen/interface/xen.h
@@ -65,6 +65,7 @@
 #define __HYPERVISOR_xc_reserved_op       39 /* reserved for XenClient */
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
+#define __HYPERVISOR_iommu_op             43
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 16:17:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 16:17:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Demi Marie Obenour <demi@invisiblethingslab.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
Date: Fri, 21 Jun 2024 17:16:56 +0100
Message-Id: <20240621161656.63576-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

`xl devd` has been observed leaking /var/log/xldevd.log into children.

Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
after setting up stdout/stderr, it's only the logfile fd which will close on
exec().

Link: https://github.com/QubesOS/qubes-issues/issues/8292
Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony@xenproject.org>
CC: Juergen Gross <jgross@suse.com>
CC: Demi Marie Obenour <demi@invisiblethingslab.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Also entirely speculative based on the QubesOS ticket.

v2:
 * Extend the commit message to explain why stdout/stderr aren't closed by
   this change

For 4.19.  This bugfix was posted earlier, but fell between the cracks.
---
 tools/xl/xl_utils.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
index 17489d182954..060186db3a59 100644
--- a/tools/xl/xl_utils.c
+++ b/tools/xl/xl_utils.c
@@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
         exit(-1);
     }
 
-    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
+    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));
     free(fullname);
     assert(logfile >= 3);
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 16:31:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 16:31:32 +0000
Date: Fri, 21 Jun 2024 18:31:14 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
	Demi Marie Obenour <demi@invisiblethingslab.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
Message-ID: <ZnWq0lzbE3Rxjwna@mail-itl>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20240621161656.63576-1-andrew.cooper3@citrix.com>


On Fri, Jun 21, 2024 at 05:16:56PM +0100, Andrew Cooper wrote:
> `xl devd` has been observed leaking /var/log/xldevd.log into children.
> 
> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
> after setting up stdout/stderr, it's only the logfile fd which will close on
> exec().
> 
> Link: https://github.com/QubesOS/qubes-issues/issues/8292
> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

> ---
> CC: Anthony PERARD <anthony@xenproject.org>
> CC: Juergen Gross <jgross@suse.com>
> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> Also entirely speculative based on the QubesOS ticket.
> 
> v2:
>  * Extend the commit message to explain why stdout/stderr aren't closed by
>    this change
> 
> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
> ---
>  tools/xl/xl_utils.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
> index 17489d182954..060186db3a59 100644
> --- a/tools/xl/xl_utils.c
> +++ b/tools/xl/xl_utils.c
> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
>          exit(-1);
>      }
>  
> -    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
> +    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));
>      free(fullname);
>      assert(logfile >= 3);
>  
> -- 
> 2.39.2
> 

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 16:34:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 16:34:54 +0000
Message-ID: <dec0441e-c66d-44cd-86cd-7b914320b9c9@citrix.com>
Date: Fri, 21 Jun 2024 17:34:47 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 0/4] xen/xlat: Improvements to compat hypercall
 checking
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich
 <JBeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240415154155.2718064-1-andrew.cooper3@citrix.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <20240415154155.2718064-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 15/04/2024 4:41 pm, Andrew Cooper wrote:
> This started off as patch 3, and grew somewhat.
>
> Patches 1-3 are simple and hopefully non-controversial.
>
> Patch 4 is an attempt to make the headers less fragile, but came with an
> unexpected complication.  Details in the patch.
>
> Andrew Cooper (4):
>   xen/xlat: Sort out whitespace
>   xen/xlat: Sort structs per file
>   xen/gnttab: Perform compat/native gnttab_query_size check

I'm timing out waiting for a justification on the whitespace comment.

Oleksii: Can I get a release ack on this please?  Patch 3 is the main
bugfix, which is the insertion of a missing build-time cross-check, so
it's very low risk for the release.

Patch 4 was always RFC and not intended to go in as-was.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 16:55:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 16:55:22 +0000
X-Inumbo-ID: 067fb639-2fef-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1718988910; x=1719249410;
	bh=mZ9ZjnfEi86A0eJMbTo9hmK9z6gDRUUYORm3zdn0tFk=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=JoSj4dbolwJDIQXSxBCq5uvT7lkpZ0MRxVFzCJuLm5BDpAFunDDrxYip75FjR1A89
	 e2kkkRMKwqCDH23ppPYT1dKVjYtmwqP3mwWdhFvmbRn4F475ZCzPSGobEnXOj4RgtK
	 uMs07SP0+LeEdScyfevMMhKMylogWG4i4QElT6X2bOluJ/YHQBon8HeO0gEsfPMvZ9
	 F7CjbiOh/GraXO0blPHIkFpkGfYiVSbRknafeN1pX9D7t4XYfi/hTVfwfRoc/Bb1KY
	 ql899byLzlE91LgZ6kJZZh4aXf91HhyiOsTTpsZA0T7K3Z8lHcopM0+nPNzQFoByeX
	 Z7TofFl13Uz3w==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1718988910; x=1719249410; i=anthony.perard@vates.tech;
	bh=mZ9ZjnfEi86A0eJMbTo9hmK9z6gDRUUYORm3zdn0tFk=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=YTWcOlzewe06vGnvk3lUhdzmR3TPAMzeTHsE67/d3Z2qTG8Zx2NkhGzaRrnNO/e4f
	 Ba15wbIG7+lyxr5h3jgnEWxSgT31bid+nbYNNAmlyVH5WSbdQgXOSm+ntDJxCi4NsE
	 uHR+dS2VJ2StkhDqMf9H1K7ohJ/P3OtIo+9jxwr8d07BHVVQ13RliGCs9Yz0jltINb
	 0Ccto6OBflplaYySoqF3I1EW9hHEpl+NFT462zi7gDdf5gcL8NbD1aJOQvfbSAKFi3
	 KKquCutTeeA3hpW1uNqZB3c3zHg8nL9lJrQTksALydECSZHHC6O119Q/mPYuqJfLvr
	 ewc8idELVo7Pg==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20for-4.19=20v2]=20tools/xl:=20Open=20xldevd.log=20with=20O=5FCLOEXEC?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1718988909523
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Demi Marie Obenour <demi@invisiblethingslab.com>, Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Message-Id: <ZnWwbJiD6eG85VY9@l14>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
In-Reply-To: <20240621161656.63576-1-andrew.cooper3@citrix.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.4f43be8da03145faae47ee3103af9c23?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240621:md
Date: Fri, 21 Jun 2024 16:55:10 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On Fri, Jun 21, 2024 at 05:16:56PM +0100, Andrew Cooper wrote:
> `xl devd` has been observed leaking /var/log/xldevd.log into children.
> 
> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
> after setting up stdout/stderr, it's only the logfile fd which will close on
> exec().
> 
> Link: https://github.com/QubesOS/qubes-issues/issues/8292
> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Anthony PERARD <anthony@xenproject.org>
> CC: Juergen Gross <jgross@suse.com>
> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> Also entirely speculative based on the QubesOS ticket.
> 
> v2:
>  * Extend the commit message to explain why stdout/stderr aren't closed by
>    this change
> 
> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
> ---
>  tools/xl/xl_utils.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
> index 17489d182954..060186db3a59 100644
> --- a/tools/xl/xl_utils.c
> +++ b/tools/xl/xl_utils.c
> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
>          exit(-1);
>      }
>  
> -    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
> +    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));

Every time we use O_CLOEXEC, we add in the C file
    #ifndef O_CLOEXEC
    #define O_CLOEXEC 0
    #endif
Don't we need to do that anymore?
Or I guess we'll see if someone complains when they try to build on an
ancient version of Linux.

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 

Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 16:59:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 16:59:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745417.1152551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKhbT-00050o-Kg; Fri, 21 Jun 2024 16:59:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745417.1152551; Fri, 21 Jun 2024 16:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKhbT-00050h-Hy; Fri, 21 Jun 2024 16:59:11 +0000
Received: by outflank-mailman (input) for mailman id 745417;
 Fri, 21 Jun 2024 16:59:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKhbR-00050b-Dn
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 16:59:09 +0000
Received: from mail-qk1-x72a.google.com (mail-qk1-x72a.google.com
 [2607:f8b0:4864:20::72a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 92bdb3a2-2fef-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 18:59:08 +0200 (CEST)
Received: by mail-qk1-x72a.google.com with SMTP id
 af79cd13be357-7955e1bf5f8so141669785a.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 09:59:08 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce89af16sm101789985a.2.2024.06.21.09.59.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 21 Jun 2024 09:59:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92bdb3a2-2fef-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718989147; x=1719593947; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=oLkshFS3LRs1A7FC6l5i5G7juFdfY0j03SZUdzSQzuk=;
        b=W0BReziIvnrtwwbR4yZo1zneKbd5zNoAuNJ49Ci70jGmhUr7CV5sBHMHKXM8EkEaIl
         F38i82gATmuJws+QqEX6FoUwfdIDnbE7FyOWPhJqWbFsampGoKOGt4HjVlPtNEed5buN
         MGf9EgI4pOmDseVZLmW3ngClGhqYrA18h2jg4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718989147; x=1719593947;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=oLkshFS3LRs1A7FC6l5i5G7juFdfY0j03SZUdzSQzuk=;
        b=ofOdRr58x5VGDa0CHhQ5reQn97R2ruw0izLx0T/suq21Vtjgi9iNh3rZEkEdePqcOx
         8Ofjf1urMuxXjwvdre/wVOiDk/sJNeNKN11qotwmGvG36HJlp/NidEYtCXiEfAHbwpqZ
         k8dyTU+uyIql3hqk+GQ2DOsY1nC86e7m/GnkdsxF5+HOmKFNSZnB4+2AjEsQsd+1yxGO
         fHqTx1Cc0o2dfdl7filvYw2eYLEwhNWxq8wK+zpEnaRqXtKuhUSIV3yV8fPtttL5eNjR
         6Rt8GqBv2KXOskD5ZFj51/U2w9RgRgGNEiIz+DM7j9G+3PanCzUho7u35eKuob8+1+45
         HtRA==
X-Gm-Message-State: AOJu0YzAbcyp8887Nph8gtGh+xKYE6JjYoiGEhF+3zcxIPmUcWjfeDm1
	Bfy3CTv0bUoVven8yd/0DHWBCpf/83rWPVF7LWBYxxk96JV0VKK4eMqsoJdquFc=
X-Google-Smtp-Source: AGHT+IE9TwKM16z+70T4ot2UcKfB+6/9rE9/UvcrdS8OB2Ch7Bvr+8KCFcjZG3uxMdmPq2jKuKExQQ==
X-Received: by 2002:a05:620a:4056:b0:795:75ae:b9e4 with SMTP id af79cd13be357-79bb3e1198amr1044819585a.3.1718989147173;
        Fri, 21 Jun 2024 09:59:07 -0700 (PDT)
Message-ID: <e6262049-21b4-4eb7-9f29-6d5983bfb212@citrix.com>
Date: Fri, 21 Jun 2024 17:59:04 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
To: Anthony PERARD <anthony.perard@vates.tech>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Demi Marie Obenour <demi@invisiblethingslab.com>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
 <ZnWwbJiD6eG85VY9@l14>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <ZnWwbJiD6eG85VY9@l14>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21/06/2024 5:55 pm, Anthony PERARD wrote:
> On Fri, Jun 21, 2024 at 05:16:56PM +0100, Andrew Cooper wrote:
>> `xl devd` has been observed leaking /var/log/xldevd.log into children.
>>
>> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
>> after setting up stdout/stderr, it's only the logfile fd which will close on
>> exec().
>>
>> Link: https://github.com/QubesOS/qubes-issues/issues/8292
>> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Anthony PERARD <anthony@xenproject.org>
>> CC: Juergen Gross <jgross@suse.com>
>> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
>> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>
>> Also entirely speculative based on the QubesOS ticket.
>>
>> v2:
>>  * Extend the commit message to explain why stdout/stderr aren't closed by
>>    this change
>>
>> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
>> ---
>>  tools/xl/xl_utils.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
>> index 17489d182954..060186db3a59 100644
>> --- a/tools/xl/xl_utils.c
>> +++ b/tools/xl/xl_utils.c
>> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
>>          exit(-1);
>>      }
>>  
>> -    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
>> +    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));
> Every time we use O_CLOEXEC, we add in the C file
>     #ifndef O_CLOEXEC
>     #define O_CLOEXEC 0
>     #endif
> Don't we need to do that anymore?
> Or I guess we'll see if someone complains when they try to build on an
> ancient version of Linux.
>
> Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks.  I did originally have that ifdeffery here, but then I noticed
that this isn't the first instance like this in xl, and I'm going to be
using that as a justification soon to explicitly drop support for Linux
< 2.6.23.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 18:07:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 18:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745443.1152572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifA-0007ar-RG; Fri, 21 Jun 2024 18:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745443.1152572; Fri, 21 Jun 2024 18:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifA-0007aj-OJ; Fri, 21 Jun 2024 18:07:04 +0000
Received: by outflank-mailman (input) for mailman id 745443;
 Fri, 21 Jun 2024 18:07:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKif9-0007MK-JC
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 18:07:03 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0eeb590d-2ff9-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 20:07:01 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec50d4e46aso7560711fa.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 11:07:01 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30427976sm1244537a12.33.2024.06.21.11.07.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 11:07:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0eeb590d-2ff9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718993220; x=1719598020; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8mPJ8Fzs9EItfU15q5evCDIy3pVc6ZsOkwV745HGMWA=;
        b=wbIgOqqUiCtrrwJHJ2BWE/V0JpVIpAiUBRXrY1f2LCWXvPQDn4GWgpEu0ZYpdDkkIp
         6J8tfuiayvTCtqULMQATjmrXAKFEO+Ryq9fTO0oDaFdInSpiu4VgXV41DTjV8xFlxTXv
         MsiyUrQbWO/wfUS/o2Eg3s1jl5nNqjBNVRNK8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718993220; x=1719598020;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=8mPJ8Fzs9EItfU15q5evCDIy3pVc6ZsOkwV745HGMWA=;
        b=OpphdFk7TC8hc1nM9PUDw+dEH50nZOB7sz6xdEVgBXsMBo0rYHW5w/xNxcnzeCh3QW
         TopHZt2V3PlkVUHYdEz5CU76mbNbmVBsbFzAeWgvJdGfHL+Ntfg/RNycOGalW47Q0RHf
         jIZp2U/RnjrCOZ1agP89DD28UqRWqwZyhe4CvFZXcELm1x07uaNjH15wVKEdJGtTVbkW
         gIqc2/VkZt4Sg4EVlqfvicS8lj7SleudsKdkc07f4np2HWnkPhBMfPP+v8OtA5As1QDu
         aEmVL7BD9SXY/LYWlSCzs3cGgvNHEwm1lq1EazE3FhdyLc6DkS8HXeMvA4EM/j2LzV4p
         8blQ==
X-Gm-Message-State: AOJu0YxiPU5XhB3KxePskbx6rPIdXqHmTlEny21cno3/UEVnoA3/tyXx
	zz86f+f265Ln4JSu/FHUBp7ez8YPY9G+UVoeLxh+j2iackcspz/49imyd/dhK8ED8hRbz/kyDx5
	e7hQ=
X-Google-Smtp-Source: AGHT+IFYv6awnyCy9QB0BoYbnXNhnIx2dtX3UZAotPVVtYzSxt/xum4ZTkvHGAHU6ckhIcbbyLOOqA==
X-Received: by 2002:a2e:95c4:0:b0:2ec:4fec:812c with SMTP id 38308e7fff4ca-2ec4fec83bdmr14441201fa.44.1718993220565;
        Fri, 21 Jun 2024 11:07:00 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v3 1/4] x86/shadow: Rework trace_shadow_gen() into sh_trace_va()
Date: Fri, 21 Jun 2024 19:06:55 +0100
Message-Id: <20240621180658.92831-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621180658.92831-1-andrew.cooper3@citrix.com>
References: <20240621180658.92831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The ((GUEST_PAGING_LEVELS - 2) << 8) expression in the event field is common
to all shadow trace events, so introduce sh_trace() as a very thin wrapper
around trace().

Then, rename trace_shadow_gen() to sh_trace_va() to better describe what it is
doing, and to be more consistent with later cleanup.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v2:
 * New
---
 xen/arch/x86/mm/shadow/multi.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index bcd02b2d0037..1775952d7e18 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1974,13 +1974,17 @@ typedef u32 guest_va_t;
 typedef u32 guest_pa_t;
 #endif
 
-static inline void trace_shadow_gen(u32 event, guest_va_t va)
+/* Shadow trace event with GUEST_PAGING_LEVELS folded into the event field. */
+static void sh_trace(uint32_t event, unsigned int extra, const void *extra_data)
+{
+    trace(event | ((GUEST_PAGING_LEVELS - 2) << 8), extra, extra_data);
+}
+
+/* Shadow trace event with the guest's linear address. */
+static void sh_trace_va(uint32_t event, guest_va_t va)
 {
     if ( tb_init_done )
-    {
-        event |= (GUEST_PAGING_LEVELS-2)<<8;
-        trace(event, sizeof(va), &va);
-    }
+        sh_trace(event, sizeof(va), &va);
 }
 
 static inline void trace_shadow_fixup(guest_l1e_t gl1e,
@@ -2239,7 +2243,7 @@ static int cf_check sh_page_fault(
                 sh_reset_early_unshadow(v);
                 perfc_incr(shadow_fault_fast_gnp);
                 SHADOW_PRINTK("fast path not-present\n");
-                trace_shadow_gen(TRC_SHADOW_FAST_PROPAGATE, va);
+                sh_trace_va(TRC_SHADOW_FAST_PROPAGATE, va);
                 return 0;
             }
 #ifdef CONFIG_HVM
@@ -2250,7 +2254,7 @@ static int cf_check sh_page_fault(
             perfc_incr(shadow_fault_fast_mmio);
             SHADOW_PRINTK("fast path mmio %#"PRIpaddr"\n", gpa);
             sh_reset_early_unshadow(v);
-            trace_shadow_gen(TRC_SHADOW_FAST_MMIO, va);
+            sh_trace_va(TRC_SHADOW_FAST_MMIO, va);
             return handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
                    ? EXCRET_fault_fixed : 0;
 #else
@@ -2265,7 +2269,7 @@ static int cf_check sh_page_fault(
              * Retry and let the hardware give us the right fault next time. */
             perfc_incr(shadow_fault_fast_fail);
             SHADOW_PRINTK("fast path false alarm!\n");
-            trace_shadow_gen(TRC_SHADOW_FALSE_FAST_PATH, va);
+            sh_trace_va(TRC_SHADOW_FALSE_FAST_PATH, va);
             return EXCRET_fault_fixed;
         }
     }
@@ -2481,7 +2485,7 @@ static int cf_check sh_page_fault(
 #endif
         paging_unlock(d);
         put_gfn(d, gfn_x(gfn));
-        trace_shadow_gen(TRC_SHADOW_DOMF_DYING, va);
+        sh_trace_va(TRC_SHADOW_DOMF_DYING, va);
         return 0;
     }
 
@@ -2569,7 +2573,7 @@ static int cf_check sh_page_fault(
         put_gfn(d, gfn_x(gfn));
 
         perfc_incr(shadow_fault_mmio);
-        trace_shadow_gen(TRC_SHADOW_MMIO, va);
+        sh_trace_va(TRC_SHADOW_MMIO, va);
 
         return handle_mmio_with_translation(va, gpa >> PAGE_SHIFT, access)
                ? EXCRET_fault_fixed : 0;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 18:07:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 18:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745446.1152601 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifD-0008IM-FG; Fri, 21 Jun 2024 18:07:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745446.1152601; Fri, 21 Jun 2024 18:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifD-0008Ht-Bo; Fri, 21 Jun 2024 18:07:07 +0000
Received: by outflank-mailman (input) for mailman id 745446;
 Fri, 21 Jun 2024 18:07:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKifB-0007MK-JO
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 18:07:05 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0fd321fc-2ff9-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 20:07:03 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id
 4fb4d7f45d1cf-57cc30eaf0aso1308250a12.2
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 11:07:03 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30427976sm1244537a12.33.2024.06.21.11.07.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 11:07:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fd321fc-2ff9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718993222; x=1719598022; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sJfO+iaO+pfUPdCT3G8BrrqFYQcfAnWPZ8F2JImaFRY=;
        b=Ui7LRpRk7xsqn54aPNmAkvxv1FqjaaHKf4FxJ4o3yPcyHb8j7H/2JGvCtd9WRfSMC9
         wY21ASgGjvCi3AOrYv7CwcXYd6FUwNtTJ6ylltFxusPbbECy1uzpGb3IM6muWULxbrKe
         nDBAI7NGUaoWfFDsj3ZlRWoSRkssuT3N1882s=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718993222; x=1719598022;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=sJfO+iaO+pfUPdCT3G8BrrqFYQcfAnWPZ8F2JImaFRY=;
        b=liiCrUYlwELBGNyuvLftuCRag6IA24fFHhrLTUrCMDFR73TqtTA7FdilfwdC3rqeuV
         bRj+XpBzOggfewenUcddTTzxaDsZrJkSI/ZxkqzLVbQ9lM1ldO/JZQt5PDoHFwPfMMff
         LEX76XGneW46Q/WPef25KFn+IS5hLivmr63t9+i73uWdp1aoI7ZwDSKORwlwEB+2E4cE
         C4cst9vYFB7TMLyx+G1TZM2rt3AP0fagyfUzx+cSrxWxN3OLKwp1Yd6xcTHbOTgewtnn
         TpTZ+Pf3/N3yMYfGrVcc3ieux3iAi84FRu5i0B47R+ChmmeaLJvxwFUqFSm2KPJFj/6z
         gYOA==
X-Gm-Message-State: AOJu0YwG6ZaJ5rsX5yEgj+A017DWBju6v7GyA5sp8+oZsI5ELPhJdx6X
	WSBDMqH+k4T2wSaDDZOu455u9lGl1GmR+wun85TkjlMSIvCx666+0vyr5X5UGBTQQsaZ5pFYGat
	7RZk=
X-Google-Smtp-Source: AGHT+IGvSALMqd/YQW33QFl5caOW9BRj5CwiB10ur4Cn4l+9ONCotJBdJrtSAKbRqC5sGP/e97gkSA==
X-Received: by 2002:a50:d7dc:0:b0:57d:3b8:85e6 with SMTP id 4fb4d7f45d1cf-57d31c9542emr1963784a12.39.1718993222170;
        Fri, 21 Jun 2024 11:07:02 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v3 3/4] x86/shadow: Rework trace_shadow_emulate_other() as sh_trace_gfn_va()
Date: Fri, 21 Jun 2024 19:06:57 +0100
Message-Id: <20240621180658.92831-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621180658.92831-1-andrew.cooper3@citrix.com>
References: <20240621180658.92831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

sh_trace_gfn_va() is very similar to sh_trace_gl1e_va(), and has a rather
shorter name than trace_shadow_emulate_other().

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v2:
 * New
v3:
 * Retain __packed.
---
 xen/arch/x86/mm/shadow/multi.c | 38 ++++++++++++++++------------------
 1 file changed, 18 insertions(+), 20 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 75250c6f0f7c..7f95d50be397 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2010,29 +2010,30 @@ static void sh_trace_gl1e_va(uint32_t event, guest_l1e_t gl1e, guest_va_t va)
     }
 }
 
-static inline void trace_shadow_emulate_other(u32 event,
-                                                 guest_va_t va,
-                                                 gfn_t gfn)
+/* Shadow trace event with a gfn, linear address and flags. */
+static void sh_trace_gfn_va(uint32_t event, gfn_t gfn, guest_va_t va)
 {
     if ( tb_init_done )
     {
         struct __packed {
-            /* for PAE, guest_l1e may be 64 while guest_va may be 32;
-               so put it first for alignment sake. */
+            /*
+             * For GUEST_PAGING_LEVELS=3 (PAE paging), gfn is 64 while
+             * guest_va is 32.  Put it first to avoid padding.
+             */
 #if GUEST_PAGING_LEVELS == 2
-            u32 gfn;
+            uint32_t gfn;
 #else
-            u64 gfn;
+            uint64_t gfn;
 #endif
             guest_va_t va;
-        } d;
-
-        event |= ((GUEST_PAGING_LEVELS-2)<<8);
-
-        d.gfn=gfn_x(gfn);
-        d.va = va;
+            uint32_t flags;
+        } d = {
+            .gfn   = gfn_x(gfn),
+            .va    = va,
+            .flags = this_cpu(trace_shadow_path_flags),
+        };
 
-        trace(event, sizeof(d), &d);
+        sh_trace(event, sizeof(d), &d);
     }
 }
 
@@ -2603,8 +2604,7 @@ static int cf_check sh_page_fault(
                       mfn_x(gmfn));
         perfc_incr(shadow_fault_emulate_failed);
         shadow_remove_all_shadows(d, gmfn);
-        trace_shadow_emulate_other(TRC_SHADOW_EMULATE_UNSHADOW_USER,
-                                      va, gfn);
+        sh_trace_gfn_va(TRC_SHADOW_EMULATE_UNSHADOW_USER, gfn, va);
         goto done;
     }
 
@@ -2683,8 +2683,7 @@ static int cf_check sh_page_fault(
         }
 #endif
         shadow_remove_all_shadows(d, gmfn);
-        trace_shadow_emulate_other(TRC_SHADOW_EMULATE_UNSHADOW_EVTINJ,
-                                   va, gfn);
+        sh_trace_gfn_va(TRC_SHADOW_EMULATE_UNSHADOW_EVTINJ, gfn, va);
         return EXCRET_fault_fixed;
     }
 
@@ -2739,8 +2738,7 @@ static int cf_check sh_page_fault(
          * though, this is a hint that this page should not be shadowed. */
         shadow_remove_all_shadows(d, gmfn);
 
-        trace_shadow_emulate_other(TRC_SHADOW_EMULATE_UNSHADOW_UNHANDLED,
-                                   va, gfn);
+        sh_trace_gfn_va(TRC_SHADOW_EMULATE_UNSHADOW_UNHANDLED, gfn, va);
         goto emulate_done;
     }
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 18:07:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 18:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745442.1152563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKif9-0007MX-M6; Fri, 21 Jun 2024 18:07:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745442.1152563; Fri, 21 Jun 2024 18:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKif9-0007MQ-HQ; Fri, 21 Jun 2024 18:07:03 +0000
Received: by outflank-mailman (input) for mailman id 745442;
 Fri, 21 Jun 2024 18:07:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKif8-0007MK-Tg
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 18:07:02 +0000
Received: from mail-ed1-x531.google.com (mail-ed1-x531.google.com
 [2a00:1450:4864:20::531])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e8f3bc5-2ff9-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 20:07:01 +0200 (CEST)
Received: by mail-ed1-x531.google.com with SMTP id
 4fb4d7f45d1cf-57d05e0017aso2873686a12.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 11:07:01 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30427976sm1244537a12.33.2024.06.21.11.06.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 11:06:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e8f3bc5-2ff9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718993220; x=1719598020; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=D8ccN9jLjGra6aXThf/jLLz5OFdTyTcqpVE1RmNhvHI=;
        b=gfpLtprpybpzuB0dgnRkE9ZEsHUI9uqGvGWM4SLNTMgjk4ktTOLB/Wfs6jDTH+zkMF
         2KCZ+Hbnfa+AWIedmS4tDOCNvBF4lljxvTmDuaWRcYKMSVNcUb1VAdOjrVuiLICyeMZq
         87c2+zJoEM9et9BvIXeBqf0m/jzZNzBSfU7yw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718993220; x=1719598020;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=D8ccN9jLjGra6aXThf/jLLz5OFdTyTcqpVE1RmNhvHI=;
        b=ETV305S+ShP9Gq032uL68UaqntTWN/Z3M2jXyntiXucyPa2JST1giTnDsIWqSazZA5
         Y7WklopGqcrFtlmtXJL2hhED9XS7byeEBpJK22AvuUKZXjt0+LDtGzor68sgse02mhDw
         vJ8Jsuzeoik1afoNzEU8aBt7gh5oqw0C36HMi4SZjV8j0m/twh7pAE5YwzYUNyy3RXvD
         j1vW54iWLHySNQdMzgBXuWwFwrL1wB8WOyp5w1B7TQ4YdvZp2CeLHhL9AFKDetjBhFnM
         0ucyz+R+NBmlbzWpMsEgJ8hZW8WcQnmnlypL3ZfAi1MseFyotaAI4kL6lbTj+7lUjQ85
         tnaw==
X-Gm-Message-State: AOJu0Yx491av8b+D/sU6UhhY8bNZ8isYX2blESVkyL0DIoBHulzOmX6a
	jBWTwEi0QsJaZi42JyOUfZaMZAk8o73CHnlDlQVDLfYB9EgkcwIb96qB5J1wWgy/QdGYvdp4V1D
	Sg6k=
X-Google-Smtp-Source: AGHT+IHyf6QK4xAYo7do/Gh0XVopYETjVy3/AHuBLzrWRis340tLaYotXmQR+e0l52QvTeM71QEPuw==
X-Received: by 2002:a50:d558:0:b0:57d:1627:93ed with SMTP id 4fb4d7f45d1cf-57d1627946amr5666420a12.22.1718993219976;
        Fri, 21 Jun 2024 11:06:59 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v3 0/4] x86/shadow: Trace fixes and cleanup
Date: Fri, 21 Jun 2024 19:06:54 +0100
Message-Id: <20240621180658.92831-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Patches 1-3 were my review feedback to Jan's patch 4.

For 4.19.  Patch 4 (the bugfix) was Release-Acked after I posted this series
(the cleanup plus the rebased bugfix), which suggests you're happy with it in
principle, but I can't treat that as an implicit release ack on the whole series.

It's a tracing fix, so it poses minimal risk to the 4.19 release.

v3:
 * Retain __packed attribute.

Andrew Cooper (3):
  x86/shadow: Rework trace_shadow_gen() into sh_trace_va()
  x86/shadow: Introduce sh_trace_gl1e_va()
  x86/shadow: Rework trace_shadow_emulate_other() as sh_trace_gfn_va()

Jan Beulich (1):
  x86/shadow: Don't leave trace record field uninitialized

 xen/arch/x86/mm/shadow/multi.c | 144 ++++++++++++++-------------------
 1 file changed, 60 insertions(+), 84 deletions(-)


base-commit: 9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 18:07:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 18:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745445.1152588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifC-0007sT-A0; Fri, 21 Jun 2024 18:07:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745445.1152588; Fri, 21 Jun 2024 18:07:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifC-0007rE-4c; Fri, 21 Jun 2024 18:07:06 +0000
Received: by outflank-mailman (input) for mailman id 745445;
 Fri, 21 Jun 2024 18:07:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKifB-0007ai-1K
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 18:07:05 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1045ebc0-2ff9-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 20:07:04 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6efacd25ecso131881566b.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 11:07:04 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30427976sm1244537a12.33.2024.06.21.11.07.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 11:07:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1045ebc0-2ff9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718993223; x=1719598023; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UENei0RoJJU0F6xVv+xuU03cuzsYd2lsohkbB9XWZBg=;
        b=OYfMmTg3zfXBYpuNWwmVbOmuuXOHfPkdnXyY1o66zeniEKmZiZ62jQjPOh94EEJrtE
         jbaI7NdrY5EVDD5M6/Xa+Z64P99hVwkrNL5nRoRYQtZyxPc3ENKMFQfdTjbPqfv6mN3/
         t1pl3+6XM3BOVT6wRECv3+LUoZJL776eld85E=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718993223; x=1719598023;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=UENei0RoJJU0F6xVv+xuU03cuzsYd2lsohkbB9XWZBg=;
        b=aOYKU2EOAjI/7L0MRzbwAB3Iqs4J72xSDB7JvrG3CWq7eWbMrpCmMmFEcXur7KIM+a
         vhTZb98SfnKcJGKyLQb8IVH305JSH6PX8nE/VoSfvqp0H67dYriltgQQC9JtYRKFR+hP
         kQqzxo24fu+eUiZSyYVjSYDLN/a8lHH794eqKciIK1RFfoo7i4/niE+Ub7ULsHB/xjDu
         hQSMbo3Oyuhkh6RQvPnhpcSjsmfp38Vwvm6MSlLUlfbjhj0myogdB4zPnVFU1R4bxAoj
         Iql37E2HwptN26g0KdRgjzuPMpF2YhSS1RqeuxKMV5CqyuFeKFxAFYLQ5gQ/buYqJWEV
         vB9g==
X-Gm-Message-State: AOJu0Yyv7i2Z8cc1w2Esr8kLHf0cji83pX2G7TAe/04i4Molew/loWpd
	R6L2t7D0HgNLl+N25Mo8cRQVl8dPTtylKY9TmBeqwJUtsx1zB6kL0xWWUhOg+YDFe0oiZBD1HL/
	y7d0=
X-Google-Smtp-Source: AGHT+IHD3hxvnofs4GJv7jRRinBxIQABBtpjKadqoS7PvZBZUJ80/tlbC+MGJ3uPmT09lJgkLxwJDQ==
X-Received: by 2002:a50:c30d:0:b0:57d:2659:9143 with SMTP id 4fb4d7f45d1cf-57d265991d3mr3595483a12.37.1718993223234;
        Fri, 21 Jun 2024 11:07:03 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Jan Beulich <JBeulich@suse.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH for-4.19 v3 4/4] x86/shadow: Don't leave trace record field uninitialized
Date: Fri, 21 Jun 2024 19:06:58 +0100
Message-Id: <20240621180658.92831-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621180658.92831-1-andrew.cooper3@citrix.com>
References: <20240621180658.92831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Jan Beulich <jbeulich@suse.com>

The emulation_count field is currently set only conditionally. Convert all
field assignments to an initializer, thus guaranteeing that the field is set
to 0 (default-initialized) when GUEST_PAGING_LEVELS != 3.

Rework trace_shadow_emulate() to be consistent with the other trace helpers.

Coverity-ID: 1598430
Fixes: 9a86ac1aa3d2 ("xentrace 5/7: Additional tracing for the shadow code")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
Release-acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v2:
 * Rebase over packing/sh_trace() cleanup.
---
 xen/arch/x86/mm/shadow/multi.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 7f95d50be397..71a2673682f4 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -2063,30 +2063,29 @@ static void cf_check trace_emulate_write_val(
 #endif
 }
 
-static inline void trace_shadow_emulate(guest_l1e_t gl1e, unsigned long va)
+static inline void sh_trace_emulate(guest_l1e_t gl1e, unsigned long va)
 {
     if ( tb_init_done )
     {
         struct __packed {
-            /* for PAE, guest_l1e may be 64 while guest_va may be 32;
-               so put it first for alignment sake. */
+            /*
+             * For GUEST_PAGING_LEVELS=3 (PAE paging), guest_l1e is 64 while
+             * guest_va is 32.  Put it first to avoid padding.
+             */
             guest_l1e_t gl1e, write_val;
             guest_va_t va;
             uint32_t flags:29, emulation_count:3;
-        } d;
-        u32 event;
-
-        event = TRC_SHADOW_EMULATE | ((GUEST_PAGING_LEVELS-2)<<8);
-
-        d.gl1e = gl1e;
-        d.write_val.l1 = this_cpu(trace_emulate_write_val);
-        d.va = va;
+        } d = {
+            .gl1e = gl1e,
+            .write_val.l1 = this_cpu(trace_emulate_write_val),
+            .va = va,
 #if GUEST_PAGING_LEVELS == 3
-        d.emulation_count = this_cpu(trace_extra_emulation_count);
+            .emulation_count = this_cpu(trace_extra_emulation_count),
 #endif
-        d.flags = this_cpu(trace_shadow_path_flags);
+            .flags = this_cpu(trace_shadow_path_flags),
+        };
 
-        trace(event, sizeof(d), &d);
+        sh_trace(TRC_SHADOW_EMULATE, sizeof(d), &d);
     }
 }
 #endif /* CONFIG_HVM */
@@ -2815,7 +2814,7 @@ static int cf_check sh_page_fault(
     }
 #endif /* PAE guest */
 
-    trace_shadow_emulate(gw.l1e, va);
+    sh_trace_emulate(gw.l1e, va);
  emulate_done:
     SHADOW_PRINTK("emulated\n");
     return EXCRET_fault_fixed;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 18:07:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 18:07:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745444.1152582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifC-0007pE-1S; Fri, 21 Jun 2024 18:07:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745444.1152582; Fri, 21 Jun 2024 18:07:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKifB-0007p7-Uk; Fri, 21 Jun 2024 18:07:05 +0000
Received: by outflank-mailman (input) for mailman id 745444;
 Fri, 21 Jun 2024 18:07:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKifA-0007MK-JJ
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 18:07:04 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f3143fe-2ff9-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 20:07:02 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57cfe600cbeso2770278a12.2
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 11:07:02 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30427976sm1244537a12.33.2024.06.21.11.07.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 11:07:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f3143fe-2ff9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1718993221; x=1719598021; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TYkGBPoHjIs+Sp6qz1v10kMfnsAkcP2P+WxDwIT/KUY=;
        b=DztpuUTu9I7e1RDO0X1OQOmqySHwkMk7/BXMm6i8Zq4YilR/e3K9DRmok4Nx9BqhkN
         dCGW5J3u8G63xBY/uCWQvWaAxv66w98+hZWrLu1we/5dZQH3NdL+i3HMb6ELxwrpUdZZ
         MB1q9LCOscJX6hPp0lhWucsn9ntMK8nIouN2w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1718993221; x=1719598021;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TYkGBPoHjIs+Sp6qz1v10kMfnsAkcP2P+WxDwIT/KUY=;
        b=L/4kmFD3gUwO7aGu6iNSi8cQBnwXB1U2VmFGc6ETCurP4OO0fqfMWtV6tz9K15Be0Y
         Kmsg74TPsQtA5dn83+lGujrsjs4sfj7aikn/Le7IATfhACaGX7WiqOM0SR1N8Qtl4jZX
         dOch5ikfxtEFiI9XNR0IWMEs92x9P66zGKK8PTwOeS7XGmhGOPS03naKaSjpJDNJejnS
         +hDC5x9bU2jxTD7ZnCmFtAkkRAHFWVFH17hbjGERSWoYrYuj5tV5aqY+EJNikw81N+6r
         VdxsbvFdD0bPAOw7MZPqlXBOLtVb3+6pHFTV2EeNrf+lRf7SK/OIgRq49Ue9cteNEF3L
         8jZQ==
X-Gm-Message-State: AOJu0YxWAKUZsucY/WGiisVqy2CJFAmZtrFp0++cyIqRDLl7q/p1a/H7
	ozLHycMjxnA2ag+xSlGmxjxyl/UWNa+Gqp1uXy7NvDrq2sj4jVPc1pbWv+Tji9WpSTQ0l0YNhW7
	FmEA=
X-Google-Smtp-Source: AGHT+IFMr855r9r+IShpdogxtKe4tr95yzqOIH263iae9rtPfcbOLdZ6uuLI2EebzIID2jbE7IjhBA==
X-Received: by 2002:a50:9fcc:0:b0:57c:60f0:98bc with SMTP id 4fb4d7f45d1cf-57d07e7996cmr6143786a12.11.1718993221133;
        Fri, 21 Jun 2024 11:07:01 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v3 2/4] x86/shadow: Introduce sh_trace_gl1e_va()
Date: Fri, 21 Jun 2024 19:06:56 +0100
Message-Id: <20240621180658.92831-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621180658.92831-1-andrew.cooper3@citrix.com>
References: <20240621180658.92831-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

trace_shadow_fixup() and trace_not_shadow_fault() both write out identical
trace records.  Reimplement them in terms of a common sh_trace_gl1e_va().

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

v2:
 * New
v3:
 * Retain the __packed to avoid introducing tail padding
---
 xen/arch/x86/mm/shadow/multi.c | 57 ++++++++++------------------------
 1 file changed, 16 insertions(+), 41 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/multi.c b/xen/arch/x86/mm/shadow/multi.c
index 1775952d7e18..75250c6f0f7c 100644
--- a/xen/arch/x86/mm/shadow/multi.c
+++ b/xen/arch/x86/mm/shadow/multi.c
@@ -1987,51 +1987,26 @@ static void sh_trace_va(uint32_t event, guest_va_t va)
         sh_trace(event, sizeof(va), &va);
 }
 
-static inline void trace_shadow_fixup(guest_l1e_t gl1e,
-                                      guest_va_t va)
+/* Shadow trace event with a gl1e, linear address and flags. */
+static void sh_trace_gl1e_va(uint32_t event, guest_l1e_t gl1e, guest_va_t va)
 {
     if ( tb_init_done )
     {
         struct __packed {
-            /* for PAE, guest_l1e may be 64 while guest_va may be 32;
-               so put it first for alignment sake. */
-            guest_l1e_t gl1e;
-            guest_va_t va;
-            u32 flags;
-        } d;
-        u32 event;
-
-        event = TRC_SHADOW_FIXUP | ((GUEST_PAGING_LEVELS-2)<<8);
-
-        d.gl1e = gl1e;
-        d.va = va;
-        d.flags = this_cpu(trace_shadow_path_flags);
-
-        trace(event, sizeof(d), &d);
-    }
-}
-
-static inline void trace_not_shadow_fault(guest_l1e_t gl1e,
-                                          guest_va_t va)
-{
-    if ( tb_init_done )
-    {
-        struct __packed {
-            /* for PAE, guest_l1e may be 64 while guest_va may be 32;
-               so put it first for alignment sake. */
+            /*
+             * For GUEST_PAGING_LEVELS=3 (PAE paging), guest_l1e is 64 while
+             * guest_va is 32.  Put it first to avoid padding.
+             */
             guest_l1e_t gl1e;
             guest_va_t va;
-            u32 flags;
-        } d;
-        u32 event;
-
-        event = TRC_SHADOW_NOT_SHADOW | ((GUEST_PAGING_LEVELS-2)<<8);
-
-        d.gl1e = gl1e;
-        d.va = va;
-        d.flags = this_cpu(trace_shadow_path_flags);
-
-        trace(event, sizeof(d), &d);
+            uint32_t flags;
+        } d = {
+            .gl1e  = gl1e,
+            .va    = va,
+            .flags = this_cpu(trace_shadow_path_flags),
+        };
+
+        sh_trace(event, sizeof(d), &d);
     }
 }
 
@@ -2603,7 +2578,7 @@ static int cf_check sh_page_fault(
     d->arch.paging.log_dirty.fault_count++;
     sh_reset_early_unshadow(v);
 
-    trace_shadow_fixup(gw.l1e, va);
+    sh_trace_gl1e_va(TRC_SHADOW_FIXUP, gw.l1e, va);
  done: __maybe_unused;
     sh_audit_gw(v, &gw);
     SHADOW_PRINTK("fixed\n");
@@ -2857,7 +2832,7 @@ static int cf_check sh_page_fault(
     put_gfn(d, gfn_x(gfn));
 
 propagate:
-    trace_not_shadow_fault(gw.l1e, va);
+    sh_trace_gl1e_va(TRC_SHADOW_NOT_SHADOW, gw.l1e, va);
 
     return 0;
 }
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 18:31:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 18:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745481.1152612 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKj2g-00068B-HW; Fri, 21 Jun 2024 18:31:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745481.1152612; Fri, 21 Jun 2024 18:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKj2g-000684-Ef; Fri, 21 Jun 2024 18:31:22 +0000
Received: by outflank-mailman (input) for mailman id 745481;
 Fri, 21 Jun 2024 18:31:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKj2f-00067t-5m; Fri, 21 Jun 2024 18:31:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKj2f-0007vK-0E; Fri, 21 Jun 2024 18:31:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKj2e-0007CD-Ja; Fri, 21 Jun 2024 18:31:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKj2e-0002aw-Ir; Fri, 21 Jun 2024 18:31:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wNNe8rLpraqpTyBDWxniHIaqw75ygMUPmarzFA3bxvo=; b=ww0AV/5WPYUQFusGc1o0vS2Gne
	9oU3bswidcfkRoJAVRZVuBif5seXEkk/H7dNdGH3rCbP/8v34ZVV8PNjKWYHvJ87+By+QrrJrt30z
	1WLCB5Np1y+c5ZSqdOu4nKKZD5upXN2KV/eX58fly94XNVCwP+1IMLuOUZNEoFYeThDY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186447-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186447: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
X-Osstest-Versions-That:
    xen=62071a1c16c4dbe765491e58e456fd3a19b33298
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 Jun 2024 18:31:20 +0000

flight 186447 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186447/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
baseline version:
 xen                  62071a1c16c4dbe765491e58e456fd3a19b33298

Last test of basis   186435  2024-06-20 14:00:22 Z    1 days
Testing same since   186447  2024-06-21 15:04:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Matthew Barnes <matthew.barnes@cloud.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   62071a1c16..9e7c26ad85  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 19:15:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 19:15:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745497.1152632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKjj0-0004oL-W0; Fri, 21 Jun 2024 19:15:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745497.1152632; Fri, 21 Jun 2024 19:15:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKjj0-0004oE-TD; Fri, 21 Jun 2024 19:15:06 +0000
Received: by outflank-mailman (input) for mailman id 745497;
 Fri, 21 Jun 2024 19:15:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DyDD=NX=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sKjiy-0004Zs-V3
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 19:15:04 +0000
Received: from sender4-op-o15.zoho.com (sender4-op-o15.zoho.com
 [136.143.188.15]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8f6ce7f0-3002-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 21:15:04 +0200 (CEST)
Delivered-To: tamas@tklengyel.com
Received: by mx.zohomail.com with SMTPS id 1718997294380168.4349606052541;
 Fri, 21 Jun 2024 12:14:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f6ce7f0-3002-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1718997296; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=f0GGN2wPkxoNdUdauBNAYzKCDO2IKvPkY+1VkdH/SMtHDFTJ3K2MuiB0uF2eTAIajjNn9xoLASUhJkq8tJRQmEKkvoGx5z9kk0ummam/PwRTKSoQ2zBA83irqfUsyUL5W0Z8AwApmu9b4y2k/1bjA8MeuozfWHTCLoYk1eg/9Y4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1718997296; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:MIME-Version:Message-ID:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=tf78nJzAnYuw/IQmW4aPCL35F7SvB2Dnwn2xVGuwziQ=; 
	b=EUgMRdN0JwK65zlcVvYPIFoTD1coq0QvIjy3kX+66DzQWqbt+e+jCWlINGdRWLQ6ntb6AtA1hBrIaMhvvGLgRFHGz2NIsiCkrDNpdZOMvuIDpS6Qtd+6ErkBE8VQsIgKswosrK/hnaH3zlaIRgDXBKSOnmcPLEdJvLXbt7sbd+w=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1718997296;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:MIME-Version:Content-Transfer-Encoding:Reply-To;
	bh=tf78nJzAnYuw/IQmW4aPCL35F7SvB2Dnwn2xVGuwziQ=;
	b=i7JzvkFtyJTIkE7UG+R4PD7GRW6ROMuQuH8+KS8ofCPr6Ck9WFN7l5HxPIM17cka
	+mkQifs7YN2rxkb8Gq26NmCV0epvS4zNDF4dDePw0aioZCK5lA14qE5Lr35UqAdHu9Z
	0id9QOdRAZe42xoJxo+Z3Q/dTN20pPMe7SK2YltU=
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
Date: Fri, 21 Jun 2024 15:14:33 -0400
Message-Id: <20240621191434.5046-1-tamas@tklengyel.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This target enables integration into oss-fuzz.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 tools/fuzz/x86_instruction_emulator/Makefile    | 10 ++++++++--
 tools/fuzz/x86_instruction_emulator/fuzz-emul.c |  6 ++----
 2 files changed, 10 insertions(+), 6 deletions(-)

diff --git a/tools/fuzz/x86_instruction_emulator/Makefile b/tools/fuzz/x86_instruction_emulator/Makefile
index 1e4c6b37f5..de5f1e7e30 100644
--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -3,7 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 .PHONY: x86-insn-fuzz-all
 ifeq ($(CONFIG_X86_64),y)
-x86-insn-fuzz-all: x86-insn-fuzzer.a fuzz-emul.o afl
+x86-insn-fuzz-all: x86-insn-fuzzer.a fuzz-emul.o afl libfuzzer
 else
 x86-insn-fuzz-all:
 endif
@@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
 afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
 	$(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
 
+libfuzzer-harness: $(OBJS) cpuid.o
+	$(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
+
 # Common targets
 .PHONY: all
 all: x86-insn-fuzz-all
@@ -67,7 +70,7 @@ distclean: clean
 
 .PHONY: clean
 clean:
-	rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov
+	rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov libfuzzer-harness
 	rm -rf x86_emulate x86-emulate.c x86-emulate.h wrappers.c cpuid.c
 
 .PHONY: install
@@ -81,4 +84,7 @@ afl: afl-harness
 .PHONY: afl-cov
 afl-cov: afl-harness-cov
 
+.PHONY: libfuzzer
+libfuzzer: libfuzzer-harness
+
 -include $(DEPS_INCLUDE)
diff --git a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
index eeeb6931f4..2ba9ca9e0b 100644
--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -906,14 +906,12 @@ int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
 
     if ( size <= DATA_OFFSET )
     {
-        printf("Input too small\n");
-        return 1;
+        return -1;
     }
 
     if ( size > FUZZ_CORPUS_SIZE )
     {
-        printf("Input too large\n");
-        return 1;
+        return -1;
     }
 
     memcpy(&input, data_p, size);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 19:15:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 19:15:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745496.1152622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKjiy-0004a5-P6; Fri, 21 Jun 2024 19:15:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745496.1152622; Fri, 21 Jun 2024 19:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKjiy-0004Zy-Ls; Fri, 21 Jun 2024 19:15:04 +0000
Received: by outflank-mailman (input) for mailman id 745496;
 Fri, 21 Jun 2024 19:15:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DyDD=NX=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sKjix-0004Zs-8W
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 19:15:03 +0000
Received: from sender4-op-o15.zoho.com (sender4-op-o15.zoho.com
 [136.143.188.15]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8dbed7b9-3002-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 21:15:01 +0200 (CEST)
Delivered-To: tamas@tklengyel.com
Received: by mx.zohomail.com with SMTPS id 1718997295610207.92573120373834;
 Fri, 21 Jun 2024 12:14:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8dbed7b9-3002-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1718997297; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=lmBCt5pBbNYjFmZyPlSRLuMAFEqEphvoDur25CfuXdCjCoFPezjyY+Yh2w+fxHeixaMJ/u1lDuCAg3uBCcdKt6F68EmPCuf70uRXzcaFB+hUey8KvER50Hql2p4bFqANN0TZuBMUu14v3DPla0UFCMkrqOfAPVsJGUMz+8Z6RWI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1718997297; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=az0Awjr8iknR6afX3DlNO8NfaAjpi0YMFq021oOMFmo=; 
	b=IWRfIBJ4pshWHRZQW2gx5RyYOpZTeWdoxmXya1Kl6AQHFC+TrtZDjynbbD+ACXc2lMBzPKEBBuTeJuIkm4sPpz7CUvN+D+2Z7HSDuofK+9CnDh0+fdVDDds9hzkUFrj9fCRFeZoPCp8MTc7A2RHW6b2lUI8YsNzFIpRxPuuH4aQ=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1718997297;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding:Reply-To;
	bh=az0Awjr8iknR6afX3DlNO8NfaAjpi0YMFq021oOMFmo=;
	b=FUQ2HeJwKsXuF+TfhMrpOGfdEpQ4hW3axk2sbLvNT6BfDAuETyE2do1XYJ3fJ2yd
	IcGCPFJK78lQDcAoCXX3ILpgEHYWRpc8NvMX52Iiv6PdYEF/fhNMgFipLINmeB8+3AF
	wep3QxFr9YRyoap1w2qKCR3TYYyWJu7UKo1mptj4=
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
Date: Fri, 21 Jun 2024 15:14:34 -0400
Message-Id: <20240621191434.5046-2-tamas@tklengyel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240621191434.5046-1-tamas@tklengyel.com>
References: <20240621191434.5046-1-tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the build integration script for oss-fuzz targets.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 scripts/oss-fuzz/build.sh | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)
 create mode 100755 scripts/oss-fuzz/build.sh

diff --git a/scripts/oss-fuzz/build.sh b/scripts/oss-fuzz/build.sh
new file mode 100755
index 0000000000..48528bbfc2
--- /dev/null
+++ b/scripts/oss-fuzz/build.sh
@@ -0,0 +1,22 @@
+#!/bin/bash -eu
+# Copyright 2024 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+################################################################################
+
+cd xen
+./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
+make clang=y -C tools/include
+make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
+cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:00:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:00:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745516.1152641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkQk-00041j-9A; Fri, 21 Jun 2024 20:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745516.1152641; Fri, 21 Jun 2024 20:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkQk-00041c-6T; Fri, 21 Jun 2024 20:00:18 +0000
Received: by outflank-mailman (input) for mailman id 745516;
 Fri, 21 Jun 2024 20:00:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=drXV=NX=amd.com=VictorM.Lira@srs-se1.protection.inumbo.net>)
 id 1sKkQj-00041W-FC
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:00:17 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20600.outbound.protection.outlook.com
 [2a01:111:f403:2415::600])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e001a919-3008-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 22:00:16 +0200 (CEST)
Received: from BLAPR05CA0004.namprd05.prod.outlook.com (2603:10b6:208:36e::8)
 by SN7PR12MB7275.namprd12.prod.outlook.com (2603:10b6:806:2ae::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.30; Fri, 21 Jun
 2024 20:00:12 +0000
Received: from BL02EPF00021F69.namprd02.prod.outlook.com
 (2603:10b6:208:36e:cafe::fe) by BLAPR05CA0004.outlook.office365.com
 (2603:10b6:208:36e::8) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.37 via Frontend
 Transport; Fri, 21 Jun 2024 20:00:11 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BL02EPF00021F69.mail.protection.outlook.com (10.167.249.5) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Fri, 21 Jun 2024 20:00:11 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Fri, 21 Jun
 2024 15:00:10 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Fri, 21 Jun
 2024 15:00:10 -0500
Received: from xsjwoods50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39 via Frontend
 Transport; Fri, 21 Jun 2024 15:00:10 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e001a919-3008-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fzqIL8zpWDnBKXDLraQ7F6o+6lOHYZLXxQvnnhdcuhY9aWPeV6VNZfF7W2Fu58aX/Is8nBpgbVnfMaeOfsgecb0511aMgeA9MvvpsrgYaBFGYrhvhde0K4LvP8Z9wrmuAIEBX+TccZG0lPJwtNkYPHQT4Z0x2KXU6FV4FLSC0GUFvvbYoe3dYpBjCKgzGY/BN7FLPop9uyuP+aZt+ro5ELH27OfC4jMqfEBeCy1QDG0atYdg8UtHMlBneGbjaYXs1cjjIMJenWZCbHZLQSJcD5eSHIp+Bt6biUTe6jQgfpRL0UPz7MlP3OyQgB8QrM16TmVcncHec3j+J/HlBnfBTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=azn84zSL6Ad+0R+BeGFFrMViBNjrO0AdAP5Q7j2OxqE=;
 b=iLl3GDFe6T/FTsPhAMrCHTKBcPZsnii8HdxvTnkIrML40+SfTQ5WJjnnL5n7slCGd4roW6Ha4XX/axYnr95K/VTEfViq1ZIIqIoFaQjNZXhVilB3/bSmLWsM49s5eTjWeqw2vl+J0OrGQmxXNfmIJC612oSVi+sJU8Z8xPxxbAO8rbchh4WG6krbz0CNZe8sMPm/F8grCONvYsqHSReO8M0DwQGAMsJ/azCmbQMXs/SzSC4U3Cbmhy28qyz6G4rd7DqywRngkaOjpBGRHmsuHeeYVILPklQsxPTSN2BKLqYUL6MeWiWq9kPArnwFQL1jG18VS573GLkDxfwxgP4+qg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=azn84zSL6Ad+0R+BeGFFrMViBNjrO0AdAP5Q7j2OxqE=;
 b=y9NBYWzRwa8KXZ8fK7MO1L8P51KFwu9/JXiYhxG5eg+hEyH3sA0pSquPLDFasHMVuL7alj+VwYcHhOcqsN6rzlYAWFkXvBL6iCajS/Hh9OsICG/YmwKITL9PtTpAgvvHVfGyXeOJzh0pILtqN9yLr87ldNuIO6OEtPlqyggMdAM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: <victorm.lira@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Victor Lira <victorm.lira@amd.com>, "George
 Dunlap" <george.dunlap@cloud.com>, George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
Subject: [XEN PATCH v2] common/sched: address a violation of MISRA C Rule 8.8
Date: Fri, 21 Jun 2024 12:59:51 -0700
Message-ID: <994b423128711b2a912401ff4cb13107ad5c6a9d.1718999221.git.victorm.lira@amd.com>
X-Mailer: git-send-email 2.37.6
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BL02EPF00021F69:EE_|SN7PR12MB7275:EE_
X-MS-Office365-Filtering-Correlation-Id: 661ffa20-69d5-44a9-1375-08dc922cc243
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|376011|36860700010|1800799021|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?ThDL8WKZ8ueZD6lhDNBn1KDtA4EjxTE7/9wAGOYKV+UxUiO+GW2k29BehJJC?=
 =?us-ascii?Q?++15Dar76Tq+6DpMSyUgPJnpEYYrHl2pPaBKwVN7L9UjAxZyJojmFJrfEH3t?=
 =?us-ascii?Q?hxwkarJDhCvohb1bHQ3OX8ZgfQQISnIpH9ow4HsS1YEh6oSE0QM09aHW4K2Z?=
 =?us-ascii?Q?iD7HBiZKoTl3vAMVeZDeuwbhiRlQcLjtMs0IxH0TwvkTjSkaKNtNnS6xRsoc?=
 =?us-ascii?Q?Kvh0SIqaBqhNR+kgnAsi7ITYzZwWjerAksoKqBRUPUP4Udr8TYehvcyTFJIB?=
 =?us-ascii?Q?AXyfFJXD//ctRrZ91/QANzpLjtWPqxU17sROBeJtaTIdBm8m3FdNL8okrhIY?=
 =?us-ascii?Q?cnu1JDP34Z+axBT1h6ObiFmXfKuXJYq3EeNg9YfUa2jp9ShAOblzp0Znky0V?=
 =?us-ascii?Q?2j8sjqC0Q4yWTZXLAdh6/3r/+JidxI7Ea8nCSxxdpxe2zGjHo12z2n/DSudi?=
 =?us-ascii?Q?Yq8Pmc8ZyBGZspiVt3b9fS6P6iE868uRRGcsQ7ZHcw7AqbX4BiZT5AspLwd9?=
 =?us-ascii?Q?XmOIzBZlmtOsJ370WlmTyubTnOuckh3ww+E+OH3ReLub4hxhUbVpxKlqiGAe?=
 =?us-ascii?Q?gb8Jmsf7aC/R0D7p3GFZ5PKstO26EMAV8LVhR06Oqm4iNKi28w5iqceLBgKW?=
 =?us-ascii?Q?qSFve2Owa4ivtMxA4oRYB+vGvtvA9uMppf2R33Fk2PKepkaZMxCrn+O2CMmx?=
 =?us-ascii?Q?KatPAQainbXNoTyAuMBXFDE4efPyUk8tdcmauTwJWXOcTuoDSOppVHCLXJ/f?=
 =?us-ascii?Q?XKuetKramnrCecgpAJ+d11jsyEOGi6Ps0U2bAb8FybNj3uN3puHJVyuNaNsX?=
 =?us-ascii?Q?PkekdTb7muN2z9GLqhlFWf5kzJvNgt8XjU2qZlIibV3zOLHnda6x28y0KO3a?=
 =?us-ascii?Q?lOYqYKVUUo8EPteruKfiANFWbb3ZbVxR8ATNZ0u/PyCHT/LIofZQiO6gBo6a?=
 =?us-ascii?Q?sf2Yd8pLuCxAYSakJxW8y9oHJ6uaY0R3NWBJLKmFonFThbH9CsY2SaKWdKxX?=
 =?us-ascii?Q?2lexwjW4EqdUpCHmu+3qUpDWRsMt5I7dDG3dkpfq8KGdjZRZv7gldkxcr4ZN?=
 =?us-ascii?Q?eEaNJU6h9pwTJdE1FZxjduVqE5tdl9ms3raGl9X7kF+qITySoKGP7BenZBpa?=
 =?us-ascii?Q?Cttn+TgRczdjCrVnycdCgcgvx2GchWv4OsWLnQe+F/JxviCck/uKgbJd892y?=
 =?us-ascii?Q?PWflQmjBCMFZRhMxC84QpY/4TdRG8cB7S/gBm002wZc90uDlGvaeK19olafC?=
 =?us-ascii?Q?NmtKvyTKTIUQku0FHZQ4LeWUTZQ8bOAho/TvfpY3iOvDxLsLebucu/zqNrvm?=
 =?us-ascii?Q?WRpirtOxsoGtN5RMB/JH8ksRgl7Cp+b7B2Auhsf2TBsO1mXtbltf5boklq4t?=
 =?us-ascii?Q?qxbmcnZwk/uuqWRAI3YPJhudchiSnWrlTnBVc9KV++n6zRil6Q=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(376011)(36860700010)(1800799021)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 Jun 2024 20:00:11.4378
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 661ffa20-69d5-44a9-1375-08dc922cc243
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BL02EPF00021F69.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR12MB7275

From: Victor Lira <victorm.lira@amd.com>

Rule 8.8: "The static storage class specifier shall be used in all
declarations of objects and functions that have internal linkage"

Fix the violation by adding the static specifier to both the forward
declaration and the definition of burn_credits().
No functional change.

Reported-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
Signed-off-by: Victor Lira <victorm.lira@amd.com>
Acked-by: George Dunlap <george.dunlap@cloud.com>
---
Changes from v1:
- adjust indentation and line width.
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
---
 xen/common/sched/credit2.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 685929c290..b4e03e2a63 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -1476,8 +1476,8 @@ static inline void runq_remove(struct csched2_unit *svc)
     list_del_init(&svc->runq_elem);
 }
 
-void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
-                  s_time_t now);
+static void burn_credits(struct csched2_runqueue_data *rqd,
+                         struct csched2_unit *svc, s_time_t now);
 
 static inline void
 tickle_cpu(unsigned int cpu, struct csched2_runqueue_data *rqd)
@@ -1855,8 +1855,8 @@ static void reset_credit(int cpu, s_time_t now, struct csched2_unit *snext)
     /* No need to resort runqueue, as everyone's order should be the same. */
 }
 
-void burn_credits(struct csched2_runqueue_data *rqd,
-                  struct csched2_unit *svc, s_time_t now)
+static void burn_credits(struct csched2_runqueue_data *rqd,
+                         struct csched2_unit *svc, s_time_t now)
 {
     s_time_t delta;
 
-- 
2.37.6



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:19:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:19:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745529.1152672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjU-000716-9C; Fri, 21 Jun 2024 20:19:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745529.1152672; Fri, 21 Jun 2024 20:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjU-00070z-66; Fri, 21 Jun 2024 20:19:40 +0000
Received: by outflank-mailman (input) for mailman id 745529;
 Fri, 21 Jun 2024 20:19:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKkjT-0006yu-6N
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:19:39 +0000
Received: from mail-ej1-x62a.google.com (mail-ej1-x62a.google.com
 [2a00:1450:4864:20::62a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 94d86186-300b-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 22:19:37 +0200 (CEST)
Received: by mail-ej1-x62a.google.com with SMTP id
 a640c23a62f3a-a6e43dad8ecso406204066b.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 13:19:37 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf48b3a6sm116947466b.87.2024.06.21.13.19.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 13:19:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94d86186-300b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719001176; x=1719605976; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iV7iG0JHbiQKpGiAlNdsXIwsz8t+op8HRsdtCo6zs64=;
        b=n4z2koQY/lUEfS240opAUupVbuf0GvU9G4mPSILtadaCO15e8+tHcWdd4yMs+v/KdV
         jaZ3ZTkxM60l/OWUN3IAV6Ron70iZNXV8a+4T3WvzI9Lqh9tQ9cbacaiHXfGdGyN6lHt
         KZ2VZdrCXVs9oskNvNgWEoeggxJdl4NtpMJkc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719001176; x=1719605976;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iV7iG0JHbiQKpGiAlNdsXIwsz8t+op8HRsdtCo6zs64=;
        b=oIE50orPIJCfMGsv5bwQI6Z55D7IdyeZz6Zl7dmI0LsPu3UFB/BFYKvjwrh8wdG3P4
         pYn1HlnL4FJOjv44ZgPFReAyHdLPe7RuZI/xsdnVavcSCEQdINAuS+ToblQjysGaT4um
         UuhjV1sOcbuqKCbM691KatJm0qdyhytvqQQ+DcQMFMs9qMFQQJN40gD+PLVgZEjzHxMh
         jN9CtvTrGiXh48Il76pWt22T8n6ifDbVtAfAf9B5GuXWc6DtZUjYiZ+wR9Z/WtHaSfjs
         6SJVXodMuqipPPcOhgG8QsiZNW9ygkOsYe4r7ugf9faq3ua3HMl6TTXetoYvanWTmzkj
         H3mQ==
X-Gm-Message-State: AOJu0Yx6LDF9PpD436ctkHYZdwv3ZXnWsWLtLl08ATtRuWPSEH4EusBb
	0aEo2TluWAoIyWbnpJPOSF/juPdnh2flRK2zbPP4dfrtS4cTpH6ZhXt9lQtEJscCyvx7uzXeOK9
	I2IQ=
X-Google-Smtp-Source: AGHT+IGytrhx91F2VJ70pYTG41PiGH7Wi/FvG9spB2y/Uwfh/Ho0jTBon70fBzgHfEja7cjN3qj3kw==
X-Received: by 2002:a17:907:9409:b0:a6f:46f1:5434 with SMTP id a640c23a62f3a-a6fdb65d7famr72013866b.6.1719001176238;
        Fri, 21 Jun 2024 13:19:36 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 2/3] xen/ppc: Adjust ppc64_defconfig
Date: Fri, 21 Jun 2024 21:19:27 +0100
Message-Id: <20240621201928.319293-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621201928.319293-1-andrew.cooper3@citrix.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

All of CONFIG_SCHED_* and CONFIG_HYPFS build fine.

Add a stub for share_xen_page_with_guest(), which is all that is necessary to
make CONFIG_TRACEBUFFER build.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Shawn Anastasio <sanastasio@raptorengineering.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: George Dunlap <George.Dunlap@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>

https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342672505

This is in aid of getting wider compiler coverage with the subsequent patch.
---
 xen/arch/ppc/configs/ppc64_defconfig | 6 ------
 xen/arch/ppc/stubs.c                 | 6 ++++++
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/ppc/configs/ppc64_defconfig b/xen/arch/ppc/configs/ppc64_defconfig
index 48a053237afd..4924d881a27c 100644
--- a/xen/arch/ppc/configs/ppc64_defconfig
+++ b/xen/arch/ppc/configs/ppc64_defconfig
@@ -1,9 +1,3 @@
-# CONFIG_SCHED_CREDIT is not set
-# CONFIG_SCHED_RTDS is not set
-# CONFIG_SCHED_NULL is not set
-# CONFIG_SCHED_ARINC653 is not set
-# CONFIG_TRACEBUFFER is not set
-# CONFIG_HYPFS is not set
 # CONFIG_GRANT_TABLE is not set
 # CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
 # CONFIG_MEM_ACCESS is not set
diff --git a/xen/arch/ppc/stubs.c b/xen/arch/ppc/stubs.c
index 923f0e7b2095..a10691165b1b 100644
--- a/xen/arch/ppc/stubs.c
+++ b/xen/arch/ppc/stubs.c
@@ -333,3 +333,9 @@ void udelay(unsigned long usecs)
 {
     BUG_ON("unimplemented");
 }
+
+void share_xen_page_with_guest(struct page_info *page, struct domain *d,
+                               enum XENSHARE_flags flags)
+{
+    BUG_ON("unimplemented");
+}
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:19:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:19:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745527.1152652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjQ-0006Xx-OZ; Fri, 21 Jun 2024 20:19:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745527.1152652; Fri, 21 Jun 2024 20:19:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjQ-0006Xq-KA; Fri, 21 Jun 2024 20:19:36 +0000
Received: by outflank-mailman (input) for mailman id 745527;
 Fri, 21 Jun 2024 20:19:35 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKkjP-0006Xk-IV
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:19:35 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 934103dc-300b-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 22:19:34 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ebe785b234so25598481fa.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 13:19:34 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf48b3a6sm116947466b.87.2024.06.21.13.19.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 13:19:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 934103dc-300b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719001173; x=1719605973; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=WWqVYAr+RIssWyYwcsTLLYz+ah7FzeogIU64ufCtzYs=;
        b=ACAk6jekKbb0t+aCuBc55nLn+TD+XBWeH5idsKcqu1nDp0M6ctCWE8ym9zeUxLCyMa
         EITpri29vPIHHy0XzBot1fwgEtqHCPruwDgNqLNW0S4vIh00ZL4CpOxJjtAaXa6eH0f8
         NgHIVxq/LrFrLk5jdvNA4sIkzoPAqSKhLnuJ8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719001173; x=1719605973;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=WWqVYAr+RIssWyYwcsTLLYz+ah7FzeogIU64ufCtzYs=;
        b=gt7RWRlb09gcNoJUrq0o08FEwPEyEE5UW+tB99575HBgLEjUOA8v4JYo1bcWIlz0ix
         FWxbaiEMdreKb0xC+iPk28WMsJBjXikAgH4DKjfm+nYmNNGuwif8Xb1iCr3ZqZhSyPXC
         pp/uHBVQeNicpZYi7oIUCfZCKQLfsVQXvbZ2y5K+614feazHFS85H9cMb77hTcyTuDxa
         MK/GaFc8YOko16hKg6OzEWPYTnGnU9ghbRs5WQUZXqsIognvV3b0ttHo+k/ZgpNYV0x1
         WvK8sc7u6N39kHbUnntor+tY1dCQXfvpczNgowK2jWqY9whChEgilcS1Iu7CWPf/LzDj
         PsaA==
X-Gm-Message-State: AOJu0Yx0tAWkFsZC2ZY1eInQbGpL9U6kaluv74t1oPAGmoSxHG6M+N8d
	YTrTRM/c6iDMs0lCW8d3W+tbIL3JwQLQ3LOzE3IIMWLqVlsSVgY7Uj2P32qBuJMeS8p1SLiThjC
	bJRU=
X-Google-Smtp-Source: AGHT+IENENi3Vr58wPw4ikChsTi4+fhFJpmmdf1XluRJ4iDieqqOtdR8N77YDi1kMSIvIAN7iQzZvg==
X-Received: by 2002:ac2:592e:0:b0:52c:7fc3:601 with SMTP id 2adb3069b0e04-52ccaa98d2cmr5913736e87.61.1719001173402;
        Fri, 21 Jun 2024 13:19:33 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH for-4.19? 0/3] xen: build adjustments for __read_mostly/__ro_after_init
Date: Fri, 21 Jun 2024 21:19:25 +0100
Message-Id: <20240621201928.319293-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is in aid of getting the RISC-V build working without introducing more
technical debt.  It's done by making PPC shed its copy of said technical debt.

Build tested quite thoroughly, including in Gitlab.

Andrew Cooper (3):
  xen/riscv: Drop legacy __ro_after_init definition
  xen/ppc: Adjust ppc64_defconfig
  xen/ppc: Avoid using the legacy __read_mostly/__ro_after_init
    definitions

 xen/arch/ppc/configs/ppc64_defconfig | 6 ------
 xen/arch/ppc/include/asm/cache.h     | 3 ---
 xen/arch/ppc/mm-radix.c              | 1 +
 xen/arch/ppc/stubs.c                 | 7 +++++++
 xen/arch/riscv/mm.c                  | 2 +-
 xen/common/argo.c                    | 1 +
 xen/common/cpu.c                     | 1 +
 xen/common/debugtrace.c              | 1 +
 xen/common/domain.c                  | 1 +
 xen/common/event_channel.c           | 2 ++
 xen/common/keyhandler.c              | 1 +
 xen/common/memory.c                  | 1 +
 xen/common/page_alloc.c              | 1 +
 xen/common/pdx.c                     | 1 +
 xen/common/radix-tree.c              | 1 +
 xen/common/random.c                  | 2 +-
 xen/common/rcupdate.c                | 1 +
 xen/common/sched/core.c              | 1 +
 xen/common/sched/cpupool.c           | 1 +
 xen/common/sched/credit.c            | 1 +
 xen/common/sched/credit2.c           | 1 +
 xen/common/shutdown.c                | 1 +
 xen/common/spinlock.c                | 1 +
 xen/common/timer.c                   | 1 +
 xen/common/version.c                 | 3 +--
 xen/common/virtual_region.c          | 1 +
 xen/common/vmap.c                    | 2 +-
 xen/drivers/char/console.c           | 1 +
 xen/drivers/char/ns16550.c           | 1 +
 xen/drivers/char/serial.c            | 2 +-
 xen/include/xen/cache.h              | 2 ++
 xen/include/xen/hypfs.h              | 1 +
 32 files changed, 38 insertions(+), 15 deletions(-)

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:19:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:19:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745528.1152662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjS-0006mQ-2I; Fri, 21 Jun 2024 20:19:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745528.1152662; Fri, 21 Jun 2024 20:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjR-0006mJ-Vw; Fri, 21 Jun 2024 20:19:37 +0000
Received: by outflank-mailman (input) for mailman id 745528;
 Fri, 21 Jun 2024 20:19:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKkjQ-0006Xk-Kl
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:19:36 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 940913f6-300b-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 22:19:36 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-a6ef793f4b8so252384666b.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 13:19:36 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf48b3a6sm116947466b.87.2024.06.21.13.19.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 13:19:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 940913f6-300b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719001175; x=1719605975; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RNg2DzwYEol0Dd/MIahRG8oCTZpkXejfnW6xOn3sPPQ=;
        b=QTNoX5U6rmpJICA0jNsaTU8eA0+ISQhdY8RNztrrt+NdDZKTpQ3AsuZrss63UxYXAy
         q/XVtRUgAq57Ok6wpfL3I7ZLk98ZSCuWMxToMFk+/6KMcFO3SfPvLgVeV+oxGrliuF10
         aij2VZyjFLkbh27aylC9RW1IEiEhhcvwOepiM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719001175; x=1719605975;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=RNg2DzwYEol0Dd/MIahRG8oCTZpkXejfnW6xOn3sPPQ=;
        b=kZLjqjeMilmo5YofW8g8aDXj51yXgJCLk5B1359fBBD3Q3urp7+u+reVQrKg928SoB
         aP5zIBVY/7OoUniL5qeF2A3Hq1ogV0042R7Nrz3QN1DUzjNzguN/3+6zGZRvRkvEPs1l
         h6UzdQ7b/6weCzHOinKW2MDGvwaSlJTR+kcNB+CsgLRJwbLX6y34NbRcvxALxoEc5gqH
         J3CK5rck203zN6x3TKPupJcA5EBog2xXwmf+L0wFTdH2lKuCrFecsT4FlJjNCVnHMq+G
         dbcUQZyf339vC9q/yP/SkT+vzXXVegCal5uUGkKChQZqRbdXH8Hc1Eq7aA/w0jSAE9s8
         efxQ==
X-Gm-Message-State: AOJu0YynFIHRzBxLEhZoMeD7s/EZMPUK0hSkLcghngVuoTNWBlG0SZ8p
	LvJhz+tzuOiNH9lg6uEMYeOQyjBqBH22MVuE+t0GpkjHUxcQwksQB80d1ZIhT5FK2U2CEO809y1
	v38E=
X-Google-Smtp-Source: AGHT+IGHl4OM1ru7y/HT5MSJYwiduCg/9TtIS6T/sCUpcaB58OFHVIFvYyPaAgoCqMBV1HBaeivDLA==
X-Received: by 2002:a17:907:910d:b0:a6f:dd94:c53d with SMTP id a640c23a62f3a-a6fdd94c91amr21152666b.75.1719001174723;
        Fri, 21 Jun 2024 13:19:34 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 1/3] xen/riscv: Drop legacy __ro_after_init definition
Date: Fri, 21 Jun 2024 21:19:26 +0100
Message-Id: <20240621201928.319293-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621201928.319293-1-andrew.cooper3@citrix.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hide the legacy __ro_after_init definition in xen/cache.h for RISC-V, to avoid
its use creeping in.  Only mm.c needs adjusting as a consequence.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Shawn Anastasio <sanastasio@raptorengineering.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: George Dunlap <George.Dunlap@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>

https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342686294
---
 xen/arch/riscv/mm.c     | 2 +-
 xen/include/xen/cache.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 053f043a3d2a..3ebaf6da01cc 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 
-#include <xen/cache.h>
 #include <xen/compiler.h>
 #include <xen/init.h>
 #include <xen/kernel.h>
 #include <xen/macros.h>
 #include <xen/pfn.h>
+#include <xen/sections.h>
 
 #include <asm/early_printk.h>
 #include <asm/csr.h>
diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
index 55456823c543..82a3ba38e3e7 100644
--- a/xen/include/xen/cache.h
+++ b/xen/include/xen/cache.h
@@ -15,7 +15,9 @@
 #define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
 #endif
 
+#if defined(CONFIG_ARM) || defined(CONFIG_X86) || defined(CONFIG_PPC64)
 /* TODO: Phase out the use of this via cache.h */
 #define __ro_after_init __section(".data.ro_after_init")
+#endif
 
 #endif /* __LINUX_CACHE_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:19:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:19:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745530.1152681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjW-0007HB-Et; Fri, 21 Jun 2024 20:19:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745530.1152681; Fri, 21 Jun 2024 20:19:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKkjW-0007H4-C7; Fri, 21 Jun 2024 20:19:42 +0000
Received: by outflank-mailman (input) for mailman id 745530;
 Fri, 21 Jun 2024 20:19:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKkjU-0006yu-Qz
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:19:40 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 95c6e334-300b-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 22:19:39 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id
 a640c23a62f3a-a63359aaaa6so345706866b.2
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 13:19:38 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf48b3a6sm116947466b.87.2024.06.21.13.19.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 13:19:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95c6e334-300b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719001177; x=1719605977; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wDT9rwgL0Bk0ve6LGO8WCMJ5rW7pfBPKsUYdXcImh4o=;
        b=osXFIGds3WPQ4CBpEagIPWBEEOtZzlzzs1P1MyahsllV/usyCw89LK9JEKfarPWy6+
         JXO0Amp52hc0KjL/f+S4C1rcBnEk5KYVUsKIlUZvw1yiFfcbv5PFfhDnXOS8XnoMrZpz
         SGS0lxoFsgGobphbdOZd7kveMASEMq5hV+Ayo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719001177; x=1719605977;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=wDT9rwgL0Bk0ve6LGO8WCMJ5rW7pfBPKsUYdXcImh4o=;
        b=dRMPS2nokeihDF7/FCq7g67tWN5hsu0Ixmu14Uoylx6sr8d8IkeYaY7hujmDSgZGq6
         6CeCN0NhmgzJkCjNEzO9B4+7l8pS4gSsgm3z4ver3KVLqS0ssZgWmpA/NBpvJ4xYw2r5
         8M+VF+neauHi4bOke+5+H0PAbLdBW+MZARAgC+yVQSkqlTzHhg5Lor5G/XqHjSSBzbor
         E74utA/xff9rKh2a1xCXl72w9b64RGY58PgQ93TaEA3vB5FIBbUjL1Rq/aQy9cIndFB0
         teQSyAyhHefvSekmKh5mLSesE6n8ie2+B2EqmtAoQnZzVIrVNnV2RBQFKVkPRXfAYVOy
         AT4w==
X-Gm-Message-State: AOJu0YwvDELflXoFV2eVcWOogN/vuP+KzpEWqHs2DqF0PsuAqsP2vfH+
	Gxk5LF6QWCR6IvN/0hGxUPbh7SHJlmZBY2aRKRXZRjmdWZ1fLZ0IB2qNxU96sisL/U0GZ6r2uwe
	j6zI=
X-Google-Smtp-Source: AGHT+IGpU1hLDKtxt0s0NY1c8UQ3fpd/SJaLRV0yiCzxO3HBX5V60JLK+HU+eSl5pXo+FIj8+o1uDw==
X-Received: by 2002:a17:906:ce34:b0:a6e:f7b5:3189 with SMTP id a640c23a62f3a-a6fab7d7d0dmr538523066b.76.1719001177453;
        Fri, 21 Jun 2024 13:19:37 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>
Subject: [PATCH 3/3] xen/ppc: Avoid using the legacy __read_mostly/__ro_after_init definitions
Date: Fri, 21 Jun 2024 21:19:28 +0100
Message-Id: <20240621201928.319293-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621201928.319293-1-andrew.cooper3@citrix.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

RISC-V wants to introduce a full build of Xen without using the legacy
definitions.  PPC64 has the most minimal full build of Xen right now, so make
it compile without the legacy definitions.

Mostly this is just including xen/sections.h in a variety of common files.  In
a couple of cases, we can drop an inclusion of {xen,asm}/cache.h, but almost
all files get the definitions transitively.

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Shawn Anastasio <sanastasio@raptorengineering.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: George Dunlap <George.Dunlap@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>

https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342714126
---
 xen/arch/ppc/include/asm/cache.h | 3 ---
 xen/arch/ppc/mm-radix.c          | 1 +
 xen/arch/ppc/stubs.c             | 1 +
 xen/common/argo.c                | 1 +
 xen/common/cpu.c                 | 1 +
 xen/common/debugtrace.c          | 1 +
 xen/common/domain.c              | 1 +
 xen/common/event_channel.c       | 2 ++
 xen/common/keyhandler.c          | 1 +
 xen/common/memory.c              | 1 +
 xen/common/page_alloc.c          | 1 +
 xen/common/pdx.c                 | 1 +
 xen/common/radix-tree.c          | 1 +
 xen/common/random.c              | 2 +-
 xen/common/rcupdate.c            | 1 +
 xen/common/sched/core.c          | 1 +
 xen/common/sched/cpupool.c       | 1 +
 xen/common/sched/credit.c        | 1 +
 xen/common/sched/credit2.c       | 1 +
 xen/common/shutdown.c            | 1 +
 xen/common/spinlock.c            | 1 +
 xen/common/timer.c               | 1 +
 xen/common/version.c             | 3 +--
 xen/common/virtual_region.c      | 1 +
 xen/common/vmap.c                | 2 +-
 xen/drivers/char/console.c       | 1 +
 xen/drivers/char/ns16550.c       | 1 +
 xen/drivers/char/serial.c        | 2 +-
 xen/include/xen/cache.h          | 2 +-
 xen/include/xen/hypfs.h          | 1 +
 30 files changed, 30 insertions(+), 9 deletions(-)

diff --git a/xen/arch/ppc/include/asm/cache.h b/xen/arch/ppc/include/asm/cache.h
index 13c0bf3242c8..8a0a6b7b1756 100644
--- a/xen/arch/ppc/include/asm/cache.h
+++ b/xen/arch/ppc/include/asm/cache.h
@@ -3,7 +3,4 @@
 #ifndef _ASM_PPC_CACHE_H
 #define _ASM_PPC_CACHE_H
 
-/* TODO: Phase out the use of this via cache.h */
-#define __read_mostly __section(".data.read_mostly")
-
 #endif /* _ASM_PPC_CACHE_H */
diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c
index ab5a10695c5f..0a47959e64f2 100644
--- a/xen/arch/ppc/mm-radix.c
+++ b/xen/arch/ppc/mm-radix.c
@@ -2,6 +2,7 @@
 #include <xen/init.h>
 #include <xen/kernel.h>
 #include <xen/mm.h>
+#include <xen/sections.h>
 #include <xen/types.h>
 #include <xen/lib.h>
 
diff --git a/xen/arch/ppc/stubs.c b/xen/arch/ppc/stubs.c
index a10691165b1b..0e7a26dadbc1 100644
--- a/xen/arch/ppc/stubs.c
+++ b/xen/arch/ppc/stubs.c
@@ -3,6 +3,7 @@
 #include <xen/domain.h>
 #include <xen/irq.h>
 #include <xen/nodemask.h>
+#include <xen/sections.h>
 #include <xen/time.h>
 #include <public/domctl.h>
 #include <public/vm_event.h>
diff --git a/xen/common/argo.c b/xen/common/argo.c
index 901f41eb2dbe..df19006744a3 100644
--- a/xen/common/argo.c
+++ b/xen/common/argo.c
@@ -25,6 +25,7 @@
 #include <xen/nospec.h>
 #include <xen/param.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/time.h>
 
 #include <xsm/xsm.h>
diff --git a/xen/common/cpu.c b/xen/common/cpu.c
index 6e35b114c080..f09af0444b6a 100644
--- a/xen/common/cpu.c
+++ b/xen/common/cpu.c
@@ -3,6 +3,7 @@
 #include <xen/event.h>
 #include <xen/init.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/stop_machine.h>
 #include <xen/rcupdate.h>
 
diff --git a/xen/common/debugtrace.c b/xen/common/debugtrace.c
index a272e5e43761..ca883ad9198d 100644
--- a/xen/common/debugtrace.c
+++ b/xen/common/debugtrace.c
@@ -13,6 +13,7 @@
 #include <xen/mm.h>
 #include <xen/param.h>
 #include <xen/percpu.h>
+#include <xen/sections.h>
 #include <xen/serial.h>
 #include <xen/smp.h>
 #include <xen/spinlock.h>
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 67cadb7c3f4f..3db0e0b793f9 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -11,6 +11,7 @@
 #include <xen/err.h>
 #include <xen/param.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/domain.h>
 #include <xen/mm.h>
 #include <xen/event.h>
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index a67feff98976..822b2c982489 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -26,6 +26,8 @@
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/keyhandler.h>
+#include <xen/sections.h>
+
 #include <asm/current.h>
 
 #include <public/xen.h>
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 127ca506965c..674e7be39e9d 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -6,6 +6,7 @@
 #include <xen/delay.h>
 #include <xen/keyhandler.h>
 #include <xen/param.h>
+#include <xen/sections.h>
 #include <xen/shutdown.h>
 #include <xen/event.h>
 #include <xen/console.h>
diff --git a/xen/common/memory.c b/xen/common/memory.c
index de2cc7ad92a5..a6f2f6d1b348 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -23,6 +23,7 @@
 #include <xen/param.h>
 #include <xen/perfc.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/trace.h>
 #include <xen/types.h>
 #include <asm/current.h>
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 054b7edb3989..33c8c917d984 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -134,6 +134,7 @@
 #include <xen/pfn.h>
 #include <xen/types.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/softirq.h>
 #include <xen/spinlock.h>
 
diff --git a/xen/common/pdx.c b/xen/common/pdx.c
index d3d63b075032..b8384e6189df 100644
--- a/xen/common/pdx.c
+++ b/xen/common/pdx.c
@@ -19,6 +19,7 @@
 #include <xen/mm.h>
 #include <xen/bitops.h>
 #include <xen/nospec.h>
+#include <xen/sections.h>
 
 /**
  * Maximum (non-inclusive) usable pdx. Must be
diff --git a/xen/common/radix-tree.c b/xen/common/radix-tree.c
index adc3034222dc..fb283a9d52fc 100644
--- a/xen/common/radix-tree.c
+++ b/xen/common/radix-tree.c
@@ -21,6 +21,7 @@
 #include <xen/init.h>
 #include <xen/radix-tree.h>
 #include <xen/errno.h>
+#include <xen/sections.h>
 
 struct radix_tree_path {
 	struct radix_tree_node *node;
diff --git a/xen/common/random.c b/xen/common/random.c
index a29f2fcb991a..35a9f387fd5c 100644
--- a/xen/common/random.c
+++ b/xen/common/random.c
@@ -1,6 +1,6 @@
-#include <xen/cache.h>
 #include <xen/init.h>
 #include <xen/percpu.h>
+#include <xen/sections.h>
 #include <xen/random.h>
 #include <xen/time.h>
 #include <asm/random.h>
diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
index 212a99acd8c8..fd5d3d7484a5 100644
--- a/xen/common/rcupdate.c
+++ b/xen/common/rcupdate.c
@@ -35,6 +35,7 @@
 #include <xen/kernel.h>
 #include <xen/init.h>
 #include <xen/param.h>
+#include <xen/sections.h>
 #include <xen/spinlock.h>
 #include <xen/smp.h>
 #include <xen/rcupdate.h>
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d84b65f197b3..1a3ff5ae4dec 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -18,6 +18,7 @@
 #include <xen/lib.h>
 #include <xen/param.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/domain.h>
 #include <xen/delay.h>
 #include <xen/event.h>
diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index ad8f60846273..57dfee26f21f 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -22,6 +22,7 @@
 #include <xen/param.h>
 #include <xen/percpu.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/warning.h>
 
 #include "private.h"
diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 020f44595ed0..a6bb321e7da1 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -12,6 +12,7 @@
 #include <xen/lib.h>
 #include <xen/param.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/domain.h>
 #include <xen/delay.h>
 #include <xen/event.h>
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 685929c2902b..a7da60f40376 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -14,6 +14,7 @@
 #include <xen/lib.h>
 #include <xen/param.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/domain.h>
 #include <xen/delay.h>
 #include <xen/event.h>
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index 5f8141edc6b2..f413f331af17 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -2,6 +2,7 @@
 #include <xen/lib.h>
 #include <xen/param.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/domain.h>
 #include <xen/delay.h>
 #include <xen/watchdog.h>
diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
index 28c6e9d3ac60..0b877384451d 100644
--- a/xen/common/spinlock.c
+++ b/xen/common/spinlock.c
@@ -5,6 +5,7 @@
 #include <xen/param.h>
 #include <xen/smp.h>
 #include <xen/time.h>
+#include <xen/sections.h>
 #include <xen/spinlock.h>
 #include <xen/guest_access.h>
 #include <xen/preempt.h>
diff --git a/xen/common/timer.c b/xen/common/timer.c
index a21798b76f38..da0d069cc674 100644
--- a/xen/common/timer.c
+++ b/xen/common/timer.c
@@ -11,6 +11,7 @@
 #include <xen/sched.h>
 #include <xen/lib.h>
 #include <xen/param.h>
+#include <xen/sections.h>
 #include <xen/smp.h>
 #include <xen/perfc.h>
 #include <xen/time.h>
diff --git a/xen/common/version.c b/xen/common/version.c
index 80869430fc7c..b7d7d515a3dc 100644
--- a/xen/common/version.c
+++ b/xen/common/version.c
@@ -3,14 +3,13 @@
 #include <xen/init.h>
 #include <xen/errno.h>
 #include <xen/lib.h>
+#include <xen/sections.h>
 #include <xen/string.h>
 #include <xen/types.h>
 #include <xen/efi.h>
 #include <xen/elf.h>
 #include <xen/version.h>
 
-#include <asm/cache.h>
-
 const char *xen_compile_date(void)
 {
     return XEN_COMPILE_DATE;
diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
index 52405d84b25c..1dc2e9f592ed 100644
--- a/xen/common/virtual_region.c
+++ b/xen/common/virtual_region.c
@@ -6,6 +6,7 @@
 #include <xen/kernel.h>
 #include <xen/mm.h>
 #include <xen/rcupdate.h>
+#include <xen/sections.h>
 #include <xen/spinlock.h>
 #include <xen/virtual_region.h>
 
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index 966a7e763f0f..b3b4ddf65311 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -1,6 +1,6 @@
 #ifdef VMAP_VIRT_START
 #include <xen/bitmap.h>
-#include <xen/cache.h>
+#include <xen/sections.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/pfn.h>
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 3a3a97bcbe3a..7da8c5296f3b 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -32,6 +32,7 @@
 #include <xen/warning.h>
 #include <xen/pv_console.h>
 #include <asm/setup.h>
+#include <xen/sections.h>
 
 #ifdef CONFIG_X86
 #include <xen/consoled.h>
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 8f76bbe676bc..eaeb0e09d01e 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -22,6 +22,7 @@
 #include <xen/irq.h>
 #include <xen/param.h>
 #include <xen/sched.h>
+#include <xen/sections.h>
 #include <xen/timer.h>
 #include <xen/serial.h>
 #include <xen/iocap.h>
diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
index f28d8557c0a5..591a00900869 100644
--- a/xen/drivers/char/serial.c
+++ b/xen/drivers/char/serial.c
@@ -10,8 +10,8 @@
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/param.h>
+#include <xen/sections.h>
 #include <xen/serial.h>
-#include <xen/cache.h>
 
 #include <asm/processor.h>
 
diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
index 82a3ba38e3e7..a19942fd63ef 100644
--- a/xen/include/xen/cache.h
+++ b/xen/include/xen/cache.h
@@ -15,7 +15,7 @@
 #define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
 #endif
 
-#if defined(CONFIG_ARM) || defined(CONFIG_X86) || defined(CONFIG_PPC64)
+#if defined(CONFIG_ARM) || defined(CONFIG_X86)
 /* TODO: Phase out the use of this via cache.h */
 #define __ro_after_init __section(".data.ro_after_init")
 #endif
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 1b65a9188c6c..d8fcac23b46b 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -4,6 +4,7 @@
 #ifdef CONFIG_HYPFS
 #include <xen/lib.h>
 #include <xen/list.h>
+#include <xen/sections.h>
 #include <xen/string.h>
 #include <public/hypfs.h>
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:58:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745568.1152712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlKi-0006CO-TY; Fri, 21 Jun 2024 20:58:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745568.1152712; Fri, 21 Jun 2024 20:58:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlKi-0006CH-QV; Fri, 21 Jun 2024 20:58:08 +0000
Received: by outflank-mailman (input) for mailman id 745568;
 Fri, 21 Jun 2024 20:58:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKlKh-0005sX-4d
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:58:07 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f4e44aba-3010-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 22:58:05 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id
 4fb4d7f45d1cf-57d1012e52fso2824908a12.3
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 13:58:05 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d3042d509sm1440011a12.43.2024.06.21.13.58.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 13:58:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4e44aba-3010-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719003485; x=1719608285; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=I2mJFKp0tzriHW0QjBAnMPOnFKVP/CBMSZnxZwykoKk=;
        b=uDbTKnDaxM9u7N8MNzOKHnnK5RT3+WGPicwdkXstY3AdUr1N9+Q2lq6xeDYf14V+OW
         3aJIRJ3pf9TQUdaZPpVFBRqFTvoymOS5vmuov9i1xOlVsoMXbGe/OkmPgnT+HrcyfDnL
         w+67mHRFkLok+iZUNfyxjSX9LZc2f8mko3Z98=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719003485; x=1719608285;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=I2mJFKp0tzriHW0QjBAnMPOnFKVP/CBMSZnxZwykoKk=;
        b=mhkdFbN9NiXYHFsROuz9GdGEE5YWdqpzrx1u1i78mFHtyCsJ+deFcB3fwI8WIdD7uH
         72q+61Y/9+WeI2O6wUVG9N51nm3+REICC8NHfhRuK6m0c2deOqQ4RK+K20mW0GWuZJbr
         iczMYtJQZnf+kXPPeTkoZqg3IOsTjz8vcb3UrHwgXMmRHECTUZnSofB+RRSauIYTRZxU
         hrXgvnhcwpY8rKtPd3WWxqqJB1wLr6XgxFJWoznWy4t+qayLmnOS5aHV/y76Hj299l1E
         7qrvH4OzoUAIRbv9Rds5Tf5Px0InX2Zb8GRy1zSuZ4Kf4CnUhC+KpzeMUf0w3meWHMXK
         xfcQ==
X-Gm-Message-State: AOJu0Yy+YukvQJETRALvRtrqbKOIVDNS6ETVzGoRARTtWFT4XlzehQ+g
	Wfnk6FqL11XdCgjkGJ+OJqPiblb3+bIvKR1ZaC3+Af7i5p6T9oVLoi/8btOkzQkW63+zqHhGf0B
	c8/w=
X-Google-Smtp-Source: AGHT+IFXHPD0wWta+oxEYQcmgEfPdVSHm0Vxn7VWpzd99ufMFqO8UaVmn6TUdSvAMxiUnh9lEus+8w==
X-Received: by 2002:a50:d50a:0:b0:57d:2716:838c with SMTP id 4fb4d7f45d1cf-57d27168510mr3030952a12.37.1719003485207;
        Fri, 21 Jun 2024 13:58:05 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Roberto Bagnara <roberto.bagnara@bugseng.com>,
	"consulting @ bugseng . com" <consulting@bugseng.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH 2/2] xen/multicall: Change nr_calls to uniformly be unsigned long
Date: Fri, 21 Jun 2024 21:58:00 +0100
Message-Id: <20240621205800.329230-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621205800.329230-1-andrew.cooper3@citrix.com>
References: <20240621205800.329230-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Right now, the non-compat declaration and definition of do_multicall() have
differing types for the nr_calls parameter.

This is a MISRA Rule 8.3 violation, but it's also a time-bomb waiting for the
first 128-bit architecture (RISC-V looks as if it might get there first).

Worse, the type chosen here has a side effect of truncating the guest
parameter, because Xen still doesn't have a clean hypercall ABI definition.

Switch uniformly to using unsigned long.

This addresses the MISRA violation, and while it is a guest-visible ABI
change, it's only in the corner case where the guest kernel passed a
bogus-but-correct-when-truncated value.  I can't find any users of
multicall which pass a bad size to begin with, so this should have no
practical effect on guests.

In fact, this brings the behaviour of multicalls more in line with the header
description of how it behaves.

With this fix, Xen is now fully clean to Rule 8.3, so mark it so.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Roberto Bagnara <roberto.bagnara@bugseng.com>
CC: consulting@bugseng.com <consulting@bugseng.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

I know this isn't going to be universally liked, but we need to do something,
and this is my very strong vote for the least bad way out of the current mess.
---
 automation/eclair_analysis/ECLAIR/tagging.ecl | 1 +
 xen/common/multicall.c                        | 4 ++--
 xen/include/hypercall-defs.c                  | 4 ++--
 xen/include/public/xen.h                      | 2 +-
 4 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
index b829655ca0bc..3d06a1aad410 100644
--- a/automation/eclair_analysis/ECLAIR/tagging.ecl
+++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
@@ -45,6 +45,7 @@ MC3R1.R7.2||
 MC3R1.R7.4||
 MC3R1.R8.1||
 MC3R1.R8.2||
+MC3R1.R8.3||
 MC3R1.R8.5||
 MC3R1.R8.6||
 MC3R1.R8.8||
diff --git a/xen/common/multicall.c b/xen/common/multicall.c
index 1f0cc4cb267c..ce394c5efcfe 100644
--- a/xen/common/multicall.c
+++ b/xen/common/multicall.c
@@ -34,11 +34,11 @@ static void trace_multicall_call(multicall_entry_t *call)
 }
 
 ret_t do_multicall(
-    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, uint32_t nr_calls)
+    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned long nr_calls)
 {
     struct vcpu *curr = current;
     struct mc_state *mcs = &curr->mc_state;
-    uint32_t         i;
+    unsigned long    i;
     int              rc = 0;
     enum mc_disposition disp = mc_continue;
 
diff --git a/xen/include/hypercall-defs.c b/xen/include/hypercall-defs.c
index 47c093acc84d..7720a29ade0b 100644
--- a/xen/include/hypercall-defs.c
+++ b/xen/include/hypercall-defs.c
@@ -135,7 +135,7 @@ xenoprof_op(int op, void *arg)
 #ifdef CONFIG_COMPAT
 prefix: compat
 set_timer_op(uint32_t lo, uint32_t hi)
-multicall(multicall_entry_compat_t *call_list, uint32_t nr_calls)
+multicall(multicall_entry_compat_t *call_list, unsigned long nr_calls)
 memory_op(unsigned int cmd, void *arg)
 #ifdef CONFIG_IOREQ_SERVER
 dm_op(domid_t domid, unsigned int nr_bufs, void *bufs)
@@ -172,7 +172,7 @@ console_io(unsigned int cmd, unsigned int count, char *buffer)
 vm_assist(unsigned int cmd, unsigned int type)
 event_channel_op(int cmd, void *arg)
 mmuext_op(mmuext_op_t *uops, unsigned int count, unsigned int *pdone, unsigned int foreigndom)
-multicall(multicall_entry_t *call_list, unsigned int nr_calls)
+multicall(multicall_entry_t *call_list, unsigned long nr_calls)
 #ifdef CONFIG_PV
 mmu_update(mmu_update_t *ureqs, unsigned int count, unsigned int *pdone, unsigned int foreigndom)
 stack_switch(unsigned long ss, unsigned long esp)
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b47d48d0e2d6..e051f989a5ca 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -623,7 +623,7 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
 /*
  * ` enum neg_errnoval
  * ` HYPERVISOR_multicall(multicall_entry_t call_list[],
- * `                      uint32_t nr_calls);
+ * `                      unsigned long nr_calls);
  *
  * NB. The fields are logically the natural register size for this
  * architecture. In cases where xen_ulong_t is larger than this then
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:58:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745566.1152692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlKg-0005jE-BX; Fri, 21 Jun 2024 20:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745566.1152692; Fri, 21 Jun 2024 20:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlKg-0005j7-82; Fri, 21 Jun 2024 20:58:06 +0000
Received: by outflank-mailman (input) for mailman id 745566;
 Fri, 21 Jun 2024 20:58:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKlKf-0005j1-UG
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:58:05 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3ae7ebb-3010-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 22:58:03 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-57d1782679fso2875778a12.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 13:58:03 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d3042d509sm1440011a12.43.2024.06.21.13.58.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 13:58:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3ae7ebb-3010-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719003483; x=1719608283; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=6Lzo3JDUF518si0KUt6/S/krRvZgmKUoFzznxiBo0cw=;
        b=H5WAJJ8I9+CK85Efws9INeRgP8ZByChnk36YYMA5qdF0UR4NN7h4te8hY9soZ/jVar
         xo9oU8XorpvSqaEcAbsUU7RbwEgMjLvtlRX8Jve3tUOVBlhUoAXMoNQDaH94nEt+XUzX
         AlYYJVdE9M9ieg0XwPCRFVQWZNN/02BGBQQkc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719003483; x=1719608283;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=6Lzo3JDUF518si0KUt6/S/krRvZgmKUoFzznxiBo0cw=;
        b=Wew6e48ZeTkd4bglva6qnKHhX5is7UI0maFlfnAc9ngDTL3T2/IOlTajZtcTdv8K0x
         EbCaDyqmrFPibPEuT6pXwmtEm07aNLUSZlvbIh/GtJhtHn+NEQ82LYtdj1YjCP0xzOR+
         V5APGebSr4kd1DmeJsLvvuM6Sy5UgdPpJp+Ft1F+Qppv3TVJYnwYyZ0vIUbTvineU8BP
         tJFZ/AIPpkwiV0T9OIIRg8o1LtiCmibiu4jUgYpOJkIZe30nupaJQAWJnaZEAKCEeK9g
         XmiyalfAwMC69cV9PG12MWbFLh2aPP9TvOoYX6w7ebD4Kp0h7nAp1x5OB8/T8a7eUhr/
         0h8w==
X-Gm-Message-State: AOJu0YyrwiMZxtDDbrgkvKrqaS1VSnpRit1Y3DPSQj1LhUBrabZzuvN2
	q25Rzq4m41QwZxcCFy0i33ZeyA/vJCZvkpqAdWWp1rXM+/PVRqMQNG7g/KkSHEjVe8HSd5Hbhmi
	Samk=
X-Google-Smtp-Source: AGHT+IEqP6gWYL+pJI8eWgZrb14yjcaQKCCUMGwvzUu4l5FTKxFmFCmDwrUToqzZHEl19UuNmy3XZw==
X-Received: by 2002:a50:d707:0:b0:578:56e1:7ba3 with SMTP id 4fb4d7f45d1cf-57d07ebf183mr6586536a12.38.1719003482733;
        Fri, 21 Jun 2024 13:58:02 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Roberto Bagnara <roberto.bagnara@bugseng.com>,
	"consulting @ bugseng . com" <consulting@bugseng.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 0/2] Xen: Final MISRA R8.3 fixes
Date: Fri, 21 Jun 2024 21:57:58 +0100
Message-Id: <20240621205800.329230-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

This gets Xen clean to R8.3 and marks it as blocking in Gitlab.

https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342755199

Andrew Cooper (2):
  x86/pagewalk: Address MISRA R8.3 violation in guest_walk_tables()
  xen/multicall: Change nr_calls to uniformly be unsigned long

 automation/eclair_analysis/ECLAIR/tagging.ecl | 1 +
 xen/arch/x86/include/asm/guest_pt.h           | 2 +-
 xen/common/multicall.c                        | 4 ++--
 xen/include/hypercall-defs.c                  | 4 ++--
 xen/include/public/xen.h                      | 2 +-
 5 files changed, 7 insertions(+), 6 deletions(-)


base-commit: 9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 20:58:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 20:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745567.1152701 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlKh-0005xn-MB; Fri, 21 Jun 2024 20:58:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745567.1152701; Fri, 21 Jun 2024 20:58:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlKh-0005xg-JV; Fri, 21 Jun 2024 20:58:07 +0000
Received: by outflank-mailman (input) for mailman id 745567;
 Fri, 21 Jun 2024 20:58:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FH9a=NX=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKlKg-0005j1-Kg
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 20:58:06 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f465af9f-3010-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 22:58:05 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57cbc66a0a6so881006a12.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 Jun 2024 13:58:05 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d3042d509sm1440011a12.43.2024.06.21.13.58.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 Jun 2024 13:58:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f465af9f-3010-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719003484; x=1719608284; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bdyjt3BIACM8vTSuTSumyihaZrwR5nCq3lnnqbQTCh8=;
        b=vCXU0+Aj8Gi0FtsVdCZBEFM9KaYOmZ3INS1y/xw1aL+TbhhbRn+HpQ6padg6BnOzRa
         ECgw3J9/iWjCUTptnIo36x+uSpVEGXtnEXGWqXfEoXdln0YrBNd8N5Nb4ynNzVBev2ay
         8IDBeC4jl7/Ce9OWEe6wmoCQbWq/TSEfhlaZU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719003484; x=1719608284;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=bdyjt3BIACM8vTSuTSumyihaZrwR5nCq3lnnqbQTCh8=;
        b=JZrSblH1OC+Sn6Xq7U53rTpY2x3ckALwQ0UmvzmDI1jLYZVnf7zKo/DIcWikC4uSoz
         dIFCIrEZPcF7LeMAGlGBamHcgMano3jaGi/FoFisyaYB60/6v0vuwVwMwRbvwERaBa6i
         jpwajJZQxM9HVoMJrV9BMrg+d/CHrfTpJ03dUp1rlolnf8AFlriwJP4jzsPDiwlknNUV
         Xfg9viubNxydMc/yqBSr3kbUjdCrai2OQQzHAm0AXIm44T5EU77QhiBEdTHUetqDiVas
         QibqleVDMYRs6IFV0qYbL9VT1zXLlnChcXusOgoLCKY+2YNPyOwA8IBn75Q0S2J/vfGs
         wDag==
X-Gm-Message-State: AOJu0Yy7aNJZ1SShm8oABvEroOCp4blksKfOgXgRs2cSHWTjQwDnCfz7
	xX2eQxtvjxhMLUuM1srIIDNCMhithtXMXyPRXo/XrKlM+vuwI5wrry/IlzgiW9Px/gQ3iYJN7SV
	oTwU=
X-Google-Smtp-Source: AGHT+IGsGEoeFtjCEZAmgNF0aYxzTDgFROuk8KFeQVBo+qKImCPjmGqgL9fBnC3cLsT564i1CILIVw==
X-Received: by 2002:aa7:db51:0:b0:57d:3df:ba2d with SMTP id 4fb4d7f45d1cf-57d3dff2a40mr540834a12.2.1719003483976;
        Fri, 21 Jun 2024 13:58:03 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Roberto Bagnara <roberto.bagnara@bugseng.com>,
	"consulting @ bugseng . com" <consulting@bugseng.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH 1/2] x86/pagewalk: Address MISRA R8.3 violation in guest_walk_tables()
Date: Fri, 21 Jun 2024 21:57:59 +0100
Message-Id: <20240621205800.329230-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240621205800.329230-1-andrew.cooper3@citrix.com>
References: <20240621205800.329230-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Commit 4c5d78a10dc8 ("x86/pagewalk: Re-implement the pagetable walker")
intentionally renamed guest_walk_tables()'s 'pfec' parameter to 'walk' because
it's not a PageFault Error Code, despite the name of some of the constants
passed in.  Sadly the constants-cleanup I've been meaning to do since then
still hasn't come to pass.

Update the declaration to match, to placate MISRA.

Fixes: 4c5d78a10dc8 ("x86/pagewalk: Re-implement the pagetable walker")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Roberto Bagnara <roberto.bagnara@bugseng.com>
CC: consulting@bugseng.com <consulting@bugseng.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/x86/include/asm/guest_pt.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/include/asm/guest_pt.h b/xen/arch/x86/include/asm/guest_pt.h
index bc312343cdf1..7b0c9b005c1f 100644
--- a/xen/arch/x86/include/asm/guest_pt.h
+++ b/xen/arch/x86/include/asm/guest_pt.h
@@ -422,7 +422,7 @@ static inline unsigned int guest_walk_to_page_order(const walk_t *gw)
 
 bool
 guest_walk_tables(const struct vcpu *v, struct p2m_domain *p2m,
-                  unsigned long va, walk_t *gw, uint32_t pfec,
+                  unsigned long va, walk_t *gw, uint32_t walk,
                   gfn_t top_gfn, mfn_t top_mfn, void *top_map);
 
 /* Pretty-print the contents of a guest-walk */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 21:04:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 21:04:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745588.1152722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlQe-0000Ab-If; Fri, 21 Jun 2024 21:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745588.1152722; Fri, 21 Jun 2024 21:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlQe-0000AU-FW; Fri, 21 Jun 2024 21:04:16 +0000
Received: by outflank-mailman (input) for mailman id 745588;
 Fri, 21 Jun 2024 21:04:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/NIW=NX=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKlQd-0000AO-9w
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 21:04:15 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf808892-3011-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 23:04:12 +0200 (CEST)
Received: from [192.168.1.20] (host-87-12-240-97.business.telecomitalia.it
 [87.12.240.97])
 by support.bugseng.com (Postfix) with ESMTPSA id EF2684EE0738;
 Fri, 21 Jun 2024 23:04:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf808892-3011-11ef-b4bb-af5377834399
Message-ID: <fd5e03bd-2a56-45a9-8511-496de24569e9@bugseng.com>
Date: Fri, 21 Jun 2024 23:04:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] common/sched: address a violation of MISRA C Rule
 8.8
To: victorm.lira@amd.com, xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, George Dunlap <george.dunlap@cloud.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli
 <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
References: <994b423128711b2a912401ff4cb13107ad5c6a9d.1718999221.git.victorm.lira@amd.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <994b423128711b2a912401ff4cb13107ad5c6a9d.1718999221.git.victorm.lira@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 21/06/24 21:59, victorm.lira@amd.com wrote:
> From: Victor Lira <victorm.lira@amd.com>
> 
> Rule 8.8: "The static storage class specifier shall be used in all
> declarations of objects and functions that have internal linkage"

What you are addressing with this patch seems to be a violation of
Rule 8.7: "Functions and objects should not be defined with external
linkage if they are referenced in only one translation unit".

> 
> This patch fixes this by adding the static specifier.
> No functional changes.
> 
> Reported-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
> Signed-off-by: Victor Lira <victorm.lira@amd.com>
> Acked-by: George Dunlap <george.dunlap@cloud.com>
> ---
> Changes from v1:
> - adjust indentation and line width.
> ---
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Dario Faggioli <dfaggioli@suse.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: xen-devel@lists.xenproject.org
> ---
>   xen/common/sched/credit2.c | 8 ++++----
>   1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
> index 685929c290..b4e03e2a63 100644
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -1476,8 +1476,8 @@ static inline void runq_remove(struct csched2_unit *svc)
>       list_del_init(&svc->runq_elem);
>   }
>   
> -void burn_credits(struct csched2_runqueue_data *rqd, struct csched2_unit *svc,
> -                  s_time_t now);
> +static void burn_credits(struct csched2_runqueue_data *rqd,
> +                         struct csched2_unit *svc, s_time_t now);
>   
>   static inline void
>   tickle_cpu(unsigned int cpu, struct csched2_runqueue_data *rqd)
> @@ -1855,8 +1855,8 @@ static void reset_credit(int cpu, s_time_t now, struct csched2_unit *snext)
>       /* No need to resort runqueue, as everyone's order should be the same. */
>   }
>   
> -void burn_credits(struct csched2_runqueue_data *rqd,
> -                  struct csched2_unit *svc, s_time_t now)
> +static void burn_credits(struct csched2_runqueue_data *rqd,
> +                         struct csched2_unit *svc, s_time_t now)
>   {
>       s_time_t delta;
>   

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 21:32:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 21:32:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745599.1152731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlrY-0005CO-K0; Fri, 21 Jun 2024 21:32:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745599.1152731; Fri, 21 Jun 2024 21:32:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlrY-0005CH-HY; Fri, 21 Jun 2024 21:32:04 +0000
Received: by outflank-mailman (input) for mailman id 745599;
 Fri, 21 Jun 2024 21:32:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKlrW-0005CB-Sv
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 21:32:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b097f025-3015-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 23:31:59 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 335FD62F9F;
 Fri, 21 Jun 2024 21:31:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AE4E5C2BBFC;
 Fri, 21 Jun 2024 21:31:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b097f025-3015-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719005517;
	bh=2b3yQ/OXCPBmU4nkW1SyGQ0JPBACNpJz2Dc/ulbusNo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rltagSZ3JiFAhmjhxrMo2/zi9X0yuEx5aOyzFkY9dYyrO4/oXHKfJM2CwbQ7nlCiq
	 oLnJ0kUkd334ElyhrBt2BzkGKGuaADFJTyiW1/6KmCMXYvSSEgF7MKplJm08JXl491
	 v6Jysi1JFCc01tM7q6mV0YhocUNRaYlIG6ygZirZFSEZ8LL/lp70PKQq99T4UN7LNE
	 I8PJKEO0MYAaL1ycNm8ZQrKH8++jpxwDieQNnJVXpNRXuOY11YMd0XNa3tvCm66uKM
	 VMFs779lNcTYX9kWTCDyneZpsoM2NnbNf8RL2sQkDHW/O1SgUWQD69UmPIerexIQdX
	 HiwPMZQP11XYA==
Date: Fri, 21 Jun 2024 14:31:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
    oleksii.kurochko@gmail.com
Subject: Re: [PATCH v2] common/unlzo: address violation of MISRA C Rule 7.3
In-Reply-To: <847f9b715b3c8e2ba0637fdd79111f4f828389c6.1718976211.git.alessandro.zucchelli@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406211431210.2572888@ubuntu-linux-20-04-desktop>
References: <847f9b715b3c8e2ba0637fdd79111f4f828389c6.1718976211.git.alessandro.zucchelli@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 21 Jun 2024, Alessandro Zucchelli wrote:
> This addresses violations of MISRA C:2012 Rule 7.3, which states the
> following: the lowercase character 'l' shall not be used in a literal
> suffix.
> 
> The file common/unlzo.c defines the non-compliant constant LZO_BLOCK_SIZE
> using a lowercase 'l'.
> It is now defined as '256*1024L'.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Asking for a release ack for this trivial change.


> ---
> Changes from v1:
> Instead of deviating the /common/unlzo.c reports for Rule 7.3, they are
> addressed by changing the non-compliant definition of LZO_BLOCK_SIZE.
> ---
>  xen/common/unlzo.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/common/unlzo.c b/xen/common/unlzo.c
> index bdcefa95b3..acb8dff600 100644
> --- a/xen/common/unlzo.c
> +++ b/xen/common/unlzo.c
> @@ -52,7 +52,7 @@ static inline u32 get_unaligned_be32(const void *p)
>  static const unsigned char lzop_magic[] = {
>  	0x89, 0x4c, 0x5a, 0x4f, 0x00, 0x0d, 0x0a, 0x1a, 0x0a };
>  
> -#define LZO_BLOCK_SIZE        (256*1024l)
> +#define LZO_BLOCK_SIZE        (256*1024L)
>  #define HEADER_HAS_FILTER      0x00000800L
>  #define HEADER_SIZE_MIN       (9 + 7     + 4 + 8     + 1       + 4)
>  #define HEADER_SIZE_MAX       (9 + 7 + 1 + 8 + 8 + 4 + 1 + 255 + 4)
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 21:33:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 21:33:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745605.1152742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlt4-0005jX-UO; Fri, 21 Jun 2024 21:33:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745605.1152742; Fri, 21 Jun 2024 21:33:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlt4-0005jQ-RL; Fri, 21 Jun 2024 21:33:38 +0000
Received: by outflank-mailman (input) for mailman id 745605;
 Fri, 21 Jun 2024 21:33:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKlt4-0005jK-5b
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 21:33:38 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea61e56d-3015-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 23:33:37 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 5F20B62FA1;
 Fri, 21 Jun 2024 21:33:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 35869C2BBFC;
 Fri, 21 Jun 2024 21:33:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea61e56d-3015-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719005615;
	bh=l48y11LNjI5XuqxT5ZaEsrDv47R4Kr4/Whk+1TSosxU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GBt9tNICFCug0CfC4IfhkbegYSyVNKwjilJaH7WukO2osnm8N7RoP5HFhoKGQnvVD
	 kjPpNX+X/obq/cARz/FUsSFEvcOdv6qYu2RMplq1oKtgcSSR4poyXN4yihFuHzHAdh
	 +C5eYsb337g2DwZaFt6SfyP//msRKPB3fsKMmYSOS/Vvi+ihNHQOSSuzryjWeWuw3K
	 vpdEhwI377u1XQcCJCN/37jd8xMxTsEPHHxX1zJljsU5xZfdIhXnSdsTRTFtYDQLdx
	 URFd/kVrvbb/ypAVVLUghaJWcDyvj2ne0aFhFzJ9XVtztwqAT3h7gjonwXaQTarmLX
	 iaR4kA9/ma4Jw==
Date: Fri, 21 Jun 2024 14:33:32 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, oleksii.kurochko@gmail.com
Subject: Re: [XEN PATCH] automation/eclair: add more guidelines to the
 monitored set
In-Reply-To: <f03398504405689413521de1675a33e50cdbc30b.1718983858.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406211432140.2572888@ubuntu-linux-20-04-desktop>
References: <f03398504405689413521de1675a33e50cdbc30b.1718983858.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 21 Jun 2024, Federico Serafini wrote:
> Add more accepted guidelines to the monitored set to check them at each
> commit.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Asking for a release ack: this allows us to see more violations in the
regular ECLAIR scanning results. They are not blocking, so they won't
cause any new failures in the pipeline; it is purely informative.


> ---
>  automation/eclair_analysis/ECLAIR/monitored.ecl | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/monitored.ecl b/automation/eclair_analysis/ECLAIR/monitored.ecl
> index 4daecb0c83..9ffaebbdc3 100644
> --- a/automation/eclair_analysis/ECLAIR/monitored.ecl
> +++ b/automation/eclair_analysis/ECLAIR/monitored.ecl
> @@ -18,10 +18,13 @@
>  -enable=MC3R1.R12.5
>  -enable=MC3R1.R1.3
>  -enable=MC3R1.R13.6
> +-enable=MC3R1.R13.1
>  -enable=MC3R1.R1.4
>  -enable=MC3R1.R14.1
>  -enable=MC3R1.R14.4
>  -enable=MC3R1.R16.2
> +-enable=MC3R1.R16.3
> +-enable=MC3R1.R16.4
>  -enable=MC3R1.R16.6
>  -enable=MC3R1.R16.7
>  -enable=MC3R1.R17.1
> @@ -34,6 +37,7 @@
>  -enable=MC3R1.R20.13
>  -enable=MC3R1.R20.14
>  -enable=MC3R1.R20.4
> +-enable=MC3R1.R20.7
>  -enable=MC3R1.R20.9
>  -enable=MC3R1.R2.1
>  -enable=MC3R1.R21.10
> @@ -58,6 +62,7 @@
>  -enable=MC3R1.R5.2
>  -enable=MC3R1.R5.3
>  -enable=MC3R1.R5.4
> +-enable=MC3R1.R5.5
>  -enable=MC3R1.R5.6
>  -enable=MC3R1.R6.1
>  -enable=MC3R1.R6.2
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 21:39:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 21:39:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745611.1152752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlyK-0006Pe-Fz; Fri, 21 Jun 2024 21:39:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745611.1152752; Fri, 21 Jun 2024 21:39:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKlyK-0006PX-D4; Fri, 21 Jun 2024 21:39:04 +0000
Received: by outflank-mailman (input) for mailman id 745611;
 Fri, 21 Jun 2024 21:39:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKlyJ-0006PR-5s
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 21:39:03 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ab412570-3016-11ef-90a3-e314d9c70b13;
 Fri, 21 Jun 2024 23:39:01 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 73876CE3CFB;
 Fri, 21 Jun 2024 21:38:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BDC45C2BBFC;
 Fri, 21 Jun 2024 21:38:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab412570-3016-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719005933;
	bh=xvpIthr1f2UFM6VmAjX/dj4nayX6AFVwlpZpoyBW1+0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GI2J4Wz8mto6Gbi++JaAcFiuZqpT1r+ssq2kDgdeDtCdlM1AAMFk24QU+gduI4cJS
	 WIpMdxb3AU9GwbASwuGpAa6FQy8zfJnhHo1QrzjMbZQyV2o/VUW/0WWqPYd18fQU6j
	 L/l9BS2wIN5ZrJRpBCAwg+v+Fwf2h69uXOIyOhpX4jnR8hlI8GMeD2rjt6sapSakLm
	 c4GuoWZ6+9l7WAws0jemfh1Le6DX6IMIthuBsNUnOfB4T9ryyAsEBtDHFNEePVPvyR
	 4BlgecaXYoxsFpiCCOkLaIHI/reNbiXkjCi007619sgX406TBAi7Ew+Zvh1Jvz6YSI
	 9VFpSJ0sy/5Ag==
Date: Fri, 21 Jun 2024 14:38:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    Jan Beulich <JBeulich@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Roberto Bagnara <roberto.bagnara@bugseng.com>, 
    "consulting @ bugseng . com" <consulting@bugseng.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 1/2] x86/pagewalk: Address MISRA R8.3 violation in
 guest_walk_tables()
In-Reply-To: <20240621205800.329230-2-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2406211438440.2572888@ubuntu-linux-20-04-desktop>
References: <20240621205800.329230-1-andrew.cooper3@citrix.com> <20240621205800.329230-2-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 21 Jun 2024, Andrew Cooper wrote:
> Commit 4c5d78a10dc8 ("x86/pagewalk: Re-implement the pagetable walker")
> intentionally renamed guest_walk_tables()'s 'pfec' parameter to 'walk' because
> it's not a PageFault Error Code, despite the name of some of the constants
> passed in.  Sadly the constants-cleanup I've been meaning to do since then
> still hasn't come to pass.
> 
> Update the declaration to match, to placate MISRA.
> 
> Fixes: 4c5d78a10dc8 ("x86/pagewalk: Re-implement the pagetable walker")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Fri Jun 21 21:41:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 21:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745621.1152762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKm0v-0008LQ-VW; Fri, 21 Jun 2024 21:41:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745621.1152762; Fri, 21 Jun 2024 21:41:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKm0v-0008LJ-Se; Fri, 21 Jun 2024 21:41:45 +0000
Received: by outflank-mailman (input) for mailman id 745621;
 Fri, 21 Jun 2024 21:41:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=puzB=NX=invisiblethingslab.com=demi@srs-se1.protection.inumbo.net>)
 id 1sKm0u-0008I0-4N
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 21:41:44 +0000
Received: from fhigh8-smtp.messagingengine.com
 (fhigh8-smtp.messagingengine.com [103.168.172.159])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b5f353c-3017-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 23:41:41 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id 1337B1140155;
 Fri, 21 Jun 2024 17:41:40 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute6.internal (MEProxy); Fri, 21 Jun 2024 17:41:40 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Fri,
 21 Jun 2024 17:41:39 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b5f353c-3017-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:in-reply-to:message-id:mime-version
	:references:reply-to:subject:subject:to:to; s=fm1; t=1719006100;
	 x=1719092500; bh=rmhRodD+faCYOHLVSgPBTODc2fv+/MTedd6HdinNApk=; b=
	S1thRR+vlTz6/nAMQsFKln/eKpbZ78H02w3ePnR/sczSKgCC6SFxFFsdsJcDFovV
	MN9ug4/n8PNhQvvS2220eUxEakkZM1cDlyzJXQN3Z84J0tOqeDyY+pMOqC0VKPU5
	0Z4B+Y/tvA0E+XAx3W2o1D5ubNdeyxvFaUCCFNoP3X2VIkmE891gA7esT4xgkiFz
	dtNNffedSlw08hy/aoj3JkaLytO3f4WXrZzP9xWtwlu6wVVFNfAPQm5+vLW05GbQ
	Bkc/2D9PmkXy2O7csLyPKVBfC5zi5Ypjni27wktPFTV7112in/BpC4smoS1zaCj/
	KmjVUFZm303MsIqNaoXwOQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:in-reply-to
	:message-id:mime-version:references:reply-to:subject:subject:to
	:to:x-me-proxy:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=
	fm2; t=1719006100; x=1719092500; bh=rmhRodD+faCYOHLVSgPBTODc2fv+
	/MTedd6HdinNApk=; b=PSzTmV7wdMKskz2OVouQcou9R76FFiCsDhSitGBU3uSQ
	pAAE1pvcgVV2sY1O3/sE6cugdQimtDoFdxqp2vY/xSbCAibvHoLwy/PQbLDCXhIp
	w0u/B3QK3gXCtcMuJ8kETQzmn7LIFtvC9Y3OWcjHG8R/5j8Ocy5ZPrQ3apKOiWIz
	2L0LNsGlX9R18A9G/OIgidGOMcwqUGzGoDoj8L3Lm8pn+EVlPCes2VAwHr++PpGS
	WXzKOdXCgKyHOcuNh64IX6DuicGr/LlZgWRt0bV01f7K6UPGdG/Fw/9IqXGfOgQ3
	zrI9xbv0BKZdSMDW5BKTITEA34lbDzwIXalJDXKAig==
X-ME-Sender: <xms:k_N1ZkZGaKDpWxifqG9s-k6jiZVAwvng3wv-cEkKAVOQf6ZkB_D9pA>
    <xme:k_N1ZvbrBsDQ2wZ39oxW5jy6t9-PHHvQ-xUEV7KXuTIaeKWkovcO35T3ktV_g5m-1
    k0kGWuwiXTHCSQ>
X-ME-Received: <xmr:k_N1Zu_TGc0NcNRBInJLkAnNoQv0UaigR80Kj635iB6D8Qg9gkUQAIO7_BlK-sSjUr7ZwCKniMf5KWAFPIUrBwI4lwpQs-jAB36FrwuAwGW_aZyI>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeefhedgtdduucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepffgvmhhi
    ucforghrihgvucfqsggvnhhouhhruceouggvmhhisehinhhvihhsihgslhgvthhhihhngh
    hslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepuedthefhtddvffefjeejvdehvdej
    ieehffehkeekheegleeuleevleduteehteetnecuffhomhgrihhnpehgihhthhhusgdrtg
    homhenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpegu
    vghmihesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:k_N1Zupin2-Onmnh7uJCnn8DA6icRLpE3C7PIFR2pMB0RBsDV9KiaQ>
    <xmx:k_N1ZvodtxHZIjBoy8SxGU74PGXOiT2rhgapZDJFR8TFZNXmNdO4ng>
    <xmx:k_N1ZsT7RP-ZKws8yEpz5ivoo880fXD-qCmxTF1ICKQV5Jo5MINOMg>
    <xmx:k_N1Zvrb3i3ow15ZzS5v_33KPeCWTWGKXZFdZnL79SLg1MHSwjdmWA>
    <xmx:lPN1ZgCAp6osZ5YboWyARvqrbKpl_IkTVnHb5lzVj5hclBJZG624Uqkt>
Feedback-ID: iac594737:Fastmail
Date: Fri, 21 Jun 2024 17:41:24 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>
Cc: Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
Message-ID: <ZnXzkUclprH8TPLR@itl-email>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha512;
	protocol="application/pgp-signature"; boundary="XuZz6rjuyGgiM3Xk"
Content-Disposition: inline
In-Reply-To: <20240621161656.63576-1-andrew.cooper3@citrix.com>


--XuZz6rjuyGgiM3Xk
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Fri, 21 Jun 2024 17:41:24 -0400
From: Demi Marie Obenour <demi@invisiblethingslab.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
	Xen-devel <xen-devel@lists.xenproject.org>
Cc: Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC

On Fri, Jun 21, 2024 at 05:16:56PM +0100, Andrew Cooper wrote:
> `xl devd` has been observed leaking /var/log/xldevd.log into children.
>=20
> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd=
, so
> after setting up stdout/stderr, it's only the logfile fd which will close=
 on
> exec().
>=20
> Link: https://github.com/QubesOS/qubes-issues/issues/8292
> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Anthony PERARD <anthony@xenproject.org>
> CC: Juergen Gross <jgross@suse.com>
> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
> CC: Marek Marczykowski-G=C3=B3recki <marmarek@invisiblethingslab.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>=20
> Also entirely speculative based on the QubesOS ticket.
>=20
> v2:
>  * Extend the commit message to explain why stdout/stderr aren't closed by
>    this change
>=20
> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
> ---
>  tools/xl/xl_utils.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>=20
> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
> index 17489d182954..060186db3a59 100644
> --- a/tools/xl/xl_utils.c
> +++ b/tools/xl/xl_utils.c
> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfil=
e)
>          exit(-1);
>      }
> =20
> -    CHK_SYSCALL(logfile =3D open(fullname, O_WRONLY|O_CREAT|O_APPEND, 06=
44));
> +    CHK_SYSCALL(logfile =3D open(fullname, O_WRONLY | O_CREAT | O_APPEND=
 | O_CLOEXEC, 0644));
>      free(fullname);
>      assert(logfile >=3D 3);

Definitely an improvement.  I would also add O_NOCTTY to work around a
particularly unfortunate Linux kernel design decision, but that can
either be fixed up on commit or be a separate patch.

Reviewed-by: Demi Marie Obenour <demi@invisiblethingslab.com>
--=20
Sincerely,
Demi Marie Obenour (she/her/hers)
Invisible Things Lab

--XuZz6rjuyGgiM3Xk
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCgAdFiEEdodNnxM2uiJZBxxxsoi1X/+cIsEFAmZ185EACgkQsoi1X/+c
IsG+Lw//SZp5UlD/jH/152jWbKOm9LiZzzaLTCM9vLsYyUwP1sNlTy1bkhDQhhRe
lsVibpcfV5HTzgM1TBuaCBatqKsUVLQ/XzDK89BUtVtFukQq5MDSmQshixkC3Nnn
em8uaEKczu2fxEiu22XL+IFkn/5RIssZcf/HY8QZEqIfgoDfLrkIYEJ3MWB6sKiH
Mav5clRX5iVWFV5qqae/PY8UUF6q7bJ0pE3TgNB2OBSmr+Iu2cWNR3XcAVAs7fdF
4Aund9SvzhvsQWj2Cz6Rf57be8R1K+/9xjrSxeJV3+ORJAp5JE58w69dturpltxu
dvwd62mf5g9DoYxpRN45Cp2VFVME3it67tqc4qvlznNGpiY7jIOsznoAQrUNDVdt
AJ/EMDtb8rD34ft4JafLHQnid9k0CpTN9u4mElZB+cN89ebSJ6rzFYBDfDsSbb7H
1EY3+wNXEKA+5af1IvGISt7hncGgWW2Qa4ggsaPCGVwXNEHtxvEHnagux+/shjgB
J2dOsne578dFoGBUp4ee5zX1ErVfC9eks/msU/iLixgar2WkHhjiLkqN0n1t1RO1
2fRITrgfqfdYw4ms/Lkyu2G3F+1Qkv0fpjgl9arSVkZykB1RCoNiRRZ4Q0++1PpD
RztwcIDC3GnaJX+wPmngQASvU7G8FO9+x+4rxby9gnqCztpSpjQ=
=/IB5
-----END PGP SIGNATURE-----

--XuZz6rjuyGgiM3Xk--


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 21:54:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 21:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745630.1152772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmCy-0001zy-1z; Fri, 21 Jun 2024 21:54:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745630.1152772; Fri, 21 Jun 2024 21:54:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmCx-0001zr-UR; Fri, 21 Jun 2024 21:54:11 +0000
Received: by outflank-mailman (input) for mailman id 745630;
 Fri, 21 Jun 2024 21:54:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKmCw-0001zl-Tn
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 21:54:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c8b2c1a1-3018-11ef-b4bb-af5377834399;
 Fri, 21 Jun 2024 23:54:08 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 0EAF162581;
 Fri, 21 Jun 2024 21:54:07 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 76DFEC2BBFC;
 Fri, 21 Jun 2024 21:54:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8b2c1a1-3018-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719006846;
	bh=THbuOmsvYWGac9QqIhH7AaSf77crrWy5gCjOQmWut1k=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=KNqHXKde333GWfjx9hiUbJwOJfH51mABhz4IPa9gqFmD5kIJ6tT+cYEzW7e00n42L
	 6F88kgT3YsBWSydPMbKAgx8A1VYK1q5zmto5AoSxC6GH60azRBtQk0bqxrWuNV1Kl4
	 Cyu5T0z5qjFXTIeDmLRAA5+OMi4oyO/piTCq9Aj++cHhl+8wCwL0FOVX9DREwimte+
	 Xk3d8o/H5FYfvX5+oRMi1o3qs87j2azngA5vKrf66bM0N7LN2QR/O4MglgZYLlJc3b
	 Bs9H+emq5//djg/QXaTjlEkZvnSrq41rFeMWZiksMIliz5zeIzMv48L10pKgANuLCR
	 253zvnj6xC55g==
Date: Fri, 21 Jun 2024 14:54:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Andrew Cooper <andrew.cooper3@citrix.com>
cc: Xen-devel <xen-devel@lists.xenproject.org>, 
    George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Roberto Bagnara <roberto.bagnara@bugseng.com>, 
    "consulting @ bugseng . com" <consulting@bugseng.com>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH 2/2] xen/multicall: Change nr_calls to uniformly be
 unsigned long
In-Reply-To: <20240621205800.329230-3-andrew.cooper3@citrix.com>
Message-ID: <alpine.DEB.2.22.394.2406211438590.2572888@ubuntu-linux-20-04-desktop>
References: <20240621205800.329230-1-andrew.cooper3@citrix.com> <20240621205800.329230-3-andrew.cooper3@citrix.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1459642098-1719006205=:2572888"
Content-ID: <alpine.DEB.2.22.394.2406211444420.2572888@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1459642098-1719006205=:2572888
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2406211444421.2572888@ubuntu-linux-20-04-desktop>

On Fri, 21 Jun 2024, Andrew Cooper wrote:
> Right now, the non-compat declaration and definition of do_multicall() use
> differing types for the nr_calls parameter.
> 
> This is a MISRA rule 8.3 violation, but it's also a time-bomb waiting for the
> first 128-bit architecture (RISC-V looks as if it might get there first).
> 
> Worse, the type chosen here has a side effect of truncating the guest
> parameter, because Xen still doesn't have a clean hypercall ABI definition.
> 
> Switch uniformly to using unsigned long.
> 
> This addresses the MISRA violation, and while it is a guest-visible ABI
> change, it's only in the corner case where the guest kernel passed a
> bogus-but-correct-when-truncated value.  I can't find any users of
> multicall which pass a bad size to begin with, so this should have no
> practical effect on guests.
> 
> In fact, this brings the behaviour of multicalls more in line with the header
> description of how it behaves.
> 
> With this fix, Xen is now fully clean to Rule 8.3, so mark it so.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

I am in favor of this approach. I have two suggestions, both
nice-to-have rather than must-have.

We all know that "unsigned long" is register size. However, the C
standard doesn't define it that way. I tried to make that clear in
docs/misra/C-language-toolchain.rst, but nowhere in the public Xen
headers do we say that by "unsigned long" we mean register size. I
think we should say so somewhere in the headers, although it doesn't
have to be part of this patch. It would be nice to add a comment in
xen/include/public/xen.h saying "unsigned long is register size" or
"refer to docs/misra/C-language-toolchain.rst for type sizes and
definitions", but it is not a hard request.

My second observation: if we are concerned about invalid nr_calls
values, we could add a check at the beginning of do_multicall:

  if ( nr_calls >= UINT32_MAX )

I am not sure it is an improvement, but maybe it is.


Given that both the above suggestions are nice-to-have:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>





> ---
> CC: George Dunlap <George.Dunlap@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Roberto Bagnara <roberto.bagnara@bugseng.com>
> CC: consulting@bugseng.com <consulting@bugseng.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> I know this isn't going to be universally liked, but we need to do something,
> and this is my very strong vote for the least bad way out of the current mess.
> ---
>  automation/eclair_analysis/ECLAIR/tagging.ecl | 1 +
>  xen/common/multicall.c                        | 4 ++--
>  xen/include/hypercall-defs.c                  | 4 ++--
>  xen/include/public/xen.h                      | 2 +-
>  4 files changed, 6 insertions(+), 5 deletions(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
> index b829655ca0bc..3d06a1aad410 100644
> --- a/automation/eclair_analysis/ECLAIR/tagging.ecl
> +++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
> @@ -45,6 +45,7 @@ MC3R1.R7.2||
>  MC3R1.R7.4||
>  MC3R1.R8.1||
>  MC3R1.R8.2||
> +MC3R1.R8.3||
>  MC3R1.R8.5||
>  MC3R1.R8.6||
>  MC3R1.R8.8||
> diff --git a/xen/common/multicall.c b/xen/common/multicall.c
> index 1f0cc4cb267c..ce394c5efcfe 100644
> --- a/xen/common/multicall.c
> +++ b/xen/common/multicall.c
> @@ -34,11 +34,11 @@ static void trace_multicall_call(multicall_entry_t *call)
>  }
>  
>  ret_t do_multicall(
> -    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, uint32_t nr_calls)
> +    XEN_GUEST_HANDLE_PARAM(multicall_entry_t) call_list, unsigned long nr_calls)
>  {
>      struct vcpu *curr = current;
>      struct mc_state *mcs = &curr->mc_state;
> -    uint32_t         i;
> +    unsigned long    i;
>      int              rc = 0;
>      enum mc_disposition disp = mc_continue;
>  
> diff --git a/xen/include/hypercall-defs.c b/xen/include/hypercall-defs.c
> index 47c093acc84d..7720a29ade0b 100644
> --- a/xen/include/hypercall-defs.c
> +++ b/xen/include/hypercall-defs.c
> @@ -135,7 +135,7 @@ xenoprof_op(int op, void *arg)
>  #ifdef CONFIG_COMPAT
>  prefix: compat
>  set_timer_op(uint32_t lo, uint32_t hi)
> -multicall(multicall_entry_compat_t *call_list, uint32_t nr_calls)
> +multicall(multicall_entry_compat_t *call_list, unsigned long nr_calls)
>  memory_op(unsigned int cmd, void *arg)
>  #ifdef CONFIG_IOREQ_SERVER
>  dm_op(domid_t domid, unsigned int nr_bufs, void *bufs)
> @@ -172,7 +172,7 @@ console_io(unsigned int cmd, unsigned int count, char *buffer)
>  vm_assist(unsigned int cmd, unsigned int type)
>  event_channel_op(int cmd, void *arg)
>  mmuext_op(mmuext_op_t *uops, unsigned int count, unsigned int *pdone, unsigned int foreigndom)
> -multicall(multicall_entry_t *call_list, unsigned int nr_calls)
> +multicall(multicall_entry_t *call_list, unsigned long nr_calls)
>  #ifdef CONFIG_PV
>  mmu_update(mmu_update_t *ureqs, unsigned int count, unsigned int *pdone, unsigned int foreigndom)
>  stack_switch(unsigned long ss, unsigned long esp)
> diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
> index b47d48d0e2d6..e051f989a5ca 100644
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -623,7 +623,7 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
>  /*
>   * ` enum neg_errnoval
>   * ` HYPERVISOR_multicall(multicall_entry_t call_list[],
> - * `                      uint32_t nr_calls);
> + * `                      unsigned long nr_calls);
>   *
>   * NB. The fields are logically the natural register size for this
>   * architecture. In cases where xen_ulong_t is larger than this then
> -- 
> 2.39.2
> 
--8323329-1459642098-1719006205=:2572888--


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 22:04:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 22:04:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745638.1152783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmMm-00044i-VO; Fri, 21 Jun 2024 22:04:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745638.1152783; Fri, 21 Jun 2024 22:04:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmMm-00044b-RS; Fri, 21 Jun 2024 22:04:20 +0000
Received: by outflank-mailman (input) for mailman id 745638;
 Fri, 21 Jun 2024 22:04:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKmMk-00044Q-SZ
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 22:04:18 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 32580418-301a-11ef-b4bb-af5377834399;
 Sat, 22 Jun 2024 00:04:15 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id C606362FB6;
 Fri, 21 Jun 2024 22:04:13 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B008EC2BBFC;
 Fri, 21 Jun 2024 22:04:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 32580418-301a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719007453;
	bh=dMWh1Luls1b6qBIQ0xU1ayjG7GSZglQmnCDQ7tAZGJo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=fNQ0KBFQWDHiH/0m3ThZSQaeLNZb+D5ludLK7ykuON0e3xjWYe56tWsDxg4vFXR6+
	 uBIpCvFjtc167FEd6OP2fYG71adUI33JHyXwZCusXeFp6xkVtPRKtgDT2cq/oQX+xF
	 YBmteeczG+zfXysoQG1+OR5ydB73LMU9O+5ZPDS3Dv5XaCZYaHB6iCI7tyDv8Nturd
	 sbNNNBj5YKzmGpJ8Rtis5VFgE7zBP9c7NDE/d5h9pNaOiGUnuIi49y/mCB5Ry+Jq0r
	 9d4/A6/S38aQS1KoH9b612aKxAUKhBQ+buQg/aCKcB+2tVgE+fxfus4aZvwehA830p
	 zZiSiZbuvWO5Q==
Date: Fri, 21 Jun 2024 15:04:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] automation/eclair_analysis: deviate and|or|xor|not
 for MISRA C Rule 21.2
In-Reply-To: <7b05a537b094598b98b92d0869d16402648fb6f5.1718964932.git.alessandro.zucchelli@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406211503370.2572888@ubuntu-linux-20-04-desktop>
References: <7b05a537b094598b98b92d0869d16402648fb6f5.1718964932.git.alessandro.zucchelli@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 21 Jun 2024, Alessandro Zucchelli wrote:
> Rule 21.2 reports identifiers reserved for the C and POSIX standard
> libraries: or, and, not and xor are reserved identifiers because they
> constitute alternate spellings for the corresponding operators (they are
> defined as macros by iso646.h); however Xen doesn't use standard library
> headers, so there is no risk of overlap.
> 
> This addresses violations arising from x86_emulate/x86_emulate.c, where
> labeled statements named or, and and xor appear.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> ---
> Changes from v1:
> Added deviation for 'not' identifier.
> Added explanation of where these identifiers are defined, specifically in the
> 'iso646.h' file of the Standard Library.
> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 069519e380..14c7afb39e 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -501,7 +501,7 @@ still remain available."
>  -doc_begin="or, and and xor are reserved identifiers because they constitute alternate
>  spellings for the corresponding operators (they are defined as macros by iso646.h).
>  However, Xen doesn't use standard library headers, so there is no risk of overlap."
> --config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor)$)))"}
> +-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor|not)$)))"}
>  -doc_end

It looks like this patch relies on the previous version to be applied?
Maybe you forgot to squash your changes with your previous patch?


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 22:07:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 22:07:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745645.1152792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmPi-0004cY-C8; Fri, 21 Jun 2024 22:07:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745645.1152792; Fri, 21 Jun 2024 22:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmPi-0004cR-8K; Fri, 21 Jun 2024 22:07:22 +0000
Received: by outflank-mailman (input) for mailman id 745645;
 Fri, 21 Jun 2024 22:07:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKmPh-0004c9-17
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 22:07:21 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ed6a4df-301a-11ef-b4bb-af5377834399;
 Sat, 22 Jun 2024 00:07:19 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 8CEF1CE3D00;
 Fri, 21 Jun 2024 22:07:12 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7631CC32789;
 Fri, 21 Jun 2024 22:07:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ed6a4df-301a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719007631;
	bh=/4Um76W/pAVbIBbCOir5GbUqrqCWgSMwvcNAGHu+xIY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=W22bkPAVqv86j3N0rfPSubDhA6Hdc/sc9FCGUGaH8fs/uT0ryHsnkai3uA+hrFoR1
	 RleVMoBnFF+f/KHwL/2F9cEyZxw7E/atgUQpr8jc4P9HhODuCgcGV6NXR+IdjZpr8A
	 aRrT6WN+4J8tCFxz3LJdpst9nHoYyb4ve/Mfn6RRQMCg5B3amd3n8R71fFt8CqZWOS
	 gqAFdYEj2jBqwj1/LOuISVJX+injREYWvKBUWbqB/pzQtmEEp5gaMQnM+jP1hgD25M
	 mjJJMMjeghhr5T9Ac58L9gm4yxZTfPDxnUFLP/oEtIVElyVPIPJxCT6/p3JW6hGGFl
	 FS5aSODBArAoQ==
Date: Fri, 21 Jun 2024 15:07:09 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Federico Serafini <federico.serafini@bugseng.com>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, oleksii.kurochko@gmail.com
Subject: Re: [XEN PATCH] automation/eclair: add deviations of MISRA C Rule
 5.5
In-Reply-To: <alpine.DEB.2.22.394.2406201722100.2572888@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2406211506160.2572888@ubuntu-linux-20-04-desktop>
References: <dbd34e37b5d757ff7ae2a7318ad12b159970604c.1718887298.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406201722100.2572888@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 20 Jun 2024, Stefano Stabellini wrote:
> On Thu, 20 Jun 2024, Federico Serafini wrote:
> > MISRA C Rule 5.5 states that "Identifiers shall be distinct from macro
> > names".
> > 
> > Update ECLAIR configuration to deviate:
> > - macros expanding to their own name;
> > - clashes between macros and non-callable entities;
> > - clashes related to the selection of specific implementations of string
> >   handling functions.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

I would like to ask for a release-ack as its effect is limited to ECLAIR
analysis results and rule 5.5 is not blocking anyway (it is allowed to
fail).


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 22:24:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 22:24:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745655.1152801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmge-0008MI-P8; Fri, 21 Jun 2024 22:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745655.1152801; Fri, 21 Jun 2024 22:24:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmge-0008MB-Mc; Fri, 21 Jun 2024 22:24:52 +0000
Received: by outflank-mailman (input) for mailman id 745655;
 Fri, 21 Jun 2024 22:24:51 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKmgd-0008M5-NT
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 22:24:51 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 12159d72-301d-11ef-90a3-e314d9c70b13;
 Sat, 22 Jun 2024 00:24:49 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 5FEAE61F12;
 Fri, 21 Jun 2024 22:24:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id BD618C2BBFC;
 Fri, 21 Jun 2024 22:24:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12159d72-301d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719008688;
	bh=o8TbK0ni2OrU29q0sruc8RM7BJDHAGGwxdornMcUQXA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=M342ko7HVUhbLn2TNGYwTnamQSLtwbrVzXlHN0388kGkmF09wKGboNlijtJXW+qxz
	 SrYOWBOIMmnVun1LSt3D1W2P5Ke/WlmAdo8K9ai+2LkUi9RpR007wV+XDO7pxsyj86
	 lAZRXBk45Ajs82lY+y6MQ5D7iW2kzMq6druH/ybmcKGFsfVTBTjsg5lgrWA1ttC1yI
	 d74csY2VJP3hiuBwJzyTp9lw6MVLEQqvfaLGcJOT8Ia8ziAoq/mP+gZpBCFRN2EFKY
	 fIpvDfWMEaOr1/23fuEMVl2qIN9uhuGZw54DV5/IcohjKfsUCAURXGxWIOzFeSLo2S
	 ZOuNpxoMGT8gA==
Date: Fri, 21 Jun 2024 15:24:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Federico Serafini <federico.serafini@bugseng.com>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, oleksii.kurochko@gmail.com
Subject: Re: [XEN PATCH v3] automation/eclair: add deviation for MISRA C Rule
 17.7
In-Reply-To: <alpine.DEB.2.22.394.2406191819370.2572888@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2406211522270.2572888@ubuntu-linux-20-04-desktop>
References: <b571bd05955ab9967a44517c9947545a2a530f01.1718354974.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406191819370.2572888@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 19 Jun 2024, Stefano Stabellini wrote:
> On Fri, 14 Jun 2024, Federico Serafini wrote:
> > Update ECLAIR configuration to deviate some cases where not using
> > the return value of a function is not dangerous.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
 
I would like to request a release ack, as this patch only affects the
ECLAIR analysis for R17.7, which is non-blocking anyway (meaning: it
cannot cause a gitlab-ci failure; it is only informative).



> > ---
> > Changes in v3:
> > - removed unwanted underscores;
> > - grammar fixed;
> > - do not constraint to the first actual argument.
> > Changes in v2:
> > - do not deviate strlcpy and strlcat.
> > ---
> >  automation/eclair_analysis/ECLAIR/deviations.ecl | 4 ++++
> >  docs/misra/deviations.rst                        | 9 +++++++++
> >  2 files changed, 13 insertions(+)
> > 
> > diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > index 447c1e6661..97281082a8 100644
> > --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > @@ -413,6 +413,10 @@ explicit comment indicating the fallthrough intention is present."
> >  -config=MC3R1.R17.1,macros+={hide , "^va_(arg|start|copy|end)$"}
> >  -doc_end
> >  
> > +-doc_begin="Not using the return value of a function does not endanger safety if it coincides with an actual argument."
> > +-config=MC3R1.R17.7,calls+={safe, "any()", "decl(name(__builtin_memcpy||__builtin_memmove||__builtin_memset||cpumask_check))"}
> > +-doc_end
> > +
> >  #
> >  # Series 18.
> >  #
> > diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> > index 36959aa44a..f3abe31eb5 100644
> > --- a/docs/misra/deviations.rst
> > +++ b/docs/misra/deviations.rst
> > @@ -364,6 +364,15 @@ Deviations related to MISRA C:2012 Rules:
> >         by `stdarg.h`.
> >       - Tagged as `deliberate` for ECLAIR.
> >  
> > +   * - R17.7
> > +     - Not using the return value of a function does not endanger safety if it
> > +       coincides with an actual argument.
> > +     - Tagged as `safe` for ECLAIR. Such functions are:
> > +         - __builtin_memcpy()
> > +         - __builtin_memmove()
> > +         - __builtin_memset()
> > +         - cpumask_check()
> > +
> >     * - R20.4
> >       - The override of the keyword \"inline\" in xen/compiler.h is present so
> >         that section contents checks pass when the compiler chooses not to
> > -- 
> > 2.34.1
> > 
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 22:34:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 22:34:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745664.1152811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmq2-000268-NL; Fri, 21 Jun 2024 22:34:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745664.1152811; Fri, 21 Jun 2024 22:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKmq2-000261-Ke; Fri, 21 Jun 2024 22:34:34 +0000
Received: by outflank-mailman (input) for mailman id 745664;
 Fri, 21 Jun 2024 22:34:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKmq0-00025v-RT
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 22:34:32 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6cff87a4-301e-11ef-90a3-e314d9c70b13;
 Sat, 22 Jun 2024 00:34:31 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id D2D3862420;
 Fri, 21 Jun 2024 22:34:28 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C7B96C2BBFC;
 Fri, 21 Jun 2024 22:34:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6cff87a4-301e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719009268;
	bh=TyZBPDMQXih2WNT7cUG5PFw78FxIHahxXePUGgoWvkM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=k6DmblBniO7VJiWHndC/1RNZWzERIKtJ6XqnTiZYqtOIwdCBcQNOr6cFIhsjNeFg5
	 +B+K+Wq35CBsKmNxz+pCkdv9LbgkcH5vNwmQFdyh4X2a9WxhtL5afR2I6WuDr1WC4/
	 apKKqW1aTViEh0l3kiNBxrWOcOkBTAv7qI0Wd1Zj/cTKCFJb53jimvS6jZpXffiEEA
	 1RcLWEmbeXoGXj99hX2/Dpxd+ko6V5XZ8CU3+umOZHhIG/WlH2QPDecZKZgLk63YL7
	 7DjlKx2UPt6vzYKGNx3C0Yi58aiwkNpyTsgUkacl5PjnESHznj0EbIIoOlDTAlWtIK
	 Qs7JImCNpXbNw==
Date: Fri, 21 Jun 2024 15:34:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
In-Reply-To: <bce5eae2-973d-4d69-bee1-09f9f09dd011@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406211529130.2572888@ubuntu-linux-20-04-desktop>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com> <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org> <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com> <alpine.DEB.2.22.394.2406201811200.2572888@ubuntu-linux-20-04-desktop>
 <bce5eae2-973d-4d69-bee1-09f9f09dd011@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1880117008-1719009152=:2572888"
Content-ID: <alpine.DEB.2.22.394.2406211532370.2572888@ubuntu-linux-20-04-desktop>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1880117008-1719009152=:2572888
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.22.394.2406211532371.2572888@ubuntu-linux-20-04-desktop>

On Fri, 21 Jun 2024, Federico Serafini wrote:
> On 21/06/24 03:13, Stefano Stabellini wrote:
> > On Thu, 20 Jun 2024, Federico Serafini wrote:
> > > On 19/06/24 13:17, Julien Grall wrote:
> > > > Hi Federico,
> > > > 
> > > > On 19/06/2024 10:29, Federico Serafini wrote:
> > > > > MISRA C Rule 16.4 states that every `switch' statement shall have a
> > > > > `default' label and a statement or a comment prior to the
> > > > > terminating break statement.
> > > > > 
> > > > > This patch addresses some violations of the rule related to the
> > > > > "notifier pattern": a frequently-used pattern whereby only a few
> > > > > values
> > > > > are handled by the switch statement and nothing should be done for
> > > > > others (nothing to do in the default case).
> > > > > 
> > > > > Note that for function mwait_idle_cpu_init() in
> > > > > xen/arch/x86/cpu/mwait-idle.c the /* Notifier pattern. */ comment is
> > > > > not added: differently from the other functions covered in this patch,
> > > > > the default label has a return statement that does not violates Rule
> > > > > 16.4.
> > > > > 
> > > > > No functional change.
> > > > > 
> > > > > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> > > > > ---
> > > > > Changes in v2:
> > > > > as Jan pointed out, in the v1 some patterns were not explicitly
> > > > > identified
> > > > > (https://lore.kernel.org/xen-devel/cad05a5c-e2d8-4e5d-af05-30ae6f959184@bugseng.com/).
> > > > > 
> > > > > This version adds the /* Notifier pattern. */ comment to all the
> > > > > patterns present in
> > > > > the Xen codebase except for mwait_idle_cpu_init().
> > > > > ---
> > > > >    xen/arch/arm/cpuerrata.c                     | 1 +
> > > > >    xen/arch/arm/gic-v3-lpi.c                    | 4 ++++
> > > > >    xen/arch/arm/gic.c                           | 1 +
> > > > >    xen/arch/arm/irq.c                           | 4 ++++
> > > > >    xen/arch/arm/mmu/p2m.c                       | 1 +
> > > > >    xen/arch/arm/percpu.c                        | 1 +
> > > > >    xen/arch/arm/smpboot.c                       | 1 +
> > > > >    xen/arch/arm/time.c                          | 1 +
> > > > >    xen/arch/arm/vgic-v3-its.c                   | 2 ++
> > > > >    xen/arch/x86/acpi/cpu_idle.c                 | 4 ++++
> > > > >    xen/arch/x86/cpu/mcheck/mce.c                | 4 ++++
> > > > >    xen/arch/x86/cpu/mcheck/mce_intel.c          | 4 ++++
> > > > >    xen/arch/x86/genapic/x2apic.c                | 3 +++
> > > > >    xen/arch/x86/hvm/hvm.c                       | 1 +
> > > > >    xen/arch/x86/nmi.c                           | 1 +
> > > > >    xen/arch/x86/percpu.c                        | 3 +++
> > > > >    xen/arch/x86/psr.c                           | 3 +++
> > > > >    xen/arch/x86/smpboot.c                       | 3 +++
> > > > >    xen/common/kexec.c                           | 1 +
> > > > >    xen/common/rcupdate.c                        | 1 +
> > > > >    xen/common/sched/core.c                      | 1 +
> > > > >    xen/common/sched/cpupool.c                   | 1 +
> > > > >    xen/common/spinlock.c                        | 1 +
> > > > >    xen/common/tasklet.c                         | 1 +
> > > > >    xen/common/timer.c                           | 1 +
> > > > >    xen/drivers/cpufreq/cpufreq.c                | 1 +
> > > > >    xen/drivers/cpufreq/cpufreq_misc_governors.c | 3 +++
> > > > >    xen/drivers/passthrough/x86/hvm.c            | 3 +++
> > > > >    xen/drivers/passthrough/x86/iommu.c          | 3 +++
> > > > >    29 files changed, 59 insertions(+)
> > > > > 
> > > > > diff --git a/xen/arch/arm/cpuerrata.c b/xen/arch/arm/cpuerrata.c
> > > > > index 2b7101ea25..69c30aecd8 100644
> > > > > --- a/xen/arch/arm/cpuerrata.c
> > > > > +++ b/xen/arch/arm/cpuerrata.c
> > > > > @@ -730,6 +730,7 @@ static int cpu_errata_callback(struct
> > > > > notifier_block
> > > > > *nfb,
> > > > >            rc = enable_nonboot_cpu_caps(arm_errata);
> > > > >            break;
> > > > >        default:
> > > > > +        /* Notifier pattern. */
> > > > Without looking at the commit message (which may not be trivial when
> > > > committed), it is not clear to me what this is supposed to mean. Will
> > > > there
> > > > be a longer explanation in the MISRA doc? Should this be a SAF-*
> > > > comment?
> > > > 
> > > > >            break;
> > > > >        }
> > > > > diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
> > > > > index eb0a5535e4..4c2bd35403 100644
> > > > > --- a/xen/arch/arm/gic-v3-lpi.c
> > > > > +++ b/xen/arch/arm/gic-v3-lpi.c
> > > > > @@ -389,6 +389,10 @@ static int cpu_callback(struct notifier_block
> > > > > *nfb,
> > > > > unsigned long action,
> > > > >                printk(XENLOG_ERR "Unable to allocate the pendtable for
> > > > > CPU%lu\n",
> > > > >                       cpu);
> > > > >            break;
> > > > > +
> > > > > +    default:
> > > > > +        /* Notifier pattern. */
> > > > > +        break;
> > > > 
> > > > Skimming through v1, it was pointed out that gic-v3-lpi may miss some
> > > > cases.
> > > > 
> > > > Let me start with that I understand this patch is technically not
> > > > changing
> > > > anything. However, it gives us an opportunity to check the notifier
> > > > pattern.
> > > > 
> > > > Has anyone done any proper investigation? If so, what was the outcome?
> > > > If
> > > > not, have we identified someone to do it?
> > > > 
> > > > The same question will apply for place where you add "default".
> > > 
> > > Yes, I also think this could be an opportunity to check the pattern
> > > but no one has yet been identified to do this.
> > 
> > I don't think I understand Julien's question and/or your answer.
> > 
> > Is the question whether someone has done an analysis to make sure this
> > patch covers all notifier patterns in the Xen codebase?
> 
> I think Jan and Julien's concerns are about the fact that my patch
> takes for granted that all the switch statements are doing the right
> thing: someone should investigate the notifier patterns to confirm that
> they are handling the different cases correctly.

That's really difficult to do, even for the maintainers of the code in
question.

And by not taking this patch we are exposing ourselves to more safety
risks because we cannot make R16.4 blocking.


> > If so, I expect that you have done an analysis simply by basing this
> > patch on the 16.4 violations reported by ECLAIR?
> 
> The previous version of the patch was based only on the reports of
> ECLAIR but Jan said "you left out some patterns, why?".
> 
> So, this version of the patch adds the comment for all the notifier
> patterns I found using git grep "struct notifier_block \*"
> (a superset of the ones reported by ECLAIR because some of them are in
> files excluded from the analysis or deviated).

I think this patch is a step in the right direction. It doesn't prevent
anyone in the community from making expert evaluations on whether the
pattern is implemented correctly.

Honestly, I don't see another way to make progress on this, except for
maybe deviating project-wide "struct notifier_block". But that's
conceptually the same thing as this patch.


Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
--8323329-1880117008-1719009152=:2572888--


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 23:27:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 Jun 2024 23:27:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745675.1152821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKnf6-0001XN-C6; Fri, 21 Jun 2024 23:27:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745675.1152821; Fri, 21 Jun 2024 23:27:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKnf6-0001XG-9D; Fri, 21 Jun 2024 23:27:20 +0000
Received: by outflank-mailman (input) for mailman id 745675;
 Fri, 21 Jun 2024 23:27:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=B+dc=NX=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sKnf4-0001XA-MF
 for xen-devel@lists.xenproject.org; Fri, 21 Jun 2024 23:27:18 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c903ab96-3025-11ef-b4bb-af5377834399;
 Sat, 22 Jun 2024 01:27:14 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 7EC13CE3CD2;
 Fri, 21 Jun 2024 23:27:08 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8835FC2BBFC;
 Fri, 21 Jun 2024 23:27:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c903ab96-3025-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719012427;
	bh=itEtUME6C4Zc1DxKquhnptxC25JV3Tn3U3n3KJMVXVQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=u4pJacmhvUGJ7NZAm2TPdPo2qnFEOX/ZQ0FQHxQU/FPAnx2JvCF+3GVa5eqdzpl8+
	 8cPBan5dW8tlHABlyvp4hNKrgem5Xc7JEMRPZNhAmOeJDduucjv768zo9yTJzqJdUx
	 hoVVIdbYVWrVLja/aZfRFUcR3xOC/VeaYJS16Go6VxNbg+BycpZAIAcOwcsQbRGqh3
	 u+XXGoonGDt6fsZjzJvJ77n5wKP7vLFcBiKsrGPczKZERLvQrBSneDACSye8YAa3tl
	 DrjMswAgafOV9Ozf9GJw2Vw5tgjsCYNArlUw/46g8yKBYjsZ4mWIlfxTH5IRjHcZXK
	 2u9r8+OfbRI0w==
Date: Fri, 21 Jun 2024 16:27:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/2] automation/eclair_analysis: deviate MISRA C Rule
 21.2
In-Reply-To: <650b7946-ddb5-4428-b6d9-d8f6e0b0f8b9@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406211619070.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com> <5b8364528a9ece8fec9f0e70bee81c2ea94c1820.1718816397.git.alessandro.zucchelli@bugseng.com> <02ee9a03-c5b9-4250-960d-e9a2762605c8@suse.com> <alpine.DEB.2.22.394.2406201758490.2572888@ubuntu-linux-20-04-desktop>
 <650b7946-ddb5-4428-b6d9-d8f6e0b0f8b9@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Fri, 21 Jun 2024, Jan Beulich wrote:
> On 21.06.2024 03:02, Stefano Stabellini wrote:
> > On Thu, 20 Jun 2024, Jan Beulich wrote:
> >> On 19.06.2024 19:09, Alessandro Zucchelli wrote:
> >>> Rule 21.2 reports identifiers reserved for the C and POSIX standard
> >>> libraries: all xen's translation units are compiled with option
> >>> -nostdinc, this guarantees that these libraries are not used, therefore
> >>> a justification is provided for allowing uses of such identifiers in
> >>> the project.
> >>> Builtins starting with "__builtin_" still remain available.
> >>>
> >>> No functional change.
> >>>
> >>> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> >>> ---
> >>>  automation/eclair_analysis/ECLAIR/deviations.ecl | 11 +++++++++++
> >>>  1 file changed, 11 insertions(+)
> >>>
> >>> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> >>> index 447c1e6661..9fa9a7f01c 100644
> >>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> >>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> >>> @@ -487,6 +487,17 @@ leads to a violation of the Rule are deviated."
> >>>  # Series 21.
> >>>  #
> >>>  
> >>> +-doc_begin="Rules 21.1 and 21.2 report identifiers reserved for the C and POSIX
> >>> +standard libraries: if these libraries are not used there is no reason to avoid such
> >>> +identifiers. All xen's translation units are compiled with option -nostdinc,
> >>> +this guarantees that these libraries are not used. Some compilers could perform
> >>> +optimization using built-in functions: this risk is partially addressed by
> >>> +using the compilation option -fno-builtin. Builtins starting with \"__builtin_\"
> >>> +still remain available."
> >>
> >> While the sub-section "Reserved Identifiers" is part of Section 7,
> >> "Library", close coordination is needed between the library and the
> >> compiler, which only together form an "implementation". Therefore any
> >> use of identifiers beginning with two underscores or beginning with an
> >> underscore and an upper case letter is at risk of colliding not only
> >> with a particular library implementation (which we don't use), but
> >> also of such with a particular compiler implementation (which we cannot
> >> avoid to use). How can we permit use of such potentially problematic
> >> identifiers?
> > 
> > Alternative question: is there a way we can check if there is clash of
> > some sort between a compiler implementation of something and a MACRO or
> > identifier we have in Xen? An error or a warning from the compiler for
> > instance? That could be an easy way to prove we are safe.
> 
> Well. I think it is the default for the compiler to warn when re-#define-
> ing a previously #define-d (by the compiler or by us) symbol, so on that
> side we ought to be safe at any given point in time,

OK, that's good. It seems to me that this explanation should be part of
the deviation text.


> yet we're still latently unsafe (as to compilers introducing new
> pre-defines).

Sure, but we don't need to be safe in relation to future compilers. Right
now, we are targeting gcc-12.1.0 as written in
docs/misra/C-language-toolchain.rst. When we decide to enable a new
compiler in Xen we can fix/change any specific define as needed. Also
note the large amount of things written in C-language-toolchain.rst that
need to be checked and verified for a new compiler to make sure we can
actually use it safely (we make many assumptions).


> For built-in declarations, though, there's nothing I'm aware of that
> would indicate collisions.

For builtins, Alessandro was suggesting -fno-builtin. One question to
Alessandro is why would -fno-builtin only "partially" address the
problem.

Another question for Jan and also Alessandro: given that builtins
starting with __builtin_ remain available, any drawbacks in using
-fno-builtin in a Xen build?



> > Also, can we use the fact that the compiler we use is the same compiler
> > used to compile Linux, and that Linux makes extensive use of identifiers
> > and macros starting with underscores, as one of the reasons for being
> > safe from clashes?
> 
> I think we could, but I don't think we should.


From xen-devel-bounces@lists.xenproject.org Fri Jun 21 23:28:39 2024
Message-ID: <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
Date: Sat, 22 Jun 2024 00:28:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH v2] iommu/xen: Add Xen PV-IOMMU driver
To: TSnake41 <teddy.astie@vates.tech>, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
From: Robin Murphy <robin.murphy@arm.com>
Content-Language: en-GB
In-Reply-To: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-21 5:08 pm, TSnake41 wrote:
> From: Teddy Astie <teddy.astie@vates.tech>
> 
> In the context of Xen, Linux runs as Dom0 and doesn't have access to the
> machine IOMMU. However, an IOMMU is required for some kernel features
> such as VFIO or DMA protection.
> 
> In Xen, we added a paravirtualized IOMMU with an iommu_op hypercall in
> order to allow Dom0 to implement such features. This commit introduces a
> new IOMMU driver that uses this new hypercall interface.
> 
> Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
> ---
> Changes since v1:
> * formatting changes
> * applied Jan Beulich's proposed changes: removed vim notes at end of pv-iommu.h
> * applied Jason Gunthorpe's proposed changes: use new ops and remove redundant
> checks
> ---
>   arch/x86/include/asm/xen/hypercall.h |   6 +
>   drivers/iommu/Kconfig                |   9 +
>   drivers/iommu/Makefile               |   1 +
>   drivers/iommu/xen-iommu.c            | 489 +++++++++++++++++++++++++++
>   include/xen/interface/memory.h       |  33 ++
>   include/xen/interface/pv-iommu.h     | 104 ++++++
>   include/xen/interface/xen.h          |   1 +
>   7 files changed, 643 insertions(+)
>   create mode 100644 drivers/iommu/xen-iommu.c
>   create mode 100644 include/xen/interface/pv-iommu.h
> 
> diff --git a/arch/x86/include/asm/xen/hypercall.h b/arch/x86/include/asm/xen/hypercall.h
> index a2dd24947eb8..6b1857f27c14 100644
> --- a/arch/x86/include/asm/xen/hypercall.h
> +++ b/arch/x86/include/asm/xen/hypercall.h
> @@ -490,6 +490,12 @@ HYPERVISOR_xenpmu_op(unsigned int op, void *arg)
>   	return _hypercall2(int, xenpmu_op, op, arg);
>   }
>   
> +static inline int
> +HYPERVISOR_iommu_op(void *arg)
> +{
> +	return _hypercall1(int, iommu_op, arg);
> +}
> +
>   static inline int
>   HYPERVISOR_dm_op(
>   	domid_t dom, unsigned int nr_bufs, struct xen_dm_op_buf *bufs)
> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
> index 0af39bbbe3a3..242cefac77c9 100644
> --- a/drivers/iommu/Kconfig
> +++ b/drivers/iommu/Kconfig
> @@ -480,6 +480,15 @@ config VIRTIO_IOMMU
>   
>   	  Say Y here if you intend to run this kernel as a guest.
>   
> +config XEN_IOMMU
> +	bool "Xen IOMMU driver"
> +	depends on XEN_DOM0

Clearly this depends on X86 as well.

> +	select IOMMU_API
> +	help
> +		Xen PV-IOMMU driver for Dom0.
> +
> +		Say Y here if you intend to run this kernel as Xen Dom0.
> +
>   config SPRD_IOMMU
>   	tristate "Unisoc IOMMU Support"
>   	depends on ARCH_SPRD || COMPILE_TEST
> diff --git a/drivers/iommu/Makefile b/drivers/iommu/Makefile
> index 542760d963ec..393afe22c901 100644
> --- a/drivers/iommu/Makefile
> +++ b/drivers/iommu/Makefile
> @@ -30,3 +30,4 @@ obj-$(CONFIG_IOMMU_SVA) += iommu-sva.o
>   obj-$(CONFIG_IOMMU_IOPF) += io-pgfault.o
>   obj-$(CONFIG_SPRD_IOMMU) += sprd-iommu.o
>   obj-$(CONFIG_APPLE_DART) += apple-dart.o
> +obj-$(CONFIG_XEN_IOMMU) += xen-iommu.o
> \ No newline at end of file
> diff --git a/drivers/iommu/xen-iommu.c b/drivers/iommu/xen-iommu.c
> new file mode 100644
> index 000000000000..b765445d27cd
> --- /dev/null
> +++ b/drivers/iommu/xen-iommu.c
> @@ -0,0 +1,489 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Xen PV-IOMMU driver.
> + *
> + * Copyright (C) 2024 Vates SAS
> + *
> + * Author: Teddy Astie <teddy.astie@vates.tech>
> + *
> + */
> +
> +#define pr_fmt(fmt)	"xen-iommu: " fmt
> +
> +#include <linux/kernel.h>
> +#include <linux/init.h>
> +#include <linux/types.h>
> +#include <linux/iommu.h>
> +#include <linux/dma-map-ops.h>

Please drop this; it's a driver, not a DMA ops implementation.

> +#include <linux/pci.h>
> +#include <linux/list.h>
> +#include <linux/string.h>
> +#include <linux/device/driver.h>
> +#include <linux/slab.h>
> +#include <linux/err.h>
> +#include <linux/printk.h>
> +#include <linux/stddef.h>
> +#include <linux/spinlock.h>
> +#include <linux/minmax.h>
> +#include <linux/string.h>
> +#include <asm/iommu.h>
> +
> +#include <xen/xen.h>
> +#include <xen/page.h>
> +#include <xen/interface/memory.h>
> +#include <xen/interface/physdev.h>
> +#include <xen/interface/pv-iommu.h>
> +#include <asm/xen/hypercall.h>
> +#include <asm/xen/page.h>
> +
> +MODULE_DESCRIPTION("Xen IOMMU driver");
> +MODULE_AUTHOR("Teddy Astie <teddy.astie@vates.tech>");
> +MODULE_LICENSE("GPL");
> +
> +#define MSI_RANGE_START		(0xfee00000)
> +#define MSI_RANGE_END		(0xfeefffff)
> +
> +#define XEN_IOMMU_PGSIZES       (0x1000)
> +
> +struct xen_iommu_domain {
> +	struct iommu_domain domain;
> +
> +	u16 ctx_no; /* Xen PV-IOMMU context number */
> +};
> +
> +static struct iommu_device xen_iommu_device;
> +
> +static uint32_t max_nr_pages;
> +static uint64_t max_iova_addr;
> +
> +static spinlock_t lock;

Not a great name - usually it's good to name a lock after what it 
protects. Although perhaps it is already, since AFAICS this isn't 
actually used anywhere anyway.

> +static inline struct xen_iommu_domain *to_xen_iommu_domain(struct iommu_domain *dom)
> +{
> +	return container_of(dom, struct xen_iommu_domain, domain);
> +}
> +
> +static inline u64 addr_to_pfn(u64 addr)
> +{
> +	return addr >> 12;
> +}
> +
> +static inline u64 pfn_to_addr(u64 pfn)
> +{
> +	return pfn << 12;
> +}
> +
> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
> +{
> +	switch (cap) {
> +	case IOMMU_CAP_CACHE_COHERENCY:
> +		return true;

Will the PV-IOMMU only ever be exposed on hardware where that really is 
always true?

> +
> +	default:
> +		return false;
> +	}
> +}
> +
> +struct iommu_domain *xen_iommu_domain_alloc_paging(struct device *dev)
> +{
> +	struct xen_iommu_domain *domain;
> +	int ret;
> +
> +	struct pv_iommu_op op = {
> +		.ctx_no = 0,
> +		.flags = 0,
> +		.subop_id = IOMMUOP_alloc_context
> +	};
> +
> +	ret = HYPERVISOR_iommu_op(&op);
> +
> +	if (ret) {
> +		pr_err("Unable to create Xen IOMMU context (%d)", ret);
> +		return ERR_PTR(ret);
> +	}
> +
> +	domain = kzalloc(sizeof(*domain), GFP_KERNEL);
> +
> +	domain->ctx_no = op.ctx_no;
> +
> +	domain->domain.geometry.aperture_start = 0;
> +	domain->domain.geometry.aperture_end = max_iova_addr;
> +	domain->domain.geometry.force_aperture = true;
> +
> +	return &domain->domain;
> +}
> +
> +static struct iommu_device *xen_iommu_probe_device(struct device *dev)
> +{
> +	if (!dev_is_pci(dev))
> +		return ERR_PTR(-ENODEV);
> +
> +	return &xen_iommu_device;

Even emulated PCI devices have to have an (emulated, presumably) IOMMU?

> +}
> +
> +static void xen_iommu_probe_finalize(struct device *dev)
> +{
> +	set_dma_ops(dev, NULL);
> +	iommu_setup_dma_ops(dev, 0, max_iova_addr);

This shouldn't even compile... anyway, core code does this now, please 
drop the whole callback.

> +}
> +
> +static int xen_iommu_map_pages(struct iommu_domain *domain, unsigned long iova,
> +			       phys_addr_t paddr, size_t pgsize, size_t pgcount,
> +			       int prot, gfp_t gfp, size_t *mapped)
> +{
> +	size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;

You only advertise the one page size, so you'll always get that back, 
and this seems a bit redundant.

> +	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
> +	struct pv_iommu_op op = {
> +		.subop_id = IOMMUOP_map_pages,
> +		.flags = 0,
> +		.ctx_no = dom->ctx_no
> +	};
> +	/* NOTE: paddr is actually bound to pfn, not gfn */
> +	uint64_t pfn = addr_to_pfn(paddr);
> +	uint64_t dfn = addr_to_pfn(iova);
> +	int ret = 0;
> +
> +	//pr_info("Mapping to %lx %zu %zu paddr %x\n", iova, pgsize, pgcount, paddr);

Please try to clean up debugging leftovers before posting the patch (but 
also note that there are already tracepoints and debug messages which 
can be enabled in the core code to give visibility of most of this.)

> +
> +	if (prot & IOMMU_READ)
> +		op.flags |= IOMMU_OP_readable;
> +
> +	if (prot & IOMMU_WRITE)
> +		op.flags |= IOMMU_OP_writeable;
> +
> +	while (xen_pg_count) {

Unless you're super-concerned about performance already, you don't 
really need to worry about looping here - you can happily return short 
as long as you've mapped *something*, and the core code will call you 
back again with the remainder. But it also doesn't complicate things 
*too* much as it is, so feel free to leave it in if you want to.

> +		size_t to_map = min(xen_pg_count, max_nr_pages);
> +		uint64_t gfn = pfn_to_gfn(pfn);
> +
> +		//pr_info("Mapping %lx-%lx at %lx-%lx\n", gfn, gfn + to_map - 1, dfn, dfn + to_map - 1);
> +
> +		op.map_pages.gfn = gfn;
> +		op.map_pages.dfn = dfn;
> +
> +		op.map_pages.nr_pages = to_map;
> +
> +		ret = HYPERVISOR_iommu_op(&op);
> +
> +		//pr_info("map_pages.mapped = %u\n", op.map_pages.mapped);
> +
> +		if (mapped)
> +			*mapped += XEN_PAGE_SIZE * op.map_pages.mapped;
> +
> +		if (ret)
> +			break;
> +
> +		xen_pg_count -= to_map;
> +
> +		pfn += to_map;
> +		dfn += to_map;
> +	}
> +
> +	return ret;
> +}
> +
> +static size_t xen_iommu_unmap_pages(struct iommu_domain *domain, unsigned long iova,
> +				    size_t pgsize, size_t pgcount,
> +				    struct iommu_iotlb_gather *iotlb_gather)
> +{
> +	size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
> +	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
> +	struct pv_iommu_op op = {
> +		.subop_id = IOMMUOP_unmap_pages,
> +		.ctx_no = dom->ctx_no,
> +		.flags = 0,
> +	};
> +	uint64_t dfn = addr_to_pfn(iova);
> +	int ret = 0;
> +
> +	if (WARN(!dom->ctx_no, "Tried to unmap page to default context"))
> +		return -EINVAL;

This would go hilariously wrong... the return value here is bytes 
successfully unmapped, a total failure should return 0. But then how 
would it ever happen anyway? Unmap is a domain op, so a domain which 
doesn't allow unmapping shouldn't offer it in the first place...

> +	while (xen_pg_count) {
> +		size_t to_unmap = min(xen_pg_count, max_nr_pages);
> +
> +		//pr_info("Unmapping %lx-%lx\n", dfn, dfn + to_unmap - 1);
> +
> +		op.unmap_pages.dfn = dfn;
> +		op.unmap_pages.nr_pages = to_unmap;
> +
> +		ret = HYPERVISOR_iommu_op(&op);
> +
> +		if (ret)
> +			pr_warn("Unmap failure (%lx-%lx)\n", dfn, dfn + to_unmap - 1);

In this case I'd argue that you really *do* want to return short, in the 
hope of propagating the error back up and letting the caller know the 
address space is now messed up before things start blowing up even more 
if they keep going and subsequently try to map new pages into 
not-actually-unmapped VAs.

> +
> +		xen_pg_count -= to_unmap;
> +
> +		dfn += to_unmap;
> +	}
> +
> +	return pgcount * pgsize;
> +}
> +
> +int xen_iommu_attach_dev(struct iommu_domain *domain, struct device *dev)
> +{
> +	struct pci_dev *pdev;
> +	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
> +	struct pv_iommu_op op = {
> +		.subop_id = IOMMUOP_reattach_device,
> +		.flags = 0,
> +		.ctx_no = dom->ctx_no,
> +	};
> +
> +	pdev = to_pci_dev(dev);
> +
> +	op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
> +	op.reattach_device.dev.bus = pdev->bus->number;
> +	op.reattach_device.dev.devfn = pdev->devfn;
> +
> +	return HYPERVISOR_iommu_op(&op);
> +}
> +
> +static void xen_iommu_free(struct iommu_domain *domain)
> +{
> +	int ret;
> +	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
> +
> +	if (dom->ctx_no != 0) {

Much like unmap above, this not being true would imply that someone's 
managed to go round the back of the core code to get the .free op from a 
validly-allocated domain and then pass something other than that domain 
to it. Personally I'd consider that a level of brokenness that's not 
even worth trying to consider at all, but if you want to go as far as 
determining that you *have* clearly been given something you couldn't 
have allocated, then trying to kfree() it probably isn't wise either.

> +		struct pv_iommu_op op = {
> +			.ctx_no = dom->ctx_no,
> +			.flags = 0,
> +			.subop_id = IOMMUOP_free_context
> +		};
> +
> +		ret = HYPERVISOR_iommu_op(&op);
> +
> +		if (ret)
> +			pr_err("Context %hu destruction failure\n", dom->ctx_no);
> +	}
> +
> +	kfree(domain);
> +}
> +
> +static phys_addr_t xen_iommu_iova_to_phys(struct iommu_domain *domain, dma_addr_t iova)
> +{
> +	int ret;
> +	struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
> +
> +	struct pv_iommu_op op = {
> +		.ctx_no = dom->ctx_no,
> +		.flags = 0,
> +		.subop_id = IOMMUOP_lookup_page,
> +	};
> +
> +	op.lookup_page.dfn = addr_to_pfn(iova);
> +
> +	ret = HYPERVISOR_iommu_op(&op);
> +
> +	if (ret)
> +		return 0;
> +
> +	phys_addr_t page_addr = pfn_to_addr(gfn_to_pfn(op.lookup_page.gfn));
> +
> +	/* Consider non-aligned iova */
> +	return page_addr + (iova & 0xFFF);
> +}
> +
> +static void xen_iommu_get_resv_regions(struct device *dev, struct list_head *head)
> +{
> +	struct iommu_resv_region *reg;
> +	struct xen_reserved_device_memory *entries;
> +	struct xen_reserved_device_memory_map map;
> +	struct pci_dev *pdev;
> +	int ret, i;
> +
> +	pdev = to_pci_dev(dev);
> +
> +	reg = iommu_alloc_resv_region(MSI_RANGE_START,
> +				      MSI_RANGE_END - MSI_RANGE_START + 1,
> +				      0, IOMMU_RESV_MSI, GFP_KERNEL);
> +
> +	if (!reg)
> +		return;
> +
> +	list_add_tail(&reg->list, head);
> +
> +	/* Map xen-specific entries */
> +
> +	/* First, get number of entries to map */
> +	map.buffer = NULL;
> +	map.nr_entries = 0;
> +	map.flags = 0;
> +
> +	map.dev.pci.seg = pci_domain_nr(pdev->bus);
> +	map.dev.pci.bus = pdev->bus->number;
> +	map.dev.pci.devfn = pdev->devfn;
> +
> +	ret = HYPERVISOR_memory_op(XENMEM_reserved_device_memory_map, &map);
> +
> +	if (ret == 0)
> +		/* No reserved region, nothing to do */
> +		return;
> +
> +	if (ret != -ENOBUFS) {
> +		pr_err("Unable to get reserved region count (%d)\n", ret);
> +		return;
> +	}
> +
> +	/* Assume a reasonable number of entries, otherwise, something is probably wrong */
> +	if (WARN_ON(map.nr_entries > 256))
> +		pr_warn("Xen reporting many reserved regions (%u)\n", map.nr_entries);
> +
> +	/* And finally get actual mappings */
> +	entries = kcalloc(map.nr_entries, sizeof(struct xen_reserved_device_memory),
> +					  GFP_KERNEL);
> +
> +	if (!entries) {
> +		pr_err("No memory for map entries\n");
> +		return;
> +	}
> +
> +	map.buffer = entries;
> +
> +	ret = HYPERVISOR_memory_op(XENMEM_reserved_device_memory_map, &map);
> +
> +	if (ret != 0) {
> +		pr_err("Unable to get reserved regions (%d)\n", ret);
> +		kfree(entries);
> +		return;
> +	}
> +
> +	for (i = 0; i < map.nr_entries; i++) {
> +		struct xen_reserved_device_memory entry = entries[i];
> +
> +		reg = iommu_alloc_resv_region(pfn_to_addr(entry.start_pfn),
> +					      pfn_to_addr(entry.nr_pages),
> +					      0, IOMMU_RESV_RESERVED, GFP_KERNEL);
> +
> +		if (!reg)
> +			break;
> +
> +		list_add_tail(&reg->list, head);
> +	}
> +
> +	kfree(entries);
> +}
> +
> +static int default_domain_attach_dev(struct iommu_domain *domain,
> +				     struct device *dev)
> +{
> +	int ret;
> +	struct pci_dev *pdev;
> +	struct pv_iommu_op op = {
> +		.subop_id = IOMMUOP_reattach_device,
> +		.flags = 0,
> +		.ctx_no = 0 /* reattach device back to default context */
> +	};
> +
> +	pdev = to_pci_dev(dev);
> +
> +	op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
> +	op.reattach_device.dev.bus = pdev->bus->number;
> +	op.reattach_device.dev.devfn = pdev->devfn;
> +
> +	ret = HYPERVISOR_iommu_op(&op);
> +
> +	if (ret)
> +		pr_warn("Unable to release device %p\n", &op.reattach_device.dev);
> +
> +	return ret;
> +}
> +
> +static struct iommu_domain default_domain = {
> +	.ops = &(const struct iommu_domain_ops){
> +		.attach_dev = default_domain_attach_dev
> +	}
> +};

Looks like you could make it a static xen_iommu_domain and just use the 
normal attach callback? Either way please name it something less 
confusing like xen_iommu_identity_domain - "default" is far too 
overloaded round here already...

> +static struct iommu_ops xen_iommu_ops = {
> +	.identity_domain = &default_domain,
> +	.release_domain = &default_domain,
> +	.capable = xen_iommu_capable,
> +	.domain_alloc_paging = xen_iommu_domain_alloc_paging,
> +	.probe_device = xen_iommu_probe_device,
> +	.probe_finalize = xen_iommu_probe_finalize,
> +	.device_group = pci_device_group,
> +	.get_resv_regions = xen_iommu_get_resv_regions,
> +	.pgsize_bitmap = XEN_IOMMU_PGSIZES,
> +	.default_domain_ops = &(const struct iommu_domain_ops) {
> +		.map_pages = xen_iommu_map_pages,
> +		.unmap_pages = xen_iommu_unmap_pages,
> +		.attach_dev = xen_iommu_attach_dev,
> +		.iova_to_phys = xen_iommu_iova_to_phys,
> +		.free = xen_iommu_free,
> +	},
> +};
> +
> +int __init xen_iommu_init(void)
> +{
> +	int ret;
> +	struct pv_iommu_op op = {
> +		.subop_id = IOMMUOP_query_capabilities
> +	};
> +
> +	if (!xen_domain())
> +		return -ENODEV;
> +
> +	/* Check if iommu_op is supported */
> +	if (HYPERVISOR_iommu_op(&op) == -ENOSYS)
> +		return -ENODEV; /* No Xen IOMMU hardware */
> +
> +	pr_info("Initialising Xen IOMMU driver\n");
> +	pr_info("max_nr_pages=%d\n", op.cap.max_nr_pages);
> +	pr_info("max_ctx_no=%d\n", op.cap.max_ctx_no);
> +	pr_info("max_iova_addr=%llx\n", op.cap.max_iova_addr);
> +
> +	if (op.cap.max_ctx_no == 0) {
> +		pr_err("Unable to use IOMMU PV driver (no context available)\n");
> +		return -ENOTSUPP; /* Unable to use IOMMU PV ? */
> +	}
> +
> +	if (xen_domain_type == XEN_PV_DOMAIN)
> +		/* TODO: In a PV domain, due to the existing pfn-gfn mapping, we need
> +		 * to consider that under certain circumstances, we have:
> +		 *   pfn_to_gfn(x + 1) != pfn_to_gfn(x) + 1
> +		 *
> +		 * In these cases, we would want to split the subop into several calls
> +		 * (only doing the grouped operation when the mapping is actually
> +		 * contiguous). Only the map operation would be affected, as unmap uses
> +		 * dfn, which doesn't have this kind of mapping.
> +		 *
> +		 * Force single-page operations to work around this issue for now.
> +		 */
> +		max_nr_pages = 1;
> +	else
> +		/* With HVM domains, pfn_to_gfn is identity, there is no issue regarding this. */
> +		max_nr_pages = op.cap.max_nr_pages;
> +
> +	max_iova_addr = op.cap.max_iova_addr;
> +
> +	spin_lock_init(&lock);
> +
> +	ret = iommu_device_sysfs_add(&xen_iommu_device, NULL, NULL, "xen-iommu");
> +	if (ret) {
> +		pr_err("Unable to add Xen IOMMU sysfs\n");
> +		return ret;
> +	}
> +
> +	ret = iommu_device_register(&xen_iommu_device, &xen_iommu_ops, NULL);
> +	if (ret) {
> +		pr_err("Unable to register Xen IOMMU device %d\n", ret);
> +		iommu_device_sysfs_remove(&xen_iommu_device);
> +		return ret;
> +	}
> +
> +	/* swiotlb is redundant when IOMMU is active. */
> +	x86_swiotlb_enable = false;

That's not always true, but either way if this is at 
module_init/device_initcall time then it's too late to make any 
difference anyway.

> +
> +	return 0;
> +}
> +
> +void __exit xen_iommu_fini(void)
> +{
> +	pr_info("Unregistering Xen IOMMU driver\n");
> +
> +	iommu_device_unregister(&xen_iommu_device);
> +	iommu_device_sysfs_remove(&xen_iommu_device);
> +}

This is dead code since the Kconfig is only "bool". Either allow it to 
be an actual module (and make sure that works), or drop the pretence 
altogether.

Thanks,
Robin.


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 00:14:13 2024
Date: Fri, 21 Jun 2024 17:13:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, andrew.cooper3@citrix.com, 
    george.dunlap@citrix.com, julien@xen.org, michal.orzel@amd.com, 
    bertrand.marquis@arm.com, roger.pau@citrix.com, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH] docs/misra: rules for mass adoption
In-Reply-To: <19073c21-d878-4d8d-95d8-90f567688ed5@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406211711410.2572888@ubuntu-linux-20-04-desktop>
References: <alpine.DEB.2.22.394.2405221822500.1052252@ubuntu-linux-20-04-desktop> <19073c21-d878-4d8d-95d8-90f567688ed5@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 23 May 2024, Jan Beulich wrote:
> On 23.05.2024 03:26, Stefano Stabellini wrote:
> > @@ -725,12 +787,25 @@ maintainers if you want to suggest a change.
> >       - The Standard Library function system of <stdlib.h> shall not be used
> >       -
> >  
> > +   * - `Rule 22.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_01.c>`_
> > +     - Required
> > +     - All resources obtained dynamically by means of Standard Library
> > +       functions shall be explicitly released
> > +     -
> > +     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
> 
> The empty sub-bullet-point looks stray here.

Good catch, thanks!


> > @@ -748,6 +823,31 @@ maintainers if you want to suggest a change.
> >         stream has been closed
> >       -
> >  
> > +   * - `Rule 22.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_07.c>`_
> > +     - Required
> > +     - The macro EOF shall only be compared with the unmodified return
> > +       value from any Standard Library function capable of returning EOF
> > +     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
> 
> Shouldn't this remark also be replicated ...
> 
> > +   * - `Rule 22.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_08.c>`_
> > +     - Required
> > +     - The value of errno shall be set to zero prior to a call to an
> > +       errno-setting-function
> > +     -
> > +
> > +   * - `Rule 22.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_09.c>`_
> > +     - Required
> > +     - The value of errno shall be tested against zero after calling an
> > +       errno-setting-function
> > +     -
> > +
> > +   * - `Rule 22.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_10.c>`_
> > +     - Required
> > +     - The value of errno shall only be tested when the last function to
> > +       be called was an errno-setting-function
> > +     -
> 
> ... for all three of these, seeing that errno is something a (standard) library
> would provide? Or alternatively should remarks here say that we simply have no
> errno?

I'll replicate the full message


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 00:14:35 2024
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Stefano Stabellini <stefano.stabellini@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <sstabellini@kernel.org>,
	<andrew.cooper3@citrix.com>, <julien@xen.org>, <michal.orzel@amd.com>,
	<bertrand.marquis@arm.com>, <roger.pau@citrix.com>, Stefano Stabellini
	<stefano.stabellini@amd.com>
Subject: [PATCH v2] docs/misra: rules for mass adoption
Date: Fri, 21 Jun 2024 17:14:22 -0700
Message-ID: <20240622001422.3852207-1-stefano.stabellini@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Received-SPF: None (SATLEXMB04.amd.com: stefano.stabellini@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000EE3D:EE_|IA0PR12MB7603:EE_
X-MS-Office365-Filtering-Correlation-Id: 2d8de61b-770d-4d36-dc56-08dc92504635
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|376011|36860700010|82310400023|1800799021;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?y8YPdVfcTEwjHHvnryncb46idKUksHCtKNya/7pJCcy5CFryPdZf3bOcEgyV?=
 =?us-ascii?Q?liqtVwG2MIdkY89iOBq2Y1QRgk5FK2lrQMyasG+Ov6GlRyNf2fBqXPYqn6bb?=
 =?us-ascii?Q?YSRc4sMneU9p471Qd2TXkeXHk4wwzpeUFWzOu7pPR40PT7yvtE95dy08DXKo?=
 =?us-ascii?Q?uwoBxSPR8jtXbt/PiQueCzHln4/UL88LoJ3j/NlZbihoj/xMiXOHqBYujwR/?=
 =?us-ascii?Q?fZ1KS6Yk8/eNTl7UEMiaUwyZqeDVI4zA3HAJ8dGAsTgNGnlWDq4Nj6NrSuk5?=
 =?us-ascii?Q?YiJjYb1evNK9FUfpt5rAsfbxnqq34OdpYkaRcbkhiLWwAgfzZcHalq31Jnk0?=
 =?us-ascii?Q?VODUUyEte9pVLVHkkOS4Bf8EoaxMiIg+H/SNo07HArVF0Tx1XJqk8ugPCeF+?=
 =?us-ascii?Q?chuJyrQbfrJZToHFmelgGPIpk3Kds55L2lX6SUKDGrFLSmlGYJK3/QNwt1yn?=
 =?us-ascii?Q?ZjMneYszZe5STak0zlJU1hQpoGfNIUo2CH5y/47aKOe3DaNLJdRzAld115j0?=
 =?us-ascii?Q?KjiiFk/tMAVeJ9UPxuUrHqvTOPo8hmiHETNSkdz0tei2SrZgYDKSBsWzh860?=
 =?us-ascii?Q?fdm6w8g9AaJttCuPc+rUgJuZvPGrZcc8ct3vADYrhJHTOtz+WCDoqjDy71Az?=
 =?us-ascii?Q?sziX9V3t0ODDHLUfiR2F1q4YC0VhZX1JMXh9iwRbITY4HiaFtFgfKzrT1v8I?=
 =?us-ascii?Q?2MFiXN9hU2vd1Zdxi09bt6BLijoWWDb7RypV8vj/f8WLD93m3qSBzedRWuX7?=
 =?us-ascii?Q?i4ggUQ8CdWS398hPz0SlORI2+rKw85Gny20vrL+10PDRewBkoQjRztq5sEU9?=
 =?us-ascii?Q?6MVx+48WhI2bG40KPmpWlz7LJMYqPwcdTLXHv1yCR8a+HfWCWT47M0YUxOsy?=
 =?us-ascii?Q?LTYKvViQjn0pXDMRheRmVAIVLv1P9bDGjEF4EBkUd1UXh0+kzbEAFFro8LT1?=
 =?us-ascii?Q?wxOvWUXdX30ye9cJ8qERii7t3an376xhYqPEB/pYXkFPnJ8tvTImkxv3zkCb?=
 =?us-ascii?Q?o2Ih4AT1ENw964dYg2ED096P8Kb1RNgNPK4A0juFW72OWrYsmozHQAMgNn7L?=
 =?us-ascii?Q?3fHfcGTocGYvRB3bfc3miD3i8JdRPnH9JaJVpG3bGvk6Trb9y+TXoS5vTn9f?=
 =?us-ascii?Q?wItc+HQs9nZQmzmcVhTg893Bxk7A48vN0BEWwDMRjn3iMjupJcJAaRD1DM08?=
 =?us-ascii?Q?qjyE/Nzuw4vWX33Pcfyf0HghYMevEI3pKW1J9+/YRIl/HOBO1w9+2+pY/qQZ?=
 =?us-ascii?Q?9hkhSTe8xgOpMRe7okQWMTtOUIiC74DXgCqGd+pBSDWTYK4FTqjNY8Mi1Cwz?=
 =?us-ascii?Q?5xcsWjRqoYoiezFdOaw1BQG45WqcVQS9cIte4YlyBAzxWNOLx9MoNTooOfPP?=
 =?us-ascii?Q?ZgTFUD8=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(376011)(36860700010)(82310400023)(1800799021);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 Jun 2024 00:14:25.1491
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d8de61b-770d-4d36-dc56-08dc92504635
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000EE3D.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB7603

From: Stefano Stabellini <sstabellini@kernel.org>

This patch adds a set of rules to rules.rst that are uncontroversial
and have zero violations in Xen. As such, they have been approved for
adoption.

All the rules that concern the standard library link to the existing
footnote in the notes column.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
Changes in v2:
- replicate the "Xen doesn't provide a stdlib" message for 22.8, 22.9, 22.10
- remove stray empty bullet for 22.1
---
 docs/misra/rules.rst | 99 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)

diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index 80e5e972ad..6158bad31c 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -580,6 +580,11 @@ maintainers if you want to suggest a change.
      - The relational operators > >= < and <= shall not be applied to objects of pointer type except where they point into the same object
      -
 
+   * - `Rule 18.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_08.c>`_
+     - Required
+     - Variable-length array types shall not be used
+     -
+
    * - `Rule 19.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_19_01.c>`_
      - Mandatory
      - An object shall not be assigned or copied to an overlapping
@@ -589,11 +594,29 @@ maintainers if you want to suggest a change.
        instances where Eclair is unable to verify that the code is valid
        in regard to Rule 19.1. Caution reports are not violations.
 
+   * - `Rule 20.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_02.c>`_
+     - Required
+     - The ', " or \ characters and the /* or // character sequences
+       shall not occur in a header file name
+     -
+
+   * - `Rule 20.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_03.c>`_
+     - Required
+     - The #include directive shall be followed by either a <filename>
+       or "filename" sequence
+     -
+
    * - `Rule 20.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_04.c>`_
      - Required
      - A macro shall not be defined with the same name as a keyword
      -
 
+   * - `Rule 20.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_06.c>`_
+     - Required
+     - Tokens that look like a preprocessing directive shall not occur
+       within a macro argument
+     -
+
    * - `Rule 20.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_07.c>`_
      - Required
      - Expressions resulting from the expansion of macro parameters
@@ -609,6 +632,12 @@ maintainers if you want to suggest a change.
        evaluation
      -
 
+   * - `Rule 20.11 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_11.c>`_
+     - Required
+     - A macro parameter immediately following a # operator shall not
+       immediately be followed by a ## operator
+     -
+
    * - `Rule 20.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_12.c>`_
      - Required
      - A macro parameter used as an operand to the # or ## operators,
@@ -651,11 +680,39 @@ maintainers if you want to suggest a change.
        declared
      - See comment for Rule 21.1
 
+   * - `Rule 21.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_03.c>`_
+     - Required
+     - The memory allocation and deallocation functions of <stdlib.h>
+       shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 21.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_04.c>`_
+     - Required
+     - The standard header file <setjmp.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 21.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_05.c>`_
+     - Required
+     - The standard header file <signal.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 21.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_06.c>`_
      - Required
      - The Standard Library input/output routines shall not be used
      - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
 
+   * - `Rule 21.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_07.c>`_
+     - Required
+     - The Standard Library functions atof, atoi, atol and atoll of
+       <stdlib.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 21.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_08.c>`_
+     - Required
+     - The Standard Library functions abort, exit and system of
+       <stdlib.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 21.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_09.c>`_
      - Required
      - The library functions bsearch and qsort of <stdlib.h> shall not be used
@@ -666,6 +723,11 @@ maintainers if you want to suggest a change.
      - The Standard Library time and date routines shall not be used
      - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
 
+   * - `Rule 21.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_12.c>`_
+     - Advisory
+     - The exception handling features of <fenv.h> should not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 21.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_13.c>`_
      - Mandatory
      - Any value passed to a function in <ctype.h> shall be representable as an
@@ -725,12 +787,24 @@ maintainers if you want to suggest a change.
      - The Standard Library function system of <stdlib.h> shall not be used
      -
 
+   * - `Rule 22.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_01.c>`_
+     - Required
+     - All resources obtained dynamically by means of Standard Library
+       functions shall be explicitly released
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 22.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_02.c>`_
      - Mandatory
      - A block of memory shall only be freed if it was allocated by means of a
        Standard Library function
      -
 
+   * - `Rule 22.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_03.c>`_
+     - Required
+     - The same file shall not be open for read and write access at the
+       same time on different streams
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 22.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_04.c>`_
      - Mandatory
      - There shall be no attempt to write to a stream which has been opened as
@@ -748,6 +822,31 @@ maintainers if you want to suggest a change.
        stream has been closed
      -
 
+   * - `Rule 22.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_07.c>`_
+     - Required
+     - The macro EOF shall only be compared with the unmodified return
+       value from any Standard Library function capable of returning EOF
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 22.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_08.c>`_
+     - Required
+     - The value of errno shall be set to zero prior to a call to an
+       errno-setting-function
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 22.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_09.c>`_
+     - Required
+     - The value of errno shall be tested against zero after calling an
+       errno-setting-function
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 22.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_10.c>`_
+     - Required
+     - The value of errno shall only be tested when the last function to
+       be called was an errno-setting-function
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+
 Terms & Definitions
 -------------------
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 22 02:00:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 02:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745727.1152861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKq3U-0006H5-1U; Sat, 22 Jun 2024 02:00:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745727.1152861; Sat, 22 Jun 2024 02:00:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKq3T-0006Gy-UP; Sat, 22 Jun 2024 02:00:39 +0000
Received: by outflank-mailman (input) for mailman id 745727;
 Sat, 22 Jun 2024 02:00:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKq3S-0006Gl-IH; Sat, 22 Jun 2024 02:00:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKq3S-0007c5-5J; Sat, 22 Jun 2024 02:00:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKq3R-0000Wd-JI; Sat, 22 Jun 2024 02:00:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKq3R-0002Kb-FY; Sat, 22 Jun 2024 02:00:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Abjw08gUudcfhckiWdyhT0cJFqZi9piOwUfMANAUC2w=; b=g3F1+gz6DgWU1F6dBtb8g7V/AD
	74ncyvnhd5DwRZmO7Bj4lCheEGJeGdFX9FsLGoh2X+hg7uQYb8+7YzkjFkWTkNH05/jBDV5UW/WTM
	lVylBuZ7TgcxMvwKrgleVmTJogfCfR3aMcYXhvakKB99YvArAYH/QAdvJ30iT8OvpgSA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186445-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-6.1 test] 186445: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-6.1:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-6.1:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a6398e37309000e35cedb5cc328a0f8d00d7d7b9
X-Osstest-Versions-That:
    linux=eb44d83053d66372327e69145e8d2fa7400a4991
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Jun 2024 02:00:37 +0000

flight 186445 linux-6.1 real [real]
flight 186450 linux-6.1 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186445/
http://logs.test-lab.xenproject.org/osstest/logs/186450/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186450-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186370
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186370
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186370
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186370
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186370
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186370
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                a6398e37309000e35cedb5cc328a0f8d00d7d7b9
baseline version:
 linux                eb44d83053d66372327e69145e8d2fa7400a4991

Last test of basis   186370  2024-06-16 12:15:05 Z    5 days
Testing same since   186445  2024-06-21 12:46:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Csókás, Bence" <csokas.bence@prolan.hu>
  Aapo Vienamo <aapo.vienamo@linux.intel.com>
  Adam Miotk <adam.miotk@arm.com>
  Alan Stern <stern@rowland.harvard.edu>
  Aleksandr Mishin <amishin@t-argos.ru>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Alexey Kodanev <aleksei.kodanev@bell-sw.com>
  Allen Pais <apais@linux.microsoft.com>
  Amit Sunil Dhamne <amitsd@google.com>
  Amjad Ouled-Ameur <amjad.ouled-ameur@arm.com>
  Andi Shyti <andi.shyti@kernel.org>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andrei Coardos <aboutphysycs@gmail.com>
  Andrei Vagin <avagin@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antonio Quartulli <a@unstable.cc>
  Apurva Nandan <a-nandan@ti.com>
  Armin Wolf <W_Armin@gmx.de>
  Arpana Arland <arpanax.arland@intel.com> (A Contingent worker at Intel)
  Baokun Li <libaokun1@huawei.com>
  Baoquan He <bhe@redhat.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Beleswar Padhi <b-padhi@ti.com>
  Ben Segall <bsegall@google.com>
  Benjamin Segall <bsegall@google.com>
  Benjamin Tissoires <bentiss@kernel.org>
  Bjorn Andersson <andersson@kernel.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Björn Töpel <bjorn@rivosinc.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Breno Leitao <leitao@debian.org>
  Chandan Kumar Rout <chandanx.rout@intel.com>
  Chen Hanxiao <chenhx.fnst@fujitsu.com>
  Chris Maness <christopher.maness@gmail.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Brauner <brauner@kernel.org>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Cong Wang <cong.wang@bytedance.com>
  Cong Yang <yangcong5@huaqin.corp-partner.google.com>
  Csókás, Bence <csokas.bence@prolan.hu>
  Damien Le Moal <dlemoal@kernel.org>
  Dan Cross <crossd@gmail.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wagner <dwagner@suse.de>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Jiang <dave.jiang@intel.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David Lechner <dlechner@baylibre.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  DelphineCCChiu <delphine_cc_chiu@wiwynn.com>
  Dev Jain <dev.jain@arm.com>
  Dirk Behme <dirk.behme@de.bosch.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Doug Brown <doug@schmorgal.com>
  Douglas Anderson <dianders@chromium.org>
  Duoming Zhou <duoming@zju.edu.cn>
  Emmanuel Grumbach <emmanuel.grumbach@intel.com>
  Eric Dumazet <edumazet@google.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Gabor Juhos <j4g8y7@gmail.com>
  Gal Pressman <gal@nvidia.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregor Herburger <gregor.herburger@tq-group.com>
  Hagar Gamal Halim Hemdan <hagarhem@amazon.com>
  Hagar Hemdan <hagarhem@amazon.com>
  Haifeng Xu <haifeng.xu@shopee.com>
  Hailong.Liu <hailong.liu@oppo.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hamza Mahfooz <hamza.mahfooz@amd.com>
  Hangyu Hua <hbh25y@gmail.com>
  Hans de Goede <hdegoede@redhat.com>
  Hector Martin <marcan@marcan.st>
  Hersen Wu <hersenxs.wu@amd.com>
  Hugo Villeneuve <hvilleneuve@dimonoff.com>
  Ian Forbes <ian.forbes@broadcom.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Inki Dae <inki.dae@samsung.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jani Nikula <jani.nikula@intel.com>
  Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
  Jarkko Nikula <jarkko.nikula@linux.intel.com>
  Jason Andryuk <jandryuk@gmail.com>
  Jason Andryuk <jason.andryuk@amd.com>
  Jason Xing <kernelxing@tencent.com>
  Jean Delvare <jdelvare@suse.de>
  Jean-Baptiste Maneyrol <jean-baptiste.maneyrol@tdk.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Jia Zhu <zhujia.zj@bytedance.com>
  Jie Wang <wangjie125@huawei.com>
  Jijie Shao <shaojijie@huawei.com>
  Jiri Kosina <jkosina@suse.com>
  Jiri Olsa <jolsa@kernel.org>
  Jisheng Zhang <jszhang@kernel.org>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan+linaro@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Ernberg <john.ernberg@actia.se>
  John Keeping <john@metanate.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Joshua Washington <joshwash@google.com>
  José Expósito <jose.exposito89@gmail.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Kalle Valo <quic_kvalo@quicinc.com>
  Karol Kolacinski <karol.kolacinski@intel.com>
  Keith Busch <kbusch@kernel.org>
  Kelsey Steele <kelseysteele@linux.microsoft.com>
  Kory Maincent <kory.maincent@bootlin.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Krzysztof Wilczyński <kwilczynski@kernel.org>
  Kuangyi Chiang <ki.chiang65@gmail.com>
  Kun(llfl) <llfl@linux.alibaba.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Kyle Tso <kyletso@google.com>
  Lars Kellogg-Stedman <lars@oddbit.com>
  Larysa Zaremba <larysa.zaremba@intel.com>
  Lin Ma <linma@zju.edu.cn>
  Lingbo Kong <quic_lingbok@quicinc.com>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lion Ackermann <nnamrec@gmail.com>
  Luca Ceresoli <luca.ceresoli@bootlin.com>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Marc Ferland <marc.ferland@sonatest.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Krastev <martin.krastev@broadcom.com>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Mathieu Poirier <mathieu.poirier@linaro.org>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Matthieu Baerts (NGI0) <matttbe@kernel.org>
  Max Filippov <jcmvbkbc@gmail.com>
  Maxime Ripard <mripard@kernel.org>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Hocko <mhocko@suse.com>
  Michal Wilczynski <michal.wilczynski@intel.com>
  Mickaël Salaün <mic@digikod.net>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Miri Korenblit <miriam.rachel.korenblit@intel.com>
  Moshe Shemesh <moshe@mellanox.com>
  Moshe Shemesh <moshe@nvidia.com>
  Muhammad Usama Anjum <usama.anjum@collabora.com>
  Nam Cao <namcao@linutronix.de>
  NeilBrown <neilb@suse.de>
  Nicolas Escande <nico.escande@gmail.com>
  Nikita Zhandarovich <n.zhandarovich@fintech.ru>
  Nikolay Aleksandrov <razor@blackwall.org>
  Nirmoy Das <nirmoy.das@intel.com>
  Nuno Sa <nuno.sa@analog.com>
  Oleg Nesterov <oleg@redhat.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Olga Kornievskaia <kolga@netapp.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Paul Greenwalt <paul.greenwalt@intel.com>
  Pavel Machek (CIP) <pavel@denx.de>
  Peter Delevoryas <peter@pjd.dev>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Pavlu <petr.pavlu@suse.com>
  Pierre Tomon <pierretom+12@ik.me>
  Przemek Kitszel <przemyslaw.kitszel@intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com>
  Qu Wenruo <wqu@suse.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rao Shoaib <Rao.Shoaib@oracle.com>
  Remi Pommarel <repk@triplefau.lt>
  Richard Cochran <richardcochran@gmail.com>
  Rick Wertenbroek <rick.wertenbroek@gmail.com>
  Rik van Riel <riel@surriel.com>
  Ron Economos <re@w6rz.net>
  Russell King (Oracle) <rmk+kernel@armlinux.org.uk>
  Ryusuke Konishi <konishi.ryusuke@gmail.com>
  Saeed Mahameed <saeedm@nvidia.com>
  Salvatore Bonaccorso <carnil@debian.org>
  Sam James <sam@gentoo.org>
  Samuel Holland <samuel.holland@sifive.com>
  Sasha Levin <sashal@kernel.org>
  SeongJae Park <sj@kernel.org>
  Sergey Ryazanov <ryazanov.s.a@gmail.com>
  Shahar S Matityahu <shahar.s.matityahu@intel.com>
  Shay Drory <shayd@nvidia.com>
  Shichao Lai <shichaorai@gmail.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sicong Huang <congei42@163.com>
  Steev Klimaszewski <steev@kali.org>
  Stephen Boyd <sboyd@kernel.org>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Su Yue <glass.su@suse.com>
  Subbaraya Sundeep <sbhatta@marvell.com>
  Sven Joachim <svenjoac@gmx.de>
  Taehee Yoo <ap420073@gmail.com>
  Tariq Toukan <tariqt@nvidia.com>
  Thadeu Lima de Souza Cascardo <cascardo@igalia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Weißschuh <linux@weissschuh.net>
  Tomas Winkler <tomas.winkler@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uwe Kleine-König <u.kleine-koenig@pengutronix.de>
  Vadym Krevs <vkrevs@yahoo.com>
  Vamshi Gajjela <vamshigajjela@google.com>
  Vidya Srinivas <vidya.srinivas@intel.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vinicius Costa Gomes <vinicius.gomes@intel.com>
  Vinod Koul <vkoul@kernel.org>
  Vlastimil Babka <vbabka@suse.cz>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>
  Wei Fu <fuweid89@gmail.com>
  Wen Gu <guwen@linux.alibaba.com>
  Wesley Cheng <quic_wcheng@quicinc.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  Xin Yin <yinxin.x@bytedance.com>
  Yazen Ghannam <yazen.ghannam@amd.com>
  Yonglong Liu <liuyonglong@huawei.com>
  YonglongLi <liyonglong@chinatelecom.cn>
  Yongzhi Liu <hyperlyzcs@gmail.com>
  Zack Rusin <zack.rusin@broadcom.com>
  Zack Rusin <zackr@vmware.com>
  Ziwei Xiao <ziweixiao@google.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   eb44d83053d6..a6398e373090  a6398e37309000e35cedb5cc328a0f8d00d7d7b9 -> tested/linux-6.1


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 05:42:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 05:42:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745754.1152872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKtVn-0005nS-4p; Sat, 22 Jun 2024 05:42:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745754.1152872; Sat, 22 Jun 2024 05:42:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKtVn-0005nL-1n; Sat, 22 Jun 2024 05:42:07 +0000
Received: by outflank-mailman (input) for mailman id 745754;
 Sat, 22 Jun 2024 05:42:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKtVm-0005nB-7n; Sat, 22 Jun 2024 05:42:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKtVm-0003XY-0W; Sat, 22 Jun 2024 05:42:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKtVl-00065T-La; Sat, 22 Jun 2024 05:42:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKtVl-0007CS-Kj; Sat, 22 Jun 2024 05:42:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FneaOGIjte/TvF5rr2QZIbc5nOyNIFiISYrp0PpiEDs=; b=aX1DD7D+DwULqU1Sm1t8z5bvTA
	HJT7jTITvXshWUwkb/YnX+yBGlDYB1YW1IMBwoyHa/aWOowRZwR5HC/w+UFHGL2uco+FtnhPBdgCi
	WMOU+7Tu7h3Vwu62G0UHDmBixmIoS4495mHVZhAjqjflMc+xk0kjRu6D8J82a6JHm9XI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186448-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186448: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore.2:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
X-Osstest-Versions-That:
    xen=62071a1c16c4dbe765491e58e456fd3a19b33298
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Jun 2024 05:42:05 +0000

flight 186448 xen-unstable real [real]
flight 186452 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186448/
http://logs.test-lab.xenproject.org/osstest/logs/186452/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 18 guest-saverestore.2 fail pass in 186452-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186444
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186444
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186444
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186444
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186444
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186444
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
baseline version:
 xen                  62071a1c16c4dbe765491e58e456fd3a19b33298

Last test of basis   186444  2024-06-21 07:49:49 Z    0 days
Testing same since   186448  2024-06-21 18:40:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Matthew Barnes <matthew.barnes@cloud.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   62071a1c16..9e7c26ad85  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 07:32:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 07:32:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745776.1152890 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKvEM-0001BM-0S; Sat, 22 Jun 2024 07:32:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745776.1152890; Sat, 22 Jun 2024 07:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKvEL-0001BF-Tr; Sat, 22 Jun 2024 07:32:13 +0000
Received: by outflank-mailman (input) for mailman id 745776;
 Sat, 22 Jun 2024 07:32:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sKvEK-0001B9-R9
 for xen-devel@lists.xenproject.org; Sat, 22 Jun 2024 07:32:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sKvEK-0005Qm-3f; Sat, 22 Jun 2024 07:32:12 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.244])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sKvEJ-0003bt-Of; Sat, 22 Jun 2024 07:32:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=WDj0/8UxFsPi20A5VayK2zsCM4zWM8yc5vKX79RS6n4=; b=Dhu/AUvOsmrn20nD6wN7Sl3NrO
	TGrYZfy+qC+ylmt/O28GMxIMXBe2wF1wqZf8ZZMnocVYnh0J8yLtt+TWT6dfSkDZDvzy4wZn81Zp2
	W3hO4cKRwygSJczQZOzkPj/IlwNIq6d0UrHIYu1VX/0X97e8W/8Tl0c883Z5XnR7n+ug=;
Message-ID: <917533b5-b79c-4e97-917d-9684993bf423@xen.org>
Date: Sat, 22 Jun 2024 08:32:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
To: Stefano Stabellini <sstabellini@kernel.org>,
 Federico Serafini <federico.serafini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
 <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
 <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com>
 <alpine.DEB.2.22.394.2406201811200.2572888@ubuntu-linux-20-04-desktop>
 <bce5eae2-973d-4d69-bee1-09f9f09dd011@bugseng.com>
 <alpine.DEB.2.22.394.2406211529130.2572888@ubuntu-linux-20-04-desktop>
Content-Language: en-GB
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2406211529130.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 21/06/2024 23:34, Stefano Stabellini wrote:
>>>> Yes, I also think this could be an opportunity to check the pattern
>>>> but no one has yet been identified to do this.
>>>
>>> I don't think I understand Julien's question and/or your answer.
>>>
>>> Is the question whether someone has done an analysis to make sure this
>>> patch covers all notifier patterns in the Xen codebase?
>>
>> I think Jan and Julien's concerns are about the fact that my patch
>> takes for granted that all the switch statements are doing the right
>> thing: someone should investigate the notifier patterns to confirm that
>> they are handling the different cases correctly.
> 
> That's really difficult to do, even for the maintainers of the code in
> question.
Sure, it will require some work, but so does any violation. However, I 
thought the whole point of MISRA was to improve the safety of our code 
base in general?

AFAIU, we already have some doubts that some notifiers are correct. So 
to me it seems wrong to add a comment, because while this silences 
MISRA, it doesn't solve the problem in the true spirit of the rule.
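
To make the pattern under discussion concrete, here is a hypothetical sketch (invented action names and handling; not Xen's actual notifier code) of a notifier callback whose default clause carries only a comment stating that the remaining actions are deliberately ignored, which is the kind of annotation the patch adds to satisfy MISRA C:2012 Rule 16.4:

```c
/* Hypothetical action codes, for illustration only (not Xen's values). */
enum notifier_action { CPU_UP_PREPARE, CPU_DEAD, CPU_DYING };

/* Sketch of the notifier pattern: handle the actions this callback
 * cares about and deliberately ignore the rest.  Rule 16.4 requires
 * every switch to end in a default clause; the patch in this thread
 * adds a comment to such defaults to record that the omission of the
 * other cases is intentional, without verifying it is also correct. */
int notifier_cb(enum notifier_action action)
{
    int handled = 0;

    switch ( action )
    {
    case CPU_UP_PREPARE:
        handled = 1;
        break;

    default:
        /* Notifier pattern: other actions deliberately not handled. */
        break;
    }

    return handled;
}
```

The debate above is exactly about whether such a comment documents a verified property or merely silences the checker.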

> 
> And by not taking this patch we are exposing ourselves to more safety
> risks because we cannot make R16.4 blocking.
> 
> 
>>> If so, I expect that you have done an analysis simply by basing this
>>> patch on the 16.4 violations reported by ECLAIR?
>>
>> The previous version of the patch was based only on the reports of
>> ECLAIR but Jan said "you left out some patterns, why?".
>>
>> So, this version of the patch adds the comment for all the notifier
>> patterns I found using git grep "struct notifier_block \*"
>> (a superset of the ones reported by ECLAIR because some of them are in
>> files excluded from the analysis or deviated).
> 
> I think this patch is a step in the right direction. It doesn't prevent
> anyone in the community from making expert evaluations on whether the
> pattern is implemented correctly.

I am not sure I am reading this correctly. To me, you are saying that, 
for MISRA purposes, adding the default case is fine even if we believe 
some notifiers are incorrect. Did I understand that right?

If so, it does seem odd, because this is really not solving the 
violation. You are just putting up a smoke screen in the hope that 
there are no big issues in the code...

> 
> Honestly, I don't see another way to make progress on this, except for
> maybe deviating project-wide "struct notifier_block". But that's
> conceptually the same thing as this patch.

I still don't quite understand why you are so eager to hide violations 
just so that you can progress faster on other bits.

I personally cannot put my name on a patch whose goal is to add a 
comment that no one has verified is actually true (i.e. that all the 
cases we care about are handled). In particular in the Arm code, 
because IIRC we have already identified issues in the notifiers in the 
past.

I think it would be worth discussing the approach again in the next 
MISRA meeting.

Cheers,

> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 10:13:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 10:13:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745811.1152900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKxk0-0002sv-3f; Sat, 22 Jun 2024 10:13:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745811.1152900; Sat, 22 Jun 2024 10:13:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKxjz-0002so-W1; Sat, 22 Jun 2024 10:13:03 +0000
Received: by outflank-mailman (input) for mailman id 745811;
 Sat, 22 Jun 2024 10:13:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKxjz-0002se-Ju; Sat, 22 Jun 2024 10:13:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKxjz-0000BT-87; Sat, 22 Jun 2024 10:13:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sKxjy-0007zv-RO; Sat, 22 Jun 2024 10:13:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sKxjy-0006YH-Ql; Sat, 22 Jun 2024 10:13:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jY0Lwm21AGMM0f5fBzfDyKG9mU/9Ys0tq1jhd7kyZKE=; b=wDdw/+8cXRMPjSdZUQaudicAF0
	mSSev7Y67RCVshRk1RiEqfPwg8OlPrH/qpPeFBWi9Gu5+sBmx1PH2sLk2WkxxwNG2EjuGtkn4ZC0v
	VBbKtZKsqjQmTzBS6HVJ7zrrgpP7iFSwuA13MeZW3WcQMdqtXHKd5C/gpcfIdgAfTttQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186449-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186449: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4545981f33be5f04ee8dd43397ea29c37337dd03
X-Osstest-Versions-That:
    linux=50736169ecc8387247fe6a00932852ce7b057083
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Jun 2024 10:13:02 +0000

flight 186449 linux-linus real [real]
flight 186454 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186449/
http://logs.test-lab.xenproject.org/osstest/logs/186454/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 186437

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-localmigrate/x10 fail in 186454 pass in 186449
 test-armhf-armhf-xl           8 xen-boot            fail pass in 186454-retest
 test-armhf-armhf-xl-raw       8 xen-boot            fail pass in 186454-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186454 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186454 never pass
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186454 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186454 never pass
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186437
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186437
 test-armhf-armhf-xl-credit1   8 xen-boot                     fail  like 186437
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186437
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186437
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                4545981f33be5f04ee8dd43397ea29c37337dd03
baseline version:
 linux                50736169ecc8387247fe6a00932852ce7b057083

Last test of basis   186437  2024-06-20 20:10:42 Z    1 days
Testing same since   186449  2024-06-21 18:42:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abel Vesa <abel.vesa@linaro.org>
  Arnd Bergmann <arnd@arndb.de>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Dave Jiang <dave.jiang@intel.com>
  fhortner@yahoo.de
  Hans de Goede <hdegoede@redhat.com>
  Julien Panis <jpanis@baylibre.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Li RongQing <lirongqing@baidu.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Louis Chauvet <louis.chauvet@bootlin.com>
  Nikita Shubin <n.shubin@yadro.com>
  Pablo Caño <pablocpascual@gmail.com>
  Peng Fan <peng.fan@nxp.com>
  Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Peter Ujfalusi@gmail.com
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju Rangoju <Raju.Rangoju@amd.com>
  Rob Herring (Arm) <robh@kernel.org>
  Sanath S <Sanath.S@amd.com>
  Siddharth Vadapalli <s-vadapalli@ti.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Takashi Iwai <tiwai@suse.de>
  Vinod Koul <vkoul@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 648 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 10:48:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 10:48:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745825.1152910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKyI6-0006nG-Ny; Sat, 22 Jun 2024 10:48:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745825.1152910; Sat, 22 Jun 2024 10:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKyI6-0006n9-KZ; Sat, 22 Jun 2024 10:48:18 +0000
Received: by outflank-mailman (input) for mailman id 745825;
 Sat, 22 Jun 2024 10:48:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zABL=NY=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sKyI5-0006mn-Sq
 for xen-devel@lists.xenproject.org; Sat, 22 Jun 2024 10:48:17 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec6d9085-3084-11ef-b4bb-af5377834399;
 Sat, 22 Jun 2024 12:48:14 +0200 (CEST)
Received: from truciolo.homenet.telecomitalia.it
 (host-87-12-240-97.business.telecomitalia.it [87.12.240.97])
 by support.bugseng.com (Postfix) with ESMTPSA id 7C3E74EE0738;
 Sat, 22 Jun 2024 12:48:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec6d9085-3084-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Nicola Vetrini <nicola.vetrini@bugseng.com>
Subject: [XEN PATCH] automation/eclair: configure Rule 13.6 and custom service B.UNEVALEFF
Date: Sat, 22 Jun 2024 12:48:05 +0200
Message-Id: <73b4105bf007e5bd0f351ec70711ba7219f31eb3.1719053209.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rule 13.6 states that "The operand of the `sizeof' operator shall not
contain any expression which has potential side effects".
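
As an illustration of the hazard the rule targets (an invented example, not part of the patch): the operand of sizeof is never evaluated for a non-VLA type, so any side effect written there silently does not happen:

```c
#include <stddef.h>

/* Returns sizeof's result and reports, via *out_i, whether the
 * increment inside the sizeof operand ever ran.  It does not:
 * sizeof computes its result from the operand's type alone, so
 * "i++" is dead code -- the silent surprise Rule 13.6 forbids. */
size_t unevaluated_increment(int *out_i)
{
    int i = 0;
    size_t s = sizeof(i++);  /* Rule 13.6 violation: i is NOT incremented */

    *out_i = i;
    return s;
}
```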

Define service B.UNEVALEFF as an extension of Rule 13.6 to check for
unevaluated side effects in the operands of the typeof and alignof
operators as well.

Update ECLAIR configuration to deviate uses of macro chk_fld for
Rule 13.6 and alternative_v?call[0-9] for both Rule 13.6 and
B.UNEVALEFF.

Add service B.UNEVALEFF to the accepted.ecl guidelines and check
"violations" in the weekly analysis.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl    | 10 ++++++++++
 .../eclair_analysis/ECLAIR/accepted_guidelines.sh    |  2 ++
 automation/eclair_analysis/ECLAIR/analysis.ecl       |  1 +
 automation/eclair_analysis/ECLAIR/deviations.ecl     | 10 ++++++++++
 docs/misra/deviations.rst                            | 12 ++++++++++++
 5 files changed, 35 insertions(+)
 create mode 100644 automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl

diff --git a/automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl b/automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl
new file mode 100644
index 0000000000..92d8db8986
--- /dev/null
+++ b/automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl
@@ -0,0 +1,10 @@
+-clone_service=MC3R1.R13.6,B.UNEVALEFF
+
+-config=B.UNEVALEFF,summary="The operand of the `alignof' and `typeof'  operators shall not contain any expression which has potential side effects"
+-config=B.UNEVALEFF,stmt_child_matcher=
+{"stmt(node(utrait_expr)&&operator(alignof))", expr, 0, "stmt(any())", {}},
+{"stmt(node(utrait_type)&&operator(alignof))", type, 0, "stmt(any())", {}},
+{"stmt(node(utrait_expr)&&operator(preferred_alignof))", expr, 0, "stmt(any())", {}},
+{"stmt(node(utrait_type)&&operator(preferred_alignof))", type, 0, "stmt(any())", {}},
+{"type(node(typeof_expr))", expr, 0, "stmt(any())", {}},
+{"type(node(typeof_type))", type, 0, "stmt(any())", {}}
diff --git a/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh b/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh
index b308bd4cda..368135122c 100755
--- a/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh
+++ b/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh
@@ -11,3 +11,5 @@ accepted_rst=$1
 
 grep -Eo "\`(Dir|Rule) [0-9]+\.[0-9]+" ${accepted_rst} \
      | sed -e 's/`Rule /MC3R1.R/' -e  's/`Dir /MC3R1.D/' -e 's/.*/-enable=&/' > ${script_dir}/accepted.ecl
+
+echo "-enable=B.UNEVALEFF" >> ${script_dir}/accepted.ecl
diff --git a/automation/eclair_analysis/ECLAIR/analysis.ecl b/automation/eclair_analysis/ECLAIR/analysis.ecl
index 9134e59617..df0b551812 100644
--- a/automation/eclair_analysis/ECLAIR/analysis.ecl
+++ b/automation/eclair_analysis/ECLAIR/analysis.ecl
@@ -52,6 +52,7 @@ their Standard Library equivalents."
 -eval_file=adopted.ecl
 -eval_file=out_of_scope.ecl
 
+-eval_file=B.UNEVALEFF.ecl
 -eval_file=deviations.ecl
 -eval_file=call_properties.ecl
 -eval_file=tagging.ecl
diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index e2653f77eb..6bdfe7c883 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -328,6 +328,16 @@ of the short-circuit evaluation strategy of such logical operators."
 -config=MC3R1.R13.5,reports+={disapplied,"any()"}
 -doc_end
 
+-doc_begin="Macros alternative_v?call[0-9] use sizeof and typeof to check that the argument types match the corresponding parameter ones."
+-config=MC3R1.R13.6,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_vcall[0-9]$))&&file(^xen/arch/x86.*$)))"}
+-config=B.UNEVALEFF,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_v?call[0-9]$))&&file(^xen/arch/x86.*$)))"}
+-doc_end
+
+-doc_begin="Macro chk_fld is only used to introduce BUILD_BUG_ON checks in very specific cases where the usage is correct and can be checked by code inspection.
+The BUILD_BUG_ON checks check that EFI_TIME and struct xenpf_efi_time fields match."
+-config=MC3R1.R13.6,reports+={deliberate,"any_area(any_loc(any_exp(macro(^chk_fld$))&&file(^xen/common/efi/runtime\\.c$)))"}
+-doc_end
+
 #
 # Series 14
 #
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..122e779f30 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -279,6 +279,18 @@ Deviations related to MISRA C:2012 Rules:
        the short-circuit evaluation strategy for logical operators.
      - Project-wide deviation; tagged as `disapplied` for ECLAIR.
 
+   * - R13.6
+     - On x86, macros alternative_vcall[0-9] use sizeof to type-check the
+       arguments of \"func\" without evaluating them.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R13.6
+     - Macro chk_fld is only used to introduce BUILD_BUG_ON checks in very
+       specific cases where its usage is correct, as can be verified by code
+       inspection. The BUILD_BUG_ON checks verify that the fields of
+       EFI_TIME and struct xenpf_efi_time match.
+     - Tagged as `deliberate` for ECLAIR.
+
    * - R14.2
      - The severe restrictions imposed by this rule on the use of 'for'
        statements are not counterbalanced by the presumed facilitation of the
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sat Jun 22 12:22:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 12:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745860.1152920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKzl7-0001W2-Ki; Sat, 22 Jun 2024 12:22:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745860.1152920; Sat, 22 Jun 2024 12:22:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sKzl7-0001Vv-Gu; Sat, 22 Jun 2024 12:22:21 +0000
Received: by outflank-mailman (input) for mailman id 745860;
 Sat, 22 Jun 2024 12:22:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=O0zD=NY=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sKzl7-0001Vm-8C
 for xen-devel@lists.xenproject.org; Sat, 22 Jun 2024 12:22:21 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1182efe0-3092-11ef-90a3-e314d9c70b13;
 Sat, 22 Jun 2024 14:22:19 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-42172ed3487so21606445e9.0
 for <xen-devel@lists.xenproject.org>; Sat, 22 Jun 2024 05:22:19 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-42481747c76sm63506205e9.0.2024.06.22.05.22.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 22 Jun 2024 05:22:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1182efe0-3092-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719058938; x=1719663738; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=OskZSA95z4SLwpyXDqrqW8BgeB6DkJ/Y0wrFrK2f5Gk=;
        b=RX1hSlwphwSxtt/EOBtvZ6p8wmIUuDeKnXTNGXyD9xodPsDNi+miSHGe3tHBQKfmFg
         vp7wQR+W+H3BbmYPsezI57GJHMvs6OCW+7oDfoPQpk2I+L8o615Vw7cxXdUSuNM3Ft4p
         TrNCTt82ujZ+909KVgm6/WXGtp6h2To+mZMyI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719058938; x=1719663738;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OskZSA95z4SLwpyXDqrqW8BgeB6DkJ/Y0wrFrK2f5Gk=;
        b=efcyW3wzPrL0RoZBoIoRrQ24bPsDe4R8hTrU9ocvcpyLBT/9sFtk2fmTcKK/uKp2rJ
         Rm1jiSauSAxw032Ctv12HlEBNl2dSZ4xxjrjirbGNvY0Hnd3XT2on1CyoMqWGfVIA0y5
         eiB7ot/W05YxvFUV5ZSOPodXz8Lvw0uHZzpn5jjYvtAEuEADkeXG0B1fvmrDickP2D6J
         7oh7e2S/3AFvuW57+kD5AstKW9e27Q8EbtXI9V1P0gxpUv1AzsX7EpLmdtpBMjIJqB++
         +nFP7AN4CoqU6jwSxPpHVKpjCN3arqiXaBJQxqqa47TMzbFSNHOLoZD7rHQWihK27LS2
         BhsQ==
X-Forwarded-Encrypted: i=1; AJvYcCWrBwj02Rj9PdOb5kMuRTLcBqCO+hdcUUy3dB+RM2LSaXurkQLUDax5uhiHkYOtW+N32hwvjUnPQQtZpci4BbKJDfU+TXfOJ/XbmvGFHnM=
X-Gm-Message-State: AOJu0YxGGl1LTHj0lR9LeDqyGCBHIzr6ZLuRlarPBsX0ChqumaKPPf3n
	fdQoHeaGnix52U5TwTAm6G7CS/RnY2PfCpjuQ/u7diLZmpS8XGiceLG0QBWZb2w=
X-Google-Smtp-Source: AGHT+IHJWspt2GfyfZPYEhtSiwimIdER4I5suxvT/z/dIxUD8liW2TytBf/ogipZRT7RxfPslCTJqQ==
X-Received: by 2002:a05:600c:489a:b0:424:8b0e:be07 with SMTP id 5b1f17b1804b1-4248b0ebf6dmr3996185e9.5.1719058938400;
        Sat, 22 Jun 2024 05:22:18 -0700 (PDT)
Message-ID: <e8ca39e4-dd44-4399-9712-e1737f98bd0e@citrix.com>
Date: Sat, 22 Jun 2024 13:22:17 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] automation/eclair: configure Rule 13.6 and custom
 service B.UNEVALEFF
To: Federico Serafini <federico.serafini@bugseng.com>,
 xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>
References: <73b4105bf007e5bd0f351ec70711ba7219f31eb3.1719053209.git.federico.serafini@bugseng.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <73b4105bf007e5bd0f351ec70711ba7219f31eb3.1719053209.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 22/06/2024 11:48 am, Federico Serafini wrote:
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index e2653f77eb..6bdfe7c883 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -328,6 +328,16 @@ of the short-circuit evaluation strategy of such logical operators."
>  -config=MC3R1.R13.5,reports+={disapplied,"any()"}
>  -doc_end
>  
> +-doc_begin="Macros alternative_v?call[0-9] use sizeof and typeof to check that th argument types match the corresponding parameter ones."

Typo "th" => "the".  Can be fixed on commit.

> +-config=MC3R1.R13.6,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_vcall[0-9]$))&&file(^xen/arch/x86.*$)))"}
> +-config=B.UNEVALEFF,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_v?call[0-9]$))&&file(^xen/arch/x86.*$)))"}

There will be expansions of these macros outside of arch/x86/;
drivers/iommu/ is one example.

Is the file() clause targeting the source of the macro (in which case it
probably wants making more specific to alternative_call.h), or the
expansions of the macro (in which case it probably wants dropping
entirely)?
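Either reading maps onto a small tweak of the configuration above. As a
sketch only, reusing the matcher syntax from the patch (the header path in
(a) is an assumption based on the file name mentioned above, not verified
against the tree):

```
# (a) Restrict to the header defining the macros -- path is assumed:
-config=B.UNEVALEFF,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_v?call[0-9]$))&&file(^xen/arch/x86/include/asm/alternative-call\\.h$)))"}

# (b) Cover every expansion site, wherever it occurs -- drop file():
-config=B.UNEVALEFF,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_v?call[0-9]$))))"}
```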

> +-doc_end
> +
> +-doc_begin="Macro chk_fld is only used to introduce BUILD_BUG_ON checks in very specific cases where the usage is correct and can be checked by code inspection.
> +The BUILD_BUG_ON checks check that EFI_TIME and struct xenpf_efi_time fields match."
> +-config=MC3R1.R13.6,reports+={deliberate,"any_area(any_loc(any_exp(macro(^chk_fld$))&&file(^xen/common/efi/runtime\\.c$)))"}
> +-doc_end

It's just occurred to me.  Anything, no matter how complicated, inside a
BUILD_BUG_ON() is clearly a compile-time construct, so it has no relevant
side effects.

Is it possible to generally exclude any sizeof/alignof/typeof inside a
BUILD_BUG_ON()?  That would be better than identifying every user locally.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 16:40:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 16:40:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745889.1152930 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sL3mi-0005Pu-Rz; Sat, 22 Jun 2024 16:40:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745889.1152930; Sat, 22 Jun 2024 16:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sL3mi-0005Pn-Op; Sat, 22 Jun 2024 16:40:16 +0000
Received: by outflank-mailman (input) for mailman id 745889;
 Sat, 22 Jun 2024 16:40:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Zwp=NY=gmail.com=jandryuk@srs-se1.protection.inumbo.net>)
 id 1sL3mh-0005Ph-DQ
 for xen-devel@lists.xenproject.org; Sat, 22 Jun 2024 16:40:15 +0000
Received: from mail-ed1-x52d.google.com (mail-ed1-x52d.google.com
 [2a00:1450:4864:20::52d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 17f12dec-30b6-11ef-b4bb-af5377834399;
 Sat, 22 Jun 2024 18:40:11 +0200 (CEST)
Received: by mail-ed1-x52d.google.com with SMTP id
 4fb4d7f45d1cf-57d07673185so2821067a12.1
 for <xen-devel@lists.xenproject.org>; Sat, 22 Jun 2024 09:40:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17f12dec-30b6-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719074411; x=1719679211; darn=lists.xenproject.org;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=5K/9DHneEBGpiQrfpPObWq9/9aSOcT/XXUeMIfFd2Hw=;
        b=lx4tHvb01HfXkZZI1CRQXXCQ1/kUOkj9ikC4XnyGIov2Ro9FL6LTXIDCMtLAh/zz5b
         2UT6UWeuGeHbPXGeGcU2nqwi7lcJGeBT2JYzLGRb5OlGMclKx3vH1UVLhfs+K/Y1NvUb
         kP/lPuS/8xMSVt5hVyqCAmk2ZF48LstPz1YVny+SOjr7BFMsCBkV/GnC/vgTJbwqbkWH
         sUL2DG9LXPGAUgJmevSHKzwfw7fMcKd6yC9a6ZojPT0LMhRNaZBwtMKLpeEO+rqRy8y8
         HxMlm7DRkA7k2HlKfaoM3qad5sy6lPZjT9yJejEBRjq2Jpq2j0w4CpT7UUHBC8GV95Gd
         F2sA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719074411; x=1719679211;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=5K/9DHneEBGpiQrfpPObWq9/9aSOcT/XXUeMIfFd2Hw=;
        b=k8fcWWXnYTmbWE1jXvWRRKxN0W04spjqnJj7Rlj+t4H57o2Pu3il993ePx1EEXbe53
         SrTbCFWJZ/UibYW3C56SImXEiZspGQS/oPhrWewppuIEOxj2P00rbi+EFFiwKdEZ6qsP
         tEZLBYky5uLxnxDiHaP+VjL13eZ2e2tsWLQHVP7j1vc+deQ/Ms7nhfYJfSPYAPQzBDtp
         9UoNxNyglheE7aSWgZylt2GFSA2C6s6fTudE1wsGhaULhT+5XxlDhcltXR5pBnzbomIT
         ZTVqnVxNyTO16lroB3tpKnxDKIkNDsPVnzIPHqBwg8il24F/QNTI+KmA1Csowe+uf3xr
         wIzA==
X-Gm-Message-State: AOJu0Yz+Cb4hNSyxlGqrJeAAqLm2cJNO4CeCvhgq8GrIJUoGGLNLGUMC
	9Wx0QQ9/pFTrTefhMZXml4XTU7qzO3MF0RR2Mo0wHZI6bZN1pmOGWAI7bf7kbDpinSdq5nCL9IY
	iadwXEgT7NvEO7sZ3erKTCzOGO1KvSC9U
X-Google-Smtp-Source: AGHT+IHOimMf5G6fqS/qCMY8X/4XtbFwx6kGOlAk2aNg3BtmbGgm+0D8gIDvxzYYIbqQcQjLbR6p+VubA2YAdCgLw3s=
X-Received: by 2002:a50:9f42:0:b0:57c:5503:940b with SMTP id
 4fb4d7f45d1cf-57d4a28175cmr524372a12.15.1719074410476; Sat, 22 Jun 2024
 09:40:10 -0700 (PDT)
MIME-Version: 1.0
From: Jason Andryuk <jandryuk@gmail.com>
Date: Sat, 22 Jun 2024 12:39:58 -0400
Message-ID: <CAKf6xptZay+suSB4Kf2jutt_W_KFqM1_grND=hQv1Rids7srpQ@mail.gmail.com>
Subject: Xen vkb uevent/modalias fix (Debian installer bug)
To: xen-devel <xen-devel@lists.xenproject.org>, 983357@bugs.debian.org
Content-Type: text/plain; charset="UTF-8"

Hi,

A commit addressing the Xen virtual keyboard's large uevent/modalias
failure has been merged into upstream Linux for the upcoming 6.10
release:
https://git.kernel.org/torvalds/c/0774d19038c496f0c3602fb505c43e1b2d8eed85

This should address https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=983357
and https://github.com/systemd/systemd/issues/22944

It has already been backported to these stable releases:
6.9.3
6.8.12
6.6.33
6.1.95

It is queued for these future releases (assuming the numbering holds):
5.15.162
5.10.221
5.4.279
4.19.317

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 16:40:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 16:40:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745890.1152939 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sL3mp-0005eu-1Y; Sat, 22 Jun 2024 16:40:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745890.1152939; Sat, 22 Jun 2024 16:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sL3mo-0005el-V6; Sat, 22 Jun 2024 16:40:22 +0000
Received: by outflank-mailman (input) for mailman id 745890;
 Sat, 22 Jun 2024 16:40:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sL3mo-0005eJ-0W; Sat, 22 Jun 2024 16:40:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sL3mn-0000TK-Rm; Sat, 22 Jun 2024 16:40:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sL3mn-0006tz-D9; Sat, 22 Jun 2024 16:40:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sL3mn-0000LX-Cd; Sat, 22 Jun 2024 16:40:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/fW++GsFOUKVncAY+Wnem72RFrR8rLUCfTpehbUs7/Q=; b=6VGzFLTryI51jRTgcxPXBKAmeh
	EobKnB1dSVVbAhvqP26i4D6eSPbL81fc0VaXy5Ao8AX4YANBalsov9Ofv2MrUHPN13Mq/c5AfUWf+
	LasoxNnFz8s6blPHPtajNweorJYVQqHsrp8wmeFbYOIIu3OdCQM2lOFde7xnMx4Q/dXA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186451-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186451: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=43a0881274e632dc44fff9320357dc8bf31e4826
X-Osstest-Versions-That:
    libvirt=e6b94cba7ee1ea0ab3a49ebdd2520c4a6259a013
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Jun 2024 16:40:21 +0000

flight 186451 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186451/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186441
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              43a0881274e632dc44fff9320357dc8bf31e4826
baseline version:
 libvirt              e6b94cba7ee1ea0ab3a49ebdd2520c4a6259a013

Last test of basis   186441  2024-06-21 04:20:23 Z    1 days
Testing same since   186451  2024-06-22 04:20:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   e6b94cba7e..43a0881274  43a0881274e632dc44fff9320357dc8bf31e4826 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 22 20:59:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 Jun 2024 20:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745926.1152950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sL7oo-00082R-49; Sat, 22 Jun 2024 20:58:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745926.1152950; Sat, 22 Jun 2024 20:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sL7oo-00082B-1F; Sat, 22 Jun 2024 20:58:42 +0000
Received: by outflank-mailman (input) for mailman id 745926;
 Sat, 22 Jun 2024 20:58:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sL7on-000821-3N; Sat, 22 Jun 2024 20:58:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sL7om-0005cm-H8; Sat, 22 Jun 2024 20:58:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sL7om-0005LO-0r; Sat, 22 Jun 2024 20:58:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sL7om-0008H9-0R; Sat, 22 Jun 2024 20:58:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KdtLqiF151BoNoTlf9R4pe5xBD9lf8zp+Tdl4+xAvHE=; b=tlI7FWZwFSdg3HB0XNxoblhFIS
	jA3HRgXsJP99smFG8/M1wEa/YVOCuoLNzn7e91l7VWyE/yLDSvIwX7MM2L8ha68/nyvkp5ftFQWJr
	Xm8/KLMUh8nLIjfvEJJ7GqOoG0F4CGnjwITDiWFPMnt/cogaxhPJguIBQPiwjMWjqJwY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186453-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186453: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
X-Osstest-Versions-That:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 Jun 2024 20:58:40 +0000

flight 186453 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186453/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186448
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186448
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186448
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186448
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186448
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186448
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
baseline version:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Last test of basis   186453  2024-06-22 05:44:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 23 00:38:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Jun 2024 00:38:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745952.1152960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLBFa-0006we-20; Sun, 23 Jun 2024 00:38:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745952.1152960; Sun, 23 Jun 2024 00:38:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLBFZ-0006wX-VN; Sun, 23 Jun 2024 00:38:33 +0000
Received: by outflank-mailman (input) for mailman id 745952;
 Sun, 23 Jun 2024 00:38:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLBFX-0006wN-UN; Sun, 23 Jun 2024 00:38:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLBFX-0001jk-Kp; Sun, 23 Jun 2024 00:38:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLBFX-0001zh-4O; Sun, 23 Jun 2024 00:38:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLBFX-000705-3j; Sun, 23 Jun 2024 00:38:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fXo0XJxWxxsi7ry1okSAJCWzsPfQnZxTR3GJDxGUBgo=; b=iX0ZY5MY4iy7ivFBpuT1K169u2
	BQWZtUFbpJHom7RaH4G8DHZ1xnnwXOaZyhBi9ZGMLqGuktQ5/qkbIo/vU40xe4AuM8P1ntlnUmmBv
	Z04agZO3m+oXwjP/0SyJPEJO4NG33PnFhhLhsuzaWutMzbCvsK54DOpuHA7LNb7VwZv0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186456-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186456: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-amd64-xsm:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=35bb670d65fc0f80c62383ab4f2544cec85ac57a
X-Osstest-Versions-That:
    linux=50736169ecc8387247fe6a00932852ce7b057083
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Jun 2024 00:38:31 +0000

flight 186456 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186456/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 186437
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 186437
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 186437

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186437
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186437
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186437
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                35bb670d65fc0f80c62383ab4f2544cec85ac57a
baseline version:
 linux                50736169ecc8387247fe6a00932852ce7b057083

Last test of basis   186437  2024-06-20 20:10:42 Z    2 days
Failing since        186449  2024-06-21 18:42:38 Z    1 days    2 attempts
Testing same since   186456  2024-06-22 10:16:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abel Vesa <abel.vesa@linaro.org>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Arnd Bergmann <arnd@arndb.de>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Bart Van Assche <bvanassche@acm.org>
  Chenliang Li <cliang01.li@samsung.com>
  Christian König <christian.koenig@amd.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Miess <daniel.miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  Dave Jiang <dave.jiang@intel.com>
  fhortner@yahoo.de
  Hamza Mahfooz <hamza.mahfooz@amd.com>
  Hans de Goede <hdegoede@redhat.com>
  Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com>
  Honggang LI <honggangli@163.com>
  Jani Nikula <jani.nikula@intel.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jens Axboe <axboe@kernel.dk>
  Joel Slebodnick <jslebodn@redhat.com>
  Julien Panis <jpanis@baylibre.com>
  Konstantin Taranov <kotaranov@microsoft.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Leon Romanovsky <leon@kernel.org>
  Li RongQing <lirongqing@baidu.com>
  Likun Gao <Likun.Gao@amd.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Louis Chauvet <louis.chauvet@bootlin.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Strauss <michael.strauss@amd.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Nathan Chancellor <nathan@kernel.org>
  Nikita Shubin <n.shubin@yadro.com>
  Pablo Caño <pablocpascual@gmail.com>
  Patrisious Haddad <phaddad@nvidia.com>
  Paul Hsieh <paul.hsieh@amd.com>
  Peng Fan <peng.fan@nxp.com>
  Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Peter Ujfalusi@gmail.com
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju Rangoju <Raju.Rangoju@amd.com>
  Rob Herring (Arm) <robh@kernel.org>
  Roman Li <roman.li@amd.com>
  Sanath S <Sanath.S@amd.com>
  Selvin Xavier <selvin.xavier@broadcom.com>
  Siddharth Vadapalli <s-vadapalli@ti.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Vinod Koul <vkoul@kernel.org>
  Yishai Hadas <yishaih@nvidia.com>
  Yunxiang Li <Yunxiang.Li@amd.com>
  Zaeem Mohamed <zaeem.mohamed@amd.com>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1444 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 23 03:24:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Jun 2024 03:24:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745965.1152969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLDps-0008F6-8g; Sun, 23 Jun 2024 03:24:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745965.1152969; Sun, 23 Jun 2024 03:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLDps-0008Ez-6A; Sun, 23 Jun 2024 03:24:12 +0000
Received: by outflank-mailman (input) for mailman id 745965;
 Sun, 23 Jun 2024 03:24:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OXD4=NZ=linux.intel.com=baolu.lu@srs-se1.protection.inumbo.net>)
 id 1sLDpr-0008Et-7I
 for xen-devel@lists.xenproject.org; Sun, 23 Jun 2024 03:24:11 +0000
Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.18])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0b44ecd7-3110-11ef-90a3-e314d9c70b13;
 Sun, 23 Jun 2024 05:24:07 +0200 (CEST)
Received: from orviesa008.jf.intel.com ([10.64.159.148])
 by orvoesa110.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 Jun 2024 20:24:04 -0700
Received: from unknown (HELO [10.239.159.127]) ([10.239.159.127])
 by orviesa008.jf.intel.com with ESMTP; 22 Jun 2024 20:24:01 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b44ecd7-3110-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1719113047; x=1750649047;
  h=message-id:date:mime-version:cc:subject:to:references:
   from:in-reply-to:content-transfer-encoding;
  bh=ZQAdAA6fO1mYmX4GbtwmVh2Kj57oeksFj2+YVnS3DHk=;
  b=g5d6BAMCgsF74xcFut6BaWMtpgktBY6pRjDHLDgLWf6gmaskp/SKcKVs
   hirnFUik82Uabj5kjUVVw5m397uY7Mq7uV6Zyg6ehLus9ZqrRt77TYOOo
   j5UorjJ83s4T1Pig8k9ri9TH2TgApZt40NmNFLFq3G7NA12090o1fzNtq
   4dbHOTG1GmgylXqkarYdKdXq/9ziv0D/+ZchG8zJbeo8qaA2J6xe3GIzE
   Oc4CGkzCIxlm0o5+ZM5uCkK0hWWVHD8y3DCrVWY68SYNGzTqH8RjAX3iw
   d/2897/iduiImIPgEbH27V6DXU2iHa2oK7T3R7UCynjFIViRCmI/jD5s8
   Q==;
X-CSE-ConnectionGUID: /n+ML786QM25UiqQ9GaZ0g==
X-CSE-MsgGUID: /qXqPECRRvKc+CVw/1n+XA==
X-IronPort-AV: E=McAfee;i="6700,10204,11111"; a="16249329"
X-IronPort-AV: E=Sophos;i="6.08,259,1712646000"; 
   d="scan'208";a="16249329"
X-CSE-ConnectionGUID: mGbhDtelQlS7Rw2zRNd/mQ==
X-CSE-MsgGUID: ACHECqE7SMuKm7SiobHzuw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.08,259,1712646000"; 
   d="scan'208";a="43643808"
Message-ID: <4ba90f86-fd14-4d2a-b7a0-c3eaab243565@linux.intel.com>
Date: Sun, 23 Jun 2024 11:21:29 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Cc: baolu.lu@linux.intel.com, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 Robin Murphy <robin.murphy@arm.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH] iommu/xen: Add Xen PV-IOMMU driver
To: Teddy Astie <teddy.astie@vates.tech>, Jason Gunthorpe <jgg@ziepe.ca>
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
 <20240619163000.GK791043@ziepe.ca>
 <750967b7-252f-4523-872f-64b79358c97c@vates.tech>
Content-Language: en-US
From: Baolu Lu <baolu.lu@linux.intel.com>
In-Reply-To: <750967b7-252f-4523-872f-64b79358c97c@vates.tech>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/21/24 11:09 PM, Teddy Astie wrote:
> On 19/06/2024 at 18:30, Jason Gunthorpe wrote:
>> On Thu, Jun 13, 2024 at 01:50:22PM +0000, Teddy Astie wrote:
>>
>>> +struct iommu_domain *xen_iommu_domain_alloc(unsigned type)
>>> +{
>>> +	struct xen_iommu_domain *domain;
>>> +	u16 ctx_no;
>>> +	int ret;
>>> +
>>> +	if (type & IOMMU_DOMAIN_IDENTITY) {
>>> +		/* use default domain */
>>> +		ctx_no = 0;
>> Please use the new ops, domain_alloc_paging and the static identity domain.
> Yes, in the v2, I will use this newer interface.
> 
> I have a question on this new interface: is it valid to not have an
> identity domain (with the "default domain" being blocking)? In the
> current implementation it doesn't really matter, but at some point we
> may want to allow not having it (thus making this driver mandatory).

It's valid to not have an identity domain if "default domain being
blocking" means a paging domain with no mappings.

In the iommu driver's iommu_ops::def_domain_type callback, just always
return IOMMU_DOMAIN_DMA, which indicates that the iommu driver doesn't
support identity translation.

Best regards,
baolu


From xen-devel-bounces@lists.xenproject.org Sun Jun 23 08:17:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Jun 2024 08:17:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.745987.1152979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLIPR-0006kL-D5; Sun, 23 Jun 2024 08:17:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 745987.1152979; Sun, 23 Jun 2024 08:17:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLIPR-0006kE-AR; Sun, 23 Jun 2024 08:17:13 +0000
Received: by outflank-mailman (input) for mailman id 745987;
 Sun, 23 Jun 2024 08:17:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLIPP-0006k4-Nu; Sun, 23 Jun 2024 08:17:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLIPP-0002Gv-An; Sun, 23 Jun 2024 08:17:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLIPO-0003at-TW; Sun, 23 Jun 2024 08:17:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLIPO-0003C6-Se; Sun, 23 Jun 2024 08:17:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f1AuMaluKULbLiP0Kd1wcqcxC2d1GYcUI4p8/dtRjK0=; b=FrEd/SYb2xdtMxfaNB72i9E1Da
	tfgZ9nP2uvndbOB8LOMOPta0r9UdnLIrtPOI7UFv0X/FiYXBMEsrpSQicCd6IhuUjGk8ScU/0T3Hh
	OmgXzoIPsRYMno3InZnr3tbuDMOBrKvA9k5K3Pnl78S1MKqCFRVjZkhr2ArqtBe/h9i0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186457-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186457: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5f583a3162ffd9f7999af76b8ab634ce2dac9f90
X-Osstest-Versions-That:
    linux=50736169ecc8387247fe6a00932852ce7b057083
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Jun 2024 08:17:10 +0000

flight 186457 linux-linus real [real]
flight 186459 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186457/
http://logs.test-lab.xenproject.org/osstest/logs/186459/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-raw       8 xen-boot                 fail REGR. vs. 186437

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186437
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186437
 test-armhf-armhf-xl-credit1   8 xen-boot                     fail  like 186437
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186437
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186437
 test-armhf-armhf-examine      8 reboot                       fail  like 186437
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186437
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186437
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                5f583a3162ffd9f7999af76b8ab634ce2dac9f90
baseline version:
 linux                50736169ecc8387247fe6a00932852ce7b057083

Last test of basis   186437  2024-06-20 20:10:42 Z    2 days
Failing since        186449  2024-06-21 18:42:38 Z    1 days    3 attempts
Testing same since   186457  2024-06-23 00:44:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abel Vesa <abel.vesa@linaro.org>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexey Dobriyan <adobriyan@gmail.com>
  Amit Kumar Mahapatra <amit.kumar-mahapatra@amd.com>
  Andre Przywara <andre.przywara@arm.com>
  Andrew Jones <ajones@ventanamicro.com>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Anup Patel <anup@brainfault.org>
  Arnd Bergmann <arnd@arndb.de>
  Babu Moger <babu.moger@amd.com>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Bart Van Assche <bvanassche@acm.org>
  Bibo Mao <maobibo@loongson.cn>
  Biju Das <biju.das.jz@bp.renesas.com>
  Breno Leitao <leitao@debian.org>
  Chandan Babu R <chandanbabu@kernel.org>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Wang <unicorn_wang@outlook.com>
  Chenliang Li <cliang01.li@samsung.com>
  Christian König <christian.koenig@amd.com>
  Chuck Lever <chuck.lever@oracle.com>
  Colin Ian King <colin.i.king@gmail.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Miess <daniel.miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  Dave Chinner <dchinner@redhat.com>
  Dave Jiang <dave.jiang@intel.com>
  Fabio Estevam <festevam@gmail.com>
  fhortner@yahoo.de
  Frank Li <Frank.Li@nxp.com>
  Hamza Mahfooz <hamza.mahfooz@amd.com>
  Hans de Goede <hdegoede@redhat.com>
  Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com>
  Haylen Chu <heylenay@outlook.com>
  Honggang LI <honggangli@163.com>
  Huacai Chen <chenhuacai@loongson.cn>
  Hui Li <lihui@loongson.cn>
  Inochi Amaoto <inochiama@outlook.com>
  Jani Nikula <jani.nikula@intel.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Joao Paulo Goncalves <joao.goncalves@toradex.com>
  Joel Slebodnick <jslebodn@redhat.com>
  Julien Panis <jpanis@baylibre.com>
  Kalle Niemi <kaleposti@gmail.com>
  Kent Overstreet <kent.overstreet@linux.dev>
  Konstantin Taranov <kotaranov@microsoft.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Leon Romanovsky <leon@kernel.org>
  Li RongQing <lirongqing@baidu.com>
  Likun Gao <Likun.Gao@amd.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Louis Chauvet <louis.chauvet@bootlin.com>
  Luis Chamberlain <mcgrof@kernel.org>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marek Vasut <marex@denx.de>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Max Krummenacher <max.krummenacher@toradex.com>
  Michael Roth <michael.roth@amd.com>
  Michael Strauss <michael.strauss@amd.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Miguel Ojeda <ojeda@kernel.org>
  Miklos Szeredi <mszeredi@redhat.com>
  Nathan Chancellor <nathan@kernel.org>
  Nikita Shubin <n.shubin@yadro.com>
  Niklas Cassel <cassel@kernel.org>
  Nishanth Menon <nm@ti.com>
  Pablo Caño <pablocpascual@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Patrisious Haddad <phaddad@nvidia.com>
  Paul Hsieh <paul.hsieh@amd.com>
  Peng Fan <peng.fan@nxp.com>
  Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Peter Ujfalusi@gmail.com
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju Rangoju <Raju.Rangoju@amd.com>
  Rob Herring (Arm) <robh@kernel.org>
  Roman Li <roman.li@amd.com>
  Sanath S <Sanath.S@amd.com>
  Sean Christopherson <seanjc@google.com>
  Selvin Xavier <selvin.xavier@broadcom.com>
  Shawn Guo <shawnguo@kernel.org>
  Siddharth Vadapalli <s-vadapalli@ti.com>
  Sourabh Jain <sourabhjain@linux.ibm.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Tao Su <tao1.su@linux.intel.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Thomas Richard <thomas.richard@bootlin.com>
  Thorsten Scherer <t.scherer@eckelmann.de>
  Tim Harvey <tharvey@gateworks.com>
  Uwe Kleine-König <u.kleine-koenig@baylibre.com>
  Uwe Kleine-König <ukleinek@kernel.org>
  Vincent Donnefort <vdonnefort@google.com>
  Vinod Koul <vkoul@kernel.org>
  Xi Ruoyao <xry111@xry111.site>
  Yang Li <yang.lee@linux.alibaba.com>
  Yi Lai <yi1.lai@intel.com>
  Yishai Hadas <yishaih@nvidia.com>
  Youling Tang <tangyouling@kylinos.cn>
  Yunxiang Li <Yunxiang.Li@amd.com>
  Zaeem Mohamed <zaeem.mohamed@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3546 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 23 09:27:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Jun 2024 09:27:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746001.1152989 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLJV0-0005zw-B8; Sun, 23 Jun 2024 09:27:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746001.1152989; Sun, 23 Jun 2024 09:27:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLJV0-0005zp-8e; Sun, 23 Jun 2024 09:27:02 +0000
Received: by outflank-mailman (input) for mailman id 746001;
 Sun, 23 Jun 2024 09:27:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5R9h=NZ=wanadoo.fr=christophe.jaillet@srs-se1.protection.inumbo.net>)
 id 1sLJUx-0005zj-Td
 for xen-devel@lists.xenproject.org; Sun, 23 Jun 2024 09:27:00 +0000
Received: from out.smtpout.orange.fr (out-14.smtpout.orange.fr [193.252.22.14])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bbef8973-3142-11ef-90a3-e314d9c70b13;
 Sun, 23 Jun 2024 11:26:58 +0200 (CEST)
Received: from fedora.home ([86.243.222.230]) by smtp.orange.fr with ESMTPA
 id LJUss4O2bZn3gLJUssabkv; Sun, 23 Jun 2024 11:26:56 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bbef8973-3142-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=wanadoo.fr;
	s=t20230301; t=1719134816;
	bh=tuK3/1THv4OAkyKVZvTBBjQAk0ybiMNLzExzqmjJTAA=;
	h=From:To:Subject:Date:Message-ID:MIME-Version;
	b=dyzuUSzxf9eZkx/L57cs7UQgY5D2fkCX1H6T7jJb6bWQ3DK3wV0JxqoEy3REXvnF0
	 bNBmScvyJ66uvrZW7sheAkivRV+0XLlJcREzy+L5nbf2EeHjDAR2nr2GytyJOoMqdb
	 xlS8aGDRl9ifQmQYPQC7fTv+iXyQkl1LqLU+LSyR3mrB1ule/mCpzEGrN+gZxAPIU5
	 k4CPhL8LfJEeOHE018lSazeByHbFVm1USTGgNRnfJy/7IC2OFfqk868YjmWMMP6cUw
	 I2z1AMV7Iku29G47MmVgIfhTfr2VsfcSBGOhBz5GBp7b64tX1qot0vjR+bI0cFhLvW
	 StnZoUlHG3erQ==
X-ME-Helo: fedora.home
X-ME-Auth: Y2hyaXN0b3BoZS5qYWlsbGV0QHdhbmFkb28uZnI=
X-ME-Date: Sun, 23 Jun 2024 11:26:56 +0200
X-ME-IP: 86.243.222.230
From: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
To: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: linux-kernel@vger.kernel.org,
	kernel-janitors@vger.kernel.org,
	Christophe JAILLET <christophe.jaillet@wanadoo.fr>,
	xen-devel@lists.xenproject.org
Subject: [PATCH] xen/manage: Constify struct shutdown_handler
Date: Sun, 23 Jun 2024 11:26:50 +0200
Message-ID: <ca1e75f66aed43191cf608de6593c7d6db9148f1.1719134768.git.christophe.jaillet@wanadoo.fr>
X-Mailer: git-send-email 2.45.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

'struct shutdown_handler' is not modified in this driver.

Constifying this structure moves some data to a read-only section, which
increases overall security.

On x86_64, with allmodconfig:
Before:
=======
   text	   data	    bss	    dec	    hex	filename
   7043	    788	      8	   7839	   1e9f	drivers/xen/manage.o

After:
======
   text	   data	    bss	    dec	    hex	filename
   7164	    676	      8	   7848	   1ea8	drivers/xen/manage.o

Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>
---
Compile tested-only
---
 drivers/xen/manage.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index c16df629907e..b4b4ebed68da 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -208,7 +208,7 @@ static void do_reboot(void)
 	orderly_reboot();
 }
 
-static struct shutdown_handler shutdown_handlers[] = {
+static const struct shutdown_handler shutdown_handlers[] = {
 	{ "poweroff",	true,	do_poweroff },
 	{ "halt",	false,	do_poweroff },
 	{ "reboot",	true,	do_reboot   },
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Sun Jun 23 10:52:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Jun 2024 10:52:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746018.1152999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLKpT-0007iX-9r; Sun, 23 Jun 2024 10:52:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746018.1152999; Sun, 23 Jun 2024 10:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLKpT-0007iQ-7E; Sun, 23 Jun 2024 10:52:15 +0000
Received: by outflank-mailman (input) for mailman id 746018;
 Sun, 23 Jun 2024 10:52:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLKpR-0007iC-NF; Sun, 23 Jun 2024 10:52:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLKpR-00052Q-Bu; Sun, 23 Jun 2024 10:52:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLKpR-0001io-1e; Sun, 23 Jun 2024 10:52:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLKpR-0003fE-1F; Sun, 23 Jun 2024 10:52:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sqTwy42I0W+j+iN+/UaDuGJc0EzCNVV3kRfPfHRkDBY=; b=bTTLt6PoHkbfEWLzFRSNcF7c27
	wjGbdcS325tDgnn6X0z3GWm78X8dHEXpbIvs7QrHaB9CfqmfEuFAtymHn/P0qHZnLY8T1iji9YWz8
	nAskgJMQMCgF7WLtpvcWNrEhe/b0YvBOXSAFSqmWi1jUpRSy7PB0bmev15rXrjVDU3tc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186458-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186458: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
X-Osstest-Versions-That:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Jun 2024 10:52:13 +0000

flight 186458 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186458/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186453
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186453
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186453
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186453
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186453
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186453
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
baseline version:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Last test of basis   186458  2024-06-23 01:54:09 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 23 15:15:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 Jun 2024 15:15:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746051.1153009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLOvb-0001HC-45; Sun, 23 Jun 2024 15:14:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746051.1153009; Sun, 23 Jun 2024 15:14:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLOvb-0001H5-1J; Sun, 23 Jun 2024 15:14:51 +0000
Received: by outflank-mailman (input) for mailman id 746051;
 Sun, 23 Jun 2024 15:14:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLOvZ-0001Gv-Sn; Sun, 23 Jun 2024 15:14:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLOvZ-00012W-H3; Sun, 23 Jun 2024 15:14:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLOvZ-0000Oh-6n; Sun, 23 Jun 2024 15:14:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLOvZ-00060E-6E; Sun, 23 Jun 2024 15:14:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Khvq6iy5qh+FUFe5hd2DG7tQg+K1Yye2wNjQIO7wgGA=; b=6NUzigmeFjHNmplm6t7HUoivPW
	UPyNukvYSOV+UgYxvOzkvdoyqXxIXl9ixK0Mp6KKIxIJwFC8u1/6ZEJXUin1tisZn+mG0amLH7onQ
	xEJQofz+tqkqFZZT3Xmgg68wzOVfedX9qAs6TZPS78Di4L9pQIZDDVSBkzL8U2Ph9RWc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186460-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186460: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-examine:reboot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5f583a3162ffd9f7999af76b8ab634ce2dac9f90
X-Osstest-Versions-That:
    linux=50736169ecc8387247fe6a00932852ce7b057083
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 Jun 2024 15:14:49 +0000

flight 186460 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186460/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-raw       8 xen-boot         fail in 186457 pass in 186460
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 186457
 test-amd64-amd64-xl-pvhv2-amd 22 guest-start/debian.repeat fail pass in 186457

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-vhd  8 xen-boot            fail in 186457 like 186437
 test-armhf-armhf-xl-credit1   8 xen-boot            fail in 186457 like 186437
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186457 like 186437
 test-armhf-armhf-examine      8 reboot              fail in 186457 like 186437
 test-armhf-armhf-xl-rtds      8 xen-boot            fail in 186457 like 186437
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186457 never pass
 test-armhf-armhf-xl-qcow2   14 migrate-support-check fail in 186457 never pass
 test-armhf-armhf-xl-qcow2 15 saverestore-support-check fail in 186457 never pass
 test-armhf-armhf-xl-arndale   8 xen-boot                     fail  like 186437
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186437
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186437
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186437
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                5f583a3162ffd9f7999af76b8ab634ce2dac9f90
baseline version:
 linux                50736169ecc8387247fe6a00932852ce7b057083

Last test of basis   186437  2024-06-20 20:10:42 Z    2 days
Failing since        186449  2024-06-21 18:42:38 Z    1 days    4 attempts
Testing same since   186457  2024-06-23 00:44:09 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abel Vesa <abel.vesa@linaro.org>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexey Dobriyan <adobriyan@gmail.com>
  Amit Kumar Mahapatra <amit.kumar-mahapatra@amd.com>
  Andre Przywara <andre.przywara@arm.com>
  Andrew Jones <ajones@ventanamicro.com>
  Andy Shevchenko <andy.shevchenko@gmail.com>
  Anup Patel <anup@brainfault.org>
  Arnd Bergmann <arnd@arndb.de>
  Babu Moger <babu.moger@amd.com>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Bart Van Assche <bvanassche@acm.org>
  Bibo Mao <maobibo@loongson.cn>
  Biju Das <biju.das.jz@bp.renesas.com>
  Breno Leitao <leitao@debian.org>
  Chandan Babu R <chandanbabu@kernel.org>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Wang <unicorn_wang@outlook.com>
  Chenliang Li <cliang01.li@samsung.com>
  Christian König <christian.koenig@amd.com>
  Chuck Lever <chuck.lever@oracle.com>
  Colin Ian King <colin.i.king@gmail.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Miess <daniel.miess@amd.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Airlie <airlied@redhat.com>
  Dave Chinner <dchinner@redhat.com>
  Dave Jiang <dave.jiang@intel.com>
  Fabio Estevam <festevam@gmail.com>
  fhortner@yahoo.de
  Frank Li <Frank.Li@nxp.com>
  Hamza Mahfooz <hamza.mahfooz@amd.com>
  Hans de Goede <hdegoede@redhat.com>
  Harish Kasiviswanathan <Harish.Kasiviswanathan@amd.com>
  Haylen Chu <heylenay@outlook.com>
  Honggang LI <honggangli@163.com>
  Huacai Chen <chenhuacai@loongson.cn>
  Hui Li <lihui@loongson.cn>
  Inochi Amaoto <inochiama@outlook.com>
  Jani Nikula <jani.nikula@intel.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Joao Paulo Goncalves <joao.goncalves@toradex.com>
  Joel Slebodnick <jslebodn@redhat.com>
  Julien Panis <jpanis@baylibre.com>
  Kalle Niemi <kaleposti@gmail.com>
  Kent Overstreet <kent.overstreet@linux.dev>
  Konstantin Taranov <kotaranov@microsoft.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuogee Hsieh <quic_khsieh@quicinc.com>
  Leon Romanovsky <leon@kernel.org>
  Li RongQing <lirongqing@baidu.com>
  Likun Gao <Likun.Gao@amd.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Louis Chauvet <louis.chauvet@bootlin.com>
  Luis Chamberlain <mcgrof@kernel.org>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marek Vasut <marex@denx.de>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Max Krummenacher <max.krummenacher@toradex.com>
  Michael Roth <michael.roth@amd.com>
  Michael Strauss <michael.strauss@amd.com>
  Michal Wajdeczko <michal.wajdeczko@intel.com>
  Miguel Ojeda <ojeda@kernel.org>
  Miklos Szeredi <mszeredi@redhat.com>
  Nathan Chancellor <nathan@kernel.org>
  Nikita Shubin <n.shubin@yadro.com>
  Niklas Cassel <cassel@kernel.org>
  Nishanth Menon <nm@ti.com>
  Pablo Caño <pablocpascual@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Patrisious Haddad <phaddad@nvidia.com>
  Paul Hsieh <paul.hsieh@amd.com>
  Peng Fan <peng.fan@nxp.com>
  Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Peter Ujfalusi@gmail.com
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju Rangoju <Raju.Rangoju@amd.com>
  Rob Herring (Arm) <robh@kernel.org>
  Roman Li <roman.li@amd.com>
  Sanath S <Sanath.S@amd.com>
  Sean Christopherson <seanjc@google.com>
  Selvin Xavier <selvin.xavier@broadcom.com>
  Shawn Guo <shawnguo@kernel.org>
  Siddharth Vadapalli <s-vadapalli@ti.com>
  Sourabh Jain <sourabhjain@linux.ibm.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Tao Su <tao1.su@linux.intel.com>
  Thomas Hellstrom <thomas.hellstrom@linux.intel.com>
  Thomas Hellström <thomas.hellstrom@linux.intel.com>
  Thomas Richard <thomas.richard@bootlin.com>
  Thorsten Scherer <t.scherer@eckelmann.de>
  Tim Harvey <tharvey@gateworks.com>
  Uwe Kleine-König <u.kleine-koenig@baylibre.com>
  Uwe Kleine-König <ukleinek@kernel.org>
  Vincent Donnefort <vdonnefort@google.com>
  Vinod Koul <vkoul@kernel.org>
  Xi Ruoyao <xry111@xry111.site>
  Yang Li <yang.lee@linux.alibaba.com>
  Yi Lai <yi1.lai@intel.com>
  Yishai Hadas <yishaih@nvidia.com>
  Youling Tang <tangyouling@kylinos.cn>
  Yunxiang Li <Yunxiang.Li@amd.com>
  Zaeem Mohamed <zaeem.mohamed@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   50736169ecc8..5f583a3162ff  5f583a3162ffd9f7999af76b8ab634ce2dac9f90 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 01:10:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 01:10:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746074.1153020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLYDL-0004FC-01; Mon, 24 Jun 2024 01:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746074.1153020; Mon, 24 Jun 2024 01:09:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLYDK-0004F0-QX; Mon, 24 Jun 2024 01:09:46 +0000
Received: by outflank-mailman (input) for mailman id 746074;
 Mon, 24 Jun 2024 01:09:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLYDI-0004El-LW; Mon, 24 Jun 2024 01:09:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLYDI-0003sA-BD; Mon, 24 Jun 2024 01:09:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLYDH-0000by-Ja; Mon, 24 Jun 2024 01:09:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLYDH-00008F-J3; Mon, 24 Jun 2024 01:09:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gur8Fi0lALmruD4LasdQOfQHsNcPr++WX7oeTpsoFdA=; b=k70LFtwQT+qIemGcctUC4W7N6s
	sKi6UxxZb2zS4vb1vN/CdRgpYe/ePRBhC7Ac2txTJ/NtLWoeUZ6FdMQSVzCcSneLJxmq1l2xKwfSu
	9Uc1cY11TgrhwM95+7W0CKbmyGjC2j2WSzClZecMPtUSq/pU5lDAJmfJvndtiRQ8plC8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186462-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186462: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit2:host-ping-check-xen:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748
X-Osstest-Versions-That:
    linux=5f583a3162ffd9f7999af76b8ab634ce2dac9f90
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Jun 2024 01:09:43 +0000

flight 186462 linux-linus real [real]
flight 186463 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186462/
http://logs.test-lab.xenproject.org/osstest/logs/186463/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2  10 host-ping-check-xen fail pass in 186463-retest
 test-armhf-armhf-xl-multivcpu  8 xen-boot           fail pass in 186463-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186463 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186463 never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 186463 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 186463 never pass
 test-armhf-armhf-xl-credit1   8 xen-boot                     fail  like 186457
 test-armhf-armhf-xl-raw       8 xen-boot                     fail  like 186457
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186457
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186457
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186457
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186460
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186460
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186460
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186460
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186460
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186460
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748
baseline version:
 linux                5f583a3162ffd9f7999af76b8ab634ce2dac9f90

Last test of basis   186460  2024-06-23 08:20:08 Z    0 days
Testing same since   186462  2024-06-23 16:10:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexey Makhalov <alexey.makhalov@broadcom.com>
  Andi Shyti <andi.shyti@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Barry Song <v-songbaohua@oppo.com>
  Borislav Petkov (AMD) <bp@alien8.de>
  Dave Martin <Dave.Martin@arm.com>
  David Howells <dhowells@redhat.com>
  Grygorii Tertychnyi <grembeter@gmail.com>
  Grygorii Tertychnyi <grygorii.tertychnyi@leica-geosystems.com>
  Jiaxun Yang <jiaxun.yang@flygoat.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Rapoport (IBM) <rppt@kernel.org>
  Nathan Lynch <nathanl@linux.ibm.com>
  Peter Korsgaard <peter@korsgaard.com>
  Reinette Chatre <reinette.chatre@intel.com>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Steve French <stfrench@microsoft.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   5f583a3162ff..7c16f0a4ed1c  7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 06:35:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 06:35:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746089.1153030 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLdHz-0005jn-Mo; Mon, 24 Jun 2024 06:34:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746089.1153030; Mon, 24 Jun 2024 06:34:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLdHz-0005jg-JV; Mon, 24 Jun 2024 06:34:55 +0000
Received: by outflank-mailman (input) for mailman id 746089;
 Mon, 24 Jun 2024 06:34:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IpOW=N2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sLdHy-0005ja-AN
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 06:34:54 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id db7dcd67-31f3-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 08:34:50 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 2F825219DA;
 Mon, 24 Jun 2024 06:34:49 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id EC1DD13AA4;
 Mon, 24 Jun 2024 06:34:48 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id dGmJN4gTeWZnaQAAD6G6ig
 (envelope-from <jgross@suse.com>); Mon, 24 Jun 2024 06:34:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db7dcd67-31f3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1719210889; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Z3WhiZXNQHgFwvVrbBEz4za7o2pwZgL3jItNx6ibiys=;
	b=ezu47ZH2GWQetljxo8W9EMriwjsho4BU3PlWB+8aiEKMaWvQymGY7PDS3xFAtCHsDSbpt9
	/jXBzP0ap3MzYlxK12qCi5OC0LxQGKTXa8AG91Y54l2XqwOBZAxFQM3ar3pQgUtSe1buEH
	3hcjzbmA91gHsacEtnDOCUiwnl/zWfs=
Authentication-Results: smtp-out1.suse.de;
	dkim=pass header.d=suse.com header.s=susede1 header.b=ezu47ZH2
Message-ID: <988f62b6-642d-404e-ae1e-1c9a428c1eb9@suse.com>
Date: Mon, 24 Jun 2024 08:34:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/manage: Constify struct shutdown_handler
To: Christophe JAILLET <christophe.jaillet@wanadoo.fr>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: linux-kernel@vger.kernel.org, kernel-janitors@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <ca1e75f66aed43191cf608de6593c7d6db9148f1.1719134768.git.christophe.jaillet@wanadoo.fr>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <ca1e75f66aed43191cf608de6593c7d6db9148f1.1719134768.git.christophe.jaillet@wanadoo.fr>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
X-Rspamd-Queue-Id: 2F825219DA
X-Spam-Score: -3.45
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-3.45 / 50.00];
	BAYES_HAM(-2.95)[99.80%];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	R_MIXED_CHARSET(1.00)[subject];
	R_DKIM_ALLOW(-0.20)[suse.com:s=susede1];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_GOOD(-0.10)[text/plain];
	XM_UA_NO_VERSION(0.01)[];
	MX_GOOD(-0.01)[];
	RBL_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:104:10:150:64:97:from];
	DKIM_SIGNED(0.00)[suse.com:s=susede1];
	SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	FREEMAIL_TO(0.00)[wanadoo.fr,kernel.org,epam.com];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	ARC_NA(0.00)[];
	FREEMAIL_ENVRCPT(0.00)[wanadoo.fr];
	RCPT_COUNT_FIVE(0.00)[6];
	RCVD_COUNT_TWO(0.00)[2];
	MID_RHS_MATCH_FROM(0.00)[];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DWL_DNSWL_BLOCKED(0.00)[suse.com:dkim];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	RECEIVED_SPAMHAUS_BLOCKED_OPENRESOLVER(0.00)[2a07:de40:b281:106:10:150:64:167:received];
	RCVD_TLS_ALL(0.00)[];
	DKIM_TRACE(0.00)[suse.com:+];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo,imap1.dmz-prg2.suse.org:rdns,suse.com:email,suse.com:dkim]
X-Rspamd-Action: no action
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org

On 23.06.24 11:26, Christophe JAILLET wrote:
> 'struct shutdown_handler' is not modified in this driver.
> 
> Constifying this structure moves some data to a read-only section, and so
> increases overall security.
> 
> On a x86_64, with allmodconfig:
> Before:
> ======
>     text	   data	    bss	    dec	    hex	filename
>     7043	    788	      8	   7839	   1e9f	drivers/xen/manage.o
> 
> After:
> =====
>     text	   data	    bss	    dec	    hex	filename
>     7164	    676	      8	   7848	   1ea8	drivers/xen/manage.o
> 
> Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:21:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:21:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746097.1153041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLe0T-000388-Rx; Mon, 24 Jun 2024 07:20:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746097.1153041; Mon, 24 Jun 2024 07:20:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLe0T-000381-Nf; Mon, 24 Jun 2024 07:20:53 +0000
Received: by outflank-mailman (input) for mailman id 746097;
 Mon, 24 Jun 2024 07:20:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLe0S-00037v-Ig
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:20:52 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4891a04e-31fa-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:20:50 +0200 (CEST)
Received: from [10.176.134.80] (unknown [160.78.253.181])
 by support.bugseng.com (Postfix) with ESMTPSA id 2A3F04EE0738;
 Mon, 24 Jun 2024 09:20:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4891a04e-31fa-11ef-90a3-e314d9c70b13
Message-ID: <272eb185-4cfb-4aa7-8961-97121a718d20@bugseng.com>
Date: Mon, 24 Jun 2024 09:20:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] automation/eclair: configure Rule 13.6 and custom
 service B.UNEVALEFF
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>
References: <73b4105bf007e5bd0f351ec70711ba7219f31eb3.1719053209.git.federico.serafini@bugseng.com>
 <e8ca39e4-dd44-4399-9712-e1737f98bd0e@citrix.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <e8ca39e4-dd44-4399-9712-e1737f98bd0e@citrix.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 22/06/24 14:22, Andrew Cooper wrote:
> On 22/06/2024 11:48 am, Federico Serafini wrote:
>> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> index e2653f77eb..6bdfe7c883 100644
>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> @@ -328,6 +328,16 @@ of the short-circuit evaluation strategy of such logical operators."
>>   -config=MC3R1.R13.5,reports+={disapplied,"any()"}
>>   -doc_end
>>   
>> +-doc_begin="Macros alternative_v?call[0-9] use sizeof and typeof to check that th argument types match the corresponding parameter ones."
> 
> Typo "th" => "the".  Can be fixed on commit.
> 
>> +-config=MC3R1.R13.6,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_vcall[0-9]$))&&file(^xen/arch/x86.*$)))"}
>> +-config=B.UNEVALEFF,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_v?call[0-9]$))&&file(^xen/arch/x86.*$)))"}
> 
> There will be expansions of these macros outside of arch/x86/.
> drivers/iommu/ as an example.
> 
> Is the file() clause targetting the source of the macro (in which it
> probably wants making more specific to alternative_call.h), or the
> expansions of the macro (in which case it probably wants dropping
> entirely) ?

It is targeting the source of the macro; we can make it more specific.


> 
>> +-doc_end
>> +
>> +-doc_begin="Macro chk_fld is only used to introduce BUILD_BUG_ON checks in very specific cases where the usage is correct and can be checked by code inspection.
>> +The BUILD_BUG_ON checks check that EFI_TIME and struct xenpf_efi_time fields match."
>> +-config=MC3R1.R13.6,reports+={deliberate,"any_area(any_loc(any_exp(macro(^chk_fld$))&&file(^xen/common/efi/runtime\\.c$)))"}
>> +-doc_end
> 
> It's just occurred to me.  Anything, no matter how complicated, inside a
> BUILD_BUG_ON() is clearly a compile time thing so has no relevant side
> effects.
> 
> Is it possible to generally exclude any sizeof/alignof/typeof inside a
> BUILD_BUG_ON()?  That would be better than identifying every user locally.

Sure.

I will send a v2 with your observations.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:21:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:21:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746103.1153050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLe1J-0003cN-4F; Mon, 24 Jun 2024 07:21:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746103.1153050; Mon, 24 Jun 2024 07:21:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLe1J-0003cG-0N; Mon, 24 Jun 2024 07:21:45 +0000
Received: by outflank-mailman (input) for mailman id 746103;
 Mon, 24 Jun 2024 07:21:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLe1H-0003NU-Qr
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:21:43 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 679a54ec-31fa-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 09:21:42 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so64726511fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:21:42 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3c6032sm55886325ad.150.2024.06.24.00.21.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 00:21:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 679a54ec-31fa-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719213702; x=1719818502; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=t0VRRgI0YpuhZtxfk6hZfAMe5paQe2bl7GpLBR+tLnQ=;
        b=LQDUwq4nYCFdDoNOXfEk/GSqcUfZiYdliQpen1fvbfWnXpJidqiNKzK8OUv5VIaOYB
         xviz81APFicgUeD5/en+obJ0hG+bcqnkJLTcF9EIK/k7yzhyc7+AWnVt6Z2KMKXG5Gzv
         k/LyXnuG4e80byOvteReayegwOxDZgt9q9WnCK6fMeVPxZ5Nqc71GP+XPIsuuE30MC22
         ADsrbrI8M/xO1SjKt+gKutTdXa8Avp7ouQhUOByvZ0ly5XSHo6ZZwLkxEYhjYRLeYC1R
         b2G33U1IcmmBM9CxonWg+g4Ml8C0uEeR11eLnqTme/XsaHkKmB2u1sYw9JbKX0Ux7BeN
         hzUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719213702; x=1719818502;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=t0VRRgI0YpuhZtxfk6hZfAMe5paQe2bl7GpLBR+tLnQ=;
        b=MVDKBifFJL5aPydC7REc17EjVWJwNzfweKP91JrS8/35gKd+klOeYeIKwQ4v6R/lGC
         g+yzdAzU7uJ9o0JyPF6bsD1U0EzGZPUYCNnG/f+32QW73bdTNKIQmIEvqOVfJN77S2kI
         VjDjj5xLNk8zlG374J7TTBVEDgIc7IOZg8wAhIfLcT35XPKjgg6jKZRPgx1+oGTsQkKH
         Avq/erJGqCpeA6gfP61MIGlEMquONNxs57BOVu/Dog+If2/GqU9+lQn+Tc8P8e6mYxZG
         Ytzp+ohP4+fFx8WdD0xDs7E3oKzNAxilCOR/hNa4OLLyCNTXs9wkpd7weHuaCWvipEJ7
         Zl7g==
X-Forwarded-Encrypted: i=1; AJvYcCWkiyv2Cl1JCK/ZqWv2eTgr7KmMZHHYPeR+UYSzIfWe4hpODksyLZMJJhYkndlUH+18l+NXbHWmGP60klFqqN4JSjfc2fMR9/P6r4M3o3c=
X-Gm-Message-State: AOJu0Yzwg5KeW8GufjS3hY0J5O3oMfiFFfQTPHAtLH40qUNlQa//EEwt
	d1wwrOS31FgcUKRcis76Z0ngP0oG6YeEHhSwMQ37+OMd42pDJws5i9jm+1cGxg==
X-Google-Smtp-Source: AGHT+IHfv1JnBtP6q5w+iBcyQi/J3lYL+eDMn+K2N/GD1jpOYJ1bj3a4tw9LO0bnhoIECE6WfGXkiw==
X-Received: by 2002:a2e:81d9:0:b0:2eb:e7dd:1f88 with SMTP id 38308e7fff4ca-2ec5b2e5fe5mr26347671fa.25.1719213701625;
        Mon, 24 Jun 2024 00:21:41 -0700 (PDT)
Message-ID: <59da345e-0c2b-41e0-bd04-1f4fff8c905c@suse.com>
Date: Mon, 24 Jun 2024 09:21:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v6 6/9] xen: Make the maximum number of altp2m
 views configurable for x86
To: =?UTF-8?Q?Petr_Bene=C5=A1?= <w1benny@gmail.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1718038855.git.w1benny@gmail.com>
 <fee20e24a94cb29dea81631a6b775933d1151da4.1718038855.git.w1benny@gmail.com>
 <4a49fe9b-66fd-4a32-ad01-14ed4c5fc34c@suse.com>
 <CAKBKdXgUKYoJfB1mG+6JSaV=jWpmRmS1UbQ6N4JNZ774rP_PoQ@mail.gmail.com>
 <4cf14abe-881e-4328-9083-bd04afd6b307@suse.com>
 <CAKBKdXg5jCJwW2n2rkWS1aeHTN4X7-z-STft8n4xJ59JCTDhYA@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CAKBKdXg5jCJwW2n2rkWS1aeHTN4X7-z-STft8n4xJ59JCTDhYA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21.06.2024 13:40, Petr Beneš wrote:
> On Thu, Jun 20, 2024 at 9:25 AM Jan Beulich <jbeulich@suse.com> wrote:
>> Not exactly. You may not assert on idx. The assertion, if any, wants to
>> check d->nr_altp2m against MAX_EPTP.
> 
> In addition to the check in arch_sanitize_domain? As a safeguard?

Well, such an assertion can only validly be put anywhere because of the
checking done in arch_sanitize_domain(). You can view such assertions
both as a safeguard and as a comment-like thing.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:31:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:31:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746111.1153060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeAq-0005JA-18; Mon, 24 Jun 2024 07:31:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746111.1153060; Mon, 24 Jun 2024 07:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeAp-0005J3-TW; Mon, 24 Jun 2024 07:31:35 +0000
Received: by outflank-mailman (input) for mailman id 746111;
 Mon, 24 Jun 2024 07:31:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLeAo-0005Ix-4e
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:31:34 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c6edfbf8-31fb-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 09:31:31 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so64851311fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:31:31 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f0406sm55888695ad.37.2024.06.24.00.31.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 00:31:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6edfbf8-31fb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719214291; x=1719819091; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=XZnbhW7TJOMAeHZnQpiyH347MKdrXnDrOQfYzjkmLkU=;
        b=K6WbmHqrR4RlhZE4/JGVnsNtcS0DuhIIIkJhSHJLLmTrNxpIz7+N3nH+Wns3Pj4RjS
         HGNvmMs90JTEvWekuNcTn2ZBJVfJO9KCVg9epkSJk9VRbH1pwl54VIUlOPqPRNjHx631
         YtRyDA04ZuQ9i7QCl0iBFQk8DtjiBgeT29e/XnQBkxT57Nm2NnsMtd37CylJXfaxkPF5
         LbHld//ECPURYO2tp7fYaLJ0alMW396BL3rjpC8/X0hRQKsj3cJDqupe2VEePmuAetKn
         6wuFKIOtjrV37zrAyf+LF9YSW9Bfs48rZ7IU7JQLOQVJDmwM0VVt3BVDMRCY6I1v4DXI
         uh8A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719214291; x=1719819091;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XZnbhW7TJOMAeHZnQpiyH347MKdrXnDrOQfYzjkmLkU=;
        b=vl97uEnSqaduWYJZOJHzKaH2Gz0pmX6XQOS7NZekq/UA+BhGqcGwBRbLlXKWatVvkq
         D1rH06t4wuYwQLgAI5sikU9UNjMcIh4S1Oq88JeREBh+sfr4yWjg0RfIdeBlxoeQhgqE
         GCxi+W0/yT3rRaLjiod0chDlBTp0nqi0AagJsskoRd5eYSjoRK0eh/nlmZ56SDYu9bKN
         kPUS45UipI8l0UelWEMaQNQxLH9XA2vILJFeI5GIa/eMJ4MAPIXa486X2uMsjG9jkY98
         CHSEQW69akd0P6ifMtP8822CohKGIffaq0xibIiSWuTxdLxdZN5laWxLa8zWPLCIdcWH
         45IA==
X-Forwarded-Encrypted: i=1; AJvYcCWU3fUaDXejcHjaOofn2GgJ+VZuRnay/6YWseg+J6iHIe1IdBDUFsv8k7YTcdOMf8xQrNbu+r+vzN3BgtozQKBqKoppom1yjeRAT4PJf0o=
X-Gm-Message-State: AOJu0YzqjZYUwgzkx89wvORW+MIBLRDTgMMtBX824d6T6OtfQ+IjjoHC
	PjpG/hTus8pK9LQXZeGruVOwdvYBp0eHxpiBWEz+XzRwTbBIIELy2pP0m7X0Ebs8mZVpcGdHF10
	=
X-Google-Smtp-Source: AGHT+IEOeqv5pjvGX49hWwiGzGGV2HObGsJ93Fq6sl7jownTkJ6A9Nj8NDGsotAt6gN4GjRx/OJOIA==
X-Received: by 2002:a2e:8954:0:b0:2ea:e1fe:2059 with SMTP id 38308e7fff4ca-2ec5b2e62f4mr30360651fa.27.1719214291093;
        Mon, 24 Jun 2024 00:31:31 -0700 (PDT)
Message-ID: <ac46ccb1-9247-4b66-a21d-d9841ee9f1ef@suse.com>
Date: Mon, 24 Jun 2024 09:31:24 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] automation/eclair_analysis: deviate MISRA C Rule 21.2
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
 consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, xen-devel@lists.xenproject.org
References: <cover.1718816397.git.alessandro.zucchelli@bugseng.com>
 <5b8364528a9ece8fec9f0e70bee81c2ea94c1820.1718816397.git.alessandro.zucchelli@bugseng.com>
 <02ee9a03-c5b9-4250-960d-e9a2762605c8@suse.com>
 <alpine.DEB.2.22.394.2406201758490.2572888@ubuntu-linux-20-04-desktop>
 <650b7946-ddb5-4428-b6d9-d8f6e0b0f8b9@suse.com>
 <alpine.DEB.2.22.394.2406211619070.2572888@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406211619070.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 22.06.2024 01:27, Stefano Stabellini wrote:
> On Fri, 21 Jun 2024, Jan Beulich wrote:
>> On 21.06.2024 03:02, Stefano Stabellini wrote:
>>> On Thu, 20 Jun 2024, Jan Beulich wrote:
>>>> On 19.06.2024 19:09, Alessandro Zucchelli wrote:
>>>>> Rule 21.2 reports identifiers reserved for the C and POSIX standard
>>>>> libraries: all xen's translation units are compiled with option
>>>>> -nostdinc, this guarantees that these libraries are not used, therefore
>>>>> a justification is provided for allowing uses of such identifiers in
>>>>> the project.
>>>>> Builtins starting with "__builtin_" still remain available.
>>>>>
>>>>> No functional change.
>>>>>
>>>>> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
>>>>> ---
>>>>>  automation/eclair_analysis/ECLAIR/deviations.ecl | 11 +++++++++++
>>>>>  1 file changed, 11 insertions(+)
>>>>>
>>>>> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>>> index 447c1e6661..9fa9a7f01c 100644
>>>>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>>>>> @@ -487,6 +487,17 @@ leads to a violation of the Rule are deviated."
>>>>>  # Series 21.
>>>>>  #
>>>>>  
>>>>> +-doc_begin="Rules 21.1 and 21.2 report identifiers reserved for the C and POSIX
>>>>> +standard libraries: if these libraries are not used there is no reason to avoid such
>>>>> +identifiers. All xen's translation units are compiled with option -nostdinc,
>>>>> +this guarantees that these libraries are not used. Some compilers could perform
>>>>> +optimization using built-in functions: this risk is partially addressed by
>>>>> +using the compilation option -fno-builtin. Builtins starting with \"__builtin_\"
>>>>> +still remain available."
>>>>
>>>> While the sub-section "Reserved Identifiers" is part of Section 7,
>>>> "Library", close coordination is needed between the library and the
>>>> compiler, which only together form an "implementation". Therefore any
>>>> use of identifiers beginning with two underscores or beginning with an
>>>> underscore and an upper case letter is at risk of colliding not only
>>>> with a particular library implementation (which we don't use), but
>>>> also of such with a particular compiler implementation (which we cannot
>>>> avoid to use). How can we permit use of such potentially problematic
>>>> identifiers?
>>>
>>> Alternative question: is there a way we can check if there is clash of
>>> some sort between a compiler implementation of something and a MACRO or
>>> identifier we have in Xen? An error or a warning from the compiler for
>>> instance? That could be an easy way to prove we are safe.
>>
>> Well. I think it is the default for the compiler to warn when re-#define-
>> ing a previously #define-d (by the compiler or by us) symbol, so on that
>> side we ought to be safe at any given point in time,
> 
> OK, that's good. It seems to me that this explanation should be part of
> the deviation text.
> 
> 
>> yet we're still latently unsafe (as to compilers introducing new
>> pre-defines).
> 
> Sure, but we don't need to be safe in relation to future compiler. Right
> now, we are targeting gcc-12.1.0 as written in
> docs/misra/C-language-toolchain.rst. When we decide to enable a new
> compiler in Xen we can fix/change any specific define as needed. Also
> note the large amount of things written in C-language-toolchain.rst that
> need to be checked and verified for a new compiler to make sure we can
> actually use it safely (we make many assumptions).
> 
> 
>> For built-in declarations, though, there's nothing I'm aware of that
>> would indicate collisions.
> 
> For builtins, Alessandro was suggesting -fno-builtin. One question to
> Alessandro is why would -fno-builtin only "partially" address the
> problem.
> 
> Another question for Jan and also Alessandro: given that builtins
> starting with __builtin_ remain available, any drawbacks in using
> -fno-builtin in a Xen build?

Just to mention it - we're building with -fno-builtin already anyway.
What I'm puzzled by is that it looks like I was under the wrong
impression that we're actually building with -ffreestanding (which
implies -fno-builtin).

Jan
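The point about -fno-builtin (which -ffreestanding implies) can be sketched as follows: the option only disables the implicit treatment of standard functions like memcpy() as builtins; the explicit __builtin_-prefixed spellings stay usable. The helper below is a hypothetical illustration, not Xen code.

```c
/* Even under -fno-builtin, the explicit __builtin_* spellings remain
 * available; only the implicit builtin handling of memcpy() and
 * friends is suppressed. */
static int first_byte_after_copy(void)
{
    char dst[4];

    __builtin_memcpy(dst, "abc", 4);   /* legal regardless of -fno-builtin */
    return dst[0];
}
```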


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:46:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:46:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746119.1153069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeP0-00072B-6C; Mon, 24 Jun 2024 07:46:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746119.1153069; Mon, 24 Jun 2024 07:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeP0-000724-3j; Mon, 24 Jun 2024 07:46:14 +0000
Received: by outflank-mailman (input) for mailman id 746119;
 Mon, 24 Jun 2024 07:46:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLeOy-00071a-JK
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:46:12 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d306e712-31fd-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 09:46:10 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so65037341fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:46:10 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7066fcbd3ecsm3093847b3a.207.2024.06.24.00.46.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 00:46:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d306e712-31fd-11ef-b4bb-af5377834399
Message-ID: <43cf9879-c781-4e05-8be4-f7f8ec87d4a3@suse.com>
Date: Mon, 24 Jun 2024 09:46:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
To: Anthony PERARD <anthony.perard@vates.tech>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Demi Marie Obenour <demi@invisiblethingslab.com>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
 <ZnWwbJiD6eG85VY9@l14>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ZnWwbJiD6eG85VY9@l14>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21.06.2024 18:55, Anthony PERARD wrote:
> On Fri, Jun 21, 2024 at 05:16:56PM +0100, Andrew Cooper wrote:
>> `xl devd` has been observed leaking /var/log/xldevd.log into children.
>>
>> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
>> after setting up stdout/stderr, it's only the logfile fd which will close on
>> exec().
>>
>> Link: https://github.com/QubesOS/qubes-issues/issues/8292
>> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Anthony PERARD <anthony@xenproject.org>
>> CC: Juergen Gross <jgross@suse.com>
>> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
>> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>
>> Also entirely speculative based on the QubesOS ticket.
>>
>> v2:
>>  * Extend the commit message to explain why stdout/stderr aren't closed by
>>    this change
>>
>> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
>> ---
>>  tools/xl/xl_utils.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
>> index 17489d182954..060186db3a59 100644
>> --- a/tools/xl/xl_utils.c
>> +++ b/tools/xl/xl_utils.c
>> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
>>          exit(-1);
>>      }
>>  
>> -    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
>> +    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));
> 
> Every time we use O_CLOEXEC, we add to the C file
>     #ifndef O_CLOEXEC
>     #define O_CLOEXEC 0
>     #endif
> Do we not need to do that anymore?
> Or I guess we'll see if someone complains when they try to build on an
> ancient version of Linux.

I'm pretty certain I'll run into that issue on one of my pretty old systems,
but if the general view is that we don't care about such environments anymore,
then so be it (and I'll take care of such issues locally).

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:48:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:48:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746127.1153079 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeQi-0007jn-NC; Mon, 24 Jun 2024 07:48:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746127.1153079; Mon, 24 Jun 2024 07:48:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeQi-0007jg-KV; Mon, 24 Jun 2024 07:48:00 +0000
Received: by outflank-mailman (input) for mailman id 746127;
 Mon, 24 Jun 2024 07:48:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLeQi-0007jX-0K
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:48:00 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 137904a3-31fe-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:47:59 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec1620a956so47667921fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:47:59 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7065107c572sm5595165b3a.8.2024.06.24.00.47.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 00:47:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 137904a3-31fe-11ef-90a3-e314d9c70b13
Message-ID: <7582f9e6-7b31-47e7-b42a-92c0f87effd2@suse.com>
Date: Mon, 24 Jun 2024 09:47:50 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Demi Marie Obenour <demi@invisiblethingslab.com>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Anthony PERARD <anthony.perard@vates.tech>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
 <ZnWwbJiD6eG85VY9@l14> <e6262049-21b4-4eb7-9f29-6d5983bfb212@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e6262049-21b4-4eb7-9f29-6d5983bfb212@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21.06.2024 18:59, Andrew Cooper wrote:
> On 21/06/2024 5:55 pm, Anthony PERARD wrote:
>> On Fri, Jun 21, 2024 at 05:16:56PM +0100, Andrew Cooper wrote:
>>> `xl devd` has been observed leaking /var/log/xldevd.log into children.
>>>
>>> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
>>> after setting up stdout/stderr, it's only the logfile fd which will close on
>>> exec().
>>>
>>> Link: https://github.com/QubesOS/qubes-issues/issues/8292
>>> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Anthony PERARD <anthony@xenproject.org>
>>> CC: Juergen Gross <jgross@suse.com>
>>> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
>>> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>>
>>> Also entirely speculative based on the QubesOS ticket.
>>>
>>> v2:
>>>  * Extend the commit message to explain why stdout/stderr aren't closed by
>>>    this change
>>>
>>> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
>>> ---
>>>  tools/xl/xl_utils.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
>>> index 17489d182954..060186db3a59 100644
>>> --- a/tools/xl/xl_utils.c
>>> +++ b/tools/xl/xl_utils.c
>>> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
>>>          exit(-1);
>>>      }
>>>  
>>> -    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
>>> +    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));
>> Every time we use O_CLOEXEC, we add to the C file
>>     #ifndef O_CLOEXEC
>>     #define O_CLOEXEC 0
>>     #endif
>> Do we not need to do that anymore?
>> Or I guess we'll see if someone complains when they try to build on an
>> ancient version of Linux.
>>
>> Acked-by: Anthony PERARD <anthony.perard@vates.tech>
> 
> Thanks.  I did originally have that ifdefary here, but then I noticed
> that this isn't the first instance like this in xl, and I'm going to be
> using that as a justification soon to explicitly drop support for Linux
> < 2.6.23.

Just to mention that this is a twofold thing: I surely don't try to run
up-to-date Xen on top of so old a Linux kernel, but what is used for
building is still what the distro (with a very old kernel) would have put
there.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:51:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:51:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746135.1153089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeU0-0000r1-4g; Mon, 24 Jun 2024 07:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746135.1153089; Mon, 24 Jun 2024 07:51:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeU0-0000qu-25; Mon, 24 Jun 2024 07:51:24 +0000
Received: by outflank-mailman (input) for mailman id 746135;
 Mon, 24 Jun 2024 07:51:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLeTz-0000qk-3g; Mon, 24 Jun 2024 07:51:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLeTy-0003us-Rc; Mon, 24 Jun 2024 07:51:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLeTy-0000M9-Ft; Mon, 24 Jun 2024 07:51:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLeTy-0001YS-FK; Mon, 24 Jun 2024 07:51:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186464-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186464: regressions - FAIL
X-Osstest-Failures:
    linux-linus:build-armhf:xen-build:fail:regression
    linux-linus:build-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f2661062f16b2de5d7b6a5c42a9a5c96326b8454
X-Osstest-Versions-That:
    linux=7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Jun 2024 07:51:22 +0000

flight 186464 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186464/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 186462

Tests which did not succeed, but are not blocking:
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-qcow2     1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-raw       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                f2661062f16b2de5d7b6a5c42a9a5c96326b8454
baseline version:
 linux                7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748

Last test of basis   186462  2024-06-23 16:10:07 Z    0 days
Testing same since   186464  2024-06-24 01:42:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    blocked 
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 blocked 
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f2661062f16b2de5d7b6a5c42a9a5c96326b8454
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Jun 23 17:08:54 2024 -0400

    Linux 6.10-rc5


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:55:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:55:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746142.1153099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeXf-0001RD-K2; Mon, 24 Jun 2024 07:55:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746142.1153099; Mon, 24 Jun 2024 07:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeXf-0001R6-HU; Mon, 24 Jun 2024 07:55:11 +0000
Received: by outflank-mailman (input) for mailman id 746142;
 Mon, 24 Jun 2024 07:55:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLeXd-0001R0-Po
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:55:09 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1310b12b-31ff-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 09:55:07 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-52ce655ff5bso631428e87.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:55:07 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725ed447e6sm20760466b.147.2024.06.24.00.55.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 00:55:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1310b12b-31ff-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719215707; x=1719820507; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=TQyjBCx/reW5w7XJGJBAgHTdQBEa6lsidgMMzhUmHTo=;
        b=VzYnWDLclNjRLWrweFHPDgGlTGXeDcVSuZsp0VYzf7m8wwNCBLiMwK+rql6HvUWhww
         98wPRyV3YIh1jSJrLb0vKhOBb1611Tkk1hB+drdNpSrbHhjWgeVHeMcGMXbAXbSIPwZ7
         gjKyBvCBPJ+Ii2INWoaU6uEa5CYRR8SQanHVsd6/FPxtXaLpgBKvJ7xmKlGrM/EIFRiY
         C3n7p+cy3gvHmDdwjEG3nwOMf3UkN5rS0CgTCF8HljJ5xIdjiDkSQ8+Zjk5NegkoD9Kj
         GEUM92rRSEF5Ov6XbPZ95hb4+u3mUw2ENwMMvdICjmG0Lw2423IIJ0u99Y5w8HMzM4pi
         JlWw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719215707; x=1719820507;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=TQyjBCx/reW5w7XJGJBAgHTdQBEa6lsidgMMzhUmHTo=;
        b=wNihU7D3HqWpm3JbzJvE+2nMFJoy9hvdeaJJenG6JM5j1CKKgKUPDdWa7vqX4/+JJn
         Ysx2NEn7dnwJQNRdzfPPWKB+uYfjGjnrYGJONVQuCttiNklNR1tKjz/q+r7amCWpqJEN
         o+suZSspU6qoXgZQxMMnMXvvgxxMRBs+hbEl/ly15vfBVCK4oqv5lAToa3N0NRJNfl71
         hEKsZE5lE4vYj3YIbXr5VMInfojpFt8X3OKwNHcoENbO1O3bhHd+Nut3NWjAHGHOCmQe
         KNrKPhK6H1CwtTw6Z94tkXFZ8tToyVCr87JYbzq7ze8X2qfvnzARNaRQjPyfYYqiLuW0
         OoqQ==
X-Forwarded-Encrypted: i=1; AJvYcCUwgD5W/mBdT06r9XV39JmcNS3JApuMYX2myBx58oCKxCMXt3MYByaWjozPktpfZXFxkbNVa2xckBDGf3yvesX/zKy7o21TH7aYIORD/44=
X-Gm-Message-State: AOJu0YzasiaD0triZtFB0njcKw7QL1mByJ1tpXG8Yk3bTP/Z+XFqUC2j
	orgx9bJj962/M9KFVWCJBkkcopnosMWAqtrNE13ZI5geoe0I1h7s
X-Google-Smtp-Source: AGHT+IFoq2gRiQGG5oIEgcijcvj2DmdMuB/z7FUXOVVRbIdCV7ycd0ZhtfLefvJFO/Jrh77UuLSk4w==
X-Received: by 2002:a05:6512:3c86:b0:52c:dccd:39aa with SMTP id 2adb3069b0e04-52ce0646944mr4139613e87.67.1719215707011;
        Mon, 24 Jun 2024 00:55:07 -0700 (PDT)
Message-ID: <d5f7a8bb342cb585e2774395b1ce64dcf5406bd5.camel@gmail.com>
Subject: Re: [XEN PATCH v3] automation/eclair: add deviation for MISRA C
 Rule 17.7
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Federico Serafini <federico.serafini@bugseng.com>, 
 xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
 <julien@xen.org>
Date: Mon, 24 Jun 2024 09:55:05 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406211522270.2572888@ubuntu-linux-20-04-desktop>
References: 
	<b571bd05955ab9967a44517c9947545a2a530f01.1718354974.git.federico.serafini@bugseng.com>
	 <alpine.DEB.2.22.394.2406191819370.2572888@ubuntu-linux-20-04-desktop>
	 <alpine.DEB.2.22.394.2406211522270.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 15:24 -0700, Stefano Stabellini wrote:
> On Wed, 19 Jun 2024, Stefano Stabellini wrote:
> > On Fri, 14 Jun 2024, Federico Serafini wrote:
> > > Update ECLAIR configuration to deviate some cases where not using
> > > the return value of a function is not dangerous.
> > >
> > > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> >
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>
> I would like to request a release ack, as this patch only affects the
> ECLAIR analysis for R17.7, which is non-blocking anyway (meaning: it
> cannot cause a gitlab-ci failure, it is only informative).
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
>
>
>
> > > ---
> > > Changes in v3:
> > > - removed unwanted underscores;
> > > - grammar fixed;
> > > - do not constraint to the first actual argument.
> > > Changes in v2:
> > > - do not deviate strlcpy and strlcat.
> > > ---
> > >  automation/eclair_analysis/ECLAIR/deviations.ecl | 4 ++++
> > >  docs/misra/deviations.rst                        | 9 +++++++++
> > >  2 files changed, 13 insertions(+)
> > >
> > > diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > index 447c1e6661..97281082a8 100644
> > > --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > @@ -413,6 +413,10 @@ explicit comment indicating the fallthrough
> > > intention is present."
> > >  -config=MC3R1.R17.1,macros+={hide , "^va_(arg|start|copy|end)$"}
> > >  -doc_end
> > >
> > > +-doc_begin="Not using the return value of a function does not
> > > endanger safety if it coincides with an actual argument."
> > > +-config=MC3R1.R17.7,calls+={safe, "any()",
> > > "decl(name(__builtin_memcpy||__builtin_memmove||__builtin_memset|
> > > |cpumask_check))"}
> > > +-doc_end
> > > +
> > >  #
> > >  # Series 18.
> > >  #
> > > diff --git a/docs/misra/deviations.rst
> > > b/docs/misra/deviations.rst
> > > index 36959aa44a..f3abe31eb5 100644
> > > --- a/docs/misra/deviations.rst
> > > +++ b/docs/misra/deviations.rst
> > > @@ -364,6 +364,15 @@ Deviations related to MISRA C:2012 Rules:
> > >         by `stdarg.h`.
> > >       - Tagged as `deliberate` for ECLAIR.
> > >
> > > +   * - R17.7
> > > +     - Not using the return value of a function does not
> > > endanger safety if it
> > > +       coincides with an actual argument.
> > > +     - Tagged as `safe` for ECLAIR. Such functions are:
> > > +         - __builtin_memcpy()
> > > +         - __builtin_memmove()
> > > +         - __builtin_memset()
> > > +         - cpumask_check()
> > > +
> > >     * - R20.4
> > >      - The override of the keyword \"inline\" in xen/compiler.h
> > > is present so
> > >        that section contents checks pass when the compiler
> > > chooses not to
> > > -- 
> > > 2.34.1
> > >
> >



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:55:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:55:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746144.1153109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeXx-0001r4-UD; Mon, 24 Jun 2024 07:55:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746144.1153109; Mon, 24 Jun 2024 07:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeXx-0001qx-Rj; Mon, 24 Jun 2024 07:55:29 +0000
Received: by outflank-mailman (input) for mailman id 746144;
 Mon, 24 Jun 2024 07:55:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLeXw-0001qA-I7
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:55:28 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1ec855a8-31ff-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:55:27 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57d26a4ee65so3649569a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:55:27 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d305841a5sm4348528a12.97.2024.06.24.00.55.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 00:55:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ec855a8-31ff-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719215727; x=1719820527; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=yWAq2lg3tgVYQqTGi+PCIx7d456rjClkYIehC/RPXDE=;
        b=gcZ2RB1bdWHteK3jG1ar7/9S74eLtMmCAV1fWdIezJs4M4tpIIrMdfYVznu1an0s00
         mfUecloH6x7Ocjht9VByV3t5jhgeXE/ussOqSUCFWy5oHACofLqAPZRIwmB3Z8O8cdWw
         AinhB+NLckIHa6g06tnH1SejR7UCqIyKzvHtxz7ybTxjs+ZgFpwuRY1eMxXr16XHvhIM
         t/oCwCQ4GskLRIxZ+v1YSK0iOA9qY1X2a4xouYojeGe2F+1D2StjjmUuvEm4wTiDKbH7
         O+vq0gLRmyaKad7Jyphb/6kcLEQksFcVD4wttDnn7V6dCRBl0b2K9lv0Un6JB0zjUKfZ
         bvvw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719215727; x=1719820527;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=yWAq2lg3tgVYQqTGi+PCIx7d456rjClkYIehC/RPXDE=;
        b=wqdgdd6EwKT7mobY8/a0ueb98ydJcZHR+q9bZFBYITdL1aTHP3IEEGI62SguTrMbv4
         os56+X3mtt03lry6/FMo42wspxfe3svzqL+uF2AiCTAbM1kmKrHKb3sW/VyIH13C13ZZ
         vT/f6k0jplF9h2pGx8I8f3TbdJQox4kNzi20xmowMGbdCdeXcR7eTeG6Eyi9K6dYkJYm
         aHCp70OZj1iadpXcWshyRvLC6ni6Idg9Yj8/sXKOIk0gQHsYuCpauLLWWeQSv1/xRdxB
         4NknBR8vwPN+ysliK3V26G1ZWbfT6s7cs8QcRzEI3h6/WL9mri99RwJnjwuQg+spWuXQ
         4J/A==
X-Forwarded-Encrypted: i=1; AJvYcCWyncA5lE/x/qJprcpfM/rENr4pxyA4frdFOfjkakgRFajIK+LaeKwTdjJp5gpE+85uuOh4TUS815NrP6OAWApctH7NnzvkkoKwye0MAHE=
X-Gm-Message-State: AOJu0Yw+Dz1+bnQhlD/Wl/JPMZvS3Bc+yc1JNoQUpsRNeVQ4AHW2PQey
	gwEiYOrEcsdTfVCQ4q/25I/PkjM9D4rrrxdPmUXFSbQkQXsceVHV
X-Google-Smtp-Source: AGHT+IFL/9g1TcYmwQ5gZ1dJEMPOeHqMRZ/R4vUPZw7nKmSaX+FowoxL1pQK+XpHZ5/yWpdlNJc8iA==
X-Received: by 2002:a50:d699:0:b0:57c:6a71:e62e with SMTP id 4fb4d7f45d1cf-57d49dc02c2mr2453472a12.23.1719215726818;
        Mon, 24 Jun 2024 00:55:26 -0700 (PDT)
Message-ID: <74d59fa3da525e58a2ae77ed6ec62e5187d4f6e8.camel@gmail.com>
Subject: Re: [XEN PATCH] automation/eclair: add deviations of MISRA C Rule
 5.5
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Federico Serafini <federico.serafini@bugseng.com>, 
 xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
 <julien@xen.org>
Date: Mon, 24 Jun 2024 09:55:25 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406211506160.2572888@ubuntu-linux-20-04-desktop>
References: 
	<dbd34e37b5d757ff7ae2a7318ad12b159970604c.1718887298.git.federico.serafini@bugseng.com>
	 <alpine.DEB.2.22.394.2406201722100.2572888@ubuntu-linux-20-04-desktop>
	 <alpine.DEB.2.22.394.2406211506160.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 15:07 -0700, Stefano Stabellini wrote:
> On Thu, 20 Jun 2024, Stefano Stabellini wrote:
> > On Thu, 20 Jun 2024, Federico Serafini wrote:
> > > MISRA C Rule 5.5 states that "Identifiers shall be distinct from
> > > macro
> > > names".
> > >
> > > Update ECLAIR configuration to deviate:
> > > - macros expanding to their own name;
> > > - clashes between macros and non-callable entities;
> > > - clashes related to the selection of specific implementations of
> > > string
> > >   handling functions.
> > >
> > > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> >
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
> I would like to ask for a release-ack as its effect is limited to
> ECLAIR
> analysis results and rule 5.5 is not blocking anyway (it is allowed
> to
> fail).

Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:56:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:56:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746157.1153120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeYp-0002Xf-7E; Mon, 24 Jun 2024 07:56:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746157.1153120; Mon, 24 Jun 2024 07:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeYp-0002XY-3n; Mon, 24 Jun 2024 07:56:23 +0000
Received: by outflank-mailman (input) for mailman id 746157;
 Mon, 24 Jun 2024 07:56:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=11f7=N2=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sLeYn-0002Wx-MH
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:56:21 +0000
Received: from NAM10-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam10on20613.outbound.protection.outlook.com
 [2a01:111:f400:7e88::613])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d90a4a2-31ff-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:56:20 +0200 (CEST)
Received: from CH5P220CA0024.NAMP220.PROD.OUTLOOK.COM (2603:10b6:610:1ef::23)
 by DM4PR12MB7600.namprd12.prod.outlook.com (2603:10b6:8:108::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.28; Mon, 24 Jun
 2024 07:56:16 +0000
Received: from CH1PEPF0000AD78.namprd04.prod.outlook.com
 (2603:10b6:610:1ef:cafe::3a) by CH5P220CA0024.outlook.office365.com
 (2603:10b6:610:1ef::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.38 via Frontend
 Transport; Mon, 24 Jun 2024 07:56:16 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000AD78.mail.protection.outlook.com (10.167.244.56) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Mon, 24 Jun 2024 07:56:16 +0000
Received: from SATLEXMB06.amd.com (10.181.40.147) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 24 Jun
 2024 02:56:16 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB06.amd.com
 (10.181.40.147) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 24 Jun
 2024 02:56:15 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39
 via Frontend Transport; Mon, 24 Jun 2024 02:56:14 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d90a4a2-31ff-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OYJOOumKGL8/qdyEgk6IVSIxUBPeowec3pZmn1WTpU3bJWjRypUZxWNrwuBok7T1we9VU/xYBqPbkNxA5xCLU6yG0m5OKXG0hAmUTWnSi3wFsRnyJVg99Mz0OdwBN6wVJtwkT7uRGp7jlJM5WQ/cWKXi1ewpUyX9fICTfrG9rfUXFsiIFWDdC+gTXNMQ88lDJwmpyHXvFx+AhdCI1mtfC+5M0YKBc1CpDe6tTFasaXIIFjG78hLXVLsjeD87Px+QmZ0t6bd4p/LsCvFeaCBVDImDo+BEWb6vV8T4sidp/qcGGrM4FZ1MOJi2OoFx2JtoNJrw8QPi8wZfQ/r2fvWCGA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8p9N3jX+sDeQrnUZwbq8aFuht35rrHShuG3TJ+r3T7g=;
 b=fkHf79Lv81MQRfWysQ3HPIBNtkKKDXine1dbhUZDr2ihzBgdwP08Xeso48NuBm2WiRzGFQaquEQOwK8UNl5hFt5VEhIlL6aGNnahd8t1ZgAnbR2RS2D4tmo/RFWJJMofPJVEs9L+wvope5CTB3f6w/XQzFBsFvfu1ht7Bnthx6d5RA2heKhyyYkxuPAU9nkoufQJD/1CUavIS3HMNwhZ0NVyvc8cs4wY4cgTJiH8n2w5qqH2fPAdt4SPydnvGu5cnEDsrIKG8802wIDfmSf2A3iNT3S2NnjhV4k5NwdyUFB+mCeDFPC1gkwRcJZ+AajX+A0esIoai6fWQvDevIpTbw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8p9N3jX+sDeQrnUZwbq8aFuht35rrHShuG3TJ+r3T7g=;
 b=rLULqj/LSPaCy9tKVC0l01wGK9mSr8t5/yDZXMXMOyBT/TUgtO08RpzcO5mOvjuyy68Vq+FzDqtq69ARRrzM6p+/mvuQSOinlQchQ5K9EV2AMk/uUxCSX6ySupPm00k3c9FPByK8SV30tJmSoRnWvFpW0BtwC0jxwXOFRWZGLw8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <sstabellini@kernel.org>, Michal Orzel <michal.orzel@amd.com>
Subject: [ImageBuilder] Add support for omitting host paddr for static shmem regions
Date: Mon, 24 Jun 2024 09:55:59 +0200
Message-ID: <20240624075559.15484-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000AD78:EE_|DM4PR12MB7600:EE_
X-MS-Office365-Filtering-Correlation-Id: 6a92ad91-15ab-4bc9-5d04-08dc94232046
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|36860700010|82310400023|376011|1800799021;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?PT/c3GCqyRHV/iWkE8GwQw9vL1ztAdaZKc6zZN/7UEpOWtHvDoqv30XvQ5J8?=
 =?us-ascii?Q?XB97cou/D1ew7IcMEbDlUVn/eDsvlFj8uVYDJAP97mIyLLT7mt5G1yjMeTEO?=
 =?us-ascii?Q?7P1I5jMKjOrCpaO+4HOQavWEA+DSRjwoOUApTaJJupV+8m1UBZigmdOVqhO4?=
 =?us-ascii?Q?SvPRLAtv2bN+5CoGM1hkP3ASq4NC6tj9Z/Imm5RAMqhxgOoyMkCUl7RuICQi?=
 =?us-ascii?Q?AqDnSul/9Ks9tg4YRLFO/kmlGK/cLizGQa+l2T/pSOv07n6x3BCfu1JGAjYA?=
 =?us-ascii?Q?T+wUY/TB+VbaDgGgEAKYi+SW43LJmaFk24+UU3l5h42+Et2PNRcsUXEJrZbx?=
 =?us-ascii?Q?4IlVZSJjPu0KIh+lk0TGqsB56p1lmbyIXsg7NtCFNLnPxl8MtwdtNDHhMIQk?=
 =?us-ascii?Q?BByn7O5oO+PMJSgSObC3jgI1qJveY2qrVdyaLXtzEiQlbtc91/Dh7NNlekbm?=
 =?us-ascii?Q?MZiel4EV+/0Rt7Ow6s0D1Ldy3xuIOZM6TeW3qRuM4dFXv2CHmGzsgxsQQWH+?=
 =?us-ascii?Q?OIW0wkkManzK0weDl887sqz/qImRVKGGAqNkLIglvGjpEoZaY7buM0NnB27r?=
 =?us-ascii?Q?vUvs2emeSs8mLNuZDfJYSSKdSw7H6EdaldAT81IiBlNbkyDFw7EowkY2z4cC?=
 =?us-ascii?Q?hQ/L0Dy4EzBSGagnbxtDW5U3FLnSG5nmOiBBlzpwTrMFzLRqrR1iGY/9m7lD?=
 =?us-ascii?Q?2htmpb2il9kUQmkMnAreOqx6yLObhXi88aaniOm0HcVn45ZI+pnp1/hk2riX?=
 =?us-ascii?Q?Ycie8Lu1VkHANBBamOjJlPlMTQQr0JhkJWjwJafXJHV3R7mJ/DO1pA2cKrT9?=
 =?us-ascii?Q?EGSXGJ/SRUm2WJOmo/W04CySi2BhQvA/IVow6tjSs7Q5VhHQzEqRjtMl/rag?=
 =?us-ascii?Q?zbdK06vxd8dTGhZBxmhG5KY4DumtltVossoKTaezNh8N1GsNl+kO3THx87e/?=
 =?us-ascii?Q?zZ6mzVJ2sAk+DLC9qZWsAwh+lct5Ei+4VvshpUBFjkPS/5jMtLJs9PUnQ4VA?=
 =?us-ascii?Q?PZk1EUwzlc6ieY8E9one/o/Bvy8JfPbobKWS1/yGogwztVwPHgPFOVHdyFAg?=
 =?us-ascii?Q?yMOGTqu58+ear/+SAZmDZ9PwmTcscNAGpUcaYfidE1IoyT3g30w5fVUA66E+?=
 =?us-ascii?Q?k0CaYi3yb4xKn5b2ZBnb42CrlUka7a9TIOp8MtD/Klum2jl8+m9Ixlpjxf+J?=
 =?us-ascii?Q?bJ1BclwW5A3ffmtLAdB9CRRpBIOvhOjsNf7KYYSe/LqGFpOs4OXPO6hU/sHP?=
 =?us-ascii?Q?vvGnvO1qe/p85sA1mt/O9Gk78kYK+yag2s5h0JNVrTohXsrVDN8J621QDLcy?=
 =?us-ascii?Q?45R4DLaGq7CmMmyasGzCCxgbApWrDEi0E4CBqQgH1m002XnyYWo8waGWihh8?=
 =?us-ascii?Q?6Z30qf/WQ1zrsicLKCmkISvmNNYeOwQ7MPHjNwaVY8o6s/AZ/Q=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(36860700010)(82310400023)(376011)(1800799021);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2024 07:56:16.5234
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6a92ad91-15ab-4bc9-5d04-08dc94232046
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000AD78.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB7600

Reflect the latest Xen support to be able to omit the host physical
address for static shared memory regions, in which case the address will
come from the Xen heap.

Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
 README.md                |  7 ++++---
 scripts/uboot-script-gen | 19 +++++++++++++------
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index 7683492a6f7f..4fcd908c2c2f 100644
--- a/README.md
+++ b/README.md
@@ -199,9 +199,10 @@ Where:
 
 - DOMU_SHARED_MEM[number]="SHM-ID HPA GPA size"
   if specified, indicate SHM-ID represents the unique identifier of the shared
-  memory region, the host physical address HPA will get mapped at guest
-  address GPA in domU and the memory of size will be reserved to be shared
-  memory. The shared memory is used between two dom0less domUs.
+  memory region. The host physical address HPA is optional, if specified, will
+  get mapped at guest address GPA in domU (otherwise it will come from Xen heap)
+  and the memory of size will be reserved to be shared memory. The shared memory
+  is used between two dom0less domUs.
 
   Below is an example:
   NUM_DOMUS=2
diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
index 20cc6ef7f892..8b664e711b10 100755
--- a/scripts/uboot-script-gen
+++ b/scripts/uboot-script-gen
@@ -211,18 +211,25 @@ function add_device_tree_static_shared_mem()
     local shared_mem_id=${shared_mem%% *}
     local regions="${shared_mem#* }"
     local cells=()
-    local shared_mem_host=${regions%% *}
-
-    dt_mknode "${path}" "shared-mem@${shared_mem_host}"
+    local node_name=
 
     for val in ${regions[@]}
     do
         cells+=("$(split_value $val)")
     done
 
-    dt_set "${path}/shared-mem@${shared_mem_host}" "compatible" "str" "xen,domain-shared-memory-v1"
-    dt_set "${path}/shared-mem@${shared_mem_host}" "xen,shm-id" "str" "${shared_mem_id}"
-    dt_set "${path}/shared-mem@${shared_mem_host}" "xen,shared-mem" "hex" "${cells[*]}"
+    # Less than 3 cells means host address not provided
+    if [ ${#cells[@]} -lt 3 ]; then
+        node_name="shared-mem-${shared_mem_id}"
+    else
+        node_name="shared-mem@${regions%% *}"
+    fi
+
+    dt_mknode "${path}" "${node_name}"
+
+    dt_set "${path}/${node_name}" "compatible" "str" "xen,domain-shared-memory-v1"
+    dt_set "${path}/${node_name}" "xen,shm-id" "str" "${shared_mem_id}"
+    dt_set "${path}/${node_name}" "xen,shared-mem" "hex" "${cells[*]}"
 }
 
 function add_device_tree_cpupools()
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:56:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:56:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746160.1153130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeYv-0002o1-EM; Mon, 24 Jun 2024 07:56:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746160.1153130; Mon, 24 Jun 2024 07:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeYv-0002nr-Ac; Mon, 24 Jun 2024 07:56:29 +0000
Received: by outflank-mailman (input) for mailman id 746160;
 Mon, 24 Jun 2024 07:56:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLeYu-0002Uz-Hw
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:56:28 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 42271807-31ff-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 09:56:26 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ec3f875e68so45423481fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:56:26 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb5e166csm56580815ad.214.2024.06.24.00.56.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 00:56:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 42271807-31ff-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719215786; x=1719820586; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=SXfjUaNlK4Pt9KYR1q+M5lgItmZhLaoA8CaGYZA+6sU=;
        b=U1BMH8Go3Bzqxtk81lKRaEQkFZojybhLSXAFELRlD6Ov7kGwkLlaB1NjJLp2qlX0Dw
         Ls120DpX3JAKxGzAWbpzGcOGX95HkmZ9+82+ccJzrJ6w1B3dIvmMH+8Q24QyFoeZhJQ1
         8jYApSGgINMFHZm1dxlTCDR0juUJch6Le8S9tKTGuJI0/uTycDCqNslndKLJyXjVfimR
         rbybjxcvS79NWwThT7PU8ObcOmp2Ck7Z0Qhpkv6aqkC5xqsQ05k52hglDIndug56sE3Z
         jh4sw21DYmZx0ERTtYX4NKdP7L9k6AKjxkJmJ2NPHzbuWsn3IF0zB/dHdaB7/L/jIi5L
         IByQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719215786; x=1719820586;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=SXfjUaNlK4Pt9KYR1q+M5lgItmZhLaoA8CaGYZA+6sU=;
        b=DRcmTS0H+LW0MJi5lBpeZh+/JBGJX766emh4tgA/fIU2oF1l31P/aT6/MVXYpP+API
         LHIlz0Rha8EfbiPWX+6GVf3e7gcjBzz4NnrrptV0hDoZvPJtYApK9JF0gVX/hYAir30T
         Ee1PYxDny4RK0vgOGrIneZAWB6JqI6vNoT7yGfKnz9QcUxbFCxL3A8yTOUASDnwTB6/p
         oRinUOD5TmNeMiqsJF25vxhHhfemjQiHT92GmdDZRYAUqJ10nnrsmOHNn7lT4Rub9NBv
         fNZCg98W5arbokEdzNCEz2YnpRua+8utPtjN6XUy4n+6iz3tuuf7rOvF+o0trmY5AqLa
         DR0g==
X-Forwarded-Encrypted: i=1; AJvYcCVhRKOuONVwArGFgNyB+BzX08GxAfiDxAJfuPaattKBqbdli0uIubr2U1XBLD6Cgwmnof70b/p7l4t8eSrdGgIflNO7sl5KIjAl7UsffZM=
X-Gm-Message-State: AOJu0YwHpcBP/ozfibCOd8PGHy+ioGQ0yRVTaffvSvqxSjoOiTK5uKMe
	sdLTNdsoSJvBSc2Zsf7utOF8bwq/NF0xDNzg03Na1h16Samj3iG+uqzuR1JSPw==
X-Google-Smtp-Source: AGHT+IFnQoKdBmTNlFjhUqOfXmWyX5I8sdBrqGV+v53hEg6Dd0YIYmN2BA/ds3fLSgfWy4MdpyXfLQ==
X-Received: by 2002:a2e:9b55:0:b0:2eb:d9a3:2071 with SMTP id 38308e7fff4ca-2ec5b3e358dmr21166861fa.50.1719215786392;
        Mon, 24 Jun 2024 00:56:26 -0700 (PDT)
Message-ID: <660fc551-c6bc-456f-8e9e-80b3e592fece@suse.com>
Date: Mon, 24 Jun 2024 09:56:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] xen/multicall: Change nr_calls to uniformly be
 unsigned long
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Roberto Bagnara <roberto.bagnara@bugseng.com>,
 "consulting @ bugseng . com" <consulting@bugseng.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240621205800.329230-1-andrew.cooper3@citrix.com>
 <20240621205800.329230-3-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240621205800.329230-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 22:58, Andrew Cooper wrote:
> Right now, the non-compat declaration and definition of do_multicall()
> have differing types for the nr_calls parameter.
> 
> This is a MISRA rule 8.3 violation, but it's also a time-bomb waiting for
> the first 128-bit architecture (RISC-V looks as if it might get there first).
> 
> Worse, the type chosen here has a side effect of truncating the guest
> parameter, because Xen still doesn't have a clean hypercall ABI definition.
> 
> Switch uniformly to using unsigned long.

And re-raising all the same question again: Why not uniformly unsigned int?
Or uint32_t?

> This addresses the MISRA violation, and while it is a guest-visible ABI
> change, it's only in the corner case where the guest kernel passed a
> bogus-but-correct-when-truncated value.  I can't find any users of
> multicall which pass a bad size to begin with, so this should have no
> practical effect on guests.
> 
> In fact, this brings the behaviour of multicalls more in line with the header
> description of how it behaves.

Which description? If you mean ...

> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -623,7 +623,7 @@ DEFINE_XEN_GUEST_HANDLE(mmu_update_t);
>  /*
>   * ` enum neg_errnoval
>   * ` HYPERVISOR_multicall(multicall_entry_t call_list[],
> - * `                      uint32_t nr_calls);
> + * `                      unsigned long nr_calls);
>   *
>   * NB. The fields are logically the natural register size for this
>   * architecture. In cases where xen_ulong_t is larger than this then

... this comment here, note how it says "fields", i.e. talks about the
subsequent struct.

What you're doing is effectively an ABI change: all of a sudden the
upper bits of the incoming argument would be respected. Yes, it is
overwhelmingly likely that no-one would ever pass such a value. Yet
IIRC, on other similar hypercall handler adjustments in the past, you
did raise a similar concern.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:56:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:56:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746163.1153140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeZC-0003Ns-KH; Mon, 24 Jun 2024 07:56:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746163.1153140; Mon, 24 Jun 2024 07:56:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeZC-0003Nl-H9; Mon, 24 Jun 2024 07:56:46 +0000
Received: by outflank-mailman (input) for mailman id 746163;
 Mon, 24 Jun 2024 07:56:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLeZB-0002Wx-DH
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:56:45 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4cd391de-31ff-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:56:44 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a724b4f1218so112647666b.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:56:44 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7261aece51sm1433766b.153.2024.06.24.00.56.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 00:56:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4cd391de-31ff-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719215804; x=1719820604; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=u07MF4jpyL5jS/EdK3+dxG6HnxSuXokhUkBqoFh81s8=;
        b=jDCZyqZgGviglFshq/jkQW052ePSNIfDDThRipyssj3Eyy2S1EJQ/AKqw7PrukDhk1
         auGtCjQWJFH6XgxCwgTVy9rX592u6WmSMkegdhz+EzW7tbhU/cuze5cR7osCn25KUE0f
         RR+XJ1T/ie3uO0rg96sWkjZKGvS6RHTgjQKvxpU/+egXlw+L6rbvE2qxUDG/vZrtDZ8m
         sujO9GsNL6/PaYaAn8iViPnuCDN67nAyn4Mj5jSvw7lscd741LbAu8ffTjprfFs8h649
         9+1+kbFtJf6yFHNshO2grSKSHAHegDAz9tiqBaUjNKcGRPrNLQN7D4SSmLukjbHBNcyH
         lzqw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719215804; x=1719820604;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=u07MF4jpyL5jS/EdK3+dxG6HnxSuXokhUkBqoFh81s8=;
        b=RPuhynl+hqdhJrQKXmhNSqoTj19yVDgrraqRUmfa2dyi+zjmCTSgH2X8tNc5wCvtWA
         PLIhm6i5JImFZJJ0EhgfqvWHDA8bEKAYByjdQpedfF4CVPOf5VKmaAXPoyPByiil/2uA
         2eJ3KzdGXymuxLjmA5HCnNUR7m9XaOFbr8Cf+rBzbxcy+686TeApSGUk/hcoAAuBcOiN
         nKsuhgFYFcIkHXCvnuni8ocS8p+P1MrTLI2dgpcZ2TIGYsJEIOKRg41ndwHNKhpZksLh
         b/jW3czCzxGIZFWH89x9r4Ynfg2vF0wqpmaYbO4fZx0zdXjLQuQkurliG02GDAnfbtpI
         EbXA==
X-Forwarded-Encrypted: i=1; AJvYcCVSHTho+RHM011sJleh5y0QVqYG/+aYJg+F2uQroyHEaP4GBztbHUlu8RcGWYfcBT+NKXOTKiuGfg3eLt68M0A6UqynMua2i4g5Rw2M3lQ=
X-Gm-Message-State: AOJu0YyNRa8D4FEzYWMh0Tv2xywQERwgxvm/YtW5vdTCTH2WJPZ9HOKM
	9zd8kems8zAwd9n1U84c44o0vraFV+V7WhywJ8H9iboo3oBj7+Zw
X-Google-Smtp-Source: AGHT+IHbB0HC5/0qWEnG7JiZbdS8ftYQOuf+qd8xJdXkkPud7dmUaaS8B4fEmXq3tBhzNqL8RunBlA==
X-Received: by 2002:a17:906:d108:b0:a71:ddb8:9394 with SMTP id a640c23a62f3a-a7242cb6ff3mr263965166b.40.1719215804195;
        Mon, 24 Jun 2024 00:56:44 -0700 (PDT)
Message-ID: <31527654b914ff1f77fb209024307032f7e7feb2.camel@gmail.com>
Subject: Re: [PATCH for-4.19 0/2] Xen: Final MISRA R8.3 fixes
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Roberto Bagnara <roberto.bagnara@bugseng.com>, "consulting @ bugseng . com"
 <consulting@bugseng.com>
Date: Mon, 24 Jun 2024 09:56:43 +0200
In-Reply-To: <20240621205800.329230-1-andrew.cooper3@citrix.com>
References: <20240621205800.329230-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 21:57 +0100, Andrew Cooper wrote:
> This gets Xen clean to R8.3 and marks it as blocking in Gitlab.
> 
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342755199
> 
> Andrew Cooper (2):
>   x86/pagewalk: Address MISRA R8.3 violation in guest_walk_tables()
>   xen/multicall: Change nr_calls to uniformly be unsigned long
> 
>  automation/eclair_analysis/ECLAIR/tagging.ecl | 1 +
>  xen/arch/x86/include/asm/guest_pt.h           | 2 +-
>  xen/common/multicall.c                        | 4 ++--
>  xen/include/hypercall-defs.c                  | 4 ++--
>  xen/include/public/xen.h                      | 2 +-
>  5 files changed, 7 insertions(+), 6 deletions(-)
> 
> 
> base-commit: 9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:57:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746176.1153149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeZq-00049Q-Vz; Mon, 24 Jun 2024 07:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746176.1153149; Mon, 24 Jun 2024 07:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeZq-00049J-TM; Mon, 24 Jun 2024 07:57:26 +0000
Received: by outflank-mailman (input) for mailman id 746176;
 Mon, 24 Jun 2024 07:57:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLeZq-0002Wx-0W
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:57:26 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 650ce79c-31ff-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:57:25 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a7245453319so177443966b.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:57:25 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7247895dddsm169205366b.108.2024.06.24.00.57.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 00:57:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 650ce79c-31ff-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719215845; x=1719820645; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=AYUk6VJ8LmeMR8kwP1c9U4nk7jYUX6EJzys1MYcZb+U=;
        b=ShHWZOyfG2zm66h/uWNGCjOQWdio4gJK5RRFxJj1UYN129QR9DelE3xIIIwYBodkBk
         2UEWdW5lxr+JHZYziKc6x+g4zGn6F1OwmbKnK/9RJDjbOFo5yGe2+uUOjQppUtbljEUe
         ELs56UwGJEtPouC/tg5zv+gyHY0vHZITBVDrjovpRWFn3dL5fRx/JHtUK+0NpI0Kfhyb
         x3Qs46Cymu2ISBMKtK5xkALXijzln5r0HIOEIrtNhy0uLx3g8H9VWIaDRhhiak9vQw/I
         9ttz+UXOgP1uEmx5sqRiKsbKwAe/BzxIfeL45WadcaOX1kxju1WE/MIVXgwqYT8mMZJ1
         xE2g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719215845; x=1719820645;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=AYUk6VJ8LmeMR8kwP1c9U4nk7jYUX6EJzys1MYcZb+U=;
        b=oK4PIWPy9GEoleuwjDV1C1e+nhCBIbVt/p3ZdWuq6vmUqAST+6wnqk4tP6/nRp5Rcy
         HQ8pFdOpHRwbZDP78sjb8VznSBLvSbk8qkZOSB3MtUx2WUtKhWGS+XzyWvpjMZ8iz/Tm
         TO1UXue6iNBehJ9UdRJZfbh7U8gwzyOQSS9w++8Bu4SuI1gowfzUYWlUdiMiiG/wyBiH
         freXxTGwSvzPHwS2kfB75L07dji0u6cdDREm3ypAKW8rzrO1iEMKRhvxww2Cq62PWFyX
         8dVl2r6L8csfTECBjqdhhL12VMIneXPUKQ9o/8jQ2ZLPJpNJT5TyEWHMOHykgDjvH6vu
         1WjA==
X-Gm-Message-State: AOJu0Yx95bHRMlqIwQiC+DmTrgxgsySekNmQONsw+1YG4Tb4C1vwS/Ep
	3PxgejKjj6/PdZhCOKKSobfe4Iy3yIwBityU5o9UtBQKA3qIV9V8
X-Google-Smtp-Source: AGHT+IFfH6dXRgC75He5i6g61x2sB6RMmncdYKOVT19Q6XyFCVBunqJUWIB/1Quut5riaD7MMlHA4A==
X-Received: by 2002:a17:907:a80e:b0:a72:5471:c2ce with SMTP id a640c23a62f3a-a725471c39fmr154697066b.7.1719215844876;
        Mon, 24 Jun 2024 00:57:24 -0700 (PDT)
Message-ID: <8a712dd59ac767aa7c01701b9743b54b8a612e01.camel@gmail.com>
Subject: Re: [XEN PATCH] automation/eclair: add more guidelines to the
 monitored set
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Federico Serafini
	 <federico.serafini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
	 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>
Date: Mon, 24 Jun 2024 09:57:23 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406211432140.2572888@ubuntu-linux-20-04-desktop>
References: 
	<f03398504405689413521de1675a33e50cdbc30b.1718983858.git.federico.serafini@bugseng.com>
	 <alpine.DEB.2.22.394.2406211432140.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 14:33 -0700, Stefano Stabellini wrote:
> On Fri, 21 Jun 2024, Federico Serafini wrote:
> > Add more accepted guidelines to the monitored set to check them at
> > each
> > commit.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Asking for a release ack: this allows us to see more violations in
> the
> regular ECLAIR scanning results. But they are not blocking, so they
> won't cause additional new failures in the pipeline. It is just
> informative.
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> 
> 
> > ---
> >  automation/eclair_analysis/ECLAIR/monitored.ecl | 5 +++++
> >  1 file changed, 5 insertions(+)
> > 
> > diff --git a/automation/eclair_analysis/ECLAIR/monitored.ecl
> > b/automation/eclair_analysis/ECLAIR/monitored.ecl
> > index 4daecb0c83..9ffaebbdc3 100644
> > --- a/automation/eclair_analysis/ECLAIR/monitored.ecl
> > +++ b/automation/eclair_analysis/ECLAIR/monitored.ecl
> > @@ -18,10 +18,13 @@
> >  -enable=MC3R1.R12.5
> >  -enable=MC3R1.R1.3
> >  -enable=MC3R1.R13.6
> > +-enable=MC3R1.R13.1
> >  -enable=MC3R1.R1.4
> >  -enable=MC3R1.R14.1
> >  -enable=MC3R1.R14.4
> >  -enable=MC3R1.R16.2
> > +-enable=MC3R1.R16.3
> > +-enable=MC3R1.R16.4
> >  -enable=MC3R1.R16.6
> >  -enable=MC3R1.R16.7
> >  -enable=MC3R1.R17.1
> > @@ -34,6 +37,7 @@
> >  -enable=MC3R1.R20.13
> >  -enable=MC3R1.R20.14
> >  -enable=MC3R1.R20.4
> > +-enable=MC3R1.R20.7
> >  -enable=MC3R1.R20.9
> >  -enable=MC3R1.R2.1
> >  -enable=MC3R1.R21.10
> > @@ -58,6 +62,7 @@
> >  -enable=MC3R1.R5.2
> >  -enable=MC3R1.R5.3
> >  -enable=MC3R1.R5.4
> > +-enable=MC3R1.R5.5
> >  -enable=MC3R1.R5.6
> >  -enable=MC3R1.R6.1
> >  -enable=MC3R1.R6.2
> > -- 
> > 2.34.1
> > 



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:57:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:57:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746180.1153159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLea7-0004cN-7A; Mon, 24 Jun 2024 07:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746180.1153159; Mon, 24 Jun 2024 07:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLea7-0004cG-4U; Mon, 24 Jun 2024 07:57:43 +0000
Received: by outflank-mailman (input) for mailman id 746180;
 Mon, 24 Jun 2024 07:57:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLea5-0002Wx-UQ
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:57:41 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6e9d82c5-31ff-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:57:41 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2eaea28868dso52607311fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:57:41 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725459233csm85895566b.96.2024.06.24.00.57.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 00:57:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e9d82c5-31ff-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719215861; x=1719820661; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=MUw2iKVVN6VeYr37kWbNB40ZsEbrYHSv0Bpzc9f3ObA=;
        b=kPcb+Mrd4Lt8fZ6tbxdJgtETrKxggBzpUMkWvzfiwZtMo6DNIbv2ashTXA4lFYKYdo
         Mh6Je0I7uloiNr3CjZ9hVPRHG16zRxjVNfc1PESpExsxY8saexrtSFi8YPMWhTUv1/ZO
         2WdAXvs9mtFLjga5TIeWrjb55AyGsEfHBDWSVC7O+5ksyjJMswtgs0bO83URRt+zhcNl
         UHBRPtpgv9epIGDRvuGIs91oLHKjm5Y2iESuCRfjCCPBji6xjtwGzPJmGu+JIBrpXnQi
         N5+JB6UpiSbx7uThJ10bIzywh83dPC8rAkJa8txRPOB+grYEMHZnL7NOLDeatRRrAw8/
         brew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719215861; x=1719820661;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=MUw2iKVVN6VeYr37kWbNB40ZsEbrYHSv0Bpzc9f3ObA=;
        b=R8lIfdoRJ/M3eyVzRDsakvyu+B98pN1dPYYdzLPBWbpS5iP02RTtU6qwj2k7aiIexU
         yajAbbpCim32Zth6HPC3/AFvzO7KX1CEYkxQyVFBPfCqgEm9PqOrMNqNeRj+IgHurqEJ
         mf7G51pRsGwXqtaAL71dHB6BrDEzxgiz9n/yXLbgVOPAf7JEda9OvZCihgLHkCIcjNYl
         zR3lzaUnBTzTX+Tzu11ShN0YSQl+1NXPRbvmh5HNNDRxeHPLWMwEauKhOCvctEw0LRhY
         Ms3999nMTd+dZRJGscZyu11t+jOtPDnypToQ4RQ1TBehohx5F1TmoMcVwwJUK19MbLvN
         sgMQ==
X-Gm-Message-State: AOJu0Yyw6ONgLlKoJTSL0hp5I+PsdwxtWUEL/zHfjLe25BtVtkVhjM8S
	d661jFmEfXD7npi7GT1m1SlZ1hQH0W7I/M0L0J6h6Pd75wjCT13WOXh95P/E
X-Google-Smtp-Source: AGHT+IFTMbAtAhceHtzn1H6AdsiprfPuqdnbgMUeRCjae8kcxJ/ZKAM+5Y9U08BB/l7pWCOSKbNuXg==
X-Received: by 2002:a2e:8817:0:b0:2ec:5945:62e9 with SMTP id 38308e7fff4ca-2ec5b31d1d5mr27330931fa.32.1719215860669;
        Mon, 24 Jun 2024 00:57:40 -0700 (PDT)
Message-ID: <e79fa18334a0bde4dbd1e94ea4037a4bb7ac2bec.camel@gmail.com>
Subject: Re: [PATCH v2] common/unlzo: address violation of MISRA C Rule 7.3
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Alessandro Zucchelli
	 <alessandro.zucchelli@bugseng.com>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, Andrew Cooper
	 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Date: Mon, 24 Jun 2024 09:57:39 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406211431210.2572888@ubuntu-linux-20-04-desktop>
References: 
	<847f9b715b3c8e2ba0637fdd79111f4f828389c6.1718976211.git.alessandro.zucchelli@bugseng.com>
	 <alpine.DEB.2.22.394.2406211431210.2572888@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 14:31 -0700, Stefano Stabellini wrote:
> On Fri, 21 Jun 2024, Alessandro Zucchelli wrote:
> > This addresses violations of MISRA C:2012 Rule 7.3 which states as
> > following: the lowercase character `l' shall not be used in a
> > literal
> > suffix.
> > 
> > The file common/unlzo.c defines the non-compliant constant
> > LZO_BLOCK_SIZE with
> > a lowercase 'l'.
> > It is now defined as '256*1024L'.
> > 
> > No functional change.
> > 
> > Signed-off-by: Alessandro Zucchelli
> > <alessandro.zucchelli@bugseng.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Asking for a release ack for this trivial change
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> 
> 
> > ---
> > Changes from v1:
> > Instead of deviating /common/unlzo.c reports for Rule 7.3, they are
> > addressed by
> > changing the non-compliant definition of LZO_BLOCK_SIZE.
> > ---
> >  xen/common/unlzo.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/xen/common/unlzo.c b/xen/common/unlzo.c
> > index bdcefa95b3..acb8dff600 100644
> > --- a/xen/common/unlzo.c
> > +++ b/xen/common/unlzo.c
> > @@ -52,7 +52,7 @@ static inline u32 get_unaligned_be32(const void
> > *p)
> >  static const unsigned char lzop_magic[] = {
> >  	0x89, 0x4c, 0x5a, 0x4f, 0x00, 0x0d, 0x0a, 0x1a, 0x0a };
> >  
> > -#define LZO_BLOCK_SIZE        (256*1024l)
> > +#define LZO_BLOCK_SIZE        (256*1024L)
> >  #define HEADER_HAS_FILTER      0x00000800L
> >  #define HEADER_SIZE_MIN       (9 + 7     + 4 + 8     + 1       +
> > 4)
> >  #define HEADER_SIZE_MAX       (9 + 7 + 1 + 8 + 8 + 4 + 1 + 255 +
> > 4)
> > -- 
> > 2.34.1
> > 



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 07:58:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 07:58:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746187.1153170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLebD-0005rg-HW; Mon, 24 Jun 2024 07:58:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746187.1153170; Mon, 24 Jun 2024 07:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLebD-0005rX-DN; Mon, 24 Jun 2024 07:58:51 +0000
Received: by outflank-mailman (input) for mailman id 746187;
 Mon, 24 Jun 2024 07:58:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLebB-0005rM-Sc
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 07:58:49 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 96e8dd5c-31ff-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 09:58:49 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id
 2adb3069b0e04-52cdcd26d61so2090629e87.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 00:58:49 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52cd63cd109sm928601e87.116.2024.06.24.00.58.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 00:58:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 96e8dd5c-31ff-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719215928; x=1719820728; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Bzgh9U0V5X5k4TZc0szT7NP9q6c6SGTdFISDhBH2NvA=;
        b=Tc9JX/SCwsfFXDT5+N0jrykKsVWb25zIXNUZW6+FL//Ol6xcW2G6kfl3S/zyGgk9PG
         TtPuoWLVGckC92Ph72ZKuqkLYo/0fG4a9NRq1ylTpccWoogKDWRUFRFbnID+hJkIHIx+
         DafxqJUg9aZnt2EkssUZkee7oxKsK4KTY+MrcquiA5b7pRvbp3wP/21d2htCBbjk12tg
         dUl5e2Z0jKzcou2UqA0crqpvXlwZj11dLuqs4ntwZem4lQtO8XGz4Brf8/mj8fXylNOe
         l9oYVQaswtuQQTWtumYQGu3OCvsMuSBGticGrMPcVCcH5VPnpqCAo/LCGEKoq06+sgGH
         EI5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719215928; x=1719820728;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Bzgh9U0V5X5k4TZc0szT7NP9q6c6SGTdFISDhBH2NvA=;
        b=Vr2qGVsjRKUP4gZn3zzvqCCXQv+d53j0RyfBTZm1iYy99tgvAvwZB4r5upv9IM0GHy
         eT2Php4INUCshrP6c5AM2pvf+rpppBCzhq9h6WwMhSFtzv28Vn+4jsCzGa+BvL4rApu2
         oY0aJZPvRTGFtc9ODQp9LzrKPVk6PmHX0LvUZ5VmVqh4udw1y3rHH2m7r0MbPNetZeA2
         wBMI9ohc9r1dOoNxelmTVnuH2hOFbrXY8mSqdYWoMtKCpakM8oOPH3Cy7iGqaSsibfNw
         C54hvF6qy7+8fzmpE2W1LykHwmyWUtumjU1LqnlXzBC3bqVFiYVepTDS6LmtKz+lwqcg
         fhAQ==
X-Forwarded-Encrypted: i=1; AJvYcCVvC0bOrFOfdnT+v3MVW7lQpMEt/Umes38TX+6jrQAGTEh9J+S27C1jxR5+LNKOnx2RoyW4YGWKLlwrTePu3+cc18s4DpLjZy+EF00MfTk=
X-Gm-Message-State: AOJu0Yx8D/F2tZZpAVY+cACv+panCZgkaPgnQxuQVzs79+Wv3RZq4Tuf
	D7H38XPwmJNPhQzquFMYaPnfJbx6MqhyZaY64CH7aqxvWenapubN
X-Google-Smtp-Source: AGHT+IGUCKHRMlLn7HmONKhovaIf3k6RBKZXkGbaWNgDXCbW8Wa1TfsJOFsNIq1v4Qj5YuoetaJQMQ==
X-Received: by 2002:a05:6512:607:b0:52c:d84b:93b2 with SMTP id 2adb3069b0e04-52ce18341f9mr1957411e87.15.1719215928317;
        Mon, 24 Jun 2024 00:58:48 -0700 (PDT)
Message-ID: <e3bea61f8823deec6f4742fe5ef73ea4291593e6.camel@gmail.com>
Subject: Re: [PATCH for-4.19? 0/3] xen: build adjustments for
 __read_mostly/__ro_after_init
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Shawn Anastasio <sanastasio@raptorengineering.com>, George Dunlap
 <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Date: Mon, 24 Jun 2024 09:58:47 +0200
In-Reply-To: <20240621201928.319293-1-andrew.cooper3@citrix.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 21:19 +0100, Andrew Cooper wrote:
> In aid of getting the RISC-V build working without introducing more
> technical
> debt.  It's done by making PPC shed its copy of said technical debt.
> 
> Build tested quite thoroughly, including in Gitlab.
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> 
> Andrew Cooper (3):
>   xen/riscv: Drop legacy __ro_after_init definition
>   xen/ppc: Adjust ppc64_defconfig
>   xen/ppc: Avoid using the legacy __read_mostly/__ro_after_init
>     definitions
> 
>  xen/arch/ppc/configs/ppc64_defconfig | 6 ------
>  xen/arch/ppc/include/asm/cache.h     | 3 ---
>  xen/arch/ppc/mm-radix.c              | 1 +
>  xen/arch/ppc/stubs.c                 | 7 +++++++
>  xen/arch/riscv/mm.c                  | 2 +-
>  xen/common/argo.c                    | 1 +
>  xen/common/cpu.c                     | 1 +
>  xen/common/debugtrace.c              | 1 +
>  xen/common/domain.c                  | 1 +
>  xen/common/event_channel.c           | 2 ++
>  xen/common/keyhandler.c              | 1 +
>  xen/common/memory.c                  | 1 +
>  xen/common/page_alloc.c              | 1 +
>  xen/common/pdx.c                     | 1 +
>  xen/common/radix-tree.c              | 1 +
>  xen/common/random.c                  | 2 +-
>  xen/common/rcupdate.c                | 1 +
>  xen/common/sched/core.c              | 1 +
>  xen/common/sched/cpupool.c           | 1 +
>  xen/common/sched/credit.c            | 1 +
>  xen/common/sched/credit2.c           | 1 +
>  xen/common/shutdown.c                | 1 +
>  xen/common/spinlock.c                | 1 +
>  xen/common/timer.c                   | 1 +
>  xen/common/version.c                 | 3 +--
>  xen/common/virtual_region.c          | 1 +
>  xen/common/vmap.c                    | 2 +-
>  xen/drivers/char/console.c           | 1 +
>  xen/drivers/char/ns16550.c           | 1 +
>  xen/drivers/char/serial.c            | 2 +-
>  xen/include/xen/cache.h              | 2 ++
>  xen/include/xen/hypfs.h              | 1 +
>  32 files changed, 38 insertions(+), 15 deletions(-)
> 




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:01:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:01:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746198.1153180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLedo-0007v5-0r; Mon, 24 Jun 2024 08:01:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746198.1153180; Mon, 24 Jun 2024 08:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLedn-0007uy-UX; Mon, 24 Jun 2024 08:01:31 +0000
Received: by outflank-mailman (input) for mailman id 746198;
 Mon, 24 Jun 2024 08:01:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLedn-0007uk-6T
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:01:31 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f687c70a-31ff-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 10:01:29 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2eaa89464a3so46868771fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:01:29 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819a8ef90sm6028890a91.33.2024.06.24.01.01.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 01:01:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f687c70a-31ff-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719216089; x=1719820889; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=K2giALUKLzTt8KuTE56gEE0BkU395f+bNN+zD8v16Z0=;
        b=Ojlc8Ksl64gKL6xsvvPIFElUu2oWJe9F0BfSWpSPYTHm0ivzZhIw4khnz2kPbt/bgp
         YeNHd0j2mZEYrNCfKxdxhQrfUMJbM2BcuebhcXKl/rYxri+I6oB3YeP9aSGHC2WRJEtd
         P52SxSbEJ63jGdugKSq6M69J/8wWcsbC7IewDKI3ZCUSsZ7650tymUUgFDekqrXvIK1/
         ujSaRHGVUHsaO/+h1xOh5wS39iJM2V1ZoPZ10hH9vqqub0h6KUygoqumqhUqMNcyrtk9
         dCbrbCn8tEXR5UIpeYWMjLr0c9fm8mD9fEnGAXl56tYvNDRy1IhX96GFNug9Cn2RSBQO
         /32A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719216089; x=1719820889;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=K2giALUKLzTt8KuTE56gEE0BkU395f+bNN+zD8v16Z0=;
        b=mSuLtIih5NNa/c7fnc61ZQYs72sZnACDB2qFXP+9hwZuaco11+l8XjykDf9D3fM8YX
         0jrcFen3jDGrQYRYfBaoCK3LKx3lJhaRtfWT/PS2INJiBmiIr2JMJBNUYvPoOOvmjKCq
         VHaDUew71IMSpfViRmokzO2K4NzqeJqyMre4+NEHMMqXaixGUuwEG1rlpbNlWGIX11cU
         DWvy4e62AtXN1k9COdvPthc+daYQ17OlwDPKuc8B8q8xk5goy4qOsBUjIbx8W9Y+/vD3
         OjptUn7C1sZH9vS7oqzB5JiDG5xjgv3CaF3YD5dMEPaRvPkFzGn5zG7xwuOg3DtmV/ex
         EEtA==
X-Forwarded-Encrypted: i=1; AJvYcCU39t7bOnxB/FH7a7+dH/AYFZUZ5cUZoB+3PdMocRrYAjvaFafz76ExTJn7stFBsrQj+/NlLiquJK+aqtv+3OS30hwFIu/el+OG2fLf/sE=
X-Gm-Message-State: AOJu0YwCZQZgcQW0aa1lTqbi1hKlNsmkUIRSwgeDgzJ6GX+nmUBZxO5c
	Q3/F2Y++T/b95Ljtnf+sIOEYScfNixQCbRR4wT4qHh/ExepKw987peTq7i4LtA==
X-Google-Smtp-Source: AGHT+IHAsTukawoXywCLA54AzVWWZ+qUo8azIxi2TbGFSzfawk/fR4NHuTPZf6T+CrGC9BWhOzpr6A==
X-Received: by 2002:a2e:9e82:0:b0:2ec:52da:606e with SMTP id 38308e7fff4ca-2ec579c7702mr29904581fa.40.1719216088971;
        Mon, 24 Jun 2024 01:01:28 -0700 (PDT)
Message-ID: <71f97e26-cbb5-4e6b-860a-8c3de1e7e8df@suse.com>
Date: Mon, 24 Jun 2024 10:01:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 3/3] xen/ppc: Avoid using the legacy
 __read_mostly/__ro_after_init definitions
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Shawn Anastasio <sanastasio@raptorengineering.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
 <20240621201928.319293-4-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240621201928.319293-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 22:19, Andrew Cooper wrote:
> RISC-V wants to introduce a full build of Xen without using the legacy
> definitions.  PPC64 has the most minimal full build of Xen right now, so make
> it compile without the legacy definitions.
> 
> Mostly this is just including xen/sections.h in a variety of common files.  In
> a couple of cases, we can drop an inclusion of {xen,asm}/cache.h, but almost
> all files get the definitions transitively.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:02:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:02:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746202.1153189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeeb-0008PB-8p; Mon, 24 Jun 2024 08:02:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746202.1153189; Mon, 24 Jun 2024 08:02:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeeb-0008P4-65; Mon, 24 Jun 2024 08:02:21 +0000
Received: by outflank-mailman (input) for mailman id 746202;
 Mon, 24 Jun 2024 08:02:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLeea-0008Ou-4l
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:02:20 +0000
Received: from mail-lf1-x132.google.com (mail-lf1-x132.google.com
 [2a00:1450:4864:20::132])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 13a90670-3200-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 10:02:18 +0200 (CEST)
Received: by mail-lf1-x132.google.com with SMTP id
 2adb3069b0e04-52ce6a9fd5cso571814e87.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:02:18 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52cd63b4785sm930734e87.15.2024.06.24.01.02.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 01:02:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13a90670-3200-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719216138; x=1719820938; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Vkhcj2sPlANy+bGdC4oEF7IGeQEhcSoWh14XUSl7Rlc=;
        b=IZ+9nvrnPeYsQ8A6qfrEn1cTKdyuujoYfJixwRjGNo8JNw5s2T7NoACvVZw+OZMkX+
         usmhliI+WqH91fKd9SKirn2HDkUd7g+mYl28P2jlD7je1exHCiPVT+ovQ9tfEwMRkWWH
         yLmlFWg0WQJGXsXI4sOJA2wghVu0PxAifmiBJFj3oGxyW1vYi6z3nmmf+lt1Wipbuj+5
         uQdXvYGETt7AUaKPWXHv9qYDvfY0li5tmG6QHxbFjzEJLr+sidyxEnkdAmfZk00LxRQH
         mWu4nESu3BKTrMTzDeP1m5NkfA6grdy88OIMFWAmCowmo4psFalcWtW8zGozBzSBDY3q
         x1Xw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719216138; x=1719820938;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Vkhcj2sPlANy+bGdC4oEF7IGeQEhcSoWh14XUSl7Rlc=;
        b=D6MkQEtKPQS34srK8z13DYU+FbBqXgZ4utMuBUlYV1IgTDx9omQVkdFYJ/SkT2+VlM
         chf/OqUA8gBygUhc9tomj37RHShlxLEHGnhe3Zj4UGTKNzUFTFID8bz5irf46j42n4n9
         ogBg9zkXmWkiGMc9WucIA+bxk+mIeW085b99WNIUzOzhA16FR5C35fgXnQNezC/V1mpo
         zkUV4+zeFiK5FY2f6Cnl061zn/d0mS+8XXzjGJ75pwXJVueq21zoWyaffDYfEt4aPXJs
         amrp9iBTTnbsNAcveORB85YZMuSGOLWbxaZYVeffbClpwJnZmTDqlAUllfp65EHrVSya
         qu/A==
X-Forwarded-Encrypted: i=1; AJvYcCX4+1tBbgBPGx5mCkmNRnrE0Rtqr90H5CvdxA6e49ms5ULoPUaJubwuegIp6358ARB0Wv9dnl4WcM9yE+6Y3PDMlSSmXxVp/pniVRA3sXs=
X-Gm-Message-State: AOJu0YwrrqEKFOsX3YiKQ5Ek0oaRERUfDBWq65yXeeaSsTwrr99jJ3UK
	5jWau3t484iHHLf7CWBooZl9wNAdCvKP237g4UTOzXH0vgwXlYAK
X-Google-Smtp-Source: AGHT+IHS7USEhnHKXjVAqZ+l1BUwCw4DrHarJiu2Sxueeqojf2iKICCMtS00eP9vLDQ2YbhZIMOjvQ==
X-Received: by 2002:a05:6512:2805:b0:52c:dc69:28f3 with SMTP id 2adb3069b0e04-52ce185cfdfmr2530367e87.52.1719216137698;
        Mon, 24 Jun 2024 01:02:17 -0700 (PDT)
Message-ID: <d8b2b01e608c6ddedbb2b46f58e8bd46ecfd5ca9.camel@gmail.com>
Subject: Re: [PATCH 1/3] xen/riscv: Drop legacy __ro_after_init definition
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Shawn Anastasio <sanastasio@raptorengineering.com>, George Dunlap
 <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Date: Mon, 24 Jun 2024 10:02:16 +0200
In-Reply-To: <20240621201928.319293-2-andrew.cooper3@citrix.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
	 <20240621201928.319293-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 21:19 +0100, Andrew Cooper wrote:
> Hide the legacy __ro_after_init definition in xen/cache.h for RISC-V,
> to avoid
> its use creeping in.  Only mm.c needs adjusting as a consequence.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Shawn Anastasio <sanastasio@raptorengineering.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: George Dunlap <George.Dunlap@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> 
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342686294
> ---
>  xen/arch/riscv/mm.c     | 2 +-
>  xen/include/xen/cache.h | 2 ++
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
> index 053f043a3d2a..3ebaf6da01cc 100644
> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -1,11 +1,11 @@
>  /* SPDX-License-Identifier: GPL-2.0-only */
>  
> -#include <xen/cache.h>
>  #include <xen/compiler.h>
>  #include <xen/init.h>
>  #include <xen/kernel.h>
>  #include <xen/macros.h>
>  #include <xen/pfn.h>
> +#include <xen/sections.h>
>  
>  #include <asm/early_printk.h>
>  #include <asm/csr.h>
> diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
> index 55456823c543..82a3ba38e3e7 100644
> --- a/xen/include/xen/cache.h
> +++ b/xen/include/xen/cache.h
> @@ -15,7 +15,9 @@
>  #define __cacheline_aligned
> __attribute__((__aligned__(SMP_CACHE_BYTES)))
>  #endif
>  
> +#if defined(CONFIG_ARM) || defined(CONFIG_X86) ||
> defined(CONFIG_PPC64)
>  /* TODO: Phase out the use of this via cache.h */
>  #define __ro_after_init __section(".data.ro_after_init")
> +#endif
Why is "defined(CONFIG_RISCV_64)" missing?

~ Oleksii

>  
>  #endif /* __LINUX_CACHE_H */



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:04:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:04:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746211.1153200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLegx-0000e4-PG; Mon, 24 Jun 2024 08:04:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746211.1153200; Mon, 24 Jun 2024 08:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLegx-0000dx-Me; Mon, 24 Jun 2024 08:04:47 +0000
Received: by outflank-mailman (input) for mailman id 746211;
 Mon, 24 Jun 2024 08:04:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLegw-0000dr-DH
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:04:46 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b31ee49-3200-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 10:04:45 +0200 (CEST)
Received: by mail-lf1-x136.google.com with SMTP id
 2adb3069b0e04-52ce12707d9so1139156e87.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:04:45 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52ce51ccdeesm286439e87.274.2024.06.24.01.04.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 01:04:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b31ee49-3200-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719216285; x=1719821085; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=l4ww27Yb3LTNkNA5SYuIcJfCisP5kao3G9YNomzHN00=;
        b=GhJPq3/fdbtbZndV9wy/28rt8hK1+KcEqegQ+a6f++75eheEtoUy/WzjfHfEOKQeM+
         UaetsxfqE3ssIlBCu5OcyvmTTY6eyyazC636wWDGsAOH1jmGb7vNJrwSYYtEtqRoeYI7
         3JasJoUSLPQTogMpFOBzFcDOmCU3Tl7mML/eKVqyR/VSB2ygqNEaiHpQprKqMplB3+KR
         GcuURNp5L9RkSeTJBlUuMJxy8JToh9DF43zkg0bA90iM5Te2MvBTiwR4hSiObdcC+Lf+
         1atf/3NGymhUQQsTZu2UBOld1R+/+IKsx1UPoFrzjaEATtm1imTnkS7I1Llqo/SL8CTr
         0PKw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719216285; x=1719821085;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=l4ww27Yb3LTNkNA5SYuIcJfCisP5kao3G9YNomzHN00=;
        b=TUZ0w/ufmQTIIh+51NfMASz+zKa9Ede+uRCK8slzJ7fKEyq/YZ3/0k3Vg2O4GGlMkv
         JAs/ZiEs99dQJMB2aDlS5fcsbJ7i7DqkYXdsj0R17CCP1wN9MB/MuCMjI753R+K0r60D
         F+E3606MBZjVe5GLfQMneYkFK5FTmRd7XeMbVf/efOuQap4zHoHDJJbv84yCnH3CSZnW
         tryVj2rCRAq4EjpopSc3zg/qKiUMIk+Lmb6iqopQN4QWSh45ge0IsMW59lCNQdnPRKuT
         00IH4UixHpgxtHMLeqWhkSt/hnpd3UQO842IX2GLNdSHrv6kL5Iu2kCvi5NN+oJW/42j
         hMlA==
X-Forwarded-Encrypted: i=1; AJvYcCWhaCpr9hMxmL+OqgDLef/bVHSOe3XhWvWETgPPvDyMF3+5ZIb+nx49k8Ient+QwP1cyhKNo/C2UPeE6phZaV72hF5ubItoG9LkpH3PMXg=
X-Gm-Message-State: AOJu0YwqCn5PijrE9yQHHzyzwGT2qQGHo+DA/Tz/hg4gtX0MzqtWMMAO
	Pt3JwsBDYg+SLmNFnzAp76KPBbRR5Gjaxm2ab5tgaUKieFV0mPIH8aSQArsn
X-Google-Smtp-Source: AGHT+IGt9nWScohvdd6VHN8jFjVDzkfSAN64JQtt8M36Fq7fL4USRatG2ASlJBGmyEBEa6JZNH77bQ==
X-Received: by 2002:a05:6512:348c:b0:52c:da38:b2c6 with SMTP id 2adb3069b0e04-52ce0672770mr2207650e87.50.1719216284387;
        Mon, 24 Jun 2024 01:04:44 -0700 (PDT)
Message-ID: <384405c4718838398c3a8272027f526b53340e7a.camel@gmail.com>
Subject: Re: [PATCH 3/3] xen/ppc: Avoid using the legacy
 __read_mostly/__ro_after_init definitions
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Shawn Anastasio <sanastasio@raptorengineering.com>, George Dunlap
 <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Date: Mon, 24 Jun 2024 10:04:43 +0200
In-Reply-To: <20240621201928.319293-4-andrew.cooper3@citrix.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
	 <20240621201928.319293-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 21:19 +0100, Andrew Cooper wrote:
> RISC-V wants to introduce a full build of Xen without using the
> legacy
> definitions.=C2=A0 PPC64 has the most minimal full build of Xen right now=
,
> so make
> it compile without the legacy definitions.
>=20
> Mostly this is just including xen/sections.h in a variety of common
> files.=C2=A0 In
> a couple of cases, we can drop an inclusion of {xen,asm}/cache.h, but
> almost
> all files get the definitions transitively.
>=20
> No functional change.
>=20
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleskii

> ---
> CC: Shawn Anastasio <sanastasio@raptorengineering.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: George Dunlap <George.Dunlap@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> 
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342714126
> ---
>  xen/arch/ppc/include/asm/cache.h | 3 ---
>  xen/arch/ppc/mm-radix.c          | 1 +
>  xen/arch/ppc/stubs.c             | 1 +
>  xen/common/argo.c                | 1 +
>  xen/common/cpu.c                 | 1 +
>  xen/common/debugtrace.c          | 1 +
>  xen/common/domain.c              | 1 +
>  xen/common/event_channel.c       | 2 ++
>  xen/common/keyhandler.c          | 1 +
>  xen/common/memory.c              | 1 +
>  xen/common/page_alloc.c          | 1 +
>  xen/common/pdx.c                 | 1 +
>  xen/common/radix-tree.c          | 1 +
>  xen/common/random.c              | 2 +-
>  xen/common/rcupdate.c            | 1 +
>  xen/common/sched/core.c          | 1 +
>  xen/common/sched/cpupool.c       | 1 +
>  xen/common/sched/credit.c        | 1 +
>  xen/common/sched/credit2.c       | 1 +
>  xen/common/shutdown.c            | 1 +
>  xen/common/spinlock.c            | 1 +
>  xen/common/timer.c               | 1 +
>  xen/common/version.c             | 3 +--
>  xen/common/virtual_region.c      | 1 +
>  xen/common/vmap.c                | 2 +-
>  xen/drivers/char/console.c       | 1 +
>  xen/drivers/char/ns16550.c       | 1 +
>  xen/drivers/char/serial.c        | 2 +-
>  xen/include/xen/cache.h          | 2 +-
>  xen/include/xen/hypfs.h          | 1 +
>  30 files changed, 30 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/arch/ppc/include/asm/cache.h b/xen/arch/ppc/include/asm/cache.h
> index 13c0bf3242c8..8a0a6b7b1756 100644
> --- a/xen/arch/ppc/include/asm/cache.h
> +++ b/xen/arch/ppc/include/asm/cache.h
> @@ -3,7 +3,4 @@
>  #ifndef _ASM_PPC_CACHE_H
>  #define _ASM_PPC_CACHE_H
>  
> -/* TODO: Phase out the use of this via cache.h */
> -#define __read_mostly __section(".data.read_mostly")
> -
>  #endif /* _ASM_PPC_CACHE_H */
> diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c
> index ab5a10695c5f..0a47959e64f2 100644
> --- a/xen/arch/ppc/mm-radix.c
> +++ b/xen/arch/ppc/mm-radix.c
> @@ -2,6 +2,7 @@
>  #include <xen/init.h>
>  #include <xen/kernel.h>
>  #include <xen/mm.h>
> +#include <xen/sections.h>
>  #include <xen/types.h>
>  #include <xen/lib.h>
>  
> diff --git a/xen/arch/ppc/stubs.c b/xen/arch/ppc/stubs.c
> index a10691165b1b..0e7a26dadbc1 100644
> --- a/xen/arch/ppc/stubs.c
> +++ b/xen/arch/ppc/stubs.c
> @@ -3,6 +3,7 @@
>  #include <xen/domain.h>
>  #include <xen/irq.h>
>  #include <xen/nodemask.h>
> +#include <xen/sections.h>
>  #include <xen/time.h>
>  #include <public/domctl.h>
>  #include <public/vm_event.h>
> diff --git a/xen/common/argo.c b/xen/common/argo.c
> index 901f41eb2dbe..df19006744a3 100644
> --- a/xen/common/argo.c
> +++ b/xen/common/argo.c
> @@ -25,6 +25,7 @@
>  #include <xen/nospec.h>
>  #include <xen/param.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/time.h>
>  
>  #include <xsm/xsm.h>
> diff --git a/xen/common/cpu.c b/xen/common/cpu.c
> index 6e35b114c080..f09af0444b6a 100644
> --- a/xen/common/cpu.c
> +++ b/xen/common/cpu.c
> @@ -3,6 +3,7 @@
>  #include <xen/event.h>
>  #include <xen/init.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/stop_machine.h>
>  #include <xen/rcupdate.h>
>  
> diff --git a/xen/common/debugtrace.c b/xen/common/debugtrace.c
> index a272e5e43761..ca883ad9198d 100644
> --- a/xen/common/debugtrace.c
> +++ b/xen/common/debugtrace.c
> @@ -13,6 +13,7 @@
>  #include <xen/mm.h>
>  #include <xen/param.h>
>  #include <xen/percpu.h>
> +#include <xen/sections.h>
>  #include <xen/serial.h>
>  #include <xen/smp.h>
>  #include <xen/spinlock.h>
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 67cadb7c3f4f..3db0e0b793f9 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -11,6 +11,7 @@
>  #include <xen/err.h>
>  #include <xen/param.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/domain.h>
>  #include <xen/mm.h>
>  #include <xen/event.h>
> diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
> index a67feff98976..822b2c982489 100644
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -26,6 +26,8 @@
>  #include <xen/guest_access.h>
>  #include <xen/hypercall.h>
>  #include <xen/keyhandler.h>
> +#include <xen/sections.h>
> +
>  #include <asm/current.h>
>  
>  #include <public/xen.h>
> diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
> index 127ca506965c..674e7be39e9d 100644
> --- a/xen/common/keyhandler.c
> +++ b/xen/common/keyhandler.c
> @@ -6,6 +6,7 @@
>  #include <xen/delay.h>
>  #include <xen/keyhandler.h>
>  #include <xen/param.h>
> +#include <xen/sections.h>
>  #include <xen/shutdown.h>
>  #include <xen/event.h>
>  #include <xen/console.h>
> diff --git a/xen/common/memory.c b/xen/common/memory.c
> index de2cc7ad92a5..a6f2f6d1b348 100644
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -23,6 +23,7 @@
>  #include <xen/param.h>
>  #include <xen/perfc.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/trace.h>
>  #include <xen/types.h>
>  #include <asm/current.h>
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 054b7edb3989..33c8c917d984 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -134,6 +134,7 @@
>  #include <xen/pfn.h>
>  #include <xen/types.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/softirq.h>
>  #include <xen/spinlock.h>
>  
> diff --git a/xen/common/pdx.c b/xen/common/pdx.c
> index d3d63b075032..b8384e6189df 100644
> --- a/xen/common/pdx.c
> +++ b/xen/common/pdx.c
> @@ -19,6 +19,7 @@
>  #include <xen/mm.h>
>  #include <xen/bitops.h>
>  #include <xen/nospec.h>
> +#include <xen/sections.h>
>  
>  /**
>   * Maximum (non-inclusive) usable pdx. Must be
> diff --git a/xen/common/radix-tree.c b/xen/common/radix-tree.c
> index adc3034222dc..fb283a9d52fc 100644
> --- a/xen/common/radix-tree.c
> +++ b/xen/common/radix-tree.c
> @@ -21,6 +21,7 @@
>  #include <xen/init.h>
>  #include <xen/radix-tree.h>
>  #include <xen/errno.h>
> +#include <xen/sections.h>
>  
>  struct radix_tree_path {
>  	struct radix_tree_node *node;
> diff --git a/xen/common/random.c b/xen/common/random.c
> index a29f2fcb991a..35a9f387fd5c 100644
> --- a/xen/common/random.c
> +++ b/xen/common/random.c
> @@ -1,6 +1,6 @@
> -#include <xen/cache.h>
>  #include <xen/init.h>
>  #include <xen/percpu.h>
> +#include <xen/sections.h>
>  #include <xen/random.h>
>  #include <xen/time.h>
>  #include <asm/random.h>
> diff --git a/xen/common/rcupdate.c b/xen/common/rcupdate.c
> index 212a99acd8c8..fd5d3d7484a5 100644
> --- a/xen/common/rcupdate.c
> +++ b/xen/common/rcupdate.c
> @@ -35,6 +35,7 @@
>  #include <xen/kernel.h>
>  #include <xen/init.h>
>  #include <xen/param.h>
> +#include <xen/sections.h>
>  #include <xen/spinlock.h>
>  #include <xen/smp.h>
>  #include <xen/rcupdate.h>
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index d84b65f197b3..1a3ff5ae4dec 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -18,6 +18,7 @@
>  #include <xen/lib.h>
>  #include <xen/param.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/domain.h>
>  #include <xen/delay.h>
>  #include <xen/event.h>
> diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
> index ad8f60846273..57dfee26f21f 100644
> --- a/xen/common/sched/cpupool.c
> +++ b/xen/common/sched/cpupool.c
> @@ -22,6 +22,7 @@
>  #include <xen/param.h>
>  #include <xen/percpu.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/warning.h>
>  
>  #include "private.h"
> diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
> index 020f44595ed0..a6bb321e7da1 100644
> --- a/xen/common/sched/credit.c
> +++ b/xen/common/sched/credit.c
> @@ -12,6 +12,7 @@
>  #include <xen/lib.h>
>  #include <xen/param.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/domain.h>
>  #include <xen/delay.h>
>  #include <xen/event.h>
> diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
> index 685929c2902b..a7da60f40376 100644
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -14,6 +14,7 @@
>  #include <xen/lib.h>
>  #include <xen/param.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/domain.h>
>  #include <xen/delay.h>
>  #include <xen/event.h>
> diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
> index 5f8141edc6b2..f413f331af17 100644
> --- a/xen/common/shutdown.c
> +++ b/xen/common/shutdown.c
> @@ -2,6 +2,7 @@
>  #include <xen/lib.h>
>  #include <xen/param.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/domain.h>
>  #include <xen/delay.h>
>  #include <xen/watchdog.h>
> diff --git a/xen/common/spinlock.c b/xen/common/spinlock.c
> index 28c6e9d3ac60..0b877384451d 100644
> --- a/xen/common/spinlock.c
> +++ b/xen/common/spinlock.c
> @@ -5,6 +5,7 @@
>  #include <xen/param.h>
>  #include <xen/smp.h>
>  #include <xen/time.h>
> +#include <xen/sections.h>
>  #include <xen/spinlock.h>
>  #include <xen/guest_access.h>
>  #include <xen/preempt.h>
> diff --git a/xen/common/timer.c b/xen/common/timer.c
> index a21798b76f38..da0d069cc674 100644
> --- a/xen/common/timer.c
> +++ b/xen/common/timer.c
> @@ -11,6 +11,7 @@
>  #include <xen/sched.h>
>  #include <xen/lib.h>
>  #include <xen/param.h>
> +#include <xen/sections.h>
>  #include <xen/smp.h>
>  #include <xen/perfc.h>
>  #include <xen/time.h>
> diff --git a/xen/common/version.c b/xen/common/version.c
> index 80869430fc7c..b7d7d515a3dc 100644
> --- a/xen/common/version.c
> +++ b/xen/common/version.c
> @@ -3,14 +3,13 @@
>  #include <xen/init.h>
>  #include <xen/errno.h>
>  #include <xen/lib.h>
> +#include <xen/sections.h>
>  #include <xen/string.h>
>  #include <xen/types.h>
>  #include <xen/efi.h>
>  #include <xen/elf.h>
>  #include <xen/version.h>
>  
> -#include <asm/cache.h>
> -
>  const char *xen_compile_date(void)
>  {
>      return XEN_COMPILE_DATE;
> diff --git a/xen/common/virtual_region.c b/xen/common/virtual_region.c
> index 52405d84b25c..1dc2e9f592ed 100644
> --- a/xen/common/virtual_region.c
> +++ b/xen/common/virtual_region.c
> @@ -6,6 +6,7 @@
>  #include <xen/kernel.h>
>  #include <xen/mm.h>
>  #include <xen/rcupdate.h>
> +#include <xen/sections.h>
>  #include <xen/spinlock.h>
>  #include <xen/virtual_region.h>
>  
> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> index 966a7e763f0f..b3b4ddf65311 100644
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -1,6 +1,6 @@
>  #ifdef VMAP_VIRT_START
>  #include <xen/bitmap.h>
> -#include <xen/cache.h>
> +#include <xen/sections.h>
>  #include <xen/init.h>
>  #include <xen/mm.h>
>  #include <xen/pfn.h>
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 3a3a97bcbe3a..7da8c5296f3b 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -32,6 +32,7 @@
>  #include <xen/warning.h>
>  #include <xen/pv_console.h>
>  #include <asm/setup.h>
> +#include <xen/sections.h>
>  
>  #ifdef CONFIG_X86
>  #include <xen/consoled.h>
> diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
> index 8f76bbe676bc..eaeb0e09d01e 100644
> --- a/xen/drivers/char/ns16550.c
> +++ b/xen/drivers/char/ns16550.c
> @@ -22,6 +22,7 @@
>  #include <xen/irq.h>
>  #include <xen/param.h>
>  #include <xen/sched.h>
> +#include <xen/sections.h>
>  #include <xen/timer.h>
>  #include <xen/serial.h>
>  #include <xen/iocap.h>
> diff --git a/xen/drivers/char/serial.c b/xen/drivers/char/serial.c
> index f28d8557c0a5..591a00900869 100644
> --- a/xen/drivers/char/serial.c
> +++ b/xen/drivers/char/serial.c
> @@ -10,8 +10,8 @@
>  #include <xen/init.h>
>  #include <xen/mm.h>
>  #include <xen/param.h>
> +#include <xen/sections.h>
>  #include <xen/serial.h>
> -#include <xen/cache.h>
>  
>  #include <asm/processor.h>
>  
> diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
> index 82a3ba38e3e7..a19942fd63ef 100644
> --- a/xen/include/xen/cache.h
> +++ b/xen/include/xen/cache.h
> @@ -15,7 +15,7 @@
>  #define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
>  #endif
>  
> -#if defined(CONFIG_ARM) || defined(CONFIG_X86) || defined(CONFIG_PPC64)
> +#if defined(CONFIG_ARM) || defined(CONFIG_X86)
>  /* TODO: Phase out the use of this via cache.h */
>  #define __ro_after_init __section(".data.ro_after_init")
>  #endif
> diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
> index 1b65a9188c6c..d8fcac23b46b 100644
> --- a/xen/include/xen/hypfs.h
> +++ b/xen/include/xen/hypfs.h
> @@ -4,6 +4,7 @@
>  #ifdef CONFIG_HYPFS
>  #include <xen/lib.h>
>  #include <xen/list.h>
> +#include <xen/sections.h>
>  #include <xen/string.h>
>  #include <public/hypfs.h>
>  



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:06:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:06:08 +0000
Message-ID: <a15da9eb-f93f-4a01-8822-21452b762f53@suse.com>
Date: Mon, 24 Jun 2024 10:04:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/3] xen/riscv: Drop legacy __ro_after_init definition
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Shawn Anastasio <sanastasio@raptorengineering.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
 <20240621201928.319293-2-andrew.cooper3@citrix.com>
 <d8b2b01e608c6ddedbb2b46f58e8bd46ecfd5ca9.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d8b2b01e608c6ddedbb2b46f58e8bd46ecfd5ca9.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 24.06.2024 10:02, Oleksii wrote:
> On Fri, 2024-06-21 at 21:19 +0100, Andrew Cooper wrote:
>> Hide the legacy __ro_after_init definition in xen/cache.h for RISC-V,
>> to avoid its use creeping in.  Only mm.c needs adjusting as a
>> consequence.
>>
>> No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Shawn Anastasio <sanastasio@raptorengineering.com>
>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>> CC: George Dunlap <George.Dunlap@citrix.com>
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Stefano Stabellini <sstabellini@kernel.org>
>> CC: Julien Grall <julien@xen.org>
>>
>> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342686294
>> ---
>>  xen/arch/riscv/mm.c     | 2 +-
>>  xen/include/xen/cache.h | 2 ++
>>  2 files changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
>> index 053f043a3d2a..3ebaf6da01cc 100644
>> --- a/xen/arch/riscv/mm.c
>> +++ b/xen/arch/riscv/mm.c
>> @@ -1,11 +1,11 @@
>>  /* SPDX-License-Identifier: GPL-2.0-only */
>>  
>> -#include <xen/cache.h>
>>  #include <xen/compiler.h>
>>  #include <xen/init.h>
>>  #include <xen/kernel.h>
>>  #include <xen/macros.h>
>>  #include <xen/pfn.h>
>> +#include <xen/sections.h>
>>  
>>  #include <asm/early_printk.h>
>>  #include <asm/csr.h>
>> diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
>> index 55456823c543..82a3ba38e3e7 100644
>> --- a/xen/include/xen/cache.h
>> +++ b/xen/include/xen/cache.h
>> @@ -15,7 +15,9 @@
>>  #define __cacheline_aligned
>> __attribute__((__aligned__(SMP_CACHE_BYTES)))
>>  #endif
>>  
>> +#if defined(CONFIG_ARM) || defined(CONFIG_X86) ||
>> defined(CONFIG_PPC64)
>>  /* TODO: Phase out the use of this via cache.h */
>>  #define __ro_after_init __section(".data.ro_after_init")
>> +#endif
> Why is "defined(CONFIG_RISCV_64)" missing?

The TODO is being addressed by this patch for RISC-V. See how a subsequent
patch also drops CONFIG_PPC64.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:09:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:09:00 +0000
Message-ID: <459be95e245c28f0e3dc85ae2bbc792d4bb7efb1.camel@gmail.com>
Subject: Re: [PATCH for-4.19 0/4] xen/xlat: Improvements to compat hypercall
 checking
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich
 <JBeulich@suse.com>,  Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>
Date: Mon, 24 Jun 2024 10:08:49 +0200
In-Reply-To: <dec0441e-c66d-44cd-86cd-7b914320b9c9@citrix.com>
References: <20240415154155.2718064-1-andrew.cooper3@citrix.com>
	 <dec0441e-c66d-44cd-86cd-7b914320b9c9@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 17:34 +0100, Andrew Cooper wrote:
> On 15/04/2024 4:41 pm, Andrew Cooper wrote:
> > This started off as patch 3, and grew somewhat.
> > 
> > Patches 1-3 are simple and hopefully non-controversial.
> > 
> > Patch 4 is an attempt to make the headers less fragile, but came
> > with an unexpected complication.  Details in the patch.
> > 
> > Andrew Cooper (4):
> >   xen/xlat: Sort out whitespace
> >   xen/xlat: Sort structs per file
> >   xen/gnttab: Perform compat/native gnttab_query_size check
> 
> I'm timing out waiting for a justification on the whitespace comment.
> 
> Oleksii: Can I get a release ack on this please?  Patch 3 is the main
> bugfix, which is the insertion of a missing build-time cross-check,
> so it's very low risk for the release.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> 
> Patch 4 was always RFC and not intended to go in as-was.
> 
> ~Andrew



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:11:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:11:45 +0000
X-Received: by 2002:a05:6512:239a:b0:52c:892e:2b26 with SMTP id 2adb3069b0e04-52ce063e3fdmr2932202e87.2.1719216701219;
        Mon, 24 Jun 2024 01:11:41 -0700 (PDT)
Message-ID: <f3cf0ea29f85d13a2d351b8ff1a50fd56e6253c1.camel@gmail.com>
Subject: Re: [PATCH for-4.19 v3 0/4] x86/shadow: Trace fixes and cleanup
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
Date: Mon, 24 Jun 2024 10:11:40 +0200
In-Reply-To: <20240621180658.92831-1-andrew.cooper3@citrix.com>
References: <20240621180658.92831-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 19:06 +0100, Andrew Cooper wrote:
> Patches 1-3 were my review feedback to Jan's patch 4.
>
> For 4.19.  Patch 4 (the bugfix) was Release-Acked after I posted the
> series (cleanup and rebased bugfix), which suggests you're happy with it
> in principle, but I can't treat that as an implicit release ack on the
> whole series.
>
> It's a tracing fix, so is minimal risk to the 4.19 release.
Sorry for the confusion; the first patches don't introduce any functional
changes, so it is okay with me to have them in the 4.19 release.

So for the whole patch series:
 Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
>
> v3:
>  * Retain the __packed attribute.
>
> Andrew Cooper (3):
>   x86/shadow: Rework trace_shadow_gen() into sh_trace_va()
>   x86/shadow: Introduce sh_trace_gl1e_va()
>   x86/shadow: Rework trace_shadow_emulate_other() as sh_trace_gfn_va()
>
> Jan Beulich (1):
>   x86/shadow: Don't leave trace record field uninitialized
>
>  xen/arch/x86/mm/shadow/multi.c | 144 ++++++++++++++-------------------
>  1 file changed, 60 insertions(+), 84 deletions(-)
>
>
> base-commit: 9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:13:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:13:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746247.1153240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLepH-0004oD-91; Mon, 24 Jun 2024 08:13:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746247.1153240; Mon, 24 Jun 2024 08:13:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLepH-0004o6-6G; Mon, 24 Jun 2024 08:13:23 +0000
Received: by outflank-mailman (input) for mailman id 746247;
 Mon, 24 Jun 2024 08:13:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLepF-0004nv-Lt
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:13:21 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9dc43726-3201-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 10:13:19 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ec52fbb50bso20208461fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:13:19 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819dd0cf7sm6075553a91.48.2024.06.24.01.13.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 01:13:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9dc43726-3201-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719216799; x=1719821599; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=2A1SW8jdK+8EUBTc5ozeq15HGdUgK/PBA7UnUl3OcKU=;
        b=KeuAHsgbJPUjp3pRL4pRLn9dV6U9jaEzRjM2rQ9ClotYrgKlWo0rLYRBbgziBVKFsy
         blYOo43oU4Da5qr1zEQEEIn7++hNFCcqqyHXNRgTV2+XSmJ77JR+EKqzXPiFV5kddHI3
         dzcHXvrmNLDS++ubvNjkcVUdlxKgkhsyM7KDbQL/xEW50ZpV2us5eEcsp/2x3d5piGox
         gaJJOPtY9KlYDXgYQRA/K/5OgIUPRDsnc1wDNJLz3dRBvNFhas2FWb3rY6/JG1E4lOZb
         DzyLq3Kxr5wwkvLQfyteMZ0ylWFzdZ6y7NghRrotmQjF/OAGZv+o/+rBhOOGkGcHZvI0
         VRmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719216799; x=1719821599;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=2A1SW8jdK+8EUBTc5ozeq15HGdUgK/PBA7UnUl3OcKU=;
        b=l085FeDH3jTx7gRKTtbqX9GRFCoFDrjNdW+KJduEpp2fcM7ZGW1Vm2L3rNfyDaFaFN
         lunMSE01WlDtlGFZ30dHmWW2J24CDF9qTLV3h7O1a0Znv++Lc3oiEOtKIhKD049ekUgk
         XpoTlMpXidRyPF5yppwXT+bhMEuRRyR0O5TA4iOINVIo8eFevPM8qXJip0cKXrAcn6aX
         zgzd2kegq+seEpQR+vh2kD8pIWepFPqKMzK1/B8n82XnqzW4X0aCqfQHYO4doBn92SCH
         Z8fJKgPTAfMABhTdyXAtGIV81rM6g9GyKHTUOqnl1CMagMirvw+7RtD1VxjQAgtT0wuX
         WS7Q==
X-Forwarded-Encrypted: i=1; AJvYcCUHP1633bb/E1vJoHVI9r9kDVFAfC1zomJkxyL8vKHlC4ftXWnhko+CDIxBN+KhpAcylH0pWimMQ0svT5wqKCt52KgtaU79IAI0aLVlW34=
X-Gm-Message-State: AOJu0Yy9qtpgegKXIj1BETYg3xgu1NNeMs4fYeU6K8jK9Ckfr+YvQn2j
	2T6k6BQpr3bdntBJYi2g/6dFdxT3bQK2j5lJEkjsJCaeASyXVcT5KT4fpHQJMn9XdQLGuX1TCe8
	=
X-Google-Smtp-Source: AGHT+IHiITpD/ci4SjO8F6vB6NS0sYSmq16RR0vfzBbjoDp3voUEknYoAilxcrVHrnLD+aopQT0QpQ==
X-Received: by 2002:a2e:9104:0:b0:2ec:57c7:c72c with SMTP id 38308e7fff4ca-2ec5b3d4979mr20616111fa.35.1719216799003;
        Mon, 24 Jun 2024 01:13:19 -0700 (PDT)
Message-ID: <a17f568f-5fe3-4331-9592-509e02ae84ea@suse.com>
Date: Mon, 24 Jun 2024 10:13:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
 <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <099beaac-ed1f-459b-8c2b-42b325f8e4a4@suse.com>
 <BL1PR12MB5849366A442BE6C4C192ABB0E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <3352736b-e7bc-40d0-ac1f-e58de188c93c@suse.com>
 <BL1PR12MB5849D6943FCB12613A844F33E7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849D6943FCB12613A844F33E7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 10:15, Chen, Jiqian wrote:
> On 2024/6/20 18:37, Jan Beulich wrote:
>> On 20.06.2024 12:23, Chen, Jiqian wrote:
>>> On 2024/6/20 15:43, Jan Beulich wrote:
>>>> On 20.06.2024 09:03, Chen, Jiqian wrote:
>>>>> On 2024/6/18 17:13, Jan Beulich wrote:
>>>>>> On 18.06.2024 10:10, Chen, Jiqian wrote:
>>>>>>> On 2024/6/17 23:10, Jan Beulich wrote:
>>>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>>>> --- a/tools/libs/light/libxl_pci.c
>>>>>>>>> +++ b/tools/libs/light/libxl_pci.c
>>>>>>>>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>>>>>>>>  #endif
>>>>>>>>>  }
>>>>>>>>>  
>>>>>>>>> +#define PCI_DEVID(bus, devfn)\
>>>>>>>>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>>>>>>>>> +
>>>>>>>>> +#define PCI_SBDF(seg, bus, devfn) \
>>>>>>>>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>>>>>>>
>>>>>>>> I'm not a maintainer of this file; if I were, I'd ask that for readability's
>>>>>>>> sake all excess parentheses be dropped from these.
>>>>>>> Isn't it a coding requirement to enclose each element in parentheses in the macro definition?
>>>>>>> It seems other files also do this. See tools/libs/light/libxl_internal.h
>>>>>>
>>>>>> As said, I'm not a maintainer of this code. Yet while I'm aware that libxl
>>>>>> has its own CODING_STYLE, I can't spot anything towards excessive use of
>>>>>> parentheses there.
>>>>> So, which parentheses do you think are excessive use?
>>>>
>>>> #define PCI_DEVID(bus, devfn)\
>>>>             (((uint16_t)(bus) << 8) | ((devfn) & 0xff))
>>>>
>>>> #define PCI_SBDF(seg, bus, devfn) \
>>>>             (((uint32_t)(seg) << 16) | PCI_DEVID(bus, devfn))
>>> Thanks, will change in next version.
>>>
>>>>
>>>>>>>>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>>>>>          goto out_no_irq;
>>>>>>>>>      }
>>>>>>>>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>>>>>>>>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>>>>>>>>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>>>>>>>> +        /*
>>>>>>>>> +         * Old kernel version may not support this function,
>>>>>>>>
>>>>>>>> Just kernel?
>>>>>>> Yes, xc_physdev_gsi_from_dev depends on the function being implemented on the Linux kernel side.
>>>>>>
>>>>>> Okay, and when the kernel supports it but the underlying hypervisor doesn't
>>>>>> support what the kernel wants to use in order to fulfill the request, all
>>>>> I don't know what things you mean that the hypervisor doesn't support,
>>>>> because xc_physdev_gsi_from_dev gets the gsi of a pcidev through its sbdf information;
>>>>> that relationship is known only in dom0, not in the Xen hypervisor.
>>>>>
>>>>>> is fine? (See also below for what may be needed in the hypervisor, even if
>>>>> You mean xc_physdev_map_pirq needs gsi?
>>>>
>>>> I'd put it slightly differently: You arrange for that function to now take a
>>>> GSI when the caller is PVH. But yes, the function, when used with
>>>> MAP_PIRQ_TYPE_GSI, clearly expects a GSI as input (see also below).
>>>>
>>>>>> this IOCTL would be satisfied by the kernel without needing to interact with
>>>>>> the hypervisor.)
>>>>>>
>>>>>>>>> +         * so if fail, keep using irq; if success, use gsi
>>>>>>>>> +         */
>>>>>>>>> +        if (gsi > 0) {
>>>>>>>>> +            irq = gsi;
>>>>>>>>
>>>>>>>> I'm still puzzled by this, when by now I think we've sufficiently clarified
>>>>>>>> that IRQs and GSIs use two distinct numbering spaces.
>>>>>>>>
>>>>>>>> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
>>>>>>>> the assumption that it'll fail. What if we decide to make the functionality
>>>>>>>> available there, too (if only for informational purposes, or for
>>>>>>>> consistency)? Suddenly your fallback logic wouldn't work anymore, and
>>>>>>>> you'd call ...
>>>>>>>>
>>>>>>>>> +        }
>>>>>>>>> +#endif
>>>>>>>>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>>>>>>>
>>>>>>>> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
>>>>>>>> you strictly want to avoid the call on PV Dom0.
>>>>>>>>
>>>>>>>> Also for PVH Dom0: I don't think I've seen changes to the hypercall
>>>>>>>> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
>>>>>>>> incoming GSI would need translating to IRQ somewhere? I can once again only
>>>>>>>> assume all your testing was done with IRQs whose numbers happened to match
>>>>>>>> their GSI numbers. (The difference, imo, would also need calling out in the
>>>>>>>> public header, where the respective interface struct(s) is/are defined.)
>>>>>>> I feel like you missed out on many of the previous discussions.
>>>>>>> Without my changes, the original code uses the irq (read from the file /sys/bus/pci/devices/<sbdf>/irq) to do xc_physdev_map_pirq,
>>>>>>> but xc_physdev_map_pirq requires passing in a gsi instead of an irq, so we need to use the gsi whether dom0 is PV or PVH; the original code is wrong.
>>>>>>> Just by chance, the irq value in the Linux kernel of a PV dom0 is equal to the gsi value, so there was no problem with the original PV passthrough.
>>>>>>> But that doesn't hold when using PVH, so passthrough failed.
>>>>>>> With my changes, I first get the gsi through the function xc_physdev_gsi_from_dev, and to be compatible with old kernel versions (where the ioctl
>>>>>>> IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented) I still need to be able to use the irq number, so I check the result:
>>>>>>> if gsi > 0, IOCTL_PRIVCMD_GSI_FROM_DEV is implemented and I can use the right value (the gsi); otherwise I keep using the wrong one (the irq).
>>>>>>
>>>>>> I understand all of this, to a (I think) sufficient degree at least. Yet what
>>>>>> you're effectively proposing (without explicitly saying so) is that e.g.
>>>>>> struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
>>>>>> number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
>>>>>> to make (simply because the caller may have no way to express things in pIRQ
>>>>>> terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
>>>>>> would be necessary. In fact that field is presently marked as "IN or OUT";
>>>>>> when re-purposed to take a GSI on input, it may end up being necessary to pass
>>>>>> back the pIRQ (in the subject domain's numbering space). Or alternatively it
>>>>>> may be necessary to add yet another sub-function so the GSI can be translated
>>>>>> to the corresponding pIRQ for the domain that's going to use the IRQ, for that
>>>>>> then to be passed into PHYSDEVOP_map_pirq.
>>>>> If I understood correctly, your concerns about this patch are two:
>>>>> First, when dom0 is PV, I should not use xc_physdev_gsi_from_dev to get gsi to do xc_physdev_map_pirq, I should keep the original code that use irq.
>>>>
>>>> Yes.
>>> OK, I can change to do this.
>>> But I still have a concern:
>>> Although, without my changes, passthrough can work now when dom0 is PV,
>>> you also agree that xc_physdev_map_pirq, when used with MAP_PIRQ_TYPE_GSI, expects a GSI as input.
>>> Isn't it wrong for PV dom0 to pass the irq in? Why don't we use the gsi as it should be used, since we have a function to get it now?
>>
>> Indeed this and ...
>>
>>>>> Second, when dom0 is PVH, I get the gsi, but I should not pass gsi as the fourth parameter of xc_physdev_map_pirq, I should create a new local parameter pirq=-1, and pass it in.
>>>>
>>>> I think so, yes. You also may need to record the output value, so you can later
>>>> use it for unmap. xc_physdev_map_pirq() may also need adjusting, as right now
>>>> it wouldn't put a negative incoming *pirq into the .pirq field. 
>>> xc_physdev_map_pirq's logic is if we pass a negative in, it sets *pirq to index(gsi).
>>> Is its logic right? If not, how do we change it?
>>
>> ... this matches ...
>>
>>>> I actually wonder if that's even correct right now, i.e. independent of your change.
>>
>> ... the remark here.
> So, what should I do next step?
> If assume the logic of function xc_physdev_map_pirq and PHYSDEVOP_map_pirq is right,
> I think what I did now is right, both PV and PVH dom0 should pass gsi into xc_physdev_map_pirq.

It may sound unfriendly, and I'm willing to accept other maintainers
disagreeing with me, but I think before we think of any extensions of
what we have here, we want to address any issues with what we have.
Since you're working in the area, and since getting the additions right
requires properly understanding how things work (and where things may
not work correctly right now), I view you as being in the best position
to see about doing that (imo) prereq step.

> By the way, I found that xc_physdev_map_pirq hasn't supported a negative pirq since your commit 934a5253d932b6f67fe40fc48975a2b0117e4cce; do you remember why?

Counter question: What is a negative pIRQ?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:14:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:14:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746253.1153249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeq4-0005Jz-GM; Mon, 24 Jun 2024 08:14:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746253.1153249; Mon, 24 Jun 2024 08:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLeq4-0005Js-Du; Mon, 24 Jun 2024 08:14:12 +0000
Received: by outflank-mailman (input) for mailman id 746253;
 Mon, 24 Jun 2024 08:14:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLeq2-0005H5-N0
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:14:10 +0000
Received: from mail-lf1-x129.google.com (mail-lf1-x129.google.com
 [2a00:1450:4864:20::129])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb4f7687-3201-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 10:14:09 +0200 (CEST)
Received: by mail-lf1-x129.google.com with SMTP id
 2adb3069b0e04-52cd6784aa4so2866927e87.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:14:09 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52cdf448876sm567043e87.237.2024.06.24.01.14.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 01:14:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb4f7687-3201-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719216849; x=1719821649; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=KRvuxy5PS3VQnoUzMRuCWmN4qlLW1NkpWhVLeA8R8+g=;
        b=UKiXcoABnZIdqKomR1TxJk9+NSvRYVSWoeVpbrg1a8z7cSp4Ipcez7Y+PdR5/WqkCb
         XKv6Uo8RPVBU/KVvbD3uPoR1PP2h89S5DiF4u36E2kYDKsFOwk4vkGI23bkyXXVjEcH1
         tw6SN68NWZbJeOdhAO+3vtVVIzg7EyQFZDQ/JmIFFPD93eEJX/E0MOGBPJh7iyq3reRB
         0GPjcB7v90Xjg7xhXqLzdAbpE8RtiLG3YG7rysC+irw4q2bfJx3TsWqr4YmhX73V+EyW
         S34OSP0kcDyN/UgP+Owg6ShawTbt+4AkzoojIuDX3z9VOob06V9atztvfufWKkkEq/27
         uIUA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719216849; x=1719821649;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=KRvuxy5PS3VQnoUzMRuCWmN4qlLW1NkpWhVLeA8R8+g=;
        b=VLI2F72CJMMWGBuzBSDKcWkMdd1I+unABiEQDnFOcF7+fDqh6zP07Ch2TFIgfpOrJl
         d1XzfCR1wlwAcydUx1megiMdfmCEr3mbpCS3Mz5Z+4Q7v0YPxIJmb1rtqqvWCABhnP++
         caUKh8ukFMC/q5ydqtbLDF+ZBqqutZVlE/qlks28oJAqU8OtjklknaN6w8LWm45RD1JG
         Q6TLyE0195nV0JZPoRXVSPWaSo0F1amGYnZkeyUHt+jTxaYwLUDLluoXHvshQL9RDW/b
         7RC2q9yP5jL1mrlvOzl3sojgruavOZvz4ViUq8M+Hj/ncdlwQIscnA/C4h2Gjc+hD7q6
         zyYQ==
X-Forwarded-Encrypted: i=1; AJvYcCWptk3QWKyRCrwppjZpGMtNoT28xB6bTj4I7JL0DVfQTbkUj3PsRqo00N4syF/kHKVLFJLIzXhrA+uXxXZYFmQzy/7XHzrqlV1RfNIVdD8=
X-Gm-Message-State: AOJu0YyLp+UqIKMlay7zJi1H+g4K4RXFhSf2KYBWH7HHjUM3l8XFl4RX
	3tyfhlDhf0sc2uhVf5cFK+xJOtzeAl4fmg64xBJzQ/X6Q5SpN/6n
X-Google-Smtp-Source: AGHT+IHhD4eMDwFLrabWsdxKjRtpE7AH1ZLRCbp3lw+KVOYjBFktFBtEm8K85/YKEYqwypM7iEAzug==
X-Received: by 2002:a05:6512:4888:b0:52c:a465:c61f with SMTP id 2adb3069b0e04-52ce18647cemr1931218e87.56.1719216848469;
        Mon, 24 Jun 2024 01:14:08 -0700 (PDT)
Message-ID: <83247825a45be9effa3dde303ed03942ec2a839c.camel@gmail.com>
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Demi Marie Obenour <demi@invisiblethingslab.com>, Anthony PERARD
	 <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, Marek
 =?ISO-8859-1?Q?Marczykowski-G=F3recki?=
	 <marmarek@invisiblethingslab.com>
Date: Mon, 24 Jun 2024 10:14:07 +0200
In-Reply-To: <20240621161656.63576-1-andrew.cooper3@citrix.com>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 17:16 +0100, Andrew Cooper wrote:
> `xl devd` has been observed leaking /var/log/xldevd.log into
> children.
>
> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on
> newfd, so after setting up stdout/stderr, it's only the logfile fd which
> will close on exec().
>
> Link: https://github.com/QubesOS/qubes-issues/issues/8292
> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> ---
> CC: Anthony PERARD <anthony@xenproject.org>
> CC: Juergen Gross <jgross@suse.com>
> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>
> Also entirely speculative based on the QubesOS ticket.
>
> v2:
>  * Extend the commit message to explain why stdout/stderr aren't closed
>    by this change
>
> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
> ---
>  tools/xl/xl_utils.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
> index 17489d182954..060186db3a59 100644
> --- a/tools/xl/xl_utils.c
> +++ b/tools/xl/xl_utils.c
> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
>          exit(-1);
>      }
>  
> -    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
> +    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));
>      free(fullname);
>      assert(logfile >= 3);
>  



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:17:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:17:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746266.1153259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLet3-0005tn-Uj; Mon, 24 Jun 2024 08:17:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746266.1153259; Mon, 24 Jun 2024 08:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLet3-0005tg-S3; Mon, 24 Jun 2024 08:17:17 +0000
Received: by outflank-mailman (input) for mailman id 746266;
 Mon, 24 Jun 2024 08:17:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLet2-0005ta-1I
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:17:16 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 297a0610-3202-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 10:17:14 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ec1ac1aed2so48859071fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:17:14 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f02e3sm57279705ad.62.2024.06.24.01.17.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 01:17:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 297a0610-3202-11ef-b4bb-af5377834399
X-Received: by 2002:a2e:9642:0:b0:2ec:595a:2d1f with SMTP id 38308e7fff4ca-2ec5b2844fcmr23152871fa.2.1719217033405;
        Mon, 24 Jun 2024 01:17:13 -0700 (PDT)
Message-ID: <1ffd019d-6bab-49d1-932c-7b5aee245f32@suse.com>
Date: Mon, 24 Jun 2024 10:17:02 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Anthony PERARD <anthony@xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
 <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com>
 <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 10:20, Chen, Jiqian wrote:
> On 2024/6/20 18:42, Jan Beulich wrote:
>> On 20.06.2024 11:40, Chen, Jiqian wrote:
>>> On 2024/6/18 17:23, Jan Beulich wrote:
>>>> On 18.06.2024 10:23, Chen, Jiqian wrote:
>>>>> On 2024/6/17 23:32, Jan Beulich wrote:
>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>>>              rc = ERROR_FAIL;
>>>>>>>              goto out;
>>>>>>>          }
>>>>>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
>>>>>>> +#ifdef CONFIG_X86
>>>>>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
>>>>>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>>>>>>
>>>>>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF, but
>>>>>> I didn't check if that can be used with the underlying hypercall(s). Otherwise
>>> Since commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf, DOMID_SELF has not been allowed for XEN_DOMCTL_getdomaininfo.
>>> And XEN_DOMCTL_getdomaininfo now looks the domain up via rcu_lock_domain_by_id().
>>>
>>>>>> you want to pass the actual domid of the local domain here.
>>> What is the local domain here?
>>
>> The domain your code is running in.
>>
>>> What is the method for me to get its domid?
>>
>> I hope there's an available function in one of the libraries to do that.
> I didn't find a related function.
> Hi Anthony, do you know?
> 
>> But I wouldn't even know what to look for; that's a question to (primarily)
>> Anthony then, who sadly continues to be our only tool stack maintainer.
>>
>> Alternatively we could maybe enable XEN_DOMCTL_getdomaininfo to permit
>> DOMID_SELF.
> It hasn't permitted DOMID_SELF since the commit below. Would permitting DOMID_SELF reintroduce the same problem?

To answer this, all respective callers would need auditing. However, ...

> commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf
> Author: kfraser@localhost.localdomain <kfraser@localhost.localdomain>
> Date:   Tue Aug 14 09:56:46 2007 +0100
> 
>     xen: Do not accept DOMID_SELF as input to DOMCTL_getdomaininfo.
>     This was screwing up callers that loop on getdomaininfo(), if there
>     was a domain with domid DOMID_FIRST_RESERVED-1 (== DOMID_SELF-1).
>     They would see DOMID_SELF-1, then look up DOMID_SELF, which has domid
>     0 of course, and then start their domain-finding loop all over again!
>     Found by Kouya Shimura <kouya@jp.fujitsu.com>. Thanks!
>     Signed-off-by: Keir Fraser <keir@xensource.com>

... I view this as a pretty odd justification for the change, when imo the
bogus loops should instead have been adjusted.

Jan

> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
> index 09a1e84d98e0..5d29667b7c3d 100644
> --- a/xen/common/domctl.c
> +++ b/xen/common/domctl.c
> @@ -463,19 +463,13 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
>      case XEN_DOMCTL_getdomaininfo:
>      {
>          struct domain *d;
> -        domid_t dom;
> -
> -        dom = op->domain;
> -        if ( dom == DOMID_SELF )
> -            dom = current->domain->domain_id;
> +        domid_t dom = op->domain;
> 
>          rcu_read_lock(&domlist_read_lock);
> 
>          for_each_domain ( d )
> -        {
>              if ( d->domain_id >= dom )
>                  break;
> -        }
> 
>          if ( d == NULL )
>          {



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:26:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:26:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746275.1153270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLf1p-0000Fj-TB; Mon, 24 Jun 2024 08:26:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746275.1153270; Mon, 24 Jun 2024 08:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLf1p-0000Fc-Oj; Mon, 24 Jun 2024 08:26:21 +0000
Received: by outflank-mailman (input) for mailman id 746275;
 Mon, 24 Jun 2024 08:26:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mlm3=N2=intel.com=oliver.sang@srs-se1.protection.inumbo.net>)
 id 1sLf1o-0000FW-4u
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:26:21 +0000
Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6a4cd16d-3203-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 10:26:14 +0200 (CEST)
Received: from fmviesa006.fm.intel.com ([10.60.135.146])
 by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 Jun 2024 01:26:11 -0700
Received: from orsmsx603.amr.corp.intel.com ([10.22.229.16])
 by fmviesa006.fm.intel.com with ESMTP/TLS/AES256-GCM-SHA384;
 24 Jun 2024 01:26:10 -0700
Received: from orsmsx611.amr.corp.intel.com (10.22.229.24) by
 ORSMSX603.amr.corp.intel.com (10.22.229.16) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.39; Mon, 24 Jun 2024 01:26:09 -0700
Received: from orsmsx612.amr.corp.intel.com (10.22.229.25) by
 ORSMSX611.amr.corp.intel.com (10.22.229.24) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.39; Mon, 24 Jun 2024 01:26:08 -0700
Received: from ORSEDG601.ED.cps.intel.com (10.7.248.6) by
 orsmsx612.amr.corp.intel.com (10.22.229.25) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.39 via Frontend Transport; Mon, 24 Jun 2024 01:26:08 -0700
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (104.47.57.168)
 by edgegateway.intel.com (134.134.137.102) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.39; Mon, 24 Jun 2024 01:26:08 -0700
Received: from LV3PR11MB8603.namprd11.prod.outlook.com (2603:10b6:408:1b6::9)
 by CH3PR11MB8316.namprd11.prod.outlook.com (2603:10b6:610:17b::6)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.19; Mon, 24 Jun
 2024 08:26:05 +0000
Received: from LV3PR11MB8603.namprd11.prod.outlook.com
 ([fe80::4622:29cf:32b:7e5c]) by LV3PR11MB8603.namprd11.prod.outlook.com
 ([fe80::4622:29cf:32b:7e5c%2]) with mapi id 15.20.7698.025; Mon, 24 Jun 2024
 08:26:05 +0000
X-Inumbo-ID: 6a4cd16d-3203-11ef-90a3-e314d9c70b13
Date: Mon, 24 Jun 2024 16:25:48 +0800
From: kernel test robot <oliver.sang@intel.com>
To: Christoph Hellwig <hch@lst.de>
CC: <oe-lkp@lists.linux.dev>, <lkp@intel.com>, Jens Axboe <axboe@kernel.dk>,
	Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	<linux-m68k@lists.linux-m68k.org>, <linux-um@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>, <linux-block@vger.kernel.org>,
	<drbd-dev@lists.linbit.com>, <nbd@other.debian.org>,
	<linuxppc-dev@lists.ozlabs.org>, <ceph-devel@vger.kernel.org>,
	<virtualization@lists.linux.dev>, <xen-devel@lists.xenproject.org>,
	<linux-bcache@vger.kernel.org>, <dm-devel@lists.linux.dev>,
	<linux-raid@vger.kernel.org>, <linux-mmc@vger.kernel.org>,
	<linux-mtd@lists.infradead.org>, <nvdimm@lists.linux.dev>,
	<linux-nvme@lists.infradead.org>, <linux-s390@vger.kernel.org>,
	<linux-scsi@vger.kernel.org>, <ying.huang@intel.com>, <feng.tang@intel.com>,
	<fengwei.yin@intel.com>, <oliver.sang@intel.com>
Subject: [axboe-block:for-next] [block]  bd4a633b6f:  fsmark.files_per_sec
 -64.5% regression
Message-ID: <202406241546.6bbd44a7-oliver.sang@intel.com>
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: SI2PR06CA0013.apcprd06.prod.outlook.com
 (2603:1096:4:186::18) To LV3PR11MB8603.namprd11.prod.outlook.com
 (2603:10b6:408:1b6::9)
MIME-Version: 1.0
X-OriginatorOrg: intel.com



Hello,

kernel test robot noticed a -64.5% regression of fsmark.files_per_sec on:


commit: bd4a633b6f7c3c6b6ebc1a07317643270e751a94 ("block: move the nonrot flag to queue_limits")
https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-next

testcase: fsmark
test machine: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 64G memory
parameters:

	iterations: 1x
	nr_threads: 32t
	disk: 1SSD
	fs: btrfs
	fs2: nfsv4
	filesize: 9B
	test_size: 400M
	sync_method: fsyncBeforeClose
	nr_directories: 16d
	nr_files_per_directory: 256fpd
	cpufreq_governor: performance


In addition, the commit also has a significant impact on the following tests:

+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fsmark: fsmark.files_per_sec -54.0% regression                                                 |
| test machine     | 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 64G memory |
| test parameters  | cpufreq_governor=performance                                                                   |
|                  | disk=1SSD                                                                                      |
|                  | filesize=8K                                                                                    |
|                  | fs2=nfsv4                                                                                      |
|                  | fs=btrfs                                                                                       |
|                  | iterations=1x                                                                                  |
|                  | nr_directories=16d                                                                             |
|                  | nr_files_per_directory=256fpd                                                                  |
|                  | nr_threads=32t                                                                                 |
|                  | sync_method=fsyncBeforeClose                                                                   |
|                  | test_size=400M                                                                                 |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.ssd_btrfs_DWSL_4_directio.works/sec -75.6% regression                           |
| test machine     | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory  |
| test parameters  | cpufreq_governor=performance                                                                   |
|                  | directio=directio                                                                              |
|                  | disk=1SSD                                                                                      |
|                  | fstype=btrfs                                                                                   |
|                  | media=ssd                                                                                      |
|                  | test=DWSL                                                                                      |
|                  | thread_nr=4                                                                                    |
+------------------+------------------------------------------------------------------------------------------------+
| testcase: change | fxmark: fxmark.ssd_btrfs_MWUM_4_directio.works/sec -45.9% regression                           |
| test machine     | 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory  |
| test parameters  | cpufreq_governor=performance                                                                   |
|                  | directio=directio                                                                              |
|                  | disk=1SSD                                                                                      |
|                  | fstype=btrfs                                                                                   |
|                  | media=ssd                                                                                      |
|                  | test=MWUM                                                                                      |
|                  | thread_nr=4                                                                                    |
+------------------+------------------------------------------------------------------------------------------------+


If you fix the issue in a separate patch/commit (i.e. not just a new version of
the same patch/commit), kindly add the following tags:
| Reported-by: kernel test robot <oliver.sang@intel.com>
| Closes: https://lore.kernel.org/oe-lkp/202406241546.6bbd44a7-oliver.sang@intel.com


Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240624/202406241546.6bbd44a7-oliver.sang@intel.com

=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
  gcc-13/performance/1SSD/9B/nfsv4/btrfs/1x/x86_64-rhel-8.3/16d/256fpd/32t/debian-12-x86_64-20240206.cgz/fsyncBeforeClose/lkp-ivb-2ep2/400M/fsmark

commit: 
  1122c0c1cc ("block: move cache control settings out of queue->flags")
  bd4a633b6f ("block: move the nonrot flag to queue_limits")

1122c0c1cc71f740 bd4a633b6f7c3c6b6ebc1a07317 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
 1.189e+09          +176.5%  3.288e+09        cpuidle..time
   4069531           +40.7%    5726734        cpuidle..usage
    107.82           +41.5%     152.62        uptime.boot
      4541           +31.7%       5979        uptime.idle
      1347  34%   +7052.8%      96367  17%  numa-meminfo.node1.Active(anon)
      9671 129%    +152.0%      24367  58%  numa-meminfo.node1.Mapped
      1517  33%   +7000.6%     107767  17%  numa-meminfo.node1.Shmem
     70.27            -3.0%      68.16        iostat.cpu.idle
     22.10           +21.9%      26.93        iostat.cpu.iowait
      5.28   4%     -45.2%       2.89        iostat.cpu.system
      2.35           -14.1%       2.02        iostat.cpu.user
    337.03  34%   +7041.0%      24067  17%  numa-vmstat.node1.nr_active_anon
      2492 127%    +150.1%       6233  58%  numa-vmstat.node1.nr_mapped
    379.71  33%   +6989.5%      26919  16%  numa-vmstat.node1.nr_shmem
    337.03  34%   +7041.0%      24067  17%  numa-vmstat.node1.nr_zone_active_anon
     23.59            +4.1       27.67        mpstat.cpu.all.iowait%
      0.56            -0.3        0.27        mpstat.cpu.all.soft%
      4.71   5%      -2.3        2.37        mpstat.cpu.all.sys%
      2.35            -0.3        2.02        mpstat.cpu.all.usr%
     11.14   2%     -25.0%       8.35        mpstat.max_utilization_pct
      7.33  71%    +297.7%      29.17  11%  perf-c2c.DRAM.local
     12.83  13%   +3977.9%     523.33   9%  perf-c2c.DRAM.remote
      1.83  37%   +3063.6%      58.00  16%  perf-c2c.HIT.remote
      6.67  47%   +7842.5%     529.50        perf-c2c.HITM.local
      8.50  17%   +5262.7%     455.83   8%  perf-c2c.HITM.remote
    113674           -75.2%      28217        vmstat.io.bo
     12.73   6%     +21.5%      15.47   3%  vmstat.procs.b
      4.93   7%     -37.2%       3.10        vmstat.procs.r
    222679           -59.0%      91356        vmstat.system.cs
     53879           -15.5%      45537        vmstat.system.in
 1.347e+08          +150.3%  3.371e+08        fsmark.app_overhead
      4148           -64.5%       1471        fsmark.files_per_sec
     25.21          +179.6%      70.48        fsmark.time.elapsed_time
     25.21          +179.6%      70.48        fsmark.time.elapsed_time.max
     46.67           -60.7%      18.33   2%  fsmark.time.percent_of_cpu_this_job_got
    580573            +2.3%     593720        fsmark.time.voluntary_context_switches
      0.54 142%      +1.8        2.36  45%  perf-profile.calltrace.cycles-pp.do_anonymous_page.__handle_mm_fault.handle_mm_fault.do_user_addr_fault.exc_page_fault
      2.30  26%      +1.9        4.24  17%  perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record.run_builtin
      2.30  26%      +1.9        4.24  17%  perf-profile.children.cycles-pp.perf_mmap__push
      2.00  52%      +2.0        3.98  21%  perf-profile.children.cycles-pp.write
      1.76  55%      +2.0        3.75  25%  perf-profile.children.cycles-pp.writen
      0.54 142%      +2.1        2.60  38%  perf-profile.children.cycles-pp.do_anonymous_page
      2798         +3767.9%     108260   2%  meminfo.Active(anon)
    293199           -34.5%     192046        meminfo.Active(file)
    224570   2%     +15.3%     258927   4%  meminfo.AnonHugePages
   1029143           +12.2%    1154431        meminfo.Committed_AS
     13542   2%     -14.4%      11598   2%  meminfo.Dirty
     32982           +49.1%      49184   4%  meminfo.Mapped
      3406         +3452.4%     121025        meminfo.Shmem
    699.76         +3768.6%      27071   2%  proc-vmstat.nr_active_anon
     73394           -34.5%      48087        proc-vmstat.nr_active_file
    599442           -23.8%     456690        proc-vmstat.nr_dirtied
      3372   2%     -14.0%       2899   2%  proc-vmstat.nr_dirty
   1054218            -1.1%    1042135        proc-vmstat.nr_file_pages
    192999            +2.2%     197338        proc-vmstat.nr_inactive_anon
    195427            -8.3%     179255        proc-vmstat.nr_inactive_file
      8470           +48.6%      12584   4%  proc-vmstat.nr_mapped
    852.97         +3451.5%      30293        proc-vmstat.nr_shmem
     55125            +2.4%      56454        proc-vmstat.nr_slab_reclaimable
    598942           -24.0%     455415        proc-vmstat.nr_written
    699.76         +3768.6%      27071   2%  proc-vmstat.nr_zone_active_anon
     73394           -34.5%      48087        proc-vmstat.nr_zone_active_file
    192999            +2.2%     197338        proc-vmstat.nr_zone_inactive_anon
    195427            -8.3%     179255        proc-vmstat.nr_zone_inactive_file
      2992   2%      -7.0%       2783   2%  proc-vmstat.nr_zone_write_pending
   1701597            +6.4%    1810757        proc-vmstat.numa_hit
   1651924            +6.6%    1760989        proc-vmstat.numa_local
    145673            +4.1%     151618        proc-vmstat.pgactivate
   2171880            +6.0%    2302217        proc-vmstat.pgalloc_normal
    173881   3%    +101.2%     349830   2%  proc-vmstat.pgfault
   1649733            +9.4%    1804327        proc-vmstat.pgfree
   3257714           -36.4%    2070947        proc-vmstat.pgpgout
      8901   2%     +78.8%      15912  18%  proc-vmstat.pgreuse
      1779  15%     +94.2%       3455  21%  sched_debug.cfs_rq:/.avg_vruntime.avg
    233.79  29%    +200.6%     702.80  30%  sched_debug.cfs_rq:/.avg_vruntime.min
      5950  51%    +121.6%      13187  54%  sched_debug.cfs_rq:/.load.avg
     59.42  13%     -41.0%      35.07  22%  sched_debug.cfs_rq:/.load_avg.avg
      1779  15%     +94.2%       3455  21%  sched_debug.cfs_rq:/.min_vruntime.avg
    233.79  29%    +200.6%     702.80  30%  sched_debug.cfs_rq:/.min_vruntime.min
     38.83  16%     -60.1%      15.48  42%  sched_debug.cfs_rq:/.removed.load_avg.avg
    104.34  10%     -44.4%      58.00  29%  sched_debug.cfs_rq:/.removed.load_avg.stddev
      1198  12%    +919.1%      12211  93%  sched_debug.cpu.avg_idle.min
     25961   9%     +81.8%      47185  21%  sched_debug.cpu.clock.avg
     25962   9%     +81.7%      47186  21%  sched_debug.cpu.clock.max
     25960   9%     +81.8%      47184  21%  sched_debug.cpu.clock.min
     25888   9%     +81.6%      47021  21%  sched_debug.cpu.clock_task.avg
     25956   9%     +81.6%      47137  21%  sched_debug.cpu.clock_task.max
     24234   9%     +84.7%      44760  21%  sched_debug.cpu.clock_task.min
    737.62  10%    +100.2%       1476  21%  sched_debug.cpu.curr->pid.max
    727.72  10%   +3446.7%      25810  20%  sched_debug.cpu.nr_switches.avg
      3502  18%    +922.9%      35827  21%  sched_debug.cpu.nr_switches.max
    111.99  25%   +6551.9%       7449  55%  sched_debug.cpu.nr_switches.min
    703.85  11%    +669.7%       5417  17%  sched_debug.cpu.nr_switches.stddev
      0.01  79%   +2728.2%       0.18  19%  sched_debug.cpu.nr_uninterruptible.avg
      5.50  28%    +186.2%      15.74  28%  sched_debug.cpu.nr_uninterruptible.max
     -5.24         +1329.2%     -74.83        sched_debug.cpu.nr_uninterruptible.min
      1.82  17%    +622.8%      13.16  29%  sched_debug.cpu.nr_uninterruptible.stddev
     81285           -42.0%      47184  21%  sched_debug.cpu_clk
 4.295e+09           -57.5%  1.825e+09  21%  sched_debug.jiffies
     80731           -41.8%      46949  21%  sched_debug.ktime
     81773           -42.0%      47392  21%  sched_debug.sched_clk
      1.00           -57.5%       0.42  21%  sched_debug.sched_clock_stable()
      3.00           -57.5%       1.28  21%  sched_debug.sysctl_sched.sysctl_sched_base_slice
   6237751           -57.5%    2651044  21%  sched_debug.sysctl_sched.sysctl_sched_features
      1.00           -57.5%       0.42  21%  sched_debug.sysctl_sched.sysctl_sched_tunable_scaling
      1.29   6%     +24.0%       1.60   2%  perf-stat.i.MPKI
 1.822e+09           -33.5%  1.211e+09        perf-stat.i.branch-instructions
      5.38            +1.1        6.49        perf-stat.i.branch-miss-rate%
  97326318           -19.2%   78633918        perf-stat.i.branch-misses
      6.81   5%      +2.3        9.12        perf-stat.i.cache-miss-rate%
  11262919   6%     -38.0%    6977426        perf-stat.i.cache-misses
 1.747e+08           -55.8%   77188971        perf-stat.i.cache-references
    247420           -62.0%      94049        perf-stat.i.context-switches
      1.55            +5.5%       1.64        perf-stat.i.cpi
  1.37e+10           -41.0%  8.085e+09        perf-stat.i.cpu-cycles
      1644   2%     -54.4%     750.14        perf-stat.i.cpu-migrations
 8.908e+09           -33.7%  5.904e+09        perf-stat.i.instructions
      0.65            +5.3%       0.69        perf-stat.i.ipc
      5.22           -61.9%       1.99        perf-stat.i.metric.K/sec
      5046   4%     -14.7%       4303   3%  perf-stat.i.minor-faults
      5046   4%     -14.7%       4304   3%  perf-stat.i.page-faults
      5.34            +1.2        6.49        perf-stat.overall.branch-miss-rate%
      6.44   6%      +2.6        9.04        perf-stat.overall.cache-miss-rate%
      1.54           -11.0%       1.37        perf-stat.overall.cpi
      0.65           +12.3%       0.73        perf-stat.overall.ipc
 1.754e+09           -31.9%  1.195e+09        perf-stat.ps.branch-instructions
  93653136           -17.2%   77575731        perf-stat.ps.branch-misses
  10836125   6%     -36.5%    6879585        perf-stat.ps.cache-misses
 1.681e+08           -54.7%   76107773        perf-stat.ps.cache-references
    238085           -61.1%      92729        perf-stat.ps.context-switches
     46200            +2.4%      47330        perf-stat.ps.cpu-clock
 1.318e+10           -39.5%  7.973e+09        perf-stat.ps.cpu-cycles
      1582   2%     -53.3%     739.58        perf-stat.ps.cpu-migrations
 8.571e+09           -32.0%  5.824e+09        perf-stat.ps.instructions
      4847   4%     -12.5%       4240   3%  perf-stat.ps.minor-faults
      4847   4%     -12.5%       4241   3%  perf-stat.ps.page-faults
     46200            +2.4%      47330        perf-stat.ps.task-clock
 2.282e+11           +83.0%  4.175e+11        perf-stat.total.instructions


***************************************************************************************************
lkp-ivb-2ep2: 48 threads 2 sockets Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz (Ivy Bridge-EP) with 64G memory
=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
  gcc-13/performance/1SSD/8K/nfsv4/btrfs/1x/x86_64-rhel-8.3/16d/256fpd/32t/debian-12-x86_64-20240206.cgz/fsyncBeforeClose/lkp-ivb-2ep2/400M/fsmark

commit: 
  1122c0c1cc ("block: move cache control settings out of queue->flags")
  bd4a633b6f ("block: move the nonrot flag to queue_limits")

1122c0c1cc71f740 bd4a633b6f7c3c6b6ebc1a07317 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
 8.362e+08          +108.9%  1.747e+09        cpuidle..time
   2413903           +25.3%    3024126        cpuidle..usage
     99.38           +19.6%     118.85        uptime.boot
      4236           +14.7%       4859        uptime.idle
     24.94   2%      +9.5%      27.30        iostat.cpu.iowait
      4.47           -55.7%       1.98   2%  iostat.cpu.system
      2.36            +4.1%       2.46        iostat.cpu.user
      0.27   2%      -0.0        0.25   2%  mpstat.cpu.all.irq%
      0.48   4%      -0.2        0.30        mpstat.cpu.all.soft%
      3.99   3%      -2.5        1.46   4%  mpstat.cpu.all.sys%
     10.39           -26.3%       7.65   2%  mpstat.max_utilization_pct
    774882  12%     -24.8%     582410  19%  numa-numastat.node0.local_node
    808526  12%     -25.3%     604191  17%  numa-numastat.node0.numa_hit
    349212  26%     +54.4%     539297  21%  numa-numastat.node1.local_node
    365300  28%     +55.3%     567307  18%  numa-numastat.node1.numa_hit
    123433           -63.1%      45597        vmstat.io.bo
      5.09  14%     -27.9%       3.67  11%  vmstat.procs.r
    176044           -50.8%      86694        vmstat.system.cs
     52141           -18.2%      42673        vmstat.system.in
  74993100   3%    +108.8%  1.566e+08   5%  fsmark.app_overhead
      3060           -54.0%       1408        fsmark.files_per_sec
     17.10          +115.2%      36.80        fsmark.time.elapsed_time
     17.10          +115.2%      36.80        fsmark.time.elapsed_time.max
     36.67   2%     -48.2%      19.00        fsmark.time.percent_of_cpu_this_job_got
    254561   2%     -24.9%     191102        meminfo.Active
      2679          +314.6%      11108  19%  meminfo.Active(anon)
    251882   2%     -28.5%     179993        meminfo.Active(file)
    148758           +12.9%     167936   2%  meminfo.AnonHugePages
     15112           -23.0%      11635   2%  meminfo.Dirty
     32981           +77.1%      58414   7%  meminfo.Mapped
      3295          +838.4%      30926        meminfo.Shmem
    200691  18%     -48.8%     102702  28%  numa-meminfo.node0.Active
      1010  28%    +159.8%       2626  80%  numa-meminfo.node0.Active(anon)
    199680  18%     -49.9%     100076  30%  numa-meminfo.node0.Active(file)
     11897  19%     -48.2%       6167  30%  numa-meminfo.node0.Dirty
    405731  10%     -26.9%     296744  15%  numa-meminfo.node0.Inactive(file)
      1370  16%    +183.3%       3882 101%  numa-meminfo.node0.Shmem
      1668  17%    +409.3%       8495  21%  numa-meminfo.node1.Active(anon)
      1929  11%   +1307.1%      27148  15%  numa-meminfo.node1.Shmem
    252.78  28%    +160.3%     657.99  80%  numa-vmstat.node0.nr_active_anon
     50066  18%     -50.5%      24802  30%  numa-vmstat.node0.nr_active_file
    330634  12%     -37.2%     207678  20%  numa-vmstat.node0.nr_dirtied
      2979  19%     -48.1%       1545  30%  numa-vmstat.node0.nr_dirty
    101630  10%     -27.2%      74010  15%  numa-vmstat.node0.nr_inactive_file
      6210  44%     +65.8%      10299  12%  numa-vmstat.node0.nr_mapped
    342.90  16%    +186.3%     981.88 103%  numa-vmstat.node0.nr_shmem
    324012  12%     -36.1%     206943  19%  numa-vmstat.node0.nr_written
    252.78  28%    +160.3%     657.99  80%  numa-vmstat.node0.nr_zone_active_anon
     50066  18%     -50.5%      24802  30%  numa-vmstat.node0.nr_zone_active_file
    101629  10%     -27.2%      74010  15%  numa-vmstat.node0.nr_zone_inactive_file
      2718  17%     -47.0%       1439  27%  numa-vmstat.node0.nr_zone_write_pending
    808122  12%     -25.3%     603704  17%  numa-vmstat.node0.numa_hit
    774479  12%     -24.9%     581923  19%  numa-vmstat.node0.numa_local
    417.60  17%    +309.6%       1710  48%  numa-vmstat.node1.nr_active_anon
    482.47  11%   +1244.3%       6485  16%  numa-vmstat.node1.nr_shmem
    417.60  17%    +309.6%       1710  48%  numa-vmstat.node1.nr_zone_active_anon
    364919  27%     +55.1%     566146  18%  numa-vmstat.node1.numa_hit
    348831  26%     +54.3%     538135  21%  numa-vmstat.node1.numa_local
    669.84          +252.2%       2359  41%  proc-vmstat.nr_active_anon
     63068   2%     -29.2%      44622        proc-vmstat.nr_active_file
    177070            +1.1%     178960        proc-vmstat.nr_anon_pages
    465975           -18.7%     378719        proc-vmstat.nr_dirtied
      3780           -22.5%       2928   2%  proc-vmstat.nr_dirty
   1004574            -2.7%     977058        proc-vmstat.nr_file_pages
    177173            +3.8%     183958        proc-vmstat.nr_inactive_anon
    155978            -9.9%     140569        proc-vmstat.nr_inactive_file
      8469           +77.0%      14989   7%  proc-vmstat.nr_mapped
    825.02          +796.6%       7397   7%  proc-vmstat.nr_shmem
     43851            +2.8%      45074        proc-vmstat.nr_slab_reclaimable
    457971           -17.6%     377565        proc-vmstat.nr_written
    669.84          +252.2%       2359  41%  proc-vmstat.nr_zone_active_anon
     63068   2%     -29.2%      44622        proc-vmstat.nr_zone_active_file
    177173            +3.8%     183958        proc-vmstat.nr_zone_inactive_anon
    155978            -9.9%     140569        proc-vmstat.nr_zone_inactive_file
      3373   5%     -16.1%       2832   4%  proc-vmstat.nr_zone_write_pending
    157270   3%     +57.2%     247258        proc-vmstat.pgfault
   2488450           -26.5%    1829243        proc-vmstat.pgpgout
      7922   4%     +23.0%       9742   2%  proc-vmstat.pgreuse
      1.94  36%      +1.4        3.32  22%  perf-profile.calltrace.cycles-pp.record__mmap_read_evlist.__cmd_record.cmd_record.run_builtin.main
      1.65  14%      +1.7        3.32  22%  perf-profile.calltrace.cycles-pp.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record.run_builtin
      0.00            +1.7        1.69   7%  perf-profile.calltrace.cycles-pp.dup_mm.copy_process.kernel_clone.__do_sys_clone.do_syscall_64
      2.80  35%      +1.7        4.50  18%  perf-profile.calltrace.cycles-pp.__cmd_record.cmd_record.run_builtin.main
      2.80  35%      +1.7        4.50  18%  perf-profile.calltrace.cycles-pp.cmd_record.run_builtin.main
      0.27 223%      +1.7        1.98  33%  perf-profile.calltrace.cycles-pp._Fork
      0.27 223%      +1.7        1.98  33%  perf-profile.calltrace.cycles-pp.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe._Fork
      0.27 223%      +1.7        1.98  33%  perf-profile.calltrace.cycles-pp.copy_process.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe
      0.27 223%      +1.7        1.98  33%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe._Fork
      0.27 223%      +1.7        1.98  33%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe._Fork
      0.27 223%      +1.7        1.98  33%  perf-profile.calltrace.cycles-pp.kernel_clone.__do_sys_clone.do_syscall_64.entry_SYSCALL_64_after_hwframe._Fork
      1.42  47%      +1.9        3.32  22%  perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write.writen.record__pushfn
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write.writen.record__pushfn.perf_mmap__push
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.generic_perform_write.shmem_file_write_iter.vfs_write.ksys_write.do_syscall_64
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write.writen
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record.cmd_record
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.shmem_file_write_iter.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.write.writen.record__pushfn.perf_mmap__push.record__mmap_read_evlist
      1.15  72%      +2.2        3.32  22%  perf-profile.calltrace.cycles-pp.writen.record__pushfn.perf_mmap__push.record__mmap_read_evlist.__cmd_record
      1.94  36%      +1.4        3.32  22%  perf-profile.children.cycles-pp.record__mmap_read_evlist
      1.65  14%      +1.7        3.32  22%  perf-profile.children.cycles-pp.perf_mmap__push
      0.00            +1.7        1.69   7%  perf-profile.children.cycles-pp.dup_mm
      0.27 223%      +1.7        1.98  33%  perf-profile.children.cycles-pp._Fork
      0.27 223%      +1.7        1.98  33%  perf-profile.children.cycles-pp.__do_sys_clone
      0.27 223%      +1.7        1.98  33%  perf-profile.children.cycles-pp.copy_process
      0.27 223%      +1.7        1.98  33%  perf-profile.children.cycles-pp.kernel_clone
      1.42  47%      +1.9        3.32  22%  perf-profile.children.cycles-pp.ksys_write
      1.42  47%      +1.9        3.32  22%  perf-profile.children.cycles-pp.vfs_write
      1.15  72%      +2.2        3.32  22%  perf-profile.children.cycles-pp.generic_perform_write
      1.15  72%      +2.2        3.32  22%  perf-profile.children.cycles-pp.record__pushfn
      1.15  72%      +2.2        3.32  22%  perf-profile.children.cycles-pp.shmem_file_write_iter
      1.15  72%      +2.5        3.62   7%  perf-profile.children.cycles-pp.writen
      1.42  47%      +2.5        3.92  17%  perf-profile.children.cycles-pp.write
 1.799e+09           -23.0%  1.386e+09        perf-stat.i.branch-instructions
      5.46            +1.0        6.50        perf-stat.i.branch-miss-rate%
  97222174            -9.3%   88136147        perf-stat.i.branch-misses
      6.96   8%      +2.5        9.49   3%  perf-stat.i.cache-miss-rate%
   9472265  10%     -19.3%    7648380   3%  perf-stat.i.cache-misses
 1.463e+08           -43.9%   82019846        perf-stat.i.cache-references
    199745           -53.6%      92597        perf-stat.i.context-switches
      1.46            -9.4%       1.32        perf-stat.i.cpi
 1.265e+10           -30.4%  8.795e+09        perf-stat.i.cpu-cycles
      1465   9%     -51.9%     705.40   2%  perf-stat.i.cpu-migrations
      1373   9%     -15.7%       1158   3%  perf-stat.i.cycles-between-cache-misses
 8.814e+09           -23.1%   6.78e+09        perf-stat.i.instructions
      0.70            +9.4%       0.76        perf-stat.i.ipc
      4.27           -53.5%       1.98        perf-stat.i.metric.K/sec
      6615   6%     -16.4%       5530        perf-stat.i.minor-faults
      6615   6%     -16.4%       5530        perf-stat.i.page-faults
      5.40            +1.0        6.36        perf-stat.overall.branch-miss-rate%
      6.47   8%      +2.9        9.32   3%  perf-stat.overall.cache-miss-rate%
      1.43            -9.6%       1.30        perf-stat.overall.cpi
      1347   8%     -14.6%       1151   3%  perf-stat.overall.cycles-between-cache-misses
      0.70           +10.6%       0.77        perf-stat.overall.ipc
   1.7e+09           -20.7%  1.348e+09        perf-stat.ps.branch-instructions
  91873961            -6.6%   85777312        perf-stat.ps.branch-misses
   8951075  10%     -16.9%    7442424   3%  perf-stat.ps.cache-misses
 1.382e+08           -42.3%   79820635        perf-stat.ps.cache-references
    188782           -52.3%      90114        perf-stat.ps.context-switches
     45375            +3.0%      46719        perf-stat.ps.cpu-clock
 1.195e+10           -28.4%   8.56e+09        perf-stat.ps.cpu-cycles
      1385   9%     -50.5%     686.49   2%  perf-stat.ps.cpu-migrations
 8.328e+09           -20.8%  6.598e+09        perf-stat.ps.instructions
      6239   6%     -13.8%       5375        perf-stat.ps.minor-faults
      6239   6%     -13.8%       5376        perf-stat.ps.page-faults
     45375            +3.0%      46719        perf-stat.ps.task-clock
  1.52e+11           +63.0%  2.477e+11        perf-stat.total.instructions



***************************************************************************************************
lkp-csl-2sp7: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/thread_nr:
  gcc-13/performance/directio/1SSD/btrfs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-csl-2sp7/DWSL/fxmark/4

commit: 
  1122c0c1cc ("block: move cache control settings out of queue->flags")
  bd4a633b6f ("block: move the nonrot flag to queue_limits")

1122c0c1cc71f740 bd4a633b6f7c3c6b6ebc1a07317 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
   2711233           -49.2%    1378365        cpuidle..usage
    113.67  32%     -77.0%      26.17  15%  perf-c2c.HITM.local
     68.36           +32.7%      90.69        iostat.cpu.idle
     16.62           -85.0%       2.49   3%  iostat.cpu.iowait
     14.20   2%     -57.2%       6.08   3%  iostat.cpu.system
     33.33 111%     -33.3        0.00        perf-profile.calltrace.cycles-pp.folio_remove_rmap_ptes.zap_present_ptes.zap_pte_range.zap_pmd_range.unmap_page_range
     41.67  82%     -41.7        0.00        perf-profile.children.cycles-pp.folio_remove_rmap_ptes
     25.00 100%     -25.0        0.00        perf-profile.self.cycles-pp.folio_remove_rmap_ptes
   1494783   2%     -68.3%     473712   4%  numa-numastat.node0.local_node
   1516920           -64.6%     537707   5%  numa-numastat.node0.numa_hit
    114515  25%     +38.9%     159043  14%  numa-numastat.node1.local_node
     77985  18%     -53.4%      36364  53%  numa-numastat.node1.other_node
     67.69           +22.9       90.61        mpstat.cpu.all.idle%
     17.08           -14.5        2.56   3%  mpstat.cpu.all.iowait%
      1.46   3%      -0.2        1.23   2%  mpstat.cpu.all.irq%
      0.93   2%      -0.6        0.33   2%  mpstat.cpu.all.soft%
     12.01   2%      -7.5        4.49   5%  mpstat.cpu.all.sys%
   1045438           -85.4%     152557   3%  numa-meminfo.node0.Active
     75557  13%     -71.3%      21669        numa-meminfo.node0.Active(anon)
    969881           -86.5%     130887   3%  numa-meminfo.node0.Active(file)
    869623   4%     -56.8%     375494  16%  numa-meminfo.node0.Inactive
    653865   2%     -71.4%     186796   3%  numa-meminfo.node0.Inactive(file)
     92743  11%     -63.3%      33993   3%  numa-meminfo.node0.Shmem
     68.29           +32.8%      90.72        vmstat.cpu.id
     16.68           -84.7%       2.55   6%  vmstat.cpu.wa
    116067           -85.7%      16600   2%  vmstat.io.bo
   4660813           -29.0%    3309839        vmstat.memory.cache
      0.52  15%     -80.0%       0.10  39%  vmstat.procs.b
      1.58   9%     -21.0%       1.25  10%  vmstat.procs.r
     41812   3%     -73.0%      11271   2%  vmstat.system.cs
     27419           -31.7%      18726        vmstat.system.in
      0.10 119%     -56.9%       0.04   5%  perf-stat.i.MPKI
     43412   3%     -74.3%      11165   3%  perf-stat.i.context-switches
      1457   5%     -71.2%     419.29   2%  perf-stat.i.cpu-migrations
    129.81  13%     +15.6%     150.07        perf-stat.i.cycles-between-cache-misses
      0.13  42%     -86.8%       0.02 115%  perf-stat.i.major-faults
     61.60  34%     -33.0%      41.27   6%  perf-stat.i.metric.K/sec
     42626   3%     -74.3%      10965   3%  perf-stat.ps.context-switches
      1431   5%     -71.2%     411.72   2%  perf-stat.ps.cpu-migrations
      0.13  42%     -86.7%       0.02 114%  perf-stat.ps.major-faults
   1046381           -85.4%     152689   3%  meminfo.Active
     75958  13%     -70.7%      22233        meminfo.Active(anon)
    970423           -86.6%     130455   3%  meminfo.Active(file)
   4602998           -29.7%    3237382        meminfo.Cached
   1062713           -44.5%     590084        meminfo.Inactive
    654829   2%     -71.5%     186936   2%  meminfo.Inactive(file)
   6405282           -22.1%    4991953        meminfo.Memused
    185977           -10.1%     167119        meminfo.SUnreclaim
     94548  11%     -61.3%      36622        meminfo.Shmem
    268307            -9.1%     243941        meminfo.Slab
   6936784           -21.5%    5447420   2%  meminfo.max_used_kB
     75.00   2%     -12.9%      65.33        turbostat.Avg_MHz
      2.86   2%      -0.4        2.50        turbostat.Busy%
    633581           -75.7%     154084   3%  turbostat.C1
      0.62   2%      -0.3        0.36   3%  turbostat.C1%
   1019660           -75.2%     252730   2%  turbostat.C1E
      1.80            -0.6        1.21   2%  turbostat.C1E%
   1946276           -32.6%    1312590        turbostat.IRQ
    191212  15%     -87.9%      23221        turbostat.POLL
      0.03            -0.0        0.00        turbostat.POLL%
     10.25   4%    +295.1%      40.50        turbostat.Pkg%pc2
    117.35           -11.0%     104.45        turbostat.PkgWatt
     18889  13%     -71.3%       5417        numa-vmstat.node0.nr_active_anon
    242499           -86.5%      32724   3%  numa-vmstat.node0.nr_active_file
   1759926           -87.0%     228849   2%  numa-vmstat.node0.nr_dirtied
    163471   2%     -71.4%      46702   3%  numa-vmstat.node0.nr_inactive_file
     23187  11%     -63.3%       8499   3%  numa-vmstat.node0.nr_shmem
   1759571           -87.0%     228761   2%  numa-vmstat.node0.nr_written
     18889  13%     -71.3%       5417        numa-vmstat.node0.nr_zone_active_anon
    242499           -86.5%      32724   3%  numa-vmstat.node0.nr_zone_active_file
    163471   2%     -71.4%      46702   3%  numa-vmstat.node0.nr_zone_inactive_file
   1516890           -64.6%     537657   5%  numa-vmstat.node0.numa_hit
   1494751   2%     -68.3%     473662   4%  numa-vmstat.node0.numa_local
    114346  25%     +38.9%     158813  14%  numa-vmstat.node1.numa_local
     77985  18%     -53.4%      36365  53%  numa-vmstat.node1.numa_other
    115.07           +55.1%     178.53        fxmark.ssd_btrfs_DWSL_4_directio.idle_sec
     59.45           +55.0%      92.14        fxmark.ssd_btrfs_DWSL_4_directio.idle_util
     46.68           -85.2%       6.89   3%  fxmark.ssd_btrfs_DWSL_4_directio.iowait_sec
     24.12           -85.3%       3.56   3%  fxmark.ssd_btrfs_DWSL_4_directio.iowait_util
      3.35           -20.3%       2.67   2%  fxmark.ssd_btrfs_DWSL_4_directio.irq_sec
      1.73           -20.3%       1.38   2%  fxmark.ssd_btrfs_DWSL_4_directio.irq_util
      2.49           -66.1%       0.84        fxmark.ssd_btrfs_DWSL_4_directio.softirq_sec
      1.29           -66.1%       0.44        fxmark.ssd_btrfs_DWSL_4_directio.softirq_util
     25.18   3%     -82.9%       4.32  12%  fxmark.ssd_btrfs_DWSL_4_directio.sys_sec
     13.01   2%     -82.9%       2.23  12%  fxmark.ssd_btrfs_DWSL_4_directio.sys_util
      0.80  10%     -37.2%       0.50   8%  fxmark.ssd_btrfs_DWSL_4_directio.user_sec
      0.41  10%     -37.3%       0.26   8%  fxmark.ssd_btrfs_DWSL_4_directio.user_util
    226633           -75.6%      55335   3%  fxmark.ssd_btrfs_DWSL_4_directio.works
      4532           -75.6%       1106   3%  fxmark.ssd_btrfs_DWSL_4_directio.works/sec
   4143236   5%     -87.7%     508948   4%  fxmark.time.file_system_outputs
      1228  47%     -81.9%     222.17  15%  fxmark.time.involuntary_context_switches
     11.33   4%     -64.7%       4.00        fxmark.time.percent_of_cpu_this_job_got
    269631   7%     -75.9%      64948   3%  fxmark.time.voluntary_context_switches
     18991  13%     -70.9%       5527        proc-vmstat.nr_active_anon
    242399           -86.5%      32827   3%  proc-vmstat.nr_active_file
   1759926           -87.0%     228849   2%  proc-vmstat.nr_dirtied
   1150679           -29.6%     809855        proc-vmstat.nr_file_pages
    163806           -71.3%      47029   3%  proc-vmstat.nr_inactive_file
     23665  11%     -61.4%       9139        proc-vmstat.nr_shmem
     20589            -6.7%      19203        proc-vmstat.nr_slab_reclaimable
     46538           -10.3%      41763        proc-vmstat.nr_slab_unreclaimable
   1759571           -87.0%     228761   2%  proc-vmstat.nr_written
     18991  13%     -70.9%       5527        proc-vmstat.nr_zone_active_anon
    242399           -86.5%      32827   3%  proc-vmstat.nr_zone_active_file
    163806           -71.3%      47029   3%  proc-vmstat.nr_zone_inactive_file
   1711454           -57.1%     735022        proc-vmstat.numa_hit
   1611324           -60.6%     634659        proc-vmstat.numa_local
    583094           -90.6%      54844   2%  proc-vmstat.pgactivate
   1786453           -55.2%     799914        proc-vmstat.pgalloc_normal
   1682791           -57.0%     724247        proc-vmstat.pgfree
   8394850           -85.8%    1195219   2%  proc-vmstat.pgpgout
      0.01  17%    +190.9%       0.02   8%  perf-sched.sch_delay.avg.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      0.04 128%     -99.1%       0.00 223%  perf-sched.sch_delay.avg.ms.__cond_resched.submit_bio_noacct.btrfs_submit_chunk.btrfs_submit_bio.submit_eb_page
      0.01  13%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.btrfs_commit_transaction.btrfs_sync_file.__x64_sys_fsync.do_syscall_64
      0.01  49%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.btrfs_start_ordered_extent.btrfs_wait_ordered_range.btrfs_sync_file.__x64_sys_fsync
      0.00  18%    +120.7%       0.01   6%  perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
      0.00  14%     +55.0%       0.01   7%  perf-sched.sch_delay.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
      0.00  60%     -94.4%       0.00 223%  perf-sched.sch_delay.avg.ms.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
      0.00  82%    +630.0%       0.01   7%  perf-sched.sch_delay.avg.ms.rcu_gp_kthread.kthread.ret_from_fork.ret_from_fork_asm
      0.00  33%    +266.7%       0.01  22%  perf-sched.sch_delay.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
      0.01  28%    -100.0%       0.00        perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.wait_log_commit
      0.00  83%    +238.9%       0.01   3%  perf-sched.sch_delay.avg.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
      0.01  13%    +163.9%       0.02  15%  perf-sched.sch_delay.avg.ms.schedule_timeout.io_schedule_timeout.__iomap_dio_rw.btrfs_dio_write
      0.00  17%    +200.0%       0.01   7%  perf-sched.sch_delay.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
      0.01  10%     +66.7%       0.01  15%  perf-sched.sch_delay.avg.ms.schedule_timeout.kcompactd.kthread.ret_from_fork
      0.01  11%     +35.0%       0.01        perf-sched.sch_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
      0.02  14%     +69.9%       0.03   7%  perf-sched.sch_delay.avg.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      0.01   6%     +64.9%       0.01   3%  perf-sched.sch_delay.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
      1.63  26%     -82.4%       0.29  71%  perf-sched.sch_delay.max.ms.__cond_resched.__wait_for_common.barrier_all_devices.write_all_supers.btrfs_sync_log
      0.00 178%    +693.8%       0.02  75%  perf-sched.sch_delay.max.ms.__cond_resched.down_write.btrfs_tree_lock_nested.btrfs_search_slot.btrfs_lookup_csum
      0.04  22%     +51.5%       0.07  13%  perf-sched.sch_delay.max.ms.__cond_resched.kmem_cache_alloc_noprof.alloc_extent_state.__set_extent_bit.lock_extent
      0.02  31%    +240.8%       0.08  18%  perf-sched.sch_delay.max.ms.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      0.30 169%     -99.9%       0.00 223%  perf-sched.sch_delay.max.ms.__cond_resched.submit_bio_noacct.btrfs_submit_chunk.btrfs_submit_bio.submit_eb_page
      0.01  33%    -100.0%       0.00        perf-sched.sch_delay.max.ms.btrfs_commit_transaction.btrfs_sync_file.__x64_sys_fsync.do_syscall_64
      0.01  67%    -100.0%       0.00        perf-sched.sch_delay.max.ms.btrfs_start_ordered_extent.btrfs_wait_ordered_range.btrfs_sync_file.__x64_sys_fsync
      0.10  51%     -94.4%       0.01 129%  perf-sched.sch_delay.max.ms.btrfs_sync_log.btrfs_sync_file.__x64_sys_fsync.do_syscall_64
     11.97 117%     -93.7%       0.75  43%  perf-sched.sch_delay.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
      1.07  37%     -64.5%       0.38  75%  perf-sched.sch_delay.max.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
      0.01  86%     -96.3%       0.00 223%  perf-sched.sch_delay.max.ms.irqentry_exit_to_user_mode.asm_sysvec_apic_timer_interrupt.[unknown].[unknown]
      0.00 104%   +2221.7%       0.09  35%  perf-sched.sch_delay.max.ms.rcu_gp_kthread.kthread.ret_from_fork.ret_from_fork_asm
      0.05  56%     -84.2%       0.01 179%  perf-sched.sch_delay.max.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.start_log_trans
      0.04 113%    -100.0%       0.00        perf-sched.sch_delay.max.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.wait_log_commit
      0.96  51%     -54.4%       0.44  38%  perf-sched.sch_delay.max.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
      0.01  19%    +113.3%       0.01   4%  perf-sched.total_sch_delay.average.ms
      1.73  12%    +266.3%       6.32   2%  perf-sched.total_wait_and_delay.average.ms
    158630  12%     -72.8%      43138   2%  perf-sched.total_wait_and_delay.count.ms
      1.72  12%    +266.7%       6.31   2%  perf-sched.total_wait_time.average.ms
      3.19          -100.0%       0.00        perf-sched.wait_and_delay.avg.ms.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_commit_planes
      0.27   3%    +445.1%       1.46   3%  perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
      0.06   2%     +18.1%       0.07        perf-sched.wait_and_delay.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
    171.54   8%    +127.3%     389.88   2%  perf-sched.wait_and_delay.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
      0.43   3%    +466.1%       2.44   2%  perf-sched.wait_and_delay.avg.ms.schedule_timeout.io_schedule_timeout.__iomap_dio_rw.btrfs_dio_write
      0.06  15%     +77.5%       0.11   3%  perf-sched.wait_and_delay.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
      2.32   4%     +20.8%       2.80        perf-sched.wait_and_delay.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
      1.17   4%    +379.8%       5.63   4%  perf-sched.wait_and_delay.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
     15.00  47%     -86.7%       2.00 125%  perf-sched.wait_and_delay.count.__cond_resched.generic_perform_write.shmem_file_write_iter.vfs_write.ksys_write
     35.83  25%     +67.4%      60.00   9%  perf-sched.wait_and_delay.count.__cond_resched.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      1.00          -100.0%       0.00        perf-sched.wait_and_delay.count.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_commit_planes
     17297   7%     -88.0%       2070   2%  perf-sched.wait_and_delay.count.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
     10087   5%     -84.7%       1539   2%  perf-sched.wait_and_delay.count.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
     94.33  10%     -58.3%      39.33        perf-sched.wait_and_delay.count.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
    120.33 100%   +1865.5%       2365   2%  perf-sched.wait_and_delay.count.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.btrfs_tree_lock_nested
     20315   5%     -71.5%       5784   2%  perf-sched.wait_and_delay.count.schedule_timeout.io_schedule_timeout.__iomap_dio_rw.btrfs_dio_write
     10088   5%     -84.7%       1539   2%  perf-sched.wait_and_delay.count.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
      2141   5%     -28.8%       1524        perf-sched.wait_and_delay.count.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
     20219   5%     -78.4%       4371   2%  perf-sched.wait_and_delay.count.wait_log_commit.btrfs_sync_log.btrfs_sync_file.__x64_sys_fsync
     41448   5%     -78.1%       9088   2%  perf-sched.wait_and_delay.count.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
      3.19          -100.0%       0.00        perf-sched.wait_and_delay.max.ms.__cond_resched.ww_mutex_lock.drm_gem_vunmap_unlocked.drm_gem_fb_vunmap.drm_atomic_helper_commit_planes
      1019   4%     +46.3%       1492  31%  perf-sched.wait_and_delay.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
     54.34  96%     -84.7%       8.32 120%  perf-sched.wait_and_delay.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
      2.31  36%     -62.1%       0.87  58%  perf-sched.wait_and_delay.max.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
      1321  55%     -77.3%     300.07  35%  perf-sched.wait_and_delay.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
      2.08  47%     -48.5%       1.07  22%  perf-sched.wait_and_delay.max.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
      1437  27%     -31.3%     988.15   5%  perf-sched.wait_and_delay.max.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
      0.19 149%    +615.1%       1.36  73%  perf-sched.wait_time.avg.ms.__cond_resched.down_write.btrfs_tree_lock_nested.btrfs_search_slot.btrfs_lookup_csum
      0.04 131%  +11801.2%       5.08 134%  perf-sched.wait_time.avg.ms.__cond_resched.kmem_cache_alloc_noprof.add_delayed_ref.btrfs_drop_extents.insert_reserved_file_extent
      0.47  67%    +471.0%       2.66  30%  perf-sched.wait_time.avg.ms.__cond_resched.kmem_cache_alloc_noprof.alloc_extent_state.__set_extent_bit.lock_extent
      0.42  20%     -98.7%       0.01 223%  perf-sched.wait_time.avg.ms.__cond_resched.submit_bio_noacct.btrfs_submit_chunk.btrfs_submit_bio.submit_eb_page
     78.12 101%    -100.0%       0.00        perf-sched.wait_time.avg.ms.btrfs_commit_transaction.btrfs_sync_file.__x64_sys_fsync.do_syscall_64
      0.05  14%    -100.0%       0.00        perf-sched.wait_time.avg.ms.btrfs_start_ordered_extent.btrfs_wait_ordered_range.btrfs_sync_file.__x64_sys_fsync
      0.26   3%    +450.9%       1.45   3%  perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
      0.05   2%     +16.5%       0.06        perf-sched.wait_time.avg.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
     11.37 142%     -99.9%       0.01 150%  perf-sched.wait_time.avg.ms.irqentry_exit_to_user_mode.asm_common_interrupt.[unknown].[unknown]
    171.54   8%    +127.3%     389.87   2%  perf-sched.wait_time.avg.ms.schedule_hrtimeout_range_clock.do_poll.constprop.0.do_sys_poll
      0.67   9%    -100.0%       0.00        perf-sched.wait_time.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.wait_log_commit
      3.35  72%     -85.6%       0.48  39%  perf-sched.wait_time.avg.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.btrfs_tree_lock_nested
      0.43   3%    +470.0%       2.43   2%  perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.__iomap_dio_rw.btrfs_dio_write
      0.06  15%     +73.6%       0.10   3%  perf-sched.wait_time.avg.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
      2.31   4%     +20.8%       2.79        perf-sched.wait_time.avg.ms.schedule_timeout.rcu_gp_fqs_loop.rcu_gp_kthread.kthread
      0.07 174%    -100.0%       0.00        perf-sched.wait_time.avg.ms.wait_current_trans.start_transaction.btrfs_attach_transaction_barrier.btrfs_sync_file
      0.06   3%     -44.4%       0.03  25%  perf-sched.wait_time.avg.ms.wait_log_commit.btrfs_sync_log.btrfs_sync_file.__x64_sys_fsync
      1.17   4%    +381.5%       5.62   4%  perf-sched.wait_time.avg.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
      1.79  24%     -75.6%       0.44  47%  perf-sched.wait_time.max.ms.__cond_resched.__wait_for_common.barrier_all_devices.write_all_supers.btrfs_sync_log
      0.19 149%   +1796.9%       3.62  54%  perf-sched.wait_time.max.ms.__cond_resched.down_write.btrfs_tree_lock_nested.btrfs_search_slot.btrfs_lookup_csum
      0.04 132%  +22425.6%       9.84 139%  perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc_noprof.add_delayed_ref.btrfs_drop_extents.insert_reserved_file_extent
      1.04  47%   +4402.5%      46.83  57%  perf-sched.wait_time.max.ms.__cond_resched.kmem_cache_alloc_noprof.alloc_extent_state.__set_extent_bit.lock_extent
      0.84  53%     -99.3%       0.01 223%  perf-sched.wait_time.max.ms.__cond_resched.submit_bio_noacct.btrfs_submit_chunk.btrfs_submit_bio.submit_eb_page
    281.64  99%    -100.0%       0.00        perf-sched.wait_time.max.ms.btrfs_commit_transaction.btrfs_sync_file.__x64_sys_fsync.do_syscall_64
      0.06  16%    -100.0%       0.00        perf-sched.wait_time.max.ms.btrfs_start_ordered_extent.btrfs_wait_ordered_range.btrfs_sync_file.__x64_sys_fsync
      1019   4%     +46.3%       1492  31%  perf-sched.wait_time.max.ms.do_task_dead.do_exit.do_group_exit.__x64_sys_exit_group.do_syscall_64
     49.01 107%     -83.8%       7.96 128%  perf-sched.wait_time.max.ms.io_schedule.folio_wait_bit_common.folio_wait_writeback.__filemap_fdatawait_range
      1.67  64%     -69.6%       0.51  49%  perf-sched.wait_time.max.ms.io_schedule.folio_wait_bit_common.write_all_supers.btrfs_sync_log
    333.41 141%    -100.0%       0.03 146%  perf-sched.wait_time.max.ms.irqentry_exit_to_user_mode.asm_common_interrupt.[unknown].[unknown]
      2.02  88%    -100.0%       0.00        perf-sched.wait_time.max.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.wait_log_commit
      1321  55%     -77.3%     300.06  35%  perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_read_slowpath.down_read.btrfs_tree_read_lock_nested
    533.71  48%     -71.8%     150.40  48%  perf-sched.wait_time.max.ms.schedule_preempt_disabled.rwsem_down_write_slowpath.down_write.btrfs_tree_lock_nested
      1.47   5%     -24.6%       1.11   5%  perf-sched.wait_time.max.ms.schedule_timeout.__wait_for_common.wait_for_completion_state.kernel_clone
      1.14  45%     -43.7%       0.64   9%  perf-sched.wait_time.max.ms.schedule_timeout.io_schedule_timeout.__wait_for_common.barrier_all_devices
    532.05  88%     -99.8%       1.28 163%  perf-sched.wait_time.max.ms.syscall_exit_to_user_mode.do_syscall_64.entry_SYSCALL_64_after_hwframe.[unknown]
      0.07 174%    -100.0%       0.00        perf-sched.wait_time.max.ms.wait_current_trans.start_transaction.btrfs_attach_transaction_barrier.btrfs_sync_file
      2.02  17%    +345.2%       9.00 129%  perf-sched.wait_time.max.ms.wait_log_commit.btrfs_sync_log.btrfs_sync_file.__x64_sys_fsync
      1437  27%     -31.3%     988.14   5%  perf-sched.wait_time.max.ms.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
***************************************************************************************************
lkp-csl-2sp7: 96 threads 2 sockets Intel(R) Xeon(R) Gold 6252 CPU @ 2.10GHz (Cascade Lake) with 512G memory
=========================================================================================
compiler/cpufreq_governor/directio/disk/fstype/kconfig/media/rootfs/tbox_group/test/testcase/thread_nr:
  gcc-13/performance/directio/1SSD/btrfs/x86_64-rhel-8.3/ssd/debian-11.1-x86_64-20220510.cgz/lkp-csl-2sp7/MWUM/fxmark/4

commit: 
  1122c0c1cc ("block: move cache control settings out of queue->flags")
  bd4a633b6f ("block: move the nonrot flag to queue_limits")

1122c0c1cc71f740 bd4a633b6f7c3c6b6ebc1a07317 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
 2.176e+09           +11.4%  2.423e+09   3%  cpuidle..time
    132.33  20%     -36.4%      84.17  10%  perf-c2c.HITM.local
    291.11           +31.8%     383.72   5%  uptime.boot
     56.19            +7.6%      60.45        iostat.cpu.idle
     18.23           -27.6%      13.19   6%  iostat.cpu.iowait
     25.05            +3.4%      25.89        iostat.cpu.system
   2507993           -14.9%    2134516        meminfo.Active
   2144083           -18.4%    1750586   2%  meminfo.Active(file)
     77605           -25.4%      57872   5%  meminfo.Dirty
     18.37            -5.1       13.27   6%  mpstat.cpu.all.iowait%
      0.84            -0.1        0.70   4%  mpstat.cpu.all.irq%
      1.16            -0.3        0.90   6%  mpstat.cpu.all.soft%
    297906           -18.3%     243394   5%  vmstat.io.bo
      0.89   3%     -29.8%       0.63   9%  vmstat.procs.b
     47364           -21.4%      37209   5%  vmstat.system.cs
     12792           -18.5%      10428   3%  vmstat.system.in
      2.47  13%     -30.3%       1.72  40%  perf-sched.sch_delay.max.ms.__cond_resched.btrfs_alloc_path.btrfs_async_run_delayed_root.btrfs_work_helper.process_one_work
      1184  25%     +26.6%       1499  31%  perf-sched.wait_and_delay.max.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      0.68 217%   +6285.5%      43.38 121%  perf-sched.wait_time.avg.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.workqueue_set_max_active
      1.34 220%   +5568.8%      75.81 138%  perf-sched.wait_time.max.ms.schedule_preempt_disabled.__mutex_lock.constprop.0.workqueue_set_max_active
      1183  25%     +26.6%       1498  31%  perf-sched.wait_time.max.ms.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      1.80            -0.2        1.61        turbostat.Busy%
      2952            +3.5%       3055        turbostat.Bzy_MHz
      0.60            -0.1        0.48   6%  turbostat.C1%
      1.98            -0.2        1.80   4%  turbostat.C1E%
   1175575           +13.6%    1335284   5%  turbostat.C6
      6.37            -1.4        5.01   3%  turbostat.C6%
     11.16           -10.5%       9.99        turbostat.CPU%c1
      0.31           +12.8%       0.35        turbostat.IPC
   3186943           +11.3%    3548214   2%  turbostat.IRQ
   2505849           -14.8%    2134079        numa-meminfo.node0.Active
   2142339           -18.3%    1750535   2%  numa-meminfo.node0.Active(file)
     72152   4%     -25.9%      53438   5%  numa-meminfo.node0.Dirty
   5805800  13%     -36.0%    3718257  24%  numa-meminfo.node0.FilePages
   9182123   8%     -20.8%    7276407  12%  numa-meminfo.node0.MemUsed
   2447576  33%     -70.1%     731851 123%  numa-meminfo.node0.Unevictable
     15180  80%    +197.2%      45112  40%  numa-meminfo.node1.KReclaimable
      3231 172%    +524.0%      20162  58%  numa-meminfo.node1.Mapped
     15180  80%    +197.2%      45112  40%  numa-meminfo.node1.SReclaimable
    444470 182%    +386.0%    2160224  41%  numa-meminfo.node1.Unevictable
    535497           -18.3%     437654   2%  numa-vmstat.node0.nr_active_file
  18593647           +11.3%   20701921   2%  numa-vmstat.node0.nr_dirtied
     18026   4%     -25.9%      13355   5%  numa-vmstat.node0.nr_dirty
   1451371  13%     -36.0%     929593  24%  numa-vmstat.node0.nr_file_pages
    611894  33%     -70.1%     182962 123%  numa-vmstat.node0.nr_unevictable
  18583823           +11.3%   20690513   2%  numa-vmstat.node0.nr_written
    535497           -18.3%     437654   2%  numa-vmstat.node0.nr_zone_active_file
    611894  33%     -70.1%     182962 123%  numa-vmstat.node0.nr_zone_unevictable
     18182   5%     -26.1%      13429   5%  numa-vmstat.node0.nr_zone_write_pending
    807.93 172%    +524.4%       5044  58%  numa-vmstat.node1.nr_mapped
      3795  80%    +197.2%      11277  40%  numa-vmstat.node1.nr_slab_reclaimable
    111117 182%    +386.0%     540056  41%  numa-vmstat.node1.nr_unevictable
    111117 182%    +386.0%     540056  41%  numa-vmstat.node1.nr_zone_unevictable
    107.60           +11.1%     119.54   2%  fxmark.ssd_btrfs_MWUM_4_directio.idle_sec
     54.11           +10.7%      59.91   2%  fxmark.ssd_btrfs_MWUM_4_directio.idle_util
     45.57           -31.6%      31.18  12%  fxmark.ssd_btrfs_MWUM_4_directio.iowait_sec
     22.92           -31.8%      15.63  12%  fxmark.ssd_btrfs_MWUM_4_directio.iowait_util
      1.76           -22.1%       1.37   8%  fxmark.ssd_btrfs_MWUM_4_directio.irq_sec
      0.89           -22.4%       0.69   8%  fxmark.ssd_btrfs_MWUM_4_directio.irq_util
      2.62           -27.4%       1.90  10%  fxmark.ssd_btrfs_MWUM_4_directio.softirq_sec
      1.32           -27.7%       0.95  10%  fxmark.ssd_btrfs_MWUM_4_directio.softirq_util
     40.98           +10.3%      45.20   3%  fxmark.ssd_btrfs_MWUM_4_directio.sys_sec
     20.61            +9.9%      22.65   3%  fxmark.ssd_btrfs_MWUM_4_directio.sys_util
      1373           -45.7%     745.50   9%  fxmark.ssd_btrfs_MWUM_4_directio.works
     27.39           -45.9%      14.82   9%  fxmark.ssd_btrfs_MWUM_4_directio.works/sec
    250.17           +37.0%     342.71   5%  fxmark.time.elapsed_time
    250.17           +37.0%     342.71   5%  fxmark.time.elapsed_time.max
   4153888           -19.1%    3360997        fxmark.time.file_system_outputs
      8.50   5%     -29.4%       6.00        fxmark.time.percent_of_cpu_this_job_got
     59437  10%     +34.9%      80200   8%  sched_debug.cfs_rq:/.avg_vruntime.avg
     77362  12%     +28.6%      99496   9%  sched_debug.cfs_rq:/.avg_vruntime.max
     49777  14%     +41.4%      70377  10%  sched_debug.cfs_rq:/.avg_vruntime.min
     59437  10%     +34.9%      80200   8%  sched_debug.cfs_rq:/.min_vruntime.avg
     77362  12%     +28.6%      99496   9%  sched_debug.cfs_rq:/.min_vruntime.max
     49777  14%     +41.4%      70377  10%  sched_debug.cfs_rq:/.min_vruntime.min
     70.10  28%     +69.8%     119.03  24%  sched_debug.cfs_rq:/.util_est.avg
    387.03  19%     +36.5%     528.40  18%  sched_debug.cfs_rq:/.util_est.max
    111.66  27%     +63.4%     182.43  23%  sched_debug.cfs_rq:/.util_est.stddev
    159901           +21.8%     194738   5%  sched_debug.cpu.clock.avg
    159904           +21.8%     194739   5%  sched_debug.cpu.clock.max
    159899           +21.8%     194736   5%  sched_debug.cpu.clock.min
    156754           +22.0%     191297   5%  sched_debug.cpu.clock_task.avg
    159016           +21.8%     193608   5%  sched_debug.cpu.clock_task.max
    151859           +23.0%     186807   5%  sched_debug.cpu.clock_task.min
    571773   2%      -9.3%     518561   3%  sched_debug.cpu.max_idle_balance_cost.max
      8586  10%     -76.2%       2040  87%  sched_debug.cpu.max_idle_balance_cost.stddev
      1547   7%     +39.1%       2152  12%  sched_debug.cpu.nr_uninterruptible.max
     -3680           +35.1%      -4971        sched_debug.cpu.nr_uninterruptible.min
      2008  13%     +44.5%       2902  13%  sched_debug.cpu.nr_uninterruptible.stddev
    159901           +21.8%     194737   5%  sched_debug.cpu_clk
    159291           +21.9%     194110   5%  sched_debug.ktime
    160432           +21.7%     195278   5%  sched_debug.sched_clk
     90977            +5.5%      95982        proc-vmstat.nr_active_anon
    535809           -18.3%     437696   2%  proc-vmstat.nr_active_file
     80075            -3.3%      77402        proc-vmstat.nr_anon_pages
  18810399   2%     +11.4%   20960968   2%  proc-vmstat.nr_dirtied
     19409           -25.5%      14469   5%  proc-vmstat.nr_dirty
   1570377            -6.0%    1476623        proc-vmstat.nr_file_pages
     82770            -3.8%      79660        proc-vmstat.nr_inactive_anon
     18954            -1.0%      18765        proc-vmstat.nr_kernel_stack
      9590            -2.2%       9375        proc-vmstat.nr_mapped
     95937            +4.8%     100509        proc-vmstat.nr_shmem
  18800573   2%     +11.4%   20949542   2%  proc-vmstat.nr_written
     90977            +5.5%      95982        proc-vmstat.nr_zone_active_anon
    535809           -18.3%     437696   2%  proc-vmstat.nr_zone_active_file
     82770            -3.8%      79660        proc-vmstat.nr_zone_inactive_anon
     19688   2%     -25.3%      14712   5%  proc-vmstat.nr_zone_write_pending
      7670  50%     +96.4%      15061  26%  proc-vmstat.numa_hint_faults
      3803  88%    +157.4%       9788  21%  proc-vmstat.numa_hint_faults_local
  10950682   2%      +6.6%   11668450   2%  proc-vmstat.numa_hit
  10849707   2%      +6.6%   11566304   2%  proc-vmstat.numa_local
   1356796           +32.4%    1796173        proc-vmstat.pgactivate
  11864099            +5.9%   12565956   2%  proc-vmstat.pgalloc_normal
    791491           +24.6%     986144   4%  proc-vmstat.pgfault
  11653607            +6.8%   12444134   2%  proc-vmstat.pgfree
  75424328   2%     +11.4%   84002300   2%  proc-vmstat.pgpgout
     42955           +22.0%      52397   3%  proc-vmstat.pgreuse
  40751111   2%     -28.4%   29162586   8%  perf-stat.i.branch-instructions
      0.16   3%      -0.0        0.12   5%  perf-stat.i.branch-miss-rate%
   1966173   3%     -28.9%    1398795   8%  perf-stat.i.branch-misses
      0.56   5%      -0.1        0.41   5%  perf-stat.i.cache-miss-rate%
    338469   4%     -26.6%     248405   6%  perf-stat.i.cache-misses
   1788163   4%     -26.4%    1315825   7%  perf-stat.i.cache-references
     47778           -21.6%      37457   5%  perf-stat.i.context-switches
      0.04           -25.2%       0.03   8%  perf-stat.i.cpi
      8075           -13.9%       6951   2%  perf-stat.i.cpu-clock
 2.255e+08           -28.1%  1.622e+08   7%  perf-stat.i.cpu-cycles
      2754           -11.1%       2447   7%  perf-stat.i.cpu-migrations
     45.89   3%     -31.3%      31.53   7%  perf-stat.i.cycles-between-cache-misses
      0.01   4%      -0.0        0.00  13%  perf-stat.i.dTLB-load-miss-rate%
     62240   7%     -26.9%      45496   9%  perf-stat.i.dTLB-load-misses
  44048882   2%     -28.0%   31716199   8%  perf-stat.i.dTLB-loads
      0.00   4%      -0.0        0.00  13%  perf-stat.i.dTLB-store-miss-rate%
      7368   7%     -27.7%       5326   9%  perf-stat.i.dTLB-store-misses
  15972047   2%     -27.8%   11527820   8%  perf-stat.i.dTLB-stores
      1.71            -0.4        1.27   6%  perf-stat.i.iTLB-load-miss-rate%
     22267           -24.7%      16768   6%  perf-stat.i.iTLB-load-misses
     19977   4%     -27.0%      14589   5%  perf-stat.i.iTLB-loads
 2.026e+08   2%     -28.4%  1.451e+08   9%  perf-stat.i.instructions
    286.78   3%     -30.7%     198.87   8%  perf-stat.i.instructions-per-iTLB-miss
      0.03   3%     -27.5%       0.02   6%  perf-stat.i.ipc
      0.18  43%     +96.5%       0.35  31%  perf-stat.i.major-faults
      0.00           -28.0%       0.00   7%  perf-stat.i.metric.GHz
     31.38   4%     -24.9%      23.56   6%  perf-stat.i.metric.K/sec
      1.04   2%     -28.1%       0.75   8%  perf-stat.i.metric.M/sec
      2330            -3.1%       2257        perf-stat.i.minor-faults
      1.87  15%      -0.6        1.24  15%  perf-stat.i.node-load-miss-rate%
     32809   8%     -24.0%      24936  12%  perf-stat.i.node-loads
      1.60   8%      -0.4        1.17  15%  perf-stat.i.node-store-miss-rate%
     42947   3%     -26.9%      31405   6%  perf-stat.i.node-stores
      2330            -3.1%       2258        perf-stat.i.page-faults
      8075           -13.9%       6951   2%  perf-stat.i.task-clock
  41085549   2%     -28.6%   29351633   9%  perf-stat.ps.branch-instructions
   1980226   3%     -28.9%    1407007   8%  perf-stat.ps.branch-misses
    341317   4%     -26.7%     250339   6%  perf-stat.ps.cache-misses
   1802785   4%     -26.5%    1324804   7%  perf-stat.ps.cache-references
     47554           -21.5%      37337   5%  perf-stat.ps.context-switches
      8089           -14.0%       6956   2%  perf-stat.ps.cpu-clock
 2.274e+08           -28.2%  1.633e+08   7%  perf-stat.ps.cpu-cycles
      2740           -11.0%       2438   7%  perf-stat.ps.cpu-migrations
     62712   7%     -27.0%      45790   9%  perf-stat.ps.dTLB-load-misses
  44441011   2%     -28.1%   31931957   8%  perf-stat.ps.dTLB-loads
      7430   6%     -27.8%       5362   9%  perf-stat.ps.dTLB-store-misses
  16118579   2%     -28.0%   11611258   8%  perf-stat.ps.dTLB-stores
     22459   2%     -24.8%      16880   6%  perf-stat.ps.iTLB-load-misses
     20158   5%     -27.1%      14693   5%  perf-stat.ps.iTLB-loads
 2.042e+08   2%     -28.5%  1.461e+08   9%  perf-stat.ps.instructions
      0.18  43%     +96.8%       0.35  31%  perf-stat.ps.major-faults
      2321            -3.0%       2251        perf-stat.ps.minor-faults
     32976   8%     -23.8%      25135  12%  perf-stat.ps.node-loads
     43093   3%     -26.7%      31604   5%  perf-stat.ps.node-stores
      2321            -3.0%       2252        perf-stat.ps.page-faults
      8089           -14.0%       6956   2%  perf-stat.ps.task-clock
     29.41   2%     -16.0       13.44  45%  perf-profile.calltrace.cycles-pp.loop_process_work.process_one_work.worker_thread.kthread.ret_from_fork
     19.06   3%     -11.3        7.73  45%  perf-profile.calltrace.cycles-pp.btrfs_sync_file.loop_process_work.process_one_work.worker_thread.kthread
     24.03   3%     -10.6       13.47   6%  perf-profile.calltrace.cycles-pp.btree_write_cache_pages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.btrfs_write_marked_extents
     23.72   3%     -10.5       13.24   6%  perf-profile.calltrace.cycles-pp.submit_eb_page.btree_write_cache_pages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
     22.11   3%      -9.7       12.38   6%  perf-profile.calltrace.cycles-pp.btrfs_submit_chunk.btrfs_submit_bio.submit_eb_page.btree_write_cache_pages.do_writepages
     22.11   3%      -9.7       12.38   6%  perf-profile.calltrace.cycles-pp.btrfs_submit_bio.submit_eb_page.btree_write_cache_pages.do_writepages.filemap_fdatawrite_wbc
     21.55   3%      -9.5       12.08   6%  perf-profile.calltrace.cycles-pp.btree_csum_one_bio.btrfs_submit_chunk.btrfs_submit_bio.submit_eb_page.btree_write_cache_pages
     20.06   3%      -8.9       11.16   4%  perf-profile.calltrace.cycles-pp.btrfs_check_leaf.btree_csum_one_bio.btrfs_submit_chunk.btrfs_submit_bio.submit_eb_page
     19.94   3%      -8.8       11.09   5%  perf-profile.calltrace.cycles-pp.__btrfs_check_leaf.btrfs_check_leaf.btree_csum_one_bio.btrfs_submit_chunk.btrfs_submit_bio
     19.56   4%      -7.0       12.60   7%  perf-profile.calltrace.cycles-pp.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
     19.18   4%      -6.9       12.32   7%  perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space
     19.32   4%      -6.9       12.46   7%  perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
     19.16   4%      -6.8       12.32   7%  perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction.btrfs_commit_transaction
     19.13   4%      -6.8       12.30   7%  perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_write_and_wait_transaction
     12.86   4%      -5.9        6.99   4%  perf-profile.calltrace.cycles-pp.check_leaf_item.__btrfs_check_leaf.btrfs_check_leaf.btree_csum_one_bio.btrfs_submit_chunk
     13.52   7%      -5.0        8.56   8%  perf-profile.calltrace.cycles-pp.common_startup_64
     10.90  12%      -4.7        6.19  12%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.common_startup_64
     10.90  12%      -4.7        6.20  12%  perf-profile.calltrace.cycles-pp.start_secondary.common_startup_64
     10.89  12%      -4.7        6.19  12%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.common_startup_64
     10.15   3%      -4.6        5.60  45%  perf-profile.calltrace.cycles-pp.lo_write_simple.loop_process_work.process_one_work.worker_thread.kthread
      9.98   3%      -4.5        5.52  45%  perf-profile.calltrace.cycles-pp.vfs_iter_write.lo_write_simple.loop_process_work.process_one_work.worker_thread
      9.86   3%      -4.4        5.46  44%  perf-profile.calltrace.cycles-pp.do_iter_readv_writev.vfs_iter_write.lo_write_simple.loop_process_work.process_one_work
      9.79   3%      -4.4        5.43  44%  perf-profile.calltrace.cycles-pp.btrfs_do_write_iter.do_iter_readv_writev.vfs_iter_write.lo_write_simple.loop_process_work
      9.70   3%      -4.3        5.36  44%  perf-profile.calltrace.cycles-pp.btrfs_buffered_write.btrfs_do_write_iter.do_iter_readv_writev.vfs_iter_write.lo_write_simple
      9.84  12%      -4.2        5.62  13%  perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.common_startup_64
      5.35   4%      -4.2        1.17  45%  perf-profile.calltrace.cycles-pp.btrfs_sync_log.btrfs_sync_file.loop_process_work.process_one_work.worker_thread
     11.34   8%      -4.2        7.19  11%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
      9.33  12%      -4.0        5.36  13%  perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
      4.93   4%      -3.9        1.00  44%  perf-profile.calltrace.cycles-pp.btrfs_write_marked_extents.btrfs_sync_log.btrfs_sync_file.loop_process_work.process_one_work
      4.92   4%      -3.9        1.00  44%  perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_sync_log.btrfs_sync_file.loop_process_work
     12.68   5%      -3.7        8.94   7%  perf-profile.calltrace.cycles-pp.btrfs_work_helper.process_one_work.worker_thread.kthread.ret_from_fork
      4.92   4%      -3.7        1.18   4%  perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_sync_log.btrfs_sync_file
      4.91   3%      -3.7        1.18   4%  perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.btrfs_write_marked_extents.btrfs_sync_log
     12.65   5%      -3.7        8.92   7%  perf-profile.calltrace.cycles-pp.btrfs_finish_one_ordered.btrfs_work_helper.process_one_work.worker_thread.kthread
      7.36   4%      -3.6        3.74  46%  perf-profile.calltrace.cycles-pp.start_ordered_ops.btrfs_sync_file.loop_process_work.process_one_work.worker_thread
      6.28   5%      -3.6        2.66  45%  perf-profile.calltrace.cycles-pp.btrfs_log_dentry_safe.btrfs_sync_file.loop_process_work.process_one_work.worker_thread
      6.27   5%      -3.6        2.66  45%  perf-profile.calltrace.cycles-pp.btrfs_log_inode_parent.btrfs_log_dentry_safe.btrfs_sync_file.loop_process_work.process_one_work
      6.27   5%      -3.6        2.66  45%  perf-profile.calltrace.cycles-pp.btrfs_log_inode.btrfs_log_inode_parent.btrfs_log_dentry_safe.btrfs_sync_file.loop_process_work
      7.27   4%      -3.5        3.72  46%  perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.start_ordered_ops.btrfs_sync_file.loop_process_work.process_one_work
      7.27   4%      -3.5        3.72  46%  perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.start_ordered_ops.btrfs_sync_file.loop_process_work
      8.76   4%      -3.3        5.44   4%  perf-profile.calltrace.cycles-pp.check_extent_item.check_leaf_item.__btrfs_check_leaf.btrfs_check_leaf.btree_csum_one_bio
      8.24   4%      -3.3        4.92  14%  perf-profile.calltrace.cycles-pp.extent_write_cache_pages.btrfs_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
      7.38   4%      -3.1        4.27  17%  perf-profile.calltrace.cycles-pp.__extent_writepage.extent_write_cache_pages.btrfs_writepages.do_writepages.filemap_fdatawrite_wbc
      7.27   4%      -3.0        4.26  15%  perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.start_ordered_ops.btrfs_sync_file
      7.26   4%      -3.0        4.26  15%  perf-profile.calltrace.cycles-pp.btrfs_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.start_ordered_ops
      6.00   4%      -3.0        3.04   6%  perf-profile.calltrace.cycles-pp.btrfs_log_changed_extents.btrfs_log_inode.btrfs_log_inode_parent.btrfs_log_dentry_safe.btrfs_sync_file
      5.81   5%      -2.9        2.89   6%  perf-profile.calltrace.cycles-pp.log_one_extent.btrfs_log_changed_extents.btrfs_log_inode.btrfs_log_inode_parent.btrfs_log_dentry_safe
      4.50   9%      -2.7        1.82  11%  perf-profile.calltrace.cycles-pp.alloc_reserved_tree_block.run_delayed_tree_ref.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs
      4.30  10%      -2.7        1.65  19%  perf-profile.calltrace.cycles-pp.btrfs_insert_empty_items.alloc_reserved_tree_block.run_delayed_tree_ref.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs
      6.97   6%      -2.6        4.39   8%  perf-profile.calltrace.cycles-pp.btrfs_start_dirty_block_groups.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
      2.75  10%      -2.3        0.41  71%  perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_insert_empty_items.alloc_reserved_tree_block.run_delayed_tree_ref.btrfs_run_delayed_refs_for_head
      8.18   7%      -2.0        6.14   9%  perf-profile.calltrace.cycles-pp.insert_reserved_file_extent.btrfs_finish_one_ordered.btrfs_work_helper.process_one_work.worker_thread
      2.48   7%      -2.0        0.49  45%  perf-profile.calltrace.cycles-pp.check_extent_data_item.check_leaf_item.__btrfs_check_leaf.btrfs_check_leaf.btree_csum_one_bio
      4.26   9%      -1.9        2.37  10%  perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
      4.24   9%      -1.9        2.36  10%  perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
      4.10   9%      -1.8        2.29  11%  perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction.flush_space
      3.72   7%      -1.7        2.00   8%  perf-profile.calltrace.cycles-pp.btrfs_get_32.__btrfs_check_leaf.btrfs_check_leaf.btree_csum_one_bio.btrfs_submit_chunk
      2.54   6%      -1.7        0.84  53%  perf-profile.calltrace.cycles-pp.btrfs_cow_block.btrfs_search_slot.lookup_inline_extent_backref.lookup_extent_backref.__btrfs_free_extent
      2.54   6%      -1.7        0.84  53%  perf-profile.calltrace.cycles-pp.btrfs_force_cow_block.btrfs_cow_block.btrfs_search_slot.lookup_inline_extent_backref.lookup_extent_backref
      7.29   4%      -1.7        5.60   9%  perf-profile.calltrace.cycles-pp.btrfs_drop_extents.insert_reserved_file_extent.btrfs_finish_one_ordered.btrfs_work_helper.process_one_work
      3.75   7%      -1.6        2.12  28%  perf-profile.calltrace.cycles-pp.__extent_writepage_io.__extent_writepage.extent_write_cache_pages.btrfs_writepages.do_writepages
      3.30   5%      -1.6        1.74   4%  perf-profile.calltrace.cycles-pp.btrfs_drop_extents.log_one_extent.btrfs_log_changed_extents.btrfs_log_inode.btrfs_log_inode_parent
      3.61   5%      -1.4        2.21   3%  perf-profile.calltrace.cycles-pp.btrfs_get_64.check_extent_item.check_leaf_item.__btrfs_check_leaf.btrfs_check_leaf
      3.09  11%      -1.4        1.74  28%  perf-profile.calltrace.cycles-pp.lookup_extent_backref.__btrfs_free_extent.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs
      3.08  11%      -1.4        1.73  28%  perf-profile.calltrace.cycles-pp.lookup_inline_extent_backref.lookup_extent_backref.__btrfs_free_extent.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs
      2.91   6%      -1.3        1.60  30%  perf-profile.calltrace.cycles-pp.btrfs_search_slot.lookup_inline_extent_backref.lookup_extent_backref.__btrfs_free_extent.btrfs_run_delayed_refs_for_head
      2.40  11%      -1.3        1.09   9%  perf-profile.calltrace.cycles-pp.log_extent_csums.log_one_extent.btrfs_log_changed_extents.btrfs_log_inode.btrfs_log_inode_parent
      1.83  17%      -1.3        0.53  73%  perf-profile.calltrace.cycles-pp.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
      1.82  18%      -1.3        0.53  73%  perf-profile.calltrace.cycles-pp.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call
      3.32  10%      -1.3        2.07  10%  perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_start_dirty_block_groups.btrfs_commit_transaction
      2.45   6%      -1.2        1.20  11%  perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
      3.43  10%      -1.2        2.19  10%  perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_start_dirty_block_groups.btrfs_commit_transaction.flush_space
      2.44   6%      -1.2        1.20  11%  perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
      3.43  10%      -1.2        2.19  10%  perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs.btrfs_start_dirty_block_groups.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
      1.57  30%      -1.2        0.34 100%  perf-profile.calltrace.cycles-pp.blk_complete_reqs.handle_softirqs.irq_exit_rcu.common_interrupt.asm_common_interrupt
      2.38   5%      -1.2        1.16   9%  perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.flush_space.btrfs_async_reclaim_metadata_space
      1.53  30%      -1.2        0.33 100%  perf-profile.calltrace.cycles-pp.scsi_end_request.scsi_io_completion.blk_complete_reqs.handle_softirqs.irq_exit_rcu
      1.53  30%      -1.2        0.33 100%  perf-profile.calltrace.cycles-pp.scsi_io_completion.blk_complete_reqs.handle_softirqs.irq_exit_rcu.common_interrupt
      2.85   2%      -1.2        1.68   8%  perf-profile.calltrace.cycles-pp.writepage_delalloc.__extent_writepage.extent_write_cache_pages.btrfs_writepages.do_writepages
      1.41  30%      -1.1        0.31 100%  perf-profile.calltrace.cycles-pp.blk_update_request.scsi_end_request.scsi_io_completion.blk_complete_reqs.handle_softirqs
      2.54   8%      -1.0        1.52  35%  perf-profile.calltrace.cycles-pp.submit_extent_page.__extent_writepage_io.__extent_writepage.extent_write_cache_pages.btrfs_writepages
      2.30   2%      -1.0        1.30   8%  perf-profile.calltrace.cycles-pp.cow_file_range.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages
      2.31   2%      -1.0        1.31   8%  perf-profile.calltrace.cycles-pp.btrfs_run_delalloc_range.writepage_delalloc.__extent_writepage.extent_write_cache_pages.btrfs_writepages
      2.24   9%      -1.0        1.25  49%  perf-profile.calltrace.cycles-pp.submit_one_bio.submit_extent_page.__extent_writepage_io.__extent_writepage.extent_write_cache_pages
      2.23   9%      -1.0        1.25  49%  perf-profile.calltrace.cycles-pp.btrfs_submit_bio.submit_one_bio.submit_extent_page.__extent_writepage_io.__extent_writepage
      2.23   9%      -1.0        1.25  49%  perf-profile.calltrace.cycles-pp.btrfs_submit_chunk.btrfs_submit_bio.submit_one_bio.submit_extent_page.__extent_writepage_io
      1.86   9%      -1.0        0.90   9%  perf-profile.calltrace.cycles-pp.btrfs_csum_file_blocks.log_csums.log_extent_csums.log_one_extent.btrfs_log_changed_extents
      1.88   9%      -1.0        0.91   9%  perf-profile.calltrace.cycles-pp.log_csums.log_extent_csums.log_one_extent.btrfs_log_changed_extents.btrfs_log_inode
      1.96  11%      -1.0        1.00   9%  perf-profile.calltrace.cycles-pp.__btrfs_free_extent.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction
      1.28  18%      -0.9        0.34 100%  perf-profile.calltrace.cycles-pp.handle_softirqs.irq_exit_rcu.common_interrupt.asm_common_interrupt.cpuidle_enter_state
      1.28  18%      -0.9        0.34 100%  perf-profile.calltrace.cycles-pp.irq_exit_rcu.common_interrupt.asm_common_interrupt.cpuidle_enter_state.cpuidle_enter
      5.08   7%      -0.9        4.16  12%  perf-profile.calltrace.cycles-pp.poll_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
      2.09   4%      -0.9        1.18  15%  perf-profile.calltrace.cycles-pp.btrfs_csum_file_blocks.btrfs_finish_one_ordered.btrfs_work_helper.process_one_work.worker_thread
      1.49  12%      -0.9        0.58  47%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
      2.43   5%      -0.9        1.54  10%  perf-profile.calltrace.cycles-pp.btrfs_get_32.check_extent_item.check_leaf_item.__btrfs_check_leaf.btrfs_check_leaf
      1.26  17%      -0.9        0.38  71%  perf-profile.calltrace.cycles-pp.btrfs_create_common.lookup_open.open_last_lookups.path_openat.do_filp_open
      1.52  19%      -0.9        0.66  21%  perf-profile.calltrace.cycles-pp.intel_idle_irq.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
      2.06  12%      -0.9        1.21  12%  perf-profile.calltrace.cycles-pp.run_delayed_tree_ref.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction
      1.39  15%      -0.8        0.55  47%  perf-profile.calltrace.cycles-pp.lookup_open.open_last_lookups.path_openat.do_filp_open.do_sys_openat2
      3.06   7%      -0.8        2.24  10%  perf-profile.calltrace.cycles-pp.btrfs_copy_from_user.btrfs_buffered_write.btrfs_do_write_iter.do_iter_readv_writev.vfs_iter_write
      3.04   7%      -0.8        2.23   9%  perf-profile.calltrace.cycles-pp.copy_page_from_iter_atomic.btrfs_copy_from_user.btrfs_buffered_write.btrfs_do_write_iter.do_iter_readv_writev
      1.35  14%      -0.7        0.62  12%  perf-profile.calltrace.cycles-pp.run_delayed_tree_ref.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_start_dirty_block_groups
      2.15   7%      -0.7        1.44   6%  perf-profile.calltrace.cycles-pp.btrfs_dirty_pages.btrfs_buffered_write.btrfs_do_write_iter.do_iter_readv_writev.vfs_iter_write
      2.09  11%      -0.7        1.39   7%  perf-profile.calltrace.cycles-pp.btrfs_write_out_cache.btrfs_start_dirty_block_groups.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
      2.06  10%      -0.7        1.37   6%  perf-profile.calltrace.cycles-pp.__btrfs_write_out_cache.btrfs_write_out_cache.btrfs_start_dirty_block_groups.btrfs_commit_transaction.flush_space
      1.75   8%      -0.7        1.08   9%  perf-profile.calltrace.cycles-pp.btrfs_del_items.btrfs_drop_extents.insert_reserved_file_extent.btrfs_finish_one_ordered.btrfs_work_helper
      1.84   7%      -0.6        1.23  11%  perf-profile.calltrace.cycles-pp.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      1.10  31%      -0.6        0.49  47%  perf-profile.calltrace.cycles-pp.setup_items_for_insert.btrfs_insert_empty_items.alloc_reserved_tree_block.run_delayed_tree_ref.btrfs_run_delayed_refs_for_head
      2.08  16%      -0.6        1.49  11%  perf-profile.calltrace.cycles-pp.open64
      2.06  16%      -0.6        1.48  11%  perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
      2.06  16%      -0.6        1.48  11%  perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.open64
      2.04  15%      -0.6        1.47  11%  perf-profile.calltrace.cycles-pp.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
      2.03  15%      -0.6        1.47  11%  perf-profile.calltrace.cycles-pp.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe.open64
      2.01  16%      -0.6        1.45  11%  perf-profile.calltrace.cycles-pp.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64.entry_SYSCALL_64_after_hwframe
      2.01  16%      -0.6        1.45  11%  perf-profile.calltrace.cycles-pp.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat.do_syscall_64
      1.22   7%      -0.5        0.67  10%  perf-profile.calltrace.cycles-pp.btrfs_del_items.btrfs_drop_extents.log_one_extent.btrfs_log_changed_extents.btrfs_log_inode
      1.95  16%      -0.5        1.41  12%  perf-profile.calltrace.cycles-pp.open_last_lookups.path_openat.do_filp_open.do_sys_openat2.__x64_sys_openat
      1.87  17%      -0.5        1.34   8%  perf-profile.calltrace.cycles-pp.__btrfs_free_extent.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_start_dirty_block_groups
      1.63  10%      -0.5        1.10  11%  perf-profile.calltrace.cycles-pp.blk_complete_reqs.handle_softirqs.run_ksoftirqd.smpboot_thread_fn.kthread
      0.92  10%      -0.5        0.39  71%  perf-profile.calltrace.cycles-pp.cache_save_setup.btrfs_start_dirty_block_groups.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
      0.81   9%      -0.5        0.28 100%  perf-profile.calltrace.cycles-pp.btrfs_comp_cpu_keys.__btrfs_check_leaf.btrfs_check_leaf.btree_csum_one_bio.btrfs_submit_chunk
      1.66  10%      -0.5        1.13  11%  perf-profile.calltrace.cycles-pp.handle_softirqs.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork
      1.66  10%      -0.5        1.14  11%  perf-profile.calltrace.cycles-pp.run_ksoftirqd.smpboot_thread_fn.kthread.ret_from_fork.ret_from_fork_asm
      1.54   9%      -0.5        1.05  12%  perf-profile.calltrace.cycles-pp.blk_mq_end_request.blk_complete_reqs.handle_softirqs.run_ksoftirqd.smpboot_thread_fn
      1.49  11%      -0.5        1.00  12%  perf-profile.calltrace.cycles-pp.blk_update_request.blk_mq_end_request.blk_complete_reqs.handle_softirqs.run_ksoftirqd
      1.10  15%      -0.4        0.66  14%  perf-profile.calltrace.cycles-pp.write_one_eb.submit_eb_page.btree_write_cache_pages.do_writepages.filemap_fdatawrite_wbc
      1.06  10%      -0.4        0.62  15%  perf-profile.calltrace.cycles-pp.btrfs_delalloc_reserve_metadata.btrfs_buffered_write.btrfs_do_write_iter.do_iter_readv_writev.vfs_iter_write
      1.24  10%      -0.4        0.84   9%  perf-profile.calltrace.cycles-pp.__btrfs_free_extent.btrfs_run_delayed_refs_for_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.flush_space
      0.67  12%      -0.4        0.28 100%  perf-profile.calltrace.cycles-pp.btrfs_set_extent_delalloc.btrfs_dirty_pages.btrfs_buffered_write.btrfs_do_write_iter.do_iter_readv_writev
      1.14  13%      -0.4        0.75   9%  perf-profile.calltrace.cycles-pp.btrfs_setup_item_for_insert.btrfs_drop_extents.log_one_extent.btrfs_log_changed_extents.btrfs_log_inode
      1.13  13%      -0.4        0.75   9%  perf-profile.calltrace.cycles-pp.setup_items_for_insert.btrfs_setup_item_for_insert.btrfs_drop_extents.log_one_extent.btrfs_log_changed_extents
      0.86  24%      -0.4        0.48  46%  perf-profile.calltrace.cycles-pp.btrfs_get_8.check_extent_item.check_leaf_item.__btrfs_check_leaf.btrfs_check_leaf
      1.00   9%      -0.3        0.68  21%  perf-profile.calltrace.cycles-pp.btrfs_fdatawrite_range.__btrfs_write_out_cache.btrfs_write_out_cache.btrfs_start_dirty_block_groups.btrfs_commit_transaction
      1.00   9%      -0.3        0.68  21%  perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.btrfs_fdatawrite_range.__btrfs_write_out_cache.btrfs_write_out_cache.btrfs_start_dirty_block_groups
      0.98  10%      -0.3        0.67  21%  perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.btrfs_fdatawrite_range.__btrfs_write_out_cache
      0.98  11%      -0.3        0.67  21%  perf-profile.calltrace.cycles-pp.btrfs_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.btrfs_fdatawrite_range
      0.00            +0.9        0.87  12%  perf-profile.calltrace.cycles-pp.btrfs_search_slot.btrfs_del_csums.cleanup_ref_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs
      0.00            +1.4        1.43   7%  perf-profile.calltrace.cycles-pp.btrfs_truncate_item.truncate_one_csum.btrfs_del_csums.cleanup_ref_head.__btrfs_run_delayed_refs
      0.00            +1.5        1.48   6%  perf-profile.calltrace.cycles-pp.truncate_one_csum.btrfs_del_csums.cleanup_ref_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs
     79.39            +1.9       81.32        perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
      0.00            +3.2        3.16   5%  perf-profile.calltrace.cycles-pp.btrfs_del_csums.cleanup_ref_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction
      0.00            +3.8        3.77   6%  perf-profile.calltrace.cycles-pp.cleanup_ref_head.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction.transaction_kthread
      0.00            +4.3        4.32   7%  perf-profile.calltrace.cycles-pp.__btrfs_run_delayed_refs.btrfs_run_delayed_refs.btrfs_commit_transaction.transaction_kthread.kthread
      0.00            +4.3        4.32   7%  perf-profile.calltrace.cycles-pp.btrfs_run_delayed_refs.btrfs_commit_transaction.transaction_kthread.kthread.ret_from_fork
      0.00            +5.2        5.22   8%  perf-profile.calltrace.cycles-pp.btrfs_commit_transaction.transaction_kthread.kthread.ret_from_fork.ret_from_fork_asm
      0.00            +5.2        5.22   8%  perf-profile.calltrace.cycles-pp.transaction_kthread.kthread.ret_from_fork.ret_from_fork_asm
     82.15            +6.1       88.22        perf-profile.calltrace.cycles-pp.kthread.ret_from_fork.ret_from_fork_asm
     82.15            +6.1       88.22        perf-profile.calltrace.cycles-pp.ret_from_fork.ret_from_fork_asm
     82.15            +6.1       88.22        perf-profile.calltrace.cycles-pp.ret_from_fork_asm
     36.06   3%     +19.8       55.84   3%  perf-profile.calltrace.cycles-pp.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread.ret_from_fork
     36.05   3%     +19.8       55.84   3%  perf-profile.calltrace.cycles-pp.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread.kthread
     33.17   3%     +21.3       54.45   3%  perf-profile.calltrace.cycles-pp.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work.worker_thread
      0.09 223%     +30.0       30.06   9%  perf-profile.calltrace.cycles-pp._find_next_zero_bit.steal_from_bitmap.__btrfs_add_free_space.unpin_extent_range.btrfs_finish_extent_commit
      0.82  22%     +33.0       33.84   9%  perf-profile.calltrace.cycles-pp.btrfs_finish_extent_commit.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space.process_one_work
      0.52  74%     +33.2       33.74   9%  perf-profile.calltrace.cycles-pp.unpin_extent_range.btrfs_finish_extent_commit.btrfs_commit_transaction.flush_space.btrfs_async_reclaim_metadata_space
      0.37 100%     +33.3       33.66   9%  perf-profile.calltrace.cycles-pp.__btrfs_add_free_space.unpin_extent_range.btrfs_finish_extent_commit.btrfs_commit_transaction.flush_space
      0.29 100%     +33.8       34.11   9%  perf-profile.calltrace.cycles-pp.steal_from_bitmap.__btrfs_add_free_space.unpin_extent_range.btrfs_finish_extent_commit.btrfs_commit_transaction
     29.41   2%     -16.0       13.44  45%  perf-profile.children.cycles-pp.loop_process_work
     32.80   2%     -13.9       18.92   8%  perf-profile.children.cycles-pp.__filemap_fdatawrite_range
     32.77   2%     -13.9       18.91   8%  perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
     32.72   2%     -13.8       18.87   8%  perf-profile.children.cycles-pp.do_writepages
     24.60   2%     -10.7       13.89   9%  perf-profile.children.cycles-pp.btrfs_submit_chunk
     24.60   2%     -10.7       13.90   9%  perf-profile.children.cycles-pp.btrfs_submit_bio
     24.25   3%     -10.6       13.68   6%  perf-profile.children.cycles-pp.btrfs_write_marked_extents
     24.03   3%     -10.5       13.50   6%  perf-profile.children.cycles-pp.btree_write_cache_pages
     23.73   3%     -10.5       13.27   6%  perf-profile.children.cycles-pp.submit_eb_page
     19.06   3%     -10.0        9.08   8%  perf-profile.children.cycles-pp.btrfs_sync_file
     21.55   3%      -9.4       12.11   6%  perf-profile.children.cycles-pp.btree_csum_one_bio
     20.06   3%      -8.9       11.19   4%  perf-profile.children.cycles-pp.__btrfs_check_leaf
     20.06   3%      -8.9       11.19   4%  perf-profile.children.cycles-pp.btrfs_check_leaf
     19.56   4%      -6.9       12.63   7%  perf-profile.children.cycles-pp.btrfs_write_and_wait_transaction
     12.97   4%      -5.9        7.07   4%  perf-profile.children.cycles-pp.check_leaf_item
     14.34   4%      -5.3        9.08   4%  perf-profile.children.cycles-pp.btrfs_search_slot
     13.52   7%      -5.0        8.56   8%  perf-profile.children.cycles-pp.common_startup_64
     13.52   7%      -5.0        8.56   8%  perf-profile.children.cycles-pp.cpu_startup_entry
     13.51   7%      -5.0        8.56   8%  perf-profile.children.cycles-pp.do_idle
      8.41   5%      -4.8        3.58  10%  perf-profile.children.cycles-pp.btrfs_cow_block
      8.40   5%      -4.8        3.58  10%  perf-profile.children.cycles-pp.btrfs_force_cow_block
     10.90  12%      -4.7        6.20  12%  perf-profile.children.cycles-pp.start_secondary
     10.15   3%      -4.6        5.60  45%  perf-profile.children.cycles-pp.lo_write_simple
     12.09   7%      -4.4        7.65  10%  perf-profile.children.cycles-pp.cpuidle_idle_call
     11.44   8%      -4.2        7.27  10%  perf-profile.children.cycles-pp.cpuidle_enter
     11.42   8%      -4.2        7.26  10%  perf-profile.children.cycles-pp.cpuidle_enter_state
      9.96   7%      -4.1        5.88   9%  perf-profile.children.cycles-pp.btrfs_run_delayed_refs_for_head
      5.35   4%      -4.0        1.40   5%  perf-profile.children.cycles-pp.btrfs_sync_log
     12.68   5%      -3.7        8.94   7%  perf-profile.children.cycles-pp.btrfs_work_helper
     12.65   5%      -3.7        8.92   7%  perf-profile.children.cycles-pp.btrfs_finish_one_ordered
      5.14   6%      -3.5        1.65   9%  perf-profile.children.cycles-pp.btrfs_alloc_tree_block
      9.08   5%      -3.4        5.67   4%  perf-profile.children.cycles-pp.check_extent_item
      9.99   3%      -3.3        6.66   4%  perf-profile.children.cycles-pp.vfs_iter_write
      8.67   4%      -3.3        5.36  13%  perf-profile.children.cycles-pp.extent_write_cache_pages
      8.67   4%      -3.3        5.36  13%  perf-profile.children.cycles-pp.btrfs_writepages
      7.59   4%      -3.3        4.28   6%  perf-profile.children.cycles-pp.btrfs_get_32
      9.86   3%      -3.3        6.59   4%  perf-profile.children.cycles-pp.do_iter_readv_writev
     10.64   4%      -3.2        7.39   6%  perf-profile.children.cycles-pp.btrfs_drop_extents
      9.72   3%      -3.2        6.48   4%  perf-profile.children.cycles-pp.btrfs_buffered_write
      9.79   3%      -3.2        6.56   4%  perf-profile.children.cycles-pp.btrfs_do_write_iter
      6.28   5%      -3.1        3.18   5%  perf-profile.children.cycles-pp.btrfs_log_dentry_safe
      6.27   5%      -3.1        3.18   5%  perf-profile.children.cycles-pp.btrfs_log_inode_parent
      6.27   5%      -3.1        3.18   5%  perf-profile.children.cycles-pp.btrfs_log_inode
      7.36   4%      -3.1        4.29  15%  perf-profile.children.cycles-pp.start_ordered_ops
      6.05   9%      -3.0        3.03  10%  perf-profile.children.cycles-pp.btrfs_insert_empty_items
      3.69   7%      -3.0        0.68  17%  perf-profile.children.cycles-pp.btrfs_reserve_extent
      7.74   4%      -3.0        4.78  14%  perf-profile.children.cycles-pp.__extent_writepage
      6.00   4%      -3.0        3.04   6%  perf-profile.children.cycles-pp.btrfs_log_changed_extents
      3.57   7%      -2.9        0.62  13%  perf-profile.children.cycles-pp.find_free_extent
      5.81   5%      -2.9        2.89   6%  perf-profile.children.cycles-pp.log_one_extent
      6.97   6%      -2.5        4.46   8%  perf-profile.children.cycles-pp.btrfs_start_dirty_block_groups
      5.24   5%      -2.5        2.75   4%  perf-profile.children.cycles-pp.btrfs_get_64
      4.58   9%      -2.5        2.12   9%  perf-profile.children.cycles-pp.alloc_reserved_tree_block
      4.58   9%      -2.5        2.13   9%  perf-profile.children.cycles-pp.run_delayed_tree_ref
      8.27   7%      -2.1        6.20   9%  perf-profile.children.cycles-pp.insert_reserved_file_extent
      2.53   7%      -1.9        0.59  10%  perf-profile.children.cycles-pp.check_extent_data_item
      4.20  14%      -1.9        2.26  13%  perf-profile.children.cycles-pp.handle_softirqs
      3.00  21%      -1.9        1.12  30%  perf-profile.children.cycles-pp.asm_common_interrupt
      3.96   4%      -1.9        2.08  11%  perf-profile.children.cycles-pp.btrfs_csum_file_blocks
      2.99  21%      -1.9        1.12  30%  perf-profile.children.cycles-pp.common_interrupt
      5.13   9%      -1.7        3.43   8%  perf-profile.children.cycles-pp.__btrfs_free_extent
      3.66  11%      -1.7        1.98  14%  perf-profile.children.cycles-pp.blk_complete_reqs
      4.22   7%      -1.6        2.62  23%  perf-profile.children.cycles-pp.__extent_writepage_io
      3.34  13%      -1.5        1.82  15%  perf-profile.children.cycles-pp.blk_update_request
      3.94   7%      -1.5        2.46  10%  perf-profile.children.cycles-pp._raw_spin_lock
      2.57  21%      -1.4        1.14  22%  perf-profile.children.cycles-pp.irq_exit_rcu
      4.62   4%      -1.4        3.22   9%  perf-profile.children.cycles-pp.setup_items_for_insert
      2.40  11%      -1.3        1.09   9%  perf-profile.children.cycles-pp.log_extent_csums
      3.38            -1.3        2.08   7%  perf-profile.children.cycles-pp.writepage_delalloc
      4.58   6%      -1.2        3.34   5%  perf-profile.children.cycles-pp.btrfs_del_items
      3.12  11%      -1.2        1.90  11%  perf-profile.children.cycles-pp.lookup_extent_backref
      3.12  11%      -1.2        1.90  11%  perf-profile.children.cycles-pp.lookup_inline_extent_backref
      2.88   6%      -1.1        1.77  29%  perf-profile.children.cycles-pp.submit_extent_page
      2.62   2%      -1.1        1.52   8%  perf-profile.children.cycles-pp.btrfs_run_delalloc_range
      2.00  12%      -1.1        0.91  16%  perf-profile.children.cycles-pp.intel_idle_irq
      1.85  20%      -1.1        0.78  31%  perf-profile.children.cycles-pp.scsi_end_request
      1.85  20%      -1.1        0.78  31%  perf-profile.children.cycles-pp.scsi_io_completion
      1.70  11%      -1.0        0.69   9%  perf-profile.children.cycles-pp.btrfs_lookup_csum
      2.30   3%      -1.0        1.30   8%  perf-profile.children.cycles-pp.cow_file_range
      2.50   7%      -1.0        1.50  32%  perf-profile.children.cycles-pp.submit_one_bio
      2.58  10%      -1.0        1.61  11%  perf-profile.children.cycles-pp.memcpy_orig
      1.88   9%      -1.0        0.91   9%  perf-profile.children.cycles-pp.log_csums
      5.10   7%      -0.9        4.18  11%  perf-profile.children.cycles-pp.poll_idle
      2.06   8%      -0.9        1.17  10%  perf-profile.children.cycles-pp.btrfs_init_new_buffer
      2.60   4%      -0.9        1.73  47%  perf-profile.children.cycles-pp.crc_pcl
      1.79   7%      -0.9        0.93  18%  perf-profile.children.cycles-pp.intel_idle
      1.90  19%      -0.9        1.04  23%  perf-profile.children.cycles-pp.__btrfs_bio_end_io
      3.49   6%      -0.8        2.66  11%  perf-profile.children.cycles-pp.btrfs_get_token_32
      2.80   9%      -0.8        1.97  14%  perf-profile.children.cycles-pp.btrfs_setup_item_for_insert
      3.06   7%      -0.8        2.24  10%  perf-profile.children.cycles-pp.btrfs_copy_from_user
      3.32  10%      -0.8        2.50   6%  perf-profile.children.cycles-pp.do_syscall_64
      3.32  10%      -0.8        2.50   6%  perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
      1.90  13%      -0.8        1.08  14%  perf-profile.children.cycles-pp.folio_end_writeback
      3.05   6%      -0.8        2.24  10%  perf-profile.children.cycles-pp.copy_page_from_iter_atomic
      1.74  18%      -0.8        0.94  23%  perf-profile.children.cycles-pp.end_bbio_data_write
      2.41   7%      -0.8        1.62   5%  perf-profile.children.cycles-pp.btrfs_dirty_pages
      1.50  12%      -0.8        0.73   8%  perf-profile.children.cycles-pp.btrfs_get_8
      1.39  15%      -0.8        0.63  18%  perf-profile.children.cycles-pp.lookup_open
      1.26  17%      -0.7        0.52  18%  perf-profile.children.cycles-pp.btrfs_create_common
      1.66   9%      -0.7        0.93   7%  perf-profile.children.cycles-pp.alloc_extent_buffer
      2.14   7%      -0.7        1.42   8%  perf-profile.children.cycles-pp.__clear_extent_bit
      1.88  11%      -0.7        1.16  12%  perf-profile.children.cycles-pp.copy_extent_buffer_full
      3.03   7%      -0.7        2.32   8%  perf-profile.children.cycles-pp.btrfs_write_out_cache
      0.87   7%      -0.7        0.16  24%  perf-profile.children.cycles-pp.btrfs_find_space_for_alloc
      3.02   6%      -0.7        2.32   8%  perf-profile.children.cycles-pp.__btrfs_write_out_cache
      1.31   7%      -0.7        0.61  28%  perf-profile.children.cycles-pp.btrfs_extend_item
      1.78   6%      -0.7        1.08  10%  perf-profile.children.cycles-pp.add_delayed_ref
      1.16  20%      -0.7        0.46  17%  perf-profile.children.cycles-pp.btrfs_create_new_inode
      1.45   8%      -0.7        0.76  16%  perf-profile.children.cycles-pp.write_one_eb
      1.48  11%      -0.7        0.82  18%  perf-profile.children.cycles-pp.__folio_end_writeback
      0.98  22%      -0.7        0.33  36%  perf-profile.children.cycles-pp.__common_interrupt
      0.98  22%      -0.7        0.33  36%  perf-profile.children.cycles-pp.handle_edge_irq
      0.98  22%      -0.6        0.33  36%  perf-profile.children.cycles-pp.handle_irq_event
      0.94  22%      -0.6        0.31  37%  perf-profile.children.cycles-pp.__handle_irq_event_percpu
      0.93  22%      -0.6        0.31  38%  perf-profile.children.cycles-pp.ahci_single_level_irq_intr
      1.84   7%      -0.6        1.23  11%  perf-profile.children.cycles-pp.smpboot_thread_fn
      2.15  15%      -0.6        1.56  10%  perf-profile.children.cycles-pp.__x64_sys_openat
      2.08  16%      -0.6        1.49  11%  perf-profile.children.cycles-pp.open64
      1.25  14%      -0.6        0.66  12%  perf-profile.children.cycles-pp.submit_bio_noacct_nocheck
      2.14  15%      -0.6        1.56  10%  perf-profile.children.cycles-pp.do_sys_openat2
      1.13  13%      -0.6        0.56  12%  perf-profile.children.cycles-pp.__folio_start_writeback
      1.21  15%      -0.6        0.64  11%  perf-profile.children.cycles-pp.__submit_bio
      1.67  12%      -0.6        1.10   8%  perf-profile.children.cycles-pp.kmem_cache_alloc_noprof
      2.10  16%      -0.6        1.54  10%  perf-profile.children.cycles-pp.do_filp_open
      2.10  16%      -0.6        1.54  10%  perf-profile.children.cycles-pp.path_openat
      2.12   9%      -0.6        1.55   2%  perf-profile.children.cycles-pp.__set_extent_bit
      1.12   9%      -0.6        0.56   7%  perf-profile.children.cycles-pp.attach_eb_folio_to_filemap
      1.23   5%      -0.6        0.67  12%  perf-profile.children.cycles-pp.btrfs_comp_cpu_keys
      1.71   8%      -0.6        1.16  12%  perf-profile.children.cycles-pp.blk_mq_end_request
      1.06  17%      -0.6        0.50  15%  perf-profile.children.cycles-pp.blk_mq_submit_bio
      1.26   9%      -0.6        0.71  12%  perf-profile.children.cycles-pp.filemap_dirty_folio
      1.96  16%      -0.5        1.43  12%  perf-profile.children.cycles-pp.open_last_lookups
      1.66  10%      -0.5        1.14  11%  perf-profile.children.cycles-pp.run_ksoftirqd
      0.99   9%      -0.5        0.48  13%  perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
      1.08   9%      -0.5        0.57  13%  perf-profile.children.cycles-pp.end_bbio_meta_write
      0.77  27%      -0.5        0.29  24%  perf-profile.children.cycles-pp.insert_with_overflow
      0.79  25%      -0.5        0.32  23%  perf-profile.children.cycles-pp.btrfs_add_link
      0.78  26%      -0.5        0.31  22%  perf-profile.children.cycles-pp.btrfs_insert_dir_item
      1.42   4%      -0.5        0.96  12%  perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
      0.69  26%      -0.4        0.25  36%  perf-profile.children.cycles-pp.ahci_handle_port_intr
      0.94   7%      -0.4        0.50  20%  perf-profile.children.cycles-pp.__folio_mark_dirty
      1.06  10%      -0.4        0.62  15%  perf-profile.children.cycles-pp.btrfs_delalloc_reserve_metadata
      1.31   9%      -0.4        0.87  46%  perf-profile.children.cycles-pp.crc32c_pcl_intel_update
      1.70   6%      -0.4        1.27   7%  perf-profile.children.cycles-pp.__schedule
      0.73  21%      -0.4        0.30  24%  perf-profile.children.cycles-pp.btrfs_finish_ordered_extent
      1.00   9%      -0.4        0.58  15%  perf-profile.children.cycles-pp.cache_save_setup
      1.06   8%      -0.4        0.64   9%  perf-profile.children.cycles-pp.__filemap_get_folio
      0.69  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.irq_work_run_list
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.__sysvec_irq_work
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp._printk
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.asm_sysvec_irq_work
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.irq_work_run
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.irq_work_single
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.sysvec_irq_work
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.vprintk_emit
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.console_flush_all
      0.68  35%      -0.4        0.28  71%  perf-profile.children.cycles-pp.console_unlock
      0.67  35%      -0.4        0.27  71%  perf-profile.children.cycles-pp.serial8250_console_write
      0.93  13%      -0.4        0.53  14%  perf-profile.children.cycles-pp.add_delayed_ref_head
      0.51  20%      -0.4        0.11  36%  perf-profile.children.cycles-pp.find_free_space
      1.21   8%      -0.4        0.81  13%  perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
      0.86  31%      -0.4        0.46   8%  perf-profile.children.cycles-pp.folio_clear_dirty_for_io
      0.92   3%      -0.4        0.52  12%  perf-profile.children.cycles-pp.set_extent_buffer_dirty
      0.88  10%      -0.4        0.48  14%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
      1.36   2%      -0.4        0.96   9%  perf-profile.children.cycles-pp.schedule
      0.65  36%      -0.4        0.26  71%  perf-profile.children.cycles-pp.wait_for_lsr
      1.24  14%      -0.4        0.85   8%  perf-profile.children.cycles-pp.btrfs_bin_search
      1.26  11%      -0.4        0.88  12%  perf-profile.children.cycles-pp.__lookup_extent_mapping
      0.50  36%      -0.4        0.12  42%  perf-profile.children.cycles-pp.btrfs_lookup_csums_list
      1.99   8%      -0.4        1.62  10%  perf-profile.children.cycles-pp.__memmove
      0.87  12%      -0.4        0.51   9%  perf-profile.children.cycles-pp.__reserve_bytes
      1.06   5%      -0.4        0.70   9%  perf-profile.children.cycles-pp.lock_extent
      0.80  15%      -0.4        0.45  11%  perf-profile.children.cycles-pp.btrfs_set_range_writeback
      0.85  22%      -0.4        0.50   8%  perf-profile.children.cycles-pp.alloc_extent_state
      1.42   8%      -0.3        1.08  11%  perf-profile.children.cycles-pp.kmem_cache_free
      0.76   8%      -0.3        0.43   8%  perf-profile.children.cycles-pp.flush_smp_call_function_queue
      0.70  10%      -0.3        0.38  12%  perf-profile.children.cycles-pp.down_read
      0.71  10%      -0.3        0.39   9%  perf-profile.children.cycles-pp.__flush_smp_call_function_queue
      0.53  18%      -0.3        0.21  29%  perf-profile.children.cycles-pp.queue_work_on
      1.44   7%      -0.3        1.13  13%  perf-profile.children.cycles-pp.btrfs_fdatawrite_range
      0.94  12%      -0.3        0.64  45%  perf-profile.children.cycles-pp.csum_tree_block
      0.97   7%      -0.3        0.66  14%  perf-profile.children.cycles-pp.clear_state_bit
      0.74  12%      -0.3        0.44  12%  perf-profile.children.cycles-pp.btrfs_reserve_metadata_bytes
      0.78   7%      -0.3        0.48  10%  perf-profile.children.cycles-pp.percpu_counter_add_batch
      0.70  12%      -0.3        0.42   6%  perf-profile.children.cycles-pp.filemap_add_folio
      0.54   5%      -0.3        0.26  21%  perf-profile.children.cycles-pp.folio_account_dirtied
      0.67   8%      -0.3        0.40  22%  perf-profile.children.cycles-pp.filemap_get_entry
      0.47  17%      -0.3        0.20  22%  perf-profile.children.cycles-pp.__blk_mq_sched_dispatch_requests
      0.47  17%      -0.3        0.20  22%  perf-profile.children.cycles-pp.blk_mq_sched_dispatch_requests
      0.80  13%      -0.3        0.53   9%  perf-profile.children.cycles-pp.pagecache_get_page
      0.96  16%      -0.3        0.69  12%  perf-profile.children.cycles-pp.btrfs_get_extent
      0.73  13%      -0.3        0.46  18%  perf-profile.children.cycles-pp.xas_load
      0.67   9%      -0.3        0.40  16%  perf-profile.children.cycles-pp.btrfs_truncate_free_space_cache
      0.56  10%      -0.3        0.29  21%  perf-profile.children.cycles-pp.update_block_group_item
      0.98   5%      -0.3        0.72   9%  perf-profile.children.cycles-pp.read_extent_buffer
      0.51   7%      -0.3        0.25  23%  perf-profile.children.cycles-pp.__btrfs_check_node
      0.51   7%      -0.3        0.25  23%  perf-profile.children.cycles-pp.btrfs_check_node
      0.46  23%      -0.3        0.21  23%  perf-profile.children.cycles-pp.__queue_work
      0.53  11%      -0.2        0.29  18%  perf-profile.children.cycles-pp.sched_ttwu_pending
      0.42  18%      -0.2        0.18  28%  perf-profile.children.cycles-pp.__blk_mq_do_dispatch_sched
      0.52  11%      -0.2        0.28  17%  perf-profile.children.cycles-pp.__cond_resched
      1.16  14%      -0.2        0.93   9%  perf-profile.children.cycles-pp.read_block_for_search
      0.40  19%      -0.2        0.17  31%  perf-profile.children.cycles-pp.__btrfs_run_delayed_items
      0.68   9%      -0.2        0.46  13%  perf-profile.children.cycles-pp.lock_and_cleanup_extent_if_need
      0.93  16%      -0.2        0.70  13%  perf-profile.children.cycles-pp.find_extent_buffer
      0.49   8%      -0.2        0.27  18%  perf-profile.children.cycles-pp.btrfs_write_check
      0.55  15%      -0.2        0.34  16%  perf-profile.children.cycles-pp.blk_mq_flush_plug_list
      0.44  17%      -0.2        0.24  15%  perf-profile.children.cycles-pp.calc_available_free_space
      0.74  19%      -0.2        0.53  17%  perf-profile.children.cycles-pp.find_extent_buffer_nolock
      0.35   8%      -0.2        0.14  24%  perf-profile.children.cycles-pp.up_read
      0.26  32%      -0.2        0.06  84%  perf-profile.children.cycles-pp.blk_mq_run_work_fn
      0.36  16%      -0.2        0.16  26%  perf-profile.children.cycles-pp.blk_mq_dispatch_rq_list
      0.43  13%      -0.2        0.23  17%  perf-profile.children.cycles-pp.__blk_flush_plug
      0.50  12%      -0.2        0.30  15%  perf-profile.children.cycles-pp.menu_select
      1.08  17%      -0.2        0.88   9%  perf-profile.children.cycles-pp.set_extent_bit
      0.32   6%      -0.2        0.13  15%  perf-profile.children.cycles-pp.btrfs_reduce_alloc_profile
      0.72   9%      -0.2        0.53   9%  perf-profile.children.cycles-pp.find_lock_delalloc_range
      0.53   9%      -0.2        0.34  18%  perf-profile.children.cycles-pp.truncate_pagecache
      0.52   9%      -0.2        0.34  18%  perf-profile.children.cycles-pp.truncate_inode_pages_range
      0.69  13%      -0.2        0.50  11%  perf-profile.children.cycles-pp.btrfs_set_extent_delalloc
      0.28  12%      -0.2        0.10  60%  perf-profile.children.cycles-pp.__rq_qos_throttle
      0.33  16%      -0.2        0.15  33%  perf-profile.children.cycles-pp.scsi_queue_rq
      0.36  19%      -0.2        0.18  22%  perf-profile.children.cycles-pp.__btrfs_wait_marked_extents
      0.47   9%      -0.2        0.29  21%  perf-profile.children.cycles-pp.btrfs_replace_extent_map_range
      0.56   4%      -0.2        0.38  15%  perf-profile.children.cycles-pp.check_dir_item
      0.54  12%      -0.2        0.37  31%  perf-profile.children.cycles-pp.btrfs_alloc_reserved_file_extent
      0.33  19%      -0.2        0.16  17%  perf-profile.children.cycles-pp.insert_state
      0.27  12%      -0.2        0.10  60%  perf-profile.children.cycles-pp.wbt_wait
      0.23  16%      -0.2        0.06  76%  perf-profile.children.cycles-pp.ahci_handle_port_interrupt
      0.32  11%      -0.2        0.15  14%  perf-profile.children.cycles-pp.io_schedule
      0.23  16%      -0.2        0.06  76%  perf-profile.children.cycles-pp.sata_async_notification
      0.22  19%      -0.2        0.06  76%  perf-profile.children.cycles-pp.ahci_scr_read
      0.33  10%      -0.2        0.16  18%  perf-profile.children.cycles-pp.file_remove_privs_flags
      0.28  42%      -0.2        0.12  26%  perf-profile.children.cycles-pp.ahci_qc_complete
      0.57   7%      -0.2        0.41  22%  perf-profile.children.cycles-pp.__slab_free
      0.26  22%      -0.2        0.10  20%  perf-profile.children.cycles-pp.btrfs_put_block_group
      0.36   9%      -0.2        0.20  20%  perf-profile.children.cycles-pp.__mod_lruvec_state
      0.28  19%      -0.2        0.12  17%  perf-profile.children.cycles-pp.btrfs_lookup_inode
      0.30  13%      -0.2        0.14  20%  perf-profile.children.cycles-pp.security_inode_need_killpriv
      0.26  20%      -0.2        0.10  32%  perf-profile.children.cycles-pp.btrfs_insert_delayed_item
      0.36  12%      -0.2        0.21  13%  perf-profile.children.cycles-pp.__mod_memcg_lruvec_state
      0.46  16%      -0.2        0.31  23%  perf-profile.children.cycles-pp.prepare_pages
      0.23  21%      -0.1        0.08  61%  perf-profile.children.cycles-pp.rq_qos_wait
      0.27   9%      -0.1        0.13  19%  perf-profile.children.cycles-pp.cap_inode_need_killpriv
      0.42  12%      -0.1        0.28  11%  perf-profile.children.cycles-pp.__filemap_add_folio
      0.31  22%      -0.1        0.17  26%  perf-profile.children.cycles-pp.blk_mq_dispatch_plug_list
      0.34  24%      -0.1        0.20  19%  perf-profile.children.cycles-pp.btrfs_leaf_free_space
      0.23  29%      -0.1        0.10   9%  perf-profile.children.cycles-pp.xas_clear_mark
      0.35  16%      -0.1        0.22  28%  perf-profile.children.cycles-pp.btrfs_drop_extent_map_range
      0.24  26%      -0.1        0.11   7%  perf-profile.children.cycles-pp.__wake_up
      0.26  11%      -0.1        0.13  20%  perf-profile.children.cycles-pp.__vfs_getxattr
      0.16  27%      -0.1        0.03 101%  perf-profile.children.cycles-pp.btrfs_wait_tree_log_extents
      0.34  15%      -0.1        0.21  25%  perf-profile.children.cycles-pp.btrfs_invalidate_folio
      0.32  24%      -0.1        0.19  17%  perf-profile.children.cycles-pp.need_preemptive_reclaim
      0.35  15%      -0.1        0.22  25%  perf-profile.children.cycles-pp.truncate_cleanup_folio
      0.26  26%      -0.1        0.14  31%  perf-profile.children.cycles-pp.rcu_core
      0.40  19%      -0.1        0.28  22%  perf-profile.children.cycles-pp.free_extent_state
      0.26  28%      -0.1        0.13  16%  perf-profile.children.cycles-pp.btrfs_can_overcommit
      0.39  13%      -0.1        0.27   6%  perf-profile.children.cycles-pp.ttwu_do_activate
      0.16  48%      -0.1        0.04 110%  perf-profile.children.cycles-pp.__irqentry_text_start
      0.30  12%      -0.1        0.19  19%  perf-profile.children.cycles-pp.__mod_node_page_state
      0.23  31%      -0.1        0.11  27%  perf-profile.children.cycles-pp.btrfs_calculate_inode_block_rsv_size
      0.22  74%      -0.1        0.10  21%  perf-profile.children.cycles-pp.push_leaf_right
      0.28  14%      -0.1        0.16  19%  perf-profile.children.cycles-pp.__xa_clear_mark
      0.31  18%      -0.1        0.20  15%  perf-profile.children.cycles-pp.btrfs_update_inode_item
      0.32  15%      -0.1        0.20  23%  perf-profile.children.cycles-pp.btrfs_update_block_group
      0.33  12%      -0.1        0.22  22%  perf-profile.children.cycles-pp.unpin_extent_cache
      0.34  12%      -0.1        0.23  21%  perf-profile.children.cycles-pp.io_ctl_prepare_pages
      0.34  13%      -0.1        0.24  12%  perf-profile.children.cycles-pp.activate_task
      0.22  26%      -0.1        0.12  32%  perf-profile.children.cycles-pp.btrfs_map_block
      0.16  33%      -0.1        0.06  78%  perf-profile.children.cycles-pp.blk_finish_plug
      0.29  14%      -0.1        0.19  11%  perf-profile.children.cycles-pp.enqueue_entity
      0.24  16%      -0.1        0.14  48%  perf-profile.children.cycles-pp.loop_queue_rq
      0.31   8%      -0.1        0.22  20%  perf-profile.children.cycles-pp.run_delalloc_nocow
      0.13  36%      -0.1        0.04 105%  perf-profile.children.cycles-pp.tick_nohz_get_sleep_length
      0.14  27%      -0.1        0.05  46%  perf-profile.children.cycles-pp.mod_zone_page_state
      0.19  20%      -0.1        0.10  47%  perf-profile.children.cycles-pp.btrfs_delalloc_release_extents
      0.17  23%      -0.1        0.08  51%  perf-profile.children.cycles-pp.__switch_to
      0.20  19%      -0.1        0.11  25%  perf-profile.children.cycles-pp.__btrfs_prealloc_file_range
      0.28  23%      -0.1        0.19  20%  perf-profile.children.cycles-pp.btrfs_block_rsv_release
      0.12  26%      -0.1        0.03 100%  perf-profile.children.cycles-pp.btrfs_update_root
      0.13  10%      -0.1        0.04 111%  perf-profile.children.cycles-pp.strcmp
      0.14  22%      -0.1        0.05  55%  perf-profile.children.cycles-pp.btrfs_get_chunk_map
      0.18  18%      -0.1        0.10  32%  perf-profile.children.cycles-pp.prepare_task_switch
      0.13  19%      -0.1        0.05  75%  perf-profile.children.cycles-pp.scsi_dispatch_cmd
      0.19  17%      -0.1        0.11  48%  perf-profile.children.cycles-pp.loop_queue_work
      0.31  16%      -0.1        0.23  16%  perf-profile.children.cycles-pp.lock_delalloc_pages
      0.19  19%      -0.1        0.11  23%  perf-profile.children.cycles-pp.extent_clear_unlock_delalloc
      0.14  25%      -0.1        0.06  52%  perf-profile.children.cycles-pp.___perf_sw_event
      0.24  17%      -0.1        0.16  20%  perf-profile.children.cycles-pp.blk_mq_plug_issue_direct
      0.23  16%      -0.1        0.16  19%  perf-profile.children.cycles-pp.__blk_mq_issue_directly
      0.19  29%      -0.1        0.12  25%  perf-profile.children.cycles-pp.rcu_do_batch
      0.12  20%      -0.1        0.04  75%  perf-profile.children.cycles-pp.ata_scsi_queuecmd
      0.22   5%      -0.1        0.14  23%  perf-profile.children.cycles-pp.__process_pages_contig
      0.12  41%      -0.1        0.04  77%  perf-profile.children.cycles-pp.__mem_cgroup_uncharge
      0.24  16%      -0.1        0.16  17%  perf-profile.children.cycles-pp.__switch_to_asm
      0.14  14%      -0.1        0.07  54%  perf-profile.children.cycles-pp.btrfs_xattr_handler_get_security
      0.10  27%      -0.1        0.03 100%  perf-profile.children.cycles-pp.__ata_scsi_queuecmd
      0.28  18%      -0.1        0.21  18%  perf-profile.children.cycles-pp.folios_put_refs
      0.11  20%      -0.1        0.04  77%  perf-profile.children.cycles-pp.__kmalloc_noprof
      0.09  19%      -0.1        0.03 100%  perf-profile.children.cycles-pp.sbitmap_find_bit
      0.24  11%      -0.1        0.18  13%  perf-profile.children.cycles-pp.sched_balance_find_src_group
      0.23  10%      -0.1        0.17  13%  perf-profile.children.cycles-pp.update_sd_lb_stats
      0.09  14%      -0.1        0.03 100%  perf-profile.children.cycles-pp.__blk_mq_end_request
      0.15  28%      -0.1        0.08  22%  perf-profile.children.cycles-pp.__btrfs_qgroup_release_data
      0.10  29%      -0.1        0.04 106%  perf-profile.children.cycles-pp.folio_mapping
      0.13  20%      -0.1        0.07  62%  perf-profile.children.cycles-pp.btrfs_wait_ordered_range
      0.14  18%      -0.1        0.08  21%  perf-profile.children.cycles-pp.btrfs_reserve_data_bytes
      0.10  10%      -0.1        0.05  77%  perf-profile.children.cycles-pp.__memcpy
      0.18  19%      -0.1        0.12  29%  perf-profile.children.cycles-pp.alloc_ordered_extent
      0.16  13%      -0.1        0.11  20%  perf-profile.children.cycles-pp.sched_clock_cpu
      0.22  13%      -0.0        0.18  12%  perf-profile.children.cycles-pp.vfs_read
      0.10  22%      -0.0        0.07  18%  perf-profile.children.cycles-pp.put_cpu_partial
      0.00            +0.1        0.08  34%  perf-profile.children.cycles-pp.btrfs_set_item_key_safe
      0.00            +0.1        0.10  36%  perf-profile.children.cycles-pp.btrfs_alloc_from_cluster
      0.14  29%      +0.1        0.27  13%  perf-profile.children.cycles-pp.memcpy_extent_buffer
      0.03 100%      +0.1        0.16  43%  perf-profile.children.cycles-pp.run_delayed_data_ref
      0.02 141%      +0.1        0.16  42%  perf-profile.children.cycles-pp.alloc_reserved_file_extent
      0.01 223%      +0.1        0.16  23%  perf-profile.children.cycles-pp.find_free_extent_clustered
      0.02 149%      +0.2        0.18  26%  perf-profile.children.cycles-pp.btrfs_select_ref_head
      0.08  75%      +0.2        0.32  18%  perf-profile.children.cycles-pp.pin_down_extent
      0.18  40%      +0.7        0.87  10%  perf-profile.children.cycles-pp.memmove_extent_buffer
      0.44  23%      +0.8        1.26   6%  perf-profile.children.cycles-pp.__write_extent_buffer
      0.00            +1.5        1.52   7%  perf-profile.children.cycles-pp.btrfs_truncate_item
      0.00            +1.5        1.54   7%  perf-profile.children.cycles-pp.truncate_one_csum
     79.40            +1.9       81.32        perf-profile.children.cycles-pp.process_one_work
      0.10  11%      +3.2        3.32   6%  perf-profile.children.cycles-pp.btrfs_del_csums
      0.17  26%      +3.8        3.93   7%  perf-profile.children.cycles-pp.cleanup_ref_head
      0.00            +5.2        5.22   8%  perf-profile.children.cycles-pp.transaction_kthread
     82.15            +6.1       88.22        perf-profile.children.cycles-pp.kthread
     82.16            +6.1       88.22        perf-profile.children.cycles-pp.ret_from_fork
     82.16            +6.1       88.22        perf-profile.children.cycles-pp.ret_from_fork_asm
     36.06   3%     +19.8       55.84   3%  perf-profile.children.cycles-pp.btrfs_async_reclaim_metadata_space
     36.05   3%     +19.8       55.84   3%  perf-profile.children.cycles-pp.flush_space
     33.17   3%     +26.5       59.67   3%  perf-profile.children.cycles-pp.btrfs_commit_transaction
      0.41  36%     +29.8       30.19   9%  perf-profile.children.cycles-pp._find_next_zero_bit
      0.82  22%     +33.8       34.61   9%  perf-profile.children.cycles-pp.btrfs_finish_extent_commit
      0.66  30%     +33.8       34.48   9%  perf-profile.children.cycles-pp.unpin_extent_range
      0.39  49%     +33.8       34.23   9%  perf-profile.children.cycles-pp.steal_from_bitmap
      0.56  31%     +33.8       34.41   9%  perf-profile.children.cycles-pp.__btrfs_add_free_space
      7.28   3%      -3.1        4.16   6%  perf-profile.self.cycles-pp.btrfs_get_32
      5.00   5%      -2.4        2.61   4%  perf-profile.self.cycles-pp.btrfs_get_64
      3.51   9%      -1.4        2.12  10%  perf-profile.self.cycles-pp._raw_spin_lock
      2.57   4%      -1.0        1.62  45%  perf-profile.self.cycles-pp.crc_pcl
      2.38   7%      -0.9        1.51   6%  perf-profile.self.cycles-pp.check_extent_item
      2.43   4%      -0.9        1.56  12%  perf-profile.self.cycles-pp.memcpy_orig
      1.79   7%      -0.9        0.93  18%  perf-profile.self.cycles-pp.intel_idle
      3.01   7%      -0.8        2.20  10%  perf-profile.self.cycles-pp.copy_page_from_iter_atomic
      1.44  12%      -0.7        0.70   8%  perf-profile.self.cycles-pp.btrfs_get_8
      3.26   4%      -0.7        2.58  11%  perf-profile.self.cycles-pp.btrfs_get_token_32
      1.20   6%      -0.5        0.65  12%  perf-profile.self.cycles-pp.btrfs_comp_cpu_keys
      1.11  11%      -0.4        0.67  13%  perf-profile.self.cycles-pp.__btrfs_check_leaf
      1.22  14%      -0.4        0.84   8%  perf-profile.self.cycles-pp.btrfs_bin_search
      1.24  10%      -0.4        0.87  13%  perf-profile.self.cycles-pp.__lookup_extent_mapping
      0.80  10%      -0.4        0.44  17%  perf-profile.self.cycles-pp._raw_spin_lock_irqsave
      1.97   9%      -0.4        1.62   9%  perf-profile.self.cycles-pp.__memmove
      0.75  15%      -0.3        0.41  17%  perf-profile.self.cycles-pp.add_delayed_ref_head
      0.79   6%      -0.3        0.48  21%  perf-profile.self.cycles-pp.intel_idle_irq
      0.92   5%      -0.3        0.61  10%  perf-profile.self.cycles-pp.read_extent_buffer
      0.71  14%      -0.3        0.42  22%  perf-profile.self.cycles-pp.check_leaf_item
      0.83   4%      -0.3        0.57  11%  perf-profile.self.cycles-pp.kmem_cache_alloc_noprof
      0.37  14%      -0.3        0.11  35%  perf-profile.self.cycles-pp.down_read
      0.68   7%      -0.3        0.43  12%  perf-profile.self.cycles-pp.percpu_counter_add_batch
      0.54  10%      -0.2        0.30  20%  perf-profile.self.cycles-pp.setup_items_for_insert
      0.62  13%      -0.2        0.38  16%  perf-profile.self.cycles-pp.xas_load
      0.30  17%      -0.2        0.09  22%  perf-profile.self.cycles-pp.up_read
      0.26  25%      -0.2        0.06  75%  perf-profile.self.cycles-pp.check_extent_data_item
      0.76  13%      -0.2        0.57  17%  perf-profile.self.cycles-pp.rwsem_spin_on_owner
      0.30  19%      -0.2        0.12  27%  perf-profile.self.cycles-pp.__lruvec_stat_mod_folio
      0.48  16%      -0.2        0.30  23%  perf-profile.self.cycles-pp.btrfs_del_items
      0.23  19%      -0.2        0.05  74%  perf-profile.self.cycles-pp.ahci_single_level_irq_intr
      0.39  15%      -0.2        0.22  25%  perf-profile.self.cycles-pp.filemap_get_entry
      0.22  19%      -0.2        0.06  76%  perf-profile.self.cycles-pp.ahci_scr_read
      0.58  12%      -0.2        0.41   8%  perf-profile.self.cycles-pp.__set_extent_bit
      0.26  23%      -0.2        0.10  21%  perf-profile.self.cycles-pp.btrfs_put_block_group
      0.34  14%      -0.2        0.18  31%  perf-profile.self.cycles-pp.__folio_end_writeback
      0.69  13%      -0.2        0.54  13%  perf-profile.self.cycles-pp.kmem_cache_free
      0.22  46%      -0.1        0.07  49%  perf-profile.self.cycles-pp.ahci_qc_complete
      0.28  12%      -0.1        0.14  34%  perf-profile.self.cycles-pp.__cond_resched
      0.29  22%      -0.1        0.15  15%  perf-profile.self.cycles-pp.__folio_start_writeback
      0.34  15%      -0.1        0.20  18%  perf-profile.self.cycles-pp.folio_end_writeback
      0.28  11%      -0.1        0.15  22%  perf-profile.self.cycles-pp.__mod_memcg_lruvec_state
      0.38  23%      -0.1        0.25  23%  perf-profile.self.cycles-pp._raw_spin_lock_irq
      0.19  15%      -0.1        0.06  49%  perf-profile.self.cycles-pp.btrfs_reduce_alloc_profile
      0.39  21%      -0.1        0.26  24%  perf-profile.self.cycles-pp.free_extent_state
      0.18  31%      -0.1        0.05  85%  perf-profile.self.cycles-pp.ahci_handle_port_intr
      0.22  27%      -0.1        0.09  11%  perf-profile.self.cycles-pp.xas_clear_mark
      0.34  15%      -0.1        0.22  17%  perf-profile.self.cycles-pp.menu_select
      0.30  12%      -0.1        0.18  21%  perf-profile.self.cycles-pp.__mod_node_page_state
      0.31  15%      -0.1        0.20  20%  perf-profile.self.cycles-pp.folio_clear_dirty_for_io
      0.13  27%      -0.1        0.03 101%  perf-profile.self.cycles-pp.mod_zone_page_state
      0.26  23%      -0.1        0.16  31%  perf-profile.self.cycles-pp.block_group_cache_tree_search
      0.19  18%      -0.1        0.10  33%  perf-profile.self.cycles-pp.__filemap_add_folio
      0.17  22%      -0.1        0.08  49%  perf-profile.self.cycles-pp.__switch_to
      0.13  10%      -0.1        0.04 115%  perf-profile.self.cycles-pp.strcmp
      0.27  25%      -0.1        0.18  18%  perf-profile.self.cycles-pp.find_extent_buffer_nolock
      0.12  21%      -0.1        0.03 111%  perf-profile.self.cycles-pp.btrfs_calculate_inode_block_rsv_size
      0.19  20%      -0.1        0.11  16%  perf-profile.self.cycles-pp.insert_state
      0.24  16%      -0.1        0.16  17%  perf-profile.self.cycles-pp.__switch_to_asm
      0.10  23%      -0.1        0.03 106%  perf-profile.self.cycles-pp.folio_account_dirtied
      0.14   7%      -0.1        0.07  52%  perf-profile.self.cycles-pp.lo_write_simple
      0.16  12%      -0.1        0.10  27%  perf-profile.self.cycles-pp.filemap_dirty_folio
      0.12  22%      -0.1        0.07  47%  perf-profile.self.cycles-pp.__filemap_get_folio
      0.09  17%      -0.1        0.03 103%  perf-profile.self.cycles-pp.__memcpy
      0.08  16%      -0.1        0.03 101%  perf-profile.self.cycles-pp.__flush_smp_call_function_queue
      0.10  21%      -0.0        0.07  18%  perf-profile.self.cycles-pp.put_cpu_partial
      0.09  20%      +0.1        0.14  15%  perf-profile.self.cycles-pp.__write_extent_buffer
      0.00            +0.2        0.16  29%  perf-profile.self.cycles-pp.btrfs_truncate_item
      0.05  74%      +4.0        4.05  10%  perf-profile.self.cycles-pp.steal_from_bitmap
      0.40  38%     +29.7       30.14   9%  perf-profile.self.cycles-pp._find_next_zero_bit





Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:35:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:35:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746285.1153280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfAu-0002wK-Td; Mon, 24 Jun 2024 08:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746285.1153280; Mon, 24 Jun 2024 08:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfAu-0002wD-QW; Mon, 24 Jun 2024 08:35:44 +0000
Received: by outflank-mailman (input) for mailman id 746285;
 Mon, 24 Jun 2024 08:35:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3xpo=N2=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sLfAt-0002w7-VV
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:35:43 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bdbf9d18-3204-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 10:35:42 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 7971768B05; Mon, 24 Jun 2024 10:35:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdbf9d18-3204-11ef-b4bb-af5377834399
Date: Mon, 24 Jun 2024 10:35:37 +0200
From: Christoph Hellwig <hch@lst.de>
To: kernel test robot <oliver.sang@intel.com>
Cc: Christoph Hellwig <hch@lst.de>, oe-lkp@lists.linux.dev, lkp@intel.com,
	Jens Axboe <axboe@kernel.dk>, Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>, linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
	nbd@other.debian.org, linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org,
	ying.huang@intel.com, feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [axboe-block:for-next] [block]  bd4a633b6f:
 fsmark.files_per_sec -64.5% regression
Message-ID: <20240624083537.GA19941@lst.de>
References: <202406241546.6bbd44a7-oliver.sang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <202406241546.6bbd44a7-oliver.sang@intel.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

This is odd, to say the least.  Any chance you can check the value
of /sys/block/$DEVICE/queue/rotational for the relevant device before
and after this commit?  And is this an ATA or NVMe SSD?



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:43:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:43:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746293.1153290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfHy-0005QZ-Kz; Mon, 24 Jun 2024 08:43:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746293.1153290; Mon, 24 Jun 2024 08:43:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfHy-0005QS-H3; Mon, 24 Jun 2024 08:43:02 +0000
Received: by outflank-mailman (input) for mailman id 746293;
 Mon, 24 Jun 2024 08:43:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gesJ=N2=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sLfHw-0005QK-RZ
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:43:00 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1fbf723-3205-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 10:42:58 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id 0775C4EE0738;
 Mon, 24 Jun 2024 10:42:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1fbf723-3205-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3] automation/eclair_analysis: deviate and|or|xor|not for MISRA C Rule 21.2
Date: Mon, 24 Jun 2024 10:42:43 +0200
Message-Id: <f21ea3734857e0cf26afff00befb179b10d02158.1719213594.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rule 21.2 reports identifiers reserved for the C and POSIX standard
libraries: 'or', 'and', 'not' and 'xor' are reserved identifiers because
they constitute alternate spellings for the corresponding operators (they
are defined as macros by iso646.h); however, Xen doesn't use standard
library headers, so there is no risk of overlap.

This addresses violations arising from x86_emulate/x86_emulate.c, where
labels named or, and and xor appear.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from v2:
Fixed patch contents as the changes from v1 and v2 were not squashed together.
---
Changes from v1:
Added deviation for 'not' identifier.
Added explanation of where these identifiers are defined, specifically in the
'iso646.h' file of the Standard Library.
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index 9fa9a7f01c..14c7afb39e 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -498,6 +498,12 @@ still remain available."
 -config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}
 -doc_end
 
+-doc_begin="or, and, not and xor are reserved identifiers because they constitute alternate
+spellings for the corresponding operators (they are defined as macros by iso646.h).
+However, Xen doesn't use standard library headers, so there is no risk of overlap."
+-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor|not)$)))"}
+-doc_end
+
 -doc_begin="Xen does not use the functions provided by the Standard Library, but
 implements a set of functions that share the same names as their Standard Library equivalent.
 The implementation of these functions is available in source form, so the undefined, unspecified
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:51:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:51:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746301.1153299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfQL-0007nb-D0; Mon, 24 Jun 2024 08:51:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746301.1153299; Mon, 24 Jun 2024 08:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfQL-0007nU-AJ; Mon, 24 Jun 2024 08:51:41 +0000
Received: by outflank-mailman (input) for mailman id 746301;
 Mon, 24 Jun 2024 08:51:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLfQJ-0007nO-S7
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:51:39 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f8229cc5-3206-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 10:51:38 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ec17eb4493so55113971fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:51:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7065129b9edsm5864862b3a.148.2024.06.24.01.51.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 01:51:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8229cc5-3206-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719219098; x=1719823898; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=wi867LupThgb/OSaWtfRwuG/utSAs7BJobFFwJezGqM=;
        b=Ijq8J7p7+Rxq2YFGtEnt1toC9Ds9phUv7+RPeeyq+MLyeERYKWGhABti6By/ILtE09
         EgNPUIU5uw7MLXOgaXBXnnbxaRekLreCApGw2ncff7f3dGIQxWSs1KeW9aZgfy8AkbQk
         bG5i3mr8s7hWyfSCvYXHZE67G30Ub2GsdN24GWEc9cRdS5kMpJWwKTW0Yqsruto+IgoJ
         af4vV+ToU/QVFPAO+ihJPFIHwdzPAfrIG2nJBjHJ6QYdz7Y3US+TD0ERb/fvggP34+78
         h00lO91ReJsLGMaty53prXuANkmFV7XSLMoXwjaztU2GtfNv7KrsKvtRX8t94XxTi+LA
         paPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719219098; x=1719823898;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=wi867LupThgb/OSaWtfRwuG/utSAs7BJobFFwJezGqM=;
        b=M4b5gd43HHVjaThUgORal4SZ0IkUdNONoouih4wk95n22STO/SHEXZnAsmwQOzBTMt
         rNduEQx8P9efvUR6ngiR5DtKmwDpiMob/0Y0RDIwPjdfG6WJHcICLn+YkdGfWhbrYUY0
         iNQmFcrVYlpJhNJFDN4qo3VhACDrbs7raB9u4jcg1xNuGkMdJ/PZwRJFq34zQBypbrIo
         1A6btQbK1rc0nl/CkczXm6Qp+fFa9hgBQf3+9T7JBb9v0DauugreHE//5Ig7P/j3aIax
         mS4W8yjithX/HfG21ZQgy+/6tIRWieduShjxwTJOQuwIIybLkgTu1dn4YUBXYXLTbDkD
         YV4Q==
X-Forwarded-Encrypted: i=1; AJvYcCVUHTIqYl7EsN+d1Sujzug1rKYA+nhw9cseYiNnxdrJEGWjzbqUJhh1N+jPnm9gB5rOn8SHyCCDcDvXO5NSDTPpnQHrHwMKcko9F5XWtPE=
X-Gm-Message-State: AOJu0YzD+rdOVot1Rxmy/9CLXtJq9Tv7wPKtRcZWzx3dFJOa1oTyZSVc
	GTNfpjXXq2QtFUGQzq5pGKWp3bFOkJVmYFPNuv6eHnL3yFB8QLsM4SrpKDrEFw==
X-Google-Smtp-Source: AGHT+IHM5btsh82TzUth4CA8j+2Sg93KpL/m0mmpW1Cnv762zXqycm6twnzmCq26SwjOAFHAB92BNQ==
X-Received: by 2002:a2e:95c8:0:b0:2ec:596c:b637 with SMTP id 38308e7fff4ca-2ec5b339e01mr31829361fa.49.1719219098104;
        Mon, 24 Jun 2024 01:51:38 -0700 (PDT)
Message-ID: <dfe1eaf1-35d9-4c42-a6de-ace577313559@suse.com>
Date: Mon, 24 Jun 2024 10:51:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v3 2/4] x86/shadow: Introduce sh_trace_gl1e_va()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240621180658.92831-1-andrew.cooper3@citrix.com>
 <20240621180658.92831-3-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240621180658.92831-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 20:06, Andrew Cooper wrote:
> trace_shadow_fixup() and trace_not_shadow_fault() both write out identical
> trace records.  Reimplement them in terms of a common sh_trace_gl1e_va().
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 08:52:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 08:52:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746305.1153310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfRI-0008Hi-NR; Mon, 24 Jun 2024 08:52:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746305.1153310; Mon, 24 Jun 2024 08:52:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfRI-0008Hb-Ja; Mon, 24 Jun 2024 08:52:40 +0000
Received: by outflank-mailman (input) for mailman id 746305;
 Mon, 24 Jun 2024 08:52:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLfRG-0008HP-Tw
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 08:52:38 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b8fdd05-3207-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 10:52:38 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec002caeb3so52218141fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 01:52:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-70662adcf3dsm4272802b3a.54.2024.06.24.01.52.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 01:52:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b8fdd05-3207-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719219158; x=1719823958; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=kNbPHfaG9z9iBWickqMzV1VuxOuq4Or2qExg/W00VFg=;
        b=EJq4fTYUrtiSbLxoeTNdh7RxmLVC8dAvqZKUhnEOWf15QKSiRI6eoG/pMK/Z+DMece
         kgAA1uJ+PBin40vBxydsOzQjJkqnfeCyDplaQWrG12nidxeZqQwJIE+NJyr7n+CgZWXJ
         q+uG0+VTHLCbQBs8N6PVt6O/gQnbElcSHfYubSIfWJGT4fNNYtKcz/VfWotE5Y/FYVPt
         OGosiXg+Qe+JlnmLthxxRQypNjeLQArW3AJHqn3TgAMM735F7apyVC0ZNDolru6OQYkS
         TJz4Sl83Dt8karhA3bHp/cNrxbHM0blPV4jHJuA9vxtOCFfWlkIo+7VHa4jEnW4ykGld
         auPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719219158; x=1719823958;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kNbPHfaG9z9iBWickqMzV1VuxOuq4Or2qExg/W00VFg=;
        b=Ftgkt9zCAlYuwsP3UvwfjkfQZrnf3HhWdfI2ZW/9SqR6si2M/p4JpVPOo4GMQdsCjR
         UbBPPSVP1X2fnU03QDSNNQhfN/KYSlKjcd9n5S9eBFpGQ7iLtWabc0XzR0gcDXsRbVrH
         ZO0QOg7pZg3Ubs2ZSC8e0QnHhG05myFbxgED765Jw1ViOrY3BoQFOOFIk1EN5DPumHyd
         skV86KUyzH/jgnvmj6aYIhrXUTtwqLWMgKO4mm8kk6Jc16c5HIYH2Q7zN7eC9pfHbfU/
         C6mj2uJuJxzVnQWYrEhoNyBu1bU5xq0jpvrbRhE0Xz9YN9uHqcLo58zsLrOQBKrFp6e+
         nzVw==
X-Forwarded-Encrypted: i=1; AJvYcCXVow1nq9BS4ZjBa8B9FPQCkF09hGY3eXLnP5OEyv7dk1ZcfysMppkOvnyMWWB0mm/WmNvjijaPLR/NHDuqhszGKRjYdFP79Rt93dDHMCI=
X-Gm-Message-State: AOJu0YxEFmG3CBxdaIw3fUOUf7IiR37RA46rM9j13MHVoer3mVURD6SK
	kA1Ix3dJNAvBnj2s+Yq9Z4W8KJGZXIF2r3DrP0eJV0ZqLTW95AKbU6OXpNU67w==
X-Google-Smtp-Source: AGHT+IHcNQ5FEKF6wIoFzM8zL+N6mRkMpbuEPZMXZ7uqpburqZBgirvdQXyHcqwf7VWyVNj2DIy0Ig==
X-Received: by 2002:a2e:9bca:0:b0:2eb:e634:fee6 with SMTP id 38308e7fff4ca-2ec5797a56emr33205001fa.18.1719219157643;
        Mon, 24 Jun 2024 01:52:37 -0700 (PDT)
Message-ID: <0ca31fc8-fb11-4a13-99c4-2cc77fdf0886@suse.com>
Date: Mon, 24 Jun 2024 10:52:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v3 3/4] x86/shadow: Rework
 trace_shadow_emulate_other() as sh_trace_gfn_va()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240621180658.92831-1-andrew.cooper3@citrix.com>
 <20240621180658.92831-4-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240621180658.92831-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 20:06, Andrew Cooper wrote:
> sh_trace_gfn_va() is very similar to sh_trace_gl1e_va(), and a rather shorter
> name than trace_shadow_emulate_other().
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:01:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:01:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746313.1153319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfZN-0002NH-FS; Mon, 24 Jun 2024 09:01:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746313.1153319; Mon, 24 Jun 2024 09:01:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfZN-0002NA-Cq; Mon, 24 Jun 2024 09:01:01 +0000
Received: by outflank-mailman (input) for mailman id 746313;
 Mon, 24 Jun 2024 09:01:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLfZM-0002N4-3n
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:01:00 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4593d64b-3208-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:00:58 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ebed33cb65so45011091fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 02:00:58 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 41be03b00d2f7-716b4a7161asm5071197a12.46.2024.06.24.02.00.52
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 02:00:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4593d64b-3208-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719219657; x=1719824457; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=dT7YZDQBhpR14FjOxs9ZZkWGKMt37dkRONImkVu4yUg=;
        b=QYkj081QMkfPCIFbAYenhwUL+sIz3EWsQ+SCGPk5+MAc1Nt89cOzMuyaKi/7kWGupl
         wMj6uN+QXk9vGnvcuSliQC8YFSWfRtP4Df9RazDjVAk7vvlRHSm9mcdHvFL4S7s6AnQ3
         Ejb4fRAjVjQeYF2wsZPMAqDtGhOjIYOYUcLYuDdnveMRfnAXY2rG+yuP9bagaIr/vGla
         KQoAd+TzRhEqjiAmoB2QhrXHFy0Qfy/ZCIrZQZBminMYDCWFx3rNg3J5oTQLdzLZeqJa
         rrzU7NE0IL0fXeSL1VEgCHiA/K9L7CY4+kuoxMEMn/euXGsRCqMOc+NhpwtRhrzI013y
         BDAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719219657; x=1719824457;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=dT7YZDQBhpR14FjOxs9ZZkWGKMt37dkRONImkVu4yUg=;
        b=xI89fSvdjOQFuODHUT/LUqcWfa5JonfkNazdCEhBT0PLarqcsEqInJRDjse6b2vSa+
         z5hSv6Ye6VT2iEgMtxY76ywKJkVk+RAnKcX0ITkU6z/08B3VI40/FEueiRclhkKlV1Fq
         nWMik8WvV4fY/Bk93+GMhN8R6HdVXgYEYkHQ509VBsRAbY9Zo9Bszxab3yxRLy8cUj+V
         OO3ZVxNP+YzJO5SHGbAjssGiWwMJWUCvWykqCcHEvSeDptfx25M+xHq5pfD+wC0uLJpx
         GD/EeYy1C6PX2eR+4xSEjfm1+q0ycxIbrCddbjhZPg7bgxppzlCjDayxwCTJJ8bVLOTT
         RgZA==
X-Forwarded-Encrypted: i=1; AJvYcCXceXBo8UOzEJTni+sM6bEvWHsgQWlBOTeV8XBKIRcQP3JuVZoiOf8oVx2B4Hf8t/cg5r3PJoGV/cDEBfVdTvqjJITTq3m87eB2JmBSLh8=
X-Gm-Message-State: AOJu0YzF1oOIwsj4ziEZBDYiqz5Yc4dxKKExPjY7EDG/1Vz6j93ov13h
	U480TLhWqL1IC4KrllbZSvpNqhi8XfCdAavTemj6Dx25MB9HZzafR8aYd3Yikw==
X-Google-Smtp-Source: AGHT+IGfq194fgdZbAjcN22LuJxAdAQsGOlch951MnmWwZDkYJApzFx41F0sQjDuFO+iFgbpb+xKfQ==
X-Received: by 2002:a2e:9e09:0:b0:2ec:4ec3:9bee with SMTP id 38308e7fff4ca-2ec5931d8d9mr34268251fa.12.1719219657576;
        Mon, 24 Jun 2024 02:00:57 -0700 (PDT)
Message-ID: <d3856651-f5a6-4c96-8afe-336af2a60231@suse.com>
Date: Mon, 24 Jun 2024 11:00:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC XEN PATCH] x86/mctelem: address violations of MISRA C: 2012
 Rule 5.3
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <79eb2f12e521f96a53dd166eb7db485bb3d9d067.1718962824.git.nicola.vetrini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <79eb2f12e521f96a53dd166eb7db485bb3d9d067.1718962824.git.nicola.vetrini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 11:50, Nicola Vetrini wrote:
> From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> 
> This addresses violations of MISRA C:2012 Rule 5.3, which states the
> following: An identifier declared in an inner scope shall not hide an
> identifier declared in an outer scope. In this case the shadowing is between
> local variables "mctctl" and the file-scope static struct variable with the
> same name.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> ---
> RFC because I'm not 100% sure the semantics of the code is preserved.
> I think so, and it passes gitlab pipelines [1], but there may be some missing
> information.

Details as to your concerns would help. I see no issue, not even a concern.

> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
> @@ -168,14 +168,14 @@ static void mctelem_xchg_head(struct mctelem_ent **headp,
>  void mctelem_defer(mctelem_cookie_t cookie, bool lmce)
>  {
>  	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
> -	struct mc_telem_cpu_ctl *mctctl = &this_cpu(mctctl);
> +	struct mc_telem_cpu_ctl *mctctl_cpu = &this_cpu(mctctl);

When possible (i.e. without loss of meaning) I'd generally prefer names to
be shortened. Wouldn't just "ctl" work here?

> -	ASSERT(mctctl->pending == NULL || mctctl->lmce_pending == NULL);
> +	ASSERT(mctctl_cpu->pending == NULL || mctctl_cpu->lmce_pending == NULL);
>  
> -	if (mctctl->pending)
> -		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
> +	if (mctctl_cpu->pending)
> +		mctelem_xchg_head(&mctctl_cpu->pending, &tep->mcte_next, tep);
>  	else if (lmce)
> -		mctelem_xchg_head(&mctctl->lmce_pending, &tep->mcte_next, tep);
> +		mctelem_xchg_head(&mctctl_cpu->lmce_pending, &tep->mcte_next, tep);
>  	else {
>  		/*
>  		 * LMCE is supported on Skylake-server and later CPUs, on
> @@ -186,10 +186,10 @@ void mctelem_defer(mctelem_cookie_t cookie, bool lmce)
>  		 * moment. As a result, the following two exchanges together
>  		 * can be treated as atomic.
>  		 */

In the middle of this comment the variable is also mentioned, and hence
also wants adjusting (twice).

> -		if (mctctl->lmce_pending)
> -			mctelem_xchg_head(&mctctl->lmce_pending,
> -					  &mctctl->pending, NULL);
> -		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
> +		if (mctctl_cpu->lmce_pending)
> +			mctelem_xchg_head(&mctctl_cpu->lmce_pending,
> +					  &mctctl_cpu->pending, NULL);
> +		mctelem_xchg_head(&mctctl_cpu->pending, &tep->mcte_next, tep);
>  	}
>  }
>  
> @@ -213,7 +213,7 @@ void mctelem_process_deferred(unsigned int cpu,
>  {
>  	struct mctelem_ent *tep;
>  	struct mctelem_ent *head, *prev;
> -	struct mc_telem_cpu_ctl *mctctl = &per_cpu(mctctl, cpu);
> +	struct mc_telem_cpu_ctl *mctctl_cpu = &per_cpu(mctctl, cpu);
>  	int ret;
>  
>  	/*
> @@ -232,7 +232,7 @@ void mctelem_process_deferred(unsigned int cpu,
>  	 * Any MC# occurring after the following atomic exchange will be
>  	 * handled by another round of MCE softirq.
>  	 */
> -	mctelem_xchg_head(lmce ? &mctctl->lmce_pending : &mctctl->pending,
> +	mctelem_xchg_head(lmce ? &mctctl_cpu->lmce_pending : &mctctl_cpu->pending,
>  			  &this_cpu(mctctl.processing), NULL);

By shortening the variable name here you'd also avoid going past line
length limits.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746319.1153330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd2-0002x9-VF; Mon, 24 Jun 2024 09:04:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746319.1153330; Mon, 24 Jun 2024 09:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd2-0002x2-RM; Mon, 24 Jun 2024 09:04:48 +0000
Received: by outflank-mailman (input) for mailman id 746319;
 Mon, 24 Jun 2024 09:04:47 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd1-0002wq-Np
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:47 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cdad74b6-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:46 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 55E264EE0738;
 Mon, 24 Jun 2024 11:04:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdad74b6-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 00/13] x86: address some violations of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:24 +0200
Message-Id: <cover.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch series fixes a missing escape in a deviation and addresses some
violations.

Federico Serafini (13):
  automation/eclair: fix deviation of MISRA C Rule 16.3
  x86/cpuid: use fallthrough pseudo keyword
  x86/domctl: address a violation of MISRA C Rule 16.3
  x86/vpmu: address violations of MISRA C Rule 16.3
  x86/traps: address violations of MISRA C Rule 16.3
  x86/mce: address violations of MISRA C Rule 16.3
  x86/hvm: address violations of MISRA C Rule 16.3
  x86/vpt: address a violation of MISRA C Rule 16.3
  x86/mm: add defensive return
  x86/mpparse: address a violation of MISRA C Rule 16.3
  x86/pmtimer: address a violation of MISRA C Rule 16.3
  x86/vPIC: address a violation of MISRA C Rule 16.3
  x86/vlapic: address a violation of MISRA C Rule 16.3

 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 xen/arch/x86/cpu/mcheck/mce_amd.c                | 1 +
 xen/arch/x86/cpu/mcheck/mce_intel.c              | 2 ++
 xen/arch/x86/cpu/vpmu.c                          | 3 +++
 xen/arch/x86/cpu/vpmu_intel.c                    | 1 +
 xen/arch/x86/cpuid.c                             | 3 +--
 xen/arch/x86/domctl.c                            | 1 +
 xen/arch/x86/hvm/emulate.c                       | 9 ++++++---
 xen/arch/x86/hvm/hvm.c                           | 6 ++++++
 xen/arch/x86/hvm/hypercall.c                     | 1 +
 xen/arch/x86/hvm/irq.c                           | 1 +
 xen/arch/x86/hvm/pmtimer.c                       | 1 +
 xen/arch/x86/hvm/vlapic.c                        | 1 +
 xen/arch/x86/hvm/vpic.c                          | 1 +
 xen/arch/x86/hvm/vpt.c                           | 2 ++
 xen/arch/x86/mm.c                                | 1 +
 xen/arch/x86/mpparse.c                           | 1 +
 xen/arch/x86/traps.c                             | 3 +++
 18 files changed, 34 insertions(+), 6 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746320.1153339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd4-0003C4-7z; Mon, 24 Jun 2024 09:04:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746320.1153339; Mon, 24 Jun 2024 09:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd4-0003Bx-5L; Mon, 24 Jun 2024 09:04:50 +0000
Received: by outflank-mailman (input) for mailman id 746320;
 Mon, 24 Jun 2024 09:04:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd2-0002wq-Fy
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:48 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ce91ff99-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:48 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 2647F4EE0755;
 Mon, 24 Jun 2024 11:04:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce91ff99-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 02/13] x86/cpuid: use fallthrough pseudo keyword
Date: Mon, 24 Jun 2024 11:04:26 +0200
Message-Id: <58f1ff7e94fd2bd5290a555e44d9de0d2f515eda.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current comment making the fallthrough intention explicit does not
follow the agreed syntax: replace it with the pseudo keyword fallthrough.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpuid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index a822e80c7e..2a777436ee 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -97,9 +97,8 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         if ( is_viridian_domain(d) )
             return cpuid_viridian_leaves(v, leaf, subleaf, res);
 
+        fallthrough;
         /*
-         * Fallthrough.
-         *
          * Intel reserve up until 0x4fffffff for hypervisor use.  AMD reserve
          * only until 0x400000ff, but we already use double that.
          */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746321.1153345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd4-0003Fd-J5; Mon, 24 Jun 2024 09:04:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746321.1153345; Mon, 24 Jun 2024 09:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd4-0003Eh-CV; Mon, 24 Jun 2024 09:04:50 +0000
Received: by outflank-mailman (input) for mailman id 746321;
 Mon, 24 Jun 2024 09:04:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd3-0002x1-2d
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:49 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ce134d4e-3208-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:04:47 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 4D5F64EE0754;
 Mon, 24 Jun 2024 11:04:46 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce134d4e-3208-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [XEN PATCH v2 01/13] automation/eclair: fix deviation of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:25 +0200
Message-Id: <c43a32405cc949ef5bf26a2ca1d1cc7ee7f5e664.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Escape the final dot of the comment and extend the search for a
fallthrough comment to up to 2 lines after the last statement.

Fixes: a128d8da913b21eff6c6d2e2a7d4c54c054b78db "automation/eclair: add deviations for MISRA C:2012 Rule 16.3"
Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v2:
- instead of introducing the hyphenated fallthrough, insert the missing escape.
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index b8f9155267..9df3e0f0c4 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -400,7 +400,7 @@ safe."
 -doc_end
 
 -doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
--config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
+-config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through\\.? \\*/.*$,0..2))))"}
 -doc_end
 
 -doc_begin="Switch statements having a controlling expression of enum type deliberately do not have a default case: gcc -Wall enables -Wswitch which warns (and breaks the build as we use -Werror) if one of the enum labels is missing from the switch."
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746322.1153360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd5-0003eg-Q7; Mon, 24 Jun 2024 09:04:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746322.1153360; Mon, 24 Jun 2024 09:04:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd5-0003ds-KX; Mon, 24 Jun 2024 09:04:51 +0000
Received: by outflank-mailman (input) for mailman id 746322;
 Mon, 24 Jun 2024 09:04:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd3-0002wq-Dg
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:49 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf0ae985-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:48 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id EED224EE073D;
 Mon, 24 Jun 2024 11:04:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf0ae985-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 03/13] x86/domctl: address a violation of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:27 +0200
Message-Id: <d46b484c99f858d7bfd10c6956a88ba46ac60815.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of
MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/domctl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9190e11faa..68b5b46d1a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -517,6 +517,7 @@ long arch_do_domctl(
 
         default:
             ret = -ENOSYS;
+            break;
         }
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746323.1153365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd6-0003jX-4M; Mon, 24 Jun 2024 09:04:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746323.1153365; Mon, 24 Jun 2024 09:04:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd5-0003hU-VB; Mon, 24 Jun 2024 09:04:51 +0000
Received: by outflank-mailman (input) for mailman id 746323;
 Mon, 24 Jun 2024 09:04:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd4-0002wq-Dg
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:50 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cf79388b-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:49 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id B98CD4EE0755;
 Mon, 24 Jun 2024 11:04:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf79388b-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 04/13] x86/vpmu: address violations of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:28 +0200
Message-Id: <c45b27a08a1608de85e4bbae80763f8429d40ad5.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of MISRA C Rule
16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpu/vpmu.c       | 3 +++
 xen/arch/x86/cpu/vpmu_intel.c | 1 +
 2 files changed, 4 insertions(+)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index a7bc0cd1fc..b2ba999412 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -663,6 +663,8 @@ long do_xenpmu_op(
 
         if ( pmu_params.version.maj != XENPMU_VER_MAJ )
             return -EINVAL;
+
+        break;
     }
 
     switch ( op )
@@ -776,6 +778,7 @@ long do_xenpmu_op(
 
     default:
         ret = -EINVAL;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index cd414165df..46f3ff86e7 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -713,6 +713,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             break;
         default:
             rdmsrl(msr, *msr_content);
+            break;
         }
     }
     else if ( msr == MSR_IA32_MISC_ENABLE )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746324.1153379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd7-000491-Dj; Mon, 24 Jun 2024 09:04:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746324.1153379; Mon, 24 Jun 2024 09:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd7-00048H-9v; Mon, 24 Jun 2024 09:04:53 +0000
Received: by outflank-mailman (input) for mailman id 746324;
 Mon, 24 Jun 2024 09:04:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd5-0002x1-MT
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:51 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cfe32dd2-3208-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:04:50 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 79AE24EE073D;
 Mon, 24 Jun 2024 11:04:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfe32dd2-3208-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:29 +0200
Message-Id: <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a break statement or the pseudo keyword fallthrough to address
violations of MISRA C Rule 16.3: "An unconditional `break' statement
shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/traps.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 9906e874d5..cbcec3fafb 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 }
 
@@ -1748,6 +1749,7 @@ static void io_check_error(const struct cpu_user_regs *regs)
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_io_error);
+        fallthrough;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
@@ -1768,6 +1770,7 @@ static void unknown_nmi_error(const struct cpu_user_regs *regs,
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_unknown);
+        fallthrough;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746325.1153386 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd8-0004H8-0M; Mon, 24 Jun 2024 09:04:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746325.1153386; Mon, 24 Jun 2024 09:04:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd7-0004EF-Mg; Mon, 24 Jun 2024 09:04:53 +0000
Received: by outflank-mailman (input) for mailman id 746325;
 Mon, 24 Jun 2024 09:04:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd6-0002wq-BE
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:52 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d0d1ffca-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:51 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id EA6274EE0755;
 Mon, 24 Jun 2024 11:04:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0d1ffca-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 07/13] x86/hvm: address violations of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:31 +0200
Message-Id: <a20efca7042ea8f351516ea521edccd89b475929.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 16.3 states that "An unconditional `break' statement shall
terminate every switch-clause".

Add the pseudo keyword fallthrough or a missing break statement
to address violations of the rule.

As a defensive measure, return -EOPNOTSUPP in case a supposedly
unreachable default case is reached.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v2:
- replace the hyphenated fallthrough with the pseudo keyword.
---
 xen/arch/x86/hvm/emulate.c   | 9 ++++++---
 xen/arch/x86/hvm/hvm.c       | 6 ++++++
 xen/arch/x86/hvm/hypercall.c | 1 +
 xen/arch/x86/hvm/irq.c       | 1 +
 4 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 02e378365b..859c21a5ab 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -339,7 +339,7 @@ static int hvmemul_do_io(
     }
     case X86EMUL_UNIMPLEMENTED:
         ASSERT_UNREACHABLE();
-        /* Fall-through */
+        fallthrough;
     default:
         BUG();
     }
@@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 
     if ( hvmemul_ctxt->ctxt.retire.singlestep )
@@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
         /* fallthrough */
     default:
         hvm_emulate_writeback(&ctxt);
+        break;
     }
 
     return rc;
@@ -2799,10 +2801,11 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
         memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data,
                hvio->mmio_insn_bytes);
     }
-    /* Fall-through */
+    fallthrough;
     default:
         ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
         rc = hvm_emulate_one(&ctx, VIO_no_completion);
+        break;
     }
 
     switch ( rc )
@@ -2818,7 +2821,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     case X86EMUL_UNIMPLEMENTED:
         if ( hvm_monitor_emul_unimplemented() )
             return;
-        /* fall-through */
+        fallthrough;
     case X86EMUL_UNHANDLEABLE:
         hvm_dump_emulation_state(XENLOG_G_DEBUG, "Mem event", &ctx, rc);
         hvm_inject_hw_exception(trapnr, errcode);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f4b627b1f..c263e562ff 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4919,6 +4919,8 @@ static int do_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
  out:
@@ -5020,6 +5022,8 @@ static int compat_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
     return rc;
@@ -5283,6 +5287,8 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
          * %cs and %tr are unconditionally present.  SVM ignores these present
          * bits and will happily run without them set.
          */
+        fallthrough;
+
     case x86_seg_cs:
         reg->p = 1;
         break;
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 7fb3136f0c..2271afe02a 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     case 8:
         eax = regs->rax;
         /* Fallthrough to permission check. */
+        fallthrough;
     case 4:
     case 2:
         if ( currd->arch.monitor.guest_request_userspace_enabled &&
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 210cebb0e6..1eab44defc 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -282,6 +282,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
             __hvm_pci_intx_assert(d, pdev, pintx);
         else
             __hvm_pci_intx_deassert(d, pdev, pintx);
+        break;
     default:
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746326.1153391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd8-0004Kp-G8; Mon, 24 Jun 2024 09:04:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746326.1153391; Mon, 24 Jun 2024 09:04:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd8-0004JH-3Z; Mon, 24 Jun 2024 09:04:54 +0000
Received: by outflank-mailman (input) for mailman id 746326;
 Mon, 24 Jun 2024 09:04:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd6-0002x1-E0
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:52 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d05832ef-3208-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:04:51 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 310954EE0739;
 Mon, 24 Jun 2024 11:04:50 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d05832ef-3208-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 06/13] x86/mce: address violations of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:30 +0200
Message-Id: <c1015fb8f39d3a44732ccdebb8ba11392dbe4aa8.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of
MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/cpu/mcheck/mce_amd.c   | 1 +
 xen/arch/x86/cpu/mcheck/mce_intel.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.c b/xen/arch/x86/cpu/mcheck/mce_amd.c
index 3318b8204f..4f06a3153b 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.c
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.c
@@ -201,6 +201,7 @@ static void mcequirk_amd_apply(enum mcequirk_amd_flags flags)
 
     default:
         ASSERT(flags == MCEQUIRK_NONE);
+        break;
     }
 }
 
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index dd812f4b8a..9574dedbfd 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -896,6 +896,8 @@ static void intel_init_ppin(const struct cpuinfo_x86 *c)
             ppin_msr = 0;
         else if ( c == &boot_cpu_data )
             ppin_msr = MSR_PPIN;
+
+        break;
     }
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746327.1153398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd9-0004SN-7i; Mon, 24 Jun 2024 09:04:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746327.1153398; Mon, 24 Jun 2024 09:04:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfd8-0004PU-JK; Mon, 24 Jun 2024 09:04:54 +0000
Received: by outflank-mailman (input) for mailman id 746327;
 Mon, 24 Jun 2024 09:04:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd6-0002wq-VZ
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:52 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d1420cb8-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:52 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id AF8174EE073D;
 Mon, 24 Jun 2024 11:04:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1420cb8-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 08/13] x86/vpt: address a violation of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:32 +0200
Message-Id: <e26de71af72b51abd425f2e75f30d794e0ba46a2.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the pseudo-keyword fallthrough to meet the requirements to
deviate a violation of MISRA C Rule 16.3 ("An unconditional `break'
statement shall terminate every switch-clause").

No functional change.
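As a sketch of the mechanism (the macro definition and function names here are illustrative, not the actual Xen ones): the pseudo-keyword expands to the compiler's fallthrough attribute on GCC/Clang, marking the fall-through as intentional for both the compiler and static analysis tooling.

```c
#include <assert.h>

/* Illustrative definition; Xen/Linux define this centrally. */
#if defined(__GNUC__) || defined(__clang__)
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while (0)
#endif

static int accumulate(int stage)
{
    int v = 0;

    switch ( stage )
    {
    case 1:
        v += 1;
        /* Deliberate fall-through to the next stage's work. */
        fallthrough;
    case 2:
        v += 2;
        break;
    }

    return v;
}
```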

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vpt.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index e1d6845a28..c76a9a272b 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -121,6 +121,8 @@ static int pt_irq_masked(struct periodic_time *pt)
     }
 
     /* Fallthrough to check if the interrupt is masked on the IO APIC. */
+    fallthrough;
+
     case PTSRC_ioapic:
     {
         int mask = vioapic_get_mask(v->domain, gsi);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746331.1153412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdA-0004zd-Ev; Mon, 24 Jun 2024 09:04:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746331.1153412; Mon, 24 Jun 2024 09:04:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdA-0004xa-9I; Mon, 24 Jun 2024 09:04:56 +0000
Received: by outflank-mailman (input) for mailman id 746331;
 Mon, 24 Jun 2024 09:04:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd8-0002x1-Rs
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:54 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d1cbb281-3208-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:04:53 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 7522D4EE0738;
 Mon, 24 Jun 2024 11:04:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1cbb281-3208-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 09/13] x86/mm: add defensive return
Date: Mon, 24 Jun 2024 11:04:33 +0200
Message-Id: <f3adf3d5dac5c4c108207883472f3ebcfa11c685.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a defensive return statement at the end of an unreachable
default case. Besides improving safety, this meets the requirements
to deviate a violation of MISRA C Rule 16.3: "An unconditional `break'
statement shall terminate every switch-clause".
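The pattern can be sketched as follows (hypothetical names, standard assert in place of ASSERT_UNREACHABLE): in a release build the assertion compiles away, so without the defensive return the function could fall out of the switch with an indeterminate result.

```c
#include <assert.h>

static int classify(unsigned int type)
{
    switch ( type )
    {
    case 0:
        return 1;
    case 1:
        return 0;
    default:
        assert(0 && "unreachable");
        return 0; /* defensive return; also satisfies Rule 16.3 */
    }
}
```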

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/mm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 648d6dd475..2e19ced15e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -916,6 +916,7 @@ get_page_from_l1e(
                 return 0;
             default:
                 ASSERT_UNREACHABLE();
+                return 0;
             }
         }
         else if ( l1f & _PAGE_RW )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746332.1153419 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdB-000593-Dc; Mon, 24 Jun 2024 09:04:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746332.1153419; Mon, 24 Jun 2024 09:04:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdB-00058H-0n; Mon, 24 Jun 2024 09:04:57 +0000
Received: by outflank-mailman (input) for mailman id 746332;
 Mon, 24 Jun 2024 09:04:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfd9-0002wq-Ei
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:55 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d2772e42-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:54 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 806774EE073D;
 Mon, 24 Jun 2024 11:04:53 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2772e42-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 10/13] x86/mpparse: address a violation of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:34 +0200
Message-Id: <e421b80a9b016d286b9a5b8329b0bcb33584f06e.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of
MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/mpparse.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449..306d8ed97a 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -544,6 +544,7 @@ static inline void __init construct_default_ISA_mptable(int mpc_default_type)
 		case 4:
 		case 7:
 			memcpy(bus.mpc_bustype, "MCA   ", 6);
+			break;
 	}
 	MP_bus_info(&bus);
 	if (mpc_default_type > 4) {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:04:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:04:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746333.1153433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdD-0005ff-92; Mon, 24 Jun 2024 09:04:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746333.1153433; Mon, 24 Jun 2024 09:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdD-0005eO-2S; Mon, 24 Jun 2024 09:04:59 +0000
Received: by outflank-mailman (input) for mailman id 746333;
 Mon, 24 Jun 2024 09:04:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfdA-0002wq-Ex
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:56 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d2edd806-3208-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:04:55 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 74F544EE0754;
 Mon, 24 Jun 2024 11:04:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2edd806-3208-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 11/13] x86/pmtimer: address a violation of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:35 +0200
Message-Id: <fea205262d4f7b337b804a013d0f1c411a721b46.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of MISRA C
Rule 16.3 ("An unconditional `break' statement shall terminate every
switch-clause").

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/pmtimer.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 97099ac305..87a7a01c9f 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -185,6 +185,7 @@ static int cf_check handle_evt_io(
                 gdprintk(XENLOG_WARNING, 
                          "Bad ACPI PM register write: %x bytes (%x) at %x\n", 
                          bytes, *val, port);
+                break;
             }
         }
         /* Fix up the SCI state to match the new register state */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:05:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:05:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746334.1153439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdE-0005qz-4D; Mon, 24 Jun 2024 09:05:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746334.1153439; Mon, 24 Jun 2024 09:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdD-0005pX-NA; Mon, 24 Jun 2024 09:04:59 +0000
Received: by outflank-mailman (input) for mailman id 746334;
 Mon, 24 Jun 2024 09:04:57 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfdB-0002x1-EO
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:57 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d353f119-3208-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:04:56 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 437414EE0738;
 Mon, 24 Jun 2024 11:04:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d353f119-3208-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 12/13] x86/vPIC: address a violation of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:36 +0200
Message-Id: <bf0f2ac1c8e9443b2c4f8ae6f02659927d5f7dea.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the pseudo-keyword fallthrough to meet the requirements to deviate
a violation of MISRA C Rule 16.3: "An unconditional `break' statement
shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vpic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index 7c3b5c7254..6427b08086 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -309,6 +309,7 @@ static void vpic_ioport_write(
             if ( !(vpic->init_state & 8) )
                 break; /* CASCADE mode: wait for write to ICW3. */
             /* SNGL mode: fall through (no ICW3). */
+            fallthrough;
         case 2:
             /* ICW3 */
             vpic->init_state++;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:05:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:05:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746335.1153447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdF-00069D-Kp; Mon, 24 Jun 2024 09:05:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746335.1153447; Mon, 24 Jun 2024 09:05:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfdE-000639-UQ; Mon, 24 Jun 2024 09:05:00 +0000
Received: by outflank-mailman (input) for mailman id 746335;
 Mon, 24 Jun 2024 09:04:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfdC-0002x1-EV
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:04:58 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d3c0caea-3208-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:04:56 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id E2C7D4EE0754;
 Mon, 24 Jun 2024 11:04:55 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3c0caea-3208-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 13/13] x86/vlapic: address a violation of MISRA C Rule 16.3
Date: Mon, 24 Jun 2024 11:04:37 +0200
Message-Id: <0aa39166696e46b6bb45a0f7b5ac06bfd9fdda8e.1719218291.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719218291.git.federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of MISRA C
Rule 16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 xen/arch/x86/hvm/vlapic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9cfc82666a..2ec9594271 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -367,6 +367,7 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
         gdprintk(XENLOG_ERR, "TODO: unsupported delivery mode in ICR %x\n",
                  icr_low);
         domain_crash(v->domain);
+        break;
     }
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:18:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:18:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746430.1153469 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfqU-000475-VV; Mon, 24 Jun 2024 09:18:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746430.1153469; Mon, 24 Jun 2024 09:18:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfqU-00046y-T2; Mon, 24 Jun 2024 09:18:42 +0000
Received: by outflank-mailman (input) for mailman id 746430;
 Mon, 24 Jun 2024 09:18:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLfqT-00046s-Fl
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:18:41 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id beb8b0eb-320a-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:18:40 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.160.55.204])
 by support.bugseng.com (Postfix) with ESMTPSA id 0E0404EE0738;
 Mon, 24 Jun 2024 11:18:38 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: beb8b0eb-320a-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Nicola Vetrini <nicola.vetrini@bugseng.com>
Subject: [XEN PATCH v2] automation/eclair: configure Rule 13.6 and custom service B.UNEVALEFF
Date: Mon, 24 Jun 2024 11:18:32 +0200
Message-Id: <5c60e98d70ae94c155fd56ec13b764b7a8f6161c.1719219962.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Rule 13.6 states that "The operand of the `sizeof' operator shall not
contain any expression which has potential side effects".

Define service B.UNEVALEFF as an extension of Rule 13.6 to also check
for unevaluated side effects in the operands of the typeof and alignof
operators.

Update ECLAIR configuration to deviate uses of BUILD_BUG_ON and
alternative_v?call[0-9] for both Rule 13.6 and B.UNEVALEFF.

Add service B.UNEVALEFF to the accepted.ecl guidelines to check
"violations" in the weekly analysis.
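The rationale can be demonstrated with a small standalone sketch (GCC/Clang __typeof__): operands of sizeof and typeof are not evaluated, so any side effect written inside them is silently dropped, which is what the rule and its B.UNEVALEFF extension guard against.

```c
#include <assert.h>
#include <stddef.h>

static int side_effects_dropped(void)
{
    int i = 0;
    size_t s = sizeof(i++);   /* i++ is never executed */
    __typeof__(i++) j = 0;    /* nor here: typeof operand is unevaluated */

    (void)s;
    return i + j;             /* both increments were dropped */
}
```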

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
Changes in v2:
- typo fixed;
- restricted deviation to alternative.h;
- generalized deviation to the arguments of BUILD_BUG_ON.
---
 automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl      | 10 ++++++++++
 .../eclair_analysis/ECLAIR/accepted_guidelines.sh      |  2 ++
 automation/eclair_analysis/ECLAIR/analysis.ecl         |  1 +
 automation/eclair_analysis/ECLAIR/deviations.ecl       | 10 ++++++++++
 docs/misra/deviations.rst                              | 10 ++++++++++
 5 files changed, 33 insertions(+)
 create mode 100644 automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl

diff --git a/automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl b/automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl
new file mode 100644
index 0000000000..92d8db8986
--- /dev/null
+++ b/automation/eclair_analysis/ECLAIR/B.UNEVALEFF.ecl
@@ -0,0 +1,10 @@
+-clone_service=MC3R1.R13.6,B.UNEVALEFF
+
+-config=B.UNEVALEFF,summary="The operand of the `alignof' and `typeof'  operators shall not contain any expression which has potential side effects"
+-config=B.UNEVALEFF,stmt_child_matcher=
+{"stmt(node(utrait_expr)&&operator(alignof))", expr, 0, "stmt(any())", {}},
+{"stmt(node(utrait_type)&&operator(alignof))", type, 0, "stmt(any())", {}},
+{"stmt(node(utrait_expr)&&operator(preferred_alignof))", expr, 0, "stmt(any())", {}},
+{"stmt(node(utrait_type)&&operator(preferred_alignof))", type, 0, "stmt(any())", {}},
+{"type(node(typeof_expr))", expr, 0, "stmt(any())", {}},
+{"type(node(typeof_type))", type, 0, "stmt(any())", {}}
diff --git a/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh b/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh
index b308bd4cda..368135122c 100755
--- a/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh
+++ b/automation/eclair_analysis/ECLAIR/accepted_guidelines.sh
@@ -11,3 +11,5 @@ accepted_rst=$1
 
 grep -Eo "\`(Dir|Rule) [0-9]+\.[0-9]+" ${accepted_rst} \
      | sed -e 's/`Rule /MC3R1.R/' -e  's/`Dir /MC3R1.D/' -e 's/.*/-enable=&/' > ${script_dir}/accepted.ecl
+
+echo "-enable=B.UNEVALEFF" >> ${script_dir}/accepted.ecl
diff --git a/automation/eclair_analysis/ECLAIR/analysis.ecl b/automation/eclair_analysis/ECLAIR/analysis.ecl
index 9134e59617..df0b551812 100644
--- a/automation/eclair_analysis/ECLAIR/analysis.ecl
+++ b/automation/eclair_analysis/ECLAIR/analysis.ecl
@@ -52,6 +52,7 @@ their Standard Library equivalents."
 -eval_file=adopted.ecl
 -eval_file=out_of_scope.ecl
 
+-eval_file=B.UNEVALEFF.ecl
 -eval_file=deviations.ecl
 -eval_file=call_properties.ecl
 -eval_file=tagging.ecl
diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index e2653f77eb..580d9edb8d 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -328,6 +328,16 @@ of the short-circuit evaluation strategy of such logical operators."
 -config=MC3R1.R13.5,reports+={disapplied,"any()"}
 -doc_end
 
+-doc_begin="Macros alternative_v?call[0-9] use sizeof and typeof to check that the argument types match the corresponding parameter ones."
+-config=MC3R1.R13.6,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_vcall[0-9]$))&&file(^xen/arch/x86/include/asm/alternative\\.h*$)))"}
+-config=B.UNEVALEFF,reports+={deliberate,"any_area(any_loc(any_exp(macro(^alternative_v?call[0-9]$))&&file(^xen/arch/x86/include/asm/alternative\\.h*$)))"}
+-doc_end
+
+-doc_begin="Anything, no matter how complicated, inside the BUILD_BUG_ON macro is subject to a compile-time evaluation without relevant side effects."
+-config=MC3R1.R13.6,reports+={safe,"any_area(any_loc(any_exp(macro(name(BUILD_BUG_ON)))))"}
+-config=B.UNEVALEFF,reports+={safe,"any_area(any_loc(any_exp(macro(name(BUILD_BUG_ON)))))"}
+-doc_end
+
 #
 # Series 14
 #
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 36959aa44a..65dce6267f 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -279,6 +279,16 @@ Deviations related to MISRA C:2012 Rules:
        the short-circuit evaluation strategy for logical operators.
      - Project-wide deviation; tagged as `disapplied` for ECLAIR.
 
+   * - R13.6
+     - On x86, macros alternative_v?call[0-9] use sizeof and typeof to check
+       that the argument types match the corresponding parameter ones.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R13.6
+     - Anything, no matter how complicated, inside the BUILD_BUG_ON macro is
+       subject to a compile-time evaluation without relevant side effects.
+     - Tagged as `safe` for ECLAIR.
+
    * - R14.2
      - The severe restrictions imposed by this rule on the use of 'for'
        statements are not counterbalanced by the presumed facilitation of the
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:23:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746436.1153480 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfuh-00067T-Fe; Mon, 24 Jun 2024 09:23:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746436.1153480; Mon, 24 Jun 2024 09:23:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfuh-00067M-DE; Mon, 24 Jun 2024 09:23:03 +0000
Received: by outflank-mailman (input) for mailman id 746436;
 Mon, 24 Jun 2024 09:23:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dlsF=N2=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sLfug-00065a-AX
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:23:02 +0000
Received: from mail-qk1-x729.google.com (mail-qk1-x729.google.com
 [2607:f8b0:4864:20::729])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5a3fe0a0-320b-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 11:23:01 +0200 (CEST)
Received: by mail-qk1-x729.google.com with SMTP id
 af79cd13be357-796df041d73so281363385a.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 02:23:01 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce8b366esm295674585a.32.2024.06.24.02.22.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 02:23:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a3fe0a0-320b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719220980; x=1719825780; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xWPc4P2X6Wre/VYuOXgHh+AbI8DfpW502T/FVt7NFEY=;
        b=ezknJ48FBRP6gmyB9E2Qx4IGRrSM0TC9kNzEk0vCyQZ0SmVSs8sLTvN39sd4f9FQWG
         PB9QecuLuGhJF/Ink9ZEukwuLPCfXA4jHVgU1zZZ+6hOZPfGy0BGrc7wqj61GqPcd5hM
         W7LgnwhkyKVvFSL+tfzYgmmy+ksDj6+SWglRA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719220980; x=1719825780;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=xWPc4P2X6Wre/VYuOXgHh+AbI8DfpW502T/FVt7NFEY=;
        b=SEXYUwAdA37ELcheFKo6Hi0wbQrCcY1Caqli1p8f6IY/Q5q0kvCl+ihnjkhchulLn0
         GYtknqh5uw5dq7ldONfERmkuvvtvSPADYCbPz7QrE0sfx1CxIDIkN2tpgTxzhdigRoD/
         iYnX+c6j8QY5Qfp1KCBm3uO4J1CUYlUxAP5TRUvFPwnr7eAmDNYzF8xg+GqfVywTt9sJ
         yScJ90iRym6fYdNn8/zubgxFmANr3xBDxAhoh08U5HE/mZCyr0r/vfub9fkVh6/8eP1/
         im0GYXy4vu6VGhAE7CLPKvjd0Zs01xe5gc4lRnC4likk1BJxHY4Auka+vsITXJs4U17k
         W/5Q==
X-Gm-Message-State: AOJu0YwYU4/y3e8MuadEjtdK0IHKHM47PfEFugG0sV8YOAOD/vIWeV6r
	PR24H20YzKybF+YdCvX+QpoKGUBNzDmBjtpbC3kCHZOr2OXXQ8j+Yn8In0ZfztVsaT/6tqfDQJC
	7
X-Google-Smtp-Source: AGHT+IEPEE5xQIFyub1fgXWxb2cvAUYv9gFUgdnt/nBUg66ogzsydjUO/gOv3ykkJLrNFOmgi9I6qA==
X-Received: by 2002:a05:620a:2613:b0:797:aa9a:de5e with SMTP id af79cd13be357-79be468303amr561656285a.3.1719220980450;
        Mon, 24 Jun 2024 02:23:00 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Community Manager <community.manager@xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH for-4.19 2/2] CHANGELOG: Add entries related to tracing
Date: Mon, 24 Jun 2024 10:04:11 +0100
Message-Id: <20240624090411.1867850-2-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240624090411.1867850-1-george.dunlap@cloud.com>
References: <20240624090411.1867850-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: Community Manager <community.manager@xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
---
 CHANGELOG.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index f3c6c7954f..35c3488f4b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -21,6 +21,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
  - When building with Systemd support (./configure --enable-systemd), remove
    libsystemd as a build dependency.  Systemd Notify support is retained, now
    using a standalone library implementation.
+ - xenalyze no longer requires `--svm-mode` when analyzing traces
+   generated on AMD CPUs.
 
 ### Added
  - On x86:
@@ -37,6 +39,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    compatibility reasons.  VMs configured with bootloader="/usr/bin/pygrub"
    should be updated to just bootloader="pygrub".
  - The Xen gdbstub on x86.
+ - xentrace_format has been removed; use xenalyze instead.
 
 ## [4.18.0](https://xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.18.0) - 2023-11-16
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:23:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:23:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746437.1153490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfui-0006Lu-M7; Mon, 24 Jun 2024 09:23:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746437.1153490; Mon, 24 Jun 2024 09:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLfui-0006Ln-J6; Mon, 24 Jun 2024 09:23:04 +0000
Received: by outflank-mailman (input) for mailman id 746437;
 Mon, 24 Jun 2024 09:23:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dlsF=N2=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sLfug-00067F-Of
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:23:02 +0000
Received: from mail-qk1-x72d.google.com (mail-qk1-x72d.google.com
 [2607:f8b0:4864:20::72d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 59959d95-320b-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:23:00 +0200 (CEST)
Received: by mail-qk1-x72d.google.com with SMTP id
 af79cd13be357-79baa4e8531so408761785a.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 02:23:00 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce8b366esm295674585a.32.2024.06.24.02.22.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 02:22:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59959d95-320b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719220979; x=1719825779; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=qaxPYdpB2KlsuDyrkCqRVkZTlJTJ1gBe5YPrFDEJb6g=;
        b=KhJ9SmjjYiQvxGvvETJ7oVfB/8DxuhZQ05VOGEAFER8UYtSwW+NTOwcA7OS4asaWWM
         r/soQHJ8yyY9I8+fInSgwKvfvWi3VAIX7HzZTxqA8uL0bRJ0R+5l8mmxXA4H5rFAHIUL
         IyBuggAVfbqc9B6uINMKdZ/75ACqx3pSWwbuw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719220979; x=1719825779;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=qaxPYdpB2KlsuDyrkCqRVkZTlJTJ1gBe5YPrFDEJb6g=;
        b=P7dKZ2925vjJ+IZBqbKZqF/fjppa/XxV4dILBa5RMj+m6EzPvUiSvb10QEyJxSehDi
         3gCFW/vbLpjn+lJrtlcl9hS6iudQ44R8eNVWKRdW+Qf+0bgP/aF++ybmIkEB3/HmAOWS
         6DwwzfoaBmAE52C0XaRAXOp6LAKTd8o43a1rfYwql2t7tgCPkWELapbvW5UZCuZ5FcyY
         sqAtmZwldWEk9uvh71gbCV+2KImsp3XnSZxTTluk5hvCNQiHfIfWqCugYAcgx5ZASUtd
         RbltOX39ft4+MU0iHopqH8IeBtflFP4W58ZAYBx4mOyclPTkiXVN+NsqlpJoB4aSUXIl
         /1vg==
X-Gm-Message-State: AOJu0Yytz+s35HrJZpIngDOfaspqhO43fco3jrGvo2TwmwPAVVlTNX1P
	YECFLiK2fRivVPiJTA20eOC+w1Kf0/QxwOO7O4uoZ7jI/YyDukm4nXTQt1gdiwhddbbC4qZ3k5W
	v
X-Google-Smtp-Source: AGHT+IGUfeHEkQula0SkYff68VVcs4bR7JTER9swn2noVcGbRAK1BPVKQQzI7JUAZ02j8HqjHdve+w==
X-Received: by 2002:a05:620a:4556:b0:799:2d50:14a3 with SMTP id af79cd13be357-79be4492050mr644350585a.0.1719220979314;
        Mon, 24 Jun 2024 02:22:59 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH for-4.19 1/2] CHANGELOG.md: Fix indentation of "Removed" section
Date: Mon, 24 Jun 2024 10:04:10 +0100
Message-Id: <20240624090411.1867850-1-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: Community Manager <community.manager@xenproject.org>
---
 CHANGELOG.md | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 1778419cae..f3c6c7954f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -31,12 +31,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
  - libxl support for backendtype=tap with tapback.
 
 ### Removed
-- caml-stubdom.  It hasn't built since 2014, was pinned to Ocaml 4.02, and has
-  been superseded by the MirageOS/SOLO5 projects.
-- /usr/bin/pygrub symlink.  This was deprecated in Xen 4.2 (2012) but left for
-  compatibility reasons.  VMs configured with bootloader="/usr/bin/pygrub"
-  should be updated to just bootloader="pygrub".
-- The Xen gdbstub on x86.
+ - caml-stubdom.  It hasn't built since 2014, was pinned to Ocaml 4.02, and has
+   been superseded by the MirageOS/SOLO5 projects.
+ - /usr/bin/pygrub symlink.  This was deprecated in Xen 4.2 (2012) but left for
+   compatibility reasons.  VMs configured with bootloader="/usr/bin/pygrub"
+   should be updated to just bootloader="pygrub".
+ - The Xen gdbstub on x86.
 
 ## [4.18.0](https://xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.18.0) - 2023-11-16
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:41:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:41:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746453.1153500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgCV-0001sW-47; Mon, 24 Jun 2024 09:41:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746453.1153500; Mon, 24 Jun 2024 09:41:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgCV-0001sM-1S; Mon, 24 Jun 2024 09:41:27 +0000
Received: by outflank-mailman (input) for mailman id 746453;
 Mon, 24 Jun 2024 09:41:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RHjb=N2=bounce.vates.tech=bounce-md_30504962.66793f40.v1-0dde5e73b97d4d4db679376060d06ffd@srs-se1.protection.inumbo.net>)
 id 1sLgCU-0001sG-2x
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:41:26 +0000
Received: from mail134-3.atl141.mandrillapp.com
 (mail134-3.atl141.mandrillapp.com [198.2.134.3])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ea694ffe-320d-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:41:22 +0200 (CEST)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-3.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4W72z52TjyzDRHxYM
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 09:41:21 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 0dde5e73b97d4d4db679376060d06ffd; Mon, 24 Jun 2024 09:41:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea694ffe-320d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719222081; x=1719482581;
	bh=nlkz8I0Q2XBmCtG1ce2j8BZifoX2kJRy/Jkn45yr+Zc=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=OIPLIzaLzt7XCGsdeQrd3/gtaiR2sJUyENl8j6kKxBl3Fcxxn0wkYIDoz4REuvq5Y
	 Or/T0WNZYKsf4Wqr6+xmhzgnv+xDakiM7bW1tV5EC13JncUKM+z+e/9G1YdSDzMQ1M
	 7NoRi4WIYWz7Z39STc94VgFyYrZgKksKP8Y5ejqaSbOF2QMAY8O9/HKEm/PkMF6tR5
	 M2hQRFXpfiVhYvOqBhD3dwk/hKYMpMH6xKRfT+NuAZidypC0F9l2WlEzokad6RGN8w
	 PuS6XBAZ56UxG0MOiVV6U/7n5ktmsMtk5Uk5NI5VAbqprDSxOuGyZ/SJc1YUywLWJf
	 ++PCQkDrzIrMg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719222081; x=1719482581; i=anthony.perard@vates.tech;
	bh=nlkz8I0Q2XBmCtG1ce2j8BZifoX2kJRy/Jkn45yr+Zc=;
	h=From:Subject:To:Cc:Message-Id:Feedback-ID:Date:MIME-Version:
	 Content-Type:Content-Transfer-Encoding:CC:Date:Subject:From;
	b=eEkB49Ud7h8ZrvM2m+bzeYVAvMWnSPAgVlmm7Qmf9g0RG3kLcXyRZZjoUb2dPnSZM
	 /j9v+xgkpkOPnka6dn6Ff60PHNX9XcfbHS8ycAEcMWe6NgQ7Rr2ncUKttWYJu2pgf9
	 dypOKwsb8rxlvHaIup1obD5Q/SaHG/FNE/UYAU1Y5TQ5LLEl4889tQTapkEM31OW/2
	 bptb2fSZ4jt3T/hbHnB9NEFO11kxEmDQTte96NcHcIFcwd5OzWJOh/FTgO8XrO0NVp
	 gsPz7pqLXXUCYOvgH7kfawFgn33Q+2mZI37edTyhMqu4y42y4utLzW6CU/NDGxAyHa
	 vSKZFXyIaNa2A==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?[XEN=20PATCH]=20MAINTAINERS:=20Update=20my=20email=20address=20again?=
X-Mailer: git-send-email 2.39.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719222079548
To: xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony@xenproject.org>, Anthony PERARD <anthony.perard@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Message-Id: <20240624094030.41692-1-anthony.perard@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.0dde5e73b97d4d4db679376060d06ffd?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240624:md
Date: Mon, 24 Jun 2024 09:41:20 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Signed-off-by: Anthony PERARD <anthony.perard@vates.tech>
---
 MAINTAINERS | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8e2b30a345..9d66b898ec 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -208,7 +208,7 @@ Maintainers List (try to look for most precise areas first)
 
 9PFSD
 M:	Juergen Gross <jgross@suse.com>
-M:	Anthony PERARD <anthony@xenproject.org>
+M:	Anthony PERARD <anthony.perard@vates.tech>
 S:	Supported
 F:	tools/9pfsd/
 
@@ -383,7 +383,7 @@ F:	xen/arch/x86/machine_kexec.c
 F:	xen/arch/x86/x86_64/kexec_reloc.S
 
 LIBS
-M:	Anthony PERARD <anthony@xenproject.org>
+M:	Anthony PERARD <anthony.perard@vates.tech>
 R:	Juergen Gross <jgross@suse.com>
 S:	Supported
 F:	tools/include/libxenvchan.h
@@ -429,7 +429,7 @@ S:	Supported
 F:	tools/ocaml/
 
 OVMF UPSTREAM
-M:	Anthony PERARD <anthony@xenproject.org>
+M:	Anthony PERARD <anthony.perard@vates.tech>
 S:	Supported
 T:	git https://xenbits.xenproject.org/git-http/ovmf.git
 
@@ -462,7 +462,7 @@ T:	git https://xenbits.xenproject.org/git-http/qemu-xen-traditional.git
 
 QEMU UPSTREAM
 M:	Stefano Stabellini <sstabellini@kernel.org>
-M:	Anthony Perard <anthony@xenproject.org>
+M:	Anthony Perard <anthony.perard@vates.tech>
 S:	Supported
 T:	git https://xenbits.xenproject.org/git-http/qemu-xen.git
 
@@ -515,7 +515,7 @@ F:	xen/arch/arm/include/asm/tee
 F:	xen/arch/arm/tee/
 
 TOOLSTACK
-M:	Anthony PERARD <anthony@xenproject.org>
+M:	Anthony PERARD <anthony.perard@vates.tech>
 S:	Supported
 F:	autogen.sh
 F:	config/*.in
-- 


Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:52:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:52:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746462.1153513 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgNH-00043Z-7o; Mon, 24 Jun 2024 09:52:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746462.1153513; Mon, 24 Jun 2024 09:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgNH-00043S-4R; Mon, 24 Jun 2024 09:52:35 +0000
Received: by outflank-mailman (input) for mailman id 746462;
 Mon, 24 Jun 2024 09:52:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLHn=N2=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sLgNF-00043I-OY
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:52:33 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 78f25733-320f-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:52:30 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-3621ac606e1so3003545f8f.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 02:52:30 -0700 (PDT)
Received: from ?IPV6:2a00:23ee:1630:2367:cdfd:357d:df51:2d81?
 ([2a00:23ee:1630:2367:cdfd:357d:df51:2d81])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-36638f858fbsm9525683f8f.65.2024.06.24.02.52.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 02:52:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78f25733-320f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719222750; x=1719827550; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=qNXLYealRBgzKlV6WSlGJjcUlvXtTicJG46p4JZfpTI=;
        b=GQXLmWqCYN/Rwi5wP6D1K2vBVL2bwoF+0Aru1GLJRAFWZh4FdDTu4JeEZeKhkblIEP
         ZPCoZrWE+bXpKTYzqoYBs7TaHYyY89XkQ5TJB2hUNYU6KIM5GsXn/ZXMq/eCvqdzA+Ok
         WVCW4vlCvIMtnOss3swt71CNge7x9EzpnolVI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719222750; x=1719827550;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qNXLYealRBgzKlV6WSlGJjcUlvXtTicJG46p4JZfpTI=;
        b=Z8WbAvLML5GqENAzyTt3AuvJawKetqIIc4gHFvt7b+y8X22Vhezqw/ZbsC7EraJML9
         jQmkdEGnwr5+cm9nXzst3TwGsKSIFJ6YTLEkBpIvx7U2dKK3+FIvaXnUGVOHtK7jQ4eB
         rKESYbNolj06zu0vzsc/QhL5yaROGi5t5qm2ZpwKeWBY5AYf0gJID/qDhK0Y8o6STNz3
         aVgMGstWuG5kcUyNRxnVt/jrsaRk9ZP0s6AwKd+hzsfFZ75GPq3KZjAZh4jjozp7lP7R
         AubzjEgQgMmbP5lfeyrdQGnes5eiKDSQz58tjfcAp84d9hr1q7uTuRVryDGjwJprxcny
         HL4Q==
X-Forwarded-Encrypted: i=1; AJvYcCWCNLH0lIwF2vcCUuwNPLL7A+e+UMgGwquv9H27g5/YdHdz9l33Dm8qbgLj5gdHIdZqkTVkYkoXjr8oXjn5RzKFqWXWXR69dVVYhhuzuMs=
X-Gm-Message-State: AOJu0YzPFBBbyHiFoZZp9uIbdj98fb0tfBKy7OMrxBGyP5tPGD641PJw
	WB100eCAJ2Af4hPoDWZBuhemif45C3F5B/SE3q0B7Ay3zL9bCJAphZ9J6H3P6vg9xOz08CusnL8
	F
X-Google-Smtp-Source: AGHT+IEcNZZ3IKBnuwzVXbTiRy3rQW7FGUIh82qp0bTrQ17H3HqHlbhlMd6y0ckszudvA6Jri2+pXg==
X-Received: by 2002:adf:f68c:0:b0:362:23d5:3928 with SMTP id ffacd0b85a97d-366e3293282mr4097244f8f.17.1719222750245;
        Mon, 24 Jun 2024 02:52:30 -0700 (PDT)
Message-ID: <ca2cde48-19aa-4832-a3ab-47354ce8a7ed@citrix.com>
Date: Mon, 24 Jun 2024 10:52:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 1/2] CHANGELOG.md: Fix indentation of "Removed"
 section
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Community Manager <community.manager@xenproject.org>
References: <20240624090411.1867850-1-george.dunlap@cloud.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <20240624090411.1867850-1-george.dunlap@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24/06/2024 10:04 am, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 09:52:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 09:52:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746464.1153523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgNd-0004T8-FW; Mon, 24 Jun 2024 09:52:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746464.1153523; Mon, 24 Jun 2024 09:52:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgNd-0004T1-CB; Mon, 24 Jun 2024 09:52:57 +0000
Received: by outflank-mailman (input) for mailman id 746464;
 Mon, 24 Jun 2024 09:52:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLHn=N2=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sLgNc-00043I-Ls
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 09:52:56 +0000
Received: from mail-lf1-x133.google.com (mail-lf1-x133.google.com
 [2a00:1450:4864:20::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8776acfa-320f-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 11:52:55 +0200 (CEST)
Received: by mail-lf1-x133.google.com with SMTP id
 2adb3069b0e04-52cdf4bc083so2339847e87.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 02:52:55 -0700 (PDT)
Received: from ?IPV6:2a00:23ee:1630:2367:cdfd:357d:df51:2d81?
 ([2a00:23ee:1630:2367:cdfd:357d:df51:2d81])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-424817a9667sm126992315e9.15.2024.06.24.02.52.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 02:52:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8776acfa-320f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719222774; x=1719827574; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=qNXLYealRBgzKlV6WSlGJjcUlvXtTicJG46p4JZfpTI=;
        b=JfXFOfvRZQ6t4mAYijyUlunGPJWpNxKNHJTL8ByMccC4bkmOCDqg2NEDReqr4LFv0y
         Kal3zE+YKBsFvZcMIAeTVZII3zrwOWSJCjh+9UxFpqjwqxbifrVM+YCTx3+j0ZrwK5/z
         zo/qhfksquBd4HlVBLaA2WR04MowBjRQjV594=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719222774; x=1719827574;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qNXLYealRBgzKlV6WSlGJjcUlvXtTicJG46p4JZfpTI=;
        b=mX0kxzOEZdxbIVluDu3/4jg/UqlPvaZZY4i/fRmekWlR+LcdazUQ7o0CUS3f7xDuoP
         B7zXTjfXAabKfG7tlRZ1kWOlBSl6/sILFvnvHaFVq5KycdC803LGZZkAPNBNYOuDwDFo
         ukCP/91L9Qg9iVlpTFxhOlnH2HziAcSIUYIfi1PZNE2oAyVgM+p4VjpqcD1ZOkKzvitb
         cJrsxEi0PW9ZNMabshcm0UZUp0B7UESZ2006pjzn4nvGkOWAgEnmi/onK0qnCWxwgUL5
         dOchn+KHyMvO5YsvMrjl45cWfnnarTrq5yV+uJdHFwuitGkYcy3HIZbxEPgCLSaXzO+3
         uWOg==
X-Forwarded-Encrypted: i=1; AJvYcCWPuUazyLIPvRaBv5NH7tO0JEgQr2Q7F5GW7fWfIq3t0t7TvAvGtkeAtHte6A9//YaJkGgWBx1ePFjZan67g0J5iQtFVYpCPZnkbxsyI3c=
X-Gm-Message-State: AOJu0Yx+yQDQsVp3fCFBNlMLdmWbcA1FNa+zaOONbX2tYpDMIZE4wa3y
	++6+2m1yjR+dYRHtBiJrugaXPIULYUFT78bx/nkbai0I1PPOUv1C+90UWmECOfA=
X-Google-Smtp-Source: AGHT+IFpdTGXai69L8ilvJthyMMuxnZ1o8t9qETtl4+UH2+hLaN4iEkqAZgf96k5wTyMy8SOpB+VRw==
X-Received: by 2002:a19:4310:0:b0:52c:b479:902d with SMTP id 2adb3069b0e04-52ce06105efmr3291262e87.4.1719222774537;
        Mon, 24 Jun 2024 02:52:54 -0700 (PDT)
Message-ID: <48460c9b-8593-4a9a-ad10-be2a40bc2eb3@citrix.com>
Date: Mon, 24 Jun 2024 10:52:52 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 2/2] CHANGELOG: Add entries related to tracing
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Community Manager <community.manager@xenproject.org>
References: <20240624090411.1867850-1-george.dunlap@cloud.com>
 <20240624090411.1867850-2-george.dunlap@cloud.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <20240624090411.1867850-2-george.dunlap@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24/06/2024 10:04 am, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 10:19:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 10:19:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746475.1153532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgmo-0000aa-B2; Mon, 24 Jun 2024 10:18:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746475.1153532; Mon, 24 Jun 2024 10:18:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgmo-0000aT-8R; Mon, 24 Jun 2024 10:18:58 +0000
Received: by outflank-mailman (input) for mailman id 746475;
 Mon, 24 Jun 2024 10:18:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLHn=N2=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sLgmm-0000aL-MU
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 10:18:56 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28f560f2-3213-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 12:18:54 +0200 (CEST)
Received: by mail-wr1-x433.google.com with SMTP id
 ffacd0b85a97d-365663f51adso2876263f8f.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 03:18:54 -0700 (PDT)
Received: from ?IPV6:2a00:23ee:1630:2367:cdfd:357d:df51:2d81?
 ([2a00:23ee:1630:2367:cdfd:357d:df51:2d81])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-366383f6777sm9670790f8f.21.2024.06.24.03.18.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 03:18:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28f560f2-3213-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719224334; x=1719829134; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=lPI2iq8iDxnnEIYGr+y3RMlzH7xQgs2dCQgmLpydl30=;
        b=if/ZvATU6b+uzIyJBhksQU6SGV8pHFCcT7RjDFSrycyT0oUfbERbJYwOOK+ZGpycGw
         UDrhW2Ns/rgd4duRrjigZBl70KClBeqW9WkUNUFqOHlS4i593f3TT3hc8m+gKMqEU+zT
         aSHxkM0EtOKktZei/Xl4uesmACKCde8VF7P5w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719224334; x=1719829134;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lPI2iq8iDxnnEIYGr+y3RMlzH7xQgs2dCQgmLpydl30=;
        b=teQ0LghQoUi1FQ5dYdXGeHf0qVusQ243sxU8XSnLNRkd8vyAj+Ns4dvqQzoRyZi5sG
         9F4n4cLndtZSOKxsRkM+cu9H5gAG9wqz+mzTb8QLOt+EJuoDn5nAQIkMroKiMJoNwRH9
         K/C50/JNziPusxTGV37MVrqppuV0Ly+Ihjy7B68ahzDVB/Sx0FOvX1D/k0x+B31Hitxd
         QjzSv3k/XEQjpxRhaA2I8pJZ4mqchXETynhw826jl2WePOM0owK8OedrLNNTbShC6RLm
         PNv7o/H0+RQEeWcsSYSR3AgLySGzeq8gZI3iL4SIU9i5+4dVZnsUVYMhBusw90xV7+M6
         +WfA==
X-Gm-Message-State: AOJu0Yzel1yv4B8l2nxlVE3gzj5L0PZUpmA8WVB8L22mfWbLLDLd/hXa
	wglIOvCeKleuWi/b3C4jW+SjjmUcPEG54zbfDiVPal97xqbwbKN7mM+TqG9IrEM=
X-Google-Smtp-Source: AGHT+IGRz9OP+pBzuBn5q0CP5hnuYz8phF9rEA6YYRTqy1pQc35NEkbOCDQvUxr54bjjRPkV+oInmQ==
X-Received: by 2002:a05:6000:1448:b0:366:e586:9131 with SMTP id ffacd0b85a97d-366e58692bdmr3925885f8f.16.1719224333913;
        Mon, 24 Jun 2024 03:18:53 -0700 (PDT)
Message-ID: <a93f8078-a4df-4395-b7ad-61d2fc38e18d@citrix.com>
Date: Mon, 24 Jun 2024 11:18:51 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v2] tools/xl: Open xldevd.log with O_CLOEXEC
To: Anthony PERARD <anthony.perard@vates.tech>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Demi Marie Obenour <demi@invisiblethingslab.com>,
 Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240621161656.63576-1-andrew.cooper3@citrix.com>
 <ZnWwbJiD6eG85VY9@l14>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <ZnWwbJiD6eG85VY9@l14>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 21/06/2024 5:55 pm, Anthony PERARD wrote:
> On Fri, Jun 21, 2024 at 05:16:56PM +0100, Andrew Cooper wrote:
>> `xl devd` has been observed leaking /var/log/xldevd.log into children.
>>
>> Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
>> after setting up stdout/stderr, it's only the logfile fd which will close on
>> exec().
>>
>> Link: https://github.com/QubesOS/qubes-issues/issues/8292
>> Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Anthony PERARD <anthony@xenproject.org>
>> CC: Juergen Gross <jgross@suse.com>
>> CC: Demi Marie Obenour <demi@invisiblethingslab.com>
>> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>
>> Also entirely speculative based on the QubesOS ticket.
>>
>> v2:
>>  * Extend the commit message to explain why stdout/stderr aren't closed by
>>    this change
>>
>> For 4.19.  This bugfix was posted earlier, but fell between the cracks.
>> ---
>>  tools/xl/xl_utils.c | 2 +-
>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/tools/xl/xl_utils.c b/tools/xl/xl_utils.c
>> index 17489d182954..060186db3a59 100644
>> --- a/tools/xl/xl_utils.c
>> +++ b/tools/xl/xl_utils.c
>> @@ -270,7 +270,7 @@ int do_daemonize(const char *name, const char *pidfile)
>>          exit(-1);
>>      }
>>  
>> -    CHK_SYSCALL(logfile = open(fullname, O_WRONLY|O_CREAT|O_APPEND, 0644));
>> +    CHK_SYSCALL(logfile = open(fullname, O_WRONLY | O_CREAT | O_APPEND | O_CLOEXEC, 0644));
> Every time we use O_CLOEXEC, we add in the C file
>     #ifndef O_CLOEXEC
>     #define O_CLOEXEC 0
>     #endif
> we don't need to do that anymore?
> Or I guess we'll see if someone complains when they try to build on an
> ancient version of Linux.

I double-checked, and it turns out it was FD_CLOEXEC, not O_CLOEXEC,
that xl was using, so we don't have a pre-existing example.

Therefore I've re-added this fallback at the top of xl_utils.c.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 10:22:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 10:22:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746481.1153543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgqH-00023f-Qt; Mon, 24 Jun 2024 10:22:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746481.1153543; Mon, 24 Jun 2024 10:22:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLgqH-00023Y-NO; Mon, 24 Jun 2024 10:22:33 +0000
Received: by outflank-mailman (input) for mailman id 746481;
 Mon, 24 Jun 2024 10:22:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sLgqF-00023I-NY
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 10:22:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLgqF-0007F4-6p; Mon, 24 Jun 2024 10:22:31 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.0.211])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLgqE-0007vW-Vu; Mon, 24 Jun 2024 10:22:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=qFD53FRPjRF7vyPgd2iLCt//XqKrmk+IxasvFkwr26s=; b=WktNR1yr5sJjVyqDYwgZLcxCHz
	LARigHPSf2qWf0vaR0OIqOfDsB+gJlGPjmWHRSV+EhPh/OhmNU/k9BDGZHPqE/65iOJfhmvNAGog8
	1IDhJxhx4ynCSK3m1jvB+e9OZ6Tlc7IQafv660ZJWuUjP4Iklh617Ymf+szGHyTB2Gc0=;
Message-ID: <86594fa0-a12d-42fc-ba6c-fe235becf768@xen.org>
Date: Mon, 24 Jun 2024 11:22:29 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: request host address to
 be specified for 1:1 domains
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, oleksii.kurochko@gmail.com
References: <20240621092205.30602-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240621092205.30602-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 21/06/2024 10:22, Michal Orzel wrote:
> As a follow up to commit cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem
> should be direct-mapped for direct-mapped domains") add a check to
> request that both host and guest physical address must be supplied for
> direct mapped domains. Otherwise return an error to prevent unwanted
> behavior.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

I would argue that it should have a Fixes tag:

Fixes: 988f1c7e1f40 ("xen/arm: static-shmem: fix "gbase/pbase used 
uninitialized" build failure")

Mainly because while you fixed the build, it was missing the check in 
the "else" part.

I am happy to update it on commit if you are ok with the change.

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 10:43:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 10:43:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746488.1153553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhAV-0005v0-EB; Mon, 24 Jun 2024 10:43:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746488.1153553; Mon, 24 Jun 2024 10:43:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhAV-0005ut-AN; Mon, 24 Jun 2024 10:43:27 +0000
Received: by outflank-mailman (input) for mailman id 746488;
 Mon, 24 Jun 2024 10:43:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=11f7=N2=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sLhAT-0005ul-LM
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 10:43:25 +0000
Received: from NAM11-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam11on20601.outbound.protection.outlook.com
 [2a01:111:f403:2415::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 93c035b3-3216-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 12:43:23 +0200 (CEST)
Received: from BN9PR03CA0433.namprd03.prod.outlook.com (2603:10b6:408:113::18)
 by MN2PR12MB4469.namprd12.prod.outlook.com (2603:10b6:208:268::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.26; Mon, 24 Jun
 2024 10:43:18 +0000
Received: from BN2PEPF000044A9.namprd04.prod.outlook.com
 (2603:10b6:408:113:cafe::42) by BN9PR03CA0433.outlook.office365.com
 (2603:10b6:408:113::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.38 via Frontend
 Transport; Mon, 24 Jun 2024 10:43:18 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 BN2PEPF000044A9.mail.protection.outlook.com (10.167.243.103) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Mon, 24 Jun 2024 10:43:18 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 24 Jun
 2024 05:43:17 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 24 Jun
 2024 05:43:17 -0500
Received: from [10.252.147.188] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39 via Frontend
 Transport; Mon, 24 Jun 2024 05:43:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93c035b3-3216-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TKtVPJuuK57ClXBeFYLhdwtK3RKSi93zm6TG19OO4k6fR8l59bxVgFX1tlvvTb8h3RZpHMreX1ojenY+HmlDrCBnAMNs1R7ipHY13NCR23MROC83gx6mWlokJZ3bXi3IlD0Hql20bMlUiOj755ZKF2UerXa5RRLqKp/cAdHAQhiOcDI19XR0FCdl/tsqzFK7ReAcl9ivjCdGjWUBTJHo1cqy5DPfuJ2MdjvC5aaivUW9Ib+37u45XHm8ELCjTalxVwOyrGKsIqOCSZQydQInQKgIcBcXgecIYxEqIdWDejIvEyYlye7hnhvU30h/3lYyJ/VblPGTEH53rP7tuOn5Xg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=vX/XPk4ZNl3GC0+OAmhrZeBaM3J8/ewr8h32CY/Guy0=;
 b=Q+ffqqvwZeuvOQD1fHGrraephcjgxHe5wKeY4v9gNxUJiacnbPhu3D0K8UNN1JkBNU7WjOslSMYLmoD9gubKs6Z3udjS4d9bQ8WsBRI1kNIkbxaxVQGcMoNBgv+xg9Jv3Kkvr1a1j3tklvGjmMz4mOPQ7kkD/GTw1t1zi85EfG+lckv5YRgn6HVarmg+DaItbkShgvUZTM6qijFUDiw+7f1hxA/G5WFiGhrrbAryc4HGipo/yWgVywm8ne7LewydW4x9p1AAVtTjMRyFcDO/rpdYgmhTwPLBPYmPa5Gn4ua5KgyWSYw9rbJYCSft2cpxBwi0K0jj5Y1psxAypEvq0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vX/XPk4ZNl3GC0+OAmhrZeBaM3J8/ewr8h32CY/Guy0=;
 b=wSj0iqcF8yTLogXPLMcyiSeH6w+mvtXpFMwU5Y7YLtGXZoNswMR59PdwRXbkFs+XbrNvzcoLe4cXoWhUuHIf0ywRo3fFpBEcjlXCa6fiqfbWOhLE7rwIt83SxaUvAFb+W/lzXS25Cni3mqB/kz9tuazrluSQgOlsecCxPpDE14g=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <d69b74ce-13b6-46f1-b2bd-051c913be43a@amd.com>
Date: Mon, 24 Jun 2024 12:43:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: request host address to
 be specified for 1:1 domains
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<oleksii.kurochko@gmail.com>
References: <20240621092205.30602-1-michal.orzel@amd.com>
 <86594fa0-a12d-42fc-ba6c-fe235becf768@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <86594fa0-a12d-42fc-ba6c-fe235becf768@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Received-SPF: None (SATLEXMB05.amd.com: michal.orzel@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: BN2PEPF000044A9:EE_|MN2PR12MB4469:EE_
X-MS-Office365-Filtering-Correlation-Id: 3175ba24-1800-4ca4-a070-08dc943a75ad
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|376011|1800799021|36860700010|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?utf-8?B?bGNIMkI4YnpYRC9qaVc0d1FTMldHd05wZGdOa0QvdjJFcklZaEhRSy9sUjdY?=
 =?utf-8?B?UE1uY1BqYW5ERHdRRERSeWpxOUZDazBEYmJ4N1A4eGVkVmRQaGVTakZHR09G?=
 =?utf-8?B?b1c3ZmZCQ09GOUlvaklDQjNBbkE2VVpWOXpSQ0hOdVhsNFNRK2MxdVRZTWdL?=
 =?utf-8?B?REx2VEVsc3BteStieE1vdEE2UmpudGZrVk5uT0tWWm5meE9FbGN6VCtvdTcx?=
 =?utf-8?B?S1dLSWt6bTBTWWhJVEhQQ25HUTlQOFlCN1RVWWVqUm9aWmZEZE9xTWVtTU9i?=
 =?utf-8?B?WHlpWHg1Yndrb002dmV3L0swa2VDWUo4d2dqU3NOVXZlcE9sclBRV1pwd1l6?=
 =?utf-8?B?cFVBNlBHaitUR2swOFZUWlRBejFZL2N1OFZvYnpyUzNpSERycnlqaUVpNDRH?=
 =?utf-8?B?WFdPcFV6TVNKSDNpRE5nSnlvSkxLL0gybzNUTjlhazB1TnFWdDRmYzZsYi9t?=
 =?utf-8?B?MlJiK2o2WU1ZWWdGY2pQVHBIMmxVN0x3a3JTRGZaMFBuekNqNUtaTkZBR0FP?=
 =?utf-8?B?NERKVndaVGYrZEk2ZmRXMnQzdnJxYkxZbHdGejRzeGVZTzYxR0t6U0x4K3VD?=
 =?utf-8?B?UW8zRkd1eHdiYUZMV3pTZjhweUNvS0lQNFRoNkJFWGVkZkV5cGNJNVJYS1lD?=
 =?utf-8?B?Vmh4c1pOL3NpUE42YTdKQk1LaGxaWkNQRVRiVmtIOG12ZXBkKzc4Vnhnb3BY?=
 =?utf-8?B?dW9iSTRPK0lWcHhLWnFlRzZyb0cyRW1SK0tXc2lmc3hZNllMdXZjMmZqY2RW?=
 =?utf-8?B?SkRiaGVWRjdWWkluSHJsRGlCSVQyRmI5blNoSlFoeGFzbCtIakprT0dndFlG?=
 =?utf-8?B?cHVUVldzSFgvYmJKcW45MXB0VFh1cVI4MWQwSjVZbG9TOHFMcEhQcEdoeU1W?=
 =?utf-8?B?ME9JeC9UOU9iTDFiMGNhQTdBYTlHN2p0ZlBmQ2owYTRuTis4dkcwZGI2TlZ5?=
 =?utf-8?B?Ymc0aDhZa3dqV0lza3hRYkNIdlBnMWJjbFhHYStQUG9ZaThJMUFIY0ZyTW5Q?=
 =?utf-8?B?Ukw3eDUvUkFtYkp6Q3J3NStkQVZnOG90RnhiRkpXRS92YXlHWWJTdGRPSW9z?=
 =?utf-8?B?OG5CR25Cc2luYTRLSUc0dXVSQmVFUjZzVU9hdmZqcC9Jblhad0pDMk84dkgx?=
 =?utf-8?B?ZFQ0em5WQUxKY2ZhQXpKT2U1cU9BMjNyN0w2UlV5Y0lkU0tpRzZSdnhwbUc2?=
 =?utf-8?B?RlVBMVBkVjRYWXNTT21vRUszWVRtZGVMSXpTTVQyRmhmRnRleEpub01kV0t3?=
 =?utf-8?B?c0NvZS85SEpJcDNPUWtLUEF6a1I2MjZhOEM5dW4yZFY2TWFLQXRWN0k0SU5O?=
 =?utf-8?B?aTFQL0R5Y3pMNTR3WCszRFBVMFJubVoxby9JaGxHQlFzcVNwN3NFM1lIQ2Vx?=
 =?utf-8?B?U2M0M2FqYXp1VTZRRDFiaUlPL09ZMEZtMkMzbUZOUzBlMkJHWkN2RkNLanN6?=
 =?utf-8?B?dVJNaDBqOWlOZHQxV0dLY1QvWi9qdUhlZm1XTkErMFY4RGtrSldlcS8xZGRZ?=
 =?utf-8?B?OGFWVnNYU3BBRU1lMytELzhqZ3VjeG5wWGtKN2tjSDlmTHd5ZmhacXpZdmdX?=
 =?utf-8?B?UzBvVUYxQmszY09hNnVRN2ZrbVBjUTF5cllQL1NYK0MwM2ZJRnpVa1pKTDA0?=
 =?utf-8?B?Y3UzeERwSFpOYlBSUFByZVc5OWZPeXp3RUxnTkVQc0tHSVpZREJwYzNGVHVO?=
 =?utf-8?B?VUFKRnY5VG1BN0FTaFpnQUNLck02UjA4bzFHY3M2cjZSN2hKT0pEdU5GMFZm?=
 =?utf-8?B?WU9rQ2M5T2p2dy9Cc3ZTOVI2V0d6RUZxL1hnOXQ1UEhFTXhtR0M0V0hNOHJi?=
 =?utf-8?B?ajBXZzRzUktIeS9yY3FLbVBJdjhBd2s4TFRHTkJHZExocGR2Y3IyNmludVdT?=
 =?utf-8?B?MExDRldxTmJiRGFoSFNKeWRRbUNxZzllQmRuMHQ0bjlCUmc9PQ==?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(376011)(1800799021)(36860700010)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 Jun 2024 10:43:18.2469
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3175ba24-1800-4ca4-a070-08dc943a75ad
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	BN2PEPF000044A9.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR12MB4469

Hi Julien,

On 24/06/2024 12:22, Julien Grall wrote:
> 
> 
> Hi Michal,
> 
> On 21/06/2024 10:22, Michal Orzel wrote:
>> As a follow up to commit cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem
>> should be direct-mapped for direct-mapped domains") add a check to
>> request that both host and guest physical address must be supplied for
>> direct mapped domains. Otherwise return an error to prevent unwanted
>> behavior.
>>
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> 
> I would argue that it should have a Fixes tag:
> 
> Fixes: 988f1c7e1f40 ("xen/arm: static-shmem: fix "gbase/pbase used
> uninitialized" build failure")
> 
> Mainly because while you fixed the build, it was missing the check in
> the "else" part.
> 
> I am happy to update it on commit if you are ok with the change.

Yes, I'm fine with it.

> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>
> 
> Cheers,
> 
> --
> Julien Grall

~Michal


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 10:58:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 10:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746497.1153563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhOW-0008FE-Ln; Mon, 24 Jun 2024 10:57:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746497.1153563; Mon, 24 Jun 2024 10:57:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhOW-0008F7-J6; Mon, 24 Jun 2024 10:57:56 +0000
Received: by outflank-mailman (input) for mailman id 746497;
 Mon, 24 Jun 2024 10:57:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLhOV-0008E5-FB
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 10:57:55 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b65aabe-3218-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 12:57:54 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ec52fbb50aso24936051fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 03:57:54 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1fa3229f936sm23317445ad.85.2024.06.24.03.57.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 03:57:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b65aabe-3218-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719226673; x=1719831473; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=SuilJMjDDlKDCz8DRJVWH4zo9JVt7ObGMdKoDOJIBYI=;
        b=HSxCo086w16N2bYqecCNx96v8d1OikWqRO/SmDxTmlAHY139rj8L0djLJtSpTW/9EP
         vdHptHdAOMM+2/i3pBTWq0IHqxFEjt/ef07CARX0Fut6Q3W2GPC74vI3n2liMxXNzqkn
         hmWubmJvCxmfkZGIMlAShN7XyOeDYDOsHzhKMySCb6RLA3XPjijuA9pJkFL/NFaiOKNP
         qIm2zgDWiM1Gh02ZTBmgXTRM+zQnH863cD2eod77wy7krargPRY5t1QbgQH/MrCBOKqu
         kqwUQf2GWdk/ZsF3LA+VYbcBaggePk8sswKgOtBCpkfgwu1yOLAkPAgbtOTryURl97tF
         21rA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719226673; x=1719831473;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=SuilJMjDDlKDCz8DRJVWH4zo9JVt7ObGMdKoDOJIBYI=;
        b=HyAQg2hf4S1O3F03GQk9zMCpbt6UMhEz+zBEoLPV+DgGGBcjqC1Oh66v4ENKyWE5wJ
         OukN4mJIlSamLceq8wjHPKB4mdWPMMKyugXhkyh6QEBgQOBF1k8uwgJRnuy9wU1sx8I2
         ZwnRrafjH5406VUrq8aoYglLaMm4aFkxTAEA0fSE8Mn9IsQRMUI97V5XFkmF41Xhra/p
         qxGUz3ceAKZiu5dl5ZcfWMy7jM1Xk1SHU3iVLRm+hHTgkhOOL2HxZYzJvm0cCmDTTqlt
         8HrmpwTuJp7U2KOxTPzRXLJv/FF8bYr9yWx390DZZLtvs4dn8Cf0L3fHwplW4E7r3AuB
         OsOw==
X-Forwarded-Encrypted: i=1; AJvYcCVU6vtwIT5ieerg0DXKqY2Oxohsp5Zxujk6U7dJdhJzaKOKbL9swSxcz3nsNZEaYnU2w+svNorzGXApkk+53PZ2nEB99MmuGQu/dndifPw=
X-Gm-Message-State: AOJu0YyC9NvEfGMXozla/dhuEFEUtYfYMzUyWR6qmTHRC/8O1XBf8cpI
	v20ex137Juf6F2ufgXqTqG1M9KEZRVRjRfNxNxmjk+MUDAJv0moS0eYEUHhYwJBqZF+wQw5AS+c
	=
X-Google-Smtp-Source: AGHT+IHWfIDFZLgcXocOENQh4oTAOERwiaNz64iMpJcrS52F5BQIuj479xipV9V8gWRzXyhz/QhUuQ==
X-Received: by 2002:a2e:3018:0:b0:2ec:588d:7eb4 with SMTP id 38308e7fff4ca-2ec5b30bd1fmr32544541fa.15.1719226673434;
        Mon, 24 Jun 2024 03:57:53 -0700 (PDT)
Message-ID: <1149f3da-480e-4949-924b-6cdf39b1e17f@suse.com>
Date: Mon, 24 Jun 2024 12:57:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 01/13] automation/eclair: fix deviation of MISRA C
 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <c43a32405cc949ef5bf26a2ca1d1cc7ee7f5e664.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <c43a32405cc949ef5bf26a2ca1d1cc7ee7f5e664.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Escape the final dot of the comment and extend the search of a
> fallthrough comment up to 2 lines after the last statement.
> 
> Fixes: a128d8da913b21eff6c6d2e2a7d4c54c054b78db "automation/eclair: add deviations for MISRA C:2012 Rule 16.3"

Nit: Yes, the respective doc says "at least 12 digits", but please also keep
to that going forward.

Jan



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 11:08:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 11:08:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746504.1153572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhYe-0001sW-JG; Mon, 24 Jun 2024 11:08:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746504.1153572; Mon, 24 Jun 2024 11:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhYe-0001sP-Ge; Mon, 24 Jun 2024 11:08:24 +0000
Received: by outflank-mailman (input) for mailman id 746504;
 Mon, 24 Jun 2024 11:08:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dlsF=N2=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sLhYd-0001sH-DE
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 11:08:23 +0000
Received: from mail-qk1-x72a.google.com (mail-qk1-x72a.google.com
 [2607:f8b0:4864:20::72a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 10d9b3e2-321a-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 13:08:21 +0200 (CEST)
Received: by mail-qk1-x72a.google.com with SMTP id
 af79cd13be357-79776e3e351so324295985a.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 04:08:21 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b51ef32f9csm33060646d6.91.2024.06.24.04.08.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 04:08:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10d9b3e2-321a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719227300; x=1719832100; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=+5qC9sjfZW5xvtM27k2obmnqE9Kc0AqhK5r2h4CzQCk=;
        b=VUN715t7gpKRgp7oMyWUJwhMmfABgUccbNQ4B/OBwbNM+L8UIx8vS16fF+lUP9WReh
         8Evxjbj5gsnfqfS5DAZzKPxCzsWjAi/v1RdvHeD6dNiYsQATwFHlLdQuhHAg/mGkzi3P
         BB6ekJ5TlmRmTwoodHUMk0cSBI0nc95c0VPMI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719227300; x=1719832100;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=+5qC9sjfZW5xvtM27k2obmnqE9Kc0AqhK5r2h4CzQCk=;
        b=vGrehJdSamZqEtimV7zojGLM6r4kbzt19uUNx9DnHdBl7zytRbmA57oeLUwiWJ0pJJ
         rQ/xUVs/HWXMTyWPzn4C+MLSQMhXMmQv0Fue2+n7heB1EDGgqtPdyijjH2S3z2+hN9IK
         NhvGW37r8dDdf9RiFp40kp7WHbQXQRxPEDTLphN+aEYoveoUXQdrRPmAEG+1tOuxWmE/
         jM5UybFkgyzPwt0ZuYaX7CBiDwMB8fbChDicYpm2G+gSsasjp8f+DUSrH0Dd7FY87kOa
         UAZ+NEjdK8EBrvZUWW+/lWX3y7rmjUdtwMhkFfVY5ABrU0ozBEOakTVkrT2Aj8t+ZOa0
         bnSw==
X-Gm-Message-State: AOJu0YxGUvfgmbLR7BV9wP0sovabAxeie19qcpoj+EC7fq3pYEoW5AKs
	TjQid7amUzow55pLW2+JpOlt46Hd5hFKGbxiruvJUjBKgvcv6zorNTA0tuHI8GhM7B/jDRb7hZb
	i
X-Google-Smtp-Source: AGHT+IG0B2zg5ewAzZld7SrzOhRA0mZZUneC/LOwnH2XbW/B2ZbTeoZBmXGyZnkDg7TXceUwal4guA==
X-Received: by 2002:ad4:50c8:0:b0:6b5:165:2f78 with SMTP id 6a1803df08f44-6b53635fd93mr54361946d6.13.1719227299753;
        Mon, 24 Jun 2024 04:08:19 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH for-4.19] tools/xenalyze: Remove argp_program_bug_address
Date: Mon, 24 Jun 2024 11:49:30 +0100
Message-Id: <20240624104930.1951521-1-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xenalyze sets argp_program_bug_address to my old Citrix address.  This
was done before xenalyze was in the xen.git tree; and it's the only
program in the tree which does so.

Now that xenalyze is part of the normal Xen distribution, it should be
obvious where to report bugs.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
Freeze exception justification: This is only a change in
documentation.

CC: Anthony PERARD <anthony@xenproject.org>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
---
 tools/xentrace/xenalyze.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index d95e52695f..adc96dd7e4 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -10920,9 +10920,6 @@ const struct argp parser_def = {
     .doc = "",
 };
 
-const char *argp_program_bug_address = "George Dunlap <george.dunlap@eu.citrix.com>";
-
-
 int main(int argc, char *argv[]) {
     /* Start with warn at stderr. */
     warn = stderr;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 11:09:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 11:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746510.1153582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhZj-0002gQ-SF; Mon, 24 Jun 2024 11:09:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746510.1153582; Mon, 24 Jun 2024 11:09:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhZj-0002gJ-Pl; Mon, 24 Jun 2024 11:09:31 +0000
Received: by outflank-mailman (input) for mailman id 746510;
 Mon, 24 Jun 2024 11:09:30 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Bblz=N2=arm.com=robin.murphy@srs-se1.protection.inumbo.net>)
 id 1sLhZi-0002er-8W
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 11:09:30 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 38d9cd84-321a-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 13:09:28 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 1DB2DDA7;
 Mon, 24 Jun 2024 04:09:52 -0700 (PDT)
Received: from [10.57.74.124] (unknown [10.57.74.124])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 4C7383F73B;
 Mon, 24 Jun 2024 04:09:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38d9cd84-321a-11ef-90a3-e314d9c70b13
Message-ID: <4c941977-868a-4bd0-9c57-eb58255d95bf@arm.com>
Date: Mon, 24 Jun 2024 12:09:23 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH] iommu/xen: Add Xen PV-IOMMU driver
To: Baolu Lu <baolu.lu@linux.intel.com>, Teddy Astie
 <teddy.astie@vates.tech>, Jason Gunthorpe <jgg@ziepe.ca>
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
 <20240619163000.GK791043@ziepe.ca>
 <750967b7-252f-4523-872f-64b79358c97c@vates.tech>
 <4ba90f86-fd14-4d2a-b7a0-c3eaab243565@linux.intel.com>
From: Robin Murphy <robin.murphy@arm.com>
Content-Language: en-GB
In-Reply-To: <4ba90f86-fd14-4d2a-b7a0-c3eaab243565@linux.intel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2024-06-23 4:21 am, Baolu Lu wrote:
> On 6/21/24 11:09 PM, Teddy Astie wrote:
>> Le 19/06/2024 à 18:30, Jason Gunthorpe a écrit :
>>> On Thu, Jun 13, 2024 at 01:50:22PM +0000, Teddy Astie wrote:
>>>
>>>> +struct iommu_domain *xen_iommu_domain_alloc(unsigned type)
>>>> +{
>>>> +    struct xen_iommu_domain *domain;
>>>> +    u16 ctx_no;
>>>> +    int ret;
>>>> +
>>>> +    if (type & IOMMU_DOMAIN_IDENTITY) {
>>>> +        /* use default domain */
>>>> +        ctx_no = 0;
>>> Please use the new ops, domain_alloc_paging and the static identity 
>>> domain.
>> Yes, in the v2, I will use this newer interface.
>>
>> I have a question on this new interface: is it valid to not have an
>> identity domain (with the "default domain" being blocking)? In the
>> current implementation it doesn't really matter, but at some point we
>> may want to allow not having it (thus making this driver mandatory).
> 
> It's valid to not have an identity domain if "default domain being
> blocking" means a paging domain with no mappings.
> 
> In the iommu driver's iommu_ops::def_domain_type callback, just always
> return IOMMU_DOMAIN_DMA, which indicates that the iommu driver doesn't
> support identity translation.

That's not necessary - if neither ops->identity_domain nor 
ops->domain_alloc(IOMMU_DOMAIN_IDENTITY) gives a valid domain then we 
fall back to IOMMU_DOMAIN_DMA anyway.

Thanks,
Robin.


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 11:10:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 11:10:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746516.1153592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhap-00045c-5d; Mon, 24 Jun 2024 11:10:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746516.1153592; Mon, 24 Jun 2024 11:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhap-00045V-35; Mon, 24 Jun 2024 11:10:39 +0000
Received: by outflank-mailman (input) for mailman id 746516;
 Mon, 24 Jun 2024 11:10:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLHn=N2=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sLhan-00045N-Ni
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 11:10:37 +0000
Received: from mail-qk1-x72b.google.com (mail-qk1-x72b.google.com
 [2607:f8b0:4864:20::72b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 611fa791-321a-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 13:10:35 +0200 (CEST)
Received: by mail-qk1-x72b.google.com with SMTP id
 af79cd13be357-7960454db4fso298693085a.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 04:10:35 -0700 (PDT)
Received: from ?IPV6:2a00:23ee:1630:2367:7a43:4e50:dd73:6d7e?
 ([2a00:23ee:1630:2367:7a43:4e50:dd73:6d7e])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b546709673sm12739576d6.37.2024.06.24.04.10.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 04:10:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 611fa791-321a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719227435; x=1719832235; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=0CAn4rvUqH0DXyr9yWg9CotOfQ5WDHHhm2sLzdc0xcA=;
        b=EgMeWN+IhKpJe9ajYuteSdUs2HmtmjIi6b2vTZamuQO8y0gXftY7RvrXwu4DdxqQsv
         IBm4kMqCTm+X1fXYlTTszCIfaylyxeVUb5BbQt2xlRtoo6BVbuKlYZRhMGGV9bLpezhx
         dI2jo1GUc/9JWsMDB5d6SQQgw5N1VepHtZNs0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719227435; x=1719832235;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0CAn4rvUqH0DXyr9yWg9CotOfQ5WDHHhm2sLzdc0xcA=;
        b=rJ8PgwL/RIBqgiwqVK7kd7y+sBkMOQM2mSCfG4Xz1dmQxqrxLWpkKhPVt/K1na6/NW
         QKIIrbZnvzZoXFDgx4WhE91YbChTRImO0b/p7YDWfQu9CdFf3Y084HdPPJq5lif523qC
         wgWyZnsWFbmuKMRJCw5ONqa9A2VL+0c6iZS4X+6gX/FdbO7lBBDxZAhOwQXrAeMM0keu
         z0PobCL6qE7/PgJo+EXqLeevMu5xq5T+VKUjKUavY5wWB/46tQWrOv/TjW6yXrasrLo3
         NauOlXDAoSSwAroywfWtNqnChMoKbx8e8+kFnIQKqQwqJg/Jhrzor4huCCgrgwFkV29F
         FrSQ==
X-Forwarded-Encrypted: i=1; AJvYcCXvsztyOi+Vp7261EBAfoPuwC02ucWFM30EFIf0I0D1D+7kPsgaD+gHHKld//UfzQ4GaFCXADOqUrys57Y5xSN3NbvOCFFA2neyaSKeyG4=
X-Gm-Message-State: AOJu0YwpPF35akjUeEWL0lXfDhM9gQk3819zxxrr7AqgVv0nqwoO8N6f
	g3WUVrZeZHvS+yW8LmsQ4fb1eL7Cf6aPGcNrYRtxesMXY+o1ezoWraoJzLgJu3s=
X-Google-Smtp-Source: AGHT+IE62tYq3hJRY9J7nxXAW7yV1uDYrrvuf/IVwHJK97s3w3yipm5B3zSl63p96vzEWOpk4fR+Ew==
X-Received: by 2002:ad4:4429:0:b0:6af:69c0:e559 with SMTP id 6a1803df08f44-6b5364f7398mr47099046d6.50.1719227434684;
        Mon, 24 Jun 2024 04:10:34 -0700 (PDT)
Message-ID: <ea121cba-3657-4eb1-9c20-7bd23fe76f92@citrix.com>
Date: Mon, 24 Jun 2024 12:10:31 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] tools/xenalyze: Remove argp_program_bug_address
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony@xenproject.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <20240624104930.1951521-1-george.dunlap@cloud.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <20240624104930.1951521-1-george.dunlap@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24/06/2024 11:49 am, George Dunlap wrote:
> xenalyze sets argp_program_bug_address to my old Citrix address.  This
> was done before xenalyze was in the xen.git tree; and it's the only
> program in the tree which does so.
>
> Now that xenalyze is part of the normal Xen distribution, it should be
> obvious where to report bugs.
>
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 11:19:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 11:19:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746525.1153603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhir-0005Jk-UE; Mon, 24 Jun 2024 11:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746525.1153603; Mon, 24 Jun 2024 11:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLhir-0005Jd-RC; Mon, 24 Jun 2024 11:18:57 +0000
Received: by outflank-mailman (input) for mailman id 746525;
 Mon, 24 Jun 2024 11:18:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLhip-0005JT-R5; Mon, 24 Jun 2024 11:18:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLhip-0008Cz-JU; Mon, 24 Jun 2024 11:18:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLhip-00009X-5w; Mon, 24 Jun 2024 11:18:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLhip-0003DL-5V; Mon, 24 Jun 2024 11:18:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LpBvvPnMiLZeRegP0NmzMwbRenV8Fc+UHKIJ1bGMBJ8=; b=Gr6tUlpjrAKSLFjppCveBzusJR
	3S8gLuu0b5cMmgzjmDgNteB3hraJjHRadM4ff0E8v5+mABeEyIVPbkW4p4V//dRAgVlvLo7udHmSE
	IxQwVqzCCRl/TDBBHX7yh7KwJJvlRaFsN0Pvjta+32prN2igGoYuX3BZxv3et5qSIM+4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186465-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186465: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
X-Osstest-Versions-That:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Jun 2024 11:18:55 +0000

flight 186465 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186465/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186458
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186458
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186458
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186458
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186458
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186458
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
baseline version:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Last test of basis   186465  2024-06-24 01:51:55 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:09:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746536.1153613 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLiVF-0004Kp-Ll; Mon, 24 Jun 2024 12:08:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746536.1153613; Mon, 24 Jun 2024 12:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLiVF-0004Ki-I9; Mon, 24 Jun 2024 12:08:57 +0000
Received: by outflank-mailman (input) for mailman id 746536;
 Mon, 24 Jun 2024 12:08:56 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OGGw=N2=bounce.vates.tech=bounce-md_30504962.667961d4.v1-eb1f2d732bc943e28a3af2ae209cd8cc@srs-se1.protection.inumbo.net>)
 id 1sLiVD-0004KW-Ue
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:08:56 +0000
Received: from mail134-3.atl141.mandrillapp.com
 (mail134-3.atl141.mandrillapp.com [198.2.134.3])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8625700c-3222-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 14:08:54 +0200 (CEST)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-3.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4W76FJ2PSWzDRHxX5
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 12:08:52 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 eb1f2d732bc943e28a3af2ae209cd8cc; Mon, 24 Jun 2024 12:08:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8625700c-3222-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719230932; x=1719491432;
	bh=ik4x/9rcwnuDs93PoEKkslGjBhp79Yp9oN8ISLlWzd8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=Qxgc4NgX8Bfdog9+CpWLLia1JnEjOqUnITjFS4Lt8ZWnYYNKxOQEP5Hz0xNV5ijqx
	 RQFG81XJJH6PgRjOZiQ3lQx1ykn2sQbdNKOV3aALzU+vLMzsL34ri1zWf3Z3TQFuUk
	 NXOylneF6TjqWX+b0bjZ4bIXULIw+15RjgXJBUWOvW7dD/5jIVZEr8FUN6HdaCDyG0
	 5IvOVkAlAPrQmtpQewqAsV8l8oAvL9U12SWUeyZe5X1beEmQxL/59BqkD90gyO1lU0
	 76NhpX5x2Ej4lXY3wtxNTidoM0U99FaSHOShb1bJN3m98HXaxQ9jYwS+p87764Ss1F
	 sBceCDYNrNS9Q==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719230932; x=1719491432; i=anthony.perard@vates.tech;
	bh=ik4x/9rcwnuDs93PoEKkslGjBhp79Yp9oN8ISLlWzd8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=xYn9sCW7SsE6lQJSaaw/BEFQXXi0Cw+1XIfWLbOxJlM60c2wRonK6eelQN5NT5Li4
	 6jeu6huBd6/nN/4Hn9IBTeVx2EY9NWwubW8sDNr/0kd1smVyYpb8tCjxxrfByAdue8
	 eLKUsHUr8bNXkbHsev8isCs67usVhvHULjTEvjLLvI98XH/k9YygeFYZBKnZIImj+5
	 FcgC6T8DPHWw+WeX02Q0C01NkXDQDUgKBzmKU7bu29lrMkDMoiTtPvxur2Ti81g73U
	 0RMc2A23YV8qzSbUJOjQlwGkHpes8bLenWKL4Xo0yVuf93v3QTeyP6rYdloe8VEBae
	 BcK0pX6Uun7ew==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[XEN=20PATCH=20v10=204/5]=20tools:=20Add=20new=20function=20to=20get=20gsi=20from=20dev?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719230930251
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>, "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>
Message-Id: <Znlh0QnvdvkqOmyH@l14>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com> <20240617090035.839640-5-Jiqian.Chen@amd.com> <ZnQ+/y/AGyasDGHY@l14> <BL1PR12MB5849AB68A6D6593A464D4D7EE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
In-Reply-To: <BL1PR12MB5849AB68A6D6593A464D4D7EE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.eb1f2d732bc943e28a3af2ae209cd8cc?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240624:md
Date: Mon, 24 Jun 2024 12:08:52 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Fri, Jun 21, 2024 at 08:34:11AM +0000, Chen, Jiqian wrote:
> On 2024/6/20 22:38, Anthony PERARD wrote:
> > On Mon, Jun 17, 2024 at 05:00:34PM +0800, Jiqian Chen wrote:
> >> diff --git a/tools/include/xencall.h b/tools/include/xencall.h
> >> index fc95ed0fe58e..750aab070323 100644
> >> --- a/tools/include/xencall.h
> >> +++ b/tools/include/xencall.h
> >> @@ -113,6 +113,8 @@ int xencall5(xencall_handle *xcall, unsigned int op,
> >>               uint64_t arg1, uint64_t arg2, uint64_t arg3,
> >>               uint64_t arg4, uint64_t arg5);
> >>  
> >> +int xen_oscall_gsi_from_dev(xencall_handle *xcall, unsigned int sbdf);
> > 
> > I don't think that's an appropriate library for this new feature.
> > libxencall is a generic library for making hypercalls.
> Do you have a suggested place to put this new function?

This is an internal function, which doesn't need to be exposed in a
public interface, but the implementation is used by another function, so
it can be moved into libxenctrl.


-- 

Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:26:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:26:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746542.1153622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLimZ-00076q-1q; Mon, 24 Jun 2024 12:26:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746542.1153622; Mon, 24 Jun 2024 12:26:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLimY-00076j-Uz; Mon, 24 Jun 2024 12:26:50 +0000
Received: by outflank-mailman (input) for mailman id 746542;
 Mon, 24 Jun 2024 12:26:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLimY-00076d-7d
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:26:50 +0000
Received: from mail-wr1-x433.google.com (mail-wr1-x433.google.com
 [2a00:1450:4864:20::433])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 06a58c1d-3225-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:26:48 +0200 (CEST)
Received: by mail-wr1-x433.google.com with SMTP id
 ffacd0b85a97d-362f62ae4c5so2450019f8f.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:26:48 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3d43f6sm61430295ad.182.2024.06.24.05.26.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 05:26:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06a58c1d-3225-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719232007; x=1719836807; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=BISiRXI8VY0evOcJTWm+6GCK6MEtQaD4JyxX2gs5bvE=;
        b=f9yyvnmBApXk9V3Ik3L6CzjXlLm3P+CVNfHslE65/qUkmbPAX7ofbuEUM2QoBtve7Y
         mNdW7jcB+mO+BBb1svFy5OoY3LCT3zSmuoP6AJ4spsX4OuObn7onAMCcojloqhNCFOJC
         ZIsWZ32FyzqRzDqZA0LMuC+00wkj9yTR/pfF5glGOtkVo2kNX8VGAeCRvYc4ZQ8PpN1z
         Wj38cYUftXY+FdAyJMLw0QYWsnonDOXP2JbvTz6oYaOudltFdX+iSn6rddy7akYuFGXg
         dH+dduTTi8zZLqM9+xKoiyh21zvWv+ys+/2oGOBB8gXyc2fW/UhUEo1xFrGb2veghbQG
         Uz3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719232007; x=1719836807;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=BISiRXI8VY0evOcJTWm+6GCK6MEtQaD4JyxX2gs5bvE=;
        b=kAq6m7RYs9FT8xFUGnWwLN09WCFSb7rLZIKmAVY+UoQj4G4JEFBIayiSvpr0DHyyxo
         17oSG/YehPBv11deJoT6dHSb8r2WXP4nM6xHbxmOY63Tpp2CmrkTmxdkG0yHdCJZIaLz
         G8FGC3++Sidf46Bfkn1TvyXHEQOvn8cdhkk4eEOr+ivnhb/EHMLVHQYs08OdQV/uVnJl
         A7/M63+M0ATVcx4dhH547atF96XNJC5N/lg1ZR4124pcl0iT+Fc4QqfmcfTMM2mAZGdG
         lNj0fqfFkMYnroLvU/474rdbC87EBvgCsxG5WCsJAcqb2lY9P69/1kQgWeaAKW08ydMU
         JFRw==
X-Gm-Message-State: AOJu0Yzty2tstTBUg4ze15c1MY2OMPeqr/lAPVf0HP+scQt7j+YAvCtd
	KgkLsTt9iJ817qPxj3i7Rc5BxoosxVzcmbFyoIkSTlDdzKpn7fVbAIla3vjD21ZFcYiMezP7BRQ
	=
X-Google-Smtp-Source: AGHT+IH4WPXDI4F+/OfjqRt/A2Ptuyfl1bT8g7viyJLsj3hJChWil4/ltAT2K+MmRrV/fK1LaY9UAQ==
X-Received: by 2002:adf:f512:0:b0:366:deae:bfac with SMTP id ffacd0b85a97d-366e3269282mr4065130f8f.12.1719232007417;
        Mon, 24 Jun 2024 05:26:47 -0700 (PDT)
Message-ID: <6fc55df2-5d92-4f3f-8eb3-69bd89bfea4e@suse.com>
Date: Mon, 24 Jun 2024 14:26:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.19] xen: re-add type checking to
 {,__}copy_from_guest_offset()
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

When re-working these macros to avoid UB in guest address calculations, I
failed to add explicit type checks to replace the implicit ones that had,
until then, happened in the assignments that were there anyway.

Fixes: 43d5c5d5f70b ("xen: avoid UB in guest handle arithmetic")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -86,6 +86,7 @@
 #define copy_from_guest_offset(ptr, hnd, off, nr) ({    \
     unsigned long s_ = (unsigned long)(hnd).p;          \
     typeof(*(ptr)) *_d = (ptr);                         \
+    (void)((hnd).p == _d);                              \
     raw_copy_from_guest(_d,                             \
                         (const void *)(s_ + (off) * sizeof(*_d)), \
                         (nr) * sizeof(*_d));            \
@@ -140,6 +141,7 @@
 #define __copy_from_guest_offset(ptr, hnd, off, nr) ({          \
     unsigned long s_ = (unsigned long)(hnd).p;                  \
     typeof(*(ptr)) *_d = (ptr);                                 \
+    (void)((hnd).p == _d);                                      \
     __raw_copy_from_guest(_d,                                   \
                           (const void *)(s_ + (off) * sizeof(*_d)), \
                           (nr) * sizeof(*_d));                  \


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:29:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:29:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746548.1153633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLiol-0007to-FY; Mon, 24 Jun 2024 12:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746548.1153633; Mon, 24 Jun 2024 12:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLiol-0007th-Bw; Mon, 24 Jun 2024 12:29:07 +0000
Received: by outflank-mailman (input) for mailman id 746548;
 Mon, 24 Jun 2024 12:29:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLiok-0007tZ-DZ
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:29:06 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 58002706-3225-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:29:04 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cfeso49606561fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:29:04 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1fa2921e21dsm28243335ad.94.2024.06.24.05.28.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 05:29:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58002706-3225-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719232144; x=1719836944; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=T49lwRJXT9LLlXuByh3ICxUBFnlr1lvpGV1zsmwF7II=;
        b=Bm3HaTUYp+Bvpo24AE1Ah8hl2LpEU8tn0swwQYCvLIp0B7J68t/GtDHbvvV2wwuB8J
         xlsesLTgVJjXUEV5qMiekaE4+/OymUYyO1Q6Uggt10oQ6UNRUWcqG6kUs/sPIP9ChVvy
         LDjeKr7q6M2h6sPlVlLhr6pOs0oNhrfCUvr6hJpIO0drxxxoBd+upNN2W9XgTFp/rUk7
         Q8OtOCuZ0Nl2Eq7vbyfwuCZAQ5wnN+8tSBurECM2CPWu4J1zhV51WFrJa+QsUYiTuD8m
         qEfndjhTm0DsZosy8/TZo02RjhEbRf6ghPyV9eVPqrl7+UgPsO0M+axR78h9Hw5OgdMv
         jPMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719232144; x=1719836944;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=T49lwRJXT9LLlXuByh3ICxUBFnlr1lvpGV1zsmwF7II=;
        b=sTI7Xn07xsjS584MgkRpRW/VHzmuu4+Xn9abSM7OPn8LKFmeuFLGwwmD+sARf1fFiF
         vCps/McVnr1Y1Dz+W9GJUD1g6O3XuEZCc8GQei1l4jfc19ZYuJbbr8L+JDngJ2rrWej7
         NKTsePnP4LWUN28i/ysHAMVB6/xmuqor0T5OkKtEyVTfWysd6ulUx2cnIKinQPrQARbZ
         uT3pIfyOlE74OOAjN2JlWFwvL9Qa/e8iQ9Y303RlWJqb0sb7GCQwItV6PvWeH9MWPFd3
         gvN3lVGTmupD4HEHnwHdt9fnr3JhBKUA25SOkBJea7pgLBnzP5WtvcCawe/tRecAk+M+
         zDxQ==
X-Gm-Message-State: AOJu0Yw1fnwtspyaiC00LrH25hjAmElnCxPBg4zYt2KRucAaQwwl3B1h
	ydhdhbKQ4BxCiZHYdB2ADBfLHkrd3UatvZpC312lUwvc4pm2ATaN6GHqtfx+3FM5FGEFxSBSh84
	=
X-Google-Smtp-Source: AGHT+IFQy9PeRggWTF/W705bvZkk2NMYDXLGeQ+j+VInT5+eS6MDMGHKHpKBJsqONdI8wByiG7+GDw==
X-Received: by 2002:a2e:98d6:0:b0:2ec:4d74:8025 with SMTP id 38308e7fff4ca-2ec593d93abmr28361281fa.20.1719232143755;
        Mon, 24 Jun 2024 05:29:03 -0700 (PDT)
Message-ID: <d0b9a1e0-5c70-42c5-9c63-2e3af82487b2@suse.com>
Date: Mon, 24 Jun 2024 14:28:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.19?] xen: avoid UB in guest handle field accessors
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Much like noted in 43d5c5d5f70b ("xen: avoid UB in guest handle
arithmetic"), address calculations involved in accessing a struct field
can overflow, too. Cast respective pointers to "unsigned long" and
convert type checking accordingly. Remaining arithmetic is, despite
there possibly being mathematical overflow, okay as per the C99 spec:
"A computation involving unsigned operands can never overflow, because a
result that cannot be represented by the resulting unsigned integer type
is reduced modulo the number that is one greater than the largest value
that can be represented by the resulting type." The overflow that we
need to guard against is checked for in array_access_ok().

While there add the missing (see {,__}copy_to_guest_offset()) is-not-
const checks to {,__}copy_field_to_guest().

Typically, but not always, no change to generated code; code generation
(register allocation) is different for at least common/grant_table.c.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I notice that {,__}copy_field_from_guest() are actually unused, which
may not be all that surprising: There's perhaps little point in copying
in just a single field, as then any further input fields of a struct
would likely also need copying that way (to avoid multi-read issues).
copy_field_to_guest() has a mere two callers, in x86. All other sites
use __copy_field_to_guest(). Overall there may hence be room here for
simplification / reduction of redundancy.

--- unstable.orig/xen/include/xen/guest_access.h	2024-06-24 13:48:30.384279937 +0200
+++ unstable/xen/include/xen/guest_access.h	2024-06-24 13:51:35.390248096 +0200
@@ -95,16 +95,23 @@
 /* Copy sub-field of a structure to guest context via a guest handle. */
 #define copy_field_to_guest(hnd, ptr, field) ({         \
     const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    raw_copy_to_guest(_d, _s, sizeof(*_s));             \
+    unsigned long d_ = (unsigned long)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((typeof_field(typeof(*(hnd).p), field) *)NULL == _s); \
+    raw_copy_to_guest((void *)(d_ + offsetof(typeof(*(hnd).p), field)), \
+                      _s, sizeof(*_s));                 \
 })
 
 /* Copy sub-field of a structure from guest context via a guest handle. */
 #define copy_field_from_guest(ptr, hnd, field) ({       \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    unsigned long s_ = (unsigned long)(hnd).p;          \
     typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    raw_copy_from_guest(_d, _s, sizeof(*_d));           \
+    (void)((typeof_field(typeof(*(hnd).p), field) *)NULL == _d); \
+    raw_copy_from_guest(_d,                             \
+                        (const void *)(s_ +             \
+                            offsetof(typeof(*(hnd).p), field)), \
+                        sizeof(*_d));                   \
 })
 
 #define copy_to_guest(hnd, ptr, nr)                     \
@@ -149,15 +156,22 @@
 
 #define __copy_field_to_guest(hnd, ptr, field) ({       \
     const typeof(&(ptr)->field) _s = &(ptr)->field;     \
-    void *_d = &(hnd).p->field;                         \
-    (void)(&(hnd).p->field == _s);                      \
-    __raw_copy_to_guest(_d, _s, sizeof(*_s));           \
+    unsigned long d_ = (unsigned long)(hnd).p;          \
+    /* Check that the handle is not for a const type */ \
+    void *__maybe_unused _t = (hnd).p;                  \
+    (void)((typeof_field(typeof(*(hnd).p), field) *)NULL == _s); \
+    __raw_copy_to_guest((void *)(d_ + offsetof(typeof(*(hnd).p), field)), \
+                        _s, sizeof(*_s));               \
 })
 
 #define __copy_field_from_guest(ptr, hnd, field) ({     \
-    const typeof(&(ptr)->field) _s = &(hnd).p->field;   \
+    unsigned long s_ = (unsigned long)(hnd).p;          \
     typeof(&(ptr)->field) _d = &(ptr)->field;           \
-    __raw_copy_from_guest(_d, _s, sizeof(*_d));         \
+    (void)((typeof_field(typeof(*(hnd).p), field) *)NULL == _d); \
+    __raw_copy_from_guest(_d,                           \
+                          (const void *)(s_ +           \
+                              offsetof(typeof(*(hnd).p), field)), \
+                          sizeof(*_d));                 \
 })
 
 #define __copy_to_guest(hnd, ptr, nr)                   \


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:33:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:33:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746554.1153642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLisn-0000vZ-UG; Mon, 24 Jun 2024 12:33:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746554.1153642; Mon, 24 Jun 2024 12:33:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLisn-0000vS-Ra; Mon, 24 Jun 2024 12:33:17 +0000
Received: by outflank-mailman (input) for mailman id 746554;
 Mon, 24 Jun 2024 12:33:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZTmZ=N2=bounce.vates.tech=bounce-md_30504962.66796788.v1-d58da3792a2e46e584d132937eda4056@srs-se1.protection.inumbo.net>)
 id 1sLism-0000vK-Lv
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:33:16 +0000
Received: from mail134-3.atl141.mandrillapp.com
 (mail134-3.atl141.mandrillapp.com [198.2.134.3])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ec947485-3225-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:33:14 +0200 (CEST)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-3.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4W76nN3H6rzDRHxQ8
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 12:33:12 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 d58da3792a2e46e584d132937eda4056; Mon, 24 Jun 2024 12:33:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ec947485-3225-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719232392; x=1719492892;
	bh=pwBR3KA6BhYJX7ge26Cw/EkOnRSGlcYsjSqQ3E5ejtA=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=G6WkQZF61bJn0gJol7tuDMU+L9cxVQgBGSvoiQFlaA3/XqXaJRokn06L4skBAvSUY
	 yLA5N213qoTHoWcgJaaFIqE3ZPTuXgOV+u6OIDjTNCs53EWOZ0EIP2nw8iy4p3tcyz
	 16vjbKQS/sxE1Km+T86B9t4QrDEriaORHGB5pZ+0ztzcbqjWEJau7ekCgivyOmT9vb
	 ykstGNw/2peZLPA/V5f0M69+E8+9rqFYv+TVAh/aH74N+Q4OkZLqKHt/CymkW/+m58
	 yZIHmb29LOZTYbBeXWDceAkoBrTGdpCMGrbnhokmMO6OT6oo8wGIczCj1oBwcNcKH/
	 tkB96aeC8QWhA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719232392; x=1719492892; i=anthony.perard@vates.tech;
	bh=pwBR3KA6BhYJX7ge26Cw/EkOnRSGlcYsjSqQ3E5ejtA=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=uDS3Y1uDUUE49iLczrV6Vz1I/VsnVhLZfAc1leyDFYux/wYSBTcC1Q2hKCslvLcgI
	 qqFVS99klr7nCBTbrKdsV3qBXMIBbuXG/b6JQV+zpkgSGJ4RUHRKiJ6HzBNrtoTI+5
	 P9z5h33NZ1bhZu2ZPCCvnLSxvV4E3DeogIRXQ27nXOcpJiHFrb0C/QugIZtHocfN5e
	 nCGIi2CeVAzNmkVhS0YCskJuq2srais3S7EQrx4qArqanzpEepRopya89Sk7piiLxV
	 FR2Cuubs3AGkgivslTF7A+DrllwJTBBMeuhNzOnG7A2oyDcM3skA52mhNoZ1MR6RK3
	 PwW8AU6/xTr6A==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[XEN=20PATCH=20v10=205/5]=20domctl:=20Add=20XEN=5FDOMCTL=5Fgsi=5Fpermission=20to=20grant=20gsi?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719232384688
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Jan Beulich <jbeulich@suse.com>, Keir Fraser <keir@xensource.com>, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>, "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>, xen-devel@lists.xenproject.org
Message-Id: <Znlnf2CHxCFadcIX@l14>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com> <20240617090035.839640-6-Jiqian.Chen@amd.com> <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com> <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com> <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com> <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com> <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com> <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
In-Reply-To: <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.d58da3792a2e46e584d132937eda4056?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240624:md
Date: Mon, 24 Jun 2024 12:33:12 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Fri, Jun 21, 2024 at 08:20:55AM +0000, Chen, Jiqian wrote:
> On 2024/6/20 18:42, Jan Beulich wrote:
> > On 20.06.2024 11:40, Chen, Jiqian wrote:
> >> On 2024/6/18 17:23, Jan Beulich wrote:
> >>> On 18.06.2024 10:23, Chen, Jiqian wrote:
> >>>> On 2024/6/17 23:32, Jan Beulich wrote:
> >>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
> >>>>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
> >>>>>>              rc = ERROR_FAIL;
> >>>>>>              goto out;
> >>>>>>          }
> >>>>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
> >>>>>> +#ifdef CONFIG_X86
> >>>>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
> >>>>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
> >>>>>
> >>>>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF, but
> >>>>> I didn't check if that can be used with the underlying hypercall(s). Otherwise
> >> From the commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf, DOMID_SELF is not allowed for XEN_DOMCTL_getdomaininfo.
> >> And now XEN_DOMCTL_getdomaininfo gets domain through rcu_lock_domain_by_id.
> >>
> >>>>> you want to pass the actual domid of the local domain here.
> >> What is the local domain here?
> > 
> > The domain your code is running in.
> > 
> >> What is method for me to get its domid?
> > 
> > I hope there's an available function in one of the libraries to do that.
> I didn't find a related function.
> Hi Anthony, do you know?

Yes, I managed to find:
LIBXL_TOOLSTACK_DOMID
That's the value you can use instead of "0" to designate dom0.
(That was harder than necessary to find.)

Cheers,

-- 

Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:42:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:42:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746562.1153652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLj1n-0002o4-RM; Mon, 24 Jun 2024 12:42:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746562.1153652; Mon, 24 Jun 2024 12:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLj1n-0002nx-Oi; Mon, 24 Jun 2024 12:42:35 +0000
Received: by outflank-mailman (input) for mailman id 746562;
 Mon, 24 Jun 2024 12:41:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SfmV=N2=gmail.com=sherrellbc@srs-se1.protection.inumbo.net>)
 id 1sLj0R-0002m5-Vw
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:41:12 +0000
Received: from mail-qk1-x72a.google.com (mail-qk1-x72a.google.com
 [2607:f8b0:4864:20::72a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 07defd4a-3227-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:41:09 +0200 (CEST)
Received: by mail-qk1-x72a.google.com with SMTP id
 af79cd13be357-796df041d73so290771685a.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:41:09 -0700 (PDT)
Received: from smtpclient.apple (ool-44c00bfa.dyn.optonline.net.
 [68.192.11.250]) by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce8b4b38sm310240685a.45.2024.06.24.05.41.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 24 Jun 2024 05:41:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07defd4a-3227-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719232868; x=1719837668; darn=lists.xenproject.org;
        h=references:to:cc:in-reply-to:date:subject:mime-version:message-id
         :from:from:to:cc:subject:date:message-id:reply-to;
        bh=xwRbpV2rQzCgu1iVPcIDYgw25xymsgeZ9bLZT5Ai6js=;
        b=W4lYnq5wH5ZF2R2dtUyO08I4N7I16/+ME2BY62M3ZQro6FraWJmlZygmVhozVCvMTs
         ZlfQSWid78JZLbYq0orY2MnlN8FCK5PtgepRC6dFamcsVJn1TJTZ/0VE8UyCZtabSgfD
         /o0lgmnKqg2YUd2OlxF3cumwFTmUvSRg/HWWnVS/SIkbbto9eOuVgfZhyCfsZy/9ugnv
         l4KqN233jhJzVT/SHKf4IM5LfFz4AIBKl3Sc2u+9LAQycVydYBXmyJ8RdhzTwdvCPIAs
         knhqR3Ng0WcBZA8l+U9SZHseOUsODwx4SKUthOhT9mztOjBi0m/f0TVL07gGiyQmigz1
         gmEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719232868; x=1719837668;
        h=references:to:cc:in-reply-to:date:subject:mime-version:message-id
         :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=xwRbpV2rQzCgu1iVPcIDYgw25xymsgeZ9bLZT5Ai6js=;
        b=e5/e7tYka9Z3kaTFKY02i0Q1Vupp0BmffPJkJTNozVa7RAGXTFfgEGCyIg/RBRAxE/
         SprKTr7kzCsSLJH49Ik/6Ob9cjBQRm3vcemdQN5aiFtnNnqiEs0tvLRMVYs/UCfZj/wd
         Z9rORAAPNrQPWD1/lBj7kHCc28L9S5a/YjZ5zWUO42B7y0lBF1a+H2TSkEO26jkI7j1S
         EoDyHKK5UUUifz71da2dMqBLus+8Khvad72JgOrV8+aq8IyYGslLydEtqPYIebE3SGzU
         dajLxDROnGNAHOkn28pmgrevF0gHanR6q2oRq2BlOfYIKKNZMmWiuwWZB0/Oj9IQnV6S
         8Luw==
X-Forwarded-Encrypted: i=1; AJvYcCWANqkSeewZkzsg8wU3QU+XLaNUOHjLOnaXtsjhCgsCTXKst2Sw9Lhzpx8uTHawFolR6e5NQa2WzAcQ20XYd+nw/1dXB/WxLgCRmL/AutA=
X-Gm-Message-State: AOJu0Yzlx6RC1tFtJ0shUL0N8D0x2RrYCBktzZeVBskGgrtb3wjRiTDO
	1dAbA6Y7b4ju7gESdTMzk5UAp9meTnebjA4t/XteZJuCZsZQ3vxw
X-Google-Smtp-Source: AGHT+IGimLktMge4+kQ9LtViHT5gUtbraY4xfRhGiRB+Rmzm9EmmOj+8GOt3JUtYhlh6fn9bJdtoXA==
X-Received: by 2002:a05:620a:4708:b0:795:5efe:e85 with SMTP id af79cd13be357-79be44963d8mr563050985a.0.1719232868059;
        Mon, 24 Jun 2024 05:41:08 -0700 (PDT)
From: Branden Sherrell <sherrellbc@gmail.com>
Message-Id: <7BAC7BB5-C321-4C34-884A-21CC12F761BB@gmail.com>
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_E4286BFD-732C-486E-9381-1DBCB2C9C036"
Mime-Version: 1.0 (Mac OS X Mail 16.0 \(3731.700.6\))
Subject: Re: E820 memory allocation issue on Threadripper platforms
Date: Mon, 24 Jun 2024 08:40:56 -0400
In-Reply-To: <ZafOGEwms01OFaVJ@macbook>
Cc: Jan Beulich <jbeulich@suse.com>,
 Patrick Plenefisch <simonpatp@gmail.com>,
 xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>
To: =?utf-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
References: <CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com>
 <1708c3d7-662a-44bc-b9b3-4ab9f8642d7b@suse.com>
 <dcaf9d8d-ad5a-4714-936b-79ed0e587f9d@suse.com>
 <CAOCpoWeowZPuQTeBp9nu8p8CDtE=u++wN_UqRoABZtB57D50Qw@mail.gmail.com>
 <ac742d12-ec91-4215-bb42-82a145924b4f@suse.com>
 <CAOCpoWfQmkhN3hms1xuotSUZzVzR99i9cNGGU2r=yD5PjysMiQ@mail.gmail.com>
 <fa23a590-5869-4e11-8998-1d03742c5919@suse.com> <ZaeoWBV8IEZap2mr@macbook>
 <15dcef46-aaa8-4f71-bd5c-355001dd9188@suse.com> <ZafOGEwms01OFaVJ@macbook>
X-Mailer: Apple Mail (2.3731.700.6)


--Apple-Mail=_E4286BFD-732C-486E-9381-1DBCB2C9C036
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

Hi all,

I recently found this mailing list thread when searching for information
on a related issue regarding conflicting E820 on a Threadripper
platform. For those interested in additional data points, I am using the
ASUS WRX80E-SAGE SE Wifi II motherboard that presents the following E820
to Xen:

(XEN) EFI RAM map:
(XEN)  [0000000000000000, 0000000000000fff] (reserved)
(XEN)  [0000000000001000, 000000000008ffff] (usable)
(XEN)  [0000000000090000, 0000000000090fff] (reserved)
(XEN)  [0000000000091000, 000000000009ffff] (usable)
(XEN)  [00000000000a0000, 00000000000fffff] (reserved)
(XEN)  [0000000000100000, 0000000003ffffff] (usable)
(XEN)  [0000000004000000, 0000000004020fff] (ACPI NVS)
(XEN)  [0000000004021000, 0000000009df1fff] (usable)
(XEN)  [0000000009df2000, 0000000009ffffff] (reserved)
(XEN)  [000000000a000000, 00000000b5b04fff] (usable)
(XEN)  [00000000b5b05000, 00000000b8cd3fff] (reserved)
(XEN)  [00000000b8cd4000, 00000000b9064fff] (ACPI data)
(XEN)  [00000000b9065000, 00000000b942afff] (ACPI NVS)
(XEN)  [00000000b942b000, 00000000bb1fefff] (reserved)
(XEN)  [00000000bb1ff000, 00000000bbffffff] (usable)
(XEN)  [00000000bc000000, 00000000bfffffff] (reserved)
(XEN)  [00000000c1100000, 00000000c1100fff] (reserved)
(XEN)  [00000000e0000000, 00000000efffffff] (reserved)
(XEN)  [00000000f1280000, 00000000f1280fff] (reserved)
(XEN)  [00000000f2200000, 00000000f22fffff] (reserved)
(XEN)  [00000000f2380000, 00000000f2380fff] (reserved)
(XEN)  [00000000f2400000, 00000000f24fffff] (reserved)
(XEN)  [00000000f3680000, 00000000f3680fff] (reserved)
(XEN)  [00000000fea00000, 00000000feafffff] (reserved)
(XEN)  [00000000fec00000, 00000000fec00fff] (reserved)
(XEN)  [00000000fec10000, 00000000fec10fff] (reserved)
(XEN)  [00000000fed00000, 00000000fed00fff] (reserved)
(XEN)  [00000000fed40000, 00000000fed44fff] (reserved)
(XEN)  [00000000fed80000, 00000000fed8ffff] (reserved)
(XEN)  [00000000fedc2000, 00000000fedcffff] (reserved)
(XEN)  [00000000fedd4000, 00000000fedd5fff] (reserved)
(XEN)  [00000000ff000000, 00000000ffffffff] (reserved)
(XEN)  [0000000100000000, 000000703f0fffff] (usable)
(XEN)  [000000703f100000, 000000703fffffff] (reserved)

And of course the default physical link address of the x86_64 kernel is
16MiB, which clearly conflicts with the EfiACPIMemoryNVS region starting
at 0x4000000. On the latest Debian (12.5.0, bookworm) the decompressed
kernel is more than 60MiB, so it obviously overflows into the adjacent
region. I can also confirm that loading the Debian kernel at 2MiB works
as expected. Debian is built with CONFIG_RELOCATABLE=y, so it should be
capable of being loaded with this new feature in Xen.

I see the fix tracked by that ticket was implemented and committed
(dfc9fab0) on April 8, 2024, but it appears not to have made its way
into the latest (4.18) Xen release, even though more recent commits
appear to have been cherry-picked into that branch. When is this fix
expected to make it into a release?

Branden.

> On Jan 17, 2024, at 7:54 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Wed, Jan 17, 2024 at 11:40:20AM +0100, Jan Beulich wrote:
>> On 17.01.2024 11:13, Roger Pau Monné wrote:
>>> On Wed, Jan 17, 2024 at 09:46:27AM +0100, Jan Beulich wrote:
>>>> Whereas I assume the native kernel can deal with that as long as
>>>> it's built with CONFIG_RELOCATABLE=y. I don't think we want to
>>>> get into the business of interpreting the kernel's internal
>>>> representation of the relocations needed, so it's not really
>>>> clear to me what we might do in such a case. Perhaps the only way
>>>> is to signal to the kernel that it needs to apply relocations
>>>> itself (which in turn would require the kernel to signal to us
>>>> that it's capable of doing so). Cc-ing Roger in case he has any
>>>> neat idea.
>>>
>>> Hm, no, not really.
>>>
>>> We could do like multiboot2: the kernel provides us with some
>>> placement data (min/max addresses, alignment), and Xen let's the
>>> kernel deal with relocations itself.
>>
>> Requiring the kernel's entry point to take a sufficiently different
>> flow then compared to how it's today, I expect.
>
> Indeed, I would expect that.
>
>>> Additionally we could support the kernel providing a section with the
>>> relocations and apply them from Xen, but that's likely hm, complicated
>>> at best, as I don't even know which kinds of relocations we would have
>>> to support.
>>
>> If the kernel was properly linked to a PIE, there'd generally be only
>> one kind of relocation (per arch) that ought to need dealing with -
>> for x86-64 that's R_X86_64_RELATIVE iirc. Hence why (I suppose) they
>> don't use ELF relocation structures (for being wastefully large), but
>> rather a more compact custom representation. Even without building PIE
>> (presumably in part not possible because of how per-CPU data needs
>> dealing with), they get away with handling just very few relocs (and
>> from looking at the reloc processing code I'm getting the impression
>> they mistreat R_X86_64_32 as being the same as R_X86_64_32S, when it
>> isn't; needing to get such quirks right is one more aspect of why I
>> think we should leave relocation handling to the kernel).
>
> Would have to look into more detail, but I think leaving any relocs
> for the OS to perform would be my initial approach.
>
>>> I'm not sure how Linux deals with this in the bare metal case, are
>>> relocations done after decompressing and before jumping into the entry
>>> point?
>>
>> That's how it was last time I looked, yes.
>
> I've created a gitlab ticket for it:
>
> https://gitlab.com/xen-project/xen/-/issues/180
>
> So that we don't forget, as I don't have time to work into this right
> now, but I think it's important enough that we don't forget.
>
> For PV it's a bit more unclear how we want to deal with it, as it's
> IMO a specific Linux behavior that makes it fail to boot.
>
> Roger.
>
>


--Apple-Mail=_E4286BFD-732C-486E-9381-1DBCB2C9C036
Content-Transfer-Encoding: quoted-printable
Content-Type: text/html;
	charset=utf-8

<html><head><meta http-equiv=3D"content-type" content=3D"text/html; =
charset=3Dutf-8"></head><body style=3D"overflow-wrap: break-word; =
-webkit-nbsp-mode: space; line-break: after-white-space;">Hi =
all,<div><br></div><div><div>I recently found this mailing list thread =
when searching for information on a related issue regarding conflicting =
E820 on a Threadripper platform. For those interested in additional data =
points, I am using the ASUS WRX80E-SAGE SE Wifi II motherboard that =
presents the following E820 to Xen:</div><div><br></div><div><div><font =
face=3D"Courier New">(XEN) EFI RAM map:</font></div><div><font =
face=3D"Courier New">(XEN) &nbsp;[0000000000000000, 0000000000000fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000000001000, 000000000008ffff] =
(usable)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000000090000, 0000000000090fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000000091000, 000000000009ffff] =
(usable)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000000a0000, 00000000000fffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000000100000, 0000000003ffffff] =
(usable)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000004000000, 0000000004020fff] (ACPI =
NVS)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000004021000, 0000000009df1fff] =
(usable)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000009df2000, 0000000009ffffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[000000000a000000, 00000000b5b04fff] =
(usable)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000b5b05000, 00000000b8cd3fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000b8cd4000, 00000000b9064fff] (ACPI =
data)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000b9065000, 00000000b942afff] (ACPI =
NVS)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000b942b000, 00000000bb1fefff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000bb1ff000, 00000000bbffffff] =
(usable)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000bc000000, 00000000bfffffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000c1100000, 00000000c1100fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000e0000000, 00000000efffffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000f1280000, 00000000f1280fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000f2200000, 00000000f22fffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000f2380000, 00000000f2380fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000f2400000, 00000000f24fffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000f3680000, 00000000f3680fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fea00000, 00000000feafffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fec00000, 00000000fec00fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fec10000, 00000000fec10fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fed00000, 00000000fed00fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fed40000, 00000000fed44fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fed80000, 00000000fed8ffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fedc2000, 00000000fedcffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000fedd4000, 00000000fedd5fff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[00000000ff000000, 00000000ffffffff] =
(reserved)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[0000000100000000, 000000703f0fffff] =
(usable)</font></div><div><font face=3D"Courier New">(XEN) =
&nbsp;[000000703f100000, 000000703fffffff] =
(reserved)

And of course the default physical link address of the x86_64 kernel is
16MiB, which clearly conflicts with the EfiACPIMemoryNVS memory starting
at 0x4000000. On the latest Debian (12.5.0, bookworm) the decompressed
kernel is more than 60MiB, so it obviously overflows into the adjacent
region. I can also confirm that loading the Debian kernel at 2MiB works
as expected. Debian is built with CONFIG_RELOCATABLE=y, so it should be
capable of being loaded with this new feature in Xen.

I see the fix linked at this ticket was implemented and committed
(dfc9fab0) on April 8, 2024, but it appears not to have made its way
into the latest (4.18) Xen release, though there seem to be more recent
commits cherry-picked into that branch. When is this fix expected to
make it into a release?

Branden.

> On Jan 17, 2024, at 7:54 AM, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Wed, Jan 17, 2024 at 11:40:20AM +0100, Jan Beulich wrote:
>> On 17.01.2024 11:13, Roger Pau Monné wrote:
>>> On Wed, Jan 17, 2024 at 09:46:27AM +0100, Jan Beulich wrote:
>>>> Whereas I assume the native kernel can deal with that as long as
>>>> it's built with CONFIG_RELOCATABLE=y. I don't think we want to
>>>> get into the business of interpreting the kernel's internal
>>>> representation of the relocations needed, so it's not really
>>>> clear to me what we might do in such a case. Perhaps the only way
>>>> is to signal to the kernel that it needs to apply relocations
>>>> itself (which in turn would require the kernel to signal to us
>>>> that it's capable of doing so). Cc-ing Roger in case he has any
>>>> neat idea.
>>> 
>>> Hm, no, not really.
>>> 
>>> We could do like multiboot2: the kernel provides us with some
>>> placement data (min/max addresses, alignment), and Xen lets the
>>> kernel deal with relocations itself.
>> 
>> Requiring the kernel's entry point to take a sufficiently different
>> flow then compared to how it is today, I expect.
> 
> Indeed, I would expect that.
> 
>>> Additionally we could support the kernel providing a section with the
>>> relocations and apply them from Xen, but that's likely, hm, complicated
>>> at best, as I don't even know which kinds of relocations we would have
>>> to support.
>> 
>> If the kernel was properly linked to a PIE, there'd generally be only
>> one kind of relocation (per arch) that ought to need dealing with -
>> for x86-64 that's R_X86_64_RELATIVE iirc. Hence why (I suppose) they
>> don't use ELF relocation structures (for being wastefully large), but
>> rather a more compact custom representation. Even without building PIE
>> (presumably in part not possible because of how per-CPU data needs
>> dealing with), they get away with handling just very few relocs (and
>> from looking at the reloc processing code I'm getting the impression
>> they mistreat R_X86_64_32 as being the same as R_X86_64_32S, when it
>> isn't; needing to get such quirks right is one more aspect of why I
>> think we should leave relocation handling to the kernel).
> 
> Would have to look into more detail, but I think leaving any relocs
> for the OS to perform would be my initial approach.
> 
>>> I'm not sure how Linux deals with this in the bare metal case, are
>>> relocations done after decompressing and before jumping into the entry
>>> point?
>> 
>> That's how it was last time I looked, yes.
> 
> I've created a gitlab ticket for it:
> 
> https://gitlab.com/xen-project/xen/-/issues/180
> 
> So that we don't forget, as I don't have time to work on this right
> now, but I think it's important enough that we don't forget.
> 
> For PV it's a bit more unclear how we want to deal with it, as it's
> IMO a specific Linux behavior that makes it fail to boot.
> 
> Roger.

--Apple-Mail=_E4286BFD-732C-486E-9381-1DBCB2C9C036--


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:50:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:50:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746573.1153664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLj9i-0004kX-PJ; Mon, 24 Jun 2024 12:50:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746573.1153664; Mon, 24 Jun 2024 12:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLj9i-0004kQ-Ka; Mon, 24 Jun 2024 12:50:46 +0000
Received: by outflank-mailman (input) for mailman id 746573;
 Mon, 24 Jun 2024 12:50:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLj9h-0004kK-8Y
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:50:45 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5e44b88a-3228-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:50:43 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec5fad1984so15109171fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:50:43 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d3048b93asm4793507a12.56.2024.06.24.05.50.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 05:50:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e44b88a-3228-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719233443; x=1719838243; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=p6sFDusjuqBGQtOdAHYp5nC/fCld9tLxhD7a+hiN5Sw=;
        b=dCp+0ZWrVguZkFpFLGI9VnlREfGvsH6eh33LkCBlNHzra5KUEmEcZDl74Sy7ydPHpM
         DbrSSqoN5rJ2DPGE9paRn2nAcY1GV60/p0LkN0wKe5pgLc47BC+rbOWjeQ3Wb+siCnVx
         EQ6QrMMuo33zlLd6JT6FuLi8uqAlN+8uzNvis0fS9Unc4NU3/qG+g3NOhgrhM6ahj5d+
         dGOqG+IkRzLhOEZUWEp16wZzG9e9WvKe3hMMXSg2jEDaoLJb9wdNKQ991mYsycmZfE8k
         8ZtAfQ4tIjituYkHrLGtAYaSS4Zogxh2e2RFYgzkUXwHu7T+hQXcf6JSo5TLu27mSWrl
         kraA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719233443; x=1719838243;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=p6sFDusjuqBGQtOdAHYp5nC/fCld9tLxhD7a+hiN5Sw=;
        b=twhPRHQlJRAfhmmEzjngULB+awOu9661eofcngyamLiAe32+QYZMnvH0NUPjdO2oQu
         N095J4uXxd3oTEvkFqNwYqcp1/J6ITPEU+BFjofyZuRDdgc2YSdiHmxX7k6aiwynfO7E
         rBD8ARI3niMDSE0F1QLhjzEEjswO85yxH2W0qFnGT1vtSQIZLvC/DkjJodJ8sC6sPL38
         BP9iZK41ioF1g3OR5BOvXFqJyL6fXXMYaWu2lusL822TcuJq4rpHWJF4mCJY7c6Ysady
         wBeQjxOjKFxLtmSasjig5+mKfFf1lz7KepB+p8b26p1B7wYGLU0p1XWPj/o0/KFblUwy
         mb6g==
X-Forwarded-Encrypted: i=1; AJvYcCUkD81RqHj/6ksMMfwhRJbY1C/UD1WNc12+o0EXdUVGhlPmNnv83qshwVPr+7KtMI3vghwdri8DZ0vFsVeai9AMLf1rL0+S7iOjibLw2DI=
X-Gm-Message-State: AOJu0YzmaTNPuLNRTzMz9GslJyVzhza48OsW4M7/7xxfYCpBI8YQRZcV
	hEXm8MYOh/inJbycyeBeA1NyudzVH6gC45W5h7BLP7DbYElRcmsM
X-Google-Smtp-Source: AGHT+IEXF9iulpLg8Vo5qz5r6vU5CxLPgnPzAlpvQQLDHVFcDX8WYw+GYEasaVdiup6aU7DYu4u6hQ==
X-Received: by 2002:a2e:a98e:0:b0:2ec:5d83:32cf with SMTP id 38308e7fff4ca-2ec5d83332fmr36960081fa.34.1719233442522;
        Mon, 24 Jun 2024 05:50:42 -0700 (PDT)
Message-ID: <2bd871f45ec63f29ee6c7637799c73bab1ca5b3c.camel@gmail.com>
Subject: Re: [PATCH 1/3] xen/riscv: Drop legacy __ro_after_init definition
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Shawn Anastasio <sanastasio@raptorengineering.com>, George Dunlap
 <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Date: Mon, 24 Jun 2024 14:50:41 +0200
In-Reply-To: <20240621201928.319293-2-andrew.cooper3@citrix.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
	 <20240621201928.319293-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Fri, 2024-06-21 at 21:19 +0100, Andrew Cooper wrote:
> Hide the legacy __ro_after_init definition in xen/cache.h for RISC-V,
> to avoid its use creeping in.  Only mm.c needs adjusting as a
> consequence
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> ---
> CC: Shawn Anastasio <sanastasio@raptorengineering.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: George Dunlap <George.Dunlap@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Julien Grall <julien@xen.org>
> 
> https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342686294
> ---
>  xen/arch/riscv/mm.c     | 2 +-
>  xen/include/xen/cache.h | 2 ++
>  2 files changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
> index 053f043a3d2a..3ebaf6da01cc 100644
> --- a/xen/arch/riscv/mm.c
> +++ b/xen/arch/riscv/mm.c
> @@ -1,11 +1,11 @@
>  /* SPDX-License-Identifier: GPL-2.0-only */
>  
> -#include <xen/cache.h>
>  #include <xen/compiler.h>
>  #include <xen/init.h>
>  #include <xen/kernel.h>
>  #include <xen/macros.h>
>  #include <xen/pfn.h>
> +#include <xen/sections.h>
>  
>  #include <asm/early_printk.h>
>  #include <asm/csr.h>
> diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
> index 55456823c543..82a3ba38e3e7 100644
> --- a/xen/include/xen/cache.h
> +++ b/xen/include/xen/cache.h
> @@ -15,7 +15,9 @@
>  #define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
>  #endif
>  
> +#if defined(CONFIG_ARM) || defined(CONFIG_X86) || defined(CONFIG_PPC64)
>  /* TODO: Phase out the use of this via cache.h */
>  #define __ro_after_init __section(".data.ro_after_init")
> +#endif
>  
>  #endif /* __LINUX_CACHE_H */



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:51:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:51:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746576.1153673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjA6-0005BQ-Uk; Mon, 24 Jun 2024 12:51:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746576.1153673; Mon, 24 Jun 2024 12:51:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjA6-0005BJ-Rr; Mon, 24 Jun 2024 12:51:10 +0000
Received: by outflank-mailman (input) for mailman id 746576;
 Mon, 24 Jun 2024 12:51:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLjA5-0004kK-Fe
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:51:09 +0000
Received: from mail-ed1-x52f.google.com (mail-ed1-x52f.google.com
 [2a00:1450:4864:20::52f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ce8bd44-3228-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:51:08 +0200 (CEST)
Received: by mail-ed1-x52f.google.com with SMTP id
 4fb4d7f45d1cf-57d1679ee83so4398008a12.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:51:08 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30583e93sm4654164a12.96.2024.06.24.05.51.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 05:51:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ce8bd44-3228-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719233467; x=1719838267; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Co5VKq2X8eV2JY38N3vHR2oZK+yr/Bk1+kSzxf0TuYo=;
        b=mvhCQlGwA9CuFBlEZXg60mtMb6EQhI2BsleH0uiU84hGMVFlPCvTHRThbM61D8W4fd
         6d3hI6Loot2Cbv5Zd7HiRJ2ffJYWdZKjwiDJiADh8nid3kLmamTuS2gUFgGsMvryUa4+
         XXIsUdrpOwSEOsaEyRyOaJwvh69TftablbmKYpdbcCyBxETg8Ud/sq99WquT1rVW0CLN
         fSOGDWIdE7uJk8SOaKsa4vv/n9BhDAD0dGTk30hNHB2r4VAXKQoB8Q3G/MBembhz6nxj
         yyJBeX7jpnAvv6HndQ5BMBaLDilyRcXLMwpPSNSzXnV6jcFFxxzZVGg/Sn4k7An6mb/n
         bK+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719233467; x=1719838267;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Co5VKq2X8eV2JY38N3vHR2oZK+yr/Bk1+kSzxf0TuYo=;
        b=X79SFBs+4+HkDlw06iqEUo64jn4gvCLbaBJAd40otqte/E01tG3xKOzu6IWJ9LUJN8
         kTX5nx6Jjh9UFayvlqAyF+K2bt5iSeC0TKxw1z+DiWICQAVZH1bLA40Mypt4r7wA/d0W
         1Gpb/C+npa1eognTgET5RQYIQhRo2Ja3i6SBt5mYIVofZNZizmnXWcXq2ow6Ug+clbT1
         nOjUuplvwLet5I/mKyqYr/B7rGfbOpeK5+wvRGRm25HktAJaLCzRsWePyPWsnPfcCJKE
         sJx5aZthoJZXUuo2xQjkYYQuT/DoQ8wHTGzLIj8mVU5IXu2VhyZe0cVmqzeFEn4lJ/He
         HdYg==
X-Forwarded-Encrypted: i=1; AJvYcCVSXGe9yxc6H99+LmHKU1Ce3HwKJaTx3EqphuMhFHdR7xNtv1ZnVL8QSeP/+qnfc9qVRxCKr1dRZss/inwm8DYMUSic0CjNBtQO4sHtLNw=
X-Gm-Message-State: AOJu0YxD/1uFhwBWkTCUr+hmjsITC+09UJfq78oyuT8HPQeAmLvXZfAM
	1iSdnyxh+M5hRAaqsLHS7wo70ggmVGlTyRIO5xge5DrIpM79I0ke
X-Google-Smtp-Source: AGHT+IFG7SGY8Ec/NL7vG/HPM8X4zeP7SF3ICNHBrufuwsEVvHPjx71PDa6mT2Ot5xz6iGSON2PaIw==
X-Received: by 2002:a50:cdc2:0:b0:57d:701:5350 with SMTP id 4fb4d7f45d1cf-57d45780645mr3221748a12.6.1719233467111;
        Mon, 24 Jun 2024 05:51:07 -0700 (PDT)
Message-ID: <7985d9bc66d5ec254eb1ad07704f9aac598efc4b.camel@gmail.com>
Subject: Re: [PATCH 1/3] xen/riscv: Drop legacy __ro_after_init definition
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Shawn Anastasio <sanastasio@raptorengineering.com>, George Dunlap
 <George.Dunlap@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, 
 Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Date: Mon, 24 Jun 2024 14:51:06 +0200
In-Reply-To: <a15da9eb-f93f-4a01-8822-21452b762f53@suse.com>
References: <20240621201928.319293-1-andrew.cooper3@citrix.com>
	 <20240621201928.319293-2-andrew.cooper3@citrix.com>
	 <d8b2b01e608c6ddedbb2b46f58e8bd46ecfd5ca9.camel@gmail.com>
	 <a15da9eb-f93f-4a01-8822-21452b762f53@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 10:04 +0200, Jan Beulich wrote:
> On 24.06.2024 10:02, Oleksii wrote:
> > On Fri, 2024-06-21 at 21:19 +0100, Andrew Cooper wrote:
> > > Hide the legacy __ro_after_init definition in xen/cache.h for
> > > RISC-V, to avoid its use creeping in.  Only mm.c needs adjusting
> > > as a consequence
> > > 
> > > No functional change.
> > > 
> > > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> > > ---
> > > CC: Shawn Anastasio <sanastasio@raptorengineering.com>
> > > CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > > CC: George Dunlap <George.Dunlap@citrix.com>
> > > CC: Jan Beulich <JBeulich@suse.com>
> > > CC: Stefano Stabellini <sstabellini@kernel.org>
> > > CC: Julien Grall <julien@xen.org>
> > > 
> > > https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1342686294
> > > ---
> > >  xen/arch/riscv/mm.c     | 2 +-
> > >  xen/include/xen/cache.h | 2 ++
> > >  2 files changed, 3 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
> > > index 053f043a3d2a..3ebaf6da01cc 100644
> > > --- a/xen/arch/riscv/mm.c
> > > +++ b/xen/arch/riscv/mm.c
> > > @@ -1,11 +1,11 @@
> > >  /* SPDX-License-Identifier: GPL-2.0-only */
> > >  
> > > -#include <xen/cache.h>
> > >  #include <xen/compiler.h>
> > >  #include <xen/init.h>
> > >  #include <xen/kernel.h>
> > >  #include <xen/macros.h>
> > >  #include <xen/pfn.h>
> > > +#include <xen/sections.h>
> > >  
> > >  #include <asm/early_printk.h>
> > >  #include <asm/csr.h>
> > > diff --git a/xen/include/xen/cache.h b/xen/include/xen/cache.h
> > > index 55456823c543..82a3ba38e3e7 100644
> > > --- a/xen/include/xen/cache.h
> > > +++ b/xen/include/xen/cache.h
> > > @@ -15,7 +15,9 @@
> > >  #define __cacheline_aligned __attribute__((__aligned__(SMP_CACHE_BYTES)))
> > >  #endif
> > >  
> > > +#if defined(CONFIG_ARM) || defined(CONFIG_X86) || defined(CONFIG_PPC64)
> > >  /* TODO: Phase out the use of this via cache.h */
> > >  #define __ro_after_init __section(".data.ro_after_init")
> > > +#endif
> > Why "defined(CONFIG_RISCV_64)" is missed?
> 
> The TODO is being addressed by this patch for RISC-V. See how a
> subsequent patch also drops CONFIG_PPC64.
Thanks for the explanation. Now it makes sense to me.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:53:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:53:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746585.1153683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjCL-0005p1-Bp; Mon, 24 Jun 2024 12:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746585.1153683; Mon, 24 Jun 2024 12:53:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjCL-0005ot-7C; Mon, 24 Jun 2024 12:53:29 +0000
Received: by outflank-mailman (input) for mailman id 746585;
 Mon, 24 Jun 2024 12:53:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLjCK-0005on-4B
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:53:28 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id bf9e44ef-3228-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 14:53:27 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a72420e84feso218399266b.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:53:27 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7252a1a659sm123973266b.58.2024.06.24.05.53.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 05:53:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf9e44ef-3228-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719233606; x=1719838406; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=wXz95UoQ4ZT3JzRnjHnCYO8MDAQCCR9EPtoEcbcbnZM=;
        b=VF/SQqX7OlL7mv6lrGlZrXk+oM5KM4ZQqA5FcNdEEJH2HW166Rvne3xffq6jnUmrLG
         7WupfR4LcnW9WhQ9Aj+0fXnpiEMMP5TOnyjfklFiFqfFeihAriM9z/w003Vn5Aa3HAec
         tcIYU2eV6b8wKfl43088gQNpHZYaO8akd39Pev3kUYQXHpNeOrVFj4nMcu6i55x/nyXi
         RyW03A3wRAvjgfgAsM+qN/ZTKorNJKLz9UK/daBdEEquE3p13hhU7DF2VuzSJ8Ats5hG
         1nGI2nPrV3qlwQ2hti37wcC7BvwYZXI14ZZME6FvptBhSOW5OPX+Nq94pXaH/PF3SoI8
         gZ3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719233606; x=1719838406;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=wXz95UoQ4ZT3JzRnjHnCYO8MDAQCCR9EPtoEcbcbnZM=;
        b=WeCiwg+tcf4sushmNemvZ0AbkwEY8dDPUnq+loJOZez2sxz1+052AMoVsyR28Z0NzC
         Q5tckG8fIISbe3mAwgZupbiWdgF877MlOlu89MrVvVeYyNBO8FuNlzCW0HpJGF4bzyka
         TidjN2XMyCVy2jBJnJe7AbuD8ULT3yQD9oBUTW2Zc13PqdKem8nKTwsAYFfGyNmV+NP5
         yP55Lvb4SN/l53AbzQnsRmSYTptUZGtzxz3p0QeF3VynL35Sd2oE0kLnmrDiKFCBvgii
         i0qQ98i8HIMhm6FYaeDeTa7AsZ5ZaevOQusCYdzrZQgvU44Dv5cARIehWuEq88y7myAF
         jwIQ==
X-Forwarded-Encrypted: i=1; AJvYcCX8+ktIo+rhKLLnPbvbQb7dd3FrIVZlemEVbsAomOmkcGR4jLmUztz1NBMelNO364MPu2t1Fm8NroVc/XSGLuvhx8biZWiSiXEhL+aISPQ=
X-Gm-Message-State: AOJu0YzRqt/972y21akVx9ZWJVT/RHDHud3Fgw50pXx/pBEQwPRUW8rk
	HX63e+vt5sDEvCcgqVmvMfIqVjsWoQbGY5qJcLtnsQ7ik8k/zy26
X-Google-Smtp-Source: AGHT+IEubtp+ZXHHgFRvwddiCQXljp83TvnqEjraB3ZmVt7siu2rdIQPFVprQFhL4yt7KM/USDRv5A==
X-Received: by 2002:a17:906:19c4:b0:a6f:d423:23d with SMTP id a640c23a62f3a-a7245c80bd3mr330381366b.57.1719233606147;
        Mon, 24 Jun 2024 05:53:26 -0700 (PDT)
Message-ID: <365f48a0dba602131708bee3b20a16945508aa44.camel@gmail.com>
Subject: Re: [PATCH for-4.19 1/2] CHANGELOG.md: Fix indentation of "Removed"
 section
From: Oleksii <oleksii.kurochko@gmail.com>
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Community Manager <community.manager@xenproject.org>
Date: Mon, 24 Jun 2024 14:53:25 +0200
In-Reply-To: <20240624090411.1867850-1-george.dunlap@cloud.com>
References: <20240624090411.1867850-1-george.dunlap@cloud.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 10:04 +0100, George Dunlap wrote:
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>
> ---
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: Community Manager <community.manager@xenproject.org>

Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> ---
>  CHANGELOG.md | 12 ++++++------
>  1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 1778419cae..f3c6c7954f 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -31,12 +31,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>   - libxl support for backendtype=tap with tapback.
>  
>  ### Removed
> -- caml-stubdom.  It hasn't built since 2014, was pinned to Ocaml 4.02, and has
> -  been superseded by the MirageOS/SOLO5 projects.
> -- /usr/bin/pygrub symlink.  This was deprecated in Xen 4.2 (2012) but left for
> -  compatibility reasons.  VMs configured with bootloader="/usr/bin/pygrub"
> -  should be updated to just bootloader="pygrub".
> -- The Xen gdbstub on x86.
> + - caml-stubdom.  It hasn't built since 2014, was pinned to Ocaml 4.02, and has
> +   been superseded by the MirageOS/SOLO5 projects.
> + - /usr/bin/pygrub symlink.  This was deprecated in Xen 4.2 (2012) but left for
> +   compatibility reasons.  VMs configured with bootloader="/usr/bin/pygrub"
> +   should be updated to just bootloader="pygrub".
> + - The Xen gdbstub on x86.
>  
>  ## [4.18.0](https://xenbits.xenproject.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.18.0) - 2023-11-16
>  



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:54:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746591.1153693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjDA-0006Lb-IW; Mon, 24 Jun 2024 12:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746591.1153693; Mon, 24 Jun 2024 12:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjDA-0006LU-Ft; Mon, 24 Jun 2024 12:54:20 +0000
Received: by outflank-mailman (input) for mailman id 746591;
 Mon, 24 Jun 2024 12:54:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLjD9-0006Iy-AA
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:54:19 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ddf40d98-3228-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:54:17 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ebe0a81dc8so54482871fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:54:17 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9ebbb2909sm61983885ad.257.2024.06.24.05.54.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 05:54:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddf40d98-3228-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719233657; x=1719838457; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=sT3/CMGEXkPf9D0xs8i24smjMaiVBhMBYt0ZVCxsCYU=;
        b=fpOECQzkFUx/QE5zDOEgHvIQx9A3caSDHIcNDfPG40kTXsD/rydeklf2Fe+1S8zlTL
         y0c8xS8cOgVO/VTGj49helzqjp6j8RWK+rCpVnomXfOT5H7r8mAMJoQOaaNp+Bm+AG8s
         zIw6yvsmkYcoi7UxCxBUKjsqN4MmkdXG64fGSxTEOyAAsvkuxDxP+SgCxCOkxySHhIn+
         mZJQ4HD4SBexBBueN+/Vk3i1NnG+BpwmcpBsJVMlU1LpKMipuaBpArIsW9t5Yabd8HUQ
         itADaHmlBspBkuEbKTFn4j7HETsAOZCePrV+Zk4XnBJURMqQYWt1D06UwjC3/DZa3kSm
         I+vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719233657; x=1719838457;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=sT3/CMGEXkPf9D0xs8i24smjMaiVBhMBYt0ZVCxsCYU=;
        b=pFux7htQKWtpphGR2QhrRclqqajc6s+xSlwMY+qnH74eEbMwcRICAqWdWL008kNY9n
         F6VbFx9P2bgD7Q/5UruwcEJUAyVbGZKSiyUJalynwxg+IG21G2jOrBG4VGft8ty4h9U/
         slyLEEkPuFzOoebR3XB34OtCcZRpd8PeQrdFbTIKZtDx3lpWvjuKQ0dNCXa9PMMydLOG
         VfEA83Tl0PpIopzvpkVAr+2CIGuDkI8vMP3b8VaEYF6He3u/TgXbUZ2t2tAvcJSJFFeR
         VzYRc9IqqbiqDB1V5LYnVh0F51gNO55AmH6bK2l1POuy7/ZdmOR8g8ykuzIHmtBqrsPl
         F4Aw==
X-Forwarded-Encrypted: i=1; AJvYcCWUjz5Y43ZgVEqE0K+6ouyfd3mNfXap+vp0ZpjdKwQc8UZickTgpNDTfpR6XBAGx/3RJSzQ4rWaoRC26iJ4cDR1fabacR41qw09JJLYEQY=
X-Gm-Message-State: AOJu0YylRO9zcV1QXLQEs3ZjtxVwzoBAXLQVVANaHXtITtaJ6inphcPq
	L9slIVt94ZvZ9IGSXGeyeUk7dct/Scp3nMxyHoeiDKAK68rdQppH6yhbm2vmHw==
X-Google-Smtp-Source: AGHT+IHzABbbFGbRZc/c/p4J9I2kFdnR8VMXDmpl3Ow09uumZWfSO+PX7L/fJvRk1sjIHTX/DHySHQ==
X-Received: by 2002:a2e:854f:0:b0:2ec:49b5:50d5 with SMTP id 38308e7fff4ca-2ec5b357a00mr34101611fa.41.1719233657152;
        Mon, 24 Jun 2024 05:54:17 -0700 (PDT)
Message-ID: <36d581a0-f144-4756-b345-8b74ccc25c74@suse.com>
Date: Mon, 24 Jun 2024 14:54:10 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: E820 memory allocation issue on Threadripper platforms
To: Branden Sherrell <sherrellbc@gmail.com>
Cc: Patrick Plenefisch <simonpatp@gmail.com>, xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>
References: <CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com>
 <1708c3d7-662a-44bc-b9b3-4ab9f8642d7b@suse.com>
 <dcaf9d8d-ad5a-4714-936b-79ed0e587f9d@suse.com>
 <CAOCpoWeowZPuQTeBp9nu8p8CDtE=u++wN_UqRoABZtB57D50Qw@mail.gmail.com>
 <ac742d12-ec91-4215-bb42-82a145924b4f@suse.com>
 <CAOCpoWfQmkhN3hms1xuotSUZzVzR99i9cNGGU2r=yD5PjysMiQ@mail.gmail.com>
 <fa23a590-5869-4e11-8998-1d03742c5919@suse.com> <ZaeoWBV8IEZap2mr@macbook>
 <15dcef46-aaa8-4f71-bd5c-355001dd9188@suse.com> <ZafOGEwms01OFaVJ@macbook>
 <7BAC7BB5-C321-4C34-884A-21CC12F761BB@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <7BAC7BB5-C321-4C34-884A-21CC12F761BB@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 14:40, Branden Sherrell wrote:
> I recently found this mailing list thread when searching for information on a related issue regarding conflicting E820 on a Threadripper platform. For those interested in additional data points, I am using the ASUS WRX80E-SAGE SE Wifi II motherboard that presents the following E820 to Xen:
> 
> (XEN) EFI RAM map:
> (XEN)  [0000000000000000, 0000000000000fff] (reserved)
> (XEN)  [0000000000001000, 000000000008ffff] (usable)
> (XEN)  [0000000000090000, 0000000000090fff] (reserved)
> (XEN)  [0000000000091000, 000000000009ffff] (usable)
> (XEN)  [00000000000a0000, 00000000000fffff] (reserved)
> (XEN)  [0000000000100000, 0000000003ffffff] (usable)
> (XEN)  [0000000004000000, 0000000004020fff] (ACPI NVS)
> (XEN)  [0000000004021000, 0000000009df1fff] (usable)
> (XEN)  [0000000009df2000, 0000000009ffffff] (reserved)
> (XEN)  [000000000a000000, 00000000b5b04fff] (usable)
> (XEN)  [00000000b5b05000, 00000000b8cd3fff] (reserved)
> (XEN)  [00000000b8cd4000, 00000000b9064fff] (ACPI data)
> (XEN)  [00000000b9065000, 00000000b942afff] (ACPI NVS)
> (XEN)  [00000000b942b000, 00000000bb1fefff] (reserved)
> (XEN)  [00000000bb1ff000, 00000000bbffffff] (usable)
> (XEN)  [00000000bc000000, 00000000bfffffff] (reserved)
> (XEN)  [00000000c1100000, 00000000c1100fff] (reserved)
> (XEN)  [00000000e0000000, 00000000efffffff] (reserved)
> (XEN)  [00000000f1280000, 00000000f1280fff] (reserved)
> (XEN)  [00000000f2200000, 00000000f22fffff] (reserved)
> (XEN)  [00000000f2380000, 00000000f2380fff] (reserved)
> (XEN)  [00000000f2400000, 00000000f24fffff] (reserved)
> (XEN)  [00000000f3680000, 00000000f3680fff] (reserved)
> (XEN)  [00000000fea00000, 00000000feafffff] (reserved)
> (XEN)  [00000000fec00000, 00000000fec00fff] (reserved)
> (XEN)  [00000000fec10000, 00000000fec10fff] (reserved)
> (XEN)  [00000000fed00000, 00000000fed00fff] (reserved)
> (XEN)  [00000000fed40000, 00000000fed44fff] (reserved)
> (XEN)  [00000000fed80000, 00000000fed8ffff] (reserved)
> (XEN)  [00000000fedc2000, 00000000fedcffff] (reserved)
> (XEN)  [00000000fedd4000, 00000000fedd5fff] (reserved)
> (XEN)  [00000000ff000000, 00000000ffffffff] (reserved)
> (XEN)  [0000000100000000, 000000703f0fffff] (usable)
> (XEN)  [000000703f100000, 000000703fffffff] (reserved)
> 
> And of course the default physical link address of the x86_64 kernel is 16MiB which clearly conflicts with the EfiACPIMemoryNVS memory starting at 0x4000000. On latest Debian (12.5.0, bookworm) the decompressed kernel is more than 60MiB, so it obviously overflows into the adjacent region. I can also confirm that loading the Debian kernel at 2MiB also works as expected. Debian is also built with CONFIG_RELOCATABLE=y, so it should be capable of being loaded with this new feature in Xen. 
> 
> I see the link at this ticket was implemented and committed (dfc9fab0) on April 8, 2024 but it appears to not have made its way into the latest (4.18) Xen release. Though there seem to be more recent commits cherry picked into that branch. When is this fix expected to make it into a release?

It's not tagged as a bugfix, and PVH Dom0 also isn't "supported" in 4.18.
Hence it wasn't picked into the set of backports. I also doubt it'll help
you, as I would guess you're still using PV Dom0.

Jan
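The conflict described above (a fixed 16MiB link address landing inside an EfiACPIMemoryNVS range) is essentially a search over the E820 map for a usable region large enough to hold the decompressed kernel. A minimal sketch of that walk, using the map from this thread; this is illustrative only, not Xen's actual placement code, and `find_kernel_load_addr` is an invented name:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

#define E820_RAM 1

struct e820_entry {
    uint64_t start, end;   /* inclusive range, as printed in the log */
    unsigned int type;
};

/*
 * Return the lowest suitably aligned address inside a usable (RAM)
 * region that can hold `size` bytes, or 0 if nothing fits.
 */
static uint64_t find_kernel_load_addr(const struct e820_entry *map,
                                      size_t nr, uint64_t size,
                                      uint64_t align)
{
    for ( size_t i = 0; i < nr; i++ )
    {
        uint64_t start, end = map[i].end + 1;

        if ( map[i].type != E820_RAM )
            continue;

        /* Align the candidate start up, then see if `size` still fits. */
        start = (map[i].start + align - 1) & ~(align - 1);
        if ( start < end && end - start >= size )
            return start;
    }

    return 0;
}
```

With the map above, a 60MiB kernel aligned to 2MiB lands at 0x200000 (matching the observation that loading Debian's kernel at 2MiB works), while the default 16MiB (0x1000000) placement would run straight into the NVS region at 0x4000000.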


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 12:54:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 12:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746592.1153703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjDL-0006f1-PW; Mon, 24 Jun 2024 12:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746592.1153703; Mon, 24 Jun 2024 12:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjDL-0006es-Mu; Mon, 24 Jun 2024 12:54:31 +0000
Received: by outflank-mailman (input) for mailman id 746592;
 Mon, 24 Jun 2024 12:54:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xeV4=N2=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sLjDK-0006Iy-Rh
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 12:54:30 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e4f78973-3228-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 14:54:29 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57d106e69a2so3834846a12.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 05:54:29 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30562bd5sm4635002a12.92.2024.06.24.05.54.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 05:54:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4f78973-3228-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719233669; x=1719838469; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=6KX+pU8kylpaQJrLtGtQ/eht7vzalxwqeLKxO1kolxM=;
        b=ArAmsdqZtNKiaHG6jPhM6Af3dB9RwwNBHbhtdFneOVvSZ5USTNhqFP4x/DDdy8gBwV
         /UKwcntArpa/A4DcHCHKlXE0ngvyRcVLDqccj4gi/lv9JhokfXrF+m3xKM1iELdO+hXT
         gPVZy7xIUND7wGOnaJCnEgxdGmCnUD1MCgABjdZY5sB6bSv9IpXWDh6DgBxOTEXV1ybR
         mDdWfP5bBPPcDC8eSfwKPIwanihZnUCPxabXppg90cyiWJY7ah0agx4B/lUZ/q4oyWQD
         I/hB9w+E8MmjYcZazgabt8xM7RL1lOcQabQRXSCuWmcb5PmPyLCqw+qxCg007Tqol28a
         DHpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719233669; x=1719838469;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=6KX+pU8kylpaQJrLtGtQ/eht7vzalxwqeLKxO1kolxM=;
        b=mXESU9mOmDKUmvR887dXX5mHJT/LzQY/9WdScz/XxpNRI2xxQITKiccgAcuFGenK5E
         NA/Dnxg2GrEzB/xz385FKHcIdS3+fLvINT07v/QddTnNO0seRKJuofuyDFVjE07ZJ+Ef
         zGe+Oe9COb7jo7EuWvNSN4d+cQ2J6wrms12Q0MW3iYHz288b/gOtdtPS6Wna/dSjGfze
         /04oT5BC+FXF5t1LudszJ4W4UNUzh1bXgn4dgMG/bGsP1MB3LgzD8P0HH1n9+RLr51Ac
         DOyhJ0xx/Hd9UT9fI58CW3jrRwpxBy6xPuvP+B8ww4Ln109rNWcU8qbeZuCyVyDOhECW
         t3AA==
X-Forwarded-Encrypted: i=1; AJvYcCXPUCKLjf0xQtsm6haZnzKjN6vetTx6NIuaN8z0ibiu0bF6V71dE0e1Ag2yuXUR0DOrxI9Xp6F5ClDlAMvqTQ18MaSgWia0VX7IdteWOW0=
X-Gm-Message-State: AOJu0Yz5xjRxq59VSoOYcYgrolYzKFfrJLujwPQIkw8HRSOWkIOOLLqh
	qigjhUj7Agh+a+Q3XLCLWuxkYu1aQtFplRXHI9o6R8lpxw7rMP3AqBsvEgLO
X-Google-Smtp-Source: AGHT+IHC+xc+elLTGD9n88zrbwdp5Ki05q2/Zk/tOInRsZTHNZlY1P7wDGccozoc9yaduINzIJSgRw==
X-Received: by 2002:a05:6402:34c6:b0:57d:45af:112c with SMTP id 4fb4d7f45d1cf-57d45af127amr4996691a12.4.1719233668650;
        Mon, 24 Jun 2024 05:54:28 -0700 (PDT)
Message-ID: <1a3c3c12182393f202c709a06de99b85673c7ed5.camel@gmail.com>
Subject: Re: [PATCH for-4.19] tools/xenalyze: Remove argp_program_bug_address
From: Oleksii <oleksii.kurochko@gmail.com>
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Anthony PERARD <anthony@xenproject.org>, Andrew Cooper
	 <andrew.cooper3@citrix.com>
Date: Mon, 24 Jun 2024 14:54:27 +0200
In-Reply-To: <20240624104930.1951521-1-george.dunlap@cloud.com>
References: <20240624104930.1951521-1-george.dunlap@cloud.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 11:49 +0100, George Dunlap wrote:
> xenalyze sets argp_program_bug_address to my old Citrix address.
> This
> was done before xenalyze was in the xen.git tree; and it's the only
> program in the tree which does so.
>
> Now that xenalyze is part of the normal Xen distribution, it should
> be
> obvious where to report bugs.
>
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>
> ---
> Freeze exception justification: This is only a change in
> documentation.
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

>
> CC: Anthony PERARD <anthony@xenproject.org>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  tools/xentrace/xenalyze.c | 3 ---
>  1 file changed, 3 deletions(-)
>
> diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
> index d95e52695f..adc96dd7e4 100644
> --- a/tools/xentrace/xenalyze.c
> +++ b/tools/xentrace/xenalyze.c
> @@ -10920,9 +10920,6 @@ const struct argp parser_def = {
>      .doc = "",
>  };
>  
> -const char *argp_program_bug_address = "George Dunlap <george.dunlap@eu.citrix.com>";
> -
> -
>  int main(int argc, char *argv[]) {
>      /* Start with warn at stderr. */
>      warn = stderr;
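For context, a minimal, self-contained illustration (not code from xen.git) of what the removed variable does: when `argp_program_bug_address` is defined, glibc's argp appends a "Report bugs to <address>." line to `--help` output, and deleting the definition simply drops that line.

```c
#include <argp.h>
#include <assert.h>
#include <string.h>

/*
 * Setting this glibc global makes argp's --help output end with
 * "Report bugs to xen-devel@lists.xenproject.org."; the address shown
 * here is just a stand-in for this demo.
 */
const char *argp_program_bug_address = "xen-devel@lists.xenproject.org";

static const struct argp parser_def = {
    .doc = "demo -- illustrate argp_program_bug_address",
};

/* Run the (option-less) parser over argv. */
static int parse_cmdline(int argc, char **argv)
{
    return argp_parse(&parser_def, argc, argv, 0, NULL, NULL);
}
```

Since the variable only feeds help text, removing it changes no runtime behaviour of xenalyze beyond that one line of output.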



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 13:08:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 13:08:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746610.1153713 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjQW-0000yP-2d; Mon, 24 Jun 2024 13:08:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746610.1153713; Mon, 24 Jun 2024 13:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjQV-0000yI-Uy; Mon, 24 Jun 2024 13:08:07 +0000
Received: by outflank-mailman (input) for mailman id 746610;
 Mon, 24 Jun 2024 13:08:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SfmV=N2=gmail.com=sherrellbc@srs-se1.protection.inumbo.net>)
 id 1sLjQU-0000yC-J3
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 13:08:06 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cad0cbd6-322a-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 15:08:04 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id
 2adb3069b0e04-52cdb0d816bso2090124e87.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 06:08:04 -0700 (PDT)
Received: from smtpclient.apple (ool-44c00bfa.dyn.optonline.net.
 [68.192.11.250]) by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52cd7079478sm975700e87.153.2024.06.24.06.08.02
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 24 Jun 2024 06:08:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cad0cbd6-322a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719234484; x=1719839284; darn=lists.xenproject.org;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NaImPSD+fr5dNaDJYiyxLTEg3pmGswuvprA97nXdlgM=;
        b=M4Ionv4/aSqLb8M/j6uubIsxxDMTXiHeJ9n7bk04OwO8aCcYrBAacAMRLG3FDrvAJk
         Knxxq4P2ltaC7f2etqZzyoCduZ1pXT3uWVgLmcbR0A3YuJNhtnbDK36q6gcJS7JAbbKA
         OIFBw2JxCzYI/TrQy/EuUNn2Sv5+VhLw/VnGmdWzWXMwErPCUKwZe9PiMRUmOst1X5Sv
         EU3ov2QWZOpvpnEFrynbHkrSGD3Mwpw1LX2JzyWByOAkUk0WeoRlInpSz8AzOrTjk/XL
         gWnpVn4H3kSm+DgkPeqZvSTG89OgZrsYQyHHx2+WyJbSvOcfYDbXDg/UX6fMlg5rziH3
         CmMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719234484; x=1719839284;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=NaImPSD+fr5dNaDJYiyxLTEg3pmGswuvprA97nXdlgM=;
        b=q1OX7x/B0Ra0ztcYh8hOVzFOqkaquTvO8NpmUITKUr3Rv/1jRDJlxrGOpJYzKrF07P
         eTDmYoWisSTubxSqDS9qFGw5i4e/Gmr4/gSAd/6nOIS3mcCabB5DKWnCEJQMd4bO5QKI
         +Czcay7em6Dgldr90Kh8DDHa8L15U/xzSoX5V2PSIF0QocOfO0ledjZBBh8+jsvgBCYs
         mLihgKeHyY8YP+2gVXcWC6PMlIGBFFVoRz9dSA/uEMXoRYDzNGdm35VYk+Z/xWZDhqwZ
         aBEa6T7/vnf9PjSltrNkEgVa2APpjvuIsSu5rgDtHUf1SdtdknA+Iz1ni+calXGzRF5+
         OQjA==
X-Forwarded-Encrypted: i=1; AJvYcCWUwcChr/8/BtyEGYo1RuHlukMX27XCxRU/xfVpD13LKyZtHdtwbkNc1Z1Ua9FW4JonhGPZnt9Xu3KPum+uSPCBbh6YieNr/0itsKbEQIY=
X-Gm-Message-State: AOJu0YxxQaS5adyMq93Jmq2UBjQ4HmudxK5lNGA0j9rjyzwEd0OKiwFy
	Va9Qd7HQaI4VM3X3jYvceQVhOckHFW+XAFTw1sE3jD56KiHOrBel
X-Google-Smtp-Source: AGHT+IGWsosdnzZIQKcgPioiBpaOZTG0ngxrR1dqJjGzQOBpuARYuraiwqYhJpdYU2BILaHAWZAdzw==
X-Received: by 2002:ac2:4a9e:0:b0:52c:a070:944 with SMTP id 2adb3069b0e04-52cdf15afc9mr1599880e87.23.1719234483674;
        Mon, 24 Jun 2024 06:08:03 -0700 (PDT)
Content-Type: text/plain;
	charset=us-ascii
Mime-Version: 1.0 (Mac OS X Mail 16.0 \(3731.700.6\))
Subject: Re: E820 memory allocation issue on Threadripper platforms
From: Branden Sherrell <sherrellbc@gmail.com>
In-Reply-To: <36d581a0-f144-4756-b345-8b74ccc25c74@suse.com>
Date: Mon, 24 Jun 2024 09:07:50 -0400
Cc: Patrick Plenefisch <simonpatp@gmail.com>,
 xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>,
 =?utf-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <70B65B5D-C075-4D8E-8D2B-08A1930EE68B@gmail.com>
References: <CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com>
 <1708c3d7-662a-44bc-b9b3-4ab9f8642d7b@suse.com>
 <dcaf9d8d-ad5a-4714-936b-79ed0e587f9d@suse.com>
 <CAOCpoWeowZPuQTeBp9nu8p8CDtE=u++wN_UqRoABZtB57D50Qw@mail.gmail.com>
 <ac742d12-ec91-4215-bb42-82a145924b4f@suse.com>
 <CAOCpoWfQmkhN3hms1xuotSUZzVzR99i9cNGGU2r=yD5PjysMiQ@mail.gmail.com>
 <fa23a590-5869-4e11-8998-1d03742c5919@suse.com> <ZaeoWBV8IEZap2mr@macbook>
 <15dcef46-aaa8-4f71-bd5c-355001dd9188@suse.com> <ZafOGEwms01OFaVJ@macbook>
 <7BAC7BB5-C321-4C34-884A-21CC12F761BB@gmail.com>
 <36d581a0-f144-4756-b345-8b74ccc25c74@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: Apple Mail (2.3731.700.6)

What is the reasoning for applying this fix only to PVH domains? Looking
at the fix, the logic appears to walk the E820 to find a suitable range
of memory to load the kernel into (assuming the kernel can be determined
to be relocatable). Why can this logic not be applied to dom0 kernel
loading in general?

Branden.

> On Jun 24, 2024, at 8:54 AM, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 24.06.2024 14:40, Branden Sherrell wrote:
>> I recently found this mailing list thread when searching for information on a related issue regarding conflicting E820 on a Threadripper platform. For those interested in additional data points, I am using the ASUS WRX80E-SAGE SE Wifi II motherboard that presents the following E820 to Xen:
>>
>> (XEN) EFI RAM map:
>> (XEN)  [0000000000000000, 0000000000000fff] (reserved)
>> (XEN)  [0000000000001000, 000000000008ffff] (usable)
>> (XEN)  [0000000000090000, 0000000000090fff] (reserved)
>> (XEN)  [0000000000091000, 000000000009ffff] (usable)
>> (XEN)  [00000000000a0000, 00000000000fffff] (reserved)
>> (XEN)  [0000000000100000, 0000000003ffffff] (usable)
>> (XEN)  [0000000004000000, 0000000004020fff] (ACPI NVS)
>> (XEN)  [0000000004021000, 0000000009df1fff] (usable)
>> (XEN)  [0000000009df2000, 0000000009ffffff] (reserved)
>> (XEN)  [000000000a000000, 00000000b5b04fff] (usable)
>> (XEN)  [00000000b5b05000, 00000000b8cd3fff] (reserved)
>> (XEN)  [00000000b8cd4000, 00000000b9064fff] (ACPI data)
>> (XEN)  [00000000b9065000, 00000000b942afff] (ACPI NVS)
>> (XEN)  [00000000b942b000, 00000000bb1fefff] (reserved)
>> (XEN)  [00000000bb1ff000, 00000000bbffffff] (usable)
>> (XEN)  [00000000bc000000, 00000000bfffffff] (reserved)
>> (XEN)  [00000000c1100000, 00000000c1100fff] (reserved)
>> (XEN)  [00000000e0000000, 00000000efffffff] (reserved)
>> (XEN)  [00000000f1280000, 00000000f1280fff] (reserved)
>> (XEN)  [00000000f2200000, 00000000f22fffff] (reserved)
>> (XEN)  [00000000f2380000, 00000000f2380fff] (reserved)
>> (XEN)  [00000000f2400000, 00000000f24fffff] (reserved)
>> (XEN)  [00000000f3680000, 00000000f3680fff] (reserved)
>> (XEN)  [00000000fea00000, 00000000feafffff] (reserved)
>> (XEN)  [00000000fec00000, 00000000fec00fff] (reserved)
>> (XEN)  [00000000fec10000, 00000000fec10fff] (reserved)
>> (XEN)  [00000000fed00000, 00000000fed00fff] (reserved)
>> (XEN)  [00000000fed40000, 00000000fed44fff] (reserved)
>> (XEN)  [00000000fed80000, 00000000fed8ffff] (reserved)
>> (XEN)  [00000000fedc2000, 00000000fedcffff] (reserved)
>> (XEN)  [00000000fedd4000, 00000000fedd5fff] (reserved)
>> (XEN)  [00000000ff000000, 00000000ffffffff] (reserved)
>> (XEN)  [0000000100000000, 000000703f0fffff] (usable)
>> (XEN)  [000000703f100000, 000000703fffffff] (reserved)
>>
>> And of course the default physical link address of the x86_64 kernel is 16MiB which clearly conflicts with the EfiACPIMemoryNVS memory starting at 0x4000000. On latest Debian (12.5.0, bookworm) the decompressed kernel is more than 60MiB, so it obviously overflows into the adjacent region. I can also confirm that loading the Debian kernel at 2MiB also works as expected. Debian is also built with CONFIG_RELOCATABLE=y, so it should be capable of being loaded with this new feature in Xen.
>>
>> I see the link at this ticket was implemented and committed (dfc9fab0) on April 8, 2024 but it appears to not have made its way into the latest (4.18) Xen release. Though there seem to be more recent commits cherry picked into that branch. When is this fix expected to make it into a release?
>
> It's not tagged as a bugfix, and PVH Dom0 also isn't "supported" in 4.18.
> Hence it wasn't picked into the set of backports. I also doubt it'll help
> you, as I would guess you're still using PV Dom0.
>
> Jan
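Determining whether "the kernel is also relocatable" is possible from the image itself for x86 bzImages, via the boot-protocol setup header. A sketch following the documented header layout ("HdrS" magic at offset 0x202, protocol version at 0x206, `relocatable_kernel` flag at 0x234 from protocol 2.05 on); this is illustrative, not Xen's loader code, and `bzimage_is_relocatable` is an invented name:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/*
 * Return nonzero if the bzImage advertises itself as relocatable.
 * Offsets per the Linux x86 boot protocol documentation.
 */
static int bzimage_is_relocatable(const uint8_t *img, size_t len)
{
    uint16_t version;

    if ( len < 0x235 )
        return 0;

    /* "HdrS" magic identifies a boot-protocol setup header. */
    if ( img[0x202] != 'H' || img[0x203] != 'd' ||
         img[0x204] != 'r' || img[0x205] != 'S' )
        return 0;

    version = img[0x206] | (img[0x207] << 8);
    if ( version < 0x0205 )   /* flag only exists from protocol 2.05 */
        return 0;

    return img[0x234] != 0;
}
```

A CONFIG_RELOCATABLE=y build sets this flag, which is how a loader can tell that picking a different physical load address (as discussed above) is safe.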



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 13:20:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 13:20:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746618.1153723 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjcY-0004JV-2i; Mon, 24 Jun 2024 13:20:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746618.1153723; Mon, 24 Jun 2024 13:20:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjcX-0004Iy-W9; Mon, 24 Jun 2024 13:20:33 +0000
Received: by outflank-mailman (input) for mailman id 746618;
 Mon, 24 Jun 2024 13:20:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLHn=N2=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sLjcX-0004Ir-24
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 13:20:33 +0000
Received: from mail-yw1-x112f.google.com (mail-yw1-x112f.google.com
 [2607:f8b0:4864:20::112f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 87df5836-322c-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 15:20:31 +0200 (CEST)
Received: by mail-yw1-x112f.google.com with SMTP id
 00721157ae682-643acefd1afso13200177b3.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 06:20:31 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce91f16esm312535785a.77.2024.06.24.06.20.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 06:20:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87df5836-322c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719235230; x=1719840030; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=6poGtAm0Yd/OQGMLhi5xp1v4oxLl4Vrx2Yyd5BmKCaY=;
        b=sD1SnX5u36iNKoG9RSVq8KLibq3eQSeRys4NNiq7uSn38hQvX+QE1gGiaZr/+5wkzv
         chQONRkFP0sUyb/S8UO/zytUKG679OMeaaGFOwbxyIEozwBciFdIE/ijJcs1Cm8z2bZT
         RemqPIKa5BwF1cYWMgLuVD6quaExIOa0dPueU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719235230; x=1719840030;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=6poGtAm0Yd/OQGMLhi5xp1v4oxLl4Vrx2Yyd5BmKCaY=;
        b=Lstt8f0s/UvQEVjONN/a4Yv/BzsB11IyZ4ULu2PZX2/NvammLdR1Jq8xMt6/Zy+r0d
         r1eqQeEeqKtpask3f1CGbtIg5DOXOxROB+389V8ABUyXRNLejWPtuV2nJnSNawe1X5fo
         +HOpk0J2QUcBQuzdZfZ3sdlvm1o9cLf97r1A4CxfMmm222gIj6MCgoZbxH4dr8Ff8Kg0
         TgSTWCKUrw6r8s5QPVGsuUhEE/c0XYn/JnC/9OC6xcfI89Q28TWba/K0iaHTrn1MT9vI
         RooqCxtc3EL9ZiN/Wzx/iBEsiSJ920juApv719pEg/8gAxkffrOaXeGPm3yxkEM4zH7H
         9Nwg==
X-Forwarded-Encrypted: i=1; AJvYcCXmCDxVlPFVD4R5x9qO2rWlCpulTZ5d3P3H2/wGre3OrFuRxncPtguEkQAgI6UKysUvEIr4NizpfXOPwlVlwLvxaS+Dgy/z0Cst/0zs18U=
X-Gm-Message-State: AOJu0YwGVl1YN3s8+UX9oJSY7PpaRmaYNCxBVWF9wYYyIv3Meg1sF1eY
	4dI4qdn2Ikhx8iCkC2FCnhc8tscMdp+2iS778102vqxfNJcop3Q+Rj5iFVFv2XU=
X-Google-Smtp-Source: AGHT+IGqbp2Txe3Yef4gqvVN36swGj4ZV3nzOMK7xF2UiTsC+lLtQaJU1URViLk730AtixEi9xz9gg==
X-Received: by 2002:a05:690c:f14:b0:645:fecc:3233 with SMTP id 00721157ae682-645fecc35acmr11264027b3.7.1719235230477;
        Mon, 24 Jun 2024 06:20:30 -0700 (PDT)
Message-ID: <83743273-54ba-4f8b-9548-30dbd763887e@citrix.com>
Date: Mon, 24 Jun 2024 14:20:27 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen: re-add type checking to
 {,__}copy_from_guest_offset()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <6fc55df2-5d92-4f3f-8eb3-69bd89bfea4e@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <6fc55df2-5d92-4f3f-8eb3-69bd89bfea4e@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24/06/2024 1:26 pm, Jan Beulich wrote:
> When re-working them to avoid UB on guest address calculations, I failed
> to add explicit type checks in exchange for the implicit ones that until
> then had happened in assignments that were there anyway.
>
> Fixes: 43d5c5d5f70b ("xen: avoid UB in guest handle arithmetic")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 13:35:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 13:35:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746624.1153732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjqb-0006T4-5X; Mon, 24 Jun 2024 13:35:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746624.1153732; Mon, 24 Jun 2024 13:35:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjqb-0006Sx-2r; Mon, 24 Jun 2024 13:35:05 +0000
Received: by outflank-mailman (input) for mailman id 746624;
 Mon, 24 Jun 2024 13:35:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLjqZ-0006Sr-ND
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 13:35:03 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8eb575e2-322e-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 15:35:01 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec1ac1aed2so52804951fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 06:35:01 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-70690b37fabsm505209b3a.1.2024.06.24.06.34.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 06:35:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8eb575e2-322e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719236101; x=1719840901; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=U019brr9NgH/q5MYR/FzuT2DxJ+abEquW+fZ4qJ26tc=;
        b=V3Pt+IUfU8Wpf4C1vnByh8gaerH642WJLHkEgvJ62P1S5syYSdSOQ4Jnnf5KX+Kae8
         OMAJzvgllQxXu76VMkmqypx01SoMMXnlUyX/wDmMM+jrvVJT3ryFRrMvyvikqY5u4WzT
         QPNm7FtM83BeCYdBMAqAD8UGGosNekuaIZ5dj4zgqmNFH7jV8+Gs5c5BBuWWfoz1scW5
         WGSHRmvasFqD/l3humCN3yP1kMGLMLxP9gWs/EihimGxoxfOP2fmBuFudeBAmU1nMYcJ
         7aTwtvvEogXc422zm/7hvs6yICn3TH5l940yq0LjQA1FrtFu4HxyqmXLnE3fkWWJujCW
         HXow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719236101; x=1719840901;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=U019brr9NgH/q5MYR/FzuT2DxJ+abEquW+fZ4qJ26tc=;
        b=MDLPiF3GrZ8r8e0ed6M1Aauan8rq4kS+khaly6H5w196O0xt5pB17zbybh5Jy1LXfM
         WGn5YYFcFrhdj6BfLYE6KjamQ2Biw+nu3tE0Ou3UImu8+SW0Q9EODe5ib/e0mVdWOyar
         /k0GYrzFRK/5e5y+xJ/Qj8SEplsXNd2Pk7s5a4oFq1Icy/5q4NO2aOubVQeJm+PYfQ6g
         fuEIHR/MUI0SWtH6g01Hjy7ttDaxwnFMzYmb5pm4SDgD2okek2C6n3tjQ0cLd1GiZFWz
         RpMUE6FzpISHQFlOwZjdb/urYSZecrcX6xtfYzj/zizyNi0UVAIMJN37pttR8uZC7Csr
         Jbzw==
X-Forwarded-Encrypted: i=1; AJvYcCXGnSj5Fv62TAU2mqFGypepXrY1vOY7lkcJTX6YG0P8i2+j9KbNg7gYY6NHSEEimf8FN+V5EqvVcfMgZhHlkIB6YukQFBTKnzuWfoJdqHg=
X-Gm-Message-State: AOJu0YyFrwUi/DdqKX9yidvlRne2O4fhNP7vS71D9l+zjKwyTeYDOyCh
	vF0dGfhoeHNuGd7UNDQErGzUowMH2ujyJ7hmoTr4bE+ffanVGd9JBbCzSbyFOg==
X-Google-Smtp-Source: AGHT+IEGwEP1Smx0qT5AMcJSAl8roOthr1MTgaaGJ/0WVFjT+wg4k4ldidCsrjDxqnL1v+6TUT4xRA==
X-Received: by 2002:a2e:7d15:0:b0:2ec:5a85:66ec with SMTP id 38308e7fff4ca-2ec5b30786cmr28120701fa.48.1719236101145;
        Mon, 24 Jun 2024 06:35:01 -0700 (PDT)
Message-ID: <e25f5cd4-9130-488c-8294-22bd9fbd76ff@suse.com>
Date: Mon, 24 Jun 2024 15:34:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: Minios-devel <minios-devel@lists.xenproject.org>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH mini-os] mman: correct m{,un}lock() definitions
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

From: Charles Arnold <carnold@suse.com>

gcc14 no longer (silently) accepts functions with no return type
specified.

Signed-off-by: Charles Arnold <carnold@suse.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/include/posix/sys/mman.h
+++ b/include/posix/sys/mman.h
@@ -16,7 +16,7 @@
 
 void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset) asm("mmap64");
 int munmap(void *start, size_t length);
-static inline mlock(const void *addr, size_t len) { return 0; }
-static inline munlock(const void *addr, size_t len) { return 0; }
+static inline int mlock(const void *addr, size_t len) { return 0; }
+static inline int munlock(const void *addr, size_t len) { return 0; }
 
 #endif /* _POSIX_SYS_MMAN_H */


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 13:36:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 13:36:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746635.1153747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjs4-00074P-Ig; Mon, 24 Jun 2024 13:36:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746635.1153747; Mon, 24 Jun 2024 13:36:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjs4-00074I-FB; Mon, 24 Jun 2024 13:36:36 +0000
Received: by outflank-mailman (input) for mailman id 746635;
 Mon, 24 Jun 2024 13:36:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NRWk=N2=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1sLjs2-00073r-HN
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 13:36:34 +0000
Received: from sonata.ens-lyon.org (domu-toccata.ens-lyon.fr [140.77.166.138])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c54cb4d7-322e-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 15:36:33 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 091EEA02C7;
 Mon, 24 Jun 2024 15:36:33 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id Wbocwnsz_QVE; Mon, 24 Jun 2024 15:36:32 +0200 (CEST)
Received: from begin (nat-inria-interne-52-gw-01-bso.bordeaux.inria.fr
 [194.199.1.52])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (4096 bits) server-digest
 SHA256) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id C62E1A00DD;
 Mon, 24 Jun 2024 15:36:32 +0200 (CEST)
Received: from samy by begin with local (Exim 4.97)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1sLjs0-00000006LvV-1ftl; Mon, 24 Jun 2024 15:36:32 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c54cb4d7-322e-11ef-90a3-e314d9c70b13
Date: Mon, 24 Jun 2024 15:36:32 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Minios-devel <minios-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH mini-os] mman: correct m{,un}lock() definitions
Message-ID: <20240624133632.emyygqh2hop3kyxv@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jan Beulich <jbeulich@suse.com>,
	Minios-devel <minios-devel@lists.xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e25f5cd4-9130-488c-8294-22bd9fbd76ff@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <e25f5cd4-9130-488c-8294-22bd9fbd76ff@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jan Beulich, on Mon. 24 Jun 2024 15:34:53 +0200, wrote:
> From: Charles Arnold <carnold@suse.com>
> 
> gcc14 no longer (silently) accepts functions with no return type
> specified.
> 
> Signed-off-by: Charles Arnold <carnold@suse.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Thanks!

> 
> --- a/include/posix/sys/mman.h
> +++ b/include/posix/sys/mman.h
> @@ -16,7 +16,7 @@
>  
>  void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset) asm("mmap64");
>  int munmap(void *start, size_t length);
> -static inline mlock(const void *addr, size_t len) { return 0; }
> -static inline munlock(const void *addr, size_t len) { return 0; }
> +static inline int mlock(const void *addr, size_t len) { return 0; }
> +static inline int munlock(const void *addr, size_t len) { return 0; }
>  
>  #endif /* _POSIX_SYS_MMAN_H */


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 13:40:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 13:40:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746647.1153761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjwE-0000oD-4d; Mon, 24 Jun 2024 13:40:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746647.1153761; Mon, 24 Jun 2024 13:40:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLjwE-0000o6-1B; Mon, 24 Jun 2024 13:40:54 +0000
Received: by outflank-mailman (input) for mailman id 746647;
 Mon, 24 Jun 2024 13:40:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLjwC-0000o0-Nx
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 13:40:52 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5f7ec253-322f-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 15:40:51 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2eaafda3b5cso46668941fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 06:40:51 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3c8c01sm62702855ad.129.2024.06.24.06.40.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 06:40:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f7ec253-322f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719236451; x=1719841251; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KLzkamUjja3jxtbo0dxxGSX71v1mCJ21/2iw1rHe5K0=;
        b=HrMFGFTlnIKgEMJ9NwVyblEtRrbQbGQEkXBzM7LgCAB+8lpfdZ4SvaTaxzaB/WBTR/
         q7lUSIWoRlN71Zp4LpNwOZEiuHLO1X92HMl53M1/3Ci8AYxl9u783tjXo3CyKGNjz0s7
         xgU8Moz8lwp0oB0tOVY6eU0R2YYv8/fSLFVuP/g8cv1M7UNmnMekE1oyKCisinG0+Vcf
         i9bU9vPDxMBGDuxN4Wvlpd99vvrlw+ZazC5nNqFvKcMYXEvA9NI2dRT/O/DpDp8Jq5GR
         Lb1c1dqU/78PILUDi2lA/z+T2SCLmTxHEv8t1fyZOO4g1WUW7RGKEO66GKNUydmKPEJS
         d6zw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719236451; x=1719841251;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KLzkamUjja3jxtbo0dxxGSX71v1mCJ21/2iw1rHe5K0=;
        b=FuQTbfW+ju1/r60r1ODg2WUHmZzP5wnZ6oSvhZAhVjDMJxeLaWvHCjz6NN6KIXMKJF
         RVU3wCvDb5oUvQ8cp50Hyu6ocYYK1OMHagmHSs2rY6MKdQ24lsimqGgWzFjr/f2xHOZb
         z44nzrvBKQMrEkjVAN5xGTMj7lD2ds1YDya4c7iFvkcRYnEnOE1vJXC3ur7TtQr/sYFB
         CP624DzN4V8/mNs0EVyt0kD6tqYu5ppXaDD67ZHCYbiw0F7fZvncfpk8zSC3PxLMQBmH
         JmEtf4Qv93bYkSs88dUjzrOEH37TR4JonY9EW/ftaNxYb0WAQIzxQy1r8PqoOzHGLbUX
         aqHg==
X-Forwarded-Encrypted: i=1; AJvYcCVECIkXb1Kf4BouKVtwxNsu2NvXR905VTH7gWno/FOPuuFvpRoEXe3tHitNtG02fBRPP+WGn+zlzJQ9LtcOj6xWLwHEBqB7ecDPgWqu4rI=
X-Gm-Message-State: AOJu0Yy6fqwVugULAGr7dx2IbmaKsZyVUs+ZZsDy/dktQIY395rH49B9
	l6wCdHccbC8CZbfTKCFVw/TaF/vxjfbPORYk8k0visIhPM9ntyEBBaFnUvke9g==
X-Google-Smtp-Source: AGHT+IEJlbkh48uEmGZMyRNc2OFdg4O04Y6YkGtSRxTuC6TavMEwvRkFjmIhl6nsH2gdr0UKFb4ryg==
X-Received: by 2002:a2e:92d6:0:b0:2ec:514f:89b4 with SMTP id 38308e7fff4ca-2ec5951ca7bmr30496521fa.43.1719236451364;
        Mon, 24 Jun 2024 06:40:51 -0700 (PDT)
Message-ID: <23f4b971-a161-47db-85a7-54f50300d039@suse.com>
Date: Mon, 24 Jun 2024 15:40:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: E820 memory allocation issue on Threadripper platforms
To: Branden Sherrell <sherrellbc@gmail.com>
Cc: Patrick Plenefisch <simonpatp@gmail.com>, xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>
References: <CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com>
 <1708c3d7-662a-44bc-b9b3-4ab9f8642d7b@suse.com>
 <dcaf9d8d-ad5a-4714-936b-79ed0e587f9d@suse.com>
 <CAOCpoWeowZPuQTeBp9nu8p8CDtE=u++wN_UqRoABZtB57D50Qw@mail.gmail.com>
 <ac742d12-ec91-4215-bb42-82a145924b4f@suse.com>
 <CAOCpoWfQmkhN3hms1xuotSUZzVzR99i9cNGGU2r=yD5PjysMiQ@mail.gmail.com>
 <fa23a590-5869-4e11-8998-1d03742c5919@suse.com> <ZaeoWBV8IEZap2mr@macbook>
 <15dcef46-aaa8-4f71-bd5c-355001dd9188@suse.com> <ZafOGEwms01OFaVJ@macbook>
 <7BAC7BB5-C321-4C34-884A-21CC12F761BB@gmail.com>
 <36d581a0-f144-4756-b345-8b74ccc25c74@suse.com>
 <70B65B5D-C075-4D8E-8D2B-08A1930EE68B@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <70B65B5D-C075-4D8E-8D2B-08A1930EE68B@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 15:07, Branden Sherrell wrote:
> What is the reasoning that this fix be applied only to PVH domains? Taking a look at the fix logic it appears to walk the E820 to find a suitable range of memory to load the kernel into (assuming it can be determined that the kernel is also relocatable). Why can this logic not be applied to dom0 kernel load in general?

Because PV requirements are different, first and foremost because there
we have both pseudo-physical and machine memory maps to deal with. As you
can see from [1], I've raised the question of how to deal with PV there,
but so far there has been no reply moving the issue toward resolution.

Btw - please don't top-post.

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2024-06/msg00831.html


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 13:48:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 13:48:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746660.1153770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLk3K-0001o8-03; Mon, 24 Jun 2024 13:48:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746660.1153770; Mon, 24 Jun 2024 13:48:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLk3J-0001o1-TZ; Mon, 24 Jun 2024 13:48:13 +0000
Received: by outflank-mailman (input) for mailman id 746660;
 Mon, 24 Jun 2024 13:48:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6zbB=N2=invisiblethingslab.com=marmarek@srs-se1.protection.inumbo.net>)
 id 1sLk3J-0001nv-4s
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 13:48:13 +0000
Received: from fhigh7-smtp.messagingengine.com
 (fhigh7-smtp.messagingengine.com [103.168.172.158])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64d5af78-3230-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 15:48:11 +0200 (CEST)
Received: from compute6.internal (compute6.nyi.internal [10.202.2.47])
 by mailfhigh.nyi.internal (Postfix) with ESMTP id C2CEA1140227;
 Mon, 24 Jun 2024 09:48:09 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute6.internal (MEProxy); Mon, 24 Jun 2024 09:48:09 -0400
Received: by mail.messagingengine.com (Postfix) with ESMTPA; Mon,
 24 Jun 2024 09:48:07 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64d5af78-3230-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	invisiblethingslab.com; h=cc:cc:content-type:content-type:date
	:date:from:from:in-reply-to:message-id:mime-version:reply-to
	:subject:subject:to:to; s=fm1; t=1719236889; x=1719323289; bh=m1
	ygcSpOcgz8PRMolKj0zus6mRL+uXQvw8SdE2416+E=; b=fzWpmH1hpKsWe0Nj1X
	/7ks6k8nCu+EubdtqWUrzgRMEl+Ipc03jgI9I4zSe3y01Uq/T0wS6098HI8MFbAN
	pDyOWt7cpf/gE3iyqfEO3vu67ltsbLbyWVOcmyPHttUtgU9/PA/vogQDd73cPcMm
	V+J+VHxpvGaJ5uGVHlOoQWCAf68gS7AFlOBf4xKUkv26CFa8s7MlKOJGc7EcIj80
	RaEBjPDoTQxkJvpsYJrMBBGrs1vEJ6wOpWTjdfG/0Y9XSZHeeBTJjpaECN+pQ8no
	wL9CwhKc57Wg1DoocF6N4fft6yN/5Ov79cpf4E3n139DxHxt/VWjWLjTwK90ryBL
	nh5A==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:cc:content-type:content-type:date:date
	:feedback-id:feedback-id:from:from:in-reply-to:message-id
	:mime-version:reply-to:subject:subject:to:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; t=
	1719236889; x=1719323289; bh=m1ygcSpOcgz8PRMolKj0zus6mRL+uXQvw8S
	dE2416+E=; b=kJ5y7ctrz70i1raqTos7bxx9TqfZriVT8sPywXo+ER/RZhrxhLC
	oGVV6s54dYNPeLBZ4+ajOm3fvg7VV2e/NGhClqaagbd3GqAs/C+5rjFk25isjX2Z
	/MR0bbP8teefcbvsw7Ak56b4D6OtQ5ymJPiJRYaRUPZBQbo6+MLuSDaOvc18Mblg
	rgyCyQ24WersBgY+47XXTjH1vrPWLRxZL7xtvB3egVxpPSod1u2D9+WnmE353IMM
	pOYd26/Iw70YB1YkucJbejiI+AW7VLAYWH45ZkL1usKeYMkgDM220JkzTXt9eXIY
	kFChZxx/okmg/cLzVXEnuGdnHNhXg4Uwavg==
X-ME-Sender: <xms:GXl5Zn04JyJeR_4i-OP_aYUIODz9nDWcwgf9B2LYD0KxYUX2dpsQ6Q>
    <xme:GXl5ZmF3pYhqtUbqKmdjKPHJPvbJRszPqmieA6Quoo7CwntI5zm_ZKGz24OrkuvgQ
    rxEu0XAD0GNZg>
X-ME-Received: <xmr:GXl5Zn5SgfpfJUeBww96Oh8tuTAxmIl0StN2gKMgp_kdJDHT_B8Ag9GUGSXxdXg2d5iuR_cFb-7wEOLcrHzYL0ytBsLpR2MEvQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgedvledrfeeguddgieelucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvfevuffkgggtugesghdtreertddtjeenucfhrhhomhepofgrrhgvkhcu
    ofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinhhvih
    hsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepgefhfeff
    uefhffdvheeugfefleehvdeiueekudeuveeljeehgeffteehgffggeelnecuffhomhgrih
    hnpehgihhthhhusgdrtghomhdpkhgvrhhnvghlrdhorhhgnecuvehluhhsthgvrhfuihii
    vgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsih
    gslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:GXl5Zs2_OG4ozr579RiPt9-1hSba8Y5wgy7n7dvKoAFOgvNcSKVR5A>
    <xmx:GXl5ZqGcu73kEowIXBlOxdzIuSujbb9xi15vS1AFz9R5-dNA_1SCfw>
    <xmx:GXl5Zt_y7No2iPxQfqr6gjhlRrPsujOToifFwvguwEnYVqITnYCwsw>
    <xmx:GXl5ZnlUSdMm9KfUrAl4Lf6LTwz3sZCzCIO8B0U9glOfTggydG4AKQ>
    <xmx:GXl5ZmiZWXmSR-RFYjDeD1Dgan8xBzTJmQf6r3krGx39WuVS7HVsOm1A>
Feedback-ID: i1568416f:Fastmail
Date: Mon, 24 Jun 2024 15:48:05 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>, Christoph Hellwig <hch@lst.de>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: Regression in xen-blkfront regarding sector sizes
Message-ID: <Znl5FYI9CC37jJLX@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="5QbWF5RurAf+2VDd"
Content-Disposition: inline


--5QbWF5RurAf+2VDd
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 24 Jun 2024 15:48:05 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>, Christoph Hellwig <hch@lst.de>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: Regression in xen-blkfront regarding sector sizes

Hi,

Some Qubes users report a regression in xen-blkfront regarding block
size reporting. It works fine on 6.8.8, but appears broken on 6.9.2.

The specific problem is that blkfront reports a block size of 512, even
for backend devices with 4096-byte blocks. This, for example, makes
512-byte reads with O_DIRECT fail, and appears to break mounting a
filesystem on such a device (at least an XFS one).

For example it looks like this:

    [user@dom0 ~]$ head /sys/block/loop12/queue/*_block_size
    ==> /sys/block/loop12/queue/logical_block_size <==
    4096

    ==> /sys/block/loop12/queue/physical_block_size <==
    4096

    [user@dom0 bin]$ qvm-run -p the-vm 'head /sys/block/xvdi/queue/*_block_size'
    ==> /sys/block/xvdi/queue/logical_block_size <==
    512

    ==> /sys/block/xvdi/queue/physical_block_size <==
    512

and then:

    $ sudo dd if=/dev/xvdi of=/dev/null count=1 status=progress iflag=direct
    /usr/bin/dd: error reading '/dev/xvdi': Input/output error
    0+0 records in
    0+0 records out
    0 bytes copied, 0.000170858 s, 0.0 kB/s

and mounting fails like this:

    [   68.055045] SGI XFS with ACLs, security attributes, realtime, scrub, quota, no debug enabled
    [   68.057308] I/O error, dev xvdi, sector 0 op 0x0:(READ) flags 0x1000 phys_seg 1 prio class 0
    [   68.057333] XFS (xvdi): SB validate failed with error -5.

More details at https://github.com/QubesOS/qubes-issues/issues/9293

Rusty suspects it's related to
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/block/xen-blkfront.c?id=ba3f67c1163812b5d7ec33705c31edaa30ce6c51,
so I'm cc-ing people mentioned there too.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--5QbWF5RurAf+2VDd
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmZ5eRUACgkQ24/THMrX
1yzhVwf+KP6Njl2Qp0s1yEbrTFIzpgQkPJqQ7YFHoYOss3N65lEk1XS5f6j0MKoE
7BpuQcNtti9+9guVhoSTaOnp2ittNLj6GP478E/zrOCly8Ta5G5HVz+XIMha++eO
8anR0i0V9yQr7lCetluey+WYA5VgVlwWb5CnG8S7fDfDOXn0ZW0ok1qU/CSVJJ/z
xpwkn0fo27cj3cac+Bp1RvYo5XXZ6oXURaTyHx71Ov/+WiHawiiUtAll2IV8BAxV
1DvSCbd8ggPCC+F0mLMpXnqnaITdbkP77ca0PB3Ge+vd+wa3ibLJWEIefTLXfICl
IiKkAju3aylRai7xySrRhbXCxfnxlg==
=hp8c
-----END PGP SIGNATURE-----

--5QbWF5RurAf+2VDd--


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 13:48:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 13:48:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746656.1153780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLk3e-0002KA-6V; Mon, 24 Jun 2024 13:48:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746656.1153780; Mon, 24 Jun 2024 13:48:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLk3e-0002K3-3h; Mon, 24 Jun 2024 13:48:34 +0000
Received: by outflank-mailman (input) for mailman id 746656;
 Mon, 24 Jun 2024 13:46:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pcxP=N2=kernel.org=cassel@srs-se1.protection.inumbo.net>)
 id 1sLk1K-0001TX-LI
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 13:46:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b5a09d1-3230-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 15:46:08 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 511A060346;
 Mon, 24 Jun 2024 13:46:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2C6A4C32782;
 Mon, 24 Jun 2024 13:45:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b5a09d1-3230-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719236766;
	bh=nEbRkmLliGbrhN7a0dFrrROolzsUWf2ymgDG1frRQgc=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=HPvBUv03t5fHmHl5Vdhl5NNvWh/1rmgJmDIUT78BrhNjkekMyiHBemsXBtITKK+pk
	 Ph9y6SGv/k+x5Jl2xhlIMX2sddEfv5ScOkESPGjMaDvg1ZXM5kaST/YeMPDuDVXpgg
	 iH6h0sucNAo73sjXdAnuoapZ6RfRXTSWnSxyRGyPeNDve4kUinDBuUenLUefA9rCqX
	 54StG+jluCv5aLHX9mMsHCk5XhZuGMG20N2tbjRRxE9agT2YFzShKRFvN/weyUUZ99
	 p5MaDhc41zpO0i44wkIP3EtvIWnOiXRYHg0365/uFE0+rIV0Jq+m7pwvM4OmoBjW+O
	 CdaoXjn9E8fNg==
Date: Mon, 24 Jun 2024 15:45:57 +0200
From: Niklas Cassel <cassel@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: kernel test robot <oliver.sang@intel.com>, oe-lkp@lists.linux.dev,
	lkp@intel.com, Jens Axboe <axboe@kernel.dk>,
	Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, ying.huang@intel.com,
	feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [axboe-block:for-next] [block]  bd4a633b6f: fsmark.files_per_sec
 -64.5% regression
Message-ID: <Znl4lXRmK2ukDB7r@ryzen.lan>
References: <202406241546.6bbd44a7-oliver.sang@intel.com>
 <20240624083537.GA19941@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240624083537.GA19941@lst.de>

On Mon, Jun 24, 2024 at 10:35:37AM +0200, Christoph Hellwig wrote:
> This is odd to say at least.  Any chance you can check the value
> of /sys/block/$DEVICE/queue/rotational for the relevant device before
> and after this commit?  And is this an ATA or NVMe SSD?
> 

Seems to be ATA SSD:
https://download.01.org/0day-ci/archive/20240624/202406241546.6bbd44a7-oliver.sang@intel.com/job.yaml

ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BG012T4_BTHC428201ZX1P2OGN-part1"

Most likely btrfs behaves differently depending on whether the nonrot
flag is set. (And, as you are suggesting, most likely the value of the
nonrot flag is somehow different after commit bd4a633b6f.)


Kind regards,
Niklas


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 14:06:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 14:06:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746672.1153790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkL9-00066E-KM; Mon, 24 Jun 2024 14:06:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746672.1153790; Mon, 24 Jun 2024 14:06:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkL9-000667-HY; Mon, 24 Jun 2024 14:06:39 +0000
Received: by outflank-mailman (input) for mailman id 746672;
 Mon, 24 Jun 2024 14:06:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jLHn=N2=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sLkL8-000661-5p
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 14:06:38 +0000
Received: from mail-qv1-xf33.google.com (mail-qv1-xf33.google.com
 [2607:f8b0:4864:20::f33])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f78fd6e6-3232-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 16:06:36 +0200 (CEST)
Received: by mail-qv1-xf33.google.com with SMTP id
 6a1803df08f44-6b5253ffd24so10609166d6.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 07:06:36 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b51ef6e795sm34003766d6.141.2024.06.24.07.06.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 07:06:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f78fd6e6-3232-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719237995; x=1719842795; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=imOiZ7CuJB5aMTBnxc3MrW9G27H52CpIPfYGY0PN8JY=;
        b=cirvNAHBNCIjw+KlqpGtij3PnxPbz4u8gbYLKN2j0ny/jOQ/kQTu5inyQL4JuVUkxf
         EgVdw+VytU9Q3eUOBztCUnmqWw0Ru65lod6wrLFiogrhcQIQMIJ5OPGIa3od7tqYpoVB
         ax0SyqcUDQa6shAeaz+DPNWjYtGfBVPWz8FQ0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719237995; x=1719842795;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=imOiZ7CuJB5aMTBnxc3MrW9G27H52CpIPfYGY0PN8JY=;
        b=Y83DqYXqFzEDF6ES6mNPCdP5hM0ps2AFaKZfgwK/GsKq3+dPFTwrphr8j625dF+1Bk
         cEoNgsK4fd0Vq3a+PAdCJqVvirQp0ZugNl9ZKl4kBKEcSSYHRoguZzXfNRzi9ujH4izU
         Sca9GehmHAtEBQNSzK6kPYwQLyS9Y4SgurpT7BoiYotK9dL23ZAsOtKcHLyQ18Us272l
         Ep1fKmhR3pIobJy32nBqpWbhXfZgzsNy4NgRn+uVuRx8iPCOJ3KhZOtbjTC+POg/B+8A
         ExrCTK7xlaHFORx/ZBUIbBsNqAF8UQqlUQcuGiTc1v5O/avZ/vS96q2JdwkZbwpMhRKb
         lB0Q==
X-Forwarded-Encrypted: i=1; AJvYcCWNnESIsiEiJiMVoLXslpJzObErEPpYSPc1bs8JKAxSApgx0ZddFV1H1SPn8VEWhro8M3Q2/dQKlk0PqBpIQatDKo0RdlZYV0jzalasvNo=
X-Gm-Message-State: AOJu0YwsdcmDECsCxTnONdI+JzxMr8jtBZvagzcXCeaZBwuApktuNUVI
	i8OoFltim9HxOKplPB9hCnU4ZYuAJMc4FJP1tLnY2LtiQMQcQqzk+kGhLr2G30Y=
X-Google-Smtp-Source: AGHT+IE0Lux+2Z+lGwXTmLtrrBF1Fn3tWEl+sNV2yAJiQhnsRn9sF/ZiXuBESXDW6ebSLBVpb2m/PA==
X-Received: by 2002:a0c:f0d3:0:b0:6b4:f6f6:bf39 with SMTP id 6a1803df08f44-6b53635e007mr58818476d6.2.1719237994931;
        Mon, 24 Jun 2024 07:06:34 -0700 (PDT)
Message-ID: <309d5dc1-833c-4f28-8010-b41968942035@citrix.com>
Date: Mon, 24 Jun 2024 15:06:32 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/4] xen/xlat: Sort structs per file
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Xen-devel <xen-devel@lists.xenproject.org>, Julien Grall <julien@xen.org>
References: <20240415154155.2718064-1-andrew.cooper3@citrix.com>
 <20240415154155.2718064-3-andrew.cooper3@citrix.com>
 <21436a56-f9e0-4700-8216-3bfa4094cc01@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <21436a56-f9e0-4700-8216-3bfa4094cc01@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 18/04/2024 10:00 am, Jan Beulich wrote:
> On 15.04.2024 17:41, Andrew Cooper wrote:
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> While I don't mind the change as is, "sort" is ambiguous here in one regard.
> Personally I'd prefer if those parts of the change were dropped, but I can
> live with the sorting criteria being spelled out in the description:
>
>> @@ -40,13 +40,6 @@
>>  
>>  ?	cpu_offline_action		arch-x86/xen-mca.h
>>  ?	mc				arch-x86/xen-mca.h
>> -?	mcinfo_bank			arch-x86/xen-mca.h
>> -?	mcinfo_common			arch-x86/xen-mca.h
>> -?	mcinfo_extended			arch-x86/xen-mca.h
>> -?	mcinfo_global			arch-x86/xen-mca.h
>> -?	mcinfo_logical_cpu		arch-x86/xen-mca.h
>> -?	mcinfo_msr			arch-x86/xen-mca.h
>> -?	mcinfo_recovery			arch-x86/xen-mca.h
>>  !	mc_fetch			arch-x86/xen-mca.h
>>  ?	mc_info				arch-x86/xen-mca.h
>>  ?	mc_inject_v2			arch-x86/xen-mca.h
>> @@ -54,6 +47,13 @@
>>  ?	mc_msrinject			arch-x86/xen-mca.h
>>  ?	mc_notifydomain			arch-x86/xen-mca.h
>>  !	mc_physcpuinfo			arch-x86/xen-mca.h
>> +?	mcinfo_bank			arch-x86/xen-mca.h
>> +?	mcinfo_common			arch-x86/xen-mca.h
>> +?	mcinfo_extended			arch-x86/xen-mca.h
>> +?	mcinfo_global			arch-x86/xen-mca.h
>> +?	mcinfo_logical_cpu		arch-x86/xen-mca.h
>> +?	mcinfo_msr			arch-x86/xen-mca.h
>> +?	mcinfo_recovery			arch-x86/xen-mca.h
>>  ?	page_offline_action		arch-x86/xen-mca.h
> Imo this sorting was fine (at least one further instance below): Whether
> underscore sorts ahead of lower case letters depends on how sorting is done.
> I take it you assume sorting as per the C locale,

Indeed.

> when the original sorting was
> considering underscores to be separators, i.e. in a different character class
> (together with e.g. dash or tilde).

If _ is expected to be a separator, then the correct sorting would be:

mc
mc_fetch
mc_info
mcinfo_bank
mcinfo_common
mcinfo_extended
mcinfo_global
mcinfo_logical_cpu
mcinfo_msr
mcinfo_recovery
mc_inject_v2
mc_mceinject

which is definitely not what we want.  This came specifically from
`sort`, not something I did by hand.

`LANG=C sort` gives the ordering presented in this patch.
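A minimal demonstration, using a small hypothetical subset of the names: in
the C locale, '_' (0x5F) collates before every lowercase letter (0x61 and up),
so mc_fetch lands ahead of mcinfo_bank.

```shell
# C-locale sort: '_' sorts before all lowercase letters.
printf 'mcinfo_bank\nmc_fetch\nmc\nmc_info\n' | LANG=C sort
# -> mc, mc_fetch, mc_info, mcinfo_bank
```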


>
> When using C locale sorting, I think arch-x86/xen-@arch@.h also would need
> moving past arch-x86/xen.h (whereas right now all separators are deemed
> equal and hence @ comes ahead of h which in turn is ahead of m).

`sort` agrees, so I'll do this too, and note it in the commit message.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 14:07:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 14:07:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746674.1153801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkLY-0006UI-RO; Mon, 24 Jun 2024 14:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746674.1153801; Mon, 24 Jun 2024 14:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkLY-0006UB-OQ; Mon, 24 Jun 2024 14:07:04 +0000
Received: by outflank-mailman (input) for mailman id 746674;
 Mon, 24 Jun 2024 14:07:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IpOW=N2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sLkLX-0006Sp-LL
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 14:07:03 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 077703b6-3233-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 16:07:02 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec1ac1aed2so53337871fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 07:07:02 -0700 (PDT)
Received: from ?IPV6:2003:e5:8729:4000:29eb:6d9d:3214:39d2?
 (p200300e58729400029eb6d9d321439d2.dip0.t-ipconnect.de.
 [2003:e5:8729:4000:29eb:6d9d:3214:39d2])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d305616dfsm4772996a12.79.2024.06.24.07.07.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 07:07:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 077703b6-3233-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719238022; x=1719842822; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=0barnV27hwc5UKQU4XJM1KtyxYMRXyRpHupRJPqEnIQ=;
        b=CtqCJaicL94lLwhKbR64BdSnc2mDMrtmgB5nOlPMpW4HtA5gwESyd+Kgv4VTs32rdX
         b0mSdW8goE9BxdouwoSoLqvw+a1boh8fdHG3Y4Eefj38zci9LyhQukPaRrpOR+Nn7pwR
         R2VZHoy8YmMyZ7b4DEoq7uuwURr08Rn4RvRbMnH7GXpsz+bYacswyH9ATbnS/DRO+dpf
         Ir4gdq05sYArvY6EofbdqCyxMu5lScT07uAtHywKin/tI5QRDlAw39wfwnuyiNGYw11U
         jgQoAa2B5zUC6/Fq7ckvMc/APKlDv/rDn/LfmX44WXVkCsPsnGBUOXlqFwU5qOE09L9t
         DHxg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719238022; x=1719842822;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=0barnV27hwc5UKQU4XJM1KtyxYMRXyRpHupRJPqEnIQ=;
        b=ttECZd/DNlGb1bPapRcSJhuUZoFO3Awfl/i/0uSFgZ/nzDqQ0J0N7Pdy5dv+JWTT66
         kCfZyw4fP8RLSIhhCUR7Yi7mtR4mFhfuX7cR4dw7N43Q517HQ9gW9Pw3xznTDeHQfoKP
         CNKOZitpdSz0VJoq4bogjPsL06enRYn8yP9IbNNhNcm6mAJnOcdV8bLIo4oGivekZWtw
         /QwIcw4fuB8hJj+FN/1/i/7I7W60/1YanxX2OZtXxEz+W75cO8gvFxB4GBcWh9Ib2nB3
         i+9mk1JL7AEgUBdgaWHPvjYnQ+oawyjz8qTEpslcShgdZ6BhmPVNFYfbRvPa1ZGsSnO7
         TFpg==
X-Gm-Message-State: AOJu0YxS8urny5lgfDJz0KvC1whjIE3s0U0C90yJeApvYl/xyIxyii+E
	PLTT8NXjJc4Ihjz/GJc+IsavYnaALfrgYjybphaeEtsDknQM50LMq7gpcKxIbMqUfh3HAGrCWTu
	B
X-Google-Smtp-Source: AGHT+IEE50wp9VgUOoKh1JacFeWQ/XJli9PtBBP8RLQC0+E/sms7PNusypFEsXwyRLvWSpkS2ZkdmQ==
X-Received: by 2002:a2e:9d86:0:b0:2ec:5a0d:b2dd with SMTP id 38308e7fff4ca-2ec5b2f0373mr27918991fa.39.1719238021688;
        Mon, 24 Jun 2024 07:07:01 -0700 (PDT)
Message-ID: <d280b6ec-8de2-4917-a453-7b519aa67cfd@suse.com>
Date: Mon, 24 Jun 2024 16:07:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: ACPI NVS range conflicting with Dom0 page tables (or kernel
 image)
To: Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a5a8a016-2107-46fb-896b-2baaf66566d4@suse.com>
 <ZnBCFgHltVqj2FDh@mail-itl> <bd2eb947-7fca-4f1a-bf43-addccdda35a0@suse.com>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <bd2eb947-7fca-4f1a-bf43-addccdda35a0@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 17.06.24 16:25, Jan Beulich wrote:
> On 17.06.2024 16:03, Marek Marczykowski-Górecki wrote:
>> On Mon, Jun 17, 2024 at 01:22:37PM +0200, Jan Beulich wrote:
>>> Hello,
>>>
>>> while it feels like we had a similar situation before, I can't seem to
>>> find traces thereof, or associated (Linux) commits.
>>
>> Is it some AMD Threadripper system by any chance?
> 
> It's an AMD system in any event, yes. I don't have all the details on it.
> 
>> Previous thread on this issue:
>> https://lore.kernel.org/xen-devel/CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com/
> 
> Ah yes, that's probably the one I was vaguely remembering. There it was the
> kernel image that the E820 conflicted with. Yet ...
> 
>>> With
>>>
>>> (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x4000000
>>> ...
>>> (XEN)  Dom0 alloc.:   0000000440000000->0000000448000000 (619175 pages to be allocated)
>>> ...
>>> (XEN)  Loaded kernel: ffffffff81000000->ffffffff84000000
>>>
>>> the kernel occupies the space from 16Mb to 64Mb in the initial allocation.
>>> Page tables come (almost) directly above:
>>>
>>> (XEN)  Page tables:   ffffffff84001000->ffffffff84026000
>>>
>>> I.e. they're just above the 64Mb boundary. Yet sadly in the host E820 map
>>> there is
>>>
>>> (XEN)  [0000000004000000, 0000000004009fff] (ACPI NVS)
>>>
>>> i.e. a non-RAM range starting at 64Mb. The kernel (currently) won't tolerate
>>> such an overlap (also if it was overlapping the kernel image, e.g. if on the
>>> machine in question a sufficiently larger kernel was used). Yet with its
>>> fundamental goal of making its E820 match the host one I'm also in trouble
>>> thinking of possible solutions / workarounds. I certainly do not see Xen
>>> trying to cover for this, as the E820 map re-arrangement is purely a kernel
>>> side decision (forward ported kernels got away without, and what e.g. the
>>> BSDs do is entirely unknown to me).
>>
>> In Qubes we have worked around the issue by moving the kernel lower
>> (CONFIG_PHYSICAL_START=0x200000):
>> https://github.com/QubesOS/qubes-linux-kernel/commit/3e8be4ac1682370977d4d0dc1d782c428d860282
>>
>> Far from ideal, but gets it bootable...
> 
> ... as you say, it's a workaround for particular systems, but not generally
> dealing with the underlying issue. This explains why I couldn't find any
> patch(es), though.

Today the PV dom0 boot code only supports relocating the initrd or the p2m
map in case of E820 conflicts.

Relocating the start_info data or the initial page tables should be rather
simple, but doing the same with the kernel is problematic.

I need to look into this in more detail; maybe I can come up with a solution.
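For illustration, the conflict above reduces to a simple interval check. The
physical addresses are derived from the quoted boot log, assuming the usual
fixed offset between the kernel's virtual mapping (ffffffff81000000) and its
paddr (0x1000000), so the page tables at ffffffff84001000 sit at paddr
0x4001000.

```shell
# Illustration only: do the dom0 page tables overlap the host's ACPI NVS range?
pt_start=$((0x4001000))     # page tables start (paddr, from the boot log)
pt_end=$((0x4026000))       # page tables end (paddr)
nvs_start=$((0x4000000))    # ACPI NVS start from the host E820
nvs_end=$((0x400a000))      # [0x4000000, 0x4009fff], end made exclusive

# Half-open interval overlap test.
if [ "$pt_start" -lt "$nvs_end" ] && [ "$nvs_start" -lt "$pt_end" ]; then
    echo "page tables overlap the ACPI NVS range"
fi
```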


Juergen


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 14:16:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 14:16:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746684.1153810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkU8-0000be-Kb; Mon, 24 Jun 2024 14:15:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746684.1153810; Mon, 24 Jun 2024 14:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkU8-0000bX-I7; Mon, 24 Jun 2024 14:15:56 +0000
Received: by outflank-mailman (input) for mailman id 746684;
 Mon, 24 Jun 2024 14:15:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H9EP=N2=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLkU6-0000bR-MN
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 14:15:54 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43908da7-3234-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 16:15:52 +0200 (CEST)
Received: from [10.176.134.80] (unknown [160.78.253.181])
 by support.bugseng.com (Postfix) with ESMTPSA id 900564EE0738;
 Mon, 24 Jun 2024 16:15:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43908da7-3234-11ef-b4bb-af5377834399
Message-ID: <1aeebff6-68f2-4135-ae5d-6c76f29f4ab0@bugseng.com>
Date: Mon, 24 Jun 2024 16:15:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2] docs/misra: rules for mass adoption
To: Stefano Stabellini <stefano.stabellini@amd.com>,
 xen-devel@lists.xenproject.org
Cc: jbeulich@suse.com, sstabellini@kernel.org, andrew.cooper3@citrix.com,
 julien@xen.org, michal.orzel@amd.com, bertrand.marquis@arm.com,
 roger.pau@citrix.com
References: <20240622001422.3852207-1-stefano.stabellini@amd.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <20240622001422.3852207-1-stefano.stabellini@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 22/06/24 02:14, Stefano Stabellini wrote:
> From: Stefano Stabellini <sstabellini@kernel.org>
> 
> This patch adds a bunch of rules to rules.rst that are uncontroversial
> and have zero violations in Xen. As such, they have been approved for
> adoption.
> 
> All the ones that regard the standard library have the link to the
> existing footnote in the notes.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>

Also, Rule 21.11 ("The standard header file <tgmath.h> shall not be
used") is clean; I think it should be added in this patch.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 14:23:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 14:23:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746695.1153821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkbF-0002z9-GQ; Mon, 24 Jun 2024 14:23:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746695.1153821; Mon, 24 Jun 2024 14:23:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkbF-0002z2-CJ; Mon, 24 Jun 2024 14:23:17 +0000
Received: by outflank-mailman (input) for mailman id 746695;
 Mon, 24 Jun 2024 14:21:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IYJk=N2=dreamsnake.net=aad@srs-se1.protection.inumbo.net>)
 id 1sLkZV-0002wA-0d
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 14:21:29 +0000
Received: from mail-qk1-x72b.google.com (mail-qk1-x72b.google.com
 [2607:f8b0:4864:20::72b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0a5c938c-3235-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 16:21:26 +0200 (CEST)
Received: by mail-qk1-x72b.google.com with SMTP id
 af79cd13be357-7954dcf3158so244965985a.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 07:21:26 -0700 (PDT)
Received: from smtpclient.apple (pool-100-6-75-225.pitbpa.fios.verizon.net.
 [100.6.75.225]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b51ef30ef9sm34454366d6.76.2024.06.24.07.21.24
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 24 Jun 2024 07:21:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a5c938c-3235-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=dreamsnake-net.20230601.gappssmtp.com; s=20230601; t=1719238885; x=1719843685; darn=lists.xenproject.org;
        h=references:to:cc:in-reply-to:date:subject:mime-version:message-id
         :from:from:to:cc:subject:date:message-id:reply-to;
        bh=vyM7SpnmS6E42kucj9mVuoblNwWj8XmJP/H7ZZM8ROY=;
        b=hH1JPz2MdKK2PSTD9IpuZfN6ePhQ/M91cinzgAbYbiSG8LaHZ+QHh2SoENzI8HJWW9
         Rqs1eO4pvAn9sUptCgmdUG4xaQGCdvBPe7sZfbfwvAv7y5YVbGf/+Ctb0H05lUgPxXpb
         lmTI5fdpooAwbXWHx1PErcFG5VzBsGPCw7svYkgzLIOZvz4Ea7+JGTPmqAuYvrBQUALX
         FDkztveey6KF+CJmUNBY8cCpnlzAImAX7YhhMHknlCGtL3MWS80rQry6KDKrFmfquQ+u
         /Q68A0ouUbDN7edujCrjnf3k9er7eeKsrVr7CBWAx9yQwyrUPPcKfaQIvAfUtVAj2n5V
         ooSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719238885; x=1719843685;
        h=references:to:cc:in-reply-to:date:subject:mime-version:message-id
         :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=vyM7SpnmS6E42kucj9mVuoblNwWj8XmJP/H7ZZM8ROY=;
        b=FKtbRyTwCtaJrMpOu/RKVpzL1rpTqHokhepdM0LHCv1uz3D77M1mHL6D41dmQU1CTV
         To2Q3xSSKffE8aK31USdpFvLFB+tQAUueBpPnIvQCuiTmqCLdKwb4oF80IbNZiuopux6
         6jzM5KxPy7veOD+ZXhiMTuvOVXY2nz9U/MZ4RTO+2Z9rTnaq6N5naze0NWXOuruwvGwm
         Wh46xN9am1MTONgi2mOZjht1ASILsQPrEf4KJu0rgH5US1ZTCb8Xfl3ywKFxYHkGpwql
         UO82Clsswleg6T6ZChfqi5xSDL6F4M+8k35faXhqg3rVkj8vjjTmT+tc1RIqKeHVSfxV
         BfZg==
X-Forwarded-Encrypted: i=1; AJvYcCX8dN3Zg5pl0WsuUUCcajMRG4gZpirTECBWTsNv60Ux+HCjboZYouaHJBlB1YTPa4i2+Zw6gfR1obz19yHzhIIY4PlMeVasr9vNqqkvSTY=
X-Gm-Message-State: AOJu0Yy/C1njFaY2NLMCHP5n1LnP6nhBTXDPI2xBTIJ+JwY9sYH0T/ZS
	fbOzN6robqBXccP/Fmn83SIZYfNJmGcZBa7Ycw0kW2pNssKjVDy8wNTKOFSu
X-Google-Smtp-Source: AGHT+IHXYJoHqe+lVWeHEvpL31lPzpeMHrumwGR4d2Z3MDIXugyalKxqFS2lKuqDdFuxVAxlh+r7+g==
X-Received: by 2002:a0c:f594:0:b0:6b5:50ba:42c3 with SMTP id 6a1803df08f44-6b550ba45a4mr31114516d6.43.1719238885298;
        Mon, 24 Jun 2024 07:21:25 -0700 (PDT)
From: Anthony D'Atri <aad@dreamsnake.net>
Message-Id: <1E6AF1FD-5E2B-49D6-B42E-1BEA85BA7E93@dreamsnake.net>
Content-Type: multipart/alternative;
	boundary="Apple-Mail=_C8799745-5BE2-4755-A5E5-5C731F6565EF"
Mime-Version: 1.0 (Mac OS X Mail 16.0 \(3774.600.62\))
Subject: Re: [axboe-block:for-next] [block]  bd4a633b6f: fsmark.files_per_sec
 -64.5% regression
Date: Mon, 24 Jun 2024 10:21:13 -0400
In-Reply-To: <Znl4lXRmK2ukDB7r@ryzen.lan>
Cc: Christoph Hellwig <hch@lst.de>,
 kernel test robot <oliver.sang@intel.com>,
 oe-lkp@lists.linux.dev,
 lkp@intel.com,
 Jens Axboe <axboe@kernel.dk>,
 Damien Le Moal <dlemoal@kernel.org>,
 Hannes Reinecke <hare@suse.de>,
 linux-m68k@lists.linux-m68k.org,
 linux-um@lists.infradead.org,
 linux-kernel@vger.kernel.org,
 linux-block@vger.kernel.org,
 drbd-dev@lists.linbit.com,
 nbd@other.debian.org,
 linuxppc-dev@lists.ozlabs.org,
 ceph-devel@vger.kernel.org,
 virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org,
 linux-bcache@vger.kernel.org,
 dm-devel@lists.linux.dev,
 linux-raid@vger.kernel.org,
 linux-mmc@vger.kernel.org,
 linux-mtd@lists.infradead.org,
 nvdimm@lists.linux.dev,
 linux-nvme@lists.infradead.org,
 linux-s390@vger.kernel.org,
 linux-scsi@vger.kernel.org,
 ying.huang@intel.com,
 feng.tang@intel.com,
 fengwei.yin@intel.com
To: Niklas Cassel <cassel@kernel.org>
References: <202406241546.6bbd44a7-oliver.sang@intel.com>
 <20240624083537.GA19941@lst.de> <Znl4lXRmK2ukDB7r@ryzen.lan>
X-Mailer: Apple Mail (2.3774.600.62)


--Apple-Mail=_C8799745-5BE2-4755-A5E5-5C731F6565EF
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain;
	charset=utf-8

S3610 I think.  Be sure to use sst or the chassis vendor's tool
to update the firmware.

> On Jun 24, 2024, at 9:45 AM, Niklas Cassel <cassel@kernel.org> wrote:
> 
> SSDSC2BG012T4



--Apple-Mail=_C8799745-5BE2-4755-A5E5-5C731F6565EF--


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 14:29:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 14:29:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746702.1153831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkh6-0003tU-1w; Mon, 24 Jun 2024 14:29:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746702.1153831; Mon, 24 Jun 2024 14:29:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkh5-0003tN-VA; Mon, 24 Jun 2024 14:29:19 +0000
Received: by outflank-mailman (input) for mailman id 746702;
 Mon, 24 Jun 2024 14:29:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=IpOW=N2=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sLkh4-0003tH-JQ
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 14:29:18 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 22adce08-3236-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 16:29:16 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a72477a60fbso158169566b.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 07:29:16 -0700 (PDT)
Received: from ?IPV6:2003:e5:8729:4000:29eb:6d9d:3214:39d2?
 (p200300e58729400029eb6d9d321439d2.dip0.t-ipconnect.de.
 [2003:e5:8729:4000:29eb:6d9d:3214:39d2])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a710595adedsm273690766b.214.2024.06.24.07.29.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 07:29:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22adce08-3236-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719239356; x=1719844156; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=m3SdMcYnJ55tPmreppreg/UrG6xInFR5S4Utq0FJfIU=;
        b=Xr6vNR94N+enxsDNaPKwAItHXfmf3bqdX/und2lhDNkkJG9I3GLtEJ9EtG3Pn1jTUy
         OhXcUKaJcZ+ntt1n39MTCoiM4xoZRGKYaG6INHi8MUj22JDeBCabO+dvP0zVDbHw6Soq
         0zV+buG8TgmZg0X8829WqcbbgTSiECpP4dOk6GWb2MOgX9XoqKhyv6Tm/m0Bfw4to//s
         CM+Yw1Dvm92Bj496yPNjmt+b80eFmfR7Vj9mQ/Pl72sDAEi+tlFmf9x1giZ/3GOHHGdm
         LNcgUi6gNgoOXeCg5jsjTL9lgr/w5LBbrbmp9A//gHoHF0ygXRqXnraTPL/TkNBv0ndn
         Ww5A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719239356; x=1719844156;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=m3SdMcYnJ55tPmreppreg/UrG6xInFR5S4Utq0FJfIU=;
        b=KKMafrkHDDT+1qvtzXYv4j+4tnV8UFFIgQMcsWl9AHCPRHPPtGardMBGdhWqfUUCLH
         iOCB803ziIHWrAkZ3iNMbsvMKK4qORKJn1fHn5H3ooaNsFLSQTaI8Lt+j1Jr1npx3cO8
         tuKdisn+aLv86bo8GxcwoWgLUxcLMuTgKNQdGVwyx3W/lPcAkt84U96bz7zdAc62dzSl
         noxUGR3BIkT7bK9dT+yG04WrThTZnOwFHRluXyv7gY4EPT+W4BaDb1zGlLiZyc6fDNQo
         QCgBo28VTyEw0ibVR7rwDg6fJaoAdSmXTgX5L2tt7LEmEW5mz7FmVuEbb1Sumx5w7Fk4
         PISA==
X-Forwarded-Encrypted: i=1; AJvYcCXwLQxf8mv3hUwhQxCQjUzchUzINpmb8ZbntEMU2Dy+HJBvd68IXDSRmDmvxvBHOozjBhnAueT4KCAGrzAohApD+YhG1Bi1te1BhXLgMDM=
X-Gm-Message-State: AOJu0Yz+F7yXVnXPMU8nbNjR94SbbGmA3pzGMM4R4ftC6iYOo4NpwERj
	KNAi48LYhtbhK7CYBFcHiTRvpzVNGMkyqnHNIujrzur+Gtsqn6YtEsc49EhhP44=
X-Google-Smtp-Source: AGHT+IH05SxNgarU73bcvncRGWPtcsIEhPhO6HvBZIN9kJzmvPZv7SsGWka+Eygq6+GshEIHdX1QIA==
X-Received: by 2002:a17:906:6a87:b0:a6f:51d8:1963 with SMTP id a640c23a62f3a-a7245bada5bmr281959966b.43.1719239355865;
        Mon, 24 Jun 2024 07:29:15 -0700 (PDT)
Message-ID: <1944dd3f-1ba8-4559-b71a-056b9309ab58@suse.com>
Date: Mon, 24 Jun 2024 16:29:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Regression in xen-blkfront regarding sector sizes
To: =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, xen-devel <xen-devel@lists.xenproject.org>
Cc: Christoph Hellwig <hch@lst.de>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Jens Axboe <axboe@kernel.dk>
References: <Znl5FYI9CC37jJLX@mail-itl>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <Znl5FYI9CC37jJLX@mail-itl>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 24.06.24 15:48, Marek Marczykowski-Górecki wrote:
> Hi,
> 
> Some Qubes users report a regression in xen-blkfront regarding block
> size reporting. It works fine on 6.8.8, but appears broken on 6.9.2.
> 
> The specific problem is that blkfront reports a block size of 512, even for
> backend devices with a 4096 block size. This, for example, makes 512-byte
> reads with O_DIRECT fail, and appears to break mounting a filesystem on
> such a device (at least an XFS one).
> 
> For example it looks like this:
> 
>      [user@dom0 ~]$ head /sys/block/loop12/queue/*_block_size
>      ==> /sys/block/loop12/queue/logical_block_size <==
>      4096
> 
>      ==> /sys/block/loop12/queue/physical_block_size <==
>      4096
> 
>      [user@dom0 bin]$ qvm-run -p the-vm 'head /sys/block/xvdi/queue/*_block_size'
>      ==> /sys/block/xvdi/queue/logical_block_size <==
>      512
> 
>      ==> /sys/block/xvdi/queue/physical_block_size <==
>      512
> 
> and then:
> 
>      $ sudo dd if=/dev/xvdi of=/dev/null count=1 status=progress iflag=direct
>      /usr/bin/dd: error reading '/dev/xvdi': Input/output error
>      0+0 records in
>      0+0 records out
>      0 bytes copied, 0.000170858 s, 0.0 kB/s
> 
> and mounting fails like this:
> 
>      [   68.055045] SGI XFS with ACLs, security attributes, realtime, scrub, quota, no debug enabled
>      [   68.057308] I/O error, dev xvdi, sector 0 op 0x0:(READ) flags 0x1000 phys_seg 1 prio class 0
>      [   68.057333] XFS (xvdi): SB validate failed with error -5.
> 
> More details at https://github.com/QubesOS/qubes-issues/issues/9293
> 
> Rusty suspects it's related to
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/block/xen-blkfront.c?id=ba3f67c1163812b5d7ec33705c31edaa30ce6c51,
> so I'm cc-ing people mentioned there too.

I think the call of blkif_set_queue_limits() added by this patch should NOT
precede the setting of info->sector_size and info->physical_sector_size, as
those values are needed by blkif_set_queue_limits().


Juergen





From xen-devel-bounces@lists.xenproject.org Mon Jun 24 14:36:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 14:36:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746709.1153840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkoM-0005lp-MH; Mon, 24 Jun 2024 14:36:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746709.1153840; Mon, 24 Jun 2024 14:36:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLkoM-0005li-Jf; Mon, 24 Jun 2024 14:36:50 +0000
Received: by outflank-mailman (input) for mailman id 746709;
 Mon, 24 Jun 2024 14:36:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vjL/=N2=bounce.vates.tech=bounce-md_30504962.6679847d.v1-df7692446996417a80f84d8b5bcc0063@srs-se1.protection.inumbo.net>)
 id 1sLkoL-0005lU-FO
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 14:36:49 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2ee2603f-3237-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 16:36:47 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W79Wx41Nsz5QkLfv
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 14:36:45 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 df7692446996417a80f84d8b5bcc0063; Mon, 24 Jun 2024 14:36:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ee2603f-3237-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719239805; x=1719500305;
	bh=TZaaytuqYIwSbyX0zplkdr+dJA1t8O/nak39rKgZu4U=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=vNADMY8a8ilLpZP47DqmyR3xwBgyYqE7FqgNcT88J1Nfjb2M30v6/Kh24otdQONfo
	 zz4VBBXn8XNUxbRn99inusHnY99Z4b6Frk/MY8fIN3jTL7mQkaOMBYykYGpiXswU3o
	 Q5WWEkQHTA666BJyHJ2znOB/Wiuty8/8xw/+iZTIDsoV7vzWM5xltEFAcMBW92lLbk
	 VGE7TyJAUBHGajAuhw0bQ3ZPz8BsurSZ6EIhlmLMAfj7fv4h7k2UKofI04fOs2nmWH
	 /3EyPasS9cndMZby8uEWmdqg5Jix5mL5p3UJ4BtAPqZ50teq6F5cBfFvJtbiaNDzlT
	 hz7svfqnELXjQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719239805; x=1719500305; i=teddy.astie@vates.tech;
	bh=TZaaytuqYIwSbyX0zplkdr+dJA1t8O/nak39rKgZu4U=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=t9lZTdGXl5EQWQTJw/Goc1ev8P0zsz9sE8foihPlFsjjtaFgggDxTqnA9k4S4gA88
	 WNYGfastskMEIJOnaMyMscaZgi4QXXnLzluB76wcTLQSSLUk+tvIvP9CQmht//WUvw
	 SXdAsVWnqhaBy+bfh7yB0UkbxwDdjoH/VO69m3Q2fUqh915mzWs0nlgMzNzPrq1j+G
	 xXAuD9gP8VROU7SpZWKHOs852/fw4gUH0LciKQIFS3MK/xhHAPxFBU1+TjGFTdsPrM
	 e/1be1tXpp5QRlK2LbAcryIGTq/SoARpdCBJXoJzY6MrAubpF22yB4ad3TqWHkX1Bf
	 A/uJuuMH+32lA==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?Re:=20[RFC=20PATCH=20v2]=20iommu/xen:=20Add=20Xen=20PV-IOMMU=20driver?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719239803489
Message-Id: <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>
To: Robin Murphy <robin.murphy@arm.com>, xen-devel@lists.xenproject.org, iommu@lists.linux.dev
Cc: Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech> <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
In-Reply-To: <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.df7692446996417a80f84d8b5bcc0063?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240624:md
Date: Mon, 24 Jun 2024 14:36:45 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello Robin,
Thanks for the thorough review.

>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>> index 0af39bbbe3a3..242cefac77c9 100644
>> --- a/drivers/iommu/Kconfig
>> +++ b/drivers/iommu/Kconfig
>> @@ -480,6 +480,15 @@ config VIRTIO_IOMMU
>>          Say Y here if you intend to run this kernel as a guest.
>> +config XEN_IOMMU
>> +    bool "Xen IOMMU driver"
>> +    depends on XEN_DOM0
> 
> Clearly this depends on X86 as well.
> 
Well, I don't intend this driver to be X86-only, even though the current 
Xen RFC doesn't support ARM (yet). Unless there is a contraindication 
for it?

>> +#include <linux/kernel.h>
>> +#include <linux/init.h>
>> +#include <linux/types.h>
>> +#include <linux/iommu.h>
>> +#include <linux/dma-map-ops.h>
> 
> Please drop this; it's a driver, not a DMA ops implementation.
> 
Sure, in addition to some other unneeded headers.

>> +#include <linux/pci.h>
>> +#include <linux/list.h>
>> +#include <linux/string.h>
>> +#include <linux/device/driver.h>
>> +#include <linux/slab.h>
>> +#include <linux/err.h>
>> +#include <linux/printk.h>
>> +#include <linux/stddef.h>
>> +#include <linux/spinlock.h>
>> +#include <linux/minmax.h>
>> +#include <linux/string.h>
>> +#include <asm/iommu.h>
>> +
>> +#include <xen/xen.h>
>> +#include <xen/page.h>
>> +#include <xen/interface/memory.h>
>> +#include <xen/interface/physdev.h>
>> +#include <xen/interface/pv-iommu.h>
>> +#include <asm/xen/hypercall.h>
>> +#include <asm/xen/page.h>
>> +
>> +MODULE_DESCRIPTION("Xen IOMMU driver");
>> +MODULE_AUTHOR("Teddy Astie <teddy.astie@vates.tech>");
>> +MODULE_LICENSE("GPL");
>> +
>> +#define MSI_RANGE_START        (0xfee00000)
>> +#define MSI_RANGE_END          (0xfeefffff)
>> +
>> +#define XEN_IOMMU_PGSIZES      (0x1000)
>> +
>> +struct xen_iommu_domain {
>> +    struct iommu_domain domain;
>> +
>> +    u16 ctx_no; /* Xen PV-IOMMU context number */
>> +};
>> +
>> +static struct iommu_device xen_iommu_device;
>> +
>> +static uint32_t max_nr_pages;
>> +static uint64_t max_iova_addr;
>> +
>> +static spinlock_t lock;
> 
> Not a great name - usually it's good to name a lock after what it 
> protects. Although perhaps it is already, since AFAICS this isn't 
> actually used anywhere anyway.
> 

Yes, that shouldn't be there.

>> +static inline struct xen_iommu_domain *to_xen_iommu_domain(struct iommu_domain *dom)
>> +{
>> +    return container_of(dom, struct xen_iommu_domain, domain);
>> +}
>> +
>> +static inline u64 addr_to_pfn(u64 addr)
>> +{
>> +    return addr >> 12;
>> +}
>> +
>> +static inline u64 pfn_to_addr(u64 pfn)
>> +{
>> +    return pfn << 12;
>> +}
>> +
>> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
>> +{
>> +    switch (cap) {
>> +    case IOMMU_CAP_CACHE_COHERENCY:
>> +        return true;
> 
> Will the PV-IOMMU only ever be exposed on hardware where that really is 
> always true?
> 

On the hypervisor side, the PV-IOMMU interface always implicitly flushes 
the IOMMU hardware on map/unmap operations, so at the end of the 
hypercall the cache should always be coherent, IMO.

>> +
>> +static struct iommu_device *xen_iommu_probe_device(struct device *dev)
>> +{
>> +    if (!dev_is_pci(dev))
>> +        return ERR_PTR(-ENODEV);
>> +
>> +    return &xen_iommu_device;
> 
> Even emulated PCI devices have to have an (emulated, presumably) IOMMU?
> 

No, and that's indeed one thing to check.

>> +}
>> +
>> +static void xen_iommu_probe_finalize(struct device *dev)
>> +{
>> +    set_dma_ops(dev, NULL);
>> +    iommu_setup_dma_ops(dev, 0, max_iova_addr);
> 

That got changed in 6.10; good to know that this code is not required 
anymore.

> 
>> +}
>> +
>> +static int xen_iommu_map_pages(struct iommu_domain *domain, unsigned 
>> long iova,
>> +                   phys_addr_t paddr, size_t pgsize, size_t pgcount,
>> +                   int prot, gfp_t gfp, size_t *mapped)
>> +{
>> +    size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
> 
> You only advertise the one page size, so you'll always get that back, 
> and this seems a bit redundant.
> 

Yes

>> +    struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
>> +    struct pv_iommu_op op = {
>> +        .subop_id = IOMMUOP_map_pages,
>> +        .flags = 0,
>> +        .ctx_no = dom->ctx_no
>> +    };
>> +    /* NOTE: paddr is actually bound to pfn, not gfn */
>> +    uint64_t pfn = addr_to_pfn(paddr);
>> +    uint64_t dfn = addr_to_pfn(iova);
>> +    int ret = 0;
>> +
>> +    //pr_info("Mapping to %lx %zu %zu paddr %x\n", iova, pgsize, pgcount, paddr);
> 
> Please try to clean up debugging leftovers before posting the patch (but 
> also note that there are already tracepoints and debug messages which 
> can be enabled in the core code to give visibility of most of this.)
> 

Yes

>> +
>> +    if (prot & IOMMU_READ)
>> +        op.flags |= IOMMU_OP_readable;
>> +
>> +    if (prot & IOMMU_WRITE)
>> +        op.flags |= IOMMU_OP_writeable;
>> +
>> +    while (xen_pg_count) {
> 
> Unless you're super-concerned about performance already, you don't 
> really need to worry about looping here - you can happily return short 
> as long as you've mapped *something*, and the core code will call you 
> back again with the remainder. But it also doesn't complicate things 
> *too* much as it is, so feel free to leave it in if you want to.
> 

Ok

>> +        size_t to_map = min(xen_pg_count, max_nr_pages);
>> +        uint64_t gfn = pfn_to_gfn(pfn);
>> +
>> +        //pr_info("Mapping %lx-%lx at %lx-%lx\n", gfn, gfn + to_map - 1, dfn, dfn + to_map - 1);
>> +
>> +        op.map_pages.gfn = gfn;
>> +        op.map_pages.dfn = dfn;
>> +
>> +        op.map_pages.nr_pages = to_map;
>> +
>> +        ret = HYPERVISOR_iommu_op(&op);
>> +
>> +        //pr_info("map_pages.mapped = %u\n", op.map_pages.mapped);
>> +
>> +        if (mapped)
>> +            *mapped += XEN_PAGE_SIZE * op.map_pages.mapped;
>> +
>> +        if (ret)
>> +            break;
>> +
>> +        xen_pg_count -= to_map;
>> +
>> +        pfn += to_map;
>> +        dfn += to_map;
>> +    }
>> +
>> +    return ret;
>> +}
>> +
>> +static size_t xen_iommu_unmap_pages(struct iommu_domain *domain, 
>> unsigned long iova,
>> +                    size_t pgsize, size_t pgcount,
>> +                    struct iommu_iotlb_gather *iotlb_gather)
>> +{
>> +    size_t xen_pg_count = (pgsize / XEN_PAGE_SIZE) * pgcount;
>> +    struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
>> +    struct pv_iommu_op op = {
>> +        .subop_id = IOMMUOP_unmap_pages,
>> +        .ctx_no = dom->ctx_no,
>> +        .flags = 0,
>> +    };
>> +    uint64_t dfn = addr_to_pfn(iova);
>> +    int ret = 0;
>> +
>> +    if (WARN(!dom->ctx_no, "Tried to unmap page to default context"))
>> +        return -EINVAL;
> 
> This would go hilariously wrong... the return value here is bytes 
> successfully unmapped, a total failure should return 0.

This check is not useful as the core code is never going to call it on 
this domain.

> 
>> +    while (xen_pg_count) {
>> +        size_t to_unmap = min(xen_pg_count, max_nr_pages);
>> +
>> +        //pr_info("Unmapping %lx-%lx\n", dfn, dfn + to_unmap - 1);
>> +
>> +        op.unmap_pages.dfn = dfn;
>> +        op.unmap_pages.nr_pages = to_unmap;
>> +
>> +        ret = HYPERVISOR_iommu_op(&op);
>> +
>> +        if (ret)
>> +            pr_warn("Unmap failure (%lx-%lx)\n", dfn, dfn + to_unmap - 1);
> 
> But then how would it ever happen anyway? Unmap is a domain op, so a
> domain which doesn't allow unmapping shouldn't offer it in the first
> place...

Unmap failing should be exceptional, but it is possible, e.g. with 
transparent superpages (as the Xen IOMMU drivers use). The Xen drivers 
fold suitably contiguous mappings into superpage entries to optimize 
memory usage and the IOTLB. However, if you unmap in the middle of a 
region covered by a superpage entry, that entry is no longer a valid 
superpage entry, and you need to allocate and fill the lower levels, 
which can fail when memory is lacking.

> In this case I'd argue that you really *do* want to return short, in the 
> hope of propagating the error back up and letting the caller know the 
> address space is now messed up before things start blowing up even more 
> if they keep going and subsequently try to map new pages into 
> not-actually-unmapped VAs.

While mapping on top of another mapping is OK for us (it just overrides 
the previous mapping), I definitely agree that having the address space 
messed up is not good.

> 
>> +
>> +        xen_pg_count -= to_unmap;
>> +
>> +        dfn += to_unmap;
>> +    }
>> +
>> +    return pgcount * pgsize;
>> +}
>> +
>> +int xen_iommu_attach_dev(struct iommu_domain *domain, struct device 
>> *dev)
>> +{
>> +    struct pci_dev *pdev;
>> +    struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
>> +    struct pv_iommu_op op = {
>> +        .subop_id = IOMMUOP_reattach_device,
>> +        .flags = 0,
>> +        .ctx_no = dom->ctx_no,
>> +    };
>> +
>> +    pdev = to_pci_dev(dev);
>> +
>> +    op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
>> +    op.reattach_device.dev.bus = pdev->bus->number;
>> +    op.reattach_device.dev.devfn = pdev->devfn;
>> +
>> +    return HYPERVISOR_iommu_op(&op);
>> +}
>> +
>> +static void xen_iommu_free(struct iommu_domain *domain)
>> +{
>> +    int ret;
>> +    struct xen_iommu_domain *dom = to_xen_iommu_domain(domain);
>> +
>> +    if (dom->ctx_no != 0) {
> 
> Much like unmap above, this not being true would imply that someone's 
> managed to go round the back of the core code to get the .free op from a 
> validly-allocated domain and then pass something other than that domain 
> to it. Personally I'd consider that a level of brokenness that's not 
> even worth trying to consider at all, but if you want to go as far as 
> determining that you *have* clearly been given something you couldn't 
> have allocated, then trying to kfree() it probably isn't wise either.
> 

Yes

>> +
>> +static int default_domain_attach_dev(struct iommu_domain *domain,
>> +                     struct device *dev)
>> +{
>> +    int ret;
>> +    struct pci_dev *pdev;
>> +    struct pv_iommu_op op = {
>> +        .subop_id = IOMMUOP_reattach_device,
>> +        .flags = 0,
>> +        .ctx_no = 0 /* reattach device back to default context */
>> +    };
>> +
>> +    pdev = to_pci_dev(dev);
>> +
>> +    op.reattach_device.dev.seg = pci_domain_nr(pdev->bus);
>> +    op.reattach_device.dev.bus = pdev->bus->number;
>> +    op.reattach_device.dev.devfn = pdev->devfn;
>> +
>> +    ret = HYPERVISOR_iommu_op(&op);
>> +
>> +    if (ret)
>> +        pr_warn("Unable to release device %p\n", &op.reattach_device.dev);
>> +
>> +    return ret;
>> +}
>> +
>> +static struct iommu_domain default_domain = {
>> +    .ops = &(const struct iommu_domain_ops){
>> +        .attach_dev = default_domain_attach_dev
>> +    }
>> +};
> 
> Looks like you could make it a static xen_iommu_domain and just use the 
> normal attach callback? Either way please name it something less 
> confusing like xen_iommu_identity_domain - "default" is far too 
> overloaded round here already...
> 

Yes, although in the future this domain could be either identity or 
blocking/paging depending on some upper-level configuration. Should we 
have both identity and blocking domains, and only set the relevant one 
in iommu_ops, or keep this naming?

>> +static struct iommu_ops xen_iommu_ops = {
>> +    .identity_domain = &default_domain,
>> +    .release_domain = &default_domain,
>> +    .capable = xen_iommu_capable,
>> +    .domain_alloc_paging = xen_iommu_domain_alloc_paging,
>> +    .probe_device = xen_iommu_probe_device,
>> +    .probe_finalize = xen_iommu_probe_finalize,
>> +    .device_group = pci_device_group,
>> +    .get_resv_regions = xen_iommu_get_resv_regions,
>> +    .pgsize_bitmap = XEN_IOMMU_PGSIZES,
>> +    .default_domain_ops = &(const struct iommu_domain_ops) {
>> +        .map_pages = xen_iommu_map_pages,
>> +        .unmap_pages = xen_iommu_unmap_pages,
>> +        .attach_dev = xen_iommu_attach_dev,
>> +        .iova_to_phys = xen_iommu_iova_to_phys,
>> +        .free = xen_iommu_free,
>> +    },
>> +};
>> +
>> +int __init xen_iommu_init(void)
>> +{
>> +    int ret;
>> +    struct pv_iommu_op op = {
>> +        .subop_id = IOMMUOP_query_capabilities
>> +    };
>> +
>> +    if (!xen_domain())
>> +        return -ENODEV;
>> +
>> +    /* Check if iommu_op is supported */
>> +    if (HYPERVISOR_iommu_op(&op) == -ENOSYS)
>> +        return -ENODEV; /* No Xen IOMMU hardware */
>> +
>> +    pr_info("Initialising Xen IOMMU driver\n");
>> +    pr_info("max_nr_pages=%d\n", op.cap.max_nr_pages);
>> +    pr_info("max_ctx_no=%d\n", op.cap.max_ctx_no);
>> +    pr_info("max_iova_addr=%llx\n", op.cap.max_iova_addr);
>> +
>> +    if (op.cap.max_ctx_no == 0) {
>> +        pr_err("Unable to use IOMMU PV driver (no context available)\n");
>> +        return -ENOTSUPP; /* Unable to use IOMMU PV ? */
>> +    }
>> +
>> +    if (xen_domain_type == XEN_PV_DOMAIN)
>> +        /* TODO: In PV domains, due to the existing pfn-gfn mapping we need to
>> +         * consider that under certain circumstances, we have:
>> +         *   pfn_to_gfn(x + 1) != pfn_to_gfn(x) + 1
>> +         *
>> +         * In these cases, we would want to split the subop into several calls
>> +         * (only doing the grouped operation when the mapping is actually contiguous).
>> +         * Only the map operation would be affected, as unmap actually uses dfn, which
>> +         * doesn't have this kind of mapping.
>> +         *
>> +         * Force single-page operations to work around this issue for now.
>> +         */
>> +        max_nr_pages = 1;
>> +    else
>> +        /* With HVM domains, pfn_to_gfn is identity; there is no issue regarding this. */
>> +        max_nr_pages = op.cap.max_nr_pages;
>> +
>> +    max_iova_addr = op.cap.max_iova_addr;
>> +
>> +    spin_lock_init(&lock);
>> +
>> +    ret = iommu_device_sysfs_add(&xen_iommu_device, NULL, NULL, "xen-iommu");
>> +    if (ret) {
>> +        pr_err("Unable to add Xen IOMMU sysfs\n");
>> +        return ret;
>> +    }
>> +
>> +    ret = iommu_device_register(&xen_iommu_device, &xen_iommu_ops, NULL);
>> +    if (ret) {
>> +        pr_err("Unable to register Xen IOMMU device %d\n", ret);
>> +        iommu_device_sysfs_remove(&xen_iommu_device);
>> +        return ret;
>> +    }
>> +
>> +    /* swiotlb is redundant when IOMMU is active. */
>> +    x86_swiotlb_enable = false;
> 
> That's not always true, but either way if this is at 
> module_init/device_initcall time then it's too late to make any 
> difference anyway.
> 

Ok

>> +
>> +    return 0;
>> +}
>> +
>> +void __exit xen_iommu_fini(void)
>> +{
>> +    pr_info("Unregistering Xen IOMMU driver\n");
>> +
>> +    iommu_device_unregister(&xen_iommu_device);
>> +    iommu_device_sysfs_remove(&xen_iommu_device);
>> +}
> 
> This is dead code since the Kconfig is only "bool". Either allow it to 
> be an actual module (and make sure that works), or drop the pretence 
> altogether.
> 

Ok, I thought this function was actually a requirement even if it is not 
a module.

> Thanks,
> Robin.

Teddy


Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 14:38:34 2024
Date: Mon, 24 Jun 2024 16:38:26 +0200
From: Christoph Hellwig <hch@lst.de>
To: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>
Cc: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Christoph Hellwig <hch@lst.de>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: Re: Regression in xen-blkfront regarding sector sizes
Message-ID: <20240624143826.GA8973@lst.de>
References: <Znl5FYI9CC37jJLX@mail-itl> <1944dd3f-1ba8-4559-b71a-056b9309ab58@suse.com>
In-Reply-To: <1944dd3f-1ba8-4559-b71a-056b9309ab58@suse.com>

On Mon, Jun 24, 2024 at 04:29:15PM +0200, Jürgen Groß wrote:
>> Rusty suspects it's related to
>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/block/xen-blkfront.c?id=ba3f67c1163812b5d7ec33705c31edaa30ce6c51,
>> so I'm cc-ing people mentioned there too.
>
> I think the call of blkif_set_queue_limits() in this patch should NOT precede
> setting of info->sector_size and info->physical_sector_size, as those are
> needed by blkif_set_queue_limits().

Yes.  Something like the patch below should fix it.  We could also stop
passing sector_size and physical_sector_size to xlvbd_alloc_gendisk to
clean things up a bit more.

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index fd7c0ff2139cee..9f3d68044f8882 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1133,6 +1133,8 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 	if (err)
 		goto out_release_minors;
 
+	info->sector_size = sector_size;
+	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info, &lim);
 	gd = blk_mq_alloc_disk(&info->tag_set, &lim, info);
 	if (IS_ERR(gd)) {
@@ -1159,8 +1161,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 
 	info->rq = gd->queue;
 	info->gd = gd;
-	info->sector_size = sector_size;
-	info->physical_sector_size = physical_sector_size;
 
 	xlvbd_flush(info);
 


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:08:52 2024
Message-ID: <9708b1a2-6970-4ebc-927b-aae565b3ff57@suse.com>
Date: Mon, 24 Jun 2024 17:08:33 +0200
Subject: Re: [XEN PATCH v2 02/13] x86/cpuid: use fallthrough pseudo keyword
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <58f1ff7e94fd2bd5290a555e44d9de0d2f515eda.1719218291.git.federico.serafini@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <58f1ff7e94fd2bd5290a555e44d9de0d2f515eda.1719218291.git.federico.serafini@bugseng.com>

On 24.06.2024 11:04, Federico Serafini wrote:
> The current comment making explicit the fallthrough intention does
> not follow the agreed syntax: replace it with the pseudo keyword.
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:09:52 2024
Message-ID: <0dcc3162-dd67-4fbf-9bb3-8e0d66e96b8b@suse.com>
Date: Mon, 24 Jun 2024 17:09:42 +0200
Subject: Re: [XEN PATCH v2 03/13] x86/domctl: address a violation of MISRA C
 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <d46b484c99f858d7bfd10c6956a88ba46ac60815.1719218291.git.federico.serafini@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <d46b484c99f858d7bfd10c6956a88ba46ac60815.1719218291.git.federico.serafini@bugseng.com>

On 24.06.2024 11:04, Federico Serafini wrote:
> Add missing break statement to address a violation of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:16:52 2024
Message-ID: <1ea5bebd-23ee-4d2c-a7c8-bc6ba99851c5@suse.com>
Date: Mon, 24 Jun 2024 17:16:38 +0200
Subject: Re: [XEN PATCH v2 04/13] x86/vpmu: address violations of MISRA C Rule
 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <c45b27a08a1608de85e4bbae80763f8429d40ad5.1719218291.git.federico.serafini@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <c45b27a08a1608de85e4bbae80763f8429d40ad5.1719218291.git.federico.serafini@bugseng.com>

On 24.06.2024 11:04, Federico Serafini wrote:
> --- a/xen/arch/x86/cpu/vpmu_intel.c
> +++ b/xen/arch/x86/cpu/vpmu_intel.c
> @@ -713,6 +713,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>              break;
>          default:
>              rdmsrl(msr, *msr_content);
> +            break;
>          }
>      }
>      else if ( msr == MSR_IA32_MISC_ENABLE )

Further up, in core2_vpmu_do_wrmsr(), there's a pretty long default
block with no terminating break. Is there a reason you don't put one
there as well?

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:19:50 2024
Message-ID: <84882119-b469-424b-b261-608c0c43b3f7@suse.com>
Date: Mon, 24 Jun 2024 17:19:37 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Add break or pseudo keyword fallthrough to address violations of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Technically the change fulfills its purpose, yet:

> @@ -1748,6 +1749,7 @@ static void io_check_error(const struct cpu_user_regs *regs)
>      {
>      case 'd': /* 'dom0' */
>          nmi_hwdom_report(_XEN_NMIREASON_io_error);
> +        fallthrough;
>      case 'i': /* 'ignore' */
>          break;
>      default:  /* 'fatal' */
> @@ -1768,6 +1770,7 @@ static void unknown_nmi_error(const struct cpu_user_regs *regs,
>      {
>      case 'd': /* 'dom0' */
>          nmi_hwdom_report(_XEN_NMIREASON_unknown);
> +        fallthrough;
>      case 'i': /* 'ignore' */
>          break;
>      default:  /* 'fatal' */

Falling through isn't really useful here, since the target clause is empty.
In such a case I think it would be preferable to avoid the pseudo-keyword
and use the shorter "break".
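For illustration, the two options can be sketched roughly like this. This is a
minimal stand-alone sketch of the variant Jan suggests; `handle()` and
`report_nmi()` are hypothetical stand-ins for the Xen functions, not the
actual code:

```c
#include <assert.h>

/* Hypothetical stand-in for nmi_hwdom_report(); counts invocations. */
static int reports;
static void report_nmi(void) { reports++; }

/*
 * The 'd' clause gets its own break rather than "fallthrough;" into the
 * empty 'i' clause; both forms satisfy MISRA C Rule 16.3, but this one
 * is shorter and doesn't suggest a deliberate shared code path.
 */
static int handle(char action)
{
    switch ( action )
    {
    case 'd': /* 'dom0' */
        report_nmi();
        break;
    case 'i': /* 'ignore' */
        break;
    default:  /* 'fatal' */
        return -1;
    }
    return 0;
}
```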

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:20:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746749.1153900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlUi-0008Lb-1J; Mon, 24 Jun 2024 15:20:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746749.1153900; Mon, 24 Jun 2024 15:20:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlUh-0008LU-V0; Mon, 24 Jun 2024 15:20:35 +0000
Received: by outflank-mailman (input) for mailman id 746749;
 Mon, 24 Jun 2024 15:20:34 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3xpo=N2=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sLlUg-0008Ad-F7
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:20:34 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4c670c6d-323d-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 17:20:33 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id AAC0B68D05; Mon, 24 Jun 2024 17:20:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c670c6d-323d-11ef-b4bb-af5377834399
Date: Mon, 24 Jun 2024 17:20:28 +0200
From: Christoph Hellwig <hch@lst.de>
To: Niklas Cassel <cassel@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>,
	kernel test robot <oliver.sang@intel.com>, oe-lkp@lists.linux.dev,
	lkp@intel.com, Jens Axboe <axboe@kernel.dk>,
	Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, ying.huang@intel.com,
	feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [axboe-block:for-next] [block]  bd4a633b6f:
 fsmark.files_per_sec -64.5% regression
Message-ID: <20240624152028.GA11961@lst.de>
References: <202406241546.6bbd44a7-oliver.sang@intel.com> <20240624083537.GA19941@lst.de> <Znl4lXRmK2ukDB7r@ryzen.lan>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Znl4lXRmK2ukDB7r@ryzen.lan>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 24, 2024 at 03:45:57PM +0200, Niklas Cassel wrote:
> Seems to be ATA SSD:
> https://download.01.org/0day-ci/archive/20240624/202406241546.6bbd44a7-oliver.sang@intel.com/job.yaml
> 
> ssd_partitions: "/dev/disk/by-id/ata-INTEL_SSDSC2BG012T4_BTHC428201ZX1P2OGN-part1"
> 
> Most likely btrfs does something different depending on the nonrot flag
> being set or not. (And like you are suggesting, most likely the value of
> the nonrot flag is somehow different after commit bd4a633b6f)

Yes, btrfs does.  That's why I'm curious about the before and after,
as I can't see any way they would be set differently.  Right now
I can only test with virtual AHCI devices, which claim to be rotational,
though.



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:21:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:21:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746756.1153910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlVZ-0000SF-9c; Mon, 24 Jun 2024 15:21:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746756.1153910; Mon, 24 Jun 2024 15:21:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlVZ-0000S8-6k; Mon, 24 Jun 2024 15:21:29 +0000
Received: by outflank-mailman (input) for mailman id 746756;
 Mon, 24 Jun 2024 15:21:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLlVX-0008Ad-Lz
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:21:27 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c065a50-323d-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 17:21:25 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ebeefb9a6eso49789121fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:21:25 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c7e64a1d9esm8927755a91.56.2024.06.24.08.21.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:21:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c065a50-323d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719242485; x=1719847285; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=aS+B1yhp0qCZfGxtApIkEhc3D2jaW0lt80uqdGVw9Tc=;
        b=EXslp+G/zuz1orkun/+CnC9Zvs1+vBFSap0S75eYjLdMqS1OXt+W2hP7wZQRWn4Hk9
         bxGWZTJX/S9UOdCVngUIIxcHEXm+5oChLEN3pqyaKJiP5T5blityYIBW+F9qoF7ctzVm
         ouWHAdclZ2gG5eEO7VrI/yHeJ05cg+hbLkxRr8rpo8izSh8hSrs5BYlfvfPxGuI7coII
         DqKbfMlDZm3koAdzda0woEJ1Kh4Iqg4mLxAaxZgN4KkcLJ7EkQ443oFBl3kdZwi4yfTe
         FtBYFijAvAnJu8js0yWo4zDJvVe2wEKMXzb5Nq9JPuq0KUgIQGdS8g04XbSFD0FUNekS
         Q+bg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719242485; x=1719847285;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aS+B1yhp0qCZfGxtApIkEhc3D2jaW0lt80uqdGVw9Tc=;
        b=IcJCbJH2D0d4nU8yflsnxFdmlqa4Q9UITnN/ca3VgAN81UckRnEGKHh1SAd6gg7PUH
         gvf/ZH/QdEgtlY7NvOW6pzk/I+IwQ4MLmkkj01GrGDI+GClec16L1N3+saBxXhQdOF98
         9XRn7RbhF+PPwM+9t69QBSxksMm+5D4257WjEWqQZ8JE7OnH72RjvRYoL2KdcK+lQw8G
         Ndiq+owF+jYc28uTFE6OVfkWBZ8TRVsp5nJjX8UFJSVA89pPjL8KSiMNp2ELuc0wvVp2
         9SQfkRqKqeUQoIdeelR+TnZURH0MRTSoxQaaprEFwsHWeMZ3CNOlYBn6WTu0ibysNmcs
         UlUQ==
X-Forwarded-Encrypted: i=1; AJvYcCWUDgQUEV3IytuWwSKPz8g2FY6SIWbNak01lRF/5cTquFUgMs352Wo6+CS2EykzMcs9qbNcgF7Ljig+y31GbBPwg36uOynTryTGeES/SIg=
X-Gm-Message-State: AOJu0YzIAakmrYNqb9yi7Y1dkMks45bnmzEPcg4CbG8n7p4JPDB9p5io
	s9cNdKHbfa3xAxhoY4GiQCuFuAKQUHhhhIHCUx/QlkcylznTOm9X6iT52dWbUA==
X-Google-Smtp-Source: AGHT+IHB7tvUHgtTAirTKM/qbCIJk/ExEHgH9bdTl/i8lpzYtaQ5+mkGXF+NCZBk9EjYeAUzboIK6w==
X-Received: by 2002:a2e:9cd4:0:b0:2ec:520d:f1dd with SMTP id 38308e7fff4ca-2ec593be843mr31018741fa.3.1719242485437;
        Mon, 24 Jun 2024 08:21:25 -0700 (PDT)
Message-ID: <e11df888-7bce-42b6-a87f-e14096da9f4b@suse.com>
Date: Mon, 24 Jun 2024 17:21:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 06/13] x86/mce: address violations of MISRA C Rule
 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <c1015fb8f39d3a44732ccdebb8ba11392dbe4aa8.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <c1015fb8f39d3a44732ccdebb8ba11392dbe4aa8.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Add missing break statements to address violations of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:32:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:32:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746767.1153921 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlfy-0002cp-8A; Mon, 24 Jun 2024 15:32:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746767.1153921; Mon, 24 Jun 2024 15:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlfy-0002ci-5T; Mon, 24 Jun 2024 15:32:14 +0000
Received: by outflank-mailman (input) for mailman id 746767;
 Mon, 24 Jun 2024 15:32:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLlfx-0002cc-2g
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:32:13 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ecebb559-323e-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 17:32:11 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2ec58040f39so20493151fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:32:11 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9ee4482afsm63472295ad.88.2024.06.24.08.32.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:32:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ecebb559-323e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719243131; x=1719847931; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=s25MqQDIQHTV2/jQnH8Za5Ma6ncTSBJrtWWaZb14nYU=;
        b=ZDnI6e9WbekmlwfZ17yYoc3WK+rcOHR0fqgVNSyVlPmsMc8Mo5I2nG+EdAFWPBe4QX
         Z8UzvHiwvbOtW2Sbr/tje2aGUVXYwz6RLuUGC2iRScc9xoA/iGMxNlR+MVg2FIrTQD4H
         MXHM2SfMCTOtZWQH5i7Hi3R/Vaz9762PxnYtj3TCsJohTeEzuNoQ6Mvox334SLqEw0ed
         9hPr4giDj/K+Q/zK31XtvKmmlGhKfUVUBMXf/mT51N7prMk7clkkjdZi+156uItAkecJ
         X18uTKy20CSI9s5g5IiX3hyQVinJCpQStZeqJUzZIsl0nKwe3Uw3wde0Z5eUEPHfQjSx
         VQYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719243131; x=1719847931;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=s25MqQDIQHTV2/jQnH8Za5Ma6ncTSBJrtWWaZb14nYU=;
        b=pbMjgc0VF80Gn9DJQvyzLxb+ErTIxyClZONae3Cyzbb0rqt7Xs/QCWAO4QY1pU3Zal
         MSRAieTXgAvpsvZQZRiRlym79w6zTFWSC+hi7W0cnGSo4vOPaQ0ZOX6KWRm0A7ycIDzn
         AozJcIBj2QPVZR2C/zCekozHLlDpk0l70WSKm5PalnrCQ9UIIj0LbbcERG/thSz0tp6+
         meJObRZ0G3/v2h83D/fB2+6PNpuEl3gnSqRPdbmFY3lv6uGSR1dfOzZT1HjFb9G6pMza
         9/GkvVp18e0L2GzRqaf5cJnPPm8q7vH9Yiel56JOR+Rfn/owg5kCjQZbOsBrmiOX23eC
         V7eA==
X-Forwarded-Encrypted: i=1; AJvYcCVXjBAtnwM+NvVWuNY6DwGdqI+zRI8RcGI8mj+8R2nTiGCSm1MoW78czb7DILehJQabnjI/DUVLX+oeC0e5zTZvyee6FlpiXevXBp1AZF0=
X-Gm-Message-State: AOJu0YyX/UQy9C9e8DTxfP+oiASJUYNjS+xZ+JjZHTczurJIxErUJ5T6
	c9AU8zu3U2kMdOXYDq246GP0qh8lgoxmnZiZOWbZVulNG+eLYJ2/I/pKvgDrUw==
X-Google-Smtp-Source: AGHT+IGA5iS4PyHG1qyZR3AI4VjvptUpVbYRgqj/l3RBV9gbyZWTPDX1wCBSQzVh7L9xIqPOb+39Tw==
X-Received: by 2002:a2e:8513:0:b0:2ec:3ca1:e53e with SMTP id 38308e7fff4ca-2ec595876damr31518051fa.51.1719243131118;
        Mon, 24 Jun 2024 08:32:11 -0700 (PDT)
Message-ID: <087eb879-b3f6-4d1a-a52e-1e27337620c9@suse.com>
Date: Mon, 24 Jun 2024 17:32:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 07/13] x86/hvm: address violations of MISRA C Rule
 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <a20efca7042ea8f351516ea521edccd89b475929.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <a20efca7042ea8f351516ea521edccd89b475929.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -339,7 +339,7 @@ static int hvmemul_do_io(
>      }
>      case X86EMUL_UNIMPLEMENTED:
>          ASSERT_UNREACHABLE();
> -        /* Fall-through */
> +        fallthrough;
>      default:
>          BUG();
>      }

This comment, or very similar ones, is replaced elsewhere in this patch,
and I'm sure we have more of them. Hence an alternative would be to deviate
those variations of what we already deviate. I recall there was a mail from
Julien asking to avoid extending the set unless some forms are used
pretty frequently. Sadly the description says nothing to help judge
between the alternatives.
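For context, the pseudo-keyword being introduced is typically a thin macro
over the compiler's fallthrough attribute, roughly along these lines. This is
a sketch of a common definition, not necessarily Xen's exact one, and
`classify()` is a made-up example showing its effect:

```c
#include <assert.h>

/* Common shape of the fallthrough pseudo-keyword (sketch, not Xen's exact
 * definition): an attributed null statement on compilers that support it,
 * and a plain null statement otherwise. */
#if defined(__GNUC__) && __GNUC__ >= 7
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while ( 0 )
#endif

/* Made-up example: the macro documents (and, with -Wimplicit-fallthrough,
 * silences the warning for) the deliberate fall-through from case 0. */
static int classify(int x)
{
    int r = 0;

    switch ( x )
    {
    case 0:
        r += 1;
        fallthrough;
    case 1:
        r += 2;
        break;
    default:
        r = -1;
        break;
    }

    return r;
}
```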

> @@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>  
>      default:
>          ASSERT_UNREACHABLE();
> +        break;
>      }
>  
>      if ( hvmemul_ctxt->ctxt.retire.singlestep )
> @@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>          /* fallthrough */

What about this? It doesn't match anything I see in deviations.rst.

>      default:
>          hvm_emulate_writeback(&ctxt);
> +        break;
>      }
>  
>      return rc;
> @@ -2799,10 +2801,11 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>          memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data,
>                 hvio->mmio_insn_bytes);
>      }
> -    /* Fall-through */
> +    fallthrough;
>      default:
>          ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
>          rc = hvm_emulate_one(&ctx, VIO_no_completion);
> +        break;
>      }

While not as much of a problem for the comment, I view a statement like
this as mis-indented.

> @@ -5283,6 +5287,8 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
>           * %cs and %tr are unconditionally present.  SVM ignores these present
>           * bits and will happily run without them set.
>           */
> +        fallthrough;
> +
>      case x86_seg_cs:
>          reg->p = 1;
>          break;

Why the extra blank line here, ...

> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
>      case 8:
>          eax = regs->rax;
>          /* Fallthrough to permission check. */
> +        fallthrough;
>      case 4:
>      case 2:
>          if ( currd->arch.monitor.guest_request_userspace_enabled &&

... when e.g. here there's none? I'm afraid this goes back to an
unfinished discussion as to whether to have blank lines between blocks
in fall-through situations. My view is that not having them in these
cases helps make the falling through visually noticeable.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:35:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:35:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746782.1153967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlim-0003Zc-0Z; Mon, 24 Jun 2024 15:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746782.1153967; Mon, 24 Jun 2024 15:35:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlil-0003ZV-UG; Mon, 24 Jun 2024 15:35:07 +0000
Received: by outflank-mailman (input) for mailman id 746782;
 Mon, 24 Jun 2024 15:35:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLlik-0003ZP-Ee
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:35:06 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 53dc2a04-323f-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 17:35:04 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ec50d4e47bso29839271fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:35:04 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3d43f6sm63965685ad.182.2024.06.24.08.35.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:35:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53dc2a04-323f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719243304; x=1719848104; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=CS78ovLEa/SQa7ohVLpg6mTzt6qv6WI1dDrhccmUU64=;
        b=DDhguBWMUwOBICcj2cJKHUCkekZsrA5aGxEzUpi0UNykuv5jnRmknrMMILkt0v/5mq
         jxk5ZwRhO82rhpwQbjV0oYusAousbelKiiub0pFlnuWXAGICCrgH7QcaGxh6iNYrHTmz
         F36EkhJQ5WtKj00xccwyxDvU7Gc64RYuGZCJkfglWkjC4BC068AHevbT/Mrn357XlBRu
         /sJsa3OSSYcj1a++CCizmWHRaCY9eEmGzjbCpZxkEc3Ca8aKDiMq8AwehFuJJixVvgza
         yANcVqA99oysATkV1Lu+7d0ox4ibBx+w8+dqwgyXg3x456Ek/kHI3m1yji5H3ujkXVu0
         +a7w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719243304; x=1719848104;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=CS78ovLEa/SQa7ohVLpg6mTzt6qv6WI1dDrhccmUU64=;
        b=AarZIKyKhM7XvGXgXQrGmUcrc1fjvHAzAdWLUmCFeD7W3vpvR0s5jy/Bfe2lQnmDqd
         MgLVc1165uQM/v3+/dmW1MasQI8/J2VUvW7ZBdHc1utArchkMv3M4Bryuzv9hqTBdMqd
         dxj/Qej9eerZv6blKtNShJb4mKOBd8Ij9cYJME9c9Z+6Czs9kCvRcHsVQRvEmDdbZySc
         0OHec0XBiP+o207joL7ieC8nPGMEv2bflMXqW5nrdZbvQhBHuXhfd9cvBobD9AhIgm0o
         yVaQFbZ5OV4V1IoQD/jEqhgnsswQS3rVBUwb4XOG6Re5k6QIKgDnwHoVlf8Xpwrqo90q
         8cgA==
X-Forwarded-Encrypted: i=1; AJvYcCW5OW+DjBfeWA8+qRMOWOJ13RUcsOg7+hPnagC9copzlsEVU7tz9RcwzdtSJLv4jzAWP0C6eJnybyNPXbamihpHUXBq05yPGODmheIPWOU=
X-Gm-Message-State: AOJu0YwKBKV/WZJRG1P7eN47UN5MZaBTT/V93X3Zore1i3W1CG0VTWvy
	t6K4HLTHneM/Vpa2aWFa0qXbYqZso979jXahPN026sjcq2zuyUyjF5HnHCYIPQ==
X-Google-Smtp-Source: AGHT+IEzP+3tnQBOhW40jmkZ5vtfr25/aTIjkj3g7+sjZX59uKj8O/D4nKz4PrRMjxZv87l3UKpLfw==
X-Received: by 2002:a2e:b1ca:0:b0:2ec:5258:e890 with SMTP id 38308e7fff4ca-2ec5b2e90b3mr27955921fa.44.1719243303878;
        Mon, 24 Jun 2024 08:35:03 -0700 (PDT)
Message-ID: <52ae2abd-2517-4d73-97c9-7c156c981edc@suse.com>
Date: Mon, 24 Jun 2024 17:34:57 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 08/13] x86/vpt: address a violation of MISRA C Rule
 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <e26de71af72b51abd425f2e75f30d794e0ba46a2.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <e26de71af72b51abd425f2e75f30d794e0ba46a2.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> --- a/xen/arch/x86/hvm/vpt.c
> +++ b/xen/arch/x86/hvm/vpt.c
> @@ -121,6 +121,8 @@ static int pt_irq_masked(struct periodic_time *pt)
>      }
>  
>      /* Fallthrough to check if the interrupt is masked on the IO APIC. */
> +    fallthrough;
> +
>      case PTSRC_ioapic:
>      {
>          int mask = vioapic_get_mask(v->domain, gsi);

I'm afraid this is one more case where the pseudo-keyword wants indenting
by one more level, to match other statements relative to the case labels.
Sure, this will look a little odd right after the preceding closing brace,
but what do you do?
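
For illustration (an editorial sketch, not part of the original message;
function and values are made up): the requested placement indents
`fallthrough;` one level deeper than the case labels, like any other
statement in the switch-clause, even directly after a closing brace:

```c
#include <assert.h>

/* Stand-in for Xen's pseudo-keyword macro, so this sketch compiles on its
 * own; the real macro expands to a compiler fall-through attribute. */
#define fallthrough ((void)0)

static int pt_irq_masked_sketch(int src, int lapic_masked)
{
    switch ( src )
    {
    case 1: /* stand-in for PTSRC_lapic */
    {
        if ( lapic_masked )
            return 1;
    }

        /* Fallthrough to check if the interrupt is masked on the IO APIC. */
        fallthrough;

    case 2: /* stand-in for PTSRC_ioapic */
        return 2;

    default:
        return 0;
    }
}
```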

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:39:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:39:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746787.1153977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlnN-0004gR-Gk; Mon, 24 Jun 2024 15:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746787.1153977; Mon, 24 Jun 2024 15:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlnN-0004gK-E4; Mon, 24 Jun 2024 15:39:53 +0000
Received: by outflank-mailman (input) for mailman id 746787;
 Mon, 24 Jun 2024 15:39:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLlnM-0004gC-2E
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:39:52 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id febfd656-323f-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 17:39:51 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2ec002caf3eso66106821fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:39:51 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-70690b37fabsm672594b3a.1.2024.06.24.08.39.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:39:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: febfd656-323f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719243590; x=1719848390; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=tj4zI2GZ67oH9YMXRkbGvk6m3r1bXu3FxAJ+dIdPskU=;
        b=BSDaR5ayQZG1t0aB/V2hs50Ga5yrcs4zK+5usLYwJWDVn+EqvAY3IDtXO3hrWb4oSe
         0mj+gZgB/qQ1oOy58buFCc6zAWqto22Vo91KlkHchu4CQaSLviXjfRamDi+3qtcmjRWj
         GyS6h9Z6KTGlBt/aNsgamQ8dSg8dF4Gjfc4UokURQ0jZTO1qIgi3zbp+fFvfmiUmBEWp
         INs3DNfVOuT8WjkZz7cjDWIGxFfEK6fOwycjQX3fzNlx1yP8agpDBy2aflNBGN/5OhWT
         9eZXtccimV1bqlG2khx43DyS24PzuOmdu8Zgj0kLnrj/2cP4cEB3JapMIaXEsGLVkC3R
         HJlg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719243590; x=1719848390;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tj4zI2GZ67oH9YMXRkbGvk6m3r1bXu3FxAJ+dIdPskU=;
        b=Cot2OlaI/ldT4C7LvYD686HvKLykuvRiwvStaBPDalfp29t4ql3v8gHYrWytN6MDfR
         kOjLZlChmf+Mw9upoXqJoa5L5yXG5vVjH+XmlpgMx7J+BxzTf/WFUHGI7DzXx4rlaNix
         ap3ZXbqTpkmUnmHqfFCxIejKIhWRds2X0nDISSft8Q0nCo9EeoVuoXZMmvPsZK6yPW3W
         CJFxNTRTxcV6z79aBr/wzz5hPOTNNWQv0qBcOwKOszZluAjHVVMiuz6HxTXnpYYzCRVi
         9X6Py0wfU7+RDQQX6p/tOtmavu2sHkRY+LAv2bWwAJ5nGazO4anDpic588gs2qC+uqIs
         nN6g==
X-Forwarded-Encrypted: i=1; AJvYcCUFRjNgBJN3olEwfCGcwpk/P83dQihVqixuVfByfbSCKVyFhe3YifdUrQZXiXkkjYX5pgjvRHMC3wiyRiZsUXPAZN/MulIhq9T3sU3EErs=
X-Gm-Message-State: AOJu0Yw2nQL8LuxSMaZXWrq/h0NG663xuToBMcWDk0jy6EegdmSjuG3B
	zWggLICXeElNw6mKa2IaxO4uxK5fqPoSsQSs8lM2hietyjI2+0dfzf/Ln0FjFw==
X-Google-Smtp-Source: AGHT+IExCypMDn22LVbzFwPHYV4dyN8AtYVDl2zIEN953g28HqxGuFwLyYzOZ43VDuXnAEVou56fUA==
X-Received: by 2002:a05:651c:1055:b0:2ec:efb:8b60 with SMTP id 38308e7fff4ca-2ec5b2d480amr37451551fa.13.1719243590587;
        Mon, 24 Jun 2024 08:39:50 -0700 (PDT)
Message-ID: <3ba59d21-bf67-4585-8e94-5a01cff18ec3@suse.com>
Date: Mon, 24 Jun 2024 17:39:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 09/13] x86/mm: add defensive return
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <f3adf3d5dac5c4c108207883472f3ebcfa11c685.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <f3adf3d5dac5c4c108207883472f3ebcfa11c685.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Add a defensive return statement at the end of an unreachable
> default case. Besides improving safety,

It is particularly with this in mind that ...

> this meets the requirements
> to deviate a violation of MISRA C Rule 16.3: "An unconditional `break'
> statement shall terminate every switch-clause".
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
>  xen/arch/x86/mm.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 648d6dd475..2e19ced15e 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -916,6 +916,7 @@ get_page_from_l1e(
>                  return 0;
>              default:
>                  ASSERT_UNREACHABLE();
> +                return 0;
>              }

... returning "success" in such a case can't be quite right. As indicated
elsewhere, really we want -EINTERNAL for such cases, just that errno.h
doesn't know anything like this, and I'm also unaware of a somewhat
similar identifier that we might "clone" from elsewhere. Hence we'll want
to settle on some existing error code which we then use here and in
similar situations. EINVAL is the primary example of what I would prefer
to _not_ see used for this.
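
As an editorial sketch of the pattern under discussion (not the actual
get_page_from_l1e() code; the function and the placeholder error code are
made up): a defensive default in a supposedly exhaustive switch should
report failure rather than fall out with "success". Which errno value to
standardise on is left open in the thread; -ENODATA below is only a
placeholder, not a recommendation:

```c
#include <assert.h>
#include <errno.h>

/* 0 means success here, matching the convention being discussed.  The
 * default case is believed unreachable; returning 0 from it would silently
 * mask an internal error, so a negative errno keeps the failure visible. */
static int classify_sketch(int type)
{
    switch ( type )
    {
    case 0:
        return 0; /* valid, handled */
    case 1:
        return 0; /* valid, handled */
    default:
        /* ASSERT_UNREACHABLE() would sit here in a debug build. */
        return -ENODATA; /* placeholder error code, not a recommendation */
    }
}
```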

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:41:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:41:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746796.1153986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlos-0006Lp-Ti; Mon, 24 Jun 2024 15:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746796.1153986; Mon, 24 Jun 2024 15:41:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlos-0006Li-RB; Mon, 24 Jun 2024 15:41:26 +0000
Received: by outflank-mailman (input) for mailman id 746796;
 Mon, 24 Jun 2024 15:41:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLloq-0006LQ-QM
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:41:24 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 358c479c-3240-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 17:41:23 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ec4eefbaf1so34383901fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:41:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7065107c36fsm6597144b3a.13.2024.06.24.08.41.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:41:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 358c479c-3240-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719243682; x=1719848482; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Xw1LNI8blMi7R/6hECR+gj03Cb52hUQ1BvhrfOWWhzI=;
        b=dl2ie+8P6sl30M/EpVce/j17HgsPdxSV21eTes4n4bogSkbOMj+PSH7wy+1gsO42Pl
         l1BXXTmNAlFyxYRnVO1CsOWAiir+TOlxD9Bac+7Zg8JwjPe2KDB5t1sAfr/S2/1vZPbL
         SwkCKCn41xGZs7hG/Wm58d62qUzx/VIv6qfHgT6oBWRlUT0SRXBjvyKO16TcsxectMAx
         m7T5aeSUb7wWUfL+a640DB4breZAhK/aKG62QmXx0RCVYD6U3mhct/UCvesMok68FdKe
         pjKfMGU0ErY4LbKKjaVGAyhTLg2pylSDeIpKrQZbFG290wG4XSDyAoPhn2xnsLbH+3b+
         k67A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719243682; x=1719848482;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Xw1LNI8blMi7R/6hECR+gj03Cb52hUQ1BvhrfOWWhzI=;
        b=wfh1k5BkZFAm7BA/VoH/zUDaCzLLeBb5VjgvZ9cnU/32RFdBJ7lqDleMClNNSDUEwl
         9X5/NcUsHJW9Oj46hODTF54oRTOE0pQeLJW0iE91vorEzhBqroi5rQ7CemdNKR7uSfn5
         LaZuo506G880GfWBn5sIF7yXWO6AZQOZO6GP5JoPRrGD4gAs/2/eElwLxnwFP40dHTpb
         EoePYJOh8kbeZFHMraVZHBhef/6pYmEQEj1mVV8istZ/boM2+00C8QiMMAwnay+vd8jk
         M/Lnt/WKSWHxR5a8eVsGfdVBTp09y9BM7oWvM1JMT5zYb/CL7xN8WNNmORJxbfbIXVtT
         tK1A==
X-Forwarded-Encrypted: i=1; AJvYcCWMu++RxZyCcibGZ+yJG2e0eXr9+cnoGG5iGrcNqG223TTHSs/Ja7ot8D/fDXXK7R5Vnq9gNNJnBK8kNRM/93QgqQ5fB7/j4uw0pw6LcgA=
X-Gm-Message-State: AOJu0Yw6X2RF63w2B0lB0Oy4f2PdHITPKOuJ5oK9nUDWzIwKR2nqP8Jq
	DdPdY1Xcfo3RDV3kKOStGMOIGTLxaWzU29b0jVOFPxVsHPtv9NallKtCLhwF+w==
X-Google-Smtp-Source: AGHT+IHhrEFELy+4S9eUOBPgTol/yqdc9aruqomqVpKQQwAijdSEmH16mI60pg6vkAQKxWcAPtbBVQ==
X-Received: by 2002:a2e:8e95:0:b0:2ec:4d9a:104e with SMTP id 38308e7fff4ca-2ec5b3e1fa9mr28664071fa.43.1719243682528;
        Mon, 24 Jun 2024 08:41:22 -0700 (PDT)
Message-ID: <20775cad-3327-4cda-bb27-bdb3f5d00574@suse.com>
Date: Mon, 24 Jun 2024 17:41:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 10/13] x86/mpparse: address a violation of MISRA C
 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <e421b80a9b016d286b9a5b8329b0bcb33584f06e.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <e421b80a9b016d286b9a5b8329b0bcb33584f06e.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Add a missing break statement to address a violation of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:44:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746803.1153997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlrL-0006wk-85; Mon, 24 Jun 2024 15:43:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746803.1153997; Mon, 24 Jun 2024 15:43:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlrL-0006wd-5T; Mon, 24 Jun 2024 15:43:59 +0000
Received: by outflank-mailman (input) for mailman id 746803;
 Mon, 24 Jun 2024 15:43:58 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLlrK-0006wX-72
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:43:58 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91802fa0-3240-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 17:43:57 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ebe6495aedso47160771fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:43:57 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c7e5af9c39sm8905171a91.46.2024.06.24.08.43.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:43:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91802fa0-3240-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719243837; x=1719848637; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=bgtUIcVOmWAp4BEWPRDX6M5PJcfYk1EUDCwGSqtvNPc=;
        b=cCI3p4TnrSS9CnsOo3lS0/aunAuzMWhwW5wLKSnEyFxuoEO3D7O4Yxige0v4dy8WWp
         M2X/vGjFGwB1BhYKKmoN8hGfkqDYcd2bwjRvBju+9mStihZA9f0Y3tXYeXMmL+dDdPud
         MUEBvK4EqueCu/ECZZwnk0E2rE3JeOkceb6K3AOy+DZ7i/rBZofskp5R6Ci1c/VnKVHQ
         dA3rSTh231yx7SiN3wG1d1EKVO2Rz4jWoCN2TXCFms2l5EMoqTeGr313nsHhwjMVHLUN
         dvyqn3LyimuYWcP4Z82WN3Vds4DbcAj+tDZy20+nVbif7yCOH5kW10Sw1kIzN3P8E/QI
         89Dg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719243837; x=1719848637;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bgtUIcVOmWAp4BEWPRDX6M5PJcfYk1EUDCwGSqtvNPc=;
        b=kAbq1Mgki5UselD3TwJBT5MMR07z+OnyrB8tVnliykd93wyymJVEJxIz2Mg0nDMAfE
         9hyN3rpVM2/7dm/JQzL4LKiP8VuNK32g0QlIetiNKWsl0rldfUGvlxohfM7aqJFanwA9
         /hEBDMHcnG8cGAPC1KcjvUqS0PTl8N407lpVkxccgqRvwgl0Ra5UjCSqeZt+9HghNcIE
         rJweTRKtPb90bZ13Z/81Gj589zET2nwyB9NXwQTq2s4J6q90GKBMyaI/w5OHrJdJl4kz
         1u+7TUTlmyDN4EEZJl95I1nYjyTccgX4CGLTaF88qKoANAa5zCtxjscgNC6p+Xas04Gm
         qNjQ==
X-Forwarded-Encrypted: i=1; AJvYcCVT0hPXaCLZivC9GZgEwGiLVgsoV+oXw/bAfYVKDUR2TSad8H4Yr217i7Hb1aaoF4GPbYhYGRAqBBvdfTwADW2VqKQBKg5Dy+pYrrUlRX4=
X-Gm-Message-State: AOJu0YzMvZR8rJx25Rnhu1zjZsWLddl98StTs+e314Vy6dH3XkD1/aBN
	O/JF24BEeSHQvY0Z0P7K7r20YyNLiKRLCw4dFPgZrB/wG45QtS7oMHeuXSEHPw==
X-Google-Smtp-Source: AGHT+IFFiM1tR3ggP2KAu5YvcGyFojTakyTzVUhXLOgeFi9I/17wie9NuKybi+dzotKNpz28sUmJKg==
X-Received: by 2002:a2e:87ca:0:b0:2eb:552a:f6d0 with SMTP id 38308e7fff4ca-2ec5b30a29bmr32085841fa.52.1719243836596;
        Mon, 24 Jun 2024 08:43:56 -0700 (PDT)
Message-ID: <97889584-4bf3-4cd1-9fd5-92b673b39d77@suse.com>
Date: Mon, 24 Jun 2024 17:43:49 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 11/13] x86/pmtimer: address a violation of MISRA C
 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <fea205262d4f7b337b804a013d0f1c411a721b46.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <fea205262d4f7b337b804a013d0f1c411a721b46.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Add missing break statement to address a violation of MISRA C Rule 16.3
> ("An unconditional `break' statement shall terminate every
> switch-clause").
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

I'm curious though on what basis you decided to split this off of
patch 7, ...

> ---
>  xen/arch/x86/hvm/pmtimer.c | 1 +
>  1 file changed, 1 insertion(+)

... touching various other files under xen/arch/x86/hvm/.

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:45:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:45:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746808.1154007 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlsf-0007Sc-Hw; Mon, 24 Jun 2024 15:45:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746808.1154007; Mon, 24 Jun 2024 15:45:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlsf-0007SV-Ea; Mon, 24 Jun 2024 15:45:21 +0000
Received: by outflank-mailman (input) for mailman id 746808;
 Mon, 24 Jun 2024 15:45:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLlse-0007SP-Hc
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:45:20 +0000
Received: from mail-wr1-x42e.google.com (mail-wr1-x42e.google.com
 [2a00:1450:4864:20::42e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c1a742c8-3240-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 17:45:18 +0200 (CEST)
Received: by mail-wr1-x42e.google.com with SMTP id
 ffacd0b85a97d-3632a6437d7so2631151f8f.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:45:18 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7065107c26esm6575439b3a.11.2024.06.24.08.45.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:45:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1a742c8-3240-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719243917; x=1719848717; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=u5KH9++zOKfbJeRxOUmJh9ZlSsZgBffU8EJXNLM3rI0=;
        b=Ai0wBG47FULQp0BPSZgeUENdA7XB0EtVUCW1px4TRviCTQBlKm6rePdL2JhQQ8xDy5
         NB07SK8hS/uxZtRirFN2Ot+wugHG2cAoITqBLBCxThaxxz4/sWeQ6+T2YEHd6aPJrOmJ
         cksTLmDyAMgEIx82PuTu3rMDR7UqS1SvCdUULbn9u2vx6m9qXggc/bMqzztryIovX90n
         xnGxR1js0RhIwZ0TGUgDpKSA778tBghn4b50HnaJOMv0T8wXNvkdazsIgrJtLH6tq1tu
         u1xUNSAhYAt4fCsOrOuZZxbz64+Y6kJelv40BKwnDTq7xihCZA4tRNhBRQNagOed0y4N
         935w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719243917; x=1719848717;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=u5KH9++zOKfbJeRxOUmJh9ZlSsZgBffU8EJXNLM3rI0=;
        b=LPIoR3aI7nJchdx1gdju2G/vVfxaZwRYd7ZG7Kek9Q572ZuLJfqv7ni1DOisDztkHm
         3JRziJL4t2TkTs4DZpkCnLEWyUFd0G30we8cfq05BmPM6JNF1EQcBltXj5zSoBGXuS1n
         OiWc5UzQZ0wPR3dZBGju6tzKn6n9h/Ht4iV0w9CZH2Ca6QyxTB7liTXfmRdCF4RWTzrv
         +rdR5hK/w6YuxmDaTV1REvH9yPo1QksNGXUCLyM2yN2xmBkTtGslLzFODgiNNIFAxXNE
         iZDdUFPBQTkdMgjFS4xz0BcuixuwzfBwLZszOfQPnYkJOMuiUchAf85KBtr3I/TT0wHh
         pHrQ==
X-Forwarded-Encrypted: i=1; AJvYcCWXl3wHXjigbmLNoeN/Tcyl3UQS2v9fR6b/4uD5ea7P9pU7I2v66Pez8QTFSN/V4pPEJGmpADXkfQzE3KUOhknd661I1OoptncS/vuddyc=
X-Gm-Message-State: AOJu0YxcHqKQoBFrUs6rNPQ+GrNyoeea+sSXehub+ODpFis1zPZRQUcN
	QP345+Fdy/mea3JrVMSIS4StG0hVScDWz3yUf0F1e4/ONurZx8kENcDyj1MlSg==
X-Google-Smtp-Source: AGHT+IEWBGZbW5Am7nBJEkvtg2ynScjJLopddCX1F0nvC6TQhFvIiNUC8zzUD7gVjJUeS+D9Qg5GcA==
X-Received: by 2002:a5d:474d:0:b0:362:e874:54e8 with SMTP id ffacd0b85a97d-366e332cfe0mr5112780f8f.30.1719243917604;
        Mon, 24 Jun 2024 08:45:17 -0700 (PDT)
Message-ID: <fcab478c-f6a7-46a2-87f8-005d24dfb307@suse.com>
Date: Mon, 24 Jun 2024 17:45:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 12/13] x86/vPIC: address a violation of MISRA C
 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <bf0f2ac1c8e9443b2c4f8ae6f02659927d5f7dea.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <bf0f2ac1c8e9443b2c4f8ae6f02659927d5f7dea.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Add pseudo-keyword fallthrough to meet the requirements to deviate
> a violation of MISRA C Rule 16.3: "An unconditional `break' statement
> shall terminate every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:47:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:47:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746816.1154016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlux-00084F-Sv; Mon, 24 Jun 2024 15:47:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746816.1154016; Mon, 24 Jun 2024 15:47:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLlux-000848-QL; Mon, 24 Jun 2024 15:47:43 +0000
Received: by outflank-mailman (input) for mailman id 746816;
 Mon, 24 Jun 2024 15:47:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLluw-00083G-Nh
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:47:42 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 16b3b7b0-3241-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 17:47:40 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ebec2f11b7so49166511fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:47:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f1187sm64526695ad.56.2024.06.24.08.47.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:47:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16b3b7b0-3241-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719244060; x=1719848860; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=7JRcrAWp5h3pOTBs7DVDBCcz9m6iqDl9lmBU5DVdBDM=;
        b=f9JTnjTR+gIJ0Z+dppSgvna5L4Y7N9FITrtV4glHN0m24g+frQ15MgVGz0yrPnR4gA
         v2fUKgZ0mJOH3n2/39jG4Njw31oL5cAyt889ZdgLM02vFbZJ4P4Tz+gegmv8bYQOQVrq
         UmlxCrIMw+1zkOiPLQfa3YJ4+Hi3EhhMJVT64QGPVNqTYNVttvu4QsTLJJ1pxGL2x9ho
         tvLJ2groAsgW1pOjhYyIieHLOkCOvLRuBE+rGhh6aIO/2t43oiUHMjqMauHP7EAKTd6h
         V4bSKbzFy1WleIwiQCE2k8Focsv3qZ59sJ1x2bxJU0h3kxkIT9YQytVm48ct2LyWsT5/
         A3ZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719244060; x=1719848860;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7JRcrAWp5h3pOTBs7DVDBCcz9m6iqDl9lmBU5DVdBDM=;
        b=uxRMm4mMTrRYq4M7fotTsbhjMUibGf30PeaRFyxmUcqHdjfM+vDL/mGpUBkUo6IZQ7
         o+3jm48sImczRgAMXDDytowJjRdn48l0e9dPIyJ/PWoay09QxYYaOmm23VleQIwwrsXk
         fV9lfArTP1yOP/AyT/VtqYMo4mSGc+rSfyZgdLSPmdJLjA7eLDqJqthJix3JAK6v4zf9
         O55PiVfuzVdliZfy8CWX3C3GPxkvSu63FNrqMG8HwPhZ3pj7Wnm0JOKrI47aQ8zKG9zO
         RNbolGtFRU9MJv8dESGzfIDcqW9cMiSIx62GPNsFSqDu1Yn8haoq7DUbLUELHCJtWQcL
         U1CA==
X-Forwarded-Encrypted: i=1; AJvYcCXJKyvwN8p2N5k29yfzKwTNIaZePY9uPPmbuCQu9/hajfJx99km7/bwe1R8dFZvFTGyos8eLaX+yscepkeLxpYpU67uHlKVRoyTdzvjVtg=
X-Gm-Message-State: AOJu0Yz1e0LORjNdqfiPd4VOVHTiC31y9q31rhcqc+WAVaAiu3SPx8lR
	gFOCn/UfK8DROuySalOUv3Kgc8/B2kybsbtfnYFITkJtvVAUHHLESRt8ANM0jA==
X-Google-Smtp-Source: AGHT+IHERRDc/GJv6QVJ5Koy78QfIpyzuPSlfZxQHStwZXkzMCGDYccvU5DqoRK4AbRBBmgOwY6R1A==
X-Received: by 2002:a2e:91d5:0:b0:2ec:2b25:3c8e with SMTP id 38308e7fff4ca-2ec5b388402mr29281841fa.39.1719244060239;
        Mon, 24 Jun 2024 08:47:40 -0700 (PDT)
Message-ID: <122c0fcd-fdb8-47c0-a6e9-e044d49710c9@suse.com>
Date: Mon, 24 Jun 2024 17:47:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 13/13] x86/vlapic: address a violation of MISRA C
 Rule 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <0aa39166696e46b6bb45a0f7b5ac06bfd9fdda8e.1719218291.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <0aa39166696e46b6bb45a0f7b5ac06bfd9fdda8e.1719218291.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 11:04, Federico Serafini wrote:
> Add missing break statement to address a violation of MISRA C
> Rule 16.3: "An unconditional `break' statement shall terminate every
> switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Jan Beulich <jbeulich@suse.com>




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 15:55:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 15:55:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746821.1154027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLm2l-0001vw-Jq; Mon, 24 Jun 2024 15:55:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746821.1154027; Mon, 24 Jun 2024 15:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLm2l-0001vp-Gu; Mon, 24 Jun 2024 15:55:47 +0000
Received: by outflank-mailman (input) for mailman id 746821;
 Mon, 24 Jun 2024 15:55:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wsRE=N2=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLm2k-0001vj-9x
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 15:55:46 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 36d5bc4d-3242-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 17:55:44 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ec6635aa43so6029721fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 08:55:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3c6134sm64442865ad.143.2024.06.24.08.55.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 08:55:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36d5bc4d-3242-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719244543; x=1719849343; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=xy0R9HMMThtrpJAP2JQ4mwYsE+bQSH0PW5wG/ilTG/k=;
        b=RYjSYU56k4rYev2YRWIy4miLwMS/ZM8Twu0co+Rr1zwfS9Af8noR334Vg65qq54ELi
         ydu0aLsnOp502qJYHuhxbnlnX4OM958RkA169S7alsZELXmQf24WMEIXxhWopnw3VdGO
         Nsaauwc1DAhlGM+zI5MCR3IRTSDM4YZd17eNlaJSci5qnBukIuZAHpL01nSprh+fwtcP
         qrnkHijIbFrs9iRXGVndytx55mMX+O80WWvJQIlwfHBWQ2robH3i8XaBUpEhUxHL6gW0
         UFjcw+HhMsxYOURl8i4EERmb5LRnkoUcUgdcR3pl/NnefwIKE5pu9rjVfwLBo8pAXIts
         ubyA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719244543; x=1719849343;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xy0R9HMMThtrpJAP2JQ4mwYsE+bQSH0PW5wG/ilTG/k=;
        b=HvV3wrmzgF7jfZ8LIFK5SsptIrZZjR1h7FNA1mGaivXlFOcJCtMr80ugoUwrRJx+AO
         aQT4DOIY7u498HR1c7/YdhE7y2CJAgsRpbvqgYR8Jbyjh0i2miJSVa+BVWzm+/dmCbeu
         31zFtikLJVMNG8FVmTBpUpJWM3/HsvyWgMmyuUtfRyn5OMPylYR4SCsH2FQVwgdsenTz
         LFs+c6d3jLhY0pJkoy12me2RD/BNKe2C2fQG9jdwgeKFDtq7QU8rFWUWOiL44xR8PozE
         nC+xgSM4mCMSAshT0lWfUO9BGUE6eAINIm6W7KNbp+vVa5a9KjoX0YVk5UrMUSz8AttY
         BaZg==
X-Forwarded-Encrypted: i=1; AJvYcCWeotbvfWksAwUxUZPf6kJwLMS4dRrKyA0e81sY+1YINhijhRnYtwqV4MVjR/5uunSxhwm4p2sA/Ry/KgsUveWHqAyjLO7TtD02uit41pA=
X-Gm-Message-State: AOJu0YxyyC6mk5hmaJgF5wG9MOKnniL/kICISMQgpLGEO0oTC16OvfN7
	34Z0auiEfOCC+fEhH8GduvMg82fXc9mVjLJCLb7yYZY/fI3tY3LoQwTPUNEjUA==
X-Google-Smtp-Source: AGHT+IHH0xNdhs/VsoD0F9On73Wa3NZNtFLUx49qZ8n9YYqpxcfCpfLjBdHHK6k7hQEtLa8WMA+lrA==
X-Received: by 2002:a2e:8693:0:b0:2ec:4eca:7479 with SMTP id 38308e7fff4ca-2ec579846admr35636921fa.30.1719244543560;
        Mon, 24 Jun 2024 08:55:43 -0700 (PDT)
Message-ID: <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
Date: Mon, 24 Jun 2024 17:55:36 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240621191434.5046-1-tamas@tklengyel.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 21:14, Tamas K Lengyel wrote:
> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
>  	$(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
>  
> +libfuzzer-harness: $(OBJS) cpuid.o
> +	$(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@

What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
tree anywhere.

I'm further surprised you get away here without wrappers.o.

Finally, despite its base name, the lack of an extension suggests to me
that this isn't actually a library. Can you help me bring both aspects
together?

> @@ -67,7 +70,7 @@ distclean: clean
>  
>  .PHONY: clean
>  clean:
> -	rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov
> +	rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov libfuzzer-harness

I'm inclined to suggest it's time to split this line (e.g. keeping all the
wildcard patterns together and moving the rest to a new rm invocation).
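For illustration, one way such a split could look (a sketch only, grouping the wildcard patterns on one invocation; not the actual follow-up patch):

```make
clean:
	rm -f *.a *.o *.gcda *.gcno *.gcov $(DEPS_RM)
	rm -f afl-harness afl-harness-cov libfuzzer-harness
```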

> --- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
> +++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
> @@ -906,14 +906,12 @@ int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
>  
>      if ( size <= DATA_OFFSET )
>      {
> -        printf("Input too small\n");
> -        return 1;
> +        return -1;
>      }
>  
>      if ( size > FUZZ_CORPUS_SIZE )
>      {
> -        printf("Input too large\n");
> -        return 1;
> +        return -1;
>      }
>  
>      memcpy(&input, data_p, size);

This part of the change clearly needs explaining in the description.
It's not even clear to me how far this is related to the purpose
of the patch here (iow it may want to be a separate change, depending
on why the change is needed).
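For context, a plausible motivation (my assumption, not stated in the patch): libFuzzer documents that returning -1 from LLVMFuzzerTestOneInput tells the engine not to add the input to the corpus, whereas AFL-style harnesses treat a nonzero return differently. A minimal sketch of that convention (DATA_OFFSET and FUZZ_CORPUS_SIZE values below are placeholders, not the ones from fuzz-emul.c):

```c
#include <stddef.h>
#include <stdint.h>

enum { DATA_OFFSET = 16, FUZZ_CORPUS_SIZE = 1024 }; /* placeholder values */

/*
 * libFuzzer entry point.  Returning -1 asks libFuzzer to reject the
 * input, i.e. not add it to the corpus; returning 0 accepts it.
 */
int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
{
    if ( size <= DATA_OFFSET || size > FUZZ_CORPUS_SIZE )
        return -1; /* reject out-of-range inputs without logging */

    (void)data_p; /* a real harness would emulate instructions from data_p */
    return 0;
}
```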

Jan


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 16:17:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 16:17:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746832.1154037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLmNf-0006qP-FX; Mon, 24 Jun 2024 16:17:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746832.1154037; Mon, 24 Jun 2024 16:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLmNf-0006qI-Bh; Mon, 24 Jun 2024 16:17:23 +0000
Received: by outflank-mailman (input) for mailman id 746832;
 Mon, 24 Jun 2024 16:17:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SfmV=N2=gmail.com=sherrellbc@srs-se1.protection.inumbo.net>)
 id 1sLmNd-0006qC-6e
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 16:17:21 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3a0259d5-3245-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 18:17:19 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id
 af79cd13be357-797f1287aa3so293900785a.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 09:17:19 -0700 (PDT)
Received: from smtpclient.apple (ool-44c00bfa.dyn.optonline.net.
 [68.192.11.250]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b51ef32f9csm35844516d6.91.2024.06.24.09.17.16
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 24 Jun 2024 09:17:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a0259d5-3245-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719245837; x=1719850637; darn=lists.xenproject.org;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=l6TuVR671VZ9BWKMH/gVFkMAcPcaJpkStk5/9B4NKdA=;
        b=SO3rpgUDcuxcnYx0GttkGMf6m/4Z9g3LY0v7yfwN4W3tA93dXTe0ODQPWIJpggaqta
         9+hItcT+v2hsJMmrSliMu4wl2rV7t+bCDueZZe/CirGH1RX+atf6/J3/Qg74uQVynWka
         wcgHxTPnRjIrkApl+PUcFKJRJTozQexFTC4FkYhQGc3fxKUF2WmTK1rSzBibyfao6RR2
         K616vV9P1Xe5iI2fdfJ1pQx/yB5Mhn4fp5E/ehuYprut7eqjFhUNM/pLJeK/vmAdoyHg
         UI1V2tuN28+BnF4q1ycaGTIt+s7cLKi+taUMRdbTabtG8Td6Abj5Kcs8QIfdRNRFkXp7
         6KoQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719245837; x=1719850637;
        h=to:references:message-id:content-transfer-encoding:cc:date
         :in-reply-to:from:subject:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=l6TuVR671VZ9BWKMH/gVFkMAcPcaJpkStk5/9B4NKdA=;
        b=ZDeUvkfWHAoo8215v54mVoQsC/t/imhv46UXKah96bMxRSCvvcNVu9w/wwd4iTCxeG
         nT8Yl8Wrn2XCTqSefIM1X/6ARuaJNAPPL4/Nvzxw1zi5yS9soSri4qZMrpJRktUT2RFR
         IcpuqbqQTmjSk9zqIo8QTqUKZW+0SJsCMCQapMbZwy7Ri7yZNQldizoI2KSdD5wEY13F
         hwoh0W4CVWatiV3S/0HLJDZ/fJM3BUSdgo54h++ctubSDoPUYgU5FUfK1R4rtzAaDt3D
         FtYEOPOYvZo3EeUZi1Mk69IoAyTaE90tY+Vo9A0bJlGGl9PwI88tKuDK+JnQVYJU+d37
         aUhg==
X-Forwarded-Encrypted: i=1; AJvYcCX6Sh+OoDe3iWgW13D4uHrjfNCxwxSzUN49sCJ7AeBiD3NhjbtlEbK2zCleHeNdha3sYLS077k3/JtohpHfnUgwMz4ai+lsvhuXRY+wIIs=
X-Gm-Message-State: AOJu0YzVkPBjCHG6Z7yxo6uB74U14u48z31LwJcLTzZMX9iNn9bdFnxW
	UH9Ee11WIK8fR+xYPk62ReN6AYZDUzOCU2yvp4k25jPr5omZ+aD8
X-Google-Smtp-Source: AGHT+IHF9Dp0n4wTdOw00QkcvfZxeXZaPfhZluq1Vk9wgrLjwesYk5sTOpSWjOWjNWBwBwC6B7lKrw==
X-Received: by 2002:ad4:4505:0:b0:6b5:a38:bc78 with SMTP id 6a1803df08f44-6b53bfeaf30mr56405116d6.59.1719245837258;
        Mon, 24 Jun 2024 09:17:17 -0700 (PDT)
Content-Type: text/plain;
	charset=us-ascii
Mime-Version: 1.0 (Mac OS X Mail 16.0 \(3731.700.6\))
Subject: Re: E820 memory allocation issue on Threadripper platforms
From: Branden Sherrell <sherrellbc@gmail.com>
In-Reply-To: <23f4b971-a161-47db-85a7-54f50300d039@suse.com>
Date: Mon, 24 Jun 2024 12:17:05 -0400
Cc: Patrick Plenefisch <simonpatp@gmail.com>,
 xen-devel@lists.xenproject.org,
 Juergen Gross <jgross@suse.com>,
 =?utf-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <CBB9871C-177F-4EA1-B33B-ED2756963976@gmail.com>
References: <CAOCpoWdOH=xGxiQSC1c5Ueb1THxAjH4WiZbCZq-QT+d_KAk3SA@mail.gmail.com>
 <1708c3d7-662a-44bc-b9b3-4ab9f8642d7b@suse.com>
 <dcaf9d8d-ad5a-4714-936b-79ed0e587f9d@suse.com>
 <CAOCpoWeowZPuQTeBp9nu8p8CDtE=u++wN_UqRoABZtB57D50Qw@mail.gmail.com>
 <ac742d12-ec91-4215-bb42-82a145924b4f@suse.com>
 <CAOCpoWfQmkhN3hms1xuotSUZzVzR99i9cNGGU2r=yD5PjysMiQ@mail.gmail.com>
 <fa23a590-5869-4e11-8998-1d03742c5919@suse.com> <ZaeoWBV8IEZap2mr@macbook>
 <15dcef46-aaa8-4f71-bd5c-355001dd9188@suse.com> <ZafOGEwms01OFaVJ@macbook>
 <7BAC7BB5-C321-4C34-884A-21CC12F761BB@gmail.com>
 <36d581a0-f144-4756-b345-8b74ccc25c74@suse.com>
 <70B65B5D-C075-4D8E-8D2B-08A1930EE68B@gmail.com>
 <23f4b971-a161-47db-85a7-54f50300d039@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: Apple Mail (2.3731.700.6)

Sorry about that. The thread quickly diverged from the core issue to
helping diagnose GPU issues, so I thought it best to circle back to the
ticket reference to realign.

In any event, if there is any aid I can provide on this front please let
me know. I have worked around this issue for now by rebuilding my x86
dom0 (Debian) kernel from source with link at 2MiB rather than 16MiB
(default). But this is a rather annoying requirement when updating the
dom0 kernel. If there is any information you need from such an offending
platform as this one feel free to let me know and I can test or
otherwise provide relevant information as needed.
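(For anyone wanting to reproduce the workaround: lowering the kernel's link
address on x86 is governed by CONFIG_PHYSICAL_START; the exact symbol and
commands below are my assumption of what was meant, not taken from the
thread.)

```
# In the dom0 kernel source tree: link at 2MiB instead of the
# default 16MiB, then rebuild.
scripts/config --set-val CONFIG_PHYSICAL_START 0x200000
make olddefconfig
make -j"$(nproc)" bzImage
```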

Branden.

> On Jun 24, 2024, at 9:40 AM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 24.06.2024 15:07, Branden Sherrell wrote:
>> What is the reasoning that this fix be applied only to PVH domains?
>> Taking a look at the fix logic it appears to walk the E820 to find a
>> suitable range of memory to load the kernel into (assuming it can be
>> determined that the kernel is also relocatable). Why can this logic
>> not be applied to dom0 kernel load in general?
> 
> Because PV requirements are different, first and foremost because there
> we have pseudo-physical and machine memory maps to deal with. As you can
> see from [1] I've raised the topic on how to deal with PV there, but so
> far there was no reply helping the issue towards resolution.
> 
> Btw - please don't top-post.
> 
> Jan
> 
> [1] https://lists.xen.org/archives/html/xen-devel/2024-06/msg00831.html



From xen-devel-bounces@lists.xenproject.org Mon Jun 24 16:28:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 16:28:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746840.1154047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLmXz-0000p0-Dp; Mon, 24 Jun 2024 16:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746840.1154047; Mon, 24 Jun 2024 16:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLmXz-0000ot-AO; Mon, 24 Jun 2024 16:28:03 +0000
Received: by outflank-mailman (input) for mailman id 746840;
 Mon, 24 Jun 2024 16:28:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hzrU=N2=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLmXx-0000on-Uj
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 16:28:01 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b8065481-3246-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 18:27:59 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 7BA0E60DDB;
 Mon, 24 Jun 2024 16:27:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2B9A8C32789;
 Mon, 24 Jun 2024 16:27:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8065481-3246-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719246478;
	bh=wUVcB6qL608D0QZzMhjaSO2eR9jtEZ7EHuv07yt+Q6g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=uBlfgRF1SAgMrX2dfc1ZnvnMiJP4XGYJOlHCxuf1sX735GHCx700+QKzOWBYqNf0H
	 +GocaFXW8nVXPbzpkwlcHCGQizX+bkq59PpGAcdWww5Y98TdQcjKvcUvssPsBnAQAp
	 YGZf0tIfDK6xfan6XMgLUiUphrbwEpFvH4VzKaVz/l7Lrlokk3bmFx753efaIt3ZWh
	 jmrb4QUlfsoPJnb1dhtfDLPyac02OP+ZZTeapQVW+eajSYqIbc/3mpwMEnu0X0H0lz
	 V9f+Gf54YBF/VpB6838p81lbeX5mWpluAdj2RUsxw1cRnJt3ZLZS+edjb2zMf/sSYI
	 3nKU3J4hZoHAg==
Date: Mon, 24 Jun 2024 09:27:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Anthony PERARD <anthony.perard@vates.tech>
cc: xen-devel@lists.xenproject.org, Anthony PERARD <anthony@xenproject.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [XEN PATCH] MAINTAINERS: Update my email address again
In-Reply-To: <20240624094030.41692-1-anthony.perard@vates.tech>
Message-ID: <alpine.DEB.2.22.394.2406240927390.3870429@ubuntu-linux-20-04-desktop>
References: <20240624094030.41692-1-anthony.perard@vates.tech>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Anthony PERARD wrote:
> Signed-off-by: Anthony PERARD <anthony.perard@vates.tech>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 16:33:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 16:33:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746844.1154056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLmcm-0002wF-VX; Mon, 24 Jun 2024 16:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746844.1154056; Mon, 24 Jun 2024 16:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLmcm-0002w8-Sf; Mon, 24 Jun 2024 16:33:00 +0000
Received: by outflank-mailman (input) for mailman id 746844;
 Mon, 24 Jun 2024 16:32:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=keLG=N2=ziepe.ca=jgg@srs-se1.protection.inumbo.net>)
 id 1sLmcl-0002w1-2y
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 16:32:59 +0000
Received: from mail-qv1-xf29.google.com (mail-qv1-xf29.google.com
 [2607:f8b0:4864:20::f29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6968fc70-3247-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 18:32:57 +0200 (CEST)
Received: by mail-qv1-xf29.google.com with SMTP id
 6a1803df08f44-6b50aeb2f31so22423496d6.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 09:32:57 -0700 (PDT)
Received: from ziepe.ca
 (hlfxns017vw-142-68-80-239.dhcp-dynamic.fibreop.ns.bellaliant.net.
 [142.68.80.239]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b534b6fb20sm22033996d6.58.2024.06.24.09.32.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 09:32:55 -0700 (PDT)
Received: from jgg by wakko with local (Exim 4.95)
 (envelope-from <jgg@ziepe.ca>) id 1sLmcg-006IKk-RH;
 Mon, 24 Jun 2024 13:32:54 -0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6968fc70-3247-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ziepe.ca; s=google; t=1719246776; x=1719851576; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=YpMz2kHegp9vo5OTJS7oY+W3qVaSD/B67evxfsa33FE=;
        b=ST97m5bXeWsyJCz7O0VWTegpBNM/02MZcwQYxafx4ZEgBnFVPiw+MrbIjssZ9yJgTi
         StQwbgZyBdWzLfOQMQ3S3zteIVlNUl1W9mmO6db0dXZz0evdNMN/nuMF0GJL3g+pt/KE
         4O2zNioGnZZ+27OxcwZIzqYXnRSiXo69yz/6Cxq5SmMOETAI/ONzIJJk6KpdntCUtQZ4
         fHyBAoecufb+Wz62uWKh9uRTtY8bdpw3mQut2+jBbTt0BWTfjDuINCrabeWtcn0mflg4
         HhZzA+aOGmA04dw4r7kCF4ykavFvxdAlrL9ynScTTPBn248deeochEmIlOHMBYH1ebJ2
         CV1g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719246776; x=1719851576;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=YpMz2kHegp9vo5OTJS7oY+W3qVaSD/B67evxfsa33FE=;
        b=jUUc6Nv/Tgm4oXsMkp8H65cYVB8Qo1DLuOCszr12dZK4NDtzJHcG9lruU/YP84NC89
         oX+0F8oLuskTUnah1lnlXMpQXM/ZvaA8xrf56eCLdcS3Bbvpdnj1URhjQSQ4RuPqyUEb
         ZvqRdNm2zs9FevZ5eyjSImX0OFoEeOIW0MhnQEttcVP6HXSydyGLD4NW7puzkRoXUbzK
         Do6mZMA0XYopITW2HGvcrGvf02VbxBwX5uMbbG7LTfIKHuulmar4mX7+iscIimtgYLpG
         Wcmxg1P1bPsWnQk+gyOuAtUKC/g7g4dhfujt78+sFxGq6DdGZV1+rWmbjhb24n78r53c
         2yPg==
X-Forwarded-Encrypted: i=1; AJvYcCUdLkxSaUBh8c9RJSHBEXULwC30R6US1T5XtMjCsakewLJnPf8P+XHrWmD3djVzk0kmZR7lM037SI5WBVorvCtg9DaApn42nQ2wDcJTAZs=
X-Gm-Message-State: AOJu0YxhQJFttuUMnLoFnKutIzeYXf4OU+KETFrf+cC49s51JnqNvImt
	SkB73CfECQykhbtd1jPdAnn75tXOEWkeOoEdPxi/zoR3yyquZGljrEeATwFRgt4=
X-Google-Smtp-Source: AGHT+IFVDnqWLATdM12qOV0Xx5fReCHDSylxa7RaE0YpsUi44goG84xt3tUjEqTTeXwKdKL1wee4zg==
X-Received: by 2002:a0c:f585:0:b0:6b5:1c0:381d with SMTP id 6a1803df08f44-6b540aa47d4mr55523686d6.43.1719246775914;
        Mon, 24 Jun 2024 09:32:55 -0700 (PDT)
Date: Mon, 24 Jun 2024 13:32:54 -0300
From: Jason Gunthorpe <jgg@ziepe.ca>
To: Teddy Astie <teddy.astie@vates.tech>
Cc: Robin Murphy <robin.murphy@arm.com>, xen-devel@lists.xenproject.org,
	iommu@lists.linux.dev, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH v2] iommu/xen: Add Xen PV-IOMMU driver
Message-ID: <20240624163254.GT791043@ziepe.ca>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
 <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
 <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>

On Mon, Jun 24, 2024 at 02:36:45PM +0000, Teddy Astie wrote:
> >> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
> >> +{
> >> +    switch (cap) {
> >> +    case IOMMU_CAP_CACHE_COHERENCY:
> >> +        return true;
> >
> > Will the PV-IOMMU only ever be exposed on hardware where that really is
> > always true?
> >
> 
> On the hypervisor side, the PV-IOMMU interface always implicitly flushes
> the IOMMU hardware on map/unmap operations, so at the end of the
> hypercall, the cache should always be coherent IMO.

Cache coherency is a property of the underlying IOMMU HW and reflects
the ability to prevent generating transactions that would bypass the
cache.

On AMD and Intel IOMMU HW this maps to a bit in their PTEs that must
always be set to claim this capability.

No ARM SMMU supports it yet.

If you imagine supporting ARM someday then this can't be a fixed true.
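
A minimal sketch of that point (names and layout are hypothetical, not the
actual Xen driver): report IOMMU_CAP_CACHE_COHERENCY from a probed hardware
property instead of a hard-coded true, so a backend on non-coherent hardware
can answer honestly:

```c
#include <stdbool.h>

/* Illustrative capability enum; the real one is in include/linux/iommu.h */
enum iommu_cap { IOMMU_CAP_CACHE_COHERENCY, IOMMU_CAP_NOEXEC };

/* Hypothetical per-device hardware description, e.g. filled in from a
 * capability query at probe time. */
struct hw_info {
    bool coherent_walks;   /* HW can force DMA through the cache */
};

/* Sketch of a capable() callback: the answer is a fixed property of the
 * underlying IOMMU hardware, not of the map/unmap flushing behaviour. */
static bool xen_iommu_capable_sketch(const struct hw_info *hw,
                                     enum iommu_cap cap)
{
    switch (cap) {
    case IOMMU_CAP_CACHE_COHERENCY:
        return hw->coherent_walks;   /* not always true */
    default:
        return false;
    }
}
```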

> Unmap failing should be exceptional, but is possible e.g. with
> transparent superpages (as the Xen IOMMU drivers use). The Xen drivers
> fold appropriate contiguous mappings into superpage entries to optimize
> memory usage and the IOTLB. However, if you unmap in the middle of a
> region covered by a superpage entry, that entry is no longer a valid
> superpage entry, and you need to allocate and fill the lower levels,
> which can fail when memory is lacking.

This doesn't seem necessary. From an IOMMU perspective the contract is
that whatever gets mapped must be wholly unmapped and the unmap cannot
fail.

Failing to unmap causes big problems for iommufd and vfio as it is
about to free the memory underlying the maps. Nothing good will
happen after this.

An implementation should rely on the core code to provide the
contiguous ranges and not attempt to combine mappings across two
map_pages() calls. If it does this it can refuse to unmap a slice of a
superpage, and thus it never has to allocate memory during unmap.
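
The contract above can be sketched as follows (hypothetical names, one
superpage entry standing in for a real page table): a superpage is only
created within a single map_pages() call, so unmap either covers the whole
entry or is refused outright, and never needs to allocate:

```c
#include <stddef.h>
#include <stdint.h>

#define SUPERPAGE_SIZE (2UL << 20)   /* 2 MiB, illustrative */

/* Hypothetical record of one superpage mapping. */
struct sp_entry {
    uint64_t iova;   /* start of the superpage mapping */
    uint64_t size;   /* SUPERPAGE_SIZE for a superpage entry */
};

/* Sketch of an unmap_pages() that never allocates: a request covering
 * the whole superpage succeeds; a request for a slice of it is refused
 * (0 bytes unmapped) rather than split by allocating lower-level
 * tables. Since the core code only unmaps what it mapped, the refusal
 * path is never exercised in practice. */
static size_t unmap_pages_sketch(const struct sp_entry *e,
                                 uint64_t iova, uint64_t size)
{
    if (iova == e->iova && size >= e->size)
        return (size_t)e->size;   /* whole entry: always succeeds */
    return 0;                     /* partial slice: refuse, no alloc */
}
```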

> While mapping on top of another mapping is ok for us (it's just going to
> override the previous mapping), I definitely agree that having the
> address space messed up is not good.

Technically map_pages should fail if it is already populated, but
nothing should ever do that.
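
As a toy illustration of that rule (a flat array standing in for a real
page table; names are hypothetical): map_pages() refuses an already
populated range with -EEXIST instead of silently overriding it:

```c
#include <errno.h>
#include <stdint.h>

#define NPAGES 16

/* Sketch only: table[i] holds the mapped physical address of page i,
 * or 0 if the slot is unmapped. */
static int map_pages_sketch(uint64_t *table, unsigned int first,
                            unsigned int count, uint64_t paddr)
{
    for (unsigned int i = 0; i < count; i++)
        if (table[first + i] != 0)
            return -EEXIST;       /* already populated: refuse */
    for (unsigned int i = 0; i < count; i++)
        table[first + i] = paddr + (uint64_t)i * 4096;
    return 0;
}
```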

Jason


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 16:57:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 16:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746853.1154067 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLn01-0006rp-Q7; Mon, 24 Jun 2024 16:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746853.1154067; Mon, 24 Jun 2024 16:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLn01-0006ri-N0; Mon, 24 Jun 2024 16:57:01 +0000
Received: by outflank-mailman (input) for mailman id 746853;
 Mon, 24 Jun 2024 16:57:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sLn00-0006rc-4U
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 16:57:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLmzz-0006HC-P0; Mon, 24 Jun 2024 16:56:59 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.0.211])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLmzz-0000JS-GE; Mon, 24 Jun 2024 16:56:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=EBg4Gh4deymQ7P0TwRKlDP4Es3l4BgPZ6VwfVzcxJJM=; b=082K1kxM8ridkWGss06gim1/wT
	Og7DQRdSZDzmzrSt6nPuCVTuDhzp1jm1qAbPIet6TJTRsfqwycJxZxC97ViI2UcuAqjZdc99BtN14
	7atIuUkATCEiRfNBk3S0ExAmHsY5I5pqbuHx+NQvYknxsKQmV/6ZxUiIsu0yXkOnEtlU=;
Message-ID: <f17200a6-efb5-4775-964c-21e687890f76@xen.org>
Date: Mon, 24 Jun 2024 17:56:57 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] xen/arm: static-shmem: request host address to
 be specified for 1:1 domains
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, oleksii.kurochko@gmail.com
References: <20240621092205.30602-1-michal.orzel@amd.com>
 <86594fa0-a12d-42fc-ba6c-fe235becf768@xen.org>
 <d69b74ce-13b6-46f1-b2bd-051c913be43a@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <d69b74ce-13b6-46f1-b2bd-051c913be43a@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 24/06/2024 11:43, Michal Orzel wrote:
> Hi Julien,
> 
> On 24/06/2024 12:22, Julien Grall wrote:
>>
>>
>> Hi Michal,
>>
>> On 21/06/2024 10:22, Michal Orzel wrote:
>>> As a follow up to commit cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem
>>> should be direct-mapped for direct-mapped domains") add a check to
>>> request that both host and guest physical address must be supplied for
>>> direct mapped domains. Otherwise return an error to prevent unwanted
>>> behavior.
>>>
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>
>> I would argue that it should have a Fixes tag:
>>
>> Fixes: 988f1c7e1f40 ("xen/arm: static-shmem: fix "gbase/pbase used
>> uninitialized" build failure")
>>
>> Mainly because while you fixed the build, it was missing the check in
>> the "else" part.
>>
>> I am happy to update it on commit if you are ok with the change.
> Yes, I'm fine with it.

Committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 17:08:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 17:08:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746860.1154081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnB5-0000om-Ni; Mon, 24 Jun 2024 17:08:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746860.1154081; Mon, 24 Jun 2024 17:08:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnB5-0000of-L8; Mon, 24 Jun 2024 17:08:27 +0000
Received: by outflank-mailman (input) for mailman id 746860;
 Mon, 24 Jun 2024 17:08:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+LHY=N2=kernel.org=kbusch@srs-se1.protection.inumbo.net>)
 id 1sLnB4-0000oZ-1J
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 17:08:26 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d5f1c04-324c-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 19:08:24 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 5EF2B60E16;
 Mon, 24 Jun 2024 17:08:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 682D2C2BBFC;
 Mon, 24 Jun 2024 17:08:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d5f1c04-324c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719248903;
	bh=c5IMwi2AkSm61J7nCdd2EBXV6o3vo+7AANeGa2JN45A=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=d8C/s3NgDhBmDzX7X6Z3a8NgLG1XRNWm3AVon41eJewAqY4XNmD50+AdQpjukysaA
	 0+c9peZEcs8AURzleAxgVmV3zaxrAP0+WMQxnkR3oNoVdRegLrw7QTozLmVentUZPE
	 P0BMSL1dkoExQoFoaBFFudfcpEmdYCHeKJ0irUoLxUFwgxCKTvfHLKHNHL2RN4iLoh
	 ki4NFa+liGK52kKToNiMhO53pIJVSGFwHndW7Uw+1UTH4eb3kjlEU/UFxvZz4PM4y3
	 BbdO/dVn76ts+09ZmrGp9NorxeNYJ2cNYzLcTNejNMPSMmam7/bl5ZZAKhBMTTCFIl
	 W6hH1Idyq3vZQ==
Date: Mon, 24 Jun 2024 11:08:16 -0600
From: Keith Busch <kbusch@kernel.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>, Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 14/26] block: move the nonrot flag to queue_limits
Message-ID: <ZnmoANp0TgpxWuF-@kbusch-mbp.dhcp.thefacebook.com>
References: <20240617060532.127975-1-hch@lst.de>
 <20240617060532.127975-15-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240617060532.127975-15-hch@lst.de>

On Mon, Jun 17, 2024 at 08:04:41AM +0200, Christoph Hellwig wrote:
> -#define blk_queue_nonrot(q)	test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
> +#define blk_queue_nonrot(q)	((q)->limits.features & BLK_FEAT_ROTATIONAL)

This is inverted. Should be:

 #define blk_queue_nonrot(q)	(!((q)->limits.features & BLK_FEAT_ROTATIONAL))
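
The inversion is easy to see side by side (BLK_FEAT_ROTATIONAL's bit value
here is illustrative, and the struct layouts are cut down): a
non-rotational device leaves the rotational feature bit clear, so "nonrot"
must negate the feature test:

```c
/* Illustrative bit value; the real BLK_FEAT_* constants live in the
 * block layer headers of the series under discussion. */
#define BLK_FEAT_ROTATIONAL (1u << 0)

struct queue_limits { unsigned int features; };
struct request_queue { struct queue_limits limits; };

/* The macro as posted (inverted) next to the corrected form. */
#define blk_queue_nonrot_buggy(q) ((q)->limits.features & BLK_FEAT_ROTATIONAL)
#define blk_queue_nonrot_fixed(q) (!((q)->limits.features & BLK_FEAT_ROTATIONAL))
```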


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 17:24:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 17:24:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746866.1154092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnQe-00042a-Tr; Mon, 24 Jun 2024 17:24:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746866.1154092; Mon, 24 Jun 2024 17:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnQe-00042T-PV; Mon, 24 Jun 2024 17:24:32 +0000
Received: by outflank-mailman (input) for mailman id 746866;
 Mon, 24 Jun 2024 17:24:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3xpo=N2=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sLnQd-00042N-4o
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 17:24:31 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9c97f3e3-324e-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 19:24:29 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id C072868CFE; Mon, 24 Jun 2024 19:24:25 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c97f3e3-324e-11ef-b4bb-af5377834399
Date: Mon, 24 Jun 2024 19:24:25 +0200
From: Christoph Hellwig <hch@lst.de>
To: Keith Busch <kbusch@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Jens Axboe <axboe@kernel.dk>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Richard Weinberger <richard@nod.at>,
	Philipp Reisner <philipp.reisner@linbit.com>,
	Lars Ellenberg <lars.ellenberg@linbit.com>,
	Christoph =?iso-8859-1?Q?B=F6hmwalder?= <christoph.boehmwalder@linbit.com>,
	Josef Bacik <josef@toxicpanda.com>, Ming Lei <ming.lei@redhat.com>,
	"Michael S. Tsirkin" <mst@redhat.com>,
	Jason Wang <jasowang@redhat.com>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Alasdair Kergon <agk@redhat.com>, Mike Snitzer <snitzer@kernel.org>,
	Mikulas Patocka <mpatocka@redhat.com>, Song Liu <song@kernel.org>,
	Yu Kuai <yukuai3@huawei.com>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	"Martin K. Petersen" <martin.petersen@oracle.com>,
	linux-m68k@lists.linux-m68k.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, ceph-devel@vger.kernel.org,
	virtualization@lists.linux.dev, xen-devel@lists.xenproject.org,
	linux-bcache@vger.kernel.org, dm-devel@lists.linux.dev,
	linux-raid@vger.kernel.org, linux-mmc@vger.kernel.org,
	linux-mtd@lists.infradead.org, nvdimm@lists.linux.dev,
	linux-nvme@lists.infradead.org, linux-s390@vger.kernel.org,
	linux-scsi@vger.kernel.org, linux-block@vger.kernel.org,
	Damien Le Moal <dlemoal@kernel.org>
Subject: Re: [PATCH 14/26] block: move the nonrot flag to queue_limits
Message-ID: <20240624172425.GB22044@lst.de>
References: <20240617060532.127975-1-hch@lst.de> <20240617060532.127975-15-hch@lst.de> <ZnmoANp0TgpxWuF-@kbusch-mbp.dhcp.thefacebook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ZnmoANp0TgpxWuF-@kbusch-mbp.dhcp.thefacebook.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, Jun 24, 2024 at 11:08:16AM -0600, Keith Busch wrote:
> On Mon, Jun 17, 2024 at 08:04:41AM +0200, Christoph Hellwig wrote:
> > -#define blk_queue_nonrot(q)	test_bit(QUEUE_FLAG_NONROT, &(q)->queue_flags)
> > +#define blk_queue_nonrot(q)	((q)->limits.features & BLK_FEAT_ROTATIONAL)
> 
> This is inverted. Should be:
> 
>  #define blk_queue_nonrot(q)	(!((q)->limits.features & BLK_FEAT_ROTATIONAL))

Ah yes.  And the sysfs attribute doesn't go through the macro and
won't show the effect.  I'll send a fixup.




From xen-devel-bounces@lists.xenproject.org Mon Jun 24 17:30:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 17:30:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746873.1154100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnWl-0005qv-FI; Mon, 24 Jun 2024 17:30:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746873.1154100; Mon, 24 Jun 2024 17:30:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnWl-0005qo-CZ; Mon, 24 Jun 2024 17:30:51 +0000
Received: by outflank-mailman (input) for mailman id 746873;
 Mon, 24 Jun 2024 17:30:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLnWk-0005qe-4z; Mon, 24 Jun 2024 17:30:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLnWj-0006rt-QW; Mon, 24 Jun 2024 17:30:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLnWj-0001CJ-Cs; Mon, 24 Jun 2024 17:30:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLnWj-0000im-CR; Mon, 24 Jun 2024 17:30:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EVTlfPNqSeTqofET49dK0uu4XIpX+LALIYF//vcYzOc=; b=hja29OGVXqDLq/03L07JyjYOrb
	GsvShvK6EvN7h+/5OdX2WQcYyWgF3kUamCp60qAmuToftZJwXZ/jH8uR7AzkqA/Ad009WwhzgZ8w8
	bR9YyIpG5GxG7VBnp28FixxfJrCzXern2SdrtqE77W3y5o4FzijG15mr8pfJlhSdM1mQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186466-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186466: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f2661062f16b2de5d7b6a5c42a9a5c96326b8454
X-Osstest-Versions-That:
    linux=7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Jun 2024 17:30:49 +0000

flight 186466 linux-linus real [real]
flight 186468 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186466/
http://logs.test-lab.xenproject.org/osstest/logs/186468/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 186462

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot            fail pass in 186468-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186468 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186468 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                f2661062f16b2de5d7b6a5c42a9a5c96326b8454
baseline version:
 linux                7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748

Last test of basis   186462  2024-06-23 16:10:07 Z    1 days
Testing same since   186464  2024-06-24 01:42:08 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f2661062f16b2de5d7b6a5c42a9a5c96326b8454
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Jun 23 17:08:54 2024 -0400

    Linux 6.10-rc5


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 17:58:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 17:58:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746887.1154110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnxi-0000qq-O4; Mon, 24 Jun 2024 17:58:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746887.1154110; Mon, 24 Jun 2024 17:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnxi-0000qj-LI; Mon, 24 Jun 2024 17:58:42 +0000
Received: by outflank-mailman (input) for mailman id 746887;
 Mon, 24 Jun 2024 17:58:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=keLG=N2=ziepe.ca=jgg@srs-se1.protection.inumbo.net>)
 id 1sLnxh-0000qd-Nk
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 17:58:41 +0000
Received: from mail-vk1-xa33.google.com (mail-vk1-xa33.google.com
 [2607:f8b0:4864:20::a33])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6283eb76-3253-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 19:58:39 +0200 (CEST)
Received: by mail-vk1-xa33.google.com with SMTP id
 71dfb90a1353d-4ef52e9038bso1013616e0c.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 10:58:39 -0700 (PDT)
Received: from ziepe.ca
 (hlfxns017vw-142-68-80-239.dhcp-dynamic.fibreop.ns.bellaliant.net.
 [142.68.80.239]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b52b16e9basm27595606d6.6.2024.06.24.10.58.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 Jun 2024 10:58:37 -0700 (PDT)
Received: from jgg by wakko with local (Exim 4.95)
 (envelope-from <jgg@ziepe.ca>) id 1sLnxd-006Xh6-9b;
 Mon, 24 Jun 2024 14:58:37 -0300
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6283eb76-3253-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ziepe.ca; s=google; t=1719251918; x=1719856718; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=YNU8knWMPYNyTHBdJSdgkQoEg3G3ja7Wms4FhBvjjDA=;
        b=fzhkM2xAyH75CjAOJMIDCfDxjn0Vta0SWkR7eULNvy7WfqNuzzgJsWb3SrTo4yittp
         HPBv4o1WY0UDt9G3Gpn76CXDKYq8TcYEiijoyRSCm9Bb9C/n1yQ1StH3dsn6XKLgSayE
         EEbeYnaHmCncwacdSXN+V/h7VaODPpePkMNNLCEAhWhLGbFvQ+iI7cO/kYWSaUvGynmB
         BFTMktCr+TcbpCCQc3Bd1DnfZNwev7wNxVBYOGhr2ZhU/qRjcIDSAOxLGIp9+4QdROOk
         s+tR8pSfWy5Q9UpRBP6g+2BjWRn2W+2HhwhwM48koOUTZK7coMkSBC3juUdkaSoBDzjt
         xWpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719251918; x=1719856718;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=YNU8knWMPYNyTHBdJSdgkQoEg3G3ja7Wms4FhBvjjDA=;
        b=gqelAyGGEmdd2EMulrL7v9SxIMqN/rEB1WDE0KxENYyAPdW93Vx+BhfF/uetS1UShL
         hlj9q9DrC26LHJgsf2t+j0a/xRxLEaA7rB2XGYH0wxWdTfIvaeaMBd0lC3NHMXzNckv/
         vchOTe3ridEXlsOp5CIKfZKwpsiE6OXCoi7qV+bXRc73dtXKcXYhG0fXil5LTiFHUYGL
         DycV65EwB1KPUv8Iasdh+VlWZIKamTXSTi9foHGWqTHuswkZv3SYFxYRhVwwjMC6S/oz
         SD9+SXHgvCj69RLTEVxchWEDwTGXnBWZH/UIjW/0rgUAcl6cD/z/8oNEh1m7iSTL6DAj
         z2yA==
X-Forwarded-Encrypted: i=1; AJvYcCVvaMcB/rcJUy/IdjRgRF6eaUIcUj1VjPdY53DpLNtUsYVqtHwZKlq8rLV76INtKvUVimhkxhhHEyOArjOm2ch6p+fXd3BieCbQhqMSa54=
X-Gm-Message-State: AOJu0YyY7nd1BA8lGiTrvXkMLebowQnrmxydhnobYofdRkC5i7X18Bcx
	gyEWuhqhOLHPiAMhey59AHcSX/G82QFM8v8Mqy2brGgaIwLSZaUu8bp4aCnV6b0=
X-Google-Smtp-Source: AGHT+IHnxjwk5Cm5dZlOx9nn6fHrrP01/ltJJzH9RVjyIzHnxznz3vpGsR7mP8tFLfdcH8bmSz9Pjw==
X-Received: by 2002:a05:6122:98a:b0:4ed:36f:9b38 with SMTP id 71dfb90a1353d-4ef6d82bd90mr4091612e0c.9.1719251918209;
        Mon, 24 Jun 2024 10:58:38 -0700 (PDT)
Date: Mon, 24 Jun 2024 14:58:37 -0300
From: Jason Gunthorpe <jgg@ziepe.ca>
To: Easwar Hariharan <eahariha@linux.microsoft.com>
Cc: Teddy Astie <teddy.astie@vates.tech>,
	Robin Murphy <robin.murphy@arm.com>, xen-devel@lists.xenproject.org,
	iommu@lists.linux.dev, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH v2] iommu/xen: Add Xen PV-IOMMU driver
Message-ID: <20240624175837.GU791043@ziepe.ca>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
 <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
 <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>
 <20240624163254.GT791043@ziepe.ca>
 <900edf8a-885c-4bf3-84bd-5e7b165a1ed7@linux.microsoft.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <900edf8a-885c-4bf3-84bd-5e7b165a1ed7@linux.microsoft.com>

On Mon, Jun 24, 2024 at 10:36:13AM -0700, Easwar Hariharan wrote:
> Hi Jason,
> 
> On 6/24/2024 9:32 AM, Jason Gunthorpe wrote:
> > On Mon, Jun 24, 2024 at 02:36:45PM +0000, Teddy Astie wrote:
> >>>> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
> >>>> +{
> >>>> +    switch (cap) {
> >>>> +    case IOMMU_CAP_CACHE_COHERENCY:
> >>>> +        return true;
> >>>
> >>> Will the PV-IOMMU only ever be exposed on hardware where that really is
> >>> always true?
> >>>
> >>
> >> On the hypervisor side, the PV-IOMMU interface always implicitely flush
> >> the IOMMU hardware on map/unmap operation, so at the end of the
> >> hypercall, the cache should be always coherent IMO.
> > 
> > Cache coherency is a property of the underlying IOMMU HW and reflects
> > the ability to prevent generating transactions that would bypass the
> > cache.
> > 
> > On AMD and Intel IOMMU HW this maps to a bit in their PTEs that must
> > always be set to claim this capability.
> > 
> > No ARM SMMU supports it yet.
> > 
> 
> Unrelated to this patch: Both the arm-smmu and arm-smmu-v3 drivers claim
> this capability if the device tree/IORT table have the corresponding flags.

Oh sorry, I misread this as the ENFORCED version, so it isn't quite
right.

ENFORCED is as described above.

Plain CACHE_COHERENCY requires only HW support and any necessary
enablement in the IOMMU.

On AMD and Intel it is always enabled.

ARM requires HW support and possibly some specific iommu programming.

> I read through DEN0049 to determine what are the knock-on effects, or
> equivalently the requirements to set those flags in the IORT, but came
> up empty. Could you help with what I'm missing to resolve the apparent
> contradiction between your statement and the code?

The flags are set if the underlying HW can do the coherency work, and
Linux has the IOMMU (STE/ioptes) set up to work with that.

Map/unmap cache flushing during the hypercall is not a substitute;
Linux still needs to know whether streaming DMA requires the flushes.
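As a minimal sketch (toy types and names, not the real kernel DMA API),
the decision Linux has to make per device looks like:

```c
#include <stdbool.h>

/* Toy model, not the real kernel API: the streaming-DMA map path must
 * decide whether CPU cache maintenance is needed for a given device.
 * Hypercall-time IOTLB flushing in the hypervisor cannot answer this;
 * only the coherency property of the HW path can. */
struct toy_dev {
    bool dma_coherent;  /* device accesses snoop the CPU caches */
};

/* True when the caller must clean/invalidate CPU caches around the
 * transfer (what arch code does for non-coherent streaming DMA). */
static bool toy_needs_cpu_cache_sync(const struct toy_dev *dev)
{
    return !dev->dma_coherent;
}
```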

Jason


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 18:00:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 18:00:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746893.1154120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnzG-0002Jo-1D; Mon, 24 Jun 2024 18:00:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746893.1154120; Mon, 24 Jun 2024 18:00:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLnzF-0002Jh-Uu; Mon, 24 Jun 2024 18:00:17 +0000
Received: by outflank-mailman (input) for mailman id 746893;
 Mon, 24 Jun 2024 18:00:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Bblz=N2=arm.com=robin.murphy@srs-se1.protection.inumbo.net>)
 id 1sLnzE-0002JZ-RC
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 18:00:16 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 9bea681d-3253-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 20:00:15 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9DCDBDA7;
 Mon, 24 Jun 2024 11:00:39 -0700 (PDT)
Received: from [10.57.74.124] (unknown [10.57.74.124])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 94FA63F73B;
 Mon, 24 Jun 2024 11:00:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9bea681d-3253-11ef-90a3-e314d9c70b13
Message-ID: <b65e089b-f8b0-4588-adaf-af555ab5781a@arm.com>
Date: Mon, 24 Jun 2024 19:00:10 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH v2] iommu/xen: Add Xen PV-IOMMU driver
To: Easwar Hariharan <eahariha@linux.microsoft.com>,
 Jason Gunthorpe <jgg@ziepe.ca>, Teddy Astie <teddy.astie@vates.tech>
Cc: xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
 <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
 <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>
 <20240624163254.GT791043@ziepe.ca>
 <900edf8a-885c-4bf3-84bd-5e7b165a1ed7@linux.microsoft.com>
From: Robin Murphy <robin.murphy@arm.com>
Content-Language: en-GB
In-Reply-To: <900edf8a-885c-4bf3-84bd-5e7b165a1ed7@linux.microsoft.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2024-06-24 6:36 pm, Easwar Hariharan wrote:
> Hi Jason,
> 
> On 6/24/2024 9:32 AM, Jason Gunthorpe wrote:
>> On Mon, Jun 24, 2024 at 02:36:45PM +0000, Teddy Astie wrote:
>>>>> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
>>>>> +{
>>>>> +    switch (cap) {
>>>>> +    case IOMMU_CAP_CACHE_COHERENCY:
>>>>> +        return true;
>>>>
>>>> Will the PV-IOMMU only ever be exposed on hardware where that really is
>>>> always true?
>>>>
>>>
>>> On the hypervisor side, the PV-IOMMU interface always implicitely flush
>>> the IOMMU hardware on map/unmap operation, so at the end of the
>>> hypercall, the cache should be always coherent IMO.
>>
>> Cache coherency is a property of the underlying IOMMU HW and reflects
>> the ability to prevent generating transactions that would bypass the
>> cache.
>>
>> On AMD and Intel IOMMU HW this maps to a bit in their PTEs that must
>> always be set to claim this capability.
>>
>> No ARM SMMU supports it yet.
>>
> 
> Unrelated to this patch: Both the arm-smmu and arm-smmu-v3 drivers claim
> this capability if the device tree/IORT table have the corresponding flags.
> 
> I read through DEN0049 to determine what are the knock-on effects, or
> equivalently the requirements to set those flags in the IORT, but came
> up empty. Could you help with what I'm missing to resolve the apparent
> contradiction between your statement and the code?

We did rejig things slightly a while back. The status quo now is that 
IOMMU_CAP_CACHE_COHERENCY mostly covers whether IOMMU mappings can make 
device accesses coherent at all, tied in with the IOMMU_CACHE prot value. 
This is effectively forced for Intel and AMD, while for SMMU we have to 
take a guess - but, as commented, it's a pretty reasonable assumption 
that if the SMMU's own output for table walks etc. is coherent, then its 
translation outputs are likely to be too. The further property of being 
able to then enforce a coherent mapping regardless of what an endpoint 
might try in order to get around it (PCIe No Snoop etc.) is now under the 
enforce_cache_coherency op - that's what SMMU can't guarantee for now, 
due to the IMP-DEF nature of whether S2FWB overrides No Snoop or not.
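As a rough sketch of that split (toy structures, not the real 
iommu_ops/iommu_domain_ops interfaces):

```c
#include <stdbool.h>

/* Toy model, not the real driver interfaces: the plain capability asks
 * whether IOMMU mappings can make device accesses coherent at all;
 * enforcement additionally asks whether the IOMMU can override an
 * endpoint's attempt to bypass the cache (PCIe No Snoop etc.). */
struct toy_iommu {
    bool coherent_walks;  /* IOMMU's own output (table walks) is coherent */
    bool snoop_override;  /* guaranteed override of No Snoop, e.g. VT-d SNP */
};

static bool toy_cap_cache_coherency(const struct toy_iommu *iommu)
{
    /* Reasonable guess: coherent walker => coherent translation output. */
    return iommu->coherent_walks;
}

static bool toy_enforce_cache_coherency(const struct toy_iommu *iommu)
{
    /* Enforcement needs the stronger, guaranteed override. */
    return iommu->coherent_walks && iommu->snoop_override;
}
```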

Thanks,
Robin.


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 18:54:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 18:54:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746905.1154130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLopj-0000XA-OL; Mon, 24 Jun 2024 18:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746905.1154130; Mon, 24 Jun 2024 18:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLopj-0000X3-L4; Mon, 24 Jun 2024 18:54:31 +0000
Received: by outflank-mailman (input) for mailman id 746905;
 Mon, 24 Jun 2024 18:54:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLopj-0000Wt-0Q; Mon, 24 Jun 2024 18:54:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLopi-0008Ib-IR; Mon, 24 Jun 2024 18:54:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLopi-00038W-7Z; Mon, 24 Jun 2024 18:54:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLopi-0004ie-4V; Mon, 24 Jun 2024 18:54:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g/5ICl8d2sxRmdGAIQejzDa+tGc+cvcNqyzD/mhJR6w=; b=01GEo3ilH25Pd+B5wpXtWYT75C
	UzM60IHs6Q2IhDWq4EpgjiMAmTng6x1Bh0zTM8n1cxYT10uCHZBLqGD65I4Xf47li4qrsfTUIpRIl
	BjyhaI7l6oX7dSUug0GxJVB7Dl1UmaloeqwciizkJf7EWmsPh4hb5JbVbDEKhOIdsD3o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186467-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186467: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=908407bf2b29a38d6879fc8c57dad14473ef67f8
X-Osstest-Versions-That:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Jun 2024 18:54:30 +0000

flight 186467 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186467/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  908407bf2b29a38d6879fc8c57dad14473ef67f8
baseline version:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Last test of basis   186447  2024-06-21 15:04:10 Z    3 days
Testing same since   186467  2024-06-24 16:03:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@vates.tech>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9e7c26ad85..908407bf2b  908407bf2b29a38d6879fc8c57dad14473ef67f8 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 19:01:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 19:01:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746884.1154141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLowc-0002BR-EK; Mon, 24 Jun 2024 19:01:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746884.1154141; Mon, 24 Jun 2024 19:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLowc-0002BK-BI; Mon, 24 Jun 2024 19:01:38 +0000
Received: by outflank-mailman (input) for mailman id 746884;
 Mon, 24 Jun 2024 17:36:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pKqu=N2=linux.microsoft.com=eahariha@srs-se1.protection.inumbo.net>)
 id 1sLnbz-0006W6-RE
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 17:36:15 +0000
Received: from linux.microsoft.com (linux.microsoft.com [13.77.154.182])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTP
 id 410d92e4-3250-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 19:36:14 +0200 (CEST)
Received: from [192.168.49.54] (c-73-118-245-227.hsd1.wa.comcast.net
 [73.118.245.227])
 by linux.microsoft.com (Postfix) with ESMTPSA id EC03920B7001;
 Mon, 24 Jun 2024 10:36:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 410d92e4-3250-11ef-90a3-e314d9c70b13
DKIM-Filter: OpenDKIM Filter v2.11.0 linux.microsoft.com EC03920B7001
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linux.microsoft.com;
	s=default; t=1719250573;
	bh=eyV8+Wg7rnqJN5ZB9yu5wi+VOQhkLpSVFBwuzs99BlA=;
	h=Date:Cc:Subject:To:References:From:In-Reply-To:From;
	b=VVCfWIl7rgyeYeV8qIBFhuEhQtA9/ytKRfyRl/4eY4ANSsdOSMWk2mdcUDzvdZmAe
	 4GO6WjN3LwG820PLck1snRRlElpxqRpeCd3Rm3dT7S93RgHw/IxDmLU514kEfk1qlM
	 342hnCK2Z3orW9MBS5LKS1er7VB5BKCVvhhoPL7I=
Message-ID: <900edf8a-885c-4bf3-84bd-5e7b165a1ed7@linux.microsoft.com>
Date: Mon, 24 Jun 2024 10:36:13 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Cc: eahariha@linux.microsoft.com, Robin Murphy <robin.murphy@arm.com>,
 xen-devel@lists.xenproject.org, iommu@lists.linux.dev,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH v2] iommu/xen: Add Xen PV-IOMMU driver
To: Jason Gunthorpe <jgg@ziepe.ca>, Teddy Astie <teddy.astie@vates.tech>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
 <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
 <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>
 <20240624163254.GT791043@ziepe.ca>
Content-Language: en-US
From: Easwar Hariharan <eahariha@linux.microsoft.com>
In-Reply-To: <20240624163254.GT791043@ziepe.ca>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Hi Jason,

On 6/24/2024 9:32 AM, Jason Gunthorpe wrote:
> On Mon, Jun 24, 2024 at 02:36:45PM +0000, Teddy Astie wrote:
>>>> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
>>>> +{
>>>> +    switch (cap) {
>>>> +    case IOMMU_CAP_CACHE_COHERENCY:
>>>> +        return true;
>>>
>>> Will the PV-IOMMU only ever be exposed on hardware where that really is
>>> always true?
>>>
>>
>> On the hypervisor side, the PV-IOMMU interface always implicitely flush
>> the IOMMU hardware on map/unmap operation, so at the end of the
>> hypercall, the cache should be always coherent IMO.
> 
> Cache coherency is a property of the underlying IOMMU HW and reflects
> the ability to prevent generating transactions that would bypass the
> cache.
> 
> On AMD and Intel IOMMU HW this maps to a bit in their PTEs that must
> always be set to claim this capability.
> 
> No ARM SMMU supports it yet.
> 

Unrelated to this patch: Both the arm-smmu and arm-smmu-v3 drivers claim
this capability if the device tree/IORT tables have the corresponding flags.

I read through DEN0049 to determine what the knock-on effects are, or
equivalently the requirements for setting those flags in the IORT, but came
up empty. Could you help with what I'm missing to resolve the apparent
contradiction between your statement and the code?

Thanks,
Easwar


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 19:04:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 19:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746919.1154151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLozY-0002kj-S7; Mon, 24 Jun 2024 19:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746919.1154151; Mon, 24 Jun 2024 19:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLozY-0002kc-Oy; Mon, 24 Jun 2024 19:04:40 +0000
Received: by outflank-mailman (input) for mailman id 746919;
 Mon, 24 Jun 2024 19:04:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qxl8=N2=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sLozX-0002kW-Ls
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 19:04:39 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 995b62c9-325c-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 21:04:37 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 002B84EE0738;
 Mon, 24 Jun 2024 21:04:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 995b62c9-325c-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Mon, 24 Jun 2024 21:04:35 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Alessandro Zucchelli
 <alessandro.zucchelli@bugseng.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Subject: Re: [RFC XEN PATCH] x86/mctelem: address violations of MISRA C: 2012
 Rule 5.3
In-Reply-To: <d3856651-f5a6-4c96-8afe-336af2a60231@suse.com>
References: <79eb2f12e521f96a53dd166eb7db485bb3d9d067.1718962824.git.nicola.vetrini@bugseng.com>
 <d3856651-f5a6-4c96-8afe-336af2a60231@suse.com>
Message-ID: <dfe9bd46708440db17d594c93d53b6fc@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-24 11:00, Jan Beulich wrote:
> On 21.06.2024 11:50, Nicola Vetrini wrote:
>> From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
>> 
>> This addresses violations of MISRA C:2012 Rule 5.3, which states the
>> following: an identifier declared in an inner scope shall not hide an
>> identifier declared in an outer scope. In this case the shadowing is
>> between the local variables "mctctl" and the file-scope static struct
>> variable with the same name.
>> 
>> No functional change.
>> 
>> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
>> ---
>> RFC because I'm not 100% sure the semantics of the code are preserved.
>> I think so, and it passes gitlab pipelines [1], but there may be some
>> missing information.
> 
> Details as to your concerns would help. I see no issue, not even a 
> concern.
> 

That's reassuring. My main concern was that somehow the global (through
perhaps some macro expansion) would be updated instead of the local (or
vice versa).

>> --- a/xen/arch/x86/cpu/mcheck/mctelem.c
>> +++ b/xen/arch/x86/cpu/mcheck/mctelem.c
>> @@ -168,14 +168,14 @@ static void mctelem_xchg_head(struct mctelem_ent **headp,
>>  void mctelem_defer(mctelem_cookie_t cookie, bool lmce)
>>  {
>>  	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
>> -	struct mc_telem_cpu_ctl *mctctl = &this_cpu(mctctl);
>> +	struct mc_telem_cpu_ctl *mctctl_cpu = &this_cpu(mctctl);
> 
> When possible (i.e. without loss of meaning) I'd generally prefer names
> to be shortened. Wouldn't just "ctl" work here?

I can try. I do not expect shadowing with "ctl", but it may happen. I'll 
try and let you know.
> 
>> -	ASSERT(mctctl->pending == NULL || mctctl->lmce_pending == NULL);
>> +	ASSERT(mctctl_cpu->pending == NULL || mctctl_cpu->lmce_pending == NULL);
>> 
>> -	if (mctctl->pending)
>> -		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
>> +	if (mctctl_cpu->pending)
>> +		mctelem_xchg_head(&mctctl_cpu->pending, &tep->mcte_next, tep);
>>  	else if (lmce)
>> -		mctelem_xchg_head(&mctctl->lmce_pending, &tep->mcte_next, tep);
>> +		mctelem_xchg_head(&mctctl_cpu->lmce_pending, &tep->mcte_next, tep);
>>  	else {
>>  		/*
>>  		 * LMCE is supported on Skylake-server and later CPUs, on
>> @@ -186,10 +186,10 @@ void mctelem_defer(mctelem_cookie_t cookie, bool lmce)
>>  		 * moment. As a result, the following two exchanges together
>>  		 * can be treated as atomic.
>>  		 */
> 
> In the middle of this comment the variable is also mentioned, and hence
> also wants adjusting (twice).

Ok, will update.

> 
>> -		if (mctctl->lmce_pending)
>> -			mctelem_xchg_head(&mctctl->lmce_pending,
>> -					  &mctctl->pending, NULL);
>> -		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
>> +		if (mctctl_cpu->lmce_pending)
>> +			mctelem_xchg_head(&mctctl_cpu->lmce_pending,
>> +					  &mctctl_cpu->pending, NULL);
>> +		mctelem_xchg_head(&mctctl_cpu->pending, &tep->mcte_next, tep);
>>  	}
>>  }
>> 
>> @@ -213,7 +213,7 @@ void mctelem_process_deferred(unsigned int cpu,
>>  {
>>  	struct mctelem_ent *tep;
>>  	struct mctelem_ent *head, *prev;
>> -	struct mc_telem_cpu_ctl *mctctl = &per_cpu(mctctl, cpu);
>> +	struct mc_telem_cpu_ctl *mctctl_cpu = &per_cpu(mctctl, cpu);
>>  	int ret;
>> 
>>  	/*
>> @@ -232,7 +232,7 @@ void mctelem_process_deferred(unsigned int cpu,
>>  	 * Any MC# occurring after the following atomic exchange will be
>>  	 * handled by another round of MCE softirq.
>>  	 */
>> -	mctelem_xchg_head(lmce ? &mctctl->lmce_pending : &mctctl->pending,
>> +	mctelem_xchg_head(lmce ? &mctctl_cpu->lmce_pending : &mctctl_cpu->pending,
>>  			  &this_cpu(mctctl.processing), NULL);
> 
> By shortening the variable name here you'd also avoid going past line
> length limits.
> 

Ok.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 21:24:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 21:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746932.1154161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLrAa-0001g4-2c; Mon, 24 Jun 2024 21:24:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746932.1154161; Mon, 24 Jun 2024 21:24:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLrAa-0001fx-00; Mon, 24 Jun 2024 21:24:12 +0000
Received: by outflank-mailman (input) for mailman id 746932;
 Mon, 24 Jun 2024 21:24:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q2Am=N2=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sLrAY-0001fr-Bu
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 21:24:10 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 164a10b5-3270-11ef-b4bb-af5377834399;
 Mon, 24 Jun 2024 23:24:07 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719264243678432.7074087375636;
 Mon, 24 Jun 2024 14:24:03 -0700 (PDT)
Received: by mail-yw1-f170.google.com with SMTP id
 00721157ae682-6439f6cf79dso16159977b3.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 14:24:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 164a10b5-3270-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; t=1719264244; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=S8N1nc2JxCc9xDI+M24vohX4DXa+jUOMMcB4+IvJc7MvckQbHdDWz0kWh3FwcFzkzFOUwKt1oxlVBVpScUNyuhOs0ub35U5/Y1d7kau37GsBFjVKucWAzNL5I4QBugMi70Gp+hzLN8xWZxCbbhP4nDlwWxFnUBBaXxWHsr63IJU=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719264244; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=rDR0CtGnRUytB9puns+4maTSy3KDvr3FXJ0+K8eJa14=; 
	b=F3znArzuKrWC/177cHrQWQC0RLofCC+3V3nPBctPU2MD382E5syVTiTUAaZtI/dEVqgMrX0aqJyYm/O3eqTJO5kd2dWomVDe8a+DH7M54KIAYTi2tslycDIdrAwPzuaVb9nx60BStLspFuedN1NlaUsaXSCu19B+nvY2jLntXZY=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719264244;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=rDR0CtGnRUytB9puns+4maTSy3KDvr3FXJ0+K8eJa14=;
	b=NgCFqfOxK9C60Fdx0OoqpaUqK7Sj4bMun46o2YfZh1C7t5Hzjh4BqxFfU98CsqEx
	OcRES8VbQRuqaWLLTEfUTXSqAxL3kVbXFiTlhEbZ3fA1WTsdYdAYPPwLaT9OIWyyuvS
	uNvwVfjK7C1aI8nME/9W9xwD0FXEkK/du5VGOxlc=
X-Forwarded-Encrypted: i=1; AJvYcCX5z2def9JCDSjbzXmMgNcEtkq/l0qAnI3mz+KUZyHYH+yzhuIC0c3r806AIXUOSJik19ZQc1N2fFjk0kZhVFxwtT84/NWBj2aNL7dKAmw=
X-Gm-Message-State: AOJu0YxyLUKM0ljziTPYZ6tZz6q+APDldQ/k5tYZEPIsX2hefCl3iYka
	nfcweVP7sQ9TMXBA5yLfTtOlAVpJLYoWfv9w9+tOdipRM//obZx1WYgRx1Ha4oMmG5XsuCiyiJ0
	7OCFV6C7fIstgSY/dAflBOApyKyk=
X-Google-Smtp-Source: AGHT+IHkR4Ur8MU9QFQh2fgIosSIUBWqVJ/0HkteNZrqbG9qKxW4jR7v5kyLqJ26Yi5L6b9Vz5LOITsv3cKGg2Ocrjc=
X-Received: by 2002:a25:a347:0:b0:dff:2cc6:c470 with SMTP id
 3f1490d57ef6-e0303ed5693mr4861971276.1.1719264242795; Mon, 24 Jun 2024
 14:24:02 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
In-Reply-To: <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 24 Jun 2024 17:23:26 -0400
X-Gmail-Original-Message-ID: <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
Message-ID: <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> > @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
> >  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
> >       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
> >
> > +libfuzzer-harness: $(OBJS) cpuid.o
> > +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
>
> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
> tree anywhere.

It's used by oss-fuzz, otherwise it's not doing anything.

>
> I'm further surprised you get away here without wrappers.o.

Wrappers.o was actually breaking the build for oss-fuzz at the linking
stage. It works just fine without it.

>
> Finally, despite its base name the lack of an extension suggests to me
> this isn't actually a library. Can you help me bring both aspects together?

LibFuzzer is the name of the fuzzing engine, like AFL:
https://llvm.org/docs/LibFuzzer.html

>
> > @@ -67,7 +70,7 @@ distclean: clean
> >
> >  .PHONY: clean
> >  clean:
> > -     rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov
> > +     rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov libfuzzer-harness
>
> I'm inclined to suggest it's time to split this line (e.g. keeping all the
> wildcard patterns together and moving the rest to a new rm invocation).

Sure.

>
> > --- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
> > +++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
> > @@ -906,14 +906,12 @@ int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
> >
> >      if ( size <= DATA_OFFSET )
> >      {
> > -        printf("Input too small\n");
> > -        return 1;
> > +        return -1;
> >      }
> >
> >      if ( size > FUZZ_CORPUS_SIZE )
> >      {
> > -        printf("Input too large\n");
> > -        return 1;
> > +        return -1;
> >      }
> >
> >      memcpy(&input, data_p, size);
>
> This part of the change clearly needs explaining in the description.
> It's not even clear to me in how far this is related to the purpose
> of the patch here (iow it may want to be a separate change, depending
> on why the change is needed).

The printf simply produces a ton of unnecessary output while the
fuzzer is running, slowing it down. It's also not useful at all, even
for debugging. Switching to return -1 is necessary because, besides
0 and -1, return values are reserved by libfuzzer for "future use". No
issue with putting this info into the commit message.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 21:40:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 21:40:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746943.1154170 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLrQZ-0004NB-Hn; Mon, 24 Jun 2024 21:40:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746943.1154170; Mon, 24 Jun 2024 21:40:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLrQZ-0004N4-Ep; Mon, 24 Jun 2024 21:40:43 +0000
Received: by outflank-mailman (input) for mailman id 746943;
 Mon, 24 Jun 2024 21:40:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sLrQY-0004My-Cx
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 21:40:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLrQV-0002lU-Oy; Mon, 24 Jun 2024 21:40:39 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.244])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLrQV-00008K-HM; Mon, 24 Jun 2024 21:40:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=A8B+L7WSYcGhD26sqc57Hx+AUKTOl9PmV4f1wtBgf2s=; b=caLBq4MhdJvsIoRWHwdIiZHEvT
	oJDmxf/vXFNhJqtH0kVLrcgmmjprAXXdwyWOFm5w4p52M+rEoRRcWjssO58EIxGGH5dhVpdkpv1G4
	4FamFSaGKOAbM0bqfsaMYQZERZWLGYnkENtCoKwqZyWZ6Su96LsQpil2+ABCSIrLCEqw=;
Message-ID: <5238d3a6-c47f-4951-b839-a92c5ee4e571@xen.org>
Date: Mon, 24 Jun 2024 22:40:37 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] MAINTAINERS: Update my email address again
Content-Language: en-GB
To: Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@vates.tech>
Cc: xen-devel@lists.xenproject.org, Anthony PERARD <anthony@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Oleksii <oleksii.kurochko@gmail.com>
References: <20240624094030.41692-1-anthony.perard@vates.tech>
 <alpine.DEB.2.22.394.2406240927390.3870429@ubuntu-linux-20-04-desktop>
From: Julien Grall <julien@xen.org>
In-Reply-To: <alpine.DEB.2.22.394.2406240927390.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 24/06/2024 17:27, Stefano Stabellini wrote:
> On Mon, 24 Jun 2024, Anthony PERARD wrote:
>> Signed-off-by: Anthony PERARD <anthony.perard@vates.tech>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

I guess this technically needs an ack from the release manager. So CC
Oleksii.

Cheers,


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 21:58:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 21:58:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746951.1154181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLrhV-0006J0-Ur; Mon, 24 Jun 2024 21:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746951.1154181; Mon, 24 Jun 2024 21:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLrhV-0006It-Ru; Mon, 24 Jun 2024 21:58:13 +0000
Received: by outflank-mailman (input) for mailman id 746951;
 Mon, 24 Jun 2024 21:58:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sLrhU-0006IX-Dl
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 21:58:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLrhT-00037z-Op; Mon, 24 Jun 2024 21:58:11 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.1.244])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sLrhT-0001Ik-I6; Mon, 24 Jun 2024 21:58:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=H2j+7aMJ5xz3pCbaLK6VeloBVc6DOQDWWUBIkFegLIM=; b=niRBluwOJB/Ds7NBAIIAPJdHkr
	cBYVOr1gyA+oOCxxcpcnU7bKUngKUNh9omFPn8JGvnOz1QI+tjAN4hyjPoqKGLtFrv87KEtdDpAi8
	2Bp5ysaV+7lmwpa4Wfe4tP4+tZE0pok+avI03bj7BujWAyqUsnp+dAd7tMtTuEaWdSME=;
Message-ID: <6f94d071-f90f-485d-a8aa-a0c8a726ce34@xen.org>
Date: Mon, 24 Jun 2024 22:58:09 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
Content-Language: en-GB
To: Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <20240621191434.5046-2-tamas@tklengyel.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240621191434.5046-2-tamas@tklengyel.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi,

On 21/06/2024 20:14, Tamas K Lengyel wrote:
> The build integration script for oss-fuzz targets.

Do you have any details how this is meant and/or will be used?

I also couldn't find a cover letter. For series with more than one
patch, it is recommended to have one, as it helps threading and can also
give some insight into what you are aiming to do.

> 
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> ---
>   scripts/oss-fuzz/build.sh | 22 ++++++++++++++++++++++
>   1 file changed, 22 insertions(+)
>   create mode 100755 scripts/oss-fuzz/build.sh
> 
> diff --git a/scripts/oss-fuzz/build.sh b/scripts/oss-fuzz/build.sh
> new file mode 100755
> index 0000000000..48528bbfc2
> --- /dev/null
> +++ b/scripts/oss-fuzz/build.sh

Depending on the answer above, we may want to consider creating the
oss-fuzz directory under automation/ or maybe tools/fuzz/.

> @@ -0,0 +1,22 @@
> +#!/bin/bash -eu
> +# Copyright 2024 Google LLC

I am a bit confused with this copyright. Is this script taken from 
somewhere?

> +#
> +# Licensed under the Apache License, Version 2.0 (the "License");
> +# you may not use this file except in compliance with the License.
> +# You may obtain a copy of the License at
> +#
> +#      http://www.apache.org/licenses/LICENSE-2.0
> +#
> +# Unless required by applicable law or agreed to in writing, software
> +# distributed under the License is distributed on an "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> +# See the License for the specific language governing permissions and
> +# limitations under the License.
> +#
> +################################################################################
> +
> +cd xen
> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen

Looking at the help from ./configure, 'clang=y' is not mentioned and it 
doesn't make any difference in the config.log. Can you clarify why this 
was added?

> +make clang=y -C tools/include
> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator

Who will be defining $OUT?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 22:19:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 22:19:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746959.1154190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLs1e-0000yE-I9; Mon, 24 Jun 2024 22:19:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746959.1154190; Mon, 24 Jun 2024 22:19:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLs1e-0000y7-FN; Mon, 24 Jun 2024 22:19:02 +0000
Received: by outflank-mailman (input) for mailman id 746959;
 Mon, 24 Jun 2024 22:19:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q2Am=N2=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sLs1d-0000y1-H0
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 22:19:01 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c02ecb27-3277-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 00:18:59 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719267533754239.15634385390558;
 Mon, 24 Jun 2024 15:18:53 -0700 (PDT)
Received: by mail-yb1-f174.google.com with SMTP id
 3f1490d57ef6-dfe41f7852cso4883491276.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 15:18:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c02ecb27-3277-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719267535; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=ddLKyPpPX9a+htI/RHy39D+CWvGEJNXmhMtAl1NmRHNgGGLCAKRDx8/0k5LDW8GXQEw46p+bXoDsfm1gpO+606Rx4QVCZ/bskvNqegx8+qp3ERb9QxNYjmYh9G0SWCcY2vdPGmpB3z0sFadKeCoBcJ9prDENpVF3DfoiNhHRqFs=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719267535; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=+r/yFq4ftT+CDCbVVm8NBUu6TF01Out8rHeaqMaEJcI=; 
	b=IKS+NHeHnxW4jy+unnQTMD6iSNDbyL4LupUuoeLiDWc9DZmhOQ43Nh39l8DpqnzblNOcu9UfZ7WASs1OnRFWo+lY0yQ7tS8XQelurR8Qj7CxANeJPbCCDp7J3IO/95+pAX93D2SiOuIPe3sDoJziMue1RTjEIDw0jYvQHjeZPG8=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719267535;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=+r/yFq4ftT+CDCbVVm8NBUu6TF01Out8rHeaqMaEJcI=;
	b=UcZSGd+QLT17LmLUlksixLo/im5bTuaT+vWKuFQLMbcZic34Tcqwh8w1CH6ltyHp
	q3ljwlChhQ8YBGPZGay9WO/8CuG6UcSFTt2VLXVr3yTp96i2852cIRjN8IHBsw7njln
	hQlfEqf1HTCml8KLkJtvOh0Yk4LFaIsmgtY6pcPk=
X-Gm-Message-State: AOJu0Ywhew6hVhBFayp/rX/KRsYsVQtw8FVNOADG9Yc+TzZd+cNTO3K7
	POAoK7UoJkxTQApC+jici3tJYbGqUF76+yshn3a92Zbt1xfF9XVAaQ6XboAwQ5oXHRLPMmvFwof
	6Iq+u9OVRY0Pzm/kDiY/0hK1gPjk=
X-Google-Smtp-Source: AGHT+IGhhuKjvHL3Q0uLIfNkuQRT3QSdw3H/lRPiSCCautBph1jWwNISMfxhQm0hEq0sCcqAu0z0Q4vrq5OJ8/vzVT4=
X-Received: by 2002:a25:d010:0:b0:dfa:ff74:f24f with SMTP id
 3f1490d57ef6-e0303f7f16fmr6569374276.28.1719267532923; Mon, 24 Jun 2024
 15:18:52 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <20240621191434.5046-2-tamas@tklengyel.com>
 <6f94d071-f90f-485d-a8aa-a0c8a726ce34@xen.org>
In-Reply-To: <6f94d071-f90f-485d-a8aa-a0c8a726ce34@xen.org>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 24 Jun 2024 18:18:16 -0400
X-Gmail-Original-Message-ID: <CABfawhkCJv1oQ4+_bBHf_ys1=gtmFVT-Zn7UeYDLaSm9KQqgcA@mail.gmail.com>
Message-ID: <CABfawhkCJv1oQ4+_bBHf_ys1=gtmFVT-Zn7UeYDLaSm9KQqgcA@mail.gmail.com>
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, Jun 24, 2024 at 5:58 PM Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 21/06/2024 20:14, Tamas K Lengyel wrote:
> > The build integration script for oss-fuzz targets.
>
> Do you have any details how this is meant and/or will be used?

https://google.github.io/oss-fuzz/getting-started/new-project-guide/#buildsh

>
> I also couldn't find a cover letter. For series with more than one
> patch, it is recommended to have one, as it helps threading and can also
> give some insight into what you are aiming to do.
>
> >
> > Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> > ---
> >   scripts/oss-fuzz/build.sh | 22 ++++++++++++++++++++++
> >   1 file changed, 22 insertions(+)
> >   create mode 100755 scripts/oss-fuzz/build.sh
> >
> > diff --git a/scripts/oss-fuzz/build.sh b/scripts/oss-fuzz/build.sh
> > new file mode 100755
> > index 0000000000..48528bbfc2
> > --- /dev/null
> > +++ b/scripts/oss-fuzz/build.sh
>
> Depending on the answer above, we may want to consider creating the
> oss-fuzz directory under automation/ or maybe tools/fuzz/.

I'm fine with moving it wherever.

>
> > @@ -0,0 +1,22 @@
> > +#!/bin/bash -eu
> > +# Copyright 2024 Google LLC
>
> I am a bit confused with this copyright. Is this script taken from
> somewhere?

Yes, I took an existing build.sh from oss-fuzz. It is recommended to
have the more complex parts of build.sh in the upstream repository so
that additional targets/fixes can be merged there instead of opening
PRs on oss-fuzz directly. With this setup, the build.sh I merge to
oss-fuzz will just run this build.sh in the Xen repository. See
https://github.com/tklengyel/oss-fuzz/commit/552317ae9d24ef1c00d87595516cc364bc33b662.

>
> > +#
> > +# Licensed under the Apache License, Version 2.0 (the "License");
> > +# you may not use this file except in compliance with the License.
> > +# You may obtain a copy of the License at
> > +#
> > +#      http://www.apache.org/licenses/LICENSE-2.0
> > +#
> > +# Unless required by applicable law or agreed to in writing, software
> > +# distributed under the License is distributed on an "AS IS" BASIS,
> > +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> > +# See the License for the specific language governing permissions and
> > +# limitations under the License.
> > +#
> > +################################################################################
> > +
> > +cd xen
> > +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
>
> Looking at the help from ./configure, 'clang=y' is not mentioned and it
> doesn't make any difference in the config.log. Can you clarify why this
> was added?

Just throwing stuff at the wall till I was able to get a clang build.
If it's indeed not needed I can remove it.

>
> > +make clang=y -C tools/include
> > +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
> > +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
>
> Who will be defining $OUT?

oss-fuzz
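To make that contract concrete: the oss-fuzz runner exports OUT (among other variables) before invoking the project's build.sh, and build.sh is expected to copy each fuzz-target binary into $OUT. A hedged sketch with illustrative paths, not the real oss-fuzz values:

```shell
#!/bin/bash -eu
# Sketch of the oss-fuzz <-> build.sh contract; paths are illustrative.
# oss-fuzz normally exports OUT=/out; default to a temp dir for testing.
OUT="${OUT:-$(mktemp -d)}"
export OUT

# build.sh's final step, in miniature: publish the harness into $OUT.
printf 'stub harness\n' > "$OUT/x86_instruction_emulator"

ls "$OUT"
```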

Tamas


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 22:58:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 22:58:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746966.1154200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLsdV-0006JZ-CK; Mon, 24 Jun 2024 22:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746966.1154200; Mon, 24 Jun 2024 22:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLsdV-0006JS-9H; Mon, 24 Jun 2024 22:58:09 +0000
Received: by outflank-mailman (input) for mailman id 746966;
 Mon, 24 Jun 2024 22:58:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLsdT-0006JI-Pc; Mon, 24 Jun 2024 22:58:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLsdT-00047z-GD; Mon, 24 Jun 2024 22:58:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLsdT-0007ed-6S; Mon, 24 Jun 2024 22:58:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLsdT-0001wl-60; Mon, 24 Jun 2024 22:58:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FkapAujnVt4SmV2qLLIMBW7o2j3fAeA4V66eR10zEWg=; b=YgKQ4aCl5Zdmoh3qArSCK4KRfZ
	DM4F3oruKWrQqJR8eVbzeHcGiZPQB/7iyDBqvwF0F/Zr4sNER573L6OifUSx430+2JHXdV1YdN7gQ
	cA660gstF7sWlSPPKtAlrfxPO+Ar9hT4/c4jEMZ+33fI7OzGObCPWmFimZobxCyOUD4Y=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186470-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186470: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c56f1ef577831ec70645ca5874d54f2e698c6761
X-Osstest-Versions-That:
    xen=908407bf2b29a38d6879fc8c57dad14473ef67f8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 Jun 2024 22:58:07 +0000

flight 186470 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186470/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c56f1ef577831ec70645ca5874d54f2e698c6761
baseline version:
 xen                  908407bf2b29a38d6879fc8c57dad14473ef67f8

Last test of basis   186467  2024-06-24 16:03:45 Z    0 days
Testing same since   186470  2024-06-24 19:02:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@amd.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   908407bf2b..c56f1ef577  c56f1ef577831ec70645ca5874d54f2e698c6761 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon Jun 24 23:38:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 Jun 2024 23:38:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746976.1154211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtG4-0002mF-BA; Mon, 24 Jun 2024 23:38:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746976.1154211; Mon, 24 Jun 2024 23:38:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtG4-0002m8-78; Mon, 24 Jun 2024 23:38:00 +0000
Received: by outflank-mailman (input) for mailman id 746976;
 Mon, 24 Jun 2024 23:37:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hzrU=N2=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLtG3-0002m1-2m
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 23:37:59 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c64fe87f-3282-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 01:37:55 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 6F794CE131B;
 Mon, 24 Jun 2024 23:37:49 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 6F1FEC32781;
 Mon, 24 Jun 2024 23:37:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c64fe87f-3282-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719272268;
	bh=vWbG0c0CobB53DU/xavLSffEahkqZpCLX4EoIvPMzs0=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=o31GOQ93FI06XUUftO+cpWwi8Ls0+bsbbPznZslhylnbq3Frqf8TraDYVIYgnVRLz
	 O4pKH7vpXu7Mi5iZTXSC1pG9lOH3OSwBRpuGFkckBY/L2ZiiQ0L5YlRIsQHzD9STcQ
	 /sBCv0UQpADPDkejI4ivt2F0PbB1qHSc3t81n1DvwbwIYdfPA9sqIbBG78cV4tFO+s
	 WIzhA58X8TTJx1IUWoivzTwVM5sRalFstAsjhY7j699IPbFGAdu2h1TAlRcxy8EXxk
	 pxBGqVfVffV7PoQe92pTnlCdX96gscvfiuaVkLArauGZsxWt8yL9KlhsQLiUGNund/
	 TDtK9PEc1FVoA==
Date: Mon, 24 Jun 2024 16:37:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>
Subject: Re: [XEN PATCH v2] automation/eclair: configure Rule 13.6 and custom
 service B.UNEVALEFF
In-Reply-To: <5c60e98d70ae94c155fd56ec13b764b7a8f6161c.1719219962.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241637380.3870429@ubuntu-linux-20-04-desktop>
References: <5c60e98d70ae94c155fd56ec13b764b7a8f6161c.1719219962.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Rule 13.6 states that "The operand of the `sizeof' operator shall not
> contain any expression which has potential side effects".
> 
> Define service B.UNEVALEFF as an extension of Rule 13.6 to
> check for unevaluated side effects also for the typeof and alignof operators.
> 
> Update ECLAIR configuration to deviate uses of BUILD_BUG_ON and
> alternative_v?call[0-9] for both Rule 13.6 and B.UNEVALEFF.
> 
> Add service B.UNEVALEFF to the accepted.ecl guidelines to check
> "violations" in the weekly analysis.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:14:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:14:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746982.1154222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtp6-0008Dm-V5; Tue, 25 Jun 2024 00:14:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746982.1154222; Tue, 25 Jun 2024 00:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtp6-0008Df-QV; Tue, 25 Jun 2024 00:14:12 +0000
Received: by outflank-mailman (input) for mailman id 746982;
 Tue, 25 Jun 2024 00:14:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLtp4-0008DX-Mm
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:14:10 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d69432ca-3287-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:14:08 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id BB85960D41;
 Tue, 25 Jun 2024 00:14:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id AA5A6C2BBFC;
 Tue, 25 Jun 2024 00:14:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d69432ca-3287-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719274446;
	bh=x1S8/W+lSKFYXLCDtqOh1NiNE2MIjJ7t0qeYWc+LDMY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gd6wiDJmgq0myljXjR2cmiyDWaQfjNrEWuN/2/jsKlpCneXgFgUh3W7sj6kO2/G8J
	 DQolZFKYmjsKbAV873SSWoYTOrr1gzrgucbVlAUCVpGSnu4+XLAba/Y0qS8jvifEez
	 pLoElSteccKj6NgdGvuPExSJ113UqaVVzWcPgmJk7hFVJGrU4gvkyLLXrfU8d/f2mM
	 yq8Uj+0A4Z9M2m6dqXu3wnjAKdV48UpAP8rRLP6A/sFU+gP/9KH0mYvmKtDI4XUxPY
	 wEaEfmk2X1bYsAjyvb/C1fswvNopd6CkFC64aBL7Z+z7Le43IY+sCURbuwAQTemLcI
	 IpEKUPWHSIcbg==
Date: Mon, 24 Jun 2024 17:14:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Federico Serafini <federico.serafini@bugseng.com>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Michal Orzel <michal.orzel@amd.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
In-Reply-To: <917533b5-b79c-4e97-917d-9684993bf423@xen.org>
Message-ID: <alpine.DEB.2.22.394.2406241651400.3870429@ubuntu-linux-20-04-desktop>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com> <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org> <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com> <alpine.DEB.2.22.394.2406201811200.2572888@ubuntu-linux-20-04-desktop>
 <bce5eae2-973d-4d69-bee1-09f9f09dd011@bugseng.com> <alpine.DEB.2.22.394.2406211529130.2572888@ubuntu-linux-20-04-desktop> <917533b5-b79c-4e97-917d-9684993bf423@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 22 Jun 2024, Julien Grall wrote:
> On 21/06/2024 23:34, Stefano Stabellini wrote:
> > > > > Yes, I also think this could be an opportunity to check the pattern
> > > > > but no one has yet been identified to do this.
> > > > 
> > > > I don't think I understand Julien's question and/or your answer.
> > > > 
> > > > Is the question whether someone has done an analysis to make sure this
> > > > patch covers all notifier patterns in the xen codebase?
> > > 
> > > I think Jan and Julien's concerns are about the fact that my patch
> > > takes for granted that all the switch statements are doing the right
> > > thing: someone should investigate the notifier patterns to confirm that
> > > they are handling the different cases correctly.
> > 
> > That's really difficult to do, even for the maintainers of the code in
> > question.
> Sure it will require some work, but so does any violation. However, I thought
> the whole point of MISRA is to improve safety and our code base in general?
> 
> AFAIU, we already have some doubts that some notifiers are correct. So to me it
> seems wrong to add a comment, because while this silences MISRA, it doesn't
> solve the problem in the true spirit.
> 
> > 
> > And by not taking this patch we are exposing ourselves to more safety
> > risks because we cannot make R16.4 blocking.
> > 
> > 
> > > > If so, I expect that you have done an analysis simply by basing this
> > > > patch on the 16.4 violations reported by ECLAIR?
> > > 
> > > The previous version of the patch was based only on the reports of
> > > ECLAIR but Jan said "you left out some patterns, why?".
> > > 
> > > So, this version of the patch adds the comment for all the notifier
> > > patterns I found using git grep "struct notifier_block \*"
> > > (a superset of the ones reported by ECLAIR because some of them are in
> > > files excluded from the analysis or deviated).
> > 
> > I think this patch is a step in the right direction. It doesn't prevent
> > anyone in the community from making expert evaluations on whether the
> > pattern is implemented correctly.
> 
> I am not sure I am reading this correctly. To me you are saying that, for
> MISRA purposes, adding the default case is fine even if we believe some
> notifiers are incorrect. Did I understand right?
> 
> If so it does seem odd, because this is really not solving the violation. You
> are just putting a smoke screen in front in the hope that there are no big
> issues in the code...
> 
> > 
> > Honestly, I don't see another way to make progress on this, except for
> > maybe deviating project-wide "struct notifier_block". But that's
> > conceptually the same thing as this patch.
> 
> I still don't quite understand why you are so eager to hide violations just
> so that you can progress faster on other bits.
> 
> I personally cannot put my name on a patch whose goal is to add a comment
> that no one has verified to be actually true (i.e. that all the cases we care
> about are handled). In particular in the Arm code, because IIRC we already
> identified issues in the notifiers in the past.
> 
> I think it would be worth discussing the approach again in the next MISRA
> meeting.

Yes, good idea, we can discuss tomorrow. I'll write down my thinking
about 16.4 here in the meantime to address your question and hopefully
we can align on the approach to take tomorrow.

As you might remember I supported 16.4 and I was keen on having it in
rules.rst because I believe that this rule makes the code better.

16.4 is about ensuring that every switch that is supposed to handle all
possible parameters actually handles all of them. It is also about
having a default label so that we can handle unexpected parameters,
especially in the case of integer parameters. (In the case of enums we
can check at compile time that all possible values are handled even
without a default label.)

In Xen, some switches are expected to handle all possible parameters,
but some of them are not. Specifically, the "notifier pattern" switches
are by design not expected to handle all possible parameters. Of course
some of them may have to, but the design of the "notifier pattern" is
that you can handle only the subset you care about, and you don't need
to care about all of the possible parameters.

ECLAIR or gcc cannot help us there. Most of these instances don't want
to handle all parameters. We have to remove the violations from the
scan, either by deviation or by adding a default label. Otherwise the
notifier pattern stops us from making more progress.

There are lots of other switches that are not part of the notifier
pattern and are required to handle all possible parameters. I would like
to make sure we enable the checks for these other switches where ECLAIR
and gcc can actually help immediately.

I do realize that some of the notifier pattern switches might want to
handle all parameters, but Bugseng or anyone else looking for simple
improvements is not in a position to tell which ones they are. We need
to wait for a maintainer or expert in the specific code to spot them.
It is not a good idea to delay handling all the remaining, more
interesting 16.4 switches (which is easy) in order to better handle the
notifier pattern (which is hard).

The notifier pattern can be looked at separately later by the relevant
maintainer / interested community members by sending case-by-case
improvements. They cannot be mechanically resolved. My understanding is
that with this patch series committed we would be close to zero
violations for 16.4.



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:16:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:16:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746991.1154231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtrC-0000OE-DL; Tue, 25 Jun 2024 00:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746991.1154231; Tue, 25 Jun 2024 00:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtrC-0000O7-9i; Tue, 25 Jun 2024 00:16:22 +0000
Received: by outflank-mailman (input) for mailman id 746991;
 Tue, 25 Jun 2024 00:16:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=STjk=N3=amd.com=VictorM.Lira@srs-se1.protection.inumbo.net>)
 id 1sLtrA-0000O1-TN
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:16:21 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20611.outbound.protection.outlook.com
 [2a01:111:f403:200a::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 24e475e0-3288-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:16:19 +0200 (CEST)
Received: from PH7P220CA0039.NAMP220.PROD.OUTLOOK.COM (2603:10b6:510:32b::26)
 by CH3PR12MB7499.namprd12.prod.outlook.com (2603:10b6:610:142::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.30; Tue, 25 Jun
 2024 00:16:14 +0000
Received: from CY4PEPF0000EE3E.namprd03.prod.outlook.com
 (2603:10b6:510:32b:cafe::b) by PH7P220CA0039.outlook.office365.com
 (2603:10b6:510:32b::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.38 via Frontend
 Transport; Tue, 25 Jun 2024 00:16:14 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CY4PEPF0000EE3E.mail.protection.outlook.com (10.167.242.16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Tue, 25 Jun 2024 00:16:13 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Mon, 24 Jun
 2024 19:16:12 -0500
Received: from xsjwoods50.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39 via Frontend
 Transport; Mon, 24 Jun 2024 19:16:11 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24e475e0-3288-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NmdHNuamfhd1lf3IrcwJmSHqV+OnPiklhe7+U1VxAN0b76uopsBMJxEnUName88EsmoBz0mXFpMddIRe1uIHxTAweLCatU5TXigof+QsDdxQMWtLzdr9aUvdDV7hoLY3U2c+a018Hv8v8v/zLiJzK1wj8wMFS7RSxEH5mx04IeWU7yoCzaOOQ264LfqBNuLGvuTT0E7+VFydjr4sjR0ySIci0X9gqiTVx27T5sPzlnrrmovE6tOF+ymFRcBnqIt8PZZuwlqfp8+3ium4NAzomoJhpMmq0CPGmBi4Au6vbQHy5Eya3jU63nxSI18WZVzOAHnmOwk1zwAk/ujnPYVsCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=aK6pT63F+/FKNnG3FTGMH16+l8vr5szLZBk4uhshQUc=;
 b=DZm+Pl30ArVMTjBQ77YeMbOutjRx4b4KJxT/pAW3bDRMYOGDTIDfa/4nU3zCj4uKDhwazOA0vYwjRDLrnNxX2qVQBgFo23+paFVZ6Adq1swL2F4NQmQUjAk2ZsO4DMrJo61tZq/u/SIyvZbjuzzwCRY3bgpYL93RbJ65Z7+k9tCHno3DsepohBLrwq+u+NJj4N0XzqcSVqN415FcpHv3dqByXNCnz5CL7VbFEhoTQgGrwb6qp/yR//P9RgMigfIfN+kv2m3yehqeze9GcHFdkCW07ptamr84LNL5XeYJDvMCMnAUQOG+BNP/qa0rD0ainK5H5NoMfA/VLbvFqggsAQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aK6pT63F+/FKNnG3FTGMH16+l8vr5szLZBk4uhshQUc=;
 b=m6uBNtXCQlKjLT0NU45c7/b6biX7qsRpYlTdIk6u2UdLAG/NFwIGNc4Q6+WVUQctyOKqAB6AWfanHz727GbNoHN/0SD1qksLMy1xPHvAPWco/vH1CxAZ4XzvLEgj8IUOS/zxwQwQmYm34ZiIlX57dtwRdNafMIXgX8EfZd9Ft9w=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: <victorm.lira@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Victor Lira <victorm.lira@amd.com>, George Dunlap
	<george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>, "Juergen
 Gross" <jgross@suse.com>
Subject: [XEN PATCH] common/sched: address a violation of MISRA C Rule 17.7
Date: Mon, 24 Jun 2024 17:15:39 -0700
Message-ID: <a5f00432063ead8d4ae09315c1b09617a12b22f7.1719274203.git.victorm.lira@amd.com>
X-Mailer: git-send-email 2.37.6
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Received-SPF: None (SATLEXMB04.amd.com: victorm.lira@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000EE3E:EE_|CH3PR12MB7499:EE_
X-MS-Office365-Filtering-Correlation-Id: bee44850-dd3f-4b27-a2bf-08dc94ac0634
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230037|376011|36860700010|1800799021|82310400023;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?yZPMHY7cn1otF/nUDmmDbFGyB198wylYE1mPP8DziI2JWGaRqIw12SsVvwqI?=
 =?us-ascii?Q?JuAdSqdV4l0x/R9tfif/rgq4uODm5FoGrE30h+wiXhgVMM3FMr/LQtAm4549?=
 =?us-ascii?Q?6NE2Id0QxWnvf3NGD2Mj62hizlrPedlZy5rKA1FoVSvEgP+HKZlVX2BwouxP?=
 =?us-ascii?Q?LxCCj5cUr+r1g3VWoIFdryvRQ6iRiOb6gmJdoMdM6scCrCbT508nlUPAvF1E?=
 =?us-ascii?Q?h1WoWBkEtLriHnn6hxfTPI9iH/FoJqqrdqYo+VsoAc514wN8/O7vC5GeQRA3?=
 =?us-ascii?Q?4KPW572YNHfCR/Q+nJ6fTbz5fXbAIjS+KvheOsj/dQFZ6G/24KbOmU3jK9wy?=
 =?us-ascii?Q?6m+h5n7olRf/0N29SNkIRLaacTE8kiTwMDbVjBfmd24TGVI8Ibtkcuff/Ql8?=
 =?us-ascii?Q?uELVOFXgj9irno41IFJ9R3NPO9RUAjiCPi6dp1mQZCOjcuXw+8ABixVrCw2h?=
 =?us-ascii?Q?E4y36PbYtYQjkiMCcYwTPjVx0WmzFrYCSlLpb4wC8kbbyCnk9yIe1bCMUIca?=
 =?us-ascii?Q?SVlWHzICm4kqPeUvHTdgeda2vP1MmUyPoJUoejkjerOvVdzZuOAQkmF/QMWH?=
 =?us-ascii?Q?y79z7uxDE8wkArCI2m/uqRE7q1rJGhZO5v0rsAnDh/s6HCavxe38stBYGmha?=
 =?us-ascii?Q?elfYjxlIXyP51Qp0U47vNXCI11w2l0thctRAC1ZCnInq5uOwI984T17c/tPJ?=
 =?us-ascii?Q?KSp+CI1uRdJAlVBK1a6wBeXUPq9HcvFBH7oojfBtkFS42wFJJt2No7HMSxdm?=
 =?us-ascii?Q?xnoQCcNYO9V33u8SrZqJLIVKX+1W6qQb59WyejcfxmEFSjBa9rgoVh4jr8oI?=
 =?us-ascii?Q?IkK0P0ka5Iv778vWdbdeXrp067xXuIW/fIxBeJj/dn6i7nbNXK7HBnulYe7G?=
 =?us-ascii?Q?MiH7qSLF2GxB0rLMAaKPfgVsX5pNBx5wgv56aJrY9MKsqHfaZBer8X9ks674?=
 =?us-ascii?Q?ZQ0JhX6PJ+3fJ9edfqBysE85VgcdUx16d8aAbj7t4lQFQxgQz3SC3Hcg4HLv?=
 =?us-ascii?Q?Mjx0LSV9UYQ6i5gzWsvcgw+YoqK8EeMxkmOV/EnSV3a5Ajwz1PNMvdKpyVHK?=
 =?us-ascii?Q?CvbBfdadwn2TeLDe5nPCqoEhU3OkoBP0tj2I58n73Jqu2I3UU4/PQkDUEcjF?=
 =?us-ascii?Q?aU0M9o7P/nf0M9lJ5VGsTuI2HEf95us/plXS/FwztL7gBIfeaDpIvqBSgPga?=
 =?us-ascii?Q?1Ynbt71olcQhE9kULXrYoRQE9tReFh8PtM9uqU9NAzjdNF5X0dOXiJ2Mtzgj?=
 =?us-ascii?Q?/ChQGLcwVnGta54QfIkhKaKcq3L6CkcMobNxXPgsb7MY1cqrZJOEAHyPcsp5?=
 =?us-ascii?Q?d+acRCQFGkxChTKZdIY3/KRsoqe9vkmR2ZgpY7AJ6I3r/NKAQ29O4b8Skc+n?=
 =?us-ascii?Q?J/tohPIqYZ4nTE/fhZgRN9aKaf7qxIzosd6qyhq3wgZ8GUy6fA=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230037)(376011)(36860700010)(1800799021)(82310400023);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 Jun 2024 00:16:13.7916
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bee44850-dd3f-4b27-a2bf-08dc94ac0634
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000EE3E.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB7499

From: Victor Lira <victorm.lira@amd.com>

Rule 17.7: "The value returned by a function having non-void return type
shall be used"

Fix this by checking the return value of vcpu_create().
No functional changes.

Signed-off-by: Victor Lira <victorm.lira@amd.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Dario Faggioli <dfaggioli@suse.com>
Cc: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
---
 xen/common/sched/core.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d84b65f197..e1cd824622 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2789,7 +2789,10 @@ static int cpu_schedule_up(unsigned int cpu)
     BUG_ON(cpu >= NR_CPUS);
 
     if ( idle_vcpu[cpu] == NULL )
-        vcpu_create(idle_vcpu[0]->domain, cpu);
+    {
+        if ( vcpu_create(idle_vcpu[0]->domain, cpu) == NULL )
+            return -ENOMEM;
+    }
     else
         idle_vcpu[cpu]->sched_unit->res = sr;
 
-- 
2.37.6



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:23:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:23:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746999.1154242 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtxP-00026Z-3b; Tue, 25 Jun 2024 00:22:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746999.1154242; Tue, 25 Jun 2024 00:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLtxO-00026S-V6; Tue, 25 Jun 2024 00:22:46 +0000
Received: by outflank-mailman (input) for mailman id 746999;
 Tue, 25 Jun 2024 00:22:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLtxN-00026K-IL
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:22:45 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 08fd6638-3289-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:22:43 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 239A0CE1714;
 Tue, 25 Jun 2024 00:22:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8E03EC2BBFC;
 Tue, 25 Jun 2024 00:22:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08fd6638-3289-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719274956;
	bh=MtJzAN103UKml9zS9D9mBw0ALe9QMyeKjxbf9lqVVHQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=F7D1w3CHhtGffmgIq98vfQfZyYGsOQ+D3LgFZgX4jH7bklJecx5q1StxbrGrlPRSr
	 K+bT4rGrOa9KBxwqwThJIwll6qWaMgYWGXZZfI8eH6ILrKQ+iG0JTx3LYiJT2FZrIo
	 CZ/jXjvsFHmBWJ/9460lrnAmeNUELbZXXMGQ1ingDUr/x7R1nwXF9S6SphxWFOYOTO
	 00IUgWm0U9p75tGTcc98bodXTOfVisED94nKQ6joT8eWBcpiGNinJoFGmMwXdEPA5V
	 ESz/9JV1ebt0Fy1YbMoPi8YwKpoeC2VJJPcXAHeD0+aYOCUk91CFiRl0aDnO5SB6nM
	 +OD5VJoHHeQsg==
Date: Mon, 24 Jun 2024 17:22:34 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, oleksii.kurochko@gmail.com
Subject: Re: [PATCH v3] automation/eclair_analysis: deviate and|or|xor|not
 for MISRA C Rule 21.2
In-Reply-To: <f21ea3734857e0cf26afff00befb179b10d02158.1719213594.git.alessandro.zucchelli@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241721270.3870429@ubuntu-linux-20-04-desktop>
References: <f21ea3734857e0cf26afff00befb179b10d02158.1719213594.git.alessandro.zucchelli@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Alessandro Zucchelli wrote:
> Rule 21.2 reports identifiers reserved for the C and POSIX standard
> libraries: or, and, not and xor are reserved identifiers because they
> constitute alternate spellings for the corresponding operators (they are
> defined as macros by iso646.h); however Xen doesn't use standard library
> headers, so there is no risk of overlap.
> 
> This addresses violations arising from x86_emulate/x86_emulate.c, where
> label statements named as or, and and xor appear.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Hi Oleksii,

I am asking for a release-ack: this patch only adds a deviation, so its
only impact is fewer violations reported by the ECLAIR analysis.
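
For context, a minimal standalone C unit (not the emulator code) showing why
these names only clash if iso646.h is included; without it they are ordinary
identifiers and can serve as labels, as in x86_emulate.c:

```c
/* iso646.h would define `and', `or', `not' and `xor' as macros for &&,
 * ||, ! and ^, making the labels below unusable.  Xen does not include
 * standard library headers, so the names are free. */
static int alu(int op, int a, int b)
{
    int r;

    switch ( op )
    {
    case 0: goto or;
    case 1: goto and;
    default: goto xor;
    }

 or:
    r = a | b;
    goto done;
 and:
    r = a & b;
    goto done;
 xor:
    r = a ^ b;
 done:
    return r;
}
```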


> ---
> Changes from v2:
> Fixed patch contents as the changes from v1 and v2 were not squashed together.
> ---
> Changes from v1:
> Added deviation for 'not' identifier.
> Added explanation of where these identifiers are defined, specifically in the
> 'iso646.h' file of the Standard Library.
> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 9fa9a7f01c..14c7afb39e 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -498,6 +498,12 @@ still remain available."
>  -config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}
>  -doc_end
>  
> +-doc_begin="or, and and xor are reserved identifiers because they constitute alternate
> +spellings for the corresponding operators (they are defined as macros by iso646.h).
> +However, Xen doesn't use standard library headers, so there is no risk of overlap."
> +-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor|not)$)))"}
> +-doc_end
> +
>  -doc_begin="Xen does not use the functions provided by the Standard Library, but
>  implements a set of functions that share the same names as their Standard Library equivalent.
>  The implementation of these functions is available in source form, so the undefined, unspecified
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:27:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:27:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747005.1154251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLu1R-0002fT-I8; Tue, 25 Jun 2024 00:26:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747005.1154251; Tue, 25 Jun 2024 00:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLu1R-0002fM-EE; Tue, 25 Jun 2024 00:26:57 +0000
Received: by outflank-mailman (input) for mailman id 747005;
 Tue, 25 Jun 2024 00:26:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLu1P-0002fG-Tx
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:26:55 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9ecc8b55-3289-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 02:26:53 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 68FBB60A70;
 Tue, 25 Jun 2024 00:26:52 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A9666C2BBFC;
 Tue, 25 Jun 2024 00:26:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ecc8b55-3289-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719275212;
	bh=+IH3mY1gzUzpvw6yySwfM+NnUF2wFABrQJxAcb9DocQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=vQW7h+QsN11yDJLwy2fAwA2W3DT6qk/xgwmScOTwpyEd7TbcWNPQv3AvWmhcAbbtE
	 mEVricUzC3L4MI7QSx2AWXFA3ETy1YANIJ08CZfv+R5mCA0CSVDG0XRI4/J3e3jZsT
	 raCjX8MGUMb6g34gOpTZfpXbvyTjjLvm3EIYdXRQnnmFLUcMh4ues068MpNoBBIjA7
	 Ry40dd6OVPaN+LkSevKSC30C31xfOYupjfIG7EWAAn15OmH+FpFEYY91V+FftN1eNr
	 qLl+fZzRV33M8zhL8CGAV9NYDgqi0zt0mC/HQzfTxnkiVD2dlQRB4Uz5nggiSU+6Zm
	 eJ79j0G2/CvbA==
Date: Mon, 24 Jun 2024 17:26:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Federico Serafini <federico.serafini@bugseng.com>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>, 
    oleksii.kurochko@gmail.com
Subject: Re: [XEN PATCH v2] automation/eclair: configure Rule 13.6 and custom
 service B.UNEVALEFF
In-Reply-To: <alpine.DEB.2.22.394.2406241637380.3870429@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2406241725240.3870429@ubuntu-linux-20-04-desktop>
References: <5c60e98d70ae94c155fd56ec13b764b7a8f6161c.1719219962.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406241637380.3870429@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Stefano Stabellini wrote:
> On Mon, 24 Jun 2024, Federico Serafini wrote:
> > Rule 13.6 states that "The operand of the `sizeof' operator shall not
> > contain any expression which has potential side effects".
> > 
> > Define service B.UNEVALEFF as an extension of Rule 13.6 to
> > check for unevaluated side effects also for the typeof and alignof operators.
> > 
> > Update ECLAIR configuration to deviate uses of BUILD_BUG_ON and
> > alternative_v?call[0-9] for both Rule 13.6 and B.UNEVALEFF.
> > 
> > Add service B.UNEVALEFF to the accepted.ecl guidelines to check
> > "violations" in the weekly analysis.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Hi Oleksii,

I am asking for a release-ack on this rule: it widens the checks done by
ECLAIR, but only for non-blocking rules (rules that do not cause a
gitlab-ci failure). Hence, there should be no effect on gitlab-ci.
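
What the rule catches is easy to demonstrate with a minimal sketch (not Xen
code): an expression with a side effect inside sizeof is never evaluated, so
the side effect silently vanishes (B.UNEVALEFF extends the same check to the
typeof and alignof operators):

```c
#include <stddef.h>

/* The operand of sizeof is not evaluated (except for VLAs), so i++ below
 * never executes -- exactly the surprise Rule 13.6 guards against. */
static size_t surprising_sizeof(int *out_i)
{
    int i = 0;
    size_t s = sizeof(i++);   /* non-compliant: side effect is discarded */

    *out_i = i;               /* still 0: the increment never happened */
    return s;
}
```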

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:38:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:38:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747013.1154261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuCs-00059t-Gs; Tue, 25 Jun 2024 00:38:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747013.1154261; Tue, 25 Jun 2024 00:38:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuCs-00059m-EE; Tue, 25 Jun 2024 00:38:46 +0000
Received: by outflank-mailman (input) for mailman id 747013;
 Tue, 25 Jun 2024 00:38:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuCq-00059g-HP
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:38:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 45d65b69-328b-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:38:43 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id DCD9560AAF;
 Tue, 25 Jun 2024 00:38:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3E8E1C2BBFC;
 Tue, 25 Jun 2024 00:38:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 45d65b69-328b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719275921;
	bh=ktQo16y2Vhka7KTqan+9+Ol8CWV6EDfxWQkNMc43LzQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sYyQcPQi3rTewaJBDUo7c75IOwfbY89SrGatVDjp0XPDdj5LjDlu5BN2xTUJ6XTOo
	 Bnq0ToNJjX0WIG7/fTlS+esyG7f9jds7BpYYqo52Tdszu7KNgGo30zL3QM2ZOs9bCQ
	 6CNiP7H/RiDZJcGMhqpyebPPi4Qu7GW3JWpsrd9bAXFW+el2PbXv3sXlqOdiwb9u6e
	 +9wlEAr9Jk5Mhf4+Pqt/sx8coxU19rHRRQV61rpNCd0F10V1qga0ViCtpzNbuKXiA3
	 zCAg7FnUcdgC92BtY8SkJZroLCV/zIfWJjqV0H9OfxqU2yxBawlu0k1L46mU8g4bcb
	 BeQq4DXZSXExw==
Date: Mon, 24 Jun 2024 17:38:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Federico Serafini <federico.serafini@bugseng.com>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, oleksii.kurochko@gmail.com
Subject: Re: [XEN PATCH v3] automation/eclair: extend existing deviations of
 MISRA C Rule 16.3
In-Reply-To: <alpine.DEB.2.22.394.2406191821310.2572888@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2406241736560.3870429@ubuntu-linux-20-04-desktop>
References: <71a69d25e7889ed6e8546b5cd18d423006d69ceb.1718356683.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406191821310.2572888@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 19 Jun 2024, Stefano Stabellini wrote:
> On Fri, 14 Jun 2024, Federico Serafini wrote:
> > Update ECLAIR configuration to deviate more cases where an
> > unintentional fallthrough cannot happen.
> > 
> > Add Rule 16.3 to the monitored set and tag it as clean for arm.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Hi Oleksii,

I would like to ask for a release-ack: this patch only increases the
deviations for a rule that is non-blocking, hence it only affects the
static analysis jobs.
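
The rule itself is simple to illustrate with a minimal sketch (not Xen code):
every switch clause ends in an unconditional break, so no unintentional
fallthrough can occur:

```c
/* Rule 16.3: every switch clause must end in an unconditional break (or
 * equivalent); intentional fallthrough needs an explicit marker for a
 * deviation to apply.  Here each clause breaks, so the rule is satisfied. */
static int days_in_month(int month)  /* 1..12, non-leap year */
{
    int days;

    switch ( month )
    {
    case 2:
        days = 28;
        break;
    case 4:
    case 6:
    case 9:
    case 11:
        days = 30;
        break;
    default:
        days = 31;
        break;
    }

    return days;
}
```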


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:44:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:44:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747023.1154287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuIf-0006n6-9C; Tue, 25 Jun 2024 00:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747023.1154287; Tue, 25 Jun 2024 00:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuIf-0006mz-6e; Tue, 25 Jun 2024 00:44:45 +0000
Received: by outflank-mailman (input) for mailman id 747023;
 Tue, 25 Jun 2024 00:44:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuIe-0006mj-7I
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:44:44 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b3af71d-328c-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:44:42 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 5B937CE100E;
 Tue, 25 Jun 2024 00:44:36 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9C698C2BBFC;
 Tue, 25 Jun 2024 00:44:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b3af71d-328c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276275;
	bh=Vtkpy9kOQr6ZNYLWY48gW0OGdAAuqTE3Gsz6e82YGlo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Hg7F2epKwAKauKNof0vwehn6NApPxrhq00BZVae16pCdc96H2hiFqPRf96L+9SmoS
	 si+dS6drD5Jq4SaLVI1/UOt4nI/tZ/jUNHBbitJDGJQhqxq1G8YDBoaVk20PNzAxml
	 4TEk2u7Ad0mmJALO7DUeieFHhhZn/BXYdRTqJIXvGLhk4mPeT3uChGKneFGltp07T0
	 o+0SBK5eDRtUD31nmdt+ryRIaYTWus9m2TCnuTOxqcnX91UrOrE/rOxZQN3cy1XSob
	 nb81Sy71a0j/giwSe9cKfJ00EaD5Q27+mD02p/Wqbh2ZMtLOjUUY0f3e1OnQTaOO5M
	 U5e/hg45sGbjw==
Date: Mon, 24 Jun 2024 17:44:33 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [XEN PATCH v2 6/6][RESEND] automation/eclair_analysis: clean
 ECLAIR configuration scripts
In-Reply-To: <120e7e4579b931c08d28d0a96848af1df7a07f7d.1718378539.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241744270.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com> <120e7e4579b931c08d28d0a96848af1df7a07f7d.1718378539.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> Remove from the ECLAIR integration scripts an unused option, which
> was already ignored, and make the help texts consistent
> with the rest of the scripts.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/eclair_analysis/ECLAIR/analyze.sh | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/analyze.sh b/automation/eclair_analysis/ECLAIR/analyze.sh
> index 0ea5520c93a6..e96456c3c18d 100755
> --- a/automation/eclair_analysis/ECLAIR/analyze.sh
> +++ b/automation/eclair_analysis/ECLAIR/analyze.sh
> @@ -11,7 +11,7 @@ fatal() {
>  }
>  
>  usage() {
> -  fatal "Usage: ${script_name} <ARM64|X86_64> <Set0|Set1|Set2|Set3>"
> +  fatal "Usage: ${script_name} <ARM64|X86_64> <accepted|monitored>"
>  }
>  
>  if [[ $# -ne 2 ]]; then
> @@ -40,7 +40,6 @@ ECLAIR_REPORT_LOG=${ECLAIR_OUTPUT_DIR}/REPORT.log
>  if [[ "$1" = "X86_64" ]]; then
>    export CROSS_COMPILE=
>    export XEN_TARGET_ARCH=x86_64
> -  EXTRA_ECLAIR_ENV_OPTIONS=-disable=MC3R1.R20.7
>  elif [[ "$1" = "ARM64" ]]; then
>    export CROSS_COMPILE=aarch64-linux-gnu-
>    export XEN_TARGET_ARCH=arm64
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:45:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:45:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747028.1154301 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuJ3-0007Hm-Iq; Tue, 25 Jun 2024 00:45:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747028.1154301; Tue, 25 Jun 2024 00:45:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuJ3-0007Hf-FP; Tue, 25 Jun 2024 00:45:09 +0000
Received: by outflank-mailman (input) for mailman id 747028;
 Tue, 25 Jun 2024 00:45:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuJ2-0006mj-9K
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:45:08 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2ac1d045-328c-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:45:07 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 0865D60F80;
 Tue, 25 Jun 2024 00:45:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3B91CC32786;
 Tue, 25 Jun 2024 00:45:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ac1d045-328c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276305;
	bh=g7R6x5apNxQHB7old6xqfKbgGuO0ogJzumML+KWlAVw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=W4p/dkt00RNJEju2ekiUJnywHk9syXAokFV6VIJADdHEYS0ljj0ZmmbgFxCnAALju
	 4pucPlrt/RkEyZPrnR8KoBsXdEWBZRNoZ8eUuM761U4l1nXrtt98HaP2Qb+kVMups4
	 seIGGV8zOexLHD59AWPlq7xOkiGfPF0S/vSTh+/J7ePsGnLxRGD1uD1xlL3Pe11HE2
	 7hZANNWSrFiQajNW51zW7fYJITmr3mNGus+fOYUcFvMwdcqzOlBmS1XrIC6+cDicdu
	 9wndxpdo/+sLM1y+PVuAZeaakZGNIf5jcw+8bCk9C/IM788GITb+V4T57RWblsnsMS
	 nqIyk3TucVFAA==
Date: Mon, 24 Jun 2024 17:45:02 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH v2 4/6][RESEND] automation/eclair_analysis: address
 violations of MISRA C Rule 20.7
In-Reply-To: <dfebde9cc657f2669df60b08ca34352288e082ab.1718378539.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241744540.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com> <dfebde9cc657f2669df60b08ca34352288e082ab.1718378539.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses".
> 
> The local helpers GRP2 and XADD in the x86 emulator use their first
> argument as the constant expression for a case label. This pattern
> is deviated project-wide, because it is very unlikely to induce
> developer confusion and result in the wrong control flow being
> carried out.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
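
A hypothetical sketch of the pattern being deviated (these are not the real
GRP2/XADD definitions): the macro's first parameter becomes the constant
expression of a case label, where extra parentheses would be pure noise and
misuse is unlikely, so the pattern is deviated rather than "fixed":

```c
/* Hypothetical helper: `n' is expanded unparenthesized as a case label
 * constant, the situation covered by the new Rule 20.7 deviation. */
#define OP_CASE(n, expr)  case n: r = (expr); break

static int dispatch(int op, int a, int b)
{
    int r = 0;

    switch ( op )
    {
        OP_CASE(0, a + b);
        OP_CASE(1, a - b);
        default:
            break;
    }

    return r;
}
```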


> ---
> Changes in v2:
> - Introduce a deviation instead of adding parentheses
> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++--
>  docs/misra/deviations.rst                        | 3 ++-
>  2 files changed, 6 insertions(+), 3 deletions(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index c2698e7074aa..fc248641dc78 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -428,13 +428,15 @@ unexpected result when the structure is given as argument to a sizeof() operator
>  
>  -doc_begin="Code violating Rule 20.7 is safe when macro parameters are used: (1)
>  as function arguments; (2) as macro arguments; (3) as array indices; (4) as lhs
> -in assignments; (5) as initializers, possibly designated, in initalizer lists."
> +in assignments; (5) as initializers, possibly designated, in initalizer lists;
> +(6) as the constant expression in a switch clause label."
>  -config=MC3R1.R20.7,expansion_context=
>  {safe, "context(__call_expr_arg_contexts)"},
>  {safe, "left_right(^[(,\\[]$,^[),\\]]$)"},
>  {safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(array_subscript_expr), subscript)))"},
>  {safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(operator(assign), lhs)))"},
> -{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(init_list_expr||designated_init_expr), init)))"}
> +{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(init_list_expr||designated_init_expr), init)))"},
> +{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(case_stmt), lower||upper)))"}
>  -doc_end
>  
>  -doc_begin="Violations involving the __config_enabled macros cannot be fixed without
> diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> index 36959aa44ac9..be2cc6bf03eb 100644
> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -376,7 +376,8 @@ Deviations related to MISRA C:2012 Rules:
>         (2) as macro arguments;
>         (3) as array indices;
>         (4) as lhs in assignments;
> -       (5) as initializers, possibly designated, in initalizer lists.
> +       (5) as initializers, possibly designated, in initalizer lists;
> +       (6) as constant expressions of switch case labels.
>       - Tagged as `safe` for ECLAIR.
>  
>     * - R20.7
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:45:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:45:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747036.1154310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuJj-0007vH-UW; Tue, 25 Jun 2024 00:45:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747036.1154310; Tue, 25 Jun 2024 00:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuJj-0007vA-Rk; Tue, 25 Jun 2024 00:45:51 +0000
Received: by outflank-mailman (input) for mailman id 747036;
 Tue, 25 Jun 2024 00:45:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuJi-0006mj-0W
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:45:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 43b65cec-328c-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:45:49 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 28C9F6035D;
 Tue, 25 Jun 2024 00:45:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C7486C2BBFC;
 Tue, 25 Jun 2024 00:45:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43b65cec-328c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276347;
	bh=aaxf+ji/ptweHJPtDnQS2q/m4wnxpBHymjbwKJ1gOTg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=s1oXxf6snDxBlc80Z6iQ2pYo50e5nSTML37+5/iaj6P5NA2RqMIbGjejezlUrK4FQ
	 sbi9awcAuF1B4RP7jVQWrw/jC9nxpeSnionGs8YYaaFnH1u3sShRJs0+ocKwzejzth
	 FJfJA02WYQclq+IvpjlPH1QHfaiKa0zM+hkkHGa5WnRyVUd/6Nnz8EiicxpNAY289s
	 OxCEJE1KFON2kx8+57BM3nVUj7ynMsqf0S7oSv92RKIHEEG+vpEUxkww1XoTDF6Xmg
	 9UUEqCV53Pu7qL2mQqIOtVwA+m1CSocxRp+Tn3/X9MG/0/39j0cEm9kuDhjnlO5K/X
	 M16aYDWIokFfw==
Date: Mon, 24 Jun 2024 17:45:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [XEN PATCH v2 1/6][RESEND] automation/eclair: address violations
 of MISRA C Rule 20.7
In-Reply-To: <af4b0512eb52be99e37c9c670f98967ca15c68ac.1718378539.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241745140.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com> <af4b0512eb52be99e37c9c670f98967ca15c68ac.1718378539.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses".
> 
> The helper macro bitmap_switch has parameters that cannot be
> parenthesized to comply with the rule, as doing so would break its
> functionality. Moreover, the risk of misuse due to developer confusion
> is deemed not substantial enough to warrant a more involved refactor,
> so the macro is deviated for this rule.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

I would have preferred a SAF tag instead, but that can be done later.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index 447c1e6661d1..c2698e7074aa 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -463,6 +463,14 @@ of this macro do not lead to developer confusion, and can thus be deviated."
>  -config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(^count_args_$))))"}
>  -doc_end
>  
> +-doc_begin="The arguments of the bitmap_switch macro can't be parenthesized as
> +the rule would require, without breaking the functionality of the macro. This is
> +a specialized local helper macro only used within the bitmap.h header, so it is
> +less likely to lead to developer confusion and it is deemed better to deviate it."
> +-file_tag+={xen_bitmap_h, "^xen/include/xen/bitmap\\.h$"}
> +-config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(loc(file(xen_bitmap_h))&&^bitmap_switch$))))"}
> +-doc_end
> +
>  -doc_begin="Uses of variadic macros that have one of their arguments defined as
>  a macro and used within the body for both ordinary parameter expansion and as an
>  operand to the # or ## operators have a behavior that is well-understood and
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:47:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:47:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747044.1154321 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuLb-00009k-99; Tue, 25 Jun 2024 00:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747044.1154321; Tue, 25 Jun 2024 00:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuLb-00009d-5p; Tue, 25 Jun 2024 00:47:47 +0000
Received: by outflank-mailman (input) for mailman id 747044;
 Tue, 25 Jun 2024 00:47:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuLZ-0008VI-5p
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:47:45 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 86912975-328c-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 02:47:43 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id E3741CE100E;
 Tue, 25 Jun 2024 00:47:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 99AD8C2BBFC;
 Tue, 25 Jun 2024 00:47:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86912975-328c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276459;
	bh=TvSvl6foj/py4SPyiCAX0s204I5R26HuqXoCLChQZqk=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CL56zmMmRGyJ1X488Gh+9pE7qMC6zphMvr/zTZLJbmdm3q3Re2aKwsSeH6gELbgbV
	 lIN8KbznHkQGa9rAtkNuu5bRrsOOAlJj7YbmO9yq9oF6xrAGlcakBDacgAaJOD9Hqi
	 I5urP5jRrEN+xhtPLBSr5v2kRR+k8OgHLmV69PbmzGsIaMZbQJNWWonpZYgQQC7uQo
	 y0YRFpmOoXWLwelOMmR32uEGMuyre5oKZGH2p/cDVP/g0rbenxnKvrfUFciAWqFJZB
	 RPtBPorCL/SDIXUxMK2cZhe7KgI3BlrjnZ8ZzxUUMn8ahlLmuB12VJF92FRhcH3cSe
	 SUoeuv2fA5hBA==
Date: Mon, 24 Jun 2024 17:47:36 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, oleksii.kurochko@gmail.com
Subject: Re: [XEN PATCH v2 0/6][RESEND] address violations of MISRA C Rule
 20.7
In-Reply-To: <cover.1718378539.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi Oleksii,

I would like to ask for a release-ack as the patch series makes very few
changes outside of the static analysis configuration. The few changes to
the Xen code are limited and straightforward, and they make the code
better; see patches #3 and #5.


On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> Hi all,
> 
> this series addresses several violations of Rule 20.7, as well as a
> small fix to the ECLAIR integration scripts that does not influence
> the current behaviour, but removes settings that were mistakenly part
> of the upstream configuration.
> 
> Note that, even with this series applied, the rule has a few leftover
> violations. Most of those are in x86 code in xen/arch/x86/include/asm/msi.h .
> I did send a patch [1] to deal with those, limited to addressing the MISRA
> violations, but it was dropped by agreement in favour of a more general
> cleanup of the file, which is why those changes are not included here.
> 
> [1] https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/
> 
> Changes in v2:
> - refactor patch 4 to deviate the pattern, instead of fixing the violations
> - The series has been resent because I forgot to properly Cc the mailing list
> 
> Nicola Vetrini (6):
>   automation/eclair: address violations of MISRA C Rule 20.7
>   xen/self-tests: address violations of MISRA rule 20.7
>   xen/guest_access: address violations of MISRA rule 20.7
>   automation/eclair_analysis: address violations of MISRA C Rule 20.7
>   x86/irq: address violations of MISRA C Rule 20.7
>   automation/eclair_analysis: clean ECLAIR configuration scripts
> 
>  automation/eclair_analysis/ECLAIR/analyze.sh     |  3 +--
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 14 ++++++++++++--
>  docs/misra/deviations.rst                        |  3 ++-
>  xen/include/xen/guest_access.h                   |  4 ++--
>  xen/include/xen/irq.h                            |  2 +-
>  xen/include/xen/self-tests.h                     |  8 ++++----
>  6 files changed, 22 insertions(+), 12 deletions(-)
> 
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:50:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:50:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747050.1154331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuNg-0001Bq-Lm; Tue, 25 Jun 2024 00:49:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747050.1154331; Tue, 25 Jun 2024 00:49:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuNg-0001Bj-IC; Tue, 25 Jun 2024 00:49:56 +0000
Received: by outflank-mailman (input) for mailman id 747050;
 Tue, 25 Jun 2024 00:49:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuNf-0001Bd-Ek
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:49:55 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d5a882dc-328c-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:49:54 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 02B6160AE9;
 Tue, 25 Jun 2024 00:49:53 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B628EC32786;
 Tue, 25 Jun 2024 00:49:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5a882dc-328c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276592;
	bh=c5c/ZQcuHnqkh/VfHe/M6Q5pNg7S+63DmfD3oo9iCE8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=CBqqpKuT0GNTZtV39nq7VDLGIMqQ3ESkMMd1QyJg03Envf0u+BesrXI5tloDJZNVW
	 iL7dnqxK4Si5dcQWNmOPfwXVgkbLEB8qwsxOuCtd18DvbjWOjURBuVgfSVaN4Gij8t
	 3PELo6+EJwQRX9HrUqgWRLiCqh16X0oleX1ZunRko2Zg0YeVkB0D/Ujijh/S0GjEpF
	 aBvX5mbV1qNS5qqvSe5sjPdX5YGWCKuhmEkFigWfohgu+AldJH2tcrLmEISrvM3ZjB
	 xrrib/Dm3H4OO6k96iEXFV1Q5m2WGvhw5Wh5zs2z2cISVjBMBgtmwittiskwKKCkjH
	 M1ZlL8Pg/jp3w==
Date: Mon, 24 Jun 2024 17:49:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Jan Beulich <jbeulich@suse.com>
Subject: Re: [XEN PATCH v2 01/13] automation/eclair: fix deviation of MISRA
 C Rule 16.3
In-Reply-To: <c43a32405cc949ef5bf26a2ca1d1cc7ee7f5e664.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241749400.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <c43a32405cc949ef5bf26a2ca1d1cc7ee7f5e664.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Escape the final dot of the comment and extend the search for a
> fallthrough comment up to 2 lines after the last statement.
> 
> Fixes: a128d8da913b21eff6c6d2e2a7d4c54c054b78db "automation/eclair: add deviations for MISRA C:2012 Rule 16.3"
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:50:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:50:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747055.1154340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuOY-0002Z8-Sw; Tue, 25 Jun 2024 00:50:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747055.1154340; Tue, 25 Jun 2024 00:50:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuOY-0002Z1-QD; Tue, 25 Jun 2024 00:50:50 +0000
Received: by outflank-mailman (input) for mailman id 747055;
 Tue, 25 Jun 2024 00:50:49 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuOX-0002Yr-RO
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:50:49 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f41574ab-328c-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:50:48 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id E6EF6CE1688;
 Tue, 25 Jun 2024 00:50:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id D1EA4C4AF09;
 Tue, 25 Jun 2024 00:50:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f41574ab-328c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276639;
	bh=g1K4RdYHXaz2V3p1JqSEVDirYJDWLK8jQPlYavoXfxE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DRDaypmUMX67NCmLbNytnmAOridWyFSKXwjumIqkcB0suAC9cGEaRi/0LZpx0Ukin
	 lvrrVQaqVQX2kznaFi/dHg6IWSZqdWcu0y1q1Zv2WIEqs2OlZ4UxULb82bPPePFcCU
	 Eq89cw02Be8H1dG2xpQdChQx/a7Gc8wfKZhAfr9+LoI6j+KtRbXYr6kHiWFNmMVBFJ
	 bj69EWA9HI4cRyNY6xaovdTmav2RxQKkF+LaiZiQYpMnYKsxK8pm8NZt5xOF0MGs/3
	 v+lKLvz5WqJQEpUrk4dBn1C6wlHgLIe2f3/UkhQTCzDdY0kRxLPUkiTNfJE0Gskf4B
	 ob+aneJ6TuaYA==
Date: Mon, 24 Jun 2024 17:50:37 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 02/13] x86/cpuid: use fallthrough pseudo keyword
In-Reply-To: <58f1ff7e94fd2bd5290a555e44d9de0d2f515eda.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241750320.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <58f1ff7e94fd2bd5290a555e44d9de0d2f515eda.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> The current comment making the fallthrough intention explicit does
> not follow the agreed syntax: replace it with the pseudo keyword.
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/x86/cpuid.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
> 
> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
> index a822e80c7e..2a777436ee 100644
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -97,9 +97,8 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
>          if ( is_viridian_domain(d) )
>              return cpuid_viridian_leaves(v, leaf, subleaf, res);
>  
> +        fallthrough;
>          /*
> -         * Fallthrough.
> -         *
>           * Intel reserve up until 0x4fffffff for hypervisor use.  AMD reserve
>           * only until 0x400000ff, but we already use double that.
>           */
> -- 
> 2.34.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:51:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747061.1154351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuPO-00036C-4v; Tue, 25 Jun 2024 00:51:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747061.1154351; Tue, 25 Jun 2024 00:51:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuPO-000365-1u; Tue, 25 Jun 2024 00:51:42 +0000
Received: by outflank-mailman (input) for mailman id 747061;
 Tue, 25 Jun 2024 00:51:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuPN-00035t-6F
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:51:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 14426f62-328d-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 02:51:39 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id C6A2460F80;
 Tue, 25 Jun 2024 00:51:37 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id B22A9C32789;
 Tue, 25 Jun 2024 00:51:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14426f62-328d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276697;
	bh=ArrrdTb++un2m2ZRivmuzKKNl1Est0Hjhb/CIa5vSKM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kpKRD+jKmeIyYb4TyWP0JF9d2/QnpB5cv1iZEJ0hh3SUac1vP/QmOVlveozKoazap
	 /3cl3BZGjR3xMaP7P6PexSWyGkfefHu2d53Lr1IUk1I1Vm3nCeLCfZBsB5cNwnPDpz
	 q6UEpL76g8we1PaKPOnD6Fg6HVCJwR7z9S9TKDC1J4VTQLNkqr5ZfS3QR/gwhAU+dK
	 y+GofDDnlOdQwYDWCfSm9qyV9kP/aX+01xRsEFCEkipj3lmz8NOkqz9OFGFruUrx2z
	 cgk5SkH0uaCBmnKmaab32Lt0kgloomuMVRiDIptrtYvk+LQsRJ1qmygPu52be7zUOs
	 nyM3pMcttB8dA==
Date: Mon, 24 Jun 2024 17:51:35 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 03/13] x86/domctl: address a violation of MISRA
 C Rule 16.3
In-Reply-To: <d46b484c99f858d7bfd10c6956a88ba46ac60815.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241751290.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <d46b484c99f858d7bfd10c6956a88ba46ac60815.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add a missing break statement to address a violation of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/arch/x86/domctl.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
> index 9190e11faa..68b5b46d1a 100644
> --- a/xen/arch/x86/domctl.c
> +++ b/xen/arch/x86/domctl.c
> @@ -517,6 +517,7 @@ long arch_do_domctl(
>  
>          default:
>              ret = -ENOSYS;
> +            break;
>          }
>          break;
>      }
> -- 
> 2.34.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:52:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:52:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747073.1154361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuQQ-0003fb-DP; Tue, 25 Jun 2024 00:52:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747073.1154361; Tue, 25 Jun 2024 00:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuQQ-0003fU-Ak; Tue, 25 Jun 2024 00:52:46 +0000
Received: by outflank-mailman (input) for mailman id 747073;
 Tue, 25 Jun 2024 00:52:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuQP-0003UF-3e
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:52:45 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3b27082e-328d-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:52:44 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 4E8F760B00;
 Tue, 25 Jun 2024 00:52:43 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3BFFFC2BBFC;
 Tue, 25 Jun 2024 00:52:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b27082e-328d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276763;
	bh=6siFbJ4X+nrgAxPqWslw3e+YY1OFlb3CIAc1mmesVy8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=SWT7ZT36RYACf+vJaplXLxs9uPojd39ZvpDR8eaaZzjpVGJgnOlEwprdwHYBTeICO
	 1FO2demVxhtB+1y6NTG9Lh/DJEkLHoDKUHPQXWkUVN6iwsCf67ZPZvdi33s2xUhaRm
	 dde6Z0cMJleiNVLhqVSNJvIGOpEr8jiEaO29LXhfaO6/2E0gnCaevgmlzQlGlxkKIP
	 k5j97C/vRQdFw3ECLhkT3tGwvdbGdNxBXRv1WukTs8KO+8rThXcc2zG3rdvwZUucBt
	 QblDEm6QP+LJ2v2GuJIga99MxViNZwMLo2WdMkCrbjtaM1o/SBfMsR9wo4EFGC+3sc
	 c4IWdMPD9YglQ==
Date: Mon, 24 Jun 2024 17:52:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 04/13] x86/vpmu: address violations of MISRA C
 Rule 16.3
In-Reply-To: <c45b27a08a1608de85e4bbae80763f8429d40ad5.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241752300.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <c45b27a08a1608de85e4bbae80763f8429d40ad5.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add missing break statements to address violations of MISRA C Rule
> 16.3: "An unconditional `break' statement shall terminate every
> switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:54:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747078.1154370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuRz-0004Cg-NA; Tue, 25 Jun 2024 00:54:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747078.1154370; Tue, 25 Jun 2024 00:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuRz-0004CZ-K5; Tue, 25 Jun 2024 00:54:23 +0000
Received: by outflank-mailman (input) for mailman id 747078;
 Tue, 25 Jun 2024 00:54:22 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuRy-0004CT-Gq
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:54:22 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7444e6b5-328d-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:54:21 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id D3291CE009F;
 Tue, 25 Jun 2024 00:54:18 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 58942C2BBFC;
 Tue, 25 Jun 2024 00:54:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7444e6b5-328d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276858;
	bh=NXN1BhwIOh5OLWFV//jAMfvC42Gi+qhFna9VSGGA4/g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=O3g8xvayeg6NbwYVGzaxXCtwFtQIDAeFLpOe0qnpU/XUaEVRdxAi05kysgbqLKchz
	 pB316b+RRQ9UIQOkrS6RVdyQVDt3tzDGDk9cuL13JeEIKkX33FI4yfFECtmjSuwm0g
	 lOMJ1WCdhviLXISUlPQSFmfUQWZENU1vMLfUPXsX6OXkBzMM+EJghklabNkz7niCqQ
	 /RwS18khkwooc+mh6VPodHck4F5ibHrWobBK81RJ8KO76q3x0+9t78b2brrApFEnP2
	 BbG5+C13zT3XXYsmYjxf6klG2qohxC0d3nVCw7qzRX7XKkpA/62gRIbsu/Ml0xKO7X
	 asyvgINOmjFQg==
Date: Mon, 24 Jun 2024 17:54:16 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
In-Reply-To: <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add break statements or the fallthrough pseudo keyword to address violations of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
>  xen/arch/x86/traps.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 9906e874d5..cbcec3fafb 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>  
>      default:
>          ASSERT_UNREACHABLE();
> +        break;

Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
statements" that can terminate a case, in addition to break.


>      }
>  }
>  
> @@ -1748,6 +1749,7 @@ static void io_check_error(const struct cpu_user_regs *regs)
>      {
>      case 'd': /* 'dom0' */
>          nmi_hwdom_report(_XEN_NMIREASON_io_error);
> +        fallthrough;
>      case 'i': /* 'ignore' */
>          break;
>      default:  /* 'fatal' */
> @@ -1768,6 +1770,7 @@ static void unknown_nmi_error(const struct cpu_user_regs *regs,
>      {
>      case 'd': /* 'dom0' */
>          nmi_hwdom_report(_XEN_NMIREASON_unknown);
> +        fallthrough;
>      case 'i': /* 'ignore' */
>          break;
>      default:  /* 'fatal' */

These two are nice improvements


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:55:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:55:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747084.1154381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuSo-0004hz-Vy; Tue, 25 Jun 2024 00:55:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747084.1154381; Tue, 25 Jun 2024 00:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuSo-0004hs-TM; Tue, 25 Jun 2024 00:55:14 +0000
Received: by outflank-mailman (input) for mailman id 747084;
 Tue, 25 Jun 2024 00:55:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuSm-0004CT-W6
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:55:12 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 935c547d-328d-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:55:12 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 4777F60B00;
 Tue, 25 Jun 2024 00:55:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 331C1C2BBFC;
 Tue, 25 Jun 2024 00:55:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 935c547d-328d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719276911;
	bh=/9xor8CXdhlq6PrtxGlhUTc6sfGmqbSL/B6lshBYNvw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=c1ANsOKXWxzMMV+0PvBY1kNQWOK++xhqD9WmA9DP9Xbm0+7vIJ1wDxg6P/JJKQc5q
	 4+YJiJggGlBEJp/MsEtvq4iCrtAIi4VFaWHBh3c12uSaO/7TiwQMJ0B/ZEyt2bF39u
	 GtJA4RySd2aKtF1+utyA0fYoV3tlDihElcg7Sa5iK9q1qZRPLXV54WoB262kH6LKpF
	 pdv7C5fnjPN4LyZ0K9YYpH9H5gixqB+P8ulLDUOsYVUg4ckxlNq3MrSeFxzv/fCCpE
	 Gubu4oFPKtncAJoV8zj+l3Rgw66eQW24ARxcLGzQCll38rZ/cIscTsIulFTGYz+8vS
	 fZb5vx0Kqb4Qg==
Date: Mon, 24 Jun 2024 17:55:08 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 06/13] x86/mce: address violations of MISRA C Rule
 16.3
In-Reply-To: <c1015fb8f39d3a44732ccdebb8ba11392dbe4aa8.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241755020.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <c1015fb8f39d3a44732ccdebb8ba11392dbe4aa8.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add missing break statements to address violations of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:57:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:57:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747091.1154391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuUw-0005Nz-Eb; Tue, 25 Jun 2024 00:57:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747091.1154391; Tue, 25 Jun 2024 00:57:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuUw-0005Ns-Be; Tue, 25 Jun 2024 00:57:26 +0000
Received: by outflank-mailman (input) for mailman id 747091;
 Tue, 25 Jun 2024 00:57:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuUv-0005Nl-Ne
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:57:25 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e19ab8c7-328d-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 02:57:23 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 20AF96035D;
 Tue, 25 Jun 2024 00:57:22 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 156AAC2BBFC;
 Tue, 25 Jun 2024 00:57:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e19ab8c7-328d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719277041;
	bh=pW78CccedCZsn3F7jeWfItOZHI7pswqs8KkCf56uIvY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=FZEIo7tHozttVO9hNhHuEswSH46JM3wPdowfuyN6ULa89HG0QB0bR6ou1hGK+u+9L
	 JwmCyYwmvBTSsddT8qDIQcqtqkpettNF25k/DQMG8bitIwO7czH3ArPhF6sJggqmM3
	 2Vgurr1zCaEGG+cmLrybMf8CuDYuuCxzsdxsbWSjSyb1pw5ql3c8y+/1dNRnUYSnvD
	 RUaAQBzvZkXERWybVgOZQnxPaDCVgDFgkCViX2h+Q0Kr39nh2Jw6JxwhFOm3AkS7es
	 Fn9d7h+pfVpGpXJrqjGjwYVpB+wOWaaKYRM+DEYyrVV8u8sGDMlta8JqiyT1+SM0Tv
	 gR6dq123E+FMw==
Date: Mon, 24 Jun 2024 17:57:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 07/13] x86/hvm: address violations of MISRA C Rule
 16.3
In-Reply-To: <a20efca7042ea8f351516ea521edccd89b475929.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241755480.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <a20efca7042ea8f351516ea521edccd89b475929.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> MISRA C Rule 16.3 states that "An unconditional `break' statement shall
> terminate every switch-clause".
> 
> Add pseudo keyword fallthrough or missing break statement
> to address violations of the rule.
> 
> As a defensive measure, return -EOPNOTSUPP in case an unreachable
> return statement is reached.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
> Changes in v2:
> - replace hyphenated fallthrough with the pseudo keyword.
> ---
>  xen/arch/x86/hvm/emulate.c   | 9 ++++++---
>  xen/arch/x86/hvm/hvm.c       | 6 ++++++
>  xen/arch/x86/hvm/hypercall.c | 1 +
>  xen/arch/x86/hvm/irq.c       | 1 +
>  4 files changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
> index 02e378365b..859c21a5ab 100644
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -339,7 +339,7 @@ static int hvmemul_do_io(
>      }
>      case X86EMUL_UNIMPLEMENTED:
>          ASSERT_UNREACHABLE();
> -        /* Fall-through */
> +        fallthrough;
>      default:
>          BUG();
>      }
> @@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>  
>      default:
>          ASSERT_UNREACHABLE();
> +        break;

same here


>      }
>  
>      if ( hvmemul_ctxt->ctxt.retire.singlestep )
> @@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>          /* fallthrough */
>      default:
>          hvm_emulate_writeback(&ctxt);
> +        break;
>      }
>  
>      return rc;
> @@ -2799,10 +2801,11 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>          memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data,
>                 hvio->mmio_insn_bytes);
>      }
> -    /* Fall-through */
> +    fallthrough;
>      default:
>          ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
>          rc = hvm_emulate_one(&ctx, VIO_no_completion);
> +        break;
>      }
>  
>      switch ( rc )
> @@ -2818,7 +2821,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>      case X86EMUL_UNIMPLEMENTED:
>          if ( hvm_monitor_emul_unimplemented() )
>              return;
> -        /* fall-through */
> +        fallthrough;
>      case X86EMUL_UNHANDLEABLE:
>          hvm_dump_emulation_state(XENLOG_G_DEBUG, "Mem event", &ctx, rc);
>          hvm_inject_hw_exception(trapnr, errcode);
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 7f4b627b1f..c263e562ff 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4919,6 +4919,8 @@ static int do_altp2m_op(
>  
>      default:
>          ASSERT_UNREACHABLE();
> +        rc = -EOPNOTSUPP;
> +        break;

same here


>      }
>  
>   out:
> @@ -5020,6 +5022,8 @@ static int compat_altp2m_op(
>  
>      default:
>          ASSERT_UNREACHABLE();
> +        rc = -EOPNOTSUPP;
> +        break;

same here


>      }
>  
>      return rc;
> @@ -5283,6 +5287,8 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
>           * %cs and %tr are unconditionally present.  SVM ignores these present
>           * bits and will happily run without them set.
>           */
> +        fallthrough;
> +
>      case x86_seg_cs:
>          reg->p = 1;
>          break;
> diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
> index 7fb3136f0c..2271afe02a 100644
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
>      case 8:
>          eax = regs->rax;
>          /* Fallthrough to permission check. */
> +        fallthrough;
>      case 4:
>      case 2:
>          if ( currd->arch.monitor.guest_request_userspace_enabled &&
> diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
> index 210cebb0e6..1eab44defc 100644
> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -282,6 +282,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
>              __hvm_pci_intx_assert(d, pdev, pintx);
>          else
>              __hvm_pci_intx_deassert(d, pdev, pintx);
> +        break;
>      default:
>          break;
>      }
> -- 
> 2.34.1
> 
> 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:58:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747097.1154401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuVr-0006e2-No; Tue, 25 Jun 2024 00:58:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747097.1154401; Tue, 25 Jun 2024 00:58:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuVr-0006dv-Kt; Tue, 25 Jun 2024 00:58:23 +0000
Received: by outflank-mailman (input) for mailman id 747097;
 Tue, 25 Jun 2024 00:58:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuVq-0006do-Uv
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:58:22 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 02c21949-328e-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 02:58:20 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id E4028CE0B3C;
 Tue, 25 Jun 2024 00:58:15 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7F088C2BBFC;
 Tue, 25 Jun 2024 00:58:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02c21949-328e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719277095;
	bh=vyty9uCC29Pd1HPx/z4nINLebrWE8/YCMSiORS1iNbs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=PqSX/6dx8rhtt4wM9wKRoXlNQofVUEPQOwN+rPEt6v+OoyOjgaO1Be9lTr5ZkScHQ
	 qczSwN4HNu4Y4k15daqsu2XXZkkA0OIt06Pb6OV7DIQ/oP95W8ypYRw0dyoc0b5p7s
	 z9H3X4jYPUZmwpUE2XbgEaBQ/fcOcuxg/s7+jGmXBt4MYPNnctOMBQyQIsBoqVfOAz
	 N2OWA2CaOSs3+HWXntDQ0JS5PZPa8U+fvMbR8K5nEAFEax8/nhlk1zbCb/qG+myhCR
	 OwIqXmzG4OEw/6I/RbLNAns8cdDPLQWPDBQ3yRMrD+lzq8zYc3k6BQE4j5GML/oM9R
	 rs/3UGiPrT9Ww==
Date: Mon, 24 Jun 2024 17:58:13 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 08/13] x86/vpt: address a violation of MISRA C
 Rule 16.3
In-Reply-To: <e26de71af72b51abd425f2e75f30d794e0ba46a2.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241758050.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <e26de71af72b51abd425f2e75f30d794e0ba46a2.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add pseudo keyword fallthrough to meet the requirements to deviate
> a violation of MISRA C Rule 16.3 ("An unconditional `break'
> statement shall terminate every switch-clause").
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:58:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:58:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747101.1154411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuWD-0007OU-WD; Tue, 25 Jun 2024 00:58:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747101.1154411; Tue, 25 Jun 2024 00:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuWD-0007ON-Rs; Tue, 25 Jun 2024 00:58:45 +0000
Received: by outflank-mailman (input) for mailman id 747101;
 Tue, 25 Jun 2024 00:58:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuWC-0006do-De
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:58:44 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 10c789d4-328e-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 02:58:42 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id B33B460AE9;
 Tue, 25 Jun 2024 00:58:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id A74A7C2BBFC;
 Tue, 25 Jun 2024 00:58:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10c789d4-328e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719277121;
	bh=1h0SljQwYt2uhrVy5EsJfWfyNxB7RHb9qSthRi3i8Rc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=l+vgBivQ3VE221i+O9++hhLsCGB7n4KLWhRdHlD5gSe0Yb0tObVRXSpUbpyHj50mv
	 DUrHFLy+veN2vsRIOn+e9NIFFbmqh+EMs+1GHPE4MZqMN/RqrXu1zan/ryzZ7jeBlq
	 5g4q7c6QqWDGsMgDOWpY6iFjGvwEhtkVL48nMnIVLKPLSjPxJS3TdZ4Sx0E1AFD3Rq
	 f3ypVJRRmoRAGD/9zamTjOY8ZcoI+LKOtaG8yGPv5n5HRljTPn39K89Ez7+uLgf9OF
	 sVxG+4kgW6A/TgA8x2wOm2LGwFDoSf6s8snQ/+yWzDoHxdHaS2oIwsAKr4c20zqdXH
	 vX0lMamdeajPA==
Date: Mon, 24 Jun 2024 17:58:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 09/13] x86/mm: add defensive return
In-Reply-To: <f3adf3d5dac5c4c108207883472f3ebcfa11c685.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241758300.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <f3adf3d5dac5c4c108207883472f3ebcfa11c685.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add a defensive return statement at the end of an unreachable
> default case. Besides improving safety, this meets the requirements
> to deviate a violation of MISRA C Rule 16.3: "An unconditional `break'
> statement shall terminate every switch-clause".
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
>  xen/arch/x86/mm.c | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index 648d6dd475..2e19ced15e 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -916,6 +916,7 @@ get_page_from_l1e(
>                  return 0;
>              default:
>                  ASSERT_UNREACHABLE();
> +                return 0;

Let's avoid this


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:59:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747106.1154421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuWl-00084c-6C; Tue, 25 Jun 2024 00:59:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747106.1154421; Tue, 25 Jun 2024 00:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuWl-00084U-3A; Tue, 25 Jun 2024 00:59:19 +0000
Received: by outflank-mailman (input) for mailman id 747106;
 Tue, 25 Jun 2024 00:59:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuWj-00084B-91
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:59:17 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 242fd945-328e-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 02:59:15 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 47CD261126;
 Tue, 25 Jun 2024 00:59:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5B364C2BBFC;
 Tue, 25 Jun 2024 00:59:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 242fd945-328e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719277154;
	bh=JIkZmOtnXJl3xP//D5O8yllNtM9t8S6vrCiSwOvt3As=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZUc+8r6Ov99odeLg26xaLQMAuewesXCquZ/PjTIRZKWxvVnkyjFjKIVfsUpmlFleP
	 cSYswuw0ZMPdJb5yAnxYfhYJ3QYTcjHgiaK2NzNZYjYudlfEzjhxW5AKHW8/pNJubU
	 mTb1JopiOSzqkNd8M57i53FE6NEvfAgckAzhwUDsSCY0JfXXd8PqiAxwQsnwTnLBp+
	 G4KknX0AyuyP57KB2tODhYjFmWVbQfEcyssPB8ZKOBHJDl13xtuPm9LdZd/UKKYtZH
	 1S+3A2Mn1yV8ruoG54rkqmKEDSHYAt30RRb1F6G7HSyfonQzqFZWLcb0pu5zr37/WX
	 p7QD/t74ZPq+Q==
Date: Mon, 24 Jun 2024 17:59:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 10/13] x86/mpparse: address a violation of MISRA
 C Rule 16.3
In-Reply-To: <e421b80a9b016d286b9a5b8329b0bcb33584f06e.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241759040.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <e421b80a9b016d286b9a5b8329b0bcb33584f06e.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add a missing break statement to address a violation of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 00:59:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 00:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747111.1154431 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuXI-0000Am-Cf; Tue, 25 Jun 2024 00:59:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747111.1154431; Tue, 25 Jun 2024 00:59:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuXI-0000Af-A4; Tue, 25 Jun 2024 00:59:52 +0000
Received: by outflank-mailman (input) for mailman id 747111;
 Tue, 25 Jun 2024 00:59:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuXG-0007Jk-J0
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 00:59:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 38bc76ae-328e-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 02:59:49 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id BCA3961152;
 Tue, 25 Jun 2024 00:59:48 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C7615C2BBFC;
 Tue, 25 Jun 2024 00:59:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38bc76ae-328e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719277188;
	bh=QaYCwnkhJcl4jr+s3WOhsEcSX4O2MueD0AfkkteZJI8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=FLxOy02tUtO/r6A4o9DWiOoG/0qZFyBieMKOVZtGX6ClM2dsCai+TVhvKN2//SE0T
	 YNOcwnW9PbSWwqEgRosCMlt8KHvMtI+ZbGet0Y5ZldzcYQvCLpbaD/Mp6bQmYwMhjb
	 E1stX8BRjiXcmP6SNXbBK1rjG0zxI1uMuI596kwZxhq6cR7zbrkvzciMtFommDfEtv
	 j6jzsBd0M7Ur/fLirCuOaa1dp4dNaexfpxgTKfIntwl/73c8JGg2U0QSHCW4/hWNB5
	 vTxiIR5t8RRx/CEloKsmLuhMYR6Zqt8wa6IhzWOyZwhK0bgSREtA9vOOIP7zmr5ytG
	 KiBg+UrJaY3+g==
Date: Mon, 24 Jun 2024 17:59:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 11/13] x86/pmtimer: address a violation of MISRA
 C Rule 16.3
In-Reply-To: <fea205262d4f7b337b804a013d0f1c411a721b46.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241759380.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <fea205262d4f7b337b804a013d0f1c411a721b46.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add missing break statement to address a violation of MISRA C Rule 16.3
> ("An unconditional `break' statement shall terminate every
> switch-clause").
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 01:00:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 01:00:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747116.1154441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuXq-0001jR-Kb; Tue, 25 Jun 2024 01:00:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747116.1154441; Tue, 25 Jun 2024 01:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuXq-0001ib-HK; Tue, 25 Jun 2024 01:00:26 +0000
Received: by outflank-mailman (input) for mailman id 747116;
 Tue, 25 Jun 2024 01:00:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuXp-00015h-6j
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 01:00:25 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4b87f191-328e-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 03:00:22 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id E8408CE171F;
 Tue, 25 Jun 2024 01:00:19 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 64832C32786;
 Tue, 25 Jun 2024 01:00:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b87f191-328e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719277219;
	bh=yZRv4I1j+Yh0zi1aEJXlacr6CoXERhyV6nWJ7oMqjtw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Xa8lK4zPzLSZxHBPAbg9p+4pShZRfFYoi0+SQbPja2c4DujG+P01Mq70DMYOFReV8
	 IDsUeIN1vPu2SUh4YWeupjab0Xh+ORvyyrXNh8mv8NkCa6Ha+PBs9J31ks6SSFVNUn
	 t29kcrDTTF9DBTx4CRfQFy/mmESViz+zN48xlYfHiORNyrEtzQmnMbNvD5h7yxdy3j
	 1NLkeqHm+OXzr0O/mN/PJIk48EklvEF4lAYp3nInbQKKx9/8X8DQxAYosvWQO9zsPv
	 OkHxsDwjhaDAzr0lxoyP5/3QVmUdXA5TG17TmxSpDG9dT4rHC33L8LnrytTpqbq1wr
	 mKoMB6KMiub9g==
Date: Mon, 24 Jun 2024 18:00:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 12/13] x86/vPIC: address a violation of MISRA C
 Rule 16.3
In-Reply-To: <bf0f2ac1c8e9443b2c4f8ae6f02659927d5f7dea.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241800080.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <bf0f2ac1c8e9443b2c4f8ae6f02659927d5f7dea.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add the pseudo-keyword fallthrough to meet the requirements to deviate
> a violation of MISRA C Rule 16.3: "An unconditional `break' statement
> shall terminate every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
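The pattern this patch applies can be sketched in isolation. Note this is an illustrative stand-in, not the actual Xen vPIC code: the `fallthrough` definition below is a self-contained substitute for the one in Xen's compiler header, and `handle()` is a hypothetical function.

```c
#include <assert.h>

/*
 * Illustrative stand-in only: Xen defines its "fallthrough"
 * pseudo-keyword in its own compiler header; this copy just makes the
 * example self-contained.
 */
#if defined(__GNUC__) && __GNUC__ >= 7
#define fallthrough __attribute__((__fallthrough__))
#else
#define fallthrough do {} while (0)
#endif

/*
 * Hypothetical handler showing the Rule 16.3 pattern: every
 * switch-clause either ends in an unconditional break or carries an
 * explicit fallthrough annotation (the documented deviation).
 */
static int handle(int cmd)
{
    int flags = 0;

    switch ( cmd )
    {
    case 0:
        flags |= 1;
        fallthrough;    /* intentional: case 0 also takes case 1's path */
    case 1:
        flags |= 2;
        break;          /* unconditional break terminates the clause */
    default:
        flags = -1;
        break;
    }

    return flags;
}
```

The annotation changes nothing at runtime; it only records that the fall-through is deliberate, which is what lets the MISRA deviation apply.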


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 01:00:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 01:00:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747121.1154451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuYH-0003jq-S2; Tue, 25 Jun 2024 01:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747121.1154451; Tue, 25 Jun 2024 01:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLuYH-0003j1-Og; Tue, 25 Jun 2024 01:00:53 +0000
Received: by outflank-mailman (input) for mailman id 747121;
 Tue, 25 Jun 2024 01:00:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cmkP=N3=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sLuYG-0003Ar-Ot
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 01:00:52 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5da33bca-328e-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 03:00:51 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id A58C160F39;
 Tue, 25 Jun 2024 01:00:50 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 99B5BC2BBFC;
 Tue, 25 Jun 2024 01:00:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5da33bca-328e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719277250;
	bh=qDwmdPFMex1cv54FgKNuXf1rt30UOjdd9eHj+hHj77w=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=TnzOP2a5PKPNAFGU8w7W9OVGLL7xwJ97zZDszr/Dcke7gliWHEVzUlebi8ekGKGlr
	 BCFebHBcIaVrMa6VA7UI+VoVgcRzCz8NtZa9zx1tmYSaotV0Cpr3RFgrGDrMDWrJqN
	 1NGmjjV28VLLImYSYiLt9136jpCRMfrwYSi28shfpIWbJl7B9JJKfqbWRRbVGDR/tr
	 Oqw0MhtdmEpZRABE79h9SvLGhlDF8YaVs6nPRiIDOoDTfAmGbQFb2/KLaUirYHaAlS
	 TimABDbj+Hk79yFkQ9o1ibKNk6cmOTzmY7GoSCks/3tBRzyBnCP5NKdF2hBY5Z1s1b
	 8xMw/yp1/ZMhA==
Date: Mon, 24 Jun 2024 18:00:48 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 13/13] x86/vlapic: address a violation of MISRA
 C Rule 16.3
In-Reply-To: <0aa39166696e46b6bb45a0f7b5ac06bfd9fdda8e.1719218291.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406241800420.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <0aa39166696e46b6bb45a0f7b5ac06bfd9fdda8e.1719218291.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Add missing break statement to address a violation of MISRA C
> Rule 16.3: "An unconditional `break' statement shall terminate every
> switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
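The companion fix is even simpler: terminating the final switch-clause. The `classify()` function below is a hypothetical example of the shape of the change, not the actual vlapic code.

```c
#include <assert.h>

/*
 * Hypothetical example: before the patch, the final switch-clause
 * simply fell off the end of the switch.  That is functionally
 * harmless, but MISRA C Rule 16.3 still requires every clause to be
 * terminated, so the fix adds a break with no functional change.
 */
static unsigned int classify(unsigned int reg)
{
    unsigned int kind = 0;

    switch ( reg )
    {
    case 0x20:
        kind = 1;
        break;
    case 0x30:
        kind = 2;
        break;  /* previously missing; compliance-only addition */
    }

    return kind;
}
```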


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 01:21:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 01:21:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747137.1154461 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLurb-00070m-Co; Tue, 25 Jun 2024 01:20:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747137.1154461; Tue, 25 Jun 2024 01:20:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLurb-00070f-8s; Tue, 25 Jun 2024 01:20:51 +0000
Received: by outflank-mailman (input) for mailman id 747137;
 Tue, 25 Jun 2024 01:20:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AHgq=N3=linux.intel.com=baolu.lu@srs-se1.protection.inumbo.net>)
 id 1sLurZ-00070Z-JB
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 01:20:50 +0000
Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.8])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 24d509cb-3291-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 03:20:46 +0200 (CEST)
Received: from fmviesa004.fm.intel.com ([10.60.135.144])
 by fmvoesa102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 Jun 2024 18:20:43 -0700
Received: from unknown (HELO [10.239.159.127]) ([10.239.159.127])
 by fmviesa004.fm.intel.com with ESMTP; 24 Jun 2024 18:20:41 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24d509cb-3291-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1719278447; x=1750814447;
  h=message-id:date:mime-version:cc:subject:to:references:
   from:in-reply-to:content-transfer-encoding;
  bh=U6XAiC13vaXq5UsI3E92/M07+WJrfK31lsjb3JRY1ps=;
  b=HxsCqosVo7/Bv3GJ+JkDMDorFQ6ZoZOKzXjOMqfyvJRG0vzZo0nLyuFi
   YIfq85+w/hYJRddT7uZdN2ecFlic36LjijKvaCrqSXmJduIPZOQ43WXbA
   FRNR2PN5Py1wJYWY5qlJKTJJhPoiI/k42jLOJ7tjtZq5IvvsusUXKbMpI
   GMVq4uCjY5agim68FDR6T1ss8nzJLey2RDOWYtsfFUEGktdcI0ryCTJQ8
   3ekxbSHPKEO9sZcb/LJkf6qLDYcsdAUaGaCHrTYqsw996JEGG40Yfk0YZ
   kFRSFofGktCxqLuUTMST8imJLOeKdP10IXxXjFkXtGU1+67/uY4x4p6sk
   A==;
X-CSE-ConnectionGUID: EK2jTyhhQJ6syTRx+iad7A==
X-CSE-MsgGUID: gblRdE4aTaWZNd2coKeIPA==
X-IronPort-AV: E=McAfee;i="6700,10204,11113"; a="33812925"
X-IronPort-AV: E=Sophos;i="6.08,263,1712646000"; 
   d="scan'208";a="33812925"
X-CSE-ConnectionGUID: 2MUKm378QzWdCxybgJGdZA==
X-CSE-MsgGUID: NAxNJ6sbTpmExBVHbHA4Og==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.08,263,1712646000"; 
   d="scan'208";a="48019532"
Message-ID: <adac26cc-0d9e-4413-a0b7-c1d6929b9543@linux.intel.com>
Date: Tue, 25 Jun 2024 09:18:07 +0800
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Cc: baolu.lu@linux.intel.com, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev, Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Subject: Re: [RFC PATCH] iommu/xen: Add Xen PV-IOMMU driver
To: Robin Murphy <robin.murphy@arm.com>, Teddy Astie
 <teddy.astie@vates.tech>, Jason Gunthorpe <jgg@ziepe.ca>
References: <fe36b8d36ed3bc01c78901bdf7b87a71cb1adaad.1718286176.git.teddy.astie@vates.tech>
 <20240619163000.GK791043@ziepe.ca>
 <750967b7-252f-4523-872f-64b79358c97c@vates.tech>
 <4ba90f86-fd14-4d2a-b7a0-c3eaab243565@linux.intel.com>
 <4c941977-868a-4bd0-9c57-eb58255d95bf@arm.com>
Content-Language: en-US
From: Baolu Lu <baolu.lu@linux.intel.com>
In-Reply-To: <4c941977-868a-4bd0-9c57-eb58255d95bf@arm.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 6/24/24 7:09 PM, Robin Murphy wrote:
> On 2024-06-23 4:21 am, Baolu Lu wrote:
>> On 6/21/24 11:09 PM, Teddy Astie wrote:
>>> On 19/06/2024 at 18:30, Jason Gunthorpe wrote:
>>>> On Thu, Jun 13, 2024 at 01:50:22PM +0000, Teddy Astie wrote:
>>>>
>>>>> +struct iommu_domain *xen_iommu_domain_alloc(unsigned type)
>>>>> +{
>>>>> +    struct xen_iommu_domain *domain;
>>>>> +    u16 ctx_no;
>>>>> +    int ret;
>>>>> +
>>>>> +    if (type & IOMMU_DOMAIN_IDENTITY) {
>>>>> +        /* use default domain */
>>>>> +        ctx_no = 0;
>>>> Please use the new ops, domain_alloc_paging and the static identity 
>>>> domain.
>>> Yes, in the v2, I will use this newer interface.
>>>
>>> I have a question on this new interface: is it valid to not have an
>>> identity domain (with the "default domain" being blocking)? In the
>>> current implementation it doesn't really matter, but at some point we
>>> may want to allow not having it (thus making this driver mandatory).
>>
>> It's valid to not have an identity domain if "default domain being
>> blocking" means a paging domain with no mappings.
>>
>> In the iommu driver's iommu_ops::def_domain_type callback, just always
>> return IOMMU_DOMAIN_DMA, which indicates that the iommu driver doesn't
>> support identity translation.
> 
> That's not necessary - if neither ops->identity_domain nor 
> ops->domain_alloc(IOMMU_DOMAIN_IDENTITY) gives a valid domain then we 
> fall back to IOMMU_DOMAIN_DMA anyway.

Yes. That's true.

Best regards,
baolu
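The fallback Robin describes can be modelled with a small stand-alone sketch. The structs and `pick_default_domain_type()` below are simplified stand-ins for the Linux IOMMU core, not its real definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel types, for illustration only. */
enum { IOMMU_DOMAIN_BLOCKED, IOMMU_DOMAIN_IDENTITY, IOMMU_DOMAIN_DMA };

struct iommu_domain {
    int type;
};

struct iommu_ops {
    struct iommu_domain *identity_domain;   /* static identity; may be NULL */
    struct iommu_domain *(*domain_alloc)(unsigned int type);
};

/*
 * Model of the fallback described above: if the driver offers neither a
 * static identity domain nor a successful
 * domain_alloc(IOMMU_DOMAIN_IDENTITY), the core falls back to
 * IOMMU_DOMAIN_DMA as the default domain type.
 */
static int pick_default_domain_type(const struct iommu_ops *ops)
{
    if (ops->identity_domain)
        return IOMMU_DOMAIN_IDENTITY;
    if (ops->domain_alloc && ops->domain_alloc(IOMMU_DOMAIN_IDENTITY))
        return IOMMU_DOMAIN_IDENTITY;
    return IOMMU_DOMAIN_DMA;
}

/* Two toy drivers: one with a static identity domain, one paging-only. */
static struct iommu_domain identity = { IOMMU_DOMAIN_IDENTITY };
static const struct iommu_ops with_identity = { &identity, NULL };
static const struct iommu_ops paging_only   = { NULL, NULL };
```

Under this model, a paging-only driver never needs to advertise identity support: the core's selection logic degrades to a DMA (paging) default domain on its own.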


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 01:59:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 01:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747145.1154471 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLvSR-0006yG-8l; Tue, 25 Jun 2024 01:58:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747145.1154471; Tue, 25 Jun 2024 01:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLvSR-0006y9-66; Tue, 25 Jun 2024 01:58:55 +0000
Received: by outflank-mailman (input) for mailman id 747145;
 Tue, 25 Jun 2024 01:58:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLvSQ-0006xx-3H; Tue, 25 Jun 2024 01:58:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLvSP-0006xW-DV; Tue, 25 Jun 2024 01:58:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLvSO-0003p3-Qc; Tue, 25 Jun 2024 01:58:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLvSO-0008Rr-Py; Tue, 25 Jun 2024 01:58:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zfh5Jb4Pw5/72HrvsyxjNWHGWK37qDt59q8ZUWD9PjE=; b=2x2hAaYy7UYbrVlnWpFMdVFbl8
	9NETNm2J+6oXn6l5dDU1EoYk+954WT4ZHXpNL60WtZ7gyDKUZFJySO63DizjacfADGPhYfp5+rQC1
	iHe5wf5nxLf+r7G9ROOCRXF+8TDXB6fGVnTnqxsK0EDh9E/HIabA9tSUx1iPMuCnof0Q=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186469-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186469: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=626737a5791b59df5c4d1365c4dcfc9b0d70affe
X-Osstest-Versions-That:
    linux=7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 01:58:52 +0000

flight 186469 linux-linus real [real]
flight 186472 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186469/
http://logs.test-lab.xenproject.org/osstest/logs/186472/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186462

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot            fail pass in 186472-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 186472 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186472 never pass
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186462
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 186462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                626737a5791b59df5c4d1365c4dcfc9b0d70affe
baseline version:
 linux                7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748

Last test of basis   186462  2024-06-23 16:10:07 Z    1 days
Failing since        186464  2024-06-24 01:42:08 Z    1 days    3 attempts
Testing same since   186469  2024-06-24 17:41:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Hagar Hemdan <hagarhem@amazon.com>
  Huang-Huang Bao <i@eh5.me>
  Johan Hovold <johan+linaro@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Stefan Wahren <wahrenst@gmx.net>
  Thomas Richard <thomas.richard@bootlin.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 626737a5791b59df5c4d1365c4dcfc9b0d70affe
Merge: f2661062f16b 4ea4d4808e34
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon Jun 24 10:28:41 2024 -0400

    Merge tag 'pinctrl-v6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
    
    Pull pin control fixes from Linus Walleij:
    
     - Use flag saving spinlocks in the Renesas rzg2l driver. This fixes up
       PREEMPT_RT problems.
    
     - Remove broken Qualcomm PM8008 that clearly was never working. A new
       version will arrive in the next merge window.
    
     - Add a quirk for LP8764 regmap that was missed and made the TI J7200
       board unusable.
    
     - Fix persistence of the BCM2835 GPIO outputs kernel parameter so this
       remains consistent across a booted kernel.
    
     - Fix a potential deadlock in create_pinctrl()
    
     - Fix some erroneous bitfields and pinmux reset in the Rockchip RK3328
       driver.
    
    * tag 'pinctrl-v6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl:
      pinctrl: rockchip: fix pinmux reset in rockchip_pmx_set
      pinctrl: rockchip: use dedicated pinctrl type for RK3328
      pinctrl: rockchip: fix pinmux bits for RK3328 GPIO3-B pins
      pinctrl: rockchip: fix pinmux bits for RK3328 GPIO2-B pins
      pinctrl: fix deadlock in create_pinctrl() when handling -EPROBE_DEFER
      pinctrl: bcm2835: Fix permissions of persist_gpio_outputs
      pinctrl: tps6594: add missing support for LP8764 PMIC
      dt-bindings: pinctrl: qcom,pmic-gpio: drop pm8008
      pinctrl: qcom: spmi-gpio: drop broken pm8008 support
      pinctrl: renesas: rzg2l: Use spin_{lock,unlock}_irq{save,restore}

commit f2661062f16b2de5d7b6a5c42a9a5c96326b8454
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Sun Jun 23 17:08:54 2024 -0400

    Linux 6.10-rc5

commit 4ea4d4808e342ddf89ba24b93ffa2057005aaced
Author: Huang-Huang Bao <i@eh5.me>
Date:   Thu Jun 6 20:57:55 2024 +0800

    pinctrl: rockchip: fix pinmux reset in rockchip_pmx_set
    
    rockchip_pmx_set resets all pinmuxes in a group to 0 in the error
    case; add the missing bank data retrieval in that code to avoid
    setting the mux on unexpected pins.
    
    Fixes: 14797189b35e ("pinctrl: rockchip: add return value to rockchip_set_mux")
    Reviewed-by: Heiko Stuebner <heiko@sntech.de>
    Signed-off-by: Huang-Huang Bao <i@eh5.me>
    Link: https://lore.kernel.org/r/20240606125755.53778-5-i@eh5.me
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit 01b4b1d1cec48ef4c26616c2fc4600b2c9fec05a
Author: Huang-Huang Bao <i@eh5.me>
Date:   Thu Jun 6 20:57:54 2024 +0800

    pinctrl: rockchip: use dedicated pinctrl type for RK3328
    
    rk3328_pin_ctrl uses the RK3288 type, which has a hack in
    rockchip_pinctrl_suspend and rockchip_pinctrl_resume to restore
    GPIO6-C6 at resume. The hack is not applicable to RK3328, as GPIO6
    does not even exist there, so use a dedicated pinctrl type to skip
    this hack.
    
    Fixes: 3818e4a7678e ("pinctrl: rockchip: Add rk3328 pinctrl support")
    Reviewed-by: Heiko Stuebner <heiko@sntech.de>
    Signed-off-by: Huang-Huang Bao <i@eh5.me>
    Link: https://lore.kernel.org/r/20240606125755.53778-4-i@eh5.me
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit 5ef6914e0bf578357b4c906ffe6b26e7eedb8ccf
Author: Huang-Huang Bao <i@eh5.me>
Date:   Thu Jun 6 20:57:53 2024 +0800

    pinctrl: rockchip: fix pinmux bits for RK3328 GPIO3-B pins
    
    The pinmux bits for the GPIO3-B1 to GPIO3-B6 pins are not explicitly
    specified in the RK3328 TRM; however, we can get a hint from the pad
    names and their corresponding IOMUX settings for pins in the
    interface descriptions. The corresponding IOMUX settings for these
    pins can be found in the same row next to occurrences of the
    following pad names in the RK3328 TRM.
    
    GPIO3-B1:  IO_TSPd5m0_CIFdata5m0_GPIO3B1vccio6
    GPIO3-B2: IO_TSPd6m0_CIFdata6m0_GPIO3B2vccio6
    GPIO3-B3: IO_TSPd7m0_CIFdata7m0_GPIO3B3vccio6
    GPIO3-B4: IO_CARDclkm0_GPIO3B4vccio6
    GPIO3-B5: IO_CARDrstm0_GPIO3B5vccio6
    GPIO3-B6: IO_CARDdetm0_GPIO3B6vccio6
    
    Add pinmux data to rk3328_mux_recalced_data, as the mux register
    offset for these pins does not follow the Rockchip convention.
    
    Signed-off-by: Huang-Huang Bao <i@eh5.me>
    Reviewed-by: Heiko Stuebner <heiko@sntech.de>
    Fixes: 3818e4a7678e ("pinctrl: rockchip: Add rk3328 pinctrl support")
    Link: https://lore.kernel.org/r/20240606125755.53778-3-i@eh5.me
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit e8448a6c817c2aa6c6af785b1d45678bd5977e8d
Author: Huang-Huang Bao <i@eh5.me>
Date:   Thu Jun 6 20:57:52 2024 +0800

    pinctrl: rockchip: fix pinmux bits for RK3328 GPIO2-B pins
    
    The pinmux fields for GPIO2-B0 to GPIO2-B6 are actually 2 bits wide;
    correct the bank flag for GPIO2-B. The pinmux bits for GPIO2-B7 are
    recalculated, so they remain unchanged.
    
    The pinmux bits for those pins are not explicitly specified in the
    RK3328 TRM; however, we can get a hint from the pad names and their
    corresponding IOMUX settings for pins in the interface descriptions.
    The corresponding IOMUX settings for GPIO2-B0 to GPIO2-B6 can be
    found in the same row next to occurrences of the following pad names
    in the RK3328 TRM.
    
    GPIO2-B0: IO_SPIclkm0_GPIO2B0vccio5
    GPIO2-B1: IO_SPItxdm0_GPIO2B1vccio5
    GPIO2-B2: IO_SPIrxdm0_GPIO2B2vccio5
    GPIO2-B3: IO_SPIcsn0m0_GPIO2B3vccio5
    GPIO2-B4: IO_SPIcsn1m0_FLASHvol_sel_GPIO2B4vccio5
    GPIO2-B5: IO_ I2C2sda_TSADCshut_GPIO2B5vccio5
    GPIO2-B6: IO_ I2C2scl_GPIO2B6vccio5
    
    This fix has been tested on a NanoPi R2S, fixing the conflicting
    pinmux bits between GPIO2-B7 and GPIO2-B5.
    
    Signed-off-by: Huang-Huang Bao <i@eh5.me>
    Reviewed-by: Heiko Stuebner <heiko@sntech.de>
    Fixes: 3818e4a7678e ("pinctrl: rockchip: Add rk3328 pinctrl support")
    Link: https://lore.kernel.org/r/20240606125755.53778-2-i@eh5.me
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit adec57ff8e66aee632f3dd1f93787c13d112b7a1
Author: Hagar Hemdan <hagarhem@amazon.com>
Date:   Tue Jun 4 08:58:38 2024 +0000

    pinctrl: fix deadlock in create_pinctrl() when handling -EPROBE_DEFER
    
    In create_pinctrl(), pinctrl_maps_mutex is acquired before calling
    add_setting(). If add_setting() returns -EPROBE_DEFER, create_pinctrl()
    calls pinctrl_free(). However, pinctrl_free() attempts to acquire
    pinctrl_maps_mutex, which is already held by create_pinctrl(), leading to
    a potential deadlock.
    
    This patch resolves the issue by releasing pinctrl_maps_mutex before
    calling pinctrl_free(), preventing the deadlock.
    
    This bug was discovered and resolved using Coverity Static Analysis
    Security Testing (SAST) by Synopsys, Inc.
    
    Fixes: 42fed7ba44e4 ("pinctrl: move subsystem mutex to pinctrl_dev struct")
    Suggested-by: Maximilian Heyne <mheyne@amazon.de>
    Signed-off-by: Hagar Hemdan <hagarhem@amazon.com>
    Link: https://lore.kernel.org/r/20240604085838.3344-1-hagarhem@amazon.com
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit 61fef29ad120d3077aeb0ef598d76822acb8c4cb
Author: Stefan Wahren <wahrenst@gmx.net>
Date:   Mon Jun 3 20:19:37 2024 +0200

    pinctrl: bcm2835: Fix permissions of persist_gpio_outputs
    
    The commit 8ff05989b44e ("pinctrl: bcm2835: Make pin freeing behavior
    configurable") unintentionally made the module parameter
    persist_gpio_outputs changeable at runtime. So drop the write permission
    in order to make the freeing behavior predictable for user applications.
    
    Fixes: 8ff05989b44e ("pinctrl: bcm2835: Make pin freeing behavior configurable")
    Reported-by: Andy Shevchenko <andy.shevchenko@gmail.com>
    Closes: https://lore.kernel.org/linux-gpio/Zjk-C0nLmlynqLAE@surfacebook.localdomain/
    Signed-off-by: Stefan Wahren <wahrenst@gmx.net>
    Acked-by: Florian Fainelli <florian.fainelli@broadcom.com>
    Link: https://lore.kernel.org/r/20240603181938.76047-2-wahrenst@gmx.net
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit 03ecbe4bb61886cc7fff5b0efc3784e86ac16214
Author: Thomas Richard <thomas.richard@bootlin.com>
Date:   Mon Jun 3 10:21:10 2024 +0200

    pinctrl: tps6594: add missing support for LP8764 PMIC
    
    Add missing support for LP8764 PMIC in the probe().
    
    Issue detected with v6.10-rc1 (and reproduced with 6.10-rc2) using a TI
    J7200 EVM board.
    
    tps6594-pinctrl tps6594-pinctrl.8.auto: error -EINVAL:
      Couldn't register gpio_regmap driver
    tps6594-pinctrl tps6594-pinctrl.8.auto: probe with driver tps6594-pinctrl
      failed with error -22
    
    Fixes: 208829715917 ("pinctrl: pinctrl-tps6594: Add TPS65224 PMIC pinctrl and GPIO")
    Signed-off-by: Thomas Richard <thomas.richard@bootlin.com>
    Link: https://lore.kernel.org/r/20240603082110.2104977-1-thomas.richard@bootlin.com
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit 550dec8593cd0b32ffc90bb88edb190c04efbb48
Author: Johan Hovold <johan+linaro@kernel.org>
Date:   Wed May 29 18:29:53 2024 +0200

    dt-bindings: pinctrl: qcom,pmic-gpio: drop pm8008
    
    The binding for PM8008 is being reworked so that internal details like
    interrupts and register offsets are no longer described. This
    specifically also involves dropping the gpio child node and its
    compatible string which is no longer needed.
    
    Note that there are currently no users of the upstream binding and
    driver.
    
    Reviewed-by: Stephen Boyd <swboyd@chromium.org>
    Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
    Reviewed-by: Rob Herring (Arm) <robh@kernel.org>
    Link: https://lore.kernel.org/r/20240529162958.18081-10-johan+linaro@kernel.org
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit 8da86499d4cd125a9561f9cd1de7fba99b0aecbf
Author: Johan Hovold <johan+linaro@kernel.org>
Date:   Wed May 29 18:29:52 2024 +0200

    pinctrl: qcom: spmi-gpio: drop broken pm8008 support
    
    The SPMI GPIO driver assumes that the parent device is an SPMI device
    and accesses random data when backcasting the parent struct device
    pointer for non-SPMI devices.
    
    Fortunately this does not seem to cause any issues currently when the
    parent device is an I2C client like the PM8008, but this could change if
    the structures are reorganised (e.g. using structure randomisation).
    
    Notably the interrupt implementation is also broken for non-SPMI devices.
    
    Also note that the two GPIO pins on PM8008 are used for interrupts and
    reset so their practical use should be limited.
    
    Drop the broken GPIO support for PM8008 for now.
    
    Fixes: ea119e5a482a ("pinctrl: qcom-pmic-gpio: Add support for pm8008")
    Cc: stable@vger.kernel.org      # 5.13
    Reviewed-by: Bryan O'Donoghue <bryan.odonoghue@linaro.org>
    Reviewed-by: Stephen Boyd <swboyd@chromium.org>
    Signed-off-by: Johan Hovold <johan+linaro@kernel.org>
    Link: https://lore.kernel.org/r/20240529162958.18081-9-johan+linaro@kernel.org
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit dff5c3de21e753d1e46517aa2df0ebd23c06ede5
Merge: 1613e604df0c a39741d38c04
Author: Linus Walleij <linus.walleij@linaro.org>
Date:   Mon Jun 3 23:56:51 2024 +0200

    Merge tag 'renesas-pinctrl-fixes-for-v6.10-tag1' of git://git.kernel.org/pub/scm/linux/kernel/git/geert/renesas-drivers into fixes
    
    pinctrl: renesas: Fixes for v6.10
    
      - Fix PREEMPT_RT build failure on RZ/G2L.
    
    Signed-off-by: Linus Walleij <linus.walleij@linaro.org>

commit a39741d38c048a48ae0d65226d9548005a088f5f
Author: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
Date:   Wed May 22 08:54:21 2024 +0300

    pinctrl: renesas: rzg2l: Use spin_{lock,unlock}_irq{save,restore}
    
    On PREEMPT_RT kernels the spinlock_t maps to an rtmutex. Using
    raw_spin_lock_irqsave()/raw_spin_unlock_irqrestore() on
    &pctrl->lock.rlock breaks the PREEMPT_RT builds. To fix this use
    spin_lock_irqsave()/spin_unlock_irqrestore() on &pctrl->lock.
    
    Fixes: 02cd2d3be1c3 ("pinctrl: renesas: rzg2l: Configure the interrupt type on resume")
    Reported-by: Diederik de Haas <didi.debian@cknow.org>
    Closes: https://lore.kernel.org/all/131999629.KQPSlr0Zke@bagend
    Signed-off-by: Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
    Reviewed-by: Geert Uytterhoeven <geert+renesas@glider.be>
    Link: https://lore.kernel.org/r/20240522055421.2842689-1-claudiu.beznea.uj@bp.renesas.com
    Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 02:29:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 02:29:40 +0000
Date: Tue, 25 Jun 2024 10:28:54 +0800
From: kernel test robot <oliver.sang@intel.com>
To: Christoph Hellwig <hch@lst.de>
CC: <oe-lkp@lists.linux.dev>, <lkp@intel.com>, Jens Axboe <axboe@kernel.dk>,
	Ulf Hansson <ulf.hansson@linaro.org>, Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>, <linux-block@vger.kernel.org>,
	<linux-um@lists.infradead.org>, <drbd-dev@lists.linbit.com>,
	<nbd@other.debian.org>, <linuxppc-dev@lists.ozlabs.org>,
	<virtualization@lists.linux.dev>, <xen-devel@lists.xenproject.org>,
	<linux-bcache@vger.kernel.org>, <dm-devel@lists.linux.dev>,
	<linux-raid@vger.kernel.org>, <linux-mmc@vger.kernel.org>,
	<linux-mtd@lists.infradead.org>, <nvdimm@lists.linux.dev>,
	<linux-nvme@lists.infradead.org>, <linux-scsi@vger.kernel.org>,
	<ying.huang@intel.com>, <feng.tang@intel.com>, <fengwei.yin@intel.com>,
	<oliver.sang@intel.com>
Subject: [axboe-block:for-next] [block]  1122c0c1cc:  aim7.jobs-per-min 22.6%
 improvement
Message-ID: <202406250948.e0044f1d-oliver.sang@intel.com>
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit



Hello,

kernel test robot noticed a 22.6% improvement of aim7.jobs-per-min on:


commit: 1122c0c1cc71f740fa4d5f14f239194e06a1d5e7 ("block: move cache control settings out of queue->flags")
https://git.kernel.org/cgit/linux/kernel/git/axboe/linux-block.git for-next

testcase: aim7
test machine: 96 threads 2 sockets Intel(R) Xeon(R) Platinum 8260L CPU @ 2.40GHz (Cascade Lake) with 128G memory
parameters:

	disk: 4BRD_12G
	md: RAID0
	fs: xfs
	test: sync_disk_rw
	load: 300
	cpufreq_governor: performance






Details are as below:
-------------------------------------------------------------------------------------------------->


The kernel config and materials to reproduce are available at:
https://download.01.org/0day-ci/archive/20240625/202406250948.e0044f1d-oliver.sang@intel.com

=========================================================================================
compiler/cpufreq_governor/disk/fs/kconfig/load/md/rootfs/tbox_group/test/testcase:
  gcc-13/performance/4BRD_12G/xfs/x86_64-rhel-8.3/300/RAID0/debian-12-x86_64-20240206.cgz/lkp-csl-2sp3/sync_disk_rw/aim7

commit: 
  70905f8706 ("block: remove blk_flush_policy")
  1122c0c1cc ("block: move cache control settings out of queue->flags")

70905f8706b62113 1122c0c1cc71f740fa4d5f14f23 
---------------- --------------------------- 
         %stddev     %change         %stddev
             \          |                \  
    153.19           -13.3%     132.81        uptime.boot
   2.8e+09           -11.9%  2.466e+09        cpuidle..time
  21945319   2%     -40.4%   13076160        cpuidle..usage
     29.31            +7.8%      31.58   2%  iostat.cpu.idle
     69.87            -3.6%      67.35        iostat.cpu.system
      0.04   4%      +0.0        0.08   5%  mpstat.cpu.all.iowait%
      0.78   2%      +0.2        0.99   2%  mpstat.cpu.all.usr%
     52860  49%     -78.2%      11536  78%  numa-numastat.node0.other_node
     46804  56%     +88.4%      88190  10%  numa-numastat.node1.other_node
    955871  10%     -43.3%     542216  14%  numa-meminfo.node1.Active
    955871  10%     -43.3%     542216  14%  numa-meminfo.node1.Active(anon)
   1015354  10%     -34.7%     662696  13%  numa-meminfo.node1.Shmem
      6008           -14.3%       5146   2%  perf-c2c.DRAM.remote
      7889           -12.4%       6908   2%  perf-c2c.HITM.local
      3839           -16.5%       3203   2%  perf-c2c.HITM.remote
     11728           -13.8%      10112   2%  perf-c2c.HITM.total
    695109           +20.5%     837625        vmstat.io.bo
    105.99   7%     -23.7%      80.83  11%  vmstat.procs.r
    803244           -30.9%     555360        vmstat.system.cs
    209736           -12.9%     182626        vmstat.system.in
      1448  89%    +207.9%       4459   6%  numa-vmstat.node0.nr_page_table_pages
     52860  49%     -78.2%      11536  78%  numa-vmstat.node0.numa_other
    239214  10%     -43.6%     134883  13%  numa-vmstat.node1.nr_active_anon
    254124  10%     -34.9%     165421  13%  numa-vmstat.node1.nr_shmem
    239214  10%     -43.6%     134883  13%  numa-vmstat.node1.nr_zone_active_anon
     46805  56%     +88.4%      88190  10%  numa-vmstat.node1.numa_other
     17374           +22.6%      21299        aim7.jobs-per-min
    103.64           -18.4%      84.58        aim7.time.elapsed_time
    103.64           -18.4%      84.58        aim7.time.elapsed_time.max
   4641240           -83.4%     770073        aim7.time.involuntary_context_switches
     32705            -4.3%      31289   2%  aim7.time.minor_page_faults
      6562            -3.1%       6359        aim7.time.percent_of_cpu_this_job_got
      6775           -21.0%       5351   2%  aim7.time.system_time
  49095202           -38.3%   30299361        aim7.time.voluntary_context_switches
   1297567           -37.0%     817692        meminfo.Active
   1297567           -37.0%     817692        meminfo.Active(anon)
     97760   5%     -23.4%      74859  20%  meminfo.AnonHugePages
   2390317           -15.3%    2024905        meminfo.Committed_AS
    884407           +11.9%     989723        meminfo.Inactive
    743152   2%     +14.8%     853331        meminfo.Inactive(anon)
    159265   8%     +38.6%     220668   3%  meminfo.Mapped
   1382079           -27.1%    1007445        meminfo.Shmem
    324534           -37.2%     203663   2%  proc-vmstat.nr_active_anon
   1165686            -8.2%    1070277        proc-vmstat.nr_file_pages
    185928   2%     +14.9%     213697        proc-vmstat.nr_inactive_anon
     35436            -2.9%      34420        proc-vmstat.nr_inactive_file
     40463   8%     +38.2%      55918   3%  proc-vmstat.nr_mapped
    345824           -27.3%     251424        proc-vmstat.nr_shmem
     28871            -1.4%      28477        proc-vmstat.nr_slab_reclaimable
    324534           -37.2%     203663   2%  proc-vmstat.nr_zone_active_anon
    185928   2%     +14.9%     213697        proc-vmstat.nr_zone_inactive_anon
     35436            -2.9%      34420        proc-vmstat.nr_zone_inactive_file
   5120744            -2.4%    4996195        proc-vmstat.numa_hit
   5020486            -2.5%    4896473        proc-vmstat.numa_local
    207026  10%     +50.2%     310941        proc-vmstat.pgactivate
   5196440            -2.7%    5057618        proc-vmstat.pgalloc_normal
    763396   6%     -11.8%     673464        proc-vmstat.pgfault
  74254490            -1.3%   73292473        proc-vmstat.pgpgout
     11.25  24%     -60.0%       4.50  29%  sched_debug.cfs_rq:/.h_nr_running.max
      1.59  20%     -42.7%       0.91  13%  sched_debug.cfs_rq:/.h_nr_running.stddev
    968.29   5%     -13.2%     840.04   5%  sched_debug.cfs_rq:/.runnable_avg.avg
      5533  21%     -47.1%       2925  21%  sched_debug.cfs_rq:/.runnable_avg.max
    798.88  13%     -38.3%     492.63   9%  sched_debug.cfs_rq:/.runnable_avg.stddev
    578.50   5%      -9.9%     521.30   4%  sched_debug.cfs_rq:/.util_avg.avg
      3120  20%     -40.3%       1862  19%  sched_debug.cfs_rq:/.util_avg.max
    479.36  12%     -30.4%     333.40   8%  sched_debug.cfs_rq:/.util_avg.stddev
      4592  24%     -51.8%       2215  31%  sched_debug.cfs_rq:/.util_est.max
    615.47  21%     -35.7%     395.64  15%  sched_debug.cfs_rq:/.util_est.stddev
     11.33  24%     -58.8%       4.67  26%  sched_debug.cpu.nr_running.max
      1.62  20%     -42.6%       0.93  11%  sched_debug.cpu.nr_running.stddev
    224323           -28.2%     161088        sched_debug.cpu.nr_switches.avg
    242363   2%     -27.9%     174695   2%  sched_debug.cpu.nr_switches.max
    197870   2%     -27.6%     143186        sched_debug.cpu.nr_switches.min
      7911  19%     -33.1%       5295  10%  sched_debug.cpu.nr_switches.stddev
      1.23            -4.8%       1.17        perf-stat.i.MPKI
 1.105e+10            +5.6%  1.167e+10        perf-stat.i.branch-instructions
      1.20   2%      +0.1        1.29   2%  perf-stat.i.branch-miss-rate%
    820863           -30.7%     569230        perf-stat.i.context-switches
      3.79           -10.2%       3.41        perf-stat.i.cpi
 2.176e+11            -3.2%  2.106e+11        perf-stat.i.cpu-cycles
    212040           -27.8%     153137        perf-stat.i.cpu-migrations
 5.416e+10            +6.8%  5.785e+10        perf-stat.i.instructions
      0.32           +11.8%       0.36        perf-stat.i.ipc
      0.05  77%    +233.9%       0.17  50%  perf-stat.i.major-faults
     10.74           -30.2%       7.50        perf-stat.i.metric.K/sec
      1.28            -4.3%       1.22        perf-stat.overall.MPKI
      4.02            -9.4%       3.64        perf-stat.overall.cpi
      3145            -5.3%       2979        perf-stat.overall.cycles-between-cache-misses
      0.25           +10.3%       0.27        perf-stat.overall.ipc
 1.094e+10            +5.4%  1.153e+10        perf-stat.ps.branch-instructions
    812563           -30.8%     562343        perf-stat.ps.context-switches
 2.156e+11            -3.4%  2.082e+11        perf-stat.ps.cpu-cycles
    209965           -28.0%     151248        perf-stat.ps.cpu-migrations
 5.365e+10            +6.6%  5.717e+10        perf-stat.ps.instructions
 5.641e+12           -13.1%  4.905e+12   2%  perf-stat.total.instructions
     14.88   5%     -14.9        0.00        perf-profile.calltrace.cycles-pp.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write
     14.86   5%     -14.9        0.00        perf-profile.calltrace.cycles-pp.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write.vfs_write
     14.77   5%     -14.8        0.00        perf-profile.calltrace.cycles-pp.__submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync.xfs_file_buffered_write
     14.76   5%     -14.8        0.00        perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.submit_bio_wait.blkdev_issue_flush.xfs_file_fsync
     14.74   5%     -14.7        0.00        perf-profile.calltrace.cycles-pp.md_handle_request.__submit_bio.__submit_bio_noacct.submit_bio_wait.blkdev_issue_flush
     14.72   5%     -14.7        0.00        perf-profile.calltrace.cycles-pp.raid0_make_request.md_handle_request.__submit_bio.__submit_bio_noacct.submit_bio_wait
     14.71   5%     -14.7        0.00        perf-profile.calltrace.cycles-pp.md_flush_request.raid0_make_request.md_handle_request.__submit_bio.__submit_bio_noacct
     13.32   5%     -13.3        0.00        perf-profile.calltrace.cycles-pp._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request.__submit_bio
     13.25   5%     -13.3        0.00        perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irq.md_flush_request.raid0_make_request.md_handle_request
      9.70   3%      -1.1        8.61   3%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.common_startup_64
      9.70   3%      -1.1        8.61   3%  perf-profile.calltrace.cycles-pp.start_secondary.common_startup_64
      9.70   3%      -1.1        8.61   3%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.common_startup_64
      9.80   3%      -1.1        8.71   3%  perf-profile.calltrace.cycles-pp.common_startup_64
      9.12   3%      -1.0        8.15   3%  perf-profile.calltrace.cycles-pp.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary.common_startup_64
      8.95   3%      -0.9        8.01   3%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry
      8.95   3%      -0.9        8.02   3%  perf-profile.calltrace.cycles-pp.cpuidle_enter.cpuidle_idle_call.do_idle.cpu_startup_entry.start_secondary
      2.21            -0.4        1.78   2%  perf-profile.calltrace.cycles-pp.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
      2.22            -0.4        1.79   2%  perf-profile.calltrace.cycles-pp.kthread.ret_from_fork.ret_from_fork_asm
      2.22            -0.4        1.79   2%  perf-profile.calltrace.cycles-pp.ret_from_fork.ret_from_fork_asm
      2.22            -0.4        1.79   2%  perf-profile.calltrace.cycles-pp.ret_from_fork_asm
      2.08            -0.4        1.68   2%  perf-profile.calltrace.cycles-pp.process_one_work.worker_thread.kthread.ret_from_fork.ret_from_fork_asm
      3.09            -0.2        2.86   2%  perf-profile.calltrace.cycles-pp.native_queued_spin_lock_slowpath._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq
      3.10            -0.2        2.87   2%  perf-profile.calltrace.cycles-pp._raw_spin_lock_irqsave.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync
      3.10            -0.2        2.87   2%  perf-profile.calltrace.cycles-pp.remove_wait_queue.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
      3.44            -0.2        3.23   4%  perf-profile.calltrace.cycles-pp.xlog_wait_on_iclog.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write.vfs_write
      0.95            +0.1        1.04        perf-profile.calltrace.cycles-pp.mutex_spin_on_owner.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
      0.57            +0.1        0.71   2%  perf-profile.calltrace.cycles-pp.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
      0.58   2%      +0.3        0.84   3%  perf-profile.calltrace.cycles-pp.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread.kthread
      0.59   2%      +0.3        0.85   2%  perf-profile.calltrace.cycles-pp.xfs_end_io.process_one_work.worker_thread.kthread.ret_from_fork
      0.90   2%      +0.4        1.27   3%  perf-profile.calltrace.cycles-pp.__submit_bio_noacct.iomap_submit_ioend.iomap_writepages.xfs_vm_writepages.do_writepages
      0.88   2%      +0.4        1.26   3%  perf-profile.calltrace.cycles-pp.__submit_bio.__submit_bio_noacct.iomap_submit_ioend.iomap_writepages.xfs_vm_writepages
      0.92   3%      +0.4        1.30   3%  perf-profile.calltrace.cycles-pp.iomap_submit_ioend.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc
      0.57   3%      +0.4        0.95   6%  perf-profile.calltrace.cycles-pp.xlog_cil_commit.__xfs_trans_commit.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks
      0.64   3%      +0.4        1.03   6%  perf-profile.calltrace.cycles-pp.__xfs_trans_commit.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write
      6.90   2%      +0.5        7.40   3%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.cpuidle_enter.cpuidle_idle_call.do_idle
      0.92   4%      +0.5        1.43   6%  perf-profile.calltrace.cycles-pp.xfs_vn_update_time.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write
      0.00            +0.5        0.52        perf-profile.calltrace.cycles-pp.complete.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
      0.94   4%      +0.5        1.46   6%  perf-profile.calltrace.cycles-pp.kiocb_modified.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write
      0.96   4%      +0.5        1.48   6%  perf-profile.calltrace.cycles-pp.xfs_file_write_checks.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
      0.00            +0.5        0.54   2%  perf-profile.calltrace.cycles-pp.xfs_iomap_write_unwritten.xfs_end_ioend.xfs_end_io.process_one_work.worker_thread
      0.00            +0.5        0.55   2%  perf-profile.calltrace.cycles-pp.iomap_write_iter.iomap_file_buffered_write.xfs_file_buffered_write.vfs_write.ksys_write
      0.00            +0.6        0.56  10%  perf-profile.calltrace.cycles-pp.__folio_start_writeback.iomap_writepage_map.iomap_writepages.xfs_vm_writepages.do_writepages
      0.00            +0.6        0.57   6%  perf-profile.calltrace.cycles-pp.__folio_end_writeback.folio_end_writeback.iomap_finish_ioend.md_end_clone_io.__submit_bio
      0.00            +0.6        0.58   7%  perf-profile.calltrace.cycles-pp.folio_end_writeback.iomap_finish_ioend.md_end_clone_io.__submit_bio.__submit_bio_noacct
      0.00            +0.6        0.60   6%  perf-profile.calltrace.cycles-pp.iomap_finish_ioend.md_end_clone_io.__submit_bio.__submit_bio_noacct.iomap_submit_ioend
      0.08 223%      +0.6        0.72   5%  perf-profile.calltrace.cycles-pp.md_end_clone_io.__submit_bio.__submit_bio_noacct.iomap_submit_ioend.iomap_writepages
      1.45   4%      +0.7        2.15   4%  perf-profile.calltrace.cycles-pp.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range
      1.46   4%      +0.7        2.16   4%  perf-profile.calltrace.cycles-pp.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range
      1.48   4%      +0.7        2.18   4%  perf-profile.calltrace.cycles-pp.do_writepages.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync
      1.51   4%      +0.7        2.22   4%  perf-profile.calltrace.cycles-pp.filemap_fdatawrite_wbc.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write
      1.51   3%      +0.7        2.23   4%  perf-profile.calltrace.cycles-pp.__filemap_fdatawrite_range.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write.vfs_write
      0.00            +0.7        0.72   7%  perf-profile.calltrace.cycles-pp.iomap_writepage_map.iomap_writepages.xfs_vm_writepages.do_writepages.filemap_fdatawrite_wbc
      1.60   3%      +0.8        2.36   4%  perf-profile.calltrace.cycles-pp.file_write_and_wait_range.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write
     85.48            +0.8       86.24        perf-profile.calltrace.cycles-pp.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64
     87.06            +1.4       88.49        perf-profile.calltrace.cycles-pp.xfs_file_buffered_write.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe
     87.18            +1.5       88.64        perf-profile.calltrace.cycles-pp.vfs_write.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     87.36            +1.5       88.82        perf-profile.calltrace.cycles-pp.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     87.19            +1.5       88.65        perf-profile.calltrace.cycles-pp.ksys_write.do_syscall_64.entry_SYSCALL_64_after_hwframe.write
     87.36            +1.5       88.82        perf-profile.calltrace.cycles-pp.entry_SYSCALL_64_after_hwframe.write
     87.62            +1.5       89.10        perf-profile.calltrace.cycles-pp.write
     56.74           +13.7       70.42        perf-profile.calltrace.cycles-pp.osq_lock.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq
     57.89           +13.8       71.74        perf-profile.calltrace.cycles-pp.__mutex_lock.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq
     60.36           +14.6       74.96        perf-profile.calltrace.cycles-pp.__flush_workqueue.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync
     61.48           +14.6       76.09        perf-profile.calltrace.cycles-pp.xlog_cil_push_now.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write
     68.74           +14.8       83.60        perf-profile.calltrace.cycles-pp.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write.vfs_write.ksys_write
     64.97           +15.1       80.03        perf-profile.calltrace.cycles-pp.xlog_cil_force_seq.xfs_log_force_seq.xfs_file_fsync.xfs_file_buffered_write.vfs_write
     14.86   5%     -14.9        0.00        perf-profile.children.cycles-pp.submit_bio_wait
     14.96   5%     -14.8        0.12   4%  perf-profile.children.cycles-pp.md_handle_request
     14.94   5%     -14.8        0.11   3%  perf-profile.children.cycles-pp.raid0_make_request
     14.83   5%     -14.8        0.00        perf-profile.children.cycles-pp.md_flush_request
     14.88   5%     -14.8        0.06   6%  perf-profile.children.cycles-pp.blkdev_issue_flush
     15.82   5%     -14.5        1.32   3%  perf-profile.children.cycles-pp.__submit_bio_noacct
     15.81   5%     -14.5        1.31   3%  perf-profile.children.cycles-pp.__submit_bio
     13.86   5%     -13.6        0.29   3%  perf-profile.children.cycles-pp._raw_spin_lock_irq
     22.32   3%     -13.1        9.23   4%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
      1.96   9%      -1.5        0.49   4%  perf-profile.children.cycles-pp.intel_idle_irq
      9.70   3%      -1.1        8.61   3%  perf-profile.children.cycles-pp.start_secondary
      9.80   3%      -1.1        8.71   3%  perf-profile.children.cycles-pp.common_startup_64
      9.80   3%      -1.1        8.71   3%  perf-profile.children.cycles-pp.cpu_startup_entry
      9.79   3%      -1.1        8.71   3%  perf-profile.children.cycles-pp.do_idle
      9.20   3%      -1.0        8.25   3%  perf-profile.children.cycles-pp.cpuidle_idle_call
      9.04   3%      -0.9        8.11   3%  perf-profile.children.cycles-pp.cpuidle_enter
      9.04   3%      -0.9        8.11   3%  perf-profile.children.cycles-pp.cpuidle_enter_state
      2.21            -0.4        1.78   2%  perf-profile.children.cycles-pp.worker_thread
      2.22            -0.4        1.79   2%  perf-profile.children.cycles-pp.kthread
      2.22            -0.4        1.79   2%  perf-profile.children.cycles-pp.ret_from_fork
      2.22            -0.4        1.79   2%  perf-profile.children.cycles-pp.ret_from_fork_asm
      2.08            -0.4        1.68   2%  perf-profile.children.cycles-pp.process_one_work
      0.57            -0.3        0.24        perf-profile.children.cycles-pp.__wake_up
      0.63            -0.3        0.32   2%  perf-profile.children.cycles-pp.__wake_up_common
      1.26            -0.3        0.99        perf-profile.children.cycles-pp.try_to_wake_up
      3.56   2%      -0.2        3.34   4%  perf-profile.children.cycles-pp.xlog_wait_on_iclog
      0.46   2%      -0.1        0.36   2%  perf-profile.children.cycles-pp.select_task_rq
      0.86   3%      -0.1        0.75   2%  perf-profile.children.cycles-pp.asm_sysvec_apic_timer_interrupt
      0.43   2%      -0.1        0.33   2%  perf-profile.children.cycles-pp.select_task_rq_fair
      0.64            -0.1        0.55   2%  perf-profile.children.cycles-pp.ttwu_do_activate
      0.71   3%      -0.1        0.62   3%  perf-profile.children.cycles-pp.activate_task
      0.57            -0.1        0.48        perf-profile.children.cycles-pp.__flush_smp_call_function_queue
      0.17   2%      -0.1        0.08        perf-profile.children.cycles-pp.xlog_state_release_iclog
      0.48            -0.1        0.41   2%  perf-profile.children.cycles-pp.sched_ttwu_pending
      0.61   3%      -0.1        0.54   3%  perf-profile.children.cycles-pp.enqueue_task_fair
      0.28   3%      -0.1        0.21   3%  perf-profile.children.cycles-pp.select_idle_sibling
      0.19            -0.1        0.13   2%  perf-profile.children.cycles-pp.schedule_idle
      0.22   3%      -0.1        0.16   4%  perf-profile.children.cycles-pp.select_idle_cpu
      0.47   4%      -0.1        0.41   5%  perf-profile.children.cycles-pp.update_load_avg
      0.35   2%      -0.1        0.29   2%  perf-profile.children.cycles-pp.flush_smp_call_function_queue
      0.42   3%      -0.1        0.37   2%  perf-profile.children.cycles-pp.enqueue_entity
      0.11   6%      -0.1        0.06   8%  perf-profile.children.cycles-pp.finish_task_switch
      0.18   5%      -0.0        0.13   5%  perf-profile.children.cycles-pp.available_idle_cpu
      0.33            -0.0        0.28        perf-profile.children.cycles-pp.xlog_write
      0.12   3%      -0.0        0.07   5%  perf-profile.children.cycles-pp.xlog_write_partial
      0.30   3%      -0.0        0.25   3%  perf-profile.children.cycles-pp.asm_sysvec_call_function_single
      0.12   4%      -0.0        0.07   5%  perf-profile.children.cycles-pp.xlog_write_get_more_iclog_space
      0.37   5%      -0.0        0.32   8%  perf-profile.children.cycles-pp.dequeue_entity
      0.08            -0.0        0.03  70%  perf-profile.children.cycles-pp.__cond_resched
      0.46            -0.0        0.41        perf-profile.children.cycles-pp.xlog_cil_push_work
      0.27   3%      -0.0        0.23   3%  perf-profile.children.cycles-pp.sysvec_call_function_single
      0.08   6%      -0.0        0.04  44%  perf-profile.children.cycles-pp.select_idle_core
      0.26   2%      -0.0        0.22   3%  perf-profile.children.cycles-pp.__sysvec_call_function_single
      0.12   3%      -0.0        0.09   5%  perf-profile.children.cycles-pp.queue_work_on
      0.14   3%      -0.0        0.12   6%  perf-profile.children.cycles-pp.prepare_task_switch
      0.12   3%      -0.0        0.09        perf-profile.children.cycles-pp.ttwu_queue_wakelist
      0.26   5%      -0.0        0.23   6%  perf-profile.children.cycles-pp.update_curr
      0.12            -0.0        0.10   5%  perf-profile.children.cycles-pp.perf_trace_sched_wakeup_template
      0.13   3%      -0.0        0.11        perf-profile.children.cycles-pp.wake_affine
      0.08   4%      -0.0        0.06   8%  perf-profile.children.cycles-pp.set_next_entity
      0.10   5%      -0.0        0.07   6%  perf-profile.children.cycles-pp.kick_pool
      0.11   4%      -0.0        0.09   4%  perf-profile.children.cycles-pp.__queue_work
      0.10   3%      -0.0        0.08   4%  perf-profile.children.cycles-pp.__switch_to_asm
      0.10   4%      -0.0        0.08   6%  perf-profile.children.cycles-pp.switch_mm_irqs_off
      0.07            -0.0        0.05        perf-profile.children.cycles-pp.__smp_call_single_queue
      0.11            -0.0        0.09        perf-profile.children.cycles-pp.xlog_cil_set_ctx_write_state
      0.10            -0.0        0.08   4%  perf-profile.children.cycles-pp.task_h_load
      0.08   4%      -0.0        0.06        perf-profile.children.cycles-pp.sched_mm_cid_migrate_to
      0.08   4%      -0.0        0.06        perf-profile.children.cycles-pp.set_task_cpu
      0.07   5%      -0.0        0.05        perf-profile.children.cycles-pp.__switch_to
      0.13   4%      -0.0        0.11   3%  perf-profile.children.cycles-pp.menu_select
      0.13   6%      -0.0        0.11   5%  perf-profile.children.cycles-pp.reweight_entity
      0.11            -0.0        0.09   4%  perf-profile.children.cycles-pp.xlog_cil_write_commit_record
      0.06   6%      -0.0        0.05        perf-profile.children.cycles-pp.___perf_sw_event
      0.08   5%      -0.0        0.07   6%  perf-profile.children.cycles-pp.avg_vruntime
      0.06            -0.0        0.05        perf-profile.children.cycles-pp.perf_tp_event
      0.06            -0.0        0.05        perf-profile.children.cycles-pp.place_entity
      0.06            -0.0        0.05        perf-profile.children.cycles-pp.sched_clock
      0.05            +0.0        0.06        perf-profile.children.cycles-pp.rep_movs_alternative
      0.05            +0.0        0.06   6%  perf-profile.children.cycles-pp.kfree
      0.06            +0.0        0.07   5%  perf-profile.children.cycles-pp.copy_page_from_iter_atomic
      0.10   3%      +0.0        0.12   4%  perf-profile.children.cycles-pp.xfs_inode_item_format_data_fork
      0.05            +0.0        0.06   7%  perf-profile.children.cycles-pp.xfs_trans_read_buf_map
      0.06            +0.0        0.07   6%  perf-profile.children.cycles-pp.xfs_btree_lookup_get_block
      0.07   5%      +0.0        0.08   5%  perf-profile.children.cycles-pp.filemap_get_entry
      0.09   5%      +0.0        0.10   3%  perf-profile.children.cycles-pp.memcpy_orig
      0.12   3%      +0.0        0.14   3%  perf-profile.children.cycles-pp.xlog_state_clean_iclog
      0.07   5%      +0.0        0.08   5%  perf-profile.children.cycles-pp.filemap_dirty_folio
      0.07            +0.0        0.09   5%  perf-profile.children.cycles-pp.iomap_set_range_uptodate
      0.07   5%      +0.0        0.08   5%  perf-profile.children.cycles-pp.writeback_get_folio
      0.07            +0.0        0.09   5%  perf-profile.children.cycles-pp.xfs_end_bio
      0.06   9%      +0.0        0.07   5%  perf-profile.children.cycles-pp.io_schedule
      0.10            +0.0        0.12   3%  perf-profile.children.cycles-pp.xfs_buffered_write_iomap_begin
      0.09            +0.0        0.11   6%  perf-profile.children.cycles-pp.xfs_btree_lookup
      0.10   3%      +0.0        0.12   5%  perf-profile.children.cycles-pp.writeback_iter
      0.09            +0.0        0.11        perf-profile.children.cycles-pp.xfs_trans_committed_bulk
      0.26            +0.0        0.28        perf-profile.children.cycles-pp.flush_workqueue_prep_pwqs
      0.10            +0.0        0.12   3%  perf-profile.children.cycles-pp.__filemap_get_folio
      0.07   7%      +0.0        0.09   4%  perf-profile.children.cycles-pp.folio_wait_bit_common
      0.16   3%      +0.0        0.19   3%  perf-profile.children.cycles-pp.xfs_inode_item_format
      0.08   5%      +0.0        0.11        perf-profile.children.cycles-pp.__filemap_fdatawait_range
      0.07   5%      +0.0        0.09   5%  perf-profile.children.cycles-pp.wake_page_function
      0.07   7%      +0.0        0.09   4%  perf-profile.children.cycles-pp.folio_wait_writeback
      0.12   4%      +0.0        0.14   2%  perf-profile.children.cycles-pp.iomap_writepage_map_blocks
      0.07   6%      +0.0        0.10   5%  perf-profile.children.cycles-pp.folio_wake_bit
      0.13   2%      +0.0        0.16   2%  perf-profile.children.cycles-pp.llseek
      0.03  70%      +0.0        0.06        perf-profile.children.cycles-pp.get_jiffies_update
      0.12   3%      +0.0        0.15   2%  perf-profile.children.cycles-pp.iomap_iter
      0.14   5%      +0.0        0.16   3%  perf-profile.children.cycles-pp.__mutex_unlock_slowpath
      0.03  70%      +0.0        0.06   6%  perf-profile.children.cycles-pp.tmigr_requires_handle_remote
      0.04  44%      +0.0        0.07        perf-profile.children.cycles-pp.__lruvec_stat_mod_folio
      0.14   2%      +0.0        0.17   4%  perf-profile.children.cycles-pp.iomap_write_end
      0.04  45%      +0.0        0.07   6%  perf-profile.children.cycles-pp.xfs_trans_alloc_inode
      0.03  70%      +0.0        0.06   7%  perf-profile.children.cycles-pp.xfs_map_blocks
      0.15   3%      +0.0        0.18   2%  perf-profile.children.cycles-pp.iomap_write_begin
      0.11   5%      +0.0        0.14   3%  perf-profile.children.cycles-pp.wake_up_q
      0.14   3%      +0.0        0.17   3%  perf-profile.children.cycles-pp.xlog_cil_committed
      0.14   3%      +0.0        0.17   2%  perf-profile.children.cycles-pp.xlog_cil_process_committed
      0.03  70%      +0.0        0.07   8%  perf-profile.children.cycles-pp.balance_dirty_pages_ratelimited_flags
      0.22            +0.0        0.26   2%  perf-profile.children.cycles-pp.xlog_cil_insert_format_items
      0.15   2%      +0.0        0.19   5%  perf-profile.children.cycles-pp.xfs_bmap_add_extent_unwritten_real
      0.16   2%      +0.0        0.20   5%  perf-profile.children.cycles-pp.xfs_bmapi_convert_unwritten
      0.02 141%      +0.0        0.06  13%  perf-profile.children.cycles-pp.xlog_grant_push_threshold
      0.28   4%      +0.0        0.32   2%  perf-profile.children.cycles-pp.update_process_times
      0.15            +0.0        0.19        perf-profile.children.cycles-pp.entry_SYSRETQ_unsafe_stack
      0.32   3%      +0.0        0.36   3%  perf-profile.children.cycles-pp.tick_nohz_handler
      0.18   2%      +0.0        0.23   4%  perf-profile.children.cycles-pp.xfs_bmapi_write
      0.27   2%      +0.0        0.32        perf-profile.children.cycles-pp.xlog_ioend_work
      0.36   4%      +0.0        0.41   3%  perf-profile.children.cycles-pp.__hrtimer_run_queues
      0.26   2%      +0.0        0.31        perf-profile.children.cycles-pp.xlog_state_do_callback
      0.26   2%      +0.0        0.31        perf-profile.children.cycles-pp.xlog_state_do_iclog_callbacks
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.xa_load
      0.00            +0.1        0.05        perf-profile.children.cycles-pp.xfs_iext_lookup_extent
      0.02 141%      +0.1        0.07   5%  perf-profile.children.cycles-pp.up_write
      0.31   2%      +0.1        0.38   2%  perf-profile.children.cycles-pp.xlog_cil_insert_items
      0.41   4%      +0.1        0.47   2%  perf-profile.children.cycles-pp.hrtimer_interrupt
      0.41   3%      +0.1        0.48   3%  perf-profile.children.cycles-pp.__sysvec_apic_timer_interrupt
      0.13  12%      +0.1        0.20   8%  perf-profile.children.cycles-pp.xfs_log_ticket_ungrant
      0.30            +0.1        0.38   3%  perf-profile.children.cycles-pp.copy_to_brd
      0.56   3%      +0.1        0.64   2%  perf-profile.children.cycles-pp.sysvec_apic_timer_interrupt
      0.35            +0.1        0.43   3%  perf-profile.children.cycles-pp.brd_submit_bio
      0.95            +0.1        1.04        perf-profile.children.cycles-pp.mutex_spin_on_owner
      0.11  11%      +0.1        0.21  12%  perf-profile.children.cycles-pp.xlog_grant_add_space
      0.44            +0.1        0.55   2%  perf-profile.children.cycles-pp.iomap_write_iter
      0.19   5%      +0.1        0.30   6%  perf-profile.children.cycles-pp.iomap_finish_ioends
      0.21  11%      +0.1        0.35  12%  perf-profile.children.cycles-pp.xfs_log_reserve
      0.22  11%      +0.1        0.36  11%  perf-profile.children.cycles-pp.xfs_trans_reserve
      0.40   2%      +0.1        0.54   2%  perf-profile.children.cycles-pp.xfs_iomap_write_unwritten
      0.57            +0.1        0.71   2%  perf-profile.children.cycles-pp.iomap_file_buffered_write
      0.25  10%      +0.1        0.39  10%  perf-profile.children.cycles-pp.xfs_trans_alloc
      0.13  11%      +0.2        0.32  16%  perf-profile.children.cycles-pp.schedule_preempt_disabled
      0.23  13%      +0.2        0.46  12%  perf-profile.children.cycles-pp.sb_mark_inode_writeback
      0.25  12%      +0.2        0.50  12%  perf-profile.children.cycles-pp.sb_clear_inode_writeback
      0.59   2%      +0.3        0.85   2%  perf-profile.children.cycles-pp.xfs_end_io
      0.58   2%      +0.3        0.84   3%  perf-profile.children.cycles-pp.xfs_end_ioend
      0.46   6%      +0.3        0.72   6%  perf-profile.children.cycles-pp.md_end_clone_io
      0.30  10%      +0.3        0.57   9%  perf-profile.children.cycles-pp.__folio_start_writeback
      0.11  11%      +0.3        0.38  13%  perf-profile.children.cycles-pp.rwsem_down_read_slowpath
      0.43   7%      +0.3        0.72   7%  perf-profile.children.cycles-pp.iomap_writepage_map
      0.16   9%      +0.3        0.46  11%  perf-profile.children.cycles-pp.down_read
      0.44   8%      +0.3        0.76   7%  perf-profile.children.cycles-pp.__folio_end_writeback
      0.52   7%      +0.4        0.88   6%  perf-profile.children.cycles-pp.folio_end_writeback
      0.54   7%      +0.4        0.90   6%  perf-profile.children.cycles-pp.iomap_finish_ioend
      0.92   2%      +0.4        1.30   3%  perf-profile.children.cycles-pp.iomap_submit_ioend
      0.72   3%      +0.4        1.16   5%  perf-profile.children.cycles-pp.xlog_cil_commit
      0.82   3%      +0.5        1.28   5%  perf-profile.children.cycles-pp.__xfs_trans_commit
      0.92   4%      +0.5        1.43   6%  perf-profile.children.cycles-pp.xfs_vn_update_time
      0.94   4%      +0.5        1.46   6%  perf-profile.children.cycles-pp.kiocb_modified
      0.96   4%      +0.5        1.48   6%  perf-profile.children.cycles-pp.xfs_file_write_checks
      6.96   2%      +0.5        7.49   3%  perf-profile.children.cycles-pp.intel_idle
      1.45   4%      +0.7        2.15   5%  perf-profile.children.cycles-pp.iomap_writepages
      1.46   4%      +0.7        2.16   4%  perf-profile.children.cycles-pp.xfs_vm_writepages
      1.48   4%      +0.7        2.18   4%  perf-profile.children.cycles-pp.do_writepages
      1.51   4%      +0.7        2.22   4%  perf-profile.children.cycles-pp.filemap_fdatawrite_wbc
      1.51   3%      +0.7        2.23   4%  perf-profile.children.cycles-pp.__filemap_fdatawrite_range
      1.61   3%      +0.8        2.36   4%  perf-profile.children.cycles-pp.file_write_and_wait_range
     85.48            +0.8       86.24        perf-profile.children.cycles-pp.xfs_file_fsync
     87.06            +1.4       88.49        perf-profile.children.cycles-pp.xfs_file_buffered_write
     87.19            +1.5       88.65        perf-profile.children.cycles-pp.vfs_write
     87.20            +1.5       88.66        perf-profile.children.cycles-pp.ksys_write
     87.66            +1.5       89.14        perf-profile.children.cycles-pp.write
     87.50            +1.5       88.98        perf-profile.children.cycles-pp.do_syscall_64
     87.50            +1.5       88.99        perf-profile.children.cycles-pp.entry_SYSCALL_64_after_hwframe
     56.76           +13.7       70.44        perf-profile.children.cycles-pp.osq_lock
     57.89           +13.9       71.74        perf-profile.children.cycles-pp.__mutex_lock
     60.36           +14.6       74.96        perf-profile.children.cycles-pp.__flush_workqueue
     61.49           +14.6       76.10        perf-profile.children.cycles-pp.xlog_cil_push_now
     68.74           +14.8       83.60        perf-profile.children.cycles-pp.xfs_log_force_seq
     64.98           +15.1       80.03        perf-profile.children.cycles-pp.xlog_cil_force_seq
     22.30   3%     -13.1        9.22   4%  perf-profile.self.cycles-pp.native_queued_spin_lock_slowpath
      1.91   9%      -1.4        0.46   5%  perf-profile.self.cycles-pp.intel_idle_irq
      0.24   2%      -0.1        0.18   4%  perf-profile.self.cycles-pp._raw_spin_lock_irq
      0.18   4%      -0.1        0.12   6%  perf-profile.self.cycles-pp.available_idle_cpu
      0.37   2%      -0.0        0.32   2%  perf-profile.self.cycles-pp._raw_spin_lock_irqsave
      0.20   3%      -0.0        0.17   4%  perf-profile.self.cycles-pp.update_load_avg
      0.14   3%      -0.0        0.11   3%  perf-profile.self.cycles-pp.__schedule
      0.09   4%      -0.0        0.07   8%  perf-profile.self.cycles-pp.prepare_task_switch
      0.10            -0.0        0.08   4%  perf-profile.self.cycles-pp.task_h_load
      0.10   5%      -0.0        0.08   6%  perf-profile.self.cycles-pp.__switch_to_asm
      0.08   4%      -0.0        0.06        perf-profile.self.cycles-pp.sched_mm_cid_migrate_to
      0.07   5%      -0.0        0.05   7%  perf-profile.self.cycles-pp.menu_select
      0.09   5%      -0.0        0.08   6%  perf-profile.self.cycles-pp.switch_mm_irqs_off
      0.06   7%      -0.0        0.05        perf-profile.self.cycles-pp.__switch_to
      0.07   7%      -0.0        0.05   8%  perf-profile.self.cycles-pp.enqueue_entity
      0.10   4%      -0.0        0.09   7%  perf-profile.self.cycles-pp.update_curr
      0.05            +0.0        0.06        perf-profile.self.cycles-pp.rep_movs_alternative
      0.06            +0.0        0.07   5%  perf-profile.self.cycles-pp.xas_load
      0.08   4%      +0.0        0.10   5%  perf-profile.self.cycles-pp.__flush_workqueue
      0.07            +0.0        0.08   5%  perf-profile.self.cycles-pp.iomap_set_range_uptodate
      0.08   5%      +0.0        0.10   3%  perf-profile.self.cycles-pp.memcpy_orig
      0.05   7%      +0.0        0.07   5%  perf-profile.self.cycles-pp.down_read
      0.08   5%      +0.0        0.11   4%  perf-profile.self.cycles-pp.__mutex_lock
      0.09   4%      +0.0        0.12   6%  perf-profile.self.cycles-pp.xlog_cil_insert_items
      0.03  70%      +0.0        0.06        perf-profile.self.cycles-pp.get_jiffies_update
      0.02  99%      +0.0        0.06   7%  perf-profile.self.cycles-pp.__folio_end_writeback
      0.15            +0.0        0.19        perf-profile.self.cycles-pp.entry_SYSRETQ_unsafe_stack
      0.10  12%      +0.1        0.16   9%  perf-profile.self.cycles-pp.xfs_log_ticket_ungrant
      0.00            +0.1        0.06  11%  perf-profile.self.cycles-pp.balance_dirty_pages_ratelimited_flags
      0.30   2%      +0.1        0.37   2%  perf-profile.self.cycles-pp.copy_to_brd
      0.95            +0.1        1.03        perf-profile.self.cycles-pp.mutex_spin_on_owner
      0.11  11%      +0.1        0.20  14%  perf-profile.self.cycles-pp.xlog_grant_add_space
      6.96   2%      +0.5        7.49   3%  perf-profile.self.cycles-pp.intel_idle
     56.27           +13.5       69.81        perf-profile.self.cycles-pp.osq_lock




Disclaimer:
Results have been estimated based on internal Intel analysis and are provided
for informational purposes only. Any difference in system hardware or software
design or configuration may affect actual performance.


-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests/wiki



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 05:31:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 05:31:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747165.1154492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLymA-00080K-8B; Tue, 25 Jun 2024 05:31:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747165.1154492; Tue, 25 Jun 2024 05:31:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLymA-00080D-44; Tue, 25 Jun 2024 05:31:30 +0000
Received: by outflank-mailman (input) for mailman id 747165;
 Tue, 25 Jun 2024 05:31:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLym8-000803-KM; Tue, 25 Jun 2024 05:31:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLym8-0007XS-7f; Tue, 25 Jun 2024 05:31:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLym7-00043M-Sb; Tue, 25 Jun 2024 05:31:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLym7-0000CR-S6; Tue, 25 Jun 2024 05:31:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kJVRie+R02PQik/n6N9k6w+Qg8V1rAlp8JaXe5W1g6g=; b=iYdl+TXyoCG0rzXO8It8Uq5Phx
	/+vUqsgB7gX5a4NJWyV9N7q1u4HsGvTdU0lCT/2FCzwWcv991yS5ItR0fy5y4d5V5B2k6SxPv6lDG
	Hwcz3SYbKhs2cvMNYFG99x++TcYUWe6Ck8mV6dNbppbbzMvqqoEPjfoBVauBdjuWERRU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186473-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186473: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b14dae96c07ef27cc7f8107ddaa16989e9ab024b
X-Osstest-Versions-That:
    xen=c56f1ef577831ec70645ca5874d54f2e698c6761
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 05:31:27 +0000

flight 186473 xen-unstable-smoke real [real]
flight 186476 xen-unstable-smoke real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186473/
http://logs.test-lab.xenproject.org/osstest/logs/186476/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 186470

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b14dae96c07ef27cc7f8107ddaa16989e9ab024b
baseline version:
 xen                  c56f1ef577831ec70645ca5874d54f2e698c6761

Last test of basis   186470  2024-06-24 19:02:11 Z    0 days
Testing same since   186473  2024-06-25 01:00:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Federico Serafini <federico.serafini@bugseng.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b14dae96c07ef27cc7f8107ddaa16989e9ab024b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 21 21:57:59 2024 +0100

    x86/pagewalk: Address MISRA R8.3 violation in guest_walk_tables()
    
    Commit 4c5d78a10dc8 ("x86/pagewalk: Re-implement the pagetable walker")
    intentionally renamed guest_walk_tables()'s 'pfec' parameter to 'walk' because
    it's not a PageFault Error Code, despite the name of some of the constants
    passed in.  Sadly the constants-cleanup I've been meaning to do since then
    still hasn't come to pass.
    
    Update the declaration to match, to placate MISRA.
    
    Fixes: 4c5d78a10dc8 ("x86/pagewalk: Re-implement the pagetable walker")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 6e07f41b27c3267e4528327ebc44dd03ac869ae3
Author: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Date:   Fri Jun 21 15:40:47 2024 +0200

    common/unlzo: address violation of MISRA C Rule 7.3
    
    This addresses violations of MISRA C:2012 Rule 7.3 which states as
    following: the lowercase character `l' shall not be used in a literal
    suffix.
    
    The file common/unlzo.c defined the constant LZO_BLOCK_SIZE with a
    non-compliant lowercase 'l' suffix.
    It is now defined as '256*1024L'.
    
    No functional change.
    
    Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit c2f4ea4dc9eba454dc2e761bfd39ccb72870d268
Author: Federico Serafini <federico.serafini@bugseng.com>
Date:   Fri Jun 21 17:32:41 2024 +0200

    automation/eclair: add more guidelines to the monitored set
    
    Add more accepted guidelines to the monitored set to check them at each
    commit.
    
    Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 5add709cc6595af95e181091dbbf6e00aae836e3
Author: Federico Serafini <federico.serafini@bugseng.com>
Date:   Thu Jun 20 14:50:34 2024 +0200

    automation/eclair: add deviations of MISRA C Rule 5.5
    
    MISRA C Rule 5.5 states that "Identifiers shall be distinct from macro
    names".
    
    Update ECLAIR configuration to deviate:
    - macros expanding to their own name;
    - clashes between macros and non-callable entities;
    - clashes related to the selection of specific implementations of string
      handling functions.
    
    Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 912412f97a293f5e1deece134d78c52b1b0b8856
Author: Federico Serafini <federico.serafini@bugseng.com>
Date:   Fri Jun 14 11:15:38 2024 +0200

    automation/eclair: add deviation for MISRA C Rule 17.7
    
    Update ECLAIR configuration to deviate some cases where not using
    the return value of a function is not dangerous.
    
    Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 05:46:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 05:46:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747175.1154500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLz0P-0001DV-DE; Tue, 25 Jun 2024 05:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747175.1154500; Tue, 25 Jun 2024 05:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLz0P-0001DO-Ah; Tue, 25 Jun 2024 05:46:13 +0000
Received: by outflank-mailman (input) for mailman id 747175;
 Tue, 25 Jun 2024 05:46:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLz0O-0001DC-DP
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 05:46:12 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3913276f-32b6-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 07:46:09 +0200 (CEST)
Received: from [10.176.134.80] (unknown [160.78.253.181])
 by support.bugseng.com (Postfix) with ESMTPSA id AEAD44EE0738;
 Tue, 25 Jun 2024 07:46:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3913276f-32b6-11ef-b4bb-af5377834399
Message-ID: <097ad012-665f-4df8-89ee-e532fa403a5c@bugseng.com>
Date: Tue, 25 Jun 2024 07:46:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3] automation/eclair: extend existing deviations of
 MISRA C Rule 16.3
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 oleksii.kurochko@gmail.com
References: <71a69d25e7889ed6e8546b5cd18d423006d69ceb.1718356683.git.federico.serafini@bugseng.com>
 <alpine.DEB.2.22.394.2406191821310.2572888@ubuntu-linux-20-04-desktop>
 <alpine.DEB.2.22.394.2406241736560.3870429@ubuntu-linux-20-04-desktop>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <alpine.DEB.2.22.394.2406241736560.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 25/06/24 02:38, Stefano Stabellini wrote:
> On Wed, 19 Jun 2024, Stefano Stabellini wrote:
>> On Fri, 14 Jun 2024, Federico Serafini wrote:
>>> Update ECLAIR configuration to deviate more cases where an
>>> unintentional fallthrough cannot happen.
>>>
>>> Add Rule 16.3 to the monitored set and tag it as clean for arm.
>>>
>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>>
>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> Hi Oleksii,
> 
> I would like to ask for a release-ack as this patch only increases
> deviations, hence only affecting the static analysis jobs, for a rule
> that is non-blocking.

A committed patch already added Rule 16.3 to the monitored set;
I will send a v4 rebased against the current staging branch
with Oleksii in CC.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 05:52:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 05:52:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747183.1154510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLz6o-0002qj-27; Tue, 25 Jun 2024 05:52:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747183.1154510; Tue, 25 Jun 2024 05:52:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLz6n-0002qc-Vm; Tue, 25 Jun 2024 05:52:49 +0000
Received: by outflank-mailman (input) for mailman id 747183;
 Tue, 25 Jun 2024 05:52:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o8Ww=N3=bombadil.srs.infradead.org=BATV+ee3bcc3f6418456cddbd+7611+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sLz6k-0002qU-VB
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 05:52:48 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2417b83c-32b7-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 07:52:45 +0200 (CEST)
Received: from [2001:4bb8:2dc:a91f:de7e:f7c0:f265:d96e] (helo=localhost)
 by bombadil.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sLz6f-00000001jNR-2t83; Tue, 25 Jun 2024 05:52:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2417b83c-32b7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=Content-Transfer-Encoding:
	Content-Type:MIME-Version:Message-ID:Date:Subject:Cc:To:From:Sender:Reply-To:
	Content-ID:Content-Description:In-Reply-To:References;
	bh=YTqoi2mljEbOakIcyABD2RlNt0qxMc+ByhNNWPILtBo=; b=qxjfUNGrFRJ3+bkq5zQGMbxXyp
	NShKIzRbzMRz0vpAorvKvJfJjSQJfoJtrm43kX6bIGy2MQyBPJEU8FtCghwFFE3FmruB0OtD3b88k
	tUs2LJ8Ad/65Sq08z3+iGg0G0cR2WGaY3qNf4q3HcyMd1SkbBQmlpTLawnaRNnuR+Rl1J1ufWNQe+
	XezKmH0tg/tUX5pBJyykoXC9AWG7lr3Z2S0xN+kMnjM6zGaAh19lcXl7UiQPwL7+M5WR+hijlW+GT
	vRHwRtBHK+VwoSA+GzA0MNGmfYmYgj3/KoMe1g1JQY4GUE//zEKVqxJJxEu/IIZ06T1SsGkYEJWlk
	kzmtEWWw==;
From: Christoph Hellwig <hch@lst.de>
To: roger.pau@citrix.com
Cc: jgross@suse.com,
	marmarek@invisiblethingslab.com,
	xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	Rusty Bird <rustybird@net-c.com>
Subject: [PATCH] xen-blkfront: fix sector_size propagation to the block layer
Date: Tue, 25 Jun 2024 07:52:38 +0200
Message-ID: <20240625055238.7934-1-hch@lst.de>
X-Mailer: git-send-email 2.43.0
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Ensure that info->sector_size and info->physical_sector_size are set
before the call to blkif_set_queue_limits by doing away with the
local variables and arguments that propagate them.

Thanks to Marek Marczykowski-Górecki and Jürgen Groß for root causing
the issue.

Fixes: ba3f67c11638 ("xen-blkfront: atomically update queue limits")
Reported-by: Rusty Bird <rustybird@net-c.com>
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/block/xen-blkfront.c | 16 +++++-----------
 1 file changed, 5 insertions(+), 11 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index fa3a2ba525458b..59ce113b882a0e 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1070,8 +1070,7 @@ static char *encode_disk_name(char *ptr, unsigned int n)
 }
 
 static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
-		struct blkfront_info *info, u16 sector_size,
-		unsigned int physical_sector_size)
+		struct blkfront_info *info)
 {
 	struct queue_limits lim = {};
 	struct gendisk *gd;
@@ -1165,8 +1164,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
 
 	info->rq = gd->queue;
 	info->gd = gd;
-	info->sector_size = sector_size;
-	info->physical_sector_size = physical_sector_size;
 
 	xlvbd_flush(info);
 
@@ -2320,8 +2317,6 @@ static void blkfront_gather_backend_features(struct blkfront_info *info)
 static void blkfront_connect(struct blkfront_info *info)
 {
 	unsigned long long sectors;
-	unsigned long sector_size;
-	unsigned int physical_sector_size;
 	int err, i;
 	struct blkfront_ring_info *rinfo;
 
@@ -2360,7 +2355,7 @@ static void blkfront_connect(struct blkfront_info *info)
 	err = xenbus_gather(XBT_NIL, info->xbdev->otherend,
 			    "sectors", "%llu", &sectors,
 			    "info", "%u", &info->vdisk_info,
-			    "sector-size", "%lu", &sector_size,
+			    "sector-size", "%lu", &info->sector_size,
 			    NULL);
 	if (err) {
 		xenbus_dev_fatal(info->xbdev, err,
@@ -2374,9 +2369,9 @@ static void blkfront_connect(struct blkfront_info *info)
 	 * provide this. Assume physical sector size to be the same as
 	 * sector_size in that case.
 	 */
-	physical_sector_size = xenbus_read_unsigned(info->xbdev->otherend,
+	info->physical_sector_size = xenbus_read_unsigned(info->xbdev->otherend,
 						    "physical-sector-size",
-						    sector_size);
+						    info->sector_size);
 	blkfront_gather_backend_features(info);
 	for_each_rinfo(info, rinfo, i) {
 		err = blkfront_setup_indirect(rinfo);
@@ -2388,8 +2383,7 @@ static void blkfront_connect(struct blkfront_info *info)
 		}
 	}
 
-	err = xlvbd_alloc_gendisk(sectors, info, sector_size,
-				  physical_sector_size);
+	err = xlvbd_alloc_gendisk(sectors, info);
 	if (err) {
 		xenbus_dev_fatal(info->xbdev, err, "xlvbd_add at %s",
 				 info->xbdev->otherend);
-- 
2.43.0



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:01:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:01:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747192.1154523 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzEc-0004Vu-Qm; Tue, 25 Jun 2024 06:00:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747192.1154523; Tue, 25 Jun 2024 06:00:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzEc-0004Vn-Nx; Tue, 25 Jun 2024 06:00:54 +0000
Received: by outflank-mailman (input) for mailman id 747192;
 Tue, 25 Jun 2024 06:00:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLzEb-0004Vh-48
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:00:53 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 47086dbd-32b8-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 08:00:52 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ebed33cb65so56328441fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 23:00:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7069afb8bd8sm403788b3a.145.2024.06.24.23.00.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 23:00:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47086dbd-32b8-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719295251; x=1719900051; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=QGYTSjuyk5eWZDroKLLfQ8zBgMUfQ/0YGkUFlSNGDdY=;
        b=JHT4r4J+XvSDTg1d6Hecc+cV1rB9CyI2YkNEm9V09AkO1rpV+abwxA0eeVaKOxuDbA
         oYSbrJpRFz+QSlDrILg0upHmiUnMA7Xz094AQWXWkZwIwSR3H5NvV3fN8sdCfuqG308E
         O0YqpXSABnpNi08R7jMhfnBTu+lk36LgZ7PA4nbR5DsZOSmhTf8lVm8WR4vtWF13qPuG
         rBHrJis/gC7z+vrX25e1ggYc+4qFVI7zR2icvaQjBuF5DURgh2kzzaGkqh/APgjuDfQB
         I2LLoWNVKWBm6w3VJPhLbJPdkyXx0OrjUxUE2SzIyX6nmeF02yJ+qUfqdaHN3q0ZdgvV
         kJdQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719295251; x=1719900051;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=QGYTSjuyk5eWZDroKLLfQ8zBgMUfQ/0YGkUFlSNGDdY=;
        b=Y5EWtkBFx7V3fr7gZxMIYFR3+53Xt6fcObbH2nQj3B0ohbW06mYvWZEbcZ6MKPRQ7D
         j5hJodP/7j9F0evrUkOmm4ziN8yZv0vZxLf2g2tYts6Jo6Vr8E7dN6Zgtu56gEQGQlJm
         +1ebFnhym/YSPMjOqvL/NzXIaVDn5so/CACWHh+FlfQ8TdhIONCwBIGVLJWqmL8xs8Fi
         xnnRo17tG3a5Puwo028w9B5Yh8ztWi57gLceECiH9Y6Po5nkGf6AEsQZotq1DaZw5SJZ
         9Ntno/aZ9KwddAhf7f64p3hOpUKyygPRX/xY/CwHN8I9c1fAaY7P5ud9FjNNc9O4JyRw
         p43w==
X-Forwarded-Encrypted: i=1; AJvYcCUt+O+H/xA7Y/bUh9rwIOORylwShDBeBrwntzDXnMehijB24zTjNRU1FUF/AP9HtYXQbDWEzw6HIG1DdORfBqcrjQ05NPhEaNh1UgmVDg0=
X-Gm-Message-State: AOJu0YzFBY/SYenDuLqCBQXkQO1qqiwuLvqiu6yRApDrIlpoa9KOenOv
	zmwzRffXwuYI7iuHGldEnbKFkMQjZaHPueLeht2Y+QtLVv1AtS1pyGmNugFgcQ==
X-Google-Smtp-Source: AGHT+IFHPCziqXXYtQLOVV6YdqS2rDgeQmRPUWNbvwHxqhexrUH8yl/yqPxSGUXTK+BddbkVzljhlA==
X-Received: by 2002:a05:651c:209:b0:2ec:5777:aa61 with SMTP id 38308e7fff4ca-2ec5931002amr50758901fa.3.1719295251404;
        Mon, 24 Jun 2024 23:00:51 -0700 (PDT)
Message-ID: <80d0578d-26c0-4650-9edf-6926c055d415@suse.com>
Date: Tue, 25 Jun 2024 08:00:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 24.06.2024 23:23, Tamas K Lengyel wrote:
> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
>>>
>>> +libfuzzer-harness: $(OBJS) cpuid.o
>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
>>
>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
>> tree anywhere.
> 
> It's used by oss-fuzz, otherwise it's not doing anything.
> 
>>
>> I'm further surprised you get away here without wrappers.o.
> 
> Wrappers.o was actually breaking the build for oss-fuzz at the linking
> stage. It works just fine without it.

I'm worried here, to be honest. The wrappers serve a pretty important
role, and I'm having a hard time seeing why they shouldn't be needed
here when they're needed both for the test and afl harnesses. Could
you add some more detail on the build issues you encountered?

Jan
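[Archive note: for readers unfamiliar with the interface under discussion,
a libFuzzer target exports LLVMFuzzerTestOneInput(); main() is supplied by
the fuzzing engine when built with -fsanitize=fuzzer (or, on oss-fuzz, by
whatever $LIB_FUZZING_ENGINE names at link time). The following is a
minimal, hypothetical sketch of that shape only -- it is NOT the actual
Xen x86 instruction emulator harness:

```c
#include <stddef.h>
#include <stdint.h>

/* Called by the fuzzing engine once per generated input.  In the real
 * harness, `data` would be fed to the x86 instruction emulator; here the
 * emulation step is elided and only the required contract is shown. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    if (size < 1)
        return 0;   /* Input too short: accept silently, engine mutates on. */

    /* ... hand `data`/`size` to the code under test ... */
    (void)data;

    return 0;       /* Must return 0; non-zero values are reserved. */
}
```

Built standalone as `clang -fsanitize=fuzzer target.c`, libFuzzer links in
its own main(); oss-fuzz instead links the engine named by
$LIB_FUZZING_ENGINE, which is why that variable appears in the Makefile
rule quoted above.]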


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:01:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:01:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.746929.1154534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzFH-0004zt-1w; Tue, 25 Jun 2024 06:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 746929.1154534; Tue, 25 Jun 2024 06:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzFG-0004zm-VO; Tue, 25 Jun 2024 06:01:34 +0000
Received: by outflank-mailman (input) for mailman id 746929;
 Mon, 24 Jun 2024 20:57:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1jEd=N2=net-c.com=rustybird@srs-se1.protection.inumbo.net>)
 id 1sLqkj-0006zn-6X
 for xen-devel@lists.xenproject.org; Mon, 24 Jun 2024 20:57:29 +0000
Received: from mailo.com (msg-2.mailo.com [213.182.54.12])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5c99ac83-326c-11ef-90a3-e314d9c70b13;
 Mon, 24 Jun 2024 22:57:26 +0200 (CEST)
Received: by b221-6.in.mailobj.net [192.168.90.26] with ESMTP
 via ip-20.mailobj.net [213.182.54.20]
 Mon, 24 Jun 2024 22:57:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c99ac83-326c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple; d=net-c.com; s=mailo;
	t=1719262640; bh=yL7RpNY0mzppYqKUJ2NLIHNEkbd+z2/jheYt/lByE0Q=;
	h=X-EA-Auth:Date:From:To:Cc:Subject:Message-ID:References:
	 MIME-Version:Content-Type:Content-Transfer-Encoding:In-Reply-To;
	b=ycobJ/XxK0lDNyvZvZCfaF/btBqESLa79LJON+en38+TZ5XySXnaVs5mNa1/MRD2a
	 tD1DREkGBe/iWEGVSuGy064aZ2vRdsa/ONJfc/EvmgFprCoiXXhV6mWjWdlbIkWB2b
	 jHkLNWF/zIrChcDH2ju8hnv8d1oWiu8pAYe/YaMU=
X-EA-Auth: jzlao6B10w7KC8fjYHfmgbgCFNZVbCjLZL2oGthHEOwT9J4UezSzTA5LoR/1Goq85PA6kQAuzA2C6Nltm+C0R3RF7t21KtGhRHp48tIVmqI=
Date: Mon, 24 Jun 2024 20:56:47 +0000
From: Rusty Bird <rustybird@net-c.com>
To: Christoph Hellwig <hch@lst.de>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
	Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?= <marmarek@invisiblethingslab.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: Re: Regression in xen-blkfront regarding sector sizes
Message-ID: <Znndj9W_bCsFTxkz@mutt>
References: <Znl5FYI9CC37jJLX@mail-itl>
 <1944dd3f-1ba8-4559-b71a-056b9309ab58@suse.com>
 <20240624143826.GA8973@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240624143826.GA8973@lst.de>

Christoph Hellwig:
> On Mon, Jun 24, 2024 at 04:29:15PM +0200, Jürgen Groß wrote:
> >> Rusty suspects it's related to
> >> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/drivers/block/xen-blkfront.c?id=ba3f67c1163812b5d7ec33705c31edaa30ce6c51,
> >> so I'm cc-ing people mentioned there too.
> >
> > I think the call of blkif_set_queue_limits() in this patch should NOT precede
> > setting of info->sector_size and info->physical_sector_size, as those are
> > needed by blkif_set_queue_limits().
> 
> Yes.  Something like the patch below should fix it.  We could also stop
> passing sector_size and physical_sector_size to xlvbd_alloc_gendisk to
> clean things up a bit more.
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index fd7c0ff2139cee..9f3d68044f8882 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -1133,6 +1133,8 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
>  	if (err)
>  		goto out_release_minors;
>  
> +	info->sector_size = sector_size;
> +	info->physical_sector_size = physical_sector_size;
>  	blkif_set_queue_limits(info, &lim);
>  	gd = blk_mq_alloc_disk(&info->tag_set, &lim, info);
>  	if (IS_ERR(gd)) {
> @@ -1159,8 +1161,6 @@ static int xlvbd_alloc_gendisk(blkif_sector_t capacity,
>  
>  	info->rq = gd->queue;
>  	info->gd = gd;
> -	info->sector_size = sector_size;
> -	info->physical_sector_size = physical_sector_size;
>  
>  	xlvbd_flush(info);
>  
> 
> 

With this patch applied on top of v6.9.4, I get the correct
logical/physical block sizes and the issue is fixed. Thank you!

Rusty




From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:19:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:19:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747205.1154544 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzWe-000742-GK; Tue, 25 Jun 2024 06:19:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747205.1154544; Tue, 25 Jun 2024 06:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzWe-00073v-DZ; Tue, 25 Jun 2024 06:19:32 +0000
Received: by outflank-mailman (input) for mailman id 747205;
 Tue, 25 Jun 2024 06:19:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLzWd-00073p-Ac
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:19:31 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e0f28c03-32ba-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 08:19:29 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ebec2f11b7so55613791fa.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 23:19:29 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f02cdsm73133085ad.4.2024.06.24.23.19.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 23:19:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0f28c03-32ba-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719296368; x=1719901168; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=a+U644J+OlsI2Sj/9Id2DhqJCpOpqmaT7Dfv7XoFEVo=;
        b=VcnV9NlSeSQe0uTznzgck0ilzRy/XBmzmgwkSp8NNXvGTnKrP5awutCtNGUh3ZIT7+
         uAY0eiv9VVqZgTFms28TYJfxCVBzMd7BF/Bkq4CpW6WA8fkvDq9u3HCIwWDgGYsP64PJ
         ncTCFrFguh86xTEmPgkNq/bezn5jvbVPxypC5yBKkE1gcPuixpJn+A+pADsUCVj4ZreA
         Bl1fgda4RC9Dw/3wPJ17RNKehlb6K+xpacXgkEEsuqqd8P1D4COnPgpg/+nqW1zmuwyX
         kL1coGa4ba2f2PQcIeoLv0WcnTjMeDdzQbc0AXQHFqbPnfJthWfsJj8Btn9kFOlAs3G+
         XmLg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719296368; x=1719901168;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=a+U644J+OlsI2Sj/9Id2DhqJCpOpqmaT7Dfv7XoFEVo=;
        b=PhLe7wnQ0QNGJi8ap4gb7Ln7PX22YGHXjKXt26wYPq+IvGojKkClfcYKqxnnuHhc1r
         O1YDDCDJ57bRGTeZ7k22PsVc3i8MHz7tud3vbQc9Lj/NdL9yrIfac2rka+z2vb05NK9q
         Pr5NhyLj8SqFB2wA0j51RungAp4KvIhDySyDZWmDnWe67kueVNhzwkBaU97pf0nHc/Iv
         G77JPk19qBmm7bsZp63dbfeDTKDfa7RhQcSHNmqRn5tzb59TDjxVSBecBDafs4mCPDw8
         f2Q70svrEguj61+Sx77nFJgaIM1zTLAW6v96J2lIYvW1SdCngiqOYlDsJmx4p5eNDv0G
         zdiw==
X-Gm-Message-State: AOJu0YyMgkfzjBPlR8+k2F7i9WqdNbESXLw5SMhGJmGEwH/roNobyvkd
	c5ZxX32mUuo2b4Pb17FdHA9FwUYNVmh2ziqx7Tp59XB4bVj8zX361wL5i7VyYQ==
X-Google-Smtp-Source: AGHT+IGKzcj2ZDoELShQ4lJBfuCotbrFbJ78g2WrODw6jBO7EHOI461qEFAac6+lJKAvP9xpn83cgw==
X-Received: by 2002:a2e:98c8:0:b0:2ec:5abf:f3ae with SMTP id 38308e7fff4ca-2ec5b338aa1mr36758931fa.19.1719296368565;
        Mon, 24 Jun 2024 23:19:28 -0700 (PDT)
Message-ID: <f402157c-0dc2-4e30-820f-ecd319e9ce86@suse.com>
Date: Tue, 25 Jun 2024 08:19:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] MAINTAINERS: Update my email address again
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Anthony PERARD <anthony@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Oleksii <oleksii.kurochko@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Anthony PERARD <anthony.perard@vates.tech>
References: <20240624094030.41692-1-anthony.perard@vates.tech>
 <alpine.DEB.2.22.394.2406240927390.3870429@ubuntu-linux-20-04-desktop>
 <5238d3a6-c47f-4951-b839-a92c5ee4e571@xen.org>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <5238d3a6-c47f-4951-b839-a92c5ee4e571@xen.org>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 24.06.2024 23:40, Julien Grall wrote:
> On 24/06/2024 17:27, Stefano Stabellini wrote:
>> On Mon, 24 Jun 2024, Anthony PERARD wrote:
>>> Signed-off-by: Anthony PERARD <anthony.perard@vates.tech>
>>
>> Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> I guess this technically need an ack from the release manager. So CC 
> Oleksii.

Hmm, that's on the edge, I think. Imo an ack shouldn't be needed here,
as requiring one would mean that it could also be refused. Yet such
updates had better go in quickly, so people use up-to-date information.
I'm sure committers would apply common sense about avoiding commits at
truly "critical" times; typically at such times a commit moratorium is
in place anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:26:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:26:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747212.1154553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzcr-00009c-9C; Tue, 25 Jun 2024 06:25:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747212.1154553; Tue, 25 Jun 2024 06:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzcr-00009V-6e; Tue, 25 Jun 2024 06:25:57 +0000
Received: by outflank-mailman (input) for mailman id 747212;
 Tue, 25 Jun 2024 06:25:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLzcp-00009O-7G
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:25:55 +0000
Received: from mail-wr1-x429.google.com (mail-wr1-x429.google.com
 [2a00:1450:4864:20::429])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c5e23c48-32bb-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 08:25:53 +0200 (CEST)
Received: by mail-wr1-x429.google.com with SMTP id
 ffacd0b85a97d-35f2c9e23d3so3762407f8f.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 23:25:53 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706511b137csm7265112b3a.86.2024.06.24.23.25.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 23:25:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5e23c48-32bb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719296753; x=1719901553; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Ds7BUmnZ96D/xuLbWD36rYp/G6iSUvUfatH/VCXhgHo=;
        b=aH2KCo9PnFeHJ2K/iUDfSE8udqWTvP9YlnYCVDuCt3H3QT4gqOQc8tezbq9MJqvNcg
         ZnuvdyLA57NCLU34kHg18rsjjLGBYmFpW4DIRp+C7+7N9MEdrw4F8ek03sOYe4DvQBki
         QF6qotZ6JFtDqygIo5lWWPNyUUVrtw/ajYBP2cTRp+8a7T881j3/D5QifzBf37egilut
         Mb3X+mDqazWWKznVeQdZtliZUW5s4IBNcH5hs7C+3HzjuGeKkaNimvAhL86tbVq10sZW
         uVz1z9i2ZWpOGfpVNOiCj5tAm042/e5gGcL1Ph+gXjKTPubJDHfX7VH8s/JJqUfegvM1
         274g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719296753; x=1719901553;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Ds7BUmnZ96D/xuLbWD36rYp/G6iSUvUfatH/VCXhgHo=;
        b=LshaznFhPworaGy3ZT7NLKRCEELwxGWo7OYM9jLBufi6boYpeV/gdwpOa/4Hw3iLeN
         GYgE6dK8KD8vJAsoNe8qicngTmg/fbeAPQVH4mag+l1LdrQnIocTFoa/MXyMC3KQU4KL
         WKFExDzZZFIoA21CE0mauFmE2tEfmw5upVuIGjxydMTOMFOVwQbCay14+3CO6KTVIYas
         Q6gpm/M7TCf3k0m9H4pZJ0enXvYqJUX7fjodKWB4KrtgM16Rxt1P/sESZC1gfKLW8bmB
         u0HhqaVqULgduVwV4Ga/qQ2Xe7ebduGwcQTiidCsTHbbf6i3NbuvI2xb7DLByLxfvnCU
         EVhw==
X-Forwarded-Encrypted: i=1; AJvYcCUSXYlFCkE7OjGMkk8WdH0D8w64mUYQu01Q2YNq5LW46HLJKHAq/ktuZFl16Trp7UWECKURwLRkf0LF1H27xn1QEI0L45TKLVvCJ+EVnkg=
X-Gm-Message-State: AOJu0YwwSFfNJbZkLoar7HYNjImCUMduGSIbYuLR/H04TWIlHp1Ep/bB
	qjkFMFghuaGM2TN+DvFcLr7FNHq51XqtP9GyccAurCMJL3k5HYZQ+oANYDnbVw==
X-Google-Smtp-Source: AGHT+IEErUYMI3plik3w1tw6HvNzk6fvZ7VJ1wXcZ0OwS1Vgy+cpjbigsfRBLFzipuBHVK0NZTrSkQ==
X-Received: by 2002:adf:9d89:0:b0:366:f976:598b with SMTP id ffacd0b85a97d-366f97659d1mr3076366f8f.8.1719296752761;
        Mon, 24 Jun 2024 23:25:52 -0700 (PDT)
Message-ID: <fb06a382-d26a-4973-a681-9bf3fb0f7f9b@suse.com>
Date: Tue, 25 Jun 2024 08:25:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2] xen: add explicit comment to identify notifier
 patterns
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Federico Serafini <federico.serafini@bugseng.com>,
 xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 Julien Grall <julien@xen.org>
References: <d814434bf73e341f5d35836fa7063a728f7b7de4.1718788908.git.federico.serafini@bugseng.com>
 <f7d46c15-ff85-4a6f-afd7-df18649726c8@xen.org>
 <2072bf59-f125-4789-be77-40ed3641aec4@bugseng.com>
 <alpine.DEB.2.22.394.2406201811200.2572888@ubuntu-linux-20-04-desktop>
 <bce5eae2-973d-4d69-bee1-09f9f09dd011@bugseng.com>
 <alpine.DEB.2.22.394.2406211529130.2572888@ubuntu-linux-20-04-desktop>
 <917533b5-b79c-4e97-917d-9684993bf423@xen.org>
 <alpine.DEB.2.22.394.2406241651400.3870429@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406241651400.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 02:14, Stefano Stabellini wrote:
> I do realize that some of the notifier pattern switches might want to
> handle all parameters, but Bugseng or anyone else looking for simple
> improvements is not in a position to tell which ones those are. We
> need to wait for a maintainer or expert in the specific code to spot
> them. It is not a good idea to delay handling all the remaining, more
> interesting Rule 16.4 switches (which is also easy) in order to better
> handle the notifier pattern (which is hard).
> 
> The notifier pattern can be looked at separately later by the relevant
> maintainer / interested community members by sending case-by-case
> improvements. They cannot be mechanically resolved. My understanding is
> that with this patch series committed we would be close to zero
> violations for 16.4.

That would in fact yield a bogus result, suggesting the tree is in a
better state than it really is. Putting myself in an assessor's position,
I might consider such a result close to lying to me. IOW the fact that
some violations cannot be mechanically resolved shouldn't lead to us
mechanically papering over them.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:35:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:35:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747220.1154563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzli-000208-3E; Tue, 25 Jun 2024 06:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747220.1154563; Tue, 25 Jun 2024 06:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzli-000201-0i; Tue, 25 Jun 2024 06:35:06 +0000
Received: by outflank-mailman (input) for mailman id 747220;
 Tue, 25 Jun 2024 06:35:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLzlg-0001zv-Pg
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:35:04 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0b9b4bd0-32bd-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 08:34:59 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ebe785b234so56123651fa.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 23:34:59 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819a7a557sm7968555a91.15.2024.06.24.23.34.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 23:34:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b9b4bd0-32bd-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719297299; x=1719902099; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=MMmL82UsFNEjpvybl7VhZ0LSIom4J5ANnVyWqMHiS4A=;
        b=eGDFlvqgDvH3O+1PczuWSwQXhMAUEq8oJeSmdz4TtKgIUtYkY3lrZGTPsxYLWf32gu
         PUGvLTdXTTSiIVn9GklLWGny4ABKDHEbqQNkM6KilSZkuZ6Ifc/mL4HamlB9o0ci1qaA
         P1LFgdWMTJRs4TsrURPd2W7x7drpIxg0fo2mzmzwO6Df5HlI3PXJ/qrn5ZfJY+N4PSdS
         37VzZvqeB4ZgJdxnoLsNIk0M6USykV8tIB55R/Q/6AoPJsI9yrg5usGeFSboUTI7SVaZ
         1CrrCoyMD4QYhEdQNV3fZDFdYrNtYkOwASLFc/mWf40b8nNioPKqGXmm+v0gegur4Hmk
         lW6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719297299; x=1719902099;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=MMmL82UsFNEjpvybl7VhZ0LSIom4J5ANnVyWqMHiS4A=;
        b=dwNN1/E/ZJ/QiLnXxh08WJ5Zr1ma8Xjy7Gd7n4XzLEpnjMSgfbu78+18mjj+C09PqE
         bzCxpx+iLJaylZ6SQXq83P7iQ0RhxFcCtQ8qosVCNl2gfQCXrh5K3OCRHJzlBe49u1Lu
         9pF4WH+cft9/hGRJG+9hAI6Bfn02Nh1hV5OQyFAv24fLHYqlESawnZhHABA4EwvbMwRG
         XBLc06Om68Xsdb3uhdfryIzXuRfbo/BmaeXTC6LFbhFxrOGjanfbhH/6tSqqIuBFpepq
         cmP3F1fRLQXEhc34qJ83nLr2CpaLv4pCmykLn8zzPAphemz5QwFaTCTbXsGUrK4J5/Vh
         C7sA==
X-Forwarded-Encrypted: i=1; AJvYcCXVUoiBT6P0FCYNMWKue8OBID6zBwCwys3IrMRdDrlETxTomyFuGCZ6QMphUFvImB1b079XPPHOadqTxYwKPqW98VCocnUsmQ2I35fWO88=
X-Gm-Message-State: AOJu0YyGlF5uarxNKnLq4wwobYpBiAnY9lZXKUQqn2zSuQw0O/ezqNTa
	1S4sW/Xrkto8p6AuiVyrG0EKMVmd2E1ojf5LD8DSKMmMmATWtN9BrauUoLnuOw==
X-Google-Smtp-Source: AGHT+IGPd+EASAwSjoxRaD3cn5DoyRpG4sErL2WySDyDPY6Y89V2wwFfbX+H/LC62zCYSJiZemtD2g==
X-Received: by 2002:a2e:9248:0:b0:2e9:8a0a:ea05 with SMTP id 38308e7fff4ca-2ec5931d897mr56846631fa.17.1719297299118;
        Mon, 24 Jun 2024 23:34:59 -0700 (PDT)
Message-ID: <0fe07e0f-fe6d-4722-9f89-a78294a8b3a1@suse.com>
Date: Tue, 25 Jun 2024 08:34:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] common/sched: address a violation of MISRA C Rule
 17.7
To: victorm.lira@amd.com
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <a5f00432063ead8d4ae09315c1b09617a12b22f7.1719274203.git.victorm.lira@amd.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <a5f00432063ead8d4ae09315c1b09617a12b22f7.1719274203.git.victorm.lira@amd.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 02:15, victorm.lira@amd.com wrote:
> From: Victor Lira <victorm.lira@amd.com>
> 
> Rule 17.7: "The value returned by a function having non-void return type
> shall be used"
> 
> This patch fixes this by adding a check of the return value.
> No functional changes.
> 
> Signed-off-by: Victor Lira <victorm.lira@amd.com>
> ---
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Dario Faggioli <dfaggioli@suse.com>
> Cc: Juergen Gross <jgross@suse.com>
> Cc: xen-devel@lists.xenproject.org
> ---
>  xen/common/sched/core.c | 5 ++++-
>  1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
> index d84b65f197..e1cd824622 100644
> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -2789,7 +2789,10 @@ static int cpu_schedule_up(unsigned int cpu)
>      BUG_ON(cpu >= NR_CPUS);
>  
>      if ( idle_vcpu[cpu] == NULL )
> -        vcpu_create(idle_vcpu[0]->domain, cpu);
> +    {
> +        if ( vcpu_create(idle_vcpu[0]->domain, cpu) == NULL )
> +            return -ENOMEM;
> +    }

First: Two such if()s want folding.

>      else
>          idle_vcpu[cpu]->sched_unit->res = sr;
>  

Then: Down from here there is

    if ( idle_vcpu[cpu] == NULL )
        return -ENOMEM;

which your change is rendering redundant for at least the vcpu_create()
path.

Finally, as we're touching error handling here (and maybe more a question
to the maintainers than to you): What about sr in the error case? It's
being allocated earlier in the function, but not freed upon error. Hmm,
looks like cpu_schedule_down() is assumed to take care of that case,
yet then I wonder how it can assume that get_sched_res() would return
non-NULL - afaict it may be called without cpu_schedule_up() having run
first, or with it having bailed early with -ENOMEM.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:39:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747226.1154575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzps-0002wE-MX; Tue, 25 Jun 2024 06:39:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747226.1154575; Tue, 25 Jun 2024 06:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzps-0002w7-I4; Tue, 25 Jun 2024 06:39:24 +0000
Received: by outflank-mailman (input) for mailman id 747226;
 Tue, 25 Jun 2024 06:39:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLzpr-0002w1-4M
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:39:23 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a8070a1d-32bd-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 08:39:22 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2e72224c395so56267851fa.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 23:39:22 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819dbb7c6sm7815005a91.40.2024.06.24.23.39.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 23:39:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8070a1d-32bd-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719297562; x=1719902362; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=kJ7WN2BRd8DKoq+sO+eGPVnq8Mr5H95ZdlQpaZkE/W0=;
        b=BLoQ+VbUd2PkdQW8hYEkLnPaDRB+49HsKOfJGap9/NuRb97t5mb5O/fxp7TAKUkQIY
         LNPUPfvXNF3wQFPClSHXPun+N1Av73pNI5Hu3MtcbY2EqsQ7bwPjEdcC69qdigwF94+6
         O+CK3cvSTe9N/5kWAFRChfbeqxV48zdExdD1GgsbzCaHfDA+RH6qbiAC0fST36LCgBHJ
         phu2XdiG8G0kR+iCIXVh8QUohAZPAe/opTiDq4rGFa5r7RoVfu4OKSJHqs2dvjijKCDl
         r7sz+GWOwAwKEYqMlnIOgGdqMqPDWdwCxwV07SgH94HzA+2uo/wWTclresga7GoZpikZ
         tXhg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719297562; x=1719902362;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kJ7WN2BRd8DKoq+sO+eGPVnq8Mr5H95ZdlQpaZkE/W0=;
        b=dEFrEEfBnmlaBp35s48yOcRUzfo3EdC1u1X6c7GThSWRvDOLS88gM1ukYINHUp8YF8
         v5vTWX+Spk/ACDnbtfWBkLfsWumqAtwcFF1of3snUGN0zU/HcP7xzZgIu8b8yCZ5IZLh
         CKmbRuU2hvS3PzMUpeceOkml5J9Fe4IoXIxT128KrITV2kMuyX++q4OdKjk1lrUMGdak
         J+Xfa8DBE0DhKPjgr+DbUZEXHNahrBSAKX1YnYKOCSO5KJOkxCcIq5kkf8DBaZuuI2FI
         L19YEUWNYbTaggR9P+TbaaE03pHEleAs68dJied5i+29sC0FGMzgc+3E8D+EBeYggryL
         Scpg==
X-Gm-Message-State: AOJu0Yx5iN6ZgmsvHDAdDt8dWwLyykBWR8FNZgaMi2Y8xiJODlufTECz
	UREeCUR3ruqnwJ5xXVuomYu8z3GduuGFDzTnygmd2Vt6qA/4BnsuZRaSmbqCog==
X-Google-Smtp-Source: AGHT+IH05DqY8aGMNJpGTW85wmbo+60/5Jb1KF4e/tf+/AfTszuclizxwt1e4ya5GUG2boEuXjcA9Q==
X-Received: by 2002:a05:651c:210a:b0:2ec:6639:1208 with SMTP id 38308e7fff4ca-2ec66391319mr22108231fa.19.1719297561640;
        Mon, 24 Jun 2024 23:39:21 -0700 (PDT)
Message-ID: <88127f41-a3e3-4d05-b9f2-3e4117bf1503@suse.com>
Date: Tue, 25 Jun 2024 08:39:10 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 0/6][RESEND] address violations of MISRA C Rule
 20.7
To: Stefano Stabellini <sstabellini@kernel.org>, oleksii.kurochko@gmail.com
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com,
 xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
 <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 02:47, Stefano Stabellini wrote:
> I would like to ask for a release-ack, as the patch series makes very few
> changes outside of the static analysis configuration. The few changes to
> the Xen code are limited and straightforward, and they make the code
> better; see patches #3 and #5.

While continuing to touch automation/ may be okay, I really think the time
has passed for further MISRA changes in 4.19, unless of course they're fixing
actual bugs. Just my personal view, though ...

Jan

> On Mon, 17 Jun 2024, Nicola Vetrini wrote:
>> Hi all,
>>
>> this series addresses several violations of Rule 20.7, along with a
>> small fix to the ECLAIR integration scripts, removing parts that do not
>> influence the current behaviour but were mistakenly part of the upstream
>> configuration.
>>
>> Note that even with this series applied, the rule still has a few leftover violations.
>> Most of those are in x86 code in xen/arch/x86/include/asm/msi.h .
>> I did send a patch [1] to deal with those, limited only to addressing the MISRA
>> violations, but by agreement it was dropped in favour of a more general cleanup
>> of the file, which is why those changes are not included here.
>>
>> [1] https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/
>>
>> Changes in v2:
>> - refactored patch 4 to deviate the pattern instead of fixing the violations
>> - The series has been resent because I forgot to properly Cc the mailing list
>>
>> Nicola Vetrini (6):
>>   automation/eclair: address violations of MISRA C Rule 20.7
>>   xen/self-tests: address violations of MISRA rule 20.7
>>   xen/guest_access: address violations of MISRA rule 20.7
>>   automation/eclair_analysis: address violations of MISRA C Rule 20.7
>>   x86/irq: address violations of MISRA C Rule 20.7
>>   automation/eclair_analysis: clean ECLAIR configuration scripts
>>
>>  automation/eclair_analysis/ECLAIR/analyze.sh     |  3 +--
>>  automation/eclair_analysis/ECLAIR/deviations.ecl | 14 ++++++++++++--
>>  docs/misra/deviations.rst                        |  3 ++-
>>  xen/include/xen/guest_access.h                   |  4 ++--
>>  xen/include/xen/irq.h                            |  2 +-
>>  xen/include/xen/self-tests.h                     |  8 ++++----
>>  6 files changed, 22 insertions(+), 12 deletions(-)
>>
>> -- 
>> 2.34.1
>>



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:41:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:41:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747232.1154583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzry-0004KP-0a; Tue, 25 Jun 2024 06:41:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747232.1154583; Tue, 25 Jun 2024 06:41:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzrx-0004KI-U9; Tue, 25 Jun 2024 06:41:33 +0000
Received: by outflank-mailman (input) for mailman id 747232;
 Tue, 25 Jun 2024 06:41:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sLzrw-0004KC-VA
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:41:32 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f547b5c5-32bd-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 08:41:31 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec3f875e68so57710371fa.0
 for <xen-devel@lists.xenproject.org>; Mon, 24 Jun 2024 23:41:31 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7066c2ebd85sm5304268b3a.93.2024.06.24.23.41.27
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 Jun 2024 23:41:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f547b5c5-32bd-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719297691; x=1719902491; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=7TihY1bbYRB9CqY586eDl31+yNlMblXuuqYEfzYh/Rs=;
        b=AEP5C9/NkRA9ptjAx/3xUn1TJM+xshYCd1F5VWL4+iQfAspp6kOfLDqHS8KRMUfLRw
         AoLn+A/ULrpm0ml7SYANllyCGtNdLLW+qCLQIDUotZ2QzWcoJU0K79/vsrq7cMDM9m0s
         Pn7X40LgetT2QgrEiY/KDzwGbVYV16onAxto+sfkjbWJrIUeDNAbK//k2pXNkSTv5b2o
         0EhdDaI/7KQxN/0pvjZGLU0jbwr2ElxO8FRBYyUXtu71q7qD34Mq9KHw7htKjyLJEoUC
         rxrTjSNb4ovzSFTslCUAGn3cG8YxIYa7CIo/C2DTZ4XdnXcz/BQFF6cMiBI4fcCWbeq8
         73kQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719297691; x=1719902491;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=7TihY1bbYRB9CqY586eDl31+yNlMblXuuqYEfzYh/Rs=;
        b=iJ7U3b8BgQmFvzYsFY1ojuk3w0u5NhiZ+mHxPWZEowQpqKECEl9XDC5VfFR4VL5JpM
         0ugB+bsnFD3ziTzAU1YW75vXZ2s3d7Y+WFAFNoaAgFT8taMoFKmTvG7zLN30HTSi+kuI
         zqRTUMJM4Csx9lqJ9QgZzncRzLn81L35BKc8rwTK2RheDk6wSE6mAaueEJ9EUGnb/nKx
         pRbUVBokYRfT0RYGnKr70eYZmAfKviHpbUGSyuenVvpDN4XxW1gPAAJRZchzG/ccNBLY
         nOdt/fp0PueM4ianTwlLiD+wkqldZ9tFJqKDbVuAfHGGL6c9CXrHjl97x8RiFatj2q+z
         rMHA==
X-Gm-Message-State: AOJu0YyzPJrC+lng5g00BwNG5hng8L2pREIABn0YeEZynQbyWNJRmFrj
	BmH5FmQxVLIA69rNoiMONOQxzzL5g0LBK6SX+pduH2DdQ0FzOQlrJylaF4mIow==
X-Google-Smtp-Source: AGHT+IFRNg4IcFw8/1wM8DmKL4/9a9am+X9D/EWYh54vGiSLKvgr42BV0JPw7kEMa5L+szjSd1Oc7A==
X-Received: by 2002:a05:651c:220e:b0:2ec:5e2e:39a8 with SMTP id 38308e7fff4ca-2ec5e2e3aaamr41748591fa.3.1719297691186;
        Mon, 24 Jun 2024 23:41:31 -0700 (PDT)
Message-ID: <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
Date: Tue, 25 Jun 2024 08:41:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Federico Serafini <federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
 <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 02:54, Stefano Stabellini wrote:
> On Mon, 24 Jun 2024, Federico Serafini wrote:
>> Add a break statement or the fallthrough pseudo-keyword to address
>> violations of MISRA C Rule 16.3: "An unconditional `break' statement
>> shall terminate every switch-clause".
>>
>> No functional change.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>> ---
>>  xen/arch/x86/traps.c | 3 +++
>>  1 file changed, 3 insertions(+)
>>
>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>> index 9906e874d5..cbcec3fafb 100644
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>>  
>>      default:
>>          ASSERT_UNREACHABLE();
>> +        break;
> 
> Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
> statements" that can terminate a case, in addition to break.

Why? Exactly the opposite was the subject of a recent patch, iirc, simply
because the rules need to cover both debug and release builds: in release
builds ASSERT_UNREACHABLE() expands to nothing, so it cannot serve as the
terminating statement there.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:46:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:46:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747241.1154594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzwY-0004zl-KE; Tue, 25 Jun 2024 06:46:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747241.1154594; Tue, 25 Jun 2024 06:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzwY-0004ze-HF; Tue, 25 Jun 2024 06:46:18 +0000
Received: by outflank-mailman (input) for mailman id 747241;
 Tue, 25 Jun 2024 06:46:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLzwW-0004zU-Lv; Tue, 25 Jun 2024 06:46:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLzwW-0003ZB-4S; Tue, 25 Jun 2024 06:46:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sLzwV-0006fs-Nc; Tue, 25 Jun 2024 06:46:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sLzwV-0004vf-NA; Tue, 25 Jun 2024 06:46:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0x55CZvMoj+etbl7Kr+305o8TaL8x37G6GJ1SiqVyN0=; b=n2JJZ4B5IU5bGCEH6414kQkf1+
	rNJnnBv8/SD1mb2xOi/PJ8Cb0dpF+DSxEWaKOpIELJzEJEaOUt7V0A+tq3iipp64D2ixXRyZRCXTW
	LiUBMwL/MT8LDbBLMqluEMZvILMbeRTAEDKmPhd/bUQiLHEGU7gGd6LioaFRxCJcQ9hw=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186471-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186471: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=908407bf2b29a38d6879fc8c57dad14473ef67f8
X-Osstest-Versions-That:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 06:46:15 +0000

flight 186471 xen-unstable real [real]
flight 186478 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186471/
http://logs.test-lab.xenproject.org/osstest/logs/186478/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 186465
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 186465

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186465
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186465
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  908407bf2b29a38d6879fc8c57dad14473ef67f8
baseline version:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Last test of basis   186465  2024-06-24 01:51:55 Z    1 days
Testing same since   186471  2024-06-24 19:07:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@vates.tech>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 908407bf2b29a38d6879fc8c57dad14473ef67f8
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 21 20:29:07 2024 +0100

    xen/riscv: Drop legacy __ro_after_init definition
    
    Hide the legacy __ro_after_init definition in xen/cache.h for RISC-V, to avoid
    its use creeping in.  Only mm.c needs adjusting as a consequence.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 8c3bb4d8ce3f9e69ee173b8787a8cbbf1a852d06
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 20 19:58:08 2024 +0000

    xen/gnttab: Perform compat/native gnttab_query_size check
    
    This subop appears to have been missed from the compat checks.
    
    Fixes: 5ce8fafa947c ("Dynamic grant-table sizing")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit ebed411e7afa240fea803ac97a0ced73fffef8dc
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 20 19:34:06 2024 +0000

    xen/xlat: Sort structs per file
    
    ... with a C locale to avoid ambiguities over _ and - as separators.
    
    Also adjust arch-x86/xen.h which is out-of-order relative to the other
    arch-x86/ files.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 90c1520d4eff8e6480035f523041fe62c5065833
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 20 19:27:33 2024 +0000

    xen/xlat: Sort out whitespace
    
     * Fix tabs/spaces mismatch for certain rows
     * Insert lines between header files to improve legibility
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 6a17e1199332c24b41bacccdc91dbeaf22653588
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 22 12:17:30 2024 +0200

    x86/shadow: Don't leave trace record field uninitialized
    
    The emulation_count field is set only conditionally right now. Convert
    all field setting to an initializer, thus guaranteeing that the field
    is set to 0 (default-initialized) when GUEST_PAGING_LEVELS != 3.
    
    Rework trace_shadow_emulate() to be consistent with the other trace helpers.
    
    Coverity-ID: 1598430
    Fixes: 9a86ac1aa3d2 ("xentrace 5/7: Additional tracing for the shadow code")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
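    The initializer-vs-assignment distinction the commit above relies on is
    plain C semantics: members omitted from a struct initializer are
    zero-initialized. A minimal standalone sketch (the struct here is
    invented for illustration, not Xen's actual trace record):

    ```c
    #include <assert.h>

    /* Invented stand-in for a trace record; not Xen's real structure. */
    struct trace_rec {
        unsigned long va;
        unsigned int flags;
        unsigned int emulation_count;
    };

    int main(void)
    {
        /* Designated initializer: every member not named (here,
         * emulation_count) is guaranteed to be zero-initialized. */
        struct trace_rec r = { .va = 0x1000, .flags = 3 };
        assert(r.emulation_count == 0);

        /* By contrast, setting fields one by one on an uninitialized
         * local leaves unset members indeterminate - the bug class the
         * commit above closes. */
        return 0;
    }
    ```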

commit 8765783434e903fa8be628de25f9941b0204502d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 14:05:13 2024 +0100

    x86/shadow: Rework trace_shadow_emulate_other() as sh_trace_gfn_va()
    
    sh_trace_gfn_va() is very similar to sh_trace_gl1e_va(), and has a rather
    shorter name than trace_shadow_emulate_other().
    
    It's only referenced in CONFIG_HVM=y builds, so give it a __maybe_unused to
    placate randconfig builds.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 578066d82b2b96e949ff46e6c142a33231b1ae2d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 13:58:22 2024 +0100

    x86/shadow: Introduce sh_trace_gl1e_va()
    
    trace_shadow_fixup() and trace_not_shadow_fault() both write out identical
    trace records.  Reimplement them in terms of a common sh_trace_gl1e_va().
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 2e9f8a734e3dd2b6abccea325dd5e854a3670dec
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 13:51:43 2024 +0100

    x86/shadow: Rework trace_shadow_gen() into sh_trace_va()
    
    The ((GUEST_PAGING_LEVELS - 2) << 8) expression in the event field is common
    to all shadow trace events, so introduce sh_trace() as a very thin wrapper
    around trace().
    
    Then, rename trace_shadow_gen() to sh_trace_va() to better describe what it is
    doing, and to be more consistent with later cleanup.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit ba52b3b624e4a1a976908552364eba924ca45430
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 7 12:05:58 2024 +0100

    tools/xl: Open xldevd.log with O_CLOEXEC
    
    `xl devd` has been observed leaking /var/log/xldevd.log into children.
    
    Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
    after setting up stdout/stderr, it's only the logfile fd which will close on
    exec().
    
    Link: https://github.com/QubesOS/qubes-issues/issues/8292
    Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Demi Marie Obenour <demi@invisiblethingslab.com>
    Acked-by: Anthony PERARD <anthony.perard@vates.tech>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:46:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:46:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747244.1154604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzwa-0005Eo-Sb; Tue, 25 Jun 2024 06:46:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747244.1154604; Tue, 25 Jun 2024 06:46:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzwa-0005Eh-P6; Tue, 25 Jun 2024 06:46:20 +0000
Received: by outflank-mailman (input) for mailman id 747244;
 Tue, 25 Jun 2024 06:46:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLzwY-0004zo-TS
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:46:18 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f1cf799-32be-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 08:46:16 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.161.81.3])
 by support.bugseng.com (Postfix) with ESMTPSA id 5A8E64EE0738;
 Tue, 25 Jun 2024 08:46:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f1cf799-32be-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH v4] automation/eclair: extend existing deviations of MISRA C Rule 16.3
Date: Tue, 25 Jun 2024 08:46:09 +0200
Message-Id: <90044547484dac6fcb4748ae8758e38234b3261a.1719297249.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to deviate more cases where an
unintentional fallthrough cannot happen.

Tag Rule 16.3 as clean for arm.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from v3:
- do not add the rule to the monitored set (it is already there).
---
Changes from v2:
- fixed grammar;
- rephrased deviations regarding do-while-false and ASSERT_UNREACHABLE().
---
 .../eclair_analysis/ECLAIR/deviations.ecl     | 31 ++++++++++++++-----
 automation/eclair_analysis/ECLAIR/tagging.ecl |  2 +-
 docs/misra/deviations.rst                     | 28 +++++++++++++++--
 3 files changed, 49 insertions(+), 12 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index ae2eaf50f7..c8bff0e057 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -380,14 +380,30 @@ therefore it is deemed better to leave such files as is."
 -config=MC3R1.R16.2,reports+={deliberate, "any_area(any_loc(file(x86_emulate||x86_svm_emulate)))"}
 -doc_end
 
--doc_begin="Switch clauses ending with continue, goto, return statements are
-safe."
--config=MC3R1.R16.3,terminals+={safe, "node(continue_stmt||goto_stmt||return_stmt)"}
+-doc_begin="Statements that change the control flow (i.e., break, continue, goto, return) and calls to functions that do not return the control back are \"allowed terminal statements\"."
+-stmt_selector+={r16_3_allowed_terminal, "node(break_stmt||continue_stmt||goto_stmt||return_stmt)||call(property(noreturn))"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_allowed_terminal"}
+-doc_end
+
+-doc_begin="An if-else statement having both branches ending with an allowed terminal statement is itself an allowed terminal statement."
+-stmt_selector+={r16_3_if, "node(if_stmt)&&(child(then,r16_3_allowed_terminal)||child(then,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
+-stmt_selector+={r16_3_else, "node(if_stmt)&&(child(else,r16_3_allowed_terminal)||child(else,any_stmt(stmt,-1,r16_3_allowed_terminal)))"}
+-stmt_selector+={r16_3_if_else, "r16_3_if&&r16_3_else"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_else"}
+-doc_end
+
+-doc_begin="An if-else statement having an always true condition and the true branch ending with an allowed terminal statement is itself an allowed terminal statement."
+-stmt_selector+={r16_3_if_true, "r16_3_if&&child(cond,definitely_in(1..))"}
+-config=MC3R1.R16.3,terminals+={safe, "r16_3_if_true"}
+-doc_end
+
+-doc_begin="A switch clause ending with a statement expression which, in turn, ends with an allowed terminal statement is safe."
+-config=MC3R1.R16.3,terminals+={safe, "node(stmt_expr)&&child(stmt,node(compound_stmt)&&any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
 -doc_end
 
--doc_begin="Switch clauses ending with a call to a function that does not give
-the control back (i.e., a function with attribute noreturn) are safe."
--config=MC3R1.R16.3,terminals+={safe, "call(property(noreturn))"}
+-doc_begin="A switch clause ending with a do-while-false the body of which, in turn, ends with an allowed terminal statement is safe.
+An exception to that is the macro ASSERT_UNREACHABLE() which is effective in debug build only: a switch clause ending with ASSERT_UNREACHABLE() is not considered safe."
+-config=MC3R1.R16.3,terminals+={safe, "!macro(name(ASSERT_UNREACHABLE))&&node(do_stmt)&&child(cond,definitely_in(0))&&child(body,any_stmt(stmt,-1,r16_3_allowed_terminal||r16_3_if_else||r16_3_if_true))"}
 -doc_end
 
 -doc_begin="Switch clauses ending with pseudo-keyword \"fallthrough\" are
@@ -399,8 +415,7 @@ safe."
 -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(/BUG\\(\\);/))))"}
 -doc_end
 
--doc_begin="Switch clauses not ending with the break statement are safe if an
-explicit comment indicating the fallthrough intention is present."
+-doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
 -config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
 -doc_end
 
diff --git a/automation/eclair_analysis/ECLAIR/tagging.ecl b/automation/eclair_analysis/ECLAIR/tagging.ecl
index b829655ca0..54772809ca 100644
--- a/automation/eclair_analysis/ECLAIR/tagging.ecl
+++ b/automation/eclair_analysis/ECLAIR/tagging.ecl
@@ -107,7 +107,7 @@ if(string_equal(target,"x86_64"),
 )
 
 if(string_equal(target,"arm64"),
-    service_selector({"additional_clean_guidelines","MC3R1.R16.6||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.3"})
+    service_selector({"additional_clean_guidelines","MC3R1.R16.3||MC3R1.R16.6||MC3R1.R2.1||MC3R1.R5.3||MC3R1.R7.3"})
 )
 
 -reports+={clean:added,"service(clean_guidelines_common||additional_clean_guidelines)"}
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 16fc345756..b11a5623c7 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -330,12 +330,34 @@ Deviations related to MISRA C:2012 Rules:
      - Tagged as `deliberate` for ECLAIR.
 
    * - R16.3
-     - Switch clauses ending with continue, goto, return statements are safe.
+     - Statements that change the control flow (i.e., break, continue, goto,
+       return) and calls to functions that do not return the control back are
+       \"allowed terminal statements\".
      - Tagged as `safe` for ECLAIR.
 
    * - R16.3
-     - Switch clauses ending with a call to a function that does not give
-       the control back (i.e., a function with attribute noreturn) are safe.
+     - An if-else statement having both branches ending with one of the allowed
+       terminal statements is itself an allowed terminal statement.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - An if-else statement having an always true condition and the true
+       branch ending with an allowed terminal statement is itself an allowed
+       terminal statement.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - A switch clause ending with a statement expression which, in turn, ends
+       with an allowed terminal statement (e.g., the expansion of
+       generate_exception()) is safe.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R16.3
+     - A switch clause ending with a do-while-false the body of which, in turn,
+       ends with an allowed terminal statement (e.g., PARSE_ERR_RET()) is safe.
+       An exception to that is the macro ASSERT_UNREACHABLE() which is
+       effective in debug build only: a switch clause ending with
+       ASSERT_UNREACHABLE() is not considered safe.
      - Tagged as `safe` for ECLAIR.
 
    * - R16.3
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:47:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747259.1154614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzxN-00063n-AM; Tue, 25 Jun 2024 06:47:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747259.1154614; Tue, 25 Jun 2024 06:47:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sLzxN-00063g-6R; Tue, 25 Jun 2024 06:47:09 +0000
Received: by outflank-mailman (input) for mailman id 747259;
 Tue, 25 Jun 2024 06:47:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sLzxM-0004zo-MT
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:47:08 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bd340412-32be-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 08:47:07 +0200 (CEST)
Received: from [172.20.10.8] (unknown [37.161.81.3])
 by support.bugseng.com (Postfix) with ESMTPSA id 7FB294EE0738;
 Tue, 25 Jun 2024 08:47:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd340412-32be-11ef-b4bb-af5377834399
Message-ID: <31ed9bcf-dc5b-41d1-9931-e6be70e3fe71@bugseng.com>
Date: Tue, 25 Jun 2024 08:47:05 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 01/13] automation/eclair: fix deviation of MISRA C
 Rule 16.3
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <c43a32405cc949ef5bf26a2ca1d1cc7ee7f5e664.1719218291.git.federico.serafini@bugseng.com>
 <1149f3da-480e-4949-924b-6cdf39b1e17f@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <1149f3da-480e-4949-924b-6cdf39b1e17f@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 24/06/24 12:57, Jan Beulich wrote:
> On 24.06.2024 11:04, Federico Serafini wrote:
>> Escape the final dot of the comment and extend the search of a
>> fallthrough comment up to 2 lines after the last statement.
>>
>> Fixes: a128d8da913b21eff6c6d2e2a7d4c54c054b78db "automation/eclair: add deviations for MISRA C:2012 Rule 16.3"
> 
> Nit: Yes, the respective doc says "at least 12 digits", but please also keep
> to that going forward.

Noted.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 06:53:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 06:53:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747268.1154623 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM02w-00086v-Sh; Tue, 25 Jun 2024 06:52:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747268.1154623; Tue, 25 Jun 2024 06:52:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM02w-00086o-QC; Tue, 25 Jun 2024 06:52:54 +0000
Received: by outflank-mailman (input) for mailman id 747268;
 Tue, 25 Jun 2024 06:52:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9u5k=N3=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sM02v-00086i-ES
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 06:52:53 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8af0402c-32bf-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 08:52:52 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id EFE704EE0738;
 Tue, 25 Jun 2024 08:52:51 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8af0402c-32bf-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Tue, 25 Jun 2024 08:52:51 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com,
 xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>, Doug Goldstein
 <cardoe@cardoe.com>
Subject: Re: [XEN PATCH v2 1/6][RESEND] automation/eclair: address violations
 of MISRA C Rule 20.7
In-Reply-To: <alpine.DEB.2.22.394.2406201718140.2572888@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
 <af4b0512eb52be99e37c9c670f98967ca15c68ac.1718378539.git.nicola.vetrini@bugseng.com>
 <alpine.DEB.2.22.394.2406201718140.2572888@ubuntu-linux-20-04-desktop>
Message-ID: <4aa05e0f26f050363d9ed0401855e1bb@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-21 02:18, Stefano Stabellini wrote:
> On Mon, 16 Jun 2024, Nicola Vetrini wrote:
>> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
>> of macro parameters shall be enclosed in parentheses".
>> 
>> The helper macro bitmap_switch has parameters that cannot be
>> parenthesized in order to comply with the rule, as that would break
>> its functionality. Moreover, the risk of misuse due to developer
>> confusion is deemed not substantial enough to warrant a more involved
>> refactor, thus the macro is deviated for this rule.
>> 
>> No functional change.
>> 
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> 
> If possible, I would prefer we used a SAF in-code comment deviation. If
> that doesn't work for any reason this patch is fine and I'd ack it.
> 

Would that be an improvement for safety in your opinion?

> 
>> ---
>>  automation/eclair_analysis/ECLAIR/deviations.ecl | 8 ++++++++
>>  1 file changed, 8 insertions(+)
>> 
>> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> index 447c1e6661d1..c2698e7074aa 100644
>> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
>> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
>> @@ -463,6 +463,14 @@ of this macro do not lead to developer confusion, and can thus be deviated."
>>  -config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(^count_args_$))))"}
>>  -doc_end
>> 
>> +-doc_begin="The arguments of the bitmap_switch macro can't be parenthesized as
>> +the rule would require, without breaking the functionality of the macro. This is
>> +a specialized local helper macro only used within the bitmap.h header, so it is
>> +less likely to lead to developer confusion and it is deemed better to deviate it."
>> +-file_tag+={xen_bitmap_h, "^xen/include/xen/bitmap\\.h$"}
>> +-config=MC3R1.R20.7,reports+={safe, "any_area(any_loc(any_exp(macro(loc(file(xen_bitmap_h))&&^bitmap_switch$))))"}
>> +-doc_end
>> +
>>  -doc_begin="Uses of variadic macros that have one of their arguments defined as
>>  a macro and used within the body for both ordinary parameter expansion and as an
>>  operand to the # or ## operators have a behavior that is well-understood and
>> --
>> 2.34.1
>> 

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:02:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:02:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747278.1154636 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0CS-0001dh-Oi; Tue, 25 Jun 2024 07:02:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747278.1154636; Tue, 25 Jun 2024 07:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0CS-0001da-ME; Tue, 25 Jun 2024 07:02:44 +0000
Received: by outflank-mailman (input) for mailman id 747278;
 Tue, 25 Jun 2024 07:02:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM0CR-0001dN-AB
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:02:43 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea592272-32c0-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:02:42 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ebed33cb65so56865091fa.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 00:02:41 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb321a62sm74079635ad.72.2024.06.25.00.02.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 00:02:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea592272-32c0-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719298961; x=1719903761; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KKYdr07VWNFGfoxsvowZuZ+1cm3EVnL5fFDDNSg0l0I=;
        b=MEqdc18nOoO5T+GbqOqYJHD2xkyvJRfobXB88QjHSS500cbZyMoxFitJ6pW4ZFNPyA
         fsnRmU/zdE43RA76Sy6lpk1fVW1+cFUkVEunB/G8W/iqo+WsVRhhDSzkTP2hchRh/6Nn
         Wbqe+vc14ci8ARj0z523PZqdPHE+OKe0NvnLWhE3gVSdjXViYnHPjBHb+FDUVGOz7X33
         fMKYTtncSOTvBVkoeigYxOT5yCgOUnmNDGhnk1jq60M7lRcWY6kBRKehWJKaQVMdef9U
         sWMUYkhnrbk3v4wnQlwKjlAFISwY8woiXj6M3YrcvxDJP0wWlfyTlftKQl8bmUzylqPp
         1dRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719298961; x=1719903761;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KKYdr07VWNFGfoxsvowZuZ+1cm3EVnL5fFDDNSg0l0I=;
        b=Wp6zP71Hjw9qQFWoAFIdA3TcH1+E+qMPjqqf+Kv3vA8t8wxxb09GupZo8yUJ9HYOcE
         1YBPyrgSA9YV+Wnyox0ITqPyL6wOlp1BzLMrE4Itj1GDKD7UhvzkXkZCyIpz8mxQyUF6
         geXzveg4lBMVqqT5A8xEXZs6FUnm60F9tsCcG1CzxkRL0ApoTHP8sYv+25zGMWHivnFq
         7QSo025sDbZvU0WIsZzV0hVS7msjbsiTFcfKWwYrtSuQXMV+kR6tdS8adxOHJi31dmyY
         zuco96WCmNjGEITILHvaVwUighI9b4aDzb27/3bLStTWuktM8sQ0BE6hLNjEzeFKM2vU
         nYKg==
X-Forwarded-Encrypted: i=1; AJvYcCVskMz6rGoWXrfyzQGpt3V5X/NaCERM7lLBwc4jApSef2yjJUwC5JrobcQtV+BNJTrG7FD5iJuDqhYxMZRjnkOCCdT2Kp4TA2fi2AJhuMU=
X-Gm-Message-State: AOJu0YxFeiB3OXRnajTxsSFDvkm3XQKlwzPSutUgiEZztXhBpb7TWdnU
	6aO8cAyNMOt+KvLLXfYxEScUXfVd4uEUNTBhysEZTp+IUNxQQpf/DoLauYBmKA==
X-Google-Smtp-Source: AGHT+IG02nhaGbz5bNc07bPR2v4g6/t9kcdQYTiURh+4gaxUaXo/kjTQ2tm0ITGQ1klEXTzH0CCQFg==
X-Received: by 2002:a05:651c:ca:b0:2ec:140c:8985 with SMTP id 38308e7fff4ca-2ec5936fb10mr55379661fa.36.1719298961408;
        Tue, 25 Jun 2024 00:02:41 -0700 (PDT)
Message-ID: <c2098800-2565-4eff-90b5-0d285862bf26@suse.com>
Date: Tue, 25 Jun 2024 09:02:31 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v4] automation/eclair: extend existing deviations of
 MISRA C Rule 16.3
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Oleksii Kurochko
 <oleksii.kurochko@gmail.com>,
 Federico Serafini <federico.serafini@bugseng.com>,
 xen-devel@lists.xenproject.org
References: <90044547484dac6fcb4748ae8758e38234b3261a.1719297249.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <90044547484dac6fcb4748ae8758e38234b3261a.1719297249.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 08:46, Federico Serafini wrote:
> Update ECLAIR configuration to deviate more cases where an
> unintentional fallthrough cannot happen.
> 
> Tag Rule 16.3 as clean for arm.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

To add to my reply on the other series: As per above you even acked ...

> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -330,12 +330,34 @@ Deviations related to MISRA C:2012 Rules:
>       - Tagged as `deliberate` for ECLAIR.
>  
>     * - R16.3
> -     - Switch clauses ending with continue, goto, return statements are safe.
> +     - Statements that change the control flow (i.e., break, continue, goto,
> +       return) and calls to functions that do not return the control back are
> +       \"allowed terminal statements\".
>       - Tagged as `safe` for ECLAIR.
>  
>     * - R16.3
> -     - Switch clauses ending with a call to a function that does not give
> -       the control back (i.e., a function with attribute noreturn) are safe.
> +     - An if-else statement having both branches ending with one of the allowed
> +       terminal statements is itself an allowed terminal statement.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R16.3
> +     - An if-else statement having an always true condition and the true
> +       branch ending with an allowed terminal statement is itself an allowed
> +       terminal statement.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R16.3
> +     - A switch clause ending with a statement expression which, in turn, ends
> +       with an allowed terminal statement (e.g., the expansion of
> +       generate_exception()) is safe.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R16.3
> +     - A switch clause ending with a do-while-false the body of which, in turn,
> +       ends with an allowed terminal statement (e.g., PARSE_ERR_RET()) is safe.
> +       An exception to that is the macro ASSERT_UNREACHABLE() which is
> +       effective in debug build only: a switch clause ending with
> +       ASSERT_UNREACHABLE() is not considered safe.
>       - Tagged as `safe` for ECLAIR.

... this explicit statement regarding ASSERT_UNREACHABLE().
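For readers less familiar with the deviation text quoted above, a switch shaped like the deviated patterns might look as follows. This is a hypothetical sketch with invented names, not code from the Xen tree: each clause ends in an "allowed terminal statement" (a return, an if-else whose branches are both terminal, or a call to a noreturn function), so no unconditional break is needed and no unintentional fall-through can occur.

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical illustration of the R16.3 deviations: every switch
 * clause ends in an allowed terminal statement. */
int classify(int op)
{
    switch ( op )
    {
    case 0:
        return 1;          /* return: allowed terminal statement */
    case 1:
        if ( op > 0 )      /* if-else with both branches terminal */
            return 2;
        else
            return 3;
    default:
        exit(1);           /* call to a noreturn function */
    }
}
```

ASSERT_UNREACHABLE() would not qualify here, since it is compiled out in release builds and hence does not reliably terminate the clause.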

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:14:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:14:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747287.1154652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0Nu-0003ee-RX; Tue, 25 Jun 2024 07:14:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747287.1154652; Tue, 25 Jun 2024 07:14:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0Nu-0003eX-P0; Tue, 25 Jun 2024 07:14:34 +0000
Received: by outflank-mailman (input) for mailman id 747287;
 Tue, 25 Jun 2024 07:14:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOoF=N3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sM0Nt-0003eR-7R
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:14:33 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 918b8885-32c2-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:14:32 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a724b3a32d2so243078466b.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 00:14:32 -0700 (PDT)
Received: from ?IPV6:2003:e5:8729:4000:29eb:6d9d:3214:39d2?
 (p200300e58729400029eb6d9d321439d2.dip0.t-ipconnect.de.
 [2003:e5:8729:4000:29eb:6d9d:3214:39d2])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf56033asm482577666b.157.2024.06.25.00.14.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 00:14:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 918b8885-32c2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719299671; x=1719904471; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=jHvDqZfaOeK7DkLlMhoi5sitZf81YrmoNlCsQytsxiA=;
        b=MwWrpaiBz6mUOoQGBX2+//tbo/IPpjtAtIfL4pVgDXglC8nyivVQ5zJrIgn872sO5N
         O+zSrslhANWsTmz8NH2mjAJ8zRT37qVgMDdegEVgSNo5aGvloBOnQy/GuQp4jhc4XRyF
         R+FmsyWpGpIaqu73i5PvR9hBIP0b4WzNmgfSi7O8r5wRg0fDEhT/2g+i0DmV2pF6BYP0
         aDdWfklnQiF3rhoO5/uDTxrGSq4h3BHqHLtIFWHATbEROfDfMYcfc1AyzIThcMmQ603v
         9eoXb5y9NNnBDt1YkDpm5sPQf5QZVV5UacZwQglbtcpqVeALPJR6tRqGhNVqNsCUpxdC
         3GWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719299671; x=1719904471;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=jHvDqZfaOeK7DkLlMhoi5sitZf81YrmoNlCsQytsxiA=;
        b=N7VafHHG1q0l+UElxYJE2w79jO0neFl8elg5FIIzEmWKBYw7sielmxWo66bjlkx5dE
         qsXxljSQ5bkMHert5dRrdn3b8a7YT8s+De9XoNqKNbdPlcGVP2R4Ld5Z5evlRp9BwtTM
         IhVPauc9Pho0+BmOLCtDMG4NIkIVNwFtcbIHq27npWQ/rq4meLnzB0nB2sue6Hk2omAL
         3D+hj3jOPe5Q/8FYudSzc9l0QBIxrwqeVWssckSePOOMEFlHD/NoPqtMcujjMDatYCcs
         04HQG/RQW+8pOqFLjuHP8OZGSN1ekHVtWDNYPX8PVf88e6TlXxsDF0ADZgfEJWxaOGQR
         hTDQ==
X-Forwarded-Encrypted: i=1; AJvYcCURn9GU3Ok62pzG3U2KjZf5NBEDX9sRlSI4jWML2fAZpMhrlNmjwTbQgI9Z+EBdcI1S2PmFn5xfdsB/0cJSg7gI1uRxT+YDBBnQPBsU62Q=
X-Gm-Message-State: AOJu0Yy7OdnwFJ/tGZ06tZk+kG/UA5Wj7WydYs7VU//ks2Z0kCaf4NKU
	3zv9nLimwCuuXY3RNi+nPr2Nlgt4RulGK3JNkgk+mr64wmOVpAAXJHCZ4Bwjxx8=
X-Google-Smtp-Source: AGHT+IGIrMtRW38dfQLZCAEMH0RKGznqaaofbgCMDhcrjt9VEBfEFDUZIgFi3gLkUL4L66Z+1zYsLA==
X-Received: by 2002:a17:906:e28c:b0:a72:42f6:ff0f with SMTP id a640c23a62f3a-a7245c70af8mr321721666b.77.1719299671322;
        Tue, 25 Jun 2024 00:14:31 -0700 (PDT)
Message-ID: <3a259a5b-b900-4a31-9b6f-32cddf48e97f@suse.com>
Date: Tue, 25 Jun 2024 09:14:30 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] common/sched: address a violation of MISRA C Rule
 17.7
To: Jan Beulich <jbeulich@suse.com>, victorm.lira@amd.com
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <a5f00432063ead8d4ae09315c1b09617a12b22f7.1719274203.git.victorm.lira@amd.com>
 <0fe07e0f-fe6d-4722-9f89-a78294a8b3a1@suse.com>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <0fe07e0f-fe6d-4722-9f89-a78294a8b3a1@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 25.06.24 08:34, Jan Beulich wrote:
> On 25.06.2024 02:15, victorm.lira@amd.com wrote:
>> From: Victor Lira <victorm.lira@amd.com>
>>
>> Rule 17.7: "The value returned by a function having non-void return type
>> shall be used"
>>
>> This patch fixes this by adding a check to the return value.
>> No functional changes.
>>
>> Signed-off-by: Victor Lira <victorm.lira@amd.com>
>> ---
>> Cc: George Dunlap <george.dunlap@citrix.com>
>> Cc: Dario Faggioli <dfaggioli@suse.com>
>> Cc: Juergen Gross <jgross@suse.com>
>> Cc: xen-devel@lists.xenproject.org
>> ---
>>   xen/common/sched/core.c | 5 ++++-
>>   1 file changed, 4 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
>> index d84b65f197..e1cd824622 100644
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>> @@ -2789,7 +2789,10 @@ static int cpu_schedule_up(unsigned int cpu)
>>       BUG_ON(cpu >= NR_CPUS);
>>   
>>       if ( idle_vcpu[cpu] == NULL )
>> -        vcpu_create(idle_vcpu[0]->domain, cpu);
>> +    {
>> +        if ( vcpu_create(idle_vcpu[0]->domain, cpu) == NULL )
>> +            return -ENOMEM;
>> +    }
> 
> First: Two such if()s want folding.
> 
>>       else
>>           idle_vcpu[cpu]->sched_unit->res = sr;
>>   
> 
> Then: Down from here there is
> 
>      if ( idle_vcpu[cpu] == NULL )
>          return -ENOMEM;
> 
> which your change is rendering redundant for at least the vcpu_create()
> path.
> 
> Finally, as we're touching error handling here (and maybe more a question
> to the maintainers than to you): What about sr in the error case? It's
> being allocated earlier in the function, but not freed upon error. Hmm,
> looks like cpu_schedule_down() is assumed to be taking care of the case,
> yet then I wonder how that can assume that get_sched_res() would return
> non-NULL - afaict it may be called without cpu_schedule_up() having run
> first, or with it having bailed early with -ENOMEM.

Yes, you are right.

cpu_schedule_down() should bail out early in case sr is NULL.

I'll write a patch.
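A minimal sketch of the shape such a fix could take, using simplified stand-ins for the Xen types and helpers (this is not the actual patch; the real code operates on struct sched_resource via get_sched_res() in xen/common/sched/core.c):

```c
#include <stddef.h>
#include <stdlib.h>

/* Simplified stand-ins for the scheduler resource bookkeeping. */
struct sched_res { int cpu; };
static struct sched_res *per_cpu_res[4];

static struct sched_res *get_sched_res(unsigned int cpu)
{
    return per_cpu_res[cpu];
}

/* Bail out early when the resource was never allocated, e.g. because
 * cpu_schedule_up() did not run or bailed early with -ENOMEM. */
void cpu_schedule_down(unsigned int cpu)
{
    struct sched_res *sr = get_sched_res(cpu);

    if ( sr == NULL )
        return;

    free(sr);
    per_cpu_res[cpu] = NULL;
}
```

The key point is the NULL check before any teardown, making the function safe to call on a CPU whose bring-up never completed.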


Juergen


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:21:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:21:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747294.1154663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0Uk-0005XS-Hj; Tue, 25 Jun 2024 07:21:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747294.1154663; Tue, 25 Jun 2024 07:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0Uk-0005XL-EH; Tue, 25 Jun 2024 07:21:38 +0000
Received: by outflank-mailman (input) for mailman id 747294;
 Tue, 25 Jun 2024 07:21:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sM0Uj-0005XF-Rp
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:21:37 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8e5240a2-32c3-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:21:36 +0200 (CEST)
Received: from [172.20.10.8] (unknown [37.161.81.3])
 by support.bugseng.com (Postfix) with ESMTPSA id D3E4B4EE0738;
 Tue, 25 Jun 2024 09:21:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e5240a2-32c3-11ef-90a3-e314d9c70b13
Message-ID: <d1d6fda5-c619-4578-9a21-7da1a9810044@bugseng.com>
Date: Tue, 25 Jun 2024 09:21:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 07/13] x86/hvm: address violations of MISRA C Rule
 16.3
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <a20efca7042ea8f351516ea521edccd89b475929.1719218291.git.federico.serafini@bugseng.com>
 <087eb879-b3f6-4d1a-a52e-1e27337620c9@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <087eb879-b3f6-4d1a-a52e-1e27337620c9@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 24/06/24 17:32, Jan Beulich wrote:
> On 24.06.2024 11:04, Federico Serafini wrote:
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -339,7 +339,7 @@ static int hvmemul_do_io(
>>       }
>>       case X86EMUL_UNIMPLEMENTED:
>>           ASSERT_UNREACHABLE();
>> -        /* Fall-through */
>> +        fallthrough;
>>       default:
>>           BUG();
>>       }
> 
> This or very similar comment are replaced elsewhere in this patch. I'm
> sure we have more of them. Hence an alternative would be to deviate those
> variations of what we already deviate. I recall there was a mail from
> Julien asking to avoid extending the set, unless some forms are used
> pretty frequently. Sadly nothing towards judgement between the
> alternatives is said in the description.

I found only a few occurrences of the hyphenated fall-through comment;
it doesn't seem to be a widely used form,
and most of them are in emulate.c, a file I needed to touch anyway.

The fact that the pseudo-keyword is the preferred form is mentioned
in deviations.rst, but I can also mention it in the description.

>> @@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>   
>>       default:
>>           ASSERT_UNREACHABLE();
>> +        break;
>>       }
>>   
>>       if ( hvmemul_ctxt->ctxt.retire.singlestep )
>> @@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>>           /* fallthrough */
> 
> What about this? It doesn't match anything I see in deviations.rst.

The last item for R16.3 in deviations.rst explicitly says that
existing comments of this form are considered safe (i.e., deviated)
but deprecated, meaning that the pseudo-keyword should be used for new
cases. We can consider rephrasing it if it is not clear enough.

> 
>>       default:
>>           hvm_emulate_writeback(&ctxt);
>> +        break;
>>       }
>>   
>>       return rc;
>> @@ -2799,10 +2801,11 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
>>           memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data,
>>                  hvio->mmio_insn_bytes);
>>       }
>> -    /* Fall-through */
>> +    fallthrough;
>>       default:
>>           ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
>>           rc = hvm_emulate_one(&ctx, VIO_no_completion);
>> +        break;
>>       }
> 
> While not as much of a problem for the comment, I view a statement like
> this as mis-indented.
> 
>> @@ -5283,6 +5287,8 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
>>            * %cs and %tr are unconditionally present.  SVM ignores these present
>>            * bits and will happily run without them set.
>>            */
>> +        fallthrough;
>> +
>>       case x86_seg_cs:
>>           reg->p = 1;
>>           break;
> 
> Why the extra blank line here, ...
> 
>> --- a/xen/arch/x86/hvm/hypercall.c
>> +++ b/xen/arch/x86/hvm/hypercall.c
>> @@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
>>       case 8:
>>           eax = regs->rax;
>>           /* Fallthrough to permission check. */
>> +        fallthrough;
>>       case 4:
>>       case 2:
>>           if ( currd->arch.monitor.guest_request_userspace_enabled &&
> 
> ... when e.g. here there's none? I'm afraid this goes back to an
> unfinished discussion as to whether to have blank lines between blocks
> in fall-through situations. My view is that not having them in these
> cases is helping to make the falling through visually noticeable.

I looked at the context to preserve the style
of the existing cases.

What do you think about:
- keeping the existing style when a break needs to be inserted;
- no blank line if a fallthrough needs to be inserted?
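For context, the `fallthrough` pseudo-keyword discussed in this thread is typically a thin wrapper over a compiler attribute. The sketch below uses a stand-in definition (the actual Xen definition lives in its headers and may differ) to show the pattern the patch converts the comments into:

```c
/* Stand-in for the fallthrough pseudo-keyword; Xen and Linux define it
 * roughly like this so the compiler can tell an intentional
 * fall-through from a forgotten break. */
#if defined(__GNUC__) && __GNUC__ >= 7
# define fallthrough __attribute__((__fallthrough__))
#else
# define fallthrough do {} while ( 0 )
#endif

int widen(int size)
{
    int eax = 0;

    switch ( size )
    {
    case 8:
        eax = 1;
        /* Fallthrough to the shared handling below. */
        fallthrough;
    case 4:
    case 2:
        eax += 2;
        break;
    default:
        break;
    }

    return eax;
}
```

With -Wimplicit-fallthrough enabled, replacing the bare comment with the pseudo-keyword keeps the compiler (and ECLAIR) able to verify the intent.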

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:30:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:30:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747304.1154672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0dA-0007T2-AP; Tue, 25 Jun 2024 07:30:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747304.1154672; Tue, 25 Jun 2024 07:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0dA-0007Sv-7L; Tue, 25 Jun 2024 07:30:20 +0000
Received: by outflank-mailman (input) for mailman id 747304;
 Tue, 25 Jun 2024 07:30:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM0d8-0007SZ-PL
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:30:18 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c5543e10-32c4-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:30:17 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec595d0acbso28016581fa.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 00:30:17 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706901f821bsm1864129b3a.185.2024.06.25.00.30.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 00:30:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5543e10-32c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719300617; x=1719905417; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=nK0Y07RFd8lZ/LFtFGqUL2rlLF1Ixqi2wrWAmUHB+z4=;
        b=Ssr4WtSz9dn1z26iCv8agLm9g5B2uUBJVSqmRGzSEjNfbeTxQ5BA/CggjyuInj/6Vw
         u9o9Rhek+/XoBcbuEv/HVISeiTeqkW0cGfPuLzyuwRKKunrGKF9o4biib01c/KnR88WG
         Axx7WLKBLzF6t1XT+MDQ0eTct0PnI0L/KuC6Z4h7BdNXSdhb4Zwt5OBD5XGCZ/+BKGsj
         clcxnjBqOV3jlp7hJWDPFoUTsqc+Uj7oXNZ7YxjNASXH6uM630xu0iW7uO1/AmDRXx/U
         WuwStE9INgZ+LAOmuklHrny9Kwn6nkIN7dY6tNGI3k9zJru/NwV7LmIe2FdTd8Sqc34D
         yasw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719300617; x=1719905417;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=nK0Y07RFd8lZ/LFtFGqUL2rlLF1Ixqi2wrWAmUHB+z4=;
        b=SYn6q0cjFiq2viNvd18wYxTMaKSAOVxokKHJ5RB9VJN52byy9WwC7frrVWQjwJS0zf
         HxQsR9cXjkHJ/rMyKtZLNJFTx8g+s4sTHlQwopO/WSiurxr9C2PCSQqxdC/WJuxUKF2L
         1oiEWDa8m92+PK55qchPWhNc0SzI1BGTIiNeYRi95RQuk77dEGP1kcNat0/2wTq6ChLA
         iMoDkSdE93mXoh1OMv0Ff6UksEP1FOcO4pO+zO7rJpJaOtgY6NagCIegGL3ndS1icrrZ
         th+lwI/J2PvrIfeVqqP+30N27qyguH63A5RSGA+BjsTB8xEaYnq3S1AHv6gZ7w3L+AtL
         Fmtw==
X-Gm-Message-State: AOJu0YxwCAQym7TXzvpu0AvUnk9XvNnrE3AImNt3xcI3J6Gdrf6YYXmL
	B3zMCfV3WHuCFKWPgVCFP3lMT6i4VqUpN2lnQ8kEEicnWbQlymhcEhYAs7CMiMzZf8obM2YeqJ4
	=
X-Google-Smtp-Source: AGHT+IGM6x0S2hjlpBYQnAepJVKzHIdFRQjWGv84699fMRFYwIC5BjyKgtTGGAqafZVcGnls8Hiljg==
X-Received: by 2002:a2e:9258:0:b0:2ec:4f01:2c0f with SMTP id 38308e7fff4ca-2ec5b31a392mr48371921fa.26.1719300617223;
        Tue, 25 Jun 2024 00:30:17 -0700 (PDT)
Message-ID: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
Date: Tue, 25 Jun 2024 09:30:07 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.19] gnttab: fix compat query-size handling
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

The odd DEFINE_XEN_GUEST_HANDLE(), inconsistent with all other similar
constructs, should have caught my attention. Turns out it was needed for
the build to succeed merely because the corresponding #ifndef had a
typo. That typo in turn broke compat mode guests, by having query-size
requests of theirs wire into the domain_crash() at the bottom of the
switch().

Fixes: 8c3bb4d8ce3f ("xen/gnttab: Perform compat/native gnttab_query_size check")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Looks like set-version is similarly missing in the set of structures
checked, but I'm pretty sure that we will now want to defer taking care
of that until after 4.20 has been branched.

--- a/xen/common/compat/grant_table.c
+++ b/xen/common/compat/grant_table.c
@@ -33,7 +33,6 @@ CHECK_gnttab_unmap_and_replace;
 #define xen_gnttab_query_size gnttab_query_size
 CHECK_gnttab_query_size;
 #undef xen_gnttab_query_size
-DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_compat_t);
 
 DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_compat_t);
 DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_compat_t);
@@ -111,7 +110,7 @@ int compat_grant_table_op(
     CASE(copy);
 #endif
 
-#ifndef CHECK_query_size
+#ifndef CHECK_gnttab_query_size
     CASE(query_size);
 #endif
 


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:38:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:38:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747313.1154683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0kg-0008PC-4k; Tue, 25 Jun 2024 07:38:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747313.1154683; Tue, 25 Jun 2024 07:38:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0kg-0008P5-21; Tue, 25 Jun 2024 07:38:06 +0000
Received: by outflank-mailman (input) for mailman id 747313;
 Tue, 25 Jun 2024 07:38:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sM0kf-0008Oz-9B
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:38:05 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dad5299a-32c5-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 09:38:03 +0200 (CEST)
Received: from [172.20.10.8] (unknown [37.161.81.3])
 by support.bugseng.com (Postfix) with ESMTPSA id 3FFC34EE0738;
 Tue, 25 Jun 2024 09:38:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dad5299a-32c5-11ef-b4bb-af5377834399
Message-ID: <8d61a7d8-dd34-4bea-822e-c880437d4828@bugseng.com>
Date: Tue, 25 Jun 2024 09:38:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 09/13] x86/mm: add defensive return
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <f3adf3d5dac5c4c108207883472f3ebcfa11c685.1719218291.git.federico.serafini@bugseng.com>
 <3ba59d21-bf67-4585-8e94-5a01cff18ec3@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <3ba59d21-bf67-4585-8e94-5a01cff18ec3@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 24/06/24 17:39, Jan Beulich wrote:
> On 24.06.2024 11:04, Federico Serafini wrote:
>> Add defensive return statement at the end of an unreachable
>> default case. Other than improve safety,
> 
> It is particularly with this in mind that ...
> 
>> this meets the requirements
>> to deviate a violation of MISRA C Rule 16.3: "An unconditional `break'
>> statement shall terminate every switch-clause".
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>> ---
>>   xen/arch/x86/mm.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
>> index 648d6dd475..2e19ced15e 100644
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -916,6 +916,7 @@ get_page_from_l1e(
>>                   return 0;
>>               default:
>>                   ASSERT_UNREACHABLE();
>> +                return 0;
>>               }
> 
> ... returning "success" in such a case can't be quite right. As indicated
> elsewhere, really we want -EINTERNAL for such cases, just that errno.h
> doesn't know anything like this, and I'm also unaware of a somewhat
> similar identifier that we might "clone" from elsewhere. Hence we'll want
> to settle on some existing error code which we then use here and in
> similar situations. EINVAL is the primary example of what I would prefer
> to _not_ see used for this.

Sorry, I got confused by the ASSERT_UNREACHABLE followed by a
return 0 a few lines above.
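[Editor's note] The pattern under discussion — every switch-clause terminated unconditionally (MISRA C Rule 16.3), with the unreachable default still returning failure rather than success — can be sketched standalone. This is generic illustration, not the xen/arch/x86/mm.c code: ASSERT_UNREACHABLE() is replaced by a counting stand-in (the real one asserts in debug builds), and SKETCH_ERR is a placeholder since the thread leaves the actual error-code choice open.

```c
#include <assert.h>

/* Stand-in for Xen's ASSERT_UNREACHABLE(), which fails an assertion in
 * debug builds; here it merely counts hits so the defensive return path
 * can be exercised. */
static int unreachable_hits;
#define ASSERT_UNREACHABLE() ((void)unreachable_hits++)

/* Placeholder error value; the thread leaves the real errno choice open,
 * only ruling out "success" (and preferring not to use EINVAL). */
#define SKETCH_ERR (-1)

enum op { OP_READ, OP_WRITE };

/* Returns 0 on success, negative on failure.  Every switch-clause ends in
 * an unconditional statement, and the supposedly unreachable default
 * defensively returns an error instead of success, per the review. */
static int do_op(enum op o)
{
    switch ( o )
    {
    case OP_READ:
        return 0;
    case OP_WRITE:
        return 0;
    default:
        ASSERT_UNREACHABLE();
        return SKETCH_ERR;
    }
}
```

In a release build the default clause is still compiled in, so a corrupted or miscomputed selector yields a clean error instead of falling out of the switch with a success indication.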

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:38:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:38:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747318.1154693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0l8-0000YZ-D9; Tue, 25 Jun 2024 07:38:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747318.1154693; Tue, 25 Jun 2024 07:38:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0l8-0000YS-9O; Tue, 25 Jun 2024 07:38:34 +0000
Received: by outflank-mailman (input) for mailman id 747318;
 Tue, 25 Jun 2024 07:38:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LI8L=N3=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sM0l6-0000H1-M2
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:38:32 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20621.outbound.protection.outlook.com
 [2a01:111:f403:200a::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb4d9b06-32c5-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:38:31 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by PH0PR12MB8098.namprd12.prod.outlook.com (2603:10b6:510:29a::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.29; Tue, 25 Jun
 2024 07:38:25 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Tue, 25 Jun 2024
 07:38:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb4d9b06-32c5-11ef-90a3-e314d9c70b13
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Anthony PERARD
	<anthony@xenproject.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Topic: [XEN PATCH v10 4/5] tools: Add new function to get gsi from dev
Thread-Index:
 AQHawJToSxHkdXzldkCDZE5kihITT7HMD9QAgAGTOgD//5t7AIADeOiA//+SewCAAKx5AP//hDUAAD2nHAAAhnaTAABBP3WA
Date: Tue, 25 Jun 2024 07:38:24 +0000
Message-ID:
 <BL1PR12MB5849F10A85D99B6B244B11DBE7D52@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-5-Jiqian.Chen@amd.com>
 <49563a31-d50e-4015-88ee-e0dab9193cd1@suse.com>
 <BL1PR12MB584910D242D9D8B4BA8B15C1E7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <ab99b766-7bec-4046-beb2-f77a2591a911@suse.com>
 <BL1PR12MB5849ABD858B72505D83678F9E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <099beaac-ed1f-459b-8c2b-42b325f8e4a4@suse.com>
 <BL1PR12MB5849366A442BE6C4C192ABB0E7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <3352736b-e7bc-40d0-ac1f-e58de188c93c@suse.com>
 <BL1PR12MB5849D6943FCB12613A844F33E7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
 <a17f568f-5fe3-4331-9592-509e02ae84ea@suse.com>
In-Reply-To: <a17f568f-5fe3-4331-9592-509e02ae84ea@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <DDCCE7C320FBB54A9DC0CCA4B822D868@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: amd.com

On 2024/6/24 16:13, Jan Beulich wrote:
> On 21.06.2024 10:15, Chen, Jiqian wrote:
>> On 2024/6/20 18:37, Jan Beulich wrote:
>>> On 20.06.2024 12:23, Chen, Jiqian wrote:
>>>> On 2024/6/20 15:43, Jan Beulich wrote:
>>>>> On 20.06.2024 09:03, Chen, Jiqian wrote:
>>>>>> On 2024/6/18 17:13, Jan Beulich wrote:
>>>>>>> On 18.06.2024 10:10, Chen, Jiqian wrote:
>>>>>>>> On 2024/6/17 23:10, Jan Beulich wrote:
>>>>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>>>>> --- a/tools/libs/light/libxl_pci.c
>>>>>>>>>> +++ b/tools/libs/light/libxl_pci.c
>>>>>>>>>> @@ -1406,6 +1406,12 @@ static bool pci_supp_legacy_irq(void)
>>>>>>>>>>  #endif
>>>>>>>>>>  }
>>>>>>>>>>  
>>>>>>>>>> +#define PCI_DEVID(bus, devfn)\
>>>>>>>>>> +            ((((uint16_t)(bus)) << 8) | ((devfn) & 0xff))
>>>>>>>>>> +
>>>>>>>>>> +#define PCI_SBDF(seg, bus, devfn) \
>>>>>>>>>> +            ((((uint32_t)(seg)) << 16) | (PCI_DEVID(bus, devfn)))
>>>>>>>>>
>>>>>>>>> I'm not a maintainer of this file; if I were, I'd ask that for readability's
>>>>>>>>> sake all excess parentheses be dropped from these.
>>>>>>>> Isn't it a coding requirement to enclose each element in parentheses in the macro definition?
>>>>>>>> It seems other files also do this. See tools/libs/light/libxl_internal.h
>>>>>>>
>>>>>>> As said, I'm not a maintainer of this code. Yet while I'm aware that libxl
>>>>>>> has its own CODING_STYLE, I can't spot anything towards excessive use of
>>>>>>> parentheses there.
>>>>>> So, which parentheses do you think are excessive use?
>>>>>
>>>>> #define PCI_DEVID(bus, devfn)\
>>>>>             (((uint16_t)(bus) << 8) | ((devfn) & 0xff))
>>>>>
>>>>> #define PCI_SBDF(seg, bus, devfn) \
>>>>>             (((uint32_t)(seg) << 16) | PCI_DEVID(bus, devfn))
>>>> Thanks, will change in next version.
>>>>
>>>>>
>>>>>>>>>> @@ -1486,6 +1496,18 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>>>>>>          goto out_no_irq;
>>>>>>>>>>      }
>>>>>>>>>>      if ((fscanf(f, "%u", &irq) == 1) && irq) {
>>>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>>>> +        sbdf = PCI_SBDF(pci->domain, pci->bus,
>>>>>>>>>> +                        (PCI_DEVFN(pci->dev, pci->func)));
>>>>>>>>>> +        gsi = xc_physdev_gsi_from_dev(ctx->xch, sbdf);
>>>>>>>>>> +        /*
>>>>>>>>>> +         * Old kernel version may not support this function,
>>>>>>>>>
>>>>>>>>> Just kernel?
>>>>>>>> Yes, xc_physdev_gsi_from_dev depends on the function implemented on linux kernel side.
>>>>>>>
>>>>>>> Okay, and when the kernel supports it but the underlying hypervisor doesn't
>>>>>>> support what the kernel wants to use in order to fulfill the request, all
>>>>>> I don't know what things you mentioned hypervisor doesn't support are,
>>>>>> because xc_physdev_gsi_from_dev is to get the gsi of pcidev through sbdf information,
>>>>>> that relationship can be got only in dom0 instead of Xen hypervisor.
>>>>>>
>>>>>>> is fine? (See also below for what may be needed in the hypervisor, even if
>>>>>> You mean xc_physdev_map_pirq needs gsi?
>>>>>
>>>>> I'd put it slightly differently: You arrange for that function to now take a
>>>>> GSI when the caller is PVH. But yes, the function, when used with
>>>>> MAP_PIRQ_TYPE_GSI, clearly expects a GSI as input (see also below).
>>>>>
>>>>>>> this IOCTL would be satisfied by the kernel without needing to interact with
>>>>>>> the hypervisor.)
>>>>>>>
>>>>>>>>>> +         * so if fail, keep using irq; if success, use gsi
>>>>>>>>>> +         */
>>>>>>>>>> +        if (gsi > 0) {
>>>>>>>>>> +            irq = gsi;
>>>>>>>>>
>>>>>>>>> I'm still puzzled by this, when by now I think we've sufficiently clarified
>>>>>>>>> that IRQs and GSIs use two distinct numbering spaces.
>>>>>>>>>
>>>>>>>>> Also, as previously indicated, you call this for PV Dom0 as well. Aiui on
>>>>>>>>> the assumption that it'll fail. What if we decide to make the functionality
>>>>>>>>> available there, too (if only for informational purposes, or for
>>>>>>>>> consistency)? Suddenly you're fallback logic wouldn't work anymore, and
>>>>>>>>> you'd call ...
>>>>>>>>>
>>>>>>>>>> +        }
>>>>>>>>>> +#endif
>>>>>>>>>>          r = xc_physdev_map_pirq(ctx->xch, domid, irq, &irq);
>>>>>>>>>
>>>>>>>>> ... the function with a GSI when a pIRQ is meant. Imo, as suggested before,
>>>>>>>>> you strictly want to avoid the call on PV Dom0.
>>>>>>>>>
>>>>>>>>> Also for PVH Dom0: I don't think I've seen changes to the hypercall
>>>>>>>>> handling, yet. How can that be when GSI and IRQ aren't the same, and hence
>>>>>>>>> incoming GSI would need translating to IRQ somewhere? I can once again only
>>>>>>>>> assume all your testing was done with IRQs whose numbers happened to match
>>>>>>>>> their GSI numbers. (The difference, imo, would also need calling out in the
>>>>>>>>> public header, where the respective interface struct(s) is/are defined.)
>>>>>>>> I feel like you missed out on many of the previous discussions.
>>>>>>>> Without my changes, the original codes use irq (read from file /sys/bus/pci/devices/<sbdf>/irq) to do xc_physdev_map_pirq,
>>>>>>>> but xc_physdev_map_pirq require passing into gsi instead of irq, so we need to use gsi whether dom0 is PV or PVH, so for the original codes, they are wrong.
>>>>>>>> Just because by chance, the irq value in the Linux kernel of pv dom0 is equal to the gsi value, so there was no problem with the original pv passthrough.
>>>>>>>> But not when using PVH, so passthrough failed.
>>>>>>>> With my changes, I got gsi through function xc_physdev_gsi_from_dev firstly, and to be compatible with old kernel versions(if the ioctl
>>>>>>>> IOCTL_PRIVCMD_GSI_FROM_DEV is not implemented), I still need to use the irq number, so I need to check the result
>>>>>>>> of gsi, if gsi > 0 means IOCTL_PRIVCMD_GSI_FROM_DEV is implemented I can use the right one gsi, otherwise keep using wrong one irq.
>>>>>>>
>>>>>>> I understand all of this, to a (I think) sufficient degree at least. Yet what
>>>>>>> you're effectively proposing (without explicitly saying so) is that e.g.
>>>>>>> struct physdev_map_pirq's pirq field suddenly may no longer hold a pIRQ
>>>>>>> number, but (when the caller is PVH) a GSI. This may be a necessary adjustment
>>>>>>> to make (simply because the caller may have no way to express things in pIRQ
>>>>>>> terms), but then suitable adjustments to the handling of PHYSDEVOP_map_pirq
>>>>>>> would be necessary. In fact that field is presently marked as "IN or OUT";
>>>>>>> when re-purposed to take a GSI on input, it may end up being necessary to pass
>>>>>>> back the pIRQ (in the subject domain's numbering space). Or alternatively it
>>>>>>> may be necessary to add yet another sub-function so the GSI can be translated
>>>>>>> to the corresponding pIRQ for the domain that's going to use the IRQ, for that
>>>>>>> then to be passed into PHYSDEVOP_map_pirq.
>>>>>> If I understood correctly, your concerns about this patch are two:
>>>>>> First, when dom0 is PV, I should not use xc_physdev_gsi_from_dev to get gsi to do xc_physdev_map_pirq, I should keep the original code that use irq.
>>>>>
>>>>> Yes.
>>>> OK, I can change to do this.
>>>> But I still have a concern:
>>>> Although without my changes, passthrough can work now when dom0 is PV.
>>>> And you also agree that: for xc_physdev_map_pirq, when use with MAP_PIRQ_TYPE_GSI, it expects a GSI as input.
>>>> Isn't it a wrong for PV dom0 to pass irq in? Why don't we use gsi as it should be used, since we have a function to get gsi now?
>>>
>>> Indeed this and ...
>>>
>>>>>> Second, when dom0 is PVH, I get the gsi, but I should not pass gsi as the fourth parameter of xc_physdev_map_pirq, I should create a new local parameter pirq=-1, and pass it in.
>>>>>
>>>>> I think so, yes. You also may need to record the output value, so you can later
>>>>> use it for unmap. xc_physdev_map_pirq() may also need adjusting, as right now
>>>>> it wouldn't put a negative incoming *pirq into the .pirq field.
>>>> xc_physdev_map_pirq's logic is if we pass a negative in, it sets *pirq to index(gsi).
>>>> Is its logic right? If not how do we change it?
>>>
>>> ... this matches ...
>>>
>>>>> I actually wonder if that's even correct right now, i.e. independent of your change.
>>>
>>> ... the remark here.
>> So, what should I do next step?
>> If assume the logic of function xc_physdev_map_pirq and PHYSDEVOP_map_pirq is right,
>> I think what I did now is right, both PV and PVH dom0 should pass gsi into xc_physdev_map_pirq.
> 
> It may sound unfriendly, and I'm willing to accept other maintainers
> disagreeing with me, but I think before we think of any extensions of
> what we have here, we want to address any issues with what we have.
> Since you're working in the area, and since getting the additions right
> requires properly understanding how things work (and where things may
> not work correctly right now), I view you as being in the best position
> to see about doing that (imo) prereq step.
Makes sense.
OK, I think there may be two issues in the call stack of function xc_physdev_map_pirq
(xc_physdev_map_pirq -> physdev_map_pirq -> allocate_and_map_gsi_pirq -> allocate_pirq).
Function xc_physdev_map_pirq should have two use cases:
First, when the caller passes a pirq that is ">= 0", they want to map the gsi to that
specific pirq. In this case, when it reaches allocate_pirq: if the gsi already has a
mapped pirq (current_pirq) and current_pirq is not equal to the specific pirq, it fails
with EEXIST; if current_pirq is equal to the specific pirq, or the gsi doesn't have a
current_pirq yet, allocate_pirq returns the specific pirq and the call succeeds.
Second, when the caller passes a pirq that is "< 0", they want to get a free pirq for
the gsi. In this case, when it reaches allocate_pirq: if the gsi already has a mapped
pirq (current_pirq), we should return current_pirq; otherwise it allocates a free pirq
through get_free_pirqs and returns it, and the call succeeds.
Roadmap is below (the branching happens in allocate_pirq):

caller passes gsi and pirq into xc_physdev_map_pirq
 |
 +- *pirq >= 0 (map gsi to a specific pirq)
 |   +- gsi already has a mapped pirq (current_pirq)?
 |       +- yes: current_pirq == pirq?
 |       |       +- yes: return the specific pirq
 |       |       +- no:  fail with -EEXIST
 |       +- no:  return the specific pirq
 +- *pirq < 0 (map gsi to a free pirq) [issue 1]
     +- gsi already has a mapped pirq (current_pirq)?
         +- yes: return current_pirq [issue 2]
         +- no:  return a free pirq

But for the current code:
[issue 1]: when *pirq < 0, xc_physdev_map_pirq re-sets *pirq to index by default
(" map.pirq = *pirq < 0 ? index : *pirq; "), so it can't allocate a free pirq for the
gsi (the second case above).
[issue 2]: when *pirq < 0 and the gsi already has a mapped pirq (so there is no need to
allocate a new free one), allocate_pirq returns *pirq (< 0) directly, which means
allocate_pirq fails. It should instead return the already mapped pirq for that gsi and
report success.

> 
>> By the way, I found xc_physdev_map_pirq didn't support negative pirq is since your commit 934a5253d932b6f67fe40fc48975a2b0117e4cce, do you remember why?
> 
> Counter question: What is a negative pIRQ?
I mean when the caller passes a pirq that is "< 0".

> 
> Jan

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:40:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:40:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747328.1154702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0nG-00026i-QJ; Tue, 25 Jun 2024 07:40:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747328.1154702; Tue, 25 Jun 2024 07:40:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0nG-00026b-NY; Tue, 25 Jun 2024 07:40:46 +0000
Received: by outflank-mailman (input) for mailman id 747328;
 Tue, 25 Jun 2024 07:40:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM0nF-00026M-2s
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:40:45 +0000
Received: from mail-lj1-x229.google.com (mail-lj1-x229.google.com
 [2a00:1450:4864:20::229])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ab0aecc-32c6-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:40:44 +0200 (CEST)
Received: by mail-lj1-x229.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cfeso60905351fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 00:40:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7067e8f3293sm3654489b3a.2.2024.06.25.00.40.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 00:40:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ab0aecc-32c6-11ef-90a3-e314d9c70b13
X-Received: by 2002:a05:651c:152:b0:2ec:5128:184c with SMTP id 38308e7fff4ca-2ec593be850mr37543871fa.11.1719301243658;
        Tue, 25 Jun 2024 00:40:43 -0700 (PDT)
Message-ID: <e9adce27-7fb8-43d7-b851-1da14ad4362e@suse.com>
Date: Tue, 25 Jun 2024 09:40:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] gnttab: fix compat query-size handling
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
Content-Language: en-US
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 09:30, Jan Beulich wrote:
> The odd DEFINE_XEN_GUEST_HANDLE(), inconsistent with all other similar
> constructs, should have caught my attention. Turns out it was needed for
> the build to succeed merely because the corresponding #ifndef had a
> typo. That typo in turn broke compat mode guests, by having query-size
> requests of theirs wire into the domain_crash() at the bottom of the
> switch().
> 
> Fixes: 8c3bb4d8ce3f ("xen/gnttab: Perform compat/native gnttab_query_size check")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Looks like set-version is similarly missing from the set of structures
> checked, but I'm pretty sure that we will now want to defer taking care
> of that until after 4.20 has been branched.

Actually it's get-version too, though for both only half of what's needed
is missing (and luckily only the just-in-case part, not the actual layout
check). In any event I've queued a patch for 4.20.

Jan
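The failure mode described in the patch text is easy to reproduce in miniature: a misspelled macro name in an #ifndef guard is never defined, so the guarded block is unconditionally compiled and the intended path never takes effect. A hypothetical, self-contained illustration (the macro and function names below are stand-ins, not Xen's actual identifiers):

```c
#include <assert.h>

#define HAVE_COMPAT_QUERY_SIZE 1          /* the intended guard macro */

/* With the guard spelled correctly, the fallback is compiled out. */
static int query_size_status(void)
{
#ifndef HAVE_COMPAT_QUERY_SIZE            /* correctly spelled: skipped */
    return -1;                            /* would mean "not wired up" */
#else
    return 0;                             /* compat handling wired up */
#endif
}

/* With a typo, the misspelled name is never defined, so the #ifndef
 * branch is always compiled in -- the analogue of compat query-size
 * requests wiring into domain_crash(). */
static int query_size_status_with_typo(void)
{
#ifndef HAVE_COMPAT_QUERY_SIZ             /* misspelled: always true */
    return -1;                            /* wrong path compiled in */
#else
    return 0;
#endif
}
```

The compiler accepts both variants without complaint, which is why the only visible symptom was the seemingly redundant DEFINE_XEN_GUEST_HANDLE() needed to keep the build going.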


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:44:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:44:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747333.1154712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0qv-0002gk-AS; Tue, 25 Jun 2024 07:44:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747333.1154712; Tue, 25 Jun 2024 07:44:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0qv-0002gd-7h; Tue, 25 Jun 2024 07:44:33 +0000
Received: by outflank-mailman (input) for mailman id 747333;
 Tue, 25 Jun 2024 07:44:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LI8L=N3=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sM0qt-0002gX-S9
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:44:31 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on2062b.outbound.protection.outlook.com
 [2a01:111:f403:2412::62b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c05f7f7f-32c6-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 09:44:29 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by DM6PR12MB4172.namprd12.prod.outlook.com (2603:10b6:5:212::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.28; Tue, 25 Jun
 2024 07:44:23 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Tue, 25 Jun 2024
 07:44:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c05f7f7f-32c6-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PprnT2BeBnEBqS8d+mUKvYU0tiAtY9xKHag4dbwW3fE9x+/l7+BkUkZTnx0auQ+FzP1qFk0VoHmu5d5gWI8V/3vW7hugfo1ZB4A3KBzIAgvbJAzLEUeDDAUSJzoDHugpJv6w7tHF3hnTM7iko+9TE3hQTqzZjJgZzv7IAt2HXAaMwWYOPuQXg3ZfcKa9uIICV7hZ0c5nmBewohJpYbHPD6k31svNZOSHtt26TP5R3263rwbhLqPsfnKNwB+LZwuX+e4PqIhfPzrFbyXU76HKFJ/Mj1KvTSgFVN9XB8wWQr8G8YkuOIa0y8L4IsyM0bZ91fCmgFP6YOrUxGeHQoY26A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=KJx6ViJPcXtXNzXn2gTQ/hPbyeVW91bxVRT/DOpoabw=;
 b=iRwttaARtP3f41IOwpGKUoe6bcQxvQErjLRuT3ORnnLi6xxMiHCOyQDYagW27JPOrCp1GuHlZzmXN50xh+bKE6gTJ8Cu2fFCZ2WxHcxDOnAuezW5GEHIwxh53NPrr3iw30PZPsUm3g7PJ7qtzsAojwJMDfDcSGE74dKPH6zKcEDio4Al9MzaMJIZqvPIClrAoUlHqoVUCL+Y0G0iUj9fadpqjgvhzVDcyxNdhLqWe69fsPf/KuBLIw4z+0jkkCKyw2M8Rd4rtMVgNJZhXELag2IiyPpdfw4k5SmYJlBcxrbbKPT+6chrgLIcLMevh+legOUyJxNrgIPrIRXLdMqybg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KJx6ViJPcXtXNzXn2gTQ/hPbyeVW91bxVRT/DOpoabw=;
 b=eLOemueODvGrLuBbLA3dvyhClk7NJB2dq5lEb2+zIeJXRr7QVsbDsgQ+KJH9N7DlHPBch7Q7jSJApY/KurCnMWqc/+EZ+yckluPONjPShIzs9xG1XrWj+RInRiL/YLvRTDzssipT6+/48FXoDCH7J45tJ0TTjCepBgsNyx3Q6eg=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>,
	=?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Juergen Gross
	<jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	"Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>, "Huang, Ray"
	<Ray.Huang@amd.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Anthony PERARD <anthony@xenproject.org>,
	"Chen, Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHawJTqXyg3PMEiVUiLHPQ86aZkYrHMFeyAgAGdTwD//43bAIADi4UA//+vaICAAe+IgIAEMRMAgAIOJgA=
Date: Tue, 25 Jun 2024 07:44:23 +0000
Message-ID:
 <BL1PR12MB5849099F40069F7B9C376106E7D52@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
 <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com>
 <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
 <1ffd019d-6bab-49d1-932c-7b5aee245f32@suse.com>
In-Reply-To: <1ffd019d-6bab-49d1-932c-7b5aee245f32@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7698.013)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|DM6PR12MB4172:EE_
x-ms-office365-filtering-correlation-id: 8ab1a75b-3344-4e64-80a0-08dc94eaa17a
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|366013|7416011|376011|1800799021|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(366013)(7416011)(376011)(1800799021)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <A80BCD1E8668A34B96D1617554E39362@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ab1a75b-3344-4e64-80a0-08dc94eaa17a
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 Jun 2024 07:44:23.2153
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: xVsvO04bczoojJ/LTFRImnELwwd8GZBlwAI3j03m92kTtOatr5ccUaLnz1i9ov+wSkfUIktR/h9nZsoBRQk/ZA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4172

On 2024/6/24 16:17, Jan Beulich wrote:
> On 21.06.2024 10:20, Chen, Jiqian wrote:
>> On 2024/6/20 18:42, Jan Beulich wrote:
>>> On 20.06.2024 11:40, Chen, Jiqian wrote:
>>>> On 2024/6/18 17:23, Jan Beulich wrote:
>>>>> On 18.06.2024 10:23, Chen, Jiqian wrote:
>>>>>> On 2024/6/17 23:32, Jan Beulich wrote:
>>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>>>>              rc = ERROR_FAIL;
>>>>>>>>              goto out;
>>>>>>>>          }
>>>>>>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
>>>>>>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>>>>>>>
>>>>>>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF,
>>>>>>> but I didn't check if that can be used with the underlying hypercall(s).
>>>>>>> Otherwise
>>>> From the commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf, DOMID_SELF is
>>>> not allowed for XEN_DOMCTL_getdomaininfo.
>>>> And now XEN_DOMCTL_getdomaininfo gets the domain through
>>>> rcu_lock_domain_by_id.
>>>>
>>>>>>> you want to pass the actual domid of the local domain here.
>>>> What is the local domain here?
>>>
>>> The domain your code is running in.
>>>
>>>> What is the method for me to get its domid?
>>>
>>> I hope there's an available function in one of the libraries to do that.
>> I didn't find a related function.
>> Hi Anthony, do you know?
>>
>>> But I wouldn't even know what to look for; that's a question (primarily)
>>> for Anthony then, who sadly continues to be our only tool stack maintainer.
>>>
>>> Alternatively we could maybe enable XEN_DOMCTL_getdomaininfo to permit
>>> DOMID_SELF.
>> It hasn't permitted DOMID_SELF since the commit below. Would it still have
>> the same problem if we permitted DOMID_SELF?
>
> To answer this, all respective callers would need auditing. However, ...
>
>> commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf
>> Author: kfraser@localhost.localdomain <kfraser@localhost.localdomain>
>> Date:   Tue Aug 14 09:56:46 2007 +0100
>>
>>     xen: Do not accept DOMID_SELF as input to DOMCTL_getdomaininfo.
>>     This was screwing up callers that loop on getdomaininfo(), if there
>>     was a domain with domid DOMID_FIRST_RESERVED-1 (== DOMID_SELF-1).
>>     They would see DOMID_SELF-1, then look up DOMID_SELF, which has domid
>>     0 of course, and then start their domain-finding loop all over again!
>>     Found by Kouya Shimura <kouya@jp.fujitsu.com>. Thanks!
>>     Signed-off-by: Keir Fraser <keir@xensource.com>
>
> ... I view this as a pretty odd justification for the change, when imo the
> bogus loops should instead have been adjusted.
Yes, you are right.
Anthony also suggested using LIBXL_TOOLSTACK_DOMID in place of the hard-coded
domid 0. So it seems there is no need to change the DOMCTL_getdomaininfo
hypercall for now?

>
> Jan
>
>> diff --git a/xen/common/domctl.c b/xen/common/domctl.c
>> index 09a1e84d98e0..5d29667b7c3d 100644
>> --- a/xen/common/domctl.c
>> +++ b/xen/common/domctl.c
>> @@ -463,19 +463,13 @@ long do_domctl(XEN_GUEST_HANDLE(xen_domctl_t) u_domctl)
>>      case XEN_DOMCTL_getdomaininfo:
>>      {
>>          struct domain *d;
>> -        domid_t dom;
>> -
>> -        dom = op->domain;
>> -        if ( dom == DOMID_SELF )
>> -            dom = current->domain->domain_id;
>> +        domid_t dom = op->domain;
>>
>>          rcu_read_lock(&domlist_read_lock);
>>
>>          for_each_domain ( d )
>> -        {
>>              if ( d->domain_id >= dom )
>>                  break;
>> -        }
>>
>>          if ( d == NULL )
>>          {
>

--=20
Best regards,
Jiqian Chen.
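The 2007 commit quoted in this thread describes an enumeration loop that restarts forever once DOMID_SELF gets translated to domid 0. That behaviour can be modelled in a few self-contained lines (the constants, the sorted domain list, and the function names are stand-ins for illustration, not the real hypercall interface):

```c
#include <assert.h>

#define DOMID_SELF 16  /* stand-in; the real value is 0x7ff0 */

/* Sorted ids of existing domains, including one at DOMID_SELF - 1. */
static const int domains[] = { 0, 3, DOMID_SELF - 1 };
static const int ndomains = sizeof(domains) / sizeof(domains[0]);

/* Return the first existing domain id >= dom, or -1 if there is none.
 * When translate_self is set, DOMID_SELF is first mapped to the caller's
 * own id (0 here), mimicking the pre-2007 hypervisor behaviour the
 * commit message describes. */
static int getdomaininfo(int dom, int translate_self)
{
    int i;

    if (translate_self && dom == DOMID_SELF)
        dom = 0;
    for (i = 0; i < ndomains; i++)
        if (domains[i] >= dom)
            return domains[i];
    return -1;
}

/* A typical caller's enumeration loop: ask for the first domain at or
 * above the previous result + 1 until the lookup fails.  The cap makes
 * the broken variant's non-termination observable. */
static int enumerate(int translate_self)
{
    int dom = 0, seen = 0;

    while (seen < 100) {
        int id = getdomaininfo(dom, translate_self);
        if (id < 0)
            break;
        seen++;
        dom = id + 1;  /* after seeing DOMID_SELF - 1, this is DOMID_SELF */
    }
    return seen;
}
```

With translation disabled the loop sees each domain once and stops; with translation enabled, the query for DOMID_SELF yields domain 0 and the scan restarts from the beginning, which is exactly the bug the commit worked around by rejecting DOMID_SELF instead.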


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:46:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747339.1154722 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0sS-0003D6-LK; Tue, 25 Jun 2024 07:46:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747339.1154722; Tue, 25 Jun 2024 07:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0sS-0003Cz-Ic; Tue, 25 Jun 2024 07:46:08 +0000
Received: by outflank-mailman (input) for mailman id 747339;
 Tue, 25 Jun 2024 07:46:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LI8L=N3=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sM0sR-0003Ct-Uh
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:46:07 +0000
Received: from NAM12-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam12on20610.outbound.protection.outlook.com
 [2a01:111:f403:200a::610])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fa60d8cd-32c6-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:46:07 +0200 (CEST)
Received: from BL1PR12MB5849.namprd12.prod.outlook.com (2603:10b6:208:384::18)
 by DM6PR12MB4172.namprd12.prod.outlook.com (2603:10b6:5:212::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.28; Tue, 25 Jun
 2024 07:46:03 +0000
Received: from BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285]) by BL1PR12MB5849.namprd12.prod.outlook.com
 ([fe80::b77f:9333:3a5a:d285%3]) with mapi id 15.20.7698.020; Tue, 25 Jun 2024
 07:46:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa60d8cd-32c6-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NMDj7TH5z8mc3opBJA0OkH0lAPWGthhf62ttW7k4kmHiUrJJDFE50xCkmBF9qsnnyl2JzPTOzablr92bkhB9DKFkMIrT4XrJtgmVbq3Nq1j72BL2ZdOjr4OvodL8ZcYK02ptKXSZmCGpb8F6PSfdz6tWw4EXJV6WaL/FShvIljsL2VcF9G2QZVGHWW+CIc7ZpSWEzTRMrznswlVNxHVILM4AFT5Tto/BlXD3CDKoYBhwZLy1XiW+lnLrG86cUDD3iRZXCSBHj0OSuB1kX5fCC9Z0HVtbqkOb6wmbEWIEF2+eduWBHKomrBPhcl3iYwIZvIqnKWkYudhpourxwOlRMg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M/s2phCOFsIJhg/V8D4l+vBuLM71RAMOKB881PNaBLI=;
 b=FnIEWEAyAsf2alCafGV4EeI5UaajLiXtN7lAO8/6IdDTuc/u1Pp81WZqiFe7OjF613WLZg/BdvqBCaOMX0+nJuau7Mg4flzvsgYsghhPg2sDHvh1fhJWHUs93hMD6Qgt/KccoNnAPGEQnIxNMFvHuLhZcw2uMgxLO+a0rXfDlQ4P+/Y9OuHdBvKJfhfArtc1m7vUoAL1NiLZJiGYnXyvlIRITHqglPHssj4esFYJH4N4HYmaWvUPPRX337O11I5WMavWwKydkauqaEUmTY2liRwwMNNFnw2hpmX3ZOr43fPBLw4qu7ktrIaz93WZnkPqARUAvriOIOpifpT6Ey97Nw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M/s2phCOFsIJhg/V8D4l+vBuLM71RAMOKB881PNaBLI=;
 b=CgfyBVqJ4FOJOVj/LotxjSiebF03sNzmm1rS0D1btIapzifh1w1iMF3lozD1k312TExWNC9K5TShayRSUbEPqxMtp3LaGB80GXujNok8sLtpJ6/he4GxCb9Ch7c3QPxH/QxnyoNBsmDIzr9ahohjp6FQNahqO3TK08PWCop1AII=
From: "Chen, Jiqian" <Jiqian.Chen@amd.com>
To: Anthony PERARD <anthony.perard@vates.tech>
CC: Jan Beulich <jbeulich@suse.com>, Keir Fraser <keir@xensource.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, =?iso-8859-1?Q?Roger_Pau_Monn=E9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Juergen Gross <jgross@suse.com>, "Daniel P . Smith"
	<dpsmith@apertussolutions.com>, "Hildebrand, Stewart"
	<Stewart.Hildebrand@amd.com>, "Huang, Ray" <Ray.Huang@amd.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Chen,
 Jiqian" <Jiqian.Chen@amd.com>
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Topic: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
Thread-Index:
 AQHawJTqXyg3PMEiVUiLHPQ86aZkYrHMFeyAgAGdTwD//43bAIADi4UA//+vaICAAe+IgIAEeJ2AgAHGqwA=
Date: Tue, 25 Jun 2024 07:46:02 +0000
Message-ID:
 <BL1PR12MB5849A8E3E11116F1BD663439E7D52@BL1PR12MB5849.namprd12.prod.outlook.com>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
 <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com>
 <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
 <Znlnf2CHxCFadcIX@l14>
In-Reply-To: <Znlnf2CHxCFadcIX@l14>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-imapappendstamp: BL1PR12MB5849.namprd12.prod.outlook.com
 (15.20.7698.013)
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=amd.com;
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: BL1PR12MB5849:EE_|DM6PR12MB4172:EE_
x-ms-office365-filtering-correlation-id: c3acfff7-a33e-4b4b-9d9b-08dc94eadcea
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
x-microsoft-antispam:
 BCL:0;ARA:13230037|366013|7416011|376011|1800799021|38070700015;
x-forefront-antispam-report:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BL1PR12MB5849.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230037)(366013)(7416011)(376011)(1800799021)(38070700015);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <0CADF8E3B97EE44E997D357AAEA26BC0@amdcloud.onmicrosoft.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: BL1PR12MB5849.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c3acfff7-a33e-4b4b-9d9b-08dc94eadcea
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 Jun 2024 07:46:02.9140
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: R8jR56Ii1VtmC9hTcQNXqVIeiv7STGbNgrVTjdGDPyChUu+1gB3E7od3F4F1BD7WKSM8TydC3S1XREpje/bY0w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4172

On 2024/6/24 20:33, Anthony PERARD wrote:
> On Fri, Jun 21, 2024 at 08:20:55AM +0000, Chen, Jiqian wrote:
>> On 2024/6/20 18:42, Jan Beulich wrote:
>>> On 20.06.2024 11:40, Chen, Jiqian wrote:
>>>> On 2024/6/18 17:23, Jan Beulich wrote:
>>>>> On 18.06.2024 10:23, Chen, Jiqian wrote:
>>>>>> On 2024/6/17 23:32, Jan Beulich wrote:
>>>>>>> On 17.06.2024 11:00, Jiqian Chen wrote:
>>>>>>>> @@ -1516,14 +1519,39 @@ static void pci_add_dm_done(libxl__egc *egc,
>>>>>>>>              rc = ERROR_FAIL;
>>>>>>>>              goto out;
>>>>>>>>          }
>>>>>>>> -        r = xc_domain_irq_permission(ctx->xch, domid, irq, 1);
>>>>>>>> +#ifdef CONFIG_X86
>>>>>>>> +        /* If dom0 doesn't have PIRQs, need to use xc_domain_gsi_permission */
>>>>>>>> +        r = xc_domain_getinfo_single(ctx->xch, 0, &info);
>>>>>>>
>>>>>>> Hard-coded 0 is imposing limitations. Ideally you would use DOMID_SELF, but
>>>>>>> I didn't check if that can be used with the underlying hypercall(s). Otherwise
>>>> From commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf, DOMID_SELF is not
>>>> allowed for XEN_DOMCTL_getdomaininfo.
>>>> And now XEN_DOMCTL_getdomaininfo gets the domain through rcu_lock_domain_by_id.
>>>>
>>>>>>> you want to pass the actual domid of the local domain here.
>>>> What is the local domain here?
>>>
>>> The domain your code is running in.
>>>
>>>> What is the method for me to get its domid?
>>>
>>> I hope there's an available function in one of the libraries to do that.
>> I didn't find a related function.
>> Hi Anthony, do you know?
> 
> Yes, I managed to find:
> LIBXL_TOOLSTACK_DOMID
> That's the value you can use instead of "0" to designate dom0.
> (That was harder than necessary to find.)
Thank you very much! I will use LIBXL_TOOLSTACK_DOMID in next version.

> 
> Cheers,
> 

-- 
Best regards,
Jiqian Chen.


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:47:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:47:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747352.1154732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0tK-0003nM-2W; Tue, 25 Jun 2024 07:47:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747352.1154732; Tue, 25 Jun 2024 07:47:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0tJ-0003nF-Vw; Tue, 25 Jun 2024 07:47:01 +0000
Received: by outflank-mailman (input) for mailman id 747352;
 Tue, 25 Jun 2024 07:47:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM0tI-0003Ct-2K
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:47:00 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1a5643c4-32c7-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:46:59 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec50a5e230so36180471fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 00:46:59 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706510854cbsm7429236b3a.43.2024.06.25.00.46.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 00:46:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a5643c4-32c7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719301619; x=1719906419; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=kEeFvuMCDNOqvQJ4XM1nNXDHo8tCOhCvv5tYOD/B6Dg=;
        b=TBSR0kopViYGnX9If8bMKM2VeDXO97dQ9am0415/IOT0MTYEMXOeBZy2280Jxef/k+
         mVILF5cCmEy6vLCysdbMKWD4RXgB8Kz1FeFNrdIOzzlLWqZtcI3+JL4XTwj67uzjbZi5
         vrSfZ3agrL2k+3mILGpNLleQx0NQohuPkjR54hysbN1mRZwWx81L6Pr7ZG8i8PLppu9k
         x9q7kz5NBya8lcRVPpeWZxrDDl8FYi3+Mk27k8ccv1H8loAACaPfAWwPoA3+YImasjoU
         B8ZfBKOaTYlxlKh1HJjxk5P2+zH+ZgKaeRiJvk65f+PCKw289R29i9BVjW58qQebFO/h
         qWMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719301619; x=1719906419;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=kEeFvuMCDNOqvQJ4XM1nNXDHo8tCOhCvv5tYOD/B6Dg=;
        b=BVcr5w+iIvCeJFxFU9Rtuft9InH3PNzaWUZlGKOhhITU+4xJKUay0nNBBUKKEnD14c
         Hhqdx5a/QyoZd+KI2iPUAk1jV9wBX0VJkHuRKWijU7X9n4LZ2y7nisznM3cr7FXAlfIS
         oi53/Cip9UYnush3jQ2JcPONR4SyWsEm+x7/KbykNzQ+dTb8lVEBt7NJo6SVFh6EMKRW
         dX42IKWmgKIXF91iK28zvU4vzYvVl/jxBeQXiW3iiOpLVVU/2c0ToNAIknjjSh3K4agD
         GpnqV2q0t2dFvQBFNiOzqgCBI2HWDKBJwXs5ORYThLQ+S/YaraoI1LCZ0dAoXeTop6WR
         6LWg==
X-Forwarded-Encrypted: i=1; AJvYcCVv8Bg4P78D5s05S2HC+nHodinjlDQELgvyHtyyPFgbsmDLNHJhAW1udGH93YcAvbMpx59vFX0YDFA2OxC4p7D2KRySreMWBXTasuvUQag=
X-Gm-Message-State: AOJu0YymDz2W6V0oRkE55Nd1omS3TAjcBKg6au2FT6sHNymnCAjE4ZGc
	t30oT6Up9l5hmxSEHfOWyuCu12Ug606dFO5aie+7F5MdB8knjzyqQKTit+9whA==
X-Google-Smtp-Source: AGHT+IG2Ai9+CTpH1tBZJpF0Yl7MxAtHCFQEWClQSpqHv2oRnY0NgryWGUsdN1v3XjaD89d0N3bBtw==
X-Received: by 2002:a2e:3518:0:b0:2ec:5759:bf2f with SMTP id 38308e7fff4ca-2ec5b26954amr37131171fa.7.1719301618823;
        Tue, 25 Jun 2024 00:46:58 -0700 (PDT)
Message-ID: <360fae9f-4632-420b-a517-ff1926a24f3e@suse.com>
Date: Tue, 25 Jun 2024 09:46:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 07/13] x86/hvm: address violations of MISRA C Rule
 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <a20efca7042ea8f351516ea521edccd89b475929.1719218291.git.federico.serafini@bugseng.com>
 <087eb879-b3f6-4d1a-a52e-1e27337620c9@suse.com>
 <d1d6fda5-c619-4578-9a21-7da1a9810044@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <d1d6fda5-c619-4578-9a21-7da1a9810044@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 09:21, Federico Serafini wrote:
> On 24/06/24 17:32, Jan Beulich wrote:
>> On 24.06.2024 11:04, Federico Serafini wrote:
>>> @@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
>>>   
>>>       default:
>>>           ASSERT_UNREACHABLE();
>>> +        break;
>>>       }
>>>   
>>>       if ( hvmemul_ctxt->ctxt.retire.singlestep )
>>> @@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
>>>           /* fallthrough */
>>
>> What about this? It doesn't match anything I see in deviations.rst.
> 
> The last item for R16.3 in deviations.rst explicitly says that
> existing comments of this form are considered safe (i.e., deviated)
> but deprecated, meaning that the pseudo-keyword should be used for new
> cases. We can consider rephrasing it if that is not clear enough.

Apologies. I mistakenly looked at grep output rather than actual file
contents. Please disregard this comment of mine.

>>> @@ -5283,6 +5287,8 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
>>>            * %cs and %tr are unconditionally present.  SVM ignores these present
>>>            * bits and will happily run without them set.
>>>            */
>>> +        fallthrough;
>>> +
>>>       case x86_seg_cs:
>>>           reg->p = 1;
>>>           break;
>>
>> Why the extra blank line here, ...
>>
>>> --- a/xen/arch/x86/hvm/hypercall.c
>>> +++ b/xen/arch/x86/hvm/hypercall.c
>>> @@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
>>>       case 8:
>>>           eax = regs->rax;
>>>           /* Fallthrough to permission check. */
>>> +        fallthrough;
>>>       case 4:
>>>       case 2:
>>>           if ( currd->arch.monitor.guest_request_userspace_enabled &&
>>
>> ... when e.g. here there's none? I'm afraid this goes back to an
>> unfinished discussion as to whether to have blank lines between blocks
>> in fall-through situations. My view is that not having them in these
>> cases is helping to make the falling through visually noticeable.
> 
> I looked at the context to preserve the style
> of the existing cases.
> 
> What do you think about:
> - keep the existing style when a break needs to be inserted;

Even that may be a judgment call, I'd say. But commonly: Yes.

> - no blank line if a fallthrough needs to be inserted.

Yes here, but others (Andrew?) may disagree with me.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:48:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747358.1154743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0uX-0004Yr-Aw; Tue, 25 Jun 2024 07:48:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747358.1154743; Tue, 25 Jun 2024 07:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM0uX-0004Yk-8Q; Tue, 25 Jun 2024 07:48:17 +0000
Received: by outflank-mailman (input) for mailman id 747358;
 Tue, 25 Jun 2024 07:48:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM0uW-0004Ya-42
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:48:16 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 46d01420-32c7-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 09:48:14 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ebeefb9a6eso58482121fa.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 00:48:14 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3c6cdcsm74815185ad.128.2024.06.25.00.48.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 00:48:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46d01420-32c7-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719301693; x=1719906493; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=GuLO1y5cDkeVS7X6EsAaXv62UaMYK1EMMIwtfJo1lak=;
        b=cSbIoOImuAdTJ7ABP5Dc2CptWaOcrIzJeyAiZo76IuvLxXublP26Z05k0CwrqdNTy3
         3PZqjnRasK7cIe/S1qe8laOEm0Tv4QkcerlhNrCElHxvsRI15l1ahUOtTc6dQUAzFGV2
         qTZ+3pncY5lq0aiBE7a16y6vOc8Wglbdr1CJCptjDtPbKCZrw6odK1vjzNzslGH8FIxM
         63vWL+8NRL8kbQIkdDBiUUha6r8rLw6xC/JFEVNPliqjnQKvZtfbsPxwtNBMBY15L8wg
         WiMtavRY8cXJ4FU7IzeUB1QMGp1dHSE3x0IXnCxUZ5KHbll6gEBH6xNa7tZzXoj2gTzc
         kmEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719301693; x=1719906493;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GuLO1y5cDkeVS7X6EsAaXv62UaMYK1EMMIwtfJo1lak=;
        b=JjQ2QI1/shjshdnGfF03kstCIbNXSTtZ6sGICQ77VisA/FnAVCPdAnIzPhYy7xScNy
         FKsVlj/eJbAhV6JMs0xEZ9tloFseWFJMd0Vc1HNVDpo+u/sRFJ+Cv0Xd+I2vgNcHOJy+
         WppnQA6ATkAOGQpAWn4Am8QFQlswF9MIMgtouHfZKB9zP5PO6HaDA71bRd1ben2BvlH6
         P0wRWJxU/EWvrKtjzqSUkR8uogYKBpZw8CEjCCcTH9rl8YJw8dWh+9ohJaiEUw3NSCLo
         cb8PtJxQNApIOep+KS9lw4A2KZx1stA2/L/aguOS1rU3oyaegDwGbuPgFQqNfoSxZhTj
         jkqA==
X-Forwarded-Encrypted: i=1; AJvYcCUHgFDd5brDdHjC/pwOLk+er94gckEbWyY3WreyAC/vGn0uqSpiL5MObS7bXhCdIcRun9tWeqa95bKThWaFaKmAR7n8s5R7nNsIi27rkkQ=
X-Gm-Message-State: AOJu0YxXoodhUbT3wzXlahXAPZ7CkI2AAcXmeDZzfs2e/EAklGK62I2t
	L6EJ29ENKhfP0SMDi+xZwHmFRlzM0zY64vVCp9ophw9wBdk7jglOPZrGpMxnCw==
X-Google-Smtp-Source: AGHT+IEoC4cU2UTiUDV9e7JtEAUjOJ+sNx06L+KEm0C1q00H4PcFxNC9zwTilDTPgqqTVKcr00a5RA==
X-Received: by 2002:a05:651c:10a7:b0:2ec:59cb:c171 with SMTP id 38308e7fff4ca-2ec59cbc198mr43672811fa.37.1719301693411;
        Tue, 25 Jun 2024 00:48:13 -0700 (PDT)
Message-ID: <f6c1014c-89b7-46c0-b126-d036464a5233@suse.com>
Date: Tue, 25 Jun 2024 09:48:01 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v10 5/5] domctl: Add XEN_DOMCTL_gsi_permission to
 grant gsi
To: "Chen, Jiqian" <Jiqian.Chen@amd.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Juergen Gross <jgross@suse.com>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 "Hildebrand, Stewart" <Stewart.Hildebrand@amd.com>,
 "Huang, Ray" <Ray.Huang@amd.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Anthony PERARD <anthony@xenproject.org>
References: <20240617090035.839640-1-Jiqian.Chen@amd.com>
 <20240617090035.839640-6-Jiqian.Chen@amd.com>
 <b4b6cbcd-dd71-44da-aea8-6a4a170d73d5@suse.com>
 <BL1PR12MB584916579E2C16C6C9F86D1FE7CE2@BL1PR12MB5849.namprd12.prod.outlook.com>
 <b6beb3f3-9c33-4d4c-a607-ca0eba76f049@suse.com>
 <BL1PR12MB58493479F9EF4E56E9CB814FE7C82@BL1PR12MB5849.namprd12.prod.outlook.com>
 <96ba4e66-5d33-4c39-b733-790e7996332f@suse.com>
 <BL1PR12MB58493B55E074243D356B0CAAE7C92@BL1PR12MB5849.namprd12.prod.outlook.com>
 <1ffd019d-6bab-49d1-932c-7b5aee245f32@suse.com>
 <BL1PR12MB5849099F40069F7B9C376106E7D52@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <BL1PR12MB5849099F40069F7B9C376106E7D52@BL1PR12MB5849.namprd12.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 09:44, Chen, Jiqian wrote:
> On 2024/6/24 16:17, Jan Beulich wrote:
>> On 21.06.2024 10:20, Chen, Jiqian wrote:
>>> On 2024/6/20 18:42, Jan Beulich wrote:
>>>> Alternatively we could maybe enable XEN_DOMCTL_getdomaininfo to permit
>>>> DOMID_SELF.
>>> It hasn't permitted DOMID_SELF since the commit below. Does it still have the same problem if we permit DOMID_SELF?
>>
>> To answer this, all respective callers would need auditing. However, ...
>>
>>> commit 10ef7a91b5a8cb8c58903c60e2dd16ed490b3bcf
>>> Author: kfraser@localhost.localdomain <kfraser@localhost.localdomain>
>>> Date:   Tue Aug 14 09:56:46 2007 +0100
>>>
>>>     xen: Do not accept DOMID_SELF as input to DOMCTL_getdomaininfo.
>>>     This was screwing up callers that loop on getdomaininfo(), if there
>>>     was a domain with domid DOMID_FIRST_RESERVED-1 (== DOMID_SELF-1).
>>>     They would see DOMID_SELF-1, then look up DOMID_SELF, which has domid
>>>     0 of course, and then start their domain-finding loop all over again!
>>>     Found by Kouya Shimura <kouya@jp.fujitsu.com>. Thanks!
>>>     Signed-off-by: Keir Fraser <keir@xensource.com>
>>
>> ... I view this as a pretty odd justification for the change, when imo the
>> bogus loops should instead have been adjusted.
> Yes, you are right.
> And Anthony suggested using LIBXL_TOOLSTACK_DOMID to replace the hard-coded domid 0.
> It seems there is no need to change the DOMCTL_getdomaininfo hypercall for now?

With Anthony's suggestion - indeed.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:54:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:54:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747366.1154753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM10P-0006W9-UL; Tue, 25 Jun 2024 07:54:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747366.1154753; Tue, 25 Jun 2024 07:54:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM10P-0006W2-RG; Tue, 25 Jun 2024 07:54:21 +0000
Received: by outflank-mailman (input) for mailman id 747366;
 Tue, 25 Jun 2024 07:54:21 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sM10P-0006Vr-25
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:54:21 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2046af3b-32c8-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 09:54:19 +0200 (CEST)
Received: from [10.176.134.80] (unknown [160.78.253.181])
 by support.bugseng.com (Postfix) with ESMTPSA id 404804EE0738;
 Tue, 25 Jun 2024 09:54:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2046af3b-32c8-11ef-b4bb-af5377834399
Message-ID: <52875aea-fceb-47d5-b970-d16972f9765a@bugseng.com>
Date: Tue, 25 Jun 2024 09:54:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 11/13] x86/pmtimer: address a violation of MISRA C
 Rule 16.3
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <fea205262d4f7b337b804a013d0f1c411a721b46.1719218291.git.federico.serafini@bugseng.com>
 <97889584-4bf3-4cd1-9fd5-92b673b39d77@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <97889584-4bf3-4cd1-9fd5-92b673b39d77@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 24/06/24 17:43, Jan Beulich wrote:
> On 24.06.2024 11:04, Federico Serafini wrote:
>> Add missing break statement to address a violation of MISRA C Rule 16.3
>> ("An unconditional `break' statement shall terminate every
>> switch-clause").
>>
>> No functional change.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> I'm curious though on what basis you decided to split this off of
> patch 7, ...
> 
>> ---
>>   xen/arch/x86/hvm/pmtimer.c | 1 +
>>   1 file changed, 1 insertion(+)
> 
> ... touching various other files under xen/arch/x86/hvm/.

Looking at the log, I found some commits where the component
name was made explicit.
I can squash this into patch 7 in a v3.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 07:57:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 07:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747377.1154766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM13c-0007AL-C2; Tue, 25 Jun 2024 07:57:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747377.1154766; Tue, 25 Jun 2024 07:57:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM13c-0007AE-9P; Tue, 25 Jun 2024 07:57:40 +0000
Received: by outflank-mailman (input) for mailman id 747377;
 Tue, 25 Jun 2024 07:57:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM13b-000799-Q7
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 07:57:39 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 976c1a1d-32c8-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 09:57:38 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ec4eefbaf1so41961121fa.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 00:57:38 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 41be03b00d2f7-71bb261aa98sm3791408a12.29.2024.06.25.00.57.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 00:57:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 976c1a1d-32c8-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719302258; x=1719907058; darn=lists.xenproject.org;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ByHMlf5bOGP1cI0MHWfLPjmALFk3TR56glL87P/v77s=;
        b=LHrNuR0S1dpvvo6NcLv3sBS2vl2oSiGSUfq7fkmQjtwUoMSnd04D0QJpAmQLxi9o17
         W50l6u8UzAydIQKrusQMd6WDzLQrlKog7/ezUkIcsAyy74i71vLyoMVflEJxqBUEaXjp
         HnmrLkWMX+EHDTXmyzdzW41Z8ZzmatXo/8JTKHZox+4oapdl0IlrcVV3iiLvvAUM29jJ
         iZ6ElVB8oZSjYfLWClvWlPjLl8wWmyNluNTWfr3SRtWomjZ4460ShNsvr9w5dRxPnSUv
         B8I40vUafRg2C36ebTPtnPsUl4gmsjia6b3mbySC16VdKb5rMYu5e8bEW8X3+yyJmEFB
         YAxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719302258; x=1719907058;
        h=content-transfer-encoding:autocrypt:subject:from:cc:to
         :content-language:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=ByHMlf5bOGP1cI0MHWfLPjmALFk3TR56glL87P/v77s=;
        b=u9lN6ol5orGXgHAsiu7zx7O4PdofYVMyb0t40E1Vu+W3E64YdNiYXzPgj94Dzz5l9t
         1S+uVdyJCdmk2smuGAS+9TBEsKY3pGY/jJXi+L8CVZwEICZqNi12hgxQnGJjO1PwFm5j
         vZGxdlIJmjap4oaqt3vKif8atRTChU0o4ANRpjC5stJ7Z3vd4WORbLDOIFeNrN2gzKqy
         R6d/OAyRRDagCdBkf3NM65ZXlkR3gsWfNnBvtmsY0Uhce5ma1s5emNgtRdDUf6rOVKhY
         4a4x7jSaG/4Prlg40hUYHoWqxg+oiXV3gF10Ce8xVxYcEfStz9yiUma9N7Xn+yxvrl7p
         t/Qw==
X-Gm-Message-State: AOJu0Yz9GE/NpxygfLdOeaOOiH7rp0ach2NvwGn/EtIyhNs1dp90E6gF
	7H6QrSYTP5xxr/pANzgOd7PgqLEr8yUUd07vl6SixxPEtKYXlkK9zfD39BTR+fS0kN0GoD1AFj4
	=
X-Google-Smtp-Source: AGHT+IHk8oTHDmsRPeZMfEVLLGgCYxvV5T0QT3q/BoaHankcPIrPsKxlVmzxh1gGiKOIJ5bhe+OMPg==
X-Received: by 2002:a2e:7e0f:0:b0:2ec:56b9:259f with SMTP id 38308e7fff4ca-2ec5b3e24d7mr36295221fa.48.1719302258201;
        Tue, 25 Jun 2024 00:57:38 -0700 (PDT)
Message-ID: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
Date: Tue, 25 Jun 2024 09:57:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Content-Language: en-US
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Juergen Gross <jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH for-4.19?] Config.mk: update MiniOS commit
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Pull in the gcc14 build fix there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Probably nice to reference a gcc14-clean MiniOS tree from what is going
to be 4.19.

--- a/Config.mk
+++ b/Config.mk
@@ -224,7 +224,7 @@ QEMU_UPSTREAM_URL ?= https://xenbits.xen
 QEMU_UPSTREAM_REVISION ?= master
 
 MINIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/mini-os.git
-MINIOS_UPSTREAM_REVISION ?= b6a5b4d72b88e5c4faed01f5a44505de022860fc
+MINIOS_UPSTREAM_REVISION ?= 8b038c7411ae7e823eaf6d15d5efbe037a07197a
 
 SEABIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/seabios.git
 SEABIOS_UPSTREAM_REVISION ?= rel-1.16.3
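
Since Config.mk assigns these revisions with `?=`, the pinned MiniOS commit is only a default that a caller can still override from the environment or the make command line. A minimal sketch of that behaviour — `demo.mk` is a throwaway scratch file, not part of the Xen tree:

```shell
# Reproduce the "?=" semantics used by Config.mk in a tiny makefile.
printf 'MINIOS_UPSTREAM_REVISION ?= 8b038c7411ae7e823eaf6d15d5efbe037a07197a\nall:\n\t@echo $(MINIOS_UPSTREAM_REVISION)\n' > demo.mk

make -s -f demo.mk                                   # prints the pinned default
make -s -f demo.mk MINIOS_UPSTREAM_REVISION=master   # override wins: prints "master"
```

This is why updating the pin does not break downstream builds that deliberately track a different MiniOS revision.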


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:00:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747388.1154777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM16h-0001Lv-W1; Tue, 25 Jun 2024 08:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747388.1154777; Tue, 25 Jun 2024 08:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM16h-0001Lo-TD; Tue, 25 Jun 2024 08:00:51 +0000
Received: by outflank-mailman (input) for mailman id 747388;
 Tue, 25 Jun 2024 08:00:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOoF=N3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sM16g-0001Li-7j
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:00:50 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 09049467-32c9-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 10:00:49 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57d106e69a2so5300264a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:00:49 -0700 (PDT)
Received: from ?IPV6:2003:e5:8729:4000:29eb:6d9d:3214:39d2?
 (p200300e58729400029eb6d9d321439d2.dip0.t-ipconnect.de.
 [2003:e5:8729:4000:29eb:6d9d:3214:39d2])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d3042fd72sm5700468a12.48.2024.06.25.01.00.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 01:00:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09049467-32c9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719302449; x=1719907249; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=xMFMveTsrDFu7oHXTVTj4UzyhsFLtc2luN03Iw0FR/g=;
        b=f6FiwLouFpng4m9B9fQrvEEzqEWATcUB/zRRZER9sZCPOkhZ6WcXXaBFjb2Mgwa2vH
         /uJboV9g/QA0oqXqgz03K34myc2Qmw7b6s858EjJG1ZVKap+tnOh8Jxflnpw6pgv7Nnz
         VAPa4ACbJGm85nzn51Fy5m65JeDxrbbN+SnKlLmY9D1cJogGBptY6KVVnzotWAJLNXJc
         jNErax/YeN33hrdSakCY9KIuYlnNTKIOxqqydpuC6DV+LCv+5iilNpQPtoHSSSDvQo+B
         A425QYv3uUzwGjokDgoW0ZA7sEiP8EYGO4MZujXOnvqo1fmVk14tY+Cf8O10avHdOn5G
         AvAA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719302449; x=1719907249;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=xMFMveTsrDFu7oHXTVTj4UzyhsFLtc2luN03Iw0FR/g=;
        b=hDpsfDgkqUT4q0ZmeTr11XzvkCE0ouniuRIfMmQjgbYYxLMXHW0HrxrHvgcw2DLCrx
         yw6/GbkYZyD0FFvnndP5DIrpTXPsWb5N++htqB6qdwY1WoH9TZGfLXYCKK7TH+2oMyDy
         Em0Inbj9JWSeDgLaE6zq0hhlAeHEL4ra/eqsw9mZqbY2HvJBNkMz8j68nQAvgsi9PgeT
         641kXpdLfSYoAdbFQAy9JbL7xPVoEgDOvGeHsY0kFK3D5uQvnoqtkbQifdQNJ1eJTSCo
         eMw/HLdoH19yHhWWWiS1GtKfQiBUYEhTn6xzCZ85yejKssWJtvPSpVcw+PmetpWTfmrF
         P/Rg==
X-Forwarded-Encrypted: i=1; AJvYcCU4dxPt5JXoUF/IPAEjoCYmFaPZJvanwmOCcDkryAIXZptjd/poZH77WNw1m4S2JZNlvOJHMQuyVJN4JXTht4DfO+wQJQvTcNYY6FOyL44=
X-Gm-Message-State: AOJu0Yyu7xFv1kPVB4tS3FPjRv4lDt9reApvoYwL+seHUeYOJgZLa4sw
	we3ORgbIIAIl8q6vYeRUNaXPjkeJSAL09CtGBfyw0AQcV2Xg15uU1ysnfXhuXik=
X-Google-Smtp-Source: AGHT+IEYpOiricPj+BbcZhZeT3mIjC3IS3FsodW93EVTO6yRoNEWydBJkAiXORvze561J4y58bMsXg==
X-Received: by 2002:a05:6402:3593:b0:57d:5442:a709 with SMTP id 4fb4d7f45d1cf-57d5442a847mr5857210a12.0.1719302448864;
        Tue, 25 Jun 2024 01:00:48 -0700 (PDT)
Message-ID: <9ab96725-858f-4b07-bb82-6c52c7ef4d44@suse.com>
Date: Tue, 25 Jun 2024 10:00:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19?] Config.mk: update MiniOS commit
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 25.06.24 09:57, Jan Beulich wrote:
> Pull in the gcc14 build fix there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:03:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:03:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747395.1154786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM19M-0001yU-FA; Tue, 25 Jun 2024 08:03:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747395.1154786; Tue, 25 Jun 2024 08:03:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM19M-0001yN-Cb; Tue, 25 Jun 2024 08:03:36 +0000
Received: by outflank-mailman (input) for mailman id 747395;
 Tue, 25 Jun 2024 08:03:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM19K-0001yF-Ug
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:03:34 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6aa5ece3-32c9-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 10:03:33 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2ec17eb4493so69783931fa.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:03:33 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7253195bfbsm198248466b.128.2024.06.25.01.03.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 01:03:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6aa5ece3-32c9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719302612; x=1719907412; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=UE26bUNQZ4s49DkvPqDFpIp7MAin/t3uFfjZWihoang=;
        b=C1ps1PJm1V6WkWMy9eYD1DyPlazwuHlZX+s7q1i6bUObfSw4ZyA+6xSZ7WpNulKsAx
         ZOdgFkhbTY2GeMMyClzuFQlNoZQzIZu+T4/SEiaxRstckLVsaOY6i20myqCLoLHrdMcr
         IPGtv3Ar5BecxV7fw5f1oUugvaj+OhHEJupjCNJTDH+jZRJvXXgAru21rXyPDjDj8VXP
         ePLtS62i574AIb+2NUiTv9tTbLPOJG3YDGdkx3tVYtcy5XD9/cyCE81vO6UT+Hp7L/jg
         sZ5C3963pHiCk1sABqDf0hHP6kreCiioEdMnS/QX2zKQo6hYxsb9w6BlijH/Vv8pqti2
         qEMw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719302612; x=1719907412;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=UE26bUNQZ4s49DkvPqDFpIp7MAin/t3uFfjZWihoang=;
        b=kjWsCIZIZf6PAagpjrGADqrrXZhHxn8Hrc2s5lsBf0dpCqryqqUeYc5VLYNlF30Cb2
         cK9HQlXovkN6TGd0lFKFzgmXmdkrLMJC1YLE8sKrXLhnr7TidRa8wcYP57IEFLpwCAZj
         bhEts7ceIPbz1Y/bjgsfj7wVmixOoBi8/XlU14IP3wdj7qhTyHoyN2DqPmNTWiqDKuak
         4BrrrKtiIVrdmoDg+AGq/BhYsCezIlAvZaYfWM6/HsFiDTpM+cj91chPibyar/eLjC8K
         eaLKGunjndqhjpKyRRXRBMRJqW0/xaARZ9fbw9HE3LIAUvnIV064g/esGOh/yCd4nPN4
         8SUw==
X-Forwarded-Encrypted: i=1; AJvYcCXQ7Goka8rj5M8fyR4xrnKmHePqg436vQAlcfz15Pc6d7OCVkimVhbJVudj5sXOqIzy88Ecc1XnxkJ3nP0PuPJZX1FURXyQFUlVyyOAlWA=
X-Gm-Message-State: AOJu0Yzd5Jz1W3bXD0Zbn9avDipnF1oaq5QCKav3fNQRV8jW3VVlLV/t
	3GAYB+a+McXQOvcX6I8wvCiQAP3CiecyF8bMbJCcxVZ9E7fsicKc
X-Google-Smtp-Source: AGHT+IHnP/xMOSoM3NjvnelkZbTACYYGW+NscCv+dWQ4fpxHThlQeduxfRxmOQ3o+H2KYx6WQAOa6g==
X-Received: by 2002:a2e:3518:0:b0:2ec:57b4:1c6f with SMTP id 38308e7fff4ca-2ec5b31d1a2mr46079541fa.34.1719302612214;
        Tue, 25 Jun 2024 01:03:32 -0700 (PDT)
Message-ID: <2c15171ad14122ad78df9aef408c6a41fc32e8c9.camel@gmail.com>
Subject: Re: [PATCH for-4.19] xen: re-add type checking to
 {,__}copy_from_guest_offset()
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,  "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
  Stefano Stabellini <sstabellini@kernel.org>
Date: Tue, 25 Jun 2024 10:03:31 +0200
In-Reply-To: <83743273-54ba-4f8b-9548-30dbd763887e@citrix.com>
References: <6fc55df2-5d92-4f3f-8eb3-69bd89bfea4e@suse.com>
	 <83743273-54ba-4f8b-9548-30dbd763887e@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 14:20 +0100, Andrew Cooper wrote:
> On 24/06/2024 1:26 pm, Jan Beulich wrote:
> > When re-working them to avoid UB on guest address calculations, I failed
> > to add explicit type checks in exchange for the implicit ones that until
> > then had happened in assignments that were there anyway.
> >
> > Fixes: 43d5c5d5f70b ("xen: avoid UB in guest handle arithmetic")
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:11:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:11:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747407.1154796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1GX-00047g-4v; Tue, 25 Jun 2024 08:11:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747407.1154796; Tue, 25 Jun 2024 08:11:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1GX-00047Z-2L; Tue, 25 Jun 2024 08:11:01 +0000
Received: by outflank-mailman (input) for mailman id 747407;
 Tue, 25 Jun 2024 08:10:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM1GU-00046D-VG
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:10:58 +0000
Received: from mail-ed1-x531.google.com (mail-ed1-x531.google.com
 [2a00:1450:4864:20::531])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7332bbd9-32ca-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 10:10:57 +0200 (CEST)
Received: by mail-ed1-x531.google.com with SMTP id
 4fb4d7f45d1cf-57d044aa5beso5914155a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:10:57 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d66b04378sm1281480a12.38.2024.06.25.01.10.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 01:10:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7332bbd9-32ca-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719303056; x=1719907856; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=QRMBm69W9F8XoMVsajW1fMZ8CQvRzo5sxW0tT6q426Y=;
        b=QPadIMaYDZXE4StCGHg0n0x5acEqj9mBISwsZaT4qZ013Qo/Szn5WeFwojbe4VnDpu
         iqYkR1qu2Edf4Auwg+cSDmZNKGRCBrm10UQ8nmt6FBBxm5qsQLBTMPyU+D2I0ucWdTAC
         /+DVwIWPb8YYur8YxF8qA2poX4WHafCw3M21GrJO7SuIWCFch15illbJvDLMwCTqyEpK
         S3SKzGX5EVk2+utPKEpsLWRnpWZqsJPgfwnG1ktrpRRgXcFK4bNrClB/1jz5rub9uHld
         ED8vdSsyN7OUjkuthgEeuRgrMS5OxV+allevOWVYcAOSsBuoksX8fxGQXXXTSpZusWWz
         X6zg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719303056; x=1719907856;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=QRMBm69W9F8XoMVsajW1fMZ8CQvRzo5sxW0tT6q426Y=;
        b=IqaDNikn1WttyrPRfhGmodp3nk4rlbFrVkzFM4VNU6Mvb4kFgOwwBqQ2vY4yx616gA
         Wtu83GKDLUlFrhy77YgMWyIiN3Co7tuUAymsRmyJBf4oIhMtCiflB0hBfpBd5kJHkciK
         MZTlpK/C97YbsvY4k+vesRTqhy8FlJXBcKZoNMeK/RaCeA5f4n5A2ZrIu4Nh+Mw49jtH
         Oab90qDApxQgryS8bu08Of+wKE6I54503+0wf8Y9bFUgKHE9BkmWt9f7l9+3E7HwzMAq
         YtxQrgaumAXfQDZIzCMe8n1zXe2mqsYbXLN20k5iMR4Tb9GtmzG0HqBCvBFtJu7mIOW+
         B5ng==
X-Forwarded-Encrypted: i=1; AJvYcCXT3/nWNuGEvN+oE2LprcVbz26yVEiBX/KJphMoWFKDGr3rfxKYbeufqRPQ2G6c/DO0r8OScX/ON3i132EaOIJRqpKIr/aC85zoMcwKMJM=
X-Gm-Message-State: AOJu0YzcgilYCR2w/ClBl4jpXuB4ak9OpWHa1wZkfIQyG+kFMmDhAc8H
	mcmnoB0riDZU8WUxwI+hvXTAF3QIKOfJ7ab/c1OIh68hj23a3SOV
X-Google-Smtp-Source: AGHT+IHK6wzrGBlC7a+6h22K1UOH9H2Urfa5LtPwGxFu/S8QbrnAFoE2McI1CPlvbv/703KBA2/rMw==
X-Received: by 2002:a50:8e54:0:b0:57c:745d:110b with SMTP id 4fb4d7f45d1cf-57d4bd56bd6mr4623580a12.3.1719303056141;
        Tue, 25 Jun 2024 01:10:56 -0700 (PDT)
Message-ID: <52373e0cea119ff04ebb997f3d0aea6bd3c9dc41.camel@gmail.com>
Subject: Re: [PATCH for-4.19?] Config.mk: update MiniOS commit
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Samuel Thibault
 <samuel.thibault@ens-lyon.org>,  Juergen Gross <jgross@suse.com>
Date: Tue, 25 Jun 2024 10:10:55 +0200
In-Reply-To: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
References: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 09:57 +0200, Jan Beulich wrote:
> Pull in the gcc14 build fix there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Probably nice to reference a gcc14-clean MiniOS tree from what is going
> to be 4.19.
I would like to ask what you mean by gcc14-clean here?

~ Oleksii

>
> --- a/Config.mk
> +++ b/Config.mk
> @@ -224,7 +224,7 @@ QEMU_UPSTREAM_URL ?= https://xenbits.xen
>  QEMU_UPSTREAM_REVISION ?= master
>
>  MINIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/mini-os.git
> -MINIOS_UPSTREAM_REVISION ?= b6a5b4d72b88e5c4faed01f5a44505de022860fc
> +MINIOS_UPSTREAM_REVISION ?= 8b038c7411ae7e823eaf6d15d5efbe037a07197a
>
>  SEABIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/seabios.git
>  SEABIOS_UPSTREAM_REVISION ?= rel-1.16.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:18:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:18:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747415.1154806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1NM-00050f-Ro; Tue, 25 Jun 2024 08:18:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747415.1154806; Tue, 25 Jun 2024 08:18:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1NM-00050Y-PJ; Tue, 25 Jun 2024 08:18:04 +0000
Received: by outflank-mailman (input) for mailman id 747415;
 Tue, 25 Jun 2024 08:18:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM1NM-00050S-5x
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:18:04 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 70be83e1-32cb-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 10:18:02 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a725282b926so251957866b.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:18:02 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7263b3d60esm91016866b.113.2024.06.25.01.18.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 01:18:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70be83e1-32cb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719303482; x=1719908282; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=VUYKywQBrLkclOIZnMGllCl7h16rvDZuu2uxxo77hMU=;
        b=YPe0G4wRJFfcfbqYEiFDmbtaVBO6wItsaL1tjhUdxH+j+v7r0csb5Xr3/9/9vGLjil
         jXqZ1NbzklrtxjVioKMQ3MxM+Mq2Tl+XSb/sbnEOLggikljTrs4HnWCInkKJNc4vsBWE
         tOLjouy6ka+H4z4UCfyykZGO7W+C9Ts1i2nWN+2YZa6biN/PWnyQ5xTxNiIHRRE5w78K
         jrNucl/9gOrOcEMQMSJiNdK5Fe5t5LlmzAGbCfHy/AH9Br+e9/ymf3CPKKjUWi67Ulvt
         WQY1uC1P5R7D9HP/uUM8RtSaQUg6atXK1o3pmjMkCepkgkL0HPpuPzSR9yLQFbfDMLa8
         sDGQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719303482; x=1719908282;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=VUYKywQBrLkclOIZnMGllCl7h16rvDZuu2uxxo77hMU=;
        b=D2SAo3JFDa6huRF7JIPhT2W2cSRqMHwQKV454ueWgIQLN3XFuqrbL/ehOcvV0X/0sr
         SAq3bNodmwBnUuMKJfm30ZNmKqM0B8oeN6SVUiQLGZ/SSRBykoZg3YvGkD6Kh4ZskwAq
         U/9e5APZEL8jmIkWldKhS50BCTg7iUONTSb8twHQp80FR+2t0jlW1Ha5E2kZlWLIh7hm
         NMA6Ps3p61Z5AaWwHjZ+plDbardqWB3z4wlNipTS6Jx0oJDXub+4ZqfZWv9HNq8w4TW+
         MJD5exT3FrLGq36nLt2MVM8DeHQzybdeaI3CWWcwc67IdvD55l3wuSe7PMXT/0cXEpZc
         DgxA==
X-Forwarded-Encrypted: i=1; AJvYcCWcwpPusb45bWmgs7yE75QIuguzksC8F56BI5u5mwd3uFAt2d/Ah+OsFld4sn+7GijFgrzx+j2Yy8RaTEBjgI1anZzz3JEdny13rePHMkQ=
X-Gm-Message-State: AOJu0YycT3ox7xu0lnYCrCEmrQ/5/bzJc/xrJcHrTnBVZ9pA3gpn3j9I
	gPd2vX/rS0aBOSI/QskR+5sF1Sd3eC1ez3QpBKG0sq13lw5EoQpc
X-Google-Smtp-Source: AGHT+IHDLoLNR8S9nh1g//JJBhn8X4vjLxnjaohGsmQ8yC9pU48moDGcEn/MdoKQJjYQTKRLMJvGhA==
X-Received: by 2002:a17:906:a2da:b0:a6f:61c7:dea7 with SMTP id a640c23a62f3a-a7245ccdab7mr403722166b.18.1719303481492;
        Tue, 25 Jun 2024 01:18:01 -0700 (PDT)
Message-ID: <bbf169b395fe8a911f7daf602e088ba6058be868.camel@gmail.com>
Subject: Re: [PATCH for-4.19] gnttab: fix compat query-size handling
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
 <roger.pau@citrix.com>
Date: Tue, 25 Jun 2024 10:18:00 +0200
In-Reply-To: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
References: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 09:30 +0200, Jan Beulich wrote:
> The odd DEFINE_XEN_GUEST_HANDLE(), inconsistent with all other similar
> constructs, should have caught my attention. Turns out it was needed for
> the build to succeed merely because the corresponding #ifndef had a
> typo. That typo in turn broke compat mode guests, by having query-size
> requests of theirs wire into the domain_crash() at the bottom of the
> switch().
>
> Fixes: 8c3bb4d8ce3f ("xen/gnttab: Perform compat/native gnttab_query_size check")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <Oleksii.kurochko@gmail.com>

~ Oleksii

> ---
> Looks like set-version is similarly missing in the set of structures
> checked, but I'm pretty sure that we will now want to defer taking care
> of that until after 4.20 was branched.
>
> --- a/xen/common/compat/grant_table.c
> +++ b/xen/common/compat/grant_table.c
> @@ -33,7 +33,6 @@ CHECK_gnttab_unmap_and_replace;
>  #define xen_gnttab_query_size gnttab_query_size
>  CHECK_gnttab_query_size;
>  #undef xen_gnttab_query_size
> -DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_compat_t);
>
>  DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_compat_t);
>  DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_compat_t);
> @@ -111,7 +110,7 @@ int compat_grant_table_op(
>      CASE(copy);
>  #endif
>
> -#ifndef CHECK_query_size
> +#ifndef CHECK_gnttab_query_size
>      CASE(query_size);
>  #endif
>
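
The failure mode is easy to reproduce in miniature: `#ifndef` tests a spelling, not an intent, so a misspelled guard name is simply "never defined" and the guarded code is unconditionally compiled in. In the sketch below the macro name mirrors the patch, but the surrounding functions are invented for illustration:

```c
/* Defined when the compat structure layout matches the native one,
 * standing in for what CHECK_gnttab_query_size means in the build. */
#define CHECK_gnttab_query_size 1

static int guard_correct(void)
{
#ifndef CHECK_gnttab_query_size
    return 1;   /* compiled out: the macro IS defined */
#else
    return 0;
#endif
}

static int guard_typoed(void)
{
#ifndef CHECK_query_size   /* misspelled name is defined nowhere... */
    return 1;              /* ...so this branch is ALWAYS compiled in */
#else
    return 0;
#endif
}
```

Because the preprocessor cannot flag an unknown macro name in `#ifndef`, such typos survive compilation silently — which is why the fix also drops the DEFINE_XEN_GUEST_HANDLE() that the typo had made necessary.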



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:21:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:21:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747421.1154817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1R7-0006pc-CE; Tue, 25 Jun 2024 08:21:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747421.1154817; Tue, 25 Jun 2024 08:21:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1R7-0006pV-7X; Tue, 25 Jun 2024 08:21:57 +0000
Received: by outflank-mailman (input) for mailman id 747421;
 Tue, 25 Jun 2024 08:21:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM1R5-0006pN-Ve
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:21:55 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fac82f09-32cb-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 10:21:54 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a724e067017so254830366b.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:21:54 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725a3f205esm150093466b.183.2024.06.25.01.21.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 01:21:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fac82f09-32cb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719303713; x=1719908513; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=R0ffmTi0s2FGhhi1iP+Zu5mUuAAdzqMTZa3D4ssqzRE=;
        b=TZntTmvWGXYxqQzFoM0NuICldAAZDciRaS4IoSowyKTP16JW9DBXV5doMo8FwPGZU5
         wdzE/KMMLnKXiOB0U4niDO0jR+12LJ+V/lpaPknkbKZfMgby/m0hVqJFtSM61bRiHeaE
         wF9C7ACF/z0tSG6ZFcuxvkU+B1iLF6XMbejK+2s+suHqEgIgrwcDPfYN3vE8XICdcI1u
         t9zKhPIpzhNfyS2EthMWo0v+X/i1bX38oeXPmg3fa0PKkMf4p8IZ+JZpXAcDMIJsoXGU
         cAi72SGx/FH89N3uFW/S7odDiTfsq3tb7z+1Y1mLWDb5fbIK/LLi9E9yoVi3D21m/D1b
         SmPA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719303713; x=1719908513;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=R0ffmTi0s2FGhhi1iP+Zu5mUuAAdzqMTZa3D4ssqzRE=;
        b=UIOnFnFz1cCGKd9QdTmvA829asUiFxibZaQzDvwTizMPYmwgA1gYiFF0OZMaSEuSTX
         4J8KoO3q0UZkJznbuNQArItGU9r+UQVqcNFcH1vtZxKJdO+8tmZX+no/kw3zwFfOc6Ww
         PKp5mQRIeDjCY0PqssLNPMWiLxIJ25my4aayPVmQGH7oy3VYy0/gIjM6b7IMJNH5y8NJ
         S2uGKkFccxLLfgH8/h3PQa6da00l2JWfLHVHA0lWjp2awsiCRMIEpiB23Gm1p89Y5tx+
         Xyum44msaU8LK+xi8CTOSNYz0ZaQ9OTt/4xanPuuSecQT/BHxyIbdw53AYsEfH2vle4Q
         S/JA==
X-Gm-Message-State: AOJu0YxNrx2iCkuj6qlIt/Jutr/mg/4wO8AiKQ7T9HyIqhkJ0Y96zAEC
	zM8A2NHV9ROMrAtdn7rHtnlJYwShujq/U6hvvi2rgIRMxGFKeDOenaQjgPD7
X-Google-Smtp-Source: AGHT+IFCEkPhnauN69b59dX4hx+Tp2QCej6gyQt5hpZWbCQLI1B6z7prFNwMV1TCKXRiqLhAFc/Mfg==
X-Received: by 2002:a17:906:d99:b0:a6f:5318:b8f0 with SMTP id a640c23a62f3a-a7245b80ea1mr382802866b.37.1719303713249;
        Tue, 25 Jun 2024 01:21:53 -0700 (PDT)
Message-ID: <f92d296c2a43cfb23c2547791df6053183231796.camel@gmail.com>
Subject: Re: [PATCH v3] automation/eclair_analysis: deviate and|or|xor|not
 for MISRA C Rule 21.2
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Alessandro Zucchelli
	 <alessandro.zucchelli@bugseng.com>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
	 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>
Date: Tue, 25 Jun 2024 10:21:52 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406241721270.3870429@ubuntu-linux-20-04-desktop>
References: 
	<f21ea3734857e0cf26afff00befb179b10d02158.1719213594.git.alessandro.zucchelli@bugseng.com>
	 <alpine.DEB.2.22.394.2406241721270.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 17:22 -0700, Stefano Stabellini wrote:
> On Mon, 24 Jun 2024, Alessandro Zucchelli wrote:
> > Rule 21.2 reports identifiers reserved for the C and POSIX standard
> > libraries: or, and, not and xor are reserved identifiers because they
> > constitute alternate spellings for the corresponding operators (they
> > are defined as macros by iso646.h); however, Xen doesn't use standard
> > library headers, so there is no risk of overlap.
> >
> > This addresses violations arising from x86_emulate/x86_emulate.c,
> > where label statements named as or, and and xor appear.
> >
> > No functional change.
> >
> > Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>
> Hi Oleksii,
Hi Stefano,

>
> I am asking for a release-ack as this patch adds a deviation; the only
> impact is fewer violations from the ECLAIR analysis.
Looks good to me:
 Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

>
>
> > ---
> > Changes from v2:
> > Fixed patch contents, as the changes from v1 and v2 were not squashed
> > together.
> > ---
> > Changes from v1:
> > Added deviation for 'not' identifier.
> > Added explanation of where these identifiers are defined, specifically
> > in the 'iso646.h' file of the Standard Library.
> > ---
> >  automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++++
> >  1 file changed, 6 insertions(+)
> >
> > diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > index 9fa9a7f01c..14c7afb39e 100644
> > --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > @@ -498,6 +498,12 @@ still remain available."
> >  -config=MC3R1.R21.2,declarations+={safe, "!^__builtin_.*$"}
> >  -doc_end
> >
> > +-doc_begin="or, and and xor are reserved identifiers because they constitute alternate
> > +spellings for the corresponding operators (they are defined as macros by iso646.h).
> > +However, Xen doesn't use standard library headers, so there is no risk of overlap."
> > +-config=MC3R1.R21.2,reports+={safe, "any_area(stmt(ref(kind(label)&&^(or|and|xor|not)$)))"}
> > +-doc_end
> > +
> >  -doc_begin="Xen does not use the functions provided by the Standard Library, but
> >  implements a set of functions that share the same names as their
> >  Standard Library equivalent.
> >  The implementation of these functions is available in source form, so the undefined, unspecified
> > -- 
> > 2.34.1
> >



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:27:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:27:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747430.1154829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1Vw-0007Ru-TF; Tue, 25 Jun 2024 08:26:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747430.1154829; Tue, 25 Jun 2024 08:26:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1Vw-0007Rn-Ql; Tue, 25 Jun 2024 08:26:56 +0000
Received: by outflank-mailman (input) for mailman id 747430;
 Tue, 25 Jun 2024 08:26:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM1Vv-0007Rh-BB
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:26:55 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id adb34fe1-32cc-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 10:26:54 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ebe6495aedso55044461fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:26:54 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7066aeeaa5asm5426274b3a.29.2024.06.25.01.26.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 01:26:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adb34fe1-32cc-11ef-90a3-e314d9c70b13
Message-ID: <c0519803-8753-4933-8193-fa036f626b36@suse.com>
Date: Tue, 25 Jun 2024 10:26:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19?] Config.mk: update MiniOS commit
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
 <52373e0cea119ff04ebb997f3d0aea6bd3c9dc41.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <52373e0cea119ff04ebb997f3d0aea6bd3c9dc41.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 10:10, Oleksii wrote:
> On Tue, 2024-06-25 at 09:57 +0200, Jan Beulich wrote:
>> Pull in the gcc14 build fix there.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Probably nice to reference a gcc14-clean MiniOS tree from what is
>> going to be 4.19.
> I would like to ask what you mean by gcc14-clean here?

Being able to build successfully with (recently released) gcc14, out of
the box.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:27:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:27:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747432.1154840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1Vz-0007gb-3T; Tue, 25 Jun 2024 08:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747432.1154840; Tue, 25 Jun 2024 08:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1Vz-0007gS-0Z; Tue, 25 Jun 2024 08:26:59 +0000
Received: by outflank-mailman (input) for mailman id 747432;
 Tue, 25 Jun 2024 08:26:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM1Vx-0007Rh-4J
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:26:57 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id af24fa6e-32cc-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 10:26:56 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a7194ce90afso319500366b.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:26:56 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72605ff279sm115202166b.5.2024.06.25.01.26.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 01:26:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af24fa6e-32cc-11ef-90a3-e314d9c70b13
Message-ID: <e4eda2b666a13b4f6049f7b27c5fd571519e28d6.camel@gmail.com>
Subject: Re: [XEN PATCH v2] automation/eclair: configure Rule 13.6 and
 custom service B.UNEVALEFF
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Federico Serafini <federico.serafini@bugseng.com>, 
 xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
 <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>
Date: Tue, 25 Jun 2024 10:26:54 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406241725240.3870429@ubuntu-linux-20-04-desktop>
References: 
	<5c60e98d70ae94c155fd56ec13b764b7a8f6161c.1719219962.git.federico.serafini@bugseng.com>
	 <alpine.DEB.2.22.394.2406241637380.3870429@ubuntu-linux-20-04-desktop>
	 <alpine.DEB.2.22.394.2406241725240.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 17:26 -0700, Stefano Stabellini wrote:
> On Mon, 24 Jun 2024, Stefano Stabellini wrote:
> > On Mon, 24 Jun 2024, Federico Serafini wrote:
> > > Rule 13.6 states that "The operand of the `sizeof' operator shall
> > > not contain any expression which has potential side effects".
> > >
> > > Define service B.UNEVALEFF as an extension of Rule 13.6 to
> > > check for unevaluated side effects also for typeof and alignof
> > > operators.
> > >
> > > Update ECLAIR configuration to deviate uses of BUILD_BUG_ON and
> > > alternative_v?call[0-9] for both Rule 13.6 and B.UNEVALEFF.
> > >
> > > Add service B.UNEVALEFF to the accepted.ecl guidelines to check
> > > "violations" in the weekly analysis.
> > >
> > > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> > > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> >
> > Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
>
> Hi Oleksii,
Hi Stefano,

>
> I am asking for a release-ack on this rule: it widens the checks done
> by ECLAIR, but only for non-blocking rules (a rule not causing a
> gitlab-ci failure). Hence, there should be no effect on gitlab-ci.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:27:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747443.1154853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1Wg-00007r-JR; Tue, 25 Jun 2024 08:27:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747443.1154853; Tue, 25 Jun 2024 08:27:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1Wg-00007k-FC; Tue, 25 Jun 2024 08:27:42 +0000
Received: by outflank-mailman (input) for mailman id 747443;
 Tue, 25 Jun 2024 08:27:42 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOoF=N3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sM1Wg-0007Rh-4G
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:27:42 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de [195.135.223.131])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c9972316-32cc-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 10:27:41 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (imap1.dmz-prg2.suse.org
 [IPv6:2a07:de40:b281:104:10:150:64:97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E98471F7D3;
 Tue, 25 Jun 2024 08:27:39 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id AC76E1384C;
 Tue, 25 Jun 2024 08:27:39 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id YHjdJ3t/emZKMwAAD6G6ig
 (envelope-from <jgross@suse.com>); Tue, 25 Jun 2024 08:27:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9972316-32cc-11ef-90a3-e314d9c70b13
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xen/sched: fix rare error case in cpu_schedule_down()
Date: Tue, 25 Jun 2024 10:27:36 +0200
Message-Id: <20240625082736.7238-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In case cpu_schedule_up() fails to allocate memory for struct
sched_resource, cpu_schedule_down() will be called with the
sched_resource pointer being NULL. This needs to be handled.

Reported-by: Jan Beulich <jbeulich@suse.com>
Fixes: 207589dbacd4 ("xen/sched: move per cpu scheduler private data into struct sched_resource")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/core.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d84b65f197..0dc86b8f6c 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2829,6 +2829,8 @@ static void cpu_schedule_down(unsigned int cpu)
     rcu_read_lock(&sched_res_rculock);
 
     sr = get_sched_res(cpu);
+    if ( !sr )
+        goto out;
 
     kill_timer(&sr->s_timer);
 
@@ -2839,6 +2841,7 @@ static void cpu_schedule_down(unsigned int cpu)
     sr->sched_unit_idle = NULL;
     call_rcu(&sr->rcu, sched_res_free);
 
+ out:
     rcu_read_unlock(&sched_res_rculock);
 }
 
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:31:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:31:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747451.1154863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1aL-0002SC-19; Tue, 25 Jun 2024 08:31:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747451.1154863; Tue, 25 Jun 2024 08:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1aK-0002S5-UG; Tue, 25 Jun 2024 08:31:28 +0000
Received: by outflank-mailman (input) for mailman id 747451;
 Tue, 25 Jun 2024 08:31:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM1aJ-0002Rz-SY
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:31:27 +0000
Received: from mail-ed1-x533.google.com (mail-ed1-x533.google.com
 [2a00:1450:4864:20::533])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 505c75f3-32cd-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 10:31:27 +0200 (CEST)
Received: by mail-ed1-x533.google.com with SMTP id
 4fb4d7f45d1cf-57cf8880f95so5967009a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:31:27 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a710595adedsm345564866b.214.2024.06.25.01.31.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 01:31:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 505c75f3-32cd-11ef-90a3-e314d9c70b13
Message-ID: <25eb3ecdd1aa33af7b304ad4dd13f8561ab89761.camel@gmail.com>
Subject: Re: [XEN PATCH] MAINTAINERS: Update my email address again
From: Oleksii <oleksii.kurochko@gmail.com>
To: Julien Grall <julien@xen.org>, Stefano Stabellini
 <sstabellini@kernel.org>,  Anthony PERARD <anthony.perard@vates.tech>
Cc: xen-devel@lists.xenproject.org, Anthony PERARD <anthony@xenproject.org>,
  Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Date: Tue, 25 Jun 2024 10:31:25 +0200
In-Reply-To: <5238d3a6-c47f-4951-b839-a92c5ee4e571@xen.org>
References: <20240624094030.41692-1-anthony.perard@vates.tech>
	 <alpine.DEB.2.22.394.2406240927390.3870429@ubuntu-linux-20-04-desktop>
	 <5238d3a6-c47f-4951-b839-a92c5ee4e571@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 22:40 +0100, Julien Grall wrote:
> Hi,
Hi Julien,

>
> On 24/06/2024 17:27, Stefano Stabellini wrote:
> > On Mon, 24 Jun 2024, Anthony PERARD wrote:
> > > Signed-off-by: Anthony PERARD <anthony.perard@vates.tech>
> >
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
>
> I guess this technically needs an ack from the release manager. So CC
> Oleksii.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:33:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:33:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747456.1154872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1cE-00030q-Bu; Tue, 25 Jun 2024 08:33:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747456.1154872; Tue, 25 Jun 2024 08:33:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1cE-00030j-92; Tue, 25 Jun 2024 08:33:26 +0000
Received: by outflank-mailman (input) for mailman id 747456;
 Tue, 25 Jun 2024 08:33:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM1cD-00030b-8k
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:33:25 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 95988d06-32cd-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 10:33:23 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ec408c6d94so60179321fa.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:33:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3c5cf6sm75596155ad.156.2024.06.25.01.33.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 01:33:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95988d06-32cd-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719304402; x=1719909202; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=qwsCe4VZPuQHqQL6NWMLwuN/4TGL3rAFIhoM7GsDsPE=;
        b=CU2HtsTktfgQtxWytngO074fNit25qXmJFb97OHJKhdlldHvUwfpTazHfRlzOTdiXd
         LVplMw7di52eJaaU21CbyrxeOYfRQxMROk1a1tZDFBMTcez2A6rEjkBdUpjcVf7Jhx5p
         I418WRWNe8Dvm6pEbNuXn/SoBCabBPip6dE9mqQ3r21jkqUF5onZZnePJlPvQ73RYPIv
         y6bEuMvI3Ex2Qp270t0j+qQG4AFtUNJeDSE2d5KuOQL+GbHZ4hmBRGFd6a0fdoLx2/28
         an5lGWdDg5qULLEQMFHD5dOuCJBMPRYUo+GiZKD/Xa1Wjvq9bd/Jxd+TQ3KHYfC7KYOq
         TH5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719304402; x=1719909202;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qwsCe4VZPuQHqQL6NWMLwuN/4TGL3rAFIhoM7GsDsPE=;
        b=FcY/Y24Isxo58LhmZh6UetjULknX9aVxrIUuQDZ8tHPo99c87yszX6kRcZrIqWNd57
         h9bM/wDh/+76tkqsYuAwQpY4iJoJhgO8Q1mTu9s4czvnRFyE/7iv4uEF/UARUbe1z0rn
         60AbfkeKy1jJuGWLZpQeIB9dD9gtEc23GlP6mqai8+8Wvg62rJCI8bKpSCgIj3B2k+aQ
         Bnhr00iNfn1U7v92QAVKNGtR5hM63gmDB3ThK8E1Tg+/dzb9k8N79XX9jpf0M7pwhFVh
         DOh2XxlNzFwQmAaa7plNHjgplXEfpZf/So2nOxbz0PtVTa1vWIyi7T9B7jX/RxC7kPyw
         DBIA==
X-Forwarded-Encrypted: i=1; AJvYcCVDw6uuA+5/ZmVy8NcaokwpsFqneKDWn/HcxYIGL2yd1ILDY4uEiu1qWdekes+LuvJuiJ4pUjtdQ8XZvODXPsnsrkrQSbsKKeqR7Oj7b4s=
X-Gm-Message-State: AOJu0Yyu4yO7d4IE96Qck/9PzdD4N1fZqtZD7TMdxv+icfT7jScrYqeJ
	+1cE0lmVuwTQ4vrLHh+I7j+GWWtxdHkKobAqp5BlGo6j0hSXcBICJ1pPmlbrhQ==
X-Google-Smtp-Source: AGHT+IGE9cL0P4ecJZz8YAKz+DSs7Y5aK8zR97xjYTCG/eB/LcSLA+xH6IIRfFfQ/DaKtijtOsy3Og==
X-Received: by 2002:a2e:9e59:0:b0:2ec:5518:9550 with SMTP id 38308e7fff4ca-2ec5930fdbcmr54413921fa.10.1719304402646;
        Tue, 25 Jun 2024 01:33:22 -0700 (PDT)
Message-ID: <1ed2364d-6960-4bb2-9f3c-ac672a97e74b@suse.com>
Date: Tue, 25 Jun 2024 10:33:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/sched: fix rare error case in cpu_schedule_down()
To: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20240625082736.7238-1-jgross@suse.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240625082736.7238-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 10:27, Juergen Gross wrote:
> In case cpu_schedule_up() is failing to allocate memory for struct
> sched_resource,

Or (just to mention it again) in case the function isn't called at all
because some earlier CPU_UP_PREPARE handler fails.

> cpu_schedule_down() will be called with the
> sched_resource pointer being NULL. This needs to be handled.
> 
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Fixes: 207589dbacd4 ("xen/sched: move per cpu scheduler private data into struct sched_resource")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

You didn't tag it for 4.19 and you also didn't Cc Oleksii, so I expect you
deem this minor enough to be delayed until 4.20 opens, despite the Fixes:
tag?

Jan
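
The fix under discussion amounts to an early-return NULL guard in the teardown path: if the bring-up path never allocated the per-CPU sched_resource (or was never run), tearing down must be a no-op. A minimal standalone sketch of that pattern, using simplified stand-in names rather than the actual Xen code:

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for Xen's struct sched_resource. */
struct sched_resource {
    void *sched_priv;   /* scheduler-private data, freed on teardown */
};

/* Per-CPU pointer; stays NULL if allocation in the "up" path failed
 * or the up path was never reached for this CPU. */
static struct sched_resource *sched_res[8];

static int cpu_schedule_up(unsigned int cpu)
{
    sched_res[cpu] = calloc(1, sizeof(*sched_res[cpu]));
    return sched_res[cpu] ? 0 : -1;   /* -ENOMEM in the real code */
}

static void cpu_schedule_down(unsigned int cpu)
{
    struct sched_resource *sr = sched_res[cpu];

    /* The guard the patch adds: the up path may have failed, so the
     * pointer can legitimately be NULL here. */
    if (!sr)
        return;

    free(sr->sched_priv);
    free(sr);
    sched_res[cpu] = NULL;
}
```

Without the guard, calling the teardown for a CPU whose bring-up failed dereferences NULL; with it, teardown is safely idempotent.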


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:46:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:46:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747468.1154885 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1oQ-0005a5-Ci; Tue, 25 Jun 2024 08:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747468.1154885; Tue, 25 Jun 2024 08:46:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1oQ-0005Zy-9p; Tue, 25 Jun 2024 08:46:02 +0000
Received: by outflank-mailman (input) for mailman id 747468;
 Tue, 25 Jun 2024 08:46:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2LY6=N3=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sM1oP-0005Zs-9V
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:46:01 +0000
Received: from mail-ua1-x92a.google.com (mail-ua1-x92a.google.com
 [2607:f8b0:4864:20::92a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 57a9e2c1-32cf-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 10:45:59 +0200 (CEST)
Received: by mail-ua1-x92a.google.com with SMTP id
 a1e0cc1a2514c-80f6521eeddso1145740241.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:45:59 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b52d82377dsm31721876d6.19.2024.06.25.01.45.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 01:45:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57a9e2c1-32cf-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719305157; x=1719909957; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=g56+XRF9QGFax5i2ZEBOjZix2dumVKph2ax/brL9KaE=;
        b=T1HAYujrvaz6CNpa670HUoeJDBpPHxK78bZ2RzKl7u7zN0891Q0PQT5lR0FdrQR81I
         FjjkNawBAZ1unjudAcol/C7FgMntWhurVORLt+6clATQzXh3B2A1yt7y33RR9oGnWwPS
         EXbIXEaFpKfHV/6MGq5F9z2MWq/TkDs5LpZu4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719305157; x=1719909957;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=g56+XRF9QGFax5i2ZEBOjZix2dumVKph2ax/brL9KaE=;
        b=FdlrW4Gr8eQ8+Qw7VfdR4pgHzMtZh20GGYeijc7V+rcpnJBFGE5smihys8F9ZpNMCC
         nTnlVv8TKwMnQLxW1Rn7Th4GdcKa21plmxolligUY1sZx7dDe64dqUQqxhDEmg1TWQYQ
         CYzZZ+XWw8Y7nD/4oDAAeaRJlOb4t777HDtRuxMa5cBW8caXSplOzb5SAOMev4vp6lkX
         p+r2Am/NRtzKWNTpyHA8TpyN8OGJZqTgz2SMsWxIbK/UrzP8Tt/4PsANZW+NiHNZOltQ
         ytTM4PMLvLQRsdlsavRCKvMO2N89kjrS+WumfefXMmsqs+ogRKPaZDuUeqjMTHuc68Wm
         V2xg==
X-Forwarded-Encrypted: i=1; AJvYcCU5Gr44GLgRviQhc41l1t+tptKXtYIzaKEho9kYndxMzGzqbx4RO2BoR7o5426gAFdyELGd0mh531lSiqe/c+MAlM+plTRi9VREAoO4eF8=
X-Gm-Message-State: AOJu0Yx6KnI11sB0r5DJu2VjAbFFjSOxlcv73q/Oxt5X27HuBdcKPm7b
	6tET2QW/wFIzeZrt3WLimfqrs//fFSiU6mXUENS6epUU2NDIki4GRX8qLWD7+/c=
X-Google-Smtp-Source: AGHT+IH0MfSZCdGKMabk40+auVyUctI7c/cU4UkcPOH97YW3mfjjmX4BCPuh/WV68LtaDi5JbfPV3g==
X-Received: by 2002:a05:6122:9a8:b0:4ef:27db:d206 with SMTP id 71dfb90a1353d-4ef69ec006cmr5271895e0c.0.1719305157550;
        Tue, 25 Jun 2024 01:45:57 -0700 (PDT)
Date: Tue, 25 Jun 2024 10:45:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Christoph Hellwig <hch@lst.de>
Cc: jgross@suse.com, marmarek@invisiblethingslab.com,
	xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
	Rusty Bird <rustybird@net-c.com>
Subject: Re: [PATCH] xen-blkfront: fix sector_size propagation to the block
 layer
Message-ID: <ZnqDwwXgwDlggHgH@macbook>
References: <20240625055238.7934-1-hch@lst.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240625055238.7934-1-hch@lst.de>

On Tue, Jun 25, 2024 at 07:52:38AM +0200, Christoph Hellwig wrote:
> Ensure that info->sector_size and info->physical_sector_size are set
> before the call to blkif_set_queue_limits by doing away with the
> local variables and arguments that propagate them.
> 
> Thanks to Marek Marczykowski-Górecki and Jürgen Groß for root causing
> the issue.
> 
> Fixes: ba3f67c11638 ("xen-blkfront: atomically update queue limits")
> Reported-by: Rusty Bird <rustybird@net-c.com>
> Signed-off-by: Christoph Hellwig <hch@lst.de>

Thanks for debugging this.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Roger.
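
The bug class being fixed here is an ordering hazard: queue limits were derived from info fields that had not yet been stored back into the shared structure. A simplified standalone illustration of store-first-then-derive, with hypothetical names rather than the actual xen-blkfront code:

```c
#include <assert.h>

/* Simplified stand-in for blkfront's per-device info. */
struct disk_info {
    unsigned int sector_size;
    unsigned int physical_sector_size;
};

struct queue_limits {
    unsigned int logical_block_size;
    unsigned int physical_block_size;
};

/* Derives limits from the info struct -- correct only if the fields
 * were populated BEFORE this is called, which is what the patch
 * enforces by removing the local variables that masked the ordering. */
static void set_queue_limits(const struct disk_info *info,
                             struct queue_limits *lim)
{
    lim->logical_block_size  = info->sector_size;
    lim->physical_block_size = info->physical_sector_size;
}
```

If the limits are derived while the struct still holds stale (zero) values, the wrong block sizes propagate to the block layer; populating the fields first makes the derivation see the real values.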


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:52:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:52:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747474.1154897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1uH-0007yE-0w; Tue, 25 Jun 2024 08:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747474.1154897; Tue, 25 Jun 2024 08:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1uG-0007y7-T5; Tue, 25 Jun 2024 08:52:04 +0000
Received: by outflank-mailman (input) for mailman id 747474;
 Tue, 25 Jun 2024 08:52:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOoF=N3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sM1uF-0007xw-L8
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:52:03 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 3052d26b-32d0-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 10:52:01 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a6cb130027aso342445766b.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 01:52:01 -0700 (PDT)
Received: from ?IPV6:2003:e5:8729:4000:29eb:6d9d:3214:39d2?
 (p200300e58729400029eb6d9d321439d2.dip0.t-ipconnect.de.
 [2003:e5:8729:4000:29eb:6d9d:3214:39d2])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d303d7ddcsm5629039a12.17.2024.06.25.01.52.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 01:52:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3052d26b-32d0-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719305521; x=1719910321; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Q2MFqd8Yxs4B1CsyjqgzZvbJzcSnJ7Xj7Owz0JLjSf8=;
        b=QT1oz4+otHEJGExt2WFqO/n1sOszXWTwjU0PBnefvasoqbLsnH+rzxGr/+w5yQpu1o
         US5SW0N8dSh7qDY+RlNSkLKgH8ILFK1mckp9EaHjz62ZYEn+BOJ4msL2tn/Ls09fm5MX
         IU25mxyuREi1w4NhtKOREWJ92hZ1CYBCGUtlDyWqI0VEwoWQhmA4ulGXn0COiH756UL1
         +2U7MSO/G28mCQDA/CpmYCT2IuOzDGs+W6WTooYXA7Q2YB0GmG9x7Wa7+iyevZNvouVu
         //6nbAIMv8mK+IN6+6qcysc2nVFzE2Ho4icqWJRL8XZQDmnnieaywa1Vxw7RGLxbYl1P
         y/3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719305521; x=1719910321;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Q2MFqd8Yxs4B1CsyjqgzZvbJzcSnJ7Xj7Owz0JLjSf8=;
        b=tQj6SaSYfvL6Ot09t5wEKVsVqJDwORfLCUoImu2SEKCmYSwVfV1xvw75OVaqWBXTOF
         hyPQnu+L/m1Ap35ziIZ48AzohjSEdGDCfzV09TcsZhxt88zniI+xiY8/unV+rWzJyJ2y
         zeprOCXWhwCd4czhOAqh+cnZlYpSK4grAl8iE/sZ9sRbjNRRMjAzgmvR80+9O1STNVaY
         0Mo+yHIKsNhwr/lrsaKMNjPD6EigjgxwxXWhxXV2nLSXSO69HNfJ5+XYT9p6qTmI1VQc
         QZdEy1PtZpW5w84y0mo7L/QM0bm/1sw4mGbNZpGMvk4WqklciQURttyptXva6FfrrW5k
         zpHg==
X-Forwarded-Encrypted: i=1; AJvYcCWkNB5Axp4/JmcMkhc0Pl6o3Y3D+7crOA/S5kxvPABSM9657DJmBrG8mhfDSW3CZu6nwYs1bPFZDurlQLhC4TC12n6yQEAhrPm8ybQ9pWU=
X-Gm-Message-State: AOJu0Yx9goJKeSJNAKGDQ0cHa0QLAxI33GQXpI6a3NpAQj/j1DYO0cVS
	N5tzQN6Cs+Fuw+vZFV9oAZNcLM1MavOdfGK4XNfBrlqsj+YKvctNTmhmaqRrwW9dFEgKsr519U2
	s
X-Google-Smtp-Source: AGHT+IHxZioJ1ivSj7/6vhriNcf7lJvLiIzsNQXjwmdbS4OHSpGw6KNptDMdy5RyDtqnQJc7P0v4eQ==
X-Received: by 2002:a50:a69d:0:b0:57c:5d4a:4122 with SMTP id 4fb4d7f45d1cf-57d4bd59fbamr5445027a12.9.1719305521250;
        Tue, 25 Jun 2024 01:52:01 -0700 (PDT)
Message-ID: <9645ed2d-4e80-4214-853b-0186d4806a26@suse.com>
Date: Tue, 25 Jun 2024 10:52:00 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/sched: fix rare error case in cpu_schedule_down()
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20240625082736.7238-1-jgross@suse.com>
 <1ed2364d-6960-4bb2-9f3c-ac672a97e74b@suse.com>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <1ed2364d-6960-4bb2-9f3c-ac672a97e74b@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 25.06.24 10:33, Jan Beulich wrote:
> On 25.06.2024 10:27, Juergen Gross wrote:
>> In case cpu_schedule_up() is failing to allocate memory for struct
>> sched_resource,
> 
> Or (just to mention it again) in case the function isn't called at all
> because some earlier CPU_UP_PREPARE handler fails.
> 
>> cpu_schedule_down() will be called with the
>> sched_resource pointer being NULL. This needs to be handled.
>>
>> Reported-by: Jan Beulich <jbeulich@suse.com>
>> Fixes: 207589dbacd4 ("xen/sched: move per cpu scheduler private data into struct sched_resource")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> You didn't tag it for 4.19 and you also didn't Cc Oleksii, so I expect you
> deem this minor enough to be delayed until 4.20 opens, despite the Fixes:
> tag?

Correct.


Juergen



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 08:57:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 08:57:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747482.1154906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1zs-0000IU-IN; Tue, 25 Jun 2024 08:57:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747482.1154906; Tue, 25 Jun 2024 08:57:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM1zs-0000IA-Fk; Tue, 25 Jun 2024 08:57:52 +0000
Received: by outflank-mailman (input) for mailman id 747482;
 Tue, 25 Jun 2024 08:57:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o8Ww=N3=bombadil.srs.infradead.org=BATV+ee3bcc3f6418456cddbd+7611+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sM1zq-0000GZ-2C
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 08:57:50 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fe449146-32d0-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 10:57:48 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.97.1 #2 (Red
 Hat Linux)) id 1sM1zb-00000002Btq-1pcn;
 Tue, 25 Jun 2024 08:57:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe449146-32d0-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=THcdDm0A+7qsrmlgD/s5mbaI1bo21sJBUVFRJUuckjU=; b=iT2Kf8Ts7SV0ZMpN1tnhXxPtMH
	Ffv8ZJFKB4PVB/IdNXjk7V0C/EqzT+g2n4rI1ZqlvQji9lJIwZfjHfG1Y7Cb5TXIeVGCM4nTH9crp
	4zyWvlsKZ4WoJ7XDaIrgLwNl3YRoXD5zh+VKywveOkw7i38S3zZmoxTmQHKWk7riwSg6VmFiReOf7
	0vMD17gabcLDjQNIftPl8VaQ0ZrC/qk7EH/0G/IiWcIA8u56veyp3C42DoJVrtu8OAb3RerYCYvx8
	kxEfDPsLLm6uiPifVX8Lcu0WzNjdMGJQI+mys12zLdW5z+VY7pu7ChDIbK8PMhXWCu88C5Voy7C5+
	gU8eTZWA==;
Date: Tue, 25 Jun 2024 01:57:35 -0700
From: Christoph Hellwig <hch@infradead.org>
To: kernel test robot <oliver.sang@intel.com>
Cc: Christoph Hellwig <hch@lst.de>, oe-lkp@lists.linux.dev, lkp@intel.com,
	Jens Axboe <axboe@kernel.dk>, Ulf Hansson <ulf.hansson@linaro.org>,
	Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	linux-block@vger.kernel.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, ying.huang@intel.com,
	feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [axboe-block:for-next] [block]  1122c0c1cc:  aim7.jobs-per-min
 22.6% improvement
Message-ID: <ZnqGf49cvy6W-xWf@infradead.org>
References: <202406250948.e0044f1d-oliver.sang@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <202406250948.e0044f1d-oliver.sang@intel.com>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

Hi Oliver,

Can you test the patch below? It restores the previous behavior if
the device did not have a volatile write cache. I think at least
for raid0 and raid1 without a bitmap the new behavior actually is
correct and better, but it will need fixes for other modes. If the
underlying devices did have a volatile write cache, I'm a bit lost as
to what the problem was, and this probably won't fix the issue.

---
>From 81c816827197f811e14add7a79220ed9eef6af02 Mon Sep 17 00:00:00 2001
From: Christoph Hellwig <hch@lst.de>
Date: Tue, 25 Jun 2024 08:48:18 +0200
Subject: md: set md-specific flags for all queue limits

The md driver wants to enforce a number of flags for all devices, even
when not inheriting them from the underlying devices.  To make sure these
flags survive the queue_limits_set calls that md uses to update the
queue limits without deriving them from the previous limits, add a new
md_init_stacking_limits helper that calls blk_set_stacking_limits and sets
these flags.

Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 drivers/md/md.c     | 13 ++++++++-----
 drivers/md/md.h     |  1 +
 drivers/md/raid0.c  |  2 +-
 drivers/md/raid1.c  |  2 +-
 drivers/md/raid10.c |  2 +-
 drivers/md/raid5.c  |  2 +-
 6 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/drivers/md/md.c b/drivers/md/md.c
index 69ea54aedd99a1..8368438e58e989 100644
--- a/drivers/md/md.c
+++ b/drivers/md/md.c
@@ -5853,6 +5853,13 @@ static void mddev_delayed_delete(struct work_struct *ws)
 	kobject_put(&mddev->kobj);
 }
 
+void md_init_stacking_limits(struct queue_limits *lim)
+{
+	blk_set_stacking_limits(lim);
+	lim->features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
+			BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
+}
+
 struct mddev *md_alloc(dev_t dev, char *name)
 {
 	/*
@@ -5871,10 +5878,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
 	int shift;
 	int unit;
 	int error;
-	struct queue_limits lim = {
-		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
-					  BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT,
-	};
 
 	/*
 	 * Wait for any previous instance of this device to be completely
@@ -5914,7 +5917,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
 		 */
 		mddev->hold_active = UNTIL_STOP;
 
-	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
+	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
 	if (IS_ERR(disk)) {
 		error = PTR_ERR(disk);
 		goto out_free_mddev;
diff --git a/drivers/md/md.h b/drivers/md/md.h
index c4d7ebf9587d07..28cb4b0b6c1740 100644
--- a/drivers/md/md.h
+++ b/drivers/md/md.h
@@ -893,6 +893,7 @@ extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);
 
 extern int mddev_init(struct mddev *mddev);
 extern void mddev_destroy(struct mddev *mddev);
+void md_init_stacking_limits(struct queue_limits *lim);
 struct mddev *md_alloc(dev_t dev, char *name);
 void mddev_put(struct mddev *mddev);
 extern int md_run(struct mddev *mddev);
diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
index 62634e2a33bd0f..32d58752477847 100644
--- a/drivers/md/raid0.c
+++ b/drivers/md/raid0.c
@@ -379,7 +379,7 @@ static int raid0_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_hw_sectors = mddev->chunk_sectors;
 	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
 	lim.io_min = mddev->chunk_sectors << 9;
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 1a0eba65b8a92b..04a0c2ca173245 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -3194,7 +3194,7 @@ static int raid1_set_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
 	if (err) {
diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
index 3334aa803c8380..2a9c4ee982e023 100644
--- a/drivers/md/raid10.c
+++ b/drivers/md/raid10.c
@@ -3974,7 +3974,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
 	struct queue_limits lim;
 	int err;
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.max_write_zeroes_sectors = 0;
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
index 0192a6323f09ba..10219205160bbf 100644
--- a/drivers/md/raid5.c
+++ b/drivers/md/raid5.c
@@ -7708,7 +7708,7 @@ static int raid5_set_limits(struct mddev *mddev)
 	 */
 	stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
 
-	blk_set_stacking_limits(&lim);
+	md_init_stacking_limits(&lim);
 	lim.io_min = mddev->chunk_sectors << 9;
 	lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
 	lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
-- 
2.43.0
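
The helper added in the patch above centralizes a reset-then-restore pattern: blk_set_stacking_limits() reinitializes the limits structure, so any md-specific feature flags must be applied after the reset or they are lost. A standalone sketch of that pattern, using simplified flag values rather than the kernel's actual BLK_FEAT_* definitions:

```c
#include <assert.h>

/* Simplified stand-ins for the kernel's BLK_FEAT_* bits. */
#define BLK_FEAT_WRITE_CACHE (1u << 0)
#define BLK_FEAT_FUA         (1u << 1)
#define BLK_FEAT_IO_STAT     (1u << 2)
#define BLK_FEAT_NOWAIT      (1u << 3)

struct queue_limits {
    unsigned int features;
    unsigned int max_hw_sectors;
};

/* Resets limits to stacking defaults -- note this clears any
 * feature flags the caller had set beforehand. */
static void blk_set_stacking_limits(struct queue_limits *lim)
{
    lim->features = 0;
    lim->max_hw_sectors = ~0u;
}

/* The pattern the patch introduces: reset first, then force the
 * md-specific flags so they survive the reinitialization. */
static void md_init_stacking_limits(struct queue_limits *lim)
{
    blk_set_stacking_limits(lim);
    lim->features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
                    BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
}
```

Each raid personality then calls md_init_stacking_limits() instead of blk_set_stacking_limits() directly, so the flags are set uniformly in one place rather than relying on each caller to remember them.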



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 09:17:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 09:17:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747497.1154916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2IX-0004FB-4I; Tue, 25 Jun 2024 09:17:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747497.1154916; Tue, 25 Jun 2024 09:17:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2IX-0004F4-1E; Tue, 25 Jun 2024 09:17:09 +0000
Received: by outflank-mailman (input) for mailman id 747497;
 Tue, 25 Jun 2024 09:17:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM2IW-0004Ey-F2
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 09:17:08 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b0be724f-32d3-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 11:17:05 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2ec1620a956so63973711fa.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 02:17:05 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819a66833sm8189007a91.9.2024.06.25.02.17.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 02:17:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0be724f-32d3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719307025; x=1719911825; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=LsYyQcbaduUfu96DOuZrUAkmpHBlvRODzMCVPgYQsoo=;
        b=cYWTFsQFsqrKYG2v0aYxteJpbLVKNakyKAL1QOwO2q9iGvnSrhHYPC4FyLSASnOVei
         aLhyBmQWPITx0rK18gZKFpBwiqHB42A+0zPgDmdr5uCGIK5TBEdtil/1WROY679zIRlD
         sX3P6PgiukjFD4wuxZu8hfTdNsCidyGeQa0Mtz1gGjUEh+6dM1gNjrdgzMug2WnFKbXV
         PPESA9vU5keEzgv2XxnRUvO4XBlgiNGv/Nrzs8XTUr/IJVaqcgdjioe78nr+eBUBv0l4
         B/t2Is+11Z+QMNK3mGqNXdvMfkBPJEmLiq0WmioWcxo3OOJfDfA3qqezXAzuMkg8Ujwm
         IhMw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719307025; x=1719911825;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LsYyQcbaduUfu96DOuZrUAkmpHBlvRODzMCVPgYQsoo=;
        b=enfkhMnoZ7Ami937kEF/aNaOjJpn7fXucIkmMiSPdHXn33Lb7uDauUqQkW6DuXlJjg
         ynWiy0sKHHXyWGGdvyQcI6rLa/WYqkPADbnRXsEzKCOHw62pH7kpocMyd0ToFC+NMY2u
         QL284/j3GcCgpdZ2phImnUtJs7bZuh67hB7hhGiVcl1GEHhk2WUherl54sph4VH4LR/d
         dFNWMtO2lR4oC0O1bs2wMsAXzRqcdV/1EFn5TSyO1u+t0Yc5zPvXLPnohVShwLJsJ3F3
         5xoPSTR7CQoNnNT3253joCm5FpthCdeKk3SZ7Qw1iNh2NHd0KXTSkSfZh5FBJ/dvfpJI
         sXAw==
X-Forwarded-Encrypted: i=1; AJvYcCUKh1aL4BECNB0WaYKmGaGDBa4/pQ+bMAJy4Ryb0F3J7JHOTiDTju/S6Ac5yWhq1DDTjV6b/jjXfWDcc2ej/Rh+JovkhH8Va2s+SRCcc9M=
X-Gm-Message-State: AOJu0Yw/klO1CgseAh2d57cWuaQQ2FS3bm8JwhC2wX1z1cYfFxxgU3AY
	0UUV+WYDkNLYVqQ1axTXCtcOf4pnjj3INof1ooSU74B7x5BrKiDg79IarX9sTWdrzDmA0ubONzU
	=
X-Google-Smtp-Source: AGHT+IGHA5u5wLZ07A+fp1BC2M4h1Pc/v7hT0iNjdRZY0RM0UD450jk1aZ2exR07M/D7SJHvBPF5sw==
X-Received: by 2002:a2e:720b:0:b0:2ec:4096:4bc6 with SMTP id 38308e7fff4ca-2ec5b318000mr39135031fa.7.1719307025168;
        Tue, 25 Jun 2024 02:17:05 -0700 (PDT)
Message-ID: <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
Date: Tue, 25 Jun 2024 11:16:55 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <20240621191434.5046-2-tamas@tklengyel.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240621191434.5046-2-tamas@tklengyel.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 21.06.2024 21:14, Tamas K Lengyel wrote:
> --- /dev/null
> +++ b/scripts/oss-fuzz/build.sh
> @@ -0,0 +1,22 @@
> +#!/bin/bash -eu
> +# Copyright 2024 Google LLC
> +#
> +# Licensed under the Apache License, Version 2.0 (the "License");
> +# you may not use this file except in compliance with the License.
> +# You may obtain a copy of the License at
> +#
> +#      http://www.apache.org/licenses/LICENSE-2.0
> +#
> +# Unless required by applicable law or agreed to in writing, software
> +# distributed under the License is distributed on an "AS IS" BASIS,
> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> +# See the License for the specific language governing permissions and
> +# limitations under the License.
> +#
> +################################################################################

I'm a little concerned here, but maybe I shouldn't be. According to what
I'm reading, the Apache 2.0 license is at least not entirely compatible
with GPLv2. While apparently the issue lies solely with linking in
Apache-licensed code, I wonder whether our not having a respective file
under ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has
a reason, possibly excluding the use of such code in the project.

> +cd xen
> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
> +make clang=y -C tools/include
> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator

In addition to what Julien said, I further think that the filename /
directory name are too generic for a file with such specific contents.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 09:24:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 09:24:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747503.1154926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2PO-0006Nx-QH; Tue, 25 Jun 2024 09:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747503.1154926; Tue, 25 Jun 2024 09:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2PO-0006Nq-NW; Tue, 25 Jun 2024 09:24:14 +0000
Received: by outflank-mailman (input) for mailman id 747503;
 Tue, 25 Jun 2024 09:24:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2LY6=N3=cloud.com=roger.pau@srs-se1.protection.inumbo.net>)
 id 1sM2PO-0006Nk-7s
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 09:24:14 +0000
Received: from mail-vs1-xe30.google.com (mail-vs1-xe30.google.com
 [2607:f8b0:4864:20::e30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ae8b1c2f-32d4-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 11:24:12 +0200 (CEST)
Received: by mail-vs1-xe30.google.com with SMTP id
 ada2fe7eead31-48f760f89c3so27785137.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 02:24:12 -0700 (PDT)
Received: from localhost ([213.195.124.163]) by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-444e0d07b28sm24550601cf.36.2024.06.25.02.24.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 02:24:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae8b1c2f-32d4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719307451; x=1719912251; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date:from:to
         :cc:subject:date:message-id:reply-to;
        bh=Q85NcxmGLofdjMmXg1XH3x3ndabt6PeX4B7ikowQStk=;
        b=ciRlMO9sSVHClSRWd2Scw/mjDP6yq2aJ4ilBcLxvvItzWtucZvNfP3A/bnkcTVwxmb
         GpwHhfeGdc8Sz2wPJSIybnxvtqrSfnStCawlWHEexsep0/NiuIHkRjouuPMQ35kErgKZ
         R2EF0qvEh/AI7B6Um5WYYXRgMU7czmTnXnJms=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719307451; x=1719912251;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:message-id:subject:cc:to:from:date
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=Q85NcxmGLofdjMmXg1XH3x3ndabt6PeX4B7ikowQStk=;
        b=czaID0xWZIc35ERe3DNF6gBw5fEf+V113ruvL7Z6N4x2KNtXsrRqwftJjg7IblPyAg
         2qxX/42oMEbG1gjzWnZh54iwrQJAQT+JkcWL6ssqiMCNF1yQzpsJSNabJ3/bi3jh/zAJ
         rcUgtdW9dscrWU02evl+EEzCYkiG10kTKyJ2sJt+4XfEZjo/t6RSIu9pR62pcHTghb/t
         ay3cPuoVrN6Eu9XM5L+HqvxY/AQgYuNFrqO7qKj+oeu1mqP9TWsZEsHotaHcQcOMH4HO
         Vkfue9r0Xb1YoCOaxLEyY43jB0PI/iz0g5CyO9PAQHoJKKM+tndcTV/v8Rs2BtrpLaDR
         62Yw==
X-Gm-Message-State: AOJu0YyUWawcrwvIfa6xvBYy+ELEC771hEthuJlqQwWY+QmCacsy4BSM
	DYe7pFrEmcYlu0H/4KmMx1bjHQDlMnjlhkjBx/3KQwIjyqL+fgq9vifaUxSKdcU=
X-Google-Smtp-Source: AGHT+IHtlF0E97JVVSKVN1XDXhSFgPWfvYjROflI0lZlsExELeN4PNhaa8/ebx1kCviwurI9Ab3uig==
X-Received: by 2002:a05:6102:2276:b0:48f:392a:f891 with SMTP id ada2fe7eead31-48f4c0dcd51mr6665001137.21.1719307450852;
        Tue, 25 Jun 2024 02:24:10 -0700 (PDT)
Date: Tue, 25 Jun 2024 11:24:08 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for-4.19] gnttab: fix compat query-size handling
Message-ID: <ZnqMuFBb0J9UF2XE@macbook>
References: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>

On Tue, Jun 25, 2024 at 09:30:07AM +0200, Jan Beulich wrote:
> The odd DEFINE_XEN_GUEST_HANDLE(), inconsistent with all other similar
> constructs, should have caught my attention. Turns out it was needed for
> the build to succeed merely because the corresponding #ifndef had a
> typo. That typo in turn broke compat mode guests, by having query-size
> requests of theirs wire into the domain_crash() at the bottom of the
> switch().
> 
> Fixes: 8c3bb4d8ce3f ("xen/gnttab: Perform compat/native gnttab_query_size check")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
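
The failure mode described in the quoted commit message — a misspelled
identifier in an #ifndef guard silently selecting the wrong branch — can be
sketched with hypothetical names (this is not the actual gnttab code):

```c
#define CONFIG_COMPAT 1  /* assume compat support is configured */

/* The guard below misspells CONFIG_COMPAT, so the macro it tests is
 * never defined and the #ifndef branch is unconditionally compiled,
 * regardless of configuration -- the preprocessor cannot catch this. */
static int compat_path_enabled(void)
{
#ifndef CONFIG_COMPTA   /* typo: meant CONFIG_COMPAT */
    return 0;           /* always taken: compat handling silently lost */
#else
    return 1;
#endif
}
```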

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Looks like set-version is similarly missing in the set of structures
> checked, but I'm pretty sure that we will now want to defer taking care
> of that until after 4.20 was branched.

If we have to backport the fix anyway, we might as well consider
taking it now.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 09:31:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 09:31:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747510.1154936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2WD-00005u-Gb; Tue, 25 Jun 2024 09:31:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747510.1154936; Tue, 25 Jun 2024 09:31:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2WD-00005n-Ca; Tue, 25 Jun 2024 09:31:17 +0000
Received: by outflank-mailman (input) for mailman id 747510;
 Tue, 25 Jun 2024 09:31:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM2WC-00005g-4t
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 09:31:16 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ab057946-32d5-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 11:31:15 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2ec5779b423so30535111fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 02:31:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c81eaf92b2sm8043362a91.40.2024.06.25.02.31.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 02:31:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab057946-32d5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719307874; x=1719912674; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=mPJTVbpcih923EnIV5/SV3GgFzPfabyUZsWU9+exr+w=;
        b=XRA0qKYwegwmMjh1PNgfsHC3PpfwwFf0868SJ6syyujbHQ4tOuaUZMLVWu7lL3onFB
         PWlGPyEWrqcLW0m9AwTmfS6N4pNko2fYv8ybLhG4ado/DfUDFScZPxIWerLAm2AuqjIV
         sWGVNBvXcAJbdLE1MQioFW7UjYl+XNoNrE/xp1YDaODZnwy/vWAFRtaW889KILMn61FJ
         KbgbKW6nBaNz/7nwQ1H6qep5EAgCWQhTOn9UiFYfCdYIfiMLcb0OiI0JM3AqirEQl1Ep
         v+OMIe7x6IZsCJs+Xede2uK9ClOOmSZi3nStEgIczZgmXIEuJx8AHdUWL4hNCYkYZXRz
         Jfwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719307874; x=1719912674;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mPJTVbpcih923EnIV5/SV3GgFzPfabyUZsWU9+exr+w=;
        b=AY/+ITsHAE/NlIO+Zq7C+ZpC1VsT2RAeHrbwn/H7/lEWKwT7dgZgH5lQq6e3p+GvaZ
         eYBag/w7JR90Ouhh8wQTJEoWxyrDVjsZD8j72qp13/Ou7lJihZXi9D58zu5QQKff+1YP
         CmuV2+KbhrlDPUEKBnbVF2CX3tFB9jM6ssB1CZoAtTO5jOdMRtYU+IIMECgGAiQ3HvH0
         bEQlNge5Wiae6Q7qeyTVqqRGBFig78B11xKN1IUU+K2Sb5oEykCEyB/oE4QbrqT24WcI
         UnJb066RLl+dimDXBru/r12HdPOzqZc4jtk4oAJE+lM/h8S+Gq9RVD7l0AzuTSFCrKw4
         rQqQ==
X-Gm-Message-State: AOJu0YxVb5d1mVRM5pvZG3vO1467PG8Bc9aCPCxmdnsUp8Y/HO5k2r0U
	u8BMFIXqe0+Yaeig4AS+i2rp8Rd29ClafTxxMZZrinnOdlbIHgQ/1jLIl3HCVA==
X-Google-Smtp-Source: AGHT+IGpQoltf0M0iJZzC+Q+aC9zS5IdGRsUH2myg5A1A3j0VQOUdXVuIagU1xm5aa3kMXYemPbTCA==
X-Received: by 2002:a2e:9b4d:0:b0:2ec:543f:6013 with SMTP id 38308e7fff4ca-2ec5931d86amr55159551fa.13.1719307874542;
        Tue, 25 Jun 2024 02:31:14 -0700 (PDT)
Message-ID: <b4da0eba-fc68-4459-b64e-9e19f8e45677@suse.com>
Date: Tue, 25 Jun 2024 11:31:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] gnttab: fix compat query-size handling
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
 <ZnqMuFBb0J9UF2XE@macbook>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ZnqMuFBb0J9UF2XE@macbook>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 11:24, Roger Pau Monné wrote:
> On Tue, Jun 25, 2024 at 09:30:07AM +0200, Jan Beulich wrote:
>> The odd DEFINE_XEN_GUEST_HANDLE(), inconsistent with all other similar
>> constructs, should have caught my attention. Turns out it was needed for
>> the build to succeed merely because the corresponding #ifndef had a
>> typo. That typo in turn broke compat mode guests, by having query-size
>> requests of theirs wire into the domain_crash() at the bottom of the
>> switch().
>>
>> Fixes: 8c3bb4d8ce3f ("xen/gnttab: Perform compat/native gnttab_query_size check")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> Looks like set-version is similarly missing in the set of structures
>> checked, but I'm pretty sure that we will now want to defer taking care
>> of that until after 4.20 was branched.
> 
> If we have to backport the fix anyway, we might as well consider
> taking it now.

While I put a Fixes: tag there, this is the kind of change that I don't
think needs backporting. The ABI of released versions is supposed to be
even less in flux than the "stable" ABI in general.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 09:54:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 09:54:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747525.1154957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2rz-0003hq-Aq; Tue, 25 Jun 2024 09:53:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747525.1154957; Tue, 25 Jun 2024 09:53:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM2rz-0003hj-89; Tue, 25 Jun 2024 09:53:47 +0000
Received: by outflank-mailman (input) for mailman id 747525;
 Tue, 25 Jun 2024 09:53:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sM2rx-0003hd-Sp
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 09:53:45 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ce6a04cf-32d8-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 11:53:43 +0200 (CEST)
Received: from [10.176.134.80] (unknown [160.78.253.181])
 by support.bugseng.com (Postfix) with ESMTPSA id 6CAB94EE0738;
 Tue, 25 Jun 2024 11:53:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce6a04cf-32d8-11ef-b4bb-af5377834399
Message-ID: <33d24bb8-9ef5-4d46-a93a-9bc7310cabb8@bugseng.com>
Date: Tue, 25 Jun 2024 11:53:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 04/13] x86/vpmu: address violations of MISRA C Rule
 16.3
To: Jan Beulich <jbeulich@suse.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <c45b27a08a1608de85e4bbae80763f8429d40ad5.1719218291.git.federico.serafini@bugseng.com>
 <1ea5bebd-23ee-4d2c-a7c8-bc6ba99851c5@suse.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <1ea5bebd-23ee-4d2c-a7c8-bc6ba99851c5@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 24/06/24 17:16, Jan Beulich wrote:
> On 24.06.2024 11:04, Federico Serafini wrote:
>> --- a/xen/arch/x86/cpu/vpmu_intel.c
>> +++ b/xen/arch/x86/cpu/vpmu_intel.c
>> @@ -713,6 +713,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
>>               break;
>>           default:
>>               rdmsrl(msr, *msr_content);
>> +            break;
>>           }
>>       }
>>       else if ( msr == MSR_IA32_MISC_ENABLE )
> 
> Up from here, in core2_vpmu_do_wrmsr() there's a pretty long default
> block with no terminating break. Is there a reason that you don't put
> one there?

I noticed that vpmu_intel.c is a file out of scope. The violation I
addressed was reported because it involves the macro rdmsrl, which comes
from the (in-scope) file xen/arch/x86/include/asm/msr.h.
I will also address the violation you pointed out in a v3.
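
For reference, MISRA C:2012 Rule 16.3 requires every switch-clause,
including the last one and `default`, to be terminated by an unconditional
break. A minimal sketch of the compliant pattern being added (hypothetical
values, not the vpmu code):

```c
#include <stdint.h>

/* Hypothetical sketch of a Rule 16.3-compliant switch: every clause,
 * including default, ends in an unconditional break. */
static uint64_t read_reg(unsigned int msr)
{
    uint64_t value;

    switch ( msr )
    {
    case 0x10:
        value = 1;
        break;
    default:
        value = 0;
        break;      /* required by Rule 16.3 even as the final clause */
    }

    return value;
}
```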

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:04:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747536.1154970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM322-0005g1-84; Tue, 25 Jun 2024 10:04:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747536.1154970; Tue, 25 Jun 2024 10:04:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM322-0005fu-5L; Tue, 25 Jun 2024 10:04:10 +0000
Received: by outflank-mailman (input) for mailman id 747536;
 Tue, 25 Jun 2024 10:04:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM320-0005fk-7E; Tue, 25 Jun 2024 10:04:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM320-0001ff-5d; Tue, 25 Jun 2024 10:04:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM31z-0005Yt-QC; Tue, 25 Jun 2024 10:04:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sM31z-00006q-PY; Tue, 25 Jun 2024 10:04:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=v7HRd/nUGQtK3Ts5Dc0Tv3XAHtnMaTVReeeJp3tKR3E=; b=kotj8Rt0bxl3v9Dvh8kO9VG2zh
	vwJ0lf58AY9f1rdZELmzTEXTVJdhksBSmg/JZfkE5Z6YZtcX32psZUKMljuZeQSQPnz0RySCuVTmc
	EZDfuGHfbwqG8OS8WguprohyNwoo30Kmcf2rrFEQ1cXg/IypVpXvVO/qOsr96eqiijpM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186479-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186479: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b14dae96c07ef27cc7f8107ddaa16989e9ab024b
X-Osstest-Versions-That:
    xen=c56f1ef577831ec70645ca5874d54f2e698c6761
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 10:04:07 +0000

flight 186479 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186479/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b14dae96c07ef27cc7f8107ddaa16989e9ab024b
baseline version:
 xen                  c56f1ef577831ec70645ca5874d54f2e698c6761

Last test of basis   186470  2024-06-24 19:02:11 Z    0 days
Testing same since   186473  2024-06-25 01:00:22 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Federico Serafini <federico.serafini@bugseng.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c56f1ef577..b14dae96c0  b14dae96c07ef27cc7f8107ddaa16989e9ab024b -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:14:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:14:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747548.1154993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CI-0007mE-Gv; Tue, 25 Jun 2024 10:14:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747548.1154993; Tue, 25 Jun 2024 10:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CI-0007m7-ET; Tue, 25 Jun 2024 10:14:46 +0000
Received: by outflank-mailman (input) for mailman id 747548;
 Tue, 25 Jun 2024 10:14:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=075v=N3=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sM3CH-0007WK-H4
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 10:14:45 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id be0c93ce-32db-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 12:14:44 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id 899554EE073D;
 Tue, 25 Jun 2024 12:14:41 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: be0c93ce-32db-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 1/3] common/kernel: address violation of MISRA C Rule 13.6
Date: Tue, 25 Jun 2024 12:14:19 +0200
Message-Id: <54949b0561263b9f18da500255836d89ca8838ba.1719308599.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the file common/kernel.c the macro ARRAY_SIZE is called with the
argument current->domain->handle.
Once expanded, this argument is used as the operand of sizeof, and
'current', being a macro that expands to a function call with potential
side effects, generates a violation of the rule.

To address this violation, the value of current->domain is stored in a
local variable 'd' before it is passed to the ARRAY_SIZE macro.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 xen/common/kernel.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index b44b2439ca..76b4f27aef 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -660,14 +660,15 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     case XENVER_guest_handle:
     {
+        struct domain *d = current->domain;
         xen_domain_handle_t hdl;
 
         if ( deny )
             memset(&hdl, 0, ARRAY_SIZE(hdl));
 
-        BUILD_BUG_ON(ARRAY_SIZE(current->domain->handle) != ARRAY_SIZE(hdl));
+        BUILD_BUG_ON(ARRAY_SIZE(d->handle) != ARRAY_SIZE(hdl));
 
-        if ( copy_to_guest(arg, deny ? hdl : current->domain->handle,
+        if ( copy_to_guest(arg, deny ? hdl : d->handle,
                            ARRAY_SIZE(hdl) ) )
             return -EFAULT;
         return 0;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:14:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:14:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747547.1154983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CD-0007WX-7p; Tue, 25 Jun 2024 10:14:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747547.1154983; Tue, 25 Jun 2024 10:14:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CD-0007WQ-5H; Tue, 25 Jun 2024 10:14:41 +0000
Received: by outflank-mailman (input) for mailman id 747547;
 Tue, 25 Jun 2024 10:14:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=075v=N3=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sM3CC-0007WK-IR
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 10:14:40 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bada7cc0-32db-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 12:14:38 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id A83A64EE0738;
 Tue, 25 Jun 2024 12:14:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bada7cc0-32db-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 0/3] address violation of MISRA C Rule 13.6
Date: Tue, 25 Jun 2024 12:14:18 +0200
Message-Id: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series aims to address several violations of MISRA C Rule 13.6, which
states: "The operand of the `sizeof' operator shall not contain any
expression which has potential side effects."

Alessandro Zucchelli (3):
  common/kernel: address violation of MISRA C Rule 13.6
  xen/event: address violation of MISRA C Rule 13.6
  common/softirq: address violation of MISRA C Rule 13.6

 xen/common/kernel.c     | 5 +++--
 xen/common/softirq.c    | 3 ++-
 xen/include/xen/event.h | 8 +++++---
 3 files changed, 10 insertions(+), 6 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:14:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:14:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747550.1155003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CM-00084A-OI; Tue, 25 Jun 2024 10:14:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747550.1155003; Tue, 25 Jun 2024 10:14:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CM-000843-Lh; Tue, 25 Jun 2024 10:14:50 +0000
Received: by outflank-mailman (input) for mailman id 747550;
 Tue, 25 Jun 2024 10:14:49 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=075v=N3=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sM3CL-0007WK-9l
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 10:14:49 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c03fbf98-32db-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 12:14:47 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id F3BAD4EE0754;
 Tue, 25 Jun 2024 12:14:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c03fbf98-32db-11ef-b4bb-af5377834399
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 2/3] xen/event: address violation of MISRA C Rule 13.6
Date: Tue, 25 Jun 2024 12:14:20 +0200
Message-Id: <d48b73a3c5c569f95da424fe9e14a7690b1c69f8.1719308599.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the file include/xen/event.h the macro set_bit is called with the
argument current->pause_flags.
Once expanded, this argument is used as the operand of sizeof, and
'current', being a macro that expands to a function call with potential
side effects, generates a violation of the rule.

To address this violation, the value of current is stored in a local
variable 'v' before it is passed to the set_bit macro.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 xen/include/xen/event.h | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/xen/include/xen/event.h b/xen/include/xen/event.h
index f1472ea1eb..48b79f3d62 100644
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -183,13 +183,14 @@ static bool evtchn_usable(const struct evtchn *evtchn)
 /* Wait on a Xen-attached event channel. */
 #define wait_on_xen_event_channel(port, condition)                      \
     do {                                                                \
+        struct vcpu *v = current;                                       \
         if ( condition )                                                \
             break;                                                      \
-        set_bit(_VPF_blocked_in_xen, &current->pause_flags);            \
+        set_bit(_VPF_blocked_in_xen, &v->pause_flags);                  \
         smp_mb(); /* set blocked status /then/ re-evaluate condition */ \
         if ( condition )                                                \
         {                                                               \
-            clear_bit(_VPF_blocked_in_xen, &current->pause_flags);      \
+            clear_bit(_VPF_blocked_in_xen, &v->pause_flags);            \
             break;                                                      \
         }                                                               \
         raise_softirq(SCHEDULE_SOFTIRQ);                                \
@@ -198,7 +199,8 @@ static bool evtchn_usable(const struct evtchn *evtchn)
 
 #define prepare_wait_on_xen_event_channel(port)                         \
     do {                                                                \
-        set_bit(_VPF_blocked_in_xen, &current->pause_flags);            \
+        struct vcpu *v = current;                                       \
+        set_bit(_VPF_blocked_in_xen, &v->pause_flags);                  \
         raise_softirq(SCHEDULE_SOFTIRQ);                                \
         smp_mb(); /* set blocked status /then/ caller does his work */  \
     } while ( 0 )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:14:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:14:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747553.1155014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CQ-0008OD-23; Tue, 25 Jun 2024 10:14:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747553.1155014; Tue, 25 Jun 2024 10:14:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3CP-0008O2-Uu; Tue, 25 Jun 2024 10:14:53 +0000
Received: by outflank-mailman (input) for mailman id 747553;
 Tue, 25 Jun 2024 10:14:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=075v=N3=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sM3CO-0007Vr-Lz
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 10:14:52 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c28c9499-32db-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 12:14:52 +0200 (CEST)
Received: from delta.bugseng.com.homenet.telecomitalia.it
 (host-87-17-171-46.retail.telecomitalia.it [87.17.171.46])
 by support.bugseng.com (Postfix) with ESMTPSA id 94A5B4EE073D;
 Tue, 25 Jun 2024 12:14:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c28c9499-32db-11ef-90a3-e314d9c70b13
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 3/3] common/softirq: address violation of MISRA C Rule 13.6
Date: Tue, 25 Jun 2024 12:14:21 +0200
Message-Id: <ab8b527c775fbb7681a4658828d53e7e3419be10.1719308599.git.alessandro.zucchelli@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In the file common/softirq.c the macro set_bit is called with an
argument containing smp_processor_id().
Once expanded, this argument is used as the operand of sizeof, and
'smp_processor_id', being a macro that expands to a function call with
potential side effects, generates a violation of the rule.

To address this violation, the value of smp_processor_id() is stored in
a local variable 'cpu' before it is used in the set_bit call.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
---
 xen/common/softirq.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/common/softirq.c b/xen/common/softirq.c
index bee4a82009..c5f3870534 100644
--- a/xen/common/softirq.c
+++ b/xen/common/softirq.c
@@ -139,7 +139,8 @@ void cpu_raise_softirq_batch_finish(void)
 
 void raise_softirq(unsigned int nr)
 {
-    set_bit(nr, &softirq_pending(smp_processor_id()));
+    unsigned int cpu = smp_processor_id();
+    set_bit(nr, &softirq_pending(cpu));
 }
 
 /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:17:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:17:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747580.1155023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3EV-0001IB-D5; Tue, 25 Jun 2024 10:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747580.1155023; Tue, 25 Jun 2024 10:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3EV-0001I4-AW; Tue, 25 Jun 2024 10:17:03 +0000
Received: by outflank-mailman (input) for mailman id 747580;
 Tue, 25 Jun 2024 10:17:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=SOoF=N3=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sM3ET-0001Hw-HF
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 10:17:01 +0000
Received: from smtp-out2.suse.de (smtp-out2.suse.de
 [2a07:de40:b251:101:10:150:64:2])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f4b1958-32dc-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 12:17:00 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 866E11F84A;
 Tue, 25 Jun 2024 10:16:59 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 5FA7D13A9A;
 Tue, 25 Jun 2024 10:16:59 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id 6oeBFRuZema4WAAAD6G6ig
 (envelope-from <jgross@suse.com>); Tue, 25 Jun 2024 10:16:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f4b1958-32dc-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1719310619; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=gMzfsIXCHb+A6KoO2MNfkP7lqMQj90KIZpVmMAEHrRk=;
	b=iJhsl26fEQIbklC6V8WxDwAt0J+3fLC7+S9fqhtoompQYm/qKRCvIq/XPbj9afNs6y6LrU
	Hs9PKgACNYPLxBAyofwtEoWqYnCUINmuZTfryU9mjTeqmBMzgtE0qsOK2UXZRS32vLgdhz
	V5ztSNTePNkl1Mt0HKz+kV51DW3fXD0=
Authentication-Results: smtp-out2.suse.de;
	none
Message-ID: <6a94b06a-1259-45df-bd59-cec90ff140cc@suse.com>
Date: Tue, 25 Jun 2024 12:16:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/sched: fix rare error case in cpu_schedule_down()
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20240625082736.7238-1-jgross@suse.com>
 <1ed2364d-6960-4bb2-9f3c-ac672a97e74b@suse.com>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
Autocrypt: addr=jgross@suse.com; keydata=
 xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOB
 ycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJve
 dYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJ
 NwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvx
 XP3FAp2pkW0xqG7/377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEB
 AAHNH0p1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT7CwHkEEwECACMFAlOMcK8CGwMH
 CwkIBwMCAQYVCAIJCgsEFgIDAQIeAQIXgAAKCRCw3p3WKL8TL8eZB/9G0juS/kDY9LhEXseh
 mE9U+iA1VsLhgDqVbsOtZ/S14LRFHczNd/Lqkn7souCSoyWsBs3/wO+OjPvxf7m+Ef+sMtr0
 G5lCWEWa9wa0IXx5HRPW/ScL+e4AVUbL7rurYMfwCzco+7TfjhMEOkC+va5gzi1KrErgNRHH
 kg3PhlnRY0Udyqx++UYkAsN4TQuEhNN32MvN0Np3WlBJOgKcuXpIElmMM5f1BBzJSKBkW0Jc
 Wy3h2Wy912vHKpPV/Xv7ZwVJ27v7KcuZcErtptDevAljxJtE7aJG6WiBzm+v9EswyWxwMCIO
 RoVBYuiocc51872tRGywc03xaQydB+9R7BHPzsBNBFOMcBYBCADLMfoA44MwGOB9YT1V4KCy
 vAfd7E0BTfaAurbG+Olacciz3yd09QOmejFZC6AnoykydyvTFLAWYcSCdISMr88COmmCbJzn
 sHAogjexXiif6ANUUlHpjxlHCCcELmZUzomNDnEOTxZFeWMTFF9Rf2k2F0Tl4E5kmsNGgtSa
 aMO0rNZoOEiD/7UfPP3dfh8JCQ1VtUUsQtT1sxos8Eb/HmriJhnaTZ7Hp3jtgTVkV0ybpgFg
 w6WMaRkrBh17mV0z2ajjmabB7SJxcouSkR0hcpNl4oM74d2/VqoW4BxxxOD1FcNCObCELfIS
 auZx+XT6s+CE7Qi/c44ibBMR7hyjdzWbABEBAAHCwF8EGAECAAkFAlOMcBYCGwwACgkQsN6d
 1ii/Ey9D+Af/WFr3q+bg/8v5tCknCtn92d5lyYTBNt7xgWzDZX8G6/pngzKyWfedArllp0Pn
 fgIXtMNV+3t8Li1Tg843EXkP7+2+CQ98MB8XvvPLYAfW8nNDV85TyVgWlldNcgdv7nn1Sq8g
 HwB2BHdIAkYce3hEoDQXt/mKlgEGsLpzJcnLKimtPXQQy9TxUaLBe9PInPd+Ohix0XOlY+Uk
 QFEx50Ki3rSDl2Zt2tnkNYKUCvTJq7jvOlaPd6d/W0tZqpyy7KVay+K4aMobDsodB3dvEAs6
 ScCnh03dDAFgIq5nsB11j3KPKdVoPlfucX2c7kGNH+LUMbzqV6beIENfNexkOfxHfw==
In-Reply-To: <1ed2364d-6960-4bb2-9f3c-ac672a97e74b@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------0otx1dfREOmaFseE7qL6NYz5"
X-Spamd-Result: default: False [-5.18 / 50.00];
	BAYES_HAM(-2.99)[99.95%];
	SIGNED_PGP(-2.00)[];
	MIME_BASE64_TEXT_BOGUS(1.00)[];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	MIME_GOOD(-0.20)[multipart/signed,multipart/mixed,text/plain];
	NEURAL_HAM_SHORT(-0.20)[-1.000];
	MIME_BASE64_TEXT(0.10)[];
	MIME_UNKNOWN(0.10)[application/pgp-keys];
	XM_UA_NO_VERSION(0.01)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	MIME_TRACE(0.00)[0:+,1:+,2:+,3:+,4:~,5:~];
	ARC_NA(0.00)[];
	TO_DN_SOME(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	DKIM_SIGNED(0.00)[suse.com:s=susede1];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[imap1.dmz-prg2.suse.org:helo];
	RCPT_COUNT_THREE(0.00)[4];
	HAS_ATTACHMENT(0.00)[]
X-Spam-Flag: NO
X-Spam-Score: -5.18
X-Spam-Level: 

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------0otx1dfREOmaFseE7qL6NYz5
Content-Type: multipart/mixed; boundary="------------0w9grBOL0NgMas9zMgwxtC0s";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
Message-ID: <6a94b06a-1259-45df-bd59-cec90ff140cc@suse.com>
Subject: Re: [PATCH] xen/sched: fix rare error case in cpu_schedule_down()
References: <20240625082736.7238-1-jgross@suse.com>
 <1ed2364d-6960-4bb2-9f3c-ac672a97e74b@suse.com>
In-Reply-To: <1ed2364d-6960-4bb2-9f3c-ac672a97e74b@suse.com>
Autocrypt-Gossip: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJ3BBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AAIQkQoDSui/t3IH4WIQQ+pJkfkcoLMCa4X6CgNK6L+3cgfgn7AJ9DmMd0SMJE
 ePbc7/m22D2v04iu7ACffXTdZQhNl557tJuDXZSBxDmW/tLOwU0EWTecRBAIAIK5OMKMU5R2
 Lk2bbjgX7vyQuCFFyKf9rC/4itNwhYWFSlKzVj3WJBDsoi2KvPm7AI+XB6NIkNAkshL5C0kd
 pcNd5Xo0jRR5/WE/bT7LyrJ0OJWS/qUit5eNNvsO+SxGAk28KRa1ieVLeZi9D03NL0+HIAtZ
 tecfqwgl3Y72UpLUyt+r7LQhcI/XR5IUUaD4C/chB4Vq2QkDKO7Q8+2HJOrFIjiVli4lU+Sf
 OBp64m//Y1xys++Z4ODoKh7tkh5DxiO3QBHG7bHK0CSQsJ6XUvPVYubAuy1XfSDzSeSBl//C
 v78Fclb+gi9GWidSTG/4hsEzd1fY5XwCZG/XJJY9M/sAAwUH/09Ar9W2U1Qm+DwZeP2ii3Ou
 14Z9VlVVPhcEmR/AFykL9dw/OV2O/7cdi52+l00reUu6Nd4Dl8s4f5n8b1YFzmkVVIyhwjvU
 jxtPyUgDOt6DRa+RaDlXZZmxQyWcMv2anAgYWGVszeB8Myzsw8y7xhBEVV1S+1KloCzw4V8Z
 DSJrcsZlyMDoiTb7FyqxwQnM0f6qHxWbmOOnbzJmBqpNpFuDcz/4xNsymJylm6oXiucHQBAP
 Xb/cE1YNHpuaH4SRhIxwQilCYEznWowQphNAbJtEKOmcocY7EbSt8VjXTzmYENkIfkrHRyXQ
 dUm5AoL51XZljkCqNwrADGkTvkwsWSvCSQQYEQIACQUCWTecRAIbDAAKCRCgNK6L+3cgfuef
 AJ9wlZQNQUp0KwEf8Tl37RmcxCL4bQCcC5alCSMzUBJ5DBIcR4BY+CyQFAs=

--------------0w9grBOL0NgMas9zMgwxtC0s
Content-Type: multipart/mixed; boundary="------------RJijIswkMFZcKZZTojs5eDen"

--------------RJijIswkMFZcKZZTojs5eDen
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 25.06.24 10:33, Jan Beulich wrote:
> On 25.06.2024 10:27, Juergen Gross wrote:
>> In case cpu_schedule_up() is failing to allocate memory for struct
>> sched_resource,
>
> Or (just to mention it again) in case the function isn't called at all
> because some earlier CPU_UP_PREPARE handler fails.

This remark made me look into the notifier implementation.

I think this patch should be reworked significantly:

In cpu_up() the CPU_UP_CANCELED notifier call uses the same
notifier_block as the one used by CPU_UP_PREPARE before. This means
that only the handlers which didn't fail for CPU_UP_PREPARE will be
called with CPU_UP_CANCELED.

So there is no such case as "the function isn't called at all because
some earlier CPU_UP_PREPARE handler fails".

And a failure of cpu_schedule_up() needs to undo all externally visible
modifications instead of hoping that the CPU_UP_CANCELED notifier will
do that.

So: V2 of the patch will be needed.

Juergen
--------------RJijIswkMFZcKZZTojs5eDen
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R3/CwO0EGAEIACAWIQSFEmdy6PYElKXQl/ew3p3W
KL8TLwUCWt3w0AIbAgCBCRCw3p3WKL8TL3YgBBkWCAAdFiEEUy2wekH2OPMeOLge
gFxhu0/YY74FAlrd8NAACgkQgFxhu0/YY75NiwD/fQf/RXpyv9ZX4n8UJrKDq422
bcwkujisT6jix2mOOwYBAKiip9+mAD6W5NPXdhk1XraECcIspcf2ff5kCAlG0DIN
aTUH/RIwNWzXDG58yQoLdD/UPcFgi8GWtNUp0Fhc/GeBxGipXYnvuWxwS+Qs1Qay
7/Nbal/v4/eZZaWs8wl2VtrHTS96/IF6q2o0qMey0dq2AxnZbQIULiEndgR625EF
RFg+IbO4ldSkB3trsF2ypYLij4ZObm2casLIP7iB8NKmQ5PndL8Y07TtiQ+Sb/wn
g4GgV+BJoKdDWLPCAlCMilwbZ88Ijb+HF/aipc9hsqvW/hnXC2GajJSAY3Qs9Mib
4Hm91jzbAjmp7243pQ4bJMfYHemFFBRaoLC7ayqQjcsttN2ufINlqLFPZPR/i3IX
kt+z4drzFUyEjLM1vVvIMjkUoJs=3D
=3DeeAB
-----END PGP PUBLIC KEY BLOCK-----

--------------RJijIswkMFZcKZZTojs5eDen--

--------------0w9grBOL0NgMas9zMgwxtC0s--

--------------0otx1dfREOmaFseE7qL6NYz5
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature.asc"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmZ6mRsFAwAAAAAACgkQsN6d1ii/Ey8X
VAf9HzSnLPdAleUcxYcIVWJ8cPzSxxZ9DE8BC9e95km8b2hJY4+dh2ZQJSdb+k05bo8r7LZ70PL6
l89Jf4xP0hyM34wC7sHDOzsXZIkyuvnNc4Ror5EcrV57oOJfUHnkH3Wr3oSm3ivjuYa7U49HlKnB
uwKtZaYoDwIVWdGUBR1Jf63BxtUfsW8BiC0iE7JpwaZmQAm+VWMvh0XmswXRG6MGtZbZEi2FBWF6
XBAsI5Mm77YUWi4qN0ShDORb5vJjBtMqP8tborzoxGlGW2KWrmZjo3oNhJpXxesii+2PIQynwQSG
qafYrmkXrPhWDT6K2bZMrV09AIJDwVjasog3ToSzYQ==
=xfnZ
-----END PGP SIGNATURE-----

--------------0otx1dfREOmaFseE7qL6NYz5--


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:38:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:38:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747590.1155034 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3Yi-0004Ms-1i; Tue, 25 Jun 2024 10:37:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747590.1155034; Tue, 25 Jun 2024 10:37:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3Yh-0004Ml-VG; Tue, 25 Jun 2024 10:37:55 +0000
Received: by outflank-mailman (input) for mailman id 747590;
 Tue, 25 Jun 2024 10:37:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM3Yh-0004Mb-8k; Tue, 25 Jun 2024 10:37:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM3Yg-0002Jg-RV; Tue, 25 Jun 2024 10:37:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM3Yg-0007Ct-EV; Tue, 25 Jun 2024 10:37:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sM3Yg-00012i-E4; Tue, 25 Jun 2024 10:37:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GeZOAqVvAvkITK3IS1x97bOt+YmQaQ/6l2OQr6y8N30=; b=HNPQql+6aOvcpKFhM5zqAf2WKk
	QQHSKy5J8wx726078vKrjPJJKjo4tp4jGdpIGZPrMfmk7pSQgaBBF9uyY+wK3fbJXX+jpWLXwTHiw
	tQvX5WBPMJH0UBKzLjUiSPoO73v+Id3FMbdX4G5yWpefJqC62Zl8k40jvF2bZ3P47/qM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186474-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186474: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:<job status>:broken:regression
    linux-linus:test-armhf-armhf-xl-arndale:host-install(5):broken:regression
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=55027e689933ba2e64f3d245fb1ff185b3e7fc81
X-Osstest-Versions-That:
    linux=7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 10:37:54 +0000

flight 186474 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186474/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale     <job status>                 broken
 test-armhf-armhf-xl-arndale   5 host-install(5)        broken REGR. vs. 186462
 test-armhf-armhf-libvirt      8 xen-boot                 fail REGR. vs. 186462
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 186462

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1   8 xen-boot                     fail  like 186462
 test-armhf-armhf-xl-qcow2     8 xen-boot                     fail  like 186462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                55027e689933ba2e64f3d245fb1ff185b3e7fc81
baseline version:
 linux                7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748

Last test of basis   186462  2024-06-23 16:10:07 Z    1 days
Failing since        186464  2024-06-24 01:42:08 Z    1 days    4 attempts
Testing same since   186474  2024-06-25 02:02:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Hagar Hemdan <hagarhem@amazon.com>
  Huang-Huang Bao <i@eh5.me>
  Johan Hovold <johan+linaro@kernel.org>
  John Keeping <jkeeping@inmusicbrands.com>
  Jonathan Denose <jdenose@google.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luke D. Jones <luke@ljones.dev>
  Stefan Wahren <wahrenst@gmx.net>
  Thomas Richard <thomas.richard@bootlin.com>
  Tobias Jakobi <tjakobi@math.uni-bielefeld.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  broken  
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl-arndale broken
broken-step test-armhf-armhf-xl-arndale host-install(5)

Not pushing.

(No revision log; it would be 386 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 10:44:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 10:44:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747600.1155043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3f6-0005wc-RA; Tue, 25 Jun 2024 10:44:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747600.1155043; Tue, 25 Jun 2024 10:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM3f6-0005wV-Nq; Tue, 25 Jun 2024 10:44:32 +0000
Received: by outflank-mailman (input) for mailman id 747600;
 Tue, 25 Jun 2024 10:44:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sM3f5-0005wP-76
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 10:44:31 +0000
Received: from mail-yw1-x1131.google.com (mail-yw1-x1131.google.com
 [2607:f8b0:4864:20::1131])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e5215b74-32df-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 12:44:28 +0200 (CEST)
Received: by mail-yw1-x1131.google.com with SMTP id
 00721157ae682-632597a42b8so47889297b3.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 03:44:28 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b51ed16142sm43441276d6.31.2024.06.25.03.44.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 03:44:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5215b74-32df-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719312267; x=1719917067; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=WgWtGWDisyl5UE7KYOMuG0SxuCrtt+a2R5+E4pVs6MU=;
        b=al8p1iFZenAQ2idLqttmtIlmKUqsJOM0TvkqCSvyvkeX+riOjkgAiRsFUrsrdrSZiN
         F678UCXLFtwkDCuRR0UJYsNfPKt9k5MVIbAwwnqliJ75F41ndr2HHt8h48czI7y2o5Aq
         3iHOKP9LMX+5AFY7++Z4joKRJXsE6hKBYzxe4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719312267; x=1719917067;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WgWtGWDisyl5UE7KYOMuG0SxuCrtt+a2R5+E4pVs6MU=;
        b=m6o23enyBeA7ZUfNGMSsQoJtBI9oZMB5JeN24BgfjkN+uIxXETuc7WDxB3pazdQKJ/
         1hDLei2ay+qlWLBg8i2M01/v0+a1uc28VuK7FS9IYOBOywWb69mY/AzUE54gGIooGBcn
         5jJzds9QQWD2T7QmtfNB2LemEi5zL+mlzOWKrxU/PwZNojLFVNcQhIMCFzYo8MEE6UUJ
         EekeQZEu7ULvi/MxSQTz+iBz7DQBmLkj/RMrc5JFBSpPE/55a/CbihID4CZxInsNKvxa
         ETkbp00hRt67NGjKXUSOrAZKbx6y1C4L191/gV3Sb05Vfwm4tQ1f014c1/aeljwmCfQ0
         dllg==
X-Forwarded-Encrypted: i=1; AJvYcCVBSpEAVBP5ouXhMqFZWJDCfWaH2G7BqCBVvbOmwWnk2BOWkREyrk7CcE25T1yCFrA+PsiyYNFmETv7StkcycYphGiuXlcke3ED5DF7fOs=
X-Gm-Message-State: AOJu0YxE8mc+muoLp2vkZZnlfpW/J6XSVJwlx/leInDMLfIv/4n+06LJ
	FZkzy1/+twdj7f2Es9UxxwkmHT3Bb+AUwb6hA0IJfYmgI8AuG7NagI/Cf3u1MIc=
X-Google-Smtp-Source: AGHT+IGsOxgT0M+W9AGXUG2DiWEBpIAwvFcs1E2V/hS+x7wOIyROwn5T7fJ1gv0fKoFCEe6ZMiqzoA==
X-Received: by 2002:a81:9e12:0:b0:62f:7829:6eaa with SMTP id 00721157ae682-643aa2baaccmr74695697b3.15.1719312266983;
        Tue, 25 Jun 2024 03:44:26 -0700 (PDT)
Message-ID: <3fa398eb-368a-48dd-9324-a46573c0289c@citrix.com>
Date: Tue, 25 Jun 2024 11:43:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] gnttab: fix compat query-size handling
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25/06/2024 8:30 am, Jan Beulich wrote:
> The odd DEFINE_XEN_GUEST_HANDLE(), inconsistent with all other similar
> constructs, should have caught my attention. Turns out it was needed for
> the build to succeed merely because the corresponding #ifndef had a
> typo. That typo in turn broke compat mode guests, by having query-size
> requests of theirs wire into the domain_crash() at the bottom of the
> switch().
>
> Fixes: 8c3bb4d8ce3f ("xen/gnttab: Perform compat/native gnttab_query_size check")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> Looks like set-version is similarly missing in the set of structures
> checked, but I'm pretty sure that we will now want to defer taking care
> of that until after 4.20 has branched.
>
> --- a/xen/common/compat/grant_table.c
> +++ b/xen/common/compat/grant_table.c
> @@ -33,7 +33,6 @@ CHECK_gnttab_unmap_and_replace;
>  #define xen_gnttab_query_size gnttab_query_size
>  CHECK_gnttab_query_size;
>  #undef xen_gnttab_query_size
> -DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_compat_t);
>  
>  DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_compat_t);
>  DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_compat_t);
> @@ -111,7 +110,7 @@ int compat_grant_table_op(
>      CASE(copy);
>  #endif
>  
> -#ifndef CHECK_query_size
> +#ifndef CHECK_gnttab_query_size
>      CASE(query_size);
>  #endif
>  

/sigh - I almost rejected your and Stefano's feedback on v1 on the basis
that it didn't compile, but then I adjusted it to look like the
surrounding logic.  More fool me.

But, this change *cannot* be correct.  The result is:

$ git grep -C3 CHECK_gnttab_query_size
compat/grant_table.c-31-#undef xen_gnttab_unmap_and_replace
compat/grant_table.c-32-
compat/grant_table.c-33-#define xen_gnttab_query_size gnttab_query_size
compat/grant_table.c:34:CHECK_gnttab_query_size;
compat/grant_table.c-35-#undef xen_gnttab_query_size
compat/grant_table.c-36-
compat/grant_table.c-37-DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_compat_t);
--
compat/grant_table.c-110-    CASE(copy);
compat/grant_table.c-111-#endif
compat/grant_table.c-112-
compat/grant_table.c:113:#ifndef CHECK_gnttab_query_size
compat/grant_table.c-114-    CASE(query_size);
compat/grant_table.c-115-#endif
compat/grant_table.c-116-

and the second is dead code because CHECK_gnttab_query_size is defined. 
It shouldn't be there.

So - my v1 was correct, and your and Stefano's feedback on v1 was incorrect.


The problem is, of course, that absolutely none of this is written down,
and none of the logic one needs to read to figure it out is even checked
into the tree.  It's entirely automatic and not even legible in its
intermediate form.

We are going to start with writing down a very simple set of
instructions for how the compat infrastructure works and how it should
be used.  If it's too complicated I will delete bits until it becomes
manageable, because this is completely insane.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 11:13:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 11:13:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747611.1155054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM46f-0001fs-V0; Tue, 25 Jun 2024 11:13:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747611.1155054; Tue, 25 Jun 2024 11:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM46f-0001fl-RE; Tue, 25 Jun 2024 11:13:01 +0000
Received: by outflank-mailman (input) for mailman id 747611;
 Tue, 25 Jun 2024 11:12:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM46d-0001fW-Rx
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 11:12:59 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df686d0a-32e3-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 13:12:57 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719313972306788.7746408859621;
 Tue, 25 Jun 2024 04:12:52 -0700 (PDT)
Received: by mail-yb1-f174.google.com with SMTP id
 3f1490d57ef6-dfdb6122992so4838978276.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 04:12:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df686d0a-32e3-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; t=1719313974; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=hjiGu60/HIbx+SfyZ71bLyRpJwHVZwN5D9vxRDbf7AYcOdSqXd0uwKsGTgcw8ytkE3cATPNzjIZrCInhBaHsmTGAHGaR7hz9wpSLgHAFlc1syog+J9HOW8R13xq1MbBS86hjWgWBdT/sQKod1OlEhcVkNXaahDT5x9SCnUQ4Kos=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719313974; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=NuuKHvEK9cAFBd4G7+gVCQRO4FneDp+4BrLv0+8nQNw=; 
	b=eG8jL6CBmzI7x5/qTgvr+vWkwJRJe4TS0MsvBqf6sJr4nOoPUS4pO7kaoZvd8JjnX9tXv/Si9WP/r7LKzbrR3+8+xkbb1Ny6sVxoGy9ypwE3uu4Nofd1KoVI8UU7kpnpYf2+XEiYU8bhc+MiHcW/kWhXPhu32z4A3g6hUCzgxa4=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719313974;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=NuuKHvEK9cAFBd4G7+gVCQRO4FneDp+4BrLv0+8nQNw=;
	b=N+XvH+pV5m78pJKTYsgtfz2DSLbcbKrEe5GPOWPM9eQCVGjdQBFRR/Hluuh3ssM4
	+SmpaFBJt48iZdnh6+rpcV+R+tB3foRiARk/GCQAXPI/D6EgXHyZPLfq+CCwdG3PXi5
	yXt8BnLfsVox2DhgVr0Aow8o9V0uKtxdXs3JqzRY=
X-Forwarded-Encrypted: i=1; AJvYcCUaBr36KTKDBgWDbkxK538SltZkDMd8eGsGMVL+qe/1NOJog2nU7flyJgpSnNvdFFtyPmpMIkVOvun7di+coKrmE4uzWajuUXo6Q/oKWx0=
X-Gm-Message-State: AOJu0YxR2PopMtPU5TDEgna9UPGWBTzxfIhCbNzOSDGYTmAGMsnR2Oni
	xwrU5Y1RohMXvpTR2YHA9U3p6MvlY5/N8QjwAlv/ywdHm7E+5KctrnnbBPKb/Nu8yD8w4+6lQtZ
	02sF5RnSb8r9J5ZCXntqYx/m579Y=
X-Google-Smtp-Source: AGHT+IEXfwEFzFYYVcCWdrsau4jbEVqc65Z0/2G8SpDqGaIvZcN3msA+wJTEqf24MDOHUC7k5ZIJY58hHQrifVFLWVM=
X-Received: by 2002:a05:6902:243:b0:de4:828:b73c with SMTP id
 3f1490d57ef6-e03010eea58mr6704346276.54.1719313971445; Tue, 25 Jun 2024
 04:12:51 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com> <80d0578d-26c0-4650-9edf-6926c055d415@suse.com>
In-Reply-To: <80d0578d-26c0-4650-9edf-6926c055d415@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 07:12:15 -0400
X-Gmail-Original-Message-ID: <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
Message-ID: <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 24.06.2024 23:23, Tamas K Lengyel wrote:
> > On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> >>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
> >>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
> >>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
> >>>
> >>> +libfuzzer-harness: $(OBJS) cpuid.o
> >>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
> >>
> >> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
> >> tree anywhere.
> >
> > It's used by oss-fuzz, otherwise it's not doing anything.
> >
> >>
> >> I'm further surprised you get away here without wrappers.o.
> >
> > Wrappers.o was actually breaking the build for oss-fuzz at the linking
> > stage. It works just fine without it.
>
> I'm worried here, to be honest. The wrappers serve a pretty important
> role, and I'm having a hard time seeing why they shouldn't be needed
> here when they're needed both for the test and afl harnesses. Could
> you add some more detail on the build issues you encountered?

With wrappers.o included, the build in the oss-fuzz docker
(Ubuntu 20.04 base) fails with:

...
clang -O1 -fno-omit-frame-pointer -gline-tables-only
-Wno-error=enum-constexpr-conversion
-Wno-error=incompatible-function-pointer-types
-Wno-error=int-conversion -Wno-error=deprecated-declarations
-Wno-error=implicit-function-declaration -Wno-error=implicit-int
-DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
-fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
-DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
-Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
-Og -fno-omit-frame-pointer
-D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
-MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
-I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
-D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
-Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
-Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
-Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
-Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
/usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
/usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
__locale_struct*, char const*, ...)':
cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
undefined reference to `__wrap_vsnprintf'
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [Makefile:62: libfuzzer-harness] Error 1
rm x86-emulate.c wrappers.c cpuid.c
make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
ERROR:__main__:Building fuzzers failed.


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 11:16:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 11:16:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747618.1155066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM49j-0002Eg-CQ; Tue, 25 Jun 2024 11:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747618.1155066; Tue, 25 Jun 2024 11:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM49j-0002EZ-9n; Tue, 25 Jun 2024 11:16:11 +0000
Received: by outflank-mailman (input) for mailman id 747618;
 Tue, 25 Jun 2024 11:16:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM49h-0002EQ-TN
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 11:16:09 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5144cb93-32e4-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 13:16:08 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719314162606320.24722921450075;
 Tue, 25 Jun 2024 04:16:02 -0700 (PDT)
Received: by mail-yb1-f174.google.com with SMTP id
 3f1490d57ef6-e02bf947545so5026994276.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 04:16:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5144cb93-32e4-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719314164; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=P9JUAX9ndWMdwnZviLa7pvhGzGxJMF0hp/0Kq2+Fz75AHlHoDFczNWOeFY5uDmqClc80ZFwHdEltfJH1IwC2zoQ9EvwQ+8j8dTYuK2WS6bzNpf8PoEnP6tDqvnzhmdVoKA1oaalIcymmjC/nKaY9sKvQ/EDgBqYngX+d0j4fSb0=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719314164; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=EMeQ/Tt64Xrl1FmiiengZ/Txtmyk2xd+z37sEUGwvIc=; 
	b=NEFYW0n/tugYbqwWGfP0CTDJ2alwHD87JvNydz9MguJ2BbmBh0CuOo3MOfBuip9MpoGdUFVHwymd6Wvhs9/eeZwSKoBZ2i74+E9biBSkd+jbRVQGodiotHaGNNfF49Dwfi7GftQeT248nRgTED9kMRR9qltzIWCFgaN12+JMgso=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719314164;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=EMeQ/Tt64Xrl1FmiiengZ/Txtmyk2xd+z37sEUGwvIc=;
	b=LRPqCBiK995RiT4gBBIgkJABl0VEUxLRRIeZCoQS031XKoEaZTe9jAistk6vcG+Q
	fi8S9yoiW/RI1KK0lLTOkmWb1lw9Eb0aNMu4wU5xXGNcVwR0H0JIz141trkyUHtbxLp
	7vqpy9W61ICgQPDQc5ig/RnGcEXpfRz/3L5tRWi0=
X-Forwarded-Encrypted: i=1; AJvYcCXWv//k3zDkgchpUwBzBCq66MgSXa1bucjtqA8Udp/2zqFQBLlUb9sAOFyyRhcQVXsHTj1QkxPYcJE86mDYUq/OmgFBjbGrjAG3A6pzOB8=
X-Gm-Message-State: AOJu0YzoK8x00a0MbZ8/MLEbtgjZOz6qyW1UvaNrsEKyRJyUM2o+ZFU8
	9Qc54RtASc9+8LOS5DAnMlZ1HnHgReehFPNInmIu1ZfGg5yxk6usnIe0mtzUTg7/SoheWqsAa7e
	vFiXzimLE3SXRF/PrxNEojHfhITg=
X-Google-Smtp-Source: AGHT+IFipnVvxnmDEUt1UvNBPS+HjXr9zBlQIJlz8c2oWixR3fSmH8+DQGg7wH3X+YDnR7/c+lgG/ji+jScKasXXvhw=
X-Received: by 2002:a25:cec1:0:b0:dff:3505:a35e with SMTP id
 3f1490d57ef6-e03040084d4mr7626284276.46.1719314161567; Tue, 25 Jun 2024
 04:16:01 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <20240621191434.5046-2-tamas@tklengyel.com>
 <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
In-Reply-To: <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 07:15:25 -0400
X-Gmail-Original-Message-ID: <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
Message-ID: <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 25, 2024 at 5:17 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> > --- /dev/null
> > +++ b/scripts/oss-fuzz/build.sh
> > @@ -0,0 +1,22 @@
> > +#!/bin/bash -eu
> > +# Copyright 2024 Google LLC
> > +#
> > +# Licensed under the Apache License, Version 2.0 (the "License");
> > +# you may not use this file except in compliance with the License.
> > +# You may obtain a copy of the License at
> > +#
> > +#      http://www.apache.org/licenses/LICENSE-2.0
> > +#
> > +# Unless required by applicable law or agreed to in writing, software
> > +# distributed under the License is distributed on an "AS IS" BASIS,
> > +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> > +# See the License for the specific language governing permissions and
> > +# limitations under the License.
> > +#
> > +################################################################################
>
> I'm a little concerned here, but maybe I shouldn't be. According to what
> I'm reading, the Apache 2.0 license is at least not entirely compatible
> with GPLv2. While apparently the issue is solely with linking in Apache-
> licensed code, I wonder whether us not having a respective file under
> ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has a
> reason possibly excluding the use of such code in the project.
>
> > +cd xen
> > +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
> > +make clang=y -C tools/include
> > +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
> > +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
>
> In addition to what Julien said, I further think that filename / directory
> name are too generic for a file with this pretty specific contents.

I don't really get your concern here?

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 11:19:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 11:19:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747624.1155076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4CP-00031X-P6; Tue, 25 Jun 2024 11:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747624.1155076; Tue, 25 Jun 2024 11:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4CP-00031Q-M7; Tue, 25 Jun 2024 11:18:57 +0000
Received: by outflank-mailman (input) for mailman id 747624;
 Tue, 25 Jun 2024 11:18:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM4CP-00031I-2f
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 11:18:57 +0000
Received: from sender3-op-o17.zoho.com (sender3-op-o17.zoho.com
 [136.143.184.17]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b4ed9e6b-32e4-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 13:18:55 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719314330415888.1733492474157;
 Tue, 25 Jun 2024 04:18:50 -0700 (PDT)
Received: by mail-yb1-f176.google.com with SMTP id
 3f1490d57ef6-dfab5f7e749so4659368276.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 04:18:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4ed9e6b-32e4-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719314331; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=cdFqpRjP71cJyk/h30ljSOlhaa98hdj7IIpRI9JbQMbw/56F1UKll3NN/V2ULIqpRYuRtLx8XgO/9OV5AfeHhq6OL/MAuLWoI5YHbkG3xvoCQk1PdzxLlU5kk8cYsn+XItGsP7ZN74rzElJCOws9w0p1gSckgN+L2mGfty9dFz0=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719314331; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=cTQ9vGL9Zqkl9i/BATJ7M5Dby2blfmOFQOoK+7+j6iI=; 
	b=gJaKURi0bHmq1MnLuFX1beIs0doRZz2SVIjYJVD4OqJOmM0YGpqclsZiz33SEJJ9manWN9EVrQ1UqvCntaLVLHa5oQLiJ8LpdHjUc36e39RM9/TiNuN6pYYoIqFvP0A54PDq3zLdbC/7pdtAozqicZwMSGb5wKE0E1p7KlalCEk=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719314331;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=cTQ9vGL9Zqkl9i/BATJ7M5Dby2blfmOFQOoK+7+j6iI=;
	b=eu+oepVSLrmXAaxuc8CeJt4TSxC2NfpwTA0txut0BiSiL3UfKzirrzjXXxsDuURo
	HyHrMdXWsuYPgdXOZvPCDs7TU98a6gWM9onk7dci6f80S8h1TybbgxzHLVlm7/Cx887
	WS0cM3tBGSG+zxUN/qkw/cHKHs1YnuovGNtXTNGI=
X-Forwarded-Encrypted: i=1; AJvYcCUIk5pL/4y/BzJ11TZG8mZdE4kxFgAoL/hqxZAw3/mBGsCm5leYR4+Wfi76wg+h4DNk9phwfZwGzsxWInB3tc2Keir4rbFIjFkW8F1oQkY=
X-Gm-Message-State: AOJu0Yyr9vYIYCU7mcNPMHJ3CYktVN4FsGJbHn136UYEZ0wuGe6i+jqU
	ZG55wBel0xhfhX0kakNEaabO1QOgFT69gWztUUyMUg4dyqhG3y3W8t8x3jXgM9SlrhISmKAyt0z
	Y1c5qtrcN62FYPdO6iwnl44wOZX4=
X-Google-Smtp-Source: AGHT+IEgCytVU2bi1Xl3KUQPbBSRRDDYOZqldMoSFSBPbfDSZiVrB8KH6sThblT/KKTWP/UB9VPFVMQEhb5J0ULLyEU=
X-Received: by 2002:a25:ac52:0:b0:e02:bc76:3407 with SMTP id
 3f1490d57ef6-e0300f86aa6mr7523191276.34.1719314329514; Tue, 25 Jun 2024
 04:18:49 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <20240621191434.5046-2-tamas@tklengyel.com>
 <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
In-Reply-To: <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 07:18:13 -0400
X-Gmail-Original-Message-ID: <CABfawhm2GuBpQ1sm9nwz1R73SGSxBNZ4Tprd-mYLBkFA5vDLeA@mail.gmail.com>
Message-ID: <CABfawhm2GuBpQ1sm9nwz1R73SGSxBNZ4Tprd-mYLBkFA5vDLeA@mail.gmail.com>
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 25, 2024 at 5:17 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> > --- /dev/null
> > +++ b/scripts/oss-fuzz/build.sh
> > @@ -0,0 +1,22 @@
> > +#!/bin/bash -eu
> > +# Copyright 2024 Google LLC
> > +#
> > +# Licensed under the Apache License, Version 2.0 (the "License");
> > +# you may not use this file except in compliance with the License.
> > +# You may obtain a copy of the License at
> > +#
> > +#      http://www.apache.org/licenses/LICENSE-2.0
> > +#
> > +# Unless required by applicable law or agreed to in writing, software
> > +# distributed under the License is distributed on an "AS IS" BASIS,
> > +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> > +# See the License for the specific language governing permissions and
> > +# limitations under the License.
> > +#
> > +################################################################################
>
> I'm a little concerned here, but maybe I shouldn't be. According to what
> I'm reading, the Apache 2.0 license is at least not entirely compatible
> with GPLv2. While apparently the issue is solely with linking in Apache-
> licensed code, I wonder whether us not having a respective file under
> ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has a
> reason possibly excluding the use of such code in the project.

The script is standalone in a clearly separate folder and doesn't link
against anything else in the project, so there is no license mixing.
Adding an SPDX tag to the file would be fine.

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 11:31:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 11:31:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747636.1155090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4OE-0005tS-VQ; Tue, 25 Jun 2024 11:31:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747636.1155090; Tue, 25 Jun 2024 11:31:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4OE-0005tL-Qp; Tue, 25 Jun 2024 11:31:10 +0000
Received: by outflank-mailman (input) for mailman id 747636;
 Tue, 25 Jun 2024 11:31:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM4OD-0005tC-Ke
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 11:31:09 +0000
Received: from mail-wm1-x32a.google.com (mail-wm1-x32a.google.com
 [2a00:1450:4864:20::32a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68fded9c-32e6-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 13:31:05 +0200 (CEST)
Received: by mail-wm1-x32a.google.com with SMTP id
 5b1f17b1804b1-42108856c33so37898485e9.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 04:31:05 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-70679725094sm4381456b3a.166.2024.06.25.04.31.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 04:31:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68fded9c-32e6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719315065; x=1719919865; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=YEJqKBMI7iM6kUuci281LC4IldotdeomruSBDCskKSc=;
        b=TeLWVIBRISvW5Xv3jpvKKbGs5sy0FckMUb7syvhR++0m3LRsRCeR/WnCUcC5lOl4lg
         vhpCjnlNAbyNG/9iBy0L68v5mswt9eTHHtC8fly18Trm2dbecPJTWE0MiU0e8+eUZWQv
         I+LI5fMSaru49Abfy65iiwgbQjyBo0YokYJwZcuhYr42ZHy/hV83xlz69pOKt5TtZ42q
         QeLp77YAdDSCMy/sgAGpRNGBC1Q8CTbiSHqU3qMgZJlm30xvLEbVNlK2J/gWwdYOIDAb
         POW+5C+A6oebcegxXrpD+wFS3yJOiLXkBgBInZIG0xq99AHumvaOft1LkdMptfXwx8UA
         FSUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719315065; x=1719919865;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=YEJqKBMI7iM6kUuci281LC4IldotdeomruSBDCskKSc=;
        b=RuHQSBHARUb2oqaCMU3pgvMLHiAJFt40rBggAjaTYIT9LBeYw2XJJmEOFYzoGuBh8M
         hR9R09jQyUTuH2G3Jhwwosvzl0EgIcYtnf0wWiDnyO20aaoRKbvlVVj6N662L8NIsceS
         OGo+Io01tvmPmADbkcE4AOw+CBzOw9yBbZlSKXeClQw0CLHBZ/xOP5xjMZoU+SXaudiM
         wglE1UEH85T1Do/Kb+V7IkTCDFPQUBN3+jk9POtOBl/7wPzijhzI3GisglkxQDBoXNEf
         xPP3V8zi2dyeX533V+mw59MbjmaHuVi41ckPTiA4MsSCnonensGJEm+TLauP9UaUNLQ1
         V3sQ==
X-Forwarded-Encrypted: i=1; AJvYcCU3r6LvPyTrnLMne4YaWJdaVRhlTHhKYEo0+ldtA6apdUWbE5vMnuC+rktYdvAb3BmcV43GEo6E4zrH8LT1dHeeTjicNnnhTC+8fvhergc=
X-Gm-Message-State: AOJu0YxTfQ38Q/pAgltVBLSyfG3YCH0WCl2/jcPmGowFtvNaU4mgG+/r
	60v5WC5tdFi5MA/cUQ4fTmdIatrsEzRz9iFymkQgKm49PyYJUVQ8JU48ILHkkg==
X-Google-Smtp-Source: AGHT+IGROgiWgusoe2eAqyqvBFyGTlPjFSd/rXz78FIJHOZhGPW+vj+5+sIPNtJ89r9eHqDYxh9WNg==
X-Received: by 2002:adf:eb06:0:b0:35f:204e:bcf0 with SMTP id ffacd0b85a97d-366e32693femr7015737f8f.13.1719315065170;
        Tue, 25 Jun 2024 04:31:05 -0700 (PDT)
Message-ID: <0d3cd756-49dd-4c3b-b069-13d554133f1f@suse.com>
Date: Tue, 25 Jun 2024 13:30:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19] gnttab: fix compat query-size handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <00bb4998-d0a7-43dc-8d3c-abb3f66661cc@suse.com>
 <3fa398eb-368a-48dd-9324-a46573c0289c@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <3fa398eb-368a-48dd-9324-a46573c0289c@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 12:43, Andrew Cooper wrote:
> On 25/06/2024 8:30 am, Jan Beulich wrote:
>> The odd DEFINE_XEN_GUEST_HANDLE(), inconsistent with all other similar
>> constructs, should have caught my attention. Turns out it was needed for
>> the build to succeed merely because the corresponding #ifndef had a
>> typo. That typo in turn broke compat mode guests, by having query-size
>> requests of theirs wire into the domain_crash() at the bottom of the
>> switch().
>>
>> Fixes: 8c3bb4d8ce3f ("xen/gnttab: Perform compat/native gnttab_query_size check")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> Looks like set-version is similarly missing in the set of structures
>> checked, but I'm pretty sure that we will now want to defer taking care
>> of that until after 4.20 was branched.
>>
>> --- a/xen/common/compat/grant_table.c
>> +++ b/xen/common/compat/grant_table.c
>> @@ -33,7 +33,6 @@ CHECK_gnttab_unmap_and_replace;
>>  #define xen_gnttab_query_size gnttab_query_size
>>  CHECK_gnttab_query_size;
>>  #undef xen_gnttab_query_size
>> -DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_compat_t);
>>  
>>  DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_compat_t);
>>  DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_compat_t);
>> @@ -111,7 +110,7 @@ int compat_grant_table_op(
>>      CASE(copy);
>>  #endif
>>  
>> -#ifndef CHECK_query_size
>> +#ifndef CHECK_gnttab_query_size
>>      CASE(query_size);
>>  #endif
>>  
> 
> /sigh - I almost rejected your and Stefano's feedback on v1 on the basis
> that it didn't compile, but then I adjusted it to look like the
> surrounding logic.  Much fool me.
> 
> But, this change *cannot* be correct.  The result is:
> 
> $ git grep -C3 CHECK_gnttab_query_size
> compat/grant_table.c-31-#undef xen_gnttab_unmap_and_replace
> compat/grant_table.c-32-
> compat/grant_table.c-33-#define xen_gnttab_query_size gnttab_query_size
> compat/grant_table.c:34:CHECK_gnttab_query_size;
> compat/grant_table.c-35-#undef xen_gnttab_query_size
> compat/grant_table.c-36-
> compat/grant_table.c-37-DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_compat_t);
> --
> compat/grant_table.c-110-    CASE(copy);
> compat/grant_table.c-111-#endif
> compat/grant_table.c-112-
> compat/grant_table.c:113:#ifndef CHECK_gnttab_query_size
> compat/grant_table.c-114-    CASE(query_size);
> compat/grant_table.c-115-#endif
> compat/grant_table.c-116-
> 
> and the second is dead code because CHECK_gnttab_query_size is defined. 
> It shouldn't be there.

As said elsewhere, it's there just-in-case (and now consistent with
sibling gnttab-ops). We can certainly evaluate deleting all of those
just-in-case constructs. But we want to retain consistency.

> So - my v1 was correct, and your and Stefano's feedback on v1 was incorrect.

I'm sorry, but maybe this is more of a misunderstanding. I notice I had
"now" in my reply to you, when referring to my reply to Stefano, where I
think I really meant "not". And he never actually replied, afaics.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 11:40:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 11:40:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747643.1155100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4XV-0007su-Ol; Tue, 25 Jun 2024 11:40:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747643.1155100; Tue, 25 Jun 2024 11:40:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4XV-0007sn-L9; Tue, 25 Jun 2024 11:40:45 +0000
Received: by outflank-mailman (input) for mailman id 747643;
 Tue, 25 Jun 2024 11:40:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM4XU-0007sh-PZ
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 11:40:44 +0000
Received: from mail-lj1-x229.google.com (mail-lj1-x229.google.com
 [2a00:1450:4864:20::229])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c0e4ed2d-32e7-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 13:40:42 +0200 (CEST)
Received: by mail-lj1-x229.google.com with SMTP id
 38308e7fff4ca-2ec50a5e230so38148721fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 04:40:42 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7069bd69cb1sm976244b3a.209.2024.06.25.04.40.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 04:40:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0e4ed2d-32e7-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719315642; x=1719920442; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=i261D+yE5EgpRdyvVx4mzBPwCU/LNbVIPEKFOJCK/wM=;
        b=Ifac/9eCjZAgXuaEz+Mr/RaaZH3kz2TnuuZIsPgwCdqVsLn5m3DiLq5B4Zxcq8cBXg
         hzZy2V5HR5n7Zm4c/EKNZl7OkXnD15J4EE8OufITZVQiXVI7jkUkA3nw4j7U+hbirSB0
         dsBuBhVZUEMmFMMlkSkJKla4AycSXGSOFKB66CKs5sFM9X4xs+0IXBqGuhf29LYqNet4
         iUpaFsS953jFABYTlPuikaViAoXyckQTKoD/JBSpW4jDaW6+s2JcBjvyqou2Zfq7Cd2D
         qkF0qdr9uUknDVu6qopwOs2lB1VQOsHm2LNXp17UcZHwfAUxpHC81rTZUZN0H3qAtFLT
         swuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719315642; x=1719920442;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=i261D+yE5EgpRdyvVx4mzBPwCU/LNbVIPEKFOJCK/wM=;
        b=DYwC3Ekj8ufMO2PXUpMiBaX859zOXalTugdgrCxhCCglrGqJ87f6a0EgixDSx2skp0
         3+5L0CotKPf1WOoujdID8mO141zi196571vpxysnq6wfNe7WrsG9CWnWfzyRzj4eZjEr
         HLzYl32iZy9op7ffiSaZ3W0pWvJA5H1qGtB/cYIuWomf4quCUBTxdQlJhwVR0v04/yY8
         fLH4iDM6GX+vhtgc9RIFvI9RYfs2G8xH9CCzUGIld6d8snNx+PLJ8WUVtkVOl1cHv4LX
         cxpgEU/R02j7o+d9ScScRGxnPyeZ2ZFkwFnf6pT7cmi95TNtV2VyaT+EHyyQBeT4IRd0
         cN6g==
X-Forwarded-Encrypted: i=1; AJvYcCUq/1gm+VpbWt26xwIamYiJeIiotcoZcfDhO90d6vZRsJs0LOexs33zQ14y9uenn5McUFib9Q320rQZpIQ4OtyvLm/z+MiqDCxfej8oUq8=
X-Gm-Message-State: AOJu0YxAUWVmRk940k79yzJWU1YoLkEAroDB+6SZ0S+aBLyk6e6mY8Ff
	16e2W9RkE7if/5VrXxGu4qXd9wz0YFUbStNmZ66y3D10KduKwyAX603Y+DWpUw==
X-Google-Smtp-Source: AGHT+IH3ch11DZW7w9VwEslSanz9icn01GH+GBa+5owcbfnVev7k17/E8Z+D/NKo1rP+zH0gJZCDfg==
X-Received: by 2002:a2e:9b96:0:b0:2ec:51b5:27c8 with SMTP id 38308e7fff4ca-2ec5b2dd919mr43863441fa.32.1719315642209;
        Tue, 25 Jun 2024 04:40:42 -0700 (PDT)
Message-ID: <b9b84f10-6d41-48d9-996d-069408753e28@suse.com>
Date: Tue, 25 Jun 2024 13:40:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <20240621191434.5046-2-tamas@tklengyel.com>
 <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
 <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 13:15, Tamas K Lengyel wrote:
> On Tue, Jun 25, 2024 at 5:17 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>> --- /dev/null
>>> +++ b/scripts/oss-fuzz/build.sh
>>> @@ -0,0 +1,22 @@
>>> +#!/bin/bash -eu
>>> +# Copyright 2024 Google LLC
>>> +#
>>> +# Licensed under the Apache License, Version 2.0 (the "License");
>>> +# you may not use this file except in compliance with the License.
>>> +# You may obtain a copy of the License at
>>> +#
>>> +#      http://www.apache.org/licenses/LICENSE-2.0
>>> +#
>>> +# Unless required by applicable law or agreed to in writing, software
>>> +# distributed under the License is distributed on an "AS IS" BASIS,
>>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>> +# See the License for the specific language governing permissions and
>>> +# limitations under the License.
>>> +#
>>> +################################################################################
>>
>> I'm a little concerned here, but maybe I shouldn't be. According to what
>> I'm reading, the Apache 2.0 license is at least not entirely compatible
>> with GPLv2. While apparently the issue is solely with linking in Apache-
>> licensed code, I wonder whether us not having a respective file under
>> ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has a
>> reason possibly excluding the use of such code in the project.
>>
>>> +cd xen
>>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
>>> +make clang=y -C tools/include
>>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
>>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
>>
>> In addition to what Julien said, I further think that filename / directory
>> name are too generic for a file with this pretty specific contents.
> 
> I don't really get your concern here?

What is built here is specifically a fuzzing binary for the x86 emulator.
Yet neither the directory name nor the file name contains x86 or (at
least) emul.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 11:52:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 11:52:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747650.1155110 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4j4-0001Lt-Li; Tue, 25 Jun 2024 11:52:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747650.1155110; Tue, 25 Jun 2024 11:52:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM4j4-0001Lm-Iu; Tue, 25 Jun 2024 11:52:42 +0000
Received: by outflank-mailman (input) for mailman id 747650;
 Tue, 25 Jun 2024 11:52:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM4j3-0001Kv-Fu
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 11:52:41 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a841cb5-32e9-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 13:52:36 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ec52fbb50bso33101711fa.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 04:52:36 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819a7ec09sm8509128a91.16.2024.06.25.04.52.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 04:52:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a841cb5-32e9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719316356; x=1719921156; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=E/hWkwz6wMC7uscVWAZAK3nOTVUYHFC6PfN6JU3Fxzk=;
        b=BY4CI9t10DTtin9Uv0oUw72PUlyOGXdoTlAv1UcNdh1MiVy+9S4ZpYcu22KcDuG0CE
         z178+UeiHj0eymCqjqoZHkh2iglOQrqBIRZ7TWgFoWHL2k7+7H2BndsMaWbzQVgUZeTZ
         qnCmfvCmqF3E6dE2Ov30tz3jaQ3TpC9JCItgUmqnan+TdNo1viq7WrCA3XpcB3zUgJY3
         8P5hHn6PI2S9mOGrRuAXgtS1BoAsGOeFE3rQLtjubjuklJ01XSGSGjKsazGnT43Pkt6M
         rZKPka242wjkMwgFXyhqqrj3XrRq+DYVUtJAwbJ7gPUHAO6otAk7kDdq6SS8lYkzUoWs
         FfFQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719316356; x=1719921156;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=E/hWkwz6wMC7uscVWAZAK3nOTVUYHFC6PfN6JU3Fxzk=;
        b=cl7WpL7nb66QGfVrBuIugJl60D1N5AQy9mG7v8xlIzAyMdWJbRrto1cqliCh2IAaz3
         DIaTKSNXQjmxy6QqQbVvwM9b3h8pWLfs4Sk4NcewA6FA75JchQAkrIjEwcJWNgHaYCNH
         99bCTdBkHtac66puTPBwYMkxL21H0HN3PWN9JBYBGagqzUTcaHgq0AE/3ZHG0PRaJeTB
         aWDe4xg91vBtkDO44IeUlKt5xKtmOfS1cgR2tWu5mSNEEFEZ7oOJqVPMFclmsSTn5+B1
         ERsVrC52PqWIfUYpzH3Gu/Ez2odmM+9uMWFBz/gs0rBNuX2Lam1wGRVbYBS8jtCEzdsC
         m77w==
X-Forwarded-Encrypted: i=1; AJvYcCUiFtFZYVo/xHvH9Me+0BM3YK6gL0a71JEE9ATmAbPRgtFTXulKt2a367HfI3eOm2h9tAlMMFFhP4GbuJB6caHW27GzHn4aseMFvdUIA9E=
X-Gm-Message-State: AOJu0YwselHtGRDglGWmXeIJ5GfKk78go6DyKicZ2EGyp9+ktisdu9/E
	h/f2qD3NIWFwFL+YA/7SDOv7Ttof3okGNR4WsTG+YNIrj0F4UW3EDvRFCBd5tQ==
X-Google-Smtp-Source: AGHT+IEPnOjEXv1XztpKCN6ASu0UsjtmOJCKzkqYn6/9m2TT5U1D+35oOh78/+tOZLrD9XhRsLxjcA==
X-Received: by 2002:a2e:9ed7:0:b0:2ec:58e8:d7a6 with SMTP id 38308e7fff4ca-2ec5b36b765mr40760541fa.5.1719316353742;
        Tue, 25 Jun 2024 04:52:33 -0700 (PDT)
Message-ID: <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com>
Date: Tue, 25 Jun 2024 13:52:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
 <80d0578d-26c0-4650-9edf-6926c055d415@suse.com>
 <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 13:12, Tamas K Lengyel wrote:
> On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 24.06.2024 23:23, Tamas K Lengyel wrote:
>>> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
>>>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
>>>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
>>>>>
>>>>> +libfuzzer-harness: $(OBJS) cpuid.o
>>>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
>>>>
>>>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
>>>> tree anywhere.
>>>
>>> It's used by oss-fuzz, otherwise it's not doing anything.
>>>
>>>>
>>>> I'm further surprised you get away here without wrappers.o.
>>>
>>> Wrappers.o was actually breaking the build for oss-fuzz at the linking
>>> stage. It works just fine without it.
>>
>> I'm worried here, to be honest. The wrappers serve a pretty important
>> role, and I'm having a hard time seeing why they shouldn't be needed
>> here when they're needed both for the test and afl harnesses. Could
>> you add some more detail on the build issues you encountered?
> 
> With wrappers.o included doing the build in the oss-fuzz docker
> (ubuntu 20.04 base) fails with:
> 
> ...
> clang -O1 -fno-omit-frame-pointer -gline-tables-only
> -Wno-error=enum-constexpr-conversion
> -Wno-error=incompatible-function-pointer-types
> -Wno-error=int-conversion -Wno-error=deprecated-declarations
> -Wno-error=implicit-function-declaration -Wno-error=implicit-int
> -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
> -fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
> -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
> -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
> -Og -fno-omit-frame-pointer
> -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
> -MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
> -I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
> -D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
> -Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
> -Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
> -Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
> -Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
> x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
> x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
> /usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
> /usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
> in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
> __locale_struct*, char const*, ...)':
> cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
> undefined reference to `__wrap_vsnprintf'
> clang: error: linker command failed with exit code 1 (use -v to see invocation)
> make: *** [Makefile:62: libfuzzer-harness] Error 1
> rm x86-emulate.c wrappers.c cpuid.c
> make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
> ERROR:__main__:Building fuzzers failed.

Hmm, yes, that means we'll need an actual vsnprintf() wrapper, not just a
declaration of one.
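[Editor's sketch, following the ld --wrap convention the harness Makefile already uses for the other wrapped functions; at a real link with -Wl,--wrap=vsnprintf the linker supplies __real_vsnprintf, which is defined locally below only so the sketch is self-contained.]

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for the symbol ld provides under -Wl,--wrap=vsnprintf; defined
 * here only so this sketch compiles and runs on its own. */
static int __real_vsnprintf(char *buf, size_t size, const char *fmt, va_list ap)
{
    return vsnprintf(buf, size, fmt, ap);
}

/* An actual wrapper body, as opposed to a bare declaration: forward to the
 * real implementation (a harness could instead suppress or record output). */
int __wrap_vsnprintf(char *buf, size_t size, const char *fmt, va_list ap)
{
    return __real_vsnprintf(buf, size, fmt, ap);
}

/* Variadic helper to exercise the wrapper directly. */
int wrapped_snprintf(char *buf, size_t size, const char *fmt, ...)
{
    va_list ap;
    int ret;

    va_start(ap, fmt);
    ret = __wrap_vsnprintf(buf, size, fmt, ap);
    va_end(ap);
    return ret;
}
```

Presumably wrappers.c only declares __wrap_vsnprintf, which would explain the undefined reference above once the vsnprintf call inside libclang_rt.fuzzer.a gets redirected by -Wl,--wrap=vsnprintf.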

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 12:40:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 12:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747664.1155120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM5Sj-0007pb-4D; Tue, 25 Jun 2024 12:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747664.1155120; Tue, 25 Jun 2024 12:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM5Sj-0007pU-1S; Tue, 25 Jun 2024 12:39:53 +0000
Received: by outflank-mailman (input) for mailman id 747664;
 Tue, 25 Jun 2024 12:39:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM5Si-0007pO-JH
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 12:39:52 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 029c10d9-32f0-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 14:39:50 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719319184952149.86804190776775;
 Tue, 25 Jun 2024 05:39:44 -0700 (PDT)
Received: by mail-yw1-f169.google.com with SMTP id
 00721157ae682-6439f6cf79dso22080787b3.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 05:39:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 029c10d9-32f0-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; t=1719319186; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=Pg8SSFbJfx6Bdr2e5+icRvOVzAkkhF6h5LxTB/F0TeGt8aPXbvfHoVQPERBP+fstWjLv1QyHocrHTqKfumIu8AAJEnTW+CI8cWuj4fElXIk5ugMiYiPNmU/j1EYl+BoO75q1a+U+E3GAS2/xtdrIb1Nq4MLrtx9voC9W4KlShUQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719319186; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=yeXTEZwZUnpjXTQTWeqpSaG9Wl3HecQDKEKY2ZuA7Yk=; 
	b=j9VvG8SPtJJXU9VzgUplsh6WuqobRnCGw5ZHLef2a4FxN3uUgDYv6tfU7Evmg/pfJoTez4L0RSgcVbYvCW7/BPV+ICEApFrI/ghRerHeqEND/2wKqD+iL7K2b+KyM+mh2B5F8bTtfgeoiphmxB8HUsRXZu5x+AkQM2nikWPCm3s=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719319186;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=yeXTEZwZUnpjXTQTWeqpSaG9Wl3HecQDKEKY2ZuA7Yk=;
	b=HSLrRu9+TwaapdM6/CzCUtb+QGWbCOosC5J/0tDNsZyS6aaJvNLJnC/D0BnXIATH
	6Jdffjf0+q0iRiPaV5uyZnPL3j19EWzZ5ULMHXHk3rgOlVrfVvF0JtMSQrsWasKGNi1
	qfiIQZ+sxgOCNfbjBCHNIHkzocmvCFjZfBgrJHNY=
X-Forwarded-Encrypted: i=1; AJvYcCXeSfA/z98MMbvQysQ5bQ/MKPDxoT3SiXslQNRT8nIEWe8v4CsF9H2GVIEpJPpW3DaK+CEOa1hdpBfpZ2NW1ndcK/Sx/J5jx2biC+r+a60=
X-Gm-Message-State: AOJu0Yy9KxXlSen9Pe82Yn9LJNS9Z+LPJXCdwBLbgHdHCqWAK2mPU/tP
	yCoH2S+px4aAP/+fn6vIk6HK6ApFSNPTICwiT8YV9AqW5SxKyjhKAsNoXjP11v5udHU8ysmBrrf
	SPA0GjPyGoQiWXanY2YeOeKmzY4Q=
X-Google-Smtp-Source: AGHT+IFcV3BnSPBOfxq/OR32QZkQSX8qCLUldTzSBF4CMwbgB3MTbKLil63LAeOsUlhkcueTi1ii2Rm3aDMzE09RhR8=
X-Received: by 2002:a25:80c8:0:b0:e02:fa22:5031 with SMTP id
 3f1490d57ef6-e0303ed4c44mr7152426276.10.1719319184049; Tue, 25 Jun 2024
 05:39:44 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <20240621191434.5046-2-tamas@tklengyel.com>
 <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com> <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
 <b9b84f10-6d41-48d9-996d-069408753e28@suse.com>
In-Reply-To: <b9b84f10-6d41-48d9-996d-069408753e28@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 08:39:07 -0400
X-Gmail-Original-Message-ID: <CABfawhkJ0t8FenCWbupGcHD-ZhorbWN7ZjMQVm-jeg_zA1g5iQ@mail.gmail.com>
Message-ID: <CABfawhkJ0t8FenCWbupGcHD-ZhorbWN7ZjMQVm-jeg_zA1g5iQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, Jun 25, 2024 at 7:40 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.06.2024 13:15, Tamas K Lengyel wrote:
> > On Tue, Jun 25, 2024 at 5:17 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> >>> --- /dev/null
> >>> +++ b/scripts/oss-fuzz/build.sh
> >>> @@ -0,0 +1,22 @@
> >>> +#!/bin/bash -eu
> >>> +# Copyright 2024 Google LLC
> >>> +#
> >>> +# Licensed under the Apache License, Version 2.0 (the "License");
> >>> +# you may not use this file except in compliance with the License.
> >>> +# You may obtain a copy of the License at
> >>> +#
> >>> +#      http://www.apache.org/licenses/LICENSE-2.0
> >>> +#
> >>> +# Unless required by applicable law or agreed to in writing, software
> >>> +# distributed under the License is distributed on an "AS IS" BASIS,
> >>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> >>> +# See the License for the specific language governing permissions and
> >>> +# limitations under the License.
> >>> +#
> >>> +################################################################################
> >>
> >> I'm a little concerned here, but maybe I shouldn't be. According to what
> >> I'm reading, the Apache 2.0 license is at least not entirely compatible
> >> with GPLv2. While apparently the issue is solely with linking in Apache-
> >> licensed code, I wonder whether us not having a respective file under
> >> ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has a
> >> reason possibly excluding the use of such code in the project.
> >>
> >>> +cd xen
> >>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
> >>> +make clang=y -C tools/include
> >>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
> >>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
> >>
> >> In addition to what Julien said, I further think that filename / directory
> >> name are too generic for a file with this pretty specific contents.
> >
> > I don't really get your concern here?
>
> What is built here is specifically a fuzzing binary for the x86 emulator.
> Yet neither the directory name nor the file name contains x86 or (at
> least) emul.

Because this build script is not necessarily restricted to building only
this one harness in the future. Right now that is the only component with
a suitable libfuzzer harness, but the point of having this build script
here is to make it easy to add further fuzzing binaries without needing
to open PRs on the oss-fuzz repo, which, as I understand it, no one in
the Xen community was willing to do due to the CLA. Now that the
integration is going to live in oss-fuzz, all you have to do in the
future is add more targets to this script to get them fuzzed. Anything
that is compiled with libfuzzer and copied to $OUT will be picked up by
oss-fuzz automatically. Does that make sense?
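[Editor's note: to make "compiled with libfuzzer" concrete, such a binary is any program exposing libFuzzer's standard entry point; the minimal hypothetical sketch below is not Xen's actual fuzz-emul.c.]

```c
#include <stddef.h>
#include <stdint.h>

/* libFuzzer -- and hence oss-fuzz's LIB_FUZZING_ENGINE driver -- calls this
 * entry point repeatedly with generated inputs; returning 0 tells the
 * engine the input was consumed. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    if (size == 0)
        return 0;

    /* A real harness would hand `data` to the component under test here,
     * e.g. the x86 instruction emulator. */
    return 0;
}
```

Anything built with -fsanitize=fuzzer around such an entry point and copied into $OUT becomes a runnable target from oss-fuzz's point of view.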

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 12:41:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 12:41:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747669.1155130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM5Ty-0000m2-ED; Tue, 25 Jun 2024 12:41:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747669.1155130; Tue, 25 Jun 2024 12:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM5Ty-0000lv-Au; Tue, 25 Jun 2024 12:41:10 +0000
Received: by outflank-mailman (input) for mailman id 747669;
 Tue, 25 Jun 2024 12:41:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM5Tx-0000lm-Hr
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 12:41:09 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3069942e-32f0-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 14:41:07 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719319263050855.3783825064857;
 Tue, 25 Jun 2024 05:41:03 -0700 (PDT)
Received: by mail-yb1-f178.google.com with SMTP id
 3f1490d57ef6-dff02b8a956so4864598276.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 05:41:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3069942e-32f0-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719319264; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=X30u7fWEcZ59H6yjcE8ITmoKpSAc1tM5xPvJkWjxdTRkWQlnlvuZZecHNoLuX0Jq6Usu0zwkPED7qj82SbqR8azU4OHyJWgRsUArZiBsRpODZlkrT9LsnQvS7bcWh1+9jcvKk6A1l7SZYkNBEVUTn7hVtifRKl2IQg5QxYjC9EM=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719319264; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=MREDzWph2h7r+wcFTkEyKLvCgSc2jrZ7cPUdF19aduU=; 
	b=eeJTpRvfe+MwsvqozTHW+309mqM3qAc1M2fg7mIq+1YZ36Ow5qnloYYp1pYOrJI8sTb/TwGJm5RyVWkoFFy1m/YlXhQpCxV7JR/Ir9M04hY+F4E7QCqaEsAeBKtwK3hmxYboANb1viNC2+EAL9xn+9no5i+tPAMIf5ohiGlTvww=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719319264;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=MREDzWph2h7r+wcFTkEyKLvCgSc2jrZ7cPUdF19aduU=;
	b=VwixQ1UgA3E5l3NeBfSjBEhvVF/vTSPlAxUGcfN63ur1GdtZPAQ5oSkxFfALmi6B
	h9gDnOcZVrc420ubZWBGoJEROe+eQrjXt5PPNFEteNGUrMljq/fzyZyzEQZR9k8A4XO
	Eo9H4fhD+e679sYyCeP9vbKw3U153siSvIxtCK/Q=
X-Forwarded-Encrypted: i=1; AJvYcCUynJMdd5DFP7upbUvMFu+SxcGQX5+5PyjTzvrovfwg0CXKEn8vfFVebTNzGhlb0ErCCKDHHzWquuuISeZ6S0QsSqN7LPJGiAfY046bS6c=
X-Gm-Message-State: AOJu0YzZauRKq4tncwVDz8g5VJ67MYzVRvMEY8ojeNFnsCgkNP57lSdc
	F0/hTdyrIslRfevza77Pzf7S6ffAbVNNRXb2+3+O6VpVO6a/WgltLp6OQmiIhffXYuxQtuabnQU
	Mkk7pu1S1fShsc6vBCOWx/f+T+Qs=
X-Google-Smtp-Source: AGHT+IGRgUPPyoKO61IxV3jQKp52JU57x63HtnQLTcLxBLLg61JmN6ewphFjsQcPDo+/03FL0bdWW6doIuVjt3rsE1Y=
X-Received: by 2002:a25:86c8:0:b0:e02:e518:c05a with SMTP id
 3f1490d57ef6-e0300f98461mr6993208276.35.1719319262161; Tue, 25 Jun 2024
 05:41:02 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
 <80d0578d-26c0-4650-9edf-6926c055d415@suse.com> <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
 <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com>
In-Reply-To: <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 08:40:25 -0400
X-Gmail-Original-Message-ID: <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
Message-ID: <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 25, 2024 at 7:52 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.06.2024 13:12, Tamas K Lengyel wrote:
> > On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 24.06.2024 23:23, Tamas K Lengyel wrote:
> >>> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> >>>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
> >>>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
> >>>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
> >>>>>
> >>>>> +libfuzzer-harness: $(OBJS) cpuid.o
> >>>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
> >>>>
> >>>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
> >>>> tree anywhere.
> >>>
> >>> It's used by oss-fuzz, otherwise it's not doing anything.
> >>>
> >>>>
> >>>> I'm further surprised you get away here without wrappers.o.
> >>>
> >>> Wrappers.o was actually breaking the build for oss-fuzz at the linking
> >>> stage. It works just fine without it.
> >>
> >> I'm worried here, to be honest. The wrappers serve a pretty important
> >> role, and I'm having a hard time seeing why they shouldn't be needed
> >> here when they're needed both for the test and afl harnesses. Could
> >> you add some more detail on the build issues you encountered?
> >
> > With wrappers.o included doing the build in the oss-fuzz docker
> > (ubuntu 20.04 base) fails with:
> >
> > ...
> > clang -O1 -fno-omit-frame-pointer -gline-tables-only
> > -Wno-error=enum-constexpr-conversion
> > -Wno-error=incompatible-function-pointer-types
> > -Wno-error=int-conversion -Wno-error=deprecated-declarations
> > -Wno-error=implicit-function-declaration -Wno-error=implicit-int
> > -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
> > -fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
> > -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
> > -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
> > -Og -fno-omit-frame-pointer
> > -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
> > -MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
> > -I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
> > -D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
> > -Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
> > -Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
> > -Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
> > -Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
> > x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
> > x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
> > /usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
> > /usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
> > in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
> > __locale_struct*, char const*, ...)':
> > cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
> > undefined reference to `__wrap_vsnprintf'
> > clang: error: linker command failed with exit code 1 (use -v to see invocation)
> > make: *** [Makefile:62: libfuzzer-harness] Error 1
> > rm x86-emulate.c wrappers.c cpuid.c
> > make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
> > ERROR:__main__:Building fuzzers failed.
>
> Hmm, yes, means we'll need an actual vsnprintf() wrapper, not just a
> declaration thereof.

I don't really get what this wrapper accomplishes and as I said,
fuzzing works with oss-fuzz just fine without it.

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:16:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:16:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747680.1155140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM61Z-0005vS-SR; Tue, 25 Jun 2024 13:15:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747680.1155140; Tue, 25 Jun 2024 13:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM61Z-0005vL-P5; Tue, 25 Jun 2024 13:15:53 +0000
Received: by outflank-mailman (input) for mailman id 747680;
 Tue, 25 Jun 2024 13:15:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM61Y-0005vF-FY
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:15:52 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0b830e23-32f5-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:15:51 +0200 (CEST)
Received: by mail-lf1-x12d.google.com with SMTP id
 2adb3069b0e04-52ce01403f6so2805716e87.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:15:51 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb7cf885sm80750655ad.224.2024.06.25.06.15.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 06:15:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b830e23-32f5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719321351; x=1719926151; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=vxOLbVpoesbfDqzyLnIAYCzNvJAf1U2LRJ29k4ch+Ow=;
        b=UWtJhoO2Hqe/+kEbHRuafdqYvJQumf7MVnHNfJvq4emzxBXk96+u8jVEKxaFaSN8jn
         FpxCbUIXbhyzSGos4cxzk4IvxzQT6GRc8km0Dvn0PptaQuVvAHFoogWZh067CKeSmdqe
         qGm5COU9QjpRV9jspcCvT95J6EKe54rLJK3TbzASB84Ogc+B8Nhsj0zQpsmavgNZCLtw
         HElUKgoO/PO5RZKWX5xTVzJ7rQ/OCbjI1ex8fXv3YSdHxrQgXeGsN01RaaylgU9eL8G3
         vHm/d/hUCGs049ePV2SjS10Ll0IHaJ5yGISXFqZZmrgPzx/DyV6t+5/I4hbJOstAIadI
         MfAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719321351; x=1719926151;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=vxOLbVpoesbfDqzyLnIAYCzNvJAf1U2LRJ29k4ch+Ow=;
        b=OcdsMXnyL1ggceYYpZd80TipRjS4OAd/Pky+hgNaWyjJr8iy3wupiEUV/2V6CrfE4/
         SnuhUfLs61OgGlXKftyFfcSlBWrwmlvmu/viWngm6OFZTcRX/WOJ/rgzfPg5Qs705j92
         7oTke68iES7QsW3iI4DgUdVJPrxrY4G/BH8+nJKx7nR7+jenOqkprRU35iLd4Y1LD7qR
         szNBigi12V7AC56BfhvyKwPg+DN0oX2RooGW67Pf5l2GfNMYIdvgjceeGeJbaFtmg3jb
         WouOTRytTrnxttC2Z+a3wo+n/E3FxW9efZQQJPpRZYPHakLcJfqkJQN0l1PWCqq/pjBG
         +tgw==
X-Forwarded-Encrypted: i=1; AJvYcCVpGH0UpLyf0U8K11CLFOUx6NVtwHSj61qo5eQ8REByh3earfkJgVdQrLVyJQfU5ssZSgZrsroPUS1Zi7BOjw2JrB46rqFXdE+TT22Tek8=
X-Gm-Message-State: AOJu0Yzaz+TvCCtUmhrYxx83uZbEkiOLkQiPScd6AIhy5KXUtIDVUz8s
	JsgunDdZYmxk/0mLm4uNonSwsDhK2P6FAhH9rjdCjbf2IxZcmbdMqGF4dgzAtZ1yWlsgx0q+z80
	=
X-Google-Smtp-Source: AGHT+IGMhsAnp0SJFxj0HDu9J30xtR7Vn0O+A1oA1SyJi0Bxu/l6q4lZkgkBoTXQsf+gA/4gLT+OaA==
X-Received: by 2002:a2e:83d0:0:b0:2ec:53fb:39d1 with SMTP id 38308e7fff4ca-2ec5b36b141mr41432531fa.9.1719321350703;
        Tue, 25 Jun 2024 06:15:50 -0700 (PDT)
Message-ID: <243e34fc-57a2-464c-8a11-2cfee7e9cda3@suse.com>
Date: Tue, 25 Jun 2024 15:15:43 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
 <80d0578d-26c0-4650-9edf-6926c055d415@suse.com>
 <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
 <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com>
 <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 14:40, Tamas K Lengyel wrote:
> On Tue, Jun 25, 2024 at 7:52 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 25.06.2024 13:12, Tamas K Lengyel wrote:
>>> On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 24.06.2024 23:23, Tamas K Lengyel wrote:
>>>>> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>
>>>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>>>>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
>>>>>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
>>>>>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
>>>>>>>
>>>>>>> +libfuzzer-harness: $(OBJS) cpuid.o
>>>>>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
>>>>>>
>>>>>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
>>>>>> tree anywhere.
>>>>>
>>>>> It's used by oss-fuzz, otherwise it's not doing anything.
>>>>>
>>>>>>
>>>>>> I'm further surprised you get away here without wrappers.o.
>>>>>
>>>>> Wrappers.o was actually breaking the build for oss-fuzz at the linking
>>>>> stage. It works just fine without it.
>>>>
>>>> I'm worried here, to be honest. The wrappers serve a pretty important
>>>> role, and I'm having a hard time seeing why they shouldn't be needed
>>>> here when they're needed both for the test and afl harnesses. Could
>>>> you add some more detail on the build issues you encountered?
>>>
>>> With wrappers.o included doing the build in the oss-fuzz docker
>>> (ubuntu 20.04 base) fails with:
>>>
>>> ...
>>> clang -O1 -fno-omit-frame-pointer -gline-tables-only
>>> -Wno-error=enum-constexpr-conversion
>>> -Wno-error=incompatible-function-pointer-types
>>> -Wno-error=int-conversion -Wno-error=deprecated-declarations
>>> -Wno-error=implicit-function-declaration -Wno-error=implicit-int
>>> -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
>>> -fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
>>> -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
>>> -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
>>> -Og -fno-omit-frame-pointer
>>> -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
>>> -MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
>>> -I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
>>> -D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
>>> -Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
>>> -Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
>>> -Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
>>> -Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
>>> x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
>>> x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
>>> /usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
>>> /usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
>>> in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
>>> __locale_struct*, char const*, ...)':
>>> cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
>>> undefined reference to `__wrap_vsnprintf'
>>> clang: error: linker command failed with exit code 1 (use -v to see invocation)
>>> make: *** [Makefile:62: libfuzzer-harness] Error 1
>>> rm x86-emulate.c wrappers.c cpuid.c
>>> make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
>>> ERROR:__main__:Building fuzzers failed.
>>
>> Hmm, yes, means we'll need an actual vsnprintf() wrapper, not just a
>> declaration thereof.
> 
> I don't really get what this wrapper accomplishes

They guard against clobbering of in-register state (SIMD registers in
particular, but going forward maybe also eGPRs as introduced by APX)
by library functions called between the emulation of individual insns
or (especially likely for fuzzing-instrumented code, I think) even
from the middle of emulating an insn. (Something as simple as the
compiler inserting a call to memcpy() or memset() somewhere in the
translation of the emulator source code could also clobber state.)

> and as I said, fuzzing works with oss-fuzz just fine without it.

I'm inclined to take this as "it appears to work just fine". Fuzzed
input register state may be lost to a library call somewhere,
rendering the fuzzing results less useful. This would pretty
certainly stop being tolerable the moment you compared the results of
native execution of a sequence of instructions with their emulated
counterpart.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:18:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:18:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747687.1155150 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM63b-0006xu-8T; Tue, 25 Jun 2024 13:17:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747687.1155150; Tue, 25 Jun 2024 13:17:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM63b-0006xn-4y; Tue, 25 Jun 2024 13:17:59 +0000
Received: by outflank-mailman (input) for mailman id 747687;
 Tue, 25 Jun 2024 13:17:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM63a-0006xf-7c
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:17:58 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 55e35275-32f5-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 15:17:56 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2ebe40673d8so62413171fa.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:17:56 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7068998497esm3452105b3a.15.2024.06.25.06.17.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 06:17:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55e35275-32f5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719321476; x=1719926276; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=bv6zRyWZDi+Xzi9ZgVEmKskG6R6plvviwberHPEumf4=;
        b=cxgkFZhysg3SEGeEuCU0WlS5laYSFnwU7GNnHalMv/Bte2/kQXrLKK4T/PVItb1Sm/
         d2+dXXw5D6W2phBBxjYClafT5zaUHohbGMU+43RVF2gLOE8IT0ltH1QcwGcd8SMskgPt
         mjakfowoW/tkZJW38uLlXRgAreQz9rbc0IkLdypfDhlHAbvs+zhKugmmLPAAj1yfQ/mS
         UbiqIoglu7WmwfdEsNU9EYURr13HnA/TW+oeHE+R45yueWM2iQFpO5ldvKtbGJLQ9Nd7
         H23ppu5kcFQWSPRiyhZayqvT4BZ7iyHljKRE8UDDPh17SKXRWlKjiKcaiws4O/v0dTph
         V8GA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719321476; x=1719926276;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bv6zRyWZDi+Xzi9ZgVEmKskG6R6plvviwberHPEumf4=;
        b=QheSwEQ+yq/14llLuN8gPTqbiaAm915mvm8oKyxQDUDlNFB+DLCDnLugkLR20guhF7
         2iCl8X9obJbl2v4v83cYiMjdOuUhq3n7XdnjNqnr3jiQweglOlnqVbKjYmDaEWJBT+XW
         pNQjt1GgxOfhPjEMWjn2OYwbTqggoA63NlhqFoncGIS3HhTk7xjalcsNhx9Cq70cI82Y
         qyS37UQ4GeaORPLcReQLDwe6EHeUBTuyGyuGkChnEGwdVKSlhxtUQD1HQDUM4T4EYk6P
         Qjp3Gz8Hv5yVPQTNWnAaOMgXDkFv4GZjrDSEhx2Xk8hET12ijR1zduGQou4GKAU3jyF1
         Jtsw==
X-Forwarded-Encrypted: i=1; AJvYcCXiFBuGgho58vWpR84CDhw/qLyk5f22XdUx1cs9DE8v6nfp2Km9YkfH75Xnr9L3s41XuSodWHKkOCqq07BUmPrBXsRTaDrMxCzzTp6oMCQ=
X-Gm-Message-State: AOJu0YxyydjRuPQrSeXb9y2rRzuvArTa3nyS6/W7bSgk5j30e4OpisZn
	bYJh/axe0sL9tag7Jk+lses7BFZKgWNpm5c0DPBtOdCG8fK0MwAAYc5Sp+gfsw==
X-Google-Smtp-Source: AGHT+IGVOOghhqO53d9sReemg2y/BzisVqzszO26bakd0emcnmrpgV9JZcNF09liM71oVr49C4F/NQ==
X-Received: by 2002:a2e:9516:0:b0:2ec:4eca:7487 with SMTP id 38308e7fff4ca-2ec5b338b5fmr37697011fa.20.1719321475589;
        Tue, 25 Jun 2024 06:17:55 -0700 (PDT)
Message-ID: <66a7243d-a1a1-4236-832f-f3e1daf11b85@suse.com>
Date: Tue, 25 Jun 2024 15:17:47 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <20240621191434.5046-2-tamas@tklengyel.com>
 <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
 <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
 <b9b84f10-6d41-48d9-996d-069408753e28@suse.com>
 <CABfawhkJ0t8FenCWbupGcHD-ZhorbWN7ZjMQVm-jeg_zA1g5iQ@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <CABfawhkJ0t8FenCWbupGcHD-ZhorbWN7ZjMQVm-jeg_zA1g5iQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 14:39, Tamas K Lengyel wrote:
> On Tue, Jun 25, 2024 at 7:40 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 25.06.2024 13:15, Tamas K Lengyel wrote:
>>> On Tue, Jun 25, 2024 at 5:17 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>>>> --- /dev/null
>>>>> +++ b/scripts/oss-fuzz/build.sh
>>>>> @@ -0,0 +1,22 @@
>>>>> +#!/bin/bash -eu
>>>>> +# Copyright 2024 Google LLC
>>>>> +#
>>>>> +# Licensed under the Apache License, Version 2.0 (the "License");
>>>>> +# you may not use this file except in compliance with the License.
>>>>> +# You may obtain a copy of the License at
>>>>> +#
>>>>> +#      http://www.apache.org/licenses/LICENSE-2.0
>>>>> +#
>>>>> +# Unless required by applicable law or agreed to in writing, software
>>>>> +# distributed under the License is distributed on an "AS IS" BASIS,
>>>>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>>>> +# See the License for the specific language governing permissions and
>>>>> +# limitations under the License.
>>>>> +#
>>>>> +################################################################################
>>>>
>>>> I'm a little concerned here, but maybe I shouldn't be. According to what
>>>> I'm reading, the Apache 2.0 license is at least not entirely compatible
>>>> with GPLv2. While apparently the issue is solely with linking in Apache-
>>>> licensed code, I wonder whether us not having a respective file under
>>>> ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has a
>>>> reason possibly excluding the use of such code in the project.
>>>>
>>>>> +cd xen
>>>>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
>>>>> +make clang=y -C tools/include
>>>>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
>>>>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
>>>>
>>>> In addition to what Julien said, I further think that filename / directory
>>>> name are too generic for a file with this pretty specific contents.
>>>
>>> I don't really get your concern here?
>>
>> The thing that is built is specifically a x86 emulator piece of fuzzing
>> binary. Neither the directory name nor the file name contain either x86
>> or (at least) emul.
> 
> Because this build script is not necessarily restricted to building only
> this one harness in the future. Right now that's the only one with
> a suitable libfuzzer harness, but the reason this build script is here
> is to make it easy to add additional fuzzing binaries without needing
> to open PRs on the oss-fuzz repo, which, as I understand it, no one
> in the Xen community was willing to do due to the CLA. Now that the
> integration is going to be in oss-fuzz, the only thing you have to do
> in the future is add more stuff to this script to get it fuzzed. Anything
> that's compiled with libfuzzer and copied to $OUT will be picked up by
> oss-fuzz automatically. Makes sense?

It does, yes. Yet nothing like that was said in the description. How
should anyone have known there are future possibilities with this script?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:29:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:29:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747698.1155159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6F0-0000Rk-BE; Tue, 25 Jun 2024 13:29:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747698.1155159; Tue, 25 Jun 2024 13:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6F0-0000Rd-89; Tue, 25 Jun 2024 13:29:46 +0000
Received: by outflank-mailman (input) for mailman id 747698;
 Tue, 25 Jun 2024 13:29:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6Ey-0000RT-JP; Tue, 25 Jun 2024 13:29:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6Ey-0005IF-Dv; Tue, 25 Jun 2024 13:29:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6Ex-0005lv-Vf; Tue, 25 Jun 2024 13:29:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6Ex-000436-UJ; Tue, 25 Jun 2024 13:29:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BonLUIyqgKMG5sbVQPzcMXjLZWEbV7YNPSnETCU7QD0=; b=6iID9MIlEfhBWhbY7jLa4s4ONs
	m008EN7j5ywiMmlqmTTkdutCwPVFrm76x8B+xQ9+uId2hbq79LJCI6xEt76RZFgm9juXvqJU/iUrx
	YdumI0itlxtfgL/oVlSdojwOsElB8DJ57WPF8SRGiXHvnbmexUDj0OId0THXD8oHhgOg=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186475-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186475: trouble: broken/pass
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:<job status>:broken:regression
    libvirt:test-armhf-armhf-libvirt:host-install(5):broken:regression
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=ba6cd2d5a8dc785cd56a074a4d088316cc1b678b
X-Osstest-Versions-That:
    libvirt=43a0881274e632dc44fff9320357dc8bf31e4826
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 13:29:43 +0000

flight 186475 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186475/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt        <job status>                 broken
 test-armhf-armhf-libvirt      5 host-install(5)        broken REGR. vs. 186451

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              ba6cd2d5a8dc785cd56a074a4d088316cc1b678b
baseline version:
 libvirt              43a0881274e632dc44fff9320357dc8bf31e4826

Last test of basis   186451  2024-06-22 04:20:26 Z    3 days
Testing same since   186475  2024-06-25 04:20:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Julis <ajulis@redhat.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Jonathon Jongsma <jjongsma@redhat.com>
  Laine Stump <laine@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     broken  
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt broken
broken-step test-armhf-armhf-libvirt host-install(5)

Not pushing.

------------------------------------------------------------
commit ba6cd2d5a8dc785cd56a074a4d088316cc1b678b
Author: Göran Uddeborg <goeran@uddeborg.se>
Date:   Mon Jun 24 14:47:17 2024 +0200

    Translated using Weblate (Swedish)
    
    Currently translated at 77.1% (8062 of 10454 strings)
    
    Translation: libvirt/libvirt
    Translate-URL: https://translate.fedoraproject.org/projects/libvirt/libvirt/sv/
    
    Translated using Weblate (Swedish)
    
    Currently translated at 76.9% (8042 of 10454 strings)
    
    Translation: libvirt/libvirt
    Translate-URL: https://translate.fedoraproject.org/projects/libvirt/libvirt/sv/
    
    Co-authored-by: Göran Uddeborg <goeran@uddeborg.se>
    Signed-off-by: Göran Uddeborg <goeran@uddeborg.se>

commit af437d2d64cd39dac437e849156592128a8ceb28
Author: Jonathon Jongsma <jjongsma@redhat.com>
Date:   Wed Jun 12 12:18:49 2024 -0500

    qemu: Don't specify vfio-pci.ramfb when ramfb is false
    
    Commit 7c8e606b64c73ca56d7134cb16d01257f39c53ef attempted to fix
    the specification of the ramfb property for vfio-pci devices, but it
    failed when ramfb is explicitly set to 'off'. This is because only the
    'vfio-pci-nohotplug' device supports the 'ramfb' property. Since we use
    the base 'vfio-pci' device unless ramfb is enabled, attempting to set
    the 'ramfb' parameter to 'off' will result in an error like the
    following:
    
      error: internal error: QEMU unexpectedly closed the monitor
      (vm='rhel'): 2024-06-06T04:43:22.896795Z qemu-kvm: -device
      {"driver":"vfio-pci","host":"0000:b1:00.4","id":"hostdev0","display":"on
      ","ramfb":false,"bus":"pci.7","addr":"0x0"}: Property 'vfio-pci.ramfb'
      not found.
    
    This also more closely matches what is done for mdev devices.
    
    Resolves: https://issues.redhat.com/browse/RHEL-28808
    
    Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>

commit 397c0f4b01ae1b24806a145ffbd31a9a49126ae3
Author: Laine Stump <laine@redhat.com>
Date:   Fri Jun 21 08:17:58 2024 -0400

    network: add more firewall test cases
    
    This patch adds some previously missing test cases that test for
    proper firewall rule creation when the following are included in the
    network definition:
    
    * <forward dev='blah'>
    * no forward element (an "isolated" network)
    * nat port range when only ipv4 is nat-ed
    * nat port range when both ipv4 & ipv6 are nated
    
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
    Signed-off-by: Laine Stump <laine@redhat.com>

commit aabf279ca07d9d5c3d2e6d4efd7d4c5bc2dd471e
Author: Laine Stump <laine@redhat.com>
Date:   Wed Jun 12 15:25:46 2024 -0400

    tests: fix broken nftables test data so that individual tests are successful
    
    When the chain names and table name used by the nftables firewall
    backend were changed in commit
    958aa7f274904eb8e4678a43eac845044f0dcc38, I forgot to change the test
    data file base.nftables, which has the extra "list" and "add
    chain/table" commands that are generated for the first test case of
    networkxml2firewalltest.c. When the full set of tests is run, the
    first test will be an iptables test case, so those extra commands
    won't be added to any of the nftables cases, and so the data in
    base.nftables never matches, and the tests are all successful.
    
    However, if the tests are limited with, e.g. VIR_TEST_RANGE=2 (test #2
    will be the nftables version of the 1st test case), then the commands
    to add nftables table/chains *will* be generated in the test output,
    and so the test will fail. Because I was only running the entire test
    series after the initial commits of nftables tests, I didn't notice
    this. Until now.
    
    base.nftables has now been updated to reflect the current names for
    chains/table, and running individual test cases is once again
    successful.
    
    Fixes: 958aa7f274904eb8e4678a43eac845044f0dcc38
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>
    Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
    Signed-off-by: Laine Stump <laine@redhat.com>

commit 3a9095976e1060f95a8a4a985d5e9901e1098547
Author: Adam Julis <ajulis@redhat.com>
Date:   Fri Jun 21 18:16:55 2024 +0200

    qemuDomainDiskChangeSupported: Fill in missing check
    
    The attribute 'discard_no_unref' of <disk/> is not allowed to be
    changed while the virtual machine is running.
    
    Resolves: https://issues.redhat.com/browse/RHEL-37542
    Signed-off-by: Adam Julis <ajulis@redhat.com>
    Reviewed-by: Michal Privoznik <mprivozn@redhat.com>


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:41:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:41:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747710.1155176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6QX-0003dZ-EA; Tue, 25 Jun 2024 13:41:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747710.1155176; Tue, 25 Jun 2024 13:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6QX-0003dS-BM; Tue, 25 Jun 2024 13:41:41 +0000
Received: by outflank-mailman (input) for mailman id 747710;
 Tue, 25 Jun 2024 13:41:40 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM6QW-0003dM-1J
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:41:40 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a4a40a23-32f8-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:41:38 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719322894624675.912735300306;
 Tue, 25 Jun 2024 06:41:34 -0700 (PDT)
Received: by mail-yb1-f181.google.com with SMTP id
 3f1490d57ef6-dff305df675so5982170276.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:41:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4a40a23-32f8-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719322895; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=kMXbrMIAJc+NaXqb8v6IFzvkJ2GrABIGiXW8WaAYnR//wqGTdnPEsUA9URJdcafB3ogp+I0QH6Cd8JoKW8ONQTEDndpLHxK9MHN5cxutJRA27A1bHRzgzX3m9p4Uth0Cvjr+AtkBepFiUzGCN4MjpklPNFg46eZ3wKjaJDvGops=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719322895; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=lmn1BvzSMho7LOP+1vhvUbbeoFJzVkT0S3+qyrlel9I=; 
	b=S0Q9T2iV3v4pt2fYFA2oTN/Rbx8yPh4ERdhkFicQSen14GFXG2UB5VRylos9LhVmpR39ku075kS/BqcG9azwwd/928C7GfdLHzbewE9bdcsIxBN/RJcQVgWjGZ82hEMv7IvKQQ6PZq4i3/yb3IjESCt0LkxNPuEK/HsgCnIvq2c=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719322895;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=lmn1BvzSMho7LOP+1vhvUbbeoFJzVkT0S3+qyrlel9I=;
	b=lJHiQo7ZU/PTRvAAoeAKDBkQxW/OHNiU3JfJ5Z5FOTn+3rk67cG7xFcHaXQMs55p
	uNXhbLtrE1uqofTG0TZUPsmum7cTzYwIPjii6atczvvS9J+4TGtWxM7reSGn3xUBdYq
	g4Dm3nfPM/FhyoMDx57vqPrBBiRk27NY2AUX7k4g=
X-Forwarded-Encrypted: i=1; AJvYcCVMkRqy/preDbGPHXtrWgNrOtxxHTWmUg2SzTji58snoYFnIHysHEb8zwFfwhA6697Bz81DNSDAXN/3GQz8WuQO9fssASP9JGHJSbhDbE8=
X-Gm-Message-State: AOJu0YzmAsTGfxuRqtsxInQqTRbYTJWv/iBYYCwb8qimq3VPwKE7aPXu
	EaxzPKwi+LOVe0USJWFMOzOmRBHmuHdzcB8elw2TfZjHzwX/kKchPv5XnjEJMO1aDnyl1d8tFkA
	WlgovJyQ7hi1Cp3H1HElfBlNNysI=
X-Google-Smtp-Source: AGHT+IGNVBIqa8gXb0O4KENRe/n1RDrEpdv345encx6uD4hOrJ1wrjfXCuih2EPfE1Xo1ebauds8osS6YzVdxyQ5eyg=
X-Received: by 2002:a25:d68f:0:b0:dff:1020:6f31 with SMTP id
 3f1490d57ef6-e0303ff8138mr7187270276.45.1719322893755; Tue, 25 Jun 2024
 06:41:33 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
 <80d0578d-26c0-4650-9edf-6926c055d415@suse.com> <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
 <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com> <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
 <243e34fc-57a2-464c-8a11-2cfee7e9cda3@suse.com>
In-Reply-To: <243e34fc-57a2-464c-8a11-2cfee7e9cda3@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 09:40:57 -0400
X-Gmail-Original-Message-ID: <CABfawh=6d+F1tYLmfC-NyMn80NROFf_0HL-WkKzu-r5vjfScaw@mail.gmail.com>
Message-ID: <CABfawh=6d+F1tYLmfC-NyMn80NROFf_0HL-WkKzu-r5vjfScaw@mail.gmail.com>
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 25, 2024 at 9:15 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.06.2024 14:40, Tamas K Lengyel wrote:
> > On Tue, Jun 25, 2024 at 7:52 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 25.06.2024 13:12, Tamas K Lengyel wrote:
> >>> On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> On 24.06.2024 23:23, Tamas K Lengyel wrote:
> >>>>> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>
> >>>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> >>>>>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
> >>>>>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
> >>>>>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
> >>>>>>>
> >>>>>>> +libfuzzer-harness: $(OBJS) cpuid.o
> >>>>>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
> >>>>>>
> >>>>>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
> >>>>>> tree anywhere.
> >>>>>
> >>>>> It's used by oss-fuzz, otherwise it's not doing anything.
> >>>>>
> >>>>>>
> >>>>>> I'm further surprised you get away here without wrappers.o.
> >>>>>
> >>>>> Wrappers.o was actually breaking the build for oss-fuzz at the linking
> >>>>> stage. It works just fine without it.
> >>>>
> >>>> I'm worried here, to be honest. The wrappers serve a pretty important
> >>>> role, and I'm having a hard time seeing why they shouldn't be needed
> >>>> here when they're needed both for the test and afl harnesses. Could
> >>>> you add some more detail on the build issues you encountered?
> >>>
> >>> With wrappers.o included doing the build in the oss-fuzz docker
> >>> (ubuntu 20.04 base) fails with:
> >>>
> >>> ...
> >>> clang -O1 -fno-omit-frame-pointer -gline-tables-only
> >>> -Wno-error=enum-constexpr-conversion
> >>> -Wno-error=incompatible-function-pointer-types
> >>> -Wno-error=int-conversion -Wno-error=deprecated-declarations
> >>> -Wno-error=implicit-function-declaration -Wno-error=implicit-int
> >>> -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
> >>> -fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
> >>> -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
> >>> -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
> >>> -Og -fno-omit-frame-pointer
> >>> -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
> >>> -MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
> >>> -I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
> >>> -D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
> >>> -Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
> >>> -Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
> >>> -Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
> >>> -Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
> >>> x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
> >>> x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
> >>> /usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
> >>> /usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
> >>> in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
> >>> __locale_struct*, char const*, ...)':
> >>> cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
> >>> undefined reference to `__wrap_vsnprintf'
> >>> clang: error: linker command failed with exit code 1 (use -v to see invocation)
> >>> make: *** [Makefile:62: libfuzzer-harness] Error 1
> >>> rm x86-emulate.c wrappers.c cpuid.c
> >>> make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
> >>> ERROR:__main__:Building fuzzers failed.
> >>
> >> Hmm, yes, means we'll need an actual vsnprintf() wrapper, not just a
> >> declaration thereof.
> >
> > I don't really get what this wrapper accomplishes
>
> They guard against clobbering of in-register state (SIMD registers in
> particular, but going forward maybe also eGPR-s as introduced by APX)
> by library functions called between emulation of individual insns (or,
> especially possible for fuzzing instrumented code, I think) even from
> in the middle of emulating an insn. (Something as simple as the
> compiler inserting a call to memcpy() or memset() somewhere in the
> translation of the emulator source code could also clobber state.)
>
> > and as I said, fuzzing works with oss-fuzz just fine without it.
>
> I'm inclined to take this as "it appears to work just fine". Fuzzed
> input register state may be lost by doing a library call somewhere,
> rendering the fuzzing results less useful. This would pretty
> certainly stop being tolerable the moment you compared results of
> native execution of a sequence of instructions with the emulated
> counterpart.

Yea, that may be. Any suggested way to fix the linking issue though?
I'm not even sure why the problem only appears in the oss-fuzz build,
when I just run make normally it seems to work.

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:43:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:43:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747719.1155186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6S7-0004EQ-Sr; Tue, 25 Jun 2024 13:43:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747719.1155186; Tue, 25 Jun 2024 13:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6S7-0004EJ-Pz; Tue, 25 Jun 2024 13:43:19 +0000
Received: by outflank-mailman (input) for mailman id 747719;
 Tue, 25 Jun 2024 13:43:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM6S6-0004EB-Cy
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:43:18 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dfba6550-32f8-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:43:17 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719322991509960.1477680832319;
 Tue, 25 Jun 2024 06:43:11 -0700 (PDT)
Received: by mail-yb1-f172.google.com with SMTP id
 3f1490d57ef6-e02b571b0f6so5432398276.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:43:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dfba6550-32f8-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719322994; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=kskmQ/HkDsXBCo7+k0gw3KG2TQKmF8MfbCEHM/ZbEADlS0ykReAmiLolfZlIr74I16IDVuPV2xM741GWXOvdCcW1Y4WD4QQmMcKItUNxVbh5JQpymPVfLz/nZoNa6nhM7+8wC8rkoAldgzmvyZNhl6So/+gaBghElUcRx6KfG7s=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719322994; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=pr1odYqS0ZrtCrgBJK30bcPuDVS3ze7cey35TPBpOlY=; 
	b=Ebvxkclg9ZqNovPIWFi1od8tPyZNaPPJbRVWESOwHJ4lKQGTibbrzsnNLgxpdkHAv88NrheKA6kPs4qV1BjP2oaibqyd3r3eh485gIOtmL1o8nd2jWE0CFhNmdR3fwCBwkfLzwp+67CBn6TxTkU5it5MPbtIWb4qQigxT8hER/4=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719322994;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=pr1odYqS0ZrtCrgBJK30bcPuDVS3ze7cey35TPBpOlY=;
	b=WnixsLCZhpY2BwZEesCNFML/5kjDHGc7xZwu07RFswP5EDbPUEHHOOpgI0qCYnSJ
	IAWA3tus3y5pkORWUUArE5Q3KGqc9hpbTzy8YRma4CjX7rLcYN6Ce1/1vTYAICYgIEt
	Pr2uyznvL7M8IL0oHIu6oJxEVg3yKk3eptqus3bY=
X-Forwarded-Encrypted: i=1; AJvYcCUspalkeHc80f9yhLr4mFs0m8GnVoRWttip/A8zg/a87psGA01DSruG3FSXZcXKFTIKbZ+K0+4Pcwplv8PPBiO9RqcvlcxzeSujcrO1BKU=
X-Gm-Message-State: AOJu0YwTK9BMqw8stIr5TL5idEgmWHb1jqu7YXRq5gLOSbjsiSBev/LH
	ccmnEI+WFTnhmnFHLQ69+2n2zi1eUY0+kGARVOvnjKTSaGCdN8Y1xeAK0cPwmyyL/1mvsyJyVWL
	/CV1HC6H4znBnSLJIRyUjDcD+9dQ=
X-Google-Smtp-Source: AGHT+IEdGW5iyChpb1JVMDxR3jJ63LbEVe4bAhxc5XoXBCuP+6lSXhBULGr2KGgRCihfdmSCCQjPADmG1B0fQysE0mc=
X-Received: by 2002:a5b:8c9:0:b0:e02:c434:8b2a with SMTP id
 3f1490d57ef6-e030402f3cdmr7300676276.54.1719322990668; Tue, 25 Jun 2024
 06:43:10 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <20240621191434.5046-2-tamas@tklengyel.com>
 <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com> <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
 <b9b84f10-6d41-48d9-996d-069408753e28@suse.com> <CABfawhkJ0t8FenCWbupGcHD-ZhorbWN7ZjMQVm-jeg_zA1g5iQ@mail.gmail.com>
 <66a7243d-a1a1-4236-832f-f3e1daf11b85@suse.com>
In-Reply-To: <66a7243d-a1a1-4236-832f-f3e1daf11b85@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 09:42:34 -0400
X-Gmail-Original-Message-ID: <CABfawhmAV5+Nr9A_Speh2ai3v9wfJtxmps=R6iTxNU1RFP4xRA@mail.gmail.com>
Message-ID: <CABfawhmAV5+Nr9A_Speh2ai3v9wfJtxmps=R6iTxNU1RFP4xRA@mail.gmail.com>
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, Jun 25, 2024 at 9:18 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.06.2024 14:39, Tamas K Lengyel wrote:
> > On Tue, Jun 25, 2024 at 7:40 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 25.06.2024 13:15, Tamas K Lengyel wrote:
> >>> On Tue, Jun 25, 2024 at 5:17 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> >>>>> --- /dev/null
> >>>>> +++ b/scripts/oss-fuzz/build.sh
> >>>>> @@ -0,0 +1,22 @@
> >>>>> +#!/bin/bash -eu
> >>>>> +# Copyright 2024 Google LLC
> >>>>> +#
> >>>>> +# Licensed under the Apache License, Version 2.0 (the "License");
> >>>>> +# you may not use this file except in compliance with the License.
> >>>>> +# You may obtain a copy of the License at
> >>>>> +#
> >>>>> +#      http://www.apache.org/licenses/LICENSE-2.0
> >>>>> +#
> >>>>> +# Unless required by applicable law or agreed to in writing, software
> >>>>> +# distributed under the License is distributed on an "AS IS" BASIS,
> >>>>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> >>>>> +# See the License for the specific language governing permissions and
> >>>>> +# limitations under the License.
> >>>>> +#
> >>>>> +################################################################################
> >>>>
> >>>> I'm a little concerned here, but maybe I shouldn't be. According to what
> >>>> I'm reading, the Apache 2.0 license is at least not entirely compatible
> >>>> with GPLv2. While apparently the issue is solely with linking in Apache-
> >>>> licensed code, I wonder whether us not having a respective file under
> >>>> ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has a
> >>>> reason possibly excluding the use of such code in the project.
> >>>>
> >>>>> +cd xen
> >>>>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
> >>>>> +make clang=y -C tools/include
> >>>>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
> >>>>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
> >>>>
> >>>> In addition to what Julien said, I further think that filename / directory
> >>>> name are too generic for a file with this pretty specific contents.
> >>>
> >>> I don't really get your concern here?
> >>
> >> The thing that is built is specifically a x86 emulator piece of fuzzing
> >> binary. Neither the directory name nor the file name contain either x86
> >> or (at least) emul.
> >
> > Because this build script is not necessarily restricted to build only
> > this one harness in the future. Right now that's the only one that has
> > a suitable libfuzzer harness, but the reason this build script is here
> > is to be easily able to add additional fuzzing binaries without the
> > need to open PRs on the oss-fuzz repo, which as I understand no one
> > was willing to do in the xen community due to the CLA. Now that the
> > integration is going to be in oss-fuzz, the only thing you have to do
> > in the future is add more stuff to this script to get fuzzed. Anything
> > that's compiled with libfuzzer and copied to $OUT will be picked up by
> > oss-fuzz automatically. Makes sense?
>
> It does, yes. Yet nothing like that was said in the description. How
> should anyone have known there are future possibilities with this script?

Apologies, to me "The build integration script for oss-fuzz targets."
was sufficiently descriptive but it may require some familiarity with
oss-fuzz to get. I can certainly add the above text to the commit
message if that helps.

Tamas


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747732.1155206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6aY-0006s0-0S; Tue, 25 Jun 2024 13:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747732.1155206; Tue, 25 Jun 2024 13:52:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6aX-0006rt-Ta; Tue, 25 Jun 2024 13:52:01 +0000
Received: by outflank-mailman (input) for mailman id 747732;
 Tue, 25 Jun 2024 13:52:00 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6aW-0006cc-Bl
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:00 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 17a116d1-32fa-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:51:59 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a6fe81a5838so329936866b.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:51:59 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.51.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:51:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17a116d1-32fa-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323518; x=1719928318; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=AB0T3A1FkGRSgKa91wKeyYgsXSaWrOLREYcY7rl0FlM=;
        b=c97Tkp/enEWed50770h22bVUju9rFvGYH/xnwuz4AOlLEOpwsBorrB66+rtSQ3aenR
         jeIDAmxXRfwV4t0lsvbz+TJ6qXAPvLnI9a34hCDKdwZEPkAHmYVWv8/GQwHfEq2KLfdf
         1Au7oLg7FhxEmLIdC933zBCN3SMCtI1pl80dzhIlfkunZa5xrealFAjv4QyG/HI9pM5R
         pqXN2p1oR2xSioFFI4fI/UViQblmETVeyNNyqOaWaXp+Hlfm/cILV8D/4BNxj8SvpuFp
         NgLCag+9QGBg9A367Y40EsyOqla+NNpcsxnSHOyL7uv24jegji7YrtjDSKqx4Tm3v5hV
         UWvA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323518; x=1719928318;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=AB0T3A1FkGRSgKa91wKeyYgsXSaWrOLREYcY7rl0FlM=;
        b=ZDoUaFpL1QKKcrOCNJNAEU4KBgaAKJJCNvt7v0eg0S9ReM6WTm+s0KUafn7EaNYgv+
         UUIHVLtXvvn0J5Vpf4M3avNv1vixW5RM5m8RShaem0zhggQTzANB7uEsoZBfmeUq5GrD
         OYQDkm/rECVtxmHpku1PKW2jk0+KPeHd8P+dOkDSDeluDfrM43vcqEcZN5GzPxIsq+4v
         4VfPyv1uxg7TX8gNuTkDqZVhwU/d3HHUPi4FhqA6ScYbI/VV8MM5Q4JZdEQAHUm0+tba
         futMCXuDlR+VV9BF8DxhLifmk0jC9wBICMzSpCAFcRAJ/FUdPUPJn9SmgFgGob6lJY62
         9/yQ==
X-Gm-Message-State: AOJu0YwAox8ft527LVziNsF7xGzHmdsJdTOi04pLs54/d6jRAEEVn9Sl
	7mYvuF8rWFi7e9AmUKufKpS/L05OW366Q+mr2phHF2pJdm9fvejycfZtJnxA
X-Google-Smtp-Source: AGHT+IEk++ZDxUS7RQ6d3SVnOntMsEWaCgNsj52pGBUV9+AK2No0h2dX8WJzid5KrTajUTQkROy4xw==
X-Received: by 2002:a17:907:cbc9:b0:a72:6849:cb0b with SMTP id a640c23a62f3a-a727f680739mr42308866b.17.1719323518020;
        Tue, 25 Jun 2024 06:51:58 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v13 01/10] xen: introduce generic non-atomic test_*bit()
Date: Tue, 25 Jun 2024 15:51:43 +0200
Message-ID: <b3052978b1ba36b37e9b844f1b70b0912f608ea0.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The following generic functions were introduced:
* test_bit
* generic__test_and_set_bit
* generic__test_and_clear_bit
* generic__test_and_change_bit

These functions and macros can be useful for architectures
that don't have corresponding arch-specific instructions.

Also, the patch introduces the following generics which are
used by the functions mentioned above:
* BITOP_BITS_PER_WORD
* BITOP_MASK
* BITOP_WORD
* BITOP_TYPE

The following approach was chosen for generic*() and arch*() bit
operation functions:
If the bit operation function being made generic starts with the
prefix "__", then the corresponding generic/arch functions also
carry the "__" prefix. For example:
 * test_bit() will be defined using arch_test_bit() and
   generic_test_bit().
 * __test_and_set_bit() will be defined using
   arch__test_and_set_bit() and generic__test_and_set_bit().

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V13:
 - add Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V12:
 - revert change of moving the definition of BITS_PER_BYTE from <arch>/bitops.h to xen/bitops.h.
   ( a separate patch will be provided to put BITS_PER_BYTE to proper place )
 - drop comments on top of generic_*() functions and update the comments above __test_*() and test_bit().
 - update how static inline __test_*() are defined ( drop pointless fallback #define ) and test_bit().
 - drop the footer after Signed-off-by.
Changes in V11:
 - fix indentation in generic_test_bit() function.
 - move definition of BITS_PER_BYTE from <arch>/bitops.h to xen/bitops.h
 - drop the changes in arm64/livepatch.c.
 - update the comments on top of functions: generic__test_and_set_bit(), generic__test_and_clear_bit(), generic__test_and_change_bit(),
   generic_test_bit().
 - Update footer after Signed-off section.
 - Rebase the patch on top of the staging branch, so it can be merged once the necessary approvals are given.
 - add Reviewed-by: Jan Beulich <jbeulich@suse.com>.
---
Changes in V10:
 - update the commit message. ( re-order paragraphs and add an explanation of the use of the "__" prefix in bit
   operation function names )
 - add parentheses around the whole expression of the bitop_bad_size() macro.
 - move the bitop_bad_size() macro above asm/bitops.h as it is not arch-specific anymore and there is
   no need for overriding it.
 - drop the check_bitop_size() macro and use "if ( bitop_bad_size(addr) ) __bitop_bad_size();" directly
   where it is needed.
 - in <xen/bitops.h> use 'int' as the first parameter for __test_and_*(), generic__test_and_*() to be
   consistent with how the mentioned functions were declared in the original per-arch functions.
 - add 'const' to the p variable in generic_test_bit().
 - move the definitions of BITOP_BITS_PER_WORD and bitop_uint_t to xen/bitops.h as we no longer allow
   arch overrides of these definitions.
---
Changes in V9:
  - move up xen/bitops.h in ppc/asm/page.h.
  - update the definition of arch_check_bitop_size
    and drop the corresponding macros from x86/asm/bitops.h
  - drop parentheses in generic__test_and_set_bit() for definition of
    local variable p.
  - fix indentation inside #ifndef BITOP_TYPE...#endif
  - update the commit message.
---
 Changes in V8:
  - drop __pure for the function which uses volatile.
  - drop unnecessary () in generic__test_and_change_bit() for the addr cast.
  - update the prototypes of generic_test_bit() and test_bit(): they now return bool
    instead of int.
  - update generic_test_bit() to use BITOP_MASK().
  - Deal with fls{l} changes: they belong in the patch which introduces generic fls{l}.
  - add a footer after Signed-off explaining the dependency on an uncommitted patch.
  - abstract bitop_size().
  - move BITOP_TYPE define to <xen/types.h>.
---
 Changes in V7:
  - move everything to xen/bitops.h to follow the same approach for all generic
    bit ops.
  - put together BITOP_BITS_PER_WORD and bitops_uint_t.
  - make BITOP_MASK more generic.
  - drop #ifdef ... #endif around BITOP_MASK, BITOP_WORD as they are generic
    enough.
  - drop "_" for generic__{test_and_set_bit,...}().
  - drop " != 0" for functions which return bool.
  - add volatile during the cast for generic__{...}().
  - update the commit message.
  - update arch related code to follow the proposed generic approach.
---
 Changes in V6:
  - Nothing changed ( only rebase )
---
 Changes in V5:
   - new patch
---
 xen/arch/arm/include/asm/bitops.h |  69 -----------
 xen/arch/ppc/include/asm/bitops.h |  54 ---------
 xen/arch/ppc/include/asm/page.h   |   2 +-
 xen/arch/ppc/mm-radix.c           |   2 +-
 xen/arch/x86/include/asm/bitops.h |  31 ++---
 xen/include/xen/bitops.h          | 182 ++++++++++++++++++++++++++++++
 6 files changed, 193 insertions(+), 147 deletions(-)

diff --git a/xen/arch/arm/include/asm/bitops.h b/xen/arch/arm/include/asm/bitops.h
index 8f4bdc09d1..db23d7edc3 100644
--- a/xen/arch/arm/include/asm/bitops.h
+++ b/xen/arch/arm/include/asm/bitops.h
@@ -22,11 +22,6 @@
 #define __set_bit(n,p)            set_bit(n,p)
 #define __clear_bit(n,p)          clear_bit(n,p)
 
-#define BITOP_BITS_PER_WORD     32
-#define BITOP_MASK(nr)          (1UL << ((nr) % BITOP_BITS_PER_WORD))
-#define BITOP_WORD(nr)          ((nr) / BITOP_BITS_PER_WORD)
-#define BITS_PER_BYTE           8
-
 #define ADDR (*(volatile int *) addr)
 #define CONST_ADDR (*(const volatile int *) addr)
 
@@ -76,70 +71,6 @@ bool test_and_change_bit_timeout(int nr, volatile void *p,
 bool clear_mask16_timeout(uint16_t mask, volatile void *p,
                           unsigned int max_try);
 
-/**
- * __test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_set_bit(int nr, volatile void *addr)
-{
-        unsigned int mask = BITOP_MASK(nr);
-        volatile unsigned int *p =
-                ((volatile unsigned int *)addr) + BITOP_WORD(nr);
-        unsigned int old = *p;
-
-        *p = old | mask;
-        return (old & mask) != 0;
-}
-
-/**
- * __test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_clear_bit(int nr, volatile void *addr)
-{
-        unsigned int mask = BITOP_MASK(nr);
-        volatile unsigned int *p =
-                ((volatile unsigned int *)addr) + BITOP_WORD(nr);
-        unsigned int old = *p;
-
-        *p = old & ~mask;
-        return (old & mask) != 0;
-}
-
-/* WARNING: non atomic and it can be reordered! */
-static inline int __test_and_change_bit(int nr,
-                                            volatile void *addr)
-{
-        unsigned int mask = BITOP_MASK(nr);
-        volatile unsigned int *p =
-                ((volatile unsigned int *)addr) + BITOP_WORD(nr);
-        unsigned int old = *p;
-
-        *p = old ^ mask;
-        return (old & mask) != 0;
-}
-
-/**
- * test_bit - Determine whether a bit is set
- * @nr: bit number to test
- * @addr: Address to start counting from
- */
-static inline int test_bit(int nr, const volatile void *addr)
-{
-        const volatile unsigned int *p = (const volatile unsigned int *)addr;
-        return 1UL & (p[BITOP_WORD(nr)] >> (nr & (BITOP_BITS_PER_WORD-1)));
-}
-
 #define arch_ffs(x)  ((x) ? 1 + __builtin_ctz(x) : 0)
 #define arch_ffsl(x) ((x) ? 1 + __builtin_ctzl(x) : 0)
 #define arch_fls(x)  ((x) ? 32 - __builtin_clz(x) : 0)
diff --git a/xen/arch/ppc/include/asm/bitops.h b/xen/arch/ppc/include/asm/bitops.h
index 8119b5ace8..ee0e58e2e8 100644
--- a/xen/arch/ppc/include/asm/bitops.h
+++ b/xen/arch/ppc/include/asm/bitops.h
@@ -15,11 +15,6 @@
 #define __set_bit(n, p)         set_bit(n, p)
 #define __clear_bit(n, p)       clear_bit(n, p)
 
-#define BITOP_BITS_PER_WORD     32
-#define BITOP_MASK(nr)          (1U << ((nr) % BITOP_BITS_PER_WORD))
-#define BITOP_WORD(nr)          ((nr) / BITOP_BITS_PER_WORD)
-#define BITS_PER_BYTE           8
-
 /* PPC bit number conversion */
 #define PPC_BITLSHIFT(be)    (BITS_PER_LONG - 1 - (be))
 #define PPC_BIT(bit)         (1UL << PPC_BITLSHIFT(bit))
@@ -69,17 +64,6 @@ static inline void clear_bit(int nr, volatile void *addr)
     clear_bits(BITOP_MASK(nr), (volatile unsigned int *)addr + BITOP_WORD(nr));
 }
 
-/**
- * test_bit - Determine whether a bit is set
- * @nr: bit number to test
- * @addr: Address to start counting from
- */
-static inline int test_bit(int nr, const volatile void *addr)
-{
-    const volatile unsigned int *p = addr;
-    return 1 & (p[BITOP_WORD(nr)] >> (nr & (BITOP_BITS_PER_WORD - 1)));
-}
-
 static inline unsigned int test_and_clear_bits(
     unsigned int mask,
     volatile unsigned int *p)
@@ -133,44 +117,6 @@ static inline int test_and_set_bit(unsigned int nr, volatile void *addr)
         (volatile unsigned int *)addr + BITOP_WORD(nr)) != 0;
 }
 
-/**
- * __test_and_set_bit - Set a bit and return its old value
- * @nr: Bit to set
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_set_bit(int nr, volatile void *addr)
-{
-    unsigned int mask = BITOP_MASK(nr);
-    volatile unsigned int *p = (volatile unsigned int *)addr + BITOP_WORD(nr);
-    unsigned int old = *p;
-
-    *p = old | mask;
-    return (old & mask) != 0;
-}
-
-/**
- * __test_and_clear_bit - Clear a bit and return its old value
- * @nr: Bit to clear
- * @addr: Address to count from
- *
- * This operation is non-atomic and can be reordered.
- * If two examples of this operation race, one can appear to succeed
- * but actually fail.  You must protect multiple accesses with a lock.
- */
-static inline int __test_and_clear_bit(int nr, volatile void *addr)
-{
-    unsigned int mask = BITOP_MASK(nr);
-    volatile unsigned int *p = (volatile unsigned int *)addr + BITOP_WORD(nr);
-    unsigned int old = *p;
-
-    *p = old & ~mask;
-    return (old & mask) != 0;
-}
-
 #define arch_ffs(x)  ((x) ? 1 + __builtin_ctz(x) : 0)
 #define arch_ffsl(x) ((x) ? 1 + __builtin_ctzl(x) : 0)
 #define arch_fls(x)  ((x) ? 32 - __builtin_clz(x) : 0)
diff --git a/xen/arch/ppc/include/asm/page.h b/xen/arch/ppc/include/asm/page.h
index 890e285051..6d4cd2611c 100644
--- a/xen/arch/ppc/include/asm/page.h
+++ b/xen/arch/ppc/include/asm/page.h
@@ -2,9 +2,9 @@
 #ifndef _ASM_PPC_PAGE_H
 #define _ASM_PPC_PAGE_H
 
+#include <xen/bitops.h>
 #include <xen/types.h>
 
-#include <asm/bitops.h>
 #include <asm/byteorder.h>
 
 #define PDE_VALID     PPC_BIT(0)
diff --git a/xen/arch/ppc/mm-radix.c b/xen/arch/ppc/mm-radix.c
index ab5a10695c..9055730997 100644
--- a/xen/arch/ppc/mm-radix.c
+++ b/xen/arch/ppc/mm-radix.c
@@ -1,11 +1,11 @@
 /* SPDX-License-Identifier: GPL-2.0-or-later */
+#include <xen/bitops.h>
 #include <xen/init.h>
 #include <xen/kernel.h>
 #include <xen/mm.h>
 #include <xen/types.h>
 #include <xen/lib.h>
 
-#include <asm/bitops.h>
 #include <asm/byteorder.h>
 #include <asm/early_printk.h>
 #include <asm/page.h>
diff --git a/xen/arch/x86/include/asm/bitops.h b/xen/arch/x86/include/asm/bitops.h
index aa71542e7b..f9aa60111f 100644
--- a/xen/arch/x86/include/asm/bitops.h
+++ b/xen/arch/x86/include/asm/bitops.h
@@ -19,9 +19,6 @@
 #define ADDR (*(volatile int *) addr)
 #define CONST_ADDR (*(const volatile int *) addr)
 
-extern void __bitop_bad_size(void);
-#define bitop_bad_size(addr) (sizeof(*(addr)) < 4)
-
 /**
  * set_bit - Atomically set a bit in memory
  * @nr: the bit to set
@@ -175,7 +172,7 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
 })
 
 /**
- * __test_and_set_bit - Set a bit and return its old value
+ * arch__test_and_set_bit - Set a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
@@ -183,7 +180,7 @@ static inline int test_and_set_bit(int nr, volatile void *addr)
  * If two examples of this operation race, one can appear to succeed
  * but actually fail.  You must protect multiple accesses with a lock.
  */
-static inline int __test_and_set_bit(int nr, void *addr)
+static inline int arch__test_and_set_bit(int nr, volatile void *addr)
 {
     int oldbit;
 
@@ -194,10 +191,7 @@ static inline int __test_and_set_bit(int nr, void *addr)
 
     return oldbit;
 }
-#define __test_and_set_bit(nr, addr) ({                 \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
-    __test_and_set_bit(nr, addr);                       \
-})
+#define arch__test_and_set_bit arch__test_and_set_bit
 
 /**
  * test_and_clear_bit - Clear a bit and return its old value
@@ -224,7 +218,7 @@ static inline int test_and_clear_bit(int nr, volatile void *addr)
 })
 
 /**
- * __test_and_clear_bit - Clear a bit and return its old value
+ * arch__test_and_clear_bit - Clear a bit and return its old value
  * @nr: Bit to set
  * @addr: Address to count from
  *
@@ -232,7 +226,7 @@ static inline int test_and_clear_bit(int nr, volatile void *addr)
  * If two examples of this operation race, one can appear to succeed
  * but actually fail.  You must protect multiple accesses with a lock.
  */
-static inline int __test_and_clear_bit(int nr, void *addr)
+static inline int arch__test_and_clear_bit(int nr, volatile void *addr)
 {
     int oldbit;
 
@@ -243,13 +237,10 @@ static inline int __test_and_clear_bit(int nr, void *addr)
 
     return oldbit;
 }
-#define __test_and_clear_bit(nr, addr) ({               \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
-    __test_and_clear_bit(nr, addr);                     \
-})
+#define arch__test_and_clear_bit arch__test_and_clear_bit
 
 /* WARNING: non atomic and it can be reordered! */
-static inline int __test_and_change_bit(int nr, void *addr)
+static inline int arch__test_and_change_bit(int nr, volatile void *addr)
 {
     int oldbit;
 
@@ -260,10 +251,7 @@ static inline int __test_and_change_bit(int nr, void *addr)
 
     return oldbit;
 }
-#define __test_and_change_bit(nr, addr) ({              \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
-    __test_and_change_bit(nr, addr);                    \
-})
+#define arch__test_and_change_bit arch__test_and_change_bit
 
 /**
  * test_and_change_bit - Change a bit and return its new value
@@ -307,8 +295,7 @@ static inline int variable_test_bit(int nr, const volatile void *addr)
     return oldbit;
 }
 
-#define test_bit(nr, addr) ({                           \
-    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
+#define arch_test_bit(nr, addr) ({                      \
     __builtin_constant_p(nr) ?                          \
         constant_test_bit(nr, addr) :                   \
         variable_test_bit(nr, addr);                    \
diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
index 6a5e28730a..cc09d273c9 100644
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -4,6 +4,19 @@
 #include <xen/compiler.h>
 #include <xen/types.h>
 
+#define BITOP_BITS_PER_WORD 32
+typedef uint32_t bitop_uint_t;
+
+#define BITOP_MASK(nr)  ((bitop_uint_t)1 << ((nr) % BITOP_BITS_PER_WORD))
+
+#define BITOP_WORD(nr)  ((nr) / BITOP_BITS_PER_WORD)
+
+#define BITS_PER_BYTE 8
+
+extern void __bitop_bad_size(void);
+
+#define bitop_bad_size(addr) (sizeof(*(addr)) < sizeof(bitop_uint_t))
+
 #include <asm/bitops.h>
 
 /*
@@ -24,6 +37,175 @@
 unsigned int __pure generic_ffsl(unsigned long x);
 unsigned int __pure generic_flsl(unsigned long x);
 
+/**
+ * generic__test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+generic__test_and_set_bit(int nr, volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    volatile bitop_uint_t *p = (volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+    bitop_uint_t old = *p;
+
+    *p = old | mask;
+    return (old & mask);
+}
+
+/**
+ * generic__test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+generic__test_and_clear_bit(int nr, volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    volatile bitop_uint_t *p = (volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+    bitop_uint_t old = *p;
+
+    *p = old & ~mask;
+    return (old & mask);
+}
+
+/**
+ * generic__test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+generic__test_and_change_bit(int nr, volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    volatile bitop_uint_t *p = (volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+    bitop_uint_t old = *p;
+
+    *p = old ^ mask;
+    return (old & mask);
+}
+
+/**
+ * generic_test_bit - Determine whether a bit is set
+ * @nr: bit number to test
+ * @addr: Address to start counting from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool generic_test_bit(int nr, const volatile void *addr)
+{
+    bitop_uint_t mask = BITOP_MASK(nr);
+    const volatile bitop_uint_t *p =
+        (const volatile bitop_uint_t *)addr + BITOP_WORD(nr);
+
+    return (*p & mask);
+}
+
+/**
+ * __test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+__test_and_set_bit(int nr, volatile void *addr)
+{
+#ifndef arch__test_and_set_bit
+#define arch__test_and_set_bit generic__test_and_set_bit
+#endif
+
+    return arch__test_and_set_bit(nr, addr);
+}
+#define __test_and_set_bit(nr, addr) ({             \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size(); \
+    __test_and_set_bit(nr, addr);                   \
+})
+
+/**
+ * __test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+__test_and_clear_bit(int nr, volatile void *addr)
+{
+#ifndef arch__test_and_clear_bit
+#define arch__test_and_clear_bit generic__test_and_clear_bit
+#endif
+
+    return arch__test_and_clear_bit(nr, addr);
+}
+#define __test_and_clear_bit(nr, addr) ({           \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size(); \
+    __test_and_clear_bit(nr, addr);                 \
+})
+
+/**
+ * __test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool
+__test_and_change_bit(int nr, volatile void *addr)
+{
+#ifndef arch__test_and_change_bit
+#define arch__test_and_change_bit generic__test_and_change_bit
+#endif
+
+    return arch__test_and_change_bit(nr, addr);
+}
+#define __test_and_change_bit(nr, addr) ({              \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
+    __test_and_change_bit(nr, addr);                    \
+})
+
+/**
+ * test_bit - Determine whether a bit is set
+ * @nr: bit number to test
+ * @addr: Address to start counting from
+ *
+ * This operation is non-atomic and can be reordered.
+ * If two examples of this operation race, one can appear to succeed
+ * but actually fail.  You must protect multiple accesses with a lock.
+ */
+static always_inline bool test_bit(int nr, const volatile void *addr)
+{
+#ifndef arch_test_bit
+#define arch_test_bit generic_test_bit
+#endif
+
+    return arch_test_bit(nr, addr);
+}
+#define test_bit(nr, addr) ({                           \
+    if ( bitop_bad_size(addr) ) __bitop_bad_size();     \
+    test_bit(nr, addr);                                 \
+})
+
 static always_inline __pure unsigned int ffs(unsigned int x)
 {
     if ( __builtin_constant_p(x) )
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:05 2024
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v13 02/10] xen/riscv: introduce bitops.h
Date: Tue, 25 Jun 2024 15:51:44 +0200
Message-ID: <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Taken from Linux-6.4.0-rc1

Xen's bitops.h is built from several Linux headers:
* linux/arch/include/asm/bitops.h:
  * The following functions were removed as they aren't used in Xen:
        * test_and_set_bit_lock
        * clear_bit_unlock
        * __clear_bit_unlock
  * The following functions were renamed to match how they are
    used by common code:
        * __test_and_set_bit
        * __test_and_clear_bit
  * The declarations and implementations of the following functions
    were updated to make the Xen build happy:
        * clear_bit
        * set_bit
        * __test_and_clear_bit
        * __test_and_set_bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V11-V13:
 - Nothing changed. Only rebase was done.
---
Changes in V10:
 - update the error message BITS_PER_LONG -> BITOP_BITS_PER_WORD
---
Changes in V9:
 - add Acked-by: Jan Beulich <jbeulich@suse.com>
 - drop redefinition of bitop_uint_t in asm/types.h as some operations in Xen common code expect
   to work with 32-bit quantities.
 - s/BITS_PER_LONG/BITOP_BITS_PER_WORD in asm/bitops.h around __AMO() macros.
---
Changes in V8:
 - define bitop_uint_t in <asm/types.h> after the changes in patch related to introduction of
   "introduce generic non-atomic test_*bit()".
 - drop duplicated __set_bit() and __clear_bit().
 - drop duplicated comment: /* Based on linux/arch/include/asm/bitops.h */.
 - update type of res and mask in test_and_op_bit_ord(): unsigned long -> bitop_uint_t.
 - drop 1 padding blank in test_and_op_bit_ord().
 - update the definitions of test_and_set_bit(), test_and_clear_bit(), test_and_change_bit():
   change the return type to bool.
 - change addr argument type of test_and_change_bit(): unsigned long * -> void *.
 - move test_and_change_bit() closer to the other test_and_*() functions.
 - Code style fixes: tabs -> space.
 - s/#undef __op_bit/#undef op_bit/.
 - update the commit message: delete information about generic-non-atomic.h changes as it is now
   a separate patch.
---
Changes in V7:
 - Update the commit message.
 - Drop "__" for __op_bit and __op_bit_ord as they are atomic.
 - add comment above __set_bit and __clear_bit about why they are defined as atomic.
 - align bitops_uint_t with __AMO().
 - make changes after the generic non-atomic test_*bit() helpers were changed.
 - s/__asm__ __volatile__/asm volatile
---
Changes in V6:
 - rebase clean-ups were done: drop unused asm-generic includes
---
Changes in V5:
 - new patch
---
 xen/arch/riscv/include/asm/bitops.h | 137 ++++++++++++++++++++++++++++
 1 file changed, 137 insertions(+)
 create mode 100644 xen/arch/riscv/include/asm/bitops.h

diff --git a/xen/arch/riscv/include/asm/bitops.h b/xen/arch/riscv/include/asm/bitops.h
new file mode 100644
index 0000000000..7f7af3fda1
--- /dev/null
+++ b/xen/arch/riscv/include/asm/bitops.h
@@ -0,0 +1,137 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright (C) 2012 Regents of the University of California */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#include <asm/system.h>
+
+#if BITOP_BITS_PER_WORD == 64
+#define __AMO(op)   "amo" #op ".d"
+#elif BITOP_BITS_PER_WORD == 32
+#define __AMO(op)   "amo" #op ".w"
+#else
+#error "Unexpected BITOP_BITS_PER_WORD"
+#endif
+
+/* Based on linux/arch/include/asm/bitops.h */
+
+/*
+ * Non-atomic bit manipulation.
+ *
+ * Implemented using atomics to be interrupt safe. Could alternatively
+ * implement with local interrupt masking.
+ */
+#define __set_bit(n, p)      set_bit(n, p)
+#define __clear_bit(n, p)    clear_bit(n, p)
+
+#define test_and_op_bit_ord(op, mod, nr, addr, ord)     \
+({                                                      \
+    bitop_uint_t res, mask;                             \
+    mask = BITOP_MASK(nr);                              \
+    asm volatile (                                      \
+        __AMO(op) #ord " %0, %2, %1"                    \
+        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])       \
+        : "r" (mod(mask))                               \
+        : "memory");                                    \
+    ((res & mask) != 0);                                \
+})
+
+#define op_bit_ord(op, mod, nr, addr, ord)      \
+    asm volatile (                              \
+        __AMO(op) #ord " zero, %1, %0"          \
+        : "+A" (addr[BITOP_WORD(nr)])           \
+        : "r" (mod(BITOP_MASK(nr)))             \
+        : "memory");
+
+#define test_and_op_bit(op, mod, nr, addr)    \
+    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
+#define op_bit(op, mod, nr, addr) \
+    op_bit_ord(op, mod, nr, addr, )
+
+/* Bitmask modifiers */
+#define NOP(x)    (x)
+#define NOT(x)    (~(x))
+
+/**
+ * test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ */
+static inline bool test_and_set_bit(int nr, volatile void *p)
+{
+    volatile bitop_uint_t *addr = p;
+
+    return test_and_op_bit(or, NOP, nr, addr);
+}
+
+/**
+ * test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ */
+static inline bool test_and_clear_bit(int nr, volatile void *p)
+{
+    volatile bitop_uint_t *addr = p;
+
+    return test_and_op_bit(and, NOT, nr, addr);
+}
+
+/**
+ * test_and_change_bit - Toggle (change) a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline bool test_and_change_bit(int nr, volatile void *p)
+{
+    volatile bitop_uint_t *addr = p;
+
+    return test_and_op_bit(xor, NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile void *p)
+{
+    volatile bitop_uint_t *addr = p;
+
+    op_bit(or, NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ */
+static inline void clear_bit(int nr, volatile void *p)
+{
+    volatile bitop_uint_t *addr = p;
+
+    op_bit(and, NOT, nr, addr);
+}
+
+#undef test_and_op_bit
+#undef op_bit
+#undef NOP
+#undef NOT
+#undef __AMO
+
+#endif /* _ASM_RISCV_BITOPS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747734.1155226 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6aZ-0007LG-Qu; Tue, 25 Jun 2024 13:52:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747734.1155226; Tue, 25 Jun 2024 13:52:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6aZ-0007L7-Mu; Tue, 25 Jun 2024 13:52:03 +0000
Received: by outflank-mailman (input) for mailman id 747734;
 Tue, 25 Jun 2024 13:52:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6aY-0006cc-JM
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:02 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 190958a6-32fa-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:52:01 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-a72459d8d6aso315885466b.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:52:01 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.51.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:52:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 190958a6-32fa-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323521; x=1719928321; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=iACHYw0ViqdZEm1bShLKi0SeTem6kMaBGRGTD9jGGV4=;
        b=ayuQznYTtjidzIXd98NNto4syzQW/jYmpQn9o2Z0qRkQLZkpHD45+wMWq5lyoqdg/W
         +JkRUUSeqQ5a9kLBFYjOzPRL2X872OC0scrdL6kRVN6gKZbrpua/Y6gA+9I3X68KhRk6
         Ng72SdJo6ghYvSOVWy2bbe/EQQG4AkoSAH9CQnAla82sMM1FsE5dp11jB8P+lb5D0HyP
         hTDzMtzt04Fr0SA/KJlnb9TjKdaDJCqp0/3SW8C5fveiiGiPrz4LApmJ4YV98KFt3lue
         Ua1IccMze70+bHb/tYkLBoscunGB1UccmxI5okIuKKco1x/IgjhNvITeLgHKk8XEDNl3
         h59Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323521; x=1719928321;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=iACHYw0ViqdZEm1bShLKi0SeTem6kMaBGRGTD9jGGV4=;
        b=JjmX0Ngm6il5HqE5shEh3xf3PvitMf/DcLxk6jzpclY+XXXvaNm2iEz/X8FPriO7Fs
         gropWPAOmZRIQGprwuRrKXU+g/xyhzx+ht93u47w5r1mqb/d6Mat/wvkPmG+UkR6h0zO
         W+ugLwB0+RwC6kE10mnl2JoFPUy3pBHIDejKohYAQQrIYNZYSaecO0ATSInIhH6Uhv2a
         +Q/7ZU0IeTmQG9J9g9tiDCzrX3y8H0Yn7Rj3i8ZFTr4znWb3SI069QJPju1yZEiDtz3L
         x22dJOrgHWFDKuOTbUKsC75LDLoS9pZEc3/SIuhA/keayjIdKNTS452dP0ts9w+HwhKp
         qt3A==
X-Gm-Message-State: AOJu0YzLfyYmIsIwbiWVJ4240pecyOoca/z2nj0YoX+xujAhNf5nMXIY
	6jysKyEIbaxrhyiGCk41IAK4DoO+GmzZsiY8KGhBWFONAe+4ZF7J2/QsR1bN
X-Google-Smtp-Source: AGHT+IGV61VTC6egFUh4af5viUarUPSdzr9VVX3CDvva449gjKX/up3ypiw6QSgceVhOOPRd3yYL4g==
X-Received: by 2002:a17:907:a64c:b0:a72:4d91:6223 with SMTP id a640c23a62f3a-a724d916363mr513226366b.62.1719323520449;
        Tue, 25 Jun 2024 06:52:00 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v13 03/10] xen/riscv: add minimal stuff to mm.h to build full Xen
Date: Tue, 25 Jun 2024 15:51:45 +0200
Message-ID: <3d44cf567f5c361cce2713808bcea1b1b6f4f032.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V13:
 - redefine mfn_to_page() and page_to_mfn().
 - add inclusion of <xen/sections.h> after rebasing on top of staging.
---
Changes in V8-V12:
 - Nothing changed. Only rebase was done.
---
Changes in V7:
 - update argument type of maddr_to_virt() function: unsigned long -> paddr_t
 - rename argument of PFN_ORDER(): pfn -> pg.
 - add Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V6:
 - drop __virt_to_maddr() (transformed to a macro) and __maddr_to_virt() (renamed to maddr_to_virt()).
 - parenthesize va in definition of vmap_to_mfn().
 - Code style fixes.
---
Changes in V5:
 - update the comment around "struct domain *domain;" : zero -> NULL
 - fix ident. for unsigned long val;
 - put page_to_virt() and virt_to_page() close to each other.
 - drop unnecessary leading underscore
 - drop a space before the comment: /* Count of uses of this frame as its current type. */
 - drop the comment about a page 'not as a shadow'; it is not necessary for RISC-V
---
Changes in V4:
 - update the argument name of the PFN_ORDER() macro.
 - drop pad at the end of 'struct page_info'.
 - Change message -> subject in "Changes in V3"
 - delete duplicated macros from riscv/mm.h
 - fix indentation in struct page_info
 - align comment for PGC_ macros
 - update definitions of domain_set_alloc_bitsize() and domain_clamp_alloc_bitsize()
 - drop unnecessary comments.
 - s/BUG/BUG_ON("...")
 - define __virt_to_maddr, __maddr_to_virt as stubs
 - add inclusion of xen/mm-frame.h for mfn_x and others
 - include "xen/mm.h" instead of "asm/mm.h" to fix compilation issues:
	 In file included from arch/riscv/setup.c:7:
	./arch/riscv/include/asm/mm.h:60:28: error: field 'list' has incomplete type
	   60 |     struct page_list_entry list;
	      |                            ^~~~
	./arch/riscv/include/asm/mm.h:81:43: error: 'MAX_ORDER' undeclared here (not in a function)
	   81 |                 unsigned long first_dirty:MAX_ORDER + 1;
	      |                                           ^~~~~~~~~
	./arch/riscv/include/asm/mm.h:81:31: error: bit-field 'first_dirty' width not an integer constant
	   81 |                 unsigned long first_dirty:MAX_ORDER + 1;
 - Define __virt_to_mfn() and __mfn_to_virt() using maddr_to_mfn() and mfn_to_maddr().
---
Changes in V3:
 - update the commit title
 - introduce DIRECTMAP_VIRT_START.
 - drop changes related to pfn_to_paddr() and paddr_to_pfn() as they were removed in
   [PATCH v2 32/39] xen/riscv: add minimal stuff to asm/page.h to build full Xen
 - code style fixes.
 - drop get_page_nr() and put_page_nr() as they aren't needed for the time being
 - drop CONFIG_STATIC_MEMORY related things
 - code style fixes
---
Changes in V2:
 - define stub for arch_get_dma_bitsize(void)
---
 xen/arch/riscv/include/asm/mm.h | 235 ++++++++++++++++++++++++++++++++
 xen/arch/riscv/mm.c             |   2 +-
 xen/arch/riscv/setup.c          |   2 +-
 3 files changed, 237 insertions(+), 2 deletions(-)

diff --git a/xen/arch/riscv/include/asm/mm.h b/xen/arch/riscv/include/asm/mm.h
index 07c7a0abba..8f05937b0d 100644
--- a/xen/arch/riscv/include/asm/mm.h
+++ b/xen/arch/riscv/include/asm/mm.h
@@ -3,11 +3,241 @@
 #ifndef _ASM_RISCV_MM_H
 #define _ASM_RISCV_MM_H
 
+#include <public/xen.h>
+#include <xen/bug.h>
+#include <xen/mm-frame.h>
+#include <xen/pdx.h>
+#include <xen/types.h>
+
 #include <asm/page-bits.h>
 
 #define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
 #define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
 
+#define paddr_to_pdx(pa)    mfn_to_pdx(maddr_to_mfn(pa))
+#define gfn_to_gaddr(gfn)   pfn_to_paddr(gfn_x(gfn))
+#define gaddr_to_gfn(ga)    _gfn(paddr_to_pfn(ga))
+#define mfn_to_maddr(mfn)   pfn_to_paddr(mfn_x(mfn))
+#define maddr_to_mfn(ma)    _mfn(paddr_to_pfn(ma))
+#define vmap_to_mfn(va)     maddr_to_mfn(virt_to_maddr((vaddr_t)(va)))
+#define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
+
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    BUG_ON("unimplemented");
+    return NULL;
+}
+
+#define virt_to_maddr(va) ({ BUG_ON("unimplemented"); 0; })
+
+/* Convert between Xen-heap virtual addresses and machine frame numbers. */
+#define __virt_to_mfn(va)  mfn_x(maddr_to_mfn(virt_to_maddr(va)))
+#define __mfn_to_virt(mfn) maddr_to_virt(mfn_to_maddr(_mfn(mfn)))
+
+/*
+ * We define non-underscored wrappers for the above conversion functions.
+ * These are overridden in various source files while the underscored
+ * versions remain intact.
+ */
+#define virt_to_mfn(va)     __virt_to_mfn(va)
+#define mfn_to_virt(mfn)    __mfn_to_virt(mfn)
+
+struct page_info
+{
+    /* Each frame can be threaded onto a doubly-linked list. */
+    struct page_list_entry list;
+
+    /* Reference count and various PGC_xxx flags and fields. */
+    unsigned long count_info;
+
+    /* Context-dependent fields follow... */
+    union {
+        /* Page is in use: ((count_info & PGC_count_mask) != 0). */
+        struct {
+            /* Type reference count and various PGT_xxx flags and fields. */
+            unsigned long type_info;
+        } inuse;
+
+        /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
+        union {
+            struct {
+                /*
+                 * Index of the first *possibly* unscrubbed page in the buddy.
+                 * One more bit than maximum possible order to accommodate
+                 * INVALID_DIRTY_IDX.
+                 */
+#define INVALID_DIRTY_IDX ((1UL << (MAX_ORDER + 1)) - 1)
+                unsigned long first_dirty:MAX_ORDER + 1;
+
+                /* Do TLBs need flushing for safety before next page use? */
+                bool need_tlbflush:1;
+
+#define BUDDY_NOT_SCRUBBING    0
+#define BUDDY_SCRUBBING        1
+#define BUDDY_SCRUB_ABORT      2
+                unsigned long scrub_state:2;
+            };
+
+            unsigned long val;
+        } free;
+    } u;
+
+    union {
+        /* Page is in use */
+        struct {
+            /* Owner of this page (NULL if page is anonymous). */
+            struct domain *domain;
+        } inuse;
+
+        /* Page is on a free list. */
+        struct {
+            /* Order-size of the free chunk this page is the head of. */
+            unsigned int order;
+        } free;
+    } v;
+
+    union {
+        /*
+         * Timestamp from 'TLB clock', used to avoid extra safety flushes.
+         * Only valid for: a) free pages, and b) pages with zero type count
+         */
+        uint32_t tlbflush_timestamp;
+    };
+};
+
+#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
+
+/* Convert between machine frame numbers and page-info structures. */
+#define mfn_to_page(mfn)    (frame_table + mfn_x(mfn))
+#define page_to_mfn(pg)     _mfn((unsigned long)((pg) - frame_table))
+
+static inline void *page_to_virt(const struct page_info *pg)
+{
+    return mfn_to_virt(mfn_x(page_to_mfn(pg)));
+}
+
+/* Convert between Xen-heap virtual addresses and page-info structures. */
+static inline struct page_info *virt_to_page(const void *v)
+{
+    BUG_ON("unimplemented");
+    return NULL;
+}
+
+/*
+ * Common code requires get_page_type and put_page_type.
+ * We don't care about typecounts so we just do the minimum to make it
+ * happy.
+ */
+static inline int get_page_type(struct page_info *page, unsigned long type)
+{
+    return 1;
+}
+
+static inline void put_page_type(struct page_info *page)
+{
+}
+
+static inline void put_page_and_type(struct page_info *page)
+{
+    put_page_type(page);
+    put_page(page);
+}
+
+/*
+ * RISC-V does not have an M2P, but common code expects a handful of
+ * M2P-related defines and functions. Provide dummy versions of these.
+ */
+#define INVALID_M2P_ENTRY        (~0UL)
+#define SHARED_M2P_ENTRY         (~0UL - 1UL)
+#define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
+
+#define set_gpfn_from_mfn(mfn, pfn) do { (void)(mfn), (void)(pfn); } while (0)
+#define mfn_to_gfn(d, mfn) ((void)(d), _gfn(mfn_x(mfn)))
+
+#define PDX_GROUP_SHIFT (PAGE_SHIFT + VPN_BITS)
+
+static inline unsigned long domain_get_maximum_gpfn(struct domain *d)
+{
+    BUG_ON("unimplemented");
+    return 0;
+}
+
+static inline long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    BUG_ON("unimplemented");
+    return 0;
+}
+
+/*
+ * On RISC-V, all the RAM is currently direct mapped in Xen.
+ * Hence always return true.
+ */
+static inline bool arch_mfns_in_directmap(unsigned long mfn, unsigned long nr)
+{
+    return true;
+}
+
+#define PG_shift(idx)   (BITS_PER_LONG - (idx))
+#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
+
+#define PGT_none          PG_mask(0, 1)  /* no special uses of this page   */
+#define PGT_writable_page PG_mask(1, 1)  /* has writable mappings?         */
+#define PGT_type_mask     PG_mask(1, 1)  /* Bits 31 or 63.                 */
+
+/* Count of uses of this frame as its current type. */
+#define PGT_count_width   PG_shift(2)
+#define PGT_count_mask    ((1UL << PGT_count_width) - 1)
+
+/*
+ * Page needs to be scrubbed. Since this bit can only be set on a page that is
+ * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
+ */
+#define _PGC_need_scrub   _PGC_allocated
+#define PGC_need_scrub    PGC_allocated
+
+/* Cleared when the owning guest 'frees' this page. */
+#define _PGC_allocated    PG_shift(1)
+#define PGC_allocated     PG_mask(1, 1)
+/* Page is Xen heap? */
+#define _PGC_xen_heap     PG_shift(2)
+#define PGC_xen_heap      PG_mask(1, 2)
+/* Page is broken? */
+#define _PGC_broken       PG_shift(7)
+#define PGC_broken        PG_mask(1, 7)
+/* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
+#define PGC_state         PG_mask(3, 9)
+#define PGC_state_inuse   PG_mask(0, 9)
+#define PGC_state_offlining PG_mask(1, 9)
+#define PGC_state_offlined PG_mask(2, 9)
+#define PGC_state_free    PG_mask(3, 9)
+#define page_state_is(pg, st) (((pg)->count_info&PGC_state) == PGC_state_##st)
+
+/* Count of references to this frame. */
+#define PGC_count_width   PG_shift(9)
+#define PGC_count_mask    ((1UL << PGC_count_width) - 1)
+
+#define _PGC_extra        PG_shift(10)
+#define PGC_extra         PG_mask(1, 10)
+
+#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
+#define is_xen_heap_mfn(mfn) \
+    (mfn_valid(mfn) && is_xen_heap_page(mfn_to_page(mfn)))
+
+#define is_xen_fixed_mfn(mfn)                                   \
+    ((mfn_to_maddr(mfn) >= virt_to_maddr((vaddr_t)_start)) &&   \
+     (mfn_to_maddr(mfn) <= virt_to_maddr((vaddr_t)_end - 1)))
+
+#define page_get_owner(p)    (p)->v.inuse.domain
+#define page_set_owner(p, d) ((p)->v.inuse.domain = (d))
+
+/* TODO: implement */
+#define mfn_valid(mfn) ({ (void)(mfn); 0; })
+
+#define domain_set_alloc_bitsize(d) ((void)(d))
+#define domain_clamp_alloc_bitsize(d, b) ((void)(d), (b))
+
+#define PFN_ORDER(pg) ((pg)->v.free.order)
+
 extern unsigned char cpu0_boot_stack[];
 
 void setup_initial_pagetables(void);
@@ -20,4 +250,9 @@ unsigned long calc_phys_offset(void);
 
 void turn_on_mmu(unsigned long ra);
 
+static inline unsigned int arch_get_dma_bitsize(void)
+{
+    return 32; /* TODO */
+}
+
 #endif /* _ASM_RISCV_MM_H */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index 3ebaf6da01..ae381e9581 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -4,13 +4,13 @@
 #include <xen/init.h>
 #include <xen/kernel.h>
 #include <xen/macros.h>
+#include <xen/mm.h>
 #include <xen/pfn.h>
 #include <xen/sections.h>
 
 #include <asm/early_printk.h>
 #include <asm/csr.h>
 #include <asm/current.h>
-#include <asm/mm.h>
 #include <asm/page.h>
 #include <asm/processor.h>
 
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 6593f601c1..98a94c4c48 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -2,9 +2,9 @@
 
 #include <xen/compile.h>
 #include <xen/init.h>
+#include <xen/mm.h>
 
 #include <asm/early_printk.h>
-#include <asm/mm.h>
 
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747735.1155235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ab-0007at-2t; Tue, 25 Jun 2024 13:52:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747735.1155235; Tue, 25 Jun 2024 13:52:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6aa-0007am-Va; Tue, 25 Jun 2024 13:52:04 +0000
Received: by outflank-mailman (input) for mailman id 747735;
 Tue, 25 Jun 2024 13:52:03 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6aZ-0006cc-Ft
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:03 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19a5452f-32fa-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:52:02 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57cb9efd8d1so10629785a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:52:02 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.52.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:52:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19a5452f-32fa-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323522; x=1719928322; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=1AVZBIR+suXtx8kqv3dQmGXkqD1Kmt2mq2crHeiMQ3k=;
        b=lkXVHUSsX0MWakxt71MWu4ORWMGEW6NqSwP77qDJuEsHxI5qjoDC8EgANGd2UQQwh2
         gJQbbEsuxqC3wuU0fBenPz13Lv6nByWUeZa0ZofPYQMa2zOOIHtZrb/c7A3V/PodtSY2
         l+KguGeo4S7hCbH89kbZMmVB4CliAS628bmfXctHZNXltH+S9yd65ZYQCglPNNFhLuj2
         bfuPOgtfF9QujmmIz7+KH5b1eCY4lWbfs6RN0BjMurNRVQBLb2uJyl/lrBrEK7HNbjV+
         XU5gGRGrf/vbq9uCfNfmKd+HftXcpMmoHA6pQCJ4iCjutkeDKv5SQEBiL0g6epWB4IQF
         imwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323522; x=1719928322;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=1AVZBIR+suXtx8kqv3dQmGXkqD1Kmt2mq2crHeiMQ3k=;
        b=Y6JzMJLE2ftTJ/RkK72L9irxmWQEb7lynfqhAjW1nTSzlRhjdUXDwhfR2cveLM7HiZ
         g6nfahl2PyEt4jFW5z7QxH5aUVcvgZo6+IQkgKtSZIN6fodzEq/zNSzM3nR9UZSzc1+X
         70TxIanZBxInnUv9lyCyVaYywmVrHOw9WdS2DHILVeum0oaKIKhVS6hRl6wBCgLxMxtx
         RWwtEXV7onzzS8nYagffpufC/avrAW4jMXSc7XuyZp/JXjPRV5tmYg/EXjGAKCIuscpq
         PglvYPaTIxcGKaS0lzA1C0nIIxs2PHeXPDxkNC8G6UnVprZ3C6gk1CgrqJXTe/u+9TMt
         1rIQ==
X-Gm-Message-State: AOJu0YxPOme+TwMKmouFo5CbsSDut/jsvnpRaYogXgCVBr92unThGvje
	FoOH89uXKviOJWy6JnnwrKUC7px3x65x7rZHHw5EKALQxvXaDoT2ml7+YBMQ
X-Google-Smtp-Source: AGHT+IH7MXpBAEKMVR4DhKpC524fQHii0Zfz6kr+7EKtldXjK6ZqcKCiqWGj+CKq/Ee/GfRunosc5A==
X-Received: by 2002:a17:906:1812:b0:a6f:e8a5:e8a6 with SMTP id a640c23a62f3a-a700e706f47mr718324966b.23.1719323521664;
        Tue, 25 Jun 2024 06:52:01 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v13 04/10] xen/riscv: add minimal amount of stubs to build full Xen
Date: Tue, 25 Jun 2024 15:51:46 +0200
Message-ID: <cc59a9f39e22b8efd017e91ca10fe1f230b3ec53.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V13:
 - drop no_irq_type because the patch series ([PATCH for-4.19 0/2] arch/irq: Untangle no_irq_type)
   was merged to staging.
 - Drop unnecessary stubs after rebasing on top of staging ([PATCH for-4.19 0/2] arch/irq: Untangle no_irq_type)
 - implement get_upper_mfn_bound() as BUG_ON("unimplemented") so as not to introduce max_page, which will be
   dropped in the next patch (xen/riscv: enable full Xen build)
 - remove trailing space in stubs.c after the comment /* guest_access.h */
 - drop frametable_base_pdx
 - drop frametable_virt_end
---
Changes in V7-V12:
 - Only rebase was done.
---
Changes in V6:
 - update the commit in stubs.c around /* ... common/irq.c ... */
 - add Acked-by: Jan Beulich <jbeulich@suse.com>
---
Changes in V5:
 - drop unrelated changes
 - assert_failed("unimplemented...") changed to BUG_ON()
---
Changes in V4:
  - added new stubs which are necessary for compilation after rebase: __cpu_up(), __cpu_disable(), __cpu_die()
    from smpboot.c
  - bring back changes related to printk() in early_printk() as they should be removed in the next
    patch to avoid a compilation error.
  - update definition of cpu_khz: __read_mostly -> __ro_after_init.
  - drop vm_event_reset_vmtrace(). It is defined in asm-generic/vm_event.h.
  - move vm_event_*() functions from stubs.c to riscv/vm_event.c.
  - s/BUG/BUG_ON("unimplemented") in stubs.c
  - bring back irq_actor_none() as common/irq.c isn't compiled at this moment,
    so the function is needed to avoid a compilation error.
  - define max_page to avoid a compilation error; it will be removed as soon as common/page_alloc.c
    is compiled.
---
Changes in V3:
 - code style fixes.
 - update the attribute of frametable_base_pdx and frametable_virt_end to __ro_after_init
   instead of __read_mostly.
 - use BUG() instead of assert_failed/WARN for newly introduced stubs.
 - drop "#include <public/vm_event.h>" in stubs.c and use forward declaration instead.
 - drop ack_node() and end_node() as they aren't used now.
---
Changes in V2:
 - define udelay stub
 - remove 'select HAS_PDX' from RISC-V Kconfig because of
   https://lore.kernel.org/xen-devel/20231006144405.1078260-1-andrew.cooper3@citrix.com/
---
 xen/arch/riscv/Makefile |   1 +
 xen/arch/riscv/mm.c     |  41 ++++
 xen/arch/riscv/setup.c  |   8 +
 xen/arch/riscv/stubs.c  | 418 ++++++++++++++++++++++++++++++++++++++++
 xen/arch/riscv/traps.c  |  25 +++
 5 files changed, 493 insertions(+)
 create mode 100644 xen/arch/riscv/stubs.c

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 1ed1a8369b..60afbc0ad9 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -4,6 +4,7 @@ obj-y += mm.o
 obj-$(CONFIG_RISCV_64) += riscv64/
 obj-y += sbi.o
 obj-y += setup.o
+obj-y += stubs.o
 obj-y += traps.o
 obj-y += vm_event.o
 
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
index ae381e9581..7d09e781bf 100644
--- a/xen/arch/riscv/mm.c
+++ b/xen/arch/riscv/mm.c
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 
+#include <xen/bug.h>
 #include <xen/compiler.h>
 #include <xen/init.h>
 #include <xen/kernel.h>
@@ -294,3 +295,43 @@ unsigned long __init calc_phys_offset(void)
     phys_offset = load_start - XEN_VIRT_START;
     return phys_offset;
 }
+
+void put_page(struct page_info *page)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_dump_shared_mem_info(void)
+{
+    BUG_ON("unimplemented");
+}
+
+int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
+{
+    BUG_ON("unimplemented");
+    return -1;
+}
+
+int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
+                              union add_to_physmap_extra extra,
+                              unsigned long idx, gfn_t gfn)
+{
+    BUG_ON("unimplemented");
+
+    return 0;
+}
+
+int destroy_xen_mappings(unsigned long s, unsigned long e)
+{
+    BUG_ON("unimplemented");
+    return -1;
+}
+
+int map_pages_to_xen(unsigned long virt,
+                     mfn_t mfn,
+                     unsigned long nr_mfns,
+                     unsigned int flags)
+{
+    BUG_ON("unimplemented");
+    return -1;
+}
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
index 98a94c4c48..8bb5bdb2ae 100644
--- a/xen/arch/riscv/setup.c
+++ b/xen/arch/riscv/setup.c
@@ -1,11 +1,19 @@
 /* SPDX-License-Identifier: GPL-2.0-only */
 
+#include <xen/bug.h>
 #include <xen/compile.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 
+#include <public/version.h>
+
 #include <asm/early_printk.h>
 
+void arch_get_xen_caps(xen_capabilities_info_t *info)
+{
+    BUG_ON("unimplemented");
+}
+
 /* Xen stack for bringing up the first CPU. */
 unsigned char __initdata cpu0_boot_stack[STACK_SIZE]
     __aligned(STACK_SIZE);
diff --git a/xen/arch/riscv/stubs.c b/xen/arch/riscv/stubs.c
new file mode 100644
index 0000000000..b67d99729f
--- /dev/null
+++ b/xen/arch/riscv/stubs.c
@@ -0,0 +1,418 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+#include <xen/cpumask.h>
+#include <xen/domain.h>
+#include <xen/irq.h>
+#include <xen/nodemask.h>
+#include <xen/sections.h>
+#include <xen/time.h>
+#include <public/domctl.h>
+
+#include <asm/current.h>
+
+/* smpboot.c */
+
+cpumask_t cpu_online_map;
+cpumask_t cpu_present_map;
+cpumask_t cpu_possible_map;
+
+/* ID of the PCPU we're running on */
+DEFINE_PER_CPU(unsigned int, cpu_id);
+/* XXX these seem awfully x86ish... */
+/* representing HT siblings of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask);
+/* representing HT and core siblings of each logical CPU */
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
+
+nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+
+/* time.c */
+
+unsigned long __ro_after_init cpu_khz;  /* CPU clock frequency in kHz. */
+
+s_time_t get_s_time(void)
+{
+    BUG_ON("unimplemented");
+}
+
+int reprogram_timer(s_time_t timeout)
+{
+    BUG_ON("unimplemented");
+}
+
+void send_timer_event(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void domain_set_time_offset(struct domain *d, int64_t time_offset_seconds)
+{
+    BUG_ON("unimplemented");
+}
+
+/* shutdown.c */
+
+void machine_restart(unsigned int delay_millisecs)
+{
+    BUG_ON("unimplemented");
+}
+
+void machine_halt(void)
+{
+    BUG_ON("unimplemented");
+}
+
+/* domctl.c */
+
+long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_get_domain_info(const struct domain *d,
+                          struct xen_domctl_getdomaininfo *info)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
+{
+    BUG_ON("unimplemented");
+}
+
+/* monitor.c */
+
+int arch_monitor_domctl_event(struct domain *d,
+                              struct xen_domctl_monitor_op *mop)
+{
+    BUG_ON("unimplemented");
+}
+
+/* smp.c */
+
+void arch_flush_tlb_mask(const cpumask_t *mask)
+{
+    BUG_ON("unimplemented");
+}
+
+void smp_send_event_check_mask(const cpumask_t *mask)
+{
+    BUG_ON("unimplemented");
+}
+
+void smp_send_call_function_mask(const cpumask_t *mask)
+{
+    BUG_ON("unimplemented");
+}
+
+/* irq.c */
+
+struct pirq *alloc_pirq_struct(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
+{
+    BUG_ON("unimplemented");
+}
+
+void pirq_guest_unbind(struct domain *d, struct pirq *pirq)
+{
+    BUG_ON("unimplemented");
+}
+
+void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask)
+{
+    BUG_ON("unimplemented");
+}
+
+void irq_ack_none(struct irq_desc *desc)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_init_one_irq_desc(struct irq_desc *desc)
+{
+    BUG_ON("unimplemented");
+}
+
+void smp_send_state_dump(unsigned int cpu)
+{
+    BUG_ON("unimplemented");
+}
+
+/* domain.c */
+
+DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
+unsigned long __per_cpu_offset[NR_CPUS];
+
+void context_switch(struct vcpu *prev, struct vcpu *next)
+{
+    BUG_ON("unimplemented");
+}
+
+void continue_running(struct vcpu *same)
+{
+    BUG_ON("unimplemented");
+}
+
+void sync_local_execstate(void)
+{
+    BUG_ON("unimplemented");
+}
+
+void sync_vcpu_execstate(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void startup_cpu_idle_loop(void)
+{
+    BUG_ON("unimplemented");
+}
+
+void free_domain_struct(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void dump_pageframe_info(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void free_vcpu_struct(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_vcpu_create(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_vcpu_destroy(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void vcpu_switch_to_aarch64_mode(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_domain_create(struct domain *d,
+                       struct xen_domctl_createdomain *config,
+                       unsigned int flags)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_domain_teardown(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_domain_destroy(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_domain_shutdown(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_domain_pause(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_domain_unpause(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_domain_soft_reset(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_domain_creation_finished(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_set_info_guest(struct vcpu *v, vcpu_guest_context_u c)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    BUG_ON("unimplemented");
+}
+
+int arch_vcpu_reset(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+int domain_relinquish_resources(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_dump_domain_info(struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_dump_vcpu_info(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void vcpu_mark_events_pending(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void vcpu_update_evtchn_irq(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void vcpu_block_unless_event_pending(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void vcpu_kick(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+struct domain *alloc_domain_struct(void)
+{
+    BUG_ON("unimplemented");
+}
+
+struct vcpu *alloc_vcpu_struct(const struct domain *d)
+{
+    BUG_ON("unimplemented");
+}
+
+unsigned long
+hypercall_create_continuation(unsigned int op, const char *format, ...)
+{
+    BUG_ON("unimplemented");
+}
+
+int __init parse_arch_dom0_param(const char *s, const char *e)
+{
+    BUG_ON("unimplemented");
+}
+
+/* guestcopy.c */
+
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned int len)
+{
+    BUG_ON("unimplemented");
+}
+
+unsigned long raw_copy_from_guest(void *to, const void __user *from,
+                                  unsigned int len)
+{
+    BUG_ON("unimplemented");
+}
+
+/* sysctl.c */
+
+long arch_do_sysctl(struct xen_sysctl *sysctl,
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
+{
+    BUG_ON("unimplemented");
+}
+
+void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
+{
+    BUG_ON("unimplemented");
+}
+
+/* p2m.c */
+
+int arch_set_paging_mempool_size(struct domain *d, uint64_t size)
+{
+    BUG_ON("unimplemented");
+}
+
+int unmap_mmio_regions(struct domain *d,
+                       gfn_t start_gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    BUG_ON("unimplemented");
+}
+
+int map_mmio_regions(struct domain *d,
+                     gfn_t start_gfn,
+                     unsigned long nr,
+                     mfn_t mfn)
+{
+    BUG_ON("unimplemented");
+}
+
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    BUG_ON("unimplemented");
+}
+
+/* Return the size of the pool, in bytes. */
+int arch_get_paging_mempool_size(struct domain *d, uint64_t *size)
+{
+    BUG_ON("unimplemented");
+}
+
+/* delay.c */
+
+void udelay(unsigned long usecs)
+{
+    BUG_ON("unimplemented");
+}
+
+/* guest_access.h */
+
+static inline unsigned long raw_clear_guest(void *to, unsigned int len)
+{
+    BUG_ON("unimplemented");
+}
+
+/* smpboot.c */
+
+int __cpu_up(unsigned int cpu)
+{
+    BUG_ON("unimplemented");
+}
+
+void __cpu_disable(void)
+{
+    BUG_ON("unimplemented");
+}
+
+void __cpu_die(unsigned int cpu)
+{
+    BUG_ON("unimplemented");
+}
+
+unsigned long get_upper_mfn_bound(void)
+{
+    BUG_ON("unimplemented");
+}
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
index ccd3593f5a..5415cf8d90 100644
--- a/xen/arch/riscv/traps.c
+++ b/xen/arch/riscv/traps.c
@@ -4,6 +4,10 @@
  *
  * RISC-V Trap handlers
  */
+
+#include <xen/lib.h>
+#include <xen/sched.h>
+
 #include <asm/processor.h>
 #include <asm/traps.h>
 
@@ -11,3 +15,24 @@ void do_trap(struct cpu_user_regs *cpu_regs)
 {
     die();
 }
+
+void vcpu_show_execution_state(struct vcpu *v)
+{
+    BUG_ON("unimplemented");
+}
+
+void show_execution_state(const struct cpu_user_regs *regs)
+{
+    printk("implement show_execution_state(regs)\n");
+}
+
+void arch_hypercall_tasklet_result(struct vcpu *v, long res)
+{
+    BUG_ON("unimplemented");
+}
+
+enum mc_disposition arch_do_multicall_call(struct mc_state *state)
+{
+    BUG_ON("unimplemented");
+    return mc_continue;
+}
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:05 +0000
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Shawn Anastasio <sanastasio@raptorengineering.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>
Subject: [PATCH for-4.19? v13 0/10]  Enable build of full Xen for RISC-V
Date: Tue, 25 Jun 2024 15:51:42 +0200
Message-ID: <cover.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

***
I think I need to send a separate email requesting approval for this patch
series to be included in Xen 4.19. Most of the patches are RISC-V specific,
so there is a low risk of breaking anything else.
There is only one patch that touches common or other architectures' files,
and it doesn't introduce any functional changes.
Since I can't approve it myself, I am asking someone else to do so.
***

This patch series performs all of the additions necessary to drop the
build overrides for RISC-V and enable the full Xen build. Except in cases
where compatible implementations already exist (e.g. atomic.h and
bitops.h), the newly added definitions are simple.

The patch series is based on the following patch series:
 - [PATCH 3/3] xen/ppc: Avoid using the legacy __read_mostly/__ro_after_init definitions [1]

The link to the branch with the patches rebased on top of [1] can be found here:
  https://gitlab.com/xen-project/people/olkur/xen/-/commits/riscv-full-xen-build-v13

[1] https://lore.kernel.org/xen-devel/20240621201928.319293-4-andrew.cooper3@citrix.com/

---
Changes in V13:
 - Patch was merged to staging:
   - [PATCH v12 ] xen/riscv: Update Kconfig in preparation for a full
 - Rebased on top of the current staging.
 - Updated the cover letter message.
 - Added 2 new patches ( patches 8 and 9 ) which are not necessary for the testing
   environment we have in CI but improve compatibility with older GCC and binutils.
 - Added patch 10 as a clean-up of [PATCH v12 2/8] xen: introduce generic non-atomic test_*bit()
   for x86.
 - Dropped [PATCH v12 4/8] xen/riscv: add definition of __read_mostly as it is now
   defined generically for all architectures.
 - Added the patch [PATCH v13 07/10] xen/common: fix build issue for common/trace.c
   to resolve a compilation issue for RISC-V after rebasing on top of the current staging.
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V12:
 - Rebased the patch series on top of [1] mentioned above.
 - Updated the cover letter message.
 - "[PATCH v11 3/9] xen/bitops: implement fls{l}() in common logic" was dropped
   as it is part of the patch series [1] mentioned above.
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V11:
 - Patches were merged to staging:
   - [PATCH v10 05/14] xen/riscv: introduce cmpxchg.h
   - [PATCH v10 06/14] xen/riscv: introduce atomic.h
   - [PATCH v10 07/14] xen/riscv: introduce monitor.h
   - [PATCH v10 09/14] xen/riscv: add required things to current.h
   - [PATCH v10 11/14] xen/riscv: introduce vm_event_*() functions
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V10:
 - Patch was merged to staging:
   - [PATCH v9 04/15] xen/bitops: put __ffs() into linux compatible header
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V9:
 - Patches were merged to staging:
   - [PATCH v8 07/17] xen/riscv: introduce io.h
   - [PATCH v7 14/19] xen/riscv: add minimal stuff to page.h to build full Xen
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V8:
 - Patches were merged to staging:
   - [PATCH v7 01/19] automation: introduce fixed randconfig for RISC-V
   - [PATCH v7 03/19] xen/riscv: introduce extenstion support check by compiler
 - Other changes are patch-specific; please see the change log of each patch.
 - Updated the commit message:
   - dropped the dependency on the STATIC_ASSERT_UNREACHABLE() implementation.
   - added a suggestion to merge the arch-specific changes related to __read_mostly.
---
Changes in V7:
 - Patch was merged to staging:
   - [PATCH v6 15/20] xen/riscv: add minimal stuff to processor.h to build full Xen.
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V6:
 - Updated the cover letter message: dropped already merged dependencies and added
   a new one.
 - Patches were merged to staging:
   - [PATCH v5 02/23] xen/riscv: use some asm-generic headers ( even though v4 was
     merged to the staging branch, I just hadn't applied the changes on top of the latest staging branch )
   - [PATCH v5 03/23] xen/riscv: introduce nospec.h
   - [PATCH v5 10/23] xen/riscv: introduces acrquire, release and full barriers
 - Introduced new patches:
   - xen/riscv: introduce extenstion support check by compiler
   - xen/bitops: put __ffs() and ffz() into linux compatible header
   - xen/bitops: implement fls{l}() in common logic
 - The following patches were dropped:
   - some patches related to bitops operations, as they were introduced in another
     patch series [...]
   - the introduction of new versions of generic __ffs(), ffz() and fls{l}().
 - Merged a patch from the patch series "[PATCH v9 0/7] Introduce generic headers" into
   this patch series, as only one patch was left in the generic headers series and it is
   more about RISC-V.
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V5:
 - Updated the cover letter as one of the dependencies was merged to staging.
 - Introduced asm-generic headers for atomic ops, and separate patches for asm-generic bitops.
 - Moved fence.h to a separate patch to deal with some patches' dependencies on fence.h.
 - Patches were dropped as they were merged to staging:
   * [PATCH v4 03/30] xen: add support in public/hvm/save.h for PPC and RISC-V
   * [PATCH v4 04/30] xen/riscv: introduce cpufeature.h
   * [PATCH v4 05/30] xen/riscv: introduce guest_atomics.h
   * [PATCH v4 06/30] xen: avoid generation of empty asm/iommu.h
   * [PATCH v4 08/30] xen/riscv: introduce setup.h
   * [PATCH v4 10/30] xen/riscv: introduce flushtlb.h
   * [PATCH v4 11/30] xen/riscv: introduce smp.h
   * [PATCH v4 15/30] xen/riscv: introduce irq.h
   * [PATCH v4 16/30] xen/riscv: introduce p2m.h
   * [PATCH v4 17/30] xen/riscv: introduce regs.h
   * [PATCH v4 18/30] xen/riscv: introduce time.h
   * [PATCH v4 19/30] xen/riscv: introduce event.h
   * [PATCH v4 22/30] xen/riscv: define an address of frame table
 - Other changes are patch-specific; please see the change log of each patch.
---
Changes in V4:
 - Updated the cover letter message: new patch series dependencies.
 - Some patches were merged to staging, so they were dropped from this patch series:
     [PATCH v3 09/34] xen/riscv: introduce system.h
     [PATCH v3 18/34] xen/riscv: introduce domain.h
     [PATCH v3 19/34] xen/riscv: introduce guest_access.h
 - Sent separately from this patch series:
     [PATCH v3 16/34] xen/lib: introduce generic find next bit operations
 - [PATCH v3 17/34] xen/riscv: add compilation of generic find-next-bit.c was
   dropped as CONFIG_GENERIC_FIND_NEXT_BIT was dropped.
 - All other changes are specific to individual patches.
---
Changes in V3:
 - Updated the cover letter message.
 - The following patches were dropped as they were merged to staging:
    [PATCH v2 03/39] xen/riscv: introduce asm/byteorder.h
    [PATCH v2 04/39] xen/riscv: add public arch-riscv.h
    [PATCH v2 05/39] xen/riscv: introduce spinlock.h
    [PATCH v2 20/39] xen/riscv: define bug frame tables in xen.lds.S
    [PATCH v2 34/39] xen: add RISCV support for pmu.h
    [PATCH v2 35/39] xen: add necessary headers to common to build full Xen for RISC-V
 - New patches were introduced in place of the following:
    [PATCH v2 10/39] xen/riscv: introduce asm/iommu.h
    [PATCH v2 11/39] xen/riscv: introduce asm/nospec.h
 - removed "asm/" from commit messages which start with "xen/riscv:".
 - code style updates and fixes.
 - added emulation of {cmp}xchg_* for 1- and 2-byte types.
 - added SPDX tags and footers to the newly added headers.
 - introduced generic find-next-bit.c.
 - some other minor changes ( see the individual patches for details ).
---
Changes in V2:
  - Dropped the following patches as they are part of [2]:
      [PATCH v1 06/57] xen/riscv: introduce paging.h
      [PATCH v1 08/57] xen/riscv: introduce asm/device.h
      [PATCH v1 10/57] xen/riscv: introduce asm/grant_table.h
      [PATCH v1 12/57] xen/riscv: introduce asm/hypercall.h
      [PATCH v1 13/57] xen/riscv: introduce asm/iocap.h
      [PATCH v1 15/57] xen/riscv: introduce asm/mem_access.h
      [PATCH v1 18/57] xen/riscv: introduce asm/random.h
      [PATCH v1 21/57] xen/riscv: introduce asm/xenoprof.h
      [PATCH v1 24/57] xen/riscv: introduce asm/percpu.h
      [PATCH v1 29/57] xen/riscv: introduce asm/hardirq.h
      [PATCH v1 33/57] xen/riscv: introduce asm/altp2m.h
      [PATCH v1 38/57] xen/riscv: introduce asm/monitor.h
      [PATCH v1 39/57] xen/riscv: introduce asm/numa.h
      [PATCH v1 42/57] xen/riscv: introduce asm/softirq.h
  - xen/lib.h was in most cases changed to xen/bug.h, as mostly bug.h
    functionality is used.
  - aligned arch-riscv.h with Arm's version of it.
  - changed the author of the commit introducing asm/atomic.h.
  - updated some definitions from spinlock.h.
  - code style changes.
---


Oleksii Kurochko (10):
  xen: introduce generic non-atomic test_*bit()
  xen/riscv: introduce bitops.h
  xen/riscv: add minimal stuff to mm.h to build full Xen
  xen/riscv: add minimal amount of stubs to build full Xen
  xen/riscv: enable full Xen build
  xen/README: add compiler and binutils versions for RISC-V64
  xen/common: fix build issue for common/trace.c
  xen/riscv: change .insn to .byte in cpu_relax()
  xen/riscv: introduce ANDN_INSN
  xen/x86: drop constant_test_bit() in asm/bitops.h

 README                                 |   3 +
 xen/arch/arm/include/asm/bitops.h      |  69 ----
 xen/arch/ppc/include/asm/bitops.h      |  54 ----
 xen/arch/ppc/include/asm/page.h        |   2 +-
 xen/arch/ppc/mm-radix.c                |   2 +-
 xen/arch/riscv/Makefile                |  17 +-
 xen/arch/riscv/arch.mk                 |   4 -
 xen/arch/riscv/early_printk.c          | 168 ----------
 xen/arch/riscv/include/asm/bitops.h    | 137 ++++++++
 xen/arch/riscv/include/asm/cmpxchg.h   |  16 +-
 xen/arch/riscv/include/asm/mm.h        | 235 ++++++++++++++
 xen/arch/riscv/include/asm/processor.h |   2 +-
 xen/arch/riscv/mm.c                    |  43 ++-
 xen/arch/riscv/setup.c                 |  10 +-
 xen/arch/riscv/stubs.c                 | 418 +++++++++++++++++++++++++
 xen/arch/riscv/traps.c                 |  25 ++
 xen/arch/x86/include/asm/bitops.h      |  39 +--
 xen/common/trace.c                     |   1 +
 xen/include/xen/bitops.h               | 182 +++++++++++
 19 files changed, 1096 insertions(+), 331 deletions(-)
 create mode 100644 xen/arch/riscv/include/asm/bitops.h
 create mode 100644 xen/arch/riscv/stubs.c

-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:06 +0000
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v13 05/10] xen/riscv: enable full Xen build
Date: Tue, 25 Jun 2024 15:51:47 +0200
Message-ID: <b47a278c89eb436a7b88dc5c0b18a6be09c76472.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 Changes in V13:
  - implement get_upper_mfn_bound() as BUG_ON("unimplemented")
  - drop the footer after R-by as Andrew's patch series was merged to staging.
---
Changes in V5-V12:
 - Nothing changed. Only rebase.
 - Added the footer after the Signed-off-by section.
---
Changes in V4:
 - drop stubs for irq_actor_none() and irq_actor_none() as common/irq.c is compiled now.
 - drop the definition of max_page in stubs.c as common/page_alloc.c is compiled now.
 - drop printk() related changes in riscv/early_printk.c as the common version will be used.
---
Changes in V3:
 - Reviewed-by: Jan Beulich <jbeulich@suse.com>
 - unrelated change in tiny64_defconfig dropped
---
Changes in V2:
 - Nothing changed. Only rebase.
---
 xen/arch/riscv/Makefile       |  16 +++-
 xen/arch/riscv/arch.mk        |   4 -
 xen/arch/riscv/early_printk.c | 168 ----------------------------------
 3 files changed, 15 insertions(+), 173 deletions(-)
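
For context on the Makefile change below: the new $(TARGET)-syms rule adopts
the multi-pass symbol-table link already used by other Xen architectures. A
rough sketch of the flow, with illustrative file names rather than the exact
Makefile variables (this is not runnable as-is):

```shell
# Pass 0: link against a dummy symbol table so the image layout is fixed.
ld -T xen.lds -N prelink.o symbols-dummy.o -o .xen-syms.0
# Generate a real symbol table from that image and assemble it...
nm -pa --format=sysv .xen-syms.0 | tools/symbols --sysv --sort > .xen-syms.0.S
# ...then relink with it.  A second pass repeats this step because embedding
# the table shifts symbol addresses; the addresses stabilise after one
# iteration, and the final link uses the pass-1 table.
```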

diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
index 60afbc0ad9..81b77b13d6 100644
--- a/xen/arch/riscv/Makefile
+++ b/xen/arch/riscv/Makefile
@@ -12,10 +12,24 @@ $(TARGET): $(TARGET)-syms
 	$(OBJCOPY) -O binary -S $< $@
 
 $(TARGET)-syms: $(objtree)/prelink.o $(obj)/xen.lds
-	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) -o $@
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \
+	    $(objtree)/common/symbols-dummy.o -o $(dot-target).0
+	$(NM) -pa --format=sysv $(dot-target).0 \
+		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+		> $(dot-target).0.S
+	$(MAKE) $(build)=$(@D) $(dot-target).0.o
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< \
+	    $(dot-target).0.o -o $(dot-target).1
+	$(NM) -pa --format=sysv $(dot-target).1 \
+		| $(objtree)/tools/symbols $(all_symbols) --sysv --sort \
+		> $(dot-target).1.S
+	$(MAKE) $(build)=$(@D) $(dot-target).1.o
+	$(LD) $(XEN_LDFLAGS) -T $(obj)/xen.lds -N $< $(build_id_linker) \
+	    $(dot-target).1.o -o $@
 	$(NM) -pa --format=sysv $@ \
 		| $(objtree)/tools/symbols --all-symbols --xensyms --sysv --sort \
 		> $@.map
+	rm -f $(@D)/.$(@F).[0-9]*
 
 $(obj)/xen.lds: $(src)/xen.lds.S FORCE
 	$(call if_changed_dep,cpp_lds_S)
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
index 8c071aff65..17827c302c 100644
--- a/xen/arch/riscv/arch.mk
+++ b/xen/arch/riscv/arch.mk
@@ -38,7 +38,3 @@ extensions := $(subst $(space),,$(extensions))
 # -mcmodel=medlow would force Xen into the lower half.
 
 CFLAGS += $(riscv-generic-flags)$(extensions) -mstrict-align -mcmodel=medany
-
-# TODO: Drop override when more of the build is working
-override ALL_OBJS-y = arch/$(SRCARCH)/built_in.o
-override ALL_LIBS-y =
diff --git a/xen/arch/riscv/early_printk.c b/xen/arch/riscv/early_printk.c
index 60742a042d..610c814f54 100644
--- a/xen/arch/riscv/early_printk.c
+++ b/xen/arch/riscv/early_printk.c
@@ -40,171 +40,3 @@ void early_printk(const char *str)
         str++;
     }
 }
-
-/*
- * The following #if 1 ... #endif should be removed after printk
- * and related stuff are ready.
- */
-#if 1
-
-#include <xen/stdarg.h>
-#include <xen/string.h>
-
-/**
- * strlen - Find the length of a string
- * @s: The string to be sized
- */
-size_t (strlen)(const char * s)
-{
-    const char *sc;
-
-    for (sc = s; *sc != '\0'; ++sc)
-        /* nothing */;
-    return sc - s;
-}
-
-/**
- * memcpy - Copy one area of memory to another
- * @dest: Where to copy to
- * @src: Where to copy from
- * @count: The size of the area.
- *
- * You should not use this function to access IO space, use memcpy_toio()
- * or memcpy_fromio() instead.
- */
-void *(memcpy)(void *dest, const void *src, size_t count)
-{
-    char *tmp = (char *) dest, *s = (char *) src;
-
-    while (count--)
-        *tmp++ = *s++;
-
-    return dest;
-}
-
-int vsnprintf(char* str, size_t size, const char* format, va_list args)
-{
-    size_t i = 0; /* Current position in the output string */
-    size_t written = 0; /* Total number of characters written */
-    char* dest = str;
-
-    while ( format[i] != '\0' && written < size - 1 )
-    {
-        if ( format[i] == '%' )
-        {
-            i++;
-
-            if ( format[i] == '\0' )
-                break;
-
-            if ( format[i] == '%' )
-            {
-                if ( written < size - 1 )
-                {
-                    dest[written] = '%';
-                    written++;
-                }
-                i++;
-                continue;
-            }
-
-            /*
-             * Handle format specifiers.
-             * For simplicity, only %s and %d are implemented here.
-             */
-
-            if ( format[i] == 's' )
-            {
-                char* arg = va_arg(args, char*);
-                size_t arglen = strlen(arg);
-
-                size_t remaining = size - written - 1;
-
-                if ( arglen > remaining )
-                    arglen = remaining;
-
-                memcpy(dest + written, arg, arglen);
-
-                written += arglen;
-                i++;
-            }
-            else if ( format[i] == 'd' )
-            {
-                int arg = va_arg(args, int);
-
-                /* Convert the integer to string representation */
-                char numstr[32]; /* Assumes a maximum of 32 digits */
-                int numlen = 0;
-                int num = arg;
-                size_t remaining;
-
-                if ( arg < 0 )
-                {
-                    if ( written < size - 1 )
-                    {
-                        dest[written] = '-';
-                        written++;
-                    }
-
-                    num = -arg;
-                }
-
-                do
-                {
-                    numstr[numlen] = '0' + num % 10;
-                    num = num / 10;
-                    numlen++;
-                } while ( num > 0 );
-
-                /* Reverse the string */
-                for (int j = 0; j < numlen / 2; j++)
-                {
-                    char tmp = numstr[j];
-                    numstr[j] = numstr[numlen - 1 - j];
-                    numstr[numlen - 1 - j] = tmp;
-                }
-
-                remaining = size - written - 1;
-
-                if ( numlen > remaining )
-                    numlen = remaining;
-
-                memcpy(dest + written, numstr, numlen);
-
-                written += numlen;
-                i++;
-            }
-        }
-        else
-        {
-            if ( written < size - 1 )
-            {
-                dest[written] = format[i];
-                written++;
-            }
-            i++;
-        }
-    }
-
-    if ( size > 0 )
-        dest[written] = '\0';
-
-    return written;
-}
-
-void printk(const char *format, ...)
-{
-    static char buf[1024];
-
-    va_list args;
-    va_start(args, format);
-
-    (void)vsnprintf(buf, sizeof(buf), format, args);
-
-    early_printk(buf);
-
-    va_end(args);
-}
-
-#endif
-
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747737.1155255 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ad-00089v-Ow; Tue, 25 Jun 2024 13:52:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747737.1155255; Tue, 25 Jun 2024 13:52:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ad-00089i-Lb; Tue, 25 Jun 2024 13:52:07 +0000
Received: by outflank-mailman (input) for mailman id 747737;
 Tue, 25 Jun 2024 13:52:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6ad-0006cc-4d
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:07 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1c0d25c5-32fa-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:52:06 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57d1782679fso6741315a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:52:06 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.52.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:52:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c0d25c5-32fa-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323526; x=1719928326; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=EblDpIRqRIu3BssT9RZ6RoIeDsTXprjPCYVIxMbl8gI=;
        b=EozEWgb0+wOnZ2mDjIiM4cZZGiLvuwVpGFKAn3SmGtOIRHEndWdw9qS9Y2NDYDwQ4Y
         UwyecA4SxwD0tZEqcf7EGjh3wfQ+IluhcJRwKXoVJ3z1jUzpao5YbqrkEH+2X2QzfdKU
         JQOiqfhW1tq0hN2dt3VaKJfiISR818+80rxf3h6dGuoLZFjHqVhitC2sCnViwgQmRYcz
         qV9Uc0pzIJNtfIksBs/Yw/olg4lc1vc5map/D5J1eiwSvaaGrDMdYksxMczuOXZNbE2x
         0vnaGjPjnyj49PyCIA6B/+1GojKD3Bq117wLt/JpUqewif+VS3PsZVZYeIWVokLv3tVJ
         AqeQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323526; x=1719928326;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=EblDpIRqRIu3BssT9RZ6RoIeDsTXprjPCYVIxMbl8gI=;
        b=QLhGAzblM1ZPkZ7pqonTrtgayB3U1pStm2Ic2iCxxxUO5/FH3lwmeS0cwshLqvXxX1
         gugZqNEV/kYEt+CtsFMW6fNPsMDqkcq0eNCascvOHClfhn8Z8BvfEUI9fLFM568euiwd
         MYSc/3JX5LjCeaK6hb+CxZvsxEkSAZ532G/iaTfIhgHNT4270mq9gW1dXDITnPoy4sZ8
         9Ru7jx+whv7jkPbWYawCMnFaVsmQTMjosEumcKCm+D2FKt/VKGDQ5nvIt1Bl4I3Yzya1
         fKlXIrY6t7och2AT47UGNChflsXKWqRVctD66HdeBRlgiHcLU29DoWeBxZ8jGWaoFWTo
         iprA==
X-Gm-Message-State: AOJu0Ywa1t2JLZC1takviX1l2WxGzMFVf97Tk3xqWHZ45idRuITCDYWI
	p5+QYtCyz5kXd6F3260OcWfHOB84e3XKRO8PIAYYLw9yaQWAdHcH1ggRQgFx
X-Google-Smtp-Source: AGHT+IFQTMqbBSaOKEkbk9WjjORj/cTDkO5SVUJaZVfMFzNjweNugVIyDCceDNj3AhdImrkJdPGWCw==
X-Received: by 2002:a17:907:c78e:b0:a72:4b31:13b5 with SMTP id a640c23a62f3a-a727f855270mr42506766b.54.1719323525897;
        Tue, 25 Jun 2024 06:52:05 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v13 08/10] xen/riscv: change .insn to .byte in cpu_relax()
Date: Tue, 25 Jun 2024 15:51:50 +0200
Message-ID: <b5ccb3850cbfc0c84d2feea35a971351395fa974.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The .insn directive appears to check that the byte pattern is a known
extension, whereas .4byte doesn't.

The following compilation error occurs:
  ./arch/riscv/include/asm/processor.h: Assembler messages:
  ./arch/riscv/include/asm/processor.h:70: Error: unrecognized opcode `0x0100000F'
This occurs with the following Binutils version:
  $ riscv64-linux-gnu-as --version
  GNU assembler (GNU Binutils for Debian) 2.35.2

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 Changes in V13:
  - new patch
---
 xen/arch/riscv/include/asm/processor.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/riscv/include/asm/processor.h b/xen/arch/riscv/include/asm/processor.h
index 6846151717..0e75122efb 100644
--- a/xen/arch/riscv/include/asm/processor.h
+++ b/xen/arch/riscv/include/asm/processor.h
@@ -67,7 +67,7 @@ static inline void cpu_relax(void)
     __asm__ __volatile__ ( "pause" );
 #else
     /* Encoding of the pause instruction */
-    __asm__ __volatile__ ( ".insn 0x0100000F" );
+    __asm__ __volatile__ ( ".byte 0x0100000F" );
 #endif
 
     barrier();
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747738.1155266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6af-0008QS-4o; Tue, 25 Jun 2024 13:52:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747738.1155266; Tue, 25 Jun 2024 13:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6af-0008Pk-06; Tue, 25 Jun 2024 13:52:09 +0000
Received: by outflank-mailman (input) for mailman id 747738;
 Tue, 25 Jun 2024 13:52:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6ad-00086j-Og
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:07 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b303187-32fa-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 15:52:05 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57d1679ee6eso9550674a12.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:52:05 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.52.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:52:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b303187-32fa-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323524; x=1719928324; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NSkiIZsfofGLVpyJX0+MYwmop/9bun+99VlWirx1I7g=;
        b=Yjc+VTH8D7g1VxX3/ymoyLnYx9qMXTecSLQ2pCbCh6sh9gPFuRpwHYBNAai4sM9oWl
         fYyUCOPRTLnwH3VPltxATdF2jo3geTTc2kE1CQh4R7ivXCS2HJdk42lW4FjjLRgOaC7M
         SAqo1104u+EeHJ+Mfl3I2Q5CiNU2xdDJAuKPJtKby9nJDUBexKLhuF48WWcuqQ4AJqJD
         0NI3nPZllRBsux9wWfnwWqKzfhUThet4SayByjcX/aXacrwYeDEdgl+ulL/NKPFpUjfu
         4coesx74ctn9aGWP4axAytHeGOcys18asBIW5m/0VVaw7zGQLeliLvbMG3Lwcg3GG++2
         5aqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323524; x=1719928324;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=NSkiIZsfofGLVpyJX0+MYwmop/9bun+99VlWirx1I7g=;
        b=kb+RPPJhTR0PL0o0J5e6+CD3LpvqJJbwyGMUF76dR8aRRkONim/JWSqstEVPwT/YTf
         LRjCwCa9IkMbpk8UDou/cv2N3gxhopR7qNEiL+2WqIJAsTupNrN+KF59B1xXRG2PqbVw
         Zio0GzRfnxbCrlZyhD6Y+wopSohO7Ta200OVgjw+W8/fs4KMbHAW/qdVK5dTF3aQATmT
         PADFd0Gu6taWuKWa9nbs7Xjiciuklsxgnn+5FTalxd08xYt8Vg50sQKBPXg43TWymvfB
         yisasJvPrVzRSlvMQUaYkgyJt3jNM8DyHqS0nELdch0uu4FBagVO5MvU13dMvHT6mro8
         TWgg==
X-Gm-Message-State: AOJu0Yy/G6Be0/3D8Uh/4ZVXxpj2g7qpDzdnsnzGquPjFyB7ICBcarIT
	8/UWdufkkM39s2aeHhDlpgejsc0h0yX86A2qGGcnIqA+K2aX4gGwdv3qP3pG
X-Google-Smtp-Source: AGHT+IFMs0Omxeew+4X0r3cEV7F/MLardqkDFHQTtF7JJ4AaL2aAQ5uPbtmfBLPxuLRRkEgSg04izw==
X-Received: by 2002:a17:906:3acd:b0:a6e:f540:8b5f with SMTP id a640c23a62f3a-a6ffe3ccd2emr585829766b.19.1719323524014;
        Tue, 25 Jun 2024 06:52:04 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v13 06/10] xen/README: add compiler and binutils versions for RISC-V64
Date: Tue, 25 Jun 2024 15:51:48 +0200
Message-ID: <9ad0bd3d2920ae6bf9ff81beaca9b4d899f65d9a.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The versions in this patch don't represent a strict lower bound for GCC
and GNU Binutils; rather, these are the versions used by the Xen RISC-V
container and are expected to be continuously tested. Older GCC and
GNU Binutils may work, but this is not guaranteed.

While it is possible to use Clang, note that there is currently no Xen
RISC-V CI job in place to verify that the build works with Clang.

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes in V13:
 - drop the line "Older GCC and GNU Binutils would work, but this is not a guarantee."
   in README
 - add Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes in V5-V12:
 - Nothing changed. Only rebase.
---
 Changes in V6:
  - update the message in README.
---
 Changes in V5:
  - update the commit message and README file with additional explanation about the GCC and
    GNU Binutils versions. Additionally, information about Clang was added.
---
 Changes in V4:
  - Update the versions of GCC (12.2) and GNU Binutils (2.39) to the versions
    which are in Xen's container for RISC-V
---
 Changes in V3:
  - new patch
---
 README | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/README b/README
index c8a108449e..9a898125e1 100644
--- a/README
+++ b/README
@@ -48,6 +48,9 @@ provided by your OS distributor:
       - For ARM 64-bit:
         - GCC 5.1 or later
         - GNU Binutils 2.24 or later
+      - For RISC-V 64-bit:
+        - GCC 12.2 or later
+        - GNU Binutils 2.39 or later
     * POSIX compatible awk
     * Development install of zlib (e.g., zlib-dev)
     * Development install of Python 2.7 or later (e.g., python-dev)
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747739.1155276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ag-0000Gk-I5; Tue, 25 Jun 2024 13:52:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747739.1155276; Tue, 25 Jun 2024 13:52:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ag-0000GG-Aq; Tue, 25 Jun 2024 13:52:10 +0000
Received: by outflank-mailman (input) for mailman id 747739;
 Tue, 25 Jun 2024 13:52:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6ae-00086j-Ea
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:08 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1b7f9aad-32fa-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 15:52:05 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a724e067017so308415166b.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:52:05 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.52.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:52:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b7f9aad-32fa-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323525; x=1719928325; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=UZfUGC+tX8O1jFymRKQ2X7spFAFGeVW+jPczb/zOpP0=;
        b=LZn64CFjnNLiiIU4UPs0BJ021NAUV/2PuLK9rPC5P8ntMmhv4bv956BR1sb+Zg4VDY
         Ie7tVcQqwPq1QdeI4GiJld7ndbSwa4ZITGLtaUMRXNA5GTdmvAAO2exeyVgX2eOpyUVt
         /69xVKoQqKc/cbfGr+2TtR2I8t1xar4O4FEZtTlisCnMET3+KaaqT0nO77V6W8zwL/S1
         HzYzcKzTUDAIXsyq9UXIRZTy47yAk389IyEsGbHjFWcIJqwz9B5v2JDFYqE3kiGv4cyt
         s53A7ApWCzO7I8PKcayXWXeTujNCUb9xS0qplLuf5J2D44Pf3aCkWvlkiITtmLHUlru3
         ktKQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323525; x=1719928325;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=UZfUGC+tX8O1jFymRKQ2X7spFAFGeVW+jPczb/zOpP0=;
        b=XQPK+Gz6r+wiUTLMwUCZC51kkYM3YB8bnCmL57+hWkpsIsggWhB6lLfuhA4l50nNPq
         2zfd3mAnm8QcG0KMu+24KNMKzJcEFIIoYavfIZLz0Hy8FfC3lG8v8gfFhtw7eV15qtQ+
         f7gAyXzBTXTjReK3ySzjpO+jkV1XjOnxUh7FZsWtqLQ5kYyIDxFjRHGJ/bmtSrm8CmLp
         yG0zT4RhMw3jVFsGAvqVaEMYHB4IZUF69fiuHV9REI4q4H1ig+4n7nPYPnh3GCS8HnHj
         waQGsI5WJ16icW66b8B35JVNi8pkngDD+WUahCcAwhmZPa3llgDPhcylIsNgWki17Bwd
         PpIQ==
X-Gm-Message-State: AOJu0YxCJuY3SqEhAkQm0H5RMJV9NsvAqCrW7nbrh7zahPT3TlneXmz/
	8+hlh83fe4z9xtWVkiyQQXzqZg9LZ8PlrydMovaFTP7x58q160G+EBV7LBHt
X-Google-Smtp-Source: AGHT+IHz/ZYNtgTkJsv0BVnNvTM1CLXjySox1AgodMNHJgxRag5Z9zT2mKtBQHGaiAFiLwZfgZZivg==
X-Received: by 2002:a17:907:d40d:b0:a72:5adb:f40d with SMTP id a640c23a62f3a-a725adbfdb9mr356099566b.61.1719323524891;
        Tue, 25 Jun 2024 06:52:04 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v13 07/10] xen/common: fix build issue for common/trace.c
Date: Tue, 25 Jun 2024 15:51:49 +0200
Message-ID: <f14f2c5629a75856f4bafdbff3cc165c373f8dc2.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

A GitLab CI randconfig job for RISC-V failed with an error:
 common/trace.c:57:22: error: expected '=', ',', ';', 'asm' or
                              '__attribute__' before '__read_mostly'
   57 | static u32 data_size __read_mostly;
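
The diagnostic arises because __read_mostly is a macro that expands to a section attribute, declared (on this tree) in xen/sections.h; without the include, the compiler sees a bare unknown token after the variable name. A minimal sketch, assuming a definition roughly like Xen's (the exact section name here is illustrative):

```c
/* Illustrative stand-in for the definition pulled in by
 * <xen/sections.h>: place the variable in a "read mostly" data
 * section.  If this macro is not in scope, "data_size __read_mostly"
 * is a syntax error, producing exactly the message quoted above. */
#define __read_mostly __attribute__((__section__(".data.read_mostly")))

static unsigned int data_size __read_mostly = 42;

unsigned int get_data_size(void)
{
    return data_size;
}
```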

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 Changes in V13:
  - new patch
---
 xen/common/trace.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/common/trace.c b/xen/common/trace.c
index c9c555094b..c33f115b6c 100644
--- a/xen/common/trace.c
+++ b/xen/common/trace.c
@@ -29,6 +29,7 @@
 #include <xen/mm.h>
 #include <xen/percpu.h>
 #include <xen/pfn.h>
+#include <xen/sections.h>
 #include <xen/cpu.h>
 #include <asm/atomic.h>
 #include <public/sysctl.h>
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747740.1155283 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ah-0000NW-3G; Tue, 25 Jun 2024 13:52:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747740.1155283; Tue, 25 Jun 2024 13:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ag-0000Mp-Oj; Tue, 25 Jun 2024 13:52:10 +0000
Received: by outflank-mailman (input) for mailman id 747740;
 Tue, 25 Jun 2024 13:52:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6af-0006cc-8S
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:09 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1d53011f-32fa-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 15:52:08 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id
 a640c23a62f3a-a725282b926so310375266b.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:52:08 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.52.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:52:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d53011f-32fa-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323528; x=1719928328; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3jEpERKbIeLIcrdrPaGfmWPlnxuIgAzhJqctJSX0ZEE=;
        b=iLMINnuNJjAyveQZ6kbruV3Gw6d0HEvs/IvtN1/YbNFATWdeeaAJT5r5flQ6SMml1a
         AjDTQ+VWFI/or5LYV5Ll2ooJZ5IXX9S+TbLQrxzQ5rBzsJ55Wsy8ZvtC2SVDCeSQBgII
         C7+wa1IccIy7BTo2NqP2KTd6PFZXFcf/dHc1c2gw3ahkZSV1IOBjwOUE3RDXqnj8HLAI
         7jiGfF84VC7IqZZDAtfHvgXEgyYipfThMgAiblnUVTjfd1Kx4PQ6rrPRQ0MFljCKfFIW
         BikEwQkjKzJyQivp8rrIrsH8IXFAmgRN23gbXbqd6q1jkS7FXWqeRTA6E3NaSPCxxDll
         /fQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323528; x=1719928328;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=3jEpERKbIeLIcrdrPaGfmWPlnxuIgAzhJqctJSX0ZEE=;
        b=CXcYiRFbMzs1c1a/2d+ptt1Q9SeWIeVn0cQpU1TeqGc5sfBYVGTkMSz9Dla7c4BbDz
         704XxBb17XaiwXFviaHo0flinGGesFnwUu+I3lu4uco//gwRAlLOW0IDwJhGHR5oc+gA
         Tu1k/Y34fp3SB5/O/4t/zJgAL9L4RnIhbEZtaVBceme+hGFxRA23Lv4y/jWzx50RqOrc
         tLpkr8snXPjVOYTxhirdAEwZ5z+5K8BpmLd1cnqDLG7IEI5D3IVL77oSWUbrFvu9g0GS
         McWTIOznwfVYZEVBsBr/e+26/EuQ++5mWUBQOx40r7tM1lkLhe4rrybGBQbNXRvVeLx0
         GLdA==
X-Gm-Message-State: AOJu0YwjuZ2VDW+0lslFFXYmJiNWpvGzXouGzJtYFMqoeu+iyPaZ9+vD
	uVq3b0VitD+l4+Podfiqa0xGEupyoEGJWADwb/7Y/8vVGpBCW15ta/U3KP+P
X-Google-Smtp-Source: AGHT+IHn4l2awNVb//uMg4A4C7V4b9RleGetFINhGc0xQMysSLqYLmieB0t3yGy8Dc4SoNd0L1EqbQ==
X-Received: by 2002:a17:906:1cd0:b0:a6f:bae6:f56c with SMTP id a640c23a62f3a-a7245c85b84mr456988166b.3.1719323527990;
        Tue, 25 Jun 2024 06:52:07 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v13 10/10] xen/x86: drop constant_test_bit() in asm/bitops.h
Date: Tue, 25 Jun 2024 15:51:52 +0200
Message-ID: <edd341a6e86ceac2717c59680d4e5e7fc3321b5d.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

constant_test_bit() is functionally the same as generic_test_bit(),
so constant_test_bit() can be dropped and replaced with
generic_test_bit().
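
A side-by-side sketch of the two helpers: constant_test_bit() is copied from the dropped x86 definition, while generic_test_bit() below is a simplified stand-in for the common helper (which in Xen operates on unsigned long words rather than 32-bit words as assumed here). Both report whether bit 'nr' of the bitmap at 'addr' is set:

```c
/* Dropped x86 helper: mask-and-test against 32-bit words. */
static int constant_test_bit(int nr, const volatile void *addr)
{
    return ((1U << (nr & 31)) &
            (((const volatile unsigned int *)addr)[nr >> 5])) != 0;
}

/* Simplified stand-in for the generic helper: shift-and-mask, same
 * 32-bit word size assumed for the comparison. */
static int generic_test_bit(int nr, const volatile void *addr)
{
    const volatile unsigned int *p = addr;

    return (p[nr / 32] >> (nr % 32)) & 1;
}
```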

Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 Changes in V13:
  - new patch (this patch depends on
    "xen: introduce generic non-atomic test_*bit()")
---
 xen/arch/x86/include/asm/bitops.h | 8 +-------
 1 file changed, 1 insertion(+), 7 deletions(-)

diff --git a/xen/arch/x86/include/asm/bitops.h b/xen/arch/x86/include/asm/bitops.h
index f9aa60111f..8c0403405a 100644
--- a/xen/arch/x86/include/asm/bitops.h
+++ b/xen/arch/x86/include/asm/bitops.h
@@ -277,12 +277,6 @@ static inline int test_and_change_bit(int nr, volatile void *addr)
     test_and_change_bit(nr, addr);                      \
 })
 
-static inline int constant_test_bit(int nr, const volatile void *addr)
-{
-    return ((1U << (nr & 31)) &
-            (((const volatile unsigned int *)addr)[nr >> 5])) != 0;
-}
-
 static inline int variable_test_bit(int nr, const volatile void *addr)
 {
     int oldbit;
@@ -297,7 +291,7 @@ static inline int variable_test_bit(int nr, const volatile void *addr)
 
 #define arch_test_bit(nr, addr) ({                      \
     __builtin_constant_p(nr) ?                          \
-        constant_test_bit(nr, addr) :                   \
+        generic_test_bit(nr, addr) :                    \
         variable_test_bit(nr, addr);                    \
 })
 
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 13:52:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 13:52:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747741.1155287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ah-0000XY-Kr; Tue, 25 Jun 2024 13:52:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747741.1155287; Tue, 25 Jun 2024 13:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6ah-0000Vk-D6; Tue, 25 Jun 2024 13:52:11 +0000
Received: by outflank-mailman (input) for mailman id 747741;
 Tue, 25 Jun 2024 13:52:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6af-00086j-Ev
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 13:52:09 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ccb7121-32fa-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 15:52:07 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57d0f929f79so5223472a12.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 06:52:07 -0700 (PDT)
Received: from fedora.. ([94.75.70.14]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm521042666b.148.2024.06.25.06.52.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 06:52:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ccb7121-32fa-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719323527; x=1719928327; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NQmPQb13oFtw7Zjr+EpKt2RGh54SRf+k/JDvX9Bcyjo=;
        b=UFFTERedeBYHrZDHVCPP4+vRhWe4eCxPT0g4ClxMtuJ5caO4wIxwfeIMY50Jzes3yv
         hOjAP9O7t5Ntn324cLIbZjiovav1WH3I1LHCe0+YHCBrzRaop0uULwc1z4bL2jQL+5/0
         KcPAUyd72zhw3iUKp8lcFQ5Dooil8K8Ece1V8UbfrkSkiwZj+9k9PrWC6GyMPqDADx6J
         vyGDCDhCpknx1LriojhUAM/iJ+YqNkMZszzCVsfAeefvkiswAW25wrNpruUKs+8hRD0G
         0rRE0/2UHWCxPWzmfAfGWTz8ZK+unzQw+YFBHmMT4cgIXt6HBmq968pqcBq4dggh36t+
         NjQA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719323527; x=1719928327;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=NQmPQb13oFtw7Zjr+EpKt2RGh54SRf+k/JDvX9Bcyjo=;
        b=N4yvJFcTzdanGXDCmr2kOI6K0W7XEiN0zI8Jv76TXXvDbrzAaDxSfx6wUjnrGRaHs2
         8bEMNOkdG7dAoi1aZ1ZiRpIMTrYKJ796763sHTnOpVM7UzckrLlmgtf7+assh5G39l0s
         3ne6/fTryfhPeG8Bmn2VAE3xuV7gnv/YtFHneOgQFJXEs9tXxMCt4Yw3DTg1gc7bRB5o
         PdWj5ZzpBOCrVeJP1ogfIgs3svCJdzOokGa2rr1E8DOAmNcg0MgjrwjUIQc5pkclgdgU
         aQOVL8zBljBXK9SkrMJ1Ifs2J99iNlu8wR71N8wbyuKcuAHybU26QA089ZrRRAoi7wOk
         D5Sg==
X-Gm-Message-State: AOJu0YxliD81Oc3OlCsIaED3uqggZXgd+pty/I12CUlTlNe3+3ARBSmL
	g8DFMGL9k+7akkxiVDSBG4i/A1xGjItH4i3b94Dflj3lq9KT5VnCdbNYRWSc
X-Google-Smtp-Source: AGHT+IGw0bOwz082172cYDo31ZXm2xuhKvCxKA7NYCAsPV1WdsPwTwRMblw0I8dTE94cDj50GwpkMw==
X-Received: by 2002:a17:907:c245:b0:a6e:fccb:7146 with SMTP id a640c23a62f3a-a7245b565eamr498009666b.23.1719323527039;
        Tue, 25 Jun 2024 06:52:07 -0700 (PDT)
From: Oleksii Kurochko <oleksii.kurochko@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksii Kurochko <oleksii.kurochko@gmail.com>,
	Alistair Francis <alistair.francis@wdc.com>,
	Bob Eshleman <bobbyeshleman@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
Date: Tue, 25 Jun 2024 15:51:51 +0200
Message-ID: <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
X-Mailer: git-send-email 2.45.2
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

RISC-V enables the Zbb extension conditionally in the toolchain flags
(xen/arch/riscv/rules.mk), but unconditionally uses the Zbb ANDN
instruction in emulate_xchg_1_2().

Fixes: 51dabd6312c ("xen/riscv: introduce cmpxchg.h")

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 Changes in V13:
  - new patch
---
 xen/arch/riscv/include/asm/cmpxchg.h | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/xen/arch/riscv/include/asm/cmpxchg.h b/xen/arch/riscv/include/asm/cmpxchg.h
index d5e678c036..12278be577 100644
--- a/xen/arch/riscv/include/asm/cmpxchg.h
+++ b/xen/arch/riscv/include/asm/cmpxchg.h
@@ -18,6 +18,20 @@
         : "r" (new) \
         : "memory" );
 
+/*
+ * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is too old,
+ * form it out of a NOT+AND pair.
+ */
+#ifdef __riscv_zbb
+#define ANDN_INSN(rd, rs1, rs2)                 \
+    "andn " rd ", " rs1 ", " rs2 "\n"
+#else
+#define ANDN_INSN(rd, rs1, rs2)                 \
+    "not " rd ", " rs2 "\n"                     \
+    "and " rd ", " rs1 ", " rd "\n"
+#endif
+
 /*
  * For LR and SC, the A extension requires that the address held in rs1 be
  * naturally aligned to the size of the operand (i.e., eight-byte aligned
@@ -48,7 +62,7 @@
     \
     asm volatile ( \
         "0: lr.w" lr_sfx " %[old], %[ptr_]\n" \
-        "   andn  %[scratch], %[old], %[mask]\n" \
+        ANDN_INSN("%[scratch]", "%[old]", "%[mask]") \
         "   or   %[scratch], %[scratch], %z[new_]\n" \
         "   sc.w" sc_sfx " %[scratch], %[scratch], %[ptr_]\n" \
         "   bnez %[scratch], 0b\n" \
-- 
2.45.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:03:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:03:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747805.1155306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6lA-0005sU-OT; Tue, 25 Jun 2024 14:03:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747805.1155306; Tue, 25 Jun 2024 14:03:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6lA-0005sN-Lb; Tue, 25 Jun 2024 14:03:00 +0000
Received: by outflank-mailman (input) for mailman id 747805;
 Tue, 25 Jun 2024 14:03:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM6lA-0005sC-92
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:03:00 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a07293f6-32fb-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 16:02:58 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6f8ebbd268so1088704866b.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:02:58 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fe3d2a8e7sm399821266b.111.2024.06.25.07.02.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 07:02:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a07293f6-32fb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719324178; x=1719928978; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=5Z9ygOOdPx7D8HTx4hL6hIuzyJCQomPtIrQdtQbh/JE=;
        b=bUrTvYM2XX4OhGPgnqew/uXTaB4FkkDT1f6oi2Ew/EosF2VERRJa4zkkSEosil4odi
         K99nS5p0QfWmM4/UCxuZ/SIjRr8rj2FmxUhgMjZc5hA+HO8mPq0gL/50pHF+kTvDyKJs
         mDo+q8k/kQk8p0zfgNU4mzkT4ihaUtbUzGcbzxan3FAQ+ZyxHB/y+IeuVOtjyCYmZhCi
         Srbeq2Gq+OcHT46B8SNXqEQNRaV+cAJwiIvPVd4qrUW4keN4iLGLjcSQlCI5rEmEnLFQ
         pRUyLspElp6pO5fuYoEj9Z5Ci5RRAG+M4xJt6cSeju9wz+9f3YpGcAuDrEX6CpLrh7vr
         /+4Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719324178; x=1719928978;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=5Z9ygOOdPx7D8HTx4hL6hIuzyJCQomPtIrQdtQbh/JE=;
        b=Impz4SOBopPaeaVbTDbbyz1qw7AujDOOp3MSSUyh97uYIP2d5MTQIJAhuWibNelSun
         rvcnFd85GRb3CWjk+kmC0ZdhCd3VWB/O5p4USpeLl2q3yTPmK/X2koX3JPmdTzmmotQk
         TFIPD6kDAcARFQHauBDLU5pcHPUQn4O99NjlJXCqKVIm6NMkKaOEAlZCueAPX71ynPWA
         xLeev/B68JAMkKP4xQYOXf+gk6dYXf7bc5OVS28CcXTNjXVyc17oLBeFFHs0rh3DpHR4
         Fc5y+a+2cKgmlFFtU2stXvY59zEtYY4z2H2qdbvHI/NSLuDDsQFKCKk4XtvKN4n5zoOv
         kULw==
X-Forwarded-Encrypted: i=1; AJvYcCWK95NQl4IFRe0TwMRK65Lt3PZXFrlibVKsy6ncDurT1c9xhiKc8naRp0v/EnJj3e7keUbENqWBpsDQNJHE5TyKgzUKzDfN6aRuBzGXxIk=
X-Gm-Message-State: AOJu0YwN+Oh34b3NBOzFzlkF8JwJiwQ8GudEl5Cyq5hYgtbTdith/SJW
	5a7GJOm6aWq8MD4ZoHlC2y9GoqxFi4PvPCv5kMw2ySGiU0b6OUE/
X-Google-Smtp-Source: AGHT+IH1mLt1AhRgskzYMQDxS0jQXQTC2f4ReMwn5kbBAJh9IS/Bz6BKCJHjBQeP9wpllg6pglGcxw==
X-Received: by 2002:a17:907:d1a:b0:a6f:6337:1ad5 with SMTP id a640c23a62f3a-a727fc7a97cmr34145666b.27.1719324177536;
        Tue, 25 Jun 2024 07:02:57 -0700 (PDT)
Message-ID: <60b60f2cb8f44126b442259fbc1c878b8166b7dc.camel@gmail.com>
Subject: Re: [PATCH for-4.19?] Config.mk: update MiniOS commit
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Samuel Thibault
 <samuel.thibault@ens-lyon.org>,  Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Date: Tue, 25 Jun 2024 16:02:56 +0200
In-Reply-To: <c0519803-8753-4933-8193-fa036f626b36@suse.com>
References: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
	 <52373e0cea119ff04ebb997f3d0aea6bd3c9dc41.camel@gmail.com>
	 <c0519803-8753-4933-8193-fa036f626b36@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 10:26 +0200, Jan Beulich wrote:
> On 25.06.2024 10:10, Oleksii wrote:
> > On Tue, 2024-06-25 at 09:57 +0200, Jan Beulich wrote:
> > > Pull in the gcc14 build fix there.
> > >
> > > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > > ---
> > > Probably nice to reference a gcc14-clean MiniOS tree from what is
> > > going
> > > to be 4.19.
> > I would like to ask what do you mean by gcc14-clean here?
>
> Being able to build successfully with (recently released) gcc14, out
> of
> the box.
Sorry for not asking that in my initial reply.

I am still confused by "from what is going to be 4.19".

Are these words about the gcc version used by Xen itself?
If yes, then before this patch, were the gcc versions of Xen and MiniOS
the same?

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:08:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:08:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747819.1155316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6qb-0007i0-Az; Tue, 25 Jun 2024 14:08:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747819.1155316; Tue, 25 Jun 2024 14:08:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6qb-0007ht-75; Tue, 25 Jun 2024 14:08:37 +0000
Received: by outflank-mailman (input) for mailman id 747819;
 Tue, 25 Jun 2024 14:08:35 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sM6qZ-0007fD-SE
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:08:35 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 68446aa4-32fc-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 16:08:34 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id
 af79cd13be357-79c0e7ec66dso28237285a.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:08:34 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce92f564sm407682085a.107.2024.06.25.07.08.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:08:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68446aa4-32fc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719324513; x=1719929313; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=WZ9i7tm2EOJhClHOiMOnSBBeVRYUqEwtN3M9NdULw9o=;
        b=Yj+fNMenoVBn38Oz7NtLhD88s8q5mDadcGuPIGTAalVDvazQrMWq6o0qlKpKMYmYoh
         IHs7KrAEb84pCn3MHwcyEsxm78C3fXihY96OUrnF5/65sMQ6kkUQh0cjw3vAKY7sNxfJ
         YxWhL1Pc8mtmRoN+tb8jKYbA1/JfLOAgUp0Pw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719324513; x=1719929313;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=WZ9i7tm2EOJhClHOiMOnSBBeVRYUqEwtN3M9NdULw9o=;
        b=o+q0+AkCMFOkIcciBIVXsjV+Sd1bt8QQw2U5uFqWO6MivSMRFxiCljUDrqi4myY05R
         r1srkNGloVUoiLztqDtUxHgsg7hnIooWVplN5Kq7vg9/7sEo5nWiByBXuz1F0DOHJOEb
         x8VnKYh8ns7IoadVno1XrQz8cFAY97PKZWYCaT2N0nP5MjaGLmbb5b8zvmKXEGAyAExf
         WPm4aFI8uDzYJL+gQbPS0dhXNEiCuGVMxGa39fGaqc/+moSX3l4KNZNgQTBZMqq6vjML
         LLoWS06vPROgkKoK3nU8j+z4eZMagJHG4tsX1Trw49FsCO2351e/np71bvhXhAwkZIH/
         rmjg==
X-Forwarded-Encrypted: i=1; AJvYcCUCZPjhenordWVMcPtEmqxbgXWYJ0hY23tSEw3EBE0nDFtUAtBEYqEaTQHv+WxmgnoVbC/eXUJxSMTVanWvXnqYoWETXFpm14Y0iSwGI4U=
X-Gm-Message-State: AOJu0YyISyYJIHCyzYFeg75fgwycuY5X9BYjM/WoV9CzU3TRJ1WlL6VI
	88vkvBnVYMkMe3Bmvh+IF03wUSWzKcCM50RpxcCmArOvv32NyQSfGRgWUFTTXYsW2jDI/jIbmK6
	lpRU=
X-Google-Smtp-Source: AGHT+IHhNQeHxMF1iz5ojO+z0qsbLxUl4HMoxdtep1bUmSJdeiaQBXlgKLgDtwV1cnq/v+t5Docqdw==
X-Received: by 2002:a05:620a:471f:b0:795:e635:68d6 with SMTP id af79cd13be357-79be467224bmr908391985a.3.1719324482957;
        Tue, 25 Jun 2024 07:08:02 -0700 (PDT)
Message-ID: <7f72220b-7386-41bf-9f5e-0be2c70320e4@citrix.com>
Date: Tue, 25 Jun 2024 15:08:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19?] Config.mk: update MiniOS commit
To: Oleksii <oleksii.kurochko@gmail.com>, Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
 <52373e0cea119ff04ebb997f3d0aea6bd3c9dc41.camel@gmail.com>
 <c0519803-8753-4933-8193-fa036f626b36@suse.com>
 <60b60f2cb8f44126b442259fbc1c878b8166b7dc.camel@gmail.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <60b60f2cb8f44126b442259fbc1c878b8166b7dc.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25/06/2024 3:02 pm, Oleksii wrote:
> On Tue, 2024-06-25 at 10:26 +0200, Jan Beulich wrote:
>> On 25.06.2024 10:10, Oleksii wrote:
>>> On Tue, 2024-06-25 at 09:57 +0200, Jan Beulich wrote:
>>>> Pull in the gcc14 build fix there.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>> ---
>>>> Probably nice to reference a gcc14-clean MiniOS tree from what is
>>>> going
>>>> to be 4.19.
>>> I would like to ask what do you mean by gcc14-clean here?
>> Being able to build successfully with (recently released) gcc14, out
>> of
>> the box.
> Sorry for not asking that in my initial reply.
>
> I am still confused by "from what is going to be 4.19".
>
> Are these words about the gcc version used by Xen itself?
> If yes, then before this patch, were the gcc versions of Xen and MiniOS
> the same?

A release of Xen is more than just xen.git.

In this case, https://xenbits.xen.org/gitweb/?p=mini-os.git;a=summary
took a bugfix for GCC-14, and xen.git's reference to it needs updating.

MiniOS is one of the components used by xen.git/stubdom/*

~Andrew
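Concretely, the pin such a patch updates lives in xen.git's top-level
Config.mk. A hedged sketch of what the relevant fragment looks like (the
variable names follow xen.git's Config.mk conventions, but the URL and
the revision hash here are placeholders, not the values from Jan's
actual patch):

```make
# Sketch of the MiniOS pin in xen.git/Config.mk -- placeholder values.
# stubdom builds clone this tree at this exact revision, so picking up a
# MiniOS bugfix (e.g. the GCC-14 one) means bumping the hash here.
MINIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/mini-os.git
# Placeholder hash, not the real commit:
MINIOS_UPSTREAM_REVISION ?= 0123456789abcdef0123456789abcdef01234567
```

This is why "a gcc14-clean MiniOS tree" is a xen.git change at all: the
fix lands in mini-os.git, and xen.git only needs its recorded revision
moved forward.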


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:13:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:13:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747827.1155326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6vX-0002A5-Rz; Tue, 25 Jun 2024 14:13:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747827.1155326; Tue, 25 Jun 2024 14:13:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM6vX-00029y-PD; Tue, 25 Jun 2024 14:13:43 +0000
Received: by outflank-mailman (input) for mailman id 747827;
 Tue, 25 Jun 2024 14:13:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6vW-00029o-9D; Tue, 25 Jun 2024 14:13:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6vW-00068f-1y; Tue, 25 Jun 2024 14:13:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6vV-0007Na-I0; Tue, 25 Jun 2024 14:13:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sM6vV-0005Rr-Hd; Tue, 25 Jun 2024 14:13:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zSNkayiYlCiE1EfDzpEZ4fxNpENVziJ3mvJKxl8dZFQ=; b=F9MMKsTy103gEJw52Zhv4IdNDt
	VraIQN+pYBKN8njT1iu1dsS2H7QaDT9cgrRDEv0Ec5w7r7JjZJD39FaOOJ1znAERzSJM5F1NuP78Z
	40zq65NB1w8C8e1IL6dAvjlVYifbZ0uSnyAEN2yCNtwt/+rsz/enFjUEI1yNoymBMJJY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186486-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186486: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828
X-Osstest-Versions-That:
    xen=b14dae96c07ef27cc7f8107ddaa16989e9ab024b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 14:13:41 +0000

flight 186486 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186486/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828
baseline version:
 xen                  b14dae96c07ef27cc7f8107ddaa16989e9ab024b

Last test of basis   186479  2024-06-25 06:00:23 Z    0 days
Testing same since   186486  2024-06-25 11:00:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@vates.tech>
  Jan Beulich <jbeulich@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b14dae96c0..11ea49a3fd  11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:22:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747837.1155335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM73s-0004wx-K1; Tue, 25 Jun 2024 14:22:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747837.1155335; Tue, 25 Jun 2024 14:22:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM73s-0004wq-H3; Tue, 25 Jun 2024 14:22:20 +0000
Received: by outflank-mailman (input) for mailman id 747837;
 Tue, 25 Jun 2024 14:22:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM73r-0004wk-8D
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:22:19 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 53aab135-32fe-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 16:22:18 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2ec6635aa43so17156061fa.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:22:18 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9ebbbddf4sm81311075ad.287.2024.06.25.07.22.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:22:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53aab135-32fe-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719325337; x=1719930137; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=JEk5GQDBybPW95QfbZsG5Sr675yryL5eUz4xP3Eyv6w=;
        b=Gnt3bkqxnP6bBB7ioGgXRndrWaB0y/ODP/OEM6J9ZzsP+ex4DTr3dwRrls0D6fciqk
         ghI3BxRNq6txhLlvSK6PpOXEMmUpL8C0voKiYC4GXdGjho83yvyGlo9ubFnhZvi16gBv
         L6ib6sZ0c+qWIav1HMkQPcdnHKP8N+TnDPseg4ZEdXBaEEcRZ7gU7JV2BLHPRn0OGFEu
         hXNRVZO7/ESRkXQ41oBLPWuYMQvOg/4Nap/qKcIbuBe2G/zoWbcsut5fr4WVgcKHujMl
         C8MNfD/Kriev+46C/detxi+wIdoAxOo0F8U8QSTlR8StaEC9aDOcJGLBkjNTDCtYNHMB
         gB7Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719325337; x=1719930137;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=JEk5GQDBybPW95QfbZsG5Sr675yryL5eUz4xP3Eyv6w=;
        b=B/Y9TrSTgbv3+d/zpsiu9XiZ1Az6z4i+02JmaO/WVz5l5UdGsuTUywWMpqBAg8X46P
         2yRins1JNwGcYxucBPt0G4C2YzPr+mECRVNCf7O7/NWAJH/z+1jFMrGkx0YQxMMPasIv
         vq+jL5UoD37dSOCvbbMBbli1XAwOZW2NIuohspUpSA7bNwcpM38+upEZ8o9fub6ZYD9F
         qwMg6pMWugiHqnmm5u7Yyk8yON0XtlK5RvViQxOTQ3MyeBvSw39tVB8yzzm3Q6oKeDmQ
         kNqY8oRGb7Satv+aATJlTJEyFh1BXikde8jqrld73rdgrRRzb4QVg9QpPCFoCiFXhn6q
         PXBQ==
X-Forwarded-Encrypted: i=1; AJvYcCVCKuwRsai9QEw4uZK9WZ+VsJB2Vsaya3vtWJux5wuVkR8pKtgtAiI+24H+HHkg47emuxxsip/f1g8HUouvDwqetiz5nBHN8c30HMt88kc=
X-Gm-Message-State: AOJu0YwBZOmORh3WYHarB1Qyyb0Tr95KQgWWjMWugFMrO59j6iEDRhMJ
	gb+J9HenLiAjngaZH8l0WStlBItLRzc6Bwr1cZSWolEDkofb8CPDnfQxxa7Few==
X-Google-Smtp-Source: AGHT+IEWDpuE6nW0G2AhHYR52UA/acDYoMNfuXTPyyMSDAv3IZh7dDxsWQ4yMNoTdP05K4BwR4eKpg==
X-Received: by 2002:a2e:b0f3:0:b0:2ec:5785:ee94 with SMTP id 38308e7fff4ca-2ec579fefd8mr51887661fa.52.1719325337334;
        Tue, 25 Jun 2024 07:22:17 -0700 (PDT)
Message-ID: <3cb055bc-f61d-4045-8529-5a15fd5a7e00@suse.com>
Date: Tue, 25 Jun 2024 16:22:08 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 03/10] xen/riscv: add minimal stuff to mm.h to build
 full Xen
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <3d44cf567f5c361cce2713808bcea1b1b6f4f032.1719319093.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <3d44cf567f5c361cce2713808bcea1b1b6f4f032.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 15:51, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> ---
> Changes in V13:
>  - redefine mfn_to_page() and mfn_to_page().

DYM page_to_mfn() here as one of the two?

> +/* Convert between machine frame numbers and page-info structures. */
> +#define mfn_to_page(mfn)    (frame_table + mfn_x(mfn))
> +#define page_to_mfn(pg)     _mfn((unsigned long)((pg) - frame_table))

Is the cast really needed here?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:24:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747843.1155345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM75r-0005id-2T; Tue, 25 Jun 2024 14:24:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747843.1155345; Tue, 25 Jun 2024 14:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM75q-0005iW-Vu; Tue, 25 Jun 2024 14:24:22 +0000
Received: by outflank-mailman (input) for mailman id 747843;
 Tue, 25 Jun 2024 14:24:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM75o-0005iC-VP
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:24:20 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9c6c1728-32fe-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 16:24:19 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2e72224c395so62824181fa.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:24:19 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb320b33sm82064345ad.83.2024.06.25.07.24.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:24:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c6c1728-32fe-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719325459; x=1719930259; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=OUwAoH1ZShYCC83aNyQ953GlT2vF568xgwpj2nltnoc=;
        b=DHLFBgFxWTZ1fYdEHp8wh1xl05HhShovWybJgrt41EZdk67OfPObyBV4E4CsibtBja
         3GcXVH8t1hi2fBLudN3OVmZbWL2XfllwbjT8vwdL45mnz+XezUK96Fvi6dSAsCXLT1fc
         CBl2Mjak2XyPoZZPwbTZjbT32hiaKkEXwt2eNu7c0F5NR3KKiIgUR0NLta3DHFan/pTy
         pNfwzWK+raZa1A6c/ORSaHzIDBFhxxmYxwMyhoD0AgJc6J5/ldeRJen3lAUSVV4zXyhw
         j978/mS2e2j7UXyos50xBXLJsKbdNZ3/ZVjrgvsixmwIgb6zwIlQ2Yf1RW67iHNi+3fs
         Mp1w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719325459; x=1719930259;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=OUwAoH1ZShYCC83aNyQ953GlT2vF568xgwpj2nltnoc=;
        b=ZjMzSU8qlYJeviCv0brGS9oFsLL08gWuTgb9i/agHmV9M2bYSsovtbaOHvMFZXpS5s
         Z/Za0nl+YLljkVHUAT5Ae6N1j2s+db+vRPJMz52ezFlYm4VYjWmqawLw5V6iR0ZueDYA
         MXcSTLV/Euc5yRZbSRN2inrzVxRa9cslv46NksyASOM/se1p5MKFEL4YTJduN1qnD3kP
         iPfxjs5IP8eYGBkRisiBtOIhbuzA7d0z1UleTvZLHkOJ6rKDEvYvZQz0KdHmPlvypvri
         oR5s5Y0+7HJTIt2qyE4yoWJ5zsP+Mpoikzw0+DnAK/0Xm2oM+t3Ai+BA70hPSBSP61Qs
         owRA==
X-Forwarded-Encrypted: i=1; AJvYcCVp806IKE1f9on1UwQ5vKdUuVS3eoEO9TYt8PJBss1fzFNgtM43Ts8hJhqIB8HRW2XcezWyyQKBJVv1/MSWU0Pl4KK5zgnFzKUHz7CQYlA=
X-Gm-Message-State: AOJu0Yx0ro1hflptmIbrFdaXxUFPfDn744iYzOP8XXtipNZBZ4a1F+Wu
	lRg+RMVHQ5Sc6cb9GzK5F01JTx7I4W/i2DQDqVWYFmP7/NAZP/FoZ4cjsa0a7w==
X-Google-Smtp-Source: AGHT+IHOoU1ll2cOMc53LYPIlAzjEb+obs22SLapmiHzb5LCFSQLFWcm/K9JbvmV39a5Xio3BuKXcA==
X-Received: by 2002:a2e:8792:0:b0:2ec:5699:5e6 with SMTP id 38308e7fff4ca-2ec5b28ba0dmr45475321fa.26.1719325459491;
        Tue, 25 Jun 2024 07:24:19 -0700 (PDT)
Message-ID: <10d1f56c-6b27-47ab-bd5e-208a0938c3eb@suse.com>
Date: Tue, 25 Jun 2024 16:24:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 05/10] xen/riscv: enable full Xen build
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b47a278c89eb436a7b88dc5c0b18a6be09c76472.1719319093.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b47a278c89eb436a7b88dc5c0b18a6be09c76472.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 15:51, Oleksii Kurochko wrote:
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
>  Changes in V13:
>   - implement get_upper_mfn_bound() as BUG_ON("unimplemented")

Odd, patch 4 also says this and also does so.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:25:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:25:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747848.1155355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM771-0006F4-C5; Tue, 25 Jun 2024 14:25:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747848.1155355; Tue, 25 Jun 2024 14:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM771-0006Ex-9M; Tue, 25 Jun 2024 14:25:35 +0000
Received: by outflank-mailman (input) for mailman id 747848;
 Tue, 25 Jun 2024 14:25:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM770-0006Ep-36
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:25:34 +0000
Received: from mail-wr1-x434.google.com (mail-wr1-x434.google.com
 [2a00:1450:4864:20::434])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c7f1b4c8-32fe-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 16:25:33 +0200 (CEST)
Received: by mail-wr1-x434.google.com with SMTP id
 ffacd0b85a97d-3621ac606e1so3954517f8f.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:25:33 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706953b6b06sm1900144b3a.173.2024.06.25.07.25.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:25:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7f1b4c8-32fe-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719325532; x=1719930332; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=DDqm963lEKq4uY9yYk6vCYrTFvjTQwIhWVOjFVXWAJw=;
        b=Vq5XdDBlJkVU7BmWrRsikYKw9drCSbBahUP407SkFk2aU6OCU/2XWF/DdbW+okXFyq
         mQDallh14D6XQ4tLotkieX9BveLvXIvk0/aKpDz7CXDEh523xX5XQv9STOLtasF/wkKI
         PL/E4kAbbFlmP5FcoNVj7QU7kH+2mMgBoJpwdHUyEo/kWUXrITdTa5DotHKuRlQ6QhEI
         cIwR0/D+WW90SgmVLJrPnUjBiMgGfsImR2VvipZQwdBQOevFstZUpByi2EdEArVQ6auZ
         2uHYT0V1u6Rgi0fhTaIKxdbdbMh7zux9IjyviKiBJqJsQGC77nRjeJwSXWkF8CJRXxMN
         F6cw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719325532; x=1719930332;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DDqm963lEKq4uY9yYk6vCYrTFvjTQwIhWVOjFVXWAJw=;
        b=Ql0zc6PjqpRacKnrDAnCkCGo0P7oDuPKzKqPRKX+/4sEwfDs7J+pC48QIae4Rt2LNT
         Vx/66zNiE8vspwZwCBVZ+EUWck9m/KTucsWJzlYdOV10VeVkAnaIE9+9eetCF441yEsw
         X9oxpMX+dhF8F32t9KU68RegkTFDoRTxPdWx/ovhCkwGnldqg4k2tFhwCljtXvDPR99S
         plSV+mr49EWVkVi3akYJoC7xEaWQqZO6Z/P5nAlo4eQLeoBw37ocIlhURqOPjz/V1X1J
         cM2M4oy3D8+0B/JWDrzPEo/CR974F3WCFTBcibiFtTLHYxQ7asGDuTW6awuyxik3v3rY
         Z79Q==
X-Forwarded-Encrypted: i=1; AJvYcCV2EbPpSx4MOWtl2gDnhcZ0+YkKT56vvuaP0peI7IC6Ze0tuEYdfBeY1buiBFAgJeEGm4E0NM8+z+Kpah9r4DjdI/1KzdqnRjyUwVUQV4w=
X-Gm-Message-State: AOJu0Yxd8F6mj2RjMunbzUMOwHYfSxIsOKK8XsAzr2eiSfxFOpnqT5sX
	VAnwMYxyMhhMymoLTHcpb8X62xz3U65mDZhO2fddkQ2XM8CblQ4BDkNotYb7CA==
X-Google-Smtp-Source: AGHT+IGjEd/2a5pYVE0a6I4TmGqHJr81NWshZlkHv7j03ktAd9HVK2CehVoqx0AIjT2bR6vnha1yFA==
X-Received: by 2002:a05:6000:1549:b0:366:df35:b64f with SMTP id ffacd0b85a97d-366e325ba7emr9647974f8f.4.1719325532545;
        Tue, 25 Jun 2024 07:25:32 -0700 (PDT)
Message-ID: <4a4e37a9-eac7-4e72-8845-6b4bbd7bafe6@suse.com>
Date: Tue, 25 Jun 2024 16:25:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 07/10] xen/common: fix build issue for common/trace.c
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <f14f2c5629a75856f4bafdbff3cc165c373f8dc2.1719319093.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <f14f2c5629a75856f4bafdbff3cc165c373f8dc2.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 15:51, Oleksii Kurochko wrote:
> During a GitLab CI randconfig job for RISC-V, the build failed with an error:
>  common/trace.c:57:22: error: expected '=', ',', ';', 'asm' or
>                               '__attribute__' before '__read_mostly'
>    57 | static u32 data_size __read_mostly;
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

If you give a release-ack, this can go in right away, I think.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:46:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:46:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747859.1155367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7QZ-0003OB-W2; Tue, 25 Jun 2024 14:45:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747859.1155367; Tue, 25 Jun 2024 14:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7QZ-0003O4-Rk; Tue, 25 Jun 2024 14:45:47 +0000
Received: by outflank-mailman (input) for mailman id 747859;
 Tue, 25 Jun 2024 14:45:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM7QY-0003Ny-FE
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:45:46 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 99ececca-3301-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 16:45:45 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2ec408c6d94so64461721fa.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:45:45 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819a622f0sm8808593a91.8.2024.06.25.07.45.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:45:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 99ececca-3301-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719326744; x=1719931544; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=bDWoPZlQxmFrvHpt7YkCMFurjcft02gNTsFmOVmQ+nw=;
        b=UeXR2bm1QMndE0mvjQWIAg92zkhQ/j8Bx99j5valjsVmqLMO7kgze2whHaR20gXENt
         bEg9sgwatnmOBJR2dRiG8iOZIEPyprkF6GBHTpdemZTZDvqe8fA9qWN2PmZTZeZVdgcm
         4zlvXZKTWKNEVotIJMNR840Qgd1lwpa5dBgaRKj3DW62WLciWPiiAKS+CRWlr+bdL+3l
         aYtAFvSxrO9wVBY4NJgPkJRLHd3xVzpB1OocsujFXFhIDDSDh2Z2z4ezKCn3i8eON8Jg
         c9qF/Ijz3/F2m7TfcATukOSRY8bhJVcmb86PhWGGGFwnF7cU8a121PSsDl87LjLCJeVx
         4dmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719326744; x=1719931544;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bDWoPZlQxmFrvHpt7YkCMFurjcft02gNTsFmOVmQ+nw=;
        b=qamExB9pwLj+BTrKb/5nGdVlDTwa3eUzRJoyAlscLh9WhfP6rf1s1A4HgNgkGxEkg3
         H4q9CftZxZJdAgLvxL8/PweYHWxJNM5cEDfo2pV5WaI+Zj/aJ2uLs0v3yj2d7aeWWjMo
         +RpAGZe4qy7m+zdXE6DajRJxwyJNhOTGGqvfNIoPqhh3WPz9MsVA+RSB7lQ6eEg2jH/V
         gnUu4z0YskaFlh8R5atkF9Zp5oR0fTB9x63luf2y97nC/r8YiKB2QT2TZ02wvM51eMLM
         olz9rCiGTNxgjFqrmilL9qDVuMQKSYiF35YENYbpTiEAf6rw2iFpx/5cDb9WixtkocRt
         kbZw==
X-Forwarded-Encrypted: i=1; AJvYcCUAq+X7DV4qIndTDRdfjW/e83KuuI57ffSM9H3/k3VqdVDeO5fubnsHW+80RSdPlalF0fKwzNAAC6nezqb/1gdS8b3JsNDDSkAnOrfjKck=
X-Gm-Message-State: AOJu0Ywo8dx2/7KmBAoJsnobSAnVY9Xvi/4U6/yY60x119xVwHCyhOH+
	Z1TdB+ZR+YLssNOkQDkntUWhKYCOMXotyfMhc8p7Tj+S2kcLEDZjpxguqUpiNQ==
X-Google-Smtp-Source: AGHT+IEhkiGAbrctLw046GlK2bH4EE+v86d1kOd1UCNtcpG1EVEdS+sNMMIzk1ryKY2wMzPnce/GlA==
X-Received: by 2002:a2e:9090:0:b0:2ec:5621:b9f2 with SMTP id 38308e7fff4ca-2ec5936fb3amr62825101fa.41.1719326743736;
        Tue, 25 Jun 2024 07:45:43 -0700 (PDT)
Message-ID: <8be2c7c0-0aa0-44e0-b3d3-d422fecc29b6@suse.com>
Date: Tue, 25 Jun 2024 16:45:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 08/10] xen/riscv: change .insn to .byte in cpu_relax()
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b5ccb3850cbfc0c84d2feea35a971351395fa974.1719319093.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b5ccb3850cbfc0c84d2feea35a971351395fa974.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 15:51, Oleksii Kurochko wrote:
> The .insn directive appears to check that the byte pattern is a known
> extension, whereas .4byte doesn't.

Support for specifying "raw" insns was added only in 2.38. Moving away
from .insn has other downsides (which may or may not matter here, but
that would want discussing). But: Do we really need to move away? Can't
you use .insn with operands that the older gas supports, e.g.

	.insn r MISC_MEM, 0, 0, x0, x0, x16

? I'm sorry, the oldest RISC-V gas I have to hand is 2.39, so I couldn't
double check that 2.35 would grok this. From checking sources it should,
though.

> The following compilation error occurs:
>   ./arch/riscv/include/asm/processor.h: Assembler messages:
>   ./arch/riscv/include/asm/processor.h:70: Error: unrecognized opcode `0x0100000F'
> In case of the following Binutils:
>   $ riscv64-linux-gnu-as --version
>   GNU assembler (GNU Binutils for Debian) 2.35.2

In patch 6 you say 2.39. Why is 2.35.2 suddenly becoming of interest?

> --- a/xen/arch/riscv/include/asm/processor.h
> +++ b/xen/arch/riscv/include/asm/processor.h
> @@ -67,7 +67,7 @@ static inline void cpu_relax(void)
>      __asm__ __volatile__ ( "pause" );
>  #else
>      /* Encoding of the pause instruction */
> -    __asm__ __volatile__ ( ".insn 0x0100000F" );
> +    __asm__ __volatile__ ( ".byte 0x0100000F" );
>  #endif

In the description you (correctly) say .4byte; why is it .byte here?
Does this build at all?

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:49:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:49:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747866.1155376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7U6-0004SV-DW; Tue, 25 Jun 2024 14:49:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747866.1155376; Tue, 25 Jun 2024 14:49:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7U6-0004SO-AJ; Tue, 25 Jun 2024 14:49:26 +0000
Received: by outflank-mailman (input) for mailman id 747866;
 Tue, 25 Jun 2024 14:49:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM7U5-0004SI-3B
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:49:25 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1c4abc4c-3302-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 16:49:23 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2ec58040f39so32152631fa.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:49:23 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7068af60b61sm3360514b3a.134.2024.06.25.07.49.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:49:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c4abc4c-3302-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719326962; x=1719931762; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=Su1fw8OFbAmbG4pvwPw6Bvr3dAWnfrD9VfuzVqzhebU=;
        b=NGe4OBFfbPCFAJH3xOzAqag1Zt9B2ht50cgg1xZLhKi3zVWg0yW6dHiMDYvItvdu0G
         /420+oHD9jNGNrDs1tJBfX2ajei6Is4DA43/KBG0L/FvbfujvknW89lJf+pjOBKMpECW
         j4tKXO3trQN99nYh4h+CCgN9jiSScW8zYTEj0AzQEXKOmhbvqwGkWfai4j3Y8FogKWbt
         ipI2ekokvDqdnghfA+vnPV0a3ge+SfptTg1USVZu5Zj+hsVJZbndqkpqrNwigfXKpsX0
         TQJW4LAIHHsfHy/vhepNRoB8oJnlQBLCpCe7gk9z3GopClMNIOzmgwucBdBcK9Uay64T
         RXMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719326962; x=1719931762;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Su1fw8OFbAmbG4pvwPw6Bvr3dAWnfrD9VfuzVqzhebU=;
        b=SKSmPPR2iu0IU/rn44V4I7Cz+nDKaUebLGFe/LSkX6qtOCa7SUHXYFy/3VvtHZry6W
         ShnEtFBNAhJNYfB8S8qgRH7pCQ06X2tSo2pQRN0axY5LGLzxGAHMKs/yoHgM2L9alggC
         TChkSPc0kiIUJTGedPuwKI7p76Zp5uXXWwjPNhuRX1Ew8V05vpK/SePQcQFsfjyYocs2
         rY7YWDMepUyG85oSCnj/J75LtLXEMBmhDKUI++3KIgowvllz/tc/vkN3DCQgk4WGezmE
         +W8ianV4AITFFtpb6NF1GN2AeKx/HOT30D32YeZVIq3CfUGqudPLgIaRdvKh3Q5bykpr
         TzWg==
X-Forwarded-Encrypted: i=1; AJvYcCVqXRN0vDoKXYG4O4ljtMXs3bOksv18kilDSm4IzVcTSVq5zCpVv+EFwYIZ4G5TIIynm1YCM1YzAv9VeJ3d/Qi695vrGQWphmVFxeuT+U8=
X-Gm-Message-State: AOJu0YyrlaYmBX12o5nUOw54NFbtd79LGNzDgZawMrrUL4+yDoUoZgvh
	wqfXtvPRf6f9dKSiQlE+g4IPwkDhBJ6IpfPPJKSkFltaaCxFg0+x16SriXRDfQ==
X-Google-Smtp-Source: AGHT+IGZ26yCFTOu30zLwaWfoshB6xJx+fnUGmDwpEleyXzJBvLR1gihOB3AliwO6mARCHouq9A1Vg==
X-Received: by 2002:a2e:9083:0:b0:2ec:53ad:464 with SMTP id 38308e7fff4ca-2ec594cfa5fmr45957491fa.34.1719326962530;
        Tue, 25 Jun 2024 07:49:22 -0700 (PDT)
Message-ID: <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
Date: Tue, 25 Jun 2024 16:49:14 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 15:51, Oleksii Kurochko wrote:
> --- a/xen/arch/riscv/include/asm/cmpxchg.h
> +++ b/xen/arch/riscv/include/asm/cmpxchg.h
> @@ -18,6 +18,20 @@
>          : "r" (new) \
>          : "memory" );
>  
> +/*
> + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is too
> +ld, form

Same question: Why's 2.37 suddenly of interest? Plus, why would age of the
toolchain matter? What you care about is whether you're permitted to use
the extension at runtime. Otherwise you could again ...

Also something went wrong with line wrapping here.

> + * it of a NOT+AND pair
> + */
> +#ifdef __riscv_zbb
> +#define ANDN_INSN(rd, rs1, rs2)                 \
> +    "andn " rd ", " rs1 ", " rs2 "\n"
> +#else
> +#define ANDN_INSN(rd, rs1, rs2)                 \
> +    "not " rd ", " rs2 "\n"                     \
> +    "and " rd ", " rs1 ", " rd "\n"

... resort to .insn.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:52:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:52:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747872.1155385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7WX-0006Pq-PL; Tue, 25 Jun 2024 14:51:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747872.1155385; Tue, 25 Jun 2024 14:51:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7WX-0006Pj-Mn; Tue, 25 Jun 2024 14:51:57 +0000
Received: by outflank-mailman (input) for mailman id 747872;
 Tue, 25 Jun 2024 14:51:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM7WV-0006ON-Rl
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:51:55 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 76456aa2-3302-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 16:51:54 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2ec50a5e230so40272671fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:51:54 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819dd3d81sm8864053a91.57.2024.06.25.07.51.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:51:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76456aa2-3302-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719327113; x=1719931913; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=LF90kDOkU0Kf70tg7SPJ469N2laBq/yRfCpqi6lgIAs=;
        b=Na0e4cEnFn4mQ1INu+azsajes6ndXg3HnqCemRPtXWb7uCdC5Qtmpjz0qjLo/bufMz
         NvFr3vyc4minSFIaOx+1ciNSapp1wfuqGMyGAy5saSR3g+V+7P/+cTszXefa0+HrLd+5
         l7HZ0z6UjUOrKjdt6r8To9JMkHAxooMmE56HocpqtAlAwOoPO2r6LmkRp2rbkxuPdDm2
         9eQCj1ZXLCr+Lp1gIwy5nRX05E1aY+dOfjPBEXQHdO2mipgi+h3/cbjUmeALR8GeTdW6
         wVzEyeVO2KyxVbNEYZL4RHq4MzGtgDAiHtp3jsKQqbLhs5cJ57V0ya3yyHNiEJ4xeCMw
         +Fdw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719327113; x=1719931913;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LF90kDOkU0Kf70tg7SPJ469N2laBq/yRfCpqi6lgIAs=;
        b=vToBptQWEaWv5ImtTCPQmlXYuh/DGuMrs1qP9/xcskIPRfLap9JsoDrVjkQ51052Em
         75A/ahU07F8S9Jaedyf0pCYLDF6UcPtt7Q/9IUiFlfyXqQfAYVZca1yenR8RBr7Lf2st
         cH2lAXtP+AdcEd/RAuvuECG7mIzLxqHyq4uMSCbwMV/iPOKbYVqTamEVI8W2t8cnYFRS
         QM1Q0TXLdM3R770ZwQZa4G3ij/C16RchGQC2M3DKyx3q4+VmUdEIM8P98hGUIonzrqOu
         Lmj0CGCEDZe35P9WRT6+b+39uQjfHnwyjPZanh/hhddQ/QuoBTtCpOcoImC/Pl4ExenE
         fX1A==
X-Forwarded-Encrypted: i=1; AJvYcCUoE/YDHJzkaPqnG2R1OmJ/n9GcK0OQcWIdaPEm1uzSD/6e9o4g3GN1/OY1rTjLayiRBuqPeDpVDbb0Scfu9qyiAK0Y+CnQcoW5vwY8vYA=
X-Gm-Message-State: AOJu0YySg30PnspKhwPombAlSOfcHgTBi6hVEEKahIDwj97PrR1ozIKa
	KKNgUPyM2W7MH5bYSUqwVnrZnafVWDMdxZhe8508EIfAT2b8T6j4GwjrWWWqfA==
X-Google-Smtp-Source: AGHT+IFlPk7AXVpuY3PgJusTzyodYJsE2Fa179/cn6anaC4zl6tm7uz55qZY2rK0UfcaFDWi7/OxTg==
X-Received: by 2002:a2e:88c9:0:b0:2ec:556f:3474 with SMTP id 38308e7fff4ca-2ec5b2e94bbmr50039041fa.52.1719327113271;
        Tue, 25 Jun 2024 07:51:53 -0700 (PDT)
Message-ID: <301ed888-af55-4445-9944-a1488791e120@suse.com>
Date: Tue, 25 Jun 2024 16:51:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 10/10] xen/x86: drop constanst_test_bit() in
 asm/bitops.h
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <edd341a6e86ceac2717c59680d4e5e7fc3321b5d.1719319093.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <edd341a6e86ceac2717c59680d4e5e7fc3321b5d.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 15:51, Oleksii Kurochko wrote:
> constant_test_bit() is functionally the same as generic_test_bit(),
> so constant_test_bit() can be dropped and replaced with
> generic_test_bit().
> 
> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
(Didn't I ask for this to be done, so perhaps also Requested-by or the like?)



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:56:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:56:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747880.1155396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7bD-0007Tz-E4; Tue, 25 Jun 2024 14:56:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747880.1155396; Tue, 25 Jun 2024 14:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7bD-0007Ts-B0; Tue, 25 Jun 2024 14:56:47 +0000
Received: by outflank-mailman (input) for mailman id 747880;
 Tue, 25 Jun 2024 14:56:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM7bB-0007Tm-8P
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:56:45 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 22c38da2-3303-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 16:56:43 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2ec5fad1984so34139281fa.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:56:43 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c819a623a3sm8848638a91.5.2024.06.25.07.56.39
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:56:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22c38da2-3303-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719327403; x=1719932203; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=GHFwsA2HBWd/ihksLPZIEBzulcUQCvnvwxuRXsfjfjs=;
        b=IXNsNlNzBvvmrg4qSa9utaUwC0J+KwsR/PigYMEfOnBuYPklrog1nOxOXlq7dO0xdT
         0LH3nycVNV1dHpVdldC3MRnW2sZQBxYr2fZR9bbZfHZSMLqMgTHttuLCtf3bNnLNNZF/
         T3DV3TmEnk3FkeOXOHmApKNJijMLBsTNRfRoFZ+DTPTccupNiR4majLdoVJYT/6gawSC
         FqNdt52REfwomTcVIXyOOaLn1ko0EmcUHm3ZWwCwFh/9dO4icalul/dMFS10iQcMVAmP
         DwP6MeWnpT9ipLSd8Z+1kUEFtI8SHsWGlcbzzMiVxCopsE2RtRg11sZAnF8uyTipLDvU
         2ESw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719327403; x=1719932203;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GHFwsA2HBWd/ihksLPZIEBzulcUQCvnvwxuRXsfjfjs=;
        b=XUJjKw1WdHY9PZS0V11vWcQnilpUEPY9HtWzMWEliBUKa3+ByDZS4G4LQRrfizFor4
         wsXyt9qbqruX389Tbn2HmwyzY5mOtsGoQoUKZX3H43Ex3XIXcB40ua1UvBTEms/0oKOJ
         9xuEmfqfXy8TvE+fHGx3FTNjcmRyLAbipcMC3p8RfbALo5mQ/8Wippnvekhbp0VzFGQd
         5EvHlAHIqtCIDUZZ+qbQ3nM0vQQO4VJhl7BQVHThamUqNQRQ69ptqqmR0g+nc8HvEz8q
         dAmw7W72FlSkQWdz1C1DkXA4vEevxG3cnJisZhs9TNhhaauwhlWh57aeRF3BRFvtcxpQ
         oUbg==
X-Forwarded-Encrypted: i=1; AJvYcCUrr/gx/Dgf4rPpWn2tV3dEsviDuY1E9wusb0TTMpV8oTp9IKEZCcI30EUv1ggzHqIHkvS/yc38COte40n8zDfX9p/1OUcu0xfLxoN6hME=
X-Gm-Message-State: AOJu0YxmJe1mhp8LyUbVF7tSNvxcsymL6+dNqHzOgMHsyqNM/iPkdA4v
	yjfSozZ+rc/l+N0RCJC0KHv1R81EYm1VZ5NQHodslOT5ZREC2ViDFvJHZhmVl3xQyr/XQ/f40/c
	=
X-Google-Smtp-Source: AGHT+IHyY6h5AdR2otcts7LbpODbEZERLAhXHgzy8WWhPPZPPR25xdG2qJfEv/T3zZCE40qQix8W/g==
X-Received: by 2002:a2e:9596:0:b0:2ec:55b5:ed50 with SMTP id 38308e7fff4ca-2ec5b2fd2d0mr55937841fa.5.1719327402768;
        Tue, 25 Jun 2024 07:56:42 -0700 (PDT)
Message-ID: <4bab3cff-93c0-497c-b0ad-8d2df26124d2@suse.com>
Date: Tue, 25 Jun 2024 16:56:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
 <80d0578d-26c0-4650-9edf-6926c055d415@suse.com>
 <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
 <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com>
 <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
 <243e34fc-57a2-464c-8a11-2cfee7e9cda3@suse.com>
 <CABfawh=6d+F1tYLmfC-NyMn80NROFf_0HL-WkKzu-r5vjfScaw@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CABfawh=6d+F1tYLmfC-NyMn80NROFf_0HL-WkKzu-r5vjfScaw@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 15:40, Tamas K Lengyel wrote:
> On Tue, Jun 25, 2024 at 9:15 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 25.06.2024 14:40, Tamas K Lengyel wrote:
>>> On Tue, Jun 25, 2024 at 7:52 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 25.06.2024 13:12, Tamas K Lengyel wrote:
>>>>> On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>
>>>>>> On 24.06.2024 23:23, Tamas K Lengyel wrote:
>>>>>>> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>
>>>>>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>>>>>>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
>>>>>>>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
>>>>>>>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
>>>>>>>>>
>>>>>>>>> +libfuzzer-harness: $(OBJS) cpuid.o
>>>>>>>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
>>>>>>>>
>>>>>>>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
>>>>>>>> tree anywhere.
>>>>>>>
>>>>>>> It's used by oss-fuzz, otherwise it's not doing anything.
>>>>>>>
>>>>>>>>
>>>>>>>> I'm further surprised you get away here without wrappers.o.
>>>>>>>
>>>>>>> Wrappers.o was actually breaking the build for oss-fuzz at the linking
>>>>>>> stage. It works just fine without it.
>>>>>>
>>>>>> I'm worried here, to be honest. The wrappers serve a pretty important
>>>>>> role, and I'm having a hard time seeing why they shouldn't be needed
>>>>>> here when they're needed both for the test and afl harnesses. Could
>>>>>> you add some more detail on the build issues you encountered?
>>>>>
>>>>> With wrappers.o included doing the build in the oss-fuzz docker
>>>>> (ubuntu 20.04 base) fails with:
>>>>>
>>>>> ...
>>>>> clang -O1 -fno-omit-frame-pointer -gline-tables-only
>>>>> -Wno-error=enum-constexpr-conversion
>>>>> -Wno-error=incompatible-function-pointer-types
>>>>> -Wno-error=int-conversion -Wno-error=deprecated-declarations
>>>>> -Wno-error=implicit-function-declaration -Wno-error=implicit-int
>>>>> -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
>>>>> -fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
>>>>> -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
>>>>> -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
>>>>> -Og -fno-omit-frame-pointer
>>>>> -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
>>>>> -MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
>>>>> -I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
>>>>> -D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
>>>>> -Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
>>>>> -Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
>>>>> -Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
>>>>> -Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
>>>>> x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
>>>>> x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
>>>>> /usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
>>>>> /usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
>>>>> in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
>>>>> __locale_struct*, char const*, ...)':
>>>>> cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
>>>>> undefined reference to `__wrap_vsnprintf'
>>>>> clang: error: linker command failed with exit code 1 (use -v to see invocation)
>>>>> make: *** [Makefile:62: libfuzzer-harness] Error 1
>>>>> rm x86-emulate.c wrappers.c cpuid.c
>>>>> make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
>>>>> ERROR:__main__:Building fuzzers failed.
>>>>
>>>> Hmm, yes, means we'll need an actual vsnprintf() wrapper, not just a
>>>> declaration thereof.
>>>
>>> I don't really get what this wrapper accomplishes
>>
>> They guard against clobbering of in-register state (SIMD registers in
>> particular, but going forward maybe also eGPRs as introduced by APX)
>> by library functions called between emulation of individual insns (or,
>> especially possible for fuzzing instrumented code, I think) even from
>> in the middle of emulating an insn. (Something as simple as the
>> compiler inserting a call to memcpy() or memset() somewhere in the
>> translation of the emulator source code could also clobber state.)
>>
>>> and as I said, fuzzing works with oss-fuzz just fine without it.
>>
>> I'm inclined to take this as "it appears to work just fine". Fuzzed
>> input register state may be lost by doing a library call somewhere,
>> rendering the fuzzing results less useful. This would pretty
>> certainly stop being tolerable the moment you compared results of
>> native execution of a sequence of instructions with the emulated
>> counterpart.
> 
> Yea, that may be. Any suggested way to fix the linking issue though?

As said before, we need to gain a real wrapper for vsnprintf(). Right
now we only have a declaration thereof, for use by the wrapper for
snprintf().

> I'm not even sure why the problem only appears in the oss-fuzz build,
> when I just run make normally it seems to work.

Sounds a little odd indeed. Yet I have no insight into the differences
between the two environments.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:57:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:57:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747886.1155406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7bv-0007yL-Mr; Tue, 25 Jun 2024 14:57:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747886.1155406; Tue, 25 Jun 2024 14:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7bv-0007yE-JO; Tue, 25 Jun 2024 14:57:31 +0000
Received: by outflank-mailman (input) for mailman id 747886;
 Tue, 25 Jun 2024 14:57:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM7bv-0007Tg-73
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:57:31 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3ee4519a-3303-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 16:57:30 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2ebe785b234so62198491fa.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:57:30 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f045bsm82416265ad.53.2024.06.25.07.57.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:57:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ee4519a-3303-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719327450; x=1719932250; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=RcjHut3ba7UlZEfNOKRhl6HI5WDKYwGdWVqA7p7jfxU=;
        b=Y/xFLRgPndiDhg4IQw4NCwWQmxqXWeicATg98VwMdA3WBzPtiY93AJpcTpAyYZIJMs
         bcITyJNBCAkmQ/yOr0H3vg/SDdLXeImP8au6V+i1a4ctEqeTOp4ADe63bKwrcSP5G5nG
         CiUIKt4pUsqDFc61b/B1R0tkZ9/Hjm5XMBvIiR0ye4FN6MPxjSnPq/R8bKtUPtZxhZPw
         NNoKTEzHJx9kmjyCclXDrWu9i1Q95LLYwAPMRQJIT2qRQ13TwdzcG/KkjV5NkyprpuJj
         n+CTPMb+urDIeEde+nPHZ5PkVZ64kV6CV6ndL+/Xd4deM8xbr3z5dcQXDmR2dj/wVnsj
         sDOw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719327450; x=1719932250;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RcjHut3ba7UlZEfNOKRhl6HI5WDKYwGdWVqA7p7jfxU=;
        b=Qlj1b/Knqu/Bb6pRb4IWT6HtbQb5LmZuWGYkbAhvtajP8tAPWSYSr0ES1wp6IevaHJ
         jO7ltg3pYFw2KyQeYCOiVCTjecWBM06HtoCoZ3n5B0o02AMZQtr4fE0WGSqHVzb6kSVH
         Z1MEX2ypI17BsHXmr5jKP29MjaOGxTRwkgzrif9tY0EXmzhpK4G9S2z9fUvbcACFfSBk
         0l4n3TJ1vIUv25k4uD2dhQtBTaB92RzE/+hMFZrwSxDaVvHhkWyXIrVbiN8InnJJX77A
         XBTzKVmrdtldCexp2a4KBGfPVG/8IiU9DEPAIdWj5jGIRUKAFielCP2QAq0UvYvT5PPe
         Fhzw==
X-Forwarded-Encrypted: i=1; AJvYcCVAvoh0SoMxt/gaM1ljg9ed4OircBPL2gHw0AHv32RdVgJfTYll9Q0LWv0+x+/A/ce8LMNGgrGo3fdxi6OmGAdkj41docdnP211BVnp+BI=
X-Gm-Message-State: AOJu0YyNotAFvgTV0E3rdpU3rU83rhicUt775OFf/Ml/TrCkGEeFwu7W
	jozIdkDh/t//22xRx4+3kCYFoXO8nef6l8EmsmR0UvUEN5i2WvRjozLoyzbz3w==
X-Google-Smtp-Source: AGHT+IHEDL9VznFTNvhb6457TbTO+Jb6EUd5nQybRNa22vbyoOpmk/NEg2twOLv9Wrt3vLOm4sBxDg==
X-Received: by 2002:a2e:96c4:0:b0:2ec:1f9f:a876 with SMTP id 38308e7fff4ca-2ec593101ebmr61641471fa.6.1719327449967;
        Tue, 25 Jun 2024 07:57:29 -0700 (PDT)
Message-ID: <0b362ff3-0df4-4ec3-b748-1aa9267b701a@suse.com>
Date: Tue, 25 Jun 2024 16:57:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <20240621191434.5046-2-tamas@tklengyel.com>
 <0a7854e0-e01e-435e-95fe-b262cc4afc1e@suse.com>
 <CABfawhmkhCD-MFgZBrhJ1CwiiseotJ=+MANbgwsjRL_VYsnuTQ@mail.gmail.com>
 <b9b84f10-6d41-48d9-996d-069408753e28@suse.com>
 <CABfawhkJ0t8FenCWbupGcHD-ZhorbWN7ZjMQVm-jeg_zA1g5iQ@mail.gmail.com>
 <66a7243d-a1a1-4236-832f-f3e1daf11b85@suse.com>
 <CABfawhmAV5+Nr9A_Speh2ai3v9wfJtxmps=R6iTxNU1RFP4xRA@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CABfawhmAV5+Nr9A_Speh2ai3v9wfJtxmps=R6iTxNU1RFP4xRA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 15:42, Tamas K Lengyel wrote:
> On Tue, Jun 25, 2024 at 9:18 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 25.06.2024 14:39, Tamas K Lengyel wrote:
>>> On Tue, Jun 25, 2024 at 7:40 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 25.06.2024 13:15, Tamas K Lengyel wrote:
>>>>> On Tue, Jun 25, 2024 at 5:17 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>
>>>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>>>>>> --- /dev/null
>>>>>>> +++ b/scripts/oss-fuzz/build.sh
>>>>>>> @@ -0,0 +1,22 @@
>>>>>>> +#!/bin/bash -eu
>>>>>>> +# Copyright 2024 Google LLC
>>>>>>> +#
>>>>>>> +# Licensed under the Apache License, Version 2.0 (the "License");
>>>>>>> +# you may not use this file except in compliance with the License.
>>>>>>> +# You may obtain a copy of the License at
>>>>>>> +#
>>>>>>> +#      http://www.apache.org/licenses/LICENSE-2.0
>>>>>>> +#
>>>>>>> +# Unless required by applicable law or agreed to in writing, software
>>>>>>> +# distributed under the License is distributed on an "AS IS" BASIS,
>>>>>>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>>>>>> +# See the License for the specific language governing permissions and
>>>>>>> +# limitations under the License.
>>>>>>> +#
>>>>>>> +################################################################################
>>>>>>
>>>>>> I'm a little concerned here, but maybe I shouldn't be. According to what
>>>>>> I'm reading, the Apache 2.0 license is at least not entirely compatible
>>>>>> with GPLv2. While apparently the issue is solely with linking in Apache-
>>>>>> licensed code, I wonder whether us not having a respective file under
>>>>>> ./LICENSES/ (and no pre-cooked SPDX identifier to use) actually has a
>>>>>> reason possibly excluding the use of such code in the project.
>>>>>>
>>>>>>> +cd xen
>>>>>>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
>>>>>>> +make clang=y -C tools/include
>>>>>>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
>>>>>>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
>>>>>>
>>>>>> In addition to what Julien said, I further think that filename / directory
>>>>>> name are too generic for a file with this pretty specific contents.
>>>>>
>>>>> I don't really get your concern here?
>>>>
>>>> The thing that is built is specifically a x86 emulator piece of fuzzing
>>>> binary. Neither the directory name nor the file name contain either x86
>>>> or (at least) emul.
>>>
>>> Because this build script is not necessarily restricted to build only
>>> this one harness in the future. Right now that's the only one that has
>>> a suitable libfuzzer harness, but the reason this build script is here
>>> is to be easily able to add additional fuzzing binaries without the
>>> need to open PRs on the oss-fuzz repo, which as I understand no one
>>> was willing to do in the xen community due to the CLA. Now that the
>>> integration is going to be in oss-fuzz, the only thing you have to do
>>> in the future is add more stuff to this script to get fuzzed. Anything
>>> that's compiled with libfuzzer and copied to $OUT will be picked up by
>>> oss-fuzz automatically. Makes sense?
>>
>> It does, yes. Yet nothing like that was said in the description. How
>> should anyone have known there are future possibilities with this script?
> 
> Apologies, to me "The build integration script for oss-fuzz targets."
> was sufficiently descriptive but it may require some familiarity with
> oss-fuzz to get. I can certainly add the above text to the commit
> message if that helps.

Yes please, or an abridged variant thereof.

Jan
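
[Editorial note: as context for "anything that's compiled with libfuzzer and
copied to $OUT will be picked up by oss-fuzz automatically" -- an oss-fuzz
target is simply a binary exporting LLVMFuzzerTestOneInput. A minimal sketch
follows; the body is placeholder logic for illustration, not Xen's actual
fuzz-emul.c harness.]

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Minimal libfuzzer entry point: the fuzzing engine calls this function
 * repeatedly with mutated inputs.  Returning 0 tells it to keep going.
 * The opcode check below is a made-up example, not the real emulator.
 */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    volatile int interesting = 0;

    if ( size < 2 )
        return 0;

    /* e.g. a two-byte opcode pattern a real harness would hand to the
     * instruction emulator for decoding */
    if ( data[0] == 0x0f && data[1] == 0x05 )
        interesting = 1;

    (void)interesting;
    return 0;
}
```

Compiling this with `clang -fsanitize=fuzzer` (or linking against
`$LIB_FUZZING_ENGINE`, as the Makefile rule under discussion does) and copying
the result to `$OUT` is all oss-fuzz needs to start fuzzing it.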


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 14:59:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 14:59:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747893.1155416 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7eG-0000ga-27; Tue, 25 Jun 2024 14:59:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747893.1155416; Tue, 25 Jun 2024 14:59:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7eF-0000gT-Vj; Tue, 25 Jun 2024 14:59:55 +0000
Received: by outflank-mailman (input) for mailman id 747893;
 Tue, 25 Jun 2024 14:59:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sC98=N3=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sM7eF-0000gN-D7
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 14:59:55 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 94152ec7-3303-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 16:59:53 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ebe0a81dc8so73608731fa.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 07:59:53 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb321720sm83223095ad.84.2024.06.25.07.59.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 07:59:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94152ec7-3303-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719327593; x=1719932393; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=RfqYYxPLuSCwW96Mc2hBqvO628PNFFvYqArRNxDYeiM=;
        b=DPhEV4o4+4ioMt9ZR4uVnjC5BnU2W9fzLRy98rowGJk+ZTm3FJ3ArH6X5G3vW+Tkmk
         nc8YhFlNBBygDtjMDvEQcKouoCBH/TkRWdi+2C/CZyB9Vh6CMccVoTTIkq/q5KMfMI4Z
         xRPcTcSV7WRzxyfdq+6IsVdUic0M2RiSK2Q/Xf5iIASxcfowzyXiCZm3Jvaf1NC2jhR9
         noSEojkGF+FQylKrucquxUID+/SVIhIC8TDMTrsOVI6xqMo7qhvUL50EYzpa1OKl7HCB
         +gNmfi2CFQeVq/Mufc+vDZlnQhPPSSPGHoAMbaO5y/KYZHmgHr6f39+psgkOvLDEQsYo
         BHYw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719327593; x=1719932393;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RfqYYxPLuSCwW96Mc2hBqvO628PNFFvYqArRNxDYeiM=;
        b=aFQzSXoP3D7x9RKlDM46kJy6f+hvVEBlP2WnTY11XtEt/kMAvSB0zZYDaY1o+JN5Cq
         aKueBJ73fHejnqqJdMz9hq+IvqoJwgcZ5XjSGWz0rxnQm2whL5Mae2k/njRbI5eR7ukV
         3uOiAg8mW5rruZZ8l7Z7dJcGxwvYOtPJQQZnCHy5C8L8aSUWbd1S2V/n+ciL4l8zoWmY
         VOqN4XVdC5ABftgm/GZPBWK0ylK44C0mi2dfcHGAQaAaDDML0zVhD+KZmfZio1KI+kQj
         lbsBcvlT3idR8svMwfHN68DJW26NR1K5bMQ21xzXNBuRWhw+c7u0wpNx9lhngKqHjIVm
         q2Dg==
X-Forwarded-Encrypted: i=1; AJvYcCXBm/h9maUACESias4gsEeRKN39CdnitdQ7mLV4pc3XWZF1IAakdbqMcsWLMPv2ZIBaJha0O1X9KPyQwf6SyIq+jF78FJgB+1qeB3XWPE4=
X-Gm-Message-State: AOJu0YzFGfZ+ZgfgfovYtzAGQfvVJ0RPntpJWQJlr6OSkNMm+LdqTXgp
	fraGwJut855e8VLHBN1wa+qrOyOiEFcME9xZjFrmjOdJy992Hq7qzk5kCaOOCA==
X-Google-Smtp-Source: AGHT+IG9JgVsUAt6QKZbpn9WThMshBWGO8obO5P1jqfS05vX8HmOvvwRx0bIO/HrgCqM9f6I4LObBg==
X-Received: by 2002:a2e:7805:0:b0:2ec:5073:5814 with SMTP id 38308e7fff4ca-2ec5b2fd2f3mr63423751fa.8.1719327593006;
        Tue, 25 Jun 2024 07:59:53 -0700 (PDT)
Message-ID: <521767cb-ac08-48c5-bd91-b30c1d192331@suse.com>
Date: Tue, 25 Jun 2024 16:59:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/3] common/kernel: address violation of MISRA C Rule 13.6
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Cc: consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
 <54949b0561263b9f18da500255836d89ca8838ba.1719308599.git.alessandro.zucchelli@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <54949b0561263b9f18da500255836d89ca8838ba.1719308599.git.alessandro.zucchelli@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 12:14, Alessandro Zucchelli wrote:
> --- a/xen/common/kernel.c
> +++ b/xen/common/kernel.c
> @@ -660,14 +660,15 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>  
>      case XENVER_guest_handle:
>      {
> +        struct domain *d = current->domain;

Can a (new) variable thus initialized please be consistently named currd?

>          xen_domain_handle_t hdl;
>  
>          if ( deny )
>              memset(&hdl, 0, ARRAY_SIZE(hdl));
>  
> -        BUILD_BUG_ON(ARRAY_SIZE(current->domain->handle) != ARRAY_SIZE(hdl));
> +        BUILD_BUG_ON(ARRAY_SIZE(d->handle) != ARRAY_SIZE(hdl));

Wasn't there the intention to exclude BUILD_BUG_ON() for specifically this
(and any other similar) rule?

Jan

> -        if ( copy_to_guest(arg, deny ? hdl : current->domain->handle,
> +        if ( copy_to_guest(arg, deny ? hdl : d->handle,
>                             ARRAY_SIZE(hdl) ) )
>              return -EFAULT;
>          return 0;
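
[Editorial note: for readers unfamiliar with the rule being deviated here --
MISRA C:2012 Rule 13.6 forbids operands of sizeof that contain expressions
with potential side effects, because sizeof never evaluates its operand. The
sketch below uses hypothetical stand-ins for Xen's types and its `current`
macro to show why caching `current->domain` in a local, as the patch does,
silences the finding.]

```c
#include <stddef.h>

#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

/* Hypothetical stand-ins for Xen's structures, for illustration only */
struct domain { unsigned char handle[16]; };
struct vcpu { struct domain *domain; };

int current_reads;  /* counts how often 'current' is actually evaluated */
struct vcpu *get_current(void) { current_reads++; return NULL; }

/* Xen's 'current' expands to an access with a potential side effect: */
#define current (get_current())

/*
 * sizeof does not evaluate its operand, so get_current() is never called
 * here -- but the rule flags the *potential* side effect regardless.
 * Hoisting 'struct domain *d = current->domain;' out of the sizeof makes
 * the operand side-effect free.
 */
size_t handle_size(void)
{
    return ARRAY_SIZE(current->domain->handle);
}
```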



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 15:21:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 15:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747908.1155432 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7zQ-00081s-Oz; Tue, 25 Jun 2024 15:21:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747908.1155432; Tue, 25 Jun 2024 15:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM7zQ-00081l-Lj; Tue, 25 Jun 2024 15:21:48 +0000
Received: by outflank-mailman (input) for mailman id 747908;
 Tue, 25 Jun 2024 15:21:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sM7zP-00080n-1D
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 15:21:47 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9f910c34-3306-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 17:21:42 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 171932889757350.34669583752259;
 Tue, 25 Jun 2024 08:21:37 -0700 (PDT)
Received: by mail-yb1-f176.google.com with SMTP id
 3f1490d57ef6-e02c4983f3bso5730598276.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 08:21:37 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9f910c34-3306-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; t=1719328899; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=E9ZznQtQ8sMvbPQ4mBsdkDcfQMRZURukXWvvYVxF6MZQSMajoDutjLKGXOfn1xTFiEMpem43S7+dlSzampvkyXMzoT7Qd8sR7nGdU0HrIhi/ZIZaJYiHvGe3L/V2xjgpDYQFJvkY94PMHAKzCu+c982GGnMjf+XHp72B029f9Yk=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719328899; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=rOO9Vy49/Zqy206+4uypFYbo55wn+kwzAS1UCif3ofs=; 
	b=VW2Jd1cQiojy5YttjIcS9JmJWAjSDKmFCY7z8kATEz11K6zFkzO+S5YAb1u8akZckCk3Sx770wq4OMJfZu1d5mjSBJUzrrXm7bpUVePMxmRvu06N7Rys0bZJcr6KWa9VJH13FpzQJYOtbpesO1z8SIXhjh1sInXHMMdmCKAKiSE=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719328899;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=rOO9Vy49/Zqy206+4uypFYbo55wn+kwzAS1UCif3ofs=;
	b=go7AXYsl2hX7dI6GvJ2MV8uY9WtQNkBf7NcO/rTow1vWP6qJ6aq4d41QnYBxnFbv
	mmWjJCMBvMmWVz4FxreIHX7GMSyvz4z7ZozbKB7GySPqpz3yIBxBUlvtH9m0DW0zvoY
	cCup6yIgZtedfTfHCSMB5fC9766dj/aoGdaTqG+k=
X-Forwarded-Encrypted: i=1; AJvYcCUSNkjst3m9jMKJZ0WkgkFpO5bJuJmfLSbOZfokeE1Bnl8kpfN0TNRs7+rYuyM6YsFXxxIOSAH4TCrlgnQbUQVGZIcgf3cYW4E+AuqpnO4=
X-Gm-Message-State: AOJu0YxeWXmLbxkrzx1wkquijRxW+9/cE0GFW5gjxFOCPeyY3coaxhsF
	SGZQUAFd66mwE/TzYwYi93sH7ki3KggTBMqiHkhU+WoRGRjyCxKxMH/q9EvA7NVMinDE9JduchP
	hL5cNwB9TGLqhbAAQ+HmMPHFfa5c=
X-Google-Smtp-Source: AGHT+IHICN/lhATWZQX6cvYnkdlCk8Q47Bocceax1daMX/RhFzZjHqonSrCkICfkWaNW6rkxbIA/MRNe7bg95zTjuvE=
X-Received: by 2002:a25:6b0d:0:b0:e02:662d:e04a with SMTP id
 3f1490d57ef6-e0304014665mr8322839276.45.1719328896541; Tue, 25 Jun 2024
 08:21:36 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
 <80d0578d-26c0-4650-9edf-6926c055d415@suse.com> <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
 <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com> <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
 <243e34fc-57a2-464c-8a11-2cfee7e9cda3@suse.com> <CABfawh=6d+F1tYLmfC-NyMn80NROFf_0HL-WkKzu-r5vjfScaw@mail.gmail.com>
 <4bab3cff-93c0-497c-b0ad-8d2df26124d2@suse.com>
In-Reply-To: <4bab3cff-93c0-497c-b0ad-8d2df26124d2@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Tue, 25 Jun 2024 11:20:59 -0400
X-Gmail-Original-Message-ID: <CABfawhkHNb1X5tAET+yyZJsBE_jvHwn1afyGvBVXeQ_fw4vqgg@mail.gmail.com>
Message-ID: <CABfawhkHNb1X5tAET+yyZJsBE_jvHwn1afyGvBVXeQ_fw4vqgg@mail.gmail.com>
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

On Tue, Jun 25, 2024 at 10:56 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.06.2024 15:40, Tamas K Lengyel wrote:
> > On Tue, Jun 25, 2024 at 9:15 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 25.06.2024 14:40, Tamas K Lengyel wrote:
> >>> On Tue, Jun 25, 2024 at 7:52 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> On 25.06.2024 13:12, Tamas K Lengyel wrote:
> >>>>> On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>
> >>>>>> On 24.06.2024 23:23, Tamas K Lengyel wrote:
> >>>>>>> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>>>>>
> >>>>>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
> >>>>>>>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
> >>>>>>>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
> >>>>>>>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
> >>>>>>>>>
> >>>>>>>>> +libfuzzer-harness: $(OBJS) cpuid.o
> >>>>>>>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
> >>>>>>>>
> >>>>>>>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
> >>>>>>>> tree anywhere.
> >>>>>>>
> >>>>>>> It's used by oss-fuzz, otherwise it's not doing anything.
> >>>>>>>
> >>>>>>>>
> >>>>>>>> I'm further surprised you get away here without wrappers.o.
> >>>>>>>
> >>>>>>> Wrappers.o was actually breaking the build for oss-fuzz at the linking
> >>>>>>> stage. It works just fine without it.
> >>>>>>
> >>>>>> I'm worried here, to be honest. The wrappers serve a pretty important
> >>>>>> role, and I'm having a hard time seeing why they shouldn't be needed
> >>>>>> here when they're needed both for the test and afl harnesses. Could
> >>>>>> you add some more detail on the build issues you encountered?
> >>>>>
> >>>>> With wrappers.o included doing the build in the oss-fuzz docker
> >>>>> (ubuntu 20.04 base) fails with:
> >>>>>
> >>>>> ...
> >>>>> clang -O1 -fno-omit-frame-pointer -gline-tables-only
> >>>>> -Wno-error=enum-constexpr-conversion
> >>>>> -Wno-error=incompatible-function-pointer-types
> >>>>> -Wno-error=int-conversion -Wno-error=deprecated-declarations
> >>>>> -Wno-error=implicit-function-declaration -Wno-error=implicit-int
> >>>>> -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
> >>>>> -fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
> >>>>> -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
> >>>>> -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
> >>>>> -Og -fno-omit-frame-pointer
> >>>>> -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
> >>>>> -MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
> >>>>> -I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
> >>>>> -D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
> >>>>> -Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
> >>>>> -Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
> >>>>> -Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
> >>>>> -Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
> >>>>> x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
> >>>>> x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
> >>>>> /usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
> >>>>> /usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
> >>>>> in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
> >>>>> __locale_struct*, char const*, ...)':
> >>>>> cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
> >>>>> undefined reference to `__wrap_vsnprintf'
> >>>>> clang: error: linker command failed with exit code 1 (use -v to see invocation)
> >>>>> make: *** [Makefile:62: libfuzzer-harness] Error 1
> >>>>> rm x86-emulate.c wrappers.c cpuid.c
> >>>>> make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
> >>>>> ERROR:__main__:Building fuzzers failed.
> >>>>
> >>>> Hmm, yes, means we'll need an actual vsnprintf() wrapper, not just a
> >>>> declaration thereof.
> >>>
> >>> I don't really get what this wrapper accomplishes
> >>
> >> They guard against clobbering of in-register state (SIMD registers in
> >> particular, but going forward maybe also eGPR-s as introduced by APX)
> >> by library functions called between emulation of individual insns (or,
> >> especially possible for fuzzing instrumented code, I think) even from
> >> in the middle of emulating an insn. (Something as simple as the
> >> compiler inserting a call to memcpy() or memset() somewhere in the
> >> translation of the emulator source code could also clobber state.)
> >>
> >>> and as I said, fuzzing works with oss-fuzz just fine without it.
> >>
> >> I'm inclined to take this as "it appears to work just fine". Fuzzed
> >> input register state may be lost by doing a library call somewhere,
> >> rendering the fuzzing results less useful. This would pretty
> >> certainly stop being tolerable the moment you compared results of
> >> native execution of a sequence of instructions with the emulated
> >> counterpart.
> >
> > Yea, that may be. Any suggested way to fix the linking issue though?
>
> As said before, we need to gain a real wrapper for vsnprintf(). Right
> now we only have a declaration thereof, for use by the wrapper for
> snprintf().

Where is this wrapper for snprintf? I can't find anything in
fuzz/x86_instruction_emulator. If I can see an example for one that's
implemented already I may be able to decipher what's needed here. At
the moment I don't really know what "a real wrapper" would look like.

Tamas
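
[Editorial note: a sketch of the shape such a ld --wrap wrapper could take,
under stated assumptions -- the emul_save/emul_restore hooks and function
names below are hypothetical, not Xen's actual wrappers.c. With
`-Wl,--wrap=vsnprintf` the linker diverts every call to `__wrap_vsnprintf`
and exposes libc's implementation as `__real_vsnprintf`; so that this sketch
links without the flag, it forwards to vsnprintf directly.]

```c
#include <stdarg.h>
#include <stdio.h>

int emul_state_depth;  /* stand-in for the harness's save/restore hooks */

/*
 * Wrapper around vsnprintf: save the emulated SIMD/register state before
 * the library call and restore it afterwards, so that calls made from the
 * middle of emulating an instruction cannot clobber fuzzed state.
 * In a real --wrap build the forwarding call would be __real_vsnprintf.
 */
int wrap_vsnprintf(char *buf, size_t n, const char *fmt, va_list args)
{
    int rc;

    emul_state_depth++;              /* emul_save() would go here */
    rc = vsnprintf(buf, n, fmt, args);
    emul_state_depth--;              /* emul_restore() would go here */

    return rc;
}

/* The snprintf wrapper can then simply delegate to the vsnprintf one: */
int wrap_snprintf(char *buf, size_t n, const char *fmt, ...)
{
    va_list args;
    int rc;

    va_start(args, fmt);
    rc = wrap_vsnprintf(buf, n, fmt, args);
    va_end(args);

    return rc;
}
```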


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 15:23:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 15:23:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747916.1155441 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM81E-0000XO-75; Tue, 25 Jun 2024 15:23:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747916.1155441; Tue, 25 Jun 2024 15:23:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM81E-0000XH-4U; Tue, 25 Jun 2024 15:23:40 +0000
Received: by outflank-mailman (input) for mailman id 747916;
 Tue, 25 Jun 2024 15:23:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sQf/=N3=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sM81C-0000X8-Qa
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 15:23:38 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e4a5edbf-3306-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 17:23:37 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [37.161.92.237])
 by support.bugseng.com (Postfix) with ESMTPSA id 2F00C4EE0738;
 Tue, 25 Jun 2024 17:23:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4a5edbf-3306-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH for 4.19] automation/eclair: add deviations agreed in MISRA meetings
Date: Tue, 25 Jun 2024 17:23:29 +0200
Message-Id: <4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to take into account the deviations
agreed during the MISRA meetings.

While doing this, remove the obsolete "Set [123]" comments.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
 .../eclair_analysis/ECLAIR/deviations.ecl     | 93 +++++++++++++++++--
 docs/misra/deviations.rst                     | 68 +++++++++++++-
 2 files changed, 149 insertions(+), 12 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index ae2eaf50f7..e6517a9142 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -1,5 +1,3 @@
-### Set 1 ###
-
 #
 # Series 2.
 #
@@ -23,6 +21,11 @@ Constant expressions and unreachable branches of if and switch statements are ex
 -config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(any_exp(macro(name(ASSERT_UNREACHABLE||PARSE_ERR_RET||PARSE_ERR||FAIL_MSR||FAIL_CPUID)))))"}
 -doc_end
 
+-doc_begin="The asm-offset files are not linked deliberately, since they are used to generate definitions for asm modules."
+-file_tag+={asm_offsets, "^xen/arch/(arm|x86)/(arm32|arm64|x86_64)/asm-offsets\\.c$"}
+-config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(file(asm_offsets)))"}
+-doc_end
+
 -doc_begin="Pure declarations (i.e., declarations without initialization) are
 not executable, and therefore it is safe for them to be unreachable."
 -config=MC3R1.R2.1,ignored_stmts+={"any()", "pure_decl()"}
@@ -63,6 +66,12 @@ they are not instances of commented-out code."
 -config=MC3R1.D4.3,reports+={disapplied,"!(any_area(any_loc(file(^xen/arch/arm/arm64/.*$))))"}
 -doc_end
 
+-doc_begin="The inline asm in 'arm64/lib/bitops.c' is tightly coupled with the surrounding C code that acts as a wrapper, so it has been decided not to add an additional encapsulation layer."
+-file_tag+={arm64_bitops, "^xen/arch/arm/arm64/lib/bitops\\.c$"}
+-config=MC3R1.D4.3,reports+={deliberate, "all_area(any_loc(file(arm64_bitops)&&any_exp(macro(^(bit|test)op$))))"}
+-config=MC3R1.D4.3,reports+={deliberate, "any_area(any_loc(file(arm64_bitops))&&context(name(int_clear_mask16)))"}
+-doc_end
+
 -doc_begin="This header file is autogenerated or empty, therefore it poses no
 risk if included more than once."
 -file_tag+={empty_header, "^xen/arch/arm/efi/runtime\\.h$"}
@@ -213,10 +222,25 @@ Therefore the absence of prior declarations is safe."
 -config=MC3R1.R8.4,declarations+={safe, "loc(file(asm_defns))&&^current_stack_pointer$"}
 -doc_end
 
+-doc_begin="The function apei_(read|check|clear)_mce are dead code and are excluded from non-debug builds, therefore the absence of prior declarations is safe."
+-config=MC3R1.R8.4,declarations+={safe, "^apei_(read|check|clear)_mce\\(.*$"}
+-doc_end
+
 -doc_begin="asmlinkage is a marker to indicate that the function is only used to interface with asm modules."
 -config=MC3R1.R8.4,declarations+={safe,"loc(text(^(?s).*asmlinkage.*$, -1..0))"}
 -doc_end
 
+-doc_begin="Given that bsearch and sort are defined with the attribute 'gnu_inline', it's deliberate not to have a prior declaration.
+See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for a full explanation of gnu_inline."
+-file_tag+={bsearch_sort, "^xen/include/xen/(sort|lib)\\.h$"}
+-config=MC3R1.R8.4,reports+={deliberate, "any_area(any_loc(file(bsearch_sort))&&decl(name(bsearch||sort)))"}
+-doc_end
+
+-doc_begin="first_valid_mfn is defined in this way because the current lack of NUMA support in Arm and PPC requires it."
+-file_tag+={first_valid_mfn, "^xen/common/page_alloc\\.c$"}
+-config=MC3R1.R8.4,declarations+={deliberate,"loc(file(first_valid_mfn))"}
+-doc_end
+
 -doc_begin="The following variables are compiled in multiple translation units
 belonging to different executables and therefore are safe."
 -config=MC3R1.R8.6,declarations+={safe, "name(current_stack_pointer||bsearch||sort)"}
@@ -257,8 +281,6 @@ dimension is higher than omitting the dimension."
 -config=MC3R1.R9.5,reports+={deliberate, "any()"}
 -doc_end
 
-### Set 2 ###
-
 #
 # Series 10.
 #
@@ -299,7 +321,6 @@ integers arguments on two's complement architectures
 -config=MC3R1.R10.1,reports+={safe, "any_area(any_loc(any_exp(macro(^ISOLATE_LSB$))))"}
 -doc_end
 
-### Set 3 ###
 -doc_begin="XEN only supports architectures where signed integers are
represented using two's complement and all the XEN developers are aware of
 this."
@@ -323,6 +344,49 @@ constant expressions are required.\""
 # Series 11
 #
 
+-doc_begin="The conversion from a function pointer to unsigned long or (void *) does not lose any information, provided that the target type has enough bits to store it."
+-config=MC3R1.R11.1,casts+={safe,
+  "from(type(canonical(__function_pointer_types)))
+   &&to(type(canonical(builtin(unsigned long)||pointer(builtin(void)))))
+   &&relation(definitely_preserves_value)"
+}
+-doc_end
+
+-doc_begin="The conversion from a function pointer to a boolean has well-known semantics that do not lead to unexpected behaviour."
+-config=MC3R1.R11.1,casts+={safe,
+  "from(type(canonical(__function_pointer_types)))
+   &&kind(pointer_to_boolean)"
+}
+-doc_end
+
+-doc_begin="The conversion from a pointer to an incomplete type to unsigned long does not lose any information, provided that the target type has enough bits to store it."
+-config=MC3R1.R11.2,casts+={safe,
+  "from(type(any()))
+   &&to(type(canonical(builtin(unsigned long))))
+   &&relation(definitely_preserves_value)"
+}
+-doc_end
+
+-doc_begin="Conversions to object pointers that have a pointee type with a smaller (i.e., less strict) alignment requirement are safe."
+-config=MC3R1.R11.3,casts+={safe,
+  "!relation(more_aligned_pointee)"
+}
+-doc_end
+
+-doc_begin="Conversions from and to integral types are safe, on the assumption that the target type has enough bits to store the value.
+See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\""
+-config=MC3R1.R11.6,casts+={safe,
+    "(from(type(canonical(integral())))||to(type(canonical(integral()))))
+     &&relation(definitely_preserves_value)"}
+-doc_end
+
+-doc_begin="The conversion from a pointer to a boolean has well-known semantics that do not lead to unexpected behaviour."
+-config=MC3R1.R11.6,casts+={safe,
+  "from(type(canonical(__pointer_types)))
+   &&kind(pointer_to_boolean)"
+}
+-doc_end
+
 -doc_begin="Violations caused by container_of are due to pointer arithmetic operations
 with the provided offset. The resulting pointer is then immediately cast back to its
 original type, which preserves the qualifier. This use is deemed safe.
@@ -354,9 +418,18 @@ activity."
 -config=MC3R1.R14.2,reports+={disapplied,"any()"}
 -doc_end
 
--doc_begin="The XEN team relies on the fact that invariant conditions of 'if'
-statements are deliberate"
--config=MC3R1.R14.3,statements={deliberate , "wrapped(any(),node(if_stmt))" }
+-doc_begin="The XEN team relies on the fact that invariant conditions of 'if' statements and conditional operators are deliberate"
+-config=MC3R1.R14.3,statements+={deliberate , "wrapped(any(),node(if_stmt||conditional_operator||binary_conditional_operator))" }
+-doc_end
+
+-doc_begin="Switches having a 'sizeof' operator as the condition are deliberate and have limited scope."
+-config=MC3R1.R14.3,statements+={safe , "wrapped(any(),node(switch_stmt)&&child(cond, operator(sizeof)))" }
+-doc_end
+
+-doc_begin="The use of an invariant size argument in {put,get}_unsafe_size and array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is deliberate and is deemed safe."
+-file_tag+={x86_uaccess, "^xen/arch/x86(_64)?/include/asm/uaccess\\.h$"}
+-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^(put|get)_unsafe_size$))))"}
+-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^array_access_ok$))))"}
 -doc_end
 
 -doc_begin="A controlling expression of 'if' and iteration statements having integer, character or pointer type has a semantics that is well-known to all Xen developers."
@@ -527,8 +600,8 @@ falls under the jurisdiction of other MISRA rules."
 # General
 #
 
--doc_begin="do-while-0 is a well recognized loop idiom by the xen community."
--loop_idioms={do_stmt, "literal(0)"}
+-doc_begin="do-while-[01] is a well recognized loop idiom by the xen community."
+-loop_idioms={do_stmt, "literal(0)||literal(1)"}
 -doc_end
 -doc_begin="while-[01] is a well recognized loop idiom by the xen community."
 -loop_idioms+={while_stmt, "literal(0)||literal(1)"}
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 16fc345756..0b048dc225 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -63,6 +63,11 @@ Deviations related to MISRA C:2012 Rules:
        switch statement.
      - ECLAIR has been configured to ignore those statements.
 
+   * - R2.1
+     - The asm-offset files are deliberately not linked, since they are used
+       to generate definitions for asm modules.
+     - Tagged as `deliberate` for ECLAIR.
+
    * - R2.2
      - Proving compliance with respect to Rule 2.2 is generally impossible:
        see `<https://arxiv.org/abs/2212.13933>`_ for details. Moreover, peer
@@ -203,6 +208,26 @@ Deviations related to MISRA C:2012 Rules:
        it.
      - Tagged as `safe` for ECLAIR.
 
+   * - R8.4
+     - Some functions are excluded from non-debug builds, therefore the
+       absence of a prior declaration is safe.
+     - Tagged as `safe` for ECLAIR; such functions are:
+         - apei_read_mce()
+         - apei_check_mce()
+         - apei_clear_mce()
+
+   * - R8.4
+     - Given that bsearch and sort are defined with the attribute
+       'gnu_inline', the absence of a prior declaration is deliberate.
+       See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for
+       a full explanation of gnu_inline.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R8.4
+     - first_valid_mfn is defined in this way because the current lack of NUMA
+       support in Arm and PPC requires it.
+     - Tagged as `deliberate` for ECLAIR.
+
    * - R8.6
      - The following variables are compiled in multiple translation units
        belonging to different executables and therefore are safe.
@@ -282,6 +307,39 @@ Deviations related to MISRA C:2012 Rules:
        If no bits are set, 0 is returned.
      - Tagged as `safe` for ECLAIR.
 
+   * - R11.1
+     - The conversion from a function pointer to unsigned long or (void \*) does
+       not lose any information, provided that the target type has enough bits
+       to store it.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.1
+     - The conversion from a function pointer to a boolean has well-known
+       semantics that do not lead to unexpected behaviour.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.2
+     - The conversion from a pointer to an incomplete type to unsigned long
+       does not lose any information, provided that the target type has enough
+       bits to store it.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.3
+     - Conversions to object pointers that have a pointee type with a smaller
+       (i.e., less strict) alignment requirement are safe.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.6
+     - Conversions from and to integral types are safe, on the assumption that
+       the target type has enough bits to store the value.
+       See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\"
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.6
+     - The conversion from a pointer to a boolean has well-known semantics
+       that do not lead to unexpected behaviour.
+     - Tagged as `safe` for ECLAIR.
+
    * - R11.8
      - Violations caused by container_of are due to pointer arithmetic operations
        with the provided offset. The resulting pointer is then immediately cast back to its
@@ -308,8 +366,14 @@ Deviations related to MISRA C:2012 Rules:
 
    * - R14.3
      - The Xen team relies on the fact that invariant conditions of 'if'
-       statements are deliberate.
-     - Project-wide deviation; tagged as `disapplied` for ECLAIR.
+       statements and conditional operators are deliberate.
+     - Project-wide deviation; tagged as `deliberate` for ECLAIR.
+
+   * - R14.3
+     - The use of an invariant size argument in {put,get}_unsafe_size and
+       array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is
+       deliberate and is deemed safe.
+     - Project-wide deviation; tagged as `deliberate` for ECLAIR.
 
    * - R14.4
      - A controlling expression of 'if' and iteration statements having
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 15:52:40 2024
MIME-Version: 1.0
References: <27f4397093d92b53f89d625d682bd4b7145b65d8.1717426439.git.matthew.barnes@cloud.com>
 <ZnWee2hUmG42n/W7@l14> <667ae6e4.050a0220.21f03.212f@mx.google.com>
In-Reply-To: <667ae6e4.050a0220.21f03.212f@mx.google.com>
From: Matthew Barnes <matthew.barnes@cloud.com>
Date: Tue, 25 Jun 2024 16:51:58 +0100
Message-ID: <CAO_hw7wS3mROzA4Q1QYtNuz7iG25A_6F4UE-zrLiZaPQ17dNVw@mail.gmail.com>
Subject: [XEN PATCH] tools/misc: xen-hvmcrash: Inject #DF instead of
 overwriting RIP
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, Jun 21, 2024 at 03:38:36PM +0000, Anthony PERARD wrote:
> On Mon, Jun 03, 2024 at 03:59:18PM +0100, Matthew Barnes wrote:
> > diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
> > index 1d058fa40a47..8ef1beb388f8 100644
> > --- a/tools/misc/xen-hvmcrash.c
> > +++ b/tools/misc/xen-hvmcrash.c
> > @@ -38,22 +38,21 @@
> >  #include <sys/stat.h>
> >  #include <arpa/inet.h>
> >
> > +#define XC_WANT_COMPAT_DEVICEMODEL_API
> >  #include <xenctrl.h>
> >  #include <xen/xen.h>
> >  #include <xen/domctl.h>
> >  #include <xen/hvm/save.h>
>
> There are lots of headers that aren't used by the new code and can be
> removed. (There were probably way too many headers included when this
> utility was introduced.)

I will remove the unnecessary headers in patch v2.

> > +    for (vcpu_id = 0; vcpu_id <= dominfo.max_vcpu_id; vcpu_id++) {
> > +        printf("Injecting #DF to vcpu ID #%d...\n", vcpu_id);
> > +        ret = xc_hvm_inject_trap(xch, domid, vcpu_id,
> > +                                X86_ABORT_DF,
>
> In the definition of xendevicemodel_inject_event(), the comment says to
> look at "enum x86_event_type" for possible event "type", but there's no
> "#DF" type, can we add this new one there before using it? (It's going
> to be something like X86_EVENTTYPE_*)

To my understanding, the event types enum refers to the kind of interrupt
being handled. In this case, it is a hardware exception, which already
exists as an entry in the enum definition.

The `vector` parameter describes the kind of exception. I just found
that exception vector macros are defined in `x86-defns.h`, so I will
include that and instead use `X86_EXC_DF` in patch v2.

The only other usage of `xc_hvm_inject_trap` is in `xen-access.c`, which
defines all the required vectors as macros at the top of the source file.
Hence, I did the same in `xen-hvmcrash.c` for patch v1.

Would it be a good idea to rewrite `xen-access.c` to use `x86-defns.h`
as well, in a later patch?

> > +                                XEN_DMOP_EVENT_hw_exc, 0,
> > +                                NULL, NULL);
>
> The new code doesn't build; those "NULL"s aren't integers.
>
> > +        if (ret < 0) {
> > +            fprintf(stderr, "Could not inject #DF to vcpu ID #%d\n", vcpu_id);
> > +            perror("xc_hvm_inject_trap");
>
> Are you meant to print two error lines when there's an error? You can
> also use strerror() to convert an "errno" to a string.

I will use strerror and one print call in patch v2.

> Are you meant to keep injecting into other vcpus even if one has failed?

Yes; xen-hvmcrash doesn't have to inject into *all* vcpus to cause the
guest to crash.

> Should `xen-hvmcrash` return success when it failed to inject the double
> fault to all vcpus?

I will make xen-hvmcrash yield an error if no vcpus could be injected in
patch v2.

Matt



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 15:59:20 2024
Message-ID: <5729c8863e42c839db6a4244c80b3c9dd1574356.camel@gmail.com>
Subject: Re: [PATCH v13 03/10] xen/riscv: add minimal stuff to mm.h to build
 full Xen
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>,  Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Tue, 25 Jun 2024 17:59:10 +0200
In-Reply-To: <3cb055bc-f61d-4045-8529-5a15fd5a7e00@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <3d44cf567f5c361cce2713808bcea1b1b6f4f032.1719319093.git.oleksii.kurochko@gmail.com>
	 <3cb055bc-f61d-4045-8529-5a15fd5a7e00@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 16:22 +0200, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > Acked-by: Jan Beulich <jbeulich@suse.com>
> > ---
> > Changes in V13:
> >  - redefine mfn_to_page() and mfn_to_page().
>
> DYM page_to_mfn() here as one of the two?
Yes, one of them is page_to_mfn(). I will update the comment in next
version.

>
> > +/* Convert between machine frame numbers and page-info structures.
> > */
> > +#define mfn_to_page(mfn)    (frame_table + mfn_x(mfn))
> > +#define page_to_mfn(pg)     _mfn((unsigned long)((pg) - frame_table))
>
> Is the cast really needed here?
I checked, and it can indeed be dropped.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 16:13:14 2024
Message-ID: <5df70e32-9335-4cae-be5a-818a7ebf4537@suse.com>
Date: Tue, 25 Jun 2024 18:12:57 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Anthony PERARD <anthony@xenproject.org>, xen-devel@lists.xenproject.org
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <45c69745-b060-4697-9f6e-b3d2a8860946@suse.com>
 <CABfawhkyDVw-=nR2d6KiXGYYv=coDgHUr1oXC+BmUxH_ita+iQ@mail.gmail.com>
 <80d0578d-26c0-4650-9edf-6926c055d415@suse.com>
 <CABfawhk3RyR-ACq-mBk=F1-SCKJPiiS_yhU1=A_jR8Js3=fQyA@mail.gmail.com>
 <8d32db90-8bd0-4a8f-82d9-938e36d3f181@suse.com>
 <CABfawhnYFS97U1F4CuacbNWzLVoKXFxTSpG-Ddb-VL7di=7XDw@mail.gmail.com>
 <243e34fc-57a2-464c-8a11-2cfee7e9cda3@suse.com>
 <CABfawh=6d+F1tYLmfC-NyMn80NROFf_0HL-WkKzu-r5vjfScaw@mail.gmail.com>
 <4bab3cff-93c0-497c-b0ad-8d2df26124d2@suse.com>
 <CABfawhkHNb1X5tAET+yyZJsBE_jvHwn1afyGvBVXeQ_fw4vqgg@mail.gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <CABfawhkHNb1X5tAET+yyZJsBE_jvHwn1afyGvBVXeQ_fw4vqgg@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 17:20, Tamas K Lengyel wrote:
> On Tue, Jun 25, 2024 at 10:56 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 25.06.2024 15:40, Tamas K Lengyel wrote:
>>> On Tue, Jun 25, 2024 at 9:15 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 25.06.2024 14:40, Tamas K Lengyel wrote:
>>>>> On Tue, Jun 25, 2024 at 7:52 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>
>>>>>> On 25.06.2024 13:12, Tamas K Lengyel wrote:
>>>>>>> On Tue, Jun 25, 2024 at 2:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>
>>>>>>>> On 24.06.2024 23:23, Tamas K Lengyel wrote:
>>>>>>>>> On Mon, Jun 24, 2024 at 11:55 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>>>>>>>
>>>>>>>>>> On 21.06.2024 21:14, Tamas K Lengyel wrote:
>>>>>>>>>>> @@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
>>>>>>>>>>>  afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
>>>>>>>>>>>       $(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
>>>>>>>>>>>
>>>>>>>>>>> +libfuzzer-harness: $(OBJS) cpuid.o
>>>>>>>>>>> +     $(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $^ -o $@
>>>>>>>>>>
>>>>>>>>>> What is LIB_FUZZING_ENGINE? I don't think we have any use of that in the
>>>>>>>>>> tree anywhere.
>>>>>>>>>
>>>>>>>>> It's used by oss-fuzz; otherwise it doesn't do anything.
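(Editor's aside, not part of the thread: a hedged sketch of the assumed oss-fuzz convention. The oss-fuzz build container exports LIB_FUZZING_ENGINE for the harness link step; letting the variable default to empty keeps the same link rule usable outside that container. Variable names other than LIB_FUZZING_ENGINE are illustrative.)

```shell
# Assumed oss-fuzz convention (hypothetical sketch, not code from this
# thread): the build container exports LIB_FUZZING_ENGINE; default it to
# empty so the link command also works in a plain local build.
: "${LIB_FUZZING_ENGINE:=}"
CC="${CC:-cc}"
echo "$CC \$(CFLAGS) $LIB_FUZZING_ENGINE -fsanitize=fuzzer \$^ -o libfuzzer-harness"
```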
>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> I'm further surprised you get away here without wrappers.o.
>>>>>>>>>
>>>>>>>>> Wrappers.o was actually breaking the build for oss-fuzz at the linking
>>>>>>>>> stage. It works just fine without it.
>>>>>>>>
>>>>>>>> I'm worried here, to be honest. The wrappers serve a pretty important
>>>>>>>> role, and I'm having a hard time seeing why they shouldn't be needed
>>>>>>>> here when they're needed both for the test and afl harnesses. Could
>>>>>>>> you add some more detail on the build issues you encountered?
>>>>>>>
>>>>>>> With wrappers.o included doing the build in the oss-fuzz docker
>>>>>>> (ubuntu 20.04 base) fails with:
>>>>>>>
>>>>>>> ...
>>>>>>> clang -O1 -fno-omit-frame-pointer -gline-tables-only
>>>>>>> -Wno-error=enum-constexpr-conversion
>>>>>>> -Wno-error=incompatible-function-pointer-types
>>>>>>> -Wno-error=int-conversion -Wno-error=deprecated-declarations
>>>>>>> -Wno-error=implicit-function-declaration -Wno-error=implicit-int
>>>>>>> -DFUZZING_BUILD_MODE_UNSAFE_FOR_PRODUCTION -fsanitize=address
>>>>>>> -fsanitize-address-use-after-scope -fsanitize=fuzzer-no-link -m64
>>>>>>> -DBUILD_ID -fno-strict-aliasing -std=gnu99 -Wall -Wstrict-prototypes
>>>>>>> -Wno-unused-but-set-variable -Wno-unused-local-typedefs   -g3 -Werror
>>>>>>> -Og -fno-omit-frame-pointer
>>>>>>> -D__XEN_INTERFACE_VERSION__=__XEN_LATEST_INTERFACE_VERSION__ -MMD -MP
>>>>>>> -MF .libfuzzer-harness.d -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE
>>>>>>> -I/src/xen/tools/fuzz/x86_instruction_emulator/../../../tools/include
>>>>>>> -D__XEN_TOOLS__ -iquote . -fsanitize=fuzzer -fsanitize=fuzzer
>>>>>>> -Wl,--wrap=fwrite -Wl,--wrap=memcmp -Wl,--wrap=memcpy
>>>>>>> -Wl,--wrap=memset -Wl,--wrap=printf -Wl,--wrap=putchar -Wl,--wrap=puts
>>>>>>> -Wl,--wrap=snprintf -Wl,--wrap=strstr -Wl,--wrap=vprintf
>>>>>>> -Wl,--wrap=vsnprintf fuzz-emul.o x86-emulate.o x86_emulate/0f01.o
>>>>>>> x86_emulate/0fae.o x86_emulate/0fc7.o x86_emulate/decode.o
>>>>>>> x86_emulate/fpu.o cpuid.o wrappers.o -o libfuzzer-harness
>>>>>>> /usr/bin/ld: /usr/bin/ld: DWARF error: invalid or unhandled FORM value: 0x25
>>>>>>> /usr/local/lib/clang/18/lib/x86_64-unknown-linux-gnu/libclang_rt.fuzzer.a(fuzzer.o):
>>>>>>> in function `std::__Fuzzer::__libcpp_snprintf_l(char*, unsigned long,
>>>>>>> __locale_struct*, char const*, ...)':
>>>>>>> cxa_noexception.cpp:(.text._ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz[_ZNSt8__Fuzzer19__libcpp_snprintf_lEPcmP15__locale_structPKcz]+0x9a):
>>>>>>> undefined reference to `__wrap_vsnprintf'
>>>>>>> clang: error: linker command failed with exit code 1 (use -v to see invocation)
>>>>>>> make: *** [Makefile:62: libfuzzer-harness] Error 1
>>>>>>> rm x86-emulate.c wrappers.c cpuid.c
>>>>>>> make: Leaving directory '/src/xen/tools/fuzz/x86_instruction_emulator'
>>>>>>> ERROR:__main__:Building fuzzers failed.
>>>>>>
>>>>>> Hmm, yes, means we'll need an actual vsnprintf() wrapper, not just a
>>>>>> declaration thereof.
>>>>>
>>>>> I don't really get what this wrapper accomplishes
>>>>
>>>> They guard against clobbering of in-register state (SIMD registers in
>>>> particular, but going forward maybe also eGPRs as introduced by APX)
>>>> by library functions called between the emulation of individual insns
>>>> or, especially likely for fuzzing-instrumented code, even from the
>>>> middle of emulating an insn. (Something as simple as the compiler
>>>> inserting a call to memcpy() or memset() somewhere in the translation
>>>> of the emulator source code could also clobber state.)
>>>>
>>>>> and as I said, fuzzing works with oss-fuzz just fine without it.
>>>>
>>>> I'm inclined to take this as "it appears to work just fine". Fuzzed
>>>> input register state may be lost by doing a library call somewhere,
>>>> rendering the fuzzing results less useful. This would pretty
>>>> certainly stop being tolerable the moment you compared results of
>>>> native execution of a sequence of instructions with the emulated
>>>> counterpart.
>>>
>>> Yea, that may be. Any suggested way to fix the linking issue though?
>>
>> As said before, we need to gain a real wrapper for vsnprintf(). Right
>> now we only have a declaration thereof, for use by the wrapper for
>> snprintf().
> 
> Where is this wrapper for snprintf? I can't find anything in
> fuzz/x86_instruction_emulator.

The fuzzing harness reuses files from the test harness, i.e.
tools/tests/x86_emulator/wrappers.c in this case, much like cpuid.c,
which also comes from elsewhere.

Jan


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 16:19:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 16:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747961.1155489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM8su-0006Yk-3O; Tue, 25 Jun 2024 16:19:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747961.1155489; Tue, 25 Jun 2024 16:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM8st-0006Yd-U9; Tue, 25 Jun 2024 16:19:07 +0000
Received: by outflank-mailman (input) for mailman id 747961;
 Tue, 25 Jun 2024 16:19:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM8ss-0006YX-Bg
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 16:19:06 +0000
Received: from mail-ej1-x629.google.com (mail-ej1-x629.google.com
 [2a00:1450:4864:20::629])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a3d5fed0-330e-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 18:19:04 +0200 (CEST)
Received: by mail-ej1-x629.google.com with SMTP id
 a640c23a62f3a-a6fd513f18bso473007166b.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 09:19:04 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf5490d9sm523337766b.115.2024.06.25.09.19.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 09:19:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3d5fed0-330e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719332344; x=1719937144; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=/gyvLVsUvwdFZoRqoPZx464kZFivE4+/zIbIpiWr4OQ=;
        b=MRV/GMX9PaaH0TxbC+g1e20s+7GhqQUT/fqMR7DOXXNNlojNzCwbH1XoxEwMDM4U2f
         CoatEZA+ngJmSq/F5WXV1Ho+Orv8gnVIRoSMNNET5tzBYpT4KsKNNqGhX3EI1D3SqmaJ
         8J/oVGHONS0EasxYEkviAN0taWNgHf7igIfqaKugAAw4TIlYoCPRxOKxA/sJcV2G2I92
         MfoLJdbwcwRmKYzfmbdzkK9p+usTmUxNwGnZ4/Tu4Sy3IvNmSdAgnLkmChsTcuynMJgo
         s9SyRDZ86hV2JVRql+FUK09ydfTsGJDvRiaaMRXRHfRJISk27euwc83rWsuEEF6mdnuj
         E7Zg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719332344; x=1719937144;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=/gyvLVsUvwdFZoRqoPZx464kZFivE4+/zIbIpiWr4OQ=;
        b=KkPq94d5RLvFhHTMxVaAzzQydJEWhA90wHcqrUKCIeRXS5qtmzom7xB+rNxXsv04f7
         FqGzUh/Z07CsQFf1fR93gNM9b2Ok4P4EPpw3TGd6s8+TVPVzDlWgHVwNhHoENWB5lPk0
         YeK5iEb7BMYwb7knCxJH+Vz18EUxeNAPYtHGKaAB97yK2JecgNzRd1xK6Wm3p5e4Xg0M
         2nUEeHwN1oWe2/5lp90wkvirlmYikRoB00eBWryoOGUW29OdIdr+Pi+IcOztzQGPqqUD
         sbSkhzqTi72Gnnusbf5pKmejiKfvYpe8bcd+E0gpuW7ERwvHX4NqJIbfANKQW6cs5H/B
         0ldQ==
X-Forwarded-Encrypted: i=1; AJvYcCXRgJmpMAyzQ84TrVDFkUjih/te1wiz3bzf2PBaWMGTz3mLXllyVoafAewliVklej5ycr1iHgUyqFj7pIG4Raz73CHcicDEpuELmRnEekg=
X-Gm-Message-State: AOJu0YytDkux1Ryv+kpoUxFbrhNin5XPopdww0itdkhgr8SXoic60M8h
	PYfo3dEfvC46TqB2rFUCOTs1IWqFciUqLkF6HC0v8vs04Guh4bWL
X-Google-Smtp-Source: AGHT+IHL1WudS5oFr66DSRGHjgzi9fR9ipckG/dR0Ybh60xUo7VUbDxX7uGZ0spt7lqQwQST0Sl2XQ==
X-Received: by 2002:a17:906:714c:b0:a6f:13fe:75d1 with SMTP id a640c23a62f3a-a7245c648a2mr505025066b.64.1719332343545;
        Tue, 25 Jun 2024 09:19:03 -0700 (PDT)
Message-ID: <eceae0bb1a55370d5c6a5baf52b28fb6187f7cff.camel@gmail.com>
Subject: Re: [PATCH v13 05/10] xen/riscv: enable full Xen build
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>,  Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Tue, 25 Jun 2024 18:19:02 +0200
In-Reply-To: <10d1f56c-6b27-47ab-bd5e-208a0938c3eb@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <b47a278c89eb436a7b88dc5c0b18a6be09c76472.1719319093.git.oleksii.kurochko@gmail.com>
	 <10d1f56c-6b27-47ab-bd5e-208a0938c3eb@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 16:24 +0200, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > ---
> >  Changes in V13:
> >   - implement get_upper_mfn_bound() as BUG_ON("unimplemented")
>
> Odd, patch 4 also says this and also does so.
It should be mentioned only in patch 4. I will drop this line in the
next version if this patch hasn't been merged by then.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 16:23:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 16:23:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747966.1155499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM8wq-000064-Ha; Tue, 25 Jun 2024 16:23:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747966.1155499; Tue, 25 Jun 2024 16:23:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM8wq-00005w-DE; Tue, 25 Jun 2024 16:23:12 +0000
Received: by outflank-mailman (input) for mailman id 747966;
 Tue, 25 Jun 2024 16:23:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM8wo-00005q-LX
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 16:23:10 +0000
Received: from mail-ed1-x535.google.com (mail-ed1-x535.google.com
 [2a00:1450:4864:20::535])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 34bbe37e-330f-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 18:23:07 +0200 (CEST)
Received: by mail-ed1-x535.google.com with SMTP id
 4fb4d7f45d1cf-57d1679ee6eso9846501a12.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 09:23:07 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d3006d36bsm6126479a12.0.2024.06.25.09.23.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 09:23:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34bbe37e-330f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719332587; x=1719937387; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=hMqCIJfKMUq9PKoff/Nt6wmO4mUo/wjOqMlKbEmwtjE=;
        b=gECnxRqqAC+B7pQE9kNgrmiF8UYVhix6jLta0AWl5MUVOyjDEuFa2vHEOSDbjxs1Pm
         bxEcQXgXeSYXGMqEsfQq58RYUUt5vkatwPTVWz3jdG3ZScYXU/3Psru1Z2wvS7d2cMpW
         D6M++1T1DJPTIa5Cs3N2fFeG5s6GhUwDiMadq25oOhdbCUskWy8PuaP1L7gJCgQ8y9Mu
         +1ig2yF2+gH1XLEEQUxW7CIrQ4mMaR2p/2m+Akuov0LjYwjkiyubURU5Al5Vv3pU2LKt
         XG2Sdqs6PchQQG5jRMLuis07N8iX+nEz0ro3+eqXCNslEc6U9CZ8FdAbp8e+3lTmAL1K
         3r4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719332587; x=1719937387;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=hMqCIJfKMUq9PKoff/Nt6wmO4mUo/wjOqMlKbEmwtjE=;
        b=CtP/P29B3ZlRGZpRaPyZlphjeRLpyODDRQXh7kyvRMeXI2nQzvBXrApxX9x9QVTyC1
         ujkmc5KpX/5aJbLsKpmgcXeRAUt+ydSZgVmZwx9BVpEX/GwVAeQ1oV5MGrqHuffF7+G2
         YnonERpMl5ztRGYyUkgY9a1dD5KemoZsMCw65AV5BVzXN6T/fcpG++KwhQzMWO5sjmOR
         EBFN9EueUDCCRAhwVLiYTRr2+uc1OPqWhBkj7LEAXI9b+5Kud1rBQ/i3HYAffgQl05o1
         CvaGE+GOPvrSsbCMkcXxSjfPF8TaOdba6ktziq1cJYjRGDUjQkDW60ShVaZvtDBL9x6L
         Aosw==
X-Forwarded-Encrypted: i=1; AJvYcCU/0mfKmOlQv6RCo8HC/V2jobkJ7b6W77GWh+D2La0XsNJQxVI7lrp2W0peROOLUeohbGYPGZeqAu4peQajMyIZEigYwub5NatrrL/yKXY=
X-Gm-Message-State: AOJu0YwC4pmvl0ZSxQKnFJsiYvLtbcRzogokOtLFKBeLJHmLNbwwWKdy
	y1ozbFPqp2yD5v2Q2QrlVPkbrGQxENYq5XZBi0uvFBadDZ8sTAISrGkB2lGA
X-Google-Smtp-Source: AGHT+IHiT0gOZZWWLswYD4JqPq7CYXAQTF+yGDIF4NafhSWkwOgUjKaqIpKoHct4Nfw1YrBcH/u/eQ==
X-Received: by 2002:a50:d65e:0:b0:57c:a4a3:f1fe with SMTP id 4fb4d7f45d1cf-57d7022c35bmr2618737a12.17.1719332586723;
        Tue, 25 Jun 2024 09:23:06 -0700 (PDT)
Message-ID: <c52181a7aca8b56716d7ee354ebda9d32e67816c.camel@gmail.com>
Subject: Re: [PATCH v13 07/10] xen/common: fix build issue for common/trace.c
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Date: Tue, 25 Jun 2024 18:23:05 +0200
In-Reply-To: <4a4e37a9-eac7-4e72-8845-6b4bbd7bafe6@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <f14f2c5629a75856f4bafdbff3cc165c373f8dc2.1719319093.git.oleksii.kurochko@gmail.com>
	 <4a4e37a9-eac7-4e72-8845-6b4bbd7bafe6@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 16:25 +0200, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > During a GitLab CI randconfig job, the build for RISC-V failed with an error:
> >  common/trace.c:57:22: error: expected '=', ',', ';', 'asm' or
> >                               '__attribute__' before '__read_mostly'
> >    57 | static u32 data_size __read_mostly;
> >=20
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>=20
> Acked-by: Jan Beulich <jbeulich@suse.com>
>=20
> If you give a release-ack, this can go in right away, I think.
Release-Acked-by: Oleksii Kurochko  <oleksii.kurochko@gmail.com>

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 16:43:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 16:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747974.1155507 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM9GA-0004dy-1B; Tue, 25 Jun 2024 16:43:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747974.1155507; Tue, 25 Jun 2024 16:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sM9G9-0004dr-Uh; Tue, 25 Jun 2024 16:43:09 +0000
Received: by outflank-mailman (input) for mailman id 747974;
 Tue, 25 Jun 2024 16:43:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sM9G9-0004dl-0s
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 16:43:09 +0000
Received: from mail-ej1-x634.google.com (mail-ej1-x634.google.com
 [2a00:1450:4864:20::634])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 000019af-3312-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 18:43:07 +0200 (CEST)
Received: by mail-ej1-x634.google.com with SMTP id
 a640c23a62f3a-a72510ebc3fso371251566b.2
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 09:43:07 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fe2a22b6fsm412408266b.85.2024.06.25.09.43.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 09:43:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 000019af-3312-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719333787; x=1719938587; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=ZgxFPjsOmmwsu7KF6UDwNopZBtQ3RAQ3rZ9jx57Phkg=;
        b=OYARCePQc0jqPKtFh+KPODamSFfadXGGnvpYO3WoIBUhLqLzjKunwRRall3S3uaVWM
         jVpWT/Bkz2QDQ7PtBN615sLlh0FfojMKcnbJMod8cNARhyAWXzILk3haSYnDXKkMcrBQ
         2IomcV/aqHFE9zZ/w+MnaXgPE9DswyhXRSQkAz1tL+3f+qHFCYAAak+V16xfvcsimAT2
         Vtw6z2K3uyER4zZfT5UGttj0FbDnUX4ksV8072WwSxfzxzzJ5zndXLRDG8ADzFpjbwDM
         shX5ATHK88MNcW4cD590pLcjdsDBqkGV4hU/x7L1fpIg0GvJiiphLvXJwfbmH+2Omhso
         SdRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719333787; x=1719938587;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=ZgxFPjsOmmwsu7KF6UDwNopZBtQ3RAQ3rZ9jx57Phkg=;
        b=tbY1sy4OgvBL1GqKIU7DCtj1bg+TAaHIxhlcwKd5dfoNJH5r4qWzzL91l1fsegpm3d
         8zsQ9VYF4BAErcM7IJPTTz42RydCrIQb+YylG6BIcw0EIrulphgSRGsaKWqopAF3Bsm6
         oDNMuivS8+NM36BE1armp1QiUtLv7yiY5JrrIlYrTJi49AWdbQ6PqCPXqrniTY+QqzWB
         KC59K2OO2ggqnFbH/j3r4wTc4vTXGaIXJnVQMoKllqgO2xs4Zdiqtrg0IlrlqEUjwzv8
         YImZhws25xyvx2kLWmI4Vo8XX98CM+qrQmWqGC9WnT1KY/YjkHrB+9B0BEy49aZcemBm
         0nng==
X-Forwarded-Encrypted: i=1; AJvYcCXOrHwf+pGoe2GTXip5xZXAJtzIQQk4tekXZEOVMRbxVrBACwhqjCOR2tKdRsz31p33N+qDM4XQtoZYJFjLQ9+mPg0AaAa9dSKCbLjTtv8=
X-Gm-Message-State: AOJu0YyJ03ibkyiXCgL9GSpJ3C69YBagYnM9Y7BccPPmgLDEoQQuqK+T
	3r4UIu8ilFZgjxGWwUqbMShfJQXMpOlvRRBQP8Zn1RxjnuUTlrmF
X-Google-Smtp-Source: AGHT+IFI4SAOhEvVm8f2Cf8Z8zqITguJXH3FQIp+Kw1w/+JZft2r9sR5sdQCUlqFPVvldgmFhHSvFQ==
X-Received: by 2002:a17:906:7f05:b0:a6e:4693:1f6e with SMTP id a640c23a62f3a-a7242c39be2mr599489066b.29.1719333786895;
        Tue, 25 Jun 2024 09:43:06 -0700 (PDT)
Message-ID: <8dc1f4caf9ceec6228c104acaded6a70c828526b.camel@gmail.com>
Subject: Re: [PATCH v13 10/10] xen/x86: drop constanst_test_bit() in
 asm/bitops.h
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
 =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Date: Tue, 25 Jun 2024 18:43:05 +0200
In-Reply-To: <301ed888-af55-4445-9944-a1488791e120@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <edd341a6e86ceac2717c59680d4e5e7fc3321b5d.1719319093.git.oleksii.kurochko@gmail.com>
	 <301ed888-af55-4445-9944-a1488791e120@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 16:51 +0200, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > constant_test_bit() is functionally the same as generic_test_bit(),
> > so constant_test_bit() can be dropped and replaced with
> > generic_test_bit().
> >=20
> > Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>=20
> Acked-by: Jan Beulich <jbeulich@suse.com>
> (Didn't I ask for this to be done, so perhaps also Requested-by or
> alike?)
Yes, it was you, sorry.
For sure, we should have added Requested-by. (I cherry-picked this from
another branch of mine where I hadn't added Requested-by, and didn't
double-check before sending it to xen-devel.)

Could you please add Requested-by during the merge? Thanks in advance!

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 17:41:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 17:41:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747986.1155524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAAJ-000093-51; Tue, 25 Jun 2024 17:41:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747986.1155524; Tue, 25 Jun 2024 17:41:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAAJ-00008w-2T; Tue, 25 Jun 2024 17:41:11 +0000
Received: by outflank-mailman (input) for mailman id 747986;
 Tue, 25 Jun 2024 17:41:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMAAH-00008j-Fm; Tue, 25 Jun 2024 17:41:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMAAH-0001ka-35; Tue, 25 Jun 2024 17:41:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMAAG-0004p7-L7; Tue, 25 Jun 2024 17:41:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMAAG-0006Qj-KY; Tue, 25 Jun 2024 17:41:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zKtDs5Cqa4GkaQmfwfrpl+IOtMxTPM5eMBpXaFNgRnU=; b=HvL0kAFe//T7xjmKQeTGOfUqq2
	WLsNM9NK/CXcuHOpdQVyfI4SxY349Jccuri7MQbDB7mcUM6tirNgqUtHNRyj21H9VuDjLf2JqwEzc
	jH0GGC19pKa862nK2XKnWu7PBVdVmx7tEpzpT24QhieFZfG62Wycc/ZM2h95+hYk0SJY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186490-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186490: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=10b4bb8d6d0c515ed9663691aea3684be8f7b0fc
X-Osstest-Versions-That:
    ovmf=be38c01da2dd949e0a6f8bceeb88d2e19c8c65f7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 17:41:08 +0000

flight 186490 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186490/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 10b4bb8d6d0c515ed9663691aea3684be8f7b0fc
baseline version:
 ovmf                 be38c01da2dd949e0a6f8bceeb88d2e19c8c65f7

Last test of basis   186443  2024-06-21 07:11:06 Z    4 days
Testing same since   186490  2024-06-25 15:41:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Tobin Feldman-Fitzthum <tobin@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   be38c01da2..10b4bb8d6d  10b4bb8d6d0c515ed9663691aea3684be8f7b0fc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 18:09:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 18:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.747998.1155536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAbt-0005Ap-FX; Tue, 25 Jun 2024 18:09:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 747998.1155536; Tue, 25 Jun 2024 18:09:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAbt-0005Ai-Cb; Tue, 25 Jun 2024 18:09:41 +0000
Received: by outflank-mailman (input) for mailman id 747998;
 Tue, 25 Jun 2024 18:09:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMAbs-0005Ac-AC
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 18:09:40 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15bdf799-331e-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 20:09:38 +0200 (CEST)
Received: by mail-ed1-x530.google.com with SMTP id
 4fb4d7f45d1cf-57d07f07a27so6696245a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 11:09:38 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d3040f003sm6168969a12.29.2024.06.25.11.09.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 11:09:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15bdf799-331e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719338977; x=1719943777; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=EdfsJBqq8rFcxW6phT73vAfGcA3f4b9eV+5Tw0DeNj4=;
        b=c3sL0HrMrFdIaUKx0W0Seju8Px4ubx6dkXm7tZ6lUdfHHb8lwtnp/H7gVYufojXltB
         i34lPqnnGtP7TJboOCw1HNR6X3t7h0NFa6L5RxIkEsDgJ09FEJCpgFHL1jol6OiKzVHK
         01gqmBD7acbfS4OaiIGCXex41LHVVLaQemOuGpLt6Gz8fYwgwqRrJhGuK3HHABlLvYh/
         K9TAqoLLwmT2LX/bjE7uN/zDSCNHByWGrQnYmOpgH0wPlyTMHXjWfuJtEkISMD+5rkg0
         bsyl5Pxc6aRBdnH7AnMxrjlEuN3JFoMNZRZPXXUmC2HcB6DHTJFO1lsbLXoIZII59S7n
         jHoQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719338977; x=1719943777;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=EdfsJBqq8rFcxW6phT73vAfGcA3f4b9eV+5Tw0DeNj4=;
        b=M5+0xByOrXOILiKkZCREDPbG9X/+U9YgK6cIdaJC/E23eqnDIWUUaIcXVwTNwsHPOr
         ixE8vV6OEsvUWen4mTCgfbkOsF/uYJIgpx03lzVGVVNIdXhZor5BfsuLy5zLIAFU7A6n
         3EgIvsVay9xd+DmmNh4u3+PmVvH2RL6ZDgyHkpySF+3JVeFCMJJ+wEoycO1e4KPgyFUN
         u+lZAi7V9NZYCTDjZ0PvG3O1X95omEkjexPUSGTwrIaSNP/G1Cls1Wl0HNz3A3tJvzUX
         zcDStuR3EsVQCZ36R2k5qdRFqtCPmXMIb9f+QN2Wvnxe79YvW7nLp8lsfV6Tsf2a1WW/
         33/Q==
X-Forwarded-Encrypted: i=1; AJvYcCXI64GJKu93j6qtIbc/YFirCnDDi+lRz9VBQDlL/Szgp6BjrF6FQbcgQjGw4xAFajoz3hLXdkK522BxNjDBhTN2Lx9R2Gwx/HNIlyJoU84=
X-Gm-Message-State: AOJu0Yywy6lxNXIuNJVzWgw+j1jAIMZjMsf3z3oF0XxgZXObaDENILgX
	wccyDbZ/0AFRG0iWOeOhLPQ3mfzckW+wnvE36bhM2h+19q9RcDY4
X-Google-Smtp-Source: AGHT+IG7MJcVzptHcjqrtZKExE5iFng54JmEeiy6GqbWZcborJLoCBJiM+QteVd/gxxd/a7tWFZdmA==
X-Received: by 2002:a50:cd9e:0:b0:57d:785:7cc2 with SMTP id 4fb4d7f45d1cf-57d4bdcc05bmr5171698a12.28.1719338976986;
        Tue, 25 Jun 2024 11:09:36 -0700 (PDT)
Message-ID: <9de5a3235f2bce8e7ab5c5dd5faf36e1774c97a7.camel@gmail.com>
Subject: Re: [PATCH v13 08/10] xen/riscv: change .insn to .byte in
 cpu_relax()
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, George
 Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Tue, 25 Jun 2024 20:09:35 +0200
In-Reply-To: <8be2c7c0-0aa0-44e0-b3d3-d422fecc29b6@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <b5ccb3850cbfc0c84d2feea35a971351395fa974.1719319093.git.oleksii.kurochko@gmail.com>
	 <8be2c7c0-0aa0-44e0-b3d3-d422fecc29b6@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 16:45 +0200, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > The .insn directive appears to check that the byte pattern is a
> > known
> > extension, where .4byte doesn't.
>
> Support for specifying "raw" insns was added only in 2.38. Moving
> away
> from .insn has other downsides (which may or may not matter here, but
> that would want discussing). But: Do we really need to move away?
> Can't
> you use .insn with operands that the older gas supports, e.g.
>
> 	.insn r MISC_MEM, 0, 0, x0, x0, x16
>=20
> ? I'm sorry, the oldest RISC-V gas I have to hand is 2.39, so I
> couldn't
> double check that 2.35 would grok this. From checking sources it
> should,
> though.
We can do it that way too. I checked on https://godbolt.org/ that even with
RISC-V (64-bit) GCC 8.2.0 the suggested .insn variant compiles fine.

>
> > The following compilation error occurs:
> >   ./arch/riscv/include/asm/processor.h: Assembler messages:
> >   ./arch/riscv/include/asm/processor.h:70: Error: unrecognized
> > opcode `0x0100000F'
> > In case of the following Binutils:
> >   $ riscv64-linux-gnu-as --version
> >   GNU assembler (GNU Binutils for Debian) 2.35.2
>
> In patch 6 you say 2.39. Why is 2.35.2 suddenly becoming of interest?
Andrew has (or had) an older version of binutils and started to face
the issues mentioned in this patch and the next one. As a result, some
changes were suggested.

The version in the README wasn't changed because, in my opinion, that
requires a separate CI job with a prebuilt or pinned toolchain version.
At the moment, only the version mentioned in the README and the one I
have on my machine are supported.

>
> > --- a/xen/arch/riscv/include/asm/processor.h
> > +++ b/xen/arch/riscv/include/asm/processor.h
> > @@ -67,7 +67,7 @@ static inline void cpu_relax(void)
> >      __asm__ __volatile__ ( "pause" );
> >  #else
> >      /* Encoding of the pause instruction */
> > -    __asm__ __volatile__ ( ".insn 0x0100000F" );
> > +    __asm__ __volatile__ ( ".byte 0x0100000F" );
> >  #endif
>
> In the description you (correctly) say .4byte; why is it .byte here?
> Does this build at all?
Yes, it is a typo. It should be .4byte; with that it builds, but with a warning:
	value 0x100000f truncated to 0xf


~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 18:27:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 18:27:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748009.1155549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAsf-0008Rx-Si; Tue, 25 Jun 2024 18:27:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748009.1155549; Tue, 25 Jun 2024 18:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAsf-0008Rq-PH; Tue, 25 Jun 2024 18:27:01 +0000
Received: by outflank-mailman (input) for mailman id 748009;
 Tue, 25 Jun 2024 18:26:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMAsd-0008Rk-TG
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 18:26:59 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8223252d-3320-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 20:26:58 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57d05e0017aso7242263a12.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 11:26:59 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-57d30562cb6sm6216109a12.83.2024.06.25.11.26.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 11:26:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8223252d-3320-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719340018; x=1719944818; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=t9iUYy2mYwtHAhrEq2Qx1BKRHdyA8rRSj+E2DApKz3E=;
        b=FHUx9lqL3C5LjdGKElXUk6z8NSgCbqyXxnE6QweZ4xOzJDFIv7pM5g9vsbWEoeDuXp
         kAkeMvxf/Lu1b5qoSiCe4mblpTs3kwxcRXW6s6wY6smn2RBXlFR1DmRsqMrP9jeyVkF0
         AAHrRImp1xP02u1GDeN/np2fg4KZaJxIxTLo7qE9FqsZZJcrgfKFGz7MxponGxxH7a0A
         ka2+rpEhJd0On5m4nOBC2O8tJlEvPnuczEVVc3lzOFg1NBNrPrVjiO+kRAvWjB5/WqZu
         qPEzjVDCDi1tKvRk9ZEG7j9CiglwmgziLgSPhIxg9GYx1LReynLgvXtUl482E7/wJwgk
         aMRg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719340018; x=1719944818;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=t9iUYy2mYwtHAhrEq2Qx1BKRHdyA8rRSj+E2DApKz3E=;
        b=rzOguWERrZNjpkSToHnCSac4L1IeHWTvpmUanvYosFnVDqKzs6k9MABpZts8QjwVAv
         6pvOyXl2mpOLBELk24vX9LmmA6R05S2E9CWW0ISQzXG8jGzbJOMEPWUcD1t4LLK4/uSp
         KYc/L3sdJdXnGrif8cjrUwcNS1RY4lRe64qYetbQjDG8r0a+4WpT9a6E+izcGs3n+vSy
         qLcg1xjPMeihsbww1Mz2F5gBWGeBqn+JmJm66ESREB8LiXWB/hlaBV7yfnDOOkUTgz1L
         TE4uJhsFmi37kCeqq2/HCKujtT0ySyspnLZPTo8UJF84vxFTQVHP7srmntw7JSd+auex
         sTqQ==
X-Forwarded-Encrypted: i=1; AJvYcCUE0MDe1Cc7ZjBIw6PADBGgh+T/4rdGZiDnCkpco2tEXZAWjmLnwddNkdb9h7VAGwpyj90MNGLpZhUvr+RO1/jLf9RGqKbDZziOQQakwno=
X-Gm-Message-State: AOJu0Yxj69DNyh5G6bkXRS++/D5RzUBFtYiIpnmebcbu0aelrRgc4GQf
	PQNccqkQ6VbXElp6me/Aj+WAH0GweT46vJuwvue3DeByxROGb7ds
X-Google-Smtp-Source: AGHT+IHOqSSMcSZYNHRPvrdbLnIRRhG27QPCLYx6cwvqAIdn0FymfvP5ZxJnKytVT7u/HTb/5+n4rA==
X-Received: by 2002:a50:d5d3:0:b0:57d:4876:55ce with SMTP id 4fb4d7f45d1cf-57d487657e4mr6821740a12.19.1719340018093;
        Tue, 25 Jun 2024 11:26:58 -0700 (PDT)
Message-ID: <3b2443ad76923afba348304b7cedbb257a6c5313.camel@gmail.com>
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, George
 Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Tue, 25 Jun 2024 20:26:57 +0200
In-Reply-To: <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
	 <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 16:49 +0200, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > --- a/xen/arch/riscv/include/asm/cmpxchg.h
> > +++ b/xen/arch/riscv/include/asm/cmpxchg.h
> > @@ -18,6 +18,20 @@
> >          : "r" (new) \
> >          : "memory" );
> >
> > +/*
> > + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is too
> > + * old, form
>
> Same question: Why's 2.37 suddenly of interest?
Andrew has (or had) an older version of binutils and started to face
the issues mentioned in this patch. As a result, some
changes were suggested.

> Plus, why would age of the
> tool chain matter?
As mentioned in the comment, binutils < 2.37 doesn't understand the
andn instruction.

> What you care about is whether you're permitted to use
> the extension at runtime.
At the moment we can't check that at runtime without taking an exception
(there will be an option to check once DTS parsing is available in Xen).
I will add the check when the DTS parsing functionality becomes
available. Right now the best we can do is mention it as a requirement
in the docs.

> Otherwise you could again ...
>
> Also something went wrong with line wrapping here.
>
> > + * it of a NOT+AND pair
> > + */
> > +#ifdef __riscv_zbb
> > +#define ANDN_INSN(rd, rs1, rs2)                 \
> > +    "andn " rd ", " rs1 ", " rs2 "\n"
> > +#else
> > +#define ANDN_INSN(rd, rs1, rs2)                 \
> > +    "not " rd ", " rs2 "\n"                     \
> > +    "and " rd ", " rs1 ", " rd "\n"
>
> ... resort to .insn.
Hmm, good point, it could be an issue.


If this patch is still needed (Andrew, have you updated your
toolchain?), then it should use .insn instead of andn. However, using
.insn would require passing rd, rs1, and rs2 through register
asm("some_reg") variables (?), which seems overly complicated. And it
remains an open question whether anyone will really use a binutils that
doesn't support the andn instruction...

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 18:28:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 18:28:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748015.1155560 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAu8-0001PR-6g; Tue, 25 Jun 2024 18:28:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748015.1155560; Tue, 25 Jun 2024 18:28:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMAu8-0001PK-3O; Tue, 25 Jun 2024 18:28:32 +0000
Received: by outflank-mailman (input) for mailman id 748015;
 Tue, 25 Jun 2024 18:28:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sI+y=N3=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMAu7-0001PC-Dk
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 18:28:31 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b8c2bbe4-3320-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 20:28:30 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a7245453319so496769466b.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 11:28:30 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fcf560627sm542619166b.148.2024.06.25.11.28.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 11:28:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8c2bbe4-3320-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719340110; x=1719944910; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Pko28/t4Wtd90ChhzTiIIVxZnme5thqAQj31u5fTeR4=;
        b=LjyI6DLj2aFNVRgXcka4b30vIqaauW462oC780G5chfALljeu1Jy5GA9WJtYE6+GmR
         JBAqI+XXvjf7CD+0ycO/sfw17pvP1glXTCINbeEjBTLr9X+YP108+XgzPjTCl6F73y7n
         g95zAkYflfGoZUNSMCt4tMuSNB6TZglfUHINDZR8//PQdSOX7LDCmv335pr3H0yATP/X
         v/cQGQTbydXAfmIp03uu/p6sO9+1RQMHEVnY9qYF/kvQm1wffZDzfHp5EuORDRjpAZli
         AmALnCka5XdYdb54IHvPNm/QwU0XF9dbLZrzcNVRTn22ctmlksI5gj1/ceJ7/srGnZ6h
         BhfQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719340110; x=1719944910;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Pko28/t4Wtd90ChhzTiIIVxZnme5thqAQj31u5fTeR4=;
        b=nzyGlpPEh1nLDul0anHfgFTOXq8ofxrxaTdK2lFkYihPFiBU/N+m42gNqYf3XE7t2g
         ORFkEoUpQjghRKKXnxVY6smNIy26CPq9JemxafbcHf2Gw3h9ZzsK5WRIEFmAFM767eM1
         f6cHjmWmotuUFtb1DwWyORR6pJzT9X2Ub/5wqc4L+66Shy0rrPcDHwYORu8erwV8giyD
         KW9q4P123fhQvYU1CIEQH0xQ2VU2Ufh6NZ34mn6NjFqVFORdp4lE4+xoLCZPXdEGBExh
         EoNaWCxmrNoZKofjTwzKMg4Xsgo2nMgdol7eB7SjAcCrvlxFKXw6Q1JFcguQ+9YAXUXr
         HHhw==
X-Forwarded-Encrypted: i=1; AJvYcCVZfh79GNsWWSzaVV9wSuQCa5BYp4S+oiLfmHmUy6q17oBH3YreiaCglMYklBL562YuDN03XdrE4SdvY6D2Imp/+P6gvGOu19ZSx1ncuNY=
X-Gm-Message-State: AOJu0YyjEMPYlEnp+b8pKNLPzHkN4rIAZPmX3wRkYJCxq8MsItElXK9C
	TpXJEmCEJWPOYzpLy0xe1hY4D3IztT5fdvHhPJXBrOOZAC2Kbkpg
X-Google-Smtp-Source: AGHT+IHPrmPluPxoLl54SEeuWo3Es9CqHRGKVvB+oDSTEg5IR+AKqRkRfDH5GL+EtlZ3p3wh9jeZHQ==
X-Received: by 2002:a17:907:710d:b0:a72:5967:b34 with SMTP id a640c23a62f3a-a727fc55f2dmr66938466b.22.1719340109609;
        Tue, 25 Jun 2024 11:28:29 -0700 (PDT)
Message-ID: <ed99047fe78ee829199b42051f5657ba79ef3545.camel@gmail.com>
Subject: Re: [PATCH for-4.19?] Config.mk: update MiniOS commit
From: Oleksii <oleksii.kurochko@gmail.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Samuel Thibault
 <samuel.thibault@ens-lyon.org>,  Juergen Gross <jgross@suse.com>
Date: Tue, 25 Jun 2024 20:28:28 +0200
In-Reply-To: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
References: <a98ab069-407b-4dee-9052-40ab72890d47@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 09:57 +0200, Jan Beulich wrote:
> Pull in the gcc14 build fix there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

> ---
> Probably nice to reference a gcc14-clean MiniOS tree from what is
> going
> to be 4.19.
>
> --- a/Config.mk
> +++ b/Config.mk
> @@ -224,7 +224,7 @@ QEMU_UPSTREAM_URL ?= https://xenbits.xen
>  QEMU_UPSTREAM_REVISION ?= master
>  
>  MINIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/mini-os.git
> -MINIOS_UPSTREAM_REVISION ?= b6a5b4d72b88e5c4faed01f5a44505de022860fc
> +MINIOS_UPSTREAM_REVISION ?= 8b038c7411ae7e823eaf6d15d5efbe037a07197a
>  
>  SEABIOS_UPSTREAM_URL ?= https://xenbits.xen.org/git-http/seabios.git
>  SEABIOS_UPSTREAM_REVISION ?= rel-1.16.3



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748027.1155606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVq-0000vw-DU; Tue, 25 Jun 2024 19:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748027.1155606; Tue, 25 Jun 2024 19:07:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVq-0000vB-4c; Tue, 25 Jun 2024 19:07:30 +0000
Received: by outflank-mailman (input) for mailman id 748027;
 Tue, 25 Jun 2024 19:07:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMBVo-00008w-O0
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:07:28 +0000
Received: from mail-ed1-x52b.google.com (mail-ed1-x52b.google.com
 [2a00:1450:4864:20::52b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2a229ede-3326-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 21:07:28 +0200 (CEST)
Received: by mail-ed1-x52b.google.com with SMTP id
 4fb4d7f45d1cf-57d07f07a27so6758519a12.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 12:07:28 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725d7b190fsm180434766b.50.2024.06.25.12.07.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 12:07:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a229ede-3326-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719342447; x=1719947247; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Fy0Ve82OeeZoLYNkYstdewJQpe9Npq/P9dEBPwXzBhw=;
        b=GKx6sqQ7t+Feqy/jhMpBsGnVSZXS2svgsxl36T2F3tr/R9XUveQFDZFIYiYk6dHc/E
         9u4TueOLdLnM3JCmVnsoZJXpU+ZNYrDDGDVn19dAZcujvcRgRHujQ7BTOExFFXVHc2Fz
         Y1dktfqhQAC5Zki8BxfMFYTM+F6WLW4qWfMNo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719342447; x=1719947247;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Fy0Ve82OeeZoLYNkYstdewJQpe9Npq/P9dEBPwXzBhw=;
        b=whrnoK4KC1HmmOdMQNICN0KzC3YUFUoP2noi+bl+FMt1+IDSFhbZlUGFMJXPrOJEoH
         IvndQaiQxYRz4HA6e/mMUD/l1tvgiMX8CESyVhOEAOUBCwXsTEkfMUEDRlCAlFx/Ecyv
         0mdLG1EMiToo1jQ3bnP3vZ1eoEFNBQLrrZ/QdQxGEmdvULHlAVYIkF2K1+KwUCT917cz
         T35KfuUmJ6WErgzEzJRqWETKbW4Hvz/fTOENRPLZxRWPmyHpFN6xpHj82bVlJXmP1xvF
         8OEy8zgTazpiNlyxuO/8JN7CAhFnyeEvSJz6bRc0n7JZzDe9O6QiDBPTRRh8JwwszYMn
         CsUw==
X-Gm-Message-State: AOJu0Yyp+fXgGRERcPF3ICWSEk23rSYVFyg48z2tAajpEhGHFORT4YCK
	IXyFK/n2XioLnjUwz1wWL55YIJni0GigicvIcR1BHDlhmHP1gvhLGkOA/Vj1AYaDtOhsj80zBzB
	aQBU=
X-Google-Smtp-Source: AGHT+IFtdpbsqg8qlXa1oBwhSkY9AFdg41EFbjwtjMAZJsCOUe08bxR94vYnzxzecRZqHGxMo1Bgew==
X-Received: by 2002:a17:907:c78e:b0:a72:4b31:13b5 with SMTP id a640c23a62f3a-a727f855270mr120272966b.54.1719342447117;
        Tue, 25 Jun 2024 12:07:27 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>
Subject: [PATCH 5/6] ARM/vgic: Use for_each_set_bit() in vgic_set_irqs_pending()
Date: Tue, 25 Jun 2024 20:07:18 +0100
Message-Id: <20240625190719.788643-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240625190719.788643-1-andrew.cooper3@citrix.com>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

... which is better optimised for scalar values, rather than using the
arbitrary-sized bitmap helpers.

For ARM32:

  add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-16 (-16)
  Function                                     old     new   delta
  vgic_set_irqs_pending                        284     268     -16

including removing calls to _find_{first,next}_bit_le(), and two stack-spilled
words too.

For ARM64:

  add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-40 (-40)
  Function                                     old     new   delta
  vgic_set_irqs_pending                        268     228     -40

including removing three calls to find_next_bit().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>

TODO: These were debug builds, and I need to redo the analysis with release
builds.  Also extend to the other examples.
---
 xen/arch/arm/vgic.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index 57519e834d78..c060676aee78 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -421,15 +421,13 @@ void vgic_enable_irqs(struct vcpu *v, uint32_t r, unsigned int n)
 
 void vgic_set_irqs_pending(struct vcpu *v, uint32_t r, unsigned int rank)
 {
-    const unsigned long mask = r;
-    unsigned int i;
     /* The first rank is always per-vCPU */
     bool private = rank == 0;
 
     /* LPIs will never be set pending via this function */
     ASSERT(!is_lpi(32 * rank + 31));
 
-    bitmap_for_each( i, &mask, 32 )
+    for_each_set_bit ( i, r )
     {
         unsigned int irq = i + 32 * rank;
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748023.1155570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVk-000099-2o; Tue, 25 Jun 2024 19:07:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748023.1155570; Tue, 25 Jun 2024 19:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVj-000092-VG; Tue, 25 Jun 2024 19:07:23 +0000
Received: by outflank-mailman (input) for mailman id 748023;
 Tue, 25 Jun 2024 19:07:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMBVj-00008w-CL
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:07:23 +0000
Received: from mail-ed1-x534.google.com (mail-ed1-x534.google.com
 [2a00:1450:4864:20::534])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 269f10fc-3326-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 21:07:22 +0200 (CEST)
Received: by mail-ed1-x534.google.com with SMTP id
 4fb4d7f45d1cf-57d251b5fccso5689047a12.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 12:07:22 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725d7b190fsm180434766b.50.2024.06.25.12.07.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 12:07:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 269f10fc-3326-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719342441; x=1719947241; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=pbMsYGv+JExFuhLbvU/7RxYzCy5BhhzPeqcWqtXHjNU=;
        b=DWRxCxv3nQAijocWdtHiNlKABH+S4c3eNpuIAgPsYtiibkYgBPUmsnz/syI0hTxsMG
         96XmkZYGWoK+ooackuCQoPA76cr3nUVNcFYLhx8PVTqMA5Vvrwd/cHTwds3jPFCcg2IG
         nFE6dLJQmQtualZP817cuDgBFcI3cPhAd7ncA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719342441; x=1719947241;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=pbMsYGv+JExFuhLbvU/7RxYzCy5BhhzPeqcWqtXHjNU=;
        b=Peb0zfJGKPurdnmaow+YthhZ4ydg+X0lVmvY/G+FgncpMWSMdw01Pn2QM3LfM1KGm7
         bJvni5pF7ME6L2auYxRiHyQRcCIp1Gov7ndvfLd99KC47/2VgsSHfzAGu0MKVadxkTIw
         7R08asorGIVfi2ATUnw+czBiCwlhkzkQB6TRwffbE3hFE5ojRbazpNtSKd5cd/Kp1OQo
         /p4LXhfW/oIr4jKXZGFdT8QKTl3ae4nuE454LIA3QfL5Hr91JfIduonaEvaDD7Cdan6c
         FUjSvApeqoG2yiTLXB1JEB9+W3mJhQWS00lzH8GvT0lYNCA7cIWWulllAT+gWyuRay6V
         roPg==
X-Gm-Message-State: AOJu0Yy43qPZJgbm8WSXGKkmxtsNJ5gx9m9SeVWkT9it/YN1DuaZesIE
	6yWepktK056YJGCvTmrcBuLsKZnzTnoN26z7EFIj8AO98poZ81SRTL0VVvIW90m1kvyQiCcjL1i
	X2Bo=
X-Google-Smtp-Source: AGHT+IHqWbH8dR8xfcXFpDSH3kjtcbjHnXSOkIXAllH6bLeB+ZJF32lLfEI+FkHSdX4LTTFQBaI3Yg==
X-Received: by 2002:a17:906:f0c8:b0:a68:a800:5f7e with SMTP id a640c23a62f3a-a7245b45abbmr540133166b.10.1719342441534;
        Tue, 25 Jun 2024 12:07:21 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.20 0/6] xen: Rework for_each_set_bit()
Date: Tue, 25 Jun 2024 20:07:13 +0100
Message-Id: <20240625190719.788643-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

for_each_set_bit() is currently horribly inefficient for how we commonly use
it.  See patch 4 for details.

Andrew Cooper (6):
  x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more efficient
  xen/bitops: Rename for_each_set_bit() to bitmap_for_each()
  xen/macros: Introduce BUILD_ERROR()
  xen/bitops: Introduce for_each_set_bit()
  ARM/vgic: Use for_each_set_bit() in vgic_set_irqs_pending()
  x86/xstate: Switch back to for_each_set_bit()

 xen/arch/arm/gic-vgic.c          |  2 +-
 xen/arch/arm/vgic.c              |  8 ++---
 xen/arch/arm/vgic/vgic-mmio-v2.c |  2 +-
 xen/arch/arm/vgic/vgic-mmio.c    | 12 +++----
 xen/arch/x86/cpu-policy.c        | 10 +++---
 xen/arch/x86/hvm/vmx/vmx.c       | 61 +++++++++++++++++++++++++++-----
 xen/arch/x86/xstate.c            |  8 ++---
 xen/common/bitops.c              | 29 +++++++++++++++
 xen/include/xen/bitmap.h         | 12 +++++++
 xen/include/xen/bitops.h         | 35 ++++++++++++------
 xen/include/xen/macros.h         |  2 ++
 xen/include/xen/self-tests.h     |  4 +--
 12 files changed, 142 insertions(+), 43 deletions(-)

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748028.1155611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVq-00012E-QK; Tue, 25 Jun 2024 19:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748028.1155611; Tue, 25 Jun 2024 19:07:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVq-0000zB-GJ; Tue, 25 Jun 2024 19:07:30 +0000
Received: by outflank-mailman (input) for mailman id 748028;
 Tue, 25 Jun 2024 19:07:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMBVp-0000O1-Gg
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:07:29 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a11fae5-3326-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 21:07:28 +0200 (CEST)
Received: by mail-lf1-x12d.google.com with SMTP id
 2adb3069b0e04-52cdbc20faeso4950898e87.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 12:07:28 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725d7b190fsm180434766b.50.2024.06.25.12.07.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 12:07:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a11fae5-3326-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719342446; x=1719947246; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=H7YBcEx5ULWaurBzvfBec/KmEUPaaE087K5DxeWyyao=;
        b=iKnmxSgK9GuFLtzdObVEAh/XRF+/U+zWC1MliHekshPfCUnh/W66hW7LoUN0vws6vk
         /8+mfHC2LpUyASvw95bNP77VZE/7EyeR+Ktb+K5/GEVdeMWq5Cg54Mt3cVVWOURNegAZ
         Z2yvOCPdBN0nnaJaRaLi7yttNAqXYWF6bQR8Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719342446; x=1719947246;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=H7YBcEx5ULWaurBzvfBec/KmEUPaaE087K5DxeWyyao=;
        b=Z8OAv1b6j17luvX0ixNX91rnmzxnoighosv8xVRE/S9HxgURMVIEAWdzZ1AXUKb52S
         JGnchvbVNP+PnLm4O98hceO1aDo/kv4arjh5+/pN2MbBGuaQ8hd9G1ZL5WmeOTtBYjUP
         oH4fbU49wV8kda0cQ/cQCVBZeAnbiY94RLVoKlPkqE2wk6wzvluHLI4e/X9aYkdfEGfh
         0BvUX/Hm2090+p3XJ8o9+92FpFBr6pioT8TaUCq7S0QQd6BQM9ataNdDCsjki9TSds3/
         kQW/rif0Dbij5VZROE8I2h7DBVF56zuxlnO49OGDQmFvnI+Nj9Uu5/veDjkgSH8ISZ7Y
         NZIw==
X-Gm-Message-State: AOJu0Yz2y7qRxcsWd63TlmvJk4jNPhMGv81Qr5oZh7ylRITiSJSPeCfg
	TVyQD0/ldYi2L25oG8Xttz7pqm7j/RirwUiRN4JC6uR1IUIhkTwi08loRsDdWGLBVZLMxTKMj4w
	1uYo=
X-Google-Smtp-Source: AGHT+IFxAshXGQ3ajeTGLv8A1NSVQvnX+gCWqLJtm6Ui3v8djCzGmMyrzbCLeJIVmfkuYXwx8wQlKw==
X-Received: by 2002:ac2:4838:0:b0:52b:c27c:ea1f with SMTP id 2adb3069b0e04-52ce185faa8mr4432110e87.55.1719342446331;
        Tue, 25 Jun 2024 12:07:26 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH 4/6] xen/bitops: Introduce for_each_set_bit()
Date: Tue, 25 Jun 2024 20:07:17 +0100
Message-Id: <20240625190719.788643-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240625190719.788643-1-andrew.cooper3@citrix.com>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The prior version (renamed to bitmap_for_each()) was inefficient when used
over a scalar, but this is the more common usage even before accounting for
the many open-coded forms.

Introduce a new version which operates on scalars only and does so without
spilling them to memory.  This in turn requires the addition of a type-generic
form of ffs().

Add testing for the new construct alongside the ffs/fls testing.
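
The scalar iteration described above can be sketched in plain C.  This is a
hypothetical standalone helper, not the patch's macro: it walks the set bits
of a scalar lowest-first, clearing the lowest set bit each step with
`v &= v - 1`, so the value never needs to be spilled to memory.

```c
#include <strings.h> /* ffs() */

/*
 * Hypothetical sketch of the iteration pattern: collect the 0-based
 * positions of set bits in @val, lowest first.  ffs() returns a 1-based
 * index (0 when no bits are set), so ffs(v) - 1 is the bit position.
 * Clearing the lowest set bit with v &= v - 1 advances the loop.
 */
static unsigned int lowest_set_bits(unsigned int val, unsigned int *out,
                                    unsigned int max)
{
    unsigned int n = 0;

    for ( unsigned int v = val; v && n < max; v &= v - 1 )
        out[n++] = ffs(v) - 1;

    return n;
}
```

The same `v &= v - 1` step is what the macro below uses to terminate after
exactly one iteration per set bit.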

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

The naming of ffs_g() is taken from the new compiler builtins which are using
a g suffix to mean type-generic.
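
As an aside, the type-based dispatch that ffs_g() builds from sizeof() chains
can also be expressed with C11 _Generic.  This is an illustration only (the
patch cannot use it, as it targets ffs64() for any size up to uint64_t rather
than exact types); `my_ffs_g` is a hypothetical name, and ffsl()/ffsll() are
glibc extensions:

```c
#define _GNU_SOURCE /* ffsl(), ffsll() in glibc */
#include <strings.h>

/*
 * Illustration only, not the patch's ffs_g(): select the find-first-set
 * helper by the argument's exact type.  All three return a 1-based bit
 * index, or 0 when no bit is set.
 */
#define my_ffs_g(x) _Generic((x),               \
    unsigned int:       ffs,                    \
    unsigned long:      ffsl,                   \
    unsigned long long: ffsll)(x)
```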
---
 xen/common/bitops.c      | 29 +++++++++++++++++++++++++++++
 xen/include/xen/bitops.h | 24 ++++++++++++++++++++++++
 2 files changed, 53 insertions(+)

diff --git a/xen/common/bitops.c b/xen/common/bitops.c
index 94a8983af99c..9e532f0d87aa 100644
--- a/xen/common/bitops.c
+++ b/xen/common/bitops.c
@@ -84,8 +84,37 @@ static void __init test_fls(void)
     CHECK(fls64, 0x8000000000000001ULL, 64);
 }
 
+static void __init test_for_each_set_bit(void)
+{
+    unsigned int  ui,  ui_res = 0;
+    unsigned long ul,  ul_res = 0;
+    uint64_t      ull, ull_res = 0;
+
+    ui = HIDE(0x80008001U);
+    for_each_set_bit ( i, ui )
+        ui_res |= 1U << i;
+
+    if ( ui != ui_res )
+        panic("for_each_set_bit(uint) expected %#x, got %#x\n", ui, ui_res);
+
+    ul = HIDE(1UL << (BITS_PER_LONG - 1) | 1);
+    for_each_set_bit ( i, ul )
+        ul_res |= 1UL << i;
+
+    if ( ul != ul_res )
+        panic("for_each_set_bit(ulong) expected %#lx, got %#lx\n", ul, ul_res);
+
+    ull = HIDE(0x8000000180000001ULL);
+    for_each_set_bit ( i, ull )
+        ull_res |= 1ULL << i;
+
+    if ( ull != ull_res )
+        panic("for_each_set_bit(uint64) expected %#"PRIx64", got %#"PRIx64"\n", ull, ull_res);
+}
+
 static void __init __constructor test_bitops(void)
 {
     test_ffs();
     test_fls();
+    test_for_each_set_bit();
 }
diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
index 24de0835b7ab..84ffcb8d57bc 100644
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -56,6 +56,16 @@ static always_inline __pure unsigned int ffs64(uint64_t x)
         return !x || (uint32_t)x ? ffs(x) : ffs(x >> 32) + 32;
 }
 
+/*
+ * A type-generic ffs() which picks the appropriate ffs{,l,64}() based on its
+ * argument.
+ */
+#define ffs_g(x)                                        \
+    sizeof(x) <= sizeof(int) ? ffs(x) :                 \
+        sizeof(x) <= sizeof(long) ? ffsl(x) :           \
+        sizeof(x) <= sizeof(uint64_t) ? ffs64(x) :      \
+        ({ BUILD_ERROR("ffs_g() Bad input type"); 0; })
+
 static always_inline __pure unsigned int fls(unsigned int x)
 {
     if ( __builtin_constant_p(x) )
@@ -92,6 +102,20 @@ static always_inline __pure unsigned int fls64(uint64_t x)
     }
 }
 
+/*
+ * for_each_set_bit() - Iterate over all set bits in a scalar value.
+ *
+ * @iter An iterator name.  Its scope is within the loop only.
+ * @val  A scalar value to iterate over.
+ *
+ * A copy of @val is taken internally.
+ */
+#define for_each_set_bit(iter, val)                     \
+    for ( typeof(val) __v = (val); __v; )               \
+        for ( unsigned int (iter);                      \
+              __v && ((iter) = ffs_g(__v) - 1, true);   \
+              __v &= __v - 1 )
+
 /* --------------------- Please tidy below here --------------------- */
 
 #ifndef find_next_bit
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748026.1155600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVq-0000sv-11; Tue, 25 Jun 2024 19:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748026.1155600; Tue, 25 Jun 2024 19:07:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVp-0000sg-SZ; Tue, 25 Jun 2024 19:07:29 +0000
Received: by outflank-mailman (input) for mailman id 748026;
 Tue, 25 Jun 2024 19:07:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMBVo-0000O1-72
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:07:28 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28f46f20-3326-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 21:07:26 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a6fd513f18bso500994766b.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 12:07:26 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725d7b190fsm180434766b.50.2024.06.25.12.07.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 12:07:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28f46f20-3326-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719342445; x=1719947245; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=hlzSBBPH1Fz0L1myZVLvo6vrKf8kQ291bZGAOZ0GC5A=;
        b=FmdYg87xCxpvnGbBTo2Atgbfbcu+ZvBCuaRsXAJUIVJzLq/2OgiPhKTibAN+TSRp6+
         qierkuHmkRKXsWTQa003Jms50RLSnxyO5s+f2ZA8dcRxntGBkVplhcbAEWpYfujacZAd
         WE4W8SNLA5SsSYZGF/Odkmi9WXcXQQxS/5cIY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719342445; x=1719947245;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=hlzSBBPH1Fz0L1myZVLvo6vrKf8kQ291bZGAOZ0GC5A=;
        b=TWbsYcCSc80bjn873FDm411QgLVIeqgmQ+IXKl/tOivyCzIqLTtq9Nqr0ouc8TT2XD
         5MEWhrTvQGsxF6itcm04la0xpV02/Z7QVYYpnELcKv9Y0itiWWc3YhQJeKB4cbKlyi8W
         tdnFqBZAmDIiPg703lY0F9zFdUwtUJj+U3YwxwzoiO3MNqnsyQZuTDKjXwkthBZyhlgL
         TviwJ0vsMMtX2Y0f19LGdJpAiMg3Mom1H1NnxGbBSFx4Az8LMjo3dPnkCd45DifiO/bY
         wynVBLoQ7o+1KPVUhoaYJDYkbM4doUzxmGw9lDi0fQSTcm8WnQbdI56TbtDPph9upM07
         iL0g==
X-Gm-Message-State: AOJu0Yw++e+9794HuqMzdbV4MaAXkmVzT3wpqG4POmGt65mm9zFWOxyG
	smnNcWESV6oemD917OpLROlip5QTni+3xNYPgFPtwZdm634UdVwoR6sRWBmJgH0D04OydJuQ7vN
	AspU=
X-Google-Smtp-Source: AGHT+IGuRz/nm8gKauLNQIF038k6w2NBrZxpsGF7hd8yXlcJSlVujS8VN+uqtC7rvaEif9/JFVCvqg==
X-Received: by 2002:a17:907:8dc6:b0:a6f:e25d:f6a4 with SMTP id a640c23a62f3a-a7245c642e5mr589513666b.76.1719342445256;
        Tue, 25 Jun 2024 12:07:25 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19? 3/6] xen/macros: Introduce BUILD_ERROR()
Date: Tue, 25 Jun 2024 20:07:16 +0100
Message-Id: <20240625190719.788643-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240625190719.788643-1-andrew.cooper3@citrix.com>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

... and use it in self-tests.h.

This is intended to replace constructs such as __bitop_bad_size().  It
produces a better diagnostic, and is MISRA-friendly.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

RFC for-4.19.  This can be used to not introduce new MISRA violations when
adjusting __bitop_bad_size().  It's safe to pull out of this series.

---
 xen/include/xen/macros.h     | 2 ++
 xen/include/xen/self-tests.h | 4 ++--
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/macros.h b/xen/include/xen/macros.h
index ec89f4654fcf..8441d7e7d66a 100644
--- a/xen/include/xen/macros.h
+++ b/xen/include/xen/macros.h
@@ -59,6 +59,8 @@
 #define BUILD_BUG_ON(cond) ((void)BUILD_BUG_ON_ZERO(cond))
 #endif
 
+#define BUILD_ERROR(msg) asm ( ".error \"" msg "\"" )
+
 /* Hide a value from the optimiser. */
 #define HIDE(x)                                 \
     ({                                          \
diff --git a/xen/include/xen/self-tests.h b/xen/include/xen/self-tests.h
index 42a4cc4d17fe..4bc322b7f2a6 100644
--- a/xen/include/xen/self-tests.h
+++ b/xen/include/xen/self-tests.h
@@ -22,9 +22,9 @@
         typeof(fn(val)) real = fn(val);                                 \
                                                                         \
         if ( !__builtin_constant_p(real) )                              \
-            asm ( ".error \"'" STR(fn(val)) "' not compile-time constant\"" ); \
+            BUILD_ERROR("'" STR(fn(val)) "' not compile-time constant"); \
         else if ( real != res )                                         \
-            asm ( ".error \"Compile time check '" STR(fn(val) == res) "' failed\"" ); \
+            BUILD_ERROR("Compile time check '" STR(fn(val) == res) "' failed"); \
     } while ( 0 )
 #else
 #define COMPILE_CHECK(fn, val, res)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748025.1155586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVo-0000SB-J8; Tue, 25 Jun 2024 19:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748025.1155586; Tue, 25 Jun 2024 19:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVo-0000RN-Dn; Tue, 25 Jun 2024 19:07:28 +0000
Received: by outflank-mailman (input) for mailman id 748025;
 Tue, 25 Jun 2024 19:07:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMBVn-0000O1-6k
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:07:27 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2852989e-3326-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 21:07:25 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a724440f597so410896266b.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 12:07:25 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725d7b190fsm180434766b.50.2024.06.25.12.07.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 12:07:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2852989e-3326-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719342444; x=1719947244; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Zh6GzkNpe3HYBwE75qQqYBx6PpXtyY7khMvFg+0XxIs=;
        b=gN058rqBlo4/Y6K7IwrubFPJtGl/WgjrGNfsYmfRfb8pKRAavOD6++4PQgmPso68cE
         jWuIiySHlqaTx3gZFWCZ8Wu82ISmb61ITVX5vCiHCnouUJq3gg8Em4hcjdEZimlZOv6A
         Q5BfJFtX45wFC0ShvzWReUkahXwwz1jfJY/gk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719342444; x=1719947244;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Zh6GzkNpe3HYBwE75qQqYBx6PpXtyY7khMvFg+0XxIs=;
        b=W9EF4G07Ql0MrdIB0WUgs6VcI6+tJ1PN/NrKNsTCDTZANvJbEswW8UjXuEUnriMHOW
         kLERTULvRGOcp4emXoz1LwUCW0sKe6MXdFBdWs3ERavKJVxE4xNeoyg3F/RSIBADuSlt
         e51gKm/l9jAwlK6amGAV/zWmSI02Fm4bJtwht8Mhzbt0vHB5BwyJA/Q8c0KL74U4JjBF
         Vg/7IdDIBhlicd6PIl6yfkV7OkGQGn9WRJ+fQOvlJVwevTEQiNe0AdLsrUAg7htOVMRn
         g4M9T+xEUAiygbFD5NtLzpWyD5VWLTigdQbGWBwSjaInD8tFVkBXkC1QBiopp48Aaspd
         hI/w==
X-Gm-Message-State: AOJu0Yx++9vFgxVUlITtLJdndQUnzT/1GUABGGcd5kxai61d1Facawj3
	knGUtCYqkdTh4606/cGKO+LwbuqcXeyILS1sKSGQ2jbEjPh3LNjqZGyYSprrjKv4T23gCrtsxoG
	JREQ=
X-Google-Smtp-Source: AGHT+IHNPmsdQcA9qz63VN59BMA3NWhmQaolVchErbzJapICKkMXgA60cQbG04cCvcrzQuppyr0mEA==
X-Received: by 2002:a17:906:11c7:b0:a6f:4a42:1976 with SMTP id a640c23a62f3a-a7245bacda1mr454843466b.37.1719342443907;
        Tue, 25 Jun 2024 12:07:23 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH 2/6] xen/bitops: Rename for_each_set_bit() to bitmap_for_each()
Date: Tue, 25 Jun 2024 20:07:15 +0100
Message-Id: <20240625190719.788643-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240625190719.788643-1-andrew.cooper3@citrix.com>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

The current implementation wants to take an in-memory bitmap.  However, all
ARM callers and all but one x86 caller spill a scalar to the stack in order
to use the "generic arbitrary bitmap" helpers under the hood.

This works, but it is far from ideal.

Rename the construct and move it into bitmap.h, because having an iterator for
an arbitrary bitmap is a useful thing.

This will allow us to re-implement for_each_set_bit() to be more appropriate
for scalar values.

No functional change.
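
The spill-to-stack pattern being criticised can be sketched as follows.  This
is a hypothetical simplification, not Xen code: `find_first_from()` stands in
for a generic single-word bitmap walker, and the caller must copy its 32-bit
scalar into an unsigned long on the stack just to take its address.

```c
/*
 * Hypothetical single-word simplification of a generic bitmap walker:
 * return the index of the first set bit in map[0] at or after @start,
 * or @size if none remain.
 */
static unsigned int find_first_from(const unsigned long *map,
                                    unsigned int size, unsigned int start)
{
    for ( unsigned int i = start; i < size; i++ )
        if ( map[0] & (1UL << i) )
            return i;

    return size;
}

/* Count the set bits in @r via the spill-to-stack pattern. */
static unsigned int count_pending(unsigned int r)
{
    const unsigned long mask = r;   /* the spill the commit refers to */
    unsigned int n = 0;

    for ( unsigned int i = find_first_from(&mask, 32, 0); i < 32;
          i = find_first_from(&mask, 32, i + 1) )
        n++;

    return n;
}
```

Each step re-reads the bitmap through a pointer, which the compiler generally
cannot keep in a register; the scalar-only iterator avoids that cost.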

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/arm/gic-vgic.c          |  2 +-
 xen/arch/arm/vgic.c              |  6 +++---
 xen/arch/arm/vgic/vgic-mmio-v2.c |  2 +-
 xen/arch/arm/vgic/vgic-mmio.c    | 12 ++++++------
 xen/arch/x86/cpu-policy.c        |  8 ++++----
 xen/arch/x86/xstate.c            |  4 ++--
 xen/include/xen/bitmap.h         | 12 ++++++++++++
 xen/include/xen/bitops.h         | 11 -----------
 8 files changed, 29 insertions(+), 28 deletions(-)

diff --git a/xen/arch/arm/gic-vgic.c b/xen/arch/arm/gic-vgic.c
index b99e28722425..0dfff76a238e 100644
--- a/xen/arch/arm/gic-vgic.c
+++ b/xen/arch/arm/gic-vgic.c
@@ -111,7 +111,7 @@ static unsigned int gic_find_unused_lr(struct vcpu *v,
     {
         unsigned int used_lr;
 
-        for_each_set_bit(used_lr, lr_mask, nr_lrs)
+        bitmap_for_each(used_lr, lr_mask, nr_lrs)
         {
             gic_hw_ops->read_lr(used_lr, &lr_val);
             if ( lr_val.virq == p->irq )
diff --git a/xen/arch/arm/vgic.c b/xen/arch/arm/vgic.c
index c04fc4f83f96..57519e834d78 100644
--- a/xen/arch/arm/vgic.c
+++ b/xen/arch/arm/vgic.c
@@ -429,7 +429,7 @@ void vgic_set_irqs_pending(struct vcpu *v, uint32_t r, unsigned int rank)
     /* LPIs will never be set pending via this function */
     ASSERT(!is_lpi(32 * rank + 31));
 
-    for_each_set_bit( i, &mask, 32 )
+    bitmap_for_each( i, &mask, 32 )
     {
         unsigned int irq = i + 32 * rank;
 
@@ -483,7 +483,7 @@ bool vgic_to_sgi(struct vcpu *v, register_t sgir, enum gic_sgi_mode irqmode,
         perfc_incr(vgic_sgi_list);
         base = target->aff1 << 4;
         bitmap = target->list;
-        for_each_set_bit( i, &bitmap, sizeof(target->list) * 8 )
+        bitmap_for_each( i, &bitmap, sizeof(target->list) * 8 )
         {
             vcpuid = base + i;
             if ( vcpuid >= d->max_vcpus || d->vcpu[vcpuid] == NULL ||
@@ -728,7 +728,7 @@ void vgic_check_inflight_irqs_pending(struct domain *d, struct vcpu *v,
     const unsigned long mask = r;
     unsigned int i;
 
-    for_each_set_bit( i, &mask, 32 )
+    bitmap_for_each( i, &mask, 32 )
     {
         struct pending_irq *p;
         struct vcpu *v_target;
diff --git a/xen/arch/arm/vgic/vgic-mmio-v2.c b/xen/arch/arm/vgic/vgic-mmio-v2.c
index 2e507b10fed5..82d0c22b39fc 100644
--- a/xen/arch/arm/vgic/vgic-mmio-v2.c
+++ b/xen/arch/arm/vgic/vgic-mmio-v2.c
@@ -108,7 +108,7 @@ static void vgic_mmio_write_sgir(struct vcpu *source_vcpu,
         return;
     }
 
-    for_each_set_bit( vcpu_id, &targets, 8 )
+    bitmap_for_each( vcpu_id, &targets, 8 )
     {
         struct vcpu *vcpu = d->vcpu[vcpu_id];
         struct vgic_irq *irq = vgic_get_irq(d, vcpu, intid);
diff --git a/xen/arch/arm/vgic/vgic-mmio.c b/xen/arch/arm/vgic/vgic-mmio.c
index 5d935a73013e..b023ecc20066 100644
--- a/xen/arch/arm/vgic/vgic-mmio.c
+++ b/xen/arch/arm/vgic/vgic-mmio.c
@@ -71,7 +71,7 @@ void vgic_mmio_write_senable(struct vcpu *vcpu,
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
     unsigned int i;
 
-    for_each_set_bit( i, &val, len * 8 )
+    bitmap_for_each( i, &val, len * 8 )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
         unsigned long flags;
@@ -116,7 +116,7 @@ void vgic_mmio_write_cenable(struct vcpu *vcpu,
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
     unsigned int i;
 
-    for_each_set_bit( i, &val, len * 8 )
+    bitmap_for_each( i, &val, len * 8 )
     {
         struct vgic_irq *irq;
         unsigned long flags;
@@ -186,7 +186,7 @@ void vgic_mmio_write_spending(struct vcpu *vcpu,
     unsigned long flags;
     irq_desc_t *desc;
 
-    for_each_set_bit( i, &val, len * 8 )
+    bitmap_for_each( i, &val, len * 8 )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
@@ -234,7 +234,7 @@ void vgic_mmio_write_cpending(struct vcpu *vcpu,
     unsigned long flags;
     irq_desc_t *desc;
 
-    for_each_set_bit( i, &val, len * 8 )
+    bitmap_for_each( i, &val, len * 8 )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
@@ -328,7 +328,7 @@ void vgic_mmio_write_cactive(struct vcpu *vcpu,
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
     unsigned int i;
 
-    for_each_set_bit( i, &val, len * 8 )
+    bitmap_for_each( i, &val, len * 8 )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
@@ -358,7 +358,7 @@ void vgic_mmio_write_sactive(struct vcpu *vcpu,
     uint32_t intid = VGIC_ADDR_TO_INTID(addr, 1);
     unsigned int i;
 
-    for_each_set_bit( i, &val, len * 8 )
+    bitmap_for_each( i, &val, len * 8 )
     {
         struct vgic_irq *irq = vgic_get_irq(vcpu->domain, vcpu, intid + i);
 
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 304dc20cfab8..cd53bac777dc 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -157,7 +157,7 @@ static void zero_leaves(struct cpuid_leaf *l,
 
 static void sanitise_featureset(uint32_t *fs)
 {
-    /* for_each_set_bit() uses unsigned longs.  Extend with zeroes. */
+    /* bitmap_for_each() uses unsigned longs.  Extend with zeroes. */
     uint32_t disabled_features[
         ROUNDUP(FSCAPINTS, sizeof(unsigned long)/sizeof(uint32_t))] = {};
     unsigned int i;
@@ -174,8 +174,8 @@ static void sanitise_featureset(uint32_t *fs)
         disabled_features[i] = ~fs[i] & deep_features[i];
     }
 
-    for_each_set_bit(i, (void *)disabled_features,
-                     sizeof(disabled_features) * 8)
+    bitmap_for_each(i, (void *)disabled_features,
+                    sizeof(disabled_features) * 8)
     {
         const uint32_t *dfs = x86_cpu_policy_lookup_deep_deps(i);
         unsigned int j;
@@ -237,7 +237,7 @@ static void recalculate_xstate(struct cpu_policy *p)
     /* Subleafs 2+ */
     xstates &= ~XSTATE_FP_SSE;
     BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
-    for_each_set_bit ( i, &xstates, 63 )
+    bitmap_for_each ( i, &xstates, 63 )
     {
         /*
         * Pass through size (eax) and offset (ebx) directly.  Visibility of
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 68cdd8fcf021..da9053c0a262 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -606,7 +606,7 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
      * with respect to their index.
      */
     xcr0 &= ~(X86_XCR0_SSE | X86_XCR0_X87);
-    for_each_set_bit ( i, &xcr0, 63 )
+    bitmap_for_each ( i, &xcr0, 63 )
     {
         const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
         unsigned int s = c->offset + c->size;
@@ -634,7 +634,7 @@ unsigned int xstate_compressed_size(uint64_t xstates)
      * components require aligning to 64 first.
      */
     xstates &= ~(X86_XCR0_SSE | X86_XCR0_X87);
-    for_each_set_bit ( i, &xstates, 63 )
+    bitmap_for_each ( i, &xstates, 63 )
     {
         const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
 
diff --git a/xen/include/xen/bitmap.h b/xen/include/xen/bitmap.h
index b9f980e91930..5dd7db5be9e7 100644
--- a/xen/include/xen/bitmap.h
+++ b/xen/include/xen/bitmap.h
@@ -271,6 +271,18 @@ static inline void bitmap_clear(unsigned long *map, unsigned int start,
 #undef bitmap_switch
 #undef bitmap_bytes
 
+/**
+ * bitmap_for_each - iterate over every set bit in a memory region
+ * @iter: The integer iterator
+ * @addr: The address to base the search on
+ * @size: The maximum size to search
+ */
+#define bitmap_for_each(iter, addr, size)                        \
+    for ( (iter) = find_first_bit(addr, size);                   \
+          (iter) < (size);                                       \
+          (iter) = find_next_bit(addr, size, (iter) + 1) )
+
+
 struct xenctl_bitmap;
 int xenctl_bitmap_to_bitmap(unsigned long *bitmap,
                             const struct xenctl_bitmap *xenctl_bitmap,
diff --git a/xen/include/xen/bitops.h b/xen/include/xen/bitops.h
index 6a5e28730a25..24de0835b7ab 100644
--- a/xen/include/xen/bitops.h
+++ b/xen/include/xen/bitops.h
@@ -248,17 +248,6 @@ static inline __u32 ror32(__u32 word, unsigned int shift)
 #define __L16(x) (((x) & 0x0000ff00U) ? ( 8 + __L8( (x) >> 8))  : __L8( x))
 #define ilog2(x) (((x) & 0xffff0000U) ? (16 + __L16((x) >> 16)) : __L16(x))
 
-/**
- * for_each_set_bit - iterate over every set bit in a memory region
- * @bit: The integer iterator
- * @addr: The address to base the search on
- * @size: The maximum size to search
- */
-#define for_each_set_bit(bit, addr, size)               \
-    for ( (bit) = find_first_bit(addr, size);           \
-          (bit) < (size);                               \
-          (bit) = find_next_bit(addr, size, (bit) + 1) )
-
 #define BIT_WORD(nr) ((nr) / BITS_PER_LONG)
 
 #endif /* XEN_BITOPS_H */
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748024.1155580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVo-0000Oq-94; Tue, 25 Jun 2024 19:07:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748024.1155580; Tue, 25 Jun 2024 19:07:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVo-0000Oj-5Y; Tue, 25 Jun 2024 19:07:28 +0000
Received: by outflank-mailman (input) for mailman id 748024;
 Tue, 25 Jun 2024 19:07:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMBVm-0000O1-RK
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:07:26 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 278146f7-3326-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 21:07:24 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a724b9b34b0so339742966b.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 12:07:24 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725d7b190fsm180434766b.50.2024.06.25.12.07.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 12:07:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 278146f7-3326-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719342442; x=1719947242; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=xi3l57Ho1pUmbNMNe9NHaEbiwxfjL48pmKM7zQN1k2o=;
        b=lYdTahKGt635gTcgZBqAPehk9FdZXCcADV/lGlyDj9lzHWm+4JTmdGtG2g3F+SKQS9
         nNeDACwd8HGF3HyvYCVqOsSfhYVppa+l/cJdN2BElc+VR2hyQ0g82+kksHFNXnZ6yeYj
         Rf1hpcb/WaXifpwLdnj1H8xfZaO/EHG7P1G+s=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719342442; x=1719947242;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=xi3l57Ho1pUmbNMNe9NHaEbiwxfjL48pmKM7zQN1k2o=;
        b=Cra9gL8fa6xP0muFLvxUyrH7RcCNgKcry8Y6zXbApulnz6zSJQyBLlgncAMLHiCA7f
         SsDpj3TJkOuLUFb3OE8vtFmKlLiUvAzT5OxPgivZUY9jkJqlhkiOfE+YqmYja6TDHkn6
         8xhYSCyQRPjIDE2XVH2peB0PD7QfQwzaNl1RVFVDb/SqZZqreMVgTYCtM+0yxl5ptXLc
         w16lejgz0tV6ZA0bFR2iE4maRgKRlUzDZTC9nu2o+bAYb8FfZ6wkb4IPkKSjR3guwams
         vLIVEQEav4IyS9fgVM1o4F5/LqDnMvLEJGpc9FMQNyGjdOmrln+RuiwnXw3D6nhJmZAy
         eZTA==
X-Gm-Message-State: AOJu0YzbJgjRIZMyY9hzbJKN7ouH/hL12wyELhVHf37RE225xqyacFff
	BYMcpRbpZ2hGU9c+NaMVN2p/5hbEloj39mRv7+BfpP4g3kWtbyu497a7azrmG2lYYoGNzSN4wE4
	4ubU=
X-Google-Smtp-Source: AGHT+IEd3SHDA5f778vCgNNEnjYw+yhpd1jw+f+ZXJu1RKSDoaW7l41vRWxVF3RD3nt1F64w90ZD6Q==
X-Received: by 2002:a17:907:c815:b0:a72:5226:3307 with SMTP id a640c23a62f3a-a7252263ff9mr539077266b.57.1719342442485;
        Tue, 25 Jun 2024 12:07:22 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH 1/6] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more efficient
Date: Tue, 25 Jun 2024 20:07:14 +0100
Message-Id: <20240625190719.788643-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240625190719.788643-1-andrew.cooper3@citrix.com>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
the CPU and dirties it even if there's nothing outstanding.  Second, the final
for_each_set_bit() is O(256) when O(8) would do, and it performs multiple
atomic updates to the same IRR word.

Rewrite it from scratch, explaining what's going on at each step.

Bloat-o-meter reports 177 -> 145 (net -32), but the better aspect is the
removal of the calls to __find_{first,next}_bit() hidden behind
for_each_set_bit().

No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

The main purpose of this is to get rid of for_each_set_bit().

Full side-by-side diff:
  https://termbin.com/5wsb

The first loop ends up being unrolled identically, although there are 64-bit
movs to reload 0 for the xchg, which is definitely suboptimal.  Open-coding
asm ("xchg") without a memory clobber gets down to 32-bit movs, which is an
improvement but not ideal.  However, I didn't fancy going that far.

Also, the entirety of pi_desc is embedded in struct vcpu, which means that
when we're executing in Xen, the prefetcher is going to be stealing it back
from the IOMMU all the time.  This is a data structure which really should
*not* be adjacent to all the other misc data in the vcpu.
---
 xen/arch/x86/hvm/vmx/vmx.c | 61 +++++++++++++++++++++++++++++++++-----
 1 file changed, 53 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index f16faa6a615c..948ad48a4757 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2321,18 +2321,63 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
 
 static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
 {
-    struct vlapic *vlapic = vcpu_vlapic(v);
-    unsigned int group, i;
-    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
+    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
+    union {
+        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
+        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
+    } vec;
+    uint32_t *irr;
+    bool on;
 
-    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
+    /*
+     * The PIR is a contended cacheline which bounces between the CPU and
+     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
+     * express the same on the CPU side, so care has to be taken.
+     *
+     * First, do a plain read of ON.  If the PIR hasn't been modified, this
+     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
+     */
+    if ( !pi_test_on(desc) )
         return;
 
-    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
-        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
+    /*
+     * Second, if the plain read said that ON was set, we must clear it with
+     * an atomic action.  This will bring the cacheline to Exclusive on the
+     * CPU.
+     *
+     * This should always succeed because no one else should be playing with
+     * the PIR behind our back, but assert so just in case.
+     */
+    on = pi_test_and_clear_on(desc);
+    ASSERT(on);
 
-    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
-        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
+    /*
+     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
+     * (via ON being set) that at least one vector is pending too.  Atomically
+     * read and clear the entire pending bitmap as fast as we can, to reduce
+     * window that the IOMMU may steal the cacheline back from us.
+     *
+     * It is a performance concern, but not a correctness concern.  If the
+     * IOMMU does steal the cacheline back, we'll just wait to get it back
+     * again.
+     */
+    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
+        vec._64[i] = xchg(&desc->pir[i], 0);
+
+    /*
+     * Finally, merge the pending vectors into IRR.  The IRR register is
+     * scattered in memory, so we have to do this 32 bits at a time.
+     */
+    irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
+    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
+    {
+        if ( !vec._32[i] )
+            continue;
+
+        asm ( "lock or %[val], %[irr]"
+              : [irr] "+m" (irr[i * 0x10])
+              : [val] "r" (vec._32[i]) );
+    }
 }
 
 static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:07:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:07:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748029.1155629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVs-0001be-5x; Tue, 25 Jun 2024 19:07:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748029.1155629; Tue, 25 Jun 2024 19:07:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBVs-0001aR-1y; Tue, 25 Jun 2024 19:07:32 +0000
Received: by outflank-mailman (input) for mailman id 748029;
 Tue, 25 Jun 2024 19:07:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMBVp-00008w-O3
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:07:29 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2aa255d4-3326-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 21:07:29 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a6fd513f18bso501003766b.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 12:07:29 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a725d7b190fsm180434766b.50.2024.06.25.12.07.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 Jun 2024 12:07:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2aa255d4-3326-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719342448; x=1719947248; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=IqTAjuFXDg4PchRKJS8PgwLD1p7wjSEO9wUcu/jl/lg=;
        b=aF7Eudbs/pvxagbLa98MU3eidR/QdZZTGQG2uELKl4t9gnJr/60LB8peKiHyLgMd7Y
         c3AF5sFwInypUhNvKwNcLI83w/7l9iijfJ29p+edAdxs4WGgmub0tduDm4WKEj/oPXNw
         xZ7xxmwxSiUKHZrKBwSub7JhfZ7UlXD40XQt0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719342448; x=1719947248;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=IqTAjuFXDg4PchRKJS8PgwLD1p7wjSEO9wUcu/jl/lg=;
        b=sN0bwQ/D8zYAvvlxgFCLpa1GOdRiUDCAQiVjtDagixpIOiByGEN82nrvL28itQe1fz
         oULq0dnp7nx7sATmBvtHgbAH4jxyzj/vVb+CCih7t+egT2GW9Oi0YbMrp2E/qfnNDfud
         SYpQxt8gieONFxu4ugijtdNelkQOQv8CahozdYFyV0rIk3kOWPqHcKjR+e1oF2HTm5S7
         nSz3d3aMVjs18khVXTYFJFgKziunRk7fog5Z7n+nEF6tiSTzuYd53Tkib9SINF6WGJYt
         lWt1H4m2kltdVqbGmRBAFns7uxEdWqTI+o1YEUugqXnX8njoBNG41ez4+0I17Uike6SV
         4Xhw==
X-Gm-Message-State: AOJu0YziNW1sXQ0aZlBqpMUxWN/Q8Dzb0zJtgKJzr8XOTTqy0d+844Ht
	Q0Zbg0owXuPzvsh30VIMT80Vwo/6CeJH+nRkeUyvhEaCmBWCrsb2+LGdcel1A76BhOS3dfJthxS
	MlKs=
X-Google-Smtp-Source: AGHT+IEXoA2JRkMlTSVwVKWS+vj658ux8ClHQgfVG80Prd6QW4t1IN1zGIW7wD48aUpC2pYpxZkHdg==
X-Received: by 2002:a17:906:1995:b0:a6e:fa0a:4899 with SMTP id a640c23a62f3a-a7245b45affmr505431466b.16.1719342448411;
        Tue, 25 Jun 2024 12:07:28 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <JBeulich@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Michal Orzel <michal.orzel@amd.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH 6/6] x86/xstate: Switch back to for_each_set_bit()
Date: Tue, 25 Jun 2024 20:07:19 +0100
Message-Id: <20240625190719.788643-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240625190719.788643-1-andrew.cooper3@citrix.com>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

In all 3 examples, we're iterating over a scalar.  No caller can pass the
COMPRESSED flag in, so the upper bound of 63, as opposed to 64, doesn't
matter.

This alone produces:

  add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-161 (-161)
  Function                                     old     new   delta
  compress_xsave_states                         66      58      -8
  xstate_uncompressed_size                     119      71     -48
  xstate_compressed_size                       124      76     -48
  recalculate_xstate                           347     290     -57

where xstate_{un,}compressed_size() have practically halved in size despite
being small before.

The change in compress_xsave_states() is unexpected.  The function is almost
entirely dead code, and within what remains there's a smaller stack frame.  I
suspect it's leftovers that the optimiser couldn't fully discard.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
CC: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
CC: Bertrand Marquis <bertrand.marquis@arm.com>
CC: Michal Orzel <michal.orzel@amd.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 xen/arch/x86/cpu-policy.c | 4 ++--
 xen/arch/x86/xstate.c     | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index cd53bac777dc..fa55f6073089 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -193,7 +193,7 @@ static void sanitise_featureset(uint32_t *fs)
 static void recalculate_xstate(struct cpu_policy *p)
 {
     uint64_t xstates = XSTATE_FP_SSE;
-    unsigned int i, ecx_mask = 0, Da1 = p->xstate.Da1;
+    unsigned int ecx_mask = 0, Da1 = p->xstate.Da1;
 
     /*
      * The Da1 leaf is the only piece of information preserved in the common
@@ -237,7 +237,7 @@ static void recalculate_xstate(struct cpu_policy *p)
     /* Subleafs 2+ */
     xstates &= ~XSTATE_FP_SSE;
     BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
-    bitmap_for_each ( i, &xstates, 63 )
+    for_each_set_bit ( i, xstates )
     {
         /*
         * Pass through size (eax) and offset (ebx) directly.  Visibility of
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index da9053c0a262..88dbfbeafacd 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -589,7 +589,7 @@ static bool valid_xcr0(uint64_t xcr0)
 
 unsigned int xstate_uncompressed_size(uint64_t xcr0)
 {
-    unsigned int size = XSTATE_AREA_MIN_SIZE, i;
+    unsigned int size = XSTATE_AREA_MIN_SIZE;
 
     /* Non-XCR0 states don't exist in an uncompressed image. */
     ASSERT((xcr0 & ~X86_XCR0_STATES) == 0);
@@ -606,7 +606,7 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
      * with respect to their index.
      */
     xcr0 &= ~(X86_XCR0_SSE | X86_XCR0_X87);
-    bitmap_for_each ( i, &xcr0, 63 )
+    for_each_set_bit ( i, xcr0 )
     {
         const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
         unsigned int s = c->offset + c->size;
@@ -621,7 +621,7 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
 
 unsigned int xstate_compressed_size(uint64_t xstates)
 {
-    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
+    unsigned int size = XSTATE_AREA_MIN_SIZE;
 
     if ( xstates == 0 )
         return 0;
@@ -634,7 +634,7 @@ unsigned int xstate_compressed_size(uint64_t xstates)
      * components require aligning to 64 first.
      */
     xstates &= ~(X86_XCR0_SSE | X86_XCR0_X87);
-    bitmap_for_each ( i, &xstates, 63 )
+    for_each_set_bit ( i, xstates )
     {
         const struct xstate_component *c = &raw_cpu_policy.xstate.comp[i];
 
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:17:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:17:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748074.1155643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBfe-0006Bb-3q; Tue, 25 Jun 2024 19:17:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748074.1155643; Tue, 25 Jun 2024 19:17:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBfe-0006BR-0b; Tue, 25 Jun 2024 19:17:38 +0000
Received: by outflank-mailman (input) for mailman id 748074;
 Tue, 25 Jun 2024 19:17:37 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9u5k=N3=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMBfd-0006A7-J8
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:17:37 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9478fabb-3327-11ef-90a3-e314d9c70b13;
 Tue, 25 Jun 2024 21:17:36 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 6E4B64EE0738;
 Tue, 25 Jun 2024 21:17:35 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9478fabb-3327-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Tue, 25 Jun 2024 21:17:35 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Wei Liu
 <wl@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>, Michal Orzel
 <michal.orzel@amd.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, Simone Ballarin
 <simone.ballarin@bugseng.com>
Subject: Re: [XEN PATCH v3 03/16] misra: add deviations for direct inclusion
 guards
In-Reply-To: <b675da13-e444-472c-997b-3db45a10e797@suse.com>
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <1fdfec12fd2207c294f50d01d8ec32f890b915d7.1710145041.git.simone.ballarin@bugseng.com>
 <adeb5103-81b4-4f04-9ff6-a0526c8065db@suse.com>
 <6472eb42-157a-4d6e-b5bb-daa74fbbd97b@bugseng.com>
 <a9f85f2b-3eae-4544-88dd-6984011f0ef9@suse.com>
 <3e4bb597-3624-418e-93d0-b95042fd27a7@bugseng.com>
 <alpine.DEB.2.22.394.2403141559270.853156@ubuntu-linux-20-04-desktop>
 <077c0373-6eec-4403-b31e-574c8e8ae067@suse.com>
 <alpine.DEB.2.22.394.2403151738160.853156@ubuntu-linux-20-04-desktop>
 <0513e505-5444-4a9f-9a77-ec9f359ddf27@suse.com>
 <alpine.DEB.2.22.394.2403181732010.853156@ubuntu-linux-20-04-desktop>
 <b675da13-e444-472c-997b-3db45a10e797@suse.com>
Message-ID: <25a6a2102986512c9f4346e9fee47661@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-03-19 08:45, Jan Beulich wrote:
> On 19.03.2024 04:34, Stefano Stabellini wrote:
>> On Mon, 18 Mar 2024, Jan Beulich wrote:
>>> On 16.03.2024 01:43, Stefano Stabellini wrote:
>>>> On Fri, 15 Mar 2024, Jan Beulich wrote:
>>>>> On 14.03.2024 23:59, Stefano Stabellini wrote:
>>>>>> On Mon, 11 Mar 2024, Simone Ballarin wrote:
>>>>>>> On 11/03/24 14:56, Jan Beulich wrote:
>>>>>>>> On 11.03.2024 13:00, Simone Ballarin wrote:
>>>>>>>>> On 11/03/24 11:08, Jan Beulich wrote:
>>>>>>>>>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>>>>>>>>>> --- a/xen/arch/arm/include/asm/hypercall.h
>>>>>>>>>>> +++ b/xen/arch/arm/include/asm/hypercall.h
>>>>>>>>>>> @@ -1,3 +1,4 @@
>>>>>>>>>>> +/* SAF-5-safe direct inclusion guard before */
>>>>>>>>>>>    #ifndef __XEN_HYPERCALL_H__
>>>>>>>>>>>    #error "asm/hypercall.h should not be included directly - 
>>>>>>>>>>> include
>>>>>>>>>>> xen/hypercall.h instead"
>>>>>>>>>>>    #endif
>>>>>>>>>>> --- a/xen/arch/x86/include/asm/hypercall.h
>>>>>>>>>>> +++ b/xen/arch/x86/include/asm/hypercall.h
>>>>>>>>>>> @@ -2,6 +2,7 @@
>>>>>>>>>>>     * asm-x86/hypercall.h
>>>>>>>>>>>     */
>>>>>>>>>>>    +/* SAF-5-safe direct inclusion guard before */
>>>>>>>>>>>    #ifndef __XEN_HYPERCALL_H__
>>>>>>>>>>>    #error "asm/hypercall.h should not be included directly - 
>>>>>>>>>>> include
>>>>>>>>>>> xen/hypercall.h instead"
>>>>>>>>>>>    #endif
>>>>>>>>>> 
>>>>>>>>>> Iirc it was said that this way checking for correct guards is 
>>>>>>>>>> suppressed
>>>>>>>>>> altogether in Eclair, which is not what we want. Can you 
>>>>>>>>>> clarify this,
>>>>>>>>>> please?
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> My first change was moving this check inside the guard.
>>>>>>>>> You commented my patch saying that this would be an error 
>>>>>>>>> because someone
>>>>>>>>> can
>>>>>>>>> include it directly if it has already been included indirectly.
>>>>>>>>> I replied telling that this was the case also before the 
>>>>>>>>> change.
>>>>>>>>> You agreed with me, and we decided that the correct thing would 
>>>>>>>>> be fixing
>>>>>>>>> the
>>>>>>>>> check and not apply my temporary change to address the finding.
>>>>>>>>> 
>>>>>>>>> Considering that the code should be amended, a SAF deviation 
>>>>>>>>> seems to me
>>>>>>>>> the most appropriate way for suppressing these findings.
>>>>>>>> 
>>>>>>>> Since I don't feel your reply addresses my question, asking 
>>>>>>>> differently:
>>>>>>>> With
>>>>>>>> your change in place, will failure to have proper guards (later) 
>>>>>>>> in these
>>>>>>>> headers still be reported by Eclair?
>>>>>>> 
>>>>>>> No, if you put something between the check and the guard,
>>>>>>> no violation will be reported.
>>>>>> 
>>>>>> From this email exchange I cannot understand if Jan is OK with this 
>>>>>> patch or
>>>>>> not.
>>>>> 
>>>>> Whether I'm okay(ish) with the patch here depends on our position 
>>>>> towards
>>>>> the lost checking in Eclair mentioned above. To me it still looks 
>>>>> relevant
>>>>> that checking for a guard occurs, even if that isn't first in a 
>>>>> file for
>>>>> some specific reason.
>>>> 
>>>> More checking is better than less checking, but if we cannot find a
>>>> simple and actionable way forward on this violation, deviating it is
>>>> still a big improvement because it allows us to enable the ECLAIR 
>>>> Dir
>>>> 4.10 checks in xen.git overall (which again goes back to more 
>>>> checking
>>>> is better than less checking).
>>> 
>>> You have a point here. I think though that at the very least the lost
>>> checking opportunity wants calling out quite explicitly.
>> 
>> All right, then maybe this patch can go in with a clarification in the
>> commit message?
>> 
>> Something like:
>> 
>> Note that with SAF-5-safe in place, failures to have proper guards 
>> later
>> in the header files will not be reported
> 
> That would be okay with me.
> 

Coming back to this thread. Yes, I'll update the message to reflect this 
change.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 19:32:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 19:32:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748087.1155655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBtY-0002ax-AK; Tue, 25 Jun 2024 19:32:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748087.1155655; Tue, 25 Jun 2024 19:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMBtY-0002aq-7b; Tue, 25 Jun 2024 19:32:00 +0000
Received: by outflank-mailman (input) for mailman id 748087;
 Tue, 25 Jun 2024 19:31:58 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9u5k=N3=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMBtW-0002ak-Ts
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 19:31:58 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 958fb6aa-3329-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 21:31:56 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 783584EE0738;
 Tue, 25 Jun 2024 21:31:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 958fb6aa-3329-11ef-b4bb-af5377834399
MIME-Version: 1.0
Date: Tue, 25 Jun 2024 21:31:56 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
In-Reply-To: <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
Message-ID: <ef623bad297d016438b35bedc80f091d@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-03-12 09:16, Jan Beulich wrote:
> On 11.03.2024 09:59, Simone Ballarin wrote:
>> --- a/xen/arch/x86/Makefile
>> +++ b/xen/arch/x86/Makefile
>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i 
>> $(src)/Makefile
>>  	$(call filechk,asm-macros.h)
>> 
>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
> 
> This wants to use :=, I think - there's no reason to invoke the shell 
> ...

I agree on this.

> 
>>  define filechk_asm-macros.h
>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>      echo '#if 0'; \
>>      echo '.if 0'; \
>>      echo '#endif'; \
>> -    echo '#ifndef __ASM_MACROS_H__'; \
>> -    echo '#define __ASM_MACROS_H__'; \
>>      echo 'asm ( ".include \"$@\"" );'; \
>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>      echo '#if 0'; \
>>      echo '.endif'; \
>>      cat $<; \
>> -    echo '#endif'
>> +    echo '#endif'; \
>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>  endef
> 
> ... three times while expanding this macro. Alternatively (to avoid
> an unnecessary shell invocation when this macro is never expanded at
> all) a shell variable inside the "define" above would want introducing.
> Whether this 2nd approach is better depends on whether we anticipate
> further uses of ARCHDIR.

However, I'm not entirely sure what this latter proposal means exactly.
My proposal is the following:

ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)

defined in a suitably generic place (such as Kbuild.include or maybe
xen/Makefile), as you suggested in subsequent patches that reuse this
pattern.
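
Concretely (a sketch only -- the final location, Kbuild.include versus
xen/Makefile, is still being discussed), the definition would look like:

```make
# ':=' (simple expansion) runs the shell once, when the makefile is
# parsed, rather than on every expansion of $(ARCHDIR) as '=' would.
ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
```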

> 
>> --- a/xen/arch/x86/cpu/cpu.h
>> +++ b/xen/arch/x86/cpu/cpu.h
>> @@ -1,3 +1,6 @@
>> +#ifndef XEN_ARCH_X86_CPU_CPU_H
>> +#define XEN_ARCH_X86_CPU_CPU_H
>> +
>>  /* attempt to consolidate cpu attributes */
>>  struct cpu_dev {
>>  	void		(*c_early_init)(struct cpuinfo_x86 *c);
>> @@ -24,3 +27,5 @@ void amd_init_lfence(struct cpuinfo_x86 *c);
>>  void amd_init_ssbd(const struct cpuinfo_x86 *c);
>>  void amd_init_spectral_chicken(void);
>>  void detect_zen2_null_seg_behaviour(void);
>> +
>> +#endif /* XEN_ARCH_X86_CPU_CPU_H */
> 
> Leaving aside the earlier voiced request to get rid of the XEN_ 
> prefixes
> here, ...
> 
>> --- a/xen/arch/x86/x86_64/mmconfig.h
>> +++ b/xen/arch/x86/x86_64/mmconfig.h
>> @@ -5,6 +5,9 @@
>>   * Author: Allen Kay <allen.m.kay@intel.com> - adapted from linux
>>   */
>> 
>> +#ifndef XEN_ARCH_X86_X86_64_MMCONFIG_H
>> +#define XEN_ARCH_X86_X86_64_MMCONFIG_H
>> +
>>  #define PCI_DEVICE_ID_INTEL_E7520_MCH    0x3590
>>  #define PCI_DEVICE_ID_INTEL_82945G_HB    0x2770
>> 
>> @@ -72,3 +75,5 @@ int pci_mmcfg_reserved(uint64_t address, unsigned 
>> int segment,
>>  int pci_mmcfg_arch_init(void);
>>  int pci_mmcfg_arch_enable(unsigned int idx);
>>  void pci_mmcfg_arch_disable(unsigned int idx);
>> +
>> +#endif /* XEN_ARCH_X86_X86_64_MMCONFIG_H */
> 
> ... in a case like this and maybe even ...
> 
>> --- a/xen/arch/x86/x86_emulate/private.h
>> +++ b/xen/arch/x86/x86_emulate/private.h
>> @@ -6,6 +6,9 @@
>>   * Copyright (c) 2005-2007 XenSource Inc.
>>   */
>> 
>> +#ifndef XEN_ARCH_X86_X86_EMULATE_PRIVATE_H
>> +#define XEN_ARCH_X86_X86_EMULATE_PRIVATE_H
>> +
>>  #ifdef __XEN__
>> 
>>  # include <xen/bug.h>
>> @@ -836,3 +839,5 @@ static inline int read_ulong(enum x86_segment seg,
>>      *val = 0;
>>      return ops->read(seg, offset, val, bytes, ctxt);
>>  }
>> +
>> +#endif /* XEN_ARCH_X86_X86_EMULATE_PRIVATE_H */
> 
> ... this I wonder whether they are too strictly sticking to the base
> scheme (or whether the base scheme itself isn't flexible enough): I'm
> not overly happy with the "_X86_X86_" in there. Especially in the
> former case, where it's the sub-arch path, like for arm/arm<NN> I'd
> like to see that folded to just "_X86_64_" here as well.
> 

I do agree we should make an exception here: e.g. 
XEN_X86_64_EMULATE_PRIVATE_H

I'm ambivalent about the XEN_ prefix: I can't immediately see an issue
with dropping it, but on the other hand several headers already use it
(either it or the __XEN prefix) as far as I can tell (e.g.
x86/cpu/cpu.h), so dropping it from the naming convention would imply a
fair amount of additional churn to get a uniform naming scheme
throughout the xen/ directory. I'll leave the decision to the
maintainers.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 21:03:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 21:03:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748100.1155672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMDJP-0003BK-Md; Tue, 25 Jun 2024 21:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748100.1155672; Tue, 25 Jun 2024 21:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMDJP-0003BD-K4; Tue, 25 Jun 2024 21:02:47 +0000
Received: by outflank-mailman (input) for mailman id 748100;
 Tue, 25 Jun 2024 21:02:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMDJO-0003B7-3F
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 21:02:46 +0000
Received: from mail-wm1-x32f.google.com (mail-wm1-x32f.google.com
 [2a00:1450:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 443aa275-3336-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 23:02:43 +0200 (CEST)
Received: by mail-wm1-x32f.google.com with SMTP id
 5b1f17b1804b1-424a2dabfefso11616265e9.3
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 14:02:43 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-36638d9b8a8sm13868248f8f.50.2024.06.25.14.02.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 14:02:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 443aa275-3336-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719349363; x=1719954163; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=IM2ODf7O1MrqUMtcbzyQsPK7i2h78DBHiijlWHd+KcY=;
        b=AXbiTe4Lt815qSGiI33b2tYcSDB1406CkXMC1ZfeNWd8ACcZXpn1xYZk+lsdOF5gNN
         mgq4p+qqL6iFR419HmehLx45oW4l3o8OlxTNpKIBXZL3gptV6z1mR56iVf6P8a6PZI18
         XZzFSUb5fLq9ppVVScLp51s3wiNn9ri0+YSO8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719349363; x=1719954163;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=IM2ODf7O1MrqUMtcbzyQsPK7i2h78DBHiijlWHd+KcY=;
        b=CMpIlMGUcaK3+35odb6Ukg4/E6yZJyFnBQ+O6bCQ5Ua+Mi834PTOEhqvXdl3kPdkee
         S/XcSSSesSsrfzLEsgy8HEkEq+TZ/EV1m5eZ6tenKPe9o365a81y4ZUBWQ8bZONxRW2K
         jOaafnyScOmnS3K9Fv5FqMHFtTm140TkadAvp+0y9Et8M7F38xjwcsfGcEylGKCdMf43
         1b1P9oMp3lZDAyYqjQNOTikMNJJihjEfkqDvQerKhp3Qt1dxsLZeVfXDYEfQz4EbbmsM
         o8gjgd+UnbYBCU0msEiaodR+jkSr5/wLx55mBiWg5P6Hk5NAecpkHybVGlkK/NEChI12
         OZgA==
X-Forwarded-Encrypted: i=1; AJvYcCWZmSx5871NxhSQTN1uXjif96J6d+jcifScImoNh74Yu1kB93Tk289kCVQcECBz4z3bezQ2mGqXfDWxUj30rS51FNxYr5YZgpU3qGwjap8=
X-Gm-Message-State: AOJu0YwctYSuMOcbWU+Ca7jNUL7B8DDN1VRdIS+KaPa8Xq0T7FYk0yFH
	I0x3e+Mv1DawCw8sfqtiOeDiKwuIW6zACISxgE442EcHsvqpzUtxw54jAMKOI+xgXtjNbYrx0gt
	8Ebqgxg==
X-Google-Smtp-Source: AGHT+IFMPGbEAZOqpW1QdybOfZpsVhnoEcuo3LlRJKRJVQUOd9p7DFx8VgYEj6lMOsoP2ZqUlxc0GA==
X-Received: by 2002:a05:6000:1847:b0:366:eba0:8d8c with SMTP id ffacd0b85a97d-366eba08d93mr8789216f8f.54.1719349363330;
        Tue, 25 Jun 2024 14:02:43 -0700 (PDT)
Message-ID: <7ffabe8b-7993-4cc5-97fe-dd1cbd35798e@citrix.com>
Date: Tue, 25 Jun 2024 22:02:42 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH] tools/misc: xen-hvmcrash: Inject #DF instead of
 overwriting RIP
To: Matthew Barnes <matthew.barnes@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Anthony PERARD <anthony@xenproject.org>
References: <27f4397093d92b53f89d625d682bd4b7145b65d8.1717426439.git.matthew.barnes@cloud.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <27f4397093d92b53f89d625d682bd4b7145b65d8.1717426439.git.matthew.barnes@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 03/06/2024 3:59 pm, Matthew Barnes wrote:
> xen-hvmcrash would previously save records, overwrite the instruction
> pointer with a bogus value, and then restore them to crash a domain
> just enough to cause the guest OS to memdump.
>
> This approach is found to be unreliable when tested on a guest running
> Windows 10 x64, with some executions doing nothing at all.
>
> Another approach would be to trigger NMIs. This approach is found to be
> unreliable when tested on Linux (Ubuntu 22.04), as Linux will ignore
> NMIs if it is not configured to handle such.
>
> Injecting a double fault abort to all vCPUs is found to be more
> reliable at crashing and invoking memdumps from Windows and Linux
> domains.

Why every CPU?

We never did that before, and I don't see any reason it ought to be
necessary now either.



> diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
> index 1d058fa40a47..8ef1beb388f8 100644
> --- a/tools/misc/xen-hvmcrash.c
> +++ b/tools/misc/xen-hvmcrash.c
> @@ -38,22 +38,21 @@
>  #include <sys/stat.h>
>  #include <arpa/inet.h>
>  
> +#define XC_WANT_COMPAT_DEVICEMODEL_API

Please don't introduce this.  We want to purge it from the codebase, not
propagate it.

You want to open and use a libxendevicemodel handle.  (Sadly you also
need a xenctrl handle too, until we sort out the userspace ABIs).

>  #include <xenctrl.h>
>  #include <xen/xen.h>
>  #include <xen/domctl.h>
>  #include <xen/hvm/save.h>
>  
> +#define X86_ABORT_DF 8

#include <xen/asm/x86-defns.h>

and use X86_EXC_DF.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 21:10:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 21:10:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748108.1155681 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMDQg-0005TV-Gt; Tue, 25 Jun 2024 21:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748108.1155681; Tue, 25 Jun 2024 21:10:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMDQg-0005TO-EF; Tue, 25 Jun 2024 21:10:18 +0000
Received: by outflank-mailman (input) for mailman id 748108;
 Tue, 25 Jun 2024 21:10:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vmrN=N3=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMDQg-0005TI-00
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 21:10:18 +0000
Received: from mail-wm1-x32b.google.com (mail-wm1-x32b.google.com
 [2a00:1450:4864:20::32b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 51a29820-3337-11ef-b4bb-af5377834399;
 Tue, 25 Jun 2024 23:10:15 +0200 (CEST)
Received: by mail-wm1-x32b.google.com with SMTP id
 5b1f17b1804b1-424aa70fbc4so4096125e9.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 14:10:15 -0700 (PDT)
Received: from [192.168.1.10] (host-92-26-98-202.as13285.net. [92.26.98.202])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-366388c3d57sm13918508f8f.35.2024.06.25.14.10.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 14:10:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 51a29820-3337-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719349815; x=1719954615; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=GHZDGH06nzs3al5ESNuCzKKi6Dx4lWP9iiG1uc17bCE=;
        b=GTjEA1I1ksp6RXTS2vahvILxLhgOSHiJRtAS817+9dUenqweKfJFGrnvqryQRf+jx+
         Qk+1F3I0NlMpYAY7eOGUOWednX9wcwGHDijegz5kP/eojChENrDjnb9mG29dtfK6WyPk
         I3DOB3K40Hdbyu2EukG7wIsdleE+cDfYgGpyA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719349815; x=1719954615;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GHZDGH06nzs3al5ESNuCzKKi6Dx4lWP9iiG1uc17bCE=;
        b=gLon3mjIjv3+IwPLDEJ2+qAnVJdIovZ9UQNcIOCWWkeXE2ICaYeZd34bO/1K0UpOOm
         DcA5Lsg5jab9jOd1HMY7TRc4TvULGCLFeSO9kvbYrFKf3rYUMObsh+Yzfk0ZZzIy57Bf
         OnUSjl2D9OL6m1H2xt6bnwsh7Gzni9e2yXuCXy3CJe2eyMac9OjQBbW4VWaTD+8Nt268
         yNdtOyf5xCPW1+4skkKF9rScgRclAB04qK8iW/9guOhDzrZk2A3TNbNDd29KXUsBqMO0
         EswzcBr1lzv3QFIrnaV7DRKqAAHK5JYh9xqSYuVncBwd8MveuV8ZdzmkxbJxN4lORBYp
         pzPA==
X-Forwarded-Encrypted: i=1; AJvYcCUEW7pE3snpsw3xuYH1WpsfalTfEWGUhZ5geTF7lhDufNo60bKqS/o3eHX27Fv/4LW7Ud67f/ulxyOVFX+2le1SZsjNPKN/lRgVhvJcrC0=
X-Gm-Message-State: AOJu0YwvllVXKaA7R8R6cDf5EmepuEgtkhDUb9bUXWGfAC6TBmjhww7F
	Mx2MTHkEsTzuKDYi4rsNBjrFIC8ODPMYRFeSvYP80v16OJ4ps8y6tN+3fYZR+ZY=
X-Google-Smtp-Source: AGHT+IGFCi+pHnpJTWPcEFQSxP7Upg8qRlYSvfj52vANw9bld0sF1AAZ3LaCU9Y+vsni0TxpKNZrNQ==
X-Received: by 2002:a05:600c:4f07:b0:421:f346:6b06 with SMTP id 5b1f17b1804b1-4248cc5930emr64920405e9.28.1719349815389;
        Tue, 25 Jun 2024 14:10:15 -0700 (PDT)
Message-ID: <b638c5f8-20e9-43bb-a47b-e24cb1b1b821@citrix.com>
Date: Tue, 25 Jun 2024 22:10:14 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
To: Jan Beulich <jbeulich@suse.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
 <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25/06/2024 3:49 pm, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>> --- a/xen/arch/riscv/include/asm/cmpxchg.h
>> +++ b/xen/arch/riscv/include/asm/cmpxchg.h
>> @@ -18,6 +18,20 @@
>>          : "r" (new) \
>>          : "memory" );
>>  
>> +/*
>> + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is too
>> + * old, form
> Same question: Why's 2.37 suddenly of interest?

You deleted the commit message which explains why:

> RISC-V does a conditional toolchain test for the Zbb extension
> (xen/arch/riscv/rules.mk), but unconditionally uses the ANDN
> instruction in emulate_xchg_1_2().

Either Zbb needs to be mandatory (both in the toolchain and the board
running Xen), or emulate_xchg_1_2() needs to not use the ANDN instruction.

I opted for the latter.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 22:15:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 22:15:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748120.1155698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMERC-0007PD-4Y; Tue, 25 Jun 2024 22:14:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748120.1155698; Tue, 25 Jun 2024 22:14:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMERC-0007P6-1M; Tue, 25 Jun 2024 22:14:54 +0000
Received: by outflank-mailman (input) for mailman id 748120;
 Tue, 25 Jun 2024 22:14:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMERA-0007Ow-VA; Tue, 25 Jun 2024 22:14:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMERA-00079T-NA; Tue, 25 Jun 2024 22:14:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMERA-0003Ct-9e; Tue, 25 Jun 2024 22:14:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMERA-0004yY-9C; Tue, 25 Jun 2024 22:14:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+kxSXIFbq69d9ohWoBKv7bs1cB6FIM8Sic6qfUNBzio=; b=EYmkofPc6vwZLCQMzDb39Z+iB7
	/d5NlSYPf+j7Y3VXmuZ1+ioeJN6vdG/Okc/wU5wQrggZ4Gk+xqa9HAXeDRobbB5veJaXrEvBQBj0O
	XbL0+ZIbTjeFJgjUUTVh/EwuVu5OmrwpxX9lHIFNe9Bfzz9aTPlJOxop18L16Y6mQhvc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186480-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186480: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=c56f1ef577831ec70645ca5874d54f2e698c6761
X-Osstest-Versions-That:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 22:14:52 +0000

flight 186480 xen-unstable real [real]
flight 186496 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186480/
http://logs.test-lab.xenproject.org/osstest/logs/186496/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 186465
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 186465

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186465
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186465
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  c56f1ef577831ec70645ca5874d54f2e698c6761
baseline version:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Last test of basis   186465  2024-06-24 01:51:55 Z    1 days
Failing since        186471  2024-06-24 19:07:13 Z    1 days    2 attempts
Testing same since   186480  2024-06-25 06:50:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@vates.tech>
  Jan Beulich <jbeulich@suse.com>
  Michal Orzel <michal.orzel@amd.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit c56f1ef577831ec70645ca5874d54f2e698c6761
Author: Michal Orzel <michal.orzel@amd.com>
Date:   Fri Jun 21 11:22:05 2024 +0200

    xen/arm: static-shmem: request host address to be specified for 1:1 domains
    
    As a follow-up to commit cb1ddafdc573 ("xen/arm/static-shmem: Static-shmem
    should be direct-mapped for direct-mapped domains") add a check to
    request that both host and guest physical address must be supplied for
    direct mapped domains. Otherwise return an error to prevent unwanted
    behavior.
    
    Signed-off-by: Michal Orzel <michal.orzel@amd.com>
    Fixes: 988f1c7e1f40 ("xen/arm: static-shmem: fix "gbase/pbase used  uninitialized" build failure")
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 908407bf2b29a38d6879fc8c57dad14473ef67f8
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Fri Jun 21 20:29:07 2024 +0100

    xen/riscv: Drop legacy __ro_after_init definition
    
    Hide the legacy __ro_after_init definition in xen/cache.h for RISC-V, to avoid
    its use creeping in.  Only mm.c needs adjusting as a consequence.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 8c3bb4d8ce3f9e69ee173b8787a8cbbf1a852d06
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 20 19:58:08 2024 +0000

    xen/gnttab: Perform compat/native gnttab_query_size check
    
    This subop appears to have been missed from the compat checks.
    
    Fixes: 5ce8fafa947c ("Dynamic grant-table sizing")
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit ebed411e7afa240fea803ac97a0ced73fffef8dc
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 20 19:34:06 2024 +0000

    xen/xlat: Sort structs per file
    
    ... with a C locale to avoid ambiguities over _ and - as separators.
    
    Also adjust arch-x86/xen.h which is out-of-order relative to the other
    arch-x86/ files.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 90c1520d4eff8e6480035f523041fe62c5065833
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 20 19:27:33 2024 +0000

    xen/xlat: Sort out whitespace
    
     * Fix tabs/spaces mismatch for certain rows
     * Insert lines between header files to improve legibility
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 6a17e1199332c24b41bacccdc91dbeaf22653588
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 22 12:17:30 2024 +0200

    x86/shadow: Don't leave trace record field uninitialized
    
    The emulation_count field is set only conditionally right now. Convert
    all field setting to an initializer, thus guaranteeing that field to be
    set to 0 (default initialized) when GUEST_PAGING_LEVELS != 3.
    
    Rework trace_shadow_emulate() to be consistent with the other trace helpers.
    
    Coverity-ID: 1598430
    Fixes: 9a86ac1aa3d2 ("xentrace 5/7: Additional tracing for the shadow code")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
    Release-acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
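
The zero-initialization guarantee relied on in the commit above can be
illustrated with a minimal sketch (the struct layout and names here are
made up for illustration, not Xen's actual trace record):

```c
#include <stdint.h>

/* Hypothetical trace record layout.  With a C designated initializer,
 * every member NOT named in the initializer list is zero-initialized,
 * so a conditionally-set field such as emulation_count can never be
 * left holding stale stack contents. */
struct sh_trace_rec {
    uint64_t gl1e;
    uint64_t va;
    uint32_t flags;
    uint32_t emulation_count;
};

struct sh_trace_rec make_rec(uint64_t gl1e, uint64_t va, uint32_t flags)
{
    /* emulation_count is deliberately omitted: the initializer
     * guarantees it is 0, unlike a plain uninitialized local that is
     * only assigned under some #if/conditional path. */
    struct sh_trace_rec r = {
        .gl1e  = gl1e,
        .va    = va,
        .flags = flags,
    };
    return r;
}
```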

commit 8765783434e903fa8be628de25f9941b0204502d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 14:05:13 2024 +0100

    x86/shadow: Rework trace_shadow_emulate_other() as sh_trace_gfn_va()
    
    sh_trace_gfn_va() is very similar to sh_trace_gl1e_va(), and a rather shorter
    name than trace_shadow_emulate_other().
    
    It's only referenced in CONFIG_HVM=y builds, so give it a __maybe_unused to
    placate randconfig builds.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 578066d82b2b96e949ff46e6c142a33231b1ae2d
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 13:58:22 2024 +0100

    x86/shadow: Introduce sh_trace_gl1e_va()
    
    trace_shadow_fixup() and trace_not_shadow_fault() both write out identical
    trace records.  Reimplement them in terms of a common sh_trace_gl1e_va().
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

commit 2e9f8a734e3dd2b6abccea325dd5e854a3670dec
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 22 13:51:43 2024 +0100

    x86/shadow: Rework trace_shadow_gen() into sh_trace_va()
    
    The ((GUEST_PAGING_LEVELS - 2) << 8) expression in the event field is common
    to all shadow trace events, so introduce sh_trace() as a very thin wrapper
    around trace().
    
    Then, rename trace_shadow_gen() to sh_trace_va() to better describe what it is
    doing, and to be more consistent with later cleanup.
    
    No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>
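
The "thin wrapper folding in a common event-field term" pattern from the
commit above can be sketched as follows (GUEST_PAGING_LEVELS is pinned
to an arbitrary value and sh_event() is a made-up name; this is not the
actual Xen code):

```c
#include <stdint.h>

#define GUEST_PAGING_LEVELS 4  /* assumption for this sketch */

/* Every shadow trace event carries the same
 * ((GUEST_PAGING_LEVELS - 2) << 8) term in its event word, so a thin
 * helper computes it once instead of repeating the expression at each
 * call site. */
static inline uint32_t sh_event(uint32_t event)
{
    return event | ((GUEST_PAGING_LEVELS - 2) << 8);
}
```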

commit ba52b3b624e4a1a976908552364eba924ca45430
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 7 12:05:58 2024 +0100

    tools/xl: Open xldevd.log with O_CLOEXEC
    
    `xl devd` has been observed leaking /var/log/xldevd.log into children.
    
    Note this is specifically safe; dup2() leaves O_CLOEXEC disabled on newfd, so
    after setting up stdout/stderr, it's only the logfile fd which will close on
    exec().
    
    Link: https://github.com/QubesOS/qubes-issues/issues/8292
    Reported-by: Demi Marie Obenour <demi@invisiblethingslab.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Demi Marie Obenour <demi@invisiblethingslab.com>
    Acked-by: Anthony PERARD <anthony.perard@vates.tech>
    Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 22:44:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 22:44:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748133.1155715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMEtD-0003tz-E7; Tue, 25 Jun 2024 22:43:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748133.1155715; Tue, 25 Jun 2024 22:43:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMEtD-0003ts-BS; Tue, 25 Jun 2024 22:43:51 +0000
Received: by outflank-mailman (input) for mailman id 748133;
 Tue, 25 Jun 2024 22:43:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wboy=N3=linux-foundation.org=akpm@srs-se1.protection.inumbo.net>)
 id 1sMEtC-0003tj-1R
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 22:43:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 620abea4-3344-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 00:43:48 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 4E80861780;
 Tue, 25 Jun 2024 22:43:46 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3666BC32781;
 Tue, 25 Jun 2024 22:43:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 620abea4-3344-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linux-foundation.org;
	s=korg; t=1719355426;
	bh=TXSHlymv2cQdW6Cq8S2aO79qhvv/iUOqt59Pv0/5CXU=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=LalejVUMAXCZLbTrD760WCKxVsMt+y/H6z57ZUZWJ8bcVEpj19D2HYs0wTGwNfn50
	 lWMnhSd6uPZeT602t95YLigRh7bMgkNVpbcXDY0b13d+wNFpIToVB/OIU9+bzKAotf
	 ohw/Pr2e6FkpvNZtWPIghnTGjZ2Dym6GlV1UmDWg=
Date: Tue, 25 Jun 2024 15:43:44 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: David Hildenbrand <david@redhat.com>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com, Mike Rapoport
 <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>, "K. Y. Srinivasan"
 <kys@microsoft.com>, Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu
 <wei.liu@kernel.org>, Dexuan Cui <decui@microsoft.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Jason Wang <jasowang@redhat.com>,
 Xuan Zhuo <xuanzhuo@linux.alibaba.com>, Eugenio =?ISO-8859-1?Q?P=E9rez?=
 <eperezma@redhat.com>, Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Oleksandr Tyshchenko
 <oleksandr_tyshchenko@epam.com>, Alexander Potapenko <glider@google.com>,
 Marco Elver <elver@google.com>, Dmitry Vyukov <dvyukov@google.com>
Subject: Re: [PATCH v1 0/3] mm/memory_hotplug: use PageOffline() instead of
 PageReserved() for !ZONE_DEVICE
Message-Id: <20240625154344.9f3db1ddfe2cb9cdd5583783@linux-foundation.org>
In-Reply-To: <20240607090939.89524-1-david@redhat.com>
References: <20240607090939.89524-1-david@redhat.com>
X-Mailer: Sylpheed 3.7.0 (GTK+ 2.24.33; x86_64-pc-linux-gnu)
Mime-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

AFAICT we're in a decent state to move this series into mm-stable.  I've
tagged the following issues:

https://lkml.kernel.org/r/80532f73e52e2c21fdc9aac7bce24aefb76d11b0.camel@linux.intel.com
https://lkml.kernel.org/r/30b5d493-b7c2-4e63-86c1-dcc73d21dc15@redhat.com

Have these been addressed and are we ready to send this series into the world?

Thanks.


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 22:44:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 22:44:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748137.1155725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMEu7-0004P6-MC; Tue, 25 Jun 2024 22:44:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748137.1155725; Tue, 25 Jun 2024 22:44:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMEu7-0004Or-JV; Tue, 25 Jun 2024 22:44:47 +0000
Received: by outflank-mailman (input) for mailman id 748137;
 Tue, 25 Jun 2024 22:44:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMEu5-0004OV-Pc; Tue, 25 Jun 2024 22:44:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMEu5-0007cH-Jh; Tue, 25 Jun 2024 22:44:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMEu5-0003wM-AL; Tue, 25 Jun 2024 22:44:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMEu5-0006D3-9u; Tue, 25 Jun 2024 22:44:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=L5CrrxkGHKLD9GD0XOU91+3jWyrDlTFzNzZkHFw6YCg=; b=UXyyN1yh17LU054UtavDUPF9kB
	T8Sh5lNCQCUoinu5TC4C7a/cLhP3gPJaebAymFEHmMWgN6fNxbiD+v+dJDHVunHjL5jN8iJrISYmW
	7pCCoYuGHwVgeGIPrzZBPMQSOlyaIjEgyQ6Bu0LXK2yU62UhgkVhIgEfjPNc+DnvLv4o=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186485-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186485: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=55027e689933ba2e64f3d245fb1ff185b3e7fc81
X-Osstest-Versions-That:
    linux=7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 22:44:45 +0000

flight 186485 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186485/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186462
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186462
 test-armhf-armhf-xl-multivcpu  8 xen-boot                     fail like 186462
 test-armhf-armhf-libvirt-vhd  8 xen-boot                     fail  like 186462
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186462
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186462
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                55027e689933ba2e64f3d245fb1ff185b3e7fc81
baseline version:
 linux                7c16f0a4ed1ce7b0dd1c01fc012e5bde89fe7748

Last test of basis   186462  2024-06-23 16:10:07 Z    2 days
Failing since        186464  2024-06-24 01:42:08 Z    1 days    5 attempts
Testing same since   186474  2024-06-25 02:02:05 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Stein <alexander.stein@ew.tq-group.com>
  Claudiu Beznea <claudiu.beznea.uj@bp.renesas.com>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Hagar Hemdan <hagarhem@amazon.com>
  Huang-Huang Bao <i@eh5.me>
  Johan Hovold <johan+linaro@kernel.org>
  John Keeping <jkeeping@inmusicbrands.com>
  Jonathan Denose <jdenose@google.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luke D. Jones <luke@ljones.dev>
  Stefan Wahren <wahrenst@gmx.net>
  Thomas Richard <thomas.richard@bootlin.com>
  Tobias Jakobi <tjakobi@math.uni-bielefeld.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   7c16f0a4ed1c..55027e689933  55027e689933ba2e64f3d245fb1ff185b3e7fc81 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Tue Jun 25 22:47:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 22:47:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748148.1155736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMEx8-0005TD-49; Tue, 25 Jun 2024 22:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748148.1155736; Tue, 25 Jun 2024 22:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMEx8-0005T6-1G; Tue, 25 Jun 2024 22:47:54 +0000
Received: by outflank-mailman (input) for mailman id 748148;
 Tue, 25 Jun 2024 22:47:52 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sMEx6-0005T0-SB
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 22:47:52 +0000
Received: from sender4-op-o15.zoho.com (sender4-op-o15.zoho.com
 [136.143.188.15]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f2e62b71-3344-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 00:47:51 +0200 (CEST)
Delivered-To: tamas@tklengyel.com
Received: by mx.zohomail.com with SMTPS id 171935566625863.208364233485895;
 Tue, 25 Jun 2024 15:47:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2e62b71-3344-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719355667; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=T4bcsKU05hfX3R6EXK5W72cvPEHJJGcA/pYTDCd2H0dOUwlm5jMWfwD9lOuSQymJSjO0YFfis3R2hlOqQAo6UM5mD3HulQFjJUvC9C1d4VmBmmIEgapt6CTy/wCkyTHtjuQIBVesA4/rAkgjFkzZ4dFj1vy5PR+5uugiKO0msGE=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719355667; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=fjBA7qEiORBtB5xbabiMCCsVJFWSho8KsNxAHHf2G4A=; 
	b=dAfX1lgdsROhXa/mJieP39MzS61EbUBI1ZOesqT+fUuGng8QsvoAwjqr/A2MxbsutKjPYsrMJ2viyJ6Z7dZhluiQ8K/WPhM3gw4y8Lx7Kfuis9K25nTpH8bJa1l2tLuuO5s886J4FzmfejFndxig42YXIfVYwAK6FdKJKyzdgVU=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719355667;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding:Reply-To;
	bh=fjBA7qEiORBtB5xbabiMCCsVJFWSho8KsNxAHHf2G4A=;
	b=MKM+FyOf4Bger0diWpWt2VSAhJDLBrvh+l3sINoTB5eZ7X+Q00ykS9/+rkYdhLe2
	pAZY2SNFtXdaSua5KKMf/+ykU7ABlaApL3oNS3ILSPiGK13PKFbC7fiEbl+H0GlKKSF
	ubpF/S9ReXKEiRqeqDX/0xNozr1/e96DKuzAHXFI=
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2 2/2] Add scripts/oss-fuzz/build.sh
Date: Tue, 25 Jun 2024 18:47:38 -0400
Message-Id: <d0974cc40ca68fe197ba7941edd934970d3a92cf.1719355322.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <d14436e64c650b388936a921837b984772a4fceb.1719355322.git.tamas@tklengyel.com>
References: <d14436e64c650b388936a921837b984772a4fceb.1719355322.git.tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the build integration script for oss-fuzz targets. Future fuzzing targets
can be added to this script and will be picked up automatically by oss-fuzz,
without having to open separate PRs on the oss-fuzz repo.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 scripts/oss-fuzz/build.sh | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
 create mode 100755 scripts/oss-fuzz/build.sh

diff --git a/scripts/oss-fuzz/build.sh b/scripts/oss-fuzz/build.sh
new file mode 100755
index 0000000000..2cfd72adf1
--- /dev/null
+++ b/scripts/oss-fuzz/build.sh
@@ -0,0 +1,23 @@
+#!/bin/bash -eu
+# SPDX-License-Identifier: Apache-2.0
+# Copyright 2024 Google LLC
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+################################################################################
+
+cd xen
+./configure --disable-stubdom --disable-pvshim --disable-docs --disable-xen
+make clang=y -C tools/include
+make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
+cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Tue Jun 25 22:47:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 Jun 2024 22:47:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748149.1155746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMExB-0005iL-Ac; Tue, 25 Jun 2024 22:47:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748149.1155746; Tue, 25 Jun 2024 22:47:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMExB-0005iE-7f; Tue, 25 Jun 2024 22:47:57 +0000
Received: by outflank-mailman (input) for mailman id 748149;
 Tue, 25 Jun 2024 22:47:56 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v2rz=N3=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sMExA-0004yN-JF
 for xen-devel@lists.xenproject.org; Tue, 25 Jun 2024 22:47:56 +0000
Received: from sender4-op-o15.zoho.com (sender4-op-o15.zoho.com
 [136.143.188.15]) by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f5182c98-3344-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 00:47:54 +0200 (CEST)
Delivered-To: tamas@tklengyel.com
Received: by mx.zohomail.com with SMTPS id 1719355664606766.5793368921854;
 Tue, 25 Jun 2024 15:47:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5182c98-3344-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; t=1719355667; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=PvDuhUDEeeXYf9z/d2613UB5JA43tWWErWouIeAzXSv5/94atUwO44i/n35SrQ5oDa0av1mz8aMiAhLyBEyRORSFiT7CFj9L5j5gcQt4X6gXNXRthijHqwm830wgb1hw2PzzImWKVh8clVwyy9OCv86LOO5DwpxhrE5PCwGbe8o=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719355667; h=Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:MIME-Version:Message-ID:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=HEbFI6t/UgSyN5s43KHHTKJdc0crQlrYCdNUwyv5erM=; 
	b=giWstThJaxPbwO2wsoJ15ajKNj64vSj5Nz5mFA3vL6XRRmwTWx1ylRP/AXYXXfGa+Agg5arQxAMlKPvSEJCGdabxcHkup2NYqTrYXtieWWUZndQBVw/uX09TNoZvDcmSNmNXON1lNKd5Eb/rGAImcEpxlRkqG2moaH8e9r5LpZA=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719355667;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=From:From:To:To:Cc:Cc:Subject:Subject:Date:Date:Message-Id:Message-Id:MIME-Version:Content-Transfer-Encoding:Reply-To;
	bh=HEbFI6t/UgSyN5s43KHHTKJdc0crQlrYCdNUwyv5erM=;
	b=aBQg6UNgAAyY1PFrljUnEs/iDGKul2vf9fW7vmu5TDywyzDMX8e9mpJi28Dh5wzK
	WIE4qAt0n5kpn6D/qIEpbydzDanZiVyNK8Ppcvg7xRe3oqdSgkhZoqoPbW1TvWd3NBp
	hTeRTQprKeJm3du7OIRBaaQRxWdiwK8kMDuEKWs8=
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony@xenproject.org>
Subject: [PATCH v2 1/2] Add libfuzzer target to fuzz/x86_instruction_emulator
Date: Tue, 25 Jun 2024 18:47:37 -0400
Message-Id: <d14436e64c650b388936a921837b984772a4fceb.1719355322.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This target enables integration into oss-fuzz. Change the return value for
invalid input to -1, as values other than 0/-1 are reserved by libfuzzer. Also
add the missing __wrap_vsnprintf wrapper, which is required for a successful
oss-fuzz build.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 tools/fuzz/x86_instruction_emulator/Makefile    | 11 +++++++++--
 tools/fuzz/x86_instruction_emulator/fuzz-emul.c |  6 ++----
 tools/tests/x86_emulator/wrappers.c             | 11 +++++++++++
 3 files changed, 22 insertions(+), 6 deletions(-)

diff --git a/tools/fuzz/x86_instruction_emulator/Makefile b/tools/fuzz/x86_instruction_emulator/Makefile
index 1e4c6b37f5..7b6655805f 100644
--- a/tools/fuzz/x86_instruction_emulator/Makefile
+++ b/tools/fuzz/x86_instruction_emulator/Makefile
@@ -3,7 +3,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 .PHONY: x86-insn-fuzz-all
 ifeq ($(CONFIG_X86_64),y)
-x86-insn-fuzz-all: x86-insn-fuzzer.a fuzz-emul.o afl
+x86-insn-fuzz-all: x86-insn-fuzzer.a fuzz-emul.o afl libfuzzer
 else
 x86-insn-fuzz-all:
 endif
@@ -58,6 +58,9 @@ afl-harness: afl-harness.o $(OBJS) cpuid.o wrappers.o
 afl-harness-cov: afl-harness-cov.o $(patsubst %.o,%-cov.o,$(OBJS)) cpuid.o wrappers.o
 	$(CC) $(CFLAGS) $(GCOV_FLAGS) $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
 
+libfuzzer-harness: $(OBJS) cpuid.o wrappers.o
+	$(CC) $(CFLAGS) $(LIB_FUZZING_ENGINE) -fsanitize=fuzzer $(addprefix -Wl$(comma)--wrap=,$(WRAPPED)) $^ -o $@
+
 # Common targets
 .PHONY: all
 all: x86-insn-fuzz-all
@@ -67,7 +70,8 @@ distclean: clean
 
 .PHONY: clean
 clean:
-	rm -f *.a *.o $(DEPS_RM) afl-harness afl-harness-cov *.gcda *.gcno *.gcov
+	rm -f *.a *.o $(DEPS_RM) *.gcda *.gcno *.gcov \
+        afl-harness afl-harness-cov libfuzzer-harness
 	rm -rf x86_emulate x86-emulate.c x86-emulate.h wrappers.c cpuid.c
 
 .PHONY: install
@@ -81,4 +85,7 @@ afl: afl-harness
 .PHONY: afl-cov
 afl-cov: afl-harness-cov
 
+.PHONY: libfuzzer
+libfuzzer: libfuzzer-harness
+
 -include $(DEPS_INCLUDE)
diff --git a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
index eeeb6931f4..2ba9ca9e0b 100644
--- a/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
+++ b/tools/fuzz/x86_instruction_emulator/fuzz-emul.c
@@ -906,14 +906,12 @@ int LLVMFuzzerTestOneInput(const uint8_t *data_p, size_t size)
 
     if ( size <= DATA_OFFSET )
     {
-        printf("Input too small\n");
-        return 1;
+        return -1;
     }
 
     if ( size > FUZZ_CORPUS_SIZE )
     {
-        printf("Input too large\n");
-        return 1;
+        return -1;
     }
 
     memcpy(&input, data_p, size);
diff --git a/tools/tests/x86_emulator/wrappers.c b/tools/tests/x86_emulator/wrappers.c
index 3829a6f416..8f3bd1656f 100644
--- a/tools/tests/x86_emulator/wrappers.c
+++ b/tools/tests/x86_emulator/wrappers.c
@@ -91,6 +91,17 @@ int __wrap_snprintf(char *buf, size_t n, const char *fmt, ...)
     return rc;
 }
 
+int __wrap_vsnprintf(char *buf, size_t n, const char *fmt, va_list varg)
+{
+    int rc;
+
+    emul_save_fpu_state();
+    rc = __real_vsnprintf(buf, n, fmt, varg);
+    emul_restore_fpu_state();
+
+    return rc;
+}
+
 char *__wrap_strstr(const char *s1, const char *s2)
 {
     char *s;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 00:00:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 00:00:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748169.1155761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMG4k-0000ao-75; Tue, 25 Jun 2024 23:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748169.1155761; Tue, 25 Jun 2024 23:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMG4k-0000ah-4R; Tue, 25 Jun 2024 23:59:50 +0000
Received: by outflank-mailman (input) for mailman id 748169;
 Tue, 25 Jun 2024 23:59:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMG4i-0000aX-Iq; Tue, 25 Jun 2024 23:59:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMG4i-0000Ps-Cg; Tue, 25 Jun 2024 23:59:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMG4h-0005jy-Pc; Tue, 25 Jun 2024 23:59:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMG4h-00064N-PC; Tue, 25 Jun 2024 23:59:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7AXUQjkQGh77OnMFFI0moOVui4xWC3ZjG3rFAA8kmaw=; b=0zaHOE1/X2QnIfIM7L6QpNHoJC
	lD1Irl44b4g703Hipu7HpaF0uYo9jHaePyCvPFmqOEZFGeLGORCqtdfUoYQcK88O/MLx0c/+j+UWA
	jefs79JzQ+zetz96VbixbMAGkY6n0hB78UJlsHfyjNlZb7db633cs/G2HjYo41Iu4Bjs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186498-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186498: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=84d8eb08e15e455826ef66a4b1f1f61758cb9aba
X-Osstest-Versions-That:
    ovmf=10b4bb8d6d0c515ed9663691aea3684be8f7b0fc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 Jun 2024 23:59:47 +0000

flight 186498 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186498/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 84d8eb08e15e455826ef66a4b1f1f61758cb9aba
baseline version:
 ovmf                 10b4bb8d6d0c515ed9663691aea3684be8f7b0fc

Last test of basis   186490  2024-06-25 15:41:13 Z    0 days
Testing same since   186498  2024-06-25 22:11:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sebastian Witt <sebastian.witt@siemens.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   10b4bb8d6d..84d8eb08e1  84d8eb08e15e455826ef66a4b1f1f61758cb9aba -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:00:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:00:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748194.1155778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMH11-0002wA-M7; Wed, 26 Jun 2024 01:00:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748194.1155778; Wed, 26 Jun 2024 01:00:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMH11-0002vN-I0; Wed, 26 Jun 2024 01:00:03 +0000
Received: by outflank-mailman (input) for mailman id 748194;
 Wed, 26 Jun 2024 01:00:02 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMH10-00026G-D7
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:00:02 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 695a748a-3357-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 03:00:00 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id B568F60AC6;
 Wed, 26 Jun 2024 00:59:58 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 47725C32781;
 Wed, 26 Jun 2024 00:59:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 695a748a-3357-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719363598;
	bh=N7/kGkqol1ywjifRFNz7zVHwLE979d0dgMOoiGCd9k4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mVmkdIjrMLDx11B/MauaRZoYijPQ8yQM+CVhKgNxEU5F9eJyq33/CW03wOBh2lEkC
	 wdrYT0Z9BsNjOk8nKgmSjDj6Lxelo01n6ymGCKLx33EkB0pPfnlDlYAz4KKQ2+cRBt
	 +AI25dXJNH6ueyhCMTz+P0+N+tjf0aGgZ95RsSWcHKhm7nID4CqIPGDa8H6eS8H3MC
	 Mlnx+GI6UvPrKOuHDEaJYTZdzrGaBfDXFDpOcTAyU9cTb/kvLDoIxrPGIZRmIMpqSL
	 0SLUqOMVzWO6oKA84pOFCzWfXKFkQ7Evo5niD8lNwtkaV4xN3NiPSiAdYeVs9KW85f
	 c6nvcB6YX6XQw==
Date: Tue, 25 Jun 2024 17:59:55 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, michal.orzel@amd.com, 
    xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [XEN PATCH v2 1/6][RESEND] automation/eclair: address violations
 of MISRA C Rule 20.7
In-Reply-To: <4aa05e0f26f050363d9ed0401855e1bb@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406251756010.3635@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com> <af4b0512eb52be99e37c9c670f98967ca15c68ac.1718378539.git.nicola.vetrini@bugseng.com> <alpine.DEB.2.22.394.2406201718140.2572888@ubuntu-linux-20-04-desktop>
 <4aa05e0f26f050363d9ed0401855e1bb@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 25 Jun 2024, Nicola Vetrini wrote:
> On 2024-06-21 02:18, Stefano Stabellini wrote:
> > On Mon, 16 Jun 2024, Nicola Vetrini wrote:
> > > MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> > > of macro parameters shall be enclosed in parentheses".
> > > 
> > > The helper macro bitmap_switch has parameters that cannot be parenthesized
> > > in order to comply with the rule, as that would break its functionality.
> > > Moreover, the risk of misuse due to developer confusion is deemed not
> > > substantial enough to warrant a more involved refactor, thus the macro
> > > is deviated for this rule.
> > > 
> > > No functional change.
> > > 
> > > Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> > 
> > If possible, I would prefer we used a SAF in-code comment deviation. If
> > that doesn't work for any reason this patch is fine and I'd ack it.
> > 
> 
> Would that be an improvement for safety in your opinion?

Not for safety, as they both achieve the same result, but for
maintainability. The deviations under deviations.ecl are project-wide,
so in my opinion they should only be used for genuinely project-wide
deviations from a rule; specific deviations for individual functions
should be done with SAF. Another reason is that deviations under
deviations.ecl need to be manually ported to any other static analyzer
one by one, while SAF is meant to support multiple analyzers
automatically.

More importantly if we change deviations.ecl, deviations.rst should also
be changed the same way. I would say that this is required at a minimum
as deviations.ecl and deviations.rst should be in sync.
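Since Rule 20.7 is the rule being deviated here, a minimal sketch of the
hazard it guards against may help (hypothetical SCALE_* macros, not Xen
code):

```c
/* Non-compliant with MISRA C:2012 Rule 20.7: the parameter x is
 * expanded without enclosing parentheses. */
#define SCALE_BAD(x) (x * 2)

/* Compliant: the expanded parameter is parenthesized. */
#define SCALE_OK(x)  ((x) * 2)

/* With an expression argument, SCALE_BAD(1 + 2) expands to
 * (1 + 2 * 2) == 5 rather than the intended (1 + 2) * 2 == 6. */
```

bitmap_switch cannot take this treatment because, as the commit message
explains, its parameters are not plain expressions, so adding
parentheses around them would break the expansion.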

 
> > > ---
> > >  automation/eclair_analysis/ECLAIR/deviations.ecl | 8 ++++++++
> > >  1 file changed, 8 insertions(+)
> > > 
> > > diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > index 447c1e6661d1..c2698e7074aa 100644
> > > --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > > @@ -463,6 +463,14 @@ of this macro do not lead to developer confusion, and
> > > can thus be deviated."
> > >  -config=MC3R1.R20.7,reports+={safe,
> > > "any_area(any_loc(any_exp(macro(^count_args_$))))"}
> > >  -doc_end
> > > 
> > > +-doc_begin="The arguments of the bitmap_switch macro can't be
> > > parenthesized as
> > > +the rule would require, without breaking the functionality of the macro.
> > > This is
> > > +a specialized local helper macro only used within the bitmap.h header, so
> > > it is
> > > +less likely to lead to developer confusion and it is deemed better to
> > > deviate it."
> > > +-file_tag+={xen_bitmap_h, "^xen/include/xen/bitmap\\.h$"}
> > > +-config=MC3R1.R20.7,reports+={safe,
> > > "any_area(any_loc(any_exp(macro(loc(file(xen_bitmap_h))&&^bitmap_switch$))))"}
> > > +-doc_end
> > > +
> > >  -doc_begin="Uses of variadic macros that have one of their arguments
> > > defined as
> > >  a macro and used within the body for both ordinary parameter expansion
> > > and as an
> > >  operand to the # or ## operators have a behavior that is well-understood
> > > and
> > > --
> > > 2.34.1
> > > 
> 
> -- 
> Nicola Vetrini, BSc
> Software Engineer, BUGSENG srl (https://bugseng.com)
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:08:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:08:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748201.1155787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMH8i-0003Ac-CQ; Wed, 26 Jun 2024 01:08:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748201.1155787; Wed, 26 Jun 2024 01:08:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMH8i-0003AV-9X; Wed, 26 Jun 2024 01:08:00 +0000
Received: by outflank-mailman (input) for mailman id 748201;
 Wed, 26 Jun 2024 01:07:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMH8h-000396-Cl
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:07:59 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 85df33cf-3358-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 03:07:57 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 6BC2A60B5A;
 Wed, 26 Jun 2024 01:07:56 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id C17D5C32781;
 Wed, 26 Jun 2024 01:07:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85df33cf-3358-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719364076;
	bh=4SDBa1rGC+JAmFHN8wUJHSYJnttc1J9qCaPA5Zao9Qw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=oIKcu5vqDwddllv63t/JvZMNjMaXF4vtpTM6VKBWZbraRk9KLobxKw3mdFNRQI2od
	 driqSiI1csITukOtzk6gRxfDU7xHr2AG2qAQhMekpsJqLrVBBfPGbLCu2+5iA00NGT
	 k5g5im1G2N+nfl47fd+i9AyQtN4KFSJ2li+8OAHhb5fA+UOkxKuQz7dBnzCv1IVuGV
	 Mq/fM4RH53G5HQGdWh/w8raPwFJgLOCFz75rxaepQVCMGWTpFH+9lC0VKagCSSwCAp
	 vyPxxHMHhosVurnOenXOVLP9xmjjdcthHu13SJr1EFDMCATDz39InQdwlMrwYpY4ew
	 mZ16n2K36RDAw==
Date: Tue, 25 Jun 2024 18:07:53 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>, 
    Federico Serafini <federico.serafini@bugseng.com>, 
    xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH v4] automation/eclair: extend existing deviations of
 MISRA C Rule 16.3
In-Reply-To: <c2098800-2565-4eff-90b5-0d285862bf26@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406251806240.3635@ubuntu-linux-20-04-desktop>
References: <90044547484dac6fcb4748ae8758e38234b3261a.1719297249.git.federico.serafini@bugseng.com> <c2098800-2565-4eff-90b5-0d285862bf26@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 25 Jun 2024, Jan Beulich wrote:
> On 25.06.2024 08:46, Federico Serafini wrote:
> > Update ECLAIR configuration to deviate more cases where an
> > unintentional fallthrough cannot happen.
> > 
> > Tag Rule 16.3 as clean for arm.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> > Acked-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> To add to my reply on the other series: As per above you even acked ...
> 
> > --- a/docs/misra/deviations.rst
> > +++ b/docs/misra/deviations.rst
> > @@ -330,12 +330,34 @@ Deviations related to MISRA C:2012 Rules:
> >       - Tagged as `deliberate` for ECLAIR.
> >  
> >     * - R16.3
> > -     - Switch clauses ending with continue, goto, return statements are safe.
> > +     - Statements that change the control flow (i.e., break, continue, goto,
> > +       return) and calls to functions that do not return the control back are
> > +       \"allowed terminal statements\".
> >       - Tagged as `safe` for ECLAIR.
> >  
> >     * - R16.3
> > -     - Switch clauses ending with a call to a function that does not give
> > -       the control back (i.e., a function with attribute noreturn) are safe.
> > +     - An if-else statement having both branches ending with one of the allowed
> > +       terminal statements is itself an allowed terminal statement.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R16.3
> > +     - An if-else statement having an always true condition and the true
> > +       branch ending with an allowed terminal statement is itself an allowed
> > +       terminal statement.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R16.3
> > +     - A switch clause ending with a statement expression which, in turn, ends
> > +       with an allowed terminal statement (e.g., the expansion of
> > +       generate_exception()) is safe.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R16.3
> > +     - A switch clause ending with a do-while-false the body of which, in turn,
> > +       ends with an allowed terminal statement (e.g., PARSE_ERR_RET()) is safe.
> > +       An exception to that is the macro ASSERT_UNREACHABLE() which is
> > +       effective in debug build only: a switch clause ending with
> > +       ASSERT_UNREACHABLE() is not considered safe.
> >       - Tagged as `safe` for ECLAIR.
> 
> ... this explicit statement regarding ASSERT_UNREACHABLE().

You are right... I read the statement about ASSERT_UNREACHABLE() only in
the context of do-while-false. Let's continue the discussion in the
other email thread.
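For illustration, the "if-else with both branches terminal" deviation
quoted above can be sketched as follows (sign() is a hypothetical
function, not Xen code):

```c
#include <stdlib.h>

/* Every switch clause ends in an allowed terminal statement: 'case 1'
 * via an if-else whose branches both return, 'case 0' via a plain
 * return, and 'default' via a call that does not return control. None
 * of the clauses needs an unconditional break, and no unintentional
 * fallthrough is possible. */
static int sign(int x)
{
    switch ( x != 0 )
    {
    case 1:
        if ( x > 0 )
            return 1;
        else
            return -1;
    case 0:
        return 0;
    default:
        exit(1);
    }
}
```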


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:11:49 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748209.1155798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHCM-0004rw-R9; Wed, 26 Jun 2024 01:11:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748209.1155798; Wed, 26 Jun 2024 01:11:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHCM-0004rp-O9; Wed, 26 Jun 2024 01:11:46 +0000
Received: by outflank-mailman (input) for mailman id 748209;
 Wed, 26 Jun 2024 01:11:45 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMHCL-0004rj-4p
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:11:45 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0c30847f-3359-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 03:11:43 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id C484B60B6B;
 Wed, 26 Jun 2024 01:11:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 9CD09C32786;
 Wed, 26 Jun 2024 01:11:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c30847f-3359-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719364301;
	bh=pyNbXFTDiHujn7NdZ0hPlUDnFrFl+BqHlxo9td2YtOo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=nuBoZBPwgCyuPlcdD2zZjaYBItY/XcPbbUbXrGP5o+huunD77rpR/cC0rkSrBBkkP
	 yuNreona6JTXPXS1i9yV4BRR8uiyhqbJPCwhKIoAL4VwQtU7zjhBeNu634i69Q1GF2
	 P17b5ECO+A8Uuh2diKuiyAGvJRhTI1HcAHhtVrIKNOBUNQTZiQNdXxnuB68gtqpWTR
	 YF1zJFkplOIFEe5lgF0Pn7ioYXRUgOOv/X61exYf0po2QkQQQMS/LgEHQjFK02oXfg
	 +D4FXjq8EzEGHtzpFzMoIximo13iLMzIg/i1Wxpgs90KMHBO69sJY1CB+lEE3ko6jl
	 P44S6tjPmfzLw==
Date: Tue, 25 Jun 2024 18:11:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Federico Serafini <federico.serafini@bugseng.com>
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
In-Reply-To: <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop>
 <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 25 Jun 2024, Jan Beulich wrote:
> On 25.06.2024 02:54, Stefano Stabellini wrote:
> > On Mon, 24 Jun 2024, Federico Serafini wrote:
> >> Add break or pseudo keyword fallthrough to address violations of
> >> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> >> every switch-clause".
> >>
> >> No functional change.
> >>
> >> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> >> ---
> >>  xen/arch/x86/traps.c | 3 +++
> >>  1 file changed, 3 insertions(+)
> >>
> >> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> >> index 9906e874d5..cbcec3fafb 100644
> >> --- a/xen/arch/x86/traps.c
> >> +++ b/xen/arch/x86/traps.c
> >> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
> >>  
> >>      default:
> >>          ASSERT_UNREACHABLE();
> >> +        break;
> > 
> > Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
> > statements" that can terminate a case, in addition to break.
> 
> Why? Exactly the opposite is part of the subject of a recent patch, iirc.
> Simply because of the rules needing to cover both debug and release builds.

The reason is that ASSERT_UNREACHABLE() might disappear from the release
build but it can still be used as a marker during static analysis. In
my view, ASSERT_UNREACHABLE() is equivalent to a noreturn function call
which has an empty implementation in release builds.

The only reason I can think of to require a break; after an
ASSERT_UNREACHABLE() would be if we think the unreachability only
applies to debug builds, not release builds:

- debug build: it is unreachable
- release build: it is reachable

I don't think that is meant to be possible, so I think we can use
ASSERT_UNREACHABLE() as a marker.
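A sketch of the pattern under discussion (MY_ASSERT_UNREACHABLE is a
stand-in for Xen's ASSERT_UNREACHABLE(), not the actual macro):

```c
#include <assert.h>

/* Stand-in marker: fires in debug builds, compiles to nothing when
 * NDEBUG is defined (release builds). */
#ifndef NDEBUG
#define MY_ASSERT_UNREACHABLE() assert(!"unreachable")
#else
#define MY_ASSERT_UNREACHABLE() ((void)0)
#endif

static int handle(int cmd)
{
    int ret = -1;

    switch ( cmd )
    {
    case 0:
        ret = 1;
        break;
    default:
        MY_ASSERT_UNREACHABLE();
        break; /* the break Rule 16.3 requires today; the proposal
                * above is to treat the marker itself as terminal */
    }
    return ret;
}
```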


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:13:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748214.1155808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHED-0005PJ-4L; Wed, 26 Jun 2024 01:13:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748214.1155808; Wed, 26 Jun 2024 01:13:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHED-0005PC-1j; Wed, 26 Jun 2024 01:13:41 +0000
Received: by outflank-mailman (input) for mailman id 748214;
 Wed, 26 Jun 2024 01:13:39 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMHEB-0005P6-Gc
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:13:39 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 506fc231-3359-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 03:13:37 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id F1B7260B5A;
 Wed, 26 Jun 2024 01:13:35 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id ADD66C32781;
 Wed, 26 Jun 2024 01:13:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 506fc231-3359-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719364415;
	bh=1vDMBFbep/nJ+2OXb3icg5/JJaWp9QD7KY+cPtqHQVg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Nw3PQgg0EJsEj2tlW6NeLEdOQXPgeQdho9gHpnuAaIMPe+TjMaWBxnKqCgqi05Qtk
	 aOIPiYuxSjgYfuhGiD/CIRk+JnxekDITctkEVBMeZMrHE8TqbgM1V5Nr29utb1xld1
	 tqM+OO1EO6FcZ1GxCzTqB0Yn7xvaYUJkUF/QEIPtN1YailzSIohY+4xbD4UphYIbTf
	 t+Lk21BuMjLUyzT47Xa6byh9Jam4joMwYJFUHnuPjrjTIDIC07BbWsRcE/3NkX1CMa
	 Ug17IAV4W2G2PunhF9UTei9LS7ja4nVPVIE0GtkQ1EWX6jAQ0hxuU31++fCXZlcdqO
	 iFIlbKcvGiQFQ==
Date: Tue, 25 Jun 2024 18:13:33 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>, 
    consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/3] common/kernel: address violation of MISRA C Rule
 13.6
In-Reply-To: <521767cb-ac08-48c5-bd91-b30c1d192331@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406251812480.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com> <54949b0561263b9f18da500255836d89ca8838ba.1719308599.git.alessandro.zucchelli@bugseng.com> <521767cb-ac08-48c5-bd91-b30c1d192331@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 25 Jun 2024, Jan Beulich wrote:
> On 25.06.2024 12:14, Alessandro Zucchelli wrote:
> > --- a/xen/common/kernel.c
> > +++ b/xen/common/kernel.c
> > @@ -660,14 +660,15 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
> >  
> >      case XENVER_guest_handle:
> >      {
> > +        struct domain *d = current->domain;
> 
> Can a (new) variable thus initialized please be consistently named currd?
> 
> >          xen_domain_handle_t hdl;
> >  
> >          if ( deny )
> >              memset(&hdl, 0, ARRAY_SIZE(hdl));
> >  
> > -        BUILD_BUG_ON(ARRAY_SIZE(current->domain->handle) != ARRAY_SIZE(hdl));
> > +        BUILD_BUG_ON(ARRAY_SIZE(d->handle) != ARRAY_SIZE(hdl));
> 
> Wasn't there the intention to exclude BUILD_BUG_ON() for specifically this
> (and any other similar) rule?

+1

I think if we could do that it would be ideal, because these difficult
cases are only build-time checks, so there is no point in applying
MISRA to them.


> > -        if ( copy_to_guest(arg, deny ? hdl : current->domain->handle,
> > +        if ( copy_to_guest(arg, deny ? hdl : d->handle,
> >                             ARRAY_SIZE(hdl) ) )
> >              return -EFAULT;
> >          return 0;
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:16:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:16:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748222.1155817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHHK-0005xr-Hr; Wed, 26 Jun 2024 01:16:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748222.1155817; Wed, 26 Jun 2024 01:16:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHHK-0005xk-FD; Wed, 26 Jun 2024 01:16:54 +0000
Received: by outflank-mailman (input) for mailman id 748222;
 Wed, 26 Jun 2024 01:16:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMHHJ-0005xc-Jl
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:16:53 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c33ba5fe-3359-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 03:16:51 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 1F8A4CE1FF3;
 Wed, 26 Jun 2024 01:16:45 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 60774C32786;
 Wed, 26 Jun 2024 01:16:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c33ba5fe-3359-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719364604;
	bh=mYuIZGRRFgZSvymB2gMePHVyb38V4g5zhBo001tFvY4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=UnLornJyoV+yChTwX8tIFST9Ny2rE+x/T+O0g05uTmctR/4w25SFrg3zo9tRALEiU
	 qm4pma2nK4E9Rq6+h3GW6GPSJSjlWBBpNBoeKEwRnhcHhjZw9SCwtIXgiKDGFifco2
	 NSpeRz5vzdgaXP1dBVUjJV+BWPRaI/9xcUWxpdYY1l+wE/zFAvki5vfGCgswGY6xhP
	 yHzW0i2k0IzUjBqGkqOin61XwdeY9mOhY60ClGZiKEMwVRS0tZPqn8m0M29at2rBWz
	 mhKcKO24xrgqFben49S/l5a+snZ3Dc9Z5vJ0jDLHONA9u8RoGnOB3bG8S0kfR3ZtL3
	 7C8zqb3+4dJVw==
Date: Tue, 25 Jun 2024 18:16:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 2/3] xen/event: address violation of MISRA C Rule 13.6
In-Reply-To: <d48b73a3c5c569f95da424fe9e14a7690b1c69f8.1719308599.git.alessandro.zucchelli@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406251816350.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com> <d48b73a3c5c569f95da424fe9e14a7690b1c69f8.1719308599.git.alessandro.zucchelli@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 25 Jun 2024, Alessandro Zucchelli wrote:
> In the file include/xen/event.h macro set_bit is called with argument
> current->pause_flags.
> Once expanded, set_bit's argument is used in sizeof operations
> and thus 'current', being a macro that expands to a function
> call with potential side effects, generates a violation.
> 
> To address this violation the value of current is therefore stored in a
> variable called 'v' before passing it to macro set_bit.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
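The hazard behind Rule 13.6 and the shape of the fix in this patch can
be sketched as follows (SET_BIT0 and get_flags are hypothetical,
standing in for set_bit and 'current'; this is not Xen's
implementation):

```c
static int g_calls;

/* Hypothetical side-effecting function, standing in for the 'current'
 * macro discussed above. */
static int *get_flags(void)
{
    static int flags;
    g_calls++;
    return &flags;
}

/* A set_bit-style macro that mentions its argument under sizeof. */
#define SET_BIT0(ptr) \
    (*(ptr) |= 1, (void)sizeof(*(ptr)))

static int demo(void)
{
    /* Non-compliant shape: the side-effecting expression is passed
     * straight into the macro, so its expansion also appears under
     * sizeof, where it is never evaluated. This is what Rule 13.6
     * flags. */
    SET_BIT0(get_flags());

    /* Compliant fix, mirroring the patch: evaluate once into a
     * local, then pass the local to the macro. */
    int *p = get_flags();
    SET_BIT0(p);

    return g_calls; /* each call site evaluated get_flags() once */
}
```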



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:19:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:19:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748226.1155827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHJN-0006XM-SX; Wed, 26 Jun 2024 01:19:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748226.1155827; Wed, 26 Jun 2024 01:19:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHJN-0006XF-Px; Wed, 26 Jun 2024 01:19:01 +0000
Received: by outflank-mailman (input) for mailman id 748226;
 Wed, 26 Jun 2024 01:19:00 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMHJM-0006X9-SC
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:19:00 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f545d9a-335a-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 03:18:58 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 1158CCE1A8D;
 Wed, 26 Jun 2024 01:18:54 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 5CE54C32781;
 Wed, 26 Jun 2024 01:18:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f545d9a-335a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719364733;
	bh=GXxfSzn4lCB7+fTOy+8iKjcHuLw8SXgAhM2oZsYQQjM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=RIccIQ9eXXZ9PCeE5vQ3cDJtV0Kr4Quh2y7D1DUWMsV3iTmYj5hsQE3t4kL366i8d
	 UjFulKM2NQ0YCaSQChpacZxJ/SY22s+AK/3chY7xl5rsXLsNczBmznlF3wMG7dZ31h
	 elxiLLjYCGnK0a6RARef8wrZoF4AX5ERH3IgzXPiaY7u50lO3iKD94UhOTeaLwu+fY
	 TBC6mCWuX+yU7sJ8WdNOHl0LpH3jozt1lRcIJEejcg53cp6rlJsYT1p3PAMxU1U2S+
	 SjJo0MHCs0NC8SzkBSVh1LntMQFUp5MS+x64ZA6q/JY+gIJsfnX2GsL/WlXUz8qIqH
	 ElyGwPTBk43FA==
Date: Tue, 25 Jun 2024 18:18:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH 3/3] common/softirq: address violation of MISRA C Rule
 13.6
In-Reply-To: <ab8b527c775fbb7681a4658828d53e7e3419be10.1719308599.git.alessandro.zucchelli@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406251818430.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com> <ab8b527c775fbb7681a4658828d53e7e3419be10.1719308599.git.alessandro.zucchelli@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 25 Jun 2024, Alessandro Zucchelli wrote:
> In common/softirq.c the macro set_bit is called with
> smp_processor_id as an argument.
> Once expanded, set_bit's argument is used in sizeof operations;
> since 'smp_processor_id' is a macro that expands to a function call
> with potential side effects, this generates a violation.
> 
> To address this violation the value of smp_processor_id is therefore
> stored in a variable called 'cpu' before passing it to macro set_bit.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:39:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:39:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748238.1155838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHdG-0002z4-Fq; Wed, 26 Jun 2024 01:39:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748238.1155838; Wed, 26 Jun 2024 01:39:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHdG-0002yx-D7; Wed, 26 Jun 2024 01:39:34 +0000
Received: by outflank-mailman (input) for mailman id 748238;
 Wed, 26 Jun 2024 01:39:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p9lf=N4=amd.com=stefano.stabellini@srs-se1.protection.inumbo.net>)
 id 1sMHdF-0002yr-Bp
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:39:33 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20611.outbound.protection.outlook.com
 [2a01:111:f403:2009::611])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ee765cd4-335c-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 03:39:31 +0200 (CEST)
Received: from MW4PR03CA0284.namprd03.prod.outlook.com (2603:10b6:303:b5::19)
 by DM6PR12MB4330.namprd12.prod.outlook.com (2603:10b6:5:21d::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.30; Wed, 26 Jun
 2024 01:39:26 +0000
Received: from SJ5PEPF000001CC.namprd05.prod.outlook.com
 (2603:10b6:303:b5:cafe::f9) by MW4PR03CA0284.outlook.office365.com
 (2603:10b6:303:b5::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7677.38 via Frontend
 Transport; Wed, 26 Jun 2024 01:39:26 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 SJ5PEPF000001CC.mail.protection.outlook.com (10.167.242.41) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Wed, 26 Jun 2024 01:39:25 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Tue, 25 Jun
 2024 20:39:24 -0500
Received: from smtp.xilinx.com (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39 via Frontend
 Transport; Tue, 25 Jun 2024 20:39:24 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee765cd4-335c-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HaSDu3jVgMgNO/gwe9ho8we22o0ZUMhlm5pFS6zBd7hgf6R88M8FWrINUUTl2uye1hfpzIUhFct6g8ZHbqgKQ30Fz5pK+qu5eFynJeeZ4CAeMTyJy9E8snkcWEB7GYxDdy861vqXiDWn5kmDX+hWt1/KtB+Y/10X11dq4mN9misfUcreD6gSUgXZiFJAyYuROCu1X4tuOi76qHyB1q9WbOl66Ju2KnsWJjzUmkWXfGBW3V9cnifITeUJq3X9xkOrw7p8ku7g7fNv0Tm83620D3U8PCFfhdA0PjxonRY0tlAi1oxrAIwUdI3jB3O1/i0SIEnD/cnXRXczxX7Qwuedng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1uaA7rDWzroIcGQmI5b8PlLH7NpF7Sn3wlDKriPxvc0=;
 b=Qb8ekbc9uqSlIY9UQpu2IkAtXVJ08irob8OSdWxAj8FgJV7766xgZ1pF0o83C6i409FwYDb3SJmLZqlWl6+8/xPNZc+VeAwpHk/+kuDVTRGnlJi6/wogs93LjKOWVU/YQK7N4kNdANTxi1TwFxg1MAzeSjk2So3F+dIr0ae3s9zX2URrSbn1xgXTgBW4agxmS3rZUVGtCMb+f5AYqq44jZHyyLRcfCRAQCnMGbPyeOBp3FjF3AntONZLVye2s2H0V3u6jj7M1obcKfuCIAGxWlj11uYkBzDcWG9HwMaAYCFUVkeFklUnDyT3svFHrO5xutsjtq9wl5+q9XvqYepz2A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1uaA7rDWzroIcGQmI5b8PlLH7NpF7Sn3wlDKriPxvc0=;
 b=K/c84thm4moAQ+QFroNMjrcIb5whbSVPCTwuDUrQEdfaYz4Jry0JWQKu0kirWfLog1Po749UeuIlo/DOcqdsmvEdTWuAp752DQUBmIx2anws7fKdoGQ7fkzhMVdAaNa1rwOQ8SJ6SkzB6miB44yY1HZaK5tD68LWH3tyTmA41GY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
From: Stefano Stabellini <stefano.stabellini@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: <jbeulich@suse.com>, <sstabellini@kernel.org>,
	<andrew.cooper3@citrix.com>, <julien@xen.org>, <michal.orzel@amd.com>,
	<bertrand.marquis@arm.com>, <roger.pau@citrix.com>, Stefano Stabellini
	<stefano.stabellini@amd.com>
Subject: [PATCH v3] docs/misra: rules for mass adoption
Date: Tue, 25 Jun 2024 18:39:22 -0700
Message-ID: <20240626013922.92089-1-stefano.stabellini@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Received-SPF: None (SATLEXMB03.amd.com: stefano.stabellini@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: SJ5PEPF000001CC:EE_|DM6PR12MB4330:EE_
X-MS-Office365-Filtering-Correlation-Id: 822fe646-3b7e-4ff8-0b0e-08dc9580d010
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230038|1800799022|82310400024|36860700011|376012;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?WKYr6GK1VtTcIuBbjeXAZBWJIKLe4TnR26xnep2fffKQUoOSLbS8TjLYb3D1?=
 =?us-ascii?Q?//nSp1nZ3RXbAvY5PL4ywahloDbGBKinYUm92fAWxfdT+wMhBzfk+T2N9G7g?=
 =?us-ascii?Q?pEZ5Pdo/IO1HJDRkH3mExjg7R6r3PVkyrB6yXPkA4mp4qhiNPPEKSBaqyhW7?=
 =?us-ascii?Q?gIY9pWJgoCNxCEtZ7chwDxPDQqwE8djitektyI/O9u8OM4MsqRcEWXapwt+1?=
 =?us-ascii?Q?HhvYo0jhZzDFvJmPRUbsrSWEdBmAPk9vj+z1+BAiBztdVvfUQb0rdXzT8H5e?=
 =?us-ascii?Q?8n8bINsuwg86TXLVK5cFyirMR7CA1PVlD7BZdN4hTwD36CAr4iL6zpufriYs?=
 =?us-ascii?Q?c63KIoCOfJwD/7y5ozQb3U0GeFZ8NrU19aO1+7ynPWN7H5QVczmbki6ZsBnp?=
 =?us-ascii?Q?n4Ntp4RghGG4YOIvpE0CMNFZ3nV5OJJvZ4/JhGxTIq3PKcHtHV4vMlIQnvH2?=
 =?us-ascii?Q?ReJez1Z+nAwrz/nlF+Y6CL8IkJpjwCZYnSy2AG4UDO4xM/wy5zTvfVWvMPsk?=
 =?us-ascii?Q?lP5gCZMpAKhK3U7shLdXa8LSpbWTjdr8gIeMlBmSEZ4FfxAZW2YIVh6Nf42P?=
 =?us-ascii?Q?5LrvAdv3810VocrRuxJBy7kZXe8AK/aa+87i+mfyHNak33d8Vv/w1SfVKgBb?=
 =?us-ascii?Q?5t9mky9USYbBmB47Dq38QuUom0JneY8lPQYlEbkmWrBAKdCg2El5SgMzHppB?=
 =?us-ascii?Q?XzlwiCcoThi90w5eZJysthF5QB/Kk5DsXrfW347/lhszDJlBD7RdRMRZ/FN+?=
 =?us-ascii?Q?Z/tFChMVhsJg5Eh5glgEUNVDis2go9OhM70IPD4kUHNm8dem8ZUUH3QCy672?=
 =?us-ascii?Q?u9pbN0IV4U/U3lLriHdewACF92OdgXmsYmLAm2P1fO8tLYFKh0W7V9yOMWv0?=
 =?us-ascii?Q?YALb0Ud3wLaArmfdt4VUySjDkCS9vAAHJHQBQiajVH7QTwgPcpL4Z401QiAS?=
 =?us-ascii?Q?izUidlLuw5MuMiOQAph/P79zV6rz+9ExE5UnkbpxCXFcDXfQAgnPHA6mY17G?=
 =?us-ascii?Q?mpe1bBlO4/0M039THqPZ0/+OcCz7foXBlvIq+PwjqAwxKxbZeauEbHPFdY2+?=
 =?us-ascii?Q?HNnuA/eToz8KlcbgLXVSNwdSuyoSNxDo474Y3x+Uv3LmoBW12c92kPiOpubx?=
 =?us-ascii?Q?6Yat7f9s5Uz140aI3MyHI1WeNtCnMfoFSUIZ7hRR0JC0XTQOTYb8uYca/KpT?=
 =?us-ascii?Q?lGnrNHpEetj8hWboEjX8iAZgxYIYShmbp6W583xtDYum1FbAM3+z5pogjFhG?=
 =?us-ascii?Q?Fp+0kZQ+QVvw9RCGPWm4d/yEKF8A8ellhSn+n1uTvZfLHEOjZsHB2CR5x4dc?=
 =?us-ascii?Q?fqftRe/NGLPL6yj6F7WXaRcJoPBgJLx50JaduaQHQeVfYQYWzO8g9KPow2mf?=
 =?us-ascii?Q?/FAU04I=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230038)(1800799022)(82310400024)(36860700011)(376012);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jun 2024 01:39:25.7426
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 822fe646-3b7e-4ff8-0b0e-08dc9580d010
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SJ5PEPF000001CC.namprd05.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4330

From: Stefano Stabellini <sstabellini@kernel.org>

This patch adds a bunch of rules to rules.rst that are uncontroversial
and have zero violations in Xen. As such, they have been approved for
adoption.

All the ones that regard the standard library have the link to the
existing footnote in the notes.

Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
---
Changes in v3:
- add 21.11

Changes in v2:
- replicate the Xen doesn't provide a stdlib message for 22.8, 22.9, 22.10
- remove stray empty bullet for 22.1
---
 docs/misra/rules.rst | 104 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 104 insertions(+)

diff --git a/docs/misra/rules.rst b/docs/misra/rules.rst
index 80e5e972ad..2e11566e20 100644
--- a/docs/misra/rules.rst
+++ b/docs/misra/rules.rst
@@ -580,6 +580,11 @@ maintainers if you want to suggest a change.
      - The relational operators > >= < and <= shall not be applied to objects of pointer type except where they point into the same object
      -
 
+   * - `Rule 18.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_18_08.c>`_
+     - Required
+     - Variable-length array types shall not be used
+     -
+
    * - `Rule 19.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_19_01.c>`_
      - Mandatory
      - An object shall not be assigned or copied to an overlapping
@@ -589,11 +594,29 @@ maintainers if you want to suggest a change.
        instances where Eclair is unable to verify that the code is valid
        in regard to Rule 19.1. Caution reports are not violations.
 
+   * - `Rule 20.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_02.c>`_
+     - Required
+     - The ', " or \ characters and the /* or // character sequences
+       shall not occur in a header file name
+     -
+
+   * - `Rule 20.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_03.c>`_
+     - Required
+     - The #include directive shall be followed by either a <filename>
+       or "filename" sequence
+     -
+
    * - `Rule 20.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_04.c>`_
      - Required
      - A macro shall not be defined with the same name as a keyword
      -
 
+   * - `Rule 20.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_06.c>`_
+     - Required
+     - Tokens that look like a preprocessing directive shall not occur
+       within a macro argument
+     -
+
    * - `Rule 20.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_07.c>`_
      - Required
      - Expressions resulting from the expansion of macro parameters
@@ -609,6 +632,12 @@ maintainers if you want to suggest a change.
        evaluation
      -
 
+   * - `Rule 20.11 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_11.c>`_
+     - Required
+     - A macro parameter immediately following a # operator shall not
+       immediately be followed by a ## operator
+     -
+
    * - `Rule 20.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_20_12.c>`_
      - Required
      - A macro parameter used as an operand to the # or ## operators,
@@ -651,11 +680,39 @@ maintainers if you want to suggest a change.
        declared
      - See comment for Rule 21.1
 
+   * - `Rule 21.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_03.c>`_
+     - Required
+     - The memory allocation and deallocation functions of <stdlib.h>
+       shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 21.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_04.c>`_
+     - Required
+     - The standard header file <setjmp.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 21.5 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_05.c>`_
+     - Required
+     - The standard header file <signal.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 21.6 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_06.c>`_
      - Required
      - The Standard Library input/output routines shall not be used
      - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
 
+   * - `Rule 21.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_07.c>`_
+     - Required
+     - The Standard Library functions atof, atoi, atol and atoll of
+       <stdlib.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 21.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_08.c>`_
+     - Required
+     - The Standard Library functions abort, exit and system of
+       <stdlib.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 21.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_09.c>`_
      - Required
      - The library functions bsearch and qsort of <stdlib.h> shall not be used
@@ -666,6 +723,16 @@ maintainers if you want to suggest a change.
      - The Standard Library time and date routines shall not be used
      - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
 
+   * - `Rule 21.11 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_11.c>`_
+     - Required
+     - The standard header file <tgmath.h> shall not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 21.12 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_12.c>`_
+     - Advisory
+     - The exception handling features of <fenv.h> should not be used
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 21.13 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_21_13.c>`_
      - Mandatory
      - Any value passed to a function in <ctype.h> shall be representable as an
@@ -725,12 +792,24 @@ maintainers if you want to suggest a change.
      - The Standard Library function system of <stdlib.h> shall not be used
      -
 
+   * - `Rule 22.1 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_01.c>`_
+     - Required
+     - All resources obtained dynamically by means of Standard Library
+       functions shall be explicitly released
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 22.2 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_02.c>`_
      - Mandatory
      - A block of memory shall only be freed if it was allocated by means of a
        Standard Library function
      -
 
+   * - `Rule 22.3 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_03.c>`_
+     - Required
+     - The same file shall not be open for read and write access at the
+       same time on different streams 
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
    * - `Rule 22.4 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_04.c>`_
      - Mandatory
      - There shall be no attempt to write to a stream which has been opened as
@@ -748,6 +827,31 @@ maintainers if you want to suggest a change.
        stream has been closed
      -
 
+   * - `Rule 22.7 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_07.c>`_
+     - Required
+     - The macro EOF shall only be compared with the unmodified return
+       value from any Standard Library function capable of returning EOF
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 22.8 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_08.c>`_
+     - Required
+     - The value of errno shall be set to zero prior to a call to an
+       errno-setting-function
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 22.9 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_09.c>`_
+     - Required
+     - The value of errno shall be tested against zero after calling an
+       errno-setting-function
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+   * - `Rule 22.10 <https://gitlab.com/MISRA/MISRA-C/MISRA-C-2012/Example-Suite/-/blob/master/R_22_10.c>`_
+     - Required
+     - The value of errno shall only be tested when the last function to
+       be called was an errno-setting-function
+     - Xen doesn't provide, use, or link against a Standard Library [#xen-stdlib]_
+
+
 Terms & Definitions
 -------------------
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:39:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:39:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748239.1155849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHdX-0003HL-P4; Wed, 26 Jun 2024 01:39:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748239.1155849; Wed, 26 Jun 2024 01:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHdX-0003HA-Kb; Wed, 26 Jun 2024 01:39:51 +0000
Received: by outflank-mailman (input) for mailman id 748239;
 Wed, 26 Jun 2024 01:39:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMHdW-0003GX-4m
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:39:50 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f75db6a8-335c-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 03:39:47 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 66E7CCE1FEC;
 Wed, 26 Jun 2024 01:39:41 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 10DB1C32781;
 Wed, 26 Jun 2024 01:39:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f75db6a8-335c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719365980;
	bh=/QJ90yg1H84m3CnzBykG2Nzo8awodZ13WlBhB1VMhhE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LmUp55+Cppeqe12gCoWlDkyuHkQuhLYwzgFkTXQ3gNTTW1idiHxvKv9N4xrKQ5FzP
	 7wI3ddn99SxfmGWmfkq/GPvIvDALEnsAyUVUAcnPvj9Lt83+8AOwMiojiIYldLnhAy
	 LagGAVM8Ag/cQ8S2CrUxlhOB8dAd6qgLq3lUdG3OWneY94FhFoIM4/E/gHZU/gVyKm
	 Rp19J64GolH2xSV9QgY+xN9C7s7BUAE+8UlbnLBHVE2NLYKG+Xju6blNm8e2yi1TG3
	 J0q6JqA3NPFW6cpVaQlqJ9riA56dVtHq8vlHjl5XGXBtQ8dgFfD3AAtvOJ/ogW4ER6
	 y/IRuY9oztbag==
Date: Tue, 25 Jun 2024 18:39:37 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: Stefano Stabellini <stefano.stabellini@amd.com>, 
    xen-devel@lists.xenproject.org, jbeulich@suse.com, sstabellini@kernel.org, 
    andrew.cooper3@citrix.com, julien@xen.org, michal.orzel@amd.com, 
    bertrand.marquis@arm.com, roger.pau@citrix.com
Subject: Re: [PATCH v2] docs/misra: rules for mass adoption
In-Reply-To: <1aeebff6-68f2-4135-ae5d-6c76f29f4ab0@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406251839280.3635@ubuntu-linux-20-04-desktop>
References: <20240622001422.3852207-1-stefano.stabellini@amd.com> <1aeebff6-68f2-4135-ae5d-6c76f29f4ab0@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 24 Jun 2024, Federico Serafini wrote:
> Hi Stefano,
> 
> On 22/06/24 02:14, Stefano Stabellini wrote:
> > From: Stefano Stabellini <sstabellini@kernel.org>
> > 
> > This patch adds a bunch of rules to rules.rst that are uncontroversial
> > and have zero violations in Xen. As such, they have been approved for
> > adoption.
> > 
> > All the ones that regard the standard library have the link to the
> > existing footnote in the notes.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@amd.com>
> 
> Rule 21.11 ("The standard header file <tgmath.h> shall not be
> used") also comes back clean; I think it should be added in this patch.

I sent a v3 adding 21.11


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 01:59:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 01:59:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748250.1155861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHwl-0007rA-Au; Wed, 26 Jun 2024 01:59:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748250.1155861; Wed, 26 Jun 2024 01:59:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMHwl-0007r3-7s; Wed, 26 Jun 2024 01:59:43 +0000
Received: by outflank-mailman (input) for mailman id 748250;
 Wed, 26 Jun 2024 01:59:41 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMHwj-0007qv-1T
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 01:59:41 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bcd7828c-335f-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 03:59:38 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 291ABCE1F0F;
 Wed, 26 Jun 2024 01:59:34 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 250DDC32781;
 Wed, 26 Jun 2024 01:59:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcd7828c-335f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719367173;
	bh=9Iv4WoVrCMZoUB5keV99wbwJWobeal8U4cbaZLpZVNE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=sp1hzV49f/8u+/5i48WTKb+w+bxlNAKAJSJg4ZLjXqLHhRzdoBWy4/ohikvPnXf0M
	 /wPk8fWVett/iHmPBgkbwt7oypNGHdEiDStlvj6FdmF3BP1jphS9JayEYR0MkdPGdk
	 iPLufAfpJfvH+LH/Aw67yOud5FQYC1SdeyO8p4+w3g1BVNa2I8dwZp6san1xNNEZez
	 juRlDEWefQmUdR/AJZtiMs3HCndLj1N+OWHdA/2RpOxPhV5ab9dVC0OIrFssBUGWIU
	 r91fCN8jfDM4LfT2eR8u3sJEPMYRvQzNC/4QZijOC/feg8fqPhhKo6cEJa0gF6TrP/
	 dlhnwRHc0Gtag==
Date: Tue, 25 Jun 2024 18:59:30 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [XEN PATCH for 4.19] automation/eclair: add deviations agreed
 in MISRA meetings
In-Reply-To: <4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop>
References: <4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 25 Jun 2024, Federico Serafini wrote:
> Update ECLAIR configuration to take into account the deviations
> agreed during the MISRA meetings.
> 
> While doing this, remove the obsolete "Set [123]" comments.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Thank you! Many of these deviations are really important!

I double-checked everything and it looks good. I have 2 requests for
changes below to keep deviations.rst updated. I made a few comments about
some deviations that could potentially be done with SAF in-code comments,
but given the state of the release I think it is OK.

I would like to ask for a release-ack, especially for all the deviations
about conversions, because those are critical. I think the rest is OK
too.


> ---
>  .../eclair_analysis/ECLAIR/deviations.ecl     | 93 +++++++++++++++++--
>  docs/misra/deviations.rst                     | 68 +++++++++++++-
>  2 files changed, 149 insertions(+), 12 deletions(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index ae2eaf50f7..e6517a9142 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -1,5 +1,3 @@
> -### Set 1 ###
> -
>  #
>  # Series 2.
>  #
> @@ -23,6 +21,11 @@ Constant expressions and unreachable branches of if and switch statements are ex
>  -config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(any_exp(macro(name(ASSERT_UNREACHABLE||PARSE_ERR_RET||PARSE_ERR||FAIL_MSR||FAIL_CPUID)))))"}
>  -doc_end
>  
> +-doc_begin="The asm-offset files are not linked deliberately, since they are used to generate definitions for asm modules."
> +-file_tag+={asm_offsets, "^xen/arch/(arm|x86)/(arm32|arm64|x86_64)/asm-offsets\\.c$"}
> +-config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(file(asm_offsets)))"}
> +-doc_end
> +
>  -doc_begin="Pure declarations (i.e., declarations without initialization) are
>  not executable, and therefore it is safe for them to be unreachable."
>  -config=MC3R1.R2.1,ignored_stmts+={"any()", "pure_decl()"}
> @@ -63,6 +66,12 @@ they are not instances of commented-out code."
>  -config=MC3R1.D4.3,reports+={disapplied,"!(any_area(any_loc(file(^xen/arch/arm/arm64/.*$))))"}
>  -doc_end
>  
> +-doc_begin="The inline asm in 'arm64/lib/bitops.c' is tightly coupled with the surrounding C code that acts as a wrapper, so it has been decided not to add an additional encapsulation layer."
> +-file_tag+={arm64_bitops, "^xen/arch/arm/arm64/lib/bitops\\.c$"}
> +-config=MC3R1.D4.3,reports+={deliberate, "all_area(any_loc(file(arm64_bitops)&&any_exp(macro(^(bit|test)op$))))"}
> +-config=MC3R1.D4.3,reports+={deliberate, "any_area(any_loc(file(arm64_bitops))&&context(name(int_clear_mask16)))"}
> +-doc_end
> +
>  -doc_begin="This header file is autogenerated or empty, therefore it poses no
>  risk if included more than once."
>  -file_tag+={empty_header, "^xen/arch/arm/efi/runtime\\.h$"}
> @@ -213,10 +222,25 @@ Therefore the absence of prior declarations is safe."
>  -config=MC3R1.R8.4,declarations+={safe, "loc(file(asm_defns))&&^current_stack_pointer$"}
>  -doc_end
>  
> +-doc_begin="The functions apei_(read|check|clear)_mce are dead code and are excluded from non-debug builds, therefore the absence of prior declarations is safe."
> +-config=MC3R1.R8.4,declarations+={safe, "^apei_(read|check|clear)_mce\\(.*$"}
> +-doc_end

This is the kind of thing that might be better done with a SAF comment,
but given the state of the release it is easier to do it this way now,
as it doesn't require code changes in Xen. I'd say keep it as is but
just FYI.


>  -doc_begin="asmlinkage is a marker to indicate that the function is only used to interface with asm modules."
>  -config=MC3R1.R8.4,declarations+={safe,"loc(text(^(?s).*asmlinkage.*$, -1..0))"}
>  -doc_end
>  
> +-doc_begin="Given that bsearch and sort are defined with the attribute 'gnu_inline', it's deliberate not to have a prior declaration.
> +See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for a full explanation of gnu_inline."
> +-file_tag+={bsearch_sort, "^xen/include/xen/(sort|lib)\\.h$"}
> +-config=MC3R1.R8.4,reports+={deliberate, "any_area(any_loc(file(bsearch_sort))&&decl(name(bsearch||sort)))"}
> +-doc_end

Same comment about SAF


> +-doc_begin="first_valid_mfn is defined in this way because the current lack of NUMA support in Arm and PPC requires it."
> +-file_tag+={first_valid_mfn, "^xen/common/page_alloc\\.c$"}
> +-config=MC3R1.R8.4,declarations+={deliberate,"loc(file(first_valid_mfn))"}
> +-doc_end

Same comment about SAF


>  -doc_begin="The following variables are compiled in multiple translation units
>  belonging to different executables and therefore are safe."
>  -config=MC3R1.R8.6,declarations+={safe, "name(current_stack_pointer||bsearch||sort)"}
> @@ -257,8 +281,6 @@ dimension is higher than omitting the dimension."
>  -config=MC3R1.R9.5,reports+={deliberate, "any()"}
>  -doc_end
>  
> -### Set 2 ###
> -
>  #
>  # Series 10.
>  #
> @@ -299,7 +321,6 @@ integers arguments on two's complement architectures
>  -config=MC3R1.R10.1,reports+={safe, "any_area(any_loc(any_exp(macro(^ISOLATE_LSB$))))"}
>  -doc_end
>  
> -### Set 3 ###
>  -doc_begin="XEN only supports architectures where signed integers are
>  represented using two's complement and all the XEN developers are aware of
>  this."
> @@ -323,6 +344,49 @@ constant expressions are required.\""
>  # Series 11
>  #
>  
> +-doc_begin="The conversion from a function pointer to unsigned long or (void *) does not lose any information, provided that the target type has enough bits to store it."
> +-config=MC3R1.R11.1,casts+={safe,
> +  "from(type(canonical(__function_pointer_types)))
> +   &&to(type(canonical(builtin(unsigned long)||pointer(builtin(void)))))
> +   &&relation(definitely_preserves_value)"
> +}
> +-doc_end

This one and the ones below are the important ones! I think we should
have them in the tree as soon as possible, ideally in 4.19, so I am
asking for a release-ack.


> +-doc_begin="The conversion from a function pointer to a boolean has well-known semantics that do not lead to unexpected behaviour."
> +-config=MC3R1.R11.1,casts+={safe,
> +  "from(type(canonical(__function_pointer_types)))
> +   &&kind(pointer_to_boolean)"
> +}
> +-doc_end
> +
> +-doc_begin="The conversion from a pointer to an incomplete type to unsigned long does not lose any information, provided that the target type has enough bits to store it."
> +-config=MC3R1.R11.2,casts+={safe,
> +  "from(type(any()))
> +   &&to(type(canonical(builtin(unsigned long))))
> +   &&relation(definitely_preserves_value)"
> +}
> +-doc_end
> +
> +-doc_begin="Conversions to object pointers that have a pointee type with a smaller (i.e., less strict) alignment requirement are safe."
> +-config=MC3R1.R11.3,casts+={safe,
> +  "!relation(more_aligned_pointee)"
> +}
> +-doc_end
> +
> +-doc_begin="Conversions from and to integral types are safe, under the assumption that the target type has enough bits to store the value.
> +See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\""
> +-config=MC3R1.R11.6,casts+={safe,
> +    "(from(type(canonical(integral())))||to(type(canonical(integral()))))
> +     &&relation(definitely_preserves_value)"}
> +-doc_end
> +
> +-doc_begin="The conversion from a pointer to a boolean has well-known semantics that do not lead to unexpected behaviour."
> +-config=MC3R1.R11.6,casts+={safe,
> +  "from(type(canonical(__pointer_types)))
> +   &&kind(pointer_to_boolean)"
> +}
> +-doc_end
> +
>  -doc_begin="Violations caused by container_of are due to pointer arithmetic operations
>  with the provided offset. The resulting pointer is then immediately cast back to its
>  original type, which preserves the qualifier. This use is deemed safe.
> @@ -354,9 +418,18 @@ activity."
>  -config=MC3R1.R14.2,reports+={disapplied,"any()"}
>  -doc_end
>  
> --doc_begin="The XEN team relies on the fact that invariant conditions of 'if'
> -statements are deliberate"
> --config=MC3R1.R14.3,statements={deliberate , "wrapped(any(),node(if_stmt))" }
> +-doc_begin="The XEN team relies on the fact that invariant conditions of 'if' statements and conditional operators are deliberate"
> +-config=MC3R1.R14.3,statements+={deliberate , "wrapped(any(),node(if_stmt||conditional_operator||binary_conditional_operator))" }
> +-doc_end
> +
> +-doc_begin="Switches having a 'sizeof' operator as the condition are deliberate and have limited scope."
> +-config=MC3R1.R14.3,statements+={safe , "wrapped(any(),node(switch_stmt)&&child(cond, operator(sizeof)))" }
> +-doc_end

Needs a corresponding update in deviations.rst; I don't see anything
regarding 14.3 and sizeof there. This is important, as we should keep
the two files in sync.


> +-doc_begin="The use of an invariant size argument in {put,get}_unsafe_size and array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is deliberate and is deemed safe."
> +-file_tag+={x86_uaccess, "^xen/arch/x86(_64)?/include/asm/uaccess\\.h$"}
> +-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^(put|get)_unsafe_size$))))"}
> +-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^array_access_ok$))))"}
>  -doc_end

Similar comment about potentially using SAF

>  
>  -doc_begin="A controlling expression of 'if' and iteration statements having integer, character or pointer type has a semantics that is well-known to all Xen developers."
> @@ -527,8 +600,8 @@ falls under the jurisdiction of other MISRA rules."
>  # General
>  #
>  
> --doc_begin="do-while-0 is a well recognized loop idiom by the xen community."
> --loop_idioms={do_stmt, "literal(0)"}
> +-doc_begin="do-while-[01] is a well recognized loop idiom by the xen community."
> +-loop_idioms={do_stmt, "literal(0)||literal(1)"}
>  -doc_end
>  -doc_begin="while-[01] is a well recognized loop idiom by the xen community."
>  -loop_idioms+={while_stmt, "literal(0)||literal(1)"}

Needs a corresponding update in deviations.rst; see "do-while-0
loops". This is important.



> diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> index 16fc345756..0b048dc225 100644
> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -63,6 +63,11 @@ Deviations related to MISRA C:2012 Rules:
>         switch statement.
>       - ECLAIR has been configured to ignore those statements.
>  
> +   * - R2.1
> +     - The asm-offset files are deliberately not linked, since they are used to
> +       generate definitions for asm modules.
> +     - Tagged as `deliberate` for ECLAIR.
> +
>     * - R2.2
>       - Proving compliance with respect to Rule 2.2 is generally impossible:
>         see `<https://arxiv.org/abs/2212.13933>`_ for details. Moreover, peer
> @@ -203,6 +208,26 @@ Deviations related to MISRA C:2012 Rules:
>         it.
>       - Tagged as `safe` for ECLAIR.
>  
> +   * - R8.4
> +     - Some functions are excluded from non-debug builds, therefore the absence
> +       of a declaration is safe.
> +     - Tagged as `safe` for ECLAIR, such functions are:
> +         - apei_read_mce()
> +         - apei_check_mce()
> +         - apei_clear_mce()
> +
> +   * - R8.4
> +     - Given that bsearch and sort are defined with the attribute 'gnu_inline',
> +       it's deliberate not to have a prior declaration.
> +       See Section "6.33.1 Common Function Attributes" of "GCC_MANUAL" for
> +       a full explanation of gnu_inline.
> +     - Tagged as `deliberate` for ECLAIR.
> +
> +   * - R8.4
> +     - first_valid_mfn is defined in this way because the current lack of NUMA
> +       support in Arm and PPC requires it.
> +     - Tagged as `deliberate` for ECLAIR.
> +
>     * - R8.6
>       - The following variables are compiled in multiple translation units
>         belonging to different executables and therefore are safe.
> @@ -282,6 +307,39 @@ Deviations related to MISRA C:2012 Rules:
>         If no bits are set, 0 is returned.
>       - Tagged as `safe` for ECLAIR.
>  
> +   * - R11.1
> +     - The conversion from a function pointer to unsigned long or (void \*) does
> +       not lose any information, provided that the target type has enough bits
> +       to store it.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.1
> +     - The conversion from a function pointer to a boolean has well-known
> +       semantics that do not lead to unexpected behaviour.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.2
> +     - The conversion from a pointer to an incomplete type to unsigned long
> +       does not lose any information, provided that the target type has enough
> +       bits to store it.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.3
> +     - Conversions to object pointers that have a pointee type with a smaller
> +       (i.e., less strict) alignment requirement are safe.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.6
> +     - Conversions from and to integral types are safe, under the assumption that
> +       the target type has enough bits to store the value.
> +       See also Section "4.7 Arrays and Pointers" of "GCC_MANUAL".
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.6
> +     - The conversion from a pointer to a boolean has well-known semantics
> +       that do not lead to unexpected behaviour.
> +     - Tagged as `safe` for ECLAIR.
> +
>     * - R11.8
>       - Violations caused by container_of are due to pointer arithmetic operations
>         with the provided offset. The resulting pointer is then immediately cast back to its
> @@ -308,8 +366,14 @@ Deviations related to MISRA C:2012 Rules:
>  
>     * - R14.3
>       - The Xen team relies on the fact that invariant conditions of 'if'
> -       statements are deliberate.
> -     - Project-wide deviation; tagged as `disapplied` for ECLAIR.
> +       statements and conditional operators are deliberate.
> +     - Project-wide deviation; tagged as `deliberate` for ECLAIR.
> +
> +   * - R14.3
> +     - The use of an invariant size argument in {put,get}_unsafe_size and
> +       array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is
> +       deliberate and is deemed safe.
> +     - Project-wide deviation; tagged as `deliberate` for ECLAIR.
>  
>     * - R14.4
>       - A controlling expression of 'if' and iteration statements having
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 02:00:33 2024
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186502-mainreport@xen.org>
Subject: [ovmf test] 186502: all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 02:00:19 +0000

flight 186502 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186502/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 dc93ff8a5561a3085eeda9d4ac00d40545eb43cd
baseline version:
 ovmf                 84d8eb08e15e455826ef66a4b1f1f61758cb9aba

Last test of basis   186498  2024-06-25 22:11:10 Z    0 days
Testing same since   186502  2024-06-26 00:13:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sebastian Witt <sebastian.witt@siemens.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   84d8eb08e1..dc93ff8a55  dc93ff8a5561a3085eeda9d4ac00d40545eb43cd -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 02:09:24 2024
Date: Tue, 25 Jun 2024 19:09:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Michal Orzel <michal.orzel@amd.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org
Subject: Re: [ImageBuilder] Add support for omitting host paddr for static
 shmem regions
In-Reply-To: <20240624075559.15484-1-michal.orzel@amd.com>
Message-ID: <alpine.DEB.2.22.394.2406251908590.3635@ubuntu-linux-20-04-desktop>
References: <20240624075559.15484-1-michal.orzel@amd.com>

On Mon, 24 Jun 2024, Michal Orzel wrote:
> Reflect the latest Xen support to be able to omit the host physical
> address for static shared memory regions, in which case the address will
> come from the Xen heap.
> 
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  README.md                |  7 ++++---
>  scripts/uboot-script-gen | 19 +++++++++++++------
>  2 files changed, 17 insertions(+), 9 deletions(-)
> 
> diff --git a/README.md b/README.md
> index 7683492a6f7f..4fcd908c2c2f 100644
> --- a/README.md
> +++ b/README.md
> @@ -199,9 +199,10 @@ Where:
>  
>  - DOMU_SHARED_MEM[number]="SHM-ID HPA GPA size"
>    if specified, indicate SHM-ID represents the unique identifier of the shared
> -  memory region, the host physical address HPA will get mapped at guest
> -  address GPA in domU and the memory of size will be reserved to be shared
> -  memory. The shared memory is used between two dom0less domUs.
> +  memory region. The host physical address HPA is optional; if specified, it
> +  will get mapped at guest address GPA in domU (otherwise the address will
> +  come from the Xen heap), and memory of the given size will be reserved as
> +  shared memory. The shared memory is used between two dom0less domUs.
>  
>    Below is an example:
>    NUM_DOMUS=2
> diff --git a/scripts/uboot-script-gen b/scripts/uboot-script-gen
> index 20cc6ef7f892..8b664e711b10 100755
> --- a/scripts/uboot-script-gen
> +++ b/scripts/uboot-script-gen
> @@ -211,18 +211,25 @@ function add_device_tree_static_shared_mem()
>      local shared_mem_id=${shared_mem%% *}
>      local regions="${shared_mem#* }"
>      local cells=()
> -    local shared_mem_host=${regions%% *}
> -
> -    dt_mknode "${path}" "shared-mem@${shared_mem_host}"
> +    local node_name=
>  
>      for val in ${regions[@]}
>      do
>          cells+=("$(split_value $val)")
>      done
>  
> -    dt_set "${path}/shared-mem@${shared_mem_host}" "compatible" "str" "xen,domain-shared-memory-v1"
> -    dt_set "${path}/shared-mem@${shared_mem_host}" "xen,shm-id" "str" "${shared_mem_id}"
> -    dt_set "${path}/shared-mem@${shared_mem_host}" "xen,shared-mem" "hex" "${cells[*]}"
> +    # Less than 3 cells means host address not provided
> +    if [ ${#cells[@]} -lt 3 ]; then
> +        node_name="shared-mem-${shared_mem_id}"
> +    else
> +        node_name="shared-mem@${regions%% *}"
> +    fi
> +
> +    dt_mknode "${path}" "${node_name}"
> +
> +    dt_set "${path}/${node_name}" "compatible" "str" "xen,domain-shared-memory-v1"
> +    dt_set "${path}/${node_name}" "xen,shm-id" "str" "${shared_mem_id}"
> +    dt_set "${path}/${node_name}" "xen,shared-mem" "hex" "${cells[*]}"
>  }
>  
>  function add_device_tree_cpupools()
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 02:11:31 2024
Date: Wed, 26 Jun 2024 10:10:49 +0800
From: Oliver Sang <oliver.sang@intel.com>
To: Christoph Hellwig <hch@infradead.org>
CC: Christoph Hellwig <hch@lst.de>, <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
	Jens Axboe <axboe@kernel.dk>, Ulf Hansson <ulf.hansson@linaro.org>, "Damien
 Le Moal" <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	<linux-block@vger.kernel.org>, <linux-um@lists.infradead.org>,
	<drbd-dev@lists.linbit.com>, <nbd@other.debian.org>,
	<linuxppc-dev@lists.ozlabs.org>, <virtualization@lists.linux.dev>,
	<xen-devel@lists.xenproject.org>, <linux-bcache@vger.kernel.org>,
	<dm-devel@lists.linux.dev>, <linux-raid@vger.kernel.org>,
	<linux-mmc@vger.kernel.org>, <linux-mtd@lists.infradead.org>,
	<nvdimm@lists.linux.dev>, <linux-nvme@lists.infradead.org>,
	<linux-scsi@vger.kernel.org>, <ying.huang@intel.com>, <feng.tang@intel.com>,
	<fengwei.yin@intel.com>, <oliver.sang@intel.com>
Subject: Re: [axboe-block:for-next] [block]  1122c0c1cc:  aim7.jobs-per-min
 22.6% improvement
Message-ID: <Znt4qTr/NdeIPyNp@xsang-OptiPlex-9020>
References: <202406250948.e0044f1d-oliver.sang@intel.com>
 <ZnqGf49cvy6W-xWf@infradead.org>
In-Reply-To: <ZnqGf49cvy6W-xWf@infradead.org>
 =?us-ascii?Q?AQ=3D=3D?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 53969801-6740-480e-603d-08dc95853ba0
X-MS-Exchange-CrossTenant-AuthSource: LV3PR11MB8603.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jun 2024 02:11:04.7215
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: k652aKh7WsrhahrW3ZgOpKvWkmIcgrVckMgb6d37h7foNeuiIn+oYNTTUrMoEWkRrIDvtHioU9Omt6SfbcxKog==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM3PR11MB8735
X-OriginatorOrg: intel.com

Hi Christoph,

On Tue, Jun 25, 2024 at 01:57:35AM -0700, Christoph Hellwig wrote:
> Hi Oliver,
> 
> can you test the patch below?  It restores the previous behavior if
> the device did not have a volatile write cache.  I think at least
> for raid0 and raid1 without bitmap the new behavior actually is correct
> and better, but it will need fixes for other modes.  If the underlying
> devices did have a volatile write cache I'm a bit lost what the problem
> was and this probably won't fix the issue.

I'm not sure I understand this test request. As the title says, we see
a good aim7 improvement for 1122c0c1cc, and we didn't observe any other
issues with this commit.

Do you mean this improvement is unexpected, or that it exposes some
problem instead? And with the patch below, should the performance go
back to the level of the parent of 1122c0c1cc?

Sure, we're happy to test your patches. I noticed there are two series:
[1]
https://lore.kernel.org/all/20240625110603.50885-2-hch@lst.de/
which includes "[PATCH 1/7] md: set md-specific flags for all queue limits"
[2]
https://lore.kernel.org/all/20240625145955.115252-2-hch@lst.de/
which includes "[PATCH 1/8] md: set md-specific flags for all queue limits"

Which one do you suggest we test?
Do we only need to apply the first patch, "md: set md-specific flags
for all queue limits", on top of 1122c0c1cc?
And is the expectation that performance goes back to the level of the
parent of 1122c0c1cc?

Thanks

> 
> ---
> From 81c816827197f811e14add7a79220ed9eef6af02 Mon Sep 17 00:00:00 2001
> From: Christoph Hellwig <hch@lst.de>
> Date: Tue, 25 Jun 2024 08:48:18 +0200
> Subject: md: set md-specific flags for all queue limits
> 
> The md driver wants to enforce a number of flags on all devices, even
> when not inheriting them from the underlying devices.  To make sure these
> flags survive the queue_limits_set calls that md uses to update the
> queue limits without deriving them from the previous limits, add a new
> md_init_stacking_limits helper that calls blk_set_stacking_limits and sets
> these flags.
> 
> Fixes: 1122c0c1cc71 ("block: move cache control settings out of queue->flags")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  drivers/md/md.c     | 13 ++++++++-----
>  drivers/md/md.h     |  1 +
>  drivers/md/raid0.c  |  2 +-
>  drivers/md/raid1.c  |  2 +-
>  drivers/md/raid10.c |  2 +-
>  drivers/md/raid5.c  |  2 +-
>  6 files changed, 13 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/md/md.c b/drivers/md/md.c
> index 69ea54aedd99a1..8368438e58e989 100644
> --- a/drivers/md/md.c
> +++ b/drivers/md/md.c
> @@ -5853,6 +5853,13 @@ static void mddev_delayed_delete(struct work_struct *ws)
>  	kobject_put(&mddev->kobj);
>  }
>  
> +void md_init_stacking_limits(struct queue_limits *lim)
> +{
> +	blk_set_stacking_limits(lim);
> +	lim->features = BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
> +			BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT;
> +}
> +
>  struct mddev *md_alloc(dev_t dev, char *name)
>  {
>  	/*
> @@ -5871,10 +5878,6 @@ struct mddev *md_alloc(dev_t dev, char *name)
>  	int shift;
>  	int unit;
>  	int error;
> -	struct queue_limits lim = {
> -		.features		= BLK_FEAT_WRITE_CACHE | BLK_FEAT_FUA |
> -					  BLK_FEAT_IO_STAT | BLK_FEAT_NOWAIT,
> -	};
>  
>  	/*
>  	 * Wait for any previous instance of this device to be completely
> @@ -5914,7 +5917,7 @@ struct mddev *md_alloc(dev_t dev, char *name)
>  		 */
>  		mddev->hold_active = UNTIL_STOP;
>  
> -	disk = blk_alloc_disk(&lim, NUMA_NO_NODE);
> +	disk = blk_alloc_disk(NULL, NUMA_NO_NODE);
>  	if (IS_ERR(disk)) {
>  		error = PTR_ERR(disk);
>  		goto out_free_mddev;
> diff --git a/drivers/md/md.h b/drivers/md/md.h
> index c4d7ebf9587d07..28cb4b0b6c1740 100644
> --- a/drivers/md/md.h
> +++ b/drivers/md/md.h
> @@ -893,6 +893,7 @@ extern int strict_strtoul_scaled(const char *cp, unsigned long *res, int scale);
>  
>  extern int mddev_init(struct mddev *mddev);
>  extern void mddev_destroy(struct mddev *mddev);
> +void md_init_stacking_limits(struct queue_limits *lim);
>  struct mddev *md_alloc(dev_t dev, char *name);
>  void mddev_put(struct mddev *mddev);
>  extern int md_run(struct mddev *mddev);
> diff --git a/drivers/md/raid0.c b/drivers/md/raid0.c
> index 62634e2a33bd0f..32d58752477847 100644
> --- a/drivers/md/raid0.c
> +++ b/drivers/md/raid0.c
> @@ -379,7 +379,7 @@ static int raid0_set_limits(struct mddev *mddev)
>  	struct queue_limits lim;
>  	int err;
>  
> -	blk_set_stacking_limits(&lim);
> +	md_init_stacking_limits(&lim);
>  	lim.max_hw_sectors = mddev->chunk_sectors;
>  	lim.max_write_zeroes_sectors = mddev->chunk_sectors;
>  	lim.io_min = mddev->chunk_sectors << 9;
> diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
> index 1a0eba65b8a92b..04a0c2ca173245 100644
> --- a/drivers/md/raid1.c
> +++ b/drivers/md/raid1.c
> @@ -3194,7 +3194,7 @@ static int raid1_set_limits(struct mddev *mddev)
>  	struct queue_limits lim;
>  	int err;
>  
> -	blk_set_stacking_limits(&lim);
> +	md_init_stacking_limits(&lim);
>  	lim.max_write_zeroes_sectors = 0;
>  	err = mddev_stack_rdev_limits(mddev, &lim, MDDEV_STACK_INTEGRITY);
>  	if (err) {
> diff --git a/drivers/md/raid10.c b/drivers/md/raid10.c
> index 3334aa803c8380..2a9c4ee982e023 100644
> --- a/drivers/md/raid10.c
> +++ b/drivers/md/raid10.c
> @@ -3974,7 +3974,7 @@ static int raid10_set_queue_limits(struct mddev *mddev)
>  	struct queue_limits lim;
>  	int err;
>  
> -	blk_set_stacking_limits(&lim);
> +	md_init_stacking_limits(&lim);
>  	lim.max_write_zeroes_sectors = 0;
>  	lim.io_min = mddev->chunk_sectors << 9;
>  	lim.io_opt = lim.io_min * raid10_nr_stripes(conf);
> diff --git a/drivers/md/raid5.c b/drivers/md/raid5.c
> index 0192a6323f09ba..10219205160bbf 100644
> --- a/drivers/md/raid5.c
> +++ b/drivers/md/raid5.c
> @@ -7708,7 +7708,7 @@ static int raid5_set_limits(struct mddev *mddev)
>  	 */
>  	stripe = roundup_pow_of_two(data_disks * (mddev->chunk_sectors << 9));
>  
> -	blk_set_stacking_limits(&lim);
> +	md_init_stacking_limits(&lim);
>  	lim.io_min = mddev->chunk_sectors << 9;
>  	lim.io_opt = lim.io_min * (conf->raid_disks - conf->max_degraded);
>  	lim.features |= BLK_FEAT_RAID_PARTIAL_STRIPES_EXPENSIVE;
> -- 
> 2.43.0
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 03:27:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 03:27:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748284.1155904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMJJl-0000ZG-Ua; Wed, 26 Jun 2024 03:27:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748284.1155904; Wed, 26 Jun 2024 03:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMJJl-0000Z9-RE; Wed, 26 Jun 2024 03:27:33 +0000
Received: by outflank-mailman (input) for mailman id 748284;
 Wed, 26 Jun 2024 03:27:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJJk-0000Yb-L8; Wed, 26 Jun 2024 03:27:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJJk-0003st-AA; Wed, 26 Jun 2024 03:27:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJJk-0006Vx-1Z; Wed, 26 Jun 2024 03:27:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJJk-00039s-1D; Wed, 26 Jun 2024 03:27:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IhF42gABRX5utHkQqOk15CDe8OGSatb+aWLnrYRVRMM=; b=DoMSsFpp22lDubZTbuC4PYrQrG
	wEUVZ2eBEnHDm5MmccFHXgKaSBn11RVY6uE1/NOMQqHzJLwlXQ0qWUct5qGrxq3EdrVAr7t2NZpao
	wdNin2qGIb1FYiXyNFu3d3vn1UkAS/YPYeqSi5+4klN+CgaO8I54qyDGHn/HjLWrDldI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186501-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186501: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=853f707cd9e2eb9410dbfbadbd5a01ac0252ef83
X-Osstest-Versions-That:
    xen=11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 03:27:32 +0000

flight 186501 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186501/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  853f707cd9e2eb9410dbfbadbd5a01ac0252ef83
baseline version:
 xen                  11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828

Last test of basis   186486  2024-06-25 11:00:23 Z    0 days
Testing same since   186501  2024-06-25 23:09:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
  Federico Serafini <federico.serafini@bugseng.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   11ea49a3fd..853f707cd9  853f707cd9e2eb9410dbfbadbd5a01ac0252ef83 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 03:40:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 03:40:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748298.1155917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMJVv-0004Hf-47; Wed, 26 Jun 2024 03:40:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748298.1155917; Wed, 26 Jun 2024 03:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMJVv-0004HY-0i; Wed, 26 Jun 2024 03:40:07 +0000
Received: by outflank-mailman (input) for mailman id 748298;
 Wed, 26 Jun 2024 03:40:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6czr=N4=bombadil.srs.infradead.org=BATV+6ef32e7df62b9ce2c558+7612+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sMJVt-000433-1Y
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 03:40:05 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c46e903c-336d-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 05:40:02 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.97.1 #2 (Red
 Hat Linux)) id 1sMJVe-00000005Ecx-3HCZ;
 Wed, 26 Jun 2024 03:39:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c46e903c-336d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=q35hat2STPhlr4DAn4+ABXn/NxJWaK5wgb5DlM6xpCg=; b=v38aUgExch/0roE+EfRJ2Glql8
	NJM5fN+YEIB1TYrzijSpWBKOBPz4ePJWBmxAeIq8Vy0dgCM+B8D7e2Uu/8N8Um6Vr/QojQ2rgIn5c
	kZ0Qn4LEY3fOdOmfyKwMWEDOh9Ymv1GdTyRuFBajkXiPOidtU70du4sEE9xKuUof+3qQcWAWRaj21
	Q277QHf25k5Hs4ZG2a3z7KhsZxdHv16NmcJJI6IIXWv8/dTL6K+es5s2mnReHc51vjem7WfsNQ2Oh
	GFZANqqhPdUBbop3nLGkV+BdHz5dumNmTIDB8b0TUPLkhRAW/U3MN14I64qofM3LTvdEoRKFoAPJn
	ZHT0K0YA==;
Date: Tue, 25 Jun 2024 20:39:50 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Oliver Sang <oliver.sang@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>, Christoph Hellwig <hch@lst.de>,
	oe-lkp@lists.linux.dev, lkp@intel.com, Jens Axboe <axboe@kernel.dk>,
	Ulf Hansson <ulf.hansson@linaro.org>,
	Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	linux-block@vger.kernel.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, ying.huang@intel.com,
	feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [axboe-block:for-next] [block]  1122c0c1cc:  aim7.jobs-per-min
 22.6% improvement
Message-ID: <ZnuNhkH26nZi8fz6@infradead.org>
References: <202406250948.e0044f1d-oliver.sang@intel.com>
 <ZnqGf49cvy6W-xWf@infradead.org>
 <Znt4qTr/NdeIPyNp@xsang-OptiPlex-9020>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Znt4qTr/NdeIPyNp@xsang-OptiPlex-9020>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Wed, Jun 26, 2024 at 10:10:49AM +0800, Oliver Sang wrote:
> I'm not sure I understand this test request. As the title says, we see
> a good aim7 improvement for 1122c0c1cc, and we didn't observe any other
> issues with this commit.

The improvement suggests we are not sending cache flushes when we should
be sending them, or at least handling them in md.

> Do you mean this improvement is unexpected, or that it exposes some
> problem instead? And with the patch below, should the performance go
> back to the level of the parent of 1122c0c1cc?
> 
> Sure, we're happy to test your patches. I noticed there are two series:
> [1]
> https://lore.kernel.org/all/20240625110603.50885-2-hch@lst.de/
> which includes "[PATCH 1/7] md: set md-specific flags for all queue limits"
> [2]
> https://lore.kernel.org/all/20240625145955.115252-2-hch@lst.de/
> which includes "[PATCH 1/8] md: set md-specific flags for all queue limits"
> 
> Which one do you suggest we test?
> Do we only need to apply the first patch, "md: set md-specific flags
> for all queue limits", on top of 1122c0c1cc?
> And is the expectation that performance goes back to the level of the
> parent of 1122c0c1cc?

Either just the patch in my reply or the entire [2] series would be fine.

Thanks!



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 03:46:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 03:46:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748305.1155929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMJbs-0004t7-P7; Wed, 26 Jun 2024 03:46:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748305.1155929; Wed, 26 Jun 2024 03:46:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMJbs-0004t0-MV; Wed, 26 Jun 2024 03:46:16 +0000
Received: by outflank-mailman (input) for mailman id 748305;
 Wed, 26 Jun 2024 03:46:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJbr-0004sq-B1; Wed, 26 Jun 2024 03:46:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJbr-0004Bx-A4; Wed, 26 Jun 2024 03:46:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJbq-00075e-W4; Wed, 26 Jun 2024 03:46:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMJbq-0008Hc-VW; Wed, 26 Jun 2024 03:46:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HosKY/nVlBmZ4ncJ4+yLTogNFiZbnhMee+z/YJv5f68=; b=WljJVKbZ9REY9n+ZEK3sVCqKFE
	R6i/dhSaTLo2lyfEJiaZrgqiHq+/QB6Nw8dKfe/uf+FL/WMkLNN2vbogiw6ht5lUsv0EXrqAfTV+/
	ic8ZeY1OH/vs3hrcVUcu5eOvG4ctBGjG4EO0vBy71aFjj9r8b3ltK2xoPYVgw6lj5Mc4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186505-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186505: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=0333faf50e49d3b3ea2c624b4d403b405b3107a1
X-Osstest-Versions-That:
    ovmf=dc93ff8a5561a3085eeda9d4ac00d40545eb43cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 03:46:14 +0000

flight 186505 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186505/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 0333faf50e49d3b3ea2c624b4d403b405b3107a1
baseline version:
 ovmf                 dc93ff8a5561a3085eeda9d4ac00d40545eb43cd

Last test of basis   186502  2024-06-26 00:13:00 Z    0 days
Testing same since   186505  2024-06-26 02:03:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dongyan Qian <qiandongyan@loongson.cn>
  Leif Lindholm <quic_llindhol@quicinc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   dc93ff8a55..0333faf50e  0333faf50e49d3b3ea2c624b4d403b405b3107a1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 05:01:42 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 05:01:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748316.1155946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMKmc-0008Af-2i; Wed, 26 Jun 2024 05:01:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748316.1155946; Wed, 26 Jun 2024 05:01:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMKmb-0008AY-WF; Wed, 26 Jun 2024 05:01:26 +0000
Received: by outflank-mailman (input) for mailman id 748316;
 Wed, 26 Jun 2024 05:01:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Sg3A=N4=redhat.com=david@srs-se1.protection.inumbo.net>)
 id 1sMKmb-0008AS-Fa
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 05:01:25 +0000
Received: from us-smtp-delivery-124.mimecast.com
 (us-smtp-delivery-124.mimecast.com [170.10.133.124])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2221997e-3379-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 07:01:24 +0200 (CEST)
Received: from mail-lf1-f70.google.com (mail-lf1-f70.google.com
 [209.85.167.70]) by relay.mimecast.com with ESMTP with STARTTLS
 (version=TLSv1.3, cipher=TLS_AES_256_GCM_SHA384) id
 us-mta-280-g0qzPK1bP_Gar-TLc1VdAA-1; Wed, 26 Jun 2024 01:01:20 -0400
Received: by mail-lf1-f70.google.com with SMTP id
 2adb3069b0e04-52cdbeaafcdso3196672e87.1
 for <xen-devel@lists.xenproject.org>; Tue, 25 Jun 2024 22:01:19 -0700 (PDT)
Received: from [192.168.1.34] (p548825e3.dip0.t-ipconnect.de. [84.136.37.227])
 by smtp.gmail.com with ESMTPSA id
 ffacd0b85a97d-366389b8ab0sm14651814f8f.27.2024.06.25.22.01.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 Jun 2024 22:01:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2221997e-3379-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
	s=mimecast20190719; t=1719378082;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=HwF4neA1Ypjh3MK2yusEpJBcmzkoo53PN4KI5vuznrw=;
	b=Ff44NXkkYGlfjwyisLoA17onDsw51prP+Y5t/hz7Tx3+hrm31ZktFXBfq9xzmJoehBUVNU
	+5cVHwXzG8+vyIddQihtdZ4/tf4XCijIc/oUTwDhfT6T+WoVvbPP4/NFg7JvGlGazNsTEO
	7wKn6m8EL6PhsRh/UX91/iAIxNcET7k=
X-MC-Unique: g0qzPK1bP_Gar-TLc1VdAA-1
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719378078; x=1719982878;
        h=content-transfer-encoding:in-reply-to:organization:autocrypt
         :content-language:from:references:cc:to:subject:user-agent
         :mime-version:date:message-id:x-gm-message-state:from:to:cc:subject
         :date:message-id:reply-to;
        bh=HwF4neA1Ypjh3MK2yusEpJBcmzkoo53PN4KI5vuznrw=;
        b=C4tLeqsqtJiKxpdZbSTsyTT8tZHqUC4/ybBRSgLyfSO0tBKrX6mZZ29VFcihCeXHtZ
         8wMMvqG+qjQrfcypkt4RdJ7mVRdqg0/ksjiy+53iS/oYpcNobOgMBu9yHVAZW57s3mTE
         ssH2Kf7+StjSTjI4OCT0NY870wLECP21fbrR0z58/tm0bBc6RAZTL9s479kNwdjpa1yy
         HxVoLdByH5vLYpfeRxp1E9J1dCZ2HJZA8SCei4VlLsQqQkq5rQbxJK4KgrqcbZiIKTer
         8N89UU4SvobSTPvLuzzoKIaFIOy3y7LAmDmp1mCi9nIa2AIg6bHhQs9TmJH11w5LfJUC
         c7BQ==
X-Forwarded-Encrypted: i=1; AJvYcCWOolQNUF2DimrXqJWs9eohunVGYr1ghjz81/TqYdjw3xYatU2T95ke71yHcGrFpKyMvayU7SQg1BMFYpiERJSn3g6nKunU0f9yiljsFyM=
X-Gm-Message-State: AOJu0YwsrS54M/SXRkeUXHle/aYZZsn7kFytale+jSNZe2+IASzYQjy9
	/aL4EsLdPe9o2ysVR1xe8I+ZGFMITVTRQZAAM/aXXs9RBMXc5KODPOjuTvEEeOi8wI9CQu7QG9b
	1+5VvLkCq304YX+VjSIS8SEQEpKnLqhKDGxDy4QDtylK6Ksa0AVGwEt19UUdKUDq8
X-Received: by 2002:ac2:47e8:0:b0:52c:86de:cb61 with SMTP id 2adb3069b0e04-52ce1832c4fmr6006880e87.10.1719378078689;
        Tue, 25 Jun 2024 22:01:18 -0700 (PDT)
X-Google-Smtp-Source: AGHT+IHKAOyu0d6DQon+14NNMx6s/icZdB+WrdR9cuGPbDudWABy5ihHHzhhV35oGXGpDEDr82ofyQ==
X-Received: by 2002:ac2:47e8:0:b0:52c:86de:cb61 with SMTP id 2adb3069b0e04-52ce1832c4fmr6006851e87.10.1719378078225;
        Tue, 25 Jun 2024 22:01:18 -0700 (PDT)
Message-ID: <9174171f-314f-4d8f-8b14-5bb6d34b45a5@redhat.com>
Date: Wed, 26 Jun 2024 07:01:16 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v1 0/3] mm/memory_hotplug: use PageOffline() instead of
 PageReserved() for !ZONE_DEVICE
To: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org,
 linux-hyperv@vger.kernel.org, virtualization@lists.linux.dev,
 xen-devel@lists.xenproject.org, kasan-dev@googlegroups.com,
 Mike Rapoport <rppt@kernel.org>, Oscar Salvador <osalvador@suse.de>,
 "K. Y. Srinivasan" <kys@microsoft.com>,
 Haiyang Zhang <haiyangz@microsoft.com>, Wei Liu <wei.liu@kernel.org>,
 Dexuan Cui <decui@microsoft.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Jason Wang <jasowang@redhat.com>, Xuan Zhuo <xuanzhuo@linux.alibaba.com>,
 =?UTF-8?Q?Eugenio_P=C3=A9rez?= <eperezma@redhat.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Alexander Potapenko <glider@google.com>, Marco Elver <elver@google.com>,
 Dmitry Vyukov <dvyukov@google.com>
References: <20240607090939.89524-1-david@redhat.com>
 <20240625154344.9f3db1ddfe2cb9cdd5583783@linux-foundation.org>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; keydata=
 xsFNBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABzSREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT7CwZgEEwEIAEICGwMGCwkIBwMCBhUIAgkKCwQW
 AgMBAh4BAheAAhkBFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl8Ox4kFCRKpKXgACgkQTd4Q
 9wD/g1oHcA//a6Tj7SBNjFNM1iNhWUo1lxAja0lpSodSnB2g4FCZ4R61SBR4l/psBL73xktp
 rDHrx4aSpwkRP6Epu6mLvhlfjmkRG4OynJ5HG1gfv7RJJfnUdUM1z5kdS8JBrOhMJS2c/gPf
 wv1TGRq2XdMPnfY2o0CxRqpcLkx4vBODvJGl2mQyJF/gPepdDfcT8/PY9BJ7FL6Hrq1gnAo4
 3Iv9qV0JiT2wmZciNyYQhmA1V6dyTRiQ4YAc31zOo2IM+xisPzeSHgw3ONY/XhYvfZ9r7W1l
 pNQdc2G+o4Di9NPFHQQhDw3YTRR1opJaTlRDzxYxzU6ZnUUBghxt9cwUWTpfCktkMZiPSDGd
 KgQBjnweV2jw9UOTxjb4LXqDjmSNkjDdQUOU69jGMUXgihvo4zhYcMX8F5gWdRtMR7DzW/YE
 BgVcyxNkMIXoY1aYj6npHYiNQesQlqjU6azjbH70/SXKM5tNRplgW8TNprMDuntdvV9wNkFs
 9TyM02V5aWxFfI42+aivc4KEw69SE9KXwC7FSf5wXzuTot97N9Phj/Z3+jx443jo2NR34XgF
 89cct7wJMjOF7bBefo0fPPZQuIma0Zym71cP61OP/i11ahNye6HGKfxGCOcs5wW9kRQEk8P9
 M/k2wt3mt/fCQnuP/mWutNPt95w9wSsUyATLmtNrwccz63XOwU0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAHCwXwEGAEIACYCGwwWIQQb2cqtc1xMOkYN/MpN3hD3
 AP+DWgUCXw7HsgUJEqkpoQAKCRBN3hD3AP+DWrrpD/4qS3dyVRxDcDHIlmguXjC1Q5tZTwNB
 boaBTPHSy/Nksu0eY7x6HfQJ3xajVH32Ms6t1trDQmPx2iP5+7iDsb7OKAb5eOS8h+BEBDeq
 3ecsQDv0fFJOA9ag5O3LLNk+3x3q7e0uo06XMaY7UHS341ozXUUI7wC7iKfoUTv03iO9El5f
 XpNMx/YrIMduZ2+nd9Di7o5+KIwlb2mAB9sTNHdMrXesX8eBL6T9b+MZJk+mZuPxKNVfEQMQ
 a5SxUEADIPQTPNvBewdeI80yeOCrN+Zzwy/Mrx9EPeu59Y5vSJOx/z6OUImD/GhX7Xvkt3kq
 Er5KTrJz3++B6SH9pum9PuoE/k+nntJkNMmQpR4MCBaV/J9gIOPGodDKnjdng+mXliF3Ptu6
 3oxc2RCyGzTlxyMwuc2U5Q7KtUNTdDe8T0uE+9b8BLMVQDDfJjqY0VVqSUwImzTDLX9S4g/8
 kC4HRcclk8hpyhY2jKGluZO0awwTIMgVEzmTyBphDg/Gx7dZU1Xf8HFuE+UZ5UDHDTnwgv7E
 th6RC9+WrhDNspZ9fJjKWRbveQgUFCpe1sa77LAw+XFrKmBHXp9ZVIe90RMe2tRL06BGiRZr
 jPrnvUsUUsjRoRNJjKKA/REq+sAnhkNPPZ/NNMjaZ5b8Tovi8C0tmxiCHaQYqj7G2rgnT0kt
 WNyWQQ==
Organization: Red Hat
In-Reply-To: <20240625154344.9f3db1ddfe2cb9cdd5583783@linux-foundation.org>
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 26.06.24 00:43, Andrew Morton wrote:
> afaict we're in decent state to move this series into mm-stable.  I've
> tagged the following issues:
> 
> https://lkml.kernel.org/r/80532f73e52e2c21fdc9aac7bce24aefb76d11b0.camel@linux.intel.com
> https://lkml.kernel.org/r/30b5d493-b7c2-4e63-86c1-dcc73d21dc15@redhat.com
> 
> Have these been addressed and are we ready to send this series into the world?

Yes, should all be addressed and this should be good to go.

-- 
Cheers,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 05:16:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 05:16:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748321.1155956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sML1Z-0002iB-C6; Wed, 26 Jun 2024 05:16:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748321.1155956; Wed, 26 Jun 2024 05:16:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sML1Z-0002i4-9M; Wed, 26 Jun 2024 05:16:53 +0000
Received: by outflank-mailman (input) for mailman id 748321;
 Wed, 26 Jun 2024 05:16:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sML1Y-0002hu-4Y; Wed, 26 Jun 2024 05:16:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sML1X-0006DR-N8; Wed, 26 Jun 2024 05:16:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sML1X-0000tS-Ba; Wed, 26 Jun 2024 05:16:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sML1X-0004qL-BH; Wed, 26 Jun 2024 05:16:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S+6bGISa24rM5O0VdxtQ4cAaNw5GhZj775vGJr0c1nM=; b=YdyzPsr3uq0j8zuiNPh2/0h/nc
	Pa/3eXV895bpk1WWqNkH6P53cOVnJ3LCu2q9KHFC77Tx08MTZ22OxMfOdpPOJV6NYDyhWv6R7t+d/
	s1oNqEL0g3an94j/fhBV7qmsBo35aOipfh2cHwF7udaJtfzyv5hmJSUPO4/lhOUzL6iM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186499-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186499: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828
X-Osstest-Versions-That:
    xen=9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 05:16:51 +0000

flight 186499 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186499/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186465
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186465
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186465
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828
baseline version:
 xen                  9e7c26ad8532c3efda174dee5ab8bdbeef1e4f6d

Last test of basis   186465  2024-06-24 01:51:55 Z    2 days
Failing since        186471  2024-06-24 19:07:13 Z    1 days    3 attempts
Testing same since   186499  2024-06-25 22:39:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@vates.tech>
  Federico Serafini <federico.serafini@bugseng.com>
  Jan Beulich <jbeulich@suse.com>
  Michal Orzel <michal.orzel@amd.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9e7c26ad85..11ea49a3fd  11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 05:54:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 05:54:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748353.1156042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLc3-0001ic-1Y; Wed, 26 Jun 2024 05:54:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748353.1156042; Wed, 26 Jun 2024 05:54:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLc2-0001iV-TT; Wed, 26 Jun 2024 05:54:34 +0000
Received: by outflank-mailman (input) for mailman id 748353;
 Wed, 26 Jun 2024 05:54:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t7g0=N4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sMLc1-0001i9-Fr
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 05:54:33 +0000
Received: from smtp-out1.suse.de (smtp-out1.suse.de [195.135.223.130])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8df143aa-3380-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 07:54:30 +0200 (CEST)
Received: from imap1.dmz-prg2.suse.org (unknown [10.150.64.97])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BBDD021A56;
 Wed, 26 Jun 2024 05:54:29 +0000 (UTC)
Received: from imap1.dmz-prg2.suse.org (localhost [127.0.0.1])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256)
 (No client certificate requested)
 by imap1.dmz-prg2.suse.org (Postfix) with ESMTPS id 8944713ABD;
 Wed, 26 Jun 2024 05:54:29 +0000 (UTC)
Received: from dovecot-director2.suse.de ([2a07:de40:b281:106:10:150:64:167])
 by imap1.dmz-prg2.suse.org with ESMTPSA id P71zHxWte2Y0LgAAD6G6ig
 (envelope-from <jgross@suse.com>); Wed, 26 Jun 2024 05:54:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8df143aa-3380-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1719381269; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=clV7ss7HqK/u0Op19fT3KuCZ48T+wC3u7mMBevY6n7k=;
	b=fYjwqLOiQF4qpYPIFNEqhl3vUV66nUDlg/6CNhMb5/R+64wLtXf/Fwgq8mUezsQn4vfWFd
	bAodLrsXmYHjGqqWOEZ6OUT9ZmViinmja/0MZkYHqjYZtznk4E22CBj4/LO4si1MPwF2c/
	ZGwIp0uHwt0E+wyuI38eRJNXBpLKoqo=
Authentication-Results: smtp-out1.suse.de;
	none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1719381269; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=clV7ss7HqK/u0Op19fT3KuCZ48T+wC3u7mMBevY6n7k=;
	b=fYjwqLOiQF4qpYPIFNEqhl3vUV66nUDlg/6CNhMb5/R+64wLtXf/Fwgq8mUezsQn4vfWFd
	bAodLrsXmYHjGqqWOEZ6OUT9ZmViinmja/0MZkYHqjYZtznk4E22CBj4/LO4si1MPwF2c/
	ZGwIp0uHwt0E+wyuI38eRJNXBpLKoqo=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Dario Faggioli <dfaggioli@suse.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] xen/sched: fix error handling in cpu_schedule_up()
Date: Wed, 26 Jun 2024 07:54:25 +0200
Message-Id: <20240626055425.3622-1-jgross@suse.com>
X-Mailer: git-send-email 2.35.3
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Spam-Score: -2.04
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-2.04 / 50.00];
	BAYES_HAM(-2.24)[96.40%];
	MID_CONTAINS_FROM(1.00)[];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	R_MISSING_CHARSET(0.50)[];
	NEURAL_HAM_SHORT(-0.20)[-0.999];
	MIME_GOOD(-0.10)[text/plain];
	MIME_TRACE(0.00)[0:+];
	TO_DN_SOME(0.00)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	ARC_NA(0.00)[];
	DKIM_SIGNED(0.00)[suse.com:s=susede1];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	RCPT_COUNT_FIVE(0.00)[5];
	RCVD_COUNT_TWO(0.00)[2];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	DBL_BLOCKED_OPENRESOLVER(0.00)[suse.com:email,imap1.dmz-prg2.suse.org:helo];
	RCVD_TLS_ALL(0.00)[]

If cpu_schedule_up() fails, it needs to undo all externally visible
changes it has made up to that point.

The reason is that cpu_schedule_callback() won't be called with the
CPU_UP_CANCELED notifier in case cpu_schedule_up() failed.
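
The pattern the patch applies (moving cpu_schedule_down() ahead of
cpu_schedule_up() and invoking it on the mid-function failure path, since
no CPU_UP_CANCELED callback will do the teardown) can be sketched in
isolation. This is an editor's illustrative sketch only: the state
variables and function names below are hypothetical stand-ins, not the
real Xen scheduler data structures.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define NR_CPUS 4

/* Hypothetical stand-ins for the per-CPU scheduler state that
 * cpu_schedule_up() makes externally visible. */
static bool timer_initialised[NR_CPUS];
static void *sched_res[NR_CPUS];

/* Plays the role of cpu_schedule_down(): tear down everything the
 * "up" path published for this CPU. */
static void sketch_schedule_down(unsigned int cpu)
{
    timer_initialised[cpu] = false;
    free(sched_res[cpu]);
    sched_res[cpu] = NULL;
}

/* simulate_alloc_failure stands in for the idle-vCPU allocation that
 * can fail part-way through cpu_schedule_up(). */
static int sketch_schedule_up(unsigned int cpu, bool simulate_alloc_failure)
{
    sched_res[cpu] = malloc(16);
    if ( sched_res[cpu] == NULL )
        return -1;                  /* nothing visible yet: plain return */

    timer_initialised[cpu] = true;  /* state is externally visible now */

    if ( simulate_alloc_failure )
    {
        /* The fix: unwind before returning, because no CPU_UP_CANCELED
         * notifier callback will run after a failed "up". */
        sketch_schedule_down(cpu);
        return -1;
    }

    return 0;
}
```

On the failure path the sketch leaves no trace behind, which is exactly
the invariant the patch restores for cpu_schedule_up().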

Reported-by: Jan Beulich <jbeulich@suse.com>
Fixes: 207589dbacd4 ("xen/sched: move per cpu scheduler private data into struct sched_resource")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- undo changes in cpu_schedule_up() in case of failure
---
 xen/common/sched/core.c | 63 +++++++++++++++++++++--------------------
 1 file changed, 33 insertions(+), 30 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d84b65f197..c466711e9e 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2755,6 +2755,36 @@ static struct sched_resource *sched_alloc_res(void)
     return sr;
 }
 
+static void cf_check sched_res_free(struct rcu_head *head)
+{
+    struct sched_resource *sr = container_of(head, struct sched_resource, rcu);
+
+    free_cpumask_var(sr->cpus);
+    if ( sr->sched_unit_idle )
+        sched_free_unit_mem(sr->sched_unit_idle);
+    xfree(sr);
+}
+
+static void cpu_schedule_down(unsigned int cpu)
+{
+    struct sched_resource *sr;
+
+    rcu_read_lock(&sched_res_rculock);
+
+    sr = get_sched_res(cpu);
+
+    kill_timer(&sr->s_timer);
+
+    cpumask_clear_cpu(cpu, &sched_res_mask);
+    set_sched_res(cpu, NULL);
+
+    /* Keep idle unit. */
+    sr->sched_unit_idle = NULL;
+    call_rcu(&sr->rcu, sched_res_free);
+
+    rcu_read_unlock(&sched_res_rculock);
+}
+
 static int cpu_schedule_up(unsigned int cpu)
 {
     struct sched_resource *sr;
@@ -2794,7 +2824,10 @@ static int cpu_schedule_up(unsigned int cpu)
         idle_vcpu[cpu]->sched_unit->res = sr;
 
     if ( idle_vcpu[cpu] == NULL )
+    {
+        cpu_schedule_down(cpu);
         return -ENOMEM;
+    }
 
     idle_vcpu[cpu]->sched_unit->rendezvous_in_cnt = 0;
 
@@ -2812,36 +2845,6 @@ static int cpu_schedule_up(unsigned int cpu)
     return 0;
 }
 
-static void cf_check sched_res_free(struct rcu_head *head)
-{
-    struct sched_resource *sr = container_of(head, struct sched_resource, rcu);
-
-    free_cpumask_var(sr->cpus);
-    if ( sr->sched_unit_idle )
-        sched_free_unit_mem(sr->sched_unit_idle);
-    xfree(sr);
-}
-
-static void cpu_schedule_down(unsigned int cpu)
-{
-    struct sched_resource *sr;
-
-    rcu_read_lock(&sched_res_rculock);
-
-    sr = get_sched_res(cpu);
-
-    kill_timer(&sr->s_timer);
-
-    cpumask_clear_cpu(cpu, &sched_res_mask);
-    set_sched_res(cpu, NULL);
-
-    /* Keep idle unit. */
-    sr->sched_unit_idle = NULL;
-    call_rcu(&sr->rcu, sched_res_free);
-
-    rcu_read_unlock(&sched_res_rculock);
-}
-
 void sched_rm_cpu(unsigned int cpu)
 {
     int rc;
-- 
2.35.3



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 06:11:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 06:11:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748361.1156051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLrz-00052O-BX; Wed, 26 Jun 2024 06:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748361.1156051; Wed, 26 Jun 2024 06:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLrz-00052H-8S; Wed, 26 Jun 2024 06:11:03 +0000
Received: by outflank-mailman (input) for mailman id 748361;
 Wed, 26 Jun 2024 06:11:01 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMLrx-00052A-Jl
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 06:11:01 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id db6e3ad3-3382-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 08:10:59 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.201.39])
 by support.bugseng.com (Postfix) with ESMTPSA id B910E4EE0738;
 Wed, 26 Jun 2024 08:10:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db6e3ad3-3382-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [XEN PATCH v2 for-4.19] automation/eclair: add deviations agreed in MISRA meetings
Date: Wed, 26 Jun 2024 08:10:50 +0200
Message-Id: <816b323f5e325784947d09502f9352188bd325cf.1719381829.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update ECLAIR configuration to take into account the deviations
agreed during the MISRA meetings.

While doing this, remove the obsolete "Set [123]" comments.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v2:
- keep sync between deviations.ecl and deviations.rst;
- use 'deliberate' tag for all the deviations of R14.3;
- do not use the term "project-wide deviation" since it does not add useful
  information.
---
 .../eclair_analysis/ECLAIR/deviations.ecl     | 93 +++++++++++++++++--
 docs/misra/deviations.rst                     | 81 ++++++++++++++--
 2 files changed, 158 insertions(+), 16 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index ae2eaf50f7..37cad8bf68 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -1,5 +1,3 @@
-### Set 1 ###
-
 #
 # Series 2.
 #
@@ -23,6 +21,11 @@ Constant expressions and unreachable branches of if and switch statements are ex
 -config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(any_exp(macro(name(ASSERT_UNREACHABLE||PARSE_ERR_RET||PARSE_ERR||FAIL_MSR||FAIL_CPUID)))))"}
 -doc_end
 
+-doc_begin="The asm-offset files are deliberately not linked, since they are used to generate definitions for asm modules."
+-file_tag+={asm_offsets, "^xen/arch/(arm|x86)/(arm32|arm64|x86_64)/asm-offsets\\.c$"}
+-config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(file(asm_offsets)))"}
+-doc_end
+
 -doc_begin="Pure declarations (i.e., declarations without initialization) are
 not executable, and therefore it is safe for them to be unreachable."
 -config=MC3R1.R2.1,ignored_stmts+={"any()", "pure_decl()"}
@@ -63,6 +66,12 @@ they are not instances of commented-out code."
 -config=MC3R1.D4.3,reports+={disapplied,"!(any_area(any_loc(file(^xen/arch/arm/arm64/.*$))))"}
 -doc_end
 
+-doc_begin="The inline asm in 'arm64/lib/bitops.c' is tightly coupled with the surrounding C code that acts as a wrapper, so it has been decided not to add an additional encapsulation layer."
+-file_tag+={arm64_bitops, "^xen/arch/arm/arm64/lib/bitops\\.c$"}
+-config=MC3R1.D4.3,reports+={deliberate, "all_area(any_loc(file(arm64_bitops)&&any_exp(macro(^(bit|test)op$))))"}
+-config=MC3R1.D4.3,reports+={deliberate, "any_area(any_loc(file(arm64_bitops))&&context(name(int_clear_mask16)))"}
+-doc_end
+
 -doc_begin="This header file is autogenerated or empty, therefore it poses no
 risk if included more than once."
 -file_tag+={empty_header, "^xen/arch/arm/efi/runtime\\.h$"}
@@ -213,10 +222,25 @@ Therefore the absence of prior declarations is safe."
 -config=MC3R1.R8.4,declarations+={safe, "loc(file(asm_defns))&&^current_stack_pointer$"}
 -doc_end
 
+-doc_begin="The functions apei_(read|check|clear)_mce are dead code and are excluded from non-debug builds, therefore the absence of prior declarations is safe."
+-config=MC3R1.R8.4,declarations+={safe, "^apei_(read|check|clear)_mce\\(.*$"}
+-doc_end
+
 -doc_begin="asmlinkage is a marker to indicate that the function is only used to interface with asm modules."
 -config=MC3R1.R8.4,declarations+={safe,"loc(text(^(?s).*asmlinkage.*$, -1..0))"}
 -doc_end
 
+-doc_begin="Given that bsearch and sort are defined with the attribute 'gnu_inline', the absence of a prior declaration is deliberate.
+See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for a full explanation of gnu_inline."
+-file_tag+={bsearch_sort, "^xen/include/xen/(sort|lib)\\.h$"}
+-config=MC3R1.R8.4,reports+={deliberate, "any_area(any_loc(file(bsearch_sort))&&decl(name(bsearch||sort)))"}
+-doc_end
+
+-doc_begin="first_valid_mfn is defined in this way because the current lack of NUMA support in Arm and PPC requires it."
+-file_tag+={first_valid_mfn, "^xen/common/page_alloc\\.c$"}
+-config=MC3R1.R8.4,declarations+={deliberate,"loc(file(first_valid_mfn))"}
+-doc_end
+
 -doc_begin="The following variables are compiled in multiple translation units
 belonging to different executables and therefore are safe."
 -config=MC3R1.R8.6,declarations+={safe, "name(current_stack_pointer||bsearch||sort)"}
@@ -257,8 +281,6 @@ dimension is higher than omitting the dimension."
 -config=MC3R1.R9.5,reports+={deliberate, "any()"}
 -doc_end
 
-### Set 2 ###
-
 #
 # Series 10.
 #
@@ -299,7 +321,6 @@ integers arguments on two's complement architectures
 -config=MC3R1.R10.1,reports+={safe, "any_area(any_loc(any_exp(macro(^ISOLATE_LSB$))))"}
 -doc_end
 
-### Set 3 ###
 -doc_begin="XEN only supports architectures where signed integers are
represented using two's complement and all the XEN developers are aware of
 this."
@@ -323,6 +344,49 @@ constant expressions are required.\""
 # Series 11
 #
 
+-doc_begin="The conversion from a function pointer to unsigned long or (void *) does not lose any information, provided that the target type has enough bits to store it."
+-config=MC3R1.R11.1,casts+={safe,
+  "from(type(canonical(__function_pointer_types)))
+   &&to(type(canonical(builtin(unsigned long)||pointer(builtin(void)))))
+   &&relation(definitely_preserves_value)"
+}
+-doc_end
+
+-doc_begin="The conversion from a function pointer to a boolean has well-known semantics that does not lead to unexpected behaviour."
+-config=MC3R1.R11.1,casts+={safe,
+  "from(type(canonical(__function_pointer_types)))
+   &&kind(pointer_to_boolean)"
+}
+-doc_end
+
+-doc_begin="The conversion from a pointer to an incomplete type to unsigned long does not lose any information, provided that the target type has enough bits to store it."
+-config=MC3R1.R11.2,casts+={safe,
+  "from(type(any()))
+   &&to(type(canonical(builtin(unsigned long))))
+   &&relation(definitely_preserves_value)"
+}
+-doc_end
+
+-doc_begin="Conversions to object pointers that have a pointee type with a smaller (i.e., less strict) alignment requirement are safe."
+-config=MC3R1.R11.3,casts+={safe,
+  "!relation(more_aligned_pointee)"
+}
+-doc_end
+
+-doc_begin="Conversions from and to integral types are safe, on the assumption that the target type has enough bits to store the value.
+See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\"."
+-config=MC3R1.R11.6,casts+={safe,
+    "(from(type(canonical(integral())))||to(type(canonical(integral()))))
+     &&relation(definitely_preserves_value)"}
+-doc_end
+
+-doc_begin="The conversion from a pointer to a boolean has well-known semantics that does not lead to unexpected behaviour."
+-config=MC3R1.R11.6,casts+={safe,
+  "from(type(canonical(__pointer_types)))
+   &&kind(pointer_to_boolean)"
+}
+-doc_end
+
 -doc_begin="Violations caused by container_of are due to pointer arithmetic operations
 with the provided offset. The resulting pointer is then immediately cast back to its
 original type, which preserves the qualifier. This use is deemed safe.
@@ -354,9 +418,18 @@ activity."
 -config=MC3R1.R14.2,reports+={disapplied,"any()"}
 -doc_end
 
--doc_begin="The XEN team relies on the fact that invariant conditions of 'if'
-statements are deliberate"
--config=MC3R1.R14.3,statements={deliberate , "wrapped(any(),node(if_stmt))" }
+-doc_begin="The XEN team relies on the fact that invariant conditions of 'if' statements and conditional operators are deliberate"
+-config=MC3R1.R14.3,statements+={deliberate, "wrapped(any(),node(if_stmt||conditional_operator||binary_conditional_operator))" }
+-doc_end
+
+-doc_begin="Switches having a 'sizeof' operator as the condition are deliberate and have limited scope."
+-config=MC3R1.R14.3,statements+={deliberate, "wrapped(any(),node(switch_stmt)&&child(cond, operator(sizeof)))" }
+-doc_end
+
+-doc_begin="The use of an invariant size argument in {put,get}_unsafe_size and array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h, is deliberate and is deemed safe."
+-file_tag+={x86_uaccess, "^xen/arch/x86(_64)?/include/asm/uaccess\\.h$"}
+-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^(put|get)_unsafe_size$))))"}
+-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^array_access_ok$))))"}
 -doc_end
 
 -doc_begin="A controlling expression of 'if' and iteration statements having integer, character or pointer type has a semantics that is well-known to all Xen developers."
@@ -527,8 +600,8 @@ falls under the jurisdiction of other MISRA rules."
 # General
 #
 
--doc_begin="do-while-0 is a well recognized loop idiom by the xen community."
--loop_idioms={do_stmt, "literal(0)"}
+-doc_begin="do-while-[01] is a well recognized loop idiom by the xen community."
+-loop_idioms={do_stmt, "literal(0)||literal(1)"}
 -doc_end
 -doc_begin="while-[01] is a well recognized loop idiom by the xen community."
 -loop_idioms+={while_stmt, "literal(0)||literal(1)"}
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index 16fc345756..d682616796 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -63,6 +63,11 @@ Deviations related to MISRA C:2012 Rules:
        switch statement.
      - ECLAIR has been configured to ignore those statements.
 
+   * - R2.1
+     - The asm-offset files are deliberately not linked, since they are used
+       to generate definitions for asm modules.
+     - Tagged as `deliberate` for ECLAIR.
+
    * - R2.2
      - Proving compliance with respect to Rule 2.2 is generally impossible:
        see `<https://arxiv.org/abs/2212.13933>`_ for details. Moreover, peer
@@ -203,6 +208,26 @@ Deviations related to MISRA C:2012 Rules:
        it.
      - Tagged as `safe` for ECLAIR.
 
+   * - R8.4
+     - Some functions are excluded from non-debug builds, therefore the
+       absence of prior declarations is safe.
+     - Tagged as `safe` for ECLAIR; such functions are:
+         - apei_read_mce()
+         - apei_check_mce()
+         - apei_clear_mce()
+
+   * - R8.4
+     - Given that bsearch and sort are defined with the attribute
+       'gnu_inline', the absence of a prior declaration is deliberate.
+       See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for
+       a full explanation of gnu_inline.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R8.4
+     - first_valid_mfn is defined in this way because the current lack of NUMA
+       support in Arm and PPC requires it.
+     - Tagged as `deliberate` for ECLAIR.
+
    * - R8.6
      - The following variables are compiled in multiple translation units
        belonging to different executables and therefore are safe.
@@ -282,6 +307,39 @@ Deviations related to MISRA C:2012 Rules:
        If no bits are set, 0 is returned.
      - Tagged as `safe` for ECLAIR.
 
+   * - R11.1
+     - The conversion from a function pointer to unsigned long or (void \*) does
+       not lose any information, provided that the target type has enough bits
+       to store it.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.1
+     - The conversion from a function pointer to a boolean has well-known
+       semantics that does not lead to unexpected behaviour.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.2
+     - The conversion from a pointer to an incomplete type to unsigned long
+       does not lose any information, provided that the target type has enough
+       bits to store it.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.3
+     - Conversions to object pointers that have a pointee type with a smaller
+       (i.e., less strict) alignment requirement are safe.
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.6
+     - Conversions from and to integral types are safe, on the assumption
+       that the target type has enough bits to store the value.
+       See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\".
+     - Tagged as `safe` for ECLAIR.
+
+   * - R11.6
+     - The conversion from a pointer to a boolean has well-known semantics
+       that does not lead to unexpected behaviour.
+     - Tagged as `safe` for ECLAIR.
+
    * - R11.8
      - Violations caused by container_of are due to pointer arithmetic operations
        with the provided offset. The resulting pointer is then immediately cast back to its
@@ -308,8 +366,19 @@ Deviations related to MISRA C:2012 Rules:
 
    * - R14.3
      - The Xen team relies on the fact that invariant conditions of 'if'
-       statements are deliberate.
-     - Project-wide deviation; tagged as `disapplied` for ECLAIR.
+       statements and conditional operators are deliberate.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R14.3
+     - Switches having a 'sizeof' operator as the condition are deliberate and
+       have limited scope.
+     - Tagged as `deliberate` for ECLAIR.
+
+   * - R14.3
+     - The use of an invariant size argument in {put,get}_unsafe_size and
+       array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h,
+       is deliberate and is deemed safe.
+     - Tagged as `deliberate` for ECLAIR.
 
    * - R14.4
      - A controlling expression of 'if' and iteration statements having
@@ -475,10 +544,10 @@ Other deviations:
    * - Deviation
      - Justification
 
-   * - do-while-0 loops
-     - The do-while-0 is a well-recognized loop idiom used by the Xen community
-       and can therefore be used, even though it would cause a number of
-       violations in some instances.
+   * - do-while-0 and do-while-1 loops
+     - The do-while-0 and do-while-1 loops are well-recognized loop idioms used
+       by the Xen community and can therefore be used, even though they would
+       cause a number of violations in some instances.
 
    * - while-0 and while-1 loops
      - while-0 and while-1 are well-recognized loop idioms used by the Xen
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 06:11:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 06:11:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748365.1156061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLsl-0005oI-KS; Wed, 26 Jun 2024 06:11:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748365.1156061; Wed, 26 Jun 2024 06:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLsl-0005oB-GT; Wed, 26 Jun 2024 06:11:51 +0000
Received: by outflank-mailman (input) for mailman id 748365;
 Wed, 26 Jun 2024 06:11:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5eJY=N4=intel.com=oliver.sang@srs-se1.protection.inumbo.net>)
 id 1sMLsj-00052A-Sg
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 06:11:50 +0000
Received: from mgamail.intel.com (mgamail.intel.com [192.198.163.10])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f752570e-3382-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 08:11:47 +0200 (CEST)
Received: from orviesa004.jf.intel.com ([10.64.159.144])
 by fmvoesa104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 Jun 2024 23:11:31 -0700
Received: from orsmsx602.amr.corp.intel.com ([10.22.229.15])
 by orviesa004.jf.intel.com with ESMTP/TLS/AES256-GCM-SHA384;
 25 Jun 2024 23:11:31 -0700
Received: from orsmsx610.amr.corp.intel.com (10.22.229.23) by
 ORSMSX602.amr.corp.intel.com (10.22.229.15) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.39; Tue, 25 Jun 2024 23:11:31 -0700
Received: from ORSEDG602.ED.cps.intel.com (10.7.248.7) by
 orsmsx610.amr.corp.intel.com (10.22.229.23) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.39 via Frontend Transport; Tue, 25 Jun 2024 23:11:31 -0700
Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.169)
 by edgegateway.intel.com (134.134.137.103) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.39; Tue, 25 Jun 2024 23:11:30 -0700
Received: from LV3PR11MB8603.namprd11.prod.outlook.com (2603:10b6:408:1b6::9)
 by SN7PR11MB7440.namprd11.prod.outlook.com (2603:10b6:806:340::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.32; Wed, 26 Jun
 2024 06:11:29 +0000
Received: from LV3PR11MB8603.namprd11.prod.outlook.com
 ([fe80::4622:29cf:32b:7e5c]) by LV3PR11MB8603.namprd11.prod.outlook.com
 ([fe80::4622:29cf:32b:7e5c%2]) with mapi id 15.20.7698.025; Wed, 26 Jun 2024
 06:11:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f752570e-3382-11ef-90a3-e314d9c70b13
X-CSE-ConnectionGUID: 86kox25kSW29xpRDGbXMjQ==
X-CSE-MsgGUID: b+sjTJ5YTQmJtL8tXm/Zrw==
X-IronPort-AV: E=McAfee;i="6700,10204,11114"; a="27841666"
X-IronPort-AV: E=Sophos;i="6.08,266,1712646000"; 
   d="scan'208";a="27841666"
X-CSE-ConnectionGUID: bxoVJ6oBSMWWmVThbKsj4w==
X-CSE-MsgGUID: 48Q4rEzXS4GwjOs5t28Trw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.08,266,1712646000"; 
   d="scan'208";a="49058636"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
Date: Wed, 26 Jun 2024 14:11:11 +0800
From: Oliver Sang <oliver.sang@intel.com>
To: Christoph Hellwig <hch@lst.de>
CC: <oe-lkp@lists.linux.dev>, <lkp@intel.com>, Jens Axboe <axboe@kernel.dk>,
	Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	<linux-m68k@lists.linux-m68k.org>, <linux-um@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>, <linux-block@vger.kernel.org>,
	<drbd-dev@lists.linbit.com>, <nbd@other.debian.org>,
	<linuxppc-dev@lists.ozlabs.org>, <ceph-devel@vger.kernel.org>,
	<virtualization@lists.linux.dev>, <xen-devel@lists.xenproject.org>,
	<linux-bcache@vger.kernel.org>, <dm-devel@lists.linux.dev>,
	<linux-raid@vger.kernel.org>, <linux-mmc@vger.kernel.org>,
	<linux-mtd@lists.infradead.org>, <nvdimm@lists.linux.dev>,
	<linux-nvme@lists.infradead.org>, <linux-s390@vger.kernel.org>,
	<linux-scsi@vger.kernel.org>, <ying.huang@intel.com>, <feng.tang@intel.com>,
	<fengwei.yin@intel.com>, <oliver.sang@intel.com>
Subject: Re: [axboe-block:for-next] [block]  bd4a633b6f: fsmark.files_per_sec
 -64.5% regression
Message-ID: <Znuw/4zMD4w5Oq2a@xsang-OptiPlex-9020>
References: <202406241546.6bbd44a7-oliver.sang@intel.com>
 <20240624083537.GA19941@lst.de>
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20240624083537.GA19941@lst.de>
X-ClientProxiedBy: SI2PR06CA0001.apcprd06.prod.outlook.com
 (2603:1096:4:186::21) To LV3PR11MB8603.namprd11.prod.outlook.com
 (2603:10b6:408:1b6::9)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: LV3PR11MB8603:EE_|SN7PR11MB7440:EE_
X-MS-Office365-Filtering-Correlation-Id: 240e8ff1-6e30-4b38-0751-08dc95a6d14f
X-LD-Processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-MS-Exchange-CrossTenant-Network-Message-Id: 240e8ff1-6e30-4b38-0751-08dc95a6d14f
X-MS-Exchange-CrossTenant-AuthSource: LV3PR11MB8603.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jun 2024 06:11:28.9475
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2rBI4MbOiujmbQZnU6yBi9cXBp9FqVQGvMABiGD37FJd/z+ZIvV86HbcSvfVmFpW3TK30K2gVluEBOTixDjIgw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SN7PR11MB7440
X-OriginatorOrg: intel.com

hi, Christoph Hellwig,

On Mon, Jun 24, 2024 at 10:35:37AM +0200, Christoph Hellwig wrote:
> This is odd to say at least.  Any chance you can check the value
> of /sys/block/$DEVICE/queue/rotational for the relevant device before
> and after this commit?  And is this an ATA or NVMe SSD?
> 

yeah, as Niklas mentioned, it's an ATA SSD.

I checked /sys/block/$DEVICE/queue/rotational before and after this commit;
both show '0'. Not sure if this is expected.
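(for reference, this is roughly how the flag was checked — a minimal sketch assuming the standard Linux sysfs layout, not the exact commands used:)

```shell
# Print the rotational flag (1 = spinning disk, 0 = SSD/virtual device)
# for every block device visible in sysfs.
for f in /sys/block/*/queue/rotational; do
    [ -e "$f" ] || continue            # glob unmatched: no block devices visible
    dev=${f#/sys/block/}               # strip the sysfs prefix...
    dev=${dev%/queue/rotational}       # ...and the queue suffix
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done
```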

anyway, I noticed you sent a patch [1]

so I applied this patch on top of bd4a633b6f, and found the performance restored.

=========================================================================================
compiler/cpufreq_governor/disk/filesize/fs2/fs/iterations/kconfig/nr_directories/nr_files_per_directory/nr_threads/rootfs/sync_method/tbox_group/test_size/testcase:
  gcc-13/performance/1SSD/9B/nfsv4/btrfs/1x/x86_64-rhel-8.3/16d/256fpd/32t/debian-12-x86_64-20240206.cgz/fsyncBeforeClose/lkp-ivb-2ep2/400M/fsmark

commit:
  1122c0c1cc ("block: move cache control settings out of queue->flags")
  bd4a633b6f ("block: move the nonrot flag to queue_limits")
  e9a0f6a398 = bd4a633b6f + patch [1]

1122c0c1cc71f740 bd4a633b6f7c3c6b6ebc1a07317 e9a0f6a398f162d115d208ad95b
---------------- --------------------------- ---------------------------
         %stddev     %change         %stddev     %change         %stddev
             \          |                \          |                \
      4177   2%     -64.7%       1475            -1.1%       4130        fsmark.files_per_sec


[1] https://lore.kernel.org/all/20240624173835.76753-1-hch@lst.de/


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 06:18:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 06:18:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748374.1156071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLyt-0006kw-D1; Wed, 26 Jun 2024 06:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748374.1156071; Wed, 26 Jun 2024 06:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMLyt-0006kp-9b; Wed, 26 Jun 2024 06:18:11 +0000
Received: by outflank-mailman (input) for mailman id 748374;
 Wed, 26 Jun 2024 06:18:09 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3wgj=N4=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sMLyr-0006kj-5p
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 06:18:09 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id da9d11b6-3383-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 08:18:07 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 85E4B68BEB; Wed, 26 Jun 2024 08:18:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da9d11b6-3383-11ef-90a3-e314d9c70b13
Date: Wed, 26 Jun 2024 08:18:04 +0200
From: Christoph Hellwig <hch@lst.de>
To: Oliver Sang <oliver.sang@intel.com>
Cc: Christoph Hellwig <hch@lst.de>, oe-lkp@lists.linux.dev, lkp@intel.com,
	Jens Axboe <axboe@kernel.dk>, Damien Le Moal <dlemoal@kernel.org>,
	Hannes Reinecke <hare@suse.de>, linux-m68k@lists.linux-m68k.org,
	linux-um@lists.infradead.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, drbd-dev@lists.linbit.com,
	nbd@other.debian.org, linuxppc-dev@lists.ozlabs.org,
	ceph-devel@vger.kernel.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-s390@vger.kernel.org, linux-scsi@vger.kernel.org,
	ying.huang@intel.com, feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [axboe-block:for-next] [block]  bd4a633b6f:
 fsmark.files_per_sec -64.5% regression
Message-ID: <20240626061804.GA23481@lst.de>
References: <202406241546.6bbd44a7-oliver.sang@intel.com> <20240624083537.GA19941@lst.de> <Znuw/4zMD4w5Oq2a@xsang-OptiPlex-9020>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <Znuw/4zMD4w5Oq2a@xsang-OptiPlex-9020>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 26, 2024 at 02:11:11PM +0800, Oliver Sang wrote:
> hi, Christoph Hellwig,
> 
> On Mon, Jun 24, 2024 at 10:35:37AM +0200, Christoph Hellwig wrote:
> > This is odd, to say the least.  Any chance you can check the value
> > of /sys/block/$DEVICE/queue/rotational for the relevant device before
> > and after this commit?  And is this an ATA or NVMe SSD?
> > 
> 
> yeah, as Niklas mentioned, it's an ATA SSD.
> 
> I checked the /sys/block/$DEVICE/queue/rotational before and after this commit;
> both show '0'. Not sure if this is expected.
> 
> Anyway, I noticed you sent a patch [1],
> 
> so I applied this patch on top of bd4a633b6f and found the performance restored.

Thanks for testing!
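The sysfs check Oliver describes can be sketched in a few lines of C (illustrative only, not from the thread; the path follows the standard Linux sysfs layout and the helper name is ours):

```c
#include <stdio.h>

/*
 * Read a block device's rotational flag, e.g. from
 * /sys/block/sda/queue/rotational.
 * Returns 0 for a non-rotational device (SSD), 1 for a spinning disk,
 * and -1 if the attribute cannot be read.
 */
static int read_rotational(const char *path)
{
    FILE *f = fopen(path, "r");
    int val = -1;

    if (f) {
        if (fscanf(f, "%d", &val) != 1)
            val = -1;
        fclose(f);
    }
    return val;
}
```

A value of 0 here before and after the commit matches what Oliver reports: the device itself never claimed to be rotational.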



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 06:22:03 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 06:22:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748382.1156081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMM2b-0000EU-SG; Wed, 26 Jun 2024 06:22:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748382.1156081; Wed, 26 Jun 2024 06:22:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMM2b-0000EN-OQ; Wed, 26 Jun 2024 06:22:01 +0000
Received: by outflank-mailman (input) for mailman id 748382;
 Wed, 26 Jun 2024 06:22:01 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMM2b-0000EH-Be
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 06:22:01 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64f43833-3384-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 08:21:59 +0200 (CEST)
Received: from [10.176.134.80] (unknown [160.78.253.181])
 by support.bugseng.com (Postfix) with ESMTPSA id C04D24EE0738;
 Wed, 26 Jun 2024 08:21:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64f43833-3384-11ef-b4bb-af5377834399
Message-ID: <16db53f9-144b-4cdb-a22d-837b4dae15ef@bugseng.com>
Date: Wed, 26 Jun 2024 08:21:58 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH for 4.19] automation/eclair: add deviations agreed in
 MISRA meetings
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com>
 <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 26/06/24 03:59, Stefano Stabellini wrote:
> On Tue, 25 Jun 2024, Federico Serafini wrote:
>> Update ECLAIR configuration to take into account the deviations
>> agreed during the MISRA meetings.
>>
>> While doing this, remove the obsolete "Set [123]" comments.
>>
>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> Thank you! Many of these deviations are really important!
> 
> I double-checked everything and it looks good. I have 2 requests for
> changes below to keep deviations.rst updated. I made a few comments about
> some deviations that could potentially be done with SAF in-code comments,
> but given the state of the release I think it is OK.
> 
> I would like to ask for a release-ack, especially for all the deviations
> about conversions, because those are critical. I think the rest is OK
> too.

The new version of the patch:
https://lists.xenproject.org/archives/html/xen-devel/2024-06/msg01471.html

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 07:07:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 07:07:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748393.1156091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMMki-0007e9-1m; Wed, 26 Jun 2024 07:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748393.1156091; Wed, 26 Jun 2024 07:07:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMMkh-0007e2-V3; Wed, 26 Jun 2024 07:07:35 +0000
Received: by outflank-mailman (input) for mailman id 748393;
 Wed, 26 Jun 2024 07:07:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMMkh-0007dZ-B6; Wed, 26 Jun 2024 07:07:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMMkh-0008St-4S; Wed, 26 Jun 2024 07:07:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMMkg-0005Et-RP; Wed, 26 Jun 2024 07:07:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMMkg-0008NI-Qx; Wed, 26 Jun 2024 07:07:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4ujRkh01oVSHrsOP/IFvaETM+7eoew4/dTipxM6M5ig=; b=SLQX7KsM5xUzzdhPK1w/+UGJjb
	Tie1JkwM1+Rfp6MCfV2kqE81Ag4/jBUm/RRimGrNUYz50VsbRqrmg1mpFoPDkaNyrpuu+0jm6rjq/
	GFYwYHLkjD3mohILk6+j/o40SQUN/QcBrcp8Gl8OwAstkwm6t/CvTrOZuBDmQgmA/fXU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186509-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186509: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e21bfae345f9eee1c3f585013ca50ad6ab4f86a1
X-Osstest-Versions-That:
    ovmf=0333faf50e49d3b3ea2c624b4d403b405b3107a1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 07:07:34 +0000

flight 186509 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186509/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e21bfae345f9eee1c3f585013ca50ad6ab4f86a1
baseline version:
 ovmf                 0333faf50e49d3b3ea2c624b4d403b405b3107a1

Last test of basis   186505  2024-06-26 02:03:04 Z    0 days
Testing same since   186509  2024-06-26 04:41:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wenxing Hou <wenxing.hou@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   0333faf50e..e21bfae345  e21bfae345f9eee1c3f585013ca50ad6ab4f86a1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 07:14:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 07:14:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748403.1156105 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMMqu-0001Zi-Ot; Wed, 26 Jun 2024 07:14:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748403.1156105; Wed, 26 Jun 2024 07:14:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMMqu-0001Zb-MP; Wed, 26 Jun 2024 07:14:00 +0000
Received: by outflank-mailman (input) for mailman id 748403;
 Wed, 26 Jun 2024 07:13:59 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evFT=N4=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMMqt-0001ZV-1m
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 07:13:59 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a7da68e3-338b-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 09:13:58 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id
 a640c23a62f3a-a72517e6225so419791766b.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 00:13:58 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72494294a0sm351460466b.182.2024.06.26.00.13.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 00:13:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a7da68e3-338b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719386037; x=1719990837; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=cD8d/TgBD9fNS2sLEz7g7R5TN3ZPE/J0Id6vGaCm4oI=;
        b=Ai7iKp64Frpv+BBPUUyi6f2DPXrpboun2xkGJVGFNR9Cd8ec5a0K3IuFmYSM1ZPyDt
         HqIC2jDF+S4EKIsiEh6Fafmz2S+zZ/9cRNf8kRvy0dfXApzufqylFCFwm5s7pCp3Msjn
         ncdmb2c5pNBH8QbDJ/b7nujXN2nJuhWC1pxSng8YJtGkB/OrbkyGLloIuGUtr4TC8Wzi
         8+nuCXWcPdgTZZnnH10+TjPwaebxytDAZzNXqdhtQpgqtjmit3oX2PTzfkw2DQc3y7Pi
         MTSbZg13y/YRghtCTUxa50CJv8EX2qKpvd2bF1dDpbuW88K2t9ez8qT6gtr2ChsKVM+g
         DyHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719386037; x=1719990837;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=cD8d/TgBD9fNS2sLEz7g7R5TN3ZPE/J0Id6vGaCm4oI=;
        b=bl4kb5HdJaE7g7QTuwDIiezco3qTVTyP8Vi1H7FvXV/nw9rdSmj2sIQyUBd7b626R2
         lbTzsJANZswu/1RoDTNmrnTRTu3ueQbTlSuvKGU3zgVJo8xgeoPxxqEOaqV4siHpxuNq
         B7mYQ0CHzsaNkEN3cO0T2yjLmCWweBfczevdXmLLaJ1/ZLlDH0IacVD4gFC0hy6bf5A0
         ZJ/ssbGYk1VnZWamQTeXlsCarnwOwrKQVRf5tdn2Iar1CKlJyy93uipgiZlRX1TPd6ZR
         OslGWndV+5F7nGvlFJ7CmDTqTuHpaX1TVhVrivBmV3+RoXiqeFs01ocYW5RpPqWgykJ+
         o/Dg==
X-Forwarded-Encrypted: i=1; AJvYcCV0iiS5BR6RiFBE4P6U43o2ITfHyWHVGSgr+83Lbu34vd1+ict84wE0Fm4pa3lVyi13XoCrBuM08RUEBZNwwkwOzoFZDhKJiWx7NEla5BI=
X-Gm-Message-State: AOJu0YwHPklsyZX7zNut865pf6ZUPSu0bIfrLG1V2XFEs8T4Xyu/0m69
	WniQUx+7neBan9GLNhKN05ZEth3Lb66Vvl2fcPp5kZ/B2ItCdYMD
X-Google-Smtp-Source: AGHT+IF/vDIGBBNXl1oT0rPFKa2Roxpn4s1+0DcNzjtD88W3PcvJIXfKyzHZcSVXTPwZxKPXIGSJtA==
X-Received: by 2002:a17:907:cbc2:b0:a72:7d5c:ace0 with SMTP id a640c23a62f3a-a727d5cae14mr202153366b.11.1719386037208;
        Wed, 26 Jun 2024 00:13:57 -0700 (PDT)
Message-ID: <6aedc34eb248ec9adba921593782a43f0cf0a8bf.camel@gmail.com>
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
From: Oleksii <oleksii.kurochko@gmail.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, George
 Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Wed, 26 Jun 2024 09:13:56 +0200
In-Reply-To: <b638c5f8-20e9-43bb-a47b-e24cb1b1b821@citrix.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
	 <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
	 <b638c5f8-20e9-43bb-a47b-e24cb1b1b821@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 22:10 +0100, Andrew Cooper wrote:
> On 25/06/2024 3:49 pm, Jan Beulich wrote:
> > On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > > --- a/xen/arch/riscv/include/asm/cmpxchg.h
> > > +++ b/xen/arch/riscv/include/asm/cmpxchg.h
> > > @@ -18,6 +18,20 @@
> > >          : "r" (new) \
> > >          : "memory" );
> > > 
> > > +/*
> > > + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is too
> > > + * old, form
> > Same question: Why's 2.37 suddenly of interest?
>
> You deleted the commit message which explains why:
>
> > RISC-V does a conditional toolchain test for the Zbb extension
> > (xen/arch/riscv/rules.mk), but unconditionally uses the ANDN
> > instruction in emulate_xchg_1_2().
>
> Either Zbb needs to be mandatory (both in the toolchain and the board
> running Xen), or emulate_xchg_1_2() needs to not use the ANDN
> instruction.
But we can't go without Zbb: there are some things in <xen/bitops.h>
which now require this extension. At the current state of development
it is mandatory.

~ Oleksii

>
> I opted for the latter.
>
> ~Andrew
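For reference, the semantics at issue can be sketched in plain C (illustrative only, not the Xen patch): ANDN from the RISC-V Zbb extension computes rs1 & ~rs2, so a toolchain or board without Zbb can get the same result from a NOT followed by an AND.

```c
/*
 * ANDN as defined by the RISC-V Zbb extension: clear in rs1 every bit
 * that is set in rs2.  Binutils < 2.37 doesn't know the ANDN mnemonic,
 * but the two-instruction NOT + AND sequence is always available.
 */
static inline unsigned long andn(unsigned long rs1, unsigned long rs2)
{
    return rs1 & ~rs2;
}
```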



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 07:24:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 07:24:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748412.1156116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMN0e-0003w5-MY; Wed, 26 Jun 2024 07:24:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748412.1156116; Wed, 26 Jun 2024 07:24:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMN0e-0003vy-JP; Wed, 26 Jun 2024 07:24:04 +0000
Received: by outflank-mailman (input) for mailman id 748412;
 Wed, 26 Jun 2024 07:24:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IXAa=N4=bugseng.com=alessandro.zucchelli@srs-se1.protection.inumbo.net>)
 id 1sMN0c-0003vq-Qo
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 07:24:02 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f5a38bf-338d-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 09:24:01 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id D53DF4EE0738;
 Wed, 26 Jun 2024 09:24:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f5a38bf-338d-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Wed, 26 Jun 2024 09:24:00 +0200
From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Jan Beulich <jbeulich@suse.com>, consulting@bugseng.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/3] common/kernel: address violation of MISRA C Rule 13.6
Reply-To: alessandro.zucchelli@bugseng.com
Mail-Reply-To: alessandro.zucchelli@bugseng.com
In-Reply-To: <alpine.DEB.2.22.394.2406251812480.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719308599.git.alessandro.zucchelli@bugseng.com>
 <54949b0561263b9f18da500255836d89ca8838ba.1719308599.git.alessandro.zucchelli@bugseng.com>
 <521767cb-ac08-48c5-bd91-b30c1d192331@suse.com>
 <alpine.DEB.2.22.394.2406251812480.3635@ubuntu-linux-20-04-desktop>
Message-ID: <34415eccf4a3204b694c93cbf0d1e816@bugseng.com>
X-Sender: alessandro.zucchelli@bugseng.com
Organization: BUGSENG Srl
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-26 03:13, Stefano Stabellini wrote:

Hi,
> On Tue, 25 Jun 2024, Jan Beulich wrote:
>> On 25.06.2024 12:14, Alessandro Zucchelli wrote:
>> > --- a/xen/common/kernel.c
>> > +++ b/xen/common/kernel.c
>> > @@ -660,14 +660,15 @@ long do_xen_version(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
>> >
>> >      case XENVER_guest_handle:
>> >      {
>> > +        struct domain *d = current->domain;
>> 
>> Can a (new) variable thus initialized please be consistently named 
>> currd?
>> 
>> >          xen_domain_handle_t hdl;
>> >
>> >          if ( deny )
>> >              memset(&hdl, 0, ARRAY_SIZE(hdl));
>> >
>> > -        BUILD_BUG_ON(ARRAY_SIZE(current->domain->handle) != ARRAY_SIZE(hdl));
>> > +        BUILD_BUG_ON(ARRAY_SIZE(d->handle) != ARRAY_SIZE(hdl));
>> 
>> Wasn't there the intention to exclude BUILD_BUG_ON() for specifically 
>> this
>> (and any other similar) rule?
> 
> +1

Yes, this macro will be deviated; you may ignore this patch.

> 
> I think if we could do that it would be ideal because those difficult
> cases are only meant as build checks, so there is no point in
> applying MISRA to them.
> 
> 
>> > -        if ( copy_to_guest(arg, deny ? hdl : current->domain->handle,
>> > +        if ( copy_to_guest(arg, deny ? hdl : d->handle,
>> >                             ARRAY_SIZE(hdl) ) )
>> >              return -EFAULT;
>> >          return 0;
>> 

-- 
Alessandro Zucchelli, B.Sc.

Software Engineer, BUGSENG (https://bugseng.com)
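A minimal sketch of a BUILD_BUG_ON-style macro (Xen's real macro differs in detail; this is the classic pattern, shown only to illustrate why the discussion treats it specially):

```c
/*
 * Compile-time assertion: if cond is nonzero, the unnamed bit-field
 * gets a negative width and compilation fails.  When cond is zero the
 * whole expression is a no-op sizeof, so nothing exists at run time --
 * which is why applying run-time oriented MISRA rules to such a macro
 * has little value.
 */
#define BUILD_BUG_ON(cond) ((void)sizeof(struct { int : -!!(cond); }))
```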


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 07:37:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 07:37:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748420.1156126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNDW-0006C2-QB; Wed, 26 Jun 2024 07:37:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748420.1156126; Wed, 26 Jun 2024 07:37:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNDW-0006Bv-ME; Wed, 26 Jun 2024 07:37:22 +0000
Received: by outflank-mailman (input) for mailman id 748420;
 Wed, 26 Jun 2024 07:37:21 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evFT=N4=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMNDU-0006Bp-VB
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 07:37:20 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb6401e7-338e-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 09:37:20 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57cb9a370ddso7504049a12.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 00:37:20 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72575f409asm261618166b.120.2024.06.26.00.37.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 00:37:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb6401e7-338e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719387439; x=1719992239; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=Au0IRdlDizwro1MrAZCGVztMGq5avE5KBRJ2mE0MKyY=;
        b=SEcRa6Zu5J3h8PnmbeWjtOkOPYrRmRqb5jGvedQN/OiZhwv5HpXgsSe7LTePYUsj4k
         aDp3VgGPusqijkeZqRZM1+eGKn2jJKpJi4ItEmZaRLA/WWQTFEx8AoRCyXhli8zNYdvI
         Eo2mMRgTvrNKJOcHJrTlbEyRymIa2IcTsGqJ2hAHMgSyJEuoPvDGCfnDspphQfvmcOUI
         UH/hYaknV+F0sVPZmwnNG+qHmcgSUP5cZqAWkaW7UhgcyWaRDpp14K7d+5PD2EIyxYzZ
         Edsh9epi6GPjzGuBl+Kfi6nzR2zM0DdtQm29ikqajFIE5S7YhYnJhx28Xlzp6X0T7AhA
         ieVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719387439; x=1719992239;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=Au0IRdlDizwro1MrAZCGVztMGq5avE5KBRJ2mE0MKyY=;
        b=GgI/9KF7J24Y6Q4opIz/It/BsHMeHutSynHQYarp02putNP365nMdvixAjGk3r1Yvy
         cqEhpOzYtzpVKjY4Lw9MGQUaxv8VDYgI0ia9dr/va0G6HP09pZO2c7GbQ06PT991fmCO
         dnFvzIakRoQo08SAOOcJvcxk9pFR6fUhuqdesTC+gGPv6QP7qGI2hg3kaQlyXvB39noy
         InXa3Rl6xEizmvfSCFKIW+GbOflibVb/g36bMO+7BCyxLI1uhrizs6BiPDFdY9L2fR77
         SDjrzMO6/lypZTHvrtduV4EAv/VjNRtMcIoQx02R1qFU87f7RXHfFmZPiLKiEq6SPE+W
         whdg==
X-Gm-Message-State: AOJu0YzwAuLNTS31HqHVk7yz3dRFiozbooYk5WxYHznPWXqYDzd9i480
	gQd+MVm2lXQdnOcQ901DRmFQl9l4I0+H8bMojp3lUnvXxj9cknPhoVmad7VL
X-Google-Smtp-Source: AGHT+IFGHEbo7KQYEYlrntH3MsusRkr1VJK15qTC8jVP6OIO9DbcqJsMDX8faMFIcrg5HEO47pdLJQ==
X-Received: by 2002:a17:906:2711:b0:a6f:b3af:2add with SMTP id a640c23a62f3a-a7245badbf4mr595210066b.45.1719387439420;
        Wed, 26 Jun 2024 00:37:19 -0700 (PDT)
Message-ID: <c6aeb6007ead36afaf48ceef1070e5ec5a2ef88f.camel@gmail.com>
Subject: Re: [XEN PATCH for 4.19] automation/eclair: add deviations agreed
 in MISRA meetings
From: Oleksii <oleksii.kurochko@gmail.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Federico Serafini
	 <federico.serafini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
 <julien@xen.org>
Date: Wed, 26 Jun 2024 09:37:18 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop>
References: 
	<4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com>
	 <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 18:59 -0700, Stefano Stabellini wrote:
> > +-doc_begin="The conversion from a function pointer to unsigned
> > long or (void *) does not lose any information, provided that the
> > target type has enough bits to store it."
> > +-config=MC3R1.R11.1,casts+={safe,
> > +  "from(type(canonical(__function_pointer_types)))
> > +   &&to(type(canonical(builtin(unsigned
> > long)||pointer(builtin(void)))))
> > +   &&relation(definitely_preserves_value)"
> > +}
> > +-doc_end
>
> This one and the ones below are the important ones! I think we should
> have them in the tree as soon as possible, ideally for 4.19. I ask for
> a release-ack.
Just want to be sure that I understand deviations properly with this
example.

If the deviation above is merged, then it would be considered safe from
a MISRA point of view to cast a function pointer to 'unsigned long' or
'void *', so the checker won't complain about code with such
conversions?

~ Oleksii
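The kind of conversion the deviation covers can be sketched as follows (names are illustrative, not from any patch): a function pointer is cast to unsigned long and back, and as long as unsigned long has enough bits for a code address the round trip preserves the value, which is what the "definitely_preserves_value" clause in the configuration above expresses.

```c
typedef void (*fn_t)(void);

static void handler(void)
{
}

/*
 * Cast a function pointer to unsigned long (the deviated conversion)
 * and back again.  Returns 1 when the value survives the round trip,
 * i.e. when the integer type is wide enough for a code address.
 */
static int cast_preserves_value(void)
{
    unsigned long addr = (unsigned long)handler;  /* the deviated cast */
    fn_t back = (fn_t)addr;                       /* cast back */

    return back == handler;
}
```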



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 07:46:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 07:46:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748428.1156136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNLv-0008NP-Lt; Wed, 26 Jun 2024 07:46:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748428.1156136; Wed, 26 Jun 2024 07:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNLv-0008NI-It; Wed, 26 Jun 2024 07:46:03 +0000
Received: by outflank-mailman (input) for mailman id 748428;
 Wed, 26 Jun 2024 07:44:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eyG7=N4=iscas.ac.cn=make24@srs-se1.protection.inumbo.net>)
 id 1sMNKB-0008Li-Im
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 07:44:15 +0000
Received: from cstnet.cn (smtp84.cstnet.cn [159.226.251.84])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id dcf7d7d4-338f-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 09:44:09 +0200 (CEST)
Received: from icess-ProLiant-DL380-Gen10.. (unknown [183.174.60.14])
 by APP-05 (Coremail) with SMTP id zQCowAAXHuatxntmX6vdEg--.47338S2;
 Wed, 26 Jun 2024 15:43:49 +0800 (CST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dcf7d7d4-338f-11ef-b4bb-af5377834399
From: Ma Ke <make24@iscas.ac.cn>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	dave.hansen@linux.intel.com,
	x86@kernel.org,
	hpa@zytor.com,
	jeremy@goop.org
Cc: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Ma Ke <make24@iscas.ac.cn>
Subject: [PATCH] xen: Fix null pointer dereference in xen_init_lock_cpu()
Date: Wed, 26 Jun 2024 15:43:39 +0800
Message-Id: <20240626074339.2820381-1-make24@iscas.ac.cn>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-CM-TRANSID:zQCowAAXHuatxntmX6vdEg--.47338S2
X-Originating-IP: [183.174.60.14]
X-CM-SenderInfo: ppdnvj2u6l2u1dvotugofq/

kasprintf() formats a string into dynamically allocated memory. If the
allocation fails, kasprintf() returns NULL. Add a check so that a NULL
return is handled instead of being dereferenced.

Fixes: d5de8841355a ("x86: split spinlock implementations out into their own files")
Signed-off-by: Ma Ke <make24@iscas.ac.cn>
---
Found this error through static analysis.
---
 arch/x86/xen/spinlock.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 5c6fc16e4b92..fe3cd95c1604 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -75,6 +75,8 @@ void xen_init_lock_cpu(int cpu)
 	     cpu, per_cpu(lock_kicker_irq, cpu));
 
 	name = kasprintf(GFP_KERNEL, "spinlock%d", cpu);
+	if (!name)
+		return;
 	per_cpu(irq_name, cpu) = name;
 	irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
 				     cpu,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 07:54:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 07:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748439.1156146 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNTn-00029c-GT; Wed, 26 Jun 2024 07:54:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748439.1156146; Wed, 26 Jun 2024 07:54:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNTn-00029V-DV; Wed, 26 Jun 2024 07:54:11 +0000
Received: by outflank-mailman (input) for mailman id 748439;
 Wed, 26 Jun 2024 07:54:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMNTl-00029P-Ov
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 07:54:09 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 43c0d223-3391-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 09:54:07 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id
 ffacd0b85a97d-3621ac606e1so168187f8f.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 00:54:07 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1fa80e6a2fasm14212795ad.141.2024.06.26.00.54.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 00:54:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43c0d223-3391-11ef-b4bb-af5377834399
Message-ID: <6ed6d9bf-2e33-4294-974b-eb1924b011cc@suse.com>
Date: Wed, 26 Jun 2024 09:53:56 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
 <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
 <b638c5f8-20e9-43bb-a47b-e24cb1b1b821@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b638c5f8-20e9-43bb-a47b-e24cb1b1b821@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 23:10, Andrew Cooper wrote:
> On 25/06/2024 3:49 pm, Jan Beulich wrote:
>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/include/asm/cmpxchg.h
>>> +++ b/xen/arch/riscv/include/asm/cmpxchg.h
>>> @@ -18,6 +18,20 @@
>>>          : "r" (new) \
>>>          : "memory" );
>>>  
>>> +/*
>>> + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is too
>>> +ld, form
>> Same question: Why's 2.37 suddenly of interest?
> 
> You deleted the commit message which explains why:

Not really. My whole point is that while the intention of the change looks
okay, the description and the comment describe things insufficiently, to
say the least.

>> RISC-V does a conditional toolchain test for the Zbb extension
>> (xen/arch/riscv/rules.mk), but unconditionally uses the ANDN
>> instruction in emulate_xchg_1_2().
> 
> Either Zbb needs to be mandatory (both in the toolchain and the board
> running Xen), or emulate_xchg_1_2() needs to not use the ANDN instruction.
> 
> I opted for the latter.

And I agree with that.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 07:55:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 07:55:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748444.1156156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNV6-0002gG-Rq; Wed, 26 Jun 2024 07:55:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748444.1156156; Wed, 26 Jun 2024 07:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNV6-0002g9-Ni; Wed, 26 Jun 2024 07:55:32 +0000
Received: by outflank-mailman (input) for mailman id 748444;
 Wed, 26 Jun 2024 07:55:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMNV5-0002g1-BG
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 07:55:31 +0000
Received: from mail-lj1-x236.google.com (mail-lj1-x236.google.com
 [2a00:1450:4864:20::236])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 74a8fe2b-3391-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 09:55:29 +0200 (CEST)
Received: by mail-lj1-x236.google.com with SMTP id
 38308e7fff4ca-2ec002caeb3so81717571fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 00:55:29 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7069d65dfc7sm2324392b3a.133.2024.06.26.00.55.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 00:55:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74a8fe2b-3391-11ef-b4bb-af5377834399
Message-ID: <ccb68ab1-4e92-4278-917d-6928f0639703@suse.com>
Date: Wed, 26 Jun 2024 09:55:18 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
To: Oleksii <oleksii.kurochko@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
 <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
 <b638c5f8-20e9-43bb-a47b-e24cb1b1b821@citrix.com>
 <6aedc34eb248ec9adba921593782a43f0cf0a8bf.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <6aedc34eb248ec9adba921593782a43f0cf0a8bf.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26.06.2024 09:13, Oleksii wrote:
> On Tue, 2024-06-25 at 22:10 +0100, Andrew Cooper wrote:
>> On 25/06/2024 3:49 pm, Jan Beulich wrote:
>>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>>> --- a/xen/arch/riscv/include/asm/cmpxchg.h
>>>> +++ b/xen/arch/riscv/include/asm/cmpxchg.h
>>>> @@ -18,6 +18,20 @@
>>>>          : "r" (new) \
>>>>          : "memory" );
>>>>  
>>>> +/*
>>>> + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is
>>>> too
>>>> +ld, form
>>> Same question: Why's 2.37 suddenly of interest?
>>
>> You deleted the commit message which explains why:
>>
>>> RISC-V does a conditional toolchain test for the Zbb extension
>>> (xen/arch/riscv/rules.mk), but unconditionally uses the ANDN
>>> instruction in emulate_xchg_1_2().
>>
>> Either Zbb needs to be mandatory (both in the toolchain and the board
>> running Xen), or emulate_xchg_1_2() needs to not use the ANDN
>> instruction.
> But we can't go without Zbb: there are some things in <xen/bitops.h>
> which now require this extension. At the current state of development
> it is mandatory.

Maybe that's the "another bug" that Andrew mentioned (without any details)
on Matrix?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:01:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:01:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748457.1156166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNb8-0005Ss-JT; Wed, 26 Jun 2024 08:01:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748457.1156166; Wed, 26 Jun 2024 08:01:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNb8-0005Sl-Ga; Wed, 26 Jun 2024 08:01:46 +0000
Received: by outflank-mailman (input) for mailman id 748457;
 Wed, 26 Jun 2024 08:01:45 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMNb7-0005GT-1u
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:01:45 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 54076330-3392-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 10:01:44 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2ec0f3b9cfeso76161651fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 01:01:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7069a42171dsm2617153b3a.191.2024.06.26.01.01.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 01:01:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54076330-3392-11ef-90a3-e314d9c70b13
Message-ID: <237405eb-5e27-42b0-a763-c99c474075cd@suse.com>
Date: Wed, 26 Jun 2024 10:01:34 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
 <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
 <3b2443ad76923afba348304b7cedbb257a6c5313.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <3b2443ad76923afba348304b7cedbb257a6c5313.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 20:26, Oleksii wrote:
> On Tue, 2024-06-25 at 16:49 +0200, Jan Beulich wrote:
>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/include/asm/cmpxchg.h
>>> +++ b/xen/arch/riscv/include/asm/cmpxchg.h
>>> @@ -18,6 +18,20 @@
>>>          : "r" (new) \
>>>          : "memory" );
>>>  
>>> +/*
>>> + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain is
>>> too
>>> +ld, form
>>
>> Same question: Why's 2.37 suddenly of interest?
> Andrew has (or had) an older version of binutils and started to face
> the issues mentioned in this patch. As a result, some
> changes were suggested.
> 
>> Plus, why would age of the
>> tool chain matter?
> As mentioned in the comment, binutils < 2.37 doesn't understand the
> andn instruction.

But that's not the point. If the tool chain is too old, our logic to
detect that should arrange for __riscv_zbb to not be set. That's all
that's needed to cover gas not understanding the insn. The rest here
isn't about the capabilities of the tool chain: Either we make Zbb a
requirement (at which point .insn can be used to encode ANDN), or we
don't (at which point the replacement code you have comes into play).

>> What you care about is whether you're permitted to use
>> the extension at runtime. 
> At the moment we can't check that at runtime without taking an
> exception (there is an option to do the check once dts parsing is
> available in Xen). I will add the check when the dts parsing
> functionality becomes available. Right now the best we can do is
> mention it as a requirement in the docs.
> 
>> Otherwise you could again ...
>>
>> Also something went wrong with line wrapping here.
>>
>>> + * it of a NOT+AND pair
>>> + */
>>> +#ifdef __riscv_zbb
>>> +#define ANDN_INSN(rd, rs1, rs2)                 \
>>> +    "andn " rd ", " rs1 ", " rs2 "\n"
>>> +#else
>>> +#define ANDN_INSN(rd, rs1, rs2)                 \
>>> +    "not " rd ", " rs2 "\n"                     \
>>> +    "and " rd ", " rs1 ", " rd "\n"
>>
>> ... resort to .insn.
> Hmm, good point, it could be an issue.
> 
> 
> If this patch is still needed (Andrew, have you updated your
> toolchain?), then it should use .insn instead of andn. However, using
> .insn requires encoding rd, rs1, and rs2 through asm ("some_reg") (?),
> which seems overly complicated.

Why? You don't want to use the raw form of .insn (which, as per the
other sub-thread on this series, is available from gas 2.38 only anyway),
but the one permitting operands to be spelled out (.insn r ...), along
the lines of what I suggested for "pause".

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:04:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:04:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748462.1156176 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNe7-000623-0D; Wed, 26 Jun 2024 08:04:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748462.1156176; Wed, 26 Jun 2024 08:04:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNe6-00061w-Td; Wed, 26 Jun 2024 08:04:50 +0000
Received: by outflank-mailman (input) for mailman id 748462;
 Wed, 26 Jun 2024 08:04:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6S0o=N4=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sMNe6-00061q-2L
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:04:50 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20601.outbound.protection.outlook.com
 [2a01:111:f403:2417::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bfbdc8be-3392-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 10:04:45 +0200 (CEST)
Received: from SN7PR04CA0085.namprd04.prod.outlook.com (2603:10b6:806:121::30)
 by DM4PR12MB5913.namprd12.prod.outlook.com (2603:10b6:8:66::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.32; Wed, 26 Jun
 2024 08:04:42 +0000
Received: from SA2PEPF00003F64.namprd04.prod.outlook.com
 (2603:10b6:806:121:cafe::a6) by SN7PR04CA0085.outlook.office365.com
 (2603:10b6:806:121::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.22 via Frontend
 Transport; Wed, 26 Jun 2024 08:04:42 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 SA2PEPF00003F64.mail.protection.outlook.com (10.167.248.39) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Wed, 26 Jun 2024 08:04:42 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Wed, 26 Jun
 2024 03:04:41 -0500
Received: from XIR-MICHALO-L1.xilinx.com (10.180.168.240) by
 SATLEXMB03.amd.com (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39
 via Frontend Transport; Wed, 26 Jun 2024 03:04:40 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bfbdc8be-3392-11ef-b4bb-af5377834399
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Michal Orzel <michal.orzel@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Michal Orzel <michal.orzel@amd.com>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory node probing
Date: Wed, 26 Jun 2024 10:04:28 +0200
Message-ID: <20240626080428.18480-1-michal.orzel@amd.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
Received-SPF: None (SATLEXMB04.amd.com: michal.orzel@amd.com does not
 designate permitted sender hosts)
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jun 2024 08:04:42.1550
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d49f604e-8aba-47c7-7dee-08dc95b6a27d
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	SA2PEPF00003F64.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB5913

Memory node probing is done as part of early_scan_node(), which is called
for each node with depth >= 1 (the root node is at depth 0). According to
Devicetree Specification v0.4, chapter 3.4, a /memory node can only exist
as a top-level node. However, Xen incorrectly considers every node with
unit node name "memory" as RAM. This buggy behavior can result in a
failure if there are other nodes in the device tree (at depth >= 2) with
"memory" as the unit node name. An example can be a "memory@xxx" node
under /reserved-memory. Fix it by introducing device_tree_is_memory_node()
to perform all the checks required to assess whether a node is a proper
/memory node.

Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM location and size")
Signed-off-by: Michal Orzel <michal.orzel@amd.com>
---
4.19: This patch fixes a possible early boot Xen failure (before the main
console is initialized). In my case it results in the warning "Shattering
superpage is not supported" and the panic "Unable to setup the directmap
mappings".

If this is too late for the patch to go in, we can backport it after the tree
re-opens.
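For illustration, a hypothetical device tree fragment (made-up addresses)
showing the false positive the fix addresses: the depth-2 node under
/reserved-memory shares the "memory" unit node name but must not be
treated as RAM:

```dts
/ {
        memory@40000000 {                 /* proper /memory node at depth 1 */
                device_type = "memory";
                reg = <0x0 0x40000000 0x0 0x80000000>;
        };

        reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;

                memory@50000000 {         /* depth 2: not a /memory node */
                        reg = <0x0 0x50000000 0x0 0x1000000>;
                        no-map;
                };
        };
};
```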
---
 xen/arch/arm/bootfdt.c | 28 +++++++++++++++++++++++++++-
 1 file changed, 27 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index 6e060111d95b..23c968e6955d 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -83,6 +83,32 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
     return false;
 }
 
+/*
+ * Check if a node is a proper /memory node according to Devicetree
+ * Specification v0.4, chapter 3.4.
+ */
+static bool __init device_tree_is_memory_node(const void *fdt, int node,
+                                              int depth)
+{
+    const char *type;
+    int len;
+
+    if ( depth != 1 )
+        return false;
+
+    if ( !device_tree_node_matches(fdt, node, "memory") )
+        return false;
+
+    type = fdt_getprop(fdt, node, "device_type", &len);
+    if ( !type )
+        return false;
+
+    if ( (len <= 0) || strcmp(type, "memory") )
+        return false;
+
+    return true;
+}
+
 void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
                                 uint32_t size_cells, paddr_t *start,
                                 paddr_t *size)
@@ -448,7 +474,7 @@ static int __init early_scan_node(const void *fdt,
      * populated. So we should skip the parsing.
      */
     if ( !efi_enabled(EFI_BOOT) &&
-         device_tree_node_matches(fdt, node, "memory") )
+         device_tree_is_memory_node(fdt, node, depth) )
         rc = process_memory_node(fdt, node, name, depth,
                                  address_cells, size_cells, bootinfo_get_mem());
     else if ( depth == 1 && !dt_node_cmp(name, "reserved-memory") )
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:11:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:11:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748470.1156186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNkM-0008O8-PC; Wed, 26 Jun 2024 08:11:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748470.1156186; Wed, 26 Jun 2024 08:11:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNkM-0008O1-LZ; Wed, 26 Jun 2024 08:11:18 +0000
Received: by outflank-mailman (input) for mailman id 748470;
 Wed, 26 Jun 2024 08:11:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMNkL-0008Nq-8x
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:11:17 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a868df1c-3393-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 10:11:15 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2ec61eeed8eso34575001fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 01:11:15 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1fa1bbef429sm68348135ad.251.2024.06.26.01.11.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 01:11:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a868df1c-3393-11ef-b4bb-af5377834399
X-Received: by 2002:a2e:9054:0:b0:2ed:59af:ecb7 with SMTP id 38308e7fff4ca-2ed59afeef3mr7269881fa.15.1719389474514;
        Wed, 26 Jun 2024 01:11:14 -0700 (PDT)
Message-ID: <6441010f-c2f6-4098-bf23-837955dcf803@suse.com>
Date: Wed, 26 Jun 2024 10:11:04 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Federico Serafini <federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
 <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop>
 <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
 <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 03:11, Stefano Stabellini wrote:
> On Tue, 25 Jun 2024, Jan Beulich wrote:
>> On 25.06.2024 02:54, Stefano Stabellini wrote:
>>> On Mon, 24 Jun 2024, Federico Serafini wrote:
>>>> Add break or pseudo keyword fallthrough to address violations of
>>>> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
>>>> every switch-clause".
>>>>
>>>> No functional change.
>>>>
>>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>>>> ---
>>>>  xen/arch/x86/traps.c | 3 +++
>>>>  1 file changed, 3 insertions(+)
>>>>
>>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>>>> index 9906e874d5..cbcec3fafb 100644
>>>> --- a/xen/arch/x86/traps.c
>>>> +++ b/xen/arch/x86/traps.c
>>>> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>>>>  
>>>>      default:
>>>>          ASSERT_UNREACHABLE();
>>>> +        break;
>>>
>>> Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
>>> statements" that can terminate a case, in addition to break.
>>
>> Why? Exactly the opposite is part of the subject of a recent patch, iirc.
>> Simply because of the rules needing to cover both debug and release builds.
> 
> The reason is that ASSERT_UNREACHABLE() might disappear from the release
> build but it can still be used as a marker during static analysis. In
> my view, ASSERT_UNREACHABLE() is equivalent to a noreturn function call
> which has an empty implementation in release builds.
> 
> The only reason I can think of to require a break; after an
> ASSERT_UNREACHABLE() would be if we think the unreachability only apply
> to debug build, not release build:
> 
> - debug build: it is unreachable
> - release build: it is reachable
> 
> I don't think that is meant to be possible so I think we can use
> ASSERT_UNREACHABLE() as a marker.

Well. For one such an assumption takes as a prereq that a debug build will
be run through full coverage testing, i.e. all reachable paths proven to
be taken. I understand that this prereq is intended to somehow be met,
even if I'm having difficulty seeing what such a final proof would look
like: Full coverage would, to me, mean that _every_ line is reachable. Yet
clearly any ASSERT_UNREACHABLE() must never be reached.

And then not covering for such cases takes the further assumption that
debug and release builds are functionally identical. I'm afraid this would
be a wrong assumption to make:
1) We may screw up somewhere, with code wrongly enabled only in one of the
   two build modes.
2) The compiler may screw up, in particular with optimization.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:21:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:21:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748479.1156195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNtg-0002IB-KX; Wed, 26 Jun 2024 08:20:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748479.1156195; Wed, 26 Jun 2024 08:20:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNtg-0002I3-Gw; Wed, 26 Jun 2024 08:20:56 +0000
Received: by outflank-mailman (input) for mailman id 748479;
 Wed, 26 Jun 2024 08:20:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMNtf-0002Hx-Ie
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:20:55 +0000
Received: from mail-wr1-x432.google.com (mail-wr1-x432.google.com
 [2a00:1450:4864:20::432])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 019ecb94-3395-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 10:20:54 +0200 (CEST)
Received: by mail-wr1-x432.google.com with SMTP id
 ffacd0b85a97d-35f2c9e23d3so176527f8f.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 01:20:54 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f0300sm94274645ad.19.2024.06.26.01.20.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 01:20:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 019ecb94-3395-11ef-90a3-e314d9c70b13
X-Received: by 2002:a5d:690e:0:b0:365:aec0:e191 with SMTP id ffacd0b85a97d-366e32956e7mr8572740f8f.21.1719390053770;
        Wed, 26 Jun 2024 01:20:53 -0700 (PDT)
Message-ID: <25371d03-1c33-41dd-9cdb-12d74fe38d42@suse.com>
Date: Wed, 26 Jun 2024 10:20:44 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <ef623bad297d016438b35bedc80f091d@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:31, Nicola Vetrini wrote:
> On 2024-03-12 09:16, Jan Beulich wrote:
>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>> --- a/xen/arch/x86/Makefile
>>> +++ b/xen/arch/x86/Makefile
>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i $(src)/Makefile
>>>  	$(call filechk,asm-macros.h)
>>>
>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>>
>> This wants to use :=, I think - there's no reason to invoke the shell 
>> ...
> 
> I agree on this
> 
>>
>>>  define filechk_asm-macros.h
>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>      echo '#if 0'; \
>>>      echo '.if 0'; \
>>>      echo '#endif'; \
>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>>> -    echo '#define __ASM_MACROS_H__'; \
>>>      echo 'asm ( ".include \"$@\"" );'; \
>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>>      echo '#if 0'; \
>>>      echo '.endif'; \
>>>      cat $<; \
>>> -    echo '#endif'
>>> +    echo '#endif'; \
>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>>  endef
>>
>> ... three times while expanding this macro. Alternatively (to avoid
>> an unnecessary shell invocation when this macro is never expanded at
>> all) a shell variable inside the "define" above would want introducing.
>> Whether this 2nd approach is better depends on whether we anticipate
>> further uses of ARCHDIR.
> 
> However here I'm not entirely sure about the meaning of this latter 
> proposal.
> My proposal is the following:
> 
> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
> 
> in a suitably generic place (such as Kbuild.include or maybe 
> xen/Makefile) as you suggested in subsequent patches that reused this 
> pattern.
> 
>>
>>> --- a/xen/arch/x86/cpu/cpu.h
>>> +++ b/xen/arch/x86/cpu/cpu.h
>>> @@ -1,3 +1,6 @@
>>> +#ifndef XEN_ARCH_X86_CPU_CPU_H
>>> +#define XEN_ARCH_X86_CPU_CPU_H
>>> +
>>>  /* attempt to consolidate cpu attributes */
>>>  struct cpu_dev {
>>>  	void		(*c_early_init)(struct cpuinfo_x86 *c);
>>> @@ -24,3 +27,5 @@ void amd_init_lfence(struct cpuinfo_x86 *c);
>>>  void amd_init_ssbd(const struct cpuinfo_x86 *c);
>>>  void amd_init_spectral_chicken(void);
>>>  void detect_zen2_null_seg_behaviour(void);
>>> +
>>> +#endif /* XEN_ARCH_X86_CPU_CPU_H */
>>
>> Leaving aside the earlier voiced request to get rid of the XEN_ 
>> prefixes
>> here, ...
>>
>>> --- a/xen/arch/x86/x86_64/mmconfig.h
>>> +++ b/xen/arch/x86/x86_64/mmconfig.h
>>> @@ -5,6 +5,9 @@
>>>   * Author: Allen Kay <allen.m.kay@intel.com> - adapted from linux
>>>   */
>>>
>>> +#ifndef XEN_ARCH_X86_X86_64_MMCONFIG_H
>>> +#define XEN_ARCH_X86_X86_64_MMCONFIG_H
>>> +
>>>  #define PCI_DEVICE_ID_INTEL_E7520_MCH    0x3590
>>>  #define PCI_DEVICE_ID_INTEL_82945G_HB    0x2770
>>>
>>> @@ -72,3 +75,5 @@ int pci_mmcfg_reserved(uint64_t address, unsigned int segment,
>>>  int pci_mmcfg_arch_init(void);
>>>  int pci_mmcfg_arch_enable(unsigned int idx);
>>>  void pci_mmcfg_arch_disable(unsigned int idx);
>>> +
>>> +#endif /* XEN_ARCH_X86_X86_64_MMCONFIG_H */
>>
>> ... in a case like this and maybe even ...
>>
>>> --- a/xen/arch/x86/x86_emulate/private.h
>>> +++ b/xen/arch/x86/x86_emulate/private.h
>>> @@ -6,6 +6,9 @@
>>>   * Copyright (c) 2005-2007 XenSource Inc.
>>>   */
>>>
>>> +#ifndef XEN_ARCH_X86_X86_EMULATE_PRIVATE_H
>>> +#define XEN_ARCH_X86_X86_EMULATE_PRIVATE_H
>>> +
>>>  #ifdef __XEN__
>>>
>>>  # include <xen/bug.h>
>>> @@ -836,3 +839,5 @@ static inline int read_ulong(enum x86_segment seg,
>>>      *val = 0;
>>>      return ops->read(seg, offset, val, bytes, ctxt);
>>>  }
>>> +
>>> +#endif /* XEN_ARCH_X86_X86_EMULATE_PRIVATE_H */
>>
>> ... this I wonder whether they are too strictly sticking to the base
>> scheme (or whether the base scheme itself isn't flexible enough): I'm
>> not overly happy with the "_X86_X86_" in there. Especially in the
>> former case, where it's the sub-arch path, like for arm/arm<NN> I'd
>> like to see that folded to just "_X86_64_" here as well.
>>
> 
> I do agree we should make an exception here: e.g. 
> XEN_X86_64_EMULATE_PRIVATE_H
> 
> I'm ambivalent about the XEN_ prefix: I can't immediately see an issue 
> with dropping it, but on the other hand there are several headers that 
> already use it (or the __XEN prefix) as far as I can tell 
> (e.g. x86/cpu/cpu.h), so dropping it from the naming convention would 
> imply that a fair amount of additional churn may be needed to achieve a 
> uniform naming scheme throughout the xen/ directory. I'll leave the 
> decision to the maintainers.

Hmm, I'm puzzled: The example you point at presently has no guard at all,
afaics. There'll need to be churn there anyway. If you picked an example,
I would have expected that to be one where the guard already fully
matches the proposed scheme. To be honest I'd be surprised if we had many
files fulfilling this criterion.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:27:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:27:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748487.1156206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNzc-0002t3-8d; Wed, 26 Jun 2024 08:27:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748487.1156206; Wed, 26 Jun 2024 08:27:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMNzc-0002sw-4k; Wed, 26 Jun 2024 08:27:04 +0000
Received: by outflank-mailman (input) for mailman id 748487;
 Wed, 26 Jun 2024 08:27:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMNza-0002sq-Nu
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:27:02 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dc9618ee-3395-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 10:27:01 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so100635611fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 01:27:01 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c8d81b9d74sm1039134a91.45.2024.06.26.01.26.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 01:27:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc9618ee-3395-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719390421; x=1719995221; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=NxObYwcS1Kk14LV5To0HwEsnoV7XQC+0TAuppa/SzQc=;
        b=OeUwKFH/69urj6GnCS15NuAnkcHYbcxbD7h6qk9uhTT+YNyTP5pHSN/fDDbgXhoJgb
         TJZgpCo4JVciYxWHhmVIj10Deax90gDYjx7jlnkp+pHB/RdhVzxBH/Qm2b2JXlP2TzNU
         0YusJgAG21PJW6RfwD+ILl2m9okqo+oOj9578g1ju442lmsZukqRrACUHdJkV0dqqI5L
         dAp2C+wWv22qMkBhpyZDyaqUCxykOuKWtdPZRCyY7oTDtWYOeeayB6pqoTkT8ozpUwXh
         wFOMQjBGAWKPxqkRyq66Uu0V/9jtNgWDAMj9DDNxxFwLrgvd44JQNkRTkyemWUGFD97H
         1siA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719390421; x=1719995221;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NxObYwcS1Kk14LV5To0HwEsnoV7XQC+0TAuppa/SzQc=;
        b=lv8bGOqllMqHJvDUduotCr6VYMFRuQbk8jY0K81+ngonpxrfZUWlyD10hqiUN/qj7q
         cp56cneMD/IaWwEUlHySkApByYyzfEh2uZryvDHBGSHro7t8l0aODxvx8HqLVLQtUMfe
         aopqLv43sCwVOyMpa1GaWqBZihNOzeCB7WQSB7Pm2g3/XC2uWEQpvaUxoxQ/YIQOM16w
         cS2D1bU6IuHZEncw2iMBYB3JziO9BWEG1XV8WjmEG386kIDM0zhQIqErPNA5WUhbAhnw
         TrVIjfhgcNDovmc/SuIi6HssB7xFCh290QBAf1KxnYo93gZvwdzsmzqX3erUg4uxV+vm
         goOw==
X-Forwarded-Encrypted: i=1; AJvYcCXEI1445hmkJZogFjEVh6YLECq77kXvKPUig0VpX92VDE0ZF8FHDujvsShGKIboykf6ZRSESlDqBPMlEkkKWjAWQd+2wqqY2lHRC3rfV6o=
X-Gm-Message-State: AOJu0YzmRSN9VUTyy7F2fkNh/ENfigUpxZkvRPHoS3aUTeLBrDQ/g6cA
	SSZwLek0Ym5tbeDWT9KwrObJEgmEI1PYD73TXJVvIjjgHESvu9tkq33fbT87nw==
X-Google-Smtp-Source: AGHT+IEuPd1fXvIw/AC14GXd1g1yOf6vr5X9Uppy9wEGyZtuhvCmkWJUHChyZjg6shHh+F1Bcop2+g==
X-Received: by 2002:a2e:9e18:0:b0:2ec:5488:cc9e with SMTP id 38308e7fff4ca-2ec5b2e5a63mr71673891fa.26.1719390421043;
        Wed, 26 Jun 2024 01:27:01 -0700 (PDT)
Message-ID: <c96d725f-632b-4cde-be3b-4060b40f390f@suse.com>
Date: Wed, 26 Jun 2024 10:26:51 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 08/10] xen/riscv: change .insn to .byte in cpu_relax()
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <b5ccb3850cbfc0c84d2feea35a971351395fa974.1719319093.git.oleksii.kurochko@gmail.com>
 <8be2c7c0-0aa0-44e0-b3d3-d422fecc29b6@suse.com>
 <9de5a3235f2bce8e7ab5c5dd5faf36e1774c97a7.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <9de5a3235f2bce8e7ab5c5dd5faf36e1774c97a7.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 20:09, Oleksii wrote:
> On Tue, 2024-06-25 at 16:45 +0200, Jan Beulich wrote:
>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>> The following compilation error occurs:
>>>   ./arch/riscv/include/asm/processor.h: Assembler messages:
>>>   ./arch/riscv/include/asm/processor.h:70: Error: unrecognized
>>> opcode `0x0100000F'
>>> In case of the following Binutils:
>>>   $ riscv64-linux-gnu-as --version
>>>   GNU assembler (GNU Binutils for Debian) 2.35.2
>>
>> In patch 6 you say 2.39. Why is 2.35.2 suddenly becoming of interest?
> Andrew has (or had) an older version of binutils and started to face
> the issues mentioned in this patch and the next one. As a result, some
> changes were suggested.
> 
> The version in the README wasn't changed because, in my opinion, this
> requires a separate CI job with a prebuilt or fixed toolchain version.
> At the moment, only the version mentioned in the README and the one I
> have on my machine are supported.

So from my perspective, if you go to the lengths of making changes to
support anything older than what you put into README, you will want to
at least briefly mention why this is needed / wanted.

Plus, as to "separate CI job": That makes little sense to me, or else
we'd need to have separate jobs for each and every compiler version
out in the world (and within range of what README says). Not just for
RISC-V, but also for other architectures. This imo simply wouldn't
scale.
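As an aside on the .insn → .byte change itself: both spellings emit the
same 32-bit word, .byte merely hands the assembler the raw little-endian
bytes, so even a Binutils lacking the mnemonic (or .insn with a bare
constant) will accept it. Assuming the 0x0100000F encoding from the quoted
error message, the byte sequence can be derived on any host:

```shell
# Split the (assumed) 32-bit encoding 0x0100000F from the quoted error
# into the little-endian bytes a .byte directive would have to emit.
insn=$((0x0100000F))
printf '.byte 0x%02x, 0x%02x, 0x%02x, 0x%02x\n' \
    $((  insn         & 0xff )) $(( (insn >> 8)  & 0xff )) \
    $(( (insn >> 16)  & 0xff )) $(( (insn >> 24) & 0xff ))
# prints: .byte 0x0f, 0x00, 0x00, 0x01
```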

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:31:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:31:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748493.1156216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMO3W-0005Ga-OK; Wed, 26 Jun 2024 08:31:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748493.1156216; Wed, 26 Jun 2024 08:31:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMO3W-0005GT-LH; Wed, 26 Jun 2024 08:31:06 +0000
Received: by outflank-mailman (input) for mailman id 748493;
 Wed, 26 Jun 2024 08:31:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMO3V-0005GN-UM
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:31:05 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6d448709-3396-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 10:31:04 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id F31144EE0738;
 Wed, 26 Jun 2024 10:31:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d448709-3396-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Wed, 26 Jun 2024 10:31:03 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
In-Reply-To: <25371d03-1c33-41dd-9cdb-12d74fe38d42@suse.com>
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
 <25371d03-1c33-41dd-9cdb-12d74fe38d42@suse.com>
Message-ID: <bc5aaa79db259d2f8e405a74ed5c9947@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-26 10:20, Jan Beulich wrote:
> On 25.06.2024 21:31, Nicola Vetrini wrote:
>> On 2024-03-12 09:16, Jan Beulich wrote:
>>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>>> --- a/xen/arch/x86/Makefile
>>>> +++ b/xen/arch/x86/Makefile
>>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i
>>>> $(src)/Makefile
>>>>  	$(call filechk,asm-macros.h)
>>>> 
>>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>>> 
>>> This wants to use :=, I think - there's no reason to invoke the shell
>>> ...
>> 
>> I agree on this
>> 
>>> 
>>>>  define filechk_asm-macros.h
>>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>      echo '#if 0'; \
>>>>      echo '.if 0'; \
>>>>      echo '#endif'; \
>>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>>>> -    echo '#define __ASM_MACROS_H__'; \
>>>>      echo 'asm ( ".include \"$@\"" );'; \
>>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>>>      echo '#if 0'; \
>>>>      echo '.endif'; \
>>>>      cat $<; \
>>>> -    echo '#endif'
>>>> +    echo '#endif'; \
>>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>>>  endef
>>> 
>>> ... three times while expanding this macro. Alternatively (to avoid
>>> an unnecessary shell invocation when this macro is never expanded at
>>> all) a shell variable inside the "define" above would want 
>>> introducing.
>>> Whether this 2nd approach is better depends on whether we anticipate
>>> further uses of ARCHDIR.
>> 
>> However, here I'm not entirely sure about the meaning of this latter
>> proposal.
>> My proposal is the following:
>> 
>> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
>> 
>> in a suitably generic place (such as Kbuild.include or maybe
>> xen/Makefile) as you suggested in subsequent patches that reused this
>> pattern.
>> 
>>> 
>>>> --- a/xen/arch/x86/cpu/cpu.h
>>>> +++ b/xen/arch/x86/cpu/cpu.h
>>>> @@ -1,3 +1,6 @@
>>>> +#ifndef XEN_ARCH_X86_CPU_CPU_H
>>>> +#define XEN_ARCH_X86_CPU_CPU_H
>>>> +
>>>>  /* attempt to consolidate cpu attributes */
>>>>  struct cpu_dev {
>>>>  	void		(*c_early_init)(struct cpuinfo_x86 *c);
>>>> @@ -24,3 +27,5 @@ void amd_init_lfence(struct cpuinfo_x86 *c);
>>>>  void amd_init_ssbd(const struct cpuinfo_x86 *c);
>>>>  void amd_init_spectral_chicken(void);
>>>>  void detect_zen2_null_seg_behaviour(void);
>>>> +
>>>> +#endif /* XEN_ARCH_X86_CPU_CPU_H */
>>> 
>>> Leaving aside the earlier voiced request to get rid of the XEN_
>>> prefixes
>>> here, ...
>>> 
>>>> --- a/xen/arch/x86/x86_64/mmconfig.h
>>>> +++ b/xen/arch/x86/x86_64/mmconfig.h
>>>> @@ -5,6 +5,9 @@
>>>>   * Author: Allen Kay <allen.m.kay@intel.com> - adapted from linux
>>>>   */
>>>> 
>>>> +#ifndef XEN_ARCH_X86_X86_64_MMCONFIG_H
>>>> +#define XEN_ARCH_X86_X86_64_MMCONFIG_H
>>>> +
>>>>  #define PCI_DEVICE_ID_INTEL_E7520_MCH    0x3590
>>>>  #define PCI_DEVICE_ID_INTEL_82945G_HB    0x2770
>>>> 
>>>> @@ -72,3 +75,5 @@ int pci_mmcfg_reserved(uint64_t address, unsigned
>>>> int segment,
>>>>  int pci_mmcfg_arch_init(void);
>>>>  int pci_mmcfg_arch_enable(unsigned int idx);
>>>>  void pci_mmcfg_arch_disable(unsigned int idx);
>>>> +
>>>> +#endif /* XEN_ARCH_X86_X86_64_MMCONFIG_H */
>>> 
>>> ... in a case like this and maybe even ...
>>> 
>>>> --- a/xen/arch/x86/x86_emulate/private.h
>>>> +++ b/xen/arch/x86/x86_emulate/private.h
>>>> @@ -6,6 +6,9 @@
>>>>   * Copyright (c) 2005-2007 XenSource Inc.
>>>>   */
>>>> 
>>>> +#ifndef XEN_ARCH_X86_X86_EMULATE_PRIVATE_H
>>>> +#define XEN_ARCH_X86_X86_EMULATE_PRIVATE_H
>>>> +
>>>>  #ifdef __XEN__
>>>> 
>>>>  # include <xen/bug.h>
>>>> @@ -836,3 +839,5 @@ static inline int read_ulong(enum x86_segment 
>>>> seg,
>>>>      *val = 0;
>>>>      return ops->read(seg, offset, val, bytes, ctxt);
>>>>  }
>>>> +
>>>> +#endif /* XEN_ARCH_X86_X86_EMULATE_PRIVATE_H */
>>> 
>>> ... this I wonder whether they are too strictly sticking to the base
>>> scheme (or whether the base scheme itself isn't flexible enough): I'm
>>> not overly happy with the "_X86_X86_" in there. Especially in the
>>> former case, where it's the sub-arch path, like for arm/arm<NN> I'd
>>> like to see that folded to just "_X86_64_" here as well.
>>> 
>> 
>> I do agree we should make an exception here: e.g.
>> XEN_X86_64_EMULATE_PRIVATE_H
>> 
>> I'm ambivalent about the XEN_ prefix: I can't immediately see an issue
>> with dropping it, but on the other hand there are several headers that
>> already use it (or the __XEN prefix) as far as I can tell
>> (e.g. x86/cpu/cpu.h), so dropping it from the naming convention would
>> imply that a fair amount of additional churn may be needed to achieve a
>> uniform naming scheme throughout the xen/ directory. I'll leave the
>> decision to the maintainers.
> 
> Hmm, I'm puzzled: The example you point at presently has no guard at all,
> afaics. There'll need to be churn there anyway. If you picked an example,
> I would have expected that to be one where the guard already fully
> matches the proposed scheme. To be honest I'd be surprised if we had many
> files fulfilling this criterion.
> 

Ah, yes. I was looking at the state of the tree after some patches were 
applied. On staging, most examples are in asm directories, such as 
xen/arch/x86/include/asm/endbr.h, but that would be modified anyway by 
dropping rule #3:
arch/<architecture>/include/asm/<subdir>/<filename>.h -> 
ASM_<architecture>_<subdir>_<filename>_H
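A sketch of how that mapping could be applied mechanically (the sed
expressions and the endbr.h path are only illustrative, not an agreed
tool):

```shell
# Hypothetical sketch of rule #3: derive a guard name from a header path,
# e.g. arch/x86/include/asm/endbr.h -> ASM_X86_ENDBR_H.
path="arch/x86/include/asm/endbr.h"
guard=$(echo "$path" \
    | sed -e 's,^arch/,ASM_,' -e 's,/include/asm/,_,' \
          -e 's,/,_,g' -e 's,\.h$,_H,' \
    | tr a-z A-Z)
echo "$guard"   # prints: ASM_X86_ENDBR_H
```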

With this, unless I find some other obstacle or issue from other 
maintainers, the XEN_ prefix can be dropped.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:31:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:31:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748500.1156225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMO3x-0005nr-3y; Wed, 26 Jun 2024 08:31:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748500.1156225; Wed, 26 Jun 2024 08:31:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMO3x-0005ni-1G; Wed, 26 Jun 2024 08:31:33 +0000
Received: by outflank-mailman (input) for mailman id 748500;
 Wed, 26 Jun 2024 08:31:31 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMO3v-0005XU-Hr
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:31:31 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7c738c41-3396-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 10:31:29 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2ebe6495aedso68418221fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 01:31:29 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f2690sm95018495ad.40.2024.06.26.01.31.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 01:31:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c738c41-3396-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719390689; x=1719995489; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=G7ekgQKX93qfJG/dFfwSl4UE5JEKUSRbD+bTgzlbsBc=;
        b=NzrY5q3IZufmj/rTEM4WKEuJL7wA1wUY0YR5kCYISt+HEvGM0HlFyjbGwuA9xH4lqO
         o6dpt24+jrcAKTVQuLAOvLSwb9MHNX6S6csAp5mHqCPH6cHBhhxT4KPrp6t0/Z4Cs27R
         15rwkqD627XRZF0eLAuL5PDz/GmegT0EluicFKek6IUcjrYUFTiZRaZUUUqnrd+JZqQG
         8wh+0rNulPqkYAgg8hLBlx96mA+yKZSmED00Ob8mroIELj+yrj08Q23RAsIJPyh9jMbH
         jYrQbkkM5epUjv2XcHlwVoNsuZftYK0XdGjsHTG1pNya+sMdVC2VdoTp3IAuwLBKeL65
         5tQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719390689; x=1719995489;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=G7ekgQKX93qfJG/dFfwSl4UE5JEKUSRbD+bTgzlbsBc=;
        b=nyokOVEpFbMXu/Vaq7k0kN3AK4H4FeV9qwJELohhwlB8BQHaSUjAV8bNQo6Vv7b4MY
         OM2NyxpsGgAww1A6Wl6VSO3fX0yhC8Z90XHVUDC0dU9QFAHIoElPx/Y707jQSD+HsTrJ
         dUe3kyyV6qVYaifs31dTmcbplPZcwyLO1yo8KbfP/oFzdLEv63RKJCw2qx7T0pIYrehB
         pZRQQXE8F+/Wyh//PVpKzUDGA/AmMTI18eF/CAvvN5jW5WDrJYsr8itiMTaaAnbEVzNI
         sFvwbZZo7NgmUNAiAkkYrdiUOvhNx5kDe/ezqrU16DPFbbGj4SDudjtQfnuI4tLJaV9w
         4kTQ==
X-Forwarded-Encrypted: i=1; AJvYcCV3Khci19LZ/pUDIpW9p9P3kEoUto0QrCu+4amH37LuM1P2Er4QzAxwB1fQ7g3AXYQFrnON07Xdz7+7XHCu/mZfiecEKHINKqcOB0olzvY=
X-Gm-Message-State: AOJu0YxBuTjptXAuhod0XDG1gNUqRPU15NbEea2in/49j856M907LwZS
	1znQ+jXFMQz14wVuIFhwPVXhHjUxwWZ14qJW7DGKvl/FUSMD4/Kqp9N9HWk/uCBOk/ytElmDKxw
	=
X-Google-Smtp-Source: AGHT+IG+NO58TX9FjEtExCZlYdN0aA58F9fGy8HtEgYqL5MS8aVED1fzWmHGQ2zexRFksC6p3y+HDg==
X-Received: by 2002:a2e:98c4:0:b0:2ec:4df7:8cef with SMTP id 38308e7fff4ca-2ec5b2a00a3mr53317991fa.15.1719390689263;
        Wed, 26 Jun 2024 01:31:29 -0700 (PDT)
Message-ID: <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
Date: Wed, 26 Jun 2024 10:31:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 02/10] xen/riscv: introduce bitops.h
To: Oleksii Kurochko <oleksii.kurochko@gmail.com>
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 15:51, Oleksii Kurochko wrote:
> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/bitops.h
> @@ -0,0 +1,137 @@
> +/* SPDX-License-Identifier: GPL-2.0 */
> +/* Copyright (C) 2012 Regents of the University of California */
> +
> +#ifndef _ASM_RISCV_BITOPS_H
> +#define _ASM_RISCV_BITOPS_H
> +
> +#include <asm/system.h>
> +
> +#if BITOP_BITS_PER_WORD == 64
> +#define __AMO(op)   "amo" #op ".d"
> +#elif BITOP_BITS_PER_WORD == 32
> +#define __AMO(op)   "amo" #op ".w"
> +#else
> +#error "Unexpected BITOP_BITS_PER_WORD"
> +#endif
> +
> +/* Based on linux/arch/include/asm/bitops.h */
> +
> +/*
> + * Non-atomic bit manipulation.
> + *
> + * Implemented using atomics to be interrupt safe. Could alternatively
> + * implement with local interrupt masking.
> + */
> +#define __set_bit(n, p)      set_bit(n, p)
> +#define __clear_bit(n, p)    clear_bit(n, p)
> +
> +#define test_and_op_bit_ord(op, mod, nr, addr, ord)     \
> +({                                                      \
> +    bitop_uint_t res, mask;                             \
> +    mask = BITOP_MASK(nr);                              \
> +    asm volatile (                                      \
> +        __AMO(op) #ord " %0, %2, %1"                    \
> +        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])       \
> +        : "r" (mod(mask))                               \
> +        : "memory");                                    \
> +    ((res & mask) != 0);                                \
> +})
> +
> +#define op_bit_ord(op, mod, nr, addr, ord)      \
> +    asm volatile (                              \
> +        __AMO(op) #ord " zero, %1, %0"          \
> +        : "+A" (addr[BITOP_WORD(nr)])           \
> +        : "r" (mod(BITOP_MASK(nr)))             \
> +        : "memory");
> +
> +#define test_and_op_bit(op, mod, nr, addr)    \
> +    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
> +#define op_bit(op, mod, nr, addr) \
> +    op_bit_ord(op, mod, nr, addr, )
> +
> +/* Bitmask modifiers */
> +#define NOP(x)    (x)
> +#define NOT(x)    (~(x))

Since elsewhere you said we would use Zbb in bitops, I wanted to come back
on that: Up to here all we use is AMO.

And further down there's no asm() anymore. What were you referring to?
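(For reference, aside from the AMO instructions the quoted hunk only relies
on plain word/mask arithmetic, which can be spot-checked without a RISC-V
toolchain; BITOP_BITS_PER_WORD == 32 is assumed here:)

```shell
# Spot-check of the BITOP_WORD()/BITOP_MASK() arithmetic the quoted macros
# use: for nr = 37 with 32-bit words, the bit lives in word 1, mask 1 << 5.
nr=37
bits=32
echo "word=$((nr / bits)) mask=$(printf '0x%08x' $((1 << (nr % bits))))"
# prints: word=1 mask=0x00000020
```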

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:43:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:43:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748509.1156236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOFe-0008J4-4m; Wed, 26 Jun 2024 08:43:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748509.1156236; Wed, 26 Jun 2024 08:43:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOFe-0008Ix-1k; Wed, 26 Jun 2024 08:43:38 +0000
Received: by outflank-mailman (input) for mailman id 748509;
 Wed, 26 Jun 2024 08:43:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOFc-0008In-OB; Wed, 26 Jun 2024 08:43:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOFc-0002KJ-Bc; Wed, 26 Jun 2024 08:43:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOFc-00013D-1g; Wed, 26 Jun 2024 08:43:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOFc-0007Jk-19; Wed, 26 Jun 2024 08:43:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nVgZjRBHzDqEdfkhm8y25De9WiYWVdQEKOLgEv8eAg4=; b=O2/8yXQUUI+sXV7aILq8OAhkMd
	yTTYWGWSodYTt8joD/eS2CCCU50KeR5qrrZ/X3Ym5TuD9wAl4BvhAZqGyd7n2LLcn0Eiq9InL32uF
	//af3eV7gJK5T7AWY5nGEZPqrB6NAV+II+aVqr86jGRwcoJMuAFxTpUi/TX8I18TAmKU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186507-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186507: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=0c94ec428fd04eb1d1d2da8d4b1ea74240b836a2
X-Osstest-Versions-That:
    libvirt=43a0881274e632dc44fff9320357dc8bf31e4826
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 08:43:36 +0000

flight 186507 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186507/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186451
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              0c94ec428fd04eb1d1d2da8d4b1ea74240b836a2
baseline version:
 libvirt              43a0881274e632dc44fff9320357dc8bf31e4826

Last test of basis   186451  2024-06-22 04:20:26 Z    4 days
Failing since        186475  2024-06-25 04:20:38 Z    1 days    2 attempts
Testing same since   186507  2024-06-26 04:20:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Julis <ajulis@redhat.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Jiri Denemark <jdenemar@redhat.com>
  Jon Kohler <jon@nutanix.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Laine Stump <laine@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Rayhan Faizel <rayhan.faizel@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   43a0881274..0c94ec428f  0c94ec428fd04eb1d1d2da8d4b1ea74240b836a2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:48:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748517.1156245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOKU-000112-MD; Wed, 26 Jun 2024 08:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748517.1156245; Wed, 26 Jun 2024 08:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOKU-00010v-JF; Wed, 26 Jun 2024 08:48:38 +0000
Received: by outflank-mailman (input) for mailman id 748517;
 Wed, 26 Jun 2024 08:48:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOKS-00010p-Rq
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:48:36 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id df776c24-3398-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 10:48:35 +0200 (CEST)
Received: from [10.176.134.80] (unknown [160.78.253.181])
 by support.bugseng.com (Postfix) with ESMTPSA id 3BEAE4EE0738;
 Wed, 26 Jun 2024 10:48:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df776c24-3398-11ef-b4bb-af5377834399
Message-ID: <d35cf13a-5cfd-425f-9c01-3a4122da3a69@bugseng.com>
Date: Wed, 26 Jun 2024 10:48:33 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH for 4.19] automation/eclair: add deviations agreed in
 MISRA meetings
To: Oleksii <oleksii.kurochko@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
References: <4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com>
 <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop>
 <c6aeb6007ead36afaf48ceef1070e5ec5a2ef88f.camel@gmail.com>
Content-Language: en-US, it
From: Federico Serafini <federico.serafini@bugseng.com>
Organization: BUGSENG
In-Reply-To: <c6aeb6007ead36afaf48ceef1070e5ec5a2ef88f.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 26/06/24 09:37, Oleksii wrote:
> On Tue, 2024-06-25 at 18:59 -0700, Stefano Stabellini wrote:
>>> +-doc_begin="The conversion from a function pointer to unsigned
>>> long or (void *) does not lose any information, provided that the
>>> target type has enough bits to store it."
>>> +-config=MC3R1.R11.1,casts+={safe,
>>> +  "from(type(canonical(__function_pointer_types)))
>>> +   &&to(type(canonical(builtin(unsigned
>>> long)||pointer(builtin(void)))))
>>> +   &&relation(definitely_preserves_value)"
>>> +}
>>> +-doc_end
>>
>> This one and the ones below are the important ones! I think we should
>> have them in the tree as soon as possible ideally 4.19. I ask for
>> a release-ack.
> Just want to be sure that I understand deviations properly with this
> example.
> 
> If the deviation above is merged, then it would be safe from a MISRA
> point of view to cast a function pointer to 'unsigned long' or 'void
> *', and thereby MISRA won't complain about code with such conversions?

Exactly, taking into account section 4.7 of GCC manual.

-- 
Federico Serafini, M.Sc.

Software Engineer, BUGSENG (http://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:52:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:52:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748525.1156256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOO5-0002h6-6H; Wed, 26 Jun 2024 08:52:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748525.1156256; Wed, 26 Jun 2024 08:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOO5-0002gz-2S; Wed, 26 Jun 2024 08:52:21 +0000
Received: by outflank-mailman (input) for mailman id 748525;
 Wed, 26 Jun 2024 08:52:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMOO3-0002gt-Ha
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:52:19 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 64ad6ef4-3399-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 10:52:18 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2eaea28868dso84891581fa.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 01:52:18 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb329b83sm94479965ad.115.2024.06.26.01.52.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 01:52:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64ad6ef4-3399-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719391938; x=1719996738; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KnFVdy4+ae2BUOnNtfAoMiKw5cr1cs0xDtfUlQA/s0g=;
        b=WNILZdt+wMiK04D9QYh+624XHHGv0bpdl3K65R36ndJkflkuNAQ6flWDV3ngiZ/u2h
         sAaOguh79KIGKDUmTc0qkflVPaqB48o88AyBEFKbodGMm89D5qKxhFBUIpkXkQxD85Yu
         eUx8BNFrMLKDVWT607z3uEVaCeqRkNXKpiObW30WgyEaKJX7lG485UOZHDcb/sXX1z8G
         vUrN3sPeZi2rgWppc2OPXLdg/PuU/nmsziT0nHx89IxozjkeSgM0Nl2hEnclknqHsM14
         w8E+X1GmhsK0pv3LdcC2C6xGBOxBl924HQCYafunxloCvjd29ioqvhZNR9RX4zbWpu/c
         AoYw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719391938; x=1719996738;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KnFVdy4+ae2BUOnNtfAoMiKw5cr1cs0xDtfUlQA/s0g=;
        b=S+98VTcFoemc3AspMYMNo4vJAZe6ejAErWhJMdFdMce34mvOtoybiYeRBt8hqPPQ+9
         bzN7dH8kLSTT61UFA3FwPjPYqcJ3vDk+T9Ko+PFMrVyZr6m4yls/ptBab3iMlcTFcm7U
         3gGi1q4a8KWUilPmuMqJNtgTsxl+IJDvZ0Alkk/KC3TRP3kKPIasJDeig+vHjfPijneH
         kslqyAJCCny5N3DwWC6VoesB1XRwekSZdHbbKsP6BTXXuz8aF3kVqYrL8VxHscgbTT1Z
         Rdmx+fKFRy8jZEGd5WK9rXATMFZz6kLpY2XtCNBNij/MQ+IMKuedzFZ02UlYFyX7THm1
         ae/g==
X-Gm-Message-State: AOJu0YxrjzYtmP3pXrfBAMKyvqkT8fKXxmoNh+NHl2uxM7SrUvHD3/Su
	SowUGuGfmCtX+//a2UCBsOkDpvYp43zvHpiKQr3/6eiNUhY6jbLD0d3hultUoQ==
X-Google-Smtp-Source: AGHT+IEuFuL4yWPCdBXokIYl02WfiKm6TAR4Czd6HSjI2CaAH/tBF03BEkRSssAGep22B7jjqefRwQ==
X-Received: by 2002:a2e:7c07:0:b0:2ec:4f0c:36f9 with SMTP id 38308e7fff4ca-2ec5b31d140mr78269471fa.36.1719391937973;
        Wed, 26 Jun 2024 01:52:17 -0700 (PDT)
Message-ID: <0b55dcb7-dbba-4c90-b0a2-9e158317f88a@suse.com>
Date: Wed, 26 Jun 2024 10:52:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen: Fix null pointer dereference in xen_init_lock_cpu()
To: Ma Ke <make24@iscas.ac.cn>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 jgross@suse.com, boris.ostrovsky@oracle.com, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com, x86@kernel.org,
 hpa@zytor.com
References: <20240626074339.2820381-1-make24@iscas.ac.cn>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240626074339.2820381-1-make24@iscas.ac.cn>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 09:43, Ma Ke wrote:
> kasprintf() is used for formatting strings and dynamically allocating
> memory space. If memory allocation fails, kasprintf() will return NULL.
> We should add a check to ensure that failure does not occur.
> 
> Fixes: d5de8841355a ("x86: split spinlock implementations out into their own files")
> Signed-off-by: Ma Ke <make24@iscas.ac.cn>
> ---
> Found this error through static analysis.
> ---
>  arch/x86/xen/spinlock.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index 5c6fc16e4b92..fe3cd95c1604 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -75,6 +75,8 @@ void xen_init_lock_cpu(int cpu)
>  	     cpu, per_cpu(lock_kicker_irq, cpu));
>  
>  	name = kasprintf(GFP_KERNEL, "spinlock%d", cpu);
> +	if (!name)
> +		return;
>  	per_cpu(irq_name, cpu) = name;
>  	irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
>  				     cpu,

While yes, error checking would better be added here, this isn't enough.
You're trading an easy-to-diagnose issue (at the point where the NULL
dereference would be attempted) for a possibly more difficult-to-diagnose
one: any such failure will also need propagating back up the call stack.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 08:58:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 08:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748530.1156267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOTT-0003S4-Qh; Wed, 26 Jun 2024 08:57:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748530.1156267; Wed, 26 Jun 2024 08:57:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOTT-0003Rx-Lu; Wed, 26 Jun 2024 08:57:55 +0000
Received: by outflank-mailman (input) for mailman id 748530;
 Wed, 26 Jun 2024 08:57:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VtjD=N4=infradead.org=peterz@srs-se1.protection.inumbo.net>)
 id 1sMOTR-0003Rr-P8
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 08:57:54 +0000
Received: from desiato.infradead.org (desiato.infradead.org
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 296f6ee3-339a-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 10:57:51 +0200 (CEST)
Received: from j130084.upc-j.chello.nl ([24.132.130.84]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.97.1 #2 (Red Hat Linux))
 id 1sMOSu-00000008YMV-01fV; Wed, 26 Jun 2024 08:57:26 +0000
Received: by noisy.programming.kicks-ass.net (Postfix, from userid 1000)
 id 5C20D30057C; Wed, 26 Jun 2024 10:57:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 296f6ee3-339a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Type:MIME-Version:
	References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=c55CHPi2kcTRcsfp+2hg5MOW5UaLFYITtMG7g7iadt0=; b=EljuUJOGKoAeaU+wJym6grYK7w
	vtfBwB3FBLnWHkkW0aaOZT5HY5PAe71/c2ZkAiv4uiXEk5Tu9Fk9EpV+jWxxVtwVJuMZxDT4wZD+C
	OgzcQI+YfIuKuW7E5yG249OJEMQbybPj4Hf6XW5zV5lCgXd+AOrb4jDtmNH8QvSfoSsBOOZIHf4zu
	LruI6N2DgNs4nYFo+fW2XsyWm2ESBENGJWk/k5f1xr2FKl3t1Ze5ROBM2byOolbgNZ+lLSDvrXY2i
	HUW529605MkK8Ew8FFV092FJLFXfv/xP00FRbSni+KhvPZlizc87RYNQOKrqTGbsDLEIVbqnsccQ4
	IPaKt9BA==;
Date: Wed, 26 Jun 2024 10:57:17 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Ma Ke <make24@iscas.ac.cn>
Cc: jgross@suse.com, boris.ostrovsky@oracle.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, dave.hansen@linux.intel.com,
	x86@kernel.org, hpa@zytor.com, jeremy@goop.org,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH] xen: Fix null pointer dereference in xen_init_lock_cpu()
Message-ID: <20240626085717.GB31592@noisy.programming.kicks-ass.net>
References: <20240626074339.2820381-1-make24@iscas.ac.cn>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20240626074339.2820381-1-make24@iscas.ac.cn>

On Wed, Jun 26, 2024 at 03:43:39PM +0800, Ma Ke wrote:
> kasprintf() is used for formatting strings and dynamically allocating
> memory space. If memory allocation fails, kasprintf() will return NULL.
> We should add a check to ensure that failure does not occur.

Did you also consider what happens to the machine if you omit the rest
of this function at init?

As is, it is *extremely* unlikely the machine will fail the allocation
at boot (it has all the memory unused after all) and if for some
mysterious reason it does fail, we get a nice bug halting the boot and
telling us where shit hit fan.

Now we silently continue with undefined state and will likely run into
trouble later because we failed to set up things, like that irq handler.
At which point everybody will be needing to buy a new WTF'o'meter to
figure out WTF happened to get in that insane position.



> Fixes: d5de8841355a ("x86: split spinlock implementations out into their own files")
> Signed-off-by: Ma Ke <make24@iscas.ac.cn>
> ---
> Found this error through static analysis.

Just because your tool found something, doesn't mean you have to be a
tool and also not think about things.

Please use brain don't be a tool.

> ---
>  arch/x86/xen/spinlock.c | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
> index 5c6fc16e4b92..fe3cd95c1604 100644
> --- a/arch/x86/xen/spinlock.c
> +++ b/arch/x86/xen/spinlock.c
> @@ -75,6 +75,8 @@ void xen_init_lock_cpu(int cpu)
>  	     cpu, per_cpu(lock_kicker_irq, cpu));
>  
>  	name = kasprintf(GFP_KERNEL, "spinlock%d", cpu);
> +	if (!name)
> +		return;
>  	per_cpu(irq_name, cpu) = name;
>  	irq = bind_ipi_to_irqhandler(XEN_SPIN_UNLOCK_VECTOR,
>  				     cpu,
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:02:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:02:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748540.1156276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOYB-0005gL-EZ; Wed, 26 Jun 2024 09:02:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748540.1156276; Wed, 26 Jun 2024 09:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOYB-0005gE-BA; Wed, 26 Jun 2024 09:02:47 +0000
Received: by outflank-mailman (input) for mailman id 748540;
 Wed, 26 Jun 2024 09:02:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMOYA-0005g8-ME
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:02:46 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d9a5a524-339a-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:02:44 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec3f875e68so71606041fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 02:02:44 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7066ac98fc9sm7449069b3a.193.2024.06.26.02.02.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 02:02:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9a5a524-339a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719392564; x=1719997364; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=bqkMklrYlhAjgUeJ0hm27xKX3s305+zLVUmyJ9bylK4=;
        b=IuCD+fNtQ0Ow9flEr22SJ/XZz4zydJTRoNOOQ0t9Dagn7iiftoUQi1M7miGnPj2G7F
         /0iHilM5URUbgUoOYWpzLP5Uohgk9nFgAnmUlhofzRstSsMYfYzzX/txMVTI5dqbdmLT
         3lbM2Va+7yTAgRaW7kQmtH556VyrrGFaI4M316fhmHn3CQbGbU5tFKX6zPE9+1w/1VaP
         3vkaHsdM5y8o1GtSEBBiY3nu+y0le1jedRzsHufnBI/n/lFBY4FzrveKN6NzdGACVy3B
         TlNERRO4NDhOFAV3Axhj6Nn5UeESJYFBp2HtHJRV0YSTJIGrMg6wMrNioLiPnLx63AZT
         vnhA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719392564; x=1719997364;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=bqkMklrYlhAjgUeJ0hm27xKX3s305+zLVUmyJ9bylK4=;
        b=ErblMzET/1iGItsGPd4JKY2y2oJy+X+YYIu0Q3jOAK0GcMNN0LKJ40FECsHxhw8uoS
         OM2soaH3vMhEBmnQlpFB+eW8qDLYcLXXdrSnNRmdE+ZLLHvU4YrYmLl9NI2HVSrkqF5i
         y0AdKvZyI11EbzUu55e7NRUlJayOAMYNQauPS/zYWVRNQFcpKvi5bcBkDsijkljbuk9h
         TWjP7pR4atc8VCATvakPzHlAAkS0XT41Gnkl5V18Y8nXN55dsITpejsdzpxI09iF0oYM
         j8cd3aKOhzuKTSH5sjTUJFFZukf2eIgNFl7xS816cDLYKH/AqeR8sRUMclGqvejKmlUN
         1xJA==
X-Forwarded-Encrypted: i=1; AJvYcCWkOsiLbYgANkqXaVByO/iOvcvCRsnYSOFWUnANZmTX5iNKhGP50P/SdJyFttczMoOYjQj2DwIAV5v3RT4nwthsoxig5jKkvbGn6Tr3yRQ=
X-Gm-Message-State: AOJu0YyzEadqE6rzNfH7uvOiv0z7TV+z8+6kZWXIV92hgEe9QdH56qDy
	0jBXhVOE8sMVs4JTvG1dIDjGlMBEH9f4OUFkC0frMFq34/EUs1qi5ugj5MLiMQ==
X-Google-Smtp-Source: AGHT+IEh+m7kn+LNw3HeLiUsYaIsCFmQe2i4nKVtBN1bzhBikCjJi1QZWRFi30GL1kvliMV60tza+w==
X-Received: by 2002:a2e:7203:0:b0:2ec:2314:3465 with SMTP id 38308e7fff4ca-2ec5b36b959mr54926411fa.11.1719392551651;
        Wed, 26 Jun 2024 02:02:31 -0700 (PDT)
Message-ID: <e740e1be-7890-4e8f-879a-87043ac109c5@suse.com>
Date: Wed, 26 Jun 2024 11:02:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2] xen/sched: fix error handling in cpu_schedule_up()
To: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20240626055425.3622-1-jgross@suse.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240626055425.3622-1-jgross@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 07:54, Juergen Gross wrote:
> If cpu_schedule_up() fails, it needs to undo all externally visible
> changes it has made up to that point.
> 
> The reason is that cpu_schedule_callback() won't be called with the
> CPU_UP_CANCELED notifier when cpu_schedule_up() has failed.
> 
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Fixes: 207589dbacd4 ("xen/sched: move per cpu scheduler private data into struct sched_resource")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two questions, just for my own reassurance:

> --- a/xen/common/sched/core.c
> +++ b/xen/common/sched/core.c
> @@ -2755,6 +2755,36 @@ static struct sched_resource *sched_alloc_res(void)
>      return sr;
>  }
>  
> +static void cf_check sched_res_free(struct rcu_head *head)
> +{
> +    struct sched_resource *sr = container_of(head, struct sched_resource, rcu);
> +
> +    free_cpumask_var(sr->cpus);
> +    if ( sr->sched_unit_idle )
> +        sched_free_unit_mem(sr->sched_unit_idle);
> +    xfree(sr);
> +}
> +
> +static void cpu_schedule_down(unsigned int cpu)
> +{
> +    struct sched_resource *sr;
> +
> +    rcu_read_lock(&sched_res_rculock);
> +
> +    sr = get_sched_res(cpu);
> +
> +    kill_timer(&sr->s_timer);
> +
> +    cpumask_clear_cpu(cpu, &sched_res_mask);
> +    set_sched_res(cpu, NULL);
> +
> +    /* Keep idle unit. */
> +    sr->sched_unit_idle = NULL;
> +    call_rcu(&sr->rcu, sched_res_free);
> +
> +    rcu_read_unlock(&sched_res_rculock);
> +}

Eyeballing suggests these two functions don't change at all; they're solely
being moved up?

Also, for the purpose here, using RCU to effect the freeing isn't strictly
necessary. It's just that it doesn't hurt to do it that way, and freeing
directly when coming from cpu_schedule_up() would require more code?

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:06:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:06:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748545.1156285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMObx-0006ES-Tj; Wed, 26 Jun 2024 09:06:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748545.1156285; Wed, 26 Jun 2024 09:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMObx-0006EL-R7; Wed, 26 Jun 2024 09:06:41 +0000
Received: by outflank-mailman (input) for mailman id 748545;
 Wed, 26 Jun 2024 09:06:40 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMObw-0006EF-3y
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:06:40 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64118e2d-339b-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:06:36 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ebed33cb65so72292041fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 02:06:36 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7069eb83d7fsm2305098b3a.99.2024.06.26.02.06.30
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 02:06:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64118e2d-339b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719392796; x=1719997596; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=/Jeq0n8+gSFVF/a9Kn28ZuDMbwG8/SmVYarkQo66UCU=;
        b=cPU5+U0LA/Pha0fqnZkLq+kTXtFOG3WGxReIvi9THEX2TnEGTTpYqUUK4ESacRXE9u
         /W2ffXYvNTlkapeU7H3ZYmWPCGcb/GsHy0zcYC7bKxw7p7FV1BPhN3ZKL/XNkRRhbDQ3
         3Vlyv1s0ENhC7D6zJx+d8JqkYXPDL0xKPU84BRVjS3w/im4nArvncW8DvICkFw3ezCTi
         qAOJaF+0S3zWXVaiTRXn6OfzqU8gUk1Rq5xH/Fpig/2dHEJFs90Uy649nDs8UnotOFEi
         v/WwGnJnyOjaoH5jjs1+FdxR856Y72rKXmPJDiYoQ9lBjGTArvuugqkwatkqVyGjn3Za
         eMqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719392796; x=1719997596;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=/Jeq0n8+gSFVF/a9Kn28ZuDMbwG8/SmVYarkQo66UCU=;
        b=GoJ7y3CWceUPAk4MRs7GbUScZzqkdF7u10qB8rXNfQ5mnyPZQgdh3lWGV6dNDgCQV8
         eXO0YaG2ekplZ2NOJ85AJaoM+wxjxl8w6Zg63HGt2YICJopCSc+KS1aa/VMsqi4p+ySi
         OiF412zx1EofnE4WfbcFwe3VqTfXWJ3dhH4X3jgWRDkIfqfI/BbJs+tA2G+OvmwxsEe9
         NNSWwbY0Uz3rXnbijYoguUdvE/H/czDgChkI31lJ3nbkQWk1rMsqidx31qk+gaT4YRHy
         /aNJHeylevHHDD6FFPoimKfg7S5Q7kETMxWdzqHBN1D10ao7HJlUkHCjuV+3Ppk6fPIo
         5Drw==
X-Forwarded-Encrypted: i=1; AJvYcCX0xBBXjrd/Gzs6qNaijSKxEDFgghE4Vb26gc+LyZVT8TH5LRQuihYOJJwDXxHT7u66qrLxnDv2XvgYHlDUz8+e/unNWQB+Wo97J1jKY5E=
X-Gm-Message-State: AOJu0YxPxJ8dG3ckG+axa9FGpQl5c4zCIVo9oEXfVUhzxSFxA/hhId3e
	1rjtYRlFZcfLpYxiCuzSGMx0f/BFwlaz27qbUeb6rEFN3u9s5r8+JA0MGX0hcA==
X-Google-Smtp-Source: AGHT+IGj8qL12/pe13crY3AWK9qjFSf4Ze8LTduTeAxWSLzdRVvfc9TAegtIFcorjZm+mCYaGCp6nw==
X-Received: by 2002:a2e:2419:0:b0:2ec:422:126 with SMTP id 38308e7fff4ca-2ec5932a567mr73398921fa.30.1719392795918;
        Wed, 26 Jun 2024 02:06:35 -0700 (PDT)
Message-ID: <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com>
Date: Wed, 26 Jun 2024 11:06:26 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <ef623bad297d016438b35bedc80f091d@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:31, Nicola Vetrini wrote:
> On 2024-03-12 09:16, Jan Beulich wrote:
>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>> --- a/xen/arch/x86/Makefile
>>> +++ b/xen/arch/x86/Makefile
>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i 
>>> $(src)/Makefile
>>>  	$(call filechk,asm-macros.h)
>>>
>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>>
>> This wants to use :=, I think - there's no reason to invoke the shell 
>> ...
> 
> I agree on this
> 
>>
>>>  define filechk_asm-macros.h
>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>      echo '#if 0'; \
>>>      echo '.if 0'; \
>>>      echo '#endif'; \
>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>>> -    echo '#define __ASM_MACROS_H__'; \
>>>      echo 'asm ( ".include \"$@\"" );'; \
>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>>      echo '#if 0'; \
>>>      echo '.endif'; \
>>>      cat $<; \
>>> -    echo '#endif'
>>> +    echo '#endif'; \
>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>>  endef
>>
>> ... three times while expanding this macro. Alternatively (to avoid
>> an unnecessary shell invocation when this macro is never expanded at
>> all) a shell variable inside the "define" above would want introducing.
>> Whether this 2nd approach is better depends on whether we anticipate
>> further uses of ARCHDIR.
> 
> However here I'm not entirely sure about the meaning of this latter 
> proposal.
> My proposal is the following:
> 
> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
> 
> in a suitably generic place (such as Kbuild.include or maybe 
> xen/Makefile) as you suggested in subsequent patches that reused this 
> pattern.

If $(ARCHDIR) is going to be used elsewhere, then what you suggest is fine.
My "whether" in the earlier reply specifically left open for clarification
what the intentions with the variable are. The alternative I had described
makes sense only when $(ARCHDIR) would only ever be used inside the
filechk_asm-macros.h macro.

Jan
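[Editorial sketch, not part of the original mail: the ARCHDIR assignment
under discussion is just an upper-casing pipeline, shown stand-alone below
with an assumed SRCARCH value. The `=` vs `:=` point is that a recursively
expanded (`=`) variable re-runs the $(shell ...) pipeline on every
$(ARCHDIR) expansion -- three times inside filechk_asm-macros.h -- while a
simply expanded (`:=`) variable runs it once, at the point of assignment.]

```shell
# Stand-alone rendition of what the ARCHDIR assignment computes:
# upper-case the architecture directory name.
SRCARCH=x86                                  # assumed example value
ARCHDIR=$(printf '%s' "$SRCARCH" | tr a-z A-Z)
printf '%s\n' "$ARCHDIR"                     # prints: X86
```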


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:10:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:10:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748550.1156296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOfc-0000BX-CX; Wed, 26 Jun 2024 09:10:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748550.1156296; Wed, 26 Jun 2024 09:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOfc-0000BQ-9J; Wed, 26 Jun 2024 09:10:28 +0000
Received: by outflank-mailman (input) for mailman id 748550;
 Wed, 26 Jun 2024 09:10:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOfa-0000BG-PR; Wed, 26 Jun 2024 09:10:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOfa-0002qF-Iv; Wed, 26 Jun 2024 09:10:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOfa-0001ph-5n; Wed, 26 Jun 2024 09:10:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMOfa-0007fG-5R; Wed, 26 Jun 2024 09:10:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Y6tziryg9+GjHtGa+SZ6jsoli3aAaaCTu/WgNkUr+i4=; b=FZLGMgtUU+IpAz8QwkZ5AOGfln
	u9VBltM0eIRVE3KE7zdHf81rad9SIs9Av5VhmveXIEgtY9jlTwd24GpPzJWyK5EVXiZe0UYMrkF+1
	vIab0y9MctB0R55hhIO+ZlxiaEP6uEPgMy+0IchoT+ocn6YcK4ZB6lzrGuatHptYt8kc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186511-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186511: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=78bccfec9ce5082499db035270e7998d5330d75c
X-Osstest-Versions-That:
    ovmf=e21bfae345f9eee1c3f585013ca50ad6ab4f86a1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 09:10:26 +0000

flight 186511 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186511/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 78bccfec9ce5082499db035270e7998d5330d75c
baseline version:
 ovmf                 e21bfae345f9eee1c3f585013ca50ad6ab4f86a1

Last test of basis   186509  2024-06-26 04:41:07 Z    0 days
Testing same since   186511  2024-06-26 07:13:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e21bfae345..78bccfec9c  78bccfec9ce5082499db035270e7998d5330d75c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:20:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:20:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748564.1156305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOoz-0002JQ-9q; Wed, 26 Jun 2024 09:20:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748564.1156305; Wed, 26 Jun 2024 09:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOoz-0002JJ-7N; Wed, 26 Jun 2024 09:20:09 +0000
Received: by outflank-mailman (input) for mailman id 748564;
 Wed, 26 Jun 2024 09:20:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMOox-0002HF-5P
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:20:07 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 46a5b478-339d-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:20:06 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 9E1124EE0738;
 Wed, 26 Jun 2024 11:20:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46a5b478-339d-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Wed, 26 Jun 2024 11:20:05 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
In-Reply-To: <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com>
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
 <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com>
Message-ID: <797b00049612507d273facc581b2c2c5@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-26 11:06, Jan Beulich wrote:
> On 25.06.2024 21:31, Nicola Vetrini wrote:
>> On 2024-03-12 09:16, Jan Beulich wrote:
>>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>>> --- a/xen/arch/x86/Makefile
>>>> +++ b/xen/arch/x86/Makefile
>>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i
>>>> $(src)/Makefile
>>>>  	$(call filechk,asm-macros.h)
>>>> 
>>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>>> 
>>> This wants to use :=, I think - there's no reason to invoke the shell
>>> ...
>> 
>> I agree on this
>> 
>>> 
>>>>  define filechk_asm-macros.h
>>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>      echo '#if 0'; \
>>>>      echo '.if 0'; \
>>>>      echo '#endif'; \
>>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>>>> -    echo '#define __ASM_MACROS_H__'; \
>>>>      echo 'asm ( ".include \"$@\"" );'; \
>>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>>>      echo '#if 0'; \
>>>>      echo '.endif'; \
>>>>      cat $<; \
>>>> -    echo '#endif'
>>>> +    echo '#endif'; \
>>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>>>  endef
>>> 
>>> ... three times while expanding this macro. Alternatively (to avoid
>>> an unnecessary shell invocation when this macro is never expanded at
>>> all) a shell variable inside the "define" above would want 
>>> introducing.
>>> Whether this 2nd approach is better depends on whether we anticipate
>>> further uses of ARCHDIR.
>> 
>> However here I'm not entirely sure about the meaning of this latter
>> proposal.
>> My proposal is the following:
>> 
>> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
>> 
>> in a suitably generic place (such as Kbuild.include or maybe
>> xen/Makefile) as you suggested in subsequent patches that reused this
>> pattern.
> 
> If $(ARCHDIR) is going to be used elsewhere, then what you suggest is 
> fine.
> My "whether" in the earlier reply specifically left open for 
> clarification
> what the intentions with the variable are. The alternative I had 
> described
> makes sense only when $(ARCHDIR) would only ever be used inside the
> filechk_asm-macros.h macro.
> 

Yes, the intention is to reuse $(ARCHDIR) elsewhere, as you can tell 
from the fact that subsequent patches replicate the same pattern. This 
is going to save some duplication.
The only matter left, then, is whether xen/Makefile (around line 250, 
just after SRCARCH is set) or Kbuild.include would be the better place. 
To me the former seems more natural, but I'm not totally sure.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:24:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:24:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748571.1156316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOsd-000392-NQ; Wed, 26 Jun 2024 09:23:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748571.1156316; Wed, 26 Jun 2024 09:23:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOsd-00038v-Kh; Wed, 26 Jun 2024 09:23:55 +0000
Received: by outflank-mailman (input) for mailman id 748571;
 Wed, 26 Jun 2024 09:23:55 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMOsc-00038p-WF
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:23:55 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cdb9260d-339d-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:23:52 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec002caf3eso96666991fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 02:23:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c8d8066732sm1171601a91.28.2024.06.26.02.23.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 02:23:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cdb9260d-339d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719393832; x=1719998632; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=GPWkw+A0QweIEbEC9yzQBkX2k6VTxaQwM232yNcOUsA=;
        b=DxhZfDcftYbbDvNRSFnhD6HNCCvWf0ZZgTOeGxNCCbIZUq0sbafrl0zMFQKIRLQCPF
         sC6BlTGIZuxwKQh9/rl0jVtxlXbpABVJ6YqNmYGwnFudiCO96q7v5/RdP5oFaUrW+xdq
         cQCRJNitU2++Rkw+Q2xRw2rrE5/wiLSzzIgjv+b9wzdJg6FUU2KYav8xfcOi8mC7/kW9
         gUZ7YYK6cWUzddET4OXaneBB7kXMiZpQr2FAErImZTKmKJPIa7Cr7hr8FTLlzpY/j0FQ
         SGmQH3BDszh67BAzFC2V0KKvOesaNAgQMtLHF4+RrXPvVj8KYNY4RUjHLj8/X+n7x6k4
         z0Jg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719393832; x=1719998632;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=GPWkw+A0QweIEbEC9yzQBkX2k6VTxaQwM232yNcOUsA=;
        b=l0Xmz+Mu1fHbWjQ5qcocRPmM3NKcPRkl+FBqAMLzjkI8XFFpFiaNYaxyzIXE5Vcn8g
         ECen1RinwOvsc5BN5FEu3jw1B0myuj2A4basMoUSk84l3uvh/7GVj+ZMit302S0bLMl7
         VRv4eRBfLYvZZoYvbhryiz2o10uZ37Gyduv90aDxLFr/lM0bUl83fggLwmlVLS0/vzWi
         tl1qSrsdksXDnh13iF4LGLynxC1rw8igRCwpHzg2UNNSY5ICQAKI/E8ouhBWizAy6TD2
         5JbiMyHQu77wH/Tj98ejACMSJNzwVGwD45WqD6UZcssvwky/ddfYe1qJ6VmA8F4bZuoi
         iXrA==
X-Forwarded-Encrypted: i=1; AJvYcCUbEfCYO4pHTEHc5yfoL40zPxmUjw0iPh7fKfFir8iTR39da+mL3xIsl6lzuuY+WKByJNp1LCz2SuL0uYY4pYSQ0EAp3mHEvGJhK4u0LE8=
X-Gm-Message-State: AOJu0YxvvNjoHQReND8HA6hrLIkSho+D+qsPf8hIGH8dvffLA8QT8WKq
	oY3nnQlasJ/tpU8fPCmyi3AyQ9/FH1r6Xvdyys92rBxrro/FnLTFLCiAXyCqOg==
X-Google-Smtp-Source: AGHT+IGu80i2Hef2sO1jyd17JWBRJVn3fEU4adxiU4rbmi0dnfMyWsF6mr3pF3xxzzxak+nexY5t3w==
X-Received: by 2002:a2e:7310:0:b0:2ec:165a:224d with SMTP id 38308e7fff4ca-2ec5b31d131mr73324551fa.38.1719393832133;
        Wed, 26 Jun 2024 02:23:52 -0700 (PDT)
Message-ID: <1fe765d7-65c4-4f04-8835-327a0b623010@suse.com>
Date: Wed, 26 Jun 2024 11:23:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? 3/6] xen/macros: Introduce BUILD_ERROR()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-4-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240625190719.788643-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:07, Andrew Cooper wrote:
> --- a/xen/include/xen/macros.h
> +++ b/xen/include/xen/macros.h
> @@ -59,6 +59,8 @@
>  #define BUILD_BUG_ON(cond) ((void)BUILD_BUG_ON_ZERO(cond))
>  #endif
>  
> +#define BUILD_ERROR(msg) asm ( ".error \"" msg "\"" )

I think this wants a comment, and one even beyond what is said for
BUILD_BUG_ON(). This is primarily to make clear to people when to use
which, i.e. I consider it important to mention here that it is intended
for code which, in the normal case, we expect to be DCE-d. The nature
of the condition used is also a relevant factor, as in some cases
BUILD_BUG_ON() simply cannot be used because something that really is
compile-time constant isn't an integer constant expression. With
something to this effect:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

I have another question / suggestion, though.

> --- a/xen/include/xen/self-tests.h
> +++ b/xen/include/xen/self-tests.h
> @@ -22,9 +22,9 @@
>          typeof(fn(val)) real = fn(val);                                 \
>                                                                          \
>          if ( !__builtin_constant_p(real) )                              \
> -            asm ( ".error \"'" STR(fn(val)) "' not compile-time constant\"" ); \
> +            BUILD_ERROR("'" STR(fn(val)) "' not compile-time constant"); \
>          else if ( real != res )                                         \
> -            asm ( ".error \"Compile time check '" STR(fn(val) == res) "' failed\"" ); \
> +            BUILD_ERROR("Compile time check '" STR(fn(val) == res) "' failed"); \
>      } while ( 0 )

While right here leaving the condition outside of the macro is
perhaps more natural, considering a case where there's just an if()
I wonder whether we shouldn't also (only?) have BUILD_ERROR_ON(),
better paralleling BUILD_BUG_ON():

    BUILD_ERROR_ON(!__builtin_constant_p(real),
                   "'" STR(fn(val)) "' not compile-time constant");

It then becomes questionable whether a string literal needs passing,
or whether instead the condition couldn't just be stringified while
passing to the asm():

    BUILD_ERROR_ON(!__builtin_constant_p(real));

The thing could even "return" the predicate, permitting

    if ( !BUILD_ERROR_ON(!__builtin_constant_p(real)) )
        BUILD_ERROR_ON(real != res);

I realize, though, that there may be issues from this with unused values
being diagnosed by the compiler and/or Eclair, when the "return value" is
not of interest.

I'd be fine with the respective transformation being left for 4.20,
though. Yet of course, churn-wise, it would be better to get into final
shape right away.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:26:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:26:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748578.1156326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOv2-0003if-8D; Wed, 26 Jun 2024 09:26:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748578.1156326; Wed, 26 Jun 2024 09:26:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOv2-0003iY-4Y; Wed, 26 Jun 2024 09:26:24 +0000
Received: by outflank-mailman (input) for mailman id 748578;
 Wed, 26 Jun 2024 09:26:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMOv1-0003iR-Ih
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:26:23 +0000
Received: from mail-lj1-x22c.google.com (mail-lj1-x22c.google.com
 [2a00:1450:4864:20::22c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26796c31-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:26:21 +0200 (CEST)
Received: by mail-lj1-x22c.google.com with SMTP id
 38308e7fff4ca-2eaa89464a3so73928001fa.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 02:26:21 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-70688f2d76bsm4846996b3a.41.2024.06.26.02.26.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 02:26:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26796c31-339e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719393981; x=1719998781; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=XXz6aXhJsZgcEYOEbX6s9786rtHRcRnGYwCarZ7x8kg=;
        b=ghp2HC8jP3YJpKfy6wDtG5OIdhYfSnbfF7UEdIQ3O+Q0f0Pchx/8SQqXoREa3OPkpb
         9eE1BggBolDfUCnEzNvdUO6WMxwO8UF6kvtgE1jhFmWp06qYSOag2JJQ0c4xO63CS9st
         +4+soUlnTTSK+36sns/0//dYXdyy9sXywuqfqovt+KwtH+uJ7mgn9vL+BgCDvWfgBQPq
         7O/yuTL7GoFTt/AA+5Qkfdo8cmsso8nsR3kYS2Bcj+/eXqhWjGdGJACKEoILMk09xJxR
         XrJVm7AarwLiSptRP5e4D1VHcBYiQyCi90z9RtTn0a8ajgYK3zRa/8oQkFcqM0XhIS28
         YP2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719393981; x=1719998781;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=XXz6aXhJsZgcEYOEbX6s9786rtHRcRnGYwCarZ7x8kg=;
        b=omP49oCdJWAzswvBsU9rlTb5nALnKD3efgu4vkENGtnNWdBO7thsfUmReUgmHRK5sB
         PubWCnChMRFrxk7zuap7KnRTlaHFuBwuhKsggZS+ncFDSKxZ7oJLHagS0cGWyt9hmwuS
         XSExUFDbHYfOzEc9g8xCMeO81i2vSH/4A0AI8aPS6hUsjkyNwYMk8Mi6GSVluzdiq0Al
         0VrtgMNTIY9D92TfidLHdy6A9EXoajoJzoGAFNPlgx9ppoB3T+AyXrhScSJI9KMtIE/Y
         Hk62baMcp5cCDa6cldx+wlGJ/VGD2lPxclSlx25xbLvV64ebsjukRSireqmdmfeKYgUz
         bK+w==
X-Forwarded-Encrypted: i=1; AJvYcCVxQJ9xA2q/bX1H7voswl4259t//QAst0gSUhmYHlXUkf+nmBdz74hA4hDEtZHk1OX5mC3Cl73bh4bUqdaXsd9lmDkH3R0RKcW+mPTAODA=
X-Gm-Message-State: AOJu0YySkKRHqOndnNTpgVdJKA9QqWuoVkpYvn+9Swgf2mpF7/lJ9x6m
	pXGuvO5hJMlPZIh5TmkfzOB66VGJKBmL8qVuXojjQMC7Y8xxnRuh5Pph90ILWQ==
X-Google-Smtp-Source: AGHT+IHAbFXEpn+LcEAAgR4XwgXUXVQNISxVHUa/2+eT6GxqNTFk5cUG7i4RqvquedOk3aFYfyD0Mw==
X-Received: by 2002:a2e:b1cf:0:b0:2eb:efff:771e with SMTP id 38308e7fff4ca-2ec57983bcemr62128821fa.29.1719393981035;
        Wed, 26 Jun 2024 02:26:21 -0700 (PDT)
Message-ID: <a5009c3e-cba6-4737-aaff-c3b79a11169c@suse.com>
Date: Wed, 26 Jun 2024 11:26:11 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
 <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com>
 <797b00049612507d273facc581b2c2c5@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <797b00049612507d273facc581b2c2c5@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 11:20, Nicola Vetrini wrote:
> On 2024-06-26 11:06, Jan Beulich wrote:
>> On 25.06.2024 21:31, Nicola Vetrini wrote:
>>> On 2024-03-12 09:16, Jan Beulich wrote:
>>>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>>>> --- a/xen/arch/x86/Makefile
>>>>> +++ b/xen/arch/x86/Makefile
>>>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i
>>>>> $(src)/Makefile
>>>>>  	$(call filechk,asm-macros.h)
>>>>>
>>>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>>>>
>>>> This wants to use :=, I think - there's no reason to invoke the shell
>>>> ...
>>>
>>> I agree on this
>>>
>>>>
>>>>>  define filechk_asm-macros.h
>>>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>>      echo '#if 0'; \
>>>>>      echo '.if 0'; \
>>>>>      echo '#endif'; \
>>>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>>>>> -    echo '#define __ASM_MACROS_H__'; \
>>>>>      echo 'asm ( ".include \"$@\"" );'; \
>>>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>>>>      echo '#if 0'; \
>>>>>      echo '.endif'; \
>>>>>      cat $<; \
>>>>> -    echo '#endif'
>>>>> +    echo '#endif'; \
>>>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>>>>  endef
>>>>
>>>> ... three times while expanding this macro. Alternatively (to avoid
>>>> an unnecessary shell invocation when this macro is never expanded at
>>>> all) a shell variable inside the "define" above would want 
>>>> introducing.
>>>> Whether this 2nd approach is better depends on whether we anticipate
>>>> further uses of ARCHDIR.
>>>
>>> However here I'm not entirely sure about the meaning of this latter
>>> proposal.
>>> My proposal is the following:
>>>
>>> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
>>>
>>> in a suitably generic place (such as Kbuild.include or maybe
>>> xen/Makefile) as you suggested in subsequent patches that reused this
>>> pattern.
>>
>> If $(ARCHDIR) is going to be used elsewhere, then what you suggest is 
>> fine.
>> My "whether" in the earlier reply specifically left open for 
>> clarification
>> what the intentions with the variable are. The alternative I had 
>> described
>> makes sense only when $(ARCHDIR) would only ever be used inside the
>> filechk_asm-macros.h macro.
> 
> Yes, the intention is to reuse $(ARCHDIR) in other places, as you can 
> tell from the fact that subsequent patches replicate the same pattern. 
> This is going to save some duplication.
> The only matter left then is whether xen/Makefile (around line 250, just 
> after setting SRCARCH) would be better, or Kbuild.include. To me the 
> former place seems more natural, but I'm not totally sure.

Depends on where all the intended uses are. If they're all in xen/Makefile,
then having the macro just there is of course sufficient. Whereas when it's
needed elsewhere, instead of exporting it, putting it in Kbuild.include
would seem more natural / desirable to me.
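For illustration, the two shapes being weighed (a sketch only; the shell-variable-inside-define alternative from the earlier reply is shown for completeness, with the recipe body elided):

```make
# Option 1: a make variable, placed wherever all intended users can see
# it (xen/Makefile if uses stay local, Kbuild.include otherwise):
ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)

# Option 2: a shell variable local to the recipe, avoiding the
# $(shell ...) invocation when the macro is never expanded at all:
define filechk_asm-macros.h
    archdir=$$(echo $(SRCARCH) | tr a-z A-Z); \
    echo "#ifndef ASM_$${archdir}_ASM_MACROS_H"; \
    ...
endef
```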

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748583.1156336 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwp-0004lO-I5; Wed, 26 Jun 2024 09:28:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748583.1156336; Wed, 26 Jun 2024 09:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwp-0004lH-Ei; Wed, 26 Jun 2024 09:28:15 +0000
Received: by outflank-mailman (input) for mailman id 748583;
 Wed, 26 Jun 2024 09:28:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwo-0004l0-0u
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:14 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 68f45ac6-339e-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:28:13 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 5D0344EE0755;
 Wed, 26 Jun 2024 11:28:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68f45ac6-339e-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>
Subject: [XEN PATCH v3 01/12] automation/eclair: fix deviation of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:27:54 +0200
Message-Id: <bcbd654ad6a241f2d98b59fb9a879f75d436276c.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Escape the final dot of the comment and extend the search for a
fallthrough comment up to 2 lines after the last statement.

Fixes: a128d8da913b ("automation/eclair: add deviations for MISRA C:2012 Rule 16.3")
Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in v3:
- truncate to 12 digits.

Changes in v2:
- instead of introducing the hyphenated fallthrough, insert the missing escape.
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index c8bff0e057..9b994be63d 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -416,7 +416,7 @@ safe."
 -doc_end
 
 -doc_begin="Switch clauses ending with an explicit comment indicating the fallthrough intention are safe."
--config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through.? \\*/.*$,0..1))))"}
+-config=MC3R1.R16.3,reports+={safe, "any_area(end_loc(any_exp(text(^(?s).*/\\* [fF]all ?through\\.? \\*/.*$,0..2))))"}
 -doc_end
 
 -doc_begin="Switch statements having a controlling expression of enum type deliberately do not have a default case: gcc -Wall enables -Wswitch which warns (and breaks the build as we use -Werror) if one of the enum labels is missing from the switch."
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748585.1156346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwq-0004si-3U; Wed, 26 Jun 2024 09:28:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748585.1156346; Wed, 26 Jun 2024 09:28:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwp-0004rH-Tl; Wed, 26 Jun 2024 09:28:15 +0000
Received: by outflank-mailman (input) for mailman id 748585;
 Wed, 26 Jun 2024 09:28:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwo-0004l0-Mn
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:14 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 696bcd6f-339e-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:28:14 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 1C6724EE0756;
 Wed, 26 Jun 2024 11:28:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 696bcd6f-339e-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v3 02/12] x86/cpuid: use fallthrough pseudo keyword
Date: Wed, 26 Jun 2024 11:27:55 +0200
Message-Id: <64ecc15337a4d325dda610608f599dc99a6bc7f5.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The current comment making the fallthrough intention explicit does
not follow the agreed syntax: replace it with the pseudo keyword.

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/cpuid.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index a822e80c7e..2a777436ee 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -97,9 +97,8 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         if ( is_viridian_domain(d) )
             return cpuid_viridian_leaves(v, leaf, subleaf, res);
 
+        fallthrough;
         /*
-         * Fallthrough.
-         *
          * Intel reserve up until 0x4fffffff for hypervisor use.  AMD reserve
          * only until 0x400000ff, but we already use double that.
          */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748584.1156339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwp-0004oR-PU; Wed, 26 Jun 2024 09:28:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748584.1156339; Wed, 26 Jun 2024 09:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwp-0004nn-MT; Wed, 26 Jun 2024 09:28:15 +0000
Received: by outflank-mailman (input) for mailman id 748584;
 Wed, 26 Jun 2024 09:28:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwo-0004l5-Cz
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:14 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 689f4503-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:28:12 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 8001D4EE0738;
 Wed, 26 Jun 2024 11:28:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 689f4503-339e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v3 00/12] x86: address some violations of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:27:53 +0200
Message-Id: <cover.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch series fixes a missing escape in a deviation and addresses some
violations.

Federico Serafini (12):
  automation/eclair: fix deviation of MISRA C Rule 16.3
  x86/cpuid: use fallthrough pseudo keyword
  x86/domctl: address a violation of MISRA C Rule 16.3
  x86/vpmu: address violations of MISRA C Rule 16.3
  x86/traps: address violations of MISRA C Rule 16.3
  x86/mce: address violations of MISRA C Rule 16.3
  x86/hvm: address violations of MISRA C Rule 16.3
  x86/vpt: address a violation of MISRA C Rule 16.3
  x86/mm: add defensive return
  x86/mpparse: address a violation of MISRA C Rule 16.3
  x86/vPIC: address a violation of MISRA C Rule 16.3
  x86/vlapic: address a violation of MISRA C Rule 16.3

 automation/eclair_analysis/ECLAIR/deviations.ecl | 2 +-
 xen/arch/x86/cpu/mcheck/mce_amd.c                | 1 +
 xen/arch/x86/cpu/mcheck/mce_intel.c              | 2 ++
 xen/arch/x86/cpu/vpmu.c                          | 3 +++
 xen/arch/x86/cpu/vpmu_intel.c                    | 2 ++
 xen/arch/x86/cpuid.c                             | 3 +--
 xen/arch/x86/domctl.c                            | 1 +
 xen/arch/x86/hvm/emulate.c                       | 9 ++++++---
 xen/arch/x86/hvm/hvm.c                           | 5 +++++
 xen/arch/x86/hvm/hypercall.c                     | 1 +
 xen/arch/x86/hvm/irq.c                           | 1 +
 xen/arch/x86/hvm/pmtimer.c                       | 1 +
 xen/arch/x86/hvm/vlapic.c                        | 1 +
 xen/arch/x86/hvm/vpic.c                          | 1 +
 xen/arch/x86/hvm/vpt.c                           | 4 +++-
 xen/arch/x86/mm.c                                | 1 +
 xen/arch/x86/mpparse.c                           | 1 +
 xen/arch/x86/traps.c                             | 3 +++
 18 files changed, 35 insertions(+), 7 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748586.1156365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwr-0005Qb-Ab; Wed, 26 Jun 2024 09:28:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748586.1156365; Wed, 26 Jun 2024 09:28:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwr-0005Pp-5u; Wed, 26 Jun 2024 09:28:17 +0000
Received: by outflank-mailman (input) for mailman id 748586;
 Wed, 26 Jun 2024 09:28:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwq-0004l5-7n
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:16 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 69e90dc5-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:28:14 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id DB06D4EE0754;
 Wed, 26 Jun 2024 11:28:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69e90dc5-339e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v3 03/12] x86/domctl: address a violation of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:27:56 +0200
Message-Id: <cdc125a881dd8b37613e16ac81d712e729bbb4b9.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statement to address a violation of
MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/domctl.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9190e11faa..68b5b46d1a 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -517,6 +517,7 @@ long arch_do_domctl(
 
         default:
             ret = -ENOSYS;
+            break;
         }
         break;
     }
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748587.1156375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOws-0005js-Kb; Wed, 26 Jun 2024 09:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748587.1156375; Wed, 26 Jun 2024 09:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOws-0005jX-HK; Wed, 26 Jun 2024 09:28:18 +0000
Received: by outflank-mailman (input) for mailman id 748587;
 Wed, 26 Jun 2024 09:28:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwq-0004l5-Sp
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:16 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6a4dbec0-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:28:15 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id B95384EE0757;
 Wed, 26 Jun 2024 11:28:14 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6a4dbec0-339e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v3 04/12] x86/vpmu: address violations of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:27:57 +0200
Message-Id: <b42004216d547e24f6537450a1c98176a821f704.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of MISRA C Rule
16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v3:
- addressed all violations of R16.3 in vpmu_intel.c
---
 xen/arch/x86/cpu/vpmu.c       | 3 +++
 xen/arch/x86/cpu/vpmu_intel.c | 2 ++
 2 files changed, 5 insertions(+)

diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index a7bc0cd1fc..b2ba999412 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -663,6 +663,8 @@ long do_xenpmu_op(
 
         if ( pmu_params.version.maj != XENPMU_VER_MAJ )
             return -EINVAL;
+
+        break;
     }
 
     switch ( op )
@@ -776,6 +778,7 @@ long do_xenpmu_op(
 
     default:
         ret = -EINVAL;
+        break;
     }
 
     return ret;
diff --git a/xen/arch/x86/cpu/vpmu_intel.c b/xen/arch/x86/cpu/vpmu_intel.c
index cd414165df..26dd3a9358 100644
--- a/xen/arch/x86/cpu/vpmu_intel.c
+++ b/xen/arch/x86/cpu/vpmu_intel.c
@@ -666,6 +666,7 @@ static int cf_check core2_vpmu_do_wrmsr(unsigned int msr, uint64_t msr_content)
 
             xen_pmu_cntr_pair[tmp].control = msr_content;
         }
+        break;
     }
 
     if ( type != MSR_TYPE_GLOBAL )
@@ -713,6 +714,7 @@ static int cf_check core2_vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
             break;
         default:
             rdmsrl(msr, *msr_content);
+            break;
         }
     }
     else if ( msr == MSR_IA32_MISC_ENABLE )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748588.1156382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwt-0005pa-7h; Wed, 26 Jun 2024 09:28:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748588.1156382; Wed, 26 Jun 2024 09:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOws-0005pP-TI; Wed, 26 Jun 2024 09:28:18 +0000
Received: by outflank-mailman (input) for mailman id 748588;
 Wed, 26 Jun 2024 09:28:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwr-0004l0-9m
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:17 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b1b2581-339e-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:28:16 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 087084EE0738;
 Wed, 26 Jun 2024 11:28:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b1b2581-339e-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v3 06/12] x86/mce: address violations of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:27:59 +0200
Message-Id: <7c1a5865fa00b873296ba0870d620070371bcd22.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statements to address violations of
MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/cpu/mcheck/mce_amd.c   | 1 +
 xen/arch/x86/cpu/mcheck/mce_intel.c | 2 ++
 2 files changed, 3 insertions(+)

diff --git a/xen/arch/x86/cpu/mcheck/mce_amd.c b/xen/arch/x86/cpu/mcheck/mce_amd.c
index 3318b8204f..4f06a3153b 100644
--- a/xen/arch/x86/cpu/mcheck/mce_amd.c
+++ b/xen/arch/x86/cpu/mcheck/mce_amd.c
@@ -201,6 +201,7 @@ static void mcequirk_amd_apply(enum mcequirk_amd_flags flags)
 
     default:
         ASSERT(flags == MCEQUIRK_NONE);
+        break;
     }
 }
 
diff --git a/xen/arch/x86/cpu/mcheck/mce_intel.c b/xen/arch/x86/cpu/mcheck/mce_intel.c
index dd812f4b8a..9574dedbfd 100644
--- a/xen/arch/x86/cpu/mcheck/mce_intel.c
+++ b/xen/arch/x86/cpu/mcheck/mce_intel.c
@@ -896,6 +896,8 @@ static void intel_init_ppin(const struct cpuinfo_x86 *c)
             ppin_msr = 0;
         else if ( c == &boot_cpu_data )
             ppin_msr = MSR_PPIN;
+
+        break;
     }
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748589.1156387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwt-0005sV-JY; Wed, 26 Jun 2024 09:28:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748589.1156387; Wed, 26 Jun 2024 09:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwt-0005qp-7o; Wed, 26 Jun 2024 09:28:19 +0000
Received: by outflank-mailman (input) for mailman id 748589;
 Wed, 26 Jun 2024 09:28:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwr-0004l5-HO
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:17 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6ab1bf4f-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:28:16 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 6178F4EE0758;
 Wed, 26 Jun 2024 11:28:15 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ab1bf4f-339e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v3 05/12] x86/traps: address violations of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:27:58 +0200
Message-Id: <e7aea6bacb9c914a06a929dfe3606f7cc360588f.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a break statement or the pseudo-keyword fallthrough to address
violations of MISRA C Rule 16.3: "An unconditional `break' statement
shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v3:
- use break instead of fallthrough.
---
 xen/arch/x86/traps.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 9906e874d5..d62598a4c2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 }
 
@@ -1748,6 +1749,7 @@ static void io_check_error(const struct cpu_user_regs *regs)
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_io_error);
+        break;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
@@ -1768,6 +1770,7 @@ static void unknown_nmi_error(const struct cpu_user_regs *regs,
     {
     case 'd': /* 'dom0' */
         nmi_hwdom_report(_XEN_NMIREASON_unknown);
+        break;
     case 'i': /* 'ignore' */
         break;
     default:  /* 'fatal' */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748590.1156395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwu-00066X-Fo; Wed, 26 Jun 2024 09:28:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748590.1156395; Wed, 26 Jun 2024 09:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwu-00064P-0T; Wed, 26 Jun 2024 09:28:20 +0000
Received: by outflank-mailman (input) for mailman id 748590;
 Wed, 26 Jun 2024 09:28:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOws-0004l0-4Z
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:18 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6b8dcf0b-339e-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:28:17 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id B85024EE0757;
 Wed, 26 Jun 2024 11:28:16 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b8dcf0b-339e-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v3 07/12] x86/hvm: address violations of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:28:00 +0200
Message-Id: <87cfe4d3e75c3a7d4174393a31aaaf80e0e60633.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 16.3 states that "An unconditional `break' statement shall
terminate every switch-clause".

Add the pseudo-keyword fallthrough or the missing break statements
to address violations of the rule.

As a defensive measure, return -EOPNOTSUPP in case an unreachable
return statement is reached.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v3:
- squashed here modifications of pmtimer.c;
- no blank line after fallthrough;
- better indentation of fallthrough.
---
 xen/arch/x86/hvm/emulate.c   | 9 ++++++---
 xen/arch/x86/hvm/hvm.c       | 5 +++++
 xen/arch/x86/hvm/hypercall.c | 1 +
 xen/arch/x86/hvm/irq.c       | 1 +
 xen/arch/x86/hvm/pmtimer.c   | 1 +
 5 files changed, 14 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 02e378365b..f5dd08f510 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -339,7 +339,7 @@ static int hvmemul_do_io(
     }
     case X86EMUL_UNIMPLEMENTED:
         ASSERT_UNREACHABLE();
-        /* Fall-through */
+        fallthrough;
     default:
         BUG();
     }
@@ -2674,6 +2674,7 @@ static int _hvm_emulate_one(struct hvm_emulate_ctxt *hvmemul_ctxt,
 
     default:
         ASSERT_UNREACHABLE();
+        break;
     }
 
     if ( hvmemul_ctxt->ctxt.retire.singlestep )
@@ -2764,6 +2765,7 @@ int hvm_emulate_one_mmio(unsigned long mfn, unsigned long gla)
         /* fallthrough */
     default:
         hvm_emulate_writeback(&ctxt);
+        break;
     }
 
     return rc;
@@ -2798,11 +2800,12 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
         hvio->mmio_insn_bytes = sizeof(hvio->mmio_insn);
         memcpy(hvio->mmio_insn, curr->arch.vm_event->emul.insn.data,
                hvio->mmio_insn_bytes);
+        fallthrough;
     }
-    /* Fall-through */
     default:
         ctx.set_context = (kind == EMUL_KIND_SET_CONTEXT_DATA);
         rc = hvm_emulate_one(&ctx, VIO_no_completion);
+        break;
     }
 
     switch ( rc )
@@ -2818,7 +2821,7 @@ void hvm_emulate_one_vm_event(enum emul_kind kind, unsigned int trapnr,
     case X86EMUL_UNIMPLEMENTED:
         if ( hvm_monitor_emul_unimplemented() )
             return;
-        /* fall-through */
+        fallthrough;
     case X86EMUL_UNHANDLEABLE:
         hvm_dump_emulation_state(XENLOG_G_DEBUG, "Mem event", &ctx, rc);
         hvm_inject_hw_exception(trapnr, errcode);
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f4b627b1f..d7f195ba9a 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4919,6 +4919,8 @@ static int do_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
  out:
@@ -5020,6 +5022,8 @@ static int compat_altp2m_op(
 
     default:
         ASSERT_UNREACHABLE();
+        rc = -EOPNOTSUPP;
+        break;
     }
 
     return rc;
@@ -5283,6 +5287,7 @@ void hvm_get_segment_register(struct vcpu *v, enum x86_segment seg,
          * %cs and %tr are unconditionally present.  SVM ignores these present
          * bits and will happily run without them set.
          */
+        fallthrough;
     case x86_seg_cs:
         reg->p = 1;
         break;
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 7fb3136f0c..2271afe02a 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -111,6 +111,7 @@ int hvm_hypercall(struct cpu_user_regs *regs)
     case 8:
         eax = regs->rax;
         /* Fallthrough to permission check. */
+        fallthrough;
     case 4:
     case 2:
         if ( currd->arch.monitor.guest_request_userspace_enabled &&
diff --git a/xen/arch/x86/hvm/irq.c b/xen/arch/x86/hvm/irq.c
index 210cebb0e6..1eab44defc 100644
--- a/xen/arch/x86/hvm/irq.c
+++ b/xen/arch/x86/hvm/irq.c
@@ -282,6 +282,7 @@ static void hvm_set_callback_irq_level(struct vcpu *v)
             __hvm_pci_intx_assert(d, pdev, pintx);
         else
             __hvm_pci_intx_deassert(d, pdev, pintx);
+        break;
     default:
         break;
     }
diff --git a/xen/arch/x86/hvm/pmtimer.c b/xen/arch/x86/hvm/pmtimer.c
index 97099ac305..87a7a01c9f 100644
--- a/xen/arch/x86/hvm/pmtimer.c
+++ b/xen/arch/x86/hvm/pmtimer.c
@@ -185,6 +185,7 @@ static int cf_check handle_evt_io(
                 gdprintk(XENLOG_WARNING, 
                          "Bad ACPI PM register write: %x bytes (%x) at %x\n", 
                          bytes, *val, port);
+                break;
             }
         }
         /* Fix up the SCI state to match the new register state */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748591.1156398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwu-0006Eq-SD; Wed, 26 Jun 2024 09:28:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748591.1156398; Wed, 26 Jun 2024 09:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwu-0006Cz-Ie; Wed, 26 Jun 2024 09:28:20 +0000
Received: by outflank-mailman (input) for mailman id 748591;
 Wed, 26 Jun 2024 09:28:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwt-0004l0-EE
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:19 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6c632e85-339e-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:28:19 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 264DB4EE0754;
 Wed, 26 Jun 2024 11:28:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c632e85-339e-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v3 09/12] x86/mm: add defensive return
Date: Wed, 26 Jun 2024 11:28:02 +0200
Message-Id: <acb26329a980809dda100825f52b05d0cc295315.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a defensive return statement at the end of an unreachable
default case. Besides improving safety, this meets the requirements
to deviate a violation of MISRA C Rule 16.3: "An unconditional `break'
statement shall terminate every switch-clause".

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v3:
- do not return 0 (success).

At least this version returns an error code, but I am not sure
which one to use.
---
 xen/arch/x86/mm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 648d6dd475..a1e28b3360 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -916,6 +916,7 @@ get_page_from_l1e(
                 return 0;
             default:
                 ASSERT_UNREACHABLE();
+                return -EPERM;
             }
         }
         else if ( l1f & _PAGE_RW )
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748592.1156407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwv-0006Ql-Lu; Wed, 26 Jun 2024 09:28:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748592.1156407; Wed, 26 Jun 2024 09:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwv-0006OF-5b; Wed, 26 Jun 2024 09:28:21 +0000
Received: by outflank-mailman (input) for mailman id 748592;
 Wed, 26 Jun 2024 09:28:19 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwt-0004l5-LL
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:19 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6bf588f8-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:28:18 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 7E24C4EE0755;
 Wed, 26 Jun 2024 11:28:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bf588f8-339e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v3 08/12] x86/vpt: address a violation of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:28:01 +0200
Message-Id: <453ef39f5a2a1871d8b0c74d921ed6a413b179b4.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the pseudo-keyword fallthrough to meet the requirements to deviate
a violation of MISRA C Rule 16.3 ("An unconditional `break'
statement shall terminate every switch-clause").

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
---
Changes in v3:
- better indentation of fallthrough.
---
 xen/arch/x86/hvm/vpt.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vpt.c b/xen/arch/x86/hvm/vpt.c
index e1d6845a28..ab06bea33e 100644
--- a/xen/arch/x86/hvm/vpt.c
+++ b/xen/arch/x86/hvm/vpt.c
@@ -118,9 +118,11 @@ static int pt_irq_masked(struct periodic_time *pt)
             return 0;
 
         gsi = hvm_isa_irq_to_gsi(pt->irq);
+
+        /* Fallthrough to check if the interrupt is masked on the IO APIC. */
+        fallthrough;
     }
 
-    /* Fallthrough to check if the interrupt is masked on the IO APIC. */
     case PTSRC_ioapic:
     {
         int mask = vioapic_get_mask(v->domain, gsi);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748593.1156420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOww-0006nM-R3; Wed, 26 Jun 2024 09:28:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748593.1156420; Wed, 26 Jun 2024 09:28:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOww-0006kw-EK; Wed, 26 Jun 2024 09:28:22 +0000
Received: by outflank-mailman (input) for mailman id 748593;
 Wed, 26 Jun 2024 09:28:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwu-0004l0-3u
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:20 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 6ccb1ce6-339e-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:28:19 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id DFE424EE0759;
 Wed, 26 Jun 2024 11:28:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6ccb1ce6-339e-11ef-90a3-e314d9c70b13
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v3 10/12] x86/mpparse: address a violation of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:28:03 +0200
Message-Id: <84bd2be6bfb12bc6bf87ecc0f195e2f74fa5f45f.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a missing break statement to address a violation of
MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mpparse.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449..306d8ed97a 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -544,6 +544,7 @@ static inline void __init construct_default_ISA_mptable(int mpc_default_type)
 		case 4:
 		case 7:
 			memcpy(bus.mpc_bustype, "MCA   ", 6);
+			break;
 	}
 	MP_bus_info(&bus);
 	if (mpc_default_type > 4) {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748597.1156429 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwx-0006zL-Qt; Wed, 26 Jun 2024 09:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748597.1156429; Wed, 26 Jun 2024 09:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwx-0006vX-9z; Wed, 26 Jun 2024 09:28:23 +0000
Received: by outflank-mailman (input) for mailman id 748597;
 Wed, 26 Jun 2024 09:28:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOwv-0004l5-OB
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:21 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6d34cf97-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:28:20 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 8A8134EE0755;
 Wed, 26 Jun 2024 11:28:19 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d34cf97-339e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v3 11/12] x86/vPIC: address a violation of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:28:04 +0200
Message-Id: <68c8dbc10a0e78caae75d86fef571b1c1622a468.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the pseudo-keyword fallthrough to meet the requirements to deviate
a violation of MISRA C Rule 16.3: "An unconditional `break' statement
shall terminate every switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/vpic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vpic.c b/xen/arch/x86/hvm/vpic.c
index 7c3b5c7254..6427b08086 100644
--- a/xen/arch/x86/hvm/vpic.c
+++ b/xen/arch/x86/hvm/vpic.c
@@ -309,6 +309,7 @@ static void vpic_ioport_write(
             if ( !(vpic->init_state & 8) )
                 break; /* CASCADE mode: wait for write to ICW3. */
             /* SNGL mode: fall through (no ICW3). */
+            fallthrough;
         case 2:
             /* ICW3 */
             vpic->init_state++;
-- 
2.34.1
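For readers outside the Xen tree, the pattern deviated by the patch above can be sketched as follows. The stand-in `fallthrough` definition and the `icw_stage()` helper are illustrative assumptions, not hypervisor code (Xen's real definition lives in its compiler headers and may differ):

```c
/* Minimal stand-in for Xen's fallthrough pseudo-keyword: it tells both
 * the compiler and MISRA tooling that the missing break is deliberate. */
#if defined(__GNUC__) && (__GNUC__ >= 7)
#define fallthrough __attribute__((__fallthrough__))
#else
#define fallthrough do { } while (0)
#endif

/* Hypothetical model of the vPIC init-state machine shown in the diff. */
static int icw_stage(int init_state, int sngl_mode)
{
    int stage = 0;

    switch (init_state)
    {
    case 1:
        if (!sngl_mode)
        {
            stage = 1;
            break;      /* CASCADE mode: wait for a write to ICW3. */
        }
        /* SNGL mode: no ICW3, deliberately continue into case 2. */
        fallthrough;
    case 2:
        stage = 2;      /* ICW3 (or skipped straight to it). */
        break;
    }

    return stage;
}
```

With the pseudo-keyword in place, `-Wimplicit-fallthrough` stays quiet and the intent is machine-checkable rather than comment-only.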



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:28:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:28:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748598.1156439 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwz-0007K0-0O; Wed, 26 Jun 2024 09:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748598.1156439; Wed, 26 Jun 2024 09:28:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMOwy-0007GT-Dx; Wed, 26 Jun 2024 09:28:24 +0000
Received: by outflank-mailman (input) for mailman id 748598;
 Wed, 26 Jun 2024 09:28:23 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P08s=N4=bugseng.com=federico.serafini@srs-se1.protection.inumbo.net>)
 id 1sMOww-0004l5-M5
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:28:22 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6db3eddd-339e-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 11:28:21 +0200 (CEST)
Received: from truciolo.bugseng.com (unknown [78.209.199.41])
 by support.bugseng.com (Postfix) with ESMTPSA id 3F0B44EE073D;
 Wed, 26 Jun 2024 11:28:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6db3eddd-339e-11ef-b4bb-af5377834399
From: Federico Serafini <federico.serafini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: consulting@bugseng.com,
	Federico Serafini <federico.serafini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [XEN PATCH v3 12/12] x86/vlapic: address a violation of MISRA C Rule 16.3
Date: Wed, 26 Jun 2024 11:28:05 +0200
Message-Id: <8cc3a42c2c73b21bb08d9140b274a7350c3b2d0b.1719383180.git.federico.serafini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
References: <cover.1719383180.git.federico.serafini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add missing break statement to address a violation of MISRA C
Rule 16.3: "An unconditional `break' statement shall terminate every
switch-clause".

No functional change.

Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/vlapic.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9cfc82666a..2ec9594271 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -367,6 +367,7 @@ static void vlapic_accept_irq(struct vcpu *v, uint32_t icr_low)
         gdprintk(XENLOG_ERR, "TODO: unsupported delivery mode in ICR %x\n",
                  icr_low);
         domain_crash(v->domain);
+        break;
     }
 }
 
-- 
2.34.1
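The rule this patch satisfies is easy to illustrate in isolation: every switch-clause, including the final one, must end in an unconditional `break`, so that later reordering or extension of the clauses cannot introduce a silent fall-through. The enum and `classify()` helper below are invented for illustration and are not Xen code:

```c
/* Hypothetical delivery-mode classifier, MISRA C Rule 16.3 compliant. */
enum delivery_mode { MODE_FIXED, MODE_NMI, MODE_UNSUPPORTED };

static int classify(enum delivery_mode m)
{
    int rc;

    switch (m)
    {
    case MODE_FIXED:
        rc = 0;
        break;
    case MODE_NMI:
        rc = 1;
        break;
    default:
        rc = -1;
        break;  /* Required by Rule 16.3 even as the last clause. */
    }

    return rc;
}
```

The trailing `break` changes nothing functionally, exactly as the commit message notes; it only removes the rule violation.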



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:49:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:49:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748676.1156466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPHa-0007Ip-Ny; Wed, 26 Jun 2024 09:49:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748676.1156466; Wed, 26 Jun 2024 09:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPHa-0007Ii-K6; Wed, 26 Jun 2024 09:49:42 +0000
Received: by outflank-mailman (input) for mailman id 748676;
 Wed, 26 Jun 2024 09:49:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMPHZ-0007Ic-ON
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:49:41 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 67c76f26-33a1-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:49:40 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ebec2f11b7so71870591fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 02:49:40 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7066a0d913dsm7549558b3a.112.2024.06.26.02.49.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 02:49:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67c76f26-33a1-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719395379; x=1720000179; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=hjZ8gZ8VXhpK2mb2cf5pt8UpBO9uKk+k7+sLzgIR+Ew=;
        b=RCSWj5W2gnRjk2fp1evYXyHe6lDEpYv1peo1rGIYnU2oJgKNui0Z8+Wq/rGClAlXus
         IJQbYfgRPBmQQvcDJJ+GWq3eZV4ZRZNBqD4SxbeUVUZU5hbqQ3S99jsRaq615Pj6zeUM
         82NRtJe0Pj6Um2sVomvRITmVLy4Woo389LYtN2bdq+WOKAZgz2GtS+ledR02LC1eg3SJ
         aHwhIpGIWpcCuRVxd4kijl3Z7fWL7q/FufQJQdDsq0c7x+kugn/PtGGUpyTxLKFVqSdQ
         SIP1d6e/ZLpQodW0VySWJ5/inLDQwOhF4x4VxTfk2rGkcD3ec+6kbOLy+ZSgav3mchDh
         NUug==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719395379; x=1720000179;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=hjZ8gZ8VXhpK2mb2cf5pt8UpBO9uKk+k7+sLzgIR+Ew=;
        b=QQZDTt9A5w5yPCrGrjiSNvljIzBcgbTuDuwbUXBeH2rYhbCUN5O/c0DU/7ZoUK5kkZ
         /CGAdXO3MCXL/NlBJm/FsSbeyQaES5RtS3aXZPKpfnlJ+81hKRqVY6UnTvsvdwKvgF1f
         FTnG2tymVOqolclLL9j8ovh2us6m+S42cUEeQQXqqIdMUgCPSE0SHhNv3T1FdbaTJYpc
         du0UlJZpdz+1spSw/iiJAOPocNaJPTW1eWakVC+7+5K4640AWXOvYIxZwKsk0JpK0N6q
         fYfB1Poj2c1wsSpWZET2CYEwLCA4K3h66NLY7s3mPxLuvJ1H8j7x1xFXv3xAFmWwBHuY
         /ZxA==
X-Forwarded-Encrypted: i=1; AJvYcCV/sLdq4j6MnvAKZs/q/J+2bKU9SKgHikyD5ciUWLeE0bvJ6bH+eL6Q4PHM+51pTh31QKuarq9HYwFkpWZP8WOx2C+3p9OZVncbYmz3QVQ=
X-Gm-Message-State: AOJu0Ywm4lrIN4QaAmnv45Sr5aEAa3G+VJsES871sFRPnF8xYhtSxaI+
	vW2c5eMJaPyiUd4wx8bwFunVFXaoftwlYQ9x2TxwgorMD6vjxq/z69UT3qRAoA==
X-Google-Smtp-Source: AGHT+IEsZFr2IZgZiVfRy9tm4jrdGJABiS5dlyyiQYcyvfT6uQ+TAQ+HZndfIa+480lNiQQw9OL7rw==
X-Received: by 2002:a2e:730b:0:b0:2ec:5a25:16e9 with SMTP id 38308e7fff4ca-2ec5b3884b4mr55726391fa.34.1719395379060;
        Wed, 26 Jun 2024 02:49:39 -0700 (PDT)
Message-ID: <fc04af37-6ef6-4c91-a625-d541f9f9bfe5@suse.com>
Date: Wed, 26 Jun 2024 11:49:28 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/6] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more
 efficient
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-2-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240625190719.788643-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:07, Andrew Cooper wrote:
> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
> the CPU and dirties it even if there's nothing outstanding, but the final
> for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
> atomic updates to the same IRR word.

The way it's worded (grammar-wise), it appears as if the 2nd issue is missing
from this description. Perhaps you meant to break the sentence at "but" (and
re-word what follows a little)? That "but" feels a little unmotivated to me
(as a non-native speaker, i.e. this may not mean anything) anyway. Or maybe
something simply got lost in the middle?

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2321,18 +2321,63 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>  
>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>  {
> -    struct vlapic *vlapic = vcpu_vlapic(v);
> -    unsigned int group, i;
> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
> +    union {
> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
> +    } vec;
> +    uint32_t *irr;
> +    bool on;
>  
> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
> +    /*
> +     * The PIR is a contended cacheline which bounces between the CPU and
> +     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
> +     * express the same on the CPU side, so care has to be taken.
> +     *
> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
> +     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
> +     */
> +    if ( !pi_test_on(desc) )
>          return;
>  
> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
> +    /*
> +     * Second, if the plain read said that ON was set, we must clear it with
> +     * an atomic action.  This will bring the cacheline to Exclusive on the
> +     * CPU.
> +     *
> +     * This should always succeed because no one else should be playing with
> +     * the PIR behind our back, but assert so just in case.
> +     */

Isn't "playing with" more strict than what is the case, and what we need
here? Aiui nothing should _clear this bit_ behind our back, while PIR
covers more than just this one bit, and the bit may also become reset
immediately after we cleared it.

> +    on = pi_test_and_clear_on(desc);
> +    ASSERT(on);
>  
> -    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
> -        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
> +    /*
> +     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
> +     * (via ON being set) that at least one vector is pending too.

This isn't quite correct aiui, and hence perhaps better not to state it
exactly like this: While we're ...

>  Atomically
> +     * read and clear the entire pending bitmap as fast as we can, to reduce the
> +     * window that the IOMMU may steal the cacheline back from us.
> +     *
> +     * It is a performance concern, but not a correctness concern.  If the
> +     * IOMMU does steal the cacheline back, we'll just wait to get it back
> +     * again.
> +     */
> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
> +        vec._64[i] = xchg(&desc->pir[i], 0);

... still ahead of or in this loop, new bits may become set which we then
may handle right away. The "on" indication on the next entry into this
logic may then be misleading, as we may not find any set bit.

All the code changes look good to me, otoh.

Jan

> +    /*
> +     * Finally, merge the pending vectors into IRR.  The IRR register is
> +     * scattered in memory, so we have to do this 32 bits at a time.
> +     */
> +    irr = (uint32_t *)&vcpu_vlapic(v)->regs->data[APIC_IRR];
> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._32); ++i )
> +    {
> +        if ( !vec._32[i] )
> +            continue;
> +
> +        asm ( "lock or %[val], %[irr]"
> +              : [irr] "+m" (irr[i * 0x10])
> +              : [val] "r" (vec._32[i]) );
> +    }
>  }
>  
>  static bool cf_check vmx_test_pir(const struct vcpu *v, uint8_t vec)
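The read-and-clear loop under discussion can be mimicked in portable C11. This is a hedged user-space sketch under stated assumptions: the names are invented, and the destination bitmap is contiguous here, whereas the real IRR is scattered across the APIC page and the real PIR lives in a `pi_desc` shared with the IOMMU:

```c
#include <stdatomic.h>
#include <stdint.h>

#define NR_VECTORS 256
#define NR_WORDS   (NR_VECTORS / 64)

/* Drain a 256-bit pending bitmap with one atomic exchange per 64-bit
 * word, then merge into a destination bitmap.  Each exchange claims the
 * word exclusively, mirroring the patch's
 *     vec._64[i] = xchg(&desc->pir[i], 0);
 * loop: bits a producer sets after the exchange are simply left for the
 * next invocation, a performance concern but not a correctness one. */
static uint64_t drain_and_merge(_Atomic uint64_t pir[NR_WORDS],
                                uint64_t irr[NR_WORDS])
{
    uint64_t found = 0;

    for (unsigned int i = 0; i < NR_WORDS; ++i)
    {
        /* Atomic read-and-clear of one word. */
        uint64_t pending = atomic_exchange(&pir[i], 0);

        if ( !pending )
            continue;       /* Skip empty words: O(words), not O(bits). */

        irr[i] |= pending;  /* Contiguous here; scattered in a real IRR. */
        found  |= pending;
    }

    return found;
}
```

The word-granular skip is also what reduces the O(256) per-bit walk of the old `for_each_set_bit()` to a handful of word operations.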



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 09:57:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 09:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748684.1156476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPP8-0000QO-Jv; Wed, 26 Jun 2024 09:57:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748684.1156476; Wed, 26 Jun 2024 09:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPP8-0000QH-GB; Wed, 26 Jun 2024 09:57:30 +0000
Received: by outflank-mailman (input) for mailman id 748684;
 Wed, 26 Jun 2024 09:57:29 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMPP7-0000QB-13
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 09:57:29 +0000
Received: from mail-lj1-x232.google.com (mail-lj1-x232.google.com
 [2a00:1450:4864:20::232])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 7ec9c3e7-33a2-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 11:57:27 +0200 (CEST)
Received: by mail-lj1-x232.google.com with SMTP id
 38308e7fff4ca-2ec3f875e68so72171851fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 02:57:27 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c8d8091dc9sm1230678a91.53.2024.06.26.02.57.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 02:57:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ec9c3e7-33a2-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719395847; x=1720000647; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=N3vczh/MDS73IKdcjDeqjM+rc5bYV5Lpeh4SgTSwqY4=;
        b=b9Gwvsuw+2uiBdHPfYSi3e9Gu4HVqY8thfCQilK4uZ+JV2dY1h9+DtfFq6O4awsyNh
         nQG4pazhi5ZSjINcJLRNMJwL+rWqZ/of2UnXoMAPkklVVtNnYxFCv+9Cw6lIpU68yT/Q
         a9N/Dqkc++oT5u77G8I0xcskH/ZJCcSgsZVbH3KuYg5m/GyrCfYU4wzNWXYPvP9O9NPv
         sO8UQ4VVCo9HRln9PvafaWRfAs/dv327KaND6J12ZvhASvdrMJdMURtuutWQeTEAYMjH
         rkZZ9lNIaGNLgVMkcagRrVjSklKa1MpgQmyRYOkLn3iBaM8TX7CNYIhHzkcz+znB2rAj
         MFew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719395847; x=1720000647;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=N3vczh/MDS73IKdcjDeqjM+rc5bYV5Lpeh4SgTSwqY4=;
        b=YTy5LWOGw3ssSA7qcqcKwT2eHPoPwqsDz5/3eKESprhSS5jMW4yWusJkxmlPp9oqdE
         cqVqvmDB+gk/fYa4xLDsyx9sedtpML6j/Hz4fXuvk4s6oFc5vPRKJsI1NUC1j34UzDuq
         HuMUcnO2NpPOEB6DG7QCo1N4fdLuCVbMV0rUANq7KT4cNZ3HC+mE0iL0JEbNipv0pYII
         aPFNgy+phDF2zgn+5t6XSXOBh82tfgg4WSx9Tkm9oEZ3f/oexO3san4/7ln6gR+EE2La
         /yzF0XM6wIMknmQDqHfTsAQif/QsVs/L7ScTYqJv5zeb+qtUinkR/xuKaqhoAJ/2vZgU
         k6uA==
X-Forwarded-Encrypted: i=1; AJvYcCWwSsC8o+9qrrrlk5+VdW7oUmdVyXE785NQld3fOT4OGD81BnSXLjFUT1Lof7tzztJdw+EfXxGbmi2y7ew/NGohM2wR3Ptqxj0jg63mw78=
X-Gm-Message-State: AOJu0YyTfDcJZPeqAmoM60RQaNtYbIxDf7t+2ylskF4IImiFWoRd/keM
	JaFUZKTd3jSHGB835I7CCGpK2l6VGQlhy3lCJr5TOjJyopihP3TYYtDI4LQuWA==
X-Google-Smtp-Source: AGHT+IHhE5FZhWIfWXbrfML61WT5V1pjJGJP7MeitxmwtWhovQIKnHpDGObDPyhqq857DaHTu90vQg==
X-Received: by 2002:a2e:a17a:0:b0:2ec:42db:96a2 with SMTP id 38308e7fff4ca-2ec5b38b432mr57959711fa.29.1719395847206;
        Wed, 26 Jun 2024 02:57:27 -0700 (PDT)
Message-ID: <3899bc21-c6d5-4643-9c6d-4a01af37cfd6@suse.com>
Date: Wed, 26 Jun 2024 11:57:17 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 00/12] x86: address some violations of MISRA C Rule
 16.3
To: Federico Serafini <federico.serafini@bugseng.com>
Cc: consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org
References: <cover.1719383180.git.federico.serafini@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <cover.1719383180.git.federico.serafini@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 11:27, Federico Serafini wrote:
> This patch series fixes a missing escape in a deviation and addresses some
> violations.
> 
> Federico Serafini (12):
>   automation/eclair: fix deviation of MISRA C Rule 16.3
>   x86/cpuid: use fallthrough pseudo keyword
>   x86/domctl: address a violation of MISRA C Rule 16.3
>   x86/vpmu: address violations of MISRA C Rule 16.3
>   x86/traps: address violations of MISRA C Rule 16.3
>   x86/mce: address violations of MISRA C Rule 16.3
>   x86/hvm: address violations of MISRA C Rule 16.3

Just a remark, no strict request to make further re-arrangements: Looks like
what was patch 11 in v2 was now folded into this patch. Yet then why were
other HVM parts left separate:

>   x86/vpt: address a violation of MISRA C Rule 16.3

This and ...

>   x86/mm: add defensive return
>   x86/mpparse: address a violation of MISRA C Rule 16.3
>   x86/vPIC: address a violation of MISRA C Rule 16.3
>   x86/vlapic: address a violation of MISRA C Rule 16.3

... these two. In general I'd expect splitting / keeping together to be
done consistently within a series, unless of course there's something that
wants keeping separate for other than purely mechanical reasons.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:04:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:04:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748689.1156486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPVR-0002ea-83; Wed, 26 Jun 2024 10:04:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748689.1156486; Wed, 26 Jun 2024 10:04:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPVR-0002eT-4s; Wed, 26 Jun 2024 10:04:01 +0000
Received: by outflank-mailman (input) for mailman id 748689;
 Wed, 26 Jun 2024 10:03:59 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMPVP-0002eN-Cs
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 10:03:59 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 66c76302-33a3-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 12:03:57 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2eaae2a6dc1so101917571fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 03:03:56 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3d5a87sm95929905ad.203.2024.06.26.03.03.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 03:03:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66c76302-33a3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719396236; x=1720001036; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ETfSu0w1XQtkfIerrXgTr9QH2vryxpdAr+oK48A+6mg=;
        b=ZWwdP4P/BFh10XfwiPQA1Spy5UVM1u7ju/JpPwcrGXt3jIPkdV24la0Wq8+8b/xgLr
         xkNVTI7PGwk2vu5jBoZfqx85XaT6UB/JelNGEsXjkBMVb+I/VMVfZLVB6525Nt7b49PU
         WyCOkyKDOzWscMEo/rvgLqHWieo3JfYrScYuzHErYOMEu9CgnDjjguQzOOKBLWDBXsj7
         XP2oFoTfWgic6tv4/Wdfre3NPTVgXEJyhZUs1MJymOCrSSoJs4ehmzo8XRFhmHZuf9KC
         x/5tTHkolJJJEHYfX7XvnH+1iyBOxUNnjw0sE+XK2L6aM4TEUw8q0nqLFXToEoHlC7h7
         du5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719396236; x=1720001036;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ETfSu0w1XQtkfIerrXgTr9QH2vryxpdAr+oK48A+6mg=;
        b=VZGbdH4VxsTFcHKTt2fFWaMN5PXa1HxBt7wj6aXDZ8A+ST8JK7JDh9IZZ0LQ4QyL/+
         rwXgSjVoRcavUdmuWWRt5N4I64AgQ+JmTCVGI7QQhIUXbrcLCR30NDO2ltiepznk1w2O
         OI3IBIeprvk0wzMab+7byLtJVPWf2BlXjxNMEdz76Os2XMDe93MVrMRF26p+khnw8BMx
         poRUPnMSB+UmJ1nR8bSvqe7s0Qve3CRvZSuz3Qtv2Bor8L+Rw9llBI7sjr3Gx79bxLGb
         zWHz0uJ33qfSm/WiJSaYripYSWvBRAPWNl3iKH542cQZFQYFE1Mi2QnOxWPaRrXNjJI4
         MuDg==
X-Forwarded-Encrypted: i=1; AJvYcCUKdcWtbMIMukBeiqZhONtNo/esd2u1SkZcB32ioNIwOjpKy5b1yWYXnZlq+OJqYwUJ1phuClfMif87SOAU/C/X3kawoMzv3365sG9Mjm4=
X-Gm-Message-State: AOJu0YzTP2HnXY+NXcAKfqThOfXrHJ9f0X1u38Q55IBYWIypGQcb3s/G
	PkoyebMWwv7d8YBymBOsEBASs1po3XTkUDHrgMQ9Jqs2WTDQEkPMZOleghzjBA==
X-Google-Smtp-Source: AGHT+IHzdSQr7S4nzuXGVCRjbs0S+UBeSr9hgbsKBmDePsbXshzjrVXjOPwnJy32Cv2lPwKor9P8TA==
X-Received: by 2002:a05:651c:22f:b0:2ec:1708:4db5 with SMTP id 38308e7fff4ca-2ec5b339debmr61572191fa.51.1719396236206;
        Wed, 26 Jun 2024 03:03:56 -0700 (PDT)
Message-ID: <b04bd8d8-f218-4c95-8014-3fbf9d3a0c91@suse.com>
Date: Wed, 26 Jun 2024 12:03:46 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/6] xen/bitops: Rename for_each_set_bit() to
 bitmap_for_each()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-3-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240625190719.788643-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:07, Andrew Cooper wrote:
> The current implementation wants to take an in-memory bitmap.  However, all
> ARM callers and all-but-1 x86 callers spill a scalar to the stack in order to
> use the "generic arbitrary bitmap" helpers under the hood.
> 
> This functions, but it's far from ideal.
> 
> Rename the construct and move it into bitmap.h, because having an iterator for
> an arbitrary bitmap is a useful thing.
> 
> This will allow us to re-implement for_each_set_bit() to be more appropriate
> for scalar values.
> 
> No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one cosmetic request: While doing the rename, would you mind sorting out
the style? Not necessarily uniformly across the entire patch, but at individual
sites presently using "impossible" formatting. IOW ...

> --- a/xen/arch/arm/gic-vgic.c
> +++ b/xen/arch/arm/gic-vgic.c
> @@ -111,7 +111,7 @@ static unsigned int gic_find_unused_lr(struct vcpu *v,
>      {
>          unsigned int used_lr;
>  
> -        for_each_set_bit(used_lr, lr_mask, nr_lrs)
> +        bitmap_for_each(used_lr, lr_mask, nr_lrs)

... while this one's fine (treating bitmap_for_each as ordinary identifier)
and while xstate.c is also fine (treating it as pseudo-keyword), ...

> --- a/xen/arch/arm/vgic.c
> +++ b/xen/arch/arm/vgic.c
> @@ -429,7 +429,7 @@ void vgic_set_irqs_pending(struct vcpu *v, uint32_t r, unsigned int rank)
>      /* LPIs will never be set pending via this function */
>      ASSERT(!is_lpi(32 * rank + 31));
>  
> -    for_each_set_bit( i, &mask, 32 )
> +    bitmap_for_each( i, &mask, 32 )
>      {

... this isn't possible formatting according to our style: Either there are
no blanks immediately inside the parentheses, or there also is one ahead of
the opening one.

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:17:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:17:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748698.1156496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPiW-0004wl-Bu; Wed, 26 Jun 2024 10:17:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748698.1156496; Wed, 26 Jun 2024 10:17:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPiW-0004we-8q; Wed, 26 Jun 2024 10:17:32 +0000
Received: by outflank-mailman (input) for mailman id 748698;
 Wed, 26 Jun 2024 10:17:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMPiV-0004wI-1O
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 10:17:31 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4b3d3757-33a5-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 12:17:30 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2ec5779b423so43402881fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 03:17:29 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7069e0a78e2sm2466990b3a.186.2024.06.26.03.17.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 03:17:28 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b3d3757-33a5-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719397049; x=1720001849; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=VNVUwNDUsFpGA9TMOS6Vumq0S3nackh6fEFU2xxP9vY=;
        b=REw7vk6FlxEDi/lTk5EY4D8cls7l2XPGqYBtdcB0VKjUkEbnRvkHaLwPnKUe1YzW7o
         c6XO8POelwR5yZ4sIoRqaNIfivu55qHi+UdTXKsRgssdCwxQwJBAmzs53cPTuyNkorOh
         5gwALbwqWD0lBTxQjdTFlc044f8nsYbvqiMclshjHyNyknDpLXwpU9K+wtrfebOxe/R4
         xN4L3VAd3JVvHFmlOWWxeJKdRvDywPAtjWTB/CFmR4eBK70ZH1+DvUut0g8MVu3sPzmK
         ahWLlqYRZR9VHXgqRbLUnBeEHqEyTaOlzL9BCI21d4owu4dV0ze5mpQ9R2zaZu5Tf4D2
         oz9A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719397049; x=1720001849;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VNVUwNDUsFpGA9TMOS6Vumq0S3nackh6fEFU2xxP9vY=;
        b=CQiRZZdIDuvdedgEUjRy7nHI9ZaurJCvqwt37+nDEdctHxnubFb0oUvR90QdPP9RLj
         RhzHYqK7XcJAaEhZ6ynQ4xh8OlXlL6BXhzEvDhU4cJnWS3Aqoc1r8Pbsm3/JLor/zt6W
         j+UdDDlm/7n1dTsvN0gEPZC3wlLHjPVUqw8nO7QYU4cizMckoRFo1cq+0daviZPVDW1n
         L9zwuqwSpEfJbambzM8HNoO7AzWqlw2p7ZO90uReo4twIQE2kKCsF2XouIGtMwhayAec
         ahqRlPy6RpU+Sjlw5gKi0kD6sLEn2Ehpke+CRhL4w9o8hcuTbTGA4pvB5h21ZIYi9zqm
         ZTbQ==
X-Forwarded-Encrypted: i=1; AJvYcCWbU3EKk8c8/W5PGR3CpfsV94+Fx5Dr//L+soHvcQJxPuy1dWxdWtI78k/BRpyIiEC72z9fMo+ogrJcY/OSk5ifMge/TwaK3EDH2SCUvsQ=
X-Gm-Message-State: AOJu0Yzz7uiPy8pJVk1EJzYpic1f9f9VoBf8q8V9DDaNVFDqlaOb9f9Y
	e9twbDYbPk+06jtp3pilOpG1HDlQJOTc864mbogdOjVsGZNiNpc9OrGxf1ubyw==
X-Google-Smtp-Source: AGHT+IFyWGNtOXooXhPnrJW1cU19yjqA5r9VfHb9Ejyi7RLk9ARtyQD2CBk0NvweL3eIUXzO0xINRQ==
X-Received: by 2002:a2e:8089:0:b0:2ec:2d75:509c with SMTP id 38308e7fff4ca-2ec5938a5cbmr76255111fa.46.1719397049237;
        Wed, 26 Jun 2024 03:17:29 -0700 (PDT)
Message-ID: <950da1fb-3df6-4962-b1fe-07e853507e37@suse.com>
Date: Wed, 26 Jun 2024 12:17:19 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 4/6] xen/bitops: Introduce for_each_set_bit()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-5-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240625190719.788643-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:07, Andrew Cooper wrote:
> The prior version (renamed to bitmap_for_each()) was inefficient when used
> over a scalar, but this is the more common usage even before accounting for
> the many opencoded forms.
> 
> Introduce a new version which operates on scalars only and does so without
> spilling them to memory.  This in turn requires the addition of a type-generic
> form of ffs().
> 
> Add testing for the new construct alongside the ffs/fls testing.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two remarks:

> The naming of ffs_g() is taken from the new compiler builtins which are using
> a g suffix to mean type-generic.

As you say, a g suffix, not a _g one; gcc14 documents __builtin_ffsg().
(Seeing it exists I wonder whether we wouldn't want to use it when
available, and only fall back to the macro for older versions.) Any
specific reason you use _g?

> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -56,6 +56,16 @@ static always_inline __pure unsigned int ffs64(uint64_t x)
>          return !x || (uint32_t)x ? ffs(x) : ffs(x >> 32) + 32;
>  }
>  
> +/*
> + * A type-generic ffs() which picks the appropriate ffs{,l,64}() based on its
> + * argument.
> + */
> +#define ffs_g(x)                                        \
> +    sizeof(x) <= sizeof(int) ? ffs(x) :                 \
> +        sizeof(x) <= sizeof(long) ? ffsl(x) :           \
> +        sizeof(x) <= sizeof(uint64_t) ? ffs64(x) :      \
> +        ({ BUILD_ERROR("ffs_g() Bad input type"); 0; })

Related to the patch introducing it: I can see how BUILD_ERROR_ON() may
be deemed sub-optimal here. Nevertheless I think we should be able to
find some common ground there.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:24:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:24:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748706.1156505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPp6-0007AM-19; Wed, 26 Jun 2024 10:24:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748706.1156505; Wed, 26 Jun 2024 10:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPp5-0007AF-Ue; Wed, 26 Jun 2024 10:24:19 +0000
Received: by outflank-mailman (input) for mailman id 748706;
 Wed, 26 Jun 2024 10:24:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMPp4-0007A9-1U
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 10:24:18 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3dec991e-33a6-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 12:24:16 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ec61eeed8eso36073331fa.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 03:24:16 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb3c5ccasm96800675ad.171.2024.06.26.03.24.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 03:24:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3dec991e-33a6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719397456; x=1720002256; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=uhoihlecdVahr094qoet3Yi040vaYgy+HQoQZQ+ajmM=;
        b=HZ99x/k0y9n4QvfDnTeWDWZlmm6tKHRBaO1PIYVdep5udiR/nM+UefYWiZZ5mDfkBC
         Soowf9+6cFq74nx6qoamstUHZINLpYg3vx2MbY8uNZe2SvuMng0ETys+hAx+tGeSlon/
         uGsSI0ZS1rXvJCXYsQPyy1eOWhmyy0vR2qE69mfM8o3EbT2gVVawfqC2JuIIoYlEUlno
         E92ec526Sag0RcKXppPmwDSDC8CJLsy9omMWIduzxdaZs25ZcdmghtZ+Jl50SdsKpgML
         YIJ7gDm5CoY5ez/tprbQuj1GgnclbEgh6g/pzKtGEN5I5nBOhI8buHv86hAyOZj5Z88K
         0FsA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719397456; x=1720002256;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=uhoihlecdVahr094qoet3Yi040vaYgy+HQoQZQ+ajmM=;
        b=bvBInBdTJiDKkW1CN03mfSEcPVSTA/k71OFzHWY6ngpeNdy4GFIv0Z0HBHzMqCtrAW
         a6QdBvSY/T1JRxO29LT3pkH5dga7t4gQde5gyiqIOW9UQxZTF3kDP8shKhQttYO4hJFJ
         BO7+L93m3FDSJskhey9kqs/UfnWMPLkmY5BbspW1V7y2MeftN/QqJAQjti3UM6gUCeOj
         Ws0enIVoUEeC9E0+HzZb5ToN8UTH6Nh1uLt0+6a4/X/NsXUCVNSoLSVAYzBQhLPOuRvD
         LnAP7C1mUez51wy5kHrd9lQZ/BSzC1kf7wj6CjBw7eZiBuRJAoXkoA+DF9h06jhoBzDC
         zInQ==
X-Forwarded-Encrypted: i=1; AJvYcCWk+ktiQ4fh1nLWJt1wIBFjZlWMcPNXtk3P7B9AEXXhxxRxupxabm6x/izlnjRbB+cU+egPyaeKgril76sQf1GXr+zvZWm+Ewa0gdwmnqo=
X-Gm-Message-State: AOJu0Yyp2x5459lQW0Dkc2OhrFwTJvrwmEtmtHK1Z84i5edLp3EHJViH
	AwTDOdk4ekm1T/lFoZGE+vUZKadCGf/G/aP5d7xJ5zbIvJ2Ef9PHEkF/ZdHbQQ==
X-Google-Smtp-Source: AGHT+IH+izyanq1Et8US/D22zeSqEYrSBod9f46ql9YGkiVoh5sF/u6GW0+eyiBNoLgQkLdoLzBduQ==
X-Received: by 2002:a2e:990a:0:b0:2ec:a024:a634 with SMTP id 38308e7fff4ca-2eca024ac25mr12271191fa.53.1719397456308;
        Wed, 26 Jun 2024 03:24:16 -0700 (PDT)
Message-ID: <59201cf5-9d86-4976-a331-2a7f8bb9635a@suse.com>
Date: Wed, 26 Jun 2024 12:24:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 6/6] x86/xstate: Switch back to for_each_set_bit()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-7-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <20240625190719.788643-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:07, Andrew Cooper wrote:
> In all 3 examples, we're iterating over a scalar.  No caller can pass the
> COMPRESSED flag in, so the upper bound of 63, as opposed to 64, doesn't
> matter.

Not sure, maybe more a language question (for my education): Is "can"
really appropriate here? In recalculate_xstate() we calculate the
value ourselves, but in the two other cases the value is incoming to
the functions. Architecturally those values should not have bit 63 set,
but that's weaker than "can" according to my understanding. I'd be
fine with "may", for example.

> This alone produces:
> 
>   add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-161 (-161)
>   Function                                     old     new   delta
>   compress_xsave_states                         66      58      -8
>   xstate_uncompressed_size                     119      71     -48
>   xstate_compressed_size                       124      76     -48
>   recalculate_xstate                           347     290     -57
> 
> where xstate_{un,}compressed_size() have practically halved in size despite
> being small before.
> 
> The change in compress_xsave_states() is unexpected.  The function is almost
> entirely dead code, and within what remains there's a smaller stack frame.  I
> suspect it's leftovers that the optimiser couldn't fully discard.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Other than the above:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:25:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:25:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748712.1156516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPpv-0007hx-Cd; Wed, 26 Jun 2024 10:25:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748712.1156516; Wed, 26 Jun 2024 10:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPpv-0007hq-9q; Wed, 26 Jun 2024 10:25:11 +0000
Received: by outflank-mailman (input) for mailman id 748712;
 Wed, 26 Jun 2024 10:25:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMPpu-0007gX-Fv
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 10:25:10 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 5d286a5b-33a6-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 12:25:09 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id D65374EE0738;
 Wed, 26 Jun 2024 12:25:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d286a5b-33a6-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Wed, 26 Jun 2024 12:25:08 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
In-Reply-To: <a5009c3e-cba6-4737-aaff-c3b79a11169c@suse.com>
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
 <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com>
 <797b00049612507d273facc581b2c2c5@bugseng.com>
 <a5009c3e-cba6-4737-aaff-c3b79a11169c@suse.com>
Message-ID: <e3ae670923fd061986e27b3f95833b88@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-26 11:26, Jan Beulich wrote:
> On 26.06.2024 11:20, Nicola Vetrini wrote:
>> On 2024-06-26 11:06, Jan Beulich wrote:
>>> On 25.06.2024 21:31, Nicola Vetrini wrote:
>>>> On 2024-03-12 09:16, Jan Beulich wrote:
>>>>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>>>>> --- a/xen/arch/x86/Makefile
>>>>>> +++ b/xen/arch/x86/Makefile
>>>>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>>>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i
>>>>>> $(src)/Makefile
>>>>>>  	$(call filechk,asm-macros.h)
>>>>>> 
>>>>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>>>>> 
>>>>> This wants to use :=, I think - there's no reason to invoke the 
>>>>> shell
>>>>> ...
>>>> 
>>>> I agree on this
>>>> 
>>>>> 
>>>>>>  define filechk_asm-macros.h
>>>>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>>>      echo '#if 0'; \
>>>>>>      echo '.if 0'; \
>>>>>>      echo '#endif'; \
>>>>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>>>>>> -    echo '#define __ASM_MACROS_H__'; \
>>>>>>      echo 'asm ( ".include \"$@\"" );'; \
>>>>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>>>>>      echo '#if 0'; \
>>>>>>      echo '.endif'; \
>>>>>>      cat $<; \
>>>>>> -    echo '#endif'
>>>>>> +    echo '#endif'; \
>>>>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>>>>>  endef
>>>>> 
>>>>> ... three times while expanding this macro. Alternatively (to avoid
>>>>> an unnecessary shell invocation when this macro is never expanded 
>>>>> at
>>>>> all) a shell variable inside the "define" above would want
>>>>> introducing.
>>>>> Whether this 2nd approach is better depends on whether we 
>>>>> anticipate
>>>>> further uses of ARCHDIR.
>>>> 
>>>> However here I'm not entirely sure about the meaning of this latter
>>>> proposal.
>>>> My proposal is the following:
>>>> 
>>>> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
>>>> 
>>>> in a suitably generic place (such as Kbuild.include or maybe
>>>> xen/Makefile) as you suggested in subsequent patches that reused 
>>>> this
>>>> pattern.
>>> 
>>> If $(ARCHDIR) is going to be used elsewhere, then what you suggest is
>>> fine.
>>> My "whether" in the earlier reply specifically left open for
>>> clarification
>>> what the intentions with the variable are. The alternative I had
>>> described
>>> makes sense only when $(ARCHDIR) would only ever be used inside the
>>> filechk_asm-macros.h macro.
>> 
>> Yes, the intention is to reuse $(ARCHDIR) in other places, as you can
>> tell from the fact that subsequent patches replicate the same pattern.
>> This is going to save some duplication.
>> The only matter left then is whether xen/Makefile (around line 250, 
>> just
>> after setting SRCARCH) would be better, or Kbuild.include. To me the
>> former place seems more natural, but I'm not totally sure.
> 
> Depends on where all the intended uses are. If they're all in 
> xen/Makefile,
> then having the macro just there is of course sufficient. Whereas when 
> it's
> needed elsewhere, instead of exporting putting it in Kbuild.include 
> would
> seem more natural / desirable to me.
> 

The places where this would be used are these (file: target or define):

  xen/build.mk:          arch/$(SRCARCH)/include/asm/asm-offsets.h: asm-offsets.s
  xen/include/Makefile:  define cmd_xlat_h
  xen/arch/x86/Makefile: define filechk_asm-macros.h

The only issue that comes to my mind (it may not be one at all) is that
SRCARCH is defined and exported in xen/Makefile after including
Kbuild.include, so ARCHDIR would need to be defined after SRCARCH is
assigned:

include scripts/Kbuild.include

# Don't break if the build process wasn't called from the top level
# we need XEN_TARGET_ARCH to generate the proper config
include $(XEN_ROOT)/Config.mk

# Set ARCH/SRCARCH appropriately.

ARCH := $(XEN_TARGET_ARCH)
SRCARCH := $(shell echo $(ARCH) | \
     sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' \
         -e 's/riscv.*/riscv/g' -e 's/ppc.*/ppc/g')
export ARCH SRCARCH

Am I missing something?
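[A hedged sketch of the two derivations in the fragment above, run outside of make; variable names match the Makefile, the XEN_TARGET_ARCH value is just an example:]

```shell
XEN_TARGET_ARCH=x86_64

# SRCARCH: collapse the target arch to its source-directory name,
# exactly as the sed expression in xen/Makefile does.
SRCARCH=$(echo "$XEN_TARGET_ARCH" | \
    sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' \
        -e 's/riscv.*/riscv/g' -e 's/ppc.*/ppc/g')

# ARCHDIR: the uppercased form used in the generated header guards --
# which is why it can only be assigned after SRCARCH is known.
ARCHDIR=$(echo "$SRCARCH" | tr a-z A-Z)

echo "$SRCARCH $ARCHDIR"
```

Running this prints "x86 X86"; with XEN_TARGET_ARCH=arm64 it would print "arm ARM".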

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:32:01 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:32:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748721.1156525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPwR-0001bm-0j; Wed, 26 Jun 2024 10:31:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748721.1156525; Wed, 26 Jun 2024 10:31:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMPwQ-0001bf-UQ; Wed, 26 Jun 2024 10:31:54 +0000
Received: by outflank-mailman (input) for mailman id 748721;
 Wed, 26 Jun 2024 10:31:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMPwP-0001bZ-P6
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 10:31:53 +0000
Received: from mail-lj1-x22b.google.com (mail-lj1-x22b.google.com
 [2a00:1450:4864:20::22b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4d97c9cd-33a7-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 12:31:52 +0200 (CEST)
Received: by mail-lj1-x22b.google.com with SMTP id
 38308e7fff4ca-2ec17eb4493so88407161fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 03:31:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c8d7e5c46asm1307206a91.5.2024.06.26.03.31.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 03:31:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d97c9cd-33a7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719397912; x=1720002712; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=nfFTNrGHQaG72YKlw5OfVon/okQOPf5apgYkqAlwkGo=;
        b=Ecwcg/ZQLrsCbS/QXKpokmVD445MXIjvoPLfE5Bmzu08/I6A3L8iT91SZ6Jbf1Bmj4
         2JZIYwrAuK5NCTlXVVP5p1zmKbH0oghbmS78PNVuriIbjXSJYjwaRN6E/5IL4tTcedr4
         neXR6nywJlAFt+LobNX5CVeLSJYlMuyYSH1QE8hYxs2v4g/osXBts14KTQ31OmcKkSGp
         1QgHDoozPJidhVLDMMZBsXEfwmniUsj+w424CHfqwq95PjkKM3us6eZANNHtBEUKauE9
         dfPG/ej3BIm1+dkRVA/i/LjLa+v+Sbjj8vVchX6QAGhsm0cehfMQcsrL9mC6uN46wipl
         mr0w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719397912; x=1720002712;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=nfFTNrGHQaG72YKlw5OfVon/okQOPf5apgYkqAlwkGo=;
        b=Wd8Aj5kl0azmf/eVQotA94XR0NCEZmTMnOhfsaivTb5gNn/IUyWhewi3rhaXGsijhC
         b3imcZkzj2Lsskzi65tcqfF+XxLmZEArGUFLZdUq4V+AtFupufwq5lO1ryBuijm2xzKY
         nXTKbRPjX3O7hRaVn4vRONGGgcTk12I2hqu523o62jwHdHb3bbx02SQGPu2I/is37yBK
         sRBbNCnDUZVHk8j0gJTBg9YnDlPw/jLi+gIHvawyA3cFJncwdWSlMXnm+fvxA3k++5vC
         hzGPhpc0twyS4uSJQSv0hikbyQymyK4zJ76URFXVPKowSFK3kTB2xl4CcOdi1r3yMHUi
         pitg==
X-Forwarded-Encrypted: i=1; AJvYcCV7pxgPEcS8XZNY6iSGCzLG3yiYUYGoonY/q/Nxk+RoXqfKEXSNQQiwSrD+3d4fYTla0MxEu3XINV3ybaMRB3aFf8pidc2eGtdaEG6f9Xs=
X-Gm-Message-State: AOJu0YwNzwZmyj2BFLV5pK+rGaJ4nQnWiLtwx2/bzi4cE3Zwl3ucoi3d
	MNkJ4gPlGd9EH15jXfQj2lQ5SF5nRF2HGeWeDaNWOx14Jp39z1wa2DOFz1sIeNxhshhhyRuH/U0
	=
X-Google-Smtp-Source: AGHT+IE3K6YDy56oBMe/cCGpZWqrnyLqFvwRJamfk5nFfI/k1CHIpghVTk7rEgAca4AGn/68tVB8rg==
X-Received: by 2002:a2e:9054:0:b0:2ed:59af:ecb4 with SMTP id 38308e7fff4ca-2ed59afee82mr13382191fa.41.1719397912113;
        Wed, 26 Jun 2024 03:31:52 -0700 (PDT)
Message-ID: <0c88d86e-f226-4225-b723-a6662fcd5bef@suse.com>
Date: Wed, 26 Jun 2024 12:31:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony@xenproject.org>
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
 <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com>
 <797b00049612507d273facc581b2c2c5@bugseng.com>
 <a5009c3e-cba6-4737-aaff-c3b79a11169c@suse.com>
 <e3ae670923fd061986e27b3f95833b88@bugseng.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <e3ae670923fd061986e27b3f95833b88@bugseng.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 12:25, Nicola Vetrini wrote:
> On 2024-06-26 11:26, Jan Beulich wrote:
>> On 26.06.2024 11:20, Nicola Vetrini wrote:
>>> On 2024-06-26 11:06, Jan Beulich wrote:
>>>> On 25.06.2024 21:31, Nicola Vetrini wrote:
>>>>> On 2024-03-12 09:16, Jan Beulich wrote:
>>>>>> On 11.03.2024 09:59, Simone Ballarin wrote:
>>>>>>> --- a/xen/arch/x86/Makefile
>>>>>>> +++ b/xen/arch/x86/Makefile
>>>>>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>>>>>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i
>>>>>>> $(src)/Makefile
>>>>>>>  	$(call filechk,asm-macros.h)
>>>>>>>
>>>>>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>>>>>>
>>>>>> This wants to use :=, I think - there's no reason to invoke the 
>>>>>> shell
>>>>>> ...
>>>>>
>>>>> I agree on this
>>>>>
>>>>>>
>>>>>>>  define filechk_asm-macros.h
>>>>>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>>>>>>>      echo '#if 0'; \
>>>>>>>      echo '.if 0'; \
>>>>>>>      echo '#endif'; \
>>>>>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>>>>>>> -    echo '#define __ASM_MACROS_H__'; \
>>>>>>>      echo 'asm ( ".include \"$@\"" );'; \
>>>>>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>>>>>>>      echo '#if 0'; \
>>>>>>>      echo '.endif'; \
>>>>>>>      cat $<; \
>>>>>>> -    echo '#endif'
>>>>>>> +    echo '#endif'; \
>>>>>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>>>>>>>  endef
>>>>>>
>>>>>> ... three times while expanding this macro. Alternatively (to avoid
>>>>>> an unnecessary shell invocation when this macro is never expanded 
>>>>>> at
>>>>>> all) a shell variable inside the "define" above would want
>>>>>> introducing.
>>>>>> Whether this 2nd approach is better depends on whether we 
>>>>>> anticipate
>>>>>> further uses of ARCHDIR.
>>>>>
>>>>> However here I'm not entirely sure about the meaning of this latter
>>>>> proposal.
>>>>> My proposal is the following:
>>>>>
>>>>> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
>>>>>
>>>>> in a suitably generic place (such as Kbuild.include or maybe
>>>>> xen/Makefile) as you suggested in subsequent patches that reused 
>>>>> this
>>>>> pattern.
>>>>
>>>> If $(ARCHDIR) is going to be used elsewhere, then what you suggest is
>>>> fine.
>>>> My "whether" in the earlier reply specifically left open for
>>>> clarification
>>>> what the intentions with the variable are. The alternative I had
>>>> described
>>>> makes sense only when $(ARCHDIR) would only ever be used inside the
>>>> filechk_asm-macros.h macro.
>>>
>>> Yes, the intention is to reuse $(ARCHDIR) in other places, as you can
>>> tell from the fact that subsequent patches replicate the same pattern.
>>> This is going to save some duplication.
>>> The only matter left then is whether xen/Makefile (around line 250, 
>>> just
>>> after setting SRCARCH) would be better, or Kbuild.include. To me the
>>> former place seems more natural, but I'm not totally sure.
>>
>> Depends on where all the intended uses are. If they're all in 
>> xen/Makefile,
>> then having the macro just there is of course sufficient. Whereas when 
>> it's
>> needed elsewhere, instead of exporting putting it in Kbuild.include 
>> would
>> seem more natural / desirable to me.
>>
> 
> The places where this would be used are these:
> file: target (or define)
> xen/build.mk: arch/$(SRCARCH)/include/asm/asm-offsets.h: asm-offsets.s
> xen/include/Makefile: define cmd_xlat_h
> xen/arch/x86/Makefile: define filechk_asm-macros.h
> 
> The only issue that comes to my mind (it may not be one at all) is that 
> SRCARCH is defined and exported in xen/Makefile after including 
> Kbuild.include, so it would need to be defined after SRCARCH is 
> assigned:
> 
> include scripts/Kbuild.include
> 
> # Don't break if the build process wasn't called from the top level
> # we need XEN_TARGET_ARCH to generate the proper config
> include $(XEN_ROOT)/Config.mk
> 
> # Set ARCH/SRCARCH appropriately.
> 
> ARCH := $(XEN_TARGET_ARCH)
> SRCARCH := $(shell echo $(ARCH) | \
>      sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' \
>          -e 's/riscv.*/riscv/g' -e 's/ppc.*/ppc/g')
> export ARCH SRCARCH
> 
> Am I missing something?

In that case the alternatives are exporting, or using = rather than := in
Kbuild.include, i.e. other than what was initially requested. Personally I
dislike exporting to a fair degree, but I'm not sure which option is better
in this case. Cc-ing Anthony for possible input.

Jan
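
For reference, the transformation the proposed ARCHDIR assignment performs
can be sketched in plain shell (variable names mirror the Makefile snippet
quoted above; this is an illustration, not part of the build system). The
Make-level point under discussion is that with := the shell command runs
once at assignment time, while with = it would run on every expansion of
$(ARCHDIR):

```shell
# Sketch of what ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z) produces.
SRCARCH=x86
ARCHDIR=$(echo "$SRCARCH" | tr a-z A-Z)
echo "$ARCHDIR"                        # X86
echo "ASM_${ARCHDIR}_ASM_MACROS_H"     # guard as formed in filechk_asm-macros.h
```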


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:46:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:46:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748727.1156535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQAw-000412-7n; Wed, 26 Jun 2024 10:46:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748727.1156535; Wed, 26 Jun 2024 10:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQAw-00040v-4z; Wed, 26 Jun 2024 10:46:54 +0000
Received: by outflank-mailman (input) for mailman id 748727;
 Wed, 26 Jun 2024 10:46:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMQAu-00040p-PC
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 10:46:52 +0000
Received: from mail-lj1-x235.google.com (mail-lj1-x235.google.com
 [2a00:1450:4864:20::235])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 64df71e4-33a9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 12:46:50 +0200 (CEST)
Received: by mail-lj1-x235.google.com with SMTP id
 38308e7fff4ca-2ec6635aa43so27032981fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 03:46:50 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-70677465799sm6300002b3a.62.2024.06.26.03.46.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 03:46:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64df71e4-33a9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719398810; x=1720003610; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=LX8UarjuEjWlLpE+YqB6gP+GvT8ofrUFe1yYiIq57Zk=;
        b=Wn6UZNHwAww0baPzHJrrpLwKYlPRJAhYw2iC43RQ4Vgnu7OJrphpR3wPD/RdVCu/oW
         ContYIbtNMEHN2kuvrYA3jC25NANMvA5I8ua5jcoD/+JyGxLHndZ5pw9oB6/tZ0jjb7J
         JCufNNdVt4Iy1tKEoHh78aUpOvHOB0gNshVjVPdAaJHPJWesgwLhZaaVwIkN0VaOpXmE
         h6mOtR8nfH3fhZ3AdJiq/9PCcAkmF5Njkb2w3chYfkoSvwFrngOWIFfp+X0TdYffVCRr
         6OcHz3NmOatZco1G0bVYuybVEBVBYyPrEmSMyWJxk+hJcqdvAlSneLmt/Nygezf5ZKyq
         FUSA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719398810; x=1720003610;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LX8UarjuEjWlLpE+YqB6gP+GvT8ofrUFe1yYiIq57Zk=;
        b=ChK+Zx86gTTXetr5poXL7SRvvMI1K+cpnewA7vfmyP/mdfV9LQexUCusFATw+wkZ8h
         ZACUaRaCbPLQ4wstam4T3Tl2xMLPtmRa3wQWwtFCZI9ZYXH1bzUbEWhjpQjbW4ZUvoNQ
         3OWEl0raaxDLybwzsFg9BO5hsk/56gs6dJo6FFv1lhON293oo26UZKHbFHk4s4IU4BS3
         xttzJer9dbRLSkqJgZ6TSOVEH7cbhv02loKWTn2wcwjvu3uYc4hiLRc3ubjluCTEjIwG
         sW5zzq2m/aj53Knw2AP1M5MGH8Dye3AyZMN1djlliW/uhQ2s7Swy55Rz54lKbGTajA7O
         KxGA==
X-Forwarded-Encrypted: i=1; AJvYcCUGyFKYhjPGzlxPToVDOTcezWp0uflRhm5jSFFf2E35m7/rhH7QxGYzlRpwZgciRxDysqWlRtZrMCL13LTQ62Qx5QM+1/u2frTtyTPmtU0=
X-Gm-Message-State: AOJu0YymQ2vFPKFSQzIauuY3dxKGE4FLvag6p2DFrSvlSkHPKrK4t9vN
	Mca3lh1nYhI4qrHLb1XXrYl41kNpBrCXjU+Li0MObrnCXvkRsh6ZgszT8S25XA==
X-Google-Smtp-Source: AGHT+IFJt5LhZmNKG9s9VgdPvey+K00VQWW4fci4js7Y9ufaRnArwS8sxZuqCUZMKSBCY0brrVL/gg==
X-Received: by 2002:a2e:6a0f:0:b0:2ec:505e:c45b with SMTP id 38308e7fff4ca-2ec579c936fmr61171611fa.35.1719398810178;
        Wed, 26 Jun 2024 03:46:50 -0700 (PDT)
Message-ID: <f324a4f3-b64d-4b20-92d0-8cfea050d44a@suse.com>
Date: Wed, 26 Jun 2024 12:46:42 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 07/10] xen/common: fix build issue for common/trace.c
To: Oleksii <oleksii.kurochko@gmail.com>
Cc: George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <f14f2c5629a75856f4bafdbff3cc165c373f8dc2.1719319093.git.oleksii.kurochko@gmail.com>
 <4a4e37a9-eac7-4e72-8845-6b4bbd7bafe6@suse.com>
 <c52181a7aca8b56716d7ee354ebda9d32e67816c.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <c52181a7aca8b56716d7ee354ebda9d32e67816c.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 25.06.2024 18:23, Oleksii wrote:
> On Tue, 2024-06-25 at 16:25 +0200, Jan Beulich wrote:
>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>> During Gitlab CI randconfig job for RISC-V failed with an error:
>>>  common/trace.c:57:22: error: expected '=', ',', ';', 'asm' or
>>>                               '__attribute__' before
>>> '__read_mostly'
>>>    57 | static u32 data_size __read_mostly;
>>>
>>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
>>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>
>> If you give a release-ack, this can go in right away, I think.
> Release-Acked-by: Oleksii Kurochko  <oleksii.kurochko@gmail.com>

Thanks, but actually I was misled by the subject prefix. From a formal
perspective this really wants an ack from George (and mine doesn't
count for anything at all).

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:47:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:47:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748728.1156546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQB2-0004Gm-EE; Wed, 26 Jun 2024 10:47:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748728.1156546; Wed, 26 Jun 2024 10:47:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQB2-0004Gf-BC; Wed, 26 Jun 2024 10:47:00 +0000
Received: by outflank-mailman (input) for mailman id 748728;
 Wed, 26 Jun 2024 10:46:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMQB1-0004Fn-1b; Wed, 26 Jun 2024 10:46:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMQB0-0004c8-Sv; Wed, 26 Jun 2024 10:46:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMQB0-0004Wv-9A; Wed, 26 Jun 2024 10:46:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMQB0-0008PN-8d; Wed, 26 Jun 2024 10:46:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=otb8uxi86p1QMZXMoYmYgcqr0perS+5NhLOXark85+A=; b=RfPpCa345PAYfEMoCGyWNSEoXK
	Yjigt3JK+poorSWrupntLakYUUYTp0WQlKoBpt7gStHSQh8E0Ub1/N2JtKw80DrUemV6KoZhgf5fL
	61R1HjyuM6Fekxf9AjexiQF6H5l2SJKVtFo5egRI2ALUn9MkLqiRvevPkSXGUyxIErZs=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186512-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186512: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=dc002d4f2d76bdd826359a3dd608d9bc621fcb47
X-Osstest-Versions-That:
    ovmf=78bccfec9ce5082499db035270e7998d5330d75c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 10:46:58 +0000

flight 186512 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186512/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 dc002d4f2d76bdd826359a3dd608d9bc621fcb47
baseline version:
 ovmf                 78bccfec9ce5082499db035270e7998d5330d75c

Last test of basis   186511  2024-06-26 07:13:01 Z    0 days
Testing same since   186512  2024-06-26 09:14:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wenxing Hou <wenxing.hou@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   78bccfec9c..dc002d4f2d  dc002d4f2d76bdd826359a3dd608d9bc621fcb47 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 10:48:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 10:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748739.1156555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQCc-0005k7-Pj; Wed, 26 Jun 2024 10:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748739.1156555; Wed, 26 Jun 2024 10:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQCc-0005k0-N4; Wed, 26 Jun 2024 10:48:38 +0000
Received: by outflank-mailman (input) for mailman id 748739;
 Wed, 26 Jun 2024 10:48:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMQCa-0005jc-Ow
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 10:48:36 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a3722aee-33a9-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 12:48:35 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 2E3584EE0738;
 Wed, 26 Jun 2024 12:48:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3722aee-33a9-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Nicola Vetrini <nicola.vetrini@bugseng.com>
Subject: [XEN PATCH v2] x86/mctelem: address violations of MISRA C: 2012 Rule 5.3
Date: Wed, 26 Jun 2024 12:48:31 +0200
Message-Id: <94752f77597b05ef9b8a387bf29512b11c0d1e15.1719398571.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>

This addresses violations of MISRA C:2012 Rule 5.3, which states the
following: An identifier declared in an inner scope shall not hide an
identifier declared in an outer scope.

In this case the identifier being shadowed is the file-scope static struct
mctctl in this file, therefore the local variables are renamed to avoid the hiding.

No functional change.

Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
Changes in v2:
- s/mctctl_cpu/ctl/ and amended file comment and commit message
---
 xen/arch/x86/cpu/mcheck/mctelem.c | 26 +++++++++++++-------------
 1 file changed, 13 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/cpu/mcheck/mctelem.c b/xen/arch/x86/cpu/mcheck/mctelem.c
index b8d0368a7d37..123e4102adca 100644
--- a/xen/arch/x86/cpu/mcheck/mctelem.c
+++ b/xen/arch/x86/cpu/mcheck/mctelem.c
@@ -168,28 +168,28 @@ static void mctelem_xchg_head(struct mctelem_ent **headp,
 void mctelem_defer(mctelem_cookie_t cookie, bool lmce)
 {
 	struct mctelem_ent *tep = COOKIE2MCTE(cookie);
-	struct mc_telem_cpu_ctl *mctctl = &this_cpu(mctctl);
+	struct mc_telem_cpu_ctl *ctl = &this_cpu(mctctl);
 
-	ASSERT(mctctl->pending == NULL || mctctl->lmce_pending == NULL);
+	ASSERT(ctl->pending == NULL || ctl->lmce_pending == NULL);
 
-	if (mctctl->pending)
-		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
+	if (ctl->pending)
+		mctelem_xchg_head(&ctl->pending, &tep->mcte_next, tep);
 	else if (lmce)
-		mctelem_xchg_head(&mctctl->lmce_pending, &tep->mcte_next, tep);
+		mctelem_xchg_head(&ctl->lmce_pending, &tep->mcte_next, tep);
 	else {
 		/*
 		 * LMCE is supported on Skylake-server and later CPUs, on
 		 * which mce_broadcast is always true. Therefore, non-empty
-		 * mctctl->lmce_pending in this branch implies a broadcasting
+		 * ctl->lmce_pending in this branch implies a broadcasting
 		 * MC# is being handled, every CPU is in the exception
-		 * context, and no one is consuming mctctl->pending at this
+		 * context, and no one is consuming ctl->pending at this
 		 * moment. As a result, the following two exchanges together
 		 * can be treated as atomic.
 		 */
-		if (mctctl->lmce_pending)
-			mctelem_xchg_head(&mctctl->lmce_pending,
-					  &mctctl->pending, NULL);
-		mctelem_xchg_head(&mctctl->pending, &tep->mcte_next, tep);
+		if (ctl->lmce_pending)
+			mctelem_xchg_head(&ctl->lmce_pending,
+					  &ctl->pending, NULL);
+		mctelem_xchg_head(&ctl->pending, &tep->mcte_next, tep);
 	}
 }
 
@@ -213,7 +213,7 @@ void mctelem_process_deferred(unsigned int cpu,
 {
 	struct mctelem_ent *tep;
 	struct mctelem_ent *head, *prev;
-	struct mc_telem_cpu_ctl *mctctl = &per_cpu(mctctl, cpu);
+	struct mc_telem_cpu_ctl *ctl = &per_cpu(mctctl, cpu);
 	int ret;
 
 	/*
@@ -232,7 +232,7 @@ void mctelem_process_deferred(unsigned int cpu,
 	 * Any MC# occurring after the following atomic exchange will be
 	 * handled by another round of MCE softirq.
 	 */
-	mctelem_xchg_head(lmce ? &mctctl->lmce_pending : &mctctl->pending,
+	mctelem_xchg_head(lmce ? &ctl->lmce_pending : &ctl->pending,
 			  &this_cpu(mctctl.processing), NULL);
 
 	head = this_cpu(mctctl.processing);
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 11:17:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 11:17:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748757.1156570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQeF-0002Jg-8K; Wed, 26 Jun 2024 11:17:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748757.1156570; Wed, 26 Jun 2024 11:17:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMQeF-0002JZ-5P; Wed, 26 Jun 2024 11:17:11 +0000
Received: by outflank-mailman (input) for mailman id 748757;
 Wed, 26 Jun 2024 11:17:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t7g0=N4=suse.com=jgross@srs-se1.protection.inumbo.net>)
 id 1sMQeE-0002JT-72
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 11:17:10 +0000
Received: from mail-wm1-x331.google.com (mail-wm1-x331.google.com
 [2a00:1450:4864:20::331])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a0164f47-33ad-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 13:17:08 +0200 (CEST)
Received: by mail-wm1-x331.google.com with SMTP id
 5b1f17b1804b1-4217926991fso55972765e9.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 04:17:08 -0700 (PDT)
Received: from ?IPV6:2003:e5:8729:4000:29eb:6d9d:3214:39d2?
 (p200300e58729400029eb6d9d321439d2.dip0.t-ipconnect.de.
 [2003:e5:8729:4000:29eb:6d9d:3214:39d2])
 by smtp.gmail.com with ESMTPSA id
 5b1f17b1804b1-424c824eef1sm21468545e9.14.2024.06.26.04.17.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 04:17:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0164f47-33ad-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719400627; x=1720005427; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :from:to:cc:subject:date:message-id:reply-to;
        bh=jirmK4baIZ++d+1+5zn7DaxAv+wTxURz/bjD2mn5TdE=;
        b=NOtP8cqYWnDf4fVugvgSFaAplsgB1LIw/D1OQXym9y0eHd1GUR1oTRqa1jChHm+YN2
         Fpopx6pDkb8nvW773DUfHG4rSh/pQM3pDDRF8SUEsU3m8ZmZpCeOdPoKHbcxpwxX6oDU
         sExU7Os3lyKWUTA/q+nRcFFBjdf6L2ElL8UATnCbTi1kH8ZHhGQRBeeaWC9kR688aW5P
         noTTgFVpHg5O/iWoyRNvU2cej1QhSRYgYjNUgo3O3q7o3DLBCMXyHcRizKbZPbzuEiD4
         T3Wtss3c+h5PDlxrcHV4hLIagh7c6w2+IQeU4O4j//uTA7ubyuNocavjm+Ig34kgYJNM
         46xA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719400627; x=1720005427;
        h=content-transfer-encoding:in-reply-to:from:content-language
         :references:cc:to:subject:user-agent:mime-version:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=jirmK4baIZ++d+1+5zn7DaxAv+wTxURz/bjD2mn5TdE=;
        b=fYWGP7hgD10VMtt0u5+9uthFlR9plLaEe6o75Gqii4u4TKkMJ/q2SeDiLKywyKj36g
         VRze/S2PbHQ310FtfcsAudS/g35O6dObiT8+46UEkBQUiMDT7oX4qpjWw6DjvXpu35qO
         NuFnfIkYRPpYtvrcJpSsejOSXTwK5F2lSne32W8tqGXY5yna5+v7p9auZKxRsUUYj22i
         BmZ2bwjH6c1Uten/9y8dkUKNipagR/DXXArgLMZQh/7Idw5P41roUULr/9lpDMfKqmV9
         eS2jweDsk39PKvFnoNYSX2PBNEezAhUFhqwsSoVcc3VbqfN8Em/5KHPO9Pbl+OL6X8H0
         N9sQ==
X-Forwarded-Encrypted: i=1; AJvYcCVFOOFAzPQ0FH5Y0wuzcUY8VYViJOqS9Zj8ax2VzmaEvuAFV+yaLBAo10gobDiM+6+kDxFJTjTOFZq6b19Tnv0qBQv03f0dpReaDoDaPGs=
X-Gm-Message-State: AOJu0Yyb1F+za/QGxsAlRdtTAIbeBxCyIlxVx2wd4y7xDgkQz3mZxoj9
	iUHes5d+2SR+mYlI5nDIcaO6osMWPKBTiUqcIgb15LLFd797dMDzInnBjTeJHxs=
X-Google-Smtp-Source: AGHT+IG11Pm/5VIV+typWr36t19hsEj+wM6CKEMbUyV7mflR3wnoTfqDDdw844eu8S0gEuSZiLPl3w==
X-Received: by 2002:a05:600c:5699:b0:422:384e:4365 with SMTP id 5b1f17b1804b1-4248b936414mr74443695e9.2.1719400627478;
        Wed, 26 Jun 2024 04:17:07 -0700 (PDT)
Message-ID: <174c3e25-5d4c-4874-b2c0-a4f962557c19@suse.com>
Date: Wed, 26 Jun 2024 13:17:06 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2] xen/sched: fix error handling in cpu_schedule_up()
To: Jan Beulich <jbeulich@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20240626055425.3622-1-jgross@suse.com>
 <e740e1be-7890-4e8f-879a-87043ac109c5@suse.com>
Content-Language: en-US
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
In-Reply-To: <e740e1be-7890-4e8f-879a-87043ac109c5@suse.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

On 26.06.24 11:02, Jan Beulich wrote:
> On 26.06.2024 07:54, Juergen Gross wrote:
>> If cpu_schedule_up() fails, it needs to undo all externally visible
>> changes it has made before the failure.
>>
>> The reason is that cpu_schedule_callback() won't be called with the
>> CPU_UP_CANCELED notifier if cpu_schedule_up() failed.
>>
>> Reported-by: Jan Beulich <jbeulich@suse.com>
>> Fixes: 207589dbacd4 ("xen/sched: move per cpu scheduler private data into struct sched_resource")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with two questions, just for my own reassurance:
> 
>> --- a/xen/common/sched/core.c
>> +++ b/xen/common/sched/core.c
>> @@ -2755,6 +2755,36 @@ static struct sched_resource *sched_alloc_res(void)
>>       return sr;
>>   }
>>   
>> +static void cf_check sched_res_free(struct rcu_head *head)
>> +{
>> +    struct sched_resource *sr = container_of(head, struct sched_resource, rcu);
>> +
>> +    free_cpumask_var(sr->cpus);
>> +    if ( sr->sched_unit_idle )
>> +        sched_free_unit_mem(sr->sched_unit_idle);
>> +    xfree(sr);
>> +}
>> +
>> +static void cpu_schedule_down(unsigned int cpu)
>> +{
>> +    struct sched_resource *sr;
>> +
>> +    rcu_read_lock(&sched_res_rculock);
>> +
>> +    sr = get_sched_res(cpu);
>> +
>> +    kill_timer(&sr->s_timer);
>> +
>> +    cpumask_clear_cpu(cpu, &sched_res_mask);
>> +    set_sched_res(cpu, NULL);
>> +
>> +    /* Keep idle unit. */
>> +    sr->sched_unit_idle = NULL;
>> +    call_rcu(&sr->rcu, sched_res_free);
>> +
>> +    rcu_read_unlock(&sched_res_rculock);
>> +}
> 
> Eyeballing suggests these two functions don't change at all; they're solely
> being moved up?

Correct.

> Also, for the purpose here, use of RCU to effect the freeing isn't strictly
> necessary. It's just that it also doesn't hurt doing it that way, and it
> would be more code to directly free when coming from cpu_schedule_up()?

Yes.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 12:04:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 12:04:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748787.1156643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMRNT-0002iF-ET; Wed, 26 Jun 2024 12:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748787.1156643; Wed, 26 Jun 2024 12:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMRNT-0002i8-Bp; Wed, 26 Jun 2024 12:03:55 +0000
Received: by outflank-mailman (input) for mailman id 748787;
 Wed, 26 Jun 2024 12:03:54 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMRNS-0002i2-ME
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 12:03:54 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 26e4528e-33b4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 14:03:51 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2ec6635aa43so27840571fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 05:03:51 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1fa36cc1d10sm59153625ad.305.2024.06.26.05.03.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 05:03:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26e4528e-33b4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719403431; x=1720008231; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=tZO/76Y9olr8ovq6vNDsmdXHtHSiBbUHFWBc3FZmZwE=;
        b=NZMP5rE35rQuT6BxYD5YqTGKPG2A1SMO4VFd19vRSBnD5xNFXGxi0lfK4DVbDU8fCB
         7SSj3hjSNKUiKXoru2FR7f419MN6W+im/71KKojGYL0cZGVx9T8qfq/u5I/8Gci906b6
         89pUrCo1YZTHmjJN5Bt30QYA39pev/D9UpU8OZo43gyhFB6G2iKDmkDiOTeGlQi0zMma
         p7g++LlgGx4G9nV66BadmzfLl5pKZjSQkIPz/SSMag3fs1yXY4Z4uRtADw3Y/KfUQfhR
         17ikA74qNehab05VmdIhu14ceLBRO4+E4aDvzjRL3/myaxSVgP1RPQ72MHkI9JoHi9Ql
         mywg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719403431; x=1720008231;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tZO/76Y9olr8ovq6vNDsmdXHtHSiBbUHFWBc3FZmZwE=;
        b=lpKe57WIGQ0o0ennuRXA3GZLiW5j7ma2mUr2DU4fZfLZOUW7OWvucQfp4CfFuHj/IN
         k7pEkSiw1kQEC7GCvdRTsoPAEu4Y6eBA5bJ6uScUmPcUfWfqYDD0zA6FoXSr6l9gzL7g
         kbDoS8aRuL7ohmtyJ1E9K3oivyvFNeDDzBJxm64q7fUNZpbH1Qe2vql/puEZNfra+Cuu
         ygVBpr1M4arZZrUSMCbAxBgaAmFgzaJvmYv1U7AMZZPFMVKI1addnXqk1UXUJ0SIChsl
         etrvQcfVF2c2xybSYFDfz7KLqO015Pt4i9Csh9Bu13J5uUiDaeQmciUhzdKM9ntDQrhe
         Wrug==
X-Forwarded-Encrypted: i=1; AJvYcCWzMEKWkuXOFfUHlcWp43STEWSa7Xu05P+UKRv/XoebBUdpvqIzdsrehbkX1Ll87WR9KdqZRxhbM/kksWvl9MVwDHd937pqiIIo3OeCb+U=
X-Gm-Message-State: AOJu0YwXBYR8623+n5tHq9e9E4FVQli+79nl4AasCgbwS9cN9iF6L6tZ
	B0O8ysI77s+8s2wri01DGMYVKw4mzft3bOy9866CqI8b78o7zhVQWq4KjPwIjA==
X-Google-Smtp-Source: AGHT+IGko7qA0T0N3eT81HLLZG6/J2IJOHMPen0/SwFxmLKYCReQrqzjm8nIAStrCodP/Ipe+cKWMw==
X-Received: by 2002:a2e:874f:0:b0:2ec:55fd:f118 with SMTP id 38308e7fff4ca-2ec57967b67mr65020081fa.1.1719403429701;
        Wed, 26 Jun 2024 05:03:49 -0700 (PDT)
Message-ID: <38c35f34-a55c-4647-874e-9d88fa899a58@suse.com>
Date: Wed, 26 Jun 2024 14:03:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 4/6] xen/bitops: Introduce for_each_set_bit()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-5-andrew.cooper3@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <20240625190719.788643-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 25.06.2024 21:07, Andrew Cooper wrote:
> --- a/xen/include/xen/bitops.h
> +++ b/xen/include/xen/bitops.h
> @@ -56,6 +56,16 @@ static always_inline __pure unsigned int ffs64(uint64_t x)
>          return !x || (uint32_t)x ? ffs(x) : ffs(x >> 32) + 32;
>  }
>  
> +/*
> + * A type-generic ffs() which picks the appropriate ffs{,l,64}() based on its
> + * argument.
> + */
> +#define ffs_g(x)                                        \
> +    sizeof(x) <= sizeof(int) ? ffs(x) :                 \
> +        sizeof(x) <= sizeof(long) ? ffsl(x) :           \
> +        sizeof(x) <= sizeof(uint64_t) ? ffs64(x) :      \
> +        ({ BUILD_ERROR("ffs_g() Bad input type"); 0; })

I guess the lack of parentheses around the entire construct might be the
reason for the problems you're seeing, as that ...

> @@ -92,6 +102,20 @@ static always_inline __pure unsigned int fls64(uint64_t x)
>      }
>  }
>  
> +/*
> + * for_each_set_bit() - Iterate over all set bits in a scalar value.
> + *
> + * @iter An iterator name.  Its scope is within the loop only.
> + * @val  A scalar value to iterate over.
> + *
> + * A copy of @val is taken internally.
> + */
> +#define for_each_set_bit(iter, val)                     \
> +    for ( typeof(val) __v = (val); __v; )               \
> +        for ( unsigned int (iter);                      \
> +              __v && ((iter) = ffs_g(__v) - 1, true);   \

... affects what effect the "- 1" here has.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 12:09:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 12:09:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748793.1156654 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMRSb-0003lJ-0l; Wed, 26 Jun 2024 12:09:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748793.1156654; Wed, 26 Jun 2024 12:09:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMRSa-0003lC-UO; Wed, 26 Jun 2024 12:09:12 +0000
Received: by outflank-mailman (input) for mailman id 748793;
 Wed, 26 Jun 2024 12:09:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YERW=N4=arm.com=robin.murphy@srs-se1.protection.inumbo.net>)
 id 1sMRSa-0003l6-0m
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 12:09:12 +0000
Received: from foss.arm.com (foss.arm.com [217.140.110.172])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTP
 id e41ebfab-33b4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 14:09:09 +0200 (CEST)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 217C7339;
 Wed, 26 Jun 2024 05:09:33 -0700 (PDT)
Received: from [10.57.74.5] (unknown [10.57.74.5])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C88973F73B;
 Wed, 26 Jun 2024 05:09:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e41ebfab-33b4-11ef-b4bb-af5377834399
Message-ID: <c4dc539b-a71b-4323-aa31-b97b39c633a8@arm.com>
Date: Wed, 26 Jun 2024 13:09:05 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC PATCH v2] iommu/xen: Add Xen PV-IOMMU driver
To: Teddy Astie <teddy.astie@vates.tech>, xen-devel@lists.xenproject.org,
 iommu@lists.linux.dev
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech>
 <da3ec316-b001-4711-b323-70af3e6bb014@arm.com>
 <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>
From: Robin Murphy <robin.murphy@arm.com>
Content-Language: en-GB
In-Reply-To: <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 2024-06-24 3:36 pm, Teddy Astie wrote:
> Hello Robin,
> Thanks for the thorough review.
> 
>>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>>> index 0af39bbbe3a3..242cefac77c9 100644
>>> --- a/drivers/iommu/Kconfig
>>> +++ b/drivers/iommu/Kconfig
>>> @@ -480,6 +480,15 @@ config VIRTIO_IOMMU
>>>          Say Y here if you intend to run this kernel as a guest.
>>> +config XEN_IOMMU
>>> +    bool "Xen IOMMU driver"
>>> +    depends on XEN_DOM0
>>
>> Clearly this depends on X86 as well.
>>
> Well, I don't intend this driver to be X86-only, even though the current
> Xen RFC doesn't support ARM (yet). Unless there is a reason not to
> support it?

It's purely practical - even if you drop the asm/iommu.h stuff it would 
still break ARM DOM0 builds due to HYPERVISOR_iommu_op() only being 
defined for x86. And it's better to add a dependency here to make it 
clear what's *currently* supported, than to add dummy code to allow it 
to build for ARM if that's not actually tested or usable yet.

>>> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
>>> +{
>>> +    switch (cap) {
>>> +    case IOMMU_CAP_CACHE_COHERENCY:
>>> +        return true;
>>
>> Will the PV-IOMMU only ever be exposed on hardware where that really is
>> always true?
>>
> 
> On the hypervisor side, the PV-IOMMU interface always implicitly flushes
> the IOMMU hardware on map/unmap operations, so at the end of the
> hypercall the cache should always be coherent, IMO.

As Jason already brought up, this is not about TLBs or anything cached 
by the IOMMU itself, it's about the memory type(s) it can create 
mappings with. Returning true here says Xen guarantees it can use a 
cacheable memory type which will let DMA snoop the CPU caches. 
Furthermore, not explicitly handling IOMMU_CACHE in the map_pages op 
then also implies that it will *always* do that, so you couldn't 
actually get an uncached mapping even if you wanted one.

>>> +    while (xen_pg_count) {
>>> +        size_t to_unmap = min(xen_pg_count, max_nr_pages);
>>> +
>>> +        //pr_info("Unmapping %lx-%lx\n", dfn, dfn + to_unmap - 1);
>>> +
>>> +        op.unmap_pages.dfn = dfn;
>>> +        op.unmap_pages.nr_pages = to_unmap;
>>> +
>>> +        ret = HYPERVISOR_iommu_op(&op);
>>> +
>>> +        if (ret)
>>> +            pr_warn("Unmap failure (%lx-%lx)\n", dfn, dfn + to_unmap
>>> - 1);
>>
>> But then how
>>> would it ever happen anyway? Unmap is a domain op, so a domain which
>>> doesn't allow unmapping shouldn't offer it in the first place...
> 
> Unmap failing should be exceptional, but it is possible, e.g. with
> transparent superpages (as the Xen IOMMU drivers use). The Xen drivers
> fold suitable contiguous mappings into superpage entries to optimize
> memory usage and the IOTLB. However, if you unmap in the middle of a
> region covered by a superpage entry, that entry is no longer valid, and
> you need to allocate and fill the lower levels, which can fail when
> memory is short.

OK, so in the worst case you could potentially have a partial unmap 
failure if the range crosses a superpage boundary and the end part 
happens to have been folded, and Xen doesn't detect and prepare that 
allocation until it's already unmapped up to the boundary. If that is 
so, does the hypercall interface give any information about partial 
failure, or can any error only be taken to mean that some or all of the 
given range may or may not have been unmapped now?
>> In this case I'd argue that you really *do* want to return short, in the
>> hope of propagating the error back up and letting the caller know the
>> address space is now messed up before things start blowing up even more
>> if they keep going and subsequently try to map new pages into
>> not-actually-unmapped VAs.
> 
> While mapping on top of another mapping is ok for us (it's just going to
> override the previous mapping), I definitely agree that having the
> address space messed up is not good.

Oh, indeed, quietly replacing existing PTEs might help paper over errors 
in this particular instance, but it does then allow *other* cases to go 
wrong in fun and infuriating ways :)

>>> +static struct iommu_domain default_domain = {
>>> +    .ops = &(const struct iommu_domain_ops){
>>> +        .attach_dev = default_domain_attach_dev
>>> +    }
>>> +};
>>
>> Looks like you could make it a static xen_iommu_domain and just use the
>> normal attach callback? Either way please name it something less
>> confusing like xen_iommu_identity_domain - "default" is far too
>> overloaded round here already...
>>
> 
> Yes, although, if in the future, we can have either this domain as
> identity or blocking/paging depending on some upper level configuration.
> Should we have both identity and blocking domains, and only setting the
> relevant one in iommu_ops, or keep this naming.

That's something that can be considered if and when it does happen. For 
now, if it's going to be pre-mapped as an identity domain, then let's 
just treat it as such and keep things straightforward.

>>> +void __exit xen_iommu_fini(void)
>>> +{
>>> +    pr_info("Unregistering Xen IOMMU driver\n");
>>> +
>>> +    iommu_device_unregister(&xen_iommu_device);
>>> +    iommu_device_sysfs_remove(&xen_iommu_device);
>>> +}
>>
>> This is dead code since the Kconfig is only "bool". Either allow it to
>> be an actual module (and make sure that works), or drop the pretence
>> altogether.
>>
> 
> Ok, I thought this function was actually a requirement even if it is not
> a module.

No, quite the opposite - even code which is modular doesn't have to 
support removal if it doesn't want to.

Thanks,
Robin.


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 12:41:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 12:41:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748818.1156728 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMRxK-0001nT-1m; Wed, 26 Jun 2024 12:40:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748818.1156728; Wed, 26 Jun 2024 12:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMRxJ-0001nM-V2; Wed, 26 Jun 2024 12:40:57 +0000
Received: by outflank-mailman (input) for mailman id 748818;
 Wed, 26 Jun 2024 12:40:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sMRxJ-0001nG-5d
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 12:40:57 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMRxI-0006qH-Sp; Wed, 26 Jun 2024 12:40:56 +0000
Received: from [15.248.3.89] (helo=[10.24.67.25])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMRxI-0006K4-Lo; Wed, 26 Jun 2024 12:40:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=PnNMRuBhrdbLKOc4Ds57guaDpmQNCjyUAiI4EWEl0rg=; b=STDnd4XxhvOJvwDm6hLtBQttN5
	VjeYnONHHR8oFnD/OI5ORmfBWyS9zu+D06xiFpSIpvTEk7xFrJInfsSeMWWISM+WdCpUEhGr6yiqz
	WWrqKjNdd9DoY418OZrIVvhj4LRF6ZTMy6oENTkYx8tAQOmEW6gGTJDU+/6gpkMlZ1Ms=;
Message-ID: <9b6819fd-fd76-4249-b1f9-5afb372dd1e1@xen.org>
Date: Wed, 26 Jun 2024 13:40:54 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
Content-Language: en-GB
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <20240621191434.5046-2-tamas@tklengyel.com>
 <6f94d071-f90f-485d-a8aa-a0c8a726ce34@xen.org>
 <CABfawhkCJv1oQ4+_bBHf_ys1=gtmFVT-Zn7UeYDLaSm9KQqgcA@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CABfawhkCJv1oQ4+_bBHf_ys1=gtmFVT-Zn7UeYDLaSm9KQqgcA@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Tamas,

On 24/06/2024 23:18, Tamas K Lengyel wrote:
> On Mon, Jun 24, 2024 at 5:58 PM Julien Grall <julien@xen.org> wrote:
>>
>> Hi,
>>
>> On 21/06/2024 20:14, Tamas K Lengyel wrote:
>>> The build integration script for oss-fuzz targets.
>>
>> Do you have any details how this is meant and/or will be used?
> 
> https://google.github.io/oss-fuzz/getting-started/new-project-guide/#buildsh
> 
>>
>> I also couldn't find a cover letter. For series with more than one
>> patch, it is recommended to have one as it helps threading and could also
>> give some insight on what you are aiming to do.
>>
>>>
>>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
>>> ---
>>>    scripts/oss-fuzz/build.sh | 22 ++++++++++++++++++++++
>>>    1 file changed, 22 insertions(+)
>>>    create mode 100755 scripts/oss-fuzz/build.sh
>>>
>>> diff --git a/scripts/oss-fuzz/build.sh b/scripts/oss-fuzz/build.sh
>>> new file mode 100755
>>> index 0000000000..48528bbfc2
>>> --- /dev/null
>>> +++ b/scripts/oss-fuzz/build.sh
>>
>> Depending on the answer above, we may want to consider to create the
>> directory oss-fuzz under automation or maybe tools/fuzz/.
> 
> I'm fine with moving it wherever.

What about tools/fuzz then? This is where all the tooling for fuzzing 
lives.

> 
>>
>>> @@ -0,0 +1,22 @@
>>> +#!/bin/bash -eu
>>> +# Copyright 2024 Google LLC
>>
>> I am a bit confused with this copyright. Is this script taken from
>> somewhere?
> 
> Yes, I took an existing build.sh from oss-fuzz,

It is unclear to me what is left from that "existing" build.sh. At least 
everything below seems to be Xen specific.

Anyway, if you want to give the copyright to Google then fair enough, 
but I think you want to use an Origin tag (or similar) to indicate the 
original copy.

>  it is recommended to
> have the more complex part of build.sh as part of the upstream
> repository so that additional targets/fixes can be merged there
> instead of opening PRs on oss-fuzz directly. With this setup the
> build.sh I merge to oss-fuzz will just use this build.sh in the Xen
> repository. See
> https://github.com/tklengyel/oss-fuzz/commit/552317ae9d24ef1c00d87595516cc364bc33b662.
> 
>>
>>> +#
>>> +# Licensed under the Apache License, Version 2.0 (the "License");
>>> +# you may not use this file except in compliance with the License.
>>> +# You may obtain a copy of the License at
>>> +#
>>> +#      http://www.apache.org/licenses/LICENSE-2.0
>>> +#
>>> +# Unless required by applicable law or agreed to in writing, software
>>> +# distributed under the License is distributed on an "AS IS" BASIS,
>>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>> +# See the License for the specific language governing permissions and
>>> +# limitations under the License.
>>> +#
>>> +################################################################################
>>> +
>>> +cd xen
>>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
>>
>> Looking at the help from ./configure, 'clang=y' is not mentioned and it
>> doesn't make any difference in the config.log. Can you clarify why this
>> was added?
> 
> Just throwing stuff at the wall till I was able to get a clang build.
> If it's indeed not needed I can remove it.
> 
>>
>>> +make clang=y -C tools/include
>>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
>>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
>>
>> Who will be defining $OUT?
> 
> oss-fuzz

Ok. Can you add a link to the documentation in build.sh? This would be 
helpful for the future reader to understand what $OUT really means.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 12:54:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 12:54:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748824.1156738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSAQ-0003ug-6S; Wed, 26 Jun 2024 12:54:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748824.1156738; Wed, 26 Jun 2024 12:54:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSAQ-0003uZ-3D; Wed, 26 Jun 2024 12:54:30 +0000
Received: by outflank-mailman (input) for mailman id 748824;
 Wed, 26 Jun 2024 12:54:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3iT=N4=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMSAO-0003uT-J7
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 12:54:28 +0000
Received: from mail-yw1-x1133.google.com (mail-yw1-x1133.google.com
 [2607:f8b0:4864:20::1133])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 37b925cc-33bb-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 14:54:26 +0200 (CEST)
Received: by mail-yw1-x1133.google.com with SMTP id
 00721157ae682-63bb0ff142cso65147147b3.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 05:54:26 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b51ef32f9csm54525796d6.91.2024.06.26.05.54.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 05:54:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37b925cc-33bb-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719406465; x=1720011265; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gX3ZqrjiIkChEaGWWU8955WHVk5SImB4mMGzS5YrY4E=;
        b=CIlanUa4O2SwPI44DGextiVBVboYoZOIiMywM4A+ClyL4qvt9Ldf698RseScxaksWV
         KddBi22GBocrO8jmhWbhlCj+fMjuO6eJileXh0yy+fFeq9apWYMB03sLpVu4xYyNFlFN
         Ban5zAe1VtqlRrd8i/QQUbrNBuuLpdFmSAbjM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719406465; x=1720011265;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gX3ZqrjiIkChEaGWWU8955WHVk5SImB4mMGzS5YrY4E=;
        b=RB0E/PO+TrS6j3xBPRgHQ2e9NJDpSVuOQVfFrJC9sCSc0Knoe//no27RMfb0hb3MUk
         Zc+ySIlt8B2FlsG9HL9J8tBx3WrYwbMDYJrFWSoacGK4Ym0xbp3ZlkR6aqhZ1Az25rCK
         XuzbZCODTcZ84Rf9RPy4Xt90XFkwk7oALwLiuAAgxXY/itbfRhKYb5pckwoRLYyReT+Y
         X3KA7dd9uKCsQSX9vPuG8Wbm3zlN1dc70BYUQtjDVoHI2Qj6eunUiKa5YegL8Is6q2b2
         yK34DkHZFw5eqnU29Sh2vaFly/+LMrl9i3rNjUPnTkQ+0kvp1aqMMC2kJIjx4sOaAGJK
         vH/A==
X-Forwarded-Encrypted: i=1; AJvYcCW6eWUqM88QBzIo01NQ8VxR0yZvBxm4D1BxIrMx1WZ7czbuquAN5ZLwLqcMRdSYe9oRx5hC4cnCe8tCRxMTS7qWJ5LQ+qVjCQlnoBmZAoA=
X-Gm-Message-State: AOJu0Ywqi2nC4rvbZ7Mkh40XJ56g7U/IGGXXaCYvw4toogvW2Pj1SSvf
	uPenM1ZuURuJLCjgkEPjVCXooTaG7bw0dRmpyK6yayc0YuZkhULs6LAdZBz5Hb8=
X-Google-Smtp-Source: AGHT+IHhm8+tpo7d506oS964vIzCHDLl158ZRWx5Lhs371xJB9EapGZIHvebkhPy/R5WIw4KJB/IWw==
X-Received: by 2002:a81:690b:0:b0:627:7f2a:3b0a with SMTP id 00721157ae682-6429aa00545mr99368287b3.31.1719406465268;
        Wed, 26 Jun 2024 05:54:25 -0700 (PDT)
Message-ID: <7d1e7357-6dd3-43d5-9fa6-bdfab55a678c@citrix.com>
Date: Wed, 26 Jun 2024 13:54:22 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/6] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more
 efficient
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-2-andrew.cooper3@citrix.com>
 <fc04af37-6ef6-4c91-a625-d541f9f9bfe5@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <fc04af37-6ef6-4c91-a625-d541f9f9bfe5@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/06/2024 10:49 am, Jan Beulich wrote:
> On 25.06.2024 21:07, Andrew Cooper wrote:
>> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
>> the CPU and dirties it even if there's nothing outstanding, but the final
>> for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
>> atomic updates to the same IRR word.
> The way it's worded (grammar wise) it appears as if the 2nd issue is missing
> from this description. Perhaps you meant to break the sentence at "but" (and
> re-word a little what follows), which feels a little unmotivated to me (as a
> non-native speaker, i.e. may not mean anything) anyway? Or maybe something
> simply got lost in the middle?

You're right - the grammar is not great.  I'll rework it.

>
>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>> @@ -2321,18 +2321,63 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>>  
>>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>>  {
>> -    struct vlapic *vlapic = vcpu_vlapic(v);
>> -    unsigned int group, i;
>> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
>> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
>> +    union {
>> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
>> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
>> +    } vec;
>> +    uint32_t *irr;
>> +    bool on;
>>  
>> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
>> +    /*
>> +     * The PIR is a contended cacheline which bounces between the CPU and
>> +     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
>> +     * express the same on the CPU side, so care has to be taken.
>> +     *
>> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
>> +     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
>> +     */
>> +    if ( !pi_test_on(desc) )
>>          return;
>>  
>> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
>> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
>> +    /*
>> +     * Second, if the plain read said that ON was set, we must clear it with
>> +     * an atomic action.  This will bring the cacheline to Exclusive on the
>> +     * CPU.
>> +     *
>> +     * This should always succeed because no one else should be playing with
>> +     * the PIR behind our back, but assert so just in case.
>> +     */
> Isn't "playing with" more strict than what is the case, and what we need
> here? Aiui nothing should _clear this bit_ behind our back, while PIR
> covers more than just this one bit, and the bit may also become reset
> immediately after we cleared it.

The IOMMU, or another CPU forwarding an IPI, strictly sets ON, and this CPU
(either the logic here, or microcode when in non-root mode) strictly
clears it.

But it is ON specifically that we care about, so I'll make that more clear.

>
>> +    on = pi_test_and_clear_on(desc);
>> +    ASSERT(on);
>>  
>> -    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
>> -        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
>> +    /*
>> +     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
>> +     * (via ON being set) that at least one vector is pending too.
> This isn't quite correct aiui, and hence perhaps better not to state it
> exactly like this: While we're ...
>
>>  Atomically
>> +     * read and clear the entire pending bitmap as fast as we can, to reduce the
>> +     * window that the IOMMU may steal the cacheline back from us.
>> +     *
>> +     * It is a performance concern, but not a correctness concern.  If the
>> +     * IOMMU does steal the cacheline back, we'll just wait to get it back
>> +     * again.
>> +     */
>> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>> +        vec._64[i] = xchg(&desc->pir[i], 0);
> ... still ahead of or in this loop, new bits may become set which we then
> may handle right away. The "on" indication on the next entry into this
> logic may then be misleading, as we may not find any set bit.

Hmm.  Yes.

The IOMMU atomically swaps the entire cacheline in one go, so won't
produce this state.  However, the SDM algorithm for consuming it says
specifically:

1) LOCK AND to clear ON, leaving everything else unmodified
2) 256-bit-granular read&0 PIR, merging into VIRR

Which, now that I think about it, is almost certainly because the Atom cores
only have a 256-bit datapath.

Another Xen CPU sending an IPI sideways will have to do:

    set_bit(pir[vec])
    set_bit(ON)
    IPI(delivery vector)

in this order for everything to work.

ON is the signal for "go and scan the PIR", and (other than exact
ordering races during delivery) means that there is a bit set in PIR.

I'll see if I can make this clearer.
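For the avoidance of doubt, the ordering above can be modelled stand-alone
with C11 atomics.  This is purely an illustrative sketch, not Xen code: the
struct layout, the 4x64-bit split of the 256 PIR bits, and all names here
are assumptions for the demo, and the delivery-notification IPI is elided.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the posted-interrupt descriptor. */
struct pi_desc {
    _Atomic uint64_t pir[4];   /* 256 posted-interrupt request bits */
    _Atomic uint32_t control;  /* bit 0 = ON, "outstanding notification" */
};

#define PI_ON 1u

/* Sender side: vector bit first, then ON, then (elided) the IPI. */
static void post_intr(struct pi_desc *d, unsigned int vec)
{
    atomic_fetch_or(&d->pir[vec / 64], 1ull << (vec % 64));
    atomic_fetch_or(&d->control, PI_ON);
    /* ... send delivery-notification IPI here ... */
}

/*
 * Consumer side, mirroring the patch: plain read of ON (keeps the line
 * Shared if clear), atomic clear of ON, then xchg each PIR word to 0.
 * The real code merges the result into the vLAPIC IRR; here we just
 * hand the pending bitmap back to the caller.
 */
static bool sync_pir(struct pi_desc *d, uint64_t vec[4])
{
    if ( !(atomic_load_explicit(&d->control, memory_order_relaxed) & PI_ON) )
        return false;

    atomic_fetch_and(&d->control, ~PI_ON);

    for ( unsigned int i = 0; i < 4; ++i )
        vec[i] = atomic_exchange(&d->pir[i], 0);

    return true;
}
```

Posting vector 0x41 and then syncing yields bit 1 set in word 1 of the
returned bitmap, and a second sync finds ON clear and returns early.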


> All the code changes look good to me, otoh.

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:04:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:04:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748833.1156751 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSJk-00061I-8y; Wed, 26 Jun 2024 13:04:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748833.1156751; Wed, 26 Jun 2024 13:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSJk-00061B-5F; Wed, 26 Jun 2024 13:04:08 +0000
Received: by outflank-mailman (input) for mailman id 748833;
 Wed, 26 Jun 2024 13:04:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3iT=N4=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMSJj-000615-1L
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:04:07 +0000
Received: from mail-qk1-x733.google.com (mail-qk1-x733.google.com
 [2607:f8b0:4864:20::733])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 908c8899-33bc-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:04:05 +0200 (CEST)
Received: by mail-qk1-x733.google.com with SMTP id
 af79cd13be357-7971a9947e6so339604085a.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:04:04 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce89be42sm496583585a.10.2024.06.26.06.04.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 06:04:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 908c8899-33bc-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719407044; x=1720011844; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=sqx3xoosM3OPw+JgHJO3dZLPIAwc0bXSVXwA1Jj3cZQ=;
        b=rz0nhOSKvDhYgjm9fTjlxqvdoSZDjck1FZ+rYqUta4Rgw5aTVndeZyXzO4WjhGiCZf
         7sILDUYtBAlaP0fw83SK6rw5/hsK1/T13xxhEHcMk1eoAOUJstbe1TjiifwC3Fp6DOw8
         wFBcFh3SAwjgyaXIvihklMmWh6agJNjXfEgbk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719407044; x=1720011844;
        h=content-transfer-encoding:in-reply-to:autocrypt:content-language
         :references:cc:to:from:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=sqx3xoosM3OPw+JgHJO3dZLPIAwc0bXSVXwA1Jj3cZQ=;
        b=KnqrWyKu4S1l6TQnW1Lqur6B3rw4xOkYxAG/ttc0AEn1DPK2y+5HxrqIdB/xRDTVXO
         wizEzx/1+XTku9/pN4AFNYh8Tg+dX+srbNsAtv/ZWhp6B8EoesNsMdagpP3n13qGjr/d
         6s9F5fcKPglnYHh5gudoBAShCugciehA8y68U45JLCKVowYysxpyw9NHbMJb7cObCzhH
         lQhxFEhn/rgkDMExbPW8e88wAzm9BmNFIVBsQCZENZwoZ2mkHTeqj459pHjH/61NHHkQ
         Ou+vDVTEasXcutq0kS50+QdVaz6zZ4NINCeV9trFVYgwxWUneHKLiDH9qoPPxLZYgRs6
         Gs3Q==
X-Forwarded-Encrypted: i=1; AJvYcCWVtyAeNrvC5ddJZFgRFXIGRhDk2hhUsonti7zpcQvLcZysHEN1DO6N8dVMwn+QTq4uki2/J5Q9XCVuSl7XS3UVsPGoEOU+2lBXT2BMW+Q=
X-Gm-Message-State: AOJu0YwLSomQD8fe5NsN0juiKVfebK53vI88BnKzjMpZnWVu1+fwCjtx
	/4COViL3kd37Ev1WDhOnTY688/s2R6f3ruUzzmc/f8jeQazNepeqIBIaq5ZZlbg=
X-Google-Smtp-Source: AGHT+IFVAI8k71g3JD19w6DkCdUNY7lFfv9fcBHXGu0Ye51o1MVhQUGg4TMdqPLSb9ShACo0zptgSQ==
X-Received: by 2002:a05:620a:28c7:b0:79c:c0b:4d with SMTP id af79cd13be357-79c0c0b032dmr465301285a.61.1719407043777;
        Wed, 26 Jun 2024 06:04:03 -0700 (PDT)
Message-ID: <331d5ee5-4990-47f2-b8de-77365b308796@citrix.com>
Date: Wed, 26 Jun 2024 14:04:00 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/6] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more
 efficient
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-2-andrew.cooper3@citrix.com>
 <fc04af37-6ef6-4c91-a625-d541f9f9bfe5@suse.com>
 <7d1e7357-6dd3-43d5-9fa6-bdfab55a678c@citrix.com>
Content-Language: en-GB
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <7d1e7357-6dd3-43d5-9fa6-bdfab55a678c@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/06/2024 1:54 pm, Andrew Cooper wrote:
> On 26/06/2024 10:49 am, Jan Beulich wrote:
>> On 25.06.2024 21:07, Andrew Cooper wrote:
>>> There are two issues.  First, pi_test_and_clear_on() pulls the cache-line to
>>> the CPU and dirties it even if there's nothing outstanding, but the final
>>> for_each_set_bit() is O(256) when O(8) would do, and would avoid multiple
>>> atomic updates to the same IRR word.
>> The way it's worded (grammar wise) it appears as if the 2nd issue is missing
>> from this description. Perhaps you meant to break the sentence at "but" (and
>> re-word a little what follows), which feels a little unmotivated to me (as a
>> non-native speaker, i.e. may not mean anything) anyway? Or maybe something
>> simply got lost in the middle?
> You're right - the grammar is not great.  I'll rework it.
>
>>> --- a/xen/arch/x86/hvm/vmx/vmx.c
>>> +++ b/xen/arch/x86/hvm/vmx/vmx.c
>>> @@ -2321,18 +2321,63 @@ static void cf_check vmx_deliver_posted_intr(struct vcpu *v, u8 vector)
>>>  
>>>  static void cf_check vmx_sync_pir_to_irr(struct vcpu *v)
>>>  {
>>> -    struct vlapic *vlapic = vcpu_vlapic(v);
>>> -    unsigned int group, i;
>>> -    DECLARE_BITMAP(pending_intr, X86_NR_VECTORS);
>>> +    struct pi_desc *desc = &v->arch.hvm.vmx.pi_desc;
>>> +    union {
>>> +        uint64_t _64[X86_NR_VECTORS / (sizeof(uint64_t) * 8)];
>>> +        uint32_t _32[X86_NR_VECTORS / (sizeof(uint32_t) * 8)];
>>> +    } vec;
>>> +    uint32_t *irr;
>>> +    bool on;
>>>  
>>> -    if ( !pi_test_and_clear_on(&v->arch.hvm.vmx.pi_desc) )
>>> +    /*
>>> +     * The PIR is a contended cacheline which bounces between the CPU and
>>> +     * IOMMU.  The IOMMU updates the entire PIR atomically, but we can't
>>> +     * express the same on the CPU side, so care has to be taken.
>>> +     *
>>> +     * First, do a plain read of ON.  If the PIR hasn't been modified, this
>>> +     * will keep the cacheline Shared and not pull it Exclusive on the CPU.
>>> +     */
>>> +    if ( !pi_test_on(desc) )
>>>          return;
>>>  
>>> -    for ( group = 0; group < ARRAY_SIZE(pending_intr); group++ )
>>> -        pending_intr[group] = pi_get_pir(&v->arch.hvm.vmx.pi_desc, group);
>>> +    /*
>>> +     * Second, if the plain read said that ON was set, we must clear it with
>>> +     * an atomic action.  This will bring the cacheline to Exclusive on the
>>> +     * CPU.
>>> +     *
>>> +     * This should always succeed because no one else should be playing with
>>> +     * the PIR behind our back, but assert so just in case.
>>> +     */
>> Isn't "playing with" more strict than what is the case, and what we need
>> here? Aiui nothing should _clear this bit_ behind our back, while PIR
>> covers more than just this one bit, and the bit may also become reset
>> immediately after we cleared it.
> The IOMMU, or another CPU forwarding an IPI, strictly sets ON, and this CPU
> (either the logic here, or microcode when in non-root mode) strictly
> clears it.
>
> But it is ON specifically that we care about, so I'll make that more clear.
>
>>> +    on = pi_test_and_clear_on(desc);
>>> +    ASSERT(on);
>>>  
>>> -    for_each_set_bit(i, pending_intr, X86_NR_VECTORS)
>>> -        vlapic_set_vector(i, &vlapic->regs->data[APIC_IRR]);
>>> +    /*
>>> +     * The cacheline is now Exclusive on the CPU, and the IOMMU has indicated
>>> +     * (via ON being set) that at least one vector is pending too.
>> This isn't quite correct aiui, and hence perhaps better not to state it
>> exactly like this: While we're ...
>>
>>>  Atomically
>>> +     * read and clear the entire pending bitmap as fast as we can, to reduce the
>>> +     * window that the IOMMU may steal the cacheline back from us.
>>> +     *
>>> +     * It is a performance concern, but not a correctness concern.  If the
>>> +     * IOMMU does steal the cacheline back, we'll just wait to get it back
>>> +     * again.
>>> +     */
>>> +    for ( unsigned int i = 0; i < ARRAY_SIZE(vec._64); ++i )
>>> +        vec._64[i] = xchg(&desc->pir[i], 0);
>> ... still ahead of or in this loop, new bits may become set which we then
>> may handle right away. The "on" indication on the next entry into this
>> logic may then be misleading, as we may not find any set bit.
> Hmm.  Yes.
>
> The IOMMU atomically swaps the entire cacheline in one go, so won't
> produce this state.  However, the SDM algorithm for consuming it says
> specifically:
>
> 1) LOCK AND to clear ON, leaving everything else unmodified
> 2) 256-bit-granular read&0 PIR, merging into VIRR
>
> Which, now that I think about it, is almost certainly because the Atom cores
> only have a 256-bit datapath.
>
> Another Xen CPU sending an IPI sideways will have to do:
>
>     set_bit(pir[vec])
>     set_bit(ON)
>     IPI(delivery vector)
>
> in this order for everything to work.
>
> ON is the signal for "go and scan the PIR", and (other than exact
> ordering races during delivery) means that there is a bit set in PIR.
>
> I'll see if I can make this clearer.
>
>
>> All the code changes look good to me, otoh.
> Thanks.

One final thing.

This logic here depends on interrupts not being enabled between these
atomic actions, and entering non-root mode.

Specifically, Xen must not service a pending delivery-notification
vector between this point and the VMEntry microcode repeating the same
scan on the PIR Descriptor.

Getting this wrong means that we'll miss the delivery of vectors which
arrive between here and the next time something causes a
delivery-notification vector to be sent.

However, I've got no idea how to reasonably express this with
assertions.  We could in principle have a per-cpu "mustn't enable
interrupts" flag, checked in local_irq_enable/restore(), but it only
works in HVM context, and gets too messy IMO.
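Purely to illustrate the shape of such a (rejected) debug flag, here is a
hypothetical sketch.  None of these names exist in Xen; the thread-local
stands in for a real per-cpu variable, and the bool stands in for the real
EFLAGS.IF state:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-CPU "must not re-enable interrupts" flag. */
static _Thread_local bool irq_enable_forbidden;
static _Thread_local bool irqs_on = true;

static void local_irq_disable(void) { irqs_on = false; }

static void local_irq_enable(void)
{
    /* The check described above: trip if anything re-enables IRQs
     * inside the PIR-sync-to-VMEntry window. */
    assert(!irq_enable_forbidden);
    irqs_on = true;
}

/* Bracket the window between vmx_sync_pir_to_irr() and VMEntry. */
static void pir_sync_window_enter(void) { irq_enable_forbidden = true; }
static void pir_sync_window_exit(void)  { irq_enable_forbidden = false; }
```

Even as a sketch it shows the messiness: every exit path from the window
has to remember to clear the flag, and the check is meaningless outside
HVM vcpu context.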

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:16:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:16:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748840.1156761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSVh-00086g-AF; Wed, 26 Jun 2024 13:16:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748840.1156761; Wed, 26 Jun 2024 13:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSVh-00086Z-74; Wed, 26 Jun 2024 13:16:29 +0000
Received: by outflank-mailman (input) for mailman id 748840;
 Wed, 26 Jun 2024 13:16:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMSVg-00086T-0k
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:16:28 +0000
Received: from mail-lj1-x230.google.com (mail-lj1-x230.google.com
 [2a00:1450:4864:20::230])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4a8bb9a9-33be-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:16:26 +0200 (CEST)
Received: by mail-lj1-x230.google.com with SMTP id
 38308e7fff4ca-2eaea28868dso88522231fa.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:16:26 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1f9eb2f0406sm98763955ad.37.2024.06.26.06.16.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 06:16:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a8bb9a9-33be-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719407785; x=1720012585; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=KjxqHsGx4EEy2APqZvmGLSEquu3aI3gQZaw2aQDpMoo=;
        b=Tnx/VOIL8qD7md3gn1hQWeJrMtSUy91l+1Cptilp4Ys1MRCxEP67KuyGyHPFtEapff
         liGbhk9fgRNy+UvDQDSOe0/9S/PAO1G0f0gon6JqDOKHP2PuvbYLbCuNfmur0jw6QtcA
         AzJeqLhhJV9q3y3fbC59TTv/0hR8yzUwIMCS2bkUFLMkx2xEH02/JNvwOCBRqwxCryjq
         P3wWqeBU7SQAcvcksHM4sbmGpCGooC6tZqinhJ4xeQere6CGii3P/olkH+TCqx+noa4L
         wnFhUXd76KR0pFfqckmm+gFXNRkQF4jrFQo+cnPwexa7SgIVM4BsBEZcJm2Ep41UuuTk
         3J9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719407785; x=1720012585;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KjxqHsGx4EEy2APqZvmGLSEquu3aI3gQZaw2aQDpMoo=;
        b=KWJNzP8ow3AfsQ+OgxNr6TFQvIMUKw/ri+X4aBRht9IVmLB5qJwUbiXX0L/bEXeM36
         l061mEZy7iIhyCx8IRM3WW6qbss1vV2Vnnt+v44KaTzSHj4q3CF9u9BtyElT3TyCdiKi
         2N08xQCdSoSTVUw4nzA1ImizQ/rhzVtF6Pj0GRpibYehBz6ldqyctS+8gf3r0M7FCU3k
         54+/eUDMfwN9UxVz/j39CPOuFbQu0Q5PP/H4xI3ic/cvTRNXlt5rKEyF55VZ4wXNMDrK
         gTfbQx2zKcAu1IjNpoV4hT8gwZBbtuhYc3XV2X6cBhQ8EGkx/pgJRxKevNiwPealHOsI
         esEg==
X-Forwarded-Encrypted: i=1; AJvYcCVIK26ZGqZnW9Kwqut0iR61FOabVKIdLkzgUNZed4ankrPCrPZgbCf15iE+1amCBJ5F/Go1AG7BOaEZ538b9w+hv4tVIFE6PDwbfQxzkQw=
X-Gm-Message-State: AOJu0YzJAWD9bCmOu2QV0JyWFauhQgvpXcPXmXLrh86orCN8S3wuKtXk
	4SreKva7FlCMlnUdG0Czqjn49c4cmJS08hNxcEiXULI6hc9hCeA1QeGWKytpFw==
X-Google-Smtp-Source: AGHT+IHnscVT9HmwrFandVvBo7fBagNu7TJODjf5LeidjZ0CbdCGWqHQZs4wi8Ieu4B9UlhUMLWtMA==
X-Received: by 2002:a2e:9054:0:b0:2ed:59af:ecb4 with SMTP id 38308e7fff4ca-2ed59afee82mr17184631fa.41.1719407784496;
        Wed, 26 Jun 2024 06:16:24 -0700 (PDT)
Message-ID: <8f7290de-b31a-4d10-a28e-50707ce612a0@suse.com>
Date: Wed, 26 Jun 2024 15:16:15 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 1/6] x86/vmx: Rewrite vmx_sync_pir_to_irr() to be more
 efficient
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-2-andrew.cooper3@citrix.com>
 <fc04af37-6ef6-4c91-a625-d541f9f9bfe5@suse.com>
 <7d1e7357-6dd3-43d5-9fa6-bdfab55a678c@citrix.com>
 <331d5ee5-4990-47f2-b8de-77365b308796@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <331d5ee5-4990-47f2-b8de-77365b308796@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26.06.2024 15:04, Andrew Cooper wrote:
> One final thing.
> 
> This logic here depends on interrupts not being enabled between these
> atomic actions, and entering non-root mode.
> 
> Specifically, Xen must not service a pending delivery-notification
> vector between this point and the VMEntry microcode repeating the same
> scan on the PIR Descriptor.
> 
> Getting this wrong means that we'll miss the delivery of vectors which
> arrive between here and the next time something causes a
> delivery-notification vector to be sent.
> 
> However, I've got no idea how to reasonably express this with
> assertions.  We could in principle have a per-cpu "mustn't enable
> interrupts" flag, checked in local_irq_enable/restore(), but it only
> works in HVM context, and gets too messy IMO.

I agree. It's also nothing this patch changes; it was already like this
before. If and when we can think of a good way of expressing it, then
surely we could improve things here.

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:20:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748846.1156771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSa0-0001go-QY; Wed, 26 Jun 2024 13:20:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748846.1156771; Wed, 26 Jun 2024 13:20:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSa0-0001gh-N9; Wed, 26 Jun 2024 13:20:56 +0000
Received: by outflank-mailman (input) for mailman id 748846;
 Wed, 26 Jun 2024 13:20:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xmf9=N4=tklengyel.com=tamas@srs-se1.protection.inumbo.net>)
 id 1sMSZz-0001gb-Hf
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:20:55 +0000
Received: from sender4-op-o12.zoho.com (sender4-op-o12.zoho.com
 [136.143.188.12]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e95a2063-33be-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:20:53 +0200 (CEST)
Received: by mx.zohomail.com with SMTPS id 1719408048996609.3835241882795;
 Wed, 26 Jun 2024 06:20:48 -0700 (PDT)
Received: by mail-yb1-f176.google.com with SMTP id
 3f1490d57ef6-e033d34987cso276794276.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:20:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e95a2063-33be-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; t=1719408050; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=lKB7cx+6vzEndxXoVe/Zh0gbJrfzOAYvNQq9KLzK5A0QGa958S7kozHFmHHDwzJIMiaXQ8sr27Rsx9RCxQpSFUi27x/Qsy9nQf62IMC6+V5tWOZS6ZD2HDtcXHqzPlnv3bo1HhyGmZn2RylnXMoOVEZPks8051ZsfsA2Y84X2Eo=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1719408050; h=Content-Type:Content-Transfer-Encoding:Cc:Cc:Date:Date:From:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:Subject:To:To:Message-Id:Reply-To; 
	bh=kSgYtVbwzpmL/F9eU5ULVuSzWNb/EJ1xLiGkbR0wEqw=; 
	b=ILDcv3uMa419rf2hMjfUnxctla2uQaHPEyHJPOc5bCvNjag7A6chThC+An3B3X7FcHVwICADKzCRUpKMS88hI3jDkESO2sVhNO+Klf55GXIT8MFOSlXd4jGerU5+i8POOKspR1fE64n2Bw8hzHHIIZPv3EvAjzKruUGyiryATCc=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=tklengyel.com;
	spf=pass  smtp.mailfrom=tamas@tklengyel.com;
	dmarc=pass header.from=<tamas@tklengyel.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1719408050;
	s=zmail; d=tklengyel.com; i=tamas@tklengyel.com;
	h=MIME-Version:References:In-Reply-To:From:From:Date:Date:Message-ID:Subject:Subject:To:To:Cc:Cc:Content-Type:Content-Transfer-Encoding:Message-Id:Reply-To;
	bh=kSgYtVbwzpmL/F9eU5ULVuSzWNb/EJ1xLiGkbR0wEqw=;
	b=koCfRHKnD1kr4YNAN1vYsFBf8sMNCoS23UIKdsUmzVq9hKM4jEwhLFOovjsENC5g
	qNked3eHDmlkwRu5+KIFoxnPBvFrvlm5plRjORBTZWD63ypxTpae9ufj9Gkkl7WTS+z
	BfLe+nBALhUKdKJpOTX+8buabIdq7Gwv6ZjTpZEw=
X-Gm-Message-State: AOJu0YwVVmM0jmNqGvc+G1CGm8jZU+qAcL0QGaIkgL4kPQAB0Mg1CaFZ
	APk9btfip9f1NWnFj/gC4BJO7pZ8sUCxGVlj7qmihCAjRIDCs6L08FtWG7cg+zrwi63H8gYzY0g
	JZ3dnL/04E+WlWBAdJDDAMVwKZno=
X-Google-Smtp-Source: AGHT+IFC50d0T+fArAsVqMhtFR/7YElCTb/9gMbZwX91WQSC68WszDfhq1yWayru2go5wsyWnx6lLdXCsBz/KJ9b3pc=
X-Received: by 2002:a25:8548:0:b0:e03:22e5:cd49 with SMTP id
 3f1490d57ef6-e0322e5d178mr5189169276.39.1719408048069; Wed, 26 Jun 2024
 06:20:48 -0700 (PDT)
MIME-Version: 1.0
References: <20240621191434.5046-1-tamas@tklengyel.com> <20240621191434.5046-2-tamas@tklengyel.com>
 <6f94d071-f90f-485d-a8aa-a0c8a726ce34@xen.org> <CABfawhkCJv1oQ4+_bBHf_ys1=gtmFVT-Zn7UeYDLaSm9KQqgcA@mail.gmail.com>
 <9b6819fd-fd76-4249-b1f9-5afb372dd1e1@xen.org>
In-Reply-To: <9b6819fd-fd76-4249-b1f9-5afb372dd1e1@xen.org>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 26 Jun 2024 09:20:12 -0400
X-Gmail-Original-Message-ID: <CABfawhmeOn0g2y40_AxRcXQe9EMNJyXhqVtg9OoAYVSHwM37fQ@mail.gmail.com>
Message-ID: <CABfawhmeOn0g2y40_AxRcXQe9EMNJyXhqVtg9OoAYVSHwM37fQ@mail.gmail.com>
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Jun 26, 2024 at 8:41 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Tamas,
>
> On 24/06/2024 23:18, Tamas K Lengyel wrote:
> > On Mon, Jun 24, 2024 at 5:58 PM Julien Grall <julien@xen.org> wrote:
> >>
> >> Hi,
> >>
> >> On 21/06/2024 20:14, Tamas K Lengyel wrote:
> >>> The build integration script for oss-fuzz targets.
> >>
> >> Do you have any details how this is meant and/or will be used?
> >
> > https://google.github.io/oss-fuzz/getting-started/new-project-guide/#buildsh
> >
> >>
> >> I also couldn't find a cover letter. For series with more than one
> >> patch, it is recommended to have one as it helps threading and could also
> >> give some insight on what you are aiming to do.
> >>
> >>>
> >>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> >>> ---
> >>>    scripts/oss-fuzz/build.sh | 22 ++++++++++++++++++++++
> >>>    1 file changed, 22 insertions(+)
> >>>    create mode 100755 scripts/oss-fuzz/build.sh
> >>>
> >>> diff --git a/scripts/oss-fuzz/build.sh b/scripts/oss-fuzz/build.sh
> >>> new file mode 100755
> >>> index 0000000000..48528bbfc2
> >>> --- /dev/null
> >>> +++ b/scripts/oss-fuzz/build.sh
> >>
> >> Depending on the answer above, we may want to consider to create the
> >> directory oss-fuzz under automation or maybe tools/fuzz/.
> >
> > I'm fine with moving it wherever.
>
> What about tools/fuzz then? This is where are all the tooling for the
> fuzzing.
>
> >
> >>
> >>> @@ -0,0 +1,22 @@
> >>> +#!/bin/bash -eu
> >>> +# Copyright 2024 Google LLC
> >>
> >> I am a bit confused with this copyright. Is this script taken from
> >> somewhere?
> >
> > Yes, I took an existing build.sh from oss-fuzz,
>
> It is unclear to me what is left from that "existing" build.sh. At least
> everything below seems to be Xen specific.
>
> Anyway, if you want to give the copyright to Google then fair enough,
> but I think you want to use an Origin tag (or similar) to indicate the
> original copy.
>
> >  it is recommended to
> > have the more complex part of build.sh as part of the upstream
> > repository so that additional targets/fixes can be merged there
> > instead of opening PRs on oss-fuzz directly. With this setup the
> > build.sh I merge to oss-fuzz will just use this build.sh in the Xen
> > repository. See
> > https://github.com/tklengyel/oss-fuzz/commit/552317ae9d24ef1c00d87595516cc364bc33b662.
> >
> >>
> >>> +#
> >>> +# Licensed under the Apache License, Version 2.0 (the "License");
> >>> +# you may not use this file except in compliance with the License.
> >>> +# You may obtain a copy of the License at
> >>> +#
> >>> +#      http://www.apache.org/licenses/LICENSE-2.0
> >>> +#
> >>> +# Unless required by applicable law or agreed to in writing, software
> >>> +# distributed under the License is distributed on an "AS IS" BASIS,
> >>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
> >>> +# See the License for the specific language governing permissions and
> >>> +# limitations under the License.
> >>> +#
> >>> +################################################################################
> >>> +
> >>> +cd xen
> >>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
> >>
> >> Looking at the help from ./configure, 'clang=y' is not mentioned and it
> >> doesn't make any difference in the config.log. Can you clarify why this
> >> was added?
> >
> > Just throwing stuff at the wall till I was able to get a clang build.
> > If it's indeed not needed I can remove it.
> >
> >>
> >>> +make clang=y -C tools/include
> >>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
> >>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
> >>
> >> Who will be defining $OUT?
> >
> > oss-fuzz
>
> Ok. Can you add a link to the documentation in build.sh? This would be
> helpful for future readers to understand what $OUT really means.

Sure. It turns out there is already a README.oss-fuzz in tools/fuzz
that points to oss-fuzz, so I don't think anything else is needed
here; we can just move the build script there.

Tamas
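For readers unfamiliar with the convention discussed above, the oss-fuzz infrastructure exports $OUT (among other variables such as $CC and $CFLAGS) before invoking a project's build.sh, which then only has to install its fuzz targets there. A minimal sketch of that contract, with a temporary directory standing in for the real /out inside the oss-fuzz container and a dummy script standing in for the harness binary:

```shell
#!/bin/bash -eu
# Sketch of the oss-fuzz build.sh contract. OUT is normally exported
# by the oss-fuzz infrastructure (typically /out inside the container);
# here a temporary directory stands in for it.
export OUT="$(mktemp -d)"

# A build.sh's job is to drop executable fuzz targets into $OUT
# (a placeholder script stands in for the real harness binary here).
printf '#!/bin/sh\necho harness\n' > "$OUT/x86_instruction_emulator"
chmod +x "$OUT/x86_instruction_emulator"

test -x "$OUT/x86_instruction_emulator" && echo "target installed"
```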


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748855.1156790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShu-0003GY-Tc; Wed, 26 Jun 2024 13:29:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748855.1156790; Wed, 26 Jun 2024 13:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShu-0003GQ-Qn; Wed, 26 Jun 2024 13:29:06 +0000
Received: by outflank-mailman (input) for mailman id 748855;
 Wed, 26 Jun 2024 13:29:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMSht-00030Q-Dy
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:05 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0de584a5-33c0-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:29:03 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 485DA4EE0756;
 Wed, 26 Jun 2024 15:29:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0de584a5-33c0-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 for-4.20 1/7] automation/eclair: address violations of MISRA C Rule 20.7
Date: Wed, 26 Jun 2024 15:28:47 +0200
Message-Id: <679b1948690fecf06c9e81b398f7bf9bf5a292d2.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses".

The helper macro bitmap_switch has parameters that cannot be parenthesized
in order to comply with the rule, as doing so would break its functionality.
Moreover, the risk of misuse due to developer confusion is deemed not
substantial enough to warrant a more involved refactor, thus the macro
is deviated for this rule.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
Changes in v2:
- Switched to a comment-based deviation to allow other tools to
pick this deviation up automatically.
---
 docs/misra/safe.json     | 8 ++++++++
 xen/include/xen/bitmap.h | 3 +++
 2 files changed, 11 insertions(+)

diff --git a/docs/misra/safe.json b/docs/misra/safe.json
index c213e0a0be3b..3f18ef401c7d 100644
--- a/docs/misra/safe.json
+++ b/docs/misra/safe.json
@@ -60,6 +60,14 @@
         },
         {
             "id": "SAF-7-safe",
+            "analyser": {
+                "eclair": "MC3R1.R20.7"
+            },
+            "name": "MC3R1.R20.7: deliberately non-parenthesized macro argument",
+            "text": "A macro parameter expands to an expression that is non-parenthesized, as doing so would break the functionality."
+        },
+        {
+            "id": "SAF-8-safe",
             "analyser": {},
             "name": "Sentinel",
             "text": "Next ID to be used"
diff --git a/xen/include/xen/bitmap.h b/xen/include/xen/bitmap.h
index b9f980e91930..6ee39aa35ac6 100644
--- a/xen/include/xen/bitmap.h
+++ b/xen/include/xen/bitmap.h
@@ -103,10 +103,13 @@ extern int bitmap_allocate_region(unsigned long *bitmap, int pos, int order);
 #define bitmap_switch(nbits, zero, small, large)			  \
 	unsigned int n__ = (nbits);					  \
 	if (__builtin_constant_p(nbits) && !n__) {			  \
+		/* SAF-7-safe Rule 20.7 non-parenthesized macro argument */ \
 		zero;							  \
 	} else if (__builtin_constant_p(nbits) && n__ <= BITS_PER_LONG) { \
+		/* SAF-7-safe Rule 20.7 non-parenthesized macro argument */ \
 		small;							  \
 	} else {							  \
+		/* SAF-7-safe Rule 20.7 non-parenthesized macro argument */ \
 		large;							  \
 	}
 
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748859.1156828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShx-00047A-9f; Wed, 26 Jun 2024 13:29:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748859.1156828; Wed, 26 Jun 2024 13:29:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShx-00044h-5A; Wed, 26 Jun 2024 13:29:09 +0000
Received: by outflank-mailman (input) for mailman id 748859;
 Wed, 26 Jun 2024 13:29:07 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMShv-00030P-98
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:07 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f67418b-33c0-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:29:05 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 1C9D64EE075A;
 Wed, 26 Jun 2024 15:29:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f67418b-33c0-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 for-4.20 5/7] x86/irq: address violations of MISRA C Rule 20.7
Date: Wed, 26 Jun 2024 15:28:51 +0200
Message-Id: <0e0b6fd880b01f5e3679b981edfbce7087a0bd04.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
Note that the rule does not apply to f because that parameter
is not used as an expression in the macro, but rather as an identifier.
---
 xen/include/xen/irq.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/xen/irq.h b/xen/include/xen/irq.h
index 580ae37e7428..17211f3399b7 100644
--- a/xen/include/xen/irq.h
+++ b/xen/include/xen/irq.h
@@ -178,7 +178,7 @@ extern struct pirq *pirq_get_info(struct domain *d, int pirq);
 
 #define pirq_field(d, p, f, def) ({ \
     const struct pirq *__pi = pirq_info(d, p); \
-    __pi ? __pi->f : def; \
+    __pi ? __pi->f : (def); \
 })
 #define pirq_to_evtchn(d, pirq) pirq_field(d, pirq, evtchn, 0)
 #define pirq_masked(d, pirq) pirq_field(d, pirq, masked, 0)
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748858.1156808 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShv-0003U6-Vi; Wed, 26 Jun 2024 13:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748858.1156808; Wed, 26 Jun 2024 13:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShv-0003SD-NU; Wed, 26 Jun 2024 13:29:07 +0000
Received: by outflank-mailman (input) for mailman id 748858;
 Wed, 26 Jun 2024 13:29:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMShu-00030P-MV
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:06 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f04c367-33c0-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:29:05 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 5E24C4EE0754;
 Wed, 26 Jun 2024 15:29:04 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f04c367-33c0-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 for-4.20 4/7] automation/eclair_analysis: address violations of MISRA C Rule 20.7
Date: Wed, 26 Jun 2024 15:28:50 +0200
Message-Id: <f1876dd0e1c41139fb01443a223723a06af62ede.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses".

The local helpers GRP2 and XADD in the x86 emulator use their first
argument as the constant expression for a case label. This pattern
is deviated project-wide, because it is very unlikely to induce
developer confusion and result in the wrong control flow being
carried out.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in v2:
- Introduce a deviation instead of adding parentheses
---
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++--
 docs/misra/deviations.rst                        | 3 ++-
 2 files changed, 6 insertions(+), 3 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
index dcff4f40136c..d12be858fe84 100644
--- a/automation/eclair_analysis/ECLAIR/deviations.ecl
+++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
@@ -458,13 +458,15 @@ unexpected result when the structure is given as argument to a sizeof() operator
 
 -doc_begin="Code violating Rule 20.7 is safe when macro parameters are used: (1)
 as function arguments; (2) as macro arguments; (3) as array indices; (4) as lhs
-in assignments; (5) as initializers, possibly designated, in initalizer lists."
+in assignments; (5) as initializers, possibly designated, in initalizer lists;
+(6) as the constant expression in a switch clause label."
 -config=MC3R1.R20.7,expansion_context=
 {safe, "context(__call_expr_arg_contexts)"},
 {safe, "left_right(^[(,\\[]$,^[),\\]]$)"},
 {safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(array_subscript_expr), subscript)))"},
 {safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(operator(assign), lhs)))"},
-{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(init_list_expr||designated_init_expr), init)))"}
+{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(init_list_expr||designated_init_expr), init)))"},
+{safe, "context(skip_to(__expr_non_syntactic_contexts, stmt_child(node(case_stmt), lower||upper)))"}
 -doc_end
 
 -doc_begin="Violations involving the __config_enabled macros cannot be fixed without
diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
index c6a2affc6a0b..7be232212339 100644
--- a/docs/misra/deviations.rst
+++ b/docs/misra/deviations.rst
@@ -416,7 +416,8 @@ Deviations related to MISRA C:2012 Rules:
        (2) as macro arguments;
        (3) as array indices;
        (4) as lhs in assignments;
-       (5) as initializers, possibly designated, in initalizer lists.
+       (5) as initializers, possibly designated, in initalizer lists;
+       (6) as constant expressions of switch case labels.
      - Tagged as `safe` for ECLAIR.
 
    * - R20.7
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748857.1156803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShv-0003PP-L7; Wed, 26 Jun 2024 13:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748857.1156803; Wed, 26 Jun 2024 13:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShv-0003Oc-CB; Wed, 26 Jun 2024 13:29:07 +0000
Received: by outflank-mailman (input) for mailman id 748857;
 Wed, 26 Jun 2024 13:29:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMSht-00030P-Tl
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:05 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0e9a167e-33c0-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:29:04 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id C54A74EE0757;
 Wed, 26 Jun 2024 15:29:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e9a167e-33c0-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 for-4.20 3/7] xen/guest_access: address violations of MISRA rule 20.7
Date: Wed, 26 Jun 2024 15:28:49 +0200
Message-Id: <cfc2067d0389092c0f2d0e688e474b1f0d5013c7.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/include/xen/guest_access.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/xen/guest_access.h b/xen/include/xen/guest_access.h
index a2146749e396..2e0971c4872c 100644
--- a/xen/include/xen/guest_access.h
+++ b/xen/include/xen/guest_access.h
@@ -51,9 +51,9 @@
     ((XEN_GUEST_HANDLE(type)) { &(hnd).p->fld })
 
 #define guest_handle_from_ptr(ptr, type)        \
-    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(type)) { (type *)(ptr) })
 #define const_guest_handle_from_ptr(ptr, type)  \
-    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)ptr })
+    ((XEN_GUEST_HANDLE_PARAM(const_##type)) { (const type *)(ptr) })
 
 /*
  * Copy an array of objects to guest context via a guest handle,
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748860.1156835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShx-0004Av-MW; Wed, 26 Jun 2024 13:29:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748860.1156835; Wed, 26 Jun 2024 13:29:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShx-0004AH-Go; Wed, 26 Jun 2024 13:29:09 +0000
Received: by outflank-mailman (input) for mailman id 748860;
 Wed, 26 Jun 2024 13:29:07 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMShv-00030Q-Dz
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:07 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1011454f-33c0-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:29:07 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 3907F4EE0758;
 Wed, 26 Jun 2024 15:29:06 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1011454f-33c0-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 for-4.20 7/7] x86/traps: address violations of MISRA C Rule 20.7
Date: Wed, 26 Jun 2024 15:28:53 +0200
Message-Id: <7830b9bfbb0aec272376817eb20bbcbfebdf4044.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 xen/arch/x86/traps.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 9906e874d593..ee91fc56b125 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -123,7 +123,7 @@ unsigned int __ro_after_init ler_msr;
 const unsigned int nmi_cpu;
 
 #define stack_words_per_line 4
-#define ESP_BEFORE_EXCEPTION(regs) ((unsigned long *)regs->rsp)
+#define ESP_BEFORE_EXCEPTION(regs) ((unsigned long *)(regs)->rsp)
 
 void show_code(const struct cpu_user_regs *regs)
 {
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748856.1156796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShv-0003J9-85; Wed, 26 Jun 2024 13:29:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748856.1156796; Wed, 26 Jun 2024 13:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShv-0003Ia-1n; Wed, 26 Jun 2024 13:29:07 +0000
Received: by outflank-mailman (input) for mailman id 748856;
 Wed, 26 Jun 2024 13:29:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMSht-00030Q-L4
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:05 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0e40c803-33c0-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:29:03 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 33B2B4EE0755;
 Wed, 26 Jun 2024 15:29:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e40c803-33c0-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2 for-4.20 2/7] xen/self-tests: address violations of MISRA rule 20.7
Date: Wed, 26 Jun 2024 15:28:48 +0200
Message-Id: <42d5c74777622407682ad80db0e31d3bd09005c7.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

MISRA C Rule 20.7 states: "Expressions resulting from the expansion
of macro parameters shall be enclosed in parentheses". Therefore, some
macro definitions should gain additional parentheses to ensure that all
current and future users will be safe with respect to expansions that
can possibly alter the semantics of the passed-in macro parameter.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
In this case the use of parentheses can detect misuses of the COMPILE_CHECK
macro for the fn argument that happen to pass the compile-time check
(see e.g. https://godbolt.org/z/n4zTdz595).

An alternative would be to deviate these macros, but since they are used
to check the correctness of other code, it seemed preferable to further
ensure that all usages of the macros are safe.
---
 xen/include/xen/self-tests.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/xen/include/xen/self-tests.h b/xen/include/xen/self-tests.h
index 42a4cc4d17fe..58484fe5a8ae 100644
--- a/xen/include/xen/self-tests.h
+++ b/xen/include/xen/self-tests.h
@@ -19,11 +19,11 @@
 #if !defined(CONFIG_CC_IS_CLANG) || CONFIG_CLANG_VERSION >= 80000
 #define COMPILE_CHECK(fn, val, res)                                     \
     do {                                                                \
-        typeof(fn(val)) real = fn(val);                                 \
+        typeof((fn)(val)) real = (fn)(val);                             \
                                                                         \
         if ( !__builtin_constant_p(real) )                              \
             asm ( ".error \"'" STR(fn(val)) "' not compile-time constant\"" ); \
-        else if ( real != res )                                         \
+        else if ( real != (res) )                                       \
             asm ( ".error \"Compile time check '" STR(fn(val) == res) "' failed\"" ); \
     } while ( 0 )
 #else
@@ -37,9 +37,9 @@
  */
 #define RUNTIME_CHECK(fn, val, res)                     \
     do {                                                \
-        typeof(fn(val)) real = fn(HIDE(val));           \
+        typeof((fn)(val)) real = (fn)(HIDE(val));       \
                                                         \
-        if ( real != res )                              \
+        if ( real != (res) )                            \
             panic("%s: %s(%s) expected %u, got %u\n",   \
                   __func__, #fn, #val, real, res);      \
     } while ( 0 )
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748854.1156781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSht-00031x-Ji; Wed, 26 Jun 2024 13:29:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748854.1156781; Wed, 26 Jun 2024 13:29:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSht-00031q-FL; Wed, 26 Jun 2024 13:29:05 +0000
Received: by outflank-mailman (input) for mailman id 748854;
 Wed, 26 Jun 2024 13:29:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMShs-00030P-H3
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:04 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0d586a00-33c0-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:29:02 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 8C0B84EE073D;
 Wed, 26 Jun 2024 15:29:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d586a00-33c0-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2 for-4.20 0/7] address several violations of MISRA Rule 20.7
Date: Wed, 26 Jun 2024 15:28:46 +0200
Message-Id: <cover.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

this series addresses several violations of Rule 20.7, as well as a
small fix to the ECLAIR integration scripts: the affected settings do
not influence the current behaviour, but were mistakenly part of the
upstream configuration.

Note that even after applying this series the rule has a few leftover
violations, most of them in x86 code in xen/arch/x86/include/asm/msi.h.
I did send a patch [1] to deal with those, limited to addressing the
MISRA violations, but it was dropped by agreement in favour of a more
general cleanup of the file, which is why those changes are not
included here.

[1] https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/

Changes in v2:
- Patch 7 is new to this series 

Nicola Vetrini (7):
  automation/eclair: address violations of MISRA C Rule 20.7
  xen/self-tests: address violations of MISRA rule 20.7
  xen/guest_access: address violations of MISRA rule 20.7
  automation/eclair_analysis: address violations of MISRA C Rule 20.7
  x86/irq: address violations of MISRA C Rule 20.7
  automation/eclair_analysis: clean ECLAIR configuration scripts
  x86/traps: address violations of MISRA C Rule 20.7

 automation/eclair_analysis/ECLAIR/analyze.sh     | 3 +--
 automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++--
 docs/misra/deviations.rst                        | 3 ++-
 docs/misra/safe.json                             | 8 ++++++++
 xen/arch/x86/traps.c                             | 2 +-
 xen/include/xen/bitmap.h                         | 3 +++
 xen/include/xen/guest_access.h                   | 4 ++--
 xen/include/xen/irq.h                            | 2 +-
 xen/include/xen/self-tests.h                     | 8 ++++----
 9 files changed, 26 insertions(+), 13 deletions(-)

-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:29:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:29:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748861.1156840 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShy-0004GZ-AN; Wed, 26 Jun 2024 13:29:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748861.1156840; Wed, 26 Jun 2024 13:29:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMShx-0004E6-QJ; Wed, 26 Jun 2024 13:29:09 +0000
Received: by outflank-mailman (input) for mailman id 748861;
 Wed, 26 Jun 2024 13:29:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMShw-00030P-7K
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:29:08 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0fb3cafe-33c0-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:29:06 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id A216B4EE0759;
 Wed, 26 Jun 2024 15:29:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fb3cafe-33c0-11ef-b4bb-af5377834399
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Simone Ballarin <simone.ballarin@bugseng.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [XEN PATCH v2 for-4.20 6/7] automation/eclair_analysis: clean ECLAIR configuration scripts
Date: Wed, 26 Jun 2024 15:28:52 +0200
Message-Id: <ab4a52b5f8d7362d9911bbd97b203f5053f57003.1719407840.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove from the ECLAIR integration scripts an unused option, which
was already ignored, and make the help texts consistent
with the rest of the scripts.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 automation/eclair_analysis/ECLAIR/analyze.sh | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/automation/eclair_analysis/ECLAIR/analyze.sh b/automation/eclair_analysis/ECLAIR/analyze.sh
index 0ea5520c93a6..e96456c3c18d 100755
--- a/automation/eclair_analysis/ECLAIR/analyze.sh
+++ b/automation/eclair_analysis/ECLAIR/analyze.sh
@@ -11,7 +11,7 @@ fatal() {
 }
 
 usage() {
-  fatal "Usage: ${script_name} <ARM64|X86_64> <Set0|Set1|Set2|Set3>"
+  fatal "Usage: ${script_name} <ARM64|X86_64> <accepted|monitored>"
 }
 
 if [[ $# -ne 2 ]]; then
@@ -40,7 +40,6 @@ ECLAIR_REPORT_LOG=${ECLAIR_OUTPUT_DIR}/REPORT.log
 if [[ "$1" = "X86_64" ]]; then
   export CROSS_COMPILE=
   export XEN_TARGET_ARCH=x86_64
-  EXTRA_ECLAIR_ENV_OPTIONS=-disable=MC3R1.R20.7
 elif [[ "$1" = "ARM64" ]]; then
   export CROSS_COMPILE=aarch64-linux-gnu-
   export XEN_TARGET_ARCH=arm64
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:30:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:30:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748889.1156861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSj1-0007wG-D0; Wed, 26 Jun 2024 13:30:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748889.1156861; Wed, 26 Jun 2024 13:30:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSj1-0007w9-9N; Wed, 26 Jun 2024 13:30:15 +0000
Received: by outflank-mailman (input) for mailman id 748889;
 Wed, 26 Jun 2024 13:30:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMSiz-0007vr-GO
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:30:13 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 37382a4f-33c0-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:30:12 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 249014EE073D;
 Wed, 26 Jun 2024 15:30:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37382a4f-33c0-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Wed, 26 Jun 2024 15:30:12 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org, michal.orzel@amd.com, xenia.ragiadakou@amd.com,
 ayan.kumar.halder@amd.com, consulting@bugseng.com, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 for-4.20 0/7] address several violations of MISRA
 Rule 20.7
In-Reply-To: <cover.1719407840.git.nicola.vetrini@bugseng.com>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
Message-ID: <8c20ec5178fc8bbe3488f0cb4fc150e4@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-26 15:28, Nicola Vetrini wrote:
> Hi all,
> 
> this series addresses several violations of Rule 20.7, as well as a
> small fix to the ECLAIR integration scripts that do not influence
> the current behaviour, but were mistakenly part of the upstream
> configuration.
> 
> Note that by applying this series the rule has a few leftover 
> violations.
> Most of those are in x86 code in xen/arch/x86/include/asm/msi.h .
> I did send a patch [1] to deal with those, limited only to addressing 
> the MISRA
> violations, but in the end it was dropped in favour of a more general 
> cleanup of
> the file upon agreement, so this is why those changes are not included 
> here.
> 
> [1] 
> https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/
> 
> Changes in v2:
> - Patch 7 is new to this series
> 

Sorry, this should have been a v3, rather than a v2

> Nicola Vetrini (7):
>   automation/eclair: address violations of MISRA C Rule 20.7
>   xen/self-tests: address violations of MISRA rule 20.7
>   xen/guest_access: address violations of MISRA rule 20.7
>   automation/eclair_analysis: address violations of MISRA C Rule 20.7
>   x86/irq: address violations of MISRA C Rule 20.7
>   automation/eclair_analysis: clean ECLAIR configuration scripts
>   x86/traps: address violations of MISRA C Rule 20.7
> 
>  automation/eclair_analysis/ECLAIR/analyze.sh     | 3 +--
>  automation/eclair_analysis/ECLAIR/deviations.ecl | 6 ++++--
>  docs/misra/deviations.rst                        | 3 ++-
>  docs/misra/safe.json                             | 8 ++++++++
>  xen/arch/x86/traps.c                             | 2 +-
>  xen/include/xen/bitmap.h                         | 3 +++
>  xen/include/xen/guest_access.h                   | 4 ++--
>  xen/include/xen/irq.h                            | 2 +-
>  xen/include/xen/self-tests.h                     | 8 ++++----
>  9 files changed, 26 insertions(+), 13 deletions(-)

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:38:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:38:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748913.1156871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSqw-0000nv-8C; Wed, 26 Jun 2024 13:38:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748913.1156871; Wed, 26 Jun 2024 13:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSqw-0000no-55; Wed, 26 Jun 2024 13:38:26 +0000
Received: by outflank-mailman (input) for mailman id 748913;
 Wed, 26 Jun 2024 13:38:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53QU=N4=bounce.vates.tech=bounce-md_30504962.667c19cb.v1-e4b15c1a44e74e1db5c63fec0713e397@srs-se1.protection.inumbo.net>)
 id 1sMSqu-0000ni-MF
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:38:24 +0000
Received: from mail134-3.atl141.mandrillapp.com
 (mail134-3.atl141.mandrillapp.com [198.2.134.3])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 5acf1f75-33c1-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:38:22 +0200 (CEST)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-3.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4W8N7b4prMzDRHyBB
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 13:38:19 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 e4b15c1a44e74e1db5c63fec0713e397; Wed, 26 Jun 2024 13:38:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5acf1f75-33c1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719409099; x=1719669599;
	bh=4FcE7J3ihYLd+CzOxH5Il3z7tA9oho9OfuXYk/DRLv8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=Iwol2YwzYPcT7aIG29tj5xuqDOZ4Wh0EljjHmHqeMycGqK+bWh/7rP22JVjGH3xuv
	 p4v0dpw0ilhU0UkjAjf4NC8XRrkZqRCTZAVVCVEFA/6G5SVw0wAwawbG+WgIIJ0s+7
	 +y4o8A2L4vkxZIjpF/EZsfR75xZ0kWimgtWjiL2RA2r5Vl/S7vajauLDMODGZNgyBA
	 0stCEw/0NPUhJdLUNupNN2yeDWGL+sAGY2+rXcgkPZ+e6gEnJfy5efxc/7FfrFABCj
	 1Fxt4Hpmf2aJSzIhMHSHsf0hF9duraDXk6/SxeiSrNWZINPEIVX4jHt12+YZfvYAHE
	 uI4Xj++nOc7aQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719409099; x=1719669599; i=anthony.perard@vates.tech;
	bh=4FcE7J3ihYLd+CzOxH5Il3z7tA9oho9OfuXYk/DRLv8=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=Y9+LpTszkCtKXxfLEwQEGjzxCMrAIYBL421mt99gteUtyn3Lq/gvTHa4H/R+qeHYT
	 DIpu21Sv6fnbDVZFPYB6+ASY02JJb/nyHNvemM/b1y8Bm6uBMEZe0+5+E4DH4zmX8c
	 NOe2qq2brFS3wCy7u8nFHMp3bHnnALxO5Y1koHZ9E1EFD8arW5HbJTpd7Q+C4fmuwQ
	 OtV0KgNEB0s+ewKE2Tmrcp3pmN12d0+8TGqWj2Oxi8y/F90K1OytTut7RgA0wkEfLG
	 NUcrTB5sQSM4poOCVEBv3oynw8yB5/3g/x1SlL87icT8FRopY8KQQJbpIOlEaRzqE0
	 JGBMx9nDMS/vg==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[XEN=20PATCH=20v3=2005/16]=20xen/x86:=20address=20violations=20of=20MISRA=20C:2012=20Directive=204.10?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719409098033
To: Jan Beulich <jbeulich@suse.com>
Cc: Nicola Vetrini <nicola.vetrini@bugseng.com>, Simone Ballarin <simone.ballarin@bugseng.com>, consulting@bugseng.com, sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
Message-Id: <ZnwZycQ3mU21cSpd@l14>
References: <cover.1710145041.git.simone.ballarin@bugseng.com> <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com> <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com> <ef623bad297d016438b35bedc80f091d@bugseng.com> <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com> <797b00049612507d273facc581b2c2c5@bugseng.com> <a5009c3e-cba6-4737-aaff-c3b79a11169c@suse.com> <e3ae670923fd061986e27b3f95833b88@bugseng.com> <0c88d86e-f226-4225-b723-a6662fcd5bef@suse.com>
In-Reply-To: <0c88d86e-f226-4225-b723-a6662fcd5bef@suse.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.e4b15c1a44e74e1db5c63fec0713e397?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 13:38:19 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Wed, Jun 26, 2024 at 12:31:42PM +0200, Jan Beulich wrote:
> On 26.06.2024 12:25, Nicola Vetrini wrote:
> > On 2024-06-26 11:26, Jan Beulich wrote:
> >> On 26.06.2024 11:20, Nicola Vetrini wrote:
> >>> On 2024-06-26 11:06, Jan Beulich wrote:
> >>>> On 25.06.2024 21:31, Nicola Vetrini wrote:
> >>>>> On 2024-03-12 09:16, Jan Beulich wrote:
> >>>>>> On 11.03.2024 09:59, Simone Ballarin wrote:
> >>>>>>> --- a/xen/arch/x86/Makefile
> >>>>>>> +++ b/xen/arch/x86/Makefile
> >>>>>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
> >>>>>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i
> >>>>>>> $(src)/Makefile
> >>>>>>>  	$(call filechk,asm-macros.h)
> >>>>>>>
> >>>>>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
> >>>>>>
> >>>>>> This wants to use :=, I think - there's no reason to invoke the 
> >>>>>> shell
> >>>>>> ...
> >>>>>
> >>>>> I agree on this
> >>>>>
> >>>>>>
> >>>>>>>  define filechk_asm-macros.h
> >>>>>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
> >>>>>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
> >>>>>>>      echo '#if 0'; \
> >>>>>>>      echo '.if 0'; \
> >>>>>>>      echo '#endif'; \
> >>>>>>> -    echo '#ifndef __ASM_MACROS_H__'; \
> >>>>>>> -    echo '#define __ASM_MACROS_H__'; \
> >>>>>>>      echo 'asm ( ".include \"$@\"" );'; \
> >>>>>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
> >>>>>>>      echo '#if 0'; \
> >>>>>>>      echo '.endif'; \
> >>>>>>>      cat $<; \
> >>>>>>> -    echo '#endif'
> >>>>>>> +    echo '#endif'; \
> >>>>>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
> >>>>>>>  endef
> >>>>>>
> >>>>>> ... three times while expanding this macro. Alternatively (to avoid
> >>>>>> an unnecessary shell invocation when this macro is never expanded 
> >>>>>> at
> >>>>>> all) a shell variable inside the "define" above would want
> >>>>>> introducing.
> >>>>>> Whether this 2nd approach is better depends on whether we 
> >>>>>> anticipate
> >>>>>> further uses of ARCHDIR.
> >>>>>
> >>>>> However here I'm not entirely sure about the meaning of this latter
> >>>>> proposal.
> >>>>> My proposal is the following:
> >>>>>
> >>>>> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
> >>>>>
> >>>>> in a suitably generic place (such as Kbuild.include or maybe
> >>>>> xen/Makefile) as you suggested in subsequent patches that reused 
> >>>>> this
> >>>>> pattern.
> >>>>
> >>>> If $(ARCHDIR) is going to be used elsewhere, then what you suggest is
> >>>> fine.
> >>>> My "whether" in the earlier reply specifically left open for
> >>>> clarification
> >>>> what the intentions with the variable are. The alternative I had
> >>>> described
> >>>> makes sense only when $(ARCHDIR) would only ever be used inside the
> >>>> filechk_asm-macros.h macro.
> >>>
> >>> Yes, the intention is to reuse $(ARCHDIR) in other places, as you can
> >>> tell from the fact that subsequent patches replicate the same pattern.
> >>> This is going to save some duplication.
> >>> The only matter left then is whether xen/Makefile (around line 250, 
> >>> just
> >>> after setting SRCARCH) would be better, or Kbuild.include. To me the
> >>> former place seems more natural, but I'm not totally sure.
> >>
> >> Depends on where all the intended uses are. If they're all in
> >> xen/Makefile, then having the macro just there is of course sufficient.
> >> Whereas when it's needed elsewhere, instead of exporting it, putting it
> >> in Kbuild.include would seem more natural / desirable to me.
> >>
> > 
> > The places where this would be used are these:
> > file: target (or define)
> > xen/build.mk: arch/$(SRCARCH)/include/asm/asm-offsets.h: asm-offsets.s
> > xen/include/Makefile: define cmd_xlat_h
> > xen/arch/x86/Makefile: define filechk_asm-macros.h
> > 
> > The only issue that comes to my mind (it may not be one at all) is that 
> > SRCARCH is defined and exported in xen/Makefile after including 
> > Kbuild.include, so ARCHDIR would need to be defined after SRCARCH is 
> > assigned:
> > 
> > include scripts/Kbuild.include
> > 
> > # Don't break if the build process wasn't called from the top level
> > # we need XEN_TARGET_ARCH to generate the proper config
> > include $(XEN_ROOT)/Config.mk
> > 
> > # Set ARCH/SRCARCH appropriately.
> > 
> > ARCH := $(XEN_TARGET_ARCH)
> > SRCARCH := $(shell echo $(ARCH) | \
> >      sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' \
> >          -e 's/riscv.*/riscv/g' -e 's/ppc.*/ppc/g')
> > export ARCH SRCARCH
> > 
> > Am I missing something?
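
(As an aside, the quoted ARCH -> SRCARCH sed pipeline can be checked
standalone; a quick sketch, assuming GNU sed as the Xen build uses — the
normalize helper name is purely illustrative:)

```shell
# Reproduce xen/Makefile's ARCH -> SRCARCH normalization for a few arches.
normalize() {
    echo "$1" | sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' \
                    -e 's/riscv.*/riscv/g' -e 's/ppc.*/ppc/g'
}
normalize x86_64   # x86
normalize arm32    # arm
normalize arm64    # arm
normalize riscv64  # riscv
normalize ppc64    # ppc
```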
> 
> In that case the alternatives are exporting or using = rather than := in
> Kbuild.include, i.e. other than initially requested. Personally I dislike
> exporting to a fair degree, but I'm not sure which one's better in this
> case. Cc-ing Anthony for possible input.

None. The name is misleading anyway; it would suggest to me that it
contains a directory, but that's wrong.

Another thing that's suboptimal: using make to call a shell to generate a
string that is then used later in a shell context anyway. How about just
doing the work in that later shell context?

Something like:

 define filechk_asm-macros.h
+    guard=$$(echo ASM_${SRCARCH}_ASM_MACROS_H | tr a-z A-Z); \
+    echo "#ifndef $$guard"; \
+    echo "#define $$guard"; \
     echo '#if 0'; \
     echo '.if 0'; \

Or, instead of having to write the name of the file down, we could
use a name that is already registered in a variable:

 define filechk_asm-macros.h
+    guard=$$(echo $@ | tr a-z/.- A-Z_); \
+    echo "#ifndef $$guard"; \
+    echo "#define $$guard"; \
     echo '#if 0'; \
     echo '.if 0'; \

This produces:
    #ifndef ARCH_X86_INCLUDE_ASM_ASM_MACROS_H
    #define ARCH_X86_INCLUDE_ASM_ASM_MACROS_H
    #if 0
    .if 0

Cheers,

-- 

Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:39:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:39:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748926.1156893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSsH-0001t7-Pv; Wed, 26 Jun 2024 13:39:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748926.1156893; Wed, 26 Jun 2024 13:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSsH-0001t0-N5; Wed, 26 Jun 2024 13:39:49 +0000
Received: by outflank-mailman (input) for mailman id 748926;
 Wed, 26 Jun 2024 13:39:48 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ak5a=N4=cloud.com=kelly.choi@srs-se1.protection.inumbo.net>)
 id 1sMSsG-0001bd-68
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:39:48 +0000
Received: from mail-ed1-x532.google.com (mail-ed1-x532.google.com
 [2a00:1450:4864:20::532])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8d33b630-33c1-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:39:46 +0200 (CEST)
Received: by mail-ed1-x532.google.com with SMTP id
 4fb4d7f45d1cf-57d251b5fccso351190a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:39:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d33b630-33c1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719409185; x=1720013985; darn=lists.xenproject.org;
        h=to:subject:message-id:date:from:mime-version:from:to:cc:subject
         :date:message-id:reply-to;
        bh=+R6YONPwlHpoRRYsr6+mBW727pvDLC/LMVbe3FPlL40=;
        b=FKA1Z/Uc9+YgRCV41c65DjAORkr9fL8fWAnkm0eaGDw1dx1pBsVTLKbsimhp+8rBMO
         QlLsFMavIApXij6zXhMLa+SQIyV12GtwDEsnF+Uhiord+XXq73tIDnbMjnCEeoGQwE6s
         vB4Saa/9ke2c/30VD3YZBSB6wr1qn9V8Fmc20=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719409185; x=1720013985;
        h=to:subject:message-id:date:from:mime-version:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=+R6YONPwlHpoRRYsr6+mBW727pvDLC/LMVbe3FPlL40=;
        b=eVtFAT0k3TkoANxbLi/o6XS70NjM7XEjo7UoVy914svgvbpHln+Aoe64JfTutTPIyV
         csbkO0fRRkUWiyTLB7nVRUxpEBt5oDIIdLO7tp9BjPr+Ki14EnRwKTRrTuOjtliEL93M
         asREcMvRgKNDFt3S008LvOD+KMkgjph5JBt56CTIqNe++Rxc+nnYvtWCRKAL7ViRQ8S8
         BLofV7wDJ3QCcmsxqRPuaSDXFJ/CUT+w+lB4qgWfjoMAG5lkvkdmVrT92UToXyMbcbfs
         L2Ey07/3PSoiPv13xuMODdbDG14DKizpW7Ia63b6ZBtaEPvxT4u3adZaud6dpiRFDtYa
         06uA==
X-Gm-Message-State: AOJu0YwMzqhiMqoeDsOrBvpM+4IFb1BrZir1YxpbBn0lppmDgZIqWAXb
	949o/4QXxGIWjF6SXwvYqbvV1SqjoJcr1MV9nw1FBdzrXQ6iXLcmOI4oxgFKQT9Z/pZ+Tom5DqD
	MLxeK1JSJrTJ8WvLwHNyE7FZUOYOl/H/njig1h74l43hNG5KWyF8=
X-Google-Smtp-Source: AGHT+IFEK+k5gxufcXa5MITJDPr+ovOxlDx6IOPGKH0gzDEZjUZvs/ObB5XRX+NIqr8FTYNSWFGJrGnbNBb20sOgUDk=
X-Received: by 2002:a50:cd59:0:b0:57c:6a1f:11d5 with SMTP id
 4fb4d7f45d1cf-57d4bd63249mr7649898a12.15.1719409185337; Wed, 26 Jun 2024
 06:39:45 -0700 (PDT)
MIME-Version: 1.0
From: Kelly Choi <kelly.choi@cloud.com>
Date: Wed, 26 Jun 2024 14:39:09 +0100
Message-ID: <CAO-mL=xjBTpGX9kfmnbBKmRHgbB8xvJXfNL1M2ArBZ9p4_0Vmg@mail.gmail.com>
Subject: [ANNOUNCE] Postpone community call to 11th July 2024
To: xen-devel <xen-devel@lists.xenproject.org>, xen-users@lists.xenproject.org, 
	xen-announce@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000b74cc1061bcb2598"

--000000000000b74cc1061bcb2598
Content-Type: text/plain; charset="UTF-8"

Hi all,

Our next community call is on 4th July 2024. As this is a national holiday
in the US, I propose we move our call to the same time on *11th July 2024*
if there are no objections.

Details and agenda links will be sent that week.

Many thanks,
Kelly Choi

Community Manager
Xen Project

--000000000000b74cc1061bcb2598--


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:40:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748940.1156919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSsk-0003uX-Br; Wed, 26 Jun 2024 13:40:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748940.1156919; Wed, 26 Jun 2024 13:40:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSsk-0003uL-8X; Wed, 26 Jun 2024 13:40:18 +0000
Received: by outflank-mailman (input) for mailman id 748940;
 Wed, 26 Jun 2024 13:40:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jDEV=N4=bounce.vates.tech=bounce-md_30504962.667c1a3d.v1-c22b7e43069140ac8c1ddb7c088bb1f3@srs-se1.protection.inumbo.net>)
 id 1sMSsj-0001bd-0Z
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:40:17 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 9e055e52-33c1-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:40:15 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W8N9n5MLGz5QkLrN
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 13:40:13 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 c22b7e43069140ac8c1ddb7c088bb1f3; Wed, 26 Jun 2024 13:40:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e055e52-33c1-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719409213; x=1719669713;
	bh=JHEddISG4C8qf11Zeb09KyLMyQLm7fuVNthb0dV6qkI=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=Mq8Hs7ycLARiul2BsSSwPeP5OR4um8PD9i76YcI63zkjnq0Bk6J0GfyGT6WlFsZsk
	 L+bJPjxhZetWxdKbZlxnmMsmL7QZLV7Kbq+b+m9HijP+wE7PoxJhnJBlpv3KjBbftV
	 V6eKKH3tOXTVa7+lKpd8xX5xZyyBbc3kP22yZo9LMld/fAd1Ayt2bgUJFQsk3LnRg0
	 cyvH2uDD6pGwbl/B3yUzj7rPnLGqFzs4VOjftsK7GkxtDBTJAPMHxQJSlTZqUqcASP
	 HiYdNCxVlCao2ozHofwVOUv9kFW6IYMmDsUF6gNT/TDWGo3VK1uAR7OOH1fprEBksV
	 wH572luYfwmpA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719409213; x=1719669713; i=teddy.astie@vates.tech;
	bh=JHEddISG4C8qf11Zeb09KyLMyQLm7fuVNthb0dV6qkI=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=vudwB/au3fXMW7xvJ7yVPUQHyZ/fF6k6beOL8RcukZeWA8UwI2LivHrcsV7On7zrT
	 oBfRNroUgUFN0VZZyo1MJPw42h1ymzK8pUQha8dAKMhbw5jx4JiR8fmff+zhNRz1yt
	 4AmQ3MbTptWRyFx38r5Kl02uj/qEt9g5+wDapWNHaWBcHXqrethCA/m3E/ydkoOlZ3
	 rWY4aaNebfK08QRtk0rwesUazAWhDMg0URLFO8dJnac4Gw3fhjIZxxNhSZ88rMZngY
	 hERyNLVJysRHwFGN6MoRjm5pxMunN6wL2lyRDVdxLEnIE6NBhUmVT2ptt1Mm5ypGv0
	 1gb1jLO8nyE2g==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?Re:=20[RFC=20PATCH=20v2]=20iommu/xen:=20Add=20Xen=20PV-IOMMU=20driver?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719409210959
Message-Id: <cc5c2e1d-f5bd-47b3-830f-4bb5298ac106@vates.tech>
To: Robin Murphy <robin.murphy@arm.com>, xen-devel@lists.xenproject.org, iommu@lists.linux.dev
Cc: Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>, Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <24d7ec005e77e4e0127995ba6f4ad16f33737fa5.1718981216.git.teddy.astie@vates.tech> <da3ec316-b001-4711-b323-70af3e6bb014@arm.com> <a04e169d-b38a-43dc-b783-a8af1e1b0468@vates.tech> <c4dc539b-a71b-4323-aa31-b97b39c633a8@arm.com>
In-Reply-To: <c4dc539b-a71b-4323-aa31-b97b39c633a8@arm.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.c22b7e43069140ac8c1ddb7c088bb1f3?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 13:40:13 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello Robin,

On 26/06/2024 at 14:09, Robin Murphy wrote:
> On 2024-06-24 3:36 pm, Teddy Astie wrote:
>> Hello Robin,
>> Thanks for the thourough review.
>>
>>>> diff --git a/drivers/iommu/Kconfig b/drivers/iommu/Kconfig
>>>> index 0af39bbbe3a3..242cefac77c9 100644
>>>> --- a/drivers/iommu/Kconfig
>>>> +++ b/drivers/iommu/Kconfig
>>>> @@ -480,6 +480,15 @@ config VIRTIO_IOMMU
>>>>          Say Y here if you intend to run this kernel as a guest.
>>>> +config XEN_IOMMU
>>>> +	bool "Xen IOMMU driver"
>>>> +	depends on XEN_DOM0
>>>
>>> Clearly this depends on X86 as well.
>>>
>> Well, I don't intend this driver to be X86-only, even though the current
>> Xen RFC doesn't support ARM (yet). Unless there is a counter-indication
>> for it ?
> 
> It's purely practical - even if you drop the asm/iommu.h stuff it would 
> still break ARM DOM0 builds due to HYPERVISOR_iommu_op() only being 
> defined for x86. And it's better to add a dependency here to make it 
> clear what's *currently* supported, than to add dummy code to allow it 
> to build for ARM if that's not actually tested or usable yet.
> 

OK, it does exist on the hypervisor side (even though such a Xen build 
is not possible yet); I suppose I just need to add the relevant hypercall 
interfaces for arm/arm64 in hypercall{.S,.h}.

>>>> +bool xen_iommu_capable(struct device *dev, enum iommu_cap cap)
>>>> +{
>>>> +	switch (cap) {
>>>> +	case IOMMU_CAP_CACHE_COHERENCY:
>>>> +		return true;
>>>
>>> Will the PV-IOMMU only ever be exposed on hardware where that really is
>>> always true?
>>>
>>
>> On the hypervisor side, the PV-IOMMU interface always implicitly flushes
>> the IOMMU hardware on map/unmap operations, so at the end of the
>> hypercall the cache should always be coherent IMO.
> 
> As Jason already brought up, this is not about TLBs or anything cached 
> by the IOMMU itself, it's about the memory type(s) it can create 
> mappings with. Returning true here says Xen guarantees it can use a 
> cacheable memory type which will let DMA snoop the CPU caches. 
> Furthermore, not explicitly handling IOMMU_CACHE in the map_pages op 
> then also implies that it will *always* do that, so you couldn't 
> actually get an uncached mapping even if you wanted one.
> 

Yes, this is a point we are currently discussing on the Xen side.

>>>> +	while (xen_pg_count) {
>>>> +		size_t to_unmap = min(xen_pg_count, max_nr_pages);
>>>> +
>>>> +		//pr_info("Unmapping %lx-%lx\n", dfn, dfn + to_unmap - 1);
>>>> +
>>>> +		op.unmap_pages.dfn = dfn;
>>>> +		op.unmap_pages.nr_pages = to_unmap;
>>>> +
>>>> +		ret = HYPERVISOR_iommu_op(&op);
>>>> +
>>>> +		if (ret)
>>>> +			pr_warn("Unmap failure (%lx-%lx)\n", dfn, dfn + to_unmap - 1);
>>>
>>> But then how would it ever happen anyway? Unmap is a domain op, so a
>>> domain which doesn't allow unmapping shouldn't offer it in the first
>>> place...
>>
>> Unmap failing should be exceptional, but it is possible e.g. with
>> transparent superpages (as the Xen IOMMU drivers use). Xen's drivers fold
>> suitable contiguous mappings into superpage entries to optimize memory
>> usage and the IOTLB. However, if you unmap in the middle of a region
>> covered by a superpage entry, that entry is no longer valid, and you need
>> to allocate and fill the lower levels, which is fallible when memory is
>> lacking.
> 
> OK, so in the worst case you could potentially have a partial unmap 
> failure if the range crosses a superpage boundary and the end part 
> happens to have been folded, and Xen doesn't detect and prepare that 
> allocation until it's already unmapped up to the boundary. If that is 
> so, does the hypercall interface give any information about partial 
> failure, or can any error only be taken to mean that some or all of the 
> given range may or may not have been unmapped now?

The hypercall interface has op.unmap_pages.unmapped to indicate the number 
of pages actually unmapped, even in case of error. If it is less than 
initially requested, the driver will retry with the remaining part and 
almost certainly fail again. Although, I have the impression that this is 
going to fail silently.

>>> In this case I'd argue that you really *do* want to return short, in the
>>> hope of propagating the error back up and letting the caller know the
>>> address space is now messed up before things start blowing up even more
>>> if they keep going and subsequently try to map new pages into
>>> not-actually-unmapped VAs.
>>
>> While mapping on top of another mapping is OK for us (it's just going to
>> override the previous mapping), I definitely agree that having the
>> address space messed up is not good.
> 
> Oh, indeed, quietly replacing existing PTEs might help paper over errors 
> in this particular instance, but it does then allow *other* cases to go 
> wrong in fun and infuriating ways :)
> 

Yes, but my stance was wrong. Our hypercall interface doesn't allow 
mapping on top of an existing one (it's going to fail with -EADDRINUSE), 
so a failing unmap is still going to cause bad issues.

Should iommu_unmap and related code consider the case where unmapped != 
size and report it appropriately? That way such cases, if they ever 
happen, would be reported loudly rather than failing silently.

>>>> +static struct iommu_domain default_domain = {
>>>> +	.ops = &(const struct iommu_domain_ops){
>>>> +		.attach_dev = default_domain_attach_dev
>>>> +	}
>>>> +};
>>>
>>> Looks like you could make it a static xen_iommu_domain and just use the
>>> normal attach callback? Either way please name it something less
>>> confusing like xen_iommu_identity_domain - "default" is far too
>>> overloaded round here already...
>>>
>>
>> Yes, although in the future this domain could be either identity or
>> blocking/paging depending on some upper-level configuration. Should we
>> have both identity and blocking domains and only set the relevant one in
>> iommu_ops, or keep this naming?
> 
> That's something that can be considered if and when it does happen. For 
> now, if it's going to be pre-mapped as an identity domain, then let's 
> just treat it as such and keep things straightforward.
> 

Let's name it xen_iommu_identity_domain.

>>>> +void __exit xen_iommu_fini(void)
>>>> +{
>>>> +	pr_info("Unregistering Xen IOMMU driver\n");
>>>> +
>>>> +	iommu_device_unregister(&xen_iommu_device);
>>>> +	iommu_device_sysfs_remove(&xen_iommu_device);
>>>> +}
>>>> +}
>>>
>>> This is dead code since the Kconfig is only "bool". Either allow it to
>>> be an actual module (and make sure that works), or drop the pretence
>>> altogether.
>>>
>>
>> OK, I thought this function was actually a requirement even if it is not
>> a module.
> 
> No, quite the opposite - even code which is modular doesn't have to 
> support removal if it doesn't want to.
> 

Ok

> Thanks,
> Robin.

Teddy


Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:42:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:42:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748959.1156929 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSuk-00052g-Ou; Wed, 26 Jun 2024 13:42:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748959.1156929; Wed, 26 Jun 2024 13:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMSuk-00052Z-Ln; Wed, 26 Jun 2024 13:42:22 +0000
Received: by outflank-mailman (input) for mailman id 748959;
 Wed, 26 Jun 2024 13:42:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMSuj-00052F-PN; Wed, 26 Jun 2024 13:42:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMSuj-0007uB-DG; Wed, 26 Jun 2024 13:42:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMSuj-0001mn-3k; Wed, 26 Jun 2024 13:42:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMSuj-00072e-3G; Wed, 26 Jun 2024 13:42:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QSdgy8KI+6ytEP8lD8RjgOIwawq2rYpCkhy9PVE0yMc=; b=BI+aWwnCCjcbZZh3tnbsZs2s+z
	wvV4uViqASX+julruOHCHoxYqsjpqI5ZtInvIZQcix1d66TghdAdij9bqNcrJ3huS6OKUVrh8xTpR
	iqveRwnSN9hqWlTkFmV3kQL28/qXDrRaXnkWndS3i33Zp6YRC2fnslkLDmPg1v0qGn3c=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186510-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186510: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=853f707cd9e2eb9410dbfbadbd5a01ac0252ef83
X-Osstest-Versions-That:
    xen=11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 13:42:21 +0000

flight 186510 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186510/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186499
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186499
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186499
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186499
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186499
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186499
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  853f707cd9e2eb9410dbfbadbd5a01ac0252ef83
baseline version:
 xen                  11ea49a3fda5f0cbd8546ee8bdc5e9c55736c828

Last test of basis   186499  2024-06-25 22:39:04 Z    0 days
Testing same since   186510  2024-06-26 05:21:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
  Federico Serafini <federico.serafini@bugseng.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   11ea49a3fd..853f707cd9  853f707cd9e2eb9410dbfbadbd5a01ac0252ef83 -> master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:54:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.748997.1156954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT6F-0008Cb-3O; Wed, 26 Jun 2024 13:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 748997.1156954; Wed, 26 Jun 2024 13:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT6E-0008CU-W8; Wed, 26 Jun 2024 13:54:14 +0000
Received: by outflank-mailman (input) for mailman id 748997;
 Wed, 26 Jun 2024 13:54:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMT6D-0008CO-Qo
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:54:13 +0000
Received: from mail-ej1-x632.google.com (mail-ej1-x632.google.com
 [2a00:1450:4864:20::632])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 91acfa60-33c3-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:54:13 +0200 (CEST)
Received: by mail-ej1-x632.google.com with SMTP id
 a640c23a62f3a-a72604c8c5bso394524666b.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:54:13 -0700 (PDT)
Received: from localhost ([160.101.139.1]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7261c5bc2asm231591566b.186.2024.06.26.06.54.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 06:54:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91acfa60-33c3-11ef-90a3-e314d9c70b13
Mime-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=UTF-8
Date: Wed, 26 Jun 2024 14:54:11 +0100
Message-Id: <D29ZZSXN0QPV.2627WUC2J3NUK@cloud.com>
Subject: Re: [PATCH v2 (resend) 04/27] acpi: vmap pages in
 acpi_os_alloc_memory
From: "Alejandro Vallejo" <alejandro.vallejo@cloud.com>
To: "Elias El Yandouzi" <eliasely@amazon.com>,
 <xen-devel@lists.xenproject.org>
Cc: <julien@xen.org>, <pdurrant@amazon.com>, <dwmw@amazon.com>, "Hongyan
 Xia" <hongyxia@amazon.com>, "Andrew Cooper" <andrew.cooper3@citrix.com>,
 "George Dunlap" <george.dunlap@citrix.com>, "Jan Beulich"
 <jbeulich@suse.com>, "Stefano Stabellini" <sstabellini@kernel.org>, "Wei
 Liu" <wl@xen.org>, "Julien Grall" <jgrall@amazon.com>
X-Mailer: aerc 0.17.0
References: <20240116192611.41112-1-eliasely@amazon.com>
 <20240116192611.41112-5-eliasely@amazon.com>
In-Reply-To: <20240116192611.41112-5-eliasely@amazon.com>

I'm late to the party but there's something bothering me a little.

On Tue Jan 16, 2024 at 7:25 PM GMT, Elias El Yandouzi wrote:
> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> index 171271fae3..966a7e763f 100644
> --- a/xen/common/vmap.c
> +++ b/xen/common/vmap.c
> @@ -245,6 +245,11 @@ void *vmap(const mfn_t *mfn, unsigned int nr)
>      return __vmap(mfn, 1, nr, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
>  }
> 
> +void *vmap_contig(mfn_t mfn, unsigned int nr)
> +{
> +    return __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> +}
> +
>  unsigned int vmap_size(const void *va)
>  {
>      unsigned int pages = vm_size(va, VMAP_DEFAULT);

How is vmap_contig() different from regular vmap()?

vmap() calls map_pages_to_xen() `nr` times, while vmap_contig() calls it just
once. I'd expect both cases to work fine as they are. What am I missing? What
would make...

> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> index 389505f786..ab80d6b2a9 100644
> --- a/xen/drivers/acpi/osl.c
> +++ b/xen/drivers/acpi/osl.c
> @@ -221,7 +221,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
>  	void *ptr;
> 
>  	if (system_state == SYS_STATE_early_boot)
> -		return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
> +	{
> +		mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
> +
> +		return vmap_contig(mfn, PFN_UP(sz));
... this statement not operate identically with regular vmap()? Or,
probably more interestingly, what would preclude existing calls to vmap()
from operating under vmap_contig() instead?

I'm guessing it has to do with ARM having granules, but the looping logic
seems wonky in the non-4K case anyway, seeing how the va jumps are based
on PAGE_SIZE.

> +	}
> 
>  	ptr = xmalloc_bytes(sz);
>  	ASSERT(!ptr || is_xmalloc_memory(ptr));
> @@ -246,5 +250,11 @@ void __init acpi_os_free_memory(void *ptr)
>  	if (is_xmalloc_memory(ptr))
>  		xfree(ptr);
>  	else if (ptr && system_state == SYS_STATE_early_boot)
> -		init_boot_pages(__pa(ptr), __pa(ptr) + PAGE_SIZE);
> +	{
> +		paddr_t addr = mfn_to_maddr(vmap_to_mfn(ptr));
> +		unsigned int nr = vmap_size(ptr);
> +
> +		vunmap(ptr);
> +		init_boot_pages(addr, addr + nr * PAGE_SIZE);
> +	}
>  }
> diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
> index 24c85de490..0c16baa85f 100644
> --- a/xen/include/xen/vmap.h
> +++ b/xen/include/xen/vmap.h
> @@ -15,6 +15,7 @@ void vm_init_type(enum vmap_region type, void *start, void *end);
>  void *__vmap(const mfn_t *mfn, unsigned int granularity, unsigned int nr,
>               unsigned int align, unsigned int flags, enum vmap_region type);
>  void *vmap(const mfn_t *mfn, unsigned int nr);
> +void *vmap_contig(mfn_t mfn, unsigned int nr);
>  void vunmap(const void *va);
> 
>  void *vmalloc(size_t size);


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749007.1156965 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9x-0000b6-JA; Wed, 26 Jun 2024 13:58:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749007.1156965; Wed, 26 Jun 2024 13:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9x-0000az-Eo; Wed, 26 Jun 2024 13:58:05 +0000
Received: by outflank-mailman (input) for mailman id 749007;
 Wed, 26 Jun 2024 13:58:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMT9v-0000af-L2
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:03 +0000
Received: from mail-ot1-x32e.google.com (mail-ot1-x32e.google.com
 [2607:f8b0:4864:20::32e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 197228bd-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:01 +0200 (CEST)
Received: by mail-ot1-x32e.google.com with SMTP id
 46e09a7af769-700d1721dd9so401730a34.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:01 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.57.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:57:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 197228bd-33c4-11ef-b4bb-af5377834399
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 00/14] AMD Nested Virt Preparation
Date: Wed, 26 Jun 2024 14:38:39 +0100
Message-Id: <20240626133853.4150731-1-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This is my work-in-progress series for getting nested virt working
again on AMD.

The first patch is an early draft to integrate the SVM bits into the
CPUID framework.  It will be partially superseded by a series Andrew
has posted but which has not yet been checked in.

The second patch is a workaround which hides the PCID bit from the L1
when nested HVM is enabled.  Long-term, nested support for PCID will
be necessary for performance.

Most of the rest of the patches involve improving tracing: either
fixing what's bitrotted in xenalyze, fixing what's bitrotted specifically
on the SVM side, or improving the upstream tracing itself.

The last patch is a placeholder for future work, commenting one place
where the behavior is wrong and another where more work needs to be done
to ensure safety.

George Dunlap (14):
  x86/cpuid-policy: Add AMD SVM CPUID leaf to featureset
  x86/cpu-policy: HACK Disable PCID when nested virt is enabled
  xenalyze: Basic nested virt processing
  xenalyze: Track generic event information when not in summary mode
  xenalyze: Ignore vmexits where an HVM_HANDLER trace would be
    redundant
  xen/svm: Remove redundant HVM_HANDLER trace for EXCEPTION_AC
  xen/hvm: Don't skip MSR_READ trace record
  svm: Do NPF trace before calling hvm_hap_nested_page_fault
  x86/emulate: Don't trace cr reads during emulation
  xenalyze: Quiet warnings about VMEXIT_IOIO
  x86/trace: Add trace to xsetbv svm/vmx handler path
  xenalyze: Basic processing for XSETBV exits and handlers
  x86/svm: Add a trace for VMEXIT_VMRUN
  x86/nestedsvm: Note some places for improvement

 tools/libs/light/libxl_cpuid.c              |   1 +
 tools/misc/xen-cpuid.c                      |   1 +
 tools/xentrace/xenalyze.c                   | 133 +++++++++++++++-----
 xen/arch/x86/cpu-policy.c                   |  24 ++--
 xen/arch/x86/cpu/common.c                   |   2 +
 xen/arch/x86/hvm/emulate.c                  |   1 -
 xen/arch/x86/hvm/hvm.c                      |   4 +-
 xen/arch/x86/hvm/svm/nestedsvm.c            |  13 ++
 xen/arch/x86/hvm/svm/svm.c                  |   7 +-
 xen/include/public/arch-x86/cpufeatureset.h |  16 +++
 xen/include/public/trace.h                  |   1 +
 xen/include/xen/lib/x86/cpu-policy.h        |  10 +-
 xen/lib/x86/cpuid.c                         |   4 +-
 13 files changed, 167 insertions(+), 50 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749009.1156985 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9y-00014K-Vd; Wed, 26 Jun 2024 13:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749009.1156985; Wed, 26 Jun 2024 13:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9y-000147-S7; Wed, 26 Jun 2024 13:58:06 +0000
Received: by outflank-mailman (input) for mailman id 749009;
 Wed, 26 Jun 2024 13:58:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMT9x-0000af-BC
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:05 +0000
Received: from mail-qk1-x733.google.com (mail-qk1-x733.google.com
 [2607:f8b0:4864:20::733])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ab3aff0-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:03 +0200 (CEST)
Received: by mail-qk1-x733.google.com with SMTP id
 af79cd13be357-795a4fde8bfso396633185a.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:03 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ab3aff0-33c4-11ef-b4bb-af5377834399
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 02/14] x86/cpu-policy: HACK Disable PCID when nested virt is enabled
Date: Wed, 26 Jun 2024 14:38:41 +0100
Message-Id: <20240626133853.4150731-3-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The non-nested HVM code knows how to provide PCID functionality
(non-zero values in the lower 12 bits of CR3 when running in 64-bit
mode), but the nested code doesn't.  If the L2 decides to use the PCID
functionality, the L0 will fail the next L1 VMENTRY.

Long term we definitely want to enable this feature, but for now, just
hide it from guests when nested HVM is enabled.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 xen/arch/x86/cpu-policy.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index d3ba177dac..91281b44b0 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -916,6 +916,7 @@ void recalculate_cpuid_policy(struct domain *d)
              * hosts.
              */
             fs[FEATURESET_ead] = max_fs[FEATURESET_ead];
+            __clear_bit(X86_FEATURE_PCID, max_fs);
         }
     }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749008.1156969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9x-0000dM-Od; Wed, 26 Jun 2024 13:58:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749008.1156969; Wed, 26 Jun 2024 13:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9x-0000cc-LM; Wed, 26 Jun 2024 13:58:05 +0000
Received: by outflank-mailman (input) for mailman id 749008;
 Wed, 26 Jun 2024 13:58:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMT9w-0000af-B1
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:04 +0000
Received: from mail-ot1-x32f.google.com (mail-ot1-x32f.google.com
 [2607:f8b0:4864:20::32f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1a2cb9e6-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:02 +0200 (CEST)
Received: by mail-ot1-x32f.google.com with SMTP id
 46e09a7af769-700cc97b220so859853a34.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:02 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.57.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a2cb9e6-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410281; x=1720015081; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=mXSq9cIt+cDOKNPJ1wEig1h0MbpC9b/C/ll62WYu3WA=;
        b=YCpEDFcFJ7d2G5KY/dtFwIhg8TPxfs3+LvPqcYHhNqtVT7zOir9QjqYWYDZTc+QAHu
         ArMYRa9kO++X8HFRHrOhMKIjWS4hsmZYoccqW9bv9fiJQQWYm8qSfW0WlHisiQRHN+Cq
         epTnWAWXa0XIb1ZVRep/P52AvPfwAczCzmPAA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410281; x=1720015081;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=mXSq9cIt+cDOKNPJ1wEig1h0MbpC9b/C/ll62WYu3WA=;
        b=GWZ40zcs3ns+/vKHS4yuVCs5nPmJRtQbtjIzs/6FtTNk4NPmvqgoIceB+f33M+fTpB
         X6bgGMPQiF/7DDmmzSJlztH/hvaYn3xegVJeZZq1/EScS2bh8T8F/5AqCZT0DCj59I5U
         rxssGs+vYYhq/cirfskf5lWA1H9dHgvpa6hIMIPdAZ4YKOlzXvVJQTKMIfGpgRUVZY+Z
         PTfEj1jtVo1fOKvb1d+jNIgG7NL1BgW2wRG7WWzMJ2G4jc4J9BiZ6Oa0tSQ1Z3jcEcbp
         jqS+44NFYpqhsz79ecMH3xU16yCabi0tJuaZkvJTftAU50gWFdYvhGQEthx0K7OrfAlv
         LGUQ==
X-Gm-Message-State: AOJu0Yy+uafvBOeMq5M+fIP1CPNs053Pvs12+iIFgxj6v4h0uf8xDR2T
	0/PT9djhK8oJpgxNUQ/N/3IdaOvITBhEGqPwd+v0rFRuhxKOVPQ3YYjJzdUw++r2EXObq7ZixYl
	Ef24=
X-Google-Smtp-Source: AGHT+IFZ+U741Xe22L3IWdU6A/JpCNgMUNRvCO/9WayrRbFltM4UDYHFtnKiLh3aCsSZgpCCJu66BA==
X-Received: by 2002:a05:6830:e8b:b0:6f9:7145:ed49 with SMTP id 46e09a7af769-700b117f908mr12160357a34.7.1719410280859;
        Wed, 26 Jun 2024 06:58:00 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@cloud.com>
Subject: [PATCH WIP 01/14] x86/cpuid-policy: Add AMD SVM CPUID leaf to featureset
Date: Wed, 26 Jun 2024 14:38:40 +0100
Message-Id: <20240626133853.4150731-2-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

NOTE: This patch is partially superseded by Andrew Cooper's "x86:
AMD CPUID handling improvements" series.

Currently, the CPUID leaf for SVM features (extd 0xa.edx) is manually
twiddled:

 - hvm_max_policy takes host_policy and clamps it to supported
   features (with some features unilaterally enabled because they're
   always emulated)

 - hvm_default_policy is copied from there

 - When recalculate_cpuid_policy() is called for a guest, if SVM is
   clear, then the entire leaf is zeroed out.

Move to a mode where the extended features are off by default, and
enabled when nested_virt is enabled.

In cpufeatureset.h, define a new featureset word for the AMD SVM
features, and declare all of the bits defined in
x86/include/asm/hvm/svm/svm.h.  Mark the ones we currently pass
through to the "max policy" as HAP-only and optional.

In cpu-policy.h, define FEATURESET_ead, and convert the unnamed space
in struct cpu_policy into the appropriate union.  FIXME: Do this in a
prerequisite patch, and change all references to p->extd.raw[0xa].
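
The union described above can be sketched roughly as follows.  This is
a minimal illustration only: in Xen the real layout and the
DECL_BITFIELD(ead) expansion are generated from cpufeatureset.h, and
the bitfield names and positions below are assumptions for the sketch:

```c
#include <stdint.h>
#include <assert.h>

/*
 * Rough sketch of the new leaf 0x8000000a.edx union: the same 32 bits
 * are visible both as the raw register value ("ead") and as named
 * feature bitfields.  Field names/positions are illustrative; in Xen
 * they come from the generated DECL_BITFIELD(ead).
 */
union svm_feat {
    uint32_t ead;                 /* raw CPUID 0x8000000a.edx */
    struct {
        uint32_t npt:1;           /* bit 0: nested paging */
        uint32_t lbrv:1;          /* bit 1: LBR virtualization */
        uint32_t svml:1;          /* bit 2: SVM lock */
        uint32_t nrips:1;         /* bit 3: NRIP save on #VMEXIT */
        uint32_t /* rest */:28;
    };
};
```

Writing through .ead and reading individual bits (or vice versa) is
what lets x86_cpu_policy_to_featureset() treat the leaf as a plain
featureset word while policy code tests individual features.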

Update x86_cpu_policy_to_featureset() and x86_cpu_featureset_to_policy()
to copy this into and out of the appropriate leaf.

Populate this during boot in generic_identify().

Add the new featureset definition into libxl_cpuid.c.

Update the code in calculate_hvm_max_policy() to do nothing with the
"normal" CPUID bits, and use the feature bit to unconditionally enable
VMCBCLEAN.  FIXME: Move this to a follow-up patch.

In recalculate_cpuid_policy(), copy the SVM feature leaf from max_fs
when nested HVM is enabled.
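
The recalculate_cpuid_policy() change amounts to the following
pattern.  This is a standalone sketch, not Xen code: the FEATURESET_EAD
index value and the function name are assumptions for illustration:

```c
#include <stdint.h>

#define FEATURESET_EAD 18    /* illustrative word index for 0x8000000a.edx */
#define FEATURESET_NR  19

/*
 * When nested virt is disabled, the guest sees no SVM feature bits;
 * when enabled, the whole word is taken from the max policy.  On VMX
 * hosts max_fs[FEATURESET_EAD] is already 0, so the copy is a no-op.
 */
static void adjust_svm_feature_word(uint32_t *fs, const uint32_t *max_fs,
                                    int nested_virt)
{
    if ( nested_virt )
        fs[FEATURESET_EAD] = max_fs[FEATURESET_EAD];
    else
        fs[FEATURESET_EAD] = 0;
}
```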

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@cloud.com>
---
 tools/libs/light/libxl_cpuid.c              |  1 +
 tools/misc/xen-cpuid.c                      |  1 +
 xen/arch/x86/cpu-policy.c                   | 23 ++++++++++-----------
 xen/arch/x86/cpu/common.c                   |  2 ++
 xen/include/public/arch-x86/cpufeatureset.h | 16 ++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h        | 10 ++++++++-
 xen/lib/x86/cpuid.c                         |  4 +++-
 7 files changed, 43 insertions(+), 14 deletions(-)

diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 063fe86eb7..e8000615ab 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -342,6 +342,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *policy, const char* str)
         CPUID_ENTRY(0x00000007,  1, CPUID_REG_EDX),
         MSR_ENTRY(0x10a, CPUID_REG_EAX),
         MSR_ENTRY(0x10a, CPUID_REG_EDX),
+        CPUID_ENTRY(0x8000000a, NA, CPUID_REG_EDX),
 #undef MSR_ENTRY
 #undef CPUID_ENTRY
     };
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index 4c4593528d..460acec46c 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -37,6 +37,7 @@ static const struct {
     { "CPUID 0x00000007:1.edx",     "7d1" },
     { "MSR_ARCH_CAPS.lo",         "m10Al" },
     { "MSR_ARCH_CAPS.hi",         "m10Ah" },
+    { "CPUID 0x8000000a.edx",       "ead" },
 };
 
 #define COL_ALIGN "24"
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 304dc20cfa..d3ba177dac 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -751,22 +751,16 @@ static void __init calculate_hvm_max_policy(void)
     if ( !cpu_has_vmx )
         __clear_bit(X86_FEATURE_PKS, fs);
 
-    /* 
+    /*
      * Make adjustments to possible (nested) virtualization features exposed
      * to the guest
      */
     if ( p->extd.svm )
     {
-        /* Clamp to implemented features which require hardware support. */
-        p->extd.raw[0xa].d &= ((1u << SVM_FEATURE_NPT) |
-                               (1u << SVM_FEATURE_LBRV) |
-                               (1u << SVM_FEATURE_NRIPS) |
-                               (1u << SVM_FEATURE_PAUSEFILTER) |
-                               (1u << SVM_FEATURE_DECODEASSISTS));
         /* Enable features which are always emulated. */
-        p->extd.raw[0xa].d |= (1u << SVM_FEATURE_VMCBCLEAN);
+        __set_bit(X86_FEATURE_VMCBCLEAN, fs);
     }
-    
+
     guest_common_max_feature_adjustments(fs);
     guest_common_feature_adjustments(fs);
 
@@ -915,6 +909,14 @@ void recalculate_cpuid_policy(struct domain *d)
             __clear_bit(X86_FEATURE_VMX, max_fs);
             __clear_bit(X86_FEATURE_SVM, max_fs);
         }
+        else
+        {
+            /*
+             * Enable SVM features.  This will be empty on VMX
+             * hosts.
+             */
+            fs[FEATURESET_ead] = max_fs[FEATURESET_ead];
+        }
     }
 
     /*
@@ -981,9 +983,6 @@ void recalculate_cpuid_policy(struct domain *d)
          ((vpmu_mode & XENPMU_MODE_ALL) && !is_hardware_domain(d)) )
         p->basic.raw[0xa] = EMPTY_LEAF;
 
-    if ( !p->extd.svm )
-        p->extd.raw[0xa] = EMPTY_LEAF;
-
     if ( !p->extd.page1gb )
         p->extd.raw[0x19] = EMPTY_LEAF;
 }
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index ff4cd22897..da5ad10516 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -478,6 +478,8 @@ static void generic_identify(struct cpuinfo_x86 *c)
 		c->x86_capability[FEATURESET_e7d] = cpuid_edx(0x80000007);
 	if (c->extended_cpuid_level >= 0x80000008)
 		c->x86_capability[FEATURESET_e8b] = cpuid_ebx(0x80000008);
+	if (c->extended_cpuid_level >= 0x8000000a)
+		c->x86_capability[FEATURESET_ead] = cpuid_edx(0x8000000a);
 	if (c->extended_cpuid_level >= 0x80000021)
 		c->x86_capability[FEATURESET_e21a] = cpuid_eax(0x80000021);
 
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index d9eba5e9a7..88d5d0054b 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -365,6 +365,22 @@ XEN_CPUFEATURE(RFDS_CLEAR,         16*32+28) /*!A| Register File(s) cleared by V
 
 /* Intel-defined CPU features, MSR_ARCH_CAPS 0x10a.edx, word 17 */
 
+/* AMD-defined CPU features, CPUID level 0x8000000a.edx, word 18 */
+XEN_CPUFEATURE(NPT,                 18*32+ 0) /*h  Nested page table support */
+XEN_CPUFEATURE(LBRV,                18*32+ 1) /*h  LBR virtualization support */
+XEN_CPUFEATURE(SVML,                18*32+ 2) /*   SVM locking MSR support */
+XEN_CPUFEATURE(NRIPS,               18*32+ 3) /*h  Next RIP save on VMEXIT support */
+XEN_CPUFEATURE(TSCRATEMSR,          18*32+ 4) /*   TSC ratio MSR support */
+XEN_CPUFEATURE(VMCBCLEAN,           18*32+ 5) /*h  VMCB clean bits support */
+XEN_CPUFEATURE(FLUSHBYASID,         18*32+ 6) /*   TLB flush by ASID support */
+XEN_CPUFEATURE(DECODEASSISTS,       18*32+ 7) /*h  Decode assists support */
+XEN_CPUFEATURE(PAUSEFILTER,         18*32+10) /*h  Pause intercept filter support */
+XEN_CPUFEATURE(PAUSETHRESH,         18*32+12) /*   Pause intercept filter threshold */
+XEN_CPUFEATURE(VLOADSAVE,           18*32+15) /*   virtual vmload/vmsave */
+XEN_CPUFEATURE(VGIF,                18*32+16) /*   Virtual GIF */
+XEN_CPUFEATURE(SSS,                 18*32+19) /*   NPT Supervisor Shadow Stacks */
+XEN_CPUFEATURE(SPEC_CTRL,           18*32+20) /*   MSR_SPEC_CTRL virtualisation */
+
 #endif /* XEN_CPUFEATURE */
 
 /* Clean up from a default include.  Close the enum (for C). */
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index d26012c6da..445325a5b5 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -22,6 +22,7 @@
 #define FEATURESET_7d1       15 /* 0x00000007:1.edx    */
 #define FEATURESET_m10Al     16 /* 0x0000010a.eax      */
 #define FEATURESET_m10Ah     17 /* 0x0000010a.edx      */
+#define FEATURESET_ead       18 /* 0x8000000a.edx      */
 
 struct cpuid_leaf
 {
@@ -296,7 +297,14 @@ struct cpu_policy
             uint32_t /* d */:32;
 
             uint64_t :64, :64; /* Leaf 0x80000009. */
-            uint64_t :64, :64; /* Leaf 0x8000000a - SVM rev and features. */
+
+            /* Leaf 0x8000000a - SVM rev and features. */
+            uint64_t /* a, b */:64, /* c */:32;
+            union {
+                uint32_t ead;
+                struct { DECL_BITFIELD(ead); };
+            };
+
             uint64_t :64, :64; /* Leaf 0x8000000b. */
             uint64_t :64, :64; /* Leaf 0x8000000c. */
             uint64_t :64, :64; /* Leaf 0x8000000d. */
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index eb7698dc73..d68f442d4e 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -81,7 +81,8 @@ void x86_cpu_policy_to_featureset(
     fs[FEATURESET_7d1]       = p->feat._7d1;
     fs[FEATURESET_m10Al]     = p->arch_caps.lo;
     fs[FEATURESET_m10Ah]     = p->arch_caps.hi;
-}
+    fs[FEATURESET_ead]       = p->extd.ead;
+}
 
 void x86_cpu_featureset_to_policy(
     const uint32_t fs[FEATURESET_NR_ENTRIES], struct cpu_policy *p)
@@ -104,6 +105,7 @@ void x86_cpu_featureset_to_policy(
     p->feat._7d1             = fs[FEATURESET_7d1];
     p->arch_caps.lo          = fs[FEATURESET_m10Al];
     p->arch_caps.hi          = fs[FEATURESET_m10Ah];
+    p->extd.ead              = fs[FEATURESET_ead];
 }
 
 void x86_cpu_policy_recalc_synth(struct cpu_policy *p)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749010.1156990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9z-00017U-AY; Wed, 26 Jun 2024 13:58:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749010.1156990; Wed, 26 Jun 2024 13:58:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9z-00015j-3u; Wed, 26 Jun 2024 13:58:07 +0000
Received: by outflank-mailman (input) for mailman id 749010;
 Wed, 26 Jun 2024 13:58:05 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMT9x-0000aq-C9
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:05 +0000
Received: from mail-qk1-x729.google.com (mail-qk1-x729.google.com
 [2607:f8b0:4864:20::729])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1b496037-33c4-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:58:04 +0200 (CEST)
Received: by mail-qk1-x729.google.com with SMTP id
 af79cd13be357-79c11e92afaso71239485a.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:04 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b496037-33c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410283; x=1720015083; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=jvj66XLtWaLUUhs6hHRcK0ccgrRw6yMVK7cJhOUGdj4=;
        b=RKQrSjKSwCvg2r8kpo3ysSgxPwmWvJPnWxdIAPtON0ZWr7NhRqXB8ulJmn7dKQSYrO
         qUSSnqWV1Mxlr8WKyynv7/frWooV+pTXRDGaJPJwWnGUALOP9fobuk0dgDoFRIbZlLW1
         dHJRSx7k7L77ItffWA4OQPs3wpnmW0wzZEGbE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410283; x=1720015083;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=jvj66XLtWaLUUhs6hHRcK0ccgrRw6yMVK7cJhOUGdj4=;
        b=N8uFceKd6mbFEGC2ag90TX7uuaRg4AOzEEa3PZWCsNJF1Ef/u2q6w8TXSH6Q3N3/Tm
         CW+AJa7KhoRHr85ftN4Xxj1W0/0F8BSRE0dvV6yINVSyXGBL/rWboceV1dUzOXJbNg5b
         0lyVm5yna7eGFGuYczM+ecnM9YSNp5LQbxoLbQf7RMeE2G+FdiAuTjSkr5m2yWhtHaAH
         99DDD/zthhLMcuUbVj05ZIhasPFz3RPDfxFN2AoEt8JnbV9SN8rdJNeVa3icIbbGjYkv
         IzXAla1X2xpQdLOto2rPEYSBkFB5pKaNizP1DxfKrd1reWs3i1POunKjgCezwStsGtAm
         D4Dw==
X-Gm-Message-State: AOJu0Yyw3WjuqcDM6QnjIBtVKkVgRW58HV7YfAMvWUU/Oe3K5cvhCrYF
	u3trU9X1UPAP0Qfa4ak5gA+QuFq23AdcJ7o6Vyt7+fbLho3yb5bztOxdtaVQUpYPpiUxC0vQCab
	3wwI=
X-Google-Smtp-Source: AGHT+IHFDYppDfhSsXX23yD5NHNWcJuD1Vdf+5KwPSJ4NnOz2YoqrvF07uS8JC9JNzH8eNAHdGhIuQ==
X-Received: by 2002:a05:620a:190f:b0:795:598d:b99c with SMTP id af79cd13be357-79be701b2b6mr1258395985a.64.1719410282848;
        Wed, 26 Jun 2024 06:58:02 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 03/14] xenalyze: Basic nested virt processing
Date: Wed, 26 Jun 2024 14:38:42 +0100
Message-Id: <20240626133853.4150731-4-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

On SVM, VMEXITs that occur while an L1 is in non-root mode ("nested
exits") are tagged with the TRC_HVM_NESTEDFLAG flag.  Expand xenalyze
to do basic handling of these records:

- Refactor hvm_process to filter out both TRC_HVM_NESTEDFLAG and
  TRC_64_FLAG when deciding how to process

- Create separate "trace volume" sub-categories for them (HVM_VOL_*),
  tweaking layout so things continue to align

- Visually distinguish nested from non-nested vmexits and vmentries in
  dump mode

At the moment, nested VMEXITs which are passed through to the L1
hypervisor won't have any HVM handlers; record in hvm_data whether a
vmexit was nested, and don't warn when no handler record follows.
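
The flag handling above boils down to masking the modifier bits off
the event id before dispatching, then testing each flag separately.  A
minimal sketch; the TRC_* values here are stand-ins, not Xen's real
trace constants:

```c
#include <stdint.h>

/*
 * Stand-in values: the real TRC_* constants live in Xen's trace
 * headers; these just demonstrate the masking pattern used in the
 * reworked hvm_process() switch.
 */
#define TRC_64_FLAG        0x100u
#define TRC_HVM_NESTEDFLAG 0x400u
#define TRC_HVM_SVM_EXIT   0x001u

/* Strip the modifier flags so one case matches all variants. */
static uint32_t base_event(uint32_t event)
{
    return event & ~(TRC_HVM_NESTEDFLAG | TRC_64_FLAG);
}

/* Recover each modifier independently, as the patch does for
 * accounting (HVM_VOL_*_NESTED) and dump-mode markers ("(n)"). */
static int is_nested(uint32_t event)
{
    return !!(event & TRC_HVM_NESTEDFLAG);
}
```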

While here:
 - Expand one of the error messages with a bit more information
 - Replace repeated `ri->event & TRC_64_FLAG` instances with `flag64`
 - Remove stray whitespace at the end of a line
 - Add a missing newline to a warning statement

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 tools/xentrace/xenalyze.c | 66 ++++++++++++++++++++++++++-------------
 1 file changed, 45 insertions(+), 21 deletions(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index d95e52695f..52ee7a5f9f 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -986,7 +986,9 @@ const char * hvm_event_handler_name[HVM_EVENT_HANDLER_MAX] = {
 
 enum {
     HVM_VOL_VMENTRY,
+    HVM_VOL_VMENTRY_NESTED,
     HVM_VOL_VMEXIT,
+    HVM_VOL_VMEXIT_NESTED,
     HVM_VOL_HANDLER,
     HVM_VOL_EMUL,
     HVM_VOL_MAX
@@ -1014,6 +1016,8 @@ const char *guest_interrupt_case_name[] = {
 const char *hvm_vol_name[HVM_VOL_MAX] = {
     [HVM_VOL_VMENTRY]="vmentry",
     [HVM_VOL_VMEXIT] ="vmexit",
+    [HVM_VOL_VMENTRY_NESTED]="vmentry(n)",
+    [HVM_VOL_VMEXIT_NESTED] ="vmexit(n)",
     [HVM_VOL_HANDLER]="handler",
     [HVM_VOL_EMUL]="emul",
 };
@@ -1380,7 +1384,7 @@ struct hvm_data {
     tsc_t exit_tsc, arc_cycles, entry_tsc;
     unsigned long long rip;
     unsigned exit_reason, event_handler;
-    unsigned int short_summary_done:1, prealloc_unpin:1, wrmap_bf:1;
+    bool nested_vmexit, short_summary_done, prealloc_unpin, wrmap_bf;
 
     /* Immediate processing */
     void *d;
@@ -1807,7 +1811,7 @@ void volume_summary(struct trace_volume *vol)
             case TOPLEVEL_HVM:
                 for(k=0; k<HVM_VOL_MAX; k++) {
                     if(vol->hvm[k])
-                        printf(" +-%-7s: %10lld\n",
+                        printf(" +-%-10s: %10lld\n",
                                hvm_vol_name[k], vol->hvm[k]);
                 }
 
@@ -4633,6 +4637,9 @@ void hvm_generic_postprocess(struct hvm_data *h)
         /* Some exits we don't expect a handler; just return */
         if(opt.svm_mode)
         {
+            /* Nested vmexits passed on to L1's have no handlers for now */
+            if(h->nested_vmexit)
+                return;
             switch(h->exit_reason)
             {
             case VMEXIT_VINTR: /* Equivalent of PENDING_VIRT_INTR */
@@ -4660,8 +4667,10 @@ void hvm_generic_postprocess(struct hvm_data *h)
         if ( !warned[h->exit_reason] )
         {
             /* If we aren't a known exception, warn and log results */
-            fprintf(warn, "%s: Strange, exit %x(%s) missing a handler\n",
-                    __func__, h->exit_reason,
+            fprintf(warn, "%s: d%dv%d Strange, exit %x(%s) missing a handler\n",
+                    __func__,
+                    h->v->d->did, h->v->vid,
+                    h->exit_reason,
                     (h->exit_reason > h->exit_reason_max)
                       ? "[clipped]"
                       : h->exit_reason_name[h->exit_reason]);
@@ -5067,6 +5076,9 @@ static inline void runstate_update(struct vcpu_data *v, int new_runstate,
 
 void hvm_vmexit_process(struct record_info *ri, struct hvm_data *h,
                         struct vcpu_data *v) {
+    bool flag64 = ri->event & TRC_64_FLAG;
+    bool nested = ri->event & TRC_HVM_NESTEDFLAG;
+
     struct {
         union {
             struct {
@@ -5080,7 +5092,7 @@ void hvm_vmexit_process(struct record_info *ri, struct hvm_data *h,
         };
     } *r;
 
-    if ( ri->event & TRC_64_FLAG )
+    if(flag64)
     {
         if (check_extra_words(ri, sizeof(r->x64), "vmexit"))
             return;
@@ -5097,9 +5109,10 @@ void hvm_vmexit_process(struct record_info *ri, struct hvm_data *h,
         set_hvm_exit_reason_data(h, ri->event);
 
     h->vmexit_valid=1;
+    h->nested_vmexit=nested;
     bzero(&h->inflight, sizeof(h->inflight));
 
-    if(ri->event & TRC_64_FLAG) {
+    if(flag64) {
         if(v->guest_paging_levels != 4)
         {
             if ( verbosity >= 6 )
@@ -5145,14 +5158,16 @@ void hvm_vmexit_process(struct record_info *ri, struct hvm_data *h,
     if(opt.dump_all) {
         if ( h->exit_reason < h->exit_reason_max
              && h->exit_reason_name[h->exit_reason] != NULL)
-            printf("]%s vmexit exit_reason %s eip %llx%s\n",
+            printf("]%s vmexit%s exit_reason %s eip %llx%s\n",
                    ri->dump_header,
+                   nested ? "(n)" : "",
                    h->exit_reason_name[h->exit_reason],
                    h->rip,
                    find_symbol(h->rip));
         else
-            printf("]%s vmexit exit_reason %x eip %llx%s\n",
+            printf("]%s vmexit%s exit_reason %x eip %llx%s\n",
                    ri->dump_header,
+                   nested ? "(n)" : "",
                    h->exit_reason,
                    h->rip,
                    find_symbol(h->rip));
@@ -5244,11 +5259,13 @@ void hvm_close_vmexit(struct hvm_data *h, tsc_t tsc) {
 }
 
 void hvm_vmentry_process(struct record_info *ri, struct hvm_data *h) {
+    bool nested = ri->event & TRC_HVM_NESTEDFLAG;
     if(!h->init)
     {
         if(opt.dump_all)
-            printf("!%s vmentry\n",
-                   ri->dump_header);
+            printf("!%s vmentry%s\n",
+                   ri->dump_header,
+                   nested ? "(n)" : "");
         return;
     }
 
@@ -5283,15 +5300,18 @@ void hvm_vmentry_process(struct record_info *ri, struct hvm_data *h) {
     if(!h->vmexit_valid)
     {
         if(opt.dump_all)
-            printf("!%s vmentry\n",
-                   ri->dump_header);
+            printf("!%s vmentry%s\n",
+                   ri->dump_header,
+                   nested ? "(n)" : "");
         return;
     }
 
     if(opt.dump_all) {
         unsigned long long arc_cycles = ri->tsc - h->exit_tsc;
-        printf("]%s vmentry cycles %lld %s\n",
-               ri->dump_header, arc_cycles, (arc_cycles>10000)?"!":"");
+        printf("]%s vmentry%s cycles %lld %s\n",
+               ri->dump_header,
+               nested ? "(n)" : "",
+               arc_cycles, (arc_cycles>10000)?"!":"");
     }
 
     hvm_close_vmexit(h, ri->tsc);
@@ -5323,16 +5343,20 @@ void hvm_process(struct pcpu_info *p)
         /* FIXME: Collect analysis on this */
         break;
     default:
-        switch(ri->event) {
+        switch(ri->event & ~(TRC_HVM_NESTEDFLAG|TRC_64_FLAG)) {
         case TRC_HVM_VMX_EXIT:
-        case TRC_HVM_VMX_EXIT64:
         case TRC_HVM_SVM_EXIT:
-        case TRC_HVM_SVM_EXIT64:
-            UPDATE_VOLUME(p, hvm[HVM_VOL_VMEXIT], ri->size);
+            if (ri->event & TRC_HVM_NESTEDFLAG)
+                UPDATE_VOLUME(p, hvm[HVM_VOL_VMEXIT_NESTED], ri->size);
+            else
+                UPDATE_VOLUME(p, hvm[HVM_VOL_VMEXIT], ri->size);
             hvm_vmexit_process(ri, h, v);
             break;
         case TRC_HVM_VMENTRY:
-            UPDATE_VOLUME(p, hvm[HVM_VOL_VMENTRY], ri->size);
+            if (ri->event & TRC_HVM_NESTEDFLAG)
+                UPDATE_VOLUME(p, hvm[HVM_VOL_VMENTRY_NESTED], ri->size);
+            else
+                UPDATE_VOLUME(p, hvm[HVM_VOL_VMENTRY], ri->size);
             hvm_vmentry_process(ri, &p->current->hvm);
             break;
         default:
@@ -6977,7 +7001,7 @@ void vcpu_start(struct pcpu_info *p, struct vcpu_data *v,
         /* Change default domain to 'queued' */
         runstate_update(p->current, RUNSTATE_QUEUED, p->first_tsc);
 
-        /* 
+        /*
          * Set current to NULL, so that if another vcpu (not in INIT)
          * is scheduled here, we don't trip over the check in
          * vcpu_next_update()
@@ -6997,7 +7021,7 @@ void vcpu_start(struct pcpu_info *p, struct vcpu_data *v,
 
         return;
     } else if ( v->delayed_init ) {
-        fprintf(warn, "d%dv%d RUNSTATE_RUNNING detected, leaving INIT",
+        fprintf(warn, "d%dv%d RUNSTATE_RUNNING detected, leaving INIT\n",
                 v->d->did, v->vid);
         v->delayed_init = 0;
     }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749011.1156996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9z-0001HJ-Pj; Wed, 26 Jun 2024 13:58:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749011.1156996; Wed, 26 Jun 2024 13:58:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9z-0001F2-Lu; Wed, 26 Jun 2024 13:58:07 +0000
Received: by outflank-mailman (input) for mailman id 749011;
 Wed, 26 Jun 2024 13:58:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMT9y-0000aq-J6
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:06 +0000
Received: from mail-ot1-x329.google.com (mail-ot1-x329.google.com
 [2607:f8b0:4864:20::329])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1c51c914-33c4-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:58:06 +0200 (CEST)
Received: by mail-ot1-x329.google.com with SMTP id
 46e09a7af769-6fd3d9f572fso3187114a34.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:06 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c51c914-33c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410285; x=1720015085; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=PeXJAa32g4UEe7AmZVpxSa6rawo1LbJSKY3deaYLOBo=;
        b=dw91qHep2LwHa7dnqyxLp9RKZ2E6b++Rodb7rMyCIZYlLaBk33Ki6xyjoNgwx7haaQ
         CYX997tkLQxtQDS7tQIxY81bXE1EKhTSzHRE301b2tl7c9jYD30+AjocsKpBURTsRAxu
         ZqGQFeYQKY+UBnU7ZLm7IzdVW9qu26+h1X960=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410285; x=1720015085;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=PeXJAa32g4UEe7AmZVpxSa6rawo1LbJSKY3deaYLOBo=;
        b=Rky0HY7bzmsFc1H9bQ9x6N0RhuhcQguwdlHZDObUKT8E+S/rpsIu2GYR35hNYNafcl
         KunbWizmvJM3urbOymyDxtoJf3uF4aEWMlGEgbaVyqt5YHRx1yv0DVHoULECNiahYh3J
         u6B+PUSr7gAabGOP0HpDlMuLfkuNYe1nGCQ+ZY/zGKQsl6Lvslr+q1l3WSNqu4tU3k3v
         8VC3fF2x0nPdZzLyUoM2b6MLEK6WfxryJW2K3j7KMCZuBY+wT7Z0F8VtfnJSt4xDhAcW
         Xey4gQm1Fo9Flfi2a/L4OL2vxfmyKoZIhKPGv3FMB+dcqVes7LU8UOk6zKhG3hBgWcY6
         2R8A==
X-Gm-Message-State: AOJu0Yziw1QC+f/wXD9XQcwaxb7t/vbr1IcF8K+01BJlz9OtJdbYELYm
	dwremCgFY7EpwQDRwz4N1sgpgJAnrLt8qRPmn0c86PT4/G5r7mQQtCCq5MiaVrAoS2VMv18RIdn
	pLXs=
X-Google-Smtp-Source: AGHT+IHJesmEszuQOmOLW16x2yS6nMyF1lEaZlEgMRvbsep/0oYldxQi8lVX09o99u1oxbaAhpAY8A==
X-Received: by 2002:a9d:7cc6:0:b0:6f9:aa39:fd7d with SMTP id 46e09a7af769-700af8ad056mr11300588a34.6.1719410284675;
        Wed, 26 Jun 2024 06:58:04 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 05/14] xenalyze: Ignore vmexits where an HVM_HANDLER trace would be redundant
Date: Wed, 26 Jun 2024 14:38:44 +0100
Message-Id: <20240626133853.4150731-6-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

VMX combines all exceptions into a single VMEXIT exit reason, and so
needs a separate HVM_HANDLER trace record (HVM_TRAP) to say which
exception happened; but for a number of exceptions, no further
information is put into the trace log.

SVM, by contrast, has a separate VMEXIT exit reason for each exception
vector, so HVM_TRAP would be redundant.

NB that VMEXIT_EXCEPTION_DB doesn't have an HVM_HANDLER trace for SVM
yet; for VMX there's a specific HVM_HANDLER trace record which
includes more information, so SVM really should record that
information as well.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 tools/xentrace/xenalyze.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 46248e9a70..19b99dc66d 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -4641,6 +4641,14 @@ void hvm_generic_postprocess(struct hvm_data *h)
             {
             case VMEXIT_VINTR: /* Equivalent of PENDING_VIRT_INTR */
             case VMEXIT_PAUSE:
+                /*
+                 * VMX just has TRC_HVM_TRAP for these, which would be
+                 * redundant on SVM
+                 */
+            case VMEXIT_EXCEPTION_BP:
+            case VMEXIT_EXCEPTION_NM:
+            case VMEXIT_EXCEPTION_AC:
+            case VMEXIT_EXCEPTION_UD:
                 return;
             default:
                 break;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749012.1157005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA0-0001N2-C4; Wed, 26 Jun 2024 13:58:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749012.1157005; Wed, 26 Jun 2024 13:58:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMT9z-0001L2-Um; Wed, 26 Jun 2024 13:58:07 +0000
Received: by outflank-mailman (input) for mailman id 749012;
 Wed, 26 Jun 2024 13:58:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMT9y-0000af-JH
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:06 +0000
Received: from mail-qk1-x72b.google.com (mail-qk1-x72b.google.com
 [2607:f8b0:4864:20::72b])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1bc125b2-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:05 +0200 (CEST)
Received: by mail-qk1-x72b.google.com with SMTP id
 af79cd13be357-79c06a06a8eso99842385a.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:05 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bc125b2-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410284; x=1720015084; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=4CicAy88BoQhTqLPRoVgCuGO0iyrk5R5H3mkM3cjN/E=;
        b=J6ZCXsgv6aZiOJCAsaTTg8qJPXOAnqYSAzGzNjwu4pN8+c/V4tTM6Gx06TrimhfXZz
         Z+2IMsj+VCpgXjUzu9ik6hDB3lHAnRSL2CtgzInB0i9+CJK4N5kANfbPDdyqPQbgf/hU
         5M7HRJJq+zAIi9YegdDEnlRC+LGKFVc8i+O40=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410284; x=1720015084;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=4CicAy88BoQhTqLPRoVgCuGO0iyrk5R5H3mkM3cjN/E=;
        b=D0t9vUZeOFo4zOhL43Fk1vUxTKLd/Z6EVe36fNBuGfvnIQhl51NDmSCqbHxZPqNewt
         WMnZlOzlTuIy9Ckt+OKVkawRuD8TpTeINDR1V5twzolgo3shiDVqJvdSpOfU+tcPAuXn
         tPUN5C901OYxiDBCmRm0AaOLGS4zHTz8Dz98o9k5YxSpwTyblNQi6yYiPGBG0U28ft0L
         bd4SjLvPWffAc0mnteXlgJirIFwFXzqLStLl6sHfMGPL5uajWNDgFBsZpJHxCLmkjvYQ
         e1qEEAffGbx36efhVpOYJAUTqv3aT28T7Pfe5RmjLGaRiPpiy+uYZ63r/2clfqtTUbmo
         0Z8w==
X-Gm-Message-State: AOJu0Yx/7G9iTJd5E0FGKqxov6Lo9XhjJex/YZCX9A1SYxs9Fw0UlJSK
	nPjSd7fUn2zdf3Yjfx5J3+b0wdo0R6KhP/qYrb3brPqb5w0VQ+1jnbFqi66Q2xhX3n/1rHRefdr
	9nFY=
X-Google-Smtp-Source: AGHT+IFuaaGzYU5rWVD3+w/otwtbTWoRCXh7i4fUBEqB1FYOqspJGP6Lpr0Q080r7c8cpsLLXlmXCA==
X-Received: by 2002:a05:620a:1a19:b0:797:b319:c3a2 with SMTP id af79cd13be357-79be6eae80dmr1257297985a.39.1719410283769;
        Wed, 26 Jun 2024 06:58:03 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 04/14] xenalyze: Track generic event information when not in summary mode
Date: Wed, 26 Jun 2024 14:38:43 +0100
Message-Id: <20240626133853.4150731-5-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Generally speaking, a VMEXIT/VMENTRY cycle should have at least three
trace records: the VMEXIT trace (which contains the processor-specific
exit code), a more generic Xen-based event (an HVM_HANDLER trace
record), and a VMENTRY trace; and any given VMEXIT exit reason should
only have a single HVM_HANDLER trace type.  Having duplicate or
missing HVM_HANDLER traces is generally indicative of a problem that
has crept into the hypervisor's tracing scheme.

This property is checked in hvm_generic_postprocess(), and violations
are flagged with a warning.  In order to do this, when an HVM trace
record that doesn't have a specific post-processor is encountered,
information about the record is stored in hvm_data->inflight.generic.

Unfortunately, while the check was being done in all "modes", the
relevant information was only being copied into inflight.generic in
summary mode.  This resulted in spurious warnings about missing
HVM_HANDLER traces when running in dump mode.

Since running in dump mode is often critical to understanding how the
warnings came about, collect the information unconditionally.

That said, the data from the trace doesn't appear to be used by
anyone; so to save some time, don't bother copying it.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 tools/xentrace/xenalyze.c | 13 ++++---------
 1 file changed, 4 insertions(+), 9 deletions(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 52ee7a5f9f..46248e9a70 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -1367,7 +1367,6 @@ struct hvm_data {
             } msr;
             struct {
                 unsigned int event;
-                uint32_t d[4];
             } generic;
             struct {
                 unsigned eax;
@@ -4572,8 +4571,7 @@ void hvm_npf_process(struct record_info *ri, struct hvm_data *h)
                (unsigned long long)r->gpa, r->qualification,
                (unsigned long long)r->mfn, r->p2mt);
 
-    if ( opt.summary_info )
-        hvm_generic_postprocess_init(ri, h);
+    hvm_generic_postprocess_init(ri, h);
 }
 
 void hvm_rdtsc_process(struct record_info *ri, struct hvm_data *h)
@@ -4621,7 +4619,6 @@ void hvm_generic_postprocess_init(struct record_info *ri, struct hvm_data *h)
         fprintf(warn, "%s: Strange, h->postprocess set!\n",
                 __func__);
     h->inflight.generic.event = ri->event;
-    bcopy(h->d, h->inflight.generic.d, sizeof(unsigned int) * 4);
 }
 
 void hvm_generic_postprocess(struct hvm_data *h)
@@ -4899,8 +4896,7 @@ needs_vmexit:
     default:
         if(opt.dump_all)
             hvm_generic_dump(ri, "]");
-        if(opt.summary_info)
-            hvm_generic_postprocess_init(ri, h);
+        hvm_generic_postprocess_init(ri, h);
         break;
     }
 }
@@ -6166,11 +6162,10 @@ void shadow_fault_generic_process(struct record_info *ri, struct hvm_data *h)
 
     /* pf-case traces, vs others */
     h->inflight.generic.event = ri->event;
-    bcopy(ri->d, h->inflight.generic.d, sizeof(unsigned int) * 4);
 
     if(opt.dump_all)
-        shadow_fault_generic_dump(h->inflight.generic.event,
-                                  h->inflight.generic.d,
+        shadow_fault_generic_dump(ri->event,
+                                  ri->d,
                                   "]", ri->dump_header);
 
     h->inflight.pf_xen.pf_case = sevt.minor;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:09 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749013.1157022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA1-0001v7-MA; Wed, 26 Jun 2024 13:58:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749013.1157022; Wed, 26 Jun 2024 13:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA1-0001si-Cj; Wed, 26 Jun 2024 13:58:09 +0000
Received: by outflank-mailman (input) for mailman id 749013;
 Wed, 26 Jun 2024 13:58:08 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA0-0000aq-Gt
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:08 +0000
Received: from mail-qk1-x72f.google.com (mail-qk1-x72f.google.com
 [2607:f8b0:4864:20::72f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1d600963-33c4-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:58:07 +0200 (CEST)
Received: by mail-qk1-x72f.google.com with SMTP id
 af79cd13be357-795a4fde8bfso396636985a.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:07 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d600963-33c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410286; x=1720015086; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=RQPpBZ6HUrKlZc92OfyG7VzBOtY27D4cEzdSY9vU/Hg=;
        b=h2vjbbUd/ocY5uNF/tmZ6w9P50ZQ6dAV1H9J6EM4hxaR4FGFgk6qXhPDtc6MRFPgPN
         GGAqx9kImj2WbhL6/YeXXpKkbtyRHUisJYLAEbP7dznu7f3bqkQZwK5POmb6xnKxr0bU
         0JJcwHA2J4kOw0RLi4tKXzLeOMLVpyhGQoKXg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410286; x=1720015086;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=RQPpBZ6HUrKlZc92OfyG7VzBOtY27D4cEzdSY9vU/Hg=;
        b=m5FXvgzq0xCkHat9bJziWLFcZJfr9cCM5aaTNid4ngKkFMS/EzEI9SG5SCXoG8LJeg
         hngOLISnBxat8b1ldn7Ly/XfFBYPtsQ5mVXm6Y9Hpu84bY3EGNvRKqAEBCxERV8s0Oo4
         CrrF1kzGirdFlevEnretDJZAiN9ZGw5Ip4auZsY54MEMVY0uY2SD00lBDd1uz1qsPuyV
         ls0Z3NpIZohZOTaKadw/pBQ74rRhlEzRGM0i76FSrAk/QbcvYGhYzm3RtuOfLWEqQbTV
         /ULj6XyT0YNWM5ih8qtXFRsMVXkNdCrN8vC0QLJsh7hYp82EYa/6yU9NuO2o//o5JM2e
         a6NQ==
X-Gm-Message-State: AOJu0Yx/XIKVzERbyrKdkuat8shuvD+nfvgVdjHa2HCigNxOr8vGcUNq
	753E/moGIUX1hiwEhRq7yzeTawDqyHapN4QZkmAgOkGzO1MNXA+NwQeSzTm2fe8aMikoeeC8duK
	iM1g=
X-Google-Smtp-Source: AGHT+IHYOMfst4jjFAWDaoBPQd4epq0sodwQ8Lc9GhHBIHXeDf1jkpgCheuuVvuRECqN+bBlRsuDqg==
X-Received: by 2002:a05:620a:414b:b0:797:de98:1420 with SMTP id af79cd13be357-79be6e9f513mr1244126985a.45.1719410285511;
        Wed, 26 Jun 2024 06:58:05 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 06/14] xen/svm: Remove redundant HVM_HANDLER trace for EXCEPTION_AC
Date: Wed, 26 Jun 2024 14:38:45 +0100
Message-Id: <20240626133853.4150731-7-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Adding an HVM_TRAP trace record is redundant for EXCEPTION_AC: it adds
trace volume without adding any information, and xenalyze already
knows not to expect it.  Remove it.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 xen/arch/x86/hvm/svm/svm.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 988250dbc1..abe665ee43 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2779,7 +2779,6 @@ void asmlinkage svm_vmexit_handler(void)
     }
 
     case VMEXIT_EXCEPTION_AC:
-        TRACE(TRC_HVM_TRAP, X86_EXC_AC);
         hvm_inject_hw_exception(X86_EXC_AC, vmcb->ei.exc.ec);
         break;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749014.1157035 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA3-0002QO-Qv; Wed, 26 Jun 2024 13:58:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749014.1157035; Wed, 26 Jun 2024 13:58:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA3-0002Q9-M6; Wed, 26 Jun 2024 13:58:11 +0000
Received: by outflank-mailman (input) for mailman id 749014;
 Wed, 26 Jun 2024 13:58:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA2-0000af-CT
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:10 +0000
Received: from mail-oi1-x22a.google.com (mail-oi1-x22a.google.com
 [2607:f8b0:4864:20::22a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1df9d88d-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:08 +0200 (CEST)
Received: by mail-oi1-x22a.google.com with SMTP id
 5614622812f47-3d55c0fadd2so971982b6e.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:08 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1df9d88d-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410287; x=1720015087; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ihcaurA1Zhe1+CN+gql25MhLcUH6MWHK0QwFWHqTTAY=;
        b=UPJzTzWFFtuIwxx5EBW9eSvJoy925ThjJod6rGpTPlQNIObyaiUgPZcuD27wTKg9NO
         nAjAhb4iDYkH55A1Ki6NI0CfWeVLV71nv4MgaYQ8RZkqCMibxtXrGcXhibDBCUGR+Dyv
         /SMJ5m8XQASaNjtbH2AkNooUAyAVgUjTZ72B8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410287; x=1720015087;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=ihcaurA1Zhe1+CN+gql25MhLcUH6MWHK0QwFWHqTTAY=;
        b=ZF7Ep4hVEglaiSGnujBXWhrYDxEkXd97cC28dBMhpk767OCErYN7QcR+eJL+VT1n7s
         IrnGkQh9Mk4qpd7oc6DYLg0sfcYXPnhHaPmwTNjP5bsS4mEa1K0jL0/P6+Fp/yunp9ql
         fWJfcqkgVCsSuwYIFOigfUAXX6Ck57MDB9DNT+yJdLyOdhdvWJ5yoRG3WijThkLMguva
         Y5RjBehbdP2tp3sBbdIcKxCtPwD8uQVc2AMBZNj52uDgYIHyHwA6AOgDsVhLGM25jG0U
         4nHMOOLAyFA5qzxhdCROgnpPU+5TlCGVa29H3ooH+7VQrw6VdJ5MK9thtjV2cay+qJh/
         Q7UQ==
X-Gm-Message-State: AOJu0Yzn8CLWhWKi7NSYN4fkGDp4xT1VsvKofE2D8zx9/U9ZbZWlERJN
	HexhZwMULwsKzGdGUW0nxnvW3nD0VCKiBi89KxIhDyHKs60tDZ5kzP1Tg8t0glqNR3L3QgcBWRk
	du8I=
X-Google-Smtp-Source: AGHT+IEFfaOnT5G7EPh+PZ1hPt4pC+pnq82ztk88/9m2zZJbgavZuE6gOnjPyhkdmYShkwmjac3cmQ==
X-Received: by 2002:a05:6808:30a9:b0:3d2:1d51:e505 with SMTP id 5614622812f47-3d543aa87admr12513465b6e.17.1719410287385;
        Wed, 26 Jun 2024 06:58:07 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 07/14] xen/hvm: Don't skip MSR_READ trace record
Date: Wed, 26 Jun 2024 14:38:46 +0100
Message-Id: <20240626133853.4150731-8-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Commit 37f074a3383 ("x86/msr: introduce guest_rdmsr()") introduced a
function to combine the MSR_READ handling between PV and HVM.
Unfortunately, by returning directly, it skipped the trace generation,
leading to gaps in the trace record, as well as xenalyze errors like
this:

hvm_generic_postprocess: d2v0 Strange, exit 7c(VMEXIT_MSR) missing a handler

Replace the `return` with `goto out`.

Fixes: 37f074a3383 ("x86/msr: introduce guest_rdmsr()")
Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 xen/arch/x86/hvm/hvm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 7f4b627b1f..0fe2b85b16 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3557,7 +3557,7 @@ int hvm_msr_read_intercept(unsigned int msr, uint64_t *msr_content)
     fixed_range_base = (uint64_t *)v->arch.hvm.mtrr.fixed_ranges;
 
     if ( (ret = guest_rdmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
-        return ret;
+        goto out;
 
     ret = X86EMUL_OKAY;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749015.1157045 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA5-0002kO-Dp; Wed, 26 Jun 2024 13:58:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749015.1157045; Wed, 26 Jun 2024 13:58:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA5-0002jz-8y; Wed, 26 Jun 2024 13:58:13 +0000
Received: by outflank-mailman (input) for mailman id 749015;
 Wed, 26 Jun 2024 13:58:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA3-0000af-Ci
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:11 +0000
Received: from mail-qk1-x72d.google.com (mail-qk1-x72d.google.com
 [2607:f8b0:4864:20::72d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ea343a8-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:09 +0200 (CEST)
Received: by mail-qk1-x72d.google.com with SMTP id
 af79cd13be357-79c2c05638cso72828185a.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:09 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ea343a8-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410288; x=1720015088; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=9e+XtFKCBF6+ISNOtkiCTkrjeCZCly6CC4Zvn3nhKU4=;
        b=TYbm68MLDysmVqRx672iicKjQp8G3c+5YQzWh+s0hsAVvQ1RsgslTBnR8QpExkxWFS
         WpdNilK/Y/NYMyBx9sY68ozVvXPMmztYShMdPB43rKtFIJfVvaR/esYhFDLcDs/q9pml
         ToyH91t0W4PX4edbCEsakLtWq9c6VwaGJprXY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410288; x=1720015088;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=9e+XtFKCBF6+ISNOtkiCTkrjeCZCly6CC4Zvn3nhKU4=;
        b=Zsv76ReI6uFyM41hmwcBNW8Etmo0jUlBcM+dhb9L2A02t49eubjxTF/3gT6Qiwhp/D
         j5b5YJJYs7ULeekEbvDgpujxSyNFhk77uNfVKk8hkVw/YMoUWm9zoeWgjRnIGsXzllYG
         ilh/JBAaVevcV83oEMy6sh51rln+mW66jX623uC5oeZFLG4cGQM1dBA/x7dJtWabRHCo
         i1JgdwQhnp+AWIuikZ4ctmfD6lhxJkpxiv0N/Rd65sWPcap7b+P377nAH52HIOtrxR90
         H8FStoBp9/wzv2xtRC2xLhoPSqoFJpNxO90GlPn2l2oYL2spo6XMiMEWf7dBgmfN3Icq
         EY7Q==
X-Gm-Message-State: AOJu0YyiKdmHY0dllCUb2mRw8qnadD3z3VyVEmsx/s2PVDh+Upr9cAn1
	53ox5cQL0Otb4DvkZmu4j1jsl3CTSgR1pg3+cBID0rH9moz1liraL/B0yniwv4rQX41/RI2tWle
	XdlQ=
X-Google-Smtp-Source: AGHT+IFibB7T4hWFxDlipIyL2aXKLogeYDFxykKHCycOIPIolO9DIPzvurs40uQc14FSUfQEpPfE/Q==
X-Received: by 2002:a05:620a:1a8c:b0:795:533f:452c with SMTP id af79cd13be357-79be701cf0fmr1278984585a.73.1719410288627;
        Wed, 26 Jun 2024 06:58:08 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	George Dunlap <georg.dunlap@cloud.com>
Subject: [PATCH WIP 08/14] svm: Do NPF trace before calling hvm_hap_nested_page_fault
Date: Wed, 26 Jun 2024 14:38:47 +0100
Message-Id: <20240626133853.4150731-9-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Unfortunately I've forgotten exactly why I made this change.  I
suspect that there were other traces (like MMIO traces) which were
being emitted before the NPF trace; but to understand the trace
record you'd want to have the NPF information first.

Signed-off-by: George Dunlap <georg.dunlap@cloud.com>
---
 xen/arch/x86/hvm/svm/svm.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index abe665ee43..240401dc77 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1614,8 +1614,6 @@ static void svm_do_nested_pgfault(struct vcpu *v,
     else if ( pfec & NPT_PFEC_in_gpt )
         npfec.kind = npfec_kind_in_gpt;
 
-    ret = hvm_hap_nested_page_fault(gpa, ~0UL, npfec);
-
     if ( tb_init_done )
     {
         struct {
@@ -1636,6 +1634,8 @@ static void svm_do_nested_pgfault(struct vcpu *v,
         trace(TRC_HVM_NPF, sizeof(_d), &_d);
     }
 
+    ret = hvm_hap_nested_page_fault(gpa, ~0UL, npfec);
+
     switch ( ret )
     {
     case 1:
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749016.1157051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA6-0002pQ-2F; Wed, 26 Jun 2024 13:58:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749016.1157051; Wed, 26 Jun 2024 13:58:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA5-0002nS-Mb; Wed, 26 Jun 2024 13:58:13 +0000
Received: by outflank-mailman (input) for mailman id 749016;
 Wed, 26 Jun 2024 13:58:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA4-0000af-SP
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:12 +0000
Received: from mail-ot1-x332.google.com (mail-ot1-x332.google.com
 [2607:f8b0:4864:20::332])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1f79cb05-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:11 +0200 (CEST)
Received: by mail-ot1-x332.google.com with SMTP id
 46e09a7af769-6fa0d077694so3120525a34.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:11 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f79cb05-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410290; x=1720015090; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=DoUNb50e5mp4r/rfJxIl3Pgj6LDhk8b0bQb4gVKsKhE=;
        b=PpoQCUpAqOGNwjGr6KtIYp/S0oIJDezuV6wy9FjvADuo6SsB1tLo3xG3Qk1uqBv7zB
         J371uYBpYTEZ7miQSCHixN81YnO5oJnCJiPxk9sNeVQAtM1BdLfOMtsiO1mml3+S+BRw
         LHHt7tUtPSeSLNkzapQ3ZJECJlbzGH5CJQ3dU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410290; x=1720015090;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=DoUNb50e5mp4r/rfJxIl3Pgj6LDhk8b0bQb4gVKsKhE=;
        b=QWIFzWp6s57N/fqzLdRR615b1ZE3A+8/QBYKrHnoGL1YPm+yEyI9CY7Pyl4p7NF5uD
         1FGK3Rok2veat1hLoL2IOUGpXQwuY5HIMRF4BCDwiJQVFxxmDDoGfl5NBLwu82Q+QNJf
         FZPJfUxXMdMyXI8sqWOUt4cQz6mzNJ9bvM0rg0dJPmKMXonc/9UPHW2pxuzZQv7JrXys
         ixUWRpPg8iG/4GmceHJVeMiR0XTHA84i5soit69F4NI5jln35CyunXupeX2Kls/QEFOQ
         Ybo11JrC/Bz0n76j95Bb92cpAe3a8SZ/4hr/JoLcrm8O8QLrzyI30KvsQOZ0QXbXH7+l
         64aQ==
X-Gm-Message-State: AOJu0YxnAnpLuTVSuGeGuuhF4UUvSfw6EiaDBNMUcEd7GKD+yzvd1x6J
	It+Pp0y0DkxbaSEpwpYK4ZeYrcFu2q5hQSIRaB5XcXrt29ID+rToFn49s1W6Ai1/ck42N5er+Ex
	ujX4=
X-Google-Smtp-Source: AGHT+IGZ+U34IqEAjwOGmI73wgJjK4aN/mscCA7HeKeH+LcTuTCPz+Z7Ougl3kzzmNIG9S+yzLeCNg==
X-Received: by 2002:a9d:3e49:0:b0:700:d3eb:c554 with SMTP id 46e09a7af769-700d3ebc59fmr1552872a34.1.1719410289917;
        Wed, 26 Jun 2024 06:58:09 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 09/14] x86/emulate: Don't trace cr reads during emulation
Date: Wed, 26 Jun 2024 14:38:48 +0100
Message-Id: <20240626133853.4150731-10-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

hvmemul_read_cr ends up being called fairly frequently during
emulation; these reads are generally not necessary for understanding a
trace, and make processing more complicated (because they show up as
extra trace records within a vmexit/entry "arc" that must be
classified).

Leave the trace in write_cr, as any hypothetical writes to CRs *will*
be necessary to understand traces.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 xen/arch/x86/hvm/emulate.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 02e378365b..e78f06cf3f 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -2180,7 +2180,6 @@ static int cf_check hvmemul_read_cr(
     case 3:
     case 4:
         *val = current->arch.hvm.guest_cr[reg];
-        TRACE(TRC_HVM_CR_READ64, reg, *val, *val >> 32);
         return X86EMUL_OKAY;
     default:
         break;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749017.1157061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA7-0003DS-Ev; Wed, 26 Jun 2024 13:58:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749017.1157061; Wed, 26 Jun 2024 13:58:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA7-0003BZ-AX; Wed, 26 Jun 2024 13:58:15 +0000
Received: by outflank-mailman (input) for mailman id 749017;
 Wed, 26 Jun 2024 13:58:13 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA5-0000aq-LD
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:13 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 209e8c70-33c4-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:58:13 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id
 af79cd13be357-79c0bbff48aso152550085a.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:13 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 209e8c70-33c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410292; x=1720015092; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=d3jc/D3oGiKvIBrZUXRabVXNRbnlweDK3F2GwPRZj5o=;
        b=S5T1oBiY7HpIqqQj5UBrBgDEhVlDzpevKmgYY03agoWjnfE1SslwTzWQzFMPT8SuGW
         t0vzskujDGvM1SyGKjVZ7jDGsvAPZ5JQFw/ZveCZQKD5FJaSJ9jLgDQJ8WRqtXMD5BoM
         O3NiGMfTiswm2H8gDsCQJTHEWlUDCC9kKQ03E=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410292; x=1720015092;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=d3jc/D3oGiKvIBrZUXRabVXNRbnlweDK3F2GwPRZj5o=;
        b=Cwuq01hezb9SMfqudNFGxW42/0eglrUv/juDhjeErs+81skXHJqXXuh13Rs3I8iLYm
         Pusjpk0wPiDSEMapKn1/aIRqb7UQI+jKUiNg0H9h69jM7XFFe6zwJYvyCfXGj0hzngVB
         nWtRr1akFqKlOiRnVXmq3PuJANEgm4QHxPCIfoRhma2zTUhUaqpR4xUa7VmLgpdvBnty
         EQMQa1JxiJX8DvNjJ9pCHYdJ8i2g6946ywW9PbngoK1Z8cWNGSOuabCgydQyotVyWm1Z
         q/ei1qzXnU4CH3JwOKX4cegYgkwIyn8i/w941Syp9gxdyjKB/k+a7rdTZCq3WAsRbBNk
         DL2w==
X-Gm-Message-State: AOJu0YzP5DI2jfcsnURcVI1KCxElXdiVrxEI1UTIFjqBQ63XiH3V4eEL
	YakWgf7ryzwiFvopkCH5lLUAE74Om5NHq/OQ2oc6xVgaf4q5f8Ev65X/zrnvwLCvSIiC2g8ihuz
	k7h4=
X-Google-Smtp-Source: AGHT+IGot/l4FhRSt0d1zR1K2xibC+CVce1BvKCqj8naOo9yU4mayrW66/FlAJVn8S5u6fG4LZWhRA==
X-Received: by 2002:a05:620a:3727:b0:798:db85:c999 with SMTP id af79cd13be357-79be6f13d87mr1195903485a.51.1719410291966;
        Wed, 26 Jun 2024 06:58:11 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Andrew Cooper <andrew.cooper@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Roger Pau Monne <roger.pau@cloud.com>
Subject: [PATCH WIP 11/14] x86/trace: Add trace to xsetbv svm/vmx handler path
Date: Wed, 26 Jun 2024 14:38:50 +0100
Message-Id: <20240626133853.4150731-12-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are already "HVM handler" trace records for writing to XCRs in
the context of an HVM guest.  This trace is currently taken in
hvmemul_write_xcr.

However, both the VMX and SVM vmexit handlers call hvm_handle_xsetbv
as a result of an XSETBV vmexit, and hvm_handle_xsetbv calls
x86emul_write_xcr directly, bypassing the trace and resulting in no
"HVM handler" trace record for that VMEXIT.

For maximal DRY-ness, we would want hvm_handle_xsetbv to call
hvmemul_write_xcr; but since the intent seems to be for hvmemul_* to
be accessible only via hvm_emulate(), just duplicate the trace.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Andrew Cooper <andrew.cooper@cloud.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monne <roger.pau@cloud.com>
---
 xen/arch/x86/hvm/hvm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 0fe2b85b16..628a131399 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2073,6 +2073,8 @@ int hvm_handle_xsetbv(u32 index, u64 new_bv)
     if ( index == 0 )
         hvm_monitor_crX(XCR0, new_bv, current->arch.xcr0);
 
+    TRACE(TRC_HVM_XCR_WRITE64, index, new_bv, new_bv >> 32);
+
     rc = x86emul_write_xcr(index, new_bv, NULL);
     if ( rc != X86EMUL_OKAY )
         hvm_inject_hw_exception(X86_EXC_GP, 0);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749018.1157069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA8-0003K0-DW; Wed, 26 Jun 2024 13:58:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749018.1157069; Wed, 26 Jun 2024 13:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA7-0003IP-PT; Wed, 26 Jun 2024 13:58:15 +0000
Received: by outflank-mailman (input) for mailman id 749018;
 Wed, 26 Jun 2024 13:58:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA5-0000af-O4
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:13 +0000
Received: from mail-ot1-x32c.google.com (mail-ot1-x32c.google.com
 [2607:f8b0:4864:20::32c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1ffabbd8-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:12 +0200 (CEST)
Received: by mail-ot1-x32c.google.com with SMTP id
 46e09a7af769-6f855b2499cso3591964a34.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:12 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ffabbd8-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410291; x=1720015091; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=aNXebvkx1B3kPnk1v4QIpuUImYj6FwSyktzAJ/iZD9E=;
        b=GIvlIpDGcvMZSx7J6k2ZpG3O1lpc71ItbHGp24k3C+qUDO3zo3yTOWAHySwzvcLIny
         fcV+M8CJK+QCEdsUAveSJKMsxYsFpeUCAdcGJbVRoEEuJnbI/KUER3peHuGImo8H5FFg
         Hd0FTelINn3FQKbjYpBsM9Y5suTiVRmQ+jix4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410291; x=1720015091;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=aNXebvkx1B3kPnk1v4QIpuUImYj6FwSyktzAJ/iZD9E=;
        b=r/lSUiEvrbwuRB5x3sQkF7MERwGcW4KdN4toJK19K65JLR8ZM5Oh7hqw6fIYR32JVS
         sgF8FPjqjvB5mrCosiaGPCj0zGdQr6oNp9bAr5holgEn0im9mfAbmwDe6wDiQgmZqF7z
         EgUekczEFMkdRtEUdU426WJkS3R96yMqB0kZGqvM6cLjHpV2ae1CGyouZ1kKEzKf0ttt
         STwMS2aAgR558mSSQuDsej00ai2n83EZFi+od08YBfLy/EWAyZ1tPSu7Nuc4qqhd49Mo
         /y3iSrsrzcJVGp6wUr+2UYX+8kyjsBmJM+kLPiBhAbP4p74JdFLkxMp/d4uktKSJN/wq
         Xm8g==
X-Gm-Message-State: AOJu0YwkL6DxBzn+g8q7fkiiVD7YR745IxZuKY5jNSUT2OY518IbN/Oa
	gDolJouFAcxYrseU7y0B775G8vAUX7Lk8M0sa9rrL0CzosSnQaYVe5iML5Yv4Lk0dOe730Zi3CW
	r6lA=
X-Google-Smtp-Source: AGHT+IEe5R8IsPa1WFXgoNQbI1tAaCHrwm2edK3lw3vpBp5ucsDlcV6zDqyFqTDgn390po2VCa/vVw==
X-Received: by 2002:a05:6830:4108:b0:6fa:11aa:e929 with SMTP id 46e09a7af769-700b11d614emr13128067a34.11.1719410290845;
        Wed, 26 Jun 2024 06:58:10 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 10/14] xenalyze: Quiet warnings about VMEXIT_IOIO
Date: Wed, 26 Jun 2024 14:38:49 +0100
Message-Id: <20240626133853.4150731-11-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There's a general issue with both PIO and MMIO reads (as detailed in
the comment); add a workaround for now.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 tools/xentrace/xenalyze.c | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index 19b99dc66d..eb0e60e6ef 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -4650,6 +4650,24 @@ void hvm_generic_postprocess(struct hvm_data *h)
             case VMEXIT_EXCEPTION_AC:
             case VMEXIT_EXCEPTION_UD:
                 return;
+            case VMEXIT_IOIO:
+                /*
+                 * FIXME: Both IO and MMIO reads which have gone out
+                 * to the emulator and back typically have the
+                 * [mm]io_assist trace happen on resume, just before
+                 * the subsequent VMENTRY.
+                 *
+                 * However, when a VM has blocked, we call
+                 * hvm_vmexit_close() when it becomes RUNNABLE again,
+                 * rather than when it actually runs again; meaning
+                 * that when hvm_vmexit_close() is called, no HVM
+                 * handler has been seen.
+                 *
+                 * What we really need is some sort of intermediate
+                 * state for [mm]io reads; but for now just ignore
+                 * VMEXIT_IOIO exits without a handler.
+                 */
+                return;
             default:
                 break;
             }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749019.1157074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA9-0003RM-23; Wed, 26 Jun 2024 13:58:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749019.1157074; Wed, 26 Jun 2024 13:58:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA8-0003NA-B9; Wed, 26 Jun 2024 13:58:16 +0000
Received: by outflank-mailman (input) for mailman id 749019;
 Wed, 26 Jun 2024 13:58:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA6-0000aq-LN
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:14 +0000
Received: from mail-qk1-x729.google.com (mail-qk1-x729.google.com
 [2607:f8b0:4864:20::729])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 2137ce87-33c4-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:58:14 +0200 (CEST)
Received: by mail-qk1-x729.google.com with SMTP id
 af79cd13be357-79c05c19261so97325085a.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:14 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2137ce87-33c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410293; x=1720015093; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3TUWwDLi1R2zJHBj9N1SaI7go8LX451Wfg2VkzZDApI=;
        b=DHlQYGcAYHYp6zr+at4420d2vigfHVwNhCdFEC6c5EdxGmh6gYsvd341eoPod/lr1a
         yU/iQUIZZLz6+gJXaiN6pi5mDYTYSTjfsB3pbKIE9FxkMIiqGAqXQv8mpi14Gf1eSXqS
         EXDS4TOMIZ4T0EdezQz5c4EfRsTdxs22ZlRDg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410293; x=1720015093;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=3TUWwDLi1R2zJHBj9N1SaI7go8LX451Wfg2VkzZDApI=;
        b=r6T62Rxr3VZeLmhANYvB9DIrG5ta2XoTk7XihECUlqpr9PanKRhpnNb5SK/Z//IEi3
         yOJ5BQVVsjIPtgLTMj1DsHAR52ly0He+ZVgtRHLH7FZ7EgEhxI6qEK1ZXB/cxsVBA/AP
         mfbWrrThqzfnRL8bZCQ/cZPOLQhgcSvBvHtxrsCfHtil1qtW3L01RpgwplvnCTAKzhJx
         7+0IUdidti4yYMXPkf3AMBAvwoajnDdLUBWNNF6aUDH1f3gZDXAtO7GEx3L5wFjem1yv
         MEk6rfnnrojKFz3OQgMvwMNgv85jIZlorncJDpISi1RVlqgdg5SEKt68mDQ9zodL1aub
         oSMA==
X-Gm-Message-State: AOJu0YwhNb60vAqK44Z0DX7cyzvFet4O9hqJ32SjRcmPrqhBP10W4XaC
	NP4hxujOjZCCjnh45u8QlaJkIr0NOHGNt/ixHOCUbrkNJidC2az0ajChMk8mImt1HMk3SD65S1h
	7fyI=
X-Google-Smtp-Source: AGHT+IGJ2lo2PMzNvloP71CWFEAIEwg6k06VIqoyJtqytdgCEBGc6bBZGCVD6c/8GEgw9Au8SXicJA==
X-Received: by 2002:a05:620a:28c5:b0:79c:ec3:f121 with SMTP id af79cd13be357-79c0ec3f47bmr354079385a.31.1719410292829;
        Wed, 26 Jun 2024 06:58:12 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 12/14] xenalyze: Basic processing for XSETBV exits and handlers
Date: Wed, 26 Jun 2024 14:38:51 +0100
Message-Id: <20240626133853.4150731-13-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Basically this means adding VMEXIT codes and strings for the XSETBV
(and RDPRU) exits, and adding the handler events and strings for the
XCR read/write traces.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 tools/xentrace/xenalyze.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index eb0e60e6ef..d0e2788727 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -694,6 +694,8 @@ enum VMEXIT_EXITCODE
     VMEXIT_MONITOR          = 138,
     VMEXIT_MWAIT            = 139,
     VMEXIT_MWAIT_CONDITIONAL= 140,
+    VMEXIT_XSETBV           = 141, /* 0x8d */
+    VMEXIT_RDPRU            = 142, /* 0x8e */
     VMEXIT_NPF              = 1024, /* nested paging fault */
     VMEXIT_INVALID          =  -1
 };
@@ -853,6 +855,8 @@ const char * hvm_svm_exit_reason_name[HVM_SVM_EXIT_REASON_MAX] = {
     "VMEXIT_MWAIT",
     /* 140 */
     "VMEXIT_MWAIT_CONDITIONAL",
+    "VMEXIT_XSETBV",
+    "VMEXIT_RDPRU",
     [VMEXIT_NPF] = "VMEXIT_NPF", /* nested paging fault */
 };
 
@@ -946,6 +950,8 @@ enum {
     HVM_EVENT_TRAP,
     HVM_EVENT_TRAP_DEBUG,
     HVM_EVENT_VLAPIC,
+    HVM_EVENT_XCR_READ,
+    HVM_EVENT_XCR_WRITE,
     HVM_EVENT_HANDLER_MAX
 };
 const char * hvm_event_handler_name[HVM_EVENT_HANDLER_MAX] = {
@@ -981,7 +987,9 @@ const char * hvm_event_handler_name[HVM_EVENT_HANDLER_MAX] = {
     "realmode_emulate",
     "trap",
     "trap_debug",
-    "vlapic"
+    "vlapic",
+    "xcr_read",
+    "xcr_write"
 };
 
 enum {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749020.1157080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA9-0003au-L1; Wed, 26 Jun 2024 13:58:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749020.1157080; Wed, 26 Jun 2024 13:58:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTA9-0003XQ-6M; Wed, 26 Jun 2024 13:58:17 +0000
Received: by outflank-mailman (input) for mailman id 749020;
 Wed, 26 Jun 2024 13:58:15 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA7-0000aq-LN
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:15 +0000
Received: from mail-qk1-x732.google.com (mail-qk1-x732.google.com
 [2607:f8b0:4864:20::732])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21be66db-33c4-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 15:58:15 +0200 (CEST)
Received: by mail-qk1-x732.google.com with SMTP id
 af79cd13be357-79c04e6e1b9so145459685a.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:15 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21be66db-33c4-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410294; x=1720015094; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=no+X6902JG1x7fj9iA4/toVtrkKWwuLeqtvhWCBHqWU=;
        b=QqqW2wI7R9S5p1b0r7zHB4rVrciwQpkHuA3wA+7NBp55AcVywLz9NcWT0qh4Rnm1Au
         CJxpVFOhII/Al29nhHNazKKjL40KVuSX88iE2XOym99FJZxxP7NRf22+r5CfMzNAcYO5
         GMoRZ7YlPP9RzujPNHdZHLBd44DdFKvXbzzEU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410294; x=1720015094;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=no+X6902JG1x7fj9iA4/toVtrkKWwuLeqtvhWCBHqWU=;
        b=voHWOpR/XeV9lUo1LkW8u0CBIa34xmLAZAZ8HdIdfILydO6Hrat773ErcP6KbI66kQ
         NyAYXqeAIKEQ4PKYCoIyEopyhfXEFf23TgdbP0JeZI5SHCfiq6tTP7Af9l6zxw3+Q6o8
         ozCAnQKWT07S+QvNUPQwtSvUFXYXegLm9dyeyzgqpYpXbljbxfnqex+quXoNeO9px+8u
         MwrIUGMD+HapxeF4PTM4rn/YrA5ScU0IeX2G/kS7aya1oV6qFhVqMNB3gdlcGuDP1jNg
         NZ7R7u9YnwOPh9FQUu6AcVMRQ7AvFHLPLqdfwyJcfThfT9LSHY2wYYGsc2bWsJHd4u81
         +oMQ==
X-Gm-Message-State: AOJu0Yx2CWAPjSmJXLUwKA9jmgfZYrvPxqnt6aXcvR3qQXTLl2wcCEYb
	Fes4PGMtguDhGV8OM3DEmB49Dy04pf1vIj9a4G1BZe2DO1mZLZ1hTXtagFEHY3N2QREDH4MPqYx
	W2WY=
X-Google-Smtp-Source: AGHT+IH904eHMI1zSSqNbKZPKyIjtzLTWlNqjr6c8ygXrMuor5Jj4QVS7saH0tsJmAnDi0imhmjWGQ==
X-Received: by 2002:a05:620a:424c:b0:792:952b:6310 with SMTP id af79cd13be357-79be0c6b8b3mr1288300585a.45.1719410293810;
        Wed, 26 Jun 2024 06:58:13 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 13/14] x86/svm: Add a trace for VMEXIT_VMRUN
Date: Wed, 26 Jun 2024 14:38:52 +0100
Message-Id: <20240626133853.4150731-14-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Note that this trace is SVM-specific. Most HVM handler traces are
shared between VMX and SVM because the underlying instruction sets
are largely equivalent; but in this case, the instructions are
different enough that there's no sensible way to share HVM handler
traces between them.

Keeping the target VMCB address should allow future analysis of which
L2 vcpu within an L1 is running.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 tools/xentrace/xenalyze.c  | 20 +++++++++++++++++++-
 xen/arch/x86/hvm/svm/svm.c |  2 ++
 xen/include/public/trace.h |  1 +
 3 files changed, 22 insertions(+), 1 deletion(-)

diff --git a/tools/xentrace/xenalyze.c b/tools/xentrace/xenalyze.c
index d0e2788727..d1988b4232 100644
--- a/tools/xentrace/xenalyze.c
+++ b/tools/xentrace/xenalyze.c
@@ -952,6 +952,7 @@ enum {
     HVM_EVENT_VLAPIC,
     HVM_EVENT_XCR_READ,
     HVM_EVENT_XCR_WRITE,
+    HVM_EVENT_VMRUN,
     HVM_EVENT_HANDLER_MAX
 };
 const char * hvm_event_handler_name[HVM_EVENT_HANDLER_MAX] = {
@@ -989,7 +990,8 @@ const char * hvm_event_handler_name[HVM_EVENT_HANDLER_MAX] = {
     "trap_debug",
     "vlapic",
     "xcr_read",
-    "xcr_write"
+    "xcr_write",
+    "vmrun"
 };
 
 enum {
@@ -4610,6 +4612,19 @@ void hvm_rdtsc_process(struct record_info *ri, struct hvm_data *h)
     h->last_rdtsc = r->tsc;
 }
 
+
+void hvm_vmrun_process(struct record_info *ri, struct hvm_data *h)
+{
+    struct {
+        uint64_t vmcbaddr;
+    } *r = (typeof(r))h->d;
+
+    if ( opt.dump_all )
+        printf(" %s vmrun %llx\n",
+               ri->dump_header,
+               (unsigned long long)r->vmcbaddr);
+}
+
 void hvm_generic_summary(struct hvm_data *h, void *data)
 {
     long evt = (long)data;
@@ -4910,6 +4925,9 @@ needs_vmexit:
     case TRC_HVM_RDTSC:
         hvm_rdtsc_process(ri, h);
         break;
+    case TRC_HVM_VMRUN:
+        hvm_vmrun_process(ri, h);
+        break;
     case TRC_HVM_DR_READ:
     case TRC_HVM_DR_WRITE:
     case TRC_HVM_CPUID:
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 240401dc77..39af0ffdd6 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2177,6 +2177,8 @@ svm_vmexit_do_vmrun(struct cpu_user_regs *regs,
         return;
     }
 
+    TRACE(TRC_HVM_VMRUN, vmcbaddr, vmcbaddr >> 32);
+
     vcpu_nestedhvm(v).nv_vmentry_pending = 1;
     return;
 }
diff --git a/xen/include/public/trace.h b/xen/include/public/trace.h
index 141efa0ea7..19a29a5929 100644
--- a/xen/include/public/trace.h
+++ b/xen/include/public/trace.h
@@ -222,6 +222,7 @@
 #define TRC_HVM_VLAPIC           (TRC_HVM_HANDLER + 0x25)
 #define TRC_HVM_XCR_READ64      (TRC_HVM_HANDLER + TRC_64_FLAG + 0x26)
 #define TRC_HVM_XCR_WRITE64     (TRC_HVM_HANDLER + TRC_64_FLAG + 0x27)
+#define TRC_HVM_VMRUN           (TRC_HVM_HANDLER + 0x28)
 
 #define TRC_HVM_IOPORT_WRITE    (TRC_HVM_HANDLER + 0x216)
 #define TRC_HVM_IOMEM_WRITE     (TRC_HVM_HANDLER + 0x217)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 13:58:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 13:58:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749021.1157094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTAC-0004CF-1p; Wed, 26 Jun 2024 13:58:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749021.1157094; Wed, 26 Jun 2024 13:58:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTAB-00049P-Bs; Wed, 26 Jun 2024 13:58:19 +0000
Received: by outflank-mailman (input) for mailman id 749021;
 Wed, 26 Jun 2024 13:58:17 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTA9-0000af-HC
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 13:58:17 +0000
Received: from mail-qk1-x72e.google.com (mail-qk1-x72e.google.com
 [2607:f8b0:4864:20::72e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 224d103b-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 15:58:16 +0200 (CEST)
Received: by mail-qk1-x72e.google.com with SMTP id
 af79cd13be357-79c06169e9cso118414785a.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 06:58:15 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 af79cd13be357-79bce9318f6sm499371185a.101.2024.06.26.06.58.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 06:58:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 224d103b-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410294; x=1720015094; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=TTGn0K1/CfHD8ie03BFgIyvIbG3pkRzclg3QaK3twgk=;
        b=GOrzi10VMjCDzkCGUc9Bbhjg5PhJY2r6Bo3rq900pIuJbbpnNSZqPB58tNGW2tyARU
         6d7QPtUcNFglGP9FjcfC+n9m27SrRH01IGe/0JVLjTo4DRNfYIjkkvLUJo1nd/sAlhrP
         5X8Oh2YQ9R4a3IEDmPAytfjniHt4LUzB2/WyA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410294; x=1720015094;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=TTGn0K1/CfHD8ie03BFgIyvIbG3pkRzclg3QaK3twgk=;
        b=iHi0w8Ynic1QmrOUkqcgpJ4cLZ1NIFgiFHqhWd8Bn9jKbWucrYsIYJIfOVFzK/JE6Y
         6aZwQ2cR9j3d2A5dZeOF6Whei0Be7ZHNy+TTSVLKhVnijP4ayKf5JHiUSFa+vSuzK6M/
         AfmlOInZ5NLFOGaP4m0lviNxdJZ1sZnDycZZkCF+S+MbdZhm6XXap4pTdAE56L3dCRT4
         l6xS3cm46bu0Ih5WWjpmbQuD7TF0MVhUXPTVjNrdhSVHP6mSLEVqcosyVZT2rqPet1w7
         YDDHUE4v/Ax6jKIi6RZInCbrNFJsIpN3rDzNmWCa8zr8ys3XO9uSjF/MOkA7FyckPRZZ
         bv4Q==
X-Gm-Message-State: AOJu0YziF7q4ficrPcUgkRP9CPYtzSWc5Mjko2rkB1jnXYG+KEm3lS7h
	UlsV4qnnFmHbekRNOmR+lZmjIbSbjtyrQGP/Z2cWjrHu/vkCNwRiJcvL+A9wH8c5eEk/+1UuC6Q
	YsCU=
X-Google-Smtp-Source: AGHT+IEdIn4jHdr7DjURLQkv1U3KWYIzU9lQCj249iwKA5u/sbddkQCWiKMo/H6Z+aKLg9OtApUkww==
X-Received: by 2002:a05:620a:4150:b0:795:52e7:a49a with SMTP id af79cd13be357-79be701fef5mr1199488085a.75.1719410294659;
        Wed, 26 Jun 2024 06:58:14 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>
Subject: [PATCH WIP 14/14] x86/nestedsvm: Note some places for improvement
Date: Wed, 26 Jun 2024 14:38:53 +0100
Message-Id: <20240626133853.4150731-15-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index 35a2cbfd7d..dca06f2a6c 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -1349,8 +1349,21 @@ nestedsvm_check_intercepts(struct vcpu *v, struct cpu_user_regs *regs,
         return NESTEDHVM_VMEXIT_INJECT;
     case VMEXIT_VMMCALL:
         /* Always let the guest handle VMMCALL/VMCALL */
+        /*
+         * FIXME: This is wrong; if the L1 hasn't set the VMMCALL
+         * intercept and the L2 executes a VMMCALL, it should result
+         * in a #UD, not a VMMCALL exception.
+         */
         return NESTEDHVM_VMEXIT_INJECT;
     default:
+        /*
+         * FIXME: It's not true that any VMEXIT not intercepted by L1
+         * can safely be handled by L0; VMMCALL above is one such
+         * example, but there may be more.  We either need to combine
+         * this switch statement with the one in
+         * nsvm_vmcb_guest_intercepts_exitcode(), or explicitly list
+         * known-safe exits here.
+         */
         break;
     }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 14:01:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 14:01:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749060.1157115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTD3-0001CY-C3; Wed, 26 Jun 2024 14:01:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749060.1157115; Wed, 26 Jun 2024 14:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTD3-0001CR-8r; Wed, 26 Jun 2024 14:01:17 +0000
Received: by outflank-mailman (input) for mailman id 749060;
 Wed, 26 Jun 2024 14:01:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMTD1-0001Bl-Pn
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 14:01:15 +0000
Received: from mail-oa1-x2e.google.com (mail-oa1-x2e.google.com
 [2001:4860:4864:20::2e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8c51e63f-33c4-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 16:01:14 +0200 (CEST)
Received: by mail-oa1-x2e.google.com with SMTP id
 586e51a60fabf-250ca14422aso3518331fac.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 07:01:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c51e63f-33c4-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719410472; x=1720015272; darn=lists.xenproject.org;
        h=content-transfer-encoding:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=LcDvYxKVscZQ/IvCEyIO70fDtvFBEVaS5wPbRV4gR3A=;
        b=SQdrQ5cFWdJj2SuaUqbPv5uQ00Qz1R+8aOp0Fif42vSb9oNTPXMoDW/kgDR/ZniZWy
         T0r+khTMVpMGIXZ2i493MC6MQAQR6kM76d01afUQMh7E2P1d2LC7R5tOe5d46f7ElRhh
         +S/2Xhm162A/WQX6rezOpiV+puVpQr7goyv7M=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719410472; x=1720015272;
        h=content-transfer-encoding:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=LcDvYxKVscZQ/IvCEyIO70fDtvFBEVaS5wPbRV4gR3A=;
        b=apvQAQCFvuRyOo09iCNqWyKxSdog4bJaIwK/wOz8XkLcZSvGnETkVO63pTaRZa4dQ9
         FA6Cz9CvdIr1kX4nTmYsEcIK2lSI49rel3D/L9oEx3q7p/cVXU0TUCfFEurka95UrAGf
         ZZOTKcVs+AQx008sA2VSfPs+RC4/OhnwQ/nwPbt9mtMPtI5dgeYLFylFy6/FQJh9Qp4Y
         biKFHqGS6qPI6PU/8Uu+wyAl4PrnCudTg2d5f+mCN+q5Jszb7xHHlJEu7ikqVBHKYM5u
         UO2FopoC46x46wn+V9hkp8lIR7SOi2lqm5buAyYRJLSmDRk+qh3W2uc/TEI9f5taMjxQ
         bYSg==
X-Gm-Message-State: AOJu0YwdM8bUkj/Fs+HcME05knaPwkLKulw34FGa4WPKD4ILLIkeaueW
	MpZ7NgItUI3O3cNex3nm+GhgROTm8njDroelNzVHmhw4gjeUS2DYWtV/xCuci2NIzNyffHRt8QZ
	sz7w9SJEQmlDoEo1n8co9WdT1+HoJGpJP+hD+oR9JkLxNJp6KDVpsQw==
X-Google-Smtp-Source: AGHT+IEXnIqA+tmUuNEeRG2+ESZO5mFLmoOVqYSMO/KqkBy/rQxzLoNfAu/7zcD+ZV5TGhP2hLX3JqT8nb9NEAogqnU=
X-Received: by 2002:a05:6870:c10e:b0:25d:5f08:6b0b with SMTP id
 586e51a60fabf-25d5f087941mr1387302fac.18.1719410472502; Wed, 26 Jun 2024
 07:01:12 -0700 (PDT)
MIME-Version: 1.0
References: <20240626133853.4150731-1-george.dunlap@cloud.com>
In-Reply-To: <20240626133853.4150731-1-george.dunlap@cloud.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Wed, 26 Jun 2024 15:01:01 +0100
Message-ID: <CA+zSX=bS-0Q8GBu9kD-o1VN1i6V8V46PyYxNoiAMwchLsQEGog@mail.gmail.com>
Subject: Re: [PATCH WIP 00/14] AMD Nested Virt Preparation
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Jun 26, 2024 at 2:57 PM George Dunlap <george.dunlap@cloud.com> wrote:
>
> This is my work-in-progress series for getting nested virt working
> again on AMD.

Forgot to add, this can be found at this branch:

https://gitlab.com/xen-project/people/gdunlap/xen/-/commits/working/amd-nested-virt

 -George


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 14:12:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 14:12:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749104.1157124 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTO0-0005VF-AU; Wed, 26 Jun 2024 14:12:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749104.1157124; Wed, 26 Jun 2024 14:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTO0-0005V8-80; Wed, 26 Jun 2024 14:12:36 +0000
Received: by outflank-mailman (input) for mailman id 749104;
 Wed, 26 Jun 2024 14:12:34 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZmsE=N4=cloud.com=matthew.barnes@srs-se1.protection.inumbo.net>)
 id 1sMTNy-0005V2-RA
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 14:12:34 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 21f66a73-33c6-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 16:12:33 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a725a918edaso262425966b.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 07:12:33 -0700 (PDT)
Received: from EMEAENGAAD91498. ([217.156.233.157])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72598043b2sm286688766b.106.2024.06.26.07.12.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 07:12:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21f66a73-33c6-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719411153; x=1720015953; darn=lists.xenproject.org;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id:from:to
         :cc:subject:date:message-id:reply-to;
        bh=6TuPnQh1fYjjPcjTOWpDI2CCVFOB74b7VZheM9F/dCE=;
        b=FHPEzb10hjY+tsFDjUKRkNjXkBEMl0bdjIbYvlCrEuYF2SwBWtL/NG1ofi8YhgIbiN
         nOh+U+C6mFlz6sdMJfyCv65Pjztpk9L0gq/wLRt9ZRD5qJdRV7+mH8kOrvlGNFF7hK5O
         JkhERnZKeCno37lNHD53ekGYGhhs7PnVBo8mU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719411153; x=1720015953;
        h=in-reply-to:content-transfer-encoding:content-disposition
         :mime-version:references:subject:cc:to:from:date:message-id
         :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to;
        bh=6TuPnQh1fYjjPcjTOWpDI2CCVFOB74b7VZheM9F/dCE=;
        b=saWU6K+sEjW+xFWKSV7wNYiNoGRj3izPLzLpVPxCfcbVnv4kobPQze7AqYuqWwtVby
         0gnQUi48lnf091FeaLpBITJQoDkFz1uCgV3Mpp9U0WqBsWw1Lcfaz8zcjyXpKVj9V9hF
         x/G1QsTGVbNICgtP4S2HnHV5DWaQdMpR4dZcWJ5ObNtUCE43vVLtZah2RwJ1oqWf7swe
         /sx6VnLixagCCOPDsraXhu/l8ivXN5zFzp6d/Oc3VUs878eYseRRsguCIdhVeJXyeTq2
         Z2bjYzM79eRn//Tx84WaN33IfXEbRJTxlEAbA1tUcHvHEcgXZF/kTtMuxTBNYTB4IVSO
         WONQ==
X-Forwarded-Encrypted: i=1; AJvYcCWf/18AF+DQY4FDVUUEvRn1YC1CrQMWMYOfIHCrtSYe0YwzHto5mEVCUGusTRmz3RqeunXaM9nGRWVaJ6Xyy+kfPFLJd4Q+YnT4KtmQlfw=
X-Gm-Message-State: AOJu0Yy6IDfuc8cTRmhBX85rXCg7bEtEBMVfT0kvKuOERIF+MoPq9IX1
	reBYo6vlGuv4QE6JARx/ReoTPMg0bBtYLqs0QmXrP2tGnqb8z163zZjOVhQjTHE=
X-Google-Smtp-Source: AGHT+IGo4K0d1ZXr1EW4mZJdSS7nr2DJAbpFvKOqu947A+qOAG3WalvuVTECr6BxavGcK17I+M8Qcw==
X-Received: by 2002:a17:906:80c2:b0:a6f:d867:4259 with SMTP id a640c23a62f3a-a7245ba3cb5mr755984966b.26.1719411152983;
        Wed, 26 Jun 2024 07:12:32 -0700 (PDT)
Message-ID: <667c21d0.170a0220.bf390.0f32@mx.google.com>
X-Google-Original-Message-ID: <20240626141220.xkjldl2bjv5c5kvo@EMEAENGAAD91498.>
Date: Wed, 26 Jun 2024 15:12:20 +0100
From: Matthew Barnes <matthew.barnes@cloud.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Anthony PERARD <anthony@xenproject.org>,
	Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH] tools/misc: xen-hvmcrash: Inject #DF instead of
 overwriting RIP
References: <27f4397093d92b53f89d625d682bd4b7145b65d8.1717426439.git.matthew.barnes@cloud.com>
 <7ffabe8b-7993-4cc5-97fe-dd1cbd35798e@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7ffabe8b-7993-4cc5-97fe-dd1cbd35798e@citrix.com>

On Tue, Jun 25, 2024 at 10:02:42PM +0100, Andrew Cooper wrote:
> On 03/06/2024 3:59 pm, Matthew Barnes wrote:
> > xen-hvmcrash would previously save records, overwrite the instruction
> > pointer with a bogus value, and then restore them to crash a domain
> > just enough to cause the guest OS to memdump.
> >
> > This approach is found to be unreliable when tested on a guest running
> > Windows 10 x64, with some executions doing nothing at all.
> >
> > Another approach would be to trigger NMIs. This approach is found to be
> > unreliable when tested on Linux (Ubuntu 22.04), as Linux will ignore
> > NMIs if it is not configured to handle them.
> >
> > Injecting a double fault abort to all vCPUs is found to be more
> > reliable at crashing and invoking memdumps from Windows and Linux
> > domains.
> 
> Why every CPU?
> 
> We never did that before, and I don't see any reason it ought to be
> necessary now either.

We do: at the moment, xen-hvmcrash iterates through the
hvm_save_descriptors after pausing the domain, overwriting the EIP/RIP
of each vCPU it finds.

Is there a reason not to inject #DF into each domain vCPU? Wouldn't that
crash the domain more reliably?

> > diff --git a/tools/misc/xen-hvmcrash.c b/tools/misc/xen-hvmcrash.c
> > index 1d058fa40a47..8ef1beb388f8 100644
> > --- a/tools/misc/xen-hvmcrash.c
> > +++ b/tools/misc/xen-hvmcrash.c
> > @@ -38,22 +38,21 @@
> >  #include <sys/stat.h>
> >  #include <arpa/inet.h>
> >  
> > +#define XC_WANT_COMPAT_DEVICEMODEL_API
> 
> Please don't introduce this. We want to purge it from the codebase, not
> propagate it.
> 
> You want to open and use a libxendevicemodel handle. (Sadly you also
> need a xenctrl handle too, until we sort out the userspace ABIs).
> 
> >  #include <xenctrl.h>
> >  #include <xen/xen.h>
> >  #include <xen/domctl.h>
> >  #include <xen/hvm/save.h>
> >  
> > +#define X86_ABORT_DF 8
> 
> #include <xen/asm/x86-defns.h>
> 
> and use X86_EXC_DF.

Understood: this will be reflected in patch v2.

Matt


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 14:24:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 14:24:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749116.1157134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTZD-0000Bk-Ab; Wed, 26 Jun 2024 14:24:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749116.1157134; Wed, 26 Jun 2024 14:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTZD-0000Bd-7v; Wed, 26 Jun 2024 14:24:11 +0000
Received: by outflank-mailman (input) for mailman id 749116;
 Wed, 26 Jun 2024 14:24:10 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+tE4=N4=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMTZC-0000BX-H0
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 14:24:10 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id c059de78-33c7-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 16:24:09 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id B565A4EE073D;
 Wed, 26 Jun 2024 16:24:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c059de78-33c7-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Wed, 26 Jun 2024 16:24:08 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Anthony PERARD <anthony.perard@vates.tech>
Cc: Jan Beulich <jbeulich@suse.com>, Simone Ballarin
 <simone.ballarin@bugseng.com>, consulting@bugseng.com,
 sstabellini@kernel.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
Subject: Re: [XEN PATCH v3 05/16] xen/x86: address violations of MISRA C:2012
 Directive 4.10
In-Reply-To: <ZnwZycQ3mU21cSpd@l14>
References: <cover.1710145041.git.simone.ballarin@bugseng.com>
 <dd042e7d17e7833e12a5ff6f28dd560b5ff02cf7.1710145041.git.simone.ballarin@bugseng.com>
 <dce6c44d-94b7-43bd-858a-9337336a79cf@suse.com>
 <ef623bad297d016438b35bedc80f091d@bugseng.com>
 <ec92611e-6762-4b6c-af3e-999b748d1f1b@suse.com>
 <797b00049612507d273facc581b2c2c5@bugseng.com>
 <a5009c3e-cba6-4737-aaff-c3b79a11169c@suse.com>
 <e3ae670923fd061986e27b3f95833b88@bugseng.com>
 <0c88d86e-f226-4225-b723-a6662fcd5bef@suse.com> <ZnwZycQ3mU21cSpd@l14>
Message-ID: <4b9b8c769d65b0f69514ff47fd6a835a@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-26 15:38, Anthony PERARD wrote:
> On Wed, Jun 26, 2024 at 12:31:42PM +0200, Jan Beulich wrote:
>> On 26.06.2024 12:25, Nicola Vetrini wrote:
>> > On 2024-06-26 11:26, Jan Beulich wrote:
>> >> On 26.06.2024 11:20, Nicola Vetrini wrote:
>> >>> On 2024-06-26 11:06, Jan Beulich wrote:
>> >>>> On 25.06.2024 21:31, Nicola Vetrini wrote:
>> >>>>> On 2024-03-12 09:16, Jan Beulich wrote:
>> >>>>>> On 11.03.2024 09:59, Simone Ballarin wrote:
>> >>>>>>> --- a/xen/arch/x86/Makefile
>> >>>>>>> +++ b/xen/arch/x86/Makefile
>> >>>>>>> @@ -258,18 +258,20 @@ $(obj)/asm-macros.i: CFLAGS-y += -P
>> >>>>>>>  $(objtree)/arch/x86/include/asm/asm-macros.h: $(obj)/asm-macros.i
>> >>>>>>> $(src)/Makefile
>> >>>>>>>  	$(call filechk,asm-macros.h)
>> >>>>>>>
>> >>>>>>> +ARCHDIR = $(shell echo $(SRCARCH) | tr a-z A-Z)
>> >>>>>>
>> >>>>>> This wants to use :=, I think - there's no reason to invoke the
>> >>>>>> shell
>> >>>>>> ...
>> >>>>>
>> >>>>> I agree on this
>> >>>>>
>> >>>>>>
>> >>>>>>>  define filechk_asm-macros.h
>> >>>>>>> +    echo '#ifndef ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>> >>>>>>> +    echo '#define ASM_$(ARCHDIR)_ASM_MACROS_H'; \
>> >>>>>>>      echo '#if 0'; \
>> >>>>>>>      echo '.if 0'; \
>> >>>>>>>      echo '#endif'; \
>> >>>>>>> -    echo '#ifndef __ASM_MACROS_H__'; \
>> >>>>>>> -    echo '#define __ASM_MACROS_H__'; \
>> >>>>>>>      echo 'asm ( ".include \"$@\"" );'; \
>> >>>>>>> -    echo '#endif /* __ASM_MACROS_H__ */'; \
>> >>>>>>>      echo '#if 0'; \
>> >>>>>>>      echo '.endif'; \
>> >>>>>>>      cat $<; \
>> >>>>>>> -    echo '#endif'
>> >>>>>>> +    echo '#endif'; \
>> >>>>>>> +    echo '#endif /* ASM_$(ARCHDIR)_ASM_MACROS_H */'
>> >>>>>>>  endef
>> >>>>>>
>> >>>>>> ... three times while expanding this macro. Alternatively (to avoid
>> >>>>>> an unnecessary shell invocation when this macro is never expanded
>> >>>>>> at
>> >>>>>> all) a shell variable inside the "define" above would want
>> >>>>>> introducing.
>> >>>>>> Whether this 2nd approach is better depends on whether we
>> >>>>>> anticipate
>> >>>>>> further uses of ARCHDIR.
>> >>>>>
>> >>>>> However here I'm not entirely sure about the meaning of this latter
>> >>>>> proposal.
>> >>>>> My proposal is the following:
>> >>>>>
>> >>>>> ARCHDIR := $(shell echo $(SRCARCH) | tr a-z A-Z)
>> >>>>>
>> >>>>> in a suitably generic place (such as Kbuild.include or maybe
>> >>>>> xen/Makefile) as you suggested in subsequent patches that reused
>> >>>>> this
>> >>>>> pattern.
>> >>>>
>> >>>> If $(ARCHDIR) is going to be used elsewhere, then what you suggest is
>> >>>> fine.
>> >>>> My "whether" in the earlier reply specifically left open for
>> >>>> clarification
>> >>>> what the intentions with the variable are. The alternative I had
>> >>>> described
>> >>>> makes sense only when $(ARCHDIR) would only ever be used inside the
>> >>>> filechk_asm-macros.h macro.
>> >>>
>> >>> Yes, the intention is to reuse $(ARCHDIR) in the formation of other
>> >>> places, as you can tell from the fact that subsequent patches
>> >>> replicate
>> >>> the same pattern. This is going to save some duplication.
>> >>> The only matter left then is whether xen/Makefile (around line 250,
>> >>> just
>> >>> after setting SRCARCH) would be better, or Kbuild.include. To me the
>> >>> former place seems more natural, but I'm not totally sure.
>> >>
>> >> Depends on where all the intended uses are. If they're all in
>> >> xen/Makefile,
>> >> then having the macro just there is of course sufficient. Whereas when
>> >> it's
>> >> needed elsewhere, instead of exporting putting it in Kbuild.include
>> >> would
>> >> seem more natural / desirable to me.
>> >>
>> >
>> > The places where this would be used are these:
>> > file: target (or define)
>> > xen/build.mk: arch/$(SRCARCH)/include/asm/asm-offsets.h: asm-offsets.s
>> > xen/include/Makefile: define cmd_xlat_h
>> > xen/arch/x86/Makefile: define filechk_asm-macros.h
>> >
>> > The only issue that comes to my mind (it may not be one at all) is that
>> > SRCARCH is defined and exported in xen/Makefile after including
>> > Kbuild.include, so it would need to be defined after SRCARCH is
>> > assigned:
>> >
>> > include scripts/Kbuild.include
>> >
>> > # Don't break if the build process wasn't called from the top level
>> > # we need XEN_TARGET_ARCH to generate the proper config
>> > include $(XEN_ROOT)/Config.mk
>> >
>> > # Set ARCH/SRCARCH appropriately.
>> >
>> > ARCH := $(XEN_TARGET_ARCH)
>> > SRCARCH := $(shell echo $(ARCH) | \
>> >      sed -e 's/x86.*/x86/' -e 's/arm\(32\|64\)/arm/g' \
>> >          -e 's/riscv.*/riscv/g' -e 's/ppc.*/ppc/g')
>> > export ARCH SRCARCH
>> >
>> > Am I missing something?
>> 
>> In that case the alternatives are exporting or using = rather than := 
>> in
>> Kbuild.include, i.e. other than initially requested. Personally I 
>> dislike
>> exporting to a fair degree, but I'm not sure which one's better in 
>> this
>> case. Cc-ing Anthony for possible input.
> 
> None. The name is misleading anyway; it would suggest to me that it
> contains a directory, but that's wrong.
> 
> Another thing that's suboptimal: using make to call a shell to generate
> a string that is going to be used later in a shell context anyway. How
> about just doing the work in that later shell context?
> 
> Something like:
> 
>  define filechk_asm-macros.h
> +    guard=$$(echo ASM_${SRCARCH}_ASM_MACROS_H | tr a-z A-Z); \
> +    echo "#ifndef $$guard"; \
> +    echo "#define $$guard"; \
>      echo '#if 0'; \
>      echo '.if 0'; \
> 

This approach looks ok to me.

> Or, instead of having to write the name of the file down, we could
> use a name that is already registered in a variable:
> 
>  define filechk_asm-macros.h
> +    guard=$$(echo $@ | tr a-z/.- A-Z_); \
> +    echo "#ifndef $$guard"; \
> +    echo "#define $$guard"; \
>      echo '#if 0'; \
>      echo '.if 0'; \
> 
> This produces:
>     #ifndef ARCH_X86_INCLUDE_ASM_ASM_MACROS_H
>     #define ARCH_X86_INCLUDE_ASM_ASM_MACROS_H
>     #if 0
>     .if 0
> 
> Cheers,

The issue I see here is that this would in some cases lead to long header 
guards, which goes somewhat against the overall consensus: the naming 
convention should be followed by every file, and it was designed to 
generate shorter guards rather than ones that just mirror the path.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 14:33:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 14:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749126.1157144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTiF-0004p5-4u; Wed, 26 Jun 2024 14:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749126.1157144; Wed, 26 Jun 2024 14:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTiF-0004oy-1z; Wed, 26 Jun 2024 14:33:31 +0000
Received: by outflank-mailman (input) for mailman id 749126;
 Wed, 26 Jun 2024 14:33:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMTiE-0004mI-0G; Wed, 26 Jun 2024 14:33:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMTiD-0000Qo-VD; Wed, 26 Jun 2024 14:33:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMTiD-0003ev-Jy; Wed, 26 Jun 2024 14:33:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMTiD-0002Ow-JU; Wed, 26 Jun 2024 14:33:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kPPK8Za7TVnRSrRs2Z8hl0bxeV9kXxnH8xui/kw9ISY=; b=WY1Wy5RBbFDpsqcXgcwKgUEZqz
	sV6eD4nsfelRjk59xlFs9r96hSbzbq2GzHqTAsZBwjWBREfRIa8B+nuESMBF3OhYMkkreKNOeKNT+
	OrjB/LnF74ukCocmRpuhEwAwy2Lw57UkSRrHhOmBI2J5dBYPrwpfn9ZImzXCREdjbnh4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186513-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186513: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4712e3b3769e6c03e0aaaa8179395f0fb7b141cc
X-Osstest-Versions-That:
    xen=853f707cd9e2eb9410dbfbadbd5a01ac0252ef83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 14:33:29 +0000

flight 186513 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186513/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4712e3b3769e6c03e0aaaa8179395f0fb7b141cc
baseline version:
 xen                  853f707cd9e2eb9410dbfbadbd5a01ac0252ef83

Last test of basis   186501  2024-06-25 23:09:05 Z    0 days
Testing same since   186513  2024-06-26 11:00:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   853f707cd9..4712e3b376  4712e3b3769e6c03e0aaaa8179395f0fb7b141cc -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 14:42:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 14:42:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749134.1157156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTr1-0001C7-19; Wed, 26 Jun 2024 14:42:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749134.1157156; Wed, 26 Jun 2024 14:42:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMTr0-0001C0-Su; Wed, 26 Jun 2024 14:42:34 +0000
Received: by outflank-mailman (input) for mailman id 749134;
 Wed, 26 Jun 2024 14:42:33 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zx3L=N4=arm.com=Luca.Fancellu@srs-se1.protection.inumbo.net>)
 id 1sMTqz-0001Bq-G2
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 14:42:33 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20601.outbound.protection.outlook.com
 [2a01:111:f403:2608::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 505e107d-33ca-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 16:42:30 +0200 (CEST)
Received: from DU6P191CA0071.EURP191.PROD.OUTLOOK.COM (2603:10a6:10:53e::28)
 by AS2PR08MB9917.eurprd08.prod.outlook.com (2603:10a6:20b:55f::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.30; Wed, 26 Jun
 2024 14:42:26 +0000
Received: from DU2PEPF00028D08.eurprd03.prod.outlook.com
 (2603:10a6:10:53e:cafe::23) by DU6P191CA0071.outlook.office365.com
 (2603:10a6:10:53e::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.22 via Frontend
 Transport; Wed, 26 Jun 2024 14:42:26 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DU2PEPF00028D08.mail.protection.outlook.com (10.167.242.168) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Wed, 26 Jun 2024 14:42:26 +0000
Received: ("Tessian outbound 41160df97de5:v347");
 Wed, 26 Jun 2024 14:42:25 +0000
Received: from 9337e614a2ce.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B4A7ACF4-6088-4BC0-B695-519286346531.1; 
 Wed, 26 Jun 2024 14:42:18 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9337e614a2ce.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 26 Jun 2024 14:42:18 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com (2603:10a6:20b:8f::22)
 by VI0PR08MB10655.eurprd08.prod.outlook.com (2603:10a6:800:209::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.32; Wed, 26 Jun
 2024 14:42:15 +0000
Received: from AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6204:b901:9cc6:bf2b]) by AM6PR08MB3749.eurprd08.prod.outlook.com
 ([fe80::6204:b901:9cc6:bf2b%3]) with mapi id 15.20.7698.025; Wed, 26 Jun 2024
 14:42:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 505e107d-33ca-11ef-b4bb-af5377834399
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=Nu+LVQZ0k8vzjwJafLHgA6xdMk/rELVIBv5vrboONJ7ASPsuaWJ2O/pL6SOEN8zvDrZ/dW72gLvn4+EQvVXTFTxvt/VmBRfu2JQiICt8//lghqFeUBpDO/7vnw4X/7hvBPuGgllZ43vXgUrmINbWsLeeuynn35eUYgENWj3Vx5AI47N81CkXVsoeKj8cRXK/lKPEzGYchaex6VqN/IzSULDbYHH/XuLJhzMPT3hquj9E2dVwutdSzPpUV+Xyq7D8m+HbZ+3lLgNqR70jfych5qvHasFZoQ/HqEqgNbocrh0NB6OOQk6EbDeZqoJnXqkSahrcylcTZe910XAt212SGQ==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NcGzbTMuXg2d9EQHzHL9axGsId27hZQpNGXFy4fthp4=;
 b=b3RiZLtIPOtwu3bclXu51Ei5MCQwmkyCFa0d/7bKc2AnJeLysk2M/1TC7P8y0EXHTSEDPa1NzGkjp5ZJcoclKuaszTDTukZsCrHPL5WTHd5d2f2Z2Nh+kao3pJClWITY29ku/s30hjdE3IK3yYNxfsIKXXQ7iQwJAcSipaBIY50q3EQyQawMWsl3uYy8LzeYLcMj+0yo8N3ahPpmqH194gYHoEoUr/eIksFv6riCKw9pboDPOCcB8IzRIgua5s/eVt5UXUMIwB8Eq4+lEVbNliBdgktkj0AgIfRHhjnNBVIdKwN5ngOL6FpCinbUjpdX9FDlQkLVO84Tm3AfMZkulg==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NcGzbTMuXg2d9EQHzHL9axGsId27hZQpNGXFy4fthp4=;
 b=cLrgaK9MxRUiEb7DXFP3nqiUA8Qqv6NFkRtOF50rS+6rY2YCWaXZXe0vLKrxRHcKpgnIeVpnY8hQIehrjiCfKY0NlRoMsnAmC0L3iVoPJkYU0ylexpF60+veMmCMueVYUoirxlD+lXuA1WaxkYN4CY8xsqEsdrcNZyz5lL+e5jo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: de2e18f58063fd07
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=evJzNLgUB9mAjxim/8TE9Jt87FuorPdpkJ1fL11cFuvbli3cjo/Frq1bXkL9UbidwhMvKxB+AV+ITccdFkZH30QVA7z0wbSfUQfFUdTrFhQT25/iZ49IvwLJdCCVZN5NNkr71wzoDmRZazGHC1tydABn1c4+VEnX7DNjTut8HtUwqekS4BAVYn+gpgJo3eTd1Yxea3OO8zZI3VK4UHZemZDzzntbmJagzqNIxX4dW1cCGTQsaN/5SnVKcZ05tOJAnBQMKmRltEB4zhbJH/8zneDeVTv2zwtsIWBYYyclEpPbqy0ZHbkeUbrxj1BoAFYVrJVgS6RcYV3LhRsv8BhFUA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NcGzbTMuXg2d9EQHzHL9axGsId27hZQpNGXFy4fthp4=;
 b=aUM0DZIesrYZj3tVkj6RMpIwa1i6fcjp8FZiUDjn9wE8NU8qfypoTIly5znjNE3KX1k4HMrJwmA4TVY+fdKBtc5FFq6X/VUixZmm2x9cjH5zgKoJLPj7Xpvx4Y+34K2W6HHKQafhk2gH/aCdDORYI11PIgcz6WFVn8bPQXQJvGZaWxZBNf6lWNr2xsApnXeIgQdsgdkVe7weY1+2aTJHUaJ8jQw3vfrsBvQx/fDIGkCcwjRuxX32RVhxoKYIMm5C61jChTiE72bEHQTTIsKWy2cmx23zgrRftvAzmz1g8pBW8cNL35gMHcfab4K+pXLTzk3CGgbUafSDuiFxVfINwg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NcGzbTMuXg2d9EQHzHL9axGsId27hZQpNGXFy4fthp4=;
 b=cLrgaK9MxRUiEb7DXFP3nqiUA8Qqv6NFkRtOF50rS+6rY2YCWaXZXe0vLKrxRHcKpgnIeVpnY8hQIehrjiCfKY0NlRoMsnAmC0L3iVoPJkYU0ylexpF60+veMmCMueVYUoirxlD+lXuA1WaxkYN4CY8xsqEsdrcNZyz5lL+e5jo=
From: Luca Fancellu <Luca.Fancellu@arm.com>
To: Michal Orzel <michal.orzel@amd.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Bertrand Marquis
	<Bertrand.Marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	"oleksii.kurochko@gmail.com" <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory node
 probing
Thread-Topic: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory
 node probing
Thread-Index: AQHax5+US/AJkZu/L0SR6EusLPxlorHaHseA
Date: Wed, 26 Jun 2024 14:42:15 +0000
Message-ID: <EDBF2EA4-C235-4537-A4DE-3E111386F6D0@arm.com>
References: <20240626080428.18480-1-michal.orzel@amd.com>
In-Reply-To: <20240626080428.18480-1-michal.orzel@amd.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	AM6PR08MB3749:EE_|VI0PR08MB10655:EE_|DU2PEPF00028D08:EE_|AS2PR08MB9917:EE_
X-MS-Office365-Filtering-Correlation-Id: 5458690f-63a4-49f2-0bbe-08dc95ee3281
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230038|376012|366014|1800799022|38070700016;
X-Microsoft-Antispam-Message-Info-Original:
 =?utf-8?B?UDB6UDd2UHhYM0liQXlrTWVKa1B1d1JEaExNVGpQRHcyQVlGUEpOcWlic0s3?=
 =?utf-8?B?N3YyRnU2dU9nNi9XKzZOK1NaL2t4eDJmTVptWTVJYS84WHZMZ1hheFAyY3Ra?=
 =?utf-8?B?WVZpU1NWeG1FNjAzelgwMXUwNHJjWGVxM1YyeVgyY1ZaM2d2MEVnSTJxUXBQ?=
 =?utf-8?B?enNSTVZWbkVsNEd2MHlNMDdWQmNPK2toQWM5bGR6Sjdsdkd3d3c0eDBiVkM3?=
 =?utf-8?B?bDRWM0JUcGE0aWNTSHpqaWdwVE5Pc29BaEhWclNwL0dVVTYrclZidkpyUG5W?=
 =?utf-8?B?dkxuTXBodVdUamo4RGZ6WFhTRDAyU0ZDL1ZOcEh3OXFsQ2l1VFQxK0hPbm45?=
 =?utf-8?B?RFVmYmIyNGdLOEl2R0ZiOFFBT1h5SFg4TTRrTlBsQ1JLWnBJU3RBd3dnUTBt?=
 =?utf-8?B?QzJzT3MveUt2NHd0STJIdVVTUlNVbys5blJrR0hNV21FMEFpdFM4UklxZTN3?=
 =?utf-8?B?Y1lwSkdFb2Jwc2lyZkJ3Qks5ZUJMYitGRHlDc1ArS21GcEYrS0kxWHFZNGpB?=
 =?utf-8?B?WVNaNUNPLzg2WDljWHhOR2hnTi9kQlAxMm1uTkUzNnBvMWlGOVhuUm9iS2p1?=
 =?utf-8?B?VTJLTFd1WnhBKzhFc3BqL0pTWjBTcTZsNEc1aGh1bWYrNDU4dVl3Q0ZST0hG?=
 =?utf-8?B?RDUrZWQwN3UzOUo2MXhhRzY4N2poZkpkelFQK1ZJYTRqWjJDaldjYXRSeE1B?=
 =?utf-8?B?SDBjSnFNSmdlWm1xR29vUG5IeGlLT2wwQW54RVR1cXhnUkoxMk9RNjRUb1I1?=
 =?utf-8?B?OUwvRXZVQW14aEhsUmVoTjZkL1NOTnlUU0lTdzk0RFJobXMxQlBCeGdBWkV1?=
 =?utf-8?B?WXNnV0dQby84OTdtaVBiQzMzTWVMcVdYRGZhVW91SDJ4QWQ1anRnbFQ5R3BZ?=
 =?utf-8?B?S0JEOUVJRzg2K2ZkVUtFWTFPVjlhWFUwVXNPS1U3cmJGSFc0QTdUR2RPdkpI?=
 =?utf-8?B?dUJQWmpUTzh6bXBnU3JsRjF0WU9mQVVtY0tXWVIvdCtTMXRxYXE5ZkdUckxF?=
 =?utf-8?B?QXhLY3dNSDdHMWZzMHIzVkFaK0tSRmVWWWQrajFiemZQUUVZa3d3emJsWlFL?=
 =?utf-8?B?QStLY09Ha1hiTFhKNExpNld4VllCUTUwOVYyZzZjallkdFl1S3JYYjVqL3lB?=
 =?utf-8?B?dEljRmxmaXdCWmNxZ3paWnpEQlMrTDdrSlhOOHY1cGd0WE5lazZUS0FXOGNz?=
 =?utf-8?B?K3VwMXBVNnByclQ5Zko1dzlBWXpqTUJpV0hMdmowSGpxcWtwNkwxRmNKOUVP?=
 =?utf-8?B?QW8zb1Z2SmFBRkFCek1qRnRab2FRemUxeTZ6OWVHbzNCOXhoWjJ0Q1Fad0Y1?=
 =?utf-8?B?bVcxY0FOZW9HaU1VQTAyV2RweGVBTlRobXZPOXdTbkM0M0R4UFJjMUx0MXp5?=
 =?utf-8?B?ak16L2IyZFhDUGxMTGVVR0g4QkNzWUxja051c2IrTzBwTDdsQXpHN3A4emJL?=
 =?utf-8?B?MnZEd2twTE52L1lYdWxnWHVOdmpCOXhZRFN5S3crYXFOUFZEVndSRHhxVU5l?=
 =?utf-8?B?SWNlaVNrVHFsMk9lMU9ob05DUXJBSXp5M1oxZC9ZQkFJQzBiSWw3djI3aVI2?=
 =?utf-8?B?TGxwbWYxUWJQQWpFa0tUbUdkeHdCaHpmcXdFTktUenNVNXlKU29aeHpRbFV2?=
 =?utf-8?B?UUFuMGFlNG8yanhSQzdBUHFPS0RFYWZYaDRCWmNJcHZxZWZuckQvczJpV21z?=
 =?utf-8?B?cmgvTWhpYVZiWTIxYWZxc2V5dU5MRi95Qmszby9zcHQ5aDArZHNkMFJ3Vit6?=
 =?utf-8?B?RHZZT0FHZWtRRWRKN0dmYUhUa3JzZFZvK1N6NjB0ZC9hS3FHRDFzSW1Oci9Y?=
 =?utf-8?Q?GlTpM0eziq3AEkGybzYJUqoB1IxxLqpv8kYEs=3D?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM6PR08MB3749.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230038)(376012)(366014)(1800799022)(38070700016);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-ID: <0545CF19207DE140B3FF424CBE2F059A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI0PR08MB10655
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DU2PEPF00028D08.eurprd03.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	773bc698-3bae-4a12-3a85-08dc95ee2c51
X-Microsoft-Antispam:
	BCL:0;ARA:13230038|36860700011|376012|1800799022|35042699020|82310400024;
X-Microsoft-Antispam-Message-Info:
	=?utf-8?B?Q21kZFlXMmV1eXg3N3hncUFQMXVsZ1dOZEVHcWFYeG83SFFrWkN2MTlENitB?=
 =?utf-8?B?TS9yRzFpYzhLaWoxZ3dTd3kwVGJJSmlIVisyQTlRa2ZDTTY2akhwNUY5MDQ1?=
 =?utf-8?B?UndoQVlJVjE2Z3picHZLZWRuT0pZOGt6RytOSzZrd0RTSE1MUjFIaG5EVTRE?=
 =?utf-8?B?cEd3STEzWlp6K1AzbTFIVEZDOTlDOGVjbW1pUjNaTjJOOTQ0REV5ZDlCNlEy?=
 =?utf-8?B?Mk9NanJsejA5S3hQb3p2S3BBR3ZhWFpZZFEvMnB1MHgyTUt6bytibFI3dFV5?=
 =?utf-8?B?OThCZWM0RFFRdjRrenBvVld1REV1RFFzcVlpN2ZOb3ZUTnI4OHZPMWhnbjdk?=
 =?utf-8?B?MHJ6czJpcXRGOW9ZWHFkVFEwaWZGVTZyMDJtdkFza2RrWXk0VHJkZjdsaHhO?=
 =?utf-8?B?L0xwSkxmU04zZUNqaU93dTNveEM2dWREcWhCdzEvd24zTFFiRUo4bmt6TmRW?=
 =?utf-8?B?NGtMOFNJWEpDK2ZrbFZ0L0tMRjlhdWl5M0Rpb0ZFNHFlcEhrS2ZtcXhTRWFO?=
 =?utf-8?B?Wmc3MEdvM2p4Z3BBQTFQd0JPVVdTR0VmcWxjUUtyK212TWdXc3VCUWpqajJP?=
 =?utf-8?B?eFBBY2ZhYmN2ZVFEMzB3b2wzeGJTZjcrNFlOTGpRWmRWcVNIdGErVGZxY2xo?=
 =?utf-8?B?b1N5bEUraXh0Nm9nS0pId3J2Yld1UGFYOXFrVjg0T0xsb2oyNzI4eHhXOFo5?=
 =?utf-8?B?b1lpcWxBWkRPekx0RnN0ZUVlSWNUaDF1dnZXZ0w1UDRKNzEvczNWcXFFckpE?=
 =?utf-8?B?UjVxeSs1NUFtaXMxaTRzMEJFRXhoeW5SZDBLc01FYkY1TzJhUUdyQnkrTHht?=
 =?utf-8?B?WGZPTU1sQlZlOWpYWEdESHBYZExvTXV6SlMyMEVtR1RhVGlodWdMRVBDWXgv?=
 =?utf-8?B?ZDExR1NGSHpkTXdqUjdFVi81c3VKRmF2M25ER25NT3JFdzU5UStFZjNYOG9o?=
 =?utf-8?B?WTdjdGlVTE00VGVLTkZEQ1p4a1Nva3NhNVh1TllkczdWbGZyUG9ENGFSMzhp?=
 =?utf-8?B?dktQekwwWWtXbkx0b0c2K2xSZ3UyeWRicndncytrcEFqTWNhU0p0ek8xaFQ5?=
 =?utf-8?B?dFVaSGhhY3ZpT2NxVWs0ZjhoOUpuZ2VPU0hhclBTSE54RFpVUzNzS3lUOWRZ?=
 =?utf-8?B?TjdwU2hlMUdqRHdmOXZsOGpIVlJqSlNldm9ELzdUYTI5enBjZVBuT1VqcmNJ?=
 =?utf-8?B?TE45NFlHQmRjOVE5V291K2N3MnlseG0rZCtTRG1WOW9Gck92YVhqWld0aUVI?=
 =?utf-8?B?emZvRGNHSHh2aUFBS3JQSkpmcjAvQVNjNUJxTEh2WjZKL3V2NG9hbS9DWG1J?=
 =?utf-8?B?Z05ERXNxNzdieG5ROTViRUtUOURlZjN5K1c0QlZCZGV3NGZYL3p2UWVZK3F5?=
 =?utf-8?B?R2VlR0JORThyV05NMUxlK0ZwZkJyN1M1MkFFaUpMUG1nQUJvZUNpNnF3Y0p5?=
 =?utf-8?B?aGhEMXV3NWptK1p1bFlXYjNkYUhiWVRualdjaC9oRjNxT0d4M2FVZ2ZUbmN2?=
 =?utf-8?B?em91L2VVZUlUWTdJbTlnYlBQdFdQZjNaWE5QMU4yM2RtWTlXK3FodFlPSk5x?=
 =?utf-8?B?MlNRL2w1amtaSU16L1F4T1VSSlpxMkJkVDlHaEtONGdxTHJiOWk5cW9FRWVn?=
 =?utf-8?B?V000L1pPaDlpYUxXY3pJKzlvNURkUlpTVDk5TjdXMjh5VEZsMWNxVlo3UVh6?=
 =?utf-8?B?cU1iOTRBY0trTEpPWnc2TmhyQzFFT0RKbDhXcWdKQ3dsSmNYRUpjTG01aGpv?=
 =?utf-8?B?WnVEVWJDS3ptQS9nT0VvUGIwZTl4WU1oejV2V0lIRk8yWTgwOEZFc2t0K2Q1?=
 =?utf-8?B?bFU4Q2FyU0ZYcnp1V1VBNWsyRXk2OGxIQW5ZS3N6S00vcERTamJScEhYelZq?=
 =?utf-8?B?a056SzlZbkVHWEpUNG8wUExoVVNtTEM1ZkUwRVhQZnExL2c9PQ==?=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230038)(36860700011)(376012)(1800799022)(35042699020)(82310400024);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 Jun 2024 14:42:26.1528
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5458690f-63a4-49f2-0bbe-08dc95ee3281
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DU2PEPF00028D08.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9917

Hi Michal,

> On 26 Jun 2024, at 09:04, Michal Orzel <michal.orzel@amd.com> wrote:
> 
> Memory node probing is done as part of early_scan_node() that is called
> for each node with depth >= 1 (root node is at depth 0). According to
> Devicetree Specification v0.4, chapter 3.4, /memory node can only exists
> as a top level node. However, Xen incorrectly considers all the nodes with
> unit node name "memory" as RAM. This buggy behavior can result in a
> failure if there are other nodes in the device tree (at depth >= 2) with
> "memory" as unit node name. An example can be a "memory@xxx" node under
> /reserved-memory. Fix it by introducing device_tree_is_memory_node() to
> perform all the required checks to assess if a node is a proper /memory
> node.
> 
> Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM location and size")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> 4.19: This patch is fixing a possible early boot Xen failure (before main
> console is initialized). In my case it results in a warning "Shattering
> superpage is not supported" and panic "Unable to setup the directmap mappings".
> 
> If this is too late for this patch to go in, we can backport it after the tree
> re-opens.
> ---

It looks ok to me,

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

I've also tested on FVP, I'll put it through our CI and I'll let you know.

Tested-by: Luca Fancellu <luca.fancellu@arm.com>


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:12:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:12:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749145.1157165 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUJX-0001Uf-9S; Wed, 26 Jun 2024 15:12:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749145.1157165; Wed, 26 Jun 2024 15:12:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUJX-0001UY-5i; Wed, 26 Jun 2024 15:12:03 +0000
Received: by outflank-mailman (input) for mailman id 749145;
 Wed, 26 Jun 2024 15:12:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sMUJV-0001US-EA
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:12:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMUJV-0001MX-7u; Wed, 26 Jun 2024 15:12:01 +0000
Received: from [15.248.3.89] (helo=[10.24.67.25])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMUJV-00027W-0o; Wed, 26 Jun 2024 15:12:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=YrtY+XUknKG4Rl78En6BHKtQzOEjyF+mQuFIWuRoS/s=; b=wGiDU+M0FY3jCCLJEX+dYsrOWM
	NeDbyfgLVPtrgP2FNMiN8tKEa/lMFZJTpyFFlivBJjnXeSlOV4SNwVWXVoMVC43lrHmsDRvFtw7Gx
	P/exYtyXGrOecFVKmtC8mFD6ZZO5wVjZ7L3pdkjS1We9SKfMg0MYNIPE/Yb7op+1Qr+c=;
Message-ID: <431cabe9-345d-48d9-bb9c-ad412cf3430f@xen.org>
Date: Wed, 26 Jun 2024 16:11:59 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 2/2] Add scripts/oss-fuzz/build.sh
Content-Language: en-GB
To: Tamas K Lengyel <tamas@tklengyel.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini <sstabellini@kernel.org>
References: <20240621191434.5046-1-tamas@tklengyel.com>
 <20240621191434.5046-2-tamas@tklengyel.com>
 <6f94d071-f90f-485d-a8aa-a0c8a726ce34@xen.org>
 <CABfawhkCJv1oQ4+_bBHf_ys1=gtmFVT-Zn7UeYDLaSm9KQqgcA@mail.gmail.com>
 <9b6819fd-fd76-4249-b1f9-5afb372dd1e1@xen.org>
 <CABfawhmeOn0g2y40_AxRcXQe9EMNJyXhqVtg9OoAYVSHwM37fQ@mail.gmail.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <CABfawhmeOn0g2y40_AxRcXQe9EMNJyXhqVtg9OoAYVSHwM37fQ@mail.gmail.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

Hi Tamas,

On 26/06/2024 14:20, Tamas K Lengyel wrote:
> On Wed, Jun 26, 2024 at 8:41 AM Julien Grall <julien@xen.org> wrote:
>>
>> Hi Tamas,
>>
>> On 24/06/2024 23:18, Tamas K Lengyel wrote:
>>> On Mon, Jun 24, 2024 at 5:58 PM Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi,
>>>>
>>>> On 21/06/2024 20:14, Tamas K Lengyel wrote:
>>>>> The build integration script for oss-fuzz targets.
>>>>
>>>> Do you have any details how this is meant and/or will be used?
>>>
>>> https://google.github.io/oss-fuzz/getting-started/new-project-guide/#buildsh
>>>
>>>>
>>>> I also couldn't find a cover letter. For series with more than one
>>>> patch, it is recommended to have one as it helps threading and could also
>>>> give some insight on what you are aiming to do.
>>>>
>>>>>
>>>>> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
>>>>> ---
>>>>>     scripts/oss-fuzz/build.sh | 22 ++++++++++++++++++++++
>>>>>     1 file changed, 22 insertions(+)
>>>>>     create mode 100755 scripts/oss-fuzz/build.sh
>>>>>
>>>>> diff --git a/scripts/oss-fuzz/build.sh b/scripts/oss-fuzz/build.sh
>>>>> new file mode 100755
>>>>> index 0000000000..48528bbfc2
>>>>> --- /dev/null
>>>>> +++ b/scripts/oss-fuzz/build.sh
>>>>
>>>> Depending on the answer above, we may want to consider creating the
>>>> directory oss-fuzz under automation or maybe tools/fuzz/.
>>>
>>> I'm fine with moving it wherever.
>>
>> What about tools/fuzz then? This is where all the tooling for
>> fuzzing lives.
>>
>>>
>>>>
>>>>> @@ -0,0 +1,22 @@
>>>>> +#!/bin/bash -eu
>>>>> +# Copyright 2024 Google LLC
>>>>
>>>> I am a bit confused with this copyright. Is this script taken from
>>>> somewhere?
>>>
>>> Yes, I took an existing build.sh from oss-fuzz,
>>
>> It is unclear to me what is left from that "existing" build.sh. At least
>> everything below seems to be Xen specific.
>>
>> Anyway, if you want to give the copyright to Google then fair enough,
>> but I think you want to use an Origin tag (or similar) to indicate the
>> original copy.
>>
>>>   it is recommended to
>>> have the more complex part of build.sh as part of the upstream
>>> repository so that additional targets/fixes can be merged there
>>> instead of opening PRs on oss-fuzz directly. With this setup the
>>> build.sh I merge to oss-fuzz will just call this build.sh in the Xen
>>> repository. See
>>> https://github.com/tklengyel/oss-fuzz/commit/552317ae9d24ef1c00d87595516cc364bc33b662.
>>>
>>>>
>>>>> +#
>>>>> +# Licensed under the Apache License, Version 2.0 (the "License");
>>>>> +# you may not use this file except in compliance with the License.
>>>>> +# You may obtain a copy of the License at
>>>>> +#
>>>>> +#      http://www.apache.org/licenses/LICENSE-2.0
>>>>> +#
>>>>> +# Unless required by applicable law or agreed to in writing, software
>>>>> +# distributed under the License is distributed on an "AS IS" BASIS,
>>>>> +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
>>>>> +# See the License for the specific language governing permissions and
>>>>> +# limitations under the License.
>>>>> +#
>>>>> +################################################################################
>>>>> +
>>>>> +cd xen
>>>>> +./configure clang=y --disable-stubdom --disable-pvshim --disable-docs --disable-xen
>>>>
>>>> Looking at the help from ./configure, 'clang=y' is not mentioned and it
>>>> doesn't make any difference in the config.log. Can you clarify why this
>>>> was added?
>>>
>>> Just throwing stuff at the wall till I was able to get a clang build.
>>> If it's indeed not needed I can remove it.
>>>
>>>>
>>>>> +make clang=y -C tools/include
>>>>> +make clang=y -C tools/fuzz/x86_instruction_emulator libfuzzer-harness
>>>>> +cp tools/fuzz/x86_instruction_emulator/libfuzzer-harness $OUT/x86_instruction_emulator
>>>>
>>>> Who will be defining $OUT?
>>>
>>> oss-fuzz
>>
>> Ok. Can you add a link to the documentation in build.sh? This would be
>> helpful for the future reader to understand what $OUT really means.
> 
> Sure, it turns out there is already a README.oss-fuzz in tools/fuzz
> that points to oss-fuzz, so I don't think there is anything else
> needed here.

Perfect. I am fine with that.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:18:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:18:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749154.1157174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUPP-0003go-Sj; Wed, 26 Jun 2024 15:18:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749154.1157174; Wed, 26 Jun 2024 15:18:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUPP-0003gh-QB; Wed, 26 Jun 2024 15:18:07 +0000
Received: by outflank-mailman (input) for mailman id 749154;
 Wed, 26 Jun 2024 15:18:06 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WUJr=N4=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMUPO-0003e5-OJ
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:18:06 +0000
Received: from mail-lj1-x231.google.com (mail-lj1-x231.google.com
 [2a00:1450:4864:20::231])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 489d0d36-33cf-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 17:18:04 +0200 (CEST)
Received: by mail-lj1-x231.google.com with SMTP id
 38308e7fff4ca-2ec52fbb50bso47345001fa.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 08:18:04 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-7068af60b61sm5250981b3a.134.2024.06.26.08.17.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 08:18:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 489d0d36-33cf-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719415084; x=1720019884; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=C+nF/ZJHGXLxzlX1XdgvehvuY5qRS+/fgfLlBv5maYg=;
        b=OIRtNXp7atzAaXEs+gF4EelQ7SCNljyaCp0YkdCP9Pm72GkZVX6txLNkHrUGzH+Nwj
         4upXha7PUfJDFnI/SZnP6eHLltLxTsigmnV2Agmwuj2iayfUnu2mJTiNnMbY+HAugJL0
         UM0XTlwyWRXQ/PX/30AANFp79CQljDlA+cjVkuBVqlk3wRRH6EjGTK9BiPISdtuls0wt
         L//5Uknw7an4Wxqtn1nVxxEeHvoxbTbtlvhV0P/wKyk47htPwA0bHfJ0S4ZwlXpCRuCQ
         rjS2IeOSiQ+prO80qyIteAdS2Py7bkEFX2XrOvPWWLuU+lSg6cSe/Ehn/uSLp1mwelsl
         bRuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719415084; x=1720019884;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=C+nF/ZJHGXLxzlX1XdgvehvuY5qRS+/fgfLlBv5maYg=;
        b=LbK63vSMYk0gDzjcNkMLSthJb5CUuvyFC+2VfKuUyGQfOUkgPJMHXRV0Fj/Xqu/xhX
         zy1kMbFylT6jsCV3bIEh5WzkPC/0zEQUfw/gbjhJ5GYyEjPodAiFSx4o9NrQD13sc85U
         YgKEfT8tcW1QPDQZkdG1LhbJ0ZrOSo5dEG/kYFq+1T1SH89JAiuYS32/nYm50EIiOHVW
         pLLvZXy7uX127+HhcluapHb0CAa3+QerVvbiVNLNA68+qcleHYiIg6rt8tbW+Q5CMbIQ
         TpQ1b/IDhjRAEj9ap8WGwNazX1VGWxoqpGzShuMgjX9YBJ7lOvYAFJxlxNMlUfcYKZpx
         LzlQ==
X-Forwarded-Encrypted: i=1; AJvYcCVHdYWdRWHo/cQV4w0uPdW7ZhGoagYZTez2vBVv26XoRe1ziYv2c5+cpiBcr7tqE1G6snRbamzmcjaYC370M+3un6fell6cmuWGbUL1wxQ=
X-Gm-Message-State: AOJu0YzuEa28mvDY+yR9RZBfSH5jHOTNa/6wSIv1G6/ziHiL/jIq150Q
	slX/dQTzQfAqS2Uwo0KuQdNfoTQjoYIQKZyH+2/3YsddqobPrDqFlpvJILV0Ig==
X-Google-Smtp-Source: AGHT+IGgFDCyfM2WwwjyEP3x8Bpuz3FPRNKSmeMZV4Yabd6o6lqlBqePlZ/CSbrNLLewjgk4HYezMQ==
X-Received: by 2002:a2e:a17a:0:b0:2ec:42db:96a2 with SMTP id 38308e7fff4ca-2ec5b38b432mr63409791fa.29.1719415083634;
        Wed, 26 Jun 2024 08:18:03 -0700 (PDT)
Message-ID: <55717cd6-4819-4935-82df-c04453b9676a@suse.com>
Date: Wed, 26 Jun 2024 17:17:53 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v2 (resend) 04/27] acpi: vmap pages in
 acpi_os_alloc_memory
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: julien@xen.org, pdurrant@amazon.com, dwmw@amazon.com,
 Hongyan Xia <hongyxia@amazon.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <jgrall@amazon.com>, Elias El Yandouzi <eliasely@amazon.com>,
 xen-devel@lists.xenproject.org
References: <20240116192611.41112-1-eliasely@amazon.com>
 <20240116192611.41112-5-eliasely@amazon.com>
 <D29ZZSXN0QPV.2627WUC2J3NUK@cloud.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <D29ZZSXN0QPV.2627WUC2J3NUK@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 15:54, Alejandro Vallejo wrote:
> I'm late to the party but there's something bothering me a little.
> 
> On Tue Jan 16, 2024 at 7:25 PM GMT, Elias El Yandouzi wrote:
>> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
>> index 171271fae3..966a7e763f 100644
>> --- a/xen/common/vmap.c
>> +++ b/xen/common/vmap.c
>> @@ -245,6 +245,11 @@ void *vmap(const mfn_t *mfn, unsigned int nr)
>>      return __vmap(mfn, 1, nr, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
>>  }
>>  
>> +void *vmap_contig(mfn_t mfn, unsigned int nr)
>> +{
>> +    return __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
>> +}
>> +
>>  unsigned int vmap_size(const void *va)
>>  {
>>      unsigned int pages = vm_size(va, VMAP_DEFAULT);
> 
> How is vmap_contig() different from regular vmap()?
> 
> vmap() calls map_pages_to_xen() `nr` times, while vmap_contig() calls it just
> once. I'd expect both cases to work fine as they are. What am I missing? What
> would make...
> 
>> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
>> index 389505f786..ab80d6b2a9 100644
>> --- a/xen/drivers/acpi/osl.c
>> +++ b/xen/drivers/acpi/osl.c
>> @@ -221,7 +221,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
>>  	void *ptr;
>>  
>>  	if (system_state == SYS_STATE_early_boot)
>> -		return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
>> +	{
>> +		mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
>> +
>> +		return vmap_contig(mfn, PFN_UP(sz));
> ... this statement not operate identically with regular vmap()? Or
> probably more interestingly, what would preclude existing calls to vmap() not
> operate under vmap_contig() instead?

Note how vmap()'s first parameter is "const mfn_t *mfn". This needs to point
to an array of "nr" MFNs. In order to use plain vmap() here, you'd first need
to set up a suitably large array, populate it with increasing MFN values, and
then make the call. Possible, but more complicated.
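To make the difference concrete, here is a hedged, standalone sketch of the
array-building step plain vmap() would require (mfn_t is a stand-in typedef
here, not Xen's real definition):

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for Xen's mfn_t; not the real definition. */
typedef unsigned long mfn_t;

/*
 * Sketch of the extra work plain vmap() would need for a physically
 * contiguous range: build an array of "nr" consecutive MFNs first.
 * vmap_contig() avoids this by taking a single base MFN.
 */
static mfn_t *build_contig_mfn_array(mfn_t base, unsigned int nr)
{
    mfn_t *arr = malloc(nr * sizeof(*arr));
    unsigned int i;

    if ( !arr )
        return NULL;

    for ( i = 0; i < nr; i++ )
        arr[i] = base + i;

    return arr;
}
```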

Jan


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:22:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749160.1157185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTw-0006Xo-Fd; Wed, 26 Jun 2024 15:22:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749160.1157185; Wed, 26 Jun 2024 15:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTw-0006Xh-Cj; Wed, 26 Jun 2024 15:22:48 +0000
Received: by outflank-mailman (input) for mailman id 749160;
 Wed, 26 Jun 2024 15:22:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5vpY=N4=bounce.vates.tech=bounce-md_30504962.667c3242.v1-9372eb8379bc40e28f2b20363a613314@srs-se1.protection.inumbo.net>)
 id 1sMUTu-0006XW-4w
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:22:46 +0000
Received: from mail134-3.atl141.mandrillapp.com
 (mail134-3.atl141.mandrillapp.com [198.2.134.3])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef0b585f-33cf-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 17:22:44 +0200 (CEST)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-3.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4W8QS24phBzDRJ1Ww
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 15:22:42 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 9372eb8379bc40e28f2b20363a613314; Wed, 26 Jun 2024 15:22:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef0b585f-33cf-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719415362; x=1719675862;
	bh=33Jim/AT+SoQIHMb1mWyjCf8Oyw2QMc8mVmnlKwJbW0=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=VUeM9GbAV49+gfaocJTswbsCSqvad+4KX2Pe8cVUZWzgaTTGBWLbqMmOjTc8cn9or
	 BZA7XJ1zmPb7bvIw/uPysFy3SGOx0l+XGMnoEz9xejKSdNd6UO45QA2KqOgr2HDVcQ
	 HnpqJ/L6z78SaJZso4ZUfyLCh1IiCeQj6MmtVfITiG1CPhWyc14CZJ7S2iOIlmY+oc
	 K+kHvJ8no0yQJ7ULOHkv8X9J1JtCahkJE/1glo4YlHPmdJuqx0LrrbAI2xe3AtvZGG
	 nfHUCD9z5+u1QF2V3uO1DXvKdEkS05N7rXlTju9cm1aFHMFgDQS415TU8Twv00/4qO
	 dO9+21B/ed5+w==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719415362; x=1719675862; i=teddy.astie@vates.tech;
	bh=33Jim/AT+SoQIHMb1mWyjCf8Oyw2QMc8mVmnlKwJbW0=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=CpLOVNHVJwfMpTS6sH7xfPwuevjNkfdkC1N2vmO/4VDHUeIoZBTD8jFJ/xz1zC0OZ
	 9cX90ACQywxGTQQYqZCs0XAb6qREqBjKtYOanFkz3I3yHnLLuQlPRgYqKjy4m73B30
	 8lyFYEh0D9xt7ZerwTifwpbG1M5Mt9GrpzcCBoN7Jyqhy83+nSkqkyqMLRPHCEXda/
	 adpnmKYa7JGnSmuxXkUue0McLqqdc6xlUq+tkIESBte7vMPfmoblmAzMW8+jakTFQA
	 HzWXiPWwTu5a79lMPK3ScS+qYKwJ2+0mPV4MyiizD8mJTl+M/tuCQ9R6AvhvBS6PIH
	 zhTFNXJ/Ig9jg==
From: TSnake41 <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=20v2=202/5]=20docs/designs:=20Add=20a=20design=20document=20for=20IOMMU=20subsystem=20redesign?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719415361048
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <0f9658f25c98f1acdab8788c705287d743103d91.1719414736.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1719414736.git.teddy.astie@vates.tech>
References: <cover.1719414736.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.9372eb8379bc40e28f2b20363a613314?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 15:22:42 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

From: Teddy Astie <teddy.astie@vates.tech>

The current IOMMU subsystem has some limitations that make PV-IOMMU practically
impossible. One of them is the assumption that each domain is bound to a single
"IOMMU domain", which also complicates the quarantine implementation.

Moreover, the current IOMMU subsystem is not entirely well-defined; for instance,
the behavior of map_page differs greatly between ARM SMMUv3 and x86 VT-d/AMD-Vi.
On ARM, it can modify the domain's page table, while on x86 it may be forbidden
(e.g. when using HAP with PVH), or only modify the device's point of view (e.g.
when using PV).

The goal of this redesign is to define more explicitly the behavior and interface
of the IOMMU subsystem, while allowing PV-IOMMU to be effectively implemented.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changed in V2:
* nit s/dettach/detach/
---
 docs/designs/iommu-contexts.md | 398 +++++++++++++++++++++++++++++++++
 1 file changed, 398 insertions(+)
 create mode 100644 docs/designs/iommu-contexts.md

diff --git a/docs/designs/iommu-contexts.md b/docs/designs/iommu-contexts.md
new file mode 100644
index 0000000000..8211f91692
--- /dev/null
+++ b/docs/designs/iommu-contexts.md
@@ -0,0 +1,398 @@
+# IOMMU context management in Xen
+
+Status: Experimental
+Revision: 0
+
+# Background
+
+The design for *IOMMU paravirtualization for Dom0* [1] explains that some guests may
+want access to IOMMU features. In order to implement this in Xen, several adjustments
+need to be made to the IOMMU subsystem.
+
+The "hardware IOMMU domain" is currently implemented on a per-domain basis, such that
+each domain has exactly one *hardware IOMMU domain*. This design aims to allow a
+single Xen domain to manage several "IOMMU contexts", and to allow some domains (e.g.
+Dom0 [1]) to modify their IOMMU contexts.
+
+In addition, the quarantine feature can be refactored to use IOMMU contexts, reducing
+the complexity of platform-specific implementations and ensuring more consistency
+across platforms.
+
+# IOMMU context
+
+We define an "IOMMU context" as being a *hardware IOMMU domain*, named a context
+to avoid confusion with Xen domains.
+It represents a hardware-specific data structure that contains mappings from a device
+frame number to a machine frame number (e.g. using a page table) that can be applied
+to a device using the IOMMU hardware.
+
+This structure is bound to a Xen domain, but a Xen domain may have several IOMMU
+contexts. These contexts may be modifiable using the interface defined in [1], aside
+from some specific cases (e.g. modifying the default context).
+
+This is implemented in Xen as a new structure that will hold context-specific
+data.
+
+```c
+struct iommu_context {
+    u16 id; /* Context id (0 means default context) */
+    struct list_head devices;
+
+    struct arch_iommu_context arch;
+
+    bool opaque; /* context can't be modified nor accessed (e.g HAP) */
+};
+```
+
+A context is identified by a domain-specific number that may be used by IOMMU users
+such as the guest's PV-IOMMU interface.
+
+struct arch_iommu_context is split out of struct arch_iommu:
+
+```c
+struct arch_iommu_context
+{
+    spinlock_t pgtables_lock;
+    struct page_list_head pgtables;
+
+    union {
+        /* Intel VT-d */
+        struct {
+            uint64_t pgd_maddr; /* io page directory machine address */
+            domid_t *didmap; /* per-iommu DID */
+            unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the context uses */
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            struct page_info *root_table;
+        } amd;
+    };
+};
+
+struct arch_iommu
+{
+    spinlock_t mapping_lock; /* io page table lock */
+    struct {
+        struct page_list_head list;
+        spinlock_t lock;
+    } pgtables;
+
+    struct list_head identity_maps;
+
+    union {
+        /* Intel VT-d */
+        struct {
+            /* no more context-specific values */
+            unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            unsigned int paging_mode;
+            struct guest_iommu *g_iommu;
+        } amd;
+    };
+};
+```
+
+IOMMU context information is now carried by struct iommu_context rather than being
+integrated into struct arch_iommu.
+
+# Xen domain IOMMU structure
+
+`struct domain_iommu` is modified to allow multiple contexts to exist within a
+single Xen domain:
+
+```c
+struct iommu_context_list {
+    uint16_t count; /* Context count excluding default context */
+
+    /* if count > 0 */
+
+    uint64_t *bitmap; /* bitmap of context allocation */
+    struct iommu_context *map; /* Map of contexts */
+};
+
+struct domain_iommu {
+    /* ... */
+
+    struct iommu_context default_ctx;
+    struct iommu_context_list other_contexts;
+
+    /* ... */
+}
+```
+
+default_ctx is a special context with id=0 that holds the page table mapping the
+entire domain, which basically preserves the previous behavior. All devices are
+expected to be bound to this context during initialization.
+
+Alongside this default context, which always exists, we use a pool of contexts of a
+fixed size determined at domain initialization, from which contexts can be allocated
+(if possible); each allocated context has an id matching its position in the map
+(considering that id != 0). These contexts may be used by IOMMU context users such
+as PV-IOMMU or the quarantine domain (DomIO).
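As an illustration of the pool described above, here is a hedged sketch of how a
context id could be allocated from the bitmap (all names are hypothetical; Xen's
actual allocator may differ):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of allocating a context id from the fixed-size
 * pool described above: id 0 is the default context, and bit i of the
 * bitmap tracks the slot for context id i + 1.
 */
static int ctx_alloc_id(uint64_t *bitmap, uint16_t count, uint16_t *ctx_no)
{
    uint16_t i;

    for ( i = 0; i < count; i++ )
    {
        if ( !(bitmap[i / 64] & (1ULL << (i % 64))) )
        {
            bitmap[i / 64] |= 1ULL << (i % 64);
            *ctx_no = i + 1; /* ids are 1-based; 0 is the default context */
            return 0;
        }
    }

    return -1; /* pool exhausted */
}
```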
+
+# Platform independent context management interface
+
+A new platform-independent interface is introduced in the Xen hypervisor to allow
+IOMMU context users to create and manage contexts within domains.
+
+```c
+/* Direct context access functions (not supposed to be used directly) */
+#define iommu_default_context(d) (&dom_iommu(d)->default_ctx)
+struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no);
+int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ctx_no, u32 flags);
+int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags);
+
+/* Check if a specific context exists in the domain; note that ctx_no=0 always
+    exists */
+bool iommu_check_context(struct domain *d, u16 ctx_no);
+
+/* Flag for default context initialization */
+#define IOMMU_CONTEXT_INIT_default (1 << 0)
+
+/* Flag for quarantine contexts (scratch page, DMA Abort mode, ...) */
+#define IOMMU_CONTEXT_INIT_quarantine (1 << 1)
+
+/* Flag to specify that devices will need to be reattached to the default context */
+#define IOMMU_TEARDOWN_REATTACH_DEFAULT (1 << 0)
+
+/* Allocate a new context, uses CONTEXT_INIT flags */
+int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags);
+
+/* Free a context, uses CONTEXT_TEARDOWN flags */
+int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags);
+
+/* Move a device from one context to another, including between different domains. */
+int iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                            device_t *dev, u16 ctx_no);
+
+/* Add a device to a context for first initialization */
+int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no);
+
+/* Remove a device from a context, effectively removing it from the IOMMU. */
+int iommu_detach_context(struct domain *d, device_t *dev);
+```
+
+This interface will rely on a new driver-facing interface to implement these features.
+
+Some existing functions will take a new parameter specifying the context to operate on.
+- iommu_map (iommu_legacy_map untouched)
+- iommu_unmap (iommu_legacy_unmap untouched)
+- iommu_lookup_page
+- iommu_iotlb_flush
+
+These functions will modify the iommu_context structure to reflect the operations
+applied, and will be used to replace some operations previously made in the IOMMU
+driver.
+
+# IOMMU platform_ops interface changes
+
+The IOMMU driver needs to expose a way to create and manage IOMMU contexts. The
+approach taken here is to modify the interface to allow specifying an IOMMU context
+on operations, while at the same time simplifying the interface by relying more on
+platform-independent IOMMU code.
+
+Added functions in iommu_ops
+
+```c
+/* Initialize a context (creating page tables, allocating hardware structures, ...) */
+int (*context_init)(struct domain *d, struct iommu_context *ctx,
+                    u32 flags);
+/* Destroy a context, assumes no device is bound to the context. */
+int (*context_teardown)(struct domain *d, struct iommu_context *ctx,
+                        u32 flags);
+/* Put a device in a context (assumes the device is not attached to another context) */
+int (*attach)(struct domain *d, device_t *dev,
+              struct iommu_context *ctx);
+/* Remove a device from a context, and from the IOMMU. */
+int (*detach)(struct domain *d, device_t *dev,
+              struct iommu_context *prev_ctx);
+/* Move the device from a context to another, including if the new context is in
+   another domain. d corresponds to the target domain. */
+int (*reattach)(struct domain *d, device_t *dev,
+                struct iommu_context *prev_ctx,
+                struct iommu_context *ctx);
+
+#ifdef CONFIG_HAS_PCI
+/* Specific interface for phantom function devices. */
+int (*add_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                    struct iommu_context *ctx);
+int (*remove_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                struct iommu_context *ctx);
+#endif
+
+/* Changes in existing functions to use a specified iommu_context. */
+int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
+                                unsigned int flags,
+                                unsigned int *flush_flags,
+                                struct iommu_context *ctx);
+int __must_check (*unmap_page)(struct domain *d, dfn_t dfn,
+                                unsigned int order,
+                                unsigned int *flush_flags,
+                                struct iommu_context *ctx);
+int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                                unsigned int *flags,
+                                struct iommu_context *ctx);
+
+int __must_check (*iotlb_flush)(struct iommu_context *ctx, dfn_t dfn,
+                                unsigned long page_count,
+                                unsigned int flush_flags);
+
+void (*clear_root_pgtable)(struct domain *d, struct iommu_context *ctx);
+```
+
+These functions are redundant with existing functions; therefore, the following
+functions are replaced with new equivalents:
+- quarantine_init: platform-independent code and the IOMMU_CONTEXT_INIT_quarantine flag
+- add_device: attach and add_devfn (phantom)
+- assign_device: attach and add_devfn (phantom)
+- remove_device: detach and remove_devfn (phantom)
+- reassign_device: reattach
+
+There are some functional differences with the previous functions; the following
+should be handled by platform-independent/arch-specific code instead of the IOMMU
+driver:
+- identity mappings (unity mappings and RMRR)
+- device list in context and domain
+- domain of a device
+- quarantine
+
+The idea behind this is to implement IOMMU context features while simplifying IOMMU
+driver implementations and ensuring more consistency between IOMMU drivers.
+
+## Phantom function handling
+
+PCI devices may use additional devfns to do DMA operations. In order to support such
+devices, an interface is added to map specific device functions without implying that
+the device is mapped to a new context (which could cause duplicates in Xen data
+structures).
+
+The add_devfn and remove_devfn functions allow mapping an IOMMU context on a specific
+devfn of a PCI device, without altering platform-independent data structures.
+
+It is important for the reattach operation to take care of these devices, in order
+to prevent devices from being partially reattached to the new context (see XSA-449
+[2]), by using an all-or-nothing approach for reattaching such devices.
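The all-or-nothing idea can be modelled as follows (a hedged sketch; the ctx[]
array, the fail_at parameter, and the rollback loop are illustrative stand-ins,
not the actual Xen code):

```c
#include <assert.h>

/*
 * Model of the all-or-nothing reattach for a device and its phantom
 * functions (cf. XSA-449): ctx[i] holds the context of devfn i.
 * Attempt to move every devfn from old_ctx to new_ctx; "fail_at"
 * simulates a failure on that devfn (pass nr or more for no failure).
 * On failure, roll back the devfns already moved so the device is
 * never left split across contexts.
 */
static int reattach_all_or_nothing(unsigned int *ctx, unsigned int nr,
                                   unsigned int old_ctx, unsigned int new_ctx,
                                   unsigned int fail_at)
{
    unsigned int i;

    for ( i = 0; i < nr; i++ )
    {
        if ( i == fail_at )
        {
            while ( i-- )          /* roll back to the previous context */
                ctx[i] = old_ctx;
            return -1;
        }
        ctx[i] = new_ctx;
    }

    return 0;
}
```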
+
+# Quarantine refactoring using IOMMU contexts
+
+The quarantine mechanism can be entirely reimplemented using IOMMU contexts, making
+it simpler and more consistent across platforms.
+
+Quarantine is currently only supported on x86 platforms and works by creating a
+single *hardware IOMMU domain* per quarantined device. All the quarantine logic is
+then implemented in a platform-specific fashion, while actually implementing the same
+concepts:
+
+The *hardware IOMMU context* data structures for quarantine are currently stored in
+the device structure itself (using arch_pci_dev), and the IOMMU driver needs to care
+about whether we are dealing with quarantine operations or regular operations (often
+dealt with using macros such as QUARANTINE_SKIP or DEVICE_PGTABLE).
+
+The page table that will apply to the quarantined device is created with reserved
+device regions mapped, adding mappings to a scratch page if enabled
+(quarantine=scratch-page).
+
+A new approach we can use is allowing the quarantine domain (DomIO) to manage IOMMU
+contexts, and implement all the quarantine logic using IOMMU contexts.
+
+That way, the quarantine implementation can be platform-independent, and thus more
+consistent across platforms. It will also allow quarantine to work with other IOMMU
+implementations without having to implement platform-specific behavior.
+Moreover, quarantine operations can be implemented using regular context operations
+instead of relying on driver-specific code.
+
+The quarantine implementation can be summarised as:
+
+```c
+int iommu_quarantine_dev_init(device_t *dev)
+{
+    int ret;
+    u16 ctx_no;
+
+    if ( !iommu_quarantine )
+        return -EINVAL;
+
+    ret = iommu_context_alloc(dom_io, &ctx_no, IOMMU_CONTEXT_INIT_quarantine);
+
+    if ( ret )
+        return ret;
+
+    /** TODO: Setup scratch page, mappings... */
+
+    ret = iommu_reattach_context(dev->domain, dom_io, dev, ctx_no);
+
+    if ( ret )
+    {
+        /* Don't free inside ASSERT(): the call would be compiled out
+           in release builds. */
+        if ( iommu_context_free(dom_io, ctx_no, 0) )
+            ASSERT_UNREACHABLE();
+        return ret;
+    }
+
+    return ret;
+}
+```
+
+# Platform-specific considerations
+
+## Reference counters on target pages
+
+When mapping a guest page into an IOMMU context, we need to make sure that
+this page is not reused for something else while it is still referenced
+by an IOMMU context. One way of doing this is incrementing the reference
+counter of each target page we map (excluding reserved regions), and
+decrementing it when the mapping isn't used anymore.
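A minimal model of this scheme, with a simplified page structure standing in for
Xen's real page_info handling (names are illustrative):

```c
#include <assert.h>

/* Simplified stand-in for a page with a reference counter. */
struct page {
    unsigned int refcount;
};

/* Take a reference when the page is mapped into a context, so it
 * cannot be reused for something else while still mapped. */
static int ctx_map_page(struct page *pg)
{
    pg->refcount++;
    return 0;
}

/* Drop the reference when the mapping is removed. */
static void ctx_unmap_page(struct page *pg)
{
    assert(pg->refcount > 0);
    pg->refcount--;
}
```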
+
+One consideration is destroying a context while it still has mappings. We can walk
+through the entire page table and decrement the reference counter of all mappings.
+All of that assumes that no reserved region is mapped (which should be the case, as
+a requirement of teardown or as a consequence of the REATTACH_DEFAULT flag).
+
+Another consideration is that the "cleanup mappings" operation may take a lot
+of time depending on the complexity of the page table. Making the teardown
+operation preemptible allows the hypercall to be preempted if needed, and also
+prevents a malicious guest from stalling a CPU in a teardown operation with a
+specially crafted IOMMU context (e.g. with several 1G superpages).
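A hedged sketch of what such a preemptible cleanup loop could look like, with -1
standing in for -ERESTART and a simple counter standing in for the page-table
walk (all names are illustrative):

```c
#include <assert.h>

/*
 * Drop at most "budget" mapping references per pass, and report that
 * work remains (-1, modelling -ERESTART) so the hypercall can be
 * continued instead of stalling a CPU on a huge page table.
 */
static int cleanup_mappings(unsigned long *remaining, unsigned long budget)
{
    while ( *remaining && budget-- )
        (*remaining)--;    /* drop one mapping's reference */

    return *remaining ? -1 : 0;
}
```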
+
+## Limit the amount of pages IOMMU contexts can use
+
+In order to prevent a (possibly malicious) guest from causing too many allocations
+in Xen, we can enforce limits on the memory the IOMMU subsystem can use for IOMMU
+contexts. A possible implementation is to preallocate a reasonably large chunk of
+memory and split it into pages for use by the IOMMU subsystem, only for non-default
+IOMMU contexts (e.g. the PV-IOMMU interface); if this limit is reached, some
+operations may fail from the guest side. These limitations shouldn't impact "usual"
+operations of the IOMMU subsystem (e.g. default context initialization).
+
+## x86 Architecture
+
+TODO
+
+### Intel VT-d
+
+VT-d uses a DID to tag the *IOMMU domain* applied to a device, and assumes that all
+entries with the same DID use the same page table (i.e. the same IOMMU context).
+Under certain circumstances (e.g. a DRHD with a DID limit below 16 bits), the *DID*
+is transparently converted into a DRHD-specific DID using an internally managed map.
+
+The current implementation of the code reuses the Xen domain_id as the DID.
+However, with multiple IOMMU contexts per domain, we can't use the domain_id for
+all contexts (otherwise, different page tables would be mapped with the same DID).
+The following strategy is used:
+- for the default context, reuse the domain_id (the default context is unique per
+domain)
+- for non-default contexts, use an id allocated from the pseudo_domid map (currently
+used by quarantine), which is a DID outside of the Xen domain_id range
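This strategy can be sketched as follows (hypothetical names; the real
pseudo_domid map is more involved than the simple counter used here):

```c
#include <assert.h>

#define DOMID_MAX 0x7fff   /* illustrative bound for Xen domain ids */

/*
 * Pick a DID for a context: the default context (ctx_no == 0) reuses
 * the Xen domain id, while non-default contexts get a pseudo-domid
 * above DOMID_MAX so that distinct page tables never share a DID.
 */
static unsigned int context_did(unsigned int domid, unsigned int ctx_no,
                                unsigned int *next_pseudo)
{
    if ( ctx_no == 0 )
        return domid;            /* default context: unique per domain */

    return (*next_pseudo)++;     /* allocated outside the domain_id range */
}
```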
+
+### AMD-Vi
+
+TODO
+
+## Device-tree platforms
+
+### SMMU and SMMUv3
+
+TODO
+
+* * *
+
+[1] See pv-iommu.md
+
+[2] pci: phantom functions assigned to incorrect contexts
+https://xenbits.xen.org/xsa/advisory-449.html
\ No newline at end of file
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:22:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749161.1157190 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTw-0006aW-P7; Wed, 26 Jun 2024 15:22:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749161.1157190; Wed, 26 Jun 2024 15:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTw-0006ZT-Ic; Wed, 26 Jun 2024 15:22:48 +0000
Received: by outflank-mailman (input) for mailman id 749161;
 Wed, 26 Jun 2024 15:22:46 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1DF7=N4=bounce.vates.tech=bounce-md_30504962.667c3243.v1-103ac74f00f0420b835b12a5fefcf48e@srs-se1.protection.inumbo.net>)
 id 1sMUTu-0006XW-Q5
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:22:46 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef96bde0-33cf-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 17:22:45 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W8QS34Zjdz5QkLmN
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 15:22:43 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 103ac74f00f0420b835b12a5fefcf48e; Wed, 26 Jun 2024 15:22:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef96bde0-33cf-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719415363; x=1719675863;
	bh=+dqkQHl/dBDSZmbUuZEbUp1Dg+EDo5EWONBvSTIRkoc=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=zu+1txio9rqtGtc8zodvtjexU2773Q3kE6DDkqfs5NaIYQzQolU/wib/CMBICgMQK
	 96Nwnbj5zuIz49aAPesvRrRaib4I5tdkTzVFDgUVVdrTCfn/RfxVE/V9sCmzXcSxRf
	 q6sV001135tlUgppi1Q8q3nzp58hh4tbEWgFCO2MiMxaFH8WgucOKbve9FweBuILp+
	 vvRz64bMnDoD2Vld/kav1yNNJKCTVbI+avlAQYSKOzUfjnTkxdTLKE6JiebVdwuUYp
	 /zyxoTr9hrB0ML8a4ZhA/NJFG1AdCETs/9MjYQA0cBgu1LGCs1CR3WMAlW8ky/2byx
	 Z7sKj7hvg3JvQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719415363; x=1719675863; i=teddy.astie@vates.tech;
	bh=+dqkQHl/dBDSZmbUuZEbUp1Dg+EDo5EWONBvSTIRkoc=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=me3tWdZQgeFilzrzDFkNMkKnF0yQ4SSKmtRI+M1sVhvKgsh+hOHL1k8eBpPBhelbW
	 x6AyHxSObmywC2x0YoJmF977tM9JrmRx4JEE/9YZJ7GI4c4+HAmdGTIR9BtCQPS/R1
	 zaKfL0qfYcx7NixsNC5VWfqqXp5+1A6aY+vP3shj3tNPavSU85Nb5CKe9gJSh3xUAS
	 PglqDhqgRZnZvVLvxR6PUsxazhgR+PLl84JMmJUvR9DKEh6pGra3Ua5bZ5Ap43jhvw
	 ezdTDUac0msFGp8FQtzP/93GJznAdkMKqf4D8jBHXLa9No+fsD1QhuAOfqIi6f8EDv
	 s06n+3N4dtkEA==
From: TSnake41 <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=20v2=201/5]=20docs/designs:=20Add=20a=20design=20document=20for=20PV-IOMMU?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719415360143
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <6642b573ddc8abc79f376c9393b7917c4f42a590.1719414736.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1719414736.git.teddy.astie@vates.tech>
References: <cover.1719414736.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.103ac74f00f0420b835b12a5fefcf48e?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 15:22:43 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

From: Teddy Astie <teddy.astie@vates.tech>

Some operating systems want to use the IOMMU to implement various features (e.g.
VFIO) or DMA protection.
This patch introduces a proposal for IOMMU paravirtualization for Dom0.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
 docs/designs/pv-iommu.md | 105 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 105 insertions(+)
 create mode 100644 docs/designs/pv-iommu.md

diff --git a/docs/designs/pv-iommu.md b/docs/designs/pv-iommu.md
new file mode 100644
index 0000000000..c01062a3ad
--- /dev/null
+++ b/docs/designs/pv-iommu.md
@@ -0,0 +1,105 @@
+# IOMMU paravirtualization for Dom0
+
+Status: Experimental
+
+# Background
+
+By default, Xen only uses the IOMMU for itself, either to make the device address
+space coherent with the guest address space (x86 HVM/PVH) or to prevent devices
+from doing DMA outside their expected memory regions, including the hypervisor
+(x86 PV).
+
+A limitation is that guests (especially privileged ones) may want to use
+IOMMU hardware in order to implement features such as DMA protection and
+VFIO [1], as IOMMU functionality is currently not available outside of the
+hypervisor.
+
+[1] VFIO - "Virtual Function I/O" - https://www.kernel.org/doc/html/latest/driver-api/vfio.html
+
+# Design
+
+The operating system may want access to various IOMMU features such as
+context management and DMA remapping. We can create a new hypercall that gives
+the guest access to a new paravirtualized IOMMU interface.
+
+This feature is only meant to be available to Dom0; since DomUs may have
+emulated devices, which are not hardware and can't be managed on the Xen side,
+we can't rely on the hardware IOMMU to enforce DMA remapping for them.
+
+This interface is exposed under the `iommu_op` hypercall.
+
+In addition, Xen domains are modified to allow the existence of several
+IOMMU contexts, including a default one that implements the default behavior
+(e.g. hardware-assisted paging) and can't be modified by the guest. DomUs cannot
+have additional contexts, and therefore act as if they only have the default
+context.
+
+Each IOMMU context within a Xen domain is identified using a domain-specific
+context number that is used in the Xen IOMMU subsystem and the hypercall
+interface.
+
+The number of IOMMU contexts a domain can use is predetermined at domain creation
+and is configurable through the `dom0-iommu=nb-ctx=N` Xen command line parameter.
+
+# IOMMU operations
+
+## Alloc context
+
+Create a new IOMMU context for the guest and return the context number to the
+guest.
+Fail if the IOMMU context limit of the guest is reached.
+
+A flag can be specified to create an identity mapping.
+
+## Free context
+
+Destroy an IOMMU context created previously.
+It is not possible to free the default context.
+
+Reattach the context's devices to the default context if specified by the guest.
+
+Fail if there is a device in the context and the reattach-to-default flag is not
+specified.
+
+## Reattach device
+
+Reattach a device to another IOMMU context (including the default one).
+The target IOMMU context number must be valid and the context allocated.
+
+The guest needs to specify the PCI SBDF of a device it has access to.
+
+## Map/unmap page
+
+Map/unmap a page on a context.
+The guest needs to specify a gfn and target dfn to map.
+
+Refuse to create the mapping if one already exists for the same dfn.
+
+## Lookup page
+
+Get the gfn mapped by a specific dfn.
+
+# Implementation considerations
+
+## Hypercall batching
+
+In order to prevent unneeded hypercalls and IOMMU flushing, it is advisable to
+be able to batch some critical IOMMU operations (e.g map/unmap multiple pages).
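A hedged model of such a batched map operation, with a flat array standing in
for the context page table and a flush counter showing the single IOTLB flush
per batch (types and names are illustrative, not a proposed ABI):

```c
#include <assert.h>
#include <stddef.h>

/* One gfn -> dfn mapping request. */
struct map_entry {
    unsigned long gfn;
    unsigned long dfn;
};

/*
 * Map "nr" entries in one call and flush once at the end, instead of
 * one hypercall (and one flush) per page.
 */
static int map_batch(unsigned long *dfn_table, size_t table_size,
                     const struct map_entry *ents, size_t nr,
                     unsigned int *flushes)
{
    size_t i;

    for ( i = 0; i < nr; i++ )
    {
        /* Refuse to overwrite an existing mapping for the same dfn. */
        if ( ents[i].dfn >= table_size || dfn_table[ents[i].dfn] )
            return -1;
        dfn_table[ents[i].dfn] = ents[i].gfn;
    }

    ++*flushes;    /* single IOTLB flush for the whole batch */
    return 0;
}
```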
+
+## Hardware without IOMMU support
+
+The operating system needs to be aware of the PV-IOMMU capability, and of
+whether it is able to create contexts. Some operating systems may critically
+fail if they cannot create a new IOMMU context, which is expected to happen
+when no IOMMU hardware is available.
+
+The hypercall interface needs a way to advertise the ability to create and
+manage IOMMU contexts, including the number of contexts the guest is able to
+use. Using this information, Dom0 can decide whether or not to use the
+PV-IOMMU interface.
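A guest-side feature check based on the capability query could look like the following sketch (the struct mirrors the `cap` sub-op of the proposed interface; the decision function itself is a hypothetical example, not prescribed by this design): a reported context count of zero means context management is unavailable and the guest should fall back.

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the capability sub-op of the proposed pv_iommu_op. */
struct demo_cap {
    uint16_t max_ctx_no;               /* 0 => no context support */
    uint32_t max_nr_pages;
    uint64_t max_iova_addr;
};

/* Hypothetical guest-side decision: only use PV-IOMMU context
 * management when at least one context can be allocated. */
static int demo_can_manage_contexts(const struct demo_cap *cap)
{
    return cap->max_ctx_no > 0;
}
```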
+
+## Page pool for contexts
+
+In order to prevent a buggy Dom0 from unexpectedly starving the hypervisor of
+memory, we can preallocate the pages the contexts will use and make map/unmap
+use these pages instead of allocating them dynamically.
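Such a preallocated pool can be modelled as a fixed free list that fails cleanly when exhausted instead of growing, as in this sketch (all names hypothetical; the real pool would hold `struct page_info` pages sized by a per-context quota):

```c
#include <assert.h>
#include <stddef.h>

#define POOL_PAGES 8                   /* quota fixed at context creation */

/* Toy preallocated page pool: a fixed free list of page indices. */
struct demo_pool {
    int free_list[POOL_PAGES];
    size_t nr_free;
};

static void demo_pool_init(struct demo_pool *p)
{
    for (size_t i = 0; i < POOL_PAGES; i++)
        p->free_list[i] = (int)i;
    p->nr_free = POOL_PAGES;
}

static int demo_pool_get(struct demo_pool *p)
{
    if (p->nr_free == 0)
        return -1;                     /* quota exhausted: no dynamic fallback */
    return p->free_list[--p->nr_free];
}

static void demo_pool_put(struct demo_pool *p, int page)
{
    p->free_list[p->nr_free++] = page;
}
```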
+
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:22:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749163.1157209 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTy-00072s-9o; Wed, 26 Jun 2024 15:22:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749163.1157209; Wed, 26 Jun 2024 15:22:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTy-00072G-6L; Wed, 26 Jun 2024 15:22:50 +0000
Received: by outflank-mailman (input) for mailman id 749163;
 Wed, 26 Jun 2024 15:22:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+UY/=N4=bounce.vates.tech=bounce-md_30504962.667c3245.v1-1767b6f4715f4d92a79f29ab3bb9e524@srs-se1.protection.inumbo.net>)
 id 1sMUTw-0006XW-QT
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:22:48 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f11df52d-33cf-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 17:22:47 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W8QS55tBSz5QkLrh
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 15:22:45 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 1767b6f4715f4d92a79f29ab3bb9e524; Wed, 26 Jun 2024 15:22:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f11df52d-33cf-11ef-90a3-e314d9c70b13
From: TSnake41 <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=20v2=205/5]=20xen/public:=20Introduce=20PV-IOMMU=20hypercall=20interface?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719415364570
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <42edb79171c5e075c673af6c406c930e543b3774.1719414737.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1719414736.git.teddy.astie@vates.tech>
References: <cover.1719414736.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 15:22:45 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

From: Teddy Astie <teddy.astie@vates.tech>

Introduce a new PV interface to manage the underlying IOMMU, its contexts and
devices. This interface allows creation of new contexts from Dom0 and addition
of IOMMU mappings from the guest's point of view.

This interface doesn't allow creation of mappings to other domains.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changed in V2:
* formatting
---
 xen/common/Makefile           |   1 +
 xen/common/pv-iommu.c         | 320 ++++++++++++++++++++++++++++++++++
 xen/include/hypercall-defs.c  |   6 +
 xen/include/public/pv-iommu.h | 114 ++++++++++++
 xen/include/public/xen.h      |   1 +
 5 files changed, 442 insertions(+)
 create mode 100644 xen/common/pv-iommu.c
 create mode 100644 xen/include/public/pv-iommu.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index f12a474d40..52ada89888 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -58,6 +58,7 @@ obj-y += wait.o
 obj-bin-y += warning.init.o
 obj-$(CONFIG_XENOPROF) += xenoprof.o
 obj-y += xmalloc_tlsf.o
+obj-y += pv-iommu.o
 
 obj-bin-$(CONFIG_X86) += $(foreach n,decompress bunzip2 unxz unlzma lzo unlzo unlz4 unzstd earlycpio,$(n).init.o)
 
diff --git a/xen/common/pv-iommu.c b/xen/common/pv-iommu.c
new file mode 100644
index 0000000000..844642ee54
--- /dev/null
+++ b/xen/common/pv-iommu.c
@@ -0,0 +1,320 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * xen/common/pv_iommu.c
+ *
+ * PV-IOMMU hypercall interface.
+ */
+
+#include <xen/mm.h>
+#include <xen/lib.h>
+#include <xen/iommu.h>
+#include <xen/sched.h>
+#include <xen/pci.h>
+#include <xen/guest_access.h>
+#include <asm/p2m.h>
+#include <asm/event.h>
+#include <public/pv-iommu.h>
+
+#define PVIOMMU_PREFIX "[PV-IOMMU] "
+
+#define PVIOMMU_MAX_PAGES 256 /* Move to Kconfig ? */
+
+/* Allowed masks for each sub-operation */
+#define ALLOC_OP_FLAGS_MASK (0)
+#define FREE_OP_FLAGS_MASK (IOMMU_TEARDOWN_REATTACH_DEFAULT)
+
+static int get_paged_frame(struct domain *d, gfn_t gfn, mfn_t *mfn,
+                           struct page_info **page, int readonly)
+{
+    p2m_type_t p2mt;
+
+    *page = get_page_from_gfn(d, gfn_x(gfn), &p2mt,
+                             (readonly) ? P2M_ALLOC : P2M_UNSHARE);
+
+    if ( !(*page) )
+    {
+        *mfn = INVALID_MFN;
+        if ( p2m_is_shared(p2mt) )
+            return -EINVAL;
+        if ( p2m_is_paging(p2mt) )
+        {
+            p2m_mem_paging_populate(d, gfn);
+            return -EIO;
+        }
+
+        return -EPERM;
+    }
+
+    *mfn = page_to_mfn(*page);
+
+    return 0;
+}
+
+static int can_use_iommu_check(struct domain *d)
+{
+    if ( !iommu_enabled )
+    {
+        printk(PVIOMMU_PREFIX "IOMMU is not enabled\n");
+        return 0;
+    }
+
+    if ( !is_hardware_domain(d) )
+    {
+        printk(PVIOMMU_PREFIX "Non-hardware domain\n");
+        return 0;
+    }
+
+    if ( !is_iommu_enabled(d) )
+    {
+        printk(PVIOMMU_PREFIX "IOMMU disabled for this domain\n");
+        return 0;
+    }
+
+    return 1;
+}
+
+static long query_cap_op(struct pv_iommu_op *op, struct domain *d)
+{
+    op->cap.max_ctx_no = d->iommu.other_contexts.count;
+    op->cap.max_nr_pages = PVIOMMU_MAX_PAGES;
+    op->cap.max_iova_addr = (1LLU << 39) - 1; /* TODO: hardcoded 39-bits */
+
+    return 0;
+}
+
+static long alloc_context_op(struct pv_iommu_op *op, struct domain *d)
+{
+    u16 ctx_no = 0;
+    int status = 0;
+
+    status = iommu_context_alloc(d, &ctx_no, op->flags & ALLOC_OP_FLAGS_MASK);
+
+    if (status < 0)
+        return status;
+
+    printk("Created context %hu\n", ctx_no);
+
+    op->ctx_no = ctx_no;
+    return 0;
+}
+
+static long free_context_op(struct pv_iommu_op *op, struct domain *d)
+{
+    return iommu_context_free(d, op->ctx_no,
+                              IOMMU_TEARDOWN_PREEMPT | (op->flags & FREE_OP_FLAGS_MASK));
+}
+
+static long reattach_device_op(struct pv_iommu_op *op, struct domain *d)
+{
+    struct physdev_pci_device dev = op->reattach_device.dev;
+    device_t *pdev;
+
+    pdev = pci_get_pdev(d, PCI_SBDF(dev.seg, dev.bus, dev.devfn));
+
+    if ( !pdev )
+        return -ENOENT;
+
+    return iommu_reattach_context(d, d, pdev, op->ctx_no);
+}
+
+static long map_pages_op(struct pv_iommu_op *op, struct domain *d)
+{
+    int ret = 0, flush_ret;
+    struct page_info *page = NULL;
+    mfn_t mfn;
+    unsigned int flags;
+    unsigned int flush_flags = 0;
+    size_t i = 0;
+
+    if ( op->map_pages.nr_pages > PVIOMMU_MAX_PAGES )
+        return -E2BIG;
+
+    if ( !iommu_check_context(d, op->ctx_no) )
+        return -EINVAL;
+
+    //printk("Mapping gfn:%lx-%lx to dfn:%lx-%lx on %hu\n",
+    //       op->map_pages.gfn, op->map_pages.gfn + op->map_pages.nr_pages - 1,
+    //       op->map_pages.dfn, op->map_pages.dfn + op->map_pages.nr_pages - 1,
+    //       op->ctx_no);
+
+    flags = 0;
+
+    if ( op->flags & IOMMU_OP_readable )
+        flags |= IOMMUF_readable;
+
+    if ( op->flags & IOMMU_OP_writeable )
+        flags |= IOMMUF_writable;
+
+    for (i = 0; i < op->map_pages.nr_pages; i++)
+    {
+        gfn_t gfn = _gfn(op->map_pages.gfn + i);
+        dfn_t dfn = _dfn(op->map_pages.dfn + i);
+
+        /* Lookup pages struct backing gfn */
+        ret = get_paged_frame(d, gfn, &mfn, &page, 0);
+
+        if ( ret )
+            break;
+
+        /* Check for conflict with existing mappings */
+        if ( !iommu_lookup_page(d, dfn, &mfn, &flags, op->ctx_no) )
+        {
+            put_page(page);
+            ret = -EADDRINUSE;
+            break;
+        }
+
+        ret = iommu_map(d, dfn, mfn, 1, flags, &flush_flags, op->ctx_no);
+
+        if ( ret )
+            break;
+    }
+
+    op->map_pages.mapped = i;
+
+    flush_ret = iommu_iotlb_flush(d, _dfn(op->map_pages.dfn),
+                                  op->map_pages.nr_pages, flush_flags,
+                                  op->ctx_no);
+
+    if ( flush_ret )
+        printk("Flush operation failed (%d)\n", flush_ret);
+
+    return ret;
+}
+
+static long unmap_pages_op(struct pv_iommu_op *op, struct domain *d)
+{
+    mfn_t mfn;
+    int ret = 0, flush_ret;
+    unsigned int flags;
+    unsigned int flush_flags = 0;
+    size_t i = 0;
+
+    if ( op->unmap_pages.nr_pages > PVIOMMU_MAX_PAGES )
+        return -E2BIG;
+
+    if ( !iommu_check_context(d, op->ctx_no) )
+        return -EINVAL;
+
+    //printk("Unmapping dfn:%lx-%lx on %hu\n",
+    //       op->unmap_pages.dfn, op->unmap_pages.dfn + op->unmap_pages.nr_pages - 1,
+    //       op->ctx_no);
+
+    for (i = 0; i < op->unmap_pages.nr_pages; i++)
+    {
+        dfn_t dfn = _dfn(op->unmap_pages.dfn + i);
+
+        /* Check if there is a valid mapping for this domain */
+        if ( iommu_lookup_page(d, dfn, &mfn, &flags, op->ctx_no) ) {
+            ret = -ENOENT;
+            break;
+        }
+
+        ret = iommu_unmap(d, dfn, 1, 0, &flush_flags, op->ctx_no);
+
+        if (ret)
+            break;
+
+        /* Decrement reference counter */
+        put_page(mfn_to_page(mfn));
+    }
+
+    op->unmap_pages.unmapped = i;
+
+    flush_ret = iommu_iotlb_flush(d, _dfn(op->unmap_pages.dfn),
+                                  op->unmap_pages.nr_pages, flush_flags,
+                                  op->ctx_no);
+
+    if ( flush_ret )
+        printk("Flush operation failed (%d)\n", flush_ret);
+
+    return ret;
+}
+
+static long lookup_page_op(struct pv_iommu_op *op, struct domain *d)
+{
+    mfn_t mfn;
+    gfn_t gfn;
+    unsigned int flags = 0;
+
+    if ( !iommu_check_context(d, op->ctx_no) )
+        return -EINVAL;
+
+    /* Check if there is a valid BFN mapping for this domain */
+    if ( iommu_lookup_page(d, _dfn(op->lookup_page.dfn), &mfn, &flags, op->ctx_no) )
+        return -ENOENT;
+
+    gfn = mfn_to_gfn(d, mfn);
+    BUG_ON(gfn_eq(gfn, INVALID_GFN));
+
+    op->lookup_page.gfn = gfn_x(gfn);
+
+    return 0;
+}
+
+long do_iommu_sub_op(struct pv_iommu_op *op)
+{
+    struct domain *d = current->domain;
+
+    if ( !can_use_iommu_check(d) )
+        return -EPERM;
+
+    switch ( op->subop_id )
+    {
+        case 0:
+            return 0;
+
+        case IOMMUOP_query_capabilities:
+            return query_cap_op(op, d);
+
+        case IOMMUOP_alloc_context:
+            return alloc_context_op(op, d);
+
+        case IOMMUOP_free_context:
+            return free_context_op(op, d);
+
+        case IOMMUOP_reattach_device:
+            return reattach_device_op(op, d);
+
+        case IOMMUOP_map_pages:
+            return map_pages_op(op, d);
+
+        case IOMMUOP_unmap_pages:
+            return unmap_pages_op(op, d);
+
+        case IOMMUOP_lookup_page:
+            return lookup_page_op(op, d);
+
+        default:
+            return -EINVAL;
+    }
+}
+
+long do_iommu_op(XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    long ret = 0;
+    struct pv_iommu_op op;
+
+    if ( unlikely(copy_from_guest(&op, arg, 1)) )
+        return -EFAULT;
+
+    ret = do_iommu_sub_op(&op);
+
+    if ( ret == -ERESTART )
+        return hypercall_create_continuation(__HYPERVISOR_iommu_op, "h", arg);
+
+    if ( unlikely(copy_to_guest(arg, &op, 1)) )
+        return -EFAULT;
+
+    return ret;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/hypercall-defs.c b/xen/include/hypercall-defs.c
index 47c093acc8..4ba4480867 100644
--- a/xen/include/hypercall-defs.c
+++ b/xen/include/hypercall-defs.c
@@ -209,6 +209,9 @@ hypfs_op(unsigned int cmd, const char *arg1, unsigned long arg2, void *arg3, uns
 #ifdef CONFIG_X86
 xenpmu_op(unsigned int op, xen_pmu_params_t *arg)
 #endif
+#ifdef CONFIG_HAS_PASSTHROUGH
+iommu_op(void *arg)
+#endif
 
 #ifdef CONFIG_PV
 caller: pv64
@@ -295,5 +298,8 @@ mca                                do       do       -        -        -
 #ifndef CONFIG_PV_SHIM_EXCLUSIVE
 paging_domctl_cont                 do       do       do       do       -
 #endif
+#ifdef CONFIG_HAS_PASSTHROUGH
+iommu_op                           do       do       do       do       -
+#endif
 
 #endif /* !CPPCHECK */
diff --git a/xen/include/public/pv-iommu.h b/xen/include/public/pv-iommu.h
new file mode 100644
index 0000000000..45f9c44eb1
--- /dev/null
+++ b/xen/include/public/pv-iommu.h
@@ -0,0 +1,114 @@
+/* SPDX-License-Identifier: MIT */
+/******************************************************************************
+ * pv-iommu.h
+ *
+ * Paravirtualized IOMMU driver interface.
+ *
+ * Copyright (c) 2024 Teddy Astie <teddy.astie@vates.tech>
+ */
+
+#ifndef __XEN_PUBLIC_PV_IOMMU_H__
+#define __XEN_PUBLIC_PV_IOMMU_H__
+
+#include "xen.h"
+#include "physdev.h"
+
+#define IOMMU_DEFAULT_CONTEXT (0)
+
+/**
+ * Query PV-IOMMU capabilities for this domain.
+ */
+#define IOMMUOP_query_capabilities    1
+
+/**
+ * Allocate an IOMMU context, the new context handle will be written to ctx_no.
+ */
+#define IOMMUOP_alloc_context         2
+
+/**
+ * Destroy a IOMMU context.
+ * All devices attached to this context are reattached to default context.
+ *
+ * The default context can't be destroyed (0).
+ */
+#define IOMMUOP_free_context          3
+
+/**
+ * Reattach the device to IOMMU context.
+ */
+#define IOMMUOP_reattach_device       4
+
+#define IOMMUOP_map_pages             5
+#define IOMMUOP_unmap_pages           6
+
+/**
+ * Get the GFN associated to a specific DFN.
+ */
+#define IOMMUOP_lookup_page           7
+
+struct pv_iommu_op {
+    uint16_t subop_id;
+    uint16_t ctx_no;
+
+/**
+ * Create a context that is cloned from default.
+ * The new context will be populated with 1:1 mappings covering the entire guest memory.
+ */
+#define IOMMU_CREATE_clone (1 << 0)
+
+#define IOMMU_OP_readable (1 << 0)
+#define IOMMU_OP_writeable (1 << 1)
+    uint32_t flags;
+
+    union {
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+            /* Number of pages to map */
+            uint32_t nr_pages;
+            /* Number of pages actually mapped after sub-op */
+            uint32_t mapped;
+        } map_pages;
+
+        struct {
+            uint64_t dfn;
+            /* Number of pages to unmap */
+            uint32_t nr_pages;
+            /* Number of pages actually unmapped after sub-op */
+            uint32_t unmapped;
+        } unmap_pages;
+
+        struct {
+            struct physdev_pci_device dev;
+        } reattach_device;
+
+        struct {
+            uint64_t gfn;
+            uint64_t dfn;
+        } lookup_page;
+
+        struct {
+            /* Maximum number of IOMMU context this domain can use. */
+            uint16_t max_ctx_no;
+            /* Maximum number of pages that can be modified in a single map/unmap operation. */
+            uint32_t max_nr_pages;
+            /* Maximum device address (iova) that the guest can use for mappings. */
+            uint64_t max_iova_addr;
+        } cap;
+    };
+};
+
+typedef struct pv_iommu_op pv_iommu_op_t;
+DEFINE_XEN_GUEST_HANDLE(pv_iommu_op_t);
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
\ No newline at end of file
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index b47d48d0e2..28ab815ebc 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -118,6 +118,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
 #define __HYPERVISOR_hypfs_op             42
+#define __HYPERVISOR_iommu_op             43
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
-- 
2.45.2





From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:22:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:22:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749162.1157205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTy-00070i-2V; Wed, 26 Jun 2024 15:22:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749162.1157205; Wed, 26 Jun 2024 15:22:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUTx-00070b-VG; Wed, 26 Jun 2024 15:22:49 +0000
Received: by outflank-mailman (input) for mailman id 749162;
 Wed, 26 Jun 2024 15:22:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ae1K=N4=bounce.vates.tech=bounce-md_30504962.667c3245.v1-5ad0b2e156c34e69915bc1aa59ba347b@srs-se1.protection.inumbo.net>)
 id 1sMUTw-0006XW-5N
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:22:48 +0000
Received: from mail134-3.atl141.mandrillapp.com
 (mail134-3.atl141.mandrillapp.com [198.2.134.3])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id f0463e3a-33cf-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 17:22:46 +0200 (CEST)
Received: from pmta10.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail134-3.atl141.mandrillapp.com (Mailchimp) with ESMTP id
 4W8QS51QTbzDRJ1Wv
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 15:22:45 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 5ad0b2e156c34e69915bc1aa59ba347b; Wed, 26 Jun 2024 15:22:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0463e3a-33cf-11ef-90a3-e314d9c70b13
From: TSnake41 <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=20v2=203/5]=20IOMMU:=20Introduce=20redesigned=20IOMMU=20subsystem?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719415362277
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Lukasz Hawrylko <lukasz@hawrylko.pl>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, =?utf-8?Q?Mateusz=20M=C3=B3wka?= <mateusz.mowka@intel.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <5ea6c3150ca0cf32b6f5993d1de8a638f1631e3d.1719414736.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1719414736.git.teddy.astie@vates.tech>
References: <cover.1719414736.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 15:22:45 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

From: Teddy Astie <teddy.astie@vates.tech>

Based on docs/designs/iommu-contexts.md, implement the redesigned IOMMU subsystem.

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changed in V2:
* cleanup some unneeded includes
* fix dangling devices in context on detach
---
 xen/arch/x86/domain.c                |   2 +-
 xen/arch/x86/mm/p2m-ept.c            |   2 +-
 xen/arch/x86/pv/dom0_build.c         |   4 +-
 xen/arch/x86/tboot.c                 |   4 +-
 xen/common/memory.c                  |   4 +-
 xen/drivers/passthrough/Makefile     |   3 +
 xen/drivers/passthrough/context.c    | 626 +++++++++++++++++++++++++++
 xen/drivers/passthrough/iommu.c      | 337 ++++----------
 xen/drivers/passthrough/pci.c        |  49 ++-
 xen/drivers/passthrough/quarantine.c |  49 +++
 xen/include/xen/iommu.h              | 118 ++++-
 xen/include/xen/pci.h                |   3 +
 12 files changed, 896 insertions(+), 305 deletions(-)
 create mode 100644 xen/drivers/passthrough/context.c
 create mode 100644 xen/drivers/passthrough/quarantine.c

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index ccadfe0c9e..1dd2453d71 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -2405,7 +2405,7 @@ int domain_relinquish_resources(struct domain *d)
 
     PROGRESS(iommu_pagetables):
 
-        ret = iommu_free_pgtables(d);
+        ret = iommu_free_pgtables(d, iommu_default_context(d));
         if ( ret )
             return ret;
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index 469e27ee93..80026a9cb9 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -975,7 +975,7 @@ out:
             rc = iommu_iotlb_flush(d, _dfn(gfn), 1ul << order,
                                    (iommu_flags ? IOMMU_FLUSHF_added : 0) |
                                    (vtd_pte_present ? IOMMU_FLUSHF_modified
-                                                    : 0));
+                                                    : 0), 0);
         else if ( need_iommu_pt_sync(d) )
             rc = iommu_flags ?
                 iommu_legacy_map(d, _dfn(gfn), mfn, 1ul << order, iommu_flags) :
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index d8043fa58a..db7298737d 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -76,7 +76,7 @@ static __init void mark_pv_pt_pages_rdonly(struct domain *d,
          * iommu_memory_setup() ended up mapping them.
          */
         if ( need_iommu_pt_sync(d) &&
-             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, 0, flush_flags) )
+             iommu_unmap(d, _dfn(mfn_x(page_to_mfn(page))), 1, 0, flush_flags, 0) )
             BUG();
 
         /* Read-only mapping + PGC_allocated + page-table page. */
@@ -127,7 +127,7 @@ static void __init iommu_memory_setup(struct domain *d, const char *what,
 
     while ( (rc = iommu_map(d, _dfn(mfn_x(mfn)), mfn, nr,
                             IOMMUF_readable | IOMMUF_writable | IOMMUF_preempt,
-                            flush_flags)) > 0 )
+                            flush_flags, 0)) > 0 )
     {
         mfn = mfn_add(mfn, rc);
         nr -= rc;
diff --git a/xen/arch/x86/tboot.c b/xen/arch/x86/tboot.c
index ba0700d2d5..ca55306830 100644
--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -216,9 +216,9 @@ static void tboot_gen_domain_integrity(const uint8_t key[TB_KEY_SIZE],
 
         if ( is_iommu_enabled(d) && is_vtd )
         {
-            const struct domain_iommu *dio = dom_iommu(d);
+            struct domain_iommu *dio = dom_iommu(d);
 
-            update_iommu_mac(&ctx, dio->arch.vtd.pgd_maddr,
+            update_iommu_mac(&ctx, iommu_default_context(d)->arch.vtd.pgd_maddr,
                              agaw_to_level(dio->arch.vtd.agaw));
         }
     }
diff --git a/xen/common/memory.c b/xen/common/memory.c
index de2cc7ad92..0eb0f9da7b 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -925,7 +925,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         this_cpu(iommu_dont_flush_iotlb) = 0;
 
         ret = iommu_iotlb_flush(d, _dfn(xatp->idx - done), done,
-                                IOMMU_FLUSHF_modified);
+                                IOMMU_FLUSHF_modified, 0);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
 
@@ -939,7 +939,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
             put_page(pages[i]);
 
         ret = iommu_iotlb_flush(d, _dfn(xatp->gpfn - done), done,
-                                IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified);
+                                IOMMU_FLUSHF_added | IOMMU_FLUSHF_modified, 0);
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index a1621540b7..69327080ab 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -4,6 +4,9 @@ obj-$(CONFIG_X86) += x86/
 obj-$(CONFIG_ARM) += arm/
 
 obj-y += iommu.o
+obj-y += context.o
+obj-y += quarantine.o
+
 obj-$(CONFIG_HAS_PCI) += pci.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-$(CONFIG_HAS_PCI) += ats.o
diff --git a/xen/drivers/passthrough/context.c b/xen/drivers/passthrough/context.c
new file mode 100644
index 0000000000..d0c07054b3
--- /dev/null
+++ b/xen/drivers/passthrough/context.c
@@ -0,0 +1,626 @@
+/*
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/iommu.h>
+#include <xen/event.h>
+#include <xen/sched.h>
+#include <xen/spinlock.h>
+#include <xen/bitops.h>
+#include <xen/bitmap.h>
+
+bool iommu_check_context(struct domain *d, u16 ctx_no) {
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if (ctx_no == 0)
+        return 1; /* Default context always exist. */
+
+    if ((ctx_no - 1) >= hd->other_contexts.count)
+        return 0; /* out of bounds */
+
+    return test_bit(ctx_no - 1, hd->other_contexts.bitmap);
+}
+
+struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if ( !iommu_check_context(d, ctx_no) )
+        return NULL;
+
+    if ( ctx_no == 0 )
+        return &hd->default_ctx;
+
+    return &hd->other_contexts.map[ctx_no - 1];
+}
+
+static unsigned int mapping_order(const struct domain_iommu *hd,
+                                  dfn_t dfn, mfn_t mfn, unsigned long nr)
+{
+    unsigned long res = dfn_x(dfn) | mfn_x(mfn);
+    unsigned long sizes = hd->platform_ops->page_sizes;
+    unsigned int bit = ffsl(sizes) - 1, order = 0;
+
+    ASSERT(bit == PAGE_SHIFT);
+
+    while ( (sizes = (sizes >> bit) & ~1) )
+    {
+        unsigned long mask;
+
+        bit = ffsl(sizes) - 1;
+        mask = (1UL << bit) - 1;
+        if ( nr <= mask || (res & mask) )
+            break;
+        order += bit;
+        nr >>= bit;
+        res >>= bit;
+    }
+
+    return order;
+}
+
+long _iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
+               unsigned long page_count, unsigned int flags,
+               unsigned int *flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned long i;
+    unsigned int order, j = 0;
+    int rc = 0;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if ( !iommu_check_context(d, ctx_no) )
+        return -ENOENT;
+
+    ASSERT(!IOMMUF_order(flags));
+
+    for ( i = 0; i < page_count; i += 1UL << order )
+    {
+        dfn_t dfn = dfn_add(dfn0, i);
+        mfn_t mfn = mfn_add(mfn0, i);
+
+        order = mapping_order(hd, dfn, mfn, page_count - i);
+
+        if ( (flags & IOMMUF_preempt) &&
+             ((!(++j & 0xfff) && general_preempt_check()) ||
+              i > LONG_MAX - (1UL << order)) )
+            return i;
+
+        rc = iommu_call(hd->platform_ops, map_page, d, dfn, mfn,
+                        flags | IOMMUF_order(order), flush_flags,
+                        iommu_get_context(d, ctx_no));
+
+        if ( likely(!rc) )
+            continue;
+
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
+
+        /* while statement to satisfy __must_check */
+        while ( _iommu_unmap(d, dfn0, i, 0, flush_flags, ctx_no) )
+            break;
+
+        if ( !ctx_no && !is_hardware_domain(d) )
+            domain_crash(d);
+
+        break;
+    }
+
+    /*
+     * Something went wrong so, if we were dealing with more than a single
+     * page, flush everything and clear flush flags.
+     */
+    if ( page_count > 1 && unlikely(rc) &&
+         !iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
+    return rc;
+}
+
+long iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
+               unsigned long page_count, unsigned int flags,
+               unsigned int *flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    long ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_map(d, dfn0, mfn0, page_count, flags, flush_flags, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
+                     unsigned long page_count, unsigned int flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int flush_flags = 0;
+    int rc;
+
+    ASSERT(!(flags & IOMMUF_preempt));
+
+    spin_lock(&hd->lock);
+    rc = _iommu_map(d, dfn, mfn, page_count, flags, &flush_flags, 0);
+
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = _iommu_iotlb_flush(d, dfn, page_count, flush_flags, 0);
+    spin_unlock(&hd->lock);
+
+    return rc;
+}
+
+long iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
+                 unsigned int flags, unsigned int *flush_flags,
+                 u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    long ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_unmap(d, dfn0, page_count, flags, flush_flags, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+long _iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
+                  unsigned int flags, unsigned int *flush_flags,
+                  u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned long i;
+    unsigned int order, j = 0;
+    int rc = 0;
+
+    if ( !is_iommu_enabled(d) )
+        return 0;
+
+    if ( !iommu_check_context(d, ctx_no) )
+        return -ENOENT;
+
+    ASSERT(!(flags & ~IOMMUF_preempt));
+
+    for ( i = 0; i < page_count; i += 1UL << order )
+    {
+        dfn_t dfn = dfn_add(dfn0, i);
+        int err;
+
+        order = mapping_order(hd, dfn, _mfn(0), page_count - i);
+
+        if ( (flags & IOMMUF_preempt) &&
+             ((!(++j & 0xfff) && general_preempt_check()) ||
+              i > LONG_MAX - (1UL << order)) )
+            return i;
+
+        err = iommu_call(hd->platform_ops, unmap_page, d, dfn,
+                         flags | IOMMUF_order(order), flush_flags,
+                         iommu_get_context(d, ctx_no));
+
+        if ( likely(!err) )
+            continue;
+
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
+                   d->domain_id, dfn_x(dfn), err);
+
+        if ( !rc )
+            rc = err;
+
+        if ( !is_hardware_domain(d) )
+        {
+            domain_crash(d);
+            break;
+        }
+    }
+
+    /*
+     * Something went wrong so, if we were dealing with more than a single
+     * page, flush everything and clear flush flags.
+     */
+    if ( page_count > 1 && unlikely(rc) &&
+         !iommu_iotlb_flush_all(d, *flush_flags) )
+        *flush_flags = 0;
+
+    return rc;
+}
+
+int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_count)
+{
+    unsigned int flush_flags = 0;
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    spin_lock(&hd->lock);
+    rc = _iommu_unmap(d, dfn, page_count, 0, &flush_flags, 0);
+
+    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
+        rc = _iommu_iotlb_flush(d, dfn, page_count, flush_flags, 0);
+    spin_unlock(&hd->lock);
+
+    return rc;
+}
+
+int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
+                      unsigned int *flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    if ( !is_iommu_enabled(d) || !hd->platform_ops->lookup_page )
+        return -EOPNOTSUPP;
+
+    if ( !iommu_check_context(d, ctx_no) )
+        return -ENOENT;
+
+    spin_lock(&hd->lock);
+    ret = iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags,
+                     iommu_get_context(d, ctx_no));
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count,
+                       unsigned int flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
+         !page_count || !flush_flags )
+        return 0;
+
+    if ( dfn_eq(dfn, INVALID_DFN) )
+        return -EINVAL;
+
+    /* hd->lock is expected to be held by the caller; don't drop it here. */
+    if ( !iommu_check_context(d, ctx_no) )
+        return -ENOENT;
+
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, iommu_get_context(d, ctx_no),
+                    dfn, page_count, flush_flags);
+    if ( unlikely(rc) )
+    {
+        if ( !d->is_shutting_down && printk_ratelimit() )
+            printk(XENLOG_ERR
+                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %lu flags %x\n",
+                   d->domain_id, rc, dfn_x(dfn), page_count, flush_flags);
+
+        if ( !is_hardware_domain(d) )
+            domain_crash(d);
+    }
+
+    return rc;
+}
+
+int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count,
+                      unsigned int flush_flags, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_iotlb_flush(d, dfn, page_count, flush_flags, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ctx_no, u32 flags)
+{
+    if ( !dom_iommu(d)->platform_ops->context_init )
+        return -ENOSYS;
+
+    INIT_LIST_HEAD(&ctx->devices);
+    ctx->id = ctx_no;
+    ctx->dying = false;
+
+    return iommu_call(dom_iommu(d)->platform_ops, context_init, d, ctx, flags);
+}
+
+int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags)
+{
+    unsigned int i;
+    int ret;
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->lock);
+
+    /* TODO: use TSL instead ? */
+    i = find_first_zero_bit(hd->other_contexts.bitmap, hd->other_contexts.count);
+
+    if ( i >= hd->other_contexts.count ) /* no free context */
+    {
+        spin_unlock(&hd->lock);
+        return -ENOSPC;
+    }
+
+    set_bit(i, hd->other_contexts.bitmap);
+
+    *ctx_no = i + 1;
+
+    ret = iommu_context_init(d, iommu_get_context(d, *ctx_no), *ctx_no, flags);
+
+    if ( ret )
+        __clear_bit(i, hd->other_contexts.bitmap);
+
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no)
+{
+    struct iommu_context *ctx;
+    int ret;
+
+    pcidevs_lock();
+
+    if ( !iommu_check_context(d, ctx_no) )
+    {
+        ret = -ENOENT;
+        goto unlock;
+    }
+
+    ctx = iommu_get_context(d, ctx_no);
+
+    if ( ctx->dying )
+    {
+        ret = -EINVAL;
+        goto unlock;
+    }
+
+    ret = iommu_call(dom_iommu(d)->platform_ops, attach, d, dev, ctx);
+
+    if ( !ret )
+    {
+        dev->context = ctx_no;
+        list_add(&dev->context_list, &ctx->devices);
+    }
+
+unlock:
+    pcidevs_unlock();
+    return ret;
+}
+
+int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_attach_context(d, dev, ctx_no);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_detach_context(struct domain *d, device_t *dev)
+{
+    struct iommu_context *ctx;
+    int ret;
+
+    if ( !dev->domain )
+    {
+        printk(XENLOG_WARNING "IOMMU: Trying to detach a non-attached device\n");
+        WARN();
+        return 0;
+    }
+
+    /* Make sure device is actually in the domain. */
+    ASSERT(d == dev->domain);
+
+    pcidevs_lock();
+
+    ctx = iommu_get_context(d, dev->context);
+    ASSERT(ctx); /* Device referencing an invalid context (stale dev->context)? */
+
+    ret = iommu_call(dom_iommu(d)->platform_ops, detach, d, dev, ctx);
+
+    if ( !ret )
+    {
+        list_del(&dev->context_list);
+
+        /** TODO: Do we need to remove the device from domain ?
+         *        Reattaching to something (quarantine, hardware domain ?)
+         */
+
+        /*
+         * rcu_lock_domain ?
+         * list_del(&dev->domain_list);
+         * dev->domain = ?;
+         */
+    }
+
+    pcidevs_unlock();
+    return ret;
+}
+
+int iommu_detach_context(struct domain *d, device_t *dev)
+{
+    int ret;
+    struct domain_iommu *hd = dom_iommu(d);
+
+    spin_lock(&hd->lock);
+    ret = _iommu_detach_context(d, dev);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int _iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                            device_t *dev, u16 ctx_no)
+{
+    struct domain_iommu *hd;
+    u16 prev_ctx_no;
+    device_t *ctx_dev;
+    struct iommu_context *prev_ctx, *next_ctx;
+    int ret;
+    bool same_domain;
+
+    /* Make sure we actually are doing something meaningful */
+    BUG_ON(!prev_dom && !next_dom);
+
+    /*
+     * TODO: Do such cases exist? If platform ops could differ, they would
+     * have to match here:
+     * if ( dom_iommu(prev_dom)->platform_ops != dom_iommu(next_dom)->platform_ops )
+     *     return -EINVAL;
+     */
+
+    if ( !prev_dom )
+        return _iommu_attach_context(next_dom, dev, ctx_no);
+
+    if ( !next_dom )
+        return _iommu_detach_context(prev_dom, dev);
+
+    /*
+     * Take pcidevs_lock only after the delegating paths above: both callees
+     * acquire it themselves, and returning with it held would leak it.
+     */
+    pcidevs_lock();
+
+    hd = dom_iommu(prev_dom);
+    same_domain = prev_dom == next_dom;
+
+    prev_ctx_no = dev->context;
+
+    if ( same_domain && (ctx_no == prev_ctx_no) )
+    {
+        printk(XENLOG_DEBUG "Reattaching %pp to same IOMMU context c%hu\n",
+               &dev->sbdf, ctx_no);
+        ret = 0;
+        goto unlock;
+    }
+
+    if ( !iommu_check_context(next_dom, ctx_no) )
+    {
+        ret = -ENOENT;
+        goto unlock;
+    }
+
+    prev_ctx = iommu_get_context(prev_dom, prev_ctx_no);
+    next_ctx = iommu_get_context(next_dom, ctx_no);
+
+    if ( next_ctx->dying )
+    {
+        ret = -EINVAL;
+        goto unlock;
+    }
+
+    ret = iommu_call(hd->platform_ops, reattach, next_dom, dev, prev_ctx,
+                     next_ctx);
+
+    if ( ret )
+        goto unlock;
+
+    /* Remove device from previous context, and add it to new one. */
+    list_for_each_entry(ctx_dev, &prev_ctx->devices, context_list)
+    {
+        if ( ctx_dev == dev )
+        {
+            list_del(&ctx_dev->context_list);
+            list_add(&ctx_dev->context_list, &next_ctx->devices);
+            break;
+        }
+    }
+
+    if ( !same_domain )
+    {
+        /* Update domain pci devices accordingly */
+
+        /** TODO: should be done here or elsewhere ? */
+    }
+
+    dev->context = ctx_no; /* Update the device's context reference. */
+
+unlock:
+    pcidevs_unlock();
+    return ret;
+}
+
+int iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                           device_t *dev, u16 ctx_no)
+{
+    int ret;
+    struct domain_iommu *prev_hd = dom_iommu(prev_dom);
+    struct domain_iommu *next_hd = dom_iommu(next_dom);
+
+    /* Take both locks in a fixed (address) order to avoid ABBA deadlocks. */
+    spin_lock(prev_hd <= next_hd ? &prev_hd->lock : &next_hd->lock);
+
+    if ( prev_hd != next_hd )
+        spin_lock(prev_hd <= next_hd ? &next_hd->lock : &prev_hd->lock);
+
+    ret = _iommu_reattach_context(prev_dom, next_dom, dev, ctx_no);
+
+    if ( prev_hd != next_hd )
+        spin_unlock(prev_hd <= next_hd ? &next_hd->lock : &prev_hd->lock);
+
+    spin_unlock(prev_hd <= next_hd ? &prev_hd->lock : &next_hd->lock);
+
+    return ret;
+}
+
+int _iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if ( !dom_iommu(d)->platform_ops->context_teardown )
+        return -ENOSYS;
+
+    ctx->dying = true;
+
+    /* first reattach devices back to default context if needed */
+    if ( flags & IOMMU_TEARDOWN_REATTACH_DEFAULT )
+    {
+        struct pci_dev *device, *tmp;
+
+        /* Reattaching moves entries off this list, so use the _safe variant. */
+        list_for_each_entry_safe(device, tmp, &ctx->devices, context_list)
+            _iommu_reattach_context(d, d, device, 0);
+    }
+    else if ( !list_empty(&ctx->devices) )
+        return -EBUSY; /* There is still a device in this context. */
+
+    return iommu_call(hd->platform_ops, context_teardown, d, ctx, flags);
+}
+
+int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int ret;
+
+    spin_lock(&hd->lock);
+    ret = _iommu_context_teardown(d, ctx, flags);
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
+
+int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags)
+{
+    int ret;
+    struct domain_iommu *hd = dom_iommu(d);
+
+    if ( ctx_no == 0 )
+        return -EINVAL;
+
+    spin_lock(&hd->lock);
+    if ( !iommu_check_context(d, ctx_no) )
+    {
+        spin_unlock(&hd->lock);
+        return -ENOENT;
+    }
+
+    ret = _iommu_context_teardown(d, iommu_get_context(d, ctx_no), flags);
+
+    if ( !ret )
+        clear_bit(ctx_no - 1, hd->other_contexts.bitmap);
+
+    spin_unlock(&hd->lock);
+
+    return ret;
+}
diff --git a/xen/drivers/passthrough/iommu.c b/xen/drivers/passthrough/iommu.c
index 50bfd62553..48ffa0f97e 100644
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -12,15 +12,14 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/pci.h>
 #include <xen/sched.h>
 #include <xen/iommu.h>
-#include <xen/paging.h>
-#include <xen/guest_access.h>
-#include <xen/event.h>
 #include <xen/param.h>
-#include <xen/softirq.h>
+#include <xen/spinlock.h>
 #include <xen/keyhandler.h>
-#include <xsm/xsm.h>
+#include <asm/iommu.h>
+#include <asm/bitops.h>
 
 #ifdef CONFIG_X86
 #include <asm/e820.h>
@@ -35,22 +34,6 @@ bool __read_mostly force_iommu;
 bool __read_mostly iommu_verbose;
 static bool __read_mostly iommu_crash_disable;
 
-#define IOMMU_quarantine_none         0 /* aka false */
-#define IOMMU_quarantine_basic        1 /* aka true */
-#define IOMMU_quarantine_scratch_page 2
-#ifdef CONFIG_HAS_PCI
-uint8_t __read_mostly iommu_quarantine =
-# if defined(CONFIG_IOMMU_QUARANTINE_NONE)
-    IOMMU_quarantine_none;
-# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC)
-    IOMMU_quarantine_basic;
-# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE)
-    IOMMU_quarantine_scratch_page;
-# endif
-#else
-# define iommu_quarantine IOMMU_quarantine_none
-#endif /* CONFIG_HAS_PCI */
-
 static bool __hwdom_initdata iommu_hwdom_none;
 bool __hwdom_initdata iommu_hwdom_strict;
 bool __read_mostly iommu_hwdom_passthrough;
@@ -61,6 +44,13 @@ int8_t __hwdom_initdata iommu_hwdom_reserved = -1;
 bool __read_mostly iommu_hap_pt_share = true;
 #endif
 
+uint16_t __read_mostly iommu_hwdom_nb_ctx = 8;
+bool __read_mostly iommu_hwdom_nb_ctx_forced = false;
+
+#ifdef CONFIG_X86
+unsigned int __read_mostly iommu_hwdom_arena_order = CONFIG_X86_ARENA_ORDER;
+#endif
+
 bool __read_mostly iommu_debug;
 
 DEFINE_PER_CPU(bool, iommu_dont_flush_iotlb);
@@ -156,6 +146,7 @@ static int __init cf_check parse_dom0_iommu_param(const char *s)
     int rc = 0;
 
     do {
+        long long ll_val;
         int val;
 
         ss = strchr(s, ',');
@@ -172,6 +163,20 @@ static int __init cf_check parse_dom0_iommu_param(const char *s)
             iommu_hwdom_reserved = val;
         else if ( !cmdline_strcmp(s, "none") )
             iommu_hwdom_none = true;
+        else if ( !parse_signed_integer("nb-ctx", s, ss, &ll_val) )
+        {
+            if ( ll_val > 0 && ll_val < UINT16_MAX )
+                iommu_hwdom_nb_ctx = ll_val;
+            else
+                printk(XENLOG_WARNING "'nb-ctx=%lld' value out of range!\n",
+                       ll_val);
+        }
+        else if ( !parse_signed_integer("arena-order", s, ss, &ll_val) )
+        {
+            if ( ll_val > 0 )
+                iommu_hwdom_arena_order = ll_val;
+            else
+                printk(XENLOG_WARNING "'arena-order=%lld' value out of range!\n",
+                       ll_val);
+        }
         else
             rc = -EINVAL;
 
@@ -193,9 +198,26 @@ static void __hwdom_init check_hwdom_reqs(struct domain *d)
     arch_iommu_check_autotranslated_hwdom(d);
 }
 
+uint16_t __hwdom_init iommu_hwdom_ctx_count(void)
+{
+    if ( iommu_hwdom_nb_ctx_forced )
+        return iommu_hwdom_nb_ctx;
+
+    /*
+     * TODO: Find a proper way of counting devices, e.g.:
+     * if ( iommu_hwdom_nb_ctx != UINT16_MAX )
+     *     iommu_hwdom_nb_ctx++;
+     * else
+     *     printk(XENLOG_WARNING "IOMMU: can't prepare more contexts: too many devices\n");
+     */
+    return 256;
+}
+
 int iommu_domain_init(struct domain *d, unsigned int opts)
 {
     struct domain_iommu *hd = dom_iommu(d);
+    uint16_t other_context_count;
     int ret = 0;
 
     if ( is_hardware_domain(d) )
@@ -236,6 +258,37 @@ int iommu_domain_init(struct domain *d, unsigned int opts)
 
     ASSERT(!(hd->need_sync && hd->hap_pt_share));
 
+    iommu_hwdom_nb_ctx = iommu_hwdom_ctx_count();
+
+    if ( is_hardware_domain(d) )
+    {
+        BUG_ON(iommu_hwdom_nb_ctx == 0); /* sanity check (prevent underflow) */
+        printk(XENLOG_INFO "Dom0 uses %u IOMMU contexts\n", iommu_hwdom_nb_ctx);
+        hd->other_contexts.count = iommu_hwdom_nb_ctx - 1;
+    }
+    else if ( d == dom_io )
+    {
+        /* TODO: Determine count differently */
+        hd->other_contexts.count = 128;
+    }
+    else
+        hd->other_contexts.count = 0;
+
+    other_context_count = hd->other_contexts.count;
+    if ( other_context_count > 0 )
+    {
+        /* Initialize the context bitmap and array. */
+        hd->other_contexts.bitmap = xzalloc_array(unsigned long,
+                                                  BITS_TO_LONGS(other_context_count));
+        hd->other_contexts.map = xzalloc_array(struct iommu_context,
+                                               other_context_count);
+
+        if ( !hd->other_contexts.bitmap || !hd->other_contexts.map )
+        {
+            xfree(hd->other_contexts.bitmap);
+            xfree(hd->other_contexts.map);
+            return -ENOMEM;
+        }
+    }
+    else
+    {
+        hd->other_contexts.bitmap = NULL;
+        hd->other_contexts.map = NULL;
+    }
+
+    ret = iommu_context_init(d, &hd->default_ctx, 0, IOMMU_CONTEXT_INIT_default);
+    if ( ret )
+        return ret;
+
     return 0;
 }
 
@@ -249,13 +302,12 @@ static void cf_check iommu_dump_page_tables(unsigned char key)
 
     for_each_domain(d)
     {
-        if ( is_hardware_domain(d) || !is_iommu_enabled(d) )
+        if ( !is_iommu_enabled(d) )
             continue;
 
         if ( iommu_use_hap_pt(d) )
         {
             printk("%pd sharing page tables\n", d);
-            continue;
         }
 
         iommu_vcall(dom_iommu(d)->platform_ops, dump_page_tables, d);
@@ -276,10 +328,13 @@ void __hwdom_init iommu_hwdom_init(struct domain *d)
     iommu_vcall(hd->platform_ops, hwdom_init, d);
 }
 
-static void iommu_teardown(struct domain *d)
+void iommu_domain_destroy(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
+    if ( !is_iommu_enabled(d) )
+        return;
+
     /*
      * During early domain creation failure, we may reach here with the
      * ops not yet initialized.
@@ -288,224 +343,10 @@ static void iommu_teardown(struct domain *d)
         return;
 
     iommu_vcall(hd->platform_ops, teardown, d);
-}
-
-void iommu_domain_destroy(struct domain *d)
-{
-    if ( !is_iommu_enabled(d) )
-        return;
-
-    iommu_teardown(d);
 
     arch_iommu_domain_destroy(d);
 }
 
-static unsigned int mapping_order(const struct domain_iommu *hd,
-                                  dfn_t dfn, mfn_t mfn, unsigned long nr)
-{
-    unsigned long res = dfn_x(dfn) | mfn_x(mfn);
-    unsigned long sizes = hd->platform_ops->page_sizes;
-    unsigned int bit = ffsl(sizes) - 1, order = 0;
-
-    ASSERT(bit == PAGE_SHIFT);
-
-    while ( (sizes = (sizes >> bit) & ~1) )
-    {
-        unsigned long mask;
-
-        bit = ffsl(sizes) - 1;
-        mask = (1UL << bit) - 1;
-        if ( nr <= mask || (res & mask) )
-            break;
-        order += bit;
-        nr >>= bit;
-        res >>= bit;
-    }
-
-    return order;
-}
-
-long iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
-               unsigned long page_count, unsigned int flags,
-               unsigned int *flush_flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-    unsigned long i;
-    unsigned int order, j = 0;
-    int rc = 0;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    ASSERT(!IOMMUF_order(flags));
-
-    for ( i = 0; i < page_count; i += 1UL << order )
-    {
-        dfn_t dfn = dfn_add(dfn0, i);
-        mfn_t mfn = mfn_add(mfn0, i);
-
-        order = mapping_order(hd, dfn, mfn, page_count - i);
-
-        if ( (flags & IOMMUF_preempt) &&
-             ((!(++j & 0xfff) && general_preempt_check()) ||
-              i > LONG_MAX - (1UL << order)) )
-            return i;
-
-        rc = iommu_call(hd->platform_ops, map_page, d, dfn, mfn,
-                        flags | IOMMUF_order(order), flush_flags);
-
-        if ( likely(!rc) )
-            continue;
-
-        if ( !d->is_shutting_down && printk_ratelimit() )
-            printk(XENLOG_ERR
-                   "d%d: IOMMU mapping dfn %"PRI_dfn" to mfn %"PRI_mfn" failed: %d\n",
-                   d->domain_id, dfn_x(dfn), mfn_x(mfn), rc);
-
-        /* while statement to satisfy __must_check */
-        while ( iommu_unmap(d, dfn0, i, 0, flush_flags) )
-            break;
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
-
-        break;
-    }
-
-    /*
-     * Something went wrong so, if we were dealing with more than a single
-     * page, flush everything and clear flush flags.
-     */
-    if ( page_count > 1 && unlikely(rc) &&
-         !iommu_iotlb_flush_all(d, *flush_flags) )
-        *flush_flags = 0;
-
-    return rc;
-}
-
-int iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
-                     unsigned long page_count, unsigned int flags)
-{
-    unsigned int flush_flags = 0;
-    int rc;
-
-    ASSERT(!(flags & IOMMUF_preempt));
-    rc = iommu_map(d, dfn, mfn, page_count, flags, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
-long iommu_unmap(struct domain *d, dfn_t dfn0, unsigned long page_count,
-                 unsigned int flags, unsigned int *flush_flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-    unsigned long i;
-    unsigned int order, j = 0;
-    int rc = 0;
-
-    if ( !is_iommu_enabled(d) )
-        return 0;
-
-    ASSERT(!(flags & ~IOMMUF_preempt));
-
-    for ( i = 0; i < page_count; i += 1UL << order )
-    {
-        dfn_t dfn = dfn_add(dfn0, i);
-        int err;
-
-        order = mapping_order(hd, dfn, _mfn(0), page_count - i);
-
-        if ( (flags & IOMMUF_preempt) &&
-             ((!(++j & 0xfff) && general_preempt_check()) ||
-              i > LONG_MAX - (1UL << order)) )
-            return i;
-
-        err = iommu_call(hd->platform_ops, unmap_page, d, dfn,
-                         flags | IOMMUF_order(order), flush_flags);
-
-        if ( likely(!err) )
-            continue;
-
-        if ( !d->is_shutting_down && printk_ratelimit() )
-            printk(XENLOG_ERR
-                   "d%d: IOMMU unmapping dfn %"PRI_dfn" failed: %d\n",
-                   d->domain_id, dfn_x(dfn), err);
-
-        if ( !rc )
-            rc = err;
-
-        if ( !is_hardware_domain(d) )
-        {
-            domain_crash(d);
-            break;
-        }
-    }
-
-    /*
-     * Something went wrong so, if we were dealing with more than a single
-     * page, flush everything and clear flush flags.
-     */
-    if ( page_count > 1 && unlikely(rc) &&
-         !iommu_iotlb_flush_all(d, *flush_flags) )
-        *flush_flags = 0;
-
-    return rc;
-}
-
-int iommu_legacy_unmap(struct domain *d, dfn_t dfn, unsigned long page_count)
-{
-    unsigned int flush_flags = 0;
-    int rc = iommu_unmap(d, dfn, page_count, 0, &flush_flags);
-
-    if ( !this_cpu(iommu_dont_flush_iotlb) && !rc )
-        rc = iommu_iotlb_flush(d, dfn, page_count, flush_flags);
-
-    return rc;
-}
-
-int iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                      unsigned int *flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-
-    if ( !is_iommu_enabled(d) || !hd->platform_ops->lookup_page )
-        return -EOPNOTSUPP;
-
-    return iommu_call(hd->platform_ops, lookup_page, d, dfn, mfn, flags);
-}
-
-int iommu_iotlb_flush(struct domain *d, dfn_t dfn, unsigned long page_count,
-                      unsigned int flush_flags)
-{
-    const struct domain_iommu *hd = dom_iommu(d);
-    int rc;
-
-    if ( !is_iommu_enabled(d) || !hd->platform_ops->iotlb_flush ||
-         !page_count || !flush_flags )
-        return 0;
-
-    if ( dfn_eq(dfn, INVALID_DFN) )
-        return -EINVAL;
-
-    rc = iommu_call(hd->platform_ops, iotlb_flush, d, dfn, page_count,
-                    flush_flags);
-    if ( unlikely(rc) )
-    {
-        if ( !d->is_shutting_down && printk_ratelimit() )
-            printk(XENLOG_ERR
-                   "d%d: IOMMU IOTLB flush failed: %d, dfn %"PRI_dfn", page count %lu flags %x\n",
-                   d->domain_id, rc, dfn_x(dfn), page_count, flush_flags);
-
-        if ( !is_hardware_domain(d) )
-            domain_crash(d);
-    }
-
-    return rc;
-}
-
 int iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags)
 {
     const struct domain_iommu *hd = dom_iommu(d);
@@ -515,7 +356,7 @@ int iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags)
          !flush_flags )
         return 0;
 
-    rc = iommu_call(hd->platform_ops, iotlb_flush, d, INVALID_DFN, 0,
+    rc = iommu_call(hd->platform_ops, iotlb_flush, d, NULL, INVALID_DFN, 0,
                     flush_flags | IOMMU_FLUSHF_all);
     if ( unlikely(rc) )
     {
@@ -531,24 +372,6 @@ int iommu_iotlb_flush_all(struct domain *d, unsigned int flush_flags)
     return rc;
 }
 
-int iommu_quarantine_dev_init(device_t *dev)
-{
-    const struct domain_iommu *hd = dom_iommu(dom_io);
-
-    if ( !iommu_quarantine || !hd->platform_ops->quarantine_init )
-        return 0;
-
-    return iommu_call(hd->platform_ops, quarantine_init,
-                      dev, iommu_quarantine == IOMMU_quarantine_scratch_page);
-}
-
-static int __init iommu_quarantine_init(void)
-{
-    dom_io->options |= XEN_DOMCTL_CDF_iommu;
-
-    return iommu_domain_init(dom_io, 0);
-}
-
 int __init iommu_setup(void)
 {
     int rc = -ENODEV;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 5a446d3dce..fc3315f8ed 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -1,6 +1,6 @@
 /*
  * Copyright (C) 2008,  Netronome Systems, Inc.
- *                
+ *
  * This program is free software; you can redistribute it and/or modify it
  * under the terms and conditions of the GNU General Public License,
  * version 2, as published by the Free Software Foundation.
@@ -286,14 +286,14 @@ static void apply_quirks(struct pci_dev *pdev)
          * Device [8086:2fc0]
          * Erratum HSE43
          * CONFIG_TDP_NOMINAL CSR Implemented at Incorrect Offset
-         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v3-spec-update.html 
+         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v3-spec-update.html
          */
         { PCI_VENDOR_ID_INTEL, 0x2fc0 },
         /*
          * Devices [8086:6f60,6fa0,6fc0]
          * Errata BDF2 / BDX2
          * PCI BARs in the Home Agent Will Return Non-Zero Values During Enumeration
-         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v4-spec-update.html 
+         * http://www.intel.com/content/www/us/en/processors/xeon/xeon-e5-v4-spec-update.html
         */
         { PCI_VENDOR_ID_INTEL, 0x6f60 },
         { PCI_VENDOR_ID_INTEL, 0x6fa0 },
@@ -870,8 +870,8 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             break;
-        ret = iommu_call(hd->platform_ops, reassign_device, d, target, devfn,
-                         pci_to_dev(pdev));
+        ret = iommu_call(hd->platform_ops, add_devfn, d, pci_to_dev(pdev), devfn,
+                         &target->iommu.default_ctx);
         if ( ret )
             goto out;
     }
@@ -880,9 +880,9 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
     vpci_deassign_device(pdev);
     write_unlock(&d->pci_lock);
 
-    devfn = pdev->devfn;
-    ret = iommu_call(hd->platform_ops, reassign_device, d, target, devfn,
-                     pci_to_dev(pdev));
+    ret = iommu_call(hd->platform_ops, reattach, target, pci_to_dev(pdev),
+                     iommu_get_context(d, pdev->context),
+                     iommu_default_context(target));
     if ( ret )
         goto out;
 
@@ -890,6 +890,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
         pdev->quarantine = false;
 
     pdev->fault.count = 0;
+    pdev->domain = target;
 
     write_lock(&target->pci_lock);
     /* Re-assign back to hardware_domain */
@@ -1329,12 +1330,7 @@ static int cf_check _dump_pci_devices(struct pci_seg *pseg, void *arg)
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
     {
         printk("%pp - ", &pdev->sbdf);
-#ifdef CONFIG_X86
-        if ( pdev->domain == dom_io )
-            printk("DomIO:%x", pdev->arch.pseudo_domid);
-        else
-#endif
-            printk("%pd", pdev->domain);
+        printk("%pd", pdev->domain);
         printk(" - node %-3d", (pdev->node != NUMA_NO_NODE) ? pdev->node : -1);
         pdev_dump_msi(pdev);
         printk("\n");
@@ -1373,7 +1369,7 @@ static int iommu_add_device(struct pci_dev *pdev)
     if ( !is_iommu_enabled(pdev->domain) )
         return 0;
 
-    rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
+    rc = iommu_attach_context(pdev->domain, pci_to_dev(pdev), 0);
     if ( rc || !pdev->phantom_stride )
         return rc;
 
@@ -1382,7 +1378,9 @@ static int iommu_add_device(struct pci_dev *pdev)
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             return 0;
-        rc = iommu_call(hd->platform_ops, add_device, devfn, pci_to_dev(pdev));
+
+        rc = iommu_call(hd->platform_ops, add_devfn, pdev->domain, pdev, devfn,
+                        iommu_default_context(pdev->domain));
         if ( rc )
             printk(XENLOG_WARNING "IOMMU: add %pp failed (%d)\n",
                    &PCI_SBDF(pdev->seg, pdev->bus, devfn), rc);
@@ -1409,6 +1407,7 @@ static int iommu_enable_device(struct pci_dev *pdev)
 static int iommu_remove_device(struct pci_dev *pdev)
 {
     const struct domain_iommu *hd;
+    struct iommu_context *ctx;
     u8 devfn;
 
     if ( !pdev->domain )
@@ -1418,6 +1417,10 @@ static int iommu_remove_device(struct pci_dev *pdev)
     if ( !is_iommu_enabled(pdev->domain) )
         return 0;
 
+    ctx = iommu_get_context(pdev->domain, pdev->context);
+    if ( !ctx )
+        return -EINVAL;
+
     for ( devfn = pdev->devfn ; pdev->phantom_stride; )
     {
         int rc;
@@ -1425,8 +1428,8 @@ static int iommu_remove_device(struct pci_dev *pdev)
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             break;
-        rc = iommu_call(hd->platform_ops, remove_device, devfn,
-                        pci_to_dev(pdev));
+        rc = iommu_call(hd->platform_ops, remove_devfn, pdev->domain, pdev,
+                        devfn, ctx);
         if ( !rc )
             continue;
 
@@ -1437,7 +1440,7 @@ static int iommu_remove_device(struct pci_dev *pdev)
 
     devfn = pdev->devfn;
 
-    return iommu_call(hd->platform_ops, remove_device, devfn, pci_to_dev(pdev));
+    return iommu_detach_context(pdev->domain, pdev);
 }
 
 static int device_assigned(u16 seg, u8 bus, u8 devfn)
@@ -1497,22 +1500,22 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     if ( pdev->domain != dom_io )
     {
         rc = iommu_quarantine_dev_init(pci_to_dev(pdev));
+        /** TODO: Consider phantom functions */
         if ( rc )
             goto done;
     }
 
     pdev->fault.count = 0;
 
-    rc = iommu_call(hd->platform_ops, assign_device, d, devfn, pci_to_dev(pdev),
-                    flag);
+    rc = iommu_attach_context(d, pci_to_dev(pdev), 0);
 
     while ( pdev->phantom_stride && !rc )
     {
         devfn += pdev->phantom_stride;
         if ( PCI_SLOT(devfn) != PCI_SLOT(pdev->devfn) )
             break;
-        rc = iommu_call(hd->platform_ops, assign_device, d, devfn,
-                        pci_to_dev(pdev), flag);
+        rc = iommu_call(hd->platform_ops, add_devfn, d, pci_to_dev(pdev),
+                        devfn, iommu_default_context(d));
     }
 
     if ( rc )
diff --git a/xen/drivers/passthrough/quarantine.c b/xen/drivers/passthrough/quarantine.c
new file mode 100644
index 0000000000..b58f136ad8
--- /dev/null
+++ b/xen/drivers/passthrough/quarantine.c
@@ -0,0 +1,49 @@
+#include <xen/stdint.h>
+#include <xen/iommu.h>
+#include <xen/sched.h>
+
+#ifdef CONFIG_HAS_PCI
+uint8_t __read_mostly iommu_quarantine =
+# if defined(CONFIG_IOMMU_QUARANTINE_NONE)
+    IOMMU_quarantine_none;
+# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC)
+    IOMMU_quarantine_basic;
+# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE)
+    IOMMU_quarantine_scratch_page;
+# endif
+#else
+# define iommu_quarantine IOMMU_quarantine_none
+#endif /* CONFIG_HAS_PCI */
+
+int iommu_quarantine_dev_init(device_t *dev)
+{
+    int ret;
+    u16 ctx_no;
+
+    if ( !iommu_quarantine )
+        return 0;
+
+    ret = iommu_context_alloc(dom_io, &ctx_no, IOMMU_CONTEXT_INIT_quarantine);
+
+    if ( ret )
+        return ret;
+
+    /** TODO: Setup scratch page, mappings... */
+
+    ret = iommu_reattach_context(dev->domain, dom_io, dev, ctx_no);
+
+    if ( ret )
+    {
+        if ( iommu_context_free(dom_io, ctx_no, 0) )
+            ASSERT_UNREACHABLE();
+    }
+
+    return ret;
+}
+
+int __init iommu_quarantine_init(void)
+{
+    dom_io->options |= XEN_DOMCTL_CDF_iommu;
+
+    return iommu_domain_init(dom_io, 0);
+}
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 442ae5322d..47c8bc6f36 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -52,7 +52,11 @@ static inline bool dfn_eq(dfn_t x, dfn_t y)
 #ifdef CONFIG_HAS_PASSTHROUGH
 extern bool iommu_enable, iommu_enabled;
 extern bool force_iommu, iommu_verbose;
+
 /* Boolean except for the specific purposes of drivers/passthrough/iommu.c. */
+#define IOMMU_quarantine_none         0 /* aka false */
+#define IOMMU_quarantine_basic        1 /* aka true */
+#define IOMMU_quarantine_scratch_page 2
 extern uint8_t iommu_quarantine;
 #else
 #define iommu_enabled false
@@ -107,6 +111,11 @@ extern bool amd_iommu_perdev_intremap;
 
 extern bool iommu_hwdom_strict, iommu_hwdom_passthrough, iommu_hwdom_inclusive;
 extern int8_t iommu_hwdom_reserved;
+extern uint16_t iommu_hwdom_nb_ctx;
+
+#ifdef CONFIG_X86
+extern unsigned int iommu_hwdom_arena_order;
+#endif
 
 extern unsigned int iommu_dev_iotlb_timeout;
 
@@ -161,11 +170,16 @@ enum
  */
 long __must_check iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
                             unsigned long page_count, unsigned int flags,
-                            unsigned int *flush_flags);
+                            unsigned int *flush_flags, u16 ctx_no);
+long __must_check _iommu_map(struct domain *d, dfn_t dfn0, mfn_t mfn0,
+                             unsigned long page_count, unsigned int flags,
+                             unsigned int *flush_flags, u16 ctx_no);
 long __must_check iommu_unmap(struct domain *d, dfn_t dfn0,
                               unsigned long page_count, unsigned int flags,
-                              unsigned int *flush_flags);
-
+                              unsigned int *flush_flags, u16 ctx_no);
+long __must_check _iommu_unmap(struct domain *d, dfn_t dfn0,
+                               unsigned long page_count, unsigned int flags,
+                               unsigned int *flush_flags, u16 ctx_no);
 int __must_check iommu_legacy_map(struct domain *d, dfn_t dfn, mfn_t mfn,
                                   unsigned long page_count,
                                   unsigned int flags);
@@ -173,11 +187,16 @@ int __must_check iommu_legacy_unmap(struct domain *d, dfn_t dfn,
                                     unsigned long page_count);
 
 int __must_check iommu_lookup_page(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                                   unsigned int *flags);
+                                   unsigned int *flags, u16 ctx_no);
 
 int __must_check iommu_iotlb_flush(struct domain *d, dfn_t dfn,
                                    unsigned long page_count,
-                                   unsigned int flush_flags);
+                                   unsigned int flush_flags,
+                                   u16 ctx_no);
+int __must_check _iommu_iotlb_flush(struct domain *d, dfn_t dfn,
+                                   unsigned long page_count,
+                                   unsigned int flush_flags,
+                                   u16 ctx_no);
 int __must_check iommu_iotlb_flush_all(struct domain *d,
                                        unsigned int flush_flags);
 
@@ -250,20 +269,31 @@ struct page_info;
  */
 typedef int iommu_grdm_t(xen_pfn_t start, xen_ulong_t nr, u32 id, void *ctxt);
 
+struct iommu_context;
+
 struct iommu_ops {
     unsigned long page_sizes;
     int (*init)(struct domain *d);
     void (*hwdom_init)(struct domain *d);
-    int (*quarantine_init)(device_t *dev, bool scratch_page);
-    int (*add_device)(uint8_t devfn, device_t *dev);
+    int (*context_init)(struct domain *d, struct iommu_context *ctx,
+                        u32 flags);
+    int (*context_teardown)(struct domain *d, struct iommu_context *ctx,
+                            u32 flags);
+    int (*attach)(struct domain *d, device_t *dev,
+                  struct iommu_context *ctx);
+    int (*detach)(struct domain *d, device_t *dev,
+                   struct iommu_context *prev_ctx);
+    int (*reattach)(struct domain *d, device_t *dev,
+                    struct iommu_context *prev_ctx,
+                    struct iommu_context *ctx);
+
     int (*enable_device)(device_t *dev);
-    int (*remove_device)(uint8_t devfn, device_t *dev);
-    int (*assign_device)(struct domain *d, uint8_t devfn, device_t *dev,
-                         uint32_t flag);
-    int (*reassign_device)(struct domain *s, struct domain *t,
-                           uint8_t devfn, device_t *dev);
 #ifdef CONFIG_HAS_PCI
     int (*get_device_group_id)(uint16_t seg, uint8_t bus, uint8_t devfn);
+    int (*add_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                     struct iommu_context *ctx);
+    int (*remove_devfn)(struct domain *d, struct pci_dev *pdev, u16 devfn,
+                    struct iommu_context *ctx);
 #endif /* HAS_PCI */
 
     void (*teardown)(struct domain *d);
@@ -274,12 +304,15 @@ struct iommu_ops {
      */
     int __must_check (*map_page)(struct domain *d, dfn_t dfn, mfn_t mfn,
                                  unsigned int flags,
-                                 unsigned int *flush_flags);
+                                 unsigned int *flush_flags,
+                                 struct iommu_context *ctx);
     int __must_check (*unmap_page)(struct domain *d, dfn_t dfn,
                                    unsigned int order,
-                                   unsigned int *flush_flags);
+                                   unsigned int *flush_flags,
+                                   struct iommu_context *ctx);
     int __must_check (*lookup_page)(struct domain *d, dfn_t dfn, mfn_t *mfn,
-                                    unsigned int *flags);
+                                    unsigned int *flags,
+                                    struct iommu_context *ctx);
 
 #ifdef CONFIG_X86
     int (*enable_x2apic)(void);
@@ -292,14 +325,15 @@ struct iommu_ops {
     int (*setup_hpet_msi)(struct msi_desc *msi_desc);
 
     void (*adjust_irq_affinities)(void);
-    void (*clear_root_pgtable)(struct domain *d);
+    void (*clear_root_pgtable)(struct domain *d, struct iommu_context *ctx);
     int (*update_ire_from_msi)(struct msi_desc *msi_desc, struct msi_msg *msg);
 #endif /* CONFIG_X86 */
 
     int __must_check (*suspend)(void);
     void (*resume)(void);
     void (*crash_shutdown)(void);
-    int __must_check (*iotlb_flush)(struct domain *d, dfn_t dfn,
+    int __must_check (*iotlb_flush)(struct domain *d,
+                                    struct iommu_context *ctx, dfn_t dfn,
                                     unsigned long page_count,
                                     unsigned int flush_flags);
     int (*get_reserved_device_memory)(iommu_grdm_t *func, void *ctxt);
@@ -343,11 +377,36 @@ extern int iommu_get_extra_reserved_device_memory(iommu_grdm_t *func,
 # define iommu_vcall iommu_call
 #endif
 
+struct iommu_context {
+    u16 id; /* Context id (0 means default context) */
+    struct list_head devices;
+
+    struct arch_iommu_context arch;
+
+    bool opaque; /* context can't be modified or accessed (e.g. HAP) */
+    bool dying; /* the context is being torn down */
+};
+
+struct iommu_context_list {
+    uint16_t count; /* Context count excluding default context */
+
+    /* Fields below are valid only if count > 0 */
+
+    uint64_t *bitmap; /* bitmap of context allocation */
+    struct iommu_context *map; /* Map of contexts */
+};
+
+
 struct domain_iommu {
+    spinlock_t lock; /* iommu lock */
+
 #ifdef CONFIG_HAS_PASSTHROUGH
     struct arch_iommu arch;
 #endif
 
+    struct iommu_context default_ctx;
+    struct iommu_context_list other_contexts;
+
     /* iommu_ops */
     const struct iommu_ops *platform_ops;
 
@@ -380,6 +439,7 @@ struct domain_iommu {
 #define dom_iommu(d)              (&(d)->iommu)
 #define iommu_set_feature(d, f)   set_bit(f, dom_iommu(d)->features)
 #define iommu_clear_feature(d, f) clear_bit(f, dom_iommu(d)->features)
+#define iommu_default_context(d) (&dom_iommu(d)->default_ctx)
 
 /* Are we using the domain P2M table as its IOMMU pagetable? */
 #define iommu_use_hap_pt(d)       (IS_ENABLED(CONFIG_HVM) && \
@@ -405,6 +465,8 @@ int __must_check iommu_suspend(void);
 void iommu_resume(void);
 void iommu_crash_shutdown(void);
 int iommu_get_reserved_device_memory(iommu_grdm_t *func, void *ctxt);
+
+int __init iommu_quarantine_init(void);
 int iommu_quarantine_dev_init(device_t *dev);
 
 #ifdef CONFIG_HAS_PCI
@@ -414,6 +476,28 @@ int iommu_do_pci_domctl(struct xen_domctl *domctl, struct domain *d,
 
 void iommu_dev_iotlb_flush_timeout(struct domain *d, struct pci_dev *pdev);
 
+struct iommu_context *iommu_get_context(struct domain *d, u16 ctx_no);
+bool iommu_check_context(struct domain *d, u16 ctx_no);
+
+#define IOMMU_CONTEXT_INIT_default (1 << 0)
+#define IOMMU_CONTEXT_INIT_quarantine (1 << 1)
+int iommu_context_init(struct domain *d, struct iommu_context *ctx, u16 ctx_no, u32 flags);
+
+#define IOMMU_TEARDOWN_REATTACH_DEFAULT (1 << 0)
+#define IOMMU_TEARDOWN_PREEMPT (1 << 1)
+int iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags);
+
+int iommu_context_alloc(struct domain *d, u16 *ctx_no, u32 flags);
+int iommu_context_free(struct domain *d, u16 ctx_no, u32 flags);
+
+int iommu_reattach_context(struct domain *prev_dom, struct domain *next_dom,
+                           device_t *dev, u16 ctx_no);
+int iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no);
+int iommu_detach_context(struct domain *d, device_t *dev);
+
+int _iommu_attach_context(struct domain *d, device_t *dev, u16 ctx_no);
+int _iommu_detach_context(struct domain *d, device_t *dev);
+
 /*
  * The purpose of the iommu_dont_flush_iotlb optional cpu flag is to
  * avoid unecessary iotlb_flush in the low level IOMMU code.
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 63e49f0117..d6d4aaa6a5 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -97,6 +97,7 @@ struct pci_dev_info {
 struct pci_dev {
     struct list_head alldevs_list;
     struct list_head domain_list;
+    struct list_head context_list;
 
     struct list_head msi_list;
 
@@ -104,6 +105,8 @@ struct pci_dev {
 
     struct domain *domain;
 
+    uint16_t context; /* IOMMU context number of domain */
+
     const union {
         struct {
             uint8_t devfn;
-- 
2.45.2



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:22:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:22:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749164.1157225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUU0-0007Xw-Ny; Wed, 26 Jun 2024 15:22:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749164.1157225; Wed, 26 Jun 2024 15:22:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUU0-0007Xk-KZ; Wed, 26 Jun 2024 15:22:52 +0000
Received: by outflank-mailman (input) for mailman id 749164;
 Wed, 26 Jun 2024 15:22:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ytor=N4=bounce.vates.tech=bounce-md_30504962.667c3246.v1-1a5f2d47a62f44cf916f6752b8ea59a7@srs-se1.protection.inumbo.net>)
 id 1sMUTy-00076j-Pg
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:22:51 +0000
Received: from mail187-10.suw11.mandrillapp.com
 (mail187-10.suw11.mandrillapp.com [198.2.187.10])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f14f36a4-33cf-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 17:22:47 +0200 (CEST)
Received: from pmta09.mandrill.prod.suw01.rsglab.com (localhost [127.0.0.1])
 by mail187-10.suw11.mandrillapp.com (Mailchimp) with ESMTP id
 4W8QS66dm7z5QkLrN
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 15:22:46 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 1a5f2d47a62f44cf916f6752b8ea59a7; Wed, 26 Jun 2024 15:22:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f14f36a4-33cf-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719415366; x=1719675866;
	bh=naAsa/7TDkuop3XSAWKlI9l3/4RQGx8GapWKnTHrxZE=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=d3LlyTOg5rAox748E4trDnBaHtALS146joAxR5Wq4ignDwd4sy4dKWfm3N6nOLoeo
	 Kb0GxpKKjhqUEwFEqFb7c+2rsBY6Qnlm1PnRHEIBrg3egUhjEdDDm2A+Df4N4CTnzI
	 PVNEl7TUoaYH+Dj84a+CIhfu45XJ8Rr3oT+nA3RCj7neRy8aJPSdrEEZjN7Tq8SqNr
	 rMLYXFXy6ey0nNVZhV7/YE/Dloi4TdQ6zLwZkM2AIPNbaxj6o+lU+AmBgdEgJd3wtJ
	 R4j7bCUJEv11/cH0n5eiIeN6wdp5U5kHvy6u9WZyzRszj4Ak3QJzlZO9+Uu1BUEw7B
	 rlYO8Q7gCNDKg==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719415366; x=1719675866; i=teddy.astie@vates.tech;
	bh=naAsa/7TDkuop3XSAWKlI9l3/4RQGx8GapWKnTHrxZE=;
	h=From:Subject:To:Cc:Message-Id:In-Reply-To:References:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=fFokS9uwEL6s0Yxvu8kccP/HS18e1+DU88PEpQuROd9iqKBXC7XZuYRgDhOkPmBDm
	 R153seFQwhg1zLGcmcyOd8VJN/zB4DjP0O/ajZADBlHBskYPoPWKWGByhjGOGp/zsW
	 9r1oEvjO5qa7Eg1OPGtDHPS2lTj02DlKb05pjVn5qjwUL+tWs7wgzAOM2ei+wXEl6Z
	 BNANeDr5mVZPxhS5OSEZKRmYkgIooUUg3a8KQVPHbNHIm7cviU6bbTUXkuq8cqhEw8
	 CQUWOPXu9XPB7JoNYOffsEqA0iGUT4XwHa8FH7TF/LqPDw9GkpgTspNgnNNrtgEHLe
	 Jbe4E7SrzV1HQ==
From: TSnake41 <teddy.astie@vates.tech>
Subject: =?utf-8?Q?[RFC=20XEN=20PATCH=20v2=204/5]=20VT-d:=20Port=20IOMMU=20driver=20to=20new=20subsystem?=
X-Mailer: git-send-email 2.45.2
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719415363509
To: xen-devel@lists.xenproject.org
Cc: Teddy Astie <teddy.astie@vates.tech>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Message-Id: <99faab4ff22a3a12e919f91bb8c4fbbd362c05f0.1719414736.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1719414736.git.teddy.astie@vates.tech>
References: <cover.1719414736.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.1a5f2d47a62f44cf916f6752b8ea59a7?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 15:22:46 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

From: Teddy Astie <teddy.astie@vates.tech>

Port the driver following the guidance specified in iommu-contexts.md.

Add an arena-based allocator that reserves a fixed chunk of memory and
splits it into 4k pages for use by the IOMMU contexts. The chunk size
is configurable with X86_ARENA_ORDER and dom0-iommu=arena-order=N.
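The allocator described above can be sketched roughly as follows. This is an illustrative userspace model only, not the Xen implementation: the names (arena_init, arena_alloc_page, arena_free_page) and the malloc-backed chunk are stand-ins for the real domheap-backed iommu_arena in this series.

```c
/* Illustrative sketch of a bitmap-tracked page arena: a fixed chunk of
 * 2^order pages, with one allocation bit per page. Not the Xen code. */
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u
#define BITS_PER_WORD (8 * sizeof(unsigned long))

struct arena {
    unsigned char *base;    /* start of the contiguous chunk */
    unsigned long *map;     /* one bit per page; 1 = allocated */
    unsigned int order;     /* arena holds 1 << order pages */
    unsigned int used_pages;
};

static int arena_init(struct arena *a, unsigned int order)
{
    size_t pages = 1ul << order;

    a->base = malloc(pages * PAGE_SIZE);
    a->map = calloc((pages + BITS_PER_WORD - 1) / BITS_PER_WORD,
                    sizeof(unsigned long));
    if ( !a->base || !a->map )
        return -1;
    a->order = order;
    a->used_pages = 0;
    return 0;
}

static void *arena_alloc_page(struct arena *a)
{
    size_t pages = 1ul << a->order, i;

    for ( i = 0; i < pages; i++ )
        if ( !(a->map[i / BITS_PER_WORD] & (1ul << (i % BITS_PER_WORD))) )
        {
            a->map[i / BITS_PER_WORD] |= 1ul << (i % BITS_PER_WORD);
            a->used_pages++;
            return a->base + i * PAGE_SIZE;
        }
    return NULL; /* arena exhausted */
}

static int arena_free_page(struct arena *a, void *page)
{
    size_t i = ((unsigned char *)page - a->base) / PAGE_SIZE;

    if ( i >= (1ul << a->order) ||
         !(a->map[i / BITS_PER_WORD] & (1ul << (i % BITS_PER_WORD))) )
        return -1; /* not an allocated arena page */
    a->map[i / BITS_PER_WORD] &= ~(1ul << (i % BITS_PER_WORD));
    a->used_pages--;
    return 0;
}
```

Once the bitmap is full, allocation fails rather than growing the chunk, which is why the order must be sized for the number of contexts up front.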

Signed-off-by: Teddy Astie <teddy.astie@vates.tech>
---
Changed in V2:
* clean up some unneeded includes
* s/dettach/detach/
* don't dump IOMMU context of non-iommu domains (fix crash with DomUs)
---
 xen/arch/x86/include/asm/arena.h     |   54 +
 xen/arch/x86/include/asm/iommu.h     |   44 +-
 xen/arch/x86/include/asm/pci.h       |   17 -
 xen/drivers/passthrough/Kconfig      |   14 +
 xen/drivers/passthrough/vtd/Makefile |    2 +-
 xen/drivers/passthrough/vtd/extern.h |   14 +-
 xen/drivers/passthrough/vtd/iommu.c  | 1557 +++++++++++---------------
 xen/drivers/passthrough/vtd/quirks.c |   21 +-
 xen/drivers/passthrough/x86/Makefile |    1 +
 xen/drivers/passthrough/x86/arena.c  |  157 +++
 xen/drivers/passthrough/x86/iommu.c  |  104 +-
 11 files changed, 981 insertions(+), 1004 deletions(-)
 create mode 100644 xen/arch/x86/include/asm/arena.h
 create mode 100644 xen/drivers/passthrough/x86/arena.c
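For reference, the arena sizes quoted in the new Kconfig help (X86_ARENA_ORDER) follow directly from "2^order pages of 4 KiB each"; this tiny helper (illustrative only, not part of the patch) makes the arithmetic explicit:

```c
/* Illustrative arithmetic: an order-N arena covers 2^N pages of 4 KiB. */
#include <assert.h>
#include <stdint.h>

static uint64_t arena_bytes(unsigned int order)
{
    return (UINT64_C(1) << order) * 4096; /* 2^order pages, 4 KiB each */
}
```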

diff --git a/xen/arch/x86/include/asm/arena.h b/xen/arch/x86/include/asm/arena.h
new file mode 100644
index 0000000000..7555b100e0
--- /dev/null
+++ b/xen/arch/x86/include/asm/arena.h
@@ -0,0 +1,54 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/**
+ * Simple arena-based page allocator.
+ */
+
+#ifndef __XEN_IOMMU_ARENA_H__
+#define __XEN_IOMMU_ARENA_H__
+
+#include "xen/domain.h"
+#include "xen/atomic.h"
+#include "xen/mm-frame.h"
+#include "xen/types.h"
+
+/**
+ * struct page_arena: Page arena structure
+ */
+struct iommu_arena {
+    /* mfn of the first page of the memory region */
+    mfn_t region_start;
+    /* bitmap of allocations */
+    unsigned long *map;
+
+    /* Order of the arena */
+    unsigned int order;
+
+    /* Used page count */
+    atomic_t used_pages;
+};
+
+/**
+ * Initialize an arena using the domheap allocator.
+ * @param [out] arena Arena to initialize
+ * @param [in] domain domain that has ownership of arena pages
+ * @param [in] order order of the arena (power of two of the size)
+ * @param [in] memflags Flags for domheap_alloc_pages()
+ * @return -ENOMEM on arena allocation error, 0 otherwise
+ */
+int iommu_arena_initialize(struct iommu_arena *arena, struct domain *domain,
+                           unsigned int order, unsigned int memflags);
+
+/**
+ * Tear down an arena.
+ * @param [in] arena arena to tear down
+ * @param [in] check fail if the arena still has allocations
+ * @return -EBUSY if check is specified and pages are still allocated, 0 otherwise
+ */
+int iommu_arena_teardown(struct iommu_arena *arena, bool check);
+
+struct page_info *iommu_arena_allocate_page(struct iommu_arena *arena);
+bool iommu_arena_free_page(struct iommu_arena *arena, struct page_info *page);
+
+#define iommu_arena_size(arena) (1LLU << (arena)->order)
+
+#endif
diff --git a/xen/arch/x86/include/asm/iommu.h b/xen/arch/x86/include/asm/iommu.h
index 8dc464fbd3..8fb402f1ee 100644
--- a/xen/arch/x86/include/asm/iommu.h
+++ b/xen/arch/x86/include/asm/iommu.h
@@ -2,14 +2,18 @@
 #ifndef __ARCH_X86_IOMMU_H__
 #define __ARCH_X86_IOMMU_H__
 
+#include <xen/bitmap.h>
 #include <xen/errno.h>
 #include <xen/list.h>
 #include <xen/mem_access.h>
 #include <xen/spinlock.h>
+#include <xen/stdbool.h>
 #include <asm/apicdef.h>
 #include <asm/cache.h>
 #include <asm/processor.h>
 
+#include "arena.h"
+
 #define DEFAULT_DOMAIN_ADDRESS_WIDTH 48
 
 struct g2m_ioport {
@@ -31,27 +35,48 @@ typedef uint64_t daddr_t;
 #define dfn_to_daddr(dfn) __dfn_to_daddr(dfn_x(dfn))
 #define daddr_to_dfn(daddr) _dfn(__daddr_to_dfn(daddr))
 
-struct arch_iommu
+struct arch_iommu_context
 {
-    spinlock_t mapping_lock; /* io page table lock */
     struct {
         struct page_list_head list;
         spinlock_t lock;
     } pgtables;
 
-    struct list_head identity_maps;
+    /* Queue for freeing pages */
+    struct page_list_head free_queue;
 
     union {
         /* Intel VT-d */
         struct {
             uint64_t pgd_maddr; /* io page directory machine address */
+            domid_t *didmap; /* per-iommu DID */
+            unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the context uses */
+            bool duplicated_rmrr; /* set once duplicated RMRR mappings have been mapped */
+            uint32_t superpage_progress; /* superpage progress during teardown */
+        } vtd;
+        /* AMD IOMMU */
+        struct {
+            struct page_info *root_table;
+        } amd;
+    };
+};
+
+struct arch_iommu
+{
+    spinlock_t lock; /* io page table lock */
+    struct list_head identity_maps;
+
+    struct iommu_arena pt_arena; /* allocator for non-default contexts */
+
+    union {
+        /* Intel VT-d */
+        struct {
             unsigned int agaw; /* adjusted guest address width, 0 is level 2 30-bit */
-            unsigned long *iommu_bitmap; /* bitmap of iommu(s) that the domain uses */
         } vtd;
         /* AMD IOMMU */
         struct {
             unsigned int paging_mode;
-            struct page_info *root_table;
+            struct guest_iommu *g_iommu;
         } amd;
     };
 };
@@ -128,14 +153,19 @@ unsigned long *iommu_init_domid(domid_t reserve);
 domid_t iommu_alloc_domid(unsigned long *map);
 void iommu_free_domid(domid_t domid, unsigned long *map);
 
-int __must_check iommu_free_pgtables(struct domain *d);
+struct iommu_context;
+int __must_check iommu_free_pgtables(struct domain *d, struct iommu_context *ctx);
 struct domain_iommu;
 struct page_info *__must_check iommu_alloc_pgtable(struct domain_iommu *hd,
+                                                   struct iommu_context *ctx,
                                                    uint64_t contig_mask);
-void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg);
+void iommu_queue_free_pgtable(struct iommu_context *ctx, struct page_info *pg);
 
 /* Check [start, end] unity map range for correctness. */
 bool iommu_unity_region_ok(const char *prefix, mfn_t start, mfn_t end);
+int arch_iommu_context_init(struct domain *d, struct iommu_context *ctx, u32 flags);
+int arch_iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags);
+int arch_iommu_flush_free_queue(struct domain *d, struct iommu_context *ctx);
 
 #endif /* !__ARCH_X86_IOMMU_H__ */
 /*
diff --git a/xen/arch/x86/include/asm/pci.h b/xen/arch/x86/include/asm/pci.h
index fd5480d67d..214c1a0948 100644
--- a/xen/arch/x86/include/asm/pci.h
+++ b/xen/arch/x86/include/asm/pci.h
@@ -15,23 +15,6 @@
 
 struct arch_pci_dev {
     vmask_t used_vectors;
-    /*
-     * These fields are (de)initialized under pcidevs-lock. Other uses of
-     * them don't race (de)initialization and hence don't strictly need any
-     * locking.
-     */
-    union {
-        /* Subset of struct arch_iommu's fields, to be used in dom_io. */
-        struct {
-            uint64_t pgd_maddr;
-        } vtd;
-        struct {
-            struct page_info *root_table;
-        } amd;
-    };
-    domid_t pseudo_domid;
-    mfn_t leaf_mfn;
-    struct page_list_head pgtables_list;
 };
 
 int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
index 78edd80536..1b9f4c8b9c 100644
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -91,3 +91,17 @@ choice
 	config IOMMU_QUARANTINE_SCRATCH_PAGE
 		bool "scratch page"
 endchoice
+
+config X86_ARENA_ORDER
+	int "IOMMU arena order" if EXPERT
+	depends on X86
+	default 9
+	help
+	  Specifies the default size of the Dom0 IOMMU arena allocator.
+	  The arena uses 2^order pages. If your system has many PCI devices or if you
+	  encounter IOMMU errors in Dom0, try increasing this value.
+	  This value can be overridden with the command-line option dom0-iommu=arena-order=N.
+
+	  [7] 128 pages, 512 KB arena
+	  [9] 512 pages, 2 MB arena (default)
+	  [11] 2048 pages, 8 MB arena
\ No newline at end of file
diff --git a/xen/drivers/passthrough/vtd/Makefile b/xen/drivers/passthrough/vtd/Makefile
index fde7555fac..81e1f46179 100644
--- a/xen/drivers/passthrough/vtd/Makefile
+++ b/xen/drivers/passthrough/vtd/Makefile
@@ -5,4 +5,4 @@ obj-y += dmar.o
 obj-y += utils.o
 obj-y += qinval.o
 obj-y += intremap.o
-obj-y += quirks.o
+obj-y += quirks.o
\ No newline at end of file
diff --git a/xen/drivers/passthrough/vtd/extern.h b/xen/drivers/passthrough/vtd/extern.h
index 667590ee52..69f808a44a 100644
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -80,12 +80,10 @@ uint64_t alloc_pgtable_maddr(unsigned long npages, nodeid_t node);
 void free_pgtable_maddr(u64 maddr);
 void *map_vtd_domain_page(u64 maddr);
 void unmap_vtd_domain_page(const void *va);
-int domain_context_mapping_one(struct domain *domain, struct vtd_iommu *iommu,
-                               uint8_t bus, uint8_t devfn,
-                               const struct pci_dev *pdev, domid_t domid,
-                               paddr_t pgd_maddr, unsigned int mode);
-int domain_context_unmap_one(struct domain *domain, struct vtd_iommu *iommu,
-                             uint8_t bus, uint8_t devfn);
+int apply_context_single(struct domain *domain, struct iommu_context *ctx,
+                         struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn);
+int unapply_context_single(struct domain *domain, struct iommu_context *ctx,
+                           struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn);
 int cf_check intel_iommu_get_reserved_device_memory(
     iommu_grdm_t *func, void *ctxt);
 
@@ -106,8 +104,8 @@ void platform_quirks_init(void);
 void vtd_ops_preamble_quirk(struct vtd_iommu *iommu);
 void vtd_ops_postamble_quirk(struct vtd_iommu *iommu);
 int __must_check me_wifi_quirk(struct domain *domain, uint8_t bus,
-                               uint8_t devfn, domid_t domid, paddr_t pgd_maddr,
-                               unsigned int mode);
+                               uint8_t devfn, domid_t domid,
+                               unsigned int mode, struct iommu_context *ctx);
 void pci_vtd_quirk(const struct pci_dev *);
 void quirk_iommu_caps(struct vtd_iommu *iommu);
 
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index e13be244c1..3466c5efd9 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -30,12 +30,20 @@
 #include <xen/time.h>
 #include <xen/pci.h>
 #include <xen/pci_regs.h>
+#include <xen/sched.h>
+#include <xen/event.h>
 #include <xen/keyhandler.h>
+#include <xen/list.h>
+#include <xen/spinlock.h>
+#include <xen/iommu.h>
+#include <xen/lib.h>
 #include <asm/msi.h>
-#include <asm/nops.h>
 #include <asm/irq.h>
 #include <asm/hvm/vmx/vmx.h>
 #include <asm/p2m.h>
+#include <asm/bitops.h>
+#include <asm/iommu.h>
+#include <asm/page.h>
 #include <mach_apic.h>
 #include "iommu.h"
 #include "dmar.h"
@@ -46,14 +54,6 @@
 #define CONTIG_MASK DMA_PTE_CONTIG_MASK
 #include <asm/pt-contig-markers.h>
 
-/* dom_io is used as a sentinel for quarantined devices */
-#define QUARANTINE_SKIP(d, pgd_maddr) ((d) == dom_io && !(pgd_maddr))
-#define DEVICE_DOMID(d, pdev) ((d) != dom_io ? (d)->domain_id \
-                                             : (pdev)->arch.pseudo_domid)
-#define DEVICE_PGTABLE(d, pdev) ((d) != dom_io \
-                                 ? dom_iommu(d)->arch.vtd.pgd_maddr \
-                                 : (pdev)->arch.vtd.pgd_maddr)
-
 bool __read_mostly iommu_igfx = true;
 bool __read_mostly iommu_qinval = true;
 #ifndef iommu_snoop
@@ -206,26 +206,14 @@ static bool any_pdev_behind_iommu(const struct domain *d,
  * clear iommu in iommu_bitmap and clear domain_id in domid_bitmap.
  */
 static void check_cleanup_domid_map(const struct domain *d,
+                                    const struct iommu_context *ctx,
                                     const struct pci_dev *exclude,
                                     struct vtd_iommu *iommu)
 {
-    bool found;
-
-    if ( d == dom_io )
-        return;
-
-    found = any_pdev_behind_iommu(d, exclude, iommu);
-    /*
-     * Hidden devices are associated with DomXEN but usable by the hardware
-     * domain. Hence they need considering here as well.
-     */
-    if ( !found && is_hardware_domain(d) )
-        found = any_pdev_behind_iommu(dom_xen, exclude, iommu);
-
-    if ( !found )
+    if ( !any_pdev_behind_iommu(d, exclude, iommu) )
     {
-        clear_bit(iommu->index, dom_iommu(d)->arch.vtd.iommu_bitmap);
-        cleanup_domid_map(d->domain_id, iommu);
+        clear_bit(iommu->index, ctx->arch.vtd.iommu_bitmap);
+        cleanup_domid_map(ctx->arch.vtd.didmap[iommu->index], iommu);
     }
 }
 
@@ -312,8 +300,9 @@ static u64 bus_to_context_maddr(struct vtd_iommu *iommu, u8 bus)
  *   PTE for the requested address,
  * - for target == 0 the full PTE contents below PADDR_BITS limit.
  */
-static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
-                                       unsigned int target,
+static uint64_t addr_to_dma_page_maddr(struct domain *domain,
+                                       struct iommu_context *ctx,
+                                       daddr_t addr, unsigned int target,
                                        unsigned int *flush_flags, bool alloc)
 {
     struct domain_iommu *hd = dom_iommu(domain);
@@ -323,10 +312,9 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
     u64 pte_maddr = 0;
 
     addr &= (((u64)1) << addr_width) - 1;
-    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
     ASSERT(target || !alloc);
 
-    if ( !hd->arch.vtd.pgd_maddr )
+    if ( !ctx->arch.vtd.pgd_maddr )
     {
         struct page_info *pg;
 
@@ -334,13 +322,13 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
             goto out;
 
         pte_maddr = level;
-        if ( !(pg = iommu_alloc_pgtable(hd, 0)) )
+        if ( !(pg = iommu_alloc_pgtable(hd, ctx, 0)) )
             goto out;
 
-        hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+        ctx->arch.vtd.pgd_maddr = page_to_maddr(pg);
     }
 
-    pte_maddr = hd->arch.vtd.pgd_maddr;
+    pte_maddr = ctx->arch.vtd.pgd_maddr;
     parent = map_vtd_domain_page(pte_maddr);
     while ( level > target )
     {
@@ -376,7 +364,7 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
             }
 
             pte_maddr = level - 1;
-            pg = iommu_alloc_pgtable(hd, DMA_PTE_CONTIG_MASK);
+            pg = iommu_alloc_pgtable(hd, ctx, DMA_PTE_CONTIG_MASK);
             if ( !pg )
                 break;
 
@@ -428,38 +416,25 @@ static uint64_t addr_to_dma_page_maddr(struct domain *domain, daddr_t addr,
     return pte_maddr;
 }
 
-static paddr_t domain_pgd_maddr(struct domain *d, paddr_t pgd_maddr,
-                                unsigned int nr_pt_levels)
+static paddr_t get_context_pgd(struct domain *d, struct iommu_context *ctx,
+                               unsigned int nr_pt_levels)
 {
-    struct domain_iommu *hd = dom_iommu(d);
     unsigned int agaw;
+    paddr_t pgd_maddr = ctx->arch.vtd.pgd_maddr;
 
-    ASSERT(spin_is_locked(&hd->arch.mapping_lock));
-
-    if ( pgd_maddr )
-        /* nothing */;
-    else if ( iommu_use_hap_pt(d) )
+    if ( !ctx->arch.vtd.pgd_maddr )
     {
-        pagetable_t pgt = p2m_get_pagetable(p2m_get_hostp2m(d));
+        /*
+         * Ensure we have pagetables allocated down to the smallest
+         * level the loop below may need to run to.
+         */
+        addr_to_dma_page_maddr(d, ctx, 0, min_pt_levels, NULL, true);
 
-        pgd_maddr = pagetable_get_paddr(pgt);
+        if ( !ctx->arch.vtd.pgd_maddr )
+            return 0;
     }
-    else
-    {
-        if ( !hd->arch.vtd.pgd_maddr )
-        {
-            /*
-             * Ensure we have pagetables allocated down to the smallest
-             * level the loop below may need to run to.
-             */
-            addr_to_dma_page_maddr(d, 0, min_pt_levels, NULL, true);
-
-            if ( !hd->arch.vtd.pgd_maddr )
-                return 0;
-        }
 
-        pgd_maddr = hd->arch.vtd.pgd_maddr;
-    }
+    pgd_maddr = ctx->arch.vtd.pgd_maddr;
 
     /* Skip top level(s) of page tables for less-than-maximum level DRHDs. */
     for ( agaw = level_to_agaw(4);
@@ -727,28 +702,18 @@ static int __must_check iommu_flush_all(void)
     return rc;
 }
 
-static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
+static int __must_check cf_check iommu_flush_iotlb(struct domain *d,
+                                                   struct iommu_context *ctx,
+                                                   dfn_t dfn,
                                                    unsigned long page_count,
                                                    unsigned int flush_flags)
 {
-    struct domain_iommu *hd = dom_iommu(d);
     struct acpi_drhd_unit *drhd;
     struct vtd_iommu *iommu;
     bool flush_dev_iotlb;
     int iommu_domid;
     int ret = 0;
 
-    if ( flush_flags & IOMMU_FLUSHF_all )
-    {
-        dfn = INVALID_DFN;
-        page_count = 0;
-    }
-    else
-    {
-        ASSERT(page_count && !dfn_eq(dfn, INVALID_DFN));
-        ASSERT(flush_flags);
-    }
-
     /*
      * No need pcideves_lock here because we have flush
      * when assign/deassign device
@@ -759,13 +724,20 @@ static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
 
         iommu = drhd->iommu;
 
-        if ( !test_bit(iommu->index, hd->arch.vtd.iommu_bitmap) )
-            continue;
+        if ( ctx )
+        {
+            if ( !test_bit(iommu->index, ctx->arch.vtd.iommu_bitmap) )
+                continue;
+
+            iommu_domid = get_iommu_did(ctx->arch.vtd.didmap[iommu->index], iommu, true);
+
+            if ( iommu_domid == -1 )
+                continue;
+        }
+        else
+            iommu_domid = 0;
 
         flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
-        iommu_domid = get_iommu_did(d->domain_id, iommu, !d->is_dying);
-        if ( iommu_domid == -1 )
-            continue;
 
         if ( !page_count || (page_count & (page_count - 1)) ||
              dfn_eq(dfn, INVALID_DFN) || !IS_ALIGNED(dfn_x(dfn), page_count) )
@@ -784,10 +756,13 @@ static int __must_check cf_check iommu_flush_iotlb(struct domain *d, dfn_t dfn,
             ret = rc;
     }
 
+    if ( !ret && ctx )
+        arch_iommu_flush_free_queue(d, ctx);
+
     return ret;
 }
 
-static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level)
+static void queue_free_pt(struct iommu_context *ctx, mfn_t mfn, unsigned int level)
 {
     if ( level > 1 )
     {
@@ -796,13 +771,13 @@ static void queue_free_pt(struct domain_iommu *hd, mfn_t mfn, unsigned int level
 
         for ( i = 0; i < PTE_NUM; ++i )
             if ( dma_pte_present(pt[i]) && !dma_pte_superpage(pt[i]) )
-                queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(pt[i])),
+                queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(pt[i])),
                               level - 1);
 
         unmap_domain_page(pt);
     }
 
-    iommu_queue_free_pgtable(hd, mfn_to_page(mfn));
+    iommu_queue_free_pgtable(ctx, mfn_to_page(mfn));
 }
 
 static int iommu_set_root_entry(struct vtd_iommu *iommu)
@@ -1433,11 +1408,6 @@ static int cf_check intel_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
 
-    hd->arch.vtd.iommu_bitmap = xzalloc_array(unsigned long,
-                                              BITS_TO_LONGS(nr_iommus));
-    if ( !hd->arch.vtd.iommu_bitmap )
-        return -ENOMEM;
-
     hd->arch.vtd.agaw = width_to_agaw(DEFAULT_DOMAIN_ADDRESS_WIDTH);
 
     return 0;
@@ -1465,32 +1435,22 @@ static void __hwdom_init cf_check intel_iommu_hwdom_init(struct domain *d)
     }
 }
 
-/*
- * This function returns
- * - a negative errno value upon error,
- * - zero upon success when previously the entry was non-present, or this isn't
- *   the "main" request for a device (pdev == NULL), or for no-op quarantining
- *   assignments,
- * - positive (one) upon success when previously the entry was present and this
- *   is the "main" request for a device (pdev != NULL).
+/**
+ * Apply a context to a device.
+ * @param domain Domain of the context
+ * @param ctx IOMMU context to apply
+ * @param iommu IOMMU hardware to use (must match the device's IOMMU)
+ * @param devfn PCI device function (may differ from the device's own devfn)
  */
-int domain_context_mapping_one(
-    struct domain *domain,
-    struct vtd_iommu *iommu,
-    uint8_t bus, uint8_t devfn, const struct pci_dev *pdev,
-    domid_t domid, paddr_t pgd_maddr, unsigned int mode)
+int apply_context_single(struct domain *domain, struct iommu_context *ctx,
+                         struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn)
 {
-    struct domain_iommu *hd = dom_iommu(domain);
     struct context_entry *context, *context_entries, lctxt;
-    __uint128_t old;
+    __uint128_t res, old;
     uint64_t maddr;
-    uint16_t seg = iommu->drhd->segment, prev_did = 0;
-    struct domain *prev_dom = NULL;
+    uint16_t seg = iommu->drhd->segment, prev_did = 0, did;
     int rc, ret;
-    bool flush_dev_iotlb;
-
-    if ( QUARANTINE_SKIP(domain, pgd_maddr) )
-        return 0;
+    bool flush_dev_iotlb, overwrite_entry = false;
 
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
@@ -1499,28 +1459,15 @@ int domain_context_mapping_one(
     context = &context_entries[devfn];
     old = (lctxt = *context).full;
 
-    if ( context_present(lctxt) )
-    {
-        domid_t domid;
+    did = ctx->arch.vtd.didmap[iommu->index];
 
+    if ( context_present(*context) )
+    {
         prev_did = context_domain_id(lctxt);
-        domid = did_to_domain_id(iommu, prev_did);
-        if ( domid < DOMID_FIRST_RESERVED )
-            prev_dom = rcu_lock_domain_by_id(domid);
-        else if ( pdev ? domid == pdev->arch.pseudo_domid : domid > DOMID_MASK )
-            prev_dom = rcu_lock_domain(dom_io);
-        if ( !prev_dom )
-        {
-            spin_unlock(&iommu->lock);
-            unmap_vtd_domain_page(context_entries);
-            dprintk(XENLOG_DEBUG VTDPREFIX,
-                    "no domain for did %u (nr_dom %u)\n",
-                    prev_did, cap_ndoms(iommu->cap));
-            return -ESRCH;
-        }
+        overwrite_entry = true;
     }
 
-    if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
+    if ( iommu_hwdom_passthrough && is_hardware_domain(domain) && !ctx->id )
     {
         context_set_translation_type(lctxt, CONTEXT_TT_PASS_THRU);
     }
@@ -1528,16 +1475,10 @@ int domain_context_mapping_one(
     {
         paddr_t root;
 
-        spin_lock(&hd->arch.mapping_lock);
-
-        root = domain_pgd_maddr(domain, pgd_maddr, iommu->nr_pt_levels);
+        root = get_context_pgd(domain, ctx, iommu->nr_pt_levels);
         if ( !root )
         {
-            spin_unlock(&hd->arch.mapping_lock);
-            spin_unlock(&iommu->lock);
             unmap_vtd_domain_page(context_entries);
-            if ( prev_dom )
-                rcu_unlock_domain(prev_dom);
             return -ENOMEM;
         }
 
@@ -1546,98 +1487,39 @@ int domain_context_mapping_one(
             context_set_translation_type(lctxt, CONTEXT_TT_DEV_IOTLB);
         else
             context_set_translation_type(lctxt, CONTEXT_TT_MULTI_LEVEL);
-
-        spin_unlock(&hd->arch.mapping_lock);
     }
 
-    rc = context_set_domain_id(&lctxt, domid, iommu);
+    rc = context_set_domain_id(&lctxt, did, iommu);
     if ( rc )
-    {
-    unlock:
-        spin_unlock(&iommu->lock);
-        unmap_vtd_domain_page(context_entries);
-        if ( prev_dom )
-            rcu_unlock_domain(prev_dom);
-        return rc;
-    }
-
-    if ( !prev_dom )
-    {
-        context_set_address_width(lctxt, level_to_agaw(iommu->nr_pt_levels));
-        context_set_fault_enable(lctxt);
-        context_set_present(lctxt);
-    }
-    else if ( prev_dom == domain )
-    {
-        ASSERT(lctxt.full == context->full);
-        rc = !!pdev;
         goto unlock;
-    }
-    else
-    {
-        ASSERT(context_address_width(lctxt) ==
-               level_to_agaw(iommu->nr_pt_levels));
-        ASSERT(!context_fault_disable(lctxt));
-    }
-
-    if ( cpu_has_cx16 )
-    {
-        __uint128_t res = cmpxchg16b(context, &old, &lctxt.full);
 
-        /*
-         * Hardware does not update the context entry behind our backs,
-         * so the return value should match "old".
-         */
-        if ( res != old )
-        {
-            if ( pdev )
-                check_cleanup_domid_map(domain, pdev, iommu);
-            printk(XENLOG_ERR
-                   "%pp: unexpected context entry %016lx_%016lx (expected %016lx_%016lx)\n",
-                   &PCI_SBDF(seg, bus, devfn),
-                   (uint64_t)(res >> 64), (uint64_t)res,
-                   (uint64_t)(old >> 64), (uint64_t)old);
-            rc = -EILSEQ;
-            goto unlock;
-        }
-    }
-    else if ( !prev_dom || !(mode & MAP_WITH_RMRR) )
-    {
-        context_clear_present(*context);
-        iommu_sync_cache(context, sizeof(*context));
+    context_set_address_width(lctxt, level_to_agaw(iommu->nr_pt_levels));
+    context_set_fault_enable(lctxt);
+    context_set_present(lctxt);
 
-        write_atomic(&context->hi, lctxt.hi);
-        /* No barrier should be needed between these two. */
-        write_atomic(&context->lo, lctxt.lo);
-    }
-    else /* Best effort, updating DID last. */
-    {
-         /*
-          * By non-atomically updating the context entry's DID field last,
-          * during a short window in time TLB entries with the old domain ID
-          * but the new page tables may be inserted.  This could affect I/O
-          * of other devices using this same (old) domain ID.  Such updating
-          * therefore is not a problem if this was the only device associated
-          * with the old domain ID.  Diverting I/O of any of a dying domain's
-          * devices to the quarantine page tables is intended anyway.
-          */
-        if ( !(mode & (MAP_OWNER_DYING | MAP_SINGLE_DEVICE)) )
-            printk(XENLOG_WARNING VTDPREFIX
-                   " %pp: reassignment may cause %pd data corruption\n",
-                   &PCI_SBDF(seg, bus, devfn), prev_dom);
+    res = cmpxchg16b(context, &old, &lctxt.full);
 
-        write_atomic(&context->lo, lctxt.lo);
-        /* No barrier should be needed between these two. */
-        write_atomic(&context->hi, lctxt.hi);
+    /*
+     * Hardware does not update the context entry behind our backs,
+     * so the return value should match "old".
+     */
+    if ( res != old )
+    {
+        printk(XENLOG_ERR
+               "%pp: unexpected context entry %016lx_%016lx (expected %016lx_%016lx)\n",
+               &PCI_SBDF(seg, bus, devfn),
+               (uint64_t)(res >> 64), (uint64_t)res,
+               (uint64_t)(old >> 64), (uint64_t)old);
+        rc = -EILSEQ;
+        goto unlock;
     }
 
     iommu_sync_cache(context, sizeof(struct context_entry));
-    spin_unlock(&iommu->lock);
 
     rc = iommu_flush_context_device(iommu, prev_did, PCI_BDF(bus, devfn),
-                                    DMA_CCMD_MASK_NOBIT, !prev_dom);
+                                    DMA_CCMD_MASK_NOBIT, !overwrite_entry);
     flush_dev_iotlb = !!find_ats_dev_drhd(iommu);
-    ret = iommu_flush_iotlb_dsi(iommu, prev_did, !prev_dom, flush_dev_iotlb);
+    ret = iommu_flush_iotlb_dsi(iommu, prev_did, !overwrite_entry, flush_dev_iotlb);
 
     /*
      * The current logic for returns:
@@ -1653,230 +1535,55 @@ int domain_context_mapping_one(
     if ( rc > 0 )
         rc = 0;
 
-    set_bit(iommu->index, hd->arch.vtd.iommu_bitmap);
+    set_bit(iommu->index, ctx->arch.vtd.iommu_bitmap);
 
     unmap_vtd_domain_page(context_entries);
+    spin_unlock(&iommu->lock);
 
     if ( !seg && !rc )
-        rc = me_wifi_quirk(domain, bus, devfn, domid, pgd_maddr, mode);
-
-    if ( rc && !(mode & MAP_ERROR_RECOVERY) )
-    {
-        if ( !prev_dom ||
-             /*
-              * Unmapping here means DEV_TYPE_PCI devices with RMRRs (if such
-              * exist) would cause problems if such a region was actually
-              * accessed.
-              */
-             (prev_dom == dom_io && !pdev) )
-            ret = domain_context_unmap_one(domain, iommu, bus, devfn);
-        else
-            ret = domain_context_mapping_one(prev_dom, iommu, bus, devfn, pdev,
-                                             DEVICE_DOMID(prev_dom, pdev),
-                                             DEVICE_PGTABLE(prev_dom, pdev),
-                                             (mode & MAP_WITH_RMRR) |
-                                             MAP_ERROR_RECOVERY) < 0;
-
-        if ( !ret && pdev && pdev->devfn == devfn )
-            check_cleanup_domid_map(domain, pdev, iommu);
-    }
+        rc = me_wifi_quirk(domain, bus, devfn, did, 0, ctx);
 
-    if ( prev_dom )
-        rcu_unlock_domain(prev_dom);
+    return rc;
 
-    return rc ?: pdev && prev_dom;
+ unlock:
+    unmap_vtd_domain_page(context_entries);
+    spin_unlock(&iommu->lock);
+    return rc;
 }
 
-static const struct acpi_drhd_unit *domain_context_unmap(
-    struct domain *d, uint8_t devfn, struct pci_dev *pdev);
-
-static int domain_context_mapping(struct domain *domain, u8 devfn,
-                                  struct pci_dev *pdev)
+int apply_context(struct domain *d, struct iommu_context *ctx,
+                  struct pci_dev *pdev, u8 devfn)
 {
     const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
-    const struct acpi_rmrr_unit *rmrr;
-    paddr_t pgd_maddr = DEVICE_PGTABLE(domain, pdev);
-    domid_t orig_domid = pdev->arch.pseudo_domid;
     int ret = 0;
-    unsigned int i, mode = 0;
-    uint16_t seg = pdev->seg, bdf;
-    uint8_t bus = pdev->bus, secbus;
-
-    /*
-     * Generally we assume only devices from one node to get assigned to a
-     * given guest.  But even if not, by replacing the prior value here we
-     * guarantee that at least some basic allocations for the device being
-     * added will get done against its node.  Any further allocations for
-     * this or other devices may be penalized then, but some would also be
-     * if we left other than NUMA_NO_NODE untouched here.
-     */
-    if ( drhd && drhd->iommu->node != NUMA_NO_NODE )
-        dom_iommu(domain)->node = drhd->iommu->node;
-
-    ASSERT(pcidevs_locked());
-
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment != pdev->seg || bdf != pdev->sbdf.bdf )
-            continue;
 
-        mode |= MAP_WITH_RMRR;
-        break;
-    }
+    if ( !drhd )
+        return -EINVAL;
 
-    if ( domain != pdev->domain && pdev->domain != dom_io )
+    if ( pdev->type == DEV_TYPE_PCI_HOST_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe2PCI_BRIDGE ||
+         pdev->type == DEV_TYPE_LEGACY_PCI_BRIDGE )
     {
-        if ( pdev->domain->is_dying )
-            mode |= MAP_OWNER_DYING;
-        else if ( drhd &&
-                  !any_pdev_behind_iommu(pdev->domain, pdev, drhd->iommu) &&
-                  !pdev->phantom_stride )
-            mode |= MAP_SINGLE_DEVICE;
+        printk(XENLOG_WARNING VTDPREFIX " Ignoring apply_context on PCI bridge\n");
+        return 0;
     }
 
-    switch ( pdev->type )
-    {
-        bool prev_present;
-
-    case DEV_TYPE_PCI_HOST_BRIDGE:
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:Hostbridge: skip %pp map\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        if ( !is_hardware_domain(domain) )
-            return -EPERM;
-        break;
-
-    case DEV_TYPE_PCIe_BRIDGE:
-    case DEV_TYPE_PCIe2PCI_BRIDGE:
-    case DEV_TYPE_LEGACY_PCI_BRIDGE:
-        break;
-
-    case DEV_TYPE_PCIe_ENDPOINT:
-        if ( !drhd )
-            return -ENODEV;
-
-        if ( iommu_quarantine && orig_domid == DOMID_INVALID )
-        {
-            pdev->arch.pseudo_domid =
-                iommu_alloc_domid(drhd->iommu->pseudo_domid_map);
-            if ( pdev->arch.pseudo_domid == DOMID_INVALID )
-                return -ENOSPC;
-        }
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCIe: map %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn, pdev,
-                                         DEVICE_DOMID(domain, pdev), pgd_maddr,
-                                         mode);
-        if ( ret > 0 )
-            ret = 0;
-        if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
-            enable_ats_device(pdev, &drhd->iommu->ats_devices);
-
-        break;
-
-    case DEV_TYPE_PCI:
-        if ( !drhd )
-            return -ENODEV;
-
-        if ( iommu_quarantine && orig_domid == DOMID_INVALID )
-        {
-            pdev->arch.pseudo_domid =
-                iommu_alloc_domid(drhd->iommu->pseudo_domid_map);
-            if ( pdev->arch.pseudo_domid == DOMID_INVALID )
-                return -ENOSPC;
-        }
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCI: map %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-
-        ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
-                                         pdev, DEVICE_DOMID(domain, pdev),
-                                         pgd_maddr, mode);
-        if ( ret < 0 )
-            break;
-        prev_present = ret;
-
-        if ( (ret = find_upstream_bridge(seg, &bus, &devfn, &secbus)) < 1 )
-        {
-            if ( !ret )
-                break;
-            ret = -ENXIO;
-        }
-        /*
-         * Strictly speaking if the device is the only one behind this bridge
-         * and the only one with this (secbus,0,0) tuple, it could be allowed
-         * to be re-assigned regardless of RMRR presence.  But let's deal with
-         * that case only if it is actually found in the wild.  Note that
-         * dealing with this just here would still not render the operation
-         * secure.
-         */
-        else if ( prev_present && (mode & MAP_WITH_RMRR) &&
-                  domain != pdev->domain )
-            ret = -EOPNOTSUPP;
-
-        /*
-         * Mapping a bridge should, if anything, pass the struct pci_dev of
-         * that bridge. Since bridges don't normally get assigned to guests,
-         * their owner would be the wrong one. Pass NULL instead.
-         */
-        if ( ret >= 0 )
-            ret = domain_context_mapping_one(domain, drhd->iommu, bus, devfn,
-                                             NULL, DEVICE_DOMID(domain, pdev),
-                                             pgd_maddr, mode);
-
-        /*
-         * Devices behind PCIe-to-PCI/PCIx bridge may generate different
-         * requester-id. It may originate from devfn=0 on the secondary bus
-         * behind the bridge. Map that id as well if we didn't already.
-         *
-         * Somewhat similar as for bridges, we don't want to pass a struct
-         * pci_dev here - there may not even exist one for this (secbus,0,0)
-         * tuple. If there is one, without properly working device groups it
-         * may again not have the correct owner.
-         */
-        if ( !ret && pdev_type(seg, bus, devfn) == DEV_TYPE_PCIe2PCI_BRIDGE &&
-             (secbus != pdev->bus || pdev->devfn != 0) )
-            ret = domain_context_mapping_one(domain, drhd->iommu, secbus, 0,
-                                             NULL, DEVICE_DOMID(domain, pdev),
-                                             pgd_maddr, mode);
-
-        if ( ret )
-        {
-            if ( !prev_present )
-                domain_context_unmap(domain, devfn, pdev);
-            else if ( pdev->domain != domain ) /* Avoid infinite recursion. */
-                domain_context_mapping(pdev->domain, devfn, pdev);
-        }
+    ASSERT(pcidevs_locked());
 
-        break;
+    ret = apply_context_single(d, ctx, drhd->iommu, pdev->bus, devfn);
 
-    default:
-        dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
-                domain, pdev->type, &PCI_SBDF(seg, bus, devfn));
-        ret = -EINVAL;
-        break;
-    }
+    if ( !ret && ats_device(pdev, drhd) > 0 )
+        enable_ats_device(pdev, &drhd->iommu->ats_devices);
 
     if ( !ret && devfn == pdev->devfn )
         pci_vtd_quirk(pdev);
 
-    if ( ret && drhd && orig_domid == DOMID_INVALID )
-    {
-        iommu_free_domid(pdev->arch.pseudo_domid,
-                         drhd->iommu->pseudo_domid_map);
-        pdev->arch.pseudo_domid = DOMID_INVALID;
-    }
-
     return ret;
 }
 
-int domain_context_unmap_one(
-    struct domain *domain,
-    struct vtd_iommu *iommu,
-    uint8_t bus, uint8_t devfn)
+int unapply_context_single(struct domain *domain, struct iommu_context *ctx,
+                           struct vtd_iommu *iommu, uint8_t bus, uint8_t devfn)
 {
     struct context_entry *context, *context_entries;
     u64 maddr;
@@ -1928,8 +1635,8 @@ int domain_context_unmap_one(
     unmap_vtd_domain_page(context_entries);
 
     if ( !iommu->drhd->segment && !rc )
-        rc = me_wifi_quirk(domain, bus, devfn, DOMID_INVALID, 0,
-                           UNMAP_ME_PHANTOM_FUNC);
+        rc = me_wifi_quirk(domain, bus, devfn, DOMID_INVALID,
+                           UNMAP_ME_PHANTOM_FUNC, NULL);
 
     if ( rc && !is_hardware_domain(domain) && domain != dom_io )
     {
@@ -1947,105 +1654,14 @@ int domain_context_unmap_one(
     return rc;
 }
 
-static const struct acpi_drhd_unit *domain_context_unmap(
-    struct domain *domain,
-    uint8_t devfn,
-    struct pci_dev *pdev)
-{
-    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
-    struct vtd_iommu *iommu = drhd ? drhd->iommu : NULL;
-    int ret;
-    uint16_t seg = pdev->seg;
-    uint8_t bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
-
-    switch ( pdev->type )
-    {
-    case DEV_TYPE_PCI_HOST_BRIDGE:
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:Hostbridge: skip %pp unmap\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        return ERR_PTR(is_hardware_domain(domain) ? 0 : -EPERM);
-
-    case DEV_TYPE_PCIe_BRIDGE:
-    case DEV_TYPE_PCIe2PCI_BRIDGE:
-    case DEV_TYPE_LEGACY_PCI_BRIDGE:
-        return ERR_PTR(0);
-
-    case DEV_TYPE_PCIe_ENDPOINT:
-        if ( !iommu )
-            return ERR_PTR(-ENODEV);
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCIe: unmap %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        ret = domain_context_unmap_one(domain, iommu, bus, devfn);
-        if ( !ret && devfn == pdev->devfn && ats_device(pdev, drhd) > 0 )
-            disable_ats_device(pdev);
-
-        break;
-
-    case DEV_TYPE_PCI:
-        if ( !iommu )
-            return ERR_PTR(-ENODEV);
-
-        if ( iommu_debug )
-            printk(VTDPREFIX "%pd:PCI: unmap %pp\n",
-                   domain, &PCI_SBDF(seg, bus, devfn));
-        ret = domain_context_unmap_one(domain, iommu, bus, devfn);
-        if ( ret )
-            break;
-
-        tmp_bus = bus;
-        tmp_devfn = devfn;
-        if ( (ret = find_upstream_bridge(seg, &tmp_bus, &tmp_devfn,
-                                         &secbus)) < 1 )
-        {
-            if ( ret )
-            {
-                ret = -ENXIO;
-                if ( !domain->is_dying &&
-                     !is_hardware_domain(domain) && domain != dom_io )
-                {
-                    domain_crash(domain);
-                    /* Make upper layers continue in a best effort manner. */
-                    ret = 0;
-                }
-            }
-            break;
-        }
-
-        ret = domain_context_unmap_one(domain, iommu, tmp_bus, tmp_devfn);
-        /* PCIe to PCI/PCIx bridge */
-        if ( !ret && pdev_type(seg, tmp_bus, tmp_devfn) == DEV_TYPE_PCIe2PCI_BRIDGE )
-            ret = domain_context_unmap_one(domain, iommu, secbus, 0);
-
-        break;
-
-    default:
-        dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
-                domain, pdev->type, &PCI_SBDF(seg, bus, devfn));
-        return ERR_PTR(-EINVAL);
-    }
-
-    if ( !ret && pdev->devfn == devfn &&
-         !QUARANTINE_SKIP(domain, pdev->arch.vtd.pgd_maddr) )
-        check_cleanup_domid_map(domain, pdev, iommu);
-
-    return drhd;
-}
-
-static void cf_check iommu_clear_root_pgtable(struct domain *d)
+static void cf_check iommu_clear_root_pgtable(struct domain *d, struct iommu_context *ctx)
 {
-    struct domain_iommu *hd = dom_iommu(d);
-
-    spin_lock(&hd->arch.mapping_lock);
-    hd->arch.vtd.pgd_maddr = 0;
-    spin_unlock(&hd->arch.mapping_lock);
+    ctx->arch.vtd.pgd_maddr = 0;
 }
 
 static void cf_check iommu_domain_teardown(struct domain *d)
 {
-    struct domain_iommu *hd = dom_iommu(d);
+    struct iommu_context *ctx = iommu_default_context(d);
     const struct acpi_drhd_unit *drhd;
 
     if ( list_empty(&acpi_drhd_units) )
@@ -2053,37 +1669,15 @@ static void cf_check iommu_domain_teardown(struct domain *d)
 
     iommu_identity_map_teardown(d);
 
-    ASSERT(!hd->arch.vtd.pgd_maddr);
+    ASSERT(!ctx->arch.vtd.pgd_maddr);
 
     for_each_drhd_unit ( drhd )
         cleanup_domid_map(d->domain_id, drhd->iommu);
-
-    XFREE(hd->arch.vtd.iommu_bitmap);
-}
-
-static void quarantine_teardown(struct pci_dev *pdev,
-                                const struct acpi_drhd_unit *drhd)
-{
-    struct domain_iommu *hd = dom_iommu(dom_io);
-
-    ASSERT(pcidevs_locked());
-
-    if ( !pdev->arch.vtd.pgd_maddr )
-        return;
-
-    ASSERT(page_list_empty(&hd->arch.pgtables.list));
-    page_list_move(&hd->arch.pgtables.list, &pdev->arch.pgtables_list);
-    while ( iommu_free_pgtables(dom_io) == -ERESTART )
-        /* nothing */;
-    pdev->arch.vtd.pgd_maddr = 0;
-
-    if ( drhd )
-        cleanup_domid_map(pdev->arch.pseudo_domid, drhd->iommu);
 }
 
 static int __must_check cf_check intel_iommu_map_page(
     struct domain *d, dfn_t dfn, mfn_t mfn, unsigned int flags,
-    unsigned int *flush_flags)
+    unsigned int *flush_flags, struct iommu_context *ctx)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct dma_pte *page, *pte, old, new = {};
@@ -2095,32 +1689,28 @@ static int __must_check cf_check intel_iommu_map_page(
            PAGE_SIZE_4K);
 
     /* Do nothing if VT-d shares EPT page table */
-    if ( iommu_use_hap_pt(d) )
+    if ( iommu_use_hap_pt(d) && !ctx->id )
         return 0;
 
     /* Do nothing if hardware domain and iommu supports pass thru. */
-    if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
+    if ( iommu_hwdom_passthrough && is_hardware_domain(d) && !ctx->id )
         return 0;
 
-    spin_lock(&hd->arch.mapping_lock);
-
     /*
      * IOMMU mapping request can be safely ignored when the domain is dying.
      *
-     * hd->arch.mapping_lock guarantees that d->is_dying will be observed
+     * hd->lock guarantees that d->is_dying will be observed
      * before any page tables are freed (see iommu_free_pgtables())
      */
     if ( d->is_dying )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         return 0;
     }
 
-    pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), level, flush_flags,
+    pg_maddr = addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), level, flush_flags,
                                       true);
     if ( pg_maddr < PAGE_SIZE )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         return -ENOMEM;
     }
 
@@ -2141,7 +1731,6 @@ static int __must_check cf_check intel_iommu_map_page(
 
     if ( !((old.val ^ new.val) & ~DMA_PTE_CONTIG_MASK) )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
@@ -2170,7 +1759,7 @@ static int __must_check cf_check intel_iommu_map_page(
         new.val &= ~(LEVEL_MASK << level_to_offset_bits(level));
         dma_set_pte_superpage(new);
 
-        pg_maddr = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), ++level,
+        pg_maddr = addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), ++level,
                                           flush_flags, false);
         BUG_ON(pg_maddr < PAGE_SIZE);
 
@@ -2180,11 +1769,10 @@ static int __must_check cf_check intel_iommu_map_page(
         iommu_sync_cache(pte, sizeof(*pte));
 
         *flush_flags |= IOMMU_FLUSHF_modified | IOMMU_FLUSHF_all;
-        iommu_queue_free_pgtable(hd, pg);
+        iommu_queue_free_pgtable(ctx, pg);
         perfc_incr(iommu_pt_coalesces);
     }
 
-    spin_unlock(&hd->arch.mapping_lock);
     unmap_vtd_domain_page(page);
 
     *flush_flags |= IOMMU_FLUSHF_added;
@@ -2193,7 +1781,7 @@ static int __must_check cf_check intel_iommu_map_page(
         *flush_flags |= IOMMU_FLUSHF_modified;
 
         if ( IOMMUF_order(flags) && !dma_pte_superpage(old) )
-            queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+            queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(old)),
                           IOMMUF_order(flags) / LEVEL_STRIDE);
     }
 
@@ -2201,7 +1789,8 @@ static int __must_check cf_check intel_iommu_map_page(
 }
 
 static int __must_check cf_check intel_iommu_unmap_page(
-    struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags)
+    struct domain *d, dfn_t dfn, unsigned int order, unsigned int *flush_flags,
+    struct iommu_context *ctx)
 {
     struct domain_iommu *hd = dom_iommu(d);
     daddr_t addr = dfn_to_daddr(dfn);
@@ -2216,28 +1805,23 @@ static int __must_check cf_check intel_iommu_unmap_page(
     ASSERT((hd->platform_ops->page_sizes >> order) & PAGE_SIZE_4K);
 
     /* Do nothing if VT-d shares EPT page table */
-    if ( iommu_use_hap_pt(d) )
+    if ( iommu_use_hap_pt(d) && !ctx->id )
         return 0;
 
     /* Do nothing if hardware domain and iommu supports pass thru. */
     if ( iommu_hwdom_passthrough && is_hardware_domain(d) )
         return 0;
 
-    spin_lock(&hd->arch.mapping_lock);
     /* get target level pte */
-    pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+    pg_maddr = addr_to_dma_page_maddr(d, ctx, addr, level, flush_flags, false);
     if ( pg_maddr < PAGE_SIZE )
-    {
-        spin_unlock(&hd->arch.mapping_lock);
         return pg_maddr ? -ENOMEM : 0;
-    }
 
     page = map_vtd_domain_page(pg_maddr);
     pte = &page[address_level_offset(addr, level)];
 
     if ( !dma_pte_present(*pte) )
     {
-        spin_unlock(&hd->arch.mapping_lock);
         unmap_vtd_domain_page(page);
         return 0;
     }
@@ -2255,7 +1839,7 @@ static int __must_check cf_check intel_iommu_unmap_page(
 
         unmap_vtd_domain_page(page);
 
-        pg_maddr = addr_to_dma_page_maddr(d, addr, level, flush_flags, false);
+        pg_maddr = addr_to_dma_page_maddr(d, ctx, addr, level, flush_flags, false);
         BUG_ON(pg_maddr < PAGE_SIZE);
 
         page = map_vtd_domain_page(pg_maddr);
@@ -2264,42 +1848,38 @@ static int __must_check cf_check intel_iommu_unmap_page(
         iommu_sync_cache(pte, sizeof(*pte));
 
         *flush_flags |= IOMMU_FLUSHF_all;
-        iommu_queue_free_pgtable(hd, pg);
+        iommu_queue_free_pgtable(ctx, pg);
         perfc_incr(iommu_pt_coalesces);
     }
 
-    spin_unlock(&hd->arch.mapping_lock);
-
     unmap_vtd_domain_page(page);
 
     *flush_flags |= IOMMU_FLUSHF_modified;
 
     if ( order && !dma_pte_superpage(old) )
-        queue_free_pt(hd, maddr_to_mfn(dma_pte_addr(old)),
+        queue_free_pt(ctx, maddr_to_mfn(dma_pte_addr(old)),
                       order / LEVEL_STRIDE);
 
     return 0;
 }
 
 static int cf_check intel_iommu_lookup_page(
-    struct domain *d, dfn_t dfn, mfn_t *mfn, unsigned int *flags)
+    struct domain *d, dfn_t dfn, mfn_t *mfn, unsigned int *flags,
+    struct iommu_context *ctx)
 {
-    struct domain_iommu *hd = dom_iommu(d);
     uint64_t val;
 
     /*
      * If VT-d shares EPT page table or if the domain is the hardware
      * domain and iommu_passthrough is set then pass back the dfn.
      */
-    if ( iommu_use_hap_pt(d) ||
-         (iommu_hwdom_passthrough && is_hardware_domain(d)) )
+    if ( (iommu_use_hap_pt(d) ||
+          (iommu_hwdom_passthrough && is_hardware_domain(d))) &&
+         !ctx->id )
         return -EOPNOTSUPP;
 
-    spin_lock(&hd->arch.mapping_lock);
-
-    val = addr_to_dma_page_maddr(d, dfn_to_daddr(dfn), 0, NULL, false);
 
-    spin_unlock(&hd->arch.mapping_lock);
+    val = addr_to_dma_page_maddr(d, ctx, dfn_to_daddr(dfn), 0, NULL, false);
 
     if ( val < PAGE_SIZE )
         return -ENOENT;
@@ -2320,7 +1900,7 @@ static bool __init vtd_ept_page_compatible(const struct vtd_iommu *iommu)
 
     /* EPT is not initialised yet, so we must check the capability in
      * the MSR explicitly rather than use cpu_has_vmx_ept_*() */
-    if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) != 0 ) 
+    if ( rdmsr_safe(MSR_IA32_VMX_EPT_VPID_CAP, ept_cap) != 0 )
         return false;
 
     return (ept_has_2mb(ept_cap) && opt_hap_2mb) <=
@@ -2329,44 +1909,6 @@ static bool __init vtd_ept_page_compatible(const struct vtd_iommu *iommu)
             (cap_sps_1gb(vtd_cap) && iommu_superpages);
 }
 
-static int cf_check intel_iommu_add_device(u8 devfn, struct pci_dev *pdev)
-{
-    struct acpi_rmrr_unit *rmrr;
-    u16 bdf;
-    int ret, i;
-
-    ASSERT(pcidevs_locked());
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    for_each_rmrr_device ( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == pdev->seg && bdf == PCI_BDF(pdev->bus, devfn) )
-        {
-            /*
-             * iommu_add_device() is only called for the hardware
-             * domain (see xen/drivers/passthrough/pci.c:pci_add_device()).
-             * Since RMRRs are always reserved in the e820 map for the hardware
-             * domain, there shouldn't be a conflict.
-             */
-            ret = iommu_identity_mapping(pdev->domain, p2m_access_rw,
-                                         rmrr->base_address, rmrr->end_address,
-                                         0);
-            if ( ret )
-                dprintk(XENLOG_ERR VTDPREFIX, "%pd: RMRR mapping failed\n",
-                        pdev->domain);
-        }
-    }
-
-    ret = domain_context_mapping(pdev->domain, devfn, pdev);
-    if ( ret )
-        dprintk(XENLOG_ERR VTDPREFIX, "%pd: context mapping failed\n",
-                pdev->domain);
-
-    return ret;
-}
-
 static int cf_check intel_iommu_enable_device(struct pci_dev *pdev)
 {
     struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
@@ -2382,49 +1924,16 @@ static int cf_check intel_iommu_enable_device(struct pci_dev *pdev)
     return ret >= 0 ? 0 : ret;
 }
 
-static int cf_check intel_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
-{
-    const struct acpi_drhd_unit *drhd;
-    struct acpi_rmrr_unit *rmrr;
-    u16 bdf;
-    unsigned int i;
-
-    if ( !pdev->domain )
-        return -EINVAL;
-
-    drhd = domain_context_unmap(pdev->domain, devfn, pdev);
-    if ( IS_ERR(drhd) )
-        return PTR_ERR(drhd);
-
-    for_each_rmrr_device ( rmrr, bdf, i )
-    {
-        if ( rmrr->segment != pdev->seg || bdf != PCI_BDF(pdev->bus, devfn) )
-            continue;
-
-        /*
-         * Any flag is nothing to clear these mappings but here
-         * its always safe and strict to set 0.
-         */
-        iommu_identity_mapping(pdev->domain, p2m_access_x, rmrr->base_address,
-                               rmrr->end_address, 0);
-    }
-
-    quarantine_teardown(pdev, drhd);
-
-    if ( drhd )
-    {
-        iommu_free_domid(pdev->arch.pseudo_domid,
-                         drhd->iommu->pseudo_domid_map);
-        pdev->arch.pseudo_domid = DOMID_INVALID;
-    }
-
-    return 0;
-}
-
 static int __hwdom_init cf_check setup_hwdom_device(
     u8 devfn, struct pci_dev *pdev)
 {
-    return domain_context_mapping(pdev->domain, devfn, pdev);
+    if ( pdev->type == DEV_TYPE_PCI_HOST_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe_BRIDGE ||
+         pdev->type == DEV_TYPE_PCIe2PCI_BRIDGE ||
+         pdev->type == DEV_TYPE_LEGACY_PCI_BRIDGE )
+        return 0;
+
+    return _iommu_attach_context(hardware_domain, pdev, 0);
 }
 
 void clear_fault_bits(struct vtd_iommu *iommu)
@@ -2518,7 +2027,7 @@ static int __must_check init_vtd_hw(bool resume)
 
     /*
      * Enable queue invalidation
-     */   
+     */
     for_each_drhd_unit ( drhd )
     {
         iommu = drhd->iommu;
@@ -2539,7 +2048,7 @@ static int __must_check init_vtd_hw(bool resume)
 
     /*
      * Enable interrupt remapping
-     */  
+     */
     if ( iommu_intremap != iommu_intremap_off )
     {
         int apic;
@@ -2622,15 +2131,61 @@ static struct iommu_state {
     uint32_t fectl;
 } *__read_mostly iommu_state;
 
-static int __init cf_check vtd_setup(void)
+static void arch_iommu_dump_domain_contexts(struct domain *d)
 {
-    struct acpi_drhd_unit *drhd;
-    struct vtd_iommu *iommu;
-    unsigned int large_sizes = iommu_superpages ? PAGE_SIZE_2M | PAGE_SIZE_1G : 0;
-    int ret;
-    bool reg_inval_supported = true;
+    unsigned int i, iommu_no;
+    struct pci_dev *pdev;
+    struct iommu_context *ctx;
+    struct domain_iommu *hd = dom_iommu(d);
 
-    if ( list_empty(&acpi_drhd_units) )
+    printk("%pd contexts\n", d);
+
+    spin_lock(&hd->lock);
+
+    for ( i = 0; i < (1 + hd->other_contexts.count); ++i )
+    {
+        if ( iommu_check_context(d, i) )
+        {
+            ctx = iommu_get_context(d, i);
+            printk(" Context %u (%"PRIx64")\n", i, ctx->arch.vtd.pgd_maddr);
+
+            for ( iommu_no = 0; iommu_no < nr_iommus; iommu_no++ )
+                printk("  IOMMU %u (used=%u; did=%u)\n", iommu_no,
+                       test_bit(iommu_no, ctx->arch.vtd.iommu_bitmap),
+                       ctx->arch.vtd.didmap[iommu_no]);
+
+            list_for_each_entry(pdev, &ctx->devices, context_list)
+            {
+                printk("  - %pp\n", &pdev->sbdf);
+            }
+        }
+    }
+
+    spin_unlock(&hd->lock);
+}
+
+static void arch_iommu_dump_contexts(unsigned char key)
+{
+    struct domain *d;
+
+    for_each_domain(d)
+        if ( is_iommu_enabled(d) )
+        {
+            struct domain_iommu *hd = dom_iommu(d);
+
+            printk("%pd arena page usage: %d\n", d,
+                   atomic_read(&hd->arch.pt_arena.used_pages));
+
+            arch_iommu_dump_domain_contexts(d);
+        }
+}
+
+static int __init cf_check vtd_setup(void)
+{
+    struct acpi_drhd_unit *drhd;
+    struct vtd_iommu *iommu;
+    unsigned int large_sizes = iommu_superpages ? PAGE_SIZE_2M | PAGE_SIZE_1G : 0;
+    int ret;
+    bool reg_inval_supported = true;
+
+    if ( list_empty(&acpi_drhd_units) )
     {
         ret = -ENODEV;
         goto error;
@@ -2749,6 +2304,7 @@ static int __init cf_check vtd_setup(void)
     iommu_ops.page_sizes |= large_sizes;
 
     register_keyhandler('V', vtd_dump_iommu_info, "dump iommu info", 1);
+    register_keyhandler('X', arch_iommu_dump_contexts, "dump iommu contexts", 1);
 
     return 0;
 
@@ -2763,192 +2319,6 @@ static int __init cf_check vtd_setup(void)
     return ret;
 }
 
-static int cf_check reassign_device_ownership(
-    struct domain *source,
-    struct domain *target,
-    u8 devfn, struct pci_dev *pdev)
-{
-    int ret;
-
-    if ( !QUARANTINE_SKIP(target, pdev->arch.vtd.pgd_maddr) )
-    {
-        if ( !has_arch_pdevs(target) )
-            vmx_pi_hooks_assign(target);
-
-#ifdef CONFIG_PV
-        /*
-         * Devices assigned to untrusted domains (here assumed to be any domU)
-         * can attempt to send arbitrary LAPIC/MSI messages. We are unprotected
-         * by the root complex unless interrupt remapping is enabled.
-         */
-        if ( !iommu_intremap && !is_hardware_domain(target) &&
-             !is_system_domain(target) )
-            untrusted_msi = true;
-#endif
-
-        ret = domain_context_mapping(target, devfn, pdev);
-
-        if ( !ret && pdev->devfn == devfn &&
-             !QUARANTINE_SKIP(source, pdev->arch.vtd.pgd_maddr) )
-        {
-            const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
-
-            if ( drhd )
-                check_cleanup_domid_map(source, pdev, drhd->iommu);
-        }
-    }
-    else
-    {
-        const struct acpi_drhd_unit *drhd;
-
-        drhd = domain_context_unmap(source, devfn, pdev);
-        ret = IS_ERR(drhd) ? PTR_ERR(drhd) : 0;
-    }
-    if ( ret )
-    {
-        if ( !has_arch_pdevs(target) )
-            vmx_pi_hooks_deassign(target);
-        return ret;
-    }
-
-    if ( devfn == pdev->devfn && pdev->domain != target )
-    {
-        write_lock(&source->pci_lock);
-        list_del(&pdev->domain_list);
-        write_unlock(&source->pci_lock);
-
-        pdev->domain = target;
-
-        write_lock(&target->pci_lock);
-        list_add(&pdev->domain_list, &target->pdev_list);
-        write_unlock(&target->pci_lock);
-    }
-
-    if ( !has_arch_pdevs(source) )
-        vmx_pi_hooks_deassign(source);
-
-    /*
-     * If the device belongs to the hardware domain, and it has RMRR, don't
-     * remove it from the hardware domain, because BIOS may use RMRR at
-     * booting time.
-     */
-    if ( !is_hardware_domain(source) )
-    {
-        const struct acpi_rmrr_unit *rmrr;
-        u16 bdf;
-        unsigned int i;
-
-        for_each_rmrr_device( rmrr, bdf, i )
-            if ( rmrr->segment == pdev->seg &&
-                 bdf == PCI_BDF(pdev->bus, devfn) )
-            {
-                /*
-                 * Any RMRR flag is always ignored when remove a device,
-                 * but its always safe and strict to set 0.
-                 */
-                ret = iommu_identity_mapping(source, p2m_access_x,
-                                             rmrr->base_address,
-                                             rmrr->end_address, 0);
-                if ( ret && ret != -ENOENT )
-                    return ret;
-            }
-    }
-
-    return 0;
-}
-
-static int cf_check intel_iommu_assign_device(
-    struct domain *d, u8 devfn, struct pci_dev *pdev, u32 flag)
-{
-    struct domain *s = pdev->domain;
-    struct acpi_rmrr_unit *rmrr;
-    int ret = 0, i;
-    u16 bdf, seg;
-    u8 bus;
-
-    if ( list_empty(&acpi_drhd_units) )
-        return -ENODEV;
-
-    seg = pdev->seg;
-    bus = pdev->bus;
-    /*
-     * In rare cases one given rmrr is shared by multiple devices but
-     * obviously this would put the security of a system at risk. So
-     * we would prevent from this sort of device assignment. But this
-     * can be permitted if user set
-     *      "pci = [ 'sbdf, rdm_policy=relaxed' ]"
-     *
-     * TODO: in the future we can introduce group device assignment
-     * interface to make sure devices sharing RMRR are assigned to the
-     * same domain together.
-     */
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == seg && bdf == PCI_BDF(bus, devfn) &&
-             rmrr->scope.devices_cnt > 1 )
-        {
-            bool relaxed = flag & XEN_DOMCTL_DEV_RDM_RELAXED;
-
-            printk(XENLOG_GUEST "%s" VTDPREFIX
-                   " It's %s to assign %pp"
-                   " with shared RMRR at %"PRIx64" for %pd.\n",
-                   relaxed ? XENLOG_WARNING : XENLOG_ERR,
-                   relaxed ? "risky" : "disallowed",
-                   &PCI_SBDF(seg, bus, devfn), rmrr->base_address, d);
-            if ( !relaxed )
-                return -EPERM;
-        }
-    }
-
-    if ( d == dom_io )
-        return reassign_device_ownership(s, d, devfn, pdev);
-
-    /* Setup rmrr identity mapping */
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == seg && bdf == PCI_BDF(bus, devfn) )
-        {
-            ret = iommu_identity_mapping(d, p2m_access_rw, rmrr->base_address,
-                                         rmrr->end_address, flag);
-            if ( ret )
-            {
-                printk(XENLOG_G_ERR VTDPREFIX
-                       "%pd: cannot map reserved region [%"PRIx64",%"PRIx64"]: %d\n",
-                       d, rmrr->base_address, rmrr->end_address, ret);
-                break;
-            }
-        }
-    }
-
-    if ( !ret )
-        ret = reassign_device_ownership(s, d, devfn, pdev);
-
-    /* See reassign_device_ownership() for the hwdom aspect. */
-    if ( !ret || is_hardware_domain(d) )
-        return ret;
-
-    for_each_rmrr_device( rmrr, bdf, i )
-    {
-        if ( rmrr->segment == seg && bdf == PCI_BDF(bus, devfn) )
-        {
-            int rc = iommu_identity_mapping(d, p2m_access_x,
-                                            rmrr->base_address,
-                                            rmrr->end_address, 0);
-
-            if ( rc && rc != -ENOENT )
-            {
-                printk(XENLOG_ERR VTDPREFIX
-                       "%pd: cannot unmap reserved region [%"PRIx64",%"PRIx64"]: %d\n",
-                       d, rmrr->base_address, rmrr->end_address, rc);
-                domain_crash(d);
-                break;
-            }
-        }
-    }
-
-    return ret;
-}
-
 static int cf_check intel_iommu_group_id(u16 seg, u8 bus, u8 devfn)
 {
     u8 secbus;
@@ -3073,6 +2443,11 @@ static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
     if ( level < 1 )
         return;
 
+    if ( pt_maddr == 0 )
+    {
+        printk(" (empty)\n");
+        return;
+    }
+
     pt_vaddr = map_vtd_domain_page(pt_maddr);
 
     next_level = level - 1;
@@ -3103,158 +2478,478 @@ static void vtd_dump_page_table_level(paddr_t pt_maddr, int level, paddr_t gpa,
 
 static void cf_check vtd_dump_page_tables(struct domain *d)
 {
-    const struct domain_iommu *hd = dom_iommu(d);
+    struct domain_iommu *hd = dom_iommu(d);
+    unsigned int i;
 
     printk(VTDPREFIX" %pd table has %d levels\n", d,
            agaw_to_level(hd->arch.vtd.agaw));
-    vtd_dump_page_table_level(hd->arch.vtd.pgd_maddr,
-                              agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+
+    for ( i = 1; i < (1 + hd->other_contexts.count); ++i )
+    {
+        bool allocated = iommu_check_context(d, i);
+
+        printk(VTDPREFIX " %pd context %u: %s\n", d, i,
+               allocated ? "allocated" : "non-allocated");
+
+        if ( allocated )
+        {
+            const struct iommu_context *ctx = iommu_get_context(d, i);
+            vtd_dump_page_table_level(ctx->arch.vtd.pgd_maddr,
+                                      agaw_to_level(hd->arch.vtd.agaw), 0, 0);
+        }
+    }
 }
 
-static int fill_qpt(struct dma_pte *this, unsigned int level,
-                    struct page_info *pgs[6])
+static int intel_iommu_context_init(struct domain *d, struct iommu_context *ctx, u32 flags)
 {
-    struct domain_iommu *hd = dom_iommu(dom_io);
-    unsigned int i;
-    int rc = 0;
+    struct acpi_drhd_unit *drhd;
 
-    for ( i = 0; !rc && i < PTE_NUM; ++i )
+    ctx->arch.vtd.didmap = xzalloc_array(u16, nr_iommus);
+
+    if ( !ctx->arch.vtd.didmap )
+        return -ENOMEM;
+
+    ctx->arch.vtd.iommu_bitmap = xzalloc_array(unsigned long,
+                                               BITS_TO_LONGS(nr_iommus));
+    if ( !ctx->arch.vtd.iommu_bitmap )
+        return -ENOMEM;
+
+    ctx->arch.vtd.duplicated_rmrr = false;
+    ctx->arch.vtd.superpage_progress = 0;
+
+    if ( flags & IOMMU_CONTEXT_INIT_default )
     {
-        struct dma_pte *pte = &this[i], *next;
+        ctx->arch.vtd.pgd_maddr = 0;
 
-        if ( !dma_pte_present(*pte) )
+        /* Populate context DID map using domain id. */
+        for_each_drhd_unit(drhd)
         {
-            if ( !pgs[level] )
-            {
-                /*
-                 * The pgtable allocator is fine for the leaf page, as well as
-                 * page table pages, and the resulting allocations are always
-                 * zeroed.
-                 */
-                pgs[level] = iommu_alloc_pgtable(hd, 0);
-                if ( !pgs[level] )
-                {
-                    rc = -ENOMEM;
-                    break;
-                }
-
-                if ( level )
-                {
-                    next = map_vtd_domain_page(page_to_maddr(pgs[level]));
-                    rc = fill_qpt(next, level - 1, pgs);
-                    unmap_vtd_domain_page(next);
-                }
-            }
+            ctx->arch.vtd.didmap[drhd->iommu->index] =
+                convert_domid(drhd->iommu, d->domain_id);
+        }
+    }
+    else
+    {
+        /* Populate context DID map using pseudo DIDs */
+        for_each_drhd_unit(drhd)
+        {
+            ctx->arch.vtd.didmap[drhd->iommu->index] =
+                iommu_alloc_domid(drhd->iommu->pseudo_domid_map);
+        }
+
+        /* Create initial context page */
+        addr_to_dma_page_maddr(d, ctx, 0, min_pt_levels, NULL, true);
+    }
 
-            dma_set_pte_addr(*pte, page_to_maddr(pgs[level]));
-            dma_set_pte_readable(*pte);
-            dma_set_pte_writable(*pte);
+    return arch_iommu_context_init(d, ctx, flags);
+}
+
+static int intel_iommu_cleanup_pte(uint64_t pte_maddr, bool preempt)
+{
+    size_t i;
+    struct dma_pte *pte = map_vtd_domain_page(pte_maddr);
+
+    for ( i = 0; i < (1 << PAGETABLE_ORDER); ++i )
+        if ( dma_pte_present(pte[i]) )
+        {
+            /* Remove the reference of the target mapping */
+            put_page(maddr_to_page(dma_pte_addr(pte[i])));
+
+            if ( preempt )
+                dma_clear_pte(pte[i]);
         }
-        else if ( level && !dma_pte_superpage(*pte) )
+
+    unmap_vtd_domain_page(pte);
+
+    return 0;
+}
+
+/*
+ * Cleanup logic:
+ * Walk through the entire page table, progressively removing mappings if
+ * preempt is set.
+ *
+ * Return values:
+ *  - -ERESTART to report preemption.
+ *  - 0 to report an empty pte/pgd.
+ *
+ * When preempted during a superpage operation, progress is stored in
+ * vtd.superpage_progress.
+ */
+
+static int intel_iommu_cleanup_superpage(struct iommu_context *ctx,
+                                          unsigned int page_order, uint64_t pte_maddr,
+                                          bool preempt)
+{
+    size_t i = 0, page_count = (size_t)1 << page_order;
+    struct page_info *page = maddr_to_page(pte_maddr);
+
+    if ( preempt )
+        i = ctx->arch.vtd.superpage_progress;
+
+    for ( ; i < page_count; i++, page++ )
+    {
+        put_page(page);
+
+        if ( preempt && !(i & 0xff) && general_preempt_check() )
         {
-            next = map_vtd_domain_page(dma_pte_addr(*pte));
-            rc = fill_qpt(next, level - 1, pgs);
-            unmap_vtd_domain_page(next);
+            ctx->arch.vtd.superpage_progress = i + 1;
+            return -ERESTART;
         }
     }
 
-    return rc;
+    if ( preempt )
+        ctx->arch.vtd.superpage_progress = 0;
+
+    return 0;
 }
 
-static int cf_check intel_iommu_quarantine_init(struct pci_dev *pdev,
-                                                bool scratch_page)
+static int intel_iommu_cleanup_mappings(struct iommu_context *ctx,
+                                         unsigned int nr_pt_levels, uint64_t pgd_maddr,
+                                         bool preempt)
 {
-    struct domain_iommu *hd = dom_iommu(dom_io);
-    struct page_info *pg;
-    unsigned int agaw = hd->arch.vtd.agaw;
-    unsigned int level = agaw_to_level(agaw);
-    const struct acpi_drhd_unit *drhd;
-    const struct acpi_rmrr_unit *rmrr;
-    unsigned int i, bdf;
-    bool rmrr_found = false;
+    size_t i;
     int rc;
+    struct dma_pte *pgd = map_vtd_domain_page(pgd_maddr);
 
-    ASSERT(pcidevs_locked());
-    ASSERT(!hd->arch.vtd.pgd_maddr);
-    ASSERT(page_list_empty(&hd->arch.pgtables.list));
+    for ( i = 0; i < (1 << PAGETABLE_ORDER); ++i )
+    {
+        if ( dma_pte_present(pgd[i]) )
+        {
+            uint64_t pte_maddr = dma_pte_addr(pgd[i]);
+
+            if ( dma_pte_superpage(pgd[i]) )
+                rc = intel_iommu_cleanup_superpage(ctx, (nr_pt_levels - 1) * SUPERPAGE_ORDER,
+                                                   pte_maddr, preempt);
+            else if ( nr_pt_levels > 2 )
+                /* Next level is not PTE */
+                rc = intel_iommu_cleanup_mappings(ctx, nr_pt_levels - 1,
+                                                  pte_maddr, preempt);
+            else
+                rc = intel_iommu_cleanup_pte(pte_maddr, preempt);
+
+            if ( preempt && !rc )
+                /* Fold pgd (no more mappings in it) */
+                dma_clear_pte(pgd[i]);
+            else if ( preempt && (rc == -ERESTART || general_preempt_check()) )
+            {
+                unmap_vtd_domain_page(pgd);
+                return -ERESTART;
+            }
+        }
+    }
 
-    if ( pdev->arch.vtd.pgd_maddr )
+    unmap_vtd_domain_page(pgd);
+
+    return 0;
+}
+
+static int intel_iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    struct acpi_drhd_unit *drhd;
+
+    pcidevs_lock();
+
+    /* Cleanup mappings */
+    if ( intel_iommu_cleanup_mappings(ctx, agaw_to_level(d->iommu.arch.vtd.agaw),
+                                      ctx->arch.vtd.pgd_maddr,
+                                      flags & IOMMUF_preempt) < 0 )
     {
-        clear_domain_page(pdev->arch.leaf_mfn);
-        return 0;
+        pcidevs_unlock();
+        return -ERESTART;
     }
 
-    drhd = acpi_find_matched_drhd_unit(pdev);
-    if ( !drhd )
-        return -ENODEV;
+    if ( ctx->arch.vtd.didmap )
+    {
+        for_each_drhd_unit(drhd)
+        {
+            iommu_free_domid(ctx->arch.vtd.didmap[drhd->iommu->index],
+                drhd->iommu->pseudo_domid_map);
+        }
 
-    pg = iommu_alloc_pgtable(hd, 0);
-    if ( !pg )
-        return -ENOMEM;
+        xfree(ctx->arch.vtd.didmap);
+    }
 
-    rc = context_set_domain_id(NULL, pdev->arch.pseudo_domid, drhd->iommu);
+    pcidevs_unlock();
+    return arch_iommu_context_teardown(d, ctx, flags);
+}
 
-    /* Transiently install the root into DomIO, for iommu_identity_mapping(). */
-    hd->arch.vtd.pgd_maddr = page_to_maddr(pg);
+static int intel_iommu_map_identity(struct domain *d, struct pci_dev *pdev,
+                                    struct iommu_context *ctx, struct acpi_rmrr_unit *rmrr)
+{
+    /* TODO: This code doesn't clean up on failure */
 
-    for_each_rmrr_device ( rmrr, bdf, i )
+    int ret = 0, rc = 0;
+    unsigned int flush_flags = 0, flags;
+    u64 base_pfn = rmrr->base_address >> PAGE_SHIFT_4K;
+    u64 end_pfn = PAGE_ALIGN_4K(rmrr->end_address) >> PAGE_SHIFT_4K;
+    u64 pfn = base_pfn;
+
+    printk(XENLOG_INFO VTDPREFIX
+            " Mapping d%dc%d device %pp identity mapping [%08" PRIx64 ":%08" PRIx64 "]\n",
+            d->domain_id, ctx->id, &pdev->sbdf, rmrr->base_address, rmrr->end_address);
+
+    ASSERT(end_pfn >= base_pfn);
+
+    while ( pfn < end_pfn )
     {
-        if ( rc )
-            break;
+        mfn_t mfn = INVALID_MFN;
+        ret = intel_iommu_lookup_page(d, _dfn(pfn), &mfn, &flags, ctx);
 
-        if ( rmrr->segment == pdev->seg && bdf == pdev->sbdf.bdf )
+        if ( ret == -ENOENT )
         {
-            rmrr_found = true;
+            ret = intel_iommu_map_page(d, _dfn(pfn), _mfn(pfn),
+                                      IOMMUF_readable | IOMMUF_writable,
+                                      &flush_flags, ctx);
 
-            rc = iommu_identity_mapping(dom_io, p2m_access_rw,
-                                        rmrr->base_address, rmrr->end_address,
-                                        0);
-            if ( rc )
+            if ( ret < 0 )
+            {
                 printk(XENLOG_ERR VTDPREFIX
-                       "%pp: RMRR quarantine mapping failed\n",
-                       &pdev->sbdf);
+                        " Unable to map RMRR page %"PRI_pfn" (%d)\n",
+                        pfn, ret);
+                break;
+            }
+        }
+        else if ( ret )
+        {
+            printk(XENLOG_ERR VTDPREFIX
+                    " Unable to lookup page %"PRI_pfn" (%d)\n",
+                    pfn, ret);
+            break;
         }
+        else if ( mfn_x(mfn) != pfn )
+        {
+            /* The dfn is already mapped to something else, can't continue. */
+            printk(XENLOG_ERR VTDPREFIX
+                   " Unable to map RMRR page %"PRI_mfn"!=%"PRI_pfn" (incompatible mapping)\n",
+                   mfn_x(mfn), pfn);
+
+            ret = -EINVAL;
+            break;
+        }
+        else
+        {
+            /*
+             * There is already an identity mapping in this context; we need
+             * to be extra careful when detaching the device so as not to
+             * break another existing RMRR.
+             */
+            printk(XENLOG_WARNING VTDPREFIX
+                   "Duplicated RMRR mapping %"PRI_pfn"\n", pfn);
+
+            ctx->arch.vtd.duplicated_rmrr = true;
+        }
+
+        pfn++;
     }
 
-    iommu_identity_map_teardown(dom_io);
-    hd->arch.vtd.pgd_maddr = 0;
-    pdev->arch.vtd.pgd_maddr = page_to_maddr(pg);
+    rc = iommu_flush_iotlb(d, ctx, _dfn(base_pfn), end_pfn - base_pfn + 1, flush_flags);
 
-    if ( !rc && scratch_page )
+    return ret ?: rc;
+}
+
+static int intel_iommu_map_dev_rmrr(struct domain *d, struct pci_dev *pdev,
+                                    struct iommu_context *ctx)
+{
+    struct acpi_rmrr_unit *rmrr;
+    u16 bdf;
+    int ret, i;
+
+    for_each_rmrr_device(rmrr, bdf, i)
     {
-        struct dma_pte *root;
-        struct page_info *pgs[6] = {};
+        if ( PCI_SBDF(rmrr->segment, bdf).sbdf == pdev->sbdf.sbdf )
+        {
+            ret = intel_iommu_map_identity(d, pdev, ctx, rmrr);
 
-        root = map_vtd_domain_page(pdev->arch.vtd.pgd_maddr);
-        rc = fill_qpt(root, level - 1, pgs);
-        unmap_vtd_domain_page(root);
+            if ( ret < 0 )
+                return ret;
+        }
+    }
 
-        pdev->arch.leaf_mfn = page_to_mfn(pgs[0]);
+    return 0;
+}
+
+static int intel_iommu_unmap_identity(struct domain *d, struct pci_dev *pdev,
+                                      struct iommu_context *ctx, struct acpi_rmrr_unit *rmrr)
+{
+    /* TODO: This code doesn't clean up on failure */
+
+    int ret = 0, rc = 0;
+    unsigned int flush_flags = 0;
+    u64 base_pfn = rmrr->base_address >> PAGE_SHIFT_4K;
+    u64 end_pfn = PAGE_ALIGN_4K(rmrr->end_address) >> PAGE_SHIFT_4K;
+    u64 pfn = base_pfn;
+
+    printk(XENLOG_INFO VTDPREFIX
+            " Unmapping d%dc%d device %pp identity mapping [%08" PRIx64 ":%08" PRIx64 "]\n",
+            d->domain_id, ctx->id, &pdev->sbdf, rmrr->base_address, rmrr->end_address);
+
+    ASSERT(end_pfn >= base_pfn);
+
+    while ( pfn < end_pfn )
+    {
+        ret = intel_iommu_unmap_page(d, _dfn(pfn), PAGE_ORDER_4K, &flush_flags, ctx);
+
+        if ( ret )
+            break;
+
+        pfn++;
     }
 
-    page_list_move(&pdev->arch.pgtables_list, &hd->arch.pgtables.list);
+    rc = iommu_flush_iotlb(d, ctx, _dfn(base_pfn), end_pfn - base_pfn + 1, flush_flags);
 
-    if ( rc || (!scratch_page && !rmrr_found) )
-        quarantine_teardown(pdev, drhd);
+    return ret ?: rc;
+}
 
-    return rc;
+/* Check if an overlapping RMRR exists for another device of the context */
+static bool intel_iommu_check_duplicate(struct domain *d, struct pci_dev *pdev,
+                                        struct iommu_context *ctx,
+                                        struct acpi_rmrr_unit *rmrr)
+{
+    struct acpi_rmrr_unit *other_rmrr;
+    u16 bdf;
+    int i;
+
+    for_each_rmrr_device(other_rmrr, bdf, i)
+    {
+        if ( rmrr == other_rmrr )
+            continue;
+
+        /* Skip RMRR entries of the same device */
+        if ( PCI_SBDF(rmrr->segment, bdf).sbdf == pdev->sbdf.sbdf )
+            continue;
+
+        /* Check whether either range fully contains the other */
+        if ( rmrr->base_address >= other_rmrr->base_address
+            && rmrr->end_address <= other_rmrr->end_address )
+            return true;
+
+        if ( other_rmrr->base_address >= rmrr->base_address
+            && other_rmrr->end_address <= rmrr->end_address )
+            return true;
+    }
+
+    return false;
+}
+
+static int intel_iommu_unmap_dev_rmrr(struct domain *d, struct pci_dev *pdev,
+                                      struct iommu_context *ctx)
+{
+    struct acpi_rmrr_unit *rmrr;
+    u16 bdf;
+    int ret, i;
+
+    for_each_rmrr_device(rmrr, bdf, i)
+    {
+        if ( PCI_SBDF(rmrr->segment, bdf).sbdf == pdev->sbdf.sbdf )
+        {
+            if ( ctx->arch.vtd.duplicated_rmrr
+                && intel_iommu_check_duplicate(d, pdev, ctx, rmrr) )
+                continue;
+
+            ret = intel_iommu_unmap_identity(d, pdev, ctx, rmrr);
+
+            if ( ret < 0 )
+                return ret;
+        }
+    }
+
+    return 0;
+}
+
+static int intel_iommu_attach(struct domain *d, struct pci_dev *pdev,
+                              struct iommu_context *ctx)
+{
+    int ret;
+    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
+
+    if ( !pdev || !drhd )
+        return -EINVAL;
+
+    if ( ctx->id )
+    {
+        ret = intel_iommu_map_dev_rmrr(d, pdev, ctx);
+
+        if ( ret )
+            return ret;
+    }
+
+    ret = apply_context(d, ctx, pdev, pdev->devfn);
+
+    if ( ret )
+        return ret;
+
+    pci_vtd_quirk(pdev);
+
+    return ret;
+}
+
+static int intel_iommu_detach(struct domain *d, struct pci_dev *pdev,
+                               struct iommu_context *prev_ctx)
+{
+    int ret;
+    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
+
+    if ( !pdev || !drhd )
+        return -EINVAL;
+
+    ret = unapply_context_single(d, prev_ctx, drhd->iommu, pdev->bus, pdev->devfn);
+
+    if ( ret )
+        return ret;
+
+    if ( prev_ctx->id )
+        WARN_ON(intel_iommu_unmap_dev_rmrr(d, pdev, prev_ctx));
+
+    check_cleanup_domid_map(d, prev_ctx, NULL, drhd->iommu);
+
+    return ret;
+}
+
+static int intel_iommu_reattach(struct domain *d, struct pci_dev *pdev,
+                                struct iommu_context *prev_ctx,
+                                struct iommu_context *ctx)
+{
+    int ret;
+    const struct acpi_drhd_unit *drhd = acpi_find_matched_drhd_unit(pdev);
+
+    if ( !pdev || !drhd )
+        return -EINVAL;
+
+    if ( ctx->id )
+    {
+        ret = intel_iommu_map_dev_rmrr(d, pdev, ctx);
+
+        if ( ret )
+            return ret;
+    }
+
+    ret = apply_context_single(d, ctx, drhd->iommu, pdev->bus, pdev->devfn);
+
+    if ( ret )
+        return ret;
+
+    if ( prev_ctx->id )
+        WARN_ON(intel_iommu_unmap_dev_rmrr(d, pdev, prev_ctx));
+
+    /* We are overwriting an entry, cleanup previous domid if needed. */
+    check_cleanup_domid_map(d, prev_ctx, pdev, drhd->iommu);
+
+    pci_vtd_quirk(pdev);
+
+    return ret;
 }
 
 static const struct iommu_ops __initconst_cf_clobber vtd_ops = {
     .page_sizes = PAGE_SIZE_4K,
     .init = intel_iommu_domain_init,
     .hwdom_init = intel_iommu_hwdom_init,
-    .quarantine_init = intel_iommu_quarantine_init,
-    .add_device = intel_iommu_add_device,
+    .context_init = intel_iommu_context_init,
+    .context_teardown = intel_iommu_context_teardown,
+    .attach = intel_iommu_attach,
+    .detach = intel_iommu_detach,
+    .reattach = intel_iommu_reattach,
     .enable_device = intel_iommu_enable_device,
-    .remove_device = intel_iommu_remove_device,
-    .assign_device  = intel_iommu_assign_device,
     .teardown = iommu_domain_teardown,
     .clear_root_pgtable = iommu_clear_root_pgtable,
     .map_page = intel_iommu_map_page,
     .unmap_page = intel_iommu_unmap_page,
     .lookup_page = intel_iommu_lookup_page,
-    .reassign_device = reassign_device_ownership,
     .get_device_group_id = intel_iommu_group_id,
     .enable_x2apic = intel_iommu_enable_eim,
     .disable_x2apic = intel_iommu_disable_eim,
diff --git a/xen/drivers/passthrough/vtd/quirks.c b/xen/drivers/passthrough/vtd/quirks.c
index 950dcd56ef..719985f885 100644
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
@@ -408,9 +408,8 @@ void __init platform_quirks_init(void)
 
 static int __must_check map_me_phantom_function(struct domain *domain,
                                                 unsigned int dev,
-                                                domid_t domid,
-                                                paddr_t pgd_maddr,
-                                                unsigned int mode)
+                                                unsigned int mode,
+                                                struct iommu_context *ctx)
 {
     struct acpi_drhd_unit *drhd;
     struct pci_dev *pdev;
@@ -422,18 +421,18 @@ static int __must_check map_me_phantom_function(struct domain *domain,
 
     /* map or unmap ME phantom function */
     if ( !(mode & UNMAP_ME_PHANTOM_FUNC) )
-        rc = domain_context_mapping_one(domain, drhd->iommu, 0,
-                                        PCI_DEVFN(dev, 7), NULL,
-                                        domid, pgd_maddr, mode);
+        rc = apply_context_single(domain, ctx, drhd->iommu, 0,
+                                  PCI_DEVFN(dev, 7));
     else
-        rc = domain_context_unmap_one(domain, drhd->iommu, 0,
-                                      PCI_DEVFN(dev, 7));
+        rc = unapply_context_single(domain, ctx, drhd->iommu, 0,
+                                    PCI_DEVFN(dev, 7));
 
     return rc;
 }
 
 int me_wifi_quirk(struct domain *domain, uint8_t bus, uint8_t devfn,
-                  domid_t domid, paddr_t pgd_maddr, unsigned int mode)
+                  domid_t domid, unsigned int mode,
+                  struct iommu_context *ctx)
 {
     u32 id;
     int rc = 0;
@@ -457,7 +456,7 @@ int me_wifi_quirk(struct domain *domain, uint8_t bus, uint8_t devfn,
             case 0x423b8086:
             case 0x423c8086:
             case 0x423d8086:
-                rc = map_me_phantom_function(domain, 3, domid, pgd_maddr, mode);
+                rc = map_me_phantom_function(domain, 3, mode, ctx);
                 break;
             default:
                 break;
@@ -483,7 +482,7 @@ int me_wifi_quirk(struct domain *domain, uint8_t bus, uint8_t devfn,
             case 0x42388086:        /* Puma Peak */
             case 0x422b8086:
             case 0x422c8086:
-                rc = map_me_phantom_function(domain, 22, domid, pgd_maddr, mode);
+                rc = map_me_phantom_function(domain, 22, mode, ctx);
                 break;
             default:
                 break;
diff --git a/xen/drivers/passthrough/x86/Makefile b/xen/drivers/passthrough/x86/Makefile
index 75b2885336..1614f3d284 100644
--- a/xen/drivers/passthrough/x86/Makefile
+++ b/xen/drivers/passthrough/x86/Makefile
@@ -1,2 +1,3 @@
 obj-y += iommu.o
+obj-y += arena.o
 obj-$(CONFIG_HVM) += hvm.o
diff --git a/xen/drivers/passthrough/x86/arena.c b/xen/drivers/passthrough/x86/arena.c
new file mode 100644
index 0000000000..984bc4d643
--- /dev/null
+++ b/xen/drivers/passthrough/x86/arena.c
@@ -0,0 +1,157 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/**
+ * Simple arena-based page allocator.
+ *
+ * Allocate a large block using alloc_domheap_pages and allocate single pages
+ * using iommu_arena_allocate_page and iommu_arena_free_page functions.
+ *
+ * Concurrent {allocate/free}_page calls are thread-safe;
+ * iommu_arena_teardown during {allocate/free}_page is not.
+ *
+ * Written by Teddy Astie <teddy.astie@vates.tech>
+ */
+
+#include <asm/bitops.h>
+#include <asm/page.h>
+#include <xen/atomic.h>
+#include <xen/bug.h>
+#include <xen/config.h>
+#include <xen/mm-frame.h>
+#include <xen/mm.h>
+#include <xen/xmalloc.h>
+
+#include <asm/arena.h>
+
+/* Maximum number of scan retries if the found bit is not available */
+#define ARENA_TSL_MAX_TRIES 5
+
+int iommu_arena_initialize(struct iommu_arena *arena, struct domain *d,
+                           unsigned int order, unsigned int memflags)
+{
+    struct page_info *page;
+
+    /* TODO: Maybe allocate differently? */
+    page = alloc_domheap_pages(d, order, memflags);
+
+    if ( !page )
+        return -ENOMEM;
+
+    arena->map = xzalloc_array(unsigned long, BITS_TO_LONGS(1LLU << order));
+    arena->order = order;
+    arena->region_start = page_to_mfn(page);
+
+    _atomic_set(&arena->used_pages, 0);
+    bitmap_zero(arena->map, iommu_arena_size(arena));
+
+    printk(XENLOG_DEBUG "IOMMU: Allocated arena (%llu pages, start=%"PRI_mfn")\n",
+           iommu_arena_size(arena), mfn_x(arena->region_start));
+    return 0;
+}
+
+int iommu_arena_teardown(struct iommu_arena *arena, bool check)
+{
+    BUG_ON(mfn_x(arena->region_start) == 0);
+
+    /* Check for allocations if check is specified */
+    if ( check && (atomic_read(&arena->used_pages) > 0) )
+        return -EBUSY;
+
+    free_domheap_pages(mfn_to_page(arena->region_start), arena->order);
+
+    arena->region_start = _mfn(0);
+    _atomic_set(&arena->used_pages, 0);
+    xfree(arena->map);
+    arena->map = NULL;
+
+    return 0;
+}
+
+struct page_info *iommu_arena_allocate_page(struct iommu_arena *arena)
+{
+    unsigned int index;
+    unsigned int tsl_tries = 0;
+
+    BUG_ON(mfn_x(arena->region_start) == 0);
+
+    if ( atomic_read(&arena->used_pages) == iommu_arena_size(arena) )
+        /* All pages used */
+        return NULL;
+
+    do
+    {
+        index = find_first_zero_bit(arena->map, iommu_arena_size(arena));
+
+        if ( index >= iommu_arena_size(arena) )
+            /* No more free pages */
+            return NULL;
+
+        /*
+         * While there shouldn't be a lot of retries in practice, this loop
+         * *may* run indefinitely if the found bit is never free due to being
+         * overwritten by another CPU core right after. Add a safeguard for
+         * such very rare cases.
+         */
+        tsl_tries++;
+
+        if ( unlikely(tsl_tries == ARENA_TSL_MAX_TRIES) )
+        {
+            printk(XENLOG_ERR "ARENA: Too many TSL retries!\n");
+            return NULL;
+        }
+
+        /* Make sure that the bit we found is still free */
+    } while ( test_and_set_bit(index, arena->map) );
+
+    atomic_inc(&arena->used_pages);
+
+    return mfn_to_page(mfn_add(arena->region_start, index));
+}
+
+bool iommu_arena_free_page(struct iommu_arena *arena, struct page_info *page)
+{
+    unsigned long index;
+    mfn_t frame;
+
+    if ( !page )
+    {
+        printk(XENLOG_WARNING "IOMMU: Trying to free NULL page\n");
+        WARN();
+        return false;
+    }
+
+    frame = page_to_mfn(page);
+
+    /* Check if page belongs to our arena */
+    if ( (mfn_x(frame) < mfn_x(arena->region_start))
+        || (mfn_x(frame) >= (mfn_x(arena->region_start) + iommu_arena_size(arena))) )
+    {
+        printk(XENLOG_WARNING
+               "IOMMU: Trying to free outside arena region [mfn=%"PRI_mfn"]\n",
+               mfn_x(frame));
+        WARN();
+        return false;
+    }
+
+    index = mfn_x(frame) - mfn_x(arena->region_start);
+
+    /* Sanity check in case of underflow. */
+    ASSERT(index < iommu_arena_size(arena));
+
+    if ( !test_and_clear_bit(index, arena->map) )
+    {
+        /*
+         * Bit was free during our arena_free_page, which means that
+         * either this page was never allocated, or we are in a double-free
+         * situation.
+         */
+        printk(XENLOG_WARNING
+               "IOMMU: Freeing non-allocated region (double-free?) [mfn=%"PRI_mfn"]\n",
+               mfn_x(frame));
+        WARN();
+        return false;
+    }
+
+    atomic_dec(&arena->used_pages);
+
+    return true;
+}
\ No newline at end of file
diff --git a/xen/drivers/passthrough/x86/iommu.c b/xen/drivers/passthrough/x86/iommu.c
index cc0062b027..3c261539ae 100644
--- a/xen/drivers/passthrough/x86/iommu.c
+++ b/xen/drivers/passthrough/x86/iommu.c
@@ -12,6 +12,13 @@
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
+#include <xen/keyhandler.h>
+#include <xen/lib.h>
+#include <xen/pci.h>
+#include <xen/spinlock.h>
+#include <xen/bitmap.h>
+#include <xen/list.h>
+#include <xen/mm.h>
 #include <xen/cpu.h>
 #include <xen/sched.h>
 #include <xen/iocap.h>
@@ -28,6 +35,9 @@
 #include <asm/mem_paging.h>
 #include <asm/pt-contig-markers.h>
 #include <asm/setup.h>
+#include <asm/iommu.h>
+#include <asm/arena.h>
+#include <asm/page.h>
 
 const struct iommu_init_ops *__initdata iommu_init_ops;
 struct iommu_ops __ro_after_init iommu_ops;
@@ -183,15 +193,42 @@ void __hwdom_init arch_iommu_check_autotranslated_hwdom(struct domain *d)
         panic("PVH hardware domain iommu must be set in 'strict' mode\n");
 }
 
-int arch_iommu_domain_init(struct domain *d)
+int arch_iommu_context_init(struct domain *d, struct iommu_context *ctx, u32 flags)
+{
+    INIT_PAGE_LIST_HEAD(&ctx->arch.pgtables.list);
+    spin_lock_init(&ctx->arch.pgtables.lock);
+
+    INIT_PAGE_LIST_HEAD(&ctx->arch.free_queue);
+
+    return 0;
+}
+
+int arch_iommu_context_teardown(struct domain *d, struct iommu_context *ctx, u32 flags)
 {
+    /* Cleanup all page tables */
+    while ( iommu_free_pgtables(d, ctx) == -ERESTART )
+        /* nothing */;
+
+    return 0;
+}
+
+int arch_iommu_flush_free_queue(struct domain *d, struct iommu_context *ctx)
+{
+    struct page_info *pg;
     struct domain_iommu *hd = dom_iommu(d);
 
-    spin_lock_init(&hd->arch.mapping_lock);
+    while ( (pg = page_list_remove_head(&ctx->arch.free_queue)) )
+        iommu_arena_free_page(&hd->arch.pt_arena, pg);
+
+    return 0;
+}
+
+int arch_iommu_domain_init(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
 
-    INIT_PAGE_LIST_HEAD(&hd->arch.pgtables.list);
-    spin_lock_init(&hd->arch.pgtables.lock);
     INIT_LIST_HEAD(&hd->arch.identity_maps);
+    iommu_arena_initialize(&hd->arch.pt_arena, NULL, iommu_hwdom_arena_order, 0);
 
     return 0;
 }
@@ -203,8 +240,9 @@ void arch_iommu_domain_destroy(struct domain *d)
      * domain is destroyed. Note that arch_iommu_domain_destroy() is
      * called unconditionally, so pgtables may be uninitialized.
      */
-    ASSERT(!dom_iommu(d)->platform_ops ||
-           page_list_empty(&dom_iommu(d)->arch.pgtables.list));
+    struct domain_iommu *hd = dom_iommu(d);
+
+    ASSERT(!hd->platform_ops);
 }
 
 struct identity_map {
@@ -227,7 +265,7 @@ int iommu_identity_mapping(struct domain *d, p2m_access_t p2ma,
     ASSERT(base < end);
 
     /*
-     * No need to acquire hd->arch.mapping_lock: Both insertion and removal
+     * No need to acquire hd->arch.lock: Both insertion and removal
      * get done while holding pcidevs_lock.
      */
     list_for_each_entry( map, &hd->arch.identity_maps, list )
@@ -356,8 +394,8 @@ static int __hwdom_init cf_check identity_map(unsigned long s, unsigned long e,
              */
             if ( iomem_access_permitted(d, s, s) )
             {
-                rc = iommu_map(d, _dfn(s), _mfn(s), 1, perms,
-                               &info->flush_flags);
+                rc = _iommu_map(d, _dfn(s), _mfn(s), 1, perms,
+                                &info->flush_flags, 0);
                 if ( rc < 0 )
                     break;
                 /* Must map a frame at least, which is what we request for. */
@@ -366,8 +404,8 @@ static int __hwdom_init cf_check identity_map(unsigned long s, unsigned long e,
             }
             s++;
         }
-        while ( (rc = iommu_map(d, _dfn(s), _mfn(s), e - s + 1,
-                                perms, &info->flush_flags)) > 0 )
+        while ( (rc = _iommu_map(d, _dfn(s), _mfn(s), e - s + 1,
+                                 perms, &info->flush_flags, 0)) > 0 )
         {
             s += rc;
             process_pending_softirqs();
@@ -533,7 +571,6 @@ void __hwdom_init arch_iommu_hwdom_init(struct domain *d)
 
 void arch_pci_init_pdev(struct pci_dev *pdev)
 {
-    pdev->arch.pseudo_domid = DOMID_INVALID;
 }
 
 unsigned long *__init iommu_init_domid(domid_t reserve)
@@ -564,8 +601,6 @@ domid_t iommu_alloc_domid(unsigned long *map)
     static unsigned int start;
     unsigned int idx = find_next_zero_bit(map, UINT16_MAX - DOMID_MASK, start);
 
-    ASSERT(pcidevs_locked());
-
     if ( idx >= UINT16_MAX - DOMID_MASK )
         idx = find_first_zero_bit(map, UINT16_MAX - DOMID_MASK);
     if ( idx >= UINT16_MAX - DOMID_MASK )
@@ -591,7 +626,7 @@ void iommu_free_domid(domid_t domid, unsigned long *map)
         BUG();
 }
 
-int iommu_free_pgtables(struct domain *d)
+int iommu_free_pgtables(struct domain *d, struct iommu_context *ctx)
 {
     struct domain_iommu *hd = dom_iommu(d);
     struct page_info *pg;
@@ -601,17 +636,20 @@ int iommu_free_pgtables(struct domain *d)
         return 0;
 
     /* After this barrier, no new IOMMU mappings can be inserted. */
-    spin_barrier(&hd->arch.mapping_lock);
+    spin_barrier(&ctx->arch.pgtables.lock);
 
     /*
      * Pages will be moved to the free list below. So we want to
      * clear the root page-table to avoid any potential use after-free.
      */
-    iommu_vcall(hd->platform_ops, clear_root_pgtable, d);
+    iommu_vcall(hd->platform_ops, clear_root_pgtable, d, ctx);
 
-    while ( (pg = page_list_remove_head(&hd->arch.pgtables.list)) )
+    while ( (pg = page_list_remove_head(&ctx->arch.pgtables.list)) )
     {
-        free_domheap_page(pg);
+        if ( ctx->id == 0 )
+            free_domheap_page(pg);
+        else
+            iommu_arena_free_page(&hd->arch.pt_arena, pg);
 
         if ( !(++done & 0xff) && general_preempt_check() )
             return -ERESTART;
@@ -621,6 +659,7 @@ int iommu_free_pgtables(struct domain *d)
 }
 
 struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
+                                      struct iommu_context *ctx,
                                       uint64_t contig_mask)
 {
     unsigned int memflags = 0;
@@ -632,7 +671,11 @@ struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
         memflags = MEMF_node(hd->node);
 #endif
 
-    pg = alloc_domheap_page(NULL, memflags);
+    if ( ctx->id == 0 )
+        pg = alloc_domheap_page(NULL, memflags);
+    else
+        pg = iommu_arena_allocate_page(&hd->arch.pt_arena);
+
     if ( !pg )
         return NULL;
 
@@ -665,9 +708,7 @@ struct page_info *iommu_alloc_pgtable(struct domain_iommu *hd,
 
     unmap_domain_page(p);
 
-    spin_lock(&hd->arch.pgtables.lock);
-    page_list_add(pg, &hd->arch.pgtables.list);
-    spin_unlock(&hd->arch.pgtables.lock);
+    page_list_add(pg, &ctx->arch.pgtables.list);
 
     return pg;
 }
@@ -706,17 +747,22 @@ static void cf_check free_queued_pgtables(void *arg)
     }
 }
 
-void iommu_queue_free_pgtable(struct domain_iommu *hd, struct page_info *pg)
+void iommu_queue_free_pgtable(struct iommu_context *ctx, struct page_info *pg)
 {
     unsigned int cpu = smp_processor_id();
 
-    spin_lock(&hd->arch.pgtables.lock);
-    page_list_del(pg, &hd->arch.pgtables.list);
-    spin_unlock(&hd->arch.pgtables.lock);
+    spin_lock(&ctx->arch.pgtables.lock);
+    page_list_del(pg, &ctx->arch.pgtables.list);
+    spin_unlock(&ctx->arch.pgtables.lock);
 
-    page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
+    if ( !ctx->id )
+    {
+        page_list_add_tail(pg, &per_cpu(free_pgt_list, cpu));
 
-    tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+        tasklet_schedule(&per_cpu(free_pgt_tasklet, cpu));
+    }
+    else
+        page_list_add_tail(pg, &ctx->arch.free_queue);
 }
 
 static int cf_check cpu_callback(
-- 
2.45.2
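As a rough userspace model of the bitmap arena allocator the patch adds in x86/arena.c: the sketch below mirrors the find_first_zero_bit()/test_and_set_bit() allocation loop and the test_and_clear_bit() double-free check, but in a simplified, single-threaded form. All names (arena_init, arena_alloc, arena_free) are illustrative, not the Xen API, and there is no atomicity here, unlike the real code.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stdlib.h>

#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)

struct arena {
    unsigned long *map; /* one bit per page: 1 = allocated */
    size_t size;        /* total number of pages */
    size_t used;        /* pages currently allocated */
};

static bool arena_init(struct arena *a, size_t pages)
{
    a->map = calloc((pages + BITS_PER_LONG - 1) / BITS_PER_LONG,
                    sizeof(unsigned long));
    a->size = pages;
    a->used = 0;
    return a->map != NULL;
}

/* Single-threaded stand-in for find_first_zero_bit() + test_and_set_bit():
 * returns the allocated page index, or (size_t)-1 when the arena is full. */
static size_t arena_alloc(struct arena *a)
{
    for ( size_t i = 0; i < a->size; i++ )
    {
        unsigned long bit = 1UL << (i % BITS_PER_LONG);

        if ( !(a->map[i / BITS_PER_LONG] & bit) )
        {
            a->map[i / BITS_PER_LONG] |= bit;
            a->used++;
            return i;
        }
    }
    return (size_t)-1;
}

/* Stand-in for test_and_clear_bit(): refuses out-of-range indices and
 * double frees, like iommu_arena_free_page() above. */
static bool arena_free(struct arena *a, size_t i)
{
    unsigned long bit;

    if ( i >= a->size )
        return false; /* outside the arena region */
    bit = 1UL << (i % BITS_PER_LONG);
    if ( !(a->map[i / BITS_PER_LONG] & bit) )
        return false; /* never allocated, or double free */
    a->map[i / BITS_PER_LONG] &= ~bit;
    a->used--;
    return true;
}
```

The real allocator differs in one important way: because another CPU may set the bit between the scan and the test-and-set, the kernel loop retries (bounded by ARENA_TSL_MAX_TRIES) instead of assuming the first zero bit is still free.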



Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:22:53 2024
From: TSnake41 <teddy.astie@vates.tech>
Subject: [RFC XEN PATCH v2 0/5] IOMMU subsystem redesign and PV-IOMMU interface
X-Mailer: git-send-email 2.45.2
To: xen-devel@lists.xenproject.org
Cc: TSnake41 <teddy.astie@vates.tech>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Roger Pau Monné <roger.pau@citrix.com>, Lukasz Hawrylko <lukasz@hawrylko.pl>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, Mateusz Mówka <mateusz.mowka@intel.com>, Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Message-Id: <cover.1719414736.git.teddy.astie@vates.tech>
Date: Wed, 26 Jun 2024 15:22:48 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

This work was presented at Xen Summit 2024 during the
"IOMMU paravirtualization and Xen IOMMU subsystem rework"
design session.

Operating systems may want to have access to an IOMMU in order to implement
DMA protection or certain features (e.g. VFIO on Linux).

VFIO support is required by frameworks such as SPDK, which can be used to
implement an alternative storage backend for virtual machines [1].

In this patch series, we introduce in Xen the ability to manage several
contexts per domain and provide a new hypercall interface to allow guests
to manage IOMMU contexts.

The VT-d driver is updated to support these new features.

[1] Using SPDK with the Xen hypervisor - FOSDEM 2023
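The per-domain context lifecycle the series introduces (a default context, plus guest-managed contexts that devices can be created in, reattached between, and that can only be torn down once no device remains attached) can be sketched as a toy in-memory model. Every name below (ctx_create, dev_reattach, ctx_destroy) is illustrative only; the actual guest-visible interface is the PV-IOMMU hypercall defined by the series.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_CTX 4
#define MAX_DEV 8

struct dom_model {
    bool ctx_alive[MAX_CTX]; /* slot 0 is the always-present default context */
    int dev_ctx[MAX_DEV];    /* context each device is currently attached to */
    int ndev;
};

static void model_init(struct dom_model *d, int ndev)
{
    for ( int i = 0; i < MAX_CTX; i++ )
        d->ctx_alive[i] = (i == 0);
    for ( int i = 0; i < MAX_DEV; i++ )
        d->dev_ctx[i] = 0; /* devices start in the default context */
    d->ndev = ndev;
}

/* Create a new context; returns its id, or -1 if no slot is free. */
static int ctx_create(struct dom_model *d)
{
    for ( int i = 1; i < MAX_CTX; i++ )
        if ( !d->ctx_alive[i] )
        {
            d->ctx_alive[i] = true;
            return i;
        }
    return -1;
}

/* Move a device from its current context to another live one. */
static bool dev_reattach(struct dom_model *d, int dev, int ctx)
{
    if ( dev >= d->ndev || ctx < 0 || ctx >= MAX_CTX || !d->ctx_alive[ctx] )
        return false;
    d->dev_ctx[dev] = ctx;
    return true;
}

/* Destroy a context; refused for the default context and while any
 * device is still attached (it must be reattached elsewhere first). */
static bool ctx_destroy(struct dom_model *d, int ctx)
{
    if ( ctx <= 0 || ctx >= MAX_CTX || !d->ctx_alive[ctx] )
        return false;
    for ( int i = 0; i < d->ndev; i++ )
        if ( d->dev_ctx[i] == ctx )
            return false;
    d->ctx_alive[ctx] = false;
    return true;
}
```

The "no dangling devices" rule in ctx_destroy() matches the v2 changelog note about fixing dangling devices on detach.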

---
Changed in v2:
* fixed Xen crash when dumping IOMMU contexts (using the 'X' debug key)
  with DomUs without IOMMU
* s/dettach/detach/
* removed some unused includes
* fixed dangling devices in contexts on detach

Teddy Astie (5):
  docs/designs: Add a design document for PV-IOMMU
  docs/designs: Add a design document for IOMMU subsystem redesign
  IOMMU: Introduce redesigned IOMMU subsystem
  VT-d: Port IOMMU driver to new subsystem
  xen/public: Introduce PV-IOMMU hypercall interface

 docs/designs/iommu-contexts.md       |  398 +++++++
 docs/designs/pv-iommu.md             |  105 ++
 xen/arch/x86/domain.c                |    2 +-
 xen/arch/x86/include/asm/arena.h     |   54 +
 xen/arch/x86/include/asm/iommu.h     |   44 +-
 xen/arch/x86/include/asm/pci.h       |   17 -
 xen/arch/x86/mm/p2m-ept.c            |    2 +-
 xen/arch/x86/pv/dom0_build.c         |    4 +-
 xen/arch/x86/tboot.c                 |    4 +-
 xen/common/Makefile                  |    1 +
 xen/common/memory.c                  |    4 +-
 xen/common/pv-iommu.c                |  320 ++++++
 xen/drivers/passthrough/Kconfig      |   14 +
 xen/drivers/passthrough/Makefile     |    3 +
 xen/drivers/passthrough/context.c    |  626 +++++++++++
 xen/drivers/passthrough/iommu.c      |  337 ++----
 xen/drivers/passthrough/pci.c        |   49 +-
 xen/drivers/passthrough/quarantine.c |   49 +
 xen/drivers/passthrough/vtd/Makefile |    2 +-
 xen/drivers/passthrough/vtd/extern.h |   14 +-
 xen/drivers/passthrough/vtd/iommu.c  | 1557 +++++++++++---------------
 xen/drivers/passthrough/vtd/quirks.c |   21 +-
 xen/drivers/passthrough/x86/Makefile |    1 +
 xen/drivers/passthrough/x86/arena.c  |  157 +++
 xen/drivers/passthrough/x86/iommu.c  |  104 +-
 xen/include/hypercall-defs.c         |    6 +
 xen/include/public/pv-iommu.h        |  114 ++
 xen/include/public/xen.h             |    1 +
 xen/include/xen/iommu.h              |  118 +-
 xen/include/xen/pci.h                |    3 +
 30 files changed, 2822 insertions(+), 1309 deletions(-)
 create mode 100644 docs/designs/iommu-contexts.md
 create mode 100644 docs/designs/pv-iommu.md
 create mode 100644 xen/arch/x86/include/asm/arena.h
 create mode 100644 xen/common/pv-iommu.c
 create mode 100644 xen/drivers/passthrough/context.c
 create mode 100644 xen/drivers/passthrough/quarantine.c
 create mode 100644 xen/drivers/passthrough/x86/arena.c
 create mode 100644 xen/include/public/pv-iommu.h

-- 
2.45.2





From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:29:12 2024
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?Re:=20[RFC=20XEN=20PATCH=20v2=200/5]=20IOMMU=20subsystem=20redesign=20and=20PV-IOMMU=20interface?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719415415340
Message-Id: <c3a91cde-f325-4459-8629-f3575543e4d8@vates.tech>
To: xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, =?utf-8?Q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>, Lukasz Hawrylko <lukasz@hawrylko.pl>, "Daniel P. Smith" <dpsmith@apertussolutions.com>, =?utf-8?Q?Mateusz=20M=C3=B3wka?= <mateusz.mowka@intel.com>, =?utf-8?Q?Marek=20Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
References: <cover.1719414736.git.teddy.astie@vates.tech>
In-Reply-To: <cover.1719414736.git.teddy.astie@vates.tech>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.5533be79f55c46a48dcd81b32f46d421?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240626:md
Date: Wed, 26 Jun 2024 15:23:37 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

Forgot to add that this patch series is rebased on top of staging.

Teddy


Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 15:38:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 15:38:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749244.1157276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUjP-0006bb-H5; Wed, 26 Jun 2024 15:38:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749244.1157276; Wed, 26 Jun 2024 15:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMUjP-0006bU-DD; Wed, 26 Jun 2024 15:38:47 +0000
Received: by outflank-mailman (input) for mailman id 749244;
 Wed, 26 Jun 2024 15:38:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=e1Ku=N4=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMUjO-0006Z3-9w
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 15:38:46 +0000
Received: from mail-qk1-x735.google.com (mail-qk1-x735.google.com
 [2607:f8b0:4864:20::735])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b7e1f3f-33d2-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 17:38:44 +0200 (CEST)
Received: by mail-qk1-x735.google.com with SMTP id
 af79cd13be357-79c084476bdso188386285a.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 08:38:44 -0700 (PDT)
Received: from georged-x-u.xenrt.citrite.net ([185.25.67.249])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b51ef30c1esm55692166d6.79.2024.06.26.08.38.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 08:38:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b7e1f3f-33d2-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719416323; x=1720021123; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=KG63BasigL6EjyAVVQsMRh0WTj185+GcKDw46ekPGTI=;
        b=UtmHvmJ563bizgLXpiYe76cEAt0Q/JcKZr2eo9eqrzeuEPz3nza7Mv93KCbD6UrYGe
         TkmSHCcPir+rFgMwy7U/DOq7L6w4O9vDn0j9k1fFlIbsY4SS5OpGEzTMnn3igNAdHQ9/
         R9hw6B7X/cn1DxeRXkrOLuUd3Wef0+sIgsbLA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719416323; x=1720021123;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=KG63BasigL6EjyAVVQsMRh0WTj185+GcKDw46ekPGTI=;
        b=DsFywpnMHQxb48tT6/vSdwExzJI8JbbXYb/M5gowdTiveSv3LU3M5WYbXdQQTYNLfB
         onIWLA5y9g3hxgH3jHQoSXN4XtJ1LsvAeWDQ+D4ZjTYQX51DFHhOLK3DYMTJn3ANKaSU
         0+fihe93qolsYgUZyVBlPueul/+W0ZEdDQxSJFoYqV1v/J8IkE7x/1q/yFjeaJg3FUmP
         YCPwjNjKy/lwbLVTZL0FeqCz6N1Z5bjV55s8BNpVckgepAy+/6BK9gZZxAhEI1ixB2iz
         uI7Cf4edudyED0PhCOEVWA6zhlVQ48f57e22wwfWOCijfTmvucgvT6F0ccnYONSPgGcm
         IFAA==
X-Gm-Message-State: AOJu0YwAJAm9DmahJ1pFCvtjy9RUTXHbz6rH5qSQfT0MLY69z8NmHdHy
	Xx/j4YhqcBxnk/Ng3d+nSPbTdkVXl3fOLJBX1yY+cuXRqiok9yFwvWRdar7Xn/3/Nr9VPtzwMgj
	nm2c=
X-Google-Smtp-Source: AGHT+IEvEO2vzEzCn0fizkYLMdQH8kX6ddBLYClg1yQzeHBXTdml0NZkSOUyGLpMRPNAKTuZzPLB3A==
X-Received: by 2002:a0c:f051:0:b0:6b4:ffca:ca96 with SMTP id 6a1803df08f44-6b5409e0b5dmr112332276d6.30.1719416323083;
        Wed, 26 Jun 2024 08:38:43 -0700 (PDT)
From: George Dunlap <george.dunlap@cloud.com>
To: xen-devel@lists.xenproject.org
Cc: George Dunlap <george.dunlap@cloud.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Dario Faggioli <dfaggioli@suse.com>,
	Juergen Gross <jgross@suse.com>,
	Nick Rosbrook <rosbrookn@gmail.com>
Subject: [PATCH] MAINTAINERS: Step down as maintainer and committer
Date: Wed, 26 Jun 2024 16:19:35 +0100
Message-Id: <20240626151935.26704-1-george.dunlap@cloud.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remain a Reviewer on the golang bindings and scheduler for now (using
a xenproject.org alias), since there may be architectural decisions I
can shed light on.

Remove the XENTRACE section entirely, as there's no obvious candidate
to take it over; having the respective parts fall back to the tools
and The Rest seems the most reasonable option.

Signed-off-by: George Dunlap <george.dunlap@cloud.com>
---
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Dario Faggioli <dfaggioli@suse.com>
CC: Juergen Gross <jgross@suse.com>
CC: Nick Rosbrook <rosbrookn@gmail.com>
---
 MAINTAINERS | 13 ++-----------
 1 file changed, 2 insertions(+), 11 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 9d66b898ec..2b0c894527 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -325,8 +325,8 @@ F:	xen/arch/x86/debug.c
 F:	tools/debugger/gdbsx/
 
 GOLANG BINDINGS
-M:	George Dunlap <george.dunlap@citrix.com>
 M:	Nick Rosbrook <rosbrookn@gmail.com>
+R:	George Dunlap <gwd@xenproject.org>
 S:	Maintained
 F:	tools/golang
 
@@ -490,9 +490,9 @@ S:	Supported
 F:	xen/common/sched/rt.c
 
 SCHEDULING
-M:	George Dunlap <george.dunlap@citrix.com>
 M:	Dario Faggioli <dfaggioli@suse.com>
 M:	Juergen Gross <jgross@suse.com>
+R:	George Dunlap <gwd@xenproject.org>
 S:	Supported
 F:	xen/common/sched/
 
@@ -597,7 +597,6 @@ F:	tools/tests/x86_emulator/
 X86 MEMORY MANAGEMENT
 M:	Jan Beulich <jbeulich@suse.com>
 M:	Andrew Cooper <andrew.cooper3@citrix.com>
-R:	George Dunlap <george.dunlap@citrix.com>
 S:	Supported
 F:	xen/arch/x86/mm/
 
@@ -641,13 +640,6 @@ F:	tools/libs/store/
 F:	tools/xenstored/
 F:	tools/xs-clients/
 
-XENTRACE
-M:	George Dunlap <george.dunlap@citrix.com>
-S:	Supported
-F:	tools/xentrace/
-F:	xen/common/trace.c
-F:	xen/include/xen/trace.h
-
 XEN MISRA ANALYSIS TOOLS
 M:	Luca Fancellu <luca.fancellu@arm.com>
 S:	Supported
@@ -670,7 +662,6 @@ K:	\b(xsm|XSM)\b
 
 THE REST
 M:	Andrew Cooper <andrew.cooper3@citrix.com>
-M:	George Dunlap <george.dunlap@citrix.com>
 M:	Jan Beulich <jbeulich@suse.com>
 M:	Julien Grall <julien@xen.org>
 M:	Stefano Stabellini <sstabellini@kernel.org>
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:13:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:13:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749255.1157285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVH0-00026s-2i; Wed, 26 Jun 2024 16:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749255.1157285; Wed, 26 Jun 2024 16:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVGz-00026l-WE; Wed, 26 Jun 2024 16:13:30 +0000
Received: by outflank-mailman (input) for mailman id 749255;
 Wed, 26 Jun 2024 16:13:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMVGy-00026a-No; Wed, 26 Jun 2024 16:13:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMVGy-0002wJ-MH; Wed, 26 Jun 2024 16:13:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMVGy-0008R5-BZ; Wed, 26 Jun 2024 16:13:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMVGy-0000Tc-B1; Wed, 26 Jun 2024 16:13:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=uew+ssNONk+/USpJ4kuQZT3EnQ1dlZ2HN2LZPerGeWg=; b=J0eT+3Nn48rfidQrpMqm3rXHF+
	U6MMglew44u6d1iBTbxPP/S5XRomiYq/MMRWTHRLh+9/dImljzRAzPhiUc4uzHPOngIQwM3N/M7Rg
	HN4hzVWSUb+C7CqJ8CsbWgiJn34ggvTaXE4Dm44o3X3EUqZv2YJtvPsROX+UApuho6v8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186516-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186516: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=89377ece8f1c7243d25fd84488dcd03e37b9e661
X-Osstest-Versions-That:
    ovmf=dc002d4f2d76bdd826359a3dd608d9bc621fcb47
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 16:13:28 +0000

flight 186516 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186516/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 89377ece8f1c7243d25fd84488dcd03e37b9e661
baseline version:
 ovmf                 dc002d4f2d76bdd826359a3dd608d9bc621fcb47

Last test of basis   186512  2024-06-26 09:14:50 Z    0 days
Testing same since   186516  2024-06-26 12:41:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nhi Pham <nhi@os.amperecomputing.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   dc002d4f2d..89377ece8f  89377ece8f1c7243d25fd84488dcd03e37b9e661 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749268.1157294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVk-0005ph-F8; Wed, 26 Jun 2024 16:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749268.1157294; Wed, 26 Jun 2024 16:28:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVk-0005pa-Bq; Wed, 26 Jun 2024 16:28:44 +0000
Received: by outflank-mailman (input) for mailman id 749268;
 Wed, 26 Jun 2024 16:28:42 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVi-0005pK-Am
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:42 +0000
Received: from mail-ed1-x52a.google.com (mail-ed1-x52a.google.com
 [2a00:1450:4864:20::52a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 254ab659-33d9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 18:28:40 +0200 (CEST)
Received: by mail-ed1-x52a.google.com with SMTP id
 4fb4d7f45d1cf-57d1782679fso678546a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:40 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 254ab659-33d9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419319; x=1720024119; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=1sPDoIaCTvRdlQs7YmztUDubOqOgrOE/wxzEsjyDTHY=;
        b=O13EYOjas9DsUkn7LnCD1plMulHOP1Y6UceohJzPBJXTvAM1DCBppCNfFogJw5vGWy
         WzGb9PgVl6KYGmvMSfrVZ0yw9a/nxDUlG9wxFczTrb9U9lpHQgC/GrmHXa+lqAFrDZhJ
         ppRPoWOrQ37SLnbgQ4dIi4Ugm5SftgrRTKhNE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419319; x=1720024119;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=1sPDoIaCTvRdlQs7YmztUDubOqOgrOE/wxzEsjyDTHY=;
        b=GkIK/trulqTudyyQA8LgkwMicX6psdr23MNDMmJmRp99miRiqG+mYZZzLPKdigG2Ui
         NSztQP9NHb+WB8Zdi1LPk77bzjdeH+p5aG0wqsl6XjrFg7z5m2ELKr7sKwi1mQZQvGjP
         RCR/IjOLMr9wB9qwP3S1g6TbfzKkz8KsxqJXfnITC0WI9UwTtvRi4i/23pP+keMMO6sT
         s5IUrWJOFPwDPUxGA3biIZ660ehiCyQ+AV1Ci85Fx91NDfmLTLOR5stwDYXKh/0FOOU+
         lt3uDTp5segLTHNRnEeXJ8hfpe9mCev6Q1DivR/f/sYpR439Mut9ZUxNdtmMncE4gxIn
         wMTA==
X-Gm-Message-State: AOJu0YxVuOJIpSDwuXl2cJHIBlHTu95d5gM6oyo63tDhmUSFLlkrLx77
	/3FXi2oeImiESZtZXKFY7puKZiy7y3C5MIgtEd/PhcO+hsUesqyTvEsJcHyCkkn2Xu/++HOzkrL
	SZeY=
X-Google-Smtp-Source: AGHT+IHyMx+5M3z3yyidBqq2z9sbM/AGyV/FJNl6iaXybtknLljbmVIQMgyT9fInQnv+i54o6/MXuA==
X-Received: by 2002:a17:906:bc89:b0:a6f:38:6968 with SMTP id a640c23a62f3a-a7245b64da3mr771698366b.32.1719419319121;
        Wed, 26 Jun 2024 09:28:39 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@vates.tech>,
	Juergen Gross <jgross@suse.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v4 00/10] x86: Expose consistent topology to guests
Date: Wed, 26 Jun 2024 17:28:27 +0100
Message-Id: <cover.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

I've added the patches most critical to get into 4.19 at the head. They are
pretty well reviewed already and shouldn't be very contentious anymore.

v3 -> v4:
  * Fixed cpuid() bug in hvmloader, causing UB in v3
  * Fixed a bogus assert in hvmloader, also causing a crash in v3
  * Used HVM contexts rather than synchronised algorithms between Xen and the
    toolstack in order to initialise the per-vCPU LAPIC state.
  * Formatting adjustments.

v2 -> v3:

  (v2/patch2 and v2/patch4 are already committed)

  * Moved the vlapic check hook addition to v3/patch1
    * And created a check hook for the architectural state too for consistency.
  * Fixed migrations from Xen <= 4.13 by reconstructing the previous topology.
  * Correctly set the APIC ID after a policy change when vlapic is already in
    x2APIC mode.
  * Removed a bogus assumption introduced in v1 and v2 in hvmloader about which
    8-bit APIC IDs represent IDs > 254 (it's "id % 0xff", not "min(id, 0xff)").
    * Used an x2apic flag check instead.
  * Various formatting adjustments.

v1 -> v2:

  * v1/patch 4 replaced by a different strategy (See patches 4 and 5 in v2):
      * Have hvmloader populate MADT with the real APIC IDs as read by the APs
        themselves rather than giving it knowledge on how to derive them.
  * Removed patches 2 and 3 in v1, as no longer relevant.
  * Split v1/patch6 in two parts ((a) creating the generator and (b) plugging it
    in) and use the generator in the unit tests of the vcpuid->apicid mapping
    function. Becomes patches 6 and 8 in v2.

  Patch 1: Same as v1/patch1.
  Patch 2: Header dependency cleanup in preparation for patch3.
  Patch 3: Adds vlapic_hidden check for the newly introduced reserved area.
  Patch 4: [hvmloader] Replaces INIT+SIPI+SIPI sequences with hypercalls.
  Patch 5: [hvmloader] Retrieve the per-CPU APIC IDs from the APs themselves.
  Patch 6: Split from v1/patch6.
  Patch 7: Logically matching v1/patch5, but using v2/patch6 for testing.
  Patch 8: Split from v1/patch6.

=== Original cover letter ===

Current topology handling is close to non-existent. As things stand, APIC
IDs are allocated through the apic_id=vcpu_id*2 relation, without giving the
OS any hints on how to parse the x2APIC ID of a given CPU, and on the
assumption that the guest will infer 2 threads per core.
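As an editorial aside (not part of the original cover letter), the legacy relation described above can be sketched as follows; the function name is hypothetical, but the apic_id=vcpu_id*2 mapping and the resulting odd-ID gaps come straight from the text:

```python
def legacy_apic_id(vcpu_id):
    """Legacy fixed relation: every vCPU gets an even APIC ID,
    implying 2 threads per core and leaving all odd IDs unused."""
    return vcpu_id * 2

# vCPUs 0..3 map to APIC IDs 0, 2, 4, 6 -- the odd IDs are gaps,
# which is exactly what the "packed" IDs in this series avoid.
```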

This series brings x2APIC IDs into the migration stream, so older guests
keep operating as they used to, and enhances Xen and the toolstack so new
guests get topology information consistent with their x2APIC IDs. As a side
effect, x2APIC IDs are now packed and don't have gaps (except in
pathological cases).

Further work ought to allow combining these topology configurations with
gang-scheduling of guest hyperthreads onto affine physical hyperthreads.
For the time being, the series purposefully keeps the configuration of "1
socket" + "1 thread per core" + "1 core per vCPU".

Patch 1: Includes x2APIC IDs in the migration stream. This allows Xen to
         reconstruct the right x2APIC IDs on migrated-in guests, and
         future-proofs itself in the face of x2APIC ID derivation changes.
Patch 2: Minor refactor to expose xc_cpu_policy in libxl
Patch 3: Refactors xen/lib/x86 to work on non-Xen freestanding environments
         (e.g: hvmloader)
Patch 4: Remove old assumptions about vcpu_id<->apic_id relationship in hvmloader
Patch 5: Add logic to derive x2APIC IDs given a CPU policy and vCPU IDs
Patch 6: Includes a simple topology generator for toolstack so new guests
         have topologically consistent information in CPUID




Alejandro Vallejo (10):
  tools/hvmloader: Fix non-deterministic cpuid()
  x86/vlapic: Move lapic migration checks to the check hooks
  xen/x86: Add initial x2APIC ID to the per-vLAPIC save area
  tools/hvmloader: Retrieve (x2)APIC IDs from the APs themselves
  xen/x86: Add supporting code for uploading LAPIC contexts during
    domain create
  tools/libguest: Make setting MTRR registers unconditional
  xen/lib: Add topology generator for x86
  xen/x86: Derive topologically correct x2APIC IDs from the policy
  xen/x86: Synthesise domain topologies
  tools/libguest: Set topologically correct x2APIC IDs for each vCPU

 tools/firmware/hvmloader/config.h        |   6 +-
 tools/firmware/hvmloader/hvmloader.c     |   4 +-
 tools/firmware/hvmloader/smp.c           |  54 ++++--
 tools/firmware/hvmloader/util.c          |   9 -
 tools/firmware/hvmloader/util.h          |  27 ++-
 tools/include/xen-tools/common-macros.h  |   5 +
 tools/libs/guest/xg_cpuid_x86.c          |  24 ++-
 tools/libs/guest/xg_dom_x86.c            | 114 ++++++++-----
 tools/tests/cpu-policy/test-cpu-policy.c | 201 +++++++++++++++++++++++
 xen/arch/x86/cpu-policy.c                |   9 +-
 xen/arch/x86/cpuid.c                     |  14 +-
 xen/arch/x86/hvm/vlapic.c                | 127 ++++++++++----
 xen/arch/x86/include/asm/hvm/vlapic.h    |   1 +
 xen/include/public/arch-x86/hvm/save.h   |   2 +
 xen/include/xen/lib/x86/cpu-policy.h     |  27 +++
 xen/lib/x86/policy.c                     | 164 ++++++++++++++++++
 xen/lib/x86/private.h                    |   4 +
 17 files changed, 687 insertions(+), 105 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749275.1157359 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVp-0007OU-O9; Wed, 26 Jun 2024 16:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749275.1157359; Wed, 26 Jun 2024 16:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVp-0007N2-IB; Wed, 26 Jun 2024 16:28:49 +0000
Received: by outflank-mailman (input) for mailman id 749275;
 Wed, 26 Jun 2024 16:28:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVn-0006MH-VA
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:48 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 29aaca38-33d9-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 18:28:47 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2ec002caf3eso104744071fa.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:47 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29aaca38-33d9-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419326; x=1720024126; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=l+xOouMfNTmMxHo9v7bAjCWITyqn06N8bTQ+zZ/tKIw=;
        b=OYFEbiTnClOEsBRLewytNPp7hQYeYjUX6ji1S+VYKmIL4ODLRQgM3Z5ptDqcYlwoI6
         Q+Ehq0m6IzqdSwCkWXARzCuEaq3Dl3QolxLyei7kxfhZyUzA8Q3soPbunRde0u+epwTk
         eSSS6Lwh+47LbYl2Qu79dLnp4QvAWKQA8YGHM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419326; x=1720024126;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=l+xOouMfNTmMxHo9v7bAjCWITyqn06N8bTQ+zZ/tKIw=;
        b=YBZnIXrv3RWbpAD7s0W4V0QovipOpzvVO1x6g51wtASB32RiWpVvQKBWut2SbHU2Os
         q1xtoKnW/mPy+iCrQTCzylZ6nnzo5vl/zsN62EQvG8KNPJYNOLJ+zR0e3sB7M0znl1jA
         s34+luSTAwyncADyAy61Dr6GXzxcUZ3kiVnWCECWBoFx5Qub87aq6mWtlJpT2t0lO4qb
         Hw/wZJIsJBs8u02DYYmGXQP8L4FD9DpBhZ5BIphxIaszevtzaAQOb+nQIHOKiHQS+cGZ
         flm7HjA8xeU96RxlPcRH3uXNgsYkDBll6CAH96dXaxDmTErS+YoT3rKGOcNAxtg8nq6Y
         egVw==
X-Gm-Message-State: AOJu0YyP4Ll9gm8NNsOCvGAuBdjbGahFt+YfRNdHKUsN5XJ9ItiGr7k9
	92yDAG/SmhXUSimIoHjDB3tWbYRb/yerB0dQ2c1p17DpQN/tXpTLQtLtzfsdBpi0PF7H2o2R0VN
	nKgM=
X-Google-Smtp-Source: AGHT+IF3IPdvzxA3JXkykkYNidfujye+/BWcALPxMJdj71ZMCRp0vGOO+OeWys7ok3Dm5kfg6BdS0w==
X-Received: by 2002:a2e:885a:0:b0:2eb:fca8:7f37 with SMTP id 38308e7fff4ca-2ec5b2e628fmr89509191fa.28.1719419326484;
        Wed, 26 Jun 2024 09:28:46 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@vates.tech>
Subject: [PATCH v4 07/10] xen/lib: Add topology generator for x86
Date: Wed, 26 Jun 2024 17:28:34 +0100
Message-Id: <5ab2cb62745bca462ab3768ea1eb826d2b6e2c76.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a helper to populate topology leaves in the cpu policy from
threads/core and cores/package counts. It's unit-tested in test-cpu-policy.c,
but it's not connected to the rest of the code yet.

Adds the ASSERT() macro to xen/lib/x86/private.h, as it was missing.
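As an editorial aside (not part of the original commit message), the per-level id_shift values exercised by the test vectors in the diff below all follow one rule: each level's shift is the number of bits needed to hold the counts beneath it. A minimal sketch, with a hypothetical function name, assuming that rule:

```python
import math

def topo_shifts(threads_per_core, cores_per_pkg):
    """Sketch of the id_shift derivation implied by the test vectors:
    shift = ceil(log2(count)) bits per level (0 bits for a count of 1),
    accumulated from the SMT level upwards."""
    def bits(n):
        return 0 if n <= 1 else math.ceil(math.log2(n))

    core_shift = bits(threads_per_core)           # level 0 (SMT/thread)
    pkg_shift = core_shift + bits(cores_per_pkg)  # level 1 (core)
    return core_shift, pkg_shift
```

For example, the (threads=7, cores=5) vector yields shifts (3, 6), matching the id_shift fields in the corresponding test entry.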

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v4:
  * v1->v2 introduced a bug. lppp must be MIN(0xff, threads_per_pkg).
  * Add missing MIN() when setting p->extd.nc (should've been done in v2)
---
 tools/tests/cpu-policy/test-cpu-policy.c | 133 +++++++++++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h     |  16 +++
 xen/lib/x86/policy.c                     |  88 +++++++++++++++
 xen/lib/x86/private.h                    |   4 +
 4 files changed, 241 insertions(+)

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 301df2c00285..849d7cebaa7c 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -650,6 +650,137 @@ static void test_is_compatible_failure(void)
     }
 }
 
+static void test_topo_from_parts(void)
+{
+    static const struct test {
+        unsigned int threads_per_core;
+        unsigned int cores_per_pkg;
+        struct cpu_policy policy;
+    } tests[] = {
+        {
+            .threads_per_core = 3, .cores_per_pkg = 1,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, },
+                    { .nr_logical = 1, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 1, .cores_per_pkg = 3,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, },
+                    { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 7, .cores_per_pkg = 5,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, },
+                    { .nr_logical = 5, .level = 1, .type = 2, .id_shift = 6, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 2, .cores_per_pkg = 128,
+            .policy = {
+                .x86_vendor = X86_VENDOR_AMD,
+                .topo.subleaf = {
+                    { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, },
+                    { .nr_logical = 128, .level = 1, .type = 2,
+                      .id_shift = 8, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 3, .cores_per_pkg = 1,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 3, .level = 0, .type = 1, .id_shift = 2, },
+                    { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 1, .cores_per_pkg = 3,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 1, .level = 0, .type = 1, .id_shift = 0, },
+                    { .nr_logical = 3, .level = 1, .type = 2, .id_shift = 2, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 7, .cores_per_pkg = 5,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 7, .level = 0, .type = 1, .id_shift = 3, },
+                    { .nr_logical = 35, .level = 1, .type = 2, .id_shift = 6, },
+                },
+            },
+        },
+        {
+            .threads_per_core = 2, .cores_per_pkg = 128,
+            .policy = {
+                .x86_vendor = X86_VENDOR_INTEL,
+                .topo.subleaf = {
+                    { .nr_logical = 2, .level = 0, .type = 1, .id_shift = 1, },
+                    { .nr_logical = 256, .level = 1, .type = 2,
+                      .id_shift = 8, },
+                },
+            },
+        },
+    };
+
+    printf("Testing topology synthesis from parts:\n");
+
+    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        const struct test *t = &tests[i];
+        struct cpu_policy actual = { .x86_vendor = t->policy.x86_vendor };
+        int rc = x86_topo_from_parts(&actual, t->threads_per_core,
+                                     t->cores_per_pkg);
+
+        if ( rc || memcmp(&actual.topo, &t->policy.topo, sizeof(actual.topo)) )
+        {
+#define TOPO(n, f)  t->policy.topo.subleaf[(n)].f, actual.topo.subleaf[(n)].f
+            fail("FAIL[%d] - '%s %u t/c, %u c/p'\n",
+                 rc,
+                 x86_cpuid_vendor_to_str(t->policy.x86_vendor),
+                 t->threads_per_core, t->cores_per_pkg);
+            printf("  subleaf=%u  expected_n=%u actual_n=%u\n"
+                   "             expected_lvl=%u actual_lvl=%u\n"
+                   "             expected_type=%u actual_type=%u\n"
+                   "             expected_shift=%u actual_shift=%u\n",
+                   0,
+                   TOPO(0, nr_logical),
+                   TOPO(0, level),
+                   TOPO(0, type),
+                   TOPO(0, id_shift));
+
+            printf("  subleaf=%u  expected_n=%u actual_n=%u\n"
+                   "             expected_lvl=%u actual_lvl=%u\n"
+                   "             expected_type=%u actual_type=%u\n"
+                   "             expected_shift=%u actual_shift=%u\n",
+                   1,
+                   TOPO(1, nr_logical),
+                   TOPO(1, level),
+                   TOPO(1, type),
+                   TOPO(1, id_shift));
+#undef TOPO
+        }
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -667,6 +798,8 @@ int main(int argc, char **argv)
     test_is_compatible_success();
     test_is_compatible_failure();
 
+    test_topo_from_parts();
+
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
     else
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index d26012c6da78..79fdf9045a1b 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -542,6 +542,22 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err);
 
+/**
+ * Synthesise topology information in `p` given high-level constraints
+ *
+ * Topology is given in various fields across several leaves, some of
+ * which are vendor-specific. This function uses the policy itself to
+ * derive such leaves from threads/core and cores/package.
+ *
+ * @param p                   CPU policy of the domain.
+ * @param threads_per_core    threads/core. Doesn't need to be a power of 2.
+ * @param cores_per_pkg       cores/package. Doesn't need to be a power of 2.
+ * @return                    0 on success; -errno on failure
+ */
+int x86_topo_from_parts(struct cpu_policy *p,
+                        unsigned int threads_per_core,
+                        unsigned int cores_per_pkg);
+
 #endif /* !XEN_LIB_X86_POLICIES_H */
 
 /*
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f033d22785be..72b67b44a893 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,6 +2,94 @@
 
 #include <xen/lib/x86/cpu-policy.h>
 
+static unsigned int order(unsigned int n)
+{
+    ASSERT(n); /* clz(0) is UB */
+
+    return 8 * sizeof(n) - __builtin_clz(n);
+}
+
+int x86_topo_from_parts(struct cpu_policy *p,
+                        unsigned int threads_per_core,
+                        unsigned int cores_per_pkg)
+{
+    unsigned int threads_per_pkg = threads_per_core * cores_per_pkg;
+    unsigned int apic_id_size;
+
+    if ( !p || !threads_per_core || !cores_per_pkg )
+        return -EINVAL;
+
+    p->basic.max_leaf = MAX(0xb, p->basic.max_leaf);
+
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
+
+    /* thread level */
+    p->topo.subleaf[0].nr_logical = threads_per_core;
+    p->topo.subleaf[0].id_shift = 0;
+    p->topo.subleaf[0].level = 0;
+    p->topo.subleaf[0].type = 1;
+    if ( threads_per_core > 1 )
+        p->topo.subleaf[0].id_shift = order(threads_per_core - 1);
+
+    /* core level */
+    p->topo.subleaf[1].nr_logical = cores_per_pkg;
+    if ( p->x86_vendor == X86_VENDOR_INTEL )
+        p->topo.subleaf[1].nr_logical = threads_per_pkg;
+    p->topo.subleaf[1].id_shift = p->topo.subleaf[0].id_shift;
+    p->topo.subleaf[1].level = 1;
+    p->topo.subleaf[1].type = 2;
+    if ( cores_per_pkg > 1 )
+        p->topo.subleaf[1].id_shift += order(cores_per_pkg - 1);
+
+    apic_id_size = p->topo.subleaf[1].id_shift;
+
+    /*
+     * Contrary to what the name might seem to imply, HTT is an enabler for
+     * SMP and there's no harm in setting it even with a single vCPU.
+     */
+    p->basic.htt = true;
+    p->basic.lppp = MIN(0xff, threads_per_pkg);
+
+    switch ( p->x86_vendor )
+    {
+    case X86_VENDOR_INTEL: {
+        struct cpuid_cache_leaf *sl = p->cache.subleaf;
+
+        for ( size_t i = 0; i < ARRAY_SIZE(p->cache.raw) &&
+                            sl->type; i++, sl++ )
+        {
+            sl->cores_per_package = cores_per_pkg - 1;
+            sl->threads_per_cache = threads_per_core - 1;
+            if ( sl->type == 3 /* unified cache */ )
+                sl->threads_per_cache = threads_per_pkg - 1;
+        }
+        break;
+    }
+
+    case X86_VENDOR_AMD:
+    case X86_VENDOR_HYGON:
+        /* Expose p->basic.lppp */
+        p->extd.cmp_legacy = true;
+
+        /* Clip NC to the maximum value it can hold */
+        p->extd.nc = MIN(0xff, threads_per_pkg - 1);
+
+        /* TODO: Expose leaf e1E */
+        p->extd.topoext = false;
+
+        /*
+         * Clip apic_id_size to 8 bits, as that's what high core-count
+         * machines do; the AMD EPYC 9654, for instance, behaves like
+         * this with >256 CPUs.
+         */
+        p->extd.apic_id_size = MIN(8, apic_id_size);
+
+        break;
+    }
+
+    return 0;
+}
+
 int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err)
diff --git a/xen/lib/x86/private.h b/xen/lib/x86/private.h
index 60bb82a400b7..2ec9dbee33c2 100644
--- a/xen/lib/x86/private.h
+++ b/xen/lib/x86/private.h
@@ -4,6 +4,7 @@
 #ifdef __XEN__
 
 #include <xen/bitops.h>
+#include <xen/bug.h>
 #include <xen/guest_access.h>
 #include <xen/kernel.h>
 #include <xen/lib.h>
@@ -17,6 +18,7 @@
 
 #else
 
+#include <assert.h>
 #include <errno.h>
 #include <inttypes.h>
 #include <stdbool.h>
@@ -28,6 +30,8 @@
 
 #include <xen-tools/common-macros.h>
 
+#define ASSERT(x) assert(x)
+
 static inline bool test_bit(unsigned int bit, const void *vaddr)
 {
     const char *addr = vaddr;
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 05/10] xen/x86: Add supporting code for uploading LAPIC contexts during domain create
Date: Wed, 26 Jun 2024 17:28:32 +0100
Message-Id: <ae0143bb190dd13171edbed947335a6bc19abe4b.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This patch is a precondition for a later patch in which the toolstack uses HVM
contexts to upload LAPIC data to a newly constructed domain.

If the toolstack were to upload LAPIC contexts as part of domain creation
as-is, it would encounter a problem where the architectural state does not
reflect the APIC ID in the hidden state. This patch ensures that updates to the
hidden state trigger an update of the architectural registers, so the APIC ID
in both is consistent.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v4:
    * New patch
---
 xen/arch/x86/hvm/vlapic.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index b57e39d1c6dd..ebcf74711a13 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1622,7 +1622,27 @@ static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
 
     s->loaded.hw = 1;
     if ( s->loaded.regs )
+    {
+        /*
+         * We already processed architectural regs in lapic_load_regs(), so
+         * this must be a migration. Fix up inconsistencies from any older Xen.
+         */
         lapic_load_fixup(s);
+    }
+    else
+    {
+        /*
+         * We haven't seen architectural regs so this could be a migration or a
+         * plain domain create. In the domain create case it's fine to modify
+         * the architectural state to align it to the APIC ID that was just
+         * uploaded and in the migrate case it doesn't matter because the
+         * architectural state will be replaced by the LAPIC_REGS ctx later on.
+         */
+        if ( vlapic_x2apic_mode(s) )
+            set_x2apic_id(s);
+        else
+            vlapic_set_reg(s, APIC_ID, SET_xAPIC_ID(s->hw.x2apic_id));
+    }
 
     hvm_update_vlapic_mode(v);
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for 4.19 v4 02/10] x86/vlapic: Move lapic migration checks to the check hooks
Date: Wed, 26 Jun 2024 17:28:29 +0100
Message-Id: <b4e82173268e21480f9657f7457080d9fa9e310c.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

While doing this, factor out checks common to architectural and hidden state.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
This puts essential LAPIC information in the stream. It's technically a feature,
but it makes 4.19 guests a lot more future-proof. I think this should go into 4.19.

v4:
  * Replaced BUG() with ASSERT_UNREACHABLE(), and allow returning -EINVAL in
    release builds.
  * Adjust printk() to be clearer
  * Assign lapic_check_common() outside the "if" condition.
---
 xen/arch/x86/hvm/vlapic.c | 85 ++++++++++++++++++++++++++-------------
 1 file changed, 58 insertions(+), 27 deletions(-)

diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 9cfc82666ae5..1a7bca5afd2f 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1553,60 +1553,91 @@ static void lapic_load_fixup(struct vlapic *vlapic)
                v, vlapic->loaded.id, vlapic->loaded.ldr, good_ldr);
 }
 
-static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
-{
-    unsigned int vcpuid = hvm_load_instance(h);
-    struct vcpu *v;
-    struct vlapic *s;
 
+static int lapic_check_common(const struct domain *d, unsigned int vcpuid)
+{
     if ( !has_vlapic(d) )
         return -ENODEV;
 
     /* Which vlapic to load? */
-    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
+    if ( !domain_vcpu(d, vcpuid) )
     {
-        dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no apic%u\n",
+        dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no vCPU %u\n",
                 d->domain_id, vcpuid);
         return -EINVAL;
     }
-    s = vcpu_vlapic(v);
+
+    return 0;
+}
+
+static int cf_check lapic_check_hidden(const struct domain *d,
+                                       hvm_domain_context_t *h)
+{
+    unsigned int vcpuid = hvm_load_instance(h);
+    struct hvm_hw_lapic s;
+    int rc = lapic_check_common(d, vcpuid);
+
+    if ( rc )
+        return rc;
+
+    if ( hvm_load_entry_zeroextend(LAPIC, h, &s) != 0 )
+        return -ENODATA;
+
+    /* EN=0 with EXTD=1 is illegal */
+    if ( (s.apic_base_msr & (APIC_BASE_ENABLE | APIC_BASE_EXTD)) ==
+         APIC_BASE_EXTD )
+        return -EINVAL;
+
+    return 0;
+}
+
+static int cf_check lapic_load_hidden(struct domain *d, hvm_domain_context_t *h)
+{
+    unsigned int vcpuid = hvm_load_instance(h);
+    struct vcpu *v = d->vcpu[vcpuid];
+    struct vlapic *s = vcpu_vlapic(v);
 
     if ( hvm_load_entry_zeroextend(LAPIC, h, &s->hw) != 0 )
+    {
+        ASSERT_UNREACHABLE();
         return -EINVAL;
+    }
 
     s->loaded.hw = 1;
     if ( s->loaded.regs )
         lapic_load_fixup(s);
 
-    if ( !(s->hw.apic_base_msr & APIC_BASE_ENABLE) &&
-         unlikely(vlapic_x2apic_mode(s)) )
-        return -EINVAL;
-
     hvm_update_vlapic_mode(v);
 
     return 0;
 }
 
-static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
+static int cf_check lapic_check_regs(const struct domain *d,
+                                     hvm_domain_context_t *h)
 {
     unsigned int vcpuid = hvm_load_instance(h);
-    struct vcpu *v;
-    struct vlapic *s;
+    int rc;
 
-    if ( !has_vlapic(d) )
-        return -ENODEV;
+    if ( (rc = lapic_check_common(d, vcpuid)) )
+        return rc;
 
-    /* Which vlapic to load? */
-    if ( vcpuid >= d->max_vcpus || (v = d->vcpu[vcpuid]) == NULL )
-    {
-        dprintk(XENLOG_G_ERR, "HVM restore: dom%d has no apic%u\n",
-                d->domain_id, vcpuid);
-        return -EINVAL;
-    }
-    s = vcpu_vlapic(v);
+    if ( !hvm_get_entry(LAPIC_REGS, h) )
+        return -ENODATA;
+
+    return 0;
+}
+
+static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
+{
+    unsigned int vcpuid = hvm_load_instance(h);
+    struct vcpu *v = d->vcpu[vcpuid];
+    struct vlapic *s = vcpu_vlapic(v);
 
     if ( hvm_load_entry(LAPIC_REGS, h, s->regs) != 0 )
+    {
+        ASSERT_UNREACHABLE();
         return -EINVAL;
+    }
 
     s->loaded.id = vlapic_get_reg(s, APIC_ID);
     s->loaded.ldr = vlapic_get_reg(s, APIC_LDR);
@@ -1623,9 +1654,9 @@ static int cf_check lapic_load_regs(struct domain *d, hvm_domain_context_t *h)
     return 0;
 }
 
-HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, NULL,
+HVM_REGISTER_SAVE_RESTORE(LAPIC, lapic_save_hidden, lapic_check_hidden,
                           lapic_load_hidden, 1, HVMSR_PER_VCPU);
-HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, NULL,
+HVM_REGISTER_SAVE_RESTORE(LAPIC_REGS, lapic_save_regs, lapic_check_regs,
                           lapic_load_regs, 1, HVMSR_PER_VCPU);
 
 int vlapic_init(struct vcpu *v)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 v4 03/10] xen/x86: Add initial x2APIC ID to the per-vLAPIC save area
Date: Wed, 26 Jun 2024 17:28:30 +0100
Message-Id: <5beadf9d7997ed2df1a7c28cc2f0c5583ca7f7a3.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This allows the initial x2APIC ID to be sent in the migration stream, which
permits further changes to topology and APIC ID assignment without breaking
existing hosts. Given that vlapic data is zero-extended on restore, fix up
migrations from hosts without the field by setting it to the old convention
if it is zero.

The hardcoded mapping x2apic_id=2*vcpu_id is kept for the time being, but it's
meant to be overridden by the toolstack with appropriate values in a later
patch.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
Same rationale as previous patch for inclusion in 4.19

Roger replied to v3 with an R-by for this patch. I didn't add it here because
the patch has seen substantial changes and is probably worth looking at again.

All changes are removals. In particular...

v4:
  * Removed hooks into cpu policy update events. They are no longer relevant.
  * Removed the derivation (within Xen) of x2apic_id from vcpu_id via lib/x86.
    * Rearranged for the toolstack to provide those via hvmcontext blobs in a
      later patch. This still works out because the default is the legacy
      scheme of apicid = vcpuid * 2.
---
 xen/arch/x86/cpuid.c                   | 14 +++++---------
 xen/arch/x86/hvm/vlapic.c              | 22 ++++++++++++++++++++--
 xen/arch/x86/include/asm/hvm/vlapic.h  |  1 +
 xen/include/public/arch-x86/hvm/save.h |  2 ++
 4 files changed, 28 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index a822e80c7ea7..7ee596ab66a4 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -139,10 +139,9 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
         const struct cpu_user_regs *regs;
 
     case 0x1:
-        /* TODO: Rework topology logic. */
         res->b &= 0x00ffffffu;
         if ( is_hvm_domain(d) )
-            res->b |= (v->vcpu_id * 2) << 24;
+            res->b |= vlapic_x2apic_id(vcpu_vlapic(v)) << 24;
 
         /* TODO: Rework vPMU control in terms of toolstack choices. */
         if ( vpmu_available(v) &&
@@ -312,18 +311,15 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
 
     case 0xb:
         /*
-         * In principle, this leaf is Intel-only.  In practice, it is tightly
-         * coupled with x2apic, and we offer an x2apic-capable APIC emulation
-         * to guests on AMD hardware as well.
-         *
-         * TODO: Rework topology logic.
+         * Don't expose topology information to PV guests. Exposed on HVM
+         * along with x2APIC because they are tightly coupled.
          */
-        if ( p->basic.x2apic )
+        if ( is_hvm_domain(d) && p->basic.x2apic )
         {
             *(uint8_t *)&res->c = subleaf;
 
             /* Fix the x2APIC identifier. */
-            res->d = v->vcpu_id * 2;
+            res->d = vlapic_x2apic_id(vcpu_vlapic(v));
         }
         break;
 
diff --git a/xen/arch/x86/hvm/vlapic.c b/xen/arch/x86/hvm/vlapic.c
index 1a7bca5afd2f..b57e39d1c6dd 100644
--- a/xen/arch/x86/hvm/vlapic.c
+++ b/xen/arch/x86/hvm/vlapic.c
@@ -1072,7 +1072,7 @@ static uint32_t x2apic_ldr_from_id(uint32_t id)
 static void set_x2apic_id(struct vlapic *vlapic)
 {
     const struct vcpu *v = vlapic_vcpu(vlapic);
-    uint32_t apic_id = v->vcpu_id * 2;
+    uint32_t apic_id = vlapic->hw.x2apic_id;
     uint32_t apic_ldr = x2apic_ldr_from_id(apic_id);
 
     /*
@@ -1452,7 +1452,7 @@ void vlapic_reset(struct vlapic *vlapic)
     if ( v->vcpu_id == 0 )
         vlapic->hw.apic_base_msr |= APIC_BASE_BSP;
 
-    vlapic_set_reg(vlapic, APIC_ID, (v->vcpu_id * 2) << 24);
+    vlapic_set_reg(vlapic, APIC_ID, SET_xAPIC_ID(vlapic->hw.x2apic_id));
     vlapic_do_init(vlapic);
 }
 
@@ -1520,6 +1520,16 @@ static void lapic_load_fixup(struct vlapic *vlapic)
     const struct vcpu *v = vlapic_vcpu(vlapic);
     uint32_t good_ldr = x2apic_ldr_from_id(vlapic->loaded.id);
 
+    /*
+     * If the stream lacks hw.x2apic_id in the save record, calculate it
+     * using the traditional "vcpu_id * 2" relation. There's an implicit
+     * assumption that vCPU0 always has x2APIC ID 0, which is true for the
+     * old relation and still holds under the new generation algorithm.
+     * vCPU0 goes through the conditional too, but benignly maps to zero.
+     */
+    if ( !vlapic->hw.x2apic_id )
+        vlapic->hw.x2apic_id = v->vcpu_id * 2;
+
     /* Skip fixups on xAPIC mode, or if the x2APIC LDR is already correct */
     if ( !vlapic_x2apic_mode(vlapic) ||
          (vlapic->loaded.ldr == good_ldr) )
@@ -1588,6 +1598,13 @@ static int cf_check lapic_check_hidden(const struct domain *d,
          APIC_BASE_EXTD )
         return -EINVAL;
 
+    /*
+     * Fail migrations from newer versions of Xen where
+     * rsvd_zero is interpreted as something else.
+     */
+    if ( s.rsvd_zero )
+        return -EINVAL;
+
     return 0;
 }
 
@@ -1672,6 +1689,7 @@ int vlapic_init(struct vcpu *v)
     }
 
     vlapic->pt.source = PTSRC_lapic;
+    vlapic->hw.x2apic_id = 2 * v->vcpu_id;
 
     vlapic->regs_page = alloc_domheap_page(v->domain, MEMF_no_owner);
     if ( !vlapic->regs_page )
diff --git a/xen/arch/x86/include/asm/hvm/vlapic.h b/xen/arch/x86/include/asm/hvm/vlapic.h
index 2c4ff94ae7a8..85c4a236b9f6 100644
--- a/xen/arch/x86/include/asm/hvm/vlapic.h
+++ b/xen/arch/x86/include/asm/hvm/vlapic.h
@@ -44,6 +44,7 @@
 #define vlapic_xapic_mode(vlapic)                               \
     (!vlapic_hw_disabled(vlapic) && \
      !((vlapic)->hw.apic_base_msr & APIC_BASE_EXTD))
+#define vlapic_x2apic_id(vlapic) ((vlapic)->hw.x2apic_id)
 
 /*
  * Generic APIC bitmap vector update & search routines.
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 7ecacadde165..1c2ec669ffc9 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -394,6 +394,8 @@ struct hvm_hw_lapic {
     uint32_t             disabled; /* VLAPIC_xx_DISABLED */
     uint32_t             timer_divisor;
     uint64_t             tdt_msr;
+    uint32_t             x2apic_id;
+    uint32_t             rsvd_zero;
 };
 
 DECLARE_HVM_SAVE_TYPE(LAPIC, 5, struct hvm_hw_lapic);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749274.1157354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVp-0007IS-AL; Wed, 26 Jun 2024 16:28:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749274.1157354; Wed, 26 Jun 2024 16:28:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVp-0007Gv-5l; Wed, 26 Jun 2024 16:28:49 +0000
Received: by outflank-mailman (input) for mailman id 749274;
 Wed, 26 Jun 2024 16:28:47 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVn-0005pK-QZ
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:47 +0000
Received: from mail-ed1-x536.google.com (mail-ed1-x536.google.com
 [2a00:1450:4864:20::536])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 28ede50e-33d9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 18:28:46 +0200 (CEST)
Received: by mail-ed1-x536.google.com with SMTP id
 4fb4d7f45d1cf-57d1782679fso678725a12.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:46 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28ede50e-33d9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419325; x=1720024125; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qRrrYvX+yDttN2Gg1qTkgVmajhbWR2NVsVybiYM7F5A=;
        b=DWQdEei9NajR8t828BZaK624uBUwS+bF4ZJsH+bScdVMgcVRu5WJ5PKXgUDRZYSw+W
         tU0KpEFvfBXemiGKGzVSFivApjOkk5I2XHaGPkAZJ/tt1oGQce+z0osWpUHLwjXL3G12
         Rx3DtFin2FACzpN2mgqZYpOkXI8ifOND+TtVo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419325; x=1720024125;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=qRrrYvX+yDttN2Gg1qTkgVmajhbWR2NVsVybiYM7F5A=;
        b=SZ7EzqBXya2sGW/v4Crw8yA2dT4jj3wfdlSvAlYreAJB5a1y8ALjJC0Vsveih2lYai
         fHgxyIGlZnq1TSdvIyZcaeku7xTxw/8J3XbieMPzo2NdOGnfCF62ZjmflCTWbX3/PDBd
         gZWvsghivRPjFqfxaNq2GWNcUefCkKNxp+ud/iKdZHc14fiCmNRvKWuzoG0xg8kqlT68
         fxvO2MpKV+P/ERVn96rAy7ffrgd8ay2Q+FjiequLs/ykUtY70L8TdW68+ojUHi8YoSvI
         DI6icUe0YyhBN7m1+BR9KYzEQVFIPg2a2OAJixLoDoVolAqjYlR7jfFrcqOmhkIBkoUV
         grEw==
X-Gm-Message-State: AOJu0YyKFYzENZvewJXN2uUO88SDTDjdmKYPHXUdzfRYHlyV+Za7pak5
	h6jfb+/IM2b245vxsEMwQXwJxlhE9BesIAQCG5D84IcMrQpe2beyygvpbH6IEoexmygUKoZYX6c
	NftY=
X-Google-Smtp-Source: AGHT+IGIMJSmFVWpKQ8YVYg3rvYvGdWCd5kBCwEUCLXVW0MyhCXi56UMacCQ9aoBAV6VOB6qERmO2g==
X-Received: by 2002:a17:907:c81b:b0:a72:4bf2:e16 with SMTP id a640c23a62f3a-a727f65e329mr261218066b.16.1719419325348;
        Wed, 26 Jun 2024 09:28:45 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Anthony PERARD <anthony.perard@vates.tech>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v4 06/10] tools/libguest: Make setting MTRR registers unconditional
Date: Wed, 26 Jun 2024 17:28:33 +0100
Message-Id: <2c55d486bb0c54a3e813abc66d32f321edd28b81.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This greatly simplifies a later patch that uses HVM contexts to upload LAPIC
data. The idea is to reuse the MTRR setting procedure to avoid code
duplication. It's currently only used for PVH, but there's no real reason to
overcomplicate the toolstack by preventing MTRRs from being set for HVM too,
when hvmloader will override them anyway.

While at it, add a missing "goto out" to what was the error condition in the
loop.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v4:
  * New patch
---
 tools/libs/guest/xg_dom_x86.c | 83 ++++++++++++++++++-----------------
 1 file changed, 43 insertions(+), 40 deletions(-)

diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c
index cba01384ae75..82ea3e2aab0b 100644
--- a/tools/libs/guest/xg_dom_x86.c
+++ b/tools/libs/guest/xg_dom_x86.c
@@ -989,6 +989,7 @@ const static void *hvm_get_save_record(const void *ctx, unsigned int type,
 
 static int vcpu_hvm(struct xc_dom_image *dom)
 {
+    /* Initialises the BSP */
     struct {
         struct hvm_save_descriptor header_d;
         HVM_SAVE_TYPE(HEADER) header;
@@ -997,6 +998,18 @@ static int vcpu_hvm(struct xc_dom_image *dom)
         struct hvm_save_descriptor end_d;
         HVM_SAVE_TYPE(END) end;
     } bsp_ctx;
+    /* Initialises APICs and MTRRs of every vCPU */
+    struct {
+        struct hvm_save_descriptor header_d;
+        HVM_SAVE_TYPE(HEADER) header;
+        struct hvm_save_descriptor mtrr_d;
+        HVM_SAVE_TYPE(MTRR) mtrr;
+        struct hvm_save_descriptor end_d;
+        HVM_SAVE_TYPE(END) end;
+    } vcpu_ctx;
+    /* Context from full_ctx */
+    const HVM_SAVE_TYPE(MTRR) *mtrr_record;
+    /* Raw context as taken from Xen */
     uint8_t *full_ctx = NULL;
     int rc;
 
@@ -1083,51 +1096,41 @@ static int vcpu_hvm(struct xc_dom_image *dom)
     bsp_ctx.end_d.instance = 0;
     bsp_ctx.end_d.length = HVM_SAVE_LENGTH(END);
 
-    /* TODO: maybe this should be a firmware option instead? */
-    if ( !dom->device_model )
+    /* TODO: maybe setting MTRRs should be a firmware option instead? */
+    mtrr_record = hvm_get_save_record(full_ctx, HVM_SAVE_CODE(MTRR), 0);
+
+    if ( !mtrr_record )
     {
-        struct {
-            struct hvm_save_descriptor header_d;
-            HVM_SAVE_TYPE(HEADER) header;
-            struct hvm_save_descriptor mtrr_d;
-            HVM_SAVE_TYPE(MTRR) mtrr;
-            struct hvm_save_descriptor end_d;
-            HVM_SAVE_TYPE(END) end;
-        } mtrr = {
-            .header_d = bsp_ctx.header_d,
-            .header = bsp_ctx.header,
-            .mtrr_d.typecode = HVM_SAVE_CODE(MTRR),
-            .mtrr_d.length = HVM_SAVE_LENGTH(MTRR),
-            .end_d = bsp_ctx.end_d,
-            .end = bsp_ctx.end,
-        };
-        const HVM_SAVE_TYPE(MTRR) *mtrr_record =
-            hvm_get_save_record(full_ctx, HVM_SAVE_CODE(MTRR), 0);
-        unsigned int i;
-
-        if ( !mtrr_record )
-        {
-            xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                         "%s: unable to get MTRR save record", __func__);
-            goto out;
-        }
+        xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
+                     "%s: unable to get MTRR save record", __func__);
+        goto out;
+    }
 
-        memcpy(&mtrr.mtrr, mtrr_record, sizeof(mtrr.mtrr));
+    vcpu_ctx.header_d = bsp_ctx.header_d;
+    vcpu_ctx.header = bsp_ctx.header;
+    vcpu_ctx.mtrr_d.typecode = HVM_SAVE_CODE(MTRR);
+    vcpu_ctx.mtrr_d.length = HVM_SAVE_LENGTH(MTRR);
+    vcpu_ctx.mtrr = *mtrr_record;
+    vcpu_ctx.end_d = bsp_ctx.end_d;
+    vcpu_ctx.end = bsp_ctx.end;
 
-        /*
-         * Enable MTRR, set default type to WB.
-         * TODO: add MMIO areas as UC when passthrough is supported.
-         */
-        mtrr.mtrr.msr_mtrr_def_type = MTRR_TYPE_WRBACK | MTRR_DEF_TYPE_ENABLE;
+    /*
+     * Enable MTRR, set default type to WB.
+     * TODO: add MMIO areas as UC when passthrough is supported in PVH
+     */
+    vcpu_ctx.mtrr.msr_mtrr_def_type = MTRR_TYPE_WRBACK | MTRR_DEF_TYPE_ENABLE;
 
-        for ( i = 0; i < dom->max_vcpus; i++ )
+    for ( unsigned int i = 0; i < dom->max_vcpus; i++ )
+    {
+        vcpu_ctx.mtrr_d.instance = i;
+
+        rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid,
+                                      (uint8_t *)&vcpu_ctx, sizeof(vcpu_ctx));
+        if ( rc != 0 )
         {
-            mtrr.mtrr_d.instance = i;
-            rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid,
-                                          (uint8_t *)&mtrr, sizeof(mtrr));
-            if ( rc != 0 )
-                xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
-                             "%s: SETHVMCONTEXT failed (rc=%d)", __func__, rc);
+            xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
+                         "%s: SETHVMCONTEXT failed (rc=%d)", __func__, rc);
+            goto out;
         }
     }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749273.1157340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVn-0006pk-Rt; Wed, 26 Jun 2024 16:28:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749273.1157340; Wed, 26 Jun 2024 16:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVn-0006oN-L9; Wed, 26 Jun 2024 16:28:47 +0000
Received: by outflank-mailman (input) for mailman id 749273;
 Wed, 26 Jun 2024 16:28:46 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVm-0005pK-1D
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:46 +0000
Received: from mail-ej1-x630.google.com (mail-ej1-x630.google.com
 [2a00:1450:4864:20::630])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 27cb504e-33d9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 18:28:44 +0200 (CEST)
Received: by mail-ej1-x630.google.com with SMTP id
 a640c23a62f3a-a72420e84feso594506166b.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:44 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27cb504e-33d9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419323; x=1720024123; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=KUPCAuZy6+3qsiLL1TNf/Yu5F4mR3hLKr97Qs7ol+TI=;
        b=OTjninxtZWSDyE1V+mCKPUlE3Cl6eMIgfwVubBgUngEQIabJXs2JENQGvZ95lqHm9I
         0BSYad4UJ0vQvHxbEQ7EEGBwa6s77Gz16eNESYrKNAnc2DB/Nf2kzEnDQ0zfemxP9ynJ
         1JAkk52rEeH+bAzEpRvXoAib7UYuWt30R7OeY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419323; x=1720024123;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=KUPCAuZy6+3qsiLL1TNf/Yu5F4mR3hLKr97Qs7ol+TI=;
        b=V32V9V3AmZ7L48IKG9vVlshPVe1mCZJEqefYf/Svj/yOj3Akul5RP6ktrecJBG96xg
         A+7DZ6znPjS4oCZguOjIT50dOaCqmyOOXavw9a5rJVHAipdxMdo5MXMPkzVxq9ODyC0D
         eQsky3C43a9M1yv31gPvRDuNRAhXyIAKJ/HUOJISE/bW6ROTxVXoiDAEPPaoC3qsH754
         5vUPgXJD6wkdN868d85ei7pdpxal9Z9dcDG+y42i1yOGUOMIis8jXbwzATgA2QCdXqFU
         V2CtkHfJg4KayurJ2sd8vA7XikJEPwagpt9ROzl3cg/L+aWmCL3ZfjoBJ0A8s3wslWNw
         hR9g==
X-Gm-Message-State: AOJu0YzvC2a/tYaUHvDJG80mf/PdWBb8zAb47hixB7QIhedJLOP0YrvR
	r1zTbVpn4unfrLSj/gXNvuWJksRJooMwVD3dg0jjs/vSR8NIQ5ade+tpTvfQkAK9E78NLtzZ/VN
	jpgU=
X-Google-Smtp-Source: AGHT+IGnjomOOSm/zrdoLTB5WjM2JZZxash/dLqgpcWWloLOuyWpg0ImCKC47O7zdJLeKl0l5JWH0Q==
X-Received: by 2002:a17:906:6057:b0:a71:bf5a:3418 with SMTP id a640c23a62f3a-a7245c809demr827881466b.53.1719419323448;
        Wed, 26 Jun 2024 09:28:43 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@vates.tech>
Subject: [PATCH v4 04/10] tools/hvmloader: Retrieve (x2)APIC IDs from the APs themselves
Date: Wed, 26 Jun 2024 17:28:31 +0100
Message-Id: <c87ee1dc6957c732bf29f3cfe9900136bf6e55a9.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Make the APs expose their own APIC IDs in a LUT. We can use that LUT to
populate the MADT, decoupling the algorithm that relates CPU IDs and APIC IDs
from hvmloader.

While at it, also remove ap_callin, as writing the APIC ID serves the same
purpose.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v4:
  * Removed bogus ! in ASSERT() statement introduced in v3.
---
 tools/firmware/hvmloader/config.h       |  6 ++-
 tools/firmware/hvmloader/hvmloader.c    |  4 +-
 tools/firmware/hvmloader/smp.c          | 54 ++++++++++++++++++++-----
 tools/include/xen-tools/common-macros.h |  5 +++
 4 files changed, 56 insertions(+), 13 deletions(-)

diff --git a/tools/firmware/hvmloader/config.h b/tools/firmware/hvmloader/config.h
index cd716bf39245..213ac1f28e17 100644
--- a/tools/firmware/hvmloader/config.h
+++ b/tools/firmware/hvmloader/config.h
@@ -4,6 +4,8 @@
 #include <stdint.h>
 #include <stdbool.h>
 
+#include <xen/hvm/hvm_info_table.h>
+
 enum virtual_vga { VGA_none, VGA_std, VGA_cirrus, VGA_pt };
 extern enum virtual_vga virtual_vga;
 
@@ -48,8 +50,10 @@ extern uint8_t ioapic_version;
 
 #define IOAPIC_ID           0x01
 
+extern uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS];
+
 #define LAPIC_BASE_ADDRESS  0xfee00000
-#define LAPIC_ID(vcpu_id)   ((vcpu_id) * 2)
+#define LAPIC_ID(vcpu_id)   (CPU_TO_X2APICID[(vcpu_id)])
 
 #define PCI_ISA_DEVFN       0x08    /* dev 1, fn 0 */
 #define PCI_ISA_IRQ_MASK    0x0c20U /* ISA IRQs 5,10,11 are PCI connected */
diff --git a/tools/firmware/hvmloader/hvmloader.c b/tools/firmware/hvmloader/hvmloader.c
index f8af88fabf24..5c02e8fc226a 100644
--- a/tools/firmware/hvmloader/hvmloader.c
+++ b/tools/firmware/hvmloader/hvmloader.c
@@ -341,11 +341,11 @@ int main(void)
 
     printf("CPU speed is %u MHz\n", get_cpu_mhz());
 
+    smp_initialise();
+
     apic_setup();
     pci_setup();
 
-    smp_initialise();
-
     perform_tests();
 
     if ( bios->bios_info_setup )
diff --git a/tools/firmware/hvmloader/smp.c b/tools/firmware/hvmloader/smp.c
index 5d46eee1c5f4..43eb17e4e3be 100644
--- a/tools/firmware/hvmloader/smp.c
+++ b/tools/firmware/hvmloader/smp.c
@@ -29,7 +29,34 @@
 
 #include <xen/vcpu.h>
 
-static int ap_callin;
+/**
+ * Lookup table of x2APIC IDs.
+ *
+ * Each entry is populated by its respective CPU as it comes online. This is
+ * required to generate the MADT with minimal assumptions about ID relationships.
+ */
+uint32_t CPU_TO_X2APICID[HVM_MAX_VCPUS];
+
+/** Tristate on x2APIC support: 1 = yes, 0 = no, -1 = unknown. */
+static int has_x2apic = -1;
+
+static uint32_t read_apic_id(void)
+{
+    uint32_t apic_id;
+
+    if ( has_x2apic )
+        cpuid(0xb, NULL, NULL, NULL, &apic_id);
+    else
+    {
+        cpuid(1, NULL, &apic_id, NULL, NULL);
+        apic_id >>= 24;
+    }
+
+    /* Never called by cpu0, so should never return 0 */
+    ASSERT(apic_id);
+
+    return apic_id;
+}
 
 static void __attribute__((regparm(1))) cpu_setup(unsigned int cpu)
 {
@@ -37,13 +64,17 @@ static void __attribute__((regparm(1))) cpu_setup(unsigned int cpu)
     cacheattr_init();
     printf("done.\n");
 
-    if ( !cpu ) /* Used on the BSP too */
+    /* The BSP exits early because its APIC ID is known to be zero */
+    if ( !cpu )
         return;
 
     wmb();
-    ap_callin = 1;
+    ACCESS_ONCE(CPU_TO_X2APICID[cpu]) = read_apic_id();
 
-    /* After this point, the BSP will shut us down. */
+    /*
+     * After this point the BSP will shut us down. A write to
+     * CPU_TO_X2APICID[cpu] signals the BSP to bring down `cpu`.
+     */
 
     for ( ;; )
         asm volatile ( "hlt" );
@@ -54,10 +85,6 @@ static void boot_cpu(unsigned int cpu)
     static uint8_t ap_stack[PAGE_SIZE] __attribute__ ((aligned (16)));
     static struct vcpu_hvm_context ap;
 
-    /* Initialise shared variables. */
-    ap_callin = 0;
-    wmb();
-
     /* Wake up the secondary processor */
     ap = (struct vcpu_hvm_context) {
         .mode = VCPU_HVM_MODE_32B,
@@ -90,10 +117,11 @@ static void boot_cpu(unsigned int cpu)
         BUG();
 
     /*
-     * Wait for the secondary processor to complete initialisation.
+     * Wait for the secondary processor to complete initialisation,
+     * which is signaled by its x2APIC ID being written to the LUT.
      * Do not touch shared resources meanwhile.
      */
-    while ( !ap_callin )
+    while ( !ACCESS_ONCE(CPU_TO_X2APICID[cpu]) )
         cpu_relax();
 
     /* Take the secondary processor offline. */
@@ -104,6 +132,12 @@ static void boot_cpu(unsigned int cpu)
 void smp_initialise(void)
 {
     unsigned int i, nr_cpus = hvm_info->nr_vcpus;
+    uint32_t ecx;
+
+    cpuid(1, NULL, NULL, &ecx, NULL);
+    has_x2apic = (ecx >> 21) & 1;
+    if ( has_x2apic )
+        printf("x2APIC supported\n");
 
     printf("Multiprocessor initialisation:\n");
     cpu_setup(0);
diff --git a/tools/include/xen-tools/common-macros.h b/tools/include/xen-tools/common-macros.h
index 60912225cb7a..336c6309d96e 100644
--- a/tools/include/xen-tools/common-macros.h
+++ b/tools/include/xen-tools/common-macros.h
@@ -108,4 +108,9 @@
 #define get_unaligned(ptr)      get_unaligned_t(typeof(*(ptr)), ptr)
 #define put_unaligned(val, ptr) put_unaligned_t(typeof(*(ptr)), val, ptr)
 
+#define __ACCESS_ONCE(x) ({                             \
+            (void)(typeof(x))0; /* Scalar typecheck. */ \
+            (volatile typeof(x) *)&(x); })
+#define ACCESS_ONCE(x) (*__ACCESS_ONCE(x))
+
 #endif	/* __XEN_TOOLS_COMMON_MACROS__ */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749269.1157302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVk-0005uD-Oz; Wed, 26 Jun 2024 16:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749269.1157302; Wed, 26 Jun 2024 16:28:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVk-0005u3-K0; Wed, 26 Jun 2024 16:28:44 +0000
Received: by outflank-mailman (input) for mailman id 749269;
 Wed, 26 Jun 2024 16:28:43 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVj-0005pK-0r
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:43 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 25b93049-33d9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 18:28:40 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id
 a640c23a62f3a-a725d756d41so135752866b.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:40 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25b93049-33d9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419320; x=1720024120; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=qAI2lFL9v1i73OxDsNgyfceqLCFm8BlW5RHMtqw9d8k=;
        b=jgHGJ4ueZG/CuGnBJOPLAneSg22Rrabw8ZB/ZzjAOHEVM8hj2f4wtaRCED7JSl1Adm
         cB/2ZB5FT2uOP+mUFqx43Z07rXCoQS+l0ZTix6aFQXU5MjY2d6atVb7JiX9oEzmzIkAc
         9x3DSEJvPDDg293JWYdDO/3JyBsBr3Y8f8mnA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419320; x=1720024120;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=qAI2lFL9v1i73OxDsNgyfceqLCFm8BlW5RHMtqw9d8k=;
        b=m+Nya1iTFScW0Th2IxICSNG2r4vKDkBHm8zB/sAgdp3+WL650KQ6V+aKTUN0Ofpvx7
         8HUuHhh9Dqrgmi/HJPpib5q9emATwiJ5boG6PwWic9VjxwvuOk5Y6uctURLQkuO/8gL8
         4LCbfiDibhvIFQkMKQvfQkIwu5dUThjcsIDhgC+XwjETKp6+IhVkWDKReJ0U7Z4gC6dU
         Qze2Pn7cPlpzBcLHNVWoyqqhsLxMaTkcDFuFR6p7b4aZJBhT02tG/rLQcauB2KJfCURv
         7qFqIqgWGRoUSjqeJ/1JdBO7VdRFH35CQI/IIygHP9eYVySJMia4+syh0sxGAS87bLp6
         6NPg==
X-Gm-Message-State: AOJu0YyzWumUziGGYdJneE1PV+jm1zffnO7IcbBfspr001+mmYdHcY8G
	sGp/l0Sv/xsKS8/Y91szWqUvOtbWiM5Zc6qotaKWRXmox/fOiWuLaZjgP+697PpSIrLL2l3e8jV
	lKeg=
X-Google-Smtp-Source: AGHT+IFL3aOpcUv5tos2lUCPGjcF7zAXhjcAbWgKdPNXb/lpgRWr7CZAG92QA3pbaZvO3I1MjsZLLQ==
X-Received: by 2002:a17:907:d382:b0:a72:5d7f:dd4a with SMTP id a640c23a62f3a-a7296fa704emr8196066b.25.1719419319978;
        Wed, 26 Jun 2024 09:28:39 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@vates.tech>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for 4.19 v4 01/10] tools/hvmloader: Fix non-deterministic cpuid()
Date: Wed, 26 Jun 2024 17:28:28 +0100
Message-Id: <f8bfcfeca0a76f28703b164e1e65fb5919325b13.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

hvmloader's cpuid() implementation deviates from Xen's in that the value
passed in ecx is unspecified. This means that on leaves implementing subleaves
it's unspecified which one you get; more than likely an invalid one.

Import Xen's implementation so there are no surprises.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
This is a fix for a latent bug. Should go into 4.19.

v4:
  * New patch
---
 tools/firmware/hvmloader/util.c |  9 ---------
 tools/firmware/hvmloader/util.h | 27 ++++++++++++++++++++++++---
 2 files changed, 24 insertions(+), 12 deletions(-)

diff --git a/tools/firmware/hvmloader/util.c b/tools/firmware/hvmloader/util.c
index c34f077b38e3..d3b3f9038e64 100644
--- a/tools/firmware/hvmloader/util.c
+++ b/tools/firmware/hvmloader/util.c
@@ -267,15 +267,6 @@ memcmp(const void *s1, const void *s2, unsigned n)
     return 0;
 }
 
-void
-cpuid(uint32_t idx, uint32_t *eax, uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
-{
-    asm volatile (
-        "cpuid"
-        : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
-        : "0" (idx) );
-}
-
 static const char hex_digits[] = "0123456789abcdef";
 
 /* Write a two-character hex representation of 'byte' to digits[].
diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
index deb823a892ef..3ad7c4f6d6a2 100644
--- a/tools/firmware/hvmloader/util.h
+++ b/tools/firmware/hvmloader/util.h
@@ -184,9 +184,30 @@ int uart_exists(uint16_t uart_base);
 int lpt_exists(uint16_t lpt_base);
 int hpet_exists(unsigned long hpet_base);
 
-/* Do cpuid instruction, with operation 'idx' */
-void cpuid(uint32_t idx, uint32_t *eax, uint32_t *ebx,
-           uint32_t *ecx, uint32_t *edx);
+/* Some CPUID calls want 'count' to be placed in ecx */
+static inline void cpuid_count(
+    uint32_t op,
+    uint32_t count,
+    uint32_t *eax,
+    uint32_t *ebx,
+    uint32_t *ecx,
+    uint32_t *edx)
+{
+    asm volatile ( "cpuid"
+          : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
+          : "0" (op), "c" (count) );
+}
+
+/* Generic CPUID function (subleaf 0) */
+static inline void cpuid(
+    uint32_t leaf,
+    uint32_t *eax,
+    uint32_t *ebx,
+    uint32_t *ecx,
+    uint32_t *edx)
+{
+    cpuid_count(leaf, 0, eax, ebx, ecx, edx);
+}
 
 /* Read the TSC register. */
 static inline uint64_t rdtsc(void)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749276.1157374 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVr-0007q0-Aw; Wed, 26 Jun 2024 16:28:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749276.1157374; Wed, 26 Jun 2024 16:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVr-0007pH-6b; Wed, 26 Jun 2024 16:28:51 +0000
Received: by outflank-mailman (input) for mailman id 749276;
 Wed, 26 Jun 2024 16:28:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVp-0005pK-Sm
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:49 +0000
Received: from mail-lf1-x12d.google.com (mail-lf1-x12d.google.com
 [2a00:1450:4864:20::12d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2a480a84-33d9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 18:28:48 +0200 (CEST)
Received: by mail-lf1-x12d.google.com with SMTP id
 2adb3069b0e04-52ce9ba0cedso4525241e87.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:48 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a480a84-33d9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419327; x=1720024127; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=lYHRDHK5UDpTmTiIq7osG/ZblShK9KE6SHig+ML+kuw=;
        b=GBrmuotwVwjrUTe09bHDOlqWVGDWDmSJlfHT8uZVpuMKQIfXggwkZ7DldGGlbb1qLk
         hheWaXHf5NOp0MPspObzqxsJyB9aI0ocJJvciccQo5qpmuVbqU7JDYTCp4KUFeoLqWn0
         Z+CRjREoe5tqL8PUfK4BLGVG9glDZTAFC10iQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419327; x=1720024127;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=lYHRDHK5UDpTmTiIq7osG/ZblShK9KE6SHig+ML+kuw=;
        b=VMqzYsoWX1cmWAhGfrvUGu0BzVZbcAjXGBgW86hb1Rho0IGxQ8+4AaSm7hHPf9V9hr
         BFwKISl3PcocnyFYpRMmmLvJEGjqH/3xSQSkZsYa6qHP3P4T93gbL2AGB8z7Kmz3r69o
         DG+Km8uOonsr/DF62T3BfNfEETzMXLfEii39jfLWkoZhO0eY9L9v+NM3Sy0t7PA87Ie7
         YSteMNyzipxEP1Cetqd46nQix8ANsKNKANtmK9dp9RvahbMxNAx3lClkGjc5G6dg9ji0
         Ou8FFhzzECy0+IPJWG4CSUV6E7bUzrKfmaUxKpWf98We//Y/58t+ruFLUBzUrMgS8HTN
         bdXQ==
X-Gm-Message-State: AOJu0YwhMcc8bwQXVXw6PD1Cws0MFCmtTknjAp0uKbUgjP9JURFGvJ/U
	2ygWwTtRcdKqNwUizRY8FXk8vZEee37KhLkoiOKR0wlr317G1bh02ZXoWdme00nDzlAfKDq+cgY
	n4Zo=
X-Google-Smtp-Source: AGHT+IG3ALNl7GrDyAkGeat0PBA151FCdB5bzNb5KBoFoIZgPdmSqsTu6xRbFtJlJR/ERWDh3LdMSQ==
X-Received: by 2002:ac2:4c8c:0:b0:52c:9ae0:beed with SMTP id 2adb3069b0e04-52ce18526ecmr8127045e87.52.1719419327515;
        Wed, 26 Jun 2024 09:28:47 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Anthony PERARD <anthony.perard@vates.tech>
Subject: [PATCH v4 08/10] xen/x86: Derive topologically correct x2APIC IDs from the policy
Date: Wed, 26 Jun 2024 17:28:35 +0100
Message-Id: <b060f84d5119e537c77a8f0f42f16d3d8e1875fd.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Implements the helper for mapping vcpu_id to x2apic_id given a valid
topology in a policy. The algorithm is written with the intention of extending
it to leaves 0x1f and extended 0x26 in the future.

The toolstack doesn't set leaf 0xb and the HVM default policy has it cleared,
so the leaf is effectively not implemented. In that case, the new helper just
returns the legacy mapping.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v2->v4 (v3 was not reviewed):
  * Rewrite eXX notation for CPUID leaves as "extended XX"
  * Newlines and linewraps
  * In the unit-test, reduce the scope of `policy`
  * In the unit-test, fail if topology generation fails.
---
 tools/tests/cpu-policy/test-cpu-policy.c | 68 +++++++++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h     | 11 ++++
 xen/lib/x86/policy.c                     | 76 ++++++++++++++++++++++++
 3 files changed, 155 insertions(+)

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 849d7cebaa7c..e5f9b8f7ee39 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -781,6 +781,73 @@ static void test_topo_from_parts(void)
     }
 }
 
+static void test_x2apic_id_from_vcpu_id_success(void)
+{
+    static const struct test {
+        unsigned int vcpu_id;
+        unsigned int threads_per_core;
+        unsigned int cores_per_pkg;
+        uint32_t x2apic_id;
+        uint8_t x86_vendor;
+    } tests[] = {
+        {
+            .vcpu_id = 3, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 1 << 2,
+        },
+        {
+            .vcpu_id = 6, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 2 << 2,
+        },
+        {
+            .vcpu_id = 24, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = 1 << 5,
+        },
+        {
+            .vcpu_id = 35, .threads_per_core = 3, .cores_per_pkg = 8,
+            .x2apic_id = (35 % 3) | (((35 / 3) % 8) << 2) | ((35 / 24) << 5),
+        },
+        {
+            .vcpu_id = 96, .threads_per_core = 7, .cores_per_pkg = 3,
+            .x2apic_id = (96 % 7) | (((96 / 7) % 3) << 3) | ((96 / 21) << 5),
+        },
+    };
+
+    const uint8_t vendors[] = {
+        X86_VENDOR_INTEL,
+        X86_VENDOR_AMD,
+        X86_VENDOR_CENTAUR,
+        X86_VENDOR_SHANGHAI,
+        X86_VENDOR_HYGON,
+    };
+
+    printf("Testing x2apic id from vcpu id success:\n");
+
+    /* Perform the test run on every vendor we know about */
+    for ( size_t i = 0; i < ARRAY_SIZE(vendors); ++i )
+    {
+        for ( size_t j = 0; j < ARRAY_SIZE(tests); ++j )
+        {
+            struct cpu_policy policy = { .x86_vendor = vendors[i] };
+            const struct test *t = &tests[j];
+            uint32_t x2apic_id;
+            int rc = x86_topo_from_parts(&policy, t->threads_per_core,
+                                         t->cores_per_pkg);
+
+            if ( rc ) {
+                fail("FAIL[%d] - x86_topo_from_parts() failed\n", rc);
+                continue;
+            }
+
+            x2apic_id = x86_x2apic_id_from_vcpu_id(&policy, t->vcpu_id);
+            if ( x2apic_id != t->x2apic_id )
+                fail("FAIL - '%s cpu%u %u t/c %u c/p'. bad x2apic_id: expected=%u actual=%u\n",
+                     x86_cpuid_vendor_to_str(policy.x86_vendor),
+                     t->vcpu_id, t->threads_per_core, t->cores_per_pkg,
+                     t->x2apic_id, x2apic_id);
+        }
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -799,6 +866,7 @@ int main(int argc, char **argv)
     test_is_compatible_failure();
 
     test_topo_from_parts();
+    test_x2apic_id_from_vcpu_id_success();
 
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 79fdf9045a1b..d545d4727711 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -542,6 +542,17 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err);
 
+/**
+ * Calculates the x2APIC ID of a vCPU given a CPU policy
+ *
+ * If the policy lacks leaf 0xb, it falls back to the legacy mapping of apic_id=cpu*2
+ *
+ * @param p          CPU policy of the domain.
+ * @param id         vCPU ID of the vCPU.
+ * @returns x2APIC ID of the vCPU.
+ */
+uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id);
+
 /**
  * Synthesise topology information in `p` given high-level constraints
  *
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index 72b67b44a893..c52b7192559a 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -2,6 +2,82 @@
 
 #include <xen/lib/x86/cpu-policy.h>
 
+static uint32_t parts_per_higher_scoped_level(const struct cpu_policy *p,
+                                              size_t lvl)
+{
+    /*
+     * `nr_logical` reported by Intel is the number of THREADS contained in
+     * the next topological scope. For example, assuming a system with 2
+     * threads/core and 3 cores/module in a fully symmetric topology,
+     * `nr_logical` at the core level will report 6. Because it's reporting
+     * the number of threads in a module.
+     *
+     * On AMD/Hygon, nr_logical is already normalized by the higher scoped
+     * level (cores/complex, etc) so we can return it as-is.
+     */
+    if ( p->x86_vendor != X86_VENDOR_INTEL || !lvl )
+        return p->topo.subleaf[lvl].nr_logical;
+
+    return p->topo.subleaf[lvl].nr_logical /
+           p->topo.subleaf[lvl - 1].nr_logical;
+}
+
+uint32_t x86_x2apic_id_from_vcpu_id(const struct cpu_policy *p, uint32_t id)
+{
+    uint32_t shift = 0, x2apic_id = 0;
+
+    /* In the absence of topology leaves, fall back to the traditional mapping */
+    if ( !p->topo.subleaf[0].type )
+        return id * 2;
+
+    /*
+     * `id` means different things at different points of the algorithm
+     *
+     * At lvl=0: global thread_id (same as vcpu_id)
+     * At lvl=1: global core_id
+     * At lvl=2: global socket_id (actually complex_id in AMD, module_id
+     *                             in Intel, but the name is inconsequential)
+     *
+     *                 +--+
+     *            ____ |#0| ______           <= 1 socket
+     *           /     +--+       \+--+
+     *       __#0__              __|#1|__    <= 2 cores/socket
+     *      /   |  \        +--+/  +-|+  \
+     *    #0   #1   #2      |#3|    #4    #5 <= 3 threads/core
+     *                      +--+
+     *
+     * ... and so on. Global in this context means that it's a unique
+     * identifier for the whole topology, and not relative to the level
+     * it's in. For example, in the diagram shown above, we're looking at
+     * thread #3 in the global sense, though it's #0 within its core.
+     *
+     * Note that dividing a global thread_id by the number of threads per
+     * core returns the global core id that contains it. e.g: 0, 1 or 2
+     * divided by 3 returns core_id=0. 3, 4 or 5 divided by 3 returns core
+     * 1, and so on. An analogous argument holds for higher levels. This is
+     * the property we exploit to derive x2apic_id from vcpu_id.
+     *
+     * NOTE: `topo` is currently derived from leaf 0xb, which is bound to two
+     * levels, but once we track leaves 0x1f (or extended 0x26) there will be a
+     * few more. The algorithm is written to cope with that case.
+     */
+    for ( uint32_t i = 0; i < ARRAY_SIZE(p->topo.raw); i++ )
+    {
+        uint32_t nr_parts;
+
+        if ( !p->topo.subleaf[i].type )
+            /* sentinel subleaf */
+            break;
+
+        nr_parts = parts_per_higher_scoped_level(p, i);
+        x2apic_id |= (id % nr_parts) << shift;
+        id /= nr_parts;
+        shift = p->topo.subleaf[i].id_shift;
+    }
+
+    return (id << shift) | x2apic_id;
+}
+
 static unsigned int order(unsigned int n)
 {
     ASSERT(n); /* clz(0) is UB */
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749277.1157382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVs-00088n-Md; Wed, 26 Jun 2024 16:28:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749277.1157382; Wed, 26 Jun 2024 16:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVs-00087q-II; Wed, 26 Jun 2024 16:28:52 +0000
Received: by outflank-mailman (input) for mailman id 749277;
 Wed, 26 Jun 2024 16:28:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVq-0005pK-Iz
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:50 +0000
Received: from mail-ej1-x631.google.com (mail-ej1-x631.google.com
 [2a00:1450:4864:20::631])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2ab9b7b4-33d9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 18:28:49 +0200 (CEST)
Received: by mail-ej1-x631.google.com with SMTP id
 a640c23a62f3a-a727d9dd367so242573966b.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:49 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ab9b7b4-33d9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419328; x=1720024128; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8mOpjR7SSQuHGedpTmNTfhfxLjqgXd8qu6qzKi4f14A=;
        b=HdvfMN91FrBV6e5vYd4uDdyJztAfUvRkjt+yORg27afdWOwe1iNHeVYO11LFylImEv
         jV3xDTuf/ULppt2Xn96ROy8825ifEfMb2whcARFRc5JcEybk7eBGxIAUVWBoH7rFHW3o
         LJ83Ayb3gfsvjfkqdomYtKzlPDKSw7b2NETt0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419328; x=1720024128;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=8mOpjR7SSQuHGedpTmNTfhfxLjqgXd8qu6qzKi4f14A=;
        b=s2n7bSOmRpfgiHYF30zqHl1+Dxb2A8KKyGiTQ3rk4PvET8hu3lmR4Qsr9VgP23te4/
         BrcQSk8E7zGMsf2exyHQmt1gzG2al+5MafkQ1x+ITjip7ZgWkyQiAyPQpDUvj3rpB+lv
         +1ArcXlCIOh2n/c1mcG+DqgZOwxRVmQJM0KfQT9qEx2NlJSaT/vnnx/AaI5qCrWsMzhc
         q6CQkAAU9s2oKVK/ES2B09BBBZy/wDqvX6Jwh7oNseyfv5dRXf3TbkeJEMOu5NKSYFtv
         jUVanrB/obYxj/ywijDOQyhMWftJjaOPQL7cmFULozyEzbXnTL2/J0s4cHGfylGm7Pg9
         h/dw==
X-Gm-Message-State: AOJu0YzbHqP1G7osr1mgH0B6KeU1+Lweq22tqivQOoIFN62SJyBiRG/h
	KqfvRT388FeOGp7SMIfSeC4Vggd66A5IZGO08+mPkmt2Q1iZcDp6wnkUWQq4mDOo4Xg1ZmsjVY5
	JvQ8=
X-Google-Smtp-Source: AGHT+IH94sPg3I7PP/MnzrejaBHaa0UZyvLZuA9p7IfIniwFctzAD8RaBLO7yuYZqAkfLY2FSC4lLA==
X-Received: by 2002:a17:907:a649:b0:a72:8135:2d4f with SMTP id a640c23a62f3a-a7281352e3cmr347847466b.48.1719419328384;
        Wed, 26 Jun 2024 09:28:48 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Anthony PERARD <anthony.perard@vates.tech>,
	Juergen Gross <jgross@suse.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH v4 09/10] xen/x86: Synthesise domain topologies
Date: Wed, 26 Jun 2024 17:28:36 +0100
Message-Id: <acfa4847e8f09a3206b7f88d37cfcc85cb143f17.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Expose sensible topologies in leaf 0xb. At the moment it synthesises non-HT
systems, in line with the intent of the previous code.

Leaf 0xb in the host policy is no longer zapped; instead, the guest {max,def}
policies have their topology leaves zapped, with the intent that the toolstack
populates them. There's no current use for the topology information in the host
policy, but keeping it does no harm.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
This patch MUST NOT go in without the following intimately related one
  "Set topologically correct x2APIC IDs for each vCPU"

Otherwise we expose one topology and then create APIC IDs that don't reflect it.

v2->v4 (v3 was not reviewed):
  * Adjustments to the commit message
  * Various newline/linewrap fixes
  * Also print error code in new ERROR() message
  * Preserve old logic to recreate old CPUID policy to enable migrations from
    versions of Xen without policy information in the migration stream.
---
 tools/libs/guest/xg_cpuid_x86.c | 24 +++++++++++++++++++++++-
 xen/arch/x86/cpu-policy.c       |  9 ++++++---
 2 files changed, 29 insertions(+), 4 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 4453178100ad..6062dcab01ce 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -725,8 +725,16 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         p->policy.basic.htt       = test_bit(X86_FEATURE_HTT, host_featureset);
         p->policy.extd.cmp_legacy = test_bit(X86_FEATURE_CMP_LEGACY, host_featureset);
     }
-    else
+    else if ( restore )
     {
+        /*
+         * Reconstruct the topology exposed on Xen <= 4.13. It makes very little
+         * sense, but it's what those guests saw so it's set in stone now.
+         *
+         * Guests from Xen 4.14 onwards carry their own CPUID leaves in the
+         * migration stream so they don't need special treatment.
+         */
+
         /*
          * Topology for HVM guests is entirely controlled by Xen.  For now, we
          * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
@@ -782,6 +790,20 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
             break;
         }
     }
+    else
+    {
+        /* TODO: Expose the ability to choose a custom topology for HVM/PVH */
+        unsigned int threads_per_core = 1;
+        unsigned int cores_per_pkg = di.max_vcpu_id + 1;
+
+        rc = x86_topo_from_parts(&p->policy, threads_per_core, cores_per_pkg);
+        if ( rc )
+        {
+            ERROR("Failed to generate topology: rc=%d t/c=%u c/p=%u",
+                  rc, threads_per_core, cores_per_pkg);
+            goto out;
+        }
+    }
 
     nr_leaves = ARRAY_SIZE(p->leaves);
     rc = x86_cpuid_copy_to_buffer(&p->policy, p->leaves, &nr_leaves);
diff --git a/xen/arch/x86/cpu-policy.c b/xen/arch/x86/cpu-policy.c
index 304dc20cfab8..55a95f6e164c 100644
--- a/xen/arch/x86/cpu-policy.c
+++ b/xen/arch/x86/cpu-policy.c
@@ -263,9 +263,6 @@ static void recalculate_misc(struct cpu_policy *p)
 
     p->basic.raw[0x8] = EMPTY_LEAF;
 
-    /* TODO: Rework topology logic. */
-    memset(p->topo.raw, 0, sizeof(p->topo.raw));
-
     p->basic.raw[0xc] = EMPTY_LEAF;
 
     p->extd.e1d &= ~CPUID_COMMON_1D_FEATURES;
@@ -613,6 +610,9 @@ static void __init calculate_pv_max_policy(void)
     recalculate_xstate(p);
 
     p->extd.raw[0xa] = EMPTY_LEAF; /* No SVM for PV guests. */
+
+    /* Wipe host topology. Populated by toolstack */
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
 }
 
 static void __init calculate_pv_def_policy(void)
@@ -776,6 +776,9 @@ static void __init calculate_hvm_max_policy(void)
 
     /* It's always possible to emulate CPUID faulting for HVM guests */
     p->platform_info.cpuid_faulting = true;
+
+    /* Wipe host topology. Populated by toolstack */
+    memset(p->topo.raw, 0, sizeof(p->topo.raw));
 }
 
 static void __init calculate_hvm_def_policy(void)
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:28:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:28:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749278.1157392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVt-0008Gq-H7; Wed, 26 Jun 2024 16:28:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749278.1157392; Wed, 26 Jun 2024 16:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVVt-0008Eb-2y; Wed, 26 Jun 2024 16:28:53 +0000
Received: by outflank-mailman (input) for mailman id 749278;
 Wed, 26 Jun 2024 16:28:51 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVVr-0005pK-Dp
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:28:51 +0000
Received: from mail-ej1-x62d.google.com (mail-ej1-x62d.google.com
 [2a00:1450:4864:20::62d])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 2b24fcd9-33d9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 18:28:49 +0200 (CEST)
Received: by mail-ej1-x62d.google.com with SMTP id
 a640c23a62f3a-a7252bfe773so452804766b.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:28:49 -0700 (PDT)
Received: from EMEAENGAAD19049.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a7291af7912sm42791866b.128.2024.06.26.09.28.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 09:28:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b24fcd9-33d9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719419329; x=1720024129; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=FPZPi/1GxgTiDwYl688KIib+0eFv4vArXlISETLlZHk=;
        b=gSl/22rQ+1SlO8vAi0W5OW3RMsrUXEfN6Hn9ZDF3oy/4SdFlOoUHE0RMijQf0HLwLp
         Rukh+DtBtFmrVcxTsqIr+/66h4Ued+4j8m2ZNDBhZd0ClegQT2Rn3yv51cjAbhFSFIin
         j48IyrdMJpXFyEypNQgcjX4NMyUuc2NmWFk18=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719419329; x=1720024129;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=FPZPi/1GxgTiDwYl688KIib+0eFv4vArXlISETLlZHk=;
        b=pS5/CqoJCUre+VX3hx+/naMNJIJkWJdT2h83hHTkgNDPyY54NNFy2pBiadqGEFBdPz
         potJdQOyqwcbYjC5AuXU+NW11+miljfJ1AoRMwMcXP+CjbtmCr0P2elxCESjiKIE1jRK
         qOJBjyN5nS12oXZgXIh0QaxUhkEKQV5omnRm2X/bL58QEF93mjbnmBPpWMzPZm80SwP0
         NBgLTFwY/K29j5E3fzzsCGc4Uja4CuY35xMRoZUw6ZGsN11i8AdW2fwhuHYFInBn0tM5
         M/WM1B7fPXLJQ31qXppQdKve+SGcMdxoacWyajiIl+v9gWoEfQFtgBvQWABgPVoU0NHi
         UMIA==
X-Gm-Message-State: AOJu0Yxd+bIAylzpubVBunoqziwOc+GV3MH/MNHZejE2govC9eEJRgrD
	AEIKhM3KhwX1A6BMsm0QhfN3+oBwL1H0vg1wnal7U5uR+OYbEw/mV89kNvtXPSnCH8nKjIdKnAh
	qavc=
X-Google-Smtp-Source: AGHT+IGZnws2mdi+XVnvyuewQ3id60JRRLgyqmqgHjjPNv44XKzzcauz1g5P06nGpalKkaBxSPUShQ==
X-Received: by 2002:a17:907:8e93:b0:a6e:f62d:bd02 with SMTP id a640c23a62f3a-a7245c84f2emr837266166b.7.1719419329107;
        Wed, 26 Jun 2024 09:28:49 -0700 (PDT)
From: Alejandro Vallejo <alejandro.vallejo@cloud.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
	Anthony PERARD <anthony.perard@vates.tech>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v4 10/10] tools/libguest: Set topologically correct x2APIC IDs for each vCPU
Date: Wed, 26 Jun 2024 17:28:37 +0100
Message-Id: <94a6d0ff6ce8d0e5be9546efba7aa50d2a21a2b8.1719416329.git.alejandro.vallejo@cloud.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <cover.1719416329.git.alejandro.vallejo@cloud.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Have the toolstack populate the new x2apic_id field in the LAPIC save record
with the topologically correct ID intended for each vCPU.

Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
---
v4:
  * New patch. Replaced v3's method of letting Xen find out via the same
    algorithm toolstack uses.
---
 tools/libs/guest/xg_dom_x86.c | 37 ++++++++++++++++++++++++++++++++++-
 1 file changed, 36 insertions(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c
index 82ea3e2aab0b..2ae3a779b016 100644
--- a/tools/libs/guest/xg_dom_x86.c
+++ b/tools/libs/guest/xg_dom_x86.c
@@ -1004,19 +1004,40 @@ static int vcpu_hvm(struct xc_dom_image *dom)
         HVM_SAVE_TYPE(HEADER) header;
         struct hvm_save_descriptor mtrr_d;
         HVM_SAVE_TYPE(MTRR) mtrr;
+        struct hvm_save_descriptor lapic_d;
+        HVM_SAVE_TYPE(LAPIC) lapic;
         struct hvm_save_descriptor end_d;
         HVM_SAVE_TYPE(END) end;
     } vcpu_ctx;
-    /* Context from full_ctx */
+    /* Contexts from full_ctx */
     const HVM_SAVE_TYPE(MTRR) *mtrr_record;
+    const HVM_SAVE_TYPE(LAPIC) *lapic_record;
     /* Raw context as taken from Xen */
     uint8_t *full_ctx = NULL;
+    xc_cpu_policy_t *policy = xc_cpu_policy_init();
     int rc;
 
     DOMPRINTF_CALLED(dom->xch);
 
     assert(dom->max_vcpus);
 
+    /*
+     * Fetch the CPU policy of this domain. We need it to determine the APIC
+     * ID of each vCPU in a manner consistent with the exported topology.
+     *
+     * TODO: It's silly to query a policy we have ourselves created. It should
+     *       instead be part of xc_dom_image
+     */
+
+    rc = xc_cpu_policy_get_domain(dom->xch, dom->guest_domid, policy);
+    if ( rc != 0 )
+    {
+        xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
+                     "%s: unable to fetch cpu policy for dom%u (rc=%d)",
+                     __func__, dom->guest_domid, rc);
+        goto out;
+    }
+
     /*
      * Get the full HVM context in order to have the header, it is not
      * possible to get the header with getcontext_partial, and crafting one
@@ -1111,6 +1132,8 @@ static int vcpu_hvm(struct xc_dom_image *dom)
     vcpu_ctx.mtrr_d.typecode = HVM_SAVE_CODE(MTRR);
     vcpu_ctx.mtrr_d.length = HVM_SAVE_LENGTH(MTRR);
     vcpu_ctx.mtrr = *mtrr_record;
+    vcpu_ctx.lapic_d.typecode = HVM_SAVE_CODE(LAPIC);
+    vcpu_ctx.lapic_d.length = HVM_SAVE_LENGTH(LAPIC);
     vcpu_ctx.end_d = bsp_ctx.end_d;
     vcpu_ctx.end = bsp_ctx.end;
 
@@ -1124,6 +1147,17 @@ static int vcpu_hvm(struct xc_dom_image *dom)
     {
         vcpu_ctx.mtrr_d.instance = i;
 
+        lapic_record = hvm_get_save_record(full_ctx, HVM_SAVE_CODE(LAPIC), i);
+        if ( !lapic_record )
+        {
+            xc_dom_panic(dom->xch, XC_INTERNAL_ERROR,
+                         "%s: unable to get LAPIC[%d] save record", __func__, i);
+            goto out;
+        }
+        vcpu_ctx.lapic = *lapic_record;
+        vcpu_ctx.lapic.x2apic_id = x86_x2apic_id_from_vcpu_id(&policy->policy, i);
+        vcpu_ctx.lapic_d.instance = i;
+
         rc = xc_domain_hvm_setcontext(dom->xch, dom->guest_domid,
                                           (uint8_t *)&vcpu_ctx, sizeof(vcpu_ctx));
         if ( rc != 0 )
@@ -1146,6 +1180,7 @@ static int vcpu_hvm(struct xc_dom_image *dom)
 
  out:
     free(full_ctx);
+    xc_cpu_policy_destroy(policy);
     return rc;
 }
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:33:31 2024
Date: Wed, 26 Jun 2024 17:33:23 +0100
Message-Id: <D2A3DPAFY4XJ.1HJA60OBU915H@cloud.com>
Subject: Re: [PATCH v2 (resend) 04/27] acpi: vmap pages in
 acpi_os_alloc_memory
From: "Alejandro Vallejo" <alejandro.vallejo@cloud.com>
To: "Jan Beulich" <jbeulich@suse.com>
Cc: <julien@xen.org>, <pdurrant@amazon.com>, <dwmw@amazon.com>, "Hongyan
 Xia" <hongyxia@amazon.com>, "Andrew Cooper" <andrew.cooper3@citrix.com>,
 "George Dunlap" <george.dunlap@citrix.com>, "Stefano Stabellini"
 <sstabellini@kernel.org>, "Wei Liu" <wl@xen.org>, "Julien Grall"
 <jgrall@amazon.com>, "Elias El Yandouzi" <eliasely@amazon.com>,
 <xen-devel@lists.xenproject.org>
X-Mailer: aerc 0.17.0
References: <20240116192611.41112-1-eliasely@amazon.com>
 <20240116192611.41112-5-eliasely@amazon.com>
 <D29ZZSXN0QPV.2627WUC2J3NUK@cloud.com>
 <55717cd6-4819-4935-82df-c04453b9676a@suse.com>
In-Reply-To: <55717cd6-4819-4935-82df-c04453b9676a@suse.com>

On Wed Jun 26, 2024 at 4:17 PM BST, Jan Beulich wrote:
> On 26.06.2024 15:54, Alejandro Vallejo wrote:
> > I'm late to the party but there's something bothering me a little.
> >
> > On Tue Jan 16, 2024 at 7:25 PM GMT, Elias El Yandouzi wrote:
> >> diff --git a/xen/common/vmap.c b/xen/common/vmap.c
> >> index 171271fae3..966a7e763f 100644
> >> --- a/xen/common/vmap.c
> >> +++ b/xen/common/vmap.c
> >> @@ -245,6 +245,11 @@ void *vmap(const mfn_t *mfn, unsigned int nr)
> >>      return __vmap(mfn, 1, nr, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> >>  }
> >>
> >> +void *vmap_contig(mfn_t mfn, unsigned int nr)
> >> +{
> >> +    return __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> >> +}
> >> +
> >>  unsigned int vmap_size(const void *va)
> >>  {
> >>      unsigned int pages = vm_size(va, VMAP_DEFAULT);
> >
> > How is vmap_contig() different from regular vmap()?
> >
> > vmap() calls map_pages_to_xen() `nr` times, while vmap_contig() calls it just
> > once. I'd expect both cases to work fine as they are. What am I missing? What
> > would make...
> >
> >> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> >> index 389505f786..ab80d6b2a9 100644
> >> --- a/xen/drivers/acpi/osl.c
> >> +++ b/xen/drivers/acpi/osl.c
> >> @@ -221,7 +221,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
> >>  	void *ptr;
> >>
> >> 	if (system_state == SYS_STATE_early_boot)
> >> -		return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
> >> +	{
> >> +		mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
> >> +
> >> +		return vmap_contig(mfn, PFN_UP(sz));
> > ... this statement not operate identically with regular vmap()? Or
> > probably more interestingly, what would preclude existing calls to vmap() not
> > operate under vmap_contig() instead?
>
> Note how vmap()'s first parameter is "const mfn_t *mfn". This needs to point
> to an array of "nr" MFNs. In order to use plain vmap() here, you'd first need
> to set up a suitably large array, populate it with increasing MFN values, and
> then make the call. Possible, but more complicated.
>
> Jan

I knew I must've been missing something. That pesky pointer... No wonder the
loop looked wonky. It was doing something completely different from what I
expected it to.
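For posterity, the extra bookkeeping that plain vmap() would have forced on the caller can be sketched stand-alone like this (mfn_t is simplified to a plain integer and `contig_mfn_array` is a hypothetical helper for illustration, not Xen code):

```c
#include <stdlib.h>

typedef unsigned long mfn_t; /* simplified stand-in for Xen's mfn_t */

/*
 * Build the per-page MFN array that plain vmap() would require for nr
 * physically contiguous frames starting at base. vmap_contig() avoids
 * this allocation entirely by taking a single MFN plus a granularity.
 * Returns NULL on allocation failure; the caller frees the array.
 */
static mfn_t *contig_mfn_array(mfn_t base, unsigned int nr)
{
    mfn_t *arr = malloc(nr * sizeof(*arr));

    if ( arr )
        for ( unsigned int i = 0; i < nr; i++ )
            arr[i] = base + i;

    return arr;
}
```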

That clarifies it. Thanks a bunch, Jan.

Cheers,
Alejandro


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:44:02 2024
Message-ID: <7ecf1b46-c1c2-42b5-b3cb-ab737ab67900@citrix.com>
Date: Wed, 26 Jun 2024 17:43:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for 4.19 v4 01/10] tools/hvmloader: Fix non-deterministic
 cpuid()
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, Anthony PERARD <anthony.perard@vates.tech>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
 <f8bfcfeca0a76f28703b164e1e65fb5919325b13.1719416329.git.alejandro.vallejo@cloud.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
In-Reply-To: <f8bfcfeca0a76f28703b164e1e65fb5919325b13.1719416329.git.alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/06/2024 5:28 pm, Alejandro Vallejo wrote:
> hvmloader's cpuid() implementation deviates from Xen's in that the value passed
> on ecx is unspecified. This means that when used on leaves that implement
> subleaves it's unspecified which one you get; though it's more than likely an
> invalid one.
>
> Import Xen's implementation so there are no surprises.

Fixes: 318ac791f9f9 ("Add utilities needed for SMBIOS generation to
hvmloader")

> Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
>
>
> diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
> index deb823a892ef..3ad7c4f6d6a2 100644
> --- a/tools/firmware/hvmloader/util.h
> +++ b/tools/firmware/hvmloader/util.h
> @@ -184,9 +184,30 @@ int uart_exists(uint16_t uart_base);
>  int lpt_exists(uint16_t lpt_base);
>  int hpet_exists(unsigned long hpet_base);
>  
> -/* Do cpuid instruction, with operation 'idx' */
> -void cpuid(uint32_t idx, uint32_t *eax, uint32_t *ebx,
> -           uint32_t *ecx, uint32_t *edx);
> +/* Some CPUID calls want 'count' to be placed in ecx */
> +static inline void cpuid_count(
> +    uint32_t op,
> +    uint32_t count,
> +    uint32_t *eax,
> +    uint32_t *ebx,
> +    uint32_t *ecx,
> +    uint32_t *edx)
> +{
> +    asm volatile ( "cpuid"
> +          : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
> +          : "0" (op), "c" (count) );

"a" to be consistent with "c".

Also it would be better to name the parameters as leaf and subleaf.

Both can be fixed on commit.  However, there's no use in HVMLoader
tickling this bug right now, so I'm not sure we want to rush this into
4.19 at this point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:46:57 2024
Message-ID: <2ecfd10e-c8f2-458c-bf07-e4472d22bcfe@quicinc.com>
Date: Wed, 26 Jun 2024 09:46:41 -0700
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen: add missing MODULE_DESCRIPTION() macros
Content-Language: en-US
To: Juergen Gross <jgross@suse.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Oleksandr Tyshchenko
	<oleksandr_tyshchenko@epam.com>
CC: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>,
        <kernel-janitors@vger.kernel.org>
References: <20240611-md-drivers-xen-v1-1-1eb677364ca6@quicinc.com>
From: Jeff Johnson <quic_jjohnson@quicinc.com>
In-Reply-To: <20240611-md-drivers-xen-v1-1-1eb677364ca6@quicinc.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit

On 6/11/2024 4:54 PM, Jeff Johnson wrote:
> With ARCH=x86, make allmodconfig && make W=1 C=1 reports:
> WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/xen/xen-pciback/xen-pciback.o
> WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/xen/xen-evtchn.o
> WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/xen/xen-privcmd.o
> 
> Add the missing invocations of the MODULE_DESCRIPTION() macro.
> 
> Signed-off-by: Jeff Johnson <quic_jjohnson@quicinc.com>
> ---
> Corrections to these descriptions are welcomed. I'm not an expert in
> this code so in most cases I've taken these descriptions directly from
> code comments, Kconfig descriptions, or git logs.  History has shown
> that in some cases these are originally wrong due to cut-n-paste
> errors, and in other cases the drivers have evolved such that the
> original information is no longer accurate.
> ---
>  drivers/xen/evtchn.c               | 1 +
>  drivers/xen/privcmd-buf.c          | 1 +
>  drivers/xen/privcmd.c              | 1 +
>  drivers/xen/xen-pciback/pci_stub.c | 1 +
>  4 files changed, 4 insertions(+)
> 
> diff --git a/drivers/xen/evtchn.c b/drivers/xen/evtchn.c
> index f6a2216c2c87..9b7fcc7dbb38 100644
> --- a/drivers/xen/evtchn.c
> +++ b/drivers/xen/evtchn.c
> @@ -729,4 +729,5 @@ static void __exit evtchn_cleanup(void)
>  module_init(evtchn_init);
>  module_exit(evtchn_cleanup);
>  
> +MODULE_DESCRIPTION("Xen /dev/xen/evtchn device driver");
>  MODULE_LICENSE("GPL");
> diff --git a/drivers/xen/privcmd-buf.c b/drivers/xen/privcmd-buf.c
> index 2fa10ca5be14..0f0dad427d7e 100644
> --- a/drivers/xen/privcmd-buf.c
> +++ b/drivers/xen/privcmd-buf.c
> @@ -19,6 +19,7 @@
>  
>  #include "privcmd.h"
>  
> +MODULE_DESCRIPTION("Xen Mmap of hypercall buffers");
>  MODULE_LICENSE("GPL");
>  
>  struct privcmd_buf_private {
> diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
> index 67dfa4778864..b9b784633c01 100644
> --- a/drivers/xen/privcmd.c
> +++ b/drivers/xen/privcmd.c
> @@ -48,6 +48,7 @@
>  
>  #include "privcmd.h"
>  
> +MODULE_DESCRIPTION("Xen hypercall passthrough driver");
>  MODULE_LICENSE("GPL");
>  
>  #define PRIV_VMA_LOCKED ((void *)1)
> diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
> index e34b623e4b41..4faebbb84999 100644
> --- a/drivers/xen/xen-pciback/pci_stub.c
> +++ b/drivers/xen/xen-pciback/pci_stub.c
> @@ -1708,5 +1708,6 @@ static void __exit xen_pcibk_cleanup(void)
>  module_init(xen_pcibk_init);
>  module_exit(xen_pcibk_cleanup);
>  
> +MODULE_DESCRIPTION("Xen PCI-device stub driver");
>  MODULE_LICENSE("Dual BSD/GPL");
>  MODULE_ALIAS("xen-backend:pci");
> 
> ---
> base-commit: 83a7eefedc9b56fe7bfeff13b6c7356688ffa670
> change-id: 20240611-md-drivers-xen-522fc8e7ef08
> 

Following up to see if anything else is needed from me. Hoping to see this in
linux-next so I can remove it from my tracking spreadsheet :)

/jeff


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:47:48 2024
Message-ID: <376f0fe4-4ae8-461d-87f2-0fa2e6913689@outstep.com>
Date: Wed, 26 Jun 2024 12:47:36 -0400
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
To: xen-devel@lists.xenproject.org
Content-Language: en-US
From: Lonnie Cumberland <lonnie@outstep.com>
Subject: Disaggregated (Xoar) Dom0 Building

This is a multi-part message in MIME format.
--------------292oU0MEsyLYdgj3qgD92PnY
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hello All,

I hope that everyone is doing well today.

Currently, I am investigating and researching the idea of 
"disaggregating" Dom0 and have the Xoar Xen patches ("Breaking Up is 
Hard to Do: Security and Functionality in a Commodity Hypervisor", 2011) 
available, which were developed against changeset 22155 of xen-unstable. 
The Linux patches are against Linux with pvops 2.6.31.13 and were 
developed on a standard Ubuntu 10.04 install. My effort would also be to 
update these patches.

I have been able to locate the Xen "Dom0 Disaggregation" wiki page 
(https://wiki.xenproject.org/wiki/Dom0_Disaggregation) and am reading up 
on things now, but wanted to ask the developers list about any 
experience you may have had in this area, since the research objective 
is to integrate Xoar with the latest Xen 4.20, if possible, and to take 
it further to basically eliminate Dom0 altogether with individual 
Mini-OS or unikernel "service and driver VMs" instead that are loaded at 
UEFI boot time.


Any guidance, thoughts, or ideas would be greatly appreciated,
Thanks and have a great day,
Lonnie
--------------292oU0MEsyLYdgj3qgD92PnY--


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 16:53:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 16:53:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749373.1157445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVtH-0004lq-9l; Wed, 26 Jun 2024 16:53:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749373.1157445; Wed, 26 Jun 2024 16:53:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMVtH-0004lj-5G; Wed, 26 Jun 2024 16:53:03 +0000
Received: by outflank-mailman (input) for mailman id 749373;
 Wed, 26 Jun 2024 16:53:02 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m54e=N4=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMVtF-0004la-VY
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:53:02 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 8bfb8b94-33dc-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 18:53:00 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id
 a640c23a62f3a-a7245453319so151816866b.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 09:53:00 -0700 (PDT)
Received: from localhost ([160.101.139.1]) by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fdafe90b8sm543027766b.71.2024.06.26.09.52.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 09:52:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8bfb8b94-33dc-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719420780; x=1720025580; darn=lists.xenproject.org;
        h=in-reply-to:references:subject:cc:to:from:message-id:date
         :content-transfer-encoding:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=n3OYo3MJqPqUKeOzpMmBaxpp/tkM9BYWYQF/CjwfAW0=;
        b=RaPMR5ZOUDXaNZlIkAXyb+OTcoJEQlxhhraBO5zo2XEweGOL0Kqeob26y7/iS+KHgl
         L5wpNbIfRESLqVgsiTrEmYOtxaayei6P+tk5NfzocUEOmDuMA7uMrGcgG4GGiVMb5Jwp
         XoF77Y//l0wFb9SMqI0AgW0/NXavxF3DJ3fcI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719420780; x=1720025580;
        h=in-reply-to:references:subject:cc:to:from:message-id:date
         :content-transfer-encoding:mime-version:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=n3OYo3MJqPqUKeOzpMmBaxpp/tkM9BYWYQF/CjwfAW0=;
        b=RzUOJIdHpQXt9AhGkEP//ELmITpaA1kviz8G4JJeVVkP6Va6GW2FBreNsguqtUxaBw
         Rmye/t2dTH1Uk6qSw2V+naLz7YWWk9LPsn/9yBd9XhrkqF2+yM4dhUQueQcviwt+Ew0u
         V/I/BV4ryuw+AYYIxbQORbrbMq0yWQ5vz4qxeqy7bm85VASILb5uaNNP2WsjGjX7mDup
         3fpZ68e/tk6lbDK01js5C3W7bzmECrR9biZBUgrSaiWp7qW/K+moJeqmBzYRXkrBcB8s
         mU/7FmVC0MqSWvroa8yZ82mxMTpBe67TTW8Yufvau/sqL6MGZAt0u41Ypkf1xc3wLuQn
         2Y5A==
X-Forwarded-Encrypted: i=1; AJvYcCVwqRNaXLyF1Bp6nvldkSfUNsNeHbRdbErwHEYjUajLeYP5OocuCPsKaJs2V+StWYL5sXy0d1afq+ZW6iLodtbCMvOIH+ay6TmpHeHrPJM=
X-Gm-Message-State: AOJu0YywPqqIspLnka4x65PgSC8FVutW5gXYJsYhqHHszdU3Jj9TrJuy
	jvdab/DOVQ+8KC9IwWZwlNuZj9/Tis4rK6HGlOTg9ka0dSbYY6A67/K/whTDeMU=
X-Google-Smtp-Source: AGHT+IErddExJuRl2lMb6QS5JWBOeaxRBsxRCCK1VzYes/DGjLfSSV29ggabO6mN3h6Bsa9fHrz1Ng==
X-Received: by 2002:a17:906:c2c5:b0:a6f:e069:3b06 with SMTP id a640c23a62f3a-a7296f742fbmr13224466b.21.1719420780149;
        Wed, 26 Jun 2024 09:53:00 -0700 (PDT)
Mime-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=UTF-8
Date: Wed, 26 Jun 2024 17:52:59 +0100
Message-Id: <D2A3SPBBBYCT.CYFCF8WCBM10@cloud.com>
From: "Alejandro Vallejo" <alejandro.vallejo@cloud.com>
To: "Andrew Cooper" <andrew.cooper3@citrix.com>, "Xen-devel"
 <xen-devel@lists.xenproject.org>
Cc: "Jan Beulich" <jbeulich@suse.com>, =?utf-8?q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>, "Anthony PERARD" <anthony.perard@vates.tech>,
 "Oleksii Kurochko" <oleksii.kurochko@gmail.com>
Subject: Re: [PATCH for 4.19 v4 01/10] tools/hvmloader: Fix
 non-deterministic cpuid()
X-Mailer: aerc 0.17.0
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
 <f8bfcfeca0a76f28703b164e1e65fb5919325b13.1719416329.git.alejandro.vallejo@cloud.com> <7ecf1b46-c1c2-42b5-b3cb-ab737ab67900@citrix.com>
In-Reply-To: <7ecf1b46-c1c2-42b5-b3cb-ab737ab67900@citrix.com>

On Wed Jun 26, 2024 at 5:43 PM BST, Andrew Cooper wrote:
> On 26/06/2024 5:28 pm, Alejandro Vallejo wrote:
> > hvmloader's cpuid() implementation deviates from Xen's in that the value passed
> > on ecx is unspecified. This means that when used on leaves that implement
> > subleaves it's unspecified which one you get; though it's more than likely an
> > invalid one.
> >
> > Import Xen's implementation so there are no surprises.
>
> Fixes: 318ac791f9f9 ("Add utilities needed for SMBIOS generation to
> hvmloader")
>
> > Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> >
> >
> > diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
> > index deb823a892ef..3ad7c4f6d6a2 100644
> > --- a/tools/firmware/hvmloader/util.h
> > +++ b/tools/firmware/hvmloader/util.h
> > @@ -184,9 +184,30 @@ int uart_exists(uint16_t uart_base);
> >  int lpt_exists(uint16_t lpt_base);
> >  int hpet_exists(unsigned long hpet_base);
> >
> > -/* Do cpuid instruction, with operation 'idx' */
> > -void cpuid(uint32_t idx, uint32_t *eax, uint32_t *ebx,
> > -           uint32_t *ecx, uint32_t *edx);
> > +/* Some CPUID calls want 'count' to be placed in ecx */
> > +static inline void cpuid_count(
> > +    uint32_t op,
> > +    uint32_t count,
> > +    uint32_t *eax,
> > +    uint32_t *ebx,
> > +    uint32_t *ecx,
> > +    uint32_t *edx)
> > +{
> > +    asm volatile ( "cpuid"
> > +          : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
> > +          : "0" (op), "c" (count) );
>
> "a" to be consistent with "c".
>
> Also it would be better to name the parameters as leaf and subleaf.
>
> Both can be fixed on commit.  However, there's no use in HVMLoader
> tickling this bug right now, so I'm not sure we want to rush this into
> 4.19 at this point.
>
> ~Andrew

All sound good to me. For the record, the static inlines are copied verbatim
from Xen, so if you'd like these adjusted you probably also want to make a
postit to change Xen's too.

Cheers,
Alejandro


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 17:27:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 17:27:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749384.1157454 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMWQP-0006b1-QZ; Wed, 26 Jun 2024 17:27:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749384.1157454; Wed, 26 Jun 2024 17:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMWQP-0006au-Nr; Wed, 26 Jun 2024 17:27:17 +0000
Received: by outflank-mailman (input) for mailman id 749384;
 Wed, 26 Jun 2024 17:27:16 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evFT=N4=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMWQO-0006ao-RG
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 17:27:16 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 54da9bb3-33e1-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 19:27:15 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a725ea1a385so415124666b.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 10:27:15 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72691fe146sm206193566b.1.2024.06.26.10.27.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 10:27:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 54da9bb3-33e1-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719422835; x=1720027635; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=b/mYrX+LFwO1keoNlHptLRaX3DfMNiwUnW9cpuZv18E=;
        b=FGR4/JT5jMj67Qkkuc7QkLwDQzkhaAz8Y4KcJjJqWSUvDFFnMKsTMEphuSIr7IMlyX
         Qgx02taVvgZ2ZEbhU+xNydxpfC37gHwfiqfDuy0PIF6hpjK1JwXTuD1c2lyMBmr4APLI
         m9xX6UryeY88Hn3tNtItm9NAnOQcNlJuqrXCnn9aK0HZQh6UQgXZpxcBUDJzXeDOktCm
         rz7oSgRwY9f2CcDjeCIL+0Cg1+LWKk2qRdYcnwOzqEuU/q3UsYpBPZD64EBemqAql5Sp
         EbU5UmswwTiZKXJMfNsUnKQTeVrUCpYkPfpta/0YXiKTGekC79nEt7RMQM6G4eodPYOU
         FTgA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719422835; x=1720027635;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=b/mYrX+LFwO1keoNlHptLRaX3DfMNiwUnW9cpuZv18E=;
        b=B3qvhbEiwnWEKZJgb41dnHpxOfxuktN9npYv27Em9jh4nra/AtCHjO3YShQxtff/CI
         wj8joYCZygnLZqG0Rd+cArRIWOMBiakfXqz3r7JTp8XZy76zpZd+K7ArZ1p43jI7D6i4
         4+gjGJkv4+4IeOH8m99PDWhB8o2tD4ebIS/uagTvbpWQ/RGQ12EU2Mo3AbMqkc9t+WUY
         XSQaXDUF4toFC3dpqQJH3Ke5+jKG3DqwYiJI1PkXnrqmy0RgwUaY3tFPdxl/OW4NWnC/
         o6FRWnck7O5ZIrQNGAMYN2rEWMe7DugEdDXlJXbsCIBVxV5hCVcRzzuitGNfbVfsgJ6S
         HsJg==
X-Forwarded-Encrypted: i=1; AJvYcCVMLSVIdtSGqbDmZKHp5yVv9p3NYJSkcoXNS3Np+IJOLovHjqMwqONSlIGXQrF+wAACI2j/czQtsQHK40IyF+BiWL4i74Fc5oSiBH91Du0=
X-Gm-Message-State: AOJu0Yy9fOcJc4t80vBR2jfeDchKmB46ODdDZAlwzhkbihqOuHk9Lm4Y
	TvyNHg3MdeAknd9J9QzZSvL5FtoSOJrVrUvdNMcZSQaPMU9ViX32
X-Google-Smtp-Source: AGHT+IGt6CndDs9e1g8CTUJHx82owA1R8dmFIDmiL7Em/qdN6STADDcXmRSRq0wrUYEYO/y0SGtDZg==
X-Received: by 2002:a17:907:a605:b0:a72:4bf2:e1a with SMTP id a640c23a62f3a-a724bf20e88mr741997466b.20.1719422834958;
        Wed, 26 Jun 2024 10:27:14 -0700 (PDT)
Message-ID: <4c15dd072f08b1161d170608a096dc0851ced588.camel@gmail.com>
Subject: Re: [PATCH v13 02/10] xen/riscv: introduce bitops.h
From: oleksii.kurochko@gmail.com
To: Jan Beulich <jbeulich@suse.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>,  Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Wed, 26 Jun 2024 19:27:13 +0200
In-Reply-To: <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
	 <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Wed, 2024-06-26 at 10:31 +0200, Jan Beulich wrote:
> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > --- /dev/null
> > +++ b/xen/arch/riscv/include/asm/bitops.h
> > @@ -0,0 +1,137 @@
> > +/* SPDX-License-Identifier: GPL-2.0 */
> > +/* Copyright (C) 2012 Regents of the University of California */
> > +
> > +#ifndef _ASM_RISCV_BITOPS_H
> > +#define _ASM_RISCV_BITOPS_H
> > +
> > +#include <asm/system.h>
> > +
> > +#if BITOP_BITS_PER_WORD == 64
> > +#define __AMO(op)   "amo" #op ".d"
> > +#elif BITOP_BITS_PER_WORD == 32
> > +#define __AMO(op)   "amo" #op ".w"
> > +#else
> > +#error "Unexpected BITOP_BITS_PER_WORD"
> > +#endif
> > +
> > +/* Based on linux/arch/include/asm/bitops.h */
> > +
> > +/*
> > + * Non-atomic bit manipulation.
> > + *
> > + * Implemented using atomics to be interrupt safe. Could
> > alternatively
> > + * implement with local interrupt masking.
> > + */
> > +#define __set_bit(n, p)      set_bit(n, p)
> > +#define __clear_bit(n, p)    clear_bit(n, p)
> > +
> > +#define test_and_op_bit_ord(op, mod, nr, addr, ord)    \
> > +({                                                     \
> > +    bitop_uint_t res, mask;                            \
> > +    mask = BITOP_MASK(nr);                             \
> > +    asm volatile (                                     \
> > +        __AMO(op) #ord " %0, %2, %1"                   \
> > +        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])      \
> > +        : "r" (mod(mask))                              \
> > +        : "memory");                                   \
> > +    ((res & mask) != 0);                               \
> > +})
> > +
> > +#define op_bit_ord(op, mod, nr, addr, ord)     \
> > +    asm volatile (                             \
> > +        __AMO(op) #ord " zero, %1, %0"         \
> > +        : "+A" (addr[BITOP_WORD(nr)])          \
> > +        : "r" (mod(BITOP_MASK(nr)))            \
> > +        : "memory");
> > +
> > +#define test_and_op_bit(op, mod, nr, addr)   \
> > +    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
> > +#define op_bit(op, mod, nr, addr) \
> > +    op_bit_ord(op, mod, nr, addr, )
> > +
> > +/* Bitmask modifiers */
> > +#define NOP(x)    (x)
> > +#define NOT(x)    (~(x))
> 
> Since elsewhere you said we would use Zbb in bitops, I wanted to come
> back
> on that: Up to here all we use is AMO.
> 
> And further down there's no asm() anymore. What were you referring
> to?
RISC-V doesn't have a CLZ instruction in the base
ISA.  As a consequence, __builtin_ffs() emits a library call to ffs()
on GCC, or a de Bruijn sequence on Clang.

The optional Zbb extension adds a CLZ instruction, after which
__builtin_ffs() emits a very simple sequence.

~ Oleksii
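
[Editor's sketch, not Xen or compiler output: the de Bruijn fallback Oleksii mentions can be illustrated in portable C. The table and multiplier are the standard 32-bit de Bruijn constants; `ffs_debruijn` is a hypothetical name. This is the kind of branch-light sequence a compiler can use for ffs() when the base RISC-V ISA lacks Zbb's CLZ/CTZ.]

```c
#include <stdint.h>
#include <assert.h>

/*
 * ffs()-style lowest-set-bit finder via a de Bruijn multiply.
 * (x & -x) isolates the lowest set bit; multiplying by the de Bruijn
 * constant 0x077CB531 puts a unique 5-bit pattern in the top bits,
 * which indexes the position table. Returns 1-based index, 0 for x == 0.
 */
static int ffs_debruijn(uint32_t x)
{
    static const int tab[32] = {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9
    };

    if ( x == 0 )
        return 0;                 /* ffs() convention: no bit set -> 0 */

    return tab[((x & -x) * 0x077CB531u) >> 27] + 1;
}
```

With Zbb available the whole table collapses to a single CTZ instruction plus an increment, which is the "very simple sequence" referred to above.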



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 17:29:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 17:29:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749389.1157465 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMWSK-0007hY-69; Wed, 26 Jun 2024 17:29:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749389.1157465; Wed, 26 Jun 2024 17:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMWSK-0007hR-2Z; Wed, 26 Jun 2024 17:29:16 +0000
Received: by outflank-mailman (input) for mailman id 749389;
 Wed, 26 Jun 2024 17:29:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evFT=N4=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMWSI-0007fy-Nc
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 17:29:14 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b3da1a5-33e1-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 19:29:13 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id
 a640c23a62f3a-a72585032f1so477886266b.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 10:29:13 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a726e13a4d7sm199375066b.19.2024.06.26.10.29.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 10:29:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b3da1a5-33e1-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719422953; x=1720027753; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=iWI5dEIRs50S8kVANmh6dYFT6ZEbgm6ruLtK3Zl9nQY=;
        b=XBKXgzyCdnlnddW4naY6d+t9d+fFjLuAxk/2HxdfhoZVq8tCl0zgVLT0KlsULMiquh
         BYz3CpM/fCPbyrKg0J5495LWtQZMwr3h6gewlroBRhoKJs8bD3CHMPhQyVnixzTlVV6/
         3xHUgd8CXDjZZmWPRBBqMeAGHUjkgrBYl2w3VegBsZsXmeeEBgLIQAyzAl2iDjVelDIE
         UuXFwJA3TXfY2YN/LoT6iRaWxiEs5RJtdyk+4ZvRAr03oySkOz59dLpGzMeXIxU4e+nh
         4teYrHC1hs8ygPiItP5UgszEF7MM8bcNlRNcAg8EA6dscMCJk0XeZW4XsI8l1eQOL4+v
         7s7w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719422953; x=1720027753;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=iWI5dEIRs50S8kVANmh6dYFT6ZEbgm6ruLtK3Zl9nQY=;
        b=ZmvLm9BEcRx/ELN+7uMrUIHy9UrupJBzsBxqdPIJbdEQ7YcEkuyB7vuoyAhOPA4diT
         aCnqfim+BCdFCJShmA64B8g34v/2tIAy8lh9C8myXKBFDB15T8Lyz489qt5FMW/rnTJo
         1viKk9za9AkNlGLZ3ovIAMd2EIx4/ZWuLTlqvRflbDrvLpE3rRRx9V7/F8Wwk5rXwqVU
         sU3Cpjbe8ryPawM2C/XfpZtNjzhdTho/+o9unhq1grMnFibm6JUIeWvRg1JeAmaqX4mi
         Omq37XcTABIMh7ONVk8V9Jnn1Sewpk73Ii2kFzf2A2uGio7SBgyXlVOt4CpLgb7jNyoA
         xbuA==
X-Forwarded-Encrypted: i=1; AJvYcCWbohA9l5MamqYSMtUboJ86wo7Zcjg8vl4YgkDd/n/ACrvf4gAmRzPOIsTnH+XMZZx08i9DphrShLKl4FUPDenJG8EkrTJ1H91VAfewyIE=
X-Gm-Message-State: AOJu0YxNOSmxNWhhVB1uJNUN0puOzizLB6fVHyKgAOdJvUeQkypGEcji
	RuhseHOEoe7FthDkoSg6JUjoTA1uR1bOldwcyqpDOwqRTqHCj6Nj
X-Google-Smtp-Source: AGHT+IGpDo2jKmRT6BZYdg9ZX/2FCYKr5btI3Bj4AbTf8YGMTt1+Y0Ks0vQPGZwYWbW86Sv9a7OZEw==
X-Received: by 2002:a17:907:8e93:b0:a6c:6f0a:e147 with SMTP id a640c23a62f3a-a714d72c2admr941936466b.12.1719422952901;
        Wed, 26 Jun 2024 10:29:12 -0700 (PDT)
Message-ID: <993881346ec5ab466b5d5b8bf55ece57428d5859.camel@gmail.com>
Subject: Re: [PATCH v13 09/10] xen/riscv: introduce ANDN_INSN
From: oleksii.kurochko@gmail.com
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, George
 Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Wed, 26 Jun 2024 19:29:11 +0200
In-Reply-To: <6ed6d9bf-2e33-4294-974b-eb1924b011cc@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <b0d2ff2cecf6cb324e43b9c14c87f47f3f199613.1719319093.git.oleksii.kurochko@gmail.com>
	 <95f64eba-13b9-404a-8318-7a3fc77ea560@suse.com>
	 <b638c5f8-20e9-43bb-a47b-e24cb1b1b821@citrix.com>
	 <6ed6d9bf-2e33-4294-974b-eb1924b011cc@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Wed, 2024-06-26 at 09:53 +0200, Jan Beulich wrote:
> On 25.06.2024 23:10, Andrew Cooper wrote:
> > On 25/06/2024 3:49 pm, Jan Beulich wrote:
> > > On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > > > --- a/xen/arch/riscv/include/asm/cmpxchg.h
> > > > +++ b/xen/arch/riscv/include/asm/cmpxchg.h
> > > > @@ -18,6 +18,20 @@
> > > >          : "r" (new) \
> > > >          : "memory" );
> > > >
> > > > +/*
> > > > + * Binutils < 2.37 doesn't understand ANDN.  If the toolchain
> > > > is too
> > > > +ld, form
> > > Same question: Why's 2.37 suddenly of interest?
> >
> > You deleted the commit message which explains why:
>
> Not really. My whole point is that while the intention of the change
> looks
> okay, description and comment describe things insufficiently, to say
> the
> least.
>
> > > RISC-V does a conditional toolchain test for the Zbb extension
> > > (xen/arch/riscv/rules.mk), but unconditionally uses the ANDN
> > > instruction in emulate_xchg_1_2().
> >
> > Either Zbb needs to be mandatory (both in the toolchain and the
> > board
> > running Xen), or emulate_xchg_1_2() needs to not use the ANDN
> > instruction.
> >
> > I opted for the latter.
>
> And I agree with that.
Okay, then I'll update that in the next patch version.

~ Oleksii
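
[Editor's sketch, not the actual patch: ANDN (rd = rs1 & ~rs2) is a Zbb-only instruction, so code that must assemble with binutils < 2.37 or run on non-Zbb hardware can fall back on the two-operation base-ISA equivalent, NOT followed by AND. The helper names below (`andn`, `merge_field`) are hypothetical, chosen only to illustrate the mask update pattern an xchg emulation performs.]

```c
#include <stdint.h>
#include <assert.h>

/* Base-ISA equivalent of Zbb's ANDN: invert the mask, then AND. */
static inline uint32_t andn(uint32_t a, uint32_t b)
{
    return a & ~b;
}

/*
 * The pattern behind emulate_xchg_1_2(): clear the byte/halfword field
 * selected by 'mask' out of the containing word, then OR in the new
 * value. No Zbb extension required.
 */
static inline uint32_t merge_field(uint32_t word, uint32_t mask,
                                   uint32_t val)
{
    return andn(word, mask) | (val & mask);
}
```

A compiler targeting plain RV64I emits exactly this NOT+AND pair for `a & ~b`, which is why dropping the literal ANDN mnemonic sidesteps both the toolchain and the hardware dependency.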


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 17:42:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 17:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749395.1157475 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMWf9-0003kr-8r; Wed, 26 Jun 2024 17:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749395.1157475; Wed, 26 Jun 2024 17:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMWf9-0003kk-5n; Wed, 26 Jun 2024 17:42:31 +0000
Received: by outflank-mailman (input) for mailman id 749395;
 Wed, 26 Jun 2024 17:42:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=evFT=N4=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMWf7-0003hq-FY
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 17:42:29 +0000
Received: from mail-lf1-x12a.google.com (mail-lf1-x12a.google.com
 [2a00:1450:4864:20::12a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7417fc57-33e3-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 19:42:27 +0200 (CEST)
Received: by mail-lf1-x12a.google.com with SMTP id
 2adb3069b0e04-52cd80e55efso9491732e87.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 10:42:27 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a6fe779bc80sm515114666b.174.2024.06.26.10.42.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 26 Jun 2024 10:42:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7417fc57-33e3-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719423746; x=1720028546; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=aye+kJexgftmI7DoGF7ISR/b5pwbzdvkf9SkYl1FV14=;
        b=GdgDo+V38KdmYMbayBMTd/f29eIUF1LUMUlQz01IZtE3G8nYdRtuXBXB4teUS76fZO
         WDfb6sXYUUyvuahVsa5TgM8lTEU1G/lGmtSZBofgSrbUT5bVxUuKbHeT8m62S9Vq2uhF
         MEaexqjGhmVk3PnIBjNk61JZLncKkMYcdDrjjej/kKpG9eABwM48RqlPA4Z6ewd2z0kE
         ae9xio8OxLPoE1MJR1YfFnUIPL6XPMQhhhMrS1h8mJ0Y0GbkdJks+EnR3KOrDmqXwEGx
         VcZGn+2K4QB9fXEi9ZmrC+xIJa5oAqZBm6RFye4pTGuggD9vEDMssEEMoMblcuV12Tau
         LTLQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719423746; x=1720028546;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=aye+kJexgftmI7DoGF7ISR/b5pwbzdvkf9SkYl1FV14=;
        b=lAz3XVTJKKj7NS+I9/LF8sdd/23Sys+trXnJr9Fg42x2nsZEl82HJvqFqnXkROLiZt
         +c+pLO2T4R7g/XrYF78tbKOQTmjUxHYP97QkvmUdrri5X76nDBbELyeKe0MFT/mgsuUi
         aWiuIsNpx0JEZpQbpjESHnsCpUsRhPw4NZh94658hPFkGHxUCvFui2GYPZJSwUrsF6wh
         eykB4TdKjJ2ZSzl6nqaE3+RWEyRwe2mTVcbHvwRTJmtMcCSToahOu2xq1+8IQgUyXa0u
         JPrDJqPGcmb1Qsl4OkJT1uq3ocYIkWaFMpEjsM26df51kQ4gx0CPC2C/XmQ72rye0JP6
         jQvA==
X-Gm-Message-State: AOJu0YxeO/QA9w433yNbYNj4USzmxqz8wjGnpBh7XVdSpZ9R38G2QEdg
	HLVnvqWO7ABNpnRrFrJNvZk8bZRivtqRNeAhMR1hp94uw8S2O2jt
X-Google-Smtp-Source: AGHT+IGSZZ2NfovW5WlFKsopSOVg4+MBdHHq/kKbuDMYwRSqz9FY2WJi7CuH6vbn0tXeFwer7MEEbg==
X-Received: by 2002:ac2:41c5:0:b0:52c:e17c:cd7b with SMTP id 2adb3069b0e04-52ce1832abdmr8550465e87.22.1719423746381;
        Wed, 26 Jun 2024 10:42:26 -0700 (PDT)
Message-ID: <9814c00d116f14a1ce238b131b9eba19fa130986.camel@gmail.com>
Subject: Re: [XEN PATCH v2 0/6][RESEND] address violations of MISRA C Rule
 20.7
From: oleksii.kurochko@gmail.com
To: Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com, 
 xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com,  Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
 Julien Grall <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>
Date: Wed, 26 Jun 2024 19:42:24 +0200
In-Reply-To: <88127f41-a3e3-4d05-b9f2-3e4117bf1503@suse.com>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
	 <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>
	 <88127f41-a3e3-4d05-b9f2-3e4117bf1503@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Tue, 2024-06-25 at 08:39 +0200, Jan Beulich wrote:
> On 25.06.2024 02:47, Stefano Stabellini wrote:
> > I would like to ask for a release-ack as the patch series makes very
> > few changes outside of the static analysis configuration. The few
> > changes to the Xen code are very limited, straightforward and make
> > the code better; see patches #3 and #5.
>
> While continuing to touch automation/ may be okay, I really think time
> has passed for further Misra changes in 4.19, unless they're fixing
> actual bugs of course. Just my personal view though ...
I am not quite sure I understand the concern. From my perspective, the
patch series addresses several MISRA violations without introducing any
functional changes. It seems safe to incorporate these MISRA changes
even at this stage of the release.

~ Oleksii
>
> Jan
>
> > On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> > > Hi all,
> > >
> > > this series addresses several violations of Rule 20.7, as well as a
> > > small fix to the ECLAIR integration scripts; the parts being fixed
> > > do not influence the current behaviour, but were mistakenly part of
> > > the upstream configuration.
> > >
> > > Note that by applying this series the rule has a few leftover
> > > violations. Most of those are in x86 code in
> > > xen/arch/x86/include/asm/msi.h. I did send a patch [1] to deal with
> > > those, limited only to addressing the MISRA violations, but in the
> > > end it was dropped by agreement in favour of a more general cleanup
> > > of the file, which is why those changes are not included here.
> > >
> > > [1]
> > > https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/
> > >
> > > Changes in v2:
> > > - refactor patch 4 to deviate the pattern, instead of fixing the
> > >   violations
> > > - the series has been resent because I forgot to properly Cc the
> > >   mailing list
> > >
> > > Nicola Vetrini (6):
> > >   automation/eclair: address violations of MISRA C Rule 20.7
> > >   xen/self-tests: address violations of MISRA rule 20.7
> > >   xen/guest_access: address violations of MISRA rule 20.7
> > >   automation/eclair_analysis: address violations of MISRA C Rule 20.7
> > >   x86/irq: address violations of MISRA C Rule 20.7
> > >   automation/eclair_analysis: clean ECLAIR configuration scripts
> > >
> > >  automation/eclair_analysis/ECLAIR/analyze.sh     |  3 +--
> > >  automation/eclair_analysis/ECLAIR/deviations.ecl | 14 ++++++++++++--
> > >  docs/misra/deviations.rst                        |  3 ++-
> > >  xen/include/xen/guest_access.h                   |  4 ++--
> > >  xen/include/xen/irq.h                            |  2 +-
> > >  xen/include/xen/self-tests.h                     |  8 ++++----
> > >  6 files changed, 22 insertions(+), 12 deletions(-)
> > >
> > > --
> > > 2.34.1
> > >
>



From xen-devel-bounces@lists.xenproject.org Wed Jun 26 18:09:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 18:09:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749404.1157485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMX53-00012A-BD; Wed, 26 Jun 2024 18:09:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749404.1157485; Wed, 26 Jun 2024 18:09:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMX53-000123-7a; Wed, 26 Jun 2024 18:09:17 +0000
Received: by outflank-mailman (input) for mailman id 749404;
 Wed, 26 Jun 2024 18:09:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3iT=N4=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMX52-00011x-57
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 18:09:16 +0000
Received: from mail-qk1-x72a.google.com (mail-qk1-x72a.google.com
 [2607:f8b0:4864:20::72a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 31881f4b-33e7-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 20:09:14 +0200 (CEST)
Received: by mail-qk1-x72a.google.com with SMTP id
 af79cd13be357-79c076c0e1aso127932085a.2
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 11:09:13 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-444c3b4b42asm68146491cf.91.2024.06.26.11.09.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 11:09:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31881f4b-33e7-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719425353; x=1720030153; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=i0Ws4kno2igXMBztBT5g7Ty4bOdbR1FWjl0bz6N8TrI=;
        b=kZ4zAzkgST3ACLIU8yce8s2rzYbDa8PNKGF7SEdKT5Wh3KmZAa61rCE2dot+QKcZOJ
         UeSnnSZwxtWVCOSKl5bWvffBe4DijnCTVBQU80xoPEKB/wv257gziL2SqAWJ+3OmJ+PY
         7KltmXl9y6OK3WWZfY8aJoQg39dosruinCNWo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719425353; x=1720030153;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=i0Ws4kno2igXMBztBT5g7Ty4bOdbR1FWjl0bz6N8TrI=;
        b=vkPOyfqMoG8aEbM6fCUUbG3HVd2L2uKI9QBvG8LdC6b+EqcYZ4Ru/tYtZJWv4MjI4W
         HFf0XZDqdkJwajX88F4+clFm64Z+w09HT0xRhOwobxDn+crKIrRc9+UrvBMN2GhyINgq
         MTF/g65upK9s3EdnsLI8WRwLNq43aF97to+jxRO7eTKlSozvBuG0BLQqaK5fk+SVKJTd
         6u+FNuJSAj1N/mr8+d2lJzBg8kTkR/GSC0CVj+AV3kUNNeuhbmv3GnGB7eTodu4wuRNr
         v1pq+5o0fArnEGv+7glOWR4n+vkOX29PzZS05iUjZax8xsm+U/8aw70pR9EMo0MezTwl
         8L+g==
X-Forwarded-Encrypted: i=1; AJvYcCUjXQFdtT0N4XJ8WX2K++Ygp+mMW+ZDhB1rZj27wo8FlMCW7lUQsTmcYGjN87K7WHlccdGFsu+GqLv7VJdd3tcamA1SCohpR93wwqgph2M=
X-Gm-Message-State: AOJu0Yz883lujBN3UFVzopYOOVYS6JkqVUyuL4SLQsKGapuCqLSK7Fhd
	mHUX4nhD8ffumktgmHsP0K7q2ozUZHFOGU/Vyaro155VSrkq8rtcGXHJWccCXZA=
X-Google-Smtp-Source: AGHT+IFRuV7zbROsCLEV/iXFDOPhUqCxBzYO52ZuBsuYWUgjnFyeyCNHxh9mF1XCSG9RWFcDDAQ6IA==
X-Received: by 2002:a05:622a:11d1:b0:446:3b38:345f with SMTP id d75a77b69052e-4463b384243mr14031121cf.10.1719425352659;
        Wed, 26 Jun 2024 11:09:12 -0700 (PDT)
Message-ID: <961f5371-3616-4476-ae12-e1d91cc56345@citrix.com>
Date: Wed, 26 Jun 2024 19:09:08 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 6/6] x86/xstate: Switch back to for_each_set_bit()
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-7-andrew.cooper3@citrix.com>
 <59201cf5-9d86-4976-a331-2a7f8bb9635a@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <59201cf5-9d86-4976-a331-2a7f8bb9635a@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/06/2024 11:24 am, Jan Beulich wrote:
> On 25.06.2024 21:07, Andrew Cooper wrote:
>> In all 3 examples, we're iterating over a scalar.  No caller can pass the
>> COMPRESSED flag in, so the upper bound of 63, as opposed to 64, doesn't
>> matter.
> Not sure, maybe more a language question (for my education): Is "can"
> really appropriate here?

It's not the greatest choice, but it's not objectively wrong either.
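For readers following along, the loop shape under discussion (iterating
over the set bits of a 64-bit scalar) can be sketched roughly as below.
The macro is an illustrative stand-in, not Xen's actual
for_each_set_bit() definition:

```c
#include <stdint.h>

/* Illustrative stand-in for a for_each_set_bit()-style loop over a scalar.
 * __builtin_ctzll() (GCC/Clang) finds the lowest set bit; clearing it via
 * tmp &= tmp - 1 advances the iteration. */
#define for_each_set_bit_sketch(bit, val)                        \
    for ( uint64_t tmp__ = (val);                                \
          tmp__ && (((bit) = __builtin_ctzll(tmp__)), 1);        \
          tmp__ &= (tmp__ - 1) )

static unsigned int count_set_bits(uint64_t states)
{
    unsigned int bit, n = 0;

    /* If no caller ever passes bit 63 (the COMPRESSED flag), the loop
     * simply never visits it, so a bound of 63 vs. 64 makes no difference. */
    for_each_set_bit_sketch ( bit, states )
        n++;

    return n;
}
```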

>  In recalculate_xstate() we calculate the value ourselves, but in the
> two other cases the value is incoming to the functions. Architecturally
> those values should not have bit 63 set, but that's weaker than "can"
> according to my understanding. I'd be fine with "may", for example.

There's an ASSERT() in xstate_uncompressed_size() which covers the
property, but most of the justification comes from the fact that the
callers pass in values which are really loaded into hardware registers.

But it is certainly more accurate to say that callers don't pass the
flag in.

There isn't an ASSERT() in xstate_compressed_size(), but I suppose I
could fold this in:

diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 88dbfbeafacd..f72f14626b7d 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -623,6 +623,8 @@ unsigned int xstate_compressed_size(uint64_t xstates)
 {
     unsigned int size = XSTATE_AREA_MIN_SIZE;
 
+    ASSERT((xstates & ~(X86_XCR0_STATES | X86_XSS_STATES)) == 0);
+
     if ( xstates == 0 )
         return 0;
 

which brings it more in line with xstate_uncompressed_size(), and has a
side effect of confirming the absence of the COMPRESSED bit.
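As a standalone illustration of the check's shape (the mask values below
are made up; the real X86_XCR0_STATES / X86_XSS_STATES constants live in
Xen's headers), the key property is that neither mask includes bit 63,
so the condition also rejects the COMPRESSED indicator:

```c
#include <stdint.h>

/* Hypothetical stand-ins for X86_XCR0_STATES / X86_XSS_STATES; the exact
 * values are invented for illustration. What matters is that neither mask
 * has bit 63 (the XSAVE compressed-format indicator) set. */
#define XCR0_STATES_SKETCH 0x00000000000602e7ULL
#define XSS_STATES_SKETCH  0x0000000000019900ULL
#define XSTATE_COMPRESSED  (1ULL << 63)

/* Mirrors the proposed ASSERT() condition: no bits outside the known masks. */
static int xstates_known(uint64_t xstates)
{
    return (xstates & ~(XCR0_STATES_SKETCH | XSS_STATES_SKETCH)) == 0;
}
```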

Thoughts?

>> This alone produces:
>>
>>   add/remove: 0/0 grow/shrink: 0/4 up/down: 0/-161 (-161)
>>   Function                                     old     new   delta
>>   compress_xsave_states                         66      58      -8
>>   xstate_uncompressed_size                     119      71     -48
>>   xstate_compressed_size                       124      76     -48
>>   recalculate_xstate                           347     290     -57
>>
>> where xstate_{un,}compressed_size() have practically halved in size despite
>> being small before.
>>
>> The change in compress_xsave_states() is unexpected.  The function is almost
>> entirely dead code, and within what remains there's a smaller stack frame.  I
>> suspect it's leftovers that the optimiser couldn't fully discard.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Other than the above:
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 18:09:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 18:09:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749405.1157495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMX5C-0001Iy-KZ; Wed, 26 Jun 2024 18:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749405.1157495; Wed, 26 Jun 2024 18:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMX5C-0001Ir-Hj; Wed, 26 Jun 2024 18:09:26 +0000
Received: by outflank-mailman (input) for mailman id 749405;
 Wed, 26 Jun 2024 18:09:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMX5A-0001IH-Vt; Wed, 26 Jun 2024 18:09:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMX5A-00054k-TP; Wed, 26 Jun 2024 18:09:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMX5A-0005xF-Km; Wed, 26 Jun 2024 18:09:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMX5A-00074d-KA; Wed, 26 Jun 2024 18:09:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=baDeLtgmACIMnD3Nw5qzBI09ph95imnx5YW2ryEE4Ac=; b=hrIiqsL6cJptR8g9Dvg6Z6IyLH
	WTqiSYaMErwyMmmcHj/y17YToRKw3kVneBc52BTBOhtnru7sERjqv6o5tzoCZUMUJ9YwKe7D4u27Z
	BPyMWIhi5hX5G37uGyW+8fLjHJcfAxGADuTEKrccZU3TTW1PEohteRX7LBE+pCntfIjE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186519-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186519: regressions - FAIL
X-Osstest-Failures:
    ovmf:build-amd64-pvops:kernel-build:fail:regression
    ovmf:build-amd64-xsm:xen-build:fail:regression
    ovmf:build-amd64:xen-build:fail:regression
    ovmf:build-amd64-libvirt:build-check(1):blocked:nonblocking
    ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    ovmf=ae09721a65ab3294439f6fa233adaf3b897f702f
X-Osstest-Versions-That:
    ovmf=89377ece8f1c7243d25fd84488dcd03e37b9e661
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 18:09:24 +0000

flight 186519 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186519/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             6 kernel-build             fail REGR. vs. 186516
 build-amd64-xsm               6 xen-build                fail REGR. vs. 186516
 build-amd64                   6 xen-build                fail REGR. vs. 186516

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a

version targeted for testing:
 ovmf                 ae09721a65ab3294439f6fa233adaf3b897f702f
baseline version:
 ovmf                 89377ece8f1c7243d25fd84488dcd03e37b9e661

Last test of basis   186516  2024-06-26 12:41:06 Z    0 days
Testing same since   186519  2024-06-26 16:43:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gaurav Pandya <gaurav.pandya@amd.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            fail    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ae09721a65ab3294439f6fa233adaf3b897f702f
Author: Gaurav Pandya <gaurav.pandya@amd.com>
Date:   Wed Sep 20 20:37:49 2023 +0800

    MdeModulePkg/DisplayEngineDxe: Support "^" and "V" key on pop-up form
    
    BZ #4790
    Support "^" and "V" keystrokes on the pop-up form. Align the
    implementation with key support on the regular HII form.
    
    Signed-off-by: Gaurav Pandya <gaurav.pandya@amd.com>


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 18:35:16 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 18:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749426.1157511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMXU2-0007ei-P7; Wed, 26 Jun 2024 18:35:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749426.1157511; Wed, 26 Jun 2024 18:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMXU2-0007eb-Lq; Wed, 26 Jun 2024 18:35:06 +0000
Received: by outflank-mailman (input) for mailman id 749426;
 Wed, 26 Jun 2024 18:35:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMXU1-0007eP-6e; Wed, 26 Jun 2024 18:35:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMXU1-0005hM-3H; Wed, 26 Jun 2024 18:35:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMXU0-0007Ev-ER; Wed, 26 Jun 2024 18:35:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMXU0-0007J9-Ds; Wed, 26 Jun 2024 18:35:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=k/WVY//XkkNeENaJp1Y+XydgA8/h5IMAhOIzdKCXWKA=; b=PHHVYD3+LCtxOOHcopMrg9OMd2
	FQl7Uo6G4tTUy/jnHxSdQ01aMSWfk5RBhKrpOgIDg6celfx7Q5Zyr8PhzUBPAsrhooQy7iljAL8M+
	FArwrzznhjXGdq5vFsVm9NiPuWzOk6ydE5fs2iEV52JXKqjBd2v3E2qQkPTov/Vgvxe0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186518-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186518: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=dde20e47afb51404d59be492ca924704f0fbf71d
X-Osstest-Versions-That:
    xen=4712e3b3769e6c03e0aaaa8179395f0fb7b141cc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 18:35:04 +0000

flight 186518 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186518/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dde20e47afb51404d59be492ca924704f0fbf71d
baseline version:
 xen                  4712e3b3769e6c03e0aaaa8179395f0fb7b141cc

Last test of basis   186513  2024-06-26 11:00:23 Z    0 days
Testing same since   186518  2024-06-26 15:00:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@cloud.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4712e3b376..dde20e47af  dde20e47afb51404d59be492ca924704f0fbf71d -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 19:51:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 19:51:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749445.1157530 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMYg2-0004Cx-2u; Wed, 26 Jun 2024 19:51:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749445.1157530; Wed, 26 Jun 2024 19:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMYg1-0004Cq-W1; Wed, 26 Jun 2024 19:51:33 +0000
Received: by outflank-mailman (input) for mailman id 749445;
 Wed, 26 Jun 2024 19:51:32 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMYg0-0004Ck-4u
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 19:51:32 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a7cc4a5-33f5-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 21:51:29 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 2660E61CE4;
 Wed, 26 Jun 2024 19:51:28 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id CE2D9C32789;
 Wed, 26 Jun 2024 19:51:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a7cc4a5-33f5-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719431488;
	bh=n6kgE0l52XU5oU9eMCS8iv4dzcvW7V5pRFUTSJX/ofA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gkVpGWCWu6AU7fmwGKnI6FsaNt93rflTi1p8HDkB9qSO28l7NBd/WvNbZexof5cbk
	 fmbSfYwH82wVTh+Q5qaSkJreh24kY5b02dC5I0CO/2C47ZPenrv3AOkbGzafNz7mcz
	 Dg5GRlIf7p2RA+8xwN4TL/bjvi2xonMs8ybJmK73eqFGh8Uz9M3CXKoFUmGJ/xzMiy
	 QIq/KVdwniDj0cAK/YerOHTZ3c0ptGtpy1dBtzUsKW3JiDCNfKtLaCYf1yOif1SCk+
	 QjwVQUVrdlU+WZwuRS1SPFwGxBezOJFHLaKodT6jcxadqfeONE/5/xwq620P6Chm/p
	 nHHyL8XBTQ9vQ==
Date: Wed, 26 Jun 2024 12:51:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: George Dunlap <george.dunlap@cloud.com>
cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Dario Faggioli <dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>, 
    Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH] MAINTAINERS: Step down as maintainer and committer
In-Reply-To: <20240626151935.26704-1-george.dunlap@cloud.com>
Message-ID: <alpine.DEB.2.22.394.2406261249540.3635@ubuntu-linux-20-04-desktop>
References: <20240626151935.26704-1-george.dunlap@cloud.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, George Dunlap wrote:
> Remain a Reviewer on the golang bindings and scheduler for now (using
> a xenproject.org alias), since there may be architectural decisions I
> can shed light on.
> 
> Remove the XENTRACE section entirely, as there's no obvious candidate
> to take it over; having the respective parts fall back to the tools
> and The Rest seems the most reasonable option.
> 
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

Thanks for all your efforts over the years! You should be listed as
"Committer Emeritus".

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
> CC: Andrew Cooper <andrew.cooper3@citrix.com>
> CC: Jan Beulich <jbeulich@suse.com>
> CC: Julien Grall <julien@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Dario Faggioli <dfaggioli@suse.com>
> CC: Juergen Gross <jgross@suse.com>
> CC: Nick Rosbrook <rosbrookn@gmail.com>
> ---
>  MAINTAINERS | 13 ++-----------
>  1 file changed, 2 insertions(+), 11 deletions(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 9d66b898ec..2b0c894527 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -325,8 +325,8 @@ F:	xen/arch/x86/debug.c
>  F:	tools/debugger/gdbsx/
>  
>  GOLANG BINDINGS
> -M:	George Dunlap <george.dunlap@citrix.com>
>  M:	Nick Rosbrook <rosbrookn@gmail.com>
> +R:	George Dunlap <gwd@xenproject.org>
>  S:	Maintained
>  F:	tools/golang
>  
> @@ -490,9 +490,9 @@ S:	Supported
>  F:	xen/common/sched/rt.c
>  
>  SCHEDULING
> -M:	George Dunlap <george.dunlap@citrix.com>
>  M:	Dario Faggioli <dfaggioli@suse.com>
>  M:	Juergen Gross <jgross@suse.com>
> +R:	George Dunlap <gwd@xenproject.org>
>  S:	Supported
>  F:	xen/common/sched/
>  
> @@ -597,7 +597,6 @@ F:	tools/tests/x86_emulator/
>  X86 MEMORY MANAGEMENT
>  M:	Jan Beulich <jbeulich@suse.com>
>  M:	Andrew Cooper <andrew.cooper3@citrix.com>
> -R:	George Dunlap <george.dunlap@citrix.com>
>  S:	Supported
>  F:	xen/arch/x86/mm/
>  
> @@ -641,13 +640,6 @@ F:	tools/libs/store/
>  F:	tools/xenstored/
>  F:	tools/xs-clients/
>  
> -XENTRACE
> -M:	George Dunlap <george.dunlap@citrix.com>
> -S:	Supported
> -F:	tools/xentrace/
> -F:	xen/common/trace.c
> -F:	xen/include/xen/trace.h
> -
>  XEN MISRA ANALYSIS TOOLS
>  M:	Luca Fancellu <luca.fancellu@arm.com>
>  S:	Supported
> @@ -670,7 +662,6 @@ K:	\b(xsm|XSM)\b
>  
>  THE REST
>  M:	Andrew Cooper <andrew.cooper3@citrix.com>
> -M:	George Dunlap <george.dunlap@citrix.com>
>  M:	Jan Beulich <jbeulich@suse.com>
>  M:	Julien Grall <julien@xen.org>
>  M:	Stefano Stabellini <sstabellini@kernel.org>
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 20:13:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 20:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749452.1157543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZ1D-0007tn-Po; Wed, 26 Jun 2024 20:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749452.1157543; Wed, 26 Jun 2024 20:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZ1D-0007tg-N3; Wed, 26 Jun 2024 20:13:27 +0000
Received: by outflank-mailman (input) for mailman id 749452;
 Wed, 26 Jun 2024 20:13:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sMZ1D-0007ta-4D
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 20:13:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMZ1C-0007ZU-Pu; Wed, 26 Jun 2024 20:13:26 +0000
Received: from gw1.octic.net ([88.97.20.152] helo=[10.0.0.211])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMZ1C-0004W3-IX; Wed, 26 Jun 2024 20:13:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=8UlCAQUnQ7wKyP9MSHwvz0aZjii0tgqNzYXBxJ8igKA=; b=KEzS4MIirF//bC4qBMg8FDa3gm
	MMw4teGcutc+IWV4suhwuxkdoIRBM7HKlidn0iskpHc0IMKkEDyvKqmCQDDDWrbMCWkezwtIG16JT
	l8fVgzSmgxbiq1C+PTYkf7wjGwzShkrTIOj08Uc0CXi/dULzOL/xKqtHHFWSMsKTxbKU=;
Message-ID: <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
Date: Wed, 26 Jun 2024 21:13:24 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory node
 probing
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, oleksii.kurochko@gmail.com
References: <20240626080428.18480-1-michal.orzel@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <20240626080428.18480-1-michal.orzel@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit

Hi Michal,

On 26/06/2024 09:04, Michal Orzel wrote:
> Memory node probing is done as part of early_scan_node() that is called
> for each node with depth >= 1 (root node is at depth 0). According to
> Devicetree Specification v0.4, chapter 3.4, a /memory node can only exist
> as a top-level node. However, Xen incorrectly considers all nodes with
> unit node name "memory" as RAM. This buggy behavior can result in a
> failure if there are other nodes in the device tree (at depth >= 2) with
> "memory" as unit node name. An example can be a "memory@xxx" node under
> /reserved-memory. Fix it by introducing device_tree_is_memory_node() to
> perform all the required checks to assess if a node is a proper /memory
> node.
> 
> Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM location and size")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> 4.19: This patch is fixing a possible early boot Xen failure (before main
> console is initialized). In my case it results in a warning "Shattering
> superpage is not supported" and panic "Unable to setup the directmap mappings".
> 
> If this is too late for this patch to go in, we can backport it after the tree
> re-opens.

The code looks correct to me, but I am not sure about merging it into 4.19.

This is not a new bug (in fact, it has been there since pretty much the 
creation of Xen on Arm) and we haven't seen any report until today. So in 
some ways it would be best to merge it after 4.19 so it can get more testing.

On the other hand, I guess this will block you. Is this a new platform? 
Is it available?

> ---
>   xen/arch/arm/bootfdt.c | 28 +++++++++++++++++++++++++++-
>   1 file changed, 27 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 6e060111d95b..23c968e6955d 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -83,6 +83,32 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
>       return false;
>   }
>   
> +/*
> + * Check if a node is a proper /memory node according to Devicetree
> + * Specification v0.4, chapter 3.4.
> + */
> +static bool __init device_tree_is_memory_node(const void *fdt, int node,
> +                                              int depth)
> +{
> +    const char *type;
> +    int len;
> +
> +    if ( depth != 1 )
> +        return false;
> +
> +    if ( !device_tree_node_matches(fdt, node, "memory") )
> +        return false;
> +
> +    type = fdt_getprop(fdt, node, "device_type", &len);
> +    if ( !type )
> +        return false;
> +
> +    if ( (len <= 0) || strcmp(type, "memory") )

I would consider using strncmp() to avoid relying on the property being 
well-formed (i.e. nul-terminated).
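For illustration only, here is a minimal standalone sketch of such a bounded comparison (outside the Xen tree; the helper name and the assumption that a well-formed property is exactly "memory" plus its terminating NUL, i.e. 7 bytes, are mine, not part of the patch):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Bounded check of an FDT "device_type" property value.  Using strncmp()
 * with an explicit length means a malformed (non-nul-terminated) property
 * cannot cause the comparison to read past the property's bytes, which is
 * the risk with plain strcmp().  sizeof("memory") is 7, covering the
 * terminating NUL as well, so a longer string such as "memoryX" fails.
 */
static bool device_type_is_memory(const char *type, int len)
{
    return len == (int)sizeof("memory") &&
           strncmp(type, "memory", sizeof("memory")) == 0;
}
```

The length check rejects truncated or over-long values outright, so the strncmp() only ever confirms an exact match.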

> +        return false;
> +
> +    return true;
> +}
> +
>   void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
>                                   uint32_t size_cells, paddr_t *start,
>                                   paddr_t *size)
> @@ -448,7 +474,7 @@ static int __init early_scan_node(const void *fdt,
>        * populated. So we should skip the parsing.
>        */
>       if ( !efi_enabled(EFI_BOOT) &&
> -         device_tree_node_matches(fdt, node, "memory") )
> +         device_tree_is_memory_node(fdt, node, depth) )
>           rc = process_memory_node(fdt, node, name, depth,
>                                    address_cells, size_cells, bootinfo_get_mem());
>       else if ( depth == 1 && !dt_node_cmp(name, "reserved-memory") )

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 20:21:00 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 20:21:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749461.1157555 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZ8Q-0001Tq-Gm; Wed, 26 Jun 2024 20:20:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749461.1157555; Wed, 26 Jun 2024 20:20:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZ8Q-0001Tj-EG; Wed, 26 Jun 2024 20:20:54 +0000
Received: by outflank-mailman (input) for mailman id 749461;
 Wed, 26 Jun 2024 20:20:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3iT=N4=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMZ8P-0001Td-J4
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 20:20:53 +0000
Received: from mail-yb1-xb30.google.com (mail-yb1-xb30.google.com
 [2607:f8b0:4864:20::b30])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 94cee366-33f9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 22:20:51 +0200 (CEST)
Received: by mail-yb1-xb30.google.com with SMTP id
 3f1490d57ef6-df481bf6680so6748225276.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 13:20:51 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 d75a77b69052e-444c2b96f1bsm71144221cf.50.2024.06.26.13.20.47
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 13:20:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 94cee366-33f9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719433250; x=1720038050; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=zmIqgfhSAFjxQ2QdaVxb8nf0xnw08WiIAaLdT7PYG4Q=;
        b=WbgCWjxxR2Q+QZvaZ0NPxy1ffPWhyC7zYHoe5OQl350+TQx2HHumaFMXrlVwhbYhHf
         PtV5qtFTa5+vXVvqJGZFwsF71C+eus2Gg4xp2l7eOWmr5zeRqJ+vWe0OvbxVkXu+w1fL
         VOCfs/G0FY2409gsGk3C4sWCfIdGfTzLdlZcA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719433250; x=1720038050;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=zmIqgfhSAFjxQ2QdaVxb8nf0xnw08WiIAaLdT7PYG4Q=;
        b=q62nNBfDNElVYrsSs5gGQEzMN6INPtX+lfZLknzyjNw4soTPTSV+P8JTJpRCN4+U1M
         B5b80W9z8wIcAHSRmkkkMpg2teQce7SEvQlW+zlFce+AwNzjaghozfF43V9cU1oTij4G
         uMvSsfMx6qpO+tgeK95kJuVPjP6HV50yqIAGguGZ/3Fqt5K37VcUdg6OJhBydQv0y5rq
         Nd1xZwWF3K2t/xkTvLFwr4ISyvXw4PwYNxIAIWum6HccsSQBTOwHwmzEA/dRyItbFroz
         sX6wkZwS7WbxnGC0by0LRw1p32mxKcsKNKizN1wkfuPWjsWrkuQaT/mvtixY2yzAv9jh
         H6bQ==
X-Forwarded-Encrypted: i=1; AJvYcCWwXg1A3bJyVoN46DizBKwEBBPbBJAKrTNvxQz4FRQgjIMi6hO4KNh1uTfa9j2tdwiGDFJ5dYlQQB3hIISEh3ehVyE5w6znenQeRylhgoo=
X-Gm-Message-State: AOJu0Yz5GK9KjwvX6RiorzzqVj45Z7H3XnGEwojA6KBhPeQqWGVvQisf
	OSfO1FFS6VXRQAuFSLmMPfnHzpCdK10Ut/Z/K4NbjWBgktrdSJmepuuxeC5lsHs=
X-Google-Smtp-Source: AGHT+IGC/Ou3GeYLu9ZcYb2RUeYFXig4UGjp2KN+t3/+McSUOZfZEJ+LOFSIazx9SmP9Qvw05jvqdg==
X-Received: by 2002:a25:291:0:b0:df4:f2d2:fcd6 with SMTP id 3f1490d57ef6-e0303fe9b62mr11265914276.44.1719433250146;
        Wed, 26 Jun 2024 13:20:50 -0700 (PDT)
Message-ID: <b27d4722-a8d0-4134-bc2e-d2d751a88c10@citrix.com>
Date: Wed, 26 Jun 2024 21:20:46 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v2] x86/spec-ctrl: Support for SRSO_US_NO and
 SRSO_MSR_FIX
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240619191057.2588693-1-andrew.cooper3@citrix.com>
 <8510895d-70fb-4fce-adfa-ac5638b4ae3c@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <8510895d-70fb-4fce-adfa-ac5638b4ae3c@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 20/06/2024 9:39 am, Jan Beulich wrote:
> On 19.06.2024 21:10, Andrew Cooper wrote:
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -2390,7 +2390,7 @@ By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
>>  >              {ibrs,ibpb,ssbd,psfd,
>>  >              eager-fpu,l1d-flush,branch-harden,srb-lock,
>>  >              unpriv-mmio,gds-mit,div-scrub,lock-harden,
>> ->              bhi-dis-s}=<bool> ]`
>> +>              bhi-dis-s,bp-spec-reduce}=<bool> ]`
>>  
>>  Controls for speculative execution sidechannel mitigations.  By default, Xen
>>  will pick the most appropriate mitigations based on compiled in support,
>> @@ -2539,6 +2539,13 @@ boolean can be used to force or prevent Xen from using speculation barriers to
>>  protect lock critical regions.  This mitigation won't be engaged by default,
>>  and needs to be explicitly enabled on the command line.
>>  
>> +On hardware supporting SRSO_MSR_FIX, the `bp-spec-reduce=` option can be used
>> +to force or prevent Xen from using MSR_BP_CFG.BP_SPEC_REDUCE to mitigate the
>> +SRSO (Speculative Return Stack Overflow) vulnerability.
> Upon my request to add "... against HVM guests" here you replied "Ok.", yet
> then you didn't make the change? Even a changelog entry says you supposedly
> added this, so perhaps simply lost in a refresh?

Yes, and later in the thread you (correctly) pointed out that it's not
only for HVM guests.

The PV aspect, discussed in the following sentence, is very relevant and
makes it not specific to HVM guests.

> It also didn't really become clear to me whether the entirety of your reply
> to my request for adding "AMD" early in the sentence wasn't boiling down
> more to a "yes, perhaps".

It was a "no, I'm not going to introduce an inconsistency with the way
this document has been written for the past 7 years".


>
>> @@ -605,6 +606,24 @@ static void __init calculate_pv_max_policy(void)
>>          __clear_bit(X86_FEATURE_IBRS, fs);
>>      }
>>  
>> +    /*
>> +     * SRSO_U/S_NO means that the CPU is not vulnerable to SRSO attacks across
>> +     * the User (CPL3)/Supervisor (CPL<3) boundary.  However the PV64
>> +     * user/kernel boundary is CPL3 on both sides, so it won't convey the
>> +     * meaning that a PV kernel expects.
>> +     *
>> +     * PV32 guests are explicitly unsupported WRT speculative safety, so are
>> +     * ignored to avoid complicating the logic.
>> +     *
>> +     * After discussions with AMD, it is believed to be safe to offer
>> +     * SRSO_US_NO to PV guests when BP_SPEC_REDUCE is active.
> IOW that specific behavior is not tied to #VMEXIT / VMRUN, and also isn't
> going to in future hardware?

This is the best I can say in public.

I am satisfied with the explanation.
>> --- a/xen/arch/x86/cpu/amd.c
>> +++ b/xen/arch/x86/cpu/amd.c
>> @@ -1009,16 +1009,33 @@ static void cf_check fam17_disable_c6(void *arg)
>>  	wrmsrl(MSR_AMD_CSTATE_CFG, val & mask);
>>  }
>>  
>> -static void amd_check_erratum_1485(void)
>> +static void amd_check_bp_cfg(void)
>>  {
>> -	uint64_t val, chickenbit = (1 << 5);
>> +	uint64_t val, new = 0;
>>  
>> -	if (cpu_has_hypervisor || boot_cpu_data.x86 != 0x19 || !is_zen4_uarch())
>> +	/*
>> +	 * AMD Erratum #1485.  Set bit 5, as instructed.
>> +	 */
>> +	if (!cpu_has_hypervisor && boot_cpu_data.x86 == 0x19 && is_zen4_uarch())
>> +		new |= (1 << 5);
>> +
>> +	/*
>> +	 * On hardware supporting SRSO_MSR_FIX, activate BP_SPEC_REDUCE by
>> +	 * default.  This lets us do two things:
>> +         *
>> +         * 1) Avoid IBPB-on-entry to mitigate SRSO attacks from HVM guests.
>> +         * 2) Lets us advertise SRSO_US_NO to PV guests.
>> +	 */
>> +	if (boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) && opt_bp_spec_reduce)
>> +		new |= BP_CFG_SPEC_REDUCE;
>> +
>> +	/* Avoid reading BP_CFG if we don't intend to change anything. */
>> +	if (!new)
>>  		return;
>>  
>>  	rdmsrl(MSR_AMD64_BP_CFG, val);
>>  
>> -	if (val & chickenbit)
>> +	if ((val & new) == new)
>>  		return;
> You explained that you want to avoid making this more complex, upon my
> question towards tweaking this to also deal with us possibly clearing
> flags. I'm okay with that, but I did ask that you add at least half a
> sentence to actually clarify this to future readers (which might include
> myself).

"I wrote it this way because it is sufficient and simple, but you can
change it in the future if you really need to" is line noise wherever
it's written.

It literally goes without saying, for every line in the entire codebase.
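To make the pattern under discussion concrete, here is a self-contained sketch of the set-only read/modify/write logic (names are mine; the real code operates on MSR_AMD64_BP_CFG via rdmsrl/wrmsrl). Because the function only ever sets bits, checking `(val & want) == want` is sufficient to skip the write; clearing previously set bits is deliberately out of scope:

```c
#include <stdint.h>

/*
 * Compute the value to write back to a config register when we only
 * ever *set* bits.  If every requested bit is already set, the caller
 * can skip the (expensive) WRMSR entirely.  Handling the case of
 * clearing bits would need old/new masks and is intentionally omitted,
 * matching the simplicity argument in the thread.
 */
static uint64_t bp_cfg_apply(uint64_t val, uint64_t want)
{
    if ((val & want) == want)
        return val;            /* nothing to change, skip the write */

    return val | want;         /* set the requested bits */
}
```

A caller would also skip the initial read when `want` is zero, as the quoted patch does.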

>
>> @@ -1145,22 +1151,41 @@ static void __init ibpb_calculations(void)
>>           * Confusion.  Mitigate with IBPB-on-entry.
>>           */
>>          if ( !boot_cpu_has(X86_FEATURE_BTC_NO) )
>> -            def_ibpb_entry = true;
>> +            def_ibpb_entry_pv = def_ibpb_entry_hvm = true;
>>  
>>          /*
>> -         * Further to BTC, Zen3/4 CPUs suffer from Speculative Return Stack
>> -         * Overflow in most configurations.  Mitigate with IBPB-on-entry if we
>> -         * have the microcode that makes this an effective option.
>> +         * Further to BTC, Zen3 and later CPUs suffer from Speculative Return
>> +         * Stack Overflow in most configurations.  Mitigate with IBPB-on-entry
>> +         * if we have the microcode that makes this an effective option,
>> +         * except where there are other mitigating factors available.
>>           */
>>          if ( !boot_cpu_has(X86_FEATURE_SRSO_NO) &&
>>               boot_cpu_has(X86_FEATURE_IBPB_BRTYPE) )
>> -            def_ibpb_entry = true;
>> +        {
>> +            /*
>> +             * SRSO_U/S_NO is a subset of SRSO_NO, identifying that SRSO isn't
>> +             * possible across the user/supervisor boundary.  We only need to
>> +             * use IBPB-on-entry for PV guests on hardware which doesn't
>> +             * enumerate SRSO_US_NO.
>> +             */
>> +            if ( !boot_cpu_has(X86_FEATURE_SRSO_US_NO) )
>> +                def_ibpb_entry_pv = true;
> To my PV32 related comment here you said "..., we might as well do our best".
> Yet nothing has changed here?

Correct.

First, because the AMD Whitepaper actually says CPL0 and CPL3, not
User/Supervisor (the latter is only implied from the name AMD gave it).

Second, because I thought through what it would actually take to make
this work for PV32, which includes...

>  Yet then, thinking about it again, trying to
> help PV32 would apparently mean splitting def_ibpb_entry_pv and hence, via
> opt_ibpb_entry_pv, X86_FEATURE_IBPB_ENTRY_PV (and perhaps yet more items). I
> guess the resulting complexity then simply isn't worth it.

... the fact that Xen doesn't know if a PV guest is going to be PV32 or
PV64 until after the toolstack has partially constructed the domain,
including choosing its "default" policy.

This ends up in a bizarre case where PV32 is more featureful than PV64,
(SRSO_US_NO does not depend on BP_SPEC_REDUCE), and nothing in the CPUID
handling logic can cope nicely with this.

Here, you end up in the weird situation where it's safe to always pass
the hardware SRSO_U/S_NO bit through into PV32, yet it's not

> However, as an unrelated aspect: According to the respective part of the
> comment you add to  calculate_pv_max_policy(), do we need the IBPB when
> BP_SPEC_REDUCE is active?

To what are you referring?

SRSO is about Return predictions becoming poisoned by other
predictions.  The best way to mount an SRSO attack is with forged
near-branch predictions.

For SRSO safety, we use IBPB to flush the Branch *Type* information from
the BTB.  Fam17h happened to have this property, but Fam19h needed it 
retrofitted in a microcode update, with the prior "Indirect Branch 
Targets only, explicitly retaining the Branch Type information" being
retroactively named SBPB.

This does not interact with the use of IBPB for its
architecturally-given purpose.

>> +            /*
>> +             * SRSO_MSR_FIX enumerates that we can use MSR_BP_CFG.SPEC_REDUCE
>> +             * to mitigate SRSO across the host/guest boundary.  We only need
>> +             * to use IBPB-on-entry for HVM guests if we haven't enabled this
>> +             * control.
>> +             */
>> +            if ( !boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) || !opt_bp_spec_reduce )
>> +                def_ibpb_entry_hvm = true;
> Here and elsewhere, wouldn't conditionals become simpler if you (early on)
> cleared opt_bp_spec_reduce when !boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX)?

I don't think so, no.  The uses are all at different phases of
initialisation.

>
>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>> @@ -312,7 +312,9 @@ XEN_CPUFEATURE(FSRSC,              11*32+19) /*A  Fast Short REP SCASB */
>>  XEN_CPUFEATURE(AMD_PREFETCHI,      11*32+20) /*A  PREFETCHIT{0,1} Instructions */
>>  XEN_CPUFEATURE(SBPB,               11*32+27) /*A  Selective Branch Predictor Barrier */
>>  XEN_CPUFEATURE(IBPB_BRTYPE,        11*32+28) /*A  IBPB flushes Branch Type predictions too */
>> -XEN_CPUFEATURE(SRSO_NO,            11*32+29) /*A  Hardware not vulenrable to Speculative Return Stack Overflow */
>> +XEN_CPUFEATURE(SRSO_NO,            11*32+29) /*A  Hardware not vulnerable to Speculative Return Stack Overflow */
>> +XEN_CPUFEATURE(SRSO_US_NO,         11*32+30) /*A! Hardware not vulnerable to SRSO across the User/Supervisor boundary */
> Nit: Elsewhere we have ! first, and I think that's preferable, to avoid
> confusion with | (which normally comes last).

Ok.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 20:22:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 20:22:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749469.1157566 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZ9w-00023Y-W7; Wed, 26 Jun 2024 20:22:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749469.1157566; Wed, 26 Jun 2024 20:22:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZ9w-00023R-Sg; Wed, 26 Jun 2024 20:22:28 +0000
Received: by outflank-mailman (input) for mailman id 749469;
 Wed, 26 Jun 2024 20:22:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3iT=N4=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMZ9v-00023J-Vt
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 20:22:27 +0000
Received: from mail-qv1-xf35.google.com (mail-qv1-xf35.google.com
 [2607:f8b0:4864:20::f35])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd2e08d8-33f9-11ef-b4bb-af5377834399;
 Wed, 26 Jun 2024 22:22:26 +0200 (CEST)
Received: by mail-qv1-xf35.google.com with SMTP id
 6a1803df08f44-6a3652a732fso28365586d6.3
 for <xen-devel@lists.xenproject.org>; Wed, 26 Jun 2024 13:22:26 -0700 (PDT)
Received: from [10.125.226.166] ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 6a1803df08f44-6b57e732b6fsm8775776d6.116.2024.06.26.13.22.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 Jun 2024 13:22:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd2e08d8-33f9-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719433345; x=1720038145; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=3/LzR6jK3l/gKPb0WTlW/GXBuoMxm8aLVCTlEEHjDFs=;
        b=jHcx/WRw6vOT9tSrlv9DkRhscbeR6QMmEDOei0VA35T3V7Sqyyx3tRLXDhYAZbw4Fk
         AnI0+chD/itSIgkvVCjo/IKoaof9AjbxlDLjMJF6J8w5Al5WWAFUbCJMEmctJKuUq2dM
         FUFlLHccC7yL4vFRZl39RK/qpEnR2+Vc8h47g=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719433345; x=1720038145;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=3/LzR6jK3l/gKPb0WTlW/GXBuoMxm8aLVCTlEEHjDFs=;
        b=sGX290Z/C5/4xFQP2SN33mqUBeQBQuyvP5BNd5oMX7dhKw5PbMGER0rn5JdtWE8j1s
         iOZD6VEU1XnfLjX5H62Cu3h+aiZCsz+qntmIzzO5mK3v+raB3EopabcVi9PCCM6z/f1q
         0mgevXKHxUOtTVT9AwU4eCiDd3xmyEkQBW0UJoDJiBYka448kzKyPm6eliEofnJVHQhw
         tHAJH6n7X/w820cDtMCgecAOkRRdLXT8rXc1Ws86dLFyydaVWsz9Eh6z2eytQX2je6Mc
         MAEd/5tNKtMcV+IICAETKqEo8nn8O4aq48bPT/TapdG9p13VB7WdwuQfWh7gCK47j7ka
         q60w==
X-Forwarded-Encrypted: i=1; AJvYcCXJZDV/b54Z8TLFhoJZggR8udiT1s9qH0rBjF0y/S2qyrk9IhLZupg+kWOBrN+zfzaYN8Oc37K4v09u2DH9SfpuemWed+eRgs7nUP2M4H4=
X-Gm-Message-State: AOJu0Yw11uMWF8joFlBKyHDrLZcxTvp+ZhVndTu8d+tgsH2K4BQq5uks
	qrOLdMvGwMT2pgfjIgwxvRlbAI/FlcTfNWKod359Owg2x2RmZ/p/vlXoQMTFoDU=
X-Google-Smtp-Source: AGHT+IFOyGnpDr70neuE+DaSPSWVfUnP1cQxFty6UoGCpcZWM1VNoA02ieW7D8wz9BXAc2qUg1mmyA==
X-Received: by 2002:a05:6214:1630:b0:6b4:4585:8e43 with SMTP id 6a1803df08f44-6b5409e0c10mr104240376d6.31.1719433344835;
        Wed, 26 Jun 2024 13:22:24 -0700 (PDT)
Message-ID: <e6e7070b-d4c6-4e86-8de0-54726d2e8383@citrix.com>
Date: Wed, 26 Jun 2024 21:22:21 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 4/6] xen/bitops: Introduce for_each_set_bit()
To: Jan Beulich <jbeulich@suse.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-5-andrew.cooper3@citrix.com>
 <38c35f34-a55c-4647-874e-9d88fa899a58@suse.com>
Content-Language: en-GB
From: Andrew Cooper <andrew.cooper3@citrix.com>
Autocrypt: addr=andrew.cooper3@citrix.com; keydata=
 xsFNBFLhNn8BEADVhE+Hb8i0GV6mihnnr/uiQQdPF8kUoFzCOPXkf7jQ5sLYeJa0cQi6Penp
 VtiFYznTairnVsN5J+ujSTIb+OlMSJUWV4opS7WVNnxHbFTPYZVQ3erv7NKc2iVizCRZ2Kxn
 srM1oPXWRic8BIAdYOKOloF2300SL/bIpeD+x7h3w9B/qez7nOin5NzkxgFoaUeIal12pXSR
 Q354FKFoy6Vh96gc4VRqte3jw8mPuJQpfws+Pb+swvSf/i1q1+1I4jsRQQh2m6OTADHIqg2E
 ofTYAEh7R5HfPx0EXoEDMdRjOeKn8+vvkAwhviWXTHlG3R1QkbE5M/oywnZ83udJmi+lxjJ5
 YhQ5IzomvJ16H0Bq+TLyVLO/VRksp1VR9HxCzItLNCS8PdpYYz5TC204ViycobYU65WMpzWe
 LFAGn8jSS25XIpqv0Y9k87dLbctKKA14Ifw2kq5OIVu2FuX+3i446JOa2vpCI9GcjCzi3oHV
 e00bzYiHMIl0FICrNJU0Kjho8pdo0m2uxkn6SYEpogAy9pnatUlO+erL4LqFUO7GXSdBRbw5
 gNt25XTLdSFuZtMxkY3tq8MFss5QnjhehCVPEpE6y9ZjI4XB8ad1G4oBHVGK5LMsvg22PfMJ
 ISWFSHoF/B5+lHkCKWkFxZ0gZn33ju5n6/FOdEx4B8cMJt+cWwARAQABzSlBbmRyZXcgQ29v
 cGVyIDxhbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPsLBegQTAQgAJAIbAwULCQgHAwUVCgkI
 CwUWAgMBAAIeAQIXgAUCWKD95wIZAQAKCRBlw/kGpdefoHbdD/9AIoR3k6fKl+RFiFpyAhvO
 59ttDFI7nIAnlYngev2XUR3acFElJATHSDO0ju+hqWqAb8kVijXLops0gOfqt3VPZq9cuHlh
 IMDquatGLzAadfFx2eQYIYT+FYuMoPZy/aTUazmJIDVxP7L383grjIkn+7tAv+qeDfE+txL4
 SAm1UHNvmdfgL2/lcmL3xRh7sub3nJilM93RWX1Pe5LBSDXO45uzCGEdst6uSlzYR/MEr+5Z
 JQQ32JV64zwvf/aKaagSQSQMYNX9JFgfZ3TKWC1KJQbX5ssoX/5hNLqxMcZV3TN7kU8I3kjK
 mPec9+1nECOjjJSO/h4P0sBZyIUGfguwzhEeGf4sMCuSEM4xjCnwiBwftR17sr0spYcOpqET
 ZGcAmyYcNjy6CYadNCnfR40vhhWuCfNCBzWnUW0lFoo12wb0YnzoOLjvfD6OL3JjIUJNOmJy
 RCsJ5IA/Iz33RhSVRmROu+TztwuThClw63g7+hoyewv7BemKyuU6FTVhjjW+XUWmS/FzknSi
 dAG+insr0746cTPpSkGl3KAXeWDGJzve7/SBBfyznWCMGaf8E2P1oOdIZRxHgWj0zNr1+ooF
 /PzgLPiCI4OMUttTlEKChgbUTQ+5o0P080JojqfXwbPAyumbaYcQNiH1/xYbJdOFSiBv9rpt
 TQTBLzDKXok86M7BTQRS4TZ/ARAAkgqudHsp+hd82UVkvgnlqZjzz2vyrYfz7bkPtXaGb9H4
 Rfo7mQsEQavEBdWWjbga6eMnDqtu+FC+qeTGYebToxEyp2lKDSoAsvt8w82tIlP/EbmRbDVn
 7bhjBlfRcFjVYw8uVDPptT0TV47vpoCVkTwcyb6OltJrvg/QzV9f07DJswuda1JH3/qvYu0p
 vjPnYvCq4NsqY2XSdAJ02HrdYPFtNyPEntu1n1KK+gJrstjtw7KsZ4ygXYrsm/oCBiVW/OgU
 g/XIlGErkrxe4vQvJyVwg6YH653YTX5hLLUEL1NS4TCo47RP+wi6y+TnuAL36UtK/uFyEuPy
 wwrDVcC4cIFhYSfsO0BumEI65yu7a8aHbGfq2lW251UcoU48Z27ZUUZd2Dr6O/n8poQHbaTd
 6bJJSjzGGHZVbRP9UQ3lkmkmc0+XCHmj5WhwNNYjgbbmML7y0fsJT5RgvefAIFfHBg7fTY/i
 kBEimoUsTEQz+N4hbKwo1hULfVxDJStE4sbPhjbsPCrlXf6W9CxSyQ0qmZ2bXsLQYRj2xqd1
 bpA+1o1j2N4/au1R/uSiUFjewJdT/LX1EklKDcQwpk06Af/N7VZtSfEJeRV04unbsKVXWZAk
 uAJyDDKN99ziC0Wz5kcPyVD1HNf8bgaqGDzrv3TfYjwqayRFcMf7xJaL9xXedMcAEQEAAcLB
 XwQYAQgACQUCUuE2fwIbDAAKCRBlw/kGpdefoG4XEACD1Qf/er8EA7g23HMxYWd3FXHThrVQ
 HgiGdk5Yh632vjOm9L4sd/GCEACVQKjsu98e8o3ysitFlznEns5EAAXEbITrgKWXDDUWGYxd
 pnjj2u+GkVdsOAGk0kxczX6s+VRBhpbBI2PWnOsRJgU2n10PZ3mZD4Xu9kU2IXYmuW+e5KCA
 vTArRUdCrAtIa1k01sPipPPw6dfxx2e5asy21YOytzxuWFfJTGnVxZZSCyLUO83sh6OZhJkk
 b9rxL9wPmpN/t2IPaEKoAc0FTQZS36wAMOXkBh24PQ9gaLJvfPKpNzGD8XWR5HHF0NLIJhgg
 4ZlEXQ2fVp3XrtocHqhu4UZR4koCijgB8sB7Tb0GCpwK+C4UePdFLfhKyRdSXuvY3AHJd4CP
 4JzW0Bzq/WXY3XMOzUTYApGQpnUpdOmuQSfpV9MQO+/jo7r6yPbxT7CwRS5dcQPzUiuHLK9i
 nvjREdh84qycnx0/6dDroYhp0DFv4udxuAvt1h4wGwTPRQZerSm4xaYegEFusyhbZrI0U9tJ
 B8WrhBLXDiYlyJT6zOV2yZFuW47VrLsjYnHwn27hmxTC/7tvG3euCklmkn9Sl9IAKFu29RSo
 d5bD8kMSCYsTqtTfT6W4A3qHGvIDta3ptLYpIAOD2sY3GYq2nf3Bbzx81wZK14JdDDHUX2Rs
 6+ahAA==
In-Reply-To: <38c35f34-a55c-4647-874e-9d88fa899a58@suse.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26/06/2024 1:03 pm, Jan Beulich wrote:
> On 25.06.2024 21:07, Andrew Cooper wrote:
>> --- a/xen/include/xen/bitops.h
>> +++ b/xen/include/xen/bitops.h
>> @@ -56,6 +56,16 @@ static always_inline __pure unsigned int ffs64(uint64_t x)
>>          return !x || (uint32_t)x ? ffs(x) : ffs(x >> 32) + 32;
>>  }
>>  
>> +/*
>> + * A type-generic ffs() which picks the appropriate ffs{,l,64}() based on its
>> + * argument.
>> + */
>> +#define ffs_g(x)                                        \
>> +    sizeof(x) <= sizeof(int) ? ffs(x) :                 \
>> +        sizeof(x) <= sizeof(long) ? ffsl(x) :           \
>> +        sizeof(x) <= sizeof(uint64_t) ? ffs64(x) :      \
>> +        ({ BUILD_ERROR("ffs_g() Bad input type"); 0; })
> I guess the lack of parentheses around the entire construct might be the
> reason for the problems you're seeing, as that ...
>
>> @@ -92,6 +102,20 @@ static always_inline __pure unsigned int fls64(uint64_t x)
>>      }
>>  }
>>  
>> +/*
>> + * for_each_set_bit() - Iterate over all set bits in a scalar value.
>> + *
>> + * @iter An iterator name.  Scope is within the loop only.
>> + * @val  A scalar value to iterate over.
>> + *
>> + * A copy of @val is taken internally.
>> + */
>> +#define for_each_set_bit(iter, val)                     \
>> +    for ( typeof(val) __v = (val); __v; )               \
>> +        for ( unsigned int (iter);                      \
>> +              __v && ((iter) = ffs_g(__v) - 1, true);   \
> ... affects what effect the "- 1" here has.

Yes, it was this.

After fixing the bracketing, everything is green. 
https://gitlab.com/xen-project/people/andyhhp/xen/-/pipelines/1349243540

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 20:58:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 20:58:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749480.1157585 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZiv-0008Jg-OI; Wed, 26 Jun 2024 20:58:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749480.1157585; Wed, 26 Jun 2024 20:58:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMZiv-0008JZ-Ka; Wed, 26 Jun 2024 20:58:37 +0000
Received: by outflank-mailman (input) for mailman id 749480;
 Wed, 26 Jun 2024 20:58:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMZit-0008JP-Js; Wed, 26 Jun 2024 20:58:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMZit-0008M2-Gi; Wed, 26 Jun 2024 20:58:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMZit-0005C3-7t; Wed, 26 Jun 2024 20:58:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMZit-0002gV-7H; Wed, 26 Jun 2024 20:58:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Te+7dHbsg4grIflkKcTrbn5AUvqH6TlusqW812Zq2A0=; b=ETJGcT4hvXk+n4AEzQ3JStHGpe
	Lk8Gj5XELnElA3qmfing4mB5XkLa70SvgW6oLRZ5Z9Snl1N/MUicxQiZoc39QdgeZVQjANzAqbZfl
	n2YAUyB+qFWUyuGxSKhVZ1rPMkj9QTL91adBdd2wnRJPaoIE7L2kYnVISmRe0EFIceoE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186514-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.18-testing test] 186514: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.18-testing:test-amd64-amd64-xl:debian-fixup:fail:heisenbug
    xen-4.18-testing:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    xen-4.18-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.18-testing:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e95d30f9e5eed0c5d9dbf72d4cc3ae373152ab10
X-Osstest-Versions-That:
    xen=01f7a3c792241d348a4e454a30afdf6c0d6cd71c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 20:58:35 +0000

flight 186514 xen-4.18-testing real [real]
flight 186524 xen-4.18-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186514/
http://logs.test-lab.xenproject.org/osstest/logs/186524/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl          13 debian-fixup        fail pass in 186524-retest
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 186524-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186524 like 186067
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186524 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186067
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186067
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186067
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186067
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186067
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e95d30f9e5eed0c5d9dbf72d4cc3ae373152ab10
baseline version:
 xen                  01f7a3c792241d348a4e454a30afdf6c0d6cd71c

Last test of basis   186067  2024-05-21 23:40:31 Z   35 days
Testing same since   186514  2024-06-26 12:06:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          fail    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   01f7a3c792..e95d30f9e5  e95d30f9e5eed0c5d9dbf72d4cc3ae373152ab10 -> stable-4.18


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 21:30:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 21:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749508.1157659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMaDL-0006Hm-RK; Wed, 26 Jun 2024 21:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749508.1157659; Wed, 26 Jun 2024 21:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMaDL-0006Hf-MG; Wed, 26 Jun 2024 21:30:03 +0000
Received: by outflank-mailman (input) for mailman id 749508;
 Wed, 26 Jun 2024 21:30:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaDK-0005zo-0a; Wed, 26 Jun 2024 21:30:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaDJ-0000ai-U7; Wed, 26 Jun 2024 21:30:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaDJ-00065x-HO; Wed, 26 Jun 2024 21:30:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaDJ-0004qH-Gr; Wed, 26 Jun 2024 21:30:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=E+LiX/LK1lbOUGzHJLK2roiBRxyRl4urKBb2OzFLoJY=; b=Ohj+CtDwaH5nwq3bqtYZX4A3zG
	fDqytPObue0PjBXuiD8WJaGrrbRNre+EDARc5ZPsl/STwRlrtXqUfrO8fIczB1TnUv7Tzw1oymWWz
	0aXPRH4lZb0kQ1AU3CkJwGbYrwvWhx7j4tFtR8fKobjxgR8bQyDLsrKrSlTDGdDrEgmE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186520-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186520: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ae09721a65ab3294439f6fa233adaf3b897f702f
X-Osstest-Versions-That:
    ovmf=89377ece8f1c7243d25fd84488dcd03e37b9e661
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 21:30:01 +0000

flight 186520 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186520/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ae09721a65ab3294439f6fa233adaf3b897f702f
baseline version:
 ovmf                 89377ece8f1c7243d25fd84488dcd03e37b9e661

Last test of basis   186516  2024-06-26 12:41:06 Z    0 days
Testing same since   186519  2024-06-26 16:43:15 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gaurav Pandya <gaurav.pandya@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   89377ece8f..ae09721a65  ae09721a65ab3294439f6fa233adaf3b897f702f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 22:18:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 22:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749520.1157668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMaxk-00058i-AA; Wed, 26 Jun 2024 22:18:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749520.1157668; Wed, 26 Jun 2024 22:18:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMaxk-00058b-7V; Wed, 26 Jun 2024 22:18:00 +0000
Received: by outflank-mailman (input) for mailman id 749520;
 Wed, 26 Jun 2024 22:17:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaxi-00058O-6n; Wed, 26 Jun 2024 22:17:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaxi-0001Ps-3j; Wed, 26 Jun 2024 22:17:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaxh-0007Kl-Ol; Wed, 26 Jun 2024 22:17:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMaxh-0001lC-O2; Wed, 26 Jun 2024 22:17:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=otw2MrYSbM7vcbUQiqfYwX/7Ne/bOq3sv32sTzhk5Z0=; b=JoeqGZkhP63UAS3OVs+Elod+HU
	MF4FRLQFxnmxE3J5VxgQ2F+4eIaC5oTKFqSqUjCB3FCLUV30GUsTXBwpeXv1RMFdwKRP4wzIgS5Ro
	1fi5GqPkHcXzqyWBK6LR2IhCKaCCpXiw+umYT7VaqY/o8sBByjbHHkschMaPof+8B6II=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186523-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186523: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ecadd22a3de8ce7f1799e85af6f1e37c06c57049
X-Osstest-Versions-That:
    xen=dde20e47afb51404d59be492ca924704f0fbf71d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 22:17:57 +0000

flight 186523 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186523/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ecadd22a3de8ce7f1799e85af6f1e37c06c57049
baseline version:
 xen                  dde20e47afb51404d59be492ca924704f0fbf71d

Last test of basis   186518  2024-06-26 15:00:23 Z    0 days
Testing same since   186523  2024-06-26 19:04:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@cloud.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   dde20e47af..ecadd22a3d  ecadd22a3de8ce7f1799e85af6f1e37c06c57049 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 23:27:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 23:27:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749538.1157679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMc3A-0006Iu-3F; Wed, 26 Jun 2024 23:27:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749538.1157679; Wed, 26 Jun 2024 23:27:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMc3A-0006In-0S; Wed, 26 Jun 2024 23:27:40 +0000
Received: by outflank-mailman (input) for mailman id 749538;
 Wed, 26 Jun 2024 23:27:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMc39-0006Id-7e; Wed, 26 Jun 2024 23:27:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMc39-0002mW-4M; Wed, 26 Jun 2024 23:27:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMc38-0000a1-Nu; Wed, 26 Jun 2024 23:27:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMc38-0000LY-NQ; Wed, 26 Jun 2024 23:27:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FebBlj4IzNDuw4JcNRjFJXRDFmcIvvpWHQb79rQ944s=; b=BW/wYZXfEwwIm64+hMdnmX9i/I
	VCBk1e+jO1GOG1fXhRZnGHe8KUlmrySsdY1b+c73QoRb5f8ZnEmx5iGZU/XPGcO/3XBnhQvXNWPzy
	FYjJDOsMK8QYM88n/1XWC7UiVevn0nImJhU0LKM9pPm3qRHkT+vxM8h/eP+5JgaRyFws=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186515-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.17-testing test] 186515: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.17-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-4.17-testing:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=87a49d2ec6fe2881e4b0947ea3e0e0f041dc6be3
X-Osstest-Versions-That:
    xen=3c7c9225ffa5605bf0603f9dd1666f3f786e2c44
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 Jun 2024 23:27:38 +0000

flight 186515 xen-4.17-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186515/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186135
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186135
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186135
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186135
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186135
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186135
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87a49d2ec6fe2881e4b0947ea3e0e0f041dc6be3
baseline version:
 xen                  3c7c9225ffa5605bf0603f9dd1666f3f786e2c44

Last test of basis   186135  2024-05-24 13:05:09 Z   33 days
Testing same since   186515  2024-06-26 12:36:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3c7c9225ff..87a49d2ec6  87a49d2ec6fe2881e4b0947ea3e0e0f041dc6be3 -> stable-4.17


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 23:37:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 23:37:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749562.1157752 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMcCX-00004t-K4; Wed, 26 Jun 2024 23:37:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749562.1157752; Wed, 26 Jun 2024 23:37:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMcCX-0008WS-HY; Wed, 26 Jun 2024 23:37:21 +0000
Received: by outflank-mailman (input) for mailman id 749562;
 Wed, 26 Jun 2024 23:37:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dBdT=N4=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMcCV-0008WL-SM
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 23:37:19 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0554fb0e-3415-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 01:37:18 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id D657ACE2CB6;
 Wed, 26 Jun 2024 23:37:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3C9B5C116B1;
 Wed, 26 Jun 2024 23:37:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0554fb0e-3415-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719445034;
	bh=6/kCuEt6MAEYKBkd9O118P4oub3DIuxgn/yMV/Kw7gw=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=P7wGZUlJQ8VHey2LH9f4yjQg78EQ0fN1TitTl8QNt0EBC4r+2AgGKhD92LOOFiqRN
	 H1wrThqx5tH8461hw7RAYCBYBkLgiOucmtb98FuI2ShQvJtjFpmS+L9iyaI91AV12w
	 6j1Qkqbz37wXQC44EeUp5IV9trtEfqGhjakTUET5D/6/kKSILodoMbYXxcpyoQE/Sv
	 kvMOdxm2RGvd2BzNVclfpEg1Lb3UN1BO8kQxApUKfy3xRhwtnAIEhQPosBUwQgOrwD
	 /+w/7jF03OH0l2pIMkW/noGwLPhMwgHWNWE2QQRI5Wo7ppGk0fbYwDzUR8NjtILxTz
	 Upn97C+iSQftQ==
Date: Wed, 26 Jun 2024 16:37:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Julien Grall <julien@xen.org>
cc: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Bertrand Marquis <bertrand.marquis@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, oleksii.kurochko@gmail.com
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory
 node probing
In-Reply-To: <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
Message-ID: <alpine.DEB.2.22.394.2406261633240.3635@ubuntu-linux-20-04-desktop>
References: <20240626080428.18480-1-michal.orzel@amd.com> <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Julien Grall wrote:
> Hi Michal,
> 
> On 26/06/2024 09:04, Michal Orzel wrote:
> > Memory node probing is done as part of early_scan_node() that is called
> > for each node with depth >= 1 (root node is at depth 0). According to
> > Devicetree Specification v0.4, chapter 3.4, a /memory node can only exist
> > as a top-level node. However, Xen incorrectly considers any node with
> > unit node name "memory" as RAM. This buggy behavior can result in a
> > failure if there are other nodes in the device tree (at depth >= 2) with
> > "memory" as unit node name. An example can be a "memory@xxx" node under
> > /reserved-memory. Fix it by introducing device_tree_is_memory_node() to
> > perform all the required checks to assess if a node is a proper /memory
> > node.
> > 
> > Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM location and
> > size")
> > Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> > ---
> > 4.19: This patch is fixing a possible early boot Xen failure (before main
> > console is initialized). In my case it results in a warning "Shattering
> > superpage is not supported" and panic "Unable to setup the directmap
> > mappings".
> > 
> > If this is too late for this patch to go in, we can backport it after the
> > tree
> > re-opens.
> 
> The code looks correct to me, but I am not sure about merging it to 4.19.
> 
> This is not a new bug (in fact has been there since pretty much Xen on Arm was
> created) and we haven't seen any report until today. So in some way it would
> be best to merge it after 4.19 so it can get more testing.

First it was found on a new board, but then the issue also appeared on
an old board (the Ultrascale+). I think the reason is that a
reserved-memory node was added, triggering the bug.


> On the other hand, I guess this will block you. Is this a new platform? Is it
> available?

Yes, the platform is available, so I would be more concerned about Xen
4.19 not booting on newer Ultrascale+ device trees. That said, we can
also backport the fix later to staging-4.19. I'll leave the decision to
you.
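For readers following along, the validation the patch describes can be sketched roughly as below. This is a minimal standalone illustration, not the actual Xen code: the real patch introduces device_tree_is_memory_node() on top of libfdt, and the function and parameter names here are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Hypothetical sketch of the check described in the patch: a node only
 * counts as a RAM-describing /memory node if it sits directly under the
 * root (depth 1), its unit name (the part before any '@') is "memory",
 * and its device_type property is "memory".  Nodes deeper in the tree,
 * such as /reserved-memory/memory@xxx at depth 2, are rejected.
 */
static bool is_top_level_memory_node(int depth, const char *name,
                                     const char *device_type)
{
    const char *unit = strchr(name, '@');
    size_t len = unit ? (size_t)(unit - name) : strlen(name);

    if ( depth != 1 )
        return false;
    if ( len != strlen("memory") || strncmp(name, "memory", len) != 0 )
        return false;
    return device_type && strcmp(device_type, "memory") == 0;
}
```

With this shape, /memory@40000000 at depth 1 passes, while a memory@xxx node nested under /reserved-memory (depth 2) does not, which matches the failure scenario in the commit message.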


From xen-devel-bounces@lists.xenproject.org Wed Jun 26 23:58:31 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 Jun 2024 23:58:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749568.1157763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMcWs-0003ft-8Z; Wed, 26 Jun 2024 23:58:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749568.1157763; Wed, 26 Jun 2024 23:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMcWs-0003fm-5r; Wed, 26 Jun 2024 23:58:22 +0000
Received: by outflank-mailman (input) for mailman id 749568;
 Wed, 26 Jun 2024 23:58:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gOWq=N4=outlook.com=mhklinux@srs-se1.protection.inumbo.net>)
 id 1sMcWq-0003fa-7g
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 23:58:20 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02olkn20814.outbound.protection.outlook.com
 [2a01:111:f403:2c07::814])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f496da10-3417-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 01:58:17 +0200 (CEST)
Received: from SN6PR02MB4157.namprd02.prod.outlook.com (2603:10b6:805:33::23)
 by MN2PR02MB6670.namprd02.prod.outlook.com (2603:10b6:208:1d8::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.33; Wed, 26 Jun
 2024 23:58:13 +0000
Received: from SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df]) by SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df%2]) with mapi id 15.20.7698.033; Wed, 26 Jun 2024
 23:58:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f496da10-3417-11ef-b4bb-af5377834399
From: Michael Kelley <mhklinux@outlook.com>
To: "robin.murphy@arm.com" <robin.murphy@arm.com>, "joro@8bytes.org"
	<joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>, "hch@lst.de"
	<hch@lst.de>, "m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"petr@tesarici.cz" <petr@tesarici.cz>, "iommu@lists.linux.dev"
	<iommu@lists.linux.dev>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: RE: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Topic: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Index: AQHauIjoJBWSrCyO6UWzcncSceBiMLHa1Z9w
Date: Wed, 26 Jun 2024 23:58:13 +0000
Message-ID:
 <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
In-Reply-To: <20240607031421.182589-1-mhklinux@outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-tmn: [l5ljiyiz4abjBC6x5j2Rf1wSkbLh+Pxz]
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SN6PR02MB4157:EE_|MN2PR02MB6670:EE_
x-ms-office365-filtering-correlation-id: d26794d7-9b34-4ffe-0305-08dc963bd706
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN6PR02MB4157.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: d26794d7-9b34-4ffe-0305-08dc963bd706
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 Jun 2024 23:58:13.3871
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR02MB6670

From: mhkelley58@gmail.com <mhkelley58@gmail.com> Sent: Thursday, June 6, 2024 8:14 PM
>
> With CONFIG_SWIOTLB_DYNAMIC enabled, each round-trip map/unmap pair
> in the swiotlb results in 6 calls to swiotlb_find_pool(). In multiple
> places, the pool is found and used in one function, and then must be
> found again in the next function that is called because only the
> tlb_addr is passed as an argument. These are the six call sites:
>
> dma_direct_map_page:
> 1. swiotlb_map->swiotlb_tbl_map_single->swiotlb_bounce
>
> dma_direct_unmap_page:
> 2. dma_direct_sync_single_for_cpu->is_swiotlb_buffer
> 3. dma_direct_sync_single_for_cpu->swiotlb_sync_single_for_cpu->
> 	swiotlb_bounce
> 4. is_swiotlb_buffer
> 5. swiotlb_tbl_unmap_single->swiotlb_del_transient
> 6. swiotlb_tbl_unmap_single->swiotlb_release_slots
>
> Reduce the number of calls by finding the pool at a higher level, and
> passing it as an argument instead of searching again. A key change is
> for is_swiotlb_buffer() to return a pool pointer instead of a boolean,
> and then pass this pool pointer to subsequent swiotlb functions.
> With these changes, a round-trip map/unmap pair requires only 2 calls
> to swiotlb_find_pool():
>
> dma_direct_unmap_page:
> 1. dma_direct_sync_single_for_cpu->is_swiotlb_buffer
> 2. is_swiotlb_buffer
>
> These changes come from noticing the inefficiencies in a code review,
> not from performance measurements. With CONFIG_SWIOTLB_DYNAMIC,
> swiotlb_find_pool() is not trivial, and it uses an RCU read lock,
> so avoiding the redundant calls helps performance in a hot path.
> When CONFIG_SWIOTLB_DYNAMIC is *not* set, the code size reduction
> is minimal and the perf benefits are likely negligible, but no
> harm is done.
>
> No functional change is intended.
>
> Signed-off-by: Michael Kelley <mhklinux@outlook.com>
> ---
> This patch trades off making many of the core swiotlb APIs take
> an additional argument in order to avoid duplicating calls to
> swiotlb_find_pool(). The current code seems rather wasteful in
> making 6 calls per round-trip, but I'm happy to accept others'
> judgment as to whether getting rid of the waste is worth the
> additional code complexity.

Quick ping on this RFC.  Is there any interest in moving forward?
Quite a few lines of code are affected because of adding the
additional "pool" argument to several functions, but the change
is conceptually pretty simple.

Michael

>
>  drivers/iommu/dma-iommu.c | 27 ++++++++++++++------
>  drivers/xen/swiotlb-xen.c | 25 +++++++++++-------
>  include/linux/swiotlb.h   | 54 +++++++++++++++++++++------------------
>  kernel/dma/direct.c       | 12 ++++++---
>  kernel/dma/direct.h       | 18 ++++++++-----
>  kernel/dma/swiotlb.c      | 43 ++++++++++++++++---------------
>  6 files changed, 106 insertions(+), 73 deletions(-)
>
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index f731e4b2a417..ab6bc37ecf90 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1073,6 +1073,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
>  		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t phys;
> +	struct io_tlb_pool *pool;
>
>  	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
> @@ -1081,21 +1082,25 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
>  	if (!dev_is_dma_coherent(dev))
>  		arch_sync_dma_for_cpu(phys, size, dir);
>
> -	if (is_swiotlb_buffer(dev, phys))
> -		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
> +	pool = is_swiotlb_buffer(dev, phys);
> +	if (pool)
> +		swiotlb_sync_single_for_cpu(dev, phys, size, dir, pool);
>  }
>
>  static void iommu_dma_sync_single_for_device(struct device *dev,
>  		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t phys;
> +	struct io_tlb_pool *pool;
>
>  	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
>
>  	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
> -	if (is_swiotlb_buffer(dev, phys))
> -		swiotlb_sync_single_for_device(dev, phys, size, dir);
> +
> +	pool = is_swiotlb_buffer(dev, phys);
> +	if (pool)
> +		swiotlb_sync_single_for_device(dev, phys, size, dir, pool);
>
>  	if (!dev_is_dma_coherent(dev))
>  		arch_sync_dma_for_device(phys, size, dir);
> @@ -1189,8 +1194,12 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  		arch_sync_dma_for_device(phys, size, dir);
>
>  	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
> -	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
> -		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> +	if (iova == DMA_MAPPING_ERROR) {
> +		struct io_tlb_pool *pool = is_swiotlb_buffer(dev, phys);
> +
> +		if (pool)
> +			swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs, pool);
> +	}
>  	return iova;
>  }
>
> @@ -1199,6 +1208,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
>  {
>  	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>  	phys_addr_t phys;
> +	struct io_tlb_pool *pool;
>
>  	phys = iommu_iova_to_phys(domain, dma_handle);
>  	if (WARN_ON(!phys))
> @@ -1209,8 +1219,9 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
>
>  	__iommu_dma_unmap(dev, dma_handle, size);
>
> -	if (unlikely(is_swiotlb_buffer(dev, phys)))
> -		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> +	pool = is_swiotlb_buffer(dev, phys);
> +	if (unlikely(pool))
> +		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs, pool);
>  }
>
>  /*
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 6579ae3f6dac..7af8c8466e1d 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -88,7 +88,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>  	return 0;
>  }
>
> -static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
> +static struct io_tlb_pool *is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  {
>  	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
>  	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
> @@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  	 */
>  	if (pfn_valid(PFN_DOWN(paddr)))
>  		return is_swiotlb_buffer(dev, paddr);
> -	return 0;
> +	return NULL;
>  }
>
>  #ifdef CONFIG_X86
> @@ -228,7 +228,8 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>  	 */
>  	if (unlikely(!dma_capable(dev, dev_addr, size, true))) {
>  		swiotlb_tbl_unmap_single(dev, map, size, dir,
> -				attrs | DMA_ATTR_SKIP_CPU_SYNC);
> +				attrs | DMA_ATTR_SKIP_CPU_SYNC,
> +				swiotlb_find_pool(dev, map));
>  		return DMA_MAPPING_ERROR;
>  	}
>
> @@ -254,6 +255,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
>  		size_t size, enum dma_data_direction dir, unsigned long attrs)
>  {
>  	phys_addr_t paddr = xen_dma_to_phys(hwdev, dev_addr);
> +	struct io_tlb_pool *pool;
>
>  	BUG_ON(dir == DMA_NONE);
>
> @@ -265,8 +267,9 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
>  	}
>
>  	/* NOTE: We use dev_addr here, not paddr! */
> -	if (is_xen_swiotlb_buffer(hwdev, dev_addr))
> -		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs);
> +	pool = is_xen_swiotlb_buffer(hwdev, dev_addr);
> +	if (pool)
> +		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs, pool);
>  }
>
>  static void
> @@ -274,6 +277,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
>  		size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
> +	struct io_tlb_pool *pool;
>
>  	if (!dev_is_dma_coherent(dev)) {
>  		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
> @@ -282,8 +286,9 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
>  			xen_dma_sync_for_cpu(dev, dma_addr, size, dir);
>  	}
>
> -	if (is_xen_swiotlb_buffer(dev, dma_addr))
> -		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
> +	pool = is_xen_swiotlb_buffer(dev, dma_addr);
> +	if (pool)
> +		swiotlb_sync_single_for_cpu(dev, paddr, size, dir, pool);
>  }
>
>  static void
> @@ -291,9 +296,11 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
>  		size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
> +	struct io_tlb_pool *pool;
>
> -	if (is_xen_swiotlb_buffer(dev, dma_addr))
> -		swiotlb_sync_single_for_device(dev, paddr, size, dir);
> +	pool = is_xen_swiotlb_buffer(dev, dma_addr);
> +	if (pool)
> +		swiotlb_sync_single_for_device(dev, paddr, size, dir, pool);
>
>  	if (!dev_is_dma_coherent(dev)) {
>  		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 14bc10c1bb23..ce8651949123 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -42,24 +42,6 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
>  	int (*remap)(void *tlb, unsigned long nslabs));
>  extern void __init swiotlb_update_mem_attributes(void);
>
> -phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> -		size_t mapping_size,
> -		unsigned int alloc_aligned_mask, enum dma_data_direction dir,
> -		unsigned long attrs);
> -
> -extern void swiotlb_tbl_unmap_single(struct device *hwdev,
> -				     phys_addr_t tlb_addr,
> -				     size_t mapping_size,
> -				     enum dma_data_direction dir,
> -				     unsigned long attrs);
> -
> -void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
> -		size_t size, enum dma_data_direction dir);
> -void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
> -		size_t size, enum dma_data_direction dir);
> -dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
> -		size_t size, enum dma_data_direction dir, unsigned long attrs);
> -
>  #ifdef CONFIG_SWIOTLB
>
>  /**
> @@ -168,12 +150,12 @@ static inline struct io_tlb_pool *swiotlb_find_pool(struct device *dev,
>   * * %true if @paddr points into a bounce buffer
>   * * %false otherwise
>   */
> -static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> +static inline struct io_tlb_pool *is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>
>  	if (!mem)
> -		return false;
> +		return NULL;
>
>  #ifdef CONFIG_SWIOTLB_DYNAMIC
>  	/*
> @@ -187,10 +169,13 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  	 * This barrier pairs with smp_mb() in swiotlb_find_slots().
>  	 */
>  	smp_rmb();
> -	return READ_ONCE(dev->dma_uses_io_tlb) &&
> -		swiotlb_find_pool(dev, paddr);
> +	if (READ_ONCE(dev->dma_uses_io_tlb))
> +		return swiotlb_find_pool(dev, paddr);
> +	return NULL;
>  #else
> -	return paddr >= mem->defpool.start && paddr < mem->defpool.end;
> +	if (paddr >= mem->defpool.start && paddr < mem->defpool.end)
> +		return &mem->defpool;
> +	return NULL;
>  #endif
>  }
>
> @@ -201,6 +186,25 @@ static inline bool is_swiotlb_force_bounce(struct device *dev)
>  	return mem && mem->force_bounce;
>  }
>
> +phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> +		size_t mapping_size,
> +		unsigned int alloc_aligned_mask, enum dma_data_direction dir,
> +		unsigned long attrs);
> +
> +extern void swiotlb_tbl_unmap_single(struct device *hwdev,
> +				     phys_addr_t tlb_addr,
> +				     size_t mapping_size,
> +				     enum dma_data_direction dir,
> +				     unsigned long attrs,
> +				     struct io_tlb_pool *pool);
> +
> +void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
> +		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool);
> +void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
> +		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool);
> +dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
> +		size_t size, enum dma_data_direction dir, unsigned long attrs);
> +
>  void swiotlb_init(bool addressing_limited, unsigned int flags);
>  void __init swiotlb_exit(void);
>  void swiotlb_dev_init(struct device *dev);
> @@ -219,9 +223,9 @@ static inline void swiotlb_dev_init(struct device *dev)
>  {
>  }
>
> -static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> +static inline struct io_tlb_pool *is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
> -	return false;
> +	return NULL;
>  }
>  static inline bool is_swiotlb_force_bounce(struct device *dev)
>  {
> diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> index 4d543b1e9d57..50689afb0ffd 100644
> --- a/kernel/dma/direct.c
> +++ b/kernel/dma/direct.c
> @@ -399,14 +399,16 @@ void dma_direct_sync_sg_for_device(struct device *dev,
>  		struct scatterlist *sgl, int nents, enum dma_data_direction dir)
>  {
>  	struct scatterlist *sg;
> +	struct io_tlb_pool *pool;
>  	int i;
>
>  	for_each_sg(sgl, sg, nents, i) {
>  		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
>
> -		if (unlikely(is_swiotlb_buffer(dev, paddr)))
> +		pool = is_swiotlb_buffer(dev, paddr);
> +		if (unlikely(pool))
>  			swiotlb_sync_single_for_device(dev, paddr, sg->length,
> -						       dir);
> +						       dir, pool);
>
>  		if (!dev_is_dma_coherent(dev))
>  			arch_sync_dma_for_device(paddr, sg->length,
> @@ -422,6 +424,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
>  		struct scatterlist *sgl, int nents, enum dma_data_direction dir)
>  {
>  	struct scatterlist *sg;
> +	struct io_tlb_pool *pool;
>  	int i;
>
>  	for_each_sg(sgl, sg, nents, i) {
> @@ -430,9 +433,10 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
>  		if (!dev_is_dma_coherent(dev))
>  			arch_sync_dma_for_cpu(paddr, sg->length, dir);
>
> -		if (unlikely(is_swiotlb_buffer(dev, paddr)))
> +		pool = is_swiotlb_buffer(dev, paddr);
> +		if (unlikely(pool))
>  			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
> -						    dir);
> +						    dir, pool);
>
>  		if (dir == DMA_FROM_DEVICE)
>  			arch_dma_mark_clean(paddr, sg->length);
> diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
> index 18d346118fe8..72aa65558e07 100644
> --- a/kernel/dma/direct.h
> +++ b/kernel/dma/direct.h
> @@ -57,9 +57,11 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
>  		dma_addr_t addr, size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t paddr = dma_to_phys(dev, addr);
> +	struct io_tlb_pool *pool;
>
> -	if (unlikely(is_swiotlb_buffer(dev, paddr)))
> -		swiotlb_sync_single_for_device(dev, paddr, size, dir);
> +	pool = is_swiotlb_buffer(dev, paddr);
> +	if (unlikely(pool))
> +		swiotlb_sync_single_for_device(dev, paddr, size, dir, pool);
>
>  	if (!dev_is_dma_coherent(dev))
>  		arch_sync_dma_for_device(paddr, size, dir);
> @@ -69,14 +71,16 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
>  		dma_addr_t addr, size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t paddr = dma_to_phys(dev, addr);
> +	struct io_tlb_pool *pool;
>
>  	if (!dev_is_dma_coherent(dev)) {
>  		arch_sync_dma_for_cpu(paddr, size, dir);
>  		arch_sync_dma_for_cpu_all();
>  	}
>
> -	if (unlikely(is_swiotlb_buffer(dev, paddr)))
> -		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
> +	pool = is_swiotlb_buffer(dev, paddr);
> +	if (unlikely(pool))
> +		swiotlb_sync_single_for_cpu(dev, paddr, size, dir, pool);
>
>  	if (dir == DMA_FROM_DEVICE)
>  		arch_dma_mark_clean(paddr, size);
> @@ -117,12 +121,14 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
>  		size_t size, enum dma_data_direction dir, unsigned long attrs)
>  {
>  	phys_addr_t phys = dma_to_phys(dev, addr);
> +	struct io_tlb_pool *pool;
>
>  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
>  		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
>
> -	if (unlikely(is_swiotlb_buffer(dev, phys)))
> +	pool = is_swiotlb_buffer(dev, phys);
> +	if (unlikely(pool))
>  		swiotlb_tbl_unmap_single(dev, phys, size, dir,
> -					 attrs | DMA_ATTR_SKIP_CPU_SYNC);
> +					 attrs | DMA_ATTR_SKIP_CPU_SYNC, pool);
>  }
>  #endif /* _KERNEL_DMA_DIRECT_H */
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index fe1ccb53596f..59b3e333651d 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -855,9 +855,8 @@ static unsigned int swiotlb_align_offset(struct device *dev,
>   * Bounce: copy the swiotlb buffer from or back to the original dma location
>   */
>  static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
> -			   enum dma_data_direction dir)
> +			   enum dma_data_direction dir, struct io_tlb_pool *mem)
>  {
> -	struct io_tlb_pool *mem = swiotlb_find_pool(dev, tlb_addr);
>  	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
>  	phys_addr_t orig_addr = mem->slots[index].orig_addr;
>  	size_t alloc_size = mem->slots[index].alloc_size;
> @@ -1435,13 +1434,13 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
>  	 * hardware behavior.  Use of swiotlb is supposed to be transparent,
>  	 * i.e. swiotlb must not corrupt memory by clobbering unwritten bytes.
>  	 */
> -	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE);
> +	swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_TO_DEVICE, pool);
>  	return tlb_addr;
>  }
>
> -static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
> +static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr,
> +				  struct io_tlb_pool *mem)
>  {
> -	struct io_tlb_pool *mem = swiotlb_find_pool(dev, tlb_addr);
>  	unsigned long flags;
>  	unsigned int offset = swiotlb_align_offset(dev, 0, tlb_addr);
>  	int index, nslots, aindex;
> @@ -1505,11 +1504,9 @@ static void swiotlb_release_slots(struct device *dev, phys_addr_t tlb_addr)
>   *
>   * Return: %true if @tlb_addr belonged to a transient pool that was released.
>   */
> -static bool swiotlb_del_transient(struct device *dev, phys_addr_t tlb_addr)
> +static bool swiotlb_del_transient(struct device *dev, phys_addr_t tlb_addr,
> +				  struct io_tlb_pool *pool)
>  {
> -	struct io_tlb_pool *pool;
> -
> -	pool = swiotlb_find_pool(dev, tlb_addr);
>  	if (!pool->transient)
>  		return false;
>
> @@ -1522,7 +1519,8 @@ static bool swiotlb_del_transient(struct device *dev, phys_addr_t tlb_addr)
>  #else  /* !CONFIG_SWIOTLB_DYNAMIC */
>
>  static inline bool swiotlb_del_transient(struct device *dev,
> -					 phys_addr_t tlb_addr)
> +					 phys_addr_t tlb_addr,
> +					 struct io_tlb_pool *pool)
>  {
>  	return false;
>  }
> @@ -1534,34 +1532,34 @@ static inline bool swiotlb_del_transient(struct device *dev,
>   */
>  void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
>  			      size_t mapping_size, enum dma_data_direction dir,
> -			      unsigned long attrs)
> +			      unsigned long attrs, struct io_tlb_pool *pool)
>  {
>  	/*
>  	 * First, sync the memory before unmapping the entry
>  	 */
>  	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
>  	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
> -		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
> +		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE, pool);
>
> -	if (swiotlb_del_transient(dev, tlb_addr))
> +	if (swiotlb_del_transient(dev, tlb_addr, pool))
>  		return;
> -	swiotlb_release_slots(dev, tlb_addr);
> +	swiotlb_release_slots(dev, tlb_addr, pool);
>  }
>
>  void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
> -		size_t size, enum dma_data_direction dir)
> +		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool)
>  {
>  	if (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL)
> -		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE);
> +		swiotlb_bounce(dev, tlb_addr, size, DMA_TO_DEVICE, pool);
>  	else
>  		BUG_ON(dir != DMA_FROM_DEVICE);
>  }
>
>  void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
> -		size_t size, enum dma_data_direction dir)
> +		size_t size, enum dma_data_direction dir, struct io_tlb_pool *pool)
>  {
>  	if (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL)
> -		swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE);
> +		swiotlb_bounce(dev, tlb_addr, size, DMA_FROM_DEVICE, pool);
>  	else
>  		BUG_ON(dir != DMA_TO_DEVICE);
>  }
> @@ -1586,7 +1584,8 @@ dma_addr_t swiotlb_map(struct device *dev, phys_addr_t paddr, size_t size,
>  	dma_addr = phys_to_dma_unencrypted(dev, swiotlb_addr);
>  	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
>  		swiotlb_tbl_unmap_single(dev, swiotlb_addr, size, dir,
> -			attrs | DMA_ATTR_SKIP_CPU_SYNC);
> +			attrs | DMA_ATTR_SKIP_CPU_SYNC,
> +			swiotlb_find_pool(dev, swiotlb_addr));
>  		dev_WARN_ONCE(dev, 1,
>  			"swiotlb addr %pad+%zu overflow (mask %llx, bus limit %llx).\n",
>  			&dma_addr, size, *dev->dma_mask, dev->bus_dma_limit);
> @@ -1774,11 +1773,13 @@ struct page *swiotlb_alloc(struct device *dev, size_t size)
>  bool swiotlb_free(struct device *dev, struct page *page, size_t size)
>  {
>  	phys_addr_t tlb_addr = page_to_phys(page);
> +	struct io_tlb_pool *pool;
>
> -	if (!is_swiotlb_buffer(dev, tlb_addr))
> +	pool = is_swiotlb_buffer(dev, tlb_addr);
> +	if (!pool)
>  		return false;
>
> -	swiotlb_release_slots(dev, tlb_addr);
> +	swiotlb_release_slots(dev, tlb_addr, pool);
>
>  	return true;
>  }
> --
> 2.25.1
>



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:37:53 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749579.1157772 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMd8t-00021Q-2m; Thu, 27 Jun 2024 00:37:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749579.1157772; Thu, 27 Jun 2024 00:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMd8t-00021J-0G; Thu, 27 Jun 2024 00:37:39 +0000
Received: by outflank-mailman (input) for mailman id 749579;
 Thu, 27 Jun 2024 00:37:38 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMd8s-00021D-Ln
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:37:38 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 717f7333-341d-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 02:37:36 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 5F33ECE2D27;
 Thu, 27 Jun 2024 00:37:32 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 36804C116B1;
 Thu, 27 Jun 2024 00:37:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 717f7333-341d-11ef-b4bb-af5377834399
Date: Wed, 26 Jun 2024 17:37:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: Oleksii <oleksii.kurochko@gmail.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH for 4.19] automation/eclair: add deviations agreed
 in MISRA meetings
In-Reply-To: <d35cf13a-5cfd-425f-9c01-3a4122da3a69@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261735540.3635@ubuntu-linux-20-04-desktop>
References: <4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop> <c6aeb6007ead36afaf48ceef1070e5ec5a2ef88f.camel@gmail.com>
 <d35cf13a-5cfd-425f-9c01-3a4122da3a69@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-940633539-1719448651=:3635"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-940633539-1719448651=:3635
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 26 Jun 2024, Federico Serafini wrote:
> On 26/06/24 09:37, Oleksii wrote:
> > On Tue, 2024-06-25 at 18:59 -0700, Stefano Stabellini wrote:
> > > > +-doc_begin="The conversion from a function pointer to unsigned
> > > > long or (void *) does not lose any information, provided that the
> > > > target type has enough bits to store it."
> > > > +-config=MC3R1.R11.1,casts+={safe,
> > > > +  "from(type(canonical(__function_pointer_types)))
> > > > +   &&to(type(canonical(builtin(unsigned
> > > > long)||pointer(builtin(void)))))
> > > > +   &&relation(definitely_preserves_value)"
> > > > +}
> > > > +-doc_end
> > > 
> > > This one and the ones below are the important ones! I think we should
> > > have them in the tree as soon as possible ideally 4.19. I ask for
> > > a release-ack.
> > Just want to be sure that I understand deviations properly with this
> > example.
> > 
> > If the deviation above is merged, then it would be safe from a MISRA
> > point of view to cast a function pointer to 'unsigned long' or 'void
> > *', and thereby MISRA won't complain about code with such conversions?
> 
> Exactly, taking into account section 4.7 of GCC manual.

Yes. From a Xen release perspective, it will only affect the static
analysis jobs, making them report fewer violations. The reason those
specific deviations are important is that they are significant
departures from the plain rules and were already documented in rules.rst.
--8323329-940633539-1719448651=:3635--


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:41:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:41:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749584.1157782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdCT-0003sF-JT; Thu, 27 Jun 2024 00:41:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749584.1157782; Thu, 27 Jun 2024 00:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdCT-0003s8-GN; Thu, 27 Jun 2024 00:41:21 +0000
Received: by outflank-mailman (input) for mailman id 749584;
 Thu, 27 Jun 2024 00:41:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdCS-0003s1-GM
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:41:20 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f4c63504-341d-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 02:41:15 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 2C165CE2D2D;
 Thu, 27 Jun 2024 00:41:11 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 08259C116B1;
 Thu, 27 Jun 2024 00:41:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4c63504-341d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719448870;
	bh=hp/TBTOzYTzgrlkHDDLQbaMM8fjvukGqwrWs3uisSeA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HmER610D3kaYtBvp6IGWDmEaP7N+Tl5n6W9C8OGoBtRg/bdF7irVMLr1n9W+DWpHL
	 TR0/o4dEycnm+wnynck8tid76/MzdBkKImO4EeYR7S/G5il1BtWSjpm5R2b0yzdyrj
	 WK9XTmshUu8aCHSFIEbzdw86z/DwEg6fqhhFZsIOOGj08+TK3WRviIrTnhYZrfxc6A
	 +gImLz881XFyUWrwhSQ9j9hFjcl2xLkZ7N9t/BYmdTmUlSLak55NxNFx35astuqsAJ
	 TuXnbmXBUtISFcdynninUmWjA+2BHHp9gRww83CC48XiVUvSGEchU9//FrZ35ZPY8E
	 xFPmY98+GOK6A==
Date: Wed, 26 Jun 2024 17:41:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, 
    Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: Re: [XEN PATCH v2 for-4.19] automation/eclair: add deviations agreed
 in MISRA meetings
In-Reply-To: <816b323f5e325784947d09502f9352188bd325cf.1719381829.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261740560.3635@ubuntu-linux-20-04-desktop>
References: <816b323f5e325784947d09502f9352188bd325cf.1719381829.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Federico Serafini wrote:
> Update ECLAIR configuration to take into account the deviations
> agreed during the MISRA meetings.
> 
> While doing this, remove the obsolete "Set [123]" comments.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

release-ack requested


> ---
> Changes in v2:
> - keep sync between deviations.ecl and deviations.rst;
> - use 'deliberate' tag for all the deviations of R14.3;
> - do not use the term "project-wide deviation" since it does not add useful
>   information.
> ---
>  .../eclair_analysis/ECLAIR/deviations.ecl     | 93 +++++++++++++++++--
>  docs/misra/deviations.rst                     | 81 ++++++++++++++--
>  2 files changed, 158 insertions(+), 16 deletions(-)
> 
> diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> index ae2eaf50f7..37cad8bf68 100644
> --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> @@ -1,5 +1,3 @@
> -### Set 1 ###
> -
>  #
>  # Series 2.
>  #
> @@ -23,6 +21,11 @@ Constant expressions and unreachable branches of if and switch statements are ex
>  -config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(any_exp(macro(name(ASSERT_UNREACHABLE||PARSE_ERR_RET||PARSE_ERR||FAIL_MSR||FAIL_CPUID)))))"}
>  -doc_end
>  
> +-doc_begin="The asm-offset files are not linked deliberately, since they are used to generate definitions for asm modules."
> +-file_tag+={asm_offsets, "^xen/arch/(arm|x86)/(arm32|arm64|x86_64)/asm-offsets\\.c$"}
> +-config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(file(asm_offsets)))"}
> +-doc_end
> +
>  -doc_begin="Pure declarations (i.e., declarations without initialization) are
>  not executable, and therefore it is safe for them to be unreachable."
>  -config=MC3R1.R2.1,ignored_stmts+={"any()", "pure_decl()"}
> @@ -63,6 +66,12 @@ they are not instances of commented-out code."
>  -config=MC3R1.D4.3,reports+={disapplied,"!(any_area(any_loc(file(^xen/arch/arm/arm64/.*$))))"}
>  -doc_end
>  
> +-doc_begin="The inline asm in 'arm64/lib/bitops.c' is tightly coupled with the surrounding C code that acts as a wrapper, so it has been decided not to add an additional encapsulation layer."
> +-file_tag+={arm64_bitops, "^xen/arch/arm/arm64/lib/bitops\\.c$"}
> +-config=MC3R1.D4.3,reports+={deliberate, "all_area(any_loc(file(arm64_bitops)&&any_exp(macro(^(bit|test)op$))))"}
> +-config=MC3R1.D4.3,reports+={deliberate, "any_area(any_loc(file(arm64_bitops))&&context(name(int_clear_mask16)))"}
> +-doc_end
> +
>  -doc_begin="This header file is autogenerated or empty, therefore it poses no
>  risk if included more than once."
>  -file_tag+={empty_header, "^xen/arch/arm/efi/runtime\\.h$"}
> @@ -213,10 +222,25 @@ Therefore the absence of prior declarations is safe."
>  -config=MC3R1.R8.4,declarations+={safe, "loc(file(asm_defns))&&^current_stack_pointer$"}
>  -doc_end
>  
> +-doc_begin="The function apei_(read|check|clear)_mce are dead code and are excluded from non-debug builds, therefore the absence of prior declarations is safe."
> +-config=MC3R1.R8.4,declarations+={safe, "^apei_(read|check|clear)_mce\\(.*$"}
> +-doc_end
> +
>  -doc_begin="asmlinkage is a marker to indicate that the function is only used to interface with asm modules."
>  -config=MC3R1.R8.4,declarations+={safe,"loc(text(^(?s).*asmlinkage.*$, -1..0))"}
>  -doc_end
>  
> +-doc_begin="Given that bsearch and sort are defined with the attribute 'gnu_inline', it's deliberate not to have a prior declaration.
> +See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for a full explanation of gnu_inline."
> +-file_tag+={bsearch_sort, "^xen/include/xen/(sort|lib)\\.h$"}
> +-config=MC3R1.R8.4,reports+={deliberate, "any_area(any_loc(file(bsearch_sort))&&decl(name(bsearch||sort)))"}
> +-doc_end
> +
> +-doc_begin="first_valid_mfn is defined in this way because the current lack of NUMA support in Arm and PPC requires it."
> +-file_tag+={first_valid_mfn, "^xen/common/page_alloc\\.c$"}
> +-config=MC3R1.R8.4,declarations+={deliberate,"loc(file(first_valid_mfn))"}
> +-doc_end
> +
>  -doc_begin="The following variables are compiled in multiple translation units
>  belonging to different executables and therefore are safe."
>  -config=MC3R1.R8.6,declarations+={safe, "name(current_stack_pointer||bsearch||sort)"}
> @@ -257,8 +281,6 @@ dimension is higher than omitting the dimension."
>  -config=MC3R1.R9.5,reports+={deliberate, "any()"}
>  -doc_end
>  
> -### Set 2 ###
> -
>  #
>  # Series 10.
>  #
> @@ -299,7 +321,6 @@ integers arguments on two's complement architectures
>  -config=MC3R1.R10.1,reports+={safe, "any_area(any_loc(any_exp(macro(^ISOLATE_LSB$))))"}
>  -doc_end
>  
> -### Set 3 ###
>  -doc_begin="XEN only supports architectures where signed integers are
> +represented using two's complement and all the XEN developers are aware of
>  this."
> @@ -323,6 +344,49 @@ constant expressions are required.\""
>  # Series 11
>  #
>  
> +-doc_begin="The conversion from a function pointer to unsigned long or (void *) does not lose any information, provided that the target type has enough bits to store it."
> +-config=MC3R1.R11.1,casts+={safe,
> +  "from(type(canonical(__function_pointer_types)))
> +   &&to(type(canonical(builtin(unsigned long)||pointer(builtin(void)))))
> +   &&relation(definitely_preserves_value)"
> +}
> +-doc_end
> +
> +-doc_begin="The conversion from a function pointer to a boolean has a well-known semantics that do not lead to unexpected behaviour."
> +-config=MC3R1.R11.1,casts+={safe,
> +  "from(type(canonical(__function_pointer_types)))
> +   &&kind(pointer_to_boolean)"
> +}
> +-doc_end
> +
> +-doc_begin="The conversion from a pointer to an incomplete type to unsigned long does not lose any information, provided that the target type has enough bits to store it."
> +-config=MC3R1.R11.2,casts+={safe,
> +  "from(type(any()))
> +   &&to(type(canonical(builtin(unsigned long))))
> +   &&relation(definitely_preserves_value)"
> +}
> +-doc_end
> +
> +-doc_begin="Conversions to object pointers that have a pointee type with a smaller (i.e., less strict) alignment requirement are safe."
> +-config=MC3R1.R11.3,casts+={safe,
> +  "!relation(more_aligned_pointee)"
> +}
> +-doc_end
> +
> +-doc_begin="Conversions from and to integral types are safe, in the assumption that the target type has enough bits to store the value.
> +See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\""
> +-config=MC3R1.R11.6,casts+={safe,
> +    "(from(type(canonical(integral())))||to(type(canonical(integral()))))
> +     &&relation(definitely_preserves_value)"}
> +-doc_end
> +
> +-doc_begin="The conversion from a pointer to a boolean has a well-known semantics that do not lead to unexpected behaviour."
> +-config=MC3R1.R11.6,casts+={safe,
> +  "from(type(canonical(__pointer_types)))
> +   &&kind(pointer_to_boolean)"
> +}
> +-doc_end
> +
>  -doc_begin="Violations caused by container_of are due to pointer arithmetic operations
>  with the provided offset. The resulting pointer is then immediately cast back to its
>  original type, which preserves the qualifier. This use is deemed safe.
> @@ -354,9 +418,18 @@ activity."
>  -config=MC3R1.R14.2,reports+={disapplied,"any()"}
>  -doc_end
>  
> --doc_begin="The XEN team relies on the fact that invariant conditions of 'if'
> -statements are deliberate"
> --config=MC3R1.R14.3,statements={deliberate , "wrapped(any(),node(if_stmt))" }
> +-doc_begin="The XEN team relies on the fact that invariant conditions of 'if' statements and conditional operators are deliberate"
> +-config=MC3R1.R14.3,statements+={deliberate, "wrapped(any(),node(if_stmt||conditional_operator||binary_conditional_operator))" }
> +-doc_end
> +
> +-doc_begin="Switches having a 'sizeof' operator as the condition are deliberate and have limited scope."
> +-config=MC3R1.R14.3,statements+={deliberate, "wrapped(any(),node(switch_stmt)&&child(cond, operator(sizeof)))" }
> +-doc_end
> +
> +-doc_begin="The use of an invariant size argument in {put,get}_unsafe_size and array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is deliberate and is deemed safe."
> +-file_tag+={x86_uaccess, "^xen/arch/x86(_64)?/include/asm/uaccess\\.h$"}
> +-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^(put|get)_unsafe_size$))))"}
> +-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^array_access_ok$))))"}
>  -doc_end
>  
>  -doc_begin="A controlling expression of 'if' and iteration statements having integer, character or pointer type has a semantics that is well-known to all Xen developers."
> @@ -527,8 +600,8 @@ falls under the jurisdiction of other MISRA rules."
>  # General
>  #
>  
> --doc_begin="do-while-0 is a well recognized loop idiom by the xen community."
> --loop_idioms={do_stmt, "literal(0)"}
> +-doc_begin="do-while-[01] is a well recognized loop idiom by the xen community."
> +-loop_idioms={do_stmt, "literal(0)||literal(1)"}
>  -doc_end
>  -doc_begin="while-[01] is a well recognized loop idiom by the xen community."
>  -loop_idioms+={while_stmt, "literal(0)||literal(1)"}
> diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> index 16fc345756..d682616796 100644
> --- a/docs/misra/deviations.rst
> +++ b/docs/misra/deviations.rst
> @@ -63,6 +63,11 @@ Deviations related to MISRA C:2012 Rules:
>         switch statement.
>       - ECLAIR has been configured to ignore those statements.
>  
> +   * - R2.1
> +     - The asm-offset files are not linked deliberately, since they are used to
> +       generate definitions for asm modules.
> +     - Tagged as `deliberate` for ECLAIR.
> +
>     * - R2.2
>       - Proving compliance with respect to Rule 2.2 is generally impossible:
>         see `<https://arxiv.org/abs/2212.13933>`_ for details. Moreover, peer
> @@ -203,6 +208,26 @@ Deviations related to MISRA C:2012 Rules:
>         it.
>       - Tagged as `safe` for ECLAIR.
>  
> +   * - R8.4
> +     - Some functions are excluded from non-debug build, therefore the absence
> +       of declaration is safe.
> +     - Tagged as `safe` for ECLAIR, such functions are:
> +         - apei_read_mce()
> +         - apei_check_mce()
> +         - apei_clear_mce()
> +
> +   * - R8.4
> +     - Given that bsearch and sort are defined with the attribute 'gnu_inline',
> +       it's deliberate not to have a prior declaration.
> +       See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for
> +       a full explanation of gnu_inline.
> +     - Tagged as `deliberate` for ECLAIR.
> +
> +   * - R8.4
> +     - first_valid_mfn is defined in this way because the current lack of NUMA
> +       support in Arm and PPC requires it.
> +     - Tagged as `deliberate` for ECLAIR.
> +
>     * - R8.6
>       - The following variables are compiled in multiple translation units
>         belonging to different executables and therefore are safe.
> @@ -282,6 +307,39 @@ Deviations related to MISRA C:2012 Rules:
>         If no bits are set, 0 is returned.
>       - Tagged as `safe` for ECLAIR.
>  
> +   * - R11.1
> +     - The conversion from a function pointer to unsigned long or (void \*) does
> +       not lose any information, provided that the target type has enough bits
> +       to store it.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.1
> +     - The conversion from a function pointer to a boolean has a well-known
> +       semantics that do not lead to unexpected behaviour.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.2
> +     - The conversion from a pointer to an incomplete type to unsigned long
> +       does not lose any information, provided that the target type has enough
> +       bits to store it.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.3
> +     - Conversions to object pointers that have a pointee type with a smaller
> +       (i.e., less strict) alignment requirement are safe.
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.6
> +     - Conversions from and to integral types are safe, in the assumption that
> +       the target type has enough bits to store the value.
> +       See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\"
> +     - Tagged as `safe` for ECLAIR.
> +
> +   * - R11.6
> +     - The conversion from a pointer to a boolean has a well-known semantics
> +       that do not lead to unexpected behaviour.
> +     - Tagged as `safe` for ECLAIR.
> +
>     * - R11.8
>       - Violations caused by container_of are due to pointer arithmetic operations
>         with the provided offset. The resulting pointer is then immediately cast back to its
> @@ -308,8 +366,19 @@ Deviations related to MISRA C:2012 Rules:
>  
>     * - R14.3
>       - The Xen team relies on the fact that invariant conditions of 'if'
> -       statements are deliberate.
> -     - Project-wide deviation; tagged as `disapplied` for ECLAIR.
> +       statements and conditional operators are deliberate.
> +     - Tagged as `deliberate` for ECLAIR.
> +
> +   * - R14.3
> +     - Switches having a 'sizeof' operator as the condition are deliberate and
> +       have limited scope.
> +     - Tagged as `deliberate` for ECLAIR.
> +
> +   * - R14.3
> +     - The use of an invariant size argument in {put,get}_unsafe_size and
> +       array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is
> +       deliberate and is deemed safe.
> +     - Tagged as `deliberate` for ECLAIR.
>  
>     * - R14.4
>       - A controlling expression of 'if' and iteration statements having
> @@ -475,10 +544,10 @@ Other deviations:
>     * - Deviation
>       - Justification
>  
> -   * - do-while-0 loops
> -     - The do-while-0 is a well-recognized loop idiom used by the Xen community
> -       and can therefore be used, even though it would cause a number of
> -       violations in some instances.
> +   * - do-while-0 and do-while-1 loops
> +     - The do-while-0 and do-while-1 loops are well-recognized loop idioms used
> +       by the Xen community and can therefore be used, even though they would
> +       cause a number of violations in some instances.
>  
>     * - while-0 and while-1 loops
>       - while-0 and while-1 are well-recognized loop idioms used by the Xen
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:45:58 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:45:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749590.1157792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdGh-0004S1-35; Thu, 27 Jun 2024 00:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749590.1157792; Thu, 27 Jun 2024 00:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdGh-0004Ru-08; Thu, 27 Jun 2024 00:45:43 +0000
Received: by outflank-mailman (input) for mailman id 749590;
 Thu, 27 Jun 2024 00:45:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdGf-0004Ro-Hd
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:45:41 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 933d76a4-341e-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 02:45:40 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 4406D61377;
 Thu, 27 Jun 2024 00:45:39 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 724CAC116B1;
 Thu, 27 Jun 2024 00:45:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 933d76a4-341e-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719449138;
	bh=e/QvjhmnEPwLI7rcX+TLCgDW91WV6cZEpSXCHcRQKzg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=qAFWupIaVX3CtLJD86GygL8dtHyIc2EApnobNOL5ucHnmZ+jxAommUDeHQeuS9mEK
	 XpqQJMvTYyOjreOHP6VwMNAnhzuIuoRCX7izEJh8/DPVPXwERLQAt2jgLaqM8aYTTf
	 jJPXSvNrbmji5kCWlcd2gy5ujpL3yJG6btYxYRmHlCL0KgyPpTM3z+MRf8raIL5Bf7
	 FX7kJWzvStc85kSmAeRE03xppuaTW3Qx8+JsSYN+KyYWMgNQgLtxEQNkof2EbYCS2m
	 QC//SthUyLkaYk8b5dnczzHwmUBAPbuiD4/5CdGuhWf0vytSEQgg41qBbADdzRSQnT
	 rXblbKvSRC52g==
Date: Wed, 26 Jun 2024 17:45:36 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>
Subject: Re: [XEN PATCH v2 for-4.20 1/7] automation/eclair: address violations
 of MISRA C Rule 20.7
In-Reply-To: <679b1948690fecf06c9e81b398f7bf9bf5a292d2.1719407840.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261745050.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com> <679b1948690fecf06c9e81b398f7bf9bf5a292d2.1719407840.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses".
> 
> The helper macro bitmap_switch has parameters that cannot be parenthesized
> in order to comply with the rule, as that would break its functionality.
> Moreover, the risk of misuse due to developer confusion is deemed not
> substantial enough to warrant a more involved refactor, thus the macro
> is deviated for this rule.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:46:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:46:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749594.1157802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdHH-0004vs-Bk; Thu, 27 Jun 2024 00:46:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749594.1157802; Thu, 27 Jun 2024 00:46:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdHH-0004vl-8p; Thu, 27 Jun 2024 00:46:19 +0000
Received: by outflank-mailman (input) for mailman id 749594;
 Thu, 27 Jun 2024 00:46:18 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdHG-0004vX-2O
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:46:18 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a875d08a-341e-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 02:46:16 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id D560661D5B;
 Thu, 27 Jun 2024 00:46:14 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 2EB2CC2BD10;
 Thu, 27 Jun 2024 00:46:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a875d08a-341e-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719449174;
	bh=7HlikCI+ZFOT2NKtW5gvTNSBMEdmvmxEXMJHZWdMq6c=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rUxMNnShaMiK0JjsCj5wtRDzAbKvw74ckbDFmMvoEUWpwfFv2A9YnzI2ABPXqfgbm
	 De85En5dEJq67JKodcXqDb1ZylRZAI6Uz37J8F+lfV2MrVVUAZNFV6UZM0O8r+LeIY
	 PtznXqfkeery9krNMb8iBIt32CqnFk1kFmTGHip8FVTBfSAiwWcLnXM38N0SnDDIE0
	 vDSyRf5Yts7BE8297o6e7O+WQLtg8PD4X6XgBwBFVURy7gXC4M2/4Fu9ZY3nVXclO7
	 Yo/o7sWJOwCYY6f36PPlCXtM7QGYWmgLGMh144O0q8HmDGNXJD8zdfse1gfL0Sv7A+
	 ZLiEKILY9X0jA==
Date: Wed, 26 Jun 2024 17:46:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Jan Beulich <jbeulich@suse.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2 for-4.20 7/7] x86/traps: address violations of
 MISRA C Rule 20.7
In-Reply-To: <7830b9bfbb0aec272376817eb20bbcbfebdf4044.1719407840.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261746040.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com> <7830b9bfbb0aec272376817eb20bbcbfebdf4044.1719407840.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Nicola Vetrini wrote:
> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
> of macro parameters shall be enclosed in parentheses". Therefore, some
> macro definitions should gain additional parentheses to ensure that all
> current and future users will be safe with respect to expansions that
> can possibly alter the semantics of the passed-in macro parameter.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:50:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:50:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749601.1157813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdLH-00077e-SE; Thu, 27 Jun 2024 00:50:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749601.1157813; Thu, 27 Jun 2024 00:50:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdLH-00077X-OB; Thu, 27 Jun 2024 00:50:27 +0000
Received: by outflank-mailman (input) for mailman id 749601;
 Thu, 27 Jun 2024 00:50:25 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdLF-00077R-PV
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:50:25 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3cc9a4df-341f-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 02:50:24 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id A27F061D27;
 Thu, 27 Jun 2024 00:50:23 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 70A88C116B1;
 Thu, 27 Jun 2024 00:50:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cc9a4df-341f-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719449423;
	bh=BeUbIpNc9CmmlAN8M6hXMEyhR1Ce3IIVwr4J5xubekM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=GAGYAfpZuvo+lIlWxNAZl17RBNBoh5BQAM1oel16OvAi5HOiHNA9wh6OPjGpSH8cD
	 0LCBCG3E6jFUz1rBuOFNP2lAX6p9Qu4mB8ANfZ8mUroQmWBrYyQzPnOQaV8Xz647o6
	 FqMWPqshN8GDDyp8MkQvyMd2rzDzt7l4taTrEGRYxitev3dXDojQzcwQhnhc2UXryJ
	 gFMUDwkr+wbEb5nKq7MwycqYSKrDkfNPKPacrgLOLE5asMtONJCBKfbnHephkv1VKM
	 lM13Lm89D8LH1/Wmt173zcsw+PWD4lIoH8BegQxdBfGGVQSx+toFhkLRKZyLddaT0e
	 G50seqMyOlmPQ==
Date: Wed, 26 Jun 2024 17:50:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v3 04/12] x86/vpmu: address violations of MISRA C
 Rule 16.3
In-Reply-To: <b42004216d547e24f6537450a1c98176a821f704.1719383180.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261750140.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719383180.git.federico.serafini@bugseng.com> <b42004216d547e24f6537450a1c98176a821f704.1719383180.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Federico Serafini wrote:
> Add missing break statements to address violations of MISRA C Rule
> 16.3: "An unconditional `break' statement shall terminate every
> switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:52:50 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:52:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749607.1157823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdNV-0007l2-CS; Thu, 27 Jun 2024 00:52:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749607.1157823; Thu, 27 Jun 2024 00:52:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdNV-0007kv-83; Thu, 27 Jun 2024 00:52:45 +0000
Received: by outflank-mailman (input) for mailman id 749607;
 Thu, 27 Jun 2024 00:52:44 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdNU-0007kk-Fs
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:52:44 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8d7a08f1-341f-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 02:52:42 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id ACB03CE2D27;
 Thu, 27 Jun 2024 00:52:36 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3424FC116B1;
 Thu, 27 Jun 2024 00:52:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d7a08f1-341f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719449556;
	bh=3XBUqB1BWWvZXcl1jQExnngeSLnZJqieStD/qTwNCqI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ZMrzQ3YR/js3R0mg1DIFCncXY1TO/q/UyaYwKvAa0igSTaAK7D7nlER0EKTHnTdWa
	 lVm3MRt/Rj1RktDu6YTM1F1xHmtOksA0Dsegv9H8OxZgFZ3C49+0ddsDlAXBHBnMrz
	 9BXqSME4PAkHisb+9FwDg5lKqxQ07kxTRPSvjV0GmqlgOB5OicsA/yKwmr2jDGHxVW
	 73NEZftDP2i93jdqwLXOqEXgImgF/clDtyoq6aPvgLKQdzvazkCA6Sm0yn9UnlvtD9
	 Vfp7498hLll4Rfn2U4NEcWmdJtoGGzU/t7MaOTFZPcnivdzdmhvhiriqGGyGOCmXkk
	 oE30WbDHNyvgw==
Date: Wed, 26 Jun 2024 17:52:33 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v3 05/12] x86/traps: address violations of MISRA C
 Rule 16.3
In-Reply-To: <e7aea6bacb9c914a06a929dfe3606f7cc360588f.1719383180.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261751370.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719383180.git.federico.serafini@bugseng.com> <e7aea6bacb9c914a06a929dfe3606f7cc360588f.1719383180.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Federico Serafini wrote:
> Add break or pseudo keyword fallthrough to address violations of
> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> every switch-clause".
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> ---
> Changes in v3:
> - use break instead of fallthrough.
> ---
>  xen/arch/x86/traps.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 9906e874d5..d62598a4c2 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>  
>      default:
>          ASSERT_UNREACHABLE();
> +        break;

FYI the ASSERT_UNREACHABLE is still being discussed

Other than that:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


>      }
>  }
>  
> @@ -1748,6 +1749,7 @@ static void io_check_error(const struct cpu_user_regs *regs)
>      {
>      case 'd': /* 'dom0' */
>          nmi_hwdom_report(_XEN_NMIREASON_io_error);
> +        break;
>      case 'i': /* 'ignore' */
>          break;
>      default:  /* 'fatal' */
> @@ -1768,6 +1770,7 @@ static void unknown_nmi_error(const struct cpu_user_regs *regs,
>      {
>      case 'd': /* 'dom0' */
>          nmi_hwdom_report(_XEN_NMIREASON_unknown);
> +        break;
>      case 'i': /* 'ignore' */
>          break;
>      default:  /* 'fatal' */


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:55:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:55:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749612.1157832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdQ6-0008NS-Qk; Thu, 27 Jun 2024 00:55:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749612.1157832; Thu, 27 Jun 2024 00:55:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdQ6-0008NL-NI; Thu, 27 Jun 2024 00:55:26 +0000
Received: by outflank-mailman (input) for mailman id 749612;
 Thu, 27 Jun 2024 00:55:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdQ6-0008NF-3U
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:55:26 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id ee33b6fd-341f-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 02:55:23 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 0B77DCE222A;
 Thu, 27 Jun 2024 00:55:21 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7D62CC32789;
 Thu, 27 Jun 2024 00:55:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee33b6fd-341f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719449720;
	bh=zZnZwC46Z2xbYQdZe3lnTPq1ZWcb/3vtNaAOa5Zfxko=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=rrN1QaxJsfj2HKHxN76bmZ4rdwASn9oT4o72qFpg7d2xxwRMZ7a7d1mnoBBpmnmO6
	 /PBZQ6LMJj6MPufCzqmUWpEq6lK6S0BjnD7v7YBimalMrxJNBSe8CVA1i206AE5Jf6
	 lXicOBoch6yI0zcncgEMBQC/5Zpik3zaVNOkb5Puym4T0lJNK5LqV5jUN6oAkJKRY/
	 dDFcSue729Qi4Il8c188sfS/BU4f6uuBbHsoInUr/k00AZEYsvuhA+yc1S31Db5XCy
	 snBnAAnLPqtieg7M3VBgCnWRsrTrNegJu/4IUNAuMFx1Yo+ArQuwvXYdgsSfF2HJkj
	 X1GA30C7ADhvw==
Date: Wed, 26 Jun 2024 17:55:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v3 07/12] x86/hvm: address violations of MISRA C Rule
 16.3
In-Reply-To: <87cfe4d3e75c3a7d4174393a31aaaf80e0e60633.1719383180.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261754480.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719383180.git.federico.serafini@bugseng.com> <87cfe4d3e75c3a7d4174393a31aaaf80e0e60633.1719383180.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Federico Serafini wrote:
> MISRA C Rule 16.3 states that "An unconditional `break' statement shall
> terminate every switch-clause".
> 
> Add the pseudo keyword fallthrough or missing break statements
> to address violations of the rule.

> 
> As a defensive measure, return -EOPNOTSUPP in case an unreachable
> return statement is reached.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Aside from the ASSERT_UNREACHABLE which is still under discussion:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:55:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:55:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749615.1157843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdQW-0000Op-3N; Thu, 27 Jun 2024 00:55:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749615.1157843; Thu, 27 Jun 2024 00:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdQW-0000Oi-0F; Thu, 27 Jun 2024 00:55:52 +0000
Received: by outflank-mailman (input) for mailman id 749615;
 Thu, 27 Jun 2024 00:55:50 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdQU-0008NF-Nf
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:55:50 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe167cd7-341f-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 02:55:49 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 8ABC961D61;
 Thu, 27 Jun 2024 00:55:47 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 239ECC2BD10;
 Thu, 27 Jun 2024 00:55:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe167cd7-341f-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719449746;
	bh=8cYXP6GXhRYzZHoFc11LDUVtqk5BDpi6YkYifjZ+MNg=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=f0jc+l6LOiIuAOCKj3DLRNHFZ4KOWp8gQrSjDJXpfBuFn7dE9XVXyxlxmHx2Gqkeg
	 KWxlmyitLFUnO+pX0MYgmAGGF5gguG+P1+pw60dMMDkOQW8IPnDk96CtiEMmQ8rKs6
	 aiH4ZeERZJGFoa/4GJ0Vv0e4HWydZvUxubLEzvyQZUiT6yYoGqund5Bm/1qonm62qr
	 q4hdNgc2u8bZ01ZOyi0zKcruthbCrZSoqp5uXCUq7jZIxJkW+hk8MPjNPNnuuH74pw
	 +CvgyGgAJXZnfR97UhVdXtk9qdYoFGzcbrSAsrGVq8TcZoZCyIMvR3qJl1AcmPd44q
	 0iRmhoRngcLPQ==
Date: Wed, 26 Jun 2024 17:55:44 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Federico Serafini <federico.serafini@bugseng.com>
cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v3 08/12] x86/vpt: address a violation of MISRA C
 Rule 16.3
In-Reply-To: <453ef39f5a2a1871d8b0c74d921ed6a413b179b4.1719383180.git.federico.serafini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261755380.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719383180.git.federico.serafini@bugseng.com> <453ef39f5a2a1871d8b0c74d921ed6a413b179b4.1719383180.git.federico.serafini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Federico Serafini wrote:
> Add pseudo keyword fallthrough to meet the requirements to deviate
> a violation of MISRA C Rule 16.3 ("An unconditional `break'
> statement shall terminate every switch-clause").
> 
> No functional change.
> 
> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 00:58:15 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 00:58:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749624.1157853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdSl-0001N9-EH; Thu, 27 Jun 2024 00:58:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749624.1157853; Thu, 27 Jun 2024 00:58:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdSl-0001N2-Ar; Thu, 27 Jun 2024 00:58:11 +0000
Received: by outflank-mailman (input) for mailman id 749624;
 Thu, 27 Jun 2024 00:58:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdSj-0001Mw-NV
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 00:58:09 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 4fab415f-3420-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 02:58:07 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 50B44CE2D2F;
 Thu, 27 Jun 2024 00:58:02 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 7A0B0C2BD10;
 Thu, 27 Jun 2024 00:58:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fab415f-3420-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719449881;
	bh=+PT6nuJSWeXiLOR62sev/R4uJBIye3Yt/q1StPQMqGU=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=mFvupdHVBLk+GsDgqZnxNDFfX91jvtXo7+AMWHEYaSNtew1/lN2pGZY7rhSvAknI0
	 keen3wAO6WCqjBFya1SKURUXHGrIvPfN8bBmlDKz4Z+8FB8tC3KD5y+c5Av9ir8l+Z
	 wyxSaVIWT58AE6+GPiC0pDYHaCzMPevYU1eX32BqL8zgEVAgdcDAHhhhXRd61IM2+D
	 UXWHBS3cVbRxk7gIacmagvVCKK5IntxVyjDhHqPNiNCfygINTiSsifmVFFqWVLA3a7
	 sybY/0cvx9RWnP9Q1UGH3E2jGw50FsQiLXSvc3waWsCGvzz2vHHqv99qgG6OYCbzcf
	 VRiol6I9AvZPQ==
Date: Wed, 26 Jun 2024 17:57:59 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, 
    Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>, 
    Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH v2] x86/mctelem: address violations of MISRA C: 2012
 Rule 5.3
In-Reply-To: <94752f77597b05ef9b8a387bf29512b11c0d1e15.1719398571.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406261757480.3635@ubuntu-linux-20-04-desktop>
References: <94752f77597b05ef9b8a387bf29512b11c0d1e15.1719398571.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Nicola Vetrini wrote:
> From: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> 
> This addresses violations of MISRA C:2012 Rule 5.3, which states the
> following: "An identifier declared in an inner scope shall not hide an
> identifier declared in an outer scope".
> 
> In this case the global variable being shadowed is the static struct
> mctctl defined in this file, therefore the local variables are renamed
> to avoid the shadowing.
> 
> No functional change.
> 
> Signed-off-by: Alessandro Zucchelli <alessandro.zucchelli@bugseng.com>
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

Nice one!

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 01:22:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 01:22:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749632.1157863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdqJ-00077E-87; Thu, 27 Jun 2024 01:22:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749632.1157863; Thu, 27 Jun 2024 01:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMdqJ-000777-4n; Thu, 27 Jun 2024 01:22:31 +0000
Received: by outflank-mailman (input) for mailman id 749632;
 Thu, 27 Jun 2024 01:22:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMdqH-000771-Fl
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 01:22:29 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id b54fe793-3423-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 03:22:26 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id A8B4CCE2D30;
 Thu, 27 Jun 2024 01:22:19 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 0E5A0C116B1;
 Thu, 27 Jun 2024 01:22:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b54fe793-3423-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719451337;
	bh=ppBMDbHWombTqVCft7o5PJQQJ6xnxnPuHVKKBZRQRV4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=DsaW4/tui+QRQ54+M8HpR783A2UWAkCc6ARCHwxmqtoQsdQKNlBGwjL5aT1jRTO1g
	 hSOVHWX7/AgkcYuyCTvCCLoYw1fR1ZWyWeRzMGYNAngwwKlLuQahDhY7M9JLBawrD6
	 sj1nJGTgAkc+XWixAi9cDDCZuBkeN5yrk0338FSJCXn46IpTtwBSB3ek4lt4qEthyG
	 gRyF1YbneXfwOLQr1lnSkeVlAdmHiF4P2p4nJ3WoCKhbX8a/vxkk+TPP8nj0jTTG8s
	 XHi4pY+DG+BjPSq/F/9ayijproghw/RUAstZTky4U0WQnSSrZf0esxp2s0uXpusndW
	 4HROvz6F5teLg==
Date: Wed, 26 Jun 2024 18:22:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: oleksii.kurochko@gmail.com
cc: Jan Beulich <jbeulich@suse.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, michal.orzel@amd.com, 
    xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Simone Ballarin <simone.ballarin@bugseng.com>, 
    Doug Goldstein <cardoe@cardoe.com>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, 
    Nicola Vetrini <nicola.vetrini@bugseng.com>
Subject: Re: [XEN PATCH v2 0/6][RESEND] address violations of MISRA C Rule
 20.7
In-Reply-To: <9814c00d116f14a1ce238b131b9eba19fa130986.camel@gmail.com>
Message-ID: <alpine.DEB.2.22.394.2406261821060.3635@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>  <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>  <88127f41-a3e3-4d05-b9f2-3e4117bf1503@suse.com> <9814c00d116f14a1ce238b131b9eba19fa130986.camel@gmail.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, oleksii.kurochko@gmail.com wrote:
> On Tue, 2024-06-25 at 08:39 +0200, Jan Beulich wrote:
> > On 25.06.2024 02:47, Stefano Stabellini wrote:
> > > I would like to ask for a release-ack as the patch series makes
> > > very few
> > > changes outside of the static analysis configuration. The few
> > > changes to
> > > the Xen code are very limited, straightforward and make the code
> > > better, see patch #3 and #5.
> > 
> > While continuing to touch automation/ may be okay, I really think
> > time has
> > passed for further Misra changes in 4.19, unless they're fixing
> > actual bugs
> > of course. Just my personal view though ...
> I am not quite sure I understand the concern. From my perspective, the
> patch series addresses several MISRA violations without introducing any
> functional changes. It seems safe to incorporate these MISRA changes
> even at this stage of the release.

I agree with you, but I guess Jan's point is that every change, even a
small one, could introduce a regression. This is your decision.

This is the updated version:
https://marc.info/?l=xen-devel&m=171940854121984


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 01:54:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 01:54:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749640.1157873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMeKu-0004dv-I5; Thu, 27 Jun 2024 01:54:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749640.1157873; Thu, 27 Jun 2024 01:54:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMeKu-0004do-Et; Thu, 27 Jun 2024 01:54:08 +0000
Received: by outflank-mailman (input) for mailman id 749640;
 Thu, 27 Jun 2024 01:54:08 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMeKu-0004di-0Q
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 01:54:08 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org [145.40.73.55])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 207726ae-3428-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 03:54:05 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id BE24CCE2D55;
 Thu, 27 Jun 2024 01:53:57 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 18AADC32782;
 Thu, 27 Jun 2024 01:53:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 207726ae-3428-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719453237;
	bh=jg2oguDpRhsWorICbOUbrkGWd/WeRDjZ2pRWrSzs1sI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eVjzOjM9O+zYSn/I3RmxfryOHqOSlMfRC/mNxO7ChIE4mZtVsF5BUkfFRYCYAp9iK
	 Vhhqv+re6nw5nJQkpXEGwLDh095rsHqBgp6Xf1lx0R9e3kuFKBolT2OTe/bieDiDng
	 aoe+Xb5g5CliCiv4f4ZhxMTD0NI87XWigRbeOwP7JDnXZJhK9CCjZlp8xXko4bn6jK
	 GFu4Ljd9vpOB7Q9kw7f2Vn8T215BfRu49Vct/UywVNYcJNybFwZw+HxcudRLQPF7vJ
	 YxBzKIQHnKY0VsvUT+5Opo3ARJTmUQ7Iw0Z12tZQSu1zzyCyDa09OxxVgZWlcl2vty
	 Etu7tTz/SRiJA==
Date: Wed, 26 Jun 2024 18:53:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Federico Serafini <federico.serafini@bugseng.com>
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
In-Reply-To: <6441010f-c2f6-4098-bf23-837955dcf803@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406261758390.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop> <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
 <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop> <6441010f-c2f6-4098-bf23-837955dcf803@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Jan Beulich wrote:
> On 26.06.2024 03:11, Stefano Stabellini wrote:
> > On Tue, 25 Jun 2024, Jan Beulich wrote:
> >> On 25.06.2024 02:54, Stefano Stabellini wrote:
> >>> On Mon, 24 Jun 2024, Federico Serafini wrote:
> >>>> Add break or pseudo keyword fallthrough to address violations of
> >>>> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> >>>> every switch-clause".
> >>>>
> >>>> No functional change.
> >>>>
> >>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> >>>> ---
> >>>>  xen/arch/x86/traps.c | 3 +++
> >>>>  1 file changed, 3 insertions(+)
> >>>>
> >>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> >>>> index 9906e874d5..cbcec3fafb 100644
> >>>> --- a/xen/arch/x86/traps.c
> >>>> +++ b/xen/arch/x86/traps.c
> >>>> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
> >>>>  
> >>>>      default:
> >>>>          ASSERT_UNREACHABLE();
> >>>> +        break;
> >>>
> >>> Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
> >>> statements" that can terminate a case, in addition to break.
> >>
> >> Why? Exactly the opposite is part of the subject of a recent patch, iirc.
> >> Simply because of the rules needing to cover both debug and release builds.
> > 
> > The reason is that ASSERT_UNREACHABLE() might disappear from the release
> > build but it can still be used as a marker during static analysis. In
> > my view, ASSERT_UNREACHABLE() is equivalent to a noreturn function call
> > which has an empty implementation in release builds.
> > 
> > The only reason I can think of to require a break; after an
> > ASSERT_UNREACHABLE() would be if we think the unreachability only applies
> > to debug builds, not release builds:
> > 
> > - debug build: it is unreachable
> > - release build: it is reachable
> > 
> > I don't think that is meant to be possible so I think we can use
> > ASSERT_UNREACHABLE() as a marker.
> 
> Well. For one, such an assumption takes as a prereq that a debug build will
> be run through full coverage testing, i.e. all reachable paths proven to
> be taken. I understand that this prereq is intended to somehow be met,
> even if I'm having difficulty seeing what such a final proof would look
> like: Full coverage would, to me, mean that _every_ line is reachable. Yet
> clearly any ASSERT_UNREACHABLE() must never be reached.
> 
> And then not covering for such cases takes the further assumption that
> debug and release builds are functionally identical. I'm afraid this would
> be a wrong assumption to make:
> 1) We may screw up somewhere, with code wrongly enabled only in one of the
>    two build modes.
> 2) The compiler may screw up, in particular with optimization.

I think there are two different issues here we are discussing.

One issue, like you said, has to do with coverage. It is important to
mark as "unreachable" any part of the code that is indeed unreachable
so that we can account for it properly when we do coverage analysis. At the
moment the only "unreachable" marker that we have is
ASSERT_UNREACHABLE(), and I am hoping we can use it as part of the
coverage analysis we'll do.

However, there is a different separate question about what to do in the
Xen code after an ASSERT_UNREACHABLE(). E.g.:

             default:
                 ASSERT_UNREACHABLE();
                 return -EPERM; /* is it better with or without this? */
             }

Leaving coverage aside, would it be better to be defensive and actually
attempt to report errors back after an ASSERT_UNREACHABLE() like in the
example? Or is it better to assume the code is actually unreachable
hence there is no need to do anything afterwards?

On one hand, being defensive sounds good; on the other hand, any code
we add after ASSERT_UNREACHABLE() is dead code that cannot be tested,
which is also not good. In this example, there is no way to test the
return -EPERM code path. We also need to consider what is the right
thing to do if Xen finds itself in an erroneous situation such as being
in an unreachable code location.

So, after thinking about it and also talking to the safety manager, I
think we should:
- implement ASSERT_UNREACHABLE with a warning in release builds
- have "return -EPERM;" or similar for defensive programming


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 01:57:12 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 01:57:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749646.1157882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMeNq-0005BJ-Tv; Thu, 27 Jun 2024 01:57:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749646.1157882; Thu, 27 Jun 2024 01:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMeNq-0005BC-RP; Thu, 27 Jun 2024 01:57:10 +0000
Received: by outflank-mailman (input) for mailman id 749646;
 Thu, 27 Jun 2024 01:57:09 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMeNp-0005B6-TB
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 01:57:09 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org [139.178.84.217])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8ec73eb6-3428-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 03:57:08 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id AB02161CD0;
 Thu, 27 Jun 2024 01:57:06 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 41018C116B1;
 Thu, 27 Jun 2024 01:57:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8ec73eb6-3428-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719453426;
	bh=gLAP1m53uVVW3DuwXlWwrPUiuDJAbkgLRc5l5TBw7WA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=JXeb1JGkWjv6Za/8aPiQisk0Z3f0t/kO6cGr2hwoM7eQ/otROXPdMcLSlhJHA3ckJ
	 f5Pcdsm82cDiL+oY+1a+9sEEZY7ANxfy3uPlV8OsRep5vkaXU1OSWIkD8B/nNoxQLv
	 9bVKBknKh2rkIaRPW+dD7YMuPbhP8Scs6WTmLxerDMDWBqYjgmssZVDJLlnlSwo3z4
	 epsqZNLyy86SUlAbxhZpquLcIn1/sORGaTJEJFEL0gwa+0sU/ShAyatR43ZNqmWt7t
	 pCNAaTW/4JylxzJCZNIFng4IT+AO2Yda+QaoVjrIXjynU6Abzyh6r0elMG/dAk81f5
	 NV+I2UOkNkRQA==
Date: Wed, 26 Jun 2024 18:57:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Stefano Stabellini <sstabellini@kernel.org>
cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org, 
    consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Federico Serafini <federico.serafini@bugseng.com>
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
In-Reply-To: <alpine.DEB.2.22.394.2406261758390.3635@ubuntu-linux-20-04-desktop>
Message-ID: <alpine.DEB.2.22.394.2406261855020.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop> <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
 <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop> <6441010f-c2f6-4098-bf23-837955dcf803@suse.com> <alpine.DEB.2.22.394.2406261758390.3635@ubuntu-linux-20-04-desktop>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 26 Jun 2024, Stefano Stabellini wrote:
> So, after thinking about it and also talking to the safety manager, I
> think we should:
> - implement ASSERT_UNREACHABLE with a warning in release builds
> - have "return -EPERM;" or similar for defensive programming

Federico, as Jan already agrees on the second point, I withdraw all my
comments about code after ASSERT_UNREACHABLE (you can consider your
R16.3 patches with my acks fully acked).


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 02:36:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 02:36:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749660.1157893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMezc-00045Z-QV; Thu, 27 Jun 2024 02:36:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749660.1157893; Thu, 27 Jun 2024 02:36:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMezc-00045S-NX; Thu, 27 Jun 2024 02:36:12 +0000
Received: by outflank-mailman (input) for mailman id 749660;
 Thu, 27 Jun 2024 02:36:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mspB=N5=intel.com=oliver.sang@srs-se1.protection.inumbo.net>)
 id 1sMeza-000458-MI
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 02:36:11 +0000
Received: from mgamail.intel.com (mgamail.intel.com [198.175.65.11])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fe24f088-342d-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 04:36:04 +0200 (CEST)
Received: from orviesa006.jf.intel.com ([10.64.159.146])
 by orvoesa103.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 26 Jun 2024 19:35:59 -0700
Received: from fmsmsx603.amr.corp.intel.com ([10.18.126.83])
 by orviesa006.jf.intel.com with ESMTP/TLS/AES256-GCM-SHA384;
 26 Jun 2024 19:36:00 -0700
Received: from fmsmsx603.amr.corp.intel.com (10.18.126.83) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.39; Wed, 26 Jun 2024 19:35:58 -0700
Received: from fmsedg602.ED.cps.intel.com (10.1.192.136) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.2507.39 via Frontend Transport; Wed, 26 Jun 2024 19:35:58 -0700
Received: from NAM10-BN7-obe.outbound.protection.outlook.com (104.47.70.46) by
 edgegateway.intel.com (192.55.55.71) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.1.2507.39; Wed, 26 Jun 2024 19:35:58 -0700
Received: from LV3PR11MB8603.namprd11.prod.outlook.com (2603:10b6:408:1b6::9)
 by PH8PR11MB6998.namprd11.prod.outlook.com (2603:10b6:510:222::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.30; Thu, 27 Jun
 2024 02:35:55 +0000
Received: from LV3PR11MB8603.namprd11.prod.outlook.com
 ([fe80::4622:29cf:32b:7e5c]) by LV3PR11MB8603.namprd11.prod.outlook.com
 ([fe80::4622:29cf:32b:7e5c%2]) with mapi id 15.20.7698.025; Thu, 27 Jun 2024
 02:35:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe24f088-342d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple;
  d=intel.com; i=@intel.com; q=dns/txt; s=Intel;
  t=1719455764; x=1750991764;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=Tm5fXzj/0p2EHgLOiFB38ADrMsp4/BzPcBMAMvA3Zcg=;
  b=PIkxchl6ZBPkBiIUr/Xc4m6xP1eHuCYEOB57XO6ngs4EzDbaYkCdNLX3
   0EjL8xCnHYpFweaymLY0aRpHJZ0ZNw23e7Lj5TsNAdmsLMGbncSOxwAmU
   NLit5o1mcy6OxZ/wxO9VsWYWK+E0YP9zQAauFjC+7/ixil7uuVqYZxn9/
   D7L2hNXC8iyOLudeVi5s8OxhaZ6B94fC4MqUtYTtYRH4ooYTttbhlXtlB
   FSpvj0324lo5XS+VlMaLAPef2cxnL4KGlj1LC7q9tbVJlf1C2MgLXiQOm
   XB3wfJMctCWFpN+VXmpRN8tAbK/1v6Wzua/TlumMUxysZNzQQ0/85HhtP
   Q==;
X-CSE-ConnectionGUID: aDmkaqbhQkaSLF1peP4fLQ==
X-CSE-MsgGUID: AQ4UkXFUTO+v1tHjBSe7vA==
X-IronPort-AV: E=McAfee;i="6700,10204,11115"; a="27150869"
X-IronPort-AV: E=Sophos;i="6.08,268,1712646000"; 
   d="scan'208";a="27150869"
X-CSE-ConnectionGUID: j10ZU3/DQnqD1weyP265+g==
X-CSE-MsgGUID: iUcHYBtvRKG8/JVyWBtSnA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="6.08,268,1712646000"; 
   d="scan'208";a="44643323"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ekMKBmk67ur17JmKVOIbQXZ1fARbRMGad4UXkuRY2r5YWVhRlmYLRuAa+tk1/jQevXtZHlBL8lUxrEMhSQJsAURlKSzRRiRw0jEbfll1whAAun0sBah1iYobacA6l5JVeYheJXjv+ThwF0/cdVlkV6FbZRrf1v9yFFidoWgXOJ0ZDKV9PTlyKmuQvshTXGPE3dTjc0ErAMv8vh95mcvW7Fhst5mLVb6+KVI5wsNpPZWsFw/F2t7Xfc1zrwjTNiZciiNHPCHR3eufqHQmDBlNg0exJng8atRJBdhjD0Qlp0TygNLw+LcMRT/bcdj8s56rYC49D04Rl+uTrDmRjXJX9w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=+jSD7+FSTNy4cDlgbUVR5l0uy8U1HhJxiVCdYXXKGto=;
 b=cS1onbZTaz6hoAaeHtDFbFFAoLCBqY89YJ0ChMu/l5JygZp83vUplEo1j5ino1QNK0G5wxNYjBav2K4grmtn42ti+SzaiVdJQXHKI2rSYMyt6vlHYR4NtdibDCki+rI4OzvLVJPtupLeN7/ryWBhwuB8cVD62dKxeySX/pzy2w3lI71tWZqfH+s4TY6x7F4SxSh9qsfL8m6gpv1LlU5Xsg2aeJEpec41cykvfRENtheWAohONum+gjJQGsAsHQ7g+hhZIBvf2SnCXF4YVi4La+KNibZcbdQTWYAzAYwY5RdzxJEN4WWwBIgChOqdJcnKa9UeBmftpbkYiZTnwp1e4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=intel.com;
Date: Thu, 27 Jun 2024 10:35:38 +0800
From: Oliver Sang <oliver.sang@intel.com>
To: Christoph Hellwig <hch@infradead.org>
CC: Christoph Hellwig <hch@lst.de>, <oe-lkp@lists.linux.dev>, <lkp@intel.com>,
	Jens Axboe <axboe@kernel.dk>, Ulf Hansson <ulf.hansson@linaro.org>, "Damien
 Le Moal" <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	<linux-block@vger.kernel.org>, <linux-um@lists.infradead.org>,
	<drbd-dev@lists.linbit.com>, <nbd@other.debian.org>,
	<linuxppc-dev@lists.ozlabs.org>, <virtualization@lists.linux.dev>,
	<xen-devel@lists.xenproject.org>, <linux-bcache@vger.kernel.org>,
	<dm-devel@lists.linux.dev>, <linux-raid@vger.kernel.org>,
	<linux-mmc@vger.kernel.org>, <linux-mtd@lists.infradead.org>,
	<nvdimm@lists.linux.dev>, <linux-nvme@lists.infradead.org>,
	<linux-scsi@vger.kernel.org>, <ying.huang@intel.com>, <feng.tang@intel.com>,
	<fengwei.yin@intel.com>, <oliver.sang@intel.com>
Subject: Re: [axboe-block:for-next] [block]  1122c0c1cc:  aim7.jobs-per-min
 22.6% improvement
Message-ID: <ZnzP+nUrk8+9bANK@xsang-OptiPlex-9020>
References: <202406250948.e0044f1d-oliver.sang@intel.com>
 <ZnqGf49cvy6W-xWf@infradead.org>
 <Znt4qTr/NdeIPyNp@xsang-OptiPlex-9020>
 <ZnuNhkH26nZi8fz6@infradead.org>
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <ZnuNhkH26nZi8fz6@infradead.org>
X-ClientProxiedBy: KL1PR01CA0154.apcprd01.prod.exchangelabs.com
 (2603:1096:820:149::8) To LV3PR11MB8603.namprd11.prod.outlook.com
 (2603:10b6:408:1b6::9)
MIME-Version: 1.0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: LV3PR11MB8603:EE_|PH8PR11MB6998:EE_
X-MS-Office365-Filtering-Correlation-Id: 674cc3cf-f9ec-45c0-a6a8-08dc9651dedf
X-LD-Processed: 46c98d88-e344-4ed4-8496-4ed7712e255d,ExtAddr
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam: BCL:0;ARA:13230040|366016|376014|7416014|1800799024;
X-Microsoft-Antispam-Message-Info: =?us-ascii?Q?TRxfcgNOCtfec4hTwWFvMGMk/COYA0kIqOAMmNI1yLflOEUnMrC3HKSuGvS7?=
 =?us-ascii?Q?FlZ5anx25Bn6ds9LE7sKSXWnS9P0z+RqwfQwFZKr+N8xbpF4rO4O0JckG4yr?=
 =?us-ascii?Q?18GkU0fXamh/e6RVsa6DxlKZIQ/siRR2kROXVVBspKb1TXmivGfTpqoSALDT?=
 =?us-ascii?Q?D6K8RFOdaN23OAQr/BzT579DCtLzqhLQw89jGBHWmddgAc5XlHMsoxm23v9x?=
 =?us-ascii?Q?ZGf094sZ/4Do+7jifvn7Noc+OwuzRT06IKl/5fLQ3ZslTgJRLGfgJCRJsO+R?=
 =?us-ascii?Q?b7mWhhQBVT5hsWsYMBgyyKLwtteXCPK1OH5xqdNJeH470XH+opeH3e8hYtrJ?=
 =?us-ascii?Q?yyNTwe0U7WX2XUwY8FrMtkItIBEGZX2cbquzZSy+59z9gFt7g4Uc2zBjsE98?=
 =?us-ascii?Q?Jvf3otBt+Mg9ThMrpecwfNx1wJMEsHdeaywbAaGUkBBgHssBmYiqxpEA8EAq?=
 =?us-ascii?Q?77ii0hZNzvN5Jb2LhIDw7rBPHTt8GWY3doXv2iyu0T1BQOhnfSpVgDazU6SF?=
 =?us-ascii?Q?2IoUQY14B6+aa4aqByEtt2QVDHvbRHKRVBbEZGjNBn5WAEZYT/bgi/TvkzOX?=
 =?us-ascii?Q?LwAsZ5td/r7dAQBAlBn0ikMs5cH4ulNUo9onVMsbIS+4HRuv2mNxC1EWbMQh?=
 =?us-ascii?Q?guAGtIdIstekDRPHgnbw2C0dIZu4y5YePzd5+2834CTx90RUj/X59bcH1h9p?=
 =?us-ascii?Q?CzmIKkV5lQ6qMtlBj/tGbC93hWjhwyCSmZifp/5omvOc3LVCQCgPzR9azrYk?=
 =?us-ascii?Q?/KNMniNOMacvN921845wipSaJPH84jegt+JSwAgw3fZ/wHlftemOgt75jQ9/?=
 =?us-ascii?Q?Nr0kTnfvJYjRtYWW8DyprJDkprrhwicy36nlg0fpv+rOUZlIjZruK/L13TW8?=
 =?us-ascii?Q?fWG7Q3+w/PTcceIGkWsoQwqM6YktLrnX/51grMHgplF8Jv+gvCSFD49Xrwut?=
 =?us-ascii?Q?csqVFJVPJRNxYhDbY+0DL2cy+Ugg+zoDd6jcKJAY8VC/f083O62ANCxuDGKi?=
 =?us-ascii?Q?NnPxUHYRCfJMBcN6t8DDuG8JMK/XVC4rs5gKDCyw+p5Pj2jzHmnbhuzf8cRa?=
 =?us-ascii?Q?8PzXWrK7CZVb+j8DL9di6kKrGIzUAjYGzjt/3RHS453vPktsWB7toBbq9AXI?=
 =?us-ascii?Q?GHR8zOCq+qocCiTCtGphEI4+rqenHgANTDUD1lbjRtwRjYqYPCuLkobS63HM?=
 =?us-ascii?Q?K3OZGEX3MCVDLHNrnA4XhfEKQYFnecI1PKZBLqd99TsGBPVcF1qPZQ6YcjpF?=
 =?us-ascii?Q?RpjpoLTZqRCuCe4TIGY01vwDDQm2JW9SsDj2PFuaihrMaRJxm+K9FVVf30q6?=
 =?us-ascii?Q?fshmX05BlCbat33Li4490RGv?=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:LV3PR11MB8603.namprd11.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(366016)(376014)(7416014)(1800799024);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData-ChunkCount: 1
X-MS-Exchange-AntiSpam-MessageData-0: =?us-ascii?Q?XkJHoqEbbM3JioGWYfAmoP8lBk7qjFj2lkgZoon44AEkR4yGBxONytPQaEcz?=
 =?us-ascii?Q?bvt87CcroadHHVHWDkhzgrZPOr8Ai6Oo3vK1kK6wtUatWrn84K3MSVps9Plz?=
 =?us-ascii?Q?Q5eaG+UHCcQGUfYkz6Mud6z1tvPGKRyuF5/O0EBDqofeWMvPurkeRfZFYPqN?=
 =?us-ascii?Q?+2m03Q8dPHKXlBC2CTYZREbzJaM7sFbOzyYyewMBegAmXRNL0UFpo6QDHh+c?=
 =?us-ascii?Q?3FRW6irHhD36jivRQTtEHJs7z7qKsfvIxMZVR1+dVN5oi1UahV0Kx4nwAv6B?=
 =?us-ascii?Q?OR0HNy3/wxjkKw2Z2LsOHpxwRpYHd9+spuEXpi8sLDC1Hs4BafOMY00LbILM?=
 =?us-ascii?Q?Pnq7UEPrbKxOrAGZ0VfyOgu50nGFpYbDiNKaQAf+E+38/Z6ewWSTg+itcUyD?=
 =?us-ascii?Q?tKBd2rYqbWAwT8fQ9n4MxKtoWHf4Gv8O8Cj32SYJXLayD/bGJ0Z+v47dSI8V?=
 =?us-ascii?Q?fNx2bjPhrfVU8v0o8eKyp5AphavWzAkHAE3Fy1TTysXTXpiRqPmcJVSVI+Z0?=
 =?us-ascii?Q?Bow6fPlu2zwES1M2fMyErOgyiPkOOBsM7vKZGDKPY3yyQ2GjriCafbzfMQZf?=
 =?us-ascii?Q?xQJrtkEV7M5US6pwa9vQNwf4duLiTSfGqpDuzXa2+RFNxu9lNh5TI9h27NjI?=
 =?us-ascii?Q?a8jenYz7gz0t2CfFu4n26V/NdNtJ+f3TYubNK+it6XWRJP5WNDlYF+UzpSSL?=
 =?us-ascii?Q?Lj1Fy4HTcK9AFV0i4JKWKEPjkDcq5cxKz8KnrklZ2puizFeWJDuyOIsFqzqn?=
 =?us-ascii?Q?R1hmLstI8hD3U36TYol6f8vUZgEiZPFvItB3yJsuPup9LDWndW+PTYYZ+NaR?=
 =?us-ascii?Q?p6pecjyt06v9HMEP48DUKqBQ2V7AR+FJv0NPuB/A8YYknTXgxoSrCxRHfvOb?=
 =?us-ascii?Q?nBUSVHivLm++m/fEbMf9/ISWG1Cyu3Dkwf1CNm6rJ1PtoSPQfJ+3c4zarTG4?=
 =?us-ascii?Q?YT2v4zViUKxGQvVm1QqlN7yRm0VieczBOmGECoTtL3Ab4lSwVvTfhM17EkhR?=
 =?us-ascii?Q?79oCcpcBtlivNjLrS9dGvcph3hGtOCusnR3p9kJLJb+QFsSFd0zJPKgeVDBj?=
 =?us-ascii?Q?nQ0lRzd3afMc7qoIBREKDB7eUv7cjcHZpBfeMsNhtCuhi7TigsvplUHzztl0?=
 =?us-ascii?Q?d9wSS1OHvHR6l7GtMxb3XIdE3Thj8fpX5G1ZErs0THQKnEMRDh5xkIa6ZTXZ?=
 =?us-ascii?Q?hu0qs4qeubuRegkSL8thgXY6AcgylxwoFb1pjftNYxVZZQXT32WhfTAfm+ci?=
 =?us-ascii?Q?InvlSSFUukZnNxmg62LuKScj73V93eNWqGam7DQC48VYvCTbet23Lu3BhgeX?=
 =?us-ascii?Q?AcJJKNVtgv2JYLpgQg31EOCyxK1nqo62id1JjOp0Ku+xOzt1IoZVDUAncb2E?=
 =?us-ascii?Q?bEv/vHy6YUGIOzX9W6On4eERqHrtiXLpEj4uFgZHs50CD3cRiN/muEQlDL0p?=
 =?us-ascii?Q?pgWPxkVUCXMdEBQzT9mt5FcFMn4c5rGB7B8611DgnuepOzrcpAMqZxkgRRYz?=
 =?us-ascii?Q?fj6tdXb1MOfADBp6CPNP6OZ5c2TijvsXYdp6G60Ex2MfiTay+7Ic4GOVHWmn?=
 =?us-ascii?Q?OWwsOpxOauCPVRJfTbRJUl/4EBkIArnZ+JM6VLbc?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 674cc3cf-f9ec-45c0-a6a8-08dc9651dedf
X-MS-Exchange-CrossTenant-AuthSource: LV3PR11MB8603.namprd11.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2024 02:35:55.6518
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hEBCTBXCHr+NZDa/x9iWUFtzY8u2lfF7JCHOgMZESWvucK+BA/PXRt59u98o1xusx6adaHjnRY53UbVFdCazpQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH8PR11MB6998
X-OriginatorOrg: intel.com

hi, Christoph Hellwig,

On Tue, Jun 25, 2024 at 08:39:50PM -0700, Christoph Hellwig wrote:
> On Wed, Jun 26, 2024 at 10:10:49AM +0800, Oliver Sang wrote:
> > I'm not sure I understand this test request. As in the title, we see a good
> > improvement of aim7 for 1122c0c1cc, and we didn't observe other issues for
> > this commit.
> 
> The improvement suggests we are not sending cache flushes when we should
> send them, or at least just handle them in md.

thanks for the explanation!

> 
> > do you mean this improvement is not expected or exposes some problems instead?
> > then by below patch, should the performance back to the level of parent of
> > 1122c0c1cc?
> > 
> > sure! it's our great pleasure to test your patches. I noticed there are
> > [1]
> > https://lore.kernel.org/all/20240625110603.50885-2-hch@lst.de/
> > which includes "[PATCH 1/7] md: set md-specific flags for all queue limits"
> > [2]
> > https://lore.kernel.org/all/20240625145955.115252-2-hch@lst.de/
> > which includes "[PATCH 1/8] md: set md-specific flags for all queue limits"
> > 
> > which one you suggest us to test?
> > do we only need to apply the first patch "md: set md-specific flags for all queue limits"
> > upon 1122c0c1cc?
> > then is the expectation the performance back to parent of 1122c0c1cc?
> 
> Either just the patch in reply or the entire [2] series would be fine.

I failed to apply the patch from your previous reply on top of 1122c0c1cc or
the current tip of axboe-block/for-next:
c1440ed442a58 (axboe-block/for-next) Merge branch 'for-6.11/block' into for-next

but it applies fine on top of next:
* 0fc4bfab2cd45 (tag: next-20240625) Add linux-next specific files for 20240625

I've already started the test with the patch applied this way.
Is the expectation that the patch should not introduce a performance change
compared to 0fc4bfab2cd45?

Or, if applying it this way is not OK, please just give me guidance. Thanks!


> 
> Thanks!
> 


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 03:44:23 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 03:44:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749670.1157907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMg3L-0007De-I8; Thu, 27 Jun 2024 03:44:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749670.1157907; Thu, 27 Jun 2024 03:44:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMg3L-0007DX-EG; Thu, 27 Jun 2024 03:44:07 +0000
Received: by outflank-mailman (input) for mailman id 749670;
 Thu, 27 Jun 2024 03:44:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMg3J-0007DJ-VX; Thu, 27 Jun 2024 03:44:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMg3J-0007k7-SS; Thu, 27 Jun 2024 03:44:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMg3J-0002Ce-HL; Thu, 27 Jun 2024 03:44:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMg3J-0004pE-Eq; Thu, 27 Jun 2024 03:44:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Yd6IaJ6UxqmWE1MlH+cJWphq5YSpf3cardEUKy0/74k=; b=ipHATOfpIEHhiihl/kpZVpe+Pu
	njZW41mIY0ke3Bc0+GNeartmWvlNLjl5WOSWcdL74Vo3HOe+NMkeRHS0ISEoertVpNQ78RoG67Kh1
	NeF418+R8pGhCBPODJT9Xiyn/CpYIZq9su83AMY/uWCw1JbSFUSqeiGQtbHNQ2MHYiOA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186517-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186517: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=4712e3b3769e6c03e0aaaa8179395f0fb7b141cc
X-Osstest-Versions-That:
    xen=853f707cd9e2eb9410dbfbadbd5a01ac0252ef83
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Jun 2024 03:44:05 +0000

flight 186517 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186517/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186510
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186510
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186510
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186510
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186510
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186510
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  4712e3b3769e6c03e0aaaa8179395f0fb7b141cc
baseline version:
 xen                  853f707cd9e2eb9410dbfbadbd5a01ac0252ef83

Last test of basis   186510  2024-06-26 05:21:29 Z    0 days
Testing same since   186517  2024-06-26 14:40:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   853f707cd9..4712e3b376  4712e3b3769e6c03e0aaaa8179395f0fb7b141cc -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 04:54:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 04:54:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749687.1157916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMh9I-0008Oc-JY; Thu, 27 Jun 2024 04:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749687.1157916; Thu, 27 Jun 2024 04:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMh9I-0008OV-Gh; Thu, 27 Jun 2024 04:54:20 +0000
Received: by outflank-mailman (input) for mailman id 749687;
 Thu, 27 Jun 2024 04:54:19 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gTBj=N5=bombadil.srs.infradead.org=BATV+b68173646e9cfe1398de+7613+infradead.org+hch@srs-se1.protection.inumbo.net>)
 id 1sMh9F-0008OP-TN
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 04:54:19 +0000
Received: from bombadil.infradead.org (bombadil.infradead.org
 [2607:7c80:54:3::133])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 4da31c2e-3441-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 06:54:16 +0200 (CEST)
Received: from hch by bombadil.infradead.org with local (Exim 4.97.1 #2 (Red
 Hat Linux)) id 1sMh93-00000009D8d-4Br2;
 Thu, 27 Jun 2024 04:54:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4da31c2e-3441-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=bombadil.20210309; h=In-Reply-To:Content-Type:MIME-Version
	:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
	Content-Transfer-Encoding:Content-ID:Content-Description;
	bh=/fzE2yWElnzU7XRficmyE9tfka6WnF3hLS24PJakxl4=; b=uzcGPgconVlUhJahqA4o0y4rP2
	YDkQfdW0m0w2u8N8BRaxbwBTF1twTy53aPn4xBPj/AfBrb0WxTzWJH8wD6oR43WzK3V70db4qXtWY
	0IIZI+2mDcUbXHX9Anl5SaO8PZqQtBzUCFGUjAdJ7ExnkxJgGk1kclJpDyt1NPlhJH6k1SD2KJv1r
	Y/aWT38dOOUqqw6WO2aC6zoYnYWj+859j2ETzBrnZoTEbtU0f3blIKoRcb5DyUh8Hm74KMCv5zkZg
	gzrN7RVyue7JLzZcBns6ISX0k4yPom7ZauQBJlbkxj+mZ51Z3qG2KMut3g2uFWJoxlvZFyPWvc2cp
	37dKQmlg==;
Date: Wed, 26 Jun 2024 21:54:05 -0700
From: Christoph Hellwig <hch@infradead.org>
To: Oliver Sang <oliver.sang@intel.com>
Cc: Christoph Hellwig <hch@infradead.org>, oe-lkp@lists.linux.dev,
	lkp@intel.com, Jens Axboe <axboe@kernel.dk>,
	Ulf Hansson <ulf.hansson@linaro.org>,
	Damien Le Moal <dlemoal@kernel.org>, Hannes Reinecke <hare@suse.de>,
	linux-block@vger.kernel.org, linux-um@lists.infradead.org,
	drbd-dev@lists.linbit.com, nbd@other.debian.org,
	linuxppc-dev@lists.ozlabs.org, virtualization@lists.linux.dev,
	xen-devel@lists.xenproject.org, linux-bcache@vger.kernel.org,
	dm-devel@lists.linux.dev, linux-raid@vger.kernel.org,
	linux-mmc@vger.kernel.org, linux-mtd@lists.infradead.org,
	nvdimm@lists.linux.dev, linux-nvme@lists.infradead.org,
	linux-scsi@vger.kernel.org, ying.huang@intel.com,
	feng.tang@intel.com, fengwei.yin@intel.com
Subject: Re: [axboe-block:for-next] [block]  1122c0c1cc:  aim7.jobs-per-min
 22.6% improvement
Message-ID: <ZnzwbYSaIlT0SIEy@infradead.org>
References: <202406250948.e0044f1d-oliver.sang@intel.com>
 <ZnqGf49cvy6W-xWf@infradead.org>
 <Znt4qTr/NdeIPyNp@xsang-OptiPlex-9020>
 <ZnuNhkH26nZi8fz6@infradead.org>
 <ZnzP+nUrk8+9bANK@xsang-OptiPlex-9020>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ZnzP+nUrk8+9bANK@xsang-OptiPlex-9020>
X-SRS-Rewrite: SMTP reverse-path rewritten from <hch@infradead.org> by bombadil.infradead.org. See http://www.infradead.org/rpr.html

On Thu, Jun 27, 2024 at 10:35:38AM +0800, Oliver Sang wrote:
> 
> I failed to apply patch in your previous reply to 1122c0c1cc or current tip
> of axboe-block/for-next:
> c1440ed442a58 (axboe-block/for-next) Merge branch 'for-6.11/block' into for-next

That already includes it.

> 
> but it's ok to apply upon next:
> * 0fc4bfab2cd45 (tag: next-20240625) Add linux-next specific files for 20240625
> 
> I've already started the test based on this applyment.
> is the expectation that patch should not introduce performance change comparing
> to 0fc4bfab2cd45?
> 
> or if this applyment is not ok, please just give me guidance. Thanks!

The expectation is that the latest block branch (and thus linux-next)
doesn't see this performance change.



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 05:34:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 05:34:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749263.1157927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMhmO-0005nW-GZ; Thu, 27 Jun 2024 05:34:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749263.1157927; Thu, 27 Jun 2024 05:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMhmO-0005nP-D1; Thu, 27 Jun 2024 05:34:44 +0000
Received: by outflank-mailman (input) for mailman id 749263;
 Wed, 26 Jun 2024 16:17:26 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4NZk=N4=rt-rk.com=Milan.Djokic@srs-se1.protection.inumbo.net>)
 id 1sMVKn-0002m5-8a
 for xen-devel@lists.xenproject.org; Wed, 26 Jun 2024 16:17:26 +0000
Received: from mx08-0061a602.pphosted.com (mx08-0061a602.pphosted.com
 [205.220.185.213]) by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 90c6ff4e-33d7-11ef-90a3-e314d9c70b13;
 Wed, 26 Jun 2024 18:17:21 +0200 (CEST)
Received: from pps.filterd (m0278994.ppops.net [127.0.0.1])
 by mx07-0061a602.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 45QG07Xa008532;
 Wed, 26 Jun 2024 16:16:54 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2111.outbound.protection.outlook.com [104.47.18.111])
 by mx07-0061a602.pphosted.com (PPS) with ESMTPS id 3ywktvva5g-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 26 Jun 2024 16:16:52 +0000 (GMT)
Received: from DU5PR08MB10397.eurprd08.prod.outlook.com (2603:10a6:10:524::10)
 by PAVPR08MB8990.eurprd08.prod.outlook.com (2603:10a6:102:326::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.32; Wed, 26 Jun
 2024 16:16:49 +0000
Received: from DU5PR08MB10397.eurprd08.prod.outlook.com
 ([fe80::9791:8914:1ef3:c828]) by DU5PR08MB10397.eurprd08.prod.outlook.com
 ([fe80::9791:8914:1ef3:c828%6]) with mapi id 15.20.7698.025; Wed, 26 Jun 2024
 16:16:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90c6ff4e-33d7-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=rt-rk.com; h=cc
	:content-type:date:from:in-reply-to:message-id:mime-version
	:references:subject:to; s=s2021; bh=NXygv6F3wdlxaKwqjrGFQGEMMJu1
	F+YqhWRKx9A062U=; b=ZJLgkzbXPYYTzAXytjmaQMH8pe+4AC58u8g9ltpgOELv
	NfeKLnN6FOAZbwaNn5E5vIaNPFecYWIprOdXuGLR11S78IZbk4EHa6H6QqFMmqpz
	QFBf2eS97NPnpayh5a77Iet/D4A0z5X4S3Eq5Sx3VbJoKTD750txB9dAekEVKF1J
	VZDKIC4lN16UVfi+wLYHVEH5baMOqMnOHTDT9ttTxIosIO+BNKdyuqeFxw4cspdN
	ZjuccIdPDBECWSJurEFj4VNRCb62knEqaCVkv+NUDNOkkmRsPz6ajs3D7hzTISnE
	WlDC8wtKc5ygxFDd1WkHNDd3xIMiRpSbYnsDtm2Z3g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=rt-rk.com; dmarc=pass action=none header.from=rt-rk.com;
 dkim=pass header.d=rt-rk.com; arc=none
From: Milan Djokic <Milan.Djokic@rt-rk.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "milandjokic1995@gmail.com" <milandjokic1995@gmail.com>,
        Nikola Jelic
	<nikola.jelic@rt-rk.com>,
        Alistair Francis <alistair.francis@wdc.com>,
        Bob
 Eshleman <bobbyeshleman@gmail.com>,
        Connor Davis <connojdavis@gmail.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap
	<george.dunlap@citrix.com>,
        Julien Grall <julien@xen.org>,
        Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen/riscv: PE/COFF image header for RISC-V target
Thread-Topic: [PATCH] xen/riscv: PE/COFF image header for RISC-V target
Thread-Index: AQHat2kkFA7Fx2uRgEClS8RhG5ht9LHEFa6AgAGeooCAAAKgVA==
Date: Wed, 26 Jun 2024 16:16:48 +0000
Message-ID: 
 <DU5PR08MB103973ABF5E6F12853F5D24E1CEC12@DU5PR08MB10397.eurprd08.prod.outlook.com>
References: 
 <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
 <0e10ee9c215269b577321ba44f5d038a5eb299a7.1718193326.git.milan.djokic@rt-rk.com>
 <8112bee8-efdc-4db9-b0d4-58b160b4e923@suse.com>
In-Reply-To: <8112bee8-efdc-4db9-b0d4-58b160b4e923@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
msip_labels: 
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: DU5PR08MB10397:EE_|PAVPR08MB8990:EE_
x-ms-office365-filtering-correlation-id: a4344ee2-1eec-4acf-9cf9-08dc95fb61ac
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: multipart/related;
	boundary="_004_DU5PR08MB103973ABF5E6F12853F5D24E1CEC12DU5PR08MB10397eu_";
	type="multipart/alternative"
MIME-Version: 1.0
X-OriginatorOrg: rt-rk.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: DU5PR08MB10397.eurprd08.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: a4344ee2-1eec-4acf-9cf9-08dc95fb61ac
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 Jun 2024 16:16:48.7445
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 9bc3ed46-a3ca-43f0-b84e-9a557209a7df
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: VAdiAcBhlp9Ctr77DXpiKdMDIUGMc/tDVpyefEijyvJJf3sScYDXAvRbbuMyXHUmBpdNC+wfOHOzCOWfzY//uw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAVPR08MB8990
X-Proofpoint-ORIG-GUID: MCECvYy5edOCKMHfWpKcYiYHMG2GRWGs
X-Proofpoint-GUID: MCECvYy5edOCKMHfWpKcYiYHMG2GRWGs

--_004_DU5PR08MB103973ABF5E6F12853F5D24E1CEC12DU5PR08MB10397eu_
Content-Type: multipart/alternative;
	boundary="_000_DU5PR08MB103973ABF5E6F12853F5D24E1CEC12DU5PR08MB10397eu_"

--_000_DU5PR08MB103973ABF5E6F12853F5D24E1CEC12DU5PR08MB10397eu_
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

> Signed-off-by: Nikola Jelic <nikola.jelic@rt-rk.com>

This isn't you, is it? Your S-o-b is going to be needed, too.

Nikola.jelic@rt-rk.com is the initial author of the patch; I'll add my own S-o-b as well if necessary.

> +config RISCV_EFI
> +     bool "UEFI boot service support"
> +     depends on RISCV_64
> +     default n

Nit: This line can be omitted (and if I'm not mistaken we generally do omit
such).

If we remove the default value, the EFI header will be included in the xen image by default. We want to keep it optional for now and generate a plain ELF as the default format (until full EFI support is implemented).

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/pe.h
> @@ -0,0 +1,148 @@
> +#ifndef _ASM_RISCV_PE_H
> +#define _ASM_RISCV_PE_H
> +
> +#define LINUX_EFISTUB_MAJOR_VERSION     0x1
> +#define LINUX_EFISTUB_MINOR_VERSION     0x0
> +
> +#define MZ_MAGIC                    0x5a4d          /* "MZ" */
> +
> +#define PE_MAGIC                    0x00004550      /* "PE\0\0" */
> +#define PE_OPT_MAGIC_PE32           0x010b
> +#define PE_OPT_MAGIC_PE32PLUS       0x020b
> +
> +/* machine type */
> +#define IMAGE_FILE_MACHINE_RISCV32  0x5032
> +#define IMAGE_FILE_MACHINE_RISCV64  0x5064

Apart from this, hardly anything in this header is RISC-V specific.
Please consider moving to xen/include/xen/.

We shall move generic part into xen/include/xen/efi. This is something whic=
h should be considered for use on arm/x86 also. Currently PE/COFF header is=
 directly embedded into
head.S for arm/x86

> +    char     name[8];                /* name or "/12\0" string tbl offset */

Why 12?

Either the section name is specified in this field, or a string table offset when the name can't fit into 8 bytes, which is the case here. Btw, this is taken over from the Linux kernel together with the PE/COFF header structures: https://github.com/torvalds/linux/blob/master/include/linux/pe.h

> + * struct riscv_image_header - riscv xen image header

You saying "xen": Is there anything Xen-specific in this struct?

Not really related to xen; this is a generic RISC-V PE image header. The comment is fixed in the new version.

> +        .long   0                                       /* LoaderFlags */
> +        .long   (section_table - .) / 8                 /* NumberOfRvaAndSizes */
> +        .quad   0                                       /* ExportTable */
> +        .quad   0                                       /* ImportTable */
> +        .quad   0                                       /* ResourceTable */
> +        .quad   0                                       /* ExceptionTable */
> +        .quad   0                                       /* CertificationTable */
> +        .quad   0                                       /* BaseRelocationTable */

Would you mind clarifying on what basis this set of 6 entries was
chosen?

These fields and their sizes are defined in the official PE format; see the details from the specification below.

[inline image: table of optional-header data directory entries from the PE/COFF specification]


> +/* Section table */
> +section_table:
> +        .ascii  ".text\0\0\0"
> +        .long   0
> +        .long   0
> +        .long   0                                       /* SizeOfRawData */
> +        .long   0                                       /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_CODE | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_EXECUTE                   /* Characteristics */
> +
> +        .ascii  ".data\0\0\0"
> +        .long   _end - xen_start                        /* VirtualSize */
> +        .long   xen_start - efi_head                    /* VirtualAddress */
> +        .long   __init_end_efi - xen_start              /* SizeOfRawData */
> +        .long   xen_start - efi_head                    /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_INITIALIZED_DATA | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_WRITE                    /* Characteristics */

IOW no code and the entire image expressed as data. Interesting.
No matter whether that has a reason or is completely arbitrary, I
think it, too, wants commenting on.

This is correct. Currently we have extended the image with a PE/COFF (EFI) header, which allows Xen to boot from an EFI loader (or U-Boot) environment, and these updates are pure data. We are actively working on the implementation of Boot/Runtime services, which will go into the code section and enable a fully UEFI-compatible Xen application for riscv.
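As a side note for anyone decoding these entries: each section-table record is the fixed 40-byte PE/COFF section header, which is also where the `(. - section_table) / 40` divisor for `section_count` further down comes from. A hedged Python sketch of the layout (the numeric values passed in are made-up placeholders standing in for the real link-time symbols such as `_end - xen_start`):

```python
import struct

# PE/COFF section header: 8-byte name, six 32-bit fields (VirtualSize,
# VirtualAddress, SizeOfRawData, PointerToRawData, PointerToRelocations,
# PointerToLineNumbers), two 16-bit counters, 32-bit Characteristics.
SECTION_HDR_FMT = "<8s6I2HI"

IMAGE_SCN_CNT_INITIALIZED_DATA = 0x00000040
IMAGE_SCN_MEM_READ             = 0x40000000
IMAGE_SCN_MEM_WRITE            = 0x80000000

def pack_section(name, vsize, vaddr, rawsize, rawptr, characteristics):
    """Pack one 40-byte section record, zeroing the relocation/line fields."""
    return struct.pack(SECTION_HDR_FMT, name.encode().ljust(8, b"\0"),
                       vsize, vaddr, rawsize, rawptr, 0, 0, 0, 0,
                       characteristics)

# Placeholder numbers, not the actual linker-computed values.
data_section = pack_section(".data", 0x200000, 0x1000, 0x180000, 0x1000,
                            IMAGE_SCN_CNT_INITIALIZED_DATA |
                            IMAGE_SCN_MEM_READ | IMAGE_SCN_MEM_WRITE)
```

The fixed 40-byte record size is what makes the assembler's divide-by-40 count of sections valid.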

Why does the blank line disappear? And why is ...

>      . = ALIGN(POINTER_ALIGN);
>      __init_end = .;

... __init_end not good enough? (I think I can guess the answer, but
then I further think the name of the symbol is misleading.)

__init_end_efi is used only when the EFI sections are included in the image. We have aligned with the Arm implementation here; you can take a look there as well.

Regarding the other comments, we've fixed our code structure following the kernel EFI application implementation for RISC-V. This will be obvious in the updated patch itself; we just wanted to address the comments which need additional explanation here before submitting the new patch version.

Best regards,
Milan

________________________________
From: Jan Beulich <jbeulich@suse.com>
Sent: Thursday, June 13, 2024 2:59 PM
To: milandjokic1995@gmail.com <milandjokic1995@gmail.com>
Cc: Milan Djokic <milan.djokic@rt-rk.com>; Nikola Jelic <nikola.jelic@rt-rk.com>; Alistair Francis <alistair.francis@wdc.com>; Bob Eshleman <bobbyeshleman@gmail.com>; Connor Davis <connojdavis@gmail.com>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; xen-devel@lists.xenproject.org <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] xen/riscv: PE/COFF image header for RISC-V target

On 12.06.2024 14:15, milandjokic1995@gmail.com wrote:
> From: Nikola Jelic <nikola.jelic@rt-rk.com>
>
> Extended RISC-V xen image with PE/COFF headers,
> in order to support xen boot from popular bootloaders like U-boot.
> Image header is optionally included (with CONFIG_RISCV_EFI) so
> both plain ELF and image with PE/COFF header can now be generated as build artifacts.
> Note that this patch also represents initial EFI application format support (image
> contains EFI application header embedded into binary when CONFIG_RISCV_EFI
> is enabled). For full EFI application Xen support, boot/runtime services
> and system table handling are yet to be implemented.
>
> Tested on both QEMU and StarFive VisionFive 2 with OpenSBI->U-Boot->xen->dom0 boot chain.
>
> Signed-off-by: Nikola Jelic <nikola.jelic@rt-rk.com>

This isn't you, is it? Your S-o-b is going to be needed, too.

> --- a/xen/arch/riscv/Kconfig
> +++ b/xen/arch/riscv/Kconfig
> @@ -9,6 +9,15 @@ config ARCH_DEFCONFIG
>        string
>        default "arch/riscv/configs/tiny64_defconfig"
>
> +config RISCV_EFI
> +     bool "UEFI boot service support"
> +     depends on RISCV_64
> +     default n

Nit: This line can be omitted (and if I'm not mistaken we generally do omit such).

> +     help
> +       This option provides support for boot services through
> +       UEFI firmware. A UEFI stub is provided to allow Xen to
> +       be booted as an EFI application.

I don't think my v1 comment on this was addressed.

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/pe.h
> @@ -0,0 +1,148 @@
> +#ifndef _ASM_RISCV_PE_H
> +#define _ASM_RISCV_PE_H
> +
> +#define LINUX_EFISTUB_MAJOR_VERSION     0x1
> +#define LINUX_EFISTUB_MINOR_VERSION     0x0
> +
> +#define MZ_MAGIC                    0x5a4d          /* "MZ" */
> +
> +#define PE_MAGIC                    0x00004550      /* "PE\0\0" */
> +#define PE_OPT_MAGIC_PE32           0x010b
> +#define PE_OPT_MAGIC_PE32PLUS       0x020b
> +
> +/* machine type */
> +#define IMAGE_FILE_MACHINE_RISCV32  0x5032
> +#define IMAGE_FILE_MACHINE_RISCV64  0x5064

Apart from this, hardly anything in this header is RISC-V specific.
Please consider moving to xen/include/xen/.

> +/* flags */
> +#define IMAGE_FILE_EXECUTABLE_IMAGE 0x0002
> +#define IMAGE_FILE_LINE_NUMS_STRIPPED 0x0004
> +#define IMAGE_FILE_DEBUG_STRIPPED   0x0200
> +#define IMAGE_SUBSYSTEM_EFI_APPLICATION 10
> +
> +#define IMAGE_SCN_CNT_CODE          0x00000020      /* .text */
> +#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040   /* .data */
> +#define IMAGE_SCN_MEM_EXECUTE       0x20000000
> +#define IMAGE_SCN_MEM_READ          0x40000000      /* readable */
> +#define IMAGE_SCN_MEM_WRITE         0x80000000      /* writeable */
> +
> +#ifndef __ASSEMBLY__
> +
> +struct mz_hdr {
> +    uint16_t magic;                  /* MZ_MAGIC */
> +    uint16_t lbsize;                 /* size of last used block */
> +    uint16_t blocks;                 /* pages in file, 0x3 */
> +    uint16_t relocs;                 /* relocations */
> +    uint16_t hdrsize;                /* header size in "paragraphs" */
> +    uint16_t min_extra_pps;          /* .bss */
> +    uint16_t max_extra_pps;          /* runtime limit for the arena size */
> +    uint16_t ss;                     /* relative stack segment */
> +    uint16_t sp;                     /* initial %sp register */
> +    uint16_t checksum;               /* word checksum */
> +    uint16_t ip;                     /* initial %ip register */
> +    uint16_t cs;                     /* initial %cs relative to load segment */
> +    uint16_t reloc_table_offset;     /* offset of the first relocation */
> +    uint16_t overlay_num;
> +    uint16_t reserved0[4];
> +    uint16_t oem_id;
> +    uint16_t oem_info;
> +    uint16_t reserved1[10];
> +    uint32_t peaddr;                 /* address of pe header */
> +    char     message[];              /* message to print */
> +};
> +
> +struct pe_hdr {
> +    uint32_t magic;                  /* PE magic */
> +    uint16_t machine;                /* machine type */
> +    uint16_t sections;               /* number of sections */
> +    uint32_t timestamp;
> +    uint32_t symbol_table;           /* symbol table offset */
> +    uint32_t symbols;                /* number of symbols */
> +    uint16_t opt_hdr_size;           /* size of optional header */
> +    uint16_t flags;                  /* flags */
> +};
> +
> +struct pe32_opt_hdr {
> +    /* "standard" header */
> +    uint16_t magic;                  /* file type */
> +    uint8_t  ld_major;               /* linker major version */
> +    uint8_t  ld_minor;               /* linker minor version */
> +    uint32_t text_size;
> +    uint32_t data_size;
> +    uint32_t bss_size;
> +    uint32_t entry_point;            /* file offset of entry point */
> +    uint32_t code_base;              /* relative code addr in ram */
> +    uint32_t data_base;              /* relative data addr in ram */
> +    /* "extra" header fields */
> +    uint32_t image_base;             /* preferred load address */
> +    uint32_t section_align;          /* alignment in bytes */
> +    uint32_t file_align;             /* file alignment in bytes */
> +    uint16_t os_major;
> +    uint16_t os_minor;
> +    uint16_t image_major;
> +    uint16_t image_minor;
> +    uint16_t subsys_major;
> +    uint16_t subsys_minor;
> +    uint32_t win32_version;          /* reserved, must be 0 */
> +    uint32_t image_size;
> +    uint32_t header_size;
> +    uint32_t csum;
> +    uint16_t subsys;
> +    uint16_t dll_flags;
> +    uint32_t stack_size_req;         /* amt of stack requested */
> +    uint32_t stack_size;             /* amt of stack required */
> +    uint32_t heap_size_req;          /* amt of heap requested */
> +    uint32_t heap_size;              /* amt of heap required */
> +    uint32_t loader_flags;           /* reserved, must be 0 */
> +    uint32_t data_dirs;              /* number of data dir entries */
> +};
> +
> +struct pe32plus_opt_hdr {
> +    uint16_t magic;                  /* file type */
> +    uint8_t  ld_major;               /* linker major version */
> +    uint8_t  ld_minor;               /* linker minor version */
> +    uint32_t text_size;
> +    uint32_t data_size;
> +    uint32_t bss_size;
> +    uint32_t entry_point;            /* file offset of entry point */
> +    uint32_t code_base;              /* relative code addr in ram */
> +    /* "extra" header fields */
> +    uint64_t image_base;             /* preferred load address */
> +    uint32_t section_align;          /* alignment in bytes */
> +    uint32_t file_align;             /* file alignment in bytes */
> +    uint16_t os_major;
> +    uint16_t os_minor;
> +    uint16_t image_major;
> +    uint16_t image_minor;
> +    uint16_t subsys_major;
> +    uint16_t subsys_minor;
> +    uint32_t win32_version;          /* reserved, must be 0 */
> +    uint32_t image_size;
> +    uint32_t header_size;
> +    uint32_t csum;
> +    uint16_t subsys;
> +    uint16_t dll_flags;
> +    uint64_t stack_size_req;         /* amt of stack requested */
> +    uint64_t stack_size;             /* amt of stack required */
> +    uint64_t heap_size_req;          /* amt of heap requested */
> +    uint64_t heap_size;              /* amt of heap required */
> +    uint32_t loader_flags;           /* reserved, must be 0 */
> +    uint32_t data_dirs;              /* number of data dir entries */
> +};
> +
> +struct section_header {
> +    char     name[8];                /* name or "/12\0" string tbl offset */

Why 12?

> --- /dev/null
> +++ b/xen/arch/riscv/include/asm/riscv_image_header.h

Is this file taken from somewhere else, kind of making it desirable to keep
its name in sync with the original? Otherwise: We prefer dashes over
underscores in new files' names.

> @@ -0,0 +1,54 @@
> +#ifndef _ASM_RISCV_IMAGE_H
> +#define _ASM_RISCV_IMAGE_H
> +
> +#define RISCV_IMAGE_MAGIC "RISCV\0\0\0"
> +#define RISCV_IMAGE_MAGIC2 "RSC\x05"
> +
> +#define RISCV_IMAGE_FLAG_BE_SHIFT 0
> +
> +#define RISCV_IMAGE_FLAG_LE 0
> +#define RISCV_IMAGE_FLAG_BE 1
> +
> +#define __HEAD_FLAG_BE RISCV_IMAGE_FLAG_LE
> +
> +#define __HEAD_FLAG(field) (__HEAD_FLAG_##field << RISCV_IMAGE_FLAG_##field##_SHIFT)
> +
> +#define __HEAD_FLAGS (__HEAD_FLAG(BE))
> +
> +#define RISCV_HEADER_VERSION_MAJOR 0
> +#define RISCV_HEADER_VERSION_MINOR 2
> +
> +#define RISCV_HEADER_VERSION (RISCV_HEADER_VERSION_MAJOR << 16 | \
> +                              RISCV_HEADER_VERSION_MINOR)
> +
> +#ifndef __ASSEMBLY__
> +/*
> + * struct riscv_image_header - riscv xen image header

You saying "xen": Is there anything Xen-specific in this struct?

> + * @code0:              Executable code
> + * @code1:              Executable code
> + * @text_offset:        Image load offset
> + * @image_size:         Effective Image size
> + * @reserved:           reserved
> + * @reserved:           reserved
> + * @reserved:           reserved
> + * @magic:              Magic number
> + * @reserved:           reserved
> + * @reserved:           reserved (will be used for PE COFF offset)
> + */
> +
> +struct riscv_image_header
> +{
> +    uint32_t code0;
> +    uint32_t code1;
> +    uint64_t text_offset;
> +    uint64_t image_size;
> +    uint64_t res1;
> +    uint64_t res2;
> +    uint64_t res3;
> +    uint64_t magic;
> +    uint32_t res4;
> +    uint32_t res5;
> +};
> +#endif /* __ASSEMBLY__ */
> +#endif /* _ASM_RISCV_IMAGE_H */
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,14 +1,150 @@
>  #include <asm/asm.h>
>  #include <asm/riscv_encoding.h>
> +#include <asm/riscv_image_header.h>
> +#ifdef CONFIG_RISCV_EFI
> +#include <asm/pe.h>
> +#endif
>
>          .section .text.header, "ax", %progbits
>
>          /*
>           * OpenSBI pass to start():
>           *   a0 -> hart_id ( bootcpu_id )
> -         *   a1 -> dtb_base
> +         *   a1 -> dtb_base
>           */
>  FUNC(start)
> +
> +efi_head:

Why is this ...

> +#ifdef CONFIG_RISCV_EFI

... ahead of this?

> +        /*
> +         * This instruction decodes to "MZ" ASCII required by UEFI.
> +         */
> +        c.li s4,-13

IOW RISCV_EFI ought to be (made) dependent upon RISCV_ISA_C?
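As an aside for anyone verifying the trick: the claim that this instruction decodes to "MZ" can be reproduced by hand from the 16-bit C.LI encoding. A small illustrative Python sketch (field layout per the RVC encoding; `s4` is register x20):

```python
# C.LI layout: funct3(3) | imm[5](1) | rd(5) | imm[4:0](5) | op(2)
rd = 20                      # s4 == x20
imm6 = -13 & 0x3F            # 6-bit two's-complement immediate: 0b110011
insn = ((0b010 << 13) |      # funct3 for C.LI
        ((imm6 >> 5) << 12) |
        (rd << 7) |
        ((imm6 & 0x1F) << 2) |
        0b01)                # quadrant-1 opcode
# Stored little-endian, the halfword comes out as the ASCII bytes "MZ".
mz = bytes([insn & 0xFF, insn >> 8])
```

So the specific choice of `s4` and `-13` is exactly what makes the halfword equal 0x5A4D, i.e. "MZ" in memory order.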

> +        j xen_start

Doesn't this then need to be c.j, to be sure it fits in 16 bits (and
raise an assembler error if not)?

> +#else
> +        /* jump to start kernel */
> +        j xen_start
> +        /* reserved */
> +        .word 0

What struct field does this correspond to? Or if not a struct field,
why is this needed?

Also I can't help the impression that the resulting layout will be
different depending on whether RISCV_ISA_C is enabled, as the "j"
may translate to a 16-bit or 32-bit insn.

I wonder anyway what use everything from here to ...

> +#endif
> +        .balign 8
> +#ifdef CONFIG_RISCV_64
> +        /* Image load offset(2MB) from start of RAM */
> +        .dword 0x200000
> +#else
> +        /* Image load offset(4MB) from start of RAM */
> +        .dword 0x400000
> +#endif
> +        /* Effective size of xen image */
> +        .dword _end - _start
> +        .dword __HEAD_FLAGS
> +        .word RISCV_HEADER_VERSION
> +        .word 0
> +        .dword 0
> +        .ascii RISCV_IMAGE_MAGIC
> +        .balign 4
> +        .ascii RISCV_IMAGE_MAGIC2
> +#ifndef CONFIG_RISCV_EFI
> +        .word 0

... here is when RISCV_EFI=n. You add data which wasn't needed so far,
and for which it also isn't explained why it would suddenly be needed.

I also can't bring several of the fields above in sync with the
struct riscv_image_header definition in the header. Please can you
annotate each field with a comment naming the corresponding C struct
field (like you do further down, at least in a way)?

> +#else
> +        .word pe_head_start - efi_head
> +pe_head_start:
> +        .long        PE_MAGIC
> +coff_header:
> +#ifdef CONFIG_RISCV_64
> +        .short  IMAGE_FILE_MACHINE_RISCV64              /* Machine */
> +#else
> +        .short  IMAGE_FILE_MACHINE_RISCV32              /* Machine */
> +#endif
> +        .short  section_count                           /* NumberOfSections */
> +        .long   0                                       /* TimeDateStamp */
> +        .long   0                                       /* PointerToSymbolTable */
> +        .long   0                                       /* NumberOfSymbols */
> +        .short  section_table - optional_header         /* SizeOfOptionalHeader */
> +        .short  IMAGE_FILE_DEBUG_STRIPPED | \
> +                IMAGE_FILE_EXECUTABLE_IMAGE | \
> +                IMAGE_FILE_LINE_NUMS_STRIPPED           /* Characteristics */
> +
> +optional_header:
> +#ifdef CONFIG_RISCV_64
> +        .short  PE_OPT_MAGIC_PE32PLUS                   /* PE32+ format */
> +#else
> +        .short  PE_OPT_MAGIC_PE32                       /* PE32 format */
> +#endif
> +        .byte   0x02                                    /* MajorLinkerVersion */
> +        .byte   0x14                                    /* MinorLinkerVersion */
> +        .long   _end - xen_start                        /* SizeOfCode */
> +        .long   0                                       /* SizeOfInitializedData */
> +        .long   0                                       /* SizeOfUninitializedData */
> +        .long   0                                       /* AddressOfEntryPoint */
> +        .long   xen_start - efi_head                    /* BaseOfCode */
> +
> +extra_header_fields:
> +        .quad   0                                       /* ImageBase */

This is quad only for PE32+, iirc. In PE32 it's two separate 32-bit
fields instead. The overall result may be tolerable (a data RVA of 0
can't be quite right, but we may be able to get away with that), but
it will at least want commenting on.

And anyway - further up in the RISC-V header struct you use .word and
.dword. Why .long and .quad here? That's at least somewhat confusing.

> +        .long   PECOFF_SECTION_ALIGNMENT                /* SectionAlignment */
> +        .long   PECOFF_FILE_ALIGNMENT                   /* FileAlignment */
> +        .short  0                                       /* MajorOperatingSystemVersion */
> +        .short  0                                       /* MinorOperatingSystemVersion */
> +        .short  LINUX_EFISTUB_MAJOR_VERSION             /* MajorImageVersion */
> +        .short  LINUX_EFISTUB_MINOR_VERSION             /* MinorImageVersion */
> +        .short  0                                       /* MajorSubsystemVersion */
> +        .short  0                                       /* MinorSubsystemVersion */
> +        .long   0                                       /* Win32VersionValue */
> +        .long   _end - efi_head                         /* SizeOfImage */
> +
> +        /* Everything before the xen image is considered part of the header */
> +        .long   xen_start - efi_head                    /* SizeOfHeaders */
> +        .long   0                                       /* CheckSum */
> +        .short  IMAGE_SUBSYSTEM_EFI_APPLICATION         /* Subsystem */
> +        .short  0                                       /* DllCharacteristics */
> +        .quad   0                                       /* SizeOfStackReserve */
> +        .quad   0                                       /* SizeOfStackCommit */
> +        .quad   0                                       /* SizeOfHeapReserve */
> +        .quad   0                                       /* SizeOfHeapCommit */

All of these are again 32 bits only in PE32, if I'm not mistaken.

> +        .long   0                                       /* LoaderFlags */
> +        .long   (section_table - .) / 8                 /* NumberOfRvaAndSizes */
> +        .quad   0                                       /* ExportTable */
> +        .quad   0                                       /* ImportTable */
> +        .quad   0                                       /* ResourceTable */
> +        .quad   0                                       /* ExceptionTable */
> +        .quad   0                                       /* CertificationTable */
> +        .quad   0                                       /* BaseRelocationTable */

Would you mind clarifying on what basis this set of 6 entries was
chosen?

> +/* Section table */
> +section_table:
> +        .ascii  ".text\0\0\0"
> +        .long   0
> +        .long   0
> +        .long   0                                       /* SizeOfRawData */
> +        .long   0                                       /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_CODE | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_EXECUTE                   /* Characteristics */
> +
> +        .ascii  ".data\0\0\0"
> +        .long   _end - xen_start                        /* VirtualSize */
> +        .long   xen_start - efi_head                    /* VirtualAddress */
> +        .long   __init_end_efi - xen_start              /* SizeOfRawData */
> +        .long   xen_start - efi_head                    /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_INITIALIZED_DATA | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_WRITE                    /* Characteristics */

IOW no code and the entire image expressed as data. Interesting.
No matter whether that has a reason or is completely arbitrary, I
think it, too, wants commenting on.

> +        .set    section_count, (. - section_table) / 40
> +
> +        .balign  0x1000
> +#endif/* CONFIG_RISCV_EFI */
> +
> +FUNC(xen_start)
>          /* Mask all interrupts */
>          csrw    CSR_SIE, zero
>
> @@ -60,6 +196,9 @@ FUNC(start)
>          mv      a1, s1
>
>          tail    start_xen
> +
> +END(xen_start)
> +
>  END(start)

I don't think you addressed my function nesting comment here either.

> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -12,6 +12,9 @@ PHDRS
>  #endif
>  }
>
> +PECOFF_SECTION_ALIGNMENT = 0x1000;
> +PECOFF_FILE_ALIGNMENT = 0x200;
> +
>  SECTIONS
>  {
>      . = XEN_VIRT_START;
> @@ -144,7 +147,7 @@ SECTIONS
>      .got.plt : {
>          *(.got.plt)
>      } : text
> -
> +    __init_end_efi = .;

Why does the blank line disappear? And why is ...

>      . = ALIGN(POINTER_ALIGN);
>      __init_end = .;

... __init_end not good enough? (I think I can guess the answer, but
then I further think the name of the symbol is misleading. )

> @@ -165,6 +168,7 @@ SECTIONS
>          . = ALIGN(POINTER_ALIGN);
>          __bss_end = .;
>      } :text
> +
>      _end = . ;

Interestingly an unrelated blank line suddenly appears here.

Jan

--_000_DU5PR08MB103973ABF5E6F12853F5D24E1CEC12DU5PR08MB10397eu_
Content-Type: text/html; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable

<html>
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Diso-8859-=
1">
<style type=3D"text/css" style=3D"display:none;"> P {margin-top:0;margin-bo=
ttom:0;} </style>
</head>
<body dir=3D"ltr">
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>&gt; Signed-off-by: Nikola Jelic &lt;nikola.jelic@rt-rk.com&gt;<br>
<br>
This isn't you, is it? Your S-o-b is going to be needed, too.</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
Nikola.jelic@rt-rk.com is the initial author of the patch, I'll add myself =
also if necessary</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>&gt; +config RISCV_EFI<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp; bool &quot;UEFI boot service support&quot;<b=
r>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp; depends on RISCV_64<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp; default n<br>
<br>
Nit: This line can be omitted (and if I'm not mistaken we generally do omit=
<br>
such).</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
If we remove the default value, EFI header shall be included into xen image=
 by default. We want to keep it as optional for now, and generate plain elf=
 as default format (until full EFI support is implemented)</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>&gt; --- /dev/null<br>
&gt; +++ b/xen/arch/riscv/include/asm/pe.h<br>
&gt; @@ -0,0 +1,148 @@<br>
&gt; +#ifndef _ASM_RISCV_PE_H<br>
&gt; +#define _ASM_RISCV_PE_H<br>
&gt; +<br>
&gt; +#define LINUX_EFISTUB_MAJOR_VERSION&nbsp;&nbsp;&nbsp;&nbsp; 0x1<br>
&gt; +#define LINUX_EFISTUB_MINOR_VERSION&nbsp;&nbsp;&nbsp;&nbsp; 0x0<br>
&gt; +<br>
&gt; +#define MZ_MAGIC&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp=
;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0x5a4d&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* &quot;MZ&quot; */<br>
&gt; +<br>
&gt; +#define PE_MAGIC&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp=
;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0x00004550&nb=
sp;&nbsp;&nbsp;&nbsp;&nbsp; /* &quot;PE\0\0&quot; */<br>
&gt; +#define PE_OPT_MAGIC_PE32&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp;&nbsp;&nbsp; 0x010b<br>
&gt; +#define PE_OPT_MAGIC_PE32PLUS&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; 0x0=
20b<br>
&gt; +<br>
&gt; +/* machine type */<br>
&gt; +#define IMAGE_FILE_MACHINE_RISCV32&nbsp; 0x5032<br>
&gt; +#define IMAGE_FILE_MACHINE_RISCV64&nbsp; 0x5064<br>
<br>
Apart from this, hardly anything in this header is RISC-V specific.<br>
Please consider moving to xen/include/xen/.</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
We shall move generic part into xen/include/xen/efi. This is something whic=
h should be considered for use on arm/x86 also. Currently PE/COFF header is=
 directly embedded into</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
head.S for arm/x86</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>&gt; +&nbsp;&nbsp;&nbsp; char&nbsp;&nbsp;&nbsp;&nbsp; name[8];&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp; /* name or &quot;/12\0&quot; string tbl offset */<br>
<br>
Why 12?</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
Either section name is specified in this field or string table offset if se=
ction name can't fit into 8 bytes, which is the case here. Btw this is take=
n over from linux kernel together with the PE/COFF header structures:
<a href=3D"https://github.com/torvalds/linux/blob/master/include/linux/pe.h=
" id=3D"LPlnk643349" class=3D"OWAAutoLink">
https://github.com/torvalds/linux/blob/master/include/linux/pe.h</a></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>&gt; + * struct riscv_image_header - riscv xen image header<br>
<br>
You saying &quot;xen&quot;: Is there anything Xen-specific in this struct?<=
/b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
Not really related to xen, this is generic riscv PE image header, comment f=
ixed in new version</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .long&nbsp;&nbsp; 0&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; =
/* LoaderFlags */<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .long&nbsp;&nbsp; (section=
_table - .) / 8&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;=
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* NumberOfRvaAndSizes */<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .quad&nbsp;&nbsp; 0&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* =
ExportTable */<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .quad&nbsp;&nbsp; 0&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* =
ImportTable */<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .quad&nbsp;&nbsp; 0&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* =
ResourceTable */<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .quad&nbsp;&nbsp; 0&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* =
ExceptionTable */<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .quad&nbsp;&nbsp; 0&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* =
CertificationTable */<br>
&gt; +&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; .quad&nbsp;&nbsp; 0&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbs=
p;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&=
nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; /* =
BaseRelocationTable */<br>
<br>
Would you mind clarifying on what basis this set of 6 entries was<br>
chosen?</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
These fields and their sizes are defined in the official PE/COFF format; see the details from the specification below.</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<span>[inline image: excerpt from the PE format specification showing these header fields]</span></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>&gt; +/* Section table */<br>
&gt; +section_table:<br>
&gt; +        .ascii  ".text\0\0\0"<br>
&gt; +        .long   0<br>
&gt; +        .long   0<br>
&gt; +        .long   0                               /* SizeOfRawData */<br>
&gt; +        .long   0                               /* PointerToRawData */<br>
&gt; +        .long   0                               /* PointerToRelocations */<br>
&gt; +        .long   0                               /* PointerToLineNumbers */<br>
&gt; +        .short  0                               /* NumberOfRelocations */<br>
&gt; +        .short  0                               /* NumberOfLineNumbers */<br>
&gt; +        .long   IMAGE_SCN_CNT_CODE | \<br>
&gt; +                IMAGE_SCN_MEM_READ | \<br>
&gt; +                IMAGE_SCN_MEM_EXECUTE           /* Characteristics */<br>
&gt; +<br>
&gt; +        .ascii  ".data\0\0\0"<br>
&gt; +        .long   _end - xen_start                /* VirtualSize */<br>
&gt; +        .long   xen_start - efi_head            /* VirtualAddress */<br>
&gt; +        .long   __init_end_efi - xen_start      /* SizeOfRawData */<br>
&gt; +        .long   xen_start - efi_head            /* PointerToRawData */<br>
&gt; +        .long   0                               /* PointerToRelocations */<br>
&gt; +        .long   0                               /* PointerToLineNumbers */<br>
&gt; +        .short  0                               /* NumberOfRelocations */<br>
&gt; +        .short  0                               /* NumberOfLineNumbers */<br>
&gt; +        .long   IMAGE_SCN_CNT_INITIALIZED_DATA | \<br>
&gt; +                IMAGE_SCN_MEM_READ | \<br>
&gt; +                IMAGE_SCN_MEM_WRITE             /* Characteristics */<br>
<br>
IOW no code and the entire image expressed as data. Interesting.<br>
No matter whether that has a reason or is completely arbitrary, I<br>
think it, too, wants commenting on.</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
This is correct. Currently we have extended the image with a PE/COFF (EFI) header, which allows Xen to boot from an EFI loader (or U-Boot) environment, and these additions are pure data. We are actively working on an implementation of boot/runtime services, which will live in the code section and enable a fully UEFI-compatible Xen application for RISC-V.</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b>Why does the blank line disappear? And why is ...<br>
<br>
&gt;      . = ALIGN(POINTER_ALIGN);<br>
&gt;      __init_end = .;<br>
<br>
... __init_end not good enough? (I think I can guess the answer, but<br>
then I further think the name of the symbol is misleading.)</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
__init_end_efi is used only when the EFI sections are included in the image. We have aligned with the Arm implementation here; you can take a look there as well.</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<b><br>
</b></div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
Regarding the other comments, we have restructured our code to follow the kernel's EFI application implementation for RISC-V. This will be evident in the updated patch itself; we just wanted to address the comments which need additional explanation here before submitting the new patch version.</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
<br>
</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
Best regards,</div>
<div class=3D"elementToProof" style=3D"font-family: Aptos, Aptos_EmbeddedFo=
nt, Aptos_MSFontService, Calibri, Helvetica, sans-serif; font-size: 12pt; c=
olor: rgb(0, 0, 0);">
Milan</div>
<div id=3D"appendonsend"></div>
<hr style=3D"display: inline-block; width: 98%;">
<div id=3D"divRplyFwdMsg" dir=3D"ltr"><span style=3D"font-family: Calibri, =
sans-serif; font-size: 11pt; color: rgb(0, 0, 0);"><b>From:</b>&nbsp;Jan Be=
ulich &lt;jbeulich@suse.com&gt;<br>
<b>Sent:</b>&nbsp;Thursday, June 13, 2024 2:59 PM<br>
<b>To:</b>&nbsp;milandjokic1995@gmail.com &lt;milandjokic1995@gmail.com&gt;=
<br>
<b>Cc:</b>&nbsp;Milan Djokic &lt;milan.djokic@rt-rk.com&gt;; Nikola Jelic &=
lt;nikola.jelic@rt-rk.com&gt;; Alistair Francis &lt;alistair.francis@wdc.co=
m&gt;; Bob Eshleman &lt;bobbyeshleman@gmail.com&gt;; Connor Davis &lt;conno=
jdavis@gmail.com&gt;; Andrew Cooper &lt;andrew.cooper3@citrix.com&gt;; Geor=
ge
 Dunlap &lt;george.dunlap@citrix.com&gt;; Julien Grall &lt;julien@xen.org&g=
t;; Stefano Stabellini &lt;sstabellini@kernel.org&gt;; Wei Liu &lt;wl@xen.o=
rg&gt;; xen-devel@lists.xenproject.org &lt;xen-devel@lists.xenproject.org&g=
t;<br>
<b>Subject:</b>&nbsp;Re: [PATCH] xen/riscv: PE/COFF image header for RISC-V=
 target</span>
<div>&nbsp;</div>
</div>
<div style=3D"font-size: 11pt;"><br>
<br>
On 12.06.2024 14:15, milandjokic1995@gmail.com wrote:<br>
&gt; From: Nikola Jelic &lt;nikola.jelic@rt-rk.com&gt;<br>
&gt;<br>
&gt; Extended RISC-V xen image with PE/COFF headers,<br>
&gt; in order to support xen boot from popular bootloaders like U-boot.<br>
&gt; Image header is optionally included (with CONFIG_RISCV_EFI) so<br>
&gt; both plain ELF and image with PE/COFF header can now be generated as build artifacts.<br>
&gt; Note that this patch also represents initial EFI application format support (image<br>
&gt; contains EFI application header embedded into binary when CONFIG_RISCV_EFI<br>
&gt; is enabled). For full EFI application Xen support, boot/runtime services<br>
&gt; and system table handling are yet to be implemented.<br>
&gt;<br>
&gt; Tested on both QEMU and StarFive VisionFive 2 with OpenSBI-&gt;U-Boot-&gt;xen-&gt;dom0 boot chain.<br>
&gt;<br>
&gt; Signed-off-by: Nikola Jelic &lt;nikola.jelic@rt-rk.com&gt;<br>
<br>
This isn't you, is it? Your S-o-b is going to be needed, too.<br>
<br>
&gt; --- a/xen/arch/riscv/Kconfig<br>
&gt; +++ b/xen/arch/riscv/Kconfig<br>
&gt; @@ -9,6 +9,15 @@ config ARCH_DEFCONFIG<br>
&gt;        string<br>
&gt;        default "arch/riscv/configs/tiny64_defconfig"<br>
&gt;<br>
&gt; +config RISCV_EFI<br>
&gt; +     bool "UEFI boot service support"<br>
&gt; +     depends on RISCV_64<br>
&gt; +     default n<br>
<br>
Nit: This line can be omitted (and if I'm not mistaken we generally do omit<br>
such).<br>
<br>
&gt; +     help<br>
&gt; +       This option provides support for boot services through<br>
&gt; +       UEFI firmware. A UEFI stub is provided to allow Xen to<br>
&gt; +       be booted as an EFI application.<br>
<br>
I don't think my v1 comment on this was addressed.<br>
<br>
&gt; --- /dev/null<br>
&gt; +++ b/xen/arch/riscv/include/asm/pe.h<br>
&gt; @@ -0,0 +1,148 @@<br>
&gt; +#ifndef _ASM_RISCV_PE_H<br>
&gt; +#define _ASM_RISCV_PE_H<br>
&gt; +<br>
&gt; +#define LINUX_EFISTUB_MAJOR_VERSION     0x1<br>
&gt; +#define LINUX_EFISTUB_MINOR_VERSION     0x0<br>
&gt; +<br>
&gt; +#define MZ_MAGIC                    0x5a4d          /* "MZ" */<br>
&gt; +<br>
&gt; +#define PE_MAGIC                    0x00004550      /* "PE\0\0" */<br>
&gt; +#define PE_OPT_MAGIC_PE32           0x010b<br>
&gt; +#define PE_OPT_MAGIC_PE32PLUS       0x020b<br>
&gt; +<br>
&gt; +/* machine type */<br>
&gt; +#define IMAGE_FILE_MACHINE_RISCV32  0x5032<br>
&gt; +#define IMAGE_FILE_MACHINE_RISCV64  0x5064<br>
<br>
Apart from this, hardly anything in this header is RISC-V specific.<br>
Please consider moving to xen/include/xen/.<br>
<br>
&gt; +/* flags */<br>
&gt; +#define IMAGE_FILE_EXECUTABLE_IMAGE 0x0002<br>
&gt; +#define IMAGE_FILE_LINE_NUMS_STRIPPED 0x0004<br>
&gt; +#define IMAGE_FILE_DEBUG_STRIPPED   0x0200<br>
&gt; +#define IMAGE_SUBSYSTEM_EFI_APPLICATION 10<br>
&gt; +<br>
&gt; +#define IMAGE_SCN_CNT_CODE          0x00000020      /* .text */<br>
&gt; +#define IMAGE_SCN_CNT_INITIALIZED_DATA 0x00000040   /* .data */<br>
&gt; +#define IMAGE_SCN_MEM_EXECUTE       0x20000000<br>
&gt; +#define IMAGE_SCN_MEM_READ          0x40000000      /* readable */<br>
&gt; +#define IMAGE_SCN_MEM_WRITE         0x80000000      /* writeable */<br>
&gt; +<br>
&gt; +#ifndef __ASSEMBLY__<br>
&gt; +<br>
&gt; +struct mz_hdr {<br>
&gt; +    uint16_t magic;                 /* MZ_MAGIC */<br>
&gt; +    uint16_t lbsize;                /* size of last used block */<br>
&gt; +    uint16_t blocks;                /* pages in file, 0x3 */<br>
&gt; +    uint16_t relocs;                /* relocations */<br>
&gt; +    uint16_t hdrsize;               /* header size in "paragraphs" */<br>
&gt; +    uint16_t min_extra_pps;         /* .bss */<br>
&gt; +    uint16_t max_extra_pps;         /* runtime limit for the arena size */<br>
&gt; +    uint16_t ss;                    /* relative stack segment */<br>
&gt; +    uint16_t sp;                    /* initial %sp register */<br>
&gt; +    uint16_t checksum;              /* word checksum */<br>
&gt; +    uint16_t ip;                    /* initial %ip register */<br>
&gt; +    uint16_t cs;                    /* initial %cs relative to load segment */<br>
&gt; +    uint16_t reloc_table_offset;    /* offset of the first relocation */<br>
&gt; +    uint16_t overlay_num;<br>
&gt; +    uint16_t reserved0[4];<br>
&gt; +    uint16_t oem_id;<br>
&gt; +    uint16_t oem_info;<br>
&gt; +    uint16_t reserved1[10];<br>
&gt; +    uint32_t peaddr;                /* address of pe header */<br>
&gt; +    char     message[];             /* message to print */<br>
&gt; +};<br>
&gt; +<br>
&gt; +struct pe_hdr {<br>
&gt; +    uint32_t magic;                 /* PE magic */<br>
&gt; +    uint16_t machine;               /* machine type */<br>
&gt; +    uint16_t sections;              /* number of sections */<br>
&gt; +    uint32_t timestamp;<br>
&gt; +    uint32_t symbol_table;          /* symbol table offset */<br>
&gt; +    uint32_t symbols;               /* number of symbols */<br>
&gt; +    uint16_t opt_hdr_size;          /* size of optional header */<br>
&gt; +    uint16_t flags;                 /* flags */<br>
&gt; +};<br>
&gt; +<br>
&gt; +struct pe32_opt_hdr {<br>
&gt; +    /* "standard" header */<br>
&gt; +    uint16_t magic;                 /* file type */<br>
&gt; +    uint8_t  ld_major;              /* linker major version */<br>
&gt; +    uint8_t  ld_minor;              /* linker minor version */<br>
&gt; +    uint32_t text_size;<br>
&gt; +    uint32_t data_size;<br>
&gt; +    uint32_t bss_size;<br>
&gt; +    uint32_t entry_point;           /* file offset of entry point */<br>
&gt; +    uint32_t code_base;             /* relative code addr in ram */<br>
&gt; +    uint32_t data_base;             /* relative data addr in ram */<br>
&gt; +    /* "extra" header fields */<br>
&gt; +    uint32_t image_base;            /* preferred load address */<br>
&gt; +    uint32_t section_align;         /* alignment in bytes */<br>
&gt; +    uint32_t file_align;            /* file alignment in bytes */<br>
&gt; +    uint16_t os_major;<br>
&gt; +    uint16_t os_minor;<br>
&gt; +    uint16_t image_major;<br>
&gt; +    uint16_t image_minor;<br>
&gt; +    uint16_t subsys_major;<br>
&gt; +    uint16_t subsys_minor;<br>
&gt; +    uint32_t win32_version;         /* reserved, must be 0 */<br>
&gt; +    uint32_t image_size;<br>
&gt; +    uint32_t header_size;<br>
&gt; +    uint32_t csum;<br>
&gt; +    uint16_t subsys;<br>
&gt; +    uint16_t dll_flags;<br>
&gt; +    uint32_t stack_size_req;        /* amt of stack requested */<br>
&gt; +    uint32_t stack_size;            /* amt of stack required */<br>
&gt; +    uint32_t heap_size_req;         /* amt of heap requested */<br>
&gt; +    uint32_t heap_size;             /* amt of heap required */<br>
&gt; +    uint32_t loader_flags;          /* reserved, must be 0 */<br>
&gt; +    uint32_t data_dirs;             /* number of data dir entries */<br>
&gt; +};<br>
&gt; +<br>
&gt; +struct pe32plus_opt_hdr {<br>
&gt; +    uint16_t magic;                 /* file type */<br>
&gt; +    uint8_t  ld_major;              /* linker major version */<br>
&gt; +    uint8_t  ld_minor;              /* linker minor version */<br>
&gt; +    uint32_t text_size;<br>
&gt; +    uint32_t data_size;<br>
&gt; +    uint32_t bss_size;<br>
&gt; +    uint32_t entry_point;           /* file offset of entry point */<br>
&gt; +    uint32_t code_base;             /* relative code addr in ram */<br>
&gt; +    /* "extra" header fields */<br>
&gt; +    uint64_t image_base;            /* preferred load address */<br>
&gt; +    uint32_t section_align;         /* alignment in bytes */<br>
&gt; +    uint32_t file_align;            /* file alignment in bytes */<br>
&gt; +    uint16_t os_major;<br>
&gt; +    uint16_t os_minor;<br>
&gt; +    uint16_t image_major;<br>
&gt; +    uint16_t image_minor;<br>
&gt; +    uint16_t subsys_major;<br>
&gt; +    uint16_t subsys_minor;<br>
&gt; +    uint32_t win32_version;         /* reserved, must be 0 */<br>
&gt; +    uint32_t image_size;<br>
&gt; +    uint32_t header_size;<br>
&gt; +    uint32_t csum;<br>
&gt; +    uint16_t subsys;<br>
&gt; +    uint16_t dll_flags;<br>
&gt; +    uint64_t stack_size_req;        /* amt of stack requested */<br>
&gt; +    uint64_t stack_size;            /* amt of stack required */<br>
&gt; +    uint64_t heap_size_req;         /* amt of heap requested */<br>
&gt; +    uint64_t heap_size;             /* amt of heap required */<br>
&gt; +    uint32_t loader_flags;          /* reserved, must be 0 */<br>
&gt; +    uint32_t data_dirs;             /* number of data dir entries */<br>
&gt; +};<br>
&gt; +<br>
&gt; +struct section_header {<br>
&gt; +    char     name[8];               /* name or "/12\0" string tbl offset */<br>
<br>
Why 12?<br>
<br>
&gt; --- /dev/null<br>
&gt; +++ b/xen/arch/riscv/include/asm/riscv_image_header.h<br>
<br>
Is this file taken from somewhere else, kind of making it desirable to keep<br>
its name in sync with the original? Otherwise: We prefer dashes over underscores<br>
in new files' names.<br>
<br>
&gt; @@ -0,0 +1,54 @@<br>
&gt; +#ifndef _ASM_RISCV_IMAGE_H<br>
&gt; +#define _ASM_RISCV_IMAGE_H<br>
&gt; +<br>
&gt; +#define RISCV_IMAGE_MAGIC &quot;RISCV\0\0\0&quot;<br>
&gt; +#define RISCV_IMAGE_MAGIC2 &quot;RSC\x05&quot;<br>
&gt; +<br>
&gt; +#define RISCV_IMAGE_FLAG_BE_SHIFT 0<br>
&gt; +<br>
&gt; +#define RISCV_IMAGE_FLAG_LE 0<br>
&gt; +#define RISCV_IMAGE_FLAG_BE 1<br>
&gt; +<br>
&gt; +#define __HEAD_FLAG_BE RISCV_IMAGE_FLAG_LE<br>
&gt; +<br>
&gt; +#define __HEAD_FLAG(field) (__HEAD_FLAG_##field &lt;&lt; RISCV_IMAGE_FLAG_##field##_SHIFT)<br>
&gt; +<br>
&gt; +#define __HEAD_FLAGS (__HEAD_FLAG(BE))<br>
&gt; +<br>
&gt; +#define RISCV_HEADER_VERSION_MAJOR 0<br>
&gt; +#define RISCV_HEADER_VERSION_MINOR 2<br>
&gt; +<br>
&gt; +#define RISCV_HEADER_VERSION (RISCV_HEADER_VERSION_MAJOR &lt;&lt; 16 | \<br>
&gt; +                              RISCV_HEADER_VERSION_MINOR)<br>
&gt; +<br>
&gt; +#ifndef __ASSEMBLY__<br>
&gt; +/*<br>
&gt; + * struct riscv_image_header - riscv xen image header<br>
<br>
You saying &quot;xen&quot;: Is there anything Xen-specific in this struct?<=
br>
<br>
&gt; + * @code0:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp=
;&nbsp;&nbsp;&nbsp; Executable code<br>
&gt; + * @code1:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp=
;&nbsp;&nbsp;&nbsp; Executable code<br>
&gt; + * @text_offset:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Image load=
 offset<br>
&gt; + * @image_size:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; Effec=
tive Image size<br>
&gt; + * @reserved:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp; reserved<br>
&gt; + * @reserved:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp; reserved<br>
&gt; + * @reserved:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp; reserved<br>
&gt; + * @magic:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp=
;&nbsp;&nbsp;&nbsp; Magic number<br>
&gt; + * @reserved:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp; reserved<br>
&gt; + * @reserved:&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&n=
bsp; reserved (will be used for PE COFF offset)<br>
&gt; + */<br>
&gt; +<br>
&gt; +struct riscv_image_header<br>
&gt; +{<br>
&gt; +&nbsp;&nbsp;&nbsp; uint32_t code0;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint32_t code1;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint64_t text_offset;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint64_t image_size;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint64_t res1;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint64_t res2;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint64_t res3;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint64_t magic;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint32_t res4;<br>
&gt; +&nbsp;&nbsp;&nbsp; uint32_t res5;<br>
&gt; +};<br>
&gt; +#endif /* __ASSEMBLY__ */<br>
&gt; +#endif /* _ASM_RISCV_IMAGE_H */<br>
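(As an aside, maybe useful when annotating the assembly against the struct: with natural alignment this layout is padding-free, 64 bytes total, with magic at offset 48. A quick check, sketched in Python rather than C:)

```python
import struct

# Field list mirroring the quoted struct; "<" disables padding, which
# matches the natural layout here since every field already lands on
# its own alignment boundary.
fields = [("code0", "I"), ("code1", "I"), ("text_offset", "Q"),
          ("image_size", "Q"), ("res1", "Q"), ("res2", "Q"),
          ("res3", "Q"), ("magic", "Q"), ("res4", "I"), ("res5", "I")]

offsets, pos = {}, 0
for name, fmt in fields:
    offsets[name] = pos
    pos += struct.calcsize("<" + fmt)

print(pos, offsets["text_offset"], offsets["magic"])   # 64 8 48
```
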
> --- a/xen/arch/riscv/riscv64/head.S
> +++ b/xen/arch/riscv/riscv64/head.S
> @@ -1,14 +1,150 @@
>  #include <asm/asm.h>
>  #include <asm/riscv_encoding.h>
> +#include <asm/riscv_image_header.h>
> +#ifdef CONFIG_RISCV_EFI
> +#include <asm/pe.h>
> +#endif
> 
>          .section .text.header, "ax", %progbits
> 
>          /*
>           * OpenSBI pass to start():
>           *   a0 -> hart_id ( bootcpu_id )
> -         *   a1 -> dtb_base
> +         *   a1 -> dtb_base
>           */
>  FUNC(start)
> +
> +efi_head:

Why is this ...

> +#ifdef CONFIG_RISCV_EFI

... ahead of this?

> +        /*
> +         * This instruction decodes to "MZ" ASCII required by UEFI.
> +         */
> +        c.li s4,-13

IOW RISCV_EFI ought to be (made) dependent upon RISCV_ISA_C?

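(For readers wondering about the trick itself: C.LI with rd = s4 and imm = -13 happens to encode to 0x5a4d, i.e. the bytes "MZ" in little-endian order, which is the DOS/PE signature UEFI loaders look for. The encoding can be double-checked by hand, per the RVC instruction format; s4 is x20:)

```python
# Hand-assemble C.LI rd, imm (RVC quadrant 1, funct3 = 010):
#   [15:13] funct3, [12] imm[5], [11:7] rd, [6:2] imm[4:0], [1:0] op = 01
def c_li(rd: int, imm: int) -> int:
    return ((0b010 << 13) | (((imm >> 5) & 1) << 12) | ((rd & 0x1f) << 7)
            | ((imm & 0x1f) << 2) | 0b01)

insn = c_li(20, -13)                           # s4 is x20
print(hex(insn), insn.to_bytes(2, "little"))   # 0x5a4d b'MZ'
```

Which also illustrates the point above: the byte pattern only exists if the assembler may emit the compressed form.
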
> +        j xen_start

Doesn't this then need to be c.j, to be sure it fits in 16 bits (and
raise an assembler error if not)?

> +#else
> +        /* jump to start kernel */
> +        j xen_start
> +        /* reserved */
> +        .word 0

What struct field does this correspond to? Or if not a struct field,
why is this needed?

Also I can't help the impression that the resulting layout will be
different depending on whether RISCV_ISA_C is enabled, as the "j"
may translate to a 16-bit or 32-bit insn.

I wonder anyway what use everything from here to ...

> +#endif
> +        .balign 8
> +#ifdef CONFIG_RISCV_64
> +        /* Image load offset(2MB) from start of RAM */
> +        .dword 0x200000
> +#else
> +        /* Image load offset(4MB) from start of RAM */
> +        .dword 0x400000
> +#endif
> +        /* Effective size of xen image */
> +        .dword _end - _start
> +        .dword __HEAD_FLAGS
> +        .word RISCV_HEADER_VERSION
> +        .word 0
> +        .dword 0
> +        .ascii RISCV_IMAGE_MAGIC
> +        .balign 4
> +        .ascii RISCV_IMAGE_MAGIC2
> +#ifndef CONFIG_RISCV_EFI
> +        .word 0

... here is when RISCV_EFI=n. You add data which wasn't needed so far,
and for which it also isn't explained why it would suddenly be needed.

I also can't bring several of the fields above in sync with the
struct riscv_image_header definition in the header. Please can you
annotate each field with a comment naming the corresponding C struct
field (like you do further down, at least in a way)?

> +#else
> +        .word pe_head_start - efi_head
> +pe_head_start:
> +        .long   PE_MAGIC
> +coff_header:
> +#ifdef CONFIG_RISCV_64
> +        .short  IMAGE_FILE_MACHINE_RISCV64              /* Machine */
> +#else
> +        .short  IMAGE_FILE_MACHINE_RISCV32              /* Machine */
> +#endif
> +        .short  section_count                           /* NumberOfSections */
> +        .long   0                                       /* TimeDateStamp */
> +        .long   0                                       /* PointerToSymbolTable */
> +        .long   0                                       /* NumberOfSymbols */
> +        .short  section_table - optional_header         /* SizeOfOptionalHeader */
> +        .short  IMAGE_FILE_DEBUG_STRIPPED | \
> +                IMAGE_FILE_EXECUTABLE_IMAGE | \
> +                IMAGE_FILE_LINE_NUMS_STRIPPED           /* Characteristics */
> +
> +optional_header:
> +#ifdef CONFIG_RISCV_64
> +        .short  PE_OPT_MAGIC_PE32PLUS                   /* PE32+ format */
> +#else
> +        .short  PE_OPT_MAGIC_PE32                       /* PE32 format */
> +#endif
> +        .byte   0x02                                    /* MajorLinkerVersion */
> +        .byte   0x14                                    /* MinorLinkerVersion */
> +        .long   _end - xen_start                        /* SizeOfCode */
> +        .long   0                                       /* SizeOfInitializedData */
> +        .long   0                                       /* SizeOfUninitializedData */
> +        .long   0                                       /* AddressOfEntryPoint */
> +        .long   xen_start - efi_head                    /* BaseOfCode */
> +
> +extra_header_fields:
> +        .quad   0                                       /* ImageBase */

This is quad only for PE32+, iirc. In PE32 it's two separate 32-bit
fields instead. The overall result may be tolerable (a data RVA of 0
can't be quite right, but we may be able to get away with that), but
it will at least want commenting on.

And anyway - further up in the RISC-V header struct you use .word and
.dword. Why .long and .quad here? That's at least somewhat confusing.

> +        .long   PECOFF_SECTION_ALIGNMENT                /* SectionAlignment */
> +        .long   PECOFF_FILE_ALIGNMENT                   /* FileAlignment */
> +        .short  0                                       /* MajorOperatingSystemVersion */
> +        .short  0                                       /* MinorOperatingSystemVersion */
> +        .short  LINUX_EFISTUB_MAJOR_VERSION             /* MajorImageVersion */
> +        .short  LINUX_EFISTUB_MINOR_VERSION             /* MinorImageVersion */
> +        .short  0                                       /* MajorSubsystemVersion */
> +        .short  0                                       /* MinorSubsystemVersion */
> +        .long   0                                       /* Win32VersionValue */
> +        .long   _end - efi_head                         /* SizeOfImage */
> +
> +        /* Everything before the xen image is considered part of the header */
> +        .long   xen_start - efi_head                    /* SizeOfHeaders */
> +        .long   0                                       /* CheckSum */
> +        .short  IMAGE_SUBSYSTEM_EFI_APPLICATION         /* Subsystem */
> +        .short  0                                       /* DllCharacteristics */
> +        .quad   0                                       /* SizeOfStackReserve */
> +        .quad   0                                       /* SizeOfStackCommit */
> +        .quad   0                                       /* SizeOfHeapReserve */
> +        .quad   0                                       /* SizeOfHeapCommit */

All of these are again 32 bits only in PE32, if I'm not mistaken.

> +        .long   0                                       /* LoaderFlags */
> +        .long   (section_table - .) / 8                 /* NumberOfRvaAndSizes */
> +        .quad   0                                       /* ExportTable */
> +        .quad   0                                       /* ImportTable */
> +        .quad   0                                       /* ResourceTable */
> +        .quad   0                                       /* ExceptionTable */
> +        .quad   0                                       /* CertificationTable */
> +        .quad   0                                       /* BaseRelocationTable */

Would you mind clarifying on what basis this set of 6 entries was
chosen?

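(For context: the PE/COFF spec defines the optional header's data directories in a fixed order, and the quoted code emits exactly the first six, truncating the array there via NumberOfRvaAndSizes. Whether stopping after BaseRelocationTable is deliberate is what the question above is after; the spec ordering is:)

```python
# Data-directory order per the PE/COFF specification (the quoted
# "CertificationTable" is the spec's CertificateTable).  Each entry is
# an 8-byte (RVA, Size) pair, hence the "(section_table - .) / 8".
PE_DATA_DIRECTORIES = [
    "ExportTable", "ImportTable", "ResourceTable", "ExceptionTable",
    "CertificateTable", "BaseRelocationTable", "Debug", "Architecture",
    "GlobalPtr", "TLSTable", "LoadConfigTable", "BoundImport",
    "ImportAddressTable", "DelayImportDescriptor", "CLRRuntimeHeader",
    "Reserved",
]
emitted = PE_DATA_DIRECTORIES[:6]
print(len(emitted), emitted[-1])   # 6 BaseRelocationTable
```
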
> +/* Section table */
> +section_table:
> +        .ascii  ".text\0\0\0"
> +        .long   0
> +        .long   0
> +        .long   0                                       /* SizeOfRawData */
> +        .long   0                                       /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_CODE | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_EXECUTE                   /* Characteristics */
> +
> +        .ascii  ".data\0\0\0"
> +        .long   _end - xen_start                        /* VirtualSize */
> +        .long   xen_start - efi_head                    /* VirtualAddress */
> +        .long   __init_end_efi - xen_start              /* SizeOfRawData */
> +        .long   xen_start - efi_head                    /* PointerToRawData */
> +        .long   0                                       /* PointerToRelocations */
> +        .long   0                                       /* PointerToLineNumbers */
> +        .short  0                                       /* NumberOfRelocations */
> +        .short  0                                       /* NumberOfLineNumbers */
> +        .long   IMAGE_SCN_CNT_INITIALIZED_DATA | \
> +                IMAGE_SCN_MEM_READ | \
> +                IMAGE_SCN_MEM_WRITE                     /* Characteristics */

IOW no code and the entire image expressed as data. Interesting.
No matter whether that has a reason or is completely arbitrary, I
think it, too, wants commenting on.

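(One datum that may help when commenting this: a COFF section header is 40 bytes, which is where the "/ 40" divisor in the section_count computation below comes from. Quick tally of the fields quoted above:)

```python
# COFF section header layout: 8-byte Name plus the fixed fields,
# 40 bytes per entry in total.
fields = [("Name", 8), ("VirtualSize", 4), ("VirtualAddress", 4),
          ("SizeOfRawData", 4), ("PointerToRawData", 4),
          ("PointerToRelocations", 4), ("PointerToLineNumbers", 4),
          ("NumberOfRelocations", 2), ("NumberOfLineNumbers", 2),
          ("Characteristics", 4)]
size = sum(width for _, width in fields)
print(size)   # 40
```
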
> +        .set    section_count, (. - section_table) / 40
> +
> +        .balign  0x1000
> +#endif/* CONFIG_RISCV_EFI */
> +
> +FUNC(xen_start)
>          /* Mask all interrupts */
>          csrw    CSR_SIE, zero
> 
> @@ -60,6 +196,9 @@ FUNC(start)
>          mv      a1, s1
> 
>          tail    start_xen
> +
> +END(xen_start)
> +
>  END(start)

I don't think you addressed my function nesting comment here either.

> --- a/xen/arch/riscv/xen.lds.S
> +++ b/xen/arch/riscv/xen.lds.S
> @@ -12,6 +12,9 @@ PHDRS
>  #endif
>  }
> 
> +PECOFF_SECTION_ALIGNMENT = 0x1000;
> +PECOFF_FILE_ALIGNMENT = 0x200;
> +
>  SECTIONS
>  {
>      . = XEN_VIRT_START;
> @@ -144,7 +147,7 @@ SECTIONS
>      .got.plt : {
>          *(.got.plt)
>      } : text
> -
> +    __init_end_efi = .;

Why does the blank line disappear? And why is ...

>      . = ALIGN(POINTER_ALIGN);
>      __init_end = .;

... __init_end not good enough? (I think I can guess the answer, but
then I further think the name of the symbol is misleading.)

> @@ -165,6 +168,7 @@ SECTIONS
>          . = ALIGN(POINTER_ALIGN);
>          __bss_end = .;
>      } :text
> +
>      _end = . ;

Interestingly an unrelated blank line suddenly appears here.

Jan
CZDAFhGgPN2ixqArJEACJEACJEACJEACl0menv/i3o290WjvrZPPt6Dhno1H4bP/8HQLvJnfhS9O
9mMAR5P5C7OEETh99J1XRqPR9aPJuaVd8o3tGmiXHOYFuX96cisO7zGH9wU1AaslARJYisAS8vTL
s08/PL77nRvXvxrnwb1Xv7F/591Hz3+7lEM9hSdHsaLRVijCgfJ0pgpUO+NnPaGv4dBMx1ZU5+nD
JINT44U/r7x248Z37t5/ODndLk03Sd850Ne9V298c//O4fHjT88aSAzj6OrogIsZaJ+fvIVfPg3s
rZPL+f2v0VnaSWuZSBeRp+fPxtdHo9Fr48nv2p4ylQRIgAQ2RmBBeXr2i/v7SZXimTxuv3L7wadL
JJMBelIC6sL+CUaTBagdfcZYlKcU7kifXR3UfDU2VydP8wkpNIY2g3RfQx+4d3cOO2doT7WmMgux
T2Tj4/QkCvbgumXcUTAexq35I82vFcm21oLp2nXM1bqymIK1TKPKtM5dGnczWqOgeSiGgOfpQw3f
38+K9sUbCL885FtEjuZ29PfIpjGjgYvNjCiNVSUWaowjVsa2+n96chQfCGsmdpCUetPgx9inz8Zq
FgtOxuZDIzR1fqrZgEwCpR0gGEU/tYj2Sa0d2iu7ConREHQncLeVLfOXo9YJXew+NDBYrJ6Wy2z1
pXzQsiBxiivF/oqweKItonO0gQ19xpoAPCu6rhyJXUiBB7NqJ5NMY8c4OJNF81mzttqo7Nj6dQ5H
xDDIzgXZcd6Go6GB8tlR/LQTGHYqP8Sy4VKeih7NXzPKo+FG1eJOgPSdpGhx95UD3HaBt0eWm3m0
3e0WJtdXPRB/KMc47Zq+XE9I0srRs4k05LQxEirKnMVHmHwANQQueUJ3HUIgTI/aRcPCgZ0vihke
oiwn/350zo3Qi3SecSZdttADx/Y8K3a2OMS0q8eTl9ITXLrdAcrVqQMWRrdzA73FgfnFSds33zoA
EB3z9FzDdfYrPN+FCCS6PIEEPzXwIkLHoTzXI1ivQ+B85w2Guoqxb55MjnxfSmDRBztj+nnVe+Kr
vFJ7q5OnepoRPLl3lst1eDJwIHORukeavvGHsLwrjgfCtjuae7+MLhhsOKtmP92QqK1VdWlCNXSt
L7quJhUhOlw5FlPiLc5QzqzWJX8hxux8yOB2IVuaKJEA1t4fuNixc0Y8Q+cA81xTG2mezjGQYhss
uECKbI3d3NDYsi6jpyEMXVvgTIHlwHIXMWeqHAVqC+1IWtkZsnulhQ4UQZ7mB2ps3VQXzNxyqV9a
c5IUpKrTpteKU34IxFHqcEwyll236EJwupoc2Ykz1JA5hF37rys91AZnX+HsRrqvSw8NhGzV6wZ0
0ZQkKThsc0P3zDlqbioiUpojf2zdVPWla6BitdtJUtfi2g1im4LbPa1mXhXN191XpRu7039fM6n5
nMd1JzkMfhY++BmsbOVO1Lkurb1QGzkZtsCNlOpSchNDmbhZHupGN7QH1hygUgAokcIs7cZg9qoT
FBiVzdJat7ctwsEY+AZjMBzy/tg8DJCH9itfe82qTtFA0b20igkn3+xh6JOt850aSn/FeZwH3Gka
82aXIN6cQRIHVJcLXJGt1clTVDnWP0Lz5BXzuIU5DaMVkZSiH7gBFgaJXejX8q64JtpfdzQbl2mi
/KS68nztdEaw6KxZHdUG9LNi/GfjjWkxuxcsyi6MkPIcL5bzJ/VgtB/0BALEEVgad+T7Ay8gyG75
iaeoBNk3ejOxIhgSAGOYFOKrx9p5Q6oEaJ90mqxjiQbqKGxalAzIKu2Wljv7KszdXbVj2eh6schq
7g0dR7h6KkunsDgapQlc39dL+VHxFBKnunysykY8wYkyYAI54vtepJz+z1NwTLDo0nGjLd2y+pSV
SqHSwoA2Ul9MvGaHh0JOJvIf7KIxtUzJgzr1fBceDs9Y3q2PhqVTWBwN4hIaSKSqPdevNxBre8n9
xHjU0uOLpbTqFLvu5tjyVtF8mVvMktticPP5mzHSYrN44tsaYJYCxc2fGXL0qBt18tBp6JkEGqNY
7OjkXNaewa1+mJdnAakr+Z86VvIqN4q6k1Oyw92gtFT8m8uGfV9jqjjO8705lZiN92g9+4PnLDh1
Sobq47tKy8+6z1TiMharJnzoeCEHeCgBlh/XnaLJ0oL7riX1oQLRU09KxDPmwOosjquysQF52upA
NT7XoaEf1A0cyqosU+OueGXdHc3GGx1Xi0oXsZGm/SYddNa0QOOvVSQbOiYlXzbut4MNKxUtlmXh
JBF0mPkGXqF9BeXHUipVGl9KnponHSjEAxxyYdauEqvCdQeQSNsiNQarqIFk49QSKgJosl9myxNo
p2U893hXrXdNjlzrQ4TgYUjFhkv+xFO1WNauDuWrTZSnTuLoqlvOIA/p5xdIBS2bV+vSFogelTUt
TyzSom8X7kHXDUcK+PnLQN0tC0u66y0MaqNUNJPPzFuhaU29f327S9YyxVXXOI2V5su2cwuice1T
lleDZpWNvEwetGzVkPDUlPWEysko99sjq178ztyi87ktBjZfx/RVtwLAxJ4WqwU3MmQ71Is6OJDv
pQmFJIoOAuXkUFzaKmuPPoT/y0Pgs1UaRncdO1jBzWIoicG8Sppnrcb3N99MkU9NFevK27lsSOvx
FnJ2+pbHe6zBUVKXJBHnc93OTjW2oPY4I5XdoNNzQNc7iqszRcMLSYKum3JYimzkE2LRoPFELD2x
S4R0VHjFktcsT8su2E3PdSnoqdacjaLQmVzxKqs7mo1XfSIXzDOIlHWLCuEsOEQrqNyREFz+bHyY
PMXRJWWjEHRBuZUksJ+DzbHliOSol4yCNFWXs6WiLvCi9r5m0pqLIjG5magl5G/bcisuVIpSEvK0
jThoUqJwRkrFyRFMFZZ7avniZF9aajL2rS8G0qcw6763SJbsD3R1Ldz6m9VnVCEiawppktSMv7Lf
uAQsa3LV4uvKL+7juMgxQgduRWlpmY9vbsmAbHE7FIb+AHVlB6yGQRtgLeUvU7IDPXMO1FXK03it
v/i2EL9giB7NrwZzF/2TEpXGLRdf5VDpZKw/uwr+yGbhOXALGaEtykOFobgL+SXBdsUrUyRmOc3A
Ytl9DRZvdQYrPS8cbnrRylPaSQVrXC6lo5QULg+VfCz2wadL73bRb2E3W05BQNXZK2+tySkkltag
oqJQzlnkwV3crijFLwNCOE8R4HxRn9/NtUt6HV2douW9S659vYflITVQ/K2yWdW2EUsUu8mMBVLZ
Keq5qrvrlqdx+Tr3MLwd3jG1lpDUPHKqOTTfLOyuvfa3X59xVWNSb378BUaCOJPnRKko31ogvcqt
CLqY4kWK8dE+6stihQkqSmWLFKnChqgEos64kGVcWbZsweUx3wBvMGhfSaWgydPQCl2B59NJsuop
iewOjV7eF9+dKFOz+xqQDEMIxb3kBdUCbOCWGzeEZv1QGxqMh+oEo5oNEemZMiPVK5KYLW2X1k5P
bu2fPBz39BA0W/gv/kC/9f67x0oSKPlTytNwqRdXSdP726//yR+ApqlLhbXShmaV6SKAzS0ljoFi
KCIC31w4RXQhG5wbfLecYi9Cix44Vu1b3zVlnF7MZywl2zbWJNL06KGkQ4zogmx7N1opMOIks/Ux
N+eA2UqeTq+FZdG8Sir6Um7P+IPrfwTS05ZUp7raLRstzerl6cyRJb5Bb+zvqyEnSMxm8zlo1fSV
WyEe0v7m6cVW1vEFkCNKnzlP7/Dsl1gILTuEgO8eoVPlyb+qHZoT+1g/Ol9F7oFgLGy6tnBVBwsK
P4yj3HXdsIJSXaCKWl2lcqzT25wTakn51Tc8y4sxlzPs7penTue/jLv80BW66rpWGGK2DJlGq/mA
xSofSjvooWxnsHa+K+wFsHlKQc4ZUZxA1BoGlfMMq66o/fLvrl2eaicW3SOfPJ49vNwSvpe0ukgy
pdIhGorDMncFNN9lXDurGtRJMA6k7GroHDHTrZPTbE3STdthhbYdvMpms6tqXDLodiqV+rSdw6B2
/+7SNDuLY+6tT2ZTNlqn1VBQtZqEkz7jZ1KXCq/UEOmYC7w8VwXPwU87AWsswYjW2ExsXDsLVqED
BJjJHXAyYQt/4slM8vS+JCu3mtpU3zKN9GIgbd8uy0AMXI0+Bc5l66O7QdlEb+UNL9ZwKU/uaZKg
rgYCRZ9JBRpCM8gaUDBJ6MD9iKJpYDcrG7niDzcARMUjNUHHK3tvGUJyLP7RHhL7pI/OfdtMYii1
dXtQB5PKJDZffxulVw6JURgUhcNq0EGWRCjiYppXnnbPOWC2IU/jNw38UpFSYHW8sVgeWrMhbUt5
aqhx+INHsumar+BWiNeoUNWoDi5vD3qRm758Vx9PimEFs8f+w1NwA0aiVQSZ80WwPMZzs2K7d8wt
YrQ7W6v2yo1oGXwOOfxAwCrKU4MZ9G1h04icCZ7JKEizVrQM8eps5k+yg/pkc853QLK3GBE0gfMN
xnjwqgTYnjwhlsHTgrRbPsk1F0EMrNoXl4qO1xDQ2sHtfGd24ka0kF6LKZmxa+WGvnUy0a9J2JQ+
QB19oU60U9R5lXYXkKdXKfwlY+m5bruk5Z0rfvqwXGO+Aghkhu2RNauPENfM1rS9eqdpsSKwprYr
zFbVMoEESIAEtoYA5ekSTfHFyX57EWsJm7tadHLUu8p4KbGU6wHrD6LQH+vYXX8QrCEvYK+jBc0m
QZMACZDA9hKgPN3etqFnl5sAXuHaUCSmPNa3saFIdrua9TUfWt5txoyeBEhguwlQnm53+9C7S0kg
3uq0+fVgFB9r2r6U7XHZnF5T2xVmLxsV+ksCJLBLBChPd6m1GSsJkAAJkAAJkAAJbD0BytOtbyI6
SAIkQAIkQAIkQAK7RIDydJdam7GSAAmQAAmQAAmQwNYToDzd+iaigyRAAiRAAiRAAiSwSwQoT3ep
tRkrCZAACZAACZAACWw9AcrTrW8iOkgCJEACJEACJEACu0SA8nSXWpuxkgAJkAAJkAAJkMDWE6A8
3fomooMkQAIkQAIkQAIksEsEKE93qbUZKwmQAAmQAAmQAAlsPQHK061vIjpIAiRAAiRAAiRAArtE
YD3ytOfXxr842R9t/sceO5pU/ByNZvqzvM9S0dZE3QGjSJ4cjUZHkyJxXbs9HWa1VfY25dwhi7XR
+NlAF09Pbo32H54OzD07W28sncWXQb1YjZ2uXOyB3uYILTsKjSu94tbJjGbro9pbUReD+bpWl5Wh
6XP3/CGGVxXCIpPnQsyHBLWyPKv3cC2N2IwX54FFWqdpdNnEQeN02Up8+Z5RP7vzz9MBgrU4HXkP
rvjeDstTHGP9rTw8Z5edrRnDXQ7W6Zub7KbTac84rz1bJqW3KecOOcwaV0yeCoTGJ3y56qW3TLNc
RNme08NkDN86Bp32+jpwT0Xdcc/XtbrtDDsyd88fYnZVISwyeS7EfEhQc+URzxuf8B119R6upRGb
8eI88D/Ze59X2bLsvjP/ipj3/A0FhRF4UDVSzjrBAxU9ufAKJdTAhRtMYlMUQinJhr6EBqJIUCce
qNNgQ1wQ9QTqSg3aZJIWjstrSHUheDLVpOA5uRSSE1HGZdommrX3Xmt91zpnnzhxI278ut+gqHvO
PnuvH5+19t4rzjnx8jHRGRW6b+Osebqvkjh+YtZvT/75CRCWo2jBlZ894/J0Irc2MSFwNl55Prh7
T7rYPdzdhDtSU7Fwk6aOZkqYDOWTurzZzF+Pphz1a5O+eLd0NBPUZpNjtNlsHqcxGXAup/1wPMLN
Kap9RSdDkU164sw/vp/ZwTkWjCT8nGFz+oxk1GMsnFY1K4hTiTotHq6OuANXH394eCaPt2XOyL1g
znb2qWjP8fDEfViejgaA5elm1mI3Cm9GY94J9prnRd9MCZNT/UldZnk6Iy+O2aW/PUwmybiJU+nX
VzQu6wit2aQnzvwjeJRUZAfT5dHTvCiNdnpc40hGPcbCaeWzgjiVqNPi4eqIO3D18YeHZ/J4W+aM
3AvmbGefivYcD0/cZ7fyVCawftJbdDI32me5jmGDUTere3j3tHKXFnmtor7nCJ3DW30g39olwPrp
vdmJfWxgqb10ZLiN1x4027UypOWH1Kz1E3339izKgotA6rH8f/mU19rc6/CWG0hWPk2kmNQ+N3fr
+FIjjArS1Bo0pt4kA+Hr2/Z+ZF3s3DDos2mjmgH+dLtKBtv8kiqvVZrarm/1TQ8EgaOvw2JujCaS
x6uFErIC3q+tLpulILaTXU1aGwH9PdNM2qA8BRvgUXLpDxHsYo9TSYbBqBh3MCzPTTAvH0rooxy9
e+qKHOykASAavXaqkGYJHfb3S9g/pMTsRBrMmmZjkFzgp6zADp7ecU4VUL3pCTDCPIoBtdRKkuNd
7XFjyn16nWIOuSnG2bTYY7IHP+r7OaqzzilzATNTqaze1vGeSznZTD4SmI6vDdEnFcCn7S8pmjor
fwZbiS5K7XFBdKqogAnlOenKh0eOwq5JYt/cPbioONe8fepHC0Dvdh1cwyirZJA5vuHG6WymygGQ
jIk6jI60OENQmlIRjF/crF77jiY/B7F39Ku0GgR1RAyqekFXcH+DWzzqxfUE24OzmhX2wrmY6iYJ
25JO1QZA7QtCjnh0VpJ/SwJUgwC7hyws9ZgeVSkUVJWJhaC44ATco+T92ZzuVJ6ul7aXl5BYMMR/
S50WLSw327Fi1bSoPW1gXHnrEl9VSJCs2/2yNmI6Ptwtdb1DtCUnos0ekppb2N2PZaB5p5uNeiED
1YX4GkDg4NJ0LtWWOqOaVZq1cGoWolNB8hC+T55geRjl9mCflqzqrF+SsS62dFOSIRxoTHVNI1Wm
lkJz7XIUJNhXgtGBIt9oBzOCyBzNTq7WlHOBhb/6JS6PHZel0GwAtWBb8EhTFLr6elQag96aYFvj
HnkWCdERDWJZmhVmyIHm/nhQorWDGPlurcN3nwhI2NI7oCsWqiORklItmak21FJMnS2R8rUoEAuS
2+5lzIPvEFap+iEruqZi+s1WFMxrNZymmdsgEBRI+xJST3vGRINHF0aZR+h7QWotYZb1tPSJ6dRz
F7AvuoPHIWlxQHgxfe4KU1zw5auGu+QMRiq+rxI8jZdqMVEXh9BNczIYPDwZQdEs1MgKCosIRrC7
+JSEsSFi1QKXrzhBdFkLMMVODZbObrUn+FCEm0Ax1XdA5FmjY7ri3AmOlDkCxtcszWlZ9OqMaBWn
mjGpqwcQ2239Ca7qSYqyeKyLjF+qNmC73U3AiPedhV87hQRQK8pfFKVhMnSFqq4PRZHZ2dYuWz2q
teHU2QaNZ3OyU3mKVkMmJXz2zUa6hwVIGrBzoQnzYb307bYM1o0Bs0qNAAO0Kf31NNILoUWipbmu
HfRvNDvb6apFIMzG4J3Kkr+oC4/bt1I3I1jYkTCgsbM9dou0GLxc6h3TelpvY4sWnXjRhW6YwgIk
YyJGcCe7KUxwqvhAN7UOj/RcZK9dejif4qBO5oFABysR96AECa4yJLOPxQ7h2M3I7g/rdRvofjmT
dhGmUjcPoU8b5QJNx/jB0MgBvejR9ong/UFlN51GDEgLSJWCPop33UQKUxWzAqyRQxQYttiuqTjB
B5kw6vXY7EC9cBwEevh6xvTUoZO5j6jYdbKjPLcKWsEFbRW9FoVu0mrv9heF9+MbBwVF5ZJNH7lk
mw7akPKtt/KEcESt3bNtKMI3dum8ffFJ1rZaBOegWhN6IkztoH9zVmi7odMGdAcFpuj0HekwTAYM
9OKGknThPO3qTfLVncJ68WQAACAASURBVNG/IEQy4XZpQfHEyDaAwYBowlmbCyEBkj0gajTKLl96
em4POov7oHEXGsmkY53uWp5KAOzTSvj7ZVjasBqDGDePkDUe69cCE94O6nwr3OPGY4+TcCYHbJ5G
1oz24LyyDu0A8mywXWGxIpmRP7g7qlzUhceDuiQsJa14NQXVU8kqW16LAs+zmfa4lvul5KtGUNp1
gfOkr06Y2S0WZlU5qKOszyhGhTEyFbsDy4yKqsa/VGQJbfu3oS1XU8rF4LrLIi1//AurOYLSGpax
6Lf+IUxZmgz3TJZA+Ke0xw4iErR3466RNZOxkPLGsSNPErsKGktb8MjtbUdjKBpY97SWvHlsSaeR
+VvXFiykxA4xo82InAY2kaFPc8eNN//aQXTTs0LaB5+c+bMViTTgEAOKwS3Hracz2WIMPgrM/uEK
Vq+5j/XcME5oCVLFa3jYUq7J2JADJavd5W7SBslT3+37X4CHwfUWWOVCsGLCN49ivIvxjUlwLZmc
TwcohiFw7fMWH88EVZaCGPDalLHI6qjK0NzM61JLS4+ajEN3UCAetzLABLeDIj9gd0PSN0bREvWW
GrRZmHRheSqX8gdGTU8NMweXjpvVW7MZvhZmG2wIIrKBJrkeSDvS9gRIHZE2rnXWzRb50FMux5RI
GtOpiTujgx3KU8HntTn4ZnTMLwvbMMOQIB6PTgMTWA9a2oV1oVo1WrIMJzAu9JP7NORZmo1iifs+
oiLZ7JbrTDM45VJKSjgVG7zu91GifaI8xYwftUUalfz6toqqUy5IjpkNm8QwpqbGjaxNEaN1GxTl
g1jYwGASCBgcRtXdXFXHfTy0uMvDlPYBcARjW6uYAU++oO+WtHGqnbh7BxUK2rt5OHQkglJZI38h
FfUqaCxNu0+EOqwV37bZ69RQPfXvuFNDj3DJzt5NJJIbH9X67DAz2ne2YQhspOsdZmxH0VAa4sVj
dxC2xuFwM6Yc1CkwujBiKmYf67m5s01L1FlS1x5SBxcy1bp3zlqszBi88dEUW3yjIbBE6wWIgjn1
dnVjdVtelIZxVEn1r1jVm+mxJ6y3cAHsKa0+3UYyHMbp4XB2+PIlDvp9Mpc8AFiTRHeTbFJTZbhU
tW0f0jARna4jPbbRgKHe+eUphNWstoPJqWG9tLbTJJFRcCsn+y7nkI1ifC1U5jmb08/NCLR9KYAO
xtmVtqueEtIQ8eZTEHg2h/PL075vkqDDktH2nlhLYedME6LbBZTMqP3GB4ZpWTqGFpxXWV0UmO10
G4LALATOURceD5LSBcZusAqIdrvHWXSItXWh9+GgfOxQhCzv10v9elrqVD8dfPHCZSjCQenJZpyu
2G3gNXhX+7mKOMGiFDwLqj1ApQuc5lCGNyxd16AbqvLj8W6gzruG1WEYJm8JjgB20dWdSj48aKzD
O3Mz9RycjsjM/rqnI50HAmODhdgO4vVhktTr2YZY9yR6noFiam/WZMVRhWeFS8sjIIFnK5oMaNyT
SpamrXHCGLeux9YDV/uCj6XBMfYkuI505JkQMGaNMh/wFwVJCp66MTAdWoeeeQN1wRipPJb3YgDW
x8mezARNascDLSN9SlPUXpryWNc+0nlErpgXijARWDPcRVVNyBlh5gIlm9S0ij3dlQcyfxCdviMd
tsmAYXChJTgilrrMvl7gCKKgNRyWsm9tSVLqVD9NpXkxwV8TBxvcsCA9OdufDiAquKnSXH7sOeic
NKZTFXdOf+eXpxD+urJoSVSrcp8qwsjvnws7rYH0OYLul6M0vfPGXl5e39r9FWP6sLq1/5RLL9Wk
3fekYph+UxzMpRAV01Jas51wtciENQ7eNEeBOJfweLBG+8oSlBZHjIxI8PWiENbHBDPt0QgaHNGL
N2txtldHwOwYUw9TWKpkVC8uA/ggvGiDgdHZzdvV0n7UiYQDrn6uFj6ekBGXz/OycHtKCw3LQNAK
SqEDpAf0jV9e+5kJMnW+NNURe00JnEqaA6LU8lCMcUei+yIw7HDRXNzV6pVgWyi4Szk1agDKHJ+z
0S9Ip5I/lqIb/RlK7F8c1DdSpjIwJlIRggaDndFN6anyo2owFRN4tqIobRhQDa6YJl8dl+1ZRzM1
DjdjxiGDe0UcODXYxsL07GiJ8jQ0oegEjCJkmGklG/dZPPsrTMkKXyTjLKiFxe1q9RIJB6/FvRhH
W3lGZ3pZP8eWiMoJUCg4MQl8x+okWwsadXS7I+sS2gKeXzWpX94AfrRE4qK5XSWgSaYs5kBM1Jz5
CKHvSAw9bPRuT8sluPMVsgj1FkPBl57e0akhnb0qMJ/lIHraTqFztkH6N1HIedxZ0Yu0hf9wjgxv
vUdpJUs1jVFpZhKXa7maDQiun8fJDuWpRqs80sj/mFFJCLlSZkIMW0nuem25RoJ4rDigs2cqNmpE
QaPlhAqBvzXDqnaNYr0cjYQh5VCuykeyLdsZ4ypX7RNVmFDUhccT5alVkMWK9G911aWzal3e726P
rm4+LYsXylbshtluQHzpwYjYAjdVHBiKdqDhqxMyMsm7jsai+Os2JJFqUu3gcQ//hFANZfunN0Re
32W1sGjFbq4XEkO1Z5neOS8HbuHYw4eidZH/HSjQEqeSBrQOiwLBkZerB0At0kYXxGL0yHIJ/tYu
YYUtKTRmgDEAS+KcBb981pdhSMlDj/1DaMA7G+5JDomUZ43ZONgPRJdu4W1eqJPenvTOVFRnWZMW
A5pRl/loX1DVWuSgxnQh66DyV4NV6SUf00Qe0xKE4XLkGWUuAA0lp+ukmlHatTHKDsYkzqVi8Pj6
QIFQJ37TmPO85BWEtQxVdNY5WN7SD2lY7pXazvPTDalHhsIvVAvtny7C8rSVDsoqrFEuIE759m9U
qUdu5MvVOtY9emlynQxq5ERHlZtN6A5GBI+bBEVaA4x3FjD0BlwbI1glod6J7IEusdA7jOoNjS1t
RGM3cCLTbGsQoHO2QZIKxEI+q1/iSRM4nQAQABkLomLc/VbLYOEabOJJYzoFjWdzuFN5ejZW05BA
4ALyLNjLExIggSGB4T407MOWwxCASuIwAinlMgnUHwdfpu1XbzXL08sPMXe1y48hPSCBkdvVhPJE
BPJ9rydSQ7HnTuDh7mbs7vu5m/1M7GN5eoGBvl/as4/2xoU/1LhAd2gyCTxLAutbeGYnBZM+GXyW
NI7oNB83HRE2VZHAYwmwPH0suROOwxdZ8LdfJzSJqkmABHYkUF5Y1PfqWJvuSO9x3Rtzfp9/HD6O
IoEjEmB5ekTYVEUCJEACJEACJEACJLCNAMvTbYR4nQRIgARIgARIgARI4IgEWJ4eETZVkQAJkAAJ
kAAJkAAJbCPA8nQbIV4nARIgARIgARIgARI4IgGWp0eETVUkQAIkQAIkQAIkQALbCLA83UaI10mA
BEiABEiABEiABI5IgOXpEWFTFQmQAAmQAAmQAAmQwDYCLE+3EeJ1EiABEiABEiABEiCBIxJgeXpE
2FRFAiRAAiRAAiRAAiSwjQDL022EeJ0ESIAESIAESIAESOCIBPYqT9e3i8Ve/3W46/1vH5f/gvZi
Af9N7dGgyn+edFuf0YEHbyz/odTl/cHlisBungil5fpJdG42Typ8T5vRNjl+mhxALWjwQWLdE46K
th7rf553S+LhNDmI3mKYpOXL1cNWIw/eAd3Jwq93Scye8pwESIAEpgiwPJ2i88hrU9tPFDm/Zxx3
+DOxZLGlSnisVpanmRzWWHL8PMvTtXw7mfONCKcJostYdztnebobL/YmARIggSMS2Kk8zd/su2XH
XAeywLnjNo8eOFvDPiomdtB0Cffd+aYdpGey5CAyO0K6efKkNjyp8I6nc5t3sm2nzmjBoweiED1+
uLsJ9xr3Fz4/+bHn/nrVo4P/zYh6CtCd3OcIK1tWyXMSIAESOEMCLE97Qdljn5jYQdOlqY2qZ9iB
2pMlB5I6KoblacayE/ydOqOmRw9EIXqca6/9hc9Pfuy5v1716OB/M6KeAnQn99lj2cmieE4CJEAC
F0xgdnkqS6p/bu7kla1adsii3D7hPUJo7z3CC2sx9q/yG1fZkPRzu96MWRIi0DYweXRYP0EaDoc3
z6ov8v+Lxc3H/8pcktPibFAhJy4fH85WCU0xyG+42oVFe+eybVQuKury9nDjKpsC3fAxcRMukNtH
XxQORta3P1tnE92RqTeVXQL6OMnWRPtBjRSMCo96oR1fce7mSU3I5upyneqYUWnSeLO6r7kdsteN
bEcIxHsKh9t1paGxg54IByWibXiMRpaxzln8Ur3YTWNaxUP/AQEzoHr9tpy3Y7dZvbDem00JuuaQ
vq/5qNiZUAyi+yUy9YPo0GDEJeLccp+G2H9T30L21ydEdYFWY1dNqsdglaIul6G9ZotLU49gli0U
UUhIWAObeTDEgxiWxLhoDJWqcv4lARIggasjMLs8Fc/Hl07dz8pqq+usLOi2wchyDKuzQ0SB66WO
rQVoq1TCTmN9cKCLa0dtk9MNpmhXIzfrW22vm64qrfs6lEeTKqLM8isccDDvoGBhulTkxO3ZNiHZ
d80eMc94grzKyrwLljThQaDXeSOWaM8p70qU3TAx0rRPsVXOaHu11vwqRYBGR2xQe0KkLAc2IU9q
KWCIEtietNrNRgXjwsnD3XJV67moaJA286KG8P0Yxz6sbvVXO96hmNTzJRqm3+KUJ3qDEioBK3xF
l2HHMZswo1vN5zk5L3ZBYLHQdRUJflrAqvFoMNIoxlv6QfKHyVtjpN38krRrWg772KXgWquG3U50
KSN6u1raN1sE25ibEIm7qnPzWm2qFhbvbAiq5TEJkAAJXCGBvctT3Np951gv4yaHOwFQDGvxeLvL
hOuDQhmvDdfxvG1ob2wXC9GXSRVDd0LLuM1Fa7pUNiqrQfELgNhmO9OmlmIjm1PQWzR4SxYef8w+
YkmT7xIUFLRIyNAwZKjd5S+2w3Dskm9r1Tthlcb61qteGZOsbWIgf8TZyAeGdKUNEUUDx8+iZEyb
mVEL7pi0oQtVvXUop5O+dAkER1BRJgBIw5gQULkiVqE6r627FkaBsTz14doLWtBgoDHMK2uBQMha
tLy12eRLk3VuhSBOf9cCZlTL0Bi1tf7FnI9X2l3eNtMzc0xv4C/dtEAXcXBpIJ0NJEACJHBlBPYu
T7GEsjW9rL/6lE7/Ys9GMS24shPYB+92DJ6wp4ExKGaGNccW2Zbso3sS7lVl3IQKuQQ1Zel+v/Qy
JaozK+QgXcpbnSsNRjZrsRqoUictycJjjdu1ZFLmYI9MW3Iwu8tWkSQbyq2pAlZsGHxsqx7LE+Rf
xbvwvrQhIjVt+Fc89U8zJqVNcL91HkYtpoHbWd6W8RtpagJ0yM/Zm4pizBQBFVX/otd4LFeFlU69
MCoFOmfy3NiBTFQtxxbf1sfLXOzpNCYT1bq9Xd28XD2YfKCEscNjUY/Dk2FoDHiTvpK1K9LZP1Ce
xqxwmcBfbMif0dBEK3hGAiRAAtdA4OnK07zZjNHytbju/VrzebuOahWJrs7DDtoRtxZrC5uN3/zD
HTfvT/19um7haqrqgG3P9za96H/Nktrk21I9d798b/bBwyPp37UkC3/i8rTuxPolZJKtOpJoxBIn
+1UGdfME+VfxLnyMUu0zRKSmxb8l/bTaxvimtJkXNah+RtJVrJWPkkR147ln/pqF1pJKq9qOXuOx
XPUMrH3t/zGg0uh4axdBVELWp22y6gGqluO8YjhM7Ol6xxR5GjR7Hu5uyqLROrvM+i6EQk5xdO+G
hqEx0aOESGT6DWbjE6dhRgH83ZeohmckQAIk8AwIPE156nXGNEJbi+2g9k+nTQis/uMdWj/fwFqD
7T0gQS7hqfVpY/r7dH4nrAwIwwcGqMzBpp63OvcLbfPhg6OgN1mShQd/fQOuMqHzlMwBFrPTDqo8
PB0KbH5kUL6Fd4Y4nyIBTkVUuCMlBmjF05E2Vig0y+KfZCecJsnodRQRz0BCDkTt2O+QNLpcGdIl
4N3SiyIQ99IHkIYxMXmmytP+f4UhCnzSh/t1kt7crVcvG5NSp/ppmsWZqvEXPvEb4ICzuRWj78lc
OsBpZo5sgf+gmyniAQmQAAlcPYGdytO88XTX9HpnQouDUgj6L0uAqa/FKKoUFvqE8X5pd9Fw9cf+
ILAcyv4BL5KW0ybEdp22SXu3ocBhiysqm5beyq1FJ1QGqMXHlKO05aRTLP6Sig38JAhlpm6iWi0p
l6xEq7+VcZuTajydkIkWFjM8KOh1Va0387okcYhIgy1cLkFZoL8yQVEhT4ph/n5FNcAysCMtVkjt
3qHlm2NGOMVIo4r2SP+Erhc1dNyO1UeRY42pmmyXRsjUG59dAu5MLMqDa1N3T4NJyUIRvj12aIIc
R9Ulmpq6Iz/z0kuZjC4UzSTtpqAciAyEf/1g5t3TvJSJmz7FkktoW5wpkieWzyVJLIVizviSmAMq
ltQbzGKDT+RkA09JgARI4CoI7Fae1pXU3gTNe3NYnfVFurIn+JPKQA3X4rrul7Vf7nno+ts2lSJF
yx2R0ZZ47YZiqxkwEAuOuk+IuJerNfzzAtmXaRVwtVgWn0tGDmhau2cjY8qQuEPnR6vqY1EB+26S
GLqBJVV4+1eTREba0hTFmCU9mXHTHbkDXWzdzra6kEFBiWPlRRVohWarDqs7kCciUNKpfV6uHpJw
yIdYGQBY8RoAAudaChfh4R9smkib0hmEg7RQ6pmdXeY2ldS2UV+2EjADamKEf1jKruGUtMZ6oHjr
NDSzW685sYsC0YxyBSDDuw2pkE16+9BqxQw5X1YYfZo/9+5pMUxnSpm2A7PBq4gIbFveA58qYXxi
Jv4qsCZT+3cARBT4Bfp5SAIkQALXQmDH8vQi3E4b2EXY/BRGTu2jT6Hv8mXeLzvfoy7fNXpwKAKc
VociSTkkQAIk0CfA8rTP5tKvcB/dMYIPdzd4o33H0ez+LAjILV58jPMsnKaTJEACJHBsAixPj038
ePpYnh6PNTVdLYH1LbyhUV6r4HeYqw02HSMBEjgbAixPzyYUBzeE5enBkVLg8yMQ3oi1nzc9Pw70
mARIgASOSeAay9Nj8qMuEiABEiABEiABEiCBgxJgeXpQnBRGAiRAAiRAAiRAAiSwHwGWp/vx42gS
IAESIAESIAESIIGDEmB5elCcFEYCJEACJEACJEACJLAfAZan+/HjaBIgARIgARIgARIggYMSYHl6
UJwURgIkQAIkQAIkQAIksB8Blqf78eNoEiABEiABEiABEiCBgxJgeXpQnBRGAiRAAiRAAiRAAiSw
HwGWp/vx42gSIAESIAESIAESIIGDEmB5elCcFEYCJEACJEACJEACJLAfAZan+/HjaBIgARIgARIg
ARIggYMSYHl6UJwURgIkQAIkQAIkQAIksB8Blqf78eNoEiABEiABEiABEiCBgxJgeXpQnBRGAiRA
AiRAAiRAAiSwHwGWp/vx42gSIAESIAESIAESIIGDEmB5elCcFEYCJEACJEACJEACJLAfAZan+/Hj
aBIgARIgARIgARIggYMSYHl6UJwURgIkQAIkQAIkQAIksB8Blqf78eNoEiABEiABEiABEiCBgxJg
eXpQnBRGAiRAAiRAAiRAAiSwHwGWp/vx42gSIAESIAESIAESIIGDEmB5elCcFEYCJEACJEACJEAC
JLAfAZan+/HjaBIgARIgARIgARIggYMSYHl6UJwURgIkQAIkQAIkQAIksB8Blqf78eNoEiABEiAB
EiABEiCBgxK4wPL07epmcbN6e1AMQdjD6uXi5u4htJ3VyZMTGPP2frmQzzbyaJscL5b3Y9KO27a+
XSxervaN6Jm444GQRF3crg/B8ilzvnBbPFUmHMJyTFqk+SQRL1FbHCpwaO6xj/O0OlxmimRN7Kzl
2F4eVF8v0x6hRGgvd538CPYROo835ICg5hhdZvqTrVFzLNjWZ1a4D7gjbLPnKNd3Kk/XpUIZTIlZ
4A7nzZMn7iE2vMO5OyLpyQkMdM7XiD3lmOXpAOY+DYh3c8DFaE7O1+lfvqTM+aLS3JRRT/kVZY7l
24gHqtD5CRL4YuoDwNA7DIVjYLhvZiKloKVnyqW0B0r7Gf2obRfBPk69SBiZ0Yee5gcEtdlshJV/
BveegvEPdzfeVY4GBc8jwJWVZKB3F0Ep3EWg2amS9513uxh0jL6PKE/9e20zMIHrmy2Bf8RNrCT/
UYk7qTrtcOm078+prjyKwF7GphBMyHoy2yYjOGHQpV2aRj199fG+zsn54SI+Y+F+snxQX+dYrn17
f5/cSFN8CGtN2FkdHDQz96+ixtkc1MhxFdOtB8y0Wb7kfNsfbC1PB0VbWBymGcy6ejBQQiBaK6aG
OiTqShtN8XfGQjfp1fp2cfPyUcWPicVwyzE8xny7Wp7zw15zYfeDncvT5d3glhiCm7QgBX6yL1xM
8mMyQb+pw0nVaQ6n0ymxp7n2KAJ7mZpCMCHryWybjOCEQZd2aRr19NXH+zon59MOtF7iEtlT/WT5
oArnWK59e3+f3EhTfAhrTdhZHRw0M/evosbZHNTIcRXTrQfMtFm+5HzbH2yRsBq8VpQWh2kKM64e
CJTsGiO3PyOWqGuw0cxb6KZ8qhL2QwThlmL3SuvRRHH38vR+k0MO4Ip0ib199LleaPTvLpIZ+tGX
jdDE8t1FO9Q8a8lUvgOVKxiqYlvrr+0d1VUNGrCoUW+566rjHV9Uod6hyZtNeep6c/cwJiHnepnt
9Q2iqndt9Ipwd9N1NQLgV0Q3bmENk/x//O7otrsu/HLmXnQGgrqb1T28GdzsLArGtMPA9Pw3WQKe
og3VlxptjFHVK5a0780AWYzp6EUt8N3U+KA79cG6JtrIm9Ctszui2VjFoS73PaAerKrhqvgrQorY
osVzQE5NXcfZzUYsVAfuJOtsiHkcD0TsIAm9C5jX6KFq2CScCaZZsafmj9/tQAmu2nX6XLM2HGLv
L7ar0wnTXmev5pX7JRjxmsAALdgD7Tc9mNBHPGzvZO9Aw3xMOdwNHGpsE6TlDFCKd4ZwiGdU46z5
0oy3aQWhr2uLZWYzGdQNHr61LsDhdm2SN5sNHFexbYVsALsGh2m+qDLNAZ9cItM+GhSxqeqtrt18
/K/yDx5qPmBIhsfDfBNrb1Zv3dkYO2/3LbKIHQGIBlQIuAZuwgSXHw2UgqY6BdJC9KHd8jN71cJR
NAIusVxPT7fHVby+8qNV0RGlhy7XNUpakGQLWeUZ9wUV0rJF00g5qMb7ZRUokgdzqlMnyNiae0Xq
cp10oYWqxwqPeqDmtL9mFYhFd3AiYLtLP/7RY8rT5ryBBnCbTZlgdqkksc3AbuDF70LHBiKJIN9m
nc4ruWo010uTEOdPVo3yfZuvrS1OGk7xaNyFqAJETkjYMnXNFzFYPs21cqouF712KTEPnqKFAqq3
NzSq5mZ9WUcJ1Hd3VDv4qdukXSrRV5tbwVF3/YH2rp3FZrPk4W5Zi7/Qv5WYFvc6k9WMygcmcFtS
i+VBDvCJfZrS4Kt0bhrndfYyq/A0a6fmSOmpjgT15STMBUmzBqprGyy14GytTS2+dcEy5kO1pUXM
1iGgul6D37IEZ8Ew6VhscEUlK5rMcgk3hl6konnBkuKI0SvTUNNApGn4dOnXnm5kGRLWEI1aTeAg
DYcbmbapuI/B3GDtTjSCmE13oYNuMV639aeBxUFY0AIx55AW5Jix98saMpwFMW+Dm0EFWu62inwj
ViLlKxVoqcZrRFo62Wm5qrEbVxqmj2SBrIk6JIWjSLCwBo9a/thAd8SPYr49rCr/muRWHIs9Zj/G
q2SRJts2XwI9t0COxsx21IHYvOlWDCuOFwd1ChSSujicbo9La0jI5wgGL+Fx3VmUfCxLEkw4xcfr
IaYxBFFRE+6LaoijBN3MkIGwm9S8tSxyz8CkTqOItbwFU7Hd9lyXcaKjx5WnbZtp6QhzPqR4cQlb
8LjOcFuSpC/ICTRSewmVzoQY/jAshCqpDh3zHJaBHsKQrxMTD0U+TkIwuC6dwAcWr0wA0fUthFxE
W+sxZudISwqBjweraiPOQDzO2rt2Di2pgmMEB3pxfRzwAZk9vQm+e+hH7s7Mzra3hSyNjoj40NJF
XQwJV4MZzcfQoees7zHqXRCljemvMIeP7Ux1NYBTnFAOTaRBIJpwb8lR6xofzQLLoy7p5i1bE+Zm
9VZE+ZYQhtcpZpVEq2xGqjTRCiZFW9Ml9127eUumoT1G/nbUue84pvho+5NccTL5oaHmUshPEObW
5tUbTBo3A6Sk/C9XUDIcZ+N7BkPcg6K0xQz9whbRa/VBddBPc3JGNYFquJTD6qBENcbFuNlBEGRr
fpmVODB0c/m1ecSpVuVkjwB7kAjtGA7PopJROFNw1id7ZNQh97iEV05xXUJHwN9IWALhn+AIpkc3
x2BCib4gPLmPAKWvyw+jitk6Gc0HCYR80MEkX/qGiGcg1t8OTPxZHDy2PK0oKxoAl9eLFp5G0OmL
7yU2ngdD1goI5EtTjlwiW6arirXUj6pVcvubJKTTlDQq2v6OLA19CZK7IePTbDeDcdsoZsL8zwSA
iVwafKqFCWNgIAZDxV+u6SMJOemNzekOlqRIJQldO8csKeaECA71lnnY6MnVDuSu3upjmu2BUUg8
cWd2ZxHjKTE9R7qoqy0Bo8ssF1vmexy7zg4hJ1HR8XYGGdiCqznTaITEG4vFUG/BXnf9FLWu8ck2
sBwztvVSjSIN13G57IEoqm9SbTqdwL4JqQq3C0zyxnqEl4YD+zSynFYAGXFYN7xr28DCAoUGJJPk
0uAj0ByUy5YjWLvSEgFaRoISpYzJR8lwDGJFRtdgmUReSoK6MH3G/II8Ab1VAizdPfmmCuRYmxyU
TIN3gdwjUZc/ZRHr6RJfbiRpQ3yDNlx26oXslAERwwafMclBQhlVFhxcHABU0QpD3N96BR7ItJRu
y1cGBegmTE2jgC55/QAAIABJREFU0imywUt4jDViTbCQSO6a7EfIR4T4xxbh1C3sYrApVNP86jDo
Fin0os5Bv/+a8NZZCVuhCMmftnS0S3mRjNqOffb48rStDrdr3FBHFjIJW7c8tShO+Z0CE5MJZ6BE
txMqD/yIphTUdJrK0znx60s4RnnasTBhDBzE4BwLnCG9sRDcJg+jg8dJwnBgGz9mSbkUIjg23HMP
9Zaxvj6ODWyaTYvM3zGGA7E13+Z19pRwO00xWpVAWZ96EK66zGp7LS88jig2yBlCTqJCbz3BHUja
nCqmivZufwO0od5+QdY1PikAy0fMUI1j0jwQcrX8tDbFHY0P5HErVRVuF5jkjfUILw0H9mlEORML
XewINVzbStGAZNKYPaWLg4rSPQHyN1jQMhKUKGW/8tSzHaX2lMYgjvgFeRK8ixzWt4PVErW3MnTO
GuKgRoypMqd8KeVpqJ+SHS6/XshOGRBwPIlIp0lCScXl2r+t1akBJREuFLkgS0sKnOLUqxZYy4Sp
1qcZDQKTG+b44DtD2GiCXyJCwxQmizDxGyKoVI6HH03aHB1XPQw6Ghx98VFb8MLyEiXYWYlmZ/uz
Tkc82Kc8rV9oFuW3/G0qAqnmBLbgcdjeph1OgckpaDG2gyounCbVUWHoifVuE+TvSmPmRRnhLAmE
Ajenu/TUL2FpVNIFp5nAhHwwK2GEK6OxCMtQb6xYEldq6alrE9qZJYA72yyp12MEh8OhBfWWweAL
dIt64azTZyC2yo43AFRM7uzBjY5I/9CSQanA+jdcdZlNSPyuOLgBb6Iw62qjuDx6B87GDKU51ews
DIqXfIh28ZbYc6hOR6S/ACFLgDsued7VTVRTVweKMViharuoDORx+E4wwdqwbTen+jTQ6yBkuF5h
13bsxqexiAi+b0QRIT/hklub+YAWZAhj8VDkhBpLhuuqiFaB2DI+GIASe0qdg/Qe+oUtI8Lfrm7E
zvUSkwT1+vHMNcQ9QtUuZlA8+aXmS2EV6HmXYW5kpxxIx2AUVo6zhHqL8VZ+y69VVxKF0XR/q7C4
eMLAQQSBD3RL5k2Nwq7RjDgKFMkQOcVw18JRhtjXj2QPnDpe0z5BAxJSBuo2WoZmM0xemHrBr0Gk
wmRHAfEY7I8Xjn+2X3nagodruvhmK0t5lmFZO1jiJQZwFd8vRhIxe7Y8H9H77SWcsOOOJIrriIEM
MU6rmPT0vNx0XiLeJkFXk2qk4kqjUpbAqQABM8qp1RZdCycJ1EiZkLIZw/Toj43qSvRtXmHgBhLi
QCAZ3XHCUUJB5xaKNKU6yBDc4crxSAT1twuSFIDacwTn9k6dqwgMbqGkiZrnCEJD7fU4QACZ0l5p
SKNmVM/ZOhN96pVY6GQp/HWnQQsiltCtKDX+Unjp2p3cKaPG0yz1rNXbSKTQJJmd/vuwPCSgmJkw
IS3RpEA+Jolc6sBMxkZrpyYdqs5CQj7nhc4643Lqxhcmbm0MXHRk4xJixh78p1ExK+KqiM6GWIuj
XYNT5usvyTLV6FcxwzJfMsEmaaP6sHp5s7pbYrtYm7tJ75hvumJkA8CjCEFWIRUbctLaU0xh9lkK
SJpGL9IpfumKWmBBRnEDgXKxWJ6ngNoTown+ysi4pOBpk4nLiC5QeZqDqRlvWx9wy65KfbMYfAEQ
g9X44nqZIxqLsuDcLG9vfBGLk7pgbKtBpl1JxsCBHChPa9FvZkQavrqKQDTP8RbsSq/qtc4mVkjW
Ppqf0icFxQcf/2jf8rTRgV2khb/d0fYCovhWUPb+baAgJKCoIW9JllPQQ4KqB/+8y0A1amjhrxMA
BUqnlK9qTPHQsxbFTUsoa2IZ3f5RiSYkjUpZAqeVQPvnM0QQpnhbkhp/WGF9VqCpcKwQytCY2ZNj
AchyjdHB4zEJMBDshPVOLPG5lCPYFr7qKQYC9Rb/RBF0GNOrwqvzo/8hVhe7U+dKeCS41XAtKz0Q
al4MQb0eMKpMMQxyIJ6qtKININStvdqwvFdRCt82aTerLFtqMz7McgftqiekQ1NJxTztCT4Oe7Z9
SPui8Soslacp+d2M0n9ewrTgCgE0KZAfLOJytX0CTLezHgHn2rALDRCGa0jnHwUbl1wN8H+9DuZX
EQ+OjGzq6mL9l/DCtAp8spuYhGOp5SVOXc1QMhxnsWJx1+BaoarFOqPVEks8J5lmIuh18CWFcFMT
k8Y98ns3YkNLRcwokRo9CiFDLWO+BODVizCkGa0yqwHZqSAkaMEF0/0fLU/VU+DgVI+6x2W8zfAw
8f27mboVR0ln33RKn5Jj5t0gB0L2Lu/Fd+lcyNsoVdaWjhKOGP1cacjV9nm5eoBIaQ63i7DKqUAN
uo6Xv2oJiLW0rDWu9taeYPKJDncqT09kI9WSAAmQwMUQ0E3iHA0+Z9vOkVeyaVC7zHnQn2TwlARI
YBYBlqezMLETCZAACcwiEG/GzBpyvE4sT/dhPaD3dnUzflN/Hy0cSwIkIARYnjIPSIAESGAPAvdL
eL5WHmueb8kyKLD28PvZDYUHrM/OdzpMAkcnwPL06MipkARI4JoIpDe9zrc2HbzseE1ReFpf+i93
Pq1eSieB50uA5enzjT09JwESIAESIAESIIEzJMDy9AyDQpNIgARIgARIgARI4PkSYHn6fGNPz0mA
BEiABEiABEjgDAmwPD3DoNAkEiABEiABEiABEni+BFiePt/Y03MSIAESIAESIAESOEMCLE/PMCg0
iQRIgARIgARIgASeLwGWp8839vScBEiABEiABEiABM6QAMvTMwwKTSIBEiABEiABEiCB50uA5enz
jT09JwESIAESIAESIIEzJMDy9AyDQpNIgARIgARIgARI4PkS2Lk8Xd8u5PNy9VD+U37L+2l2k/+J
595/wniW5Gm9m01P+LZxh78ultys3h5e8EZAjUuWMD3Zf1zxSYU/ASaKJAESIAESIAESuCQCu5Wn
D3c3UphWB2cVkSxPa6E8XkTumyksT/clyPEkQAIkQAIkQAJnR2C38nT322aPKk8fRSmUzptT3j3N
ljzKnVmDWJ7OwsROJEACJEACJEACl0SA5enho8Xy9PBMKZEESIAESIAESODZEJhfnq6X5aXT+n83
dw+DFx+hg70AsMl3T9urqyJlue69Hoo3BWuf8iJBVT32tqto8U/VPj0QBE69oylC9ONO2a1Zd1mA
yKdviaVUT2bzeiizyL27UTsWqmszCIHp2NT73FIot89y7Rc30L5AnhCd0L7ZuFWL23W6iQ6j4B2G
yr86i+jADB6SAAmQAAmQAAmQwJDA/PJUxoa6BIvIUr5YoSPdWkUSylNor6VVKVKHRqHkWN+UuipU
WjY637OcGIjya0E59iuiosvrrVKEqeoqfKGnItCrxjFLWs8pmUWIVO3VJVFh2tdLs7B0a6iDI0ZC
Doq1ZlIpmlVCMC9KW7YiO70vK7Wp1cTFBf/dlShSyeXnaGpzRWSXgnU8IQESIAESIAESIIEugcOU
p1KyYCHiZROUp96o1kgFo9WYtslf7BmqNKm7ZMzYPxcQqq727qmWSiLUB65vvdiSK+NmeH81DVqy
VeV+pN4jHLOkugkSmlBowUpRrgI6taD8hXYEFfqU8lTtkSvu43rpVa9cCfWlC3HDsjs4RAzACIJt
A0Qum0ckQAIkQAIkQAIk0CdwmPK03qvT58j1by0NY72CBVOomaKBWHV5XVX7eNkUx4QCUS51B4pJ
gw/WWEVwLryk0evaLDyoy/WcdZ6WiV6LNkBXlMN7Blpe5yHF8vJ/uegMNgy8t68WItA/9WuAe63i
XbiIzZ92n9U06ij+JQESIAESIAESIIE5BA5WntrD36gVaqz7pT7x1y69CgarrtznMOXp6P1XNav8
FRtyzeqFWrbqactTqXf9njEgRVDB+sE9UTN4zK86tHzHsPvNztm9VhWhPE1fObTP4OuBXeABCZAA
CZAACZAACUwROEx5mu8XukaopaRCsupHepSqK5eAcgGrLqurmkwvm1xJOco29Ad6dZVEhNOhImjJ
wkM52LcEJDRd0IJey1VDZwd1DJzmIe5A9tENBo3ePbz8UJq9m4gKNagY0N7l6BvA8jTQ5QkJkAAJ
kAAJkMBsAocpT0tBqU+cRbf9lAdqqfojJCt0pLKZ+dMoLGG9bMo+egVWrqRTePe0VE7wAuvblf8k
CISW6tnr6VCliXAo2sqp35FNquF0SmYu9Rwd1ppFgqLOQ9x6HCKtYINcghvDD3fL8l+0cnXthVS7
X1siZXfHqwH6qnEpVS2m8v6DBgs0igFFiCNyS3lEAiRAAiRAAiRAAoHAgcpTrT/0PUSr6kLRE/7d
pZerh1TBmGFYdeU+/fK0lr9WNU4PlKv20YrKDNCDVorVjvaCplV7ICQWXqVo61jSlYleiwGITryu
n5u79erlXuWpVZ9NovlVKsjauLyPnOHSzd1DrH3V2TLSqlgsiMWbIiFSUsr8SwIkQAIkQAIkQAJA
YLfyFAY+78Nc+z5vGvSeBEiABEiABEiABA5HgOXpo1iyPH0UNg4iARIgARIgARIgga0EWJ5uRTTW
geXpGBW2kQAJkAAJkAAJkMD+BFiePoohy9NHYeMgEiABEiABEiABEthKgOXpVkTsQAIkQAIkQAIk
QAIkcDwCLE+Px5qaSIAESIAESIAESIAEthJgeboVETuQAAmQAAmQAAmQAAkcjwDL0+OxpiYSIAES
IAESIAESIIGtBFiebkXEDiRAAiRAAiRAAiRAAscjwPL0eKypiQRIgARIgARIgARIYCsBlqdbEbED
CZAACZAACZAACZDA8QiwPD0ea2oiARIgARIgARIgARLYSoDl6VZE7EACJEACJEACJEACJHA8AixP
j8eamkiABEiABEiABEiABLYSYHm6FRE7kAAJkAAJkAAJkAAJHI8Ay9PjsaYmEiABEiABEiABEiCB
rQRYnm5FxA4kQAIkQAIkQAIkQALHI8Dy9HisqYkESIAESIAESIAESGArAZanWxGxAwmQAAmQAAmQ
AAmQwPEIsDw9HmtqIgESIAESIAESIAES2EqA5elWROxAAiRAAiRAAiRAAiRwPAIsT4/HmppIgARI
gARIgARIgAS2EmB5uhURO5AACZAACZAACZAACRyPAMvT47GmJhIgARIgARIgARIgga0EWJ5uRcQO
JEACJEACJEACJEACxyPA8vR4rKmJBEiABEiABEiABEhgKwGWp1sRsQMJkAAJkAAJkAAJkMDxCLA8
PR5raiIBEiABEiABEiABEthKgOXpVkTsQAIkQAIkQAIkQAIkcDwCByxPH1YvFzd3D09r+/1ysViu
n1bH1Ut/+khdbpjE8pvV271yYH27WNyOJ6lcerk68CR5u7rp2zxhzF5ODgcfAt1Q6mNb9k7ySaqP
skpM6iXGPIH7S5inx3pJTBf7zwiTt/fB0QnMtvjJJ9rhE3K2bxMdTzLrRenllwES0MXyfgLuc7/0
qPK0YB1UonvvB3NicR15OcfTJ+zz9JHaO0wPdzdlX7T/s8Wotz+tpa70b0el2yMKwUOsthMbFcvT
J8zrIHrvJD98NdBL3WD35Mn+EibFp4uHJ5AUPOL0uAR2MXBi1u8ipt/3HMOx2Rxiwez73Lmy9/7S
kXvc5lJHsTydgP6Y8nR9u7h5eTO4CbT3fjBhpl06al5KxXMV2ZNCk04N7uTBJHmpJrEWnOw8qaZd
TAJl6devy6VytWpVhSWN98vFy5uJe4o67CB/M8+DbVTJqZ6xk/vWwYzpad+5fXJazXR5RGkSm4My
MmK6aZLq9NArufr4WBwQQArrASXvK2pkjeo8M+lrmvQu8T+PhExe91075JWsNJGZpWrvBWGWltLp
MebNl/6Mej6iPF0v5UnicF4dJfxHDfzQxwvNjBSadDrPqUnyh1g+ghlZ4KZmXekz9qUzFWHyDeru
ITUGBYc8yTwPpneSuXswuW8dzBjXt+fR5LSa6fKICUlsDsrIiOmmSarTQ6/k6uNjcUAAKawHlLyv
qLRGPWqiTXqX+J9HQiav94U4b3xWmsjMErL3gjBLS+n0GPPmS39GPXcvT+W+lLw8JxkTvizG8EuE
9OPdpI99Zt6YlGnfPst1DrxMb/10XxmsC4cY3D7x3huaarcAsbFzDxUMw5us6KOaVCqqprypgHqr
5BuubvXY5Jdn1i4WHmHLSOs2/ooYqm6Pv1ukfKB5XeQB0gbKe4oPkd7GDZOLVVQNE6gOsYb2mEI+
8fJ6JEMUZnXZk0oYxPvcylZGRWtRdfBaVUOCide3627a1BEoUN8umBhYLzVlmGPBHbk+yhyMgTcZ
GhwIBEgLGoNY56nOl7+T0qQHmo0MAV3ps1wDnJYAOHYwrUZd3sQEC4lkdo+InU5yxPtIDhiLYBV4
fXO3hjfym0lidYPsy5dP6olLBUXrOdWtzYi24NyuUw4YtnIAaQMRCbHAKOtg7BDc1w7yF+PiOele
h0cudU9pRpeXdHF4sw0YioJx49v02Tp5m6lgDywyVXiNoKx64R3BoLd6USFDVoSVZwTXiHfODvu3
JXd+xMfiVUSj2Z72YPOUm8t7HB6XejActg9XMZXwYVFKBuQQz9lfEF2LmqDzj881NztOGU+D4DLm
QC/cqN32SiTs2ltAixFbghsMfT4nu5anEq24Phqr3qX1si1MZRWwRapkjIfKxMQDCbZNtpZkOu2T
hDLbMYFMUs0Y1VUSTs0oeeOzqPRU+bnoMXnl4O1qaW86iuomRCS48GX5kY04roY9rG7rL2O0hFKp
OLAa3IboKgan0WBV138NCEIj6goBN0nMUzibh7tqs/QTMwy+mGFk1Gj9Kxitp21L2lIg69g2D+vI
EAsVJn+jwEG3ZEw8hbHJ8dFAoNq6oTZTaxSUzMAGH5e0tNJndKBHOXCwOeJC5Sj6JVW4xbokf0uJ
cmwZWIt1S0LXWANqEkS4J5IrnpRWQumjCiKNLForx548IQGmpxUKEZvmLxoY3C1Jjky6s2YrB03v
svXqBMe46L6LmdCOm3BEp1QnLuXyFL4rhmimGV22Zou7R3ob3hwLGNlZ/aCHleC1zVI3hEkCoRhD
Lnmqh/5aMtbfFsoly/MaBUU9NQcHRvrqV+tpX6vlZorGJQyTE1hn5LTYb6LictHFlbyLOhL/qcQI
cpAqSpR2TQNb54MXIXsj3vtlxRL6pzWqDLcQBJh944NA1YJmD1FXyZY5IsH2pi7qvEoHFdFyhRMJ
xD5T4U6B82SukwLXCs2uPp9g5zM72bE8FYgKFNdKoQbhz+Epl1MFM5jeI+SDunIdJONkq2OHLd6u
i6C0uJAwq0tnbMHjKqn3/9YTIFjfoRdyaVt5qutIqybjKWxyusOJzDHtI+3SzdapkclvljsohGaX
/SAsMY2w5UnbBevqVh+7+0hU4a1l6Yevu4Of0htwGRNlBgjBsPFAgNaQG2W/GU+bOGSAXfKwM9BT
tON4ED3VB9wUv3TJq+NhoGuUbjOyZUpawF5UQQsoLVNsPAFqxakVQHBXTlDI2BIRAhpGgyXS3k/y
w3BA70qqlBnqtJttECbMkwwZuk1cmidhiGhglRo3vSbHWATY4SSR12tjw8U2X8qghBWvA1KVkoQ7
qKGb2CIud+agSpa/QzLQklMIBw6XzZkaY/4n76KGBLCfGF2qQZ6jg+buNoQwoX8uynHCAro2wlv6
xnsfVBOPszFCBrOlhxHbR90f2Al6s9K4HE2FOwUOZIY9GnO+zyeMfmYnu5WnaRrE+GH4y8SGe3KD
GqJglpDgfjlgry8S+AUPvKjIm9ywfxmZJ4AJGTMAah1MbjfBj0pKWRHVjBHhcFfDDMAv+tLYXRcG
iyaCjTVo02UmyIF/eXVD+xJKnxjHVBpqgAyai/WjJAHXrNLJSLbECBaP5QAILENwp6kS/U5AJJli
Kqe+kEkm5EC4F3IEbnbTJo4IK062rZyPy2woxuKlCmCgNglJ+8C3FPdReoLX7oVIy58R7TC2KbUW
OdB8MIPKa75yhtbisVyzBEjHKsX+xoEwGbXHmA3lGqqI06Rc9ow6DIdMslRdEtO4KEmLQoZjQ9rc
mncpl6cYdJcwhOY5oBTr32HPkjka4hiLOLTdBzIK0evaV0xKK1KbgDasXJcvn52lO2ZOCOu08dnl
cV/Ewmy5W+JIs+/l3DOqnG7RKBH3jypNSRv1JJv7OdOlGuWVGRr3pmhVs0+/aGneBinJa5j1kzD7
xtfb3rHcDBqH3wRAae0ZMUanFPVENMcsz3c9iiKR3GbHVLhT4GSoWGifkXV7gk+G8YzOdypPA2Jj
3Q9/61+DMb2ajCP3lUKve+DHUmrYv4zrZhJkmyrAW3Ex6a2HyfS6J/eUOSwfXehliBgsn3bzIBZV
8Xt8NDjNKzjt+BstDWt6uQQS6rnfRCkhs1rQacfiY6jAJZRrOFAajI+o1oQZSIGGuAjacOhhKiKE
sZXamNfhYgMEAmTGGitGYYJA5jkxMF/SZWt0J0ircE0qpQdK89LWL08tstHpcDYhbXq+WEQixiIc
I4jHQbOcoJBtO0QcnMQCn9LPMyomTBQCZztyqEoG6Y1mwHEWPu/S2ZSnZZZZcZzIA0M5lKtWpI7s
ArV7NyhJuIMaEQX5mSdazCs1UaTphNI2t8R16bXw1zOqNE9o7ONK3gX5aS7gd87Sz80bQREl4Vld
RtreBMSwj4TNvnbGC8lrMHIS5kTCV/kSoO6rFH2ldbBj7KN2XNGhtj/mNBglALgmwg1MRFUFrvLB
DGSCx2UQfK0d2PtsGnYpT0dmuLDWegu4Az5LLDuwi8MWu9QORKMtgtJWIt35+hIrPBTVzyRPa+2P
LXis19vfdCmd1k5jjc4wXUWS6ZFTAgunOaeTkXYKQ6QtncLzGjevjMVTPDbBepBDmTu7szkWKiH9
TQIx7tpTvFjep2XUFWm3kDPWmFaQ0fZsanbKBmWeEwPzpSIjOWtyo5FJC5wO0gAFusZBN1eER4Nu
IG2IF1qQDx6LcOgWjlFxOY4DQXXrOWxREahiMskHDqqE+HfQDVQnXTZQgqLrYW2UnvrdYyJk8y7h
zM3muQSJePgeMrSqGQwejbXEWJiTMZopuNALDk2RHcDFcpjdsesJtbs5FIUtnvZVUseX3C1sIq7L
rMEDVFfruRB615hcwFM8Rtnl2CWU04zIzUuWDAQNG0yvHeQ+PZm5HYycgtk3HhS7R9AohxNKS0/z
wg6qADztCh+JXRmdlUYzsrPA4THr9iw+1aln9P87lKc5HpWSRwXCDy84Q4wlV3wCSzzsm2u+lagR
EJm+1JYhfkuynOrSX++7hFpWhaRqL9yhEfOgAhYffWUHj0xWOwiXZFTzxX75BKt2flkbymvVVcxw
OBF10BWLy8hH1la8X+tGTwqEyR8mSYmX3QAOl1xyO/I0KA3pFCsSuWRxl/t8/gszkAppU1uLp/jW
Wl0sbpdLs3Bw703l6SKFurKF2hfaI7SQNtq7/U090ymuVn5pfI5EwZG5j23f07TuKROhNy9g1Lxs
mZQ2NV8AHbpcXNIQyEnK5ymX2703i3uxTe9DxIFZbNYCGXUADkIVEk9/S1GTxNO7dNMwoYUxsoHJ
xKUdJJjS+vXM15ZIbWJNnkj4wLb6OBKU0QwvEfRFG341EpHaT6mCrgCq3pft5IZIs0vdlaG9ouD2
SA7bJpJUR3IDmX2NQU7EFS5lBSkT0mlOBo+4zBr0vckd25tqOT6ayQmvRROneYKQgosw+8bDntWn
MaFUvLPlJUiIqAdlAOKOlut0npod/XCHh1ep9q0bfcs3ZILHYlhwBC19Vsezy9MSv5E1qGRGwQ1A
JZn0oxVYwVri3a7YKlDWwdANQyBi2+fl6iGlabFKL49XZik/RHQUUjOmCUmzWh0ZcRxUL+/D9FB7
dJeCnrilta239E7/SGdMfQAr1o+cmkZfZxFhexlReuVIlW6wc+tmJn3zP+NVZ3t0wdRomGocI2FY
Pkp/pVrMHo8amtR0lFEhEBWsh6zY4KdmW1mYXq4euoHwnpgbMQo5bWBM2+QU72AdBBouEyF0k7+I
qrEQfT59wr9YJH7drO795TZMA9coEjRMBT12c3cmpYkI/wfa+hUAuFwk2wQpZ+p7iKZaMEgz9xpK
B+2Nf4PYNE3gO5gMOQAHtbOgxKxTM8rXMDQDjitk/2/nzruEc39CAsz3mpMxBxBZSKqMNwcRBor2
9oHVDzq0ZVY7YYbD2KQRkXpuKM/SAqBEWzc3ssvzfImLW9IVvZMz6SCf4t2URnA54wreZRUKpCyS
syOeqKpQtbaY7HhbhaqRwkwOeG2hVjmjSz14GmD2jVcfxYDxFWmAGlfp4h0sL2BARq2XxrXoVQuo
SVY0ULGEu+ylY0wwdapC8ywdWbfrItDnU6Q/0/+bXZ4+JZ+Hu5vxjHlKpZRNAiQwQiAvlCNd2DSb
wNYSZ7akPTrmymkPURxKAiRAAschcBbl6fo2fC85jufUQgIkMEKA5ekIlMc2nQVMuLf0WD84jgRI
gASOTOAsytMj+0x1JEACXQJnUVF1rTv3C/dLeBBUHuqFp6VHMf/t6saVlkex+Gz9KCZQCQmQAAns
SYDl6Z4AOZwErosAy9N94in04ONl4j5Cdx3r77rFF+l2lcP+JEACJHAyAixPT4aeikmABEiABEiA
BEiABIYEWJ4OmbCFBEiABEiABEiABEjgZARYnp4MPRWTAAmQAAmQAAmQAAkMCbA8HTJhCwmQAAmQ
AAmQAAmQwMkIsDw9GXoqJgESIAESIAESIAESGBJgeTpkwhYSIAESIAESIAESIIGTEWB5ejL0VEwC
JEACJEACJEACJDAkwPJ0yIQtJEACJEACJEACJEACJyPA8vRk6KmYBEiABEiABEiABEhgSIDl6ZAJ
W0iABEiABEiABEiABE5G4JzK0/LfA1zeG4vyX4teLBa36839crG4Wb21Swc6mPzvN65vi+o9VYnl
y/WeQs4Nud9yAAAgAElEQVRpuGDB/4S3OLgo0SnxOs1/xfGcANEWEiABEiABEiCB/Qicb3kaqkOW
p/uF+YCjQ3ka6nuWpwfETFEkQAIkQAIk8HwJnFN5GqIgtc7N3UNom38y855lqK6y9FAf54uzz2da
MlveeXW8bu/OizWtIQESIAESIIHnQoDlafedAZan2ycBy9PtjNiDBEiABEiABEhgNwK7ladSsemn
vSTaCpR1fQNRXkKMtzxhSCwEZaB+6ruMdi9TDvwjinIZ5OqG76SCxoW99/lw5yLdwqZRX3Kt77kq
wFSeglh0BMaOvh1bLQeP4OXazYhVBkHNCL6DHHkld/SDfdpLou1WNKiLr8PikCAWvVtUyw0LAKlv
ozYtZhSoG3uLt3nqofS4bMbIiNyqYr162dKjmOQSkO1m4+1jL8sWCcFZM5wHJEACJEACJEACpySw
Q3kq1Yb9JuZ+2UqBVmVquVMKHaszrJQRF6VnK+xK4WJF3sPqdiVP8Vu9UnHEWgfL06ji4W458pMp
7C/y1ksrRMrwZnw5NqtaQaM90Xg8RkewvW+J/5aoOK6sxq2KjreCrLzkMOAzVqFKTaZVmoItQvCb
g5htP9jqii3lndLYaMTR5VA6o6mbTVCBnlq2N/hKA9KjG6/mCGZR/VWWCM1snUMxJn0FSllhVvGA
BEiABEiABEjg1AR2KE9DXWJ2h6pCWr2KlfpDi49yRV8nxRLKBM0tT8fNADFymMtTvAz1X6mQtJgr
fWCgK+o6AqJQAx5nPh3fobZzgLFkX9/GO9NgqisUU63ut2axM9aybkZPbDDDJNW6E2pWL3PBhfhN
AwbjYYbfg4nteFy+dYS79e6UGG9GIsZRaGgVj0mABEiABEiABE5NYIfytNQcfrOqWT7c761FDvJH
bqzmak8ZhNIqFiIms9RAoaDU0eGv97dmqV3s0+7vBo2lJ7R4edpzpNXBcCfStNlBtsRLqNJlzCp5
Kg03CFuZJUAGH6z+m0oxO76loM/E8XdmhrcrNpet6pFjyV8DTGb5emA32nVg/guoyyUYLuejZEb6
QDI42wYh8KpIm792gz9bxXMSIAESIAESIIFTE9ilPK22tlpNb9Hl8gvuXN4v/WUA9FPqkpG6Kt5y
i4WIa5F2qEhQLhx7f2mUe2n+qBck5wop3MH1OqzniCqs8uE+ol7IBVyruqr9XavKHcpSP6GzeAzy
xw+lMxSp4HLrby1dsScsT/tkzOzqhtej5dxPe8YrrVb7skhVIPxLAiRAAiRAAmdEYPfyVIyHKiEW
glJ/2b9mPyz+muNeRgQSoT+oiEWeyw+D40mwKopC44NGkSCFkd72c0WDblFZPes4FSzB8rRvVfX3
5epB9Hod7/aMqR9pc9VJV6cKjyIQBV4JZriKmBVziOU+ZqQdVLV4iscIs/b0EPSMR0fm9MH+PCYB
EiABEiABEjgOgR3K0/WtlUpQJUiB4j/9KS992t1N6WbVXqlcmwSpDPT5tRS7B/9pVCx9sKIqqvUl
TukGj+bLqd1Rg1E9R9RyiZXXRiFyoYAL3UB+u79rqssXgJvl7Q201NvSxlZKzGX8RxJELza66mJ/
vH/scSkR9BvSLkE88jc4n+SnUXoPXkz3pOqT8T4FcmIOpzGU/lsr9QK/ipSUsNwO0eMJCZAACZAA
CZDA8QnsVJ6W58Xl/7xsqjVQKXHqZS90xJtWGNVLPkofuIf2UFPGQsQrrYKolZVltN7sTOykxJFP
LTtKpVXOb+7knyVqllSN9/7vWKGFWCR1HAneRcfVnGR5qGI7VpWhsYJHadUvc00v1b9Ixu+8Vpj+
7zF5bVpHQfiUWL3gFlp7wBK8iyFrP95v1o7ACeEO5Sm+eBriBSVssQ/qUTmPpwGF1sHoqWYOy9Ma
bP4/CZAACZAACZwJgR3K03GLQ4Ey3oWtpyaQC8dT20P9JEACJEACJEACJNAlwPK0i+aKLrA8vaJg
0hUSIAESIAESuHYCLE+vPcLiH8vT5xBl+kgCJEACJEACV0KA5emVBHLSDZank3h4kQRIgARIgARI
4JwI7F2enpMztIUESIAESIAESIAESODSCbA8vfQI0n4SIAESIAESIAESuCoCLE+vKpx0hgRIgARI
gARIgAQunQDL00uPIO0nARIgARIgARIggasiwPL0qsJJZ0iABEiABEiABEjg0gmwPL30CNJ+EiAB
EiABEiABErgqAixPryqcdIYESIAESIAESIAELp0Ay9NLjyDtJwESIAESIAESIIGrIsDy9KrCSWdI
gARIgARIgARI4NIJsDy99AjSfhIgARIgARIgARK4KgIsT68qnHSGBEiABEiABEiABC6dAMvTS48g
7ScBEiABEiABEiCBqyLA8vSqwklnSIAESIAESIAESODSCbA8vfQI0n4SIAESIAESIAESuCoCLE+v
Kpx0hgRIgARIgARIgAQunQDL00uPIO0nARIgARIgARIggasiwPL0qsJJZ0iABEiABEiABEjg0gmw
PL30CNJ+EiABEiABEiABErgqAixPryqcdIYESIAESIAESIAELp0Ay9NLjyDtJwESIAESIAESIIGr
IsDy9KrCSWdIgARIgARIgARI4NIJsDy99AjSfhIgARIgARIgARK4KgIsT68qnHSGBEiABEiABEiA
BC6dAMvTS48g7ScBEiABEiABEiCBqyLA8vSqwklnSIAESIAESIAESODSCbA8vfQI0n4SIAESIAES
IAESuCoCLE+vKpx0hgRIgARIgARIgAQunQDL00uPIO0nARIgARIgARIggasiwPL0qsJJZ0iABEiA
BEiABEjg0gmwPL30CNJ+EiABEiABEiABErgqAixPryqcdIYESIAESIAESIAELp0Ay9NLjyDtJwES
IAESIAESIIGrIsDy9KrCSWdIgARIgARIgARI4NIJsDy99AjSfhIgARIgARIgARK4KgIsT68qnHSG
BEiABEiABEiABC6dAMvTS48g7ScBEiABEiCB50ng//uvf//L//z3v/zP/+2/B///23+Rxr//r6GR
JxdFgOXpRYWLxpIACZAACZAACQiBX73+Nz/7n37nZ9/+/b/6X//9N4jk63//1+/9/s++/Tv/zz/4
N1+zREUyF3TM8vSCgkVTSYAESIAESIAEKoGvP/zgr7/4H30a/+M//fMP/uMX/eu8cs4EWJ6ec3Ro
GwmQAAmQAAmQwCiBrz/cUn1u7YBi18tF+Czv8erTH9+L/scpfbi7WSxuVm+HRhanbtfDC+ffwvL0
/GNEC0mABEiABEiABBKB//LzN387+ex+awcUWCq5f/LxZ59/Vv/35m/x6tMfd8rTb+5//IP/+Qev
RkpPN4nlqbPgEQmQAAmQAAmQAAlcC4FT32jslKf90tPB9/uc2im3cecj3j3dGRkHkAAJkAAJkAAJ
XBeBkUrum//zgxeLF7/3+a/E07/99IMXi+/+8VebTe352Vd/+qPvfmuxePHr7//huv4y61c//3T5
j9/9NXlH4MWv/9bHX/6yEHq7kkfv//bLz26/+2uLxYt/+P7Hf1kEbja/erP60W9K91/7zR99+m9/
b/hwf30L7xu8XD1sNqMqWnn6V19+/Fu//qJIW72pKoJT3/zFj3/wG0Xdb/zAbDjbILI8PdvQ0DAS
IAESIAESIIHjECiVnFeD9VXOh1fff7H4zU++2vxqffudcrBp5em33/3gjz578zdvPvuj918sFjd3
D5vN5uFPlz/+8zcP33zz1Z//3ruLxYv60mcpT19862b5+VcPb1793m8sFt/+8Zda7774X5afvnn4
6v6TD35DdOd3T3/5zZs/luL2k7/65ptvpOIcVVHK0++8+/JHn9ybiuVaunt5+qv75buLF+//8ZcP
33z16e++u3jxwadHfnthxzCyPN0RGLuTAAmQAAmQAAlcG4FSyfm7p18+1PuPb1+9/+LFB3+w/O7i
3R//ZfW59PzHr/Tfsvrqk3+0WHz/ldSn/nlYvVwsyv3OTSlPv/OHUpFuNpuv/m37GdPDn7y/WHz3
k5+3Mb/6v340Up5uNv0H966ilqdq3qaIerH8v7E8fXj1/cXCbP7m0w8Wi/f/JJrcDDmXPyxPzyUS
tIMESIAESIAESGA2gYf/7Xf+4+up3ls74GC/0Yitm83mzR+9Kzc2f/czbU89vUzcPHz2yb/44OYf
vffut8ptWChP6+1Vuf2pv7L/8g9eLBZL/1H9zHdPx1SYzGahizJTy4HfG5YjM0n9Oq+/LE/PKx60
hgRIgARIgARIYAaBrf9u1NYOqMQqOWzcbOrd03/2wYvFzepv6qXS87c/q3dXN5svf/ztxeKffvrN
rz77vReL7/z2K3m6/8uv0t1TqwWtlLTbqFXoN3/6g+13Tzsqisz37EZsEfWdcjPVnCp3T//pq4dv
4KMOVAPO7f9Znp5bRGgPCZAACZAACZDAVgJ/+7//y5/98//wd73/qOnP/8Nf/4N/+f/qw/Ot0kol
5w/3P/vszTebzVer778oz+jbLdKvREzp+eK9H919+fDw5tM/+G77+dTDq/cXi+/89qdfffPw5k/l
3VN8uD8sTzc//+S7i8WL73/82Zuv3nz+8Qe/8Z3x8lTeAfjOj/7kzZd/8eVDR0UpT1/82sv6GuvH
P/jWIrwmW16BffiT918s3v3gX6+/+uabb/5mvfrdT9rbBlvBnKgDy9MTgadaEiABEiABEiCBfQj8
3X/6w4/+6tu//7Pv/zt9EbRI+/rfvfn27//svY9+/sXf/ffZ4kvRiY+/b9df3b3/wl4P/csfvys/
wNdf7v+LV/WX+ItvffdHf1qq1s2vvvyj9+WX8y9+/Qf/+tOP47unI+XpZvPNX/z4/X8oP7X/7m+v
3vx555/l/+WXH78sP7f/J58+dFS0O7J/8eoHIu3Fr//Wj9ftZ09291T+G7Bv7n70XemwWHzr3R/c
fnbWb55uNixPZ6cuO5IACZAACZAACTx3AljzPXcWT+c/y9OnY0vJJEACJEACJEACV0aA5ekxAsry
9BiUqYMESIAESIAESOAqCLA8PUYYWZ4egzJ1kAAJkAAJkAAJkAAJzCTA8nQmKHYjARIgARIgARIg
ARI4BgGWp8egTB0kQAIkQAIkQAIkQAIzCbA8nQmK3UiABEiABEiABEiABI5BgOXpMShTBwmQAAmQ
AAmQAAmQwEwCLE9ngmI3EiABEiABEiABEiCBYxBgeXoMytRBAiRAAiRAAiRAAiQwkwDL05mg2I0E
SIAESIAESIAESOAYBFieHoMydZAACZAACZAACZAACcwkwPJ0Jih2IwESIAESIAESIAESOAYBlqfH
oEwdJEACJEACJEACJEACMwmwPJ0Jit1IgARIgARIgARIgASOQYDl6TEoUwcJkAAJkAAJkAAJkMBM
AixPZ4JiNxIgARIgARIgARIggWMQYHl6DMrUQQIkQAIkQAIkQAIkMJPADuXpl/yQAAmQAAmQAAmQ
AAk8GwKbE312K09PZCTVkgAJkAAJkAAJkAAJHJXAl19+eVR9oIzlKcDgIQmQAAmQAAmQAAmQQCHA
8pSJQAIkQAIkQAIkQAIkcEYEWJ6eUTBoCgmQAAmQAAmQAAmQAMtT5gAJkAAJkAAJkAAJkMAZEWB5
ekbBoCkkQAIkQAIkQAIkQAIsT5kDJEACJEACJEACJEACZ0SA5ekZBYOmkAAJkAAJkAAJkAAJsDxl
DpAACZAACZAACZAACZwRAZanZxQMmkICJEACJEACJEACJMDylDlAAiRAAiRAAiRAAiRwRgRYnp5R
MGgKCZAACZAACZAACZAAy1PmAAmQAAmQAAmQAAmQwBkRYHl6RsGgKSRAAiRAAiRAAiRAAtdSnt4v
F/65Wb0NkX24u6kXb+4eyoW19l6uQ0eebCGwvnXKi8Vieb/ZbCrMzHyLoEdcDiEWM0T721UJ7VQc
q80aelR8LMtR55Uf28yyPHnCxOhHdory40ZNSeQ1EiABEiCBQxO4hvI01UyxEt1soKwpNcrD6qXt
nVNlzQB1HfiE2+1A43k1DDmzPD2vCJ3emmF5Wuba7ZN8DZxZaNZvp/b9ZOao07OkBSRAAiTwjAlc
fHlqd0ZLqSSR1Cqq1ZGtg2+Qj75n9uiB15FfrfIwzpv7pR8fwcX6NePlqt4An6+wX44884DORzi/
Z0JqXwVP+aWunwDz/WJPEiABEiCBoxK49PJU9z+vPu1x8+Lm7kFL1Xa7dPln9Vmw3j2VWkcllDat
t7Cx3GFtD5F1YFB31ICdTlkrTxe5QPSKxL4qKCYJQTFYxy4Wg+GzHRotT/PD/RFFqTqxlFjeu+Wz
jWDHaQJDpC0imgmDmSXysLG+MZIapbrV75mr8vRDZiVG1o4hvkmIvA+SRkkPffNHktZy2/LKJr5d
mgbAqyRAAiRAAocgcOnladv8tKxsSNoWdbu2vaoWTMPy9KfdlymtxCqblu1Stfk5lqedjRzePQ07
fQElcUnoFovF4+jBSxpVtjwwtjLCjyFwRZEVLnBnHfosTnlj7xBT+KxkDMtTfZohsWiz1emXmi9P
Un+h2TpCedracqGZhJReN6u3ofAdlqcjo2oZOkza9qb1WdGmMSRAAiRwtQSuvDz1uyNeEoUdtO1P
eGsk3KWr21utYMLAq82IacfCtt3D0mqCesNMb3rVtw8rw3JDelrR8Oq28rSnqIa4GNPKo/ZlpvnC
8nTI+tEtI3OkTbHbdXsLvM01n1nDOajfc0KeDBshslgE+03TmoHYzb6ijOSDVs/wnarmRsjnR6Ph
QBIgARIggfkELr08Hd05WhUyVh7Zo38tSgb1lu6CdudmsWg32Ea23vmgr6tnI1zug2Yso9UA0qw3
sXYGEr426OgWPr+XNlTk9kDnMj5brkL599EEhkhbqtzcPXRn1mAOesjAkDbcv2eOP9yvI1ACHofy
NOdDW0ygPG31cZXwyLv+4AIPSYAESIAEZhK49PLUnzi3W2L6RprVQINdbbiD+u2WxcvVz+q7aHg/
tbEcHTiT8xV0Wy+BCWzYAUsqBawaOMDWPrM8hfKlQneTtAxqqdJux+oXlSsI0eldCMkAL5VKnddm
ImRRtLdVhyNz8H4F7576PwLgkbVZ36JfzWivsWI3S8h5d09ZnsYQ8YwESIAEjkXg4stT2ALDjTOr
VqfL07p1+UjZ3treZo31LmxoH9RAx4rXCfVkLMN/9zTDXJSfRmlRaDwtNLs5s6081fdQTc+wOtEC
yLvYrfHdbGHvDoGRJNE8sQcXTr/OrJw2t+tBKOUrxGAij9w9ddFyFItLbcFqNau2X0fFG6ut23Oc
9Z04s5kESIAEnpjAFZSnQqhtXW13attSRTfY1eoO2u6Zhf3J7+uEXVbLU9DyHDeqXNtplWk8cwcp
/eov92OFqgN3TO2t5eng11FVEZYj8B3jZvXWLN/REnbvEggTp0zHMBmBv1wcKU/H56AIGUzkkfL0
5m6t/6ox3BT39BM5MR9UbF06bF6zPO2GmBdIgARI4BgErqQ8PQYq6iABEjhXAqnoPFczaRcJkAAJ
kMAsAixPZ2FiJxIggXMmwPL0nKND20iABEhgVwIsT3clxv4kQAJnR4Dl6dmFhAaRAAmQwB4EWJ7u
AY9DSYAESIAESIAESIAEDk2A5emhiVIeCZAACZAACZAACZDAHgRYnu4Bj0NJgARIgARIgARIgAQO
TYDl6aGJUh4JkAAJkAAJkAAJkMAeBFie7gGPQ0mABEiABEiABEiABA5NgOXpoYlSHgmQAAmQAAmQ
AAmQwB4EWJ7uAY9DSYAESIAESIAESIAEDk2A5emhiVIeCZAACZAACZAACZDAHgSupDyt/yh3+W+s
D/6r3/rf0W7/ze76H9cu/8Xv1VskVwcu15vBfzve/jvg9T/77hIWRWPsr+pEdOif/uPjqNqPwRFv
5BEJkAAJkAAJkAAJPB8CV1Ge1iqw1YWD8nSxWJRLW8rTIuTm7mEzLE8Xi0WtUEO5KVXqSHm6WBQh
m81mYImVuVP5VYvdWbXslBheIwESIAESIAESIIHLJHAN5Wm543ijt0JrUdhOtSSVaq8d491Nj1kt
Cuuodje0lJ52B7RcquVpv8qMKtZL0/V2dTNyv3YjllsfNaYK0RpXW/mXBEiABEiABEiABJ4Hgcsv
T2vl5yVjKE83UBfG2jGGt3ZrlWIsT/UmqFSr28rT+mh+pLJsZuR7oqPlabPZPYqm8owESIAESIAE
SIAErprA5ZentWT0e5ChPG0l6QLuntqbo1D/1bKy3S7Vh/vdu6cmoYiV9GjVZ7nglljitHp3eKN0
vDxtBuRa1sTxgARIgARIgARIgASumMDFl6eDR+GDNz71ZVAtVbW6tPK0Fpd2quWp9oOiM797qhUk
lqfthVTLGbXH5etd2KCgvsZaR+GbBiaHByRAAiRAAiRAAiTwLAhcf3mq90S7757WstW6DX8a5Q/r
tz3c15/qa9mq5axLiEk1effU3qaNY3hGAiRAAiRAAiRAAldN4OLL01YR+iP18HAfY9funnrPerH2
14JS2tK7pyBjt/K0a4lJnCxP0SQbwQMSIAESIAESIAESuHICl1+e5kfz3aIwP9yvT+FLxRnvbm4r
T+GhvAyMT/blYq2A9dYpdtd/XqBlVac8LS7gywBXnoR0jwRIgARIgARIgAScwOWXp+1mpz0K36k8
HX3Lc7/yVO/ODqvhxcKM9ACMHI1UzCO92EQCJEACJEACJEACV0ngCspT/aWR1oU7xKne4HzEwB10
7Nq1Fsd8sr8rN/YnARIgARIgARK4EgJXUZ5uyr9vn38yf5ERksf9V+HIRdKn0SRAAiRAAiRAAmdA
4ErK0zMgSRNIgARIgARIgARIgAQOQIDl6QEgUgQJkAAJkAAJkAAJkMChCLA8PRRJyiEBEiABEiAB
EiABEjgAAZanB4BIESRAAiRAAiRAAiRAAociwPL0UCQphwRIgARIgARIgARI4AAEWJ4eACJFkAAJ
kAAJkAAJkAAJHIoAy9NDkaQcEiABEiABEiABEiCBAxBgeXoAiBRBAiRAAiRAAiRAAiRwKAIsTw9F
knJIgARIgARIgARIgAQOQIDl6QEgUgQJkAAJkAAJkAAJkMChCLA8PRRJyiEBEiABEiABEiABEjgA
gYspT7/khwRIgARIgARIgARI4HkQ2Jzo8858vScsoucbyZ4kQAIkQAIkQAIkQAL7Ezhh4cfydP/w
UQIJkAAJkAAJkAAJXBsBlqfXFlH6QwIkQAIkQAIkQAIXTYDl6UWHj8aTAAmQAAmQAAmQwLURYHl6
bRGlPyRAAiRAAiRAAiRw0QRYnl50+Gg8CZAACZAACZAACVwbAZan1xZR+kMCJEACJEACJEACF02A
5elFh4/GkwAJkAAJkAAJkMC1EWB5em0RpT8kQAIkQAIkQAIkcNEEWJ5edPhoPAmQAAmQAAmQAAlc
GwGWp9cWUfpDAiRAAiRAAiRAAhdNgOXpRYePxpMACZAACZAACZDAtRFgeXptEaU/JEACJEACJEAC
JHDRBFieXnT4aDwJkAAJkAAJkAAJXBuBayhPH+5uFvXzcvUQArRetguL5X24sHm7ugmd18vFch27
bDabh7ubmzsU6QIXt8Pug/FX17C+VaCB3lH8fLvSMBcbjm3Aw+ql+m5/x21YLxc3q7dDJiIh5+Gw
1wW39BBdveNHirgsdKPLjkyNkeXrDFKpRyaZJuvqQaaGL1Blkh5EZrL10KfzfL+vW9nRoyypNbqa
HRrDU8j7+vV73/v8i0NK/sUnP/zJO9+r//vpJ1/vIVps20+CK3/z4fd+8uHrzebw/rqOkxxdfHkq
S7ZWCWH5LtVMb3la3y6s7tTqNs38Volat1KtLrXskB0XL50keEdWiqhlGxjdKZ/OpqPtwVsUbd1x
ex2uvkqz2CcCl+B4P+i4VpiH8SD5axcP7HhY30zJZiNfti+7PEVn9jqeEay95J9ocC/BnsScbpo9
ibanFLq1XNvaIVinVWBt/Pr1J6/D5a0nX7/66TsfvdnabdcOX3z0k/de/UJG7ebOrnpO0P/Sy9O0
Afg0nlyn/F5pK7ny+t6+0U4JuV9aWXyCuJ1CZaBxfPdzjJ4MwRZFnmMdC3odUq52Rl9DcyJwCY73
gx7Sfjw6yV/rdGDHu3VD33gz5UQHPTJPZc6MYD2V6ieUe9z4dtPsCT18GtFby7WtHdCunTrjQD1+
kvIUrcJjVXrRf6+sPLX9wAvQkfDcL/Odv878n1jsrmcOjwDqNMkDpnaPeYJMZ/DezSMxwudiEvoS
1poD5TQ/4PPGBTyxKqFclWf3N6s/s/dBereHccdFgXb3vXZod99BUTWscRCl7WMD90Z0LgIQkTx1
KG81DIG0Sw1D52b8ANSuQZdXdBQ1PPEo369W5WWV5f/RCzrGtz7ixBYL3EEiPip5s2lPdReLxXIV
Hu4bz8Xyzu6eFtRyutAvzyDWCMtU0uSrbz0NWzCX3AYDWBTdmw34/Nc13tytRl90gYhUhlVazIdi
oD7+MkVmANrnx511yYfbCtbSMrAyOSWg941SeUpmEizom83GGiU6+rJX8SWI9W5jD9zM9xLrl6u1
pWuNF8DXbcsF9vXOcgEC0V6uwLcjCnycyx5ZWNbK/Lpd28AxH3EOBlAtCy0zywt1lpuK1OISD15/
PvacXe5x1vYPX8HDfe+s9xqhpd3UhJZ2PzIojHdP4ZLUnU0pvksAbwL88PVPPrK3AsqD+M2bD+Hh
PkhQ8zabzevP3/nh6y9M+Nid11DyQnla279oSsUqUwGuOat3fvjaXlWwnu989MZvzVZ7DvJiA6Cb
Prz08rTkvT7cr0u5TKryaqnPc+sgMGA5MDYjpY9c6yx2bWHSpdOkPIMDXStH16Cn9R920MVCdykL
nN/NrWuo7pfSAQoLWwfFkbZQljVa+29/VArr9dvVsr2aLEqVSd08dBV2RZ54olFzUo7NqqcleDTp
gGgUcyoAACAASURBVEh01og0ILKNNd8RmsNBK8dB7RD0ui9qLEo90aZtyWSfwiYT1ZfjsAg8XcRH
JXvy1Cf49pVJckyNR7zlWFOrkte0NMJ2IOvYWsrTYUugsL5VgG5PUaQzqBNTWT+xjmlCEfX9upQg
aEDrBfMCnU2pFezsrNgyXCHgflFccFYoqkzhekmstepThqiomWK3Gg++l5xs8oveFmIkVnJYbZhw
Z44LD6vb9muN8YkmSAx4wWUrlWdCnWKajdIOa2mFivZDxNULJ9A3A6NTjqWU1FqwlHGlusIKslaH
rc8XH0FnGwj13GazGe+DmqX/T1xvuSTFnNZ2tSgszaLdCsEvXknxB1c3GyhPSzmo5kl7rV9Lefo9
FVJUywum8RPKR3CnyGxyapHajIncVKArlYHqjtTHZkAc6H2iPYc9u/jytC5J+iVM7oHJlC5zW7O/
rJI2r/KPogpPnD8AOOxM3h4WJm++9iPYMHBZPJbbUzHCmzS+2FXLWhDzcO+GfsmQ3DM5aOt1aAch
qYMpGh40A0d/lhekX9hJj0Bxw/DaQfVu+EwjlE11p/QaN96ZM7aNlM7c3O5h8u8z0aomwP+oKG+p
Ry7Kt/B2RX8DZ9rtoHaYfLZTbyCV9Sqpdo1d46OiccLSx9bGYtCwpdo5/H+LbEdR0pjJFIHSJ1Uw
UVqag9HZwc9Vg5F2D6/sCCVb4vBUb2mJH4RAn1a7G65uCNzT6EvUPmY89I+dPfpINfYBU0GOeGOR
6ruAToMK9xHlQIcyztXF/t7u4ocRT9La3E9jp6ZJKMs2v/jkh+VnRl6nFuVQrrkxUBf2X9YMtzZh
bDksRZu+RSplqBZ5tegshWaypJoT3j01FUkCVLFRSHS5GfXFR6Ad/A2lMFaW6D44psKzMdq+sYMy
SIGDhKc4vIbyFLjonEzZD6djC0S3IvEFwnSMfju0q9d8oGybjw+rl2mPeWLnIYhRU/q2kNY4qaRl
d8nLut8aj8trNxlUaeAQtsP2FSh0qF+fyi5ohpUbG/oEq/w9Lkn15Mn+JgLmeFFocSz3igKGfCtr
AtS8oIdNumi3NLCDSsGsGkBJi8DTRXwgOXKDsrWfsXFIl7B0k499aW93uLEFQAgc+9RcjYqMXqI6
5O+0RaBWh1FaHCXOpo+bDUaWwxQsKc3y0wnTZQdZSKztZKFQO13aTLEzjAczIj33xfDu4E6Yg6Mu
iNshQ9p3v+iayom21WWtVu2xP7iDXDWFGsmgt0RX5v7EfEdZm43Uo/6svD3Kfw2FXe0O5VqpRG2I
/mQeO7SfFg36JM3tVAwo9yPh+Tg8+A7VoUqIjVae2oH2s6rUDsqVWCC2zrPK069fv2c3REN5GjCq
OwonKA09Dbha/FR/r6s89SkUv3j59NbJlnh6h3DBF4ja7PJDt+dxktCdS3kq1eftEn65nNfHFsRB
iG3JjsvrDuUppgcIGYKq25sZdnR0x07QHoFihwVi9DlGMLULam7Q8/3Xsh/X4ibNZbMqGCAnGGU8
PmzERyVjY7WkFZTJeNnsa21hOaaoc8Uf3JOCOJZ6w5b4MMEiO1BUDUgY5XTiq5eUI/GbWwYu5yO3
1YMXeJKIteEBgiVVdAGlxPrYFopQ7KYQSKaNFe7bjQczokz3BanGPuXFjDG9u7oAKiCxJena28PQ
oaIyLLE/uBOQtkEt4uNz3xiOjExNoSyza7GeK8+my73MUIZCOYjteBxqOJMeDrTW7NxHTJaUoTqk
yjEzpPKD+6/lmX59zTQK6ZWn9goB3gwOusbL06BXhftT/mKl9Knyx4EHJIc/uaby1BY7wYTrrB/3
FovB3KukfYGQ8+mJd/jYnJtEx1i/xIdF/+mNHY2RLtawREqY9O3GenvAd27fjH1H9zsizYdRRe6f
rtchH4rSttOXewC664thDZTnDzS63Cs6MkTVJ3dczh2vtNtj01H3x0HtEPQSXH1Fsuy1eidMhTS9
blU2BBYBdESONZ32j/i4ZHHfjBcLVWM51lt6xZLR8rSk6IDweonJKcfDFoCAoGTWjBVDTk9QmEZZ
MYbl6f0SLU/l6VjEw8IO8RopfCFY5kIwCeQjc+tcD0ICWx0WytNyq8887YsdMz5oAzMQNX4vcry1
XnTCk3qdz6gLMhYzQTMN2708Lblk/ctt19G7reCOudmJuNGzjuCOtYUDeY3SSjd7hdS6SIlppV69
4Td41C5PuvUGIZakWAtiHxD+Yf33m6TF6zYpBP3epPWWOs8Kx8e/ewqStYI0FXKQy1BlkttdjpXF
dlDfSfAy1N0przE0L8IbAsGGpzu5/PIUHhPoqtdwlcWxPDjQSYizNDAN89+vxMWubgP4nMnnv4+5
6iNHqmvZ8dwtWzLQX65Liwa9REcCXdbH+stZ6Y0xwgi2tTVuOc2b5qamTfQRti7PvZvlra31pcOd
/RjcFBXD9D8PASS15ohqLvkMEIkbwXEoT9teazHVUAbXM6idg14r1KbEVcRSoH2htfoPTWiJVxLp
6SI+LhmMr7/ptpz06XCzkt+Y+3cw91G8kPLIPuUSzAL44tT6jHznhP63S/0l/qyYLuWn/TgBC1a3
3MoslQaXqj2tggntRaC0DCRjSYcRRAjuoCoNPetJMBt3jUHppmgnxA6NDxrBjJiTvvuIBFtGYky7
eue44LkRH0C19pItKEdM1Y/bE5mAO+YmEICS1LXDax7lvpLq8GpYRXl5Cj9Fl2fNVn6VCrU8ff7p
J6/tl/u1VC0P7j/6HH8y337bLiVvt48qD8/xrfQsv6myVwK0epYx0L/96L61lHulWB2WKrO9G2Dl
dfvlvv2gfrQ8xTumeDyjPEWln39o/3hq4BB+uS8y1UgHrnSe4u/ll6fzqeRJPn8ke14QgbH18YLM
p6mPIcCgP4Yax5AACVw6gfGy9TBeSckeXjw4jNi5Up5ReTr+o6i5oNjvUgiwUrmUSB3QTgb9gDAp
igRI4IIIyE3Zpygiy+1S+xevTgDkGZWnJ6BLlScgwErlBNBPrZJBP3UEqJ8ESODyCYQn+Pom66nc
Ynl6KvLUSwIkQAIkQAIkQAIkMEKA5ekIFDaRAAmQAAmQAAmQAAmcigDL01ORp14SIAESIAESIAES
IIERAixPR6CwiQRIgARIgARIgARI4FQEWJ6eijz1kgAJkAAJkAAJkAAJjBBgeToChU0kQAIkQAIk
QAIkQAKnIsDy9FTkqZcESIAESIAESIAESGCEAMvTEShsIgESIAESIAESIAESOBUBlqenIk+9JHCl
BPw/ey3/Keo5/zmT8F+ITlRE2k8/sf/ydLrKUxIgARIggWskwPL0GqNKn0jgVARefx5LUvkP7r3z
0Ztpc6bK081m8/rzd374mgXqNENeJQESIIFrIsDy9JqiSV9I4LQERv/rz28+3Hb7c0t5utku4bRu
UzsJkAAJkMBhCbA8PSxPSiOBZ0ygc5vzi49+8t6rX2w2m1qGfvGRPPR/53ut0do3m1988kN8GUBO
60CT8Izh0nUSIAESeEYEWJ4+o2DTVRJ4UgK9m6DWLgf2Nqq8BtBeKg0d7E0Aeev08y+KxdbhSe2n
cBIgARIggTMhwPL0TAJBM0jg4gn0ikhrt4Piqt8rhXZ/jg+NfP304nODDpAACZDATgRYnu6Ei51J
gAT6BOY93Nfxo+XpRp/j/+KTH8IP9juSVRT/kgAJkAAJXBUBlqdXFU46QwInJeD3PsEMbww3ROFN
09BeK9GvX7+Hv9Z//fnWn/+DRh6SAAmQAAlcNgGWp5cdP1pPAudFAN4oLYbJb/nrz5vgJ1DV5PG7
p+UHUj99L/xGym6pbjb3y8ViuT4vn2kNCZAACZDAgQmwPD0wUIojgedOoP/P8oe7pL27p/UH/vqj
qAITHvSzPH3u6UX/SYAEngUBlqfPIsx0kgQuiECsYvm7qAsKHU0lARIggcMQYHl6GI6UQgIkcBgC
6b9imk4Po4NSSIAESIAEzpoAy9OzDg+NI4HnREDeRvV/GPU5eU5fSYAESIAEkADLU6TBYxIgARIg
ARIgARIggRMTYHl64gBQPQmQAAmQAAmQAAmQABJgeYo0eEwCJEACJEACJEACJHBiAixPTxwAqicB
EiABEiABEiABEkACLE+RBo9JgARIgARIgARIgAROTIDl6YkDQPUkQAIkQAIkQAIkQAJIgOUp0uAx
CZAACZAACZAACZDAiQmwPD1xAKieBEiABEiABEiABEgACbA8RRo8JgESIAESIAESIAESODEBlqcn
DgDVkwAJkAAJkAAJkAAJIIGLKU+/5IcESIAESIAESIAESOB5ENic6PPOfL0nLKLnG8meJEACJEAC
JEACJEAC+xM4YeHH8nT/8FECCZAACZAACZAACVwbAZan1xZR+kMCJEACJEACJEACF02A5elFh4/G
kwAJkAAJkAAJkMC1EWB5em0RpT8kQAIkQAIkQAIkcNEEWJ5edPhoPAmQAAmQAAmQAAlcGwGWp9cW
UfpDAiRAAiRAAiRAAhdNgOXpRYePxpMACZAACZAACZDAtRFgeXptEaU/JEACJEACJEACJHDRBFie
XnT4aDwJkAAJkAAJkAAJXBsBlqfXFlH6QwIkQAIkQAIkQAIXTYDl6UWHj8aTAAmQAAmQAAmQwLUR
YHl6bRGlPyRAAiRAAiRAAiRw0QRYnl50+Gg8CZAACZAACZAACVwbgespTx/ubm7uHiA+D6uXi/qJ
7aXL29XNYrG8h+718H65uF1b60DmetlELrDbZuPtIzJN3OUfrG/V/5crZH0MzyRkS4/NMVSO6HAC
SmIxbpWk32gyPNzdxOQZ0XKZTT7jnI3kSRfFVjeFNszHrf1P2eF+uRidFL32x9q6vl2MLGibzfnm
1cyZK91uVm8fy8XHpTw8iEyX/iRH83xvi8/RZ4Sk1mhuPwmL2UJff/7O936S/vfh683m9efv/PD1
17PFjHT8+vV73/vpJyMi3nw43j4iY17TLz75obkwqnGeGOn15sPv/UTcv6LPdZSnrTrEVRs2Nrma
CoU6z1PjJuyjIzIf7pa6esoKqOrgeN4qc6H5g4sU4D2WNzM3uYOZs15ObZZbS65uh/MtI54KXRfF
wRQeSFA/NNPJUNT3ytBe+2Ntvtry9LFABuMuJt8Glk82HDqRJpVtemk2PeokV79+9dN3Pnrjqvcv
T11WOtpanm7tgAJjQfn16092LS6ljP78CxR5XcdXUJ626jNOp7Cd5F3n7erm5XI5vLkl7fWm4KjM
GHlbLGLZFM2IQy78LLhm7h/Nqcj56dWGFBqo27oFdjvkbByIvvyGhK6L4tw87YcmeTRmeG9G9NrH
ZMxpC9MQBvSNh04nOTz2zL2YfNspGkeOby/NdrL5OJ0vtTzdv7jcX8JxIvRYLVdQnjbXw3RKW0JY
H+viNbKEDR7lT32DtMXCDpodSfVjA3OO4+7lNnR9vB5oH8dWD2KNXbu9XR7+Skv5+IO8Ghf5//rB
J2LiiH6svQRuVd5eWP6ZvPvRPuNPtUL+uBa/T187mGF+/z4kjFvilh8H51NqScVcRaHxCveknc/o
Q3xnVaKzbtEsrAydRVDesblZvd2myA0oht2VWL9c/dReXPEgFkiSePqBr685PerEN6ssbcKCsMVf
fE0IH6d6gr1crfDhvqlbLFf+0kjhcFeSvMExJvbMp7wM0Hxok9q16DTHJIGrlsxF0b3xwXdvXONS
COOlJrU9ql4s1FNLGx9bDNSp4c5qC9rnxzXf/Lwd+XB8Y2TISgfOSrnNZqZY7zZqvPlejTcCrbOz
suR0geBOsbktYvfFtperbbMGcrKFCVuKASGHzbaFbQf64NEujfqoYDeysebpAy/I6WPJTRXbevo0
dzn1aLQ8/eLVT9tz/3Rjtb0PMP0kHW+C+vP39169hof7cu+zqajvEkixGFvKA/fQJ9ge757iJX9v
Ae10S+TtBe/zk3LzWK76w/18tUovfr02O8/9zuuzK091wxsuYbZAeJr0izCZh/XdABWoo8JM1sar
+atrIqwgx/Itlqe6MtYFERZxXcXqbqp2Srjbsbhgq2dqt313UwoF6zb0EfPnYXXb3sQVpa0ukQ6u
SIxv0jxhxBLdtq8qbdJUqiiap7IzASIM0OBlG3ifsiRe7Vwj28pZAKu1HSDF7VYTo9QTtU8xzOrI
qdc3o0f3SzXV14FWpqgWcbMee2RF3bS/mzHJkFStGIJMVmdrDd20l0mhllQsYHDJQ59Nm839Wr5z
DltC1o8meVEE0dTvGIClvDHleW4yHcvm4X5dXmSPkEtPX4F3mCzCWf1Vfb1ZX6siZ6X9NwF1N+Vm
it1uvPnemywwF5pttjpBXoki8L2cbpk1b1fL9psNkFMqSM3VVuZqjCyHi0kYfV3NYI4DTzscD72Z
HVCoDWMxVYEj5en3fvLeq1/I9VIytqLt/2/vjJXjypU0raegsdstUerxdm+3WmpJ3Te2RVLjzp2N
1do7M4yQZuYl5MlllEtvLZklRzI2ZKylkEMGjVlPvoLB0EPURgLIzD9xANYhWVVkFX+GoolCAYnM
LwGcv3BOsUWxqSCb8wCAyVPRfMXUbPbl8MMde/b05LNqQVSZ1jE5126jfqt77lV+p+1n9ORjerg2
nJ6CPBULpmuxY5LU5cFcqQ8PRYBfN6R4y+Spb8GD6R6/FJXT45tjSJdsvrps4q6RNw644IV+a/7C
dVW+kK84zE7uXAdE+OitgNc9sc6p1lsDzZLtkloRfg/mT36346SoXT30MsesJnU9m76y600YaQ1f
VOgiK0NkhRxhawEaq5Ad6Wis0Hhn3Gogf8Qc+4oTPlwNvbLsb3sSbSLlN21Qq7ea3KAVr9v1CVM7
aSNaIfcC56O35kBqV+4RBYbpjWENeoNlD6QzUBwxCF+zg4qtVEZrsGZx+aS25y8WIeY/aZuqWMF0
GgyKHtoWF+B4Rkaajc2azpsbblwccdRhckaDvrlBXCkMTEQnBAu3mv9hCLNjhdJtvttov5SHqY9m
YYrqp688GZqfImazhjyFr0Z9OSz60grJje/v3piAG/qoKjPov/z1o0YvsKwdByahzeC9fNKpp7yx
pfrZ1NPBPZen0QJ+Vyy617Q58O4aK26NPJWNpv5Mj5+wjw7s05unIyzRXC1Lyy6NUgVXhdRi3lXH
ra9ZyXai7Hdzk11mSL5Thx08PJIB21ydl/Jgcegr7ppZ6JvCqOKtQot20imFXhH9ZA4nmPmpjomF
6gfbV+Ot1csKXWSFwKv4TQ1otMoKLsDy1tFEz2n0rmLuUI0rHwkEaZ1Z+6gQHRuuZXVjcJQeclc+
qdajqJNWHyZJinwQb7ibWT4DN4JKI57jfOgiDKuffJmXRMiPz7phjRNIGN1OnuRhIOPsWcvdLeNo
LQsOMWjbabQWegXg2Qt3uzLrHz/sjZqVr3qZS+aAtU8FS5y80mxK2axZQTu6z2h2jPPWPtp0g3ih
iW3O2cTmh5Bis1vtW3rk75+OUmhqp85sDwW4rWji77IWCvn2FB21XsTsOHmaDgvt5nsq6PFn9E1e
qYyrBZzWS5tgUE9YsUGvzXC40jIZCWbzgwFvTwYxZhtteeo6tYzkzaJ7dXRNx66zckPlaVweZVEN
p7utxtjeElLLU12l1kAKsbLuEpqu9QvbQHMU6yZPNU11guz7cNpAk1TFq9XlN1whsKNPJGiQuti4
tsU3PxFVw6znywpdRGGIjHw/SGMVV1lTK2QNgVJDx7URdSAlrw203ofTGv2NEYVeltboYetjz/x4
m5ZDZdZGWRD70MlLuI2A3qYNqnPslPvZc0oQLGjWXNue5IOBsuDGxkWGwjGYDlN+S3Zy1tBaFbV+
0qj6tl/WfQeHryk75bMBDhrNhSjaU65Kgajellmdb9F+eGVuROdh6uLk7I4bfK6uTfND6A5hZq1Q
nLerQNftEGX1wlLfPNNxmFW3+uU4eSq35vt6tLKpMs6FXWogL/PpaZCAcFqpHaV5r001VnlpUbT9
bErJ4J4PB/6o2+U4Gd3DU9W2S9deu6HyNH34az0IZcDDcrKTLXs7F+IuELpAS6kvJyi23uDtjSnC
9S99lG+c/SwzVt+pQyJC7mD3lK3Wz0Xg4Fw+pZiIgdxB3xSGXTCaQbkPuKenQf30VB+yzCdP5Qrt
7cWT/mW7Oex6VFbonJW4H/NYFk4nrsDK51v7QpsfstRVj1NUHLB60ZHwTCoewvlwtT8YUVXWtZ8+
/aq15ryCytp+ft22HJ7kS6NkaDDZMlWLEe1k1d6Qm+iqlP2x15iv5BqSgXHjQLaCJMU2YoI/mOfw
R/rMiBVEjHq+MhvPGrBrVvqRXtWyteqvdHqa1/UIs20/wT13I8L3xYKnp/1xLQUGbc6qweFSpvST
TLj2udnBg23FPtqBNQ7+W7Sd1NuE8YZ+YbW6VsGEXXkzKjnXavhMZ8uOPFpa7rCbjJOHNfVkFJ89
tQblD45qm6rengQIdsrgpydv8wOy8lrEZTHS9jNY+HL5Z0/NJZenjeXW4rP6uo2Vp3ibTDdixIvL
yT4CYgMphyWadj2/u1Xflsrv2FZVm9qM1zKPy8/KdZXvdJg72bVd4vg2mnfzqd1A9zZ+V1Ei8Xro
m5OVLsP2PZ4qgeiDbNn5Z/9gol9STg3yt8LTOPoXc8NlpgyRO/uFpBpr7V66zkiuIyu4dMl7ji7c
ZdaIXRWF7JwjT/en+RvrghSnqPigP1YfHQN/hjsGTgbI2mSijxTn09P8pWkZSC/z8VR1RLzFS7Cc
P2zn+oMj3JTck/wF7TJoxV/Fa7Gc9qikIHNFWQLDGs1C+u2ewySPA2GO3Nr+VL7ab8zVaNLZ2QGl
rdbgLWzgwdqX/duyb5jWNCiY9VXvulAds98YjjSzEKL9cWYbzttAUtDYK23tm17YN6RHc9zgc2rj
u0onBLezPzmA/2NIyWCaLcGszwT/+N1zu3ks6iOCJPUJUz3vUWZtc3/ICMfK0/wYgN3fh+dTs52W
PC3qU2+yu/qUQYupz2/18dbypMHrD/n/C9Brk4fLf0hfjag2Te9Bx2IqVacvNuVB4UFVsSAv/fRU
GufnWVNjlc7w0EL2QHU85Wnmgf+9gy/OLy9XRIe1d74jfHdtCMj1wCTC2nhNR69CwK7xVzHCviRA
AoshcHSw4ec1i8FEKx0CyxV+nUFz9U2RpyMeDDo3Dr55IwlQnt7ItCzVKcrTpeKlcRIgARJYHQHK
09Wx5kirJEB5ukraN2MsytObkQd6QQIkQAJXJkB5emWENEACJEACJEACJEACJLA4ApSni2NJSyRA
AiRAAiRAAiRAAlcmQHl6ZYQ0QAIkQAIkQAIkQAIksDgClKeLY0lLJEACJEACJEACJEACVyZAeXpl
hDRAAiRAAiRAAiRAAiSwOAKUp4tjSUskQAIkQAIkQAIkQAJXJkB5emWENEACJEACJEACJEACJLA4
ApSni2NJSyRAAiRAAiRAAiRAAlcmQHl6ZYQ0QAIkQAIkQAIkQAIksDgClKeLY0lLJEACJEACJEAC
JEACVyawNvL0P/hDAiRAAiRAAiRAAiRwOwjMrunnzvhxr1FEj3eSLUmABEiABEiABEiABK5O4BqF
H+Xp1dNHCyRAAiRAAiRAAiSwaQQoTzcto4yHBEiABEiABEiABNaaAOXpWqePzpMACZAACZAACZDA
phGgPN20jDIeEiABEiABEiABElhrApSna50+Ok8CJEACJEACJEACm0aA8nTTMsp4SIAESIAESIAE
SGCtCVCernX66DwJkAAJkAAJkAAJbBoBytNNyyjjIQESIAESIAESIIG1JkB5utbpo/MkQAIkQAIk
QAIksGkEKE83LaOMhwRIgARIgARIgATWmgDl6Vqnj86TAAmQAAmQAAmQwKYRoDzdtIwyHhIgARIg
ARIgARJYawKUp2udPjpPAiRAAiRAAiRAAptGYHPk6dn7/f33Z4P8HE22JkdeezZ9tVV+Xk2HrWfH
k60Dbz6weTTR3ths9m26r/WTYx9s80pHBxpnk95SAwbI4gSkaanDXt74sU8WpbbVmh4yJ1v1s7P3
+2sQpgOCxWUByzzpBuhdb0XpaLK1P/02DHXRfGSl4KZnI/YcsAbXVjg62Grt3rU/sv8sYuH7PpYm
anP11WNf82u59Mz3s2ySzWm2zABk3JUPOjeg05OXrz/cif9efvw+m31/9+bD25O5/VfZQFxSVz+9
O73K0F/fvr5p0V0ynM2Qp0U1VhucXN3lB3bq44mt8NZOh9eJhs2z9xO9ukhLHQ42jpu5Si85N+pu
wlNVaYte3X7Br8NFV5gr/wWPs3Bz81Qmzrow+LyOofFNelHJoG6AN8nnq/oyIlkVFhtx0XzCSrFR
ZrNZzwFscz3lkfJ0Uc6teLhFuT3PzqIn0rnjjZjw5/Zf5Zsnn++8OQHJd9PkaRSUpyfvLiqdRYh/
/rJKpCsZawPkaVGH1Y4ji+fV9Ky7U8/SkSco11mqKfKrbTNk5HhStFoYYqUbRPBn+S8CYQt/+eOW
EQLndTpZnLePd+fMvI4rQ3/RgSoZ1A3wonZvcvsRyaqwWDSL5hNXig1DeWoowlZmtWtf6E2wpQQ2
YsIvZdzLGL3h8vTq4vLqFi6Ddel9NkCeFkbtHae7U8/kPr6eBWYTg1v5s7bN1BoWp1xdykneOcMt
PZXLH0DuVhdBfw6ZZfkR2QL/JFXL3WT/vOH37yzLeLfdbhF2zaa9/n26QZ8bS8vyowe3kvryYwYH
8aOrMuv0B41Mjt2UHfCHjnL0VX6042Ckm1JRXSaz/DL/8SagR928aZsITBPk3KvZ3izrMpzhlIiV
nqbsVUKWtoJpenClwIc0aToaoxhvn2x6B1Y81x+wsD/9ZnaMA3gyS/uS5lnv1dg4s1l6UsIM+3NI
MD0m7/HmvhPbfz+1pwsuCRaWgAYFvgE0m6J5Dhsfq5dIDNGr6bR5c38wXLYWIQgMdcaDbU4nEHvm
/AAAIABJREFU87WzfUF3v1WdsiNIt6rrRZH7x2VbSKFZcn0jit7GpAezMLrPUnM5uZGfHEvT9cjo
DXYn9dOcwVuIVThpqc4NYZBZS6jCD0veM4v3uJpuW3xVYZD6gBH5uG/GtrI1m7XlqZxZpvvpeDPd
Kj+kxwAGpkpFOn/9mB4eSOeypx8/6a35fIr59e3rT+8+fpbKw6+pE9y+D0e5s9ksnp7imCfJgviJ
TkZT3iaPFc+G63eT9QTki/lcPMSBb0T51spTWaW4VzaPFjq7WL5NZnsiXjNwV7oRCV6wE7oXRHQL
HqRtTjYsxSvlshnJVqgCVK9e4bPH2fGRPGQsntv+BdlHs/mSWfa+tLnbPggjzmZHU3nKGYyc+3il
ezWbHR1oCOJPLosd9w0Ggo7ijF6Dw5WgzeqaaysPc4AlcLmwlXzNBygEmlkD4LBIz46OJdWpl3JO
ui2jA55lzRakaVYr3jxVtPu36VQ0wXz+tfGD8mi71Jd4s1xQy3ECgCfQQCe25/PbdFKesEd66B7S
xjbyYdtgJkSt5XAeWLGmlI6OBg/Zt+Z20aAQXWPZ5s9sgy2lMVyELFSgBoPFvg7PSjBnrE66uKiN
2dEMWuNcSAnNOZJlayqw9mSE2bqLcrYRIaI0XQuuNK42xnUnvhlSmIQpTJ9XY0Jo7lpIPjw3kqaW
zmFcOF23LUYrQLCzPNM6fDxNYc83Q6XQkqd39G74l8MPeusfZWLSl/BAQLSZBKKpTDy8PPmabrIn
meuyT9qb3v3yEZ80SIbFwgdzyd1WJ0Fht0yhA/hkrWhT07XQMWnW4k8a+oY9iVsA3E55mpaoKY+M
In4pKte1drFyrbKVH3UtXicK4o35BVeCtDf5HreSEMsFIJ0c+dC4keXUpJ0Rt63kXZ1KOzsXs7aZ
4p6Le33rHD12lEtsNaOUCnLTOtzQqxB8LO9o3qb+w2N+MHsTigFd1vF6BYWHakYAdAL52RvIlAEX
1RXI1zzNiBUyRRdbAW/dXRqHBjJJYPkX4NE4ZMHD7GHxEeMsPZu+MgUJBrXoI0b3/LElHzr3cQe8
7wXBDgNXd/B3ZyDXvh5y8axxeiptquGC25XnVbD99Shrz77iaZqy6t53FeL0MOOHVdhGRpqtmjWc
B2Ix3TBnwJ/YBi5SYEcigS4pZANeoy5he/vYwOor+4AiugRuA9FSHKS+wyca6a+Xljx1QWbaLjY7
/fjJBOXARZF60YKpwNw2qttoeWBNK/JJp4raL4euaNPXudIQTVMWglhy36IFOEWORupm6s61/759
8lQmup0BOP+jg3alLdfSVHRPuGbEVZquZOFi6UOseck2oBxHfy9YUpy+Q+HmlT79J8mq/9HspA/r
mqx60+xdwiGbGO+ge7gJqyO7aA4IwGYWZ9q+TKTauKkf6yiF6udGzzFEF88pUVKUBEFgA4BGQID2
2xe1UbpXo7u4DNZcf3iDlLZBd7wNbc4O+Efjlbf5809tWfccmwBSqH5c1uucCtIquVEPbSslCgLU
IqHLWLAlleLhIHzxLm2t6j+cknpji7SBot5pxaICUQvB7SCtKuDJi8F0UoT+CdBqqk8gWcIml8xn
b6ulEIVmU950P+sU2NDRbD8FOha0jzZBork/7kDpb92tkN/wLjlkm2/BwvzMmh0rqOPmrRXSO+C2
tgy/xU+faW0+2kbnnD5pEAzJi6jGUMDJu6rt4Aa9foleleLAokvA8lY6g7zjX5kP8lQsd01Vtu2M
Uwr6wEApvD2ZtU1pCMmW+WYFHcKaRSCUpwrIf9/x4rzSGBHdnu62U+chqpc2bqe+thkXWO4dlnG+
iOpmauY3olDtO9coT/OFMF//5rkhmZWWdSq/Tffz1SumHrIZ4q2754tx//qHGXebYSyzX10w3FXv
2DhNwRFuWtlCy47FAA2CpaDvvhMYAVwaS0bicLAkgzVsFtb1oDtY6HsKiiS3t7lh8VZyyh3wEVHi
NMfCeejhBP+zUEuC2IdOxuRlSzXOS4SCdY8GJ9ZwKC6tfAK4k1JvkVoh25SXLXlaRrTh0BqikHbz
oii20q+6b9HWfhcFtFrlajBjz/JCe2ngflYpSAe3SQJGs/Odh/Yx3RCLYx+obdsnwY54Cl3ct3NC
8PYeY7BT2U+zMV8Qu25L/95PSX2Hz9z1UsxGNdaTp6JiLyIi/fTUvbfHA4I8Hehj7zAsmQD9cggH
tNaujiW9YbpTXrkqrXXn6cnL/EBCNFI3s7GuuzBG+C3Jx2uQp7CSQ1B2XhVq03KFTXOw8HJr2PTT
am+cwlZm1/SlXSTK/msX4NXEEzd62RyTA1ZAL+BPgOl+Kp+/y7U53omD5zFkCDsZ0o7Zbug+fPYU
B6/Lvo/jBu0GZV75w21SXy6T3lGuIms0ryI6FyWJjOdRAof1VXMLl3l5s9n+bKpPedolWbgpw7Ak
K7aGFPNSzkpVqTSePW34WbkKiUtKpTiTTvr1s6u0KSsINhbwsDUMtEw0yilmmrd67iVtNHwZ0QjL
+tUlgB6OBns0Qee1XPxEhj63QahpBrOf4oxtINLe/dTAG8OZ20BPm/emR2uOtS4EiZsF5YlA5jZW
LoR5jlLJ/Mxs/bC5a1ZGsUxVw6SX4AaiDhcp9CekHnCBHbGLXeSDsc4iSBwO182s25GxeqvPMu5u
V/4oWE2EkuzwcZ4tZlYX1RgKOGni2s7EpfWMBWmZb+K7BJQWJ59Vqlp9lKfpy0/2qED97OnpyVv5
a6z5RyyUlnKvf/jnosTJ2pSHEOSpyOLes6f24OxsZvIUL/Hqz3X+vnXyFO4DSFE/yJpwCcmIu5is
kPijvbKsSe+du8UE4+v4Il3hMgO9fq8sDJc1eciUjrSLgVcqLmXbKj+24eLd4ZAma/xqKt+HLTuj
b7glRGtmsiapRh3Hd/YKie6wRWCV9gcTPXpJG3T+Dq+8p5MKz2DK6Y4N5W2qsW7GywpdCtC+SRPy
mBSbhuWZ0jAAXa5qtIfs+5xM18hiF81a4/33R51nT2Ug6K6oYY1jjtTTfKWXEdNw7uf+wUQfbk5Y
8t+CSA31e/eBDwwNAs6G8Um4PzmwuYo31ven8kVsQ+GeTI49L5cCm1Zchgo6Q12Dd31ug8pJXJ15
EiglQwdHcafNJsGgDlfcDrkQG7qcPVhNhBjRd9VT10Zek0owogOUSpxC0Md5dk9PpfVIs0PnYahk
pLiBejHEEvzJH8wKYQWYnYFwQpe2PEX/IbNmXye8rpSwfABd2+3q/5uTQwZi7nmbz5z1ku2NladZ
qtot9epx0vxuS56KOiy9VDhW8rR8Pb/crK/PaNP3qGoL4np43sAFJbRXU/IFr/JXAkwip+Dz86zJ
uPpWP+1AeZqnCf53waenaPpi5bhsLtaXrUmABEiABEiABC5B4Hgy/PBwCTPsspEENuf09NLpwY+M
lzbCjiRAAiRAAiRAAiRAAgshQHm6EIw0QgIkQAIkQAIkQAIksBgClKeL4UgrJEACJEACJEACJEAC
CyFAeboQjDRCAiRAAiRAAiRAAiSwGAKUp4vhSCskQAIkQAIkQAIkQAILIUB5uhCMNEICJEACJEAC
JEACJLAYApSni+FIKyRAAiRAAiRAAiRAAgshQHm6EIw0QgIkQAIkQAIkQAIksBgClKeL4UgrJEAC
JEACJEACJEACCyFAeboQjDRCAiRAAiRAAiRAAiSwGAKUp4vhSCskQAIkQAIkQAIkQAILIUB5uhCM
NEICJEACJEACJEACJLAYAmsjT/+DPyRAAiRAAiRAAiRAAreDwOyafu6MH/caRfR4J9mSBEiABEiA
BEiABEjg6gSuUfhRnl49fbRAAiRAAiRAAiRAAptGgPJ00zLKeEiABEiABEiABEhgrQlQnq51+ug8
CZAACZAACZAACWwaAcrTTcso4yEBEiABEiABEiCBtSZAebrW6aPzJEACJEACJEACJLBpBChPNy2j
jIcESIAESIAESIAE1poA5elap4/OkwAJkAAJkAAJkMCmEaA83bSMMh4SIAESIAESIAESWGsClKdr
nT46TwIkQAIkQAIkQAKbRoDydNMyynhIgARIgARIgARIYK0JUJ6udfroPAmQAAmQAAmQAAlsGgHK
003LKOMhARIgARIgARIggbUmQHm61umj8yRAAiRAAiRAAiSwaQQ2R56evd/ff3/m+TmebOnP5Nir
Z14/OYLqUjyebB14dW3z23RfbWIz6etv7U+/De1uSM3Rgcb/agqsVxKdE04+rMwBGfeqOXVuym9r
qzX9ZmfTV1thuha0vfqVkB87iDhZ/0ia1sL5GOQikh4t9l514Zy93683mZ6NMfUSUXPKHU2uPL3H
jH+JNkcHW2FL75iQxQWbdqfViGq/NKRZvBCbI4a9SpNRsZed86qb2MX9PJpsNXezi1taZI/v7958
uPM6/ntzcjqT+rcnixypZ+v046c7h197716m/uSzR3Q1y+Kb0LgRP5shT2UZbG3hXnY2PVD9hFca
KMvuX0scvFQMbc7O3k9UesaFJ/va6hf/qicQEhu1LS7Wwe71dbHDiLUFKwN3ECeY10Kp16BXD11v
ULFSPOvh/DKSPsJmF86IvhdJeXf5VMm6iM0ltx0pTxfmxfFkcEVYmO3rM9SdYEtxqTvNljLa1Yx+
ffv60zsXYjdInn45/PDy4/eR0VWC8svHC4vLCw030quFNNsAeVqU4jl7mb0Vd/zBvvxtul8E6wVs
ztbxcOhSc8cwSu/Vb+Ur3PjiPLkUrHanuZeKXoNefXuY666tVtZ6OL+MpI+w2YUzou9F8txdPlWy
LmJzyW3DhrPkscT86ve0FQQ1W21+u9NsFaFecIwNkadXF5dXt3BB8mObb4A8LaH29zK7AFih3aW+
lT+bdW3iInRROxb6uraTQ+Jyf7BLZnmxIXNNINw8kuTaPT64mV4clot9+bFbnHnjLsfkdv4NffOd
Kdzf0yjFjp+XZyVhHft3JMMMBJfsFlhu4KPojf7QER5QcR+WB/7ilpGYHEanJxZqzsmsR2q5q4YD
Sjlx5aOjd9ebsMbfZmkL1OqSDv6U/EIsczMej/D9vnM74y3Ls5lIE5307/HmvmPffz+1m/ti5GCa
ntAoo4BZXDW6AMrDVD5Ka+b7u3Y2mQY6Mj7Yy0d8NZ02b+7LPqBBpae2srVy00Pfkt9lYniwWlNN
MX3ZlqfQHW6RpUEDK7Ui7SfHGrX4YBYwd1Ypt/30plxOOpqFZjrPdSD5bbFnbX1ku1xuDKwUPhj0
caVyIjNkKzVLL+eG4HNS71tCTUKd7RR/PbN4nzMxr93GCLEM4ejGmAjodPDH8twTZ4uWUrkpT7++
Lff964PVcvf83Fvn+a69/DcbwcZ+//3zO7i5741f50cL8NmD7APWfP4yCKM6PYX3oWPbkw9vT6BN
Pks++Qw3943GhzuvbWjp8vbE3kJQMPgiirdAnspMzXtrWC2yeYftr7qmCt3YoFxoZS3gUwHpcVXb
aufsgIvI2XXa0GWPV5QV+QN7kz/IIZUpuXBpkVzoVn72firXB3iQwzf0cvHW667PE9j00yzQ67fM
H7OcpE/pm3fesmOKnd6eiDPQnz8B99IQ1l2iy6agI/iZr0mrfgh4fr6rpZSDKqwkO2X5SL1OJAgQ
7AMZSMq4pGOCAFTWDStKOky2tHvoE0cQV4ajE8YzDvHOz3hzLkmkehXHFCB22eJsuopXNvc6qwa2
xLOjY5l6wxpIoBxJqg/uTxpIfYP1AljSWSbqmGIU58nRUZSnPq7NkCQNz59j3gv2EK1M3HQzwRlV
sdL2do3IEyxPtpJc2JdGmsVMYeA+mriR3ROMupokfMUbTk9746Z6v6ill+Wi2QthdnTQWETpOxha
D/cVEy6r95mQkDbd9hi1hAQg9ep2RKFjNXKq9mYNeWoi7MvhB1Vposb0VntWZmahLmStiY1LWbSp
yrvTk5evP+izp9/fHZZ78agyw3Hm6cnbcqMfPQlDi7evzcn8FjYGt6Mn79KztmE4l6ciQDWWGbgn
1iwcABVcWsiLTZensPfpKY5zg4017Ya2DWmT0EAr5Teu/7Qv6BYsS0h3Q+ywCWVf//HStaLY/JIT
Bkw58hOguD/mlriv5SOlvHk1VFTOI0aajqDSBaZ2wM3G9l4fHJUXnbfcct1AZ6DXa42FpspmMNj1
VXTBiksWrBWyo/FbianOo04vjybh8P78pKNsyuQzqK5vMYnarHISMhjbV67mkEBilgr95WbrjpZf
s281qfPZ9NW5GTfL1YXZ6q1QfNFI8ShO3qocK/BBY2l/+DRYqjq/LBALLTW0gazglgd7qbSpKqO1
6HkVbGOOga9pJ7fjV9kKqu7AZDCo2QlRBFyWkZFmq2Yt590NM54cMdS+feHSK86aq1bIb4SX7RAs
XCn4FIrEzI4VdGBU1aov40edMEB6IUZi6iuzPkWh2TnrpSFP/atRIiKToLRC9ujksyrLoYdJw1Xn
lOlrRkH/zQbNsiUYqGpvI4lGRPv2hhSyatSzTLAmbxa3QadC3zCcyVMrlJaGKxqpBgKzVy9usjyV
dQVTf7DhygVMZWUoG1ZY5FanBdsOrJDfqV5q8/X/DXuQBHPOsl9OrNVm7YPIZ3Hfjxr88wGAXXfs
uKiKyOeAb/oyijYbWLbpEdtXm6Y7Ws/AcDnMirnuqw+cWL0Uqh+bwzjStZaVWHHCnE+vLY8h/BRT
WK2ZfBWrKbO5Se+BqnxbbtLjxCgnghrS+Rk3adsLJGY4wBTL9dCIPXB2ILFLb9UIMfkBI8MacC74
n9dpHMimh3uSu9v6AmtFfYoDepoQrYl7YTdQ3OU3uB3NpuxU745d8mjJwpFKXb+pgVmzgvYzn0Ms
IafJ/co9zHK0aQZ9+2o9XKvNgs/VNtUOQc9olK4uTJtmKXr9OyR1Zv1uRtdtRRN+61wqqW9OUW2j
nm35QXKwNWudnjbkqd+U1y/797/YXsvH05OXrb8JEJoF++WENejF2SwfjjYeGKgCyjI0q+pgNnku
npjEDD3DcKpKg5PS3FSpFZIRytMxIlpXWuFevcy1sRLWTFhUxcKcD3O2rqpnT63ezWxGCXBJQDdF
nsrueTDxP5rTSGXP1WFE5eNKuEiYPB1Yto83sX2112P24S2cJ24ZGqR+OmO93gZFuzes3AUrflqw
1cJpxNBLXLrqz0m6685ouOtbTKI2M2/VivGP7T1B2lB+hzYXy7j3tRHRcig3LWNllib57LmKSF4W
bRG8nbfApXFUS8OaSujoZPbQUhSGzgo5OHnpQjMELC/sVC+4XUU9f46B3aovzlVtZbkIg+q76XeI
oq3tqhSkWBr3bUY4725E5w11kKfdcYPPVdZGhKCLpSZmZq2gpOwkuOu2tmz91tS39wfLUasr1lVa
raO6isTEjt1yLelU6gX9l+VmPgTVBmIRdB62x3Jtv+GIBtV2O8ao3XEIOWfN+tsKpdn3d2/8WdiG
jldrC/w9RvgtcDg0dQdfnF8e4yUsSLgEVnarvVi317ACoUuwOfMHvNKat61T1p5uo1gGQxtR1E1B
gmldipYc5GBvlfF0d/NtOt2Ag5OV+tlT8DJ9+MYzGJgPZgH2d0mu18vZRj4A611uYahS9G0aHE7d
i6k0hLoBz7p5R6gc2r8hNXC5Eo/A+XABk3pdOG3P29NsXNI7oFaa9DrLONnmZBwmFcy0JqZ6FJeh
dncozasyohAw7OngsylPm2sc9sCShWEN+ogzwcdFh3F6iDNh8rufavRoggxT2a3B9q7tm3OsWemb
ifYtU3fEkocecbaHK4vO2xzyCLMdP2E0j92Ny9tw5cIUpGmgAGGBSD3chAkv2yHgcDI/zz89xS2u
3BUpw6EdcBv8t2g7qbcJYw3nrRdtqEquvI7SzcWi1NsjmNoXf3sDkY/+x6rk2c0s41J949lTlJvY
BvQieiXl4c39L4dqGU9P02Hn0G0cZXZ6culnTxvytLH6kNJlymOE32XsjuizbHkKh/t4K0rmbvrx
ad3+EBYXuQSUtvLS2/b3FGla9umdWD8Cw1o1AQJFma3OfZn9+DM5SjW6q+LO6+mwqx14bhIzbdzv
dT6Ui3cOKImYcldosL8XL5yAXySkd9jcIx98qwwhjxr4QWBqkL9CK6Poph9tynD249M4DnWdr5DY

--_004_DU5PR08MB103973ABF5E6F12853F5D24E1CEC12DU5PR08MB10397eu_--


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 05:54:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 05:54:56 +0000
Message-ID: <59d67a78-14a0-42ac-b0dc-3d75c109f767@suse.com>
Date: Thu, 27 Jun 2024 07:54:32 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Disaggregated (Xoar) Dom0 Building
To: Lonnie Cumberland <lonnie@outstep.com>, xen-devel@lists.xenproject.org
References: <376f0fe4-4ae8-461d-87f2-0fa2e6913689@outstep.com>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
In-Reply-To: <376f0fe4-4ae8-461d-87f2-0fa2e6913689@outstep.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------RCXIz4VorD6UABgzQQMbW0Ns"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------RCXIz4VorD6UABgzQQMbW0Ns
Content-Type: multipart/mixed; boundary="------------HMShch6EvQOvYQBrlkXIKxee";
 protected-headers="v1"

--------------HMShch6EvQOvYQBrlkXIKxee
Content-Type: multipart/mixed; boundary="------------I9lmmSluAhSoNaEjVDMU4uj7"

--------------I9lmmSluAhSoNaEjVDMU4uj7
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit

On 26.06.24 18:47, Lonnie Cumberland wrote:
> Hello All,
>
> I hope that everyone is doing well today.
>
> Currently, I am investigating and researching the ideas of "Disaggregating"
> Dom0 and have the Xoar Xen patches ("Breaking Up is Hard to Do: Security and
> Functionality in a Commodity Hypervisor", 2011) available, which were
> developed against version 22155 of xen-unstable. The Linux patches are
> against Linux with pvops 2.6.31.13 and were developed on a standard Ubuntu
> 10.04 install. My effort would also be to update these patches.
>
> I have been able to locate the Xen "Dom0 Disaggregation" page
> (https://wiki.xenproject.org/wiki/Dom0_Disaggregation) and am reading up on
> things now, but wanted to ask the developers list about any experience you
> may have had in this area, since the research objective is to integrate Xoar
> with the latest Xen 4.20, if possible, and to take it further to basically
> eliminate Dom0 altogether, with individual Mini-OS or Unikernel "Service and
> Driver VMs" loaded at UEFI boot time instead.
>
> Any guidance, thoughts, or ideas would be greatly appreciated,

Just some pointers, this is not an exhaustive list:

- you should have a look at dom0less (see docs/features/dom0less.pandoc in
  the Xen source tree) and hyperlaunch (see docs/designs/launch/hyperlaunch.rst
  in the Xen source tree)

- Xenstore in a stub-domain is working fine, it is the default in openSUSE and
  SLE

- QubesOS has a lot of the disaggregation you are looking for implemented

- I'm pretty sure only very few changes should be needed for the Linux kernel,
  if any.


Juergen
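For anyone following the dom0less pointer: the feature works by describing boot-time domains as nodes under /chosen in the host device tree, which Xen on Arm parses to build guests without involving a dom0. A rough sketch follows; the domain name, addresses, and sizes are illustrative placeholders, not taken from this thread, and docs/features/dom0less.pandoc in the Xen source tree has the authoritative bindings:

```dts
/* Hypothetical fragment: one boot-time domU described under /chosen.
 * Names, addresses, and sizes are placeholders for illustration. */
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x2>;
        #size-cells = <0x2>;
        cpus = <1>;                  /* one vCPU */
        memory = <0x0 0x20000>;      /* 128 MiB, expressed in KiB */
        vpl011;                      /* expose a virtual PL011 UART */

        module@4a000000 {
            compatible = "multiboot,kernel", "multiboot,module";
            reg = <0x0 0x4a000000 0x0 0x1000000>;  /* kernel load address/size */
            bootargs = "console=ttyAMA0 root=/dev/ram0";
        };
    };
};
```

Hyperlaunch (docs/designs/launch/hyperlaunch.rst) is the design that generalizes this boot-time domain construction beyond Arm.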
--------------I9lmmSluAhSoNaEjVDMU4uj7--

--------------HMShch6EvQOvYQBrlkXIKxee--

--------------RCXIz4VorD6UABgzQQMbW0Ns--


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 05:56:10 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 05:56:10 +0000
X-Inumbo-ID: f16898a1-3449-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1719467765; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=fBCs9/ZjgkQmXbAj8cUymgYTAuUAMvh7uiPlEd+a22Y=;
	b=r+sS4r+HSBW/AUM8M+KajL9xMvsz36tS/7PTOCfi3uJMTsHVRtJYsMwpWIWwyHW+YT9+QU
	DGDYq6EZFcQnBAigDoak0De0gHQFcVzh1fSDr0KrPnCt1MjLD0Xu0WOmJlwcnlhj5wWqZo
	ILLsrBy/fQFezwd7tK10+3ZVWL5f1Yw=
Authentication-Results: smtp-out1.suse.de;
	dkim=pass header.d=suse.com header.s=susede1 header.b=SlLy19qD
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1719467764; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
	bh=fBCs9/ZjgkQmXbAj8cUymgYTAuUAMvh7uiPlEd+a22Y=;
	b=SlLy19qDb+K0ebVwAlBFm/ey1bTG2KHPKY4/wqK32L8sqch9fNawce0eHhSp/OC1a8aU2j
	GJtqku4bTJGDJp3u3bQR1bgl6JsoFjISnVZ1nSdjjtMtb/80ArZapzAfXGMrLMzDyfq36e
	5tAPxHm9eZMqAi4vdx3P+FzVDuRal9A=
Message-ID: <dc5e690d-39fd-489e-93c4-6697c8f9c8ee@suse.com>
Date: Thu, 27 Jun 2024 07:56:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] MAINTAINERS: Step down as maintainer and committer
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Dario Faggioli <dfaggioli@suse.com>, Nick Rosbrook <rosbrookn@gmail.com>
References: <20240626151935.26704-1-george.dunlap@cloud.com>
Content-Language: en-US
From: Juergen Gross <jgross@suse.com>
Autocrypt: addr=jgross@suse.com; keydata=
 xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOB
 ycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJve
 dYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJ
 NwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvx
 XP3FAp2pkW0xqG7/377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEB
 AAHNH0p1ZXJnZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT7CwHkEEwECACMFAlOMcK8CGwMH
 CwkIBwMCAQYVCAIJCgsEFgIDAQIeAQIXgAAKCRCw3p3WKL8TL8eZB/9G0juS/kDY9LhEXseh
 mE9U+iA1VsLhgDqVbsOtZ/S14LRFHczNd/Lqkn7souCSoyWsBs3/wO+OjPvxf7m+Ef+sMtr0
 G5lCWEWa9wa0IXx5HRPW/ScL+e4AVUbL7rurYMfwCzco+7TfjhMEOkC+va5gzi1KrErgNRHH
 kg3PhlnRY0Udyqx++UYkAsN4TQuEhNN32MvN0Np3WlBJOgKcuXpIElmMM5f1BBzJSKBkW0Jc
 Wy3h2Wy912vHKpPV/Xv7ZwVJ27v7KcuZcErtptDevAljxJtE7aJG6WiBzm+v9EswyWxwMCIO
 RoVBYuiocc51872tRGywc03xaQydB+9R7BHPzsBNBFOMcBYBCADLMfoA44MwGOB9YT1V4KCy
 vAfd7E0BTfaAurbG+Olacciz3yd09QOmejFZC6AnoykydyvTFLAWYcSCdISMr88COmmCbJzn
 sHAogjexXiif6ANUUlHpjxlHCCcELmZUzomNDnEOTxZFeWMTFF9Rf2k2F0Tl4E5kmsNGgtSa
 aMO0rNZoOEiD/7UfPP3dfh8JCQ1VtUUsQtT1sxos8Eb/HmriJhnaTZ7Hp3jtgTVkV0ybpgFg
 w6WMaRkrBh17mV0z2ajjmabB7SJxcouSkR0hcpNl4oM74d2/VqoW4BxxxOD1FcNCObCELfIS
 auZx+XT6s+CE7Qi/c44ibBMR7hyjdzWbABEBAAHCwF8EGAECAAkFAlOMcBYCGwwACgkQsN6d
 1ii/Ey9D+Af/WFr3q+bg/8v5tCknCtn92d5lyYTBNt7xgWzDZX8G6/pngzKyWfedArllp0Pn
 fgIXtMNV+3t8Li1Tg843EXkP7+2+CQ98MB8XvvPLYAfW8nNDV85TyVgWlldNcgdv7nn1Sq8g
 HwB2BHdIAkYce3hEoDQXt/mKlgEGsLpzJcnLKimtPXQQy9TxUaLBe9PInPd+Ohix0XOlY+Uk
 QFEx50Ki3rSDl2Zt2tnkNYKUCvTJq7jvOlaPd6d/W0tZqpyy7KVay+K4aMobDsodB3dvEAs6
 ScCnh03dDAFgIq5nsB11j3KPKdVoPlfucX2c7kGNH+LUMbzqV6beIENfNexkOfxHfw==
In-Reply-To: <20240626151935.26704-1-george.dunlap@cloud.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="------------IfxS0m9nNb9Ph9w6eHsHq2ub"
X-Rspamd-Queue-Id: 3F3D021B61
X-Spam-Score: -5.26
X-Spam-Level: 
X-Spam-Flag: NO
X-Spamd-Result: default: False [-5.26 / 50.00];
	BAYES_HAM(-2.87)[99.43%];
	SIGNED_PGP(-2.00)[];
	MIME_BASE64_TEXT_BOGUS(1.00)[];
	NEURAL_HAM_LONG(-1.00)[-1.000];
	R_DKIM_ALLOW(-0.20)[suse.com:s=susede1];
	MIME_GOOD(-0.20)[multipart/signed,multipart/mixed,text/plain];
	NEURAL_HAM_SHORT(-0.20)[-0.978];
	MIME_UNKNOWN(0.10)[application/pgp-keys];
	MIME_BASE64_TEXT(0.10)[];
	MX_GOOD(-0.01)[];
	XM_UA_NO_VERSION(0.01)[];
	FREEMAIL_ENVRCPT(0.00)[gmail.com];
	DKIM_SIGNED(0.00)[suse.com:s=susede1];
	ARC_NA(0.00)[];
	FUZZY_BLOCKED(0.00)[rspamd.com];
	TO_DN_SOME(0.00)[];
	MIME_TRACE(0.00)[0:+,1:+,2:+,3:+,4:~,5:~];
	TO_MATCH_ENVRCPT_ALL(0.00)[];
	FREEMAIL_CC(0.00)[citrix.com,suse.com,xen.org,kernel.org,gmail.com];
	SPAMHAUS_XBL(0.00)[2a07:de40:b281:104:10:150:64:97:from];
	RCVD_COUNT_TWO(0.00)[2];
	FROM_EQ_ENVFROM(0.00)[];
	FROM_HAS_DN(0.00)[];
	RCVD_TLS_ALL(0.00)[];
	MID_RHS_MATCH_FROM(0.00)[];
	RCVD_VIA_SMTP_AUTH(0.00)[];
	RCPT_COUNT_SEVEN(0.00)[8];
	HAS_ATTACHMENT(0.00)[];
	DKIM_TRACE(0.00)[suse.com:+];
	DBL_BLOCKED_OPENRESOLVER(0.00)[cloud.com:email,imap1.dmz-prg2.suse.org:helo,imap1.dmz-prg2.suse.org:rdns,suse.com:email,suse.com:dkim]
X-Rspamd-Action: no action
X-Rspamd-Server: rspamd1.dmz-prg2.suse.org

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--------------IfxS0m9nNb9Ph9w6eHsHq2ub
Content-Type: multipart/mixed; boundary="------------kbC42UvOkzc8hakkn70LPQ6Y";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: George Dunlap <george.dunlap@cloud.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Dario Faggioli <dfaggioli@suse.com>, Nick Rosbrook <rosbrookn@gmail.com>
Message-ID: <dc5e690d-39fd-489e-93c4-6697c8f9c8ee@suse.com>
Subject: Re: [PATCH] MAINTAINERS: Step down as maintainer and committer
References: <20240626151935.26704-1-george.dunlap@cloud.com>
In-Reply-To: <20240626151935.26704-1-george.dunlap@cloud.com>
Autocrypt-Gossip: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJ3BBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AAIQkQoDSui/t3IH4WIQQ+pJkfkcoLMCa4X6CgNK6L+3cgfgn7AJ9DmMd0SMJE
 ePbc7/m22D2v04iu7ACffXTdZQhNl557tJuDXZSBxDmW/tLOwU0EWTecRBAIAIK5OMKMU5R2
 Lk2bbjgX7vyQuCFFyKf9rC/4itNwhYWFSlKzVj3WJBDsoi2KvPm7AI+XB6NIkNAkshL5C0kd
 pcNd5Xo0jRR5/WE/bT7LyrJ0OJWS/qUit5eNNvsO+SxGAk28KRa1ieVLeZi9D03NL0+HIAtZ
 tecfqwgl3Y72UpLUyt+r7LQhcI/XR5IUUaD4C/chB4Vq2QkDKO7Q8+2HJOrFIjiVli4lU+Sf
 OBp64m//Y1xys++Z4ODoKh7tkh5DxiO3QBHG7bHK0CSQsJ6XUvPVYubAuy1XfSDzSeSBl//C
 v78Fclb+gi9GWidSTG/4hsEzd1fY5XwCZG/XJJY9M/sAAwUH/09Ar9W2U1Qm+DwZeP2ii3Ou
 14Z9VlVVPhcEmR/AFykL9dw/OV2O/7cdi52+l00reUu6Nd4Dl8s4f5n8b1YFzmkVVIyhwjvU
 jxtPyUgDOt6DRa+RaDlXZZmxQyWcMv2anAgYWGVszeB8Myzsw8y7xhBEVV1S+1KloCzw4V8Z
 DSJrcsZlyMDoiTb7FyqxwQnM0f6qHxWbmOOnbzJmBqpNpFuDcz/4xNsymJylm6oXiucHQBAP
 Xb/cE1YNHpuaH4SRhIxwQilCYEznWowQphNAbJtEKOmcocY7EbSt8VjXTzmYENkIfkrHRyXQ
 dUm5AoL51XZljkCqNwrADGkTvkwsWSvCSQQYEQIACQUCWTecRAIbDAAKCRCgNK6L+3cgfuef
 AJ9wlZQNQUp0KwEf8Tl37RmcxCL4bQCcC5alCSMzUBJ5DBIcR4BY+CyQFAs=

--------------kbC42UvOkzc8hakkn70LPQ6Y
Content-Type: multipart/mixed; boundary="------------Vew0vTemZEdFFfkKn0I7z0L6"

--------------Vew0vTemZEdFFfkKn0I7z0L6
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: base64

T24gMjYuMDYuMjQgMTc6MTksIEdlb3JnZSBEdW5sYXAgd3JvdGU6DQo+IFJlbWFpbiBhIFJl
dmlld2VyIG9uIHRoZSBnb2xhbmcgYmluZGluZ3MgYW5kIHNjaGVkdWxlciBmb3Igbm93ICh1
c2luZw0KPiBhIHhlbnByb2plY3Qub3JnIGFsaWFzKSwgc2luY2UgdGhlcmUgbWF5IGJlIGFy
Y2hpdGVjdHVyYWwgZGVjaXNpb25zIEkNCj4gY2FuIHNoZWQgbGlnaHQgb24uDQo+IA0KPiBS
ZW1vdmUgdGhlIFhFTlRSQUNFIHNlY3Rpb24gZW50aXJlbHksIGFzIHRoZXJlJ3Mgbm8gb2J2
aW91cyBjYW5kaWRhdGUNCj4gdG8gdGFrZSBpdCBvdmVyOyBoYXZpbmcgdGhlIHJlc3BlY3Rp
dmUgcGFydHMgZmFsbCBiYWNrIHRvIHRoZSB0b29scw0KPiBhbmQgVGhlIFJlc3Qgc2VlbXMg
dGhlIG1vc3QgcmVhc29uYWJsZSBvcHRpb24uDQo+IA0KPiBTaWduZWQtb2ZmLWJ5OiBHZW9y
Z2UgRHVubGFwIDxnZW9yZ2UuZHVubGFwQGNsb3VkLmNvbT4NCg0KQWNrZWQtYnk6IEp1ZXJn
ZW4gR3Jvc3MgPGpncm9zc0BzdXNlLmNvbT4NCg0KDQpKdWVyZ2VuDQoNCg==
--------------Vew0vTemZEdFFfkKn0I7z0L6
Content-Type: application/pgp-keys; name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Disposition: attachment; filename="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Description: OpenPGP public key
Content-Transfer-Encoding: quoted-printable

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjri
oyspZKOBycWxw3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2
kaV2KL9650I1SJvedYm8Of8Zd621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i
1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bfIhWUiVXEK7MlRgUG6MvIj6Y3Am/B
BLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/377qptDmrk42GlSK
N4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jvc3Mg
PGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsE
FgIDAQIeAQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4F
UGNQH2lvWAUy+dnyThpwdtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3Tye
vpB0CA3dbBQp0OW0fgCetToGIQrg0MbD1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u
+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPHZ8SlM4KWm8rG+lIkGurq
qu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL+qHI3EIP
tyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVy
Z2VuIEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJ
CAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4
RF7HoZhPVPogNVbC4YA6lW7DrWf0teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz7
8X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nuAFVGy+67q2DH8As3KPu0344T
BDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITTd9jLzdDad1pQ
SToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkM
nQfvUewRz80hSnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMB
AgAjBQJTjHDXAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/
Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOXgMLdBQgBlVPO3/D9R8LtF9DBAFPN
hlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnDkfJZr6jrbjgyoZHi
w/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51N5Jf
VRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwP
OoE+lotufe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK
/1xMI3/+8jbO0tsn1tqSEUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1
c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsDBwsJCAcDAgEGFQgCCQoLBBYCAwECHgEC
F4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3OZUEBmDHVVbqMtzwlmNC4
k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7wRqzgJpJ
wK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu
5D+jLRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzB
TNh30FVKK1EvmV2xAKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37Io
N1EblHI//x/e2AaIHpzK5h88NEawQsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6
AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHIs98ndPUDpnoxWQugJ6MpMncr
0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgnBC5mVM6JjQ5x
Dk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mm
we0icXKLkpEdIXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0I
v3OOImwTEe4co3c1mwARAQABwsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMv
Q/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe8YFsw2V/Buv6Z4Mysln3nQK5ZadD
534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJzQ1fOU8lYFpZXTXIH
b+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGiwXvT
yJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqc
suylWsviuGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5B
jR/i1DG86lem3iBDXzXsZDn8R3/CwO0EGAEIACAWIQSFEmdy6PYElKXQl/ew3p3W
KL8TLwUCWt3w0AIbAgCBCRCw3p3WKL8TL3YgBBkWCAAdFiEEUy2wekH2OPMeOLge
gFxhu0/YY74FAlrd8NAACgkQgFxhu0/YY75NiwD/fQf/RXpyv9ZX4n8UJrKDq422
bcwkujisT6jix2mOOwYBAKiip9+mAD6W5NPXdhk1XraECcIspcf2ff5kCAlG0DIN
aTUH/RIwNWzXDG58yQoLdD/UPcFgi8GWtNUp0Fhc/GeBxGipXYnvuWxwS+Qs1Qay
7/Nbal/v4/eZZaWs8wl2VtrHTS96/IF6q2o0qMey0dq2AxnZbQIULiEndgR625EF
RFg+IbO4ldSkB3trsF2ypYLij4ZObm2casLIP7iB8NKmQ5PndL8Y07TtiQ+Sb/wn
g4GgV+BJoKdDWLPCAlCMilwbZ88Ijb+HF/aipc9hsqvW/hnXC2GajJSAY3Qs9Mib
4Hm91jzbAjmp7243pQ4bJMfYHemFFBRaoLC7ayqQjcsttN2ufINlqLFPZPR/i3IX
kt+z4drzFUyEjLM1vVvIMjkUoJs=3D
=3DeeAB
-----END PGP PUBLIC KEY BLOCK-----

--------------Vew0vTemZEdFFfkKn0I7z0L6--

--------------kbC42UvOkzc8hakkn70LPQ6Y--

--------------IfxS0m9nNb9Ph9w6eHsHq2ub
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature.asc"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmZ8/vMFAwAAAAAACgkQsN6d1ii/Ey+R
yQf/dvSfrC1r8ftPI8jICwCWAiSpL9iUwMEXiKhqHFQREMURRCHfHwEYxa9gfh3cOPuzyYzTdsM6
dOj5hZNIFb1+2rBwiR2Y8xncZFyKpr+yv4xI0dQxuAE6nn5k1HraoWDLkeOKmQzY6+RTPyD9euzF
QCCo0jaVfM1HzyvDtT7838hFoODk6E/zxlV4j/ZUOM0T34W3iJrR2WLmvllvrLSwoqYPraKsIKht
vEDwgik5/WiwRM0a87prt8laEz3cF2lmal/h4t5l35+Yw4Uj6vXMsCGkAWQOhmEOK2WpRMreJ/7P
liPBbIRAydxBFg+sBPFhK5KqgIuXTRAnkrhJ9UbgMQ==
=t5Qi
-----END PGP SIGNATURE-----

--------------IfxS0m9nNb9Ph9w6eHsHq2ub--


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 06:03:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 06:03:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749722.1157956 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMiDo-0002a9-3T; Thu, 27 Jun 2024 06:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749722.1157956; Thu, 27 Jun 2024 06:03:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMiDo-0002a2-0y; Thu, 27 Jun 2024 06:03:04 +0000
Received: by outflank-mailman (input) for mailman id 749722;
 Thu, 27 Jun 2024 06:03:03 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/xp/=N5=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sMiDn-0002Zw-E0
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 06:03:03 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e65037cf-344a-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 08:02:57 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 65F8768C4E; Thu, 27 Jun 2024 08:02:52 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e65037cf-344a-11ef-b4bb-af5377834399
Date: Thu, 27 Jun 2024 08:02:51 +0200
From: "hch@lst.de" <hch@lst.de>
To: Michael Kelley <mhklinux@outlook.com>
Cc: "robin.murphy@arm.com" <robin.murphy@arm.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"will@kernel.org" <will@kernel.org>,
	"jgross@suse.com" <jgross@suse.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"hch@lst.de" <hch@lst.de>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"petr@tesarici.cz" <petr@tesarici.cz>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Message-ID: <20240627060251.GA15590@lst.de>
References: <20240607031421.182589-1-mhklinux@outlook.com> <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, Jun 26, 2024 at 11:58:13PM +0000, Michael Kelley wrote:
> > This patch trades off making many of the core swiotlb APIs take
> > an additional argument in order to avoid duplicating calls to
> > swiotlb_find_pool(). The current code seems rather wasteful in
> > making 6 calls per round-trip, but I'm happy to accept others'
> > judgment as to whether getting rid of the waste is worth the
> > additional code complexity.
> 
> Quick ping on this RFC.  Is there any interest in moving forward?
> Quite a few lines of code are affected because of adding the
> additional "pool" argument to several functions, but the change
> is conceptually pretty simple.

Yes, this looks sensible to me.  I'm tempted to apply it.



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 06:23:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 06:23:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749738.1157967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMiXi-0005cG-Q7; Thu, 27 Jun 2024 06:23:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749738.1157967; Thu, 27 Jun 2024 06:23:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMiXi-0005c9-N8; Thu, 27 Jun 2024 06:23:38 +0000
Received: by outflank-mailman (input) for mailman id 749738;
 Thu, 27 Jun 2024 06:23:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eOj3=N5=arm.com=Bertrand.Marquis@srs-se1.protection.inumbo.net>)
 id 1sMiXh-0005c3-8v
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 06:23:37 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 (mail-db5eur02on20600.outbound.protection.outlook.com
 [2a01:111:f403:2608::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c824beb3-344d-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 08:23:35 +0200 (CEST)
Received: from AM8P189CA0018.EURP189.PROD.OUTLOOK.COM (2603:10a6:20b:218::23)
 by AS8PR08MB5912.eurprd08.prod.outlook.com (2603:10a6:20b:29f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.35; Thu, 27 Jun
 2024 06:23:32 +0000
Received: from AM3PEPF0000A79B.eurprd04.prod.outlook.com
 (2603:10a6:20b:218:cafe::27) by AM8P189CA0018.outlook.office365.com
 (2603:10a6:20b:218::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.25 via Frontend
 Transport; Thu, 27 Jun 2024 06:23:32 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM3PEPF0000A79B.mail.protection.outlook.com (10.167.16.106) with
 Microsoft
 SMTP Server (version=TLS1_3, cipher=TLS_AES_256_GCM_SHA384) id 15.20.7677.15
 via Frontend Transport; Thu, 27 Jun 2024 06:23:31 +0000
Received: ("Tessian outbound 41160df97de5:v347");
 Thu, 27 Jun 2024 06:23:31 +0000
Received: from c37c5bc6af17.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9FD5E58E-1EA5-4FB7-A1E5-B98A274CDF3C.1; 
 Thu, 27 Jun 2024 06:23:24 +0000
Received: from EUR02-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c37c5bc6af17.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 27 Jun 2024 06:23:24 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com (2603:10a6:10:25a::24)
 by AS2PR08MB9987.eurprd08.prod.outlook.com (2603:10a6:20b:645::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.35; Thu, 27 Jun
 2024 06:23:23 +0000
Received: from DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a]) by DB9PR08MB6588.eurprd08.prod.outlook.com
 ([fe80::a8fc:ea0d:baf1:23a%7]) with mapi id 15.20.7698.033; Thu, 27 Jun 2024
 06:23:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c824beb3-344d-11ef-b4bb-af5377834399
ARC-Seal: i=2; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=pass;
 b=mPDYXZAbyi2PkgBB/aIxzOUuKcDVjOtXkfMbcGaqpAt5ozb2GGFcE45W/m5FB1CGc5QWzD3H/Ni3jop4hEGi3WKiS5/HNRnu57rQ3ySPuwywZNGPJUUZqLfW/azqtm1D30Ap9lL/DpXiDuZrKV9UeSUBjfIfeSvwlKsXF224JPIi1wlwXIiOyx63McZDw440e9WZJh8FNqu1uF1u8ME8Z7a0r760ETJpuOWzBKGBBa33mk0ox6sr4m8n5V8u2r0QtrincDHG3ptGAKgVUTamRo0y3PzCVXnUga3R2oNyKLhUrCb+GgVxZcefmb1vK2RwFTZ3FWcVA+2fBhquScRU7g==
ARC-Message-Signature: i=2; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HmreLS+xNi6TcSrdHmP9TzkHM54DKMzsDPK44F4YdpU=;
 b=YXoOZi8qmaoXNn5pZdCZ61ViPpcLLRVe3OrNv37oX9wMoTpXQMATq21CRaMUKy/61FLj3BCrNoI0sXCKzw6VxmyLRbTDMBwximuOx1FYl12qgXTNs54/7wj+5mn0GAaNfQQvEb29PErLmobwIdy1Sjc3BSgZBY4++yaFyGHT1XbJpZgXjNJKwdz0yPP6d98lQWTKj1zQFc7+Wu0/vjP1o4MpiCo//xa8/ONQtXvt8Pvoek+zoOUYlHIMeIgEmK35pRgUSfT9QVS4iPFLUiHQyqsORdJQ4YvU+gTcl5UUdiTYuz2BIphhN8l81vN20swM/3odZ6UbMw9X6r/KTLWIJw==
ARC-Authentication-Results: i=2; mx.microsoft.com 1; spf=pass (sender ip is
 63.35.35.123) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=pass (signature was verified) header.d=arm.com; arc=pass (0 oda=1 ltdi=1
 spf=[1,1,smtp.mailfrom=arm.com] dkim=[1,1,header.d=arm.com]
 dmarc=[1,1,header.from=arm.com])
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HmreLS+xNi6TcSrdHmP9TzkHM54DKMzsDPK44F4YdpU=;
 b=ZrXh0biOffmAgWZPDrsVRPgS4h0O7nVfMZ7nzsp6+ajPd1ilmUzXtkZrSrgIZfpsvMUHZGpXHtJVV2sajWiklbWcfRa/Njyu/FjD9JvC0HXYpJlj3cxV54RRAn8Kt1qqXUQYFDWFcgZ3JRbfUE/Nte+QWtX2xJ8q8+DaM9Z0LHM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; dkim=pass (signature was verified)
 header.d=arm.com;dmarc=pass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
 pr=C
X-CheckRecipientChecked: true
X-CR-MTA-CID: bbb2f1c4bde6cc7b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NMqjvfSExvnLhXNjhITqmfMJj73mU4dMpxk2kYT24EC22qa1P+umB5VMPp9M77Ag0wchr/eJnNHyBaEavCM4jOGlpg0+RyfrhdhhddTZX3ac8nLiHpLYNp+BUi8ztv7euqRTxSpns2WGkxJCCCvecVKFl1He78F7OuDCaTmK7gNXctlhNCNi4hCl3XEEoqo5SJG0w/8B3IFlrYWfj2fOgrybfIFOvo2GwTcwqE968luVBDE/V9jzwzU6Z23qxL0jzgLyud5aYGwYA8Z0RtoiBdePxlIlvsahx7jVDDKwEGC/pPfMZ6+0c2TScXUQ/H8qajEa3Q9A4UaJ9s/Th761Ag==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=HmreLS+xNi6TcSrdHmP9TzkHM54DKMzsDPK44F4YdpU=;
 b=JS0dTImu3JNiSMs+3kuGy5b/zo58Jc9oWjfIoYIaZLiSQbLQSlWfJM58duPqEPbb90O1PLlLth3x4TVoj4PbPGXHY2wOev53S8juyr7EjrKQky/JKcMj67YfvPNNYmfeTc7CEDRPaBxUx2mL0Uhw+NEiMKQ72Yn7NnD/qFic9JqSTWzMZqkd6O6ciTWk99c3T2dyztWLgOOuKfkn20Mde/Ns3EZMMActo0/r31BqriAt0bj4qvSLsc+t91+22uR/a6f2eq9PFp/jTyCMRD2mXfcj7DtBBDxpMTU7+Io5Ohmxah52V0upqYsj8DtZv31I/Vk+F5BwKGA4hHu2Oa3kWg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=arm.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HmreLS+xNi6TcSrdHmP9TzkHM54DKMzsDPK44F4YdpU=;
 b=ZrXh0biOffmAgWZPDrsVRPgS4h0O7nVfMZ7nzsp6+ajPd1ilmUzXtkZrSrgIZfpsvMUHZGpXHtJVV2sajWiklbWcfRa/Njyu/FjD9JvC0HXYpJlj3cxV54RRAn8Kt1qqXUQYFDWFcgZ3JRbfUE/Nte+QWtX2xJ8q8+DaM9Z0LHM=
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: George Dunlap <george.dunlap@cloud.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Dario Faggioli
	<dfaggioli@suse.com>, Juergen Gross <jgross@suse.com>, Nick Rosbrook
	<rosbrookn@gmail.com>
Subject: Re: [PATCH] MAINTAINERS: Step down as maintainer and committer
Thread-Topic: [PATCH] MAINTAINERS: Step down as maintainer and committer
Thread-Index: AQHax98S2U1KStNvs02rkvKONmneGbHbJToA
Date: Thu, 27 Jun 2024 06:23:22 +0000
Message-ID: <4B3393D9-2396-4FF4-8D92-62426DD3F040@arm.com>
References: <20240626151935.26704-1-george.dunlap@cloud.com>
In-Reply-To: <20240626151935.26704-1-george.dunlap@cloud.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3774.600.62)
Authentication-Results-Original: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
x-ms-traffictypediagnostic:
	DB9PR08MB6588:EE_|AS2PR08MB9987:EE_|AM3PEPF0000A79B:EE_|AS8PR08MB5912:EE_
X-MS-Office365-Filtering-Correlation-Id: 85bba2d8-d868-4ea3-6a52-08dc9671aab0
x-checkrecipientrouted: true
nodisclaimer: true
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam-Untrusted:
 BCL:0;ARA:13230040|366016|376014|1800799024|38070700018;
X-Microsoft-Antispam-Message-Info-Original:
 =?us-ascii?Q?Qyt3JO5weQ5RvrRPAfbSyx47D62Vky3E6QFliS9OVQ9UTq2Elk8slHVhAJAm?=
 =?us-ascii?Q?Liti3kJReoNFtmJJloeNZArp9RshOUTL6rA//NqBzhDRbv1ADMvlzSHVW6s+?=
 =?us-ascii?Q?tYuVfge2EbLvnu8YGrIiFO88v5MIdjmpjr/wVXLkoor6SuPK5Uf1Fs7hZArW?=
 =?us-ascii?Q?jUwuS/UfYoNwp93Xrnx1PY0ZkAWOy8vbSZhG8GGJG4qlh85O260ZxBG2Nigs?=
 =?us-ascii?Q?wsHU6xOUNNWBayj6nZp3JxEfjTHdhTOfYgJigPkRWLHJOXHoS+dzG3HTQCOa?=
 =?us-ascii?Q?Ksb85BoaQ/q2hekccNvw5cJPmc+a9IonlQpisziISsuytkeUVDHqu11Cwkpe?=
 =?us-ascii?Q?+OME8BIUAnLC6ijMTvAZ669OH+Fw8omjTL/cI0SENIh16SOW4E112XjKKqQq?=
 =?us-ascii?Q?P8FoVcovxYjwK2rWD7DVrJQN7nUkOonLrGTTY8TyvRrm71m9BcUJ58gAufvF?=
 =?us-ascii?Q?XUdkqHo7S3QJ+LGBwo3y9EDDDGGXDLHWUQBWn0YCFDscQ+cje5/3bBbWhacd?=
 =?us-ascii?Q?o4yKBib6FtlVrQdaPdIulEtZutp0kRhskLPmm9s+wHuIyU91lFQejGS810/Q?=
 =?us-ascii?Q?jzGc70gof3/bJcUMjIwDWO0bTRf94GekuVFjF6d7TWfKtyjN1TZ5+0dAv7Ef?=
 =?us-ascii?Q?F1VoabjCTUFQfczCxacAwxZ50ZWdPFFV4P8tfB3cCm5pQWzJbxmPUlV7xHZd?=
 =?us-ascii?Q?IQ4cUS8ul7/n8G/RkO/6r+zcU39G0uYdFlS6f7LamDgxMCmgkX4wY0X/W7AY?=
 =?us-ascii?Q?Cl7PGc+B52Ti7Mev7JIrTd5ekxW1aO3bagXBad+dTF5hNCT+wqceDPMa94dR?=
 =?us-ascii?Q?TugzaV5fs1Muee+aDqAoMZavOb22fzn1VAZ/IDjJSH3G0elaYg9YgQuH3rQQ?=
 =?us-ascii?Q?tOrj1nFdjdddENBW85zMyevlo0PW8IjoHpdDfW5E0675wdTAfpFTOGBLY2dM?=
 =?us-ascii?Q?rSOcgmXNTqqYiLOTLvVlEFODQnSH7yl/PEXigCFJZi40v4lc7bumgPH/G29n?=
 =?us-ascii?Q?JKrLyhs+aHV67OxQLbDunziieZnHz0isazANNkoEBuCB04lbLbxiYcvRlJd2?=
 =?us-ascii?Q?KU0okO44mfD+bQ4zNZ9b7a+fXKWUaE320D392OR33oflkq28bdty4vUlzax0?=
 =?us-ascii?Q?4kmF7c/ufNR4LuzNETuTIknNxkqhx+hNSE1/prNx1slO8lgmw0R5BV9c9Kbx?=
 =?us-ascii?Q?irmYI/vPfuHJoGkP+hoOOpN/5r1F6SENmhtdt/kgvztXALVwpQMzUiAaO81N?=
 =?us-ascii?Q?5rHNg6apqEO3Au+xAqRUvlBSE86TXQ3VV8jhknQNIINLy8YQXe43WdOMjtmx?=
 =?us-ascii?Q?43T9TYm/A2d0GLmGxAky+U3Pi2/Z0lGrxUebMvuADACMygwm1lZ1ZeZbddTN?=
 =?us-ascii?Q?7exjjZr/A6GbClhunNteW6Zz3omPgTiMohngVzgqsY1A9HrQMw=3D=3D?=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DB9PR08MB6588.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(13230040)(366016)(376014)(1800799024)(38070700018);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2E9EE56750B1C4489B22BCE525C795BA@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS2PR08MB9987
Original-Authentication-Results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM3PEPF0000A79B.eurprd04.prod.outlook.com
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4dc94fcd-8f72-43af-c70d-08dc9671a50e
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|35042699022|36860700013|1800799024|376014|82310400026;
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(13230040)(35042699022)(36860700013)(1800799024)(376014)(82310400026);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2024 06:23:31.7676
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 85bba2d8-d868-4ea3-6a52-08dc9671aab0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM3PEPF0000A79B.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB5912

Hi Georges,

> On 26 Jun 2024, at 17:19, George Dunlap <george.dunlap@cloud.com> wrote:
>
> Remain a Reviewer on the golang bindings and scheduler for now (using
> a xenproject.org alias), since there may be architectural decisions I
> can shed light on.
>
> Remove the XENTRACE section entirely, as there's no obvious candidate
> to take it over; having the respective parts fall back to the tools
> and The Rest seems the most reasonable option.
>
> Signed-off-by: George Dunlap <george.dunlap@cloud.com>

We will definitely miss you in the community.
Thanks for everything you have done for Xen.

Acked-by: Bertrand Marquis <bertrand.marquis@arm.com>

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 06:52:35 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 06:52:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749743.1157977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMizd-0001NN-03; Thu, 27 Jun 2024 06:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749743.1157977; Thu, 27 Jun 2024 06:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMizc-0001NG-SW; Thu, 27 Jun 2024 06:52:28 +0000
Received: by outflank-mailman (input) for mailman id 749743;
 Thu, 27 Jun 2024 06:52:27 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vG3Y=N5=tesarici.cz=petr@srs-se1.protection.inumbo.net>)
 id 1sMiza-0001NA-Oa
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 06:52:27 +0000
Received: from bee.tesarici.cz (bee.tesarici.cz [37.205.15.56])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cde5c433-3451-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 08:52:22 +0200 (CEST)
Received: from meshulam.tesarici.cz
 (dynamic-2a00-1028-83b8-1e7a-4427-cc85-6706-c595.ipv6.o2.cz
 [IPv6:2a00:1028:83b8:1e7a:4427:cc85:6706:c595])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (2048 bits) server-digest
 SHA256) (No client certificate requested)
 by bee.tesarici.cz (Postfix) with ESMTPSA id 7EB9F1D6A67;
 Thu, 27 Jun 2024 08:52:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cde5c433-3451-11ef-b4bb-af5377834399
Authentication-Results: mail.tesarici.cz; dmarc=fail (p=quarantine dis=none) header.from=tesarici.cz
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=tesarici.cz; s=mail;
	t=1719471141; bh=JqKNk6DK7c39as3qr7GxZRvQ/NXaR2TqXX/2BAiLjxo=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=p9rfRQ8t2qyHzmBjsliZOBTEO2ISxen4UX5Sxsfz8PgsUtI20CIfRJu0MK3VXZ2tm
	 uHCPZP9NV2nljjajcaLakFHetbrN/pnZeva2u6k4o1+a1EJmcbBYKNZGMEqBiWGaOp
	 WVpLkfsPIgV/OTuKoph7CFY2HUh9svE+P49yg84fZ5Y4mXWKeXMgeD+4Gw861dYuGb
	 9uD1JECTBxJoRLsbEK5hyAZzY/Z0MH2l7a4p96XSkbJ6z0JUkR5yuicOZoPOb8FbRT
	 Sxv5cwOeZTe38weQ/3M+A+xW3dfkOTM8kFKGKQ/i6ckA6KUb8+cY0Np+ry2pt/gJ8R
	 rAwtsJe8amWTQ==
Date: Thu, 27 Jun 2024 08:52:16 +0200
From: Petr =?UTF-8?B?VGVzYcWZw61r?= <petr@tesarici.cz>
To: "hch@lst.de" <hch@lst.de>
Cc: Michael Kelley <mhklinux@outlook.com>, "robin.murphy@arm.com"
 <robin.murphy@arm.com>, "joro@8bytes.org" <joro@8bytes.org>,
 "will@kernel.org" <will@kernel.org>, "jgross@suse.com" <jgross@suse.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
 "m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
 "iommu@lists.linux.dev" <iommu@lists.linux.dev>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Message-ID: <20240627085216.556744c1@meshulam.tesarici.cz>
In-Reply-To: <20240627060251.GA15590@lst.de>
References: <20240607031421.182589-1-mhklinux@outlook.com>
	<SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240627060251.GA15590@lst.de>
X-Mailer: Claws Mail 4.3.0 (GTK 3.24.42; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Thu, 27 Jun 2024 08:02:51 +0200
"hch@lst.de" <hch@lst.de> wrote:

> On Wed, Jun 26, 2024 at 11:58:13PM +0000, Michael Kelley wrote:
> > > This patch trades off making many of the core swiotlb APIs take
> > > an additional argument in order to avoid duplicating calls to
> > > swiotlb_find_pool(). The current code seems rather wasteful in
> > > making 6 calls per round-trip, but I'm happy to accept others'
> > > judgment as to whether getting rid of the waste is worth the
> > > additional code complexity.  
> > 
> > Quick ping on this RFC.  Is there any interest in moving forward?
> > Quite a few lines of code are affected because of adding the
> > additional "pool" argument to several functions, but the change
> > is conceptually pretty simple.  
> 
> Yes, this looks sensible to me.  I'm tempted to apply it.

Oh, right. The idea is good, but I was not able to reply immediately
and then forgot about it.

For the record, I considered an alternative: call the swiotlb_*
functions unconditionally and bail out early if the pool is NULL. But
it's no good, because is_swiotlb_buffer() can be inlined, so this
approach would replace a quick inline check with a full function call.
And then there's also swiotlb_tbl_unmap_single()...

I have only a very minor suggestion: Could is_swiotlb_buffer() be
renamed now that it no longer returns a bool? OTOH I have no good
immediate idea myself.

Petr T


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 07:21:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 07:21:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749751.1157987 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMjR9-0005Tj-4D; Thu, 27 Jun 2024 07:20:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749751.1157987; Thu, 27 Jun 2024 07:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMjR8-0005Tc-W3; Thu, 27 Jun 2024 07:20:54 +0000
Received: by outflank-mailman (input) for mailman id 749751;
 Thu, 27 Jun 2024 07:20:53 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=vG3Y=N5=tesarici.cz=petr@srs-se1.protection.inumbo.net>)
 id 1sMjR7-0005TW-1T
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 07:20:53 +0000
Received: from bee.tesarici.cz (bee.tesarici.cz [37.205.15.56])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c7fab65e-3455-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 09:20:50 +0200 (CEST)
Received: from meshulam.tesarici.cz
 (dynamic-2a00-1028-83b8-1e7a-4427-cc85-6706-c595.ipv6.o2.cz
 [IPv6:2a00:1028:83b8:1e7a:4427:cc85:6706:c595])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (2048 bits) server-digest
 SHA256) (No client certificate requested)
 by bee.tesarici.cz (Postfix) with ESMTPSA id B4DF91D69E7;
 Thu, 27 Jun 2024 09:20:49 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7fab65e-3455-11ef-b4bb-af5377834399
Authentication-Results: mail.tesarici.cz; dmarc=fail (p=quarantine dis=none) header.from=tesarici.cz
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=tesarici.cz; s=mail;
	t=1719472849; bh=DTI2vKAaPElhD/HmfPpLCB/eYaiu2AD162KN4TedODc=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=Wy6f4jxCvx2KElFgvo2FeiER2+AWkeP6jmjpKkqw3DKZDFugiJsLOI3l2du2PPKe9
	 SA0vAxWRN7DHcaCnlWuD/fog4FyFf376PY1FONrJJFz37CstS5k1Kp1N/UrdDtQDK1
	 lE7AThyFR8O1BtpmNpjbkHNEgMCDMri9S/fW4vT/pK3mlPEsMyVd00wLOoNX5CKz1C
	 V131ap92UHIUfhKw33g/+OGwxvYTGqjFoy+/2Ntp1lxdxmjc95R3qh6nCzUJshGq/c
	 VbV7umsqzkHIhWPgeo0flPaFAJRUvhnwB/gYYvfeE9nG1SVCZ+K2dYcj3Kw2vlysRr
	 vg5aLMmV07ycQ==
Date: Thu, 27 Jun 2024 09:20:49 +0200
From: Petr =?UTF-8?B?VGVzYcWZw61r?= <petr@tesarici.cz>
To: mhkelley58@gmail.com
Cc: mhklinux@outlook.com, robin.murphy@arm.com, joro@8bytes.org,
 will@kernel.org, jgross@suse.com, sstabellini@kernel.org,
 oleksandr_tyshchenko@epam.com, hch@lst.de, m.szyprowski@samsung.com,
 iommu@lists.linux.dev, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org
Subject: Re: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Message-ID: <20240627092049.1dbec746@meshulam.tesarici.cz>
In-Reply-To: <20240607031421.182589-1-mhklinux@outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
X-Mailer: Claws Mail 4.3.0 (GTK 3.24.42; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 7bit

On Thu,  6 Jun 2024 20:14:21 -0700
mhkelley58@gmail.com wrote:

> From: Michael Kelley <mhklinux@outlook.com>
> 
> With CONFIG_SWIOTLB_DYNAMIC enabled, each round-trip map/unmap pair
> in the swiotlb results in 6 calls to swiotlb_find_pool(). In multiple
> places, the pool is found and used in one function, and then must be
> found again in the next function that is called, because only the
> tlb_addr is passed as an argument. These are the six call sites:
> 
> dma_direct_map_page:
> 1. swiotlb_map->swiotlb_tbl_map_single->swiotlb_bounce
> 
> dma_direct_unmap_page:
> 2. dma_direct_sync_single_for_cpu->is_swiotlb_buffer
> 3. dma_direct_sync_single_for_cpu->swiotlb_sync_single_for_cpu->
> 	swiotlb_bounce
> 4. is_swiotlb_buffer
> 5. swiotlb_tbl_unmap_single->swiotlb_del_transient
> 6. swiotlb_tbl_unmap_single->swiotlb_release_slots
> 
> Reduce the number of calls by finding the pool at a higher level, and
> passing it as an argument instead of searching again. A key change is
> for is_swiotlb_buffer() to return a pool pointer instead of a boolean,
> and then pass this pool pointer to subsequent swiotlb functions.
> With these changes, a round-trip map/unmap pair requires only 2 calls
> to swiotlb_find_pool():
> 
> dma_direct_unmap_page:
> 1. dma_direct_sync_single_for_cpu->is_swiotlb_buffer
> 2. is_swiotlb_buffer
> 
> These changes come from noticing the inefficiencies in a code review,
> not from performance measurements. With CONFIG_SWIOTLB_DYNAMIC,
> swiotlb_find_pool() is not trivial, and it uses an RCU read lock,
> so avoiding the redundant calls helps performance in a hot path.
> When CONFIG_SWIOTLB_DYNAMIC is *not* set, the code size reduction
> is minimal and the perf benefits are likely negligible, but no
> harm is done.
> 
> No functional change is intended.
> 
> Signed-off-by: Michael Kelley <mhklinux@outlook.com>
> ---
> This patch trades off making many of the core swiotlb APIs take
> an additional argument in order to avoid duplicating calls to
> swiotlb_find_pool(). The current code seems rather wasteful in
> making 6 calls per round-trip, but I'm happy to accept others'
> judgment as to whether getting rid of the waste is worth the
> additional code complexity.
> 
>  drivers/iommu/dma-iommu.c | 27 ++++++++++++++------
>  drivers/xen/swiotlb-xen.c | 25 +++++++++++-------
>  include/linux/swiotlb.h   | 54 +++++++++++++++++++++------------------
>  kernel/dma/direct.c       | 12 ++++++---
>  kernel/dma/direct.h       | 18 ++++++++-----
>  kernel/dma/swiotlb.c      | 43 ++++++++++++++++---------------
>  6 files changed, 106 insertions(+), 73 deletions(-)
> 
> diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
> index f731e4b2a417..ab6bc37ecf90 100644
> --- a/drivers/iommu/dma-iommu.c
> +++ b/drivers/iommu/dma-iommu.c
> @@ -1073,6 +1073,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
>  		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t phys;
> +	struct io_tlb_pool *pool;
>  
>  	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
> @@ -1081,21 +1082,25 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
>  	if (!dev_is_dma_coherent(dev))
>  		arch_sync_dma_for_cpu(phys, size, dir);
>  
> -	if (is_swiotlb_buffer(dev, phys))
> -		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
> +	pool = is_swiotlb_buffer(dev, phys);
> +	if (pool)
> +		swiotlb_sync_single_for_cpu(dev, phys, size, dir, pool);
>  }
>  
>  static void iommu_dma_sync_single_for_device(struct device *dev,
>  		dma_addr_t dma_handle, size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t phys;
> +	struct io_tlb_pool *pool;
>  
>  	if (dev_is_dma_coherent(dev) && !dev_use_swiotlb(dev, size, dir))
>  		return;
>  
>  	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
> -	if (is_swiotlb_buffer(dev, phys))
> -		swiotlb_sync_single_for_device(dev, phys, size, dir);
> +
> +	pool = is_swiotlb_buffer(dev, phys);
> +	if (pool)
> +		swiotlb_sync_single_for_device(dev, phys, size, dir, pool);
>  
>  	if (!dev_is_dma_coherent(dev))
>  		arch_sync_dma_for_device(phys, size, dir);
> @@ -1189,8 +1194,12 @@ static dma_addr_t iommu_dma_map_page(struct device *dev, struct page *page,
>  		arch_sync_dma_for_device(phys, size, dir);
>  
>  	iova = __iommu_dma_map(dev, phys, size, prot, dma_mask);
> -	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
> -		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> +	if (iova == DMA_MAPPING_ERROR) {
> +		struct io_tlb_pool *pool = is_swiotlb_buffer(dev, phys);
> +
> +		if (pool)
> +			swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs, pool);
> +	}
>  	return iova;
>  }
>  
> @@ -1199,6 +1208,7 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
>  {
>  	struct iommu_domain *domain = iommu_get_dma_domain(dev);
>  	phys_addr_t phys;
> +	struct io_tlb_pool *pool;
>  
>  	phys = iommu_iova_to_phys(domain, dma_handle);
>  	if (WARN_ON(!phys))
> @@ -1209,8 +1219,9 @@ static void iommu_dma_unmap_page(struct device *dev, dma_addr_t dma_handle,
>  
>  	__iommu_dma_unmap(dev, dma_handle, size);
>  
> -	if (unlikely(is_swiotlb_buffer(dev, phys)))
> -		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
> +	pool = is_swiotlb_buffer(dev, phys);
> +	if (unlikely(pool))
> +		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs, pool);
>  }
>  
>  /*
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 6579ae3f6dac..7af8c8466e1d 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -88,7 +88,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>  	return 0;
>  }
>  
> -static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
> +static struct io_tlb_pool *is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  {
>  	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
>  	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
> @@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>  	 */
>  	if (pfn_valid(PFN_DOWN(paddr)))
>  		return is_swiotlb_buffer(dev, paddr);
> -	return 0;
> +	return NULL;
>  }
>  
>  #ifdef CONFIG_X86
> @@ -228,7 +228,8 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>  	 */
>  	if (unlikely(!dma_capable(dev, dev_addr, size, true))) {
>  		swiotlb_tbl_unmap_single(dev, map, size, dir,
> -				attrs | DMA_ATTR_SKIP_CPU_SYNC);
> +				attrs | DMA_ATTR_SKIP_CPU_SYNC,
> +				swiotlb_find_pool(dev, map));
>  		return DMA_MAPPING_ERROR;
>  	}
>  
> @@ -254,6 +255,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
>  		size_t size, enum dma_data_direction dir, unsigned long attrs)
>  {
>  	phys_addr_t paddr = xen_dma_to_phys(hwdev, dev_addr);
> +	struct io_tlb_pool *pool;
>  
>  	BUG_ON(dir == DMA_NONE);
>  
> @@ -265,8 +267,9 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
>  	}
>  
>  	/* NOTE: We use dev_addr here, not paddr! */
> -	if (is_xen_swiotlb_buffer(hwdev, dev_addr))
> -		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs);
> +	pool = is_xen_swiotlb_buffer(hwdev, dev_addr);
> +	if (pool)
> +		swiotlb_tbl_unmap_single(hwdev, paddr, size, dir, attrs, pool);
>  }
>  
>  static void
> @@ -274,6 +277,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
>  		size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
> +	struct io_tlb_pool *pool;
>  
>  	if (!dev_is_dma_coherent(dev)) {
>  		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
> @@ -282,8 +286,9 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
>  			xen_dma_sync_for_cpu(dev, dma_addr, size, dir);
>  	}
>  
> -	if (is_xen_swiotlb_buffer(dev, dma_addr))
> -		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
> +	pool = is_xen_swiotlb_buffer(dev, dma_addr);
> +	if (pool)
> +		swiotlb_sync_single_for_cpu(dev, paddr, size, dir, pool);
>  }
>  
>  static void
> @@ -291,9 +296,11 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
>  		size_t size, enum dma_data_direction dir)
>  {
>  	phys_addr_t paddr = xen_dma_to_phys(dev, dma_addr);
> +	struct io_tlb_pool *pool;
>  
> -	if (is_xen_swiotlb_buffer(dev, dma_addr))
> -		swiotlb_sync_single_for_device(dev, paddr, size, dir);
> +	pool = is_xen_swiotlb_buffer(dev, dma_addr);
> +	if (pool)
> +		swiotlb_sync_single_for_device(dev, paddr, size, dir, pool);
>  
>  	if (!dev_is_dma_coherent(dev)) {
>  		if (pfn_valid(PFN_DOWN(dma_to_phys(dev, dma_addr))))
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 14bc10c1bb23..ce8651949123 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -42,24 +42,6 @@ int swiotlb_init_late(size_t size, gfp_t gfp_mask,
>  	int (*remap)(void *tlb, unsigned long nslabs));
>  extern void __init swiotlb_update_mem_attributes(void);
>  
> -phys_addr_t swiotlb_tbl_map_single(struct device *hwdev, phys_addr_t phys,
> -		size_t mapping_size,
> -		unsigned int alloc_aligned_mask, enum dma_data_direction dir,
> -		unsigned long attrs);
> -
> -extern void swiotlb_tbl_unmap_single(struct device *hwdev,
> -				     phys_addr_t tlb_addr,
> -				     size_t mapping_size,
> -				     enum dma_data_direction dir,
> -				     unsigned long attrs);
> -
> -void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
> -		size_t size, enum dma_data_direction dir);
> -void swiotlb_sync_single_for_cpu(struct device *dev, phys_addr_t tlb_addr,
> -		size_t size, enum dma_data_direction dir);
> -dma_addr_t swiotlb_map(struct device *dev, phys_addr_t phys,
> -		size_t size, enum dma_data_direction dir, unsigned long attrs);
> -
>  #ifdef CONFIG_SWIOTLB
>  
>  /**
> @@ -168,12 +150,12 @@ static inline struct io_tlb_pool *swiotlb_find_pool(struct device *dev,
>   * * %true if @paddr points into a bounce buffer
>   * * %false otherwise
>   */
> -static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> +static inline struct io_tlb_pool *is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  {
>  	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
>  
>  	if (!mem)
> -		return false;
> +		return NULL;
>  
>  #ifdef CONFIG_SWIOTLB_DYNAMIC
>  	/*
> @@ -187,10 +169,13 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
>  	 * This barrier pairs with smp_mb() in swiotlb_find_slots().
>  	 */
>  	smp_rmb();
> -	return READ_ONCE(dev->dma_uses_io_tlb) &&
> -		swiotlb_find_pool(dev, paddr);
> +	if (READ_ONCE(dev->dma_uses_io_tlb))
> +		return swiotlb_find_pool(dev, paddr);
> +	return NULL;
>  #else
> -	return paddr >= mem->defpool.start && paddr < mem->defpool.end;
> +	if (paddr >= mem->defpool.start && paddr < mem->defpool.end)
> +		return &mem->defpool;

Why are we open-coding swiotlb_find_pool() here? It does not make a
difference now, but if swiotlb_find_pool() were to change, both places
would have to be updated.

Does it save a reload from dev->dma_io_tlb_mem? IOW is the compiler
unable to optimize it away?

What about this (functionally identical) variant:

#ifdef CONFIG_SWIOTLB_DYNAMIC
	smp_rmb();
	if (!READ_ONCE(dev->dma_uses_io_tlb))
		return NULL;
#else
	if (paddr < mem->defpool.start || paddr >= mem->defpool.end)
		return NULL;
#endif

	return swiotlb_find_pool(dev, paddr);

Petr T


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 07:40:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 07:40:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749762.1158003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMjjq-0008S6-PI; Thu, 27 Jun 2024 07:40:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749762.1158003; Thu, 27 Jun 2024 07:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMjjq-0008Rz-Mm; Thu, 27 Jun 2024 07:40:14 +0000
Received: by outflank-mailman (input) for mailman id 749762;
 Thu, 27 Jun 2024 07:40:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LqiX=N5=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sMjjp-0008Rt-AS
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 07:40:13 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20625.outbound.protection.outlook.com
 [2a01:111:f403:2412::625])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7b265f15-3458-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 09:40:11 +0200 (CEST)
Received: from PH7P221CA0006.NAMP221.PROD.OUTLOOK.COM (2603:10b6:510:32a::19)
 by SA3PR12MB9198.namprd12.prod.outlook.com (2603:10b6:806:39f::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.32; Thu, 27 Jun
 2024 07:40:07 +0000
Received: from CY4PEPF0000FCBE.namprd03.prod.outlook.com
 (2603:10b6:510:32a:cafe::dd) by PH7P221CA0006.outlook.office365.com
 (2603:10b6:510:32a::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.25 via Frontend
 Transport; Thu, 27 Jun 2024 07:40:07 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CY4PEPF0000FCBE.mail.protection.outlook.com (10.167.242.100) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Thu, 27 Jun 2024 07:40:06 +0000
Received: from SATLEXMB05.amd.com (10.181.40.146) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Thu, 27 Jun
 2024 02:40:05 -0500
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB05.amd.com
 (10.181.40.146) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Thu, 27 Jun
 2024 02:40:05 -0500
Received: from [10.252.147.188] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39 via Frontend
 Transport; Thu, 27 Jun 2024 02:40:04 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b265f15-3458-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Onmf8DuI91XkOmVqFNzI+p9CB+TNG+MqJMyB5F+ifSCIDR9N2L0D4W5/li2zkQ7b03ndGzfpQ5jCia8usjmPRkDIYoB1aSg7DUzqcGKQaGJqc4zX9X8GPROGH1gVuzRqUxMironuDHN0qymB5LLttHrJZoq4Fjy7zJzmJgU0toQ8xyBnM+nJw6i+tJAgblo1FlNkeZUZmVTo5eMsbB2VleV+t4e8SniuSG4eaWgzn6D957tsY4qaBXhrsjmLvtAjP+d6WpGAbfU84GD+EFFQG1g1s/ZYFEA0ZONFE285foyFuCtSUALIYBHqI8i4J2ohNYtp1nZlHcM+2Gez7/EpVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=mO3uoanngyAgOP1Qo8tbtQnDmt1qO5O1KTAdzeGf6Jg=;
 b=IFiBqSWQji8xt/6+obj7DbmCY8qfmtZE3dJ7ShsqXwaKerNwGg3lWwBF+5IJTMVewX/TW7l/eIHMp5CY0rSVXEg4fQU6C4o5272aVrjYVEWF1c50KgboG64CfFE8kNH62pmnqJcNRutu7wHWOS5G3P+FCxlCaKF5Eo2BSIVcdyZHTtJgioP98UrBK5dwvivKmBCZvqe6Zr/3z+GPsRUx2JSTYibWubHUWJCDtL6QDmvNki0rrL6LiloUMkOQxkowlAD0ORoTK4KXxN8kX2QRKk8lWQ+JxzDPI9L2l+KgGPH0WLcM8/Q96bKx/pjOruzss+wtLw9gsmnR01G30YjHDw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mO3uoanngyAgOP1Qo8tbtQnDmt1qO5O1KTAdzeGf6Jg=;
 b=PFwvLpuAZXOYGW4HrspPnm1Fonw9vuaxmD6nkYAijUv1C3nVV7N2iWKR67Dkc01EFGZ4tdtGMpnHJvwD6ToCNcV2Rt9eIEPE2hbgvmsBAjJD34x8/lesYTMdmZTRCf9SlBuWQp31lYzSZEO3qwfXlRH6ezS91kiIhxOCTeCZIiQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
Message-ID: <b5c861a4-1431-44c5-a1ec-bc859ea011c3@amd.com>
Date: Thu, 27 Jun 2024 09:40:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory node
 probing
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<oleksii.kurochko@gmail.com>
References: <20240626080428.18480-1-michal.orzel@amd.com>
 <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Received-SPF: None (SATLEXMB05.amd.com: michal.orzel@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CY4PEPF0000FCBE:EE_|SA3PR12MB9198:EE_
X-MS-Office365-Filtering-Correlation-Id: e21d0555-7fa3-4a55-dad6-08dc967c5d70
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|1800799024|82310400026|376014|36860700013;
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(1800799024)(82310400026)(376014)(36860700013);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2024 07:40:06.6306
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e21d0555-7fa3-4a55-dad6-08dc967c5d70
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CY4PEPF0000FCBE.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB9198

Hi Julien,

On 26/06/2024 22:13, Julien Grall wrote:
> 
> 
> Hi Michal,
> 
> On 26/06/2024 09:04, Michal Orzel wrote:
>> Memory node probing is done as part of early_scan_node() that is called
>> for each node with depth >= 1 (root node is at depth 0). According to
>> Devicetree Specification v0.4, chapter 3.4, a /memory node can only exist
>> as a top level node. However, Xen incorrectly considers all the nodes with
>> unit node name "memory" as RAM. This buggy behavior can result in a
>> failure if there are other nodes in the device tree (at depth >= 2) with
>> "memory" as unit node name. An example can be a "memory@xxx" node under
>> /reserved-memory. Fix it by introducing device_tree_is_memory_node() to
>> perform all the required checks to assess if a node is a proper /memory
>> node.
>>
>> Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM location and size")
>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>> ---
>> 4.19: This patch is fixing a possible early boot Xen failure (before main
>> console is initialized). In my case it results in a warning "Shattering
>> superpage is not supported" and panic "Unable to setup the directmap mappings".
>>
>> If this is too late for this patch to go in, we can backport it after the tree
>> re-opens.
> 
> The code looks correct to me, but I am not sure about merging it to 4.19.
> 
> This is not a new bug (in fact has been there since pretty much Xen on
> Arm was created) and we haven't seen any report until today. So in some
> way it would be best to merge it after 4.19 so it can get more testing.
> 
> On the other hand, I guess this will block you. Is this a new platform?
> Is it available?
Stefano answered this question. Also, IMO this change is quite straightforward
and does not leave any engineering doubt, so I'm not really sure it needs
more testing.

> 
>> ---
>>   xen/arch/arm/bootfdt.c | 28 +++++++++++++++++++++++++++-
>>   1 file changed, 27 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
>> index 6e060111d95b..23c968e6955d 100644
>> --- a/xen/arch/arm/bootfdt.c
>> +++ b/xen/arch/arm/bootfdt.c
>> @@ -83,6 +83,32 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
>>       return false;
>>   }
>>
>> +/*
>> + * Check if a node is a proper /memory node according to Devicetree
>> + * Specification v0.4, chapter 3.4.
>> + */
>> +static bool __init device_tree_is_memory_node(const void *fdt, int node,
>> +                                              int depth)
>> +{
>> +    const char *type;
>> +    int len;
>> +
>> +    if ( depth != 1 )
>> +        return false;
>> +
>> +    if ( !device_tree_node_matches(fdt, node, "memory") )
>> +        return false;
>> +
>> +    type = fdt_getprop(fdt, node, "device_type", &len);
>> +    if ( !type )
>> +        return false;
>> +
>> +    if ( (len <= 0) || strcmp(type, "memory") )
> 
> I would consider to use strncmp() to avoid relying on the property to be
> well-formed (i.e. nul-terminated).
Are you sure? AFAIR, libfdt returns NULL and sets len to -FDT_ERR_TRUNCATED if a string is not NUL-terminated.
Also, let's suppose it is somehow not terminated. In that case, how could libfdt set len to be > 0?
It needs to know where the string ends in order to calculate its length.
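For illustration, a length-aware check along the lines Julien suggests could look like the sketch below. This is a hypothetical standalone helper, not the actual Xen/libfdt code: `is_memory_device_type()` is a made-up name, and it assumes `len` is the property length as reported by fdt_getprop() (DT property strings include the trailing NUL in their length).

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical helper: decide whether a "device_type" property value of
 * length `len` equals "memory", without reading past the property buffer
 * even if it is not NUL-terminated. A well-formed value is exactly
 * sizeof("memory") == 7 bytes: six characters plus the trailing NUL.
 */
static bool is_memory_device_type(const char *type, int len)
{
    /* Reject truncated or over-long values up front. */
    if ( len != (int)sizeof("memory") )
        return false;

    /* memcmp() stays within the `len` bytes we just validated. */
    return memcmp(type, "memory", sizeof("memory")) == 0;
}
```

Because the length is checked first, the comparison never depends on finding a terminator, which is the concern strncmp() would otherwise address.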

> 
>> +        return false;
>> +
>> +    return true;
>> +}
>> +
>>   void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
>>                                   uint32_t size_cells, paddr_t *start,
>>                                   paddr_t *size)
>> @@ -448,7 +474,7 @@ static int __init early_scan_node(const void *fdt,
>>        * populated. So we should skip the parsing.
>>        */
>>       if ( !efi_enabled(EFI_BOOT) &&
>> -         device_tree_node_matches(fdt, node, "memory") )
>> +         device_tree_is_memory_node(fdt, node, depth) )
>>           rc = process_memory_node(fdt, node, name, depth,
>>                                    address_cells, size_cells, bootinfo_get_mem());
>>       else if ( depth == 1 && !dt_node_cmp(name, "reserved-memory") )
> 
> Cheers,
> 
> --
> Julien Grall

~Michal
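P.S. For readers following along, a minimal hypothetical device-tree fragment illustrating the case the patch distinguishes: only the first node below is a /memory node in the sense of the spec (depth 1, device_type = "memory"); the second shares the "memory" unit name but sits at depth 2 under /reserved-memory and must not be treated as RAM.

```dts
/ {
    #address-cells = <1>;
    #size-cells = <1>;

    /* Proper /memory node: depth 1, device_type = "memory". */
    memory@40000000 {
        device_type = "memory";
        reg = <0x40000000 0x80000000>;
    };

    reserved-memory {
        #address-cells = <1>;
        #size-cells = <1>;
        ranges;

        /* Same "memory" unit name, but at depth 2: skipped by
           device_tree_is_memory_node(). */
        memory@50000000 {
            reg = <0x50000000 0x100000>;
            no-map;
        };
    };
};
```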


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 07:54:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 07:54:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749769.1158012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMjxe-0001nE-V9; Thu, 27 Jun 2024 07:54:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749769.1158012; Thu, 27 Jun 2024 07:54:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMjxe-0001n7-SN; Thu, 27 Jun 2024 07:54:30 +0000
Received: by outflank-mailman (input) for mailman id 749769;
 Thu, 27 Jun 2024 07:54:30 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMjxe-0001n1-3z
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 07:54:30 +0000
Received: from mail-lj1-x229.google.com (mail-lj1-x229.google.com
 [2a00:1450:4864:20::229])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7a7b22e0-345a-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 09:54:27 +0200 (CEST)
Received: by mail-lj1-x229.google.com with SMTP id
 38308e7fff4ca-2ec17eb4493so103647471fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 00:54:27 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706b4a5f092sm697851b3a.216.2024.06.27.00.54.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 00:54:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a7b22e0-345a-11ef-b4bb-af5377834399
X-Gm-Message-State: AOJu0YxVHrvLfGpwwmujnFJKeQGAQU5eG+899C1D7F4pbnDvQ3TI1Af7
	Kmu33bjs2+3g2W6Qg9hm5gb5rptsOxq82IYmQv9SBWlxHaWQq7j+f6AWf3uVdA==
X-Google-Smtp-Source: AGHT+IGMwLc4xLqfDUwuHCeqYG5ZYnWiXofyxKBroq+RD3PN3/dyPnYHVbMWvk/VZ1jmKXDGspdSQQ==
X-Received: by 2002:a2e:b173:0:b0:2ec:5945:62e9 with SMTP id 38308e7fff4ca-2ee4645fff7mr23363531fa.32.1719474865404;
        Thu, 27 Jun 2024 00:54:25 -0700 (PDT)
Message-ID: <bbf362e7-f7ec-4600-9067-0324f048e1f2@suse.com>
Date: Thu, 27 Jun 2024 09:54:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 0/6][RESEND] address violations of MISRA C Rule
 20.7
To: oleksii.kurochko@gmail.com
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com,
 xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Julien Grall <julien@xen.org>, Nicola Vetrini <nicola.vetrini@bugseng.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
 <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>
 <88127f41-a3e3-4d05-b9f2-3e4117bf1503@suse.com>
 <9814c00d116f14a1ce238b131b9eba19fa130986.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <9814c00d116f14a1ce238b131b9eba19fa130986.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 19:42, oleksii.kurochko@gmail.com wrote:
> On Tue, 2024-06-25 at 08:39 +0200, Jan Beulich wrote:
>> On 25.06.2024 02:47, Stefano Stabellini wrote:
>>> I would like to ask for a release-ack, as the patch series makes very
>>> few changes outside of the static analysis configuration. The few
>>> changes to the Xen code are limited, straightforward, and make the
>>> code better; see patches #3 and #5.
>>
>> While continuing to touch automation/ may be okay, I really think time
>> has passed for further MISRA changes in 4.19, unless they're fixing
>> actual bugs of course. Just my personal view though ...
> I am not quite sure I understand the concern. From my perspective, the
> patch series addresses several MISRA violations without introducing any
> functional changes. It seems safe to incorporate these MISRA changes
> even at this stage of the release.

It's your call in the end. Seeing we're not even at RC1 yet (which I had
long expected us to be past), perhaps I'm overly concerned.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 07:57:08 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 07:57:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749776.1158022 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMk0A-0002Kx-B0; Thu, 27 Jun 2024 07:57:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749776.1158022; Thu, 27 Jun 2024 07:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMk0A-0002Kq-8R; Thu, 27 Jun 2024 07:57:06 +0000
Received: by outflank-mailman (input) for mailman id 749776;
 Thu, 27 Jun 2024 07:57:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMk09-0002Kc-DL; Thu, 27 Jun 2024 07:57:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMk09-0004MS-Ah; Thu, 27 Jun 2024 07:57:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMk09-0003di-0S; Thu, 27 Jun 2024 07:57:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMk09-0006Jp-03; Thu, 27 Jun 2024 07:57:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l297Xu5nKlN14MA8WY0uBQzkXWYmNI3GEkfxlYNWvek=; b=LwBqehlwWmHXn6LeKa2z5Ij1Jr
	+Sx7t4nXsuT0TzHvNfkzEh5nSdJIzl602S0PTxcFLF+cJB1zJGdBNECKdgojcKSmZDiYlsSeIMS2U
	fX0XHhTbyzWwGB3hgqb4N7KbT6JUQHENaNFbTbdladxqGRq/REo+Mg/73QHu1SL4mS8A=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186528-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186528: tolerable FAIL - PUSHED
X-Osstest-Versions-This:
    linux=24ca36a562d63f1bff04c3f11236f52969c67717
X-Osstest-Versions-That:
    linux=55027e689933ba2e64f3d245fb1ff185b3e7fc81
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Jun 2024 07:57:05 +0000

flight 186528 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186528/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186485

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186485
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186485
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186485
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186485
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186485
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186485
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                24ca36a562d63f1bff04c3f11236f52969c67717
baseline version:
 linux                55027e689933ba2e64f3d245fb1ff185b3e7fc81

Last test of basis   186485  2024-06-25 10:41:52 Z    1 days
Testing same since   186528  2024-06-26 22:42:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>
  Tejun Heo <tj@kernel.org>
  Wenchao Hao <haowenchao22@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   55027e689933..24ca36a562d6  24ca36a562d63f1bff04c3f11236f52969c67717 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 07:59:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 07:59:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749788.1158033 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMk2b-0003Cq-St; Thu, 27 Jun 2024 07:59:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749788.1158033; Thu, 27 Jun 2024 07:59:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMk2b-0003Cj-Pr; Thu, 27 Jun 2024 07:59:37 +0000
Received: by outflank-mailman (input) for mailman id 749788;
 Thu, 27 Jun 2024 07:59:36 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMk2a-0003CJ-4U
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 07:59:36 +0000
Received: from mail-lj1-x22e.google.com (mail-lj1-x22e.google.com
 [2a00:1450:4864:20::22e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 31a829d6-345b-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 09:59:35 +0200 (CEST)
Received: by mail-lj1-x22e.google.com with SMTP id
 38308e7fff4ca-2ebed33cb65so85922581fa.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 00:59:35 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1faac9c6141sm7194195ad.306.2024.06.27.00.59.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 00:59:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31a829d6-345b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719475175; x=1720079975; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=8vIo1QuIyyZtc4lMpkM2bWEpnBjFiSmZakRUank75F0=;
        b=L2n5wnoBHyXVTaNlWfW15GCiExnCSaYRh3NOpSlxINAGshGmBJLtUs5uDogEWjf1Ro
         c9XvzVZ0+sXsTs4M20maxKtC8sNOB5EsUKR9rxLLxtZVGESOe1prGIp8j3gIYIXdmwMJ
         Hic8pssQ1qFWukbUP1zH3/l42HlPWj3v4pZWQ+3N5rT8+huqvji3mOx6srzVke/Wfcm6
         iwCa1/RepG8XTw5s0j4kPZiA6sJNShmzDMQDEO4N/FpCVg8gNSCjgR+gA2KInF0dnFuo
         r3BGpIyAKb3nM2ZGbx9u0b/lGM7y6rNY9wRqnik6OWsoUKqtSdsAo/ptYGM2p4xqFCrx
         nERw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719475175; x=1720079975;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=8vIo1QuIyyZtc4lMpkM2bWEpnBjFiSmZakRUank75F0=;
        b=bdsy0A5sgw+G0so/jgS0Es73GVtXJor2YQ0MOU9OsKY/q5EmSyw0gueW0JzjyoWH8k
         xsuGJK+YPkrzeQYdmUrfO/ubi+XUaJ+LoDQO3Ww1EHJyROjtI4f6/tnujKv3wBWW1wM7
         g7t3BjT4wi6bWXAGJsihXTe5Lwy/LV3ifT6AII7ALDYUg95n83meWNV9zfjwOac3hrpF
         0F4j4XuDPXMSMGmAYY6IyTVNLGCZSe70KsoSjuXGubb+FIjRU6yr11m8EQrfvaGYoe09
         8zciFnzIC7O5bpJ9Y+hTKLfNLq/Y3c4z3d0CeK6AMyL1X+5E0PCVgiS6HGhmAbBUDbrQ
         cm8w==
X-Forwarded-Encrypted: i=1; AJvYcCX5HTwlAC4iXr03xFEE4gNDiY0JiZFv3BOh8yyjolXq3aMlzYQi34o1LDFom6Nc3NPj6i9bt/PbxETnKo0Hj6zkARwTQNb1qIjFO8lGy9Q=
X-Gm-Message-State: AOJu0Yxpri+wSdap4xucWGdRlRh4RK6SP1PlXMW3NHDRL2+5H5au5kM0
	xxxNzCe9uLVqfoVBMV3x198oQhse06v0QMc7CoeBqf/rMCC+LomeT+aV0yVE/A==
X-Google-Smtp-Source: AGHT+IH0kNNTY1OvtCYzPUFGVMj7oiDvu2sE2zScju4oAuMm1L+epDJa7KWbg+NrLLeyu4gqQEEPDw==
X-Received: by 2002:a2e:9c88:0:b0:2ec:5364:c790 with SMTP id 38308e7fff4ca-2ec5931d8cemr106390271fa.22.1719475174625;
        Thu, 27 Jun 2024 00:59:34 -0700 (PDT)
Message-ID: <e2d82c37-da44-4a8f-a1f8-76d5ff05b104@suse.com>
Date: Thu, 27 Jun 2024 09:59:25 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 02/10] xen/riscv: introduce bitops.h
To: oleksii.kurochko@gmail.com
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
 <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
 <4c15dd072f08b1161d170608a096dc0851ced588.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <4c15dd072f08b1161d170608a096dc0851ced588.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26.06.2024 19:27, oleksii.kurochko@gmail.com wrote:
> On Wed, 2024-06-26 at 10:31 +0200, Jan Beulich wrote:
>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/include/asm/bitops.h
>>> @@ -0,0 +1,137 @@
>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>> +/* Copyright (C) 2012 Regents of the University of California */
>>> +
>>> +#ifndef _ASM_RISCV_BITOPS_H
>>> +#define _ASM_RISCV_BITOPS_H
>>> +
>>> +#include <asm/system.h>
>>> +
>>> +#if BITOP_BITS_PER_WORD == 64
>>> +#define __AMO(op)   "amo" #op ".d"
>>> +#elif BITOP_BITS_PER_WORD == 32
>>> +#define __AMO(op)   "amo" #op ".w"
>>> +#else
>>> +#error "Unexpected BITOP_BITS_PER_WORD"
>>> +#endif
>>> +
>>> +/* Based on linux/arch/include/asm/bitops.h */
>>> +
>>> +/*
>>> + * Non-atomic bit manipulation.
>>> + *
>>> + * Implemented using atomics to be interrupt safe. Could
>>> alternatively
>>> + * implement with local interrupt masking.
>>> + */
>>> +#define __set_bit(n, p)      set_bit(n, p)
>>> +#define __clear_bit(n, p)    clear_bit(n, p)
>>> +
>>> +#define test_and_op_bit_ord(op, mod, nr, addr, ord)     \
>>> +({                                                      \
>>> +    bitop_uint_t res, mask;                             \
>>> +    mask = BITOP_MASK(nr);                              \
>>> +    asm volatile (                                      \
>>> +        __AMO(op) #ord " %0, %2, %1"                    \
>>> +        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])       \
>>> +        : "r" (mod(mask))                               \
>>> +        : "memory");                                    \
>>> +    ((res & mask) != 0);                                \
>>> +})
>>> +
>>> +#define op_bit_ord(op, mod, nr, addr, ord)      \
>>> +    asm volatile (                              \
>>> +        __AMO(op) #ord " zero, %1, %0"          \
>>> +        : "+A" (addr[BITOP_WORD(nr)])           \
>>> +        : "r" (mod(BITOP_MASK(nr)))             \
>>> +        : "memory");
>>> +
>>> +#define test_and_op_bit(op, mod, nr, addr)    \
>>> +    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
>>> +#define op_bit(op, mod, nr, addr) \
>>> +    op_bit_ord(op, mod, nr, addr, )
>>> +
>>> +/* Bitmask modifiers */
>>> +#define NOP(x)    (x)
>>> +#define NOT(x)    (~(x))
>>
>> Since elsewhere you said we would use Zbb in bitops, I wanted to come
>> back
>> on that: Up to here all we use is AMO.
>>
>> And further down there's no asm() anymore. What were you referring
>> to?
> RISC-V doesn't have a CLZ instruction in the base
> ISA.  As a consequence, __builtin_ffs() emits a library call to ffs()
> on GCC,

Oh, so we'd need to implement that libgcc function, along the lines of
Arm32 implementing quite a few of them to support shifts on 64-bit
quantities as well as division and modulo.

Jan

> or a de Bruijn sequence on Clang.
> 
> The optional Zbb extension adds a CLZ instruction, after which
> __builtin_ffs() emits a very simple sequence.
> 
> ~ Oleksii
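[Editorial note: to make the libgcc point above concrete, here is a minimal, self-contained sketch of an ffs() fallback such as a base-ISA RISC-V build would need. The name generic_ffs64 is illustrative only; a real libgcc ffs() or Clang's de Bruijn lowering is faster than this naive shift loop, which just spells out the semantics: return the 1-based index of the least significant set bit, or 0 for no bit set.]

```c
#include <stdint.h>

/*
 * Hypothetical fallback for __builtin_ffsll() on a base-ISA RISC-V
 * target (no Zbb, hence no ctz/clz instruction).  Not Xen's or
 * libgcc's actual implementation -- a sketch of the semantics only.
 */
static int generic_ffs64(uint64_t x)
{
    int i;

    if ( !x )
        return 0;

    /* Shift right until the lowest bit is set, counting positions. */
    for ( i = 1; !(x & 1); ++i )
        x >>= 1;

    return i;
}
```

With the Zbb extension available, the same builtin compiles down to a single ctz plus an adjustment for the zero input.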



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 08:01:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 08:01:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749798.1158043 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMk4C-0005Bv-Cc; Thu, 27 Jun 2024 08:01:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749798.1158043; Thu, 27 Jun 2024 08:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMk4C-0005Bo-9L; Thu, 27 Jun 2024 08:01:16 +0000
Received: by outflank-mailman (input) for mailman id 749798;
 Thu, 27 Jun 2024 08:01:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMk4B-0005Bf-L8
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 08:01:15 +0000
Received: from mail-lf1-x130.google.com (mail-lf1-x130.google.com
 [2a00:1450:4864:20::130])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 6c6c4cb5-345b-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 10:01:13 +0200 (CEST)
Received: by mail-lf1-x130.google.com with SMTP id
 2adb3069b0e04-52d259dbe3cso2464652e87.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 01:01:13 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 41be03b00d2f7-72745d077e7sm601111a12.37.2024.06.27.01.01.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 01:01:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c6c4cb5-345b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719475273; x=1720080073; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ecNue83XO5T68nH08GCvk0yWw6PYIJxula9SXfUI4yo=;
        b=RJJ5joMnYquztrKNgQE6/yOrqpxH4SemwYDT5gRRNob6NlT3XywpGNVVjshnQPz+1I
         MwIVW1qKeK05clEnO3/K1Db/3NoMg9XGcOM0xy+FxPYUKr/Xz+DiUcX6jDJCfNAMqgc+
         KUeIOvBZcLfal9pdYEVuDGuhdrx7UmEQns81dWlZy58YIQSQ2DLXnjEwK6uZEthbFwza
         hU4TjEsmT2uAxb7JiqAJbEKsHyhzMEAjb6HwPOVoTvQ6uP6s0eVfdkQYPjZ+kPWvKnBo
         XNYlvgzO5H2JMAMkw5Sn1gQ2z9L6wI2TyCSVe9veo7WtiHFsroufnSEs7LpAXFBj9M3g
         5qWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719475273; x=1720080073;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ecNue83XO5T68nH08GCvk0yWw6PYIJxula9SXfUI4yo=;
        b=qPFJUTo5W30Sewe0t3sG4gvwHts7HCJiqFGlPPaiJOrtxcxPZQ7IzQ5hv2u5cirHqI
         lB7ud1YLtbPChDabP57is4hzJloxjriGzeS4/bajW3p6QuvTJdE9LVwhkOsdGN2S8wrd
         +kNQpuphOpt25YaRyuW9FjrRpIYVdAYJp6dKTenToALdG8HpGbZjmrNXHftBKU0WXVQI
         z6545f3+x447EBJr2IG9otvhWtdT5Dn0usstShthUxvQF41plVezBbe8FEMowdq1Onp9
         KVqVz2bQvfBtAcCzvWUwAmrv4CEgliMN5FtJE0jXKqS0MMSXfaXqmaEgVue5yplFUFyw
         +jfg==
X-Forwarded-Encrypted: i=1; AJvYcCXTXrvkJsq6qqW7AYow1Zbuk4UY0QexdxU2oC1yCDoOpZMg0Tfh12JBIX4sGdDiEdPoECN99JRmy8sUuH9f1q+C3uMYuYJ2lz/uxjjncok=
X-Gm-Message-State: AOJu0YzNhWwuxqLf1iQRUaSF4Mn+SXEUqWJC19NGr8GJBHC7zMjTPmVl
	n4esrj7XcUkfSGXiGoASINNOwqoyiuq3UBVTYABa5d2Y7qchlIstWo/C++MUQw==
X-Google-Smtp-Source: AGHT+IEoqMXKAByh38Y66A01VDwn6Wlt498BsogVjwV0meToi9ptolputQFyajlbCTqPS5YNXCqGkw==
X-Received: by 2002:a2e:3e13:0:b0:2ec:5685:f05f with SMTP id 38308e7fff4ca-2ec5938a771mr87073551fa.49.1719475273338;
        Thu, 27 Jun 2024 01:01:13 -0700 (PDT)
Message-ID: <30c1c508-daf9-40df-b9ae-0a1584eacc0a@suse.com>
Date: Thu, 27 Jun 2024 10:01:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH 6/6] x86/xstate: Switch back to for_each_set_bit()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240625190719.788643-1-andrew.cooper3@citrix.com>
 <20240625190719.788643-7-andrew.cooper3@citrix.com>
 <59201cf5-9d86-4976-a331-2a7f8bb9635a@suse.com>
 <961f5371-3616-4476-ae12-e1d91cc56345@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <961f5371-3616-4476-ae12-e1d91cc56345@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26.06.2024 20:09, Andrew Cooper wrote:
> On 26/06/2024 11:24 am, Jan Beulich wrote:
>> On 25.06.2024 21:07, Andrew Cooper wrote:
>>> In all 3 examples, we're iterating over a scalar.  No caller can pass the
>>> COMPRESSED flag in, so the upper bound of 63, as opposed to 64, doesn't
>>> matter.
>> Not sure, maybe more a language question (for my education): Is "can"
>> really appropriate here?
> 
> It's not the greatest choice, but it's not objectively wrong either.
> 
>>  In recalculate_xstate() we calculate the
>> value ourselves, but in the two other cases the value is incoming to
>> the functions. Architecturally those values should not have bit 63 set,
>> but that's weaker than "can" according to my understanding. I'd be
>> fine with "may", for example.
> 
> There's an ASSERT() in xstate_uncompressed_size() which covers the
> property, but most of the justification comes from the fact that the
> callers pass in values which are really loaded into hardware registers.
> 
> But it is certainly more accurate to say that callers don't pass the
> flag in.
> 
> There isn't an ASSERT() in xstate_compressed_size(), but I suppose I
> could fold this in:
> 
> diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
> index 88dbfbeafacd..f72f14626b7d 100644
> --- a/xen/arch/x86/xstate.c
> +++ b/xen/arch/x86/xstate.c
> @@ -623,6 +623,8 @@ unsigned int xstate_compressed_size(uint64_t xstates)
>  {
>      unsigned int size = XSTATE_AREA_MIN_SIZE;
>  
> +    ASSERT((xstates & ~(X86_XCR0_STATES | X86_XSS_STATES)) == 0);
> +
>      if ( xstates == 0 )
>          return 0;
>  
> 
> which brings it more in line with xstate_uncompressed_size(), and has a
> side effect of confirming the absence of the COMPRESSED bit.
> 
> Thoughts?

Definitely fine with me.

Jan
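[Editorial note: the guard folded in above follows a common pattern: assert that a scalar contains only architecturally valid bits before iterating over its set bits, so the loop bound of 63 vs. 64 never matters. A minimal sketch follows; the mask value and names are illustrative placeholders, not Xen's real X86_XCR0_STATES / X86_XSS_STATES definitions.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for Xen's real state-component masks. */
#define VALID_STATES      0x00000000000602e7ULL
#define XSTATE_COMPACTION (1ULL << 63)   /* the COMPRESSED flag */

/*
 * Count the components named by a state mask.  The assertion up front
 * rejects reserved bits -- notably bit 63, which VALID_STATES excludes
 * -- so the iteration below never sees the compaction flag.
 */
static unsigned int count_components(uint64_t xstates)
{
    unsigned int n = 0;

    assert( (xstates & ~VALID_STATES) == 0 );

    while ( xstates )
    {
        xstates &= xstates - 1;   /* clear the lowest set bit */
        ++n;
    }

    return n;
}
```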


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 08:11:33 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 08:11:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749813.1158053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMkE4-00075T-9K; Thu, 27 Jun 2024 08:11:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749813.1158053; Thu, 27 Jun 2024 08:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMkE4-00075M-6d; Thu, 27 Jun 2024 08:11:28 +0000
Received: by outflank-mailman (input) for mailman id 749813;
 Thu, 27 Jun 2024 08:11:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMkE2-00075D-S2
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 08:11:26 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id d88f04d6-345c-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 10:11:24 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2ec5779b423so55300971fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 01:11:24 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1faac9793a1sm7428555ad.178.2024.06.27.01.11.18
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 01:11:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d88f04d6-345c-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719475884; x=1720080684; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=E5J23NZ1segqqhFXXYTwxEvTsF4rIj3ccvQoT6q3rCo=;
        b=CjXhR82jLroqLgB50FYDT+tPaOs5UAxIAKIeed+BF0bEaV5phucUieIxkVbvlS1Z55
         IcdMXYLpTja8pC94uL5jZJXu9NpbSYkz9j+YbhOM1R8aDKo4DxKAh0U+GL3OodOFfJyE
         J2JC6tflGjLpbhWDpbntPfuWEP3EFOJUUSVSztF1JbxGN5bdkbzp+/zoJSJqIbExiaNs
         48QGL7R5AHOUiIHmKQcfwBT3pvoVKn7UsJB66OfcNqjRmaTYwARJNBQRWJ16/DnbCJs/
         GVRBKmtWRkXS89choZ6yliNTVm+VZgKmqU5oEt8hNiTZZcEBRamFNRokoor2nHqNOsOF
         trmg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719475884; x=1720080684;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=E5J23NZ1segqqhFXXYTwxEvTsF4rIj3ccvQoT6q3rCo=;
        b=QD9OeqYXh5KwIbfvjaowZRtPsvHue9QQCIS12HBfqYkk6bSAi6ey7zLb/Auol6p8QQ
         c88JXQAQl3OcfbksmYtW8Fglm8d5Ng6M/G1Q2J/5els/6YMh6rsgipU9v3ZxR1iafEnA
         Gqy+AkoEy14+/ZOoSetfdcyOnaK1s2xZSty0xC/pP1JJYp8sQvaDJ3A2Xk7G0YYO9Jd2
         TwDpDe6FF1rH2xseLdUCN9/aeOHsjE+2uMBwaO/7FtVELpIvyhlWrrjxlOWLLuLVQI5q
         zDzLSSYx+XHN7kFtgUdh6uLEwD9DpBt2HoiHjFZdeliXxN6svR/1fVbLuLcdd6QQfj1n
         qSkQ==
X-Gm-Message-State: AOJu0YxL3QB6WYcE069+vCawMhTOLGb6McNGqO3B0UD4g2JsuSuEt7l1
	2gdjyMUlwNTX2gMDIU1eXed6jo2aexfcr+7LtMtjEHrj4ZbJG10c6HD7BL9Xcw==
X-Google-Smtp-Source: AGHT+IH/w27rasRrzMJFDIbQmkAtVIUlSQbSFOoy/RDmgPHQaNnKDxvk1u6E+R9LP+76cpuVDhVqgA==
X-Received: by 2002:a2e:9090:0:b0:2ec:5621:b9f2 with SMTP id 38308e7fff4ca-2ec5936fb3amr95241051fa.41.1719475882128;
        Thu, 27 Jun 2024 01:11:22 -0700 (PDT)
Message-ID: <84eb22c8-7737-4e6b-8194-724c792c2d92@suse.com>
Date: Thu, 27 Jun 2024 10:11:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Federico Serafini <federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
 <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop>
 <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
 <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop>
 <6441010f-c2f6-4098-bf23-837955dcf803@suse.com>
 <alpine.DEB.2.22.394.2406261758390.3635@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <alpine.DEB.2.22.394.2406261758390.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.06.2024 03:53, Stefano Stabellini wrote:
> On Wed, 26 Jun 2024, Jan Beulich wrote:
>> On 26.06.2024 03:11, Stefano Stabellini wrote:
>>> On Tue, 25 Jun 2024, Jan Beulich wrote:
>>>> On 25.06.2024 02:54, Stefano Stabellini wrote:
>>>>> On Mon, 24 Jun 2024, Federico Serafini wrote:
>>>>>> Add break statements or the pseudo-keyword fallthrough to address violations of
>>>>>> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
>>>>>> every switch-clause".
>>>>>>
>>>>>> No functional change.
>>>>>>
>>>>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>>>>>> ---
>>>>>>  xen/arch/x86/traps.c | 3 +++
>>>>>>  1 file changed, 3 insertions(+)
>>>>>>
>>>>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>>>>>> index 9906e874d5..cbcec3fafb 100644
>>>>>> --- a/xen/arch/x86/traps.c
>>>>>> +++ b/xen/arch/x86/traps.c
>>>>>> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>>>>>>  
>>>>>>      default:
>>>>>>          ASSERT_UNREACHABLE();
>>>>>> +        break;
>>>>>
>>>>> Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
>>>>> statements" that can terminate a case, in addition to break.
>>>>
>>>> Why? Exactly the opposite is part of the subject of a recent patch, iirc.
>>>> Simply because of the rules needing to cover both debug and release builds.
>>>
>>> The reason is that ASSERT_UNREACHABLE() might disappear from the release
>>> build but it can still be used as a marker during static analysis. In
>>> my view, ASSERT_UNREACHABLE() is equivalent to a noreturn function call
>>> which has an empty implementation in release builds.
>>>
>>> The only reason I can think of to require a break; after an
>>> ASSERT_UNREACHABLE() would be if we think the unreachability only applies
>>> to debug build, not release build:
>>>
>>> - debug build: it is unreachable
>>> - release build: it is reachable
>>>
>>> I don't think that is meant to be possible so I think we can use
>>> ASSERT_UNREACHABLE() as a marker.
>>
>> Well. For one, such an assumption takes as a prereq that a debug build will
>> be run through full coverage testing, i.e. all reachable paths proven to
>> be taken. I understand that this prereq is intended to somehow be met,
>> even if I'm having difficulty seeing what such a final proof would look
>> like: Full coverage would, to me, mean that _every_ line is reachable. Yet
>> clearly any ASSERT_UNREACHABLE() must never be reached.
>>
>> And then not covering for such cases takes the further assumption that
>> debug and release builds are functionally identical. I'm afraid this would
>> be a wrong assumption to make:
>> 1) We may screw up somewhere, with code wrongly enabled only in one of the
>>    two build modes.
>> 2) The compiler may screw up, in particular with optimization.
> 
> I think there are two different issues here we are discussing.
> 
> One issue, like you said, has to do with coverage. It is important to
> mark as "unreachable" any part of the code that is indeed unreachable
> so that we can account it properly when we do coverage analysis. At the
> moment the only "unreachable" marker that we have is
> ASSERT_UNREACHABLE(), and I am hoping we can use it as part of the
> coverage analysis we'll do.
> 
> However, there is a different separate question about what to do in the
> Xen code after an ASSERT_UNREACHABLE(). E.g.:
> 
>              default:
>                  ASSERT_UNREACHABLE();
>                  return -EPERM; /* is it better with or without this? */
>              }
> 
> Leaving coverage aside, would it be better to be defensive and actually
> attempt to report errors back after an ASSERT_UNREACHABLE() like in the
> example? Or is it better to assume the code is actually unreachable
> hence there is no need to do anything afterwards?
> 
> On the one hand, being defensive sounds good; on the other hand, any code
> we add after ASSERT_UNREACHABLE() is dead code which cannot be tested,
> which is also not good. In this example, there is no way to test the
> return -EPERM code path. We also need to consider what is the right
> thing to do if Xen finds itself in an erroneous situation such as being
> in an unreachable code location.
> 
> So, after thinking about it and also talking to the safety manager, I
> think we should:
> - implement ASSERT_UNREACHABLE with a warning in release builds

If at all, then controlled by a default-off Kconfig setting. This would,
after all, raise the question of how ASSERT() should behave then. Imo
the two should be consistent in this regard, and NDEBUG clearly results
in the expectation that ASSERT() expands to nothing. Perhaps this is
finally the time where we need to separate NDEBUG from CONFIG_DEBUG; we
did discuss doing so before. Then in your release builds, you could
actually leave assertions active.

> - have "return -EPERM;" or similar for defensive programming

You don't say how you'd deal with the not-reachable aspect then.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 08:28:55 2024
Message-ID: <c4f0e177-3c2c-47bc-a8a1-e69aff1c3bed@suse.com>
Date: Thu, 27 Jun 2024 10:28:29 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19 v2] x86/spec-ctrl: Support for SRSO_US_NO and
 SRSO_MSR_FIX
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Oleksii Kurochko <oleksii.kurochko@gmail.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20240619191057.2588693-1-andrew.cooper3@citrix.com>
 <8510895d-70fb-4fce-adfa-ac5638b4ae3c@suse.com>
 <b27d4722-a8d0-4134-bc2e-d2d751a88c10@citrix.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <b27d4722-a8d0-4134-bc2e-d2d751a88c10@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 26.06.2024 22:20, Andrew Cooper wrote:
> On 20/06/2024 9:39 am, Jan Beulich wrote:
>> On 19.06.2024 21:10, Andrew Cooper wrote:
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -2390,7 +2390,7 @@ By default SSBD will be mitigated at runtime (i.e `ssbd=runtime`).
>>>  >              {ibrs,ibpb,ssbd,psfd,
>>>  >              eager-fpu,l1d-flush,branch-harden,srb-lock,
>>>  >              unpriv-mmio,gds-mit,div-scrub,lock-harden,
>>> ->              bhi-dis-s}=<bool> ]`
>>> +>              bhi-dis-s,bp-spec-reduce}=<bool> ]`
>>>  
>>>  Controls for speculative execution sidechannel mitigations.  By default, Xen
>>>  will pick the most appropriate mitigations based on compiled in support,
>>> @@ -2539,6 +2539,13 @@ boolean can be used to force or prevent Xen from using speculation barriers to
>>>  protect lock critical regions.  This mitigation won't be engaged by default,
>>>  and needs to be explicitly enabled on the command line.
>>>  
>>> +On hardware supporting SRSO_MSR_FIX, the `bp-spec-reduce=` option can be used
>>> +to force or prevent Xen from using MSR_BP_CFG.BP_SPEC_REDUCE to mitigate the
>>> +SRSO (Speculative Return Stack Overflow) vulnerability.
>> Upon my request to add "... against HVM guests" here you replied "Ok.", yet
>> then you didn't make the change? Even a changelog entry says you supposedly
>> added this, so perhaps simply lost in a refresh?
> 
> Yes, and later in the thread you (correctly) pointed out that it's not
> only for HVM guests.
> 
> The PV aspect, discussed in the following sentence, is very relevant and
> makes it not specific to HVM guests.

Hmm, yes, sorry, I apparently lost track of that again.

>>> --- a/xen/arch/x86/cpu/amd.c
>>> +++ b/xen/arch/x86/cpu/amd.c
>>> @@ -1009,16 +1009,33 @@ static void cf_check fam17_disable_c6(void *arg)
>>>  	wrmsrl(MSR_AMD_CSTATE_CFG, val & mask);
>>>  }
>>>  
>>> -static void amd_check_erratum_1485(void)
>>> +static void amd_check_bp_cfg(void)
>>>  {
>>> -	uint64_t val, chickenbit = (1 << 5);
>>> +	uint64_t val, new = 0;
>>>  
>>> -	if (cpu_has_hypervisor || boot_cpu_data.x86 != 0x19 || !is_zen4_uarch())
>>> +	/*
>>> +	 * AMD Erratum #1485.  Set bit 5, as instructed.
>>> +	 */
>>> +	if (!cpu_has_hypervisor && boot_cpu_data.x86 == 0x19 && is_zen4_uarch())
>>> +		new |= (1 << 5);
>>> +
>>> +	/*
>>> +	 * On hardware supporting SRSO_MSR_FIX, activate BP_SPEC_REDUCE by
>>> +	 * default.  This lets us do two things:
>>> +         *
>>> +         * 1) Avoid IBPB-on-entry to mitigate SRSO attacks from HVM guests.
>>> +         * 2) Lets us advertise SRSO_US_NO to PV guests.
>>> +	 */
>>> +	if (boot_cpu_has(X86_FEATURE_SRSO_MSR_FIX) && opt_bp_spec_reduce)
>>> +		new |= BP_CFG_SPEC_REDUCE;
>>> +
>>> +	/* Avoid reading BP_CFG if we don't intend to change anything. */
>>> +	if (!new)
>>>  		return;
>>>  
>>>  	rdmsrl(MSR_AMD64_BP_CFG, val);
>>>  
>>> -	if (val & chickenbit)
>>> +	if ((val & new) == new)
>>>  		return;
>> You explained that you want to avoid making this more complex, upon my
>> question towards tweaking this to also deal with us possibly clearing
>> flags. I'm okay with that, but I did ask that you add at least half a
>> sentence to actually clarify this to future readers (which might include
>> myself).
> 
> "I wrote it this way because it is sufficient and simple, but you can
> change it in the future if you really need to" is line noise wherever
> it's written.
> 
> It literally goes without saying, for every line in the entire codebase.

Well. Elsewhere we go to the lengths of trying to cover for unexpected
states. So to me this goes beyond "sufficient and simple". But anyway.

>> However, as an unrelated aspect: According to the respective part of the
>> comment you add to calculate_pv_max_policy(), do we need the IBPB when
>> BP_SPEC_REDUCE is active?
> 
> To what are you referring?

The fact that for HVM the change to ibpb_calculations() results in no
entry-IBPB when we have SRSO_MSR_FIX and opt_bp_spec_reduce is true.
To me the comment you add to calculate_pv_max_policy() suggests a
sufficient similarity. IOW ...

> SRSO is about Return predictions becoming poisoned by other
> predictions.  The best way to mount an SRSO attack is with forged
> near-branch predictions.
> 
> For SRSO safety, we use IBPB to flush the Branch *Type* information from
> the BTB.  Fam17h happened to have this property, but Fam19h needed it
> retrofitting in a microcode update, with the prior "Indirect Branch
> Targets only, explicitly retaining the Branch Type information" being
> retroactively named SBPB.
> 
> This does not interact with the use of IBPB for its
> architecturally-given purpose.

... why is there an interaction for HVM but not for PV? (Sorry, I'm
apparently lost here to a certain degree.)

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 08:55:53 2024
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH] x86: p2m-pod: address violation of MISRA C Rule 2.1
Date: Thu, 27 Jun 2024 10:55:42 +0200
Message-Id: <a050ef1b662f64bb58afb2f6118254254dd1d649.1719478448.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The label 'out_unmap' is only reachable after ASSERT_UNREACHABLE,
so the code below is only executed upon erroneously reaching that
program point and calling domain_crash, thus resulting in the
for loop after 'out_unmap' becoming unreachable in some configurations.

This is a defensive coding measure to have a safe fallback that is
reachable in non-debug builds, and can thus be deviated with a
comment-based deviation.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
---
 docs/misra/safe.json      | 8 ++++++++
 xen/arch/x86/mm/p2m-pod.c | 1 +
 2 files changed, 9 insertions(+)

diff --git a/docs/misra/safe.json b/docs/misra/safe.json
index c213e0a0be3b..b114c9485c86 100644
--- a/docs/misra/safe.json
+++ b/docs/misra/safe.json
@@ -60,6 +60,14 @@
         },
         {
             "id": "SAF-7-safe",
+            "analyser": {
+                "eclair": "MC3R1.R2.1"
+            },
+            "name": "MC3R1.R2.1: statement unreachable in some configurations",
+            "text": "Every path that can reach this statement is preceded by statements that make it unreachable in certain configurations (e.g. ASSERT_UNREACHABLE). This is understood as a means of defensive programming."
+        },
+        {
+            "id": "SAF-8-safe",
             "analyser": {},
             "name": "Sentinel",
             "text": "Next ID to be used"
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index bd84fe9e27ee..5a96c46a2286 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1040,6 +1040,7 @@ out_unmap:
      * Something went wrong, probably crashing the domain.  Unmap
      * everything and return.
      */
+    /* SAF-7-safe Rule 2.1: defensive programming */
     for ( i = 0; i < count; i++ )
         if ( map[i] )
             unmap_domain_page(map[i]);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 08:55:53 2024
Message-ID: <b508c1b8-1bdd-4378-a76d-7056452406d3@suse.com>
Date: Thu, 27 Jun 2024 10:55:35 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH] xen/riscv: PE/COFF image header for RISC-V target
To: Milan Djokic <Milan.Djokic@rt-rk.com>
Cc: "milandjokic1995@gmail.com" <milandjokic1995@gmail.com>,
 Nikola Jelic <nikola.jelic@rt-rk.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <87b5e458498bbff2e54ac011a50ff1f9555c3613.1717354932.git.milan.djokic@rt-rk.com>
 <0e10ee9c215269b577321ba44f5d038a5eb299a7.1718193326.git.milan.djokic@rt-rk.com>
 <8112bee8-efdc-4db9-b0d4-58b160b4e923@suse.com>
 <DU5PR08MB103973ABF5E6F12853F5D24E1CEC12@DU5PR08MB10397.eurprd08.prod.outlook.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <DU5PR08MB103973ABF5E6F12853F5D24E1CEC12@DU5PR08MB10397.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 18:16, Milan Djokic wrote:
>> Signed-off-by: Nikola Jelic <nikola.jelic@rt-rk.com>
> 
> This isn't you, is it? Your S-o-b is going to be needed, too.
> 
> Nikola.jelic@rt-rk.com is the initial author of the patch, I'll add myself also if necessary
> 
>> +config RISCV_EFI
>> +     bool "UEFI boot service support"
>> +     depends on RISCV_64
>> +     default n
> 
> Nit: This line can be omitted (and if I'm not mistaken we generally do omit
> such).
> 
> If we remove the default value, the EFI header will be included in the xen image by default.

Why's this? Or in other words, what are you deriving this from? Not specifying
a default implicitly means "n", from all I know.
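For reference, Jan's point can be illustrated with a minimal fragment (the option names are taken from the quoted patch): in the kernel-style Kconfig language, a `bool` option with no `default` line defaults to n, so the form below behaves the same as one carrying an explicit `default n`.

```kconfig
# Equivalent to writing "default n": a bool without an explicit
# default is disabled unless enabled in the configuration.
config RISCV_EFI
	bool "UEFI boot service support"
	depends on RISCV_64
```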

> We want to keep it as optional for now, and generate a plain ELF as the default format (until full EFI support is implemented)

I fully second this.

>> --- /dev/null
>> +++ b/xen/arch/riscv/include/asm/pe.h
>> @@ -0,0 +1,148 @@
>> +#ifndef _ASM_RISCV_PE_H
>> +#define _ASM_RISCV_PE_H
>> +
>> +#define LINUX_EFISTUB_MAJOR_VERSION     0x1
>> +#define LINUX_EFISTUB_MINOR_VERSION     0x0
>> +
>> +#define MZ_MAGIC                    0x5a4d          /* "MZ" */
>> +
>> +#define PE_MAGIC                    0x00004550      /* "PE\0\0" */
>> +#define PE_OPT_MAGIC_PE32           0x010b
>> +#define PE_OPT_MAGIC_PE32PLUS       0x020b
>> +
>> +/* machine type */
>> +#define IMAGE_FILE_MACHINE_RISCV32  0x5032
>> +#define IMAGE_FILE_MACHINE_RISCV64  0x5064
> 
> Apart from this, hardly anything in this header is RISC-V specific.
> Please consider moving to xen/include/xen/.
> 
> We shall move the generic part into xen/include/xen/efi. This is something which should be considered for use on arm/x86 also.

It isn't, no. That's the case for Arm only so far afaict.

> Currently the PE/COFF header is directly embedded into
> head.S for arm/x86
> 
>> +    char     name[8];                /* name or "/12\0" string tbl offset */
> 
> Why 12?
> 
> Either the section name is specified in this field, or a string table offset if the name can't fit into 8 bytes, which is the case here.

Well, yes, I'm certainly aware of that. But the question wasn't about the
format, it was specifically about the hardcoded value 12. Why not 11 or 13?

> Btw this is taken over from linux kernel together with the PE/COFF header structures: https://github.com/torvalds/linux/blob/master/include/linux/pe.h

Which is in no way removing the need for you to be able to explain the
changes you're making.

>> + * struct riscv_image_header - riscv xen image header
> 
> You saying "xen": Is there anything Xen-specific in this struct?
> 
> Not really related to xen, this is generic riscv PE image header, comment fixed in new version
> 
>> +        .long   0                                       /* LoaderFlags */
>> +        .long   (section_table - .) / 8                 /* NumberOfRvaAndSizes */
>> +        .quad   0                                       /* ExportTable */
>> +        .quad   0                                       /* ImportTable */
>> +        .quad   0                                       /* ResourceTable */
>> +        .quad   0                                       /* ExceptionTable */
>> +        .quad   0                                       /* CertificationTable */
>> +        .quad   0                                       /* BaseRelocationTable */
> 
> Would you mind clarifying on what basis this set of 6 entries was
> chosen?
> 
> These fields and their sizes are defined in the official PE format; see details from the specification below
> 
> [inline image: excerpt from the PE format specification]

Again, I'm aware of the specification. Yet like the 12 above the 6 here
looks arbitrarily chosen. There are more entries in this table which
are permitted to be present (and well-defined). There could also be
fewer of them; any absent entry implicitly holds the value 0 afaia.

>> +/* Section table */
>> +section_table:
>> +        .ascii  ".text\0\0\0"
>> +        .long   0
>> +        .long   0
>> +        .long   0                                       /* SizeOfRawData */
>> +        .long   0                                       /* PointerToRawData */
>> +        .long   0                                       /* PointerToRelocations */
>> +        .long   0                                       /* PointerToLineNumbers */
>> +        .short  0                                       /* NumberOfRelocations */
>> +        .short  0                                       /* NumberOfLineNumbers */
>> +        .long   IMAGE_SCN_CNT_CODE | \
>> +                IMAGE_SCN_MEM_READ | \
>> +                IMAGE_SCN_MEM_EXECUTE                   /* Characteristics */
>> +
>> +        .ascii  ".data\0\0\0"
>> +        .long   _end - xen_start                        /* VirtualSize */
>> +        .long   xen_start - efi_head                    /* VirtualAddress */
>> +        .long   __init_end_efi - xen_start              /* SizeOfRawData */
>> +        .long   xen_start - efi_head                    /* PointerToRawData */
>> +        .long   0                                       /* PointerToRelocations */
>> +        .long   0                                       /* PointerToLineNumbers */
>> +        .short  0                                       /* NumberOfRelocations */
>> +        .short  0                                       /* NumberOfLineNumbers */
>> +        .long   IMAGE_SCN_CNT_INITIALIZED_DATA | \
>> +                IMAGE_SCN_MEM_READ | \
>> +                IMAGE_SCN_MEM_WRITE                    /* Characteristics */
> 
> IOW no code and the entire image expressed as data. Interesting.
> No matter whether that has a reason or is completely arbitrary, I
> think it, too, wants commenting on.
> 
> This is correct. Currently we have extended the image with a PE/COFF (EFI) header, which allows xen to boot from an EFI loader (or U-Boot) environment, and these updates are pure data. We are actively working on the implementation of Boot/Runtime services, which will be in the code section and enable a fully UEFI-compatible xen application for riscv.

Such a choice, even if transient, needs explaining in the description
(or maybe even a code comment) then.

> Why does the blank line disappear? And why is ...
> 
>>      . = ALIGN(POINTER_ALIGN);
>>      __init_end = .;
> 
> ... __init_end not good enough? (I think I can guess the answer, but
>> then I further think the name of the symbol is misleading.)
> 
> __init_end_efi is used only when the EFI sections are included in the image.

Again, my question was different: I asked why a symbol we have already
isn't good enough, i.e. why another one needs adding.

> We have aligned with the Arm implementation here; you can take a look there as well.

And yet again, as per above, you need to be able to explain your decisions.
You can't just say "it's done this way elsewhere as well". What if that
"elsewhere" has an obvious or maybe just subtle bug?

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:34:25 2024
Message-ID: <be292bcf-d77f-44ba-b29a-b1608586647b@outstep.com>
Date: Thu, 27 Jun 2024 05:33:44 -0400
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: Disaggregated (Xoar) Dom0 Building
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <376f0fe4-4ae8-461d-87f2-0fa2e6913689@outstep.com>
 <59d67a78-14a0-42ac-b0dc-3d75c109f767@suse.com>
Content-Language: en-US
From: Lonnie Cumberland <lonnie@outstep.com>
In-Reply-To: <59d67a78-14a0-42ac-b0dc-3d75c109f767@suse.com>


Thanks for your suggestions and information; I will definitely look
into these more.

I have had a very brief introduction to Dom0less and it is definitely
something of interest for me to review as well.

On the QubesOS side, I also read up a little on it, and while it has a
number of similarities to what I am trying to do on the functional side,
it seems to be a whole distro release that comes as a 6GB ISO download
to install, whereas the project I am working towards has everything as a
RAM-based, ultra-lightweight thin hypervisor. I looked over ACRN, the
NOVA microhypervisor (Hedron, BedRock, Udo), Rust-Shyper,
Bareflank-MicroV, and many other development efforts, but it seems that
Xen is the most advanced for my purposes here.

Thanks again and I will dig into everything much more as well.

Have a great day,
Lonnie

On 6/27/2024 1:54 AM, Juergen Gross wrote:
> On 26.06.24 18:47, Lonnie Cumberland wrote:
>> Hello All,
>>
>> I hope that everyone is doing well today.
>>
>> Currently, I am investigating and researching the ideas of 
>> "Disaggregating" Dom0 and have the Xoar Xen patches ("Breaking Up is 
>> Hard to Do: Security and Functionality in a Commodity Hypervisor" 
>> 2011) available which were developed against version 22155 of 
>> xen-unstable. The Linux patches are against Linux with pvops 
>> 2.6.31.13 and developed on a standard Ubuntu 10.04 install. My effort 
>> would also be to update these patches.
>>
>> I have been able to locate the Xen "Dom0 Disaggregation" wiki page 
>> (https://wiki.xenproject.org/wiki/Dom0_Disaggregation) and am reading 
>> up on things now, but wanted to ask the developers list about any 
>> experience you may have had in this area, since the research objective 
>> is to integrate Xoar with the latest Xen 4.20, if possible, and to 
>> take it further to basically eliminate Dom0 altogether, with 
>> individual Mini-OS or Unikernel "Service and Driver VMs" loaded 
>> at UEFI boot time instead.
>>
>> Any guidance, thoughts, or ideas would be greatly appreciated,
>
> Just some pointers, this is not an exhaustive list:
>
> - you should have a look at dom0less (see 
> docs/features/dom0less.pandoc in
>   the Xen source tree) and hyperlaunch (see 
> docs/designs/launch/hyperlaunch.rst
>   in the Xen source tree)
>
> - Xenstore in a stub-domain is working fine, it is the default in 
> openSUSE and
>   SLE
>
> - QubesOS has a lot of the disaggregation you are looking for implemented
>
> - I'm pretty sure only very few changes should be needed for the Linux 
> kernel,
>   if any.
>
>
> Juergen



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:42:36 2024
Message-ID: <fe255839-f8ab-4dd1-abe8-8ec834099a8d@suse.com>
Date: Thu, 27 Jun 2024 11:42:20 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 06/10] tools/libguest: Make setting MTRR registers
 unconditional
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Anthony PERARD <anthony.perard@vates.tech>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
 <2c55d486bb0c54a3e813abc66d32f321edd28b81.1719416329.git.alejandro.vallejo@cloud.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
In-Reply-To: <2c55d486bb0c54a3e813abc66d32f321edd28b81.1719416329.git.alejandro.vallejo@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 26.06.2024 18:28, Alejandro Vallejo wrote:
> This greatly simplifies a later patch that makes use of HVM contexts to upload
> LAPIC data. The idea is to reuse MTRR setting procedure to avoid code
> duplication. It's currently only used for PVH, but there's no real reason to
> overcomplicate the toolstack preventing them being set for HVM too when
> hvmloader will override them anyway.

Yet then, why set them when hvmloader will do so again? Is it even guaranteed
that doing so causes no change in (guest) behavior?

Plus what about a guest which was configured to have the CPUID bit for MTRRs
clear? I think we ought to document this as not supported for PVH (we may
actually choose to refuse building such a guest), but in principle the MTRR
save/load operations should simply fail for a HVM guest in said configuration.
Making such a change in Xen now would, afaict, be benign to the tool stack.
After this adjustment it would result in a perceived regression, when there
shouldn't be any.

Thinking about it, even for PVH it may make sense to allow CPUID.MTRR=0, as
long as CPUID.PAT=1, thus forcing it into PAT-only mode. I think we did even
discuss this possible configuration before.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:44:34 2024
Message-ID: <1932c346492372cc9ffe633774b412d719e662be.camel@gmail.com>
Subject: Re: [XEN PATCH v2 0/6][RESEND] address violations of MISRA C Rule
 20.7
From: oleksii.kurochko@gmail.com
To: Stefano Stabellini <sstabellini@kernel.org>, Nicola Vetrini
	 <nicola.vetrini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com, 
 xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com,
 consulting@bugseng.com,  Simone Ballarin <simone.ballarin@bugseng.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Date: Thu, 27 Jun 2024 11:44:27 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>
References: <cover.1718378539.git.nicola.vetrini@bugseng.com>
	 <alpine.DEB.2.22.394.2406241743480.3870429@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Mon, 2024-06-24 at 17:47 -0700, Stefano Stabellini wrote:
> Hi Oleksii,
> 
> I would like to ask for a release-ack, as the patch series makes very
> few changes outside of the static analysis configuration. The few
> changes to the Xen code are very limited, straightforward, and make
> the code better; see patches #3 and #5.
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> 
> 
> On Mon, 17 Jun 2024, Nicola Vetrini wrote:
> > Hi all,
> > 
> > this series addresses several violations of Rule 20.7, as well as a
> > small fix to the ECLAIR integration scripts that does not influence
> > the current behaviour, but was mistakenly part of the upstream
> > configuration.
> > 
> > Note that by applying this series the rule has a few leftover
> > violations.
> > Most of those are in x86 code in xen/arch/x86/include/asm/msi.h .
> > I did send a patch [1] to deal with those, limited only to
> > addressing the MISRA violations, but in the end it was dropped in
> > favour of a more general cleanup of the file upon agreement, so this
> > is why those changes are not included here.
> > 
> > [1]
> > https://lore.kernel.org/xen-devel/2f2c865f20d0296e623f1d65bed25c083f5dd497.1711700095.git.nicola.vetrini@bugseng.com/
> > 
> > Changes in v2:
> > - refactor patch 4 to deviate the pattern, instead of fixing the
> >   violations
> > - The series has been resent because I forgot to properly Cc the
> >   mailing list
> > 
> > Nicola Vetrini (6):
> >   automation/eclair: address violations of MISRA C Rule 20.7
> >   xen/self-tests: address violations of MISRA rule 20.7
> >   xen/guest_access: address violations of MISRA rule 20.7
> >   automation/eclair_analysis: address violations of MISRA C Rule 20.7
> >   x86/irq: address violations of MISRA C Rule 20.7
> >   automation/eclair_analysis: clean ECLAIR configuration scripts
> > 
> >  automation/eclair_analysis/ECLAIR/analyze.sh     |  3 +--
> >  automation/eclair_analysis/ECLAIR/deviations.ecl | 14 ++++++++++++--
> >  docs/misra/deviations.rst                        |  3 ++-
> >  xen/include/xen/guest_access.h                   |  4 ++--
> >  xen/include/xen/irq.h                            |  2 +-
> >  xen/include/xen/self-tests.h                     |  8 ++++----
> >  6 files changed, 22 insertions(+), 12 deletions(-)
> > 
> > -- 
> > 2.34.1
> > 



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:45:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 09:45:14 +0000
Message-ID: <baac4da598318434e14779ba3f3b55a6e0a6de39.camel@gmail.com>
Subject: Re: [XEN PATCH v2 for-4.19] automation/eclair: add deviations
 agreed in MISRA meetings
From: oleksii.kurochko@gmail.com
To: Stefano Stabellini <sstabellini@kernel.org>, Federico Serafini
	 <federico.serafini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
 <julien@xen.org>
Date: Thu, 27 Jun 2024 11:45:10 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406261740560.3635@ubuntu-linux-20-04-desktop>
References: 
	<816b323f5e325784947d09502f9352188bd325cf.1719381829.git.federico.serafini@bugseng.com>
	 <alpine.DEB.2.22.394.2406261740560.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Wed, 2024-06-26 at 17:41 -0700, Stefano Stabellini wrote:
> On Wed, 26 Jun 2024, Federico Serafini wrote:
> > Update ECLAIR configuration to take into account the deviations
> > agreed during the MISRA meetings.
> > 
> > While doing this, remove the obsolete "Set [123]" comments.
> > 
> > Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> release-ack requested
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii
> 
> 
> > ---
> > Changes in v2:
> > - keep sync between deviations.ecl and deviations.rst;
> > - use 'deliberate' tag for all the deviations of R14.3;
> > - do not use the term "project-wide deviation" since it does not add useful
> >   information.
> > ---
> >  .../eclair_analysis/ECLAIR/deviations.ecl     | 93 +++++++++++++++++--
> >  docs/misra/deviations.rst                     | 81 ++++++++++++++--
> >  2 files changed, 158 insertions(+), 16 deletions(-)
> > 
> > diff --git a/automation/eclair_analysis/ECLAIR/deviations.ecl b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > index ae2eaf50f7..37cad8bf68 100644
> > --- a/automation/eclair_analysis/ECLAIR/deviations.ecl
> > +++ b/automation/eclair_analysis/ECLAIR/deviations.ecl
> > @@ -1,5 +1,3 @@
> > -### Set 1 ###
> > -
> >  #
> >  # Series 2.
> >  #
> > @@ -23,6 +21,11 @@ Constant expressions and unreachable branches of if and switch statements are ex
> >  -config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(any_exp(macro(name(ASSERT_UNREACHABLE||PARSE_ERR_RET||PARSE_ERR||FAIL_MSR||FAIL_CPUID)))))"}
> >  -doc_end
> >  
> > +-doc_begin="The asm-offset files are not linked deliberately, since they are used to generate definitions for asm modules."
> > +-file_tag+={asm_offsets, "^xen/arch/(arm|x86)/(arm32|arm64|x86_64)/asm-offsets\\.c$"}
> > +-config=MC3R1.R2.1,reports+={deliberate, "any_area(any_loc(file(asm_offsets)))"}
> > +-doc_end
> > +
> >  -doc_begin="Pure declarations (i.e., declarations without initialization) are
> >  not executable, and therefore it is safe for them to be unreachable."
> >  -config=MC3R1.R2.1,ignored_stmts+={"any()", "pure_decl()"}
> > @@ -63,6 +66,12 @@ they are not instances of commented-out code."
> >  -config=MC3R1.D4.3,reports+={disapplied,"!(any_area(any_loc(file(^xen/arch/arm/arm64/.*$))))"}
> >  -doc_end
> >  
> > +-doc_begin="The inline asm in 'arm64/lib/bitops.c' is tightly coupled with the surrounding C code that acts as a wrapper, so it has been decided not to add an additional encapsulation layer."
> > +-file_tag+={arm64_bitops, "^xen/arch/arm/arm64/lib/bitops\\.c$"}
> > +-config=MC3R1.D4.3,reports+={deliberate, "all_area(any_loc(file(arm64_bitops)&&any_exp(macro(^(bit|test)op$))))"}
> > +-config=MC3R1.D4.3,reports+={deliberate, "any_area(any_loc(file(arm64_bitops))&&context(name(int_clear_mask16)))"}
> > +-doc_end
> > +
> >  -doc_begin="This header file is autogenerated or empty, therefore it poses no
> >  risk if included more than once."
> >  -file_tag+={empty_header, "^xen/arch/arm/efi/runtime\\.h$"}
> > @@ -213,10 +222,25 @@ Therefore the absence of prior declarations is safe."
> >  -config=MC3R1.R8.4,declarations+={safe, "loc(file(asm_defns))&&^current_stack_pointer$"}
> >  -doc_end
> >  
> > +-doc_begin="The function apei_(read|check|clear)_mce are dead code and are excluded from non-debug builds, therefore the absence of prior declarations is safe."
> > +-config=MC3R1.R8.4,declarations+={safe, "^apei_(read|check|clear)_mce\\(.*$"}
> > +-doc_end
> > +
> >  -doc_begin="asmlinkage is a marker to indicate that the function is only used to interface with asm modules."
> >  -config=MC3R1.R8.4,declarations+={safe,"loc(text(^(?s).*asmlinkage.*$, -1..0))"}
> >  -doc_end
> >  
> > +-doc_begin="Given that bsearch and sort are defined with the attribute 'gnu_inline', it's deliberate not to have a prior declaration.
> > +See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for a full explanation of gnu_inline."
> > +-file_tag+={bsearch_sort, "^xen/include/xen/(sort|lib)\\.h$"}
> > +-config=MC3R1.R8.4,reports+={deliberate, "any_area(any_loc(file(bsearch_sort))&&decl(name(bsearch||sort)))"}
> > +-doc_end
> > +
> > +-doc_begin="first_valid_mfn is defined in this way because the current lack of NUMA support in Arm and PPC requires it."
> > +-file_tag+={first_valid_mfn, "^xen/common/page_alloc\\.c$"}
> > +-config=MC3R1.R8.4,declarations+={deliberate,"loc(file(first_valid_mfn))"}
> > +-doc_end
> > +
> >  -doc_begin="The following variables are compiled in multiple translation units
> >  belonging to different executables and therefore are safe."
> >  -config=MC3R1.R8.6,declarations+={safe, "name(current_stack_pointer||bsearch||sort)"}
> > @@ -257,8 +281,6 @@ dimension is higher than omitting the dimension."
> >  -config=MC3R1.R9.5,reports+={deliberate, "any()"}
> >  -doc_end
> >  
> > -### Set 2 ###
> > -
> >  #
> >  # Series 10.
> >  #
> > @@ -299,7 +321,6 @@ integers arguments on two's complement architectures
> >  -config=MC3R1.R10.1,reports+={safe, "any_area(any_loc(any_exp(macro(^ISOLATE_LSB$))))"}
> >  -doc_end
> >  
> > -### Set 3 ###
> >  -doc_begin="XEN only supports architectures where signed integers are
> >  represented using two's complement and all the XEN developers are aware of
> >  this."
> > @@ -323,6 +344,49 @@ constant expressions are required.\""
> >  # Series 11
> >  #
> >  
> > +-doc_begin="The conversion from a function pointer to unsigned long or (void *) does not lose any information, provided that the target type has enough bits to store it."
> > +-config=MC3R1.R11.1,casts+={safe,
> > +  "from(type(canonical(__function_pointer_types)))
> > +   &&to(type(canonical(builtin(unsigned long)||pointer(builtin(void)))))
> > +   &&relation(definitely_preserves_value)"
> > +}
> > +-doc_end
> > +
> > +-doc_begin="The conversion from a function pointer to a boolean has a well-known semantics that do not lead to unexpected behaviour."
> > +-config=MC3R1.R11.1,casts+={safe,
> > +  "from(type(canonical(__function_pointer_types)))
> > +   &&kind(pointer_to_boolean)"
> > +}
> > +-doc_end
> > +
> > +-doc_begin="The conversion from a pointer to an incomplete type to unsigned long does not lose any information, provided that the target type has enough bits to store it."
> > +-config=MC3R1.R11.2,casts+={safe,
> > +  "from(type(any()))
> > +   &&to(type(canonical(builtin(unsigned long))))
> > +   &&relation(definitely_preserves_value)"
> > +}
> > +-doc_end
> > +
> > +-doc_begin="Conversions to object pointers that have a pointee type with a smaller (i.e., less strict) alignment requirement are safe."
> > +-config=MC3R1.R11.3,casts+={safe,
> > +  "!relation(more_aligned_pointee)"
> > +}
> > +-doc_end
> > +
> > +-doc_begin="Conversions from and to integral types are safe, in the assumption that the target type has enough bits to store the value.
> > +See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\""
> > +-config=MC3R1.R11.6,casts+={safe,
> > +   "(from(type(canonical(integral())))||to(type(canonical(integral()))))
> > +     &&relation(definitely_preserves_value)"}
> > +-doc_end
> > +
> > +-doc_begin="The conversion from a pointer to a boolean has a well-known semantics that do not lead to unexpected behaviour."
> > +-config=MC3R1.R11.6,casts+={safe,
> > +  "from(type(canonical(__pointer_types)))
> > +   &&kind(pointer_to_boolean)"
> > +}
> > +-doc_end
> > +
> >  -doc_begin="Violations caused by container_of are due to pointer arithmetic operations
> >  with the provided offset. The resulting pointer is then immediately cast back to its
> >  original type, which preserves the qualifier. This use is deemed safe.
> > @@ -354,9 +418,18 @@ activity."
> >  -config=MC3R1.R14.2,reports+={disapplied,"any()"}
> >  -doc_end
> >  
> > --doc_begin="The XEN team relies on the fact that invariant conditions of 'if'
> > -statements are deliberate"
> > --config=MC3R1.R14.3,statements={deliberate , "wrapped(any(),node(if_stmt))" }
> > +-doc_begin="The XEN team relies on the fact that invariant conditions of 'if' statements and conditional operators are deliberate"
> > +-config=MC3R1.R14.3,statements+={deliberate, "wrapped(any(),node(if_stmt||conditional_operator||binary_conditional_operator))" }
> > +-doc_end
> > +
> > +-doc_begin="Switches having a 'sizeof' operator as the condition are deliberate and have limited scope."
> > +-config=MC3R1.R14.3,statements+={deliberate, "wrapped(any(),node(switch_stmt)&&child(cond, operator(sizeof)))" }
> > +-doc_end
> > +
> > +-doc_begin="The use of an invariant size argument in {put,get}_unsafe_size and array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is deliberate and is deemed safe."
> > +-file_tag+={x86_uaccess, "^xen/arch/x86(_64)?/include/asm/uaccess\\.h$"}
> > +-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^(put|get)_unsafe_size$))))"}
> > +-config=MC3R1.R14.3,reports+={deliberate, "any_area(any_loc(file(x86_uaccess)&&any_exp(macro(^array_access_ok$))))"}
> >  -doc_end
> >  
> >  -doc_begin="A controlling expression of 'if' and iteration statements having integer, character or pointer type has a semantics that is well-known to all Xen developers."
> > @@ -527,8 +600,8 @@ falls under the jurisdiction of other MISRA rules."
> >  # General
> >  #
> >  
> > --doc_begin="do-while-0 is a well recognized loop idiom by the xen community."
> > --loop_idioms={do_stmt, "literal(0)"}
> > +-doc_begin="do-while-[01] is a well recognized loop idiom by the xen community."
> > +-loop_idioms={do_stmt, "literal(0)||literal(1)"}
> >  -doc_end
> >  -doc_begin="while-[01] is a well recognized loop idiom by the xen community."
> >  -loop_idioms+={while_stmt, "literal(0)||literal(1)"}
> > diff --git a/docs/misra/deviations.rst b/docs/misra/deviations.rst
> > index 16fc345756..d682616796 100644
> > --- a/docs/misra/deviations.rst
> > +++ b/docs/misra/deviations.rst
> > @@ -63,6 +63,11 @@ Deviations related to MISRA C:2012 Rules:
> >         switch statement.
> >      - ECLAIR has been configured to ignore those statements.
> >  
> > +   * - R2.1
> > +     - The asm-offset files are not linked deliberately, since they are used to
> > +       generate definitions for asm modules.
> > +     - Tagged as `deliberate` for ECLAIR.
> > +
> >    * - R2.2
> >      - Proving compliance with respect to Rule 2.2 is generally impossible:
> >        see `<https://arxiv.org/abs/2212.13933>`_ for details. Moreover, peer
> > @@ -203,6 +208,26 @@ Deviations related to MISRA C:2012 Rules:
> >         it.
> >      - Tagged as `safe` for ECLAIR.
> >  
> > +   * - R8.4
> > +     - Some functions are excluded from non-debug build, therefore the absence
> > +       of declaration is safe.
> > +     - Tagged as `safe` for ECLAIR, such functions are:
> > +         - apei_read_mce()
> > +         - apei_check_mce()
> > +         - apei_clear_mce()
> > +
> > +   * - R8.4
> > +     - Given that bsearch and sort are defined with the attribute 'gnu_inline',
> > +       it's deliberate not to have a prior declaration.
> > +       See Section \"6.33.1 Common Function Attributes\" of \"GCC_MANUAL\" for
> > +       a full explanation of gnu_inline.
> > +     - Tagged as `deliberate` for ECLAIR.
> > +
> > +   * - R8.4
> > +     - first_valid_mfn is defined in this way because the current lack of NUMA
> > +       support in Arm and PPC requires it.
> > +     - Tagged as `deliberate` for ECLAIR.
> > +
> >    * - R8.6
> >      - The following variables are compiled in multiple translation units
> >        belonging to different executables and therefore are safe.
> > @@ -282,6 +307,39 @@ Deviations related to MISRA C:2012 Rules:
> >        If no bits are set, 0 is returned.
> >      - Tagged as `safe` for ECLAIR.
> >  
> > +   * - R11.1
> > +     - The conversion from a function pointer to unsigned long or (void \*) does
> > +       not lose any information, provided that the target type has enough bits
> > +       to store it.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R11.1
> > +     - The conversion from a function pointer to a boolean has a well-known
> > +       semantics that do not lead to unexpected behaviour.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R11.2
> > +     - The conversion from a pointer to an incomplete type to unsigned long
> > +       does not lose any information, provided that the target type has enough
> > +       bits to store it.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R11.3
> > +     - Conversions to object pointers that have a pointee type with a smaller
> > +       (i.e., less strict) alignment requirement are safe.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R11.6
> > +     - Conversions from and to integral types are safe, in the assumption that
> > +       the target type has enough bits to store the value.
> > +       See also Section \"4.7 Arrays and Pointers\" of \"GCC_MANUAL\"
> > +     - Tagged as `safe` for ECLAIR.
> > +
> > +   * - R11.6
> > +     - The conversion from a pointer to a boolean has a well-known semantics
> > +       that do not lead to unexpected behaviour.
> > +     - Tagged as `safe` for ECLAIR.
> > +
> >    * - R11.8
> >      - Violations caused by container_of are due to pointer arithmetic operations
> >        with the provided offset. The resulting pointer is then immediately cast back to its
> > @@ -308,8 +366,19 @@ Deviations related to MISRA C:2012 Rules:
> >  
> >    * - R14.3
> >      - The Xen team relies on the fact that invariant conditions of 'if'
> > -       statements are deliberate.
> > -     - Project-wide deviation; tagged as `disapplied` for ECLAIR.
> > +       statements and conditional operators are deliberate.
> > +     - Tagged as `deliberate` for ECLAIR.
> > +
> > +   * - R14.3
> > +     - Switches having a 'sizeof' operator as the condition are deliberate and
> > +       have limited scope.
> > +     - Tagged as `deliberate` for ECLAIR.
> > +
> > +   * - R14.3
> > +     - The use of an invariant size argument in {put,get}_unsafe_size and
> > +       array_access_ok, as defined in arch/x86(_64)?/include/asm/uaccess.h is
> > +       deliberate and is deemed safe.
> > +     - Tagged as `deliberate` for ECLAIR.
> >  
> >    * - R14.4
> >      - A controlling expression of 'if' and iteration statements having
> > @@ -475,10 +544,10 @@ Other deviations:
> >    * - Deviation
> >      - Justification
> >  
> > -   * - do-while-0 loops
> > -     - The do-while-0 is a well-recognized loop idiom used by the Xen community
> > -       and can therefore be used, even though it would cause a number of
> > -       violations in some instances.
> > +   * - do-while-0 and do-while-1 loops
> > +     - The do-while-0 and do-while-1 loops are well-recognized loop idioms used
> > +       by the Xen community and can therefore be used, even though they would
> > +       cause a number of violations in some instances.
> >  
> >    * - while-0 and while-1 loops
> >      - while-0 and while-1 are well-recognized loop idioms used by the Xen
> > -- 
> > 2.34.1
> > 



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:46:05 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 09:46:05 +0000
Message-ID: <8d30a48da7dbad45324c2d8854fbb7fd6b850e2a.camel@gmail.com>
Subject: Re: [XEN PATCH for 4.19] automation/eclair: add deviations agreed
 in MISRA meetings
From: oleksii.kurochko@gmail.com
To: Stefano Stabellini <sstabellini@kernel.org>, Federico Serafini
	 <federico.serafini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com, Simone Ballarin
 <simone.ballarin@bugseng.com>, Doug Goldstein <cardoe@cardoe.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
 <julien@xen.org>
Date: Thu, 27 Jun 2024 11:46:00 +0200
In-Reply-To: <alpine.DEB.2.22.394.2406261735540.3635@ubuntu-linux-20-04-desktop>
References: 
	<4a65e064768ad5ddce96d749f24f0bdae2c3b9da.1719328656.git.federico.serafini@bugseng.com>
	 <alpine.DEB.2.22.394.2406251850281.3635@ubuntu-linux-20-04-desktop>
	 <c6aeb6007ead36afaf48ceef1070e5ec5a2ef88f.camel@gmail.com>
	 <d35cf13a-5cfd-425f-9c01-3a4122da3a69@bugseng.com>
	 <alpine.DEB.2.22.394.2406261735540.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Wed, 2024-06-26 at 17:37 -0700, Stefano Stabellini wrote:
> On Wed, 26 Jun 2024, Federico Serafini wrote:
> > On 26/06/24 09:37, Oleksii wrote:
> > > On Tue, 2024-06-25 at 18:59 -0700, Stefano Stabellini wrote:
> > > > > +-doc_begin="The conversion from a function pointer to unsigned long or (void *) does not lose any information, provided that the target type has enough bits to store it."
> > > > > +-config=MC3R1.R11.1,casts+={safe,
> > > > > +  "from(type(canonical(__function_pointer_types)))
> > > > > +   &&to(type(canonical(builtin(unsigned long)||pointer(builtin(void)))))
> > > > > +   &&relation(definitely_preserves_value)"
> > > > > +}
> > > > > +-doc_end
> > > > 
> > > > This one and the ones below are the important ones! I think we should
> > > > have them in the tree as soon as possible, ideally 4.19. I ask for
> > > > a release-ack.
> > > Just want to be sure that I understand deviations properly with this
> > > example.
> > > 
> > > If the deviation above is merged, then it would be safe from a MISRA
> > > point of view to cast a function pointer to 'unsigned long' or
> > > 'void *', and thereby MISRA won't complain about code with such
> > > conversions?
> > 
> > Exactly, taking into account section 4.7 of the GCC manual.
> 
> Yes. From a Xen release perspective, it will only affect the static
> analysis jobs, making them report fewer violations. The reason why
> those specifically are important is that they are significant changes
> over the plain rule and they were already documented in rules.rst.
Release-Acked-By: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:49:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 09:49:06 +0000
X-Gm-Message-State: AOJu0Yybuz3d+yqu3XGGLe7032C/KK8ddXRHRyywsQ5Qa6HYkU/3hFlG
	3Db9/GZR9qQDb3/VdtDvXNRA8w/a1d/x2+yQnz3mFjZbPv6trjqQ
X-Google-Smtp-Source: AGHT+IF3lCTz09NuVd+xr7DEQvXa98Qfnf93YXfMO4z1e4CN8RLHhqH+qV1ZwUgd0xRC/Tv62eD3wQ==
X-Received: by 2002:a17:906:1d03:b0:a6f:5fa8:1b7 with SMTP id a640c23a62f3a-a7245b45aa0mr826246066b.15.1719481737473;
        Thu, 27 Jun 2024 02:48:57 -0700 (PDT)
Message-ID: <1ac96e89b527c637bc32badd69c4812b6e1d7281.camel@gmail.com>
Subject: Re: [PATCH for 4.19 v4 01/10] tools/hvmloader: Fix
 non-deterministic cpuid()
From: oleksii.kurochko@gmail.com
To: Andrew Cooper <andrew.cooper3@citrix.com>, Alejandro Vallejo
	 <alejandro.vallejo@cloud.com>, Xen-devel <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <jbeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, Anthony PERARD <anthony.perard@vates.tech>
Date: Thu, 27 Jun 2024 11:48:56 +0200
In-Reply-To: <7ecf1b46-c1c2-42b5-b3cb-ab737ab67900@citrix.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
	 <f8bfcfeca0a76f28703b164e1e65fb5919325b13.1719416329.git.alejandro.vallejo@cloud.com>
	 <7ecf1b46-c1c2-42b5-b3cb-ab737ab67900@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2)
MIME-Version: 1.0

On Wed, 2024-06-26 at 17:43 +0100, Andrew Cooper wrote:
> On 26/06/2024 5:28 pm, Alejandro Vallejo wrote:
> > hvmloader's cpuid() implementation deviates from Xen's in that the
> > value passed on ecx is unspecified. This means that when used on
> > leaves that implement subleaves it's unspecified which one you get,
> > though it's more than likely an invalid one.
> > 
> > Import Xen's implementation so there are no surprises.
> 
> Fixes: 318ac791f9f9 ("Add utilities needed for SMBIOS generation to
> hvmloader")
> 
> > Signed-off-by: Alejandro Vallejo <alejandro.vallejo@cloud.com>
> > 
> > 
> > diff --git a/tools/firmware/hvmloader/util.h b/tools/firmware/hvmloader/util.h
> > index deb823a892ef..3ad7c4f6d6a2 100644
> > --- a/tools/firmware/hvmloader/util.h
> > +++ b/tools/firmware/hvmloader/util.h
> > @@ -184,9 +184,30 @@ int uart_exists(uint16_t uart_base);
> >  int lpt_exists(uint16_t lpt_base);
> >  int hpet_exists(unsigned long hpet_base);
> >  
> > -/* Do cpuid instruction, with operation 'idx' */
> > -void cpuid(uint32_t idx, uint32_t *eax, uint32_t *ebx,
> > -           uint32_t *ecx, uint32_t *edx);
> > +/* Some CPUID calls want 'count' to be placed in ecx */
> > +static inline void cpuid_count(
> > +    uint32_t op,
> > +    uint32_t count,
> > +    uint32_t *eax,
> > +    uint32_t *ebx,
> > +    uint32_t *ecx,
> > +    uint32_t *edx)
> > +{
> > +    asm volatile ( "cpuid"
> > +          : "=a" (*eax), "=b" (*ebx), "=c" (*ecx), "=d" (*edx)
> > +          : "0" (op), "c" (count) );
> 
> "a" to be consistent with "c".
> 
> Also it would be better to name the parameters as leaf and subleaf.
> 
> Both can be fixed on commit.  However, there's no use in HVMLoader
> tickling this bug right now, so I'm not sure we want to rush this into
> 4.19 at this point.
I agree; I think it would be better to postpone the patch until the
4.20 branch opens.

~ Oleksii


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:53:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 09:53:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749913.1158153 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMloh-0000Ep-No; Thu, 27 Jun 2024 09:53:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749913.1158153; Thu, 27 Jun 2024 09:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMloh-0000Ei-K4; Thu, 27 Jun 2024 09:53:23 +0000
Received: by outflank-mailman (input) for mailman id 749913;
 Thu, 27 Jun 2024 09:53:22 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1631=N5=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMlog-0000Ec-3T
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 09:53:22 +0000
Received: from mail-ej1-x636.google.com (mail-ej1-x636.google.com
 [2a00:1450:4864:20::636])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 15b10aac-346b-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 11:53:20 +0200 (CEST)
Received: by mail-ej1-x636.google.com with SMTP id
 a640c23a62f3a-a724a8097deso602448266b.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 02:53:20 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a729d71f0edsm43428766b.56.2024.06.27.02.53.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 02:53:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15b10aac-346b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719482000; x=1720086800; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=bcr1yJ7a2WCNZUKDkRcKjzNKgK58hiCPwjTTdezfQtU=;
        b=JLg/yEbwryOt3OUxBHo1rZr/b9VMo9qB1/w7gjbsaP1sfGfV3mthDZNQfyDYtXlmx6
         uifwdZMwlKweVB4FyP0IgLAu0oguswgs7ch2tL0M8PCIiFLD+LwdWRTuHu1d3EWU2ir4
         aReYACeoZ/I8zpCUgB87bSyo85e4TBylShnJH2VHISmF/RGpEmDojvovrqxGnk8MYZXs
         MW5iqqIKfgLi1b8zDEUjGRaZMhBDd6rptsNxRIY3g/UDkHvaCWCCiap5jV2QBczItlj1
         BMLawq2Dzz+RSbI0JSczlMnI5Bsc3nUKEvdh18ZbvJrfVS4oQYkjtTW2oJ1nRhWnE3jr
         dAkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719482000; x=1720086800;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=bcr1yJ7a2WCNZUKDkRcKjzNKgK58hiCPwjTTdezfQtU=;
        b=XTn1JtwN1d9Epta8jvJXvHX+itXeiJdN9iUU0LyIVyEv4K2DKadZKDw5qLyLMof0hK
         XvRHgx/oc6Dsq0SXW3YYCvL0wGDB+lRZ4/DfC8j4jiTqfmQI8j1CBUE3kUoNzeNKDxYI
         o/8ymZnA7MMyuRxNTp5fPnH2Nin230FmTO2DJswQPLhChdVrcAIU/8lGk/m8K94cJvxu
         G3kTvANpPtjlsWBfvqL+KaiDagBShPqlMelh+uh9bqVehdfS5/FJ/7a8LnXgLuYyVtHF
         IgV7z4OzrXUEpb336AHf7beOezBsvELt35Wb4nRUByVmeEXFxyoCZmPdSjQkWNl/Ib+C
         g7YQ==
X-Forwarded-Encrypted: i=1; AJvYcCU8B5MRynUFnQB+Dhqf8N8M4bni1w1e/zeNxtDqREvl/ZdWebMIYoF2BquPa8TmVIfZU+qO4agIzNVnPNCRkmEBQNnBh8UDMBWsXgDaVww=
X-Gm-Message-State: AOJu0YzpFmxMyDOIfpIQYjn7lLhlZ42z7eZjacrAvznYKGIXu0XJL+ev
	B2UhLJdCHkraWnYmHsLcJPsxaQgkiUwX94HQgclRPvA03K27gCHt
X-Google-Smtp-Source: AGHT+IHzzbfeK9uFSs1OUR8SsEzlrDB4gNprNfrHCnL2/rTqGjR0qWkwgsfRRFeDy2O5agcPOOPkiA==
X-Received: by 2002:a17:906:6bc9:b0:a72:2cb0:fafe with SMTP id a640c23a62f3a-a7242e159camr753489766b.75.1719481999575;
        Thu, 27 Jun 2024 02:53:19 -0700 (PDT)
Message-ID: <bd0d5d46c8368a5f157ec79287a84320dae0c08b.camel@gmail.com>
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory
 node probing
From: oleksii.kurochko@gmail.com
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>,  Bertrand Marquis <bertrand.marquis@arm.com>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>
Date: Thu, 27 Jun 2024 11:53:18 +0200
In-Reply-To: <20240626080428.18480-1-michal.orzel@amd.com>
References: <20240626080428.18480-1-michal.orzel@amd.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2)
MIME-Version: 1.0

On Wed, 2024-06-26 at 10:04 +0200, Michal Orzel wrote:
> Memory node probing is done as part of early_scan_node(), which is
> called for each node with depth >= 1 (the root node is at depth 0).
> According to Devicetree Specification v0.4, chapter 3.4, a /memory
> node can only exist as a top-level node. However, Xen incorrectly
> considers all nodes with unit node name "memory" as RAM. This buggy
> behavior can result in a failure if there are other nodes in the
> device tree (at depth >= 2) with "memory" as unit node name. An
> example can be a "memory@xxx" node under /reserved-memory. Fix it by
> introducing device_tree_is_memory_node() to perform all the required
> checks to assess whether a node is a proper /memory node.
> 
> Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM
> location and size")
> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> ---
> 4.19: This patch fixes a possible early boot Xen failure (before the
> main console is initialized). In my case it results in a warning
> "Shattering superpage is not supported" and a panic "Unable to setup
> the directmap mappings".
> 
> If this is too late for this patch to go in, we can backport it after
> the tree re-opens.
So if we have a warning/panic and there is no random behaviour, I think
it would be better to postpone this until the 4.20 branching.

~ Oleksii

> ---
>  xen/arch/arm/bootfdt.c | 28 +++++++++++++++++++++++++++-
>  1 file changed, 27 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index 6e060111d95b..23c968e6955d 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -83,6 +83,32 @@ static bool __init device_tree_node_compatible(const void *fdt, int node,
>      return false;
>  }
>  
> +/*
> + * Check if a node is a proper /memory node according to Devicetree
> + * Specification v0.4, chapter 3.4.
> + */
> +static bool __init device_tree_is_memory_node(const void *fdt, int node,
> +                                              int depth)
> +{
> +    const char *type;
> +    int len;
> +
> +    if ( depth != 1 )
> +        return false;
> +
> +    if ( !device_tree_node_matches(fdt, node, "memory") )
> +        return false;
> +
> +    type = fdt_getprop(fdt, node, "device_type", &len);
> +    if ( !type )
> +        return false;
> +
> +    if ( (len <= 0) || strcmp(type, "memory") )
> +        return false;
> +
> +    return true;
> +}
> +
>  void __init device_tree_get_reg(const __be32 **cell, uint32_t address_cells,
>                                  uint32_t size_cells, paddr_t *start,
>                                  paddr_t *size)
> @@ -448,7 +474,7 @@ static int __init early_scan_node(const void *fdt,
>       * populated. So we should skip the parsing.
>       */
>      if ( !efi_enabled(EFI_BOOT) &&
> -         device_tree_node_matches(fdt, node, "memory") )
> +         device_tree_is_memory_node(fdt, node, depth) )
>          rc = process_memory_node(fdt, node, name, depth,
>                                   address_cells, size_cells, bootinfo_get_mem());
>      else if ( depth == 1 && !dt_node_cmp(name, "reserved-memory") )



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 09:58:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 09:58:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749919.1158163 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMltP-00013x-7A; Thu, 27 Jun 2024 09:58:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749919.1158163; Thu, 27 Jun 2024 09:58:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMltP-00013q-4S; Thu, 27 Jun 2024 09:58:15 +0000
Received: by outflank-mailman (input) for mailman id 749919;
 Thu, 27 Jun 2024 09:58:14 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1631=N5=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMltN-00013k-W9
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 09:58:13 +0000
Received: from mail-ej1-x62f.google.com (mail-ej1-x62f.google.com
 [2a00:1450:4864:20::62f])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id c3505255-346b-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 11:58:12 +0200 (CEST)
Received: by mail-ej1-x62f.google.com with SMTP id
 a640c23a62f3a-a724a8097deso603187566b.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 02:58:12 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a729d71f515sm43530066b.51.2024.06.27.02.58.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 02:58:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3505255-346b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719482291; x=1720087091; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=sHZosWfAog9FFoUMqCVZSYKIUVzOgM5RXUlu794ZtRk=;
        b=eneKq3i+UVqcGtBS+d5yMUtfwZD9HzoogHEaDIC+l4t2xQw8QM6z+CaFFHUeREa1Sq
         FG5hYrnmavVyYrFc4bzLV2jI1NhWHTy9lNnDD5RBx5IOuLp+TzrQv6x1ZGYZwXc6OsMJ
         5QB7Kab3cZ01sU8FucRM7KHsFrDJ1AzG9Utf6Yw9pr0l5M34P09LOmR1J2cxPAxAxGem
         DxYNn8VqLmTdFRo9HNMZ72YfT8rhMlxKC45Ue9xS6zhHhPzfEpIrWbtMOaewf6RZ5+Do
         dHx4XXJ2V+Smpn0MVzdEs7ZY6aJlcmSCXJm+9PS4YbsJQF0dzoJNsbzsSjqqxoa+S+MY
         C07A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719482291; x=1720087091;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=sHZosWfAog9FFoUMqCVZSYKIUVzOgM5RXUlu794ZtRk=;
        b=GbijUexk/o52wtl1TKEToXI2fS+9f3jsFNfOdx4YSr8Q8xRNK0UUiIEEouIVD+77CQ
         fSS2LIrcU2O6mVoru7ADUdnNxQHlbEYP0+YQBawZcyAmrf4GXjeUvr88q1U0+HnmieF4
         tcWhfC43+yxJ149S5/TPf2eFxXZ5dLb8nTQdk65LxpMMWFTTIjZSHJW8QjS36j7nDO8U
         34H6TB4OWi0ZRTRjJyJDOJp3wchcTZNM6uYj60x6GAXLbBY+Y3oCcN0IxP8xzCSVoZRg
         aLydZ7lxn2Q0Q1nHVbTn/5ifSU4Zqfl3gj8sR3LdxdgOUPK+agL7I4E0GIEu5B3Wo593
         rX+A==
X-Forwarded-Encrypted: i=1; AJvYcCXa4wcZwW1OpZP1keEfPH3Sw4mskHBBwkxmAC8EQAlL9jKB0pSLqBv+F08iBGgO+JGqvOLenwLMeTJfEnDL5bcKjP1mhAkKn5BSBUqhnAs=
X-Gm-Message-State: AOJu0Yy35mqKTFScSjJvcIWYyUajQ8m6NiGluBuWReIdq0BnGDkqOf3h
	axwO8jSr7gKoa4m9n0O+pTbd0Ni6EY8ENZE8eFTBSi43q3vvug9h
X-Google-Smtp-Source: AGHT+IH4P2cPJCVu4DkqJq8nZfDvI5X7dfezYxuMJ0f1uFNOJ69CKieHIMm5kTRDp/HgpiDieHpnvg==
X-Received: by 2002:a17:907:c085:b0:a72:7f22:5f9e with SMTP id a640c23a62f3a-a727f84870bmr389640366b.57.1719482290636;
        Thu, 27 Jun 2024 02:58:10 -0700 (PDT)
Message-ID: <f4f3a1550b4809a3cb8b27eb5e7248abf27b3944.camel@gmail.com>
Subject: Re: [PATCH v13 02/10] xen/riscv: introduce bitops.h
From: oleksii.kurochko@gmail.com
To: Jan Beulich <jbeulich@suse.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>,  Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Thu, 27 Jun 2024 11:58:09 +0200
In-Reply-To: <e2d82c37-da44-4a8f-a1f8-76d5ff05b104@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
	 <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
	 <4c15dd072f08b1161d170608a096dc0851ced588.camel@gmail.com>
	 <e2d82c37-da44-4a8f-a1f8-76d5ff05b104@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2)
MIME-Version: 1.0

On Thu, 2024-06-27 at 09:59 +0200, Jan Beulich wrote:
> On 26.06.2024 19:27, oleksii.kurochko@gmail.com wrote:
> > On Wed, 2024-06-26 at 10:31 +0200, Jan Beulich wrote:
> > > On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > > > --- /dev/null
> > > > +++ b/xen/arch/riscv/include/asm/bitops.h
> > > > @@ -0,0 +1,137 @@
> > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > +/* Copyright (C) 2012 Regents of the University of California */
> > > > +
> > > > +#ifndef _ASM_RISCV_BITOPS_H
> > > > +#define _ASM_RISCV_BITOPS_H
> > > > +
> > > > +#include <asm/system.h>
> > > > +
> > > > +#if BITOP_BITS_PER_WORD == 64
> > > > +#define __AMO(op)   "amo" #op ".d"
> > > > +#elif BITOP_BITS_PER_WORD == 32
> > > > +#define __AMO(op)   "amo" #op ".w"
> > > > +#else
> > > > +#error "Unexpected BITOP_BITS_PER_WORD"
> > > > +#endif
> > > > +
> > > > +/* Based on linux/arch/include/asm/bitops.h */
> > > > +
> > > > +/*
> > > > + * Non-atomic bit manipulation.
> > > > + *
> > > > + * Implemented using atomics to be interrupt safe. Could
> > > > + * alternatively implement with local interrupt masking.
> > > > + */
> > > > +#define __set_bit(n, p)      set_bit(n, p)
> > > > +#define __clear_bit(n, p)    clear_bit(n, p)
> > > > +
> > > > +#define test_and_op_bit_ord(op, mod, nr, addr, ord)     \
> > > > +({                                                      \
> > > > +    bitop_uint_t res, mask;                             \
> > > > +    mask = BITOP_MASK(nr);                              \
> > > > +    asm volatile (                                      \
> > > > +        __AMO(op) #ord " %0, %2, %1"                    \
> > > > +        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])       \
> > > > +        : "r" (mod(mask))                               \
> > > > +        : "memory");                                    \
> > > > +    ((res & mask) != 0);                                \
> > > > +})
> > > > +
> > > > +#define op_bit_ord(op, mod, nr, addr, ord)      \
> > > > +    asm volatile (                              \
> > > > +        __AMO(op) #ord " zero, %1, %0"          \
> > > > +        : "+A" (addr[BITOP_WORD(nr)])           \
> > > > +        : "r" (mod(BITOP_MASK(nr)))             \
> > > > +        : "memory");
> > > > +
> > > > +#define test_and_op_bit(op, mod, nr, addr)    \
> > > > +    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
> > > > +#define op_bit(op, mod, nr, addr) \
> > > > +    op_bit_ord(op, mod, nr, addr, )
> > > > +
> > > > +/* Bitmask modifiers */
> > > > +#define NOP(x)    (x)
> > > > +#define NOT(x)    (~(x))
> > > 
> > > Since elsewhere you said we would use Zbb in bitops, I wanted to
> > > come back on that: Up to here all we use is AMO.
> > > 
> > > And further down there's no asm() anymore. What were you referring
> > > to?
> > RISC-V doesn't have a CLZ instruction in the base ISA.  As a
> > consequence, __builtin_ffs() emits a library call to ffs() on GCC,
> 
> Oh, so we'd need to implement that libgcc function, along the lines of
> Arm32 implementing quite a few of them to support shifts on 64-bit
> quantities as well as division and modulo.
Why can't we just live with the Zbb extension? The Zbb extension is
present on every platform with hypervisor extension support that I
have access to.

~ Oleksii

> 
> Jan
> 
> > or a de Bruijn sequence on Clang.
> > 
> > The optional Zbb extension adds a CLZ instruction, after which
> > __builtin_ffs() emits a very simple sequence.
> > 
> > ~ Oleksii
> 
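For context on the __builtin_ffs() remark above: without a CLZ/CTZ-class instruction, compilers either emit a library call or open-code a de Bruijn multiply. A portable C sketch of the de Bruijn approach (illustrative only, not Xen or libgcc code):

```c
#include <stdint.h>

/* Classic de Bruijn lookup: isolate the lowest set bit, multiply by a
 * 32-bit de Bruijn constant, and use the top 5 bits of the product as
 * a table index. Returns the 1-based position of the lowest set bit,
 * or 0 for x == 0, matching the semantics of the ffs() library call
 * that GCC falls back to on base-ISA RISC-V. */
static int ffs32(uint32_t x)
{
    static const unsigned char pos[32] = {
        0, 1, 28, 2, 29, 14, 24, 3, 30, 22, 20, 15, 25, 17, 4, 8,
        31, 27, 13, 23, 21, 19, 16, 7, 26, 12, 18, 6, 11, 5, 10, 9,
    };

    if ( !x )
        return 0;

    /* x & -x keeps only the lowest set bit. */
    return pos[((x & -x) * 0x077CB531u) >> 27] + 1;
}
```

With Zbb available, the whole function collapses to a single `ctz` instruction plus the zero check, which is why the extension makes __builtin_ffs() emit such a simple sequence.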



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 10:03:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 10:03:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749927.1158173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMlyI-0002tS-SH; Thu, 27 Jun 2024 10:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749927.1158173; Thu, 27 Jun 2024 10:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMlyI-0002tL-PA; Thu, 27 Jun 2024 10:03:18 +0000
Received: by outflank-mailman (input) for mailman id 749927;
 Thu, 27 Jun 2024 10:03:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1631=N5=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMlyH-0002tF-FB
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 10:03:17 +0000
Received: from mail-lf1-x136.google.com (mail-lf1-x136.google.com
 [2a00:1450:4864:20::136])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 78e9d067-346c-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 12:03:16 +0200 (CEST)
Received: by mail-lf1-x136.google.com with SMTP id
 2adb3069b0e04-52ce6c93103so5633867e87.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 03:03:16 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a729d71f50asm44119066b.64.2024.06.27.03.03.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 03:03:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78e9d067-346c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719482596; x=1720087396; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=DHmNYiYM25bJGkv7NDeeqPJJ+EUVoaefnBqDwBIldU8=;
        b=IVp+ZByHI0nqoMQc/pYh4/Qow+fRT1lXODgujNsg8HK6h7YGuaSE5Uy+461pm9rBOD
         5CKkgeT51QR46NeSg/Z1DnziH8IZl/tEhaMcIKhz5nKpEqa1l8oQ/epJeGt/hn0Y3jsA
         dBGImB4BroLfpyuw7a974OIEnapV9cfBJ8U+0wImPuTcmMd4HGDiFtpbbm5r89rF2lNT
         h64SQFtRHxZCGYKZr0gPXeVHnsAa+bL9Z0uJBgIbcJaMF81I7wZdiaoUreX4MUH4bwSf
         1s1iQmWXOY8YW7QZ0ytn4JSStdjrJsmjTJq3N0apVkwMmq5T+zK9GoLbLRARhSSxxgT0
         IC2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719482596; x=1720087396;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=DHmNYiYM25bJGkv7NDeeqPJJ+EUVoaefnBqDwBIldU8=;
        b=DQFpbqT3UltL14QxMFBm7fcjb3HEU62IkYxgSJ9XkNZu0x3VqPaJJI/snAYO1Ikbpe
         /qm5Upk9MvX/2h4FwdhzoC3B4LCjTJPiIyDHj9/bbQqAvMztbdpvULtLku6dIS/KeGis
         L2+I1Oj3+01FkO/9JaYGzS77TnTaZW1ckNfFfpXZH+5g7o8aJXJU40kuMe45JpNDzFRz
         cJvyKCO/RzokCd0hF7Rv66lV3qxDa62gHR5G6rSnntrVhNxNdcoBONxvEXcQyCqf/eS3
         o8Aa1Kqshq+6J9CBWFaZerxDMETFm9fDGycAPQ6/wBqNviPTqzSkZkr9RNEFywukia7b
         STsQ==
X-Forwarded-Encrypted: i=1; AJvYcCX45sc5fPmJ3FoLZoO/BbLTnxjcNJezkgk52hUDjM+xGxyuIQ6IPSfnfMHeK9WTB4AE8q/lT87KLes4fB5e65xM0at4+eQ+gE4ejE6Zh9A=
X-Gm-Message-State: AOJu0YzJ5fOmi+9MVUsduIIXRXdWXLFg92xKzK7lFaSf4IyD0EOpRbfZ
	Ps2E9LHPZezCaNB47mV6/9JZ/Yy8mc3c5uIEihdGNzxTGHt1qs0h
X-Google-Smtp-Source: AGHT+IEiybs9is9xcOGCJHacjg5x40gLfiqX4RVAGhn0CPLy4LiJPcq5X+2zGOYt6FRQentTvKn7cQ==
X-Received: by 2002:ac2:4c8c:0:b0:52c:9ae0:beed with SMTP id 2adb3069b0e04-52ce18526ecmr9467886e87.52.1719482595387;
        Thu, 27 Jun 2024 03:03:15 -0700 (PDT)
Message-ID: <0bf82d14d2f2c33c98d5e7e50ad2877d3b65c243.camel@gmail.com>
Subject: Re: [PATCH for-4.19 v2] x86/spec-ctrl: Support for SRSO_US_NO and
 SRSO_MSR_FIX
From: oleksii.kurochko@gmail.com
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Jan Beulich <JBeulich@suse.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Date: Thu, 27 Jun 2024 12:03:14 +0200
In-Reply-To: <20240619191057.2588693-1-andrew.cooper3@citrix.com>
References: <20240619191057.2588693-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2)
MIME-Version: 1.0

On Wed, 2024-06-19 at 20:10 +0100, Andrew Cooper wrote:
> AMD have updated the SRSO whitepaper[1] with further information.
> 
> There's a new SRSO_U/S_NO enumeration saying that SRSO attacks can't
> cross the user/supervisor boundary.  i.e. Xen don't need to use
> IBPB-on-entry for PV.
> 
> There's also a new SRSO_MSR_FIX identifying that the BP_SPEC_REDUCE
> bit is available in MSR_BP_CFG.  When set, SRSO attacks can't cross
> the host/guest boundary.  i.e. Xen don't need to use IBPB-on-entry
> for HVM.
> 
> Extend ibpb_calculations() to account for these when calculating
> opt_ibpb_entry_{pv,hvm} defaults.  Add a bp-spec-reduce option to
> control the use of BP_SPEC_REDUCE, but activate it by default.
> 
> Because MSR_BP_CFG is core-scoped with a race condition updating it,
> repurpose amd_check_erratum_1485() into amd_check_bp_cfg() and
> calculate all updates at once.
> 
> Advertise SRSO_U/S_NO to guests whenever possible, as it allows the
> guest kernel to skip SRSO protections too.  This is easy for HVM
> guests, but hard for PV guests, as both the guest userspace and
> kernel operate in CPL3.  After discussing with AMD, it is believed
> to be safe to advertise SRSO_U/S_NO to PV guests when BP_SPEC_REDUCE
> is active.
> 
> Fix a typo in the SRSO_NO's comment.
> 
> [1] https://www.amd.com/content/dam/amd/en/documents/corporate/cr/speculative-return-stack-overflow-whitepaper.pdf
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Oleksii Ku
cm9jaGtvIDxvbGVrc2lpLmt1cm9jaGtvQGdtYWlsLmNvbT4KPiAKPiB2MjoKPiDCoCogQWRkICJm
b3IgSFZNIGd1ZXN0cyIgdG8geGVuLWNvbW1hbmQtbGluZS5wYW5kb2MKPiDCoCogUHJpbnQgZGV0
YWlscyBvbiBib290Cj4gwqAqIERvbid0IGFkdmVydGlzZSBTUlNPX1VTX05PIHRvIFBWIGd1ZXN0
cyBpZiBCUF9TUEVDX1JFRFVDRSBpc24ndAo+IGFjdGl2ZS4KPiAKPiBGb3IgNC4xOS7CoCBUaGlz
IHNob3VsZCBiZSBubyBmdW5jdGlvbmFsIGNoYW5nZSBvbiBjdXJyZW50IGhhcmR3YXJlLsKgCkl0
IHNvdW5kcyBsaWtlIHlvdSBhcmUgc3VnZ2VzdGluZyB0aGVyZSBtaWdodCBzdGlsbCBiZSBmdW5j
dGlvbmFsCmNoYW5nZXMgb24gY3VycmVudCBoYXJkd2FyZSwgYnV0IGl0IHNob3VsZG4ndC4uLiBE
byBJIGludGVycHJldCB5b3VyCndvcmRzIGNvcnJlY3RseT8KCn4gT2xla3NpaQo+IE9uCj4gZm9y
dGhjb21pbmcgaGFyZHdhcmUsIGl0IGF2b2lkcyB0aGUgc3Vic3RhbnRpYWwgcGVyZiBoaXRzIHdo
aWNoIHdlcmUKPiBuZWNlc3NhcnkKPiB0byBwcm90ZWN0IGFnYWluc3QgdGhlIFNSU08gc3BlY3Vs
YXRpdmUgdnVsbmVyYWJpbGl0eS4KPiAtLS0KPiDCoGRvY3MvbWlzYy94ZW4tY29tbWFuZC1saW5l
LnBhbmRvY8KgwqDCoMKgwqDCoMKgwqDCoMKgIHzCoCA5ICsrKy0KPiDCoHhlbi9hcmNoL3g4Ni9j
cHUtcG9saWN5LmPCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgfCAxOSArKysr
KysrKwo+IMKgeGVuL2FyY2gveDg2L2NwdS9hbWQuY8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoCB8IDI5ICsrKysrKysrKy0tLQo+IMKgeGVuL2FyY2gveDg2L2luY2x1
ZGUvYXNtL21zci1pbmRleC5owqDCoMKgwqDCoMKgwqAgfMKgIDEgKwo+IMKgeGVuL2FyY2gveDg2
L2luY2x1ZGUvYXNtL3NwZWNfY3RybC5owqDCoMKgwqDCoMKgwqAgfMKgIDEgKwo+IMKgeGVuL2Fy
Y2gveDg2L3NwZWNfY3RybC5jwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAg
fCA0OSArKysrKysrKysrKysrKysrLS0tCj4gLS0KPiDCoHhlbi9pbmNsdWRlL3B1YmxpYy9hcmNo
LXg4Ni9jcHVmZWF0dXJlc2V0LmggfMKgIDQgKy0KPiDCoDcgZmlsZXMgY2hhbmdlZCwgOTIgaW5z
ZXJ0aW9ucygrKSwgMjAgZGVsZXRpb25zKC0pCj4gCj4gZGlmZiAtLWdpdCBhL2RvY3MvbWlzYy94
ZW4tY29tbWFuZC1saW5lLnBhbmRvYyBiL2RvY3MvbWlzYy94ZW4tCj4gY29tbWFuZC1saW5lLnBh
bmRvYwo+IGluZGV4IDFkZWE3NDMxZmFiNi4uODhiZWI2NDUyNWQ1IDEwMDY0NAo+IC0tLSBhL2Rv
Y3MvbWlzYy94ZW4tY29tbWFuZC1saW5lLnBhbmRvYwo+ICsrKyBiL2RvY3MvbWlzYy94ZW4tY29t
bWFuZC1saW5lLnBhbmRvYwo+IEBAIC0yMzkwLDcgKzIzOTAsNyBAQCBCeSBkZWZhdWx0IFNTQkQg
d2lsbCBiZSBtaXRpZ2F0ZWQgYXQgcnVudGltZQo+IChpLmUgYHNzYmQ9cnVudGltZWApLgo+IMKg
PsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHtpYnJzLGlicGIsc3NiZCxwc2ZkLAo+IMKgPsKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGVhZ2VyLWZwdSxsMWQtZmx1c2gsYnJhbmNoLWhhcmRl
bixzcmItbG9jaywKPiDCoD7CoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCB1bnByaXYtbW1pbyxn
ZHMtbWl0LGRpdi1zY3J1Yixsb2NrLWhhcmRlbiwKPiAtPsKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgIGJoaS1kaXMtc309PGJvb2w+IF1gCj4gKz7CoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBi
aGktZGlzLXMsYnAtc3BlYy1yZWR1Y2V9PTxib29sPiBdYAo+IMKgCj4gwqBDb250cm9scyBmb3Ig
c3BlY3VsYXRpdmUgZXhlY3V0aW9uIHNpZGVjaGFubmVsIG1pdGlnYXRpb25zLsKgIEJ5Cj4gZGVm
YXVsdCwgWGVuCj4gwqB3aWxsIHBpY2sgdGhlIG1vc3QgYXBwcm9wcmlhdGUgbWl0aWdhdGlvbnMg
YmFzZWQgb24gY29tcGlsZWQgaW4KPiBzdXBwb3J0LAo+IEBAIC0yNTM5LDYgKzI1MzksMTMgQEAg
Ym9vbGVhbiBjYW4gYmUgdXNlZCB0byBmb3JjZSBvciBwcmV2ZW50IFhlbgo+IGZyb20gdXNpbmcg
c3BlY3VsYXRpb24gYmFycmllcnMgdG8KPiDCoHByb3RlY3QgbG9jayBjcml0aWNhbCByZWdpb25z
LsKgIFRoaXMgbWl0aWdhdGlvbiB3b24ndCBiZSBlbmdhZ2VkIGJ5Cj4gZGVmYXVsdCwKPiDCoGFu
ZCBuZWVkcyB0byBiZSBleHBsaWNpdGx5IGVuYWJsZWQgb24gdGhlIGNvbW1hbmQgbGluZS4KPiDC
oAo+ICtPbiBoYXJkd2FyZSBzdXBwb3J0aW5nIFNSU09fTVNSX0ZJWCwgdGhlIGBicC1zcGVjLXJl
ZHVjZT1gIG9wdGlvbgo+IGNhbiBiZSB1c2VkCj4gK3RvIGZvcmNlIG9yIHByZXZlbnQgWGVuIGZy
b20gdXNpbmcgTVNSX0JQX0NGRy5CUF9TUEVDX1JFRFVDRSB0bwo+IG1pdGlnYXRlIHRoZQo+ICtT
UlNPIChTcGVjdWxhdGl2ZSBSZXR1cm4gU3RhY2sgT3ZlcmZsb3cpIHZ1bG5lcmFiaWxpdHkuwqAg
WGVuIHdpbGwKPiB1c2UKPiArYnAtc3BlYy1yZWR1Y2Ugd2hlbiBhdmFpbGFibGUsIGFzIGl0IGlz
IHByZWZlcmFibGUgdG8gdXNpbmcgYGlicGItCj4gZW50cnk9aHZtYAo+ICt0byBtaXRpZ2F0ZSBT
UlNPIGZvciBIVk0gZ3Vlc3RzLCBhbmQgYmVjYXVzZSBpdCBpcyBhIG5lY2Vzc2FyeQo+IHByZXJl
cXVpc2l0ZSBpbgo+ICtvcmRlciB0byBhZHZlcnRpc2UgU1JTT19VL1NfTk8gdG8gUFYgZ3Vlc3Rz
Lgo+ICsKPiDCoCMjIyBzeW5jX2NvbnNvbGUKPiDCoD4gYD0gPGJvb2xlYW4+YAo+IMKgCj4gZGlm
ZiAtLWdpdCBhL3hlbi9hcmNoL3g4Ni9jcHUtcG9saWN5LmMgYi94ZW4vYXJjaC94ODYvY3B1LXBv
bGljeS5jCj4gaW5kZXggMzA0ZGMyMGNmYWI4Li5mZDMyZmUzMzMzODQgMTAwNjQ0Cj4gLS0tIGEv
eGVuL2FyY2gveDg2L2NwdS1wb2xpY3kuYwo+ICsrKyBiL3hlbi9hcmNoL3g4Ni9jcHUtcG9saWN5
LmMKPiBAQCAtMTQsNiArMTQsNyBAQAo+IMKgI2luY2x1ZGUgPGFzbS9tc3ItaW5kZXguaD4KPiDC
oCNpbmNsdWRlIDxhc20vcGFnaW5nLmg+Cj4gwqAjaW5jbHVkZSA8YXNtL3NldHVwLmg+Cj4gKyNp
bmNsdWRlIDxhc20vc3BlY19jdHJsLmg+Cj4gwqAjaW5jbHVkZSA8YXNtL3hzdGF0ZS5oPgo+IMKg
Cj4gwqBzdHJ1Y3QgY3B1X3BvbGljeSBfX3JlYWRfbW9zdGx5wqDCoMKgwqDCoMKgIHJhd19jcHVf
cG9saWN5Owo+IEBAIC02MDUsNiArNjA2LDI0IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBjYWxjdWxh
dGVfcHZfbWF4X3BvbGljeSh2b2lkKQo+IMKgwqDCoMKgwqDCoMKgwqAgX19jbGVhcl9iaXQoWDg2
X0ZFQVRVUkVfSUJSUywgZnMpOwo+IMKgwqDCoMKgIH0KPiDCoAo+ICvCoMKgwqAgLyoKPiArwqDC
oMKgwqAgKiBTUlNPX1UvU19OTyBtZWFucyB0aGF0IHRoZSBDUFUgaXMgbm90IHZ1bG5lcmFibGUg
dG8gU1JTTwo+IGF0dGFja3MgYWNyb3NzCj4gK8KgwqDCoMKgICogdGhlIFVzZXIgKENQTDMpL1N1
cGVydmlzb3IgKENQTDwzKSBib3VuZGFyeS7CoCBIb3dldmVyIHRoZQo+IFBWNjQKPiArwqDCoMKg
wqAgKiB1c2VyL2tlcm5lbCBib3VuZGFyeSBpcyBDUEwzIG9uIGJvdGggc2lkZXMsIHNvIGl0IHdv
bid0Cj4gY29udmV5IHRoZQo+ICvCoMKgwqDCoCAqIG1lYW5pbmcgdGhhdCBhIFBWIGtlcm5lbCBl
eHBlY3RzLgo+ICvCoMKgwqDCoCAqCj4gK8KgwqDCoMKgICogUFYzMiBndWVzdHMgYXJlIGV4cGxp
Y2l0bHkgdW5zdXBwb3J0ZWQgV1JUIHNwZWN1bGF0aXZlCj4gc2FmZXR5LCBzbyBhcmUKPiArwqDC
oMKgwqAgKiBpZ25vcmVkIHRvIGF2b2lkIGNvbXBsaWNhdGluZyB0aGUgbG9naWMuCj4gK8KgwqDC
oMKgICoKPiArwqDCoMKgwqAgKiBBZnRlciBkaXNjdXNzaW9ucyB3aXRoIEFNRCwgaXQgaXMgYmVs
aWV2ZWQgdG8gYmUgc2FmZSB0bwo+IG9mZmVyCj4gK8KgwqDCoMKgICogU1JTT19VU19OTyB0byBQ
ViBndWVzdHMgd2hlbiBCUF9TUEVDX1JFRFVDRSBpcyBhY3RpdmUuCj4gK8KgwqDCoMKgICoKPiAr
wqDCoMKgwqAgKiBJZiBCUF9TUEVDX1JFRFVDRSBpc24ndCBhY3RpdmUsIHJlbW92ZSBTUlNPX1Uv
U19OTyBmcm9tIHRoZQo+IFBWIG1heAo+ICvCoMKgwqDCoCAqIHBvbGljeSwgd2hpY2ggd2lsbCBj
YXVzZSBpdCB0byBmaWx0ZXIgb3V0IG9mIFBWIGRlZmF1bHQgdG9vLgo+ICvCoMKgwqDCoCAqLwo+
ICvCoMKgwqAgaWYgKCAhYm9vdF9jcHVfaGFzKFg4Nl9GRUFUVVJFX1NSU09fTVNSX0ZJWCkgfHwK
PiAhb3B0X2JwX3NwZWNfcmVkdWNlICkKPiArwqDCoMKgwqDCoMKgwqAgX19jbGVhcl9iaXQoWDg2
X0ZFQVRVUkVfU1JTT19VU19OTywgZnMpOwo+ICsKPiDCoMKgwqDCoCBndWVzdF9jb21tb25fbWF4
X2ZlYXR1cmVfYWRqdXN0bWVudHMoZnMpOwo+IMKgwqDCoMKgIGd1ZXN0X2NvbW1vbl9mZWF0dXJl
X2FkanVzdG1lbnRzKGZzKTsKPiDCoAo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94ODYvY3B1L2Ft
ZC5jIGIveGVuL2FyY2gveDg2L2NwdS9hbWQuYwo+IGluZGV4IGFiOTIzMzM2NzNiOS4uNTIxM2Rm
ZmY2MDFkIDEwMDY0NAo+IC0tLSBhL3hlbi9hcmNoL3g4Ni9jcHUvYW1kLmMKPiArKysgYi94ZW4v
YXJjaC94ODYvY3B1L2FtZC5jCj4gQEAgLTEwMDksMTYgKzEwMDksMzMgQEAgc3RhdGljIHZvaWQg
Y2ZfY2hlY2sgZmFtMTdfZGlzYWJsZV9jNih2b2lkCj4gKmFyZykKPiDCoAl3cm1zcmwoTVNSX0FN
RF9DU1RBVEVfQ0ZHLCB2YWwgJiBtYXNrKTsKPiDCoH0KPiDCoAo+IC1zdGF0aWMgdm9pZCBhbWRf
Y2hlY2tfZXJyYXR1bV8xNDg1KHZvaWQpCj4gK3N0YXRpYyB2b2lkIGFtZF9jaGVja19icF9jZmco
dm9pZCkKPiDCoHsKPiAtCXVpbnQ2NF90IHZhbCwgY2hpY2tlbmJpdCA9ICgxIDw8IDUpOwo+ICsJ
dWludDY0X3QgdmFsLCBuZXcgPSAwOwo+IMKgCj4gLQlpZiAoY3B1X2hhc19oeXBlcnZpc29yIHx8
IGJvb3RfY3B1X2RhdGEueDg2ICE9IDB4MTkgfHwKPiAhaXNfemVuNF91YXJjaCgpKQo+ICsJLyoK
PiArCSAqIEFNRCBFcnJhdHVtICMxNDg1LsKgIFNldCBiaXQgNSwgYXMgaW5zdHJ1Y3RlZC4KPiAr
CSAqLwo+ICsJaWYgKCFjcHVfaGFzX2h5cGVydmlzb3IgJiYgYm9vdF9jcHVfZGF0YS54ODYgPT0g
MHgxOSAmJgo+IGlzX3plbjRfdWFyY2goKSkKPiArCQluZXcgfD0gKDEgPDwgNSk7Cj4gKwo+ICsJ
LyoKPiArCSAqIE9uIGhhcmR3YXJlIHN1cHBvcnRpbmcgU1JTT19NU1JfRklYLCBhY3RpdmF0ZQo+
IEJQX1NQRUNfUkVEVUNFIGJ5Cj4gKwkgKiBkZWZhdWx0LsKgIFRoaXMgbGV0cyB1cyBkbyB0d28g
dGhpbmdzOgo+ICvCoMKgwqDCoMKgwqDCoMKgICoKPiArwqDCoMKgwqDCoMKgwqDCoCAqIDEpIEF2
b2lkIElCUEItb24tZW50cnkgdG8gbWl0aWdhdGUgU1JTTyBhdHRhY2tzIGZyb20gSFZNCj4gZ3Vl
c3RzLgo+ICvCoMKgwqDCoMKgwqDCoMKgICogMikgTGV0cyB1cyBhZHZlcnRpc2UgU1JTT19VU19O
TyB0byBQViBndWVzdHMuCj4gKwkgKi8KPiArCWlmIChib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVf
U1JTT19NU1JfRklYKSAmJgo+IG9wdF9icF9zcGVjX3JlZHVjZSkKPiArCQluZXcgfD0gQlBfQ0ZH
X1NQRUNfUkVEVUNFOwo+ICsKPiArCS8qIEF2b2lkIHJlYWRpbmcgQlBfQ0ZHIGlmIHdlIGRvbid0
IGludGVuZCB0byBjaGFuZ2UKPiBhbnl0aGluZy4gKi8KPiArCWlmICghbmV3KQo+IMKgCQlyZXR1
cm47Cj4gwqAKPiDCoAlyZG1zcmwoTVNSX0FNRDY0X0JQX0NGRywgdmFsKTsKPiDCoAo+IC0JaWYg
KHZhbCAmIGNoaWNrZW5iaXQpCj4gKwlpZiAoKHZhbCAmIG5ldykgPT0gbmV3KQo+IMKgCQlyZXR1
cm47Cj4gwqAKPiDCoAkvKgo+IEBAIC0xMDI3LDcgKzEwNDQsNyBAQCBzdGF0aWMgdm9pZCBhbWRf
Y2hlY2tfZXJyYXR1bV8xNDg1KHZvaWQpCj4gwqAJICogc2FtZSB0aW1lIGJlZm9yZSB0aGUgY2hp
Y2tlbmJpdCBpcyBzZXQuIEl0J3MgYmVuaWduCj4gYmVjYXVzZSB0aGUKPiDCoAkgKiB2YWx1ZSBi
ZWluZyB3cml0dGVuIGlzIHRoZSBzYW1lIG9uIGJvdGguCj4gwqAJICovCj4gLQl3cm1zcmwoTVNS
X0FNRDY0X0JQX0NGRywgdmFsIHwgY2hpY2tlbmJpdCk7Cj4gKwl3cm1zcmwoTVNSX0FNRDY0X0JQ
X0NGRywgdmFsIHwgbmV3KTsKPiDCoH0KPiDCoAo+IMKgc3RhdGljIHZvaWQgY2ZfY2hlY2sgaW5p
dF9hbWQoc3RydWN0IGNwdWluZm9feDg2ICpjKQo+IEBAIC0xMjk3LDcgKzEzMTQsNyBAQCBzdGF0
aWMgdm9pZCBjZl9jaGVjayBpbml0X2FtZChzdHJ1Y3QKPiBjcHVpbmZvX3g4NiAqYykKPiDCoAkJ
ZGlzYWJsZV9jMV9yYW1waW5nKCk7Cj4gwqAKPiDCoAlhbWRfY2hlY2tfemVuYmxlZWQoKTsKPiAt
CWFtZF9jaGVja19lcnJhdHVtXzE0ODUoKTsKPiArCWFtZF9jaGVja19icF9jZmcoKTsKPiDCoAo+
IMKgCWlmIChmYW0xN19jNl9kaXNhYmxlZCkKPiDCoAkJZmFtMTdfZGlzYWJsZV9jNihOVUxMKTsK
PiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2L2luY2x1ZGUvYXNtL21zci1pbmRleC5oCj4gYi94
ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vbXNyLWluZGV4LmgKPiBpbmRleCA5Y2RiNWIyNjI1NjYu
LjgzZmJmNDEzNWM2YiAxMDA2NDQKPiAtLS0gYS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vbXNy
LWluZGV4LmgKPiArKysgYi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vbXNyLWluZGV4LmgKPiBA
QCAtNDEyLDYgKzQxMiw3IEBACj4gwqAjZGVmaW5lIEFNRDY0X0RFX0NGR19MRkVOQ0VfU0VSSUFM
SVNFCShfQUMoMSwgVUxMKSA8PCAxKQo+IMKgI2RlZmluZSBNU1JfQU1ENjRfRVhfQ0ZHCQkweGMw
MDExMDJjVQo+IMKgI2RlZmluZSBNU1JfQU1ENjRfQlBfQ0ZHCQkweGMwMDExMDJlVQo+ICsjZGVm
aW5lwqAgQlBfQ0ZHX1NQRUNfUkVEVUNFCQkoX0FDKDEsIFVMTCkgPDwgNCkKPiDCoCNkZWZpbmUg
TVNSX0FNRDY0X0RFX0NGRzIJCTB4YzAwMTEwZTNVCj4gwqAKPiDCoCNkZWZpbmUgTVNSX0FNRDY0
X0RSMF9BRERSRVNTX01BU0sJMHhjMDAxMTAyN1UKPiBkaWZmIC0tZ2l0IGEveGVuL2FyY2gveDg2
L2luY2x1ZGUvYXNtL3NwZWNfY3RybC5oCj4gYi94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3Bl
Y19jdHJsLmgKPiBpbmRleCA3MjM0N2VmMmI5NTkuLjA3NzIyNTQxODk1NiAxMDA2NDQKPiAtLS0g
YS94ZW4vYXJjaC94ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgKPiArKysgYi94ZW4vYXJjaC94
ODYvaW5jbHVkZS9hc20vc3BlY19jdHJsLmgKPiBAQCAtOTAsNiArOTAsNyBAQCBleHRlcm4gaW50
OF90IG9wdF94cHRpX2h3ZG9tLCBvcHRfeHB0aV9kb211Owo+IMKgCj4gwqBleHRlcm4gYm9vbCBj
cHVfaGFzX2J1Z19sMXRmOwo+IMKgZXh0ZXJuIGludDhfdCBvcHRfcHZfbDF0Zl9od2RvbSwgb3B0
X3B2X2wxdGZfZG9tdTsKPiArZXh0ZXJuIGJvb2wgb3B0X2JwX3NwZWNfcmVkdWNlOwo+IMKgCj4g
wqAvKgo+IMKgICogVGhlIEwxRCBhZGRyZXNzIG1hc2ssIHdoaWNoIG1pZ2h0IGJlIHdpZGVyIHRo
YW4gcmVwb3J0ZWQgaW4KPiBDUFVJRCwgYW5kIHRoZQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC94
ODYvc3BlY19jdHJsLmMgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKPiBpbmRleCA0MGY2YWUw
MTcwMTAuLjdhYWJiNjViYTAyOCAxMDA2NDQKPiAtLS0gYS94ZW4vYXJjaC94ODYvc3BlY19jdHJs
LmMKPiArKysgYi94ZW4vYXJjaC94ODYvc3BlY19jdHJsLmMKPiBAQCAtODMsNiArODMsNyBAQCBz
dGF0aWMgYm9vbCBfX2luaXRkYXRhIG9wdF91bnByaXZfbW1pbzsKPiDCoHN0YXRpYyBib29sIF9f
cm9fYWZ0ZXJfaW5pdCBvcHRfdmVyd19tbWlvOwo+IMKgc3RhdGljIGludDhfdCBfX2luaXRkYXRh
IG9wdF9nZHNfbWl0ID0gLTE7Cj4gwqBzdGF0aWMgaW50OF90IF9faW5pdGRhdGEgb3B0X2Rpdl9z
Y3J1YiA9IC0xOwo+ICtib29sIF9fcm9fYWZ0ZXJfaW5pdCBvcHRfYnBfc3BlY19yZWR1Y2UgPSB0
cnVlOwo+IMKgCj4gwqBzdGF0aWMgaW50IF9faW5pdCBjZl9jaGVjayBwYXJzZV9zcGVjX2N0cmwo
Y29uc3QgY2hhciAqcykKPiDCoHsKPiBAQCAtMTQzLDYgKzE0NCw3IEBAIHN0YXRpYyBpbnQgX19p
bml0IGNmX2NoZWNrIHBhcnNlX3NwZWNfY3RybChjb25zdAo+IGNoYXIgKnMpCj4gwqDCoMKgwqDC
oMKgwqDCoMKgwqDCoMKgIG9wdF91bnByaXZfbW1pbyA9IGZhbHNlOwo+IMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoCBvcHRfZ2RzX21pdCA9IDA7Cj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIG9w
dF9kaXZfc2NydWIgPSAwOwo+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIG9wdF9icF9zcGVjX3Jl
ZHVjZSA9IGZhbHNlOwo+IMKgwqDCoMKgwqDCoMKgwqAgfQo+IMKgwqDCoMKgwqDCoMKgwqAgZWxz
ZSBpZiAoIHZhbCA+IDAgKQo+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCByYyA9IC1FSU5WQUw7
Cj4gQEAgLTM2Myw2ICszNjUsOCBAQCBzdGF0aWMgaW50IF9faW5pdCBjZl9jaGVjayBwYXJzZV9z
cGVjX2N0cmwoY29uc3QKPiBjaGFyICpzKQo+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBvcHRf
Z2RzX21pdCA9IHZhbDsKPiDCoMKgwqDCoMKgwqDCoMKgIGVsc2UgaWYgKCAodmFsID0gcGFyc2Vf
Ym9vbGVhbigiZGl2LXNjcnViIiwgcywgc3MpKSA+PSAwICkKPiDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqAgb3B0X2Rpdl9zY3J1YiA9IHZhbDsKPiArwqDCoMKgwqDCoMKgwqAgZWxzZSBpZiAoICh2
YWwgPSBwYXJzZV9ib29sZWFuKCJicC1zcGVjLXJlZHVjZSIsIHMsIHNzKSkgPj0KPiAwICkKPiAr
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBvcHRfYnBfc3BlY19yZWR1Y2UgPSB2YWw7Cj4gwqDCoMKg
wqDCoMKgwqDCoCBlbHNlCj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIHJjID0gLUVJTlZBTDsK
PiDCoAo+IEBAIC01MDUsNyArNTA5LDcgQEAgc3RhdGljIHZvaWQgX19pbml0IHByaW50X2RldGFp
bHMoZW51bSBpbmRfdGh1bmsKPiB0aHVuaykKPiDCoMKgwqDCoMKgICogSGFyZHdhcmUgcmVhZC1v
bmx5IGluZm9ybWF0aW9uLCBzdGF0aW5nIGltbXVuaXR5IHRvIGNlcnRhaW4KPiBpc3N1ZXMsIG9y
Cj4gwqDCoMKgwqDCoCAqIHN1Z2dlc3Rpb25zIG9mIHdoaWNoIG1pdGlnYXRpb24gdG8gdXNlLgo+
IMKgwqDCoMKgwqAgKi8KPiAtwqDCoMKgIHByaW50aygiwqAgSGFyZHdhcmUKPiBoaW50czolcyVz
JXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzXG4iLAo+ICvCoMKgwqAg
cHJpbnRrKCLCoCBIYXJkd2FyZQo+IGhpbnRzOiVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXMl
cyVzJXMlcyVzJXMlcyVzJXMlc1xuIiwKPiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChjYXBzICYg
QVJDSF9DQVBTX1JEQ0xfTk8pwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoCA/ICIKPiBSRENMX05PIsKgwqDCoMKgwqDCoMKgIDogIiIsCj4gwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoCAoY2FwcyAmIEFSQ0hfQ0FQU19FSUJSUynCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCA/ICIKPiBFSUJSUyLCoMKgwqDCoMKgwqDCoMKg
wqAgOiAiIiwKPiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChjYXBzICYgQVJDSF9DQVBTX1JTQkEp
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCA/ICIK
PiBSU0JBIsKgwqDCoMKgwqDCoMKgwqDCoMKgIDogIiIsCj4gQEAgLTUyOSwxMCArNTMzLDExIEBA
IHN0YXRpYyB2b2lkIF9faW5pdCBwcmludF9kZXRhaWxzKGVudW0gaW5kX3RodW5rCj4gdGh1bmsp
Cj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAoZThiwqAgJiBjcHVmZWF0X21hc2soWDg2X0ZFQVRV
UkVfQlRDX05PKSnCoMKgwqDCoMKgwqDCoMKgID8gIgo+IEJUQ19OTyLCoMKgwqDCoMKgwqDCoMKg
IDogIiIsCj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAoZThiwqAgJiBjcHVmZWF0X21hc2soWDg2
X0ZFQVRVUkVfSUJQQl9SRVQpKcKgwqDCoMKgwqDCoCA/ICIKPiBJQlBCX1JFVCLCoMKgwqDCoMKg
wqAgOiAiIiwKPiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChlMjFhICYgY3B1ZmVhdF9tYXNrKFg4
Nl9GRUFUVVJFX0lCUEJfQlJUWVBFKSnCoMKgwqAgPyAiCj4gSUJQQl9CUlRZUEUiwqDCoMKgIDog
IiIsCj4gLcKgwqDCoMKgwqDCoMKgwqDCoMKgIChlMjFhICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFU
VVJFX1NSU09fTk8pKcKgwqDCoMKgwqDCoMKgID8gIgo+IFNSU09fTk8iwqDCoMKgwqDCoMKgwqAg
OiAiIik7Cj4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgIChlMjFhICYgY3B1ZmVhdF9tYXNrKFg4Nl9G
RUFUVVJFX1NSU09fTk8pKcKgwqDCoMKgwqDCoMKgID8gIgo+IFNSU09fTk8iwqDCoMKgwqDCoMKg
wqAgOiAiIiwKPiArwqDCoMKgwqDCoMKgwqDCoMKgwqAgKGUyMWEgJiBjcHVmZWF0X21hc2soWDg2
X0ZFQVRVUkVfU1JTT19VU19OTykpwqDCoMKgwqAgPyAiCj4gU1JTT19VU19OTyLCoMKgwqDCoCA6
ICIiKTsKPiDCoAo+IMKgwqDCoMKgIC8qIEhhcmR3YXJlIGZlYXR1cmVzIHdoaWNoIG5lZWQgZHJp
dmluZyB0byBtaXRpZ2F0ZSBpc3N1ZXMuICovCj4gLcKgwqDCoCBwcmludGsoIsKgIEhhcmR3YXJl
IGZlYXR1cmVzOiVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXNcbiIsCj4gK8KgwqDCoCBwcmlu
dGsoIsKgIEhhcmR3YXJlIGZlYXR1cmVzOiVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlcyVzJXMlc1xu
IiwKPiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChlOGLCoCAmIGNwdWZlYXRfbWFzayhYODZfRkVB
VFVSRV9JQlBCKSkgfHwKPiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChfN2QwICYgY3B1ZmVhdF9t
YXNrKFg4Nl9GRUFUVVJFX0lCUlNCKSnCoMKgwqDCoMKgwqDCoMKgwqAgPyAiCj4gSUJQQiLCoMKg
wqDCoMKgwqDCoMKgwqDCoCA6ICIiLAo+IMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgKGU4YsKgICYg
Y3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX0lCUlMpKSB8fAo+IEBAIC01NTEsNyArNTU2LDggQEAg
c3RhdGljIHZvaWQgX19pbml0IHByaW50X2RldGFpbHMoZW51bSBpbmRfdGh1bmsKPiB0aHVuaykK
PiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChjYXBzICYgQVJDSF9DQVBTX0ZCX0NMRUFSX0NUUkwp
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCA/ICIKPiBGQl9DTEVBUl9DVFJMIsKg
IDogIiIsCj4gwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAoY2FwcyAmIEFSQ0hfQ0FQU19HRFNfQ1RS
TCnCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCA/ICIKPiBHRFNf
Q1RSTCLCoMKgwqDCoMKgwqAgOiAiIiwKPiDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIChjYXBzICYg
QVJDSF9DQVBTX1JGRFNfQ0xFQVIpwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoCA/ICIKPiBSRkRTX0NMRUFSIsKgwqDCoMKgIDogIiIsCj4gLcKgwqDCoMKgwqDCoMKgwqDC
oMKgIChlMjFhICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX1NCUEIpKcKgwqDCoMKgwqDCoMKg
wqDCoMKgID8gIgo+IFNCUEIiwqDCoMKgwqDCoMKgwqDCoMKgwqAgOiAiIik7Cj4gK8KgwqDCoMKg
wqDCoMKgwqDCoMKgIChlMjFhICYgY3B1ZmVhdF9tYXNrKFg4Nl9GRUFUVVJFX1NCUEIpKcKgwqDC
oMKgwqDCoMKgwqDCoMKgID8gIgo+IFNCUEIiwqDCoMKgwqDCoMKgwqDCoMKgwqAgOiAiIiwKPiAr
wqDCoMKgwqDCoMKgwqDCoMKgwqAgKGUyMWEgJiBjcHVmZWF0X21hc2soWDg2X0ZFQVRVUkVfU1JT
T19NU1JfRklYKSnCoMKgID8gIgo+IFNSU09fTVNSX0ZJWCLCoMKgIDogIiIpOwo+IMKgCj4gwqDC
oMKgwqAgLyogQ29tcGlsZWQtaW4gc3VwcG9ydCB3aGljaCBwZXJ0YWlucyB0byBtaXRpZ2F0aW9u
cy4gKi8KPiDCoMKgwqDCoCBpZiAoIElTX0VOQUJMRUQoQ09ORklHX0lORElSRUNUX1RIVU5LKSB8
fAo+IElTX0VOQUJMRUQoQ09ORklHX1NIQURPV19QQUdJTkcpIHx8Cj4gQEAgLTExMjAsNyArMTEy
Niw3IEBAIHN0YXRpYyB2b2lkIF9faW5pdCBkaXZfY2FsY3VsYXRpb25zKGJvb2wKPiBod19zbXRf
ZW5hYmxlZCkKPiDCoAo+IMKgc3RhdGljIHZvaWQgX19pbml0IGlicGJfY2FsY3VsYXRpb25zKHZv
aWQpCj4gwqB7Cj4gLcKgwqDCoCBib29sIGRlZl9pYnBiX2VudHJ5ID0gZmFsc2U7Cj4gK8KgwqDC
oCBib29sIGRlZl9pYnBiX2VudHJ5X3B2ID0gZmFsc2UsIGRlZl9pYnBiX2VudHJ5X2h2bSA9IGZh
bHNlOwo+IMKgCj4gwqDCoMKgwqAgLyogQ2hlY2sgd2UgaGF2ZSBoYXJkd2FyZSBJQlBCIHN1cHBv
cnQgYmVmb3JlIHVzaW5nIGl0Li4uICovCj4gwqDCoMKgwqAgaWYgKCAhYm9vdF9jcHVfaGFzKFg4
Nl9GRUFUVVJFX0lCUlNCKSAmJgo+ICFib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfSUJQQikgKQo+
IEBAIC0xMTQ1LDIyICsxMTUxLDQxIEBAIHN0YXRpYyB2b2lkIF9faW5pdCBpYnBiX2NhbGN1bGF0
aW9ucyh2b2lkKQo+IMKgwqDCoMKgwqDCoMKgwqDCoCAqIENvbmZ1c2lvbi7CoCBNaXRpZ2F0ZSB3
aXRoIElCUEItb24tZW50cnkuCj4gwqDCoMKgwqDCoMKgwqDCoMKgICovCj4gwqDCoMKgwqDCoMKg
wqDCoCBpZiAoICFib290X2NwdV9oYXMoWDg2X0ZFQVRVUkVfQlRDX05PKSApCj4gLcKgwqDCoMKg
wqDCoMKgwqDCoMKgwqAgZGVmX2licGJfZW50cnkgPSB0cnVlOwo+ICvCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgIGRlZl9pYnBiX2VudHJ5X3B2ID0gZGVmX2licGJfZW50cnlfaHZtID0gdHJ1ZTsKPiDC
oAo+IMKgwqDCoMKgwqDCoMKgwqAgLyoKPiAtwqDCoMKgwqDCoMKgwqDCoCAqIEZ1cnRoZXIgdG8g
QlRDLCBaZW4zLzQgQ1BVcyBzdWZmZXIgZnJvbSBTcGVjdWxhdGl2ZQo+IFJldHVybiBTdGFjawo+
IC3CoMKgwqDCoMKgwqDCoMKgICogT3ZlcmZsb3cgaW4gbW9zdCBjb25maWd1cmF0aW9ucy7CoCBN
aXRpZ2F0ZSB3aXRoIElCUEItb24tCj4gZW50cnkgaWYgd2UKPiAtwqDCoMKgwqDCoMKgwqDCoCAq
IGhhdmUgdGhlIG1pY3JvY29kZSB0aGF0IG1ha2VzIHRoaXMgYW4gZWZmZWN0aXZlIG9wdGlvbi4K
PiArwqDCoMKgwqDCoMKgwqDCoCAqIEZ1cnRoZXIgdG8gQlRDLCBaZW4zIGFuZCBsYXRlciBDUFVz
IHN1ZmZlciBmcm9tCj4gU3BlY3VsYXRpdmUgUmV0dXJuCj4gK8KgwqDCoMKgwqDCoMKgwqAgKiBT
dGFjayBPdmVyZmxvdyBpbiBtb3N0IGNvbmZpZ3VyYXRpb25zLsKgIE1pdGlnYXRlIHdpdGgKPiBJ
QlBCLW9uLWVudHJ5Cj4gK8KgwqDCoMKgwqDCoMKgwqAgKiBpZiB3ZSBoYXZlIHRoZSBtaWNyb2Nv
ZGUgdGhhdCBtYWtlcyB0aGlzIGFuIGVmZmVjdGl2ZQo+IG9wdGlvbiwKPiArwqDCoMKgwqDCoMKg
wqDCoCAqIGV4Y2VwdCB3aGVyZSB0aGVyZSBhcmUgb3RoZXIgbWl0aWdhdGluZyBmYWN0b3JzCj4g
YXZhaWxhYmxlLgo+IMKgwqDCoMKgwqDCoMKgwqDCoCAqLwo+IMKgwqDCoMKgwqDCoMKgwqAgaWYg
KCAhYm9vdF9jcHVfaGFzKFg4Nl9GRUFUVVJFX1NSU09fTk8pICYmCj4gwqDCoMKgwqDCoMKgwqDC
oMKgwqDCoMKgwqAgYm9vdF9jcHVfaGFzKFg4Nl9GRUFUVVJFX0lCUEJfQlJUWVBFKSApCj4gLcKg
wqDCoMKgwqDCoMKgwqDCoMKgwqAgZGVmX2licGJfZW50cnkgPSB0cnVlOwo+ICvCoMKgwqDCoMKg
wqDCoCB7Cj4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqAgLyoKPiArwqDCoMKgwqDCoMKgwqDCoMKg
wqDCoMKgICogU1JTT19VL1NfTk8gaXMgYSBzdWJzZXQgb2YgU1JTT19OTywgaWRlbnRpZnlpbmcg
dGhhdAo+IFNSU08gaXNuJ3QKPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgICogcG9zc2libGUg
YWNyb3NzIHRoZSB1c2VyL3N1cGVydmlzb3IgYm91bmRhcnkuwqAgV2UKPiBvbmx5IG5lZWQgdG8K
PiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgICogdXNlIElCUEItb24tZW50cnkgZm9yIFBWIGd1
ZXN0cyBvbiBoYXJkd2FyZSB3aGljaAo+IGRvZXNuJ3QKPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgICogZW51bWVyYXRlIFNSU09fVVNfTk8uCj4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAq
Lwo+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIGlmICggIWJvb3RfY3B1X2hhcyhYODZfRkVBVFVS
RV9TUlNPX1VTX05PKSApCj4gK8KgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBkZWZfaWJw
Yl9lbnRyeV9wdiA9IHRydWU7Cj4gKwo+ICvCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIC8qCj4gK8Kg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAqIFNSU09fTVNSX0ZJWCBlbnVtZXJhdGVzIHRoYXQgd2Ug
Y2FuIHVzZQo+IE1TUl9CUF9DRkcuU1BFQ19SRURVQ0UKPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgICogdG8gbWl0aWdhdGUgU1JTTyBhY3Jvc3MgdGhlIGhvc3QvZ3Vlc3QgYm91bmRhcnkuwqAg
V2UKPiBvbmx5IG5lZWQKPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgICogdG8gdXNlIElCUEIt
b24tZW50cnkgZm9yIEhWTSBndWVzdHMgaWYgd2UgaGF2ZW4ndAo+IGVuYWJsZWQgdGhpcwo+ICvC
oMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgKiBjb250cm9sLgo+ICvCoMKgwqDCoMKgwqDCoMKgwqDC
oMKgwqAgKi8KPiArwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCBpZiAoICFib290X2NwdV9oYXMoWDg2
X0ZFQVRVUkVfU1JTT19NU1JfRklYKSB8fAo+ICFvcHRfYnBfc3BlY19yZWR1Y2UgKQo+ICvCoMKg
wqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqAgZGVmX2licGJfZW50cnlfaHZtID0gdHJ1ZTsKPiAr
wqDCoMKgwqDCoMKgwqAgfQo+IMKgwqDCoMKgIH0KPiDCoAo+IMKgwqDCoMKgIGlmICggb3B0X2li
cGJfZW50cnlfcHYgPT0gLTEgKQo+IC3CoMKgwqDCoMKgwqDCoCBvcHRfaWJwYl9lbnRyeV9wdiA9
IElTX0VOQUJMRUQoQ09ORklHX1BWKSAmJiBkZWZfaWJwYl9lbnRyeTsKPiArwqDCoMKgwqDCoMKg
wqAgb3B0X2licGJfZW50cnlfcHYgPSBJU19FTkFCTEVEKENPTkZJR19QVikgJiYKPiBkZWZfaWJw
Yl9lbnRyeV9wdjsKPiDCoMKgwqDCoCBpZiAoIG9wdF9pYnBiX2VudHJ5X2h2bSA9PSAtMSApCj4g
LcKgwqDCoMKgwqDCoMKgIG9wdF9pYnBiX2VudHJ5X2h2bSA9IElTX0VOQUJMRUQoQ09ORklHX0hW
TSkgJiYKPiBkZWZfaWJwYl9lbnRyeTsKPiArwqDCoMKgwqDCoMKgwqAgb3B0X2licGJfZW50cnlf
aHZtID0gSVNfRU5BQkxFRChDT05GSUdfSFZNKSAmJgo+IGRlZl9pYnBiX2VudHJ5X2h2bTsKPiDC
oAo+IMKgwqDCoMKgIGlmICggb3B0X2licGJfZW50cnlfcHYgKQo+IMKgwqDCoMKgIHsKPiBkaWZm
IC0tZ2l0IGEveGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2L2NwdWZlYXR1cmVzZXQuaAo+IGIv
eGVuL2luY2x1ZGUvcHVibGljL2FyY2gteDg2L2NwdWZlYXR1cmVzZXQuaAo+IGluZGV4IGQ5ZWJh
NWU5YTcxNC4uOWM5OGU0OTkyODYxIDEwMDY0NAo+IC0tLSBhL3hlbi9pbmNsdWRlL3B1YmxpYy9h
cmNoLXg4Ni9jcHVmZWF0dXJlc2V0LmgKPiArKysgYi94ZW4vaW5jbHVkZS9wdWJsaWMvYXJjaC14
ODYvY3B1ZmVhdHVyZXNldC5oCj4gQEAgLTMxMiw3ICszMTIsOSBAQCBYRU5fQ1BVRkVBVFVSRShG
U1JTQyzCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAxMSozMisxOSkgLypBwqAKPiBGYXN0IFNo
b3J0IFJFUCBTQ0FTQiAqLwo+IMKgWEVOX0NQVUZFQVRVUkUoQU1EX1BSRUZFVENISSzCoMKgwqDC
oMKgIDExKjMyKzIwKSAvKkHCoCBQUkVGRVRDSElUezAsMX0KPiBJbnN0cnVjdGlvbnMgKi8KPiDC
oFhFTl9DUFVGRUFUVVJFKFNCUEIswqDCoMKgwqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAxMSozMisy
NykgLypBwqAgU2VsZWN0aXZlIEJyYW5jaAo+IFByZWRpY3RvciBCYXJyaWVyICovCj4gwqBYRU5f
Q1BVRkVBVFVSRShJQlBCX0JSVFlQRSzCoMKgwqDCoMKgwqDCoCAxMSozMisyOCkgLypBwqAgSUJQ
QiBmbHVzaGVzCj4gQnJhbmNoIFR5cGUgcHJlZGljdGlvbnMgdG9vICovCj4gLVhFTl9DUFVGRUFU
VVJFKFNSU09fTk8swqDCoMKgwqDCoMKgwqDCoMKgwqDCoCAxMSozMisyOSkgLypBwqAgSGFyZHdh
cmUgbm90Cj4gdnVsZW5yYWJsZSB0byBTcGVjdWxhdGl2ZSBSZXR1cm4gU3RhY2sgT3ZlcmZsb3cg
Ki8KPiArWEVOX0NQVUZFQVRVUkUoU1JTT19OTyzCoMKgwqDCoMKgwqDCoMKgwqDCoMKgIDExKjMy
KzI5KSAvKkHCoCBIYXJkd2FyZSBub3QKPiB2dWxuZXJhYmxlIHRvIFNwZWN1bGF0aXZlIFJldHVy
biBTdGFjayBPdmVyZmxvdyAqLwo+ICtYRU5fQ1BVRkVBVFVSRShTUlNPX1VTX05PLMKgwqDCoMKg
wqDCoMKgwqAgMTEqMzIrMzApIC8qQSEgSGFyZHdhcmUgbm90Cj4gdnVsbmVyYWJsZSB0byBTUlNP
IGFjcm9zcyB0aGUgVXNlci9TdXBlcnZpc29yIGJvdW5kYXJ5ICovCj4gK1hFTl9DUFVGRUFUVVJF
KFNSU09fTVNSX0ZJWCzCoMKgwqDCoMKgwqAgMTEqMzIrMzEpIC8qwqDCoAo+IE1TUl9CUF9DRkcu
QlBfU1BFQ19SRURVQ0UgYXZhaWxhYmxlICovCj4gwqAKPiDCoC8qIEludGVsLWRlZmluZWQgQ1BV
IGZlYXR1cmVzLCBDUFVJRCBsZXZlbCAweDAwMDAwMDA3OjEuZWJ4LCB3b3JkIDEyCj4gKi8KPiDC
oFhFTl9DUFVGRUFUVVJFKElOVEVMX1BQSU4swqDCoMKgwqDCoMKgwqDCoCAxMiozMisgMCkgLyrC
oMKgIFByb3RlY3RlZAo+IFByb2Nlc3NvciBJbnZlbnRvcnkgTnVtYmVyICovCj4gCj4gYmFzZS1j
b21taXQ6IDQzZDVjNWQ1ZjcwYjNmNTQxOWU3ZWYzMDM5OWQyM2FkZjZkZGZhOGUKCg==

From xen-devel-bounces@lists.xenproject.org Thu Jun 27 10:10:44 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 10:10:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749933.1158182 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMm5O-0004yN-Ix; Thu, 27 Jun 2024 10:10:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749933.1158182; Thu, 27 Jun 2024 10:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMm5O-0004yG-GA; Thu, 27 Jun 2024 10:10:38 +0000
Received: by outflank-mailman (input) for mailman id 749933;
 Thu, 27 Jun 2024 10:10:37 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMm5N-0004yA-Io
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 10:10:37 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7e218ead-346d-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 12:10:34 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2eabd22d3f4so93313551fa.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 03:10:34 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706b4a34ac8sm949363b3a.153.2024.06.27.03.10.28
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 03:10:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e218ead-346d-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719483034; x=1720087834; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=tMmvT783v1oTSfDdKa+kUqkIMUD5NzRUYvt6Z4n+ZCk=;
        b=SsUckkYghS2vCV8+ZZbVeTad5fycoCx9U7b+ls+Bt7iaqaVFkWluRp38sYfmvolycG
         BJyQqdyzgBAYP0+ytG8fX0UI6LhWHJieZUh3r+CmWgTOXLCTB3KuAIHXa4LdiQTlNj/S
         PnAqjinJZZWq2MiY/PNL/Fj7fRB4aWNHEIbJQ7V4ZL4jKQIC/aT/xHYKo/dyH1DoQ/n9
         fneFVcMW4Dw7vkSSa8CMHMEp41uhg+LURuaDB2Hf8gzl4GgqA0kmAcGCsQ2qMz7Ujd6p
         fCvtsQE3ls+nZLBhFPjoQ+hWt0Ll0OC93I2Mqkdz1C1Fdy2rWZaV3yw9XlyaBfB+qs96
         UjJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719483034; x=1720087834;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=tMmvT783v1oTSfDdKa+kUqkIMUD5NzRUYvt6Z4n+ZCk=;
        b=t8IlwdGEwH1fwf7TRA1ht6cKMODF6CE/kNLXUIKAaE6c9bPk5PbZ5nL0ylRVb20Z0A
         LKPaBbbiecrObKa7tF82uMkXJm3up8+3XiUVFFYXaT6NxMBelszbo8xovJ2qcQOD+iQ+
         DBGe9ungzqCfi1iGz00XLr68zUWbTdDbqqX06fVFaQpfbNFmJYgb3aDCoONxGE2XogSB
         dWBqWhMxcV3wHmEGks5bOxWSse1g+qO6fzubDoHd6mIMIkUNakuyFbf35nv8BEDoIy4b
         lH9Zz3uhYfF+dBdMjIA8s/HFqucZ4zUoiyIzCB7KzjSSZVXNXsawMPJkSqgb2qI5ZWr4
         KOmw==
X-Forwarded-Encrypted: i=1; AJvYcCUUlqd6YmT8kkMJJmS9oORxzWE/AUVP5RldY1bzPKqyctGI33YDFawtcSc9tAFW35EKfKFbVyta8sghN5R15rcTOUONo4nTPcqiEyvZLFY=
X-Gm-Message-State: AOJu0YxxNV++D2w/KXlZMy/59ZkOyPomVo+z/MpIMgEcC8Zj3Sh7oCes
	EcoxxZ5ZH8WWRAD+CDceS1hmba01C2EL5INPRpTmkYvQxSCUUOW9kIKkVIdcrw==
X-Google-Smtp-Source: AGHT+IGvQWZIPzYtI1WHh4GTDUt72Xq6p9ikYRp+O0F3iw5zHL49cfNh5YMRvexu/vBcw3zMAAlf7g==
X-Received: by 2002:a2e:9d86:0:b0:2ec:5a0d:b2dd with SMTP id 38308e7fff4ca-2ec5b2f0373mr78048391fa.39.1719483033844;
        Thu, 27 Jun 2024 03:10:33 -0700 (PDT)
Message-ID: <4c71db0d-60a4-4347-b706-a2e06fc9cd63@suse.com>
Date: Thu, 27 Jun 2024 12:10:23 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 02/10] xen/riscv: introduce bitops.h
To: oleksii.kurochko@gmail.com
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
 <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
 <4c15dd072f08b1161d170608a096dc0851ced588.camel@gmail.com>
 <e2d82c37-da44-4a8f-a1f8-76d5ff05b104@suse.com>
 <f4f3a1550b4809a3cb8b27eb5e7248abf27b3944.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <f4f3a1550b4809a3cb8b27eb5e7248abf27b3944.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 27.06.2024 11:58, oleksii.kurochko@gmail.com wrote:
> On Thu, 2024-06-27 at 09:59 +0200, Jan Beulich wrote:
>> On 26.06.2024 19:27, oleksii.kurochko@gmail.com wrote:
>>> On Wed, 2024-06-26 at 10:31 +0200, Jan Beulich wrote:
>>>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>>>> --- /dev/null
>>>>> +++ b/xen/arch/riscv/include/asm/bitops.h
>>>>> @@ -0,0 +1,137 @@
>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>> +/* Copyright (C) 2012 Regents of the University of California
>>>>> */
>>>>> +
>>>>> +#ifndef _ASM_RISCV_BITOPS_H
>>>>> +#define _ASM_RISCV_BITOPS_H
>>>>> +
>>>>> +#include <asm/system.h>
>>>>> +
>>>>> +#if BITOP_BITS_PER_WORD == 64
>>>>> +#define __AMO(op)   "amo" #op ".d"
>>>>> +#elif BITOP_BITS_PER_WORD == 32
>>>>> +#define __AMO(op)   "amo" #op ".w"
>>>>> +#else
>>>>> +#error "Unexpected BITOP_BITS_PER_WORD"
>>>>> +#endif
>>>>> +
>>>>> +/* Based on linux/arch/include/asm/bitops.h */
>>>>> +
>>>>> +/*
>>>>> + * Non-atomic bit manipulation.
>>>>> + *
>>>>> + * Implemented using atomics to be interrupt safe. Could
>>>>> alternatively
>>>>> + * implement with local interrupt masking.
>>>>> + */
>>>>> +#define __set_bit(n, p)      set_bit(n, p)
>>>>> +#define __clear_bit(n, p)    clear_bit(n, p)
>>>>> +
>>>>> +#define test_and_op_bit_ord(op, mod, nr, addr, ord)     \
>>>>> +({                                                      \
>>>>> +    bitop_uint_t res, mask;                             \
>>>>> +    mask = BITOP_MASK(nr);                              \
>>>>> +    asm volatile (                                      \
>>>>> +        __AMO(op) #ord " %0, %2, %1"                    \
>>>>> +        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])       \
>>>>> +        : "r" (mod(mask))                               \
>>>>> +        : "memory");                                    \
>>>>> +    ((res & mask) != 0);                                \
>>>>> +})
>>>>> +
>>>>> +#define op_bit_ord(op, mod, nr, addr, ord)      \
>>>>> +    asm volatile (                              \
>>>>> +        __AMO(op) #ord " zero, %1, %0"          \
>>>>> +        : "+A" (addr[BITOP_WORD(nr)])           \
>>>>> +        : "r" (mod(BITOP_MASK(nr)))             \
>>>>> +        : "memory");
>>>>> +
>>>>> +#define test_and_op_bit(op, mod, nr, addr)    \
>>>>> +    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
>>>>> +#define op_bit(op, mod, nr, addr) \
>>>>> +    op_bit_ord(op, mod, nr, addr, )
>>>>> +
>>>>> +/* Bitmask modifiers */
>>>>> +#define NOP(x)    (x)
>>>>> +#define NOT(x)    (~(x))
>>>>
>>>> Since elsewhere you said we would use Zbb in bitops, I wanted to
>>>> come
>>>> back
>>>> on that: Up to here all we use is AMO.
>>>>
>>>> And further down there's no asm() anymore. What were you
>>>> referring
>>>> to?
>>> RISC-V doesn't have a CLZ instruction in the base
>>> ISA.  As a consequence, __builtin_ffs() emits a library call to
>>> ffs()
>>> on GCC,
>>
>> Oh, so we'd need to implement that libgcc function, along the lines
>> of
>> Arm32 implementing quite a few of them to support shifts on 64-bit
>> quantities as well as division and modulo.
> Why can't we just live with the Zbb extension? Zbb is present on every
> platform with hypervisor extension support that I have access to.

I'd be fine that way, but then you don't need to break up ANDN into NOT
and AND. It is my understanding that Andrew has concerns here, even if
- iirc - it was he who originally suggested building upon that extension
being available. If these concerns are solely about being able to build
with Zbb-unaware tool chains, then what to do about the build issues
there has already been said.

Jan
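
For reference, the kind of generic C fallback for ffs() that libgcc-style
library calls rely on (when no CLZ/CTZ instruction such as the ones in the
Zbb extension is available) can be sketched as below. This is illustrative
only; the function name and exact shape are not Xen's or libgcc's actual
code:

```c
/*
 * Illustrative sketch only -- not Xen's or libgcc's actual implementation.
 * ffs() semantics: return one plus the index of the least significant set
 * bit of x, or 0 if x is 0.  A binary search over halves of the word is
 * the usual portable fallback when the base ISA lacks a count-zeros
 * instruction.
 */
static int generic_ffs(unsigned int x)
{
    int r = 1;

    if ( !x )
        return 0;
    if ( !(x & 0xffff) ) { x >>= 16; r += 16; }
    if ( !(x & 0x00ff) ) { x >>= 8;  r += 8;  }
    if ( !(x & 0x000f) ) { x >>= 4;  r += 4;  }
    if ( !(x & 0x0003) ) { x >>= 2;  r += 2;  }
    if ( !(x & 0x0001) ) r += 1;
    return r;
}
```

With Zbb available, the whole routine collapses to a single ctz-based
sequence, which is why building upon the extension is attractive.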


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 10:32:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 10:32:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749950.1158193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMmQS-0000B8-DP; Thu, 27 Jun 2024 10:32:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749950.1158193; Thu, 27 Jun 2024 10:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMmQS-0000B1-9w; Thu, 27 Jun 2024 10:32:24 +0000
Received: by outflank-mailman (input) for mailman id 749950;
 Thu, 27 Jun 2024 10:32:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1sMmQQ-0000Au-FO
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 10:32:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMmQQ-0007fR-2a; Thu, 27 Jun 2024 10:32:22 +0000
Received: from [15.248.2.25] (helo=[10.24.67.29])
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1sMmQP-0004AN-Qp; Thu, 27 Jun 2024 10:32:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:From:
	References:Cc:To:Subject:MIME-Version:Date:Message-ID;
	bh=Xa4o4u7u8I6QXsWixHCkdQsoG4fvPOsklXckdvOUVJU=; b=OJ0uUzPupBFsNJclQvY+m3l1uo
	a/cWCRGK9dNQsA0nhKYeensHcbFe7Vvx9oD0wyzQfIxOTNwxcC59yaaPXae8G6IDHVpGcPZ3Y8HR7
	7FwESsWkz/ZJ1HCzwXw3S7qdY3Fh5Gsp/rElHj9eNxBRbtzi9fP6lNjjVNYuKkBTknic=;
Message-ID: <ae447b0b-f791-4ca8-8b33-3600ae059b47@xen.org>
Date: Thu, 27 Jun 2024 11:32:19 +0100
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory node
 probing
Content-Language: en-GB
To: Michal Orzel <michal.orzel@amd.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, oleksii.kurochko@gmail.com
References: <20240626080428.18480-1-michal.orzel@amd.com>
 <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
 <b5c861a4-1431-44c5-a1ec-bc859ea011c3@amd.com>
From: Julien Grall <julien@xen.org>
In-Reply-To: <b5c861a4-1431-44c5-a1ec-bc859ea011c3@amd.com>
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit



On 27/06/2024 08:40, Michal Orzel wrote:
> Hi Julien,
> 
> On 26/06/2024 22:13, Julien Grall wrote:
>>
>>
>> Hi Michal,
>>
>> On 26/06/2024 09:04, Michal Orzel wrote:
>>> Memory node probing is done as part of early_scan_node() that is called
>>> for each node with depth >= 1 (root node is at depth 0). According to
>>> Devicetree Specification v0.4, chapter 3.4, /memory node can only exist
>>> as a top level node. However, Xen incorrectly considers all the nodes with
>>> unit node name "memory" as RAM. This buggy behavior can result in a
>>> failure if there are other nodes in the device tree (at depth >= 2) with
>>> "memory" as unit node name. An example can be a "memory@xxx" node under
>>> /reserved-memory. Fix it by introducing device_tree_is_memory_node() to
>>> perform all the required checks to assess if a node is a proper /memory
>>> node.
>>>
>>> Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM location and size")
>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>> ---
>>> 4.19: This patch is fixing a possible early boot Xen failure (before main
>>> console is initialized). In my case it results in a warning "Shattering
>>> superpage is not supported" and panic "Unable to setup the directmap mappings".
>>>
>>> If this is too late for this patch to go in, we can backport it after the tree
>>> re-opens.
>>
>> The code looks correct to me, but I am not sure about merging it to 4.19.
>>
This is not a new bug (in fact it has been there since pretty much the
creation of Xen on Arm) and we haven't seen any report until today. So
in some way it would be best to merge it after 4.19 so it can get more
testing.

On the other hand, I guess this will block you. Is this a new platform?
>> Is it available?
> Stefano answered this question. Also, IMO this change is quite straightforward
> and does not introduce any engineering doubt, so I'm not really sure if it needs
> more testing.

At this stage of the release we should ask whether the bug is critical
enough (rather than whether the risk is low enough) to be merged. IMHO,
it is not, because this has been there for the past 12 years...

But if we focus on the riskiness: we are rightly changing an interface
which someone may possibly have (bogusly) relied on. So there is a
lowish risk of breaking, late in the release (we are a couple of weeks
away), a use case that never worked in the past 12 years.

We should also consider the worst that can happen if there is a bug in
your series. The worst case is that we would not be able to boot on an
already supported platform. That would be quite a bad regression this
late in the release. For Device-Tree parsing, I don't think testing on
just a handful of platforms is enough this late in the release.

Which is why, to me, the answer to "should it be in 4.19" is not
straightforward. If we merge post-4.19, then we give people the chance
to test, update & adjust their setup if needed.

Anyway, ultimately this is Oleksii's decision as the release manager.

[...]

>>> +/*
>>> + * Check if a node is a proper /memory node according to Devicetree
>>> + * Specification v0.4, chapter 3.4.
>>> + */
>>> +static bool __init device_tree_is_memory_node(const void *fdt, int node,
>>> +                                              int depth)
>>> +{
>>> +    const char *type;
>>> +    int len;
>>> +
>>> +    if ( depth != 1 )
>>> +        return false;
>>> +
>>> +    if ( !device_tree_node_matches(fdt, node, "memory") )
>>> +        return false;
>>> +
>>> +    type = fdt_getprop(fdt, node, "device_type", &len);
>>> +    if ( !type )
>>> +        return false;
>>> +
>>> +    if ( (len <= 0) || strcmp(type, "memory") )
>>
>> I would consider using strncmp() to avoid relying on the property being
>> well-formed (i.e. NUL-terminated).
> Are you sure? AFAIR, libfdt returns NULL and -FDT_ERR_TRUNCATED as len if a string is not NUL-terminated.

I can't find such code in that path. Do you have a pointer?

> Also, let's suppose it is somehow not terminated. In that case how could libfdt set len to be > 0?

The FDT will store the length of the property; otherwise you would not be
able to encode properties that are just a list of cells (i.e. numbers).

> It needs to know where the end of the string is to calculate the length.

Given its name and description, it is unclear why fdt_getprop() would
only work for string properties.

Cheers,

-- 
Julien Grall
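
For illustration, a bounded comparison along the lines suggested above
could look like the sketch below. is_memory_type() and its arguments are
hypothetical stand-ins for the values fdt_getprop() would return; this is
not the patch's code:

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hypothetical sketch, not the patch's actual code: 'type' and 'len'
 * stand in for what fdt_getprop() returns.  Bounding the comparison by
 * the stored property length means a malformed property that lacks its
 * NUL terminator can never be read past its end.
 */
static bool is_memory_type(const char *type, int len)
{
    /* A well-formed "memory" device_type property is 7 bytes, NUL included. */
    return type && len == (int)sizeof("memory") &&
           strncmp(type, "memory", sizeof("memory")) == 0;
}
```

The exact-length check also rejects properties like "memory2" that a bare
strcmp()-style prefix comparison bounded too loosely could accept.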


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 10:40:28 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 10:40:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749956.1158202 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMmY1-0002Bq-4O; Thu, 27 Jun 2024 10:40:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749956.1158202; Thu, 27 Jun 2024 10:40:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMmY1-0002Bj-1s; Thu, 27 Jun 2024 10:40:13 +0000
Received: by outflank-mailman (input) for mailman id 749956;
 Thu, 27 Jun 2024 10:40:12 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMmY0-0002Bd-Lp
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 10:40:12 +0000
Received: from mail-lj1-x22a.google.com (mail-lj1-x22a.google.com
 [2a00:1450:4864:20::22a])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a0c4726f-3471-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 12:40:10 +0200 (CEST)
Received: by mail-lj1-x22a.google.com with SMTP id
 38308e7fff4ca-2ec5779b423so56863521fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 03:40:10 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706b48d0d40sm1048497b3a.10.2024.06.27.03.40.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 03:40:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a0c4726f-3471-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719484810; x=1720089610; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=n70y0PhnhdmR0y0od7jjbLFRwIzTamhdpK5VhI//MbM=;
        b=KD8nimEdcLH2oNEaZYKMjk3fqARRHJA2lqYOoMz340P12ytfCxV1bc4Y9nQBPpvQzb
         pnNgk6pUn8CnQAtqKdNvJAp7xqcACaOUVxHELegk/sywXmqmqCsqsgdZ0x7CRu+KINuj
         SC5/BpdeJYabLxBATC0yOauhaRR/KZOGlWpeO3o6P9V/afGEMAQnilylp1/QW0sgZZTj
         QVQBjE5v91hPyOhl7d/QNEUjXcbm4ip5LSJ6z0yXbvaqLylEkyWNE6dv8K7PZei3qiA3
         IbUeSuOjvfbonpBllZrLwuMg9Bt9ieL4K66LY09cTAeURKKZWIhqeW5lzagHn4TDpNtJ
         WlUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719484810; x=1720089610;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=n70y0PhnhdmR0y0od7jjbLFRwIzTamhdpK5VhI//MbM=;
        b=OFkHds47zErb/L09VrCLK14yiiZfqfpnXc83SGKuz5vVtHD5epIHYErhpzC33DX+xR
         Q27jb5K0+6hCrOnNTyjbf6O8F2lBK2iDVZmgONHNagivE60qZHLnm+m83UhlTDMoO/5y
         7rDsClXmEF5KJThtYAGM+dPGJYWflnrdcmCKSOptqpg1BtwIlAZd/rdyWvEg5AS++o5z
         cnuj7TRVZXm4tnGrg0/P2eQ2JWtsvog3OrA2l7rizUtu7k5alEHeIg2Q0KHOof7+xm+A
         6qd8IPH+Ewha0Py8j23PCmibDE2pwLQOBSQwOSJGBqi1UMpirPMjiWDwudVgddLdB2h7
         KWEw==
X-Gm-Message-State: AOJu0YzEi9ClV3862lSlhfCoXK0isWBaJ232+x0JnzOReXoCtAgV9tgm
	i0t/9KDp0B/qD68M3JGiBgQkfvalf1QxH/iXPOrS/llR9k73JvApGD7MLd0TVw==
X-Google-Smtp-Source: AGHT+IHjz3gXw9cnyx3lnTPBkJH93+2IB8jjqjKUAGSCBoYHbf1XUROgt9JFWErViZuQB7rL54LqeQ==
X-Received: by 2002:a2e:9d14:0:b0:2ee:45f3:1d13 with SMTP id 38308e7fff4ca-2ee45f31ed3mr22693511fa.47.1719484810010;
        Thu, 27 Jun 2024 03:40:10 -0700 (PDT)
Message-ID: <5c15f165-bfae-40ef-ab15-59e262b90b81@suse.com>
Date: Thu, 27 Jun 2024 12:39:59 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 for-4.20 7/7] x86/traps: address violations of
 MISRA C Rule 20.7
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com,
 xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <cover.1719407840.git.nicola.vetrini@bugseng.com>
 <7830b9bfbb0aec272376817eb20bbcbfebdf4044.1719407840.git.nicola.vetrini@bugseng.com>
 <alpine.DEB.2.22.394.2406261746040.3635@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406261746040.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.06.2024 02:46, Stefano Stabellini wrote:
> On Wed, 26 Jun 2024, Nicola Vetrini wrote:
>> MISRA C Rule 20.7 states: "Expressions resulting from the expansion
>> of macro parameters shall be enclosed in parentheses". Therefore, some
>> macro definitions should gain additional parentheses to ensure that all
>> current and future users will be safe with respect to expansions that
>> can possibly alter the semantics of the passed-in macro parameter.
>>
>> No functional change.
>>
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Acked-by: Jan Beulich <jbeulich@suse.com>
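
As a generic illustration of what Rule 20.7 guards against (not an excerpt
from the patch), consider the two hypothetical macros below:

```c
/*
 * Illustrative only -- these macros are not from the patch.  Without
 * parentheses around the parameter, operator precedence inside the
 * argument changes the result of the expansion.
 */
#define DOUBLE_BAD(x)  (x * 2)     /* violates MISRA C Rule 20.7 */
#define DOUBLE_OK(x)   ((x) * 2)   /* compliant */

/*
 * DOUBLE_BAD(1 + 2) expands to (1 + 2 * 2) == 5, while
 * DOUBLE_OK(1 + 2) expands to ((1 + 2) * 2) == 6.
 */
```

Hence the rule's insistence on parenthesizing each expanded parameter even
when all current call sites pass simple expressions.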



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 11:34:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 11:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749974.1158216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnOY-0001HU-62; Thu, 27 Jun 2024 11:34:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749974.1158216; Thu, 27 Jun 2024 11:34:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnOY-0001HN-28; Thu, 27 Jun 2024 11:34:30 +0000
Received: by outflank-mailman (input) for mailman id 749974;
 Thu, 27 Jun 2024 11:34:29 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1631=N5=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMnOX-0001HH-3o
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 11:34:29 +0000
Received: from mail-ej1-x62e.google.com (mail-ej1-x62e.google.com
 [2a00:1450:4864:20::62e])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 35c3620a-3479-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 13:34:27 +0200 (CEST)
Received: by mail-ej1-x62e.google.com with SMTP id
 a640c23a62f3a-a729d9d7086so90288366b.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 04:34:27 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a729d71f71asm51274866b.66.2024.06.27.04.34.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 04:34:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35c3620a-3479-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719488066; x=1720092866; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=OpMZ0gh5qeWEIlYr+BaukSY2eykuaBjOr8afxKaYY9w=;
        b=ZTopwcc1sHu/O/ASwMRBQQuhYLFJwSagtGNDLnYHS9cKw2PMaHZO40tOJ3MlbACb1W
         Y6xSzb11nGnWY4XaXBURmHi8p385gSDa7/8TSDpB9hPfDMbGxtjA+J4s+5UCAwkCuCRF
         kkNuROPJ+T4RFHWOYwoO85DRq5P3206vanDoesYM2IGaNGe1uQ1tFvsvs6ClkcwvrwB9
         b37BWprNrOH7UKpg75obq8MqYFCqauF3ZxsR1D0JS5U41+17caqbgGxVdrkYVBxk/3Yy
         gN03mxAIJznCCZLjkgMmmGZ81oo6wP5UIJHO5lwsPgm6J87TocfnfJPZI8Rb/IcLJiQK
         ltLw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719488066; x=1720092866;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=OpMZ0gh5qeWEIlYr+BaukSY2eykuaBjOr8afxKaYY9w=;
        b=wslVjV5XvoMdoO/LnsitOKoUrsQ44rO+tg4sLIruUvJ45ySUFUdUXswbdeLUgZQTjh
         tnuaY0AKglQlj4BuZVWGh5afzXoaFmyBHSpvDh0pC/1pqp6OaEWPSmhbNJBzANwIGyfx
         zRRMcDrYc+XXAB6kzQGKpOIP/dv4xdCNukkrL3VvOK2/oAL9adGnjVjnndx36ncY2NJa
         sk8Xk8V0hk4lPRL2LeqAU/3HFFwWVcDRahqCIMmjWtfTLXiXbA9ORUJVjPvkOSA+M9nl
         8VYK4j4dL/8xhx0di0GaraHnIPNiN4nk+RfGEzowNH0OzwJMoExPG3U94XYw6unKYwvJ
         fEfA==
X-Forwarded-Encrypted: i=1; AJvYcCWZxyeOb9MnaPZcQ1Zn+JSG1G7LY4hyAG+v4gcI+QP49fS9TkOwYy4JPvGhinCU1SaktXA13+hMfE+ZHZNJjdVEa35Zc2k2bxGw0YA4QXg=
X-Gm-Message-State: AOJu0YxMSIImHeRmvOnjulfT8iF/GO19CwU2WVHyXM3nKUPMueaHf8qg
	8rCPBq2lY8UL0oZHvaVtIjIuaAOi8rTvFVsFOnPbjhX9ke1QIdeT
X-Google-Smtp-Source: AGHT+IH092yTQb2N1OMooCR3i+gNCPso3B0H251H9XjUfUbLgB4+yEPGweF4hq8CCZYinPD94GRZ9w==
X-Received: by 2002:a17:907:c717:b0:a72:6fc3:9941 with SMTP id a640c23a62f3a-a7296f5d198mr244027266b.16.1719488066247;
        Thu, 27 Jun 2024 04:34:26 -0700 (PDT)
Message-ID: <b73924b47f02d66a12b421b21d959ad7bae389b5.camel@gmail.com>
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory
 node probing
From: oleksii.kurochko@gmail.com
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@amd.com>, 
	xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	 <bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date: Thu, 27 Jun 2024 13:34:24 +0200
In-Reply-To: <ae447b0b-f791-4ca8-8b33-3600ae059b47@xen.org>
References: <20240626080428.18480-1-michal.orzel@amd.com>
	 <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
	 <b5c861a4-1431-44c5-a1ec-bc859ea011c3@amd.com>
	 <ae447b0b-f791-4ca8-8b33-3600ae059b47@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Thu, 2024-06-27 at 11:32 +0100, Julien Grall wrote:
> 
> 
> On 27/06/2024 08:40, Michal Orzel wrote:
> > Hi Julien,
> > 
> > On 26/06/2024 22:13, Julien Grall wrote:
> > > 
> > > 
> > > Hi Michal,
> > > 
> > > On 26/06/2024 09:04, Michal Orzel wrote:
> > > > Memory node probing is done as part of early_scan_node() that
> > > > is called
> > > > for each node with depth >= 1 (root node is at depth 0).
> > > > According to
> > > > Devicetree Specification v0.4, chapter 3.4, /memory node can
> > > > only exists
> > > > as a top level node. However, Xen incorrectly considers all the
> > > > nodes with
> > > > unit node name "memory" as RAM. This buggy behavior can result
> > > > in a
> > > > failure if there are other nodes in the device tree (at depth
> > > > >= 2) with
> > > > "memory" as unit node name. An example can be a "memory@xxx"
> > > > node under
> > > > /reserved-memory. Fix it by introducing
> > > > device_tree_is_memory_node() to
> > > > perform all the required checks to assess if a node is a proper
> > > > /memory
> > > > node.
> > > > 
> > > > Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM
> > > > location and size")
> > > > Signed-off-by: Michal Orzel <michal.orzel@amd.com>
> > > > ---
> > > > 4.19: This patch is fixing a possible early boot Xen failure
> > > > (before main
> > > > console is initialized). In my case it results in a warning
> > > > "Shattering
> > > > superpage is not supported" and panic "Unable to setup the
> > > > directmap mappings".
> > > > 
> > > > If this is too late for this patch to go in, we can backport it
> > > > after the tree
> > > > re-opens.
> > > 
> > > The code looks correct to me, but I am not sure about merging it
> > > to 4.19.
> > > 
> > > This is not a new bug (in fact has been there since pretty much
> > > Xen on
> > > Arm was created) and we haven't seen any report until today. So
> > > in some
> > > way it would be best to merge it after 4.19 so it can get more
> > > testing.
> > > 
> > > In the other hand, I guess this will block you. Is this a new
> > > platform?
> > > Is it available?
> > Stefano answered this question. Also, IMO this change is quite
> > straightforward
> > and does not introduce any engineering doubt, so I'm not really
> > sure if it needs
> > more testing.
> 
> At this stage of the release we should think whether the bug is
> critical
> enough (rather than the risk is low enough) to be merged. IMHO, it is
> not because this has been there for the past 12 years...
> 
> But if we focus on the "riskness". We are rightly changing an
> interface
> which possibly someone may have (bogusly) relied on. So there is a
> lowish risk that you may end up to break use-case late in the release
> (we are a couple of weeks away) for use-case that never worked in the
> past 12 years.
> 
> We should also think what the worse that can happen if there is a bug
> in
> your series. The worse is we would not be able to boot on already
> supported platform. This would be quite a bad regression this late in
> the release. For Device-Tree parsing, I don't think it is enough to
> just
> test on just a handful of platforms this late in the release.
> 
> Which is why to me the answer to "It should be in 4.19" is not
> straightforward. If we merge post 4.19, then we give the chance to
> people to test, update & adjust their setup if needed.
> 
> Anyway, ultimately this is Oleksii's decision as the release manager.
I agree with Julien, it would be better to postpone this patch series
until 4.20 branching.

~ Oleksii

> 
> [...]
> 
> > > > +/*
> > > > + * Check if a node is a proper /memory node according to
> > > > Devicetree
> > > > + * Specification v0.4, chapter 3.4.
> > > > + */
> > > > +static bool __init device_tree_is_memory_node(const void *fdt,
> > > > int node,
> > > > +                                              int depth)
> > > > +{
> > > > +    const char *type;
> > > > +    int len;
> > > > +
> > > > +    if ( depth != 1 )
> > > > +        return false;
> > > > +
> > > > +    if ( !device_tree_node_matches(fdt, node, "memory") )
> > > > +        return false;
> > > > +
> > > > +    type = fdt_getprop(fdt, node, "device_type", &len);
> > > > +    if ( !type )
> > > > +        return false;
> > > > +
> > > > +    if ( (len <= 0) || strcmp(type, "memory") )
> > > 
> > > I would consider to use strncmp() to avoid relying on the
> > > property to be
> > > well-formed (i.e. nul-terminated).
> > Are you sure? AFAIR, libfdt returns NULL and -FDT_ERR_TRUNCATED as
> > len if a string is non null terminated.
> 
> I can't find such code in path. Do you have any pointer?
> 
> > Also, let's suppose it is somehow not terminated. In that case how
> > could libfdt set len to be > 0?
> 
> The FDT will store the length of the property otherwise you would not
> be
> able to encode property that are just a list of cells (i.e. numbers).
> 
> > It needs to know where is the end of the string to calculate the
> > length.
> 
> For the name and the description, it is unclear to why would
> fdt_getprop() would only work for string property.
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 12:01:37 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 12:01:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750007.1158264 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnok-00067B-49; Thu, 27 Jun 2024 12:01:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750007.1158264; Thu, 27 Jun 2024 12:01:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnok-000674-1T; Thu, 27 Jun 2024 12:01:34 +0000
Received: by outflank-mailman (input) for mailman id 750007;
 Thu, 27 Jun 2024 12:01:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1631=N5=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMnoj-0005la-H6
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 12:01:33 +0000
Received: from mail-lf1-x134.google.com (mail-lf1-x134.google.com
 [2a00:1450:4864:20::134])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id fee54b7f-347c-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 14:01:32 +0200 (CEST)
Received: by mail-lf1-x134.google.com with SMTP id
 2adb3069b0e04-52cdfb69724so6190067e87.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 05:01:32 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52e713054cbsm177978e87.121.2024.06.27.05.01.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 05:01:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fee54b7f-347c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719489692; x=1720094492; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=9NL1AgT6euU0rpg8fP50h7rSpVzwu7zCQeyofXoqU1o=;
        b=JB1a9ZIPYrHb59Bt5IKr1GgKuEDj1w/OTfB4xcG2KSRx2IvcqqJE62CU0vgTlPW/Jm
         HN3m1/hexY12/rwCSdnhpJIOlmP8HDiup7RzAz7WZsc+fTi69NUAH6PVasyireAO8qfV
         H4k3OWWdoJWQR1cqG8HJtyCd5zQjXDHoKpOmeIjfxCRGCcmL+yx+ewwJtxuIG4tBqle5
         2JMD625NSTkxF04Zqe+Tnh5wbSIfmKpA7Ho1tVqghLZwjTE4YOnlBJy4JnjtuTkwGu3E
         tkYFeDAr7sDwDYch86RJkQl36wbEXW0CVq543uYEJxcqCcrBiD203hp/h87kVgjhYCEd
         RiIg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719489692; x=1720094492;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=9NL1AgT6euU0rpg8fP50h7rSpVzwu7zCQeyofXoqU1o=;
        b=XSBL0Qp7T2pMtqazU63bDbbAXciXjh/AkOIrKVF5PcS7Yo5AfrCQ7AoauoJ96oXBeF
         eTPr0M3hIvXHE+Cvh+qWzQXR6Vp+pax6aV41P2x5ZEUf8qPJ32eZjpSV030b5QZlln0F
         d9yFc9ZEosojWahb1SrILRVz6VeEb3iorz+8qjWHAlHivgYlhVSovxqe91qW7D43OlJ5
         t1gNXtCOdOy3DwCHDGQuFu5NVz0Qa50HCLAS5TSsOE1fj4ULZFpyZ6vF2seXdHVyGJMQ
         q8PjDcFVZAYLzJmiFnvnMxK4G51WOm0zmtY1xzRGrIaDwXBYpUaIUZ65tTFKUcoA8Tjn
         1mUg==
X-Forwarded-Encrypted: i=1; AJvYcCV++mO+YpCMpDFZzjGI7HFoNjDEGgxI2+kYRWPdzS6tz3+GmdzeEugnm8JgH+lMkUKxueVciOC1v8f7yIycjwbI6sP/B4LN70zxsPwb5c0=
X-Gm-Message-State: AOJu0YyZB93aGj7/fajSesa+xC95oZHzEHdEDp/DWaikDddT5T8LbbMa
	NfEiaKg4BLIU5JDdoGn29Bi+Ro4yGOzP9WDr/RxuiuQdG/1qOKBG
X-Google-Smtp-Source: AGHT+IH47MAvOM0X5tOCVl1a0CysiYTQ6CODqeZQtR9PJroxlzHcwO58ZtVRfT1EhN0P3ojug+soQQ==
X-Received: by 2002:a19:ee10:0:b0:52c:db06:ee60 with SMTP id 2adb3069b0e04-52ce184395dmr7687661e87.41.1719489691996;
        Thu, 27 Jun 2024 05:01:31 -0700 (PDT)
Message-ID: <7bd44b8615eb545b4956008d02c158d5c85e2345.camel@gmail.com>
Subject: Re: [PATCH v13 02/10] xen/riscv: introduce bitops.h
From: oleksii.kurochko@gmail.com
To: Jan Beulich <jbeulich@suse.com>
Cc: Alistair Francis <alistair.francis@wdc.com>, Bob Eshleman
 <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>,  Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
Date: Thu, 27 Jun 2024 14:01:30 +0200
In-Reply-To: <4c71db0d-60a4-4347-b706-a2e06fc9cd63@suse.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
	 <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
	 <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
	 <4c15dd072f08b1161d170608a096dc0851ced588.camel@gmail.com>
	 <e2d82c37-da44-4a8f-a1f8-76d5ff05b104@suse.com>
	 <f4f3a1550b4809a3cb8b27eb5e7248abf27b3944.camel@gmail.com>
	 <4c71db0d-60a4-4347-b706-a2e06fc9cd63@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Thu, 2024-06-27 at 12:10 +0200, Jan Beulich wrote:
> On 27.06.2024 11:58, oleksii.kurochko@gmail.com wrote:
> > On Thu, 2024-06-27 at 09:59 +0200, Jan Beulich wrote:
> > > On 26.06.2024 19:27, oleksii.kurochko@gmail.com wrote:
> > > > On Wed, 2024-06-26 at 10:31 +0200, Jan Beulich wrote:
> > > > > On 25.06.2024 15:51, Oleksii Kurochko wrote:
> > > > > > --- /dev/null
> > > > > > +++ b/xen/arch/riscv/include/asm/bitops.h
> > > > > > @@ -0,0 +1,137 @@
> > > > > > +/* SPDX-License-Identifier: GPL-2.0 */
> > > > > > +/* Copyright (C) 2012 Regents of the University of California */
> > > > > > +
> > > > > > +#ifndef _ASM_RISCV_BITOPS_H
> > > > > > +#define _ASM_RISCV_BITOPS_H
> > > > > > +
> > > > > > +#include <asm/system.h>
> > > > > > +
> > > > > > +#if BITOP_BITS_PER_WORD == 64
> > > > > > +#define __AMO(op)   "amo" #op ".d"
> > > > > > +#elif BITOP_BITS_PER_WORD == 32
> > > > > > +#define __AMO(op)   "amo" #op ".w"
> > > > > > +#else
> > > > > > +#error "Unexpected BITOP_BITS_PER_WORD"
> > > > > > +#endif
> > > > > > +
> > > > > > +/* Based on linux/arch/include/asm/bitops.h */
> > > > > > +
> > > > > > +/*
> > > > > > + * Non-atomic bit manipulation.
> > > > > > + *
> > > > > > + * Implemented using atomics to be interrupt safe. Could alternatively
> > > > > > + * implement with local interrupt masking.
> > > > > > + */
> > > > > > +#define __set_bit(n, p)      set_bit(n, p)
> > > > > > +#define __clear_bit(n, p)    clear_bit(n, p)
> > > > > > +
> > > > > > +#define test_and_op_bit_ord(op, mod, nr, addr, ord)     \
> > > > > > +({                                                      \
> > > > > > +    bitop_uint_t res, mask;                             \
> > > > > > +    mask = BITOP_MASK(nr);                              \
> > > > > > +    asm volatile (                                      \
> > > > > > +        __AMO(op) #ord " %0, %2, %1"                    \
> > > > > > +        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])       \
> > > > > > +        : "r" (mod(mask))                               \
> > > > > > +        : "memory");                                    \
> > > > > > +    ((res & mask) != 0);                                \
> > > > > > +})
> > > > > > +
> > > > > > +#define op_bit_ord(op, mod, nr, addr, ord)      \
> > > > > > +    asm volatile (                              \
> > > > > > +        __AMO(op) #ord " zero, %1, %0"          \
> > > > > > +        : "+A" (addr[BITOP_WORD(nr)])           \
> > > > > > +        : "r" (mod(BITOP_MASK(nr)))             \
> > > > > > +        : "memory");
> > > > > > +
> > > > > > +#define test_and_op_bit(op, mod, nr, addr)    \
> > > > > > +    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
> > > > > > +#define op_bit(op, mod, nr, addr) \
> > > > > > +    op_bit_ord(op, mod, nr, addr, )
> > > > > > +
> > > > > > +/* Bitmask modifiers */
> > > > > > +#define NOP(x)    (x)
> > > > > > +#define NOT(x)    (~(x))
> > > > >
> > > > > Since elsewhere you said we would use Zbb in bitops, I wanted
> > > > > to come back on that: up to here all we use is AMO.
> > > > >
> > > > > And further down there's no asm() anymore. What were you
> > > > > referring to?
> > > > RISC-V doesn't have a CLZ instruction in the base ISA. As a
> > > > consequence, __builtin_ffs() emits a library call to ffs() on
> > > > GCC,
> > >
> > > Oh, so we'd need to implement that libgcc function, along the
> > > lines of Arm32 implementing quite a few of them to support shifts
> > > on 64-bit quantities as well as division and modulo.
> > Why can't we just live with the Zbb extension? The Zbb extension is
> > present on every platform I have access to with hypervisor extension
> > support.
>
> I'd be fine that way, but then you don't need to break up ANDN into
> NOT and AND. It is my understanding that Andrew has concerns here,
> even if - iirc - it was him who originally suggested to build upon
> that extension being available. If these concerns are solely about
> being able to build with Zbb-unaware tool chains, then what to do
> about the build issues there has already been said.
There is not much we can do except probably using .insn, as you
suggested for the "pause" instruction in cpu_relax(), for every
instruction from the Zbb extension (at the moment it is only ANDN, but
who knows which instructions will be used in the future).

But then we will need to do the same for each possible extension we are
going to use, as there is still a small chance that we might encounter
an extension-unaware toolchain.

I am a little bit confused about what we should do.

In my opinion, the best approach at the moment is to use .insn for the
ANDN and PAUSE instructions and add an explanation to
docs/misc/riscv/booting.txt or create a separate document where such
issues are documented (I am not sure that README is the correct place
for this).

I am also okay with breaking ANDN up into NOT and AND, but then we need
to come up with a concept of which instructions/extensions should be
used and how to deal with such situations consistently.

Furthermore, I don't think these changes should block the merging of
patches v13 01-07 of this series (whether in the 4.19 or the 4.20 Xen
release).

~ Oleksii
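
[For reference, the libgcc-style ffs() fallback discussed above can be
sketched in plain C. This is a generic software implementation under
the assumption that no CLZ/CTZ instruction (no Zbb) is available; it is
not the actual libgcc source, and the name soft_ffs is hypothetical.]

```c
#include <assert.h>

/*
 * Software fallback for ffs(): return one plus the index of the least
 * significant set bit of x, or 0 if x is zero. This is what GCC's
 * emitted library call to ffs() has to compute when the base ISA
 * offers no bit-counting instruction.
 */
int soft_ffs(int x)
{
    unsigned int v = (unsigned int)x;
    int pos = 1;

    if ( v == 0 )
        return 0;

    /* Shift right until the lowest set bit reaches bit 0. */
    while ( !(v & 1) )
    {
        v >>= 1;
        pos++;
    }

    return pos;
}
```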



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 12:01:38 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 12:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750006.1158253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnoe-0005rq-UE; Thu, 27 Jun 2024 12:01:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750006.1158253; Thu, 27 Jun 2024 12:01:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnoe-0005rj-RX; Thu, 27 Jun 2024 12:01:28 +0000
Received: by outflank-mailman (input) for mailman id 750006;
 Thu, 27 Jun 2024 12:01:28 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LqiX=N5=amd.com=Michal.Orzel@srs-se1.protection.inumbo.net>)
 id 1sMnod-0005k3-Sy
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 12:01:27 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on20618.outbound.protection.outlook.com
 [2a01:111:f403:2418::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id fa0fed2f-347c-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 14:01:25 +0200 (CEST)
Received: from CH2PR02CA0022.namprd02.prod.outlook.com (2603:10b6:610:4e::32)
 by LV8PR12MB9207.namprd12.prod.outlook.com (2603:10b6:408:187::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.35; Thu, 27 Jun
 2024 12:01:21 +0000
Received: from CH3PEPF0000000B.namprd04.prod.outlook.com
 (2603:10b6:610:4e:cafe::9b) by CH2PR02CA0022.outlook.office365.com
 (2603:10b6:610:4e::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.26 via Frontend
 Transport; Thu, 27 Jun 2024 12:01:21 +0000
Received: from SATLEXMB03.amd.com (165.204.84.17) by
 CH3PEPF0000000B.mail.protection.outlook.com (10.167.244.38) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Thu, 27 Jun 2024 12:01:21 +0000
Received: from SATLEXMB03.amd.com (10.181.40.144) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Thu, 27 Jun
 2024 07:01:20 -0500
Received: from [10.252.147.188] (10.180.168.240) by SATLEXMB03.amd.com
 (10.181.40.144) with Microsoft SMTP Server id 15.1.2507.39 via Frontend
 Transport; Thu, 27 Jun 2024 07:01:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa0fed2f-347c-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oVQCDW25BBeZXJQq7Sfgf+JNbfN5GSHaCBXg6KNGbMr65cnXhw0Ls+/5SqhcKMAM7NCnl5U67UXenHVXjzfPmXJ5XlMOPvbSvOePO1eesF2sSi8796HFBhRiWSKWCQsEIf0fopcWrYUssHv3kzTbjq2ezPFf9LpuW7N8RN3XXL/Iur2lhYuuz3Echmms8KjHMJYXEwRQlwgbtFLPi55HzmKZqJinCBzYyHlRbKRXRDB9Awf3kDbQafG6dBK9L7STTJsU3DGx+gmCj2H3Cq2H36J+fjiBs7Zkes3VuNdPnmDPlK9faVLDaR51lXdvq313N2bR9B+amkRwqlBgFBaqhQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=zopRGf0SiJVOAOBufZsTLCwsUY4uwnCSzqb5Ja9hr5M=;
 b=DFucPRw00gEuEnEEbTg/jNomP+uePdIpjIOW3eE2tuUQJu60sNC8RxKg0rprS0DCRenMEYAVLumyHuy8DQqGLVUEWhbZBOjxZcUazIhCFJly2vfF/3yNziXL+opd2BpdHevY4FZZcgQQh5eWhf4ugvhWmj1iPX3eq/Q9mEq1cPsoegKAS0B7U2ze89cPGsUUnwvGWvFauWKTSWf9+Z1il0LsqZaZkFzwKIPTJSnfW9dabsuz6bsPO9A8vhruim8WZxRehVm7HjIpZLnEosXsa8qKdvJKN8lc/bhMGagx1eVuGPZ/1MoDSvm7kh664v/7kZ8xfFnXIVxdZghmE7l8Jg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=xen.org smtp.mailfrom=amd.com; dmarc=pass
 (p=quarantine sp=quarantine pct=100) action=none header.from=amd.com;
 dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=zopRGf0SiJVOAOBufZsTLCwsUY4uwnCSzqb5Ja9hr5M=;
 b=mssJv23cbdNmO+/DeUt5AiKPtX4QDenVAKRS2OHdV4V8P1MLUFPqsbbcJYqaWo1KXG1LaAZghBs7r8ARElW+XSPyP+p6uu7cE4KX7s/iEUDyx+jtyg55Lyd8dwC0Mpk0NVQijc3U2NR/Kl8o/Rn/IPr6MLye9v6F76vPejpen2A=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB03.amd.com; pr=C
Message-ID: <03bad4dd-9d6e-4341-b72c-5bb516f3ce36@amd.com>
Date: Thu, 27 Jun 2024 14:01:13 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19(?)] xen/arm: bootfdt: Fix device tree memory node
 probing
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Stefano Stabellini <sstabellini@kernel.org>, Bertrand Marquis
	<bertrand.marquis@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	<oleksii.kurochko@gmail.com>
References: <20240626080428.18480-1-michal.orzel@amd.com>
 <766b260e-204c-423f-b0e1-c21957b6d169@xen.org>
 <b5c861a4-1431-44c5-a1ec-bc859ea011c3@amd.com>
 <ae447b0b-f791-4ca8-8b33-3600ae059b47@xen.org>
Content-Language: en-US
From: Michal Orzel <michal.orzel@amd.com>
In-Reply-To: <ae447b0b-f791-4ca8-8b33-3600ae059b47@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 7bit
Received-SPF: None (SATLEXMB03.amd.com: michal.orzel@amd.com does not
 designate permitted sender hosts)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH3PEPF0000000B:EE_|LV8PR12MB9207:EE_
X-MS-Office365-Filtering-Correlation-Id: 2787f05a-64dd-4f39-eb47-08dc96a0dc47
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|36860700013|376014|82310400026|1800799024;
X-Microsoft-Antispam-Message-Info:
	=?utf-8?B?WjZoRDRSSzI4aUVmSkVtdlRWczhJRVhhdnAyak1GOXd0bFJlSDJsREh5UC8r?=
 =?utf-8?B?VXhxM0RKYVVCZEFCRjZsU0IxaWExMFN1YlIvVGVkRmxUekZPQzk1aHdFZVFL?=
 =?utf-8?B?WGw3MkNZOUltN1d0Z1dWRjVkaU9FSmRFQXVCQzJFSG4vVFZmSFdmclFuaUxT?=
 =?utf-8?B?TTlJeUduWjV5dlhoY28zVE5jNmgydGJyazByaC9IRVBBc1N6aUJxbFVkakw2?=
 =?utf-8?B?bEFydXV2cENqbjNPd01PSEtRd3lCUTBvbjNxTS9Fbk5PRCtueDhtdHZDVUph?=
 =?utf-8?B?cVFzekgvb0hhSklqSlpvRVFlVVd4d2k4OE1YWkRzN3FReHNVeDlFNWtwMmxk?=
 =?utf-8?B?M2RyMTA0U3lPT0VFakRncjk2bVhPdkpOdVVmRDRvQ2tOZjNvRmVlVnpTcnFw?=
 =?utf-8?B?ZGpldUo3RU9jckJ5KzdHdWt5Q1VvWk5weUZ5QnBZK0RPeWZ2UzJ4UjIxakd5?=
 =?utf-8?B?Y04rQnduUE13Nis0dzc2SS90NkFwUk9aTlUvNVU3b0RFTEI3dkk0WWI2bDEv?=
 =?utf-8?B?djJvMkcrYTFMUEZ2dWZ1b25xK3FWOE9CY1VWd1U4M0tBSEFaVU5TRnBzaXpx?=
 =?utf-8?B?enhtWDJiNUhqbHFkZzNRRTdsMXVVcks2U3k4ZlpaZ3hYYXhEMXhDamtkZ0Ro?=
 =?utf-8?B?SWxkRGhjZXpJR0ZFSjcxaUdpZTlEd012aU1uelREZmdNcnJ4eThvVExPQ2Q2?=
 =?utf-8?B?QWZsUkVMWjBZdWFzenlIT0VrVGhmRTBhdzVkWTFFTCtsT2R5NmU1M1BKeUxK?=
 =?utf-8?B?Yy9lYXY4M3JzbG43enhSYkZlOEsyVXZhSmVPM2xIMUVGY3F2VXBtVi95Z0tK?=
 =?utf-8?B?L0lpMDRnaWJ4bi84TnhWTHk2bFNnNUJEN0podnBUckJ2SWJCS3pnZ3NjdFlR?=
 =?utf-8?B?MU9IMkJPQTBrUGlKWVdyUDhqQWowazl5clVuWFJpQi9TWlFvYTJTVlBhV2RS?=
 =?utf-8?B?RE5Lc1Q3V3NvVEQvSWRZSmUxOE9FbzZVbEtwSENEZ284cE0xWmkvZmxCeEpa?=
 =?utf-8?B?ZG52ZW5KZ0VvUXZpOUZCTnlVK3FZM3NOd2pvZ2FXcmpyVlNac1YrL0VXVkRK?=
 =?utf-8?B?V3h0T1NmZVRISndrdFUzOE5yOUZFNzM3MjFSZzNVaTVQUm0wRHpIWHl0SEhJ?=
 =?utf-8?B?WWp1NCs3SlRjeTJsWXpXN09uN2hCQVZLVkdWL3BOZkN3VW5rUUJDRVNZMFVF?=
 =?utf-8?B?TStOVjcyY1B4T1EwSm1uWnBBa1FQUVVnM292TzN6NmFQVXNUcnU3L285L2Nw?=
 =?utf-8?B?bFhaV1hEQ1k5bWhsdDFid2Q0eFpTSUxleHZ1QVlwdy8wZ2JzVWM0Y3UwU1Bv?=
 =?utf-8?B?VUVPNVVXL3JwM241VTFzMUpzWVVGbE5ENVlHN2ViQUJ1d3lCSnkyVEFFR1Fs?=
 =?utf-8?B?R1lyZFFrMStjWUtxNiszam12MWdEdUZ6NmdsdmFiU0tqUnV6ekdJK3Vaem1s?=
 =?utf-8?B?eHppN2ttdHlvNmNXMFRIb0trS2VUMjhwOFVCM1E5WHRZUGtMVkhZTFNuMmhO?=
 =?utf-8?B?M3BHYkx6Q3poK0ZoSlBVcVhKaklDc1ZKME1LVUxLWEhBRmErL01vRUZTOGpa?=
 =?utf-8?B?cXNkaUVEdjJIcFpzSEloUWhrMTc0QU5zVjRqRGRMbit2WTVLWEpkSFRnT2xD?=
 =?utf-8?B?ZUtUSVRETU1JYTdQa3M4NjNwK24xRVdEZEo4ZU5VOEVVQjV2NmdHWG5qRzJ0?=
 =?utf-8?B?Y0ltMGpBWnhkYm5NMzFDTEpaZlJaQk9FVFh2cTJnVlZMQnEyTU1paEVOQWU4?=
 =?utf-8?B?c1FHQkhlSkVPbXBXcUZkMUZtVzBSNWVpTzBNS3YwaXlhUCtaUEtGOVRMcUFp?=
 =?utf-8?B?VWZYWkJ6R1RpSzNoTm9makFLYWFLTEVFWWx6V0RkOVJRWVRmaXZULzlYd3R5?=
 =?utf-8?B?cEVuMlV6Nkt5NWpZbERaRFA1cmo5VHQzUy91allPcEEwM2N3ZldUd2M4VzF2?=
 =?utf-8?Q?SUao7E7udu543ksreCdPeQPrdoPZYfao?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB03.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(36860700013)(376014)(82310400026)(1800799024);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 Jun 2024 12:01:21.3621
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2787f05a-64dd-4f39-eb47-08dc96a0dc47
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB03.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH3PEPF0000000B.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: LV8PR12MB9207



On 27/06/2024 12:32, Julien Grall wrote:
> 
> 
> On 27/06/2024 08:40, Michal Orzel wrote:
>> Hi Julien,
>>
>> On 26/06/2024 22:13, Julien Grall wrote:
>>>
>>>
>>> Hi Michal,
>>>
>>> On 26/06/2024 09:04, Michal Orzel wrote:
>>>> Memory node probing is done as part of early_scan_node() that is called
>>>> for each node with depth >= 1 (root node is at depth 0). According to
>>>> Devicetree Specification v0.4, chapter 3.4, /memory node can only exist
>>>> as a top level node. However, Xen incorrectly considers all the nodes with
>>>> unit node name "memory" as RAM. This buggy behavior can result in a
>>>> failure if there are other nodes in the device tree (at depth >= 2) with
>>>> "memory" as unit node name. An example can be a "memory@xxx" node under
>>>> /reserved-memory. Fix it by introducing device_tree_is_memory_node() to
>>>> perform all the required checks to assess if a node is a proper /memory
>>>> node.
>>>>
>>>> Fixes: 3e99c95ba1c8 ("arm, device tree: parse the DTB for RAM location and size")
>>>> Signed-off-by: Michal Orzel <michal.orzel@amd.com>
>>>> ---
>>>> 4.19: This patch is fixing a possible early boot Xen failure (before main
>>>> console is initialized). In my case it results in a warning "Shattering
>>>> superpage is not supported" and panic "Unable to setup the directmap mappings".
>>>>
>>>> If this is too late for this patch to go in, we can backport it after the tree
>>>> re-opens.
>>>
>>> The code looks correct to me, but I am not sure about merging it to 4.19.
>>>
>>> This is not a new bug (in fact has been there since pretty much Xen on
>>> Arm was created) and we haven't seen any report until today. So in some
>>> way it would be best to merge it after 4.19 so it can get more testing.
>>>
>>> On the other hand, I guess this will block you. Is this a new platform?
>>> Is it available?
>> Stefano answered this question. Also, IMO this change is quite straightforward
>> and does not introduce any engineering doubt, so I'm not really sure if it needs
>> more testing.
> 
> At this stage of the release we should think about whether the bug is
> critical enough (rather than whether the risk is low enough) to be
> merged. IMHO, it is not, because this has been there for the past 12
> years...
> 
> But if we focus on the "riskiness": we are rightly changing an
> interface which someone may have (bogusly) relied on. So there is a
> lowish risk that you may end up breaking, late in the release (we are
> a couple of weeks away), a use-case that never worked in the past 12
> years.
> 
> We should also think about the worst that can happen if there is a
> bug in your series. The worst is that we would not be able to boot on
> an already supported platform. That would be quite a bad regression
> this late in the release. For Device-Tree parsing, I don't think it
> is enough to test on just a handful of platforms this late in the
> release.
> 
> Which is why to me the answer to "It should be in 4.19" is not
> straightforward. If we merge post 4.19, then we give the chance to
> people to test, update & adjust their setup if needed.
> 
> Anyway, ultimately this is Oleksii's decision as the release manager.
Ok, I agree with your reasoning.

> 
> [...]
> 
>>>> +/*
>>>> + * Check if a node is a proper /memory node according to Devicetree
>>>> + * Specification v0.4, chapter 3.4.
>>>> + */
>>>> +static bool __init device_tree_is_memory_node(const void *fdt, int node,
>>>> +                                              int depth)
>>>> +{
>>>> +    const char *type;
>>>> +    int len;
>>>> +
>>>> +    if ( depth != 1 )
>>>> +        return false;
>>>> +
>>>> +    if ( !device_tree_node_matches(fdt, node, "memory") )
>>>> +        return false;
>>>> +
>>>> +    type = fdt_getprop(fdt, node, "device_type", &len);
>>>> +    if ( !type )
>>>> +        return false;
>>>> +
>>>> +    if ( (len <= 0) || strcmp(type, "memory") )
>>>
>>> I would consider to use strncmp() to avoid relying on the property to be
>>> well-formed (i.e. nul-terminated).
>> Are you sure? AFAIR, libfdt returns NULL and -FDT_ERR_TRUNCATED as len if a string is not NUL-terminated.
> 
> I can't find such code in path. Do you have any pointer?
I checked and I cannot find such code either. I wrongly assumed that
fdt_getprop() only works with strings.
Therefore, I'm ok with changing s/strcmp/strncmp/ for hardening. Shall
I respin the patch marking it as for-4.20?
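
[A minimal sketch of the hardened comparison being agreed on here. The
helper name is hypothetical; the real patch operates on fdt_getprop()
output, where len for a well-formed string property includes the
trailing NUL. This sketch additionally rejects length mismatches,
which is slightly stricter than a bare s/strcmp/strncmp/ swap.]

```c
#include <stdbool.h>
#include <string.h>

/*
 * Hardened device_type check: strncmp() bounds the read by the length
 * libfdt reported, so a malformed (non-NUL-terminated) property value
 * cannot make the comparison run past the end of the buffer the way
 * strcmp() could.
 */
static bool is_memory_device_type(const char *type, int len)
{
    /*
     * A well-formed "memory" value occupies exactly sizeof("memory")
     * == 7 bytes, including the trailing NUL that libfdt counts in len.
     */
    if ( len != (int)sizeof("memory") )
        return false;

    return strncmp(type, "memory", sizeof("memory")) == 0;
}
```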


~Michal


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 12:02:43 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 12:02:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750015.1158274 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnpq-0006sb-FJ; Thu, 27 Jun 2024 12:02:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750015.1158274; Thu, 27 Jun 2024 12:02:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnpq-0006sU-Al; Thu, 27 Jun 2024 12:02:42 +0000
Received: by outflank-mailman (input) for mailman id 750015;
 Thu, 27 Jun 2024 12:02:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8NqD=N5=cloud.com=alejandro.vallejo@srs-se1.protection.inumbo.net>)
 id 1sMnpo-0006rT-UB
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 12:02:40 +0000
Received: from mail-ed1-x52c.google.com (mail-ed1-x52c.google.com
 [2a00:1450:4864:20::52c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 26d466b2-347d-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 14:02:40 +0200 (CEST)
Received: by mail-ed1-x52c.google.com with SMTP id
 4fb4d7f45d1cf-57d07673185so1710243a12.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 05:02:40 -0700 (PDT)
Received: from localhost ([160.101.139.1]) by smtp.gmail.com with ESMTPSA id
 4fb4d7f45d1cf-584d0c9db1csm783231a12.13.2024.06.27.05.02.38
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 05:02:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26d466b2-347d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719489759; x=1720094559; darn=lists.xenproject.org;
        h=in-reply-to:references:from:subject:cc:to:message-id:date
         :content-transfer-encoding:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=X9JgqsrnYL4gqd6bzHldK3PBIc55az4D0y+TZMTcpFI=;
        b=NHEMmgCYEwkSAVTp4lVHzw1MLex7zOQ58gmweWkU7PDqYFBnLcNQkxvTqxAr4DujGL
         nwaZi4w8Lk5tl3e6B/XOnjD1uc7k3eZKvZuNtjwgJT8XvGUNbexNC94pNAwElBPfyGKE
         GvtT7ek9U5HBO2smdDBsoVfSrnVjMLwHcr9Js=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719489759; x=1720094559;
        h=in-reply-to:references:from:subject:cc:to:message-id:date
         :content-transfer-encoding:mime-version:x-gm-message-state:from:to
         :cc:subject:date:message-id:reply-to;
        bh=X9JgqsrnYL4gqd6bzHldK3PBIc55az4D0y+TZMTcpFI=;
        b=J2Y39+7gMylJfO7xi7zPMVnhG3LW95gEnmY5tha311cJixRQ2MQdTe+74EeQ2otIwk
         dy6DDtB4CoxkCJqWNYBf9YgR7IkYunNevY4kEGvan4mh7rX+NDlWxA11PpcU8odFzfmR
         CAji8osjQjC0ot+l17T7Oj+LgeXkYSwHJrfd2mI1z0veBegjeo2pD5Xy60qdxuiUcz97
         Rtr+OrP0QV0hy0nZaVnh/7Ex1JdJC2BLcTHlDUAdZkAuYJwSEiA6OM1kwzKRmtjXhq5m
         mN85q95d9G/ho7MzYVKhzAol4sBJd8eqecMJOgxGTMGxNK+OQ3a+PQFTk4t4HF5rFkQj
         22UA==
X-Forwarded-Encrypted: i=1; AJvYcCWFmb2w24bMJKVbjETPYUhn2LPXbt3fXdI/7xkMHN4K3dose5LQ6+eIZSjyDM8vXJJw+Ix96cp8VW17QmXnrKITv5gET7aJ7n7g2i2QYzM=
X-Gm-Message-State: AOJu0Yywa/tZHMjtpnVszP1Idc44DRz1OTa9Fii80Ht96V+h3HK/dAoG
	L2LWI7YQhDPUqr5DftBjyqyiISrx+qDyNo5nhfHudBqT7ytW+4HXQ4wOWM6PshU=
X-Google-Smtp-Source: AGHT+IF/XoiggOr2fov/DFwO1VFgZ0smjAEzWCGAyWa8Aaf9hqheRxlSk2IXEZpCaEQ8HL5RoH5wQA==
X-Received: by 2002:a50:8e4f:0:b0:57c:6234:e95b with SMTP id 4fb4d7f45d1cf-57d4a273a55mr10984201a12.5.1719489759428;
        Thu, 27 Jun 2024 05:02:39 -0700 (PDT)
Mime-Version: 1.0
Content-Transfer-Encoding: quoted-printable
Content-Type: text/plain; charset=UTF-8
Date: Thu, 27 Jun 2024 13:02:37 +0100
Message-Id: <D2AS8XQPR3TS.TDT0A6SPW47G@cloud.com>
To: "Jan Beulich" <jbeulich@suse.com>
Cc: "Anthony PERARD" <anthony.perard@vates.tech>, "Juergen Gross"
 <jgross@suse.com>, "Xen-devel" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, =?utf-8?q?Roger_Pau_Monn=C3=A9?=
 <roger.pau@citrix.com>
Subject: Re: [PATCH v4 06/10] tools/libguest: Make setting MTRR registers
 unconditional
From: "Alejandro Vallejo" <alejandro.vallejo@cloud.com>
X-Mailer: aerc 0.17.0
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
 <2c55d486bb0c54a3e813abc66d32f321edd28b81.1719416329.git.alejandro.vallejo@cloud.com> <fe255839-f8ab-4dd1-abe8-8ec834099a8d@suse.com>
In-Reply-To: <fe255839-f8ab-4dd1-abe8-8ec834099a8d@suse.com>

On Thu Jun 27, 2024 at 10:42 AM BST, Jan Beulich wrote:
> On 26.06.2024 18:28, Alejandro Vallejo wrote:
> > This greatly simplifies a later patch that makes use of HVM contexts to upload
> > LAPIC data. The idea is to reuse MTRR setting procedure to avoid code
> > duplication. It's currently only used for PVH, but there's no real reason to
> > overcomplicate the toolstack preventing them being set for HVM too when
> > hvmloader will override them anyway.
>
> Yet then - why set them when hvmloader will do so again?

To keep the toolstack complexity tractable, essentially. This way I can send N
hypercalls (for N vCPUs) rather than 2*N, and have a single hvmcontext struct
rather than several.

In truth, though, I could simply write back the old MTRRs taken from bsp_ctx on
HVM.

> Is it even guaranteed
> to be no change in (guest) behavior to do so?

hvmloader overrides those values, so there is no change by the time the BIOS
or OVMF start running. As I mentioned before, though, I can actually upload
back the old values in the HVM case.

>
> Plus what about a guest which was configured to have the CPUID bit for MTRRs
> clear?
> I think we ought to document this as not supported for PVH (we may
> I think we ought to document this as not supported for PVH (we may

By "this" do you mean PVH _must_ have MTRR support? I would agree.

> actually choose to refuse building such a guest), but in principle the MTRR
> save/load operations should simply fail for a HVM guest in said configuration.

What use cases does that cover? With the adjustment I mention at the top,
that should be sorted. I'm wondering why we allow !mtrr at all.

> Making such a change in Xen now would, afaict, be benign to the tool stack.
> After this adjustment it would result in a perceived regression, when there
> shouldn't be any.

Fair point.

>
> Thinking about it, even for PVH it may make sense to allow CPUID.MTRR=0, as
> long as CPUID.PAT=1, thus forcing it into PAT-only mode. I think we did even
> discuss this possible configuration before.
>
> Jan

Is PAT-only an existing real HW configuration? Can't say I've seen any.

Cheers,
Alejandro



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 12:08:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 12:08:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750026.1158284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnvH-0007pi-5i; Thu, 27 Jun 2024 12:08:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750026.1158284; Thu, 27 Jun 2024 12:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMnvH-0007pb-2g; Thu, 27 Jun 2024 12:08:19 +0000
Received: by outflank-mailman (input) for mailman id 750026;
 Thu, 27 Jun 2024 12:08:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1631=N5=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMnvG-0007pV-1R
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 12:08:18 +0000
Received: from mail-lf1-x12c.google.com (mail-lf1-x12c.google.com
 [2a00:1450:4864:20::12c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ef6e02e7-347d-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 14:08:16 +0200 (CEST)
Received: by mail-lf1-x12c.google.com with SMTP id
 2adb3069b0e04-52ce6c93103so5799365e87.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 05:08:16 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 2adb3069b0e04-52e71305ebbsm181177e87.126.2024.06.27.05.08.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 05:08:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef6e02e7-347d-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719490096; x=1720094896; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=GzMk4hDJpHjtb6UvNw+sjRlOkUSAlQ4OZLHK0CoE3Vg=;
        b=QyZHKYsh+PMgchWYvvHgXoJ1iizkeg65cHaVYlVsaF4FjeW1FrAhWtzH19BqCqe+P/
         DFifLFrYid8ZYOGsiLlB0rt3h05hFEhbcBLvhkugCwNvkV4nWZkvtnl2ZwObLMT6Uu2G
         yLVCAR+CNREgqPzi3EiTWj7Dv4kn1XjcapnxCQCFZ/ktXfOLciJIPpUcTtSEe+QqMoNU
         eN8Q/fjJsTf12c8s7NUCW7ulWpLkOE4VLLSCTsq0h4MrpVCgaZaTIHt3nYjo6EWhYbD4
         9h/NcXQXkyaw539aSSge/6FMfKqePvomF+mRNYHQFmM1z1+ttn6oef46ZKa2bRu/SyuY
         /p/Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719490096; x=1720094896;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=GzMk4hDJpHjtb6UvNw+sjRlOkUSAlQ4OZLHK0CoE3Vg=;
        b=m470KSZxC0nlZET+b6d+ZsF6t6TtKGEomuNkV4wzba64dV+D5O0ZFtIUe+JIVFGKTw
         h7XLytX2ogUTOxHvSsWTvV9lK5gt4lR2lNv1IcnpN2Z1raofrapnR4ETjkC4L/Ugv4Ne
         C2w5qTDnHmcGFAToXMpqCn02jecmVZ/JEiRzIYs7vGHYM+jwo/v7ZS/y2wIa8tAnGMKB
         +2X/DWR5pOJwXA1c6vvSwfqzzGLFbHMt/fFr832yaSofTIvG5ZP0bowM0kN8v0XYG0Gb
         f8KTGwUtBsNYjwXxbp7cRguBNMogD06eITeAd5mefv1J/Ngd3J1qoxU6UwCscqO+WSp/
         dZew==
X-Gm-Message-State: AOJu0YzCnwQrAFtmv6179y3afKIyDrE/jtL/dhNziBVcR1H0eC8F//lb
	29ZHcOuDsUFpBagVYvFCVw2j2s0o+ziKUBzJGfXJ1iz7V8NQxWSmksopKJdt
X-Google-Smtp-Source: AGHT+IH+zjlweNDC61QjtpvFb+Rtcnx6PdOFlfao8ffZ6e6DN7NGWk+cQvn6EIem56xO9AHTVZ0owg==
X-Received: by 2002:a19:2d0e:0:b0:529:b718:8d00 with SMTP id 2adb3069b0e04-52ce182bcffmr9509144e87.8.1719490095260;
        Thu, 27 Jun 2024 05:08:15 -0700 (PDT)
Message-ID: <e8ad8849f10ab8658b84ce18670549ef6314ae4e.camel@gmail.com>
Subject: Re: [PATCH for-4.19? v13 0/10]  Enable build of full Xen for RISC-V
From: oleksii.kurochko@gmail.com
To: xen-devel@lists.xenproject.org, Jan Beulich <jbeulich@suse.com>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>,  Bertrand Marquis <bertrand.marquis@arm.com>, Michal
 Orzel <michal.orzel@amd.com>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Shawn Anastasio
 <sanastasio@raptorengineering.com>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
 <roger.pau@citrix.com>, Alistair Francis <alistair.francis@wdc.com>, Bob
 Eshleman <bobbyeshleman@gmail.com>, Connor Davis <connojdavis@gmail.com>
Date: Thu, 27 Jun 2024 14:08:14 +0200
In-Reply-To: <cover.1719319093.git.oleksii.kurochko@gmail.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

I saw a message in the xen-devel channel:

```
erm...  We've got a problem.
https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/7185148004
very clearly failed with a panic(), but reported success out to Gitlab
```

However, I couldn't determine whether this is related to the patches in this
series or to patches that were merged earlier.
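That "panicked but reported success" failure mode usually means the job's exit
status doesn't reflect what the serial log shows, so one common mitigation is
an explicit log scan whose exit status gates the job. A hedged sketch, not the
actual CI script; the log contents and panic markers here are hypothetical:

```shell
#!/bin/sh
set -eu

# Fail (non-zero) if the guest/hypervisor log contains a panic marker,
# regardless of what the test harness itself reported.
check_log() {
    if grep -Eq 'Xen call trace|Panic on CPU' "$1"; then
        echo "panic detected in $1"
        return 1
    fi
    echo "no panic in $1"
}

# Demo with a temporary log file containing a synthetic panic line:
log=$(mktemp)
echo '(XEN) Panic on CPU 0:' > "$log"
if check_log "$log"; then
    echo "unexpected: scan passed"
else
    echo "job would fail"
fi
rm -f "$log"
```

Running such a check as the last step of the job (or piping the console
through it) makes the pipeline status match the log.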

I would like to understand what needs to be done to be sure that this
patch series can be merged.

Thanks in advance.

Best regards,
 Oleksii



On Tue, 2024-06-25 at 15:51 +0200, Oleksii Kurochko wrote:
> ***
> I think that I need to write a separate email requesting approval for
> this patch series to be included in Xen 4.19. Most of the patches are
> RISC-V specific, so there is a low risk of breaking anything else.
> There is only one patch that touches common or arch-specific files,
> and it doesn't introduce any functional changes.
> Since I can't approve it myself, I am asking for someone else to do so.
> ***
>
> This patch series performs all of the additions necessary to drop the
> build overrides for RISCV and enable the full Xen build. Except in
> cases where compatible implementations already exist (e.g. atomic.h
> and bitops.h), the newly added definitions are simple.
>
> The patch series is based on the following patch series:
>  - [PATCH 3/3] xen/ppc: Avoid using the legacy
>    __read_mostly/__ro_after_init definitions [1]
>
> The link to the branch with rebased patches on top of [1] can be
> found here:
>  https://gitlab.com/xen-project/people/olkur/xen/-/commits/riscv-full-xen-build-v13
>
> [1]
> https://lore.kernel.org/xen-devel/20240621201928.319293-4-andrew.cooper3@citrix.com/
>
> ---
> Changes in V13:
>  - Patch was merged to staging:
>    - [PATCH v12 ] xen/riscv: Update Kconfig in preparation for a full
>  - Rebase on top of the current staging.
>  - Update cover letter message
>  - Added 2 new patches (patches 8 and 9) which are not necessary for
>    the testing environment we have in CI but improve compatibility
>    with older gcc and binutils.
>  - Added patch 10 as a clean-up of [PATCH v12 2/8] xen: introduce
>    generic non-atomic test_*bit() for x86.
>  - Drop [PATCH v12 4/8] xen/riscv: add definition of __read_mostly as
>    it is now defined generically for all architectures.
>  - Added the patch [PATCH v13 07/10] xen/common: fix build issue for
>    common/trace.c to resolve a compilation issue for RISC-V after
>    rebasing on top of current staging.
>  - Other changes are specific to individual patches; please look at
>    the changes for each patch.
> ---
> Changes in V12:
>  - Rebase the patch series on top of [1] mentioned above.
>  - Update the cover letter message.
>  - "[PATCH v11 3/9] xen/bitops: implement fls{l}() in common logic"
>    was dropped as it is a part of patch series [1] mentioned above.
>  - Other changes are specific to individual patches; please look at
>    the changes for each patch.
> ---
> Changes in V11:
>  - Patch was merged to staging:
>    - [PATCH v10 05/14] xen/riscv: introduce cmpxchg.h
>    - [PATCH v10 06/14] xen/riscv: introduce atomic.h
>    - [PATCH v10 07/14] xen/riscv: introduce monitor.h
>    - [PATCH v10 09/14] xen/riscv: add required things to current.h
>    - [PATCH v10 11/14] xen/riscv: introduce vm_event_*() functions
>  - Other changes are specific to individual patches; please look at
>    the changes for each patch.
> ---
> Changes in V10:
>  - Patch was merged to staging:
>    - [PATCH v9 04/15] xen/bitops: put __ffs() into linux compatible header
>  - Other changes are specific to individual patches; please look at
>    the changes for each patch.
> ---
> Changes in V9:
>  - Patch was merged to staging:
>    - [PATCH v8 07/17] xen/riscv: introduce io.h
>    - [PATCH v7 14/19] xen/riscv: add minimal stuff to page.h to build full Xen
>  - Other changes are specific to individual patches; please look at
>    the changes for each patch.
> ---
> Changes in V8:
>  - Patch was merged to staging:
>    - [PATCH v7 01/19] automation: introduce fixed randconfig for RISC-V
>    - [PATCH v7 03/19] xen/riscv: introduce extension support check by compiler
>  - Other changes are specific to individual patches; please look at
>    the changes for each patch.
>  - Update the commit message:
>    - drop the dependency on the STATIC_ASSERT_UNREACHABLE() implementation.
>    - Add a suggestion to merge arch-specific changes related to __read_mostly.
> ---
> Changes in V7:
>  - Patch was merged to staging:
>    [PATCH v6 15/20] xen/riscv: add minimal stuff to processor.h to build full Xen.
>  - Other changes are specific to individual patches; please look at
>    the changes for each patch.
> ---
> Changes in V6:
>  - Update the cover letter message: drop already merged dependencies
>    and add a new one.
>  - Patches were merged to staging:
>    - [PATCH v5 02/23] xen/riscv: use some asm-generic headers (even v4
>      was merged to the staging branch, I just hadn't applied changes
>      on top of the latest staging branch)
>    - [PATCH v5 03/23] xen/riscv: introduce nospec.h
>    - [PATCH v5 10/23] xen/riscv: introduce acquire, release and full barriers
>  - Introduce new patches:
>    - xen/riscv: introduce extension support check by compiler
>    - xen/bitops: put __ffs() and ffz() into linux compatible header
>    - xen/bitops: implement fls{l}() in common logic
>  - The following patches were dropped:
>    - drop some patches related to bitops operations as they were
>      introduced in another patch series [...]
>    - introduce new version for generic __ffs(), ffz() and fls{l}().
>  - Merge a patch from the patch series "[PATCH v9 0/7] Introduce
>    generic headers" into this patch series, as only one patch was
>    left in the generic headers patch series and it is more about RISC-V.
>  - Other changes are specific to individual patches; please look at
>    each patch.
> ---
> Changes in V5:
>  - Update the cover letter as one of the dependencies was merged to staging.
>  - Introduced asm-generic for atomic ops and separate patches for
>    asm-generic bit ops.
>  - Moved fence.h to a separate patch to deal with some patches'
>    dependencies on fence.h.
>  - Patches were dropped as they were merged to staging:
>    * [PATCH v4 03/30] xen: add support in public/hvm/save.h for PPC and RISC-V
>    * [PATCH v4 04/30] xen/riscv: introduce cpufeature.h
>    * [PATCH v4 05/30] xen/riscv: introduce guest_atomics.h
>    * [PATCH v4 06/30] xen: avoid generation of empty asm/iommu.h
>    * [PATCH v4 08/30] xen/riscv: introduce setup.h
>    * [PATCH v4 10/30] xen/riscv: introduce flushtlb.h
>    * [PATCH v4 11/30] xen/riscv: introduce smp.h
>    * [PATCH v4 15/30] xen/riscv: introduce irq.h
>    * [PATCH v4 16/30] xen/riscv: introduce p2m.h
>    * [PATCH v4 17/30] xen/riscv: introduce regs.h
>    * [PATCH v4 18/30] xen/riscv: introduce time.h
>    * [PATCH v4 19/30] xen/riscv: introduce event.h
>    * [PATCH v4 22/30] xen/riscv: define an address of frame table
>  - Other changes are specific to individual patches; please look at
>    each patch.
> ---
> Changes in V4:
>  - Update the cover letter message: new patch series dependencies.
>  - Some patches were merged to staging, so they were dropped in this
>    patch series:
>      [PATCH v3 09/34] xen/riscv: introduce system.h
>      [PATCH v3 18/34] xen/riscv: introduce domain.h
>      [PATCH v3 19/34] xen/riscv: introduce guest_access.h
>  - Sent out of this patch series:
>      [PATCH v3 16/34] xen/lib: introduce generic find next bit operations
>  - [PATCH v3 17/34] xen/riscv: add compilation of generic find-next-bit.c
>    was dropped as CONFIG_GENERIC_FIND_NEXT_BIT was dropped.
>  - All other changes are specific to a specific patch.
> ---
> Changes in V3:
>  - Update the cover letter message
>  - The following patches were dropped as they were merged to staging:
>      [PATCH v2 03/39] xen/riscv: introduce asm/byteorder.h
>      [PATCH v2 04/39] xen/riscv: add public arch-riscv.h
>      [PATCH v2 05/39] xen/riscv: introduce spinlock.h
>      [PATCH v2 20/39] xen/riscv: define bug frame tables in xen.lds.S
>      [PATCH v2 34/39] xen: add RISCV support for pmu.h
>      [PATCH v2 35/39] xen: add necessary headers to common to build full Xen for RISC-V
>  - New patches were introduced instead of the following:
>      [PATCH v2 10/39] xen/riscv: introduce asm/iommu.h
>      [PATCH v2 11/39] xen/riscv: introduce asm/nospec.h
>  - remove "asm/" for commit messages which start with "xen/riscv:"
>  - code style updates.
>  - add emulation of {cmp}xchg_* for 1 and 2 byte types.
>  - code style fixes.
>  - add SPDX and footer for the newly added headers.
>  - introduce generic find-next-bit.c.
>  - some other minor changes (for details please see each patch).
> ---
> Changes in V2:
>  - Drop the following patches as they are part of [2]:
>      [PATCH v1 06/57] xen/riscv: introduce paging.h
>      [PATCH v1 08/57] xen/riscv: introduce asm/device.h
>      [PATCH v1 10/57] xen/riscv: introduce asm/grant_table.h
>      [PATCH v1 12/57] xen/riscv: introduce asm/hypercall.h
>      [PATCH v1 13/57] xen/riscv: introduce asm/iocap.h
>      [PATCH v1 15/57] xen/riscv: introduce asm/mem_access.h
>      [PATCH v1 18/57] xen/riscv: introduce asm/random.h
>      [PATCH v1 21/57] xen/riscv: introduce asm/xenoprof.h
>      [PATCH v1 24/57] xen/riscv: introduce asm/percpu.h
>      [PATCH v1 29/57] xen/riscv: introduce asm/hardirq.h
>      [PATCH v1 33/57] xen/riscv: introduce asm/altp2m.h
>      [PATCH v1 38/57] xen/riscv: introduce asm/monitor.h
>      [PATCH v1 39/57] xen/riscv: introduce asm/numa.h
>      [PATCH v1 42/57] xen/riscv: introduce asm/softirq.h
>  - xen/lib.h in most cases was changed to xen/bug.h as mostly the
>    functionality of bug.h is used.
>  - align arch-riscv.h with Arm's version of it.
>  - change the Author of the commit introducing asm/atomic.h.
>  - update some definitions from spinlock.h.
>  - code style changes.
> ---
>
>
> Oleksii Kurochko (10):
>   xen: introduce generic non-atomic test_*bit()
>   xen/riscv: introduce bitops.h
>   xen/riscv: add minimal stuff to mm.h to build full Xen
>   xen/riscv: add minimal amount of stubs to build full Xen
>   xen/riscv: enable full Xen build
>   xen/README: add compiler and binutils versions for RISC-V64
>   xen/common: fix build issue for common/trace.c
>   xen/riscv: change .insn to .byte in cpu_relax()
>   xen/riscv: introduce ANDN_INSN
>   xen/x86: drop constant_test_bit() in asm/bitops.h
>
>  README                                 |   3 +
>  xen/arch/arm/include/asm/bitops.h      |  69 ----
>  xen/arch/ppc/include/asm/bitops.h      |  54 ----
>  xen/arch/ppc/include/asm/page.h        |   2 +-
>  xen/arch/ppc/mm-radix.c                |   2 +-
>  xen/arch/riscv/Makefile                |  17 +-
>  xen/arch/riscv/arch.mk                 |   4 -
>  xen/arch/riscv/early_printk.c          | 168 ----------
>  xen/arch/riscv/include/asm/bitops.h    | 137 ++++++++
>  xen/arch/riscv/include/asm/cmpxchg.h   |  16 +-
>  xen/arch/riscv/include/asm/mm.h        | 235 ++++++++
>  xen/arch/riscv/include/asm/processor.h |   2 +-
>  xen/arch/riscv/mm.c                    |  43 ++-
>  xen/arch/riscv/setup.c                 |  10 +-
>  xen/arch/riscv/stubs.c                 | 418 +++++++++++++++++++++++++
>  xen/arch/riscv/traps.c                 |  25 ++
>  xen/arch/x86/include/asm/bitops.h      |  39 +--
>  xen/common/trace.c                     |   1 +
>  xen/include/xen/bitops.h               | 182 +++++++++++
>  19 files changed, 1096 insertions(+), 331 deletions(-)
>  create mode 100644 xen/arch/riscv/include/asm/bitops.h
>  create mode 100644 xen/arch/riscv/stubs.c
>



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 12:22:51 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 12:22:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.749968.1158294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMo9E-0002Gj-AT; Thu, 27 Jun 2024 12:22:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 749968.1158294; Thu, 27 Jun 2024 12:22:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMo9E-0002Gc-7o; Thu, 27 Jun 2024 12:22:44 +0000
Received: by outflank-mailman (input) for mailman id 749968;
 Thu, 27 Jun 2024 11:08:43 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ztCX=N5=gmail.com=gaojunhao0504@srs-se1.protection.inumbo.net>)
 id 1sMmzb-00067W-6g
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 11:08:43 +0000
Received: from mail-qv1-xf2f.google.com (mail-qv1-xf2f.google.com
 [2607:f8b0:4864:20::f2f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 9b7ee4c8-3475-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 13:08:41 +0200 (CEST)
Received: by mail-qv1-xf2f.google.com with SMTP id
 6a1803df08f44-6b5052defa6so37367386d6.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 04:08:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b7ee4c8-3475-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719486519; x=1720091319; darn=lists.xenproject.org;
        h=content-transfer-encoding:to:subject:message-id:date:from
         :mime-version:from:to:cc:subject:date:message-id:reply-to;
        bh=l6MzYtcaVRbqw2Lc3Ihw02cJzZCenCF6o/X+KRohjS8=;
        b=S2NbIdSStRetCki96I8oYupgt9/A7q7lko1hCyeei3giNB/ON8UUKVp7PluavVhrcr
         EcLRQmDRKdvYFYIBQR2KwWzCjLryuvz4YRSn8O/VXGeRTIpMkDlbNvpljCQdNDrT//lo
         OHBHyNjpFTeklJNnbG2zvkoOSXeZrihElmXTTR+qAxNXoISBDo/jlQFMOAzSTCfQAuID
         Br4UsualT6P0Pc64MU1e0BAiSIPy4Oyc9jLCgCVUhMqle8fTZGDbV4LKsv0MH6QkJkdD
         hro0PLx4kE5k0+qOPfw8xJr4y66e64n+0P9c0zvw577f5dSs7S/jc4+PZ5AyIoZWinU2
         S1zw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719486519; x=1720091319;
        h=content-transfer-encoding:to:subject:message-id:date:from
         :mime-version:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=l6MzYtcaVRbqw2Lc3Ihw02cJzZCenCF6o/X+KRohjS8=;
        b=NUQ8Ns5tNJVbBeYcmADwYnvybQ+6Ky27zykPjP33zq2S7LGBYhcsJHhz/KFkdwi8kG
         BBt/gHk7PWF6GtXvq9f7caGgoNjjXQXkPVzaNbsYPj0p+bOa2qM6DmmafqhnE5bOiIiS
         zJqmfQ6D8hzQODRGQU13uSQd73j1UCzTJMLQ0UcKyFrpMh/oLdhjfkyQxeOn13hk72p6
         7lJqTWPKQ29Jz0Z+vcyKa9/MidDnV292WjzcSB3LNKD4LyEbUpU80EmjA9vSYSZxl0qG
         i1WFLODZgoxdosNYTKIenajRyuXIZB3rXlTa7h6qURPy6G80FTKXnWS25grZgDMHwzTa
         wxsQ==
X-Gm-Message-State: AOJu0YwO+wy0luS9IdL7QQvSph3XjCPYpTKV+OlJVDmBcdWw8SuN94tJ
	spT0p7ll8XRmDJuDIEAQz9mZFMTnzIXlmfjFRNA6q89Bjl9Qk4t3/0eG9Gtnp4g0DjX+r+3Nasf
	3LgOSQNaaX/EOslsd8vsq2KzAnXc71qk6Qis=
X-Google-Smtp-Source: AGHT+IHdUOmaHYaKVA3cbK1i58+UbUAuV0eBzRUzFbaY84Bbisu4/lw+7yNMVOenNTVW3+OcOS+5e/LyI0R2MqjtS1Y=
X-Received: by 2002:a05:6214:240d:b0:6b5:4125:415d with SMTP id
 6a1803df08f44-6b54125430fmr193404376d6.13.1719486518841; Thu, 27 Jun 2024
 04:08:38 -0700 (PDT)
MIME-Version: 1.0
From: =?UTF-8?B?6auY6ZKn5rWp?= <gaojunhao0504@gmail.com>
Date: Thu, 27 Jun 2024 19:08:28 +0800
Message-ID: <CAOJPZgnoxUR-58_2DJGwYkW126nbTtC=kWrZ_gCJGcBiEcon7Q@mail.gmail.com>
Subject: domU will hang after reboot in dom0-less arch on arm
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi,
I use u-boot, Xen and QEMU to boot a domU, then execute the "reboot"
command in the domU, and the domU hangs. Does anyone know about this
issue, and is this feature supported or not? Thanks.

##################################### hang info ############################################
Welcome to SNPS Mini AArch64 by Buildroot
mini-aarch64 login: root
#
#
# reboot
# Stopping network: ifdown: interface eth0 not configured
OK
Saving random seed: OK
Stopping klogd: OK
Stopping syslogd: OK
umount: devtmpfs busy - remounted read-only
umount: can't unmount /: Invalid argument
The system is going down NOW!
Sent SIGTERM to all processes
Sent SIGKILL to all processes
Requesting system reboot
[   47.461948] Flash device refused suspend due to active operation (state 20)
[   47.466276] Flash device refused suspend due to active operation (state 20)
[   47.472966] reboot: Restarting system
(XEN) Hardware Dom0 shutdown: rebooting machine


U-Boot 2024.01 (Jun 04 2024 - 19:56:53 +0800)

DRAM:  4 GiB
Core:  51 devices, 14 uclasses, devicetree: board
Flash: 64 MiB
Loading Environment from Flash... *** Warning - bad CRC, using default
environment

In:    serial,usbkbd
Out:   serial,vidconsole
Err:   serial,vidconsole
No working controllers found
Net:   eth0: virtio-net#32
starting USB...
No working controllers found
Hit any key to stop autoboot:  0
Scanning for bootflows in all bootdevs
Seq  Method       State   Uclass    Part  Name                      Filename
---  -----------  ------  --------  ----  ------------------------  ----------------
Scanning global bootmeth 'efi_mgr':
No EFI system partition
No EFI system partition
Failed to persist EFI variables
Missing TPMv2 device for EFI_TCG_PROTOCOL
Missing RNG device for EFI_RNG_PROTOCOL
Scanning bootdev 'fw-cfg@9020000.bootdev':
fatal: no kernel available
No working controllers found
scanning bus for devices...
BOOTP broadcast 1
##########################################################################################

software version:
u-boot: v2024.01
xen: 4.18.0
qemu: 8.1.3

qemu command:
qemu-system-aarch64 \
-machine virt,gic_version=3 \
-machine virtualization=true \
-cpu cortex-a57 \
-machine type=virt \
-m 4096 \
-smp 4 \
-bios u-boot.bin \
-device loader,file=xen,force-raw=on,addr=0x49000000 \
-device loader,file=virt-gicv3.dtb,addr=0x44000000 \
-device loader,file=Image,addr=0x54000000 \
-device loader,file=rootfs.img.gz,addr=0x59000000 \
-device loader,file=Image,addr=0x64000000 \
-device loader,file=rootfs.img.gz,addr=0x69000000 \
-nographic \
-chardev socket,id=qemu-monitor,host=localhost,port=7777,server=on,wait=off,telnet=on \
-mon qemu-monitor,mode=readline

Thanks,
Junhao Gao


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 12:38:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 12:38:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750056.1158304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMoOR-0004NP-LI; Thu, 27 Jun 2024 12:38:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750056.1158304; Thu, 27 Jun 2024 12:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMoOR-0004NI-Ia; Thu, 27 Jun 2024 12:38:27 +0000
Received: by outflank-mailman (input) for mailman id 750056;
 Thu, 27 Jun 2024 12:38:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q1FW=N5=bounce.vates.tech=bounce-md_30504962.667d5d3d.v1-3cff0952a75c489c975df7cb677a1890@srs-se1.protection.inumbo.net>)
 id 1sMoOQ-0004Lu-EU
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 12:38:26 +0000
Received: from mail145-25.atl61.mandrillapp.com
 (mail145-25.atl61.mandrillapp.com [198.2.145.25])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 240d2180-3482-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 14:38:23 +0200 (CEST)
Received: from pmta06.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail145-25.atl61.mandrillapp.com (Mailchimp) with ESMTP id
 4W8ylx5vDQz35hXmQ
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 12:38:21 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 3cff0952a75c489c975df7cb677a1890; Thu, 27 Jun 2024 12:38:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 240d2180-3482-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719491901; x=1719752401;
	bh=HHUr0R2A0PLs0r7i6fQCR8T6CoP2tkG7jChUGBbfgYg=;
	h=From:Subject:Message-Id:To:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=G6jbAW9WlNRWAHfDZqmM5S+4LRmsLRg7OXqumiPXUkkSbVgwvZaCyp3XI3fWCxuON
	 oHRxkOLACemBDbtfITJUwJ/gyyaBhKpVvK1vFQ+hlbjaFUGtz3VgvNWLlCYNa7Y9lM
	 PtUl7W0tOM9v9iPpIL6YzU0NpRYbsEweIPpr58xaqdeSiJgcJjI7k3UGgVWVEhFXA8
	 gC+pIQ86xqkVLg6D+IuAg8jGh2Z+x+wBmQ+lNLjtxRRefcBcyF2enxIZq5jJSSaXiH
	 aMCiUmHhK0nCNgPWDrwr6ZSoVDYP/y8/ozg2Ovc1aAKiFhWSv3xa+0+R5IBCZ2jyGw
	 WMfUhR4HOU2NQ==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719491901; x=1719752401; i=teddy.astie@vates.tech;
	bh=HHUr0R2A0PLs0r7i6fQCR8T6CoP2tkG7jChUGBbfgYg=;
	h=From:Subject:Message-Id:To:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=m2XQwZjnd3/7fTrodbVAX6QzYfGqN4+YFVX6g6VP4DNs2F0bbUlp0NJL3dTyeDCe1
	 Fj2kbSNFMbLDmD94Jvb11orC4Vhy6syJp/TA19nqW+mO4krxTKYmp/4AaYgjn67Taf
	 j2g9yPy62l1A/fmFjTIa4OFkQoVhpR476Vb3d1dJ0i57cG1PWrAX4sgVNod5tzxxpg
	 AgQvWaQeZYb+lOdxPAswd3jEqvcJzFcQADmmXXN9nKRbulQ56xlhhy113siQs4HtTl
	 bQjmUtpMdGQDTxjx57XS7t+fcD2XZ/3FFTzBoepFIeZYYnWCWT6odjQVcxIvRkq6ds
	 T/Uv1M51nF/Zw==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?Re:=20Disaggregated=20(Xoar)=20Dom0=20Building?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719491899276
Message-Id: <364e5058-ce40-482d-acf3-37f70266fdb3@vates.tech>
To: Lonnie Cumberland <lonnie@outstep.com>, Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <376f0fe4-4ae8-461d-87f2-0fa2e6913689@outstep.com> <59d67a78-14a0-42ac-b0dc-3d75c109f767@suse.com> <be292bcf-d77f-44ba-b29a-b1608586647b@outstep.com>
In-Reply-To: <be292bcf-d77f-44ba-b29a-b1608586647b@outstep.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.3cff0952a75c489c975df7cb677a1890?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240627:md
Date: Thu, 27 Jun 2024 12:38:21 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hi Lonnie,

On 27/06/2024 at 11:33, Lonnie Cumberland wrote:
> I am working towards is to have everything as a RAM-based
> ultra-lightweight thin hypervisor.  I looked over ACRN, the NOVA
> Microhypervisor (Hedron, Bedrock, Udo), Rust-Shyper,
> Bareflank-MicroV, and many other development efforts, but it seems
> that Xen is the most advanced for my purposes here.
> 

You can have a disk-less (or ramdisk-based) distro supporting Xen with 
Alpine Linux (its Xen flavour). It still uses a Dom0 with all of its 
usual responsibilities, though.

>>> Currently, I am investigating and researching the ideas of 
>>> "Disaggregating" Dom0, and have the Xoar Xen patches ("Breaking Up is 
>>> Hard to Do: Security and Functionality in a Commodity Hypervisor", 
>>> 2011) available, which were developed against version 22155 of 
>>> xen-unstable. The Linux patches are against Linux with pvops 
>>> 2.6.31.13 and were developed on a standard Ubuntu 10.04 install. My 
>>> effort would also be to update these patches.
>>>
>>> I have been able to locate the Xen "Dom0 Disaggregation" wiki page 
>>> (https://wiki.xenproject.org/wiki/Dom0_Disaggregation) and am reading 
>>> up on things now, but wanted to ask the developers list about any 
>>> experience you may have had in this area, since the research objective 
>>> is to integrate Xoar with the latest Xen 4.20, if possible, and to 
>>> take it further to basically eliminate Dom0 altogether, with 
>>> individual Mini-OS or unikernel "service and driver VMs" instead, 
>>> loaded at UEFI boot time.

The latest effort I have in mind regarding this idea of moving things 
out of Dom0 is QEMU as a unikernel (using Unikraft). There were some 
discussions on this on Matrix and at Xen Summit, and it is currently a 
work in progress on the Unikraft side.

Teddy


Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 12:59:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 12:59:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750067.1158313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMoiv-0007dG-DK; Thu, 27 Jun 2024 12:59:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750067.1158313; Thu, 27 Jun 2024 12:59:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMoiv-0007d9-AT; Thu, 27 Jun 2024 12:59:37 +0000
Received: by outflank-mailman (input) for mailman id 750067;
 Thu, 27 Jun 2024 12:59:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMoit-0007cz-Vy; Thu, 27 Jun 2024 12:59:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMoit-0001l2-Qh; Thu, 27 Jun 2024 12:59:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMoit-0006FX-Fg; Thu, 27 Jun 2024 12:59:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMoit-00078r-C6; Thu, 27 Jun 2024 12:59:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=MrW2zmxpQkRV2Z72szZ1wo5xqKuRxVu1yltFlBIuP64=; b=w3rFKNlrhVNv7+l2F+l7QyGVkN
	CRkosNR20uKYCOiEiJVIRUKYnWIU5KolsrgpqWtyBc4cokPKU9GQy7cTlEP7vmLNEfq2Gl51QVL6d
	+B8SEbeRTTM0e9HV3JTJYNRYwH98sltw+1nUcEtlB4FDPEXIIra9iwBp5fF2BqjwsJOI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186529-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186529: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ecadd22a3de8ce7f1799e85af6f1e37c06c57049
X-Osstest-Versions-That:
    xen=4712e3b3769e6c03e0aaaa8179395f0fb7b141cc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Jun 2024 12:59:35 +0000

flight 186529 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186529/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186517
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186517
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186517
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186517
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186517
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186517
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ecadd22a3de8ce7f1799e85af6f1e37c06c57049
baseline version:
 xen                  4712e3b3769e6c03e0aaaa8179395f0fb7b141cc

Last test of basis   186517  2024-06-26 14:40:46 Z    0 days
Testing same since   186529  2024-06-27 03:48:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@cloud.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4712e3b376..ecadd22a3d  ecadd22a3de8ce7f1799e85af6f1e37c06c57049 -> master


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 13:01:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 13:01:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750077.1158335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMokt-0000l5-SE; Thu, 27 Jun 2024 13:01:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750077.1158335; Thu, 27 Jun 2024 13:01:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMokt-0000ky-Pg; Thu, 27 Jun 2024 13:01:39 +0000
Received: by outflank-mailman (input) for mailman id 750077;
 Thu, 27 Jun 2024 13:01:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mtIW=N5=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sMoks-0000ks-Eb
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 13:01:38 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 634e4cd6-3485-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 15:01:37 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id
 4fb4d7f45d1cf-57a30dbdb7fso1901412a12.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 06:01:37 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a729d7c957esm57170966b.197.2024.06.27.06.01.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 06:01:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 634e4cd6-3485-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719493296; x=1720098096; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=a+kItBAO12x8RhBAOJkTNenPSO8QMIo1tK5fVas/ZLI=;
        b=J3UskVdH/TUiwAefh8Upkeclh6YxAaGNuTijZXbhRG+XMJTKdOkhN462kO/VZWN4jL
         ySTygvoupLif2Ot691GhFa5SVU9QLI3WVVhXEuFte3H9KahfmiNpp7QRAL+qVPRIO8UH
         zy/RndJST0eRygxt9+uKB+wOUNmTD0HxEHBrU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719493296; x=1720098096;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=a+kItBAO12x8RhBAOJkTNenPSO8QMIo1tK5fVas/ZLI=;
        b=LuxHVCGgXFqCmoGEIDHg3l1167/bME0P6OslTU5YYD4CJHZ6qB3zusNgKuAOWtKaHV
         /8boMagdbGb7ndy0h0si03nxPPzTuyrLpscZVkZMR3PId9fNfg24ADoga0YkkrX3kEF5
         hxLAbOrbOew6oVkQju3IlSx13ykrozH+awOCv+wKsQ3J0RpY8UQow+rdl1qGRT5WRAjA
         HBBltnps0nIlbT+ezqT1ra0pQIhfvMAJxahftznACOFYBzfiPbFUdS4rTgamISH7TpZz
         UU9Ovv4HmzHgtFXGvDQAPOt4HOJa7r4j0QQ7zYEWN+Y3MPb4X04A3Lm1dt+ZaoKU77y+
         UH8g==
X-Gm-Message-State: AOJu0Yxp6h6efBJrPrvL4qH0/VuQn2OdzA1KI62+3bTYFDxXAa4H0V0O
	6nOIu6gDJfDN/mKaNcea6Bww1mxjx7SuYuUdaC2jqU4LGM3Nj7ZmfEETIjBjvMvwHhSWMEHKZTV
	e5Dw=
X-Google-Smtp-Source: AGHT+IGooD6wGGQl3UGCys3ZS9wxE2ty/uX483AoPy48a58JnD9AzLZ7ghUdBjZnC/PwO/ZBhAhOGA==
X-Received: by 2002:a17:906:c243:b0:a6f:569b:3ff0 with SMTP id a640c23a62f3a-a7242c39c0cmr908875166b.26.1719493296278;
        Thu, 27 Jun 2024 06:01:36 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19] tools/dombuilder: Correct the length calculation in xc_dom_alloc_segment()
Date: Thu, 27 Jun 2024 14:01:34 +0100
Message-Id: <20240627130134.1006059-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

xc_dom_alloc_segment() is passed a size in bytes, calculates a size in pages
from it, then fills in the new segment information with a bytes value
re-calculated from the number of pages.

This causes the module information given to the guest (Multiboot, or PVH) to
have incorrect sizes; specifically, sizes rounded up to the next page
boundary.

This in turn is problematic for Xen.  When Xen finds a gzipped module, it
peeks at the end metadata to judge the decompressed size, which is a -4
backreference from the reported end of the module.

Fill in seg->vend using the correct number of bytes.

Fixes: ea7c8a3d0e82 ("libxc: reorganize domain builder guest memory allocator")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony@xenproject.org>
CC: Juergen Gross <jgross@suse.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>

For 4.19: This was discovered when trying to test Daniel's gzip cleanup for
Hyperlaunch.  It's a subtle bug, hidden inside a second bug which isn't
appropriate content for 4.20.
---
 tools/libs/guest/xg_dom_core.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_dom_core.c b/tools/libs/guest/xg_dom_core.c
index c4f4e7f3e27a..f5521d528be1 100644
--- a/tools/libs/guest/xg_dom_core.c
+++ b/tools/libs/guest/xg_dom_core.c
@@ -601,7 +601,7 @@ int xc_dom_alloc_segment(struct xc_dom_image *dom,
     memset(ptr, 0, pages * page_size);
 
     seg->vstart = start;
-    seg->vend = dom->virt_alloc_end;
+    seg->vend = start + size;
 
     DOMPRINTF("%-20s:   %-12s : 0x%" PRIx64 " -> 0x%" PRIx64
               "  (pfn 0x%" PRIpfn " + 0x%" PRIpfn " pages)",
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 13:38:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 13:38:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750094.1158345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMpKQ-0005I9-KB; Thu, 27 Jun 2024 13:38:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750094.1158345; Thu, 27 Jun 2024 13:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMpKQ-0005I2-Hf; Thu, 27 Jun 2024 13:38:22 +0000
Received: by outflank-mailman (input) for mailman id 750094;
 Thu, 27 Jun 2024 13:38:20 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h0yt=N5=outstep.com=lonnie@srs-se1.protection.inumbo.net>)
 id 1sMpKO-0005Hq-Ff
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 13:38:20 +0000
Received: from mail.outstep.net (mail.outstep.net [213.136.84.29])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 830264aa-348a-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 15:38:18 +0200 (CEST)
Received: from [127.0.0.1] (localhost [127.0.0.1]) by localhost (Mailerdaemon)
 with ESMTPA id 6B575234103E; Thu, 27 Jun 2024 15:38:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 830264aa-348a-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=outstep.com; s=dkim;
	t=1719495495; h=from:subject:date:message-id:to:cc:mime-version:content-type:
	 content-transfer-encoding:in-reply-to;
	bh=5W9ntMxDTfwHRffCk9BTmg8ZcDV+azGtDS5fbr1iB6A=;
	b=ZJhezTfPTe51m/taXqjIWwV8b+MoWj4FXYXm0xlyIzIvnZzgoiHC0XS4rxNJRlQ7MQAfd+
	6f/vGDkmTo1kqxld5nkN+mV+3SBkdTt7PpDg2yneALGcso/mnxQlHDZoPNQhXVNdYef4C4
	rkus8rAiEX5O35sqzhtudRXPZ701GXsZ8ef5HwK/raaPC6OqSiKZ3r9CQrCHOVdxvM3mAv
	hdtNFeJJmEUP12BK59EXvh25cuTAuzgviQ1yQdcoHcIO/xtS6TtoIBx7OdK0euVRLulVeD
	KGPm/P+ScmwFsLWser3fSH2DSpVsjwJPQxXAqQWt2MNkhwKfbovdVo2BMfN/0w==
From: "Lonnie" <lonnie@outstep.com>
In-Reply-To: <364e5058-ce40-482d-acf3-37f70266fdb3@vates.tech>
Content-Type: text/plain; charset="utf-8"
Date: Thu, 27 Jun 2024 15:38:02 +0200
Cc: "Juergen Gross" <jgross@suse.com>, xen-devel@lists.xenproject.org
To: "Teddy Astie" <teddy.astie@vates.tech>
MIME-Version: 1.0
Message-ID: <48-667d6b00-131-71122080@10350945>
Subject: =?utf-8?q?Re=3A?= Disaggregated (Xoar) Dom0 Building
User-Agent: SOGoMail 5.6.0
Content-Transfer-Encoding: quoted-printable
X-Last-TLS-Session-Version: None

Hi Teddy,

You are actually on track with what I was thinking in this area, which initially gave me two main ideas:

1. Take the NOVA Microhypervisor (very small TCB at only 5K LOC) and try to get QEMU or Bhyve integrated as the VMM, which would require a huge amount of development time. The Genode framework has a configure/compile approach that uses NOVA with a custom VirtualBox, but I did not want to go that route.

2. Take the Alpine Xen distro as the base and then update the dated Xoar patches, which effectively break Dom0 into 9 Service and Driver Mini/Nano VMs. I was also thinking about further setting those up as ultra-thin unikernels (MirageOS, IncludeOS, etc.), but am still researching.

My effort is to make a purely ultra-thin, RAM-based Xen hypervisor that boots UEFI on modern systems, plus a number of other features if all goes well.

Your idea of QEMU as a unikernel would probably work for both Xen and NOVA (with a bit of work on the NOVA side). I actually liked NOVA and experimented with it a while back, producing a very lightweight microhypervisor ISO that would boot, do some simple things, and even fire up lightweight Linux instances, with very limited capabilities of course, just to see if it worked. Unfortunately, that direction, although very interesting, would definitely take too much development to become a viable and more complete hypervisor. I did like that you could start with no VMs, easily launch one or more, and then use hot-keys to flip between consoles. That was pretty cool and is something I would like to work into this Xen effort as well; maybe some config file in the Xen.efi directory for that or something, but I am still thinking about it.

I think the Alpine-Xen-Xoar approach could be beneficial, but Xen plus its supporting libraries is still a bit larger than I would have hoped, although you get more capabilities and more of a solid hypervisor as well, I think. Maybe we can chat more about things if you like.

Best,
Lonnie
On Thursday, June 27, 2024 14:38 CEST, Teddy Astie <teddy.astie@vates.tech> wrote:

> Hi Lonnie,
> 
> On 27/06/2024 11:33, Lonnie Cumberland wrote:
> > What I am working towards is to have everything as a RAM-based
> > ultra-lightweight thin hypervisor. I looked over ACRN, the NOVA
> > Microhypervisor (Hedron, Bedrock Udo), Rust-Shyper, Bareflank-MicroV,
> > and many other development efforts, but it seems that Xen is the most
> > advanced for my purposes here.
> 
> You can have a disk-less (or ramdisk-based) distro supporting Xen with
> Alpine Linux (with the Xen flavour). It does still use Dom0 with all its
> responsibilities, though.
> 
> >>> Currently, I am investigating and researching the ideas of
> >>> "disaggregating" Dom0 and have the Xoar Xen patches ("Breaking Up is
> >>> Hard to Do: Security and Functionality in a Commodity Hypervisor",
> >>> 2011) available, which were developed against version 22155 of
> >>> xen-unstable. The Linux patches are against Linux with pvops
> >>> 2.6.31.13 and were developed on a standard Ubuntu 10.04 install. My
> >>> effort would also be to update these patches.
> >>>
> >>> I have been able to locate the Xen "Dom0 Disaggregation" page
> >>> (https://wiki.xenproject.org/wiki/Dom0_Disaggregation) and am
> >>> reading up on things now, but wanted to ask the developers list
> >>> about any experience you may have had in this area, since the
> >>> research objective is to integrate Xoar with the latest Xen 4.20, if
> >>> possible, and to take it further to basically eliminate Dom0
> >>> altogether, with individual Mini-OS or unikernel "Service and Driver
> >>> VMs" instead that are loaded at UEFI boot time.
> 
> The latest effort I have in mind regarding this idea of moving stuff
> out of Dom0 is QEMU as a unikernel (using Unikraft); there were some
> discussions on this in Matrix and at Xen Summit, and it is currently
> work in progress on the Unikraft side.
> 
> Teddy
> 
> 
> Teddy Astie | Vates XCP-ng Intern
> 
> XCP-ng & Xen Orchestra - Vates solutions
> 
> web: https://vates.tech
>



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 13:41:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 13:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750099.1158355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMpND-0006iu-2S; Thu, 27 Jun 2024 13:41:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750099.1158355; Thu, 27 Jun 2024 13:41:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMpNC-0006in-Vu; Thu, 27 Jun 2024 13:41:14 +0000
Received: by outflank-mailman (input) for mailman id 750099;
 Thu, 27 Jun 2024 13:41:14 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ztxJ=N5=bounce.vates.tech=bounce-md_30504962.667d6bf6.v1-93e112db93ce43eba1696d079574e9fd@srs-se1.protection.inumbo.net>)
 id 1sMpNC-0006ih-8C
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 13:41:14 +0000
Received: from mail145-25.atl61.mandrillapp.com
 (mail145-25.atl61.mandrillapp.com [198.2.145.25])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id ea6c7001-348a-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 15:41:12 +0200 (CEST)
Received: from pmta06.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail145-25.atl61.mandrillapp.com (Mailchimp) with ESMTP id
 4W908Q2fnXz35hb7b
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 13:41:10 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 93e112db93ce43eba1696d079574e9fd; Thu, 27 Jun 2024 13:41:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea6c7001-348a-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719495670; x=1719756170;
	bh=NTTNFqKwiPKP9I/ZG3X/aBVJ/iykNaZrWMPAIZEkmls=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=osKQ+qAQmmHfx+Ws+QvymIqMzGK49suvieSjvzPm/eS+DmRGDLUHtyupNn1+9ti/i
	 lVvk/R7RICei1C61BSxwolKM8lj4iYZQdci3KEFft+L9SFQFmx+v3h825ZVVIvD2X8
	 Mhr++aeOlAwqNP5MSHOiemnP50YsF662nRZ2P5qL72zvgGbelScbiFm1/SnqtUzY/+
	 8OXfXliO1HK/v2mlRQPazGBnmXvm+Xjp4wTKa2lH8lHcEEpnKGjLY3NHw50XpwUQz6
	 +cJjd88NkyenflMhDA/fwfXEvSrrq3K9P95BuVOGGyfwFHyAi34MHMb/qAHgOm97Be
	 GcvljDNpbfOWw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719495670; x=1719756170; i=vaishali.thakkar@vates.tech;
	bh=NTTNFqKwiPKP9I/ZG3X/aBVJ/iykNaZrWMPAIZEkmls=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=IDKH9wTfGhABIVYcdai+yYVPzsvw/8iKBceCJ/8HMr3GlNYgRzfYkBIEUmyC47s2V
	 PR1MtQGtBE37W9z7qCBOGJ74IBU0f4my2XegXNjkYKFzOvgvXs3HXOu3ZbeT7gOspK
	 Kielqk7sSm+jLJ5O+mMHWeZYUUcQlzhsCcBjAOw8d//JV1UxHHdO0LMc8X4IAt/bNS
	 GJ6Ub9KgdfeE0zZzMlJ7EYOpB7ork/MF2TayDOOodR+/BNtcRz8yWZxw1wCCdi32Tm
	 LfO/T5nMZjteh3KGvKskW6cPJgRn8+YbIg2PFYO8U4haPRU05qUUAu0dRaktWLD1Wq
	 TCio1UZJxxpcQ==
From: Vaishali Thakkar <vaishali.thakkar@vates.tech>
Subject: =?utf-8?Q?Re:=20[RFC=20for-4.20=20v1=201/1]=20x86/hvm:=20Introduce=20Xen-wide=20ASID=20Allocator?=
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719495669079
Message-Id: <2c843753-d27b-43cd-907e-851109890cc3@vates.tech>
To: Jan Beulich <jbeulich@suse.com>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com, george.dunlap@citrix.com, xen-devel@lists.xenproject.org
References: <cover.1716551380.git.vaishali.thakkar@vates.tech> <f15042aa7953d986b6dbd4dc1512024ba6362420.1716551380.git.vaishali.thakkar@vates.tech> <c18dbed6-07ac-4ce6-a5e4-4a72cbac3e12@suse.com>
In-Reply-To: <c18dbed6-07ac-4ce6-a5e4-4a72cbac3e12@suse.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.93e112db93ce43eba1696d079574e9fd?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240627:md
Date: Thu, 27 Jun 2024 13:41:10 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit

On 6/13/24 1:04 PM, Jan Beulich wrote:
> On 24.05.2024 14:31, Vaishali Thakkar wrote:
>> --- a/xen/arch/x86/hvm/asid.c
>> +++ b/xen/arch/x86/hvm/asid.c
>> @@ -27,8 +27,8 @@ boolean_param("asid", opt_asid_enabled);
>>    * the TLB.
>>    *
>>    * Sketch of the Implementation:
>> - *
>> - * ASIDs are a CPU-local resource.  As preemption of ASIDs is not possible,
>> + * TODO(vaishali): Update this comment
>> + * ASIDs are Xen-wide resource.  As preemption of ASIDs is not possible,
>>    * ASIDs are assigned in a round-robin scheme.  To minimize the overhead of
>>    * ASID invalidation, at the time of a TLB flush,  ASIDs are tagged with a
>>    * 64-bit generation.  Only on a generation overflow the code needs to
>> @@ -38,20 +38,21 @@ boolean_param("asid", opt_asid_enabled);
>>    * ASID useage to retain correctness.
>>    */
>>   
>> -/* Per-CPU ASID management. */
>> +/* Xen-wide ASID management */
>>   struct hvm_asid_data {
>> -   uint64_t core_asid_generation;
>> +   uint64_t asid_generation;
>>      uint32_t next_asid;
>>      uint32_t max_asid;
>> +   uint32_t min_asid;
>>      bool disabled;
>>   };
>>   
>> -static DEFINE_PER_CPU(struct hvm_asid_data, hvm_asid_data);
>> +static struct hvm_asid_data asid_data;
>>   
>>   void hvm_asid_init(int nasids)
>>   {
>>       static int8_t g_disabled = -1;
>> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
>> +    struct hvm_asid_data *data = &asid_data;
>>   
>>       data->max_asid = nasids - 1;
>>       data->disabled = !opt_asid_enabled || (nasids <= 1);
>> @@ -64,67 +65,73 @@ void hvm_asid_init(int nasids)
>>       }
>>   
>>       /* Zero indicates 'invalid generation', so we start the count at one. */
>> -    data->core_asid_generation = 1;
>> +    data->asid_generation = 1;
>>   
>>       /* Zero indicates 'ASIDs disabled', so we start the count at one. */
>>       data->next_asid = 1;
> 
> Both of these want to become compile-time initializers now, I think. The
> comments would move there, but especially the second one also looks to
> need updating.

Ack, will fix this.

>>   }
>>   
>> -void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid)
>> +void hvm_asid_flush_domain_asid(struct hvm_domain_asid *asid)
>>   {
>>       write_atomic(&asid->generation, 0);
>>   }
>>   
>> -void hvm_asid_flush_vcpu(struct vcpu *v)
>> +void hvm_asid_flush_domain(struct domain *d)
>>   {
>> -    hvm_asid_flush_vcpu_asid(&v->arch.hvm.n1asid);
>> -    hvm_asid_flush_vcpu_asid(&vcpu_nestedhvm(v).nv_n2asid);
>> +    hvm_asid_flush_domain_asid(&d->arch.hvm.n1asid);
>> +    //hvm_asid_flush_domain_asid(&vcpu_nestedhvm(v).nv_n2asid);
> 
> While in Lisbon Andrew indicated to not specifically bother about the nested
> case, I don't think he meant by this to outright break it. Hence imo
> something like this will need taking care of (which it being commented out
> may or may not indicate is the plan).
> 

I've already mentioned the reason for commenting out this code for v1 in 
the cover letter. But in case you missed it, TL;DR: I wanted to know the 
plans of the nested virtualization project with regard to this before 
making the changes, and it was commented out just to present the idea and 
for testing. Also, I have put a few TODOs in this RFC for v2 which indicate 
the plan to handle the nested virt parts.

Anyway, as discussed at the Xen Summit and indicated by Andrew, in v2 I'll 
make the changes kind of similar to what's happening in the non-nested virt 
code.

>>   }
>>   
>> -void hvm_asid_flush_core(void)
>> +void hvm_asid_flush_all(void)
>>   {
>> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
>> +    struct hvm_asid_data *data = &asid_data;
>>   
>> -    if ( data->disabled )
>> +    if ( data->disabled)
>>           return;
>>   
>> -    if ( likely(++data->core_asid_generation != 0) )
>> +    if ( likely(++data->asid_generation != 0) )
>>           return;
>>   
>>       /*
>> -     * ASID generations are 64 bit.  Overflow of generations never happens.
>> -     * For safety, we simply disable ASIDs, so correctness is established; it
>> -     * only runs a bit slower.
>> -     */
>> +    * ASID generations are 64 bit.  Overflow of generations never happens.
>> +    * For safety, we simply disable ASIDs, so correctness is established; it
>> +    * only runs a bit slower.
>> +    */
> 
> Please don't screw up indentation; this comment was well-formed before. What
> I question is whether, with the ultimate purpose in mind, the comment actually
> will continue to be correct. We can't simply disable ASIDs when we have SEV
> VMs running, can we?

You're right about SEV VMs. But wouldn't we still want a way to disable 
ASIDs when no SEV VMs are running?

>>       printk("HVM: ASID generation overrun. Disabling ASIDs.\n");
>>       data->disabled = 1;
>>   }
>>   
>> -bool hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid)
>> +/* This function is called only when first vmenter happens after creating a new domain */
>> +bool hvm_asid_handle_vmenter(struct hvm_domain_asid *asid)
> 
> With what the comment says, the function likely wants giving a different
> name.

Ack.

>>   {
>> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
>> +    struct hvm_asid_data *data = &asid_data;
>>   
>>       /* On erratum #170 systems we must flush the TLB.
>>        * Generation overruns are taken here, too. */
>>       if ( data->disabled )
>>           goto disabled;
>>   
>> -    /* Test if VCPU has valid ASID. */
>> -    if ( read_atomic(&asid->generation) == data->core_asid_generation )
>> +    /* Test if domain has valid ASID. */
>> +    if ( read_atomic(&asid->generation) == data->asid_generation )
>>           return 0;
>>   
>>       /* If there are no free ASIDs, need to go to a new generation */
>>       if ( unlikely(data->next_asid > data->max_asid) )
>>       {
>> -        hvm_asid_flush_core();
>> +        // TODO(vaishali): Add a check to pick the asid from the reclaimable asids if any
>> +        hvm_asid_flush_all();
>>           data->next_asid = 1;
>>           if ( data->disabled )
>>               goto disabled;
>>       }
>>   
>> -    /* Now guaranteed to be a free ASID. */
>> -    asid->asid = data->next_asid++;
>> -    write_atomic(&asid->generation, data->core_asid_generation);
>> +    /* Now guaranteed to be a free ASID. Only assign a new asid if the ASID is 1 */
>> +    if (asid->asid == 0)
> 
> Comment and code appear to not be matching up, which makes it difficult
> to figure what is actually meant.

Ack

>> +    {
>> +        asid->asid = data->next_asid++;
>> +    }
>> +
>> +    write_atomic(&asid->generation, data->asid_generation);
> 
> Is this generation thing really still relevant when a domain has an ASID
> assigned once and for all? Plus, does allocation really need to happen as
> late as immediately ahead of the first VM entry? Can't that be part of
> normal domain creation?

Ack for the generation thing. Agreed, it's not needed, and it makes sense to 
do allocation as part of domain creation. I'll restructure the code accordingly.

>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -2519,10 +2519,7 @@ static int cf_check hvmemul_tlb_op(
>>   
>>       case x86emul_invpcid:
>>           if ( x86emul_invpcid_type(aux) != X86_INVPCID_INDIV_ADDR )
>> -        {
>> -            hvm_asid_flush_vcpu(current);
>>               break;
>> -        }
> 
> What replaces this flush? Things like this (also e.g. the change to
> switch_cr3_cr4(), and there are more further down) would be good to
> explain in the description, perhaps by more generally explaining how
> the flushing model changes.

Ok, will do.

>> --- a/xen/arch/x86/hvm/svm/asid.c
>> +++ b/xen/arch/x86/hvm/svm/asid.c
>> @@ -7,31 +7,35 @@
>>   #include <asm/amd.h>
>>   #include <asm/hvm/nestedhvm.h>
>>   #include <asm/hvm/svm/svm.h>
>> -
>> +#include <asm/processor.h>
>>   #include "svm.h"
>> +#include "xen/cpumask.h"
> 
> The blank line was here on purpose; please keep it. And please never
> use "" for xen/ includes. That #include also wants moving up; the
> "canonical" way of arranging #include-s would be <xen/...> first, then
> <asm/...>, then anything custom. Each block separated by a blank line.
> 

Ack

>> -void svm_asid_init(const struct cpuinfo_x86 *c)
>> +void svm_asid_init(void)
> 
> Since this is now global initialization, can't it be __init?
> 
>>   {
>> +    unsigned int cpu = smp_processor_id();
>> +    const struct cpuinfo_x86 *c;
>>       int nasids = 0;
>>   
>> -    /* Check for erratum #170, and leave ASIDs disabled if it's present. */
>> -    if ( !cpu_has_amd_erratum(c, AMD_ERRATUM_170) )
>> -        nasids = cpuid_ebx(0x8000000aU);
>> -
>> +    for_each_online_cpu( cpu ) {
> 
> Nit (style): Brace goes on its own line.

ack

>> +        c = &cpu_data[cpu];
>> +        /* Check for erratum #170, and leave ASIDs disabled if it's present. */
>> +        if ( !cpu_has_amd_erratum(c, AMD_ERRATUM_170) )
>> +            nasids += cpuid_ebx(0x8000000aU);
> 
> Why += ? Don't you mean to establish the minimum across all CPUs? Which would
> be assuming there might be an asymmetry, which generally we assume there
> isn't.
> And if you invoke CPUID, you'll need to do so on the very CPU, not many times
> in a row on the BSP.

Hmm, I'm not sure I understand your point completely. Just to clarify: do 
you mean that even if we assume there is no asymmetry, we should find and 
establish the minimum ASID count across all online CPUs, and ensure that 
the CPUID instruction is executed on the respective CPU?

>> +    }
>>       hvm_asid_init(nasids);
>>   }
>>   
>>   /*
>> - * Called directly before VMRUN.  Checks if the VCPU needs a new ASID,
>> + * Called directly before first VMRUN.  Checks if the domain needs an ASID,
>>    * assigns it, and if required, issues required TLB flushes.
>>    */
>>   void svm_asid_handle_vmrun(void)
> 
> Again the function name will likely want to change if this is called just
> once per domain. Question then again is whether really this needs doing as
> late as ahead of the VMRUN, or whether perhaps at least parts can be done
> during normal domain creation.

Ack, the parts regarding setting up the ASIDs can definitely be moved to 
domain creation. I'll also check whether the part regarding setting the 
tlb_control bit can be done while/after creating the VMCB.

>>   {
>> -    struct vcpu *curr = current;
>> -    struct vmcb_struct *vmcb = curr->arch.hvm.svm.vmcb;
>> -    struct hvm_vcpu_asid *p_asid =
>> -        nestedhvm_vcpu_in_guestmode(curr)
>> -        ? &vcpu_nestedhvm(curr).nv_n2asid : &curr->arch.hvm.n1asid;
>> +    struct vcpu *v = current;
> 
> Please don't move away from naming this "curr".
> 
>> +    struct domain *d = current->domain;
> 
> Then this, if it needs to be a local variable in the first place, would
> want to be "currd".
> 
>> @@ -988,8 +986,8 @@ static void noreturn cf_check svm_do_resume(void)
>>           v->arch.hvm.svm.launch_core = smp_processor_id();
>>           hvm_migrate_timers(v);
>>           hvm_migrate_pirqs(v);
>> -        /* Migrating to another ASID domain.  Request a new ASID. */
>> -        hvm_asid_flush_vcpu(v);
>> +        /* Migrating to another ASID domain. Request a new ASID. */
>> +        hvm_asid_flush_domain(v->domain);
>>       }
> 
> Is "migrating to another ASID domain" actually still possible in the new
> model?

That's correct. Eventually we will have to find a way to support that, but as 
part of the initial work it won't be possible. I will fix the above.

>> @@ -2358,8 +2351,9 @@ static bool cf_check is_invlpg(
>>   
>>   static void cf_check svm_invlpg(struct vcpu *v, unsigned long linear)
>>   {
>> +    v = current;
> 
> ???
> 
>>       /* Safe fallback. Take a new ASID. */
>> -    hvm_asid_flush_vcpu(v);
>> +    hvm_asid_flush_domain(v->domain);
>>   }
>>   
>>   static bool cf_check svm_get_pending_event(
>> @@ -2533,6 +2527,7 @@ void asmlinkage svm_vmexit_handler(void)
>>       struct cpu_user_regs *regs = guest_cpu_user_regs();
>>       uint64_t exit_reason;
>>       struct vcpu *v = current;
>> +    struct domain *d = current->domain;
>>       struct vmcb_struct *vmcb = v->arch.hvm.svm.vmcb;
>>       int insn_len, rc;
>>       vintr_t intr;
>> @@ -2927,7 +2922,7 @@ void asmlinkage svm_vmexit_handler(void)
>>           }
>>           if ( (insn_len = svm_get_insn_len(v, INSTR_INVLPGA)) == 0 )
>>               break;
>> -        svm_invlpga_intercept(v, regs->rax, regs->ecx);
>> +        svm_invlpga_intercept(d, regs->rax, regs->ecx);
>>           __update_guest_eip(regs, insn_len);
>>           break;
> 
> The function may certainly benefit from introducing "d" (or better "currd"),
> but please uniformly then (in a separate, prereq patch). Else just use
> v->domain in this one place you're changing.

Ack for this and above comment regarding naming

>> --- a/xen/arch/x86/include/asm/hvm/asid.h
>> +++ b/xen/arch/x86/include/asm/hvm/asid.h
>> @@ -8,25 +8,24 @@
>>   #ifndef __ASM_X86_HVM_ASID_H__
>>   #define __ASM_X86_HVM_ASID_H__
>>   
>> +struct hvm_domain;
>> +struct hvm_domain_asid;
> 
> I can see the need for the latter, but why the former?
> 
>> -struct vcpu;
>> -struct hvm_vcpu_asid;
>> -
>> -/* Initialise ASID management for the current physical CPU. */
>> +/* Initialise ASID management distributed across all CPUs. */
>>   void hvm_asid_init(int nasids);
>>   
>>   /* Invalidate a particular ASID allocation: forces re-allocation. */
>> -void hvm_asid_flush_vcpu_asid(struct hvm_vcpu_asid *asid);
>> +void hvm_asid_flush_domain_asid(struct hvm_domain_asid *asid);
>>   
>> -/* Invalidate all ASID allocations for specified VCPU: forces re-allocation. */
>> -void hvm_asid_flush_vcpu(struct vcpu *v);
>> +/* Invalidate all ASID allocations for specified domain */
>> +void hvm_asid_flush_domain(struct domain *d);
>>   
>> -/* Flush all ASIDs on this processor core. */
>> -void hvm_asid_flush_core(void);
>> +/* Flush all ASIDs on all processor cores */
>> +void hvm_asid_flush_all(void);
>>   
>>   /* Called before entry to guest context. Checks ASID allocation, returns a
>>    * boolean indicating whether all ASIDs must be flushed. */
>> -bool hvm_asid_handle_vmenter(struct hvm_vcpu_asid *asid);
>> +bool hvm_asid_handle_vmenter(struct hvm_domain_asid *asid);
> 
> Much like the comment on the definition, this comment then wants adjusting,
> too.

Ack

>> --- a/xen/arch/x86/include/asm/hvm/vmx/vmx.h
>> +++ b/xen/arch/x86/include/asm/hvm/vmx/vmx.h
>> @@ -525,6 +525,7 @@ void ept_sync_domain(struct p2m_domain *p2m);
>>   
>>   static inline void vpid_sync_vcpu_gva(struct vcpu *v, unsigned long gva)
>>   {
>> +    struct domain *d = current->domain;
> 
> Why "current" when "v" is being passed in?

Sorry, this was a leftover from some debugging traces. Will fix this and
above unnecessary use of current..

>> --- a/xen/arch/x86/mm/hap/hap.c
>> +++ b/xen/arch/x86/mm/hap/hap.c
>> @@ -739,13 +739,13 @@ static bool cf_check flush_tlb(const unsigned long *vcpu_bitmap)
>>           if ( !flush_vcpu(v, vcpu_bitmap) )
>>               continue;
>>   
>> -        hvm_asid_flush_vcpu(v);
>> -
>>           cpu = read_atomic(&v->dirty_cpu);
>>           if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
>>               __cpumask_set_cpu(cpu, mask);
>>       }
>>   
>> +    hvm_asid_flush_domain(d);
> 
> Hmm, that's potentially much more flushing than is needed here. There
> surely wants to be a way to flush at a granularity smaller than the
> entire domain. (Likely applies elsewhere as well.)

I see, does it mean we still need a way to flush at the vcpu level in the
case of HAP?

>> @@ -2013,6 +2014,12 @@ void asmlinkage __init noreturn __start_xen(unsigned long mbi_p)
>>           printk(XENLOG_INFO "Parked %u CPUs\n", num_parked);
>>       smp_cpus_done();
>>   
>> +    /* Initialize xen-wide ASID handling */
>> +    #ifdef CONFIG_HVM
>> +    if ( cpu_has_svm )
>> +        svm_asid_init();
>> +    #endif
> 
> Nit (style): Pre-processor directives want to start in the leftmost column.
> 
> Overall I'm afraid I have to say that there's too much hackery and too
> little description to properly reply to this to address its RFC-ness. You
> also don't really raise any specific questions.

I've raised a few questions in the cover letter, but the initial purpose of
sending this as an RFC was to present the idea and have some code for 
people to take a look at (instead of just talking about it theoretically) 
before the design session at Xen Summit. Nonetheless, I will do better next time.

Thanks for your review!

> Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 13:46:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 13:46:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750109.1158365 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMpSS-0007NC-Oz; Thu, 27 Jun 2024 13:46:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750109.1158365; Thu, 27 Jun 2024 13:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMpSS-0007N5-MD; Thu, 27 Jun 2024 13:46:40 +0000
Received: by outflank-mailman (input) for mailman id 750109;
 Thu, 27 Jun 2024 13:46:39 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EKKR=N5=bounce.vates.tech=bounce-md_30504962.667d6d3d.v1-aa0f840678284325ab682a903935b7fc@srs-se1.protection.inumbo.net>)
 id 1sMpSR-0007Mu-AI
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 13:46:39 +0000
Received: from mail145-25.atl61.mandrillapp.com
 (mail145-25.atl61.mandrillapp.com [198.2.145.25])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id acf3deee-348b-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 15:46:38 +0200 (CEST)
Received: from pmta06.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail145-25.atl61.mandrillapp.com (Mailchimp) with ESMTP id
 4W90Gj1lmQz35hb8X
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 13:46:37 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 aa0f840678284325ab682a903935b7fc; Thu, 27 Jun 2024 13:46:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acf3deee-348b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719495997; x=1719756497;
	bh=ZWn4d2IKJRseAJtsb8ByU866UJ02444cd+xNbgechO4=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=ogAHlDW7Gvb8Vfm8xXSG2RIxlvlcO3koPyMz6aZgDwiaWcKJ+0jTzHyzMzu8NkW/p
	 SSmcLXTZAkmC71616+v6uPVVfB/0JuEhgmkdbqN+oJ48zlfchJUh7nP+1CPEhEagt+
	 vaYsTd+pJwj5GB/TMYuXy89DCTp9TumQkVuHveC7V8tPmvIhZkKkSmFEfWhoGyEnAR
	 diq0XLxfBYa04MdexyJtXeP/FV2TqRYiwysdSVCmhlbtHRI4k1FSlPLWfZLjP21OQG
	 zYDQGOazYPTyRAsGhnpD1HppU+HcDz4U5a9plhehkd0C97bXXX61EgIuv3+FqWZI5F
	 Fi7KLiuLN5ePA==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719495997; x=1719756497; i=anthony.perard@vates.tech;
	bh=ZWn4d2IKJRseAJtsb8ByU866UJ02444cd+xNbgechO4=;
	h=From:Subject:To:Cc:Message-Id:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=SHajtEMYeIRUdcejO+Q5MD6s6sSBKvSJL9vvlJeBgJKvDhM5WRjRjmdoy4PtFzRG6
	 v0GGnYaUtCHlsvqhLqecEAcTrdxJ3lapiP1r7A1QMaifcNWFv1Lrk0p+5iUASjncbJ
	 dcHAj7agjzr4hFmGhDK1wxZxzOvIP5kdtf9diO36IRJ1AZSOAytIVEYUvPnQotxd2i
	 gzi7xVcDJcEadCYfLmbSRvAfnpGCWvsm7grd2zIE+NwWSbqaV4P/w+mG5gkzM9a5tD
	 jYdVJX1e2hZyFqqwSJKdTjnRCNWMiC5CWfWF7TIIe1OBinf9Y1yYvLsTdlHSrJIWNB
	 hFgmu08j5eJJg==
From: Anthony PERARD <anthony.perard@vates.tech>
Subject: =?utf-8?Q?Re:=20[PATCH=20for-4.19]=20tools/dombuilder:=20Correct=20the=20length=20calculation=20in=20xc=5Fdom=5Falloc=5Fsegment()?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719495996062
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Juergen Gross <jgross@suse.com>, Oleksii Kurochko <oleksii.kurochko@gmail.com>
Message-Id: <Zn1tO2h6aps4Dwdc@l14>
References: <20240627130134.1006059-1-andrew.cooper3@citrix.com>
In-Reply-To: <20240627130134.1006059-1-andrew.cooper3@citrix.com>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.aa0f840678284325ab682a903935b7fc?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240627:md
Date: Thu, 27 Jun 2024 13:46:37 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On Thu, Jun 27, 2024 at 02:01:34PM +0100, Andrew Cooper wrote:
> xc_dom_alloc_segment() is passed a size in bytes, calculates a size in pages
> from it, then fills in the new segment information with a bytes value
> re-calculated from the number of pages.
> 
> This causes the module information given to the guest (MB, or PVH) to have
> incorrect sizes; specifically, sizes rounded up to the next page.
> 
> This in turn is problematic for Xen.  When Xen finds a gzipped module, it
> peeks at the end metadata to judge the decompressed size, which is a -4
> backreference from the reported end of the module.
> 
> Fill in seg->vend using the correct number of bytes.
> 
> Fixes: ea7c8a3d0e82 ("libxc: reorganize domain builder guest memory allocator")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> 
> For 4.19: This was discovered when trying to test Daniel's gzip cleanup for
> Hyperlaunch.  It's a subtle bug, hidden inside a second bug which isn't
> appropriate content for 4.20.

Acked-by: Anthony PERARD <anthony.perard@vates.tech>

Thanks,

-- 

Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 14:26:46 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 14:26:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750135.1158375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMq4z-0004Py-Lj; Thu, 27 Jun 2024 14:26:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750135.1158375; Thu, 27 Jun 2024 14:26:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMq4z-0004Pr-JF; Thu, 27 Jun 2024 14:26:29 +0000
Received: by outflank-mailman (input) for mailman id 750135;
 Thu, 27 Jun 2024 14:26:28 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1631=N5=gmail.com=oleksii.kurochko@srs-se1.protection.inumbo.net>)
 id 1sMq4y-0004Pd-CB
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 14:26:28 +0000
Received: from mail-ej1-x62c.google.com (mail-ej1-x62c.google.com
 [2a00:1450:4864:20::62c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 3d62cf4a-3491-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 16:26:27 +0200 (CEST)
Received: by mail-ej1-x62c.google.com with SMTP id
 a640c23a62f3a-a7241b2fe79so686087566b.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 07:26:27 -0700 (PDT)
Received: from [192.168.219.191] ([94.75.70.14])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a729d6fdd5bsm66060266b.15.2024.06.27.07.26.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 27 Jun 2024 07:26:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d62cf4a-3491-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20230601; t=1719498387; x=1720103187; darn=lists.xenproject.org;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:from:to:cc:subject
         :date:message-id:reply-to;
        bh=JymcApNACeekwYC5ujv6uQ3hfIJUzcP4064saO773EQ=;
        b=g8F/979RC+bglUJINMZyMocQgToQLLL12hnay6jfnVF6uaHL3gbf3nhIL+Z6UO5PTq
         bcBjdPTNdsTRHQDy0O4TLu18jIVtWHDwbkTTyeQYZhWFloh+ocg0HHpOC36rhmkJqL3v
         8MHqvk4odlk+1/5I3kOVgyNrT90tMD0/JV36IpkhKkiEUbu8E58dRt7RwcL2rF71Y405
         XUZ7elI8VzP1kY23oWPKmsxGxCxb2M0zrXN3YUrIir9L7wHkIdLDVHcC0U0RGlE2U6kE
         UYli69tAzv28izpqGfJPO9TDS87PQB6+WJaGsRXzDZoYyKnmdR2hpyBKkKPtb31QGjyZ
         l9Pg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719498387; x=1720103187;
        h=mime-version:user-agent:content-transfer-encoding:references
         :in-reply-to:date:cc:to:from:subject:message-id:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=JymcApNACeekwYC5ujv6uQ3hfIJUzcP4064saO773EQ=;
        b=smAKI5nK1WoXUgg/7Zv/TLAnsRf4fs2YZ0CCN5Z1y5cXThNWWYGpi2KzjmCFfataX+
         q6V/Osn43b2ajS0QoEKX5GPYhKtwgRwRRg95gDzN4aKeCkS+axT/KTegRsRhB6f4/ioT
         6UqqYZLLZKewvzqhvD1oR6TQIDl92vQw2TfLJ99NV5jpTz0r8DuXPVV7PAmVuFAWuxHw
         hMvZG+cgL3Nyq5TARw82sYitbvWQwOQ23r9z8pbFObDlcDPJsXf1rJuPBug9MujqpWdS
         UnVdl8hjxvucEXOthyY3KF5abG4555b7FC4F9NoJa4+RzarmndIfCDYrB20jqjGtRlhN
         rzWQ==
X-Forwarded-Encrypted: i=1; AJvYcCWYjGH7EjtZf4aqutK7McM/NzLuHL8DkuSk3Wpm5cKEaeYFaEPtTc34dmsHszfJgMF++H9FoXOQNN+cMA6DQrCCG+WqkRGXzGohUVrFMaU=
X-Gm-Message-State: AOJu0Yxn0L8BB9zx9Bd+QsRPRn2wmSvCp/wMTjGLbNTAg2+iLGqmtwj0
	QmEocL02M4pOS+UYLeUfsayc5CR1l51Njd+yklJywYaEU6HZWmVq
X-Google-Smtp-Source: AGHT+IFVbpJtPkLeWPxiVyceL0IEXsmM75462yphRJ0z48A1CGTyR4eiy7jgO5F20H52Tj4/MQxWtA==
X-Received: by 2002:a17:907:cb20:b0:a6f:489a:3a28 with SMTP id a640c23a62f3a-a7242cdb40amr916085266b.61.1719498387019;
        Thu, 27 Jun 2024 07:26:27 -0700 (PDT)
Message-ID: <503b6bdd648d74c294301bd396665a7bb1a40814.camel@gmail.com>
Subject: Re: [PATCH for-4.19] tools/dombuilder: Correct the length
 calculation in xc_dom_alloc_segment()
From: oleksii.kurochko@gmail.com
To: Andrew Cooper <andrew.cooper3@citrix.com>, Xen-devel
	 <xen-devel@lists.xenproject.org>
Cc: Anthony PERARD <anthony@xenproject.org>, Juergen Gross <jgross@suse.com>
Date: Thu, 27 Jun 2024 16:26:26 +0200
In-Reply-To: <20240627130134.1006059-1-andrew.cooper3@citrix.com>
References: <20240627130134.1006059-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
User-Agent: Evolution 3.52.2 (3.52.2-1.fc40app2) 
MIME-Version: 1.0

On Thu, 2024-06-27 at 14:01 +0100, Andrew Cooper wrote:
> xc_dom_alloc_segment() is passed a size in bytes, calculates a size in
> pages from it, then fills in the new segment information with a bytes
> value re-calculated from the number of pages.
> 
> This causes the module information given to the guest (MB, or PVH) to
> have incorrect sizes; specifically, sizes rounded up to the next page.
> 
> This in turn is problematic for Xen.  When Xen finds a gzipped module,
> it peeks at the end metadata to judge the decompressed size, which is a
> -4 backreference from the reported end of the module.
> 
> Fill in seg->vend using the correct number of bytes.
> 
> Fixes: ea7c8a3d0e82 ("libxc: reorganize domain builder guest memory allocator")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Anthony PERARD <anthony@xenproject.org>
> CC: Juergen Gross <jgross@suse.com>
> CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> 
> For 4.19: This was discovered when trying to test Daniel's gzip cleanup
> for Hyperlaunch.  It's a subtle bug, hidden inside a second bug which
> isn't appropriate content for 4.20.
> ---
Release-Acked-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>

~ Oleksii

>  tools/libs/guest/xg_dom_core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/libs/guest/xg_dom_core.c b/tools/libs/guest/xg_dom_core.c
> index c4f4e7f3e27a..f5521d528be1 100644
> --- a/tools/libs/guest/xg_dom_core.c
> +++ b/tools/libs/guest/xg_dom_core.c
> @@ -601,7 +601,7 @@ int xc_dom_alloc_segment(struct xc_dom_image *dom,
>      memset(ptr, 0, pages * page_size);
> 
>      seg->vstart = start;
> -    seg->vend = dom->virt_alloc_end;
> +    seg->vend = start + size;
> 
>      DOMPRINTF("%-20s:   %-12s : 0x%" PRIx64 " -> 0x%" PRIx64
>                "  (pfn 0x%" PRIpfn " + 0x%" PRIpfn " pages)",



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 14:47:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 14:47:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750152.1158385 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqP5-0007DK-A7; Thu, 27 Jun 2024 14:47:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750152.1158385; Thu, 27 Jun 2024 14:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqP5-0007DD-7F; Thu, 27 Jun 2024 14:47:15 +0000
Received: by outflank-mailman (input) for mailman id 750152;
 Thu, 27 Jun 2024 14:47:13 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMqP3-0007D7-50
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 14:47:13 +0000
Received: from mail-wr1-x42c.google.com (mail-wr1-x42c.google.com
 [2a00:1450:4864:20::42c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 226169a2-3494-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 16:47:11 +0200 (CEST)
Received: by mail-wr1-x42c.google.com with SMTP id
 ffacd0b85a97d-36743abace4so460720f8f.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 07:47:11 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1faac9aa593sm14069515ad.250.2024.06.27.07.47.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 07:47:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 226169a2-3494-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719499630; x=1720104430; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=ogChDBm2KQHAa6Elu0zQESiuNx1mGrCTNcCeJ40+IdI=;
        b=BffbzofZVV+COVLDUO2HkYHwpi5uvA8F0pi+PDEhsMtd2C7wDdpNjnZtFwRP4fZYei
         SKTrSd+yeyf8UclTVDS+ekU2GXs1D/koRGoRBqaFByY8T+YfcjydtkChIozzfGZ99A/M
         QlqL720CcPJ2ol3XTE81/zp7sseGkEA7x6jgtU1e7XuDM8cEgH3qEDG4gDkYT0uquZaq
         v7DL/gtTlq7BHDzbPjCzTedJTOzQnlCg8FVdehbfYzDtnTpDHADG2bE17PJmot/CozKI
         8gCu7U2+vxtFvjr6n77Wgt2Clmfn9AjIWmsvQosjg/3y7Qj5MonSpTV3QvqKkulkxUMe
         poaA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719499630; x=1720104430;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=ogChDBm2KQHAa6Elu0zQESiuNx1mGrCTNcCeJ40+IdI=;
        b=e0qK2Bmt2tnDn7tPczhm0GgVe5KhkvVdRqMfa0y7MlE7koGVve0A7zWbEjGKqKViDR
         EUXRhmFMsr/3nBwUALBtQ5NaJQkg05nrj0b3Cjifxk3rTcJ2RYDC761Q9+Dy9FYxBdV4
         4mwJGipLtyAwUJ3Sz74pSbe8qcdGNG5ykWO363r5JXNGwrdJT9dIYEobCnPytC3lZnbW
         z8YbGc/gesoxo350GkUtoPky/Cy7AOZguQf0D9EH4pT8xDpd7oxW+ub8SIRlUZh/py6Y
         UkuoSjG+A1EfbU5YTu0ZNRVNrlpRggursJ73UGZEfqSRq07cY43Q7lGIOJHLTluS5psa
         A1jQ==
X-Forwarded-Encrypted: i=1; AJvYcCXZok+0MNUVW+KvdXMtACTH9BUNHI9AgA0TPcqRzk/MzDg9BNDmgZXzpLXmdYAuPPTWzKBRs9f2g9zV6dzxQniaj7nMZTXf59Tm19qlT1c=
X-Gm-Message-State: AOJu0YxgdO3+dKa23pfxDsbllrvdjzQ/EH/fJEuwTk1a40bMNiCM7whf
	yKbVMb6bQhNWpftGd+rZ+VV/0hOZLNw5+pFOwx+KIesXiJUNJce5adnWlbwndw==
X-Google-Smtp-Source: AGHT+IH0sIAEWiJIccZ8E2hiss/g1QsA9yS/YsGDxtfQb9gP3OFIDvyqiORaDbbl/f4JYUqmnoBZ7A==
X-Received: by 2002:a5d:50c5:0:b0:363:1c9d:d853 with SMTP id ffacd0b85a97d-36741930ba8mr1869438f8f.32.1719499630271;
        Thu, 27 Jun 2024 07:47:10 -0700 (PDT)
Message-ID: <a5275491-f28c-427c-bd15-34dd27ff8cb9@suse.com>
Date: Thu, 27 Jun 2024 16:47:03 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [RFC for-4.20 v1 1/1] x86/hvm: Introduce Xen-wide ASID Allocator
To: Vaishali Thakkar <vaishali.thakkar@vates.tech>
Cc: andrew.cooper3@citrix.com, roger.pau@citrix.com,
 george.dunlap@citrix.com, xen-devel@lists.xenproject.org
References: <cover.1716551380.git.vaishali.thakkar@vates.tech>
 <f15042aa7953d986b6dbd4dc1512024ba6362420.1716551380.git.vaishali.thakkar@vates.tech>
 <c18dbed6-07ac-4ce6-a5e4-4a72cbac3e12@suse.com>
 <2c843753-d27b-43cd-907e-851109890cc3@vates.tech>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <2c843753-d27b-43cd-907e-851109890cc3@vates.tech>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.06.2024 15:41, Vaishali Thakkar wrote:
> On 6/13/24 1:04 PM, Jan Beulich wrote:
>> On 24.05.2024 14:31, Vaishali Thakkar wrote:
>>> -void hvm_asid_flush_core(void)
>>> +void hvm_asid_flush_all(void)
>>>   {
>>> -    struct hvm_asid_data *data = &this_cpu(hvm_asid_data);
>>> +    struct hvm_asid_data *data = &asid_data;
>>>   
>>> -    if ( data->disabled )
>>> +    if ( data->disabled)
>>>           return;
>>>   
>>> -    if ( likely(++data->core_asid_generation != 0) )
>>> +    if ( likely(++data->asid_generation != 0) )
>>>           return;
>>>   
>>>       /*
>>> -     * ASID generations are 64 bit.  Overflow of generations never happens.
>>> -     * For safety, we simply disable ASIDs, so correctness is established; it
>>> -     * only runs a bit slower.
>>> -     */
>>> +    * ASID generations are 64 bit.  Overflow of generations never happens.
>>> +    * For safety, we simply disable ASIDs, so correctness is established; it
>>> +    * only runs a bit slower.
>>> +    */
>>
>> Please don't screw up indentation; this comment was well-formed before. What
>> I question is whether, with the ultimate purpose in mind, the comment actually
>> will continue to be correct. We can't simply disable ASIDs when we have SEV
>> VMs running, can we?
> 
> You're right about SEV VMs. But wouldn't we still want to have a way to
> disable ASIDs when no SEV VMs are running?

Possibly. Yet that still would render this comment stale in the common case:
as written, it suggests that simply disabling ASIDs on the fly is an
okay thing to do.

>>> +        c = &cpu_data[cpu];
>>> +        /* Check for erratum #170, and leave ASIDs disabled if it's present. */
>>> +        if ( !cpu_has_amd_erratum(c, AMD_ERRATUM_170) )
>>> +            nasids += cpuid_ebx(0x8000000aU);
>>
>> Why += ? Don't you mean to establish the minimum across all CPUs? Which would
>> be assuming there might be an asymmetry, which generally we assume there
>> isn't.
>> And if you invoke CPUID, you'll need to do so on the very CPU, not many times
>> in a row on the BSP.
> 
> Hmm, I'm not sure if I understand your point completely. Just to clarify, 
> do you mean even if it's assumed that asymmetry is not there, we should 
> find and establish min ASID count across all online CPUs and ensure that 
> CPUID instruction is executed on the respective CPU?

No, I mean that
- if we assume there may be asymmetry, CPUID will need invoking once on
  every CPU (including ones later being hot-onlined),
- if we assume no asymmetry, there's no need for accumulation.

>>> --- a/xen/arch/x86/mm/hap/hap.c
>>> +++ b/xen/arch/x86/mm/hap/hap.c
>>> @@ -739,13 +739,13 @@ static bool cf_check flush_tlb(const unsigned long *vcpu_bitmap)
>>>           if ( !flush_vcpu(v, vcpu_bitmap) )
>>>               continue;
>>>   
>>> -        hvm_asid_flush_vcpu(v);
>>> -
>>>           cpu = read_atomic(&v->dirty_cpu);
>>>           if ( cpu != this_cpu && is_vcpu_dirty_cpu(cpu) && v->is_running )
>>>               __cpumask_set_cpu(cpu, mask);
>>>       }
>>>   
>>> +    hvm_asid_flush_domain(d);
>>
>> Hmm, that's potentially much more flushing than is needed here. There
>> surely wants to be a way to flush at a granularity smaller than the
>> entire domain. (Likely applies elsewhere as well.)
> 
> I see, does it mean we still need a way to flush at the vcpu level in the
> case of HAP?

For correctness it's not really "need", but for performance I'm pretty sure
it's going to be "want".

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 14:53:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 14:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750158.1158395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqUn-0000TR-TQ; Thu, 27 Jun 2024 14:53:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750158.1158395; Thu, 27 Jun 2024 14:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqUn-0000TK-QO; Thu, 27 Jun 2024 14:53:09 +0000
Received: by outflank-mailman (input) for mailman id 750158;
 Thu, 27 Jun 2024 14:53:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMqUm-0000T8-Gt; Thu, 27 Jun 2024 14:53:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMqUm-0003ov-FJ; Thu, 27 Jun 2024 14:53:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMqUm-0002Ez-5F; Thu, 27 Jun 2024 14:53:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMqUm-0001Qh-4n; Thu, 27 Jun 2024 14:53:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HDFV1wo4NbAT9ZUD4gl6BKeynvGWXvOgyi5y/hwfErs=; b=y7mQuPJt1dntYnDkk/bEC2JAtm
	z23eboIv/5Wk7b1s7WqDsDLyTN6MqjUKTJwB8hn1aFTUC90I2l3Nc34Xft68wvQYJ41WLl3xx1aKj
	KKTqO/1GYaL7KUE4E7TmwB4AJsw3O4hXig8zPaRy+MgeH9lOfHMMV00/6YoRUcvhIaAY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186531-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186531: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=402e473249cf62dd4c6b3b137aa845db0fe1453a
X-Osstest-Versions-That:
    xen=ecadd22a3de8ce7f1799e85af6f1e37c06c57049
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Jun 2024 14:53:08 +0000

flight 186531 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186531/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  402e473249cf62dd4c6b3b137aa845db0fe1453a
baseline version:
 xen                  ecadd22a3de8ce7f1799e85af6f1e37c06c57049

Last test of basis   186523  2024-06-26 19:04:06 Z    0 days
Testing same since   186531  2024-06-27 12:02:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ecadd22a3d..402e473249  402e473249cf62dd4c6b3b137aa845db0fe1453a -> smoke


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 14:53:52 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 14:53:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750163.1158405 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqVU-0000yp-66; Thu, 27 Jun 2024 14:53:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750163.1158405; Thu, 27 Jun 2024 14:53:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqVU-0000yi-2m; Thu, 27 Jun 2024 14:53:52 +0000
Received: by outflank-mailman (input) for mailman id 750163;
 Thu, 27 Jun 2024 14:53:50 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMqVS-0000yF-OH
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 14:53:50 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0ff8c5eb-3495-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 16:53:49 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2e72224c395so92622291fa.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 07:53:49 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 98e67ed59e1d1-2c8d81d2ab3sm3717860a91.49.2024.06.27.07.53.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 07:53:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ff8c5eb-3495-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719500029; x=1720104829; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=VPRSFFzVa74HeoYnZqoRBtBGb55QUBzKi/PYfDDlIOA=;
        b=JPcRn5Q8h3WBdFDheXrk+G+Ii7zQv4PRFLW+qvKGr3w+7l05+aCrekaWkMj6cSeB5k
         x9zKnFOXXhakTNlNmw8bOqGKAReXa6I+x4M1DXcDTM8HCBJQohwB3o3x1rHmA/JieZ2o
         /4XxeHDxWH9FIgpo1GHFQJtR5qeLpapUQTiC4OZL8kSVR0Lt4UghxsqQYnUgmFCFDPKX
         KwF+AMikplj1nOmBgVWNI2i17KY/Fo9bXsvQ8W81OvnbpTaQ46lRU8gIOK67KoDj8q5S
         PXCpc5uj9NXfa+cvLTl+20hgj+Eh4+OKZ2EiadtQfQ1YJq3ctZac45npp2SQToiCzBAY
         BXow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719500029; x=1720104829;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=VPRSFFzVa74HeoYnZqoRBtBGb55QUBzKi/PYfDDlIOA=;
        b=e8rO6C//7NKNDRETzlccmSKGAOk/dE3dlShM4sgfYrFeW368TiWc7lXzyX/GDOg8d6
         CKRV4Ac4CzWGXbB2ys/WDsJvnQDRHAaz/zwEKFMUnsAfVnp3mztH7a3eiauEixjaruYO
         tk1hNhYwkBUqshFY2L69M81JG6eHOKVV2sAk7pig55aj3xXVUbFVfdQaM87gnxTzAkvp
         lXcOuJOLyCCPJFBveWE6Md5oGyjAStQ+h1FDFzxeJk9XKYmN8WEgchuYhK4WBG8PmVng
         //9UllJ8DMi+7AtWPkP+86N5uk6OME/9aWh2+LTx4PIE1VwSliPM9cxtqnnpnSIDOrQr
         m8RQ==
X-Forwarded-Encrypted: i=1; AJvYcCUrb0WUgItftheyF/SgYlt0l8sBHHxVYAaKj9wA7UY1pxbO6ilmxsicog3NuRjm5cvrTm4hIRtne4ppuIF3gObXkRQtmZPUPKbPPjdmZWY=
X-Gm-Message-State: AOJu0YzB9hOaxRfbGJLkV3ppmPR051EwY7x/6eSZDnVTTeRIMTWO/iRD
	0WbvOHrwG+EETXum36mhpyHskwaj6hFDfhyYEnjOeCKieA+d15qUiDDfZ8/NGw==
X-Google-Smtp-Source: AGHT+IFgJ+/sbw/6CAZjbA+Sy4kMzf2umTJFbSSRv2cHulIpckICKgwGFCqhOdIukgBFMu/Gxe0p4A==
X-Received: by 2002:a05:651c:2cc:b0:2ec:5b8f:c792 with SMTP id 38308e7fff4ca-2ec5b8fcc29mr80143561fa.43.1719500028884;
        Thu, 27 Jun 2024 07:53:48 -0700 (PDT)
Message-ID: <ab4384f7-572e-4c67-9bbc-a9238d0e0456@suse.com>
Date: Thu, 27 Jun 2024 16:53:40 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v4 06/10] tools/libguest: Make setting MTRR registers
 unconditional
To: Alejandro Vallejo <alejandro.vallejo@cloud.com>
Cc: Anthony PERARD <anthony.perard@vates.tech>,
 Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
References: <cover.1719416329.git.alejandro.vallejo@cloud.com>
 <2c55d486bb0c54a3e813abc66d32f321edd28b81.1719416329.git.alejandro.vallejo@cloud.com>
 <fe255839-f8ab-4dd1-abe8-8ec834099a8d@suse.com>
 <D2AS8XQPR3TS.TDT0A6SPW47G@cloud.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <D2AS8XQPR3TS.TDT0A6SPW47G@cloud.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.06.2024 14:02, Alejandro Vallejo wrote:
> On Thu Jun 27, 2024 at 10:42 AM BST, Jan Beulich wrote:
>> Plus what about a guest which was configured to have the CPUID bit for MTRRs
>> clear?
>> I think we ought to document this as not supported for PVH (we may
> 
> By "this" do you mean PVH _must_ have MTRR support? I would agree.

That was my first thought, yes. But then further down I adjusted my
considerations.

>> actually choose to refuse building such a guest), but in principle the MTRR
>> save/load operations should simply fail for a HVM guest in said configuration.
> 
> What use cases does that cover? With the adjustment I mention at the top that
> should be sorted. I'm wondering why we allow !mtrr at all.

Not allowing it would open up a mess as to which CPUID bits we allow to
be overridden and for which ones we'd deny overrides.

>> Making such a change in Xen now would, afaict, be benign to the tool stack.
>> After this adjustment it would result in a perceived regression, when there
>> shouldn't be any.
> 
> Fair point.
> 
>>
>> Thinking about it, even for PVH it may make sense to allow CPUID.MTRR=0, as
>> long as CPUID.PAT=1, thus forcing it into PAT-only mode. I think we did even
>> discuss this possible configuration before.
> 
> Is PAT-only an existing real HW configuration? Can't say I've seen any.

I don't think there are any, but the architecture doesn't preclude it, and
that's a simpler model overall for an OS to work with. That's why it was
discussed (to some degree) before (if my memory doesn't fail me there).

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 14:57:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 14:57:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750173.1158415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqYb-0001eC-MC; Thu, 27 Jun 2024 14:57:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750173.1158415; Thu, 27 Jun 2024 14:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqYb-0001e5-JR; Thu, 27 Jun 2024 14:57:05 +0000
Received: by outflank-mailman (input) for mailman id 750173;
 Thu, 27 Jun 2024 14:57:04 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMqYa-0001dd-Fb
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 14:57:04 +0000
Received: from mail-lj1-x234.google.com (mail-lj1-x234.google.com
 [2a00:1450:4864:20::234])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 8329fb30-3495-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 16:57:02 +0200 (CEST)
Received: by mail-lj1-x234.google.com with SMTP id
 38308e7fff4ca-2ebeefb9a7fso98446781fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 07:57:02 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706b4a58ebfsm1442999b3a.196.2024.06.27.07.56.57
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 07:57:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8329fb30-3495-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719500222; x=1720105022; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=gn7wnWRU/TzvR/QPwwZ9drO1pbD5OcGHdq1Z/b5IrPg=;
        b=Nf2BDms+DJM9+Dqwp9eUqByl3HKDNEQpZKSMqLoWLwP1/boY1g0rSA39LSe5bdsWfQ
         zxIN7B+arzLjroo9LuwPaLS4+jWS9RZYkTR+t53cipWwUh9pjmNaWsTXCPI9DubGExTc
         DhLqFoGuzz7jJXASuBI5p0PjztdVYccPBaIpW7ShOdI/dRgrCGMsUH1R2NgoyCLTQl/P
         P4J9BmlRg1VD/PsBOA7cW+1nqdpyjQ/l2C6m+QzRE1tYzD/G4++CInFeDq87BgDNwN+F
         WDIqe6pQPPst+GG+6aLNa+JE0hNG72s4U7dt3unLiCbALPyyJ7agCvJiYMUIM33JrsMk
         REhg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719500222; x=1720105022;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=gn7wnWRU/TzvR/QPwwZ9drO1pbD5OcGHdq1Z/b5IrPg=;
        b=A8NOKj5kyKne6O6q79ZXhCbt3n8/vl+xDTiu27QCsCNmhOJrJGG38yfQnrYvAzKzri
         /x2HSBORm/WWeQEjtI7IP+68ZQHE5H80ZirXsvLTPHNB8bMxLyXC27evNGbKzsAWrMMq
         Wlqd4Yx/6PRw4WjTRxlwk0QzbRFWUcYyTHoFx7WMTX/yNmZuEVxJpCkVe1pxAtqHw5gh
         IlSNVi/nFDovkHWM6719yPZtPbv9VXIQyCwFc/aBwiuZ8Xb4bLL6xoQekhKNXSkYzv+a
         NHMWybkP4lR/Hp4fMMuQ2VIByQ+3K23d3ljssBEaFyHqYOyQCIyc/XNh84538s3adDEv
         O/qQ==
X-Forwarded-Encrypted: i=1; AJvYcCUf2uy0WTX41bNK94EEBN9w6/K49TDIHVqCEjFSL1UaBbrOa9UojoASvSGKe7dwxKP6HUdMm/ImEgC2wIV+yWwFKxjV55bNvth4QpD/Xes=
X-Gm-Message-State: AOJu0Yw6q6Y7zTGl8ZfHersc/vvXqeNvw/g7WiZn9NeQTKhBbmPXs8uC
	sI1X97zTTh+dhzgcHV2jPv02xntD4KxsfOYPv8L3RLcYNSgRuVshNlE4HRQVNw==
X-Google-Smtp-Source: AGHT+IEiGkgfPLc6jOnhqUCejSIhj0sGP6Z0LDuUUhzqpaeCIqKGZOWcXLXUJAGFnEXO0KQLuDJLKg==
X-Received: by 2002:a2e:9cc8:0:b0:2ec:3fb8:6a91 with SMTP id 38308e7fff4ca-2ec5797a562mr96373991fa.19.1719500222205;
        Thu, 27 Jun 2024 07:57:02 -0700 (PDT)
Message-ID: <ac245978-e21e-4df7-a071-8b63793f6c8b@suse.com>
Date: Thu, 27 Jun 2024 16:56:52 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH v13 02/10] xen/riscv: introduce bitops.h
To: oleksii.kurochko@gmail.com
Cc: Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <0e4441eee82b0545e59099e2f62e3a01fa198d08.1719319093.git.oleksii.kurochko@gmail.com>
 <bb103587-546d-4613-bcb8-df10f5d05388@suse.com>
 <4c15dd072f08b1161d170608a096dc0851ced588.camel@gmail.com>
 <e2d82c37-da44-4a8f-a1f8-76d5ff05b104@suse.com>
 <f4f3a1550b4809a3cb8b27eb5e7248abf27b3944.camel@gmail.com>
 <4c71db0d-60a4-4347-b706-a2e06fc9cd63@suse.com>
 <7bd44b8615eb545b4956008d02c158d5c85e2345.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <7bd44b8615eb545b4956008d02c158d5c85e2345.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On 27.06.2024 14:01, oleksii.kurochko@gmail.com wrote:
> On Thu, 2024-06-27 at 12:10 +0200, Jan Beulich wrote:
>> On 27.06.2024 11:58, oleksii.kurochko@gmail.com wrote:
>>> On Thu, 2024-06-27 at 09:59 +0200, Jan Beulich wrote:
>>>> On 26.06.2024 19:27, oleksii.kurochko@gmail.com wrote:
>>>>> On Wed, 2024-06-26 at 10:31 +0200, Jan Beulich wrote:
>>>>>> On 25.06.2024 15:51, Oleksii Kurochko wrote:
>>>>>>> --- /dev/null
>>>>>>> +++ b/xen/arch/riscv/include/asm/bitops.h
>>>>>>> @@ -0,0 +1,137 @@
>>>>>>> +/* SPDX-License-Identifier: GPL-2.0 */
>>>>>>> +/* Copyright (C) 2012 Regents of the University of
>>>>>>> California
>>>>>>> */
>>>>>>> +
>>>>>>> +#ifndef _ASM_RISCV_BITOPS_H
>>>>>>> +#define _ASM_RISCV_BITOPS_H
>>>>>>> +
>>>>>>> +#include <asm/system.h>
>>>>>>> +
>>>>>>> +#if BITOP_BITS_PER_WORD == 64
>>>>>>> +#define __AMO(op)   "amo" #op ".d"
>>>>>>> +#elif BITOP_BITS_PER_WORD == 32
>>>>>>> +#define __AMO(op)   "amo" #op ".w"
>>>>>>> +#else
>>>>>>> +#error "Unexpected BITOP_BITS_PER_WORD"
>>>>>>> +#endif
>>>>>>> +
>>>>>>> +/* Based on linux/arch/include/asm/bitops.h */
>>>>>>> +
>>>>>>> +/*
>>>>>>> + * Non-atomic bit manipulation.
>>>>>>> + *
>>>>>>> + * Implemented using atomics to be interrupt safe. Could
>>>>>>> alternatively
>>>>>>> + * implement with local interrupt masking.
>>>>>>> + */
>>>>>>> +#define __set_bit(n, p)      set_bit(n, p)
>>>>>>> +#define __clear_bit(n, p)    clear_bit(n, p)
>>>>>>> +
>>>>>>> +#define test_and_op_bit_ord(op, mod, nr, addr, ord)     \
>>>>>>> +({                                                      \
>>>>>>> +    bitop_uint_t res, mask;                             \
>>>>>>> +    mask = BITOP_MASK(nr);                              \
>>>>>>> +    asm volatile (                                      \
>>>>>>> +        __AMO(op) #ord " %0, %2, %1"                    \
>>>>>>> +        : "=r" (res), "+A" (addr[BITOP_WORD(nr)])       \
>>>>>>> +        : "r" (mod(mask))                               \
>>>>>>> +        : "memory");                                    \
>>>>>>> +    ((res & mask) != 0);                                \
>>>>>>> +})
>>>>>>> +
>>>>>>> +#define op_bit_ord(op, mod, nr, addr, ord)      \
>>>>>>> +    asm volatile (                              \
>>>>>>> +        __AMO(op) #ord " zero, %1, %0"          \
>>>>>>> +        : "+A" (addr[BITOP_WORD(nr)])           \
>>>>>>> +        : "r" (mod(BITOP_MASK(nr)))             \
>>>>>>> +        : "memory");
>>>>>>> +
>>>>>>> +#define test_and_op_bit(op, mod, nr, addr)    \
>>>>>>> +    test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
>>>>>>> +#define op_bit(op, mod, nr, addr) \
>>>>>>> +    op_bit_ord(op, mod, nr, addr, )
>>>>>>> +
>>>>>>> +/* Bitmask modifiers */
>>>>>>> +#define NOP(x)    (x)
>>>>>>> +#define NOT(x)    (~(x))
>>>>>>
>>>>>> Since elsewhere you said we would use Zbb in bitops, I wanted
>>>>>> to
>>>>>> come
>>>>>> back
>>>>>> on that: Up to here all we use is AMO.
>>>>>>
>>>>>> And further down there's no asm() anymore. What were you
>>>>>> referring
>>>>>> to?
>>>>> RISC-V doesn't have a CLZ instruction in the base
>>>>> ISA.  As a consequence, __builtin_ffs() emits a library call to
>>>>> ffs()
>>>>> on GCC,
>>>>
>>>> Oh, so we'd need to implement that libgcc function, along the
>>>> lines
>>>> of
>>>> Arm32 implementing quite a few of them to support shifts on 64-
>>>> bit
>>>> quantities as well as division and modulo.
>>> Why we can't just live with Zbb extension? Zbb extension is
>>> presented
>>> on every platform I have in access with hypervisor extension
>>> support.
>>
>> I'd be fine that way, but then you don't need to break up ANDN into
>> NOT
>> and AND. It is my understanding that Andrew has concerns here, even
>> if
>> - iirc - it was him to originally suggest to build upon that
>> extension
>> being available. If these concerns are solely about being able to
>> build
>> with Zbb-unaware tool chains, then what to do about the build issues
>> there has already been said.
> Not much we can do except probably use .insn, as you suggested for
> the "pause" instruction in cpu_relax(), for every instruction from the
> Zbb extension (at the moment it is only ANDN, but who knows which
> instructions will be used in the future).
> 
> But then we will need to do the same for each possible extension we are
> going to use, as there is still a small chance that we might encounter
> an extension-unaware toolchain.
> 
> I am a little bit confused about what we should do.
> 
> In my opinion, the best approach at the moment is to use .insn for the
> ANDN and PAUSE instructions

This would be my preference, but please also consult with Andrew.

> and add an explanation to
> docs/misc/riscv/booting.txt or create a separate document where such
> issues are documented (I am not sure that README is the correct place
> for this).
> 
> I am also okay with going with the ANDN break-up into NOT and AND, but
> we need to come up with a concept of which instruction/extension should
> be used and how to deal with such situations consistently.
> 
> Furthermore, I don't think these changes should block the merging
> (whether in the 4.19 or the 4.20 Xen release) of patches v13 01-07 of
> this patch series.

This can be looked at in different ways. The splitting into NOT+AND was a
rather late change.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 14:59:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 14:59:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750179.1158425 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqae-0002Ws-0X; Thu, 27 Jun 2024 14:59:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750179.1158425; Thu, 27 Jun 2024 14:59:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqad-0002Wl-UE; Thu, 27 Jun 2024 14:59:11 +0000
Received: by outflank-mailman (input) for mailman id 750179;
 Thu, 27 Jun 2024 14:59:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfIA=N5=outlook.com=mhklinux@srs-se1.protection.inumbo.net>)
 id 1sMqac-0002Wf-5v
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 14:59:10 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10olkn20820.outbound.protection.outlook.com
 [2a01:111:f403:2c12::820])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cd254a45-3495-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 16:59:08 +0200 (CEST)
Received: from SN6PR02MB4157.namprd02.prod.outlook.com (2603:10b6:805:33::23)
 by SA6PR02MB10791.namprd02.prod.outlook.com (2603:10b6:806:440::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.32; Thu, 27 Jun
 2024 14:59:03 +0000
Received: from SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df]) by SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df%2]) with mapi id 15.20.7698.033; Thu, 27 Jun 2024
 14:59:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd254a45-3495-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kWFRMAIgFA2Z6O/VljRh+SIg7NwvOtQpSPyP18M1JpShlXKqA7R1fdoiIYlIxjy77TQe/bxYd0kWArsCqlZeXzZRRHs1kpE4IbXfNah+Z6bwM+oSGDuCdviMe9LfhmciKeE1JUzsIK1sA0OKd1o5zOzplxKIlLuMXBvW2cDqApI3IPkYhz2LLg15yCHj3ObqQxttYdkRlXXjON9Sro7j1WBP18vmW7yYodB+YFdDbY9bNDNTiF81IAWsWwndcQtWiY6V7iDeM5HTRl0IHsaKtsVldufOYoW6kyy9aGNt+ZyJNFtNwS9ITlbT/wQS6QMpZ7z8JSjdWTmDtr61pH+GZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=NP/TXAPT9GP0tZ60Pj5MVZ+f2bWwsuKKiNQf1CRtEhY=;
 b=O49P3pTgQ+Ai9vj8spy0AAeTycXE4XNijAc0V2lnFyvKArWfnse0ywvCVIed8guZ8PBnrAPYh4gmDhD7AQ1sAz/HcwrfWD49zj6BiZwguHh1jkhVno9bOMKSKgBe6QtAhUpvIAImhkaZSxT0qX7qA69Hv2V4vIyQU5KzAGJZMJp3Jey9mDC5gHA01V6Q5gqQ1rDgdvCvZRZXDLi7O+IO4zEbcFdpGJATiL/mbxyHykvJ+p5pU5hGdL1WuwFTsI3d8uZGDE8xvaFOj7/0c5uBeIN8BZeuyRXO0MDjOgDBIcYDyll+I1pzssjkVbeiqn4gB5oegPuPAPW0g705QPN8oQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=outlook.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NP/TXAPT9GP0tZ60Pj5MVZ+f2bWwsuKKiNQf1CRtEhY=;
 b=iTF2HFL48PS0A8ss+AnIjxysACbDIMtxftRCX2bAgyrGs8l3AC76x74mJrBHUvBzQQXWINV3t0Y2zopl7H6K28b01W87hY6Cml0YYDYhxzCOeVgaq/yeMb2BGu4Rcn1RWiMEoPNehS+XHYvdauKZ0lbWMO/kWoeHY2WaB7RO9hPMgo2iAhlWboKWQ4pJqXaPe5CdlCKj0U+epZDaz49W5F1vPdNlkaGdETM0AXynaEviGZboVc8IuTaK498fI8QscCyMKkYqOoHndqtgCgeVXjdntMZVhIjC+nzCk52XrC9S8uF0xkC2ZQEToZWowslvvZMWFTBQBmuXejbHP5+TVg==
From: Michael Kelley <mhklinux@outlook.com>
To: =?iso-8859-2?Q?Petr_Tesa=F8=EDk?= <petr@tesarici.cz>, "hch@lst.de"
	<hch@lst.de>
CC: "robin.murphy@arm.com" <robin.murphy@arm.com>, "joro@8bytes.org"
	<joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Topic: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Index: AQHauIjoJBWSrCyO6UWzcncSceBiMLHa1Z9wgABomICAAA3OAIAAhiSg
Date: Thu, 27 Jun 2024 14:59:03 +0000
Message-ID:
 <SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
	<SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240627060251.GA15590@lst.de>
 <20240627085216.556744c1@meshulam.tesarici.cz>
In-Reply-To: <20240627085216.556744c1@meshulam.tesarici.cz>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-tmn: [fWtQtpoQhh9P6n6U/S2eCRH68cq78H5Q]
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SN6PR02MB4157:EE_|SA6PR02MB10791:EE_
x-ms-office365-filtering-correlation-id: 1ccfd2f8-4b42-48df-fd03-08dc96b9af81
x-microsoft-antispam:
 BCL:0;ARA:14566002|461199028|3412199025|440099028|102099032|56899033;
x-microsoft-antispam-message-info:
 BsbSx4tOIOspvZwvADszIlAXUTGXaCRozXth+6JQaQPAn0SfTb35w4X4aObwpY1VaEev85FSzl5DeVowJDUqOaPTofrjz3M2pod2Qg3aawudatsOu+BCTLLqz6PDWOQTJeg2OKjZJ34UOeGDSSiAxsSaZFf7iAPB4zdMGePrPuq1nz1xrJKURIl8gTtbdZrI52csjcZcfegKlfLkhO/J7wlZnh4Gqw+oBVf/KEQOp9Bcuxc+bio7bDr7dCLuW0AiItoK8EKxJFN1GYcF/Q3HriCbc43l7GqiQQ1xuHfK6KooDQ4Oi4i+OkEfo/r/Vnzvz4pNREKUw7b73ktipH0y1im1aOcUiT+3scA/Gn+HyVNPGkh0hFKzt+mkdGrVPoEyoDXpmUiro4efeFbqtsTj2I29i2/bz93uh2HJxjT41TSk2xMjjny3AdLyOzXneeXvcCRRqVdSkVwtBaGyniyjf6ntWJpfMeIsSfu8znmCBw7gTKtH42mC6fSTIIq1COCsJq6teNQOrZgk+zH/rjysDM21Y7kCB3c78vKsRfEMvU379jNApn6Z/kIHUqkvYUd4
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?iso-8859-2?Q?CSY1WfSq/A2Pw+DgMzEvZv3FNFIh0U1azGBe4vhVn9kEzRkImY9cEyN3EG?=
 =?iso-8859-2?Q?3oU9nfVLKvrq1rJeNoeEpzmKZ5+r6MPX5AHc4gqJDjM9hQ4pMDcg7do47T?=
 =?iso-8859-2?Q?Q0UGPye7s/KFYUCCORn++d9kfXGYA8p/toQk5vG+c2403wQPHxuNvOwjg2?=
 =?iso-8859-2?Q?7LVvdoI/3jkwBCiMRBxkkkkCPOXp+3Tb0OjxdpHw9WTkoDJermq4tEDfky?=
 =?iso-8859-2?Q?wj5dV2taBnSYzdcNeXQfS2YyxiyfV0SbX/29m1mML1o+8U5b+jxGFZNfJc?=
 =?iso-8859-2?Q?UkQH6PppfP/rbutqlg2Zt+rRCsZ73s9PgO6gzie5kjyJmL5WEADtxfv6ZM?=
 =?iso-8859-2?Q?ejOgxlI6kukqzhmZl1hvogPw5bYXOJh/yDdjH915joRqsE61V8ONazaf0f?=
 =?iso-8859-2?Q?bSPjvrt2yBIb+83GXEVzpWTc6OYCHshoyFA7JT3F9IoakfaiMuY5y8/gPk?=
 =?iso-8859-2?Q?qjLZHEnj+q5EX0GEAthRN1GX9u88fkUxKbx9GrMicsWpFNjF9x406d2MdO?=
 =?iso-8859-2?Q?jafF5Sd550JjIiJKqRNqs6fInVja4buUergL4EL07/xleiKnMgWKm/Lg4z?=
 =?iso-8859-2?Q?7CFAweA4A/UBjOR2sytWRARLeZf51QKWVG9Shh/IPj79emyuvouDs2/o+e?=
 =?iso-8859-2?Q?3/MewYMCZF5gVGrZWYRLO+hqsyBMSbVxFI2Vb8T11ILzERnHx9rvOpHTE1?=
 =?iso-8859-2?Q?1l5zAKxb9Bhu0KaO07uwMocdAihB9XbVMhzsOCav01gpgQa+Na+NRvlM07?=
 =?iso-8859-2?Q?JECjlfG16pRu2KaSIFAzXLs8uiT+18W11+yd5RlOrPQ9XudkRzGgOBW8fo?=
 =?iso-8859-2?Q?4OBbnx0miojKb24121y3u3MVgeO1R+OO3TnDalErGIZhFMUGPfYRSOh5Jr?=
 =?iso-8859-2?Q?QkpIijPDwnX3ZmlTtSvDfz4wNFwRyhKvCyOfxByEIRpCNNCIr0n7Kp32BU?=
 =?iso-8859-2?Q?CxFJj1YuPBQcqZvzrfdIZAO9JN9m1yjIMr53MRWkxmLxPAay8DF/qnZx61?=
 =?iso-8859-2?Q?PyJrqO7Vf75dWBP/Me/jOavLh6Am+WOlvbQa4gBzvfmDxIPHw8vVYrFS8J?=
 =?iso-8859-2?Q?f5nmJ5Dwv6eGA3TdyvKCN2qqOulw91fHMILp1zeH9JP5y4utoqI/hCpEFS?=
 =?iso-8859-2?Q?oOCL50KWQK9ZsWF3s2lPSuymNsbpAAE6vwkT0+3NtHYzjztDABa4SZxbP9?=
 =?iso-8859-2?Q?y95Zp69A9WKa5SmXHSqrJgXqWWxzq8rG2uT6EQMjErV7KVd7KfhRLvT1YK?=
 =?iso-8859-2?Q?qgOSzR0Ypk40tQAJ6W6ZZcm/Z7JGP0L6uQi7X1VWk=3D?=
Content-Type: text/plain; charset="iso-8859-2"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN6PR02MB4157.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: 1ccfd2f8-4b42-48df-fd03-08dc96b9af81
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Jun 2024 14:59:03.7122
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA6PR02MB10791

From: Petr Tesařík <petr@tesarici.cz> Sent: Wednesday, June 26, 2024 11:52 PM
>
> Oh, right. The idea is good, but I was not able to reply immediately
> and then forgot about it.
>
> For the record, I considered an alternative: Call swiotlb_* functions
> unconditionally and bail out early if the pool is NULL. But it's no
> good, because is_swiotlb_buffer() can be inlined, so this approach
> would replace a quick check with a function call. And then there's also
> swiotlb_tbl_unmap_single()...
>
> I have only a very minor suggestion: Could is_swiotlb_buffer() be
> renamed now that it no longer returns a bool? OTOH I have no good
> immediate idea myself.
>

Conceptually, it's still being used as a boolean function based on
whether the return value is NULL.  Renaming it to swiotlb_get_pool()
more accurately describes the return value, but obscures the
intent of determining if it is a swiotlb buffer.  I'll think about it.
Suggestions are welcome.

Michael


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 15:00:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 15:00:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750183.1158436 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqcJ-0003wn-Ba; Thu, 27 Jun 2024 15:00:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750183.1158436; Thu, 27 Jun 2024 15:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqcJ-0003wg-7c; Thu, 27 Jun 2024 15:00:55 +0000
Received: by outflank-mailman (input) for mailman id 750183;
 Thu, 27 Jun 2024 15:00:53 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMqcH-0003wa-FN
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 15:00:53 +0000
Received: from mail-lj1-x22f.google.com (mail-lj1-x22f.google.com
 [2a00:1450:4864:20::22f])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0c20c264-3496-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 17:00:52 +0200 (CEST)
Received: by mail-lj1-x22f.google.com with SMTP id
 38308e7fff4ca-2ed5ac077f5so26454851fa.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 08:00:52 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d2e1a72fcca58-706b4a34396sm1504050b3a.143.2024.06.27.08.00.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 08:00:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c20c264-3496-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719500452; x=1720105252; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=9T9lhiIargU7BQQ8R4KHbS3lR+mCE9LZcP1kLLiEcAA=;
        b=AqmB6LzDT+vAcVVTMdLE5tywXsUxCf/ErcDe/2Oeh/vYPpqV7b7fM6J80cMU8U1vBb
         hVJe9AWMd9wPZ4Mg1GZfBPANLNS7sAJ9NjiXYhu2GMuGnL88poazvPx2KXZ9UZB0beET
         DjiQkX7QuzvMZHt/qnP1JZH2jqbROCXOqeeeWZxCylxEi+OyZQnFF4VDcn7UceV5Ju9k
         micfeuUHVcHkDL7iS5PZRNqAJWu3PH+tRWRBvvDAWmPjehXH+Tjs86Twq2pL9/oqx15q
         irfEBqUFs6vjU0SJuKnUIXu4ZmoTE6WsKAkm9Ju3v/iglYOu8xjjYeRYtzaHp9YR/5CB
         JtVw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719500452; x=1720105252;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=9T9lhiIargU7BQQ8R4KHbS3lR+mCE9LZcP1kLLiEcAA=;
        b=Jz6IoKs5uoI3bqnVLW5d0+AmbNQiYdRcIjt2Nwnt25Ccn0VSNZFd6Hpyk4tdKRIqIN
         I0gYmhwdxKMtp2FM6e4t/4Jk/k72/zZUSIoBHANkB/zY26fsDKTfFITUFW8GlsND8rZA
         vxF4anDwV1iCdKR/gkyRR2OZOM1fytkYyi2fIHJOb8Enlfwp8LzQoaE3N23WcWM99hlx
         O4VzNCA8CaHq44ssVKtfDnkLua0Hn+iS3R3CMW7IIvYOcKeIPHUh65G75O2UemMhxjXj
         Low2viDE0qwdZUqqhwYYuRF+/6+CCFx6SvYBEN3sRg7P3r2GbO+553GO8ZkNRyozttw4
         40fQ==
X-Forwarded-Encrypted: i=1; AJvYcCVNuoGQM8uQWb1WlkpatjG+90qH4CK4guZ55bQPyJGwRlL78voM+RUAGqU2yBUoJDcdHJ3OyweXwOPnGGNJzj/f/+BjrDPI7I/Oj2C44zk=
X-Gm-Message-State: AOJu0YxfeB0AcPIP/qdIYBPHPN8ZaDkTow+appjw9xzx3XbfMNZVPaGZ
	vuomh5vnJkKBkWf9YfXmCef2cxD5kPRJ3SLoZ0Y9MFg8J4ca/6/HEL0fJsXAFQ==
X-Google-Smtp-Source: AGHT+IEG6vZTUo6YAqdxo5/eYl5aq1qPisCJ7iDgpSsU/lY64fVDHLpMy23dL5K09OF3AwCZfJHiaA==
X-Received: by 2002:a2e:9e95:0:b0:2ec:50eb:d5a2 with SMTP id 38308e7fff4ca-2ec59403692mr87211311fa.29.1719500452036;
        Thu, 27 Jun 2024 08:00:52 -0700 (PDT)
Message-ID: <3689797f-c26d-4c8e-ae22-d31e0b27e8a2@suse.com>
Date: Thu, 27 Jun 2024 17:00:41 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [PATCH for-4.19? v13 0/10] Enable build of full Xen for RISC-V
To: oleksii.kurochko@gmail.com
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Bertrand Marquis <bertrand.marquis@arm.com>,
 Michal Orzel <michal.orzel@amd.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Shawn Anastasio <sanastasio@raptorengineering.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Alistair Francis <alistair.francis@wdc.com>,
 Bob Eshleman <bobbyeshleman@gmail.com>, Connor Davis
 <connojdavis@gmail.com>, xen-devel@lists.xenproject.org,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <e8ad8849f10ab8658b84ce18670549ef6314ae4e.camel@gmail.com>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <e8ad8849f10ab8658b84ce18670549ef6314ae4e.camel@gmail.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.06.2024 14:08, oleksii.kurochko@gmail.com wrote:
> I saw a message in the xen-devel channel:
> ```
> erm...  We've got a problem.
> https://gitlab.com/xen-project/people/andyhhp/xen/-/jobs/7185148004
> very clearly failed with a panic(), but reported success out to Gitlab
> ```
> However, I couldn't determine if this is related to the patches in this
> series or to patches that were merged earlier.

I don't think that's related to the progress of this series, even if it
is technically related.

> I would like to understand what needs to be done to be sure that this
> patch series could be merged.

First of all, there is - afaict - still no Arm ack for patch 1. Without
that no progress is possible at all. And then there still is this vague
"I found another bug" indication by Andrew. It wouldn't feel quite right
to me to commit without that having been clarified one way or another,
albeit formally - with acks in place and the issue not raised on the
list - doing so would still be okay-ish.

Jan


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 15:04:47 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 15:04:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750191.1158445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqfx-0004Xx-Pn; Thu, 27 Jun 2024 15:04:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750191.1158445; Thu, 27 Jun 2024 15:04:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqfx-0004Xq-Mx; Thu, 27 Jun 2024 15:04:41 +0000
Received: by outflank-mailman (input) for mailman id 750191;
 Thu, 27 Jun 2024 15:04:41 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfIA=N5=outlook.com=mhklinux@srs-se1.protection.inumbo.net>)
 id 1sMqfw-0004Xi-NS
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 15:04:40 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10olkn20807.outbound.protection.outlook.com
 [2a01:111:f403:2804::807])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 92b8b871-3496-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 17:04:39 +0200 (CEST)
Received: from SN6PR02MB4157.namprd02.prod.outlook.com (2603:10b6:805:33::23)
 by IA0PR02MB9702.namprd02.prod.outlook.com (2603:10b6:208:487::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.34; Thu, 27 Jun
 2024 15:04:35 +0000
Received: from SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df]) by SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df%2]) with mapi id 15.20.7698.033; Thu, 27 Jun 2024
 15:04:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92b8b871-3496-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=junSST0Oet99976p1kU7/GaJlYJroYc6b7SLrTRZt6eolD60vjysh08sFSuJwWJ69EDreIqbdQDbpLYDSgzl7KBLfEBODNxTpXYwFEH/2PTVZV+l1GP3eg5Zb3Fb1nC3z8RdolCN7cMV3NJ/BVyRUz3F2Ff1zwngcNGqRBAjGL0s1GGLtxJVnLewIL2EmCqTui//l5kEl+vhkFNz5m6XCBTr3fnhZsyJgFWGEA/eDGTPjCNJa6LIOt664I30oHLr/mpyH1gxnnBBAyneC0atWng5VIcrLxoheKlzRVshjh/wVy7jM4757zKndit520aMyoeeQA14G2a7e3rvzNMrrg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=raNG0jVWFpSJwFielu6FwfYrMP8/HWhkBSOCMgEBgYM=;
 b=OXeHoEv6nwKM334GfD571hdH3afo5GikHpgA7fN/cC3x7lUyc7eUqkFA3pOvdfqQPDA2v/ZttIht87/7yLndcvaAFIjJc7HexkEa/+lJX0/p41zVvMxruIWK2B37pkIil7v+9MgM2QTJ/TlVeRGcMeun9qp4DPGvRvbVnonVM94+63bn0CGenq61mtDpQbsZFBQm3K64rqPTQVI+K1pjQx2FQK6clIrzSTBkxJ7yQ0BxX5P7WggXkP5/2l9OpcbTqcKr9RekIzS2owv+3QMhkMsfB2nQ/P+OL7JQhf1btqLXycAMvNi7sNdkktGipb4aSyCBFW9VMmZBqkkK095aBA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=outlook.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=raNG0jVWFpSJwFielu6FwfYrMP8/HWhkBSOCMgEBgYM=;
 b=f9iOte9BsmltkGsn8IdYrqhZX8zwOu4/Y9RLlzjmr6/QavC0p999C6kIvFpy4yL/Unadzh1MYU/56+arIdnHk81gy+GOzsJvuYCZ7Nh0Y0fUhREhc0k3TJX9mhE7gB8Kflwh+J//a1EJsJqVxv0FYt4wlc+3GTCa3AyrsSt2sHpWnVdVxTaE/LikkDmm234qbPMryNwWinmi1kGjCfnZ4GYsBNu9DtGUXuD6aczoOe5XEsb0GFPIbkH+qur75n3iy2Ngb0MgHNtW2gfZ5YPB428ANcqG3QoqIoMitHXuGCy3KTHJqEDl6GKaKTKvKqkH8SmQpIxq4m7rGwj5aC2q6A==
From: Michael Kelley <mhklinux@outlook.com>
To: =?iso-8859-2?Q?Petr_Tesa=F8=EDk?= <petr@tesarici.cz>,
	"mhkelley58@gmail.com" <mhkelley58@gmail.com>
CC: "robin.murphy@arm.com" <robin.murphy@arm.com>, "joro@8bytes.org"
	<joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>, "hch@lst.de"
	<hch@lst.de>, "m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Topic: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Index: AQHauIjoJBWSrCyO6UWzcncSceBiMLHbU/+AgACADXA=
Date: Thu, 27 Jun 2024 15:04:35 +0000
Message-ID:
 <SN6PR02MB4157CF368284CA48061E35E9D4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
 <20240627092049.1dbec746@meshulam.tesarici.cz>
In-Reply-To: <20240627092049.1dbec746@meshulam.tesarici.cz>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-tmn: [AKhZ4FTPcx0HfZye0qhnHyABc3sw1wYl]
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SN6PR02MB4157:EE_|IA0PR02MB9702:EE_
x-ms-office365-filtering-correlation-id: edd5232c-f037-4bde-a2e4-08dc96ba7522
x-microsoft-antispam:
 BCL:0;ARA:14566002|461199028|3412199025|440099028|102099032;
x-microsoft-antispam-message-info:
 C4w+6zLY5M7Vk3uroWbqkWyXGC/J79Dm3JF+VD8WwfvRbg6juRPuQm8IhWQNj+zmpc4pKMwvuDJV3V91oAJ7fZSMkw0SkuXgN71A1tuONyrOY2A5qoZ0rU7wV4wEh2Jvq5i9DJE/QWgR8p1hMVT0qx1ix4OhEl+tPPVosUR+HklI9k/Qy90CGkQ3UEdtP1lrvR3YT22WO5/pDotqpu/69fo+7okeDRfIFyxjsHQVQMws1jr2/uPStj1wFuRVRivaKVhZN9HsX6toKK6Vv5sWqXB/A6dpOHOr3hHJZnqLXOAOTnAOy7E1aPkKRqbQQaI8OJ8TFsfX5uDsywyWTvkuekbQaJpejkgjcJSmkHRQnv4yL4y2RIWnxBB0CBIH/5bj4wwvl8+mIYa3nwl/Df4OcUW8qE9aTCVX8ElNlowyw832/Q+bhTdGzBHF0oOmZZSk3H69M91QxJxpMcH2LRUPx5t64Zy6BsIiRxvZoTEfxbG4rtP79E1MmD/WrNMSl/aHIoiJIdGeCp4Qy+3jXcvFNyC+vN/t6uGMfpybgaX4r9Yiu3Hb1g1YmreLYk0g9N4p
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?iso-8859-2?Q?TVNMMdrcwSsBtZ9+85O93RYN9thGCKdgW6DFpEwdGDNHimWfW1bHVKMKlm?=
 =?iso-8859-2?Q?9o3eDUT9g5d/PnTGCTn3B1yopqSeNACJCXTUClJdr3N7iM7pzoLcxNbzct?=
 =?iso-8859-2?Q?4KHjH9PP4JFJreFu6vBfuZsMj+d8AojebRkCfrx5ocKfxWsnYM1sJwawjY?=
 =?iso-8859-2?Q?DOnLbOwwr42c6pfLcjcBUn7+ly8n9bnAZxSm5yk6LPTIKTNdS56rhyUutk?=
 =?iso-8859-2?Q?kFDOJ+uaygp1OvTFoXDUIdcCxyZLArIyUZcrjKLrHBBtyFtG0rP2jdYVF/?=
 =?iso-8859-2?Q?TMpNLq9thmqc2eIVuZqULCNWNqwhawfoo7gOsMIljfSVGTJpVDLSpMQYcJ?=
 =?iso-8859-2?Q?tOhoxaCsGg3buoiS3UmxVxrc1kCBTfAMxaH8LGzEgfmrD2WJrSRzxkldcH?=
 =?iso-8859-2?Q?J+9R9py5O5e0kzAq4WC11CDH5WaId4iWqz8/XSS2SlqvQDMN5oogwB854B?=
 =?iso-8859-2?Q?WTHqVLHE4FLDDhkPg6YbFX+dPn/2FLVYWh7u0CHjxn3JuzFc1jaaJnUr0F?=
 =?iso-8859-2?Q?t/fTiJ9STvPUWY2v11rcf3XPRXa5GWThR6mGa8n+3jFFVfk5o0IYWUMy9i?=
 =?iso-8859-2?Q?MgzgUhrXNIDW2Fg43mziZru2MkUWTbcVBKJ0ZsfYcsZqUXREOwOcPm9k1I?=
 =?iso-8859-2?Q?w7pOfZa/qI3nNU6hx0SczoIN2NLf+p5y5wgEJE9iQNaxr7VGI4LBoEq8A8?=
 =?iso-8859-2?Q?0s+W22Gm/lvgGfDwCv3h+Aij2/iBGIAKGmWzuNNe9AWZ4B0UGv3Dc63LRT?=
 =?iso-8859-2?Q?JFdV1ED27t7sF8yL087Nj3zDwI6coUulRrjuIYsWpF7L+gRZMh1/2nPf4T?=
 =?iso-8859-2?Q?mtVUd2y/KeiEafeTptsOOe2d+2291rr1UURbkF1wgFOFAkBX1z/LhZgb6k?=
 =?iso-8859-2?Q?NXO9dkFRvxYEyJYM8EZx9Ev2RReC4yieq/S78VrQOgjkMKkgU2pStdg2ko?=
 =?iso-8859-2?Q?lNY3/Vtp2VaLsy5Pmh+KtWPKnsm1BuCnhKJzlku1Z68LnNTW7YG7IF8tD4?=
 =?iso-8859-2?Q?9HhLQnOHSQx+gj5GV+VQ3HvAna4Oa9fydnYmeSbmaI//PkSv/sf0zEluNW?=
 =?iso-8859-2?Q?cl2WSpWRDVhmS07tD8zJAwYiwTaQ1vg2ltlL57bat5BFpWdLjZ85q3THtX?=
 =?iso-8859-2?Q?dLyWP43i7RMsnOmpNDlqWxWIbsJmGQpOOUbgOO3BIcgxZFjcbGHe6y/w7Q?=
 =?iso-8859-2?Q?7AuLf4TDMiTQ6kKMkYLFC6ghsZkOmceH1NwRvOu5S+wVJAtPHXb2WJUWPr?=
 =?iso-8859-2?Q?UYM7R50V3Rbg64dHQa1aoZjNWAoek57FvkCcqDCnY=3D?=
Content-Type: text/plain; charset="iso-8859-2"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN6PR02MB4157.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: edd5232c-f037-4bde-a2e4-08dc96ba7522
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Jun 2024 15:04:35.2725
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR02MB9702

From: Petr Tesařík <petr@tesarici.cz> Sent: Thursday, June 27, 2024 12:21 AM

[...]

> > @@ -187,10 +169,13 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> >  	 * This barrier pairs with smp_mb() in swiotlb_find_slots().
> >  	 */
> >  	smp_rmb();
> > -	return READ_ONCE(dev->dma_uses_io_tlb) &&
> > -		swiotlb_find_pool(dev, paddr);
> > +	if (READ_ONCE(dev->dma_uses_io_tlb))
> > +		return swiotlb_find_pool(dev, paddr);
> > +	return NULL;
> >  #else
> > -	return paddr >= mem->defpool.start && paddr < mem->defpool.end;
> > +	if (paddr >= mem->defpool.start && paddr < mem->defpool.end)
> > +		return &mem->defpool;
>
> Why are we open-coding swiotlb_find_pool() here? It does not make a
> difference now, but if swiotlb_find_pool() were to change, both places
> would have to be updated.
>
> Does it save a reload from dev->dma_io_tlb_mem? IOW is the compiler
> unable to optimize it away?
>
> What about this (functionally identical) variant:
>
> #ifdef CONFIG_SWIOTLB_DYNAMIC
> 	smp_rmb();
> 	if (!READ_ONCE(dev->dma_uses_io_tlb))
> 		return NULL;
> #else
> 	if (paddr < mem->defpool.start || paddr >= mem->defpool.end)
> 		return NULL;
> #endif
>
> 	return swiotlb_find_pool(dev, paddr);
>

Yeah, I see your point. I'll try this and see what the generated code
looks like. It might take me a couple of days to get to it.

Michael


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 15:07:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 15:07:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750198.1158456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqj8-0005GX-AC; Thu, 27 Jun 2024 15:07:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750198.1158456; Thu, 27 Jun 2024 15:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqj8-0005GQ-7L; Thu, 27 Jun 2024 15:07:58 +0000
Received: by outflank-mailman (input) for mailman id 750198;
 Thu, 27 Jun 2024 15:07:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GyZL=N5=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sMqj7-0005GJ-Ak
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 15:07:57 +0000
Received: from mail-lj1-x22d.google.com (mail-lj1-x22d.google.com
 [2a00:1450:4864:20::22d])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 08a949c9-3497-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 17:07:56 +0200 (CEST)
Received: by mail-lj1-x22d.google.com with SMTP id
 38308e7fff4ca-2ec408c6d94so90364181fa.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 08:07:56 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 41be03b00d2f7-72745d06529sm1154394a12.24.2024.06.27.08.07.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 08:07:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08a949c9-3497-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719500876; x=1720105676; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=baYQFBulA6V6dKa4x0FhBbfUwASuTErtziwSk4qT0uw=;
        b=RwU8yPm3JRvMhr4bU3XuQH7ueVg/EAOsvoyXcvKtcVTiDRslbP6jAl5fH0LsbQssP7
         lFRKcI7LIE8nN9R+swEiAstTcE2KehiW75usjaR2mvSNbpNUads1xTooYnhcZrPTeYQn
         tz3KJ3SOwObHmx5JE1qHzVWA9tB2mgv/vVQ/xGCqATJd2b5jAxm734Vogk942NgZySu7
         kO2mJdd9wQct0lLPC4sP66r/blC3oS/iCuJXJqmZT2vosZOegqmtH5V3xtnRX7lh8RDr
         ZuIj5uxtuZdLbd/T0hr/hNAWvS0uGJsrGfZqtfYw0MeEOaDw31zSBhXSgmEgcsZke7UH
         heiA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719500876; x=1720105676;
        h=content-transfer-encoding:in-reply-to:autocrypt:from:cc
         :content-language:references:to:subject:user-agent:mime-version:date
         :message-id:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=baYQFBulA6V6dKa4x0FhBbfUwASuTErtziwSk4qT0uw=;
        b=omLiZx72T/tuujGcklybuQelp2HjNIwUnAU161UI3CdLLntIEZ+d7r08QoyD18yWeh
         5NPY3Dku8fjNhtOvPR1pM2yRUgi9ZUcr+IjN17HtnUCLI4243OcVBOJ9xytnGvgRmH2x
         1UDx+hqiY8lkgwA0wrBv4zS6NHtuwzizjNLhIXy2dCQIdYyDUn5PHiSS33PaIpd2WLQP
         +vmft1kyNi2SpZe22wwgxmH1fCwoBj7skAD717rfPpfXCRuRB6xs/0wS0eo3jaNRTFtP
         vqjB46+fF6Wyo2jIKTr/ArzFWKdKO0GYLh2POkrZ17qoT6PskR4fxJLjqlLaFDbIjPnQ
         wfMQ==
X-Gm-Message-State: AOJu0YzmnINetpST+RmcG1on7H91OiElSg4rORQmNcWjLThHX50fKVnn
	2cnOE+CWGZdUyhIqRRn27tezu8i6ihVASgRybNmcApVMB3p/dlJKmojxWtOYnZ+3GiukLOmbfLQ
	=
X-Google-Smtp-Source: AGHT+IFM2bXBjvrmpj6ri7mcGC6JARytTXSVaembUTjZeJ1v9PvMq5pDevvokd3fjEUSInwcp8dxsw==
X-Received: by 2002:a2e:9998:0:b0:2eb:eb7c:ec1b with SMTP id 38308e7fff4ca-2ec59329f25mr108310171fa.25.1719500875638;
        Thu, 27 Jun 2024 08:07:55 -0700 (PDT)
Message-ID: <5be12b37-ddc6-4bce-a25d-758682d8f0fe@suse.com>
Date: Thu, 27 Jun 2024 17:07:48 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: xen | Failed pipeline for staging | 402e4732
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <667d6d349798_2fbca341149a@gitlab-sidekiq-catchall-v2-57c8c99f7-ll4tl.mail>
Content-Language: en-US
Cc: Nicola Vetrini <nicola.vetrini@bugseng.com>
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <667d6d349798_2fbca341149a@gitlab-sidekiq-catchall-v2-57c8c99f7-ll4tl.mail>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 27.06.2024 15:46, GitLab wrote:
> 
> 
> Pipeline #1350627221 has failed!
> 
> Project: xen ( https://gitlab.com/xen-project/xen )
> Branch: staging ( https://gitlab.com/xen-project/xen/-/commits/staging )
> 
> Commit: 402e4732 ( https://gitlab.com/xen-project/xen/-/commit/402e473249cf62dd4c6b3b137aa845db0fe1453a )
> Commit Message: x86/traps: address violations of MISRA C Rule 2...
> Commit Author: Nicola Vetrini
> Committed by: Jan Beulich ( https://gitlab.com/jbeulich )
> 
> 
> Pipeline #1350627221 ( https://gitlab.com/xen-project/xen/-/pipelines/1350627221 ) triggered by Ganis ( https://gitlab.com/ganis )
> had 1 failed job.
> 
> Job #7202274595 ( https://gitlab.com/xen-project/xen/-/jobs/7202274595/raw )
> 
> Stage: analyze
> Name: eclair-x86_64

Without any earlier errors that I could spot, the build log ends in
section_end:1719492504:step_script
section_start:1719492504:upload_artifacts_on_failure
Uploading artifacts for failed job
Uploading artifacts...
ECLAIR_out/*.log: found 2 matching artifact files and directories
WARNING: ECLAIR_out/*.txt: no matching files. Ensure that the artifact path is relative to the working directory (/builds/xen-project/xen)
*.log: found 2 matching artifact files and directories
Uploading artifacts as "archive" to coordinator... 201 Created  id=7202274595 responseStatus=201 Created token=glcbt-66
Uploading artifacts...
WARNING: gl-code-quality-report.json: no matching files. Ensure that the artifact path is relative to the working directory (/builds/xen-project/xen)
ERROR: No files to upload
section_end:1719492506:upload_artifacts_on_failure
section_start:1719492506:cleanup_file_variables
Cleaning up project directory and file based variables
section_end:1719492506:cleanup_file_variables
ERROR: Job failed: exit code 137

IOW - some failure somewhere, but nothing noticeable in the log file. Is
this an expected thing for this specific job?

Jan



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 15:25:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 15:25:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750210.1158466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqzu-0000B1-N7; Thu, 27 Jun 2024 15:25:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750210.1158466; Thu, 27 Jun 2024 15:25:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMqzu-0000At-Jk; Thu, 27 Jun 2024 15:25:18 +0000
Received: by outflank-mailman (input) for mailman id 750210;
 Thu, 27 Jun 2024 15:25:17 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/xp/=N5=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sMqzt-0000An-8T
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 15:25:17 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 74477b94-3499-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 17:25:16 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 2DCFC68BFE; Thu, 27 Jun 2024 17:25:13 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74477b94-3499-11ef-90a3-e314d9c70b13
Date: Thu, 27 Jun 2024 17:25:13 +0200
From: "hch@lst.de" <hch@lst.de>
To: Michael Kelley <mhklinux@outlook.com>
Cc: Petr =?utf-8?B?VGVzYcWZw61r?= <petr@tesarici.cz>,
	"hch@lst.de" <hch@lst.de>,
	"robin.murphy@arm.com" <robin.murphy@arm.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"will@kernel.org" <will@kernel.org>,
	"jgross@suse.com" <jgross@suse.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Message-ID: <20240627152513.GA23497@lst.de>
References: <20240607031421.182589-1-mhklinux@outlook.com> <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com> <20240627060251.GA15590@lst.de> <20240627085216.556744c1@meshulam.tesarici.cz> <SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Jun 27, 2024 at 02:59:03PM +0000, Michael Kelley wrote:
> Conceptually, it's still being used as a boolean function based on
> whether the return value is NULL.  Renaming it to swiotlb_get_pool()
> more accurately describes the return value, but obscures the
> intent of determining if it is a swiotlb buffer.  I'll think about it.
> Suggestions are welcome.

Just keep is_swiotlb_buffer as a trivial inline helper that returns
bool.



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 15:28:07 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 15:28:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750217.1158476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMr2Q-0000nJ-2e; Thu, 27 Jun 2024 15:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750217.1158476; Thu, 27 Jun 2024 15:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMr2P-0000nC-WB; Thu, 27 Jun 2024 15:27:53 +0000
Received: by outflank-mailman (input) for mailman id 750217;
 Thu, 27 Jun 2024 15:27:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NNyG=N5=cloud.com=george.dunlap@srs-se1.protection.inumbo.net>)
 id 1sMr2O-0000n6-M6
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 15:27:52 +0000
Received: from mail-oa1-x2c.google.com (mail-oa1-x2c.google.com
 [2001:4860:4864:20::2c])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id cf90b468-3499-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 17:27:49 +0200 (CEST)
Received: by mail-oa1-x2c.google.com with SMTP id
 586e51a60fabf-25d57957b37so1079222fac.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 08:27:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf90b468-3499-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=cloud.com; s=cloud; t=1719502068; x=1720106868; darn=lists.xenproject.org;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:from:to:cc:subject:date
         :message-id:reply-to;
        bh=R0Qw53IU5ENt3GoeaIEHlWBuk/kuwVhNuf26iyhfUC8=;
        b=hu4j7cZZapsqpZLPebFNBtfjGh0MEa031+5qcNJQs82PFAJcq5mFmMxh6NKe+VOV1E
         FEnQXj1osNrkY20CIDnmyuOR9tQwGmXJEk0AB7YAvHNbv7s1GGYxQ5jIiNZxCDWelswL
         cEQkZOoSHJOvUFNoNUMiTkXv7Yq3nyPZ7fE6w=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719502068; x=1720106868;
        h=content-transfer-encoding:cc:to:subject:message-id:date:from
         :in-reply-to:references:mime-version:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=R0Qw53IU5ENt3GoeaIEHlWBuk/kuwVhNuf26iyhfUC8=;
        b=QNx/RisW0+4odMsqMu5te7/6WNS1anrJUWeHbEeaDqPygNZLdG17WoUKaWBvRcvTTC
         +cT45PQCK+fLFPhVEMXljQ1VrllilRYlkXctSAa4S/nsajtGHGTg7RpjBi/i1gSE0iYX
         rKjYRT0kI5cgnHlzdV8GlZrwJg5n1pNziQvfo+KQu+uQtIihsJCp76EMlfcder+6/w/n
         H/AWAQWol1RFyqC6uqI3RIYbQKBF9sBwTPkZYEYZL1QSi9PkW2duX3m73WQH246ceQ72
         ZhxFs/y/roK7cTH6OAIrjF9/ZFvpM45LVV2aV1ShrLQGwwl2ksMjW+rKdU6vZQGdwf28
         z+FQ==
X-Forwarded-Encrypted: i=1; AJvYcCUBUra3tDBlTFIc3wN0zN62PtcoZuEw0dIoMie4RJt/OiMF0Xz1q3YwTJHYCvaS9sOEPwm8mNkp4Y+sfU2ApMjurntnZIm3nS6jzetjQmk=
X-Gm-Message-State: AOJu0YyqvlVe8rwTBYgmbghBwjxfyVLC4hMTeuyhSHX83szqCP5AFcmR
	IEXsSyUXdh71gjm8vDodUTevKMI+vRFdFwyj141r4w/SmFHfxb2QudvI4uBsBYsh7fKbp87kAot
	26i2UgwJB/WyW+hEF8R2hXRM6PHB2uQ7Zj1uSAw==
X-Google-Smtp-Source: AGHT+IFFTK4owjDhbyTrPH7JUoWcKaMK2NNl7L7xjyVvxc3ECnnOf8ie4WoZRtaz1rUkn4SApqoqIWBRI10qDkGXtKk=
X-Received: by 2002:a05:6871:58a4:b0:258:434b:729e with SMTP id
 586e51a60fabf-25d06bc7f2dmr13979378fac.7.1719502068386; Thu, 27 Jun 2024
 08:27:48 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1719319093.git.oleksii.kurochko@gmail.com>
 <f14f2c5629a75856f4bafdbff3cc165c373f8dc2.1719319093.git.oleksii.kurochko@gmail.com>
 <4a4e37a9-eac7-4e72-8845-6b4bbd7bafe6@suse.com> <c52181a7aca8b56716d7ee354ebda9d32e67816c.camel@gmail.com>
 <f324a4f3-b64d-4b20-92d0-8cfea050d44a@suse.com>
In-Reply-To: <f324a4f3-b64d-4b20-92d0-8cfea050d44a@suse.com>
From: George Dunlap <george.dunlap@cloud.com>
Date: Thu, 27 Jun 2024 16:27:37 +0100
Message-ID: <CA+zSX=aO2vDh8mv2OSwUxWkAzPtE-FkAP426nXTz5e78OmRF9A@mail.gmail.com>
Subject: Re: [PATCH v13 07/10] xen/common: fix build issue for common/trace.c
To: Jan Beulich <jbeulich@suse.com>
Cc: Oleksii <oleksii.kurochko@gmail.com>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, Jun 26, 2024 at 11:46 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.06.2024 18:23, Oleksii wrote:
> > On Tue, 2024-06-25 at 16:25 +0200, Jan Beulich wrote:
> >> On 25.06.2024 15:51, Oleksii Kurochko wrote:
> >>> During Gitlab CI the randconfig job for RISC-V failed with an error:
> >>>  common/trace.c:57:22: error: expected '=', ',', ';', 'asm' or
> >>>                               '__attribute__' before
> >>> '__read_mostly'
> >>>    57 | static u32 data_size __read_mostly;
> >>>
> >>> Signed-off-by: Oleksii Kurochko <oleksii.kurochko@gmail.com>
> >>
> >> Acked-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> If you give a release-ack, this can go in right away, I think.
> > Release-Acked-by: Oleksii Kurochko  <oleksii.kurochko@gmail.com>
>
> Thanks, but actually I was misled by the subject prefix. From a formal
> perspective this really wants an ack from George (and mine doesn't
> count for anything at all).

Acked-by: George Dunlap <george.dunlap@cloud.com>


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 15:56:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 15:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750235.1158488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMrU0-0005NJ-8T; Thu, 27 Jun 2024 15:56:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750235.1158488; Thu, 27 Jun 2024 15:56:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMrU0-0005NC-5z; Thu, 27 Jun 2024 15:56:24 +0000
Received: by outflank-mailman (input) for mailman id 750235;
 Thu, 27 Jun 2024 15:56:23 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kCyE=N5=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sMrTz-0005N6-16
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 15:56:23 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id cc5a54c2-349d-11ef-90a3-e314d9c70b13;
 Thu, 27 Jun 2024 17:56:21 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 2480F4EE073C;
 Thu, 27 Jun 2024 17:56:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc5a54c2-349d-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Thu, 27 Jun 2024 17:56:21 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: xen | Failed pipeline for staging | 402e4732
In-Reply-To: <5be12b37-ddc6-4bce-a25d-758682d8f0fe@suse.com>
References: <667d6d349798_2fbca341149a@gitlab-sidekiq-catchall-v2-57c8c99f7-ll4tl.mail>
 <5be12b37-ddc6-4bce-a25d-758682d8f0fe@suse.com>
Message-ID: <a1f8694a25db131fe20aded06651841b@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-27 17:07, Jan Beulich wrote:
> On 27.06.2024 15:46, GitLab wrote:
>> 
>> 
>> Pipeline #1350627221 has failed!
>> 
>> Project: xen ( https://gitlab.com/xen-project/xen )
>> Branch: staging ( https://gitlab.com/xen-project/xen/-/commits/staging 
>> )
>> 
>> Commit: 402e4732 ( 
>> https://gitlab.com/xen-project/xen/-/commit/402e473249cf62dd4c6b3b137aa845db0fe1453a 
>> )
>> Commit Message: x86/traps: address violations of MISRA C Rule 2...
>> Commit Author: Nicola Vetrini
>> Committed by: Jan Beulich ( https://gitlab.com/jbeulich )
>> 
>> 
>> Pipeline #1350627221 ( 
>> https://gitlab.com/xen-project/xen/-/pipelines/1350627221 ) triggered 
>> by Ganis ( https://gitlab.com/ganis )
>> had 1 failed job.
>> 
>> Job #7202274595 ( 
>> https://gitlab.com/xen-project/xen/-/jobs/7202274595/raw )
>> 
>> Stage: analyze
>> Name: eclair-x86_64
> 
> Without any earlier errors that I could spot the build log ends in
> 
> section_end:1719492504:step_script
> section_start:1719492504:upload_artifacts_on_failure
> Uploading artifacts for failed job
> Uploading artifacts...
> ECLAIR_out/*.log: found 2 matching artifact files and directories
> WARNING: ECLAIR_out/*.txt: no matching files. Ensure that the
> artifact path is relative to the working directory
> (/builds/xen-project/xen)
> *.log: found 2 matching artifact files and directories
> Uploading artifacts as "archive" to coordinator... 201 Created
> id=7202274595 responseStatus=201 Created token=glcbt-66
> Uploading artifacts...
> WARNING: gl-code-quality-report.json: no matching files. Ensure
> that the artifact path is relative to the working directory
> (/builds/xen-project/xen)
> ERROR: No files to upload
> section_end:1719492506:upload_artifacts_on_failure
> section_start:1719492506:cleanup_file_variables
> Cleaning up project directory and file based
> variables
> section_end:1719492506:cleanup_file_variables
> ERROR: Job failed: exit code 137
> 
> IOW - some failure somewhere, but nothing noticeable in the log file. Is
> this an expected thing for this specific job?
> 

No, and I noticed no failure on our end either.
I saved the logs, so I would try running it again to see if it 
reproduces.

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 16:03:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 16:03:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750242.1158498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMraU-0007jr-TI; Thu, 27 Jun 2024 16:03:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750242.1158498; Thu, 27 Jun 2024 16:03:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMraU-0007jk-Qm; Thu, 27 Jun 2024 16:03:06 +0000
Received: by outflank-mailman (input) for mailman id 750242;
 Thu, 27 Jun 2024 16:03:05 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfIA=N5=outlook.com=mhklinux@srs-se1.protection.inumbo.net>)
 id 1sMraT-0007je-Im
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 16:03:05 +0000
Received: from NAM02-SN1-obe.outbound.protection.outlook.com
 (mail-sn1nam02olkn20828.outbound.protection.outlook.com
 [2a01:111:f400:7ea9::828])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id bb48b8a1-349e-11ef-b4bb-af5377834399;
 Thu, 27 Jun 2024 18:03:03 +0200 (CEST)
Received: from SN6PR02MB4157.namprd02.prod.outlook.com (2603:10b6:805:33::23)
 by SA0PR02MB7177.namprd02.prod.outlook.com (2603:10b6:806:ec::6) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7698.35; Thu, 27 Jun
 2024 16:02:59 +0000
Received: from SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df]) by SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df%2]) with mapi id 15.20.7698.033; Thu, 27 Jun 2024
 16:02:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb48b8a1-349e-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=T94H8z0IUYVwJMXeKNGb9essLtbITesxCo6piHdGz5dRbQTW75rPIndxmcZ2tNn2jwlut+U5msIUZDgo1zGb+Ac13CjSEbE6MbLl7HrGSK39ZBgqvUofrB9LVjJGI8QE/Ne2gGEQ9rkgd9xs1r3Ys0Jzv18/lGLTgv+UZ0P+ISFvF6O2Ga53jeoqJXUjvVDVXnS2xX8AyClipuDLJ5vsUcnYVscOKw4za/GztQbFjXoWPdWkcDeDN3Q26k6OmV2TtDCQn+b/LIyFzp2qrxIfU5Ho5LdY7l35KFkYRRWY7g8WUjX+wSL/nJ6c0KuEPc6N3FJWotHFGMHqY0js5cqD1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=d6p2xWeJyrMmJRxbPJMuMDfG+p7zrMeNT592PFL1mEc=;
 b=iockCQcBDQkzrPjn/PgSzD2jev0AKXkXFlj3L1SDyadhdGKqfSBjmDkTl7H4OVqzhZdnlnzwdkk4vRMbC0ME7bXR4KwkQbHUGJz6ulCWGRrlPWV3wvEK2rcmpj7fZY+cI/sifAU2pU7yGt0l5RCevMp+BPAOUOtD+jyzIFwmrMbI42lx+QFmWZ6OynSfpvPkCCUyJcIHpqFcOp8pHBumNbaO/KEe2XCNH/7BAz+B9O6t/byZcUtr+vPCb0HOhNRtIVo28yPXJeS03VVDY2T043XgAa8GWKD4k5aUBJ1FI/OCH6GOSwNMtJw++mEU+eCX11GqVqaEYFEMTLanv+XpGw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=outlook.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=d6p2xWeJyrMmJRxbPJMuMDfG+p7zrMeNT592PFL1mEc=;
 b=AwuNnj6tL6anyc9tcU39q3G0XF7YYuJa+bRNwnt1OJjLzK3O8C0osb5QIP7mcdzjydgp5OW3DrtCLkDxUnqqy5A7eLr0BLsBIcsqfYo/SLcT5vVX2An1PODRdwy7wr8TNFlMKLAmOGGvnqxGpeQMu7E/gl2x3EBRCTut3UtIG4uKJB5JC8GGMtNtdNQjOitkW77+2xLrlW4iR5zJJCVfEhJWHDD3fDiKbja0all285lsAyGcnKj5pi1WlWgMPyKmV9gNoP+FgOPUTEutLHj1fsucQmMaiX/wMXYnlQsFf4887xKrhxtqQW3i8dtTrNQXcOUlt5GPB5mqfZ7dj6CgBA==
From: Michael Kelley <mhklinux@outlook.com>
To: "hch@lst.de" <hch@lst.de>
CC: =?iso-8859-2?Q?Petr_Tesa=F8=EDk?= <petr@tesarici.cz>,
	"robin.murphy@arm.com" <robin.murphy@arm.com>, "joro@8bytes.org"
	<joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Topic: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Index:
 AQHauIjoJBWSrCyO6UWzcncSceBiMLHa1Z9wgABomICAAA3OAIAAhiSggAAJLYCAAAn5IA==
Date: Thu, 27 Jun 2024 16:02:59 +0000
Message-ID:
 <SN6PR02MB4157D9B1A64FF78461D6A7EDD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
 <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
 <20240627060251.GA15590@lst.de>
 <20240627085216.556744c1@meshulam.tesarici.cz>
 <SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
 <20240627152513.GA23497@lst.de>
In-Reply-To: <20240627152513.GA23497@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-tmn: [qE0gvg5VJgMwG45s6rvsjfZIwXoxxqGi]
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SN6PR02MB4157:EE_|SA0PR02MB7177:EE_
x-ms-office365-filtering-correlation-id: febd5df1-5729-4a6d-15ad-08dc96c29dad
x-microsoft-antispam:
 BCL:0;ARA:14566002|461199028|56899033|440099028|3412199025|102099032;
x-microsoft-antispam-message-info:
 FwnbVCEnrekiFNnnwxcitXyxv5FnTvbouCjwiOobuBbeB7dToY40Glf+UC0zs2Euubqd8R8c1+s7yN4yw3LHTaTQUuawj5sl7SeV+ws1q4sCsDxPul5QQMiPoD4qsQ6vpQWAEnjJS+8MwPWQWIHQ+LdpK7fwCmT3i543RoUfYEDTlomWVfpjLfI0NjrwVsV6Uu0v0ecnFqhWh47w2CBl9du4cimmleexNAC/YVl5nzlu2bWP4LDzKyRNrRh8QdB1bV20QCAWs0nKWdo6RHZS+BLa/qPvs/KkeirUWMbLUdsLm3/6PYf+bvBO87H5emXlq8XqrOx0SZcxx28qA59R3IZzkEVawRVnf+quFO+HUzQRJIoT5CzHfiZD1jHkDpFL3mZkvIBKEy694c1eFYUrNK1z1mKfCRR6OA86+yhsD7F8m+gwSaEa8XSQpOozXJ7+9JQGiIwPHSxWPUqhtj3/7AxBdj+vG7zIgNwGiHp7an6JWmyesyAS/qP72DfPSR0K2/jSVLWocROILs2CdlE/9pQwof7aa7dkMrMvkDhSlCxpgozjqnQt4Q3GH0HDnd1H
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?iso-8859-2?Q?RCzYSfD7qjg5JShhCWZhCaU7C3hBM9N9rlKPqM6pmpScQ7B0eOcvO5RsQE?=
 =?iso-8859-2?Q?wZbZrPNyJIL8W+EaSFrr0Wb9ivA//9HexTKVz8J+r6V7zt45t/tV7DDr7n?=
 =?iso-8859-2?Q?N7s0JFvxtGTW+7Y0fEVAocuFL5uUsm+3ENrvI3mAUMA90+xoVRR2txCR+N?=
 =?iso-8859-2?Q?ZLatxr+JLpBmRymYa2TQUJIZuUNF3znUKIcB/ZKzbr2nkzenGFJ7RApt1J?=
 =?iso-8859-2?Q?0xvdWFk9E8Y3uU+heEMZqssEiwSSY7/Zt5L1cg4bE7o6aMFTekJF1ClnMs?=
 =?iso-8859-2?Q?Uy/i4osJS+bv+W1pVeknBKtF/xT5wvMAsV5e6RAmDEcIdki/puEZtTov1i?=
 =?iso-8859-2?Q?2Kyjyik3OqCweG0Pa4B8Mv1eg4KuhMijjyFrbqL1617OF4zGpHxSP/8rPB?=
 =?iso-8859-2?Q?ZDCtv1st8vNfAhXs9H0H3Q9mkXQQTpYennhU8CwRGb61TC/mHp1qxBfb//?=
 =?iso-8859-2?Q?qBbskg3mz4gqUjSku16tZ/iaELlMf9h2x0qrXWXQyi8JIo782g3zHloy53?=
 =?iso-8859-2?Q?0W9q71diRf+14XF4yijtfwe9xDMTqhtzZdOzVtbUcJgRxbq1jLgPQSNrUb?=
 =?iso-8859-2?Q?k1YOXWWZLFPZP59JKoUkAERJcRda7QQV56pJd9jv2NNVe3SzR2XVBPwHIC?=
 =?iso-8859-2?Q?u02RK3ZiAZGIWq8QBWllIl6jd2rhQZyl0KK5RncGWRs5gddMEmMzG5dKOC?=
 =?iso-8859-2?Q?xbQ/u1JLg6Y9WzBmrWIosHJJLUOEUu7uogETYvlKEYuagpntnz+kEwG5Pm?=
 =?iso-8859-2?Q?njouI6BwZK/JC+fljyPXeuBDKqfyE3QAkX+aGP0xRQK/0k+vapBHzKVLxo?=
 =?iso-8859-2?Q?j38pvydR0LCEVf7dCrcFtzaFAOIYudfjVw3YZmAPIaU4FfcVPQ3LyY/Owf?=
 =?iso-8859-2?Q?UgECTnSpRfi2MB2aQkpnaVipT1ciB2hdaH9lAfdWb+8Ijlbve1wtN/Ct4b?=
 =?iso-8859-2?Q?hLo53jRKGSZ07lfYXbilA177ObYM3kqIlFWZoMU0cvOgHCxfOecJ70gd1v?=
 =?iso-8859-2?Q?zfxiTTlY4anq1Av0xAp7H2fO9T8O+mFUJSM+6hI22Kxng1SDUsip4qIJVj?=
 =?iso-8859-2?Q?FraCqBj00mNJkYsH4yeTcsrWhKr1u6H7hzeep/CC5w7i5dpNYWIJ060ITK?=
 =?iso-8859-2?Q?O6c2Ry6xQ5YdmzoLlNq84t5y3iW2xBZxotZJzYZjsWCYefAR2WcwiG4vdb?=
 =?iso-8859-2?Q?ZVO69chADr+XQLd8VVOgxtfJUVblnHKiq0dJ5E7mb7n5NA/bsDhglTwGi7?=
 =?iso-8859-2?Q?vass+11CLRai9T8WvfmEOCJGiVvzuuUd5FnHRjaBM=3D?=
Content-Type: text/plain; charset="iso-8859-2"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN6PR02MB4157.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: febd5df1-5729-4a6d-15ad-08dc96c29dad
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-originalarrivaltime: 27 Jun 2024 16:02:59.2878
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR02MB7177

From: hch@lst.de <hch@lst.de> Sent: Thursday, June 27, 2024 8:25 AM
>
> On Thu, Jun 27, 2024 at 02:59:03PM +0000, Michael Kelley wrote:
> > Conceptually, it's still being used as a boolean function based on
> > whether the return value is NULL.  Renaming it to swiotlb_get_pool()
> > more accurately describes the return value, but obscures the
> > intent of determining if it is a swiotlb buffer.  I'll think about it.
> > Suggestions are welcome.
>
> Just keep is_swiotlb_buffer as a trivial inline helper that returns
> bool.

I don't understand what you are suggesting.  Could you elaborate a bit?
is_swiotlb_buffer() can't be trivial when CONFIG_SWIOTLB_DYNAMIC
is set.

Michael



From xen-devel-bounces@lists.xenproject.org Thu Jun 27 18:00:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 18:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750279.1158508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMtQI-0004d3-1R; Thu, 27 Jun 2024 18:00:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750279.1158508; Thu, 27 Jun 2024 18:00:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMtQH-0004cw-UF; Thu, 27 Jun 2024 18:00:41 +0000
Received: by outflank-mailman (input) for mailman id 750279;
 Thu, 27 Jun 2024 18:00:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMtQH-0004cm-3u; Thu, 27 Jun 2024 18:00:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMtQH-0007j9-08; Thu, 27 Jun 2024 18:00:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMtQG-0001uc-MA; Thu, 27 Jun 2024 18:00:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMtQG-00037c-Lh; Thu, 27 Jun 2024 18:00:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bWrB2WZJvV7YuxcOsDd7deLxeEfnA/wuwyca9SgwFjw=; b=Ppu5jqN3hc6Vd4L7pYce4y0Hss
	m8asWgM2rambogYq+so52zeYmkP3fmX6CEcw9XNBMBT0VmtCy2YQGftJMAHRcoQwwiO2/5rgfyaXM
	LQjidKDXGW24hT6C3fgltu6dIsmhFamElPXQHSTF9lS7OrTZOMzPqHZHubCgs59sn7D4=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186536-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186536: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=6862b9d538d96363635677198899e1669e591259
X-Osstest-Versions-That:
    ovmf=ae09721a65ab3294439f6fa233adaf3b897f702f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Jun 2024 18:00:40 +0000

flight 186536 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186536/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 6862b9d538d96363635677198899e1669e591259
baseline version:
 ovmf                 ae09721a65ab3294439f6fa233adaf3b897f702f

Last test of basis   186520  2024-06-26 18:16:15 Z    0 days
Testing same since   186536  2024-06-27 15:41:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ae09721a65..6862b9d538  6862b9d538d96363635677198899e1669e591259 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 21:46:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 21:46:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750344.1158518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMwwj-0002s9-Ce; Thu, 27 Jun 2024 21:46:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750344.1158518; Thu, 27 Jun 2024 21:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMwwj-0002s2-AA; Thu, 27 Jun 2024 21:46:25 +0000
Received: by outflank-mailman (input) for mailman id 750344;
 Thu, 27 Jun 2024 21:46:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMwwh-0002rs-TZ; Thu, 27 Jun 2024 21:46:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMwwh-00033R-Qi; Thu, 27 Jun 2024 21:46:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMwwg-0005zh-Tu; Thu, 27 Jun 2024 21:46:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMwwg-0004lk-TO; Thu, 27 Jun 2024 21:46:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zvz0/rWzPQhUoUCiDCYLTQcKUiPk8dfQzE+l190vr4o=; b=JFVEe77wtFqnWcmpw7Xe7Fn91Q
	4PPTr5wyXf1OoDoPjaXTVa0scEIEUJQehuhS/pfy87CzYCdQXAnNJn18PI1stnX/n0QC8Ma9ulBge
	++hKn6Y1aYtL9uuFySFXzzK48oh7iHuinw0tFicZzqR4SN4hLhU54121e5sq+Tr7IMms=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186530-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186530: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-examine:host-install:broken:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-raw:debian-di-install:fail:regression
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=afcd48134c58d6af45fb3fdb648f1260b20f2326
X-Osstest-Versions-That:
    linux=24ca36a562d63f1bff04c3f11236f52969c67717
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Jun 2024 21:46:22 +0000

flight 186530 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186530/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-examine      5 host-install           broken REGR. vs. 186528
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 186528
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 186528
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 186528
 test-armhf-armhf-xl-raw      12 debian-di-install        fail REGR. vs. 186528

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186528
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186528
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                afcd48134c58d6af45fb3fdb648f1260b20f2326
baseline version:
 linux                24ca36a562d63f1bff04c3f11236f52969c67717

Last test of basis   186528  2024-06-26 22:42:23 Z    0 days
Testing same since   186530  2024-06-27 08:02:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  aigourensheng <shechenglong001@gmail.com>
  Andrew Bresticker <abrestic@rivosinc.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@gmail.com>
  Barry Song <v-songbaohua@oppo.com>
  Christoph Hellwig <hch@lst.de>
  David Hildenbrand <david@redhat.com>
  Hugh Dickins <hughd@google.com>
  Jan Kara <jack@suse.cz>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jeff Xu <jeffxu@chromium.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marco Elver <elver@google.com>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Suren Baghdasaryan <surenb@google.com>
  Vlastimil Babka <vbabka@suse.cz>
  yangge <yangge1116@126.com>
  Zhaoyang Huang <zhaoyang.huang@unisoc.com>
  Zi Yan <ziy@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-step test-armhf-armhf-examine host-install

Not pushing.

(No revision log; it would be 538 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 22:59:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 22:59:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750358.1158529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMy5N-00023w-Iq; Thu, 27 Jun 2024 22:59:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750358.1158529; Thu, 27 Jun 2024 22:59:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMy5N-00023p-G5; Thu, 27 Jun 2024 22:59:25 +0000
Received: by outflank-mailman (input) for mailman id 750358;
 Thu, 27 Jun 2024 22:59:24 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMy5M-00023j-0B
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 22:59:24 +0000
Received: from dfw.source.kernel.org (dfw.source.kernel.org
 [2604:1380:4641:c500::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id e2dc9cae-34d8-11ef-b4bb-af5377834399;
 Fri, 28 Jun 2024 00:59:21 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by dfw.source.kernel.org (Postfix) with ESMTP id 1544C62015;
 Thu, 27 Jun 2024 22:59:19 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 8AC0DC2BBFC;
 Thu, 27 Jun 2024 22:59:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2dc9cae-34d8-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719529158;
	bh=GnS2fyzkpU1GbG4BllMF9tQOXyG+fdLkzeXmsq+0XnQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=UjR+4LYXsj+bRx4kX4F/s6lB2abZhCw0PPqDqN8GEzxAjn/WEALeypYQQG9+44zAE
	 eC3eR0mK17F/vomD6aDlPeRg9yq3AHs3OXcNMaGk4Gy+/TNm1mDTrJ4R67mg2Zepzi
	 8ljBUpvYiH6zmJHwgKsp50q53b2WQB8Xcbv+oDeTHBSsryR30+yCqLmYMMN07EDN9Y
	 D0Z4Sn5K1nGTztJ2MjHbdu+m+WuHdre21F1sJfFpNux1oz2WvX5tIx/MlxJwXeP+Yl
	 g9Y8KEXnEdntD2P3299RW25svR0qUzjH+VA3RAfCyW5e6rVRY7zn1fhcOi/EApIut7
	 qba0H4vwi6S0g==
Date: Thu, 27 Jun 2024 15:59:16 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Jan Beulich <jbeulich@suse.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, consulting@bugseng.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Federico Serafini <federico.serafini@bugseng.com>
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
In-Reply-To: <84eb22c8-7737-4e6b-8194-724c792c2d92@suse.com>
Message-ID: <alpine.DEB.2.22.394.2406271545210.3635@ubuntu-linux-20-04-desktop>
References: <cover.1719218291.git.federico.serafini@bugseng.com> <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com> <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop> <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
 <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop> <6441010f-c2f6-4098-bf23-837955dcf803@suse.com> <alpine.DEB.2.22.394.2406261758390.3635@ubuntu-linux-20-04-desktop> <84eb22c8-7737-4e6b-8194-724c792c2d92@suse.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 27 Jun 2024, Jan Beulich wrote:
> On 27.06.2024 03:53, Stefano Stabellini wrote:
> > On Wed, 26 Jun 2024, Jan Beulich wrote:
> >> On 26.06.2024 03:11, Stefano Stabellini wrote:
> >>> On Tue, 25 Jun 2024, Jan Beulich wrote:
> >>>> On 25.06.2024 02:54, Stefano Stabellini wrote:
> >>>>> On Mon, 24 Jun 2024, Federico Serafini wrote:
> >>>>>> Add break or pseudo keyword fallthrough to address violations of
> >>>>>> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
> >>>>>> every switch-clause".
> >>>>>>
> >>>>>> No functional change.
> >>>>>>
> >>>>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
> >>>>>> ---
> >>>>>>  xen/arch/x86/traps.c | 3 +++
> >>>>>>  1 file changed, 3 insertions(+)
> >>>>>>
> >>>>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> >>>>>> index 9906e874d5..cbcec3fafb 100644
> >>>>>> --- a/xen/arch/x86/traps.c
> >>>>>> +++ b/xen/arch/x86/traps.c
> >>>>>> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
> >>>>>>  
> >>>>>>      default:
> >>>>>>          ASSERT_UNREACHABLE();
> >>>>>> +        break;
> >>>>>
> >>>>> Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
> >>>>> statements" that can terminate a case, in addition to break.
> >>>>
> >>>> Why? Exactly the opposite is part of the subject of a recent patch, iirc.
> >>>> Simply because of the rules needing to cover both debug and release builds.
> >>>
> >>> The reason is that ASSERT_UNREACHABLE() might disappear from the release
> >>> build but it can still be used as a marker during static analysis. In
> >>> my view, ASSERT_UNREACHABLE() is equivalent to a noreturn function call
> >>> which has an empty implementation in release builds.
> >>>
> >>> The only reason I can think of to require a break; after an
> >>> ASSERT_UNREACHABLE() would be if we think the unreachability only applies
> >>> to debug builds, not release builds:
> >>>
> >>> - debug build: it is unreachable
> >>> - release build: it is reachable
> >>>
> >>> I don't think that is meant to be possible so I think we can use
> >>> ASSERT_UNREACHABLE() as a marker.
> >>
> >> Well. For one, such an assumption takes as a prereq that a debug build will
> >> be run through full coverage testing, i.e. all reachable paths proven to
> >> be taken. I understand that this prereq is intended to somehow be met,
> >> even if I'm having difficulty seeing what such a final proof would look
> >> like: Full coverage would, to me, mean that _every_ line is reachable. Yet
> >> clearly any ASSERT_UNREACHABLE() must never be reached.
> >>
> >> And then not covering for such cases takes the further assumption that
> >> debug and release builds are functionally identical. I'm afraid this would
> >> be a wrong assumption to make:
> >> 1) We may screw up somewhere, with code wrongly enabled only in one of the
> >>    two build modes.
> >> 2) The compiler may screw up, in particular with optimization.
> > 
> > I think there are two different issues here we are discussing.
> > 
> > One issue, like you said, has to do with coverage. It is important to
> > mark as "unreachable" any part of the code that is indeed unreachable
> > so that we can account it properly when we do coverage analysis. At the
> > moment the only "unreachable" marker that we have is
> > ASSERT_UNREACHABLE(), and I am hoping we can use it as part of the
> > coverage analysis we'll do.
> > 
> > However, there is a different separate question about what to do in the
> > Xen code after an ASSERT_UNREACHABLE(). E.g.:
> > 
> >              default:
> >                  ASSERT_UNREACHABLE();
> >                  return -EPERM; /* is it better with or without this? */
> >              }
> > 
> > Leaving coverage aside, would it be better to be defensive and actually
> > attempt to report errors back after an ASSERT_UNREACHABLE() like in the
> > example? Or is it better to assume the code is actually unreachable
> > hence there is no need to do anything afterwards?
> > 
> > On the one hand, being defensive sounds good; on the other hand, any code
> > we add after ASSERT_UNREACHABLE() is dead code which cannot be tested,
> > which is also not good. In this example, there is no way to test the
> > return -EPERM code path. We also need to consider what is the right
> > thing to do if Xen finds itself in an erroneous situation such as being
> > in an unreachable code location.
> > 
> > So, after thinking about it and also talking to the safety manager, I
> > think we should:
> > - implement ASSERT_UNREACHABLE with a warning in release builds
> 
> If at all, then controlled by a default-off Kconfig setting. This would,
> after all, raise the question of how ASSERT() should behave then. Imo
> the two should be consistent in this regard, and NDEBUG clearly results
> in the expectation that ASSERT() expands to nothing. Perhaps this is
> finally the time where we need to separate NDEBUG from CONFIG_DEBUG; we
> did discuss doing so before. Then in your release builds, you could
> actually leave assertions active.
 
Yes, a kconfig to define the behavior of ASSERT_UNREACHABLE in release
builds is fine. And you are right that we should consider doing
something similar for ASSERT too.

I think that in any environment where safety (i.e. correctness of
behavior) is a primary concern, attempting to continue without doing
anything after reaching a point that is supposed to be unreachable is
not a good idea.

I think Xen should do something in response to reaching a point it is
not supposed to reach. I don't know yet what the best course of action
is, but printing a warning seems to be the bare minimum.

Crashing the system is not a good idea, as it could potentially be
exploited by malicious guests (security might not be the primary concern
here, but it still matters).


> > - have "return -EPERM;" or similar for defensive programming
> 
> You don't say how you'd deal with the not-reachable aspect then.

We'll have to find a way to account for all the code that cannot be
tested. We would have a problem anyway due to the ASSERT_UNREACHABLE
checks, but the addition of "return -EPERM;" will make things slightly
worse.

I have been told to prioritize safety of the code and defensive
programming over coverage calculations.


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 23:05:57 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 23:05:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750371.1158539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMyBe-0003ZU-79; Thu, 27 Jun 2024 23:05:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750371.1158539; Thu, 27 Jun 2024 23:05:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMyBe-0003ZN-4b; Thu, 27 Jun 2024 23:05:54 +0000
Received: by outflank-mailman (input) for mailman id 750371;
 Thu, 27 Jun 2024 23:05:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMyBd-0003ZD-Jr; Thu, 27 Jun 2024 23:05:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMyBd-0004UL-IL; Thu, 27 Jun 2024 23:05:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sMyBd-0008BK-1Q; Thu, 27 Jun 2024 23:05:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sMyBd-00047W-10; Thu, 27 Jun 2024 23:05:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b3r8fxvn+Yir1ahquVSlnIDDGexDs3DBpFdgH39Z9N8=; b=qU700F4ygPuDRlzuOPrUqUIjdW
	T1UGqfWIulwL4oOhaHfgEVid/OFM23FmcWUUQjfk1U43Rtv8BOnMFjBb3orAqsA3GI8sVr5E0lOHg
	YAzQkRRseDZ/t4qAdfAgCVF96lf5QwATNIdrqbhVpJ7FPiK8WEu4CEnIvmIMe+mzQGvU=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186539-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186539: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=a5f147b2a31c093cc83a3f10cdda529c6b59799b
X-Osstest-Versions-That:
    ovmf=6862b9d538d96363635677198899e1669e591259
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 Jun 2024 23:05:53 +0000

flight 186539 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186539/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 a5f147b2a31c093cc83a3f10cdda529c6b59799b
baseline version:
 ovmf                 6862b9d538d96363635677198899e1669e591259

Last test of basis   186536  2024-06-27 15:41:23 Z    0 days
Testing same since   186539  2024-06-27 21:14:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
  dependabot[bot] <support@github.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   6862b9d538..a5f147b2a3  a5f147b2a31c093cc83a3f10cdda529c6b59799b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu Jun 27 23:19:13 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 Jun 2024 23:19:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750383.1158549 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMyOR-0005Nc-Gn; Thu, 27 Jun 2024 23:19:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750383.1158549; Thu, 27 Jun 2024 23:19:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMyOR-0005NV-DP; Thu, 27 Jun 2024 23:19:07 +0000
Received: by outflank-mailman (input) for mailman id 750383;
 Thu, 27 Jun 2024 23:19:06 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iDhR=N5=kernel.org=sstabellini@srs-se1.protection.inumbo.net>)
 id 1sMyOQ-0005NP-HT
 for xen-devel@lists.xenproject.org; Thu, 27 Jun 2024 23:19:06 +0000
Received: from sin.source.kernel.org (sin.source.kernel.org
 [2604:1380:40e1:4800::1])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id a3729df8-34db-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 01:19:04 +0200 (CEST)
Received: from smtp.kernel.org (transwarp.subspace.kernel.org [100.75.92.58])
 by sin.source.kernel.org (Postfix) with ESMTP id 962CACE38EB;
 Thu, 27 Jun 2024 23:19:00 +0000 (UTC)
Received: by smtp.kernel.org (Postfix) with ESMTPSA id 3F444C2BBFC;
 Thu, 27 Jun 2024 23:18:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3729df8-34db-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1719530339;
	bh=vvSrM9S+NU+xVveE+rpadMDdDOUqZa36aZzukwm2AMA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kppyzGEGFEdYO5X7bdOmIxnf5rBSOaMQyhX/rDCebj1oGa4cVRDeZLofzi1ptkKny
	 ZOpiklgKC/X62cTxUZdXVfXNCtvZiPmwmcVn436lQ7DGxkOSGwWkgIMFt/hC/lTwSX
	 eQ5ew3DOO6lnRSDy8toYf82KKvA0381vy96wGn1SgXt2KbaDHMe+i7/TDez4cLDvzx
	 m/Ekx0LrZUiIYPZR4Jj8THCOlvqNHgqgVXe0PKgPmUSfaKcjBeGCKwhQSCvKGUAyrx
	 6AXumJTdL6f4jgIWjnTOEYdZnOHCUG4MHbJwMeeHa7KfNk2e7Nc4gvxFU3vV5CfEEm
	 10PMFHIAfmWCA==
Date: Thu, 27 Jun 2024 16:18:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@ubuntu-linux-20-04-desktop
To: Nicola Vetrini <nicola.vetrini@bugseng.com>
cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, 
    michal.orzel@amd.com, xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, 
    consulting@bugseng.com, Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
    Julien Grall <julien@xen.org>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH] x86: p2m-pod: address violation of MISRA C Rule
 2.1
In-Reply-To: <a050ef1b662f64bb58afb2f6118254254dd1d649.1719478448.git.nicola.vetrini@bugseng.com>
Message-ID: <alpine.DEB.2.22.394.2406271617320.3635@ubuntu-linux-20-04-desktop>
References: <a050ef1b662f64bb58afb2f6118254254dd1d649.1719478448.git.nicola.vetrini@bugseng.com>
User-Agent: Alpine 2.22 (DEB 394 2020-01-19)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 27 Jun 2024, Nicola Vetrini wrote:
> The label 'out_unmap' is only reachable after ASSERT_UNREACHABLE,
> so the code below is only executed upon erroneously reaching that
> program point and calling domain_crash, thus resulting in the
> for loop after 'out_unmap' becoming unreachable in some configurations.
> 
> This is a defensive coding measure to have a safe fallback that is
> reachable in non-debug builds, and can thus be deviated with a
> comment-based deviation.
> 
> No functional change.
> 
> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>

The patch needs rebasing, as it no longer applies to staging.

Other than that:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

This is actually going to help also in terms of identifying impossible
code paths for coverage.

> ---
>  docs/misra/safe.json      | 8 ++++++++
>  xen/arch/x86/mm/p2m-pod.c | 1 +
>  2 files changed, 9 insertions(+)
> 
> diff --git a/docs/misra/safe.json b/docs/misra/safe.json
> index c213e0a0be3b..b114c9485c86 100644
> --- a/docs/misra/safe.json
> +++ b/docs/misra/safe.json
> @@ -60,6 +60,14 @@
>          },
>          {
>              "id": "SAF-7-safe",
> +            "analyser": {
> +                "eclair": "MC3R1.R2.1"
> +            },
> +            "name": "MC3R1.R2.1: statement unreachable in some configurations",
> +            "text": "Every path that can reach this statement is preceded by statements that make it unreachable in certain configurations (e.g. ASSERT_UNREACHABLE). This is understood as a means of defensive programming."
> +        },
> +        {
> +            "id": "SAF-8-safe",
>              "analyser": {},
>              "name": "Sentinel",
>              "text": "Next ID to be used"
> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
> index bd84fe9e27ee..5a96c46a2286 100644
> --- a/xen/arch/x86/mm/p2m-pod.c
> +++ b/xen/arch/x86/mm/p2m-pod.c
> @@ -1040,6 +1040,7 @@ out_unmap:
>       * Something went wrong, probably crashing the domain.  Unmap
>       * everything and return.
>       */
> +    /* SAF-7-safe Rule 2.1: defensive programming */
>      for ( i = 0; i < count; i++ )
>          if ( map[i] )
>              unmap_domain_page(map[i]);
> -- 
> 2.34.1
> 


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 00:18:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 00:18:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750406.1158569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMzK2-0004pg-K6; Fri, 28 Jun 2024 00:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750406.1158569; Fri, 28 Jun 2024 00:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMzK2-0004pZ-HV; Fri, 28 Jun 2024 00:18:38 +0000
Received: by outflank-mailman (input) for mailman id 750406;
 Fri, 28 Jun 2024 00:18:36 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0kmA=N6=m5p.com=ehem@srs-se1.protection.inumbo.net>)
 id 1sMzK0-0004pT-IQ
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 00:18:36 +0000
Received: from mailhost.m5p.com (mailhost.m5p.com [74.104.188.4])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id f3f21b6a-34e3-11ef-b4bb-af5377834399;
 Fri, 28 Jun 2024 02:18:33 +0200 (CEST)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:8ac4:0:0:0:0:f7])
 by mailhost.m5p.com (8.17.1/8.17.1) with ESMTPS id 45S0IGo1052639
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 27 Jun 2024 20:18:23 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.17.1/8.15.2/Submit) id 45S0IFhr052638;
 Thu, 27 Jun 2024 17:18:15 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f3f21b6a-34e3-11ef-b4bb-af5377834399
Date: Thu, 27 Jun 2024 17:18:15 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
        Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
        Kelly Choi <kelly.choi@cloud.com>
Subject: Re: Serious AMD-Vi(?) issue
Message-ID: <Zn4BRxTcXKufonw5@mattapan.m5p.com>
References: <ZgHwEGCsCLHiYU5J@mattapan.m5p.com>
 <ZgRXHQpamLIdu7dk@mattapan.m5p.com>
 <c2ce4002-58d5-48a3-949c-3c361c78c0ac@suse.com>
 <ZhdNxWNpM0KCzz8E@mattapan.m5p.com>
 <2aa4d1f4-ff37-4f12-bfbb-3ef5ad3f6fdd@suse.com>
 <ZiDBc3ye2wqmBAfq@mattapan.m5p.com>
 <f0bdb386-0870-4468-846c-6c8a91eaf806@suse.com>
 <ZiH0G5kN6m+wlNjn@mattapan.m5p.com>
 <Zj7vkp4r0EY9rxT4@mattapan.m5p.com>
 <ZkHTC4RpUSpKj4wf@macbook>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ZkHTC4RpUSpKj4wf@macbook>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=4.0.0
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-14) on mattapan.m5p.com

I'm rather surprised it was so long before the next system restart.
It seems to have been a quiet period as far as security updates go.  The
good news is I made several new observations, though I don't know how
valuable they are.

On Mon, May 13, 2024 at 10:44:59AM +0200, Roger Pau Monné wrote:
> 
> Does booting with `iommu=no-intremap` lead to any issues being
> reported?

On boot there was in fact less output.  Notably, the "AMD-Vi" messages
haven't shown up at all.  I haven't stressed it very much yet, but on
previous boots a message showed up the moment the MD-RAID1 driver was
loaded.


I am, though, seeing two different messages now:

(XEN) CPU#: No irq handler for vector # (IRQ -#, LAPIC)
(XEN) IRQ# a=#[#,#] v=#[#] t=PCI-MSI s=#

These appear in pairs.  Multiple values show up for each field, though
each field appears to vary between 2-3 different values.  Thousands of
these messages are showing up.

I'm unsure whether these can be attributed to the flash devices I had
been trying to use in RAID1, though.  I've got another device that is
being problematic with interrupts, which might instead be the cause of
those messages (that one is lower urgency than the flash devices).


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Jun 28 00:39:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 00:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750419.1158579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMze3-0007XX-6k; Fri, 28 Jun 2024 00:39:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750419.1158579; Fri, 28 Jun 2024 00:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sMze3-0007XQ-46; Fri, 28 Jun 2024 00:39:19 +0000
Received: by outflank-mailman (input) for mailman id 750419;
 Fri, 28 Jun 2024 00:39:18 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0kmA=N6=m5p.com=ehem@srs-se1.protection.inumbo.net>)
 id 1sMze2-0007XI-7E
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 00:39:18 +0000
Received: from mailhost.m5p.com (mailhost.m5p.com [74.104.188.4])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id d84b42e8-34e6-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 02:39:16 +0200 (CEST)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.17.1/8.17.1) with ESMTPS id 45S0d7jJ053794
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 27 Jun 2024 20:39:13 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.17.1/8.15.2/Submit) id 45S0d7g5053793;
 Thu, 27 Jun 2024 17:39:07 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d84b42e8-34e6-11ef-90a3-e314d9c70b13
Date: Thu, 27 Jun 2024 17:39:07 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: Xen C-state Issues
Message-ID: <Zn4GK2Anwu2lKBUd@mattapan.m5p.com>
References: <YSEo9Box2AFnmdLZ@mattapan.m5p.com>
 <dea9cf97-9332-b1c9-2cff-d87564832529@suse.com>
 <YSSFffDK5/5MUAdj@mattapan.m5p.com>
 <09fc5490-5b14-474c-dbe0-864952f19a33@suse.com>
 <YSbryyxk5G7xqHlQ@mattapan.m5p.com>
 <5a648db8-bf89-a58b-ba4b-bb992d68e82e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <5a648db8-bf89-a58b-ba4b-bb992d68e82e@suse.com>
X-Spam-Status: No, score=0.0 required=10.0 tests=KHOP_HELO_FCRDNS
	autolearn=unavailable autolearn_force=no version=4.0.0
X-Spam-Checker-Version: SpamAssassin 4.0.0 (2022-12-14) on mattapan.m5p.com

On Thu, Aug 26, 2021 at 09:51:54AM +0200, Jan Beulich wrote:
> On 26.08.2021 03:18, Elliott Mitchell wrote:
> 
> > What you're writing about would be looking for bugs in development
> > branches.
> 
> Very much so, in case there are issues left, or ones have got
> reintroduced. That's what the primary purpose of this list is.
> 
> If you were suspecting missing fixes in the kernel, I guess xen-devel
> isn't the preferred channel anyway. Otoh the stable maintainers there
> would likely want concrete commits pointed out ...

Yikes, too many things on my plate and I never got back to this.  This
is with Xen 4.17 and Linux 6.1.  When giving domain-0 6 vCPUs, `dmesg`
shows:

xen_acpi_processor: Max ACPI ID: 62
xen_acpi_processor: Uploading Xen processor PM info
xen_acpi_processor: ACPI CPU0 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU0 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU2 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU2 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU4 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU4 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU6 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU6 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU8 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU8 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU10 - C-states uploaded.
xen_acpi_processor:      C1: ACPI HLT 1 uS
xen_acpi_processor:      C2: ACPI IOPORT 0x414 18 uS
xen_acpi_processor:      C3: ACPI IOPORT 0x415 350 uS
xen_acpi_processor: ACPI CPU10 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU0 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU0 w/ PST:coord_type = 254 domain = 0
xen_acpi_processor: ACPI CPU1 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU1 w/ PST:coord_type = 254 domain = 0
xen_acpi_processor: ACPI CPU2 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU2 w/ PST:coord_type = 254 domain = 1
xen_acpi_processor: ACPI CPU3 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU3 w/ PST:coord_type = 254 domain = 1
xen_acpi_processor: ACPI CPU4 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU4 w/ PST:coord_type = 254 domain = 2
xen_acpi_processor: ACPI CPU5 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU5 w/ PST:coord_type = 254 domain = 2
xen_acpi_processor: ACPI CPU6 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU6 w/ PST:coord_type = 254 domain = 3
xen_acpi_processor: ACPI CPU7 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU7 w/ PST:coord_type = 254 domain = 3
xen_acpi_processor: ACPI CPU8 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU8 w/ PST:coord_type = 254 domain = 4
xen_acpi_processor: ACPI CPU9 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU9 w/ PST:coord_type = 254 domain = 4
xen_acpi_processor: ACPI CPU10 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU10 w/ PST:coord_type = 254 domain = 5
xen_acpi_processor: ACPI CPU11 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU11 w/ PST:coord_type = 254 domain = 5
xen_acpi_processor: ACPI CPU12 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU12 w/ PST:coord_type = 254 domain = 6
xen_acpi_processor: ACPI CPU13 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU13 w/ PST:coord_type = 254 domain = 6
xen_acpi_processor: ACPI CPU14 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU14 w/ PST:coord_type = 254 domain = 7
xen_acpi_processor: ACPI CPU15 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU15 w/ PST:coord_type = 254 domain = 7
xen_acpi_processor: ACPI CPU16 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU16 w/ PST:coord_type = 254 domain = 8
xen_acpi_processor: ACPI CPU17 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU17 w/ PST:coord_type = 254 domain = 8
xen_acpi_processor: ACPI CPU18 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU18 w/ PST:coord_type = 254 domain = 9
xen_acpi_processor: ACPI CPU19 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU19 w/ PST:coord_type = 254 domain = 9
xen_acpi_processor: ACPI CPU20 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU20 w/ PST:coord_type = 254 domain = 10
xen_acpi_processor: ACPI CPU21 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU21 w/ PST:coord_type = 254 domain = 10
xen_acpi_processor: ACPI CPU22 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU22 w/ PST:coord_type = 254 domain = 11
xen_acpi_processor: ACPI CPU23 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU23 w/ PST:coord_type = 254 domain = 11
xen_acpi_processor: ACPI CPU24 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU24 w/ PST:coord_type = 254 domain = 12
xen_acpi_processor: ACPI CPU25 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU25 w/ PST:coord_type = 254 domain = 12
xen_acpi_processor: ACPI CPU26 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU26 w/ PST:coord_type = 254 domain = 13
xen_acpi_processor: ACPI CPU27 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU27 w/ PST:coord_type = 254 domain = 13
xen_acpi_processor: ACPI CPU28 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU28 w/ PST:coord_type = 254 domain = 14
xen_acpi_processor: ACPI CPU29 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU29 w/ PST:coord_type = 254 domain = 14
xen_acpi_processor: ACPI CPU30 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU30 w/ PST:coord_type = 254 domain = 15
xen_acpi_processor: ACPI CPU31 w/ PBLK:0x0
xen_acpi_processor: ACPI CPU31 w/ PST:coord_type = 254 domain = 15
xen_acpi_processor: ACPI CPU1 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU3 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU5 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU7 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU9 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU11 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU12 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU13 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU14 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU15 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU16 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU17 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU18 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU19 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU20 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU21 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU22 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU23 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU24 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU25 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU26 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU27 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU28 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU29 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU30 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS
xen_acpi_processor: ACPI CPU31 - P-states uploaded.
xen_acpi_processor:      *P0: 4500 MHz, 5625 mW, 0 uS
xen_acpi_processor:       P1: 3000 MHz, 2550 mW, 0 uS

While Linux's `xen_acpi_processor.ko` module is part of the Linux kernel,
the Xen project should be concerned about this.  It matches what
`xenpm get-cpuidle-states`/`xenpm get-cpufreq-states` seem to indicate.

P-state information is being uploaded to Xen for every core.  C-state
information is only being uploaded to Xen for cores for which domain-0
has a corresponding vCPU.

There is a simple mitigation, but I would certainly prefer a proper
solution.
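The asymmetry described above can be checked mechanically against a log
like the one quoted earlier.  The following is a minimal sketch (not part
of the original report; the helper name `tally_uploads` is made up for
illustration) that tallies which ACPI CPU IDs had C-state vs P-state
information uploaded, so the CPUs missing C-state uploads fall out of a
set difference:

```python
import re

def tally_uploads(dmesg: str) -> dict:
    """Return {'C': cpu_ids, 'P': cpu_ids} parsed from
    xen_acpi_processor '... - C-states uploaded.' / '... - P-states
    uploaded.' log lines."""
    uploads = {"C": set(), "P": set()}
    # Each match yields (cpu_number, state_kind).
    for cpu, kind in re.findall(
            r"ACPI CPU(\d+) - ([CP])-states uploaded", dmesg):
        uploads[kind].add(int(cpu))
    return uploads

# A tiny excerpt-style log: CPU0 gets both uploads, CPU1 only P-states.
log = """\
xen_acpi_processor: ACPI CPU0 - C-states uploaded.
xen_acpi_processor: ACPI CPU0 - P-states uploaded.
xen_acpi_processor: ACPI CPU1 - P-states uploaded.
"""
u = tally_uploads(log)
# CPUs that received P-state but not C-state uploads:
print(sorted(u["P"] - u["C"]))  # prints [1]
```

Run against the full `dmesg` above, the difference would be exactly the
odd-numbered CPUs, i.e. the cores without a domain-0 vCPU.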


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri Jun 28 03:38:20 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 03:38:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750461.1158588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN2R2-00011H-Cc; Fri, 28 Jun 2024 03:38:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750461.1158588; Fri, 28 Jun 2024 03:38:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN2R2-00011A-A0; Fri, 28 Jun 2024 03:38:04 +0000
Received: by outflank-mailman (input) for mailman id 750461;
 Fri, 28 Jun 2024 03:38:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2R0-00010h-Rj; Fri, 28 Jun 2024 03:38:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2R0-0001Rm-Ks; Fri, 28 Jun 2024 03:38:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2R0-0001cf-7q; Fri, 28 Jun 2024 03:38:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2R0-0005Nz-7N; Fri, 28 Jun 2024 03:38:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8IPbMy3tcM9ucdvhgl8J/Vdfaug4nNgVXvK5Jb0QKPU=; b=YUVsx5FAsk70wbm6tgvRKledRY
	4Ax+8ve2ActmEAjVQGTf2Q5o9P3iD6hoxEqtDGYWChY9NK/SbBflMC8a/U5FWHwogI2qjB3Tk5P77
	GwZyJTWT7fxJAniqZVM8NGstuaF3wp1UaTsqhPso3jjAp+XbGhg9sAIC2tYcgeFpbQwE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186532-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-6.1 test] 186532: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-6.1:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-6.1:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-6.1:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=99e6a620de00b96f059c9e7f14b5795ca0c6b125
X-Osstest-Versions-That:
    linux=a6398e37309000e35cedb5cc328a0f8d00d7d7b9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 03:38:02 +0000

flight 186532 linux-6.1 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186532/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186445
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186445
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186445
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186445
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186445
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186445
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                99e6a620de00b96f059c9e7f14b5795ca0c6b125
baseline version:
 linux                a6398e37309000e35cedb5cc328a0f8d00d7d7b9

Last test of basis   186445  2024-06-21 12:46:38 Z    6 days
Testing same since   186532  2024-06-27 12:12:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Ajrat Makhmutov <rauty@altlinux.org>
  Ajrat Makhmutov <rautyrauty@gmail.com>
  Aleksandr Aprelkov <aaprelkov@usergate.com>
  Aleksandr Nogikh <nogikh@google.com>
  Alessandro Carminati (Red Hat) <alessandro.carminati@gmail.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Henrie <alexhenrie24@gmail.com>
  Alex Hung <alex.hung@amd.com>
  Allen Pais <apais@linux.microsoft.com>
  Andi Shyti <andi.shyti@kernel.org>
  Andrew Ballance <andrewjballance@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@gmail.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Chi <andy.chi@canonical.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antje Miederhöfer <a.miederhoefer@gmx.de>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Arvid Norlander <lkml@vorpal.se>
  Ben Fradella <bfradell@netapp.com>
  Benjamin Tissoires <bentiss@kernel.org>
  Biju Das <biju.das.jz@bp.renesas.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Burkov <boris@bur.io>
  Borislav Petkov (AMD) <bp@alien8.de>
  Breno Leitao <leitao@debian.org>
  Changbin Du <changbin.du@huawei.com>
  Chenghai Huang <huangchenghai2@huawei.com>
  Christian Marangi <ansuelsmth@gmail.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Cyrus Lien <cyrus.lien@canonical.com>
  Dan Carpenter <dan.carpenter@linaro.org>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Jordan <daniel.m.jordan@oracle.com>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Dave Ertman <david.m.ertman@intel.com>
  Dave Jiang <dave.jiang@intel.com>
  David Ruth <druth@chromium.org>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Davide Caratti <dcaratti@redhat.com>
  Dustin L. Howett <dustin@howett.net>
  Edson Juliano Drosdeck <edson.drosdeck@gmail.com>
  En-Wei Wu <en-wei.wu@canonical.com>
  Eric Dumazet <edumazet@google.com>
  Eric Heintzmann <heintzmann.eric@free.fr>
  Erico Nunes <nunes.erico@gmail.com>
  Esben Haabendal <esben@geanix.com>
  Fabio Estevam <festevam@gmail.com>
  Felix Fietkau <nbd@nbd.name>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Fainelli <florian.fainelli@broadcom.com>
  Florian Westphal <fw@strlen.de>
  Frank Li <Frank.Li@nxp.com>
  Gavrilov Ilia <Ilia.Gavrilov@infotecs.ru>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grygorii Tertychnyi <grembeter@gmail.com>
  Grygorii Tertychnyi <grygorii.tertychnyi@leica-geosystems.com>
  Hans de Goede <hdegoede@redhat.com>
  Heng Qi <hengqi@linux.alibaba.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ido Schimmel <idosch@nvidia.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jan Kara <jack@suse.cz>
  Jani Nikula <jani.nikula@intel.com>
  Jason Wang <jasowang@redhat.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jens Axboe <axboe@kernel.dk>
  Jianguo Wu <wujianguo@chinatelecom.cn>
  Jiri Kosina <jkosina@suse.com>
  Joao Pinto <Joao.Pinto@synopsys.com>
  Joao Pinto <jpinto@synopsys.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
  Jozsef Kadlecsik <kadlec@netfilter.org>
  Justin Stitt <justinstitt@google.com>
  Kalle Niemi <kaleposti@gmail.com>
  Kalle Valo <quic_kvalo@quicinc.com>
  Kelsey Steele <kelseysteele@linux.microsoft.com>
  kernelci.org bot <bot@kernelci.org>
  Klara Modin <klarasmodin@gmail.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kunwu Chan <chentao@kylinos.cn>
  Leon Romanovsky <leon@kernel.org>
  Leon Yen <leon.yen@mediatek.com>
  Li RongQing <lirongqing@baidu.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Bianconi <lorenzo@kernel.org>
  Luiz Angelo Daros de Luca <luizluca@gmail.com>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Luke D. Jones <luke@ljones.dev>
  Manish Rangankar <mrangankar@marvell.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcin Szycik <marcin.szycik@linux.intel.com>
  Mario Limonciello <mario.limonciello@amd.com>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Martin Hoyer <mhoyer@redhat.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Leung <martin.leung@amd.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu (Google) <mhiramat@kernel.org>
  Mateusz Jończyk <mat.jonczyk@o2.pl>
  Matthias Maennich <maennich@google.com>
  Max Krummenacher <max.krummenacher@toradex.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michal Swiatkowski <michal.swiatkowski@linux.intel.com>
  Ming Yen Hsieh <MingYen.Hsieh@mediatek.com>
  Namhyung Kim <namhyung@kernel.org>
  Naresh Kamboju <naresh.kamboju@linaro.org>
  Nathan Chancellor <nathan@kernel.org>
  Nathan Lynch <nathanl@linux.ibm.com>
  Neal Cardwell <ncardwell@google.com>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Nikita Shubin <n.shubin@yadro.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Oliver Neukum <oneukum@suse.com>
  Ondrej Mosnacek <omosnace@redhat.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Parker Newman <pnewman@connecttech.com>
  Patrice Chotard <patrice.chotard@foss.st.com>
  Patrisious Haddad <phaddad@nvidia.com>
  Paul E. McKenney <paulmck@kernel.org>
  Pavan Chebbi <pavan.chebbi@broadcom.com>
  Pedro Tammela <pctammela@mojatatu.com>
  Peter Korsgaard <peter@korsgaard.com>
  Peter Oberparleiter <oberpar@linux.ibm.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Peter Xu <peterx@redhat.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qiang Yu <yuq825@gmail.com>
  Rafael Aquini <aquini@redhat.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raju Lakkaraju <Raju.Lakkaraju@microchip.com>
  Raju Rangoju <Raju.Rangoju@amd.com>
  Roman Smirnov <r.smirnov@omp.ru>
  Ron Economos <re@w6rz.net>
  Salvatore Bonaccorso <carnil@debian.org>
  Sanath S <Sanath.S@amd.com>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <seanjc@google.com>
  Sean O'Brien <seobrien@chromium.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  SeongJae Park <sj@kernel.org>
  Shawn Guo <shawnguo@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Horman <horms@kernel.org>
  Simon Wunderlich <sw@simonwunderlich.de>
  Songyang Li <leesongyang@outlook.com>
  Stefan Binding <sbinding@opensource.cirrus.com>
  Stefan Wahren <wahrenst@gmx.net>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Sudeep Holla <sudeep.holla@arm.com>
  Sujai Buvaneswaran <sujai.buvaneswaran@intel.com>
  Sven Eckelmann <sven@narfation.org>
  Takashi Iwai <tiwai@suse.de>
  Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Toke Høiland-Jørgensen <toke@toke.dk>
  Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
  Tony Luck <tony.luck@intel.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tzung-Bi Shih <tzungbi@kernel.org>
  Uladzislau Rezki (Sony) <urezki@gmail.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Uri Arev <me@wantyapps.xyz>
  Vinod Koul <vkoul@kernel.org>
  Wander Lairson Costa <wander@redhat.com>
  Wayne Lin <wayne.lin@amd.com>
  Will Deacon <will@kernel.org>
  Xiaolei Wang <xiaolei.wang@windriver.com>
  Xin Long <lucien.xin@gmail.com>
  Xu Liang <lxu@maxlinear.com>
  Yonghong Song <yonghong.song@linux.dev>
  Yongqin Liu <yongqin.liu@linaro.org>
  Yuchung Cheng <ycheng@google.com>
  Yue Haibing <yuehaibing@huawei.com>
  Yunlei He <heyunlei@oppo.com>
  Zqiang <qiang.zhang1211@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a6398e373090..99e6a620de00  99e6a620de00b96f059c9e7f14b5795ca0c6b125 -> tested/linux-6.1


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 04:08:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 04:08:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750471.1158598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN2ui-00067C-SI; Fri, 28 Jun 2024 04:08:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750471.1158598; Fri, 28 Jun 2024 04:08:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN2ui-000675-PU; Fri, 28 Jun 2024 04:08:44 +0000
Received: by outflank-mailman (input) for mailman id 750471;
 Fri, 28 Jun 2024 04:08:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2uh-00066v-DE; Fri, 28 Jun 2024 04:08:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2uh-00021s-83; Fri, 28 Jun 2024 04:08:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2ug-0002MB-PG; Fri, 28 Jun 2024 04:08:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sN2ug-0008EU-On; Fri, 28 Jun 2024 04:08:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bz/pK5PJ+K7DPm0S8cWzzihNZpv179SdnRGTnROcb/E=; b=rjRlUsPyq8cpRGACrTrl1joQof
	dNf2klPF/kh1fQJtdBMlobGzIQpuvvzYjpfL1/JA4JwPhBqo9nEgqOF80HKWWYQuc2b8qBLfzgTz5
	a3IU9iCFiv1oFCJvlwKQHfaZlW3oAFGoluxiWB0DpTtvP3jKS+KLFgvv3iBBsQdpfF4I=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186541: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6d41f5b9e112e8934f59edfd7168a36706e0341a
X-Osstest-Versions-That:
    xen=402e473249cf62dd4c6b3b137aa845db0fe1453a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 04:08:42 +0000

flight 186541 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186541/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6d41f5b9e112e8934f59edfd7168a36706e0341a
baseline version:
 xen                  402e473249cf62dd4c6b3b137aa845db0fe1453a

Last test of basis   186531  2024-06-27 12:02:10 Z    0 days
Testing same since   186541  2024-06-28 00:04:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Federico Serafini <federico.serafini@bugseng.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   402e473249..6d41f5b9e1  6d41f5b9e112e8934f59edfd7168a36706e0341a -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 06:01:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 06:01:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750492.1158609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN4fz-0002Cz-JQ; Fri, 28 Jun 2024 06:01:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750492.1158609; Fri, 28 Jun 2024 06:01:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN4fz-0002Cs-GK; Fri, 28 Jun 2024 06:01:39 +0000
Received: by outflank-mailman (input) for mailman id 750492;
 Fri, 28 Jun 2024 06:01:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=q1UJ=N6=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sN4fy-0002Cm-D6
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 06:01:38 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id e0e08f94-3513-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 08:01:37 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id C992B68D09; Fri, 28 Jun 2024 08:01:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0e08f94-3513-11ef-90a3-e314d9c70b13
Date: Fri, 28 Jun 2024 08:01:29 +0200
From: "hch@lst.de" <hch@lst.de>
To: Michael Kelley <mhklinux@outlook.com>
Cc: "hch@lst.de" <hch@lst.de>,
	Petr =?utf-8?B?VGVzYcWZw61r?= <petr@tesarici.cz>,
	"robin.murphy@arm.com" <robin.murphy@arm.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"will@kernel.org" <will@kernel.org>,
	"jgross@suse.com" <jgross@suse.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Message-ID: <20240628060129.GA26206@lst.de>
References: <20240607031421.182589-1-mhklinux@outlook.com> <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com> <20240627060251.GA15590@lst.de> <20240627085216.556744c1@meshulam.tesarici.cz> <SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com> <20240627152513.GA23497@lst.de> <SN6PR02MB4157D9B1A64FF78461D6A7EDD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <SN6PR02MB4157D9B1A64FF78461D6A7EDD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Thu, Jun 27, 2024 at 04:02:59PM +0000, Michael Kelley wrote:
> > > Conceptually, it's still being used as a boolean function based on
> > > whether the return value is NULL.  Renaming it to swiotlb_get_pool()
> > > more accurately describes the return value, but obscures the
> > > intent of determining if it is a swiotlb buffer.  I'll think about it.
> > > Suggestions are welcome.
> > 
> > Just keep is_swiotlb_buffer as a trivial inline helper that returns
> > bool.
> 
> I don't understand what you are suggesting.  Could you elaborate a bit?
> is_swiotlb_buffer() can't be trivial when CONFIG_SWIOTLB_DYNAMIC
> is set.

Call the main function that finds and returns the pool swiotlb_find_pool,
and then have an is_swiotlb_buffer wrapper that just returns bool.



From xen-devel-bounces@lists.xenproject.org Fri Jun 28 06:15:55 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 06:15:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750501.1158618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN4ti-0003pq-OG; Fri, 28 Jun 2024 06:15:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750501.1158618; Fri, 28 Jun 2024 06:15:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN4ti-0003pj-Lg; Fri, 28 Jun 2024 06:15:50 +0000
Received: by outflank-mailman (input) for mailman id 750501;
 Fri, 28 Jun 2024 06:15:48 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NNqu=N6=suse.com=jbeulich@srs-se1.protection.inumbo.net>)
 id 1sN4tg-0003pd-QK
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 06:15:48 +0000
Received: from mail-lj1-x233.google.com (mail-lj1-x233.google.com
 [2a00:1450:4864:20::233])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id dbc88e19-3515-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 08:15:47 +0200 (CEST)
Received: by mail-lj1-x233.google.com with SMTP id
 38308e7fff4ca-2ec61eeed8eso2440211fa.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 Jun 2024 23:15:47 -0700 (PDT)
Received: from [10.156.60.236] (ip-037-024-206-209.um08.pools.vodafone-ip.de.
 [37.24.206.209]) by smtp.gmail.com with ESMTPSA id
 d9443c01a7336-1fac1599c23sm7336845ad.280.2024.06.27.23.15.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 Jun 2024 23:15:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbc88e19-3515-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=suse.com; s=google; t=1719555347; x=1720160147; darn=lists.xenproject.org;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:from:to:cc:subject:date:message-id:reply-to;
        bh=R82/EpIayr+6+8T5f0M7t7I97Dc8tus+c2bQiGzzL+s=;
        b=IJO9+GnIQ0+Rmx+oXLXFi6uq1e86mYXUrDV5cYTvQ8MwO96S2Hv14nhR5apnZqSkCX
         Mi4NEhuffHwGbuFIx4kTnNTXGVlShTL8v7GkLzHOI1AQITZ6VputshX/HJ5zkssD+Em/
         i84DXV8iWnzd9nM4P1YwD6iY6AHVn344gRSfoqiU82+SE6hOcWpVhA3U7SsKKI/jlXoP
         yQ6Sjf6TxKqzV0rsliX6sxfgqiA8r4aInGXydK5O9LevMJTRW31MTyTLpv8Hcz3Iqrzb
         O3UVs2L4QwpIXarMeBuCj4WP9wiRSTNsgFbvcOKFnz9Qx1wE2gaL5uMTPhngvh0iETQ/
         CqMA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719555347; x=1720160147;
        h=content-transfer-encoding:in-reply-to:autocrypt:from
         :content-language:references:cc:to:subject:user-agent:mime-version
         :date:message-id:x-gm-message-state:from:to:cc:subject:date
         :message-id:reply-to;
        bh=R82/EpIayr+6+8T5f0M7t7I97Dc8tus+c2bQiGzzL+s=;
        b=OmABAybZ0HEuM7zhTBiZDDKYYvLyc15ALiauJKhXlpQ9dbw247OL1L0gTKMKRMKnAh
         S3Bv43tb7y4vUOeQMYO5whSoUxHc/FJZ26B+ZSAVyV2IAF8EQSb2hA/ncVvZe3Ibtxtx
         PuLHs9TL2w97S8xjwTuO9EgnVTgSU4illYTFeVqmMwSQL/9hoWjIe2N1qnw8JFhP0h5i
         h8GC3RDXAYl+1SG3gOQ/vddV+mdyyD5Ndx155DiPyFUCrqtZp/x2f4Jekzt28Ol4l4Y8
         RX6a9EUzbdV5t5W59vm+t0acBJRvcQdxcVlEhq24CFU9xzdBWBURdUUejuZYmC6OhdY5
         xFVw==
X-Gm-Message-State: AOJu0YxY9kLdcQmyAT4Ehp6mEzS1TR8mfHkmzyaCVSwZlj8GBjalRYmp
	YllRPNiPPZj1ZYRuIafHXP6cVQvlCS05OhQ6FO++GH1arp4y8t3yxShJdlXZVw==
X-Google-Smtp-Source: AGHT+IEW9fc5WYl5Eh4R7s8JZO+Meq6mxoTOGzjJg6XMTqqFJx7QqDt145zTYXZszIEj7H0blkwCnA==
X-Received: by 2002:a2e:8416:0:b0:2ea:8308:841e with SMTP id 38308e7fff4ca-2ec5b345df8mr86903151fa.24.1719555346649;
        Thu, 27 Jun 2024 23:15:46 -0700 (PDT)
Message-ID: <3b026b36-926c-4ae4-9402-a30a10cf0134@suse.com>
Date: Fri, 28 Jun 2024 08:15:39 +0200
MIME-Version: 1.0
User-Agent: Mozilla Thunderbird
Subject: Re: [XEN PATCH v2 05/13] x86/traps: address violations of MISRA C
 Rule 16.3
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Federico Serafini <federico.serafini@bugseng.com>
References: <cover.1719218291.git.federico.serafini@bugseng.com>
 <4f44a7b021eb4f78ccf1ce69b500b48b75df81c5.1719218291.git.federico.serafini@bugseng.com>
 <alpine.DEB.2.22.394.2406241753260.3870429@ubuntu-linux-20-04-desktop>
 <a5b47b7e-9dc0-4108-bd6f-eb34f7cb8c3c@suse.com>
 <alpine.DEB.2.22.394.2406251808040.3635@ubuntu-linux-20-04-desktop>
 <6441010f-c2f6-4098-bf23-837955dcf803@suse.com>
 <alpine.DEB.2.22.394.2406261758390.3635@ubuntu-linux-20-04-desktop>
 <84eb22c8-7737-4e6b-8194-724c792c2d92@suse.com>
 <alpine.DEB.2.22.394.2406271545210.3635@ubuntu-linux-20-04-desktop>
Content-Language: en-US
From: Jan Beulich <jbeulich@suse.com>
Autocrypt: addr=jbeulich@suse.com; keydata=
 xsDiBFk3nEQRBADAEaSw6zC/EJkiwGPXbWtPxl2xCdSoeepS07jW8UgcHNurfHvUzogEq5xk
 hu507c3BarVjyWCJOylMNR98Yd8VqD9UfmX0Hb8/BrA+Hl6/DB/eqGptrf4BSRwcZQM32aZK
 7Pj2XbGWIUrZrd70x1eAP9QE3P79Y2oLrsCgbZJfEwCgvz9JjGmQqQkRiTVzlZVCJYcyGGsD
 /0tbFCzD2h20ahe8rC1gbb3K3qk+LpBtvjBu1RY9drYk0NymiGbJWZgab6t1jM7sk2vuf0Py
 O9Hf9XBmK0uE9IgMaiCpc32XV9oASz6UJebwkX+zF2jG5I1BfnO9g7KlotcA/v5ClMjgo6Gl
 MDY4HxoSRu3i1cqqSDtVlt+AOVBJBACrZcnHAUSuCXBPy0jOlBhxPqRWv6ND4c9PH1xjQ3NP
 nxJuMBS8rnNg22uyfAgmBKNLpLgAGVRMZGaGoJObGf72s6TeIqKJo/LtggAS9qAUiuKVnygo
 3wjfkS9A3DRO+SpU7JqWdsveeIQyeyEJ/8PTowmSQLakF+3fote9ybzd880fSmFuIEJldWxp
 Y2ggPGpiZXVsaWNoQHN1c2UuY29tPsJgBBMRAgAgBQJZN5xEAhsDBgsJCAcDAgQVAggDBBYC
 AwECHgECF4AACgkQoDSui/t3IH4J+wCfQ5jHdEjCRHj23O/5ttg9r9OIruwAn3103WUITZee
 e7Sbg12UgcQ5lv7SzsFNBFk3nEQQCACCuTjCjFOUdi5Nm244F+78kLghRcin/awv+IrTcIWF
 hUpSs1Y91iQQ7KItirz5uwCPlwejSJDQJLIS+QtJHaXDXeV6NI0Uef1hP20+y8qydDiVkv6l
 IreXjTb7DvksRgJNvCkWtYnlS3mYvQ9NzS9PhyALWbXnH6sIJd2O9lKS1Mrfq+y0IXCP10eS
 FFGg+Av3IQeFatkJAyju0PPthyTqxSI4lZYuJVPknzgaeuJv/2NccrPvmeDg6Coe7ZIeQ8Yj
 t0ARxu2xytAkkLCel1Lz1WLmwLstV30g80nkgZf/wr+/BXJW/oIvRlonUkxv+IbBM3dX2OV8
 AmRv1ySWPTP7AAMFB/9PQK/VtlNUJvg8GXj9ootzrteGfVZVVT4XBJkfwBcpC/XcPzldjv+3
 HYudvpdNK3lLujXeA5fLOH+Z/G9WBc5pFVSMocI71I8bT8lIAzreg0WvkWg5V2WZsUMlnDL9
 mpwIGFhlbM3gfDMs7MPMu8YQRFVdUvtSpaAs8OFfGQ0ia3LGZcjA6Ik2+xcqscEJzNH+qh8V
 m5jjp28yZgaqTaRbg3M/+MTbMpicpZuqF4rnB0AQD12/3BNWDR6bmh+EkYSMcEIpQmBM51qM
 EKYTQGybRCjpnKHGOxG0rfFY1085mBDZCH5Kx0cl0HVJuQKC+dV2ZY5AqjcKwAxpE75MLFkr
 wkkEGBECAAkFAlk3nEQCGwwACgkQoDSui/t3IH7nnwCfcJWUDUFKdCsBH/E5d+0ZnMQi+G0A
 nAuWpQkjM1ASeQwSHEeAWPgskBQL
In-Reply-To: <alpine.DEB.2.22.394.2406271545210.3635@ubuntu-linux-20-04-desktop>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

On 28.06.2024 00:59, Stefano Stabellini wrote:
> On Thu, 27 Jun 2024, Jan Beulich wrote:
>> On 27.06.2024 03:53, Stefano Stabellini wrote:
>>> On Wed, 26 Jun 2024, Jan Beulich wrote:
>>>> On 26.06.2024 03:11, Stefano Stabellini wrote:
>>>>> On Tue, 25 Jun 2024, Jan Beulich wrote:
>>>>>> On 25.06.2024 02:54, Stefano Stabellini wrote:
>>>>>>> On Mon, 24 Jun 2024, Federico Serafini wrote:
>>>>>>>> Add a break statement or the pseudo-keyword fallthrough to address violations of
>>>>>>>> MISRA C Rule 16.3: "An unconditional `break' statement shall terminate
>>>>>>>> every switch-clause".
>>>>>>>>
>>>>>>>> No functional change.
>>>>>>>>
>>>>>>>> Signed-off-by: Federico Serafini <federico.serafini@bugseng.com>
>>>>>>>> ---
>>>>>>>>  xen/arch/x86/traps.c | 3 +++
>>>>>>>>  1 file changed, 3 insertions(+)
>>>>>>>>
>>>>>>>> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
>>>>>>>> index 9906e874d5..cbcec3fafb 100644
>>>>>>>> --- a/xen/arch/x86/traps.c
>>>>>>>> +++ b/xen/arch/x86/traps.c
>>>>>>>> @@ -1186,6 +1186,7 @@ void cpuid_hypervisor_leaves(const struct vcpu *v, uint32_t leaf,
>>>>>>>>  
>>>>>>>>      default:
>>>>>>>>          ASSERT_UNREACHABLE();
>>>>>>>> +        break;
>>>>>>>
>>>>>>> Please add ASSERT_UNREACHABLE to the list of "unconditional flow control
>>>>>>> statements" that can terminate a case, in addition to break.
>>>>>>
>>>>>> Why? Exactly the opposite is part of the subject of a recent patch, iirc.
>>>>>> Simply because of the rules needing to cover both debug and release builds.
>>>>>
>>>>> The reason is that ASSERT_UNREACHABLE() might disappear from the release
>>>>> build but it can still be used as a marker during static analysis. In
>>>>> my view, ASSERT_UNREACHABLE() is equivalent to a noreturn function call
>>>>> which has an empty implementation in release builds.
>>>>>
>>>>> The only reason I can think of to require a break; after an
>>>>> ASSERT_UNREACHABLE() would be if we think the unreachability only applies
>>>>> to debug builds, not release builds:
>>>>>
>>>>> - debug build: it is unreachable
>>>>> - release build: it is reachable
>>>>>
>>>>> I don't think that is meant to be possible so I think we can use
>>>>> ASSERT_UNREACHABLE() as a marker.
>>>>
>>>> Well. For one, such an assumption takes as a prereq that a debug build will
>>>> be run through full coverage testing, i.e. all reachable paths proven to
>>>> be taken. I understand that this prereq is intended to somehow be met,
>>>> even if I'm having difficulty seeing what such a final proof would look
>>>> like: Full coverage would, to me, mean that _every_ line is reachable. Yet
>>>> clearly any ASSERT_UNREACHABLE() must never be reached.
>>>>
>>>> And then not covering for such cases takes the further assumption that
>>>> debug and release builds are functionally identical. I'm afraid this would
>>>> be a wrong assumption to make:
>>>> 1) We may screw up somewhere, with code wrongly enabled only in one of the
>>>>    two build modes.
>>>> 2) The compiler may screw up, in particular with optimization.
>>>
>>> I think there are two different issues here we are discussing.
>>>
>>> One issue, like you said, has to do with coverage. It is important to
>>> mark as "unreachable" any part of the code that is indeed unreachable
>>> so that we can account for it properly when we do coverage analysis. At the
>>> moment the only "unreachable" marker that we have is
>>> ASSERT_UNREACHABLE(), and I am hoping we can use it as part of the
>>> coverage analysis we'll do.
>>>
>>> However, there is a different separate question about what to do in the
>>> Xen code after an ASSERT_UNREACHABLE(). E.g.:
>>>
>>>              default:
>>>                  ASSERT_UNREACHABLE();
>>>                  return -EPERM; /* is it better with or without this? */
>>>              }
>>>
>>> Leaving coverage aside, would it be better to be defensive and actually
>>> attempt to report errors back after an ASSERT_UNREACHABLE() like in the
>>> example? Or is it better to assume the code is actually unreachable
>>> hence there is no need to do anything afterwards?
>>>
>>> On one hand, being defensive sounds good; on the other hand, any code
>>> we add after ASSERT_UNREACHABLE() is dead code which cannot be tested,
>>> which is also not good. In this example, there is no way to test the
>>> return -EPERM code path. We also need to consider what the right thing
>>> to do is if Xen finds itself in an erroneous situation such as being
>>> in an unreachable code location.
>>>
>>> So, after thinking about it and also talking to the safety manager, I
>>> think we should:
>>> - implement ASSERT_UNREACHABLE with a warning in release builds
>>
>> If at all, then controlled by a default-off Kconfig setting. This would,
>> after all, raise the question of how ASSERT() should behave then. Imo
>> the two should be consistent in this regard, and NDEBUG clearly results
>> in the expectation that ASSERT() expands to nothing. Perhaps this is
>> finally the time where we need to separate NDEBUG from CONFIG_DEBUG; we
>> did discuss doing so before. Then in your release builds, you could
>> actually leave assertions active.
>  
> Yes, a Kconfig option to define the behavior of ASSERT_UNREACHABLE in
> release builds is fine. And you are right that we should consider doing
> something similar for ASSERT too.
> 
> I think that in any environment where safety (i.e. correctness of
> behavior) is a primary concern, attempting to continue without doing
> anything after reaching a point that is supposed to be unreachable is
> not a good idea.
> 
> I think Xen should do something in response to reaching a point it is
> not supposed to reach. I don't know yet what the best course of action
> is, but printing a warning seems to be the bare minimum.
> 
> Crashing the system is not a good idea as it could potentially be
> exploited by malicious guests (security might not be the primary concern
> but still.)

Yet continuing may set the system up for much harder-to-understand issues,
including crashes and exploitable issues later on.

Jan


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 06:30:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 06:30:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750514.1158630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN57y-0006QD-1g; Fri, 28 Jun 2024 06:30:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750514.1158630; Fri, 28 Jun 2024 06:30:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN57x-0006Q6-Tg; Fri, 28 Jun 2024 06:30:33 +0000
Received: by outflank-mailman (input) for mailman id 750514;
 Fri, 28 Jun 2024 06:30:32 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z7d+=N6=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sN57w-0006Q0-Tl
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 06:30:32 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id eb065bf8-3517-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 08:30:31 +0200 (CEST)
Received: from nico.bugseng.com (unknown [46.228.253.214])
 by support.bugseng.com (Postfix) with ESMTPSA id 3FE4D4EE073E;
 Fri, 28 Jun 2024 08:30:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb065bf8-3517-11ef-90a3-e314d9c70b13
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	michal.orzel@amd.com,
	xenia.ragiadakou@amd.com,
	ayan.kumar.halder@amd.com,
	consulting@bugseng.com,
	Nicola Vetrini <nicola.vetrini@bugseng.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [XEN PATCH v2] x86: p2m-pod: address violation of MISRA C Rule 2.1
Date: Fri, 28 Jun 2024 08:30:27 +0200
Message-Id: <43b3a42f9d323cc3f9747c56e8f59f9dffa69321.1719556140.git.nicola.vetrini@bugseng.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The label 'out_unmap' is only reachable after ASSERT_UNREACHABLE,
so the code below is only executed upon erroneously reaching that
program point and calling domain_crash, thus resulting in the
for loop after 'out_unmap' becoming unreachable in some configurations.

This is a defensive coding measure to have a safe fallback that is
reachable in non-debug builds, and can thus be deviated with a
comment-based deviation.

No functional change.

Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
Changes in v2:
- rebased against current staging
---
 docs/misra/safe.json      | 8 ++++++++
 xen/arch/x86/mm/p2m-pod.c | 1 +
 2 files changed, 9 insertions(+)

diff --git a/docs/misra/safe.json b/docs/misra/safe.json
index 3f18ef401c7d..880aef784ea1 100644
--- a/docs/misra/safe.json
+++ b/docs/misra/safe.json
@@ -68,6 +68,14 @@
         },
         {
             "id": "SAF-8-safe",
+            "analyser": {
+                "eclair": "MC3R1.R2.1"
+            },
+            "name": "MC3R1.R2.1: statement unreachable in some configurations",
+            "text": "Every path that can reach this statement is preceded by statements that make it unreachable in certain configurations (e.g. ASSERT_UNREACHABLE). This is understood as a means of defensive programming."
+        },
+        {
+            "id": "SAF-9-safe",
             "analyser": {},
             "name": "Sentinel",
             "text": "Next ID to be used"
diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
index bd84fe9e27ee..736d3ffd1ff6 100644
--- a/xen/arch/x86/mm/p2m-pod.c
+++ b/xen/arch/x86/mm/p2m-pod.c
@@ -1040,6 +1040,7 @@ out_unmap:
      * Something went wrong, probably crashing the domain.  Unmap
      * everything and return.
      */
+    /* SAF-8-safe Rule 2.1: defensive programming */
     for ( i = 0; i < count; i++ )
         if ( map[i] )
             unmap_domain_page(map[i]);
-- 
2.34.1


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 06:31:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 06:31:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750520.1158639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN58w-0006vb-9z; Fri, 28 Jun 2024 06:31:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750520.1158639; Fri, 28 Jun 2024 06:31:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN58w-0006vU-6B; Fri, 28 Jun 2024 06:31:34 +0000
Received: by outflank-mailman (input) for mailman id 750520;
 Fri, 28 Jun 2024 06:31:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z7d+=N6=bugseng.com=nicola.vetrini@srs-se1.protection.inumbo.net>)
 id 1sN58v-0006vL-Dv
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 06:31:33 +0000
Received: from support.bugseng.com (mail.bugseng.com [162.55.131.47])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0f35d738-3518-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 08:31:32 +0200 (CEST)
Received: from support.bugseng.com (support.bugseng.com [162.55.131.47])
 by support.bugseng.com (Postfix) with ESMTPA id 4F0304EE073E;
 Fri, 28 Jun 2024 08:31:32 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f35d738-3518-11ef-90a3-e314d9c70b13
MIME-Version: 1.0
Date: Fri, 28 Jun 2024 08:31:32 +0200
From: Nicola Vetrini <nicola.vetrini@bugseng.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, michal.orzel@amd.com,
 xenia.ragiadakou@amd.com, ayan.kumar.halder@amd.com, consulting@bugseng.com,
 Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
 <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien Grall
 <julien@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Subject: Re: [XEN PATCH] x86: p2m-pod: address violation of MISRA C Rule 2.1
In-Reply-To: <alpine.DEB.2.22.394.2406271617320.3635@ubuntu-linux-20-04-desktop>
References: <a050ef1b662f64bb58afb2f6118254254dd1d649.1719478448.git.nicola.vetrini@bugseng.com>
 <alpine.DEB.2.22.394.2406271617320.3635@ubuntu-linux-20-04-desktop>
Message-ID: <34bb76443d4907f80cade7a595369167@bugseng.com>
X-Sender: nicola.vetrini@bugseng.com
Organization: BUGSENG s.r.l.
Content-Type: text/plain; charset=US-ASCII;
 format=flowed
Content-Transfer-Encoding: 7bit

On 2024-06-28 01:18, Stefano Stabellini wrote:
> On Thu, 27 Jun 2024, Nicola Vetrini wrote:
>> The label 'out_unmap' is only reachable after ASSERT_UNREACHABLE,
>> so the code below is only executed upon erroneously reaching that
>> program point and calling domain_crash, thus resulting in the
>> for loop after 'out_unmap' becoming unreachable in some
>> configurations.
>> 
>> This is a defensive coding measure to have a safe fallback that is
>> reachable in non-debug builds, and can thus be deviated with a
>> comment-based deviation.
>> 
>> No functional change.
>> 
>> Signed-off-by: Nicola Vetrini <nicola.vetrini@bugseng.com>
> 
> The patch needs rebasing as it doesn't apply to staging anymore
> 
> Other than that:
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> This is actually going to help also in terms of identifying impossible
> code paths for coverage.
> 

Thanks, I just sent a rebased v2 version.

>> ---
>>  docs/misra/safe.json      | 8 ++++++++
>>  xen/arch/x86/mm/p2m-pod.c | 1 +
>>  2 files changed, 9 insertions(+)
>> 
>> diff --git a/docs/misra/safe.json b/docs/misra/safe.json
>> index c213e0a0be3b..b114c9485c86 100644
>> --- a/docs/misra/safe.json
>> +++ b/docs/misra/safe.json
>> @@ -60,6 +60,14 @@
>>          },
>>          {
>>              "id": "SAF-7-safe",
>> +            "analyser": {
>> +                "eclair": "MC3R1.R2.1"
>> +            },
>> +            "name": "MC3R1.R2.1: statement unreachable in some 
>> configurations",
>> +            "text": "Every path that can reach this statement is 
>> preceded by statements that make it unreachable in certain 
>> configurations (e.g. ASSERT_UNREACHABLE). This is understood as a 
>> means of defensive programming."
>> +        },
>> +        {
>> +            "id": "SAF-8-safe",
>>              "analyser": {},
>>              "name": "Sentinel",
>>              "text": "Next ID to be used"
>> diff --git a/xen/arch/x86/mm/p2m-pod.c b/xen/arch/x86/mm/p2m-pod.c
>> index bd84fe9e27ee..5a96c46a2286 100644
>> --- a/xen/arch/x86/mm/p2m-pod.c
>> +++ b/xen/arch/x86/mm/p2m-pod.c
>> @@ -1040,6 +1040,7 @@ out_unmap:
>>       * Something went wrong, probably crashing the domain.  Unmap
>>       * everything and return.
>>       */
>> +    /* SAF-7-safe Rule 2.1: defensive programming */
>>      for ( i = 0; i < count; i++ )
>>          if ( map[i] )
>>              unmap_domain_page(map[i]);
>> --
>> 2.34.1
>> 

-- 
Nicola Vetrini, BSc
Software Engineer, BUGSENG srl (https://bugseng.com)


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 07:34:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 07:34:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750539.1158649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN67O-0005NX-KZ; Fri, 28 Jun 2024 07:34:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750539.1158649; Fri, 28 Jun 2024 07:34:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN67O-0005NQ-Hp; Fri, 28 Jun 2024 07:34:02 +0000
Received: by outflank-mailman (input) for mailman id 750539;
 Fri, 28 Jun 2024 07:34:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN67N-0005NG-3E; Fri, 28 Jun 2024 07:34:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN67M-0007l3-Uo; Fri, 28 Jun 2024 07:34:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN67M-0007hb-LH; Fri, 28 Jun 2024 07:34:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sN67M-0004aV-Km; Fri, 28 Jun 2024 07:34:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sEEagA2pZGoZ6UcLhgT7+iacDv+60yIg5vZ2YGShKHg=; b=pvu0FeOeWFBbjLXIyYB5q6aFMr
	VgLjX3vZlJLG+6DADB0xspI2+p8ZhvHQ5FnKuukvE7QyFEARcV61fH6+TgOMBhzZYobQmYWZHZQJo
	vVK0uPO8yeT2RuFR1USxioT5Nu1b5HHjR1iqFifYuWieX6NJRJ5s83nfSY93KN8nrQng=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186543-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186543: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=dc3ed379dfb62ed720e46f10b6c6d0ebda6bde5f
X-Osstest-Versions-That:
    ovmf=a5f147b2a31c093cc83a3f10cdda529c6b59799b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 07:34:00 +0000

flight 186543 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186543/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 dc3ed379dfb62ed720e46f10b6c6d0ebda6bde5f
baseline version:
 ovmf                 a5f147b2a31c093cc83a3f10cdda529c6b59799b

Last test of basis   186539  2024-06-27 21:14:54 Z    0 days
Testing same since   186543  2024-06-28 06:13:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiaxin Wu <jiaxin.wu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   a5f147b2a3..dc3ed379df  dc3ed379dfb62ed720e46f10b6c6d0ebda6bde5f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 07:47:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 07:47:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750551.1158659 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN6KD-0006w7-Nt; Fri, 28 Jun 2024 07:47:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750551.1158659; Fri, 28 Jun 2024 07:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN6KD-0006w0-LD; Fri, 28 Jun 2024 07:47:17 +0000
Received: by outflank-mailman (input) for mailman id 750551;
 Fri, 28 Jun 2024 07:47:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=DBTR=N6=tesarici.cz=petr@srs-se1.protection.inumbo.net>)
 id 1sN6KA-0006vu-NL
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 07:47:15 +0000
Received: from bee.tesarici.cz (bee.tesarici.cz [2a03:3b40:fe:2d4::1])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id a06f235d-3522-11ef-b4bb-af5377834399;
 Fri, 28 Jun 2024 09:47:12 +0200 (CEST)
Received: from meshulam.tesarici.cz
 (dynamic-2a00-1028-83b8-1e7a-4427-cc85-6706-c595.ipv6.o2.cz
 [IPv6:2a00:1028:83b8:1e7a:4427:cc85:6706:c595])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange ECDHE (P-256) server-signature RSA-PSS (2048 bits) server-digest
 SHA256) (No client certificate requested)
 by bee.tesarici.cz (Postfix) with ESMTPSA id E7D851D8D35;
 Fri, 28 Jun 2024 09:47:09 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a06f235d-3522-11ef-b4bb-af5377834399
Authentication-Results: mail.tesarici.cz; dmarc=fail (p=quarantine dis=none) header.from=tesarici.cz
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=tesarici.cz; s=mail;
	t=1719560830; bh=5ABgD9l3EPOUU6jm+gY3zAzc/yQqRQCNjAF+JnVqObA=;
	h=Date:From:To:Cc:Subject:In-Reply-To:References:From;
	b=ph70L+uT/DAzbwgpXZ98YCI0PdG5EwmgkmsqPguQEP0YOpMbT2rLDIizYB2bn9azV
	 XZUf4hFMrQBrHaaPx9NJ0HB/Y3l1CNnOuzCg23Wiz7VeAdR58J+oD/edxNunQJewth
	 HGq96j8oOk0LzMLn9+R7BJNAe6bcwsZ0a1VW+3u1NxYvU8xXV9Q2ZnkHpRLPyEGMj7
	 qNa6juUJL0gtrvkzQwBc63MI5TOFwMp+RXARGs+m+zf5GyPbVt3AUObC30fJasUJRO
	 iQVOrDYOKaIajMpRC9oht/oPM7anlz0Qj4szdUDUot78hLSr0N8R/9k7VaudClHJqW
	 JnXDXZtOhxSxw==
Date: Fri, 28 Jun 2024 09:47:08 +0200
From: Petr =?UTF-8?B?VGVzYcWZw61r?= <petr@tesarici.cz>
To: "hch@lst.de" <hch@lst.de>
Cc: Michael Kelley <mhklinux@outlook.com>, "robin.murphy@arm.com"
 <robin.murphy@arm.com>, "joro@8bytes.org" <joro@8bytes.org>,
 "will@kernel.org" <will@kernel.org>, "jgross@suse.com" <jgross@suse.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
 "m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
 "iommu@lists.linux.dev" <iommu@lists.linux.dev>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Message-ID: <20240628094708.3a454619@meshulam.tesarici.cz>
In-Reply-To: <20240628060129.GA26206@lst.de>
References: <20240607031421.182589-1-mhklinux@outlook.com>
	<SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240627060251.GA15590@lst.de>
	<20240627085216.556744c1@meshulam.tesarici.cz>
	<SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240627152513.GA23497@lst.de>
	<SN6PR02MB4157D9B1A64FF78461D6A7EDD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240628060129.GA26206@lst.de>
X-Mailer: Claws Mail 4.3.0 (GTK 3.24.42; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

On Fri, 28 Jun 2024 08:01:29 +0200, "hch@lst.de" <hch@lst.de> wrote:

> On Thu, Jun 27, 2024 at 04:02:59PM +0000, Michael Kelley wrote:
> > > > Conceptually, it's still being used as a boolean function based on
> > > > whether the return value is NULL.  Renaming it to swiotlb_get_pool()
> > > > more accurately describes the return value, but obscures the
> > > > intent of determining if it is a swiotlb buffer.  I'll think about it.
> > > > Suggestions are welcome.
> > >
> > > Just keep is_swiotlb_buffer as a trivial inline helper that returns
> > > bool.
> >
> > I don't understand what you are suggesting.  Could you elaborate a bit?
> > is_swiotlb_buffer() can't be trivial when CONFIG_SWIOTLB_DYNAMIC
> > is set.
>
> Call the main function that finds and returns the pool, swiotlb_find_pool,
> and then have an is_swiotlb_buffer wrapper that just returns bool.
>

I see. That's not my point. After applying Michael's patch, the return
value is always used, except here:

bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
{
	return !dev_is_dma_coherent(dev) ||
	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
}

I don't think this one occurrence in the entire source tree is worth a
separate inline function.

If nobody has a better idea, I'm not really offended by keeping the
original name, is_swiotlb_buffer(). It would just become the only
function which starts with "is_" and provides more information in the
return value than a simple yes/no, and I thought there must be an
unwritten convention about that.

Petr T


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 07:49:59 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 07:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750557.1158669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN6Mo-0007WR-4X; Fri, 28 Jun 2024 07:49:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750557.1158669; Fri, 28 Jun 2024 07:49:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN6Mo-0007WK-0f; Fri, 28 Jun 2024 07:49:58 +0000
Received: by outflank-mailman (input) for mailman id 750557;
 Fri, 28 Jun 2024 07:49:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN6Mm-0007WA-Js; Fri, 28 Jun 2024 07:49:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN6Mm-000834-GT; Fri, 28 Jun 2024 07:49:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sN6Ml-00084b-Vf; Fri, 28 Jun 2024 07:49:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sN6Ml-0000Dh-VB; Fri, 28 Jun 2024 07:49:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AMCm8wjjs01vyYaR9JVjBcONeZneGTWfoPvBM4rjKmg=; b=E88ptURzEMDCLdyn3nNYGniDIt
	b89u579ghYvh7OhxBEsngt+y398d/z3T0O48z8LbfsjV3i0ctl9Ziziz32gz4SchvbD13yx/hbJE8
	41IzsP/10fhtuFXN/+7YJZnCgFyoFIXFk3q2FjuxZLKJpy8QkQ4rrpivVEEKPHa2pDBY=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186534-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186534: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=402e473249cf62dd4c6b3b137aa845db0fe1453a
X-Osstest-Versions-That:
    xen=ecadd22a3de8ce7f1799e85af6f1e37c06c57049
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 07:49:55 +0000

flight 186534 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186534/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186529
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186529
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186529
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186529
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186529
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186529
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  402e473249cf62dd4c6b3b137aa845db0fe1453a
baseline version:
 xen                  ecadd22a3de8ce7f1799e85af6f1e37c06c57049

Last test of basis   186529  2024-06-27 03:48:27 Z    1 days
Testing same since   186534  2024-06-27 15:10:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Nicola Vetrini <nicola.vetrini@bugseng.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ecadd22a3d..402e473249  402e473249cf62dd4c6b3b137aa845db0fe1453a -> master


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 08:58:02 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 08:58:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750587.1158707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN7QP-0007uO-AM; Fri, 28 Jun 2024 08:57:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750587.1158707; Fri, 28 Jun 2024 08:57:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sN7QP-0007uH-7E; Fri, 28 Jun 2024 08:57:45 +0000
Received: by outflank-mailman (input) for mailman id 750587;
 Fri, 28 Jun 2024 08:57:44 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WIe5=N6=bounce.vates.tech=bounce-md_30504962.667e7b03.v1-b7cc5ea0d65f4521929adac485c86e7e@srs-se1.protection.inumbo.net>)
 id 1sN7QN-0007uB-TW
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 08:57:44 +0000
Received: from mail137-26.atl71.mandrillapp.com
 (mail137-26.atl71.mandrillapp.com [198.2.137.26])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 79ae42a6-352c-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 10:57:41 +0200 (CEST)
Received: from pmta07.mandrill.prod.atl01.rsglab.com (localhost [127.0.0.1])
 by mail137-26.atl71.mandrillapp.com (Mailchimp) with ESMTP id
 4W9Tpq6YY1zJKFJjK
 for <xen-devel@lists.xenproject.org>; Fri, 28 Jun 2024 08:57:39 +0000 (GMT)
Received: from [37.26.189.201] by mandrillapp.com id
 b7cc5ea0d65f4521929adac485c86e7e; Fri, 28 Jun 2024 08:57:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79ae42a6-352c-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=mandrillapp.com;
	s=mte1; t=1719565059; x=1719825559;
	bh=KiLNFTtDxLB0xantCYuVD/eUOVxGvj4f7KYb9gCvsEQ=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=qlDsNMSNpYa2qGfyAZExSwgSaB/PfWbFpQTLOwIz2LHNR1IpVNxAAN9bI95Xk7o7c
	 lecl85tQEAftFmpe4yz73m3sPGkC4FRdpC5pTRTKypuXZ2ml8/SM9uoEmEQoITYaEC
	 VDL5rt2e/2Weni4JcHp4utweuwyyVlzXp4hu3grpE7oUPb4YS1ETL74FzKJ+3YvyEI
	 i2TVUlgAqN9z4MQfyPjvMUyLfUkT5WpZ/AW4R7yJRuhupJ7F6IvJe6Bp9ivgU2rqzj
	 sWHSs5biX7NVmH5RPsPrOyiCb5i9GAbLdXpi9HPMyj3HSv93llmLQKzGjxVr2OBwst
	 YO1Id0ncMqTXw==
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=vates.tech; s=mte1;
	t=1719565059; x=1719825559; i=teddy.astie@vates.tech;
	bh=KiLNFTtDxLB0xantCYuVD/eUOVxGvj4f7KYb9gCvsEQ=;
	h=From:Subject:Message-Id:To:Cc:References:In-Reply-To:Feedback-ID:
	 Date:MIME-Version:Content-Type:Content-Transfer-Encoding:CC:Date:
	 Subject:From;
	b=IU8FMRWdCo+9cR6GYINicceI2aD49jO15zTeQayjMRG1v0fU+eJPdGcCTQkLzgiet
	 rB/6NamSy9TJHkfIZD9bOyq2CyJHubiFO2ikYIypat8gxjvpGvceCPS5mG06EymBdQ
	 m2+OVxMPJ1km+FIWIpTLKhyt3BrRhTIq2iuqBRG2hxhlq3RwfClM+iW6T4WLaQCoDi
	 JrgO6IRJLuh8Iy8w+ZJ2+MGF9Y2viamElsyR57UBFsXlwBP+KV3IZA+79uIkGKXtcD
	 YQM7aKh8hz5JMLEWAd8jHmtyLy0Xd+n7wag2xiz4UygNH3GOnUOg8WVByhXQLjk5aT
	 QGEUTCFqQNLcg==
From: Teddy Astie <teddy.astie@vates.tech>
Subject: =?utf-8?Q?Re:=20Disaggregated=20(Xoar)=20Dom0=20Building?=
X-Bm-Disclaimer: Yes
X-Bm-Milter-Handled: 4ffbd6c1-ee69-4e1b-aabd-f977039bd3e2
X-Bm-Transport-Timestamp: 1719565058890
Message-Id: <3a56d6f7-d19a-4aba-a876-3541d24a2f00@vates.tech>
To: Lonnie <lonnie@outstep.com>
Cc: xen-devel@lists.xenproject.org
References: <48-667d6b00-131-71122080@10350945>
In-Reply-To: <48-667d6b00-131-71122080@10350945>
X-Native-Encoded: 1
X-Report-Abuse: =?UTF-8?Q?Please=20forward=20a=20copy=20of=20this=20message,=20including=20all=20headers,=20to=20abuse@mandrill.com.=20You=20can=20also=20report=20abuse=20here:=20https://mandrillapp.com/contact/abuse=3Fid=3D30504962.b7cc5ea0d65f4521929adac485c86e7e?=
X-Mandrill-User: md_30504962
Feedback-ID: 30504962:30504962.20240628:md
Date: Fri, 28 Jun 2024 08:57:39 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hi Lonnie,

There are dedicated Matrix channels for chat:
https://xenproject.org/help/matrix/

Teddy

On 27/06/2024 at 15:38, Lonnie wrote:
> Hi Teddy,
> 
> You are actually on track with what I was thinking in this area, which
> initially gave me 2 main ideas:
> 
> 1. Take the NOVA Microhypervisor (very small TCB at only 5K LOC) and try
> to get QEMU or Bhyve integrated as the VMM, which would require a huge
> amount of development time.  The Genode framework has a
> configuration/compile approach that uses NOVA with a custom VirtualBox,
> but I did not want to go that route.
> 
> 2. Take the Alpine XEN distro as the base and then update the dated Xoar
> patches, which effectively break Dom0 into 9 Service and Driver Mini/Nano
> VMs, which I was thinking about further setting up as ultra-thin
> Unikernels (MirageOS, IncludeOS, etc.), but I am still researching.
> 
> My effort is to make a purely ultra-thin, RAM-based Xen hypervisor that
> boots UEFI on modern systems, plus a number of other features if all goes
> well.
> 
> Your idea of QEMU as a Unikernel would probably really work for both XEN
> and NOVA (with a bit of work on the NOVA side). I actually liked NOVA and
> experimented with it a while back, and was able to produce a very
> lightweight Microhypervisor ISO that would boot, do some simple things,
> and even fire up lightweight Linux instances, although with very limited
> capabilities, of course, just to see if it worked. Unfortunately, that
> direction, although very interesting, would definitely take too much
> development to make a viable and more complete hypervisor.  I did like
> that you could start with no VM, easily start one or more, and then use
> hot-keys to flip between consoles. That was pretty cool and is something
> I would like to work into this XEN effort as well, maybe via some config
> file in the Xen.efi directory, but I am still thinking about it.
> 
> I think that perhaps the Alpine-XEN-Xoar approach could be beneficial,
> but XEN plus supporting libraries is still a bit larger than I would have
> hoped, although you get more capabilities and more of a solid hypervisor
> as well, I think.  Maybe we can chat more about things if you like.
> 
> Best,
> Lonnie
> On Thursday, June 27, 2024 14:38 CEST, Teddy Astie
> <teddy.astie@vates.tech> wrote:
> 
>> Hi Lonnie,
>>
>> On 27/06/2024 at 11:33, Lonnie Cumberland wrote:
>>> [What] I am working towards is to have everything as a RAM-based
>>> ultra-lightweight thin hypervisor.  I looked over ACRN, the NOVA
>>> Microhypervisor (Headron, Beadrock Udo), Rust-Shyper, Bareflank-MicroV,
>>> and many other development efforts, but it seems that Xen is the most
>>> advanced for my purposes here.
>>>
>>
>> You can have a disk-less (or ramdisk based) distro supporting Xen with
>> Alpine Linux (with Xen flavour). It does still use Dom0 with all its
>> responsibilities though.
>>
>>>>> Currently, I am investigating and researching the idea of
>>>>> "disaggregating" Dom0 and have the Xoar Xen patches ("Breaking Up is
>>>>> Hard to Do: Security and Functionality in a Commodity Hypervisor",
>>>>> 2011) available, which were developed against version 22155 of
>>>>> xen-unstable. The Linux patches are against Linux with pvops
>>>>> 2.6.31.13 and were developed on a standard Ubuntu 10.04 install. My
>>>>> effort would also be to update these patches.
>>>>>
>>>>> I have been able to locate the Xen "Dom0 Disaggregation" wiki page
>>>>> (https://wiki.xenproject.org/wiki/Dom0_Disaggregation) and am reading
>>>>> up on things now, but wanted to ask the developers list about any
>>>>> experience you may have had in this area, since the research
>>>>> objective is to integrate Xoar with the latest Xen 4.20, if possible,
>>>>> and to take it further to basically eliminate Dom0 altogether, with
>>>>> individual Mini-OS or Unikernel "Service and Driver VMs" instead
>>>>> that are loaded at UEFI boot time.
>>
>> The latest effort I have in mind regarding this idea of moving
>> functionality out of Dom0 is QEMU as a Unikernel (using Unikraft); there
>> were some discussions on this in Matrix and at the Xen Summit, and it is
>> currently a work in progress on the Unikraft side.
>>
>> Teddy
>>
>>
>> Teddy Astie | Vates XCP-ng Intern
>>
>> XCP-ng & Xen Orchestra - Vates solutions
>>
>> web: https://vates.tech
>>
> 


Teddy Astie | Vates XCP-ng Intern

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech



From xen-devel-bounces@lists.xenproject.org Fri Jun 28 13:52:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 13:52:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750642.1158717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNC1G-0005hE-Ns; Fri, 28 Jun 2024 13:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750642.1158717; Fri, 28 Jun 2024 13:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNC1G-0005h7-LD; Fri, 28 Jun 2024 13:52:06 +0000
Received: by outflank-mailman (input) for mailman id 750642;
 Fri, 28 Jun 2024 13:52:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNC1F-0005gx-F2; Fri, 28 Jun 2024 13:52:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNC1F-00088a-AU; Fri, 28 Jun 2024 13:52:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNC1E-00042y-SK; Fri, 28 Jun 2024 13:52:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNC1E-0005C8-Rq; Fri, 28 Jun 2024 13:52:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=E+v2BKqaaZogrW4oaERDWy72fag5NPaD+lajAtZrgAc=; b=LyW36HKXOU06zTIARjGf8B9gtu
	9qeLMqVJox7TLkYnP+l2uWm/OBW27qO5pMDrUt+sFiMaJ11LfRuw9Nl1Z7ZuBf1PFo5uIg6L0E8hB
	FcLP2VIO5eXG5xTFd42oV6m3lN6ZEmH+1R/wmfFWC0XNYfreKo/aABzJ33SicuDAHY2k=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186540-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186540: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-raw:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6d6444ba82053c716fb5ac83346202659023044e
X-Osstest-Versions-That:
    linux=24ca36a562d63f1bff04c3f11236f52969c67717
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 13:52:04 +0000

flight 186540 linux-linus real [real]
flight 186545 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186540/
http://logs.test-lab.xenproject.org/osstest/logs/186545/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           8 xen-boot                 fail REGR. vs. 186528
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 186528

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-raw       8 xen-boot            fail pass in 186545-retest
 test-armhf-armhf-xl-credit1   8 xen-boot            fail pass in 186545-retest
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186545-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186545 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186545 never pass
 test-armhf-armhf-xl-raw     14 migrate-support-check fail in 186545 never pass
 test-armhf-armhf-xl-raw 15 saverestore-support-check fail in 186545 never pass
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186528
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186528
 test-armhf-armhf-xl-rtds      8 xen-boot                     fail  like 186528
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186528
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6d6444ba82053c716fb5ac83346202659023044e
baseline version:
 linux                24ca36a562d63f1bff04c3f11236f52969c67717

Last test of basis   186528  2024-06-26 22:42:23 Z    1 days
Failing since        186530  2024-06-27 08:02:12 Z    1 days    2 attempts
Testing same since   186540  2024-06-27 22:14:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  aigourensheng <shechenglong001@gmail.com>
  Aivaz Latypov <reichaivaz@gmail.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Hölzl <alexander.hoelzl@gmx.net>
  Alexei Starovoitov <ast@kernel.org>
  Alibek Omarov <a1ba.omarov@gmail.com>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andrei Simion <andrei.simion@microchip.com>
  Andrew Bresticker <abrestic@rivosinc.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@gmail.com>
  Andrii Nakryiko <andrii@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arun Ramadoss <arun.ramadoss@microchip.com>
  Aryan Srivastava <aryan.srivastava@alliedtelesis.co.nz>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Barry Song <v-songbaohua@oppo.com>
  Bing-Jhong Billy Jheng <billy@starlabs.sg>
  Bjørn Mork <bjorn@mork.no>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chen-Yu Tsai <wenst@chromium.org>
  Christoph Hellwig <hch@lst.de>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniele Palmas <dnlplm@gmail.com>
  Daniil Dulov <d.dulov@aladdin.ru>
  David Hildenbrand <david@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dirk Su <dirk.su@canonical.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org> # X13s
  Eduard Zingerman <eddyz87@gmail.com>
  Elinor Montmasson <elinor.montmasson@savoirfairelinux.com>
  Enguerrand de Ribaucourt <enguerrand.de-ribaucourt@savoirfairelinux.com>
  Eric Dumazet <edumazet@google.com>
  Filipe Manana <fdmanana@suse.com>
  Frank Li <Frank.Li@nxp.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Guillaume Nault <gnault@redhat.com>
  Guo Ren <guoren@kernel.org>
  Halil Pasic <pasic@linux.ibm.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heiko Carstens <hca@linux.ibm.com> # s390
  Helge Deller <deller@gmx.de>
  Hsin-Te Yuan <yuanhsinte@chromium.org>
  Hugh Dickins <hughd@google.com>
  Ido Schimmel <idosch@nvidia.com>
  Ilya Maximets <i.maximets@ovn.org>
  Jack Yu <jack.yu@realtek.com>
  Jai Luthra <j-luthra@ti.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Sokolowski <jan.sokolowski@intel.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jeff Xu <jeffxu@chromium.org>
  Jens Remus <jremus@linux.ibm.com>
  Jesper Dangaard Brouer <hawk@kernel.org>
  Jianguo Wu <wujianguo@chinatelecom.cn>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
  Karen Ostrowska <karen.ostrowska@intel.com>
  Kory Maincent <kory.maincent@bootlin.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Linus Lüssing <linus.luessing@c0d3.blue>
  Linus Torvalds <torvalds@linux-foundation.org>
  luoxuanqiang <luoxuanqiang@kylinos.cn>
  Ma Ke <make24@iscas.ac.cn>
  Maciej Strozek <mstrozek@opensource.cirrus.com>
  Maksym Yaremchuk <maksymy@nvidia.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marco Elver <elver@google.com>
  Mark Brown <broonie@kernel.org>
  Matt Bobrowski <mattbobrowski@google.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Naohiro Aota <naohiro.aota@wdc.com>
  Neal Cardwell <ncardwell@google.com>
  Nick Child <nnac123@linux.ibm.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Pengfei Xu <pengfei.xu@intel.com>
  Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Primoz Fiser <primoz.fiser@norik.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qu Wenruo <wqu@suse.com>
  Ratheesh Kannoth <rkannoth@marvell.com>
  Richard Fitzgerald <rf@opensource.cirrus.com>
  Shannon Nelson <shannon.nelson@amd.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Shigeru Yoshida <syoshida@redhat.com>
  Shuming Fan <shumingf@realtek.com>
  Simon Wunderlich <sw@simonwunderlich.de>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suman Ghosh <sumang@marvell.com>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Suren Baghdasaryan <surenb@google.com>
  Sven Eckelmann <sven@narfation.org>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas GENTY <tomlohave@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tristram Ha <tristram.ha@microchip.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vijendar Mukunda <Vijendar.Mukunda@amd.com>
  Vitor Soares <vitor.soares@toradex.com>
  Vlastimil Babka <vbabka@suse.cz>
  Vyacheslav Frantsishko <itmymaill@gmail.com>
  Xin Long <lucien.xin@gmail.com>
  yangge <yangge1116@126.com>
  Yeoreum Yun <yeoreum.yun@arm.com>
  Yonghong Song <yonghong.song@linux.dev>
  Yunseong Kim <yskelg@gmail.com>
  Zhang Yi <zhangyi@everest-semi.com>
  Zhaoyang Huang <zhaoyang.huang@unisoc.com>
  Zi Yan <ziy@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 5331 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 14:31:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 14:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750662.1158737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdJ-0001yn-Qy; Fri, 28 Jun 2024 14:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750662.1158737; Fri, 28 Jun 2024 14:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdJ-0001xe-K0; Fri, 28 Jun 2024 14:31:25 +0000
Received: by outflank-mailman (input) for mailman id 750662;
 Fri, 28 Jun 2024 14:31:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KV+V=N6=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sNCdI-0001vg-PW
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 14:31:24 +0000
Received: from mail-ed1-x529.google.com (mail-ed1-x529.google.com
 [2a00:1450:4864:20::529])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1806494c-355b-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 16:31:23 +0200 (CEST)
Received: by mail-ed1-x529.google.com with SMTP id
 4fb4d7f45d1cf-57d1d614049so953588a12.1
 for <xen-devel@lists.xenproject.org>; Fri, 28 Jun 2024 07:31:23 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72aaf63390sm84944566b.69.2024.06.28.07.31.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Jun 2024 07:31:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1806494c-355b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719585082; x=1720189882; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=0HcoQ3jxCZ06h2GCy+THa2ZWR0EVbPJaxvXputzPwx0=;
        b=BziVxMbWZMiCPHzA6+OkT0V4E+B6qYrEQqWFeRux6NsXy/SUmDo4wLU0zafrpyCsVx
         nD2ZfNWSQMHlNYq5OVPGEutK2h9KHhD0SvDRUK0md+v94ol3E/30UvwulSqbJoQKWc67
         6UVfsz2sF2jk5HYyxO0nXCSYFQVQYpe4A7tnY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719585082; x=1720189882;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=0HcoQ3jxCZ06h2GCy+THa2ZWR0EVbPJaxvXputzPwx0=;
        b=g+nUO7xHhvel+mBgPyKVVtCnGsCCHBMHsVypPDjIATT2zmZryP2ww46u3eayJsup3A
         sD4kQEntRgVpq46koieEzHsxOwuAHfL2XnpsgkEjpRmneHCoFWkntcItl+jzxgTvLUis
         SdFuvAhmC6YcRHZM/xL+PGFr+OmaMOj8oyZLUQ6HenoeZIoUCe1JiFA2mlum60c+1q/f
         kbFm64EMOgFaTsnF/y8vBQlW8YHZCBa/XUrboICzqUvdN4NV6tV4BvZU6cYyfs1/W9+h
         bW7SrrLe+saobajFHPk2/y7v59OiIO6hduV4Wc3o4x6bFQKRjkRLfD2L00vawzFToP67
         3jww==
X-Gm-Message-State: AOJu0YyykN652JVhpfgdjNsmzKnUB8xijy4F2GOQ4EC6ZPB3NEqyYoet
	fx8ilydrql/owRivU4k0eUIqRikLZe2m7Ago/Wxn0YPM2fV7kPLGHksHMBXrCVYGA/jtuxEhxyE
	h+dk=
X-Google-Smtp-Source: AGHT+IF9o3Kg7jr3zKcI7qTmqOEqOBoKhlT5CXjFMwttb5naf+XjdFMl8lOQTAOMnf7ijdGKtzbaQw==
X-Received: by 2002:a17:906:f59b:b0:a72:b34f:e15b with SMTP id a640c23a62f3a-a72b34fe2e9mr130747566b.57.1719585082472;
        Fri, 28 Jun 2024 07:31:22 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Frediano Ziglio <frediano.ziglio@cloud.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 2/3] tools/libxs: Fix CLOEXEC handling in get_socket()
Date: Fri, 28 Jun 2024 15:31:15 +0100
Message-Id: <20240628143116.1044976-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240628143116.1044976-1-andrew.cooper3@citrix.com>
References: <20240628143116.1044976-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

get_socket() opens a socket, then uses fcntl() to set CLOEXEC.  This is racy
with exec().

Open the socket with SOCK_CLOEXEC.  Use the same compatibility strategy as
O_CLOEXEC on ancient versions of Linux.

Reported-by: Frediano Ziglio <frediano.ziglio@cloud.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony@xenproject.org>
CC: Juergen Gross <jgross@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Frediano Ziglio <frediano.ziglio@cloud.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 tools/libs/store/xs.c | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 037e79d98b58..11a766c50887 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -44,6 +44,10 @@
 #define O_CLOEXEC 0
 #endif
 
+#ifndef SOCK_CLOEXEC
+#define SOCK_CLOEXEC 0
+#endif
+
 struct xs_stored_msg {
 	XEN_TAILQ_ENTRY(struct xs_stored_msg) list;
 	struct xsd_sockmsg hdr;
@@ -207,16 +211,14 @@ int xs_fileno(struct xs_handle *h)
 static int get_socket(const char *connect_to)
 {
 	struct sockaddr_un addr;
-	int sock, saved_errno, flags;
+	int sock, saved_errno;
 
-	sock = socket(PF_UNIX, SOCK_STREAM, 0);
+	sock = socket(PF_UNIX, SOCK_STREAM | SOCK_CLOEXEC, 0);
 	if (sock < 0)
 		return -1;
 
-	if ((flags = fcntl(sock, F_GETFD)) < 0)
-		goto error;
-	flags |= FD_CLOEXEC;
-	if (fcntl(sock, F_SETFD, flags) < 0)
+	/* Compat for non-SOCK_CLOEXEC environments.  Racy. */
+	if (!SOCK_CLOEXEC && !set_cloexec(sock))
 		goto error;
 
 	addr.sun_family = AF_UNIX;
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 28 14:31:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 14:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750661.1158730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdJ-0001w4-GY; Fri, 28 Jun 2024 14:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750661.1158730; Fri, 28 Jun 2024 14:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdJ-0001vx-Dw; Fri, 28 Jun 2024 14:31:25 +0000
Received: by outflank-mailman (input) for mailman id 750661;
 Fri, 28 Jun 2024 14:31:24 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KV+V=N6=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sNCdI-0001vg-3X
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 14:31:24 +0000
Received: from mail-ej1-x62b.google.com (mail-ej1-x62b.google.com
 [2a00:1450:4864:20::62b])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 17d9ff0d-355b-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 16:31:23 +0200 (CEST)
Received: by mail-ej1-x62b.google.com with SMTP id
 a640c23a62f3a-a724a8097deso83092866b.1
 for <xen-devel@lists.xenproject.org>; Fri, 28 Jun 2024 07:31:23 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72aaf63390sm84944566b.69.2024.06.28.07.31.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Jun 2024 07:31:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17d9ff0d-355b-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719585082; x=1720189882; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=Rfv1TRT8DgfBFjODV1M7RVz/7dh85giKSn0Y1bjukyE=;
        b=bkOg5IW+rHEEaDaqvba9+5iGflgAT1BjJ+MC6Ag4h1v1Rqr3BLpGfCAgnpibAp3LfH
         vfjZmjCAd4jTXoawwMKe9ykxHCDSjj8AMfSsimG3dTAUaPrDk/9ubKDjqwFoHofT0pM+
         XusQ2/PbmcUMaFgU/ix+tzHVZOB47kfSlJ+G0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719585082; x=1720189882;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=Rfv1TRT8DgfBFjODV1M7RVz/7dh85giKSn0Y1bjukyE=;
        b=sjrXKQpxXRX8vwlzzTvYabxjrTqvSvHe2qZITYmgMiYghk0Rz2PvyNvoxj8A8aOVea
         BMHKjYjGbHZWmsgZXwAPwryoPByYZUzZqErxDEi6t/JQlPV/l5SWnDbDX1JwKIMjBsTl
         7AwBkq9+9VDUZj7aBkAVbibMc9HYoxOynLLNA3G3o10H5jrtsu+r+/Fi9EBk+90NLS37
         0f1nk6jz0KI8ZU9dSzcoHuVLCswRpNLRBphWQfJtUM48RD2ng4y7CV0DWSkyuhFiufIl
         ZW1JkDBv9qr8cjVcMdUtUUHEcAlC3EFUX9rHp/0cHHWvPRUCuW3xWZLNZ27PbGh2zb01
         MgRA==
X-Gm-Message-State: AOJu0YzLOsjlja6uClcL6znYa1F53XlrBLAFslKCoDAOG/85yrQBrq3+
	++PQ5u8UXOaef1lB72I2rYaEAHVl0R9aGPKvW94MLRyDFlVnqToUY97wK71kiSxt5/Nktvx9L1K
	iUuY=
X-Google-Smtp-Source: AGHT+IFPs7aSrfbhtvCVV1yH+c3angPA7Ax0G4Igm+BQEAUJNpmkSc8KgjnrxHqYCsmPzCLZvUgy7g==
X-Received: by 2002:a17:906:3651:b0:a72:50f7:3c6f with SMTP id a640c23a62f3a-a7250f744ccmr856963266b.14.1719585081777;
        Fri, 28 Jun 2024 07:31:21 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Frediano Ziglio <frediano.ziglio@cloud.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 1/3] tools/libxs: Fix CLOEXEC handling in get_dev()
Date: Fri, 28 Jun 2024 15:31:14 +0100
Message-Id: <20240628143116.1044976-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240628143116.1044976-1-andrew.cooper3@citrix.com>
References: <20240628143116.1044976-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Move the O_CLOEXEC compatibility outside of an #ifdef USE_PTHREAD block.

Introduce set_cloexec() to wrap fcntl() setting FD_CLOEXEC.  It will be reused
for other CLOEXEC fixes too.

Use set_cloexec() when O_CLOEXEC isn't available as a best-effort fallback.

Fixes: f4f2f3402b2f ("tools/libxs: Open /dev/xen/xenbus fds as O_CLOEXEC")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony@xenproject.org>
CC: Juergen Gross <jgross@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Frediano Ziglio <frediano.ziglio@cloud.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 tools/libs/store/xs.c | 38 ++++++++++++++++++++++++++++++++------
 1 file changed, 32 insertions(+), 6 deletions(-)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 14985150737e..037e79d98b58 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -40,6 +40,10 @@
 #include <xentoolcore_internal.h>
 #include <xen_list.h>
 
+#ifndef O_CLOEXEC
+#define O_CLOEXEC 0
+#endif
+
 struct xs_stored_msg {
 	XEN_TAILQ_ENTRY(struct xs_stored_msg) list;
 	struct xsd_sockmsg hdr;
@@ -54,10 +58,6 @@ struct xs_stored_msg {
 #include <dlfcn.h>
 #endif
 
-#ifndef O_CLOEXEC
-#define O_CLOEXEC 0
-#endif
-
 struct xs_handle {
 	/* Communications channel to xenstore daemon. */
 	int fd;
@@ -176,6 +176,16 @@ static bool setnonblock(int fd, int nonblock) {
 	return true;
 }
 
+static bool set_cloexec(int fd)
+{
+	int flags = fcntl(fd, F_GETFD);
+
+	if (flags < 0)
+		return false;
+
+	return fcntl(fd, F_SETFD, flags | FD_CLOEXEC) >= 0;
+}
+
 int xs_fileno(struct xs_handle *h)
 {
 	char c = 0;
@@ -230,8 +240,24 @@ static int get_socket(const char *connect_to)
 
 static int get_dev(const char *connect_to)
 {
-	/* We cannot open read-only because requests are writes */
-	return open(connect_to, O_RDWR | O_CLOEXEC);
+	int fd, saved_errno;
+
+	fd = open(connect_to, O_RDWR | O_CLOEXEC);
+	if (fd < 0)
+		return -1;
+
+	/* Compat for non-O_CLOEXEC environments.  Racy. */
+	if (!O_CLOEXEC && !set_cloexec(fd))
+		goto error;
+
+	return fd;
+
+error:
+	saved_errno = errno;
+	close(fd);
+	errno = saved_errno;
+
+	return -1;
 }
 
 static int all_restrict_cb(Xentoolcore__Active_Handle *ah, domid_t domid) {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 28 14:31:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 14:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750663.1158741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdK-00025O-0f; Fri, 28 Jun 2024 14:31:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750663.1158741; Fri, 28 Jun 2024 14:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdJ-000237-Si; Fri, 28 Jun 2024 14:31:25 +0000
Received: by outflank-mailman (input) for mailman id 750663;
 Fri, 28 Jun 2024 14:31:25 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KV+V=N6=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sNCdI-0001vh-Vg
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 14:31:24 +0000
Received: from mail-ej1-x633.google.com (mail-ej1-x633.google.com
 [2a00:1450:4864:20::633])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 174174db-355b-11ef-b4bb-af5377834399;
 Fri, 28 Jun 2024 16:31:22 +0200 (CEST)
Received: by mail-ej1-x633.google.com with SMTP id
 a640c23a62f3a-a725a918edaso110637666b.3
 for <xen-devel@lists.xenproject.org>; Fri, 28 Jun 2024 07:31:22 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72aaf63390sm84944566b.69.2024.06.28.07.31.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Jun 2024 07:31:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 174174db-355b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719585081; x=1720189881; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=usddWHN3VvPsP/+TkAkPUgsqr9zRPXC4Wl8Yt4kv95M=;
        b=kfwTZ2bDwYTgpRat4GtpxRd3eE7LtfwJGrtP1qizxrV8TKotXmsQTwYuCsOcQ8Wzac
         yeKBkl9lx8TD7t0rGq4vC4IcDUgJRKAjarYtDRyFoE9uCp7OXZxlWA9oWgtcCEJ4AkM1
         IKs6SgH+sFT5pDLwU0YjCJ3B+AmcTbzl8B4Zc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719585081; x=1720189881;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=usddWHN3VvPsP/+TkAkPUgsqr9zRPXC4Wl8Yt4kv95M=;
        b=oFZ1ZIzm+PIXsppMGLw6kvE7+WP13eDY+BLytvECm9gQhagljFuFTByWuQYdRuxY9g
         /7RkT1O9pKW5RfuXh5GdasTff+HCsTstNqsD0b3Uh6tXlNn1wZQaA34wdUNMEkIyTS0R
         IZq+XDV3OmPQnLy5Se2/VgJpE2lO9PHZUC9zI3Lt8VaHkHCQ9IHg75+0WhxvMIPcDlYq
         xTLuaWqHu0KwvJIATbYMypnySEqbRru5b/eHuQ+UB7fWaZ39x2FdQ67bvN5kPjXlvASd
         W7FU00Xk+gPyKZQC4JUtx+3HB2IOEG7PxCReVSUIKwLsz33x9pPnjwQoJH7NHjFczuL8
         DImw==
X-Gm-Message-State: AOJu0YzPlcq0wqaWeuptxZf26xCe/zSgDp5yKoLj3OG0Lp3F116B0pee
	qG9XVE+x9HAfbUNLqJ1vMd4MRnHN0Fw0lO0UGiOw59LLGOM9zL5GiTezNWN7y9d3mnAM6YDPs7T
	ggUA=
X-Google-Smtp-Source: AGHT+IH+iQ4VsnMz7FBzVjp18wF8mVVBRIpazjLkDFiXvuBn4RK5HyR1uOfGgf4r+KDs01zo4sdjdQ==
X-Received: by 2002:a17:906:71db:b0:a6f:50ae:e02 with SMTP id a640c23a62f3a-a7245b85169mr1403177766b.4.1719585081193;
        Fri, 28 Jun 2024 07:31:21 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Frediano Ziglio <frediano.ziglio@cloud.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19(?) 0/3] tools/libxs: More CLOEXEC fixes
Date: Fri, 28 Jun 2024 15:31:13 +0100
Message-Id: <20240628143116.1044976-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

More fixes for CLOEXEC handling in libxenstore.  These are aimed at 4.19,
because the first attempt to fix this wasn't complete.

libxl is far worse, but I don't have time to get started on that mess.

Andrew Cooper (3):
  tools/libxs: Fix CLOEXEC handling in get_dev()
  tools/libxs: Fix CLOEXEC handling in get_socket()
  tools/libxs: Fix CLOEXEC handling in xs_fileno()

 tools/config.h.in     |  3 ++
 tools/configure       | 12 ++++++++
 tools/configure.ac    |  2 ++
 tools/libs/store/xs.c | 68 ++++++++++++++++++++++++++++++++++---------
 4 files changed, 72 insertions(+), 13 deletions(-)

-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 28 14:31:30 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 14:31:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750664.1158760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdM-0002dR-9k; Fri, 28 Jun 2024 14:31:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750664.1158760; Fri, 28 Jun 2024 14:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCdM-0002dI-78; Fri, 28 Jun 2024 14:31:28 +0000
Received: by outflank-mailman (input) for mailman id 750664;
 Fri, 28 Jun 2024 14:31:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KV+V=N6=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sNCdK-0001vh-5U
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 14:31:26 +0000
Received: from mail-ed1-x530.google.com (mail-ed1-x530.google.com
 [2a00:1450:4864:20::530])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 18abf6f8-355b-11ef-b4bb-af5377834399;
 Fri, 28 Jun 2024 16:31:24 +0200 (CEST)
Received: by mail-ed1-x530.google.com with SMTP id
 4fb4d7f45d1cf-584ef6c07c2so2726740a12.1
 for <xen-devel@lists.xenproject.org>; Fri, 28 Jun 2024 07:31:24 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72aaf63390sm84944566b.69.2024.06.28.07.31.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Jun 2024 07:31:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18abf6f8-355b-11ef-b4bb-af5377834399
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719585084; x=1720189884; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:from:to:cc:subject:date
         :message-id:reply-to;
        bh=NdLvmDIa3nEZMz+AIRiV7A13HKXJr2DbGh0lxxYahWU=;
        b=EtNWVEfcwMfiX50+98HpaJq5RKwhtknWaSiwXeI9IdUB1HwE4jTNI+IYiq5790m1Wt
         gfGJvZg5XeLtdM8LrYUXuBn7v83Fib7YpQB3D1uIyKiCwRqRzlvm/Q4yqw+NR81fgcVT
         MCH5ybwKPUM+RkusUEh8T/1scXRKbdCbc1/1A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719585084; x=1720189884;
        h=content-transfer-encoding:mime-version:references:in-reply-to
         :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc
         :subject:date:message-id:reply-to;
        bh=NdLvmDIa3nEZMz+AIRiV7A13HKXJr2DbGh0lxxYahWU=;
        b=l7bopm46xvouw0x8Iy0U2N6Q8BtizvATp4z5UC6oO7UtknQyNgByoLpEYrIhxmZ3fi
         yyYP+R6QfHf9WiKgKToQrybFc0VttozU7DI2ojWQeKgNKHBHqUlmeyIvSlXyCg8TED+m
         akdLkxOJLp3+N8NIDiDKxEyK1RCfNc6MB9HNZNMFckLGtZy0LNtWMdrRiu3xzHBdvVb9
         p51RSs3JoaNYy1Q3PVyxiG7DPqWLdVMoERholuAlNw6JmhflYW30NVHJsIrN8fcTkwAh
         suh9y6f2y04Do7eTkP2GgYI9e79xWMKPy7b2eQHWS0IdvhwQGRvQ9VopogQR42KRb6a+
         taTQ==
X-Gm-Message-State: AOJu0YxeIN6htE9167GUG+KCvpRnVNLWBaA4VwiuMJR7nODuf0c/j91s
	3lhm1KQ5+Oc+IHrG7ZVRq9aGz/7SCCFGVSuSsVX3weuHIegymLcDQIMtTRpwTiAuDdAJ0Vw87mb
	d8ek=
X-Google-Smtp-Source: AGHT+IH0Vk3s+qgCV1Zk3GHG2PVLElkv8IK3qGoOJzLuF5QlAFWgK8137FGB4jtTGQEENAUKrepxKw==
X-Received: by 2002:a17:906:5fd3:b0:a6f:b352:a74b with SMTP id a640c23a62f3a-a72aefd2d3cmr123807966b.38.1719585083864;
        Fri, 28 Jun 2024 07:31:23 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Frediano Ziglio <frediano.ziglio@cloud.com>,
	Oleksii Kurochko <oleksii.kurochko@gmail.com>
Subject: [PATCH for-4.19 3/3] tools/libxs: Fix CLOEXEC handling in xs_fileno()
Date: Fri, 28 Jun 2024 15:31:16 +0100
Message-Id: <20240628143116.1044976-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20240628143116.1044976-1-andrew.cooper3@citrix.com>
References: <20240628143116.1044976-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

xs_fileno() opens a pipe on first use to communicate between the watch thread
and the main thread.  Nothing ever sets CLOEXEC on the file descriptors.

Check for the availability of the pipe2() function with configure.  Although
pipe2() started life as Linux-only, FreeBSD and NetBSD have since gained it.

When pipe2() isn't available, try our best with pipe() and set_cloexec().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony@xenproject.org>
CC: Juergen Gross <jgross@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Frediano Ziglio <frediano.ziglio@cloud.com>
CC: Oleksii Kurochko <oleksii.kurochko@gmail.com>
---
 tools/config.h.in     |  3 +++
 tools/configure       | 12 ++++++++++++
 tools/configure.ac    |  2 ++
 tools/libs/store/xs.c | 16 +++++++++++++++-
 4 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/tools/config.h.in b/tools/config.h.in
index 0bb2fe08a143..50ad60fcb091 100644
--- a/tools/config.h.in
+++ b/tools/config.h.in
@@ -39,6 +39,9 @@
 /* Define to 1 if you have the <memory.h> header file. */
 #undef HAVE_MEMORY_H
 
+/* Define to 1 if you have the `pipe2' function. */
+#undef HAVE_PIPE2
+
 /* pygrub enabled */
 #undef HAVE_PYGRUB
 
diff --git a/tools/configure b/tools/configure
index 459bfb56520e..a6b43bfc6064 100755
--- a/tools/configure
+++ b/tools/configure
@@ -9751,6 +9751,18 @@ if test "$ax_found" = "0"; then :
 fi
 
 
+for ac_func in pipe2
+do :
+  ac_fn_c_check_func "$LINENO" "pipe2" "ac_cv_func_pipe2"
+if test "x$ac_cv_func_pipe2" = xyes; then :
+  cat >>confdefs.h <<_ACEOF
+#define HAVE_PIPE2 1
+_ACEOF
+
+fi
+done
+
+
 cat >confcache <<\_ACEOF
 # This file is a shell script that caches the results of configure
 # tests run on this system so they can be shared between configure
diff --git a/tools/configure.ac b/tools/configure.ac
index 851887080c5e..ac0fdc4314c4 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -543,4 +543,6 @@ AS_IF([test "x$pvshim" = "xy"], [
 
 AX_FIND_HEADER([INCLUDE_ENDIAN_H], [endian.h sys/endian.h])
 
+AC_CHECK_FUNCS([pipe2])
+
 AC_OUTPUT()
diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index 11a766c50887..27bd20933efd 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -190,13 +190,27 @@ static bool set_cloexec(int fd)
 	return fcntl(fd, F_SETFD, flags | FD_CLOEXEC) >= 0;
 }
 
+static int pipe_cloexec(int fds[2])
+{
+#if HAVE_PIPE2
+	return pipe2(fds, O_CLOEXEC);
+#else
+	if (pipe(fds) < 0)
+		return -1;
+	/* Best effort to set CLOEXEC. Racy. */
+	set_cloexec(fds[0]);
+	set_cloexec(fds[1]);
+	return 0;
+#endif
+}
+
 int xs_fileno(struct xs_handle *h)
 {
 	char c = 0;
 
 	mutex_lock(&h->watch_mutex);
 
-	if ((h->watch_pipe[0] == -1) && (pipe(h->watch_pipe) != -1)) {
+	if ((h->watch_pipe[0] == -1) && (pipe_cloexec(h->watch_pipe) != -1)) {
 		/* Kick things off if the watch list is already non-empty. */
 		if (!XEN_TAILQ_EMPTY(&h->watch_list))
 			while (write(h->watch_pipe[1], &c, 1) != 1)
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 28 14:50:56 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 14:50:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750694.1158774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCw5-0006VX-Sa; Fri, 28 Jun 2024 14:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750694.1158774; Fri, 28 Jun 2024 14:50:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNCw5-0006VP-Pn; Fri, 28 Jun 2024 14:50:49 +0000
Received: by outflank-mailman (input) for mailman id 750694;
 Fri, 28 Jun 2024 14:50:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNCw5-0006VG-Gq; Fri, 28 Jun 2024 14:50:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNCw5-0000v9-Eu; Fri, 28 Jun 2024 14:50:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNCw5-0005Zk-32; Fri, 28 Jun 2024 14:50:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNCw5-0000o3-2b; Fri, 28 Jun 2024 14:50:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=aQ66pEBnMp0O2n+yY6fKitOdsxNOzGQhU2eCron89UA=; b=UvgOAneIo8Aj7m06LfUkxRC6Nz
	XUTEoe8GEe7sPvDvpz3wErbx9vjt93+nKr3FjNN3b6zN3N7uZXUZPrBC7u3ZG1lTfO+YBYFj//h1O
	E3PZOaM4rwscAnKU2Qn3VI+sURchbDBmwKnnFH4M691zatM0qXLX/WpIAQnJPxCvSX4w=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186542-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186542: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=ad9a6ac440ab379f68c91f1360e3eb64d9db711e
X-Osstest-Versions-That:
    libvirt=0c94ec428fd04eb1d1d2da8d4b1ea74240b836a2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 14:50:49 +0000

flight 186542 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186542/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186507
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              ad9a6ac440ab379f68c91f1360e3eb64d9db711e
baseline version:
 libvirt              0c94ec428fd04eb1d1d2da8d4b1ea74240b836a2

Last test of basis   186507  2024-06-26 04:20:23 Z    2 days
Testing same since   186542  2024-06-28 04:20:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  Göran Uddeborg <goeran@uddeborg.se>
  Weblate <noreply@weblate.org>
  Yuri Chornoivan <yurchor@ukr.net>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   0c94ec428f..ad9a6ac440  ad9a6ac440ab379f68c91f1360e3eb64d9db711e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 15:09:17 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 15:09:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750706.1158783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNDDe-0008Qs-B6; Fri, 28 Jun 2024 15:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750706.1158783; Fri, 28 Jun 2024 15:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNDDe-0008Ql-8Z; Fri, 28 Jun 2024 15:08:58 +0000
Received: by outflank-mailman (input) for mailman id 750706;
 Fri, 28 Jun 2024 15:08:57 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KV+V=N6=cloud.com=andrew.cooper@srs-se1.protection.inumbo.net>)
 id 1sNDDd-0008Qf-5y
 for xen-devel@lists.xenproject.org; Fri, 28 Jun 2024 15:08:57 +0000
Received: from mail-ed1-x52e.google.com (mail-ed1-x52e.google.com
 [2a00:1450:4864:20::52e])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 56c96f57-3560-11ef-90a3-e314d9c70b13;
 Fri, 28 Jun 2024 17:08:56 +0200 (CEST)
Received: by mail-ed1-x52e.google.com with SMTP id
 4fb4d7f45d1cf-57cc30eaf0aso445395a12.2
 for <xen-devel@lists.xenproject.org>; Fri, 28 Jun 2024 08:08:56 -0700 (PDT)
Received: from andrewcoop.eng.citrite.net ([160.101.139.1])
 by smtp.gmail.com with ESMTPSA id
 a640c23a62f3a-a72aaf18933sm88587466b.13.2024.06.28.08.08.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 Jun 2024 08:08:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56c96f57-3560-11ef-90a3-e314d9c70b13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=citrix.com; s=google; t=1719587335; x=1720192135; darn=lists.xenproject.org;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:from:to:cc:subject:date:message-id:reply-to;
        bh=DtXpU0ZxaTHakcXA0kHKUl/3f8MRMH1twZOvIOZNWeQ=;
        b=sdW/qricJrOBA0TxTvxraL1ZzVR5HgmnUdhD3Js+O5ET5DSmeihobzmHBJ6Ib24KD5
         /xKH5S/uR3XMsgXqXv1f39ph9bsVsmgKJhXTNj9KckIsYPR6118GJHJY1IZ2mgbeHunD
         +7rcdROZKoxmmmGvmr0TK1PE1XyOe2fT/wJ4c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20230601; t=1719587335; x=1720192135;
        h=content-transfer-encoding:mime-version:message-id:date:subject:cc
         :to:from:x-gm-message-state:from:to:cc:subject:date:message-id
         :reply-to;
        bh=DtXpU0ZxaTHakcXA0kHKUl/3f8MRMH1twZOvIOZNWeQ=;
        b=xTJnGn0pFba+hLXeoxohA3PhLTlu7MiN9JtOIrcRghESzrNlAwu10d3ey6zHyUW77k
         8PTqT9AJJX5FNggG83QQnU8TvO2hMBIFDcuo8Lw34L2PWOngqHcMECP8Otz/a2HGZQcm
         A/BPHP34upv5aQKJbznXl/qTyBizkEvcW91+5Q3g/yBVYXDJ0p0oL2zzQXPuMCKGBs4B
         slhFHjokVLLEE5dMBAIG44KPkYnAWfhNa2Uq7yJpdD4IIETBaep2I+r1J1bhMgqpOrAa
         BsCG2IIMDy7D0jPhdHElqpy0eI1f1lTA6UCc0hqdJF9/W5lB0MN1BNRQj2yz033hcd+D
         sT7w==
X-Gm-Message-State: AOJu0Yw1/wAo1ueZC/4Z+sxPYvx71hoF8ImRUNjtxFh+vwPE3PluWhkG
	IJTTtgmT4UGlVcDDEjyHrMXzATQ3gSVquWS42uma1kNRqRVwCMExiGlIGsSBTHCSxKEfjF9vNQn
	Q5p4=
X-Google-Smtp-Source: AGHT+IGZWgKUcMtL9MNJ/SCWchycfUoPw8Um5c3Ik8tMVvFRFsVJ+tH12h/Iodq5mdGzEqmzJte4rw==
X-Received: by 2002:a50:d716:0:b0:56e:3293:3777 with SMTP id 4fb4d7f45d1cf-57d4bd74102mr13845522a12.17.1719587335286;
        Fri, 28 Jun 2024 08:08:55 -0700 (PDT)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
	Anthony PERARD <anthony@xenproject.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH for-4.20] tools/libxs: Drop XSTEST
Date: Fri, 28 Jun 2024 16:08:53 +0100
Message-Id: <20240628150853.1048006-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.39.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This appears to have been missed by the previous attempt in 2007.

Fixes: fed194611785 ("xenstore: Remove broken and unmaintained test code")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Anthony PERARD <anthony@xenproject.org>
CC: Juergen Gross <jgross@suse.com>
---
 tools/libs/store/xs.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index f84fd0f74c84..983d68ffd367 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -496,10 +496,6 @@ static bool read_all(int fd, void *data, unsigned int len, int nonblocking)
 	return false;
 }
 
-#ifdef XSTEST
-#define read_all read_all_choice
-#define xs_write_all write_all_choice
-#else
 /* Simple routine for writing to sockets, etc. */
 bool xs_write_all(int fd, const void *data, unsigned int len)
 {
@@ -517,7 +513,6 @@ bool xs_write_all(int fd, const void *data, unsigned int len)
 
 	return true;
 }
-#endif
 
 static int get_error(const char *errorstring)
 {
-- 
2.39.2



From xen-devel-bounces@lists.xenproject.org Fri Jun 28 17:08:45 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 17:08:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750729.1158793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNF5H-0005oy-FM; Fri, 28 Jun 2024 17:08:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750729.1158793; Fri, 28 Jun 2024 17:08:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNF5H-0005or-Cu; Fri, 28 Jun 2024 17:08:27 +0000
Received: by outflank-mailman (input) for mailman id 750729;
 Fri, 28 Jun 2024 17:08:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNF5G-0005oh-2k; Fri, 28 Jun 2024 17:08:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNF5F-0003lv-WD; Fri, 28 Jun 2024 17:08:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNF5F-0002xL-PC; Fri, 28 Jun 2024 17:08:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNF5F-00054R-Oi; Fri, 28 Jun 2024 17:08:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bT4FlEWsff8O6H3qI7PuEX84K88Gm30tJDLXkOQr1JY=; b=DOtEY1YtVYdPkjOY2XbsfIPMp5
	5FF0ssK/x0Y0wKaVszD5ahFk2g+RO1C/DdAHDaaKebiNA0SQehQ/9HLFvl5HcmED8JZ/gfBhyaybd
	blrywlpwrU07zfkYQspVWh9afCdL6U7zdM8Yx/RjSbBDOnc7EVZgp9ZNWF+OO3Y3IAN8=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186553-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 186553: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=3b2025969e6e8a2f6542996182cd4132868641c6
X-Osstest-Versions-That:
    ovmf=dc3ed379dfb62ed720e46f10b6c6d0ebda6bde5f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 17:08:25 +0000

flight 186553 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186553/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3b2025969e6e8a2f6542996182cd4132868641c6
baseline version:
 ovmf                 dc3ed379dfb62ed720e46f10b6c6d0ebda6bde5f

Last test of basis   186543  2024-06-28 06:13:12 Z    0 days
Testing same since   186553  2024-06-28 15:11:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
  dependabot[bot] <support@github.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   dc3ed379df..3b2025969e  3b2025969e6e8a2f6542996182cd4132868641c6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 19:00:26 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 19:00:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750751.1158804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNGpR-0001V9-Jx; Fri, 28 Jun 2024 19:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750751.1158804; Fri, 28 Jun 2024 19:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNGpR-0001V2-Gf; Fri, 28 Jun 2024 19:00:13 +0000
Received: by outflank-mailman (input) for mailman id 750751;
 Fri, 28 Jun 2024 19:00:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNGpP-0001Us-Q1; Fri, 28 Jun 2024 19:00:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNGpP-0005jf-MR; Fri, 28 Jun 2024 19:00:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNGpP-0006Ki-C6; Fri, 28 Jun 2024 19:00:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNGpP-0007S5-BX; Fri, 28 Jun 2024 19:00:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c2dlMOPEhfQtVvwEkfknyild2RyQIsb+i+KyM40DmC8=; b=Xgas2g1hzjYQ8Lz8s3VO+PG+jj
	y8T5n4pHHyXX3BEJI7tpemRm7ejSdGfbSQF3kC5NsAp359hM5CWgchejm5LBkfYBa0pJLeK3Hdqwd
	Hi+2Fi2vp7V7SnrDQkWEz5Hi8plhDcYrIS9/wvDWRRH1Rl7PKm2RvgOgoexDFNIG8RyA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186552-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 186552: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=08f9b1dd9c9435d4cca006e43917245710b39be3
X-Osstest-Versions-That:
    xen=6d41f5b9e112e8934f59edfd7168a36706e0341a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 19:00:11 +0000

flight 186552 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186552/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  08f9b1dd9c9435d4cca006e43917245710b39be3
baseline version:
 xen                  6d41f5b9e112e8934f59edfd7168a36706e0341a

Last test of basis   186541  2024-06-28 00:04:08 Z    0 days
Testing same since   186552  2024-06-28 15:02:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  George Dunlap <george.dunlap@cloud.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6d41f5b9e1..08f9b1dd9c  08f9b1dd9c9435d4cca006e43917245710b39be3 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri Jun 28 21:06:19 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 Jun 2024 21:06:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750774.1158813 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNInE-0006Ge-DT; Fri, 28 Jun 2024 21:06:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750774.1158813; Fri, 28 Jun 2024 21:06:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNInE-0006GX-Ad; Fri, 28 Jun 2024 21:06:04 +0000
Received: by outflank-mailman (input) for mailman id 750774;
 Fri, 28 Jun 2024 21:06:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNInD-0006GN-4g; Fri, 28 Jun 2024 21:06:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNInD-0008JK-26; Fri, 28 Jun 2024 21:06:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNInC-0001XT-HG; Fri, 28 Jun 2024 21:06:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNInC-00044q-Gr; Fri, 28 Jun 2024 21:06:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l3NN3K0jjHsWWPahMPNgx2RLfjJkGFjKhDwCr2UNqh0=; b=MMjOEpOUIqnddh2cEv2c43uu6w
	5cYKPrhevvOx4I2/GvNB1D2aoFnwdDsnmmk1Z1VAO2IjfkhjMzTudcKjs34zDqdCCqMBICjlBUF4L
	pHqB1JpALG7DGdtblWO7mfU5bXwwRjdbhDP4bsJa7RncQQw2NDYWWGerByYegCR5Fyfk=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186544-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186544: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=6d41f5b9e112e8934f59edfd7168a36706e0341a
X-Osstest-Versions-That:
    xen=402e473249cf62dd4c6b3b137aa845db0fe1453a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 Jun 2024 21:06:02 +0000

flight 186544 xen-unstable real [real]
flight 186558 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186544/
http://logs.test-lab.xenproject.org/osstest/logs/186558/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail pass in 186558-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186534
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186534
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186534
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186534
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186534
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186534
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6d41f5b9e112e8934f59edfd7168a36706e0341a
baseline version:
 xen                  402e473249cf62dd4c6b3b137aa845db0fe1453a

Last test of basis   186534  2024-06-27 15:10:49 Z    1 days
Testing same since   186544  2024-06-28 07:52:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Federico Serafini <federico.serafini@bugseng.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   402e473249..6d41f5b9e1  6d41f5b9e112e8934f59edfd7168a36706e0341a -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 29 00:29:39 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Jun 2024 00:29:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750793.1158827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNLxy-0002X0-Qr; Sat, 29 Jun 2024 00:29:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750793.1158827; Sat, 29 Jun 2024 00:29:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNLxy-0002Wt-OI; Sat, 29 Jun 2024 00:29:22 +0000
Received: by outflank-mailman (input) for mailman id 750793;
 Sat, 29 Jun 2024 00:29:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNLxx-0002Wj-3H; Sat, 29 Jun 2024 00:29:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNLxw-0003vG-UJ; Sat, 29 Jun 2024 00:29:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNLxw-0000h1-Gv; Sat, 29 Jun 2024 00:29:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNLxw-0002OA-GS; Sat, 29 Jun 2024 00:29:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eTA2eJdMoBtyi0ZQO91p5hONoMKLXABD++979VehYRs=; b=vhUYnB7KdPIT10EQe2RK+DXqDj
	OQHMobl9qxeKIBzAWshAAB0iet6inQeR0l9iAjtEZPpMOQFSxNSqCQt3Tu7GhlDQg6MCwf2Azl5M4
	5wsQwjn9QvkOs8f3DDljePvF01twZYhzw4AIQO6UKnVP3rWWBnPFnLr9VBlbLopIjxA0=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186550-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186550: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-libvirt-vhd:<job status>:broken:regression
    linux-linus:test-armhf-armhf-libvirt-vhd:host-install(5):broken:regression
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-armhf-armhf-examine:reboot:fail:regression
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5bbd9b249880dba032bffa002dd9cd12cd5af09c
X-Osstest-Versions-That:
    linux=24ca36a562d63f1bff04c3f11236f52969c67717
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Jun 2024 00:29:20 +0000

flight 186550 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186550/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-vhd    <job status>                 broken
 test-armhf-armhf-libvirt-vhd  5 host-install(5)        broken REGR. vs. 186528
 test-armhf-armhf-xl-credit2   8 xen-boot                 fail REGR. vs. 186528
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 186528
 test-armhf-armhf-examine      8 reboot                   fail REGR. vs. 186528

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186528
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186528
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5bbd9b249880dba032bffa002dd9cd12cd5af09c
baseline version:
 linux                24ca36a562d63f1bff04c3f11236f52969c67717

Last test of basis   186528  2024-06-26 22:42:23 Z    2 days
Failing since        186530  2024-06-27 08:02:12 Z    1 days    3 attempts
Testing same since   186550  2024-06-28 13:57:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Rainbolt <arainbolt@kfocus.org>
  aigourensheng <shechenglong001@gmail.com>
  Aivaz Latypov <reichaivaz@gmail.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Hölzl <alexander.hoelzl@gmx.net>
  Alexei Starovoitov <ast@kernel.org>
  Alibek Omarov <a1ba.omarov@gmail.com>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andrei Simion <andrei.simion@microchip.com>
  Andrew Bresticker <abrestic@rivosinc.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@gmail.com>
  Andrii Nakryiko <andrii@kernel.org>
  Arnd Bergmann <arnd@arndb.de>
  Arun Ramadoss <arun.ramadoss@microchip.com>
  Aryan Srivastava <aryan.srivastava@alliedtelesis.co.nz>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Barry Song <v-songbaohua@oppo.com>
  Bing-Jhong Billy Jheng <billy@starlabs.sg>
  Bjørn Mork <bjorn@mork.no>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chen-Yu Tsai <wenst@chromium.org>
  Christoph Hellwig <hch@lst.de>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniele Palmas <dnlplm@gmail.com>
  Daniil Dulov <d.dulov@aladdin.ru>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dirk Su <dirk.su@canonical.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org> # X13s
  Eduard Zingerman <eddyz87@gmail.com>
  Elinor Montmasson <elinor.montmasson@savoirfairelinux.com>
  Enguerrand de Ribaucourt <enguerrand.de-ribaucourt@savoirfairelinux.com>
  Eric Dumazet <edumazet@google.com>
  Filipe Manana <fdmanana@suse.com>
  Frank Li <Frank.Li@nxp.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Guillaume Nault <gnault@redhat.com>
  Guo Ren <guoren@kernel.org>
  Halil Pasic <pasic@linux.ibm.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heiko Carstens <hca@linux.ibm.com> # s390
  Helge Deller <deller@gmx.de>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hsin-Te Yuan <yuanhsinte@chromium.org>
  Hugh Dickins <hughd@google.com>
  Ido Schimmel <idosch@nvidia.com>
  Ilya Maximets <i.maximets@ovn.org>
  Jack Yu <jack.yu@realtek.com>
  Jai Luthra <j-luthra@ti.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Sokolowski <jan.sokolowski@intel.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jeff Xu <jeffxu@chromium.org>
  Jens Axboe <axboe@kernel.dk>
  Jens Glathe <jens.glathe@oldschoolsolutions.biz>
  Jens Remus <jremus@linux.ibm.com>
  Jesper Dangaard Brouer <hawk@kernel.org>
  Jianguo Wu <wujianguo@chinatelecom.cn>
  Johan Hovold <johan+linaro@kernel.org>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
  Julia Zhang <Julia.Zhang@amd.com>
  Karen Ostrowska <karen.ostrowska@intel.com>
  Kory Maincent <kory.maincent@bootlin.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Li Ma <li.ma@amd.com>
  Lijo Lazar <lijo.lazar@amd.com>
  Linus Lüssing <linus.luessing@c0d3.blue>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  luoxuanqiang <luoxuanqiang@kylinos.cn>
  Lyude Paul <lyude@redhat.com>
  Ma Ke <make24@iscas.ac.cn>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Maciej Strozek <mstrozek@opensource.cirrus.com>
  Maksym Yaremchuk <maksymy@nvidia.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marco Elver <elver@google.com>
  Mark Brown <broonie@kernel.org>
  Matt Bobrowski <mattbobrowski@google.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Strauss <michael.strauss@amd.com>
  Naohiro Aota <naohiro.aota@wdc.com>
  Neal Cardwell <ncardwell@google.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  Nick Child <nnac123@linux.ibm.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Pengfei Xu <pengfei.xu@intel.com>
  Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Primoz Fiser <primoz.fiser@norik.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ratheesh Kannoth <rkannoth@marvell.com>
  Richard Fitzgerald <rf@opensource.cirrus.com>
  Shannon Nelson <shannon.nelson@amd.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Shigeru Yoshida <syoshida@redhat.com>
  Shuming Fan <shumingf@realtek.com>
  Simon Wunderlich <sw@simonwunderlich.de>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Steev Klimaszewski <steev@kali.org>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suman Ghosh <sumang@marvell.com>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Suren Baghdasaryan <surenb@google.com>
  Sven Eckelmann <sven@narfation.org>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas GENTY <tomlohave@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tristram Ha <tristram.ha@microchip.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vijendar Mukunda <Vijendar.Mukunda@amd.com>
  Vitor Soares <vitor.soares@toradex.com>
  Vlastimil Babka <vbabka@suse.cz>
  Vyacheslav Frantsishko <itmymaill@gmail.com>
  Xin Long <lucien.xin@gmail.com>
  Xin Zeng <xin.zeng@intel.com>
  yangge <yangge1116@126.com>
  Yeoreum Yun <yeoreum.yun@arm.com>
  Yonghong Song <yonghong.song@linux.dev>
  Yunseong Kim <yskelg@gmail.com>
  Zhang Yi <zhangyi@everest-semi.com>
  Zhaoyang Huang <zhaoyang.huang@unisoc.com>
  Zi Yan <ziy@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 broken  
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-libvirt-vhd broken
broken-step test-armhf-armhf-libvirt-vhd host-install(5)

Not pushing.

(No revision log; it would be 5945 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat Jun 29 05:51:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Jun 2024 05:51:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750819.1158838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNQzn-0007Mz-1M; Sat, 29 Jun 2024 05:51:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750819.1158838; Sat, 29 Jun 2024 05:51:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNQzm-0007Ms-UU; Sat, 29 Jun 2024 05:51:34 +0000
Received: by outflank-mailman (input) for mailman id 750819;
 Sat, 29 Jun 2024 05:51:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNQzl-0007Mg-QF; Sat, 29 Jun 2024 05:51:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNQzl-0004p9-Nd; Sat, 29 Jun 2024 05:51:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNQzl-0005FP-9E; Sat, 29 Jun 2024 05:51:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNQzl-0004dT-8b; Sat, 29 Jun 2024 05:51:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yi+w8XiO9FCk8NDsH0Hkg6Ddu/wlT54Rndv/eHN6fFM=; b=JNnUi5ZDu+RctJWf33OrVx+4iL
	0qYhxMQrMhUa5jra4NdDX5jZwgXfYYK2hllmnldaxISzgHLlCJi/PkJdmZi52Mu0AtooubC2kvSrZ
	rDFFoppfsWOISfB3GNR/A8m3q8CQkyTXFqaoWtVhK9f+VS5W1s9bTI0+8O3Sz8f5nL0s=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186559-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186559: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=08f9b1dd9c9435d4cca006e43917245710b39be3
X-Osstest-Versions-That:
    xen=6d41f5b9e112e8934f59edfd7168a36706e0341a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Jun 2024 05:51:33 +0000

flight 186559 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186559/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186544
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186544
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186544
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186544
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186544
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186544
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  08f9b1dd9c9435d4cca006e43917245710b39be3
baseline version:
 xen                  6d41f5b9e112e8934f59edfd7168a36706e0341a

Last test of basis   186544  2024-06-28 07:52:23 Z    0 days
Testing same since   186559  2024-06-28 21:08:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bertrand Marquis <bertrand.marquis@arm.com>
  George Dunlap <george.dunlap@cloud.com>
  Juergen Gross <jgross@suse.com>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6d41f5b9e1..08f9b1dd9c  08f9b1dd9c9435d4cca006e43917245710b39be3 -> master


From xen-devel-bounces@lists.xenproject.org Sat Jun 29 11:16:48 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Jun 2024 11:16:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750866.1158851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNW4F-0007od-0t; Sat, 29 Jun 2024 11:16:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750866.1158851; Sat, 29 Jun 2024 11:16:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNW4E-0007oW-UW; Sat, 29 Jun 2024 11:16:30 +0000
Received: by outflank-mailman (input) for mailman id 750866;
 Sat, 29 Jun 2024 11:16:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNW4D-0007oM-5A; Sat, 29 Jun 2024 11:16:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNW4C-00031v-VS; Sat, 29 Jun 2024 11:16:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNW4C-0000BW-K7; Sat, 29 Jun 2024 11:16:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNW4C-0000ed-Ja; Sat, 29 Jun 2024 11:16:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vAaaASaVrkk89VIhC7xaWXyk9dCrE/717KhK1WqlVNA=; b=LaaS7Jd30HRyaF4hYqvc+89Et9
	QiDFxiOS0FJc/b8AfpEZ+LW1rGXr8pQbZv7ey0Mwo8lnpRjibEjMGnskC8NF4rMDAaag25LlBjy74
	AegSBnIgvxvOOXl3iJbRixbxLkJyvrBSSM4EWEmfMDGAsnQ1j2/mDZJqU/ni6dcWMzbI=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186562-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186562: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=de0a9f4486337d0eabacc23bd67ff73146eacdc0
X-Osstest-Versions-That:
    linux=24ca36a562d63f1bff04c3f11236f52969c67717
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Jun 2024 11:16:28 +0000

flight 186562 linux-linus real [real]
flight 186571 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186562/
http://logs.test-lab.xenproject.org/osstest/logs/186571/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit2   8 xen-boot            fail pass in 186571-retest
 test-armhf-armhf-xl-credit1   8 xen-boot            fail pass in 186571-retest
 test-armhf-armhf-libvirt      8 xen-boot            fail pass in 186571-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186571 like 186528
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186571 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186571 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186571 never pass
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 186571 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 186571 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186528
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186528
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                de0a9f4486337d0eabacc23bd67ff73146eacdc0
baseline version:
 linux                24ca36a562d63f1bff04c3f11236f52969c67717

Last test of basis   186528  2024-06-26 22:42:23 Z    2 days
Failing since        186530  2024-06-27 08:02:12 Z    2 days    4 attempts
Testing same since   186562  2024-06-29 00:44:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aapo Vienamo <aapo.vienamo@linux.intel.com>
  Aaron Rainbolt <arainbolt@kfocus.org>
  Adam Hawley <adam.james.hawley@intel.com>
  aigourensheng <shechenglong001@gmail.com>
  Aivaz Latypov <reichaivaz@gmail.com>
  Aleksandr Mishin <amishin@t-argos.ru>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Hung <alex.hung@amd.com>
  Alexander Gordeev <agordeev@linux.ibm.com>
  Alexander Hölzl <alexander.hoelzl@gmx.net>
  Alexandre Ghiti <alexghiti@rivosinc.com>
  Alexei Starovoitov <ast@kernel.org>
  Alibek Omarov <a1ba.omarov@gmail.com>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andi Shyti <andi.shyti@linux.intel.com>
  Andrei Simion <andrei.simion@microchip.com>
  Andrew Bresticker <abrestic@rivosinc.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@gmail.com>
  Andrii Nakryiko <andrii@kernel.org>
  Andy Chiu <andy.chiu@sifive.com>
  Arnd Bergmann <arnd@arndb.de>
  Arun Ramadoss <arun.ramadoss@microchip.com>
  Aryan Srivastava <aryan.srivastava@alliedtelesis.co.nz>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Barry Song <v-songbaohua@oppo.com>
  Bartosz Golaszewski <bartosz.golaszewski@linaro.org>
  Bing-Jhong Billy Jheng <billy@starlabs.sg>
  Bjørn Mork <bjorn@mork.no>
  Boyang Yu <yuboyang@dapustor.com>
  Błażej Szczygieł <spaz16@wp.pl>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Ni <nichen@iscas.ac.cn>
  Chen-Yu Tsai <wenst@chromium.org>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Daniele Palmas <dnlplm@gmail.com>
  Daniil Dulov <d.dulov@aladdin.ru>
  Dave Airlie <airlied@redhat.com>
  David Arcari <darcari@redhat.com>
  David Hildenbrand <david@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dheeraj Kumar Srivastava <dheerajkumar.srivastava@amd.com>
  Dirk Su <dirk.su@canonical.com>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org> # X13s
  Eduard Zingerman <eddyz87@gmail.com>
  Elinor Montmasson <elinor.montmasson@savoirfairelinux.com>
  Enguerrand de Ribaucourt <enguerrand.de-ribaucourt@savoirfairelinux.com>
  Eric Dumazet <edumazet@google.com>
  FahHean Lee <fahhean.lee@amd.com>
  Filipe Manana <fdmanana@suse.com>
  Frank Li <Frank.Li@nxp.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Guillaume Nault <gnault@redhat.com>
  Guo Ren <guoren@kernel.org>
  Halil Pasic <pasic@linux.ibm.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Hannes Reinecke <hare@kernel.org>
  Hannes Reinecke <hare@suse.de>
  Heiko Carstens <hca@linux.ibm.com>
  Heiko Carstens <hca@linux.ibm.com> # s390
  Helge Deller <deller@gmx.de>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hsin-Te Yuan <yuanhsinte@chromium.org>
  Hugh Dickins <hughd@google.com>
  Ian Ray <ian.ray@gehealthcare.com>
  Ido Schimmel <idosch@nvidia.com>
  Ilya Maximets <i.maximets@ovn.org>
  Jack Yu <jack.yu@realtek.com>
  Jai Luthra <j-luthra@ti.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Jan Sokolowski <jan.sokolowski@intel.com>
  Jani Nikula <jani.nikula@intel.com>
  Jann Horn <jannh@google.com>
  Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Jeff Xu <jeffxu@chromium.org>
  Jens Axboe <axboe@kernel.dk>
  Jens Glathe <jens.glathe@oldschoolsolutions.biz>
  Jens Remus <jremus@linux.ibm.com>
  Jesper Dangaard Brouer <hawk@kernel.org>
  Jesse Taube <jesse@rivosinc.com>
  Jianguo Wu <wujianguo@chinatelecom.cn>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan+linaro@kernel.org>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Jose Ignacio Tornos Martinez <jtornosm@redhat.com>
  Julia Zhang <Julia.Zhang@amd.com>
  Karen Ostrowska <karen.ostrowska@intel.com>
  Kees Cook <kees@kernel.org>
  Keith Busch <kbusch@kernel.org>
  Kent Gibson <warthog618@gmail.com>
  Kent Overstreet <kent.overstreet@linux.dev>
  Kory Maincent <kory.maincent@bootlin.com>
  Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Kuniyuki Iwashima <kuniyu@amazon.com>
  Len Brown <len.brown@intel.com>
  Li Ma <li.ma@amd.com>
  Lijo Lazar <lijo.lazar@amd.com>
  Linus Lüssing <linus.luessing@c0d3.blue>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Ying <victor.liu@nxp.com>
  Lu Baolu <baolu.lu@linux.intel.com>
  luoxuanqiang <luoxuanqiang@kylinos.cn>
  Lyude Paul <lyude@redhat.com>
  Ma Ke <make24@iscas.ac.cn>
  Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
  Maciej Strozek <mstrozek@opensource.cirrus.com>
  Maksym Yaremchuk <maksymy@nvidia.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marco Elver <elver@google.com>
  Mark Brown <broonie@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Matt Bobrowski <mattbobrowski@google.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Strauss <michael.strauss@amd.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Naohiro Aota <naohiro.aota@wdc.com>
  Nathan Chancellor <nathan@kernel.org>
  Neal Cardwell <ncardwell@google.com>
  Neil Armstrong <neil.armstrong@linaro.org>
  NeilBrown <neilb@suse.de>
  Nick Child <nnac123@linux.ibm.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Oleksij Rempel <o.rempel@pengutronix.de>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmer@rivosinc.com>
  Paolo Abeni <pabeni@redhat.com>
  Patryk Wlazlyn <patryk.wlazlyn@linux.intel.com>
  Pei Li <peili.dev@gmail.com>
  Pengfei Xu <pengfei.xu@intel.com>
  Peter Ujfalusi <peter.ujfalusi@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@linux.intel.com>
  Petr Machata <petrm@nvidia.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Primoz Fiser <primoz.fiser@norik.com>
  Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> (A Contingent worker at Intel)
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ratheesh Kannoth <rkannoth@marvell.com>
  Richard Fitzgerald <rf@opensource.cirrus.com>
  Ryan Roberts <ryan.roberts@arm.com>
  Sairaj Arun Kodilkar <sairaj.arunkodilkar@amd.com>
  Shannon Nelson <shannon.nelson@amd.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Shigeru Yoshida <syoshida@redhat.com>
  Shuming Fan <shumingf@realtek.com>
  Simon Wunderlich <sw@simonwunderlich.de>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Steev Klimaszewski <steev@kali.org>
  Stephen Brennan <stephen.s.brennan@oracle.com>
  Steven Rostedt (Google) <rostedt@goodmis.org>
  Suman Ghosh <sumang@marvell.com>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Suren Baghdasaryan <surenb@google.com>
  Sven Eckelmann <sven@narfation.org>
  syzbot+770e99b65e26fa023ab1@syzkaller.appspotmail.com
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas GENTY <tomlohave@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Tristram Ha <tristram.ha@microchip.com>
  Vasant Hegde <vasant.hegde@amd.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vijendar Mukunda <Vijendar.Mukunda@amd.com>
  Vitor Soares <vitor.soares@toradex.com>
  Vlastimil Babka <vbabka@suse.cz>
  Vyacheslav Frantsishko <itmymaill@gmail.com>
  Will Deacon <will@kernel.org>
  Xin Long <lucien.xin@gmail.com>
  Xin Zeng <xin.zeng@intel.com>
  yangge <yangge1116@126.com>
  Yeoreum Yun <yeoreum.yun@arm.com>
  Yonghong Song <yonghong.song@linux.dev>
  Yunseong Kim <yskelg@gmail.com>
  Yuntao Liu <liuyuntao12@huawei.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zhang Yi <zhangyi@everest-semi.com>
  Zhaoyang Huang <zhaoyang.huang@unisoc.com>
  Zi Yan <ziy@nvidia.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   24ca36a562d6..de0a9f448633  de0a9f4486337d0eabacc23bd67ff73146eacdc0 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat Jun 29 12:29:41 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Jun 2024 12:29:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750889.1158862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNXCm-00077B-Do; Sat, 29 Jun 2024 12:29:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750889.1158862; Sat, 29 Jun 2024 12:29:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNXCm-000774-9Q; Sat, 29 Jun 2024 12:29:24 +0000
Received: by outflank-mailman (input) for mailman id 750889;
 Sat, 29 Jun 2024 12:29:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNXCk-00076u-Tj; Sat, 29 Jun 2024 12:29:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNXCk-0004Fl-RR; Sat, 29 Jun 2024 12:29:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNXCk-000236-I3; Sat, 29 Jun 2024 12:29:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNXCk-0000PX-Hg; Sat, 29 Jun 2024 12:29:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186565-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 186565: tolerable all pass - PUSHED
X-Osstest-Failures:
    libvirt:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt:saverestore-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:saverestore-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    libvirt:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    libvirt:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    libvirt=67fdc636bfa14df6e72981717777f1f1aef2a987
X-Osstest-Versions-That:
    libvirt=ad9a6ac440ab379f68c91f1360e3eb64d9db711e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Jun 2024 12:29:22 +0000

flight 186565 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186565/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186542
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-arm64-arm64-libvirt-qcow2 15 saverestore-support-check    fail never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass

version targeted for testing:
 libvirt              67fdc636bfa14df6e72981717777f1f1aef2a987
baseline version:
 libvirt              ad9a6ac440ab379f68c91f1360e3eb64d9db711e

Last test of basis   186542  2024-06-28 04:20:44 Z    1 days
Testing same since   186565  2024-06-29 04:20:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Göran Uddeborg <goeran@uddeborg.se>
  Jiri Denemark <jdenemar@redhat.com>
  Jon Kohler <jon@nutanix.com>
  Michal Privoznik <mprivozn@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peter Krempa <pkrempa@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-libvirt                                     pass    
 test-arm64-arm64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-arm64-arm64-libvirt-qcow2                               pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/libvirt.git
   ad9a6ac440..67fdc636bf  67fdc636bfa14df6e72981717777f1f1aef2a987 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat Jun 29 15:53:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Jun 2024 15:53:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750980.1158896 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNaOG-0004Ep-CX; Sat, 29 Jun 2024 15:53:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750980.1158896; Sat, 29 Jun 2024 15:53:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNaOG-0004Ei-8y; Sat, 29 Jun 2024 15:53:28 +0000
Received: by outflank-mailman (input) for mailman id 750980;
 Sat, 29 Jun 2024 15:53:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6MN=N7=outlook.com=mhklinux@srs-se1.protection.inumbo.net>)
 id 1sNaOE-0004Ec-5L
 for xen-devel@lists.xenproject.org; Sat, 29 Jun 2024 15:53:27 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10olkn2082c.outbound.protection.outlook.com
 [2a01:111:f403:2c12::82c])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id b6a7d3ac-362f-11ef-90a3-e314d9c70b13;
 Sat, 29 Jun 2024 17:53:24 +0200 (CEST)
Received: from SN6PR02MB4157.namprd02.prod.outlook.com (2603:10b6:805:33::23)
 by DS0PR02MB9534.namprd02.prod.outlook.com (2603:10b6:8:f4::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sat, 29 Jun
 2024 15:53:07 +0000
Received: from SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df]) by SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df%2]) with mapi id 15.20.7698.033; Sat, 29 Jun 2024
 15:53:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6a7d3ac-362f-11ef-90a3-e314d9c70b13
From: Michael Kelley <mhklinux@outlook.com>
To: Petr Tesařík <petr@tesarici.cz>
CC: "robin.murphy@arm.com" <robin.murphy@arm.com>, "joro@8bytes.org"
	<joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>, "hch@lst.de"
	<hch@lst.de>, "m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Topic: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Index: AQHauIjoJBWSrCyO6UWzcncSceBiMLHbU/+AgACADXCAAzLqkA==
Date: Sat, 29 Jun 2024 15:53:07 +0000
Message-ID:
 <SN6PR02MB41578519BA2E432E1B2154FBD4D12@SN6PR02MB4157.namprd02.prod.outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
 <20240627092049.1dbec746@meshulam.tesarici.cz>
 <SN6PR02MB4157CF368284CA48061E35E9D4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
In-Reply-To:
 <SN6PR02MB4157CF368284CA48061E35E9D4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-tmn: [3jSSy6RJpec1y9yGlKuW0UsMqr8rhynj]
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SN6PR02MB4157:EE_|DS0PR02MB9534:EE_
x-ms-office365-filtering-correlation-id: 4592181b-2b33-44e2-1f08-08dc985391ed
Content-Type: text/plain; charset="iso-8859-2"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN6PR02MB4157.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: 4592181b-2b33-44e2-1f08-08dc985391ed
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jun 2024 15:53:07.7663
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR02MB9534

From: Michael Kelley <mhklinux@outlook.com> Sent: Thursday, June 27, 2024 8:05 AM
> 
> From: Petr Tesařík <petr@tesarici.cz> Sent: Thursday, June 27, 2024 12:21 AM
> 
> [...]
> 
> > > @@ -187,10 +169,13 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
> > >  	 * This barrier pairs with smp_mb() in swiotlb_find_slots().
> > >  	 */
> > >  	smp_rmb();
> > > -	return READ_ONCE(dev->dma_uses_io_tlb) &&
> > > -		swiotlb_find_pool(dev, paddr);
> > > +	if (READ_ONCE(dev->dma_uses_io_tlb))
> > > +		return swiotlb_find_pool(dev, paddr);
> > > +	return NULL;
> > >  #else
> > > -	return paddr >= mem->defpool.start && paddr < mem->defpool.end;
> > > +	if (paddr >= mem->defpool.start && paddr < mem->defpool.end)
> > > +		return &mem->defpool;
> >
> > Why are we open-coding swiotlb_find_pool() here? It does not make a
> > difference now, but if swiotlb_find_pool() were to change, both places
> > would have to be updated.
> >
> > Does it save a reload from dev->dma_io_tlb_mem? IOW is the compiler
> > unable to optimize it away?
> >
> > What about this (functionally identical) variant:
> >
> > #ifdef CONFIG_SWIOTLB_DYNAMIC
> > 	smp_rmb();
> > 	if (!READ_ONCE(dev->dma_uses_io_tlb))
> > 		return NULL;
> > #else
> > 	if (paddr < mem->defpool.start || paddr >= mem->defpool.end)
> > 		return NULL;
> > #endif
> >
> > 	return swiotlb_find_pool(dev, paddr);
> >
> 
> Yeah, I see your point. I'll try this and see what the generated code
> looks like. It might take me a couple of days to get to it.
> 

With and without CONFIG_SWIOTLB_DYNAMIC, there's no meaningful
difference in the generated code for x86 or for arm64.

I'll incorporate this change into v2.

Michael


From xen-devel-bounces@lists.xenproject.org Sat Jun 29 15:56:18 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Jun 2024 15:56:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750986.1158906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNaQy-0004ld-Pu; Sat, 29 Jun 2024 15:56:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750986.1158906; Sat, 29 Jun 2024 15:56:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNaQy-0004lW-N4; Sat, 29 Jun 2024 15:56:16 +0000
Received: by outflank-mailman (input) for mailman id 750986;
 Sat, 29 Jun 2024 15:56:15 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=R6MN=N7=outlook.com=mhklinux@srs-se1.protection.inumbo.net>)
 id 1sNaQx-0004lF-As
 for xen-devel@lists.xenproject.org; Sat, 29 Jun 2024 15:56:15 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10olkn20808.outbound.protection.outlook.com
 [2a01:111:f403:2c12::808])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 1bb759bf-3630-11ef-b4bb-af5377834399;
 Sat, 29 Jun 2024 17:56:13 +0200 (CEST)
Received: from SN6PR02MB4157.namprd02.prod.outlook.com (2603:10b6:805:33::23)
 by DS0PR02MB9534.namprd02.prod.outlook.com (2603:10b6:8:f4::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sat, 29 Jun
 2024 15:55:58 +0000
Received: from SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df]) by SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df%2]) with mapi id 15.20.7698.033; Sat, 29 Jun 2024
 15:55:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bb759bf-3630-11ef-b4bb-af5377834399
From: Michael Kelley <mhklinux@outlook.com>
To: Petr Tesařík <petr@tesarici.cz>, "hch@lst.de"
	<hch@lst.de>
CC: "robin.murphy@arm.com" <robin.murphy@arm.com>, "joro@8bytes.org"
	<joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Topic: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Index:
 AQHauIjoJBWSrCyO6UWzcncSceBiMLHa1Z9wgABomICAAA3OAIAAhiSggAAJLYCAAAn5IIAA6tuAgAAdhQCAAhqHMA==
Date: Sat, 29 Jun 2024 15:55:58 +0000
Message-ID:
 <SN6PR02MB415781789CBD6597142BEC68D4D12@SN6PR02MB4157.namprd02.prod.outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
	<SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240627060251.GA15590@lst.de>
	<20240627085216.556744c1@meshulam.tesarici.cz>
	<SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240627152513.GA23497@lst.de>
	<SN6PR02MB4157D9B1A64FF78461D6A7EDD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
	<20240628060129.GA26206@lst.de>
 <20240628094708.3a454619@meshulam.tesarici.cz>
In-Reply-To: <20240628094708.3a454619@meshulam.tesarici.cz>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-tmn: [0i+Lk+rVKmCvrjzEiZ5d4TzSqbjXxGjV]
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SN6PR02MB4157:EE_|DS0PR02MB9534:EE_
x-ms-office365-filtering-correlation-id: 1333d74f-347b-44ec-9c51-08dc9853f78d
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN6PR02MB4157.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: 1333d74f-347b-44ec-9c51-08dc9853f78d
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-originalarrivaltime: 29 Jun 2024 15:55:58.2241
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR02MB9534

From: Petr Tesařík <petr@tesarici.cz> Sent: Friday, June 28, 2024 12:47 AM
> 
> On Fri, 28 Jun 2024 08:01:29 +0200
> "hch@lst.de" <hch@lst.de> wrote:
> 
> > On Thu, Jun 27, 2024 at 04:02:59PM +0000, Michael Kelley wrote:
> > > > > Conceptually, it's still being used as a boolean function based on
> > > > > whether the return value is NULL.  Renaming it to swiotlb_get_pool()
> > > > > more accurately describes the return value, but obscures the
> > > > > intent of determining if it is a swiotlb buffer.  I'll think about it.
> > > > > Suggestions are welcome.
> > > >
> > > > Just keep is_swiotlb_buffer as a trivial inline helper that returns
> > > > bool.
> > >
> > > I don't understand what you are suggesting.  Could you elaborate a bit?
> > > is_swiotlb_buffer() can't be trivial when CONFIG_SWIOTLB_DYNAMIC
> > > is set.
> >
> > Call the main function that finds and returns the pool swiotlb_find_pool,
> > and then have an is_swiotlb_buffer wrapper that just returns bool.
> >
> 
> I see. That's not my point. After applying Michael's patch, the return
> value is always used, except here:
> 
> bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
> {
> 	return !dev_is_dma_coherent(dev) ||
> 	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
> }
> 
> I don't think this one occurrence in the entire source tree is worth a
> separate inline function.
> 
> If nobody has a better idea, I'm not really offended by keeping the
> original name, is_swiotlb_buffer(). It would just become the only
> function which starts with "is_" and provides more information in the
> return value than a simple yes/no, and I thought there must be an
> unwritten convention about that.
> 

Unless there is further discussion on this point, I'll just keep the original
"is_swiotlb_buffer()" in v2.

Michael


From xen-devel-bounces@lists.xenproject.org Sat Jun 29 16:47:36 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 Jun 2024 16:47:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.750999.1158916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNbEX-00034q-Hc; Sat, 29 Jun 2024 16:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 750999.1158916; Sat, 29 Jun 2024 16:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNbEX-00034j-Eq; Sat, 29 Jun 2024 16:47:29 +0000
Received: by outflank-mailman (input) for mailman id 750999;
 Sat, 29 Jun 2024 16:47:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNbEW-00034J-C2; Sat, 29 Jun 2024 16:47:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNbEW-0000se-6R; Sat, 29 Jun 2024 16:47:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNbEV-0008Az-TY; Sat, 29 Jun 2024 16:47:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNbEV-0008Ix-Sr; Sat, 29 Jun 2024 16:47:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JNKjYADdrJsVR/vwJZLMTi1IUsYcZV+vZxG4y4QwSio=; b=FOBciNEWGA25sn8EpRFVnCF2Pg
	6PrsKP7ZCwPQu4+GLGPjJVk8eJj9+TPRFaLeQZCz01mMbzLK97u4KrWbVL7bCTTrv02gYSFR/HLMs
	hrOhKN7MvPsZ4EhzXNkK8RpozmhqPtpfuAZNygUtPtiPiOKwARJoJWeTM0aQw/uqDEWA=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186568-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186568: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=08f9b1dd9c9435d4cca006e43917245710b39be3
X-Osstest-Versions-That:
    xen=08f9b1dd9c9435d4cca006e43917245710b39be3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 Jun 2024 16:47:27 +0000

flight 186568 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186568/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186559
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186559
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186559
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186559
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186559
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186559
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  08f9b1dd9c9435d4cca006e43917245710b39be3
baseline version:
 xen                  08f9b1dd9c9435d4cca006e43917245710b39be3

Last test of basis   186568  2024-06-29 05:53:59 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 00:06:24 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 00:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751035.1158926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNi52-0007cf-0m; Sun, 30 Jun 2024 00:06:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751035.1158926; Sun, 30 Jun 2024 00:06:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNi51-0007cX-Sj; Sun, 30 Jun 2024 00:06:07 +0000
Received: by outflank-mailman (input) for mailman id 751035;
 Sun, 30 Jun 2024 00:06:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNi50-0007cN-Js; Sun, 30 Jun 2024 00:06:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNi50-0001X2-F3; Sun, 30 Jun 2024 00:06:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNi50-00051x-2o; Sun, 30 Jun 2024 00:06:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNi50-0006jG-2M; Sun, 30 Jun 2024 00:06:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=I+i17NhtjomRSzD9lsFM5jzFdGx5rwI37o6VrXDkUKY=; b=qTR4ou0tvtZRXhsurE1qO9R+bH
	3IkulmQlP89pGAvK/t2WB00SNO6ZpHZlO05uI46RJ6AnuU0yK3GCVYQv3MXRNndLErtHXgA+NkF8s
	j5LmVaHxEqznWy1IsE9vjDHGGQSAVvICltqwF6Qk29o/UGbSOIGcnPRp+MOIpQoH2vcc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186578-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186578: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-qcow2:host-ping-check-native:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=27b31deb900dfcec60820d8d3e48f6de9ae9a18e
X-Osstest-Versions-That:
    linux=de0a9f4486337d0eabacc23bd67ff73146eacdc0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Jun 2024 00:06:06 +0000

flight 186578 linux-linus real [real]
flight 186583 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186578/
http://logs.test-lab.xenproject.org/osstest/logs/186583/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 186562

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-qcow2  6 host-ping-check-native fail pass in 186583-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186562

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-qcow2   14 migrate-support-check fail in 186583 never pass
 test-armhf-armhf-xl-qcow2 15 saverestore-support-check fail in 186583 never pass
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186562
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186562
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 186562
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186562
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass

version targeted for testing:
 linux                27b31deb900dfcec60820d8d3e48f6de9ae9a18e
baseline version:
 linux                de0a9f4486337d0eabacc23bd67ff73146eacdc0

Last test of basis   186562  2024-06-29 00:44:01 Z    0 days
Testing same since   186578  2024-06-29 16:40:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Agathe Boutmy <agathe@boutmy.com>
  Andi Shyti <andi.shyti@kernel.org>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Chandan Babu R <chandanbabu@kernel.org>
  Chen Ni <nichen@iscas.ac.cn>
  Christoph Hellwig <hch@lst.de>
  Darrick J. Wong <djwong@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hans Hu <HansHu-oc@zhaoxin.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Kamal Dasu <kamal.dasu@broadcom.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Ulf Hansson <ulf.hansson@linaro.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 664 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 30 05:56:14 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 05:56:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751061.1158936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNnXW-0005z2-KW; Sun, 30 Jun 2024 05:55:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751061.1158936; Sun, 30 Jun 2024 05:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNnXW-0005yv-HE; Sun, 30 Jun 2024 05:55:54 +0000
Received: by outflank-mailman (input) for mailman id 751061;
 Sun, 30 Jun 2024 05:55:52 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tnWF=OA=lst.de=hch@srs-se1.protection.inumbo.net>)
 id 1sNnXU-0005yp-Mk
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 05:55:52 +0000
Received: from verein.lst.de (verein.lst.de [213.95.11.211])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 641c242d-36a5-11ef-b4bb-af5377834399;
 Sun, 30 Jun 2024 07:55:50 +0200 (CEST)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 3943868AFE; Sun, 30 Jun 2024 07:55:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 641c242d-36a5-11ef-b4bb-af5377834399
Date: Sun, 30 Jun 2024 07:55:42 +0200
From: "hch@lst.de" <hch@lst.de>
To: Michael Kelley <mhklinux@outlook.com>
Cc: Petr Tesařík <petr@tesarici.cz>,
	"hch@lst.de" <hch@lst.de>,
	"robin.murphy@arm.com" <robin.murphy@arm.com>,
	"joro@8bytes.org" <joro@8bytes.org>,
	"will@kernel.org" <will@kernel.org>,
	"jgross@suse.com" <jgross@suse.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Message-ID: <20240630055542.GA5379@lst.de>
References: <20240607031421.182589-1-mhklinux@outlook.com> <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com> <20240627060251.GA15590@lst.de> <20240627085216.556744c1@meshulam.tesarici.cz> <SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com> <20240627152513.GA23497@lst.de> <SN6PR02MB4157D9B1A64FF78461D6A7EDD4D72@SN6PR02MB4157.namprd02.prod.outlook.com> <20240628060129.GA26206@lst.de> <20240628094708.3a454619@meshulam.tesarici.cz> <SN6PR02MB415781789CBD6597142BEC68D4D12@SN6PR02MB4157.namprd02.prod.outlook.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <SN6PR02MB415781789CBD6597142BEC68D4D12@SN6PR02MB4157.namprd02.prod.outlook.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Sat, Jun 29, 2024 at 03:55:58PM +0000, Michael Kelley wrote:
> Unless there is further discussion on this point, I'll just keep the original
> "is_swiotlb_buffer()" in v2.

That is the wrong name for something that returns the pool, as pointed
out before.


From xen-devel-bounces@lists.xenproject.org Sun Jun 30 07:34:11 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 07:34:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751071.1158946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNp4M-0000AD-6B; Sun, 30 Jun 2024 07:33:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751071.1158946; Sun, 30 Jun 2024 07:33:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNp4M-0000A3-3G; Sun, 30 Jun 2024 07:33:54 +0000
Received: by outflank-mailman (input) for mailman id 751071;
 Sun, 30 Jun 2024 07:33:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNp4L-00009r-Fa; Sun, 30 Jun 2024 07:33:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNp4L-0004aw-Ak; Sun, 30 Jun 2024 07:33:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNp4K-0006Qq-UW; Sun, 30 Jun 2024 07:33:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNp4K-00009v-U7; Sun, 30 Jun 2024 07:33:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FzeCnQnDefCLgsQnim27MFAEx9InzVou1AHOVLT/iv8=; b=CYbubOvcdejKcZPzUKutUAmqPY
	Cy7FX0T+7g7vy3hMzeIqkZpwioDmUdcPULEShnvZT8ykyJOAWeSkyTzj6XaaYwmcYJeFYD55i7/3G
	IkQL1YlHOCd+QCHk8D9dE26rSJoQ1cTnPSyJSsMhmvC/LIxzk7yFqnjr/IHiPNbHxorQ=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186586-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186586: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:allowable
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8282d5af7be82100c5460d093e9774140a26b96a
X-Osstest-Versions-That:
    linux=de0a9f4486337d0eabacc23bd67ff73146eacdc0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Jun 2024 07:33:52 +0000

flight 186586 linux-linus real [real]
flight 186592 linux-linus real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/186586/
http://logs.test-lab.xenproject.org/osstest/logs/186592/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 186562

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 186592-retest
 test-armhf-armhf-examine      8 reboot              fail pass in 186592-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds      8 xen-boot                 fail REGR. vs. 186562

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186562
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186562
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8282d5af7be82100c5460d093e9774140a26b96a
baseline version:
 linux                de0a9f4486337d0eabacc23bd67ff73146eacdc0

Last test of basis   186562  2024-06-29 00:44:01 Z    1 days
Failing since        186578  2024-06-29 16:40:32 Z    0 days    2 attempts
Testing same since   186586  2024-06-30 00:11:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Agathe Boutmy <agathe@boutmy.com>
  Andi Shyti <andi.shyti@kernel.org>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Chandan Babu R <chandanbabu@kernel.org>
  Chen Ni <nichen@iscas.ac.cn>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hans Hu <HansHu-oc@zhaoxin.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Kamal Dasu <kamal.dasu@broadcom.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 701 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 30 10:49:54 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 10:49:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751091.1158955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNs7m-0003si-0Y; Sun, 30 Jun 2024 10:49:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751091.1158955; Sun, 30 Jun 2024 10:49:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNs7l-0003sb-Tn; Sun, 30 Jun 2024 10:49:37 +0000
Received: by outflank-mailman (input) for mailman id 751091;
 Sun, 30 Jun 2024 10:49:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNs7k-0003sR-BT; Sun, 30 Jun 2024 10:49:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNs7k-0000XY-7u; Sun, 30 Jun 2024 10:49:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNs7j-0005eF-QW; Sun, 30 Jun 2024 10:49:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNs7j-0007Mb-Q4; Sun, 30 Jun 2024 10:49:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5TUFRbcIdhcbpfAWb0bBEQsrwnm8QHIU8r5e9QAtRPY=; b=vGhSUg+YAXLTd6a7DCT7a7jZtl
	aAbpJm8sXdnFQDE/GZaEHmwyTMDpmNFX7Zztz95FON09mh9OmW+ltGiPZ+cr2siTmPCHr0mRw2HY0
	mQpD8wn+k3rxuf/J8Y0Gswj5o9SdthU+zAznfN+dL/kGOrKMP5TtGQOYOiBI/iBFRjdE=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186588-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 186588: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=08f9b1dd9c9435d4cca006e43917245710b39be3
X-Osstest-Versions-That:
    xen=08f9b1dd9c9435d4cca006e43917245710b39be3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Jun 2024 10:49:35 +0000

flight 186588 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186588/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186568
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186568
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186568
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 186568
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186568
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186568
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  08f9b1dd9c9435d4cca006e43917245710b39be3
baseline version:
 xen                  08f9b1dd9c9435d4cca006e43917245710b39be3

Last test of basis   186588  2024-06-30 01:52:09 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751112.1158986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtl4-0007h4-38; Sun, 30 Jun 2024 12:34:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751112.1158986; Sun, 30 Jun 2024 12:34:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtl3-0007gv-W4; Sun, 30 Jun 2024 12:34:17 +0000
Received: by outflank-mailman (input) for mailman id 751112;
 Sun, 30 Jun 2024 12:34:16 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtl2-0007C0-In
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:16 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20601.outbound.protection.outlook.com
 [2a01:111:f403:2009::601])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0f941557-36dd-11ef-b4bb-af5377834399;
 Sun, 30 Jun 2024 14:34:15 +0200 (CEST)
Received: from CH2PR07CA0017.namprd07.prod.outlook.com (2603:10b6:610:20::30)
 by SA1PR12MB5671.namprd12.prod.outlook.com (2603:10b6:806:23b::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 12:34:09 +0000
Received: from CH1PEPF0000A34A.namprd04.prod.outlook.com
 (2603:10b6:610:20:cafe::57) by CH2PR07CA0017.outlook.office365.com
 (2603:10b6:610:20::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:09 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A34A.mail.protection.outlook.com (10.167.244.5) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:09 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:05 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f941557-36dd-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KukuOpk1c36Yr7w+M1qNTo0E55qx74DYBOcAk8cHFTWBEJJ6Wp3dYteAvC8l1Y+tFZUsmm+JDnPfcEvxm1JvG3T9K6S34e1uMJ1J5QIvmmwENtvQKXEkNF2cYmLgXVYHUW1Jgsqkc0NxTXusPP1scTEyyWURxRiCff5UnQhaa1n7VgE38pNr2I6pdZxOFzlU5K1K/ATl39gWP6CKouatqj4RSdMq7jc1KC+lfLyRlwME88WUmJQNWH4bfhPJVM4jOyekjrHeS8gui40gZ3/d7OqjkLKgKDZMn+i3E8xEyj22G1YRXMbxviO84R5yHxNH2myPio3Y4rXnlepLOrKVpg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=VHdK6TgicDi/JB8CaCsM4rODqFcXf5GdnKfGXKkjABM=;
 b=oPW0I5MYAQPKmN9RJsRcyeD2mw8iG2a6azH9yIhzjHxlgYKgBtrzRPBh0aPvf7Z8PUP7/0jvlMajgTeTO+yIbYtTV8fqQZzb0Mrdgc6UJzMcB1i22mn+cQDbbHGdcAV7uE/83zMaT60lcCx7Nwgb6zcqSkifDCmdQFcgNMlvYhpiNhVQXkEwxte/mH5ebkN45qGypaLtZJEblkcesEMIMbyktDb0JskZpMd2kyyaehlyEv5b4oH1ew0ReIjVLh0JYl9HVdsLGlmUye0vS1TbhdMNpc0J008w0m+pxYwc0hku3BKvzgDq5R/W2G5oUMLypAU4dnislt3AnDIOF06pAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=VHdK6TgicDi/JB8CaCsM4rODqFcXf5GdnKfGXKkjABM=;
 b=kmmW6vkYKPOa4k/LTXEbbchUFpA6RJYeLeImWqzttEnNAWuBZArbbjZxSXF06eRqXLjut4puley9c6o+4JztL5Mq0LAD0JTrIgDG3bL+J6L7eUK/f6AH4lbNNIw+V7s0hz402RqixftdfI25ytlxO7WeOqAPwqBPXdu28CDG/ik=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v11 2/8] x86/pvh: Allow (un)map_pirq when dom0 is PVH
Date: Sun, 30 Jun 2024 20:33:38 +0800
Message-ID: <20240630123344.20623-3-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A34A:EE_|SA1PR12MB5671:EE_
X-MS-Office365-Filtering-Correlation-Id: 010176f9-6078-493f-8fc9-08dc9900f072
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|376014|1800799024|7416014|36860700013|82310400026;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?cFHiZCdLtY4Vy/ONe2AizWskVRuI6fq7LeyPZAgmEQBSNrioUEwlfIrNbFIq?=
 =?us-ascii?Q?GOX7PDQ1dTwp8g3GMYzdycIC3S8Ap21GTd507QLamm/IU1BLOazEC1WyCSyt?=
 =?us-ascii?Q?bsYY4SedBb5CMUpHtg1YZKVfsy1ld3mInEcmFSHmEElIcNGl15jSMrrd1kBe?=
 =?us-ascii?Q?IglfcCtFPdU+5qXLDHJj6G4N0jWsMBW71eXmeJgqS716Om5Rht1Xp0xHeZB/?=
 =?us-ascii?Q?Md7pdC9/s7V9f3m/L+ARKNBNf6V9lzYI1KtSmrITzrxR3d3lsKVKN+05BtXF?=
 =?us-ascii?Q?q1iqOvzso/YQLGX2EAot5pdjjnjpHF9NEnFQNl4H+9IiSXvyr/Tmoo52ccGc?=
 =?us-ascii?Q?t5Ra+wlctBcVOZ9xpR2cLNJlw0bPWw6tI1RBPp8VhbkoDMAJHXwgE4LbqFji?=
 =?us-ascii?Q?m0YvLN8LL/D/CH27j2OL95/UgTCC/FfH017ltMGnz2R8pfkGHfc5gebOnd2s?=
 =?us-ascii?Q?77/m2Jdub95Z18InNkLhLyrw6EkdsyEHMp68ig+s5lCOsDgCut3AoCA9b3/M?=
 =?us-ascii?Q?pxhCP7r3/R7VmhHzW67etosFdvwMNziWjM6tXM5e5NZIiOe26k9we3IBPxCw?=
 =?us-ascii?Q?Qa9bjWlaEEWZZj+KmuTMsG4+ldY/ZmVoXR3j3FGQTQOIixn+2sfGgSbb4xQN?=
 =?us-ascii?Q?7gj4vn1q+q5s/SiTcKlneZIG5x7PxChFLe6Z5QA51/qfO54YPt3UlGOPuJ6G?=
 =?us-ascii?Q?E4F4n0eCYrbiVfi9SfiUx9VbGd7aGBJVjHNLORbXdx3QJsrdDL6CnOf+xpgd?=
 =?us-ascii?Q?8KTI/6+DwrRDDUkDjhlSrmE5U9ljdl7ZVX9hQz2ZuNTFH5wDdQ737DnGwBPB?=
 =?us-ascii?Q?v+PEx0CfHYn6eT+9EMZ3ceqbu0HtCVtbxpAiX0dzctIUXmz4Nn61275Gmzqb?=
 =?us-ascii?Q?yOGbkp3CQC227VUmSchGSldQwuiE+RrqWSmnkR62D9XY9/rhy52xVQMxzq68?=
 =?us-ascii?Q?2FqSCIJ+HnRWiSOvWtrBU//yhYzzynFek6j4vqV2eH0uK7w4IZoY6+MlNCco?=
 =?us-ascii?Q?fk/PVkKDXqmhtRwQYyluZDNBQaPSOpvbkTrcFfv3JzfNpc04SdRTP9MtovlN?=
 =?us-ascii?Q?kTUS943Z8FqHsYU9Pi1+pRHKI/6qYUQnXBrMEGK72p0AdAfRn+UCDQyVy3Zs?=
 =?us-ascii?Q?Rwz5b2U0OQR73/5IeXkkLpxw5N/8WslqxEUKM/jYeXFeD041/9YRFRNRcbgb?=
 =?us-ascii?Q?UNNDSQ1jhH0RlK7DEzeG8kL4H8xXbBoD+tq+jzMWh/bsqy+8s8G2Is+CR6RM?=
 =?us-ascii?Q?ZFBZb32twToxx//WYvjRmQaEjTO4ujyzlEqB5B0ZbEShN65MgTN+y5f+xRo0?=
 =?us-ascii?Q?4hgArKP79fyjhF/wNtKibdPn1RAAALva0lrVb9SYgDXqTYFUTIkV2EeF4pmP?=
 =?us-ascii?Q?AXIdQRH2HTPfgJzJLlarrvdYH4TzYgETXdwfMwN1tmgrDaEW+T7jnGoeJJOe?=
 =?us-ascii?Q?vc5hwxb4COKi/Ng/gNRqI81Mofl6mXaD?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(376014)(1800799024)(7416014)(36860700013)(82310400026);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:09.1919
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 010176f9-6078-493f-8fc9-08dc9900f072
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A34A.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA1PR12MB5671

When running Xen with a PVH dom0 and an HVM domU, the HVM domain maps a
pirq for a passthrough device by using its GSI; see the QEMU code
xen_pt_realize->xc_physdev_map_pirq and the libxl code
pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls
into Xen, but in hvm_physdev_op, PHYSDEVOP_map_pirq is not allowed
because currd is the PVH dom0, and PVH has no X86_EMU_USE_PIRQ flag,
so the hypercall fails the has_pirq check.

So, allow PHYSDEVOP_map_pirq when dom0 is PVH, and also allow
PHYSDEVOP_unmap_pirq so that the device-removal path can unmap the
pirq. Also add a new check to prevent (un)mapping when the subject
domain itself has no X86_EMU_USE_PIRQ flag.

With this, the interrupt of a passthrough device can be successfully
mapped to a pirq for a domU with the X86_EMU_USE_PIRQ flag when dom0
is PVH.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/hypercall.c |  6 ++++++
 xen/arch/x86/physdev.c       | 14 ++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 0fab670a4871..03ada3c880bd 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -71,8 +71,14 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
     switch ( cmd )
     {
+        /*
+         * Only permitted for management of other domains.
+         * Further restrictions are enforced in do_physdev_op().
+         */
     case PHYSDEVOP_map_pirq:
     case PHYSDEVOP_unmap_pirq:
+        break;
+
     case PHYSDEVOP_eoi:
     case PHYSDEVOP_irq_status_query:
     case PHYSDEVOP_get_free_pirq:
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index d6dd622952a9..a165f68225c1 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -323,6 +323,13 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( !d )
             break;
 
+        /* Prevent mapping when the subject domain has no X86_EMU_USE_PIRQ */
+        if ( is_hvm_domain(d) && !has_pirq(d) )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+
         ret = physdev_map_pirq(d, map.type, &map.index, &map.pirq, &msi);
 
         rcu_unlock_domain(d);
@@ -346,6 +353,13 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( !d )
             break;
 
+        /* Prevent unmapping when the subject domain has no X86_EMU_USE_PIRQ */
+        if ( is_hvm_domain(d) && !has_pirq(d) )
+        {
+            rcu_unlock_domain(d);
+            return -EOPNOTSUPP;
+        }
+
         ret = physdev_unmap_pirq(d, unmap.pirq);
 
         rcu_unlock_domain(d);
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751111.1158976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtky-0007QR-Nz; Sun, 30 Jun 2024 12:34:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751111.1158976; Sun, 30 Jun 2024 12:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtky-0007QK-KX; Sun, 30 Jun 2024 12:34:12 +0000
Received: by outflank-mailman (input) for mailman id 751111;
 Sun, 30 Jun 2024 12:34:11 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtkx-0007C1-3o
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:11 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com
 (mail-bn1nam02on20621.outbound.protection.outlook.com
 [2a01:111:f403:2407::621])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 0c17170b-36dd-11ef-90a3-e314d9c70b13;
 Sun, 30 Jun 2024 14:34:09 +0200 (CEST)
Received: from CH0PR07CA0021.namprd07.prod.outlook.com (2603:10b6:610:32::26)
 by SA3PR12MB8045.namprd12.prod.outlook.com (2603:10b6:806:31d::5)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.27; Sun, 30 Jun
 2024 12:34:05 +0000
Received: from CH1PEPF0000A348.namprd04.prod.outlook.com
 (2603:10b6:610:32:cafe::b5) by CH0PR07CA0021.outlook.office365.com
 (2603:10b6:610:32::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:05 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A348.mail.protection.outlook.com (10.167.244.4) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:05 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:01 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c17170b-36dd-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QWVHVMAYRZFwMX/cbUgd79F85UUWYwkyuwoNvVd+5WkZfkDsk7gSMTQFWnWp9J5uyLe5QSxcuZqAYLwVJnPlmwMWZsXn3zCn00TDWY8TMWcy9QzUDxV4kYONJ0IC7+SWzraf24R34aFB2HuBKw/oeR4JH32A/AdmUA23MNyu8o9HUOzenbuqPNMLKFSb0ad3hWFCis3PCQEs9Cg37KgoGeGadmXSr7qCygVRRceji1XPSFkCq3siEvUVzgcTkIp+Z/K3z+zyYE3fUKZdpBrfGPRdz5OrvfIRtNmnwd1WxWLsLAWyYnqAZHgt8yl0nF0CDChA2ytiR3xqY8/MDyMeKQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=1wz0e9PQpwPFuK0UwCxCr1bNL2PvzI97HXd1rk4IQ/w=;
 b=fGbJQAdL58DqqPon9s84Sz6BEUjxsDuzj4qu8uE9Hp7L09lBgicCJgdq1NW02NzBXmnWsqu9qTgJJx0xAwfEo7yYNnZMzeGYl+LF+rCG4kyXay+ik/NQo0QBYocmB8nNe1Y/6UwhpvNyNifdzc9G81+k6iwbtaUF+jZIkdKs2Z8mh0NiS7Xiqhwn2zRr1e3339fQzxHEshI/6dykMWV+6ABDDHblhulqkdoK6bPQBom6OzGgsKYmlCvpiOHnXNgghwvNZH26MnctaAjR4M4x+2bqKo8s38OqT5FzBntTOO6D8XT6nR7A0QDu4dfLEXiv1TkOeg+8qX1fnX257I/f7g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=1wz0e9PQpwPFuK0UwCxCr1bNL2PvzI97HXd1rk4IQ/w=;
 b=EoMrVRin1WJPNZdq+5HduZ1AaKwBn4LM8WA/w7lLKqe+WTQqHwnGAi/nply6KN7vPWwrtShP7CbXY1INzXQIrCVWkkg4ncKZmIxrjbmvfXoR16dZEJZfH4GM1NO45HrHnFF33I4DKaZxBC46KveJrrqEHy673MK2GoT5iSmI54c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>, Stewart Hildebrand <stewart.hildebrand@amd.com>
Subject: [XEN PATCH v11 1/8] xen/vpci: Clear all vpci status of device
Date: Sun, 30 Jun 2024 20:33:37 +0800
Message-ID: <20240630123344.20623-2-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A348:EE_|SA3PR12MB8045:EE_
X-MS-Office365-Filtering-Correlation-Id: 60814893-cb62-4f85-1bc3-08dc9900ee47
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:05.5517
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 60814893-cb62-4f85-1bc3-08dc9900ee47
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A348.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA3PR12MB8045

When a device is reset on the dom0 side, the vPCI layer on the
Xen side is not notified, so the state cached in vPCI becomes
stale compared with the real device state.
To solve that problem, add a new hypercall to clear all vPCI
device state. When a device is reset on the dom0 side, dom0
can invoke this hypercall to notify vPCI.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Reviewed-by: Stewart Hildebrand <stewart.hildebrand@amd.com>
Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
---
 xen/arch/x86/hvm/hypercall.c |  1 +
 xen/drivers/pci/physdev.c    | 58 ++++++++++++++++++++++++++++++++++++
 xen/drivers/vpci/vpci.c      | 10 +++++++
 xen/include/public/physdev.h | 20 +++++++++++++
 xen/include/xen/vpci.h       |  8 +++++
 5 files changed, 97 insertions(+)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 7fb3136f0c7c..0fab670a4871 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -83,6 +83,7 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_pci_mmcfg_reserved:
     case PHYSDEVOP_pci_device_add:
     case PHYSDEVOP_pci_device_remove:
+    case PHYSDEVOP_pci_device_state_reset:
     case PHYSDEVOP_dbgp_op:
         if ( !is_hardware_domain(currd) )
             return -ENOSYS;
diff --git a/xen/drivers/pci/physdev.c b/xen/drivers/pci/physdev.c
index 42db3e6d133c..19a755d1c127 100644
--- a/xen/drivers/pci/physdev.c
+++ b/xen/drivers/pci/physdev.c
@@ -2,6 +2,7 @@
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
 #include <xen/init.h>
+#include <xen/vpci.h>
 
 #ifndef COMPAT
 typedef long ret_t;
@@ -67,6 +68,63 @@ ret_t pci_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         break;
     }
 
+    case PHYSDEVOP_pci_device_state_reset:
+    {
+        struct pci_device_state_reset dev_reset;
+        struct pci_dev *pdev;
+        pci_sbdf_t sbdf;
+
+        ret = -EOPNOTSUPP;
+        if ( !is_pci_passthrough_enabled() )
+            break;
+
+        ret = -EFAULT;
+        if ( copy_from_guest(&dev_reset, arg, 1) != 0 )
+            break;
+
+        sbdf = PCI_SBDF(dev_reset.dev.seg,
+                        dev_reset.dev.bus,
+                        dev_reset.dev.devfn);
+
+        ret = xsm_resource_setup_pci(XSM_PRIV, sbdf.sbdf);
+        if ( ret )
+            break;
+
+        pcidevs_lock();
+        pdev = pci_get_pdev(NULL, sbdf);
+        if ( !pdev )
+        {
+            pcidevs_unlock();
+            ret = -ENODEV;
+            break;
+        }
+
+        write_lock(&pdev->domain->pci_lock);
+        pcidevs_unlock();
+        /* Implement FLR, other reset types may be implemented in future */
+        switch ( dev_reset.reset_type )
+        {
+        case PCI_DEVICE_STATE_RESET_COLD:
+        case PCI_DEVICE_STATE_RESET_WARM:
+        case PCI_DEVICE_STATE_RESET_HOT:
+        case PCI_DEVICE_STATE_RESET_FLR:
+        {
+            ret = vpci_reset_device_state(pdev, dev_reset.reset_type);
+            if ( ret )
+                dprintk(XENLOG_ERR,
+                        "%pp: failed to reset vPCI device state\n", &sbdf);
+            break;
+        }
+
+        default:
+            ret = -EOPNOTSUPP;
+            break;
+        }
+        write_unlock(&pdev->domain->pci_lock);
+
+        break;
+    }
+
     default:
         ret = -ENOSYS;
         break;
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index 1e6aa5d799b9..7e914d1eff9f 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -172,6 +172,16 @@ int vpci_assign_device(struct pci_dev *pdev)
 
     return rc;
 }
+
+int vpci_reset_device_state(struct pci_dev *pdev,
+                            uint32_t reset_type)
+{
+    ASSERT(rw_is_write_locked(&pdev->domain->pci_lock));
+
+    vpci_deassign_device(pdev);
+    return vpci_assign_device(pdev);
+}
+
 #endif /* __XEN__ */
 
 static int vpci_register_cmp(const struct vpci_register *r1,
diff --git a/xen/include/public/physdev.h b/xen/include/public/physdev.h
index f0c0d4727c0b..ddbcdfb05248 100644
--- a/xen/include/public/physdev.h
+++ b/xen/include/public/physdev.h
@@ -296,6 +296,13 @@ DEFINE_XEN_GUEST_HANDLE(physdev_pci_device_add_t);
  */
 #define PHYSDEVOP_prepare_msix          30
 #define PHYSDEVOP_release_msix          31
+/*
+ * Notify the hypervisor that a PCI device has been reset, so that any
+ * internally cached state is regenerated.  Should be called after any
+ * device reset performed by the hardware domain.
+ */
+#define PHYSDEVOP_pci_device_state_reset 32
+
 struct physdev_pci_device {
     /* IN */
     uint16_t seg;
@@ -305,6 +312,19 @@ struct physdev_pci_device {
 typedef struct physdev_pci_device physdev_pci_device_t;
 DEFINE_XEN_GUEST_HANDLE(physdev_pci_device_t);
 
+struct pci_device_state_reset {
+    physdev_pci_device_t dev;
+#define _PCI_DEVICE_STATE_RESET_COLD 0
+#define PCI_DEVICE_STATE_RESET_COLD  (1U<<_PCI_DEVICE_STATE_RESET_COLD)
+#define _PCI_DEVICE_STATE_RESET_WARM 1
+#define PCI_DEVICE_STATE_RESET_WARM  (1U<<_PCI_DEVICE_STATE_RESET_WARM)
+#define _PCI_DEVICE_STATE_RESET_HOT  2
+#define PCI_DEVICE_STATE_RESET_HOT   (1U<<_PCI_DEVICE_STATE_RESET_HOT)
+#define _PCI_DEVICE_STATE_RESET_FLR  3
+#define PCI_DEVICE_STATE_RESET_FLR   (1U<<_PCI_DEVICE_STATE_RESET_FLR)
+    uint32_t reset_type;
+};
+
 #define PHYSDEVOP_DBGP_RESET_PREPARE    1
 #define PHYSDEVOP_DBGP_RESET_DONE       2
 
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index da8d0f41e6f4..6be812dbc04a 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -38,6 +38,8 @@ int __must_check vpci_assign_device(struct pci_dev *pdev);
 
 /* Remove all handlers and free vpci related structures. */
 void vpci_deassign_device(struct pci_dev *pdev);
+int __must_check vpci_reset_device_state(struct pci_dev *pdev,
+                                         uint32_t reset_type);
 
 /* Add/remove a register handler. */
 int __must_check vpci_add_register_mask(struct vpci *vpci,
@@ -282,6 +284,12 @@ static inline int vpci_assign_device(struct pci_dev *pdev)
 
 static inline void vpci_deassign_device(struct pci_dev *pdev) { }
 
+static inline int __must_check vpci_reset_device_state(struct pci_dev *pdev,
+                                                       uint32_t reset_type)
+{
+    return 0;
+}
+
 static inline void vpci_dump_msi(void) { }
 
 static inline uint32_t vpci_read(pci_sbdf_t sbdf, unsigned int reg,
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:21 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751110.1158966 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtkx-0007CJ-Gb; Sun, 30 Jun 2024 12:34:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751110.1158966; Sun, 30 Jun 2024 12:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtkx-0007CC-Dy; Sun, 30 Jun 2024 12:34:11 +0000
Received: by outflank-mailman (input) for mailman id 751110;
 Sun, 30 Jun 2024 12:34:11 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtkw-0007C0-Qk
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:11 +0000
Received: from NAM04-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam04on20600.outbound.protection.outlook.com
 [2a01:111:f403:2409::600])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 0ad4932f-36dd-11ef-b4bb-af5377834399;
 Sun, 30 Jun 2024 14:34:08 +0200 (CEST)
Received: from CH2PR05CA0047.namprd05.prod.outlook.com (2603:10b6:610:38::24)
 by CYXPR12MB9426.namprd12.prod.outlook.com (2603:10b6:930:e3::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.26; Sun, 30 Jun
 2024 12:34:02 +0000
Received: from CH1PEPF0000A346.namprd04.prod.outlook.com
 (2603:10b6:610:38:cafe::7d) by CH2PR05CA0047.outlook.office365.com
 (2603:10b6:610:38::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7741.18 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:02 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A346.mail.protection.outlook.com (10.167.244.11) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:01 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:33:57 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ad4932f-36dd-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=evHwZ2rsQ1z2H3VzT7khYtjtEgcRAvq+D2sjO/fhzkok2O1Xrqlh/5B3AuezoFjLDpsH86uE4lizKlRPqukuJNyRowlCZ5BBesamHu9yO7BNNId+LuuKyK/+GfWJMvR0rxbbjTsBb/doECOD0Jrg6rAjo8Z7+T3Rl0EeakvL1TXjDB3NgsMGAs6o/heTuZmzY7JC7rJS6rSAkskogpyLD+HM+CrRDKZJ5Pc96KGS8zf3g+UWNJt5KZz0/tS7HnSOzcc5WMyCs0OqF5ko3kr6CHjFpnG17FAiz1lNmaNrBcG2/ljMMblZPR4s/4/eMimuFVrZp7Ct4SwzT0VBK46IHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=M1YTX+ahoolq08yfehzaa7U5Olve/orrsJ10M9V6WBs=;
 b=QnAan8UrY/0h2nweSwhZ8QiUMrhsoBBfcePOaczALkBbs0fTrRcx24+/Ml0Ln5Bll5KgRWo5VOv3NVOWvmOwvhR57Hl3jVUZvV+d8NIygt/Bn0A+luyEV7eie5uVfu0Z+IUbTcp+D6ynsJEkWIpp0/OCKLQ2f97dbyHvMpbS83KKM9pdD3uWX9QPDMzJdnnrVCLScsIkPzJ3y7PaKcEzaNLd/FuC/VxW33NcLCKD/yTUuKJiYUPqmpwTKhvdcktvDKqqgAynawf05/k3zOwcxTOCzdEvLLFmz5++QNi/MEFHS6hwUGm9xPhJgf4N8W4UcOZnieFVg7MyODYqCN+Hpw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=M1YTX+ahoolq08yfehzaa7U5Olve/orrsJ10M9V6WBs=;
 b=CRyxQGH9h7fng6b9WNwayQvHCh/AhFym9jbLe6DlcvlDlHGgrA1nHBHeIsVCBZgh45kS+i6j2fZTm9+RZAT8yDStIxLQMUuY6vfAhymUrjVOLp64LyIewhyA/0JFG6PhsIjP3q8o8F2C5erGcBZKTgc+NioSVz36BxSpkpec+iM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v11 0/8] Support device passthrough when dom0 is PVH on Xen
Date: Sun, 30 Jun 2024 20:33:36 +0800
Message-ID: <20240630123344.20623-1-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A346:EE_|CYXPR12MB9426:EE_
X-MS-Office365-Filtering-Correlation-Id: d2cb6e8b-6d55-41b6-59fa-08dc9900ebfa
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:01.7089
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d2cb6e8b-6d55-41b6-59fa-08dc9900ebfa
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A346.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CYXPR12MB9426

Hi All,
This is the v11 series to support device passthrough when dom0 is PVH.
v10->v11 changes:
* patch#1: Move the curly braces of "case PHYSDEVOP_pci_device_state_reset" to the next line.
           Delete the unnecessary local variable "struct physdev_pci_device *dev".
           Downgrade printk to dprintk.
           Moved struct pci_device_state_reset to the public header file.
           Delete enum pci_device_state_reset_type, and use macro definitions to represent different
           reset types.
           Delete pci_device_state_reset_method, and add switch cases in PHYSDEVOP_pci_device_state_reset
           to handle different reset functions.
           Add reset type as a function parameter for vpci_reset_device_state for possible future use
* patch#2: Delete the judgment of "d==currd", so that we can prevent physdev_(un)map_pirq from being
           executed when domU has no pirq, instead of just preventing self-mapping; and modify the
           description of the commit message accordingly.
* patch#3: Modify the commit message to explain why the gsi of normal devices works in PVH dom0 and why
           passthrough devices do not work in PVH dom0.
* patch#4: New patch: modify the allocate_pirq function to return the already-allocated pirq (and
           succeed) when one exists and the caller has no specific requirement for the pirq.
* patch#5: Hypervisor-side modifications carried over from patch#5 of v10.
           Add a non-zero check for the other bits of allow_access.
           Delete the unnecessary check "if ( is_pv_domain(currd) || has_pirq(currd) )".
           Change the error exit path label "out" to "gsi_permission_out".
           Use ARRAY_SIZE() instead of open coding.
* patch#6: New patch, modification of xc_physdev_map_pirq to support mapping gsi to an idle pirq.
* patch#7: Patch#4 of v10, directly open "/dev/xen/privcmd" in the function xc_physdev_gsi_from_dev
           instead of adding unnecessary functions to libxencall.
           Change the type of gsi in the structure privcmd_gsi_from_dev from int to u32.
* patch#8: Modify the tools parts of patches #4 and #5 of v10: use privcmd_gsi_from_dev to get the
           gsi, and use XEN_DOMCTL_gsi_permission to grant the gsi.
           Change the hard-coded 0 to use LIBXL_TOOLSTACK_DOMID.
           Add libxl__arch_hvm_map_gsi to separate out the x86-specific implementation.
           Add a list pcidev_pirq_list to record the relationship between sbdf and pirq, which can be
           used to obtain the corresponding pirq when unmapping the PIRQ.


Best regards,
Jiqian Chen



v9->v10 changes:
* patch#2: Indent the comments above PHYSDEVOP_map_pirq according to the code style.
* patch#3: Modified the description in the commit message, changing "it calls" to "it will need to call",
           indicating that there will be new code on the kernel side that will call PHYSDEVOP_setup_gsi.
           Also added an explanation of why the interrupt of a passthrough device does not work if the gsi
           is not registered.
* patch#4: Added define for CONFIG_X86 in tools/libs/light/Makefile to isolate x86 code in libxl_pci.c.
* patch#5: Modified the commit message to further describe the purpose of adding XEN_DOMCTL_gsi_permission.
           Deleted pci_device_set_gsi and called XEN_DOMCTL_gsi_permission directly in pci_add_dm_done.
           Added a check for all zeros in the padding field in XEN_DOMCTL_gsi_permission, and used currd
           instead of current->domain.
           In the function gsi_2_irq, apic_pin_2_gsi_irq was used instead of the original new code, and
           error handling for irq0 was added.
           Deleted the extra spaces in the upper and lower lines of the struct xen_domctl_gsi_permission
           definition.
All patches have modified signatures as follows:
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I am the author.
Signed-off-by: Huang Rui <ray.huang@amd.com> means Rui sent them upstream first.
Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com> means I am continuing to upstream them.


v8->v9 changes:
* patch#1: Move pcidevs_unlock below write_lock, and remove "ASSERT(pcidevs_locked());"
           from vpci_reset_device_state;
           Add pci_device_state_reset_type to distinguish the reset types.
* patch#2: Add a comment above PHYSDEVOP_map_pirq to describe why this hypercall is needed.
           Change "!is_pv_domain(d)" to "is_hvm_domain(d)", and "map.domid == DOMID_SELF" to
           "d == current->domain".
* patch#3: Remove the check of PHYSDEVOP_setup_gsi, since the same check exists below. Although their return
           values are different, this difference is acceptable for the sake of code consistency:
           if ( !is_hardware_domain(currd) )
               return -ENOSYS;
           break;
* patch#5: Change the commit message to describe more why we need this new hypercall.
           Add comment above "if ( is_pv_domain(current->domain) || has_pirq(current->domain) )" to explain
           why we need this check.
           Add gsi_2_irq to transform gsi to irq, instead of considering gsi == irq.
           Add explicit padding to struct xen_domctl_gsi_permission.


v7->v8 changes:
* patch#2: Add the domid check (domid == DOMID_SELF) to prevent self-mapping when the guest doesn't use pirq.
           That check was missed in the previous version.
* patch#4: Due to changes in how the kernel obtains the gsi, add a new function
           to get the gsi by passing in the sbdf of the PCI device.
* patch#5: Remove the parameter "is_gsi"; when a gsi exists, pci_add_dm_done uses a new function
           pci_device_set_gsi to do map_pirq and grant permission, which makes the code logic more intuitive.


v6->v7 changes:
* patch#4: Due to changes in how the kernel obtains the gsi, add a new function
           to get the gsi from the irq, instead of from the gsi sysfs.
* patch#5: Fix the issue with variable usage, rc->r.


v5->v6 changes:
* patch#1: Add Reviewed-by from Stefano and Stewart. Rebase the code and change the old functions
           vpci_remove_device and vpci_add_handlers to vpci_deassign_device and vpci_assign_device
* patch#2: Add Reviewed-by Stefano
* patch#3: Remove unnecessary "ASSERT(!has_pirq(currd));"
* patch#4: Fix some coding style issues below directory tools
* patch#5: Modified some variable names and code logic to make the code easier to understand: use
           gsi by default, and remain compatible with older kernel versions by continuing to use irq


v4->v5 changes:
* patch#1: add pci_lock wrapping around vpci_reset_device_state
* patch#2: move the check of self map_pirq to physdev.c, and change to check if the caller has PIRQ flag, and
           just break for PHYSDEVOP_(un)map_pirq in hvm_physdev_op
* patch#3: return -EOPNOTSUPP instead, and use ASSERT(!has_pirq(currd));
* patch#4: was patch#5 in v4, because patch#5 in v5 depends on it. Also add errno handling
           and add the Reviewed-by from Stefano
* patch#5: was patch#4 in v4. New implementation adding the new hypercall XEN_DOMCTL_gsi_permission to grant gsi


v3->v4 changes:
* patch#1: change the comment of PHYSDEVOP_pci_device_state_reset; move the printks after pcidevs_unlock
* patch#2: add check to prevent PVH self map
* patch#3: new patch, The implementation of adding PHYSDEVOP_setup_gsi for PVH is treated as a separate patch
* patch#4: new patch to solve the map_pirq problem of PVH dom0. use gsi to grant irq permission in
           XEN_DOMCTL_irq_permission.
* patch#5: to be compatible with previous kernel versions, when there is no gsi sysfs, still use irq
v4 link:
https://lore.kernel.org/xen-devel/20240105070920.350113-1-Jiqian.Chen@amd.com/T/#t

v2->v3 changes:
* patch#1: move the content out of pci_reset_device_state and delete pci_reset_device_state; add an
           xsm_resource_setup_pci check for PHYSDEVOP_pci_device_state_reset; add a description for
           PHYSDEVOP_pci_device_state_reset
* patch#2: due to changes in the implementation of the second patch on the kernel side (it will do setup_gsi and
           map_pirq when assigning a device for passthrough), add PHYSDEVOP_setup_gsi for PVH dom0, and we need
           to support self mapping.
* patch#3: due to changes in the implementation of the second patch on the kernel side (it adds a new sysfs entry
           for gsi instead of a new syscall), read the gsi number from the gsi sysfs.
v3 link:
https://lore.kernel.org/xen-devel/20231210164009.1551147-1-Jiqian.Chen@amd.com/T/#t

v2 link:
https://lore.kernel.org/xen-devel/20231124104136.3263722-1-Jiqian.Chen@amd.com/T/#t
Below is the description of v2 cover letter:
This series of patches is the v2 of the implementation of passthrough when dom0 is PVH on Xen.
We sent v1 upstream before, but it had many problems and we received lots of suggestions.
I will introduce all the issues that these patches try to fix and the differences between v1 and v2.

Issues we encountered:
1. pci_stub failed to write the bar for a passthrough device.
Problem: when we run "sudo xl pci-assignable-add <sbdf>" to assign a device, pci_stub will call
pcistub_init_device() -> pci_restore_state() -> pci_restore_config_space() ->
pci_restore_config_space_range() -> pci_restore_config_dword() -> pci_write_config_dword(); the pci config
write will trigger an io interrupt to bar_write() in Xen, but bar->enabled was set before, so the write is
not allowed, and then when Qemu configures the passthrough device in xen_pt_realize(), it gets invalid bar
values.

Reason: we don't tell vPCI that the device has been reset, so the state currently cached in
pdev->vpci is entirely out of date and differs from the real device state.

Solution: to solve this problem, the first kernel patch (xen/pci: Add xen_reset_device_state
function) and the first Xen patch (xen/vpci: Clear all vpci status of device) add a new hypercall to reset the
state stored in vPCI when the state of the real device has changed.
Thanks to Roger for the suggestion for this v2; it is different from
v1 (https://lore.kernel.org/xen-devel/20230312075455.450187-3-ray.huang@amd.com/), which simply allowed domU to
write the pci bar and did not comply with the design principles of vPCI.

2. failed to do PHYSDEVOP_map_pirq when dom0 is PVH
Problem: an HVM domU does PHYSDEVOP_map_pirq for a passthrough device using its gsi. See
xen_pt_realize->xc_physdev_map_pirq and pci_add_dm_done->xc_physdev_map_pirq. xc_physdev_map_pirq then calls
into Xen, but hvm_physdev_op() does not allow PHYSDEVOP_map_pirq.

Reason: in hvm_physdev_op(), the variable "currd" is the PVH dom0, and PVH has no X86_EMU_USE_PIRQ flag, so the
has_pirq check fails.

Solution: allow PHYSDEVOP_map_pirq when "currd" is dom0 (at present dom0 is PVH). The
second xen patch (x86/pvh: Open PHYSDEVOP_map_pirq for PVH dom0) allows PVH dom0 to do PHYSDEVOP_map_pirq. This v2
patch is better than v1, which simply removed the has_pirq check
(xen https://lore.kernel.org/xen-devel/20230312075455.450187-4-ray.huang@amd.com/).
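The relaxed check above can be sketched as a small predicate. The
function name and parameters are illustrative, not Xen's actual code;
the real check lives in hvm_physdev_op() using has_pirq() and the
hardware-domain test:

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative model of the v2 policy: PHYSDEVOP_map_pirq is permitted
 * either when the calling domain emulates PIRQs (X86_EMU_USE_PIRQ, as
 * classic HVM guests may) or when the caller is the hardware domain
 * (the PVH dom0 case this series fixes). */
static bool map_pirq_allowed(bool has_pirq, bool is_hardware_domain)
{
    return has_pirq || is_hardware_domain;
}
```

With v1's approach (dropping the has_pirq check entirely) the first
argument would be ignored for every domain, which is a much broader
relaxation than only exempting dom0.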

3. the gsi of a passthrough device is never unmasked
 3.1 failed to check the permission of the pirq
 3.2 the gsi of the passthrough device was not registered in PVH dom0

Problem:
3.1 the callback function pci_add_dm_done() is called when qemu configures a passthrough device for a domU.
It calls xc_domain_irq_permission() -> pirq_access_permitted() to check whether the gsi has a corresponding
mapping in dom0. It does not, so the check fails. See XEN_DOMCTL_irq_permission->pirq_access_permitted: "current"
is the PVH dom0 and the returned irq is 0.
3.2 it is possible for a gsi (iow: a vIO-APIC pin) to never get registered on PVH dom0, because PVH devices
use MSI(-X) interrupts. However, the IO-APIC pin must be configured before it can be mapped into a domU.

Reason: after reading the code, I found that "map_pirq" and "register_gsi" are done in
vioapic_write_redirent->vioapic_hwdom_map_gsi when the gsi (aka the ioapic pin) is unmasked in PVH dom0.
So both problems come down to the fact that the gsi of a passthrough device is never unmasked.

Solution: the second kernel patch (xen/pvh: Unmask irq for passthrough device in PVH dom0)
calls unmask_irq() when we assign a device for passthrough, so that passthrough devices get a mapping of the
gsi on PVH dom0 and the gsi can be registered. This v2 patch differs from the
v1 ( kernel https://lore.kernel.org/xen-devel/20230312120157.452859-5-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-5-ray.huang@amd.com/),
which performed "map_pirq" and "register_gsi" on all pci devices on PVH dom0; that is unnecessary and may cause
multiple registrations.
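The coupling between unmasking and registration can be modeled as a toy
state machine (names hypothetical; in reality the registration happens
in Xen's vioapic_hwdom_map_gsi when dom0 unmasks the pin):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: a vIO-APIC pin only gets registered (register_gsi +
 * map_pirq) as a side effect of being unmasked. A passthrough device
 * whose pin stays masked therefore never gets a usable gsi, which is
 * exactly what the unmask_irq() call in the kernel patch fixes. */
struct gsi_pin {
    bool masked;
    bool registered;
};

static void unmask_pin(struct gsi_pin *pin)
{
    if (pin->masked) {
        pin->masked = false;
        pin->registered = true;  /* register_gsi and map_pirq run here */
    }
}
```

Note that unmasking an already-unmasked pin is a no-op, mirroring why
v1's blanket registration of all devices risked multiple registration.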

4. failed to map pirq for gsi
Problem: qemu calls xc_physdev_map_pirq() in xen_pt_realize() to map a passthrough device's gsi to a
pirq, but it fails.

Reason: according to the implementation of xc_physdev_map_pirq(), it needs the gsi, not the irq, but qemu
passes the irq and treats it as the gsi; the value is read from /sys/bus/pci/devices/xxxx:xx:xx.x/irq in
xen_host_pci_device_get(). In fact the gsi number is not equal to the irq. On PVH dom0, irqs are allocated
for gsis dynamically in acpi_register_gsi_ioapic(), on a first-come, first-served basis. If you
debug the kernel (see __irq_alloc_descs), you will find that irq numbers are allocated in increasing
order, but gsis are not requested in order: gsi 38 may be requested before gsi 28, so gsi 38
gets a smaller irq number than gsi 28, and then gsi != irq.

Solution: record the relation between gsi and irq, then translate when userspace (qemu) wants the
gsi. The third kernel patch (xen/privcmd: Add new syscall to get gsi from irq) records all the relations
in acpi_register_gsi_xen_pvh() when dom0 initializes pci devices, and provides a syscall for userspace to get the gsi
from the irq. The third xen patch (tools: Add new function to get gsi from irq) adds a new function
xc_physdev_gsi_from_irq() that calls the new syscall on the kernel side.
Userspace can then use that function to get the gsi, and xc_physdev_map_pirq() succeeds. This v2 patch is the
same as the v1 ( kernel https://lore.kernel.org/xen-devel/20230312120157.452859-6-ray.huang@amd.com/ and
xen https://lore.kernel.org/xen-devel/20230312075455.450187-6-ray.huang@amd.com/).
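The recorded relation can be sketched as a small lookup table, in the
spirit of what the kernel patch records in acpi_register_gsi_xen_pvh().
The structure and function names below are illustrative, not the actual
kernel data layout:

```c
#include <assert.h>

#define MAX_MAPPINGS 16

/* Illustrative gsi<->irq map: irq numbers are handed out in allocation
 * order, so an out-of-order gsi request (38 before 28) yields
 * gsi != irq, which is why a translation table is needed. */
struct gsi_irq_map {
    int gsi[MAX_MAPPINGS];
    int irq[MAX_MAPPINGS];
    int count;
    int next_irq;
};

/* Record a gsi at registration time; first to ask, first to be served. */
static int record_gsi(struct gsi_irq_map *m, int gsi)
{
    int irq = m->next_irq++;
    m->gsi[m->count] = gsi;
    m->irq[m->count] = irq;
    m->count++;
    return irq;
}

/* The translation userspace needs, as xc_physdev_gsi_from_irq() provides. */
static int gsi_from_irq(const struct gsi_irq_map *m, int irq)
{
    for (int i = 0; i < m->count; i++)
        if (m->irq[i] == irq)
            return m->gsi[i];
    return -1;  /* no such mapping */
}
```

If gsi 38 is registered before gsi 28, it receives the smaller irq
number, and only a lookup like the one above recovers the real gsi.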

As for the v2 qemu patch, it only changes an included header file; the rest is similar to the
v1 ( qemu https://lore.kernel.org/xen-devel/20230312092244.451465-19-ray.huang@amd.com/): it just calls
xc_physdev_gsi_from_irq() to get the gsi from the irq.

Jiqian Chen (8):
  xen/vpci: Clear all vpci status of device
  x86/pvh: Allow (un)map_pirq when dom0 is PVH
  x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
  x86/physdev: Return pirq that irq was already mapped to
  x86/domctl: Add XEN_DOMCTL_gsi_permission to grant gsi
  tools/libxc: Allow gsi be mapped into a free pirq
  tools: Add new function to get gsi from dev
  tools: Add new function to do PIRQ (un)map on PVH dom0

 tools/include/xen-sys/Linux/privcmd.h |   7 ++
 tools/include/xenctrl.h               |   7 ++
 tools/libs/ctrl/xc_domain.c           |  15 ++++
 tools/libs/ctrl/xc_physdev.c          |  37 ++++++++-
 tools/libs/light/libxl_arch.h         |   4 +
 tools/libs/light/libxl_arm.c          |  10 +++
 tools/libs/light/libxl_pci.c          |  17 ++++
 tools/libs/light/libxl_x86.c          | 111 ++++++++++++++++++++++++++
 xen/arch/x86/domctl.c                 |  33 ++++++++
 xen/arch/x86/hvm/hypercall.c          |   8 ++
 xen/arch/x86/include/asm/io_apic.h    |   2 +
 xen/arch/x86/io_apic.c                |  17 ++++
 xen/arch/x86/irq.c                    |   2 +
 xen/arch/x86/mpparse.c                |   3 +-
 xen/arch/x86/physdev.c                |  14 ++++
 xen/drivers/pci/physdev.c             |  58 ++++++++++++++
 xen/drivers/vpci/vpci.c               |  10 +++
 xen/include/public/domctl.h           |   8 ++
 xen/include/public/physdev.h          |  20 +++++
 xen/include/xen/vpci.h                |   8 ++
 xen/xsm/flask/hooks.c                 |   1 +
 21 files changed, 389 insertions(+), 3 deletions(-)

-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:22 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751113.1158996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtl8-0007zO-Bs; Sun, 30 Jun 2024 12:34:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751113.1158996; Sun, 30 Jun 2024 12:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtl8-0007zH-7Q; Sun, 30 Jun 2024 12:34:22 +0000
Received: by outflank-mailman (input) for mailman id 751113;
 Sun, 30 Jun 2024 12:34:20 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtl6-0007C1-BU
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:20 +0000
Received: from NAM04-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam04on20615.outbound.protection.outlook.com
 [2a01:111:f403:2408::615])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 11f206f8-36dd-11ef-90a3-e314d9c70b13;
 Sun, 30 Jun 2024 14:34:19 +0200 (CEST)
Received: from CH2PR12CA0009.namprd12.prod.outlook.com (2603:10b6:610:57::19)
 by CH3PR12MB8073.namprd12.prod.outlook.com (2603:10b6:610:126::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 12:34:16 +0000
Received: from CH1PEPF0000A347.namprd04.prod.outlook.com
 (2603:10b6:610:57:cafe::da) by CH2PR12CA0009.outlook.office365.com
 (2603:10b6:610:57::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:16 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A347.mail.protection.outlook.com (10.167.244.7) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:16 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:12 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11f206f8-36dd-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GeVzv2a9fkuyk4eCiOwC+o3inoS0eAZXnfyCudghe7bf03yut4ihpt7gBfAfYKNOhPLBIxqajbQu5c3EX5MLsuocOLCPOmY1qIagmVthc7HiBZaP/ELS5HNFV6bYFLnqThvIJYxFQT+8jCrzqEpinXXc3btLu7MrEt1rwRmhAPt2fJYY61f9ZgZYdhSf9ueoeNfgz0iEACQ6DhEoMhgBVNbTsqKyYto7O3xUNGL3Ze/YxxKYJQ/W6uAhfEMIFZqfBSBgMykM0tuNL0e1xBdy9N+MgnNc6VLYdDCwWZfn3h2tNKaP/x4jTFk46//jIU+k04mOjS576/eSjI2sc1TUKg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=IsDPYZMENW0quBHEP8n8tBZRRuMhFLhlhmL4mlxj/b0=;
 b=i0Ebfhq/c+NgRanvD6XTNGj3ZzrLu1hrCFeQXIIwM5vxNN/MS/UO4atJlZjtT4s+8qRJlxnkDVYsMsXR8EkSk6DupV0bLTkydtE4pNdyiP61J/T2V246USV7YLb/P2kTG/Mlrezo1nj/8/bi+5bxNTq2LelIeY4fiSwdJzodP+vtlXHNpfT2ZIxlDu5SSuXDuYQsum9x7hlL/mrc7K9p56qffVhrkbV1atWYcHawlyuMzQ6tsz0dkylod9wKyr4GzmoxHwBUH7akHgRiGZx/uIo28CTRAVa/vXL14nr1x/c9rFuC7BUT4K7ceuvVunvuaKMClQNGObo1IEVR0HjpUA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IsDPYZMENW0quBHEP8n8tBZRRuMhFLhlhmL4mlxj/b0=;
 b=ZyAWxDb6A9OSVq43RvbozgKczWl+JQW3fIY335dwsl1inp2+hlo8A7JAUR9NE8GXzL3QYknVpoY1lhrX21NEzyoflMLVKYw8qzFfCfuT/YaRaoD/vaJfi2RW3FP8ueH2h9KFrhK1p40QHta6v40SzeyaUiM+HzbU3tOytH3Ru2o=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v11 4/8] x86/physdev: Return pirq that irq was already mapped to
Date: Sun, 30 Jun 2024 20:33:40 +0800
Message-ID: <20240630123344.20623-5-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A347:EE_|CH3PR12MB8073:EE_
X-MS-Office365-Filtering-Correlation-Id: 24c217c3-f908-46d3-4e41-08dc9900f4ca
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|376014|7416014|1800799024|36860700013|82310400026;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?l5Op+CbciuEBTladK1oGuz9S+58AFY1DfsX9l4qxGMSidQGQwxF+NaetSjRM?=
 =?us-ascii?Q?qkl+jq3cCGk/DQ/BzLralbMlvad01idJv7v8wc5HRM1YII51kfEpbd4M/H1x?=
 =?us-ascii?Q?pno4xLs7yhBPi2je26z0yKq4z+9AYUNF9aAhoBykfIoaHShjkvQbOJcFl21X?=
 =?us-ascii?Q?t/cAKhfpblyHzNDeJW7LAzgmhUOoLX1JSHLuVYGB6RKn4LfdfDjWz6a1BeMH?=
 =?us-ascii?Q?pQfmbba7SQY5Cs95f9lPqI8/SiIFwBBSWye7o3iU0zkCwkwSrTxXnaLucOal?=
 =?us-ascii?Q?6Flf713AKYgTBEXr781+52kxVgfSMgLsQxoepN1f9X415W4ROc/vXPyW4i8R?=
 =?us-ascii?Q?MX45nRJW+pJCfnMBrSjkAj2h/iOLMQktA1RzKRQkzWpmvg2uERTZZbMHh4pH?=
 =?us-ascii?Q?CGkdMnqXpABdh5JqDyAWuwZDqWb8OB66ccTCgDQTeyqeohLe1/aMIYEvxtg6?=
 =?us-ascii?Q?DSzDa4Vn7BVriVtsEIA8FDqmLtlTUpxZdfa6yEJpcAc+FnS1YsI6OabAFMSO?=
 =?us-ascii?Q?TnKgVPdpjiB875rTdWuqdKbOwVCHrA3VpT9tAVRICGdRX/D/SaicspVtcrXB?=
 =?us-ascii?Q?x4UyNcBcntYsIxuktSblLzVaCBg83s49p6fhjs4oTsfYfb6OWms940q6woV1?=
 =?us-ascii?Q?zyfrCZqKv574taIe45UufMpfPhwky978hZA0LRIXDWRLdHeAKoxuShBzMMBq?=
 =?us-ascii?Q?hsu8qFh6+0xi1psDs+5W+jrQr7/HbLNyRSvwtZAaocN4W+rsxjUT+2AxWePY?=
 =?us-ascii?Q?y7EHBSN/FWAFyTtgL7Tk6qC3XCWB7hYulD2nKmaHvGc7W/GqEBfuuGvvojfN?=
 =?us-ascii?Q?2qWgbVDn6Z+D28UgSEu0jDwrHeWW8D5moABRIskpFN+xeL05/rlt+kJUv+3G?=
 =?us-ascii?Q?h5m6LGB5+lIPkc6SMhSjuAzivZGsR/Vp/Mr6gMIE8yp02Xg51SamSMLGqy+Y?=
 =?us-ascii?Q?N13ug/3zhr4T5Cyma58WhBBlEYaOGOCLTaovii3pPs+WHx2o9xbMS6V0Wsja?=
 =?us-ascii?Q?X8q/MJAbMeKWY7zd1wnGVRmgtXGV9mVYdCs9l2jQC7J0n+eqjGV8ROfJ2AqV?=
 =?us-ascii?Q?vwILRvFW6qqtHSgs7OImG9giJJ9H3wP7HXL1//hMni62wLEwou4H0lXH4h07?=
 =?us-ascii?Q?8NfsZ4cZ1LB0orjvNiw54bpSNi4hxaenAJcjWJcaV4j9UkO3DXfAP7wC2XsE?=
 =?us-ascii?Q?xFrFombpDRhqkxwUMZUfMf/EFeIw/MBqvH7CbZaCsj7pt4YN/oID/RRAlOV1?=
 =?us-ascii?Q?rcEky/KPBsOrK1GpJooE8+xr8M6cpjK1/Rgu0Xb7FkYZ8rvbUNOZ80ck+h+A?=
 =?us-ascii?Q?WkmLD4L8xRldpjKf62ZbG8UpnH0g1N5jWOcn/dP7lZtAbgdwPqJLFmxYT/Ka?=
 =?us-ascii?Q?LRNgFZUACGHm5zJJG4jx6yE6zpQxj1V/C4W8zppZVHyDFIcuz1dUeCE6Ke3l?=
 =?us-ascii?Q?qnwHnzKQjjGVK90oNHIu5Up59NgP8u30?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(376014)(7416014)(1800799024)(36860700013)(82310400026);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:16.4935
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 24c217c3-f908-46d3-4e41-08dc9900f4ca
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A347.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH3PR12MB8073

allocate_pirq() allocates a pirq for an irq; it supports allocating
either a free pirq (the pirq parameter is < 0) or a specific pirq (the
pirq parameter is > 0).

In the current code there are four cases (current_pirq is the pirq the
irq is already mapped to, if any):

First, pirq > 0 and current_pirq > 0: if pirq == current_pirq, the irq
is already mapped to the pirq the caller expects, so it succeeds; if
pirq != current_pirq, the pirq the caller expects has been mapped to
another irq, so it fails.

Second, pirq > 0 and current_pirq < 0: the pirq the caller expects has
not been allocated to any irq, so it can be allocated to the caller,
and it succeeds.

Third, pirq < 0 and current_pirq < 0: the caller wants a free pirq for
the irq, and the irq has no mapped pirq, so it succeeds.

Fourth, pirq < 0 and current_pirq > 0: the caller wants a free pirq for
the irq, but the irq already has a mapped pirq; the function returns a
negative value, so it fails.

The problem is in the fourth case: the irq already has a mapped pirq
(current_pirq), and the caller does not ask for a specific pirq, so
current_pirq should be returned directly, indicating that the
allocation is successful. That helps a caller succeed when it just
wants a free pirq and does not know whether the irq already has a
mapped pirq.
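The four cases, including the fix, can be condensed into one decision
function. This is a sketch of the logic described above, not the actual
allocate_pirq() code; next_free stands in for the real free-pirq
allocator:

```c
#include <assert.h>

#define TOY_EBUSY 16

/* requested > 0: caller wants this specific pirq; requested < 0: any
 * free pirq.  current > 0: irq already mapped to this pirq;
 * current < 0: irq not mapped yet. */
static int resolve_pirq(int requested, int current, int next_free)
{
    if (requested > 0) {
        if (current > 0)  /* case 1: must match the existing mapping */
            return requested == current ? requested : -TOY_EBUSY;
        return requested; /* case 2: requested pirq is free, grant it */
    }
    if (current < 0)
        return next_free; /* case 3: allocate any free pirq */
    return current;       /* case 4 (the fix): reuse existing mapping */
}
```

Before the fix, case 4 fell through to the failure path even though a
perfectly good mapping already existed.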

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 xen/arch/x86/irq.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 9a611c79e024..5ccca1646eb1 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2897,6 +2897,8 @@ static int allocate_pirq(struct domain *d, int index, int pirq, int irq,
                     d->domain_id, index, pirq, current_pirq);
             if ( current_pirq < 0 )
                 return -EBUSY;
+            else
+                return current_pirq;
         }
         else if ( type == MAP_PIRQ_TYPE_MULTI_MSI )
         {
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:27 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751114.1159006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlD-0008Jn-KA; Sun, 30 Jun 2024 12:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751114.1159006; Sun, 30 Jun 2024 12:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlD-0008JW-FA; Sun, 30 Jun 2024 12:34:27 +0000
Received: by outflank-mailman (input) for mailman id 751114;
 Sun, 30 Jun 2024 12:34:26 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtlC-0007C0-Ei
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:26 +0000
Received: from NAM04-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam04on20618.outbound.protection.outlook.com
 [2a01:111:f403:240a::618])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 14b0a67c-36dd-11ef-b4bb-af5377834399;
 Sun, 30 Jun 2024 14:34:24 +0200 (CEST)
Received: from CH0PR03CA0182.namprd03.prod.outlook.com (2603:10b6:610:e4::7)
 by IA0PR12MB7601.namprd12.prod.outlook.com (2603:10b6:208:43b::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 12:34:13 +0000
Received: from CH1PEPF0000A34B.namprd04.prod.outlook.com
 (2603:10b6:610:e4:cafe::b3) by CH0PR03CA0182.outlook.office365.com
 (2603:10b6:610:e4::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:13 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A34B.mail.protection.outlook.com (10.167.244.10) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:12 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:08 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14b0a67c-36dd-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ee1WY73v7qOEB8D4aUuIF0qMYn/Qw4dnvK1XczAO4Os1TTZdyph8ukVikRAdao9NGGC/esZfdC/t1rVelclF7+grplqXKKGXEMsXuXhQ51/RF/x3v7QvZnWRmvAEyIAORvKnAhvy3/ytEDcou8i/Zrm5YYhq+67Pt2A+CWIbjnGevOaekl0me31z5Hyu4UjZpgYNWZLXJdvgzCb2oVbQUrH8GQmQnzFl4iMQ1+e2XHpaWVBMmsbgyhOBGYWZa8nEdTzggt3TLdxXGHo5xUtaprz1mhqQl6yUE3CAEmT5uRvchEqAgNpV5+99QwBD0knpLsHyCDn0JNNt6i8LHoZUbA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=kvNzp09rHGoY9zYCVKlErNT4ho6RlTIpvWIBG8mH1Lo=;
 b=j1mhCxGrT3XyNuUEW7A8JdSG+V8WsG5R/Kpd2SELqZt9sjf7ev9nXMsYM7njEm5cxQCCl3Y90aXw/7DZRmsweqEJDdbBs4bZ5YKvik7RJ1K4w8nU4Jk/C3eOUsXgc4qH1tOwIIu+Fsjnh/BSTx/JRjJZo5HuqOhdbb8QLdPFLAdfjI2HmokwJ884dbltWMmC211IOq5Vw3+zc30a0GDKKlyRWF4L6jdYnnsO00nKaTWI8DSvU79qLoSEyOmjOPyQg6VikTWop1m5caun5h5YTG+IdjjbtU+FJm8FXVX+HebZM3ArPHB0yD254Faf+0pmTwnVkNcVEVRzp4hnPa6h0A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kvNzp09rHGoY9zYCVKlErNT4ho6RlTIpvWIBG8mH1Lo=;
 b=CG8Ilg/LgDS4F1K8BysueEdDUL61tvMV5/phlOH/WycrDt8CjsMg2s3MJ/+qs88Y5BgEl9JFKSGd38KFET5f18pueKzZaFFa5Du3WTTVO9A6oJkOnP6Zxw8iujc3NsFYpL/LMXPAb9cUQgyaaCsEqEtUpArHNkmvDG7P4gehza8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v11 3/8] x86/pvh: Add PHYSDEVOP_setup_gsi for PVH dom0
Date: Sun, 30 Jun 2024 20:33:39 +0800
Message-ID: <20240630123344.20623-4-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A34B:EE_|IA0PR12MB7601:EE_
X-MS-Office365-Filtering-Correlation-Id: b0bcd5ed-0694-4ca5-5822-08dc9900f2a7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|1800799024|82310400026|36860700013|7416014|376014;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?OmiCRUz9ca07Pr5U8tEwiGFU7SKER4srVsMwffpiipFz57QAGCWzFVaOwBzV?=
 =?us-ascii?Q?Cel8VPXOdbdH7XrCZKCVY2F8AhgbbeJxxfnx2PdUQ/auwhiSkdKSVtsWIMXB?=
 =?us-ascii?Q?lgxzSH+Myq/R+GXX20+1fHnpsAcONLEa/B8sA7gXNnSB93gUWr0pefjWBF0t?=
 =?us-ascii?Q?J5HxdsD4bId8ti/HL+bIjLj5SWqb3FCLOTHpj74x+4oOf7/G6D4taeZy9c2T?=
 =?us-ascii?Q?/WOYNZi/2wGC+EwDTwipOqry29raAKGRuPzFbDBLm/4525TiDqvH50awn1lv?=
 =?us-ascii?Q?Oqt0zsoJ9HNifPCuUppEYh0Qz6y9344RZdn7RoQcGJdUi6/PvJKsIULe/KSB?=
 =?us-ascii?Q?j12kybGV2BYmdqXwCkixuRQB/Tn6vyafrz8cfuC8ApGfX7vqSjFB1vQIf8Wq?=
 =?us-ascii?Q?hUC18TQ/b7WubYCAmq+VWcI3dvpJBA+1xj6QnyTRWasl0SzWWtk0bfmogRPK?=
 =?us-ascii?Q?WIi+Zyy8NNOmfjhzKGfGkizTHh+jgab9VoxCLHMFqSj47NbG3FZaaxQAZuMA?=
 =?us-ascii?Q?sGFmCTY2wVT4kMIlKDVMdWDOkYC6+O0dvxlLWqWvADiiiPWcS6sZ7AzMijZ5?=
 =?us-ascii?Q?KNWuduR+0gLTxh3kIXA4coVf3pyQnyLZ7/wPS4T+7SzEZjphVXVl80VVPGB/?=
 =?us-ascii?Q?Y/rm3XZc18NUSNV2XhnGZ8TumZ8kznPGwIcAPA2x3ilTdHTl5cP2bG4QU/lJ?=
 =?us-ascii?Q?r0ate2ik0QUr1MCGwe2CU4XjHK5AbEQ4xMEWprN4u2GfIeFmpOoyyLOHqpJa?=
 =?us-ascii?Q?PmXJN6JThr4Sff7DrFuDP9yCqu7jIQDbBeuxcT2nHVMcmTav6NO2CBRxZlbt?=
 =?us-ascii?Q?FiuBeo2i37A890aeol0eRgETBqItcnvhHJykqm1qpWu+yjl01X1PN4gUnrv6?=
 =?us-ascii?Q?L8+W7EsL7wUTNCPeiEtavVkuDUGy1HNv9fehhO9JTyog+5B4a71Exel3Nhe/?=
 =?us-ascii?Q?pFDWslKdiFYEkez3tv8Gy2iQ67r0MSVmu7siDQBdZDqy6iPr75HHovB38ta3?=
 =?us-ascii?Q?nBBJKRL0DN42TQzuLFm4gE2U2sTVAYiyHS4VekSEhWo/L4za8yzLDjqLtSd8?=
 =?us-ascii?Q?rE7OyJPMqoSQ7Clv7dnwK0tMl50e3BXOyUYYEMNaz087W5xoZGoEu0lj9aiO?=
 =?us-ascii?Q?16tSqz964vs5CQUgTiTAvTVpBqiFXSoYNKHzY3Z2UVPJEO7kj57eWCP0FS+S?=
 =?us-ascii?Q?3tdjDQ1GYUfV3HRGwr92a7XzwVTwNyG9o/SGWlmfnqlWwmqHFL0rsFERp7wi?=
 =?us-ascii?Q?A20WRI4WkUoZ8Sv3OKofHbXjgNfn6f6YowPiiaWgFre8VYd6t4y1E62gZb3q?=
 =?us-ascii?Q?FgWM/3FSOeryYCyYWQADMKUQ9i6RgjZjZpZCbJvL5blnx93LqmZ0/FEy6mMu?=
 =?us-ascii?Q?mdUxmJ2uVAMhNf0lbwa767QVrL3ii+Cq6TOWpyAYYeFMqK3/YhfNVejTYgbX?=
 =?us-ascii?Q?I9qz2+FEztE1LMLNoSug5Ky/ljOrMrJy?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(1800799024)(82310400026)(36860700013)(7416014)(376014);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:12.8933
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b0bcd5ed-0694-4ca5-5822-08dc9900f2a7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A34B.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA0PR12MB7601

The gsi of a passthrough device must be configured before it can be
mapped into an hvm domU. But when dom0 is PVH, gsis do not get
registered: the apic, pin and irq info is not added to the irq_2_pin
list, and the irq_desc handler is not set, so when a device is passed
through, setting the ioapic affinity and vector fails.

To fix the above problem, on the Linux kernel side, new code will call
PHYSDEVOP_setup_gsi for passthrough devices to register the gsi when
dom0 is PVH.

So, add PHYSDEVOP_setup_gsi to hvm_physdev_op for the above purpose.

To clarify two questions:

First, why does the gsi of a device that belongs to PVH dom0 work?
Because when a driver is probed for a normal device, the Linux kernel
calls pci_device_probe-> request_threaded_irq-> irq_startup->
__unmask_ioapic-> io_apic_write, which traps into the Xen side:
hvmemul_do_io-> hvm_io_intercept-> hvm_process_io_intercept->
vioapic_write_indirect-> vioapic_hwdom_map_gsi-> mp_register_gsi.
So the gsi gets registered.

Second, why does the gsi of a passthrough device not work when dom0
is PVH?
Because when a device is assigned for passthrough, pciback probes it,
calling pcistub_probe->pcistub_seize-> pcistub_init_device->
xen_pcibk_reset_device-> xen_pcibk_control_isr->isr_on; but isr_on is
not set, so the fake IRQ handler is not installed and the gsi is never
unmasked. What's more, on the Xen side,
vioapic_hwdom_map_gsi-> mp_register_gsi is called only when the gsi is
unmasked, so the gsi cannot work for a passthrough device.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 xen/arch/x86/hvm/hypercall.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 03ada3c880bd..cfe82d0f96ed 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -86,6 +86,7 @@ long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -ENOSYS;
         break;
 
+    case PHYSDEVOP_setup_gsi:
     case PHYSDEVOP_pci_mmcfg_reserved:
     case PHYSDEVOP_pci_device_add:
     case PHYSDEVOP_pci_device_remove:
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:29 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751115.1159016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlF-000092-2A; Sun, 30 Jun 2024 12:34:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751115.1159016; Sun, 30 Jun 2024 12:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlE-00008t-UG; Sun, 30 Jun 2024 12:34:28 +0000
Received: by outflank-mailman (input) for mailman id 751115;
 Sun, 30 Jun 2024 12:34:27 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtlD-0007C1-Dy
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:27 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12on2060a.outbound.protection.outlook.com
 [2a01:111:f403:2418::60a])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 16217c99-36dd-11ef-90a3-e314d9c70b13;
 Sun, 30 Jun 2024 14:34:26 +0200 (CEST)
Received: from CH0PR07CA0020.namprd07.prod.outlook.com (2603:10b6:610:32::25)
 by IA1PR12MB6065.namprd12.prod.outlook.com (2603:10b6:208:3ef::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 12:34:20 +0000
Received: from CH1PEPF0000A348.namprd04.prod.outlook.com
 (2603:10b6:610:32:cafe::63) by CH0PR07CA0020.outlook.office365.com
 (2603:10b6:610:32::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:20 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A348.mail.protection.outlook.com (10.167.244.4) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:20 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:16 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16217c99-36dd-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VqSLTDOqs96c5KP057H160FtIi4ZVmm/CX5ZD+ZM/CFWd+mAFr4uVMQX/G7n8Sf6hwOp5Dui7itOXSsBu25cP/LcmVc3qmYXnzPJ+sxnz8zLuCjzl0+dtu/yYndGj5ZyCdqtHXA12mfVZnEEGSrtOVe/GYIa8xYZgU4UTf0RdlR07RKJ0ZB/3M1Vi5puqxJoyvO00IOMYHxwc8gfDSJYlOdZju8Ov6OWDPDbr8xRP06ZfPQkZkgmyrxHbU3umIqJnfwNvxDbcSiEuCLxg1qGfnoLI69AtAk6WeimZfm4qCe81Bc4YKPeviFECcL9J1yc/3EmRpoyA+7JSII25JsbDg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=GEDL2Ue2C9U9cBafxmiXxoyp43uQlrhbX1J3rhhqNwM=;
 b=W8TxnQnfr/2VyRB8pWD2/SbOw5PhhXxe+IAKXvgF7ooilFKiZXfclHjCD2Yze0UL0FlAyrs3QBW3iX5VK1sOv8e6hHxt86jlQd8EA0PNWXRW/SI9fK3jpHzYjwTYkkqHQFcO00a+DDWHLs4IZ/JvsJZscq6MZHdyXzV9XU8WxTtrzl500sAGON5TQVlSnf49N2reucTFL5G0+ovtddeiBdumBjCUpodcX17Kw/eUUqj6agH6neoVDbVjRTRfqKOkdOgK8ozldW0AFnNwVP1i5q1aNJxKckHpLuCwGW+acgQDb314P4PBgzPPbNdoPYzBqCtnDgpCFrkLXaN7yP7JZQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GEDL2Ue2C9U9cBafxmiXxoyp43uQlrhbX1J3rhhqNwM=;
 b=1JHGmz4+gYKAh+ZrIG0DxqmnbqX3uiJxr3oNaTBrJ2HCRvb82sdrm5mJfM1wVCaLvwwVj05Kj05oOs7Wc96Eypgz5fkXUiHGrUHzK40ax0oD2cUv0/LDELGQOj8a+G4QOM+DXYaAzZI9rUu2BffcX4IU4oiaVrrZiwb5b18p2bU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v11 5/8] x86/domctl: Add XEN_DOMCTL_gsi_permission to grant gsi
Date: Sun, 30 Jun 2024 20:33:41 +0800
Message-ID: <20240630123344.20623-6-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A348:EE_|IA1PR12MB6065:EE_
X-MS-Office365-Filtering-Correlation-Id: 47bc0807-671d-4e18-8e19-08dc9900f6f7
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|1800799024|82310400026|376014|7416014|36860700013;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?CEDPZFLoglkQhdJfX29Fy5XwZMxYNwNuis2n3gNzJxfg1stWRWJBzz69lzej?=
 =?us-ascii?Q?LrejBdWDHmAZYz2/+zwi7nV5g5mWQFsBXcmjsHzf3VJKo3+Ie3r5qxH9Uf9r?=
 =?us-ascii?Q?N6TGCBg+WGwkU0F38uKa5T+lB8DUUeVj0bxZWPImtsJ7Bz8KwjEZilmL7k/j?=
 =?us-ascii?Q?e5o3uoTlY/1RLjMRib6jAZe6qg0yq8HDwYixhfm8D0wIL7DhqLjxiGkocCx1?=
 =?us-ascii?Q?Ay+Iz3hHeAWaz9OVEddXnG/WatU8a8QM8CLJFxfv0/m5NXgyfJhoj/+zw7BU?=
 =?us-ascii?Q?OeUlzSJHjPuICaMBO0Ts2WaWli3m8iXibDdd3fNNMCGIFXMtRVkcUZdlf4M8?=
 =?us-ascii?Q?Prgwx0xtyTsC0x6Bzij373FluRwVzOLsXGtomwzNmLT6aTsRCEjxYv5OX808?=
 =?us-ascii?Q?SH5L2kAwiyAGWwo0I5MqmYoFxxhBJqDgOptmbvYgroRMa4j2667buLpLOEfE?=
 =?us-ascii?Q?/JGjdQGaVMxhQse92qPOP2/R9YBawjpDYBYpmsl+F/m6QBoGe0DebA2dXqOO?=
 =?us-ascii?Q?cJHOESd4NOEDj5NE97Rv8/P66h1eGBiVu1puB+lko/u9IIIRbo/qx0mqaPX5?=
 =?us-ascii?Q?nYM8pBYA4W6Q1a70bZeIGZgQrc4EPldRyJP04KcJgI8RgHqdUhtYCz4nAvws?=
 =?us-ascii?Q?zRH/EEhr06YRsOyX+ZWp/4jtXoOA3kC4D3ycYRAgXYpldTv/sqx5X56sSfrF?=
 =?us-ascii?Q?BTpldTeTdKGoB630lrZ3to2NrbDK6YIF+2RNRV5pz6bCW13vcC6IIDPieKO8?=
 =?us-ascii?Q?Yflwqhq+adMAAzR+KAY+njykHwLGTSDsJ6k9YgqCiPQ3DFNYSF++18I84zIy?=
 =?us-ascii?Q?n1ZWEPHYFjkpoHkYOtT2jjZikD9JX+e6+Q5MP71LvYLdFZvJP5kZQnOaqhnD?=
 =?us-ascii?Q?1FcXB9qsDWz20MaU685K38gnGUH64Yv7hfYG/2X48sLVsHcSzFQoOJAWLXrI?=
 =?us-ascii?Q?rHzTBqU47jAUODNK4xFBFY9XuTkKWe69rn78C+UL1rjmLkYQlo/GqwNCautl?=
 =?us-ascii?Q?NozOFk+coFPxrsN/bVocxL/Zm5gkbKlboYpQ4zxWm4Nx1IAEK07WVoeuCVD7?=
 =?us-ascii?Q?TzwYvAgQNlbEmJrDREbcpxEA9Mx9GvCa8WWw0APo5gP+iYoiIbnFWwxnGzDU?=
 =?us-ascii?Q?cq8ItX1G0zJnq4lsHkqlffVfqQkZm5AsHKw8PQrV66aDNJE941Xpynw5AKCn?=
 =?us-ascii?Q?sq3Ww3ysdMH59BX9Nk0R6VlO3thNovAbUmkc3f03h/veEK7+M/ibw7yhCCgR?=
 =?us-ascii?Q?8kGr9V2q8wMQqKN5TGeimSSHwpUs2T4hyHaEtgxfKJTH+Yeff2/y+aAznhPH?=
 =?us-ascii?Q?TWqeujm21Q6KOmbtBSLiQgwCbqNC4dEFFJmQw5ykbVUFqdf5yxhbram3ec3D?=
 =?us-ascii?Q?nDwsn3I8VD8Bq00GIRXuicTTyoXuDVpOhBS6Qr6wcKJG1UXAqDdNW6j/FEjT?=
 =?us-ascii?Q?Wnw9MThKUlDMytTR1Z03Eh2peGsqyhWU?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(1800799024)(82310400026)(376014)(7416014)(36860700013);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:20.1455
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 47bc0807-671d-4e18-8e19-08dc9900f6f7
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A348.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: IA1PR12MB6065

Some types of domain, such as PVH, don't have PIRQs and don't issue
PHYSDEVOP_map_pirq for each GSI. When passing a device through to a
guest on a PVH dom0, the call path
pci_add_dm_done->XEN_DOMCTL_irq_permission fails in
domain_pirq_to_irq(), because PVH has no GSI-to-pirq-to-IRQ mapping
on the Xen side.
What's more, the current hypercall XEN_DOMCTL_irq_permission requires
a pirq to be passed in, which is not suitable for a dom0 that has no
PIRQs.

So, add a new hypercall, XEN_DOMCTL_gsi_permission, to grant a domU
permission for the IRQ translated from a GSI when dom0 has no PIRQs.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 xen/arch/x86/domctl.c              | 33 ++++++++++++++++++++++++++++++
 xen/arch/x86/include/asm/io_apic.h |  2 ++
 xen/arch/x86/io_apic.c             | 17 +++++++++++++++
 xen/arch/x86/mpparse.c             |  3 +--
 xen/include/public/domctl.h        |  8 ++++++++
 xen/xsm/flask/hooks.c              |  1 +
 6 files changed, 62 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index 9190e11faaa3..5f20febabbf2 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -36,6 +36,7 @@
 #include <asm/xstate.h>
 #include <asm/psr.h>
 #include <asm/cpu-policy.h>
+#include <asm/io_apic.h>
 
 static int update_domain_cpu_policy(struct domain *d,
                                     xen_domctl_cpu_policy_t *xdpc)
@@ -237,6 +238,38 @@ long arch_do_domctl(
         break;
     }
 
+    case XEN_DOMCTL_gsi_permission:
+    {
+        int irq;
+        uint8_t mask = 1;
+        unsigned int gsi = domctl->u.gsi_permission.gsi;
+        bool allow = domctl->u.gsi_permission.allow_access;
+
+        /* Check that all bits other than the lowest, and all pads, are zero */
+        ret = -EINVAL;
+        if ( domctl->u.gsi_permission.allow_access & ~mask )
+            goto gsi_permission_out;
+        for ( i = 0; i < ARRAY_SIZE(domctl->u.gsi_permission.pad); ++i )
+            if ( domctl->u.gsi_permission.pad[i] )
+                goto gsi_permission_out;
+
+        if ( gsi >= nr_irqs_gsi || ( irq = gsi_2_irq(gsi) ) < 0 )
+            goto gsi_permission_out;
+
+        ret = -EPERM;
+        if ( !irq_access_permitted(currd, irq) ||
+             xsm_irq_permission(XSM_HOOK, d, irq, allow) )
+            goto gsi_permission_out;
+
+        if ( allow )
+            ret = irq_permit_access(d, irq);
+        else
+            ret = irq_deny_access(d, irq);
+
+    gsi_permission_out:
+        break;
+    }
+
     case XEN_DOMCTL_getpageframeinfo3:
     {
         unsigned int num = domctl->u.getpageframeinfo3.num;
diff --git a/xen/arch/x86/include/asm/io_apic.h b/xen/arch/x86/include/asm/io_apic.h
index 78268ea8f666..7e86d8337758 100644
--- a/xen/arch/x86/include/asm/io_apic.h
+++ b/xen/arch/x86/include/asm/io_apic.h
@@ -213,5 +213,7 @@ unsigned highest_gsi(void);
 
 int ioapic_guest_read( unsigned long physbase, unsigned int reg, u32 *pval);
 int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val);
+int mp_find_ioapic(int gsi);
+int gsi_2_irq(int gsi);
 
 #endif
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index d73108558e09..d54283955a60 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -955,6 +955,23 @@ static int pin_2_irq(int idx, int apic, int pin)
     return irq;
 }
 
+int gsi_2_irq(int gsi)
+{
+    int ioapic, pin, irq;
+
+    ioapic = mp_find_ioapic(gsi);
+    if ( ioapic < 0 )
+        return -EINVAL;
+
+    pin = gsi - io_apic_gsi_base(ioapic);
+
+    irq = apic_pin_2_gsi_irq(ioapic, pin);
+    if ( irq <= 0 )
+        return -EINVAL;
+
+    return irq;
+}
+
 static inline int IO_APIC_irq_trigger(int irq)
 {
     int apic, idx, pin;
diff --git a/xen/arch/x86/mpparse.c b/xen/arch/x86/mpparse.c
index d8ccab2449c6..c95da0de5770 100644
--- a/xen/arch/x86/mpparse.c
+++ b/xen/arch/x86/mpparse.c
@@ -841,8 +841,7 @@ static struct mp_ioapic_routing {
 } mp_ioapic_routing[MAX_IO_APICS];
 
 
-static int mp_find_ioapic (
-	int			gsi)
+int mp_find_ioapic(int gsi)
 {
 	unsigned int		i;
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 2a49fe46ce25..f7ae8b19d27d 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -464,6 +464,12 @@ struct xen_domctl_irq_permission {
     uint8_t pad[3];
 };
 
+/* XEN_DOMCTL_gsi_permission */
+struct xen_domctl_gsi_permission {
+    uint32_t gsi;
+    uint8_t allow_access;    /* flag to specify enable/disable of x86 gsi access */
+    uint8_t pad[3];
+};
 
 /* XEN_DOMCTL_iomem_permission */
 struct xen_domctl_iomem_permission {
@@ -1306,6 +1312,7 @@ struct xen_domctl {
 #define XEN_DOMCTL_get_paging_mempool_size       85
 #define XEN_DOMCTL_set_paging_mempool_size       86
 #define XEN_DOMCTL_dt_overlay                    87
+#define XEN_DOMCTL_gsi_permission                88
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1328,6 +1335,7 @@ struct xen_domctl {
         struct xen_domctl_setdomainhandle   setdomainhandle;
         struct xen_domctl_setdebugging      setdebugging;
         struct xen_domctl_irq_permission    irq_permission;
+        struct xen_domctl_gsi_permission    gsi_permission;
         struct xen_domctl_iomem_permission  iomem_permission;
         struct xen_domctl_ioport_permission ioport_permission;
         struct xen_domctl_hypercall_init    hypercall_init;
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 5e88c71b8e22..a5b134c91101 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -685,6 +685,7 @@ static int cf_check flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_shadow_op:
     case XEN_DOMCTL_ioport_permission:
     case XEN_DOMCTL_ioport_mapping:
+    case XEN_DOMCTL_gsi_permission:
 #endif
 #ifdef CONFIG_HAS_PASSTHROUGH
     /*
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:32 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751116.1159026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlI-0000WU-8t; Sun, 30 Jun 2024 12:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751116.1159026; Sun, 30 Jun 2024 12:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlI-0000WL-5l; Sun, 30 Jun 2024 12:34:32 +0000
Received: by outflank-mailman (input) for mailman id 751116;
 Sun, 30 Jun 2024 12:34:31 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtlG-0007C1-Vp
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:30 +0000
Received: from NAM10-BN7-obe.outbound.protection.outlook.com
 (mail-bn7nam10on20614.outbound.protection.outlook.com
 [2a01:111:f403:2009::614])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 18532fa6-36dd-11ef-90a3-e314d9c70b13;
 Sun, 30 Jun 2024 14:34:30 +0200 (CEST)
Received: from CH2PR07CA0006.namprd07.prod.outlook.com (2603:10b6:610:20::19)
 by DS0PR12MB9324.namprd12.prod.outlook.com (2603:10b6:8:1b6::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 12:34:23 +0000
Received: from CH1PEPF0000A34A.namprd04.prod.outlook.com
 (2603:10b6:610:20:cafe::78) by CH2PR07CA0006.outlook.office365.com
 (2603:10b6:610:20::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:23 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A34A.mail.protection.outlook.com (10.167.244.5) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:23 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:19 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18532fa6-36dd-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Y5KE5lmkOD+fbuNRjvVaRO4XuvsR85SCt2IPqhsPlxM0luPx7tJySuRqrqPXETcac1qIxOyvtHkHiWCWTxxe24SHxOBbC6TJvRMeAMsiaX33I8vAl3Sx2TuNTPLNO4DlMKW3MpVep97OKLngBzTDeI4lMWaMj3dY7+yXZkCqiVqR6B0FiDTYgEJMcVyNAjM4RujPjABlkjdP1t0ejyQfLcqcmW1y8VNLHZp8q3t80PHJ6BNJiqMMSSBsUldkhx+xOlF95pMK27gY5IYlFlidl8fVHxIW8YonMQKL/fb72Cs6Pak42lGdP/54Tl9T+foo7LEvHag0buXXy/Yygq49Uw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8/A6f8iRXUzEPbO8B0mLZ/2d23qq0f+RU7w4EWufx0c=;
 b=Qjke6Ibo4hGRvkJ+whaZ97hQy+KNMgNsm6SBS05QxHALmaPyU74Vt+4PMKNMVuCMgg1SfrwDhXXjemRNt7PcgS+tzK14j3BmL7tUJpOvrUNq6+NTH4fUjBy2qbdQYLVbFjh0NrvtcsVr7duc70RWq+WK2f9nHQt/WGcp25TRJKE+jyGKggH/G5knLZFBMQG2P1TMDbp0dscEF+0M/35GOK9KGDNvTWScpdUbR0GkYPUMQIWH1qA1HCIDAxVzBL7ePgIkXYaOoMcGqZcI0y8WBrb1wF9o3QwMmLEQLMPBcV25M+eXWt65PJMH9N63hcl5A6G9BgHSoB6uNSjo4eAOlQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8/A6f8iRXUzEPbO8B0mLZ/2d23qq0f+RU7w4EWufx0c=;
 b=Ys0CYFw3eGnVt3GTTyZid/dw7sNOeZvcoOHrPLwT6b12c6z76I6oZMxJZStvFVqA2Gc4+MpWB3HNQZw0FLuLamO+B6sJl1qWkYM3Er2X174As410Gv91CA03HpKFJR2yFWP+C5H3+BGWnMVeP0EPu4H311C8WrPWE+R2J3MqwqU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [XEN PATCH v11 6/8] tools/libxc: Allow gsi be mapped into a free pirq
Date: Sun, 30 Jun 2024 20:33:42 +0800
Message-ID: <20240630123344.20623-7-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A34A:EE_|DS0PR12MB9324:EE_
X-MS-Office365-Filtering-Correlation-Id: 5c7a08a3-11b2-4ea9-9d94-08dc9900f920
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|7416014|82310400026|36860700013|376014|1800799024;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?1lqTe4oKDSfpeo/Fu97gTNQKXXHVSXAjgNVlBTG+P0Qyo3TyLP7A+CJDtyel?=
 =?us-ascii?Q?T44JzQwdlZN0m9fqzk/WtpeHTX76zFMPj6TiCIV7n9mHCPAIx6QO5E31RMuh?=
 =?us-ascii?Q?a+7z9gh47SlSuaB6VmZocQLNBQ/6p2hq+/Aw1mzBtEChEz1sS8IMsbgIyRP+?=
 =?us-ascii?Q?i/t3ow5g5US46+gcwJFkmlSscCYnlEjPqMipN1MVXVHrAasHTA1KyO3A8lnz?=
 =?us-ascii?Q?+uDTXPtgecBfnMCGJzdzO6luXbduLf3GI9jXjdXJKSEmZOSm/No+Xj6Kf4MV?=
 =?us-ascii?Q?rxOw7VwrpJ15gwy1WJlcnO0X2FXzJKIuxBd/5I53HOLlvq+nIWwqNSKAVM+U?=
 =?us-ascii?Q?k0FLujajEjOVyQSLnrOfxYsHuO1IXlnTDkThruRLfG+tSaNoLgMYUbc82C1N?=
 =?us-ascii?Q?gfDproXTvuYksgsveTAFlK2vcCCoxzvCyqmqlBHz4+S4egfdfTCyxZeJ1pPZ?=
 =?us-ascii?Q?9QGOgrb/JjKlYI0UFtCi+/prZKM1VXRC8pXTy7U1Q/2eA7k5g0WU2D0hcvp2?=
 =?us-ascii?Q?y0BSO3niLYnYh3d7RBDRVz+64xYJhLekd+a5Oa+7MHgWnpVZhyxOSm+b3tNG?=
 =?us-ascii?Q?M+YS2PPbIzTVNzvxd5GVU63j0CCw+/M0/clS4LBX1XAm9c2hbs6Wopy7RPEA?=
 =?us-ascii?Q?30sMJBtcx4nowIJoI9YPooEyIAGhEcrNTORTPemoRu6ctypW/AuJOHL7vl8N?=
 =?us-ascii?Q?1gHNcgNfQNVk+PUWGQRsufGcNoVKMIX9GiwRIppCi8/blcrkk08uARi+Cf6Q?=
 =?us-ascii?Q?1BwyX71Thz6BnHwUxi/PNB8nlJnsvDvx4t5p8zVmZ+15TSAwgZt+ftNhT8if?=
 =?us-ascii?Q?nZUUn0/9jcLmWzo3KAEzTwohviIv/fsq5fwakWE2oiW20C80jPkE2Qbz51fd?=
 =?us-ascii?Q?v7DdrqbTx0KYQrEHHEBooo7wMAgfxBEsuXYSTLVTfVal4dPB8mJZnmK6NiAv?=
 =?us-ascii?Q?bCsf3/lb6a/h2laGrkeFqpCklbrIOhXwT03cktfGOpIoWksrw4N+5tZoYy58?=
 =?us-ascii?Q?hphp8msCzqoccaBOfXn7noyrQF/FZEJxuEdgsfn6pSFwwS0qOjmnNieR//av?=
 =?us-ascii?Q?Z0jD0tBaITQelbDHXqdWmWja0VLZgAgLx5an/gy+CsWKhd6z/fRe/LQ4SwXG?=
 =?us-ascii?Q?ArCLQGrm9UbrK6l3vTpzPeWFiH3neafoxnSlmaP/XEKpum026zc1Qt2jkjCZ?=
 =?us-ascii?Q?U+6SprDUi34GaP7PnpktsTzIo4aT2V63cP/6v+IJXCSrCaeCEN/LmozgJ0Uy?=
 =?us-ascii?Q?f/JlvQ/GTid8lig5pEpglyZ+1iFfWwN42qw5lUR/FPvktQI6RG1d1aUhVmrL?=
 =?us-ascii?Q?GLJv02rvZPq4GNP603uuosHsRj3+34KdKfvkpEuYE1VKg9/eN+xUAJejh4lS?=
 =?us-ascii?Q?zWpzWScMWEMuIAHerUgV/Ce5UpY56LY9IWOI/Nh4iyo73P/xXZFUVaVvPHgD?=
 =?us-ascii?Q?p8bQjaFhRs85bwQQ33ncUAxPS5279NT+?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(7416014)(82310400026)(36860700013)(376014)(1800799024);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:23.7544
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c7a08a3-11b2-4ea9-9d94-08dc9900f920
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A34A.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS0PR12MB9324

The hypercall PHYSDEVOP_map_pirq supports mapping a GSI to either a
specific pirq or a free pirq, depending on the sign of the pirq
parameter (>0 or <0). But the current xc_physdev_map_pirq() sets
*pirq = index when the pirq parameter is < 0, which forces every call
to request a specific pirq. That has two problems: the caller can't
obtain a freely allocated pirq value, and the call fails if that
specific pirq is already mapped to another GSI.

So, change xc_physdev_map_pirq() to pass a negative parameter through
unmodified, so that a free pirq can be allocated and returned.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
---
 tools/libs/ctrl/xc_physdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/ctrl/xc_physdev.c b/tools/libs/ctrl/xc_physdev.c
index 460a8e779ce8..e9fcd755fa62 100644
--- a/tools/libs/ctrl/xc_physdev.c
+++ b/tools/libs/ctrl/xc_physdev.c
@@ -50,7 +50,7 @@ int xc_physdev_map_pirq(xc_interface *xch,
     map.domid = domid;
     map.type = MAP_PIRQ_TYPE_GSI;
     map.index = index;
-    map.pirq = *pirq < 0 ? index : *pirq;
+    map.pirq = *pirq;
 
     rc = do_physdev_op(xch, PHYSDEVOP_map_pirq, &map, sizeof(map));
 
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:34 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751118.1159036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlK-0000vj-In; Sun, 30 Jun 2024 12:34:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751118.1159036; Sun, 30 Jun 2024 12:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlK-0000vR-EF; Sun, 30 Jun 2024 12:34:34 +0000
Received: by outflank-mailman (input) for mailman id 751118;
 Sun, 30 Jun 2024 12:34:33 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtlJ-0007C1-0B
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:33 +0000
Received: from NAM12-DM6-obe.outbound.protection.outlook.com
 (mail-dm6nam12on20601.outbound.protection.outlook.com
 [2a01:111:f403:2417::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 19c09400-36dd-11ef-90a3-e314d9c70b13;
 Sun, 30 Jun 2024 14:34:32 +0200 (CEST)
Received: from CH2PR03CA0023.namprd03.prod.outlook.com (2603:10b6:610:59::33)
 by DM4PR12MB7575.namprd12.prod.outlook.com (2603:10b6:8:10d::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.26; Sun, 30 Jun
 2024 12:34:27 +0000
Received: from CH1PEPF0000A34C.namprd04.prod.outlook.com
 (2603:10b6:610:59:cafe::76) by CH2PR03CA0023.outlook.office365.com
 (2603:10b6:610:59::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:27 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A34C.mail.protection.outlook.com (10.167.244.6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7677.15 via Frontend Transport; Sun, 30 Jun 2024 12:34:27 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:23 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19c09400-36dd-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H3zHZGnV7mTWzDrN1HNBFclinj/tQ6f7o/+mXmQdELpkn3wVD5cpUsdgohcZD096dKbpfsVprLmGTonudpNlNgquQmZ4QoFI/eFCeHz5VE/4BiqQLbHi+YfOklY3FKYyhWgWuVyUNs7wL0TQ1I5QNbgZH9T6I1i6j71CqzA6UAh1E7RgD9UDmqawmhiY0XbrIqHoV12ROT4/QbHs+HqT+eP38BL1wzLtXzsJiR2vZAdNqNSNULfsvTurJLf9LlfTyj08g2fh7Nk7diV1vbzwQUVP8tLF8lVPq/+w/K4canv3Iws8xb3xhzW3nVdC8Ldiwe5GABljV3BpYwbkVt/9Ng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=AIAxLkWzUDGgwfyYyLpTh1BgohmKm21ZOwTpKfup/7Y=;
 b=XxScAcqkyZif2w6nWwvdQkUHoUQ2KuPlxKCLOBc3fQN7D4OJw0K5bx6/2CpEu8dxj7bzX9NwH5g4HWpaUTHI0zejMl4JxRJ1tq3FRgziWH+A89rweO6qnqhZjveziVRN+IrL90KakvqEDlcEsZBBWi5jbVp1cc5OLhlGJCajBZZxukFaVIE3F+TiKels+sDJjzKMBZGY5Z0oCFKPkyx+ynuKMWisHsQsn226XHlrVCoMW737S+ifVeo4jySyT3CRmtjyUwgkM1s/ZL0C3O2FNE1xhXaaTZBlmQgIdnxFwvF5EOARF8Q9n5fUiTmDGm6Skhte90HvDw9JzlEXVErt8A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AIAxLkWzUDGgwfyYyLpTh1BgohmKm21ZOwTpKfup/7Y=;
 b=FVmR+MXuyjslQBtzi9CVXw+uGnjvcjfjbJHcdNL4atPv5Wl9Vg6i5Xv41Zz3LXEBK5hNpwFvpR2qCy8+sTnmF8/Y53QFswZmR3b5/xfk239RF/BS1d7A9ipHsBi7tDMjxlZ15i27I/m70t1xjG67unysz5wfqhAKqZ7Subxn5cs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [RFC XEN PATCH v11 7/8] tools: Add new function to get gsi from dev
Date: Sun, 30 Jun 2024 20:33:43 +0800
Message-ID: <20240630123344.20623-8-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A34C:EE_|DM4PR12MB7575:EE_
X-MS-Office365-Filtering-Correlation-Id: 48011aa0-bc64-4c6f-a810-08dc9900fb4f
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|376014|36860700013|1800799024|82310400026|7416014;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?RuEtczyWJEJdY4TXgHYXmk/IUvt3l4xEwgoIH7zzKnmbDq9TXeURxFnUQbbm?=
 =?us-ascii?Q?AwrVuYOj2tFyfB4tvX7nFLdix3i/0gGHZHMpKj80eevTK7c0ABpIqCkW9B9e?=
 =?us-ascii?Q?U7MbcwGLvFshy2udbseeZerCUGChidDMUDUqiggOc9xF5j2Sm0BzG5m76Gdg?=
 =?us-ascii?Q?euqg3XZe5KTNnGki1mxR339YscWzQXmFDwQen8LtpRMhlcApybE2ePvwM6Pa?=
 =?us-ascii?Q?iim6N57LN5dwzhjO7ibahQqwa9ZK93NgSc5Hiqy0RiYzj9ZbX6wZZ7wDBfbr?=
 =?us-ascii?Q?zHY55dwSNfJFZSpBsssCpe63zYvsVyyQJrI2KQo+sZx8xgIOLP2Tqxo2/QQO?=
 =?us-ascii?Q?VzOS47f9RG0vkK/KX9kNeH9OmI1L9wDdhfN+BYaQjHKkhuS/rmrUrr3M5KOo?=
 =?us-ascii?Q?iVJxtLVQaXNVIgdKhVHPucu0yU4QHPMLkNFv7F44vsOfFCz2CRgZfRjgEIfv?=
 =?us-ascii?Q?EriQoq/LqQN40wPp4316xKG21c/4Ka+IWZNFfYq3hwaolg+RfjMCi60BxEMq?=
 =?us-ascii?Q?zv67tQY7JmmcJTQSoKSE1VVjM3xE2pbymZVunnxRC1JM63tMmZ1WLRTiGath?=
 =?us-ascii?Q?suZiTLjNKIjKwjVzrb3nU+CeUI3p+jQDcYrxD+eUREH1LGHEH6GBY9p2Pn1y?=
 =?us-ascii?Q?9T94J63qOw+jHzsFLOY5mz59FLpYtmHjcYc72KEfL/BYQQvsAZxE0vECFpZe?=
 =?us-ascii?Q?q7i1oMS99RUIsSr+3tE8H17idyDBPPzCZ0mq7KrLD16IrIYm+LugoqKI0rh5?=
 =?us-ascii?Q?uzH4lFIvoQCzzCDlqpl178U1JXgQ/i4a3SFeLGCdi09bQhKzUHgBT+b+Kz8+?=
 =?us-ascii?Q?tgxgf1vBTW7RjVzReoZLu5JTq93z0WCud0Jmq/9fxhmkF0KdPA2UHM3RVLzl?=
 =?us-ascii?Q?GU0v6eEIrI4drsMjYvugyWkaGeLg42UbmuE1RssDp+zyGNTRVKRE8XI1TeK2?=
 =?us-ascii?Q?f+Hk0OK+X8/s298aWFr0kSDzW7fpc/5/Q44VO/8kbOMh0EGefy65Scan4L3H?=
 =?us-ascii?Q?e4ArltwAZ4KNeMnqwfKNPxKABvHnjbo5tpQP6YUxZfq84T5/NQKR+bmtTs6d?=
 =?us-ascii?Q?B+/R/pvS38HB73WFOVndPTl2t9EmqsQNDZJJGAWdns1ok19WkojmwlWYpkh6?=
 =?us-ascii?Q?V9I1rXIDRZT595WsvwjgjTqbs1E4tBKdBQ2DbMROLZwDNHhN/KHDu/9ZjwsW?=
 =?us-ascii?Q?IJI+/uYRy/FZRjZAnz1TwJIbLlS3zMiehASZIGSbeL9zKXMWTaAG3OOX32X4?=
 =?us-ascii?Q?D8C3oCw1oQ4g0YinERa8IJdh6BMq7TPU/AlwOIlcs3HiMWHG7ZT9v+YxVjOM?=
 =?us-ascii?Q?BzVr8vLAvl1a3LHGS4uwOE1YUHHUXU33HJ4h9uz3ge9O9rpyCco2xnEaqDqs?=
 =?us-ascii?Q?0b9x/r0u3Immj/g95BElDdmt8mJB0CzQTrxeMMeA+xqFyCqBJg=3D=3D?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(376014)(36860700013)(1800799024)(82310400026)(7416014);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:27.4337
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 48011aa0-bc64-4c6f-a810-08dc9900fb4f
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A34C.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR12MB7575

When passing through a device to a domU, QEMU and the xl tools use its
GSI number to do the PIRQ mapping, see the QEMU code
xen_pt_realize->xc_physdev_map_pirq and the xl code
pci_add_dm_done->xc_physdev_map_pirq. However, the GSI number is read
from the file /sys/bus/pci/devices/<sbdf>/irq, which is wrong: an IRQ
is not equal to a GSI (they are in different spaces), so the PIRQ
mapping fails.

Moreover, the current code provides no way for userspace to get the GSI.
To address this, add a new function to get the GSI; the corresponding
ioctl is implemented on the Linux kernel side.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Chen Jiqian <Jiqian.Chen@amd.com>
---
RFC: this depends on the corresponding third patch on the Linux kernel
side being merged first:
https://lore.kernel.org/xen-devel/20240607075109.126277-4-Jiqian.Chen@amd.com/
This patch must not be merged before the kernel-side patch.
---
 tools/include/xen-sys/Linux/privcmd.h |  7 ++++++
 tools/include/xenctrl.h               |  2 ++
 tools/libs/ctrl/xc_physdev.c          | 35 +++++++++++++++++++++++++++
 3 files changed, 44 insertions(+)

diff --git a/tools/include/xen-sys/Linux/privcmd.h b/tools/include/xen-sys/Linux/privcmd.h
index bc60e8fd55eb..4cf719102116 100644
--- a/tools/include/xen-sys/Linux/privcmd.h
+++ b/tools/include/xen-sys/Linux/privcmd.h
@@ -95,6 +95,11 @@ typedef struct privcmd_mmap_resource {
 	__u64 addr;
 } privcmd_mmap_resource_t;
 
+typedef struct privcmd_gsi_from_pcidev {
+	__u32 sbdf;
+	__u32 gsi;
+} privcmd_gsi_from_pcidev_t;
+
 /*
  * @cmd: IOCTL_PRIVCMD_HYPERCALL
  * @arg: &privcmd_hypercall_t
@@ -114,6 +119,8 @@ typedef struct privcmd_mmap_resource {
 	_IOC(_IOC_NONE, 'P', 6, sizeof(domid_t))
 #define IOCTL_PRIVCMD_MMAP_RESOURCE				\
 	_IOC(_IOC_NONE, 'P', 7, sizeof(privcmd_mmap_resource_t))
+#define IOCTL_PRIVCMD_GSI_FROM_PCIDEV				\
+	_IOC(_IOC_NONE, 'P', 10, sizeof(privcmd_gsi_from_pcidev_t))
 #define IOCTL_PRIVCMD_UNIMPLEMENTED				\
 	_IOC(_IOC_NONE, 'P', 0xFF, 0)
 
diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 9ceca0cffc2f..3720e22b399a 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1641,6 +1641,8 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
                           uint32_t domid,
                           int pirq);
 
+int xc_physdev_gsi_from_pcidev(xc_interface *xch, uint32_t sbdf);
+
 /*
  *  LOGGING AND ERROR REPORTING
  */
diff --git a/tools/libs/ctrl/xc_physdev.c b/tools/libs/ctrl/xc_physdev.c
index e9fcd755fa62..54edb0f3c0dc 100644
--- a/tools/libs/ctrl/xc_physdev.c
+++ b/tools/libs/ctrl/xc_physdev.c
@@ -111,3 +111,38 @@ int xc_physdev_unmap_pirq(xc_interface *xch,
     return rc;
 }
 
+int xc_physdev_gsi_from_pcidev(xc_interface *xch, uint32_t sbdf)
+{
+    int rc = -1;
+
+#if defined(__linux__)
+    int fd;
+    privcmd_gsi_from_pcidev_t dev_gsi = {
+        .sbdf = sbdf,
+        .gsi = 0,
+    };
+
+    fd = open("/dev/xen/privcmd", O_RDWR);
+
+    if (fd < 0 && (errno == ENOENT || errno == ENXIO || errno == ENODEV)) {
+        /* Fallback to /proc/xen/privcmd */
+        fd = open("/proc/xen/privcmd", O_RDWR);
+    }
+
+    if (fd < 0) {
+        PERROR("Could not obtain handle on privileged command interface");
+        return rc;
+    }
+
+    rc = ioctl(fd, IOCTL_PRIVCMD_GSI_FROM_PCIDEV, &dev_gsi);
+    close(fd);
+
+    if (rc) {
+        PERROR("Failed to get gsi from dev");
+    } else {
+        rc = dev_gsi.gsi;
+    }
+#endif
+
+    return rc;
+}
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:34:40 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751123.1159046 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlQ-0001Xm-0A; Sun, 30 Jun 2024 12:34:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751123.1159046; Sun, 30 Jun 2024 12:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNtlP-0001Xa-TC; Sun, 30 Jun 2024 12:34:39 +0000
Received: by outflank-mailman (input) for mailman id 751123;
 Sun, 30 Jun 2024 12:34:38 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aV4l=OA=amd.com=Jiqian.Chen@srs-se1.protection.inumbo.net>)
 id 1sNtlO-0007C1-Bb
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:34:38 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com
 (mail-mw2nam10on20602.outbound.protection.outlook.com
 [2a01:111:f403:2412::602])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 1bc5c146-36dd-11ef-90a3-e314d9c70b13;
 Sun, 30 Jun 2024 14:34:37 +0200 (CEST)
Received: from CH2PR12CA0010.namprd12.prod.outlook.com (2603:10b6:610:57::20)
 by CH2PR12MB4039.namprd12.prod.outlook.com (2603:10b6:610:a8::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 12:34:31 +0000
Received: from CH1PEPF0000A347.namprd04.prod.outlook.com
 (2603:10b6:610:57:cafe::cb) by CH2PR12CA0010.outlook.office365.com
 (2603:10b6:610:57::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.32 via Frontend
 Transport; Sun, 30 Jun 2024 12:34:31 +0000
Received: from SATLEXMB04.amd.com (165.204.84.17) by
 CH1PEPF0000A347.mail.protection.outlook.com (10.167.244.7) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.7741.18 via Frontend Transport; Sun, 30 Jun 2024 12:34:31 +0000
Received: from cjq-desktop.amd.com (10.180.168.240) by SATLEXMB04.amd.com
 (10.181.40.145) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2507.39; Sun, 30 Jun
 2024 07:34:27 -0500
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1bc5c146-36dd-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fPMjX3ADuJuxQEIZJLzdF0qdthSGLbWHPDachqXk6lY3yE1YMZ/FS/MszuES3tWGvGtzUFISI51rzpNdgGkj780idxFqDEWD2cWRU3GASlfJcm03e+xbIJWIVTfYAPP2cSuAhgLCH8hcu3X4j+9oZU8yc6sMnQUmgGyBldTPt1XBkFBHSzz58qIXsSYaYR1pRhkiKM5jDz3OOMonjcpZGJOxqw+1kD6R8JEgrf30S7/HIVR6hx5qsojSmPha2eSsyigjYHWHHdK61iPSuo6Q0RaZBauT6FUk1hRs9yXRb2UVDHZ95vyGWVxbN6nLjjN50rj6SNtTq/8+AY/JqwMaTA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=Ej4Z4EEVIlK2QxjhAZMN4LHn69U2vtDep4mQfyTjzQU=;
 b=fI+3PhMMmH+n0iL2sksLwdsF8E2vsERJVVxLEL/sLlEdIJyxG24bicsxkcUyuHaqtEprLTpstI04iiw9c7IYCBy/wVDEMISx/Um1wzRWEEqwlBVz+ISIS08AKAd0hdXdT4l5WLj8lxsVdDcMtHg0R9g29Xnn9akvNka5mI18i75TPOzBa6W/kMa2GSl8/sblzi0YvGRiQAbu9GXc4Snr1wjU6dOn7gFDGtm5MdzdD/MMGP+g/wgZ+4ol4TXY4N8WQB19iuL+atDCGBkhXlxHK59/YEX4NybR8G/0qbWHBarTwig8HSe69sDmTULAHm4FZQMwQ+q6fnnmo7ztZNuWbg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 165.204.84.17) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=amd.com;
 dmarc=pass (p=quarantine sp=quarantine pct=100) action=none
 header.from=amd.com; dkim=none (message not signed); arc=none (0)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ej4Z4EEVIlK2QxjhAZMN4LHn69U2vtDep4mQfyTjzQU=;
 b=ZdEgydn5QAwFv4uW3mcZ+1a0TIlgkRUTZbeY4pweiuAJy9X1B6eiuiQH4XAKmAZxhmg/JBtzYhXFwEC+qWRc/Hlr+i2ZDC9UXmtkJ7vVbLQzI2Spc29pXE/io0FHVI8ChZYfhUwLCCq6SnoxO7uqWX12ujGVTYiBcNUFjJN/vV4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 165.204.84.17)
 smtp.mailfrom=amd.com; dkim=none (message not signed)
 header.d=none;dmarc=pass action=none header.from=amd.com;
Received-SPF: Pass (protection.outlook.com: domain of amd.com designates
 165.204.84.17 as permitted sender) receiver=protection.outlook.com;
 client-ip=165.204.84.17; helo=SATLEXMB04.amd.com; pr=C
From: Jiqian Chen <Jiqian.Chen@amd.com>
To: <xen-devel@lists.xenproject.org>
CC: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Anthony PERARD <anthony@xenproject.org>, "Juergen
 Gross" <jgross@suse.com>, "Daniel P . Smith" <dpsmith@apertussolutions.com>,
	Stewart Hildebrand <Stewart.Hildebrand@amd.com>, Huang Rui
	<Ray.Huang@amd.com>, Jiqian Chen <Jiqian.Chen@amd.com>, Huang Rui
	<ray.huang@amd.com>
Subject: [RFC XEN PATCH v11 8/8] tools: Add new function to do PIRQ (un)map on PVH dom0
Date: Sun, 30 Jun 2024 20:33:44 +0800
Message-ID: <20240630123344.20623-9-Jiqian.Chen@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20240630123344.20623-1-Jiqian.Chen@amd.com>
References: <20240630123344.20623-1-Jiqian.Chen@amd.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.180.168.240]
X-ClientProxiedBy: SATLEXMB03.amd.com (10.181.40.144) To SATLEXMB04.amd.com
 (10.181.40.145)
X-EOPAttributedMessage: 0
X-MS-PublicTrafficType: Email
X-MS-TrafficTypeDiagnostic: CH1PEPF0000A347:EE_|CH2PR12MB4039:EE_
X-MS-Office365-Filtering-Correlation-Id: cc7a5644-5bfb-4541-e2b2-08dc9900fda0
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-AntiSpam-Relay: 0
X-Microsoft-Antispam:
	BCL:0;ARA:13230040|82310400026|36860700013|376014|7416014|1800799024;
X-Microsoft-Antispam-Message-Info:
	=?us-ascii?Q?hdT1+cc/ITsHy0PuY/YRRdSaRYxKbbZ8baBhRYfBvh12J8lWYR31EbUedZOy?=
 =?us-ascii?Q?3t92zjD/yalASoyjDjwffs1NegQu3eL/XO1EglDAsQZdBhkV/OvtMMKZSNc2?=
 =?us-ascii?Q?NjltiTPLijmgDZaFDmat4ZIcqKZLdFPtL0aZDvvitVEIRAtLyVMtQPDIi9cN?=
 =?us-ascii?Q?QCcX+HIgkDDmjIrt2ohv1tusMlc2is1FM/ePVbV/PB9ei2wUxWqLHwnWlRTB?=
 =?us-ascii?Q?TqgtlhmkyKagZV0yQ+dMzqQP6bQcJdKxaK8XD8RQcvpuLygj15K58mcbmqde?=
 =?us-ascii?Q?sH4W+X5Tf/xjDsf2JKYUV5EpCNpqOv2ehTXSO3IVoDCGN61R+UjN0kFqv1G+?=
 =?us-ascii?Q?ZQeVHxu4UMvC7bYGxzjxHoxH/hzcA2AaXTXcqx0uQv1teMqaQkiBhc6nvS0b?=
 =?us-ascii?Q?VVM15/tUCMe1M8St66wZGKMfSGOiXn5/CVtj7QRg5xlv+XaIJYrYzb0b0hyX?=
 =?us-ascii?Q?AL6iv2Tfpi5X1oLo2geaZb8zecm/MOfUZI89NFh2UndpgTanprEI99tEoVFn?=
 =?us-ascii?Q?BbOf3WuiRZRdMfHVtdwAKDFbJM/90k/vV8VVrtWEf7ZQJP5TlwPByLQxosm1?=
 =?us-ascii?Q?3aMtaJC/u4G0sM6E8yrhcPZn32wHSniQzSlCuyfq+emzmiEiH24VIX/4AODe?=
 =?us-ascii?Q?Otj4vNrX80zOv7H6ezQsnUzsLoEA034Rt9V+QytbFUx4qIBPa2cWpIqXPHxu?=
 =?us-ascii?Q?Sb+E0mDRq2Nu4TrC0DgbMFfsgc1+yhKbbHXe3fWSjHgZIfkIXo6V+RxnnfZw?=
 =?us-ascii?Q?Xorn8VFwm4sAnDDFU2EYeJtAX1+zux+/2SOpwcEYRuSFCgqWVn0SE72E3ghB?=
 =?us-ascii?Q?E8xhudBGQPQObarkbRBd1oxx0x1NWGcmVffDEDur3dQa7mTtxSGRE02eelbz?=
 =?us-ascii?Q?h47gYXurwYP1HyY1Lbi+R5gdbhJEd9Ky5Q04FKexLq8I8MgNkYKk7tYvoBsv?=
 =?us-ascii?Q?SGlFl9QE95p5YsjUTXP9wZxwrJkUyba74+o90gvpFk8WnhmxFUwDehVSv+tJ?=
 =?us-ascii?Q?R6bLguux9DMLhylTT/LjlNsGssHEpgZkL/Z9DbBZQ0u+A5Qu6ecglYNgx4xB?=
 =?us-ascii?Q?FCgFM7iTdATHHNMGD42OqHJQUwqYeHoUfPfkIgxlisD7WoJrOK8GcV82GveH?=
 =?us-ascii?Q?TdNlB5jMDG4F9dlJkU1ON4E1V3OFGLJtwdDkSPNp1ZMJTYAT0Iw40oR1z4N9?=
 =?us-ascii?Q?G9DjZnUQ8+9valBuuaHwf+nMXfO/IMZ53XoFpvrbe3ziTWO61GVbOgUodidL?=
 =?us-ascii?Q?hwuZMmIh0eNTdrn1D/3ZoS7E5qYTtY9lo65du8+6bp+LzjOXroaAveehRZ3u?=
 =?us-ascii?Q?ra8VYPcQ546Fz66qqBNv7/yPGix9fS0n0PlY1IVRf2tgzpK7ygWz5ULxROOd?=
 =?us-ascii?Q?2Cg9vCz8rP3gWxgexKJz7vz7AtnF?=
X-Forefront-Antispam-Report:
	CIP:165.204.84.17;CTRY:US;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:SATLEXMB04.amd.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(13230040)(82310400026)(36860700013)(376014)(7416014)(1800799024);DIR:OUT;SFP:1101;
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 30 Jun 2024 12:34:31.3059
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: cc7a5644-5bfb-4541-e2b2-08dc9900fda0
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=3dd8961f-e488-4e60-8e11-a82d994e183d;Ip=[165.204.84.17];Helo=[SATLEXMB04.amd.com]
X-MS-Exchange-CrossTenant-AuthSource:
	CH1PEPF0000A347.namprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR12MB4039

When dom0 is PVH and a device is passed through to a domU, xl uses
the device's GSI number to do a PIRQ mapping, see
pci_add_dm_done->xc_physdev_map_pirq, but the GSI number is read
from the file /sys/bus/pci/devices/<sbdf>/irq, which confuses IRQ
with GSI: they are in different spaces and not equal, so the mapping
fails.
To solve this issue, use xc_physdev_gsi_from_pcidev to get the real
GSI and then map the PIRQ.

Besides, a PVH dom0 does not have the PIRQ flag and does not do
PHYSDEVOP_map_pirq for each GSI, so the permission-granting call path
pci_add_dm_done->XEN_DOMCTL_irq_permission fails in the function
domain_pirq_to_irq. The old hypercall XEN_DOMCTL_irq_permission also
requires a PIRQ to be passed in, so it is not suitable for a dom0
that has no PIRQs.
To solve this issue, use the new hypercall XEN_DOMCTL_gsi_permission
to grant the permission of the IRQ (translated from the GSI) to the
domU when dom0 has no PIRQs.

Signed-off-by: Jiqian Chen <Jiqian.Chen@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Chen Jiqian <Jiqian.Chen@amd.com>
---
RFC: this depends on the corresponding third patch on the Linux kernel
side being merged first:
https://lore.kernel.org/xen-devel/20240607075109.126277-4-Jiqian.Chen@amd.com/
This patch must not be merged before the kernel-side patch.
---
 tools/include/xenctrl.h       |   5 ++
 tools/libs/ctrl/xc_domain.c   |  15 +++++
 tools/libs/light/libxl_arch.h |   4 ++
 tools/libs/light/libxl_arm.c  |  10 +++
 tools/libs/light/libxl_pci.c  |  17 ++++++
 tools/libs/light/libxl_x86.c  | 111 ++++++++++++++++++++++++++++++++++
 6 files changed, 162 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 3720e22b399a..33810385535e 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1382,6 +1382,11 @@ int xc_domain_irq_permission(xc_interface *xch,
                              uint32_t pirq,
                              bool allow_access);
 
+int xc_domain_gsi_permission(xc_interface *xch,
+                             uint32_t domid,
+                             uint32_t gsi,
+                             bool allow_access);
+
 int xc_domain_iomem_permission(xc_interface *xch,
                                uint32_t domid,
                                unsigned long first_mfn,
diff --git a/tools/libs/ctrl/xc_domain.c b/tools/libs/ctrl/xc_domain.c
index f2d9d14b4d9f..8540e84fda93 100644
--- a/tools/libs/ctrl/xc_domain.c
+++ b/tools/libs/ctrl/xc_domain.c
@@ -1394,6 +1394,21 @@ int xc_domain_irq_permission(xc_interface *xch,
     return do_domctl(xch, &domctl);
 }
 
+int xc_domain_gsi_permission(xc_interface *xch,
+                             uint32_t domid,
+                             uint32_t gsi,
+                             bool allow_access)
+{
+    struct xen_domctl domctl = {
+        .cmd = XEN_DOMCTL_gsi_permission,
+        .domain = domid,
+        .u.gsi_permission.gsi = gsi,
+        .u.gsi_permission.allow_access = allow_access,
+    };
+
+    return do_domctl(xch, &domctl);
+}
+
 int xc_domain_iomem_permission(xc_interface *xch,
                                uint32_t domid,
                                unsigned long first_mfn,
diff --git a/tools/libs/light/libxl_arch.h b/tools/libs/light/libxl_arch.h
index f88f11d6de1d..11b736067951 100644
--- a/tools/libs/light/libxl_arch.h
+++ b/tools/libs/light/libxl_arch.h
@@ -91,6 +91,10 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                                       libxl_domain_config *dst,
                                       const libxl_domain_config *src);
 
+_hidden
+int libxl__arch_hvm_map_gsi(libxl__gc *gc, uint32_t sbdf, uint32_t domid);
+_hidden
+int libxl__arch_hvm_unmap_gsi(libxl__gc *gc, uint32_t sbdf, uint32_t domid);
 #if defined(__i386__) || defined(__x86_64__)
 
 #define LAPIC_BASE_ADDRESS  0xfee00000
diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index a4029e3ac810..d869bbec769e 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -1774,6 +1774,16 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
 {
 }
 
+int libxl__arch_hvm_map_gsi(libxl__gc *gc, uint32_t sbdf, uint32_t domid)
+{
+    return -1;
+}
+
+int libxl__arch_hvm_unmap_gsi(libxl__gc *gc, uint32_t sbdf, uint32_t domid)
+{
+    return -1;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libs/light/libxl_pci.c b/tools/libs/light/libxl_pci.c
index 96cb4da0794e..3d25997921cc 100644
--- a/tools/libs/light/libxl_pci.c
+++ b/tools/libs/light/libxl_pci.c
@@ -17,6 +17,7 @@
 #include "libxl_osdeps.h" /* must come before any other headers */
 
 #include "libxl_internal.h"
+#include "libxl_arch.h"
 
 #define PCI_BDF                "%04x:%02x:%02x.%01x"
 #define PCI_BDF_SHORT          "%02x:%02x.%01x"
@@ -1478,6 +1479,16 @@ static void pci_add_dm_done(libxl__egc *egc,
     fclose(f);
     if (!pci_supp_legacy_irq())
         goto out_no_irq;
+
+    /*
+     * When dom0 is PVH and an x86 GSI is mapped to a PIRQ for a domU,
+     * the GSI should be used to grant IRQ permission.
+     */
+    if (!libxl__arch_hvm_map_gsi(gc, pci_encode_bdf(pci), domid))
+        goto pci_permissive;
+    else
+        LOGED(WARN, domid, "libxl__arch_hvm_map_gsi failed (err=%d)", errno);
+
     sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
                                 pci->bus, pci->dev, pci->func);
     f = fopen(sysfs_path, "r");
@@ -1505,6 +1516,7 @@ static void pci_add_dm_done(libxl__egc *egc,
     }
     fclose(f);
 
+pci_permissive:
     /* Don't restrict writes to the PCI config space from this VM */
     if (pci->permissive) {
         if ( sysfs_write_bdf(gc, SYSFS_PCIBACK_DRIVER"/permissive",
@@ -2229,6 +2241,11 @@ skip_bar:
     if (!pci_supp_legacy_irq())
         goto skip_legacy_irq;
 
+    if (!libxl__arch_hvm_unmap_gsi(gc, pci_encode_bdf(pci), domid))
+        goto skip_legacy_irq;
+    else
+        LOGED(WARN, domid, "libxl__arch_hvm_unmap_gsi failed (err=%d)", errno);
+
     sysfs_path = GCSPRINTF(SYSFS_PCI_DEV"/"PCI_BDF"/irq", pci->domain,
                            pci->bus, pci->dev, pci->func);
 
diff --git a/tools/libs/light/libxl_x86.c b/tools/libs/light/libxl_x86.c
index 60643d6f5376..e7756d323cb6 100644
--- a/tools/libs/light/libxl_x86.c
+++ b/tools/libs/light/libxl_x86.c
@@ -879,6 +879,117 @@ void libxl__arch_update_domain_config(libxl__gc *gc,
                                  libxl_defbool_val(src->b_info.u.hvm.pirq));
 }
 
+struct pcidev_map_pirq {
+    uint32_t sbdf;
+    uint32_t pirq;
+    XEN_LIST_ENTRY(struct pcidev_map_pirq) entry;
+};
+
+static pthread_mutex_t pcidev_pirq_mutex = PTHREAD_MUTEX_INITIALIZER;
+static XEN_LIST_HEAD(, struct pcidev_map_pirq) pcidev_pirq_list =
+    XEN_LIST_HEAD_INITIALIZER(pcidev_pirq_list);
+
+int libxl__arch_hvm_map_gsi(libxl__gc *gc, uint32_t sbdf, uint32_t domid)
+{
+    int pirq = -1, gsi, r;
+    xc_domaininfo_t info;
+    struct pcidev_map_pirq *pcidev_pirq;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    r = xc_domain_getinfo_single(ctx->xch, LIBXL_TOOLSTACK_DOMID, &info);
+    if (r < 0) {
+        LOGED(ERROR, domid, "getdomaininfo failed (error=%d)", errno);
+        return r;
+    }
+    if ((info.flags & XEN_DOMINF_hvm_guest) &&
+        !(info.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ)) {
+        gsi = xc_physdev_gsi_from_pcidev(ctx->xch, sbdf);
+        if (gsi < 0) {
+            return ERROR_FAIL;
+        }
+        r = xc_physdev_map_pirq(ctx->xch, domid, gsi, &pirq);
+        if (r < 0) {
+            LOGED(ERROR, domid, "xc_physdev_map_pirq gsi=%d (error=%d)",
+                  gsi, errno);
+            return r;
+        }
+        r = xc_domain_gsi_permission(ctx->xch, domid, gsi, 1);
+        if (r < 0) {
+            LOGED(ERROR, domid, "xc_domain_gsi_permission gsi=%d (error=%d)",
+                  gsi, errno);
+            return r;
+        }
+    } else {
+        return ERROR_FAIL;
+    }
+
+    /* Save the pirq for later unmapping */
+    pcidev_pirq = malloc(sizeof(struct pcidev_map_pirq));
+    if (!pcidev_pirq) {
+        LOGED(ERROR, domid, "no memory for saving pirq of pcidev info");
+        return ERROR_NOMEM;
+    }
+    pcidev_pirq->sbdf = sbdf;
+    pcidev_pirq->pirq = pirq;
+
+    assert(!pthread_mutex_lock(&pcidev_pirq_mutex));
+    XEN_LIST_INSERT_HEAD(&pcidev_pirq_list, pcidev_pirq, entry);
+    assert(!pthread_mutex_unlock(&pcidev_pirq_mutex));
+
+    return 0;
+}
+
+int libxl__arch_hvm_unmap_gsi(libxl__gc *gc, uint32_t sbdf, uint32_t domid)
+{
+    int pirq = -1, gsi, r;
+    xc_domaininfo_t info;
+    struct pcidev_map_pirq *pcidev_pirq;
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+
+    r = xc_domain_getinfo_single(ctx->xch, LIBXL_TOOLSTACK_DOMID, &info);
+    if (r < 0) {
+        LOGED(ERROR, domid, "getdomaininfo failed (error=%d)", errno);
+        return r;
+    }
+    if ((info.flags & XEN_DOMINF_hvm_guest) &&
+        !(info.arch_config.emulation_flags & XEN_X86_EMU_USE_PIRQ)) {
+        gsi = xc_physdev_gsi_from_pcidev(ctx->xch, sbdf);
+        if (gsi < 0) {
+            return ERROR_FAIL;
+        }
+        assert(!pthread_mutex_lock(&pcidev_pirq_mutex));
+        XEN_LIST_FOREACH(pcidev_pirq, &pcidev_pirq_list, entry) {
+            if (pcidev_pirq->sbdf == sbdf) {
+                pirq = pcidev_pirq->pirq;
+                XEN_LIST_REMOVE(pcidev_pirq, entry);
+                free(pcidev_pirq);
+                break;
+            }
+        }
+        assert(!pthread_mutex_unlock(&pcidev_pirq_mutex));
+        if (pirq < 0) {
+            /* pirq has already been unmapped, so return directly */
+            return 0;
+        }
+        r = xc_physdev_unmap_pirq(ctx->xch, domid, pirq);
+        if (r < 0) {
+            LOGED(ERROR, domid, "xc_physdev_unmap_pirq pirq=%d (error=%d)",
+                  pirq, errno);
+            return r;
+        }
+        r = xc_domain_gsi_permission(ctx->xch, domid, gsi, 0);
+        if (r < 0) {
+            LOGED(ERROR, domid, "xc_domain_gsi_permission gsi=%d (error=%d)",
+                  gsi, errno);
+            return r;
+        }
+    } else {
+        return ERROR_FAIL;
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.34.1



From xen-devel-bounces@lists.xenproject.org Sun Jun 30 12:59:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 12:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751170.1159056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNu8u-0007bs-TK; Sun, 30 Jun 2024 12:58:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751170.1159056; Sun, 30 Jun 2024 12:58:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNu8u-0007bl-Pk; Sun, 30 Jun 2024 12:58:56 +0000
Received: by outflank-mailman (input) for mailman id 751170;
 Sun, 30 Jun 2024 12:58:55 +0000
Received: from se1-gles-sth1-in.inumbo.com ([159.253.27.254]
 helo=se1-gles-sth1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l6Lh=OA=nxp.com=peng.fan@srs-se1.protection.inumbo.net>)
 id 1sNu8t-0007bf-IM
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 12:58:55 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 (mail-vi1eur05on20601.outbound.protection.outlook.com
 [2a01:111:f403:2613::601])
 by se1-gles-sth1.inumbo.com (Halon) with ESMTPS
 id 80775327-36e0-11ef-90a3-e314d9c70b13;
 Sun, 30 Jun 2024 14:58:53 +0200 (CEST)
Received: from AM6PR04MB5941.eurprd04.prod.outlook.com (2603:10a6:20b:9e::16)
 by DBAPR04MB7256.eurprd04.prod.outlook.com (2603:10a6:10:1a3::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 12:58:50 +0000
Received: from AM6PR04MB5941.eurprd04.prod.outlook.com
 ([fe80::9f4e:b695:f5f0:5256]) by AM6PR04MB5941.eurprd04.prod.outlook.com
 ([fe80::9f4e:b695:f5f0:5256%4]) with mapi id 15.20.7719.028; Sun, 30 Jun 2024
 12:58:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80775327-36e0-11ef-90a3-e314d9c70b13
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=K67OslUj1PHceXHb3iYsgDi0KI9bswlwloV6cY5oqBqtmeHsn4KQzM2d8Zy09D85lGaFHOwBVzEjymGHEIfJJ73PPkYlnae8tJaw9h9rwf+1n8lD52sopisOFk021d9GaQU5UP43tckZr8mKnxBLuR6NoNMrd7AT2SmpzdajqesQtVlvWbz/dmW6fJk7ZZ9K2z+5A97GZJQYDFosq7AvLxrvC/VcrNllrj3xz+Fmai30yjjfvEK+2TEo+6vHIaoMAD3jDAmSlD3K4iHockIL+050C5xegRU7Z7egrgs/m1x/Z6SCvtMaX8RbioLji2XzEhulWVcsLeLoNC5ppuLjpw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=LevxgQALKoyX7I4IsIHT/q6K6uuqfAWoIbxrhrubsC0=;
 b=S0BcRjx9wWMloD1nxVRrInKYQdstx4wkYCW9OSkJnlVZfTNtkhVHx/grONpc8ctEzbKYuAJfrylPyppzgmkLsoHOrXsfyblhPJx9gSX7E8diGTWU8OASCFYpBjQCq4uCEmGeruBx/+xP7RkGHcRq5d5ums5XfiqfLhxRdydCIu0mx5D2kCfVEat2vwTWRfZAA/A37OZs9xa66ikvMDwUBhGapSxxbp6Vom5zBLLMBb3xMd7D0U+8SJDUHUJdrTGfoR792ClRkauUZKZv6b9VuFPxdq60+LAPN1GWoYocxloSIyCDSoOFMG2eYSv/+usk6exD56Oq7DauTjXUgAGDWg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=LevxgQALKoyX7I4IsIHT/q6K6uuqfAWoIbxrhrubsC0=;
 b=b9Vn83CleT75Qy2h5tIcfuLI9w8Pm10orWuIwzR017NEPt+Z7bjlwuwyuNY9+zwQV7UMqnwWzzs3wXquGGd0iDG1bhqIXl7QxjiT+hlSILqXSF17PO7eU0FoeXydiQftm6CaQRJMAAuPb3mkOoViUMJDkM5N+EgGB1tIkmgZDNQ=
From: Peng Fan <peng.fan@nxp.com>
To: "stefano.stabellini@amd.com" <stefano.stabellini@amd.com>, Viresh Kumar
	<viresh.kumar@linaro.org>, Julien Grall <julien@xen.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Question: xen + vhost user
Thread-Topic: Question: xen + vhost user
Thread-Index: AdrK7E1vVcASzE7XRqm0wkYJXBBDZg==
Date: Sun, 30 Jun 2024 12:58:48 +0000
Message-ID:
 <AM6PR04MB59412237BA10A23EB79D5C0E88D22@AM6PR04MB5941.eurprd04.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
authentication-results: dkim=none (message not signed)
 header.d=none;dmarc=none action=none header.from=nxp.com;
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: AM6PR04MB5941:EE_|DBAPR04MB7256:EE_
x-ms-office365-filtering-correlation-id: fd5c4f59-6209-4a24-9739-08dc99046260
x-ms-exchange-senderadcheck: 1
x-ms-exchange-antispam-relay: 0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM6PR04MB5941.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fd5c4f59-6209-4a24-9739-08dc99046260
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Jun 2024 12:58:48.8735
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: /icqyzxgA3aDq9mrRSRsfrQStKH6IjyLpQpy5tEht+TTd/c6BOTm6NQoPy4FKijcOP5zyqXN2CO4YVvr02LZFw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR04MB7256

Hi All,

I am trying to enable vhost-user input with the Xen hypervisor on i.MX95,
using the QEMU vhost-user-input backend, but I hit an "Invalid vring_addr
message" error. My Xen domU cfg contains:

'-chardev', 'socket,path=/tmp/input.sock,id=mouse0',
'-device', 'vhost-user-input-pci,chardev=mouse0',

Does anyone know what is missing?
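For comparison (this is my assumption about the usual KVM setup, not something
taken from the Xen configuration above): on KVM, vhost-user devices require
guest RAM to be backed by a shareable memory object so the backend can mmap it.
A direct QEMU invocation would look roughly like:

```shell
# Hypothetical KVM-side invocation for comparison; the socket path and
# memory size are illustrative. share=on is what allows the vhost-user
# backend to mmap guest RAM via VHOST_USER_SET_MEM_TABLE.
qemu-system-aarch64 \
  -machine virt,memory-backend=mem0 \
  -object memory-backend-memfd,id=mem0,size=1G,share=on \
  -chardev socket,path=/tmp/input.sock,id=mouse0 \
  -device vhost-user-input-pci,chardev=mouse0
```

Under Xen, QEMU does not own guest RAM the same way, which may be why the
memory-table step behaves differently here.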

Partial error log:
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_ADDR (9)
Flags:   0x1
Size:    40
vhost_vring_addr:
    index:  0
    flags:  0
    desc_user_addr:   0x0000ffff889b0000
    used_user_addr:   0x0000ffff889b04c0
    avail_user_addr:  0x0000ffff889b0400
    log_guest_addr:   0x00000000444714c0
Setting virtq addresses:
    vring_desc  at (nil)
    vring_used  at (nil)
    vring_avail at (nil)

** (vhost-user-input:1816): CRITICAL **: 07:20:46.077: Invalid vring_addr message

Thanks,
Peng.

The full vhost user debug log:
./vhost-user-input --socket-path=/tmp/input.sock --evdev-path=/dev/input/event1
================ Vhost user message ================
Request: VHOST_USER_GET_FEATURES (1)
Flags:   0x1
Size:    0
Sending back to guest u64: 0x0000000175000000
================ Vhost user message ================
Request: VHOST_USER_GET_PROTOCOL_FEATURES (15)
Flags:   0x1
Size:    0
================ Vhost user message ================
Request: VHOST_USER_SET_PROTOCOL_FEATURES (16)
Flags:   0x1
Size:    8
u64: 0x0000000000008e2b
================ Vhost user message ================
Request: VHOST_USER_GET_QUEUE_NUM (17)
Flags:   0x1
Size:    0
================ Vhost user message ================
Request: VHOST_USER_GET_MAX_MEM_SLOTS (36)
Flags:   0x1
Size:    0
u64: 0x0000000000000020
================ Vhost user message ================
Request: VHOST_USER_SET_BACKEND_REQ_FD (21)
Flags:   0x9
Size:    0
Fds: 6
Got backend_fd: 6
================ Vhost user message ================
Request: VHOST_USER_SET_OWNER (3)
Flags:   0x1
Size:    0
================ Vhost user message ================
Request: VHOST_USER_GET_FEATURES (1)
Flags:   0x1
Size:    0
Sending back to guest u64: 0x0000000175000000
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_CALL (13)
Flags:   0x1
Size:    8
Fds: 7
u64: 0x0000000000000000
Got call_fd: 7 for vq: 0
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_ERR (14)
Flags:   0x1
Size:    8
Fds: 8
u64: 0x0000000000000000
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_CALL (13)
Flags:   0x1
Size:    8
Fds: 9
u64: 0x0000000000000001
Got call_fd: 9 for vq: 1
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_ERR (14)
Flags:   0x1
Size:    8
Fds: 10
u64: 0x0000000000000001
(XEN) d2v0 Unhandled SMC/HVC: 0x84000050
(XEN) d2v0 Unhandled SMC/HVC: 0x8600ff01
(XEN) d2v0: vGICD: RAZ on reserved register offset 0x00000c
(XEN) d2v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
(XEN) d2v0: vGICR: SGI: unhandled word write 0x000000ffffffff to ICACTIVER0
================ Vhost user message ================
Request: VHOST_USER_SET_CONFIG (25)
Flags:   0x9
Size:    148
================ Vhost user message ================
Request: VHOST_USER_GET_CONFIG (24)
Flags:   0x1
Size:    148
[... many further identical VHOST_USER_GET_CONFIG (24) / VHOST_USER_SET_CONFIG (25) messages trimmed, all with the same Flags and Size ...]
================ Vhost user message ================
Request: VHOST_USER_SET_FEATURES (2)
Flags:   0x1
Size:    8
u64: 0x0000010170000000
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_NUM (8)
Flags:   0x1
Size:    8
State.index: 0
State.num:   64
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_BASE (10)
Flags:   0x1
Size:    8
State.index: 0
State.num:   0
================ Vhost user message ================
Request: VHOST_USER_SET_VRING_ADDR (9)
Flags:   0x1
Size:    40
vhost_vring_addr:
    index:  0
    flags:  0
    desc_user_addr:   0x0000ffff889b0000
    used_user_addr:   0x0000ffff889b04c0
    avail_user_addr:  0x0000ffff889b0400
    log_guest_addr:   0x00000000444714c0
Setting virtq addresses:
    vring_desc  at (nil)
    vring_used  at (nil)
    vring_avail at (nil)

** (vhost-user-input:1816): CRITICAL **: 07:20:46.077: Invalid vring_addr message

root@imx95evk:~#


From xen-devel-bounces@lists.xenproject.org Sun Jun 30 14:03:25 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 14:03:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751183.1159065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNv95-0000Wr-DL; Sun, 30 Jun 2024 14:03:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751183.1159065; Sun, 30 Jun 2024 14:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNv95-0000Wk-Aj; Sun, 30 Jun 2024 14:03:11 +0000
Received: by outflank-mailman (input) for mailman id 751183;
 Sun, 30 Jun 2024 14:03:10 +0000
Received: from se1-gles-flk1-in.inumbo.com ([94.247.172.50]
 helo=se1-gles-flk1.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0DVU=OA=outlook.com=mhklinux@srs-se1.protection.inumbo.net>)
 id 1sNv94-0000We-EU
 for xen-devel@lists.xenproject.org; Sun, 30 Jun 2024 14:03:10 +0000
Received: from NAM12-BN8-obe.outbound.protection.outlook.com
 (mail-bn8nam12olkn20813.outbound.protection.outlook.com
 [2a01:111:f403:2c18::813])
 by se1-gles-flk1.inumbo.com (Halon) with ESMTPS
 id 7993ad0e-36e9-11ef-b4bb-af5377834399;
 Sun, 30 Jun 2024 16:03:07 +0200 (CEST)
Received: from SN6PR02MB4157.namprd02.prod.outlook.com (2603:10b6:805:33::23)
 by BN0PR02MB8000.namprd02.prod.outlook.com (2603:10b6:408:16d::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.7719.29; Sun, 30 Jun
 2024 14:02:52 +0000
Received: from SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df]) by SN6PR02MB4157.namprd02.prod.outlook.com
 ([fe80::cedd:1e64:8f61:b9df%2]) with mapi id 15.20.7719.028; Sun, 30 Jun 2024
 14:02:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7993ad0e-36e9-11ef-b4bb-af5377834399
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GjolRelm4fBf2AJ1viX42J42cGZwBz4dJ7RKBo1ufunzq87nLc+qoqDRw9uQnkwPp42rKimrEf8lvBb+JAdMTXRC5Rjm1lGFkfC2DrfXtbASA9quh8nAzogEbtDkRREJlf6Xfyt79mNms0KLE9kalqrDVvhCWeZLdzDJhCbB9gjIzay+zM5motSpSip7I3sM7mbWV72AxIO3wVSMSUtK63v0L8NnQsL/IDZc7W3XXMc5kaA7UYoVvqXwGQTwubk3ewnL8fBfj+z9LUdzMXAaSQaXVgtg0UR/B/QbMkLHUbR6VOyQuhnN4A4rlYVDHv9Yub6wZI1PNMnVOYT64wkelQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;
 bh=8lTk4hRxrZhQ/fhyVY54JOkDpKaShrolCHbrWuRPTV0=;
 b=PGnnSPbu9rczOnneGT1dY6Xz+iTjim8eTRfnvcpb4BLExuQS99cX0EpnjYzUzGg09iK3ESGptUVpfELAheMMYtaytLiWueKmXLT2bVwGPEbH9Nir3AgPWGnz7OFFojBShLkZwmKs3o7vviDAZLjnqcGEV2I6BlQCDfo8Z5sYPgNN8l7dGLCYYLq6h4IGNsE2+TDNzq4UKvZw25wyyxOGvKxL3DN0M6KZrH/P61semK6fKIy+QbWCfjoMLInsAahdQ2MYtCdQ+W0RDaufkjfKnK7l1Vht37JCpEIZcZZ2B5aHQ0AtVEt2POQIQxzpwJTcVfcHfPWClBAa49zWI04lJw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=none; dmarc=none;
 dkim=none; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=outlook.com;
 s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8lTk4hRxrZhQ/fhyVY54JOkDpKaShrolCHbrWuRPTV0=;
 b=fHPn0iE0Y359ZxUyYBYvs2EzC4l5uO/juVsAb5YsbQTkMcWfHxfMhDT6O9tLBxDDoq2KJvxyeP9bAED9bFv+ni9hLbYKE8MrXPcazPs482X4Z3bgI0RHA2EWUpDdmXYWolK1AdFqk62xGaz2xycSheTBQFv7DYviLAHcaTV51mBE7F1ywRiM5tYpvaNURUF81YiwxeRHj010EvIc+3g/i0YjUsb/XYn7AjyOfKRdapN0e4TDgAv4QdUnjjCsAFd42esMvC88BJJ+WOXMfTzYFP4xfQqkO/bB9uqcLpELtXZlMSD3kU0gXw5SkvSGbtcw5eA13dsW2Nxfrb5IEM9szA==
From: Michael Kelley <mhklinux@outlook.com>
To: "hch@lst.de" <hch@lst.de>
CC: =?iso-8859-2?Q?Petr_Tesa=F8=EDk?= <petr@tesarici.cz>,
	"robin.murphy@arm.com" <robin.murphy@arm.com>, "joro@8bytes.org"
	<joro@8bytes.org>, "will@kernel.org" <will@kernel.org>, "jgross@suse.com"
	<jgross@suse.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"oleksandr_tyshchenko@epam.com" <oleksandr_tyshchenko@epam.com>,
	"m.szyprowski@samsung.com" <m.szyprowski@samsung.com>,
	"iommu@lists.linux.dev" <iommu@lists.linux.dev>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Topic: [RFC 1/1] swiotlb: Reduce calls to swiotlb_find_pool()
Thread-Index:
 AQHauIjoJBWSrCyO6UWzcncSceBiMLHa1Z9wgABomICAAA3OAIAAhiSggAAJLYCAAAn5IIAA6tuAgAAdhQCAAhqHMIAA6wAAgACDZAA=
Date: Sun, 30 Jun 2024 14:02:52 +0000
Message-ID:
 <SN6PR02MB415735CA2F5487AE3FD74FFED4D22@SN6PR02MB4157.namprd02.prod.outlook.com>
References: <20240607031421.182589-1-mhklinux@outlook.com>
 <SN6PR02MB41577686D72E206DB0084E90D4D62@SN6PR02MB4157.namprd02.prod.outlook.com>
 <20240627060251.GA15590@lst.de>
 <20240627085216.556744c1@meshulam.tesarici.cz>
 <SN6PR02MB4157E61B49C8435E38AC968DD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
 <20240627152513.GA23497@lst.de>
 <SN6PR02MB4157D9B1A64FF78461D6A7EDD4D72@SN6PR02MB4157.namprd02.prod.outlook.com>
 <20240628060129.GA26206@lst.de>
 <20240628094708.3a454619@meshulam.tesarici.cz>
 <SN6PR02MB415781789CBD6597142BEC68D4D12@SN6PR02MB4157.namprd02.prod.outlook.com>
 <20240630055542.GA5379@lst.de>
In-Reply-To: <20240630055542.GA5379@lst.de>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-tmn: [hFWwzBy8j2S1RGuEZXfH7YE/JsHE6jnB]
x-ms-publictraffictype: Email
x-ms-traffictypediagnostic: SN6PR02MB4157:EE_|BN0PR02MB8000:EE_
x-ms-office365-filtering-correlation-id: e5c7c2f5-31d9-4bc2-0a6c-08dc990d5538
x-microsoft-antispam:
 BCL:0;ARA:14566002|461199028|8060799006|102099032|3412199025|440099028;
x-microsoft-antispam-message-info:
 rUNDe6U7okKLMxQ3Y38uxbhxdVTELPNqzikkoDrT49IlbtYsIWeN79BkLYo9V2R5I2GBHHkBA2NOFEnQCmJuSD6Xim4k9u2vASWrJyl4mxupSc7aNhRD9XZxN5UcVfgYVNpjZ+ws7veE7Q08616fqANZiihXFzWb6b51bczbfIHohMo9g4/GgmWDZHG4dn6RvxdkEz7PxYht715cAql++MfYTtTYI9rA3SCihm+I5pazbsLIe+B8lkyKjPmVJrEXywlBSDPygBHtOITcHLzhUma27gL/cB4C8DKTd0hILtUMzE9Jv/fQRvm+WZrj2JRMJbJhxJDfoQpqfdjdLY2jaVTe8ZhPjqk/TpRBMtDi0DqK/oM5LT7J/a6x1LjdCY7/mzbMxNw+apQRP1ifxmv9+g7j5apSajD6kn/7eNBqAtaMY7uulFEwlSDm178vdKplcWOA4Una9ks69+C42oGA02kz2aNnW/ubw42Vh34mIhVJ6/rQ8LU+FyfyHvtHR5Mpeg5JKZ8YR0WWdmTiJaj+kdDEuQh/noMnGRefS0m5Nte7gf6RowWCvXUojWJy4kXNWCa/OD4/gGWZRu6c7iFx4dZA0oUwCp054VBwJtbRM6iAjzuHw4ZIAeffzrtJyVqb
x-ms-exchange-antispam-messagedata-chunkcount: 1
x-ms-exchange-antispam-messagedata-0:
 =?iso-8859-2?Q?d2WOrTpAaya0ZFQLsUGOJCY5fQ2CgHXBhKPOipzluqhENuIbKs6MRxz8R7?=
 =?iso-8859-2?Q?ANgRRp5GQIikIuD2vy3El1hctKGYnxUbIPpR3gjuVXIzdKlZR6Q5fOUcUI?=
 =?iso-8859-2?Q?BSCiYP9OXqtBYIznQsLRjBfKYO8IxjLW0XdFPSjBztxCJpLYErVCi2G0o8?=
 =?iso-8859-2?Q?sy+rBwlq+XBnNkB1b9Lcmqr8Snp4ifz4NUFItShAkyt5Y72rgJ3KvEH3Z2?=
 =?iso-8859-2?Q?L59TQzGBnicReNC2MhUsr6Rp4vG42v0NhYV8tHX4bzVEePezH/hr9x20OB?=
 =?iso-8859-2?Q?ilEcqkdd3rIpD5e1iZRdSUEUS3xLdzj4OM1B38is6C7/dhsms+Qv1eNArr?=
 =?iso-8859-2?Q?xyTCxXz01uuxsEdjnf4tz5nHgM5fAxOKJg38mrueLa8f6V5XvGEwVgg7sG?=
 =?iso-8859-2?Q?sTackw65xPYgLKrKkoFwe4/YJC6dikXC194AxlhP54uT0pNDWMa1hTfLE/?=
 =?iso-8859-2?Q?nGAcb5z9qrO7ZgZnyb+FzQB4Yp6tS2SEWe0NYl8jBBNvQcTTE/gZprKb91?=
 =?iso-8859-2?Q?Hjg0c6rNoWuw77bUhV90t8Ga2LA2+Y4SFsmyQeyhXSeNrHdAAlocO4+gIu?=
 =?iso-8859-2?Q?f+9oJbuTmRdtwY+j6oALXOOFTSYa4QfK3A4MhkqCBZ/xtq/bBORzlN0zvW?=
 =?iso-8859-2?Q?BVBdGqm4NjxQtK16AC2MtRxeeM7W3oY5q/XMAHYk7LqX6RdeDr4OktdeXU?=
 =?iso-8859-2?Q?vOF3Mp93Futd+pYR4LgmZvieNZNA2g/JPMyCrwhK1UdpUtmWuP5NJbKzh8?=
 =?iso-8859-2?Q?tGb8xAeZ5nlboIEWmXSRYGeWhZz2dCkHzG2QIKW0I5kDz0AGps0chU3ugI?=
 =?iso-8859-2?Q?HW4MJHY3wYMAVz+SbyJvyTkLN99z+osthw4qWRgNUp2GkIHQRsFPijrPa1?=
 =?iso-8859-2?Q?xhptO40IuPkzX2wrDqtYfVTuPYTGJ9D1VF+VZiv+6FONFsYhVYwl8bIg/3?=
 =?iso-8859-2?Q?WzNuzlMpHEyvVTkQNDwQkbolhphhKlrLX4USELfLTH8cxF51Bnc+S7PfsB?=
 =?iso-8859-2?Q?uzhD33doAaAPFbxbYRDSJiGHhy3vwVJXjZBtoNwn1HW6L6fX8ROkga4TLA?=
 =?iso-8859-2?Q?yimmnbJ52NtcjJ9nw2AtBUofy2qKEVjvmdtWqOZm/OGanhbIRCywMVn5W2?=
 =?iso-8859-2?Q?T7DlvfwwdY5l0f2GQA5y7g33jvI1cnzpDQpN8fdqXmKfKaxv5U/8K/m3Ui?=
 =?iso-8859-2?Q?A4eWRT/w/ledJAiSBWkL1tdbObKgq4d02/BRkpz4PsCYOVqFppGG4oogIi?=
 =?iso-8859-2?Q?RLaVy6b203LS35BDwM5oOH8JUL7Dke8ETmUC/0IMw=3D?=
Content-Type: text/plain; charset="iso-8859-2"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SN6PR02MB4157.namprd02.prod.outlook.com
X-MS-Exchange-CrossTenant-RMS-PersistedConsumerOrg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-Network-Message-Id: e5c7c2f5-31d9-4bc2-0a6c-08dc990d5538
X-MS-Exchange-CrossTenant-rms-persistedconsumerorg: 00000000-0000-0000-0000-000000000000
X-MS-Exchange-CrossTenant-originalarrivaltime: 30 Jun 2024 14:02:52.2910
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 84df9e7f-e9f6-40af-b435-aaaaaaaaaaaa
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN0PR02MB8000

From: hch@lst.de <hch@lst.de> Sent: Saturday, June 29, 2024 10:56 PM
>
> On Sat, Jun 29, 2024 at 03:55:58PM +0000, Michael Kelley wrote:
> > Unless there is further discussion on this point, I'll just keep the original
> > "is_swiotlb_buffer()" in v2.
>
> That is the wrong name for something that returns the pool as pointed
> out before.

OK. Since any new name could cause confusion with the existing
swiotlb_find_pool(), here's my proposal:

1) Rename is_swiotlb_buffer() to swiotlb_find_pool(), since it
now returns a pool.  A NULL return value indicates that the
paddr is not an swiotlb buffer.

2) Similarly, rename is_xen_swiotlb_buffer() to
xen_swiotlb_find_pool().

3) The existing swiotlb_find_pool() has the same function signature,
but it is used only where the paddr is known to be an swiotlb buffer
and hence always succeeds. Rename it to __swiotlb_find_pool() as
the "internal" version of swiotlb_find_pool().

4) Do you still want is_swiotlb_buffer() as a trivial wrapper around
the new swiotlb_find_pool(), for use solely in dma_direct_need_sync()
where only a Boolean is needed and not the pool?
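To make the proposal concrete, here is a minimal userspace sketch of the
naming scheme in points 1 through 4. The struct layout, the single static
pool, and the range check are illustrative stand-ins only -- they are not
the kernel's actual io_tlb_pool representation or lookup logic:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

/* Hypothetical stand-in for the kernel's struct io_tlb_pool. */
struct io_tlb_pool {
	phys_addr_t start;
	phys_addr_t end;
};

/* One static pool stands in for the real pool list. */
static struct io_tlb_pool example_pool = { 0x1000, 0x2000 };

/* Point 3: the "internal" lookup, used only where paddr is already
 * known to be a swiotlb buffer, so it always succeeds. */
static struct io_tlb_pool *__swiotlb_find_pool(phys_addr_t paddr)
{
	(void)paddr;		/* real code would walk the pool list */
	return &example_pool;
}

/* Point 1: renamed from is_swiotlb_buffer(); returns the pool, or
 * NULL when paddr is not a swiotlb buffer. */
static struct io_tlb_pool *swiotlb_find_pool(phys_addr_t paddr)
{
	if (paddr >= example_pool.start && paddr < example_pool.end)
		return __swiotlb_find_pool(paddr);
	return NULL;
}

/* Point 4: trivial Boolean wrapper for callers such as
 * dma_direct_need_sync() that only need yes/no, not the pool. */
static bool is_swiotlb_buffer(phys_addr_t paddr)
{
	return swiotlb_find_pool(paddr) != NULL;
}
```

The point of the split is that callers which go on to use the pool call
swiotlb_find_pool() once and test its return value for NULL, instead of
calling is_swiotlb_buffer() and then looking the pool up a second time.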

Thanks,

Michael


From xen-devel-bounces@lists.xenproject.org Sun Jun 30 15:08:04 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 15:08:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751193.1159075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNw9Z-0008Im-R0; Sun, 30 Jun 2024 15:07:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751193.1159075; Sun, 30 Jun 2024 15:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sNw9Z-0008If-OQ; Sun, 30 Jun 2024 15:07:45 +0000
Received: by outflank-mailman (input) for mailman id 751193;
 Sun, 30 Jun 2024 15:07:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNw9Y-0008IV-N4; Sun, 30 Jun 2024 15:07:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNw9Y-0005Ri-KL; Sun, 30 Jun 2024 15:07:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sNw9Y-000486-7Y; Sun, 30 Jun 2024 15:07:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sNw9Y-00039m-75; Sun, 30 Jun 2024 15:07:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tieOuSKE5+ZMQsbUQ+0We7174hHU76LiI11TcHjtVY4=; b=p3MDdC5n5ib61/jYY0+DUpj4pi
	VirITPc90RT2T1Q3ngVoJPARr4+Ixg472wNN/7rxLLtitRuIg4l7fghSZAcfMWgiOpeq2zVEO7+pN
	1373cZ98b6Hl+m/hePfNMQHm2+2Pe04zFBZFCnSWsPZFWjYozVbg73+j37Q8233gpkwM=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186594-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186594: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-armhf-armhf-examine:reboot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8282d5af7be82100c5460d093e9774140a26b96a
X-Osstest-Versions-That:
    linux=de0a9f4486337d0eabacc23bd67ff73146eacdc0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Jun 2024 15:07:44 +0000

flight 186594 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186594/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 186562

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 186586 pass in 186594
 test-armhf-armhf-examine      8 reboot           fail in 186586 pass in 186594
 test-armhf-armhf-xl-rtds      8 xen-boot         fail in 186586 pass in 186594
 test-armhf-armhf-xl           8 xen-boot                   fail pass in 186586

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 186586 blocked in 186562
 test-armhf-armhf-xl         15 migrate-support-check fail in 186586 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 186586 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 186586 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186562
 test-armhf-armhf-libvirt      8 xen-boot                     fail  like 186562
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186562
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-qcow2    14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-qcow2    15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8282d5af7be82100c5460d093e9774140a26b96a
baseline version:
 linux                de0a9f4486337d0eabacc23bd67ff73146eacdc0

Last test of basis   186562  2024-06-29 00:44:01 Z    1 days
Failing since        186578  2024-06-29 16:40:32 Z    0 days    3 attempts
Testing same since   186586  2024-06-30 00:11:59 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Agathe Boutmy <agathe@boutmy.com>
  Andi Shyti <andi.shyti@kernel.org>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Chandan Babu R <chandanbabu@kernel.org>
  Chen Ni <nichen@iscas.ac.cn>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hans Hu <HansHu-oc@zhaoxin.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Kamal Dasu <kamal.dasu@broadcom.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    pass    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 pass    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 701 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun Jun 30 20:58:06 2024
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 Jun 2024 20:58:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.751208.1159086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sO1cB-0003f9-IV; Sun, 30 Jun 2024 20:57:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 751208.1159086; Sun, 30 Jun 2024 20:57:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1sO1cB-0003f2-Fi; Sun, 30 Jun 2024 20:57:39 +0000
Received: by outflank-mailman (input) for mailman id 751208;
 Sun, 30 Jun 2024 20:57:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sO1cA-0003es-DT; Sun, 30 Jun 2024 20:57:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sO1cA-0004dh-8O; Sun, 30 Jun 2024 20:57:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1sO1c9-0007ZK-Vg; Sun, 30 Jun 2024 20:57:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1sO1c9-0006FG-V7; Sun, 30 Jun 2024 20:57:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Mu+SvE2bimLSPVw1pg/DBUA4iybJH4aFRe2awl6WNbQ=; b=ILtuSezSoLsRPhle1iXFtI0bI0
	BcRMrj97lUdKjynRwJykOMMiInCFolSNBhgHs+NzWA5+5rQmDupWeph7QnO8jkAujE81rdUA7rMg/
	2F5Okp8ix3hNgW9cShWk9a6E2dxPvXZ5fK3QHpHqglEN5V2MyuVjtSJLkx7fU65eDaPc=;
To: xen-devel@lists.xenproject.org
Message-ID: <osstest-186600-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 186600: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-armhf-armhf-xl-multivcpu:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-qcow2:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt-vhd:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-qcow2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:xen-boot:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qcow2:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8282d5af7be82100c5460d093e9774140a26b96a
X-Osstest-Versions-That:
    linux=de0a9f4486337d0eabacc23bd67ff73146eacdc0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 Jun 2024 20:57:37 +0000

flight 186600 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/186600/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-multivcpu  8 xen-boot                fail REGR. vs. 186562

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot         fail in 186594 pass in 186600
 test-armhf-armhf-xl-qcow2     8 xen-boot                   fail pass in 186594
 test-armhf-armhf-libvirt-vhd  8 xen-boot                   fail pass in 186594

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt   16 saverestore-support-check fail blocked in 186562
 test-armhf-armhf-libvirt      8 xen-boot            fail in 186594 like 186562
 test-armhf-armhf-xl-credit2 15 migrate-support-check fail in 186594 never pass
 test-armhf-armhf-xl-credit2 16 saverestore-support-check fail in 186594 never pass
 test-armhf-armhf-libvirt-vhd 14 migrate-support-check fail in 186594 never pass
 test-armhf-armhf-libvirt-vhd 15 saverestore-support-check fail in 186594 never pass
 test-armhf-armhf-xl-qcow2   14 migrate-support-check fail in 186594 never pass
 test-armhf-armhf-xl-qcow2 15 saverestore-support-check fail in 186594 never pass
 test-armhf-armhf-xl-credit2   8 xen-boot                     fail  like 186562
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 186562
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 186562
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qcow2 14 migrate-support-check        fail never pass
 test-amd64-amd64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-raw 14 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-raw 15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-vhd      14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-raw      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-raw      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8282d5af7be82100c5460d093e9774140a26b96a
baseline version:
 linux                de0a9f4486337d0eabacc23bd67ff73146eacdc0

Last test of basis   186562  2024-06-29 00:44:01 Z    1 days
Failing since        186578  2024-06-29 16:40:32 Z    1 days    4 attempts
Testing same since   186586  2024-06-30 00:11:59 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Agathe Boutmy <agathe@boutmy.com>
  Andi Shyti <andi.shyti@kernel.org>
  Armin Wolf <W_Armin@gmx.de>
  Arnd Bergmann <arnd@arndb.de>
  Chandan Babu R <chandanbabu@kernel.org>
  Chen Ni <nichen@iscas.ac.cn>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Darrick J. Wong <djwong@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hans Hu <HansHu-oc@zhaoxin.com>
  Ilpo Järvinen <ilpo.jarvinen@linux.intel.com>
  Jeff Johnson <quic_jjohnson@quicinc.com>
  Kamal Dasu <kamal.dasu@broadcom.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-examine-bios                                pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-libvirt-qcow2                               pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-xl-qcow2                                    fail    
 test-amd64-amd64-libvirt-raw                                 pass    
 test-arm64-arm64-libvirt-raw                                 pass    
 test-amd64-amd64-xl-raw                                      pass    
 test-armhf-armhf-xl-raw                                      pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-examine-uefi                                pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-libvirt-vhd                                 fail    
 test-amd64-amd64-xl-vhd                                      pass    
 test-arm64-arm64-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 701 lines long.)